Thanks, all, for the replies and good advice. Two loops and a shady coupler (on an uplink line, of all places) were the culprits. Pulled some logs and found that the overnight scripts' run time had been creeping up steadily, from about an hour to almost 8 hours, over the past 2 or 3 months. It's only fitting that these issues all came to a head at closing time on April Fools' Day.

We're running Wireshark (thanks John) and virus scans (thanks Greg) overnight, but traffic is normal... for now. They're also going to update the firmware on all the switches this weekend.

Thanks again :-)
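For the curious, here's roughly how we charted the run-time creep from the logs. The log format below is hypothetical (two timestamped marker lines per night), so treat it as a sketch to adapt, not what the site actually writes:

#!/usr/bin/env python
# Sketch: chart overnight job run times from a log. The line format is
# assumed -- "2010-03-30 22:00:01 JOB START" / "... JOB END" -- and the
# filename is a placeholder; adapt both to your real logs.
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"
start = None
durations = []

with open("overnight.log") as log:
    for line in log:
        # lines without " JOB " in them fall through with event == ""
        stamp, _, event = line.strip().partition(" JOB ")
        if event == "START":
            start = datetime.strptime(stamp, FMT)
        elif event == "END" and start is not None:
            hours = (datetime.strptime(stamp, FMT) - start).total_seconds() / 3600.0
            durations.append((start.date(), hours))
            start = None

# crude text chart: one row per night, two '#' marks per hour
for day, hours in durations:
    print("%s  %4.1f h  %s" % (day, hours, "#" * int(hours * 2)))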
--- On Thu, 4/1/10, Politik Durden <politikdurden@yahoo.com> wrote:

> From: Politik Durden <politikdurden@yahoo.com>
> Subject: [DLC] help in investigating a possible packet storm
> To: "Depaul Linux" <dlc@mailman.depaul.edu>, "UFO Mail list" <ufo@ufo.chicago.il.us>
> Date: Thursday, April 1, 2010, 11:15 PM
>
> Hello all,
>
> Going to a client site at 6 AM tomorrow because at about 5 PM today
> (Thursday) all network traffic started getting really, really slow.
>
> Here's what I know:
>
> - No recent changes (no new switch or NIC, no changes to static routes,
>   no config changes, no patches/upgrades, etc.)
>
> - About a dozen switches feed into a central 3COM switch (no model #s
>   yet); ballpark of 200 to 300 nodes total.
>
> - No management protocols are in use; all devices are in "dumb" mode and
>   act as just plain ol' switches. Some can be managed, but no features
>   (SNMP, etc.) are turned on.
>
> - Most nodes *seem* to be pingable from both sides of the firewall, but
>   everything is just crawling.
>
> - Nothing (reports, scripts, etc.) is timing out, but everything is
>   super, super slow.
>
> They tried swapping out switches one at a time to narrow down the
> culprit, and that helped for a bit, but then traffic slowed down again
> and they couldn't do any more during production hours.
>
> Theories:
>
> - Can one bad port cause this kind of traffic jam? They started diags on
>   all the major nodes (server NICs, the central 3COM switch, etc.) but
>   nothing obvious so far.
>
> - Was some sort of protocol/feature turned on by mistake, so that all
>   the switches are now confused? A quick "topeka" (ha!!) turns up
>   stories of spanning tree causing these kinds of traffic jams.
>
> - Did a loop somehow get introduced?
>
> What I really need is a suggestion for a good free traffic tool,
> something we can install on two or three laptops to put each switch
> through its paces. Any ideas?
>
> Thanks in advance for your comments. This lot always points me in the
> right direction :-)
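P.S. for anyone finding this thread in the archives: chasing a suspected loop mostly boils down to counting broadcast frames per second on each segment, since a looped segment floods broadcasts and stands out immediately. Here's a minimal sketch of that idea in scapy, for anyone who'd rather script it than eyeball Wireshark. The interface name is a placeholder, and sniffing needs root:

#!/usr/bin/env python
# Sketch: count broadcast frames per second on one interface. A healthy
# segment sees a trickle; a looped segment typically sees thousands.
# "eth0" is a placeholder interface name; run as root.
from scapy.all import Ether, sniff

BROADCAST = "ff:ff:ff:ff:ff:ff"
WINDOW = 10  # seconds to listen

counts = {"total": 0, "bcast": 0}

def tally(pkt):
    counts["total"] += 1
    if Ether in pkt and pkt[Ether].dst == BROADCAST:
        counts["bcast"] += 1

sniff(iface="eth0", prn=tally, store=False, timeout=WINDOW)
print("%d frames in %ds, %d broadcast (%.0f bcast/s)"
      % (counts["total"], WINDOW, counts["bcast"], counts["bcast"] / float(WINDOW)))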
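And since "pingable but everything crawling" was the symptom, a dumb timed ping sweep, run from both sides of the firewall, puts numbers on it. The addresses below are placeholders; it just shells out to the system ping (Linux flags):

#!/usr/bin/env python
# Sketch: report average ping RTT to a handful of nodes. Addresses are
# placeholders; "-c" (count) and "-W" (timeout) are Linux ping flags.
import re
import subprocess

NODES = ["192.168.1.1", "192.168.1.10", "192.168.1.20"]

def avg_rtt_ms(host, count=4):
    """Return the average RTT in ms, or None if the host didn't answer."""
    try:
        out = subprocess.check_output(
            ["ping", "-c", str(count), "-W", "2", host],
            stderr=subprocess.STDOUT, text=True)
    except subprocess.CalledProcessError:
        return None
    # summary line looks like: "rtt min/avg/max/mdev = 0.1/0.4/0.9/0.2 ms"
    match = re.search(r"= [\d.]+/([\d.]+)/", out)
    return float(match.group(1)) if match else None

for node in NODES:
    rtt = avg_rtt_ms(node)
    if rtt is None:
        print("%s: no reply" % node)
    else:
        print("%s: %.1f ms avg" % (node, rtt))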