by Harper Mann

Gearing up for Interop …

analysis
Apr 20, 2006 · 4 mins

Early next month in Vegas, the Interop event (formerly known as “NetWorld+Interop,” or “N+I”) kicks off. As always, the InteropNet — the actual event network, built from scratch by 20+ different vendors — will be a focal point for attendees checking out the latest technologies.

I recently spoke with Thomas Stocking, co-founder of GroundWork, to hear his thoughts on why installing and configuring network monitoring on an extremely heterogeneous network like the InteropNet can be challenging. Here’s what he had to say:

>There’s a tremendous diversity of different equipment. That’s the main challenge for any monitoring technology you choose. For instance, if you’re going to do something with SNMP, you have to poll a lot of different MIB values just to get a consistent set of data you want to monitor. It’s a lot easier if you’re doing just one type of vendor equipment — where you can pull CPU utilization across several routers, for instance. When you’re pulling across multiple vendors with all sorts of different MIB values for CPU utilization, there are all sorts of differences, sometimes even between models of the same product line from a single vendor — that’s where a truly heterogeneous environment like that found in the InteropLab and InteropNet can become more challenging.
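
The per-vendor MIB divergence Stocking describes is typically handled with a lookup table that maps a device’s vendor (and sometimes model) to the OID that reports CPU utilization. A minimal sketch in Python; the vendor names, model labels, and OID values below are illustrative placeholders, not a vetted production mapping:

```python
# Map (vendor, model) to the OID that exposes CPU utilization.
# The OIDs and model names here are illustrative, not an authoritative list.
CPU_OIDS = {
    ("cisco", None): "1.3.6.1.4.1.9.2.1.58",                  # older, vendor-private style OID
    ("cisco", "newer-ios"): "1.3.6.1.4.1.9.9.109.1.1.1.1.7",  # same vendor, different MIB on newer gear
    ("generic", None): "1.3.6.1.2.1.25.3.3.1.2",              # standards-based host-resources fallback
}

def cpu_oid(vendor, model=None):
    """Pick the CPU-utilization OID for a device, falling back from
    (vendor, model) to (vendor, any model) to a generic OID."""
    for key in ((vendor, model), (vendor, None), ("generic", None)):
        if key in CPU_OIDS:
            return CPU_OIDS[key]
    raise KeyError(f"no CPU OID known for {vendor}/{model}")
```

The fallback chain captures the point in the quote: even a single vendor’s product line may need different OIDs per model, and unknown gear gets the generic entry.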

>If you want to get down to the deeper level of server monitoring (counting running processes, picking out CPU, disk and that sort of thing), you can get it off of SNMP — but SNMP isn’t always on. It’s nearly always installed, but a lot of the time the different participants are using different community strings, and it’s not practical to standardize, so you have to adjust the monitoring system to use those settings.
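
Adjusting the monitoring system to each participant’s settings usually means keeping per-host SNMP credentials with a sensible default, rather than assuming one standardized community string. A hedged sketch; the host names and community strings are invented for illustration:

```python
# Default SNMP settings, overridden per host where participants differ.
DEFAULT = {"community": "public", "version": "2c", "port": 161}

# Hypothetical per-host overrides; in practice this comes from the
# monitoring system's configuration.
HOST_OVERRIDES = {
    "edge-router-1": {"community": "n0c-r0"},
    "blade-42":      {"community": "labmon", "version": "1"},
}

def snmp_settings(host):
    """Merge a host's overrides over the defaults."""
    return {**DEFAULT, **HOST_OVERRIDES.get(host, {})}
```

Hosts with no override fall through to the defaults, so only the exceptions need to be configured.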

>You also have to deal with very rich network security: firewalls blocking the monitoring protocols, honeypots trapping your discovery attempts, IPS systems shutting down your monitoring because it looks like a hacker trying to map the network. It’s not uncommon to see certain equipment with no support for ICMP (one of the core TCP/IP protocols) echo requests, so no pings come back. So you might have to use a different protocol, like HTTP for instance, just to see if a device is up. You might have a router built in ’99 that responds to SNMP and ICMP but has no web interface or HTTP. A lot of the time the configuration interfaces on the routers are running different software versions, which affects the sorts of monitoring data you can gather from them.
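
The workaround Stocking describes — trying another protocol when ping is blocked — amounts to running an ordered list of reachability checks and declaring the device up on the first success. A sketch with stubbed probe functions; a real poller would issue actual ICMP, SNMP, or HTTP probes instead:

```python
def is_up(host, checks):
    """Return the name of the first reachability check that succeeds,
    or None if every probe fails. `checks` is an ordered list of
    (name, probe_fn) pairs; cheap checks like ICMP go first."""
    for name, probe in checks:
        try:
            if probe(host):
                return name
        except Exception:
            continue  # a blocked or erroring probe just falls through
    return None

# Stub probes standing in for real network checks.
def fake_icmp(host):
    return False  # e.g. a firewall silently drops echo requests

def fake_http(host):
    return True   # but the device still answers on its web interface

result = is_up("router-99", [("icmp", fake_icmp), ("http", fake_http)])
```

Here `result` is `"http"`: the ping-style check fails, so the poller falls back to HTTP and still concludes the device is up.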

>The whole goal with network and server monitoring is to be able to monitor the right things (“garbage in, garbage out,” as they say). Not too much data or you’ll be flooded. And not too little, or you’ll miss important information. If you over-monitor, you cannot manage and resolve your issues — you get data, not information. That’s why open source tools are so popular. They’re point solutions, but they get very specific information on that point, and often make it easier to “right size” your monitoring solution.

Every year, the event also runs some InteropLabs — separate projects sponsored by Network World that dig deep into emerging networking technologies, investigating them and testing what is possible. In the past, they have focused on protocols like SIP (Session Initiation Protocol, used for VoIP), advanced wireless technologies, and other cutting-edge developments. This year, one of the InteropLabs has been designated an “Open Source Software Initiative,” which sets out to help networking professionals answer, through extensive testing, whether open source is ready for prime-time networking use.

There are a whole lot of cool open source network monitoring tools that will be on display this year at Interop. I’ll be reporting back some of the key findings live from the event the week of 5/1.