Few tasks are more annoying than wrestling with data center patch cables. It's high time we drastically reduced their number. Here we are in a world with tiny connectors like Micro USB and Thunderbolt, yet we're still trying to shoehorn hundreds of RJ45 cables into our aggregation switching. It seems rather Paleolithic.

Although many aren't aware of this, you can still buy RJ21 switches and switching modules for many popular modular switches. The only connectors on the front are the four 25-pair RJ21 ports, each handling 12 Ethernet interfaces running at a maximum of 100 megabits. You drop those into the switch, run a cable to the patch panel, and punch down the bare ends directly.

Thus, there is no need to patch from an RJ45 panel to an RJ45 jack on the switch. In large switches, this dramatically reduces the cabling mess and arguably increases the reliability of the wiring, since there are fewer chances for individual cables to go bad.

Too bad RJ21 never really caught on. You can still buy RJ21 modules for many switches and even get them with 802.3af Power-over-Ethernet support. You can also get them in densities as high as 96 ports per module, double what a standard RJ45 module can handle. It's no secret that large modular switching backplanes can handle that much traffic with ease, but there's no getting around the fact that you can't load more than 48 RJ45 ports on a single module and still fit the switch in a standard rack. We're being hamstrung by the limitations of a physical port that's simply unnecessary in many cases.

The main problem is that 100-megabit limitation.
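The pair arithmetic behind that limitation is worth spelling out. A quick sketch in Python, using the standard pair counts (100BASE-TX uses two pairs per port, 1000BASE-T uses all four, and an RJ21 connector carries 25 pairs):

```python
# Why an RJ21 connector tops out at twelve 100 Mbps ports, and why
# gigabit over the same connector could carry only half as many.
RJ21_PAIRS = 25           # RJ21 is a 25-pair (50-pin) connector
PAIRS_PER_100M_PORT = 2   # 100BASE-TX: one pair transmit, one pair receive
PAIRS_PER_1G_PORT = 4     # 1000BASE-T drives all four pairs

ports_100m = RJ21_PAIRS // PAIRS_PER_100M_PORT
ports_1g = RJ21_PAIRS // PAIRS_PER_1G_PORT

print(f"100 Mbps ports per RJ21 connector: {ports_100m}")  # 12
print(f"Gigabit ports per RJ21-sized connector: {ports_1g}")  # 6
```

That factor-of-two drop is exactly why a gigabit-capable multi-port connector ends up at six ports per cable rather than twelve.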
You can't achieve gigabit data rates over an RJ21 connector: gigabit Ethernet drives all four pairs per port, and the connector simply doesn't have enough pins to carry many ports that way. But we've managed to vanquish other wasteful connectors over the years, such as the parallel port, the serial port, and the game port (remember those, with 15 pins just for a joystick?). Indeed, a potential solution is in the works with the development of the MRJ21 connector, which will handle six gigabit ports per cable. That's half of what a standard RJ21 connector carries, but it's significantly smaller and somewhat easier to work with.

I suppose laypeople are impressed when they see an aggregation switch overflowing with hundreds of patch cords that run from RJ45 modules to RJ45 patch panels in the same rack, but all I see is a pain in the ass. It's difficult to trace bad cables, it's difficult to run new cabling when the cable management trays are overrun with existing wires, and god help you when you need to replace a failed module and all 48 Ethernet interfaces must be disconnected and reconnected to the exact same ports on the new blade to maintain VLAN membership and other per-port configuration.

A brief aside: If you ever have to do this, by far the easiest way is to get a small-form-factor 48-port Ethernet patch panel and use it as a template, plugging each cable into it in the same position as on the blade: port 1 to port 1, port 2 to port 2, and so forth. Then you can pull all the patch cables out of the way and have a somewhat easier time wrangling the module out of the switch for replacement. Once that's done, just plug the cables back into the same ports and you're basically guaranteed to get them in the right spots.

Yet given the lower density and higher aggravation factor, why don't we have more options available? Cisco doesn't even offer MRJ21 modules, though Extreme Networks and others do have 96-port MRJ21 gigabit modules available.
That said, they're still not terribly common, and the MRJ21 standard isn't exactly what I'm envisioning either: it's still only six ports per cable. On a busy aggregation switch, that leaves 56 MRJ21 cables to support 336 ports. If we're talking 96 ports per blade, that number doubles. It's fantastic to be able to achieve a density of 672 ports in a nine-slot chassis, but you're still dealing with 112 MRJ21 cables.

Smaller connectors, such as RJ point five (essentially a squashed RJ45 jack), are the opposite of helpful. Sure, you can fit 96 RJ point five ports on a single blade, but that makes the cabling problem far worse. Replacing a blade with 48 individual patch cables is bad enough; fighting through 96 cables per blade is just ridiculous.

Instead, I'd like to see switching modules with 96 or 128 ports per module and only a few physical connectors that run to paired patch panels. When the cabling waterfall comes down into the rack, those cables are punched down once, with each punch corresponding to a port within the connector. There would be fewer cabling problems, assuming the punches were done well, and the patch panels would have two connectors on the front that connect directly to the switch blade. This would mean far less cabling and a far simpler approach to high-density switching. If a port goes bad because of a problem within the trunking cable, you either repunch that port to a different pair group or replace the cable. In my mind, this would be a welcome trade-off for losing hundreds of individual Cat5e patch cables and their accompanying frustrations.

In fact, this might even lead to cheaper manufacturing and retail costs for the blades. Depending on the construction of the module, a group of RJ45 ports is controlled by a single ASIC. Cheaper switches pile more ports on a single ASIC, while higher-end switches reduce that number for better overall performance and enhanced feature sets.
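The cable-count arithmetic above reduces to a couple of lines of Python. (The assumption of seven payload slots in a nine-slot chassis is mine, to reserve slots for supervisors; the per-cable port count comes from the MRJ21 spec discussed earlier.)

```python
# Sketch of the cable-count math: each MRJ21 trunk cable carries
# six gigabit ports, so cables = ceil(ports / 6).
PORTS_PER_MRJ21_CABLE = 6

def mrj21_cables(total_ports: int) -> int:
    """MRJ21 trunk cables needed to serve a given number of switch ports."""
    return -(-total_ports // PORTS_PER_MRJ21_CABLE)  # ceiling division

# Seven payload blades at 48 ports each, then doubled to 96 per blade.
print(mrj21_cables(7 * 48))  # 56
print(mrj21_cables(7 * 96))  # 112
```

Even at the "improved" MRJ21 density, a fully loaded chassis still means over a hundred trunk cables, which is the point of the complaint.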
As the number of ports on a single blade increases, the number of ASICs will increase too, but you'd still potentially get more bang for your buck, since you can dispense with all the other hardware necessary to run two lower-density modules.

It doesn't seem like too much to ask. These days, people who know very little about the details talk endlessly about magically automating the data center. How about starting from the ground up and cleaning up the cabling mess first?

This story, "Why aren't we finally rid of patch cables?," was originally published at InfoWorld.com. Read more of Paul Venezia's The Deep End blog at InfoWorld.com.