Not the patches you might think. If there were a Robert's Rules of Networking, it would have some harsh language for patch cables longer than 15′. Besides the obvious headache of managing dozens or hundreds of 50′-100′ patch cables in a datacenter, and the very real impact of signal loss on long copper runs, the idea that it's OK to run copper patch cables from room to room in a multi-room datacenter is anathema to an orderly cabling layout.

The bandwidth advantage of fiber over copper in the same physical space is enormous. The maximum bandwidth of fiber is over 1.7 terabits per second, and growing, whereas gigabit is roughly the fastest that copper can carry. Why spend the time and money to pull Cat6 patch cables through conduits, when a multimode fiber bundle takes less space and provides more bandwidth (and extendable bandwidth) per pair?

Any datacenter rack layout should include space in the rack for the switches, with short patch cables delivering data to the servers in that rack. Multimode fiber running between racks and the core switches then connects each rack to the core. It's interesting to note that most vendors' 48-port layer 3 switches with gigabit uplinks are extremely close in price to the cost of wiring 48 copper ports into patch panels in the same rack and terminating that copper in another patch panel in the network rack or room. Granted, your mileage may vary, but unless there is no other option, copper shouldn't escape the rack.
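To make that last comparison concrete, here's a rough back-of-envelope sketch. Every price and labor figure below is a placeholder assumption for illustration only, not a quote from any vendor; plug in your own numbers.

```python
# Back-of-envelope comparison: top-of-rack switch with a fiber uplink versus
# home-running 48 copper ports to a network room. All figures are placeholder
# assumptions -- substitute real quotes for your own environment.

PORTS = 48

# Option A: 48-port layer 3 switch in the rack, fiber uplink to the core.
switch_cost     = 4000.00  # assumed price of a 48-port L3 switch with gig uplinks
fiber_pair_cost = 150.00   # assumed cost of a multimode pair pulled to the core
optics_cost     = 300.00   # assumed cost of a pair of optics

# Option B: 48 copper runs terminated in patch panels at both ends.
cable_run_cost   = 35.00   # assumed Cat6 cable + conduit share per run
termination_cost = 20.00   # assumed labor to terminate one run at both panels
patch_panel_cost = 150.00  # assumed cost per 48-port patch panel (need two)

top_of_rack     = switch_cost + fiber_pair_cost + optics_cost
home_run_copper = PORTS * (cable_run_cost + termination_cost) + 2 * patch_panel_cost

print(f"Top-of-rack switch + fiber uplink:   ${top_of_rack:,.2f}")
print(f"{PORTS} copper runs to the network room: ${home_run_copper:,.2f}")
```

With plausible numbers the two totals land in the same neighborhood, which is the point: once copper has to leave the rack, the in-rack switch tends to pay for itself.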