I've got to tell you, having 4TB in a home office is really too much. I'm currently looking at a few SAN arrays from HP and Dell for an upcoming article, and looking at the back of the lab servers got me thinking…

The theoretical limits of fiber are unknown. Testing has been done at over 1.7Tb, but the ceiling hasn't been reached. So why are there a dozen cables in the back of these servers? Take a standard file server that requires storage, network, KVM, and power: everything is data except the juice. Leaving the power cabling aside for a moment, a single 10G connection to the back of this unit could consolidate delivery of the rest onto a single pair of fiber, or two pairs for redundancy. Ethernet is easy, iSCSI gets us storage, and KVM over IP is everywhere. The switching for all of this is simply 10G Ethernet.

How long will the standards battle for this idea take? It'll take one company to actually deliver a working implementation and then license it for a song, just as 3Com licensed Ethernet for $1,000 over 20 years ago. Had IBM done this instead of 3Com, we might all be speaking Token Ring right now.
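The consolidation argument boils down to simple arithmetic: the combined demand of LAN, iSCSI storage, and KVM-over-IP traffic fits comfortably inside one 10G link. A quick back-of-the-envelope sketch, where every per-service figure is an illustrative assumption for a typical small file server, not a measurement:

```python
# Back-of-the-envelope check: can a single 10G Ethernet link carry
# all of a file server's data traffic? (Power stays on copper.)
LINK_GBPS = 10.0  # one 10G connection, or double it for redundancy

# Assumed per-service loads in Gb/s -- illustrative guesses only.
loads_gbps = {
    "lan_ethernet": 1.0,    # client file-serving traffic
    "iscsi_storage": 4.0,   # block storage to the SAN array
    "kvm_over_ip": 0.1,     # console video plus keyboard/mouse
}

total = sum(loads_gbps.values())
headroom = LINK_GBPS - total
print(f"total demand: {total:.1f} Gb/s, headroom: {headroom:.1f} Gb/s")
```

Even with these loads quadrupled, the dozen cables collapse into one fiber pair with room to spare.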