At Storage Networking World, the staid world of storage was all shook up by next-gen FCoE, SSDs, deduplication, and more.

I spent most of this week at Storage Networking World in Orlando, Fla., and I came away with the overwhelming impression that storage is moving at a frantic pace in half a dozen different directions. There was some real excitement, even adjusting for the usual shrill marketing messages.

If pressed to identify one theme at the show, I would have to pick convergence, not just as it applies to storage, but to the entire data center. Silos are out! Convergence is in! If you ask the vendors, that is.

In reality, convergence is still a tough sell in many large organizations. The network team doesn't want the server gang to touch any of the network gear, and the storage group doesn't want to let the server gang near its metal, either, no matter how much HP says we should all get along. Meanwhile, admins in midsize companies are wondering what the fuss is all about. Most of them do everything anyway because there's no alternative.

But convergence wasn't the only buzz. Throughout the sessions, panels, and presentations, storage professionals kept touching on common technologies and trends, led by SSDs (solid-state drives), data deduplication, and FCoE (Fibre Channel over Ethernet).

Is Fibre Channel spinning down?

I sat in on several talks discussing the realities of FCoE adoption and integration over the next few years, and it seems clear that traditional Fibre Channel has a tough row to hoe if it's going to survive the lower cost and higher performance of FCoE. Granted, latencies are generally higher with FCoE, but the benefits of lower cost and less complexity can push those concerns aside for many infrastructures.
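The raw speed figures quoted for these links can also be misleading, because Fibre Channel and Ethernet use different line encodings. Here's a back-of-envelope sketch of usable payload bandwidth; it assumes the standard 8b/10b encoding at 8.5GBaud for 8Gbps Fibre Channel and 64b/66b at 10.3125GBaud for 10Gbps Ethernet, and it ignores framing and protocol overhead entirely:

```python
# Rough usable bandwidth: line rate x encoding efficiency, ignoring
# framing/protocol overhead. Assumed rates: 8G FC signals at 8.5GBaud
# with 8b/10b encoding; 10GbE signals at 10.3125GBaud with 64b/66b.

def usable_mbps(line_rate_gbaud, data_bits, total_bits):
    """Payload megabytes per second for a given line rate and encoding."""
    return line_rate_gbaud * 1e9 * data_bits / total_bits / 8 / 1e6

fc_8g = usable_mbps(8.5, 8, 10)         # ~850 MB/s per direction
eth_10g = usable_mbps(10.3125, 64, 66)  # ~1250 MB/s per direction
print(f"8G FC : {fc_8g:.0f} MB/s")
print(f"10GbE : {eth_10g:.0f} MB/s")
```

By this rough math, a 10GbE link carries nearly 50 percent more payload than an 8Gbps Fibre Channel link, even before the "10 versus 8" headline numbers are compared.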
For the moment, Fibre Channel tops out at 8Gbps, whereas FCoE can run at up to 10Gbps. There is a 16Gbps Fibre Channel standard in the works (with planning for 32Gbps down the road), but 40Gbps and 100Gbps Ethernet standards have already been approved, and there is actually 100Gbps gear out in the wild now. It's going to be hard for Fibre Channel purveyors to match those numbers in the coming years, and nearly all disk vendors are shelving Fibre Channel for disk access in favor of SAS.

But FCoE has some political battles to fight. The blending of Ethernet networking and storage crosses boundaries in many large companies, and in many cases, those two camps simply don't trust each other. There's also the matter of security to contend with, but more than once I heard concerns that FCoE wouldn't be a reality until those walls were broken down. Remember that bit about convergence?

That said, there's no doubt that FCoE will surpass standard Fibre Channel in terms of throughput in the near future, and those numbers are going to be hard to argue with. But there are limits to what disk can push, right? How beneficial is a 100Gbps pipe if you can't get anywhere near that throughput from the disk itself?

SSDs try to make the grade

That's where SSDs come in. Don't make the mistake of viewing SSDs as simply 2.5-inch disks in hot-swap sleds. They're showing up in all kinds of form factors, from PCI Express cards to custom arrays from vendors like Texas Memory Systems. TMS was showing off its RamSan-630, which can handle up to 10TB of SLC SSD storage in a single 3U server and claims 60GBps throughput. At a cost of $35 per gigabyte, that's a $350,000 unit, but if you need only a single terabyte, it's a more reasonable $35,000.
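That per-gigabyte pricing is easy to sanity-check. A trivial sketch, using the quoted $35/GB figure and the marketing convention of 1TB = 1,000GB:

```python
# Sanity check of the quoted $35/GB SLC pricing.
# Assumes the marketing convention of 1TB = 1,000GB.
PRICE_PER_GB = 35  # USD, as quoted

def array_cost(capacity_tb):
    """Total price in USD for a given capacity at $35 per gigabyte."""
    return capacity_tb * 1000 * PRICE_PER_GB

print(array_cost(10))  # full 10TB unit:    350000
print(array_cost(1))   # single terabyte:    35000
```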
It might be the most expensive terabyte since the days of 9GB drives, but if it can make the difference between a highly visible performance problem and a highly visible performance solution, it might be worth it.

There are other uses for SSDs, such as FIMMs (Flash Inline Memory Modules). They're nowhere near as fast as standard DIMMs, but if the operating system is smart enough to treat memory as a tiered structure, the cost differential might make it worthwhile. However, I wouldn't expect to see server products using this technology for some time.

But not all is well in SSD land. The obvious problem is price, but there are other issues. For one, SSD technology in enterprise settings is still new. Enterprise SSDs are SLC (single-level cell) memory devices, which makes them more robust and longer-lived, but also extremely expensive compared to lower-spec MLC (multi-level cell) devices. Some seem to think that the solution is to increase the performance and reliability of MLC devices and bring their lower price point into the enterprise; others worry this isn't possible, and that the market will eventually drive SLC prices down instead. There's no good answer yet.

There's also the problem of SSD failure in heavy transactional environments. Because flash cells endure only a finite number of write operations, most SSDs are built with a variety of firmware and controller smarts to distribute writes evenly and to compress some data to reduce the number of writes, and nearly all are built with significant overprovisioning to account for failed sectors. That 100GB SSD is actually a 128GB SSD with 28GB held back to replace write sectors as they fail. This extends the lifespan of the SSD, but it's not a great overall solution.

Then there are the detriments of SSD workload learning. Some SSDs are smart enough to tune themselves to the workload.
This means that running heavily random workloads on an SSD, then switching to sequential workloads, can have a temporarily harmful effect on drive performance. The drive will eventually relearn the new workload, but situations like this are what give database admins ulcers and can affect backup windows.

It doesn't appear that we'll all move over to fully SSD-based storage anytime soon. In intense transactional environments, it makes all kinds of sense, especially in database applications, but we already knew that. The case can still be made for just about every storage tier, from blazing-fast SLC-based SSD arrays to 7,200-rpm SATA arrays for data warehousing and disk-based backup.

Doubling down on deduplication

Speaking of backups, data deduplication is all the rage. It hasn't been available in any reasonably performant and reliable form for very long; in fact, spot polls show that most companies haven't adopted deduplication yet. But the reality of ever-expanding storage requirements makes it an extremely attractive proposition for most large infrastructures.

Naturally, deduplication is only as effective as the source material; if you don't have much data overlap, you're not going to get much out of running deduplicators. But if you do, it can make a massive difference in the size of your active storage and backups. At one lab session, I ran through several deduplication scenarios and generally saw somewhere around a 10:1 ratio on the fly. Backups showed about the same.

This isn't always an easy sell. Just as file system compression has significant drawbacks, deduplication can cause plenty of consternation among admins, since it's essentially throwing away data in favor of pointers to a single copy of those blocks. The detection algorithms used by the dedupe devices have to be exactly right, or data is lost, and that's simply not an option.
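To illustrate the pointers-to-a-single-copy idea, here's a minimal fixed-block deduplication sketch; it's a toy for illustration only, as real dedupe appliances use far more sophisticated chunking, collision handling, and reference counting:

```python
import hashlib

def dedupe(data, block_size=4096):
    """Split data into fixed-size blocks; store each unique block once,
    keeping an ordered list of hash 'pointers' for reconstruction."""
    store = {}     # block hash -> block contents (one copy per unique block)
    pointers = []  # per-block hashes, in original order
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first copy
        pointers.append(digest)
    return store, pointers

# Ten identical 4KB blocks dedupe down to one stored copy: a 10:1 ratio.
store, pointers = dedupe(b"A" * 40960)
print(f"{len(pointers)} blocks, {len(store)} stored -> "
      f"{len(pointers) // len(store)}:1")
```

Reconstruction is just `b"".join(store[h] for h in pointers)`, which is exactly why the hashing has to be exactly right: a hash collision would silently substitute the wrong block.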
In the storage trenches

One of the more interesting general-session talks was given by Anthony Abbattista of Allstate Insurance, who detailed a recently completed data center construction project that was so green the company is growing grass on the roof of the building and using the cold Illinois winters to cool the data center for half the year. It even has its carbon credit certificates framed on the walls.

Truly, this year's Storage Networking World was focused on storage, but it also leaned toward the data center as a whole. From every possible perspective, from virtualization to SSDs to tiered storage to deduplication, it's a whole new world once again.

This story, "Storage? Boring? Not anymore," was originally published at InfoWorld.com. Follow the latest developments in storage and read more of Paul Venezia's The Deep End blog at InfoWorld.com.