The next wave of distributed computing will be guided not by fashion, but by necessity and technology

Although IT doesn’t have many immutable laws, the periodic pendulum swing between centralized and distributed computing paradigms might qualify. We’ve gone from terminals to desktops, desktops to thin clients, thin clients to server-grade notebooks, mainframes to departmental servers, departmental servers to monolithic enterprise beasts, and from monoliths to clusters.

Keeping up with shifting fashion trends is expensive, but it can be just as costly to resist the pendulum’s momentum, as those who have refit their mainframes to run Linux have found. The trick is to identify those technologies that survive the swing without getting caught up in the fashion. For example, desktops and workstations have fared well for 20 years. They have enough brains to participate in a distributed environment — yesterday’s peer is tomorrow’s grid node — and they’ll mimic any terminal or thin client. That doesn’t mean thin isn’t worthwhile technology; it just marries you to your server vendor.

The swing can be rooted in practicality, too. Departmental servers initially solved the problem of demand outpacing capacity. But IT couldn’t manage and fix all those far-flung boxes, so the assets were moved back onto the raised floor. In many shops, demand is outpacing capacity again. Stopgaps, such as planting humongous hard drives in desktops, setting draconian quotas, and throttling network traffic, won’t hold.

Long-term solutions will be found in distributed technology that looks centralized to users, applications, and administrators. Storage virtualization is a perfect example of this principle. In some ways, so are blade servers, although current implementations suffer from the same vendor lock-in as centralized solutions.
AMD’s Opteron multiprocessing architecture takes an interesting tack: Each CPU has its own memory, but the CPUs are linked by a super-fast HyperTransport bus that makes the partitioned RAM look like a unified memory pool. InfiniBand, despite the recent setback dealt by Intel, still looks like a great way to link systems together in much the same way HyperTransport links chips. RapidIO is another high-speed external bus. All of these technologies are widely licensed, holding out the promise of cross-vendor solutions. None of these buses can be added to existing systems, however; no current peripheral bus is fast enough to support them.

Developments such as HyperTransport, InfiniBand, RapidIO, and 10Gb Ethernet will reduce the bottlenecks that have prevented discrete CPUs and storage devices from working together efficiently. In a few years, there won’t be much difference between a blade implementation and a rack full of interconnected multiprocessor servers.

Does a future-proof crossover (distributed/centralized) computing solution exist today? Not really, because those fast buses aren’t here yet. But they are just around the corner, and that suggests there may be wisdom in sticking with your old-fashioned (but entirely functional) clusters and monoliths until those high-speed inter- and intra-system interconnects hit the market.
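To make the Opteron idea concrete — per-CPU memory banks presented as one flat address space — here is a minimal toy model in Python. It is a sketch of the general NUMA concept only, not AMD’s actual hardware; all class and method names (Bank, UnifiedPool, and so on) are hypothetical, and the routing logic stands in for what a real memory controller does over the interconnect.

```python
class Bank:
    """Toy stand-in for the RAM local to one CPU."""
    def __init__(self, size):
        self.cells = [0] * size

class UnifiedPool:
    """Presents several partitioned banks as one contiguous address space,
    the way a HyperTransport-style interconnect lets every CPU reach
    every other CPU's local memory transparently."""
    def __init__(self, banks):
        self.banks = banks
        self.bank_size = len(banks[0].cells)

    def _locate(self, addr):
        # A real memory controller would route remote accesses over the
        # bus; here we simply compute which bank owns the address.
        return self.banks[addr // self.bank_size], addr % self.bank_size

    def read(self, addr):
        bank, offset = self._locate(addr)
        return bank.cells[offset]

    def write(self, addr, value):
        bank, offset = self._locate(addr)
        bank.cells[offset] = value

# Two CPUs, each with 4 cells of local RAM, seen as flat addresses 0-7.
pool = UnifiedPool([Bank(4), Bank(4)])
pool.write(6, 42)   # address 6 physically lands in CPU 1's local bank
print(pool.read(6))  # 42, no matter which CPU issues the read
```

The point of the sketch is the software-facing illusion: code addresses one pool, while locality (which bank actually holds a given address) remains a physical fact underneath — which is exactly why the speed of the interconnect matters so much.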