Intel's ubiquitous architecture is cheap and easy. Is that the best we can do?

The Java programming language has a slogan: Write once, run anywhere. By now you’ve probably heard it so many times that it’s lost all meaning, no matter what your language of choice. But when Sun Microsystems first introduced Java in the mid-1990s, those four words had all the resonance of a manifesto.

Back in those days, “portable code” was code that was clean enough to pass through a compiler on any of a half-dozen different processor architectures. There was Alpha, MIPS, PA-RISC, PowerPC, and SPARC, and catering to them all wasn’t easy. Some used little-endian byte ordering, while others were big-endian. Some were CISC, and some were RISC. Compared to that morass, Java bytecode seemed like a godsend.

Ah, but look where we are today. When Apple abandoned the PowerPC architecture in 2006, it left x86 as the only game in town on the desktop. Meanwhile, all of the major enterprise hardware vendors now offer x86 servers, no matter what their histories. If it ain’t x86, it’s legacy.

And it doesn’t stop there. Today, even non-x86 platforms want to look like x86. Just last week, a company called Mantissa gave a sneak peek of a new software package that will allow virtualized x86 operating systems to run on IBM mainframes. Perhaps the ultimate jaw-dropper, though, is JPC, a complete x86 emulation layer written in Java; it is so complete, in fact, that it can boot Linux in a browser window.

As time goes on, we seem to be witnessing the emergence of a new “write once, run anywhere,” and it isn’t bytecode. It’s machine language. Its name is x86.

x86: The big compromise

As strange as that may sound, it makes some sense if you consider the history of the x86 architecture.
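Before digging into that history, a brief aside for readers who never fought the byte-ordering wars: the little-endian/big-endian split mentioned above is easy to see in a few lines of Python. This is just an illustrative sketch using the standard `struct` module; the architectures named earlier are not involved.

```python
import struct

# The same 32-bit value, serialized under each byte-ordering convention.
value = 0x01020304

little = struct.pack("<I", value)  # little-endian: least-significant byte first
big = struct.pack(">I", value)     # big-endian: most-significant byte first

print(little.hex())  # 04030201
print(big.hex())     # 01020304
```

Code that read raw integers off the wire or out of a file without accounting for this difference would silently produce garbage when recompiled for the other camp, which is one reason "portable" was such hard-won praise.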
Think back to the very beginning, to the chip that brought x86 to the mainstream. Careful, though: if you were thinking of the 8086, you went back too far. That honor goes to the 8088.

In 1980, when IBM came looking for a CPU to power its top-secret Model 5150, the computer that would come to be known as the original IBM PC, it had plenty of options to choose from. It could have picked Intel’s flagship product, the 8086. It even had chip designs of its own. And yet it chose the 8086’s little brother, the 8088. Why?

As it turns out, the 8088 had two things going for it. First, unlike IBM’s proprietary CPUs, it was easy to manufacture and readily available in quantity. Second, and more important, while it shared the same 16-bit architecture as the 8086, the 8088 used an eight-bit external data bus like the previous generation of CPUs. That meant it could be integrated with existing, mass-market support chips and components. In other words, the 8088 wasn’t just a cheap chip; it let you build cheap computers, too.

That simple distinction made all the difference. As fate would have it, the 8088 became the CPU that launched the PC revolution, while the 8086, the chip that gave x86 its name, enjoyed only limited success. IBM chose Intel’s x86 architecture not because it was the best technology, but because the 8088 made it affordable and expedient.

The tyranny of low expectations

Cheap and easy: doesn’t that sum up the history of the x86 architecture as we’ve known it? When Linux made x86 a viable platform for Unix-like operating systems, the market for expensive, proprietary RISC chips began drying up. Apple didn’t switch to x86 because x86 was a superior architecture, but because Apple’s preferred PowerPC platform was falling behind in the performance race. Why swim against the tide?

Of course, Intel played a role in all this. The runaway success of the IBM PC and its clones left Intel with unparalleled market leverage, to say nothing of R&D funds.
But what Intel didn’t realize is that it had created a monster, one that eventually even it couldn’t control: When the market rejected Itanium, Intel was forced to follow AMD’s lead with x86-64.Today, we’re seeing the same decisions repeated over and over again. Nobody relishes the idea of running Windows on a mainframe, but that’s the only way to run Windows applications while taking advantage of the full reliability and scalability of mainframe hardware. And JPC makes it possible to run proven x86 code in new and unforeseen environments, such as mobile phones. I just wonder: Is this really the best we can do? To give just one counterexample, in the 1970s the Soviet Union built actual, working models of computers based on ternary logic — three logic states, instead of a binary computer’s two. Ironically, the Soviets gave up that research for x86 clone chips based on stolen American technology. There have been no ternary computers since.Yahoo’s Douglas Crawford once said of the Web, “the only thing we have to fear is premature standardization.” I fear that’s exactly what has already happened in the CPU market — and as a result, x86 will be our only real option for a long time to come. Software Development