The iPhone, Oracle's Sun buyout, and Windows 7 dominated the year's tech news. Discover the key events that fell under the media radar.

Think your wireless service is crummy? Just wait until next year, when the spectrum drought really hits home. And maybe you've been telling your users that installing a graphics card in an office PC is a waste of money. If that's the case, you're missing a chance to make them a lot more productive (as long as the games stay at home). You've known about CMOS for years. But did you know that an emerging technology called PCMOS, which uses non-Boolean logic, is on the verge of slashing power consumption in ASICs?

Those are just three of the top ten technology stories of 2009 you probably haven't heard about. Our writers, editors, and contributors have scoured the landscape, pinged their sources, and stayed up late to find the news you need to stay on top of the lightning-fast changes sweeping the information technology industry.

In a year when Google's Android finally made the mobile market a multiplatform competition, when Microsoft seemed to redeem itself with the launch of Windows 7 and the beta releases of various 2010 server products, when Oracle swallowed Sun as enterprise software vendors continued to consolidate, and when the bottom fell out of the IT employment market, it's no wonder that many important stories went unnoticed. We think you'll find these stories interesting and, above all, useful. Happy New Year!

The top 10 underreported tech stories of 2009:

1. Wireless broadband woes are harder to fix than you might realize
2. "Conflict minerals" in your PC and cell phone fuel civil war in Congo
3. Malware's new frontier: Organized gangs tricking users to act stupidly
4. GPU computing: Graphics accelerators aid mainstream, even business, PCs
5. Dumber may be smarter: Less-accurate chips bring more speed and savings on power and heat
6. Lawyers begin trolling in the cloud
7. Ruby on Rails gets respect in the enterprise
8. From feast to famine: Dark fiber gets hard, and expensive, to find
9. Patent Office to inventors: See you in 2013
10. Enterprise wikis become a platform

This article, "The top underreported tech stories of 2009," was originally published at InfoWorld.com.

1. Wireless broadband woes are harder to fix than you might realize

AT&T has taken a huge amount of heat for its subpar 3G performance. Much of the criticism is well deserved, but there's a larger, more disturbing truth: We're running out of wireless spectrum. What's more, networks designed to handle big downloads can't cope with the peer-to-peer traffic generated by games and smartphones.

"No one was prepared for the effects of [Apple's] iPhone," says Charlotte Yates, CEO of Telwares, a telecom and IT infrastructure consultancy. Sure, you've heard that before, but Yates explains that it's not just the amount of traffic, as many of us suppose, but the type of traffic that poses difficulties.

Consider your iPhone, Droid, Pre, or similar device. Much of the time when it's in your pocket or purse, it's actually pinging the network to see if you have e-mails, stock updates, or news alerts. Most of those chunks of information are rather small, but when added to the constant polling of the network, they consume lots of resources.
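To see why all that background polling adds up, consider a back-of-envelope sketch. Every figure in the Python snippet below is a hypothetical placeholder rather than carrier data; the point it illustrates is that the control-plane signaling needed to wake the radio for each small poll can dwarf the payloads themselves.

```python
# Back-of-envelope estimate of signaling load from background polling.
# All figures are hypothetical, for illustration only; real signaling
# costs vary by network, protocol, and device.

subscribers = 10_000_000      # smartphones on one carrier's network
polls_per_hour = 12           # e-mail/news/stock checks per device
payload_bytes = 2_000         # small payload fetched per poll
signaling_msgs = 30           # control-plane messages to set up and tear
                              # down a radio connection for each poll

polls_per_day = subscribers * polls_per_hour * 24
data_gb = polls_per_day * payload_bytes / 1e9
control_msgs = polls_per_day * signaling_msgs

print(f"Polls per day:      {polls_per_day:,}")
print(f"Payload data:       {data_gb:,.0f} GB/day")
print(f"Signaling messages: {control_msgs:,} per day")
```

Under these made-up assumptions, the payload is a few terabytes a day, but the network has to process tens of billions of signaling messages to deliver it, which is the kind of load Yates says the networks weren't designed for.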
Similarly, multiplayer games, Twitter, and social networking sites used on wireless networks are constantly refreshing, pulling down data on what individuals are doing, and broadcasting it.

There's also a less-than-obvious problem caused by big downloads of things like HD video. Networks, says Yates, are designed for two-way communication. In effect, the network is waiting for traffic to come up the pipe and consuming a certain amount of resources while those channels are idle. Thus, massive downloads actually cause both downstream and upstream problems — stress the networks weren't designed to handle. "Carriers handle network management differently — even if one carrier is optimized, another may not be. And because networks are connected, the weakest link sets the pace," Yates says.

Spectrum, like water, is a resource you don't think much about — until it runs out. And that's a major challenge facing carriers, consumers, and the government.

"I believe that the biggest threat to the future of mobile in America is the looming spectrum crisis," said FCC chairman Julius Genachowski at the CTIA conference in October. He predicted that total wireless consumption could grow from 6 petabytes a month last year to 400 petabytes by 2013. (A petabyte is 1,024 terabytes.) "So we must ask: What happens when every mobile user has an iPhone, a Palm Pre, a BlackBerry Tour, or whatever the next device is? What happens when we quadruple the number of subscribers with mobile broadband on their laptops or netbooks?" Genachowski said.

Right now, there's approximately 834MHz of total spectrum available (including 50MHz about to be added), but the FCC believes that most of it — 760MHz to 840MHz — will be needed by 2010, leaving little for future demand. The commission may well expand that, but there will be competition beyond the wireless industry to use it, particularly from the military and emerging entrants to the marketplace, says Yates.

A December 2009 report from Morgan Stanley shows that peak wireless data usage in the United States routinely exceeds 75 percent of capacity, a danger sign for carriers. Much of that is due to iPhone users, who use the Internet much, much more than other smartphone users (though Android users are beginning to take significant advantage of the Web as well). The financial firm expects AT&T and other carriers to have boosted capacity significantly by 2012 at their cell sites, where many of the bottlenecks that frustrate users occur, thus reducing peak demand to 60 to 70 percent of capacity.

The wireless industry has wrestled with capacity challenges in the past. In the 1990s, AT&T added Digital One Rate plans to its offerings. The "one rate" deal was an overwhelming commercial success, adding hundreds of thousands of subscribers — but it also overwhelmed a network that wasn't ready or optimized to receive them in such short order, recalls Michael Voellinger, executive vice president of Telwares.

AT&T, Verizon, and the other major carriers bear plenty of responsibility for lackluster 3G service, but if they are to avoid another, much broader meltdown, a lot of players — including the FCC — had better start moving to solve network management issues and the shortage of spectrum. Consumers may even have to moderate their appetite for the most bandwidth-intensive applications.
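Genachowski's projection is also easy to sanity-check. Reading his figures as a 2008 baseline of 6 petabytes a month growing to 400 petabytes a month by 2013 (the five-year window is our reading of his framing), the implied growth rate works out as follows:

```python
# Quick check on the scale of Genachowski's projection: wireless data
# growing from 6PB a month in 2008 to 400PB a month by 2013.
start_pb, end_pb, years = 6, 400, 5

growth = end_pb / start_pb        # overall multiple over the period
annual = growth ** (1 / years)    # implied year-over-year factor

terabytes = end_pb * 1024         # 1 petabyte = 1,024 terabytes
print(f"Total growth: {growth:.0f}x over {years} years")
print(f"Implied annual growth: {annual:.2f}x (~{(annual - 1) * 100:.0f}% per year)")
print(f"400PB/month = {terabytes:,} TB/month")
```

That works out to roughly a 67-fold increase, with traffic more than doubling every year, which helps explain the FCC's alarm.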
2. "Conflict minerals" in your PC and cell phone fuel civil war in Congo

It's not often that the technology industry, human rights activists, and both parties in the U.S. Congress are on the same page. But in 2009, the long-running horror story in the eastern regions of the Democratic Republic of Congo — where the mining of coltan, tungsten, and other minerals crucial to the manufacture of cell phones has fueled a series of bloody civil wars — moved significant players to action.

Late last month, Reps. Jim McDermott (D-Wash.) and Frank Wolf (R-Va.) introduced the Conflict Minerals Trade Act, a bill aimed at slowing the importation of products made with "conflict minerals." In essence, the House bill (and a similar effort in the Senate) requires companies along the supply chain to prove that the materials they use are not coming from mines controlled by warlords in Congo. The cost of verification and inspections, which will include onsite visits to mines and refineries, will be borne by technology companies.

Coltan is the short name for the mineral ore columbite-tantalite, from which the element tantalum, a key ingredient of capacitors, is refined. Tantalum capacitors are used in a range of electronic applications, including cell phones and PCs. Tin, another conflict mineral, is an ingredient of solder paste.

For all its importance, coltan is still mined in much the same way that gold was extracted from the hills of California in 1848. Laborers, many of whom are children, chip away chunks of rock with hand tools, then wash the rubble to separate coltan from useless minerals. It can take as much as two days to collect a few ounces of coltan and earn the equivalent of about $10. Naturally, prices rise as the coltan moves up the supply chain, and significant profits are siphoned off by warlords who use the money to buy arms.

It's not hard to understand why this issue has stayed below the radar. Stories about atrocities and suffering in Africa are so commonplace that without a feasible way to make a difference, Americans tend to sigh and move on.

Until now, even consumers and companies that wanted to avoid the use of "conflict minerals" had trouble identifying where and how they were mined. Given the pervasive use of those minerals in electronics, it simply isn't feasible to boycott them. And companies sincerely wanting to buy components built with conflict-free minerals could not unravel the lengthy and obscure supply chain that leads from mines in the Congo to refineries and manufacturers around the world and ultimately to the United States.

To address that, the House bill mandates penalties for companies that knowingly or carelessly file false declarations regarding the importation of goods tainted with "conflict minerals." But with the exception of two U.S. coltan refiners that would be banned from importing the mineral from the conflict zone, there is no bar to the importation of goods using "conflict minerals," as long as the use is disclosed. Ultimately, it's a "name and shame" effort that supporters hope will give ammunition to consumers and manufacturers wishing to help stem the violence in Congo.

Among the technology companies and associations that have publicly supported the McDermott-Wolf bill are Motorola, Research in Motion, and Hewlett-Packard, which is already pushing suppliers on this issue.

3. Malware's new frontier: Organized gangs tricking users to act stupidly

If you're in IT, you spend countless hours defending the network from threats.
But no matter how hard you work, the biggest threat doesn't come from outside the firewall, and it isn't from unpatched software or buffer overflows. The user is your biggest, albeit unwitting, enemy. "If all software had zero exploits, it wouldn't drastically change the amount of successful hacking," says Roger A. Grimes, a security pro and InfoWorld's Security Adviser blogger. That's because the bad guys have elevated social engineering, the hack that takes advantage of a user's greed, lust, or simple naivete, into the preferred way to open the gates to malware.

Tricking users into visiting a phony, malware-laden Web page or clicking on a virus-laden attachment is hardly a new tactic. But now hackers "have brought the industrial revolution to malware," and that makes their attacks deadlier and more pervasive than ever, says David Perry, global director of education for Trend Micro, whose global array of sensors now detects some 60,000 malware samples a day.

Mimicking traditional businesses, the hackers are working in large, highly structured organizations, like the Russian "partnerka," automating production and distribution and even outsourcing production to freelancers and smaller gangs in other countries.

Not long ago, most malware, including the infamous Jerusalem and Monkey viruses, was self-contained, encompassing both the replicator and the payload. But now the encryption engine remains on a hacker's (or a zombie's) server, generating a new variant every few minutes, says Perry. The downloadables are scrambled just enough so that their patterns aren't recognized by conventional defenses. Sadly, an unintended consequence of Microsoft's decision to weaken the unpopular UAC (User Account Control) in Windows 7 is an operating system that may be more at risk of malware infection.

To be fair to users, some of the traditional advice they get from IT or popular publications is no longer adequate. IT, says Grimes, tells people to go only to trusted sites. Unfortunately, by the beginning of 2009, the majority of infectious sites were mainstream. In a typical attack, users of FoxNews.com were told they needed to install a new codec to watch clips on the site. Once installed, the "codec" turned out to be a malicious piece of code undetected by most defenses, Grimes recounts.

Even security sites aren't always safe. Last year, a blog belonging to Microsoft security expert Jesper M. Johansson was seeded with a link to a malware site embedded in a comment. Johansson noticed it, then backtracked, finally landing on a site hosted in Ukraine that ran a fake program called XP Anti-virus, which scanned nothing, of course, but did load malware.

Finally, it's clear that hacking has long since moved into money-making mode through the use of keyloggers, scrapers, and other malware designed to steal personal and corporate data. A major source of malware is now an organization of hundreds of affiliated partnerka networks in Russia, says Dmitry Samosseiko of SophosLabs Canada. Indeed, the partnerkas [PDF] are the main source of the "scareware" (malware that masquerades as anti-virus software), he says.
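Perry's description of server-side repacking is easy to demonstrate in miniature. The Python toy below is not a real packer and contains no real malware; the "payload" is a harmless string and the "encryption engine" a one-byte XOR, both invented for illustration. It simply shows why a blocklist of hashes of known samples can never keep pace with variants minted every few minutes:

```python
# Toy model of server-side repacking: the same payload, re-encoded with
# a fresh random key, yields a different hash every time, so a static
# signature database of known-sample hashes never catches up.

import hashlib
import os

payload = b"pretend-this-is-a-malicious-program"  # harmless stand-in

def repack(data: bytes) -> bytes:
    """Encode the payload with a random one-byte XOR key, key prepended."""
    key = os.urandom(1)[0]
    return bytes([key]) + bytes(b ^ key for b in data)

# A defender's signature database: hashes of previously seen samples.
known_signatures = {hashlib.sha256(repack(payload)).hexdigest()}

# Every few minutes the attacker's server emits a new variant...
new_variant = repack(payload)
digest = hashlib.sha256(new_variant).hexdigest()

# ...whose hash is almost certainly not in the database.
print("Detected by signature:", digest in known_signatures)  # usually False
```

Real packers are far more elaborate than a one-byte XOR, but the asymmetry is the same: generating a variant is nearly free, while each signature protects against exactly one of them.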
4. GPU computing: Graphics accelerators aid mainstream, even business, PCs

Adding a discrete graphics card to an enterprise PC seems awfully silly. After all, why support the game-playing habits of your users, right? Not so fast, says Nathan Brookwood, a principal analyst at Insight64. "Performance-constrained applications will be taking advantage of the GPU for operations that have nothing to do with graphics," he says.

The longtime chip analyst is referring to a quietly emerging technology known as GPU computing. While a typical CPU has two to six cores, a graphics processing unit may have hundreds crunching away at numerical calculations. Single-core CPUs were basically designed to tackle one problem and move on to the next, and software for those chips has been designed accordingly. By contrast, GPUs break a problem up into very small pieces and process them in parallel at a very high rate of speed.

Generally, GPUs are used for games and other graphics-intensive applications. But given their power — discrete graphics processors can deliver up to a teraflop (1 trillion calculations per second) of computing power from the same silicon area as a comparable microprocessor — why not let them do other things?

Indeed, they already are. Brookwood notes that moving and converting a video file on a Windows 7 machine is excruciatingly slow; it might take 30 minutes to move a 30-minute file. But with the aid of a garden-variety GPU from Nvidia or ATI, the operation speeds up by three to five times.

Not surprisingly, there is a catch of sorts. Only applications that have a substantial amount of parallelism can benefit from GPUs. GPUs also require a fresh approach to programming: The programming model used on GPUs is different from the conventional serial processor programming model, according to the National Center for Supercomputing Applications. (Read NCSA's take on GPU computing.)

Even so, some existing applications can take advantage of the GPU's power. They include drag-and-drop transcoding (changing the format of a file), face tagging for photo management, video upscaling for improved DVD viewing, and faster video editing, says Brookwood.

Meanwhile, the new OpenCL standard makes parallel programming easier and less proprietary. Windows 7 supports it, as do Linux and Apple's Snow Leopard. Apple's Grand Central Dispatch technology (now open source) allows programmers to distribute workloads across multicore CPUs and GPUs.

Nvidia and AMD, which owns ATI, see GPU computing as a great marketing tool and are (no surprise) hyping it rather hard. But puffery aside, this is a technology to watch, and one that should give you pause before you say no to putting PCs equipped with discrete graphics processors on company desks.
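The mental shift GPU computing asks of programmers is from looping over elements one at a time to applying one operation across all elements at once. The sketch below uses Python with NumPy, which still runs on the CPU, purely to contrast the two formulations; an OpenCL or CUDA version would fan the second form out across hundreds of GPU cores:

```python
# The data-parallel pattern GPUs exploit: one simple operation applied
# independently to millions of elements. NumPy executes on the CPU, but
# the second formulation is exactly what OpenCL or CUDA would spread
# across hundreds of GPU cores.

import time
import numpy as np

n = 10_000_000
frames = np.random.rand(n).astype(np.float32)  # stand-in for pixel data

# Serial style: one element at a time, as a single-core CPU program would.
sample = 100_000  # time only a slice; the full Python loop is very slow
t0 = time.time()
out_serial = [x * 1.5 + 0.1 for x in frames[:sample]]
serial_ns = (time.time() - t0) / sample * 1e9

# Data-parallel style: one operation over the entire array at once.
t0 = time.time()
out_parallel = frames * 1.5 + 0.1
parallel_ns = (time.time() - t0) / n * 1e9

print(f"Serial:        {serial_ns:.1f} ns/element")
print(f"Data-parallel: {parallel_ns:.2f} ns/element")
```

The catch the analysts mention is visible here too: only the second, "same operation everywhere" shape maps onto a GPU, which is why applications without substantial parallelism see no benefit.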
5. Dumber may be smarter: Less-accurate chips bring more speed and savings on power and heat

A quiet revolution in integrated circuit technology is occurring in Texas. Researchers at Rice University in Houston have developed a new type of chip that uses just one-thirtieth the electricity of, and runs faster than, comparable silicon produced using standard CMOS technology.

Is there a trade-off? Absolutely. PCMOS (probabilistic complementary metal-oxide semiconductor), as the new technology is called, is based on probabilistic logic, which by definition is less accurate than the Boolean logic we all learned in school, the basis of the calculating algorithms used by conventional microchips.

That may sound counterintuitive. After all, who wants a computer that doesn't return accurate information? But before you click to the next page, consider this: Applications such as random number generation used for encryption, video playback on handheld devices, and signal processing don't have to be completely accurate, says Krishna Palem, a professor of computer science at Rice. "If we are willing to live with a tiny bit of error, there are significant savings in power," he says.

Palem explains that the movement of electrons in a chip produces "noise" that interferes with accuracy at the transistor level. To overcome that noise, most chips run at a relatively high voltage. A lower voltage means more noise and less accuracy, but decreased power consumption and heat.

How much of a problem is decreased accuracy? To paraphrase the researcher, not all calculations are created equal. Pick a four-digit number — say, 2143. In a calculation, the values of the thousands and hundreds places are quite significant, the tens place less so, and the ones place the least significant. So PCMOS lowers the operating voltage of the logic gates that calculate the least significant bits. In a video application, PCMOS might cause a degradation of quality too small for most people to discern. Used in a cell phone, the savings in power could mean battery life measured in weeks.

Despite the seemingly radical nature of PCMOS, devices based on the technology can be produced using conventional fabricating techniques and materials, says Palem. Indeed, samples have already been produced by Taiwan Semiconductor Manufacturing.

The first samples were ASICs used for encryption; in the near future, Palem, whose team includes graduate students at Rice and researchers at Singapore's Nanyang Technological University, will be conducting proof-of-concept tests on microchips for Bluetooth headsets, graphics cards, hearing aids, medical implants, and eventually cell phones. He expects to see some actual production in 2010.
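Palem's four-digit example translates directly into a toy simulation. In the Python sketch below, the two lowest bits of an adder flip with invented probabilities, standing in for gates run at lower voltage, while the high-order bits stay exact; the resulting mean error is a tiny fraction of the answer:

```python
# Toy simulation of the PCMOS idea: let the logic computing the least
# significant bits run at lower voltage (modeled as a chance of a bit
# flip) while the high-order bits stay exact. The flip probabilities
# are invented for illustration, not measured PCMOS error rates.

import random

def noisy_add(a: int, b: int, flip_prob=(0.10, 0.05, 0.0, 0.0)) -> int:
    """Add a and b; bit i of the result flips with probability flip_prob[i],
    where i=0 is the least significant bit. Higher bits are always exact."""
    result = a + b
    for bit, p in enumerate(flip_prob):
        if random.random() < p:
            result ^= 1 << bit
    return result

random.seed(1)
exact = 2143 + 1789
trials = [noisy_add(2143, 1789) for _ in range(10_000)]
avg_err = sum(abs(t - exact) for t in trials) / len(trials)

print(f"Exact sum: {exact}")
print(f"Mean absolute error: {avg_err:.2f} "
      f"({avg_err / exact * 100:.4f}% relative)")
```

Even with the lowest bit wrong a tenth of the time, the average answer is off by a few thousandths of a percent, which is the kind of error a video decoder or signal processor can absorb invisibly in exchange for the power savings Palem describes.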
6. Lawyers begin trolling in the cloud

You know a technology is gaining traction when the lawyers figure there's money to be made. And although it is early days, cloud computing is beginning to generate lawsuits, settlements, and hassles. We've seen three potentially significant ones this fall, and more may well follow.

One of the few, and perhaps the first, cloud-related lawsuits to be resolved was a patent infringement case brought by an inventor named Mitchell Prust, who claimed he patented cloud storage and cloud computing technology that allows users to access computer applications through Web browsers.

Prust sued, and in November he won a federal court judgment against NetMass, an online storage vendor. Three of his patents were the basis of his claim against NetMass and of a still-pending lawsuit against Apple, according to Prust's attorney. The broadest of his patents was filed back in 2000 and is titled "Method and system for seamless access to a remote storage server utilizing multiple access interfaces executing on the remote server." (His patents are U.S. Patents 6,714,968, 6,735,623, and 6,952,724. If you're interested, you can look them up at the Patent Office Web site.)

Weeks before the Prust/NetMass settlement was announced, Sidekick users, angry over a weekend-long outage in October, filed two suits in federal court in Northern California. Each action alleged negligence and false claims on the part of Microsoft and T-Mobile, although it appears that users' data was eventually recovered. "T-Mobile and Microsoft promised to safeguard the most important data their customers possess and then apparently failed to follow even the most basic data-protection principles. What they did is unthinkable in this day and age," said attorney Jay Edelson, who represents one of the plaintiffs.

The legal actions this fall may well be harbingers of litigation yet to come. "There's a big rush to adopt the newest and most cost-efficient tech model, but there are some fundamental business and legal issues that will first need to be addressed," says Daren Orzechowski, an attorney specializing in intellectual property issues at the law firm of White & Case.

Legal issues such as security — Amazon.com's EC2 has already been breached — privacy protection, and service levels are not unique to the cloud, Orzechowski says. Indeed, there's no reason why cloud providers couldn't offer service-level agreements similar to those in traditional computing, he adds. Did T-Mobile really guarantee 24/7 access to data, or did customers just assume that? The courts will decide.

"[Intellectual property] is always a fertile ground for litigation. As companies connect legacy systems into the new cloud platforms, are they complying with existing intellectual property arrangements, licenses, and laws?" asks Orzechowski. In the case of NetMass, apparently not.

And as vendors rush to offer cloud-based solutions, earthbound issues such as compliance with layers of state, federal, and international regulation will need to be addressed.

It's not at all clear that cloud computing is any more likely to result in litigation than other technologies. But because it is so new, the legal bugs will have to be driven out as adoption becomes more common.

7. Ruby on Rails gets respect in the enterprise

When Bust Out Solution, a small Web design and development shop, was hired to Webify Oracle apps for Williams-Sonoma, founder Jeff Lin quickly cranked out a 40-page paper proposing Ruby on Rails (or just Rails, as it's often called) as the development framework. The answer came back almost immediately: "No. We've never heard of it," Lin recalls. That was in 2005.

Fast-forward to the present: Lin and his team built Best Buy's IdeaX social networking application for customers of the home electronics giant. The framework: Rails, on top of a cloud platform from Heroku, a platform service vendor.

Rails, which was new in 2005 when Williams-Sonoma gave it the thumbs-down, has matured and is slowly spreading beyond its traditional base of agile startups into the enterprise. Why has this story remained under the radar? "Java became a big deal quickly because it grew with the early Internet and garnered huge amounts of attention," says Heroku CEO Byron Sebastian. "But the shift to Rails is more of a generational shift with programmers who aren't out of school very long."

For developers like Lin, one of the biggest reasons to use Rails is its agility.
"We can write [the apps] quickly and see how users react and change them," he says. Or, as he jokes, "fail early, fail fast." The ability to deploy a prototype in a hurry is a big selling point for developers going up against much larger, but slower, competition, as such rapid prototyping lowers costs. Best Buy, for example, looked at Salesforce.com but found that it was cheaper to build the application on Rails than to buy a ready-made platform.

However, while enterprise resistance to Rails has certainly diminished, there are situations in which Rails is simply not a good solution, says Sergei Serdyuk, a managing partner at Redleaf Software. In some cases, he says, the existing infrastructure of an enterprise doesn't lend itself to the use of Rails because it might not be a good fit for, say, existing security policies.

Even so, Redleaf is a Rails-only shop and has built infrastructure for large clients, including IMS, the world's largest barter Web site. In addition to the framework's agility, Serdyuk says, the structure of Rails makes it relatively simple for teams to collaborate, since its naming conventions and the locations of different components are very simple.

If you're skilled in Rails, 2010 should bring more opportunities, but wholesale enterprise adoption is not here yet.

8. From feast to famine: Dark fiber gets hard, and expensive, to find

Early in this about-to-close decade, fiber optic cable was going to be the answer to everyone's bandwidth problems. Before the dot-com bubble burst, "everyone who owned a right of way, from railroads to telcos, got greedy and started laying cable," says Glenn Ricart, an Internet pioneer who is now CEO of National LambdaRail. Not surprisingly, the result was a glut of dark (that is, unconnected and unlit) fiber that lasted for years.

But prices are soaring as the glut disappears. And that could affect the connectivity choices open to enterprises.

Dark fiber can provide high-speed connectivity at a low cost. Instead of paying telcos to incrementally adjust the bandwidth on a physical link as needed, dark-fiber customers can simply light up the circuit with inexpensive 100Mbps or 1Gbps termination gear. What's more, the use of dark fiber means the circuit can be used for other protocols as well, not just IP.

Until this year, prices were generally reasonable, but they have doubled in the past 12 months as the inventory of dark fiber has shrunk. By contrast, prices for lit fiber have gone up just 10 to 15 percent, says Ricart.

One reason for the big uptick in prices appears to be a surge of buying by Google, and perhaps Amazon.com and Microsoft, which will need ever more bandwidth as they expand their efforts in cloud computing. "Google has bought an awful lot of dark fiber from people like us," Mark Lewis, director of service development at Interoute, said at an industry conference in November. It's not hard to see the connection between cloud computing and fiber: If services are located outside the enterprise, there's a much greater need for connectivity. Cloud computing still plays a relatively small role in enterprise IT, but buying fiber now is an insurance policy against the day when capacity is needed.

Meanwhile, companies holding dark-fiber inventory have taken portions off the market to push prices even higher, Ricart says. That's a tactic that wouldn't work if there weren't an imbalance between supply and demand in the first place.
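The economics that make dark fiber worth fighting over can be sketched with a simple break-even model. All of the prices in the Python snippet below are invented for illustration (none come from Ricart, Telwares, or any carrier); the pattern to notice is a one-time outlay for termination gear traded against a lower recurring lease:

```python
# Hedged back-of-envelope comparing a leased lit circuit against dark
# fiber plus your own termination gear. Every price here is hypothetical;
# plug in real quotes before drawing any conclusions.

lit_monthly = 8_000          # managed 1Gbps lit service, per month
dark_lease_monthly = 3_000   # dark-fiber lease, per month
optics_upfront = 20_000      # 1Gbps termination gear, both ends, one time

def cumulative_cost(months: int) -> tuple[int, int]:
    """Total spend on each option after the given number of months."""
    lit = lit_monthly * months
    dark = optics_upfront + dark_lease_monthly * months
    return lit, dark

for months in (6, 12, 24, 36):
    lit, dark = cumulative_cost(months)
    cheaper = "dark" if dark < lit else "lit"
    print(f"{months:>2} months: lit ${lit:>9,}  dark ${dark:>9,}  -> {cheaper} wins")
```

Under these made-up numbers the gear pays for itself in four months, and the gap only widens; that, plus the freedom to relight the same strand at higher speeds later, is why buyers keep bidding up a shrinking inventory.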
Ultimately, the market should correct itself, says Michael Voellinger, executive vice president of IT and telecom consultancy Telwares. "The investment opportunity surrounding low-latency capacity solutions in the context of the cloud and bandwidth-intensive applications and content is enormous. The dollars will flow into fiber," he says.

Voellinger expects the shortage to be short-lived. But it took about nine years for the glut to turn into a drought, so it's not at all clear how long it will take for the balance to swing the other way.

The shortage is not uniform, notes Ricart. Enterprises in the largest markets can still find the capacity they need, but in second- and third-tier cities there is a crunch. His advice: "If I were a CIO in a lower-tier market, I would think about locking in connectivity."

9. Patent Office to inventors: See you in 2013

Think it's tough getting Apple to approve your iPhone app? Just try getting approval for a patent in the United States. It now takes nearly three years to find out whether the U.S. Patent Office will give you the thumbs-up.

Maybe you'll never file a patent yourself, but patents equal innovation. An industry that can't protect its intellectual property by patenting it is less innovative, less dynamic, and less able to create capital and jobs.

The warning symptoms: The U.S. Patent Office is underfunded and overburdened, awards in patent lawsuits have become astronomical, and the cost of defending claims against a patent has soared.

Last year, nearly 500,000 applications were filed with the U.S. Patent Office, adding to a backlog of about 1.2 million. The agency has a budget of about $1.9 billion, all of it raised via application fees. Not only has Congress declined to give the office more resources, it treats the agency like a piggy bank, diverting the money it takes in to other uses. And by all accounts, the Patent Office is embarrassingly burdened with second- or third-rate technology, another factor slowing down the approval process.

Maybe the quality of patent applications has gone down, or maybe the system is suffering from an unexplained blip, but in fiscal 2008, the Patent Office's Board of Patent Appeals approved only 44 percent of the patents that came before it — down from 66 percent five years ago and 71 percent at the start of the decade.

Meanwhile, patent litigation costs can run as high as $5 million in cases where potential damages exceed $25 million, according to the American Intellectual Property Law Association. "As the costs of litigation increase, the stakes for companies are getting higher and higher," says Bijal Vakil, an intellectual property attorney with the firm of White & Case.

Before 1990, only one patent damage award passed the $100 million mark. Since then, huge verdicts in patent cases have become increasingly common, with at least five topping $500 million, Vakil says.

At the same time, lawsuits waged by "nonpracticing entities," less politely known as patent trolls or patent hoarders, have increased sharply, even as overall patent litigation is down, according to Patent Freedom, a Web site that tracks troll activity.
10. Enterprise wikis become a platform

When engineers at Cisco Systems realized the networking giant needed a better system to track network errors, they turned to an unexpected technology: the wiki. What they built was no simple collaborative Web page. It's an inventory-tracking application that correlates network problems with changes to network hardware and configurations, built on technology from Twiki, an open source software developer.

The wiki has long since become an enterprise tool. But now wikis like the ones at Cisco are moving beyond their traditional role and becoming a platform in their own right. The applications are light, quick, easy to deploy, and far cheaper than traditional enterprise software. Custom-tailored wikis allow developers to automate workflows, pull data from multiple applications, and blend structured and unstructured data in innovative ways. What's more, wikis are beginning to include enterprise-type features such as permissions, versioning, and tracking, which makes them much easier for old-line IT managers to accept.

Consider this use of the new wiki: One major enterprise (the company asked that its name not be used) uses wiki technology from MindTouch to pull together a rich mashup of SugarCRM data, trouble ticketing, and accounts payable. The information lets a sales rep or technician know right away whether a customer pays on time or has a history of related network problems. Should a sales rep want to develop a new relationship, she can find out right away who at the company is connected to her via LinkedIn, for example. The system was developed fairly quickly using some of the 100 or so application extensions built into the MindTouch platform.

While we tend to think of wikis as collaborative documents generated by data entered on a Web page, Cisco's Twiki-based inventory system is integrated with SNMP, which allows IT to see changes to hardware configurations in real time. A separate Twiki app at Cisco uses standard connectors to pull information from multiple databases — or, if the information resides only on a Web page, it will scrape the data from the page.

That wouldn't work if developers at Twiki hadn't taken a revision control system typically used to manage code and turned it into a database-like feature that can manage unstructured information.

Even so, extracting the full value of unstructured data is still quite a ways off, says Twiki CEO Jitendra Kavatheka. "Unstructured data represents human thought, and software hasn't caught up."
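As a concrete, though hypothetical, illustration of the wiki-as-platform pattern described above, the Python sketch below gathers device state (stubbed in place of a real SNMP walk) and writes it to a wiki page over HTTP, letting the wiki's revision history serve as the audit trail. The endpoint, page name, and token are invented; Twiki and MindTouch each expose their own, different APIs, so treat this as a shape, not a recipe.

```python
# Sketch of the wiki-as-platform pattern: gather device state and write
# it to a wiki page through an HTTP API, so each save becomes a tracked
# revision. The URL, page name, and token below are hypothetical, and
# snapshot_devices() stands in for a real SNMP walk.

import json
import requests

WIKI_API = "https://wiki.example.com/api/pages/NetworkInventory"  # hypothetical
TOKEN = "changeme"  # hypothetical auth token

def snapshot_devices() -> list[dict]:
    """Stand-in for polling the device inventory via SNMP."""
    return [
        {"host": "core-sw-01", "model": "6509", "firmware": "12.2(33)"},
        {"host": "edge-rt-07", "model": "7206", "firmware": "12.4(15)"},
    ]

def publish(devices: list[dict]) -> None:
    """Render a simple wiki table and save it as a new page revision."""
    rows = "\n".join(
        f"| {d['host']} | {d['model']} | {d['firmware']} |" for d in devices
    )
    body = "| Host | Model | Firmware |\n" + rows
    requests.put(
        WIKI_API,
        headers={"Authorization": f"Bearer {TOKEN}"},
        data=json.dumps({"content": body}),
        timeout=10,
    )

if __name__ == "__main__":
    publish(snapshot_devices())
```

The design point is the one Twiki stumbled onto: because every save is a revision, the wiki doubles as a change log, so correlating "what broke" with "what changed" comes free.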