How bad is computer security in the business world? Complete disarray, if you believe a friend of mine who's worked in the industry forever. Behold his hair-raising tales.

Recently I met up with an old friend at a cafe in San Francisco’s Mission District, the kind where hipsters pay six bucks for an individually brewed cup of ultracaffeinated coffee. We didn’t belong there — we were too old. But for some reason it seemed like a good moment to inquire about my friend’s experiences as a professional computer security sleuth, a subject he seldom visits. The powerful brew had put him in a talkative mood, and because it was midday, only a few hipsters lingered, all three of them wearing earbuds.

“I have a good idea of what you do,” I said, glancing around the room. “But I generally don’t hear the details. You must have some pretty outrageous stories.”

He grinned. “Sure I do. I’ll tell you a few. But I know what you do also — so no names.”

With that he was off and running, as was my voice recorder. I got the feeling he was dying to get these stories off his chest. The thread running through them all: Security practices in the real world are much, much worse than you think, even at Fortune 50 companies you’d expect to have the expertise and resources to know better.

My friend’s first anecdote was pure irony: what he discovered at a security software company. If you don’t know the lingo, “pen test” is short for “penetration test”:

I once worked for a great security company that was acquired by a very large, very well-known antivirus vendor. One of the first things we did was to pen test the antivirus vendor’s software, which was running on tens of millions of computers. What did we find?
Hundreds of buffer overflow bugs and other exploitable flaws. The software people were running to protect themselves probably had more bugs than the software it was supposed to protect.

Who could have guessed the guardians needed such serious help? In this next case, help came too late — the bad guys had already walked off with a priceless map of the world:

I was hired at another company to help minimize the damage from an APT (advanced persistent threat) break-in. I quickly learned that the company I was working for made tens of millions of dollars penetration testing other companies. Their customer list was a who’s who of the Fortune 500. Their penetration-testing database, which the APT had accessed, contained a list of every vulnerability their pen-testing teams had found at each client, including whether the vulnerability had been resolved or was still a problem. With one download, the APT had a road map for breaking into some of the world’s largest companies.

Here’s another one that boggles the mind, following the theme of vulnerabilities where you least expect to find them:

A very well-known financial company liked to brag about its very high-end security. Not only did they not use Windows, but they didn’t use TCP/IP or any other popular protocol. They had written their own communication protocols and required three-factor authentication to access their network. They monitored their network like no other company I’ve ever encountered, and they frequently fired employees and contractors for doing things beyond the scope of their jobs. One day I was sitting in their cafeteria eating lunch when I noticed a company kiosk in an area accessible to the general public. I walked up to it and found that it was logged in as root and had access to every system and database the company had to offer. And everyone knew this.
The company that prided itself on its heightened security also accepted public kiosks that could access any of the company’s data. No one there seemed to think this was an unnecessarily high risk.

More in the same vein. I have a feeling this one was corrected, but you never know:

At another company I was hired to do a routine security audit. I found that every Web server had permissions that essentially left every folder and file open to the world. Any user accessing the Web server could read and change any file on it. It turned out this configuration vulnerability was part of their base image, so every server in the company had the same flaw.

It gets worse. How about leaving the keys to the kingdom lying around where anyone can use them? Check this out:

I often look for overly permissive permissions on shared files and folders. At one company, one of the largest in the world, I found that every file in their logon folder — which every computer and user in the company accessed to log on to the company’s worldwide network — was marked Everyone: Full Control. This meant any employee could modify those files, perhaps plant a key-logging Trojan or malicious worm, and immediately infect the whole environment. That permission had been set more than 10 years earlier.

How do these things happen? Quite often, the companies in question have no idea — whoever made the stupid decision has left the company or doesn’t want to admit to being responsible. But you can bet that some mistakes start with management opting for security on the cheap:

I was hired to submit a bid for encrypting a company’s customer data end to end — again, one of the largest companies in the world. My bid was one of five, and I was happy to see that, at about $5 million, it was in line with all the other major players in the room. There was only one vendor left to present, which none of us had ever heard of.
Their bid was less than $1,000: to create a single private key that every person in the company would share. In crypto circles this is a laughable 101-level mistake. Which vendor did the company worth hundreds of billions of dollars go with? The low bid. Despite all our protestations and warnings, they implemented a solution in which the compromise of any one device out of millions would render the whole scheme insecure.

Last comes a general observation about password management — a horror show my friend has seen again and again. It illustrates how even the most basic security practices run afoul of operational reality:

Everyone knows you should change your passwords frequently, and every company I visit has a policy that forces password changes every 45 to 120 days. Yet every company I visit also has hundreds or even thousands of passwords that haven’t changed in a long time — years, sometimes decades — often belonging to the most privileged accounts in the company. They’ve been using a privileged account and its password for so long that they’re scared to change it for fear of causing operational problems. Even though corporate policy says all passwords must change every x days, every company ignores the rule, most often for its most privileged accounts.

This final observation illustrates why it’s so hard to find any organization whose security is acceptable, let alone locked down tight. Decisions made long ago lock us into poor practices that seem impossible to unwind.

I’ve included only half the stories my friend told me; if I’d let him, he would probably have kept telling tales long into the night. It’s unfortunate that we live in a world where security vendors push complicated, expensive solutions to obscure problems when, as these stories make plain, common-sense audits and fixes would clean up 90 percent of the security mess. I bet quite a few of you out there could share some stories of your own.
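Several of these findings — web roots open to the world, logon shares marked Everyone: Full Control — are exactly the kind of thing a short audit script can surface before an attacker does. As a minimal sketch of the idea (POSIX permissions standing in for the Windows ACLs in the anecdotes; the function name is my own, not from any of the companies described):

```python
import os
import stat

def find_world_writable(root):
    """Walk a directory tree and return paths whose mode grants
    write access to 'other' users — the rough POSIX analogue of
    the Everyone: Full Control misconfiguration described above."""
    findings = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.lstat(path).st_mode
            except OSError:
                continue  # unreadable or vanished entry; skip it
            if mode & stat.S_IWOTH:  # world-writable bit is set
                findings.append(path)
    return findings
```

Pointed at a web root or a shared logon folder, a routine run of something this simple would have caught both the base-image flaw and the decade-old share permission.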
This article, “Horrifying confessions of a security sleuth,” originally appeared at InfoWorld.com. Read more of Eric Knorr’s Modernizing IT blog.