Martin Heller
Contributing Writer

Optimize the slowest thing, take 2

Jul 2, 2008 · 4 mins


My posting Optimize the slowest thing last week drew quite a bit of feedback, which made me think about the issue again. Let’s start with the feedback.

DCrosby pointed out that “optimize the slowest thing” is a tenet of Theory of Constraints. He pointed to https://www.goldratt.com/ and the book The Goal: A Process of Ongoing Improvement. I wasn’t previously familiar with the Theory of Constraints.

Brian said:

I don’t completely agree with this. There is logic to it, but that logic is limited in scope. 

My projects always try to encompass certain performance strategies from the beginning. The key is not to go overboard, but make at least a first-pass effort.

The problems with reserving all optimization work to the end are:

  1. You work against the well-known principle that ANY design factor, introduced early enough, is easy and cheap to incorporate. Leave it too late and those factors become orders of magnitude more expensive to implement;
  2. Once the project is working, business realities often force attentions to switch to other projects. In the real world your “working” system may never get optimized;
  3. When multiple factors all contribute to poor performance in roughly equal magnitude, spending time only on the slowest thing can cause you to return to optimization work again, and again, and again, and… ;
  4. Moore’s law makes it tempting to simply throw more hardware at the problem. This “solution” can be the design intent, in other words. There are too many examples of this extant to list. However, in the real world, upgrades tend to come at widely spaced intervals of years, not continuously. The net result is unsatisfied clients, also for years.

So as a backfilling effort, I generally agree with your strategy. However I wouldn’t want that to be my sole performance strategy.

I answered:

I agree with you in practice. As I wrote the post, I kept thinking of times when I did optimize early on because of long experience as to where the bottlenecks were likely to be. And then I felt a little guilty about them.

Is knowing what you’re doing cheating? 🙂

Brian responded:

From where I stand, knowing what you’re doing is NEVER cheating, so long as appropriate disclosure guidelines are followed!

I was working years ago in a programming environment and had learned all the traditional optimization techniques by rote. Most programmers did the same thing. Finally a technical author wrote an article about benchmarks he had performed on those techniques.

They all worked, but it turned out that most were minor techniques, barely worth using. Just a very small group of optimizations accounted for 95+% of the real world performance improvements.
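Brian’s finding is the usual argument for measuring before optimizing: let a profiler tell you which small group of hot spots actually matters. Here’s a minimal sketch in Python (my own illustration, not from the article Brian describes) using the standard `cProfile` module; `slow_concat` and `fast_concat` are made-up stand-ins for a genuinely costly routine and a cheap one.

```python
import cProfile
import io
import pstats

def slow_concat(n):
    # Repeated string concatenation: quadratic, and a classic hidden hot spot
    s = ""
    for i in range(n):
        s += str(i)
    return s

def fast_concat(n):
    # join() builds the same string in a single pass
    return "".join(str(i) for i in range(n))

def workload():
    slow_concat(20_000)
    fast_concat(20_000)

profiler = cProfile.Profile()
profiler.enable()
workload()
profiler.disable()

# Sort by cumulative time; the top few lines usually reveal where the time went
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

In my experience the profile output looks a lot like Brian’s benchmark article: one or two functions dominate the cumulative time, and everything else is noise barely worth touching.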

It was a revelation!

This made me think of two famous stories. One was a classic optimization blunder made, as I recall, by Jeff Richter: he got the traversal order wrong for a big two-dimensional array in a sample application, and the performance of a simple nested loop suddenly became something like a thousand times slower than it should have been. Jeff, to his credit, turned this mistake into a very interesting article.
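The blunder comes down to memory locality: visiting a two-dimensional array in the order it is laid out in memory versus jumping between rows on every step. A rough sketch of the two loop orders in Python (the original story would have been in C or C++, where caching and paging make the gap far more dramatic than anything you’ll see here):

```python
import time

N = 1000
# A row-major 2-D array: each inner list is one contiguous row
grid = [[1] * N for _ in range(N)]

def sum_row_major(g):
    # Visits elements row by row, in the order they were built
    total = 0
    for row in g:
        for x in row:
            total += x
    return total

def sum_column_major(g):
    # Same result, but hops to a different row on every single access
    total = 0
    n = len(g)
    for j in range(n):
        for i in range(n):
            total += g[i][j]
    return total

for fn in (sum_row_major, sum_column_major):
    start = time.perf_counter()
    result = fn(grid)
    print(f"{fn.__name__}: sum={result}, {time.perf_counter() - start:.3f}s")
```

Both loops compute the identical sum; only the access pattern differs. In native code operating on a large array, the column-major version can thrash the cache (or, in the era of Jeff’s example, the page file), which is how an innocent-looking nested loop ends up orders of magnitude slower.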

The other was a story from the 1960s, when a hot consultant came in to optimize a huge, badly-understood IBM mainframe application. He made a bunch of changes, charged a fortune, and left.

Years later, the application crashed. The crash dumps pointed back to some of the code the consultant had changed. When the programmers looked at it carefully, they discovered that the code never could have run without crashing. In other words, the expensive consultant had optimized, at great expense, at least one section of code that had never been called.

There are a bunch of lessons here, and there’s a bunch more to say about it, but it’s your turn: Let’s hear more about your optimization experiences, good and bad, and what lessons you took from them.

Martin Heller

Martin Heller is a contributing writer at InfoWorld. Formerly a web and Windows programming consultant, he developed databases, software, and websites from his office in Andover, Massachusetts, from 1986 to 2010. From 2010 to August of 2012, Martin was vice president of technology and education at Alpha Software. From March 2013 to January 2014, he was chairman of Tubifi, maker of a cloud-based video editor, having previously served as CEO.

Martin is the author or co-author of nearly a dozen PC software packages and half a dozen Web applications. He is also the author of several books on Windows programming. As a consultant, Martin has worked with companies of all sizes to design, develop, improve, and/or debug Windows, web, and database applications, and has performed strategic business consulting for high-tech corporations ranging from tiny to Fortune 100 and from local to multinational.

Martin’s specialties include programming languages C++, Python, C#, JavaScript, and SQL, and databases PostgreSQL, MySQL, Microsoft SQL Server, Oracle Database, Google Cloud Spanner, CockroachDB, MongoDB, Cassandra, and Couchbase. He writes about software development, data management, analytics, AI, and machine learning, contributing technology analyses, explainers, how-to articles, and hands-on reviews of software development tools, data platforms, AI models, machine learning libraries, and much more.
