james_reinders
Software Programmer

Let’s Make ‘Adaptive’ a Programming Technique

opinion
Mar 24, 2017 · 6 mins


Let’s think about making programs themselves adapt. This has been called adaptive computing by some. And no, I’m not talking about agile programming or adaptive programming.

Thinking adaptively can let us:

  • Get better results faster (e.g., adaptive mesh)

  • Meet deadlines (e.g., adaptive rendering)
  • Adjust dynamically to use newer hardware as it becomes available (e.g., adaptive mesh or adaptive rendering)

For me, a favorite example is adaptive mesh. You may find the use of adaptive rendering to maintain 90 frames per second (fps) in games more compelling, so I’ll talk about both. Both are worth knowing because they’re inspirational for adaptive methods in general.

We’ve seen adaptive algorithms, but do we think to write that way?

We’ve all seen adaptive algorithms. They’re not the radical new concept they were 20 years ago. But they are a technique we can keep in mind as we design algorithms and applications.

I’ve been wanting to share more about adaptive mesh since my article about N-body. At one time, adaptive mesh was a bleeding-edge method for N-body problem solving. Now, it’s mainstream. The concept of using adaptive algorithms reaches far beyond N-body problems (so please stick with me even if N-body is not exciting to you per se).

Think of adaptive as a general technique worth focusing on

You’ve probably heard the expression “Make it as simple as possible, but not simpler” (often attributed to Einstein). It sounds good, but how do we know when we’ve made it too simple? That’s a tougher question.

Imagine you can compute something in 4 seconds, 16 seconds, or 64 seconds. Which do you choose? What if I told you that you always get the right answer in 64 seconds, but we need to compute this thing a million times (assuming no parallelism: 46 days, 185 days, or 2 years, respectively, for our 4, 16, or 64 seconds)?

Consider this: Try it all three ways. More precisely, try it each way from time to time, and shift to the fastest one that still gets a good enough answer and shift away from anything that is too simple. That’s the inspiration for adaptive mesh techniques, to which many refinements are possible to make the shifting effective and efficient.
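The “try it each way from time to time” idea can be sketched in a few lines. Everything here is hypothetical: `solve` stands in for three solvers costing 4, 16, and 64 (simulated) seconds with decreasing noise, and the tolerance and recheck interval are made-up knobs, not recommendations.

```python
import random

random.seed(0)  # reproducible sketch

# Hypothetical stand-in for three solvers of increasing cost and accuracy.
# In real code these would be, e.g., coarse, medium, and fine solves.
def solve(step, cost):
    exact = float(step)
    noise = {4: 0.05, 16: 0.01, 64: 0.0}[cost]  # cheaper solvers are noisier
    return exact + random.uniform(-noise, noise), cost

TOLERANCE = 0.02      # "good enough" threshold (made up)
RECHECK_EVERY = 100   # how often to re-audit the cheap answer
costs = [4, 16, 64]   # seconds per solve, cheapest first
current = 0           # start optimistic: cheapest method

total_time = 0
for step in range(1000):
    answer, spent = solve(step, costs[current])
    total_time += spent
    if step % RECHECK_EVERY == 0 and current < len(costs) - 1:
        # Spend extra time on a reference solve to audit the cheap one.
        reference, spent = solve(step, costs[-1])
        total_time += spent
        if abs(answer - reference) > TOLERANCE:
            current += 1          # too simple: shift to a costlier method
        elif current > 0:
            current -= 1          # headroom: try shifting back down

print(f"simulated compute time: {total_time} s (always-64s would be {64 * 1000} s)")
```

The occasional reference solve is the price of knowing when the cheap answer has become too simple; as long as those audits are rare, the total stays far below running the expensive method every time.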

Speed up through the boring, slow down to smell the roses

Using adaptive means investing when it matters, and running faster when we can. Instead of always doing it the fast or the slow way, we come up with ways to adapt to what is sufficient.

Imagine we’re analyzing a car crashing into a wall. Most of the action is near the wall and the front of the car. In my simple diagram below, I’ve divided up the elements (bodies; see my prior N-body post) into regions.

[Diagram: a car crashing into a wall, divided into 15 regions (columns A–E, rows 1–3)]

In this simple world, you can imagine that the simulations in regions B2-C2-B3-C3 are very important and detail matters. Likewise, the simulations in regions A1-A2-A3 probably matter very little. If we think “adaptive,” we realize that we don’t need to apply the same exact simulation intensity or techniques in all 15 regions.

The idea of an adaptive technique is to figure out when in a simulation to apply intensive computation (slow down), and when to relax and take it easy (go fast). Amazingly, this allows simulations to run faster and get better answers. That’s because better answers will come from focusing computation where it matters, and faster comes because the adaptive program can reduce computations where it doesn’t matter.

The exact algorithms for fast and slow regions may vary, and more processing cores may be committed to slow regions as well. The boost in overall performance comes when the reduction is more than the addition, which turns out to be very common. The quality of the answer is greatly improved when the critical regions are computed in far more detail than we could possibly do across the whole simulation. I’ve seen simulations virtually ignore spaces filled with nothingness, but focus down to the molecular level for very small regions of high interest. The molecular-level simulations are far too detailed to apply to a large region.
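To make “slow down where it matters” concrete, here is a minimal, hypothetical 1-D refinement sketch: cells whose local-curvature estimate exceeds a tolerance get subdivided, and quiet cells stay coarse. The sample function, tolerance, and indicator are all stand-ins for a real error estimator, chosen so the sharp feature near x = 0.5 plays the role of the crash zone.

```python
def f(x):
    # A function with a sharp feature near x = 0.5 -- the "crash zone".
    return 1.0 / (0.001 + (x - 0.5) ** 2)

def curvature(left, right):
    # Second difference across the cell: a crude refinement indicator.
    mid = 0.5 * (left + right)
    h = 0.5 * (right - left)
    return abs(f(left) - 2.0 * f(mid) + f(right)) / (h * h)

def refine(cells, tol, max_level):
    for _level in range(max_level):
        next_cells = []
        for (a, b) in cells:
            if curvature(a, b) > tol:
                m = 0.5 * (a + b)
                next_cells += [(a, m), (m, b)]   # slow down: add detail
            else:
                next_cells.append((a, b))        # speed up: leave coarse
        cells = next_cells
    return cells

coarse = [(i / 10, (i + 1) / 10) for i in range(10)]   # 10 uniform cells
adapted = refine(coarse, tol=5000.0, max_level=5)
uniform_equivalent = 10 * 2 ** 5   # cells a uniform grid at max depth needs

print(len(adapted), "adaptive cells vs", uniform_equivalent, "uniform cells")
```

The payoff is exactly the trade described above: the cells near the feature reach the finest level, while the boring stretches keep the coarse resolution, so the total cell count stays well below a uniformly fine grid.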

I love this general concept: write a program to adapt to the needs of the algorithm, and essentially speed up or slow down to match the needs. As simple as possible, but not too simple.

Adapting to deadlines; adaptive rendering for games

I’ve been dwelling on adapting to get the right (or good enough) answer. But as long as we’re discussing adaptive, we must consider adapting to meet deadlines. We could also think of it as adapting to the hardware we have available. One such use is to render an image as fast as we can, but always fast enough to support 90 fps. Games do this a lot. They adjust the detail in view based on the ability of a computer to render the image. This is why gamers will buy the fastest hardware available (games adapt and take advantage of it). This was a revolutionary concept 20 years ago. Before then, games ran fast or slow on different computers but always tried to render the same graphics.

Now, a couple of decades later, we take this adaptive technique for granted and it’s used routinely. Instead of writing for the slowest computer that your software package lists as “required,” we can write software that determines, as it runs, how much detail can be computed on the fly. Some games employ backup methods (sometimes referred to as safety nets) when a detailed update cannot complete on time. Various methods tagged as “reprojection” are hot topics in game development these days as safety nets. Our applications can adapt as better hardware becomes available, possibly with safety nets as part of how they adapt.
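The frame-budget feedback loop can be sketched simply: measure each frame, shed detail when the 90 fps budget is missed, add detail back when there is headroom. This is a hypothetical sketch, not any engine’s actual controller; `render` and its per-level cost are stand-ins (a `time.sleep` pretends to be draw work so the example runs anywhere).

```python
import time

FRAME_BUDGET = 1.0 / 90.0   # 90 fps target, about 11.1 ms per frame

def render(detail):
    # Stand-in for real draw work: pretend each detail step costs 2 ms.
    time.sleep(detail * 0.002)

detail = 8                   # start ambitious
frame_times = []
for frame in range(30):
    start = time.perf_counter()
    render(detail)
    elapsed = time.perf_counter() - start
    frame_times.append(elapsed)
    if elapsed > FRAME_BUDGET and detail > 1:
        detail -= 1          # missed the budget: shed detail
    elif elapsed < 0.8 * FRAME_BUDGET:
        detail += 1          # comfortable headroom: add detail back

print("settled detail level:", detail)
```

On faster (simulated) hardware the same loop would settle at a higher detail level on its own, which is the whole point: the program adapts rather than targeting one fixed machine.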

Adaptive algorithms last longer

In my toolbox of techniques, I rate adaptive highly. When I use adaptive techniques, I can get the quality I want (simulate a car crash perfectly with adaptive meshes) or the deadline that I want (keep frame rate high in my game with adaptive rendering). Either way, we get better results that continue to adapt as new hardware arrives.

If I’ve intrigued you enough to want to study this topic some more, here are some resources to provoke more thinking:

  • Adaptive mesh refinement – Wikipedia



James Reinders is a software programmer with a passion for Parallel Programming and Parallel Computer Architecture. He has contributed to the development of some of the world’s fastest computers, and the software tools that make that performance accessible for programmers. James has shared this passion in classes, webinars, articles and has authored eight books for software developers. James enjoyed 10,001 days working at Intel, and now continues to share his passion to help others “Think Parallel.”
