Serdar Yegulalp
Senior Writer

6 signs containers will gain ground in 2016

news analysis
Dec 31, 2015

There's little question containers will continue to reshape IT in the coming year; here are several insights into what might happen


With nearly every major IT product either adopting them outright or building in support for them, containers are guaranteed to continue changing IT through the coming year.

Here are six key ways that containers, and the ecosystem around them, will evolve and influence IT through 2016.

We’ll see more experimentation

The experiments in question don’t involve only using containers in places where they’ve never appeared before; they also involve finding ways to further transform container technology itself.

Projects like CoreOS, RancherOS, and the various cloud-based container services are all examples of positive and productive experiments with containers. Folks like VMware, whose legacy VM model has been put on notice by containers, are also looking into how their product lines can be augmented by containers.

The next stage of experiments may address how containers can work with, or perhaps be partly eclipsed by, technologies like unikernels. They promise even higher levels of isolation and efficiency than containers, but at the cost of custom compilation for each application.

Containers will reshape Windows — yes, Windows — from the inside out

The fact that Windows is being reshaped by containers says as much about the new regime at Microsoft as it does about the transformative power of container technology.

We’ve already had our first looks at what it’s like to use container technology in Windows Server, both with Docker and with Microsoft’s Hyper-V Containers. The technology goes public in the second half of next year, along with the container-focused Nano Server. The possibilities unlocked by a containerized, highly modular version of Windows Server have barely begun to be explored, but it’s clear Microsoft wants them to be fully unleashed in both local and remote data centers with its hybrid Azure Service Fabric.

Container support will become standard

Not everyone doing containers will matter, though. Support for containers in many software technologies will become a necessity, not a luxury or an option. By that token, container support alone won’t mean a company is doing anything truly creative with it. The bar for what constitutes creative work with containers has already been set fairly high, and it will only continue to rise.

The new open communities for containers will be put to the test

A major development for containers in 2015 was the creation of a number of consortia to drive the development and use of containers: the Open Container Initiative and the Cloud Native Computing Foundation.

Luke Marsden, co-founder and CTO of ClusterHQ (makers of Flocker and a member of both consortia), feels this will allow developers to lead the push forward for containers: “As a community driven by technical merit — not marketing budgets — container projects that get real developer traction will be quickly adopted as the standard in the fast moving container ecosystem.”

The hard part will be seeing whether these groups do more than merely radiate good intentions. They’ll need to prove that a technology as powerful and potentially diverse as containers can in fact be steered forward collectively, rather than become an arena for a proxy war over control of the future of enterprise IT.

We’ll get more and better tooling, and we’ll need it

The novelty phase for containers ended a while ago, and we’re now in the realm of building advanced tools for introspection, monitoring, debugging, and deploying and managing orchestrated apps.
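Docker’s own CLI already hints at what this tooling looks like. A minimal sketch of the introspection, monitoring, and debugging commands in question, assuming a running Docker daemon (the container name and image here are hypothetical examples):

```shell
# Start a hypothetical web app container to work against
docker run -d --name webapp nginx

# Introspection: dump the container's full configuration and state as JSON
docker inspect webapp

# Monitoring: live CPU, memory, and network usage for the container
docker stats webapp

# Debugging: tail recent stdout/stderr, then open a shell inside the container
docker logs --tail 50 webapp
docker exec -it webapp /bin/sh
```

The open question the section raises is visible even here: third-party monitoring and debugging tools must decide whether to wrap these built-in commands or replace them outright.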

This work will become more critical and demanding as additional infrastructure is moved into containers and distributed via microservices. Much of that work is being done by third parties, which raises a major issue that likely won’t have an easy resolution: How much should be done by third parties, and how much should be core technology?

We’ll still be debating what containers should and shouldn’t try to do

For the best example of this debate, look no further than Docker, the de facto torchbearer for container technology as we have come to know it.

From the start, one debate has raged over how much of what is done with Docker belongs in Docker as a native component, and how much of it should be done by third parties. Docker’s “batteries included, but optional” approach was devised to defuse this controversy: any natively included feature is built on open APIs, so anyone else can replace the functionality in question.
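The volume subsystem shows how that “batteries included, but optional” idea works in practice. As of Docker 1.9, volumes are created through a driver API, so a third-party plugin such as ClusterHQ’s Flocker can stand in for the built-in local driver. A hedged sketch, assuming the Flocker plugin is installed on the host (volume, container, and image names are hypothetical):

```shell
# Default battery: Docker's built-in local volume driver
docker volume create --name=appdata

# Swapped battery: the same volume API, served by the Flocker plugin instead
docker volume create --driver=flocker --name=appdata-ha

# The consuming container doesn't care which driver backs the volume
docker run -d --name db -v appdata-ha:/var/lib/data some-database-image
```

Because the consuming container is indifferent to the driver, the “battery” can be swapped without touching the application, which is exactly the replaceability the open-API approach is meant to guarantee.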

But as containers are pressed into service in a wider variety of scenarios, the functionality included under the hood may also mushroom — and fans of the minimalist approach to container technology aren’t fond of the idea. What was originally meant to be a lightweight and fleet-footed alternative to the bloat and clunk of VMs could become a sprawl itself. Here’s hoping the container community won’t let that happen.
