Serdar Yegulalp
Senior Writer

Google’s cloud benchmarking tool ups its game

news analysis
Dec 10, 2015

PerfKit hits 1.0, offering new tests and a stable programming framework for running benchmarks on most every cloud out there

Google’s PerfKit toolset for benchmarking cloud environments was originally released earlier this year in a pre-1.0 version. Today, it’s officially been bumped to a 1.0 release, with expanded support for various cloud providers and automation of 26 different benchmarks, up from the 20 originally provided.

Given how tough it can be to reliably benchmark any cloud, having an open source, cloud-agnostic toolkit to help make it happen is a net boon.

Google devised PerfKit as a way to benchmark a variety of cloud resources. It doesn’t just clock network speed or CPU, but the performance of real-world applications that are often part of cloud deployments. As such, MongoDB, Cassandra, and Hadoop were included in the original PerfKit package.

PerfKit emphasizes programmability and extensibility, since it controls every phase of the testing — config, provisioning of resources, execution, teardown, and publishing of the results — with Python scripts. The tester creates YAML files that describe how the tests are to be performed, with abstractions for needed resources like disk space, networking, firewalls, and VMs.
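As a rough illustration of that approach, a benchmark spec of this kind might look something like the following. The field names and values here are illustrative only, a sketch of the general shape rather than PerfKit's exact schema:

```yaml
# Hypothetical PerfKit-style benchmark spec (keys are illustrative)
benchmarks:
  - iperf:                      # a network throughput test
      vm_groups:
        vm_1:
          cloud: GCP            # which provider to provision on
          vm_spec:
            GCP:
              machine_type: n1-standard-2
              zone: us-central1-a
```

A runner script (pkb.py, in PerfKit's case) reads a spec like this, provisions the described resources, runs the benchmark, tears everything down, and publishes the results.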

The target environment can be a standalone system, a VM in a private cloud, or a VM on one of nine popular cloud providers: AliCloud, Amazon Web Services, CloudStack, DigitalOcean, Google Cloud Platform, Kubernetes, Microsoft Azure, OpenStack, and Rackspace.

The 1.0 label is only now being applied because Google needed to find “the right abstractions making it easy to extend and maintain,” and “the right balance between variance and runtime,” according to Google’s blog post.

A few new benchmarks have also been added to the mix, namely EPFL EcoCloud Web Search and Web Serving. The former sets up an instance of the Nutch search engine (based on Lucene) and tests the system in question against simulated client traffic; the latter configures the Nginx Web server and benchmarks traffic to a synthetic Web application.

Another addition that came along the road to 1.0 is the ability to run benchmarks inside a Docker container. Its current implementation is somewhat limited, though: the only supported container image is Ubuntu, although that image can be hosted on most any VM.
