Contributor

Serverless computing: Do we need to rethink the serverless framework?

opinion
Oct 26, 2016

With serverless computing, services will be constructed as components, microservices, linked together by dynamic policy intelligence


Serverless computing is one of today’s hottest technology topics. Now that Amazon has announced AWS Lambda and Microsoft is previewing Azure Functions, the concept is becoming real.

Serverless is billed as a solution that dynamically creates cloud services to process events in ephemeral containers executed on your behalf as a backend-as-a-service. Instead of leasing a virtual machine and then writing and deploying your code, you get a new “pay-per-event” pricing model while leveraging a catalogue of executable functions (building blocks) to construct your own service. It is a DIY cloud deployment model that promises to let clouds be used the way we have become accustomed to using mobile applications on our smartphones: simply access the app (“function”) you need at any moment.
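To make the pay-per-event model concrete, here is a minimal sketch of such a function in the AWS Lambda style. The `(event, context)` handler signature follows Lambda's Python convention; the event fields themselves are illustrative assumptions, not a real schema.

```python
# A hypothetical pay-per-event function. The platform, not the developer,
# provisions an ephemeral container, invokes the handler once per event,
# bills for that single invocation, and tears everything down afterward.

def handler(event, context=None):
    """Process one event and return a result.

    The function holds no server state: everything it needs arrives in
    the event or is fetched from external services at invocation time.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}"}
```

The developer never sees a virtual machine here; the unit of deployment, billing and scaling is the function itself.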


In a serverless framework, developers should think of their services as being decoupled from the virtual machines upon which they execute, and they should only be concerned with the function they need for their service. In a way, this is analogous to how cloud applications are already being decoupled from physical infrastructure via virtualization, except that now we don’t even have to worry about virtual machines!

This is a giant leap in the evolution of cloud services. It suggests that virtual machines and containers are just infrastructure optimizations that too can be allocated and automated. Presumably, in a completely serverless environment, services can be instantiated anywhere in the cloud with full access to whatever data the service requires.

This implies a service architecture in which the storage and network resources are broadly accessible (by replication or remote access), and service address resolution is global, dynamic and instantaneous. This address resolution must map a service to a particular VM or physical server and ensure that the necessary infrastructure resources are available to be accessed at that moment. Today AWS Lambda and Azure Functions can’t quite do this broadly, but they do work as a backend-as-a-service for some very well-defined use cases (e.g., IoT) and some specific enterprise application flows.

Serverless services are infrastructure agnostic

If serverless services are truly our ultimate goal, is it necessary that they be built on top of virtual machines and containers? The answer depends on what you are trying to do. Architecturally, the concept is based on a Service-Oriented Architecture (SOA), so a serverless framework can be directly constructed over physical infrastructure, containers, virtual machines or a combination of these.

Regardless of which environment is used, one thing is universally common: serverless services will be constructed as a set of microservices, linked together dynamically with policy-enforcement intelligence.

Further, each microservice becomes a small service in its own right: it is an autonomous application unit that requires access to a small but specific set of compute, storage and network resources wherever it is instantiated. Just as the microservice can be thought of as a unit of the application, the small slice of compute, storage and network infrastructure it needs can be thought of as an autonomous “microexecution unit.” The pairing of a microservice (a small part of an application) with a microexecution unit (a small part of the infrastructure) becomes a cornerstone of our serverless vision.
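One way to picture this pairing is as a data structure: a microservice bundled with the resource requirements that travel with it. Everything below — the class names, fields and descriptor scheme — is a hypothetical sketch, not an API from any real platform.

```python
# Sketch: a microservice paired with its "microexecution unit" -- the
# bundle of compute, storage and network resources it needs wherever
# it happens to be instantiated. All names here are invented.
from dataclasses import dataclass, field

@dataclass
class MicroExecutionUnit:
    cpu_millicores: int                                 # compute the service needs
    storage_refs: list = field(default_factory=list)    # logical storage descriptors
    network_refs: list = field(default_factory=list)    # logical network descriptors

@dataclass
class Microservice:
    name: str
    handler: callable
    execution_unit: MicroExecutionUnit  # travels with the service when it moves

resize = Microservice(
    name="image-resize",
    handler=lambda event: {"resized": event["image"]},
    execution_unit=MicroExecutionUnit(
        cpu_millicores=250,
        storage_refs=["blob://images"],       # logical, not a physical location
        network_refs=["svc://thumbnailer"],
    ),
)
```

The point of the structure is that the descriptors are logical: moving `resize` to another host changes nothing in the object itself, only how its descriptors are resolved.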

Since a serverless microservice can be executed over bare metal, a container or a virtual machine, a new notion of infrastructure “resource resolution” must be implemented to ensure it does not lose its “soft connection” to the resources it needs wherever it is executed. Under this model, a microservice links to its resources via resource descriptors that are logical abstractions, and resource resolution protocols translate those descriptors into the actual locations of the information it needs, regardless of where the microservice is instantiated.

Fortunately, the notion of a logical resource abstraction already exists. Today, a service can be accessed via a URL, which is a logical service endpoint. Likewise, in Linux, a process uses socket file descriptors to reach the network regardless of the physical network beneath it, and file descriptors to access the files it requires wherever those files are actually located.
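The same idea can be sketched for arbitrary resources: the microservice holds only a logical descriptor, and a resolver maps it to wherever the resource currently lives. The registry, descriptor scheme and locations below are invented for illustration; a real system would use a dynamic protocol (think DNS, but for arbitrary resources) rather than a static table.

```python
# Sketch of "resource resolution": logical descriptors on one side,
# current physical locations on the other. The bindings are hypothetical.
RESOURCE_REGISTRY = {
    "blob://images": "nfs://storage-rack-7/images",
    "svc://thumbnailer": "10.0.4.12:8080",
}

def resolve(descriptor: str) -> str:
    """Map a logical resource descriptor to its current physical location.

    The microservice's code never changes when the resource moves;
    only the registry binding does.
    """
    try:
        return RESOURCE_REGISTRY[descriptor]
    except KeyError:
        raise LookupError(f"no binding for {descriptor!r}")
```

Relocating the image store to another rack would be a one-line registry update, invisible to every microservice that refers to `blob://images`.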

Microexecution units

What this all means is that microservices do not act alone. Each microservice needs to be associated with a number of logical resource descriptors, which move with the microservice, while the resource resolution protocols behind those descriptors transparently find where the resources are actually located. This is what I mean when I call the set of resource descriptors associated with a microservice a “microexecution unit.”

One may ask: if a microservice controls only logical resource descriptors, whose responsibility is it to ensure that the physical resources resolved by the resolution protocols are policy enforceable for traffic shaping, security, access control and authentication?

This is a very important question. In today’s VM- and container-dominated world, you might assume this job belongs to either the server operating system or the hypervisor. But neither the OS nor the hypervisor can really control or enforce policies between microservices, or from a microservice’s logical resource descriptors into the infrastructure.

Fortunately, new initiatives such as the Contiv open-source framework help here. Contiv rightly argues that while containers have done a good job of providing a framework to specify “application intents” relative to what the OS should expect, they have fallen short of the ability to specify “infrastructure intents” that are policy enforceable.

What does all of this mean?

While serverless computing and its associated framework are here to stay, today’s serverless services are derived from a cloud infrastructure based on virtual machines, which do not provide an appropriate foundation for execution. A broader framework is needed to expand the serverless movement to cover all services of the future. I think a framework based on the microexecution unit is worth our deep consideration.

Cheng Wu is a successful serial entrepreneur and a well-respected technologist, having founded and led visionary technology companies in a range of industries.

Cheng most recently founded Acetti Software (formerly APL Software), a company dedicated to advocating new computing architectures to optimize application performance. Before founding Acetti Software, he founded Azuki Systems, developer of an adaptive video streaming and digital rights management platform, which was acquired by Ericsson in 2014. Prior to that, he founded and was executive chairman at Acopia, a leader in high-performance intelligent file virtualization solutions that was acquired by F5 Networks in 2007.

Cheng was group vice president and general manager of Cisco Systems' Content and Multiservice Edge Group, following Cisco's acquisition of Arrowpoint Communications, where he was founder and CEO. Arrowpoint was one of the first web content delivery systems and completed a successful IPO prior to being acquired by Cisco for $5.7 billion.

The first company Cheng founded and led was Arris Networks, a developer of high-density Internet access solutions, which was acquired by Cascade Communications.

The opinions expressed in this blog are those of Cheng Wu and do not necessarily represent those of IDG Communications Inc. or its parent, subsidiary or affiliated companies.
