Contributor

Be careful what you call ‘fog computing’

opinion
Aug 16, 2018 | 4 mins

What to look for in a true fog computing architecture


Fog computing is picking up steam as a buzzword in the tech world, often compared to cloud or confused with edge, both of which have geography built in: either the computer is at the edge, or it is in the cloud. The easiest way to understand what makes fog unique is that it is location agnostic. The computers in a fog infrastructure can be anywhere: at the edge, in the cloud, or anywhere in between.

In fog, you program against what a service does, not where it is. So the same service that was deployed to cloud today can be deployed at the edge tomorrow. Think of it as a framework that supports a vast ecosystem of resources. It enables the flexible consumption of computing resources that span a continuum from on-premises, to nearby, to cloud—with each used for the benefits it may provide like speed, availability, bandwidth, scalability, and cost.  
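To make "what a service does, not where it is" concrete, here is a minimal sketch (all names are hypothetical, not from any real fog platform): the service is described only by its requirements, and the location is a deployment-time decision that can change from day to day without touching the spec.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ServiceSpec:
    """What the service needs -- deliberately says nothing about where it runs."""
    name: str
    max_latency_ms: float
    min_replicas: int

def deploy(spec: ServiceSpec, target: str) -> str:
    # Placeholder for a real orchestrator call; the spec is identical
    # regardless of which resource in the continuum it lands on.
    return f"{spec.name} -> {target}"

video_cache = ServiceSpec("video-cache", max_latency_ms=20, min_replicas=2)
print(deploy(video_cache, "edge-node-7"))    # today: at the edge
print(deploy(video_cache, "cloud-us-east"))  # tomorrow: in the cloud
```

The point of the sketch is that nothing in `ServiceSpec` changes between the two deployments; only the platform's placement decision does.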

Fog enables us to look differently at the spare computing power that surrounds us in our daily lives and opens up opportunities to put all computing power to use, regardless of location. As fog’s star continues to rise, people are using the term fog computing to market a variety of products, so if you are truly interested in the benefits it can provide, make sure it meets these two main criteria:

1. Provides a spectrum of computing power spanning a continuum from onsite to cloud

In current cloud-centric computing infrastructures, much of the processing power used is located in the far cloud. But with the number of connected devices skyrocketing, projected to reach 20 billion in the next two years, the quantity of data traveling that distance is increasing dramatically.

As a result, there has been a surge in demand for processing power located closer to the devices that need it, achieved through edge computing. Edge typically involves installing servers, often called “edge nodes,” closer to the source of demand for processing power, providing important benefits like reduced latency and bandwidth strain.

Because fog computing can leverage compute everywhere, including the machines that are most geographically appropriate, it can also provide low-latency compute and is therefore often sought out for the same reasons as edge computing. As a result, the terms "edge" and "fog" are often used synonymously, despite edge computing being just one aspect of the more comprehensive fog computing infrastructure.

While edge computing is an effective way to reduce latency and bandwidth strain for high-traffic tech like IoT, the services running at any point in a given business or home have varying needs for performance, scalability, uptime and cost that a single “edge node” cannot address.

An effective fog computing infrastructure should be geographically diverse enough to enable edge-appropriate computing to be done at the edge, cloud-appropriate computing to be done in the cloud, and ideally a spectrum of resources in between for flexibility and resiliency.

Unless the hardware involved is just one component of a much broader spectrum of resources, it is not fog.

2. Dynamically uses optimal computing resources on demand

Fog computing not only encompasses a greater geography than cloud or edge, but that geography can be dynamic. The computer processing data can be anywhere, and its location can change regularly (whether for scaling, optimizing location to better serve demand, or recovering from failures). This is achieved through the use of location agnostic services.

For engineers deploying a software service, this means they specify what a service needs when deploying to a fog architecture, instead of where it will run. If low latency is the requirement, for example, a service will automatically be deployed to the best available match, whether that's a server in the same room, a regional datacenter, or, if nothing faster is available, perhaps a cloud datacenter.

The ability to broadly specify for business requirements through fog computing has the potential to make life far simpler for engineers by relieving the burden of provisioning, scaling, and maintaining fixed computing resources. Through some fog computing platforms, engineers simply need to prioritize features like low latency, low cost, or green energy for each of their services, and the platform will automatically deploy services to the computers that best meet those criteria on demand.
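A toy sketch of that "prioritize a feature, let the platform pick" idea might look like the following. The node inventory, its properties, and the priority names are all illustrative assumptions, not the API of any actual fog platform; a real system would also rank on availability, current load, and more.

```python
# Each node advertises its current properties; the engineer states a
# priority, and the platform selects the best match on demand.
NODES = [
    {"name": "onsite-rack",  "latency_ms": 2,  "cost_per_hr": 0.40, "green": False},
    {"name": "metro-dc",     "latency_ms": 15, "cost_per_hr": 0.12, "green": True},
    {"name": "cloud-region", "latency_ms": 80, "cost_per_hr": 0.05, "green": True},
]

def pick_node(nodes, priority):
    """Return the node best matching a single declared priority."""
    if priority == "low_latency":
        return min(nodes, key=lambda n: n["latency_ms"])
    if priority == "low_cost":
        return min(nodes, key=lambda n: n["cost_per_hr"])
    if priority == "green_energy":
        # Prefer green-powered nodes; among those, take the cheapest.
        green = [n for n in nodes if n["green"]]
        return min(green or nodes, key=lambda n: n["cost_per_hr"])
    raise ValueError(f"unknown priority: {priority}")

print(pick_node(NODES, "low_latency")["name"])  # onsite-rack
print(pick_node(NODES, "low_cost")["name"])     # cloud-region
```

Because the selection runs against whatever nodes are currently available, the same service could land onsite this morning and in the cloud tonight, which is the dynamic geography the section describes.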

In summary, an effective fog computing architecture should provide a geographically diverse set of computing resources, and a platform through which you can easily use the most optimal set of those resources at any given time, according to your unique (and changing) business needs.

If you decide that fog computing is the solution you’ve been looking for, do your homework. The number of “fog”-labeled offerings will surely increase as its benefits become even more widely known, but before beginning to adopt fog computing infrastructure yourself, make sure it’s the real deal.


Allan is the COO at AetherWorks and co-founder of AetherStore and ActiveAether.

A seasoned entrepreneur with a background in Computer Science, Allan bootstrapped his first IT company right out of University, and has since built and managed teams of all sizes and in many verticals, from Finance to Hospitality. As the Chief Operating Officer at AetherWorks, Allan puts his broad experience to work spearheading productization and go-to-market strategy for AetherWorks’ IP.

Allan joined AetherWorks in 2011 following his ascent through the ranks at Kelvin Connect in Scotland. During his time at Kelvin, he saw the company through its acquisition by Airwave (now Motorola), successfully managing the transition and executing the company’s post-merger strategy. Prior to Kelvin Connect, Allan worked as a Systems Analyst at Aquila Heywood, the UK’s largest provider of life, pension and investment platforms, where he got his hands dirty with existing system architectures, enterprise design and full lifecycle quality assurance.

Allan graduated from the University of St Andrews with Honors in Computer Science. While completing his degree, he was awarded “Best Growth Potential” and “Best Overall Business” from the Scottish Institute for Enterprise for his first company, IT Onsite.

Allan is an accomplished musician, and is a supporter of children’s music education through the Fife Horn Union, a charitable organization based in Scotland, and through an annual scholarship he sponsors at the Ingenium Music Academy in the UK.

The opinions expressed in this blog are those of Allan Boyd and do not necessarily represent those of IDG Communications, Inc., its parent, subsidiary or affiliated companies.
