paul_venezia
Senior Contributing Editor

Inside Nebula’s new ‘turn-key’ private cloud

analysis
Apr 2, 2013 | 5 mins

Leveraging OpenStack and Amazon APIs, Nebula One promises fast deployment and superior ease-of-use

As amorphous as the concept of “the cloud” continues to be, we know it means instant gratification. It’s a foregone conclusion that you can head over to Amazon Web Services and spin up some servers in about as much time as it takes to order a book. A new private cloud solution from OpenStack startup Nebula, called Nebula One, is looking to bring that Amazon Web Services experience to your own data center.

The goal of Nebula is to make it extremely simple to build and manage your own private cloud, one that’s even easier to use than a public cloud. As Nebula co-founder and CEO Chris Kemp said while demonstrating the solution to InfoWorld, the goal was to deliver “the simplest and most elegant user experience that has ever been created for cloud.” Before founding Nebula, Kemp served as CTO for NASA Ames Research Center, where he led the creation of the cloud compute service that became OpenStack Compute. 

Without spending time with Nebula One in the lab, I can’t say how simple the solution really is, but everything about the design indicates that Nebula is traveling both the high road and the low road. The company is looking to foolproof the construction of a private cloud while still allowing heavy-duty back-end access via an assortment of standard APIs to facilitate the smooth functioning of cloud-aware applications.

Note the focus on cloud-aware applications. Nebula is not aiming to provide an enterprise-class virtualization platform, but a true cloud computing solution.

Private cloud in a box

Much as cloud computing in general is coloring outside the lines of the traditional computing environment, Nebula’s architecture is beyond the norm. Nebula dispenses with the concept of servers, storage, and network as separate entities, instead encapsulating all of them into a centrally managed cluster. At the core of this cluster is the Nebula Cloud Controller, a 2U server that provides all of the smarts necessary to control up to 20 cluster nodes. It also includes 48 10G SFP+ ports built right into the controller chassis. Thus, no external switching is required.

To build a Nebula One cloud, you rack up a controller and connect between five and 20 industry-standard servers with dual 10G links to each, then uplink the controller to your network using one or more of the remaining eight 10G ports. You can also connect the IPMI or Lights-Out management ports on each server to an external switch and the controller to allow the controller to manage the server’s power state and so forth.

At launch, Nebula will support a single Cloud Controller and up to 20 nodes, though the solution can scale out to a total of five Cloud Controllers and 100 nodes, all residing within the same logical cluster.

The cluster nodes themselves have identical resource configurations: the same CPU, RAM, and disk. Nebula has preferred configurations of servers from Dell, HP, and IBM that are best suited to the solution. Although it may be possible to use other server types, they may require more work to build and deploy.

Rather than segment compute and storage into separate nodes or arrays, Nebula uses local disk on each node as object storage. Thus, the more disk you have in each node, the more storage the overall solution can address. In keeping with cloud storage models, each server runs the local disk as a JBOD, with the controller managing object storage and replication, storing three copies of each object on various nodes throughout the cluster.
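Nebula hasn’t published its placement algorithm, but OpenStack-style object stores typically pick replica nodes deterministically by hashing the object name against the node list. As a toy illustration only (not Nebula’s actual implementation), a rendezvous-style sketch of three-way placement across a 20-node cluster might look like this:

```python
import hashlib

def replica_nodes(object_name, nodes, replicas=3):
    """Choose `replicas` distinct nodes for an object by scoring each
    node with a hash of (object name + node name) and taking the top
    scorers. Illustrative only; not Nebula's actual algorithm."""
    scored = sorted(
        nodes,
        key=lambda n: hashlib.md5((object_name + n).encode()).hexdigest(),
        reverse=True,
    )
    return scored[:replicas]

# A hypothetical 20-node cluster, the launch maximum per controller.
nodes = ["node%02d" % i for i in range(1, 21)]
placement = replica_nodes("vm-image-ubuntu-12.04", nodes)
print(placement)  # three distinct nodes, same answer every run
```

Because the scoring is deterministic, any client can recompute where an object’s copies live without consulting a central lookup table.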

A fully built Nebula cloud cluster looks like a bunch of 2U servers packed with local disk, RAM, and CPU, all plugged directly into the Nebula Cloud Controller, which is uplinked to the LAN. When the nodes boot, they are fed an OS called Nebula Cosmos via PXE, then begin communicating with the controller for direction.

Nebula vs. VMware

Once built and configured, the Nebula One cloud is designed to provide self-service resource allocation, allowing users to create accounts that are then approved by administrators. With a valid account, users can begin defining and deploying Linux server instances immediately, while working within a defined quota of RAM, CPU, and storage.

Bringing up a new instance based on prebuilt images of most popular Linux distributions is a matter of a few mouse clicks, and persistent storage can easily be assigned to any of those instances. Further, users can create and manage security groups that allow interinstance communication at Layer-4 granularity. Thus, it’s possible to deploy a handful of instances and assign them to a security group that allows them to communicate among themselves via SSH and HTTP, as well as potentially assign them another IP address that permits access from outside the cloud infrastructure itself. Network load-balancing services are also included.
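Because the back end speaks the OpenStack API, the same security groups can be driven programmatically. As a sketch, assuming a hypothetical group ID and tenant subnet, the JSON body a client would POST to the 2013-era OpenStack Compute v2 `os-security-group-rules` endpoint to open SSH and HTTP looks like this:

```python
import json

def security_group_rule(group_id, protocol, port, cidr):
    """Build the request body for an OpenStack Compute v2
    os-security-group-rules POST: one rule opening a single
    port for the given protocol and source CIDR."""
    return {
        "security_group_rule": {
            "parent_group_id": group_id,
            "ip_protocol": protocol,
            "from_port": port,
            "to_port": port,
            "cidr": cidr,
        }
    }

# "web-tier" and 10.0.0.0/24 are hypothetical placeholders.
rules = [
    security_group_rule("web-tier", "tcp", 22, "10.0.0.0/24"),  # SSH
    security_group_rule("web-tier", "tcp", 80, "10.0.0.0/24"),  # HTTP
]
print(json.dumps(rules, indent=2))
```

The GUI’s point-and-click workflow presumably issues requests of this shape under the hood.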

These instances are based on KVM and thus can be deployed in seconds, but they are not persistent virtual machines.

This highlights an important fact: Nebula is not a VMware replacement. Nebula is not designed to provide or ensure compute instance persistence. It’s designed to be used with cloud-aware applications and frameworks that can tie into its Amazon- and OpenStack-compatible APIs and request resources as needed. There is no concept of restarting instances if a node fails, or of migrating instances from one node to another.
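Requesting resources as needed is, in OpenStack terms, a server-create call. As a minimal sketch (the image and flavor IDs below are hypothetical; a real client would first fetch them from the images and flavors APIs), the body a cloud-aware app would POST to `/v2/{tenant}/servers` on an OpenStack-compatible endpoint is:

```python
import json

def boot_request(name, image_ref, flavor_ref):
    """Build the OpenStack Compute v2 server-create request body
    that asks the cloud for one new instance of the given image
    and flavor."""
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
        }
    }

# Placeholder IDs for illustration only.
req = boot_request("worker-01", "ubuntu-12.04-image-id", "m1.small-flavor-id")
print(json.dumps(req))
```

An application that manages its own availability simply re-issues a request like this when it detects a lost instance, rather than expecting the infrastructure to resurrect the old one.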

“Applications that require infrastructure to be reliable should be run on VMware,” Kemp said. “If the app is able to manage its own availability by making direct calls to infrastructure, then you can run it on Amazon or Nebula.”

This story, “Inside Nebula’s new ‘turn-key’ private cloud,” was originally published at InfoWorld.com.