3 Ways Hyperconvergence Can Harm or Help Your Network Performance

brandpost
May 4, 2017 | 4 mins

By Bharath Vasudevan, Director of Product Management, Hewlett Packard Enterprise Software-defined and Cloud Group.

Performance is only as strong as IT’s weakest link. In the traditional IT stack, that was hard drives. Now, thanks to the increased performance provided by flash and the simplicity of hyperconvergence, the weakest link is the network.

Hyperconverged architectures can magnify the strain on networks, especially in multi-site environments. Some hyperconverged vendors made significant improvements elsewhere in the IT stack but left the network untouched, because it had not been the performance bottleneck before. The good news is that not all vendors approached the network the same way, so IT teams need to vet their options to find the networking masters in the space. When choosing a vendor, keep in mind the three ways hyperconverged infrastructure can help or harm network performance.

1. Data and compute locality

Both where and how data is stored matter. In a hyperconverged environment, VM data exists throughout a cluster to protect the data and keep it resilient should hardware fail. Vendors typically use one of two approaches. The first is like spreading peanut butter on toast: vendors widely distribute small chunks of data across the environment. The second, known as full data localization, keeps complete data sets together, stored on multiple systems for redundancy.

Data locality is an architectural challenge affecting network performance that is unique to hyperconverged infrastructure. The peanut butter method works well as long as only small portions of data are being accessed from each location; CPU and memory resources stay balanced, and the network sees little strain. The problem is that a user can access only a piece of the full data set at a time. Reassembling the data set means fetching each piece from across the network, which puts users right back where they started: congested network traffic. Full data localization lets users access the full data set efficiently without clogging multiple network channels.
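The difference between the two placement strategies can be sketched with a toy model. This is only an illustration under assumed parameters (cluster size, chunk count, and placement policy are hypothetical, not taken from any vendor's design): it counts how many chunk reads must cross the network when a VM reassembles its full data set.

```python
import random

NODES = 8     # hypothetical cluster size
CHUNKS = 64   # chunks making up one VM's data set
VM_NODE = 0   # node where the VM's compute runs

# "Peanut butter" placement: chunks scattered across the cluster.
scattered = [random.randrange(NODES) for _ in range(CHUNKS)]

# Full data localization: the complete data set lives with the VM.
# (Replica copies on other nodes exist for resilience, not for reads.)
localized = [VM_NODE] * CHUNKS

def remote_reads(placement, vm_node=VM_NODE):
    """Count chunk reads that must cross the network to reach the VM."""
    return sum(1 for node in placement if node != vm_node)

print("scattered:", remote_reads(scattered), "of", CHUNKS, "reads hit the network")
print("localized:", remote_reads(localized), "of", CHUNKS, "reads hit the network")
```

With scattered placement, roughly seven of every eight reads in this toy cluster land on a remote node; with full localization, none do, which is the congestion argument the paragraph above makes.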

2. The direction of data

Both approaches above increase resiliency, but they have very different implications for network traffic. Data moves in two directions: east-west traffic – data moving laterally across the network between systems – and north-south traffic – data moving between the core of the data center and its edges.

When data is spread across the network, as in the peanut butter approach, east-west traffic increases – and most networks are not designed to handle heavy east-west traffic. Conversely, the full data localization approach keeps a complete copy of the VM's data on a single system, which reduces east-west traffic and eliminates the network bottleneck. Because the data is always local (in a sense), it travels the network less and puts less strain on it.

3. Backup and recovery

Everyone needs a backup strategy. Numerous specialized solutions are available, but any IT admin can tell you about the effect backups have on network performance. That is why most companies run backups when the fewest people are on the network, and why performance seems to plummet during a recovery.

To combat this problem, hyperconverged vendors leverage additional compute resources or scaling capabilities to open more network “highways” for data moving site to site, so traffic is more evenly distributed. A handful of vendors integrate backup functions into the product itself, allowing backup operations to begin the moment production data is first written and to continue through the data’s entire lifecycle, lowering bandwidth and storage requirements. Deduplication and compression are also included, so a new backup sends only the data that absolutely must travel to the backup location, dramatically lowering WAN usage and allowing more sites to be connected over the same bandwidth.
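The deduplication idea above can be sketched in a few lines. This is a generic content-hashing sketch, not any vendor's actual implementation: the backup site keeps an index of block hashes it already holds, and a new backup ships only blocks whose content is missing from that index.

```python
import hashlib

def dedup_transfer(blocks, remote_index):
    """Return only the blocks the backup site lacks.

    blocks: list of bytes (the new backup's data blocks)
    remote_index: set of content hashes already stored remotely
    """
    to_send = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in remote_index:
            to_send.append(block)
            remote_index.add(digest)  # the remote site now has this block
    return to_send

# A nightly backup where most blocks are unchanged since last night:
yesterday = [b"blockA", b"blockB", b"blockC"]
remote = {hashlib.sha256(b).hexdigest() for b in yesterday}

today = [b"blockA", b"blockB", b"blockC-modified"]
sent = dedup_transfer(today, remote)
print(f"sent {len(sent)} of {len(today)} blocks over the WAN")  # prints "sent 1 of 3 blocks over the WAN"
```

Only the changed block crosses the WAN, which is why deduplicated backups let more sites share the same bandwidth.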

All vendor solutions have pros and cons, and it is up to each IT team to determine what its network requires to run at peak performance while still supporting every other function of the data center. The HPE SimpliVity 380, for instance, provides data localization, north-south traffic patterns, and efficient backup functions – all factors that help network performance. Certain solutions are clearly superior at maintaining data integrity, and IT should keep an open mind when considering innovative new approaches such as hyperconverged.

For more information on how hyperconverged solutions can help your IT, download the free e-book.