Virtual WAN Optimisation

WAN optimisation has been near the top of the project whiteboard for many data center managers. WAN connectivity is expensive, and by optimising how the WAN is used you can either lower your connectivity bill or at least delay the need to upgrade it. WAN Optimisation is critical today, especially in storage.

It seems that every storage system, backup application and failover application now has the potential to replicate data between data centers. Storage replication software used to be a significant investment that only the largest data centers could afford. Now the functionality is often included with the storage array or backup application at minimal cost, making improved disaster recovery (DR) capabilities available to a broader range of applications and companies. That cost saving quickly erodes, however, if you need to make a significant upgrade to your WAN links. While IT budgets are loosening, you still have to watch your expenditures as closely as ever. WAN Optimisation lets you add these capabilities, improving your ability to recover from a disaster without increasing your WAN expenses.

The challenge for WAN Optimisation vendors in the past was that they deployed their software on appliances to ease installation for network administrators. Many of those appliances, though, were not purpose built; they were simply off-the-shelf hardware from a server OEM on which the WAN Optimisation vendor preloaded and preconfigured its software.

This gave the vendors some consistency in the hardware and made things easier to support. The challenge was that it was yet another brand of hardware to integrate into your data center. The vendors also quickly learned that even when they bought the same hardware from the same hardware vendor, they still had to deal with changes to motherboards and network interface cards. The consistency they desired was not being delivered.

Enter virtual appliances. With virtual appliances, WAN Optimisation vendors like Riverbed Technology, Silver Peak Systems, Certeon, Ipanema Technologies and NetEx now have a very consistent “hardware” platform. Certeon and NetEx have both earned the VMware Ready label. As a result, these companies and others not only eliminate the challenge of dealing with physical machines, they also gain all the advantages of the virtual environment. For example, you can now move your WAN appliance between physical hosts for hardware maintenance or when you need more performance. You also don’t need IT staff who understand WAN Optimisation to mount hardware in each remote location. For the vendors the “hardware” is now downloadable, so prospects can evaluate it more easily, and for customers it is trivially easy to “ship”. It is also bundle-able, meaning that WAN Optimisation vendors can partner with storage and backup software companies to deliver an all-in-one solution.

There is also some potential in combining virtual WAN Optimisation with cloud storage. With hardware it would be difficult to offer a universal solution to thousands of potential customers; virtual WAN Optimisation software could be part of the cloud storage download, or integrated into the cloud gateway, to make the performance of cloud storage even better.

From a performance perspective, most vendors are reporting results for the virtual versions of their WAN Optimisation products that lead me to believe performance between dedicated hardware and shared hosts will be a wash for many data centers. Even if you decide to dedicate a virtual host to just the WAN Optimisation virtual appliance, you still gain all the benefits of virtualization should you need them.

Recent Developments in Private Clouds

While most of the initial interest in cloud computing focused on companies offering cloud services for businesses to use as they need, there has recently been a shift towards so-called private clouds. These private clouds leverage an existing data centre to provide scalable infrastructure throughout the business, at the times it is needed.

Nimbula is the latest company to release its software to the public, currently in the form of an open public beta trial. Its newly launched proprietary cloud operating system, Nimbula Director, aims to mimic the Amazon EC2 public infrastructure cloud within the corporate data centre. It is also designed to fully automate the process of creating the private cloud by exposing a Representational State Transfer (REST) API, which is accessed through a web console or the command line.
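The same REST interface could in principle be driven from a script rather than the web console. The sketch below is purely illustrative – the endpoint paths, payload fields and authentication scheme are assumptions, not Nimbula’s documented API – but it shows the general pattern of provisioning an instance over REST:

```python
import requests

# Hypothetical base URL and resource names: these are placeholders, not
# Nimbula Director's actual API. They only illustrate driving a private
# cloud controller through a REST interface.
BASE_URL = "https://director.example.internal/api"

session = requests.Session()
session.verify = False  # self-signed certificates are common on internal appliances

# Authenticate and obtain a token (illustrative only)
auth = session.post(f"{BASE_URL}/authenticate",
                    json={"user": "admin", "password": "secret"})
auth.raise_for_status()
session.headers["Authorization"] = f"Bearer {auth.json()['token']}"

# Ask the controller to launch an instance, EC2-style
launch = session.post(f"{BASE_URL}/instances",
                      json={"image": "ubuntu-10.04", "shape": "small", "count": 1})
launch.raise_for_status()
print(launch.json())
```

The point is simply that once the whole provisioning workflow is behind an API, it can be automated end to end instead of being driven by hand.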

Designed to install on bare-metal servers and supporting both the KVM and Xen hypervisors, Nimbula is intended to provide access to computing resources as required, load-balancing and distributing numerous applications across the entire estate whilst making the data available resiliently and reliably in multiple locations.

In addition to Nimbula, there are a couple of other private cloud offerings currently available, Eucalyptus and OpenStack, both of which are at least partially open source. OpenStack was created by Rackspace and NASA and is designed to be very scalable – scalability being the main problem Eucalyptus suffers from. All three of these products are designed for building a cloud from the ground up and could require a lot of time and effort to migrate an existing infrastructure to, so they are more likely to be used by new systems during the initial design and development phase, or as a new hosting platform for an existing application.

An alternative approach was conceived by Gridcentric with its Copper application, which is designed to rapidly create copies of existing virtual machines on the fly. By continuing to run with existing hardware and software platforms, this method reduces the time taken to realize fully scalable applications, with the benefit of centralized management. This approach is more likely to be popular for applications that need to scale up for periods of increased demand.

The ability of a business to build its own cloud to its particular requirements is something that is going to become increasingly popular over the next few years. These new products might be the first in the market, but sooner or later, the big players are going to move into this marketplace, either by releasing their own products, or more likely through acquisition or merger.

Hybrid clouds – merging the benefits of private and public clouds

In order to fully leverage the benefits of both public and private cloud computing, some form of hybrid cloud may be the viable solution many enterprises are seeking. By implementing a hybrid cloud strategy, enterprises can combine the advantages of private and public clouds while escaping the obvious disadvantages of each. Let’s briefly examine some of the advantages and disadvantages of each cloud type:

Public clouds:
Although public clouds deliver many of the benefits enterprises are looking for – on-demand and elastic scaling, automatic provisioning, metering and a pay-as-you-go mechanism, and in general reduced capital expenditure – there are still several problems associated with them.

Most of the concerns about public clouds have to do with security and latency. From a security perspective there are numerous potential vulnerabilities, both in the WAN networking domain and at the data center level. In many cases the customer is sending unencrypted data over the Internet to a public cloud provider and has no idea where that data is located – neither its position within a particular virtualized data center nor, in some cases, even which data center it resides in. Most public cloud providers use virtualization to the fullest, which makes data location very difficult to track as data can migrate from one virtual instance to another. The strength of data isolation in a multi-tenant public cloud environment is also sometimes questionable, especially if applications are not clearly isolated from their surroundings at all times. Additionally, latency can be a real concern for many real-time enterprise applications. As a result, the applications enterprises have migrated to public cloud providers have largely been limited to non-critical applications and data, e.g. web servers, CRM systems, storage, and sometimes collaboration and productivity tools. Some providers offer Content Delivery Networks to reduce the latency problem and bring content closer to the end user, including Amazon with CloudFront and Rackspace through its collaboration with Limelight Networks.
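One practical mitigation for the in-transit and data-location concerns is to encrypt data on the client side before it ever leaves the private network. A minimal sketch, assuming the Python cryptography package and a placeholder upload function standing in for whatever provider SDK is actually used:

```python
from cryptography.fernet import Fernet

def upload_to_cloud(object_name: str, payload: bytes) -> None:
    """Placeholder for the real provider SDK call (an S3 bucket, a cloud
    gateway, etc.); here it just writes the ciphertext to a local file."""
    with open(object_name, "wb") as out:
        out.write(payload)

# The key is generated and kept inside the enterprise; the provider only
# ever receives ciphertext, so "uncertain" data location matters far less.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("customer_records.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

upload_to_cloud("customer_records.csv.enc", ciphertext)
```

Client-side encryption does not remove every concern (key management moves in-house, and latency is unaffected), but it does take the WAN transport and the provider’s storage out of the trust boundary.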

Private clouds:
Establishing an internal private cloud environment (enterprise cloud) is another option for enterprises not interested in, or willing to, trust the public cloud. Since so many enterprises have already virtualized their data centers, surely it shouldn’t be too difficult to transform that infrastructure into a cloud-based architecture, should it? It depends. Larger enterprises can certainly gain from a fully cloud-based infrastructure, providing IT scalability, automatic provisioning and cost granularity for individual divisions and departments, thereby adopting many of the cloud’s benefits. Having said that, it seems inevitable that most enterprises will establish private clouds, right? Not so fast. Private clouds often lack the benefits of the public cloud, such as the following:

* Requiring large investments in purchasing and maintaining IT infrastructure/resources and data centers
* Lack of economies of scale – making in-house investments more expensive compared to a public cloud provider
* Assuming over-capacity in IT resources to handle temporary peak loads – leading to underutilization
* Maintaining more highly qualified and costly IT experts in-house

However, when it comes to security and data privacy, many would conclude that private clouds are a much safer alternative. Although by no means a security guarantee, data would normally reside within the enterprise data center and therefore not be exposed to the dreaded “uncertain” data location and the potential infringements arising from WAN transport vulnerabilities and external data center conditions. These are certainly among the issues vendors like IBM and HP are emphasizing in offering their enterprise/carrier-class private cloud platforms (IBM Cloud Service Provider Platform and HP Cloud Start).

Hybrid clouds:
With hybrid clouds, IT managers can decide which data and applications should reside and run in the internal private cloud and which should be “bursted” or moved to the public cloud. The goal should be to minimize resource overcapacity, keeping critical applications and data within the private cloud while moving peak loads and less critical apps and data to the public cloud. A rough sketch of that bursting decision is shown below.
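As a purely illustrative sketch – the monitoring and provisioning calls are placeholders for whatever tooling the private and public sides actually expose – the bursting decision can be as simple as a utilisation threshold combined with a criticality flag:

```python
# Illustrative only: private_cloud and public_cloud stand in for the real
# provisioning APIs; job is any object describing the workload to place.
BURST_THRESHOLD = 0.80  # start bursting once the private cloud is 80% utilised

def place_workload(job, private_cloud, public_cloud):
    """Keep critical work private; burst overflow and non-critical work to public."""
    if job["critical"] or private_cloud.utilisation() < BURST_THRESHOLD:
        return private_cloud.launch(job)
    return public_cloud.launch(job)
```

Real policies are richer (data residency rules, cost ceilings, latency targets), but the underlying decision has this shape.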

API compatibility is another important issue. For example, Eucalyptus Enterprise Edition automates image conversion between supported hypervisors (VMware, Xen, KVM), including Amazon EC2-compatible AMIs (Amazon Machine Images). Several public cloud providers, including Terremark and Savvis, have also deployed VMware vCloud Express in their data centers, making it possible for enterprises that already use VMware’s virtualization technology (e.g. vSphere) to extend their private data center to a public cloud provider.
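That API compatibility is what makes a hybrid setup scriptable. As a sketch using the classic boto EC2 client – the endpoint, credentials and image id below are placeholders – the same code can target an EC2-compatible private cloud or Amazon itself simply by swapping the region and endpoint:

```python
import boto
from boto.ec2.regioninfo import RegionInfo

# Placeholder endpoint and credentials for an EC2-compatible private cloud
# (e.g. a Eucalyptus front end); point the same client at Amazon by using
# the standard EC2 region instead.
region = RegionInfo(name="private-cloud", endpoint="cloud.example.internal")
conn = boto.connect_ec2(aws_access_key_id="ACCESS_KEY",
                        aws_secret_access_key="SECRET_KEY",
                        is_secure=False,
                        region=region,
                        port=8773,
                        path="/services/Eucalyptus")

# Launch an instance from an EC2-compatible machine image (the id is a placeholder)
reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
print(reservation.instances)
```

Because the calling code is identical on both sides, moving or bursting a workload becomes a configuration change rather than a rewrite.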

It seems likely that hybrid clouds will be an important element in making cloud computing more accessible and valuable to enterprises, and that hybrid cloud architectures will be increasingly widely deployed. To discuss how your organisation can benefit from private, public or hybrid cloud computing, give us a call.