Our Ramblings

Bringing the Network Infrastructure into the 21st Century.

Despite the lower costs and increased availability of bandwidth in the UK, network managers are still engaged in a continuous struggle to ensure business applications perform optimally, enabling users to maximise their productivity. Unfortunately, measuring network performance is all too often reduced to checking connectivity, says Paula Livingstone of Rustyice Solutions.

The reason for this could be that network managers are rewarded according to the uptime of their networks, not their actual performance. The result is that managers monitor their connectivity and can easily see when they reach, or regularly breach, the 80% threshold of their available capacity. When this happens, the normal response is simply to buy more bandwidth.

However, while adding more bandwidth may seem to solve the problem, it is only a temporary solution, as bandwidth usage always increases to consume the amount available.

Bandwidth availability is not the primary cause of poor network performance. Performance problems are often the result of a lack of visibility into the application layer of the network, or layer 7 for the more technically inclined.

Unfortunately, while network managers focus on ensuring their networks are connected and operational, they can neglect the real issues affecting the performance of applications at the user interface. In other words, they fail to gain insight into exactly what is hampering the performance of business applications.

When network managers do gain insight into their networks at the application layer, through in-depth examination of the traffic in transit, they usually find their performance issues are not related to bandwidth. The problem is usually unauthorised applications hogging corporate bandwidth as some users take advantage of what they view as “free bandwidth”.

Port 80, for example, is left open in virtually all networks for Internet data transmission. Companies may assume it will be used only for loading legitimate Web pages; but by leaving it open, they also automatically allow Skype conversations, the downloading of torrent data, through which much movie and audio piracy occurs, and the live streaming of videos from YouTube and other sites.

If a company fails to gain visibility over the application layer, it will not be able to properly manage this unauthorised usage of network resources. The result is corporate bandwidth vanishes and the performance of business applications declines.

In-depth visibility not only reveals which applications are being used, but also allows management to set and manage service-level agreements (SLAs) at a departmental or even per-user level. In other words, each user can be guaranteed a specific level of network performance, ensuring optimal access to the relevant business applications by automatically reducing the priority of, or even blocking, unauthorised applications.
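To make the idea concrete, here is a minimal sketch of the kind of per-user policy lookup such visibility enables. The application names, priority values and guaranteed rates below are purely illustrative assumptions, not a description of any particular product:

    # Illustrative sketch only: a per-user / per-application policy lookup.
    # The application names, priorities and rates are assumptions for the example.
    POLICY = {
        "erp":        {"priority": 1, "guaranteed_kbps": 512},   # business-critical
        "email":      {"priority": 2, "guaranteed_kbps": 256},
        "youtube":    {"priority": 9, "guaranteed_kbps": 0},     # unauthorised: best effort
        "bittorrent": {"priority": None, "guaranteed_kbps": 0},  # unauthorised: blocked
    }

    def shaping_decision(user, application):
        """Return the shaping action for a flow, defaulting to best effort."""
        rule = POLICY.get(application, {"priority": 8, "guaranteed_kbps": 0})
        if rule["priority"] is None:
            return {"user": user, "application": application, "action": "block"}
        return {"user": user, "application": application, "action": "shape", **rule}

    print(shaping_decision("j.smith", "bittorrent"))
    # {'user': 'j.smith', 'application': 'bittorrent', 'action': 'block'}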

Effective application visibility enables efficient management that ensures networks perform optimally. The result is more productive users, better-performing business applications and, in some cases, even lower bandwidth costs. If you would like to examine the situation regarding your own network access connectivity, give our sales team a call today. It's free on 0800 012 1090. We look forward to hearing from you.


Can we really make autonomic network systems succeed?

The real world is uncertain. That's a given. Our networks, at their most fundamental, carry the real world from one point to another and therefore, by definition, carry that uncertainty during every moment they operate. Any autonomic system that seeks to properly manage our networks faces this challenge of pervasive uncertainty. Such systems will always be constructed around the goal of bringing order to chaos by applying adaptive techniques. If we map too much of that adaptation into the systems, they become cumbersome and unwieldy. We therefore need to smooth the chaos curve in order to drive autonomic systems design in a direction that will maintain its efficacy. How might we do this? Read on for our thoughts.

We are currently engaged in a conflict with the increasingly complex systems we seek to create, and it is a conflict we are losing. Things may (arguably) have become easier for the end user, but the systems that provide the end user with more simplicity mask a corresponding increase in the complexity of the underlying systems that support them. This affects the economic viability of new developments in the marketplace and actually makes some of them non-viable. This situation forces us into choices we cannot make on an informed basis, and our decisions may end up fossilising parts of the network so that future development becomes uneconomic or infeasible.

Autonomic network systems are founded on the principle that we need to reduce the variability that passes from the environment to the network and its applications. In recent years, many companies, including Rustyice Solutions, have brought products to market that simplify the management of networks by offering levels of abstraction which make configuration easier and allow the network, on occasion, to heal itself. These products tend to smooth the chaos curve and increase the reliability of the systems without requiring a low-level re-inspection of the systems themselves. They do this by integrating the available information from different semantic levels and leveraging it to give the systems a more holistic view of their own operational status.

Let's consider what we expect of an autonomic system. It can be defined in terms of a simple feedback loop comprising information gathering, analysis, decision making and taking action. If the system is working properly, this feedback loop will achieve and maintain balance. Examining these elements one by one: information gathering can come from network management platforms which talk to the discrete network components on many levels, as well as from environmental and application-based alerts. Analysis can mean activities such as applying rules and policies to the gathered information. Decision making is the application of that analysis to determine whether or not the conditions set out in the policies are met. Taking action could involve adjusting network loads on managed elements and potentially informing humans who need to take some form of action. These are the fundamental terms with which we seek to understand any requirement from our own customers.
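By way of illustration, a minimal sketch of that gather/analyse/decide/act loop might look like the following. The metrics, thresholds and actions are assumptions made for the example rather than features of any particular management platform:

    import random
    import time

    LINK_UTILISATION_THRESHOLD = 0.80   # the 80% policy threshold mentioned earlier

    def gather():
        """Information gathering: SNMP polls, environmental and application alerts (simulated here)."""
        return {"utilisation": random.uniform(0.5, 1.0),
                "errored_seconds": random.randint(0, 3)}

    def analyse(metrics):
        """Analysis: apply rules and policies to the gathered information."""
        return {"congested": metrics["utilisation"] > LINK_UTILISATION_THRESHOLD,
                "degraded": metrics["errored_seconds"] > 0}

    def decide(findings):
        """Decision making: check the analysis against the policy conditions."""
        if findings["congested"]:
            return "deprioritise bulk traffic"
        if findings["degraded"]:
            return "notify operator"
        return "no action"

    def act(action):
        """Taking action: adjust managed elements or inform a human."""
        print("action:", action)

    for _ in range(3):          # in practice this feedback loop runs continuously
        act(decide(analyse(gather())))
        time.sleep(1)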

This sounds fine in theory, but what do we need to understand in order to make it work? The network is currently modelled on a layered concept in which each layer has a distinct job to do and talks only to its neighbouring layers, as well as to its corresponding layer at the distant end of the communications link. This model has served us well and brings many distinct advantages, including hardware and software compatibility and interoperability, international compatibility, inter-layer independence and error containment. It does, however, carry some disadvantages too, and the most significant of these for this discussion is that no point in the system is aware of the higher-level purpose for which the networked systems exist in the first place. The question of whether the network is doing what it is needed to do at the holistic level is something no discrete layer ever asks, nor should it. It almost comes down to a division between the analogue concerns of the real world and the digital, yes/no abilities of the systems themselves.

Taking this discussion a step further, we need to improve our ability to convey the real-world requirements for which these networks exist to the systems we intend to be capable of deciding whether those networks are working or not. Can such a system really know whether the loss of a certain percentage of the packets in a data stream originating on the Netflix servers will affect the enjoyment of somebody watching the on-demand movie they have paid for? From a higher perspective, the question becomes whether we can really design autonomic decision-making systems that understand the criteria the real world applies to the efficacy of the network and base their decisions on that understanding. They also need to be aware of the impact any decisions they make will have on the efficacy of any other concurrent real-world requirements.

There are many mathematical abstractions that seek to model this scenario in order to predict and design the autonomic behaviours we require of our systems, and you will be relieved to read that we do not propose to go into those here. In principle, however, we need to move towards a universal theory of autonomic behaviour. We need an analytic framework that facilitates a conceptual decision-making model relating to what we actually want from the network. We need to couple this with an open decision-making mechanism, along the lines of UML, so that we can fold in the benefits of new techniques as they develop. Ultimately, we need to be able to build these ideas directly into programming languages so that they better reflect, at a higher level of abstraction, the real-world systems we want.

In conclusion, we can say that autonomics is a multi-level subject and we need to take account of these different semantic levels. We need to build an assumed level of uncertainty into our programming in order to maximise our ability to engineer autonomic systems, and we need to develop standards in order to further enable the capability of our systems in this area. These are the fundamental points from which we at Rustyice Solutions begin any discussion of network management and, more especially, of autonomic networking such as WAN acceleration. If you or your business are interested in examining this topic in more detail, with a view to enhancing the value your network brings to the table, why not give us a call? We look forward to hearing from you.

Alternatives to WAN performance optimisation

Here are some alternatives to WAN performance optimisation that should always be considered:

  • Application redesign or reselection: In some cases, it’s better to replace a few poorly designed applications than to try to alter the WAN characteristics. Backup and file transfer or distribution applications that don’t remove long duplicate data strings (“deduplication”; see the sketch after this list) or that handle transmission errors or congestion inefficiently are prime examples.
  • Application remoting: Often called “terminal server” or “Citrix,” this solution is best for applications that are tightly intertwined with some remote service; for example, an application in a remote office that makes frequent calls on a database in the enterprise’s central server location. Application remoting can also save money on licensing fees and has other advantages. Application remoting data flows will probably require network QoS.
  • System tuning: In some cases (e.g., an inability to use all of the bandwidth in a high-latency path), simply tuning existing software or upgrading to a more recent version (e.g., shifting to Microsoft Windows Vista from Windows XP) can produce massive results at minimal cost.
  • WAN service modification: For some situations, the need for more bandwidth or better network delay or error characteristics is unavoidable or is the most cost-effective solution. In some cases, technology changes (e.g., from terrestrial links to satellite) are also involved. Renegotiation of carrier contracts and changing carriers are also options.
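As promised above, here is a toy illustration of the deduplication idea mentioned in the first bullet. Real products use content-defined, variable-size chunking and a shared dictionary at both ends of the link; the fixed 4 KB chunks and in-memory set here are simplifying assumptions:

    import hashlib

    CHUNK_SIZE = 4096
    already_sent = set()   # digests of chunks the far end is assumed to hold already

    def chunks_to_send(data):
        """Split data into fixed-size chunks and keep only those not seen before."""
        new_chunks = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in already_sent:
                already_sent.add(digest)
                new_chunks.append(chunk)
        return new_chunks

    payload = b"A" * 8192 + b"B" * 4096      # two identical "A" chunks plus one "B" chunk
    print(len(chunks_to_send(payload)))       # 2: the duplicate "A" chunk never crosses the link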

Nine Tips and Technologies for Network WAN Optimisation

Although there is no way to actually make your true WAN speed faster, here are some tips for corporate IT professionals that can make better use of the bandwidth you already have, thus providing the illusion of a faster pipe.

1) Caching — How does it work and is it a good idea?

Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing a WAN/Internet link unnecessarily.

Caching servers keep a time stamp of their last update to data. If the page time stamp has not changed since the last time a user accessed the page, the caching server will present a locally stored copy of the Web page, saving the time it would take to load the page from across the Internet.

Caching on your WAN link can in some instances reduce traffic by 50 percent or more. For example, if your employees are making a run on the latest PDF explaining their benefits, then without caching each access would traverse the WAN link to a central server, duplicating the data across the link many times over. With caching, they receive a local copy from the caching server.
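To illustrate the time-stamp check in code, here is a simplified sketch. A real proxy such as Squid uses HTTP validators (Last-Modified, ETag) and conditional requests; the origin_last_modified() and fetch_from_origin() helpers below are hypothetical stand-ins:

    cache = {}   # url -> {"body": ..., "last_modified": ...}

    def origin_last_modified(url):
        """Hypothetical: ask the origin server when the object last changed."""
        return 1_700_000_000.0

    def fetch_from_origin(url):
        """Hypothetical: pull the object across the WAN/Internet link."""
        return b"<html>...</html>"

    def get(url):
        entry = cache.get(url)
        if entry and entry["last_modified"] >= origin_last_modified(url):
            return entry["body"]                  # fresh: serve the local copy, no WAN transfer
        body = fetch_from_origin(url)             # stale or missing: cross the link once
        cache[url] = {"body": body, "last_modified": origin_last_modified(url)}
        return body

    get("http://intranet.example/benefits.pdf")   # first request crosses the link
    get("http://intranet.example/benefits.pdf")   # second request is served locally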

What is the downside of caching?

There are two main issues that can arise with caching:

a) Keeping the cache current – If you access a cached page that is not current, you are at risk of getting old and incorrect information. Some things you may never want to be cached: the results of a transactional database query, for example. It’s not that these problems are insurmountable, but there is always the risk that the data in the cache will not be synchronized with changes. I personally have been misled by old data from my cache on several occasions.

b) Volume – There are some 300 million websites on the Internet, and each site can contain several megabytes or more of public information. The amount of data is staggering, and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.

We recommend Squid as a proxy solution; however, there are more elaborate and ultimately more capable solutions, such as those from Ipanema Technologies.

2) Protocol Spoofing

Historically, many client-server applications were developed for an internal LAN. Many of these applications are considered chatty; for example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe, using WAN links to tie different locations together.

To get a better visual of what goes on in a chatty application, perhaps an analogy will help. It’s like sending family members your summer vacation pictures and, for some insane reason, putting each picture in a separate envelope and mailing them individually on the same mail run. Obviously, this would be extremely inefficient, just as chatty applications can be.

What protocol spoofing accomplishes is to “fake out” the client or server side of the transaction and then send a more compact version of the transaction over the Internet (i.e., put all the pictures in one envelope and send it on your behalf, thus saving you postage).
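As a rough sketch of the “one envelope” idea (the ACK behaviour and batch size are illustrative assumptions, and real spoofing appliances are considerably more careful about which replies may safely be faked):

    class BatchingProxy:
        """Acknowledge chatty requests locally and forward them across the WAN in one batch."""

        def __init__(self, batch_size=50):
            self.batch_size = batch_size
            self.pending = []

        def local_request(self, message):
            """Answer the chatty client immediately and queue the real work."""
            self.pending.append(message)
            if len(self.pending) >= self.batch_size:
                self.flush()
            return "ACK"                      # the client believes the server has replied

        def flush(self):
            """Send everything queued so far as a single message across the WAN."""
            print("sending 1 WAN message carrying", len(self.pending), "requests")
            self.pending.clear()

    proxy = BatchingProxy()
    for i in range(50):
        proxy.local_request("update row %d" % i)
    # -> sending 1 WAN message carrying 50 requests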

For more information, visit the Protocol Spoofing page at Ipanema Technologies.

3) Compression

At first glance, the term compression seems intuitively obvious. Most people have at one time or another extracted a compressed Windows ZIP file; if you examine the file sizes pre- and post-extraction, you will see there is more data on the hard drive after the extraction. WAN compression products use some of the same principles, only they compress the data on the WAN link and decompress it automatically once delivered, reducing the volume of data sent over the link and making the network more efficient. Even though you likely understand compression of a Windows file conceptually, it would be wise to understand what is really going on under the hood before making an investment to reduce network costs. Here are two questions to consider.

a) How Does it Work? — A good and easy way to visualize data compression is to compare it to the use of shorthand when taking dictation. By using a single symbol for common words, a scribe can take written dictation much faster than if he were to spell out each word. The basic principle behind compression techniques is to use shortcuts to represent common data.

Commercial compression algorithms, although similar in principle, can vary widely in practice. Each company offering a solution typically has its own trade secrets that it closely guards for competitive advantage. However, there are a few general rules common to all strategies. One technique is to encode a repeated character within a data file. For a simple example, let’s suppose we were compressing this very document, and as a format separator we had a row of solid dashes.

The data for this separator line consists of the ASCII character “-” repeated approximately 160 times. When transporting the document across a WAN link without compression, this line alone would require 160 bytes of data, but with clever compression we can encode it using a special notation such as “-” x 160.

The compression device at the front end would read the 160-character line and realize, “Duh, this is stupid. Why send the same character 160 times in a row?” So it would incorporate a special code to represent the data more efficiently.
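A minimal run-length encoder along these lines (a sketch of the general technique, not any vendor’s algorithm) might look like this:

    def rle_encode(text):
        """Collapse runs of repeated characters into (character, count) pairs."""
        encoded = []
        for ch in text:
            if encoded and encoded[-1][0] == ch:
                encoded[-1] = (ch, encoded[-1][1] + 1)
            else:
                encoded.append((ch, 1))
        return encoded

    def rle_decode(encoded):
        """Expand (character, count) pairs back into the original text."""
        return "".join(ch * count for ch, count in encoded)

    separator = "-" * 160
    print(rle_encode(separator))                            # [('-', 160)]
    assert rle_decode(rle_encode(separator)) == separator   # lossless round trip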

Perhaps that was obvious, but it is important to know a little bit about compression techniques to understand the limits of their effectiveness. There are many types of data that cannot be efficiently compressed.

For example, many image and voice recordings are already optimized and there is very little improvement in data size that can be accomplished with compression techniques. The companies that sell compression based solutions should be able to provide you with profiles on what to expect based on the type of data sent on your WAN link.

b) What are the downsides? — Compression always requires equipment at both ends of the link, and results can vary considerably depending on the traffic type.

If you’re looking for compression vendors, we recommend FatPipe, Juniper Networks or of course Ipanema Technologies.

4) Requesting Text Only from Browsers on Remote Links

Editor’s note: Although this may seem a bit archaic and backwoods, it can be effective in a pinch to keep a remote office up and running.

If you are stuck with a dial-up or slower WAN connection, have your users set their browsers to text-only mode. While this will speed up general browsing and e-mail, it will do nothing to speed up more bandwidth-intensive activities like video conferencing. The reason text-only mode can be effective is that most Web pages are loaded with graphics, which make up the bulk of the load time. If you’re desperate, switching to text-only will eliminate the graphics and save you quite a bit of time.

5) Application Shaping on Your WAN Link

Editor’s Note: Application shaping is appropriate for corporate IT administrators and is generally not a practical solution for a home user. Makers of application shapers include Packeteer and Allot, and their products are typically out of the price range of many smaller networks and home users.

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping,” with aliases of “traffic shaping,” “bandwidth control,” and perhaps a few others thrown in for good measure. For the IT manager who is held accountable for everything that can and will go wrong on a network, or the CIO who needs to manage network usage policies, this is a dream come true. If you can divvy up portions of your WAN/Internet link to various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth.

At the center of application shaping is the ability to identify traffic by type: for example, distinguishing between Citrix traffic, streaming audio, Kazaa peer-to-peer, or something else. However, this approach is not without its drawbacks.

Here are a few common questions potential users of application shaping generally ask.

a) Can you control applications with just a firewall or do you need a special product? — Many applications are expected to use well-known Internet ports when communicating across the Web. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the “FTP” application commonly used for downloading files uses the well-known port 21.

The fallacy with this scheme, as many operators soon find out, is that there are many applications that do not consistently use a fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don’t want IT personnel to block them at all, so they deliberately design applications to not conform to any formal port assignment scheme. For this reason, any product that aims to block or alter application flows by port should be avoided if your primary mission is to control applications by type.

b) So, if standard firewalls are inadequate at blocking applications by port, what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a shipping container. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet.

In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, consider a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then, when the train arrived in Los Angeles, the workers on the other end would hopefully have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what, the contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit, I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets and, through various pattern-matching techniques, determines what type of application a particular flow belongs to. Once a flow is identified, the application shaping tool can enforce the operator’s policies on that flow. Some examples of policy are:

  • Limit Citrix traffic to 100kbps
  • Reserve 500kbps for ShoreTel voice traffic
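To illustrate the mechanism (not any vendor’s implementation), a toy classifier might match byte patterns in the payload and then look up a policy like the ones above. The signatures and figures here are invented for the example:

    import re

    SIGNATURES = {
        "citrix": re.compile(rb"\x7f\x7f\x49\x43\x41"),        # invented byte pattern for the example
        "voip":   re.compile(rb"^(INVITE|REGISTER) sip:"),     # SIP signalling
        "web":    re.compile(rb"^(GET|POST|HEAD) /"),          # plain HTTP request
    }

    POLICIES = {
        "citrix": {"limit_kbps": 100},
        "voip":   {"reserve_kbps": 500},
    }

    def classify(payload):
        """Return the first application whose signature matches the payload."""
        for app, pattern in SIGNATURES.items():
            if pattern.search(payload):
                return app
        return "unknown"            # the residual traffic no signature identifies

    def policy_for(payload):
        return POLICIES.get(classify(payload), {"class": "best-effort"})

    print(policy_for(b"INVITE sip:alice@example.com SIP/2.0"))   # {'reserve_kbps': 500}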

The list of rules you can apply to traffic types and flows is unlimited. However, there are downsides to application shaping of which you should be aware. Here are a few:

  • The number of applications on the Internet is a moving target. The best application shaping tools do a very good job of identifying several thousand of them, yet there will always be some traffic that is unknown (estimated at 10 percent by experts from the leading manufacturers). This traffic is lumped into an unknown classification, and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a Web cast and is not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to stay up to date is large and there are cracks.
  • Even if the application spectrum could be completely classified today, it constantly changes. You must keep licenses current to ensure you have the latest detection capabilities, and even then it can be quite a task to constantly analyze and adjust the mix of policies on your network. As bandwidth costs fall, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

6) Test Your WAN-Link Speed

A common issue with slow WAN link service is that your provider is not giving you what they have advertised. It is worth measuring the throughput you actually receive before spending money on optimisation.
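A quick, rough check can be scripted; the URL below is a placeholder for a large file you host somewhere across the link, and a single download is only an approximate indicator compared with your provider’s own test tools:

    import time
    import urllib.request

    TEST_FILE_URL = "http://example.com/testfile-10MB.bin"   # placeholder: host your own test file

    def measured_mbps(url):
        """Time a single download and convert it to megabits per second."""
        start = time.monotonic()
        with urllib.request.urlopen(url) as response:
            data = response.read()
        elapsed = time.monotonic() - start
        return (len(data) * 8 / 1_000_000) / elapsed

    print("Measured roughly %.1f Mbit/s" % measured_mbps(TEST_FILE_URL))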

7) Make Sure There Is No Interference on Your Wireless Point-to-Point WAN Link

If the signal between locations served by a point-to-point link is weak, the wireless equipment will automatically downgrade its service to a slower speed. We have seen this many times, where a customer believes they have perhaps a 40-megabit backhaul link but is only realizing five megabits.

As we have stated above, Ipanema Technologies represents what is, in our opinion, the best all-round solution for these types of situation. With bandwidth costs consuming a major slice of the network-related opex of any distributed organisation, it becomes ever more obvious that the right solution is to keep these costs to a minimum whilst ensuring the experience for the end user is as good as it should be. Call us at Rustyice Solutions to discuss WAN optimisation and how it can help to make and save you money.