Our Ramblings

Bringing the Network Infrastructure into the 21st Century

Despite the lower costs and increased availability of bandwidth in the UK, network managers are still engaged in a continuous struggle to ensure business applications perform optimally and users can maximise their productivity. Unfortunately, measuring network performance is all too often reduced to monitoring connectivity, says Paula Livingstone of Rustyice Solutions.

The reason for this could be that network managers are rewarded according to the uptime of their networks, not their actual performance. The result is that managers monitor connectivity and can easily see when they reach, or regularly breach, the 80% threshold of their available capacity. When this happens, the normal response is simply to buy more bandwidth.
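That 80% figure is simple to check from interface byte counters. Below is a minimal sketch of the calculation, assuming the counters have already been sampled (for example via SNMP polling, which is not shown); the counter values and link speed are illustrative only.

    # Minimal sketch: flag a link that breaches the 80% utilisation threshold.
    # Counter sampling (e.g. via SNMP) is out of scope; plain numbers stand in.

    def utilisation(prev_octets: int, curr_octets: int,
                    interval_s: float, link_bps: float) -> float:
        """Average utilisation over the sampling interval, as a fraction."""
        bits_sent = (curr_octets - prev_octets) * 8
        return bits_sent / (interval_s * link_bps)

    THRESHOLD = 0.80  # the 80% capacity threshold discussed above

    u = utilisation(prev_octets=1_200_000_000, curr_octets=1_850_000_000,
                    interval_s=60, link_bps=100_000_000)  # 100 Mbit/s link
    if u >= THRESHOLD:
        print(f"Link at {u:.0%} utilisation - capacity review needed")

The point of the article, of course, is that this check alone says nothing about which applications are consuming that capacity.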

However, while adding more bandwidth may seem to solve the problem, it is only a temporary solution, as bandwidth usage always increases to consume the amount available.

Bandwidth availability is not the primary cause of poor network performance. Performance problems are often the result of a lack of visibility into the application layer of the network, or layer 7 for the more technically inclined.

Unfortunately, while network managers focus on ensuring their networks are connected and operational, they can neglect the real issues affecting the performance of applications at the user interface. In other words, they fail to gain insight into exactly what is hampering the performance of business applications.

When network managers do gain insight at the application layer, through in-depth examination of the traffic in transit over their networks, they usually find their performance issues are not related to bandwidth. The problem is usually unauthorised applications hogging corporate bandwidth as some users take advantage of what they view as “free bandwidth”.

Port 80, for example, is left open in virtually all networks for Internet data transmission. Companies may assume it will be used for loading legitimate Web pages, but by leaving it open they automatically also allow Skype conversations, the downloading of torrent data, through which most movie and audio piracy occurs, and live streaming of video from YouTube and other sites.
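A short sketch makes the gap concrete. The payload signatures below are deliberately simplified illustrations of layer-7 inspection, not a production DPI engine; the function names are our own.

    # Minimal sketch: why port-based classification is blind at layer 7.

    HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ")

    def classify_by_port(dst_port: int) -> str:
        # Everything sent to port 80 looks the same from layers 3/4.
        return "web" if dst_port == 80 else "other"

    def classify_by_payload(payload: bytes) -> str:
        if payload.startswith(HTTP_METHODS):
            return "http"
        if payload.startswith(b"\x13BitTorrent protocol"):
            return "bittorrent"      # classic BitTorrent handshake prefix
        return "unknown"             # candidate for further inspection

    # A BitTorrent handshake sent to port 80 is "web" to a port-based filter,
    # but is identified correctly once the payload itself is examined.
    pkt = b"\x13BitTorrent protocol" + bytes(48)
    print(classify_by_port(80), classify_by_payload(pkt))   # web bittorrent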

If a company fails to gain visibility over the application layer, it will not be able to properly manage this unauthorised usage of network resources. The result is corporate bandwidth vanishes and the performance of business applications declines.

In-depth visibility not only delivers insight into which applications are being used, but also allows management to set and manage service-level agreements (SLAs) at a departmental or even a per-user level. In other words, each user can be guaranteed a specific level of network performance, ensuring optimal access to the relevant business applications by automatically reducing the priority of, or even banning, unauthorised applications.
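One way to picture such a policy is as a simple table mapping classified applications to priority classes per department. The sketch below uses standard DSCP code points, but the department names, application labels and rules are illustrative assumptions rather than a description of any particular product.

    # Minimal sketch of a per-department application policy: each classified
    # flow is mapped to a DSCP class, with unauthorised applications dropped
    # to a scavenger class or blocked outright.

    DSCP = {"EF": 46, "AF41": 34, "CS1": 8, "BE": 0}   # standard code points

    POLICY = {
        "finance": {"erp": "EF",   "http": "AF41", "bittorrent": "block"},
        "default": {"erp": "AF41", "http": "BE",   "bittorrent": "CS1"},
    }

    def mark_flow(department: str, application: str):
        """Return the DSCP value to mark the flow with, or None to block it."""
        rules = POLICY.get(department, POLICY["default"])
        action = rules.get(application, "BE")
        return None if action == "block" else DSCP[action]

    print(mark_flow("finance", "bittorrent"))   # None -> block
    print(mark_flow("default", "bittorrent"))   # 8    -> scavenger class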

Effective application visibility enables efficient management that ensures networks perform optimally. The result is more productive users, better-performing business applications and, in some cases, even lower bandwidth costs. If you would like to examine the situation regarding your own network access and connectivity, give our sales team a call today. It's free on 0800 012 1090. We look forward to hearing from you.


Can we really make Autonomic Network systems succeed?

The real world is uncertain. That's a given. Our networks, at their most fundamental, carry the real world from one point to another and therefore, by definition, carry that uncertainty during every moment they operate. Any autonomic system that seeks to properly manage our networks faces this challenge of pervasive uncertainty: it will always be built around the task of creating order from chaos by applying adaptive techniques. If we map too much of that adaptation into the systems, they become cumbersome and unwieldy. We therefore need to smooth the chaos curve in order to drive autonomic systems design in a direction that will maintain their efficacy. How might we do this? Read on for our thoughts.

We are currently engaged in a conflict with the increasingly complex systems we seek to create, and we are losing. Things may have become easier for the end user (arguably), but the systems that give the end user more simplicity mask a corresponding increase in the complexity of the underlying systems that support them. This affects the economic viability of new developments in the marketplace and actually makes some of them non-viable. The situation forces us into choices that we cannot make on an informed basis, and our decisions may end up fossilising parts of the network so that future development becomes uneconomic or infeasible.

Autonomic network systems are founded on the principle that we need to reduce the variability that passes from the environment to the network and its applications. In recent years, many companies, including Rustyice Solutions, have brought products to market that simplify the management of networks by offering levels of abstraction which make configuration easier and allow the network to heal itself on occasion. These products tend to smooth the chaos curve and increase the reliability of the systems without requiring a low-level re-inspection of the systems themselves. They do this by integrating the available information from different semantic levels and leveraging it to give the systems a more holistic view of their own operational status.

Let's consider what we expect of an autonomic system. It can be defined in terms of a simple feedback loop comprising information gathering, analysis, decision making and taking action. If the system is working properly, this feedback loop will achieve and maintain balance. Examining these elements one by one: information gathering can come from network management platforms which talk to the discrete network components on many levels, as well as from environmental and application-based alerts. Analysis means activities such as applying rules and policies to the gathered information. Decision making is the application of that analysis to determine whether or not the conditions set out in the policies are met. Taking action could involve adjusting network loads on managed elements and potentially informing humans who need to intervene. These are the fundamental terms with which we seek to understand any requirement from our own customers.
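The loop can be written down very compactly. The following sketch is only an illustration of the gather–analyse–decide–act cycle described above; the metrics, thresholds and actions are placeholder assumptions standing in for real management-platform integration.

    # Minimal sketch of the feedback loop: gather, analyse, decide, act.

    import time

    POLICIES = {"max_link_utilisation": 0.80, "max_loss_rate": 0.01}

    def gather() -> dict:
        # In practice: SNMP polls, flow records, application and environment alerts.
        return {"link_utilisation": 0.91, "loss_rate": 0.002}

    def analyse(metrics: dict) -> list:
        violations = []
        if metrics["link_utilisation"] > POLICIES["max_link_utilisation"]:
            violations.append("utilisation")
        if metrics["loss_rate"] > POLICIES["max_loss_rate"]:
            violations.append("loss")
        return violations

    def act(violations: list) -> None:
        if "utilisation" in violations:
            print("Action: deprioritise scavenger-class traffic")   # adjust load
        if "loss" in violations:
            print("Action: raise alert for a human operator")       # escalate

    def control_loop(iterations: int = 3, interval_s: float = 1.0) -> None:
        for _ in range(iterations):
            metrics = gather()             # information gathering
            violations = analyse(metrics)  # analysis against rules and policies
            if violations:                 # decision making
                act(violations)            # taking action
            time.sleep(interval_s)

    control_loop()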

This sounds fine in theory, but what do we need to understand in order to make it work? The network is currently modelled on a layered concept in which each layer has a distinct job to do and talks only to its neighbouring layers and to its corresponding layer at the distant end of the communications link. This model has served us well and brings many distinct advantages, including hardware and software compatibility and interoperability, international compatibility, inter-layer independence and error containment. It does, however, carry some disadvantages, and the most significant of these for this discussion is that no point in the system is aware of the higher-level purpose for which we have the networked systems in the first place. The question of whether the network is doing what it is needed to do at the holistic level is something no discrete layer ever asks, nor should it. It almost comes down to a division between the analogue concerns of the real world and the digital, yes/no abilities of the systems themselves.

Taking this discussion a step further, we need to improve our ability to convey the real-world requirements, which are the reasons these networks exist and why we build them, to the systems we intend should be capable of deciding whether those networks are working or not. Can these systems really know whether the loss of a certain percentage of the packets in a data stream originating on the Netflix servers will impact the enjoyment of somebody watching the on-demand movie they have paid for? From a higher perspective, the question becomes whether we can really design autonomic decision-making systems that understand the criteria the real world applies to the efficacy of the network and base their decisions on that finding. They also need to be aware of the impact any decisions they make will have on the efficacy of any other concurrent real-world requirements.

There are many mathematical abstractions which seek to model this scenario in order to predict and design the autonomic behaviours we require of our systems, and you will be relieved to read that we do not propose to go into those here. In principle, however, we need to move towards a universal theory of autonomic behaviour. We need to find an analytic framework that facilitates a conceptual decision-making model relating to what we actually want from the network. We need to couple this with an open decision-making mechanism, along the lines of UML, so that we can fold in the benefits of new techniques as they develop. Ultimately we need to be able to build these ideas directly into programming languages so that they better reflect, at a higher level of abstraction, the real-world systems we want.

In conclusion, we can say that autonomics is a multi-level subject and we need to take account of these different semantic levels. We need to build an assumed level of uncertainty into our programming in order to maximise our ability to engineer autonomic systems, and we need to develop standards in order to further enable the capability of our systems in this area. These are the fundamental points from which we at Rustyice Solutions begin any discussion of network management and, more especially, autonomic networking such as WAN acceleration. If you or your business are interested in examining this topic in more detail, with a view to enhancing the value your network brings to the table, why not give us a call? We look forward to hearing from you.

Cisco Plays Catchup in DPI Test

If you’re a large enterprise with its own network, an ISP, or a company intent on staking its own claim online, the technology force of Deep Packet Inspection (DPI) is with you.

But it may surprise you who shows up on the doorstep to sell it to you.

Results of a test of P2P filtering gear conducted for Internet Evolution by the European Advanced Networking Test Center AG (EANTC) show that Cisco Systems Inc. (Nasdaq: CSCO), ipoque GmbH, and Procera Networks Inc. (Amex: PKT) are ready, willing, and able to help enterprises and ISPs reduce network production costs.

Some are more ready than others, though. Cisco, while matching and exceeding its rivals in various test scenarios, offers half the bandwidth capacity of the two smaller, younger companies. Cisco offers a total of four 10-Gbit/s load modules on its SCE-8000 unit, compared with eight 10-Gbit/s modules supported on ipoque’s PRX-10G Traffic Manager and Procera’s PacketLogic 10014.

During the tests, Procera’s and ipoque’s devices, both equipped with four interface pairs, were exposed to twice the load and twice the number of concurrent connections as Cisco’s device, which has two interface pairs.

Cisco, despite being the world’s biggest networking vendor, was bested in blocking P2P traffic by ipoque, whose PRX-10G allowed less than 0.01 percent of P2P traffic to bypass its filtering, compared with 2.4 percent for Cisco’s SCE-8000 and 2.9 percent for Procera’s PacketLogic.

Further, Cisco, along with Procera, required some updates and adjustments to perform as expected in detecting popular P2P protocols.

Does this mean Cisco’s not ready for P2P prime time?

Hardly. While its DPI device may be of lower capacity than its competitors’ in this test, Cisco, like the others, appears to have emerged from the beta-like vaporware stage all vendors were in during the March 2008 P2P EANTC test.

“Whether we talk about intelligent management of consumer traffic or about freeing bandwidth by throttling the massive amount of P2P traffic, the devices we tested are ready to be rolled out in service provider backbones,” says Carsten Rossenhövel, managing director of EANTC.

As detailed in our latest Big Report, “P2P Taste Test,” vendors have improved performance and accuracy significantly since last year’s test. EANTC increased its performance test bed by a factor of 25, and still didn’t hit the limits of these boxes.

Alternatives to WAN performance optimisation

Here are some alternatives to WAN performance optimisation that should always be considered:

  • Application redesign or reselection: In some cases, it’s better to replace a few poorly designed applications than to try to alter the WAN characteristics. Backup and file transfer or distribution applications that don’t remove long duplicate data strings (“deduplication”) or that handle transmission errors or congestion inefficiently are prime examples (a minimal deduplication sketch follows this list).
  • Application remoting: Often called “terminal server” or “Citrix,” this solution is best for applications that are tightly intertwined with some remote service; for example, an application in a remote office that makes frequent calls to a database in the enterprise’s central server location. Application remoting can also save money on licensing fees and has other advantages. Application remoting data flows will probably require network QoS.
  • System tuning: In some cases (e.g., inability to use all of the bandwidth of a high-latency path), simply tuning existing software or upgrading to a more recent version (e.g., shifting from Windows XP to Microsoft Windows Vista) can produce massive results at minimal cost.
  • WAN service modification: For some situations, the need for more bandwidth or better network delay or error characteristics is unavoidable or is the most cost-effective solution. In some cases, technology changes (e.g., to satellite from terrestrial links) are also involved. Renegotiation of carrier contracts and changing carriers are also options.
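
As promised in the first bullet, here is a minimal sketch of the deduplication idea: split a stream into chunks, fingerprint each chunk, and transmit only chunks not seen before. Real WAN optimisers typically use variable-size, content-defined chunking; fixed-size chunks and the sample payload are simplifying assumptions to keep the illustration short.

    # Minimal sketch of chunk-level data deduplication.

    import hashlib

    CHUNK_SIZE = 4096
    seen = set()   # fingerprints already held by the receiver

    def send(data: bytes):
        """Return (chunks_sent_in_full, chunks_replaced_by_references)."""
        full, referenced = 0, 0
        for i in range(0, len(data), CHUNK_SIZE):
            digest = hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            if digest in seen:
                referenced += 1          # send only the short fingerprint
            else:
                seen.add(digest)
                full += 1                # send the chunk itself
        return full, referenced

    payload = b"A" * 16384 + b"B" * 4096      # repetitive, backup-style data
    print(send(payload))                      # (2, 3): repeats become references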