
Home Network Security

For those of us who grew up in the 80's, we can probably think back to a time when hackers were looked upon as pretty cool, Robin Hood-style outriders who dared to stand up against oppressors. The movie WarGames captured that fascination with the possibilities of connectivity: glowing terminals, discarded fast food boxes and unfinished cans of flat cola. The reality nowadays is considerably murkier. Hardly a week goes by without a story breaking about the nefarious activities of the hacking 'community', which is nowadays better described as organised crime. As we've seen in the past, it's not just security agencies, nuclear launch facilities, or evil dictators that get stiffed by hackers; more often it's normal folk like us.

In recent years hacking has continued to hit the headlines almost every week. The best known case has to be the UK phone hacking scandal. Ironically, that wasn't even a true example of hacking, as the clueless victims of the "hack" had merely neglected to change the PIN on their voicemail from its default setting. It all goes to show that the weakest link in the security chain is usually human stupidity. I suppose calling it "hacking" deflected the glare of publicity away from their own carelessness, but that's another discussion for another day. The stories that hit the headlines are usually interesting in some way, but they pale into insignificance when compared to the millions of attempts that occur every day against the rest of us. Cybercrime is big business. We hear it so often that the words threaten to lose their impact.

According to the Trustwave 2016 Global Security Report, there were a recorded 26.6 million victims of hacking and identity theft in a 12-month period during 2015, a number which roughly equates to one person being hacked every second. In 2015, 96% of all hacking attacks involved credit card or payment data theft, used in fraudulent online or at-the-till transactions. Over £24 billion was estimated to have been lost to identity theft from hackers, with a potential loss averaging £5,061 per household globally.

The checklist of items hackers tend to go for includes usernames, passwords, PINs, National Insurance numbers, phone and utility account numbers, bank and credit card details, employee numbers, driving licence and passport numbers, insurance documentation and account numbers, and any other financial account details.

How they get this data ranges from acquiring remote access to your computer, SQL injection attacks on popular websites, spoofing a banking or other financial website, remote code execution and exploits in website trust certificates, through to physical theft and social media.

On the subject of social media there are some interesting and worrying facts. According to sources, 18% of people under the age of 19 have been the victims of a phishing scam, and 74% became victims when they followed links posted by people they know which they believed were legitimate. Furthermore, 74% of all social media users share their birthday information publicly. 69% share the schools and universities they attended. An amazing 22% of users publicly share their phone numbers, and, unsurprisingly, 15% share the names of their furry little friends.

If these numbers aren't scary enough, there's the fact that 15% of all Wi-Fi users worldwide are still using WEP encryption for their home Wi-Fi, and 91% of all public Wi-Fi hotspots are unsecured, unmonitored, and available 24x7x365.

And finally, it’s estimated that 11% of all spam contains some kind of code designed to hijack your computer if opened. A further 8% of all spam contains links to websites that have been designed to grab information or download some trojan to gain access behind your firewall.

WHAT CAN WE DO?

We’ve put together a number of measures to help you prevent hackers invading your private domain, whether in the cloud or locally inside your trusted networks.

We don't suggest you take things to the extreme, but there is a happy medium where you do everything you reasonably can to protect yourself and educate yourself to spot the signs when they arise.

NETWORK PROTECTION

Starting with the home network there are a number of easy wins we can gain to stop the baddies from getting too close. Most of these steps are surprisingly simple.

CHANGE ROUTER ADMINISTRATOR CREDENTIALS

This is one of the most common points of entry for someone looking to gain access to your home network. The router you received from your ISP may well be up to date and offer the best possible forms of encryption, but it has a weakness: routers usually come with a limited number of preconfigured SSIDs and Wi-Fi keys, which can be found on a sticker on the back of the router.

It doesn't take too much gumption to do a Google search and find out the SSIDs and Wi-Fi keys used by the big ISPs. It doesn't help that your router is usually advertising itself as a BT, Sky or Virgin Media router, which just makes life easier for the baddies.

A reasonably savvy hacker can therefore gain access to your router, get connected, and even log in using the weak default logins. For this reason we recommend that our customers change the default router usernames and passwords to something more complex.

CHECK WIRELESS ENCRYPTION

Most routers come with a level of encryption already active, but there are some examples where the default state of encryption may be extremely weak, or worse still, completely open.

If you scan your Wi-Fi using your phone and you see a padlock beside your network name, then you at least know you have some encryption active. If you then look on your router and it tells you that the encryption method is WEP, you'll need to fix that PDQ. WEP is the older standard of wireless encryption and can be cracked in less than fifteen minutes using a variety of tools, all of which are freely available on the net. Unfortunately, WPA isn't great either, but it's generally strong enough to hold back low-level hackers.

USE MAC ADDRESS FILTERING

Every network interface has a unique identifier known as a MAC (Media Access Control) address, regardless of whether it's a computer, tablet, phone, or Sky box.

The idea behind MAC address filtering is simple enough. You obtain the MAC addresses of your devices at home and enter them into the router so that only the devices you know about are able to connect. Obviously, if you have loads of network-connected devices this could take a while, but it will improve your chances against a drive-by hacker sitting in a car outside your home with a laptop balanced on the dashboard.
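If you're not sure what your devices' MAC addresses are, most operating systems will show them in the network settings, or you can pull one out with a few lines of Python. The sketch below is a minimal example using only the standard library; note that uuid.getnode() may return a randomly generated value if it cannot read a real interface.

```python
# Minimal sketch: print this machine's MAC address so it can be
# added to the router's MAC filtering (allow) list.
import uuid

node = uuid.getnode()  # 48-bit hardware address as an integer
# Caveat: if Python cannot read a real interface, getnode() may return
# a randomly generated value instead, so sanity-check the output.
mac = ":".join(f"{(node >> shift) & 0xff:02x}" for shift in range(40, -1, -8))
print(f"MAC address to add to the router filter list: {mac}")
```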

But hey, MAC addresses can be spoofed, so while the junior hacker will likely give up the more determined one will not. Think of MAC address filtering as putting a padlock on the garden gate; it may stop most casual nasties from entering your garden, but those who really want to get in there will just jump over.

DISABLE SSID BROADCAST

There are two schools of thought when it comes to hiding your network SSID. The first recommends hiding your router’s SSID from the public view, with the idea that invisibility to those around you makes you somehow immune to their attempts. But a hidden SSID may seem like a far more juicy target to a determined hacker with an SSID radio grabber. Both sides of the argument have merit. Are you successfully hidden by being invisible, or is the best hiding place in plain sight? Probably invisible on balance.

USE STATIC IP ADDRESSES

By default your router will automatically assign an IP address to any device that connects to it, so the pair, and the rest of the network, can communicate successfully.

DHCP (Dynamic Host Configuration Protocol) is the name for this feature, and it makes perfect sense. After all, who wants to have to add new IP addresses to new devices every time they connect to your network?

On the other hand, anyone who gains access to your router will automatically be handed a valid IP address that allows them to communicate with your network. So it's worth considering opting out of DHCP-controlled IP addresses and instead configuring your devices and computers to use static addresses from a range such as 10.10.0.0.
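If you do go down the static route, it helps to plan the addresses up front. The sketch below is a minimal example using Python's standard ipaddress module to lay out a scheme in the 10.10.0.0/24 range mentioned above; the device names are purely illustrative.

```python
# Minimal sketch: plan static addresses for home devices in the
# 10.10.0.0/24 private range (device names are illustrative only).
import ipaddress

subnet = ipaddress.ip_network("10.10.0.0/24")
devices = ["router", "desktop-pc", "laptop", "phone", "smart-tv"]

# hosts() yields usable addresses (it skips the network and broadcast addresses)
for device, address in zip(devices, subnet.hosts()):
    print(f"{device:12s} -> {address}")
```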

Like most good anti-hacking attempts though, this will only slow the intruder down.

ROUTER POSITION

This simple network protection act is one of the best, if done correctly.

Believe it or not, by moving your router to the centre of your house, or more to the rear (depending on where your closest neighbours or the road is), you are limiting the range of your wireless broadcast signal.

Most routers are located in the front room where the master phone socket usually is. This means the router can reach most corners of the house, and to some degree beyond the house. If someone was moving down the road, for example, sampling wireless networks then they would come across yours as they passed your house.

If the router is situated in a more central location, away from the front window, then the signal may be too weak to get a successful reading without having to stand on your porch.

SWITCH OFF THE ROUTER WHEN YOU’RE NOT USING IT

Most people will already do this anyway. Since no one is using the router, what’s the point of wasting electricity?

However, a lot of people simply have their router powered on all the time, regardless of whether they are in the house or not. Granted there are those who will be running a server, or downloading something while at work or asleep, but the vast majority just keep it on.

If you’re not using the internet or any other home network resource, it’s a good idea to power off the router. And if you’re away for an extended period, then do the same.

BEYOND THE HOME NETWORK

Home network security is one thing, and frankly it's not all that often you'll get a team of hackers travelling down your street with the intent of gaining access to your and your neighbours' home networks.

Where most of us fall foul in terms of hacking is when we’re online and surfing happily without a care in the world.

PASSWORDS

Passwords are the single weakest point of entry for the online hacker. Face it, how many of us use the same password for pretty much every website we visit? Some people even use the same password for access to a forum as they do for their online banking, which is pretty alarming, we think you'll agree.

Using the same password on every site you visit is like giving someone the skeleton key to your digital life. It's a bloody pain having different passwords for every site, but when you stop and think logically about it, not doing so leaves you incredibly vulnerable to those who have ill intentions towards your identity and bank balance. For many people a compromise is usually sufficient: many of the sites we use on the net that require a password are pretty innocuous, and using the same password for that swathe is normally fine, but make sure you use strong, unique passwords for the services that are really sensitive. More about that below.

Where passwords are concerned, using '12345', 'password', or 'qwerty' isn't going to stop someone from gaining access, and passwords such as 'L3tmeIn' aren't much better. Additionally, as we mentioned earlier, using the names of your pets may seem like a good idea, and mixing their names with your date of birth may even sound like a solid plan, but if you then go and plaster Mr Tiggywinkles, Rover, or Fido's name all over public posts on Facebook, along with pictures of you blowing out the candles on your birthday cake, you've just seriously lowered the chances of your passwords staying secret.

Security questions and two-step verification techniques are now being employed by a number of credible sites. What this means is that you enter more than one credential to log into your account. Most online banking is done this way now, and sometimes it includes a visual verification, such as a pre-selected thumbnail image from a range that the user clicks on to verify who they are.

If you have trouble coming up with passwords yourself, there are a number of password managers available that can help you create highly secure combinations of letters, numbers, and special symbols, unique for every website you visit. Better still, they'll store them for you in the program itself in case you forget them. They are usually protected by one ultra-secret master password, so be sure to keep that one complex and safe. Some examples are as follows.

LastPass – LastPass lets you log in everywhere with a single master username and password while it securely fills in the correct details for each site.

Kaspersky Password Manager – A fully automated and powerful password manager that can store your username and password details, then enter them into the site for you while remaining encrypted throughout.

Either way, human beings are the weakest link in the secure password chain so any help you can get is to be welcomed.
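If you'd rather roll your own strong passwords than rely on a manager, the short sketch below uses Python's standard secrets module, which is designed for exactly this kind of job; the length and symbol set are just assumptions you can tweak.

```python
# Minimal sketch: generate a strong random password using the
# cryptographically secure 'secrets' module from the standard library.
import secrets
import string

def generate_password(length: int = 16) -> str:
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # e.g. 'q7#Vd)pL2x_fR9aK'
```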

DON’T TELL THE WORLD EVERYTHING

David Glasser, the MD at Twitter US, recently admitted, “I hate to say it, but in reality, people need to share a little bit less about themselves.”

While there's nothing wrong with letting your nearest and dearest know what you're up to on Facebook, you really must consider the fact that they probably aren't the only ones reading. Facebook and Twitter often come under fire for their attempts to make users' newsfeeds public by default, meaning you have to jump through hoops to limit who can view your own timeline.

It’s worth taking the time to double-check the security settings on all your social media sites and check back often. Are the things you’re posting on your timeline or feeds viewable by friends only, or friends of friends? Has it mysteriously been reverted back to public viewing? Are you sure you want to display that picture of you sat at your desk with all that information on the screen behind you?

As we said before, publicly announcing your private details, like when you're on your hols and for how long, or the names and birthdays of you, your nearest and dearest, children, pets and so on, isn't particularly smart, but hey, we're all guilty of it.

CLOUD SECURITY

The newsworthy hacking of Pippa Middleton's account, and many others, has rammed home the fact that cloud storage isn't quite as secure as we'd like to think.

Every device, whether Android, Microsoft, or Apple, is capable of backing up your photos to its own particular cloud storage solution; sometimes it's even a default setting. Most of the time the cloud solutions used are so secure that anyone trying to hack into them will have a pretty rough time of it, and will no doubt bring down the wrathful vengeance of Google or Apple upon themselves. How the celebrity photos and videos were obtained is something you'll have to find out for yourselves, but if storing things in the cloud is alarming you, there are a couple of choices.

The first is to encrypt everything locally on your computer before uploading it to the cloud. This will take time, we'll grant you, but it means only you will be able to decrypt it. Secondly, you could always compress everything first, using WinZip, WinRAR or similar, then password-protect the compressed file. Breaking into a password-protected compressed file takes far longer than it's usually worth, provided you're not a celebrity, so most hackers won't bother.
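To give a flavour of the encrypt-before-upload approach, here is a minimal sketch using the third-party Python 'cryptography' package (pip install cryptography); the file names are purely illustrative, and you would want a proper plan for keeping the key safe.

```python
# Minimal sketch: encrypt a file locally before uploading it to cloud
# storage. File names here are illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()              # keep this key safe and offline
with open("cloud.key", "wb") as f:
    f.write(key)

fernet = Fernet(key)
with open("holiday-photos.zip", "rb") as f:
    encrypted = fernet.encrypt(f.read())

with open("holiday-photos.zip.enc", "wb") as f:
    f.write(encrypted)                   # upload only the .enc file
```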

Finally, there are cloud storage solutions that encrypt the data on the device before uploading it to servers which are themselves fully encrypted, e.g. SpiderOak and Tresorit.

CONCLUSION

The very fact that you're online makes you a potential target. If you're sitting back and saying "they'll have no interest in me", you're sadly mistaken. Let's face it, you're easy to find, easy to hack, and probably won't do much about it when you do get hacked. It's in your best interests to stay up to speed with the latest hacking techniques and how to defend yourself against them.

Can we really make Autonomic Network systems succeed?

The real world is uncertain. That's a given. Our networks, at their most fundamental, carry the real world from one point to another and therefore, by definition, carry that uncertainty during every moment they operate. Any autonomic system which seeks to properly manage our networks faces this challenge of pervasive uncertainty. Such systems will always be built around the dichotomy of creating order from chaos by applying adaptive techniques. If we map too much of that adaptation into the systems, they become cumbersome and unwieldy. We therefore need to smooth the chaos curve in order to drive autonomic systems design in a direction that will maintain their efficacy. How might we do this? Read on for our thoughts.

We are currently engaged in a conflict with the increasingly complex systems we seek to create, and we are losing. Things may have become easier for the end user (arguably), but the systems which give the end user more simplicity mask a corresponding increase in the complexity of the underlying systems which support them. This affects the economic viability of new developments in the marketplace and actually makes some of them non-viable. It forces us into choices that we cannot make on an informed basis, and our decisions may end up fossilising parts of the network so that future development becomes uneconomic or infeasible.

Autonomic network systems are founded on the principle that we need to reduce the variability that passes from the environment to the network and its applications. In recent years, many companies, including Rustyice Solutions, have brought products to market that simplify the management of networks by offering levels of abstraction which make configuration easier and allow the network to heal itself on occasion. These products tend to smooth the chaos curve and increase the reliability of the systems without requiring a low-level re-inspection of the systems themselves. They do this by integrating the available information from different semantic levels and leveraging it to give the systems a more holistic view from which to assess their own operational status.

Let's consider what we expect of an autonomic system. It can be defined in terms of a simple feedback loop comprising information gathering, analysis, decision making and taking action. If the system is working properly, this feedback loop will achieve and maintain balance. Examining these elements one by one: information gathering can come from network management platforms which talk to the discrete network components on many levels, as well as environmental and application-based alerts; analysis means such activities as applying rules and policies to the gathered information; decision making is the application of the analysis against those rules and policies to determine whether or not the conditions set out in the policies are met; and taking action could involve adjusting network loads on managed elements and potentially informing humans who need to take some form of action. These are the fundamental terms with which we seek to understand any requirement from our own customers.
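To make that loop concrete, here is a deliberately toy sketch in Python of the gather, analyse, decide and act cycle; the metric, threshold and actions are invented purely for illustration and bear no relation to any particular product.

```python
# Toy sketch of the gather -> analyse -> decide -> act feedback loop.
# Metric names, thresholds and actions are invented purely for illustration.
import random
import time

def gather():
    # In reality this would poll network management platforms, SNMP agents,
    # application alerts and so on.
    return {"link_utilisation": random.uniform(0.0, 1.0)}

def analyse(metrics, policy):
    return metrics["link_utilisation"] > policy["max_utilisation"]

def decide_and_act(breached):
    if breached:
        print("Policy breached: rebalance traffic and notify the operator")
    else:
        print("Within policy: no action required")

policy = {"max_utilisation": 0.8}
for _ in range(3):               # a real system would loop indefinitely
    decide_and_act(analyse(gather(), policy))
    time.sleep(1)
```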

This sounds fine in theory, but what do we need to understand in order to make it work? The network is currently modelled on a layered concept where each layer has a distinct job to do and talks only to its neighbouring layers, as well as to its corresponding layer at the distant end of the communications link. This model has served us well and brings many distinct advantages, including hardware and software compatibility and interoperability, international compatibility, inter-layer independence and error containment. It does, however, carry some disadvantages with it too, and the most significant of those for this discussion is that no point in the system is aware of the higher-level purpose, the reason we have the networked systems in the first place. The question of whether the network is doing what it is needed to do at the holistic level is something which no discrete layer ever asks, nor should it. It almost comes down to a division between the analogue concerns of the real world and the digital, yes/no, abilities of the systems themselves.

Taking this discussion a step further, we need to improve our ability to convey the real-world requirements, the reasons these networks exist and why we build them, to the systems which we intend should be capable of deciding whether those networks are working or not. Can these systems really know whether the loss of a certain percentage of the packets in a data stream originating on the Netflix servers will impact the enjoyment of somebody watching the on-demand movie they have paid for? From a higher perspective, the question becomes whether we can really design autonomic decision-making systems that understand the criteria the real world applies to the efficacy of the network and base their decisions on that finding. They also need to be aware of the impact any decisions they make will have on the efficacy of any other concurrent real-world requirements.

There are many mathematical abstractions which seek to model this scenario in order to predict and design the autonomic behaviours we require of our systems and you will be relieved to read that we do not propose to go into those here. In principle however we need to move towards a universal theory of autonomic behaviour. We need to find an analytic framework that facilitates a conceptual decision making model relating to what we actually want from the network. We need to couple this with an open decision making mechanism along the lines of UML in order for us to fold in the benefits of new techniques as they develop and ultimately we need to be able to build these ideas directly into programming languages such that they better reflect the real world systems we want on a higher level of abstraction.

In conclusion, we can say that autonomics is a multi-level subject and we need to take account of these different semantic levels. We need to build an assumed level of uncertainty into our programming in order to maximise our ability to engineer autonomic systems, and we need to develop standards in order to further enable the capability of our systems in this area. These are the fundamental points from which we at Rustyice Solutions begin any discussion of network management, and more especially of autonomic networking such as WAN acceleration. If you or your business are interested in examining this topic in more detail, with a view to enhancing the value which your network brings to the table, why not give us a call? We look forward to hearing from you.

Scottish SMEs increasingly adopting the latest technology

Nowadays, cloud computing, unified comms and virtualisation are the technologies most in demand, but it would seem that the public sector will not be the sector most interested in them.

According to a recent Pearlfinders Index, which monitors trends and opinions in the IT world, virtualisation remained the most popular area for investment, and more customers were looking to move to the cloud.

But in terms of buyers of IT support, the industry and manufacturing sector led the way, followed by retail and financial services, with the public sector lagging well behind.

To meet customer requirements, the skills that these new adopters are looking for include in-depth knowledge of software and hardware, but also managed services and outsourcing capabilities.

When quizzed about what they hope IT will deliver, users had some specific aims, with supporting growth and improving efficiency at the top of the wish list.

Virtualisation

The attitude towards virtualisation has changed with it no longer being seen as just a route to saving money but more as an option to introduce greater flexibility.

In just the last few months of the year, the reasons for deploying virtualisation changed, with cost cutting dropping down the list of priorities.

In a recent interview, one of our customers said, “The drivers behind virtualisation work have changed massively. Cost cutting is certainly not our main reason. We are more interested in looking at virtualisation as a way to improve the flexibility of our operations and enhance storage/DR infrastructure as part of a previously planned hardware refresh. Also high priority for us are reasons of sustainability.”

Another extremely interesting development is that, for the first time, the data back from Pearlfinders shows stronger demand for desktop rather than server virtualisation.

One benefit of this technology no longer being seen as the new kid on the block is that smaller firms are more willing to embrace what they now perceive as a tried and tested product. The influence of Microsoft's Hyper-V, VMware and Citrix in driving demand is also being seen across the sector.

If we look at unified comms, the results were surprising, with the public sector remaining a strong buyer for the time being.

Unified comms

A forward thinking IT manager at one of our customers said, “The growing penetration of hosted or cloud-based VoIP and UC platforms is driving uptake among SMEs and I am starting to win the battle when it comes to convincing the business that a hosted UC solution can be both cost-effective and high quality.”

The adoption of hosted VoIP is particularly interesting, with a fairly significant spike in interest in Q2 2011. Reduced telephone call and line rental costs, including free calls between all users within an implementation; a high level of business telephony functionality for all users with no maintenance and support charges; minimal CAPEX outlay while moving towards a future-proof technology in which the investment is protected with free upgrades; seamless integration of multiple locations; improved productivity and work-life balance through flexible working; and finally built-in business continuity with disaster recovery solutions out of the box, all conspire to present a compelling business case.

Another technological development that should start coming through is the extension of video conferencing and messaging opportunities to tablets, with the iPad, Samsung Galaxy and now Dell Streak all being more widely adopted by business users. The upside is that many of these services are very quickly integrated to support each other's features, so expect to see tight integration between hosted VoIP propositions and these new messaging opportunities.

The Ubiquitous Cloud

Finally, the area of high interest in the current market which will come as little surprise is the cloud. Cloud is still being hyped by numerous vendors, and even by some of their partners, including ourselves. The technology is certainly being used and deployed more widely, but a debate exists about the preference for private rather than public clouds. One could argue that there is a high degree of crossover between all three of these fields of technology, and where they intersect most is what we call the cloud.

Some users are perhaps a bit cynical about the cloud, viewing it as another name for a virtualised data centre, but overall the trend towards some sort of hosted solution seems to be gathering pace. The sector which seems to have embraced the cloud most fully is financial services, where some large banks made the move to the hosted environment in Q4 of 2010. For the rest of the potential user base, however, there are still concerns that will have to be overcome. Within enterprise organisations, concerns over the security and uptime of public cloud-based solutions remain, as does nervousness over running mission-critical applications in these environments.

Another issue is the ongoing debate surrounding the use of the word ‘cloud’. It has too many definitions and we have actually found that many of our customers are reacting negatively to the word when it’s used.

The rise of the SME sector as a user of virtualisation, cloud and UC is yet another milestone in the mainstream adoption of these technologies and we at Rustyice Solutions are sure that this trend will only increase.

Nine Tips and Technologies for Network WAN Optimisation

Although there is no way to actually make your true WAN speed faster, here are some tips for corporate IT professionals that can make better use of the bandwidth you already have, thus providing the illusion of a faster pipe.

1) Caching — How does it work and is it a good idea?

Caching servers have built-in intelligence to store the most recently and most frequently requested information, thus preventing future requests from traversing a WAN/Internet link unnecessarily.

Caching servers keep a time stamp of their last update to data. If the page time stamp has not changed since the last time a user has accessed the page, the caching server will present a local stored copy of the Web page, saving the time it would take to load the page from across the Internet.

Caching on your WAN link in some instances can reduce traffic by 50 percent or more. For example, if your employees are making a run on the latest PDF explaining their benefits, without caching each access would traverse the WAN link to a central server duplicating the data across the link many times over. With caching, they will receive a local copy from the caching server.
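To make the time-stamp idea concrete, here is a toy in-memory cache sketch in Python; the five-minute freshness window and the use of urllib are assumptions, and a real caching server such as Squid is of course far more sophisticated.

```python
# Toy sketch of the time-stamp idea behind a caching server: serve a
# local copy while it is still fresh, otherwise fetch across the WAN.
import time
import urllib.request

CACHE = {}        # url -> (fetched_at, body)
MAX_AGE = 300     # seconds a cached copy is considered fresh (an assumption)

def get(url):
    entry = CACHE.get(url)
    if entry and time.time() - entry[0] < MAX_AGE:
        return entry[1]                           # served locally, no WAN trip
    body = urllib.request.urlopen(url).read()     # the expensive trip across the link
    CACHE[url] = (time.time(), body)
    return body
```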

What is the downside of caching?

There are two main issues that can arise with caching:

a) Keeping the cache current – If you access a cached page that is not current, you are at risk of getting old and incorrect information. Some things you may never want to be cached, for example the results of a transactional database query. It's not that these problems are insurmountable, but there is always the risk that the data in the cache will not be synchronised with changes. I personally have been misled by old data from my cache on several occasions.

b) Volume – There are some 300 million websites on the Internet. Each site contains upwards of several megabytes of public information. The amount of data is staggering and even the smartest caching scheme cannot account for the variation in usage patterns among users and the likelihood they will hit an uncached page.

We recommend Squid as a proxy solution; however, there are more elaborate and ultimately more capable solutions, such as those from Ipanema Technologies.

2) Protocol Spoofing

Historically, many client-server applications were developed for an internal LAN, and many of these applications are considered chatty. For example, to complete a transaction between a client and server, tens of messages may be transmitted when perhaps one or two would suffice. Everything was fine until companies, for logistical and other reasons, extended their LANs across the globe using WAN links to tie different locations together.

To get a better visual on what goes on in a chatty application, perhaps an analogy will help. It's like sending family members your summer vacation pictures, and, for some insane reason, putting each picture in a separate envelope and mailing them individually on the same mail run. Obviously, this would be extremely inefficient, just as chatty applications can be.

What protocol spoofing accomplishes is to “fake out” the client or server side of the transaction and then send a more compact version of the transaction over the Internet (i.e., put all the pictures in one envelope and send it on your behalf, thus saving you postage).
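The toy sketch below illustrates the "one envelope" idea only; a real protocol spoofing product does this transparently at the protocol level, whereas here we simply batch a set of chatty messages into a single payload to show the principle.

```python
# Toy illustration of the "one envelope" idea behind protocol spoofing:
# many small, chatty messages are coalesced into a single payload before
# crossing the WAN, and unpacked again at the far end.
import json

chatty_messages = [{"seq": i, "op": "read", "record": i} for i in range(10)]

# The local "spoofing" device batches the lot into one envelope...
envelope = json.dumps(chatty_messages).encode()

# ...which is what actually crosses the WAN link, then gets unpacked.
unpacked = json.loads(envelope.decode())
assert unpacked == chatty_messages
print(f"{len(chatty_messages)} messages sent as one {len(envelope)}-byte payload")
```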

For more information, visit the Protocol Spoofing page at Ipanema Technologies.

3) Compression

At first glance, the term compression seems intuitively obvious. Most people have at one time or another extracted a compressed Windows ZIP file. If you examine the file sizes pre- and post-extraction, it reveals there is more data on the hard drive after the extraction. Well, WAN compression products use some of the same principles, only they compress the data on the WAN link and decompress it automatically once delivered, thus saving space on the link, making the network more efficient. Even though you likely understand compression on a Windows file conceptually, it would be wise to understand what is really going on under the hood during compression before making an investment to reduce network costs. Here are two questions to consider.

a) How Does it Work? — A good and easy way to visualise data compression is to compare it to the use of shorthand when taking dictation. By using a single symbol for common words, a scribe can take written dictation much faster than if he were to spell out each word. The basic principle behind compression techniques is to use shortcuts to represent common data.

Commercial compression algorithms, although similar in principle, can vary widely in practice. Each company offering a solution typically has its own trade secrets that they closely guard for a competitive advantage. However, there are a few general rules common to all strategies. One technique is to encode a repeated character within a data file. For a simple example, let’s suppose we were compressing this very document and as a format separator we had a row with a solid dash.

The data for this solid dash line comprises the ASCII character "-" repeated approximately 160 times. When transporting the document across a WAN link without compression, this line would require 160 bytes of data, but with clever compression we can encode it using a special notation such as "-" x 160.

The compression device at the front end would read the 160-character line and realise, "Duh, this is stupid. Why send the same character 160 times in a row?" So, it would incorporate a special code to depict the data more efficiently.
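That trick is essentially run-length encoding. A minimal sketch of the idea, mirroring the dash-line example, might look like this:

```python
# Minimal run-length encoding sketch: a run of repeated characters is
# replaced by a (character, count) pair.
def rle_encode(text):
    out, i = [], 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        out.append((text[i], j - i))
        i = j
    return out

line = "-" * 160
print(rle_encode(line))   # [('-', 160)] -- 160 bytes described in one token
```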

Perhaps that was obvious, but it is important to know a little about compression techniques in order to understand the limits of their effectiveness. There are many types of data that cannot be efficiently compressed.

For example, many image and voice recordings are already optimized and there is very little improvement in data size that can be accomplished with compression techniques. The companies that sell compression based solutions should be able to provide you with profiles on what to expect based on the type of data sent on your WAN link.

b) What are the downsides? — Compression always requires equipment at both ends of the link and results can be sporadic depending on the traffic type.

If you’re looking for compression vendors, we recommend FatPipe, Juniper Networks or of course Ipanema Technologies.

4) Requesting Text Only from Browsers on Remote Links

Editor's note: Although this may seem a bit archaic and backwoods, it can be effective in a pinch to keep a remote office up and running.

If you are stuck with a dial-up or slower WAN connection, have your users set their browsers to text-only mode. While this will speed up general browsing and e-mail, it will do nothing to speed up more bandwidth-intensive activities like video conferencing. The reason text-only can be effective is that most Web pages are loaded with graphics which take up the bulk of the load time. If you're desperate, switching to text-only will eliminate the graphics and save you quite a bit of time.

5) Application Shaping on Your WAN Link

Editor's Note: Application shaping is appropriate for corporate IT administrators and is generally not a practical solution for a home user. Makers of application shapers include Packeteer and Allot, and their products are typically out of the price range of many smaller networks and home users.

One of the most popular and intuitive forms of optimizing bandwidth is a method called “application shaping,” with aliases of “traffic shaping,” “bandwidth control,” and perhaps a few others thrown in for good measure. For the IT manager that is held accountable for everything that can and will go wrong on a network, or the CIO that needs to manage network usage policies, this is a dream come true. If you can divvy up portions of your WAN/Internet link to various applications, then you can take control of your network and ensure that important traffic has sufficient bandwidth.

At the centre of application shaping is the ability to identify traffic by type, for example distinguishing between Citrix traffic, streaming audio, Kazaa peer-to-peer, or something else. However, this approach is not without its drawbacks.

Here are a few common questions potential users of application shaping generally ask.

a) Can you control applications with just a firewall or do you need a special product? — Many applications are expected to use Internet ports when communicating across the Web. An Internet port is part of an Internet address, and many firewall products can easily identify ports and block or limit them. For example, the “FTP” application commonly used for downloading files uses the well known “port 21.”

The fallacy with this scheme, as many operators soon find out, is that there are many applications that do not consistently use a fixed port for communication. Many application writers have no desire to be easily classified. In fact, they don’t want IT personnel to block them at all, so they deliberately design applications to not conform to any formal port assignment scheme. For this reason, any product that aims to block or alter application flows by port should be avoided if your primary mission is to control applications by type.

b) So, if standard firewalls are inadequate at blocking applications by port, what can help?

As you are likely aware, all traffic on the Internet travels around in what is called an IP packet. An IP packet can very simply be thought of as a string of characters moving from Computer A to Computer B. The string of characters is called the “payload,” much like the freight inside a shipping container. On the outside of this payload, or data, is the address where it is being sent. These two elements, the address and the payload, comprise the complete IP packet.

In the case of different applications on the Internet, we would expect to see different kinds of payloads. For example, let’s take the example of a skyscraper being transported from New York to Los Angeles. How could this be done using a freight train? Common sense suggests that one would disassemble the office tower, stuff it into as many freight cars as it takes to transport it, and then when the train arrived in Los Angeles hopefully the workers on the other end would have the instructions on how to reassemble the tower.

Well, this analogy works with almost anything that is sent across the Internet, only the payload is some form of data, not a physical hunk of bricks, metal and wires. If we were sending a Word document as an e-mail attachment, guess what, the contents of the document would be disassembled into a bunch of IP packets and sent to the receiving e-mail client where it would be re-assembled. If I looked at the payload of each Internet packet in transit, I could actually see snippets of the document in each packet and could quite easily read the words as they went by.

At the heart of all current application shaping products is special software that examines the content of Internet packets and, through various pattern matching techniques, determines what type of application a particular flow is. Once a flow is identified, the application shaping tool can enforce the operator's policies on that flow (a toy sketch of this classify-and-police idea follows the list below). Some examples of policy are:

  • Limit Citrix traffic to 100Kbps
  • Reserve 500Kbps for ShoreTel voice traffic
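To illustrate the classify-and-police idea in the simplest possible terms, the toy sketch below matches a payload against a couple of invented signatures and looks up a made-up policy for the resulting class; real products use far deeper inspection and enforce the limits in the data path rather than just printing them.

```python
# Toy sketch of classify-then-police: inspect a packet payload, match it
# against known signatures, then apply the operator's policy for that class.
# Signatures and limits are invented purely for illustration.
POLICIES = {"citrix": 100, "voice": 500, "unknown": 50}   # Kbps caps/reservations
SIGNATURES = {b"ICA": "citrix", b"RTP": "voice"}          # crude payload patterns

def classify(payload: bytes) -> str:
    for pattern, app in SIGNATURES.items():
        if pattern in payload:
            return app
    return "unknown"

def police(payload: bytes) -> None:
    app = classify(payload)
    print(f"flow classified as {app}: shaped to {POLICIES[app]} Kbps")

police(b"...ICA session setup...")     # -> citrix, 100 Kbps
police(b"random unclassified bytes")   # -> unknown, 50 Kbps
```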

The list of rules you can apply to traffic types and flows is unlimited. However, there are downsides to application shaping of which you should be aware. Here are a few:

  • The number of applications on the Internet is a moving target. The best application shaping tools do a very good job of identifying several thousand of them, and yet there will always be some traffic that is unknown (estimated at 10 percent by experts from the leading manufacturers). That unknown traffic is lumped into an unknown classification, and an operator must make a blanket decision on how to shape this class. Is it important? Is it not? Suppose the important traffic was streaming audio for a Web cast and is not classified. Well, you get the picture. Although the theory behind application shaping by type is a noble one, the cost for a company to stay up to date is large and there are cracks.
  • Even if the application spectrum could be completely classified, the spectrum of applications constantly changes. You must keep licenses current to ensure you have the latest in detection capabilities. And even then it can be quite a task to constantly analyze and change the mix of policies on your network. As bandwidth costs lessen, how much human time should be spent divvying up and creating ever more complex policies to optimize your WAN traffic?

6) Test Your WAN-Link Speed

A common issue with slow WAN link service is that your provider is not giving you what they have advertised.
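A rough-and-ready way to check is to time the download of a reasonably large file and convert the result to megabits per second; the sketch below does just that, with a placeholder URL you would need to replace with a real test file hosted outside your own network.

```python
# Rough throughput check: time the download of a large file and convert
# the result to megabits per second. The URL is a placeholder only.
import time
import urllib.request

url = "https://example.com/100MB.bin"   # substitute a real test file

start = time.time()
data = urllib.request.urlopen(url).read()
elapsed = time.time() - start

mbps = (len(data) * 8) / (elapsed * 1_000_000)
print(f"Approximate throughput: {mbps:.1f} Mbps ({len(data)} bytes in {elapsed:.1f}s)")
```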

7) Make Sure There Is No Interference on Your Wireless Point-to-Point WAN Link

If the signal between locations served by a point-to-point link is weak, the wireless equipment will automatically downgrade its service to a slower speed. We have seen this many times, where a customer believes they have perhaps a 40-megabit backhaul link but is only realising five megabits.

As we have stated above, Ipanema Technologies represents what is, in our opinion, the best all-round solution for these types of situation. With bandwidth costs consuming a major slice of the network-related opex of any distributed organisation, it becomes ever more obvious that the right solution is to keep these costs to a minimum whilst ensuring the experience for the end user is as good as it should be. Call us at Rustyice Solutions to discuss WAN optimisation and how it can help to make and save you money.