## Cisco – Two LAN, Two WAN ISP and NAT

We recently received a request from a rural customer, tired of their unreliable 1Mbps ADSL line, to add a second ADSL line to their network. The line was ordered and installed, we added a second ADSL WIC to their router, and then set to work on the configuration. Our brief was to associate each LAN with one WAN link, so that each LAN used only its own WAN link. This is how we went about it.

Clearly the second interface needed to be associated with a second dialer, created to log in to and manage the second connection. Furthermore, we needed to add a second DHCP pool. The new configuration is shown as follows:

DHCP config:-

```
ip dhcp pool pollux
network 192.168.200.0 255.255.255.0
default-router 192.168.200.1
dns-server 62.6.40.178 62.6.40.162
```

As you can see, this connection is using BT DNS servers.

Dialer config:-

```
interface Dialer1
ip mtu 1492
ip nat outside
ip virtual-reassembly
encapsulation ppp
dialer pool 2
no cdp enable
ppp authentication chap pap callin
ppp chap hostname xxxxxxx
```

We then needed to allocate a second default route to the router and this was achieved by means of the following command:-

```
ip route 0.0.0.0 0.0.0.0 Dialer1
```

We created an access list to handle the traffic relating to the new DHCP network as follows:-

```
access-list 22 permit 192.168.200.0 0.0.0.255
```

We then added new access lists to ensure that the traffic on each LAN remained segregated from the other LAN. This was done as follows:-

```
access-list 112 deny   ip 192.168.1.0 0.0.0.255 10.0.0.0 0.255.255.255
access-list 112 deny   ip 192.168.1.0 0.0.0.255 172.16.0.0 0.15.255.255
access-list 112 deny   ip 192.168.1.0 0.0.0.255 192.168.0.0 0.0.255.255
access-list 112 permit ip 192.168.1.0 0.0.0.255 any
access-list 122 deny   ip 192.168.200.0 0.0.0.255 10.0.0.0 0.255.255.255
access-list 122 deny   ip 192.168.200.0 0.0.0.255 172.16.0.0 0.15.255.255
access-list 122 deny   ip 192.168.200.0 0.0.0.255 192.168.0.0 0.0.255.255
access-list 122 permit ip 192.168.200.0 0.0.0.255 any
```

Finally, we needed to apply route maps to mechanise the access lists, putting them to work maintaining segregation and ensuring correct operation. The following two route maps were configured:-
```
route-map pollux permit 22
match ip address 122
set interface Dialer1
!
route-map castor permit 12
match ip address 112
set interface Dialer0
```
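As a sanity check, the intended behaviour of the policy routing above can be sketched in a few lines of Python. This is purely an illustrative model (the subnets and dialer names come from the configuration; the `egress` function itself is our invention):

```python
import ipaddress

# Private ranges matched by the deny lines in access lists 112/122;
# traffic to these destinations falls through to normal routing.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

# LAN -> egress dialer, as set by the castor and pollux route maps.
POLICY = {
    ipaddress.ip_network("192.168.1.0/24"): "Dialer0",    # castor
    ipaddress.ip_network("192.168.200.0/24"): "Dialer1",  # pollux
}

def egress(src, dst):
    """Return the interface a packet would be policy-routed to."""
    src, dst = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if any(dst in net for net in RFC1918):
        return "normal routing"   # deny entries in ACLs 112/122
    for lan, dialer in POLICY.items():
        if src in lan:
            return dialer         # permit entry + set interface
    return "normal routing"
```

With this model, a castor host reaching the internet exits via Dialer0, a pollux host via Dialer1, while traffic between the two LANs bypasses policy routing altogether.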

Putting it all together, our new configuration was as follows:-

```
version 12.4
service tcp-keepalives-in
service tcp-keepalives-out
service timestamps debug datetime msec
service timestamps log datetime msec
service sequence-numbers
!
hostname xxx-core-router
!
boot-start-marker
boot-end-marker
!
security authentication failure rate 3 log
logging buffered 512000 debugging
enable secret xxxxxxx
!
no aaa new-model
no network-clock-participate slot 1
no network-clock-participate wic 0
ip cef
!
!
ip auth-proxy max-nodata-conns 3
no ip dhcp use vrf connected
!
ip dhcp pool pollux
network 192.168.200.0 255.255.255.0
default-router 192.168.200.1
dns-server 62.6.40.178 62.6.40.162
!
ip dhcp pool castor
network 192.168.1.0 255.255.255.0
default-router 192.168.1.1
dns-server 62.6.40.178 62.6.40.162
!
!
ip flow-cache timeout active 1
ip name-server 212.159.13.49
ip name-server 212.159.13.50
ip name-server 141.1.1.1
!
!
!
archive
log config
hidekeys
!
!
!
!
!
!
interface ATM0/0
no ip redirects
no ip unreachables
no ip proxy-arp
no atm ilmi-keepalive
dsl operating-mode itu-dmt
pvc 0/38
encapsulation aal5mux ppp dialer
dialer pool-member 1
!
!
interface FastEthernet0/0
ip flow ingress
ip flow egress
ip nat inside
ip virtual-reassembly
ip route-cache flow
ip policy route-map castor
duplex auto
speed auto
!
interface ATM0/1
no ip redirects
no ip unreachables
no ip proxy-arp
no atm ilmi-keepalive
dsl operating-mode itu-dmt
pvc 0/38
encapsulation aal5mux ppp dialer
dialer pool-member 2
!
!
interface FastEthernet0/1
ip nat inside
ip virtual-reassembly
ip policy route-map pollux
duplex auto
speed auto
!
interface Dialer0
ip mtu 1492
ip nat outside
ip virtual-reassembly
encapsulation ppp
dialer pool 1
no cdp enable
ppp authentication chap pap callin
ppp chap hostname xxxxxxx
!
interface Dialer1
ip mtu 1492
ip nat outside
ip virtual-reassembly
encapsulation ppp
dialer pool 2
no cdp enable
ppp authentication chap pap callin
ppp chap hostname xxxxxxx
!
router rip
version 2
network 192.168.1.0
network 192.168.200.0
!
ip forward-protocol nd
ip route 0.0.0.0 0.0.0.0 Dialer0
ip route 0.0.0.0 0.0.0.0 Dialer1
ip route 192.168.88.0 255.255.255.0 192.168.200.2
ip route 212.159.13.49 255.255.255.255 Dialer1
ip route 212.159.13.50 255.255.255.255 Dialer1
!
no ip http server
no ip http secure-server
ip nat inside source list 12 interface Dialer0 overload
ip nat inside source list 22 interface Dialer1 overload
ip nat inside source route-map castor interface Dialer0 overload
ip nat inside source route-map pollux interface Dialer1 overload
!
access-list 12 permit 192.168.1.0 0.0.0.255
access-list 22 permit 192.168.200.0 0.0.0.255
access-list 112 deny   ip 192.168.1.0 0.0.0.255 10.0.0.0 0.255.255.255
access-list 112 deny   ip 192.168.1.0 0.0.0.255 172.16.0.0 0.15.255.255
access-list 112 deny   ip 192.168.1.0 0.0.0.255 192.168.0.0 0.0.255.255
access-list 112 permit ip 192.168.1.0 0.0.0.255 any
access-list 122 deny   ip 192.168.200.0 0.0.0.255 10.0.0.0 0.255.255.255
access-list 122 deny   ip 192.168.200.0 0.0.0.255 172.16.0.0 0.15.255.255
access-list 122 deny   ip 192.168.200.0 0.0.0.255 192.168.0.0 0.0.255.255
access-list 122 permit ip 192.168.200.0 0.0.0.255 any
route-map pollux permit 22
match ip address 122
set interface Dialer1
!
route-map castor permit 12
match ip address 112
set interface Dialer0
!
!
!
control-plane
!
!
!
alias exec arp tclsh flash:arp.tcl
alias exec shutnoshut tclsh flash:shutnoshut.tcl
!
line con 0
line aux 0
line vty 0 4
access-class 12 in
transport input telnet
!
ntp clock-period 17207966
ntp server 85.119.80.233
!
end
```

The network worked beautifully. Another elegant solution from Rustyice Solutions.

## The Warning Signs Your Network Needs Replacing

If you think lightning can’t strike twice, think again. It can strike twice, thrice or even quarce. (Is that really a protologism?) A well known ferry operator on the West of Scotland has the blown-out network equipment to prove it. The coastal ferry terminals of Scotland are long-standing facilities. Surprising to some, in winter in Scotland the seemingly constant procession of Atlantic storms frequently brings intense atmospheric instability, as the UK’s Tornado and Storm Research Organisation (TORRO for short) can testify. Lightning is frequently a major problem for businesses in this area and it can have a devastating effect on their sensitive IT infrastructure. As Doug Rask, an IT manager in the area, put it: “We would take an occasional lightning strike and the equipment would be fine initially, but after some days or weeks, they’d start falling over, and we’d have to analyse the problem and quickly get things replaced.” Lightning strikes are admittedly a bit of an extreme example, we’ll concede, but they do illustrate the constant environmental stresses and strains that your static-sensitive network hardware has to face.

When systems begin to show signs of this wear and tear, it can manifest itself as chronic network niggles such as poor throughput or frequent hangs, crashes and outages. The hardware may simply be coming to the end of its natural life, or perhaps the enterprise has grown beyond the maximum capabilities of the network, says Pete Macsorley, IT manager at Corpach Pumps. Other factors that can cause an organisation to consider a network refresh include the deployment of new applications such as antivirus systems, or phased migration and collaborative services. Usually the real-world trigger for a network overhaul is not a single warning sign but a combination of several of the above.

### CONSISTENT EQUIPMENT FAILURE

When lightning strikes a building, the earthing systems should, and almost always will, protect the systems, but every now and again a strike of ferocious magnitude can overwhelm these safety systems to the point that damage is caused to the network and IT equipment. “When we get strikes on our sites, it typically doesn’t kill off our systems there and then. It does however initiate a collapsing system which culminates in the eventual hard failure of equipment. This can take 3-6 months,” said Mark Forrest, IT engineer for a well known salmon farming company.

At some point in the following six months, the kit would begin to play up. Users would notice a badly performing network, intermittent hangs and patchy access to servers, forcing Forrest to carry out systematic fault finding and replace the failing kit. “Atmospherics and particularly lightning place a cost burden of three to four switches per hit,” says Mark’s boss, Joe McGarry.

### AGEING HARDWARE

Ageing or end-of-life networking gear can compel organisations to replace their systems, especially when the initial warranty expires and/or support organisations place a premium on their support due to the increased likelihood of call out and expensive engineering time. “Sometimes this cost uplift is so great that there is no option left but to replace new for old and enjoy the more relaxed maintenance landscape that ensues,” says Dan McDougall, CTO at a major food manufacturing company.

“Networks are there for one reason, to serve the business. Unless they’re failing too frequently, the main reason we would decide to upgrade the network is the cost of their support,” McDougall says. “Sometimes the kit on your business network is just so old that the cost of the warranty dwarfs the cost of new for old.”

For example, the salmon farming company mentioned previously recently needed to move their regional offices in Oban. It made perfect sense to look at equipping the new premises with a new network and servers because most of the kit at the old office was 5-9 years old and EOL (End of Life). “I was sure I didn’t want to be moving any of my old equipment that had been through the lightning hits more than once. I wanted new for old,” Forrest says.

They had also decided to move some services, such as voice, to the cloud, and had enhanced the resolution with which they remotely monitored the underwater salmon pens. They purchased two Cisco UCS servers, three new routers and eight switches, and upgraded their WiFi network using Cisco Meraki. In addition to moving to hosted voice, they improved the storage of their IP video feeds from the farm cages and added a new access and building control system which also used the network. This upgrade brought gigabit networking to the desktop and has markedly increased the performance and efficiency of the business unit.

For example, in the past, bandwidth contention had sometimes resulted in the live video from the pens squeezing out the traffic for the very control systems used to apply feed to the pens. The upgraded network, using 802.1Q VLAN trunking, was able to segment the traffic and ensure that the requirements of each business process were safeguarded. Bandwidth contention became a thing of the past. Finally, and perhaps ironically, they also installed a new earthing system and new earthing cable, which should protect the new location’s sensitive electronic equipment more effectively from future lightning strikes.

### CREATING BANDWIDTH

Sometimes, the introduction of new applications on the network necessitates a network upgrade. For example, VoIP (hosted or owned) or realtime video services can place very specific demands on a network, and if it isn’t up to the job, a refresh can prove inevitable.

An increased rollout of virtualisation and thin client technology can also drive cost savings in terms of network user hardware which may be partially offset by the costs of the new network to support it.

### CONTINUOUS PHASED APPROACH

Some agencies adopt what is best described as a continuous phased approach to keeping the network at the cutting edge. This can prove to be a useful mitigation to the sometimes problematic expense of replacing the whole network every few years as well as enabling financial planners to smooth the requirement for capital across many financial years. It’s a cost-effective way of keeping the network stable and up to date.

DB Refrigeration in Ayr, for example, has gradually upgraded most of its 15 wiring-closet switches since 2012 and will replace a few more this year, says IT manager, Connor Piacentini.

This autumn, however, it was the core switch’s turn to be replaced. The IT department upgraded its Gigabit Ethernet Cisco Catalyst 6509 core switch to a 6509-E, which could support 10 Gigabit Ethernet. “It was 8 years old, so we knew we had to upgrade it finally,” Piacentini says.

### SHARED SERVICES

Consolidation by agencies of the use of their expensive network resources seems to be a popular way to save costs these days. For example, in the public sector many regional councils share some of the higher-end networked resources, making the burden on each organisation smaller. This can however mean that the new network must be far more capable than any of the incumbents.

Police Scotland, following its recent merger, has to build regional backup emergency operations facilities, so that if disaster strikes and one goes down, another can take over. The investment requires new network equipment to build the WAN. It is speculated that they are also negotiating with infrastructure providers to build a proprietary fibre ring.

1. Plan for future needs. When deciding how much bandwidth you need and what equipment to buy, don’t spec for your current needs alone. Spec it out for five years from now. A good rule of thumb is to plan for a 50 percent increase in bandwidth usage and a 30 percent increase in the number of employees.

2. Pair a network upgrade with a larger technology project. It’s often easier to prove return on investment and get a network upgrade funded if it’s tied to a bigger project. When IT administrators propose a private-cloud deployment, for example, they can argue that a network upgrade is critical for good cloud performance.

3. Purchase maintenance contracts only on the most critical equipment, such as main routers and switches. Purchasing contracts on all equipment can be cost-prohibitive. It can be cheaper to purchase one backup wiring closet switch and use that if a switch fails instead of purchasing contracts for each switch.

## Cisco Plays Catchup in DPI Test

If you’re a large enterprise with its own network, an ISP, or a company intent on staking its claim online, the technological force of Deep Packet Inspection (DPI) is with you.

But it may surprise you who shows up on the doorstep to sell it to you.

Results of a test of P2P filtering gear, conducted for Internet Evolution by the European Advanced Networking Test Center AG (EANTC), show that Cisco Systems Inc. (Nasdaq: CSCO), ipoque GmbH, and Procera Networks Inc. (Amex: PKT) are ready, willing, and able to help enterprises and ISPs reduce network production costs.

Some are more ready than others, though. Cisco, while matching and exceeding its rivals in various test scenarios, offers half the bandwidth capacity of the two smaller, younger companies. Cisco offers a total of four 10-Gbit/s load modules on its SCE-8000 unit, compared with eight 10-Gbit/s modules supported on ipoque’s PRX-10G Traffic Manager and Procera’s PacketLogic 10014.

During the tests, Procera’s and ipoque’s devices, both equipped with four interface pairs, were exposed to twice the load and twice the number of concurrent connections as Cisco’s device, with its two interface pairs.

Cisco, despite being the world’s biggest networking vendor, was bested in blocking P2P traffic by ipoque, whose PRX-10G allowed less than 0.01 percent of P2P traffic to bypass its filtering, compared with 2.4 percent for Cisco’s SCE-8000 and 2.9 percent for Procera’s PacketLogic.

Further, Cisco, along with Procera, required some updates and adjustments to perform as expected in detecting popular P2P protocols.

Does this mean Cisco’s not ready for P2P prime time?

Hardly. While its DPI device may be lower capacity than the competitors in this test, Cisco, like the others, appears to have emerged from the beta-like vaporware stage all vendors were in during the March 2008 P2P EANTC test.

“Whether we talk about intelligent management of consumer traffic or about freeing bandwidth by throttling the massive amount of P2P traffic, the devices we tested are ready to be rolled out in service provider backbones,” says Carsten Rossenhövel, managing director of EANTC.

As detailed in our latest Big Report, “P2P Taste Test,” vendors have improved performance and accuracy significantly since last year’s test. EANTC increased its performance test bed by a factor of 25, and still didn’t hit the limits of these boxes.

## The EIGRP (Enhanced Interior Gateway Routing Protocol) metric

EIGRP (Enhanced Interior Gateway Routing Protocol) is a network protocol that lets routers exchange routing information more efficiently than older routing protocols did. EIGRP, a Cisco-proprietary protocol, evolved from IGRP (Interior Gateway Routing Protocol), and routers using either EIGRP or IGRP can interoperate because the metric (the criteria used for selecting a route) used with one protocol can be translated into the metric of the other. It is this metric which we will examine in more detail.

Using EIGRP, a router keeps a copy of its neighbour’s routing tables. If it can’t find a route to a destination in one of these tables, it queries its neighbours for a route and they in turn query their neighbours until a route is found. When a routing table entry changes in one of the routers, it notifies its neighbours of the change. To keep all routers aware of the state of neighbours, each router sends out a periodic “hello” packet. A router from which no “hello” packet has been received in a certain period of time is assumed to be inoperative.
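The hello/hold-timer behaviour described above can be sketched as follows. This is a simplified illustration rather than EIGRP’s actual implementation; the 15-second hold time is our assumption (three times a 5-second hello interval):

```python
import time

class Neighbour:
    """Track liveness of an EIGRP-style neighbour via hello packets."""

    def __init__(self, hold_time=15.0):
        self.hold_time = hold_time          # seconds allowed between hellos
        self.last_hello = time.monotonic()  # treated as "just heard from"

    def hello_received(self):
        """Reset the hold timer whenever a hello packet arrives."""
        self.last_hello = time.monotonic()

    def is_alive(self, now=None):
        """A neighbour is assumed inoperative once the hold time expires."""
        if now is None:
            now = time.monotonic()
        return (now - self.last_hello) <= self.hold_time
```

Each periodic hello simply calls `hello_received()`; a neighbour that goes quiet for longer than the hold time is declared down and its routes are re-evaluated.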

EIGRP uses the Diffusing-Update Algorithm (DUAL) to determine the most efficient (least cost) route to a destination. A DUAL finite state machine contains decision information used by the algorithm to determine the least-cost route (which considers distance and whether a destination path is loop-free).

The Diffusing Update Algorithm (DUAL) is a modification of the way distance-vector routing typically works that allows the router to identify loop-free failover paths. This concept is easier to grasp if you imagine it geographically. Consider the map of the UK midlands shown in Figure 1. The numbers show approximate travel distances, in miles. Imagine that you live in Glasgow. From Glasgow, you need to determine the best path to Hull. Imagine that each of Glasgow’s neighbours advertises a path to Hull, each advertising its cost (travel distance) to get there. The cost from the neighbour to the destination is called the advertised distance. The cost from Glasgow itself is called the feasible distance.
In this example, Newcastle reports that if Glasgow routed to Hull through Newcastle, the total cost (feasible distance) would be 302 miles, and that the remaining cost once the traffic reaches Newcastle is only 141 miles. Table 1 shows the distances reported from Glasgow to Hull through each of Glasgow’s neighbours.

Glasgow will select the route with the lowest feasible distance which is the path through Newcastle.

If the Glasgow-Newcastle road were to be closed, Glasgow knows it may fail over to Carlisle without creating a loop. Notice that the distance from Carlisle to Hull (211 miles) is less than the distance from Glasgow to Hull (302 miles). Because Carlisle is closer to Hull, routing through Carlisle does not involve driving to Carlisle and then driving back to Glasgow (as it would for Ayr). Carlisle is a guaranteed loop-free path.

The idea that a path through a neighbour is loop free if the neighbour is closer is called the feasibility requirement and can be restated as “using a path where the neighbour’s advertised distance is less than our feasible distance will not result in a loop.”

The neighbour with the best path is referred to as the successor. Neighbours that meet the feasibility requirement are called feasible successors. In emergencies, EIGRP understands that using feasible successors will not cause a routing loop and instantly switches to the backup paths.

Notice that Ayr is not a feasible successor. Ayr’s AD (337) is higher than the FD (302). For all we know, driving to Hull through Ayr involves driving from Glasgow to Ayr, then turning around and driving back through Glasgow before continuing on to Hull (in fact, it does). Ayr will still be queried if the best path is lost and no feasible successors are available, because there could potentially be a path that way; however, paths that do not meet the feasibility requirement will not be inserted into the routing table without careful consideration.
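The successor and feasible-successor selection can be condensed into a short function. The advertised distances below come from the example above (Newcastle 141, Carlisle 211, Ayr 337); the feasible distances for Carlisle and Ayr are hypothetical, invented only to make the sketch runnable:

```python
def dual_select(routes):
    """Pick the successor and feasible successors from a table of
    neighbour -> (advertised_distance, feasible_distance)."""
    successor = min(routes, key=lambda n: routes[n][1])
    fd = routes[successor][1]  # our feasible distance to the destination
    # Feasibility requirement: a neighbour is loop-free (a feasible
    # successor) if its advertised distance is less than our FD.
    feasible = sorted(n for n, (ad, _) in routes.items()
                      if n != successor and ad < fd)
    return successor, feasible

# Glasgow's view of routes to Hull, in miles (Carlisle and Ayr FDs assumed).
routes = {
    "Newcastle": (141, 302),
    "Carlisle": (211, 306),
    "Ayr": (337, 370),
}
```

Running `dual_select(routes)` picks Newcastle as the successor and Carlisle as the only feasible successor, matching the discussion above: Ayr fails the feasibility requirement because 337 is not less than 302.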

EIGRP uses a sophisticated metric that considers bandwidth, load, reliability and delay. That metric is:

 $256 \times \left(K_1 \times \text{bandwidth} + \dfrac{K_2 \times \text{bandwidth}}{256 - \text{load}} + K_3 \times \text{delay}\right) \times \dfrac{K_5}{\text{reliability} + K_4}$

Although this equation looks intimidating, a little work will help you understand the maths and the impact the metric has on route selection.

You first need to understand that EIGRP selects the fastest path. To do that it uses K-values to balance bandwidth and delay. The K-values are constants used to adjust the relative contribution of the various parameters to the total metric. In other words, if you wanted delay to be relatively much more important than bandwidth, you might set K3 to a much larger number.

You next need to understand the variables:

• Bandwidth—Bandwidth is defined as (10,000,000 / slowest link in the path, in kbps). Because routing protocols select the lowest metric, inverting the bandwidth (using it as the divisor) makes faster paths have lower costs.
• Load and reliability—Load and reliability are 8-bit values calculated from the performance of the link. By default both are multiplied by a zero K-value, so neither is used.
• Delay—Delay is a constant value assigned to each interface type, stored in microseconds. For example, serial links have a delay of 20,000 microseconds and Ethernet interfaces a delay of 1,000 microseconds. EIGRP uses the sum of all delays along the path, in tens of microseconds.

By default, K1=K3=1 and K2=K4=K5=0. Those who followed the maths will note that when K5=0 the metric would always be zero. Because this is not useful, EIGRP simply omits the final K5/(reliability+K4) term when K5=0. Therefore, given the default K-values, the equation becomes:

 $256 \times \left(1 \times \text{bandwidth} + \dfrac{0 \times \text{bandwidth}}{256 - \text{load}} + 1 \times \text{delay}\right)$

Substituting the earlier definitions of the variables, the equation becomes 10,000,000 divided by the chokepoint bandwidth (in kbps), plus the sum of the delays (in tens of microseconds), all multiplied by 256:

 $256 \times \left(\dfrac{10^7}{\min(\text{bandwidth})} + \sum \dfrac{\text{delay}}{10}\right)$
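Putting the default-K form into code makes it easy to experiment with. A minimal sketch (the function name is ours; the integer division mirrors how the router truncates each term):

```python
def eigrp_metric(min_bw_kbps, delays_usec):
    """Classic EIGRP composite metric with default K-values
    (K1=K3=1, K2=K4=K5=0): 256 * (10^7/bandwidth + sum(delays)/10)."""
    bw_term = 10**7 // min_bw_kbps        # slowest link in the path, inverted
    delay_term = sum(delays_usec) // 10   # total delay in tens of microseconds
    return 256 * (bw_term + delay_term)

# A path whose chokepoint is a T1 (1544 kbps) crossing one serial hop:
print(eigrp_metric(1544, [20000]))            # 2169856
# The same chokepoint across two serial hops (delay doubles):
print(eigrp_metric(1544, [20000, 20000]))     # 2681856
```

The first result, 2,169,856, is the familiar metric for a T1 serial link; adding a second serial hop only adds the extra delay term, since the bandwidth term is governed by the slowest link alone.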

As a final note, it is important to remember that routers running EIGRP will not become neighbours unless their K-values match. That said, you really should not change the K-values from the defaults without a compelling reason.