I now have the lab scheduled. Currently I'm working on the IE workbook and have restarted the labs, after doing all 30 labs from Volume I+II and all 10 Core Labs. I have started over with Lab 1 and am looking through the technologies again. There were already times when I could not study any longer because I was no longer able to sit in this chair and could not do even the easiest things. After some time now, I hope I'm back on track. 🙂
21 December 2006
Calculating EIGRP delay for load balancing
You can see the defaults for K1 to K5 with "sh ip prot":
Rack1R1#sh ip prot
Routing Protocol is "eigrp 100"
  Outgoing update filter list for all interfaces is not set
  Incoming update filter list for all interfaces is not set
  Default networks flagged in outgoing updates
  Default networks accepted from incoming updates
  EIGRP metric weight K1=1, K2=0, K3=1, K4=0, K5=0
  EIGRP maximum hopcount 100
  EIGRP maximum metric variance 5
  Redistributing: eigrp 100
  EIGRP NSF-aware route hold timer is 240s
  Automatic network summarization is not in effect
  Maximum path: 4
  Routing for Networks:
    150.1.1.1/32
    164.1.12.1/32
    164.1.13.1/32
    164.1.18.1/32
  Routing Information Sources:
    Gateway         Distance      Last Update
    164.1.12.2            90      01:24:30
    164.1.13.3            90      01:24:30
    164.1.18.8            90      01:24:30
  Distance: internal 90 external 170

As you can see, by default K1 and K3 are 1 and all other values are 0, so of the full formula only (K1*Bandwidth + K3*Delay) counts. The complete formula for calculating the metric is:

(10^7/Bandwidth + Delay/10)*256

where Bandwidth is the minimum bandwidth along the path in Kbps and Delay is the total delay in microseconds. So then you just look at:

Rack1R2#sh ip eigrp top 164.1.26.0/24
IP-EIGRP (AS 100): Topology entry for 164.1.26.0/24
  State is Passive, Query origin flag is 1, 1 Successor(s), FD is 281600
  Routing Descriptor Blocks:
  0.0.0.0 (Ethernet0/0), from Connected, Send flag is 0x0
      Composite metric is (281600/0), Route is Internal
      Vector metric:
        Minimum bandwidth is 10000 Kbit
        Total delay is 1000 microseconds
        Reliability is 255/255
        Load is 1/255
        Minimum MTU is 1500
        Hop count is 0
  164.1.12.1 (Serial0/0.12), from 164.1.12.1, Send flag is 0x0
      Composite metric is (3561472/3049472), Route is Internal
      Vector metric:
        Minimum bandwidth is 1280 Kbit
        Total delay is 61000 microseconds
        Reliability is 255/255
        Load is 1/255
        Minimum MTU is 1500
        Hop count is 3

Rack1R1#sh ip eigrp topology 164.1.26.0/24
IP-EIGRP (AS 100): Topology entry for 164.1.26.0/24
  State is Passive, Query origin flag is 1, 1 Successor(s), FD is 3049472
  Routing Descriptor Blocks:
  164.1.13.3 (Serial0/1), from 164.1.13.3, Send flag is 0x0
      Composite metric is (3049472/2537472), Route is Internal
      Vector metric:
        Minimum bandwidth is 1280 Kbit
        Total delay is 41000 microseconds
        Reliability is 255/255
        Load is 1/255
        Minimum MTU is 1500
        Hop count is 2
  164.1.12.2 (Serial0/0), from 164.1.12.2, Send flag is 0x0
      Composite metric is (15247360/281600), Route is Internal
      Vector metric:
        Minimum bandwidth is 256 Kbit
        Total delay is 204980 microseconds
        Reliability is 255/255
        Load is 1/255
        Minimum MTU is 1500
        Hop count is 1

Here you can see bandwidth and delay for the local interface together with the values advertised by the EIGRP neighbor. The task is to have the path from R1 via R3 chosen 5 times more often than the path via R2. That means the metric of the path via R2 has to be 5 times higher than the metric of the path via R3, or in other words: MetricR2 = 5*MetricR3. Then you can calculate with the given formula. (Note: for me the delay from R1 to R3 is 41000 and not 40100 like in the example from the Solution Guide.)

5*(10^7/1280 + 41000/10)*256 = (10^7/256 + DelayToR2/10)*256
5*(10^7/1280 + 41000/10) = 10^7/256 + DelayToR2/10
5*(7812.5 + 4100) = 39062.5 + DelayToR2/10
5*(7812 + 4100) = 39062 + DelayToR2/10     (the router truncates the divisions to integers)
5*11912 = 39062 + DelayToR2/10
59560 = 39062 + DelayToR2/10
20498 = DelayToR2/10
204980 = DelayToR2

Then I had to subtract the 1000 microseconds of delay that R2 itself contributes on its connected Ethernet segment:

204980 - 1000 = 203980

Dividing that by 10 (the "delay" interface command is set in tens of microseconds) gives 20398, so the delay to configure on my interface s0/0 from R1 towards R2 is 20398.
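To apply the computed value, a minimal sketch (assuming R1's interface towards R2 is Serial0/0, as in the outputs above):

! on R1, towards R2 – "delay" is set in tens of microseconds
interface Serial0/0
 delay 20398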
So if you now look at the metrics from R1 via R3 and via R2 in the topology output above: 3049472*5 = 15247360. The feasible successor (via R2) has a metric exactly 5 times higher than the successor (via R3). With a variance of 5 you let this feasible successor also be installed as a valid path.
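A sketch of the matching variance config, assuming AS 100 as in the outputs above:

router eigrp 100
 ! install feasible successors with a metric up to 5x the successor's metric
 variance 5

Traffic is then shared inversely proportional to the two metrics, i.e. roughly 5 packets via R3 for every 1 packet via R2.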
A definition of different PIM modes
There are three different PIM (Protocol Independent Multicast) modes.
- dense mode – designed for multiple clients that are tightly spaced together. It is implicit for all devices in the PIM domain that they are joined to the multicast group; if they don't want to receive the multicast stream, they have to send a "prune" message. This is called the "flood and prune" behavior.
- sparse mode – devices have to send a join message; without one they will not receive the multicast stream. It is designed for networks whose clients are few and far between, a so-called "explicit join" mechanism. It depends on a central RP (rendezvous point), which organizes forwarding in the multicast domain.
- sparse-dense mode – a combination of both: if an RP is configured, sparse mode is used, but if the RP fails or is no longer reachable, the router falls back to dense mode. If you want to prevent this fallback, there is "no ip pim dm-fallback" (see the sketch after this list).
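For reference, a minimal sketch of how the modes are enabled (the interface name here is just an example):

ip multicast-routing
!
interface FastEthernet0/0
 ! one PIM mode per interface – pick one:
 ! ip pim dense-mode
 ! ip pim sparse-mode
 ip pim sparse-dense-mode
!
! global command: keep sparse-dense interfaces from falling back to dense mode
no ip pim dm-fallback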
Lab news
Okay, as you surely know already, there are now four switches in the lab: two 3550s and two 3560s. I would suggest it's better to also know the new 3560 features, for example the new queuing feature called SRR ("srr-queue"), or the additional load-sharing options for EtherChannels.
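Just to illustrate what those two features look like on a 3560, a rough sketch (the queue weights and the hash choice here are made-up example values):

mls qos
!
interface FastEthernet0/1
 ! SRR egress queuing: shared bandwidth weights for the four egress queues
 srr-queue bandwidth share 10 20 30 40
!
! one of the additional EtherChannel load-balancing options on the 3560:
! hash on source and destination IP address
port-channel load-balance src-dst-ip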