Archive for the ‘QoS’ Category

Link Fragmentation and Interleave / LFI / FRF.12

Sunday, November 9th, 2008

When a packet is sent over a link, it is delayed by serialization. Bigger packets need more time to get across the link than smaller ones. While a 1500-byte packet is in transit, a small 150-byte packet has to wait.

Here LFI can be used to solve the problem of bigger packets blocking smaller ones, for example voice packets, for too long: the small packets are interleaved between the fragments and sent before the whole 1500-byte packet has been completely transmitted over the link.
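The delay figures are easy to check: serialization delay is just packet size in bits divided by the link rate. A minimal sketch (the 64 kbps link speed is an assumed example, not taken from the text above):

```python
def serialization_delay_ms(packet_bytes, link_kbps):
    # kbps is bits per millisecond, so bytes * 8 / kbps gives milliseconds
    return packet_bytes * 8 / link_kbps

# On an assumed 64 kbps link:
print(serialization_delay_ms(1500, 64))  # 187.5 ms
print(serialization_delay_ms(150, 64))   # 18.75 ms
```

So without fragmentation, a small voice packet can sit behind a big data packet for close to 200 ms on a slow link, which is already more than the usual one-way delay budget for voice.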

By definition, a packet consists of the Layer 3 header and the end-user data, namely the payload. A frame is a packet plus the Layer 2 header and trailer.

When fragmenting frames, the router will chop the 1500-byte frame into, say, two frames, each of which again has to carry its own header and trailer information.

Make sure you configure „frame-relay fragment“ on both sides of the PVC. If not, one side will definitely have problems recognizing the fragmented traffic: if only one side fragments the traffic and the other side does not reassemble it, it will simply drop all fragmented traffic coming over the DLCI.

The recommended fragment size is 80 bytes for every 64 kbps of bandwidth, which keeps the serialization delay per fragment at about 10 ms. So 256 kbps of interface bandwidth means a fragment size of 320 bytes on this link.
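The 80-bytes-per-64-kbps rule is the same as targeting roughly 10 ms of serialization delay per fragment; a quick sketch of the arithmetic:

```python
def fragment_size_bytes(link_kbps, target_delay_ms=10):
    # 80 bytes per 64 kbps is equivalent to a 10 ms serialization target:
    # bytes = (kbps * ms) / 8 bits per byte
    return link_kbps * target_delay_ms / 8

print(fragment_size_bytes(64))   # 80.0 bytes
print(fragment_size_bytes(256))  # 320.0 bytes
```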

Note: For interleaving to work, both fragmentation and the low-latency queueing policy must be configured with shaping disabled.


access-list 101 permit ip any host

class-map voice

match access-group 101

policy-map llq

class voice

priority 64

class video

bandwidth 32

interface serial 1/0

ip address

encapsulation frame-relay

frame-relay fragment 80 end-to-end

bandwidth 128

clock rate 128000

service-policy output llq

Show and debug commands:

R5#sh frame-relay fragment interface s1/0 501

fragment size 200 fragment type end-to-end
in fragmented pkts 4511 out fragmented pkts 86
in fragmented bytes 109183 out fragmented bytes 10797
in un-fragmented pkts 162 out un-fragmented pkts 88
in un-fragmented bytes 10808 out un-fragmented bytes 5952
in assembled pkts 1053 out pre-fragmented pkts 130
in assembled bytes 94707 out pre-fragmented bytes 16317
in dropped reassembling pkts 0 out dropped fragmenting pkts
in DE fragmented pkts 4511 out DE fragmented pkts 0
in DE un-fragmented pkts 162 out DE un-fragmented pkts 0
in timeouts 0
in out-of-sequence fragments 0
in fragments with unexpected B bit set 0
in fragments with skipped sequence number 0
out interleaved packets 0
R5#sh frame-relay fragment
interface dlci frag-type size in-frag out-frag dropped-frag
Se1/0 501 end-to-end 200 4519 86 0
Se1/0 502 end-to-end 200 0 0 0
Se1/0 503 end-to-end 200 0 0 0
Se1/0 504 end-to-end 200 0 0 0
Se1/0 513 end-to-end 200 0 0 0

R5# debug frame-relay fragment interface s1/0 501
*Mar 1 04:33:08.866: Serial1/0(o): dlci 501, tx-seq-num 125, B bit set, frag_hdr 03 B1 80 7D
*Mar 1 04:33:08.870: Serial1/0(o): dlci 501, tx-seq-num 126, no bit set, frag_hdr 03 B1 00 7E
*Mar 1 04:33:08.874: Serial1/0(o): dlci 501, tx-seq-num 127, E bit set, frag_hdr 03 B1 40 7F


Cisco QOS, Second Edition, Exam Certification Guide

multiple LLQ Low Latency Queues / bandwidth (remaining) percent

Sunday, November 9th, 2008

LLQ means adding a priority queue, to forward voice and video traffic before all other traffic.

If you have multiple LLQ queues, the difference from the single-queue configuration is that with at least two priority queues, both get policed. Traffic configured with priority in a policy-map is always policed at its maximum rate: even if more bandwidth is available, in case one queue fills up while the other is still not full, the traffic is strictly policed at the maximum rate.

The bandwidth percent option reserves a percentage of a link, which still holds if the link speed changes in the future. It is calculated from the actual link speed of the interface, which can be changed with the „bandwidth“ command on the interface.

The bandwidth remaining percent option reserves a percentage of the bandwidth still unreserved on the actual link. If the link for example has a bandwidth of 1000 kbps and there are already different LLQs (100 and 200 kbps), these add up to 300 kbps already reserved. „max reserved-bandwidth“ is 75% by default on an interface, which is 750 kbps. So if you configure a reservation with remaining percent, it will be calculated from

750 kbps
- 300 kbps
= 450 kbps

So if you configure „bandwidth remaining percent 50“ you will get 225 kbps of the interface bandwidth.
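The same arithmetic as a sketch, using the numbers from the example above:

```python
link_bw_kbps = 1000
max_reserved = 0.75            # default max reserved-bandwidth: 75%
already_reserved = 100 + 200   # kbps taken by the existing LLQ classes

reservable = link_bw_kbps * max_reserved     # 750.0 kbps
remaining = reservable - already_reserved    # 450.0 kbps
share = remaining * 50 / 100                 # bandwidth remaining percent 50
print(share)  # 225.0 kbps
```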


Cisco QoS, Exam Certification Guide, Second Edition, Wendell Odom

Done 642-642 QoS today

Thursday, August 30th, 2007

If my last project hadn't been about QoS, I would not have been that good and it would not have been that easy. I'm on my way to the CCVP cert. Currently I just hope I can learn things as quickly as possible. This test has not been that difficult. QoS is a topic that has not changed a lot in the last years. Even Cisco's design guide for campus QoS is about 2 years old, from 2005. So the things you learn are about standards that have not changed much since.

What I haven't seen is an RSVP or IntServ implementation. I would like to test it a bit with dynamips, but I have to keep going.

I now know how to implement LLQ, CBWFQ, PQ, CQ and WRED, for example, and know about CoS and DSCP values. There was nothing about thresholds in the test. I had 45 questions and the passing score was 790 points. Thanks to the good VUE test center. I had quite a lot of problems with Prometric; probably this is the reason why Cisco changed to using only VUE as the testing center for Cisco tests. Everything was working fine. I did the test at OpenLine, Maastricht in the Netherlands.

QoS campus design / telephony / avaya / diffserv

Saturday, August 25th, 2007

I have implemented a QoS design for a customer with about 5000 nodes per campus. The design is relatively straightforward. Once you have decided which classes you want to implement, you have to configure the different devices. There was an access, distribution and core layer. It's best to mark as close to the applications as possible. At the access layer we had 3750s and 4000/4500s. The 3750 supports SRR, and for the 4000 it depends on the module. But on the 4000/4500 you don't have input queues. On the 6500, for example, it all depends on the module as well. You have to find out what kind of hardware queues there are. It's probably a notation like 1P3Q8T, which means something like 1 priority queue, 3 normal queues and 8 thresholds per queue. Sometimes you can also find the notation 4Q8T/1P3Q8T, which means the one priority queue can act as either a normal or a priority queue.

If you have measurements of the actual network traffic, you can distribute the traffic across the different queues and thresholds. If not, it's maybe good enough to have a priority queue and to roughly estimate the other queues, but not go too deep into changing the default thresholds. Later it might become necessary, if you don't have enough queues left and you want to keep up your queuing schema.

There is a good design guide for enterprise campus QoS implementation and I suggest taking it as a starting point in your QoS campus design. It covers all the different Catalyst types and also gives some suggestions about ECN/DBL (dynamic buffer limiting) marking, especially for the 4000/4500 Catalysts.

This could be a good feature when both stations (server/client) support the ECN flag. I read that XP/Vista do not have it enabled by default. But it's only available on the 4000/4500.

Make sure you use hardware queueing and not queuing in software. This will save you from problems with CPU overload. As long as queueing is done in hardware, you are on the safe side.

Avaya does not have recommendations about QoS implementations on Cisco hardware. Phones can be configured by setting voice bearer traffic to CoS 5 and signaling traffic to CoS 3. You can overwrite the data port with the connected PC to CoS 0. This would be a relatively straightforward setup.

QoS is quite a complex task. It's necessary to develop and administer it according to current needs constantly.


642-642 QoS

Thursday, August 23rd, 2007

Currently going for the first QoS test for the CCVP. I think this is the most difficult one; I like to take the hard part first. I haven't seen any RSVP implementations. I wonder if it's really widely used out in networks for QoS. I will write some comments on my QoS implementation for a customer with about 5000 access ports per site in the next days.

QoS can be quite a complex task. It seems simple, but implementing it consistently across different kinds of hardware queues and different queueing techniques can be quite challenging.

DSCP values and usage guidelines

Thursday, October 19th, 2006
|   Service     |  DSCP   |    DSCP     |       Application        |
|  Class name   |  name   |    value    |        Examples          |
|Administration |  CS7    |   111000    | Heartbeats, SSH, Telnet  |
|Network Control|  CS6    |   110000    | Network routing          |
| Telephony     | EF,CS5  |101110,101000| IP Telephony             |
| Multimedia    |AF41,AF42|100010,100100| Video conferencing       |
| Conferencing  |  AF43   |100110       | Interactive gaming       |
| Multimedia    |AF31,AF32|011010,011100|Broadcast TV, Pay per view|
| Streaming     |AF33, CS4|011110,100000|Video surveillance        |
| Low Latency   |AF21,AF22|010010,010100|Client/server transactions|
|   Data        |AF23, CS3|010110,011000|peer-to-peer signaling    |
|High Throughput|AF11,AF12|001010,001100|Store&forward applications|
|    Data       |AF13, CS2|001110,010000|Non-critical OAM&P        |
|    Standard   | DF,(CS0)|   000000    | Undifferentiated         |
|               |         |             | applications             |
| Low Priority  | CS1     |   001000    | Any flow that has no BW  |
|     Data      |         |             | assurance                |
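For reference, the binary values above map to these decimal code points; a small sketch (the shift into the ToS byte is the standard DSCP placement, and note that EF is 101110, decimal 46):

```python
# DSCP names mapped to their six-bit code points
dscp = {
    "CS7": 0b111000, "CS6": 0b110000, "EF": 0b101110, "CS5": 0b101000,
    "AF41": 0b100010, "AF42": 0b100100, "AF43": 0b100110,
    "AF31": 0b011010, "AF32": 0b011100, "AF33": 0b011110, "CS4": 0b100000,
    "AF21": 0b010010, "AF22": 0b010100, "AF23": 0b010110, "CS3": 0b011000,
    "AF11": 0b001010, "AF12": 0b001100, "AF13": 0b001110, "CS2": 0b010000,
    "DF": 0b000000, "CS1": 0b001000,
}
# The DSCP occupies the upper six bits of the old ToS byte
print(dscp["EF"], dscp["EF"] << 2)  # 46 184
```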


CQ Custom Queuing

Thursday, October 5th, 2006
  • Custom Queuing has 16 queues available.
  • All queues are serviced in a round-robin fashion.
  • Bandwidth is specified in terms of byte count and queue length

RTP/Voice traffic range

Wednesday, September 27th, 2006

The UDP port range that RTP/VoIP traffic can use is:

permit udp any any range 16384 32767

TCP port 1720 is also used for voice control connections (H.323 call signaling), similar to port 21 with FTP.

What’s ip precedence?

Wednesday, September 13th, 2006
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
|Version|  IHL  |Type of Service|          Total Length         |
|         Identification        |Flags|      Fragment Offset    |
|  Time to Live |    Protocol   |         Header Checksum       |
|                       Source Address                          |
|                    Destination Address                        |
|                    Options                    |    Padding    |

The precedence value is carried in the Type of Service field of the IP header. That field is one byte; IP precedence uses the first three bits of it.

      0     1     2     3     4     5     6     7
|    Precedence   |  D  |  T  |  R  |  0  |  0  |

(Precedence: bits 0-2; D/T/R: delay, throughput, reliability flags; bits 6-7 unused)

The IP precedence value is set once and carried across network borders. It does not have to be set again by each router on the path.
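Extracting the precedence from a ToS byte is just a shift; a minimal sketch:

```python
def ip_precedence(tos_byte):
    # IP precedence is the top three bits of the ToS byte
    return tos_byte >> 5

print(ip_precedence(0xB8))  # ToS 184 (DSCP EF) -> precedence 5
print(ip_precedence(0x00))  # ToS 0 -> precedence 0
```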


QoS and what to manage with what

Monday, August 14th, 2006

There are many different kinds of QoS techniques around at Cisco. All of them are like a toolbox for managing traffic. Each technique has its preferred operational area. So here are some scenarios in which you would use a certain technique.

  • Classification

You want to provide a preferred service to a type of traffic. The packet may be marked or not. Classification done on only one device, without marking the packet, is described as per-hop classification. PQ (priority queuing) and CQ (custom queuing) are techniques used for this. Possible methods to identify certain traffic are ACLs, policy-based routing, committed access rate (CAR) or network-based application recognition (NBAR).

  • Congestion Management

What if an interface is loaded above its given bandwidth? Congestion occurs, and priority queuing (PQ), custom queuing (CQ), weighted fair queuing (WFQ), and class-based weighted fair queuing (CBWFQ) are tools to manage congestion.

  • Queue Management

If a queue fills up and buffers overflow, packets must be dropped. Deciding which packets to drop, for example packets with lower priority so that higher-priority packets can still be delivered, is done with weighted random early detection (WRED).

  • Link Efficiency

Some packets might be too large for efficient transport and it might be necessary to compress them. RTP header compression (Compressed Real-Time Protocol header) can be used for this.

  • Traffic shaping and policing

When shaping traffic, you make sure a certain link does not exceed the configured bandwidth, or some other given bandwidth; excess traffic is buffered. Policing works similarly, but excess traffic is simply discarded.

Queuing techniques, algorithms and when to use them.

  • FIFO, First-in, first-out

This is the default queuing algorithm; it delivers packets in the same order it receives them, but may buffer them in between.

  • PQ, Priority queuing

PQ gives some traffic priority over other traffic; each packet is placed into one of four queues: high, medium, normal, low. Higher-priority queues get absolute preferential treatment over lower-priority queues.

  • CQ, Custom queuing

CQ is used to provide a guaranteed bandwidth, leaving the remaining bandwidth to other traffic. CQ does this by assigning a specific amount of queue space to each class of packet and then servicing the queues round-robin. PQ and CQ are statically configured; they don't adapt to network changes automatically.

  • WFQ, Flow-based weighted fair queuing

WFQ provides consistent response times on congested networks; each queue is serviced on a byte-count basis. Each time 1000 bytes are serviced, a stream with 2×500-byte packets is served equally to one with a single 1000-byte packet. It's mostly used on serial interfaces. WFQ is IP-precedence aware.

  • CBWFQ, Class-based weighted fair queuing

CBWFQ is used to provide a minimum of bandwidth to a certain flow. It's a guaranteed amount of bandwidth; if it's not used by the class, other applications can use it.

Tools for congestion avoidance:

  • WRED  Weighted random early detection is used to avoid congestion before it becomes a problem. It's an algorithm that drops packets when congestion is about to occur. Senders then slow down their transmission speed.
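The WRED idea can be sketched as a drop-probability function: no drops below a minimum average queue depth, a linearly rising drop probability up to the maximum threshold, and tail drop above it. The thresholds here are illustrative values, not Cisco defaults:

```python
import random

def wred_drop(avg_queue, min_th, max_th, max_p):
    """Decide whether to drop a packet, WRED-style."""
    if avg_queue < min_th:
        return False          # little congestion: never drop
    if avg_queue >= max_th:
        return True           # above max threshold: tail drop
    # in between: drop probability rises linearly toward max_p
    p = max_p * (avg_queue - min_th) / (max_th - min_th)
    return random.random() < p

# Lower-priority traffic is typically given a lower min threshold,
# so it starts being dropped earlier than higher-priority traffic.
print(wred_drop(5, min_th=10, max_th=40, max_p=0.1))   # False
print(wred_drop(50, min_th=10, max_th=40, max_p=0.1))  # True
```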