US20110019572A1 - Method and apparatus for shared shaping - Google Patents

Method and apparatus for shared shaping Download PDF

Info

Publication number
US20110019572A1
Authority
US
United States
Prior art keywords
queue
priority
traffic
rate
scheduler node
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US12/899,845
Inventor
Thomas A. Lemaire
John C. Carney
Paul Giacobbe
Michael E. Lipman
Ryan T. Ross
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Juniper Networks Inc
Original Assignee
Juniper Networks Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Juniper Networks Inc
Priority to US12/899,845
Publication of US20110019572A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L12/00 Data switching networks
    • H04L12/54 Store-and-forward switching systems
    • H04L12/56 Packet switching systems
    • H04L12/5601 Transfer mode dependent, e.g. ATM
    • H04L2012/5638 Services, e.g. multimedia, GOS, QOS
    • H04L2012/5646 Cell characteristics, e.g. loss, delay, jitter, sequence integrity
    • H04L2012/5651 Priority, marking, classes
    • H04L2012/5678 Traffic aspects, e.g. arbitration, load balancing, smoothing, buffer management
    • H04L2012/5679 Arbitration or scheduling
    • H04L47/00 Traffic control in data switching networks
    • H04L47/10 Flow control; Congestion control
    • H04L47/22 Traffic shaping
    • H04L47/24 Traffic characterised by specific attributes, e.g. priority or QoS
    • H04L47/2416 Real-time traffic
    • H04L47/2441 Traffic characterised by specific attributes, e.g. priority or QoS relying on flow classification, e.g. using integrated services [IntServ]
    • H04L47/25 Flow control; Congestion control with rate being modified by the source upon detecting a change of network conditions
    • H04L47/39 Credit based

Definitions

  • FIG. 6 illustrates operation of simple shared shaping that may be implemented on top of another hierarchical scheduler, in network device 202 , consistent with principles of the invention.
  • the exemplary hierarchical scheduler of FIG. 6 includes a physical port 602 , a low priority VP 1 group scheduler node 604 , a medium priority group scheduler node 618 , a high priority group scheduler node 622 , a VC 1 low priority scheduler node 606 , a VC 2 low priority scheduler node 608 , a VC 3 low priority scheduler node 610 , a VC 1 low priority queue 612 , a VC 2 low priority queue 614 , a VC 3 low priority queue 616 , a medium priority queue 620 , and a high priority queue 624 .
  • a VP is a subscriber interface that includes a group of VCs.
  • network device 202 forwards traffic from port 602 according to priority.
  • Network device 202 may forward low, or no priority data, such as, for example, data traffic, from port 602 to low priority VP 1 group scheduler node 604 , which may then forward the traffic to one of three low priority scheduler nodes 606 , 608 , 610 , depending on whether the traffic is for VC 1 , VC 2 , or VC 3 , respectively.
  • VP 1 includes traffic for VC 1 , VC 2 , and VC 3 .
  • Each of scheduler nodes 606 , 608 , 610 may then forward the traffic to an associated queue 612 , 614 , 616 , respectively, for transmission through network 102 .
  • Network device 202 may forward medium priority traffic, such as, for example, video traffic, from port 602 to medium priority group scheduler node 618 .
  • Scheduler node 618 may then forward the traffic to queue 620 for holding medium priority VP 1 traffic for transmission.
  • Network device 202 may forward high or strict priority traffic, such as, for example, voice traffic, from port 602 to high priority group scheduler node 622 .
  • Scheduler node 622 may then forward the traffic to queue 624 for holding high priority VP 1 traffic for transmission.
  • only low (or no) priority queues may be actively controlled by simple shared shaping.
  • a simple shared shaper configured on low priority VP 1 group scheduler node 604 for VP 1 may track, for example, an enqueue rate for VP 1 high priority queue 624 and VP 1 medium priority queue 620 .
  • the simple shared shaper may then shape VP 1 group scheduler node 604 to have a rate equal to:
  • configured shared shaping rate for subscriber interface VP 1 −(VP 1 high priority enqueue rate+VP 1 medium priority enqueue rate).
  • the constituents of the VP 1 interface are VP 1 high priority traffic, VP 1 medium priority traffic, and VP 1 low priority traffic.
  • the flowchart of FIG. 7 explains the process of shared shaping for VPs.
  • either VCs or VPs may be shaped, but not both.
  • network device 202 may prepare to shape bandwidth for VP 1 , the only VP in this example (act 702 ). Rate updates for VP 1 may be available periodically, such as once per second, or another time period.
  • Network device 202 may obtain the latest enqueue rate for VP 1 's high priority queue 624 (act 704 ).
  • network device 202 may obtain the latest enqueue rate for VP 1 's medium priority queue 620 (act 706 ).
  • Network device 202 may then determine a shared shaping rate or bandwidth that may be used for low priority VP 1 traffic by subtracting the sum of the enqueue rates for its constituent high and medium priority queues 624 , 620 , respectively, from the bandwidth permitted for subscriber interface VP 1 (act 708 ). Thus, in this example, VC 1 , VC 2 , and VC 3 share the calculated shared shaping rate. Network device 202 may then determine whether there are additional VPs to shape (act 710 ). In this example, there are no other VPs; therefore, the process is completed until the next time period.
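  • The following C sketch illustrates the FIG. 7 update just described, assuming the shaping rate is clamped at zero when the high and medium priority constituents already consume the configured rate. The structure and function names, and the example rates, are illustrative and do not come from the patent.

```c
/* Sketch of the FIG. 7 update (acts 702-710): the low priority VP group
 * scheduler node is shaped to the VP's configured rate minus the enqueue
 * rates measured for its high and medium priority constituents.
 * Names and the clamp at zero are assumptions for illustration. */
#include <stdio.h>

typedef struct {
    unsigned configured_rate;      /* configured shared shaping rate for the VP */
    unsigned high_enqueue_rate;    /* latest enqueue rate of the VP high priority queue */
    unsigned medium_enqueue_rate;  /* latest enqueue rate of the VP medium priority queue */
} vp_rate_update;

/* Shared shaping rate applied to the low priority VP group scheduler node. */
static unsigned vp_shared_shaping_rate(const vp_rate_update *u)
{
    unsigned used = u->high_enqueue_rate + u->medium_enqueue_rate;
    return used >= u->configured_rate ? 0 : u->configured_rate - used;
}

int main(void)
{
    /* Example rates only: 500 + 1000 units in use leaves 6500 for low priority traffic. */
    vp_rate_update vp1 = { .configured_rate = 8000,
                           .high_enqueue_rate = 500,
                           .medium_enqueue_rate = 1000 };
    printf("VP1 low priority shaping rate: %u\n", vp_shared_shaping_rate(&vp1));
    return 0;
}
```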
  • FIG. 8 illustrates operation of compound shared shaping that may be implemented on top of a hierarchical scheduler in network device 202 consistent with the principles of the invention.
  • the exemplary hierarchical scheduler of FIG. 8 includes a physical port 802 , a VP 1 group scheduler node 804 , a VC 1 low priority scheduler node 806 , a VC 2 low priority scheduler node 808 , a VC 3 low priority scheduler node 810 , a VC 1 low priority queue 812 , a VC 2 low priority queue 814 , a VC 3 low priority queue 816 , a medium priority group scheduler node 818 , a VP 1 medium priority scheduler node 820 , a VC 1 medium priority queue 822 , a VC 2 medium priority queue 824 , a VC 3 medium priority queue 826 , a high priority group scheduler node 828 , a VP 1 high priority scheduler node 830 , a VC 1 high priority queue 832 , a VC 2 high priority queue 834 , and a VC 3 high priority queue 836 .
  • network device 202 may receive network traffic via physical port 802 .
  • Network device 202 may then forward traffic from port 802 according to a priority that may be associated with a class of traffic. For example, voice traffic may be assigned a strict or highest priority, video traffic may be assigned medium priority, and other traffic, such as data traffic for VC 1 , VC 2 , and VC 3 may be assigned a low priority.
  • network device 202 may forward voice traffic from port 802 to high priority group scheduler node 828 , video traffic from port 802 to medium priority group scheduler node 818 , and other traffic, such as data traffic, from port 802 to low priority or no priority scheduler nodes, such as VP 1 group scheduler node 804 .
  • Network device 202 may forward VP 1 low priority traffic to VC 1 low priority queue 812 , VC 2 low priority queue 814 , or VC 3 low priority queue 816 via VC 1 low priority scheduler node 806 , VC 2 low priority scheduler node 808 , and VC 3 low priority scheduler node 810 , respectively.
  • Network device 202 may forward traffic from medium priority group scheduler node 818 to VC 1 medium priority queue 822 , VC 2 medium priority queue 824 , or VC 3 medium priority queue 826 through VP 1 medium priority scheduler node 820 .
  • network device 202 may forward traffic from high priority group scheduler node 828 to VC 1 high priority queue 832 , VC 2 high priority queue 834 , or VC 3 high priority queue 836 through VP 1 high priority scheduler node 830 .
  • Compound shared shaping may provide the ability to shape VPs and/or VCs.
  • scheduler node 804 may be permitted to use unused bandwidth from scheduler nodes 820 and 830 for VP 1 .
  • any of scheduler nodes 806 , 808 , and 810 may be permitted to use unused bandwidth of queues 832 , 834 , 836 , respectively, and 822 , 824 , and 826 , respectively.
  • VP 1 scheduler node 804 , which is for low priority VP 1 traffic, may be permitted to use the unused bandwidth of VP 1 high and medium priority traffic, and VC 1 , VC 2 , and VC 3 may use the unused bandwidth of high and medium priority traffic for VC 1 , VC 2 , and VC 3 , respectively, for low priority traffic.
  • FIG. 9 is a flowchart of an exemplary compound shared shaping process that may be implemented in network device 202 consistent with the principles of the invention.
  • a rate of dequeueing from queues or scheduler nodes may be monitored and updated periodically, such as, for example, every 8 milliseconds or any other useful time period.
  • Hardware such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA) may perform the monitoring and updating. Alternatively, monitoring and updating may be performed by one or more processors executing program instructions.
  • total rate credits may be determined by subtracting the dequeue rate of each constituent of the subscriber interface from a rate limit for the subscriber interface. Updated total rate credits may be stored in scheduler descriptors, which are storage areas associated with scheduler nodes or queues.
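  • A minimal sketch of the total rate credit computation just described, assuming the credits are stored per subscriber interface and clamped at zero. The structure and field names are illustrative, not taken from the patent.

```c
/* Sketch of the periodic rate update: total shared rate credits for a
 * subscriber interface are its rate limit minus the dequeue rate reported
 * for each of its constituents. The clamp at zero is an assumption. */
#include <stddef.h>

typedef struct {
    long dequeue_rate;   /* units dequeued during the last reporting interval */
} constituent_desc;

static long total_rate_credits(long rate_limit,
                               const constituent_desc *c, size_t n)
{
    long credits = rate_limit;
    for (size_t i = 0; i < n; i++)
        credits -= c[i].dequeue_rate;
    return credits > 0 ? credits : 0;  /* assumed: no negative credit pool */
}
```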
  • Network device 202 may perform compound shared shaping on a priority basis or a weighted basis, as will be explained below.
  • compound shared shaping may first be performed for all priority-based constituents, followed by all weight-based constituents. Of the priority-based constituents, compound shared shaping is performed for higher priority constituents before lower priority constituents.
  • Network device 202 may begin by obtaining, from a latest rate update, the total shared rate credits for a subscriber interface, which in this case is VP 1 (act 902 ). Next, network device 202 may obtain the constituent's current rate credits for the first constituent (VP 1 high priority scheduler node 830 ) to be shaped (act 906 ).
  • the current rate credits for a constituent is a value representing a number of units (for example, bytes or another unit) that the constituent may send during a reporting interval, for example, 8 milliseconds.
  • Network device 202 may obtain the constituent's current rate credits from the scheduler node's scheduler descriptor. Network device 202 may then obtain the constituent's clip from the scheduler descriptor.
  • the clip is a configured size, which may be in bytes or any other convenient unit, such that network device 202 may not forward traffic until at least a size of the configured clip has accumulated.
  • Network device 202 may then determine whether the constituent has a rate deficit by subtracting current rate credits from the clip (act 910 ). If the deficit is not greater than 0, then network device 202 may determine whether there are more constituents and more total rate credits (act 914 ). If there are more constituents, for example, VP 1 medium priority scheduler node 820 , network device 202 may prepare to update rate credits for the next constituent (act 916 ).
  • network device 202 may take rate credit for the constituent from total rate credits for the subscriber interface (VP 1 for this example) (act 918 ) and may determine whether there are more constituents and more credits (act 914 ). After updating rate credits for a constituent (scheduler node 820 , for this example), network device 202 may then proceed to update rate credits for the next constituent of the subscriber interface (VP 1 group scheduler node 804 ), if more total rate credits for the subscriber interface (VP 1 ) exist.
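  • The sketch below walks a subscriber interface's constituents in the priority-first order described above, applying the deficit test against the clip and handing out credits up to each constituent's rate cap (the priority branch of FIG. 10). Field names and the ordering of the constituent array are assumptions for illustration.

```c
/* Sketch of the FIG. 9 pass (acts 902-918) over priority-based constituents. */
#include <stddef.h>

typedef struct {
    long current_rate_credits;  /* from the scheduler descriptor */
    long clip;                  /* configured size traffic must reach before forwarding */
    long rate_cap;              /* rate cap for a priority constituent (see FIG. 10) */
} sched_descriptor;

/* Priority-based take (acts 1006-1010): give the constituent
 * min(total credits, its rate cap) and return the remaining credits. */
static long take_rate_credits(long total_rate_credits, sched_descriptor *c)
{
    long tmp = total_rate_credits < c->rate_cap ? total_rate_credits : c->rate_cap;
    c->current_rate_credits += tmp;
    return total_rate_credits - tmp;
}

static void update_constituents(long total_rate_credits,
                                sched_descriptor *constituents, size_t n)
{
    /* Constituents are assumed ordered priority-first, as the text describes. */
    for (size_t i = 0; i < n && total_rate_credits > 0; i++) {
        long deficit = constituents[i].clip - constituents[i].current_rate_credits;
        if (deficit > 0)   /* act 910: only constituents with a rate deficit take credits */
            total_rate_credits = take_rate_credits(total_rate_credits, &constituents[i]);
    }
}
```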
  • FIG. 10 is a flowchart that explains exemplary processing for the take rate credits operation (act 918 : FIG. 9 ).
  • network device 202 may get a rate cap for the current constituent from the constituent's scheduler descriptor (act 1002 ).
  • network device 202 may perform priority shared shaping if the rate cap is greater than 0 or weighted shared shaping if the rate cap is not greater than 0.
  • other indicators of weighted shared shaping may be used, such as, for example, a separate weighted shared shaping flag, or a negative rate cap.
  • network device 202 may determine tmp, which is a minimum of total rate credits for the subscriber interface and the rate cap for the constituent (act 1006 ). Tmp is an amount of rate or bandwidth that network device 202 may provide to the constituent. Therefore, tmp cannot be more than the rate cap. Network device 202 may then increment the constituent's current rate credits by tmp (act 1008 ) and may decrement total rate credits for the subscriber interface by tmp (act 1010 ). If total rate credits is less than or equal to 0, network device 202 may indicate that no more credits exist, such that the update process of FIG. 9 (act 914 ) determines that the process is completed for the subscriber interface.
  • network device 202 may accumulate the sum of weights for all weighted constituents sharing rate for a subscriber interface (act 1016 ).
  • weights may range from 1 to 31. Additional, fewer, or other weights may be used in other implementations.
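  • A hedged sketch of the FIG. 10 branch: a positive rate cap selects priority shared shaping, while a non-positive rate cap marks a weighted constituent whose weight is only accumulated here for the later FIG. 11 pass. The structure and function names are illustrative assumptions.

```c
/* Sketch of the FIG. 10 decision (acts 1002-1016). */
typedef struct {
    long current_rate_credits;
    long rate_cap;     /* > 0: priority constituent; otherwise weighted */
    int  weight;       /* e.g., 1..31 for weighted constituents */
} constituent;

static long take_rate_credits_v2(long total_rate_credits, constituent *c,
                                 long *weight_sum)
{
    if (c->rate_cap > 0) {
        /* Priority shared shaping: tmp = min(total credits, rate cap). */
        long tmp = total_rate_credits < c->rate_cap ? total_rate_credits : c->rate_cap;
        c->current_rate_credits += tmp;       /* act 1008 */
        total_rate_credits -= tmp;            /* act 1010 */
    } else {
        /* Weighted shared shaping: only accumulate the weight for the FIG. 11 pass. */
        *weight_sum += c->weight;             /* act 1016 */
    }
    return total_rate_credits;
}
```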
  • FIG. 11 is a flowchart of an exemplary process for sharing rate or bandwidth among weighted constituents of a subscriber interface.
  • Network device 202 may begin by calculating mult by dividing the available total rate credits by the sum of weights previously determined at act 1016 (act 1102 ). In some implementations, network device 202 may perform the division of act 1102 by using a table lookup in order to keep processing time to a minimum. Next, network device 202 may prepare to process the first weighted constituent (act 1104 ). If a weighted constituent does not exist (act 1106 ), then the process is completed.
  • network device 202 may obtain the constituent's current rate credits and weight from the constituent's scheduler descriptor and may increment the constituent's current rate credits by the product of mult with the weight of the constituent (act 1108 ). Network device 202 may then prepare to process the next weighted constituent for the subscriber interface (act 1110 ). Network device 202 may then repeat acts 1106 through 1110 until no weighted constituents remain to be processed.
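  • The following sketch mirrors the FIG. 11 pass: the leftover credits divided by the accumulated weight sum give mult, and each weighted constituent's credits grow by mult times its weight. Plain integer division stands in for the table lookup mentioned above, and the names are illustrative.

```c
/* Sketch of the FIG. 11 weighted distribution (acts 1102-1110). */
#include <stddef.h>

typedef struct {
    long current_rate_credits;
    int  weight;    /* 1..31 in the example implementation */
} weighted_constituent;

static void distribute_weighted_credits(long total_rate_credits, long weight_sum,
                                        weighted_constituent *c, size_t n)
{
    if (weight_sum <= 0 || total_rate_credits <= 0)
        return;
    long mult = total_rate_credits / weight_sum;          /* act 1102 */
    for (size_t i = 0; i < n; i++)                        /* acts 1104-1110 */
        c[i].current_rate_credits += mult * c[i].weight;  /* act 1108 */
}
```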
  • FIG. 12 illustrates an exemplary hierarchical scheduler configuration within network device 202 in which weighted shared shaping may be used.
  • the exemplary hierarchical scheduler of FIG. 12 includes a physical port 1202 , a VP 1 group scheduler node 1204 , a VC 1 low priority scheduler node 1206 , a VC 1 low priority data queue 1208 , a VC 1 low priority data 2 queue 1210 , a VC 2 low priority scheduler node 1212 , a VC 2 low priority data queue 1214 , a VC 2 low priority data 2 queue 1216 , a VC 3 low priority scheduler node 1218 , a VC 3 low priority data queue 1220 , a VC 3 low priority data 2 queue 1222 , a medium priority group scheduler node 1205 , a VP 1 medium priority scheduler node 1224 , a VC 1 video traffic queue 1226 , a VC 2 video traffic queue 1228 , a VC 3 video traffic queue 1230 , a high priority group scheduler node 1232 , a VP 1 high priority scheduler node 1234 , a VC 1 voice traffic queue 1236 , a VC 2 voice traffic queue 1238 , and a VC 3 voice traffic queue 1240 .
  • traffic may be one of several traffic classes, voice, video, data, and a new data class, data 2 .
  • Traffic may arrive via physical port 1202 .
  • Network device 202 may forward low priority VP 1 traffic from port 1202 to VP 1 group scheduler node 1204 .
  • VP 1 includes traffic for VC 1 , VC 2 and VC 3 .
  • VP 1 group scheduler 1204 may then forward traffic to VC 1 low priority scheduler node 1206 , VC 2 low priority scheduler node 1212 or VC 3 low priority scheduler node 1218 .
  • VC 1 low priority scheduler 1206 may then forward VC 1 data traffic to queue 1208 and VC 1 data 2 traffic to queue 1210 .
  • VC 2 low priority scheduler node 1212 may forward data traffic to queue 1214 and data 2 traffic to queue 1216 .
  • VC 3 low priority scheduler node 1218 may forward data traffic to queue 1220 and data 2 traffic to queue 1222 .
  • Medium priority group scheduler node 1205 may forward VP 1 traffic to VP 1 medium priority scheduler node 1224 .
  • VP 1 medium priority scheduler node 1224 may then forward VC 1 video traffic to queue 1226 , VC 2 video traffic to queue 1228 , and VC 3 video traffic to queue 1230 .
  • High priority group scheduler node 1232 may forward VP 1 traffic to VP 1 high priority scheduler node 1234 .
  • VP 1 high priority scheduler node 1234 may then forward VC 1 voice traffic to queue 1236 , VC 2 voice traffic to queue 1238 , and VC 3 voice traffic to queue 1240 .
  • network device 202 may allocate rate credits or bandwidth to constituent, VP 1 high priority traffic, up to the constituent's rate cap, and then may allocate rate credits or bandwidth to constituent, VP 1 medium priority traffic, up to the constituent's rate cap. Remaining bandwidth may be allocated to low priority traffic.
  • the low priority constituents may be defined as VC 1 data, VC 1 data 2 , VC 2 data, VC 2 data 2 , VC 3 data, and VC 3 data 2 .
  • the low priority constituents may be assigned weights.
  • VC 1 data may have a weight of 10
  • VC 1 data 2 may have a weight of 20
  • VC 2 data may have a weight of 10
  • VC 2 data 2 may have a weight of 20
  • VC 3 data may have a weight of 10
  • VC 3 data 2 may have a weight of 20.
  • network device 202 may calculate mult to be available total rate credits divided by the sum of weights, which for VC 1 is (6,500)/(10+20), or approximately 216 (act 1102 ).
  • Network device 202 may then prepare to determine the weighted credits of the first weighted constituent, VC 1 data, (act 1104 ).
  • Network device 202 may determine whether the weighted constituent exists (act 1106 ). For this example, the result is, “Yes.”
  • Network device 202 may then calculate rate credits for the constituent to be current rate credits+(mult*weight), which is the current rate credits plus approximately 2160 credits (act 1108 ).
  • Network device 202 may then prepare to process the next weighted constituent, VC 1 data 2 , with a weight of 20.
  • Network device 202 determines that this constituent exists (act 1106 ) and determines that mult*weight, which is about 4320, is to be added to the constituent's current rate credits (act 1108 ).
  • Next, network device 202 prepares to process the next weighted constituent of VC 1 (act 1110 ).
  • Network device 202 may determine that the constituent does not exist (act 1106 ) and then processing may end.
  • VC 2 and VC 3 data and data 2 traffic are the same as VC 1 data and data 2 traffic
  • the calculations for VC 2 and VC 3 are the same as for VC 1 , using the assumption that voice may consume 500 rate credits at any given moment and video may consume 1000 rate credits at any given moment, leaving 6500 rate credits to subdivide between data and data 2 .
  • VC 2 data and VC 2 data 2 traffic may receive an additional 2160 rate credits and 4320 rate credits, respectively
  • VC 3 data and VC 3 data 2 traffic may receive an additional 2160 rate credits and 4320 rate credits, respectively.
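  • The arithmetic of this example can be reproduced directly; the short program below prints the mult of approximately 216 and the additional 2160 and 4320 credits for the data and data 2 constituents of a VC.

```c
/* Reproduction of the numeric example above: 6,500 leftover rate credits per VC,
 * split between data (weight 10) and data2 (weight 20). Integer rounding matches
 * the "approximately 216 / 2160 / 4320" figures in the text. */
#include <stdio.h>

int main(void)
{
    long leftover = 6500;          /* credits remaining after voice (500) and video (1000) */
    int w_data = 10, w_data2 = 20;
    long mult = leftover / (w_data + w_data2);          /* 6500 / 30 = 216 */
    printf("mult  = %ld\n", mult);
    printf("data  += %ld credits\n", mult * w_data);    /* 2160 */
    printf("data2 += %ld credits\n", mult * w_data2);   /* 4320 */
    return 0;
}
```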
  • simple shared shaping may first monitor bandwidth use of higher priority constituents and may allocate remaining bandwidth to a lowest priority scheduler node or queue.
  • a group of simple shared shaping constituents may all be VCs or VPs.
  • compound shared shaping constituents may be VC constituents or VP constituents or both. Some constituents may be strict priority constituents and other constituents may be weighted constituents.
  • In the implementations described above, an enqueue rate is used as a rate measurement for simple shared shaping and a dequeue rate is used as a rate measurement for compound shared shaping.
  • In other implementations of simple shared shaping, a dequeue rate may be used as a rate measurement, and in other implementations of compound shared shaping, an enqueue rate may be used as a rate measurement.
  • An implementation consistent with the principles of the invention may use one or more Ethernet networks that may conform to the IEEE 802.1Q standard, which describes an extension to the Ethernet header that may include a tag to identify a virtual local area network (VLAN).
  • A VLAN is a logical group of devices. VLANs provide a network administrator with the ability to resegment networks without physically rearranging the devices or network connections.
  • the tag may identify a particular VLAN that may correspond to either a VP or a VC from the previously described examples.
  • FIG. 13 illustrates an implementation consistent with the principles of the invention that uses Ethernet VLANs.
  • FIG. 13 corresponds to FIG. 4 , with virtual circuits replaced by VLANs.
  • FIG. 13 illustrates operation of simple shared shaping that may be implemented on top of a hierarchical scheduler in network device 202 , consistent with principles of the invention.
  • The exemplary hierarchical scheduler of FIG. 13 includes a physical Ethernet port 1302 , a high priority group scheduler node 1304 , a medium priority group scheduler node 1306 , a VLAN 3 no group scheduler node 1308 , a VLAN 2 no group scheduler node 1310 , a VLAN 1 no group scheduler node 1312 , a VLAN 1 high priority scheduler node 1314 , a VLAN 3 high priority scheduler node 1316 , a VLAN 1 high priority queue 1318 , a VLAN 3 high priority queue 1320 , a VLAN 1 medium priority scheduler node 1322 , a VLAN 2 medium priority scheduler node 1324 , a VLAN 1 medium priority queue 1326 , a VLAN 2 medium priority queue 1328 , a VLAN 1 low or no priority queue 1330 , a VLAN 2 low or no priority queue 1332 , and a VLAN 3 low or no priority queue 1334 .
  • network device 202 may receive network traffic via physical Ethernet port 1302 .
  • Network device 202 may then forward traffic from port 1302 according to a priority that may be associated with the traffic. For example, voice traffic may be assigned a strict or highest priority, video traffic may be assigned a medium priority, and other traffic, such as data traffic for VLAN 1 , VLAN 2 , and VLAN 3 may be assigned a low priority or no priority.
  • network device 202 may forward voice traffic from port 1302 to high priority group scheduler node 1304 , video traffic from port 1302 to medium priority group scheduler node 1306 , and other traffic, such as data traffic, from port 1302 to low priority or no priority scheduler nodes, such as VLAN 1 (no group) scheduler node 1312 , VLAN 2 (no group) scheduler node 1310 , or VLAN 3 (no group) scheduler node 1308 .
  • High priority group scheduler node 1304 may then forward traffic to a VLAN associated with the traffic. In this example, only VLAN 1 and VLAN 3 may carry high priority traffic. High priority group scheduler node 1304 may forward traffic received from port 1302 to VLAN 1 high priority traffic queue 1318 or VLAN 3 high priority traffic queue 1320 via VLAN 1 high priority scheduler node 1314 or VLAN 3 high priority scheduler node 1316 , respectively.
  • Medium priority group scheduler node 1306 may forward medium priority traffic to VLAN 1 or VLAN 2 medium priority traffic queues 1326 , 1328 via VLAN 1 or VLAN 2 medium priority scheduler nodes 1322 , 1324 , respectively.
  • network device 202 may forward low or no priority traffic for VLAN 1 , VLAN 2 or VLAN 3 from port 1302 to low priority queues 1330 , 1332 , 1334 via VLAN 1 -VLAN 3 (no group) scheduler nodes 1312 , 1310 , 1308 , respectively.
  • the exemplary simple shared shaper may shape the VLAN 1 scheduler node 1312 to have a rate equal to:
  • configured shared shaping rate for subscriber interface VLAN 1 −(VLAN 1 high priority enqueue rate+VLAN 1 medium priority enqueue rate).
  • the constituents of the VLAN 1 interface are VLAN 1 high priority traffic, VLAN 1 medium priority traffic, and VLAN 1 low priority traffic.
  • the exemplary simple shared shaper may shape the VLAN 2 scheduler node 1310 to have a rate equal to:
  • configured shared shaping rate for subscriber interface VLAN 2 −(VLAN 2 medium priority enqueue rate).
  • the constituents of the VLAN 2 interface are VLAN 2 medium priority traffic and VLAN 2 low priority traffic. There is no VLAN 2 high priority traffic.
  • the exemplary simple shared shaper may shape the VLAN 3 scheduler node 1308 to have a rate equal to:
  • configured shared shaping rate for subscriber interface VLAN 3 −(VLAN 3 high priority enqueue rate).
  • the constituents of the VLAN 3 subscriber interface are VLAN 3 high priority traffic and VLAN 3 low priority traffic. There is no VLAN 3 medium priority traffic.
  • the exemplary simple shared shaper may use dequeue rates instead of enqueue rates.
  • a stacked VLAN encapsulation may be employed. That is, two IEEE 802.1Q VLAN tags may be used, where a VP may correspond to an outer VLAN tag and a VC may correspond to an inner VLAN tag. In an alternative implementation, a VP may correspond to an inner VLAN tag and a VC may correspond to an outer VLAN tag.
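  • As a rough illustration of how a stacked-VLAN frame could be mapped onto the VP and VC hierarchy above, the sketch below extracts the outer and inner VLAN IDs from a double-tagged Ethernet frame. The fixed offsets and the use of TPID 0x8100 for both tags are assumptions; the patent does not specify a parsing method.

```c
/* Sketch: read the outer and inner 802.1Q VLAN IDs of a double-tagged frame.
 * The outer tag could select the VP-level node and the inner tag the VC-level
 * node (or the reverse, per the alternative implementation mentioned above). */
#include <stdint.h>
#include <stddef.h>

#define VLAN_VID_MASK 0x0FFF   /* VID is the low 12 bits of the 802.1Q TCI */

/* Returns 0 on success, -1 if the frame is too short or not double-tagged. */
static int parse_stacked_vlan(const uint8_t *frame, size_t len,
                              uint16_t *outer_vid, uint16_t *inner_vid)
{
    if (len < 22)                          /* dst(6) + src(6) + 2 tags(8) + ethertype(2) */
        return -1;
    uint16_t tpid1 = (uint16_t)(frame[12] << 8 | frame[13]);
    uint16_t tpid2 = (uint16_t)(frame[16] << 8 | frame[17]);
    if (tpid1 != 0x8100 || tpid2 != 0x8100)
        return -1;
    *outer_vid = (uint16_t)((frame[14] << 8 | frame[15]) & VLAN_VID_MASK);
    *inner_vid = (uint16_t)((frame[18] << 8 | frame[19]) & VLAN_VID_MASK);
    return 0;
}
```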
  • FIG. 14 , which is similar to FIG. 12 , illustrates an exemplary configuration of a shared shaper that uses a stacked VLAN encapsulation.
  • the exemplary hierarchical scheduler of FIG. 14 includes a physical Ethernet port 1402 , a VLAN 1 group scheduler node 1404 , a VLAN 11 low priority scheduler node 1406 , a VLAN 11 low priority data queue 1408 , a VLAN 11 low priority data 2 queue 1410 , a VLAN 12 low priority scheduler node 1412 , a VLAN 12 low priority data queue 1414 , a VLAN 12 low priority data 2 queue 1416 , a VLAN 13 low priority scheduler node 1418 , a VLAN 13 low priority data queue 1420 , a VLAN 13 low priority data 2 queue 1422 , a medium priority group scheduler node 1405 , a VLAN 1 medium priority scheduler node 1424 , a VLAN 11 video traffic queue 1426 , a VLAN 12 video traffic queue 1428 , a VLAN 13 video traffic queue 1430 , a high priority group scheduler node 1432 , a VLAN 1 high priority scheduler node 1434 , a VLAN 11 voice traffic queue 1436 , a VLAN 12 voice traffic queue 1438 , and a VLAN 13 voice traffic queue 1440 .
  • traffic may be one of several traffic classes, voice, video, data, and a new data class, data 2 .
  • Traffic may arrive via physical Ethernet port 1402 .
  • Network device 202 may forward low priority VLAN 1 traffic from port 1402 to VLAN 1 group scheduler node 1404 .
  • VLAN 1 includes traffic for VLAN 11 , VLAN 12 and VLAN 13 .
  • VLAN 1 group scheduler 1404 may then forward traffic to VLAN 11 low priority scheduler node 1406 , VLAN 12 low priority scheduler node 1412 or VLAN 13 low priority scheduler node 1418 .
  • VLAN 11 low priority scheduler 1406 may then forward VLAN 11 data traffic to queue 1408 and VLAN 11 data 2 traffic to queue 1410 .
  • VLAN 12 low priority scheduler node 1412 may forward data traffic to queue 1414 and data 2 traffic to queue 1416 .
  • VLAN 13 low priority scheduler node 1418 may forward data traffic to queue 1420 and data 2 traffic to queue 1422 .
  • Medium priority group scheduler node 1405 may forward VLAN 1 traffic to VLAN 1 medium priority scheduler node 1424 .
  • VLAN 1 medium priority scheduler node 1424 may then forward VLAN 11 video traffic to queue 1426 , VLAN 12 video traffic to queue 1428 , and VLAN 13 video traffic to queue 1430 .
  • High priority group scheduler node 1432 may forward VLAN 1 traffic to VLAN 1 high priority scheduler node 1434 .
  • VLAN 1 high priority scheduler node 1434 may then forward VLAN 11 voice traffic to queue 1436 , VLAN 12 voice traffic to queue 1438 , and VLAN 13 voice traffic to queue 1440 .
  • VLAN 1 corresponds to VP 1 of FIG. 12 and VLANs 11 - 13 correspond to VCs 1 - 3 of FIG. 12 , respectively.
  • the simple or compound shared shaping procedures of FIGS. 5 , 7 , and 9 - 11 may be employed with the exemplary shared shaper of FIG. 14 by, for example, substituting an inner VLAN tag for a VP and an outer VLAN tag for a VC, or alternatively, substituting an outer VLAN tag for a VP and an inner VLAN tag for a VC.

Abstract

A method and a network device for sharing bandwidth among a group of classes of traffic for an interface are provided. Bandwidth may be allocated to at least one traffic class of a first priority for the interface. At least some unused bandwidth of the at least one traffic class may be allocated to at least one other traffic class of a second priority for the interface. In some implementations, weighted constituents may be allocated unused interface bandwidth based on an assigned weight of each of the weighted constituents of the interface.

Description

    RELATED APPLICATION
  • This application is a continuation of U.S. patent application Ser. No. 10/934,558, filed Sep. 7, 2004, which is incorporated herein by reference.
  • TECHNICAL FIELD
  • Systems and methods consistent with the principles of the invention relate generally to computer networks and, more particularly, to sharing bandwidth among different priorities or classes of subscriber traffic.
  • BACKGROUND OF THE INVENTION
  • Network providers generally configure their network devices to treat some classes of traffic differently from other classes of traffic. For example, voice reproduced from network voice traffic may be distorted when it is delayed. Therefore, a network provider may configure network devices to forward a voice traffic class according to a highest or strict priority. Video reproduced from video traffic is less affected by delays than voice traffic. Therefore, the network provider may configure network devices to forward a video traffic class at a lower priority than the voice traffic class. Data traffic is less affected by delays than either the video or the voice traffic. Consequently, the network provider may configure the network devices to treat a data traffic class according to a best effort or a lower priority than that assigned to either the voice traffic class or the video traffic class.
  • Network providers generally use a hierarchical scheduler, implemented within the network devices, to schedule the forwarding of traffic via logical interfaces according to the traffic's associated priority. The priority may be based on the class of traffic, such as, for example, voice, video, and data classes. The hierarchical scheduler may include multiple scheduler nodes. Each scheduler node may be associated with a priority. For example, one scheduler node may process all high priority traffic for a logical interface and another scheduler node may process all medium priority traffic for the logical interface. The hierarchical scheduler may provide the network provider with the ability to configure a separate scheduler node for each of the priorities. Thus, each scheduler node of a network device may receive traffic of a particular priority and may forward the traffic to one or more logical interfaces, such as, for example, virtual circuits (VCs). Because some of the scheduler nodes may be in separate hierarchies or priorities, unused bandwidth for a logical interface in one hierarchy is not shared with a logical interface queue of another hierarchy. For example, if voice traffic for a first virtual circuit is low, the unused bandwidth cannot be used by the first virtual circuit in another hierarchy, such as one that carries video or data traffic. Many network providers “carve out” bandwidth for logical interfaces by configuring rate or bandwidth limits for each hierarchy. However, the “carve out” does not provide for bandwidth sharing among different hierarchies when bandwidth use of a logical interface by high priority traffic is low.
  • SUMMARY OF THE INVENTION
  • In a first aspect, a method for sharing bandwidth among a group of classes of traffic for an interface is provided. Bandwidth is allocated to at least one traffic class of a first priority for the interface. At least some unused bandwidth of the at least one traffic class is allocated to at least one other traffic class of a second priority for the interface.
  • In a second aspect, a network device for receiving and forwarding traffic in a computer network is provided. The network device is configured to provide bandwidth for at least one traffic class of a first priority of an interface, and provide at least some of unused bandwidth of the at least one traffic class for at least one other traffic class of a second priority.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate an embodiment of the invention and, together with the description, explain the invention. In the drawings,
  • FIG. 1 illustrates an exemplary system which may include implementations consistent with principles of the invention;
  • FIG. 2 illustrates a portion of a network shown in FIG. 1;
  • FIG. 3 is a functional block diagram of an exemplary network device of FIG. 2;
  • FIG. 4 illustrates an exemplary hierarchical scheduler upon which simple shared shaping may be implemented consistent with the principles of the invention;
  • FIG. 5 is a flowchart that illustrates exemplary simple shared shaping processing consistent with principles of the invention;
  • FIG. 6 illustrates a second exemplary hierarchical scheduler upon which simple shared shaping may be implemented consistent with the principles of the invention;
  • FIG. 7 is a flowchart that illustrates processing of another exemplary implementation of simple shared shaping processing consistent with the principles of the invention;
  • FIG. 8 illustrates an exemplary hierarchical scheduler upon which compound shared shaping may be implemented consistent with the principles of the invention;
  • FIGS. 9, 10, and 11 are flowcharts that illustrate exemplary compound shared shaping processing consistent with the principles of the invention;
  • FIG. 12 illustrates a second exemplary hierarchical scheduler upon which compound shared shaping may be implemented consistent with the principles of the invention;
  • FIG. 13 illustrates an exemplary hierarchical scheduler upon which simple shared shaping with virtual local area networks (VLANs) may be implemented consistent with the principles of the invention; and
  • FIG. 14 illustrates an exemplary hierarchical scheduler upon which simple or compound shared shaping with VLANs may be implemented consistent with the principles of the invention.
  • DETAILED DESCRIPTION
  • The following detailed description of the invention refers to the accompanying drawings. The same reference numbers in different drawings may identify the same or similar elements. Also, the following detailed description does not limit the invention. Instead, the scope of the invention is defined by the appended claims.
  • FIG. 1 illustrates an exemplary system 100, which includes an implementation consistent with the principles of the invention. System 100 may include a network 102, and devices 104-1, 104-2 and 104-3 (hereinafter collectively referred to as devices 104) connected to network 102. Devices 104 may be servers, host computers, personal computers, wireless PDAs or any other device capable of connecting to a network. System 100 may include more or fewer components than shown in FIG. 1. For example, system 100 may have more or fewer devices 104 connected to network 102.
  • FIG. 2 illustrates a portion of network 102. Network 102 may include a number of network devices 202-1 through 202-7 (hereinafter collectively referred to as network devices 202). Network 102 may include additional or fewer network devices 202 than shown in FIG. 2. Each network device 202 may have connections with one or more other network devices, such as, for example, one or more routers or network nodes.
  • FIG. 3 is a functional block diagram of an exemplary network device consistent with the principles of the invention. In this particular implementation, the network device takes the form of a router 302, which may be used to implement one or more network devices 202. Router 302 may receive one or more packet streams from a physical link, process the stream(s) to determine destination information, and transmit the stream(s) on one or more links in accordance with the destination information.
  • Router 302 may include a routing engine (RE) 310 and multiple packet forwarding engines (PFEs) 320 a, 320 b, . . . 320 n (collectively, “PFEs 320”) interconnected via a switch fabric 330. Switch fabric 330 may include one or more switching planes to facilitate communication between two or more of PFEs 320. In an implementation consistent with the principles of the invention, each of the switching planes includes a single or multi-stage switch of crossbar elements.
  • RE 310 may perform high level management functions for router 302. For example, RE 310 may communicate with other networks and systems connected to router 302 to exchange information regarding network topology. RE 310 may create routing tables based on network topology information, may create forwarding tables based on the routing tables, and may send the forwarding tables to PFEs 320. PFEs 320 may use the forwarding tables to perform route lookups for incoming packets. RE 310 may also perform other general control and monitoring functions for router 302.
  • Each of PFEs 320 connects to RE 310 and switch fabric 330. PFEs 320 may receive and send packets on physical links connected to a network, such as network 102. Each of the physical links may include one or more logical interfaces, such as virtual circuits or virtual paths, which may include a group of virtual circuits. Each of PFEs 320 may include a hierarchical scheduler for forwarding, on the one or more logical interfaces, received traffic according to a priority established for a class of traffic. Each physical link could be one of many types of transport media, such as optical fiber or Ethernet cable. The packets on the physical link may be formatted according to one of several protocols, such as Ethernet.
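  • As a rough data-structure sketch of the kind of hierarchical scheduler attributed to each PFE, the following C fragment models a scheduler node with a priority, a shaping rate, and child nodes. It is an illustration of the concept only, under assumed names, not the actual implementation.

```c
/* Minimal model of a hierarchical scheduler: nodes carry a priority and a
 * shaping rate and fan out to child scheduler nodes; a leaf feeds a queue. */
#include <stddef.h>

enum traffic_priority { PRIO_HIGH, PRIO_MEDIUM, PRIO_LOW };

struct sched_node {
    const char            *name;         /* e.g., "VC1 high priority" */
    enum traffic_priority  priority;
    unsigned long          shaping_rate; /* current rate limit applied to this node */
    struct sched_node    **children;     /* child scheduler nodes, or NULL for a leaf */
    size_t                 num_children; /* a leaf node feeds a transmit queue */
};
```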
  • Simple Shared Shaping
  • FIG. 4 illustrates operation of simple shared shaping that may be implemented on top of a hierarchical scheduler, in network device 202, consistent with principles of the invention. The exemplary hierarchical scheduler of FIG. 4 includes a physical port 402, a high priority group scheduler node 404, a medium priority group scheduler node 406, a virtual circuit (VC) 3 no group scheduler node 408, a VC2 no group scheduler node 410, a VC1 no group scheduler node 412, a VC1 high priority scheduler node 414, a VC3 high priority scheduler node 416, a VC1 high priority queue 418, a VC3 high priority queue 420, a VC1 medium priority scheduler node 422, a VC2 medium priority scheduler node 424, a VC1 medium priority queue 426, a VC2 medium priority queue 428, a VC1 low or no priority queue 430, a VC2 low or no priority queue 432, and a VC3 low or no priority queue 434.
  • In this example, network device 202 may receive network traffic via physical port 402. Network device 202 may then forward traffic from port 402 according to a priority that may be associated with the traffic. For example, voice traffic may be assigned a strict or highest priority, video traffic may be assigned a medium priority, and other traffic, such as data traffic for VC1, VC2, and VC3 may be assigned a low priority or no priority. Thus, network device 202 may forward voice traffic from port 402 to high priority group scheduler node 404, video traffic from port 402 to medium priority group scheduler node 406, and other traffic, such as data traffic, from port 402 to low priority or no priority scheduler nodes, such as VC1 (no group) scheduler node 412, VC2 (no group) scheduler node 410, or VC3 (no group) scheduler node 408.
  • High priority group scheduler node 404 may then forward traffic to a VC associated with the traffic. In this example, only VC1 and VC3 may carry high priority traffic. High priority group scheduler node 404 may forward traffic received from port 402 to VC1 high priority traffic queue 418 or VC3 high priority traffic queue 420 via VC1 high priority scheduler node 414 or VC3 high priority scheduler node 416, respectively.
  • In this example, only VC1 and VC2 carry medium priority traffic. Medium priority group scheduler node 406 may forward medium priority traffic to VC1 or VC2 medium priority traffic queues 426, 428 via VC1 or VC2 medium priority scheduler nodes 422, 424, respectively.
  • In this example, network device 202 may forward low or no priority traffic for VC1, VC2 or VC3 from port 402 to low priority queues 430, 432, 434 via VC1-VC3 (no group) scheduler nodes 412, 410, 408, respectively.
  • In at least some implementations consistent with the principles of the invention, only low (or no) priority queues may be actively controlled by simple shared shaping. A simple shared shaper configured on low priority scheduler node 412 for subscriber interface VC1 may track, for example, an enqueue rate for VC1 high priority queue 418 and VC1 medium priority queue 426. The simple shared shaper may then shape the VC1 scheduler node 412 to have a rate equal to:

  • configured shared shaping rate for subscriber interface VC1−(VC1 high priority enqueue rate+VC1 medium priority enqueue rate).
  • In this example, the constituents of the VC1 subscriber interface are VC1 high priority traffic, VC1 medium priority traffic, and VC1 low priority traffic.
  • A simple shared shaper configured on low priority scheduler node 410 for VC2 may track, for example, an enqueue rate for VC2 medium priority queue 428. The simple shared shaper may shape the VC2 scheduler node 410 to have a rate equal to:

  • configured shared shaping rate for subscriber interface VC2−(VC2 medium priority enqueue rate).
  • In this example, the constituents of the VC2 subscriber interface are VC2 medium priority traffic and VC2 low priority traffic. There is no VC2 high priority traffic.
  • A simple shared shaper configured on low priority scheduler node 408 for VC3 may track, for example, an enqueue rate for VC3 high priority queue 420. The simple shared shaper may shape the VC3 scheduler node 408 to have a rate equal to:

  • configured shared shaping rate for subscriber interface VC3−(VC3 high priority enqueue rate).
  • In this example, the constituents of the VC3 subscriber interface are VC3 high priority traffic and VC3 low priority traffic. There is no VC3 medium priority traffic.
  • FIG. 5 is a flowchart that illustrates exemplary processing for simple shared shaping within network device 202. Network device 202 may monitor the enqueue rate of high and medium priority queues. In one implementation, a processor may monitor the enqueue rate of high and medium priority queues and may provide the measured enqueue rates once per second as a rate update. Other time intervals may be used instead of one second. When the rate update is available, network device 202 may perform the process of FIG. 5.
  • First, network device 202 may prepare to determine the shaping rate of a first low or no priority queue, corresponding to a subscriber interface or VC to be shaped (act 502). Network device 202 may then obtain the updated enqueue rate for a high priority queue corresponding to the VC, if one exists (act 504). Next, network device 202 may obtain the updated enqueue rate for a medium priority queue corresponding to the VC, if one exists (act 506). Network device 202 may then determine the shared shaping rate for the VC by subtracting the sum of the enqueue rate of the high priority queue for the VC and the enqueue rate of the medium priority queue for the VC from the allowed bandwidth or rate cap for the VC (act 508). Thus, the low or no priority queue for the VC may be allowed to use any unused bandwidth capacity configured for the VC. Network device 202 may then determine whether any additional subscriber interfaces or VCs are to be shaped (act 510). If no more VCs are to be shaped, then the process is completed. Otherwise, network device 202 may prepare to shape the next VC (act 512). Network device 202 may then repeat acts 504 through 510.
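  • A minimal Python sketch of this per-VC pass is shown below. The class name, field names, rate values, and the flooring of the result at zero are illustrative assumptions rather than details taken from the figures.

```python
# A minimal sketch of the per-VC simple shared shaping pass of FIG. 5.
# The class name, field names, rate units, and the flooring at zero are
# illustrative assumptions, not details taken from the patent.
from dataclasses import dataclass
from typing import Optional


@dataclass
class SubscriberInterface:
    name: str
    rate_cap_bps: int                         # configured shared shaping rate
    high_enqueue_bps: Optional[int] = None    # latest rate update for the high priority queue, if any
    medium_enqueue_bps: Optional[int] = None  # latest rate update for the medium priority queue, if any


def simple_shared_shaping_pass(interfaces):
    """Return the shaping rate to apply to each low/no priority node (acts 502-512)."""
    shaping_rates = {}
    for vc in interfaces:                          # acts 502/512: walk each subscriber interface
        high = vc.high_enqueue_bps or 0            # act 504: high priority enqueue rate, if the queue exists
        medium = vc.medium_enqueue_bps or 0        # act 506: medium priority enqueue rate, if the queue exists
        # act 508: low priority traffic may use whatever the higher priorities
        # left unused; floored at zero here as a simplifying assumption
        shaping_rates[vc.name] = max(vc.rate_cap_bps - (high + medium), 0)
    return shaping_rates


# Mirroring FIG. 4: VC1 has high and medium queues, VC2 medium only, VC3 high only.
print(simple_shared_shaping_pass([
    SubscriberInterface("VC1", 10_000_000, high_enqueue_bps=2_000_000, medium_enqueue_bps=3_000_000),
    SubscriberInterface("VC2", 10_000_000, medium_enqueue_bps=4_000_000),
    SubscriberInterface("VC3", 10_000_000, high_enqueue_bps=1_000_000),
]))  # {'VC1': 5000000, 'VC2': 6000000, 'VC3': 9000000}
```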
  • FIG. 6 illustrates operation of simple shared shaping that may be implemented on top of another hierarchical scheduler, in network device 202, consistent with principles of the invention. The exemplary hierarchical scheduler of FIG. 6 includes a physical port 602, a low priority VP1 group scheduler node 604, a medium priority group scheduler node 618, a high priority group scheduler node 622, a VC1 low priority scheduler node 606, a VC2 low priority scheduler node 608, a VC3 low priority scheduler node 610, a VC1 low priority queue 612, a VC2 low priority queue 614, a VC3 low priority queue 616, a medium priority queue 620, and a high priority queue 624.
  • A VP is a subscriber interface that includes a group of VCs. In this example, network device 202 forwards traffic from port 602 according to priority. Network device 202 may forward low or no priority traffic, such as, for example, data traffic, from port 602 to low priority VP1 group scheduler node 604, which may then forward the traffic to one of three low priority scheduler nodes 606, 608, 610, depending on whether the traffic is for VC1, VC2, or VC3, respectively. Thus, VP1 includes traffic for VC1, VC2, and VC3. Each of scheduler nodes 606, 608, 610 may then forward the traffic to an associated queue 612, 614, 616, respectively, for transmission through network 102.
  • Network device 202 may forward medium priority traffic, such as, for example, video traffic, from port 602 to medium priority group scheduler node 618. Scheduler node 618 may then forward the traffic to queue 620 for holding medium priority VP1 traffic for transmission.
  • Network device 202 may forward high or strict priority traffic, such as, for example, voice traffic, from port 602 to high priority group scheduler node 622. Scheduler node 622 may then forward the traffic to queue 624 for holding high priority VP1 traffic for transmission.
  • As mentioned previously, in at least some implementations consistent with the principles of the invention, only low (or no) priority queues may be actively controlled by simple shared shaping. A simple shared shaper configured on low priority VP1 group scheduler node 604 for VP1 may track, for example, an enqueue rate for VP1 high priority queue 624 and VP1 medium priority queue 620. The simple shared shaper may then shape VP1 group scheduler node 604 to have a rate equal to:

  • configured shared shaping rate for interface VP1−(VP1 high priority enqueue rate+VP1 medium priority enqueue rate).
  • In this example, the constituents of the VP1 interface are VP1 high priority traffic, VP1 medium priority traffic, and VP1 low priority traffic.
  • The flowchart of FIG. 7 explains the process of shared shaping for VPs. In some implementations of simple shared shaping in network device 202, either VCs or VPs may be shaped, but not both. Using the example of FIG. 6 with reference to FIG. 7, network device 202 may prepare to shape bandwidth for VP1, the only VP in this example (act 702). Rate updates for VP1 may be available periodically, such as once per second, or another time period. Network device 202 may obtain the latest enqueue rate for VP1's high priority queue 624 (act 704). Next, network device 202 may obtain the latest enqueue rate for VP1's medium priority queue 620 (act 706). Network device 202 may then determine a shared shaping rate or bandwidth that may be used for low priority VP1 traffic by subtracting the sum of the enqueue rates for constituent high and medium priority queues 624 and 620, respectively, from the bandwidth permitted for subscriber interface VP1 (act 708). Thus, in this example, VC1, VC2, and VC3 share the calculated shared shaping rate. Network device 202 may then determine whether there are additional VPs to shape (act 710). In this example, there are no other VPs; therefore, the process is completed until the next time period.
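  • The same calculation can be sketched for a VP subscriber interface. The snippet below mirrors acts 704 through 708 for VP1; the 8 Mbit/s cap and the queue rates are invented values used only to show the arithmetic.

```python
# A sketch of acts 704-708 for the VP of FIG. 6; the 8 Mbit/s cap and the
# queue rates are invented values used only to show the arithmetic.
vp1_rate_cap_bps = 8_000_000
vp1_high_enqueue_bps = 1_500_000     # act 704: latest rate for high priority queue 624
vp1_medium_enqueue_bps = 2_500_000   # act 706: latest rate for medium priority queue 620

# act 708: the shared rate left for low priority VP1 traffic, which VC1, VC2,
# and VC3 then share under VP1 group scheduler node 604
vp1_low_priority_rate_bps = max(
    vp1_rate_cap_bps - (vp1_high_enqueue_bps + vp1_medium_enqueue_bps), 0)
print(vp1_low_priority_rate_bps)  # 4000000
```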
  • Compound Shared Shaping
  • FIG. 8 illustrates operation of compound shared shaping that may be implemented on top of a hierarchical scheduler in network device 202 consistent with the principles of the invention. The exemplary hierarchical scheduler of FIG. 8 includes a physical port 802, a VP1 group scheduler node 804, a VC1 low priority scheduler node 806, a VC2 low priority scheduler node 808, a VC3 low priority scheduler node 810, a VC1 low priority queue 812, a VC2 low priority queue 814, a VC3 low priority queue 816, a medium priority group scheduler node 818, a VP1 medium priority scheduler node 820, a VC1 medium priority queue 822, a VC2 medium priority queue 824, a VC3 medium priority queue 826, a high priority group scheduler node 828, a VP1 high priority scheduler node 830, a VC1 high priority queue 832, a VC2 high priority queue 834, and a VC3 high priority queue 836.
  • In this example, network device 202 may receive network traffic via physical port 802. Network device 202 may then forward traffic from port 802 according to a priority that may be associated with a class of traffic. For example, voice traffic may be assigned a strict or highest priority, video traffic may be assigned medium priority, and other traffic, such as data traffic for VC1, VC2, and VC3 may be assigned a low priority. Thus, network device 202 may forward voice traffic from port 802 to high priority group scheduler node 828, video traffic from port 802 to medium priority group scheduler node 818, and other traffic, such as data traffic, from port 802 to low priority or no priority scheduler nodes, such as VP1 group scheduler node 804.
  • Network device 202 may forward VP1 low priority traffic to VC1 low priority queue 812, VC2 low priority queue 814, or VC3 low priority queue 816 from VC1 low priority scheduler node 806, VC2 low priority scheduler node 808, and VC3 low priority scheduler node 810, respectively. Network device 202 may forward traffic from medium priority group scheduler node 818 to VC1 medium priority queue 822, VC2 medium priority queue 824, or VC3 medium priority queue 826 through VP1 medium priority scheduler node 820. Similarly, network device 202 may forward traffic from high priority group scheduler node 828 to VC1 high priority queue 832, VC2 high priority queue 834, or VC3 high priority queue 836 through VP1 high priority scheduler node 830.
  • Compound shared shaping may provide the ability to shape VPs and/or VCs. For example, scheduler node 804 may be permitted to use unused bandwidth from scheduler nodes 820 and 830 for VP1. Simultaneously, any of scheduler nodes 806, 808, and 810, may be permitted to use unused bandwidth of queues 832, 834, 836, respectively, and 822, 824, and 826, respectively. That is, VP1 scheduler node 804, which is for low priority VP1 traffic, may be permitted to use the unused bandwidth of VP1 high and medium priority traffic and VC1, VC2 and VC3 may use the unused bandwidth of high and medium priority traffic for VC1, VC2 and VC3, respectively, for low priority traffic.
  • FIG. 9 is a flowchart of an exemplary compound shared shaping process that may be implemented in network device 202 consistent with the principles of the invention. In implementations of compound shared shaping, a rate of dequeueing from queues or scheduler nodes may be monitored and updated periodically, such as, for example, every 8 milliseconds or any other useful time period. Hardware, such as an Application Specific Integrated Circuit (ASIC) or a Field Programmable Gate Array (FPGA), may perform the monitoring and updating. Alternatively, monitoring and updating may be performed by one or more processors executing program instructions.
  • In implementations that use compound shared shaping, total rate credits may be determined by subtracting the dequeue rate of each constituent of the subscriber interface from a rate limit for the subscriber interface. Updated total rate credits may be stored in scheduler descriptors, which are storage areas associated with scheduler nodes or queues.
  • Network device 202 may perform compound shared shaping on a priority basis or a weighted basis, as will be explained below. The processing described in the flowchart of FIG. 9 will be described with reference to the exemplary hierarchical scheduler of FIG. 8 for strict shared shaping of subscriber interface VP1 to a particular shared rate. In this implementation, compound shared shaping may first be performed for all priority-based constituents, followed by all weight-based constituents. Of the priority-based constituents, compound shared shaping is performed for higher priority constituents before lower priority constituents.
  • Network device 202 may begin by obtaining, from a latest rate update, the total shared rate credits for a subscriber interface, which in this case is VP1 (act 902). Next, network device 202 may obtain the current rate credits for the first constituent to be shaped (VP1 high priority scheduler node 830) (act 906). The current rate credits for a constituent represent the number of units (for example, bytes or another unit) that the constituent may send during a reporting interval, for example, 8 milliseconds. Network device 202 may obtain the constituent's current rate credits from the scheduler node's scheduler descriptor. Network device 202 may then obtain the constituent's clip from the scheduler descriptor. The clip is a configured size, which may be in bytes or any other convenient unit, such that network device 202 may not forward traffic until at least a size of the configured clip has accumulated. Network device 202 may then determine whether the constituent has a rate deficit by subtracting the current rate credits from the clip (act 910). If the deficit is not greater than 0, then network device 202 may determine whether there are more constituents and more total rate credits (act 914). If there are more constituents, for example, VP1 medium priority scheduler node 820, network device 202 may prepare to update rate credits for the next constituent (act 916).
  • If network device 202 determines that the deficit is greater than 0, then network device 202 may take rate credits for the constituent from the total rate credits for the subscriber interface (VP1 for this example) (act 918) and may determine whether there are more constituents and more credits (act 914). After updating rate credits for a constituent (scheduler node 820, for this example), network device 202 may then proceed to update rate credits for the next constituent of the subscriber interface (VP1 group scheduler node 804), if more total rate credits for the subscriber interface (VP1) exist.
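  • One way the credit update loop of FIG. 9 might look in Python is sketched below. The data structure, field names, and numeric values are assumptions, and the take_rate_credits helper is reduced here to the priority path of FIG. 10; a fuller version of that helper follows the FIG. 10 discussion.

```python
# A minimal sketch of the credit update loop of FIG. 9 for one subscriber
# interface. The data structure, field names, and numeric values are
# assumptions; take_rate_credits is reduced here to the priority path of
# FIG. 10, and a fuller version appears after the FIG. 10 discussion.
from dataclasses import dataclass


@dataclass
class Constituent:
    name: str
    current_credits: int   # units (e.g., bytes) the constituent may send this interval
    clip: int              # configured accumulation threshold, in the same units
    rate_cap: int          # greater than 0 for a priority-based constituent


def take_rate_credits(constituent, total_credits):
    # Simplified stand-in for FIG. 10: grant up to the constituent's rate cap.
    tmp = min(total_credits, constituent.rate_cap)
    constituent.current_credits += tmp
    return total_credits - tmp


def update_interface_credits(total_credits, constituents):
    """Acts 902-918: visit constituents in priority order and top up any with a deficit."""
    for constituent in constituents:                              # act 916: next constituent
        deficit = constituent.clip - constituent.current_credits  # act 910: rate deficit?
        if deficit > 0:
            total_credits = take_rate_credits(constituent, total_credits)  # act 918
        if total_credits <= 0:                                    # act 914: no more credits
            break
    return total_credits


remaining = update_interface_credits(
    total_credits=8_000,
    constituents=[Constituent("VP1 high", 100, 1_500, 2_000),
                  Constituent("VP1 medium", 0, 1_000, 3_000)])
print(remaining)  # 3000 credits left for the remaining constituents
```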
  • FIG. 10 is a flowchart that explains exemplary processing for taking rate credits (act 918 of FIG. 9). First, network device 202 may get a rate cap for the current constituent from the constituent's scheduler descriptor (act 1002). In this implementation of compound shared shaping, network device 202 may perform priority shared shaping if the rate cap is greater than 0 or weighted shared shaping if the rate cap is not greater than 0. In alternative implementations, other indicators of weighted shared shaping may be used, such as, for example, a separate weighted shared shaping flag or a negative rate cap. If network device 202 determines that the rate cap is greater than 0 (act 1004), then network device 202 may determine tmp, which is a minimum of the total rate credits for the subscriber interface and the rate cap for the constituent (act 1006). Tmp is an amount of rate or bandwidth that network device 202 may provide to the constituent. Therefore, tmp cannot be more than the rate cap. Network device 202 may then increment the constituent's current rate credits by tmp (act 1008) and may decrement the total rate credits for the subscriber interface by tmp (act 1010). If the total rate credits are less than or equal to 0, network device 202 may indicate that no more credits exist, such that the update process of FIG. 9 (act 914) determines that the process is completed for the subscriber interface.
  • If network device 202 determines that rate cap is not greater than 0 (act 1004), then network device 202 may accumulate the sum of weights for all weighted constituents sharing rate for a subscriber interface (act 1016). In one implementation, weights may range from 1 to 31. Additional, fewer, or other weights may be used in other implementations.
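  • The rate cap test of FIG. 10 might be sketched as follows. The dictionary-based constituents and the weighted_pool accumulator are illustrative assumptions; only the branch structure, a priority path for a positive rate cap and weight accumulation otherwise, follows the text.

```python
# A sketch of the rate cap test of FIG. 10. The dictionary-based constituents
# and the weighted_pool accumulator are illustrative assumptions; only the
# branch structure (priority path for a positive rate cap, weight accumulation
# otherwise) follows the text.
def take_rate_credits(constituent, total_credits, weighted_pool):
    rate_cap = constituent["rate_cap"]                    # act 1002: rate cap from the descriptor
    if rate_cap > 0:                                      # act 1004: priority constituent
        tmp = min(total_credits, rate_cap)                # act 1006: never grant more than the cap
        constituent["current_credits"] += tmp             # act 1008
        total_credits -= tmp                              # act 1010
    else:                                                 # weighted constituent
        weighted_pool["sum_of_weights"] += constituent["weight"]  # act 1016
        weighted_pool["members"].append(constituent)
    return total_credits


pool = {"sum_of_weights": 0, "members": []}
priority_c = {"rate_cap": 2_000, "current_credits": 0, "weight": 0}
weighted_c = {"rate_cap": 0, "current_credits": 0, "weight": 10}
total = take_rate_credits(priority_c, 8_000, pool)   # priority path: total drops to 6,000
total = take_rate_credits(weighted_c, total, pool)   # weighted path: weight pooled for FIG. 11
print(total, pool["sum_of_weights"])                 # 6000 10
```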
  • FIG. 11 is a flowchart of an exemplary process for sharing rate or bandwidth among weighted constituents of a subscriber interface. Network device 202 may begin by calculating mult by dividing the available total rate credits by the sum of weights previously determined at act 1016 (act 1102). In some implementations, network device 202 may perform the division of act 1102 by using a table lookup in order to keep processing time to a minimum. Next, network device 202 may prepare to process the first weighted constituent (act 1104). If a weighted constituent does not exist (act 1106), then the process is completed. Otherwise, network device 202 may obtain the constituent's current rate credits and weight from the constituent's scheduler descriptor and may increment the constituent's current rate credits by the product of mult with the weight of the constituent (act 1108). Network device 202 may then prepare to process the next weighted constituent for the subscriber interface (act 1110). Network device 202 may then repeat acts 1106 through 1110 until no weighted constituents remain to be processed.
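  • A short sketch of the weighted distribution of FIG. 11 follows, again using assumed dictionary-based constituents; plain integer division stands in for the table lookup mentioned above.

```python
# A sketch of the weighted distribution of FIG. 11, using assumed
# dictionary-based constituents. Plain integer division stands in for the
# table lookup mentioned in the text.
def distribute_weighted_credits(total_credits, weighted_constituents):
    sum_of_weights = sum(c["weight"] for c in weighted_constituents)
    if sum_of_weights == 0:
        return                                            # no weighted constituents (act 1106)
    mult = total_credits // sum_of_weights                # act 1102
    for constituent in weighted_constituents:             # acts 1104-1110: each weighted constituent
        constituent["current_credits"] += mult * constituent["weight"]  # act 1108


data = {"weight": 10, "current_credits": 0}
data2 = {"weight": 20, "current_credits": 0}
distribute_weighted_credits(6_500, [data, data2])
print(data["current_credits"], data2["current_credits"])  # 2160 4320
```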
  • FIG. 12 illustrates an exemplary hierarchical scheduler configuration within network device 202 in which weighted shared shaping may be used. The exemplary hierarchical scheduler of FIG. 12 includes a physical port 1202, a VP1 group scheduler node 1204, a VC1 low priority scheduler node 1206, a VC1 low priority data queue 1208, a VC1 low priority data2 queue 1210, a VC2 low priority scheduler node 1212, a VC2 low priority data queue 1214, a VC2 low priority data2 queue 1216, a VC3 low priority scheduler node 1218, a VC3 low priority data queue 1220, a VC3 low priority data2 queue 1222, a medium priority group scheduler node 1205, a VP1 medium priority scheduler node 1224, a VC1 video traffic queue 1226, a VC2 video traffic queue 1228, a VC3 video traffic queue 1230, a high priority group scheduler node 1232, a VP1 high priority scheduler node 1234, a VC1 voice queue 1236, a VC2 voice queue 1238, and a VC3 voice queue 1240.
  • In this example, traffic may be one of several traffic classes, voice, video, data, and a new data class, data2. Traffic may arrive via physical port 1202. Network device 202 may forward low priority VP1 traffic from port 1202 to VP1 group scheduler node 1204. In this example, VP1 includes traffic for VC1, VC2 and VC3. VP1 group scheduler 1204 may then forward traffic to VC1 low priority scheduler node 1206, VC2 low priority scheduler node 1212 or VC3 low priority scheduler node 1218. VC1 low priority scheduler 1206 may then forward VC1 data traffic to queue 1208 and VC1 data2 traffic to queue 1210. VC2 low priority scheduler node 1212 may forward data traffic to queue 1214 and data2 traffic to queue 1216. VC3 low priority scheduler node 1218 may forward data traffic to queue 1220 and data2 traffic to queue 1222.
  • Medium priority group scheduler node 1205 may forward VP1 traffic to VP1 medium priority scheduler node 1224. VP1 medium priority scheduler node 1224 may then forward VC1 video traffic to queue 1226, VC2 video traffic to queue 1228, and VC3 video traffic to queue 1230.
  • High priority group scheduler node 1232 may forward VP1 traffic to VP1 high priority scheduler node 1234. VP1 high priority scheduler node 1234 may then forward VC1 voice traffic to queue 1236, VC2 voice traffic to queue 1238, and VC3 voice traffic to queue 1240.
  • Assuming that the voice and video traffic constituents are defined as strict priority constituents, network device 202 may first allocate rate credits or bandwidth to the constituent VP1 high priority traffic, up to that constituent's rate cap, and may then allocate rate credits or bandwidth to the constituent VP1 medium priority traffic, up to that constituent's rate cap. Remaining bandwidth may be allocated to low priority traffic.
  • In this example, the low priority constituents may be defined as VC1 data, VC1 data2, VC2 data, VC2 data2, VC3 data, and VC3 data2. The low priority constituents may be assigned weights. For example, VC1 data may have a weight of 10, VC1 data2 may have a weight of 20, VC2 data may have a weight of 10, VC2 data2 may have a weight of 20, VC3 data may have a weight of 10, and VC3 data2 may have a weight of 20.
  • For the sake of this example, assume that the shared shaping rate for the group of constituents of each of VC1, VC2, and VC3 is 1 megabit per second. If the rate is updated periodically at, for example, every 8 milliseconds, then the total rate credits for the VC1 subscriber interface is 8,000 bits every 8 milliseconds. In this example, assume that voice may consume 500 rate credits at any given moment and video may consume 1,000 rate credits at any given moment, leaving 6,500 rate credits to subdivide between the data and data2 constituents. With reference to FIG. 11, network device 202 may calculate mult to be the available total rate credits divided by the sum of weights, which for VC1 is (6,500)/(10+20), or approximately 216 (act 1102). Network device 202 may then prepare to determine the weighted credits of the first weighted constituent, VC1 data (act 1104). Network device 202 may determine whether the weighted constituent exists (act 1106). For this example, the result is, "Yes." Network device 202 may then calculate rate credits for the constituent to be current rate credits+(mult*weight), which is the current rate credits plus approximately 2,160 credits (act 1108). Network device 202 may then prepare to process the next weighted constituent, VC1 data2, with a weight of 20. Network device 202 determines that this constituent exists (act 1106) and determines that mult*weight, which is about 4,320, is to be added to the constituent's current rate credits (act 1108). Next, network device 202 prepares to process the next weighted constituent of VC1 (act 1110). Network device 202 may determine that the constituent does not exist (act 1106) and then processing may end.
  • Because, in this example, the weights for VC2 and VC3 data and data2 traffic are the same as for VC1 data and data2 traffic, the calculations for VC2 and VC3 are the same as for VC1, using the assumption that voice may consume 500 rate credits at any given moment and video may consume 1,000 rate credits at any given moment, leaving 6,500 rate credits to subdivide between data and data2. Thus, VC2 data traffic and VC2 data2 traffic may receive an additional 2,160 rate credits and 4,320 rate credits, respectively, and VC3 data traffic and VC3 data2 traffic may receive an additional 2,160 rate credits and 4,320 rate credits, respectively.
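  • The arithmetic of this example can be checked in a few lines of Python; the 500 and 1,000 credit figures for voice and video are the assumed values from the example above.

```python
# Checking the arithmetic of the example: 1 Mbit/s over an 8 ms interval is
# 8,000 bits of credit per VC; the 500 and 1,000 credits for voice and video
# are the assumed figures from the example above.
total_credits = 1_000_000 * 8 // 1000        # 8,000 bits per 8 ms interval
remaining = total_credits - 500 - 1_000      # 6,500 credits left for data and data2
mult = remaining // (10 + 20)                # 216 (approximately 6,500 / 30)
print(mult * 10, mult * 20)                  # 2160 4320
```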
  • Variations
  • Although the above examples use three priorities, any number of priorities may be used in various implementations. In implementations consistent with the principles of the invention, simple shared shaping may first monitor bandwidth use of higher priority constituents and may allocate remaining bandwidth to a lowest priority scheduler node or queue. In some implementations, a group of simple shared shaping constituents may all be VCs or VPs.
  • In implementations of compound shared shaping consistent with the principles of the invention, compound shared shaping constituents may be VC constituents or VP constituents or both. Some constituents may be strict priority constituents and other constituents may be weighted constituents.
  • The above examples describe an enqueue rate being used as a rate measurement for simple shared shaping and a dequeue rate being used as a rate measurement for compound shared shaping. In other implementations of simple shared shaping, a dequeue rate may be used as a rate measurement, and in other implementations of compound shared shaping, an enqueue rate may be used as a rate measurement.
  • An implementation consistent with the principles of the invention may use one or more Ethernet networks that may conform to the IEEE 802.1Q standard, which describes an extension to the Ethernet header that may include a tag to identify a virtual local area network (VLAN). A VLAN is a logical group of devices. VLANs provide a network administrator with the ability to resegment networks without physically rearranging the devices or network connections. In such an implementation, the tag may identify a particular VLAN that may correspond to either a VP or a VC from the previously described examples. For example, FIG. 13 illustrates an implementation consistent with the principles of the invention that uses Ethernet VLANs.
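  • As a rough illustration, the sketch below reads the 12-bit VLAN ID from a frame carrying a single IEEE 802.1Q tag (TPID 0x8100) so that the ID could be mapped to a VC or VP as described above. The frame bytes are fabricated for the example.

```python
# A sketch of reading the 12-bit VLAN ID from a singly tagged IEEE 802.1Q
# Ethernet frame so that the ID can be mapped to a VC or VP as described
# above. The frame bytes below are fabricated for the example.
import struct

TPID_8021Q = 0x8100  # EtherType value that introduces an 802.1Q tag


def vlan_id(frame: bytes):
    """Return the VLAN ID if the frame carries an 802.1Q tag, else None."""
    ethertype, = struct.unpack_from("!H", frame, 12)  # the two bytes after the MAC addresses
    if ethertype != TPID_8021Q:
        return None
    tci, = struct.unpack_from("!H", frame, 14)        # tag control information
    return tci & 0x0FFF                               # the low 12 bits are the VLAN ID


frame = bytes(12) + struct.pack("!HH", TPID_8021Q, (3 << 13) | 100) + bytes(46)
print(vlan_id(frame))  # 100
```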
  • FIG. 13 corresponds to FIG. 4 having virtual circuits replaced with VLANs. FIG. 13 illustrates operation of simple shared shaping that may be implemented on top of a hierarchical scheduler in network device 202, consistent with principles of the invention. The exemplary hierarchical scheduler of FIG. 13 includes a physical Ethernet port 1302, a high priority group scheduler node 1304, a medium priority group scheduler node 1306, a VLAN3 no group scheduler node 1308, a VLAN2 no group scheduler node 1310, a VLAN1 no group scheduler node 1312, a VLAN1 high priority scheduler node 1314, a VLAN3 high priority scheduler node 1316, a VLAN1 high priority queue 1318, a VLAN3 high priority queue 1320, a VLAN1 medium priority scheduler node 1322, a VLAN2 medium priority scheduler node 1324, a VLAN1 medium priority queue 1326, a VLAN2 medium priority queue 1328, a VLAN1 low or no priority queue 1330, a VLAN2 low or no priority queue 1332, and a VLAN3 low or no priority queue 1334.
  • In this example, network device 202 may receive network traffic via physical Ethernet port 1302. Network device 202 may then forward traffic from port 1302 according to a priority that may be associated with the traffic. For example, voice traffic may be assigned a strict or highest priority, video traffic may be assigned a medium priority, and other traffic, such as data traffic for VLAN1, VLAN2, and VLAN3, may be assigned a low priority or no priority. Thus, network device 202 may forward voice traffic from port 1302 to high priority group scheduler node 1304, video traffic from port 1302 to medium priority group scheduler node 1306, and other traffic, such as data traffic, from port 1302 to low priority or no priority scheduler nodes, such as VLAN1 (no group) scheduler node 1312, VLAN2 (no group) scheduler node 1310, or VLAN3 (no group) scheduler node 1308.
  • High priority group scheduler node 1304 may then forward traffic to a VLAN associated with the traffic. In this example, only VLAN1 and VLAN3 may carry high priority traffic. High priority group scheduler node 1304 may forward traffic received from port 1302 to VLAN1 high priority traffic queue 1318 or VLAN3 high priority traffic queue 1320 via VLAN1 high priority scheduler node 1314 or VLAN3 high priority scheduler node 1316, respectively.
  • In this example, only VLAN1 and VLAN2 carry medium priority traffic. Medium priority group scheduler node 1306 may forward medium priority traffic to VLAN1 or VLAN2 medium priority traffic queues 1326, 1328 via VLAN1 or VLAN2 medium priority scheduler nodes 1322, 1324, respectively.
  • In this example, network device 202 may forward low or no priority traffic for VLAN1, VLAN2 or VLAN3 from port 1302 to low priority queues 1330, 1332, 1334 via VLAN1-VLAN3 (no group) scheduler nodes 1312, 1310, 1308, respectively.
  • The exemplary simple shared shaper may shape the VLAN1 scheduler node 1312 to have a rate equal to:

  • configured shared shaping rate for VLAN1−(VLAN1 high priority enqueue rate+VLAN1 medium priority enqueue rate).
  • In this example, the constituents of the VLAN1 interface are VLAN1 high priority traffic, VLAN1 medium priority traffic, and VLAN1 low priority traffic.
  • The exemplary simple shared shaper may shape the VLAN2 scheduler node 1310 to have a rate equal to:

  • configured shared shaping rate for interface VLAN2−(VLAN2 medium priority enqueue rate).
  • In this example, the constituents of the VLAN2 interface are VLAN2 medium priority traffic and VLAN2 low priority traffic. There is no VLAN2 high priority traffic.
  • The exemplary simple shared shaper may shape the VLAN3 scheduler node 1308 to have a rate equal to:

  • configured shared shaping rate for interface VLAN3−(VLAN3 high priority enqueue rate).
  • In this example, the constituents of the VLAN3 subscriber interface are VLAN3 high priority traffic and VLAN3 low priority traffic. There is no VLAN3 medium priority traffic.
  • In an alternative implementation, the exemplary simple shared shaper may use dequeue rates instead of enqueue rates.
  • In another implementation consistent with the principles of the invention, a stacked VLAN encapsulation may be employed. That is, two IEEE 802.1Q VLAN tags may be used, where a VP may correspond to an outer VLAN tag and a VC may correspond to an inner VLAN tag. In an alternative implementation, a VP may correspond to an inner VLAN tag and a VC may correspond to an outer VLAN tag.
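  • A similar sketch for the stacked case reads both the outer and the inner VLAN IDs. Both tags are assumed to use the 0x8100 TPID, consistent with the two-802.1Q-tag arrangement described above; deployments based on IEEE 802.1ad commonly use 0x88a8 for the outer tag instead.

```python
# A sketch of reading the outer and inner VLAN IDs from a frame carrying two
# stacked 802.1Q tags, where one tag would identify the VP and the other the
# VC. Both tags are assumed here to use the 0x8100 TPID.
import struct

TPID_8021Q = 0x8100


def stacked_vlan_ids(frame: bytes):
    """Return (outer_vid, inner_vid) for a doubly tagged frame, else None."""
    outer_tpid, outer_tci, inner_tpid, inner_tci = struct.unpack_from("!HHHH", frame, 12)
    if outer_tpid != TPID_8021Q or inner_tpid != TPID_8021Q:
        return None
    return outer_tci & 0x0FFF, inner_tci & 0x0FFF


frame = bytes(12) + struct.pack("!HHHH", TPID_8021Q, 1, TPID_8021Q, 11) + bytes(46)
print(stacked_vlan_ids(frame))  # (1, 11)
```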
  • FIG. 14, which is similar to FIG. 12, illustrates an exemplary configuration of a shared shaper that uses a stacked VLAN encapsulation. The exemplary hierarchical scheduler of FIG. 14 includes a physical Ethernet port 1402, a VLAN1 group scheduler node 1404, a VLAN11 low priority scheduler node 1406, a VLAN11 low priority data queue 1408, a VLAN11 low priority data2 queue 1410, a VLAN12 low priority scheduler node 1412, a VLAN12 low priority data queue 1414, a VLAN12 low priority data2 queue 1416, a VLAN13 low priority scheduler node 1418, a VLAN13 low priority data queue 1420, a VLAN13 low priority data2 queue 1422, a medium priority group scheduler node 1405, a VLAN1 medium priority scheduler node 1424, a VLAN11 video traffic queue 1426, a VLAN12 video traffic queue 1428, a VLAN13 video traffic queue 1430, a high priority group scheduler node 1432, a VLAN1 high priority scheduler node 1434, a VLAN11 voice queue 1436, a VLAN12 voice queue 1438, and a VLAN13 voice queue 1440.
  • In this example, traffic may be one of several traffic classes, voice, video, data, and a new data class, data2. Traffic may arrive via physical Ethernet port 1402. Network device 202 may forward low priority VLAN1 traffic from port 1402 to VLAN1 group scheduler node 1404. In this example, VLAN1 includes traffic for VLAN11, VLAN12 and VLAN13. VLAN1 group scheduler 1404 may then forward traffic to VLAN11 low priority scheduler node 1406, VLAN12 low priority scheduler node 1412 or VLAN13 low priority scheduler node 1418. VLAN11 low priority scheduler 1406 may then forward VLAN11 data traffic to queue 1408 and VLAN11 data2 traffic to queue 1410. VLAN12 low priority scheduler node 1412 may forward data traffic to queue 1414 and data2 traffic to queue 1416. VLAN13 low priority scheduler node 1418 may forward data traffic to queue 1420 and data2 traffic to queue 1422.
  • Medium priority group scheduler node 1405 may forward VLAN1 traffic to VLAN1 medium priority scheduler node 1424. VLAN1 medium priority scheduler node 1424 may then forward VLAN11 video traffic to queue 1426, VLAN12 video traffic to queue 1428, and VLAN13 video traffic to queue 1430.
  • High priority group scheduler node 1432 may forward VLAN1 traffic to VLAN1 high priority scheduler node 1434. VLAN1 high priority scheduler node 1434 may then forward VLAN11 voice traffic to queue 1436, VLAN12 voice traffic to queue 1438, and VLAN13 voice traffic to queue 1440.
  • In the above exemplary shared shaper of FIG. 14, VLAN 1 corresponds to VP1 of FIG. 12 and VLANs 11-13 correspond to VCs 1-3 of FIG. 12, respectively. The simple or compound shared shaping procedures of FIGS. 5, 7, and 9-11 may be employed with the exemplary shared shaper of FIG. 14 by, for example, substituting an inner VLAN tag for a VP and an outer VLAN tag for a VC, or alternatively, substituting an outer VLAN tag for a VP and an inner VLAN tag for a VC.
  • CONCLUSION
  • The foregoing description describes implementations of network devices that may shape bandwidth and share unused bandwidth with constituents of different priorities. The foregoing description of exemplary embodiments of the present invention provides illustration and description, but is not intended to be exhaustive or to limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. For example, configurations other than those described may be possible, such as defining five priorities of traffic.
  • While series of acts have been described with regard to FIGS. 5, 7, and 9-11, the order of the acts is not critical. No element, act, or instruction used in the description of the present application should be construed as critical or essential to the invention unless explicitly described as such. Also, as used herein, the article “a” is intended to include one or more items. Where only one item is intended, the term “one” or similar language is used. The scope of the invention is defined by the following claims and their equivalents.

Claims (21)

1-20. (canceled)
21. A method, comprising:
associating, by a device, a first part of bandwidth with a first traffic class of a first priority;
associating, by the device, a second part of the bandwidth with a second traffic class of a second priority;
determining, by the device, whether a portion of the first part is unused; and
allocating, by the device, the portion of the first part to the second traffic class when the portion of the first part is unused.
22. The method of claim 21, further comprising:
determining whether a portion of the second part is unused; and
allocating the portion of the second part to the first traffic class when the portion of the second part is unused.
23. The method of claim 21, where the associating the first part of the bandwidth with the first traffic class of the first priority comprises:
assigning the first priority to the first traffic class;
associating a scheduler node with the first priority; and
calculating an amount of the first part of the bandwidth for the scheduler node.
24. The method of claim 21, where the determining whether the portion of the first part is unused comprises:
monitoring a queue rate associated with the first traffic class; and
determining whether the portion of the first part is unused based on the queue rate.
25. The method of claim 24, where the queue rate is based on at least one of an enqueue rate or a dequeue rate of a queue associated with the first traffic class.
26. The method of claim 21, where the first traffic class corresponds to voice traffic.
27. The method of claim 21, further comprising associating a third part of the bandwidth with a third traffic class of a third priority, where the allocating the portion of the first part to the second traffic class comprises allocating the portion of the first part to the third traffic class.
28. A network device comprising:
a packet forwarding engine with a hierarchical scheduler, the hierarchical scheduler comprising:
a physical port to receive network traffic,
a first priority group scheduler node associated with a first part of bandwidth for a first traffic class of a first priority,
a second priority group scheduler node associated with a second part of the bandwidth for a second traffic class of a second priority,
where the first priority group scheduler node and the second priority group scheduler node receive the network traffic from the physical port, and
where the network traffic comprises the first traffic class and the second traffic class; and
a processor to allocate a portion of the first part of the bandwidth for the second traffic class when the portion of the first part of the bandwidth is unused.
29. The network device of claim 28, where the hierarchical scheduler further comprises:
a first virtual circuit scheduler node and a second virtual circuit scheduler node associated with the first priority group scheduler; and
a queue associated with the second priority group scheduler.
30. The network device of claim 29, where the first priority group scheduler forwards the network traffic corresponding to the first traffic class to at least one of the first virtual circuit scheduler node or the second virtual circuit scheduler node.
31. The network device of claim 29, where, when allocating the portion of the first part of the bandwidth for the second traffic class, the processor is to: allocate the portion of the first part of the bandwidth based on a queue rate associated with the queue.
32. The network device of claim 29,
where the hierarchical scheduler further comprises a first virtual circuit queue associated with the first virtual circuit scheduler node, and
where, when allocating the portion of the first part of the bandwidth for the second traffic class, the processor is to: allocate the portion of the first part of the bandwidth based on a first queue rate associated with the queue and a second queue rate associated with the first virtual circuit queue.
33. The network device of claim 28,
where the first traffic class comprises at least one of voice traffic or video traffic, and
where the second traffic class comprises data traffic.
34. A device comprising:
a hierarchical scheduler comprising:
a first virtual circuit group scheduler node and a second virtual circuit group scheduler node associated with a first part of bandwidth for a first traffic class of a first priority,
a medium priority group scheduler node associated with a second part of the bandwidth for a second traffic class of a second priority, and
a high priority group scheduler node associated with a third part of the bandwidth for a third traffic class of a third priority; and
a processor to:
allocate a first portion of at least one of the second part of the bandwidth or the third part of the bandwidth for the first traffic class based on a shaping rate.
35. The device of claim 34, where the hierarchical scheduler further comprises:
a first queue associated with the first virtual circuit group scheduler node;
a second queue associated with the second virtual circuit group scheduler node;
a third queue associated with a first virtual circuit of the medium priority group scheduler node; and
a fourth queue associated with a second virtual circuit of the medium priority group scheduler node.
36. The device of claim 35, where, when allocating the first portion of the second part of the bandwidth for the first traffic class, the processor is to:
monitor a first queue rate associated with the first queue;
monitor a second queue rate associated with the third queue; and
calculate the shaping rate based on the first queue rate and the second queue rate.
37. The device of claim 35, where the first portion corresponds only to the first virtual circuit of the medium priority group.
38. The device of claim 34, where the hierarchical scheduler further comprises:
a first queue associated with the first virtual circuit group scheduler node;
a second queue associated with a first virtual circuit of the medium priority group scheduler node; and
a third queue associated with a first virtual circuit of the high priority group scheduler node.
39. The device of claim 35, where, when allocating the first portion of the second part of the bandwidth for the first traffic class, the processor is to:
monitor a first queue rate associated with the first queue;
monitor a second queue rate associated with the second queue;
monitor a third queue rate associated with the third queue; and
calculate the shaping rate based on the first queue rate, the second queue rate, and the third queue rate.
40. The device of claim 39, where, when monitoring the first queue rate, the processor is to: measure and update the first queue rate at a defined interval.
US12/899,845 2004-09-07 2010-10-07 Method and apparatus for shared shaping Abandoned US20110019572A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/899,845 US20110019572A1 (en) 2004-09-07 2010-10-07 Method and apparatus for shared shaping

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US10/934,558 US7369495B1 (en) 2004-09-07 2004-09-07 Method and apparatus for shared shaping
US12/056,466 US7835279B1 (en) 2004-09-07 2008-03-27 Method and apparatus for shared shaping
US12/899,845 US20110019572A1 (en) 2004-09-07 2010-10-07 Method and apparatus for shared shaping

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US12/056,466 Continuation US7835279B1 (en) 2004-09-07 2008-03-27 Method and apparatus for shared shaping

Publications (1)

Publication Number Publication Date
US20110019572A1 true US20110019572A1 (en) 2011-01-27

Family

ID=39332391

Family Applications (3)

Application Number Title Priority Date Filing Date
US10/934,558 Expired - Fee Related US7369495B1 (en) 2004-09-07 2004-09-07 Method and apparatus for shared shaping
US12/056,466 Expired - Fee Related US7835279B1 (en) 2004-09-07 2008-03-27 Method and apparatus for shared shaping
US12/899,845 Abandoned US20110019572A1 (en) 2004-09-07 2010-10-07 Method and apparatus for shared shaping

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US10/934,558 Expired - Fee Related US7369495B1 (en) 2004-09-07 2004-09-07 Method and apparatus for shared shaping
US12/056,466 Expired - Fee Related US7835279B1 (en) 2004-09-07 2008-03-27 Method and apparatus for shared shaping

Country Status (1)

Country Link
US (3) US7369495B1 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7369495B1 (en) * 2004-09-07 2008-05-06 Juniper Networks, Inc. Method and apparatus for shared shaping
JP2007013449A (en) * 2005-06-29 2007-01-18 Nec Commun Syst Ltd Shaper control method, data communication system, network interface device and network repeating device
CN100377548C (en) * 2005-07-15 2008-03-26 华为技术有限公司 Method and device for realizing virtual exchange
CN101645828A (en) * 2008-08-07 2010-02-10 华为技术有限公司 Method, device and system for synchronizing bandwidth resources
US8009560B2 (en) * 2008-12-31 2011-08-30 Microsoft Corporation Detecting and managing congestion on a shared network link
US20100189116A1 (en) * 2009-01-23 2010-07-29 Fujitsu Network Communications, Inc. Routing A Packet Flow In A VLAN
EP2721785B1 (en) * 2011-06-15 2016-05-18 BAE Systems PLC Data transfer
EP2536070A1 (en) * 2011-06-15 2012-12-19 BAE Systems Plc Data transfer
WO2014067051A1 (en) * 2012-10-29 2014-05-08 Qualcomm Incorporated Credit-based dynamic bandwidth allocation for time-division multiple access communications
US9455794B2 (en) 2012-10-29 2016-09-27 Qualcomm Incorporated Device registration and sounding in a time-division multiple access network
CN103067308A (en) * 2012-12-26 2013-04-24 中兴通讯股份有限公司 Method and system for bandwidth distribution
US9450881B2 (en) * 2013-07-09 2016-09-20 Intel Corporation Method and system for traffic metering to limit a received packet rate
US9960957B2 (en) * 2015-07-29 2018-05-01 Netapp, Inc. Methods for prioritizing failover of logical interfaces (LIFs) during a node outage and devices thereof
US10069755B1 (en) 2016-07-01 2018-09-04 Mastercard International Incorporated Systems and methods for priority-based allocation of network bandwidth
US10764201B2 (en) 2017-11-28 2020-09-01 Dornerworks, Ltd. System and method for scheduling communications

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6292465B1 (en) * 1997-05-27 2001-09-18 Ukiah Software, Inc. Linear rule based method for bandwidth management
US6119235A (en) * 1997-05-27 2000-09-12 Ukiah Software, Inc. Method and apparatus for quality of service management
US6438106B1 (en) * 1998-12-22 2002-08-20 Nortel Networks Limited Inter-class schedulers utilizing statistical priority guaranteed queuing and generic cell-rate algorithm priority guaranteed queuing
US6842783B1 (en) * 2000-02-18 2005-01-11 International Business Machines Corporation System and method for enforcing communications bandwidth based service level agreements to plurality of customers hosted on a clustered web server
US6980511B1 (en) * 2000-07-26 2005-12-27 Santera Systems Inc. Method of active dynamic resource assignment in a telecommunications network
US7349704B2 (en) * 2001-07-02 2008-03-25 Cisco Technology, Inc. Method and system for sharing over-allocated bandwidth between different classes of service in a wireless network
US6973315B1 (en) * 2001-07-02 2005-12-06 Cisco Technology, Inc. Method and system for sharing over-allocated bandwidth between different classes of service in a wireless network
US20030076849A1 (en) * 2001-10-10 2003-04-24 Morgan David Lynn Dynamic queue allocation and de-allocation
US20090279568A1 (en) * 2003-02-26 2009-11-12 Xue Li Class-based bandwidth allocation and admission control for virtual private networks with differentiated service
US20060034330A1 (en) * 2003-03-17 2006-02-16 Ryuichi Iwamura Bandwidth management of virtual networks on a shared network
US7472159B2 (en) * 2003-05-15 2008-12-30 International Business Machines Corporation System and method for adaptive admission control and resource management for service time guarantees
US20040252714A1 (en) * 2003-06-16 2004-12-16 Ho-Il Oh Dynamic bandwidth allocation method considering multiple services in ethernet passive optical network system
US7369495B1 (en) * 2004-09-07 2008-05-06 Juniper Networks, Inc. Method and apparatus for shared shaping

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8693470B1 (en) * 2010-05-03 2014-04-08 Cisco Technology, Inc. Distributed routing with centralized quality of service
US20120158838A1 (en) * 2010-12-15 2012-06-21 Sap Ag System and method for logging a scheduler
US8965966B2 (en) * 2010-12-15 2015-02-24 Sap Se System and method for logging a scheduler
US11811621B2 (en) * 2020-06-04 2023-11-07 Sandvine Corporation System and method for quality of experience management through the allocation of bandwidth per service category

Also Published As

Publication number Publication date
US7369495B1 (en) 2008-05-06
US7835279B1 (en) 2010-11-16

Similar Documents

Publication Publication Date Title
US7835279B1 (en) Method and apparatus for shared shaping
CN111600754B (en) Industrial heterogeneous network scheduling method for interconnection of TSN (transmission time network) and non-TSN (non-Transmission time network)
US7701849B1 (en) Flow-based queuing of network traffic
US7158528B2 (en) Scheduler for a packet routing and switching system
US8320240B2 (en) Rate limiting and minimum and maximum shaping in a network device
US7212490B1 (en) Dynamic load balancing for dual ring topology networks
US8467294B2 (en) Dynamic load balancing for port groups
US9276870B2 (en) Switching node with load balancing of bursts of packets
CN114073052A (en) Slice-based routing
US7944834B2 (en) Policing virtual connections
CN103534997A (en) Port and priority based flow control mechanism for lossless ethernet
US20210135998A1 (en) Quality of service in virtual service networks
US20100271955A1 (en) Communication system
US20020154648A1 (en) Scaleable and robust solution for reducing complexity of resource identifier distribution in a large network processor-based system
US20070268825A1 (en) Fine-grain fairness in a hierarchical switched system
KR101990235B1 (en) Method and system for traffic management in a network node in a packet switched network
US8218440B2 (en) High speed transmission protocol
US7397762B1 (en) System, device and method for scheduling information processing with load-balancing
JP4758476B2 (en) Arbitration method in an integrated circuit and a network on the integrated circuit
US7554919B1 (en) Systems and methods for improving packet scheduling accuracy
US7016302B1 (en) Apparatus and method for controlling queuing of data at a node on a network
JP2012182605A (en) Network control system and administrative server
WO2021116117A1 (en) Method for an improved traffic shaping and/or management of ip traffic in a packet processing system, telecommunications network, system, program and computer program product
US7009973B2 (en) Switch using a segmented ring
US20070133561A1 (en) Apparatus and method for performing packet scheduling using adaptation round robin

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION