WO2011131738A1 - Network data congestion management probe system - Google Patents

Network data congestion management probe system

Info

Publication number
WO2011131738A1
Authority
WO
WIPO (PCT)
Prior art keywords
network
source node
network device
traffic
probe packet
Prior art date
Application number
PCT/EP2011/056364
Other languages
French (fr)
Inventor
Casimer De Cusatis
Mircea Gusat
Daniel Crisan
Cyriel Johan Minkenberg
Original Assignee
International Business Machines Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corporation
Priority to GB1219662.2A (GB2512808B)
Priority to CN201180019644XA (CN102859951A)
Priority to DE112011100198.3T (DE112011100198B4)
Publication of WO2011131738A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 45/00 Routing or path finding of packets in data switching networks
    • H04L 45/12 Shortest path evaluation
    • H04L 45/125 Shortest path evaluation based on throughput or bandwidth
    • H04L 45/127 Shortest path evaluation based on intermediate node capabilities
    • H04L 45/26 Route discovery packet
    • H04L 45/66 Layer 2 routing, e.g. in Ethernet-based MANs
    • H04L 47/00 Traffic control in data switching networks
    • H04L 47/10 Flow control; Congestion control
    • H04L 47/11 Identifying congestion
    • H04L 47/12 Avoiding congestion; Recovering from congestion
    • H04L 47/122 Avoiding congestion; Recovering from congestion by diverting traffic away from congested entities


Abstract

A system to investigate congestion in a computer network may include network devices to route data packets throughout the network. The system may also include a source node that sends a probe packet to the network devices to gather information about the traffic queues at each network device that receives the probe packet. The system may further include a routing table at each examined network device that is based upon the gathered information for each respective traffic queue.

Description

NETWORK DATA CONGESTION MANAGEMENT PROBE SYSTEM
FIELD OF THE INVENTION
The invention relates to the field of computer systems and, more particularly, to addressing and managing data congestion in computer networks.
BACKGROUND OF THE INVENTION
Generally, conventional Ethernet fabrics are dynamically routed. In other words, packets are directed from one switch node to the next, hop by hop, through the network. Examples of protocols used include Converged Enhanced Ethernet (CEE), Fibre Channel over Converged Enhanced Ethernet (FCoCEE), and Data Center Bridging (DCB), as well as proprietary routing schemes.
SUMMARY OF THE INVENTION
According to one embodiment of the invention, a system to investigate congestion in a computer network may include network devices to route data packets throughout the network. The system may also include a source node that sends a probe packet to the network devices to gather information about the traffic queues at each network device that receives the probe packet. The system may further include a routing table at each network device that receives the probe packet, and the routing table is based upon the gathered information for each respective traffic queue.
The network devices may be members of at least one virtual local area network. The probe packets may include a layer 2 flag and/or sequence/flow/source node IDs.
Each network device may ignore the probe packet if it is busy. At least one of the network devices may provide its extended queue status to the source node in response to receiving the probe packet. A network device may provide its extended queue status to other network devices in response to receiving the probe packet. The extended queue status includes the number of pings from any flow ID received since the last queue change, the number of packets forwarded since the last queue change, and/or pointers to a complete network device core dump.
If an extended queue status exceeds a threshold level, the source node updates the routing table to rebalance traffic loads. The probe packet is sent in response to the source node receiving a threshold number of congestion notification messages in a given time interval.
Another aspect of the invention is a method to investigate congestion in a computer network that may include sending a probe packet to network devices from a source node to gather information about the traffic queues at each network device that is examined by the probe packet. The method may also include basing a routing table at each network device that receives the probe packet on the gathered information for each respective traffic queue.
The method may further include organizing the network devices into a virtual local area network. The method may additionally include structuring the probe packets to include at least one of a layer 2 flag and sequence/flow/source node IDs.
The method may further include sending an extended queue status of at least one of the network devices to the source node in response to receiving the probe packet. The method may additionally include providing the extended queue status of a network device to other network devices in response to the network device receiving the probe packet.
The method may further comprise including at least one of the number of pings from any flow ID received since the last queue change, the number of packets forwarded since the last queue change, and pointers to a complete network device core dump as part of the extended queue status. The method may additionally include updating the routing table to rebalance traffic loads via the source node if the extended queue status exceeds a threshold level.
Another aspect of the invention is computer readable program code coupled to a tangible medium to investigate congestion in a computer network. The computer readable program code may be configured to send a probe packet to network devices from a source node to gather information about the traffic queues at each network device that is examined by the probe packet. The computer readable program code may also base a routing table at each network device that receives the probe packet on the gathered information for each respective traffic queue.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic block diagram of a system to investigate congestion in a computer network in accordance with the invention.
FIG. 2 is a flowchart illustrating method aspects according to the invention.
FIG. 3 is a flowchart illustrating method aspects according to the method of FIG. 2.
FIG. 4 is a flowchart illustrating method aspects according to the method of FIG. 2.
FIG. 5 is a flowchart illustrating method aspects according to the method of FIG. 2.
FIG. 6 is a flowchart illustrating method aspects according to the method of FIG. 2.
FIG. 7 is a flowchart illustrating method aspects according to the method of FIG. 5.
FIG. 8 is a flowchart illustrating method aspects according to the method of FIG. 2.
DETAILED DESCRIPTION OF THE INVENTION
The invention will now be described more fully hereinafter with reference to the accompanying drawings, in which preferred embodiments of the invention are shown. Like numbers refer to like elements throughout; like numbers with letter suffixes identify similar parts within a single embodiment, and the lower-case letter suffix n indicates an unlimited number of similar elements.
With reference now to Fig. 1, a system 10 to investigate congestion in a computer network 12 is initially described. The system 10 is a programmable apparatus that stores and manipulates data according to an instruction set as will be appreciated by those of skill in the art.
In one embodiment, the system 10 includes a communications network(s) 12, which enables a signal, e.g. data packet, probe packet, and/or the like, to travel anywhere within, or outside of, system 10. The communications network 12 is wired and/or wireless, for example. The communications network 12 is local and/or global with respect to system 10, for instance.
The system 10 includes network devices 14a-14n to route data packets throughout the network 12. The network devices 14a-14n are computer network equipment such as switches, network bridges, routers, and/or the like. The network devices 14a-14n can be connected together in any configuration to form the communications network 12, as will be appreciated by those of skill in the art.
The system 10 may further include a source node 16 that sends data packets to any of the network devices 14a-14n. There can be any number of source nodes 16 in the system 10. The source node 16 is any piece of computer equipment that is able to send data packets to the network devices 14a-14n.
The system 10 can also include a routing table 18a-18n at each respective network device 14a-14n. In another embodiment, the route the data packets are sent by any network device 14a-14n is based upon each respective routing table 18a-18n. The network devices 14a-14n can be members of at least one virtual local area network 20. The virtual local area network 20 permits the network devices 14a-14n to be configured and/or reconfigured with less regard for each network devices' 14a-14n physical characteristics as such relates to the communications network's 12 topology, as will be appreciated by those of skill in the art. In another embodiment, the source node 16 adds a header to the data packets in order to define the virtual local area network 20.
In one embodiment, the source node 16 sends a probe packet(s) to the network devices 14a-14n to gather information about the traffic queues at each network device that receives the probe packet(s). The routing table 18a-18n at each network device 14a-14n that receives the probe packet may be based upon the gathered information for each respective traffic queue.
In one embodiment, the network devices 14a-14n are members of at least one virtual local area network 20. In another embodiment, the probe packets include a layer 2 flag and/or sequence/flow/source node IDs.
Each network device 14a-14n can ignore the probe packet if it is busy. In one configuration, at least one of the network devices 14a-14n provides its extended queue status to the source node 16 in response to receiving the probe packet.
One of the network devices 14a-14n may provide its extended queue status to other network devices in response to receiving the probe packet. The extended queue status can include the number of pings from any flow ID received since the last queue change, the number of packets forwarded since the last queue change, and/or pointers to a complete network device core dump.
In one embodiment, if an extended queue status exceeds a threshold level, the source node 16 updates the routing table 18a-18n to rebalance traffic loads within the VLAN 20. The probe packet can be sent in response to the source node 16 receiving a threshold number of congestion notification messages in a given time interval. In one embodiment, the system 10 additionally includes a destination node 22 that works together with the source node 16 to determine the route the data packets follow through network 12. There can be any number of destination nodes 22 in system 10.
The source node 16 may be configured to collect congestion notification messages from the network devices 14a-14n, and map the collected congestion notification messages to the network topology. The system 10 may also include a filter 24 that controls which portions of the congestion notification messages from the network devices 14a-14n are used by the source node 16. Thus, the source node 16 can route around any network device 14a-14n for which the collected congestion notification messages reveal a history of congestion.
In one embodiment, the source node 16 routes to, or around, any network device 14a-14n based upon a link cost indicator 26. The system 10 can further include a destination node 22 that selects the order of the routes.
Another aspect of the invention is a method to investigate congestion in a computer network 12, which is now described with reference to flowchart 30 of FIG. 2. The method begins at Block 32 and may include sending a probe packet to network devices from a source node to gather information about the traffic queues at each network device that is examined by the probe packet at Block 34. The method may also include basing a routing table at each network device that receives the probe packet on the gathered information for each respective traffic queue at Block 36. The method ends at Block 38.
In another method embodiment, which is now described with reference to flowchart 40 of FIG. 3, the method begins at Block 42. The method may include the steps of FIG. 2 at Blocks 34 and 36. The method may additionally include organizing the network devices into a virtual local area network at Block 44. The method ends at Block 46.
In another method embodiment, which is now described with reference to flowchart 48 of FIG. 4, the method begins at Block 50. The method may include the steps of FIG. 2 at Blocks 34 and 36. The method may additionally include structuring the probe packets to include at least one of a layer 2 flag and sequence/flow/source node IDs at Block 52. The method ends at Block 54.
In another method embodiment, which is now described with reference to flowchart 56 of FIG. 5, the method begins at Block 58. The method may include the steps of FIG. 2 at Blocks 34 and 36. The method may additionally include sending an extended queue status of at least one of the network devices to the source node in response to receiving the probe packet at Block 60. The method ends at Block 62.
In another method embodiment, which is now described with reference to flowchart 64 of FIG. 6, the method begins at Block 66. The method may include the steps of FIG. 2 at Blocks 34 and 36. The method may additionally include providing the extended queue status of a network device to other network devices in response to the network device receiving the probe packet at Block 68. The method ends at Block 70.
In another method embodiment, which is now described with reference to flowchart 72 of FIG. 7, the method begins at Block 74. The method may include the steps of FIG. 5 at Blocks 34, 36, and 60. The method may additionally comprise including at least one of the number of pings from any flow ID received since the last queue change, the number of packets forwarded since the last queue change, and/or pointers to a complete network device core dump as part of the extended queue status at Block 76. The method ends at Block 78.
In another method embodiment, which is now described with reference to flowchart 80 of FIG. 8, the method begins at Block 82. The method may include the steps of FIG. 5 at Blocks 34, 36, and 60. The method may additionally include updating the routing table to rebalance traffic loads via the source node if the extended queue status exceeds a threshold level at Block 84. The method ends at Block 86.
In view of the foregoing, the system 10 addresses the investigation of congestion in computer networks 12. For example, large converged networks are prone to congestion and poor performance because they cannot sense and react to potential congestion conditions. System 10 provides a proactive scheme for probing network congestion points, identifying potential congestion areas in advance, and/or preventing them from forming by rerouting traffic along different paths.
In other words, system 10 uses proactive source-based routing, which incorporates an active feedback request command that takes snapshots of the state of the network and uses this information to prevent congestion or other traffic flow problems before they occur. In this approach the source node 16, e.g. traffic source, actively monitors the end-to-end traffic flows by inserting a probing packet, called "feedback request", into the data stream at periodic intervals. This probe packet will traverse the network 12 (the VLANs plus any alternative paths) and collect information on the traffic queue loads.
In one embodiment, system 10 does not require congestion notification messages ("CNM") in order to work. In another embodiment, in a network which uses CNMs, the probing can also be triggered by a source having received more than a certain number of CNMs in a given time interval.
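As an illustrative sketch only (no implementation is specified in the text above), the CNM-count trigger can be modelled as a counter over a sliding time interval; the class name CnmTrigger and the threshold and interval_s parameters are assumptions made for this example.

```python
import time
from collections import deque

class CnmTrigger:
    """Sketch: request a probe when more than `threshold` congestion
    notification messages (CNMs) arrive within `interval_s` seconds."""

    def __init__(self, threshold=10, interval_s=1.0):
        self.threshold = threshold
        self.interval_s = interval_s
        self.arrivals = deque()  # timestamps of recently received CNMs

    def on_cnm(self, now=None):
        """Record one received CNM; return True when a probe should be sent."""
        now = time.monotonic() if now is None else now
        self.arrivals.append(now)
        # Discard CNMs that have aged out of the time interval.
        while self.arrivals and now - self.arrivals[0] > self.interval_s:
            self.arrivals.popleft()
        return len(self.arrivals) > self.threshold
```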
A related problem in converged networks is the monitoring and control of adaptive routing fabrics. Most industry standard switches are compliant with IEEE 802.1Qau routing mechanisms. However, they fail to offer a means for delivering adaptive feedback information to the traffic sources before congestion arises in the network.
System 10 addresses the foregoing and greatly enhances the speed of congestion feedback on layer 2 networks, and provides the new function of anticipating probable congestion points before they occur.
In one embodiment, the source node 16 autonomously issues a feedback request command. In another embodiment, the source node 16 begins to issue feedback requests after receiving a set number of congestion notification messages, e.g. as defined in Quantized Congestion Notification (QCN). When feedback requests are returned, system 10 can either count the number of responses per flow ID (stateful approach) or allow the responses to remain anonymous (stateless approach). In one embodiment, the source node 16 injects a feedback request packet into the network 12 with a layer 2 flag and sequence/flow/RP IDs. The network device 14a-14n receives the feedback request. If the network device 14a-14n is busy, it may disregard the request; if not, it increments a counter indicating that a feedback request packet has been received. The network device 14a-14n then dumps its extended queue status information and returns this data to the source node 16 that originated the feedback request packet. In another embodiment, the network device 14a-14n may also be set to forward the feedback request to other nodes in the network 12.
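A minimal Python sketch of this exchange follows; it is not taken from the disclosure, and the names FeedbackRequest, ExtendedQueueStatus, and NetworkDeviceSketch are hypothetical. It only mirrors the behaviour described above: a busy device ignores the probe, while an idle device counts it, returns its extended queue status to the originating source, and may forward the request onward.

```python
from dataclasses import dataclass

@dataclass
class FeedbackRequest:
    """Probe packet: a layer 2 flag plus sequence/flow/source identifiers."""
    l2_probe_flag: bool
    sequence_id: int
    flow_id: int
    source_id: str

@dataclass
class ExtendedQueueStatus:
    """Reply fields mirroring the extended queue status items listed above."""
    device_id: str
    pings_since_queue_change: int
    packets_forwarded_since_queue_change: int
    core_dump_pointer: str  # opaque reference to a complete core dump

class NetworkDeviceSketch:
    """How a switch might service a feedback request (illustrative only)."""

    def __init__(self, device_id):
        self.device_id = device_id
        self.probe_counter = 0
        self.pings_since_queue_change = 0
        self.packets_forwarded_since_queue_change = 0

    def handle_feedback_request(self, req, busy=False, forward_to=()):
        if busy:
            return None  # a busy device may simply disregard the request
        self.probe_counter += 1  # count received feedback request packets
        status = ExtendedQueueStatus(
            device_id=self.device_id,
            pings_since_queue_change=self.pings_since_queue_change,
            packets_forwarded_since_queue_change=self.packets_forwarded_since_queue_change,
            core_dump_pointer=f"coredump://{self.device_id}",
        )
        # Optionally propagate the request to other nodes in the network.
        for neighbour in forward_to:
            neighbour.handle_feedback_request(req)
        return status  # returned to the source node that originated the probe
```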
In one embodiment, the extended queue status may include the number of pings from any flow ID received since the last queue change, the number of packets forwarded since the last queue change, and/or the pointers to a complete CP core dump.
In one embodiment, the feedback requests may be triggered by QCN frames, so that any rate-limiting traffic flows are probed. For example, system 10 might send one feedback request frame for every N kilobytes of data sent per flow (e.g. N = 750 kB). This provides the potential for early response to pending congestion points. Using the information obtained from feedback requests, source adaptive routing may be employed to stop network 12 congestion before it happens. This information also makes it possible to optimize traffic flows according to latency, throughput, or other user requirements.
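For illustration only, one way to realize the per-flow trigger mentioned above is a simple byte counter that fires once every N kilobytes sent on a flow; the class name PerFlowProbeScheduler and its parameters are assumptions, not part of the disclosure.

```python
class PerFlowProbeScheduler:
    """Sketch: emit one feedback request for every `n_bytes` of data sent
    per flow, e.g. N = 750 kB as in the example above."""

    def __init__(self, n_bytes=750 * 1024):
        self.n_bytes = n_bytes
        self.sent = {}  # flow_id -> bytes sent since the last probe

    def on_data_sent(self, flow_id, nbytes):
        """Account for transmitted data; return True when a probe is due."""
        total = self.sent.get(flow_id, 0) + nbytes
        due = total >= self.n_bytes
        self.sent[flow_id] = total % self.n_bytes if due else total
        return due
```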
In one embodiment, system 10 uses QCN messaging on a converged network. The detailed queue information is already available in the network device 14a-14n, e.g. switch CP, but it needs to be formatted and collected by the source node 16.
The performance overhead has been demonstrated to be less than 1% for feedback request monitoring in software. The overhead can be further reduced, if desired, by allowing the source node 16 and network devices 14a-14n to adjust the frequency of feedback control requests. This approach further enhances the value of enabled switch fabrics and allows for more effective use of source based adaptive routing. In one embodiment, in a CEE/FCoE network 12 having a plurality of VLANs 20, each having a plurality of network devices 14a-14n, e.g. switches, which enable paths over which traffic can be routed through the network, a method for locating potential congestion points in the network is described.
As noted above, large converged networks do not define adequate means to control network congestion, leading to traffic delays, dropped data frames, and poor performance. The conventional hop-by-hop routing is not efficient at dealing with network congestion, especially when a combination of storage and networking traffic is placed over a common network, resulting in new and poorly characterized traffic statistics. If the benefits of converged networking are to be realized, a new method of traffic routing is required. To address such, system 10 uses a source based, reactive, and adaptive routing scheme.
In one embodiment, system 10 adds a virtual LAN (VLAN) 20 routing table 18a-18n in every network device 14a-14n, e.g. switches. The VLAN 20 is defined by a 12 bit header field appended to all packets (hence this is a source-based routing scheme), plus a set of routing table 18a-18n entries (in all the switches) that can route the VLANs.
The 12 bit VLAN 20 ID is in addition to the usual packet header fields, and it triggers the new VLAN 20 routing scheme in each network device 14a-14n. Each network device 14a- 14n has its own routing entry for every active VLAN 20.
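The sketch below, offered only as an illustration and not as the patented implementation, shows the two ingredients just described: a 12-bit VLAN ID appended to the usual header fields by the source, and a per-device routing entry keyed by that ID; tag_packet and VlanRoutingTable are hypothetical names.

```python
VLAN_ID_BITS = 12          # the VLAN ID is a 12-bit field, i.e. values 0..4095
MAX_VLANS = 1 << VLAN_ID_BITS

def tag_packet(header, payload, vlan_id):
    """Append the 12-bit VLAN ID to the usual packet header fields."""
    if not 0 <= vlan_id < MAX_VLANS:
        raise ValueError("VLAN ID must fit in 12 bits")
    return {**header, "vlan_id": vlan_id, "payload": payload}

class VlanRoutingTable:
    """Each network device holds one routing entry per active VLAN."""

    def __init__(self):
        self.entries = {}  # vlan_id -> egress port pre-loaded by the source

    def set_route(self, vlan_id, egress_port):
        self.entries[vlan_id] = egress_port

    def route(self, packet):
        # The VLAN ID in the header selects this device's pre-loaded entry.
        return self.entries[packet["vlan_id"]]
```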
In one embodiment, source node 16 and destination node 22 use a global selection function to decide the optimal end-to-end path for the traffic flows. The optimal end-to-end path is then pre-loaded into the network devices 14a-14n, e.g. switches, which are members of this VLAN 20.
In one embodiment, the VLAN 20 table 18a-18n is adaptive and will be periodically updated. The refresh time of the routing table 18a-18n can be varied, but will probably be at least a few seconds for a reasonably large number (4,000 or so) of VLANs 20. The data traffic subject to optimization will use the VLANs 20 as configured by the controlling sources/applications 16. In one embodiment, congestion notification messages (CNMs) from the network devices 14a-14n, e.g. fabric switches, are collected by the traffic source 16, marking the switch and port locations based on the port ID.
Every traffic source 16 builds a history of CNMs that it has received, which is mapped to the network topology. Based on the source's 16 historical mapping of global end-to-end paths, the source will reconfigure any overloaded paths, defined by the VLAN 20 tables 18a-18n, to route around the most persistent congestion points (signaled by the enabled switches).
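Purely as an illustration of the bookkeeping described above (the data structure and the persistence threshold are assumptions), a source could keep a per-location CNM count and treat locations exceeding the threshold as points to route around:

```python
from collections import Counter

class CnmHistory:
    """Sketch: per-source history of CNMs keyed by (switch ID, port ID)."""

    def __init__(self, persistence_threshold=5):
        self.counts = Counter()
        self.persistence_threshold = persistence_threshold

    def record(self, switch_id, port_id):
        # Each CNM is mapped onto the topology via its switch and port IDs.
        self.counts[(switch_id, port_id)] += 1

    def persistent_congestion_points(self):
        """Locations the source would try to route around when reconfiguring."""
        return {loc for loc, n in self.counts.items()
                if n >= self.persistence_threshold}
```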
In one embodiment, for each destination, the source 16 knows all the possible paths a packet can take. The source 16 can then evaluate the congestion level along each of these paths and choose the one with the smallest cost, and therefore the method is adaptive.
In another embodiment, the order in which the paths are selected is given by the destination 22. In the case that no CNMs are received, the source 16 will default to the same path used by conventional and oblivious methods.
In one embodiment, if the default path is congested, the alternative paths are checked next (by comparing their congestion cost), starting with the default one, in a circular search, until a non-congested path is found. Otherwise, the first path with the minimum congestion cost is chosen.
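The circular search just described can be sketched as follows; the function name select_path, the cost callback, and the congested_threshold parameter are assumptions introduced for this example only.

```python
def select_path(paths, cost, default_index=0, congested_threshold=0.0):
    """Sketch of the circular path search: `paths` is the destination-given
    ordering of candidate paths, `cost(path)` its congestion cost, and a path
    counts as non-congested when its cost is at most `congested_threshold`."""
    n = len(paths)
    # Walk the candidates in circular order, starting from the default path.
    for offset in range(n):
        candidate = paths[(default_index + offset) % n]
        if cost(candidate) <= congested_threshold:
            return candidate  # first non-congested path found
    # Every path is congested: fall back to the first path of minimum cost.
    return min(paths, key=cost)
```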
In another embodiment, the CNMs are used as link cost indicators 26. System 10 defines both a global and local method of cost weighting, plus a filtering scheme to enhance performance.
In this manner, system 10 can determine where the most congested links are located in the network 12. For each destination 22, the source 16 knows all the possible paths a packet can take. The source 16 can then evaluate the congestion level along each of these paths and choose the one with the smallest cost; the method is therefore adaptive. In one embodiment, the system 10 uses at least one of two different methods of computing the path cost. The first is a global price, which is the (weighted) sum of the congestion levels on each link of the path. The other is the local price, which is the maximum (weighted) congestion level of a link on the path.
The intuition behind the local price method is that a path where a single link experiences heavy congestion is worse than a path where multiple links experience mild congestion. On the other hand, a path with two heavily congested links is worse than a path with a single heavily congested link.
The intuition behind using a global price method is that the CNMs received from distant network devices 14a-14n, e.g. switches, are more informative than those received from switches that are close to the source 16. This happens because the congestion appears on the links that are likely to concentrate more flows (i.e. the links that are farther away from the source).
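As a worked illustration of the two price methods (the example congestion values below are invented, not from the disclosure), the global price sums the weighted per-link congestion levels while the local price takes their maximum, so the two can rank the same pair of paths differently:

```python
def global_price(link_congestion, link_weights=None):
    """Global price: (weighted) sum of the congestion levels on each link."""
    weights = link_weights or [1.0] * len(link_congestion)
    return sum(w * c for w, c in zip(weights, link_congestion))

def local_price(link_congestion, link_weights=None):
    """Local price: maximum (weighted) congestion level of a link on the path."""
    weights = link_weights or [1.0] * len(link_congestion)
    return max(w * c for w, c in zip(weights, link_congestion))

# One heavily congested link versus several mildly congested links.
hot_path = [0.9, 0.1, 0.1]   # single hot spot
mild_path = [0.4, 0.4, 0.4]  # spread-out mild congestion
assert local_price(hot_path) > local_price(mild_path)    # local price penalizes the hot spot
assert global_price(hot_path) < global_price(mild_path)  # global price prefers the hot-spot path
```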
In one embodiment, to avoid high-frequency noise that could lead to instabilities in the updating process of the network devices 14a-14n, e.g. switches, the system 10 applies a filter 24 to the incoming stream of CNMs. The filter 24 is a low pass filter, for example, with a running time window to average and smooth the CNM stream.
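A minimal sketch of such a running-window smoother is shown below, assuming a simple arithmetic mean over the window; the class name CnmLowPassFilter and the window_s parameter are hypothetical.

```python
from collections import deque

class CnmLowPassFilter:
    """Sketch: smooth the incoming CNM stream by averaging the congestion
    feedback values observed inside a running time window."""

    def __init__(self, window_s=1.0):
        self.window_s = window_s
        self.samples = deque()  # (timestamp, congestion feedback value)

    def update(self, now, value):
        """Add a sample and return the smoothed value over the window."""
        self.samples.append((now, value))
        # Drop samples that have aged out of the running window.
        while self.samples and now - self.samples[0][0] > self.window_s:
            self.samples.popleft()
        return sum(v for _, v in self.samples) / len(self.samples)
```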
In one embodiment, the source 16 will periodically refresh and, if necessary, update the VLAN 20 path information in the affected network devices 14a-14n. In another embodiment, the optimal path routing is calculated by the end points, e.g. source 16 and destination 22, and refreshed periodically throughout the switch fabric.
The system 10 can be implemented in hardware, software, and/or firmware. Another aspect of the invention is computer readable program code coupled to a tangible medium to investigate congestion in a computer network 12. The computer readable program code may be configured to send a probe packet to network devices 14a-14n from a source node 16 to gather information about the traffic queues at each network device that is examined by the probe packet. The computer readable program code may also base a routing table 18a-18n at each network device 14a-14n that receives the probe packet on the gathered information for each respective traffic queue.
As will be appreciated by one skilled in the art, aspects of the invention may be embodied as a system, method or computer program product. Accordingly, aspects of the invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the invention are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
While the preferred embodiment of the invention has been described, it will be understood that those skilled in the art, both now and in the future, may make various improvements and enhancements which fall within the scope of the claims which follow. These claims should be construed to maintain the proper protection for the invention first described.

Claims

1. A method comprising:
sending a probe packet to network devices from a source node to gather information about the traffic queues at each network device that is examined by the probe packet; and
basing a routing table at each network device that receives the probe packet on the gathered information for each respective traffic queue, whereby traffic is re-routed in accordance with the gathered information about the traffic queues.
2. The method of claim 1, further comprising organizing the network devices into a virtual local area network.
3. The method of claim 1, further comprising structuring the probe packets to include at least one of a layer 2 flag and sequence/flow/source node IDs.
4. The method of claim 1, further comprising sending an extended queue status of at least one of the network devices to the source node in response to receiving the probe packet.
5. The method of claim 1, further comprising providing an extended queue status of a network device to other network devices in response to the network device receiving the probe packet.
6. The method of claim 4, further comprising including at least one of the number of pings from any flow ID received since the last queue change, the number of packets forwarded since the last queue change, and pointers to a complete network device core dump as part of the extended queue status.
7. The method of claim 4, further comprising updating the routing table to rebalance traffic loads via the source node if the extended queue status exceeds a threshold level.
8. The method of claim 1 wherein the source node evaluates the congestion level along each of the possible paths a packet can take and chooses the path with the smallest path cost.
9. The method of claim 8 wherein the path cost is computed according to the weighted sum of the congestion levels on each link of the path.
10. A system comprising means adapted for carrying out all the steps of the method according to any preceding method claim.
11. A computer program comprising instructions for carrying out all the steps of the method according to any preceding method claim, when said computer program is executed on a computer system.
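The path selection recited in claims 8 and 9 amounts to computing, for each candidate path, a weighted sum of the per-link congestion levels gathered via the probe packets, and then choosing the candidate with the smallest total. The following minimal sketch illustrates only that computation; it is not taken from the specification, and the names used (path_cost, choose_path, congestion_by_link, link_weights) are hypothetical.

# Illustrative sketch, not from the specification: least-congested path selection
# as in claims 8 and 9, where a path's cost is the weighted sum of the congestion
# levels reported for each link along that path.

def path_cost(path, congestion_by_link, link_weights):
    """Weighted sum of per-link congestion levels along one candidate path."""
    return sum(link_weights.get(link, 1.0) * congestion_by_link.get(link, 0.0)
               for link in path)

def choose_path(candidate_paths, congestion_by_link, link_weights=None):
    """Pick the candidate path with the smallest path cost."""
    link_weights = link_weights or {}
    return min(candidate_paths,
               key=lambda p: path_cost(p, congestion_by_link, link_weights))

# Example: two candidate paths between the same endpoints; the probe responses
# report queue occupancy per link (0.0 = empty, 1.0 = full).
if __name__ == "__main__":
    paths = [("A-B", "B-D"), ("A-C", "C-D")]
    congestion = {"A-B": 0.9, "B-D": 0.2, "A-C": 0.3, "C-D": 0.4}
    print(choose_path(paths, congestion))  # -> ('A-C', 'C-D')

In this toy run the first path costs 1.1 and the second 0.7, so the source node would re-route traffic over the second path, consistent with the behaviour claimed above.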
PCT/EP2011/056364 2010-04-22 2011-04-20 Network data congestion management probe system WO2011131738A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
GB1219662.2A GB2512808B (en) 2010-04-22 2011-04-20 Network data congestion management probe system
CN201180019644XA CN102859951A (en) 2010-04-22 2011-04-20 Network data congestion management probe system
DE112011100198.3T DE112011100198B4 (en) 2010-04-22 2011-04-20 Test system for a network data overload countermeasure

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US12/765,637 2010-04-22
US12/765,637 US20110261696A1 (en) 2010-04-22 2010-04-22 Network data congestion management probe system

Publications (1)

Publication Number Publication Date
WO2011131738A1 true WO2011131738A1 (en) 2011-10-27

Family

ID=44118916

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2011/056364 WO2011131738A1 (en) 2010-04-22 2011-04-20 Network data congestion management probe system

Country Status (6)

Country Link
US (2) US20110261696A1 (en)
CN (1) CN102859951A (en)
DE (1) DE112011100198B4 (en)
GB (1) GB2512808B (en)
TW (1) TW201218693A (en)
WO (1) WO2011131738A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102013225692B4 (en) * 2012-12-17 2020-02-13 Avago Technologies International Sales Pte. Ltd. Network status mapping

Families Citing this family (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9474440B2 (en) 2009-06-18 2016-10-25 Endochoice, Inc. Endoscope tip position visual indicator and heat management system
US8634297B2 (en) * 2010-11-01 2014-01-21 Cisco Technology, Inc. Probing specific customer flow in layer-2 multipath networks
US8737418B2 (en) * 2010-12-22 2014-05-27 Brocade Communications Systems, Inc. Queue speed-up by using multiple linked lists
JP5703909B2 (en) * 2011-03-31 2015-04-22 富士通株式会社 Information processing apparatus, parallel computer system, and control method of parallel computer system
US20130205038A1 (en) * 2012-02-06 2013-08-08 International Business Machines Corporation Lossless socket-based layer 4 transport (reliability) system for a converged ethernet network
US8873403B2 (en) 2012-02-21 2014-10-28 Avaya Inc. System and method for automatic DSCP tracing for XoIP elements
US9438524B2 (en) * 2012-02-29 2016-09-06 Avaya Inc. System and method for verifying multiprotocol label switching contracts
US9769074B2 (en) 2013-03-15 2017-09-19 International Business Machines Corporation Network per-flow rate limiting
US9104643B2 (en) 2013-03-15 2015-08-11 International Business Machines Corporation OpenFlow controller master-slave initialization protocol
US9407560B2 (en) 2013-03-15 2016-08-02 International Business Machines Corporation Software defined network-based load balancing for physical and virtual networks
US9444748B2 (en) 2013-03-15 2016-09-13 International Business Machines Corporation Scalable flow and congestion control with OpenFlow
US9596192B2 (en) 2013-03-15 2017-03-14 International Business Machines Corporation Reliable link layer for control links between network controllers and switches
US9118984B2 (en) 2013-03-15 2015-08-25 International Business Machines Corporation Control plane for integrated switch wavelength division multiplexing
US9609086B2 (en) 2013-03-15 2017-03-28 International Business Machines Corporation Virtual machine mobility using OpenFlow
TWI586124B (en) * 2013-04-26 2017-06-01 Nec Corp Communication node, communication system, packet processing method and program
JP2014233028A (en) * 2013-05-30 2014-12-11 富士通株式会社 Communication control device, information processing device, storage device, communication control method, and communication control program
US20150023173A1 (en) * 2013-07-16 2015-01-22 Comcast Cable Communications, Llc Systems And Methods For Managing A Network
WO2015044719A1 (en) * 2013-09-27 2015-04-02 Freescale Semiconductor, Inc. Apparatus for optimising a configuration of a communications network device
US9943218B2 (en) 2013-10-01 2018-04-17 Endochoice, Inc. Endoscope having a supply cable attached thereto
US8891376B1 (en) 2013-10-07 2014-11-18 International Business Machines Corporation Quantized Congestion Notification—defense mode choice extension for the alternate priority of congestion points
US9968242B2 (en) 2013-12-18 2018-05-15 Endochoice, Inc. Suction control unit for an endoscope having two working channels
US9843518B2 (en) 2014-03-14 2017-12-12 International Business Machines Corporation Remotely controlled message queue
CN104980359A (en) * 2014-04-04 2015-10-14 中兴通讯股份有限公司 Flow control method of fiber channel over Ethernet (FCoE), flow control device of FCoE and flow control system of FCoE
US9537743B2 (en) * 2014-04-25 2017-01-03 International Business Machines Corporation Maximizing storage controller bandwidth utilization in heterogeneous storage area networks
US9548930B1 (en) 2014-05-09 2017-01-17 Google Inc. Method for improving link selection at the borders of SDN and traditional networks
US9832125B2 (en) * 2015-05-18 2017-11-28 Dell Products L.P. Congestion notification system
CN106470116B (en) * 2015-08-20 2019-06-25 中国移动通信集团公司 A kind of Network Fault Detection and restoration methods and device
TWI617157B (en) 2016-05-31 2018-03-01 鴻海精密工業股份有限公司 Load sensitive adjusting device and method thereof
EP3267639B1 (en) * 2016-07-06 2019-12-25 Alcatel Lucent Congestion control within a communication network
CN112787904B (en) * 2020-12-24 2022-03-22 郑州信大捷安信息技术股份有限公司 IPSec VPN cascaded routing information pushing method and system
US11558310B2 (en) * 2021-06-16 2023-01-17 Mellanox Technologies, Ltd. Low-latency delivery of in-band telemetry data
US11637778B2 (en) * 2021-06-25 2023-04-25 Cornelis Networks, Inc. Filter with engineered damping for load-balanced fine-grained adaptive routing in high-performance system interconnect

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006008494A2 (en) * 2004-07-20 2006-01-26 British Telecommunications Public Limited Company Method of operating a network with test packets

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU647267B2 (en) * 1991-05-07 1994-03-17 Fujitsu Limited Switching node in label multiplexing type switching network
US6134218A (en) * 1994-04-28 2000-10-17 Pmc-Sierra (Maryland), Inc. Many dimensional congestion detection system and method
JP2757779B2 (en) * 1994-06-21 1998-05-25 日本電気株式会社 Buffer priority control method
US5740346A (en) 1996-02-22 1998-04-14 Fujitsu, Ltd. System and method for dynamic network topology exploration
US5793976A (en) * 1996-04-01 1998-08-11 Gte Laboratories Incorporated Method and apparatus for performance monitoring in electronic communications networks
US5995503A (en) * 1996-06-12 1999-11-30 Bay Networks, Inc. Method and apparatus for providing quality of service routing in a network
US5987011A (en) * 1996-08-30 1999-11-16 Chai-Keong Toh Routing method for Ad-Hoc mobile networks
US6075769A (en) 1997-11-26 2000-06-13 Cisco Systems, Inc. Method and apparatus for network flow control
US6424629B1 (en) * 1998-11-23 2002-07-23 Nortel Networks Limited Expediting reconvergence in a routing device
US6690676B1 (en) * 1998-11-23 2004-02-10 Advanced Micro Devices, Inc. Non-addressed packet structure connecting dedicated end points on a multi-pipe computer interconnect bus
US6587434B1 (en) * 1999-08-10 2003-07-01 Cirrus Logic, Inc TCP/IP communications protocol
JP4105341B2 (en) * 1999-08-13 2008-06-25 富士通株式会社 Fragment size changing method and router apparatus
US6839767B1 (en) * 2000-03-02 2005-01-04 Nortel Networks Limited Admission control for aggregate data flows based on a threshold adjusted according to the frequency of traffic congestion notification
DE10047658A1 (en) * 2000-09-26 2002-05-29 Siemens Ag Method for controlling a data conversion during the transition of a connection between a packet-switched and a circuit-switched communication network
JP2002252640A (en) * 2001-02-23 2002-09-06 Fujitsu Ltd Network repeater and method and system for the same
US20020176363A1 (en) * 2001-05-08 2002-11-28 Sanja Durinovic-Johri Method for load balancing in routers of a network using overflow paths
CA2357785A1 (en) * 2001-09-14 2003-03-14 Alcatel Canada Inc. Intelligent routing for effective utilization of network signaling resources
US6714787B2 (en) * 2002-01-17 2004-03-30 Motorola, Inc. Method and apparatus for adapting a routing map for a wireless communications network
US6996225B1 (en) * 2002-01-31 2006-02-07 Cisco Technology, Inc. Arrangement for controlling congestion in an SS7 signaling node based on packet classification
KR100449488B1 (en) * 2002-05-21 2004-09-22 한국전자통신연구원 Network for transferring active packet and method for employing the same
GB0215505D0 (en) * 2002-07-04 2002-08-14 Univ Cambridge Tech Packet routing
US20040028069A1 (en) * 2002-08-07 2004-02-12 Tindal Glen D. Event bus with passive queuing and active routing
KR100548134B1 (en) * 2003-10-31 2006-02-02 삼성전자주식회사 Communication system for improving data transmission efficiency of ??? in wireless network environment and a method thereof
US7414977B2 (en) 2003-11-25 2008-08-19 Mitsubishi Electric Research Laboratories, Inc. Power and delay sensitive ad-hoc communication networks
US7948931B2 (en) * 2004-03-01 2011-05-24 The Charles Stark Draper Laboratory, Inc. MANET routing based on best estimate of expected position
US7317918B2 (en) * 2004-07-19 2008-01-08 Motorola, Inc. Method for domain name service (DNS) in a wireless ad hoc network
US7403496B2 (en) 2004-09-28 2008-07-22 Motorola, Inc. Method and apparatus for congestion relief within an ad-hoc communication system
US7760646B2 (en) 2005-02-09 2010-07-20 Nokia Corporation Congestion notification in 3G radio access
TWI392274B (en) 2005-03-10 2013-04-01 Interdigital Tech Corp Multi-node communication system and method of requesting, reporting and collecting destination node-based measurements and route-based measurements
WO2006098263A1 (en) * 2005-03-14 2006-09-21 Matsushita Electric Industrial Co., Ltd. Switching source device, switching destination device, high-speed device switching system, and signaling method
TWI379544B (en) 2005-06-14 2012-12-11 Interdigital Tech Corp Method and system for conveying backhaul link information for intelligent selection of a mesh access point
JP5039705B2 (en) * 2005-09-21 2012-10-03 エルジー エレクトロニクス インコーポレイティド Method for reducing signaling overhead and power consumption in a wireless communication system
US8094552B1 (en) * 2005-11-03 2012-01-10 Seagate Technology Llc Adaptive buffer for frame based storage communications protocols
US7675857B1 (en) * 2006-05-03 2010-03-09 Google Inc. Method and apparatus to avoid network congestion
US7586912B2 (en) * 2006-07-28 2009-09-08 Cisco Technology, Inc. Techniques for exchanging DHCP information among DHCP relay agents and DHCP servers
US20080075003A1 (en) * 2006-09-21 2008-03-27 Futurewei Technologies, Inc. Method and system for admission and congestion control of network communication traffic
WO2008055548A1 (en) * 2006-11-09 2008-05-15 Telefonaktiebolaget Lm Ericsson (Publ) Congestion control in stateless domains
US8085674B2 (en) * 2007-04-11 2011-12-27 Alcatel Lucent Priority trace in data networks
US9054973B2 (en) * 2007-04-25 2015-06-09 Broadcom Corporation Method and system for Ethernet congestion management
US8369221B2 (en) * 2007-11-01 2013-02-05 Telefonaktiebolaget Lm Ericsson (Publ) Efficient flow control in a radio network controller (RNC)
EP3145240B1 (en) * 2008-02-20 2019-04-10 Amazon Technologies, Inc. Method and apparatus for processing padding buffer status reports
US8248930B2 (en) * 2008-04-29 2012-08-21 Google Inc. Method and apparatus for a network queuing engine and congestion management gateway
KR101001556B1 (en) * 2008-09-23 2010-12-17 한국전자통신연구원 Packet transmission apparatus and method for node on the wireless sensor networks
US8351437B2 (en) * 2009-11-12 2013-01-08 Sony Mobile Communications Ab Stereo bit clock tuning
US8767742B2 (en) * 2010-04-22 2014-07-01 International Business Machines Corporation Network data congestion management system

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006008494A2 (en) * 2004-07-20 2006-01-26 British Telecommunications Public Limited Company Method of operating a network with test packets

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
RAKESH KUMAR ET AL: "An Efficient Gateway Discovery in Ad Hoc Networks for Internet Connectivity", PROCEEDINGS / INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND MULTIMEDIA APPLICATIONS, ICCIMA 2007 : 13 - 15 DECEMBER 2007, SIVAKASI, TAMIL NADU, INDIA, IEEE COMPUTER SOCIETY, LOS ALAMITOS, CALIF., USA, 13 December 2007 (2007-12-13), pages 275 - 282, XP031535060, ISBN: 978-0-7695-3050-5 *
ZAMAN R U ET AL: "A review of gateway load balancing strategies in Integrated Internet-MANET", INTERNET MULTIMEDIA SERVICES ARCHITECTURE AND APPLICATIONS (IMSAA), 2009 IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 10 December 2009 (2009-12-10), pages 1 - 6, XP031653484, ISBN: 978-1-4244-4792-3 *

Also Published As

Publication number Publication date
US20110261696A1 (en) 2011-10-27
GB201219662D0 (en) 2012-12-12
CN102859951A (en) 2013-01-02
US20130114412A1 (en) 2013-05-09
GB2512808B (en) 2017-12-13
DE112011100198B4 (en) 2017-09-21
US9628387B2 (en) 2017-04-18
TW201218693A (en) 2012-05-01
DE112011100198T5 (en) 2012-11-15
GB2512808A (en) 2014-10-15

Similar Documents

Publication Publication Date Title
US9628387B2 (en) Network data congestion management probe system
US8755390B2 (en) Network data congestion management method
EP3259885B1 (en) Traffic engineering feeder for packet switched networks
US8601126B2 (en) Method and apparatus for providing flow based load balancing
US8891534B2 (en) Redirecting traffic via tunnels to discovered data aggregators
CN111107001B (en) Method for segment source route in network and storage medium
US7746784B2 (en) Method and apparatus for improving traffic distribution in load-balancing networks
CN110945842A (en) Path selection for applications in software defined networks based on performance scores
US20160197812A1 (en) Network status mapping
US20220191134A1 (en) Malleable routing for data packets
US9461893B2 (en) Communication system, node, statistical information collection device, statistical information collection method and program
US20190173793A1 (en) Method and apparatus for low latency data center network
CN110557342B (en) Apparatus for analyzing and mitigating dropped packets
Attarha et al. A load balanced congestion aware routing mechanism for Software Defined Networks
US8792366B2 (en) Network packet latency measurement
CN113542064A (en) Network path determination method, network path determination device, electronic apparatus, network path determination medium, and program product
JP2005094768A (en) System and method for routing network traffic passing through weighted zone
CN111901237B (en) Source routing method and system, related device and computer readable storage medium
Tri et al. Effective route scheme of multicast probing to locate high-loss links in OpenFlow networks
EP1535432B1 (en) A method for load balancing using short path routing
CN115460150A (en) Communication method, device and system
Balakiruthiga et al. A simple congestion avoidance mechanism for opendaylight (odl)-multipath tcp (mptcp) network structure in software defined data center (sddc)
Gunavathie et al. DLBA-A Dynamic Load-balancing Algorithm in Software-Defined Networking
WO2023109794A1 (en) Methods and systems for adaptive stochastic-based load balancing
Tri et al. On Reducing Measurement Load on Control-Plane in Locating High Packet-Delay Variance Links for OpenFlow Networks

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase; Ref document number: 201180019644.X; Country of ref document: CN
121 Ep: the epo has been informed by wipo that ep was designated in this application; Ref document number: 11717541; Country of ref document: EP; Kind code of ref document: A1
WWE Wipo information: entry into national phase; Ref document number: 112011100198; Country of ref document: DE; Ref document number: 1120111001983; Country of ref document: DE
ENP Entry into the national phase; Ref document number: 1219662; Country of ref document: GB; Kind code of ref document: A; Free format text: PCT FILING DATE = 20110420
WWE Wipo information: entry into national phase; Ref document number: 1219662.2; Country of ref document: GB
122 Ep: pct application non-entry in european phase; Ref document number: 11717541; Country of ref document: EP; Kind code of ref document: A1