WO2017161148A1 - Method for congestion relief with network coding - Google Patents


Info

Publication number
WO2017161148A1
WO2017161148A1 (PCT/US2017/022761)
Authority
WO
WIPO (PCT)
Prior art keywords
node
packet
packets
queue
transmission
Prior art date
Application number
PCT/US2017/022761
Other languages
French (fr)
Inventor
Dapeng Oliver Wu
Yun Zhu
Qiuyuan Huang
Jiade LI
Original Assignee
University Of Florida Research Foundation, Incorporated
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University Of Florida Research Foundation, Incorporated
Publication of WO2017161148A1

Classifications

    • H04W 28/12 Flow control between communication endpoints using signalling between network elements
    • H04L 47/2458 Modification of priorities while in transit
    • H04L 47/26 Flow control; Congestion control using explicit feedback to the source, e.g. choke packets
    • H04L 47/6255 Queue scheduling characterised by scheduling criteria for service slots or service orders based on queue load conditions, e.g. longest queue first
    • H04L 47/6275 Queue scheduling characterised by scheduling criteria for service slots or service orders based on priority
    • H04W 28/02 Traffic management, e.g. flow control or congestion control
    • H04W 28/0268 Traffic management using specific QoS parameters for wireless networks, e.g. QoS class identifier [QCI] or guaranteed bit rate [GBR]

Definitions

  • Hybrid ARQ has been proposed to combine the best of both worlds.
  • The transmitter transmits a coded packet with error-correcting codes; if packet errors are not correctable, a new coded packet must be transmitted.
  • Hybrid ARQ is not designed for erasure channels, because lost packets cannot be corrected using error-correcting codes.
  • The transmitter can instead use a fountain code and keep transmitting coded packets until the receiver is able to decode all the original packets in a group (e.g., Network-coded TCP [2]).
  • The transmitter does not need a priori knowledge of the channel condition and is able to adapt to time-varying channel conditions. Since a fountain code does not have a fixed rate, it is well suited for reliable multicast as well.
  • While fountain codes and network coding are effective mechanisms for communication over lossy channels, they only provide error control and have no congestion control capability.
  • a method of operating a communication system comprises a plurality of nodes.
  • the method comprises, at each of the plurality of nodes, tracking packets queued for transmission, including numbers of packets received from respective others of the plurality of nodes; and numbers of packets for transmission to respective others of the plurality of nodes.
  • the method also comprises exchanging, with others of the plurality of nodes, the tracked numbers of packets received from and for transmission to respective others of the plurality of nodes.
  • the method also comprises selecting a packet for transmission to an other node based at least in part on the tracked numbers of packets.
  • the method further comprises setting a transmit priority for the selected packet based at least in part on the number of packets for transmission to the other node and the number of packets tracked on the other node that were received from the node and are queued for transmission.
  • tracking numbers of packets received from others of the plurality of nodes and numbers of packets for transmission to respective others of the plurality of nodes comprises maintaining a plurality of queues indexed by a prior hop node and a next hop node and storing packets received from a first node for retransmission to a second node in a queue indexed with the first node as the prior hop node and the second node as the next hop node.
  • exchanging tracked numbers of packets received and tracked numbers of packets queued comprises transmitting the selected packet with a header containing the tracked numbers of packets received and tracked numbers of packets queued.
  • exchanging tracked numbers of packets received and tracked numbers of packets queued comprises extracting information from packet headers of packets transmitted by neighboring nodes.
  • selecting a packet for transmission based at least in part on the tracked numbers of packets queued comprises making a weighted selection from among packets for transmission to other nodes, the weighting based on the numbers of packets queued for transmission to respective others of the plurality of nodes.
  • setting a transmit priority comprises setting MAC parameters of transmission of the selected packet.
  • the network operates according to a TDMA protocol with a plurality of time slots; and the acts of selecting a packet for transmission and setting a transmit priority for the selected packet are repeated in each of the plurality of time slots.
  • setting MAC parameters of transmission of the selected packet comprises adjusting a time of a contention window for the packet within a time slot, with higher priority packets being assigned contention windows earlier in the time slot.
  • a method of operating a node in a communication system comprises a plurality of nodes forming a network in which packets are communicated by transmissions between nodes.
  • the method comprising, at the node, upon receipt of a packet from a first node for retransmission to a second node, storing the packet in a queue of a plurality of queues.
  • the queue is selected from the plurality of queues based on the identity of the first node and the identity of the second node.
  • the method also comprises receiving information about a number of packets from the node queued on the second node for retransmission to at least one other node.
  • the method also comprises selectively transmitting a packet from the queue based at least in part on a relative number of packets queued on the node for transmission to the second node and the number of packets from the node queued on the second node for retransmission to at least one other node of the plurality of nodes.
  • selectively transmitting the packet from the queue comprises assigning a priority to the queue based at least in part on the relative number of packets and selecting the queue based at least in part on the priority assigned to the queue relative to respective priorities assigned to other queues of the plurality of queues.
  • according to some embodiments, selectively transmitting the packet from the queue further comprises setting transmit parameters of the packet in accordance with the priority assigned to the queue.
  • setting the transmit parameters comprises setting a transmit start time within a time slot and a backoff time.
  • assigning the priority to the queue is further based in part on the transmission rate between the node and the second node.
  • selectively transmitting the packet comprises transmitting the packet with a header indicating a number of packets queued on the node for transmission to the second node.
  • the header comprises an identifier of the first node, an identifier of the second node and a length of the queue.
  • the length of the queue is selected based on the identity of the first node and the identity of the second node.
  • a method of operating a network interface of a node in a communication network to implement prioritized transmission is provided.
  • the communication system operates in accordance with time slots.
  • the method comprises, for a time slot, determining queue backlog information for the node and exchanging queue backlog information with neighboring nodes in a network.
  • the method also comprises assigning a MAC access category to a packet for transmission from the node based at least in part on relative backlog of the node and neighboring nodes and providing the packet to MAC layer components for transmission in accordance with the assigned access category.
  • the MAC layer components are configured to implement, for each of a plurality of access categories a respective contention window.
  • the respective contention windows are disjoint for a plurality of access categories and the access category is selected from the plurality of access categories.
  • the plurality of access categories are ordered from lower to higher. Assigning a MAC access category to the packet for transmission from the node based at least in part on relative backlog of the node and neighboring nodes comprises assigning a higher access category for a higher backlog on the node relative to the backlog on the neighboring nodes.
  • the network operates according to a TDMA protocol with a plurality of time slots. Within each time slot, the MAC layer components are configured to initiate transmission of a packet during a contention window with a start relative to the start of the time slot, that is set based on the access category of the packet. Contention windows of higher access categories start earlier in the time slot than contention windows of lower access categories.
  • exchanging queue backlog information comprises transmitting the packet with queue backlog information about the node in a packet header; and obtaining queue backlog information about the neighboring nodes from headers of packets transmitted by the neighboring nodes.
  • FIG. 1 is a schematic illustration showing an exemplary input model for TRI queue
  • FIG. 2 is a schematic illustration showing an exemplary output model for TRI queue
  • FIG. 3 is a schematic illustration showing an example of the Trickle approach
  • FIG. 4 is a schematic illustration showing an example of Trickle queues for network coding (mixed flows).
  • FIG. 5 is a schematic illustration showing an exemplary Trickle header structure
  • FIG. 6 is a schematic illustration showing an exemplary case 1 network topology
  • FIG. 7 is a schematic illustration showing Trickle's throughput improvement on different channel busyness conditions for FUN-1 under Case 1 according to an exemplary analysis.
  • the inventors have recognized and appreciated a transmission protocol that yields high network throughput in a multi-hop network, such as a 5G wireless network.
  • High throughput may be achieved with distributed control based on prioritizing communication of packets selected to reduce disparity in node to node backlogs.
  • Higher priority may be assigned to transmission of a packet at a node that has a large backlog of packets to be sent to a second node that has a small backlog of packets from that node.
  • Other criteria may alternatively or additionally be used to select specific packets at a node for transmission and the priority of transmission from that node.
  • Such control may be practically achieved with relatively low overhead by communicating backlog information in packet headers.
  • because backlog information does not change unless a packet is transmitted, transmitting backlog information in connection with each transmission is adequate for each node to retain current backlog information for its neighboring nodes, to which or from which it may send or receive packets.
  • each node may queue packets based on a prior-hop node from which the packet was received and a next hop node to which the packet is to be sent.
  • each node may compute a relative priority of its own transmission from this information.
  • the priorities may be set such that priority is given to packets from nodes with large backlogs for transmission to nodes with small backlogs.
  • Distributed prioritization may be implemented by setting transmission parameters for a packet in proportion to its priority. In some embodiments, these parameters may be set at the MAC level, impacting the start time, backoff time and/or length of a contention window.
  • In the Trickle method, a new Medium Access Control (MAC) protocol is proposed based on the principle of back pressure, custom-tailored to the needs of coding techniques such as fountain codes and network coding.
  • The Trickle method, hereinafter referred to as Trickle, maintains TRiple-node Indexed (TRI) queues at each node, which significantly reduces the number of queues and is scalable to large-scale networks.
  • TRI TRiple-node Indexed
  • In order to address the limitations of the PD queueing structure, existing works have proposed the per-link (PL) queueing model, as in [4], [9]-[12].
  • the PL queueing structure requires less complexity when maintaining queueing structures and preserves good design features such as network decomposition [14].
  • the term per-link queue is used to mean that packets destined to the same next-hop node are put into the same queue. Thus the number of queues a node maintains is only related to the number of its neighbors within one hop.
  • the inventors have recognized and appreciated that a limitation of the PL queueing structure is lack of some queueing information needed for better congestion control in network coding. In accordance with some embodiments, this limitation may be addressed using a TRI queueing structure.
  • Shadow queues or counters were introduced in [9] to replace the physical PD queue in each node.
  • HOL head of line
  • Ji et al. [11] proposed to use a shadow queue with the cumulative number of aggregate packet arrivals for MaxWeight scheduling.
  • the aggregated packet arrivals' data needs to be recorded from the beginning of the network formation. This poses a practical challenge, since in real-world networks there may be unpredictable node failures in which the recorded data would be lost.
  • Trickle does not need to maintain shadowing PD queues according to an aspect of some embodiments.
  • Trickle may eliminate the PD queue constraints, and does not need any PD queue information at all.
  • in the TRI queueing structure, each node maintains multiple TRI queues, one for each of its (parent, node, child) flow pairs.
  • the TRI queues provide a similar functionality to the conventional per-destination queues or per-link queues, but with a more lightweight and practical design.
  • One notable feature of TRI queues is that they are optimized for working with network coding schemes, such as COPE [13] and FUN [15], which generally utilize broadcast channels to save packet transmissions.
  • TRI queues may be implemented using known hardware and software design techniques, adapted and controlled to perform functions as described herein. In some embodiments, the TRI queues may be built between the data link layer and the network layer.
  • TRI queues in each node are dynamically created, which means memory for a queue is not allocated until a packet arrives that will be placed in that queue. This conserves space and improves the efficiency of Trickle scheduling.
  • the scheduling mechanism of Trickle is based on the differential queue length between the current node and its neighbor nodes, which are possible next hops of the current node.
  • a sending node piggybacks its queue information in a Trickle header. This information thus can be overheard by all its neighbors.
  • the neighbor node can retrieve the information by parsing the Trickle header and calculate the differential queue length. Then, the neighbor node can use the differential queue information for scheduling.
  • Trickle works effectively because of support from MAC protocols.
  • the Trickle MAC protocol may be based on IEEE 802.11e, whose QoS capability provides a cornerstone that could facilitate Trickle implementation in some embodiments.
  • the number of Access Categories (ACs) may be modified, and each AC's parameters may also be changed to adapt to the algorithms in some embodiments.
  • a source congestion control mechanism may be implemented at transport layer which controls the sending flow rate of source node according to network congestion conditions.
  • the designed mechanism regulates the sending rate by monitoring the queue length in the source node. It behaves in a way similar to TCP congestion control.
  • two queue length thresholds may be used, an upper threshold and a lower threshold, whereas TCP uses one threshold.
  • a control scheme is designed that ensures the queue length of the source node keeps a balanced level between the upper and lower thresholds, which provides better flow control for the whole network system.
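The dual-threshold source control described above might be sketched as follows. This is an illustrative sketch only: the threshold fractions and the multiplicative factors are assumptions, and the patent's exact update rules (13) and (14) are not reproduced here.

```python
def adjust_rate(rate, queue_len, queue_size, hi=0.8, lo=0.2):
    """Dual-threshold source rate control (illustrative).

    Above the upper threshold the source backs off; below the lower
    threshold it speeds up; in between the rate is held, keeping the
    source queue balanced between the two thresholds.
    """
    if queue_len >= hi * queue_size:   # backlog too large: reduce rate
        return rate * 0.5
    if queue_len <= lo * queue_size:   # backlog draining: increase rate
        return rate * 1.25
    return rate                        # between thresholds: hold steady
```

Unlike TCP's single-threshold reaction, the band between the two thresholds gives the source a stable operating region instead of oscillating around one set point.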
  • Trickle framework aims to solve the congestion control problem and to increase network throughput.
  • the network is modeled as a graph G = (N, E), where N is the set of nodes and E is the set of links.
  • a link, denoted by l_{i,j}, exists if it is in E.
  • the set of the neighboring nodes of node i is denoted by N_i.
  • a slotted system may be provided, and the time slot is denoted by t.
  • the slotted system may be a TDMA network.
  • S_{i,j}(t) represents the instantaneous channel state of l_{i,j} in E, where i and j are the transmitter and receiver of the link.
  • |·| denotes the cardinality of a set.
  • the destination node of flow f is denoted by d(f).
  • the strong stability in [16] is used to define network stability. The majority of the notation used in this paper is listed in Table 1.
  • in FIG. 1 and FIG. 2, solid lines are links between two nodes and dashed lines indicate the route between two queues used to exchange data.
  • the queue q_{i,j}, defined on link l_{i,j}, holds the data in node i with node j as the next-hop node. Therefore, the number of queues in node i is at most the number of its neighbors. If l_{i,j} is active, packets in q_{i,j} are transmitted.
  • λ_{i,j}(t) denotes the input rate of link l_{i,j} at time t.
  • when a packet arrives at node j, the destination of the packet is checked against j. If the packet is destined to node j, the packet leaves the queueing system. Otherwise, the packet enters another queue, say q_{j,k}, based on the routing. The packets not destined to node j thus yield a data rate from q_{i,j} to q_{j,k}.
  • μ_{i,j}(t) represents the allocated output data rate of q_{i,j}.
  • μ_{i,j,k}(t) represents the allocated data rate from q_{i,j} to q_{j,k} at time t.
  • the wireless network may be a TDMA network, such as a 5G network.
  • the TRI queueing model is characterized by the following properties.
  • the wireless channel process S(t) is assumed to be convergent, with a finite number of channel states {S} and state probabilities π_S.
  • the convergence interval K [3] is the number of time slots such that, for a given value ε₁ > 0, the empirical fraction of time the channel spends in each state over K slots is within ε₁ of the corresponding steady-state probability.
  • the second moment of exogenous arrivals at each node is bounded in every time slot by some finite maximum value σ_max, regardless of past history, for any nodes i, j in N.
  • this section provides a backpressure control policy according to some embodiments that stabilizes the TRI queueing network whenever the input rate matrix [λ] is inside the network capacity region Λ.
  • the Trickle TRI queues are the queues that keep track of TRI traffic through adjacent hops.
  • let q_{i,j,k} denote the TRI queue that tracks traffic from q_{i,j} to q_{j,k}.
  • let V_{i,j,k}(t) denote the queue backlog of q_{i,j,k} at time t. Similar to (9), for regular unicast transmission the TRI queue updates according to
  • V_{i,j,k}(t + 1) = [V_{i,j,k}(t) − μ_{j,k}(t)]⁺ + μ_{i,j,k}(t)   (10)
  • and for network-coded (broadcast) transmission according to
  • V_{i,j,k}(t + 1) = [V_{i,j,k}(t) − 2μ_{j,k}(t)]⁺ + μ_{i,j,k}(t)   (11)
  • Step 1: For node i and node j, the weight W_{i,j}(t) is calculated according to (12).
  • μ_{i,j}(t) is the transmission rate on l_{i,j} at time t.
  • Step 2: Transmit the packets in q_{i,j} according to the allocated rate and MAC priority, and send packets going through l_{i,j} to queue q_{i,j,k} in node j, where the next-hop node k is determined by the routing algorithm running in node j.
  • Step 3: Calculate and update [V_{i,j,k}(t + 1)] according to (10) or (11), where μ_{i,j,k}(t) and μ_{j,k}(t) can be calculated by monitoring the data transmission activity on each link l_{i,j}.
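Steps 1 and 2 amount to classic backpressure scheduling: weight each link by its backlog differential scaled by the achievable rate, then serve the heaviest link. A minimal sketch, assuming a simple differential-backlog weight (the patent's exact expression (12) is not reproduced here):

```python
def link_weight(local_backlog, downstream_backlog, rate):
    """Backpressure-style weight for a link (i, j): the positive part of
    the backlog differential, scaled by the link's transmission rate."""
    return max(local_backlog - downstream_backlog, 0) * rate

def pick_link(candidates):
    """candidates: list of (next_hop, local_backlog, downstream_backlog, rate).

    Returns the next hop whose link currently carries the largest weight,
    i.e. the queue that Step 2 would serve first in this time slot.
    """
    return max(candidates, key=lambda c: link_weight(c[1], c[2], c[3]))[0]
```

Because each node only needs its own backlogs and those overheard from neighbors, this decision is fully distributed, as the Trickle header exchange intends.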
  • network interface hardware and/or software may dynamically adjust the sending rate by monitoring the queue length of the TRI queues residing in source node.
  • two queue length thresholds may be monitored: a high-threshold, which indicates the queue length has reached a certain level of the queue size; and a low-threshold, which indicates the queue length has dropped to a certain level of the queue size.
  • the rate may be dynamically adjusted according to these thresholds.
  • let R denote the source rate; when the source node's queue length exceeds the high-threshold, the sending rate of the source node is adjusted according to (13).
  • (parent, child) queues represent actual TRI queues in the exemplary embodiments below. Since a TRI queue's residing node is the current node, there is no need for an extra node field denoting the current node in the actual protocol design.
  • each node i maintains multiple queues, one for each of its (parent, child) flow pairs, i.e., a (parent, child) pair uniquely determines a queue into which the packets from the corresponding flow will be put. These queues are herein referred to as "TRI queues". It should also be noted that the sense of "flow" here differs from the end-to-end traffic flows commonly referred to in the literature: the flow related to a specific pair is the local flow that passes through that (parent, child) pair.
  • a packet belonging to the flow from node A_2 through A_1 to B will be put into the corresponding TRI queue Q(A_2, B) in node A_1 when it reaches node A_1.
  • the packet will then be put in its corresponding TRI queue Q(A_1, C_2) in node B.
  • the packet will keep traveling this way until it reaches the destination.
  • for network coding, where packets are often mixed from two individual flows and where broadcast is often adopted for transmitting such packets, another type of TRI queue is set aside to hold broadcast packets.
  • the network coding may use schemes such as FUN [15], as illustrated in the example in FIG. 4.
  • a packet 410 transmitted from node A to node B and a packet 420 from node C to node B can have opportunities to be mixed and broadcast to neighbors so as to save one transmission.
  • the mixed packet will be put in the broadcast TRI queue represented by Q(AC, AC).
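The mixing step above is, in the simplest COPE-style case, an XOR of the two packets crossing at the relay, so a single broadcast replaces two unicast transmissions. A minimal sketch (equal-length packets assumed; real schemes also carry coding metadata in headers):

```python
def xor_mix(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """XOR-code two equal-length packets that cross at a relay.

    The relay broadcasts the coded packet once; each endpoint recovers
    the packet it is missing by XOR-ing with the copy it already holds.
    """
    assert len(pkt_a) == len(pkt_b), "packets must be padded to equal length"
    return bytes(a ^ b for a, b in zip(pkt_a, pkt_b))
```

Decoding is the same operation: XOR-ing the broadcast with one's own packet yields the other flow's packet, which is why XOR saves exactly one transmission per mixed pair.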
  • FIG. 5 An exemplary header structure is illustrated in FIG. 5.
  • the respective 32-bit fields Previous hop address 1 and Previous hop address 2 represent the IP addresses of the prior-hop nodes from which the last received packet came. If the received packet came from a unicast flow, only the Previous hop address 1 field is used.
  • Field Next hop address is the IP address of the next node that the packet will be sent to. This field is determined by the routing (Layer 3) module.
  • the Queue length field is the queue length information that the transmitted packet piggybacks. It is calculated at the current sending node before the packet is transmitted, corresponding to the right-hand-side term Σ_{j∈N_i} V_{i,j,k}(t) in (12).
  • in network hardware and/or software of a transmitting node, an extra 64-bit Time stamp field may also be set in the Trickle header to indicate the validity of the current queue length information. This field is updated to the current time just before the packet is sent out. Every time a node tries to use the queue length information, for example by reading the Sum of queue length field in the header, a validity check is performed.
  • the policy is described as follows. In addition to storing current queue length information from interested neighbors, every TRI queue also keeps a backup of the queue length information from the last update. Let T_0 denote the current time value, which can be retrieved from the operating system, and assume the Time stamp value in the header is t. If
  • T_0 − t > T_th   (15)
    where T_th is the time-out threshold, then the queue length information carried in the packet is deemed outdated and is ignored. Instead, the queue length value from the last-time backup is used. This policy ensures that only the most confident queue length information from the most recent updates is used.
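The header layout of FIG. 5 and the staleness rule (15) can be sketched together. The byte layout, field order, and the 500 ms threshold below are assumptions for illustration, not the patent's exact wire format:

```python
import struct

# Hypothetical layout following FIG. 5: two 32-bit previous-hop addresses,
# a 32-bit next-hop address, a 32-bit queue length, and a 64-bit timestamp.
HEADER_FMT = "!IIIIQ"  # network byte order

def pack_header(prev1, prev2, next_hop, queue_len, now_ms):
    """Build a Trickle-style header just before the packet is sent."""
    return struct.pack(HEADER_FMT, prev1, prev2, next_hop, queue_len, now_ms)

def fresh_queue_len(header, backup_len, now_ms, t_th_ms=500):
    """Apply rule (15): use the piggybacked queue length only if its
    timestamp is recent; otherwise fall back to the last backed-up value."""
    _, _, _, qlen, ts = struct.unpack(HEADER_FMT, header)
    return qlen if now_ms - ts <= t_th_ms else backup_len
```

A neighbor overhearing the packet parses the header once, runs the freshness check, and only then feeds the queue length into the differential-backlog calculation.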
  • the rate R described in Section 4.4 corresponds to the time interval between two consecutive packets, and may be converted to its corresponding time interval for controlling packet spacing according to (13) and (14) in some embodiments.
  • a modified IEEE 802.11e MAC layer protocol may be employed to implement the Trickle protocol, retaining aspects of IEEE 802.11e, including its QoS assurance capability, which is beneficial to Trickle implementation.
  • the number of Access Categories (ACs) and each AC's parameters may be changed to adapt to the Trickle protocol.
  • the parameter setup chosen for each AC index is shown in Table 2. These numbers are chosen to eliminate the possibility of collision between different ACs by setting non-overlapping minimum and maximum contention window (CW) sizes.
  • the network operates with a slotted system and within a time slot, transmission of a packet is initiated during a contention window with a start time relative to the start of the time slot that is set based on the access category of the packet.
  • contention windows of higher access categories start earlier in the time slot than contention windows of lower access categories.
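The non-overlapping, priority-ordered contention windows described above might be sketched as follows. The window lengths and slot duration are assumptions for illustration, not the Table 2 values:

```python
def contention_window(ac_index, num_acs=4, cw_len_us=1000):
    """Return the (start, end) of the contention window, in microseconds
    from the start of the time slot, for a given access category.

    Windows are disjoint, and the highest AC gets the earliest window,
    so higher-priority packets always contend (and win) first.
    """
    start = (num_acs - 1 - ac_index) * cw_len_us
    return start, start + cw_len_us
```

Because the windows never overlap, a lower-priority packet only gets a chance to transmit if every higher category's window has already passed idle, which removes cross-category collisions by construction.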
  • the MAC scheduling priority may be implemented using window mapping techniques [17].
  • the order of AC index assigned for packet transmission from a node is chosen based on the relative backlog of the node and neighboring nodes. For example, a higher access category may be assigned for a node with a higher backlog relative to backlogs on the neighboring nodes.
  • in the comparative per-destination-per-link (PDPL) model, each node keeps a queue for each source-destination-link combination.
  • the PDPL queueing network therefore contains far more queues than the TRI network, and acts only as a comparative model used as a reference network in the following analysis.
  • G = [G_{i,j,k}] is defined in a time-average sense. Then, we have the following lemma.
  • Theorem 1: The proposed Trickle algorithm is throughput-optimal; for an arbitrary network admission rate vector [λ] inside the network capacity region, denoted by Λ_tri, the Trickle algorithm stabilizes the network.
  • for RLNC, a file is segmented into batches, each of which consists of M native packets.
  • a relay node has a buffer of M packets; when receiving packets, the relay node takes all the packets in the buffer as input and generates one RLNC-coded packet.
  • upon receiving the ACK message, the source node stops transmitting the coded packets of the current batch and starts to transmit the coded packets of the next batch.
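The relay-side RLNC step above can be sketched in a simplified form. For brevity this sketch draws coefficients over GF(2), i.e. the coded packet is an XOR of a random subset of the batch; practical RLNC typically uses GF(2^8), but the structure is the same:

```python
import random

def rlnc_encode(batch, rng=random):
    """Generate one coded packet from a batch of M equal-length native
    packets, using random coefficients over GF(2) (simplified RLNC).

    Returns (coeffs, coded): the coefficient vector the receiver needs
    for decoding, and the coded payload.
    """
    coeffs = [rng.randint(0, 1) for _ in batch]
    if not any(coeffs):                      # avoid the useless all-zero vector
        coeffs[rng.randrange(len(batch))] = 1
    coded = bytes(len(batch[0]))
    for c, pkt in zip(coeffs, batch):
        if c:
            coded = bytes(x ^ y for x, y in zip(coded, pkt))
    return coeffs, coded
```

A receiver collects coded packets until it has M linearly independent coefficient vectors, then solves for the native packets, which is why the source can keep emitting coded packets until the batch ACK arrives.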
  • for TCP, TCP is the Layer 4 protocol; for RLNC, we use UDP as the Layer 4 protocol.
  • the modified 802.11e MAC AC parameters for Trickle are set as shown in Table 2.
  • use of the Ad hoc On-Demand Distance Vector (AODV) protocol for routing is preferably avoided except in Case 4, because AODV's buffering mechanism as implemented in QualNet may adversely affect the Trickle scheme.
  • AODV Ad hoc On-Demand Distance Vector
  • the setup of this set of experiments is the following. There are a total of 9 nodes in the network. These nodes are distributed in a cross topology, as illustrated in
  • FIG. 6 There is a chain communication path 620 comprised of 5 nodes from one end to the other (S_3 → R_3 → R_5 → R_4 → S_4) placed in the horizontal direction, and another chain communication path 640, also comprised of 5 nodes (S_1 → R_1 → R_5 → R_2 → S_2), from one end to the other placed in the vertical direction.
  • the two chain paths 620, 640 share a single relay node R_5 in the cross center and sustain a total of four flows in the network.
  • for FUN-1 and FUN-2, there also exists one background UDP flow alongside each FUN flow at each source node. The sending rate of the background flow is 200 kbit/s.
  • K the number of native packets to be transmitted by the source.
  • PLR packet loss rate
  • FIG. 7 illustrates the resulting throughput trend of FUN-1 with Trickle enabled 720 versus Trickle disabled 710.
  • the schematic illustration in FIG. 7 shows Trickle's throughput improvement under different channel busyness conditions for FUN-1 under Case 1.
  • Trickle is able to provide significant throughput improvement over different channel busyness conditions.
  • Tables 4 and 3 show the total throughput of the four flows (i.e., forward/backward flows in both vertical and horizontal direction) of the seven schemes with and without Trickle under Case 1. Based on results from the exemplary simulation analysis, we have the following observations:
  • FUN-2 uses RLNC at a relay node and RLNC has a higher coding gain than the XOR operation used in FUN-1.
  • the setup of this set of simulative experiments is the following.
  • the network consists of a source node, a destination node, and 1 or 2 or 4 relay nodes. All the nodes in the network form a chain topology from the source node to the destination node.
  • the communication path from the source node to the destination node has 2 or 3 or 5 hops. All the nodes are immobile; hence the network topology is fixed.
  • PLR 10% for each hop/link. Again, the PLR does not include packet loss due to the thermal noise in the physical layer and packet collision.
  • Tables 5 and 6 show the throughput of seven schemes under Case 2.
  • RLNC achieves a lower throughput than any other coding scheme. This is because a coded packet in RLNC is restricted to one batch of M native packets.
  • the setup of this set of simulation experiments is as follows.
  • the distance between the source node and the destination node is 1200 meters; the transmission range of each node is 700 meters.
  • the source node cannot directly communicate with the destination node; a relay node is needed.
  • the relay node moves back and forth along a center line that is perpendicular to the straight line linking the source node and the destination node. When the relay node moves into the transmission range of the source node, it can pick up the packets transmitted by the source node and relay them to the destination node. Otherwise, it cannot receive packets transmitted by the source node; in this case, all the packets transmitted by the source node will be lost. Since the relay node moves around, the network topology is dynamic.
  • RLNC achieves a lower throughput than the other coding schemes.
  • TCP achieves the least throughput. This is because all six other schemes have coding gain while TCP does not. However, even in this situation, Trickle helps TCP achieve about a 7% throughput gain due to the better efficiency achieved in channel utilization.
  • the network topology is dynamic.
  • the data rate is 1024 bytes/s.
  • the number of hops from the source node to the destination node is random, depending on the positions of all the nodes. Since all the nodes are mobile, the network topology is dynamic.
  • COPE achieves the third highest throughput. Its throughput is higher than that of the BATS code, the fountain code, and RLNC; this is because COPE combines two flows and sends packets more efficiently, while the other schemes are more prone to packet loss due to the constant change of relay nodes. Trickle exhibits slightly better performance in this scenario.
[00160]   • The fountain code achieves a higher throughput than the BATS code.
  • TCP achieves the least throughput due to the movement of the relay node, which causes intermittent connection breaks to which TCP is sensitive.
  • the Trickle scheme may help alleviate this problem and achieve better throughput.
  • the PDPL shadow queues are counters that keep track of the PD traffic for each flow in the TRI queues.
  • let V^f_ij(t) denote the queue backlog for flow f in q_ij at time t. Then the PDPL shadow queue length can be updated according to
  • V^f_ij(t + 1) = [V^f_ij(t) - μ^f_ij(t)]^+ + a^f_ij(t) + â^f_ij(t),   (22)
where μ^f_ij(t), a^f_ij(t) and â^f_ij(t) are defined in (5.2).
  • ΔL(t) ≤ 2E( Σ_{f,i,j} V^f_ij(t) ( a^f_ij(t) + â^f_ij(t) ) )
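The shadow-counter update in (22) can be sketched as a single counter step: the backlog is decremented by the service term, clipped at zero by the [·]^+ projection, then incremented by the two arrival terms defined in (5.2). Symbol names are illustrative, and the precise definitions of the arrival terms are not reproduced here.

```python
def update_shadow_queue(v, mu, a, a_hat):
    """One step of the shadow-counter update (22):
    V(t+1) = [V(t) - mu(t)]^+ + a(t) + a_hat(t)."""
    return max(v - mu, 0) + a + a_hat

# The [.]^+ projection keeps the counter non-negative even when the
# service term exceeds the current backlog:
assert update_shadow_queue(3, 5, 2, 1) == 3    # [3-5]^+ = 0, then +2 +1
assert update_shadow_queue(10, 4, 1, 0) == 7
```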
  • embodiments for the Trickle method can be implemented in any of numerous ways.
  • embodiments for the Trickle method may be implemented using hardware, software or a combination thereof.
  • the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers.
  • processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component, including commercially available integrated circuit components known in the art by names such as CPU chips, GPU chips, microprocessors, microcontrollers, or co-processors.
  • a processor may be implemented in custom circuitry, such as an ASIC, or semicustom circuitry resulting from configuring a programmable logic device.
  • a processor may be a portion of a larger circuit or semiconductor device, whether commercially available, semi-custom or custom.
  • some commercially available microprocessors have multiple cores such that one or a subset of those cores may constitute a processor.
  • a processor may be implemented using circuitry in any suitable format.
  • the embodiments may be implemented in connection with a network interface, such as with a wireless network interface.
  • Other embodiments of the Trickle method according to the present invention can include a portable computing device, such as a smartphone, a base station of a cellular network, or any device that can act as a relay node of a wireless network.
  • a smart phone or other portable electronic device may include a camera, capable of capturing still or video images.
  • a computing device may include sensors such as a global positioning system (GPS) to sense location, and inertial sensors such as a compass, an inclinometer and/or an accelerometer.
  • the operating system may include utilities to control these devices to capture data from them and make it available to applications executing on the computing device.
  • a computing device may include a network interface to implement a personal area network.
  • a network interface may operate in accordance with any suitable technology, including a Bluetooth, Zigbee or an 802.11 ad hoc mode, for example.
  • a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
  • a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets.
  • a computer may receive input information through speech recognition or in other audible format.
  • the input/output devices are illustrated as physically separate from the computing device. In some embodiments, however, the input and/or output devices may be physically integrated into the same unit as the processor or other elements of the computing device. For example, a keyboard might be implemented as a soft keyboard on a touch screen. Alternatively, the input/output devices may be entirely disconnected from the computing device, and functionally integrated through a wireless connection.
  • Such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network or the Internet.
  • networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
  • the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
  • the invention may be embodied as a computer readable storage medium (or multiple computer readable media) (e.g., a computer memory, one or more floppy discs, compact discs (CD), optical discs, digital video disks (DVD), magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the invention discussed above.
  • a computer readable storage medium may retain information for a sufficient time to provide computer-executable instructions in a non-transitory form.
  • Such a computer readable storage medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention as discussed above.
  • the term "computer-readable storage medium” encompasses only a computer-readable medium that can be considered to be a manufacture (i.e., article of manufacture) or a machine.
  • the invention may be embodied as a computer readable medium other than a computer-readable storage medium, such as a propagating signal.
  • code means any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention.
  • Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices.
  • program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types.
  • functionality of the program modules may be combined or distributed as desired in various embodiments.
  • data structures may be stored in computer-readable media in any suitable form.
  • data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that conveys relationship between the fields.
  • any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationship between data elements.
  • the invention may be embodied as a method, of which an example has been provided.
  • the acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
  • a reference to "A and/or B", when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
  • the phrase "at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements.
  • This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified.
  • At least one of A and B can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.


Abstract

A congestion control mechanism tailored for fountain codes and network coding is provided for error control when communicating over lossy channels. This control mechanism is referred to as Trickle, which is a lightweight and practical framework. Trickle is designed based on the principle of back pressure, and maintains a novel triple-node indexed queueing structure at each node. As compared to existing back pressure protocols with per-destination or per-link queues, Trickle uses significantly fewer queues, and is more scalable to larger networks. Trickle provides high throughput for fountain codes, network coding, as well as non-coded TCP in multi-hop wireless networks.

Description

METHOD FOR CONGESTION RELIEF WITH NETWORK CODING
RELATED APPLICATIONS
[0001] This application claims priority to and the benefit of U.S. Provisional Patent Application Number 62/309,872, entitled "METHOD FOR CONGESTION RELIEF WITH NETWORK CODING," filed March 17, 2016. The entire contents of the foregoing are hereby incorporated herein by reference.
BACKGROUND
[0002] Communication over wireless channels is inherently lossy, where packets may be discarded due to bit errors caused by fading, shadowing, or interference [1]. To mitigate the effects of lossy communication channels, Automatic Repeat reQuest (ARQ) and Forward Error Correction (FEC) are most widely used. ARQ enjoys the advantage that it adapts well to time-varying channel conditions, and is therefore well suited for the case where the transmitter has little knowledge of the channel condition. Yet, ARQ requires explicit feedback from the receiver to the transmitter, and suffers from feedback implosion when reliable multicast communication is desired. FEC, on the other hand, does not require explicit feedback, as it uses coding techniques, such as Reed-Solomon codes and fountain codes, to correct packet errors. As such, it is well suited for reliable multicast, yet requires knowledge of channel conditions.
[0003] Hybrid ARQ has been proposed to combine the best of both worlds. With Hybrid ARQ, the transmitter transmits a coded packet with error-correcting codes; but if packet errors are not correctable, a new coded packet needs to be transmitted. However, Hybrid ARQ is not designed for erasure channels, because lost packets cannot be corrected using error-correcting codes.
[0004] In the case of erasure channels, the transmitter can instead use a fountain code and keep transmitting coded packets until the receiver is able to decode all the original packets in a group (e.g., network-coded TCP [2]). The transmitter does not need a priori knowledge of the channel condition, and is able to adapt to time-varying channel conditions. Since a fountain code does not have a fixed rate, it is well suited for reliable multicast as well. [0005] Although fountain codes and network coding are effective mechanisms for communication over lossy channels, they only deal with error control and do not have congestion control capability.
SUMMARY
[0006] According to some embodiments, a method of operating a communication system is provided. The communication system comprises a plurality of nodes. The method comprises, at each of the plurality of nodes, tracking packets queued for transmission, including numbers of packets received from respective others of the plurality of nodes; and numbers of packets for transmission to respective others of the plurality of nodes. The method also comprises exchanging, with others of the plurality of nodes, the tracked numbers of packets received from and for transmission to respective others of the plurality of nodes. The method also comprises selecting a packet for transmission to an other node based at least in part on the tracked numbers of packets. The method further comprises setting a transmit priority for the selected packet based at least in part on the number of packets for transmission to the other node and the number of packets tracked on the other node that were received from the node and are queued for transmission.
[0007] According to some embodiments, tracking numbers of packets received from others of the plurality of nodes and numbers of packets for transmission to respective others of the plurality of nodes comprises maintaining a plurality of queues indexed by a prior hop node and a next hop node and storing packets received from a first node for retransmission to a second node in a queue indexed with the first node as the prior hop node and the second node as the next hop node.
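A minimal sketch of such a queueing structure, with queues created lazily and indexed by the (prior-hop, next-hop) pair; the class and method names are illustrative, not from the patent:

```python
from collections import defaultdict, deque

class TriQueues:
    """Queues indexed by (prior-hop, next-hop); a queue is allocated
    only when the first packet for that pair arrives."""
    def __init__(self):
        self._q = defaultdict(deque)              # (prev_hop, next_hop) -> packets

    def enqueue(self, prev_hop, next_hop, packet):
        self._q[(prev_hop, next_hop)].append(packet)

    def backlog(self, prev_hop, next_hop):
        return len(self._q[(prev_hop, next_hop)])

    def dequeue(self, prev_hop, next_hop):
        q = self._q[(prev_hop, next_hop)]
        return q.popleft() if q else None

tri = TriQueues()
tri.enqueue("A", "C", b"pkt1")   # received from A, to be forwarded to C
tri.enqueue("A", "C", b"pkt2")
assert tri.backlog("A", "C") == 2
assert tri.dequeue("A", "C") == b"pkt1"
```

With n one-hop neighbors a node holds at most n^2 such queues, matching the scalability claim made for TRI queues elsewhere in the description.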
[0008] According to some embodiments, exchanging tracked numbers of packets received and tracked numbers of packets queued comprises transmitting the selected packet with a header containing the tracked numbers of packets received and tracked numbers of packets queued.
[0009] According to some embodiments, exchanging tracked numbers of packets received and tracked numbers of packets queued comprises extracting information from packet headers of packets transmitted by neighboring nodes.
[0010] According to some embodiments, selecting a packet for transmission based at least in part on the tracked numbers of packets queued comprises making a weighted selection from among packets for transmission to other nodes, the weighting based on the numbers of packets queued for transmission to respective others of the plurality of nodes.
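One plausible realization of this weighted selection, with selection probability proportional to the per-next-hop backlog; the patent does not fix the exact weighting, so this is only a sketch:

```python
import random

def pick_queue(backlogs, rng=random):
    """Weighted pick among non-empty next-hop queues, with weights
    proportional to queue backlog (an assumed weighting)."""
    queues = [q for q, n in backlogs.items() if n > 0]
    if not queues:
        return None
    weights = [backlogs[q] for q in queues]
    return rng.choices(queues, weights=weights, k=1)[0]

backlogs = {("A", "B"): 0, ("A", "C"): 5}
assert pick_queue(backlogs) == ("A", "C")     # the only non-empty queue
assert pick_queue({}) is None
```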
[0011] According to some embodiments, setting a transmit priority comprises setting MAC parameters of transmission of the selected packet.
[0012] According to some embodiments, the network operates according to a TDMA protocol with a plurality of time slots; and the acts of selecting a packet for transmission and setting a transmit priority for the selected packet are repeated in each of the plurality of time slots.
[0013] According to some embodiments, setting MAC parameters of transmission of the selected packet comprises adjusting a time of a contention window for the packet within a time slot, with higher priority packets being assigned contention windows earlier in the time slot.
[0014] According to some embodiments, a method of operating a node in a communication system is provided. The communication system comprises a plurality of nodes forming a network in which packets are communicated by transmissions between nodes. The method comprises, at the node, upon receipt of a packet from a first node for retransmission to a second node, storing the packet in a queue of a plurality of queues. The queue is selected from the plurality of queues based on the identity of the first node and the identity of the second node. The method also comprises receiving information about a number of packets from the node queued on the second node for retransmission to at least one other node. The method also comprises selectively transmitting a packet from the queue based at least in part on a relative number of packets queued on the node for transmission to the second node and the number of packets from the node queued on the second node for retransmission to at least one other node of the plurality of nodes.
[0015] According to some embodiments, selectively transmitting the packet from the queue comprises assigning a priority to the queue based at least in part on the relative number of packets and selecting the queue based at least in part on the priority assigned to the queue relative to respective priorities assigned to other queues of the plurality of queues. [0016] According to some embodiments, selectively transmitting the packet from the queue further comprises setting transmit parameters of the packet in accordance with the priority assigned to the queue.
[0017] According to some embodiments, setting the transmit parameters comprises setting a transmit start time within a time slot and a backoff time.
[0018] According to some embodiments, assigning the priority to the queue is further based in part on the transmission rate between the node and the second node.
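Combining the differential backlog with the link transmission rate gives the classic max-weight form from back-pressure scheduling, shown here as an illustrative sketch; the patent names the inputs but does not state this exact formula:

```python
def queue_priority(local_backlog, remote_backlog, link_rate):
    """Back-pressure-style weight: differential backlog scaled by the
    link rate, clipped at zero (an assumed, max-weight-style formula)."""
    return link_rate * max(local_backlog - remote_backlog, 0)

# A long local queue draining into a short remote queue over a fast link
# gets the highest priority; no differential means no pressure to transmit.
assert queue_priority(10, 2, 2.0) == 16.0
assert queue_priority(2, 10, 2.0) == 0.0
```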
[0019] According to some embodiments, selectively transmitting the packet comprises transmitting the packet with a header indicating a number of packets queued on the node for transmission to the second node.
[0020] According to some embodiments, the header comprises an identifier of the first node, an identifier of the second node and a length of the queue. The length of the queue is selected based on the identity of the first node and the identity of the second node.
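A hypothetical fixed-width encoding of such a header; the patent specifies the fields (prior-hop id, next-hop id, queue length) but not their sizes, so the one-byte node ids, the two-byte queue length, and the use of network byte order are all assumptions:

```python
import struct

# Assumed layout: 1-byte prior-hop id, 1-byte next-hop id, 2-byte queue length.
HDR = struct.Struct("!BBH")

def pack_trickle_header(prev_hop_id, next_hop_id, queue_len):
    return HDR.pack(prev_hop_id, next_hop_id, queue_len)

def parse_trickle_header(raw):
    """Parse the header prefix of a received packet."""
    return HDR.unpack(raw[:HDR.size])

hdr = pack_trickle_header(3, 7, 42)
assert len(hdr) == 4
assert parse_trickle_header(hdr + b"payload") == (3, 7, 42)
```

Neighbors that overhear the packet can parse this prefix to learn the sender's backlog without any dedicated control traffic.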
[0021] According to some embodiments, a method of operating a network interface of a node in a communication network to implement prioritized transmission is provided. The communication operates in accordance with time slots. The method comprises, for a time slot, determining queue backlog information for the node and exchanging queue backlog information with neighboring nodes in a network. The method also comprises assigning a MAC access category to a packet for transmission from the node based at least in part on relative backlog of the node and neighboring nodes and providing the packet to MAC layer components for transmission in accordance with the assigned access category.
[0022] According to some embodiments, the MAC layer components are configured to implement, for each of a plurality of access categories, a respective contention window. The respective contention windows are disjoint for the plurality of access categories, and the access category is selected from the plurality of access categories.
[0023] According to some embodiments, the plurality of access categories are ordered from lower to higher. Assigning a MAC access category to the packet for transmission from the node based at least in part on relative backlog of the node and neighboring nodes comprises assigning a higher access category for a higher backlog on the node relative to the backlog on the neighboring nodes. [0024] According to some embodiments, the network operates according to a TDMA protocol with a plurality of time slots. Within each time slot, the MAC layer components are configured to initiate transmission of a packet during a contention window with a start relative to the start of the time slot, that is set based on the access category of the packet. Contention windows of higher access categories start earlier in the time slot than contention windows of lower access categories.
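The slot layout described above, where higher access categories receive earlier, disjoint contention windows within a TDMA slot, can be sketched as follows; the slot length, window width, and number of categories are assumed values, not taken from the patent:

```python
def contention_window(access_category, num_categories=4, slot_us=1000):
    """Return the (start, end) of the contention window, in microseconds
    from the start of the TDMA slot. Higher access categories map to
    earlier windows; windows for distinct categories are disjoint."""
    width = slot_us // num_categories
    rank = num_categories - 1 - access_category   # highest AC -> rank 0 (earliest)
    start = rank * width
    return start, start + width

# AC3 (highest) contends first; AC0 (lowest) contends last.
assert contention_window(3) == (0, 250)
assert contention_window(2) == (250, 500)
assert contention_window(0) == (750, 1000)
```

Because the windows never overlap, a packet in a higher category always gets the chance to seize the channel before any lower-category packet in the same slot.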
[0025] According to some embodiments, exchanging queue backlog information comprises transmitting the packet with queue backlog information about the node in a packet header; and obtaining queue backlog information about the neighboring nodes from headers of packets transmitted by the neighboring nodes.
[0026] Features of the above-described embodiments may be used alone or in any suitable combination.
[0027] The foregoing is a non-limiting summary of the invention, which is defined by the appended claims.
BRIEF DESCRIPTION OF DRAWINGS
[0028] The accompanying drawings are not intended to be drawn to scale. In the drawings, each identical or nearly identical component that is illustrated in various figures is represented by a like numeral. For purposes of clarity, not every component may be labeled in every drawing. In the drawings:
[0029] FIG. 1 is a schematic illustration showing an exemplary input model for TRI queue;
[0030] FIG. 2 is a schematic illustration showing an exemplary output model for TRI queue;
[0031] FIG. 3 is a schematic illustration showing an example of the Trickle approach;
[0032] FIG. 4 is a schematic illustration showing an example of Trickle queues for network coding (mixed flows);
[0033] FIG. 5 is a schematic illustration showing an exemplary Trickle header structure;
[0034] FIG. 6 is a schematic illustration showing an exemplary Case 1 network topology; and
[0035] FIG. 7 is a schematic illustration showing Trickle's throughput improvement on different channel busyness conditions for FUN-1 under Case 1 according to an exemplary analysis.
DETAILED DESCRIPTION
[0036] The inventors have recognized and appreciated a transmission protocol that yields high network throughput in a multi-hop network, such as a 5G wireless network. High throughput may be achieved with distributed control based on prioritizing communication of packets selected to reduce disparity in node-to-node backlogs. Higher priority may be assigned to transmission of a packet at a node that has a large backlog of packets to be sent to a second node that has a small backlog of packets from that node. Other criteria may alternatively or additionally be used to select specific packets at a node for transmission and the priority of transmission from that node.
[0037] Such control may be practically achieved with relatively low overhead by communicating backlog information in packet headers. As backlog information does not change unless a packet is transmitted, transmission of backlog information in connection with transmission is adequate for each node to retain current backlog information for its neighboring nodes, to which or from which it may send or receive packets.
[0038] Further, the described approach enables use of a limited number of queues on each node, as each node may queue packets based on a prior-hop node from which the packet was received and a next hop node to which the packet is to be sent.
[0039] Because each node has access to its own backlog information and backlog information from neighboring nodes, each node may compute a relative priority of its own transmission from this information. The priorities may be set such that priority is given to packets from nodes with large backlogs for transmission to nodes with small backlogs. Distributed prioritization may be implemented by setting transmission parameters for a packet in proportion to its priority. In some embodiments, these parameters may be set at the MAC level, impacting the start time, backoff time and/or length of a contention window.
[0040] 1. Introduction
[0041] According to some embodiments, the Trickle method, a new Medium Access Control (MAC) protocol, is proposed based on the principle of back pressure, custom tailored to the needs of coding techniques such as fountain codes and network coding. The Trickle method, hereinafter referred to as Trickle, maintains TRiple-node Indexed (TRI) queues at each node, which significantly reduces the number of queues and is scalable to large-scale networks. This is a noteworthy departure from traditional MAC protocols that maintain per-destination or per-link queues at each node [3-12]. As a highlight of its properties, Trickle is lightweight and amenable to practical implementations. Our extensive experimental results have demonstrated that Trickle achieves much higher throughput than IEEE 802.11b for fountain codes, network coding, and TCP in multi-hop wireless networks.
[0042] 2 Related Work
[0043] In the wireless networking area, back pressure has been studied extensively over the years. Most existing approaches to network modeling are based on the per-destination (PD) queueing model, as in [3]-[7]. In PD queueing networks, every node needs to maintain a queue for each flow, represented by a source-destination pair. However, the cost incurred by such a solution could be prohibitive in large-scale networks with a large number of flows, as it requires each node to keep track of every flow passing through it. In contrast, the TRiple-node Indexed (TRI) queueing structure in Trickle may operate, in accordance with some embodiments, with each node maintaining at most n^2 queues, where n is the number of neighbors of a specific node.
[0044] Multicast scheduling with network coding has been studied in [7], which proposed random linear network coding over virtual queues to save transmission costs. However, this approach is not practical in that it requires a global encoding vector to be carried in the header of each packet, posing a challenge for practical implementation. A similar technique with back pressure was proposed in [8], which tried to utilize COPE's XOR feature [13] to save transmissions. However, that scheme is COPE-oriented, and COPE requires additional buffers for packets already transmitted. The incurred overhead is inevitable and complicates the practical design of the framework. Further, these schemes were essentially based on PD queues, and suffered from the same scalability problem. In contrast, embodiments of Trickle are designed to be practical, and work effectively with a wide range of network coding schemes.
[0045] In order to address the limitations of the PD queueing structure, existing works have proposed the per-link (PL) queueing model, as in [4], [9]-[12]. The PL queueing structure requires less complexity when maintaining queueing structures and preserves good design features such as network decomposition [14]. Herein, "per-link queue" is used to mean that packets destined to the same next-hop node are put into the same queue. Thus the number of queues a node must maintain is related only to the number of its neighbors within one hop. The inventors have recognized and appreciated that a limitation of the PL queueing structure is the lack of some queueing information needed for better congestion control with network coding. In accordance with some embodiments, this limitation may be addressed using a TRI queueing structure.
[0046] Shadow queues or counters were introduced in [9] to replace the physical PD queue in each node. However, it is computationally expensive to locate the head of line (HOL) packet. Ji et al. [11] proposed to use a shadow queue with the cumulative number of aggregate packet arrivals for MaxWeight scheduling. The aggregated packet arrivals' data needs to be recorded from the beginning of the network formation. This poses a practical challenge, since in real-world networks there may be unpredictable node failures when the recorded data would be lost. In comparison, Trickle does not need to maintain shadowing PD queues according to an aspect of some embodiments. In fact, in some embodiments Trickle may eliminate the PD queue constraints, and does not need any PD queue information at all.
[0047] 3 System Overview
[0048] An introduction is provided, based on aspects of some embodiments, to Trickle, the new Medium Access Control (MAC) protocol specifically designed for fountain codes and network coding in wireless multi-hop networks, based on the general principle of back pressure.
[0049] According to one aspect, with the use of TRiple-node Indexed (TRI) queues, Trickle marks a significant departure from most existing designs of back pressure (BP) algorithms, which tend to use either per-destination (PD) queues or per-link queues. In a TRI queueing structure, each node maintains multiple TRI queues, one for each of its (parent, node, child) flow tuples. The TRI queues provide functionality similar to that of conventional per-destination queues or per-link queues, but with a more lightweight and practical design. One notable feature of TRI queues is that they are optimized for working with network coding schemes, such as COPE [13] and FUN [15], which generally utilize broadcast channels to save packet transmissions. [0050] TRI queues may be implemented using known hardware and software design techniques, adapted and controlled to perform functions as described herein. In some embodiments, the TRI queues may be built between the data link layer and the network layer. Since these queues sit below the network layer, packets that are placed in them have already gone through the routing process. In addition, the TRI queues in each node are dynamically created, which means that memory need not be preallocated for a queue until a packet arrives that will be placed in it. This conserves space and improves the efficiency of Trickle scheduling.
[0051] According to another aspect, the scheduling mechanism of Trickle is based on the differential queue length of the current node and its neighbor nodes that are possible next hops of the current node. In order to reduce the overhead of exchanging queue length information among neighboring nodes, when transmitting a packet, a sending node piggybacks its queue information in a Trickle header. This information thus can be overheard by all its neighbors. When a neighbor node has overheard this packet, the neighbor node can retrieve the information by parsing the Trickle header and calculate the differential queue length. Then, the neighbor node can use the differential queue information for scheduling.
[0052] Further according to an aspect, Trickle works effectively because of support from MAC protocols. The Trickle MAC protocol may be based on IEEE 802.11e, whose QoS capability provides a cornerstone that could facilitate Trickle implementation in some embodiments. The number of Access Categories (ACs) may be modified, and each AC's parameters also may be changed to adapt to algorithms in some embodiments.
[0053] In another aspect, in addition to the Trickle MAC protocol, a source congestion control mechanism may be implemented at the transport layer, which controls the sending flow rate of the source node according to network congestion conditions. The designed mechanism regulates the sending rate by monitoring the queue length in the source node. It behaves in a way similar to TCP congestion control. In this design, two queue length thresholds may be used, an upper threshold and a lower threshold, whereas TCP uses one threshold. A control scheme is designed that keeps the queue length of the source node balanced between the upper and lower thresholds, which provides better flow control for the whole network system.
[0054] 4 Description of Trickle
[0055] Table 1: Notations
μ_max  Maximum channel service rate
U_{i,j}(t)  Queue backlog of q_{i,j} at time t
[λ^f_{i,j}]  Input rate matrix
Λ_tri  Capacity region that stabilizes the TRI network
V_{i,j,k}(t)  Backlog of TRI queue q_{i,j,k} at time t
U^f_{i,j}(t)  Queue backlog for flow f of l_{i,j} at time t
P(t)  Power matrix at time t
S(t)  Channel state matrix at time t
W_{i,j}(t)  Maximum weight for l_{i,j} at time t
[0056] As mentioned in previous sections, the Trickle framework aims to solve the congestion control problem and to increase network throughput. Consider an exemplary network modeled by a graph G = (N, E), where N is the set of nodes and E is the set of links. A link denoted by l_{i,j} exists if it is in E. The set of the neighboring nodes of node i is denoted by N_i. In some embodiments, a slotted system may be provided and the time slot is denoted by t. In one example, the slotted system may be a TDMA network. In every time slot, let s_{i,j}(t) represent the instantaneous channel state of l_{i,j} ∈ E, where i and j are the transmitter and receiver of the link. The channel state matrix of the network can be denoted by S(t) = [s_{i,j}(t)], which is a tensor notation.
[0057] Let F be the set of flows in the exemplary network, where each flow is indexed by f = 1, 2, ..., |F| and |·| is the cardinality of a set. The destination node of flow f is denoted by d(f). In the network, let μ_l represent the transmission rate on link l; then a schedule π = (μ_{l_1}, μ_{l_2}, ..., μ_{l_{|E|}}) is the set of link rates that can be supported simultaneously by the network. The strong stability in [16] is used to define network stability. The majority of the notations used in this paper are listed in Table 1.
[0058] 4.1 The TRI Queueing Model
[0059] The TRI queueing model according to some embodiments is illustrated in FIG. 1 and FIG. 2. Solid lines are links between two nodes, and dashed lines indicate the route between two queues to exchange data. In these two figures, q_{i,j}, defined on link l_{i,j}, holds the data in node i with node j as the next-hop node. Therefore, the number of queues in node i is at most the number of its neighbors. If l_{i,j} is active, packets in q_{i,j} are transmitted.
[0060] Let μ^in_{i,j}(t) denote the input rate of link l_{i,j} at time t. For a packet in q_{i,j}, when it is transmitted to node j, the destination of the packet will be checked against j. If the packet is destined to node j, the packet goes out of the queueing system. Otherwise, the packet shall enter another queue, say, q_{j,k}, based on the routing. Then the packets not destined to node j yield a data rate from q_{i,j} to q_{j,k}. Let μ^out_{i,j}(t) represent the allocated output data rate of q_{i,j} and μ_{i,j,k}(t) represent the allocated data rate from q_{i,j} to q_{j,k} at time t. We have

μ^in_{i,j}(t) = Σ_{m∈N_i} μ_{m,i,j}(t) + a_{i,j}(t), (1)

μ^out_{i,j}(t) = Σ_{k∈N_j} μ_{i,j,k}(t), (2)

where a_{i,j}(t) is the data rate of the local traffic generated by node i to node j in slot t.
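The rate-conservation relations (1) and (2) can be illustrated with a small numeric sketch; the function names and example values below are hypothetical, not part of the patent:

```python
# Hypothetical numeric check of the rate relations (1) and (2): the input
# rate of link (i, j) is the sum of the rates forwarded from node i's
# neighbors plus local arrivals, and the output rate of q_{i,j} is the sum
# of the rates allocated toward node j's neighbors.

def input_rate(forwarded_from_neighbors, local_arrival_rate):
    """mu_in_{i,j}(t) = sum_m mu_{m,i,j}(t) + a_{i,j}(t), as in eq. (1)."""
    return sum(forwarded_from_neighbors) + local_arrival_rate

def output_rate(rates_to_next_queues):
    """mu_out_{i,j}(t) = sum_k mu_{i,j,k}(t), as in eq. (2)."""
    return sum(rates_to_next_queues)

# Example: two upstream neighbors forward at 3 and 2 packets/slot, local
# traffic adds 1 packet/slot; the queue drains toward two next hops at 2
# and 4 packets/slot, so input and output rates balance in this slot.
mu_in = input_rate([3, 2], 1)
mu_out = output_rate([2, 4])
```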
[0061] In addition, since each queue contains the traffic for different flows, we can write μ_{i,j,k}(t) in terms of the flow rates. Then we have

μ_{i,j,k}(t) = Σ_{f∈F} μ^f_{i,j,k}(t), (3)

where μ^f_{i,j,k}(t) is the allocated rate for flow f on l_{i,j} from q_{i,j} to q_{j,k} at time t.

[0062] 4.2 The Network Model
[0063] In some embodiments, consider a wireless network over consecutive K time slots. In a non-limiting example, the wireless network may be a TDMA network, such as a 5G network. The TRI queueing model is characterized by the following properties.
[0064] 1) Convergent wireless channels.
[0065] Let T_S(t, K) be the set of time slots at which the channel state matrix S(t) = S during the interval 0 ≤ t ≤ K - 1. The wireless channel process S(t) is assumed to be convergent with a finite number of channel states {S} and state probabilities π_S. The convergence interval [3] K is the number of time slots such that for a given value δ_1 > 0, we have:

| E{ |T_S(t, K)| } / K - π_S | ≤ δ_1, for all S, (4)

where E{·} is expectation.
[0066] 2) Bounded and convergent arrival rates.
[0067] For a given δ_2 > 0, an arrival process a^f_{i,j}(t) convergent with the exogenous arrival rate λ^f_{i,j} within interval K satisfies:

| E{ (1/K) Σ_{τ=0}^{K-1} a^f_{i,j}(t + τ) } - λ^f_{i,j} | ≤ δ_2. (5)
[0068] Besides, the second moment of the exogenous arrivals at each node is bounded in every time slot by some finite maximum value, regardless of past history, so that for any i ∈ N, j ∈ N_i,

E{ [a^f_{i,j}(t)]^2 } ≤ (σ_max)^2, (6)

E{ [Σ_f a^f_{i,j}(t)]^2 } ≤ (a_max)^2, (7)

where σ_max and a_max are constants.
[0069] 3) Upper semi-continuous power-rate function.
[0070] Let μ_{i,j}(P(t), S(t)) denote the power-rate function under some power allocation matrix P(t) = [P_{i,j}(t)], P(t) ∈ Π, and channel state matrix S(t), where Π is the set of feasible power allocations. Each element P_{i,j}(t) is the allocated power on l_{i,j} at time t. The transmission rates are bounded for every time slot t by μ_max, so that

μ_{i,j}(P(t), S(t)) ≤ μ_max, (8)

which can also be deduced from the fact that the power-rate function is bounded with finite transmission power.
[0071] 4) Queue update dynamics.
[0072] Let U_{i,j}(t) denote the backlog of q_{i,j} at time t. Then the queueing dynamics in the network satisfy

U_{i,j}(t + 1) ≤ [U_{i,j}(t) - μ^out_{i,j}(t)]^+ + Σ_{m∈N_i} μ_{m,i,j}(t) + a_{i,j}(t), (9)

where [x]^+ = max(x, 0), and it is an inequality instead of an equality because the arrivals may be less than the allocated output data if the neighboring nodes have little or no data to transmit [3].
[0073] 4.3 The Trickle Algorithm for Medium Access Control
[0074] Following the discussion provided in Section 4.2, this section provides a back-pressure control policy according to some embodiments that stabilizes the TRI queueing network whenever the input rate matrix [λ^f_{i,j}] is inside the network capacity region Λ_tri.
[0075] First, we define Trickle TRI queues. The Trickle TRI queues are the queues that keep track of TRI traffic through adjacent hops. Let q_{i,j,k} denote the TRI queue that tracks traffic from q_{i,j} to q_{j,k}, and V_{i,j,k}(t) denote the queue backlog of q_{i,j,k} at time t. Similar to (9), for regular unicast transmission the TRI queue updates according to

V_{i,j,k}(t + 1) = [V_{i,j,k}(t) - μ_{i,j,k}(t)]^+ + Σ_{f∈F_{i,j,k}} a^f_{i,j}(t) + μ_{i,j}(t). (10)

[0076] For network coding, where broadcast is used for transmitting packets mixed from two flows,

V_{i,j,k}(t + 1) = [V_{i,j,k}(t) - 2μ_{i,j,k}(t)]^+ + Σ_{f∈F_{i,j,k}} a^f_{i,j}(t) + μ_{i,j}(t) + μ_{k,j}(t). (11)

[0077] Then we have the following algorithm.

[0078] Algorithm 1 Trickle Scheduling Algorithm
[0079] Purpose: For every time-slot t, the algorithm schedules a packet from node i to its next hop node j.
[0080] Steps: Step 1. For node i and node j, the weight W_{i,j}(t) is calculated by:

W_{i,j}(t) = μ_{i,j}(t) [ Σ_{k∈N_j} V_{i,j,k}(t) - Σ_{k∈N_j} Σ_{m∈N_k} V_{j,k,m}(t) ], (12)

where μ_{i,j}(t) is the transmission rate on l_{i,j} at time t.
[0081] Step 2. Transmit the packets in q_{i,j} according to the allocated rate and MAC priority, and send packets going through l_{i,j} to queue q_{j,k} in node j, where the next hop node k is determined by the routing algorithm running in node j.
[0082] Step 3. Calculate and update [V_{i,j,k}(t + 1)] according to (10) or (11), where μ_{i,j,k}(t) and μ_{i,j}(t) can be calculated by monitoring the data transmission activity on each link l_{i,j}.
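A minimal sketch of the scheduling step is given below. It assumes the weight is the link rate times the differential TRI backlog between the current node and its next hop (the extracted form of equation (12) is damaged, so this is an interpretation, not the patented formula), and it implements the [x]^+ queue updates for the unicast and broadcast cases. All names are hypothetical:

```python
# Sketch of one Trickle scheduling step, under stated assumptions.

def plus(x):
    """[x]^+ = max(x, 0), as used in the queue updates (10) and (11)."""
    return max(x, 0.0)

def link_weight(mu_ij, backlog_i_through_j, backlog_j_onward):
    """Assumed form of the weight: link rate times differential backlog
    between node i's TRI queues through j and node j's onward TRI queues."""
    return mu_ij * (sum(backlog_i_through_j) - sum(backlog_j_onward))

def update_tri_queue(v, served, arrivals, upstream_rate, broadcast=False):
    """Unicast update per (10); the broadcast case per (11) doubles the
    service term because one broadcast serves packets of two mixed flows."""
    service = 2 * served if broadcast else served
    return plus(v - service) + arrivals + upstream_rate

# Example: node i holds backlogs [5, 3] through j; node j holds [2, 1] onward.
w = link_weight(mu_ij=2.0, backlog_i_through_j=[5, 3], backlog_j_onward=[2, 1])
v_next = update_tri_queue(v=5.0, served=2.0, arrivals=1.0, upstream_rate=0.5)
```

A scheduler would transmit on the link with the largest positive weight, mirroring classical back-pressure scheduling.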
[0083] 4.4 Source Congestion Control
[0084] In some embodiments, for source congestion control in Layer 4, network interface hardware and/or software may dynamically adjust the sending rate by monitoring the queue length of the TRI queues residing in the source node. Two queue length thresholds may be monitored: a high-threshold, which indicates the queue length has reached a certain fraction of the queue size; and a low-threshold, which indicates the queue length has dropped to a certain fraction β of the queue size. The rate may be dynamically adjusted according to these thresholds.
[0085] Let R denote the source rate. When the source node's queue length exceeds the high-threshold, the sending rate of the source node is adjusted according to

R_new = R_old / a, (13)

where a is a constant which should satisfy a > 1. When the source node's queue length drops below the low-threshold, the sending rate of the source node is adjusted according to

R_new = R_old + b, (14)

where b is another constant positive number. The rate R thus will keep increasing until the queue length reaches the high-threshold, and then it will decrease back. This process continues, keeps the queue length between the low-threshold and high-threshold levels, and prevents the queue in the source node from ever overflowing.
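The two-threshold controller can be sketched as follows. Equation (13) is reproduced here under the assumption that it is a multiplicative decrease R_new = R_old / a (the extracted equation is damaged); the constants and threshold fractions are illustrative:

```python
# Sketch of the two-threshold source rate controller, under stated assumptions.

def adjust_rate(rate, queue_len, queue_size, a=2.0, b=10.0,
                high=0.75, low=0.25):
    """Return the new sending rate given the source node's queue occupancy."""
    if queue_len > high * queue_size:   # above high-threshold: back off
        return rate / a                 # assumed R_new = R_old / a, eq. (13)
    if queue_len < low * queue_size:    # below low-threshold: probe upward
        return rate + b                 # R_new = R_old + b, eq. (14)
    return rate                         # between thresholds: hold steady

rate = adjust_rate(400.0, queue_len=120, queue_size=150)  # congested: halve
rate = adjust_rate(rate, queue_len=20, queue_size=150)    # drained: additive increase
```

This mirrors TCP's additive-increase/multiplicative-decrease behavior, but is driven by local queue occupancy rather than loss or acknowledgment signals.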
[0086] 4.5 Implementing Trickle in Practice
[0087] In order to better facilitate practical implementation of Trickle, (parent, child) queues are used to represent actual TRI queues in the exemplary embodiments below. Since the node in which a TRI queue resides is the current node, we eliminate the need for an extra node field denoting the current node in our actual protocol design. In Trickle, each node i maintains multiple queues for each of its (parent, child) flow pairs, i.e., a (parent, child) pair uniquely determines a queue into which the packets from the corresponding flow will be put. These queues are herein referred to as "TRI queues". It should also be noted that the sense of "flow" here is different from the end-to-end traffic flows commonly referred to in the literature. The flow related to a specific pair is the local flow that passes through that (parent, child) pair.
[0088] In the example network illustrated in FIG. 3, a packet belonging to the flow from node A2 through A1 to B will be put into the corresponding TRI queue Q(A2, B) in node A1 when it reaches node A1. Similarly, when the packet continues traveling to node B, it will be put in its corresponding TRI queue Q(A1, C2) in node B. The packet will keep traveling this way until it reaches the destination. In network coding, where packets are often mixed from two individual flows and where broadcast is often adopted for transmitting such packets, we set aside another type of TRI queue to hold broadcast packets. In one non-limiting example, the network coding may use schemes such as FUN [15]. As illustrated in the example illustration in FIG. 4, for two FUN flows simultaneously traveling from side A to side C and side C to side A, a packet 410 transmitted from node A to node B and a packet 420 from node C to node B can have opportunities to be mixed and broadcast to neighbors so as to save one transmission. The mixed packet will be put in the broadcast TRI queue represented by Q(AC, AC).
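The dynamically created (parent, child) queues described above can be sketched with a dictionary keyed by the pair, so that no memory is allocated until a queue's first packet arrives. The class and method names are hypothetical:

```python
# Sketch of dynamically created TRI queues keyed by (parent, child) pairs.
from collections import deque

class TriQueues:
    def __init__(self):
        self.queues = {}  # (parent, child) -> deque, created on first use

    def enqueue(self, parent, child, packet):
        # Memory for a queue is allocated only when its first packet arrives.
        self.queues.setdefault((parent, child), deque()).append(packet)

    def backlog(self, parent, child):
        q = self.queues.get((parent, child))
        return len(q) if q else 0

# Following the FIG. 3/4 example: node B holds a packet of the unicast flow
# with parent A1 and child C2, plus one mixed packet in the broadcast queue.
node_b = TriQueues()
node_b.enqueue("A1", "C2", "pkt-1")     # unicast TRI queue Q(A1, C2)
node_b.enqueue("AC", "AC", "mixed-pkt") # broadcast TRI queue Q(AC, AC)
```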
[0089] One aspect of implementing Trickle is dealing with queueing information exchange between neighboring nodes. In order to save the overhead of retrieving this information, an overhearing mechanism is developed in which each node can "eavesdrop" on its neighbors' transmissions for the needed queueing information. According to some embodiments, an extra Trickle header structure may be added before the Ethernet header in the data packet. An exemplary header structure is illustrated in FIG. 5. The respective 32-bit fields Previous hop address 1 and Previous hop address 2 represent the IP addresses of the prior hop nodes from which the last received packet came. If the received packet came from a unicast flow, only the Previous hop address 1 field is used. The Next hop address field is the IP address of the next node to which the packet will be sent. This field is determined by the routing (Layer 3) module. The Queue length field is the queue length information that the transmitting packet piggybacks. It is calculated at the current sending node before the packet is transmitted, corresponding to the right-hand-side term Σ_{k∈N_j} V_{i,j,k}(t) in (12).
[0090] Since overhearing is a passive method of obtaining the information, in another embodiment network hardware and/or software in a transmitting node may also set an extra 64-bit field in the Trickle header, Time stamp, to indicate the validity of the current queue length information. This field is updated to the current time once the packet is about to be sent out. Every time a node tries to use the queue length information, for example by reading the Sum of queue length field in the header, a validity check is performed. The policy is described as follows. In addition to storing the current queue length information from interested neighbors, every TRI queue also keeps a backup of the queue length information from last time. Let T_0 denote the current time value, which can be retrieved from the operating system, and assume the Time stamp value in the header is t. If

T_0 - t > T_th, (15)

where T_th is the time-up threshold, then the queue length information carried in the packet is deemed outdated and is ignored. Instead, the queue length value from the last-time backup is used. This policy ensures that we only use the most confident queue length information from the most recent updates.
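The header layout of FIG. 5 and the freshness rule (15) can be sketched together. The exact byte order and field ordering below are assumptions; the patent only specifies 32-bit address fields, a queue length field, and a 64-bit time stamp:

```python
# Sketch of the Trickle header and the time-stamp validity check of (15).
import struct

# prev hop 1, prev hop 2, next hop, queue length (32-bit each), time stamp (64-bit)
HEADER_FMT = "!IIIIQ"

def pack_header(prev1, prev2, next_hop, queue_len, ts_ms):
    return struct.pack(HEADER_FMT, prev1, prev2, next_hop, queue_len, ts_ms)

def queue_len_or_backup(header_bytes, backup_len, now_ms, tth_ms=30):
    """Use the piggybacked queue length only if T0 - t <= Tth (eq. (15));
    otherwise fall back to the last-time backup value."""
    _, _, _, qlen, ts = struct.unpack(HEADER_FMT, header_bytes)
    return backup_len if now_ms - ts > tth_ms else qlen

hdr = pack_header(0x0A000001, 0, 0x0A000002, 42, ts_ms=1000)
fresh = queue_len_or_backup(hdr, backup_len=7, now_ms=1020)  # within Tth
stale = queue_len_or_backup(hdr, backup_len=7, now_ms=1040)  # outdated
```

The 30 ms default matches the time-up threshold used in the simulations of Section 6.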
[0091] For source congestion control, the rate R described in Section 4.4 determines the time interval between two consecutive packets, and in some embodiments it may be converted to its corresponding time interval so that the interval is controlled according to (13) and (14).
[0092] Accordingly, a modified IEEE 802.11e MAC layer protocol may be employed to implement the Trickle protocol, retaining aspects of IEEE 802.11e, including its QoS assurance capability, which is beneficial to Trickle implementation. However, the number of Access Categories (ACs) and each AC's parameters may be changed to adapt to the Trickle protocol. In one example, the parameter setup chosen for each AC index is shown in Table 2. These numbers are chosen to eliminate the possibility of collision between different ACs by setting non-overlapping minimum and maximum sizes of contention windows (CW). In some embodiments, the network operates with a slotted system, and within a time slot, transmission of a packet is initiated during a contention window whose start time relative to the start of the time slot is set based on the access category of the packet. In some embodiments, contention windows of higher access categories start earlier in the time slot than contention windows of lower access categories. The MAC scheduling priority may be implemented using window mapping techniques [17]. In some embodiments, the order of the AC index assigned for packet transmission from a node is chosen based on the relative backlog of the node and neighboring nodes. For example, a higher access category may be assigned to a node with a higher backlog relative to the backlogs of neighboring nodes.
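The backlog-to-AC mapping described above can be sketched as follows. The number of ACs and the bucketing rule are illustrative assumptions; the patent's Table 2 parameters are not reproduced here:

```python
# Sketch: map differential backlog to an 802.11e-style access category,
# so a node with a larger backlog relative to its neighbors gets a higher
# AC and hence an earlier-opening contention window within the slot.

def assign_access_category(own_backlog, neighbor_backlogs, num_acs=4):
    """Return an AC index in [0, num_acs - 1]; higher means higher priority."""
    if not neighbor_backlogs:
        return num_acs - 1
    diff = own_backlog - max(neighbor_backlogs)
    if diff <= 0:
        return 0                      # neighbors more backlogged: lowest AC
    # Bucket the positive differential into the remaining ACs (assumed rule).
    return min(num_acs - 1, 1 + diff // 10)

ac = assign_access_category(own_backlog=35, neighbor_backlogs=[5, 10])
```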
[0093] Table 2: 802.11e AC Modification
[0094] 5 Stability Analysis
[0095] In this section, the stability of Trickle as described in the embodiments in the previous section is analyzed. To provide some intuition on how to design the stabilizing algorithms, we first give a brief introduction to the Lyapunov drift method.
[0096] 5.1 Lyapunov Drift Method
[0097] A sufficient condition for network stability using Lyapunov theory is given in Lemma 2 in [18]. Denote the set of queues in a system by X. U(t) = [U_x(t)] is the queue backlog vector for the multiple queues, and L(U(t)) = Σ_x U_x(t)^2 is the Lyapunov function for the queue states. Note that this model is applicable to any system that contains multiple queues. If there exists a positive integer K such that for every time slot, the Lyapunov function evaluated K steps into the future satisfies

E{ L(U(t + K)) - L(U(t)) | U(t) } ≤ B_1 - Σ_x θ_x U_x(t) (16)

for some positive constants B_1, [θ_x], and if E{U_x(t)} < ∞ for t ∈ {0, 1, ..., K - 1}, then the network is stable, and

limsup_{t→∞} (1/t) Σ_{τ=0}^{t-1} Σ_x θ_x E{U_x(τ)} ≤ B_1. (17)
[0098] Similar to (9), the queue backlog K slots into the future can be bounded as

U_x(t + K) ≤ [U_x(t) - Σ_{τ=0}^{K-1} μ_x(t + τ)]^+ + Σ_{τ=0}^{K-1} a_x(t + τ), (18)

where μ_x(t) is the output rate of queue x and a_x(t) is the queue input rate, including both the exogenous arrivals and the traffic from other queues. Squaring both sides of the inequality above and taking conditional expectations with respect to U(t), we have

E{ L(U(t + K)) - L(U(t)) | U(t) } ≤ K^2 |X| (μ_max^2 + a_max^2) - 2K Σ_x (μ_x - λ_x) U_x(t), (19)

where μ_max = max_{x,t} μ_x(t) and a_max = max_{x,t} a_x(t), and the service rate μ(t) and the arrival rate a(t) converge to the rates μ = [μ_x] and λ = [λ_x] similarly to (5).
[0099] Comparing (16) and (19), we can see that if there exists ε > 0 such that μ_x - λ_x ≥ ε, then constants θ_x can be found that stabilize all the queues. Therefore, a stabilizing algorithm should be designed to keep the average service rate larger than the arrival rate for each queue in the network during the K slots.
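The step from the drift condition (16) to the time-average bound (17) is a standard telescoping argument, sketched here for clarity:

```latex
% Take expectations of (16) and sum over m = 0, 1, ..., M-1 at slots t = mK:
\sum_{m=0}^{M-1}\Big(\mathbb{E}\{L(U((m{+}1)K))\}-\mathbb{E}\{L(U(mK))\}\Big)
  \;\le\; M B_1 \;-\; \sum_{m=0}^{M-1}\sum_{x}\theta_x\,\mathbb{E}\{U_x(mK)\}.
% The left-hand side telescopes to E{L(U(MK))} - E{L(U(0))} >= -E{L(U(0))},
% since L is nonnegative. Rearranging, dividing by M, and letting M grow
% without bound yields the long-run average backlog bound stated in (17).
```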
[00100] 5.2 Trickle Network Stability
[00101] First define an exemplary PDPL queueing network in which each node keeps a queue for each source-destination-link combination. Thus, a node in a PDPL network with |F| flows and |E| links will have |F| × |E| PDPL queues. It should be noted that the PDPL queueing network contains more queues than the TRI network and acts only as a reference network in the following analysis.
[00102] Let q^f_{i,j} denote the queue for flow f on l_{i,j}, and let μ^{f,in}_{i,j}(t) and μ^{f,out}_{i,j}(t) denote the instantaneous input and output rates of q^f_{i,j} at time t, to obtain

μ^{f,in}_{i,j}(t) = Σ_{m∈N_i} μ^f_{m,i,j}(t) + a^f_{i,j}(t), (20a)

μ^{f,out}_{i,j}(t) = Σ_{k∈N_j} μ^f_{i,j,k}(t). (20b)
[00103] Define Λ_pdpl as the set of all input rate matrices [λ^f_{i,j}] of the PDPL network such that there exist flow rate variables [d^f_{i,j,k}] satisfying:

d^f_{i,j,k} ≥ 0, ∀i, j, k ∈ N, (21a)

d^f_{i,j,k} = 0 for i = d(f), (21b)

Σ_k d^f_{i,j,k} - Σ_m d^f_{m,i,j} ≥ λ^f_{i,j}, ∀d(f) ≠ j, (21c)

G_{i,j} ≥ Σ_f Σ_k d^f_{i,j,k}, ∀i ∈ N. (21d)
[00104] Note that the input rate matrix [λ^f_{i,j}] is different from the link rate matrix G = [G_{i,j}], and is in a time-average sense. Then, we have the following lemma.
[00105] Lemma 1 Let Λ_pdpl denote the capacity region of the PDPL queueing network and Λ_tri denote the capacity region of the TRI queueing network; then Λ_pdpl = Λ_tri.

[00106] Proof. For any [λ^f_{i,j}] ∈ Λ_tri, there exists a scheduling algorithm R that can stabilize the network. At time slot t, R yields a transmission rate matrix for the flows of the TRI queueing network. Suppose the initial queue states of the TRI network and the PDPL network are the same. Then, any flow scheduling under the TRI network can be equally achieved under the PDPL network, and the same transmission rates can be achieved by the PDPL network. This implies Λ_tri ⊆ Λ_pdpl. On the other hand, since any [λ^f_{i,j}] ∈ Λ_pdpl can also be supported in the TRI network, we have Λ_pdpl ⊆ Λ_tri. Therefore, Λ_pdpl = Λ_tri.
[00107] Based on Lemma 1, the stability of Trickle algorithm can be stated as follows.
[00108] Theorem 1 The proposed Trickle algorithm is throughput optimal; for an arbitrary network admission rate vector [λ^f_{i,j}] inside the network capacity region Λ_tri, the Trickle algorithm stabilizes the network.
[00109] The proof is presented in the Appendix section.

[00110] 6 Simulated examples
[00111] 6.1 Development of Simulator
[00112] First, an implementation is provided in some embodiments for FUN-1,
FUN-2 [15], a BATS code [19], a fountain code (specifically, the RQ code [20]), random linear network code (RLNC) [21], and COPE [13] on QualNet [22]. Then, the proposed Trickle algorithm at the MAC layer is implemented as experiments for embodiments with each of the above coding schemes. For COPE, we implement only the XOR operation for mixing two flows, and Layer 4 in the COPE scheme is TCP; we use TCP for COPE because each scheme needs to achieve perfect recovery of lost packets to make a fair comparison.
[00113] For RLNC, a file is segmented into batches, each of which consists of M native packets. A relay node has a buffer of M packets; when receiving packets, the relay node takes all the packets in the buffer as input and generates one RLNC-coded packet. Upon receiving the ACK message, the source node stops transmitting the coded packets of the current batch and starts to transmit the coded packets of the next batch. For COPE, we use TCP as the Layer 4 protocol; for FUN-1, FUN-2, BATS, the fountain code, and RLNC, we use UDP as the Layer 4 protocol.
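The COPE-style XOR mixing of two flows mentioned above can be illustrated with a minimal sketch (the packet contents below are hypothetical examples):

```python
# Sketch of COPE-style XOR mixing: a relay XORs one packet from each of two
# flows into a single broadcast packet; a neighbor that already holds one
# native packet recovers the other by XORing again.

def xor_mix(pkt_a: bytes, pkt_b: bytes) -> bytes:
    """XOR two equal-length packets into one coded packet."""
    return bytes(x ^ y for x, y in zip(pkt_a, pkt_b))

a = b"flow-A-payload!!"
b = b"flow-B-payload!!"
coded = xor_mix(a, b)          # one broadcast replaces two unicasts
recovered_b = xor_mix(coded, a)  # a node already holding a recovers b
```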
[00114] All the experiments have the following setting: the packet size T = 1024 bytes; the batch size M = 16 packets; Trickle queue size is 150 kilobytes, which replaces the layer 3 output queue; source rate control is enabled. The time-up threshold Tth is set to 30 milliseconds. For source rate control, we set high-threshold = 0.75 and low-threshold β = 0.25.
[00115] The modified 802.11e MAC AC parameters for Trickle are set as shown in Table 2. We use IEEE 802.11b for the physical layer of each wireless node. In contrast to the experiment setup described in [15], use of the Ad hoc On-Demand Distance Vector (AODV) protocol for routing is preferably avoided except in Case 4, because AODV's buffering mechanism as implemented in QualNet may adversely affect our Trickle scheme.
[00116] 6.2 Performance Evaluation
[00117] Simulative experiments are conducted for the following four exemplary cases: 1) four hops with no node mobility (fixed topology) in a cross topology under both lossless and lossy conditions; 2) various numbers of hops with no node mobility (fixed topology) under a fixed packet loss rate per hop; 3) two hops with fixed source/destination nodes and a moving relay node (dynamic topology); 4) a large number of nodes with node mobility (dynamic topology). In order for FUN-1, FUN-2, and COPE to work better, all these experiments are conducted with two flows (forward and backward flows) between each source/destination pair. For each case, we compare the performance of seven schemes: FUN-1, FUN-2, a BATS code, a fountain code, RLNC, COPE, and TCP, under IEEE 802.11b MAC and the disclosed Trickle MAC according to some embodiments.
[00118] As is known in the field, the lower the packet sending rate of UDP, the lower the throughput; but too high a UDP packet sending rate will incur congestion and packet loss. Hence, in the experiments of BATS, the fountain code, and RLNC without Trickle enabled, which use UDP as their Layer 4 protocol, we tune the packet sending rate of UDP until we find the maximum throughput for each of these three schemes. In such situations, network congestion is at its lowest since maximum throughput is achieved. For FUN-1 and FUN-2, we intentionally create a congested environment in which there are also background interfering UDP flows generated at each source node. In one example, the sending rate of these interfering flows is 400 kbits/s. At the optimal packet sending rate of UDP, we conduct ten experiments for each of these five schemes and take the average throughput of the ten experiments as the performance measure of each scheme. However, for those with Trickle enabled, it is not necessary to manually tune the sending rate: the source rate control scheme dynamically adjusts the sending rate according to queue length.
[00119] For COPE and TCP, we conduct ten experiments for each of these two schemes, and take the average throughput of the ten experiments as the performance measure of COPE and TCP.
[00120] 6.2.1 Case 1: Four Hops with No Node Mobility in a Cross Topology
[00121] The setup of this set of experiments is the following. There are a total of 9 nodes in the network. These nodes are distributed in a cross topology, as illustrated in
FIG. 6. There is a chain communication path 620 comprised of 5 nodes from one end to the other (S3 -> R3 -> R5 -> R4 -> S4) placed in the horizontal direction, and another chain communication path 640, also comprised of 5 nodes (S1 -> R1 -> R5 -> R2 -> S2), from one end to the other placed in the vertical direction. The two chain paths 620, 640 share a single relay node R5 at the cross center and sustain a total of four flows in the network. There are two flows from one end to the other (node S1 to S2) and vice versa in the vertical direction; symmetrically, another two flows from one end to the other (node S3 to S4) and vice versa in the horizontal direction. For FUN-1 and FUN-2, there also exists one background UDP flow alongside each FUN flow at each source node. The sending rate of the background flow is 200 kbits/s.
[00122] Let K denote the number of native packets to be transmitted by the source. In the experiments, we measure the throughput in kbits/s under different values of K and different packet loss rates (PLR). The PLR is the same for all the links/hops in the network. Here, the PLR does not include packet loss due to thermal noise in the physical layer or packet collision, which is out of our direct control; the PLR is achieved by randomly dropping a correctly received packet at Layer 2 with probability equal to the given PLR.
[00123] FIG. 7 illustrates the resulting throughput trend for FUN-1 with Trickle enabled 720 versus Trickle disabled 710. The schematic illustration in FIG. 7 shows Trickle's throughput improvement under different channel busyness conditions for FUN-1 under Case 1. Trickle is able to provide significant throughput improvement across different channel busyness conditions.
[00124] Tables 3 and 4 show the total throughput of the four flows (i.e., forward/backward flows in both the vertical and horizontal directions) of the seven schemes with and without Trickle under Case 1. Based on results from the exemplary simulation analysis, we have the following observations:
[00125] · Thanks to better scheduling, our proposed Trickle MAC achieves higher throughput than IEEE 802.11b for FUN-1, FUN-2, a BATS code, a fountain code, RLNC, COPE, and TCP under the same network conditions.
[00126] · Due to the introduction of interfering flows, FUN-1 and FUN-2 can only achieve limited throughput. Trickle provides an additional throughput gain of more than 40%. [00127] · The throughput of FUN-2 is higher than or equal to that of FUN-1.
This is because FUN-2 uses RLNC at a relay node and RLNC has a higher coding gain than the XOR operation used in FUN-1.
[00128] · Even if COPE-like flow mixing techniques are not used in BATS,
Fountain, and RLNC, and they are tuned to their optimal transmission rates, the Trickle scheme is still able to improve the system's throughput by around 10%.
[00129] · For PLR=10%, COPE and TCP time out and cannot receive all K packets due to the high packet loss rate. This is consistent with the fact that TCP performs poorly in environments with high packet loss rates [23].
[00130] Table 3: Throughput under Case 1 for PLR=0
[00131] Table 4: Throughput under Case 1 for PLR=10%
[00132] 6.2.2 Case 2: Various Number of Hops with No Node Mobility in a
Chain Topology
[00133] The setup of this set of simulative experiments is the following. The network consists of a source node, a destination node, and 1 or 2 or 4 relay nodes. All the nodes in the network form a chain topology from the source node to the destination node. The communication path from the source node to the destination node has 2 or 3 or 5 hops. All the nodes are immobile; hence the network topology is fixed. For all the experiments in Case 2, we set PLR=10% for each hop/link. Again, the PLR does not include packet loss due to the thermal noise in the physical layer and packet collision.
[00134] Tables 5 and 6 show the throughput of seven schemes under Case 2.
Based on results from the exemplary simulation analysis, we have the following observations: [00135] · Trickle achieves higher throughput than the standard IEEE 802.11e for FUN-1, FUN-2, a BATS code, a fountain code, RLNC, COPE, and TCP under the same network conditions.
[00136] · FUN-1 and FUN-2 achieve similarly rather low throughput in a congested environment. However, Trickle helps them achieve more than 40% throughput gain in such an environment.
[00137] · Under near-optimal (light traffic) situations for BATS, Fountain and
RLNC, the throughput gain percentage from Trickle increases with the number of hops. This is because a smaller number of hops makes it easier to grab the channel than a larger number of hops, with a lower probability of causing congestion. Our Trickle is thus best suited for congested environments.
[00138] · The fountain code generally achieves a better throughput gain than the BATS code, although the difference is marginal.
[00139] · COPE and TCP time out and cannot receive all K packets due to the high packet loss rate. Hence, the Trickle algorithm does not help in these two cases either.
[00140] · For all the situations in Case 2, RLNC achieves a lower throughput than any other coding scheme. This is because a coded packet in RLNC is restricted to one batch of M native packets.
[00141] · For all the cases listed above, the Trickle algorithm is able to bring notable throughput improvement. The more congested the network condition becomes, the greater the percentage of throughput gain Trickle is able to achieve.
[00142] Table 5: Throughput under Case 2 for three hops
[00143] Table 6: Throughput under Case 2 for five hops
[00144] 6.2.3 Case 3: Two Hops with Fixed Source/Destination Nodes and a
Moving Relay Node
[00145] The setup of this set of simulative experiments is the following. There are three nodes in the network: a fixed source node, a fixed destination node, and one moving relay node. The distance between the source node and the destination node is 1200 meters; the transmission range of each node is 700 meters. Hence, the source node cannot directly communicate with the destination node; a relay node is needed. The relay node moves back and forth along a center line, which is perpendicular to the straight line that links the source node and the destination node. When the relay node moves into the transmission range of the source node, it can pick up the packets transmitted by the source node and relay the packets to the destination node. Otherwise, it cannot receive packets transmitted by the source node; in this case, all the packets transmitted by the source node will be lost. Since the relay node moves around, the network topology is dynamic.
[00146] In this set of simulative experiments, the number of native packets to be transmitted by the source is 1600 packets, i.e., K = 1600. Table 7 shows the throughput of seven schemes under Case 3. Based on results from the simulative experiments, we have the following observations:
[00147] · Trickle achieves higher throughput than IEEE 802.11e for FUN-1,
FUN-2, a BATS code, a fountain code, RLNC, COPE, and TCP.
[00148] · FUN-1 and FUN-2 achieve the highest throughput among the seven schemes. In our Trickle version, they gain an extra amount of throughput increase thanks to the dynamic packet scheduling mechanism.
[00149] · The fountain code achieves a slightly higher throughput than the BATS code, which tends to suffer more than the fountain code due to its dependency on a whole batch for decoding. However, the Trickle algorithm helps both the fountain code and the BATS code achieve around 1% throughput gain. This back-pressure-based scheme will ultimately trigger the rate control mechanism in the source node to lower the packet sending rate. This may reduce the number of packet losses due to connection breaks. Thus the throughput may increase.
[00150] · RLNC achieves a lower throughput than the other coding schemes; this is because coding in RLNC is restricted to a batch of M native packets. COPE achieves an even lower throughput than RLNC; this is because RLNC uses erasure channel coding, which is more robust against packet loss when the relay node moves out of the transmission range. In these two cases, we can see that the Trickle scheme does play a role in increasing their throughput.
[00151] · TCP achieves the least throughput. This is because all of the other six schemes have coding gain while TCP does not. However, even in such a situation, Trickle does help TCP achieve about 7% throughput gain due to better efficiency in channel utilization.
[00152] Table 7: Throughput under Case 3
[00153] 6.2.4 Case 4: A Large Number of Nodes with Node Mobility
[00154] The setup of this set of simulative experiments is the following. There are 400 nodes in the network. All the nodes move under the random waypoint mobility model, with node speeds randomly ranging from 5 meters/s to 10 meters/s, in an area of 3000 meters by 3000 meters; i.e., each node selects a random position and moves towards it. Due to node mobility, the communication routes change over time. Hence, the network topology is dynamic.
[00155] We measure the throughput between a specific pair of source/destination nodes. This pair of source/destination nodes do not move and are not in the transmission range of each other. Hence, they need moving relay nodes to forward packets to the destination. To make the experiments more realistic, we also generate some background traffic. The background traffic is generated in the following manner: we randomly select 100 pairs of nodes out of the 400 nodes and generate a Constant Bit Rate (CBR) flow between each pair of nodes. Each CBR flow lasts for 30 seconds with a random start time. Since the data rate needs to be constant for CBR, the source generates a packet every Tc seconds (Tc ∈ (0, 1]); the packet size is 1024 bytes. For example, for Tc = 1 second, the data rate is 1024 bytes/s. The number of hops from the source node to the destination node is random, depending on the positions of all the nodes. Since all the nodes are mobile, the network topology is dynamic.
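The background-traffic procedure above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the simulator used in the experiments; the function name and the overall simulation length (`sim_end`) are assumptions introduced here.

```python
import random

def make_background_traffic(num_nodes=400, num_pairs=100, duration=30.0,
                            sim_end=300.0, pkt_bytes=1024, seed=1):
    """Sketch of the Case 4 background-traffic generator: pick random node
    pairs and give each a 30-second CBR flow with a random start time and a
    random packet interval Tc in (0, 1].  sim_end is an assumed value."""
    rng = random.Random(seed)
    nodes = list(range(num_nodes))
    flows = []
    for _ in range(num_pairs):
        src, dst = rng.sample(nodes, 2)           # two distinct endpoints
        tc = rng.uniform(1e-3, 1.0)               # packet interval Tc (s)
        start = rng.uniform(0.0, sim_end - duration)
        flows.append({
            "src": src, "dst": dst,
            "start": start, "end": start + duration,
            "rate_Bps": pkt_bytes / tc,           # e.g. Tc = 1 s -> 1024 B/s
        })
    return flows

flows = make_background_traffic()
print(len(flows), "CBR flows; first flow:", flows[0])
```

Each generated flow carries the constant rate 1024/Tc bytes per second for its 30-second lifetime, matching the description in the text.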
[00156] In this set of simulative experiments, the number of native packets to be transmitted by the source under study is 1600 packets, i.e., K = 1600. Table 8 shows the throughput of seven schemes under Case 4. Based on results from the simulative experiments, we have the following observations:
[00157] · Trickle still achieves slightly higher throughput than IEEE 802.11b with the standard MAC for FUN-1, FUN-2, a BATS code, a fountain code, RLNC, COPE, and TCP, even in such a complex scenario.
[00158] · FUN-2 achieves the highest throughput and FUN-1 achieves the second highest throughput. This is because the number of hops in Case 4 is usually small (mostly two hops) and hence FUN-2 performs better than FUN-1 as in two-hop cases. Trickle helps achieve extra throughput gain.
[00159] · COPE achieves the third highest throughput. Its throughput is higher than that of the BATS code, the fountain code, and RLNC; this is because COPE combines two flows and sends packets more efficiently, while the other schemes are more prone to packet loss due to the constant change of relay nodes. Trickle exhibits slightly better performance in such a scenario.

[00160] · The fountain code achieves a higher throughput than the BATS code. RLNC achieves a lower throughput than the BATS code. The reason is the same as in Case 3: the BATS code is less robust against moving relay nodes, compared to the fountain code. Our proposed Trickle can also help in these cases.
[00161] · TCP achieves the least throughput due to the movement of the relay node, which causes intermittent connection breaks that TCP is sensitive to. The Trickle scheme may help alleviate this problem and achieve better throughput.
[00162] Table 8: Throughput under Case 4
[00163] In summary, all of the experimental results described herein demonstrate that, in some embodiments, Trickle achieves higher throughput than IEEE 802.11b for FUN-1, FUN-2, the BATS code, the best fountain code (the RQ code), RLNC, COPE, and TCP in multihop wireless networks.
[00164] 7 Conclusion
[00165] To address the lack of congestion control capability in fountain codes and network coding, Trickle, a novel back-pressure-based MAC scheduling algorithm, is disclosed; in some embodiments it maintains TRI queues at each node, which significantly reduces the number of queues and is also scalable to large networks.
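The per-node queue organization summarized above can be illustrated with a minimal data-structure sketch: each node keeps one FIFO per (prior hop, next hop) pair, so the number of queues grows with the number of neighbor pairs rather than with the number of end-to-end flows. The class below is an illustrative sketch under that reading, not the disclosed implementation.

```python
from collections import defaultdict, deque

class TriQueues:
    """Illustrative per-node queues indexed by (prior hop, next hop).
    Names and structure here are a sketch, not the patented code."""

    def __init__(self):
        # One FIFO per (prev_hop, next_hop) pair, created on first use.
        self.q = defaultdict(deque)

    def enqueue(self, prev_hop, next_hop, packet):
        # Store a packet received from prev_hop for relaying to next_hop.
        self.q[(prev_hop, next_hop)].append(packet)

    def backlog(self, prev_hop, next_hop):
        # Backlog counts like these are what neighboring nodes exchange.
        return len(self.q[(prev_hop, next_hop)])

    def dequeue(self, prev_hop, next_hop):
        return self.q[(prev_hop, next_hop)].popleft()

node = TriQueues()
node.enqueue("A", "C", "pkt-1")   # received from A, to be relayed to C
node.enqueue("A", "C", "pkt-2")
print(node.backlog("A", "C"))     # prints 2
```

The point of this indexing is scalability: with N neighbors a node needs on the order of N² queues regardless of how many flows traverse it, whereas per-flow queueing grows with the flow count.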
[00166] Appendix

[00167] In this section, we provide the proof for Theorem 1 proposed in section 5.
[00168] Proof. Following the clues provided by the Lyapunov drift method in section 5.1, we define the PDPL shadow queues. The PDPL shadow queues are counters that keep track of the PD traffic for each flow in the TRI queues. Let $V_{ij}^f(t)$ denote the queue backlog for flow $f$ in $q_{ij}$ at time $t$. Then the PDPL shadow queue length can be updated according to

$$V_{ij}^f(t+1) = \big[V_{ij}^f(t) - \mu_{ij}^{f,\mathrm{out}}(t)\big]^+ + a_{ij}^f(t) + \mu_{ij}^{f,\mathrm{in}}(t), \qquad (22)$$

where $\mu_{ij}^{f,\mathrm{out}}(t)$, $\mu_{ij}^{f,\mathrm{in}}(t)$ and $a_{ij}^f(t)$ are defined in (5.2).
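The update in (22) is the usual serve-then-arrive queueing recursion: service is removed first (clipped at zero), then exogenous and endogenous arrivals are added. The toy sketch below illustrates only that update form; the rates are arbitrary constants rather than the scheduled values of the algorithm.

```python
def shadow_queue_step(v, mu_out, a, mu_in):
    """One slot of the shadow-queue recursion of the form in (22):
    serve up to mu_out, then add exogenous arrivals a and endogenous
    arrivals mu_in.  Illustrative sketch of the update form only."""
    return max(v - mu_out, 0) + a + mu_in

v = 0
for t in range(5):
    v = shadow_queue_step(v, mu_out=3, a=1, mu_in=1)
print(v)  # prints 2: backlog stays bounded since service exceeds arrivals
```

With service 3 and total arrivals 2 per slot, the backlog settles at the per-slot arrival amount, a small preview of the stability argument that follows.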
[00169] According to Lemma 1, we know that the PDPL queueing network and the PL network have the same capacity region. In addition, by considering the PL queueing dynamics from (9) and comparing the PDPL shadow queue lengths and the PL queue lengths in the PL queueing network, we can easily get

$$U_{ij}(t) = \sum_{f \in \mathcal{F}_{ij}} V_{ij}^f(t).$$

[00170] Thus, evaluating the stability of the Trickle algorithm is equivalent to evaluating the stability of the PDPL queueing network.
[00171] The one-step Lyapunov drift of the PDPL shadow queues can be written as

$$\big(V_{ij}^f(t+1)\big)^2 - \big(V_{ij}^f(t)\big)^2 \le \big(\mu_{\max}^{\mathrm{out}}\big)^2 + \big(\mu_{\max}^{\mathrm{in}} + a_{\max}\big)^2 - 2 V_{ij}^f(t)\big(\mu_{ij}^{f,\mathrm{out}}(t) - \mu_{ij}^{f,\mathrm{in}}(t) - a_{ij}^f(t)\big). \qquad (23)$$
[00172] Next, we sum (23) over all shadow queues in the whole network and obtain

$$\sum_{f,i,j}\big(V_{ij}^f(t+1)\big)^2 - \sum_{f,i,j}\big(V_{ij}^f(t)\big)^2 \le B - 2\sum_{f,i,j} V_{ij}^f(t)\big(\mu_{ij}^{f,\mathrm{out}}(t) - \mu_{ij}^{f,\mathrm{in}}(t) - a_{ij}^f(t)\big), \qquad (24)$$

where

$$B = |\mathcal{L}|\,|\mathcal{F}|\,\Big(\big(\mu_{\max}^{\mathrm{out}}\big)^2 + \big(\mu_{\max}^{\mathrm{in}} + a_{\max}\big)^2\Big) \qquad (25)$$
is a constant.

[00173] Taking the conditional expectation of (24) w.r.t. $V(t) = [V_{ij}^f(t)]$, we have

$$\mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t+1)\big)^2\,\Big|\,V(t)\Big) - \mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t)\big)^2\,\Big|\,V(t)\Big) \le B - 2\sum_{f,i,j} V_{ij}^f(t)\,\mathbb{E}\big(\mu_{ij}^{f,\mathrm{out}}(t) - \mu_{ij}^{f,\mathrm{in}}(t) - a_{ij}^f(t)\,\big|\,V(t)\big). \qquad (26)$$
[00174] Define the routing scheme at time $t$ associated with each flow on $l_{ij}$ by $\{\beta_{jk}^f(t)\}$, where $\beta_{jk}^f(t) \in [0,1]$ and

$$\sum_k \beta_{jk}^f(t) = 1, \quad (d(f) \neq j). \qquad (27)$$

[00175] Then we can rewrite the endogenous input rate of each flow in terms of the routing variables and the output rates as

$$\mu_{jk}^{f,\mathrm{in}}(t) = \beta_{jk}^f(t)\sum_i \mu_{ij}^{f,\mathrm{out}}(t). \qquad (28)$$
[00176] Substituting (28) into (26), we have

$$\mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t+1)\big)^2\,\Big|\,V(t)\Big) - \mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t)\big)^2\,\Big|\,V(t)\Big) \le B + 2\sum_{f,i,j} V_{ij}^f(t)\,\mathbb{E}\big(a_{ij}^f(t)\,\big|\,V(t)\big) - 2\sum_{f,i,j}\mathbb{E}\Big(\mu_{ij}^{f,\mathrm{out}}(t)\Big(V_{ij}^f(t) - \sum_k \beta_{jk}^f(t)\,V_{jk}^f(t)\Big)\,\Big|\,V(t)\Big). \qquad (29)$$
[00177] Now define the maximum queue difference for each flow $f$ and link $l_{ij}$ as

$$\Delta V_{ij,k}^f(t) = V_{ij}^f(t) - V_{jk}^f(t),$$

$$\Delta V_{ij}^{f,\max}(t) = \max_k \Delta V_{ij,k}^f(t) = V_{ij}^f(t) - \min_k V_{jk}^f(t).$$

[00178] Define

$$J_{\Delta V}(t) = 2\,\mathbb{E}\Big(\sum_{f,i,j} V_{ij}^f(t)\big(\Delta V_{ij}^{f,\max}(t) + \Delta V_{ij,k}^f(t)\big)\,\Big|\,V(t)\Big),$$

and

$$\bar{J}_{\Delta V}(t) = \mathbb{E}\big(J_{\Delta V}(t)\big).$$
[00179] Next, we add $J_{\Delta V}(t)$ to both sides of (26), and have

$$\mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t+1)\big)^2\,\Big|\,V(t)\Big) - \mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t)\big)^2\,\Big|\,V(t)\Big) + J_{\Delta V}(t) \le B + J_{\Delta V}(t) - 2\sum_{f,i,j} V_{ij}^f(t)\,\mathbb{E}\big(\mu_{ij}^{f,\mathrm{out}}(t) - \mu_{ij}^{f,\mathrm{in}}(t) - a_{ij}^f(t)\,\big|\,V(t)\big). \qquad (30)$$
[00180] Denote the R.H.S. of (30) as $\Theta$, which can be rewritten as

$$\Theta = B + 2\sum_{f,i,j} V_{ij}^f(t)\,\mathbb{E}\big(a_{ij}^f(t)\,\big|\,V(t)\big) - 2\sum_{f,i,j}\mathbb{E}\Big(\mu_{ij}^{f,\mathrm{out}}(t)\Big(V_{ij}^f(t) - \sum_k \beta_{jk}^f(t)\,V_{jk}^f(t)\Big)\,\Big|\,V(t)\Big) + 2\sum_{f,i,j}\mathbb{E}\Big(V_{ij}^f(t)\big(\Delta V_{ij}^{f,\max}(t) + \Delta V_{ij,k}^f(t)\big)\,\Big|\,V(t)\Big). \qquad (31)$$

[00181] The last two terms of (31) can be simplified as

$$2\sum_{i,j}\mathbb{E}\Big(\Big(\sum_f \mu_{ij}^{f,\mathrm{out}}(t)\Big)\Delta V_{ij}^{f,\max}(t)\Big) - \sum_{f,i,j}\mathbb{E}\Big(V_{ij}^f(t)\Big(\mu_{ij}^{f,\mathrm{out}}(t) - \sum_k \beta_{jk}^f(t)\,\mu_{jk}(t) - \Delta V_{ij,k}^f(t)\Big)\,\Big|\,V(t)\Big). \qquad (32)$$
[00182] The Trickle algorithm essentially minimizes the R.H.S. of (30) over all possible scheduling algorithms.

[00183] Since $[\lambda_{ij}^f]$ lies in the interior of the PDPL network capacity region $\Lambda_{\mathrm{pdpl}}$, it immediately follows that there exists a small positive constant $\varepsilon > 0$ such that

$$[\lambda_{ij}^f] + \varepsilon \in \Lambda_{\mathrm{pdpl}}.$$
[00184] We can obtain results similar to Corollary 3.9 in [24] under PDPL queueing: there exists a randomized scheduling policy, denoted by RA, that stabilizes the PDPL network while providing a data rate of

$$\mathbb{E}\big(\mu_{ij}^{f,\mathrm{out}}(t) - \mu_{ij}^{f,\mathrm{in}}(t)\,\big|\,V(t)\big) = \lambda_{ij}^f + \varepsilon,$$

where $\mu_{ij}^{f,\mathrm{out}}(t)$ and $\mu_{ij}^{f,\mathrm{in}}(t)$ are the link data rates induced by the RA algorithm. Thus, we have

$$\Theta_{\mathrm{RA}} = B - 2\varepsilon\sum_{f,i,j} V_{ij}^f(t) + \sum_{f,i,j}\mathbb{E}\Big(V_{ij}^f(t)\big(\Delta V_{ij}^{f,\max}(t) + \Delta V_{ij,k}^f(t)\big)\,\Big|\,V(t)\Big), \qquad (33)$$

where the last term in (33) is the queue difference under the RA algorithm during slot $t$. Therefore, the PDPL queues are bounded, and so are the PDPL queue differences $\Delta V_{ij}^{f,\max}(t)$ and $\Delta V_{ij,k}^f(t)$. Thus, the last term of (33) is also bounded; we denote the bound by $J_{\max}$. Then we have

$$\Theta_{\mathrm{RA}} \le B - 2\varepsilon\sum_{f,i,j} V_{ij}^f(t) + J_{\max}.$$
[00185] In light of (30), it is obtained that

$$\mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t+1)\big)^2\,\Big|\,V(t)\Big) - \mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t)\big)^2\,\Big|\,V(t)\Big) + J_{\Delta V}(t) \le \Theta_{\mathrm{DRPC\text{-}PL}} \le \Theta_{\mathrm{RA}} \le B - 2\varepsilon\sum_{f,i,j} V_{ij}^f(t) + J_{\max}. \qquad (34)$$
[00186] Taking the expectation w.r.t. $V(t)$ of (34), we have

$$\mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t+1)\big)^2\Big) - \mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(t)\big)^2\Big) \le B - 2\varepsilon\sum_{f,i,j}\mathbb{E}\big(V_{ij}^f(t)\big) + J_{\max}. \qquad (35)$$
[00187] Summing (35) over time slots $0$ to $K-1$ yields

$$\mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(K)\big)^2\Big) - \mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(0)\big)^2\Big) \le K(B + J_{\max}) - 2\varepsilon\sum_{t=0}^{K-1}\sum_{f,i,j}\mathbb{E}\big(V_{ij}^f(t)\big). \qquad (36)$$
[00188] Then, we divide (36) by $K$, rearrange the result, and have

$$\frac{1}{K}\sum_{t=0}^{K-1}\sum_{f,i,j}\mathbb{E}\big(V_{ij}^f(t)\big) \le \frac{B + J_{\max}}{2\varepsilon} - \frac{1}{2\varepsilon K}\,\mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(K)\big)^2\Big) + \frac{1}{2\varepsilon K}\,\mathbb{E}\Big(\sum_{f,i,j}\big(V_{ij}^f(0)\big)^2\Big). \qquad (37)$$
[00189] Note that the second-to-last term of (37) is non-positive and the last term vanishes as $K \to \infty$. By taking $\limsup_{K\to\infty}$ on both sides of (37), we obtain

$$\limsup_{K\to\infty}\frac{1}{K}\sum_{t=0}^{K-1}\sum_{f,i,j}\mathbb{E}\big(V_{ij}^f(t)\big) \le \frac{B + J_{\max}}{2\varepsilon} < \infty. \qquad (38)$$
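The boundedness of the time-averaged expected backlog derived above can be illustrated with a toy single-queue simulation. This is not part of the proof; all parameters are arbitrary, chosen only so that the mean arrival rate lies strictly inside the service capacity.

```python
import random

# Toy illustration: with service rate 3 and mean arrival rate 2 per slot
# (arrivals Binomial(4, 0.5)), the time-averaged backlog stays bounded,
# which is the kind of stability the lim sup bound expresses.
random.seed(0)

MU = 3          # packets served per slot
K = 20000       # number of slots simulated

v = 0
total = 0
for t in range(K):
    total += v
    arrivals = sum(1 for _ in range(4) if random.random() < 0.5)  # mean 2
    v = max(v - MU, 0) + arrivals

avg_backlog = total / K
print(f"time-averaged backlog over {K} slots: {avg_backlog:.2f}")
```

Because arrivals occasionally exceed the service rate, the backlog fluctuates, but its long-run time average converges to a small constant rather than growing with K.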
[00190] Since the PL queue lengths are the sum of a finite number of the PDPL shadow queue lengths, we also have

$$\limsup_{K\to\infty}\frac{1}{K}\sum_{t=0}^{K-1}\sum_{i,j}\mathbb{E}\big(U_{ij}(t)\big) < \infty, \qquad (39)$$

which implies the stability of the PDPL queues, and hence of Trickle, within the capacity region $\Lambda_{\mathrm{tri}}$.
[00191] Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art.
[00192] For example, although control of unicast flows is discussed, the Trickle scheme may also be extended to multicast situations. Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Further, though advantages of the present invention are indicated, it should be appreciated that not every embodiment of the invention will include every described advantage; some embodiments may not implement any features described as advantageous herein. Accordingly, the foregoing description and drawings are by way of example only.
[00193] The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, embodiments for the Trickle method may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component, including commercially available integrated circuit components known in the art by names such as CPU chips, GPU chips, microprocessor, microcontroller, or co-processor. Alternatively, a processor may be implemented in custom circuitry, such as an ASIC, or semicustom circuitry resulting from configuring a programmable logic device. As yet a further alternative, a processor may be a portion of a larger circuit or semiconductor device, whether commercially available, semi-custom or custom. As a specific example, some commercially available microprocessors have multiple cores such that one or a subset of those cores may constitute a processor. Though, a processor may be implemented using circuitry in any suitable format. The embodiments may be implemented in connection with a network interface, such as with a wireless network interface. Other embodiments of Trickle method according to the present invention can include a portable computing device such as a smartphone or a base station of a cellular network or any device that can act as a relay node of a wireless network.
[00194] Depending on the nature of the computing device, one or more additional elements may be present. For example, a smart phone or other portable electronic device may include a camera, capable of capturing still or video images. In some embodiments, a computing device may include sensors such as a global positioning system (GPS) to sense location and inertial sensors such as a compass, an inclinometer and/or an accelerometer. The operating system may include utilities to control these devices, to capture data from them, and to make the data available to applications executing on the computing device.
[00195] As another example, in some embodiments, a computing device may include a network interface to implement a personal area network. Such an interface may operate in accordance with any suitable technology, including a Bluetooth, Zigbee or an 802.11 ad hoc mode, for example.
[00196] Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
[00197] Also, a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets.
As another example, a computer may receive input information through speech recognition or in other audible format. In the embodiment illustrated, the input/output devices are illustrated as physically separate from the computing device. In some embodiments, however, the input and/or output devices may be physically integrated into the same unit as the processor or other elements of the computing device. For example, a keyboard might be implemented as a soft keyboard on a touch screen. Alternatively, the input/output devices may be entirely disconnected from the computing device, and functionally integrated through a wireless connection.
[00198] Such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
[00199] Also, the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
[00200] In this respect, the invention may be embodied as a computer readable storage medium (or multiple computer readable media) (e.g., a computer memory, one or more floppy discs, compact discs (CD), optical discs, digital video disks (DVD), magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the invention discussed above. As is apparent from the foregoing examples, a computer readable storage medium may retain information for a sufficient time to provide computer-executable instructions in a non-transitory form. Such a computer readable storage medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention as discussed above. As used herein, the term "computer-readable storage medium" encompasses only a computer-readable medium that can be considered to be a manufacture (i.e., article of manufacture) or a machine. Alternatively or additionally, the invention may be embodied as a computer readable medium other than a computer-readable storage medium, such as a propagating signal.

[00201] The terms "code", "program" or "software" are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above.
Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention.
[00202] Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
[00203] Also, data structures may be stored in computer-readable media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that conveys relationship between the fields. However, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationship between data elements.
[00204] Various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.
[00205] Also, the invention may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
[00206] The indefinite articles "a" and "an," as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean "at least one."
[00207] The phrase "and/or," as used herein in the specification and in the claims, should be understood to mean "either or both" of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with "and/or" should be construed in the same fashion, i.e., "one or more" of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the "and/or" clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to "A and/or B", when used in conjunction with open-ended language such as "comprising" can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
[00208] As used herein in the specification and in the claims, the phrase "at least one," in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase "at least one" refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, "at least one of A and B" (or, equivalently, "at least one of A or B," or, equivalently "at least one of A and/or B") can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
[00209] Use of ordinal terms such as "first," "second," "third," etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed, but are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term) to distinguish the claim elements.
[00210] Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," or "having," "containing," "involving," and variations thereof herein, is meant to encompass the items listed thereafter and equivalents thereof as well as additional items.
LIST OF REFERENCES
[00211] The following references are hereby incorporated by reference in their entireties:
[00212] [1] T. S. Rappaport, Wireless communications: principles and practice. Prentice-Hall: Upper Saddle River, NJ, 1996.
[00213] [2] M. Kim, J. Cloud, A. ParandehGheibi, L. Urbina, K. Fouli, D. Leith, and M. Medard, "Network coded TCP (CTCP)," arXiv preprint arXiv:1212.2291, 2012.
[00214] [3] M. Neely, E. Modiano, and C. Rohrs, "Dynamic power allocation and routing for time varying wireless networks," in IEEE INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications Societies, vol. 1.
[00215] [4] X. Lin and N. Shroff, "Joint rate control and scheduling in multihop wireless networks," in 43rd IEEE Conference on Decision and Control, 2004. CDC, vol. 2, 2004.
[00216] [5] M. Chiang, "Balancing transport and physical layers in wireless multihop networks: Jointly optimal congestion control and power control," IEEE Journal on Selected Areas in Communications, vol. 23, no. 1, pp. 104-116, 2005.

[00217] [6] S. Shakkottai and R. Srikant, Network optimization and control. Now Publishers, 2008.
[00218] [7] T. Ho and H. Viswanathan, "Dynamic algorithms for multicast with intra-session network coding," IEEE Transactions on Information Theory, vol. 55, no. 2, pp. 797-815, 2009.
[00219] [8] Z. Jiao, Z. Yao, B. Zhang, and C. Li, "NBP: An efficient network-coding based backpressure algorithm," in 2013 IEEE International Conference on Communications (ICC). IEEE, 2013, pp. 1625-1629.
[00220] [9] L. Bui, R. Srikant, and A. Stolyar, "Novel architectures and algorithms for delay reduction in back-pressure scheduling and routing," in IEEE INFOCOM 2009. IEEE, 2009, pp. 2936-2940.
[00221] [10] Z. Ding and D. Wu, "Sliding Mode Based Joint Congestion Control and Scheduling in Multi-hop Ad Hoc Networks with Multi-class Services," in IEEE Globecom, 2010.
[00222] [11] B. Ji, C. Joo, and N. B. Shroff, "Scheduling with per-link queues and no per-flow information in multi-hop wireless networks," in 2011 International Symposium on Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt). IEEE, 2011, pp. 25-32.
[00223] [12] Z. Ding, Y. Song, D. Wu, and Y. Fang, "Capacity region and dynamic control of wireless networks under per-link queueing," Wireless Communications and Mobile Computing, 2013.
[00224] [13] S. Katti, H. Rahul, W. Hu, D. Katabi, M. Médard, and J. Crowcroft, "XORs in the air: practical wireless network coding," in ACM SIGCOMM Computer Communication Review, vol. 36, no. 4, 2006, pp. 243-254.
[00225] [14] M. Chiang, S. Low, A. Calderbank, and J. Doyle, "Layering as optimization decomposition: A mathematical theory of network architectures," PROCEEDINGS-IEEE, vol. 95, no. 1, p. 255, 2007.
[00226] [15] Q. Huang, K. Sun, X. Li, and D. Wu, "Just FUN: A joint fountain coding and network coding approach to loss-tolerant information spreading," in Proceedings of ACM MobiHoc, 2014.

[00227] [16] M. Neely, "Stability and capacity regions for discrete time queueing networks," arXiv preprint arXiv:1003.3396, 2010.
[00228] [17] A. Warrier, S. Janakiraman, S. Ha, and I. Rhee, "Diffq: Practical differential backlog congestion control for wireless networks," in IEEE INFOCOM 2009. IEEE, 2009, pp. 262-270.
[00229] [18] M. Neely, "Dynamic power allocation and routing for satellite and wireless networks with time varying channels," Ph.D. dissertation, Massachusetts Institute of Technology, 2003.
[00230] [19] S. Yang and R. W. Yeung, "Batched sparse codes," submitted to IEEE Transactions on Information Theory.
[00231] [20] A. Shokrollahi and M. Luby, "Raptor codes," Foundations and Trends in Communications and Information Theory, vol. 6, no. 3-4, pp. 213-322, 2011. [Online]. Available: http://dx.doi.org/10.1561/0100000060
[00232] [21] M. Wang and B. Li, "Lava: A reality check of network coding in peer-to-peer live streaming," in IEEE INFOCOM 2007, pp. 1082-1090.
[00233] [22] "QualNet web site," http://web.scalable-networks.com/content/qualnet.
[00234] [23] G. Holland and N. Vaidya, "Analysis of TCP performance over mobile ad hoc networks," Wireless Networks, vol. 8, no. 2/3, pp. 275-288, 2002.
[00235] [24] L. Georgiadis, M. Neely, and L. Tassiulas, Resource allocation and cross-layer control in wireless networks. Now Publishers, 2006.

Claims

What is claimed is:
1. A method of operating a communication system comprising a plurality of nodes, the method comprising, at each of the plurality of nodes: tracking packets queued for transmission, including: numbers of packets received from respective others of the plurality of nodes; and numbers of packets for transmission to respective others of the plurality of nodes; exchanging, with others of the plurality of nodes, the tracked numbers of packets received from and for transmission to respective others of the plurality of nodes; selecting a packet for transmission to an other node based at least in part on the tracked numbers of packets; and setting a transmit priority for the selected packet based at least in part on the number of packets for transmission to the other node and the number of packets tracked on the other node that were received from the node and are queued for transmission.
2. The method of claim 1, wherein tracking numbers of packets received from others of the plurality of nodes and numbers of packets for transmission to respective others of the plurality of nodes comprises: maintaining a plurality of queues indexed by a prior hop node and a next hop node; and storing packets received from a first node for retransmission to a second node in a queue indexed with the first node as the prior hop node and the second node as the next hop node.
3. The method of claim 2, wherein exchanging tracked numbers of packets received and tracked numbers of packets queued comprises: transmitting the selected packet with a header containing the tracked numbers of packets received and tracked numbers of packets queued.
4. The method of claim 3, wherein exchanging tracked numbers of packets received and tracked numbers of packets queued comprises: extracting information from packet headers of packets transmitted by neighboring nodes.
5. The method of claim 1, wherein selecting a packet for transmission based at least in part on the tracked numbers of packets queued comprises: making a weighted selection from among packets for transmission to other nodes, the weighting based on the numbers of packets queued for transmission to respective others of the plurality of nodes.
6. The method of claim 1, wherein setting a transmit priority comprises: setting MAC parameters of transmission of the selected packet.
7. The method of claim 6, wherein: the network operates according to a TDMA protocol with a plurality of time slots; and the acts of selecting a packet for transmission and setting a transmit priority for the selected packet are repeated in each of the plurality of time slots.
8. The method of claim 6, wherein setting MAC parameters of transmission of the selected packet comprises: adjusting a time of a contention window for the packet within a time slot, with higher priority packets being assigned contention windows earlier in the time slot.
9. A method of operating a node in a communication system comprising a plurality of nodes forming a network in which packets are communicated by transmissions between nodes, the method comprising, at the node: upon receipt of a packet from a first node for retransmission to a second node, storing the packet in a queue of a plurality of queues, the queue selected from the plurality of queues based on the identity of the first node and the identity of the second node; receiving information about a number of packets from the node queued on the second node for retransmission to at least one other node; selectively transmitting a packet from the queue based at least in part on a relative number of packets queued on the node for transmission to the second node and the number of packets from the node queued on the second node for retransmission to at least one other node of the plurality of nodes.
10. The method of claim 9, wherein selectively transmitting the packet from the queue comprises: assigning a priority to the queue based at least in part on the relative number of packets; and selecting the queue based at least in part on the priority assigned to the queue relative to respective priorities assigned to other queues of the plurality of queues.
11. The method of claim 10, wherein selectively transmitting the packet from the queue further comprises: setting transmit parameters of the packet in accordance with the priority assigned to the queue.
12. The method of claim 11, wherein: setting the transmit parameters comprises setting a transmit start time within a time slot and a backoff time.
13. The method of claim 10, wherein: assigning the priority to the queue is further based in part on the transmission rate between the node and the second node.
14. The method of claim 9, wherein selectively transmitting the packet comprises: transmitting the packet with a header indicating a number of packets queued on the node for transmission to the second node.
15. The method of claim 14, wherein: the header comprises an identifier of the first node, an identifier of the second node and a length of the queue selected based on the identity of the first node and the identity of the second node.
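For illustration, the header of claim 15 could be laid out as three fixed-width fields; the field widths and byte order below are assumptions, not part of the claim.

```python
import struct

# Hypothetical layout: first-node id, second-node id, and the length of the
# queue selected for that (first, second) pair, each an unsigned 16-bit
# field in network byte order.
HEADER_FMT = "!HHH"

def pack_header(first_node_id, second_node_id, queue_length):
    return struct.pack(HEADER_FMT, first_node_id, second_node_id, queue_length)

def unpack_header(data):
    size = struct.calcsize(HEADER_FMT)
    return struct.unpack(HEADER_FMT, data[:size])
```

A receiver parsing this header learns, per claim 14, how many packets the sender has queued for it without any separate control message.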
16. A method of operating a network interface of a node in a communication network to implement prioritized transmission, wherein the communication network operates in accordance with time slots, the method comprising, for a time slot: determining queue backlog information for the node; exchanging queue backlog information with neighboring nodes in the network; assigning a MAC access category to a packet for transmission from the node based at least in part on relative backlog of the node and neighboring nodes; and providing the packet to MAC layer components for transmission in accordance with the assigned access category.
17. The method of claim 16, wherein: the MAC layer components are configured to implement, for each of a plurality of access categories, a respective contention window; the respective contention windows are disjoint across the plurality of access categories; and the access category is selected from the plurality of access categories.
18. The method of claim 17, wherein: the plurality of access categories are ordered from lower to higher; and assigning a MAC access category to the packet for transmission from the node based at least in part on relative backlog of the node and neighboring nodes comprises assigning a higher access category for a higher backlog on the node relative to the backlog on the neighboring nodes.
19. The method of claim 18, wherein: the network operates according to a TDMA protocol with a plurality of time slots; within each time slot, the MAC layer components are configured to initiate transmission of a packet during a contention window whose start, relative to the start of the time slot, is set based on the access category of the packet; and contention windows of higher access categories start earlier in the time slot than contention windows of lower access categories.
20. The method of claim 19, wherein: exchanging queue backlog information comprises: transmitting the packet with queue backlog information about the node in a packet header; and obtaining queue backlog information about the neighboring nodes from headers of packets transmitted by the neighboring nodes.
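Claims 16–19 map the relative backlog to a MAC access category whose disjoint contention window starts earlier in the time slot for higher categories. The sketch below assumes hypothetical backlog thresholds and window boundaries (in microseconds); neither appears in the claims.

```python
import random

# Disjoint contention windows inside one time slot (claims 17 and 19):
# higher access categories own earlier windows.
CONTENTION_WINDOWS = {
    3: (0, 100),     # highest relative backlog -> earliest window
    2: (100, 200),
    1: (200, 300),
    0: (300, 400),   # lowest priority -> latest window
}

def access_category(own_backlog, max_neighbor_backlog):
    # higher category for a higher backlog relative to neighbors (claim 18);
    # the thresholds 20 and 10 are illustrative assumptions
    diff = own_backlog - max_neighbor_backlog
    if diff > 20:
        return 3
    if diff > 10:
        return 2
    if diff > 0:
        return 1
    return 0

def transmit_start_time(category, rng=random):
    # pick a random start within the category's contention window
    start, end = CONTENTION_WINDOWS[category]
    return rng.uniform(start, end)
```

Because the windows are disjoint and ordered, a node whose queues are more backlogged than its neighbors' always begins contending for the channel before less-backlogged nodes in the same slot.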
PCT/US2017/022761 2016-03-17 2017-03-16 Method for congestion relief with network coding WO2017161148A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662309872P 2016-03-17 2016-03-17
US62/309,872 2016-03-17

Publications (1)

Publication Number Publication Date
WO2017161148A1 true WO2017161148A1 (en) 2017-09-21

Family

ID=59851110

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/022761 WO2017161148A1 (en) 2016-03-17 2017-03-16 Method for congestion relief with network coding

Country Status (1)

Country Link
WO (1) WO2017161148A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5404353A (en) * 1991-06-28 1995-04-04 Digital Equipment Corp. Dynamic defer technique for traffic congestion control in a communication network bridge device
US6738368B1 (en) * 2000-05-12 2004-05-18 Interdigital Technology Corporation Prioritization and flow control of a spread spectrum multiuser channel
US20120250494A1 (en) * 2009-11-17 2012-10-04 University Of Maryland Method and system for cooperative transmission in wireless multi-hop networks

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109089271A (en) * 2018-09-17 2018-12-25 中南大学 Hybrid energy powered wireless cooperative network resource random allocation method
CN109089271B (en) * 2018-09-17 2021-12-14 中南大学 Hybrid energy powered wireless cooperative network resource random allocation method
CN110430613A (en) * 2019-04-11 2019-11-08 重庆邮电大学 Energy-efficiency-based resource allocation method for multicarrier non-orthogonal multiple access systems
CN110636451A (en) * 2019-08-21 2019-12-31 深圳市天彦通信股份有限公司 Information management method and related device
CN110636451B (en) * 2019-08-21 2021-06-11 深圳市天彦通信股份有限公司 Information management method and related device
CN111918305A (en) * 2020-07-24 2020-11-10 湖南遥昇通信技术有限公司 Same-frequency self-organizing self-healing network method and device
CN111918305B (en) * 2020-07-24 2023-10-03 湖南遥昇通信技术有限公司 Same-frequency self-organizing self-healing network method and device
CN112954719A (en) * 2021-03-25 2021-06-11 盐城工学院 Traffic matching method for network-coding-aware routing in wireless multi-hop networks
CN112954719B (en) * 2021-03-25 2023-10-13 盐城工学院 Traffic matching method for network coding perception route in wireless multi-hop network

Similar Documents

Publication Publication Date Title
US10530526B2 (en) Joint fountain coding and network coding for loss-tolerant information spreading
WO2017161148A1 (en) Method for congestion relief with network coding
Keller et al. SenseCode: Network coding for reliable sensor networks
Laufer et al. Multirate anypath routing in wireless mesh networks
Koutsonikolas et al. Efficient network-coding-based opportunistic routing through cumulative coded acknowledgments
Laufer et al. Polynomial-time algorithms for multirate anypath routing in wireless multihop networks
Darehshoorzadeh et al. Toward a comprehensive model for performance analysis of opportunistic routing in wireless mesh networks
Costa et al. Robust improvement of the age of information by adaptive packet coding
He et al. Channel aware opportunistic routing in multi-radio multi-channel wireless mesh networks
Van Vu et al. Dynamic coding for TCP transmission reliability in multi-hop wireless networks
Zhu et al. INCOR: inter-flow Network Coding based opportunistic routing in wireless mesh networks
Lee et al. QoS‐aware routing and power control algorithm for multimedia service over multi‐hop mobile ad hoc network
Grilo et al. Routing metrics for cache-based reliable transport in wireless sensor networks
Sheeja et al. Cross layer based congestion control scheme for mobile ad hoc networks
Alic et al. Bearing-opportunistic network coding
Huang et al. Trickle irrigation: congestion relief for communication with network coding
Divyashree et al. Performance Analysis and Enhancement of QoS Parameters for Real-Time Applications in MANETs-Comparative Study
Mahmood et al. Routing load of route calculation and route maintenance in wireless proactive routing protocols
Gopinath et al. An optimized multicast backbone routing for increasing residual energy in MANET
Kafaie et al. Throughput analysis of network coding in multi-hop wireless mesh networks using queueing theory
Chen et al. Multiple Trees with Network Coding for Efficient and Reliable Multicast in MANETs
Gomez et al. Tcp acknowledgement encapsulation in coded multi-hop wireless networks
Jiao et al. An efficient network‐coding based back‐pressure scheduling algorithm for wireless multi‐hop networks
Dinesh et al. Ultimate Video Spreading With Qos over Wireless Network Using Selective Repeat Algorithm
Garrido et al. Providing reliable services over wireless networks using a low overhead random linear coding scheme

Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17767544

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17767544

Country of ref document: EP

Kind code of ref document: A1