WO2016100890A1 - Smooth bandwidth-delay product variation inside wireless networks - Google Patents


Info

Publication number
WO2016100890A1
Authority
WO
WIPO (PCT)
Prior art keywords
bandwidth
delay product
variation
racs
flow
Prior art date
Application number
PCT/US2015/066825
Other languages
French (fr)
Inventor
Ram Lakshmi NARAYANAN
Swaminathan ARUNACHALAM
Original Assignee
Nokia Solutions And Networks Oy
Priority date
Filing date
Publication date
Application filed by Nokia Solutions And Networks Oy filed Critical Nokia Solutions And Networks Oy
Publication of WO2016100890A1 publication Critical patent/WO2016100890A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 47/00: Traffic control in data switching networks
    • H04L 47/10: Flow control; Congestion control
    • H04L 47/22: Traffic shaping
    • H04L 47/30: Flow control; Congestion control in combination with information about buffer occupancy at either end or at transit nodes
    • H04W: WIRELESS COMMUNICATION NETWORKS
    • H04W 28/00: Network traffic management; Network resource management
    • H04W 28/02: Traffic management, e.g. flow control or congestion control
    • H04W 28/0278: Traffic management, e.g. flow control or congestion control using buffer status reports

Definitions

  • Various communication systems may benefit from proper handling of varying conditions. For example, wireless networks may benefit from a mechanism, system, or method for handling bandwidth-delay product variation.
  • Routers and switches connect different sub-networks with varying link speeds. Routers may have built in buffers. These buffers can absorb temporary bandwidth fluctuations and can also avoid packet drops in nodes when data is transported from high speed link to low speed links. Typically buffers are fixed hardware buffers both on ingress and egress port of the nodes. In this discussion the term "nodes" can be used broadly to refer to network elements such as routers, switches, NodeBs, or the like.
  • A conventional myth is that more buffers are good, because buffers can prevent packet drops and may give 100% link utilization.
  • The reality is that excessive unmanaged buffers can cause side-effects.
  • These non-adaptive buffers can fill up quickly and can induce latency, with consequently no throughput gain.
  • The phenomenon of a buffer filling up quickly and forming a standing queue inside the node is called buffer bloat.
  • Bandwidth does not equal speed. Bandwidth is capacity over an interval. Real speed can be measured by the amount of latency or delay, between an action and a response. An emphasis on speed may have been lost over time due to pressure for bandwidth.
  • the presence of large, unmanaged network buffers, primarily across the edge devices of the Internet can lead to buffer bloat. Buffer bloat can be observed in access routers such as Wi-Fi routers at home, or eNodeB RAN networks where fewer flows dominate to fill the buffers. In eNodeB, especially with variable bit rates, shedding load to match the available bandwidth does not presently happen, so loaded conditions can lead to huge delays.
  • long duration sessions can be called elephant sessions or elephant flows and short duration sessions or flows can be called mice flows or mice sessions.
  • elephant and mice are mixed inside the single queue, user experience may be affected, mainly due to excessive buffer fill by the elephant session.
  • a buffer inside the node can act as a queue that gets filled with packets.
  • Nodes implement some form of queuing discipline such as First-In First-Out (FIFO), and the like.
  • FIFO First-In First-Out
  • a buffer is to be configured to store at least the "delay * bandwidth product.”
  • BDP bandwidth-delay product
  • In wireless networks, though bandwidth varies over time and the round-trip time (the "delay" used in bandwidth-delay product) also varies, individual connections can still use the static pre-configured bandwidth-delay product value for the entire session, which may be provided during the TCP handshake by the remote endpoints.
  • Network equipment manufacturers may use a large number of static buffers, thinking that more buffers are good. More buffers, however, can create standing queues, and can create latency issues without throughput improvements.
  • Excessive buffers or "buffer bloat” may be the result of keeping too many packets in queue.
  • Buffer bloat can occur in access routers such as eNodeB, cable modem, or the like. Unmanaged buffers have more effect as buffer sizes get larger, as delay-sensitive applications get more prevalent, and as large or streaming downloads get more common.
  • Figure 1 illustrates a sender sending packets on to a network equal to the advertised window. More particularly, Figure 1 describes buffer status inside the router or path between TCP sender and receiver. After slow start, the TCP sender can inject packets up to the receiver advertised window size negotiated during the TCP handshake process. The sender can keep injecting packets until the packets in transit reach the advertised window size. The sender can then wait for the receiver to respond with an ACK packet, as shown in Figure 2.
  • Figure 2 illustrates sender and receiver being connected via different links and packets getting buffered inside a router. Routers that connect the sender and receiver may have different link speed and a packet may get transmitted from high speed links to low speed links such as wirelessly connected UE.
  • the routers may store the packets in their internal buffers.
  • the network may have conditions such that round trip time between sender and receiver is 100 msec, link speed between server and router is 10 Mbits/sec, and link speed between receiver and router is 1 Mbit/sec.
  • Here, "link speed" is used to mean capacity over a time interval.
  • In order to avoid packet drops, a router may need to have sufficient buffers (for example, up to 9 Mbits) to store packets that are received from the sender towards the receiver. Also, the goal of the router may be to keep sufficient buffers so that link utilization is 100%.
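  As a rough illustration of the arithmetic implied above, the following sketch computes the bottleneck bandwidth-delay product and the buffer needed to absorb the rate mismatch, using the example values from the text; the one-second burst duration is an assumption for illustration, not a figure from the patent.

```python
# Worked example using the values above: RTT = 100 ms,
# server->router link = 10 Mbit/s, router->receiver link = 1 Mbit/s.

RTT_S = 0.100          # round trip time, seconds
FAST_LINK_BPS = 10e6   # server -> router, bits/s
SLOW_LINK_BPS = 1e6    # router -> receiver, bits/s

# Bandwidth-delay product of the bottleneck: data that can be in flight.
bdp_bits = SLOW_LINK_BPS * RTT_S                      # 100,000 bits

# While the sender bursts at the fast rate, the router drains at the slow
# rate; over one second of mismatch the buffer must absorb the difference.
buffer_bits = (FAST_LINK_BPS - SLOW_LINK_BPS) * 1.0   # 9,000,000 bits = 9 Mbits

print(f"BDP: {bdp_bits:.0f} bits, buffer needed: {buffer_bits / 1e6:.0f} Mbits")
```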
  • FIG. 3 illustrates a TCP connection after one RTT.
  • A receiver can generate an acknowledgement packet (ACK) for each packet received from the sender. After receiving a first acknowledgement, the sender can control the rate at which it pumps more data: for each ACK packet the sender receives, the sender can send a next packet. From this point the sender no longer pumps at its own link speed. After each packet is pumped, the number of bytes in transit equals the receiver advertised window; the ACK stream becomes the clock between sender and receiver. In certain cases there may be a combined ACK packet once for every few packets from the sender, or the like. For the purpose of illustrating buffer bloat, a scenario is described in which the receiver sends an ACK packet for each packet received from the sender. Upon receiving the ACK packet from the receiver, the sender can inject one packet on to the wire.
  • ACK acknowledgement packet
  • Figure 4 illustrates a steady state condition being reached when the receiver generates ACKs regularly.
  • a sender may inject one packet at a time based on the received ACK.
  • Steady state may be reached when the receiver is clocking at constant speed and the total amount of data in buffers equals the packets in flight, as shown in Figure 4.
  • For full link utilization, the amount of data in transit must equal bandwidth times the sender-receiver RTT delay (Bandwidth * Delay).
  • The bottleneck node must have sufficient buffering to handle transient changes.
  • Figure 5 illustrates queue size versus time for a TCP session.
  • the queue size is equal to that of the window size.
  • the queue size may reduce over time.
  • The queue may get built up in the router. When the queue size does not reduce to zero, the remainder is called a stay queue or standing queue. This stay queue is a manifestation of buffer bloat. Whenever the advertised window is greater than the link capacity (the bandwidth-delay product), a stay queue may be created.
  • neither sender nor receiver may know how to accurately determine bandwidth-delay product value and window size.
  • Advertised window may be a static parameter that gets negotiated during start of TCP session as part of a TCP 3-way handshake procedure. This value may not change with respect to access networks and may remain constant for entire TCP flow duration. This may affect access routers such as Wi-Fi nodes and eNodeB. In a wireless link, the bandwidth may change over time and it may be difficult for TCP endpoints to get to know accurate queue size inside routers and also bandwidth-delay product value.
  • The TCP protocol is conventionally slow to react to such changes. A user can easily see the impact of buffer bloat when two flows, such as a Mice flow (web transaction) and an Elephant flow (YouTube or video session), are concurrently received by the receiver. Moreover, when more than one TCP flow is destined for a receiver, the effect may be especially pronounced, such as when video is mixed with other background/foreground traffic.
  • Buffer bloat can arise when there is a mismatch between the window size and the BDP value. It is challenging to choose exact window sizes at senders while queues manifest at bottleneck gateways such as eNodeBs. Furthermore, a TCP sender cannot compute the window size, as the bottleneck bandwidth and RTT may change constantly and may not be known to the sender.
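  To make the mismatch concrete, here is a minimal sketch (function and parameter names assumed for illustration) of the standing-queue size implied when the advertised window exceeds the path BDP:

```python
def standing_queue_bits(advertised_window_bits: float,
                        bottleneck_bps: float,
                        rtt_s: float) -> float:
    """Excess data that cannot be in flight sits in the bottleneck buffer."""
    bdp = bottleneck_bps * rtt_s
    return max(0.0, advertised_window_bits - bdp)

# A 64 KB window (~524 kbits) on a 1 Mbit/s, 100 ms path (BDP = 100 kbits)
# leaves ~424 kbits permanently queued: the "stay queue" described above.
print(standing_queue_bits(64 * 1024 * 8, 1e6, 0.100))
```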
  • Figure 6 illustrates various sources of network delay. Most of the delay components may be predictable, with the exception of stochastic delay, which is random and depends on current network conditions and the variation observed in bandwidth and delay. This delay may affect the user experience of services provided over the network.
  • Buffer bloat issues may point to the value of Active Queue Management (AQM), in view of larger buffer sizes, greater numbers of delay-sensitive applications, and the increasing ubiquity of large or streaming downloads.
  • Active Queue Management techniques include Controlled Delay (CoDel) and Explicit Congestion Notification (ECN). These techniques, however, may not be applicable in all environments. Conventionally, TCP/IP networks signal congestion by dropping packets, and most AQM techniques ultimately suggest dropping packets.
  • a method can include determining a variation in a bandwidth-delay product. The method can also include performing at least one flow management process based on the variation in the bandwidth-delay product.
  • the method can be performed by a RACS.
  • the determining can be based on radio conditions information from an access point.
  • the access point can be an eNode B.
  • the at least one flow management process can include performing a fair share queuing for each user equipment flow of a plurality of user equipment flows.
  • the determining can be based on a buffer status report.
  • the buffer status report may be a user equipment buffer status report or an eNode B buffer status report.
  • the determining can include performing or receiving a minimal bandwidth-delay product path determination.
  • the performing the minimal bandwidth-delay product path determination can include determining a round trip time between a RACS and a remote server.
  • the performing the minimal bandwidth-delay product path determination can include determining a maximum queue depth between the RACS and the remote server.
  • the at least one flow management process can include sending a calculated maximum queue depth to a remote transmission control protocol server.
  • the at least one flow management process can include sending a bandwidth-delay product measurement timestamp to a user equipment client.
  • the at least one flow management process can include providing an explicit congestion notification.
  • the explicit congestion notification can be sent toward both endpoints of an end-to-end communication.
  • the at least one flow management process can include providing bandwidth-delay product guidance to at least one of a server or a client.
  • the determining can include performing a prediction or bandwidth-delay product estimation.
  • the determining can include computing a current window size and queue size required per flow.
  • the at least one flow management process can include providing a bandwidth-delay product value.
  • the value can be provided in an enriched header.
  • the method can include measuring a bottleneck queue built up inside a radio access network, analyzing measurements of the bottleneck queue, and signalling at least one adaptive buffer proposal for a remote endpoint based on the analysis of the measurements.
  • the analysis comprises determining a bandwidth-delay product variation.
  • the measuring can include monitoring and analyzing all user equipment flows in each direction.
  • the remote endpoint can be a transmission control protocol endpoint.
  • the method can further include learning buffering between an access point and RACS, and between RACS and a remote endpoint independently.
  • the access point can be an eNode B.
  • the method can include managing an eNode B's layer-2 and layer-3 buffers for all user equipment sessions.
  • the method can further include gathering data from a user equipment buffer status report, wherein analyzing the measurements comprises analyzing the data.
  • the method can further include performing per-flow fair share queuing based on the analysis of the measurements.
  • the method can include categorizing a plurality of flows based on each flow's expected impact.
  • the method can include dynamically adjusting a bandwidth-delay product value based on end-to-end queue depth.
  • a method can include receiving an indication of variation in bandwidth-delay product. The method can also include performing a congestion adaptation based on the received indication.
  • the adaptation can include calculating an advertisement window value based on the changed bandwidth-delay product.
  • the method can include using the bandwidth-delay product variation to adapt at least one of a size of an initial congestion window, when to exit a slow start phase, a size of the congestion window during the congestion avoidance phase, or the size of the congestion window after a congestion event.
  • the method can include using the bandwidth-delay product variation to adapt a receiver advertised window.
  • an apparatus can include means for performing the method according to the first and second embodiments respectively, in any of their variants.
  • an apparatus can include at least one processor and at least one memory and computer program code.
  • the at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to perform the method according to the first and second embodiments respectively, in any of their variants.
  • a computer program product may encode instructions for performing a process including the method according to the first and second embodiments respectively, in any of their variants.
  • a non-transitory computer readable medium may encode instructions that, when executed in hardware, perform a process including the method according to the first and second embodiments respectively, in any of their variants.
  • a system may include at least one apparatus according to the third or fifth embodiments in communication with at least one apparatus according to the fourth or sixth embodiments, respectively in any of their variants.
  • Figure 1 illustrates a sender sending packets on to a network equal to the advertised window.
  • Figure 2 illustrates sender and receiver being connected via different links and packets getting buffered inside a router.
  • FIG. 3 illustrates a TCP connection after one RTT.
  • Figure 4 illustrates a steady state condition being reached when the receiver generates an ACK regularly.
  • Figure 5 illustrates queue size versus time for a TCP session.
  • Figure 6 illustrates various sources of network delay.
  • Figure 7 illustrates various input sources and a bandwidth-delay product determination procedure, according to certain embodiments.
  • Figure 8 illustrates managing queue size based on bandwidth-delay product values, according to certain embodiments.
  • Figure 9 illustrates a method according to certain embodiments.
  • Figure 10 illustrates another method according to certain embodiments.
  • Figure 11 illustrates a system according to certain embodiments.
  • Certain embodiments can avoid buffering an excessive amount of data and quickly filling the queue.
  • the queue buffer size can be adjusted and have enough data and yet achieve 100% link utilization with maximum throughput, in certain embodiments.
  • certain embodiments provide a mechanism for wireless networks with the help of a Radio Applications Cloud Server (RACS), wherein the mechanism may be able to identify flows and perform flow-based treatment, separating elephant and mice flows for each user.
  • RACS Radio Applications Cloud Server
  • certain embodiments may implement adaptive queue management techniques and avoid excessive buffers inside the eNodeB layer 3 and Layer 2.
  • Certain embodiments may provide a proactive mechanism to learn and self-adjust the queue size in the access network, such as an eNodeB, and make buffers adaptive. Furthermore, certain embodiments provide an implementation in RACS to assist the eNodeB to rein in latency and achieve effective link utilization. Moreover, certain embodiments provide a discovery scheme to implement an adaptive TCP window between client and server. Additionally, certain embodiments provide a mechanism compliant with ongoing standards discussion in the IETF Active Queue Management (AQM) working group and the ETSI Mobile Edge Computing (MEC) working group. Furthermore, certain embodiments provide a mechanism to enable proactive measures to ensure that no session dominates throughput and latency.
  • AQM Active Queue Management
  • MEC ETSI Mobile Edge Computing
  • Certain embodiments can measure the queue build-up at the bottleneck node. More specifically, certain embodiments can use a RACS inside the RAN to measure the bottleneck queue build-up.
  • RACS can be implemented as part of a radio access network solution and can be tightly coupled to an eNodeB.
  • a RACS endpoint can be used to monitor and analyze all UE flows in each direction. The RACS can then analyze and propose adaptive buffers for queues in each direction, and signal remote TCP endpoints.
  • Certain embodiments provide a discovery procedure to learn the buffering between eNodeB and RACS, and between RACS and the remote TCP endpoint, independently. Such embodiments may eliminate the stay-queue problem in the RAN and in the backhaul.
  • RACS can supplement the procedures to perform per-flow fair share queuing.
  • Servers can cooperate with RACS and an operator can have explicit notification about bandwidth-delay product value and adapt TCP congestion control mechanism to follow bandwidth-delay product variations smoothly.
  • a remote server does not necessarily need to have a pre-agreement with RACS. Changes in bandwidth-delay product can be communicated by a combination of explicit congestion notification, so as to avoid sending packets using the static advertised values. Also, a new window advertisement can reflect current bandwidth-delay product values.
  • Certain embodiments can provide a mechanism that can help to manage an eNodeB's Layer-2 and Layer-3 buffers for all UE sessions with the help of a RACS server. Moreover, certain embodiments can provide management of per-flow queuing for handling Elephant and Mice flows in RACS. Furthermore, certain embodiments can involve providing identification of Elephant and Mice flows per UE in RACS. In certain embodiments, a mechanism can normalize different flows depending on the L2 and L3 buffer size of the eNB.
  • certain embodiments can provide a mechanism to derive UE's E2E queue depth with a combination of inputs including UE's current number of flows, flow-completion time, eNB's active number of users, buffer size in eNB (downlink), buffer size in UE (uplink).
  • bandwidth-delay product value is dynamically adjusted.
  • Certain embodiments can use a technique such as packet-pair probing to estimate the queue depth between the eNodeB and the remote server. Similarly, certain embodiments can use such a technique to estimate the queue depth between the eNodeB and the UE.
  • Certain embodiments can provide a mechanism to inform the bandwidth-delay product value to nodes, both remote server and client, along with explicit congestion notification communication to avoid congestion collapse.
  • Certain embodiments can provide a technique to generate an explicit congestion notification towards remote servers whenever the bandwidth-delay product needs an adjustment per flow. Furthermore, in certain embodiments explicit congestion notification capability can be successfully negotiated among UE, server, and RACS.
  • RACS can act as an ECN-aware router and can set a mark in the IP header towards the receiver (UE) instead of dropping a packet in order to signal the impending congestion. Then the receiver of the packet (which contains the explicit congestion notification flag) can echo the congestion indication to the sender (Server), which can reduce the sender's transmission rate as it does in the packet dropped case.
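  A minimal sketch of that marking behaviour follows. The ECN codepoints are the standard ones; the queue threshold and the dictionary-based packet representation are assumptions for illustration only.

```python
ECT0, CE = 0b10, 0b11   # ECN codepoints: ECN-Capable Transport, Congestion Experienced

def forward(packet: dict, queue_bits: int, target_bits: int) -> dict:
    """Mark instead of dropping when the queue exceeds its BDP-derived target."""
    if queue_bits > target_bits and packet["ecn"] == ECT0:
        packet["ecn"] = CE   # the receiver echoes this back to the sender
    return packet
```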
  • an implementation in remote endpoints can use the bandwidth-delay product values for adapting various things, such as the following: the size of the initial congestion window, when to exit the slow start phase, the size of the congestion window during the congestion avoidance phase, the size of the window after a congestion event, or any combination thereof.
  • the bandwidth-delay product can give the remote endpoint insight into the network, in order to avoid buffer bloat in the radio network and also in the backhaul network.
  • the bandwidth-delay product change notification can help the server to adjust the receiver advertised window.
  • Certain embodiments can work for both encrypted and unencrypted traffic. Furthermore, certain embodiments can work for all types of wireless networks.
  • a TCP session can communicate the available link capacity, as the advertised window, to a remote peer during TCP session start.
  • the UE may not have a mechanism to know what the available bandwidth is. Available bandwidth, by definition, can be considered unused link capacity between sender and receiver. Typically, the size of the window may be larger than what the available bandwidth can sustain in flight.
  • TCP can ramp up to the available bandwidth using the Additive Increase Multiplicative Decrease (AIMD) algorithm.
  • AIMD Additive Increase Multiplicative Decrease
  • the TCP algorithm may be reactive: it may send a window's worth of packets and then inject further packets based on received ACK packets.
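  For reference, a minimal sketch of the standard AIMD window update referenced above (not patent-specific; segment units assumed):

```python
def aimd_update(cwnd_segments: float, congestion_event: bool) -> float:
    """One RTT of AIMD: additive increase, multiplicative decrease on loss."""
    if congestion_event:
        return max(1.0, cwnd_segments / 2.0)   # multiplicative decrease
    return cwnd_segments + 1.0                  # additive increase per RTT
```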
  • it may be challenging to adjust the congestion window to the bandwidth-delay product value for the following reasons: lack of bandwidth-delay product change detection methods; lack of implementations that adjust queue length inside access routers; lack of routers performing per-flow queuing, as they can see only IP packets; and lack of information on how and when an explicit congestion notification aware router should send explicit congestion notifications to remote nodes.
  • Figure 7 illustrates various input sources and a bandwidth-delay product determination procedure, according to certain embodiments.
  • Figure 7 describes bandwidth-delay product detection, estimation and update processes, according to certain embodiments.
  • a RACS can receive radio conditions information from eNodeB.
  • the bandwidth-delay product (BDP) may keep changing in a wireless network due to movement of a user or load in the network. This change may influence the queue size of the eNodeB.
  • a user session can start with a pre-configured window size and can establish communication with the remote server.
  • the RACS can determine a manageable queue size according to the changed conditions and can perform a fair share queuing discipline for each UE flow.
  • the RACS can receive a user equipment buffer status report.
  • UE buffer status can describe the amount of data that is currently in a UE buffer to be sent towards the eNodeB. For example, when a user is about to upload pictures or has data to send, the UE can communicate such information (UL-BSR) to the eNodeB. Based on the UL-BSR information at RACS, RACS can determine a corresponding bandwidth-delay product.
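  One plausible reading of that determination is sketched below, under the assumption (not stated explicitly in the text) that the uplink BDP is the current uplink rate times the RTT, with the BSR showing how much the UE holds beyond that budget; names are illustrative.

```python
def uplink_bdp_bits(uplink_rate_bps: float, rtt_s: float) -> float:
    """Data the uplink can hold in flight."""
    return uplink_rate_bps * rtt_s

def ue_queue_excess_bits(bsr_bits: float, uplink_rate_bps: float,
                         rtt_s: float) -> float:
    """UE-reported data beyond what fits in flight; a candidate queue target."""
    return max(0.0, bsr_bits - uplink_bdp_bits(uplink_rate_bps, rtt_s))
```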
  • the RACS can receive an eNodeB buffer status report.
  • This buffer status report may be a downlink BSR and may be computed as part of Layer-2 RLC buffer.
  • the RLC buffer may be a fixed buffer size, and may not change when the bandwidth changes over time.
  • the mismatch in buffer size at Layer-2 (RLC-MAC) and Layer-3 (egress and ingress IP queue) inside the eNodeB can cause delay inside the eNodeB.
  • RACS may make or receive a minimal bandwidth-delay product path determination. It may be possible that the backhaul path between the eNodeB and a remote server is more congested than the path between the eNodeB and the UE. In that scenario, guidance from the RAN may not be sufficient. Therefore, a discovery protocol may use TCP time stamp (TS) options to determine the round trip time (RTT) between the RACS and the remote server. Determining the RTT alone may not be sufficient, however. Thus, certain embodiments may determine and approximate the queue size (or buffer) in path nodes between the RACS and the remote server. For this, a determination of the maximum queue depth between the RACS and the remote server may be made.
  • TS TCP time stamp
  • Two TCP packets with Time Stamp (TS) option packets can be generated from the RACS towards the remote server at a fixed interval from one another.
  • the response and difference in response can be computed.
  • the computed response and difference in response can determine the RTT and the processing delay or overall queue depth incurred from the RACS to the remote server.
  • a maximum value between (eNodeB-RACS, RACS-Remote Server) can be used for bandwidth-delay product adjustment and can be sent to a remote TCP server.
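  A sketch of that estimate follows; the timestamp bookkeeping and function names are hypothetical, as the text describes only the two fixed-interval TS probes and the comparison of their responses.

```python
def pair_estimate(send_t1: float, send_t2: float,
                  reply_t1: float, reply_t2: float) -> tuple:
    """Estimate RTT and queueing delay from two probes sent a fixed gap apart."""
    rtt = reply_t1 - send_t1                         # RTT from the first probe
    sent_gap = send_t2 - send_t1
    reply_gap = reply_t2 - reply_t1
    queueing_delay = max(0.0, reply_gap - sent_gap)  # extra spread = queue build-up
    return rtt, queueing_delay

# Per the text, the maximum of the eNodeB-RACS and RACS-server estimates can
# then drive the bandwidth-delay product adjustment sent to the remote server.
```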
  • bandwidth-delay product measurement timestamps towards UE can be used. Such timestamps can be sent to a UE client so that the UE can directly adjust the congestion window parameter accordingly.
  • the RACS can provide explicit congestion notification generation. Routers conventionally do not know how and when to generate explicit congestion notification. Such routers do not compute their queues and adjust based on the current bandwidth-delay product. At the RACS, for each of the active TCP flows, explicit congestion notification can be generated towards both endpoints. Hence packet loss inside the routers can be avoided. Moreover, this notification may result in the sender slowing down and taking corrective action based on the bandwidth-delay product.
  • bandwidth-delay product guidance can be sent to servers, such as those servers that have a pre-arrangement with the RACS or operator. This information can be communicated to the server, the client, or both. This way, for example, a remote TCP endpoint can adjust its sending rate to match the bandwidth-delay product.
  • a TCP extension header can be used to communicate the changed bandwidth- delay product values, and new advertisement window values can be computed from this proposed bandwidth-delay product.
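  As one hypothetical shape for such an extension (the option kind, taken from the experimental range, and the layout are assumptions, not defined by the patent):

```python
import struct

EXP_OPTION_KIND = 253   # experimental TCP option kind (RFC 6994 range)

def encode_bdp_option(bdp_bytes: int) -> bytes:
    """Pack a 4-byte BDP hint as kind | length | value, network byte order."""
    return struct.pack("!BBI", EXP_OPTION_KIND, 6, bdp_bytes)

def advertised_window_from_bdp(bdp_bytes: int, flows_on_path: int) -> int:
    """Derive a new advertised window from the proposed BDP (assumed even split)."""
    return max(1, bdp_bytes // max(1, flows_on_path))
```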
  • Figure 8 illustrates managing queue size based on bandwidth-delay product values, according to certain embodiments.
  • Table 1 describes several possible flows that can be considered to illustrate the real-world usage scenario.
  • Figure 8 describes how a persistent queue is managed for an ongoing TCP session.
  • in a first scenario, a user equipment is engaged with Server E (Elephant flow) for a long duration session, typically FTP, video download, or video play; in a second scenario, a user equipment is engaged with Server M (Mice flow) for a short duration session, such as a web session, chat session, or a background session like syncing an application.
  • Server E Elephant flow
  • Server M Mice flow
  • Figure 8 describes message interaction by considering two flows, namely Flow_E, a long duration flow that demands both bandwidth and low latency, and Flow_M, a short duration flow such as a sync or web traffic request/response.
  • A video session may demand both low latency and bandwidth.
  • Video sessions from sites such as YouTube can be delivered in bursts.
  • the UE can send an Uplink BSR along with the TCP session request or via other supplementary radio channel information message.
  • the UE can send such information whenever the UE is queried for such information.
  • the UE can make such information available at regular intervals, when the BSR is greater than a certain threshold value.
  • bandwidth may not equal real "speed.”
  • Bandwidth is capacity over an interval, whereas real "speed" may be measured by the amount of latency (lag) between an action and a response, with lower latency corresponding to higher speed.
  • the RACS scheduler can keep managing the individual flow level queue irrespective of underlying radio layer technology enhancement.
  • an eNodeB can implement basic scheduling for TTI including Frequency, Time, QCI, and Subscriber.
  • RACS can complement queue discipline fair share, and can perform per-flow queuing for each UE.
  • the RACS can do dynamic adjustment of the queues so as to avoid standing queues.
  • the fixed eNodeB RLC buffers do not have data filled in all the time. RACS can fill these Layer-2 buffers just in time, or just ahead of time, with required data when needed.
  • the amount of data required at a given point can be determined by TTI values, RB per UE, maximum number of RB per TTI from UE, current load on UE, average cell bandwidth computed from received radio condition information, and the like.
  • the RACS server can create one queue for each TCP flow per UE. This mechanism can allow even distribution of TCP flow buffers. As mentioned above, TCP may not work well for competing flows, and there may be a risk that TCP throughput drops drastically. To avoid this, or for other reasons, RACS can perform scheduling of window adjustment in each direction. RACS can also ensure 100% link utilization. As per Figure 8, a message may not even have reached the remote server E; these pre-computed values can, therefore, be applied when the data is being exchanged.
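  One conventional way to realize such per-UE, per-flow fair sharing is deficit round robin; the sketch below is illustrative (class, method, and parameter names invented), not the patent's specified algorithm.

```python
from collections import deque

class FairShareScheduler:
    """Deficit round robin over one queue per TCP flow per UE."""

    def __init__(self, quantum_bytes: int = 1500):
        self.quantum = quantum_bytes
        self.queues = {}    # flow_id -> deque of packet sizes (bytes)
        self.deficit = {}   # flow_id -> accumulated sending credit

    def enqueue(self, flow_id, packet_bytes: int):
        self.queues.setdefault(flow_id, deque()).append(packet_bytes)
        self.deficit.setdefault(flow_id, 0)

    def dequeue_round(self):
        """One DRR round: each flow may send up to quantum + carried deficit."""
        sent = []
        for flow_id, q in list(self.queues.items()):
            self.deficit[flow_id] += self.quantum
            while q and q[0] <= self.deficit[flow_id]:
                pkt = q.popleft()
                self.deficit[flow_id] -= pkt
                sent.append((flow_id, pkt))
            if not q:
                self.deficit[flow_id] = 0   # no credit hoarding by idle flows
        return sent
```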
  • a result of the adjusted bandwidth-delay product can be reflected as a window parameter on the fly.
  • the maximum window parameter may be sent only during the beginning and not during the session.
  • These pre-computed values can be applied when the data is being exchanged and also can be communicated to a remote endpoint based on the current bandwidth-delay product calculation.
  • certain embodiments may have an adaptive value at the start of the TCP session.
  • during the TCP handshake, RACS can arrange for the transport nodes to be negotiated as explicit congestion notification capable.
  • endpoints can implement explicit congestion notification functions.
  • Steps 7 through 10 may correspond to steps 2-5, but for a short flow. During these steps, however, RACS may still not know whether this flow is properly categorized as an Elephant flow or a Mice flow. After step 10, the TCP handshakes for both the Elephant flow and the Mice flow are complete and both flows can be active.
  • the data can start flowing from the server towards the UE at 11.
  • the window's worth of data may be in flight.
  • TCP slow-start AIMD can permit computation of timing and buffer sizing per flow inside the RACS.
  • RACS can distinguish Elephant from Mice flows based on the volume of data being shipped in transit. For example, the average webpage size is much smaller compared to video data chunks or FTP data chunks.
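  A minimal sketch of such volume-based classification; the byte threshold is an assumed tunable, not a value from the text.

```python
ELEPHANT_BYTES = 1_000_000   # assumed: ~1 MB seen in transit marks an elephant

def classify_flow(bytes_seen: int) -> str:
    """Label a flow by the volume of data observed so far."""
    return "elephant" if bytes_seen >= ELEPHANT_BYTES else "mice"
```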
  • RACS may want to avoid a sender sending a window worth of data that was agreed during the start of TCP session.
  • the window size thus can be adapted to current ongoing network conditions.
  • the window size can be communicated for current network conditions. It may happen that bandwidth conditions are good when TCP starts and worsen later. If continuous adaptation is not present, then the RACS queue may hold more data and may cause throughput and latency effects.
  • a goal of RACS can be to adjust the current TCP window sizing closer to the bandwidth-delay product and hence the queue size inside the RACS. To achieve this, RACS can perform per flow fair share scheduling. RACS can perform this irrespective of any TCP optimization and can avoid building excessive fill of data buffers.
  • Various TCP optimizations can be applied with respect to ACKs. For example, in a first alternative, for each packet sent by Server_E, the receiving UE may send one ACK packet. In a second alternative, the receiving UE may not send one ACK per packet, but may instead acknowledge a window's worth of packets at once. In a third alternative, the receiving UE may combine the ACK with data in the UL direction, which could be delayed. In a fourth alternative, the receiving UE may combine the ACK with data and delay the ACK by a fixed amount. Other TCP optimizations applied to ACKs are also permitted as further alternatives.
  • RACS can compute the time difference between data sent and the corresponding acknowledged data, not merely as part of ACK correlation, and can compute the real throughput.
  • apart from the per-flow queuing, RACS can, at 13, perform traffic shaping in such a way that eNodeB buffers are not filled only by dominating flows, but all packet flows to/from the UE get fair treatment.
  • the eNB can send data from any one of the flows equal to the UE's maximum RB per TTI, plus additional required packets from the same flow. This can ensure that no single flow dominates the eNodeB buffers.
  • TCP data and ACK can be seen in each direction by RACS, and queue size can be adjusted accordingly based on a computed available bandwidth (ABW) by RACS.
  • ABW available bandwidth
  • the RACS may infer the actual ACK, packet-pair difference and provide an adjusted window size as a function of bandwidth-delay product value.
  • at step 15, for the Mice session, the data or packets that are seen may be relatively few. It may even happen that TCP never reaches steady state. This step may be similar to step 11. Certain configured values, such as average data bytes for a web session or short session, can be used to derive a minimal initial advertised window size. These values can be used to derive the values in message 5 and message 10.
  • the process can be similar to process 12. However, now the data arrival rate and/or number of bytes in the queue can determine the dominant flow inside the queue. As there are going to be fewer packets (or data volume) when compared to Flow_E, the flow can be classified as Flow_M.
  • the resultant queue that is seen inside the eNodeB or UE may be uniform and normalized flows. Nevertheless, the situation may change.
  • eNodeB may inform RACS.
  • RACS can, at 19, constantly compute the current window size and queue size required per flow. If a deviation of queue or current window size is more than a threshold, then RACS can decide to perform queue (re)size and congestion window size readjustment.
  • RACS can keep learning the current bandwidth-delay product variation, as described in Figure 8. For example, RACS can compute the value of the RTT between the UE and RACS. Moreover, RACS can compute the value of the RTT between RACS and the remote server. Combining the computed RTT values, RACS can compute a window size that is needed to accommodate 100% link utilization without overfilling the buffers.
  • Such computation can be made by various mechanisms. For example, for packets towards UE nodes, RACS can insert time stamp options into two consecutive packets; thereby, RACS can learn both the RTT and the available bandwidth. For packets towards the remote server, RACS can insert time stamp options into two consecutive outgoing packets, and thereby similarly learn both the RTT and the available bandwidth. If the current bandwidth-delay product is lower than the window size, RACS can also send a window-full condition to ensure that the difference between the previous congestion window and the current window is reached. This may involve full manipulation of TCP sequence numbering.
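  A sketch of that combination (names assumed): the end-to-end RTT is the sum of the two measured segments, and the target window is the path's bandwidth-delay product.

```python
def target_window_bytes(rtt_ue_racs_s: float,
                        rtt_racs_server_s: float,
                        available_bw_bps: float) -> int:
    """Window that keeps the link full without overfilling buffers."""
    rtt_e2e = rtt_ue_racs_s + rtt_racs_server_s
    bdp_bits = available_bw_bps * rtt_e2e
    return int(bdp_bits / 8)

# Example: 30 ms + 70 ms RTT at 5 Mbit/s available bandwidth -> 62,500 bytes.
print(target_window_bytes(0.030, 0.070, 5e6))
```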
  • RACS can adjust a queue buffer size internally, for example as a marker value. Then, at 22, the computed bandwidth-delay product value can be sent to a managed server having an existing SLA between the operator and the RACS vendor. In this situation, Header Enrichment (HE) or appropriate IP level extension information can be communicated for window values, along with throughput guidance (TG) information.
  • HE Header Enrichment
  • an explicit congestion notification can be generated for that flow in the direction in which more data is coming. For example, it could be sent towards the server if more data is arriving from the server towards the UE, or towards both. This can ensure that the server does not keep sending data at too fast a rate.
  • a new extension can enable the remote TCP server to take a new proposed bandwidth-delay product value and derive both congestion window parameters and new receiver advertised window size.
  • Steps from 18 to 22 can be performed when bandwidth-delay product changes from low bandwidth to high bandwidth.
  • when TCP flows are shared across one link, it may be difficult to assign priorities to each flow.
  • RACS can be simply configured to avoid having one flow dominating the entire bandwidth.
  • a worst case may be that there are N flows, each with a window's worth of data in the RACS, but RACS can eventually perform per-flow shaping and reduce the queue size evenly.
  • identifiable message exchange may occur between a RACS and UE. Furthermore, in certain embodiments identifiable message exchange may occur between a RACS and remote server.
  • Certain embodiments may employ a TCP TS generation and packet-pairing technique to determine a bottleneck between the UE and eNodeB/RACS and between the RACS and a remote server.
  • a self- adjusting window size can be based on bandwidth-delay product.
  • a value of the self-adjusting window size may be exposed to both UE and a remote server.
  • Other embodiments may be identified based on the changed bandwidth-delay product value and the ABW provided to the remote peer.
  • Certain embodiments may have various benefits and/or advantages. For example, in certain embodiments no single application flow may dominate the radio buffers. Furthermore, certain embodiments may provide bottleneck detection inside the RAN and between the RAN and a remote server. Additionally, in certain embodiments RACS can perform per-flow queuing and shaping to normalize the eNodeB buffers. Furthermore, in certain embodiments RACS can perform window size adjustments and can generate explicit congestion notifications to a remote server based on bandwidth-delay product changes. Additionally, in certain embodiments there may be no unmanaged or excessive buffers between endpoints in the RAN. TCP flows may, in certain embodiments, smoothly follow bandwidth-delay product changes and hence result in minimal retransmission.
  • Figure 9 illustrates a method according to certain embodiments.
  • the method of Figure 9 may be performed by, for example, a RACS.
  • the method may include, at 910, determining a variation in a bandwidth-delay product. The determining can be based on radio conditions information received from an access point at 902. In a variation the access point can be an eNode B.
  • the determining can be based on a buffer status report received at 904.
  • the buffer status report may be a user equipment buffer status report or an eNode B buffer status report.
  • the determining can include performing or receiving a minimal bandwidth-delay product path determination at 906.
  • the performing the minimal bandwidth-delay product path determination can include determining a round trip time between a RACS and a remote server.
  • the performing the minimal bandwidth-delay product path determination can include determining a maximum queue depth between the RACS and the remote server.
  • the determining can include, at 908, performing a prediction or bandwidth-delay product estimation. As another alternative, at 909, the determining can include computing a current window size and queue size required per flow.
  • the method may also include, at 920, performing at least one flow management process based on the variation in the bandwidth-delay product.
  • the at least one flow management process can include, at 921, performing a fair share queuing for each user equipment flow of a plurality of user equipment flows.
  • the at least one flow management process can also or alternatively include, at 923, sending a calculated maximum queue depth to a remote transmission control protocol server.
  • the at least one flow management process can also or alternatively include, at 925, sending a bandwidth-delay product measurement timestamp to a user equipment client.
  • the at least one flow management process can include providing, at 927, an explicit congestion notification.
  • the explicit congestion notification can be sent toward one or both endpoints of an end-to-end communication.
  • the at least one flow management process can include, at 928, providing bandwidth-delay product guidance to at least one of a server or a client.
  • the at least one flow management process can include, at 929, providing a bandwidth-delay product value. The value can be provided, for example, in an enriched header.
  • Figure 10 illustrates another method according to certain embodiments.
  • the method of Figure 10 may be performed by a network element, such as a communication endpoint, for example a server.
  • a method can include, at 1010, receiving an indication of variation in bandwidth-delay product.
  • the method can also include, at 1020, performing a congestion adaptation based on the received indication.
  • the adaptation can include such things as calculating an advertisement window value based on the changed bandwidth-delay product.
  • the method can include using the bandwidth-delay product variation to adapt at least one of a size of an initial congestion window, when to exit a slow start phase, a size of the congestion window during the congestion avoidance phase, or the size of the congestion window after a congestion event.
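  A sketch of how an endpoint might map a received BDP value onto those parameters; all names and the segment-based mapping are assumptions for illustration, not the patent's defined procedure.

```python
def adapt_congestion_params(bdp_bytes: int, mss: int = 1460) -> dict:
    """Re-derive congestion-control knobs from a signalled BDP value."""
    bdp_segments = max(1, bdp_bytes // mss)
    return {
        "initial_cwnd": min(10, bdp_segments),         # initial window size
        "ssthresh": bdp_segments,                      # where slow start exits
        "cwnd_cap": bdp_segments,                      # ceiling in avoidance phase
        "cwnd_after_loss": max(1, bdp_segments // 2),  # window after a congestion event
    }
```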
  • the method can, in various ways, include using the bandwidth-delay product variation to adapt a receiver advertised window.
  • FIG. 11 illustrates a system according to certain embodiments of the invention.
  • a system may include multiple devices, such as, for example, at least one UE 1110, at least one throughput guidance entity 1120, which may be an eNB, RACS, RNC, or other base station or access point, and at least one information receiver 1130, which may be an OTT Server, UE, or other entity configured to receive throughput guidance or other congestion information.
  • Each of these devices may include at least one processor, respectively indicated as 1114, 1124, and 1134.
  • At least one memory can be provided in each device, and indicated as 1115, 1125, and 1135, respectively.
  • the memory may include computer program instructions or computer code contained therein.
  • the processors 1114, 1124, and 1134 and memories 1115, 1125, and 1135, or a subset thereof, can be configured to provide means corresponding to the various blocks of Figures 9 or 10.
  • transceivers 1116, 1126, and 1136 can be provided, and each device may also include an antenna, respectively illustrated as 1117, 1127, and 1137.
  • information receiver 1130 may be configured for wired communication, in addition to wireless communication, and in such a case antenna 1137 can illustrate any form of communication hardware, without requiring a conventional antenna.
  • Transceivers 1116, 1126, and 1136 can each, independently, be a transmitter, a receiver, or both a transmitter and a receiver, or a unit or device that is configured both for transmission and reception.
  • Processors 1114, 1124, and 1134 can be embodied by any computational or data processing device, such as a central processing unit (CPU), application specific integrated circuit (ASIC), or comparable device.
  • the processors can be implemented as a single controller, or a plurality of controllers or processors.
  • Memories 1115, 1125, and 1135 can independently be any suitable storage device, such as a non-transitory computer-readable medium.
  • a hard disk drive (HDD), random access memory (RAM), flash memory, or other suitable memory can be used.
  • the memories can be combined on a single integrated circuit with the processor, or may be separate from the one or more processors.
  • the computer program instructions stored in the memory and which may be processed by the processors can be any suitable form of computer program code, for example, a compiled or interpreted computer program written in any suitable programming language.
  • the memory and the computer program instructions can be configured, with the processor for the particular device, to cause a hardware apparatus such as UE 1110, throughput guidance entity 1120, and information receiver 1130, to perform any of the processes described herein (see, for example, Figures 9 and 10). Therefore, in certain embodiments, a non-transitory computer-readable medium can be encoded with computer instructions that, when executed in hardware, perform a process such as one of the processes described herein. Alternatively, certain embodiments of the invention can be performed entirely in hardware.
  • While Figure 11 illustrates a system including a UE, throughput guidance entity, and information receiver, embodiments of the invention may be applicable to other configurations, and configurations involving additional elements.
  • additional UEs may be present, and additional core network elements may be present, as illustrated in Figure 8, for example.

Abstract

Various communication systems may benefit from proper handling of varying conditions. For example, wireless networks may benefit from a mechanism, system, or method for handling bandwidth-delay product variation. A method can include determining a variation in a bandwidth-delay product. The method can also include performing at least one flow management process based on the variation in the bandwidth-delay product.

Description

TITLE:
Smooth bandwidth-delay product variation inside wireless networks
CROSS-REFERENCE TO RELATED APPLICATION:
[0001] This application is related to and claims the benefit and priority of U.S. Provisional Patent Application No. 62/094,946, filed December 19, 2014, the entirety of which is hereby incorporated herein by reference, and claims the benefit and priority of U.S. Provisional Patent Application No. 62/104,526, filed January 16, 2015, the entirety of which is also hereby incorporated herein by reference.
BACKGROUND:
Field:
[0002] Various communication systems may benefit from proper handling of varying conditions. For example, wireless networks may benefit from a mechanism, system, or method for handling bandwidth-delay product variation.
Description of the Related Art:
[0003] Routers and switches connect different sub-networks with varying link speeds. Routers may have built in buffers. These buffers can absorb temporary bandwidth fluctuations and can also avoid packet drops in nodes when data is transported from high speed link to low speed links. Typically buffers are fixed hardware buffers both on ingress and egress port of the nodes. In this discussion the term "nodes" can be used broadly to refer to network elements such as routers, switches, NodeBs, or the like.
[0004] A conventional myth is that more buffers are good, because buffers can prevent packet drops and may give 100% link utilization. The reality is that excessive unmanaged buffers can cause side-effects. These non-adaptive buffers can fill up quickly and can induce latency, with consequently no throughput gain. The phenomenon of a buffer filling up quickly and forming a standing queue inside the node is called buffer bloat.
[0005] Bandwidth does not equal speed. Bandwidth is capacity over an interval. Real speed can be measured by the amount of latency or delay, between an action and a response. An emphasis on speed may have been lost over time due to pressure for bandwidth. The presence of large, unmanaged network buffers, primarily across the edge devices of the Internet can lead to buffer bloat. Buffer bloat can be observed in access routers such as Wi-Fi routers at home, or eNodeB RAN networks where fewer flows dominate to fill the buffers. In eNodeB, especially with variable bit rates, shedding load to match the available bandwidth does not presently happen, so loaded conditions can lead to huge delays.
[0006] It is easy for a user to see the impact of buffer bloat. Often a user cannot distinguish between network congestion and buffer bloat. To create buffer bloat, a user could start a short duration web session, which generates a small TCP burst, and concurrently start a long duration FTP or video session. In such a case, the user experience will be poor for the web session. By way of analogy, the short duration web session can be described as "Mice," while the long duration FTP or video session can be described as "Elephant," in view of the relative size difference of those animals.
[0007] Thus, long duration sessions can be called elephant sessions or elephant flows and short duration sessions or flows can be called mice flows or mice sessions. When elephant and mice are mixed inside the single queue, user experience may be affected, mainly due to excessive buffer fill by the elephant session.
[0008] A buffer inside a node, such as an eNodeB, switch, or router, can act as a queue that gets filled with packets. Nodes implement some form of queuing discipline, such as First-In First-Out (FIFO) or the like. Conventionally, it is considered that a buffer is to be configured to store at least the "delay * bandwidth product." Determining correct buffer sizing is challenging. Sizing buffers below the traditional bandwidth-delay product (BDP) can lower link utilization and may even lead to packet drops. In wireless networks, although bandwidth varies over time, and the round-trip time (namely the "delay" used in bandwidth-delay product) also varies, individual connections can still use the static pre-configured bandwidth-delay product value for the entire session, which may be provided during the TCP handshake by the remote endpoints.
[0009] Network equipment manufacturers may use a large number of static buffers, thinking that more buffers are good. More buffers, however, can create standing queues, and can create latency issues without throughput improvements.
[0010] Excessive buffers or "buffer bloat" may be the result of keeping too many packets in queue.
[0011] Buffer bloat can occur in access routers such as eNodeB, cable modem, or the like. Unmanaged buffers have more effect as buffer sizes get larger, as delay-sensitive applications get more prevalent, and as large or streaming downloads get more common.
[0012] In the remainder of this section, TCP flows and the standing queue, or buffer bloat, issue are discussed. As TCP flows are bi-directional, the roles of sender and receiver can change based on the TCP packet exchange. Nevertheless, the generic terms sender and receiver are used for purposes of illustration.
[0013] Figure 1 illustrates a sender sending packets on to a network equal to the advertised window. More particularly, Figure 1 describes buffer status inside the router or path between TCP sender and receiver. After slow start, the TCP sender can inject packets up to the receiver's advertised window size provided during the TCP handshake process. The sender can keep injecting packets until the packets in transit reach the advertised window size. Next, the sender can then wait for the receiver to respond with an ACK packet, as shown in Figure 2.
[0014] Figure 2 illustrates sender and receiver being connected via different links and packets getting buffered inside a router. Routers that connect the sender and receiver may have different link speeds, and a packet may get transmitted from high speed links to low speed links, such as to a wirelessly connected UE. To avoid losing packets, the routers may store the packets in their internal buffers. For example, the network may have conditions such that the round trip time between sender and receiver is 100 msec, the link speed between server and router is 10 Mbits/sec, and the link speed between receiver and router is 1 Mbit/sec. Here the term "link speed" is being used in terms of capacity over a time interval.
[0015] In order to avoid packet drops, a router may need to have sufficient buffers (for example, up to 9 Mbits) to store packets that are received from the sender towards the receiver. Also, the goal of the router may be to keep sufficient buffers so that link utilization is 100%.
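As a back-of-the-envelope check of these example figures, consider the following sketch (in Python for concreteness). The reading of the "9 Mbits" as one second of sustained rate mismatch between the fast ingress link and the slow egress link is an editorial assumption, not stated above.

```python
# Back-of-the-envelope check of the example figures above (rates in bits/s).
INGRESS_RATE = 10_000_000   # server -> router link, 10 Mbit/s
EGRESS_RATE = 1_000_000     # router -> receiver link, 1 Mbit/s
RTT = 0.100                 # sender-receiver round trip time, seconds

# Bandwidth-delay product of the bottleneck path: the data that must be in
# flight to keep the slow link 100% utilized.
bdp_bits = EGRESS_RATE * RTT                       # 100,000 bits (~12.5 kB)

# While the sender bursts at ingress speed, the router's queue grows at the
# difference of the two rates: 9 Mbits accumulate per second of mismatch.
queue_growth_per_sec = INGRESS_RATE - EGRESS_RATE  # 9,000,000 bits/s

print(f"BDP: {bdp_bits:,.0f} bits; queue growth: {queue_growth_per_sec:,} bits/s")
```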
[0016] Figure 3 illustrates a TCP connection after one RTT. As shown in Figure 3, a receiver can generate an acknowledgement packet (ACK) for each packet received from the sender. After receiving a first acknowledgement, the sender can control the rate of pumping more data. For each ACK packet the sender receives, the sender can then send a next packet. At this point, the sender will not pump at its own link speed. After each packet is pumped, the number of bytes in transit will be equal to the receiver's advertised window. The ACK stream becomes the clocking rate between sender and receiver. In certain cases there may be a combined ACK packet sent once for every few packets from the sender, or the like. For the purpose of illustrating buffer bloat, a scenario is described in which a receiver will send an ACK packet for each packet received from the sender. Upon receiving the ACK packet from the receiver, the sender can inject one packet on to the wire.
[0017] Figure 4 illustrates a steady state condition being reached when the receiver generates ACKs regularly. A sender may inject one packet at a time based on the received ACK. Steady state may be reached when the receiver is clocking at constant speed and the total amount of data in buffers equals the packets in flight, as shown in Figure 4. To achieve 100% link utilization, the amount of data in transit must be equal to the bandwidth times the sender-receiver RTT delay (Bandwidth * Delay). Also, the bottleneck node must have sufficient buffer to handle transient changes.
[0018] Figure 5 illustrates queue size versus time for a TCP session. Initially, the TCP sender sends a window's worth of data, and therefore the queue size is equal to the window size. As the receiver starts to generate ACKs towards the sender, the queue size inside the buffer may reduce over time. When the receiver's advertised window is more than the Bandwidth * Delay product, a queue may get built up in the router. The queue size does not reduce to zero; this residual queue is called a stay queue or standing queue. This stay queue can also be called buffer bloat. Whenever the advertised window is greater than the link capacity, a stay queue may be created. At the start of a TCP session, neither sender nor receiver may know how to accurately determine the bandwidth-delay product value and window size.
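The standing queue can be illustrated with a minimal sketch, under the simplifying assumption that the residual queue is exactly the excess of the advertised window over the path bandwidth-delay product:

```python
# Minimal illustration (a simplification of the behavior described above):
# the portion of the advertised window that exceeds the path BDP cannot be
# in flight and settles as a standing queue at the bottleneck.

def standing_queue_bits(advertised_window_bits: float, bdp_bits: float) -> float:
    """Residual queue that never drains while the window stays this large."""
    return max(0.0, advertised_window_bits - bdp_bits)

# Example: a 64 kB window (512,000 bits) against the 100,000-bit BDP from the
# earlier example leaves a ~412,000-bit stay queue inside the router.
print(standing_queue_bits(512_000, 100_000))
```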
[0019] The advertised window may be a static parameter that gets negotiated during the start of a TCP session as part of the TCP 3-way handshake procedure. This value may not change with respect to access networks and may remain constant for the entire TCP flow duration. This may affect access routers such as Wi-Fi nodes and eNodeBs. In a wireless link, the bandwidth may change over time, and it may be difficult for TCP endpoints to learn the accurate queue size inside routers and the bandwidth-delay product value. The TCP protocol is conventionally slow to react to such changes. A user can easily see the impact of buffer bloat when two flows, such as a Mice (web transaction) flow and an Elephant (YouTube or video session) flow, are concurrently received by the receiver. Moreover, when more than one TCP flow is destined for a receiver, the effect may be especially pronounced, such as when video is mixed with other background/foreground traffic.
[0020] Thus, there may be a mismatch between the advertised window size and the BDP value between sender and receiver, which can create a standing queue or buffer bloat. A side-effect of this is delay with no improvement in throughput.
[0021] Buffer bloat can arise when there is a mismatch between window size and BDP value. It is challenging to choose the exact window sizes at senders while queues manifest at bottleneck gateways such as eNodeBs. Furthermore, a TCP sender cannot compute the window size, as the bottleneck bandwidth and RTT may change constantly and may not be known to the sender.
[0022] Figure 6 illustrates various sources of network delay. Most of the delay components may be predictable, with the exception of stochastic delay, which is random and based on current network conditions and the variation observed in bandwidth and delay. This delay may affect the user experience of the services provided over the network.
[0023] Buffer bloat issues, such as persistent or full buffers, may point to the value of Active Queue Management (AQM), in view of larger buffer sizes, greater numbers of delay-sensitive applications, and the increasing ubiquity of large or streaming downloads. Active Queue Management techniques can include techniques such as Controlled Delay (CoDel) and Explicit Congestion Notification (ECN). These techniques, however, may not be applicable in all environments. Conventionally, TCP/IP networks signal congestion by dropping packets, and most AQM techniques ultimately suggest dropping packets.
SUMMARY:
[0024] According to a first embodiment, a method can include determining a variation in a bandwidth-delay product. The method can also include performing at least one flow management process based on the variation in the bandwidth-delay product.
[0025] In a variation, the method can be performed by a RACS.
[0026] In a variation, the determining can be based on radio conditions information from an access point.
[0027] In a variation the access point can be an eNode B.
[0028] In a variation the at least one flow management process can include performing a fair share queuing for each user equipment flow of a plurality of user equipment flows.
[0029] In a variation, the determining can be based on a buffer status report.
[0030] In certain variations, the buffer status report may be a user equipment buffer status report or an eNode B buffer status report.
[0031] In a variation, the determining can include performing or receiving a minimal bandwidth-delay product path determination.
[0032] In a variation, the performing the minimal bandwidth-delay product path determination can include determining a round trip time between a RACS and a remote server.
[0033] In a variation, the performing the minimal bandwidth-delay product path determination can include determining a maximum queue depth between the RACS and the remote server.
[0034] In a variation, the at least one flow management process can include sending a calculated maximum queue depth to a remote transmission control protocol server.
[0035] In a variation, the at least one flow management process can include sending a bandwidth-delay product measurement timestamp to a user equipment client.
[0036] In a variation, the at least one flow management process can include providing an explicit congestion notification.
[0037] In a variation, the explicit congestion notification can be sent toward both endpoints of an end-to-end communication.
[0038] In a variation, the at least one flow management process can include providing bandwidth-delay product guidance to at least one of a server or a client.
[0039] In a variation, the determining can include performing a prediction or bandwidth-delay product estimation.
[0040] In a variation the determining can include computing a current window size and queue size required per flow.
[0041] In a variation, the at least one flow management process can include providing a bandwidth-delay product value.
[0042] In a variation, the value can be provided in an enriched header.
[0043] In a variation, the method can include measuring a bottleneck queue built up inside a radio access network, analyzing measurements of the bottleneck queue, and signalling at least one adaptive buffer proposal for a remote endpoint based on the analysis of the measurements.
[0044] In a variant, the analysis comprises determining a bandwidth-delay product variation.
[0045] In a variant, the measuring can include monitoring and analyzing all user equipment flows in each direction.
[0046] In a variant, the remote endpoint can be a transmission control protocol endpoint.
[0047] In a variant, the method can further include learning buffering between an access point and RACS, and between RACS and a remote endpoint independently.
[0048] In a variant, the access point can be an eNode B.
[0049] In a variant, the method can include managing an eNode B's layer-2 and layer-3 buffers for all user equipment sessions.
[0050] In a variant, the method can further include gathering data from a user equipment buffer status report, wherein analyzing the measurements comprises analyzing the data.
[0051] In a variant, the method can further include performing per-flow fair share queuing based on the analysis of the measurements.
[0052] In a variant, the method can include categorizing a plurality of flows based on each flow's expected impact.
[0053] In a variant, the method can include dynamically adjusting a bandwidth-delay product value based on end-to-end queue depth.
[0054] According to a second embodiment, a method can include receiving an indication of variation in bandwidth-delay product. The method can also include performing a congestion adaptation based on the received indication.
[0055] In a variation, the adaptation can include calculating an advertisement window value based on the changed bandwidth-delay product.
[0056] In a variant, the method can include using the bandwidth-delay product variation to adapt at least one of a size of an initial congestion window, when to exit a slow start phase, a size of the congestion window during the congestion avoidance phase, or the size of the congestion window after a congestion event.
[0057] In a variant, the method can include using the bandwidth-delay product variation to adapt a receiver advertised window.
[0058] According to third and fourth embodiments, an apparatus can include means for performing the method according to the first and second embodiments respectively, in any of their variants.
[0059] According to fifth and sixth embodiments, an apparatus can include at least one processor and at least one memory and computer program code. The at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to perform the method according to the first and second embodiments respectively, in any of their variants.
[0060] According to seventh and eighth embodiments, a computer program product may encode instructions for performing a process including the method according to the first and second embodiments respectively, in any of their variants.
[0061] According to ninth and tenth embodiments, a non-transitory computer readable medium may encode instructions that, when executed in hardware, perform a process including the method according to the first and second embodiments respectively, in any of their variants.
[0062] According to eleventh and twelfth embodiments, a system may include at least one apparatus according to the third or fifth embodiments in communication with at least one apparatus according to the fourth or sixth embodiments, respectively in any of their variants.
BRIEF DESCRIPTION OF THE DRAWINGS:
[0063] For proper understanding of the invention, reference should be made to the accompanying drawings, wherein:
[0064] Figure 1 illustrates a sender sending packets on to a network equal to the advertised window.
[0065] Figure 2 illustrates sender and receiver being connected via different links and packets getting buffered inside a router.
[0066] Figure 3 illustrates a TCP connection after one RTT.
[0067] Figure 4 illustrates a steady state condition being reached when the receiver generates ACKs regularly.
[0068] Figure 5 illustrates queue size versus time for a TCP session.
[0069] Figure 6 illustrates various sources of network delay.
[0070] Figure 7 illustrates various input sources and a bandwidth-delay product determination procedure, according to certain embodiments.
[0071] Figure 8 illustrates managing queue size based on bandwidth-delay product values, according to certain embodiments.
[0072] Figure 9 illustrates a method according to certain embodiments.
[0073] Figure 10 illustrates another method according to certain embodiments.
[0074] Figure 11 illustrates a system according to certain embodiments.
DETAILED DESCRIPTION:
[0075] Certain embodiments can avoid buffering an excessive amount of data and quickly filling the queue. In certain embodiments, the queue buffer size can be adjusted to hold enough data while still achieving 100% link utilization with maximum throughput.
[0076] More particularly, certain embodiments provide a mechanism for wireless networks with the help of a Radio Applications Cloud Server (RACS), wherein the mechanism may be able to identify the flows and apply flow-based treatment, separating elephant and mice flows for each user. In other words, certain embodiments may implement adaptive queue management techniques and avoid excessive buffers inside the eNodeB Layer 3 and Layer 2.
[0077] Certain embodiments may provide a proactive mechanism to learn and self-adjust the queue size in an access network, such as an eNodeB, and make buffers adaptive. Furthermore, certain embodiments provide an implementation in RACS to assist the eNodeB in reducing latency and achieving effective link utilization. Moreover, certain embodiments provide a discovery scheme to implement an adaptive TCP window between client and server. Additionally, certain embodiments provide a mechanism compliant with ongoing standards discussion in the IETF Active Queue Management (AQM) working group and the ETSI Mobile Edge Computing (MEC) working group. Furthermore, certain embodiments provide a mechanism to enable proactive measures to ensure that no session dominates throughput and latency.
[0078] Certain embodiments can measure the queue build-up at a bottleneck node. More specifically, certain embodiments can use a RACS inside the RAN to measure the bottleneck queue build-up. RACS can be implemented as part of a radio access network solution and can be tightly coupled to an eNodeB. A RACS endpoint can be used to monitor and analyze all UE flows in each direction. The RACS can then analyze and propose adaptive buffers for queues in each direction, and signal remote TCP endpoints.
[0079] Certain embodiments provide a discovery procedure to learn the buffering between the eNodeB and RACS, and between RACS and the remote TCP endpoint, independently. Such embodiments may eliminate the "problem of stay," that is, standing queues, in the RAN and in the backhaul.
[0080] Information from additional procedures, such as to find the data present in a UE Buffer Status Report (BSR), can also be communicated to RACS. RACS can supplement the procedures to perform per-flow fair share queuing.
[0081] Servers can cooperate with RACS and an operator, can have explicit notification about the bandwidth-delay product value, and can adapt the TCP congestion control mechanism to follow bandwidth-delay product variations smoothly.
[0082] A remote server does not necessarily need to have a pre-agreement with RACS. Changes in the bandwidth-delay product can be communicated in combination with explicit congestion notification, so as to avoid sending packets using the static advertised values. Also, a new window advertisement can reflect current bandwidth-delay product values.
[0083] Certain embodiments can provide a mechanism that can help to manage an eNodeB's Layer-2 and Layer-3 buffers for all UE sessions with the help of a RACS server. Moreover, certain embodiments can provide management of per-flow queuing for handling Elephant and Mice flows in RACS. Furthermore, certain embodiments can involve providing identification of Elephant and Mice flows per UE in RACS. In certain embodiments, a mechanism can normalize different flows depending on the L2 and L3 buffer size of the eNB.
[0084] Additionally, certain embodiments can provide a mechanism to derive a UE's E2E queue depth from a combination of inputs including the UE's current number of flows, flow-completion time, the eNB's number of active users, the buffer size in the eNB (downlink), and the buffer size in the UE (uplink). Using the E2E queue depth, the bandwidth-delay product value can be dynamically adjusted.
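The text lists the inputs to the E2E queue-depth derivation but not the formula. Purely as a hypothetical illustration, one way such inputs might be combined is sketched below; every expression here is an editorial assumption, not the patented computation.

```python
# Hypothetical sketch of an E2E queue-depth derivation from the inputs named
# above. The weighting and combination are illustrative assumptions only.

def e2e_queue_depth_bits(enb_dl_buffer_bits: float, ue_ul_buffer_bits: float,
                         active_users: int) -> float:
    # Downlink share: this UE's slice of the eNB buffer across active users.
    dl_share = enb_dl_buffer_bits / max(1, active_users)
    # Uplink contribution comes from the UE's own send buffer.
    return dl_share + ue_ul_buffer_bits

def adjusted_bdp_bits(measured_bdp_bits: float, queue_depth_bits: float) -> float:
    # Keep the effective BDP from exceeding what the end-to-end queues can hold.
    return min(measured_bdp_bits, queue_depth_bits)
```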
[0085] Certain embodiments can use a technique such as Pair-Packet to estimate the queue depth between eNodeB and Remote Server. Similarly, certain embodiments can use a technique such as Pair-Packet to estimate the queue depth between eNodeB and UE.
[0086] Certain embodiments can provide a mechanism to inform the bandwidth-delay product value to nodes, both remote server and client, along with explicit congestion notification communication to avoid congestion collapse.
[0087] Certain embodiments can provide a technique to generate explicit congestion notification towards remote servers whenever the bandwidth-delay product needs an adjustment per flow. Furthermore, in certain embodiments explicit congestion notification capability can be successfully negotiated among the UE, server, and RACS.
[0088] In certain embodiments, RACS can act as an ECN-aware router and can set a mark in the IP header towards the receiver (UE), instead of dropping a packet, in order to signal the impending congestion. Then the receiver of the packet (which contains the explicit congestion notification flag) can echo the congestion indication to the sender (server), which can reduce the sender's transmission rate as it would in the packet-drop case.
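This ECN behavior can be sketched as follows, using the standard two-bit IP ECN codepoints from RFC 3168; the queue threshold and the packet representation are illustrative assumptions.

```python
# Minimal sketch of the ECN marking described above (RFC 3168 codepoints).
NOT_ECT, ECT_1, ECT_0, CE = 0b00, 0b01, 0b10, 0b11

def forward(packet_ecn_bits: int, queue_fill_ratio: float,
            mark_threshold: float = 0.5):
    """Return (ecn_bits, dropped) for a packet traversing the RACS queue."""
    if queue_fill_ratio < mark_threshold:
        return packet_ecn_bits, False          # no congestion, pass through
    if packet_ecn_bits in (ECT_0, ECT_1):
        # ECN-capable flow: mark Congestion Experienced instead of dropping.
        # The UE echoes this to the server, which slows down as it would
        # after a packet loss.
        return CE, False
    # Non-ECN-capable flow: fall back to a drop to signal congestion.
    return packet_ecn_bits, True
```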
[0089] In certain embodiments, an implementation in remote endpoints can use the bandwidth-delay product values for adapting various things, such as the following: the size of the initial congestion window, when to exit the slow start phase, the size of the congestion window during the congestion avoidance phase, the size of the window after a congestion event, or any combination thereof. The bandwidth-delay product can give the remote endpoint insight into the network, in order to avoid buffer bloat in the radio network and also in the backhaul network. The bandwidth-delay product change notification can help the server to adjust the receiver advertised window.
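As a hedged illustration of how a remote endpoint might consume a signaled bandwidth-delay product, the following sketch maps the BDP to each of the listed parameters. The specific mappings and the segment size are assumptions; the text only states that these values can be adapted.

```python
# Illustrative mapping of a signaled BDP to the congestion parameters listed
# above. The constants and mappings are assumptions, not the patented method.
MSS = 1460  # bytes per segment, illustrative

def adapt_to_bdp(bdp_bytes: float) -> dict:
    bdp_segments = max(1, int(bdp_bytes // MSS))
    return {
        "initial_cwnd": min(10, bdp_segments),       # don't burst past the path
        "ssthresh": bdp_segments,                     # exit slow start near BDP
        "cwnd_cap": bdp_segments,                     # avoidance-phase ceiling
        "post_loss_cwnd": max(1, bdp_segments // 2),  # restart at half the BDP
    }
```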
[0090] Certain embodiments can work for both encrypted and unencrypted traffic. Furthermore, certain embodiments can work for all types of wireless networks.
[0091] The following are possible, non-exhaustive use cases of scenarios that may be affected by a change in bandwidth-delay product: a UE engaged with two sessions, foreground and background, with the combinations shown in Table 1.
[Table 1: combinations of concurrent foreground and background sessions per UE; original table graphic not recoverable from the source.]
[0092] A TCP session can communicate the available link capacity, as the advertised window, to a remote peer during the TCP session start. The UE may not have a mechanism to know what the available bandwidth is. Available bandwidth, by definition, can be considered the unused link capacity between sender and receiver. Typically, the size of the window may be larger than the bandwidth. TCP can rise to the available bandwidth using the Additive Increase Multiplicative Decrease (AIMD) algorithm. The TCP algorithm may be reactive, and may send a window size worth of packets and then inject further packets based on received ACK packets. It may be challenging to adjust the bandwidth-delay product value to the congestion window value for the following reasons: lack of bandwidth-delay product change detection methods, lack of implementation of queue length adjustment inside the access routers, lack of routers performing per-flow queuing as they can see only IP packets, and lack of information on how and when an explicit congestion notification aware router should send explicit congestion notification to remote nodes.
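The AIMD process mentioned above can be stated schematically as follows; the constants are the textbook values (increase by one segment per RTT, halve on loss) rather than anything specified by this document.

```python
# Schematic AIMD update, using textbook constants (an assumption; the
# document only names the algorithm).

def aimd_update(cwnd_segments: float, loss_event: bool) -> float:
    if loss_event:
        return max(1.0, cwnd_segments / 2.0)  # multiplicative decrease
    return cwnd_segments + 1.0                 # additive increase, per RTT
```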
[0093] Figure 7 illustrates various input sources and a bandwidth-delay product determination procedure, according to certain embodiments. Thus, Figure 7 describes bandwidth-delay product detection, estimation and update processes, according to certain embodiments.
[0094] As shown in Figure 7, at 1, a RACS can receive radio conditions information from the eNodeB. The bandwidth-delay product (BDP) may keep changing in a wireless network due to movement of a user or the load in the network. This change may influence the queue size of the eNodeB. A user session can start with a pre-configured window size and can establish communication with the remote server. In certain embodiments, as the communication progresses, the RACS can determine a manageable queue size according to the changed conditions and can perform a fair share queuing discipline for each UE flow.
[0095] At 2, the RACS can receive a user equipment buffer status report. The UE buffer status can describe the amount of data that is currently in a UE buffer to be sent towards the eNodeB. For example, when a user is about to upload pictures or has data to send, the UE can communicate such information (UL-BSR) to the eNodeB. Based on the UL-BSR information, the RACS can determine a corresponding bandwidth-delay product.
[0096] At 3, the RACS can receive an eNodeB buffer status report. This buffer status report may be a downlink BSR and may be computed as part of the Layer-2 RLC buffer. The RLC buffer may have a fixed buffer size, and may not change when the bandwidth changes over time. The mismatch in buffer size between Layer-2 (RLC-MAC) and Layer-3 (egress and ingress IP queue) inside the eNodeB can cause delay inside the eNodeB.
[0097] At 4, the RACS may make or receive a minimal bandwidth-delay product path determination. It is possible that the backhaul path between the eNodeB and the remote server is more congested than the path between the eNodeB and the UE. In that scenario, guidance from the RAN may not be sufficient. Therefore, a discovery protocol may use TCP time stamp (TS) options to determine the round trip time (RTT) between the RACS and the remote server. Just determining the RTT may not be sufficient. Thus, certain embodiments may determine and approximate the queue size (or buffer) in path nodes between the RACS and the remote server. For this, a determination of the maximum queue depth between the RACS and the remote server may be made.
[0098] The following is one example of a way by which a maximum queue depth could be established. Two TCP packets with Time Stamp (TS) options can be generated from the RACS towards the remote server at a fixed interval from one another. The responses, and the difference between them, can be computed. The computed response and the difference between responses can determine the RTT and the processing delay, or overall queue depth, incurred from the RACS to the remote server. The maximum of the two values (eNodeB-RACS, RACS-remote server) can be used for bandwidth-delay product adjustment and can be sent to a remote TCP server.
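A sketch of this two-packet probe is shown below. Interpreting the extra spreading of the pair as time spent queued at the bottleneck, and converting it to bits at the bottleneck rate, is a simplification assumed for illustration.

```python
# Sketch of the two-probe measurement described above. send_times/recv_times
# hold [t0, t1] for the two probes; probe_gap is the fixed send interval.

def probe_path(send_times, recv_times, probe_gap, bottleneck_rate_bps):
    """Return (rtt_seconds, queue_depth_bits) for one path segment."""
    rtt = recv_times[0] - send_times[0]
    # Extra spreading of the pair beyond the interval it was sent with is
    # taken here to reflect time the second probe spent queued (assumption).
    spread = (recv_times[1] - recv_times[0]) - probe_gap
    queue_depth_bits = max(0.0, spread) * bottleneck_rate_bps
    return rtt, queue_depth_bits

# Per the text, the RACS can take the maximum of the (eNodeB-RACS) and
# (RACS-remote server) estimates when adjusting the bandwidth-delay product.
```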
[0099] Optionally, at 5, bandwidth-delay product measurement timestamps towards UE can be used. Such timestamps can be sent to a UE client so that the UE can directly adjust the congestion window parameter accordingly.
[0100] At 6, the RACS can provide explicit congestion notification generation. Routers conventionally do not know how and when to generate explicit congestion notification. Such routers do not compute their queues and adjust based on the current bandwidth-delay product. At the RACS, for each of the active TCP flows, explicit congestion notification can be generated towards both endpoints. Hence packet loss inside the routers can be avoided. Moreover, this notification may result in the sender slowing down and taking corrective action based on the bandwidth-delay product.
[0101] At 7, bandwidth-delay product guidance can be sent to servers, such as those servers that have a pre-arrangement with the RACS or operator. This information can be communicated to either the server or the client, or both. In this way, for example, a remote TCP endpoint can adjust its sending rate to match the bandwidth-delay product. In another alternative, a TCP extension header can be used to communicate the changed bandwidth-delay product values, and new advertisement window values can be computed from this proposed bandwidth-delay product.
[0102] Figure 8 illustrates managing queue size based on bandwidth-delay product values, according to certain embodiments.
[0103] As mentioned above, Table 1 describes several possible flows that can be considered to illustrate real-world usage scenarios. Figure 8 describes how a persistent queue is managed for an ongoing TCP session. To see how certain embodiments can be incorporated in a RAN product such as RACS, the following scenarios are considered: in a first scenario, a user equipment is engaged with Server E (Elephant flow) for a long duration session, typically FTP, video download, or video play; and in a second scenario, a user equipment is engaged with Server M (Mice flow) for a short duration session, such as a web session, a chat session, or a background session like syncing an application. As mentioned above, several flow combinations are possible when considering elephant, mice, foreground, and background flows. In all cases, several implementation choices are possible. Certain embodiments can be implemented in a variety of deployment and usage scenarios.
[0104] Figure 8 describes message interaction by considering two flows, namely Flow_E, a long duration flow that demands both bandwidth and low latency, and Flow_M, a short duration flow such as a sync or web traffic request/response.
[0105] A user of the UE can start a video session and may want to watch the video session in full. A video session may demand both low latency and bandwidth. Typically, video sessions from sites such as YouTube can be delivered in bursts.
[0106] In this example, as shown in Figure 8, at 1 the UE can send an Uplink BSR along with the TCP session request, or via another supplementary radio channel information message. The UE can send such information whenever the UE is queried for it. In certain embodiments the UE can make such information available at regular intervals, when the BSR is greater than a certain threshold value.
[0107] Similarly, at 2, the eNodeB can send radio conditions information. As mentioned above, bandwidth may not equal real "speed." Bandwidth is capacity over an interval, whereas real "speed" may be measured by the amount of latency (lag), between an action and a response, with lower latency corresponding to higher speed.
[0108] At 3, the RACS scheduler can keep managing the individual flow level queue irrespective of underlying radio layer technology enhancements. Meanwhile, an eNodeB can implement basic scheduling per TTI including frequency, time, QCI, and subscriber. In addition to this, RACS can complement the queue discipline with fair share, and can perform per-flow queuing for each UE. The RACS can dynamically adjust the queues so as to avoid standing queues. In this implementation, the fixed eNodeB RLC buffers do not have data filled in all the time. RACS can fill these Layer-2 buffers just in time, or just ahead of time, with the required data when needed. The amount of data required at a given point can be determined by TTI values, RBs per UE, the maximum number of RBs per TTI for the UE, the current load on the UE, the average cell bandwidth computed from received radio condition information, and the like.
[0109] At 4, the RACS server can create one queue for each TCP flow per UE. This mechanism can allow even distribution of TCP flow buffers. As mentioned above, TCP may not work well for competing flows, and there may be a risk that TCP throughput goes down drastically. To avoid this, or for other reasons, RACS can perform scheduling of window adjustment in each direction. RACS can also ensure 100% link utilization. As per Figure 8, a message may not yet have reached the remote server E. These pre-computed values can, therefore, be applied when the data is being exchanged.
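The just-in-time Layer-2 fill described at step 3 can be sketched as follows. The text names the inputs (TTI values, RBs per UE, load, cell bandwidth) but not the formula, so the combination below is an illustrative assumption.

```python
# Hypothetical just-in-time Layer-2 fill: buffer only enough data to keep the
# scheduler busy over a short lookahead window, avoiding standing queues in
# the fixed RLC buffers. The formula is an illustrative assumption.

def l2_fill_bytes(max_rb_per_tti: int, bits_per_rb: int,
                  tti_sec: float, lookahead_sec: float) -> int:
    ttis_ahead = round(lookahead_sec / tti_sec)
    return (max_rb_per_tti * bits_per_rb * ttis_ahead) // 8

# Example (assumed figures): 10 RBs/TTI at 720 bits/RB, 1 ms TTI, filled
# 5 ms ahead of time.
print(l2_fill_bytes(10, 720, 0.001, 0.005))  # -> 4500 bytes
```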
[0110] At 5, the result of the adjusted bandwidth-delay product can be reflected as a window parameter in flight. In a TCP session, the maximum window parameter may be sent only during the beginning and not during the session. These pre-computed values can be applied when the data is being exchanged and can also be communicated to a remote endpoint based on the current bandwidth-delay product calculation. In effect, instead of taking a fixed value and making the buffer full, certain embodiments may have an adaptive value at the start of the TCP session. During the TCP handshake, the RACS can negotiate explicit congestion notification capable transport. Thus, endpoints can implement explicit congestion notification functions.
[0111] At 6, there can be a background sync operation started by a phone. This may involve allowing multiple flows to be generated from the UE device towards different servers. Similar to 1, for the short flow at 6 the UE can start to establish a connection with a remote server, Server M. RACS may not yet know whether this is an Elephant or a Mice flow.
[0112] Steps 7 through 10 may correspond to steps 2-5, but for a short flow. During these steps, however, RACS may still not know whether this flow is properly categorized as an Elephant flow or a Mice flow. After step 10, the TCP handshakes for both the Elephant flow and the Mice flow can be active.
[0113] Considering Flow_E, the data can start flowing from the server towards the UE at 11. As the video data is long duration and bandwidth intensive, a window's worth of data may be in flight. But TCP slow-start and AIMD behavior can permit computation of timing and buffer sizing per flow inside the RACS. RACS can distinguish Elephant from Mice flows based on the volume of data being shipped in transit. For example, the average webpage size is much smaller compared to video data chunks or FTP data chunks.
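A minimal sketch of such volume-based classification follows; the byte threshold is an assumed cutoff chosen only to reflect that average web pages are far smaller than video or FTP chunks.

```python
# Volume-based flow classification, per the description above. The threshold
# is an illustrative assumption, not a value taken from the text.
ELEPHANT_THRESHOLD_BYTES = 1_000_000  # assumed cutoff

def classify(bytes_in_transit_so_far: int) -> str:
    return ("elephant" if bytes_in_transit_so_far >= ELEPHANT_THRESHOLD_BYTES
            else "mice")
```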
[0114] At 12, RACS may want to avoid a sender sending the window's worth of data that was agreed during the start of the TCP session. The window size thus can be adapted to the current ongoing network conditions. Already, in step 5, the window size can be communicated for current network conditions. It may happen that bandwidth conditions are good when TCP starts and worsen later. If continuous adaptation is not present, then the RACS queue may hold more data and may cause throughput and latency effects. In certain embodiments, a goal of RACS can be to adjust the current TCP window sizing closer to the bandwidth-delay product, and hence the queue size inside the RACS. To achieve this, RACS can perform per flow fair share scheduling. RACS can perform this irrespective of any TCP optimization and can avoid building an excessive fill of data buffers.
[0115] Various TCP optimizations can be applied with respect to ACKs. For example, in a first alternative, for each packet sent by server_E, the receiving UE may send one ACK packet. In a second alternative, the receiving UE may not send one ACK per packet; it could send one ACK for a window's worth of packets. In a third alternative, the receiving UE may combine the ACK with data in the UL direction, which could delay the ACK. In a fourth alternative, the receiving UE may combine the ACK with data and delay the ACK by a fixed amount. Other TCP optimizations applied to ACKs are also permitted as further alternatives.
[0116] At 12, RACS can compute a time difference between data sent (not as part of ACK correlation) and the acknowledged data, and can compute the real throughput.
[0117] Apart from the per-flow queuing, RACS can at 13 perform traffic shaping in such a way that eNodeB buffers are not filled only by dominating flows, but all packet flows to/from the UE get fair treatment. At any given time, the eNB can send data from any one of the flows equal to the maximum RBs of the UE per TTI, plus additional required packets from the same flow. This can ensure that no single flow dominates the eNodeB buffers.
[0118] At 14, as the process continues, TCP data and ACKs can be seen in each direction by RACS, and the queue size can be adjusted accordingly based on an available bandwidth (ABW) computed by RACS. The computed ABW for that flow may be not just guidance from the radio; the RACS may infer the actual ACKs and packet-pair differences and provide an adjusted window size as a function of the bandwidth-delay product value.
[0119] At 15, for the Mice session, the data or packets that are seen may be relatively few. It may even happen that TCP does not reach steady state. This step may be similar to step 11. Certain configuration values, such as average data bytes for a web session or short session, can be used to set a minimal initial advertised window size. These values can be used to derive the values in message 5 and message 10.
[0120] At 16, the process can be similar to process 12. However, now the data arrival rate and/or the number of bytes in the queue can determine the dominant flow inside the queue. As there are going to be fewer packets (or a lower data volume) when compared to Flow_E, the flow can be classified as Flow_M.
[0121] At 17, as the RACS does the flow based scheduling and shaping (or even mixing of traffic flows), the resultant queue that is seen inside the eNodeB or UE may contain uniform, normalized flows. Nevertheless, the situation may change.
[0122] At 18, when a radio condition changes, the eNodeB may inform RACS. Alternatively, RACS can, at 19, constantly compute the current window size and queue size required per flow. If the deviation of the queue or current window size is more than a threshold, then RACS can decide to perform queue (re)sizing and congestion window size readjustment.
[0123] At 20, RACS can keep learning the current bandwidth-delay product variation all the time, as described in Figure 8. For example, RACS can compute the value of the RTT between the UE and RACS. Moreover, RACS can compute the value of the RTT between RACS and the remote server. Combining the computed RTT values, RACS can compute a window size that is needed to accommodate 100% link utilization without overfilling the buffers.
[0124] Such computations can be made by various mechanisms. For example, for packets towards UE nodes, RACS can insert time stamp options into two consecutive packets. Thereby, RACS can learn both the RTT and the available bandwidth. For packets towards the remote server, RACS can insert time stamp options into two consecutive ongoing packets, and thereby similarly learn both the RTT and the available bandwidth. If the current bandwidth-delay product is lower than the window size, RACS can also send a window full condition to ensure that the difference between the previous congestion window and the current window is reached. This may involve full manipulation of TCP sequence numbering.
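The window computation described at step 20 can be sketched as follows; summing the two measured RTT segments into an end-to-end RTT is a simplifying assumption.

```python
# Sketch of the step-20 window computation: combine the two measured RTT
# segments and size the window to the end-to-end BDP. Treating the e2e RTT
# as a direct sum of the segments is a simplification.

def target_window_bits(rtt_ue_racs: float, rtt_racs_server: float,
                       available_bw_bps: float) -> float:
    e2e_rtt = rtt_ue_racs + rtt_racs_server
    # 100% link utilization without overfilling: window equals the BDP.
    return available_bw_bps * e2e_rtt
```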
[0125] At 21, RACS can adjust a queue buffer size internally, for example as a marker value. Then, at 22, the computed bandwidth-delay product value can be sent to a managed server having an existing SLA between the operator and the RACS vendor. In this situation, Header Enrichment (HE) or appropriate IP level extension information can be communicated for window values, along with TG (throughput guidance) information.
[0126] In the case of an unmanaged server, or any server in the Internet, at 23 an explicit congestion notification can be generated for that flow in the direction from which more data is coming. For example, it could be towards the server if more data is arriving from the server towards the UE, or towards both. This may ensure that the server does not send data at too fast a rate. Alternatively, a new extension can enable the remote TCP server to take a newly proposed bandwidth-delay product value and derive both congestion window parameters and a new receiver advertised window size.
[0127] Steps 18 to 22 can be performed when the bandwidth-delay product changes from low bandwidth to high bandwidth. As the TCP flows share one link, it may be difficult to assign priorities to each flow. However, RACS can simply be configured to avoid having one flow dominate the entire bandwidth. Thus, a worst case may be that there are N flows and each of the N flows has a window's worth of data in the RACS, but RACS can eventually perform per-flow shaping and reduce the queue size evenly.
[0128] In certain embodiments, identifiable message exchange may occur between a RACS and UE. Furthermore, in certain embodiments identifiable message exchange may occur between a RACS and remote server.
[0129] Certain embodiments may employ a TCP TS generation and packet pairing technique to determine a bottleneck between the UE and eNodeB/RACS and between RACS and the remote server. In certain embodiments a self-adjusting window size can be based on the bandwidth-delay product. Moreover, in certain embodiments the value of the self-adjusting window size may be exposed to both the UE and a remote server. Other embodiments may be identified based on the changed bandwidth-delay product value and the ABW for the remote peer.
[0130] Certain embodiments may have various benefits and/or advantages. For example, in certain embodiments no single application flow may dominate the radio buffers. Furthermore, certain embodiments may provide bottleneck detection inside the RAN and between the RAN and a remote server. Additionally, in certain embodiments RACS can perform per-flow queuing and shaping to normalize the eNodeB buffers. Furthermore, in certain embodiments RACS can perform window size adjustments and can generate explicit congestion notification to a remote server based on bandwidth-delay product changes. Additionally, in certain embodiments there may be no unmanaged or excessive buffers between endpoints in the RAN. TCP flows may, in certain embodiments, smoothly follow bandwidth-delay product changes and hence result in minimal retransmission.
[0131] Figure 9 illustrates a method according to certain embodiments. The method of Figure 9 may be performed by, for example, a RACS. The method may include, at 910, determining a variation in a bandwidth-delay product. The determining can be based on radio conditions information received from an access point at 902. In a variation the access point can be an eNode B.
[0132] The determining can be based on a buffer status report received at 904. In certain variations, the buffer status report may be a user equipment buffer status report or an eNode B buffer status report.
[0133] The determining can include performing or receiving a minimal bandwidth-delay product path determination at 906. For example, the performing the minimal bandwidth-delay product path determination can include determining a round trip time between a RACS and a remote server. As another example, the performing the minimal bandwidth-delay product path determination can include determining a maximum queue depth between the RACS and the remote server.
[0134] The determining can include, at 908, performing a prediction or bandwidth-delay product estimation. As another alternative, at 909, the determining can include computing a current window size and queue size required per flow.
[0135] The method may also include, at 920, performing at least one flow management process based on the variation in the bandwidth-delay product.
[0136] The at least one flow management process can include, at 921, performing a fair share queuing for each user equipment flow of a plurality of user equipment flows. The at least one flow management process can also or alternatively include, at 923, sending a calculated maximum queue depth to a remote transmission control protocol server. The at least one flow management process can also or alternatively include, at 925, sending a bandwidth-delay product measurement timestamp to a user equipment client.
[0137] The at least one flow management process can include providing, at 927, an explicit congestion notification. The explicit congestion notification can be sent toward one or both endpoints of an end-to-end communication.
[0138] The at least one flow management process can include, at 928, providing bandwidth-delay product guidance to at least one of a server or a client. In certain embodiments, the at least one flow management process can include, at 929, providing a bandwidth-delay product value. The value can be provided, for example, in an enriched header.
[0139] Figure 10 illustrates another method according to certain embodiments. The method of Figure 10 may be performed by a network element, such as a communication endpoint, for example a server.
[0140] As shown in Figure 10, a method can include, at 1010, receiving an indication of variation in bandwidth-delay product. The method can also include, at 1020, performing a congestion adaptation based on the received indication.
[0141] The adaptation can include such things as calculating an advertisement window value based on the changed bandwidth-delay product. In another example, the method can include using the bandwidth-delay product variation to adapt at least one of a size of an initial congestion window, when to exit a slow start phase, a size of the congestion window during the congestion avoidance phase, or the size of the congestion window after a congestion event. The method can, in various ways, include using the bandwidth-delay product variation to adapt a receiver advertised window.
[0142] Figure 11 illustrates a system according to certain embodiments of the invention. In one embodiment, a system may include multiple devices, such as, for example, at least one UE 1110, at least one throughput guidance entity 1120, which may be an eNB, RACS, RNC, or other base station or access point, and at least one information receiver 1130, which may be an OTT Server, UE, or other entity configured to receive throughput guidance or other congestion information.
[0143] Each of these devices may include at least one processor, respectively indicated as 1114, 1124, and 1134. At least one memory can be provided in each device, and indicated as 1115, 1125, and 1135, respectively. The memory may include computer program instructions or computer code contained therein. The processors 1114, 1124, and 1134 and memories 1115, 1125, and 1135, or a subset thereof, can be configured to provide means corresponding to the various blocks of Figures 9 or 10.
[0144] As shown in Figure 11, transceivers 1116, 1126, and 1136 can be provided, and each device may also include an antenna, respectively illustrated as 1117, 1127, and 1137. Other configurations of these devices, for example, may be provided. For example, information receiver 1130 may be configured for wired communication, in addition to wireless communication, and in such a case antenna 1137 can illustrate any form of communication hardware, without requiring a conventional antenna.
[0145] Transceivers 1116, 1126, and 1136 can each, independently, be a transmitter, a receiver, or both a transmitter and a receiver, or a unit or device that is configured both for transmission and reception.
[0146] Processors 1114, 1124, and 1134 can be embodied by any computational or data processing device, such as a central processing unit (CPU), application specific integrated circuit (ASIC), or comparable device. The processors can be implemented as a single controller, or a plurality of controllers or processors.
[0147] Memories 1115, 1125, and 1135 can independently be any suitable storage device, such as a non-transitory computer-readable medium. A hard disk drive (HDD), random access memory (RAM), flash memory, or other suitable memory can be used. The memories can be combined on a single integrated circuit with the processor, or may be separate from the one or more processors. Furthermore, the computer program instructions stored in the memory, which may be processed by the processors, can be any suitable form of computer program code, for example, a compiled or interpreted computer program written in any suitable programming language.
[0148] The memory and the computer program instructions can be configured, with the processor for the particular device, to cause a hardware apparatus such as UE 1110, throughput guidance entity 1120, and information receiver 1130, to perform any of the processes described herein (see, for example, Figures 9 and 10). Therefore, in certain embodiments, a non-transitory computer-readable medium can be encoded with computer instructions that, when executed in hardware, perform a process such as one of the processes described herein. Alternatively, certain embodiments of the invention can be performed entirely in hardware.
[0149] Furthermore, although Figure 11 illustrates a system including a UE, throughput guidance entity, and information receiver, embodiments of the invention may be applicable to other configurations, and configurations involving additional elements. For example, although not shown, additional UEs may be present, and additional core network elements may be present, as illustrated in Figure 8, for example.
[0150] One having ordinary skill in the art will readily understand that the invention as discussed above may be practiced with steps in a different order, and/or with hardware elements in configurations which are different than those which are disclosed. Therefore, although the invention has been described based upon these preferred embodiments, it would be apparent to those of skill in the art that certain modifications, variations, and alternative constructions would be apparent, while remaining within the spirit and scope of the invention.
[0151] According to a first embodiment, a method can include determining a variation in a bandwidth-delay product. The method can also include performing at least one flow management process based on the variation in the bandwidth-delay product.
[0152] In a variation, the method can be performed by a RACS.
[0153] In a variation, the determining can be based on radio conditions information from an access point.
[0154] In a variation the access point can be an eNode B.
[0155] In a variation the at least one flow management process can include performing a fair share queuing for each user equipment flow of a plurality of user equipment flows.
[0156] In a variation, the determining can be based on a buffer status report.
[0157] In certain variations, the buffer status report may be a user equipment buffer status report or an eNode B buffer status report.
[0158] In a variation, the determining can include performing or receiving a minimal bandwidth-delay product path determination.
[0159] In a variation, the performing the minimal bandwidth-delay product path determination can include determining a round trip time between a RACS and a remote server.
[0160] In a variation, the performing the minimal bandwidth-delay product path determination can include determining a maximum queue depth between the RACS and the remote server.
[0161] In a variation, the at least one flow management process can include sending a calculated maximum queue depth to a remote transmission control protocol server.
[0162] In a variation, the at least one flow management process can include sending a bandwidth-delay product measurement timestamp to a user equipment client.
[0163] In a variation, the at least one flow management process can include providing an explicit congestion notification.
[0164] In a variation, the explicit congestion notification can be sent toward both endpoints of an end-to-end communication.
[0165] In a variation, the at least one flow management process can include providing bandwidth-delay product guidance to at least one of a server or a client.
[0166] In a variation, the determining can include performing a prediction or bandwidth-delay product estimation.
[0167] In a variation the determining can include computing a current window size and queue size required per flow.
[0168] In a variation, the at least one flow management process can include providing a bandwidth-delay product value.
[0169] In a variation, the value can be provided in an enriched header.
[0170] In a variation, the method can include measuring a bottleneck queue built up inside a radio access network, analyzing measurements of the bottleneck queue, and signalling at least one adaptive buffer proposal for a remote endpoint based on the analysis of the measurements.
[0171] In a variant, the analysis comprises determining a bandwidth-delay product variation.
[0172] In a variant, the measuring can include monitoring and analyzing all user equipment flows in each direction.
[0173] In a variant, the remote endpoint can be a transmission control protocol endpoint.
[0174] In a variant, the method can further include learning buffering between an access point and RACS, and between RACS and a remote endpoint independently.
[0175] In a variant, the access point can be an eNode B.
[0176] In a variant, the method can include managing an eNode B's layer-2 and layer-3 buffers for all user equipment sessions.
[0177] In a variant, the method can further include gathering data from a user equipment buffer status report, wherein analyzing the measurements comprises analyzing the data.
[0178] In a variant, the method can further include performing per-flow fair share queuing based on the analysis of the measurements.
[0179] In a variant, the method can include categorizing a plurality of flows based on each flow's expected impact.
[0180] In a variant, the method can include dynamically adjusting a bandwidth-delay product value based on end-to-end queue depth.
[0181] According to a second embodiment, a method can include receiving an indication of variation in bandwidth-delay product. The method can also include performing a congestion adaptation based on the received indication.
[0182] In a variation, the adaptation can include calculating an advertisement window value based on the changed bandwidth-delay product.
[0183] In a variant, the method can include using the bandwidth-delay product variation to adapt at least one of a size of an initial congestion window, when to exit a slow start phase, a size of the congestion window during the congestion avoidance phase, or the size of the congestion window after a congestion event.
[0184] In a variant, the method can include using the bandwidth-delay product variation to adapt a receiver advertised window.
[0185] According to third and fourth embodiments, an apparatus can include means for performing the method according to the first and second embodiments respectively, in any of their variants.
[0186] According to fifth and sixth embodiments, an apparatus can include at least one processor and at least one memory and computer program code. The at least one memory and the computer program code can be configured to, with the at least one processor, cause the apparatus at least to perform the method according to the first and second embodiments respectively, in any of their variants.
[0187] According to seventh and eighth embodiments, a computer program product may encode instructions for performing a process including the method according to the first and second embodiments respectively, in any of their variants.
[0188] According to ninth and tenth embodiments, a non-transitory computer readable medium may encode instructions that, when executed in hardware, perform a process including the method according to the first and second embodiments respectively, in any of their variants.
[0189] According to eleventh and twelfth embodiments, a system may include at least one apparatus according to the third or fifth embodiments in communication with at least one apparatus according to the fourth or sixth embodiments, respectively in any of their variants.

Claims

WE CLAIM:
1. A method, comprising:
determining a variation in a bandwidth-delay product; and
performing at least one flow management process based on the variation in the bandwidth-delay product.
2. The method of claim 1, wherein the method is performed by a radio applications cloud server.
3. The method of claim 1 or claim 2, wherein the determining is based on radio conditions information from an access point.
4. The method of claim 3, wherein the access point comprises an evolved Node B.
5. The method of any of claims 1-4, wherein the at least one flow management process comprises performing a fair share queuing for each user equipment flow of a plurality of user equipment flows.
6. The method of any of claims 1-5, wherein the determining is based on a buffer status report.
7. The method of claim 6, wherein the buffer status report comprises at least one of a user equipment buffer status report or an evolved Node B buffer status report.
8. The method of any of claims 1-7, wherein the determining comprises performing or receiving a minimal bandwidth-delay product path determination.
9. The method of claim 8, wherein the performing the minimal bandwidth-delay product path determination comprises determining a round trip time between the radio applications cloud server and a remote server.
10. The method of claim 8 or claim 9, wherein the performing the minimal bandwidth-delay product path determination comprises determining a maximum queue depth between the radio applications cloud server and the remote server.
11. The method of any of claims 1-10, wherein the at least one flow management process comprises sending a calculated maximum queue depth to a remote transmission control protocol server.
12. The method of any of claims 1-11, wherein the at least one flow management process comprises sending a bandwidth-delay product measurement timestamp to a user equipment client.
13. The method of any of claims 1-12, wherein the at least one flow management process comprises providing an explicit congestion notification.
14. The method of claim 13, wherein the explicit congestion notification is sent toward both endpoints of an end-to-end communication.
15. The method of any of claims 1-14, wherein the at least one flow management process comprises providing bandwidth-delay product guidance to at least one of a server or a client.
16. The method of any of claims 1-15, wherein the determining comprises performing a prediction or bandwidth-delay product estimation.
17. The method of any of claims 1-16, wherein the determining comprises computing a current window size and queue size required per flow.
18. The method of any of claims 1-17, wherein the at least one flow management process comprises providing a bandwidth-delay product value.
19. The method of claim 18, wherein the bandwidth-delay product value is provided in an enriched header.
20. The method of any of claims 1-19, further comprising:
measuring a bottleneck queue built up inside a radio access network;
analyzing measurements of the bottleneck queue; and
signaling at least one adaptive buffer proposal for a remote endpoint based on the analysis of the measurements.
21. The method of claim 20, wherein the analysis comprises determining a bandwidth-delay product variation.
22. The method of claim 20 or 21, wherein the measuring comprises monitoring and analyzing all user equipment flows in each direction.
23. The method of any of claims 20-22, wherein the remote endpoint comprises a transmission control protocol endpoint.
24. The method of any of claims 20-23, further comprising:
learning buffering between an access point and the radio applications cloud server, and between the radio applications cloud server and a remote endpoint independently.
25. The method of claim 24, wherein the access point comprises an evolved Node B.
26. The method of any of claims 1-25, further comprising:
managing an evolved Node B's layer-2 and layer-3 buffers for all user equipment sessions.
27. The method of any of claims 20-26, further comprising:
gathering data from a user equipment buffer status report, wherein analyzing the measurements comprises analyzing the data.
28. The method of any of claims 20-27, further comprising:
performing per-flow fair share queuing based on the analysis of the measurements.
29. The method of claim 28, further comprising:
categorizing a plurality of flows based on each flow's expected impact.
30. The method of any of claims 20-29, further comprising:
dynamically adjusting a bandwidth-delay product value based on end-to-end queue depth.
31. A method, comprising:
receiving an indication of variation in a bandwidth-delay product; and
performing a congestion adaptation based on the received indication.
32. The method of claim 31, wherein the congestion adaptation comprises calculating an advertisement window value based on the changed bandwidth-delay product.
33. The method of claim 31 or 32, further comprising:
using the bandwidth-delay product variation to adapt at least one of a size of an initial congestion window, when to exit a slow start phase, a size of a congestion window during a congestion avoidance phase, or a size of the congestion window after a congestion event.
34. The method of any of claims 31-33, further comprising:
using the bandwidth-delay product variation to adapt a receiver advertised window.
35. An apparatus, comprising:
means for performing the method according to any of claims 1-34.
36. An apparatus, comprising:
at least one processor; and
at least one memory and computer program code,
wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform the method according to any of claims 1-34.
37. A computer program product encoding instructions for performing a process, the process comprising the method according to any of claims 1-34.
38. A non-transitory computer readable medium encoded with instructions that, when executed in hardware, perform a process, the process comprising the method according to any of claims 1-34.
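For illustration, the sketches that follow render a few of the claimed steps in Python; they are minimal interpretations under stated assumptions, and every function name, constant, and example figure in them is hypothetical rather than taken from the specification. This first sketch covers the bandwidth-delay product estimation, variation determination, and per-flow window computation of claims 1, 16, and 17, assuming the bottleneck capacity is shared equally among active flows.

    def bandwidth_delay_product(bandwidth_bps: float, rtt_s: float) -> float:
        """Bandwidth-delay product in bits: link capacity times round-trip time."""
        return bandwidth_bps * rtt_s

    def per_flow_window_bytes(bdp_bits: float, n_flows: int) -> int:
        """Current window size required per flow (claim 17), under an
        assumed equal share of the bottleneck among active flows."""
        return int(bdp_bits / 8 / max(n_flows, 1))

    def bdp_variation(prev_bdp: float, new_bdp: float) -> float:
        """Relative variation that flow management (claim 1) would act on."""
        return (new_bdp - prev_bdp) / prev_bdp if prev_bdp else 0.0

    # Example: cell capacity drops from 75 Mbit/s to 30 Mbit/s at a 40 ms RTT.
    old_bdp = bandwidth_delay_product(75e6, 0.040)   # 3,000,000 bits
    new_bdp = bandwidth_delay_product(30e6, 0.040)   # 1,200,000 bits
    print(per_flow_window_bytes(new_bdp, 10))        # 15000 bytes per flow
    print(f"variation: {bdp_variation(old_bdp, new_bdp):+.0%}")  # -60%

In this reading, a 60% capacity drop at constant delay yields a 60% drop in bandwidth-delay product, which is the variation on which the flow management of claim 1 would be based.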
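The minimal bandwidth-delay product path determination of claims 8-10 can be read as a round-trip-time probe plus a queue-depth calculation. In the hypothetical sketch below, the RTT between a radio applications cloud server and a remote server is estimated by timing TCP connection establishment, and the maximum queue depth is sized at one bandwidth-delay product; per claim 11, the computed depth could then be sent to the remote transmission control protocol server.

    import socket
    import time

    def measure_rtt(host: str, port: int = 80, samples: int = 3) -> float:
        """Estimate the round trip time, in seconds, to a remote server
        (claim 9) by timing TCP connection establishment."""
        best = float("inf")
        for _ in range(samples):
            start = time.monotonic()
            with socket.create_connection((host, port), timeout=2.0):
                best = min(best, time.monotonic() - start)
        return best

    def max_queue_depth_bytes(bottleneck_bps: float, rtt_s: float) -> int:
        """Maximum queue depth between the radio applications cloud server
        and the remote server (claim 10): one BDP, expressed in bytes."""
        return int(bottleneck_bps * rtt_s / 8)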
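Claims 20, 21, and 30 describe a measure-analyze-signal loop: sample the bottleneck queue built up inside the radio access network, derive the variation from the samples, and signal an adaptive buffer proposal to the remote endpoint. The sketch below is one assumed reading; the sampling window, drift threshold, and the one-BDP proposal are all invented for illustration.

    from statistics import mean

    class BottleneckMonitor:
        """Tracks bottleneck queue-depth samples (claim 20) and derives an
        adaptive buffer proposal for a remote endpoint when occupancy
        drifts away from one bandwidth-delay product."""

        def __init__(self, window: int = 16, threshold: float = 0.2):
            self.samples = []             # queue-depth samples, in bytes
            self.window = window          # number of samples to analyze
            self.threshold = threshold    # tolerated relative drift

        def record(self, queue_depth_bytes: int) -> None:
            self.samples.append(queue_depth_bytes)
            self.samples = self.samples[-self.window:]

        def buffer_proposal(self, bdp_bytes: int):
            """Return a proposed remote buffer size, or None while the
            measured occupancy stays within the tolerated drift."""
            if len(self.samples) < self.window:
                return None
            if abs(mean(self.samples) - bdp_bytes) / bdp_bytes > self.threshold:
                return bdp_bytes  # propose sizing the remote buffer at one BDP
            return None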
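The per-flow fair share queuing of claims 5 and 28 could be realized with a deficit-round-robin scheduler; the sketch below takes that liberty, with the class and quantum as assumptions, while the flow categorization of claim 29 is left to the caller.

    from collections import deque

    class FairShareScheduler:
        """Deficit round robin across user equipment flows: each backlogged
        flow earns one quantum of credit per round, so capacity is shared
        fairly regardless of packet sizes."""

        def __init__(self, quantum_bytes: int = 1500):
            self.quantum = quantum_bytes
            self.queues = {}   # flow_id -> deque of packet sizes (bytes)
            self.deficit = {}  # flow_id -> accumulated sending credit

        def enqueue(self, flow_id, packet_bytes: int) -> None:
            self.queues.setdefault(flow_id, deque()).append(packet_bytes)
            self.deficit.setdefault(flow_id, 0)

        def dequeue_round(self):
            """Serve one round; returns (flow_id, packet_bytes) in send order."""
            sent = []
            for flow_id, q in self.queues.items():
                if not q:
                    self.deficit[flow_id] = 0  # idle flows keep no credit
                    continue
                self.deficit[flow_id] += self.quantum
                while q and q[0] <= self.deficit[flow_id]:
                    pkt = q.popleft()
                    self.deficit[flow_id] -= pkt
                    sent.append((flow_id, pkt))
            return sent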
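On the endpoint side, claims 31-34 react to a received indication of bandwidth-delay product variation. One hedged reading, with invented names and constants: recompute the receiver advertised window (claims 32 and 34) as whole segments covering the changed bandwidth-delay product, and bound the initial congestion window (claim 33) by the same measurement.

    def advertised_window(new_bdp_bits: float, mss_bytes: int = 1460) -> int:
        """Advertisement window value (claim 32): enough bytes to cover the
        changed BDP, rounded down to whole segments, never below two."""
        segments = int(new_bdp_bits / 8 // mss_bytes)
        return max(segments, 2) * mss_bytes

    def initial_cwnd_segments(new_bdp_bits: float, mss_bytes: int = 1460,
                              cap: int = 10) -> int:
        """One way to size the initial congestion window (claim 33): start
        near the indicated BDP, capped conservatively."""
        return min(max(int(new_bdp_bits / 8 // mss_bytes), 2), cap)

    # A BDP indication falling from 3.0 Mbit to 1.2 Mbit shrinks the window:
    print(advertised_window(3.0e6))  # 373760 bytes
    print(advertised_window(1.2e6))  # 148920 bytes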
PCT/US2015/066825 2014-12-19 2015-12-18 Smooth bandwidth-delay product variation inside wireless networks WO2016100890A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201462094946P 2014-12-19 2014-12-19
US62/094,946 2014-12-19
US201562104526P 2015-01-16 2015-01-16
US62/104,526 2015-01-16

Publications (1)

Publication Number Publication Date
WO2016100890A1 true WO2016100890A1 (en) 2016-06-23

Family

ID=56127717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2015/066825 WO2016100890A1 (en) 2014-12-19 2015-12-18 Smooth bandwidth-delay product variation inside wireless networks

Country Status (1)

Country Link
WO (1) WO2016100890A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030123394A1 (en) * 2001-11-13 2003-07-03 Ems Technologies, Inc. Flow control between performance enhancing proxies over variable bandwidth split links
US20050213586A1 (en) * 2004-02-05 2005-09-29 David Cyganski System and method to increase network throughput
US8593964B1 (en) * 2009-11-06 2013-11-26 Brocade Communications Systems, Inc. Method and system for traffic management
US20130114408A1 (en) * 2011-11-04 2013-05-09 Cisco Technology, Inc. System and method of modifying congestion control based on mobile system information
US20140286313A1 (en) * 2011-11-23 2014-09-25 Telefonaktiebolaget L M Ericsson (Publ) Methods and arrangements for improving transmission control protocol performance in a cellular network
WO2013167647A1 (en) * 2012-05-11 2013-11-14 Nokia Siemens Networks Oy Mechanism for controlling buffer setting in flow control

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018051189A1 (en) * 2016-09-16 2018-03-22 Alcatel Lucent Congestion control based on flow control
WO2019132744A1 (en) * 2017-12-27 2019-07-04 Telefonaktiebolaget Lm Ericsson (Publ) Apparatus and method for controlling communication between an edge cloud server and a plurality of clients via a radio access network
US11218414B2 (en) 2017-12-27 2022-01-04 Telefonaktiebolaget Lm Ericsson (Publ) Apparatus and method for controlling communication between an edge cloud server and a plurality of clients via a radio access network
CN112202607A (en) * 2020-09-28 2021-01-08 中移(杭州)信息技术有限公司 Statistical calculation method of log message, server and storage medium
CN112202607B (en) * 2020-09-28 2022-06-14 中移(杭州)信息技术有限公司 Statistical calculation method of log message, server and storage medium
US20230155963A1 (en) * 2021-11-17 2023-05-18 Charter Communications Operating, Llc Methods and apparatus for coordinating data transmission in a communications network
US11805079B2 (en) * 2021-11-17 2023-10-31 Charter Communications Operating, Llc Methods and apparatus for coordinating data transmission in a communications network

Similar Documents

Publication Publication Date Title
US10367738B2 (en) Throughput guidance based on user plane insight
US20170187641A1 (en) Scheduler, sender, receiver, network node and methods thereof
US10057147B2 (en) Apparatus and method for controlling data flow in communication system
US20140043994A1 (en) Providing Feedback To Media Senders Over Real Time Transport Protocol (RTP)
KR20190030649A (en) Systems and methods for improving integrated throughput of concurrent connections
CN104581422B (en) A kind of method and apparatus transmitted for network data
US9967769B2 (en) Methods and apparatuses for recovering data packet flow control against radio base station buffer run away
BRPI0822489B1 (en) METHOD FOR ADAPTING A CURRENT TARGET RATE FROM A VIDEO SIGNAL TRANSMITTED FROM A VIDEO PROVIDER TO A VIDEO RECEIVER, DEVICE FOR CALCULATING A NEW TARGET RATE FROM A VIDEO SIGNAL TRANSMITTED FROM A VIDEO PROVIDER, AND THEREFORE COMPUTER
WO2010092324A2 (en) Controlling bandwidth share
WO2016100890A1 (en) Smooth bandwidth-delay product variation inside wireless networks
Liu et al. Segment duration for rate adaptation of adaptive HTTP streaming
Lautenschlaeger et al. Global synchronization protection for bandwidth sharing TCP flows in high-speed links
KR101425300B1 (en) Method for managing width of window in multi-path TCP
US10952102B2 (en) Method and apparatus for controlling data transmission speed in wireless communication system
CN111669665B (en) Real-time pushing method of media stream and server
EP3560152B1 (en) Determining the bandwidth of a communication link
KR101837637B1 (en) Streaming method based on Client-side ACK-regulation and apparatus thereof
JP5308364B2 (en) Transmission device, transmission method, and program
US20150257162A1 (en) Controlling Bandwidth Usage of an Application Using a Radio Access Bearer on a Transport Network
US9130843B2 (en) Method and apparatus for improving HTTP adaptive streaming performance using TCP modifications at content source
US10298475B2 (en) System and method for jitter-aware bandwidth estimation
Papadimitriou et al. A rate control scheme for adaptive video streaming over the internet
US11695847B2 (en) Throughput guidance based on user plane insight
Mehrotra et al. Bandwidth management for mobile media delivery
KR102176176B1 (en) METHOD AND APPARATUS FOR CONTROLLING CONGESTION IN A WIRELESS NETWORK USING Transmission Control Protocol

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
  Ref document number: 15871213
  Country of ref document: EP
  Kind code of ref document: A1
NENP Non-entry into the national phase
  Ref country code: DE
122 Ep: pct application non-entry in european phase
  Ref document number: 15871213
  Country of ref document: EP
  Kind code of ref document: A1