WO2016198112A1 - Nodes and methods for handling packet flows - Google Patents


Info

Publication number
WO2016198112A1
Authority
WO
WIPO (PCT)
Prior art keywords
packet
flow
node
controller
data rate
Prior art date
Application number
PCT/EP2015/063058
Other languages
French (fr)
Inventor
Linus GILLANDER
Trevor NEISH
Original Assignee
Telefonaktiebolaget Lm Ericsson (Publ)
Priority date
Filing date
Publication date
Application filed by Telefonaktiebolaget Lm Ericsson (Publ) filed Critical Telefonaktiebolaget Lm Ericsson (Publ)
Priority to PCT/EP2015/063058 priority Critical patent/WO2016198112A1/en
Publication of WO2016198112A1 publication Critical patent/WO2016198112A1/en


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L47/00 - Traffic control in data switching networks
    • H04L47/10 - Flow control; Congestion control
    • H04L47/20 - Traffic policing

Definitions

  • Embodiments herein relate generally to a controller, a method performed by the controller, a policer node and a method performed by the policer node. More particularly, the embodiments herein relate to handling packet flows in a wireless communications system.
  • Capacity is a large factor in telecommunications products. It is therefore important to utilize the maximum capacity of the hardware in such products.
  • LTE Long Term Evolution
  • LTE-Advanced up to 600Mbps and beyond
  • 5G Fifth Generation
  • WiFi Wireless Fidelity
  • COTS Commercial-off-the-shelf
  • One issue is that the variation in throughput for a session over time is much larger than historically.
  • Another issue is that when two or more sessions peak at the same time, they use a much larger percentage of the resources (e.g. the Central Processing Unit (CPU) core) which they are assigned to. With each new hardware generation, the performance per CPU core is not increasing as fast as the peak session speeds are.
  • the term session used herein may be explained as an association between a UE (e.g. represented by an IP address) and a Packet Data Network (PDN).
  • PDN Packet Data Network
  • IP Internet Protocol
  • IP 5-tuple refers to a set of five different values that comprise a Transmission Control Protocol/Internet Protocol (TCP/IP) connection. It comprises a source IP address/port number, destination IP address/port number and the protocol in use.
  • TCP/IP Transmission Control Protocol/Internet Protocol
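  • As an illustrative aside (not part of the patent text), an IP 5-tuple can be modelled as a hashable record; the field names below are assumptions for the sketch:

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """An IP 5-tuple identifying a TCP/IP connection (field names are illustrative)."""
    src_ip: str
    src_port: int
    dst_ip: str
    dst_port: int
    protocol: int  # IANA protocol number, e.g. 6 = TCP, 17 = UDP

# Being a tuple subclass, the 5-tuple is hashable and can key per-flow state tables.
flow = FiveTuple("10.0.0.1", 49152, "192.0.2.7", 443, 6)
per_flow_bytes = {flow: 0}
```

Because the record is hashable, it can serve directly as the lookup key in the flow tables discussed later in this document.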
  • Figure 1 illustrates an example of the high variation in session throughput causing short-term overload in a Packet data network Gateway (PGW).
  • PGW Packet data network Gateway
  • the x-axis in figure 1 represents time measured in minutes and the y-axis represents load measured in %.
  • Figure 1 illustrates that there are too many 100% CPU peaks (indicated with circles in figure 1) on circuit boards. Since there are too many 100% peaks, the operator cannot raise the average CPU load.
  • a PGW is a gateway towards a Packet Data Network (PDN). Functions performed by the PGW are e.g. providing connectivity from a User Equipment (UE) to external PDNs by being the point of exit and entry of traffic for the UE, performing policy enforcement, packet filtering for each user, charging support, lawful interception and packet screening etc.
  • PDN Packet Data Network
  • UE User Equipment
  • Figure 2 illustrates an example of a heat map of CPU loads over time. When several sessions have high throughput at the same time, overload occurs.
  • the x-axis in figure 2 represents discrete CPUs and the y-axis represents time.
  • the white area in figure 2 indicates that there is low CPU load, e.g. at night.
  • the circles indicate where one user, or more than one user, has high throughput.
  • Figure 3 is a histogram of example load over CPUs at an instance in time. Outliers with high load limit the total capacity of the system.
  • the x-axis in figure 3 represents CPU utilization in % and the y-axis represents the number of CPUs.
  • the two circles in figure 3 illustrate that two CPUs are limiting the capacity of the system because they have a much larger load than the rest of the CPUs.
  • Real-time load-balancing techniques are based on aggregated weighted load per server over a plurality of servers. In other words, sessions are treated equally, and the primary input to choosing a server is the weighted, aggregated (over all sessions) loads of the servers.
  • the object is achieved by a method performed by a controller for handling packet flows in a wireless communications system.
  • the controller receives, from a policer node, data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor has exceeded a threshold.
  • the controller determines that at least one packet flow comprising at least one of the data packets should be handled by another packet processor or a flow-cache.
  • the object is achieved by a method performed by a policer node for handling packet flows in a wireless communications system.
  • the policer node continuously monitors a packet data rate in real-time associated with data packets handled by a packet processor.
  • the policer node detects that the monitored real-time data rate has exceeded a threshold.
  • the exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor or by a flow-cache.
  • the policer node transmits, to a controller, data rate information indicating that the monitored real-time data rate has exceeded the threshold.
  • the object is achieved by a controller for handling packet flows in a wireless communications system.
  • the controller is adapted to receive, from a policer node, data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor has exceeded a threshold.
  • the controller is adapted to, when the monitored real-time data rate has exceeded the threshold, determine that at least one packet flow comprising at least one of the data packets should be handled by another packet processor or a flow-cache.
  • the object is achieved by a policer node for handling packet flows in a wireless communications system.
  • the policer node is adapted to continuously monitor a packet data rate in real-time associated with data packets handled by a packet processor.
  • the policer node is adapted to detect that the monitored real-time data rate has exceeded a threshold.
  • the exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor or by a flow-cache.
  • the policer node is adapted to transmit, to a controller, data rate information indicating that the monitored real-time data rate has exceeded the threshold.
  • the packet processor can be offloaded so that the load balancing in the communications system is improved.
  • Embodiments herein afford many advantages, of which a non-exhaustive list of examples follows:
  • the embodiments herein propose to monitor the changing peak throughput of a session and, each measurement period, produce a list of the top X% of sessions ranked by peak throughput. This provides an advantage for a network operator because it can use real-time peak throughput to control user experience or lower costs (through resource allocation and/or different service chain/session features).
  • Another advantage of the embodiments herein is that the wireless communications system can be dimensioned for a higher average load and by this utilized more efficiently. Thus, the embodiments herein are cost effective.
  • a further advantage of the embodiments herein is that they may be applied to internal resources such as CPU cores within a server, but they may also be applied to multiple network elements in e.g. an SDN network.
  • Another advantage of the embodiments herein is that they may have a low impact to existing performance of the wireless communications system.
  • Fig. 1 is a graph illustrating high variation in session throughput.
  • Fig. 2 is a heat map of CPU loads over time.
  • Fig. 3 is a histogram of load over CPUs at an instance in time.
  • Fig. 4 is a schematic block diagram illustrating embodiments of a wireless communications system.
  • Fig. 5a, 5b are signaling diagrams illustrating embodiments of a method.
  • Fig. 6 is a schematic block diagram illustrating a payload packet flow.
  • Fig. 7 is a flow chart illustrating embodiments of a method.
  • Fig. 8 is a schematic block diagram illustrating embodiments of a method.
  • Fig. 9 is a flow chart illustrating embodiments of a method.
  • Fig. 10 is a schematic block diagram illustrating embodiments of a wireless communications system.
  • Fig. 11 is a flow chart illustrating embodiments of a method performed by a controller.
  • Fig. 12 is a schematic block diagram illustrating embodiments of a controller.
  • Fig. 13 is a flow chart illustrating embodiments of a method performed by a policer node.
  • Fig. 14 is a schematic block diagram illustrating embodiments of a policer node.
  • processing resources can be re-assigned as rates vary over time. This enables support for high peak rate and still supports high average processing resource utilization.
  • a processing resource may be described as a general or special purpose processor and memory where a system function can be executed and information stored and retrieved.
  • a session policing function may be used to measure the real-time throughput.
  • the real-time throughput may be calculated by observing the data rate during a time window e.g. a measurement period. After each measurement period, a table with the top X% of sessions may be updated.
  • the term current throughput may also be used instead of real-time throughput.
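  • The measurement-window idea above can be sketched as follows; the class name, window handling and ranking details are assumptions for illustration, not taken from the patent:

```python
from collections import defaultdict

class ThroughputRanker:
    """Accumulates bytes per session during a measurement window and,
    at the end of each period, keeps the top X% of sessions by data rate."""

    def __init__(self, window_seconds: float, top_percent: float):
        self.window = window_seconds
        self.top_percent = top_percent
        self.bytes_in_window = defaultdict(int)
        self.top_sessions = []  # refreshed after each measurement period

    def on_packet(self, session_id: str, size_bytes: int) -> None:
        self.bytes_in_window[session_id] += size_bytes

    def end_of_period(self) -> list:
        """Close the window: compute per-session rates and keep the top X%."""
        rates = {s: 8 * b / self.window  # bits per second
                 for s, b in self.bytes_in_window.items()}
        ranked = sorted(rates, key=rates.get, reverse=True)
        keep = max(1, int(len(ranked) * self.top_percent / 100)) if ranked else 0
        self.top_sessions = ranked[:keep]
        self.bytes_in_window.clear()  # start a fresh measurement period
        return self.top_sessions
```

A load-balancing function could then consume `top_sessions` as the dynamic table described below.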
  • a load-balancing function (e.g. internal logic within the node or performed by an external Software Defined Network (SDN) controller which is external to the PGW) may use this dynamic table to steer or offload traffic to dedicated resources, and/or evenly spread the traffic over existing processing resources, and/or reduce the cost of the top sessions by e.g. temporarily bypassing costly functions such as e.g. Deep Packet Inspection (DPI).
  • DPI Deep Packet Inspection
  • the dynamic processing resource assignment may also be based on policy or mobility events.
  • For example, a UE may initially attach using Third Generation (3G) radio access. Later, the UE moves into 5G coverage and the system may then dynamically reallocate the UE to processing resources which are more suitable for the characteristics of 5G (e.g. high data rate and low latency).
  • Figure 4 depicts a wireless communications system 100 in which embodiments herein may be implemented.
  • the communications network 100 may in some embodiments apply to one or more radio access technologies such as for example Long Term Evolution (LTE), LTE Advanced (LTE-A), Wideband Code Division Multiple Access (WCDMA), Global System for Mobile Communications (GSM), 5G or any other Third Generation Partnership Project (3GPP) radio access technology, any 3GPP 2 technology such as e.g. Code Division Multiple Access 2000 (CDMA2000), or other radio access technologies such as e.g. WiFi or Wireless Local Area Network (WLAN) or Time Division-Synchronous Code Division Multiple Access (TD-SCDMA).
  • LTE Long Term Evolution
  • LTE-A LTE Advanced
  • WCDMA Wideband Code Division Multiple Access
  • GSM Global System for Mobile Communications
  • 5G Fifth Generation
  • CDMA2000 Code Division Multiple Access 2000
  • WLAN Wireless Local Area Network
  • TD-SCDMA Time Division-Synchronous Code Division Multiple Access
  • the wireless communications system 100 comprises a UE 101 which is served by a RAN node (not shown in figure 4).
  • the UE 101 may be a device by which a subscriber may access services offered by an operator's network and services outside operator's network to which the operators radio access network and core network provide access, e.g. access to the Internet.
  • the UE 101 may be any device, mobile or stationary, enabled to communicate in the communications network, for instance but not limited to e.g. user equipment, mobile phone, smart phone, sensors, meters, vehicles, household appliances, medical appliances, media players, cameras, Machine to Machine (M2M) device, Device to Device (D2D) device, Internet of Things (IoT) device or any type of consumer electronics, for instance but not limited to television, radio, lighting etc.
  • M2M Machine to Machine
  • D2D Device to Device
  • IoT Internet of Things
  • the UE 101 may be portable, pocket storable, hand held, computer comprised, or vehicle mounted devices, enabled to communicate voice and/or data, via the radio access network, with another entity, such as another UE or a server.
  • the wireless communications system 100 comprises a packet processor 103 and at least one other packet processor 103. Together, all packet processors 103 in the wireless communications system 100 may be referred to as a set of packet processors.
  • a packet processor 103 may be described as a processor which is adapted to process packets. In more detail, a packet processor 103 may perform tunnel encapsulation and de-capsulation, IP next-hop lookup, charging, updating of statistics, etc.
  • the wireless communications system 100 comprises a policer node 105, a controller 110, a redirection node 113, a flow-cache 115 and a traffic detection module 118.
  • Each of the policer node 105, the controller 110, the redirection node 113, the flow-cache 115 and the traffic detection module 118 may be a packet processor 103.
  • the policer node 105 may be described as a node which is adapted to handle policies in the wireless communication system 100.
  • the policer node 105 may monitor bits-per-second time-averages of packet flows.
  • the policer node 105 may asynchronously notify controller(s) 110 of significant changes in rate, and discard packets above a predetermined threshold.
  • the policer node 105 may also be referred to as a policer, a policing node, a policer module, a policer function, a policer unit etc.
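  • The policer behaviour described above (time-averaged rates, asynchronous notification, discarding above a hard limit) can be sketched as follows; the class, the EWMA smoothing and all names are assumptions for illustration, not the patent's implementation:

```python
class Policer:
    """Keeps a bits-per-second time-average per flow, notifies a controller
    when the rate crosses a threshold, and signals discard above a hard limit."""

    def __init__(self, notify, threshold_bps: float, discard_bps: float):
        self.notify = notify            # callback towards the controller 110
        self.threshold_bps = threshold_bps
        self.discard_bps = discard_bps
        self.rate_bps = {}              # flow_id -> smoothed rate
        self.alarmed = set()            # flows already reported as above threshold

    def update(self, flow_id, bits_in_last_second: float) -> bool:
        """Returns True if traffic should be forwarded, False if discarded."""
        # Exponentially weighted moving average as a simple time-average.
        prev = self.rate_bps.get(flow_id, 0.0)
        rate = 0.8 * prev + 0.2 * bits_in_last_second
        self.rate_bps[flow_id] = rate
        # Notify asynchronously only on a threshold crossing, not every sample.
        if rate > self.threshold_bps and flow_id not in self.alarmed:
            self.alarmed.add(flow_id)
            self.notify(flow_id, rate)
        elif rate <= self.threshold_bps:
            self.alarmed.discard(flow_id)
        return rate <= self.discard_bps
```

The one-notification-per-crossing logic keeps signaling towards the controller low, in the spirit of the early filtering described later in this document.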
  • the controller 110 may be described as an arbiter to the redirection node 113 and the flow-cache 115.
  • the controller 110 is adapted to take inputs and control outputs.
  • the controller 110 may apply filters and prioritization based on configured policy, and may update the flow-cache 115 and the redirection table if necessary.
  • the controller 110 may be e.g. a SDN controller.
  • a SDN controller is a controller in a SDN which lies between the UE 101 at one end and applications at the other end. Any communications between applications and UEs 101 may have to go through the SDN controller.
  • the redirection node 113 is a node which may comprise a table which combines IP packet content with the set of available packet processor 103 instances.
  • the redirection node 113 may be adapted to redirect packets and packet flows in the wireless communications system 100.
  • the redirection node 113 may be referred to as a redirection table, a load balancer, a redirector, a redirection module, a redirection function, a redirection unit etc.
  • the flow-cache 115 is a cache which is configured to store data such as e.g. one or more packet flows.
  • the flow-cache 115 may hold flow descriptors for packet flows whose packets can be minimally processed without need for packet manipulation or the traffic detection module 118.
  • the flow-cache 115 may also be referred to as a fast-path or a memory.
  • the traffic detection module 118 is a module in the wireless communications system 100 which is adapted to detect traffic (e.g. packet flows) in the wireless communications system 100.
  • the traffic detection module 118 may perform a deep analysis of IP packets and flows in order to classify the data.
  • the traffic detection module 118 may also manipulate packet headers and payload.
  • the traffic detection module 118 may be referred to as a Traffic Detection Function (TDF), a traffic detector, a traffic detection module, a traffic detection function, a traffic detection unit etc.
  • TDF Traffic Detection Function
  • the communication links in the wireless communications system 100 may be of any suitable kind including either a wired or wireless link.
  • the link may use any suitable protocol depending on type and level of layer (e.g. as indicated by the Open Systems Interconnection (OSI) model) as understood by the person skilled in the art.
  • OSI Open Systems Interconnection
  • At least some of the entities illustrated in figure 4 may be co-located in one entity.
  • The entity may be for example a node such as a PGW, a GGSN, a PCEF or a SDN node.
  • the SDN node may be SDN data path node, a SDN switch, a SDN forwarding node etc.
  • the packet processor 103 and the other packet processor 103 may be co-located in a PGW and the other entities are external to the PGW. In another example, all the entities in figure 4 except the UE 101 are co-located within the PGW.
  • Figure 5a illustrates steps 500-512 and figure 5b illustrates steps 501 and 513-519.
  • the method comprises at least some of the following steps, which steps may as well be carried out in another suitable order than described below.
  • the packet processor 103 handles one or more packet flows.
  • a data packet flow (also referred to as packet flow) is a sequence of data packets (also referred to as packets) sent from a particular source to a particular unicast, anycast, or multicast destination that a node desires to label as a flow (RFC 6437).
  • the handling may be e.g. processing of the packet flow.
  • a packet may be comprised in a packet flow which is a sequence of packets sent from a particular source to a particular unicast, anycast, or multicast destination that a node desires to label as a flow.
  • a packet flow may be identified by an IP 6-tuple (IP 5-tuple + DSCP) or a subset thereof, or based on another packet descriptor, e.g. a GTP-U TEID.
  • DSCP Differentiated Services Code Point
  • a packet may also be referred to as a payload, IP packet or a data payload.
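  • The "6-tuple or a subset thereof" idea can be sketched with wildcard matching; the function and the `None`-as-wildcard convention are assumptions for illustration:

```python
# A flow descriptor as a 6-tuple (IP 5-tuple + DSCP); None acts as a wildcard,
# so a descriptor can also express a subset of the 6-tuple.
def matches(descriptor: tuple, packet_key: tuple) -> bool:
    """True if every non-wildcard field of the descriptor equals the packet's field."""
    return all(d is None or d == p for d, p in zip(descriptor, packet_key))

# src_ip, src_port, dst_ip, dst_port, protocol, DSCP
pkt = ("10.0.0.1", 49152, "192.0.2.7", 443, 6, 46)
assert matches(("10.0.0.1", None, None, None, 6, None), pkt)   # subset match
assert not matches((None, None, None, 80, None, None), pkt)    # dst_port differs
```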
  • the policer node 105 continuously monitors the packet data rate in real-time.
  • the policer node 105 detects that the real-time data rate has exceeded a threshold. Instead of exceeding a threshold, the policer node 105 may detect that the data rate is within a predetermined range.
  • the policer node 105 may dampen at least one of the changes in the data rate.
  • Step 504
  • the policer node 105 sends data rate information to the controller 110.
  • the data rate information may indicate that the continuously monitored realtime data rate has exceeded the threshold.
  • the controller 110 may store the received data rate information in e.g. a table.
  • This step is seen in figure 5a. Based on the data rate information received in step 504, the controller 110 determines that the packet flow should be handled by someone else, e.g. another packet processor 103 or a flow-cache 115.
  • the controller 110 may determine which packet flow should be handled by someone else. This may be done in case the packet processor 103 handles a plurality of packet flows.
  • Step 507
  • the controller 110 may determine that the packet flow is of a movable class. Some packet flows cannot be moved to be handled by someone else, and other packet flows are allowed to be handled by someone else.
  • the controller 110 may classify the packet flow on its own, or the controller 110 may receive the classification information from e.g. the traffic detection module 118 (i.e. the traffic detection module 118 may perform the classification of the packet flow).
  • Step 508
  • the controller 110 may determine who should handle the packet flow, e.g. another packet processor 103 or a flow-cache 115.
  • Alternatively, the redirection node 113 may take this decision after having received the instructions in step 509 described below.
  • the controller 110 may transmit instructions to the redirection node 113 or the flow-cache 115 to move the packet flow.
  • the instructions may be transmitted to the one which is determined in step 508.
  • the instructions may also comprise information indicating which packet flow that should be moved, to whom the packet flow should be moved, and the classification of the packet flow.
  • Step 510
  • This step is seen in figure 5a. If the instructions in step 509 were transmitted to the redirection node 113, the redirection node 113 may move the packet flow(s) from the packet processor 103 to the other packet processor 103 or to the flow-cache 115. If the instructions in step 509 were transmitted to the flow-cache 115, the flow-cache 115 may move the packet flow(s) from the packet processor 103 to the other packet processor 103. Moving the packet flow to the flow-cache 115 is not illustrated in figure 5a.
  • Step 511
  • This step is seen in figure 5a. Instead of sending the instructions in step 509, the controller 110 itself may move the packet flow to the other packet processor 103 or the flow-cache 115.
  • Step 512
  • the other packet processor 103 or the flow-cache 115 may handle the moved packet flow.
  • the embodiment where the moved packet flow is handled by the flow-cache 115 is not illustrated in figure 5a.
  • When the packet flow is moved, the packet processor 103 may be seen as being offloaded because it is heavily loaded. When the packet flow is moved, it may also be due to a policy associated with the packet flow or the packet processor 103. For example, idle flows (e.g. flows with a data rate close to zero) may be dynamically moved to another packet processor 103 with a large, but slow memory. A video call may be dynamically moved to another packet processor 103 with lower latency (e.g. other hardware with a fast memory). A video flow (e.g. with a data rate indicating HD data) may be dynamically moved to a packet processor which is geographically located in proximity of the video server.
  • Step 513
  • When the policer node 105 is continuously monitoring the packet data rate in real-time, the policer node 105 may detect that the data rate is equal to or under the threshold, i.e. that the load in the packet processor 103 has been reduced.
  • Step 514
  • the policer node 105 may send data rate information to the controller 110 to indicate that the continuously monitored packet data rate is equal to or under the threshold.
  • the controller 110 may store the data rate information e.g. in a table.
  • the controller 110 may determine that at least one of the moved packet flow(s) may be moved back to the packet processor.
  • the controller 110 may send instructions to the redirection node 113 or the flow-cache 115 (depending on who is handling the moved packet flow) to move back the packet flow which was previously moved to the other packet processor 103.
  • Step 517 This step is seen in figure 5b.
  • the redirection node 113 or the flow-cache 115 (depending on who has received the instructions which were sent in step 516) may move the packet flow from the other packet processor 103 or the flow-cache 115 back to the packet processor 103.
  • This step is seen in figure 5b. This may be an alternative to step 516.
  • the controller 110 may move the packet flow back to the packet processor 103.
  • Figure 6 illustrates an example of a packet flow in the wireless communications system 100 and that a different packet processor can be chosen to handle the packet flow.
  • the wireless communications system 100 is exemplified with four packet processors 103.
  • any other suitable number of packet processors 103 is also applicable to the embodiments herein.
  • the solid arrow illustrates the packet flow which may go through the redirection node 113, one or more packet processors 103 and the policer node 105.
  • the data packet does not go through the controller 110, but the controller 110 may be seen as the entity which controls the packet flow in the wireless communications system 100.
  • Figure 7 is a flow chart illustrating an example method.
  • the dotted arrows in figure 7 represent control feedback and the solid arrows represent payload packet flow.
  • the method in figure 7 comprises at least some of the following steps, which steps may be performed in another suitable order than described below.
  • the redirection node 113 may receive IP packets from another node. The received IP packets may be examined by the redirection node 113. The redirection node 113 may determine the packet processor instance (702) for the IP packet. This decision may be made using the IP packet content (e.g. IP address, GTP TEID values, etc.) and information in a table where IP packet content is combined with the set of available packet processor 103 instances. In other words, the redirection node 113 chooses a packet processor 103 based on an entry in the redirection table. Some example implementations of this are IP routing, ECMP load balancing, SDN switching, GTP TEID load balancing/steering, etc.
  • Step 702
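  • A redirection table of this kind can be sketched as exact-match entries (installed by a controller) backed by a stable hash over the packet key; the class and hashing scheme are assumptions for illustration, not the patent's design:

```python
import hashlib

class RedirectionNode:
    """Selects a packet processor instance for a flow key: controller-installed
    overrides take precedence, otherwise a deterministic hash spreads flows
    over the available instances (similar in spirit to ECMP)."""

    def __init__(self, processors: list):
        self.processors = processors
        self.overrides = {}  # flow_key -> processor, installed by the controller

    def select(self, flow_key: tuple):
        if flow_key in self.overrides:
            return self.overrides[flow_key]
        digest = hashlib.sha256(repr(flow_key).encode()).digest()
        index = int.from_bytes(digest[:4], "big") % len(self.processors)
        return self.processors[index]

    def move_flow(self, flow_key: tuple, processor) -> None:
        """Controller instruction: steer this flow to a specific processor."""
        self.overrides[flow_key] = processor
```

Hashing keeps the mapping deterministic, so all packets of one flow reach the same processor until the controller installs an override.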
  • the packet processor 103 processes each received IP packet by performing necessary actions such as charging, header de-capsulation, header en-capsulation, update of statistics, etc. After processing the packet is sent to the policer node 105 (step 703) before being forwarded further in the network.
  • the policer node 105 may keep track of the real-time data rate. It updates the bits-per-second time-average and evaluates it against a threshold.
  • the real-time data rate may also be referred to as a real-time packet data rate.
  • the real-time rate may be monitored on different hierarchical levels such as per IP flow, per IP endpoint, per
  • the rate value(s) are sent to the controller 110 (step 705).
  • the policer node 105 may apply a way to dampen how often rate values are sent to the controller. An example implementation of this is to only send a rate value when it has changed significantly, e.g. between pre-defined intervals.
  • the IP packet may be forwarded further in the network.
  • the threshold in the policer node 105 is used as an early filter to reduce the amount of data that should be sent to, and processed by the controller 110.
  • For a PGW, a suitable threshold may be 0 Mbps. If peak throughput has not historically been a problem for the operator before launching LTE-A/5G, a higher threshold may be used, e.g. 30 Mbps.
  • the dampening mechanism avoids the session being moved unnecessarily (without any gain), and also avoids oscillations (moving the session back and forth).
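  • The dampening idea can be sketched as reporting a rate only when it moves into a different pre-defined interval; the class and parameter names are assumptions for illustration:

```python
class DampenedReporter:
    """Sends a rate value to the controller only when it crosses into a
    different pre-defined interval, which suppresses oscillating reports
    (and hence oscillating move/move-back decisions)."""

    def __init__(self, send, interval_bps: float):
        self.send = send                # callback towards the controller 110
        self.interval_bps = interval_bps
        self.last_bucket = {}           # flow_id -> last reported interval index

    def report(self, flow_id, rate_bps: float) -> None:
        bucket = int(rate_bps // self.interval_bps)
        if self.last_bucket.get(flow_id) != bucket:
            self.last_bucket[flow_id] = bucket
            self.send(flow_id, rate_bps)  # only significant changes reach the controller
```

Small fluctuations inside one interval produce no signaling at all; only a change of interval is considered significant.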
  • step 703 corresponds to step 504 in figure 5a.
  • the controller 110 may be asynchronously notified about the exceeded threshold.
  • the controller 110 receives rate values from the policer node 105.
  • the controller 110 also has configuration and information about available packet processor instances 103 and their capabilities and also configured load balancing policies. By using the received rate values the controller 110 may update the redirection node 113 (step 701) to accommodate (i) dynamic load balancing of the packet processors 103 (step 702), (ii) improved data rate by dynamically moving a high data rate IP flow to a packet processor 103 (step 702) with sufficient capacity.
  • the controller 110 may install the top entries sorted by rate (e.g. bandwidth or packets-per-second) in the redirection node 113. Summarized, the controller 110 applies filters based on configured policy, and updates the redirection node 113 if necessary.
  • Figure 8 illustrates another example of the payload packet flow.
  • Figure 8 illustrates an example where the payload is inspected by a traffic detection module 118.
  • Figure 8 illustrates three different packet flows which use different internal paths through the wireless
  • One packet flow uses the traffic detection module 118, another packet flow is handled by the packet processor 103 and a third packet flow is moved into the flow-cache 115.
  • Figure 9 is a flow chart illustrating an example method.
  • the dotted arrows in figure 9 represent control feedback and the solid arrows represent payload packet flow.
  • the method in figure 9 comprises at least some of the following steps, which steps may be performed in any suitable order than described below.
  • All received IP packets may be first handled by the flow-cache 115 (aka fast-path). The flow-cache 115 searches for a matching entry (step 901) and processes the packet (step 902).
  • the flow-cache 115 may comprise a table of IP flows. For each flow entry, information for where and how the packet shall be processed is available. The IP packet can be matched in the table based on 6-tuple (IP 5-tuple + DSCP), standard 5-tuple, a subset of the 5-tuple, or other packet descriptor such as GTP-U TEID.
  • the flow-cache 115 may process the packet according to the information given by the table entry. After processing, the packet may be sent to the policing node (step 904). If no match was found, the packet may be sent to a packet processor 103 (step 903).
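  • The fast-path lookup with slow-path fallback can be sketched as follows; the class and the shape of the "instructions" are assumptions for illustration:

```python
class FlowCache:
    """Fast-path sketch: packets whose flow key matches a cached entry are
    processed with the cached instructions; everything else falls through
    to a full packet processor (the slow path)."""

    def __init__(self, slow_path):
        self.table = {}          # flow_key -> processing instructions
        self.slow_path = slow_path

    def install(self, flow_key: tuple, instructions) -> None:
        """Called by the controller for flows suitable for offload."""
        self.table[flow_key] = instructions

    def handle(self, flow_key: tuple, packet):
        entry = self.table.get(flow_key)
        if entry is not None:
            return ("fast-path", entry, packet)   # minimal processing, no match needed elsewhere
        return self.slow_path(flow_key, packet)   # fall through to a packet processor
```

The controller decides which flows are cache-worthy; the cache itself only does the cheap dictionary lookup per packet.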
  • the packet processor 103 selection may be for example as exemplified in figure 7.
  • This step may correspond to step 702 in figure 7.
  • the packet processor 103 processes the packet, and the traffic detection module 118 may be invoked if the subscriber policy dictates so.
  • the packet processor 103 processes each received IP packet. It performs necessary actions like charging, header de-capsulation, header en-capsulation, update of statistics, etc.
  • IP flows may also be monitored on an aggregated level, e.g. all packets for a given subscriber.
  • After processing (including returning from the traffic detection module 118 in step 906), the packet may be sent to the policing node 105 (step 904) before being forwarded further in the network.
  • the policer node 105 updates the real-time data rate and evaluates the real-time data rate against a threshold. When the real-time data rate is above the threshold, the controller 110 is notified asynchronously. The policer node 105 keeps track of the real-time data rate on an IP flow granularity level, i.e. 6-tuple or a subset of the 6-tuple. It may apply rate limiting measures based on configuration or policies applied. The rate values are sent to a controller 110 (step 904). In order to avoid extensive signaling and also to avoid oscillations, the policer node 105 may apply a mechanism to dampen how rate values are updated. An example implementation of this is to only send a rate value when it has changed significantly, e.g. between pre-defined intervals. As a final step, the IP packet may be forwarded further in the network.
  • the controller 110 applies filters based on configured policy, and updates the flow-cache 115 if necessary.
  • the controller 110 receives rate values from the policer node 105 (step 904) and IP flows suitable for flow-cache offload from the traffic detection module 118 (step 905).
  • the controller 110 also has configuration and information about available flow-cache 115 (step 901) instances and their capabilities and also configured load balancing policies. By using the received rate values the controller 110 may update the table comprised in the flow-cache 115 (step 901) to accommodate (i) dynamic load balancing of the packet processors 103 (step 903) and the traffic detection module 118 (step 905) instances, (ii) improved data rate by dynamically moving a high data rate IP flow to a flow-cache 115 instance with sufficient capacity.
  • the controller 110 may install the top flows sorted by rate (e.g. bandwidth or packets-per-second) in the available flow-cache 115 instances.
  • rate e.g. bandwidth or packets-per-second
  • the traffic detection module 118 may classify the packet and evaluate whether subsequent packets in the flow need further classification. When no further classification is needed, the controller 110 may be notified asynchronously.
  • the traffic detection module 118 may perform a deeper analysis of IP packets and flows in order to classify the data packet flow. The classification may be used for service
  • the controller 110 (step 904) is notified of the IP flow. The notification may also be skipped for other
  • URI Uniform Resource Identifier
  • URL Uniform Resource Locator
  • the embodiments herein use a meter (this is the same as the policer node 105) which constantly monitors the real-time data rate per user or flow.
  • the output is sent to a mechanism that collects and stores these samples.
  • Periodically the top X% of the users or flows over a certain threshold are sent to a load balancer or SDN controller (this is the same as the controller 110 in figure 4) for action.
  • the action can be to either re-assign resources or simply to disable a function like DPI.
  • An existing session policing function may be reused to measure the real-time data rate.
  • the real-time rate may be calculated by observing the passing packets during a time window i.e. a measurement period. After each measurement period a table with the top X% of sessions is updated.
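The per-period ranking described above (after each measurement period, updating a table with the top X% of sessions) might be sketched as follows. The function name, the default percentage and the handling of very small populations are assumptions:

```python
def top_sessions(byte_counts, top_percent=5):
    """Return the top X% of sessions by bytes observed in one
    measurement period, highest first.

    byte_counts: mapping session_id -> bytes seen during the period.
    Sketch only: always returns at least one session.
    """
    ranked = sorted(byte_counts.items(), key=lambda kv: kv[1], reverse=True)
    n = max(1, len(ranked) * top_percent // 100)
    return ranked[:n]
```

The resulting list is what would be sent to the load balancer or SDN controller for action after each measurement period.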
  • Offload IP flows to a flow-cache 115 location e.g. a line card or an external switch/router
  • Figure 10 is an example of a PGW node in a wireless communications system 100.
  • the packet processors 103 are marked with "P" in figure 10.
  • a PGW may have a type of Line Cards (LC) which has a type of CPU/Network Processing Unit (NPU) that may be suitable to work as a cache for PGW flows, e.g. the flow-cache 115.
  • LC Line Cards
  • NPU Network Processing Unit
  • Figure 10 shows a case where a high rate flow F2 (the continuous arrow) has been relocated to the flow-cache 115 while another flow F1 (the dotted arrow) remains in its original location.
  • the embodiments herein may also be deployed in a cloud environment.
  • the application representing the node in which at least some of the entities in figure 4 are co-located may execute in a Virtual Machine (VM) on top of server blades.
  • VM Virtual Machine
  • the cloud infrastructure may comprise switches and routers to enable networking.
  • the embodiments herein may be utilized also in this type of setup.
  • the high rate flows/users are identified. Instead of re-locating the high rate flows/users internally, they may be installed in the infrastructure equipment (e.g. a hypervisor, a Top of the Rack (TOR) switch or gateway router).
  • the infrastructure equipment e.g. a hypervisor, a Top of the Rack (TOR) switch or gateway router.
  • the packet is forwarded to a packet processor 103.
  • the packet is processed entirely by the flow-cache 115.
  • the above decision can either be made on a per session or per flow (e.g. 6-tuple) basis.
  • the result is that for a given session, different flows may be handled using different internal paths.
  • the information used by the flow-cache 115 to make the decision is the information provided by the controller 110 in the previously described steps (509, 516).
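The per-flow path decision described in the bullets above could be sketched as follows. The FlowKey fields (the classic 5-tuple plus DSCP as an assumed sixth element) and the function name are illustrative only:

```python
from collections import namedtuple

# Illustrative 6-tuple flow key: the classic 5-tuple plus DSCP.
# Which sixth field the design intends is an assumption here.
FlowKey = namedtuple("FlowKey", "src_ip src_port dst_ip dst_port proto dscp")

def dispatch(packet_key, flow_cache_entries):
    """Decide the internal path for one packet.

    Returns 'flow-cache' when the controller has installed this flow
    in the cache, else 'packet-processor'. Because the decision is
    per flow, flows of the same session may take different paths.
    """
    return ("flow-cache" if packet_key in flow_cache_entries
            else "packet-processor")
```

For a given session, one flow may thus be handled entirely by the flow-cache 115 while another is forwarded to a packet processor 103.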
  • After the controller 110 has made a decision on which sessions should be moved, it signals the redirection node 113 or flow-cache 115 where the session should be processed. It may also signal the relevant packet processors 103 to prepare for the session at the new location, and to clean up the session at the previous location.
  • Figure 11 is a flowchart describing the present method performed by the controller 110, for handling packet flows in a wireless communications system 100.
  • the method comprises at least some of the following steps to be performed by the controller 110, which steps may be performed in any suitable order, not necessarily the order described below:
  • This step corresponds to step 504 in figure 5a, step 704 in figure 7 and step 805 in figure 8.
  • the controller 110 receives, from a policer node 105, data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor 103 has exceeded a threshold.
  • the monitored real-time data rate may be per at least one of an IP flow, IP endpoint, DSCP value and GTP TEID value.
  • This step corresponds to step 505 in figure 5a, step 704 in figure 7 and step 806 in figure 8.
  • the controller 110 determines that at least one packet flow comprising at least one of the data packets should be handled by another packet processor 103 or a flow-cache 115.
  • the at least one packet flow may be temporarily or permanently handled by the other packet processor 103 or the flow-cache 115.
  • the determining that the packet flow should be handled by another packet processor 103 or by the flow-cache 115 may be based on at least one of a configuration, a class or a policy, in addition to the exceeded threshold.
  • Step 1105: The controller 110 may determine which packet flow comprising at least one of the data packets should be handled by another packet processor 103 or the flow-cache 115.
  • Step 1106 corresponds to step 508 in figure 5a, step 705 in figure 7 and step 806 in figure 8.
  • the controller 110 may determine whether the at least one packet flow should be handled by another packet processor or by the flow-cache 115.
  • the controller 110 may determine that the at least one packet flow is of a class that is allowed to be moved to the at least one other packet processor 103 or the flow-cache 115.
  • the decision that the packet flow is of a class that is allowed to be moved may be performed based on classification information received from a traffic detection module 118, or the decision may be based on a classification of the at least one packet flow performed by the controller 110.
  • This step corresponds to step 511 in figure 5a and step 806 in figure 8.
  • the controller 110 may move the at least one packet flow to at least one other packet processor 103 or flow-cache 115.
  • the controller 110 may transmit instructions to a redirection node 113 or the flow-cache 115 to move the at least one packet flow to the at least one other packet processor 103 or the flow-cache 115.
  • Step 1109: This step corresponds to step 514 in figure 5b, step 704 in figure 7 and step 805 in figure 8.
  • the controller 110 may receive, from the policer node 105, data rate information indicating that the monitored real-time data rate is equal to or below the threshold. That the monitored real-time data rate is equal to or below the threshold may indicate that the at least one packet flow can be moved back to be handled by the packet processor 103.
  • This step corresponds to step 518 in figure 5b and step 806 in figure 8.
  • the controller 110 may move the at least one packet flow back to the packet processor 103.
  • This step corresponds to step 516 in figure 5b and is an alternative to step 1109.
  • the controller 110 may transmit instructions to a redirection node 113 or the flow-cache 115 to move the at least one packet flow back to the packet processor 103.
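The controller-side logic of the steps above might be condensed into a sketch like the following, where "move" and "move_back" stand in for signaling the redirection node 113 or flow-cache 115. The class name, method names, and the class-based filtering detail are illustrative assumptions:

```python
class Controller:
    """Minimal sketch of the controller's reaction to policer reports."""

    def __init__(self, threshold_bps, movable_classes):
        self.threshold_bps = threshold_bps
        self.movable_classes = movable_classes  # classes allowed to move
        self.offloaded = set()                  # flows currently moved

    def on_rate_report(self, flow_id, rate_bps, flow_class):
        if rate_bps > self.threshold_bps:
            # Move only flows of a class that is allowed to be moved.
            if flow_class in self.movable_classes and flow_id not in self.offloaded:
                self.offloaded.add(flow_id)
                return "move"           # signal redirection node / flow-cache
        elif flow_id in self.offloaded:
            # Rate back at or below threshold: restore original location.
            self.offloaded.discard(flow_id)
            return "move_back"
        return None
```

A high-rate flow of a movable class is offloaded, a flow of a protected class is left in place, and a flow whose rate drops back is returned to its original packet processor.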
  • At least some of the policer node 105, the packet processor 103, the other packet processor 103, the flow-cache 115, a redirection node 113, a traffic detection module 118 and the controller 110 are collocated in one node.
  • the node may be a PGW or a GGSN or a PCEF node or a SDN node.
  • the controller 110 may be a SDN controller.
  • the controller 110 may comprise a controller arrangement as shown in figure 12.
  • the controller 110 is adapted to, e.g. by means of a controller receiving module 1201, receive, from a policer node 105, data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor 103 has exceeded a threshold.
  • the determining that the packet flow should be handled by another packet processor 103 or by the flow-cache 115 may be based on at least one of a configuration, a class or a policy, in addition to the exceeded threshold.
  • the monitored real-time data rate may be per at least one of an IP flow, IP endpoint, DSCP value and GTP TEID value.
  • the controller receiving module 1201 may also be referred to as a controller receiving unit, a controller receiving means, a controller receiving circuit, controller means for receiving, controller output unit etc.
  • the controller receiving module 1201 may be a receiver, a transceiver etc.
  • the controller receiving module 1201 may be a wireless receiver of the controller 110 of a wireless or fixed communications system.
  • the controller 110 is further adapted to, e.g. by means of a controller determining module 1203, determine, when the monitored real-time data rate has exceeded the threshold, that at least one packet flow comprising at least one of the data packets should be handled by another packet processor 103 or a flow-cache 115.
  • the at least one packet flow may be temporarily or permanently handled by the other packet processor 103 or the flow-cache 115.
  • the controller determining module 1203 may be a controller processor 1205 of the controller 110.
  • the controller determining module 1203 may also be referred to as a controller determining unit, a controller determining means, a controller determining circuit, controller means for determining, etc.
  • the controller 110 may be further adapted to, e.g. by means of the controller determining module 1203, determine which packet flow comprising at least one of the data packets that should be handled by another packet processor 103 or the flow-cache 115.
  • the controller 110 may be further adapted to, e.g. by means of the controller determining module 1203, determine whether the at least one packet flow should be handled by another packet processor or by the flow-cache 115.
  • the controller 110 may be further adapted to, e.g. by means of a controller moving module 1208, move the at least one packet flow to at least one other packet processor 103 or flow-cache 115.
  • the controller moving module 1208 may be the controller processor 1205 of the controller 110.
  • the controller moving module 1208 may also be referred to as a controller moving unit, a controller moving means, a controller moving circuit, controller means for moving, etc.
  • the controller 110 may be further adapted to, e.g. by means of a controller transmitting module 1210, transmit instructions to a redirection node 113 or the flow-cache 115 to move the at least one packet flow to the at least one other packet processor 103 or the flow-cache 115.
  • the controller transmitting module 1210 may also be referred to as a controller transmitting unit, a controller transmitting means, a controller transmitting circuit, controller means for transmitting, controller output unit etc.
  • the controller transmitting module 1210 may be a transmitter, a transceiver etc.
  • the controller transmitting module 1210 may be a wireless transmitter of the controller 110 of a wireless or fixed communications system.
  • the controller 110 may be further adapted to, e.g. by means of the controller determining module 1203, determine that the at least one packet flow is of a class that is allowed to be moved to the at least one other packet processor 103 or the flow-cache 115.
  • the determining that the packet flow is of a class that is allowed to be moved may be performed based on classification information received from a traffic detection module 118, or it may be based on a classification of the at least one packet flow performed by the controller 110.
  • the controller 110 may be further adapted to, e.g. by means of the controller receiving module 1201 , receive, from the policer node 105, data rate information indicating that the monitored real-time data rate is equal to or below the threshold. That the monitored real-time data rate is equal to or below the threshold may indicate that the at least one packet flow can be moved back to be handled by the packet processor 103.
  • the controller 110 may be adapted to, e.g. by means of the controller moving module 1208, move the at least one packet flow back to the packet processor 103.
  • the controller 110 may be adapted to, e.g. by means of the controller transmitting module 1210, transmit instructions to a redirection node 113 or the flow-cache 115 to move the at least one packet flow back to the packet processor 103.
  • the controller 110 may be adapted to, e.g. by means of the controller transmitting module 1210, send the received data rate information to a redirection node 113.
  • a redirection node 113 may be collocated in one node.
  • the node may be a PGW or a GGSN or a PCEF node or a SDN node.
  • the controller 110 may be a SDN controller.
  • the controller 110 may further comprise a controller memory 1213 comprising one or more memory units.
  • the controller memory 1213 is arranged to be used to store data, received data streams, power level measurements, threshold values, time periods, configurations, schedulings, data rate information, policies, information about packet processors 103 with available capacity, instructions, information about packet processor capabilities, packet data flows, sessions, classes, real-time data rate, and applications to perform the methods herein when being executed in the controller 110.
  • the controller memory 1213 may comprise instructions executable by the controller processor 1205.
  • a first computer program may comprise instructions which, when executed on at least one processor, cause the at least one processor to carry out the method steps in figure 11.
  • a first carrier may comprise the first computer program, and the first carrier may be one of an electronic signal, optical signal, radio signal or computer readable storage medium.
  • Figure 13 is a flowchart describing the present method performed by the policer node 105 for handling packet flows in a wireless communications system 100.
  • the method comprises at least some of the following steps to be performed by the policer node 105, which steps may be performed in any suitable order, not necessarily the order described below:
  • This step corresponds to step 501 in figure 5a, step 703 in figure 7 and step 804 in figure 8.
  • the policer node 105 continuously monitors a packet data rate in real-time associated with data packets handled by a packet processor 103.
  • the real-time data rate may be monitored per at least one of an IP flow, IP endpoint, DSCP value and GTP TEID value.
  • This step corresponds to step 502 in figure 5a, step 703 in figure 7 and step 804 in figure 8.
  • the policer node 105 detects that the monitored real-time data rate has exceeded a threshold.
  • the exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor 103 or by a flow-cache 115.
  • This step corresponds to step 503 in figure 5a, step 703 in figure 7 and step 804 in figure 8.
  • the policer node 105 may dampen changes in the data rate information before transmission to the controller 110.
  • This step corresponds to step 504 in figure 5a, step 704 in figure 7 and step 805 in figure 8.
  • the policer node 105 transmits to the controller 110, data rate information indicating that the monitored real-time data rate has exceeded the threshold.
  • This step corresponds to step 513 in figure 5b, step 703 in figure 7 and step 804 in figure 8.
  • the policer node 105 detects that the monitored real-time data rate which previously has exceeded the threshold is equal to or below the threshold. That the monitored real-time data rate is equal to or below the threshold may indicate that the at least one packet flow can be moved back to be handled by the packet processor 103.
  • the policer node 105 may transmit, to a controller 110, data rate information indicating that the monitored real-time data rate is equal to or below the threshold.
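The policer-side method of figure 13 (monitor a real-time rate, detect a threshold crossing, notify the controller in both directions) could be sketched as follows; the names and the edge-triggered notification style are assumptions:

```python
class PolicerNode:
    """Sketch of the policer method of figure 13: monitor a real-time
    rate and notify the controller on each threshold crossing, both
    when the threshold is exceeded and when the rate falls back to
    at-or-below the threshold.
    """

    def __init__(self, threshold_bps, notify):
        self.threshold_bps = threshold_bps
        self.notify = notify        # transmission toward the controller
        self.above = False          # last reported side of the threshold

    def observe(self, rate_bps):
        if rate_bps > self.threshold_bps and not self.above:
            self.above = True
            self.notify("exceeded", rate_bps)
        elif rate_bps <= self.threshold_bps and self.above:
            self.above = False
            self.notify("at-or-below", rate_bps)
```

Notifying only on crossings, rather than on every observation, is one simple way to keep signaling toward the controller low.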
  • the policer node 105 may be collocated in one node.
  • the node may be a PGW or a GGSN or a PCEF node, or a SDN node.
  • the policer node 105 may comprise a policer node arrangement as shown in figure 14.
  • the policer node 105 is adapted to, e.g. by means of a policer monitoring module 1401, continuously monitor a packet data rate in real-time associated with data packets handled by a packet processor 103.
  • the real-time data rate may be monitored per at least one of an IP flow, IP endpoint, DSCP value and GTP TEID value.
  • the policer monitoring module 1401 may be a policer processor 1403 of the policer node 105.
  • the policer monitoring module 1401 may also be referred to as a policer monitoring unit, a policer monitoring means, a policer monitoring circuit, policer means for monitoring, etc.
  • the policer node 105 is further adapted to, e.g. by means of a policer detecting module 1405, detect that the monitored real-time data rate has exceeded a threshold. The exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor 103 or by a flow- cache 115.
  • the policer detecting module 1405 may be the policer processor 1403 of the policer node 105.
  • the policer detecting module 1405 may also be referred to as a policer detecting unit, a policer detecting means, a policer detecting circuit, policer means for detecting, etc.
  • the policer node 105 is further adapted to, e.g. by means of a policer transmitting module 1408, transmit, to a controller 110, data rate information indicating that the monitored real-time data rate has exceeded the threshold.
  • the policer transmitting module 1408 may also be referred to as a policer transmitting unit, a policer transmitting means, a policer transmitting circuit, policer means for transmitting, policer output unit etc.
  • the policer transmitting module 1408 may be a transmitter, a transceiver etc.
  • the policer transmitting module 1408 may be a wireless transmitter of the policer node 105 of a wireless or fixed communications system.
  • the policer node 105 may be further adapted to, e.g. by means of a policer dampening module 1410, dampen changes in the data rate information before transmission to the controller 110.
  • the policer dampening module 1410 may be the policer processor 1403 of the policer node 105.
  • the policer dampening module 1410 may also be referred to as a policer dampening unit, a policer dampening means, a policer dampening circuit, policer means for dampening, etc.
  • the policer node 105 may be further adapted to, e.g. by means of the policer detecting module 1405, detect that the monitored real-time data rate which previously has exceeded the threshold is equal to or below the threshold. That the monitored real-time data rate is equal to or below the threshold may indicate that the at least one packet flow can be moved back to be handled by the packet processor 103.
  • the policer node 105 may be adapted to, e.g. by means of the policer transmitting module 1408, transmit, to a controller 110, data rate information indicating that the monitored real-time data rate is equal to or below the threshold.
  • At least some of the policer node 105, the packet processor 103, a redirection node 113, a flow-cache 115, a traffic detection module 118 and the controller 110 may be collocated in one node.
  • the node may be a PGW or a GGSN or a PCEF node or a SDN node.
  • the policer node 105 is further adapted to, e.g. by means of a policer receiving module 1413, receive information from the other entities in the wireless communications system 100.
  • the policer receiving module 1413 may also be referred to as a policer receiving unit, a policer receiving means, a policer receiving circuit, policer means for receiving, policer input unit etc.
  • the policer receiving module 1413 may be a receiver, a transceiver etc.
  • the policer receiving module 1413 may be a wireless receiver of the policer node 105 of a wireless or fixed communications system.
  • the policer node 105 may further comprise a policer memory 1415 comprising one or more memory units.
  • the policer memory 1415 is arranged to be used to store data, received data streams, power level measurements, threshold values, time periods, configurations, schedulings, data rate information, policies, information about packet processors 103 with available capacity, instructions, information about packet processor capabilities, packet data flows, sessions, classes, real-time data rate, and applications to perform the methods herein when being executed in the policer node 105.
  • the policer memory 1415 may comprise instructions executable by the policer processor 1403.
  • a second computer program may comprise instructions which, when executed on at least one processor, cause the at least one processor to carry out the method steps in figure 13.
  • a second carrier may comprise the second computer program, and the second carrier may be one of an electronic signal, optical signal, radio signal or computer readable storage medium.
  • the present mechanism for handling packet flows in a wireless communications system 100 may be implemented through one or more processors, such as a controller processor 1205 in the controller arrangement depicted in figure 12 and a policer processor 1403 in the policer node arrangement depicted in figure 14, together with computer program code for performing the functions of the embodiments herein.
  • the processor may be for example a Digital Signal Processor (DSP), Application Specific Integrated Circuit (ASIC) processor, Field-programmable gate array (FPGA) processor or micro processor.
  • DSP Digital Signal Processor
  • ASIC Application Specific Integrated Circuit
  • FPGA Field-programmable gate array
  • the program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into at least one of the controller 110 and the policer node 105.
  • One such carrier may be in the form of a CD ROM disc. It is however feasible with other data carriers such as a memory stick.
  • the computer program code can furthermore be provided as pure program code on a server and downloaded to at least one of the controller 110 and the policer node 105.
  • resources may be reassigned as rates vary over time. This enables support for high peak rate and still supports high average resource utilization.
  • the real-time throughput data in the packet processor is used as input to the controller 110 to determine session features.

Abstract

The embodiments herein relate to a method performed by a controller (110) for handling packet flows in a wireless communications system (100). The controller (110) receives, from a policer node (105), data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor (103) has exceeded a threshold. When the monitored real-time data rate has exceeded the threshold, the controller (110) determines that at least one packet flow comprising at least one of the data packets should be handled by another packet processor (103) or a flow-cache (115).

Description

NODES AND METHODS FOR HANDLING PACKET FLOWS
TECHNICAL FIELD
Embodiments herein relate generally to a controller, a method performed by the controller, a policer node, and a method performed by the policer node. More particularly, the embodiments herein relate to handling packet flows in a wireless communications system.
BACKGROUND
Capacity is a large factor in telecommunications products. It is therefore important to utilize the maximum capacity of the hardware in such products.
With higher session speeds due to radio access technologies such as Long Term Evolution (LTE) (150 Mbps), LTE-Advanced (up to 600 Mbps and beyond), Fifth Generation (5G) (Gbps), and WiFi, there are some issues with the underlying hardware of e.g. routers and Commercial-off-the-shelf (COTS) components. One issue is that the variation in throughput for a session over time is much larger than historically. Another issue is that when two or more sessions peak at the same time, they use a much larger percentage of the resources (e.g. the Central Processing Unit (CPU) core) to which they are assigned. With each new hardware generation, the performance per CPU core is not increasing as fast as the peak session speeds are. The term session used herein may be explained as an association between a UE (e.g. represented by an IP address) and a Packet Data Network (PDN).
These and other issues put pressure on the application to load-balance effectively in order to cope with the higher peak speeds, and are common for the telecommunications industry. Existing technology either considers all sessions as having an equal resource need, or only considers the resource need at the time the session is created (e.g. Maximum Bit Rate (MBR), user category, type of device, etc.). For example, US 7580716 discloses a load balancer that predicts user load based on the negotiated rate, where a session is assigned at startup. US 20090028045 discloses that load balancing may be based on the number of bytes in a communications session. US 8031612 and US 8300526 disclose load balancing based on measurement of the load of a processor and dynamic re-assignment; the load metric is aggregated for the processor. However, session load over time has a high variation with high speed access technologies (e.g. LTE-Advanced, 5G, WiFi, etc.), and this leads to uneven load over available resources such as servers, CPU cores, etc. A resource might become overloaded (100%) while the total average load is e.g. 60%. An overloaded server will drop traffic, and hence 60% becomes the maximum capacity that the server group can be dimensioned for, which increases cost (more servers are needed).
If there are several Internet Protocol (IP) 5-tuple flows within the session, existing solutions can load-balance based on flow. These solutions assume an equal or unchanging resource need for the individual flows. Single-flow applications, and applications that multiplex flows above the 5-tuple level (including the new HTTP 2.0 standard, Google Quick User Datagram Protocol (UDP) Internet Connections (QUIC) and SPDY), are not catered for.
The term IP 5-tuple used above refers to a set of five different values that comprise a Transmission Control Protocol/Internet Protocol (TCP/IP) connection. It comprises a source IP address/port number, destination IP address/port number and the protocol in use.
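As an illustration of the 5-tuple and of the static flow-based load balancing the text contrasts against, the following sketch hashes a 5-tuple to a worker index. The hashing scheme and function names are assumptions, not taken from any cited solution:

```python
import hashlib

def five_tuple_key(src_ip, src_port, dst_ip, dst_port, proto):
    """The IP 5-tuple identifying one TCP/IP connection."""
    return (src_ip, src_port, dst_ip, dst_port, proto)

def pick_worker(key, n_workers):
    """Classic static flow-based load balancing: hash the 5-tuple to a
    worker index. This keeps a flow pinned to one worker but, as the
    text notes, implicitly assumes every flow has an equal and
    unchanging resource need.
    """
    digest = hashlib.sha256(repr(key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % n_workers
```

The mapping is deterministic for a given 5-tuple, which is exactly why a single very high-rate flow can overload its assigned worker under this scheme.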
Figure 1 illustrates an example of the high variation in session throughput causing short-term overload in a Packet data network Gateway (PGW). The x-axis in figure 1 represents time measured in minutes and the y-axis represents load measured in %. Figure 1 illustrates that there are too many 100% CPU peaks (indicated with circles in figure 1) on circuit boards. Since there are too many 100% peaks, the operator will not raise the average CPU load. A PGW is a gateway towards a Packet Data Network (PDN). Functions performed by the PGW include e.g. providing connectivity from a User Equipment (UE) to external PDNs by being the point of exit and entry of traffic for the UE, performing policy enforcement, packet filtering for each user, charging support, lawful interception and packet screening.
Figure 2 illustrates an example of a heat map of CPU loads over time. When several sessions have high throughput at the same time, overload occurs. The x-axis in figure 2 represents discrete CPUs and the y-axis represents time. The white area in figure 2 indicates that there is low CPU load, e.g. at night. The circles indicate where one user, or more than one user, has high throughput.
Figure 3 is a histogram of example load over CPUs at an instance in time. Outliers with high load limit the total capacity of the system. The x-axis in figure 3 represents CPU utilization in % and the y-axis represents the number of CPUs. The two circles in figure 3 illustrate that two CPUs are limiting the capacity of the system because they have a much larger load than the rest of the CPUs.
Real-time load-balancing techniques are based on aggregated weighted load per server over a plurality of servers. In other words, sessions are treated equally, and the primary input to choosing a server is the weighted, aggregated (over all sessions) loads of the servers.
There are few real-time load-balancing techniques that consider the load of new sessions to be balanced, and a few that propose changing the choice of server dynamically during the lifetime of the session, as a consequence of the change in session characteristics.
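The conventional weighted aggregated-load selection described above might look like the following sketch, with illustrative parameters:

```python
def least_loaded_server(loads, weights):
    """Conventional real-time load balancing: pick the server with the
    lowest weighted aggregate load. Sessions themselves are treated as
    equal; only per-server totals matter.

    loads:   per-server aggregate load (e.g. CPU %)
    weights: per-server capacity weight (higher = more capacity)
    """
    return min(range(len(loads)), key=lambda i: loads[i] / weights[i])
```

Note that nothing in this selection reacts to a single session's rate changing after assignment, which is the gap the embodiments herein address.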
SUMMARY
An objective of embodiments herein is therefore to obviate at least one of the above disadvantages and to provide improved load balancing in a communications system.

According to a first aspect, the object is achieved by a method performed by a controller for handling packet flows in a wireless communications system. The controller receives, from a policer node, data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor has exceeded a threshold. When the monitored real-time data rate has exceeded the threshold, the controller determines that at least one packet flow comprising at least one of the data packets should be handled by another packet processor or a flow-cache.
According to a second aspect, the object is achieved by a method performed by a policer node for handling packet flows in a wireless communications system. The policer node continuously monitors a packet data rate in real-time associated with data packets handled by a packet processor. The policer node detects that the monitored real-time data rate has exceeded a threshold. The exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor or by a flow-cache. The policer node transmits, to a controller, data rate information indicating that the monitored real-time data rate has exceeded the threshold.
According to a third aspect, the object is achieved by a controller for handling packet flows in a wireless communications system. The controller is adapted to receive, from a policer node, data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor has exceeded a threshold. The controller is adapted to, when the monitored real-time data rate has exceeded the threshold, determine that at least one packet flow comprising at least one of the data packets should be handled by another packet processor or a flow-cache.
According to a fourth aspect, the object is achieved by a policer node for handling packet flows in a wireless communications system. The policer node is adapted to continuously monitor a packet data rate in real-time associated with data packets handled by a packet processor. The policer node is adapted to detect that the monitored real-time data rate has exceeded a threshold. The exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor or by a flow-cache. The policer node is adapted to transmit, to a controller, data rate information indicating that the monitored real-time data rate has exceeded the threshold.
Thanks to the monitoring of the real-time data rate, the packet processor can be offloaded so that the load balancing in the communications system is improved.
Embodiments herein afford many advantages, of which a non-exhaustive list of examples follows: The embodiments herein propose to monitor the changing peak throughput of a session and, each measurement period, produce a list of the top X% of sessions ranked by peak throughput. This provides an advantage for a network operator because it can use the real-time peak throughput to control user experience or lower costs (through resource allocation and/or different service chain/session features).
Another advantage of the embodiments herein is that the wireless communications system can be dimensioned for a higher average load and thereby utilized more efficiently. Thus, the embodiments herein are cost effective.
A further advantage of the embodiments herein is that they may be applied to internal resources such as CPU cores within a server, but they may also be applied to multiple network elements in e.g. an SDN network.
Another advantage of the embodiments herein is that they may have a low impact on the existing performance of the wireless communications system.
The embodiments herein are not limited to the features and advantages mentioned above. A person skilled in the art will recognize additional features and advantages upon reading the following detailed description.
BRIEF DESCRIPTION OF THE DRAWINGS
The embodiments herein will now be further described in more detail in the following detailed description by reference to the appended drawings illustrating the embodiments and in which:
Fig. 1 is a graph illustrating high variation in session throughput.
Fig. 2 is a heat map of CPU loads over time.
Fig. 3 is a histogram of load over CPUs at an instance in time.
Fig. 4 is a schematic block diagram illustrating embodiments of a wireless communications system.
Fig. 5a, 5b are signaling diagrams illustrating embodiments of a method.
Fig. 6 is a schematic block diagram illustrating a payload packet flow.
Fig. 7 is a flow chart illustrating embodiments of a method.
Fig. 8 is a schematic block diagram illustrating embodiments of a method.
Fig. 9 is a flow chart illustrating embodiments of a method.
Fig. 10 is a schematic block diagram illustrating embodiments of a wireless communications system.
Fig. 11 is a flow chart illustrating embodiments of a method performed by a controller.
Fig. 12 is a schematic block diagram illustrating embodiments of a controller.
Fig. 13 is a flow chart illustrating embodiments of a method performed by a policer node.
Fig. 14 is a schematic block diagram illustrating embodiments of a policer node.
The drawings are not necessarily to scale and the dimensions of certain features may have been exaggerated for the sake of clarity. Emphasis is instead placed upon illustrating the principle of the embodiments herein.
DETAILED DESCRIPTION
By constantly monitoring the rate per UE or per IP flow, processing resources can be re-assigned as rates vary over time. This enables support for a high peak rate while still supporting high average processing resource utilization. A processing resource may be described as a general or special purpose processor and memory where a system function can be executed and information stored and retrieved.
A session policing function may be used to measure the real-time throughput. The real-time throughput may be calculated by observing the data rate during a time window, e.g. a measurement period. After each measurement period, a table with the top X% of sessions may be updated. The term current throughput may also be used instead of real-time throughput.
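By way of illustration, the per-period measurement and top-X% table described above can be sketched as follows. The class shape, the one-second measurement period and the 5% figure are assumptions for the example, not taken from the embodiments:

```python
from collections import defaultdict

MEASUREMENT_PERIOD_S = 1.0   # assumed length of the measurement period
TOP_X_PERCENT = 5            # assumed "top X%" value

class SessionMeter:
    """Accumulates bytes per session during a measurement period and, at the
    end of each period, produces a table of the top X% of sessions ranked by
    throughput (bits per second over the window)."""

    def __init__(self):
        self.bytes_in_window = defaultdict(int)

    def on_packet(self, session_id, size_bytes):
        # Called for every forwarded packet during the measurement period.
        self.bytes_in_window[session_id] += size_bytes

    def end_of_period(self):
        # Throughput in bits per second over the elapsed window.
        rates = {
            sid: (nbytes * 8) / MEASUREMENT_PERIOD_S
            for sid, nbytes in self.bytes_in_window.items()
        }
        ranked = sorted(rates.items(), key=lambda kv: kv[1], reverse=True)
        top_n = max(1, len(ranked) * TOP_X_PERCENT // 100)
        self.bytes_in_window.clear()   # start a fresh measurement period
        return ranked[:top_n]
```

The returned list corresponds to the table with the top X% of sessions that is updated after each measurement period.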
A load-balancing function (e.g. internal logic within the node, or a Software Defined Network (SDN) controller which is external to the PGW) may use this dynamic table to steer or offload traffic to dedicated resources, and/or evenly spread the traffic over existing processing resources, and/or reduce the cost of the top sessions by e.g. temporarily bypassing costly functions such as Deep Packet Inspection (DPI).
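A minimal sketch of such a load-balancing decision is given below. The top fraction, the rate limit for DPI bypass and the action names are assumptions for the example only:

```python
def plan_actions(rates, top_fraction=0.05, dpi_bypass_bps=30e6):
    """Given per-session rates (bits per second), pick the top sessions and
    decide, per session, whether to steer it to dedicated resources and
    whether to temporarily bypass a costly function such as DPI."""
    ranked = sorted(rates, key=rates.get, reverse=True)
    n_top = max(1, int(len(ranked) * top_fraction))
    actions = {}
    for sid in ranked[:n_top]:
        actions[sid] = {
            "steer_to": "dedicated",                     # dedicated resources
            "bypass_dpi": rates[sid] >= dpi_bypass_bps,  # skip costly function
        }
    return actions
```

Sessions outside the top fraction are left on the existing, shared processing resources.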
The dynamic processing resource assignment may also be based on policy or mobility events. For example, a UE is initially attached using Third Generation (3G) radio access. Later, the UE moves into 5G coverage, and the system may then dynamically reallocate the UE to processing resources which are more suitable for the characteristics of 5G (e.g. high data rate and low latency).
Figure 4 depicts a wireless communications system 100 in which embodiments herein may be implemented. The wireless communications system 100 may in some embodiments apply to one or more radio access technologies such as for example Long Term Evolution (LTE), LTE Advanced (LTE-A), Wideband Code Division Multiple Access (WCDMA), Global System for Mobile Communications (GSM), 5G or any other Third Generation Partnership Project (3GPP) radio access technology, any 3GPP2 technology such as e.g. Code Division Multiple Access 2000 (CDMA2000), or other radio access technologies such as e.g. WiFi or Wireless Local Area Network (WLAN) or Time Division-Synchronous Code Division Multiple Access (TD-SCDMA).
The wireless communications system 100 comprises a UE 101 which is served by a RAN node (not shown in figure 4). The UE 101 may be a device by which a subscriber may access services offered by an operator's network and services outside the operator's network to which the operator's radio access network and core network provide access, e.g. access to the Internet. The UE 101 may be any device, mobile or stationary, enabled to communicate in the communications network, for instance but not limited to e.g. user equipment, mobile phone, smart phone, sensors, meters, vehicles, household appliances, medical appliances, media players, cameras, Machine to Machine (M2M) device, Device to Device (D2D) device, Internet of Things (IoT) device or any type of consumer electronic, for instance but not limited to television, radio, lighting arrangements, tablet computer, laptop or Personal Computer (PC). The UE 101 may be portable, pocket storable, hand held, computer comprised, or vehicle mounted devices, enabled to communicate voice and/or data, via the radio access network, with another entity, such as another UE or a server.
The wireless communications system 100 comprises a packet processor 103 and at least one other packet processor 103. Together, all packet processors 103 in the wireless communications system 100 may be referred to as a set of packet processors. A packet processor 103 may be described as a processor which is adapted to process packets. In more detail, a packet processor 103 may perform tunnel encapsulation and de-capsulation, IP next-hop lookup, charging, updating of statistics, etc.
The wireless communications system 100 comprises a policer node 105, a controller 110, a redirection node 113, a flow-cache 115 and a traffic detection module 118. Each of the policer node 105, the controller 110, the redirection node 113, the flow-cache 115 and the traffic detection module 118 may be a packet processor 103.
The policer node 105 may be described as a node which is adapted to handle policies in the wireless communications system 100. In more detail, the policer node 105 may monitor bits-per-second time-averages of packet flows. The policer node 105 may asynchronously notify controller(s) 110 of significant changes in rate, and discard packets above a predetermined threshold. The policer node 105 may also be referred to as a policer, a policing node, a policer module, a policer function, a policer unit etc.
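The policer behaviour just described can be sketched as follows; the exponentially weighted moving average, the smoothing factor and the callback shape are assumptions chosen for the example:

```python
class PolicerMeter:
    """Maintains a bits-per-second time-average for one flow, reports a
    threshold crossing once per state change, and discards packets whose
    average rate exceeds a hard limit."""

    def __init__(self, threshold_bps, hard_limit_bps, notify):
        self.threshold_bps = threshold_bps    # early-filter threshold
        self.hard_limit_bps = hard_limit_bps  # packets above this are dropped
        self.notify = notify                  # callback toward the controller
        self.avg_bps = 0.0
        self.above = False

    def on_packet(self, size_bytes, interval_s, alpha=0.3):
        # Exponentially weighted moving average of the instantaneous rate.
        inst_bps = size_bytes * 8 / interval_s
        self.avg_bps = alpha * inst_bps + (1 - alpha) * self.avg_bps
        crossed = self.avg_bps > self.threshold_bps
        if crossed != self.above:             # report only on a state change
            self.above = crossed
            self.notify(crossed, self.avg_bps)
        return self.avg_bps <= self.hard_limit_bps   # False means discard
```

Reporting only on state changes keeps the signaling toward the controller 110 sparse.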
The controller 110 may be described as an arbiter to the redirection node 113 and the flow-cache 115. The controller 110 is adapted to take inputs and control outputs. The controller 110 may apply filters and prioritization based on configured policy, and may update the flow-cache 115 and the redirection table if necessary. The controller 110 may be e.g. a SDN controller. A SDN controller is a controller in a SDN which lies between the UE 101 at one end and applications at the other end. Any communications between applications and UEs 101 may have to go through the SDN controller.
The redirection node 113 is a node which may comprise a table which combines IP packet content with the set of available packet processor 103 instances. The redirection node 113 may be adapted to redirect packets and packet flows in the wireless communications system 100. The redirection node 113 may be referred to as a redirection table, a load balancer, a redirector, a redirection module, a redirection function, a redirection unit etc.
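A minimal sketch of such a redirection table is given below. Explicit entries installed by the controller take precedence, and other packets are spread by a stable hash over the available packet processor instances; the dict-based table and the use of SHA-256 are assumptions for the example:

```python
import hashlib

class RedirectionNode:
    """Maps IP packet content (a flow key) to a packet processor instance."""

    def __init__(self, processors):
        self.processors = processors   # available packet processor instances
        self.table = {}                # flow key -> processor, set by controller

    def select(self, key):
        # An explicit table entry installed by the controller wins.
        if key in self.table:
            return self.table[key]
        # Otherwise spread flows over the set of processors with a
        # stable hash, similar in spirit to ECMP load balancing.
        h = int(hashlib.sha256(repr(key).encode()).hexdigest(), 16)
        return self.processors[h % len(self.processors)]
```

Because the hash is stable, all packets of a flow reach the same packet processor until the controller installs an overriding entry.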
The flow-cache 115 is a cache which is configured to store data such as e.g. one or more packet flows. The flow-cache 115 may hold flow descriptors for packet flows whose packets can be minimally processed without need for packet manipulation or the traffic detection module 118. The flow-cache 115 may also be referred to as a fast-path or a memory.
The traffic detection module 118 is a module in the wireless communications system 100 which is adapted to detect traffic (e.g. packet flows) in the wireless communications system 100. In more detail, the traffic detection module 118 may perform a deep analysis of IP packets and flows in order to classify the data. The traffic detection module 118 may also manipulate packet headers and payload. The traffic detection module 118 may be referred to as a Traffic Detection Function (TDF), a traffic detector, a traffic detection module, a traffic detection function, a traffic detection unit etc.
It should be noted that the communication links in the wireless communications system 100 may be of any suitable kind including either a wired or wireless link. The link may use any suitable protocol depending on type and level of layer (e.g. as indicated by the Open Systems Interconnection (OSI) model) as understood by the person skilled in the art.
At least some of the entities illustrated in figure 4 may be co-located in one entity. Such entity may be for example a node such as a PGW, a GGSN, a PCEF, a SDN node, a Packet Data Serving Node (PDSN) (for CDMA2000), a home agent (for Mobile IP) or a Local Mobility Anchor (LMA) (for Proxy Mobile IPv6 (PMIP)). The SDN node may be a SDN data path node, a SDN switch, a SDN forwarding node etc. For example, the packet processor 103 and the other packet processor 103 may be co-located in a PGW and the other entities are external to the PGW. In another example, all the entities in figure 4 except the UE 101 are co-located within the PGW.
The method for handling packet flows in a wireless communications system according to some embodiments will now be described with reference to the signaling diagrams depicted in figures 5a and 5b. Figure 5a illustrates steps 500-512 and figure 5b illustrates steps 501 and 513-519. The method comprises at least some of the following steps, which steps may as well be carried out in another suitable order than described below.
Step 500
This step is seen in figure 5a. The packet processor 103 handles one or more packet flows. The handling may be e.g. processing of the packet flow. As mentioned earlier, a data packet flow (also referred to as a packet flow) is a sequence of data packets (also referred to as packets) sent from a particular source to a particular unicast, anycast, or multicast destination that a node desires to label as a flow (RFC 6437). A packet flow may be identified by an IP 6-tuple (the IP 5-tuple plus DSCP) or a subset thereof, or be based on another packet descriptor, e.g. a GTP-U TEID. A packet may also be referred to as a payload, an IP packet or a data payload.
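The flow labeling described above can be sketched as follows; the dict-based packet representation and the field names are assumptions for the example:

```python
from collections import namedtuple

# A flow descriptor: the IP 5-tuple plus DSCP (the "6-tuple"). A subset of
# these fields, or another descriptor such as a GTP-U TEID, may be used
# instead, as described in the embodiments.
FlowKey = namedtuple("FlowKey", "src_ip dst_ip proto src_port dst_port dscp")

def flow_key(pkt):
    """Derive the flow key from a parsed packet (here a dict, for brevity).
    Packets with the same key belong to the same packet flow."""
    return FlowKey(pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
                   pkt.get("src_port", 0), pkt.get("dst_port", 0),
                   pkt.get("dscp", 0))
```

All packets that map to the same `FlowKey` are treated as one packet flow by the nodes described herein.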
Step 501
This step is seen in figure 5a and figure 5b. The policer node 105 continuously monitors the packet data rate in real-time.
Step 502
This step is seen in figure 5a. The policer node 105 detects that the real-time data rate has exceeded a threshold. Instead of exceeding a threshold, the policer node 105 may detect that the data rate is within a predetermined range.
Step 503
This step is seen in figure 5a. The policer node 105 may dampen at least one of the changes in the data rate.
Step 504
This step is seen in figure 5a. The policer node 105 sends data rate information to the controller 110. The data rate information may indicate that the continuously monitored real-time data rate has exceeded the threshold. The controller 110 may store the received data rate information in e.g. a table.
Step 505
This step is seen in figure 5a. Based on the data rate information received in step 504, the controller 110 determines that the packet flow should be handled by someone else, e.g. another packet processor 103 or a flow-cache 115.
Step 506
This step is seen in figure 5a. The controller 110 may determine which packet flow should be handled by someone else. This may be done in case the packet processor 103 handles a plurality of packet flows.
Step 507
This step is seen in figure 5a. The controller 110 may determine that the packet flow is of a movable class. Some packet flows cannot be moved to be handled by someone else, while other packet flows are allowed to be handled by someone else. The controller 110 may classify the packet flow on its own, or the controller 110 may receive the classification information from e.g. the traffic detection module 118 (i.e. the traffic detection module 118 may perform the classification of the packet flow).
Step 508
This step is seen in figure 5a. The controller 110 may determine who should handle the packet flow, e.g. another packet processor 103 or a flow-cache 115. In another example, the redirection node 113 takes this decision after having received the instructions in step 509 described below.
Step 509
This step is seen in figure 5a. The controller 110 may transmit instructions to the redirection node 113 or the flow-cache 115 to move the packet flow. The instructions may be transmitted to the one which is determined in step 508. The instructions may also comprise information indicating which packet flow should be moved, to whom the packet flow should be moved, and the classification of the packet flow.
Step 510
This step is seen in figure 5a. If the instructions in step 509 were transmitted to the redirection node 113, the redirection node 113 may move the packet flow(s) from the packet processor 103 to the other packet processor 103 or to the flow-cache 115. If the instructions in step 509 were transmitted to the flow-cache 115, the flow-cache 115 may move the packet flow(s) from the packet processor 103 to the other packet processor 103. Moving the packet flow to the flow-cache 115 is not illustrated in figure 5a.
Step 511
This step is seen in figure 5a. Instead of sending the instructions in step 509, the controller 110 itself may move the packet flow to the other packet processor 103 or the flow-cache 115.
Step 512
This step is seen in figure 5a. The other packet processor 103 or the flow-cache 115 may handle the moved packet flow. The embodiment where the moved packet flow is handled by the flow-cache 115 is not illustrated in figure 5a.
When the packet flow is moved, the packet processor 103 may be seen as being offloaded because it is heavily loaded. When the packet flow is moved, it may also be due to a policy associated with the packet flow or the packet processor 103. For example, idle flows (e.g. flows with a data rate close to zero) may be dynamically moved to another packet processor 103 with a large, but slow memory. A video call may be dynamically moved to another packet processor 103 with lower latency (e.g. other hardware with a fast memory). A video flow (e.g. with a data rate indicating HD data) may be dynamically moved to a packet processor which is geographically located in proximity of the video server.
Step 513
This step is seen in figure 5b. When the policer node 105 is continuously monitoring the packet data rate in real-time, the policer node 105 may detect that the data rate is equal to or under the threshold, i.e. that the load in the packet processor 103 has been reduced.
Step 514
This step is seen in figure 5b. The policer node 105 may send data rate information to the controller 110 to indicate that the continuously monitored packet data rate is equal to or under the threshold. The controller 110 may store the data rate information e.g. in a table.
Step 515
This step is seen in figure 5b. The controller 110 may determine that at least one of the moved packet flow(s) may be moved back to the packet processor 103.
Step 516
This step is seen in figure 5b. The controller 110 may send instructions to the redirection node 113 or the flow-cache 115 (depending on who is handling the moved packet flow) to move back the packet flow which was previously moved to the other packet processor 103.
Step 517
This step is seen in figure 5b. The redirection node 113 or the flow-cache 115 (depending on who has received the instructions which were sent in step 516) may move the packet flow from the other packet processor 103 or the flow-cache 115 back to the packet processor 103.
Step 518
This step is seen in figure 5b. This may be an alternative to step 516. The controller 110 may move the packet flow back to the packet processor 103.
Step 519
This step is seen in figure 5b. The packet processor 103 again handles the packet flow.
Figure 6 illustrates an example of a packet flow in the wireless communications system 100 and that a different packet processor can be chosen to handle the packet flow. In figure 6, the wireless communications system 100 is exemplified with four packet processors 103.
However, any other suitable number of packet processors 103 is also applicable to the embodiments herein. The solid arrow illustrates the packet flow which may go through the redirection node 113, one or more packet processors 103 and the policer node 105. The data packet does not go through the controller 110; instead, the controller 110 may be seen as the entity which controls the packet flow in the wireless communications system 100.
Figure 7 is a flow chart illustrating an example method. The dotted arrows in figure 7 represent control feedback and the solid arrows represent payload packet flow. The method in figure 7 comprises at least some of the following steps, which steps may be performed in another suitable order than described below.
Step 701
The redirection node 113 may receive IP packets from another node. The received IP packets may be examined by the redirection node 113. The redirection node 113 may determine the packet processor instance (step 702) for the IP packet. This decision may be made using the IP packet content (e.g. IP address, GTP TEID values, etc.) and information in a table where IP packet content is combined with the set of available packet processor 103 instances. In other words, the redirection node 113 chooses a packet processor 103 based on an entry in the redirection node 113. Some example implementations of this are IP routing, ECMP load balancing, SDN switching, GTP TEID load balancing/steering, etc.
Step 702
The packet processor 103 processes each received IP packet by performing necessary actions such as charging, header de-capsulation, header en-capsulation, update of statistics, etc. After processing, the packet is sent to the policer node 105 (step 703) before being forwarded further in the network.
Step 703
This step corresponds to step 501 in figures 5a and 5b, and steps 503 and 504 in figure 5a. The policer node 105 may keep track of the real-time data rate. It updates the bits-per-second time-average and evaluates it against a threshold. The real-time data rate may also be referred to as a real-time packet data rate.
It may apply rate limiting measures if configuration or applied policy dictates. The real-time rate may be monitored on different hierarchical levels such as per IP flow, per IP endpoint, per DSCP value, etc., or even a combination of these. The rate value(s) are sent to the controller 110 (step 705). In order to avoid extensive signaling and also to avoid oscillations, the policer node 105 may apply a way to dampen how often rate values are sent to the controller. An example implementation of this is to only send a rate value when it has changed significantly, e.g. between pre-defined intervals. As a final step, the IP packet may be forwarded further in the network.
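One way to implement the dampening mentioned above (only sending a rate value when it moves between pre-defined intervals) can be sketched as follows; the band boundaries are assumed example values:

```python
def rate_band(bps, bands=(0, 1e6, 10e6, 30e6, 100e6)):
    """Return the index of the pre-defined interval the rate falls in."""
    idx = 0
    for i, lower in enumerate(bands):
        if bps >= lower:
            idx = i
    return idx

class DampenedReporter:
    """Sends a rate value to the controller only when the rate moves into a
    different pre-defined interval, avoiding extensive signaling and
    oscillation."""

    def __init__(self, send):
        self.send = send   # callback toward the controller
        self.band = None

    def update(self, bps):
        band = rate_band(bps)
        if band != self.band:   # a significant change: a band was crossed
            self.band = band
            self.send(bps)
```

Small fluctuations inside one interval therefore produce no signaling at all.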
The threshold in the policer node 105 is used as an early filter to reduce the amount of data that should be sent to, and processed by, the controller 110. In a PGW, for example, only 20% of sessions will have a non-zero throughput at any time. A suitable threshold may be 0 Mbps. If peak throughput has not historically been a problem for the operator before launching LTE-A/5G, a higher threshold may be used, e.g. 30 Mbps.
In the case the move operation takes longer than the need for higher throughput (for very short bursts of throughput), the dampening mechanism avoids the session being moved unnecessarily (without any gain), and also avoids oscillations (moving the session back and forth).
Step 704
This step corresponds to step 504 in figure 5a. When the real-time data rate is above the threshold (detected in step 703), the controller 110 may be asynchronously notified about the exceeded threshold.
Step 705
This step corresponds to steps 504-512 in figure 5a. The controller 110 receives rate values from the policer node 105. The controller 110 also has configuration and information about available packet processor 103 instances and their capabilities, and also configured load balancing policies. By using the received rate values the controller 110 may update the redirection node 113 (step 701) to accommodate (i) dynamic load balancing of the packet processors 103 (step 702), and (ii) improved data rate by dynamically moving a high data rate IP flow to a packet processor 103 (step 702) with sufficient capacity. The controller 110 may install the top entries sorted by rate (e.g. bandwidth or packets-per-second) in the redirection node 113. In summary, the controller 110 applies filters based on configured policy, and updates the redirection node 113 if necessary.
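A minimal sketch of this controller behaviour (collecting rate values and installing the top entries into the redirection table) might look as follows; the class shape, the capacity limit and the single dedicated processor are assumptions for the example:

```python
class Controller:
    """Collects rate reports from policer nodes and installs the top entries,
    sorted by rate, into a redirection table with limited capacity."""

    def __init__(self, redirection_table, fast_processor, capacity):
        self.rates = {}                          # flow key -> latest rate
        self.redirection_table = redirection_table
        self.fast_processor = fast_processor     # dedicated resource
        self.capacity = capacity                 # table entries available

    def on_rate_report(self, flow_key, bps):
        self.rates[flow_key] = bps
        # Re-rank all known flows by rate and install the top ones.
        top = sorted(self.rates, key=self.rates.get, reverse=True)
        self.redirection_table.clear()
        for key in top[: self.capacity]:
            self.redirection_table[key] = self.fast_processor
```

Flows that fall out of the top entries simply revert to the default hash-based selection in the redirection node.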
Figure 8 illustrates another example of the payload packet flow. Figure 8 illustrates an example where the payload is inspected by a traffic detection module 118. Figure 8 illustrates three different packet flows which use different internal paths through the wireless communications system 100. One packet flow uses the traffic detection module 118, another packet flow is handled by the packet processor 103 and a third packet flow is moved into the flow-cache 115.
Figure 9 is a flow chart illustrating an example method. The dotted arrows in figure 9 represent control feedback and the solid arrows represent payload packet flow. The method in figure 9 comprises at least some of the following steps, which steps may be performed in another suitable order than described below.
Step 901 and 902
The flow-cache 115 searches for a matching entry (step 901) and the flow-cache 115 (aka fast-path) processes the packet (step 902). All received IP packets may be first handled by the flow-cache 115 (aka fast-path). The flow-cache 115 may comprise a table of IP flows. For each flow entry, information for where and how the packet shall be processed is available. The IP packet can be matched in the table based on a 6-tuple (IP 5-tuple + DSCP), a standard 5-tuple, a subset of the 5-tuple, or another packet descriptor such as a GTP-U TEID.
If a match is found between a received IP packet and an entry in the table, the flow-cache 115 may process the packet according to the information given by the table entry. After processing, the packet may be sent to the policer node 105 (step 904). If no match was found, the packet may be sent to a packet processor 103 (step 903). The packet processor 103 selection may be for example as exemplified in figure 7.
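The flow-cache lookup and fall-through just described can be sketched as follows; the dict-based table and the callable entries are simplifications for the example, not taken from the embodiments:

```python
def handle_packet(pkt, flow_cache, packet_processor, policer):
    """Flow-cache fast path: if the packet's 6-tuple matches a cached flow
    entry, process it in the cache; otherwise fall through to a packet
    processor. Either way, the packet passes the policer before leaving."""
    key = (pkt["src_ip"], pkt["dst_ip"], pkt["proto"],
           pkt["src_port"], pkt["dst_port"], pkt.get("dscp", 0))
    entry = flow_cache.get(key)
    if entry is not None:
        out = entry(pkt)             # minimal, cached processing (fast path)
    else:
        out = packet_processor(pkt)  # full processing (slow path)
    return policer(out)              # rate accounting before forwarding
```

Installing an entry for a flow key therefore switches that flow from the slow path to the fast path without touching other flows.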
Step 903
This step may correspond to step 702 in figure 7. The packet processor 103 processes the packet, and the traffic detection module 118 may be invoked if the subscriber policy dictates so.
The packet processor 103 processes each received IP packet. It performs necessary actions like charging, header de-capsulation, header en-capsulation, update of statistics, etc.
Configuration or policies may dictate that IP-flows (e.g. all packets for a given subscriber) are to be sent to a traffic detection module 118 (step 906). After processing (including returning from the traffic detection module 118 in step 906), the packet may be sent to the policer node 105 (step 904) before being forwarded further in the network.
Step 904
This step corresponds to step 703 in figure 7. The policer node 105 updates the real-time data rate and evaluates the real-time data rate against a threshold. When the real-time data rate is above the threshold, the controller 110 is notified asynchronously. The policer node 105 keeps track of the real-time data rate on an IP flow granularity level, i.e. 6-tuple or a subset of the 6-tuple. It may apply rate limiting measures based on configuration or policies applied. The rate values are sent to the controller 110 (step 905). In order to avoid extensive signaling and also to avoid oscillations, the policer node 105 may apply a mechanism to dampen how rate values are updated. An example implementation of this is to only send a rate value when it has changed significantly, e.g. between pre-defined intervals. As a final step, the IP packet may be forwarded further in the network.
Step 905
The controller 110 applies filters based on configured policy, and updates the flow-cache 115 if necessary.
The controller 110 receives rate values from the policer node 105 (step 904) and IP flows suitable for flow-cache offload from the traffic detection module 118 (step 906). The controller 110 also has configuration and information about available flow-cache 115 (step 901) instances and their capabilities, and also configured load balancing policies. By using the received rate values the controller 110 may update the table comprised in the flow-cache 115 (step 901) to accommodate (i) dynamic load balancing of the packet processors 103 (step 903) and the traffic detection module 118 (step 906) instances, and (ii) improved data rate by dynamically moving a high data rate IP flow to a flow-cache 115 (step 901) for more efficient processing (aka fast path handling). The controller 110 may install the top flows sorted by rate (e.g. bandwidth or packets-per-second) in the available flow-cache 115 instances.
Step 906
The traffic detection module 118 may classify the packet and evaluate whether subsequent packets in the flow need further classification. When no further classification is needed, the controller 110 may be notified asynchronously.
The traffic detection module 118 may perform a deeper analysis of IP packets and flows in order to classify the data packet flow. The classification may be used for service-based charging and control. If the traffic detection module 118 has no reason to analyze subsequent IP packets of a flow, e.g. if a final classification can be given to a flow and no processing is needed exclusive to the traffic detection module 118, the controller 110 (step 905) is notified of the IP flow. The notification may also be skipped for other reasons such as lawful intercept. An example of processing exclusive to the traffic detection module 118 and not available in a flow-cache implementation is manipulation of packet headers or payload. Another example may be where Uniform Resource Identifiers (URIs) (e.g. a Uniform Resource Locator (URL)) are used for flow classification. In such an example, new URIs may arrive during the lifetime of a packet flow. Therefore, the classification may also change.
The embodiments herein use a meter (this is the same as the policer node 105) which constantly monitors the real-time data rate per user or flow. The output is sent to a mechanism that collects and stores these samples. Periodically the top X% of the users or flows over a certain threshold are sent to a load balancer or SDN controller (this is the same as the controller 110 in figure 4) for action. The action can be to either re-assign resources or simply to disable a function like DPI.
An existing session policing function may be reused to measure the real-time data rate. The real-time rate may be calculated by observing the passing packets during a time window i.e. a measurement period. After each measurement period a table with the top X% of sessions is updated.
With this table the embodiments herein have the possibility to either
• Move entire session to dedicated CPU resources (i.e. set of CPU cores which are the same as the packet processors in figure 4) for high data rates
• Offload IP flows to a flow-cache 115 location (e.g. a line card or an external switch/router)
Figure 10 is an example of a PGW node in a wireless communications system 100. In figure 10, all entities from figure 4 are co-located within one PGW. The packet processors 103 are marked with "P" in figure 10. A PGW may have a type of Line Cards (LC) which has a type of CPU/Network Processing Unit (NPU) that may be suitable to work as a cache for PGW flows, e.g. the flow-cache 115.
Figure 10 shows a case where a high rate flow F2 (the continuous arrow) has been relocated to the flow-cache 115 while another flow F1 (the dotted arrow) remains in its original location.
The embodiments herein may also be deployed in a cloud environment. The application representing the node in which at least some of the entities in figure 4 are co-located may execute in a Virtual Machine (VM) on top of server blades. The cloud infrastructure may comprise switches and routers to enable networking.
By choosing an infrastructure that supports a flow-cache 115 (FC) the embodiments herein may be utilized also in this type of setup. The high rate flows/users are identified. Instead of re-locating the high rate flows/users internally, they may be installed in the infrastructure equipment (e.g. a hypervisor, a Top of the Rack (TOR) switch or gateway router).
Flow-cache 115
Once a packet is received by a flow-cache 115 for a session, there may be the following alternatives for further processing, depending on the session and flow the packet belongs to:
1. The packet is forwarded to a packet processor 103.
2. The packet is processed entirely by the flow-cache 115.
The above decision can be made either on a per-session or a per-flow (e.g. 6-tuple) basis. The result is that for a given session, different flows may be handled using different internal paths. To make the decision, the flow-cache 115 uses the information provided by the controller 110 in the previously described steps (509, 516).
Controller 110:
There may be a many-to-one relationship between the policer nodes 105 and the controller 110.
Aggregation of the policer samples may be done in a separate entity (the controller 110) for two reasons:
1. To support filtering use-cases in which sessions on many workers are ordered and prioritized against each other (i.e. the filtering cannot be located in each worker).
2. To keep the implementation of the load balancer simple and efficient. A simple load balancer allows for implementation on accelerated hardware. If sorting and filtering of samples is a suitable workload, then the controller function may be subsumed into the flow-cache 115 or the packet processor 103.
After the controller 110 has made a decision on which sessions should be moved, it signals the redirection node 113 or flow-cache 115 where the session should be processed. It may also signal the relevant packet processors 103 to prepare for the session at the new location, and to clean up the session at the previous location.
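The move operation described above can be sketched as follows; the prepare/cleanup method names and the dict-based redirection table are assumptions for the example:

```python
def move_session(session_id, src, dst, redirection_table):
    """Relocate a session: prepare it at the new location, repoint the
    redirection entry so subsequent packets go to the new location, then
    clean up the session state at the previous location."""
    dst.prepare(session_id)              # new location receives session state
    redirection_table[session_id] = dst  # subsequent packets go to dst
    src.cleanup(session_id)              # old location releases its state
```

Ordering matters here: preparing before repointing avoids packets arriving at a location that has no state for the session.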
When a session/flow in the flow-cache 115 expires, the move operation is from the flow-cache 115 back to the packet processor 103.
The method described above will now be described seen from the perspective of the controller 110. Figure 11 is a flowchart describing the present method performed by the controller 110 for handling packet flows in a wireless communications system 100. The method comprises at least some of the following steps to be performed by the controller 110, which steps may be performed in another suitable order than described below:
Step 1102
This step corresponds to step 504 in figure 5a, step 704 in figure 7 and step 805 in figure 8. The controller 110 receives, from a policer node 105, data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor 103 has exceeded a threshold.
The monitored real-time data rate may be per at least one of an IP flow, IP endpoint, DSCP value and GTP TEID value.
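As a hedged sketch only, real-time rate monitoring keyed per IP flow, IP endpoint, DSCP value or GTP TEID value might look like the following. A fixed measurement window is used here purely for simplicity; the class and key names are assumptions, not part of the embodiments.

```python
# Minimal sketch of per-key real-time rate monitoring, where a key may
# identify an IP flow, an IP endpoint, a DSCP value or a GTP TEID.
# A real policer would more likely use a token bucket or EWMA.

class RateMonitor:
    def __init__(self, window_s=1.0):
        self.window_s = window_s
        self._bytes = {}   # key -> bytes seen in the current window
        self._start = {}   # key -> window start timestamp

    def observe(self, key, nbytes, now):
        start = self._start.setdefault(key, now)
        if now - start >= self.window_s:      # window expired: restart it
            self._start[key] = now
            self._bytes[key] = 0
        self._bytes[key] = self._bytes.get(key, 0) + nbytes

    def rate_bps(self, key, now):
        elapsed = max(now - self._start.get(key, now), 1e-9)
        return 8 * self._bytes.get(key, 0) / elapsed

mon = RateMonitor()
mon.observe("teid-7", 125_000, now=0.0)   # 125 kB at t=0
mon.observe("teid-7", 125_000, now=0.5)   # 125 kB at t=0.5 s
assert mon.rate_bps("teid-7", now=0.5) == 4_000_000  # 2 Mbit over 0.5 s
```

The resulting per-key rate is what would be compared against the threshold reported to the controller 110 in step 1102.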
Step 1103
This step corresponds to step 505 in figure 5a, step 704 in figure 7 and step 806 in figure 8. When the monitored real-time data rate has exceeded the threshold, the controller 110 determines that at least one packet flow comprising at least one of the data packets should be handled by another packet processor 103 or a flow-cache 115.
The at least one packet flow may be temporarily or permanently handled by the other packet processor 103 or the flow-cache 115. The determining that the packet flow should be handled by another packet processor 103 or by the flow-cache 115 may be based on at least one of a configuration, a class or a policy, in addition to the exceeded threshold.

Step 1104
This step corresponds to step 506 in figure 5a and step 806 in figure 8. The controller 110 may determine which packet flow comprising at least one of the data packets should be handled by another packet processor 103 or the flow-cache 115.

Step 1105
This step corresponds to step 508 in figure 5a, step 705 in figure 7 and step 806 in figure 8. The controller 110 may determine whether the at least one packet flow should be handled by another packet processor or by the flow-cache 115.

Step 1106
This step corresponds to step 507 in figure 5a and steps 807 and 808 in figure 8. The controller 110 may determine that the at least one packet flow is of a class that is allowed to be moved to the at least one other packet processor 103 or the flow-cache 115. The decision that the packet flow is of a class that is allowed to be moved may be made based on classification information received from a traffic detection module 118, or it may be based on a classification of the at least one packet flow performed by the controller 110.
Step 1107
This step corresponds to step 511 in figure 5a and step 806 in figure 8. The controller 110 may move the at least one packet flow to at least one other packet processor 103 or flow-cache 115.
Step 1108
This step corresponds to step 509 in figure 5a. The controller 110 may transmit instructions to a redirection node 113 or the flow-cache 115 to move the at least one packet flow to the at least one other packet processor 103 or the flow-cache 115.
Step 1109

This step corresponds to step 514 in figure 5b, step 704 in figure 7 and step 805 in figure 8. The controller 110 may receive, from the policer node 105, data rate information indicating that the monitored real-time data rate is equal to or below the threshold. That the monitored real-time data rate is equal to or below the threshold may indicate that the at least one packet flow can be moved back to be handled by the packet processor 103.
Step 1110
This step corresponds to step 518 in figure 5b and step 806 in figure 8. The controller 110 may move the at least one packet flow back to the packet processor 103.
Step 1111
This step corresponds to step 516 in figure 5b and is an alternative to step 1110. The controller 110 may transmit instructions to a redirection node 113 or the flow-cache 115 to move the at least one packet flow back to the packet processor 103.
At least some of the policer node 105, the packet processor 103, the other packet processor 103, the flow-cache 115, a redirection node 113, a traffic detection module 118 and the controller 110 may be collocated in one node. The node may be a PGW or a GGSN or a PCEF node or a SDN node. The controller 110 may be a SDN controller.
To perform the method steps shown in figure 11 for handling packet flows in a wireless communications system 100, the controller 110 may comprise a controller arrangement as shown in figure 12. To perform the method steps shown in figure 11 for handling packet flows in a wireless communications system 100, the controller 110 is adapted to, e.g. by means of a controller receiving module 1201, receive, from a policer node 105, data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor 103 has exceeded a threshold. The determining that the packet flow should be handled by another packet processor 103 or by the flow-cache 115 may be based on at least one of a configuration, a class or a policy, in addition to the exceeded threshold. The monitored real-time data rate may be per at least one of an IP flow, IP endpoint, DSCP value and GTP TEID value. The controller receiving module 1201 may also be referred to as a controller receiving unit, a controller receiving means, a controller receiving circuit, controller means for receiving, controller input unit etc. The controller receiving module 1201 may be a receiver, a transceiver etc. The controller receiving module 1201 may be a wireless receiver of the controller 110 of a wireless or fixed communications system.
The controller 110 is further adapted to, e.g. by means of a controller determining module 1203, determine, when the monitored real-time data rate has exceeded the threshold, that at least one packet flow comprising at least one of the data packets should be handled by another packet processor 103 or a flow-cache 115. The at least one packet flow may be temporarily or permanently handled by the other packet processor 103 or the flow-cache 115. The controller determining module 1203 may be a controller processor 1205 of the controller 110. The controller determining module 1203 may also be referred to as a controller determining unit, a controller determining means, a controller determining circuit, controller means for determining, etc.
The controller 110 may be further adapted to, e.g. by means of the controller determining module 1203, determine which packet flow comprising at least one of the data packets should be handled by another packet processor 103 or the flow-cache 115. The controller 110 may be further adapted to, e.g. by means of the controller determining module 1203, determine whether the at least one packet flow should be handled by another packet processor or by the flow-cache 115.
The controller 110 may be further adapted to, e.g. by means of a controller moving module 1208, move the at least one packet flow to at least one other packet processor 103 or flow-cache 115. The controller moving module 1208 may be the controller processor 1205 of the controller 110. The controller moving module 1208 may also be referred to as a controller moving unit, a controller moving means, a controller moving circuit, controller means for moving, etc.
The controller 110 may be further adapted to, e.g. by means of a controller transmitting module 1210, transmit instructions to a redirection node 113 or the flow-cache 115 to move the at least one packet flow to the at least one other packet processor 103 or the flow-cache 115. The controller transmitting module 1210 may also be referred to as a controller transmitting unit, a controller transmitting means, a controller transmitting circuit, controller means for transmitting, controller output unit etc. The controller transmitting module 1210 may be a transmitter, a transceiver etc. The controller transmitting module 1210 may be a wireless transmitter of the controller 110 of a wireless or fixed communications system.
The controller 110 may be further adapted to, e.g. by means of the controller determining module 1203, determine that the at least one packet flow is of a class that is allowed to be moved to the at least one other packet processor 103 or the flow-cache 115. The determining that the packet flow is of a class that is allowed to be moved may be performed based on classification information received from a traffic detection module 118, or it may be based on a classification of the at least one packet flow performed by the controller 110. The controller 110 may be further adapted to, e.g. by means of the controller receiving module 1201, receive, from the policer node 105, data rate information indicating that the monitored real-time data rate is equal to or below the threshold. That the monitored real-time data rate is equal to or below the threshold may indicate that the at least one packet flow can be moved back to be handled by the packet processor 103.
The controller 110 may be adapted to, e.g. by means of the controller moving module 1208, move the at least one packet flow back to the packet processor 103.
The controller 110 may be adapted to, e.g. by means of the controller transmitting module 1210, transmit instructions to a redirection node 113 or the flow-cache 115 to move the at least one packet flow back to the packet processor 103.
The controller 110 may be adapted to, e.g. by means of the controller transmitting module 1210, send the received data rate information to a redirection node 113. As mentioned earlier, at least some of the policer node 105, the packet processor 103, the other packet processor 103, the flow-cache 115, a redirection node 113, a traffic detection module 118 and the controller 110 may be collocated in one node. The node may be a PGW or a GGSN or a PCEF node or a SDN node. The controller 110 may be a SDN controller. The controller 110 may further comprise a controller memory 1213 comprising one or more memory units. The controller memory 1213 is arranged to be used to store data, received data streams, power level measurements, threshold values, time periods, configurations, schedulings, data rate information, policies, information about packet processors 103 with available capacity, instructions, information about packet processor capabilities, packet data flows, sessions, classes, real-time data rate, and applications to perform the methods herein when being executed in the controller 110. The controller memory 1213 may comprise instructions executable by the controller processor 1205.
In some embodiments, a first computer program may comprise instructions which, when executed on at least one processor, cause the at least one processor to carry out the method steps in figure 11. A first carrier may comprise the first computer program, and the first carrier may be one of an electronic signal, optical signal, radio signal or computer readable storage medium.
The method described above will now be described seen from the perspective of the policer node 105. Figure 13 is a flowchart describing the present method performed by the policer node 105 for handling packet flows in a wireless communications system 100. The method comprises at least some of the following steps to be performed by the policer node 105, which steps may be performed in another suitable order than described below:
Step 1301
This step corresponds to step 501 in figure 5a, step 703 in figure 7 and step 804 in figure 8. The policer node 105 continuously monitors a packet data rate in real-time associated with data packets handled by a packet processor 103. The real-time data rate may be monitored per at least one of an IP flow, IP endpoint, DSCP value and GTP TEID value.

Step 1302
This step corresponds to step 502 in figure 5a, step 703 in figure 7 and step 804 in figure 8. The policer node 105 detects that the monitored real-time data rate has exceeded a threshold. The exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor 103 or by a flow-cache 115.
Step 1303
This step corresponds to step 503 in figure 5a, step 703 in figure 7 and step 804 in figure 8. The policer node 105 may dampen changes in the data rate information before transmission to the controller 110.
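The dampening in step 1303 may, as an illustration only, combine smoothing with separate high/low watermarks so that short bursts around the threshold do not cause the flow to flap between locations. The class name, parameters and values below are assumptions used for the sketch.

```python
# Illustrative sketch of dampening the reported rate before signalling
# the controller: exponential smoothing plus separate high/low watermarks
# (hysteresis) so brief bursts around the threshold do not cause repeated
# move / move-back signalling. Parameter values are examples only.

class DampenedReporter:
    def __init__(self, high, low, alpha=0.3):
        self.high, self.low, self.alpha = high, low, alpha
        self.smoothed = 0.0
        self.above = False   # last state reported to the controller

    def sample(self, rate):
        """Returns "exceeded", "cleared" or None (no report needed)."""
        self.smoothed = self.alpha * rate + (1 - self.alpha) * self.smoothed
        if not self.above and self.smoothed > self.high:
            self.above = True
            return "exceeded"
        if self.above and self.smoothed <= self.low:
            self.above = False
            return "cleared"
        return None

rep = DampenedReporter(high=100.0, low=80.0)
assert rep.sample(500.0) == "exceeded"   # smoothed = 150 > high watermark
assert rep.sample(90.0) is None          # smoothed stays between low and high
```

Only the "exceeded" and "cleared" transitions would then be transmitted to the controller 110, which matches the threshold reports in steps 1304 and 1306.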
Step 1304
This step corresponds to step 504 in figure 5a, step 704 in figure 7 and step 805 in figure 8. The policer node 105 transmits, to the controller 110, data rate information indicating that the monitored real-time data rate has exceeded the threshold.
Step 1305
This step corresponds to step 513 in figure 5b, step 703 in figure 7 and step 804 in figure 8. The policer node 105 detects that the monitored real-time data rate which previously has exceeded the threshold is equal to or below the threshold. That the monitored real-time data rate is equal to or below the threshold may indicate that the at least one packet flow can be moved back to be handled by the packet processor 103.
Step 1306
This step corresponds to step 514 in figure 5b, step 704 in figure 7 and step 805 in figure 8. The policer node 105 may transmit, to a controller 110, data rate information indicating that the monitored real-time data rate is equal to or below the threshold.
As mentioned earlier, at least some of the policer node 105, the packet processor 103, the redirection node 113, the flow-cache 115, the traffic detection module 118 and the controller 110 may be collocated in one node. The node may be a PGW or a GGSN or a PCEF node, or a SDN node.
To perform the method steps shown in figure 13 for handling packet flows in a wireless communications system 100, the policer node 105 may comprise a policer node arrangement as shown in figure 14. To perform the method steps shown in figure 13 for handling packet flows in a wireless communications system 100, the policer node 105 is adapted to, e.g. by means of a policer monitoring module 1401, continuously monitor a packet data rate in real-time associated with data packets handled by a packet processor 103. The real-time data rate may be monitored per at least one of an IP flow, IP endpoint, DSCP value and GTP TEID value. The policer monitoring module 1401 may be a policer processor 1403 of the policer node 105. The policer monitoring module 1401 may also be referred to as a policer monitoring unit, a policer monitoring means, a policer monitoring circuit, policer means for monitoring, etc.
The policer node 105 is further adapted to, e.g. by means of a policer detecting module 1405, detect that the monitored real-time data rate has exceeded a threshold. The exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor 103 or by a flow-cache 115. The policer detecting module 1405 may be the policer processor 1403 of the policer node 105. The policer detecting module 1405 may also be referred to as a policer detecting unit, a policer detecting means, a policer detecting circuit, policer means for detecting, etc. The policer node 105 is further adapted to, e.g. by means of a policer transmitting module 1408, transmit, to a controller 110, data rate information indicating that the monitored real-time data rate has exceeded the threshold. The policer transmitting module 1408 may also be referred to as a policer transmitting unit, a policer transmitting means, a policer transmitting circuit, policer means for transmitting, policer output unit etc. The policer transmitting module 1408 may be a transmitter, a transceiver etc. The policer transmitting module 1408 may be a wireless transmitter of the policer node 105 of a wireless or fixed communications system.
The policer node 105 may be further adapted to, e.g. by means of a policer dampening module 1410, dampen changes in the data rate information before transmission to the controller 110. The policer dampening module 1410 may be the policer processor 1403 of the policer node 105. The policer dampening module 1410 may also be referred to as a policer dampening unit, a policer dampening means, a policer dampening circuit, policer means for dampening, etc. The policer node 105 may be further adapted to, e.g. by means of the policer detecting module 1405, detect that the monitored real-time data rate which previously has exceeded the threshold is equal to or below the threshold. That the monitored real-time data rate is equal to or below the threshold may indicate that the at least one packet flow can be moved back to be handled by the packet processor 103.
The policer node 105 may be adapted to, e.g. by means of the policer transmitting module 1408, transmit, to a controller 110, data rate information indicating that the monitored real-time data rate is equal to or below the threshold.
At least some of the policer node 105, the packet processor 103, a redirection node 113, a flow-cache 115, a traffic detection module 118 and the controller 110 may be collocated in one node. The node may be a PGW or a GGSN or a PCEF node or a SDN node.
The policer node 105 is further adapted to, e.g. by means of a policer receiving module 1413, receive information from the other entities in the wireless communications system 100. The policer receiving module 1413 may also be referred to as a policer receiving unit, a policer receiving means, a policer receiving circuit, policer means for receiving, policer input unit etc. The policer receiving module 1413 may be a receiver, a transceiver etc. The policer receiving module 1413 may be a wireless receiver of the policer node 105 of a wireless or fixed communications system. The policer node 105 may further comprise a policer memory 1415 comprising one or more memory units. The policer memory 1415 is arranged to be used to store data, received data streams, power level measurements, threshold values, time periods, configurations, schedulings, data rate information, policies, information about packet processors 103 with available capacity, instructions, information about packet processor capabilities, packet data flows, sessions, classes, real-time data rate, and applications to perform the methods herein when being executed in the policer node 105. The policer memory 1415 may comprise instructions executable by the policer processor 1403. In some embodiments, a second computer program may comprise instructions which, when executed on at least one processor, cause the at least one processor to carry out the method steps in figure 13. A second carrier may comprise the second computer program, and the second carrier may be one of an electronic signal, optical signal, radio signal or computer readable storage medium.
The present mechanism for handling packet flows in a wireless communications system 100 may be implemented through one or more processors, such as a controller processor 1205 in the controller arrangement depicted in figure 12 and a policer processor 1403 in the policer node arrangement depicted in figure 14, together with computer program code for performing the functions of the embodiments herein. The processor may be for example a Digital Signal Processor (DSP), Application Specific Integrated Circuit (ASIC) processor, Field-Programmable Gate Array (FPGA) processor or microprocessor. The program code mentioned above may also be provided as a computer program product, for instance in the form of a data carrier carrying computer program code for performing the embodiments herein when being loaded into at least one of the controller 110 and the policer node 105. One such carrier may be in the form of a CD-ROM disc. It is however feasible with other data carriers such as a memory stick. The computer program code can furthermore be provided as pure program code on a server and downloaded to at least one of the controller 110 and the policer node 105.
By constantly monitoring the rate per UE 101 or per packet flow, resources may be reassigned as rates vary over time. This enables support for high peak rates while still maintaining high average resource utilization. The real-time throughput data in the packet processor is used as input to the controller 110 to determine session features.
The embodiments herein are not limited to the above described embodiments. Various alternatives, modifications and equivalents may be used. Therefore, the above embodiments should not be taken as limiting the scope of the embodiments, which is defined by the appending claims.
It should be emphasized that the term "comprises/comprising" when used in this specification is taken to specify the presence of stated features, integers, steps or components, but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof. It should also be noted that the words "a" or "an" preceding an element do not exclude the presence of a plurality of such elements.
The term "configured to" used herein may also be referred to as "arranged to", "adapted to", "capable of" or "operative to".
It should also be emphasised that the steps of the methods defined in the appended claims may, without departing from the embodiments herein, be performed in another order than the order in which they appear in the claims.

Claims

1. A method performed by a controller (110) for handling packet flows in a wireless communications system (100), the method comprising:
receiving (504, 704, 805, 1101), from a policer node (105), data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor (103) has exceeded a threshold; and
when the monitored real-time data rate has exceeded the threshold, determining (505, 704, 806, 1103) that at least one packet flow comprising at least one of the data packets should be handled by another packet processor (103) or a flow-cache (115).
2. The method according to claim 1, further comprising:
determining (506, 806, 1104) which packet flow comprising at least one of the data packets should be handled by another packet processor (103) or the flow-cache (115).
3. The method according to any one of claims 1-2, further comprising:
determining (508, 705, 806, 1105) whether the at least one packet flow should be handled by another packet processor or by the flow-cache (115).
4. The method according to any one of claims 1-3, further comprising:
moving (511, 806, 1107) the at least one packet flow to at least one other packet processor (103) or flow-cache (115); or
transmitting (509, 1108) instructions to a redirection node (113) or the flow-cache (115) to move the at least one packet flow to the at least one other packet processor (103) or the flow-cache (115).
5. The method according to any one of claims 1-4, wherein the at least one packet flow should be temporarily or permanently handled by the other packet processor (103) or the flow-cache (115).
6. The method according to any one of claims 1-5, further comprising: determining (507, 807, 808, 1106) that the at least one packet flow is of a class that is allowed to be moved to the at least one other packet processor (103) or the flow-cache (115).
7. The method according to claim 6, wherein the determining (507, 807, 808, 1106) that the packet flow is of a class that is allowed to be moved is performed based on
classification information received from a traffic detection module (118); or
based on a classification of the at least one packet flow performed by the controller (110).
8. The method according to any one of claims 1-7, wherein the determining that the packet flow should be handled by another packet processor (103) or by the flow-cache (115) is based on at least one of a configuration, a class or a policy, in addition to the exceeded threshold.
9. The method according to any one of claims 1-8, further comprising:
receiving (514, 704, 805, 1109), from the policer node (105), data rate information indicating that the monitored real-time data rate is equal to or below the threshold; wherein that the monitored real-time data rate is equal to or below the threshold indicates that the at least one packet flow can be moved back to be handled by the packet processor (103).
10. The method according to claim 9, further comprising:
moving (518, 806, 1110) the at least one packet flow back to the packet processor (103); or
transmitting (516, 1111) instructions to a redirection node (113) or the flow-cache (115) to move the at least one packet flow back to the packet processor (103).
11. The method according to any one of claims 1-10, wherein the monitored real-time data rate is per at least one of an Internet Protocol, IP, flow, IP endpoint, Differentiated
Services Code Point, DSCP, value and General Packet Radio Service Tunneling Protocol Tunnel Endpoint Identifier, GTP TEID, value.
12. The method according to any one of claims 1-11, wherein at least some of the policer node (105), the packet processor (103), the other packet processor (103), the flow-cache (115), a redirection node (113), a traffic detection module (118) and the controller (110) are collocated in one node.
13. The method according to claim 12, wherein the node is a Packet data network Gateway, PGW, or a Gateway General packet radio services Support Node, GGSN, or a Policy Control Enforcement Function, PCEF, node or a Software Defined Network, SDN, node.
14. The method according to any one of claims 1-13, wherein the controller (110) is a Software Defined Network, SDN, controller.
15. A method performed by a policer node (105) for handling packet flows in a wireless communications system (100), the method comprising:
continuously monitoring (501 , 703, 804, 1301) a packet data rate in real-time associated with data packets handled by a packet processor (103);
detecting (502, 703, 804, 1302) that the monitored real-time data rate has exceeded a threshold, wherein the exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor (103) or by a flow-cache (115); and
transmitting (504, 704, 805, 1304), to a controller (110), data rate information indicating that the monitored real-time data rate has exceeded the threshold.
16. The method according to claim 15, wherein the real-time data rate is monitored per at least one of an Internet Protocol, IP, flow, IP endpoint, Differentiated Services Code Point, DSCP, value and General Packet Radio Service Tunneling Protocol Tunnel Endpoint Identifier, GTP TEID, value.
17. The method according to any one of claims 15-16, further comprising:
dampening (503, 703, 804, 1303) changes in the data rate information before transmission to the controller (110).
18. The method according to any one of claims 15-17, further comprising: detecting (513, 703, 804, 1305) that the monitored real-time data rate which previously has exceeded the threshold is equal to or below the threshold, wherein that the monitored real-time data rate is equal to or below the threshold indicates that the at least one packet flow can be moved back to be handled by the packet processor (103); and
transmitting (514, 704, 805, 1306), to a controller (110), data rate information indicating that the monitored real-time data rate is equal to or below the threshold.
19. The method according to any one of claims 15-18, wherein at least some of the policer node (105), the packet processor (103), a redirection node (113), a flow-cache
(115), a traffic detection module (118) and the controller (110) are collocated in one node.
20. The method according to claim 19, wherein the node is a Packet data network Gateway, PGW, or a Gateway General packet radio services Support Node, GGSN, or a Policy Control Enforcement Function, PCEF, node, or a Software Defined Network, SDN, node.
21. A controller (110) for handling packet flows in a wireless communications system (100), the controller (110) being adapted to:
receive, from a policer node (105), data rate information indicating that a continuously monitored real-time data rate associated with data packets handled by a packet processor (103) has exceeded a threshold; and
when the monitored real-time data rate has exceeded the threshold, determine that at least one packet flow comprising at least one of the data packets should be handled by another packet processor (103) or a flow-cache (115).
22. The controller (110) according to claim 21, being further adapted to:
determine which packet flow comprising at least one of the data packets should be handled by another packet processor (103) or the flow-cache (115).
23. The controller (110) according to any one of claims 21-22, being further adapted to: determine whether the at least one packet flow should be handled by another packet processor or by the flow-cache (115).
24. The controller (110) according to any one of claims 21-23, being further adapted to: move the at least one packet flow to at least one other packet processor (103) or flow-cache (115); or to
transmit instructions to a redirection node (113) or the flow-cache (115) to move the at least one packet flow to the at least one other packet processor (103) or the flow-cache (115).
25. The controller (110) according to any one of claims 21-24, wherein the at least one packet flow should be temporarily or permanently handled by the other packet processor
(103) or the flow-cache (115).
26. The controller (110) according to any one of claims 21-25, being further adapted to: determine that the at least one packet flow is of a class that is allowed to be moved to the at least one other packet processor (103) or the flow-cache (115).
27. The controller (110) according to claim 26, wherein the determining (507, 807, 808) that the packet flow is of a class that is allowed to be moved is performed based on classification information received from a traffic detection module (118); or
based on a classification of the at least one packet flow performed by the controller (110).
28. The controller (110) according to any one of claims 21-27, wherein the determining that the packet flow should be handled by another packet processor (103) or by the flow-cache (115) is based on at least one of a configuration, a class or a policy, in addition to the exceeded threshold.
29. The controller (110) according to any one of claims 21-28, being further adapted to: receive, from the policer node (105), data rate information indicating that the monitored real-time data rate is equal to or below the threshold; wherein that the monitored real-time data rate is equal to or below the threshold indicates that the at least one packet flow can be moved back to be handled by the packet processor (103).
30. The controller (110) according to claim 29, being further adapted to: move the at least one packet flow back to the packet processor (103); or to transmit instructions to a redirection node (113) or the flow-cache (115) to move the at least one packet flow back to the packet processor (103).
31. The controller (110) according to any one of claims 21-30, wherein the monitored real-time data rate is per at least one of an Internet Protocol, IP, flow, IP endpoint, Differentiated Services Code Point, DSCP, value and General Packet Radio Service Tunneling Protocol Tunnel Endpoint Identifier, GTP TEID, value.
32. The controller (110) according to any one of claims 21-31 , wherein at least some of the policer node (105), the packet processor (103), the other packet processor (103), the flow-cache (115), a redirection node (113), a traffic detection module (118) and the controller (110) are collocated in one node.
33. The controller (110) according to claim 21-32, wherein the node is a Packet data network Gateway, PGW, or a Gateway General packet radio services Support Node, GGSN, or a Policy Control Enforcement Function, PCEF, node or a Software Defined Network, SDN, node.
34. The controller (110) according to any one of claims 21-33, wherein the controller (110) is a Software Defined Network, SDN, controller.
35. A policer node (105) for handling packet flows in a wireless communications system (100), the policer node (105) being adapted to:
continuously monitor a packet data rate in real-time associated with data packets handled by a packet processor (103);
detect that the monitored real-time data rate has exceeded a threshold, wherein the exceeded threshold indicates that at least one packet flow comprising at least one of the data packets should be handled by another packet processor (103) or by a flow-cache (115); and to
transmit, to a controller (110), data rate information indicating that the monitored real-time data rate has exceeded the threshold.
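The three policer steps of claim 35 (monitor, detect, notify) can be read as an edge-triggered check: the controller is told only when the rate newly crosses the threshold. The sketch below is illustrative only; `threshold_bps`, `notify_controller` and the event dictionary are assumptions, not terms from the application:

```python
def check_exceeded(rate_bps, threshold_bps, notify_controller, was_exceeded):
    """Detect-and-notify step of claim 35: send data rate information to
    the controller only when the monitored real-time rate newly rises
    above the threshold, so a sustained overload is reported once."""
    exceeded = rate_bps > threshold_bps
    if exceeded and not was_exceeded:
        # Newly exceeded: the flow should be moved to another packet
        # processor or to the flow-cache.
        notify_controller({"event": "exceeded", "rate_bps": rate_bps})
    return exceeded
```

Reporting only on the crossing, rather than on every sample, keeps the signalling toward the controller proportional to state changes instead of to traffic volume.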
36. The policer node (105) according to claim 35, wherein the real-time data rate is monitored per at least one of an Internet Protocol, IP, flow, IP endpoint, Differentiated Services Code Point, DSCP, value and General Packet Radio Service Tunneling Protocol Tunnel Endpoint Identifier, GTP TEID, value.
37. The policer node (105) according to any one of claims 35-36, being further adapted to:
dampen changes in the data rate information before transmission to the controller (110).
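Claim 37 requires the policer to dampen changes in the data rate information before reporting it, but does not prescribe a mechanism. One common choice (an assumption here, not stated in the application) is an exponentially weighted moving average:

```python
def damped_rates(rates, alpha=0.2):
    """One possible way to 'dampen changes in the data rate information'
    (claim 37) before transmission to the controller: an exponentially
    weighted moving average. The smoothing factor `alpha` is illustrative;
    the application does not prescribe a mechanism or a value."""
    smoothed, out = None, []
    for r in rates:
        # First sample initialises the average; later samples are blended.
        smoothed = r if smoothed is None else alpha * r + (1 - alpha) * smoothed
        out.append(smoothed)
    return out
```

Smoothing the reported rate prevents a short traffic spike from triggering a flow move that would immediately be reverted.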
38. The policer node (105) according to any one of claims 35-37, being further adapted to:
detect that the monitored real-time data rate which previously has exceeded the threshold is equal to or below the threshold, wherein that the monitored real-time data rate is equal to or below the threshold indicates that the at least one packet flow can be moved back to be handled by the packet processor (103); and to
transmit, to the controller (110), data rate information indicating that the monitored real-time data rate is equal to or below the threshold.
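Claim 38 adds the reverse transition: once a rate that previously exceeded the threshold falls back to or below it, the controller is notified so the flow can be moved back. A minimal sketch of that path, with hypothetical names as in the earlier claims:

```python
def check_cleared(rate_bps, threshold_bps, notify_controller, was_exceeded):
    """Claim 38: when the monitored rate, which previously exceeded the
    threshold, is back at or below it, tell the controller so the flow
    can be moved back to the original packet processor."""
    exceeded = rate_bps > threshold_bps
    if was_exceeded and not exceeded:
        notify_controller({"event": "cleared", "rate_bps": rate_bps})
    return exceeded
```

In practice this check would be combined with the dampening of claim 37 (or a lower clear-threshold) to avoid oscillating moves when the rate hovers around the threshold.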
39. The policer node (105) according to any one of claims 35-38, wherein at least some of the policer node (105), the packet processor (103), a redirection node (113), a flow-cache (115), a traffic detection module (118) and the controller (110) are collocated in one node.
40. The policer node (105) according to claim 39, wherein the node is a Packet data network Gateway, PGW, or a Gateway General packet radio services Support Node, GGSN, or a Policy Control Enforcement Function, PCEF, node, or a Software Defined Network, SDN, node.
41. A first computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to any one of claims 1-14.
42. A first carrier comprising the first computer program of claim 41, wherein the first carrier is one of an electronic signal, optical signal, radio signal or computer readable storage medium.
43. A second computer program comprising instructions which, when executed on at least one processor, cause the at least one processor to carry out the method according to any one of claims 15-20.
44. A second carrier comprising the second computer program of claim 43, wherein the second carrier is one of an electronic signal, optical signal, radio signal or computer readable storage medium.
PCT/EP2015/063058 2015-06-11 2015-06-11 Nodes and methods for handling packet flows WO2016198112A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/EP2015/063058 WO2016198112A1 (en) 2015-06-11 2015-06-11 Nodes and methods for handling packet flows


Publications (1)

Publication Number Publication Date
WO2016198112A1 true WO2016198112A1 (en) 2016-12-15

Family

ID=53483784

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2015/063058 WO2016198112A1 (en) 2015-06-11 2015-06-11 Nodes and methods for handling packet flows

Country Status (1)

Country Link
WO (1) WO2016198112A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109768931A (en) * 2017-11-09 2019-05-17 中国移动通信集团公司 Method, switch, device and computer readable storage medium for processing data packets
CN109768931B (en) * 2017-11-09 2020-10-13 中国移动通信集团公司 Method, switch, device and computer readable storage medium for processing data packets
CN115150338A (en) * 2021-03-29 2022-10-04 华为技术有限公司 Message flow control method, device, equipment and computer readable storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5697054A (en) * 1994-06-15 1997-12-09 Telefonaktiebolaget Lm Ericsson Method and apparatus for load sharing transceiver handlers in regional processors of radio communications systems
US7397762B1 (en) * 2002-09-30 2008-07-08 Nortel Networks Limited System, device and method for scheduling information processing with load-balancing
US20090006521A1 (en) * 2007-06-29 2009-01-01 Veal Bryan E Adaptive receive side scaling
WO2010048419A2 (en) * 2008-10-24 2010-04-29 Qualcomm Incorporated Wireless network resource adaptation
US20100195974A1 (en) * 2009-02-04 2010-08-05 Google Inc. Server-side support for seamless rewind and playback of video streaming
US20110231564A1 (en) * 2000-09-25 2011-09-22 Yevgeny Korsunsky Processing data flows with a data flow processor
EP2432224A1 (en) * 2010-09-16 2012-03-21 Harman Becker Automotive Systems GmbH Multimedia system
WO2013169258A1 (en) * 2012-05-10 2013-11-14 Intel Corporation Network routing based on resource availability



Similar Documents

Publication Publication Date Title
Li et al. Toward software-defined cellular networks
JP6701196B2 (en) Enhancement of quality of experience (QoE) in communication
US10506492B2 (en) System and method to facilitate link aggregation using network-based internet protocol (IP) flow mobility in a network environment
US9380489B2 (en) Dynamic network traffic analysis and traffic flow configuration for radio networks
US10454827B2 (en) Method and devices for controlling usage of multi-path TCP
US9722887B2 (en) Adaptive quality of service (QoS) based on application latency requirements
US20180063018A1 (en) System and method for managing chained services in a network environment
WO2017071186A1 (en) Multi-link bandwidth allocation method and apparatus for mobile network, and mobile device
EP2916613A1 (en) Devices and method using same EPS bearers in downlink and uplink
US10728793B2 (en) Aggregation of congestion information
US10979349B2 (en) Methods and apparatuses for flexible mobile steering in cellular networks
CN110121898B (en) Method and system for switching equipment terminal generating elephant flow and transmission manager
US10341224B2 (en) Layer-3 flow control information routing system
JP2021512567A (en) Systems and methods for identifying candidate flows in data packet networks
Grover et al. liteflow: Lightweight and distributed flow monitoring platform for sdn
US20220286904A1 (en) Technique for Controlling and Performing Data Traffic Handling in a Core Network Domain
Krishnan et al. Mechanisms for optimizing link aggregation group (LAG) and equal-cost multipath (ECMP) component link utilization in networks
WO2016198112A1 (en) Nodes and methods for handling packet flows
CN109792405B (en) Method and apparatus for shared buffer allocation in a transmission node
WO2015139729A1 (en) Configuration of backhaul bearers
Al-Najjar et al. Flow-level load balancing of http traffic using openflow
WO2022166577A1 (en) Method for measuring packet loss rate, and communication apparatus and communication system
WO2024078695A1 (en) Quality of service support for service traffic requiring in-network computing in mobile networks

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 15730992

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 15730992

Country of ref document: EP

Kind code of ref document: A1