WO2008061022A1 - Peer-to-peer aided live video sharing system - Google Patents

Peer-to-peer aided live video sharing system

Info

Publication number
WO2008061022A1
Authority
WO
WIPO (PCT)
Prior art keywords
peer
video
node
overlay
message
Prior art date
Application number
PCT/US2007/084296
Other languages
French (fr)
Inventor
Yusuo Hu
Chong Luo
Jian Tang
Jian-Guang Lou
Hua Cai
Jiang Li
Original Assignee
Microsoft Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Microsoft Corporation filed Critical Microsoft Corporation
Publication of WO2008061022A1 publication Critical patent/WO2008061022A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/27 Server based end-user applications
    • H04N 21/274 Storing end-user multimedia data in response to end-user request, e.g. network recorder
    • H04N 21/2743 Video hosting of uploaded data from client
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks
    • H04L 67/1074 Peer-to-peer [P2P] networks for supporting data block transmission mechanisms
    • H04L 67/1078 Resource delivery mechanisms
    • H04L 67/1085 Resource delivery mechanisms involving dynamic management of active down- or uploading connections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services, e.g. displaying phone caller identification, shopping application communicating with other users, e.g. chatting
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/632 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing using a connection between clients on a wide area network, e.g. setting up a peer-to-peer communication via Internet for retrieving video segments from the hard-disk of other client devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/16 Analogue secrecy systems; Analogue subscription systems
    • H04N 7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N 7/17309 Transmission or handling of upstream communications
    • H04N 7/17318 Direct or substantially direct transmission and handling of requests
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/104 Peer-to-peer [P2P] networks
    • H04L 67/1061 Peer-to-peer [P2P] networks using node-based peer discovery mechanisms
    • H04L 67/1072 Discovery involving ranked list compilation of candidate peers

Definitions

  • Some web sites offer a comprehensive search engine of webcams from around the world.
  • the portal maintains a database of live webcams on the Internet, allowing users to search by keyword or browse through the categories and subcategories.
  • Video chatting services allow users to share cameras with others in real-time.
  • most of these video chatting applications use unicast methods to send video from the capturer to audiences and therefore can only serve a limited number of users in each session.
  • Overall, an efficient way for an ordinary Internet user to share a live video with a large audience in real time is still lacking.
  • Embodiments of the invention are directed to a system and method for sharing live video streams on a large scale through the Internet using a Shared Video Service (SVS).
  • Embodiments herein include a peer-to-peer streaming technique where video data packets are exchanged among the audience. This technique relieves load and bandwidth stress on a content server and ensures the scalability of the SVS.
  • FIG. 1 is a block diagram of an SVS in accordance with an embodiment of the invention.
  • FIG. 2 is a flowchart showing the logic and operations of uploading video to an SVS in accordance with an embodiment of the invention.
  • FIG. 3 is a flowchart showing the logic and operations of receiving video from an SVS in accordance with an embodiment of the invention.
  • FIG. 4 is a block diagram of an example computing device for implementing embodiments of the invention.
  • FIG. 5 is a block diagram of a Shared Video Service in accordance with an embodiment of the invention.
  • FIG. 6 is a flowchart showing the logic and operations of a peer joining a membership overlay in accordance with an embodiment of the invention.
  • FIG. 7 is a flowchart showing the logic and operations of candidate list operations in accordance with an embodiment of the invention.
  • FIG. 8 is a flowchart showing the logic and operations of partner operations in accordance with an embodiment of the invention.
  • FIG. 9 is a flowchart showing the logic and operations of streaming using a prefetch-pull strategy in accordance with an embodiment of the invention.
  • FIG. 10 is a block diagram of a Shared Video Service protocol stack in accordance with an embodiment of the invention.
  • FIG. 11 is a block diagram of a Link Layer Protocol (LLP) header format in accordance with an embodiment of the invention.
  • FIG. 12 is a block diagram of a bandwidth measurement packet in accordance with an embodiment of the invention.
  • LLP Link Layer Protocol
  • FIG. 13 is a block diagram of a Transport Level Protocol (TLP) header in accordance with an embodiment of the invention.
  • TLP Transport Level Protocol
  • FIG. 14 is a block diagram of a membership overlay message structure in accordance with an embodiment of the invention.
  • FIG. 15 is a block diagram of a Media Streaming Protocol (MSP) packet header in accordance with an embodiment of the invention.
  • MSP Media Streaming Protocol
  • FIG. 1 and the following discussion are intended to provide a brief, general description of a suitable computing environment to implement embodiments of the invention.
  • the operating environment of FIG. 1 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment.
  • Other well known computing systems, environments, and/or configurations that may be suitable for use with embodiments described herein include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), media players), multiprocessor systems, programmable consumer electronics, network personal computers, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
  • Computer readable instructions may be distributed via computer readable media (discussed below).
  • Computer readable instructions may be implemented as program modules, such as functions, objects, application programming interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types.
  • APIs application programming interfaces
  • the functionality of the computer readable instructions may be combined or distributed as desired in various environments.
  • Embodiments of the invention are directed to a system and method for sharing live video streams on a large scale through the Internet called a Shared Video Service (SVS).
  • SVS enables a user to share captured video with other people in real time, specify the metadata of the shared video for indexing and retrieval, search video of interest on the service using metadata, and watch shared video in real time.
  • video includes a video and an audio portion.
  • embodiments of the invention may also be used with audio only streams or video only streams.
  • a Shared Video Service (SVS) system 100 includes a transmission subsystem 102 and a management subsystem 104.
  • Transmission subsystem 102 is responsible for video uploading, downloading, and peer-to-peer (P2P) relaying.
  • Management subsystem 104 provides functionalities for browsing, searching, indexing, and managing shared videos.
  • When a user wants to share video through the SVS, the user sends a live or recorded video to an SVS hosting node.
  • the hosting node receives video from the upload client and stores it.
  • the video may include a live video stream captured by the upload client in real time or a video file (i.e., recorded video) stored on the upload client.
  • the user is required to log on to the SVS prior to uploading any video.
  • Viewers may browse and search for available videos on the SVS.
  • Each shared video may have some associated metadata stored in the database server such as category, location, tags, and the like. Viewers can browse and search the videos using this metadata.
  • a hosting node connects with a bootstrapping node to stream the video to the viewer.
  • the video streamed to the viewer may be being captured live or may be sent from a video file being sent by an upload client to the system in real time.
  • a P2P streaming mechanism is designed to assist in video transmission. When the video hosting node reaches a threshold, such as its maximum capacity, the P2P mechanism begins working. All the download clients within the same video session form a location-aware overlay, and live peer-to-peer streaming is performed in a receiver-driven fashion.
  • Transmission subsystem 102 includes an upload client 106, one or more download clients 108, a hosting node 110 (additionally, hosting nodes 111 and 112 are shown), and a bootstrapping node 114.
  • Upload client 106 acts as the source of a video.
  • upload client 106 captures live video, encodes the video streams, generates the packets, and transmits them to hosting node 110 in real time.
  • upload client 106 sends a pre-recorded video file in a live fashion to hosting node 110.
  • the video was captured and recorded at an earlier time. However, the recorded video is sent to hosting node 110 and streamed to download clients 108 in real time.
  • a flowchart 200 shows the operations of uploading a video to the SVS in accordance with an embodiment of the invention.
  • upload client 106 first contacts bootstrapping node 114 to request an available hosting node (shown at block 202).
  • Bootstrapping node 114 responds with address information for an available hosting node (shown at block 204).
  • the bootstrapping node schedules a hosting node to the upload client according to the current loads of all the hosting nodes.
  • upload client 106 sends an upload request to hosting node 110.
  • After receiving an Acknowledgement (ACK) message from hosting node 110 (shown at block 208), upload client 106 starts to send the video packets to hosting node 110 in their sequencing order (shown at block 210). After upload client 106 starts successfully sending video to hosting node 110, hosting node 110 registers the information of the new video session with bootstrapping node 114 (shown at block 212). Bootstrapping node 114 updates database server 120 with video session information. The video being sent by upload client 106 is now available for viewing at download clients 108. As discussed below, database server 120 tracks all video sessions within the SVS and may be searched by a user.
  • ACK Acknowledgement
  • a flowchart 300 shows the operations of requesting a video by a download client in accordance with an embodiment of the invention.
  • a download client 108 contacts bootstrapping node 114 to join an existing video session (shown at block 302).
  • a downloading client also referred to as a streaming client
  • Download client 108 also discovers some peers (i.e., other download clients) within the same video session and forms its own neighborhood through the P2P overlay 116 (shown at block 304).
  • the bootstrapping node determines if the status of the video session is at a threshold (shown at block 306).
  • the threshold is based at least in part on the maximum capacity of the hosting node.
  • the maximum capacity of the hosting node is the maximum number of download clients per video session the hosting node can stream to simultaneously.
  • the maximum capacity may be affected by load limitations of the hosting node (e.g., SERVER_FULL_LOAD), traffic load on the network, any combination thereof, and the like.
  • the threshold may be set at maximum capacity or at a portion of maximum capacity (e.g., 90%).
  • When the video session is at the threshold, the download client works in P2P mode and obtains the streaming packets from its overlay peers (shown at block 310). Otherwise, the download client contacts hosting node 110 to obtain the video directly from hosting node 110 (shown at block 308).
  • When running in P2P mode, the download client uses a prefetch-pull streaming algorithm to get data from its peers (discussed below in section 2). The download client also periodically updates the peers according to the observed throughput. In one embodiment, the download client starts playback when a specified time interval has elapsed after receiving the first packet. At the download client, the received packets are first buffered in a transmission buffer pool, and then pulled on demand by the playback module of the download client.
  • Hosting node 110 handles the video upload from an upload client and streaming to the download clients (i.e., peers).
  • the hosting node can be a dedicated server or a client node. For both cases, the hosting node has limited bandwidth and is assumed to be able to serve only a limited number of directly streaming download clients.
  • the hosting node receives video packets from an upload client and sends the video packets to the download clients in real time.
  • Each hosting node can manage multiple video sessions. Each video session is identified by a video's identification (ID). Each video session serves a number of download clients and an upload client.
  • ID video's identification
  • a hosting node registers its address with the bootstrapping node when joining the SVS (e.g., on boot up) and un-registers itself on exit from the SVS.
  • a hosting node also periodically reports its load, including its total number of download clients and upload clients to the bootstrapping node, so that the bootstrapping node can manage the load balancing of all the hosting nodes.
  • a bootstrapping node 114 is deployed to help the download client peers to join the video session and discover other peers.
  • the bootstrapping node handles all the requests from upload and download clients. It helps all the downloading clients to form a location-aware overlay through a membership overlay algorithm (discussed in section 2 below).
  • the bootstrapping node may include a dedicated server or a client computer. In one embodiment, the bootstrapping node may be deployed on the same computer together with the hosting node.
  • Bootstrapping node 114 may include the following functionalities: hosting node management, video session management, P2P bootstrapping support, and video entry management.
  • bootstrapping node 114 performs hosting node management. Each hosting node reports its address and streaming load to the bootstrapping node. When an upload client starts to upload, the upload client first contacts the bootstrapping node and queries for an available hosting node to host the video. The bootstrapping node schedules a hosting node to the upload client according to the current loads of all the hosting nodes.
  • bootstrapping node 114 performs video session management. After an upload client starts successfully sending video to the hosting node, the hosting node registers the information of the new video session with the bootstrapping node.
  • video session information may include video bit-rate, resolution, frame-rate, and the like.
  • the bootstrapping node monitors all the video sessions.
  • the bootstrapping node records all download clients that receive streaming packets directly from a hosting node.
  • load threshold e.g., maximum capacity
  • bootstrapping node 114 performs P2P bootstrapping support.
  • the bootstrapping node may maintain a peer cache that contains a number of the random peers watching the video. The peer cache is periodically updated. If any peer wants to watch a shared video, it first contacts the bootstrapping node to join a corresponding streaming overlay.
  • bootstrapping node 114 performs video entry management.
  • the bootstrapping node calls a web service to add the entry into a database at a database server 120.
  • bootstrapping node 114 calls the web service to remove the entry from the database at database server 120.
  • When the bootstrapping node exits the SVS, it calls the web service to remove all the video entries from the database.
  • video management subsystem 104 includes a database server 120, a web server 122, and a browsing/sharing user interface (UI) 124.
  • All the video session entries and their metadata are stored and managed by database server 120.
  • Each video has a unique entry (record) in the database.
  • the primary key of the record is the video ID.
  • Each record may also include other metadata such as title, author name, width, height, bit rate, and frame rate of the video.
  • the indexing tables for location, category, and tags are also stored in database server 120. Thus, all shared videos can be retrieved through this metadata.
  • video metadata is directly provided by upload client 106 to database server 120 through a web service and browsing/sharing UI 124.
  • Web server 122 is deployed as a unique entry point for the whole Shared Video Service. Some web services are developed on the web server to manage all the video entries and their metadata. These web services include the functionalities of browsing, searching, and authoring videos.
  • Web services of web server 122 may be accessed using browsing/sharing user interface 124. UI 124 responds to user actions or queries, calls the web services, and generates the appropriate views to users. Through user interface 124, users are able to browse all shared videos by metadata, share personal videos, author the metadata of individual shared video entries, and the like. Once a user finds a video of interest, the user may select the video through UI 124. The user's system (i.e., download client) may then request the video as described above in connection with download clients 108.
  • the user's system i.e., download client
  • Embodiments of the SVS may be used in various scenarios.
  • a breaking news event such as a disaster or an accident
  • a bystander holds his or her camera phone and transmits live video to the SVS.
  • Official news- agency coverage of the event may begin much later, if at all.
  • a camera from a window of a building is pointed to the intersection of two roads.
  • Video is transmitted to the SVS in real-time.
  • Drivers who will use this intersection can watch the traffic situation in advance, while driving, or when in congestion.
  • a traveler keeps his camera open.
  • SVS subscribers watch the journey in real-time.
  • television stations and news agencies may provide television programs and news through SVS.
  • computing device 400 for implementing one or more embodiments of the invention is shown.
  • Configurations of computing device 400 may be used as a bootstrapping node, a hosting node, an upload client, or a download client as described herein.
  • Computing device 400 may also be used as a database server or a web server as described herein.
  • computing device 400 includes at least one processing unit 402 and memory 404.
  • memory 404 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.) or some combination of the two. This configuration is illustrated in FIG. 4 by dashed line 406.
  • device 400 may also have additional features and/or functionality.
  • device 400 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape.
  • additional storage e.g., removable and/or non-removable
  • Computer readable instructions to implement embodiments of the invention may be stored in storage 408.
  • Storage 408 may also store other computer readable instructions to implement an operating system, an application program, and the like.
  • Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data.
  • Memory 404 and storage 408 are examples of computer storage media.
  • Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 400. Any such computer storage media may be part of device 400.
  • Computer readable media may include communication media.
  • Device 400 may also include communication connection(s) 412 that allow the device 400 to communicate with other devices, such as with other computing devices through network 420.
  • Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
  • modulated data signal means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
  • communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media.
  • Device 400 may also have input device(s) 414 such as keyboard, mouse, pen, voice input device, touch input device, laser range finder, infra-red cameras, video input devices (also referred to as video capture devices) such as cameras, and/or any other input device.
  • Output device(s) 416 such as one or more displays, speakers, printers, and/or any other output device may also be included.
  • Input devices 414 and output devices 416 may be coupled to computing device 400 via a wired connection, wireless connection, or any combination thereof.
  • the term “coupled” and its derivatives may be used. “Coupled” may mean that two or more elements are in contact (physically, electrically, magnetically, optically, etc.).
  • Coupled may also mean two or more elements are not in contact with each other, but still cooperate or interact with each other (for example, communicatively coupled).
  • storage devices utilized to store computer readable instructions may be distributed across a network.
  • a computing device 430 accessible via network 420 may store computer readable instructions to implement one or more embodiments of the invention.
  • Computing device 400 may access computing device 430 and download a part or all of the computer readable instructions for execution.
  • computing device 400 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 400 and some at computing device 430.
  • a dedicated circuit such as a Digital Signal Processor (DSP), programmable logic array, and the like.
  • DSP Digital Signal Processor
  • This invention includes a system and method to allow sharing a live video stream from one computer to a large audience through the Internet.
  • Most current live streaming servers on the Internet use unicast techniques to send streaming video to their clients. That means each streaming client establishes a separate data channel with the server to receive an identical copy of the original video stream. When the number of streaming clients increases, the bandwidth of a server will be exhausted. Therefore, such a transmission scheme is not scalable and cannot support large-scale video services or applications.
  • Embodiments herein include a system and method for streaming a live video from a hosting node to many download clients.
  • video from a camera connected to a computer or a file stored on a computer can be quickly and efficiently streamed out to a large number of download clients through the Internet.
  • aspects of the P2P technique include: 1) a separate bootstrapping node to handle all the joining requests from new clients, which isolates the hosting node from a flash crowd of download clients joining a video session and makes the video session more stable; 2) an algorithm to build a location-aware overlay so that a peer joins the session with less latency and the streaming session puts less stress on the underlying physical network; and 3) a prefetch-pull method to improve the throughput of the peer-to-peer streaming system.
  • An efficient peer-to-peer streaming algorithm is developed to leverage the resource of downloading clients within the same video session. Therefore, the hosting node's load will be kept constant and the live video streaming session will be able to serve many more users than traditional streaming servers. Live video streaming has more stringent requirements than other file transfer applications. Meanwhile, most download client peers have limited bandwidth. Therefore, the peer-to-peer streaming algorithm should make efficient use of network resources to reduce latency and maximize the throughput.
  • an SVS system 500 is shown (a management subsystem is not shown for clarity).
  • Embodiments herein of peer-to-peer streaming include a two-layer overlay.
  • the lower layer of peer-to-peer overlay 116 includes a membership overlay, which is used for discovering nearby peers and disseminating messages.
  • the upper layer includes a streaming overlay, which is used for transmitting real-time streaming packets.
  • Embodiments of a new peer 504 joining P2P overlay 116 and streaming operations of new peer 504 are discussed below. It will be noted that an OutView, InView, partner list, and candidate list are shown only for new peer 504 for clarity, but it will be understood that each peer (i.e., download client) in P2P overlay 116 includes these components. As used herein, a download client may be referred to as a peer or a node of P2P overlay 116. [0072] 2.1 P2P Membership
  • each peer i.e., each download client
  • the connection between a peer and its neighbor is modeled as a directional edge.
  • Each peer maintains a node set called “OutView” 508 which stores all its neighbors. It also maintains a node set called “InView” 509 that stores all peers that take it as their neighbor.
  • Each peer sends messages to the neighbors in its OutView and receives messages from the neighbors in its InView.
  • each peer has a number of "partners."
  • the partners of a peer are a set of peers that can provide streaming packets to the peer more quickly and efficiently than other peers of the P2P overlay.
  • the partners are not necessarily the nearest peers. Although nearby peers can provide data with smaller latency, some of them may lack the available bandwidth to serve the peer.
  • the connection between a peer and its partner is modeled as an undirected edge, i.e., the peer and its partner exchange data with each other.
  • a location aware overlay construction algorithm is designed to help the peer to construct an unstructured membership overlay and quickly find its peers.
  • Embodiments herein include a prefetch-pull combined algorithm designed to exchange packets with partners.
  • Location-aware means that the peer-to-peer overlay is aware of the underlying physical network topology and tries to align the peer-to-peer communication with the underlying physical channel.
  • each peer tries to find a number of nearby neighbors and a number of random neighbors.
  • a nearby neighbor may be measured by roundtrip time for packets between peers. In another embodiment, a nearby neighbor may be measured by Internet Protocol (IP) address difference between peers.
  • IP Internet Protocol
  • Bootstrapping node 114 maintains a peer cache 506 for the peer-to-peer overlay 116.
  • the peer cache contains the hosting node and a queue of random nodes whose size is no more than a constant L.
  • Bootstrapping node 114 periodically discovers new random node(s) on the overlay to update peer cache 506. For this update, bootstrapping node 114 keeps M daemon processes on the bootstrapping node. Each daemon simulates a random walk process on overlay 116. Bootstrapping node 114 uses the node visited by the random walk every c steps as a sample and uses the sampled nodes to update the peer cache queue in First In First Out (FIFO) order.
  • a flowchart 600 shows an embodiment of joining a membership overlay by new peer 504.
  • a new peer first sends a join request to the bootstrapping node (shown at block 602).
  • the bootstrapping node selects k records from peer cache 506 which are nearest to new peer 504.
  • the bootstrapping node also selects l random peers from the rest of the peer cache.
  • the bootstrapping node returns the selected peer information back to the new peer (shown at block 604). If the cache size is less than k+l, the bootstrapping node sends information on all peers to the new peer. (A code sketch of this selection appears after this list.)
  • After receiving the peer records (shown at block 606), the new peer joins the membership overlay (shown at block 608). [0084] In one embodiment, the new peer P performs the following operations to join the membership overlay:
  • NE Nearest Neighbor
  • a peer receiving the NE query selects the k peers nearest to new peer P from all its neighbors (including InView and OutView) and returns them to new peer P as an acknowledgement of the NE query.
  • the new peer may perform membership overlay maintenance after having joined P2P overlay 116 (shown at block 610 of FIG. 6).
  • Embodiments of membership overlay maintenance include handling a loss of communication with a peer and refining the overlay membership to adjust for peers entering or leaving the overlay.
  • Embodiments herein include techniques for peer failure handling.
  • a peer may suffer from a system failure or a network disconnection.
  • each peer stays alive with all its neighbors so that peer failure can be detected.
  • each peer sends heartbeat messages to all its neighbors (such as those in the InView and OutView).
  • Embodiments herein include techniques for refining the overlay membership.
  • In order to handle dynamic changes in Internet topology and peers joining and leaving, each peer is able to automatically refine its neighborhood. In one embodiment, each peer periodically performs the following operations in this order to refine the overlay membership:
  • 1) the peer checks the nearest neighborhood; if it contains fewer than k/2 peers, it starts a new query process to update the nearest neighborhood. In this step, the peer's nearest neighborhood has become too small and needs to be increased to ensure uninterrupted streaming.
  • 2) the peer checks the random neighborhood; if it contains fewer than l peers, the peer contacts the bootstrapping node, which selects enough random nodes from the cache to bring the random neighborhood back to l and returns them to the peer. The peer then adds them to the random neighborhood. [0097] 3) if a peer in the random neighborhood already exists in the nearest neighborhood, the peer removes it from the random neighborhood. However, the connection with that peer does not change, and it still exists in the peer's OutView while it is now regarded as one of the nearest neighbors instead of a random neighbor. [0098] 2.2 Peer-to-peer Streaming
  • the streaming overlay is a mesh, with every node having a few (4-6) partners to exchange streaming packets. Peers select their partners from a candidate list 512. A heartbeat mechanism is employed to maintain the partnership and to detect peer failures. Peers keep on refining their partners to achieve better streaming performance. [00101] There are two entities involved in this phase: candidate lists and partners.
  • the initial candidate list is obtained when a peer joins an overlay (shown at block 702).
  • a new joining peer obtains the candidate list from the nearest in-view peers in the member join process.
  • the candidate list is periodically updated, including add, refresh, and remove operations (shown at block 704).
  • Each peer periodically floods its "alive" information to its neighbors within two hops.
  • the peer which receives an alive message updates the corresponding entry in its partner candidate list, or adds a new entry if the peer is not yet on the list. This peer also checks the lifetime of each entry, and removes an entry if its age exceeds a given threshold. (A sketch of this candidate-list maintenance appears after this list.)
  • Heartbeat messages are exchanged periodically between partners.
  • the nearest available peer from the candidate list is selected to replace this partner.
  • the failed partner is deleted from the candidate list.
  • the refinement procedure includes the following:
  • the candidate list is checked to see whether there are any nearer nodes other than the current partner(s). If so, this nearer node becomes a new partner.
  • Each peer in a streaming overlay has a media buffer to accommodate the received packets of the video stream.
  • the media buffer may include a FIFO packet queue. The size L of the packet queue is fixed. If the media buffer is full, the oldest packet is removed as a new packet arrives.
  • Buffer Map - A buffer map is a bit vector of size L. Each bit in the buffer map represents the availability of a single packet in the packet queue. Given the start sequence number and the buffer map, one can know which media packets are available in its packet queue. During streaming, each peer periodically publishes its buffer map to its partners. (A sketch of this buffer structure appears after this list.)
  • each video packet has a unique sequence number which is assigned by the hosting program, and the sequence number is used as an index of the video packet. Furthermore, a video stream is divided into P strips, and a packet with sequence number s belongs to the strip (s mod P).
  • Prefetch means that a peer can order some strips from one partner, and some strips from other partners.
  • In flowchart 900 of FIG. 9, an embodiment of a prefetch-pull strategy is shown. Periodically, there is a prefetch process. A peer assigns P prefetch tasks (each task corresponds to one strip) to its partners (shown at block 902).
  • the prefetch task for a strip is assigned to a partner according to the bandwidth and latency of the partner. In one embodiment, statistics of previously received packets are used in the prefetch scheduling. A strip is assigned to the partner from whom the peer has so far received the most packets of that strip. (A sketch of this assignment rule appears after this list.)
  • Each node also periodically receives prefetch requests from its partners due to the symmetric partnership (shown at block 906).
  • An older prefetch request is updated by a new prefetch request from the same partner.
  • When a peer receives a packet (shown at block 908), it hashes the packet to one strip (shown at block 910).
  • The peer uses the function (s mod P) as described above. The peer determines whether any prefetch requests regarding this strip have been received from any partners. If so, the peer immediately forwards the packet to the requesting partner(s). This forward operation may be referred to as a push.
  • Because each peer independently schedules its own pre-fetching process, it is possible that several peers may form pre-fetching circles.
  • a peer receives two or more copies of the same packet because the peer has previously sent out a packet request to all of its partners. This wastes bandwidth and reduces streaming performance.
  • each peer records and updates its partners' buffer maps once it has received/pushed a packet from/to a partner, so that the peer will not push the same packet to a partner more than once.
  • the peer maintains a local copy of each partner's buffer map that is updated each time a packet is received or pushed by the peer. Local copies of the buffer maps at a peer are added/removed as the peer's partners are added/removed.
  • the pre-fetching algorithm is a best-effort algorithm; there is no retransmission scheme to guarantee the quality of service (QoS). It is possible that some packets are lost due to network packet loss or congestion.
  • embodiments herein include a pulling strategy to proactively pull needed packets from partners if a pull condition is triggered.
  • the peer takes the available packet with the maximum sequence number in the peer's own buffer map as a reference (denoted as position E).
  • Any non-received packet whose sequence number is smaller than (E - TH1) is scheduled to be pulled, where TH1 is a constant threshold. (A sketch of this pull scheduling appears after this list.)
  • the non-received packet is pulled from a randomly selected partner whose buffer map (from the local copies of buffer maps stored at the peer) shows that it has the packet.
  • a peer receives a pull request, it sends the required packet(s) as an acknowledgement.
  • an urgent pull request is scheduled if the peer fails to receive a packet in a certain time period TH2 before its playback time, where TH2 is another constant threshold.
  • a media playback module gets media data packets from the P2P streaming module for playing back the video.
  • the playback timing is independent from the streaming scheme. The playback can start at a time several seconds after the first packet in the stream is received at the peer.
  • Embodiments herein include a peer-to-peer video streaming system that includes a location-aware unstructured peer-to-peer overlay and a prefetch-pull based streaming algorithm for transmitting video data among peers.
  • live video streams can be sent to a large number of receivers in a cost-effective way.
  • the Link Layer is the lowest layer of the SVS protocol stack.
  • the Link Layer tries to establish and manage all incoming/outgoing connections through the underlying transportation protocols such as TCP, UDP, and HTTP/HTTPS.
  • PROTOCOL_TYPE2 : IP2 : Port2, PROTOCOL_TYPE3 : IP3 : Port3, ... }
  • the Link Layer also carries common Internet protocols through the Link Layer modules.
  • the Peer layer is built on top of the Link layer and is responsible for
  • Each peer is assigned a unique 128-bit ID.
  • the Peer Layer protocol uses
  • the Link layer is the lowest layer of the SVS protocol stack, which is in charge of IP links including TCP, UDP and HTTP/HTTPS.
  • LLP Protocol
  • MAGIC WORD: 16-bit field contains the magic word "SV" of SVS.
  • V: 2-bit field contains the version of the protocol. The current version is
  • R: 1-bit field indicates whether this packet is supposed to be sent via a
  • MESSAGE TYPE: 8-bit field contains the message type information.
  • MESSAGE LENGTH: 16-bit field contains the length of the packet in bytes. (A struct sketch of these LLP header fields appears after this list.)
  • SEQNUM sequence number
  • receiver with a SEQNUM. If the receiver receives the packet, it returns a SYN_ACK with
  • connection is established. This mechanism is only used for UDP channels.
  • the Probe and Answer Probe messages are a pair of messages used to measure the Round Trip Time (RTT) between two hosts.
  • Every 10 seconds, a host sends a Probe message to
  • the timestamp in the packet indicates the sending time.
  • Upon receiving this message, the peer changes the message ID to Answer Probe and sends it back to the sender.
  • RTT Round Trip Time
  • There are basically two kinds of packets: control messages and video streams. While the loss of media data affects the communication quality, the loss of control messages is usually more damaging. For example, loss of the subscription request keeps the subscriber waiting; loss of network status updates may result in network congestion.
  • the TCP channel is used in transmitting the control messages.
  • the LLP_ACK message is used to acknowledge the receipt of a need-ack packet.
  • the body of the packet contains an 8-bit sequence number which indicates which packet the peer has received.
  • An 8-bit field for the sequence number is used because the volume of need-ack packets probably will not be large.
  • DigiMetro is used. DigiMetro is an application-level multicast system for multi-party video conferencing.
  • the packet body has the format shown in FIG. 12.
  • the first 16-bit field (ROUND NUMBER) indicates which round of measurement this packet belongs to.
  • SEQ_NO: 16-bit field
  • TIMESTAMP the sending time
  • assigned data are the user's display image and the most recent I-frame of the real-time
  • This data can be passed to the network monitor through the exposed interface.
  • the network monitor uses a block of nonsense data for bandwidth measurement.
  • the developer can also set a callback function if the developer wishes to
  • the structure of the data part can be defined by the
  • the Link management layer should detach the LLP header, and pass the LLP data to the upper level protocol.
  • TLP Transport Level Protocol
  • the TLP header has the data fields as shown in FIG. 13.
  • MESSAGE TYPE: 16-bit field contains the message type information.
  • OVERLAY PORT: 16-bit field contains the overlay port number of the packet source. By using this port, one can easily dispatch messages to different modules.
  • the Hello message is only used as a keep-alive message. Every 5 seconds, a host sends a Hello message to every available UDP and IP multicast channel to keep the link alive. If a host has not received a Hello message for 30 seconds from a specific link, it assumes that the connection is broken.
  • the SYN message is mainly used for setting up a connection. After a connection object accepts a TCP connection request, the first message it receives from this connection should be a SYN message. Otherwise, the host closes this connection. A host also uses the SYN message to ping its peers through UDP and IP multicast channels.
  • Upon receiving the SYN message, the peer manager checks whether there already exists a Peer object which is associated with the SENDER ID. If so, it updates the connectivity information for the object (e.g., sets the UDP channel status to "available" if the SYN message is received from UDP). Otherwise, the peer manager creates a new Peer object and associates it with the SENDER ID in the packet header. The body of the SYN message contains the SENDER ID. [00168] 3.2.2.3 Data
  • Message type "PLP_DATA" indicates that this packet belongs to an upper layer.
  • the Link management layer should detach the Peer Layer Protocol (PLP) header, and pass the PLP data to the upper level protocol.
  • PLP Peer Layer Protocol
  • the system connects all peers together with location-awareness. Peers can join and find their nearest neighbors very quickly. Though the streaming partners do not necessarily correspond to the nearest nodes, and the streaming overlay does not necessarily match the topology, the location-aware property of the membership overlay helps a peer find good streaming partners. In addition, the membership overlay also supports abundant distributed operations through P2P communication. The protocol of the membership overlay is described as follows.
  • the Membership overlay message structure includes data fields as shown in FIG. 14.
  • m_MsgType is a 16-bit message word. In embodiments herein, all
  • a message whose highest bit is 1 is processed by the streaming overlay.
  • the lower byte contains the message ID.
  • Random walk is a basic function or message of a membership overlay.
  • Random walk can be used to select random neighbors when a peer tries to join a membership overlay, and can be used to select random nodes over the membership overlay. In some embodiments of the protocol, biased forwarding and biased halting schemes are not supported.
  • Once a peer receives a random walk message from its neighbor, it decreases the hop number of the message. If the hop number is larger than zero, the peer selects a random peer from its destination neighbors (refer to the Membership Overlay Message Destination Description in the previous section) and directly forwards the message to it.
  • Otherwise, the peer does not forward the message further and sends a random walk acknowledge message to the peer which is the source of the random walk message. (For a SVSOVERLAY_RANWALK_D message, the membership overlay delivers the data payload to the upper layer (i.e., the streaming overlay) and does not send an acknowledge message to the source peer.)
  • The upper layer can send its own acknowledge message according to the received data.
  • SVSOVERLAY_RANWALK is a message used only for membership overlay maintenance, such as peer joining and membership refinement.
  • SVSOVERLAY_RANWALK_D is a data conveying message for the upper layer.
  • An embodiment of a message body of a random walk message is as follows:
  • SVSOVERLAY_RANWALK message data is a contactinfo structure.
  • SVSOVERLAY_RANWALK_D message data is a contactinfo structure + upper layer data.
  • a flood message is also a basic function or message of a membership overlay.
  • the peer directly forwards
  • SVSOVERLAY_RANWALK is a message only for membership overlay maintenance, such as peer joining and membership refinement.
  • SVSOVERLAY_RANWALK_D is a data conveying message for the upper layer.
  • the membership overlay delivers the data payload to the upper layer.
  • BYTE m_Data[0]; //For SVSOVERLAY_FLOOD message data is a contactinfo structure of the source peer.
  • SVSOVERLAY_FLOOD_D message data is a contactinfo structure of the source peer + upper layer data.
  • An embodiment of a query message is defined as below.
  • a peer may query k nearest peers from the
  • An embodiment of the query message is as follows:
  • Peer B can determine to accept or reject the request.
  • An embodiment of the acknowledge message is: struct MOVERLAY_ACK_CON
  • Any peer who receives this message only deletes the peer from its neighborhood.
  • MSP Media Streaming Protocol
  • MESSAGE TYPE: 16-bit field contains message type information.
  • a new joined peer can query partner candidates from another peer in the
  • the queried peer sends back the following acknowledge message which
  • the receiver sends back the acknowledge message to establish or refuse the partnership.
  • brQuit 0x00
  • brPoorThroughput 0x01
  • brTooManyPartners 0x02
  • brUareNotMyPartner 0x03
  • brUnknown 0x10
  • each peer may advertise its status of the current buffer.
  • a peer can send the pull request
  • the media data message includes the actual video streaming packets with
  • one or more of the operations described may constitute computer readable instructions stored on computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described.
  • the order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment of the invention.
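
The join handling noted above (block 604: the bootstrapping node returns the k cached records nearest to the new peer plus l random ones, or the whole cache when it holds fewer than k+l entries) can be summarized in a short sketch. This is illustrative only, assuming an RTT-based proximity metric; the PeerRecord fields and the function name are not details taken from the patent.

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <random>
#include <vector>

// Hypothetical record kept in the bootstrapping node's peer cache.
struct PeerRecord {
    uint32_t ip;
    uint16_t port;
    double   rtt_ms;  // assumed proximity metric (round-trip time to the new peer)
};

// Select k nearest peers plus l random peers from the cache for a joining peer.
// If the cache holds fewer than k + l records, everything is returned.
std::vector<PeerRecord> SelectJoinPeers(std::vector<PeerRecord> cache,
                                        std::size_t k, std::size_t l,
                                        std::mt19937& rng) {
    if (cache.size() <= k + l) return cache;

    // Nearest k records by the assumed proximity metric.
    std::partial_sort(cache.begin(), cache.begin() + k, cache.end(),
                      [](const PeerRecord& a, const PeerRecord& b) {
                          return a.rtt_ms < b.rtt_ms;
                      });
    std::vector<PeerRecord> result(cache.begin(), cache.begin() + k);

    // l random records drawn from the remainder of the cache.
    std::vector<PeerRecord> rest(cache.begin() + k, cache.end());
    std::shuffle(rest.begin(), rest.end(), rng);
    result.insert(result.end(), rest.begin(), rest.begin() + l);
    return result;
}
```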
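The candidate-list maintenance noted above (refresh or add an entry when an "alive" message arrives, and remove entries whose lifetime exceeds a threshold) reduces to simple bookkeeping. The sketch below assumes a numeric peer identifier and an RTT estimate per entry; both are illustrative choices, not fields named in the patent.

```cpp
#include <chrono>
#include <cstdint>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

// Hypothetical candidate-list entry keyed by a peer identifier.
struct Candidate {
    double            rtt_ms;     // assumed proximity estimate
    Clock::time_point last_alive; // when the last "alive" message was seen
};

class CandidateList {
public:
    explicit CandidateList(Clock::duration lifetime) : lifetime_(lifetime) {}

    // Called when an "alive" message (flooded within two hops) arrives:
    // refresh the existing entry or add a new one.
    void OnAlive(uint64_t peer_id, double rtt_ms) {
        candidates_[peer_id] = Candidate{rtt_ms, Clock::now()};
    }

    // Remove entries whose age exceeds the configured lifetime threshold.
    void ExpireStale() {
        const auto now = Clock::now();
        for (auto it = candidates_.begin(); it != candidates_.end();) {
            if (now - it->second.last_alive > lifetime_) it = candidates_.erase(it);
            else ++it;
        }
    }

private:
    Clock::duration lifetime_;
    std::unordered_map<uint64_t, Candidate> candidates_;
};
```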
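The media buffer and buffer map described above (a fixed-size FIFO queue of L packets plus a bit vector advertising which packets are held, relative to a start sequence number) could look roughly like the following sketch; the class and member names are assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <vector>

// Hypothetical media buffer: a FIFO queue of at most L packets plus a buffer
// map (bit vector of size L) describing which packets are currently held.
class MediaBuffer {
public:
    explicit MediaBuffer(std::size_t L) : capacity_(L) {}

    // Insert a received packet; when the buffer is full, the oldest packet
    // is dropped as the new one arrives.
    void Insert(uint32_t seq, std::vector<uint8_t> payload) {
        if (queue_.size() == capacity_) queue_.pop_front();
        queue_.push_back({seq, std::move(payload)});
    }

    // Buffer map published to partners: bit i corresponds to sequence number
    // (start_seq + i); a set bit means the packet is available locally.
    std::vector<bool> BufferMap(uint32_t start_seq) const {
        std::vector<bool> map(capacity_, false);
        for (const auto& p : queue_) {
            if (p.seq >= start_seq && p.seq < start_seq + capacity_)
                map[p.seq - start_seq] = true;
        }
        return map;
    }

private:
    struct Packet { uint32_t seq; std::vector<uint8_t> payload; };
    std::size_t capacity_;
    std::deque<Packet> queue_;
};
```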
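The strip-based prefetch assignment noted above (the strip of a packet is s mod P, and each strip is assigned to the partner that has so far delivered the most packets of that strip) is sketched below. The counts-matrix layout and the function names are assumptions made for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Strip index of a packet: a stream is divided into P strips and a packet
// with sequence number s belongs to strip (s mod P).
inline uint32_t StripOf(uint32_t seq, uint32_t P) { return seq % P; }

// Hypothetical prefetch scheduler: counts[partner][strip] holds how many
// packets of each strip have been received from each partner so far (each
// inner vector has P entries). Each strip is assigned to the partner that
// has delivered the most packets of it up to now.
std::vector<int> AssignStrips(const std::vector<std::vector<uint32_t>>& counts,
                              uint32_t P) {
    std::vector<int> assignment(P, -1);   // partner index chosen per strip
    for (uint32_t strip = 0; strip < P; ++strip) {
        uint32_t best = 0;
        for (std::size_t partner = 0; partner < counts.size(); ++partner) {
            if (counts[partner][strip] >= best) {
                best = counts[partner][strip];
                assignment[strip] = static_cast<int>(partner);
            }
        }
    }
    return assignment;
}
```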
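The pull scheduling noted above (take the largest locally available sequence number E as a reference and schedule a pull for any missing packet with sequence number smaller than E - TH1, choosing at random among partners whose cached buffer maps advertise it) can be sketched as follows. The container choices and the window_start parameter are assumptions; the urgent-pull case triggered within TH2 of playback time is omitted for brevity.

```cpp
#include <cstddef>
#include <cstdint>
#include <random>
#include <set>
#include <vector>

// One scheduled pull: fetch sequence number `seq` from partner `partner`.
struct PullRequest { uint32_t seq; std::size_t partner; };

// `have` is the set of sequence numbers already in the local buffer,
// `partner_maps[p]` is the locally cached copy of partner p's buffer map
// expressed as a set of sequence numbers, E is the largest sequence number
// available locally, and TH1 is the constant lag threshold.
std::vector<PullRequest> SchedulePulls(
        const std::set<uint32_t>& have,
        const std::vector<std::set<uint32_t>>& partner_maps,
        uint32_t E, uint32_t TH1, uint32_t window_start, std::mt19937& rng) {
    std::vector<PullRequest> pulls;
    if (E < TH1) return pulls;
    for (uint32_t s = window_start; s + TH1 < E; ++s) {   // i.e. s < E - TH1
        if (have.count(s)) continue;                       // already received
        // Collect partners whose cached buffer map advertises packet s.
        std::vector<std::size_t> owners;
        for (std::size_t p = 0; p < partner_maps.size(); ++p)
            if (partner_maps[p].count(s)) owners.push_back(p);
        if (owners.empty()) continue;                      // nobody has it yet
        // Pull from a randomly selected partner that has the packet.
        std::uniform_int_distribution<std::size_t> pick(0, owners.size() - 1);
        pulls.push_back({s, owners[pick(rng)]});
    }
    return pulls;
}
```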
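The LLP header fields listed above (16-bit magic word "SV", 2-bit version, 1-bit R flag, 8-bit message type, 16-bit length) can be collected into a struct as a reading aid. The on-the-wire field order and any reserved or padding bits shown in FIG. 11 are not given in this excerpt, so the packing below is an assumption.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical in-memory form of the LLP header fields listed above. The
// exact wire layout (field order, reserved/padding bits) is assumed.
struct LlpHeader {
    uint16_t magic        = ('S' << 8) | 'V'; // 16-bit magic word "SV"
    uint8_t  version      = 1;                // 2-bit protocol version (assumed value)
    bool     reliable     = false;            // 1-bit R flag
    uint8_t  message_type = 0;                // 8-bit message type
    uint16_t length       = 0;                // 16-bit packet length in bytes
};

// Serialize the header into 6 bytes: magic, then a byte holding V and R,
// then message type and length (big-endian). Purely illustrative.
std::vector<uint8_t> PackLlpHeader(const LlpHeader& h) {
    std::vector<uint8_t> out;
    out.push_back(static_cast<uint8_t>(h.magic >> 8));
    out.push_back(static_cast<uint8_t>(h.magic & 0xFF));
    out.push_back(static_cast<uint8_t>(((h.version & 0x3) << 6) |
                                       ((h.reliable ? 1 : 0) << 5)));
    out.push_back(h.message_type);
    out.push_back(static_cast<uint8_t>(h.length >> 8));
    out.push_back(static_cast<uint8_t>(h.length & 0xFF));
    return out;
}
```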

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • General Engineering & Computer Science (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Video data from an upload client is received at a hosting node. A request from a download client is received at a bootstrapping node to receive the video data. The download client receives the video data directly from the hosting node when the hosting node is below a threshold, wherein the threshold is based at least in part on the maximum number of download clients the hosting node can stream to simultaneously. The download client receives the video data from peers in a peer-to-peer overlay when the hosting node is above the threshold.

Description

A PEER-TO-PEER AIDED LIVE VIDEO SHARING SYSTEM
BACKGROUND
[0001] A number of applications and services that support sharing video files exist on the Internet. For example, some web sites allow users to upload video files to a database. These videos are also searchable via metadata provided in the submission process. Users can even charge audiences to view the videos.
[0002] Other web sites let video producers upload their video onto servers, arrange them into programs, display them to other Internet users, and make money if anyone watches them. However, these video hosting services are only for sharing stored video files. They do not enable the user to share the user's video or camera with others in real time.
[0003] Some web sites offer a comprehensive search engine of webcams from around the world. The portal maintains a database of live webcams on the Internet, allowing users to search by keyword or browse through the categories and subcategories.
Users can see what is happening around the world in real time. However, these sites only provide searching and indexing of real time cameras. They do not provide any infrastructure support for sharing live videos.
[0004] Some video chatting services allow users to share cameras with others in real-time. However, most of these video chatting applications use unicast methods to send video from the capturer to audiences and therefore can only serve a limited number of users in each session. [0005] Overall, an efficient way for an ordinary Internet user to share a live video with a large audience in real time is still lacking.
SUMMARY
[0006] The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.
[0007] Embodiments of the invention are directed to a system and method for sharing live video streams on a large scale through the Internet using a Shared Video
Service (SVS). Embodiments herein include a peer-to-peer streaming technique where video data packets are exchanged among the audience. This technique relieves load and bandwidth stress on a content server and ensures the scalability of the SVS.
[0008] Many of the attendant features will be more readily appreciated as the same become better understood by reference to the following detailed description considered in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Like reference numerals are used to designate like parts in the accompanying drawings. [0010] FIG. 1 is a block diagram of an SVS in accordance with an embodiment of the invention.
[0011] FIG. 2 is a flowchart showing the logic and operations of uploading video to an SVS in accordance with an embodiment of the invention.
[0012] FIG. 3 is a flowchart showing the logic and operations of receiving video from an SVS in accordance with an embodiment of the invention.
[0013] FIG. 4 is a block diagram of an example computing device for implementing embodiments of the invention.
[0014] FIG. 5 is a block diagram of a Shared Video Service in accordance with an embodiment of the invention.
[0015] FIG. 6 is a flowchart showing the logic and operations of a peer joining a membership overlay in accordance with an embodiment of the invention.
[0016] FIG. 7 is a flowchart showing the logic and operations of candidate list operations in accordance with an embodiment of the invention.
[0017] FIG. 8 is a flowchart showing the logic and operations of partner operations in accordance with an embodiment of the invention.
[0018] FIG. 9 is a flowchart showing the logic and operations of streaming using a prefetch-pull strategy in accordance with an embodiment of the invention.
[0019] FIG. 10 is a block diagram of a Shared Video Service protocol stack in accordance with an embodiment of the invention.
[0020] FIG. 11 is a block diagram of a Link Layer Protocol (LLP) header format in accordance with an embodiment of the invention. [0021] FIG. 12 is a block diagram of a bandwidth measurement packet in accordance with an embodiment of the invention.
[0022] FIG. 13 is a block diagram of a Transport Level Protocol (TLP) header in accordance with an embodiment of the invention.
[0023] FIG. 14 is a block diagram of a membership overlay message structure in accordance with an embodiment of the invention.
[0024] FIG. 15 is a block diagram of a Media Streaming Protocol (MSP) packet header in accordance with an embodiment of the invention.
DETAILED DESCRIPTION
[0025] The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples may be constructed or utilized. The description sets forth the functions of the examples and the sequence of steps for constructing and operating the examples. However, the same or equivalent functions and sequences may be accomplished by different examples.
[0026] FIG. 1 and the following discussion are intended to provide a brief, general description of a suitable computing environment to implement embodiments of the invention. The operating environment of FIG. 1 is only one example of a suitable operating environment and is not intended to suggest any limitation as to the scope of use or functionality of the operating environment. Other well known computing systems, environments, and/or configurations that may be suitable for use with embodiments described herein include, but are not limited to, personal computers, server computers, hand-held or laptop devices, mobile devices (such as mobile phones, Personal Digital Assistants (PDAs), and media players), multiprocessor systems, programmable consumer electronics, network personal computers, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
[0027] Although not required, embodiments of the invention will be described in the general context of "computer readable instructions" being executed by one or more computing devices. Computer readable instructions may be distributed via computer readable media (discussed below). Computer readable instructions may be implemented as program modules, such as functions, objects, application programming interfaces (APIs), data structures, and the like, that perform particular tasks or implement particular abstract data types. The functionality of the computer readable instructions may be combined or distributed as desired in various environments. [0028] 1 SYSTEM FOR LIVE VIDEO SHARING
[0029] Embodiments of the invention are directed to a system and method for sharing live video streams on a large scale through the Internet called a Shared Video Service (SVS). SVS enables a user to share captured video with other people in real time, specify the metadata of the shared video for indexing and retrieval, search video of interest on the service using metadata, and watch shared video in real time. It will be understood that the term "video" includes both a video portion and an audio portion. One skilled in the art will appreciate that embodiments of the invention may also be used with audio-only streams or video-only streams.
[0030] Turning to FIG. 1, a Shared Video Service (SVS) system 100 includes a transmission subsystem 102 and a management subsystem 104. Transmission subsystem 102 is responsible for video uploading, downloading, and peer-to-peer (P2P) relaying. Management subsystem 104 provides functionalities for browsing, searching, indexing, and managing shared videos.
[0031] When a user wants to share video through the SVS, the user sends a live or recorded video to an SVS hosting node. The hosting node receives video from the upload client and stores it. The video may include a live video stream captured by the upload client in real time or a video file (i.e., recorded video) stored on the upload client. In one embodiment, the user is required to log on to the SVS prior to uploading any video.
[0032] Viewers (at the download clients) may browse and search for available videos on the SVS. Each shared video may have some associated metadata stored in the database server such as category, location, tags, and the like. Viewers can browse and search the videos using this metadata.
[0033] When viewers request a particular video, a hosting node connects with a bootstrapping node to stream the video to the viewer. The video streamed to the viewer may be being captured live or may be sent from a video file being sent by an upload client to the system in real time. [0034] In one embodiment, to overcome limited bandwidth of the hosting nodes, a P2P streaming mechanism is designed to assist in video transmission. When the video hosting node reaches a threshold, such as its maximum capacity, the P2P mechanism begins working. All the download clients within the same video session form a location-aware overlay, and live peer-to-peer streaming is performed in a receiver-driven fashion. [0035] 1.1 Transmission subsystem
[0036] Transmission subsystem 102 includes an upload client 106, one or more download clients 108, a hosting node 110 (additionally, hosting nodes 111 and 112 are shown), and a bootstrapping node 114.
[0037] Upload client 106 acts as the source of a video. In one embodiment, upload client 106 captures live video, encodes the video streams, generates the packets, and transmits them to hosting node 110 in real time. In another embodiment, upload client 106 sends a pre-recorded video file in a live fashion to hosting node 110. In this embodiment, the video was captured and recorded at an earlier time. However, the recorded video is sent to hosting node 110 and streamed to download clients 108 in real time.
[0038] Turning to FIG. 2, a flowchart 200 shows the operations of uploading a video to the SVS in accordance with an embodiment of the invention. To start a video session, upload client 106 first contacts bootstrapping node 114 to request an available hosting node (shown at block 202). Bootstrapping node 114 responds with address information for an available hosting node (shown at block 204). The bootstrapping node schedules a hosting node to the upload client according to the current loads of all the hosting nodes.
[0039] Next, upload client 106 sends an upload request to hosting node 110 (shown at block 206). After receiving an Acknowledgement (ACK) message from hosting node 110 (shown at block 208), upload client 106 then starts to send the video packets to hosting node 110 in their sequencing order (shown at block 210). After upload client 106 starts successfully sending video to hosting node 110, hosting node 110 registers the information of the new video session with bootstrapping node 114 (shown at block 212). Bootstrapping node 114 updates database server 120 with the video session information. The video being sent by upload client 106 is now available for viewing at download clients 108. As discussed below, database server 120 tracks all video sessions within the SVS and may be searched by a user.
[0040] Turning to FIG. 3, a flowchart 300 shows the operations of requesting a video by a download client in accordance with an embodiment of the invention. A download client 108 contacts bootstrapping node 114 to join an existing video session (shown at block 302). A downloading client (also referred to as a streaming client) only needs to know the bootstrapping node's address before beginning to view a video. Download client 108 also discovers some peers (i.e., other download clients) within the same video session and forms its own neighborhood through the P2P overlay 116 (shown at block 304).
[0041] The bootstrapping node determines whether the status of the video session is at a threshold (shown at block 306). In one embodiment, the threshold is based at least in part on the maximum capacity of the hosting node. The maximum capacity of the hosting node is the maximum number of download clients per video session the hosting node can stream to simultaneously. The maximum capacity may be affected by load limitations of the hosting node (e.g., a SERVER_FULL_LOAD limit), traffic load on the network, any combination thereof, and the like. The threshold may be set at maximum capacity or at a portion of maximum capacity (e.g., 90%). If the hosting node is at the threshold for this video session, then the download client works in P2P mode and obtains the streaming packets from its overlay peers (shown at block 310). Otherwise, the download client contacts hosting node 110 to obtain the video directly from hosting node 110 (shown at block 308).
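By way of illustration only, the capacity check at blocks 306-310 may be sketched as follows. The names used here (SessionStatus, chooseMode, the 0.9 load factor) are assumptions introduced for this sketch and are not identifiers defined by the SVS:

    #include <cstddef>

    enum class StreamMode { Direct, PeerToPeer };

    struct SessionStatus {
        std::size_t directClients;   // download clients currently streamed directly by the hosting node
        std::size_t maxCapacity;     // maximum directly served clients for this video session
    };

    // The threshold may be the full capacity or a fraction of it (e.g., 90%).
    StreamMode chooseMode(const SessionStatus& s, double loadFactor = 0.9) {
        const double threshold = loadFactor * static_cast<double>(s.maxCapacity);
        return static_cast<double>(s.directClients) >= threshold
                   ? StreamMode::PeerToPeer   // block 310: obtain packets from overlay peers
                   : StreamMode::Direct;      // block 308: stream directly from the hosting node
    }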
[0042] In one embodiment, when running in P2P mode, the download client uses a prefetch-pull streaming algorithm to get data from its peers (discussed below in section 2). The download client also periodically updates the peers according to the observed throughput. In one embodiment, the download client starts playback when a specified time interval has elapsed after receiving the first packet. At the download client, the received packets are first buffered in a transmission buffer pool, and then pulled on demand by the playback module of the download client. [0043] Hosting node 110 handles the video upload from an upload client and streaming to the download clients (i.e., peers). The hosting node can be a dedicated server or a client node. In both cases, the hosting node has limited bandwidth and is assumed to be able to serve only a limited number of directly streaming download clients. The hosting node receives video packets from an upload client and sends the video packets to the download clients in real time.
[0044] Each hosting node can manage multiple video sessions. Each video session is identified by a video's identification (ID). Each video session serves a number of download clients and an upload client.
[0045] A hosting node registers its address with the bootstrapping node when joining the SVS (e.g., on boot up) and un-registers itself on exit from the SVS. A hosting node also periodically reports its load, including its total number of download clients and upload clients to the bootstrapping node, so that the bootstrapping node can manage the load balancing of all the hosting nodes.
[0046] A bootstrapping node 114 is deployed to help the download client peers to join the video session and discover other peers. The bootstrapping node handles all the requests from upload and download clients. It helps all the downloading clients to form a location-aware overlay through a membership overlay algorithm (discussed in section 2 below). The bootstrapping node may include a dedicated server or a client computer. In one embodiment, the bootstrapping node may be deployed on the same computer together with the hosting node.
[0047] Bootstrapping node 114 may include the following functionalities: hosting node management, video session management, P2P bootstrapping support, and video entry management.
[0048] In one embodiment, bootstrapping node 114 performs hosting node management. Each hosting node reports its address and streaming load to the bootstrapping node. When an upload client starts to upload, the upload client first contacts the bootstrapping node and queries for an available hosting node to host the video. The bootstrapping node schedules a hosting node to the upload client according to the current loads of all the hosting nodes.
[0049] In one embodiment, bootstrapping node 114 performs video session management. After an upload client starts successfully sending video to the hosting node, the hosting node registers the information of the new video session with the bootstrapping node. Such video session information may include video bit-rate, resolution, frame-rate, and the like.
[0050] The bootstrapping node monitors all the video sessions. The bootstrapping node records all download clients that receive streaming packets directly from a hosting node. When the bootstrapping node finds that a hosting node has reached its load threshold (e.g., maximum capacity), it notifies the newly joined download client to get video from other peers, i.e., streaming in a P2P fashion. [0051] In one embodiment, bootstrapping node 114 performs P2P bootstrapping support. For each video session, the bootstrapping node may maintain a peer cache that contains a number of the random peers watching the video. The peer cache is periodically updated. If any peer wants to watch a shared video, it first contacts the bootstrapping node to join a corresponding streaming overlay. The bootstrapping node helps all the watching peers to form a location-aware overlay through the membership overlay algorithm. [0052] In one embodiment, bootstrapping node 114 performs video entry management. When a new video session is established, the bootstrapping node calls a web service to add the entry into a database at database server 120. When a video session is destroyed, bootstrapping node 114 calls the web service to remove the entry from the database at database server 120. When the bootstrapping node exits the SVS, the bootstrapping node calls the web service to remove all the video entries from the database.
[0053] 1.2 Video management subsystem
[0054] Referring again to FIG. 1, video management subsystem 104 includes a database server 120, a web server 122, and a browsing/sharing user interface (UI) 124. [0055] All the video session entries and their metadata are stored and managed by database server 120. Each video has a unique entry (record) in the database. In one embodiment, the primary key of the record is the video ID. Each record may also include other metadata such as title, author name, width, height, bit rate, and frame rate of the video. The indexing tables for location, category, and tags are also stored in database server 120. Thus, all shared videos can be retrieved through this metadata. In one embodiment, video metadata is provided directly by upload client 106 to database server 120 through a web service and browsing/sharing UI 124.
[0056] Web server 122 is deployed as a unique entry point for the whole Shared Video Service. Several web services are developed on the web server to manage all the video entries and their metadata. These web services include the functionalities of browsing, searching, and authoring videos. [0057] The web services of web server 122 may be accessed using browsing/sharing user interface 124. UI 124 responds to user actions or queries, calls the web services, and generates the appropriate views for users. Through user interface 124, users are able to browse all shared videos by metadata, share personal videos, author the metadata of individual shared video entries, and the like. Once a user finds a video of interest, the user may select the video through UI 124. The user's system (i.e., download client) may then request the video as described above in connection with download clients 108. Also, a user at upload client 106 may use UI 124 for uploading a video to the system. [0058] Embodiments of the SVS may be used in various scenarios. In one example, during a breaking news event, such as a disaster or an accident, a bystander holds his or her camera phone and transmits live video to the SVS. Official news-agency coverage of the event may begin much later, if at all. In another example, a camera in a window of a building is pointed at the intersection of two roads. Video is transmitted to the SVS in real time. Drivers who will use this intersection can check the traffic situation in advance, while driving, or when stuck in congestion. In another example, on an exploration or a journey, a traveler keeps his camera open. SVS subscribers watch the journey in real time. In yet another example, television stations and news agencies may provide television programs and news through the SVS.
[0059] Turning to FIG. 4, an example of a computing device 400 for implementing one or more embodiments of the invention is shown. Configurations of computing device 400 may be used as a bootstrapping node, a hosting node, an upload client, or a download client as described herein. Computing device 400 may also be used as a database server or a web server as described herein. [0060] In one configuration, computing device 400 includes at least one processing unit 402 and memory 404. Depending on the exact configuration and type of computing device, memory 404 may be volatile (such as RAM), non-volatile (such as ROM, flash memory, etc.), or some combination of the two. This configuration is illustrated in FIG. 4 by dashed line 406.
[0061] Additionally, device 400 may also have additional features and/or functionality. For example, device 400 may also include additional storage (e.g., removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 4 by storage 408. In one embodiment, computer readable instructions to implement embodiments of the invention may be stored in storage 408. Storage 408 may also store other computer readable instructions to implement an operating system, an application program, and the like.
[0062] The term "computer readable media" as used herein includes computer storage media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules, or other data. Memory 404 and storage 408 are examples of computer storage media. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVDs) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by device 400. Any such computer storage media may be part of device 400.
[0063] The term "computer readable media" may include communication media.
Device 400 may also include communication connection(s) 412 that allow the device 400 to communicate with other devices, such as with other computing devices through network 420. Communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency, infrared, and other wireless media.
[0064] Device 400 may also have input device(s) 414 such as keyboard, mouse, pen, voice input device, touch input device, laser range finder, infra-red cameras, video input devices (also referred to as video capture devices) such as cameras, and/or any other input device. Output device(s) 416 such as one or more displays, speakers, printers, and/or any other output device may also be included. Input devices 414 and output devices 416 may be coupled to computing device 400 via a wired connection, wireless connection, or any combination thereof. In the following description and claims, the term "coupled" and its derivatives may be used. "Coupled" may mean that two or more elements are in contact (physically, electrically, magnetically, optically, etc.). "Coupled" may also mean two or more elements are not in contact with each other, but still cooperate or interact with each other (for example, communicatively coupled). [0065] Those skilled in the art will realize that storage devices utilized to store computer readable instructions may be distributed across a network. For example, a computing device 430 accessible via network 420 may store computer readable instructions to implement one or more embodiments of the invention. Computing device 400 may access computing device 430 and download a part or all of the computer readable instructions for execution. Alternatively, computing device 400 may download pieces of the computer readable instructions, as needed, or some instructions may be executed at computing device 400 and some at computing device 430. Those skilled in the art will also realize that all or a portion of the computer readable instructions may be carried out by a dedicated circuit, such as a Digital Signal Processor (DSP), programmable logic array, and the like.
[0066] 2 PEER-TO-PEER LIVE STREAMING TECHNIQUE
[0067] This invention includes a system and method to allow sharing a live video stream from one computer to a large audience through the Internet. Most current live streaming servers on the Internet use unicast techniques to send streaming video to their clients. That means each streaming client establishes a separate data channel with the server to receive an identical copy of the original video stream. When the number of streaming clients increases, the bandwidth of a server will be exhausted. Therefore, such a transmission scheme is not scalable and cannot support large scale video services or applications.
[0068] Embodiments herein include a system and method for streaming a live video from a hosting node to many download clients. Through embodiments of the system, video from a camera connected to a computer or a file stored on a computer can be quickly and efficiently streamed out to a large number of download clients through the Internet.
[0069] Aspects of the P2P technique include: 1) a separate bootstrapping node to handle all the joining requests from new clients. This isolates the hosting node from a flash crowd of download clients to a video session and makes the video session more stable, 2) an algorithm to build a location-aware overlay so that the peer joins the session with less latency and the streaming session will have less stress on the underlying physical network, and 3) a prefetch-pull method to improve the throughput of the peer-to-peer streaming system.
[0070] An efficient peer-to-peer streaming algorithm is developed to leverage the resources of downloading clients within the same video session. Therefore, the hosting node's load will be kept constant, and the live video streaming session will be able to serve many more users than traditional streaming servers. Live video streaming has more stringent requirements than other file transfer applications. Meanwhile, most download client peers have limited bandwidth. Therefore, the peer-to-peer streaming algorithm should make efficient use of network resources to reduce latency and maximize throughput. [0071] Turning to FIG. 5, an SVS system 500 is shown (a management subsystem is not shown for clarity). Embodiments herein of peer-to-peer streaming include a two-layer overlay. The lower layer of peer-to-peer overlay 116 includes a membership overlay, which is used for discovering nearby peers and disseminating messages. The upper layer includes a streaming overlay, which is used for transmitting real-time streaming packets. Embodiments of a new peer 504 joining P2P overlay 116 and streaming operations of new peer 504 are discussed below. It will be noted that an OutView, InView, partner list, and candidate list are shown only for new peer 504 for clarity, but it will be understood that each peer (i.e., download client) in P2P overlay 116 includes these components. As used herein, a download client may be referred to as a peer or a node of P2P overlay 116. [0072] 2.1 P2P Membership
[0073] In the membership overlay, each peer (i.e., each download client) has a number of "neighbors." The connection between a peer and its neighbor is modeled as a directional edge. Each peer maintains a node set called "OutView" 508 which stores all its neighbors. It also maintains a node set called "InView" 509 that stores all peers that take it as their neighbor. Each peer sends messages to the neighbors in its OutView and receives messages from the neighbors in its InView.
[0074] In the streaming overlay, each peer has a number of "partners." The partners of a peer are a set of peers that can provide streaming packets to the peer more quickly and efficiently than other peers of the P2P overlay. The partners are not necessarily the nearest peers. Although nearby peers can provide data with lower latency, some of them may lack available bandwidth to serve the peer. The connection between a peer and its partner is modeled as an undirected edge, i.e., the peer and its partner exchange data with each other. A location-aware overlay construction algorithm is designed to help the peer construct an unstructured membership overlay and quickly find its peers. [0075] Embodiments herein include a combined prefetch-pull algorithm designed to exchange packets with partners. The peer also monitors the throughput between itself and its partners and dynamically adjusts the peers named as partners to get the best streaming performance. It also includes a loop avoidance mechanism to prevent a peer from receiving the same data packets more than once. [0076] 2.1.1 Location-aware Membership Overlay Construction
[0077] Location-aware means that the peer-to-peer overlay is aware of the underlying physical network topology and tries to align the peer-to-peer communication with the underlying physical channel. In the membership overlay, each peer tries to find a number of nearby neighbors and a number of random neighbors. The OutView of each peer includes k nearby neighbors and l random neighbors (in one embodiment, the settings used are k = 8 and l = 3).
[0078] In one embodiment, a nearby neighbor may be measured by the roundtrip time for packets between peers. In another embodiment, a nearby neighbor may be measured by the Internet Protocol (IP) address difference between peers. 2.1.1.1 Peer Cache
[0079] Bootstrapping node 114 maintains a peer cache 506 for the peer-to-peer overlay 116. The peer cache contains the hosting node and a queue of random nodes whose size is no more than a constant L.
[0080] Bootstrapping node 114 periodically discovers new random node(s) on the overlay to update peer cache 506. For this update, bootstrapping node 114 keeps M daemon processes on the bootstrapping node. Each daemon simulates a random walk process on overlay 116. Bootstrapping node 114 uses the node visited by the random walk every c steps as a sample and uses the sampled nodes to update the peer cache queue in First-In-First-Out (FIFO) order.
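For illustration, the peer cache update can be sketched as a simple bounded FIFO queue; PeerRecord and PeerCache are hypothetical names introduced here rather than components of the disclosed system:

    #include <cstddef>
    #include <deque>
    #include <string>

    struct PeerRecord {
        std::string contactInfo;   // the peer's address information
    };

    class PeerCache {
    public:
        explicit PeerCache(std::size_t maxSize) : maxSize_(maxSize) {}

        // Called with the node visited by a random-walk daemon every c steps.
        void addSample(const PeerRecord& sample) {
            queue_.push_back(sample);
            if (queue_.size() > maxSize_)   // keep no more than L records
                queue_.pop_front();         // FIFO: drop the oldest record
        }

        const std::deque<PeerRecord>& records() const { return queue_; }

    private:
        std::size_t maxSize_;               // the constant L
        std::deque<PeerRecord> queue_;
    };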
[0081] 2.1.1.2 Peer Join Procedure
[0082] Referring to FIG. 6, a flowchart 600 shows an embodiment of joining a membership overlay by new peer 504. A new peer first sends a join request to the bootstrapping node (shown at block 602). The bootstrapping node then selects k records from peer cache 506 which are nearest to new peer 504. The bootstrapping node also selects l random peers from the rest of the peer cache. The bootstrapping node returns the selected peer information back to the new peer (shown at block 604). If the cache size is less than k+l, the bootstrapping node sends information on all peers to the new peer.
[0083] After receiving the peer records (shown at block 606), the new peer joins the membership overlay (shown at block 608). [0084] In one embodiment, the new peer P performs the following operations to join the membership overlay:
[0085] 1) new peer P takes the l random peers as the random neighborhood. If there are no random peers in the records, the random neighborhood will be empty.
[0086] 2) new peer P takes the k nearest peers as the initial nearest neighborhood, and starts the following locating process:
[0087] a) for each peer Q in Nnearest(P), new peer P sends a Nearest Neighbor (NE) query to Q.
[0088] b) when a peer Q receives an NE query, Q selects the k peers nearest to new peer P from all of its neighbors (including its InView and OutView) and returns them to new peer P as an acknowledgement of the NE query.
[0089] c) new peer P finds the k nearest peers from the merged query results and updates Nnearest(P).
[0090] d) repeat steps (a)-(c) until convergence. Convergence occurs when the median of the distances to the nearest peers does not change. At this point, new peer P stops the locating process and adds the k nearest peers observed so far to its OutView. A sketch of this locating loop is shown below.
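The following minimal sketch of steps (a)-(d) assumes each candidate peer carries an already measured distance (e.g., RTT) to the new peer and that the caller supplies the NE-query function; all names are illustrative assumptions, not identifiers from the disclosure:

    #include <algorithm>
    #include <cstddef>
    #include <functional>
    #include <vector>

    struct Peer {
        int id;
        double distance;   // measured distance (e.g., RTT) between this peer and the new peer
    };

    static double medianDistance(std::vector<Peer> peers) {
        std::sort(peers.begin(), peers.end(),
                  [](const Peer& a, const Peer& b) { return a.distance < b.distance; });
        return peers[peers.size() / 2].distance;
    }

    // neQuery(q, k) asks peer q for the k peers in its InView/OutView nearest to
    // the new peer (step (b)); it is supplied by the joining peer's messaging layer.
    std::vector<Peer> locateNearestNeighborhood(
        std::vector<Peer> nearest, std::size_t k,
        const std::function<std::vector<Peer>(const Peer&, std::size_t)>& neQuery) {
        if (nearest.empty()) return nearest;
        double lastMedian = medianDistance(nearest);
        for (;;) {
            std::vector<Peer> merged = nearest;
            for (const Peer& q : nearest) {                   // step (a): query each current neighbor
                std::vector<Peer> reply = neQuery(q, k);
                merged.insert(merged.end(), reply.begin(), reply.end());
            }
            std::sort(merged.begin(), merged.end(),
                      [](const Peer& a, const Peer& b) { return a.distance < b.distance; });
            if (merged.size() > k) merged.resize(k);          // step (c): keep the k nearest
            nearest = merged;
            const double median = medianDistance(nearest);
            if (median == lastMedian) break;                  // step (d): median unchanged, converged
            lastMedian = median;
        }
        return nearest;   // these k peers are added to the new peer's OutView
    }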
[0091] 2.1.1.3 Membership Overlay Maintenance
[0092] The new peer (as well as the other peers) may perform membership overlay maintenance after having joined P2P overlay 116 (shown at block 610 of FIG. 6). Embodiments of membership overlay maintenance include handling a loss of communication with a peer and refining the overlay membership to adjust for peers entering or leaving the overlay. [0093] Embodiments herein include techniques for peer failure handling. A peer may suffer from a system failure or a network disconnection. In the membership overlay, each peer keeps alive connections with all its neighbors so that peer failure can be detected. Periodically, each peer sends a heartbeat message to all its neighbors (such as those in the InView and OutView). When a peer does not receive a heartbeat message from one of its neighbor peers for a long time, it removes that peer from the InView and/or OutView. When a peer receives a low-level connection-broken event for one of its neighbor peers, it also removes that peer from the InView and/or OutView. [0094] Embodiments herein include techniques for refining the overlay membership. In order to handle the dynamic change of Internet topology and peers' joining and leaving, each peer is able to automatically refine its neighborhood. In one embodiment, each peer periodically performs the following operations in this order to refine the overlay membership:
[0095] 1) the peer checks the nearest neighborhood; if its size is less than k/2, it starts a new query process to update the nearest neighborhood. In this step, the peer's nearest neighborhood has become too small and needs to be increased to ensure uninterrupted streaming.
[0096] 2) the peer checks the random neighborhood; if it contains fewer than l peers, the peer contacts the bootstrapping node, and the bootstrapping node selects enough random nodes from the cache to bring the random neighborhood back to l peers and returns them to the peer. The peer then adds them to the random neighborhood. [0097] 3) if a peer in the random neighborhood already exists in the nearest neighborhood, the peer removes it from the random neighborhood. However, the connection with that peer does not change, and it still exists in the peer's OutView; it is now regarded as one of the nearest neighbors instead of a random neighbor. [0098] 2.2 Peer-to-peer Streaming
[0099] 2.2.1 Streaming Partner Discovery
[00100] The streaming overlay is a mesh, with every node having a few (4-6) partners to exchange streaming packets. Peers select their partners from a candidate list 512. A heartbeat mechanism is employed to maintain the partnership and to detect peer failures. Peers keep refining their partners to achieve better streaming performance. [00101] There are two entities involved in this phase: candidate lists and partners.
Referring to flowchart 700 of FIG. 7, embodiments of candidate list operations are shown. The initial candidate list is obtained when a peer joins an overlay (shown at block 702). In one embodiment, a newly joining peer obtains the candidate list from the nearest in-view peers in the member join process.
[00102] The candidate list is periodically updated, including add, refresh, and remove operations (shown at block 704). Each peer periodically floods its "alive" information to its neighbors within two hops. The peer which receives an alive message updates the entry in its partner candidate list, or adds a new entry if the peer has not been on the list. This peer also checks the lifetime of each entry, and removes an entry if the time exceeds a given threshold. [00103] Referring to flowchart 800 of FIG. 8, embodiments of partner operations are shown. The initial partners are selected when the new peer joins the overlay by selecting 1-5 nearest partners from the candidate list (shown at block 802).
[00104] Partnership maintenance is performed (shown at block 804). Heartbeat messages are exchanged periodically between partners. When a partner failure is detected, the nearest available peer from the candidate list is selected to replace this partner. In the meantime, the failed partner is deleted from the candidate list.
[00105] The partners are refined to achieve better performance (shown at block
806). In one embodiment, the refinement procedure includes the following:
[00106] a) a routine refinement occurs periodically, such as every 10 seconds. The candidate list is checked to see whether there are any nearer nodes other than the current partner(s). If so, the nearer node becomes a new partner.
[00107] b) after each transmission round, if the bidirectional throughput to a partner is smaller than a given threshold, this partner is disconnected and a new partner is selected from the candidate list. The disconnected partner is given a poor flag in the candidate list. It will not be re-selected unless there are no other usable nodes. A sketch of this refinement procedure follows.
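The following minimal sketch assumes distance and bidirectional throughput have already been measured; Candidate, Partner, and refinePartners are hypothetical names introduced for this illustration:

    #include <algorithm>
    #include <vector>

    struct Candidate {
        int peerId;
        double distance;     // e.g., measured RTT to this candidate
        bool poor = false;   // set when the candidate was dropped for low throughput
    };

    struct Partner {
        int peerId;
        double distance;
        double throughput;   // bidirectional throughput observed in the last round
    };

    void refinePartners(std::vector<Partner>& partners,
                        std::vector<Candidate>& candidates,
                        double minThroughput) {
        // (b) disconnect partners whose throughput fell below the threshold and
        //     flag them as poor in the candidate list.
        for (auto it = partners.begin(); it != partners.end();) {
            if (it->throughput < minThroughput) {
                for (Candidate& c : candidates)
                    if (c.peerId == it->peerId) c.poor = true;
                it = partners.erase(it);
            } else {
                ++it;
            }
        }
        // (a) routine refinement: adopt a nearer, non-poor candidate in place of
        //     the farthest current partner.
        std::sort(candidates.begin(), candidates.end(),
                  [](const Candidate& a, const Candidate& b) { return a.distance < b.distance; });
        for (const Candidate& c : candidates) {
            if (c.poor || partners.empty()) continue;
            const bool alreadyPartner = std::any_of(partners.begin(), partners.end(),
                [&](const Partner& p) { return p.peerId == c.peerId; });
            if (alreadyPartner) continue;
            auto farthest = std::max_element(partners.begin(), partners.end(),
                [](const Partner& a, const Partner& b) { return a.distance < b.distance; });
            if (c.distance < farthest->distance)
                *farthest = Partner{c.peerId, c.distance, 0.0};   // throughput not yet measured
        }
    }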
[00108] 2.2.2 Streaming using a Prefetch-Pull Strategy
[00109] Once a partnership is established, a peer starts exchanging media data with its partners. Each peer uses a prefetch-pull strategy to get media data packets from its partners. Before describing the prefetch-pull strategy, the following concepts are discussed. [00110] Media Buffer - Each peer in a streaming overlay has a media buffer to accommodate the received packets of the video stream. The media buffer may include a FIFO packet queue. The size L of the packet queue is fixed. If the media buffer is full, the oldest packet is removed as a new packet arrives.
[00111] Buffer Map - A buffer map is a bit vector of size L. Each bit in the buffer map represents the availability of a single packet in the packet queue. Given the start sequence number and the buffer map, one can know which media packets are available in the packet queue. During streaming, each peer periodically publishes its buffer map to its partners.
[00112] Media Strips - In embodiments of the streaming algorithm, each video packet has a unique sequence number which is assigned by the hosting program, and the sequence number is used as an index of the video packet. Furthermore, a video stream is divided into P strips, and a packet with sequence number s belongs to the strip (s mod P).
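These three concepts can be illustrated with a small sketch; MediaBuffer and its members are hypothetical names, and the fixed window of size L is modeled as a bit vector:

    #include <cstddef>
    #include <cstdint>
    #include <utility>
    #include <vector>

    class MediaBuffer {
    public:
        MediaBuffer(std::size_t windowSize, uint32_t startSeq, uint32_t strips)
            : map_(windowSize, false), startSeq_(startSeq), strips_(strips) {}

        // Strip index of a packet with sequence number s: (s mod P).
        uint32_t stripOf(uint32_t seq) const { return seq % strips_; }

        // Mark a received packet in the buffer map (packets outside the window are ignored).
        void markReceived(uint32_t seq) {
            if (seq >= startSeq_ && seq - startSeq_ < map_.size())
                map_[seq - startSeq_] = true;
        }

        bool has(uint32_t seq) const {
            return seq >= startSeq_ && seq - startSeq_ < map_.size() && map_[seq - startSeq_];
        }

        // The buffer map published periodically to partners: start sequence number plus bits.
        std::pair<uint32_t, std::vector<bool>> publish() const { return {startSeq_, map_}; }

    private:
        std::vector<bool> map_;   // one bit per packet in the fixed window of size L
        uint32_t startSeq_;       // sequence number of the oldest packet in the window
        uint32_t strips_;         // P
    };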
[00113] 2.2.2.1 Prefetch
[00114] Prefetch means that a peer can order some strips from one partner, and some strips from other partners. Turning to flowchart 900 of FIG. 9, an embodiment of a prefetch-pull strategy is shown. Periodically, there is a prefetch process. A peer assigns P prefetch tasks (each task corresponds to one strip) to its partners (shown at block 902). The prefetch task for a strip is assigned to a partner according to the bandwidth and latency of that partner. In one embodiment, statistics of previously received packets are used in the prefetch scheduling. A strip is assigned to the partner from whom the peer has received the most packets of that strip so far.
[00115] The assignment is sent as prefetch requests to partners (shown in block 904).
[00116] Each node also periodically receives prefetch requests from its partners due to the symmetric partnership (shown in block 906). An older prefetch request is updated by a new prefetch request from the same partner.
[00117] Whenever a peer receives a packet (block 908), it hashes the packet to one strip (shown in block 910). In one embodiment, the peer uses the function (s mod P) as described above. The peer determines whether any prefetch requests regarding this strip have been received from any partners. If so, the peer immediately forwards the packet to the requesting partner(s). This forward operation may be referred to as a "push" (shown at block 912).
[00118] Because each peer independently schedules its own pre-fetching process, it is possible that several peers may form pre-fetching circles. In a pre-fetching circle, a peer receives two or more copies of the same packet because the peer has previously sent out a packet request to all of its partners. This wastes bandwidth and reduces streaming performance. To resolve this pre-fetching circle problem, each peer records and updates its partners' buffer maps once it has received/pushed a packet from/to a partner so that the peer will not push the same packet to a partner more than once (shown at block 914). In one embodiment, the peer maintains a local copy of each partner's buffer map that is updated each time a packet is received or pushed by the peer. Local copies of the buffer maps at a peer are added/removed as the peer's partners are added/removed.
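A minimal sketch of the push step at blocks 910-914, including the loop-avoidance check against a local copy of each partner's buffer map, is given below; PartnerState, onPacketReceived, and the sendTo callback are hypothetical names introduced for this illustration:

    #include <cstdint>
    #include <map>
    #include <set>

    struct PartnerState {
        std::set<uint32_t> requestedStrips;   // strips this partner has prefetch-requested from us
        std::set<uint32_t> knownPackets;      // local copy of the partner's buffer map
    };

    // sendTo represents the transport used to push a packet to a partner.
    void onPacketReceived(uint32_t seq, uint32_t strips,
                          std::map<int, PartnerState>& partners,
                          void (*sendTo)(int partnerId, uint32_t seq)) {
        const uint32_t strip = seq % strips;                         // block 910: hash packet to a strip
        for (auto& [partnerId, state] : partners) {
            if (state.requestedStrips.count(strip) == 0) continue;   // no prefetch request for this strip
            if (state.knownPackets.count(seq) != 0) continue;        // partner already has it: avoid a duplicate push
            sendTo(partnerId, seq);                                  // block 912: push the packet
            state.knownPackets.insert(seq);                          // block 914: update local buffer map copy
        }
    }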
[00119] 2.2.2.2 Pull
[00120] In one embodiment, the pre-fetching algorithm is a best-effort algorithm; there is no retransmission scheme to guarantee the quality of service (QoS). It is possible that some packets are lost due to network packet loss or congestion. In order to handle the possible pre-fetching failure of some packets, embodiments herein include a pulling strategy to proactively pull needed packets from partners if a pull condition is triggered (shown in block 916). For a pre-fetching failure, there are two pull conditions to trigger a pull for a packet: a general pull and an urgent pull.
[00121] In a general pull, the peer takes the available packet with the maximum sequence number in the peer's own buffer map as a reference (denoted as position E). Any non-received packet whose sequence number is smaller than (E-TH1) is scheduled to be pulled, where TH1 is a constant threshold. The non-received packet is pulled from a randomly selected partner whose buffer map (local copies of buffer maps stored at the peer) shows that it has the packet. Once a peer receives a pull request, it sends the required packet(s) as an acknowledgement.
[00122] In order to further reduce the possibility of failure, an urgent pull request is scheduled if the peer fails to receive a packet within a certain time period TH2 before its playback time, where TH2 is another constant threshold. At the peer, a media playback module gets media data packets from the P2P streaming module for playing back the video. In embodiments herein, the playback timing is independent from the streaming scheme. The playback can start at a time several seconds after the first packet in the stream is received at the peer.
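A minimal sketch of the two pull triggers is given below. It assumes the playback time of each packet is known to the caller; schedulePulls and its parameters are hypothetical names introduced for this illustration:

    #include <cstdint>
    #include <vector>

    struct PullRequest {
        uint32_t seq;
        bool urgent;
    };

    // playbackTimeMs gives the scheduled playback time of a packet; it is an
    // assumption of this sketch, since playback timing is independent of streaming.
    std::vector<PullRequest> schedulePulls(const std::vector<uint32_t>& missingSeqs,
                                           uint32_t maxReceivedSeq,   // position E
                                           uint32_t th1,              // general pull threshold TH1 (packets)
                                           double nowMs,
                                           double th2Ms,              // urgent pull threshold TH2 (time)
                                           double (*playbackTimeMs)(uint32_t seq)) {
        std::vector<PullRequest> pulls;
        for (uint32_t seq : missingSeqs) {
            const bool urgent  = playbackTimeMs(seq) - nowMs <= th2Ms;
            const bool general = maxReceivedSeq > th1 && seq < maxReceivedSeq - th1;
            if (urgent || general)
                pulls.push_back({seq, urgent});   // pulled from a random partner whose buffer map has the packet
        }
        return pulls;
    }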
[00123] Embodiments herein include a peer-to-peer video streaming system that includes a location-aware unstructured peer-to-peer overlay and a prefetch-pull based streaming algorithm for transmitting video data among peers. Through embodiments of the system, live video streams can be sent to a large number of receivers in a cost-effective way.
[00124] 3 PEER-TO-PEER PROTOCOL DESIGN
[00125] An embodiment of a P2P protocol design for communication between peers is described as follows. It will be understood that embodiments of the invention are not limited to use with the following P2P protocol design and other protocols may be used to implement embodiments of the invention. [00126] 3.1 Protocol Stack
[00127] To support efficient communication among peers, an extensible multiple layer protocol stack is used. The structure of the protocol stack is illustrated in FIG. 10. [00128] 3.1.1 Link Layer
[00129] The Link Layer is the lowest layer of the SVS protocol stack. The Link Layer tries to establish and manage all incoming/outgoing connections through the underlying transportation protocols such as TCP, UDP, and HTTP/HTTPS.
[00130] Because the peer may use multiple protocols to transmit data, one peer can have multiple addresses and ports. All of these addresses and ports of a peer are defined as the peer's ContactInfo and stored in the following format: [00131] { PROTOCOL_TYPE1 : IP1 : Port1,
PROTOCOL_TYPE2 : IP2 : Port2, PROTOCOL_TYPE3 : IP3 : Port3, ... }
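For illustration, a ContactInfo can be represented as a list of (protocol, address, port) entries; the types and example addresses below are assumptions introduced for this sketch:

    #include <cstdint>
    #include <string>
    #include <vector>

    enum class ProtocolType { TCP, UDP, HTTP, HTTPS };

    struct Endpoint {
        ProtocolType protocol;
        std::string ip;
        uint16_t port;
    };

    // Example: a peer reachable over TCP, UDP, and HTTP (addresses are placeholders).
    const std::vector<Endpoint> exampleContactInfo = {
        {ProtocolType::TCP,  "192.0.2.10", 6001},
        {ProtocolType::UDP,  "192.0.2.10", 6002},
        {ProtocolType::HTTP, "192.0.2.10", 8080},
    };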
[00132] The definition of the ContactInfo makes it possible to support any common Internet protocol through the Link Layer modules. The Link Layer also monitors the status of the underlying connections and reports events such as link failure or congestion to the upper peer layer. When necessary, the Link Layer also performs firewall traversal to help establish the connection between peers.
[00133] 3.1.2 Peer Layer
[00134] The Peer Layer is built on top of the Link Layer and is responsible for application level networking.
[00135] Each peer is assigned a unique 128-bit ID. The Peer Layer protocol uses the peer ID as the application level address. After establishing an application level connection through the peer layer's handshake protocol, two peers can send and receive data simply through their IDs. The underlying communication events are also mapped into high level peer behaviors such as peer failure, peer connected, peer disconnected, and the like.
[00136] Through the design of the link layer and peer layer, the peer level communication is abstracted from the underlying transportation protocols. The upper peer-to-peer applications and protocols can be developed solely based on the uniform peer level communication interface. Such a design is very flexible. It makes peer-to-peer development much easier and can support efficient transportation protocols as well. [00137] 3.1.3 Application Layer
[00138] As described above in section 2, for a peer-to-peer streaming application, two application level protocols are developed: 1) a membership overlay protocol, which helps peers form a location-aware network; and 2) a streaming overlay protocol, which enables peers to exchange streaming packets.
[00139] 3.2 Protocol Details
[00140] 3.2.1 Link Level Protocol
[00141] The Link Layer is the lowest layer of the SVS protocol stack and is in charge of the incoming and outgoing IP links, including TCP, UDP, and HTTP/HTTPS. All packets in the SVS system pass through this layer. An embodiment of the Link Layer Protocol (LLP) header has the format shown in FIG. 11.
[00142] MAGIC WORD: 16-bit field contains the magic word "SV" of SVS.
[00143] V: 2-bit field contains the version of the protocol. The current version is 10 (binary).
[00144] R: 1-bit field indicates whether this packet is supposed to be sent via a reliable transmission channel. If this packet is received from the UDP channel, an ACK may need to be sent upon receiving this message.
[00145] MESSAGE TYPE: 8-bit field contains the message type information. The message type is defined as:
////
//// Link layer message types
////
#define LLP_SYN      0x0001  //Used to establish a connectionless link
#define LLP_SYN_ACK  0x0002  //Used to establish a connectionless link
#define LLP_CLOSE    0x0008  //Close the link, connectionless
#define LLP_PRO      0x0003  //Probe message used to measure RTT
#define LLP_PRO_ACK  0x0004  //Acknowledgement to the probe message
#define LLP_MEM      0x0005  //Reserved for bandwidth measurement
#define LLP_DATA     0x0080  //Data for the upper layers
#define LLP_ACK      0x0006  //Ack for a Data packet
[00146] MESSAGE LENGTH: 16-bit field contains the length of the packet in bytes, including the LLP header.
[00147] SEQNUM (sequence number) is used for SYN/SYN_ACK and PRO/PRO_ACK messages.
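For illustration, the header fields described above can be collected into a structure such as the following; the exact bit packing of FIG. 11 is not reproduced here, so this layout is an assumption of the sketch rather than the disclosed format:

    #include <cstdint>

    struct LlpHeader {
        uint16_t magicWord;       // 16-bit magic word "SV"
        uint8_t  version : 2;     // V: protocol version, currently 10 (binary)
        uint8_t  reliable : 1;    // R: expected on a reliable channel (ACK needed over UDP)
        uint8_t  messageType;     // 8-bit message type (LLP_SYN, LLP_DATA, ...)
        uint16_t messageLength;   // packet length in bytes, including the LLP header
        uint16_t seqNum;          // SEQNUM, used by SYN/SYN_ACK and PRO/PRO_ACK
    };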
[00148] 3.2.1.1 SYN and SYN_ACK
[00149] To establish the UDP connection, a SYN message is first sent to the receiver with a SEQNUM. If the receiver receives the packet, it returns a SYN_ACK with the same SEQNUM. Once the sender receives the SYN_ACK, it knows that the connection is established. This mechanism is only used for UDP channels.
[00150] 3.2.1.2 Probe and Answer Probe
[00151] The Probe and Answer Probe messages are a pair of messages used to measure the end-to-end latency. Both messages have the same body structure: a DWORD field that contains a timestamp. Every 10 seconds, a host sends a Probe message to every available UDP channel. The timestamp in the packet indicates the sending time. Upon receiving this message, the peer changes the message ID to Answer Probe and sends the packet back immediately. When the host receives the return packet, it can calculate the Round Trip Time (RTT) by subtracting the timestamp carried in the packet from the current time. The result is reported to the network monitor.
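A minimal sketch of the probe exchange and RTT computation is given below; the helper names and the use of a millisecond steady clock are assumptions introduced for this illustration:

    #include <chrono>
    #include <cstdint>

    struct ProbeMessage {
        uint8_t  messageType;   // LLP_PRO (0x03) or LLP_PRO_ACK (0x04)
        uint32_t timestampMs;   // DWORD sending time
    };

    static uint32_t nowMs() {
        using namespace std::chrono;
        return static_cast<uint32_t>(
            duration_cast<milliseconds>(steady_clock::now().time_since_epoch()).count());
    }

    // Sender: stamp the probe with the current time.
    ProbeMessage makeProbe() { return {0x03, nowMs()}; }

    // Receiver: change the message ID to Answer Probe and echo the packet back.
    ProbeMessage answerProbe(ProbeMessage probe) {
        probe.messageType = 0x04;
        return probe;
    }

    // Sender: RTT = current time minus the timestamp carried in the returned packet.
    uint32_t roundTripTimeMs(const ProbeMessage& answer) {
        return nowMs() - answer.timestampMs;
    }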
[00152] 3.2.1.3 Acknowledgement
[00153] In SVS, there are basically two kinds of packets: control messages and video streams. While the loss of media data affects the communication quality, the loss of control messages is usually more damaging. For example, loss of a subscription request keeps the subscriber waiting; loss of network status updates may result in network congestion. In one embodiment, the TCP channel is used for transmitting the control messages.
[00154] The LLP_ACK message is used to acknowledge the receipt of a need-ack packet. The body of the packet contains an 8-bit sequence number which indicates which packet the peer has received. An 8-bit field for the sequence number is used because the volume of need-ack packets is not expected to be large.
[00155] 3.2.1.4 Bandwidth Measurement
[00156] In one embodiment, the bandwidth measurement packet structure of DigiMetro is used. DigiMetro is an application-level multicast system for multi-party video conferencing. The packet body has the format shown in FIG. 12. The first 16-bit field (ROUND NUMBER) indicates which round of measurement this packet belongs to. This is to avoid mixing packets from consecutive measurement processes. The second 16-bit field (SEQ_NO) contains the in-sequence number of this packet. The 32-bit timestamp records the sending time (TIMESTAMP). [00157] The network monitor also allows the user to set and process the bandwidth measurement data at the application layer. Examples of the application layer assigned data are the user's display image and the most recent I-frame of the real-time video. This data can be passed to the network monitor through the exposed interface. By default, the network monitor uses a block of nonsense data for bandwidth measurement. The developer can also set a callback function if the developer wishes to process the measurement data. The structure of the data part can be defined by the developer.
[00158] 3.2.1.5 Data
[00159] Message type "LLP_DATA" indicates that this packet belongs to an upper layer. In such a case, the link management layer should detach the LLP header and pass the LLP data to the upper level protocol.
[00160] 3.2.2 Peer Level Protocol
[00161] The Transport Level Protocol (TLP) handles the messages from the connection level. The TLP header has the data fields shown in FIG. 13.
[00162] MESSAGE TYPE: 16-bit field contains the message type information. The message type is defined as:
////
//// Peer layer message types
////
#define PLP_HELLO    0x0001  //Used as a heartbeat message
#define PLP_SYN      0x0002  //Used to establish a peer connection
#define PLP_SYN_ACK  0x0003  //Acknowledge for the SYN message
#define PLP_DATA     0x0080  //Data payload for the upper layer
[00163] OVERLAY PORT: 16-bit field contains the overlay port number of the packet source. By using this port one can easily dispatch messages to different modules. In addition, one can also establish multiple overlay structures based on the same lower layers.
[00164] 3.2.2.1 Hello
[00165] Unlike DigiParty (a multiparty video conferencing system), the Hello message is only used as a keep-alive message. Every 5 seconds, a host sends a Hello message to every available UDP and IP multicast channel to keep the link alive. If a host has not received a Hello message for 30 seconds from a specific link, it assumes that the connection is broken.
[00166] 3.2.2.2 SYN/SYN_ACK
[00167] The SYN message is mainly used for setting up a connection. After a connection object accepts a TCP connection request, the first message it receives from this connection should be a SYN message. Otherwise, the host closes this connection. A host also uses the SYN message to ping its peers through UDP and IP multicast channels. Upon receiving the SYN message, the peer manager checks whether there already exists a Peer object which is associated with the SENDER ID. If so, it updates the connectivity information for the object (e.g., sets the UDP channel status to "available" if the SYN message is received from UDP). Otherwise, the peer manager creates a new Peer object and associates it with the SENDER ID in the packet header. The body of the SYN message contains the SENDER ID. [00168] 3.2.2.3 Data
[00169] Message type "PLP_DATA" indicates that this packet belongs to an upper layer. In such a case, the peer layer should detach the Peer Layer Protocol (PLP) header and pass the PLP data to the upper level protocol. [00170] 3.2.3 Membership Overlay Level Protocol
[00171] Through maintaining a consistent membership overlay, the system connects all peers together with location-awareness. Peers can join and find their nearest neighbors very quickly. Though the streaming partners do not necessarily correspond to the nearest nodes, and the streaming overlay does not necessarily match the topology, the location-aware property of the membership overlay helps a peer find good streaming partners. In addition, the membership overlay also supports abundant distributed operations through P2P communication. The protocol of the membership overlay is described as follows.
[00172] 3.2.3.1 Message Structure
[00173] The membership overlay message structure includes data fields as shown in FIG. 14.
[00174] struct MOVERLAY_HEADER
{
    SVS_Msg_Type m_MsgType;  //The message type
    UInt16 m_SeqNo;
};
[00175] 3.2.3.2 Message Type [00176] m_MsgType is a 16-bit message word. In embodiments herein, all incoming messages are delivered to both the membership overlay and the streaming overlay. The value of the highest bit of the message type is used to distinguish the messages. A message whose highest bit is 1 is processed by the streaming overlay. Otherwise, it is a message for the membership overlay.
[00177] For message types of the membership overlay, the higher byte is the destination description. The lower byte contains the message ID.
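For illustration, this dispatch rule can be sketched as follows; Dispatch and dispatchMessage are hypothetical names, and the masks follow the bit convention described above:

    #include <cstdint>

    struct Dispatch {
        bool toStreamingOverlay;
        uint16_t destination;   // destination description (e.g., SVSOVERLAY_NINVIEW), membership messages only
        uint8_t  messageId;     // membership overlay message ID (e.g., SVSOVERLAY_RANWALK)
    };

    Dispatch dispatchMessage(uint16_t msgType) {
        Dispatch d{};
        d.toStreamingOverlay = (msgType & 0x8000) != 0;   // highest bit 1: streaming overlay message
        if (!d.toStreamingOverlay) {
            d.destination = static_cast<uint16_t>(msgType & 0xFF00);   // higher byte: destination description
            d.messageId   = static_cast<uint8_t>(msgType & 0x00FF);    // lower byte: message ID
        }
        return d;
    }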
[00178] Membership Overlay Message IDs
#define SVSOVERLAY_RANWALK      0x0001  //Random walk
#define SVSOVERLAY_ACK_RANWALK  0x0009  //ACK of the Random walk message
#define SVSOVERLAY_RANWALK_D    0x000B  //Random walk message with data from the above level
#define SVSOVERLAY_FLOOD_L      0x0002  //Flood
#define SVSOVERLAY_FLOOD_D      0x000A  //Flood message with data from the above level
#define SVSOVERLAY_QUE_RAND     0x0003  //Require random message from bootstrapping server
#define SVSOVERLAY_ACK_RAND     0x0004  //Ack of the require random message from bootstrapping server
#define SVSOVERLAY_QUE_NEAR     0x0005  //Require nearest peers from bootstrapping server
#define SVSOVERLAY_ACK_NEAR     0x0006  //Ack of the require nearest peers from bootstrapping server
#define SVSOVERLAY_REQ_CON      0x0007  //Request to be added as an in-link
#define SVSOVERLAY_ACK_CON      0x0008  //Acknowledge of the request
#define SVSOVERLAY_QUIT         0x000C  //Request to quit the membership overlay
#define SVSOVERLAY_FLOOD_QUIT   0x000D  //Request to quit the whole overlay (sent by bootstrapping node)
#define SVSOVERLAY_REQJOIN      0x000E  //Request to join a membership
#define SVSOVERLAY_ACKJOIN      0x000F  //Ack to join a membership
[00179] Membership Overlay Message Destination Description
#define SVSOVERLAY_NINVIEW   0x0100  //near in view
#define SVSOVERLAY_RINVIEW   0x0200  //random in view
#define SVSOVERLAY_NOUTVIEW  0x0400  //near out view
#define SVSOVERLAY_ROUTVIEW  0x0800  //random out view
#define SVSOVERLAY_SERVER    0x1000  //the bootstrapping node
#define SVSOVERLAY_INVIEW    0x0300  //all in views
#define SVSOVERLAY_OUTVIEW   0x0C00  //all out views
#define SVSOVERLAY_ALLVIEW   0x0F00  //all views
[00180] 3.2.3.3 Random Walk
[00181] Random walk is a basic function or message of a membership overlay. Random walk can be used to select random neighbors when a peer tries to join a membership overlay, and can be used to select random nodes over the membership overlay. In some embodiments of the protocol, biased forwarding and biased halting schemes are not supported. Once a peer receives a random walk message from its neighbor, it decreases the hop number of the message. If the hop number is larger than zero, the peer selects one random peer from the destination neighbors (refer to the Membership Overlay Message Destination Description in the previous section) and directly forwards the message. Otherwise, the peer does not forward the message further and sends a random walk acknowledge message to the peer which is the source of the random walk message (for a SVSOVERLAY_RANWALK_D message, the membership overlay delivers the data payload to the upper layer (i.e., the streaming overlay) and does not send an acknowledge message to the source peer; the upper layer can send its own acknowledge message according to the received data). A sketch of this forwarding rule follows the message definitions below.
[00182] There are two message types: SVSOVERLAY_RANWALK and SVSOVERLAY_RANWALK_D. SVSOVERLAY_RANWALK is a message used only for membership overlay maintenance such as peer joining and membership refinement. SVSOVERLAY_RANWALK_D is a data-conveying message for the upper layer.
follows:
struct MOVERLAY_RANDOM
{
MOVERLAY_H EADER m_Header; / /the message header of the message int m_Hops; / /the hop number
BYTE m_Data[0]; / /the Peerlnfo of the source message. For
SVSOVERLAY_RANWALK message, data is a contactinfo structure. For
SVSOVERLAY_RANWALK_D message, data is a contactinfo structure + upper layer data.
};
[00184] An embodiment of a message body of a random walk acknowledge message is as follows:
struct MOVERLAY_RANDOM_ACK
{
    MOVERLAY_HEADER m_Header;
    CUID m_VideoID;    //the video ID of the overlay
    PeerInfo m_Info;   //the local ContactInfo structure
};
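As a sketch of the forwarding rule referenced in section 3.2.3.3 above, a receiving peer may process a random walk message as follows; the callback parameters and struct names are assumptions introduced for this illustration:

    #include <cstdlib>
    #include <vector>

    struct RandomWalkMsg {
        int hops;                      // remaining hop count
        int sourcePeerId;              // the peer that originated the random walk
        bool carriesUpperLayerData;    // true for a SVSOVERLAY_RANWALK_D message
    };

    void onRandomWalk(RandomWalkMsg msg,
                      const std::vector<int>& destinationNeighbors,
                      void (*forwardTo)(int peerId, const RandomWalkMsg&),
                      void (*ackTo)(int peerId),
                      void (*deliverUp)(const RandomWalkMsg&)) {
        msg.hops -= 1;                                            // decrease the hop number
        if (msg.hops > 0 && !destinationNeighbors.empty()) {
            const int next =
                destinationNeighbors[std::rand() % destinationNeighbors.size()];
            forwardTo(next, msg);                                 // forward to one random destination neighbor
            return;
        }
        if (msg.carriesUpperLayerData)
            deliverUp(msg);                                       // _D message: payload goes to the streaming overlay
        else
            ackTo(msg.sourcePeerId);                              // plain message: acknowledge the source peer
    }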
[00185] 3.2.3.4 Flood Message
[00186] A flood message is also a basic function or message of a membership overlay. Once a peer receives a flood message from its neighbor, it decreases the hop number of the message. If the hop number is larger than zero, the peer directly forwards the message to the destination neighbors (refer to the Membership Overlay Message Destination Description in the previous section). Otherwise, the peer does not forward the message. There are two message types: SVSOVERLAY_FLOOD_L and SVSOVERLAY_FLOOD_D. SVSOVERLAY_FLOOD_L is a message used only for membership overlay maintenance such as peer joining and membership refinement. SVSOVERLAY_FLOOD_D is a data-conveying message for the upper layer. For a SVSOVERLAY_FLOOD_D message, the membership overlay delivers the data payload to the upper layer.
[00187] An embodiment of a message body of a flood message is as follows:
struct MOVERLAY_FLOOD
{
    MOVERLAY_HEADER m_Header;  //the message header of the message
    int m_Hops;                //the hop number
    BYTE m_Data[0];            //For a SVSOVERLAY_FLOOD_L message, the data is a ContactInfo structure of the source peer. For a SVSOVERLAY_FLOOD_D message, the data is a ContactInfo of the source peer + upper layer data.
};
[00188] 3.2.3.5 Query Random Candidates
[00189] As mentioned above, a peer can query random out-links from the bootstrapping node. An embodiment of the query message is defined below.
struct MOVERLAY_QUE_RAND
{
    MOVERLAY_HEADER m_Header;  //the message header of the message
    UInt16 m_Num;              //The number of random peers
    UInt16 m_Reserved;
    PORTTYPE m_SrcPort;        //the overlay Port of the querying peer
    ID m_VideoID;              //The video ID
    PeerInfo m_Info;           //the local ContactInfo
};
[00190] Once a bootstrapping node receives a MOVERLAY_QUE_RAND message, it replies with an acknowledgement MOVERLAY_ACK_RAND.
struct MOVERLAY_ACK_RAND
{
    MOVERLAY_HEADER m_Header;
    UInt16 m_Num;      //The number of random peers
    UInt16 m_Reserved;
    BYTE m_Data[0];    //the random peer IDs, ports and ContactInfos
};
[001 91 ] 3.2.3.6 Query nearest Peer Candidates
[001 92] As mentioned earlier, a peer may query k nearest peers from the
bootstrapping node and its out-links. An embodiment of the query message is as
follows.
struct MOVERLAY_QUE_NEAR
{
MOVERLAY_HEADER m_Header;
UInt16 m_Num; // The number of peers
UInt16 m_Reserved;
PORTTYPE m_SrcPort; // the overlay Port of the requester
ID m_VideoID; // The video ID
PeerInfo m_Info; // the local contactinfo
};
[00193] Once a bootstrapping node or a normal peer receives a MOVERLAY_QUE_NEAR message, it finds the m_Num nearest peers among its cached peers and replies with an acknowledgement MOVERLAY_ACK_NEAR.
struct MOVERLAY_ACK_NEAR
{
MOVERLAY_HEADER m_Header;
UInt16 m_Num; // The number of peers
UInt16 m_subType; // if 1, these are the global nearest peers
BYTE m_Data[0]; // the peer IDs, port numbers and contactinfos
};
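Serving a MOVERLAY_QUE_NEAR request amounts to ranking the cached peers by some distance measure and returning the closest m_Num of them, sorted from near to far. The sketch below assumes each cache entry already carries a precomputed distance (for example a measured round-trip time); the actual metric is not specified here.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical cache entry with a precomputed distance to the local peer
// (e.g. a measured RTT); the protocol's actual metric may differ.
struct CachedPeer {
    uint64_t id;
    double   distance;
};

// Return the 'num' nearest cached peers, sorted from near to far, for use
// in a MOVERLAY_ACK_NEAR reply.
std::vector<CachedPeer> nearestPeers(std::vector<CachedPeer> cache,
                                     std::size_t num) {
    std::sort(cache.begin(), cache.end(),
              [](const CachedPeer& a, const CachedPeer& b) {
                  return a.distance < b.distance;
              });
    if (cache.size() > num) cache.resize(num);
    return cache;
}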
[00194] 3.2.3.7 Request to Connect
[00195] When peer A wants to add peer B as an out-link, it sends a connection request to peer B. Peer B can decide whether to accept or reject the request. An embodiment of the message is as follows:
struct MOVERLAY_REQ_CON
{
MOVERLAY_HEADER m_Header;
UInt16 m_subType; // 0: as a nearest outview, 1: as a random out-link
UInt16 m_Reserved;
PeerInfo m_Info; // the source peer ID, port number and contactinfo
};
[00196] An embodiment of the acknowledge message is:
struct MOVERLAY_ACK_CON
{
MOVERLAY_HEADER m_Header;
UInt16 m_subType; // 0: refuse; 1: acknowledge
UInt16 m_Reserved;
};
[00197] 3.2.3.8 Join Overlay Request
[00198] When a new peer tries to join a membership overlay, it first connects to a bootstrapping node and sends a join message to it. Below is an embodiment of a join message sent by a peer, and its acknowledgement from a bootstrapping node.
// peer sends join request to bootstrapping node to join the overlay
struct MOVERLAY_REQ_JOIN
{
MOVERLAY_HEADER m_Header;
BYTE m_NeareastNum; // Number of nearest peers
BYTE m_RandomNum; // Number of random peers
UInt16 m_Reserved;
CUID m_VideoID;
PeerInfo m_Info;
};
[00199] // bootstrapping node replies to the peer's join request
struct MOVERLAY_ACK_JOIN
{
MOVERLAY_HEADER m_Header;
UInt16 m_SessionStatus;
UInt16 m_Reserved;
PORTTYPE m_Port; // Global unique port of the overlay
BYTE m_NeareastNum; // Number of nearest peers
BYTE m_RandomNum; // Number of random peers
UInt16 m_OOBMsgLen; // Length of the Out-of-Band message
BYTE m_Data[0]; // Contact Info Data:
// Nearest peers' PeerInfo, sorted from near to far
// Random peers' PeerInfo
// Out-of-Band Data: struct BootstrappingVideoInfo
};
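The m_Data section of the join acknowledgement is read in three pieces: the nearest peers' PeerInfo records (near to far), then the random peers' PeerInfo records, then the out-of-band BootstrappingVideoInfo block of m_OOBMsgLen bytes. A rough parsing sketch is shown below; it assumes a fixed PeerInfo record size purely for illustration, whereas the real encoding may be variable-length.

#include <cstddef>
#include <cstdint>
#include <vector>

// Views into the three segments of the MOVERLAY_ACK_JOIN data section.
struct JoinReplyView {
    std::vector<const uint8_t*> nearest;  // nearest peers' records, near to far
    std::vector<const uint8_t*> random;   // random peers' records
    const uint8_t* oob = nullptr;         // out-of-band BootstrappingVideoInfo
    std::size_t    oobLen = 0;
};

// 'peerInfoSize' is a fixed record size assumed only for this sketch.
JoinReplyView parseJoinData(const uint8_t* data, std::size_t len,
                            unsigned nearestNum, unsigned randomNum,
                            std::size_t peerInfoSize, std::size_t oobLen) {
    JoinReplyView v;
    std::size_t off = 0;
    for (unsigned i = 0; i < nearestNum && off + peerInfoSize <= len; ++i) {
        v.nearest.push_back(data + off);
        off += peerInfoSize;
    }
    for (unsigned i = 0; i < randomNum && off + peerInfoSize <= len; ++i) {
        v.random.push_back(data + off);
        off += peerInfoSize;
    }
    if (oobLen > 0 && off + oobLen <= len) { v.oob = data + off; v.oobLen = oobLen; }
    return v;
}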
[00200] 3.2.3.9 Quit Message
[00201] When a peer tries to quit a membership overlay, it sends a quit message to the bootstrapping node and all of its neighbors. This message has no acknowledgement. Any peer that receives this message simply deletes the sender from its neighborhood.
// Peer sends quit message to bootstrapping node and other neighbors to quit the overlay
struct MOVERLAY_QUIT
{
MOVERLAY_HEADER m_Header;
CUID m_VideoID;
};
// Bootstrapping node floods quit message to all peers to destroy the overlay
struct MOVERLAY_FLOOD_QUIT
{
MOVERLAY_HEADER m_Header;
CUID m_VideoID;
PORTTYPE m_Port; // Global unique port of the overlay
};
[00202] 3.2.4 Streaming Overlay Level Protocol
[00203] An embodiment of a Media Streaming Protocol (MSP) packet header is shown in FIG. 15 and described below.
[00204] MESSAGE TYPE: a 16-bit field that contains message type information. The message types are defined as:
#define MSP_DATA 0x8001 // Media data message
// direct download
#define MSP_REQ_DLV 0x8015 // Request to download live video
#define MSP_STP_DLV 0x8016 // Stop download live video
// Prefetch media messages
#define MSP_SUB_LVS 0x8101 // Subscribe live video strip
#define MSP_ACK_LVS 0x8102 // Acknowledge to live video strip subscription
// Pull media messages
#define MSP_NTF_BUFMAP 0x8111 // Buffer map notification to publish
#define MSP_REQ_DLVB 0x8112 // Request live video blocks download
// Partnership messages
#define MSP_REQ_PCL 0x8201 // Request partner candidate list
#define MSP_ACK_PCL 0x8202 // Acknowledge partner candidate list
#define MSP_NTF_ALB 0x8203 // Availability notification to be flooded
#define MSP_REQ_PARTNER 0x8204 // Request to establish partnership
#define MSP_ACK_PARTNER 0x8205
#define MSP_BYE_PARTNER 0x8206
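A receiver would typically switch on the 16-bit MESSAGE TYPE field to route each packet to the right handler. A trimmed dispatch sketch using a few of the constants above is shown below; the handler functions are placeholders.

#include <cstddef>
#include <cstdint>

// A few of the MSP message type codes listed above.
constexpr uint16_t MSP_DATA        = 0x8001;
constexpr uint16_t MSP_NTF_BUFMAP  = 0x8111;
constexpr uint16_t MSP_REQ_PARTNER = 0x8204;

// Placeholder handlers; real ones would parse the corresponding struct.
void onData(const uint8_t*, std::size_t) {}
void onBufferMap(const uint8_t*, std::size_t) {}
void onPartnerRequest(const uint8_t*, std::size_t) {}

// Route an incoming packet by its 16-bit MESSAGE TYPE field.
void dispatch(uint16_t msgType, const uint8_t* body, std::size_t len) {
    switch (msgType) {
        case MSP_DATA:        onData(body, len);           break;
        case MSP_NTF_BUFMAP:  onBufferMap(body, len);      break;
        case MSP_REQ_PARTNER: onPartnerRequest(body, len); break;
        default: /* unknown or unhandled message type */   break;
    }
}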
[00205] 3.2.4.1 Partnership Messages
[00206] A newly joined peer can query partner candidates from another peer in the overlay by sending the following request message:
// partnership
struct Message_MSP_REQ_PCL // Request partner candidate list
{
UInt32 reserved;
};
[00207] The queried peer sends back the following acknowledge message, which includes all of its partner candidates:
struct Message_MSP_ACK_PCL // Acknowledge partner candidate list
{
UInt16 count;
CUID peerID;
UInt16 length;
UInt8 contact[0]; // contact info, totally length Bytes
// totally count candidates
};
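Each candidate entry in the acknowledgement is a peer ID followed by a length field and length bytes of contact info, repeated count times. A parsing sketch is shown below; the 16-byte CUID size is an assumption made only for this illustration.

#include <cstddef>
#include <cstdint>
#include <cstring>
#include <utility>
#include <vector>

struct Candidate {
    uint8_t              peerId[16];  // CUID; 16 bytes assumed for this sketch
    std::vector<uint8_t> contact;     // opaque contact info
};

// Walk the repeated {peerID, length, contact[length]} records that follow
// the count field of an MSP_ACK_PCL body.
std::vector<Candidate> parseCandidates(const uint8_t* body, std::size_t len,
                                       uint16_t count) {
    std::vector<Candidate> out;
    std::size_t off = 0;
    for (uint16_t i = 0; i < count; ++i) {
        Candidate c{};
        if (off + sizeof(c.peerId) + sizeof(uint16_t) > len) break;
        std::memcpy(c.peerId, body + off, sizeof(c.peerId));
        off += sizeof(c.peerId);
        uint16_t contactLen;
        std::memcpy(&contactLen, body + off, sizeof(contactLen));
        off += sizeof(contactLen);
        if (off + contactLen > len) break;
        c.contact.assign(body + off, body + off + contactLen);
        off += contactLen;
        out.push_back(std::move(c));
    }
    return out;
}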
[00208] When a peer wants to establish a partnership with another peer, it sends the following partnership message:
struct Message_MSP_REQ_PARTNER // Request to establish partnership
{
CUID peerID; // my peer id
UInt16 length; // length of contact info, in Bytes
UInt8 contact[0]; // contact info, totally length Bytes
};
[00209] The receiver sends back the acknowledge message to establish or refuse the partnership:
struct Message_MSP_ACK_PARTNER
{
enum ACK_Partner
{
ptOK = 0x00,
ptPoorThroughput = 0x01,
ptTooManyPartners = 0x02,
ptPartnerDenied = 0x03,
ptNotEnoughBw = 0x04,
ptUnknown = 0x10
};
UInt8 result; // 0 means success, others mean error type
UInt32 reserved;
};
[00210] Either of the two peers can send the following bye message to terminate the partnership:
struct Message_MSP_BYE_PARTNER
{
enum Bye_Reason
{
brQuit = 0x00,
brPoorThroughput = 0x01,
brTooManyPartners = 0x02,
brUareNotMyParner = 0x03,
brUnknown = 0x10
};
UInt8 reason; // possible reasons: peer quit, poor throughput
UInt32 reserved;
};
[00211] When the peer starts receiving streaming packets and becomes available for other peers to get data from it, it notifies its neighbors within two hops through the following message. The neighbors add it to their candidate lists. An embodiment of an availability notification message is as follows:
struct Message_MSP_NTF_ALB // Availability notification to be flooded
{
CUID peerID; // my peer id
UInt16 length; // length of contact info, in Bytes
UInt8 contact[0]; // contact info, totally length Bytes
};
[00212] 3.2.4.2 Prefetch Messages
[00213] When a peer needs to prefetch media data from another peer, it sends the following subscribe message to it.
// push media
struct Message_MSP_SUB_LVS // Subscribe live video strip
{
UInt32 stripFlag; // bits of 1 mean strips of interest
UInt32 reserved;
};
[00214] When the destination peer receives the message, it decides to accept or reject the subscription and sends an acknowledge message back.
struct Message_MSP_ACK_LVS // Acknowledge to live video strip subscription
{
UInt32 stripAccepted; // bits of 1 mean strips of successful subscription
UInt32 reserved;
};
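In both messages each bit of the 32-bit word stands for one strip: the subscriber sets the bits of the strips it wants, and the acknowledgement sets the bits of the strips that were actually accepted. The sketch below frames the accept decision as a bitwise intersection with whatever the serving peer can supply; the available mask and the capacity check are assumed local state, not part of the specified messages.

#include <cstdint>

// Decide which requested strips to accept. 'requested' corresponds to
// Message_MSP_SUB_LVS.stripFlag; 'available' and 'haveCapacity' stand in
// for local state of the serving peer.
uint32_t acceptStrips(uint32_t requested, uint32_t available, bool haveCapacity) {
    if (!haveCapacity)
        return 0;                  // reject the whole subscription
    return requested & available;  // bits of 1 = successfully subscribed strips
}
// Example: requested 0b0101 (strips 0 and 2), available 0b0111 -> 0b0101 accepted.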
[00215] 3.2.4.3 Pull Messages
[00216] To reflect the status of the current buffer, each peer may advertise its buffer map to all its partners through the following message:
struct Message_MSP_NTF_BUFMAP // Buffer map notification to publish
{
UInt32 maxSeqNo; // max sequence number of the buffer
UInt32 lenWords; // length of buffMap, in 32bit-Words
UInt32 bufferMap[0]; // bit vector representation for buffer map, bufferMap[nBufferMapWords]
};
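The buffer map is a bit vector of lenWords 32-bit words anchored at maxSeqNo, with each bit indicating whether the packet with the corresponding sequence number is present. The exact bit ordering is not spelled out here; the sketch below assumes bit i counts backwards from maxSeqNo, purely for illustration.

#include <cstdint>
#include <vector>

struct BufferMap {
    uint32_t              maxSeqNo;  // max sequence number of the buffer
    std::vector<uint32_t> words;     // lenWords 32-bit words of bit flags
};

// Assumed mapping for this sketch: offset i = maxSeqNo - seqNo -> bit i.
bool hasPacket(const BufferMap& bm, uint32_t seqNo) {
    if (seqNo > bm.maxSeqNo) return false;            // newer than the buffer
    uint32_t i = bm.maxSeqNo - seqNo;
    if (i / 32 >= bm.words.size()) return false;      // older than the window
    return (bm.words[i / 32] >> (i % 32)) & 1u;
}

void markPacket(BufferMap& bm, uint32_t seqNo) {
    if (seqNo > bm.maxSeqNo) return;                  // sketch ignores newer packets
    uint32_t i = bm.maxSeqNo - seqNo;
    if (i / 32 < bm.words.size())
        bm.words[i / 32] |= (1u << (i % 32));
}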
[00217] According to the partner's buffer map, a peer can send the following pull request message to pull packets from the partner:
struct Message_MSP_REQ_DLVB // Request live video blocks download
{
UInt32 maxSeqNo; // max sequence number of the buffer
UInt32 lenWords; // length of buffMap, in 32bit-Words
UInt32 bufferMap[0]; // bit vector representation for buffer map, bufferMap[nBufferMapWords]
};
[00218] 3.2.4.4 Direct Download Messages
[00219] The first several peers in the streaming overlay network download from the hosting node directly using the following messages:
// direct download
struct Message_MSP_REQ_DLV // Request to download live video
{
CUID videoID;
UInt32 reserved;
};
struct Message_MSP_STP_DLV // Stop download live video
{
CUID videoID;
UInt32 reserved;
};
[00220] 3.2.4.5 Media Data Messages
[00221] The media data message carries the actual video streaming packets, each with a globally numbered sequence number.
struct Message_MSP_DATA // Media data message
{
UInt32 seqNo; // sequence number
UInt16 reserved;
UInt8 data[0];
};
[00222] Various operations of embodiments of the present invention are described herein. In one embodiment, one or more of the operations described may constitute computer readable instructions stored on computer readable media, which if executed by a computing device, will cause the computing device to perform the operations described. The order in which some or all of the operations are described should not be construed as to imply that these operations are necessarily order dependent. Alternative ordering will be appreciated by one skilled in the art having the benefit of this description. Further, it will be understood that not all operations are necessarily present in each embodiment of the invention.
[00223] The above description of embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. While specific embodiments and examples of the invention are described herein for illustrative purposes, various equivalent modifications are possible, as those skilled in the relevant art will recognize in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification. Rather, the following claims are to be construed in accordance with established doctrines of claim interpretation.

Claims

What is claimed is:
1. A method, comprising: receiving video data from an upload client at a hosting node; receiving a request from a download client at a bootstrapping node to receive the video data; informing the download client, by the bootstrapping node, to receive the video data directly from the hosting node when the hosting node is below a threshold, wherein the threshold is based at least in part on the maximum number of download clients the hosting node can stream to simultaneously; and informing the download client, by the bootstrapping node, to receive the video data from peers of the download client in a peer-to-peer overlay when the hosting node is above the threshold.
2. The method of claim 1 wherein the video data includes a live video stream captured by the upload client, wherein the download client receives the live video stream.
3. The method of claim 1, further comprising: receiving a request at the bootstrapping node from the upload client for an available hosting node; and sending, by the bootstrapping node, a hosting node address for the hosting node to the upload client in response to the request.
4. The method of claim 1, further comprising: receiving an upload request from the upload client at the hosting node to upload the video data; and sending an acknowledgement to the upload client by the hosting node in response to the upload request.
5. The method of claim 1, further comprising receiving video session information associated with the video data from the hosting node at the bootstrapping node after the hosting node has started receiving the video data.
6. The method of claim 1, further comprising sending video session information associated with the video data to a database server by the bootstrapping node.
7. The method of claim 1, further comprising discovering the peers of the peer-to-peer overlay by the download client.
8. One or more computer readable media including computer readable instructions that, when executed, perform the method of claim 1.
9. A method, comprising: receiving a prefetch request for a particular packet of a video stream at a peer from one or more partners of the peer, wherein the partners are a subset of peers of a peer-to-peer overlay; receiving a packet of the video stream at the peer; pushing the packet, by the peer, to any partner whose prefetch request matches the packet; and updating a local copy of the partner's buffer map at the peer after pushing the packet, wherein the buffer map indicates packets of the video stream available in a partner's packet queue.
10. The method of claim 9, further comprising sending prefetch requests of the peer to the partners in the peer-to-peer overlay.
11. The method of claim 9, further comprising pulling a needed packet from a partner, by the peer, when a pull condition is triggered.
12. The method of claim 9, further comprising: sending a join request to a bootstrapping node to join the peer-to-peer overlay by the peer; receiving peer information regarding one or more peers of the peer-to-peer overlay from the bootstrapping node; and joining the peer-to-peer overlay by the peer.
13. The method of claim 9, further comprising updating the set of peers that are partners in response to a loss of communication with a partner.
14. The method of claim 9, further comprising updating the set of peers that are partners in response to a peer entering or leaving the peer-to-peer overlay.
15. One or more computer readable media including computer readable instructions that, when executed, perform the method of claim 9.
16. A system, comprising: a hosting node to receive a video from an upload client and to stream the video to one or more download clients of a peer-to-peer overlay; and a bootstrapping node communicatively coupled to the hosting node, wherein the bootstrapping node to inform a download client to receive the video directly from the hosting node when the hosting node is below a threshold, wherein the threshold is based at least in part on the maximum number of download clients the hosting node can stream to simultaneously, wherein the bootstrapping node to inform a download client to receive the video from one or more download clients in the peer-to-peer overlay when the hosting node has exceeded the threshold.
17. The system of claim 16 wherein the video includes a live video stream captured by the upload client, wherein the one or more download clients receive the live video stream.
18. The system of claim 16, further comprising a second hosting node communicatively coupled to the bootstrapping node, the second hosting node to receive a second video from a second upload client and to stream the second video to one or more download clients of a second peer-to-peer overlay.
19. The system of claim 16, further comprising a database server communicatively coupled to the bootstrapping server, the database server to store video information associated with each video available on the system.
20. The system of claim 19, further comprising a web server communicatively coupled to the database server, the web server to provide access to viewers to video information stored on the database server.
PCT/US2007/084296 2006-11-10 2007-11-09 Peer-to-peer aided live video sharing system WO2008061022A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/558,878 US7733808B2 (en) 2006-11-10 2006-11-10 Peer-to-peer aided live video sharing system
US11/558,878 2006-11-10

Publications (1)

Publication Number Publication Date
WO2008061022A1 true WO2008061022A1 (en) 2008-05-22

Family

ID=39369086

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2007/084296 WO2008061022A1 (en) 2006-11-10 2007-11-09 Peer-to-peer aided live video sharing system

Country Status (2)

Country Link
US (2) US7733808B2 (en)
WO (1) WO2008061022A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10057337B2 (en) 2016-08-19 2018-08-21 AvaSure, LLC Video load balancing system for a peer-to-peer server network

Families Citing this family (91)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9549043B1 (en) 2004-07-20 2017-01-17 Conviva Inc. Allocating resources in a content delivery environment
US10862994B1 (en) 2006-11-15 2020-12-08 Conviva Inc. Facilitating client decisions
WO2007121610A1 (en) * 2006-04-21 2007-11-01 Yongmin Zhang A peer-to-peer network content transmitting method and an apparatus for implementing locating and playing
US7733808B2 (en) 2006-11-10 2010-06-08 Microsoft Corporation Peer-to-peer aided live video sharing system
US9386056B1 (en) 2006-11-14 2016-07-05 Arris Enterprises, Inc. System, method and computer readable medium for providing media stream fragments
US8874964B1 (en) 2006-11-15 2014-10-28 Conviva Inc. Detecting problems in content distribution
US8751605B1 (en) 2006-11-15 2014-06-10 Conviva Inc. Accounting for network traffic
US8874725B1 (en) 2006-11-15 2014-10-28 Conviva Inc. Monitoring the performance of a content player
US9124601B2 (en) 2006-11-15 2015-09-01 Conviva Inc. Data client
US9264780B1 (en) * 2006-11-15 2016-02-16 Conviva Inc. Managing synchronized data requests in a content delivery network
US7996550B2 (en) * 2006-11-30 2011-08-09 Red Hat, Inc. Peer-to-peer download with quality of service fallback
JP2008182431A (en) * 2007-01-24 2008-08-07 Nec Corp Video image and voice distribution system, and information processor
US20080294788A1 (en) * 2007-05-21 2008-11-27 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Systems and methods for p2p streaming
EP2151127B1 (en) * 2007-06-04 2017-01-04 Telefonaktiebolaget LM Ericsson (publ) Method and arrangement for improved channel switching
WO2009003055A1 (en) * 2007-06-25 2008-12-31 University Of Southern California Alert when streaming media of live events on computer network
CN101836383A (en) * 2007-07-26 2010-09-15 游牧人创新有限责任公司 Equipment and method based on full duplex network
US8250616B2 (en) * 2007-09-28 2012-08-21 Yahoo! Inc. Distributed live multimedia capture, feedback mechanism, and network
US8522289B2 (en) * 2007-09-28 2013-08-27 Yahoo! Inc. Distributed automatic recording of live event
US9769255B2 (en) * 2007-12-24 2017-09-19 Core Wireless Licensing S.A.R.L. Continuous scheduling for peer-to-peer streaming
US8099462B2 (en) * 2008-04-28 2012-01-17 Cyberlink Corp. Method of displaying interactive effects in web camera communication
US8731458B2 (en) * 2008-06-02 2014-05-20 Edward Matthew Sullivan Transmission and retrieval of real-time scorekeeping
US9393485B2 (en) 2008-06-02 2016-07-19 Gamechanger Media, Inc. Transmission and retrieval of real-time scorekeeping
US7886072B2 (en) * 2008-06-12 2011-02-08 Apple Inc. Network-assisted remote media listening
US9800926B2 (en) * 2008-08-13 2017-10-24 At&T Intellectual Property I, L.P. Peer-to-peer video data sharing
US8910197B2 (en) * 2008-08-25 2014-12-09 Alcatel Lucent Update process for interface device based targeted information insertion
US20100106797A1 (en) * 2008-10-23 2010-04-29 Qualcomm Incorporated Methods and apparatus for hybrid broadcast and peer-to-peer network using cooperative mimo
US8452886B2 (en) * 2008-12-04 2013-05-28 Microsoft Corporation Peer-to-peer packet scheduling algorithm
US7995476B2 (en) * 2008-12-04 2011-08-09 Microsoft Corporation Bandwidth allocation algorithm for peer-to-peer packet scheduling
US8051161B2 (en) * 2008-12-08 2011-11-01 Microsoft Corporation Peer-to-peer (P2P) streaming overlay construction and optimization
US8082356B2 (en) * 2008-12-09 2011-12-20 Hong Kong Applied Science And Technology Research Institute Co., Ltd. Synchronizing buffer map offset in peer-to-peer live media streaming systems
US8767081B2 (en) * 2009-02-23 2014-07-01 Microsoft Corporation Sharing video data associated with the same event
US9251317B2 (en) * 2009-03-23 2016-02-02 Microsoft Technology Licensing, Llc Network video messaging
US8402494B1 (en) 2009-03-23 2013-03-19 Conviva Inc. Switching content
US8051194B2 (en) * 2009-05-27 2011-11-01 Ray-V Technologies, Ltd. Method for buffer management for video swarms in a peer-to-peer network
US11064023B2 (en) 2009-05-27 2021-07-13 Verizon Media Inc. Method for actively sharing available bandwidth to consumer nodes in a peer-to-peer network for delivery of video streams
US9191219B2 (en) * 2009-06-16 2015-11-17 Adobe Systems Incorporated Network multicast peer discovery methods
US8483093B2 (en) * 2009-06-30 2013-07-09 Intel Corporation Energy efficient network forwarding based on performance and energy
US9203913B1 (en) 2009-07-20 2015-12-01 Conviva Inc. Monitoring the performance of a content player
US20110066745A1 (en) * 2009-09-14 2011-03-17 Sony Ericsson Mobile Communications Ab Sharing video streams in commnication sessions
US9757639B2 (en) 2009-11-24 2017-09-12 Seth E. Eisner Trust Disparity correction for location-aware distributed sporting events
US7934983B1 (en) * 2009-11-24 2011-05-03 Seth Eisner Location-aware distributed sporting events
US20130232198A1 (en) * 2009-12-21 2013-09-05 Arbitron Inc. System and Method for Peer-to-Peer Distribution of Media Exposure Data
US20110153391A1 (en) * 2009-12-21 2011-06-23 Michael Tenbrock Peer-to-peer privacy panel for audience measurement
PL392068A1 (en) * 2010-08-05 2012-02-13 Zuza Pictures Spółka Z Ograniczoną Odpowiedzialnością Portable video-telecommunication device, method for data transmission, especially audio/video data and their application
US9215486B2 (en) 2010-08-13 2015-12-15 Simon Fraser University System and method for multiplexing of variable bit-rate video streams in mobile video systems
US9444876B2 (en) 2010-11-08 2016-09-13 Microsoft Technology Licensing, Llc Content distribution system
US8806049B2 (en) * 2011-02-15 2014-08-12 Peerialism AB P2P-engine
US8892681B2 (en) 2011-03-03 2014-11-18 At&T Intellectual Property I, L.P. Peer to peer metadata distribution
US9152373B2 (en) 2011-04-12 2015-10-06 Apple Inc. Gesture visualization and sharing between electronic devices and remote displays
US20130019154A1 (en) * 2011-07-11 2013-01-17 Collegenet, Inc. Systems and methods for collecting multimedia form responses
US9137086B1 (en) 2011-08-25 2015-09-15 Google Inc. Social media session access
US8898327B2 (en) 2011-10-05 2014-11-25 Peerialism AB Method and device for arranging peers in a live streaming P2P network
US8799498B2 (en) 2011-11-18 2014-08-05 Peerialism AB Method and device for peer arrangement in streaming-constrained P2P overlay networks
US8713194B2 (en) 2011-11-18 2014-04-29 Peerialism AB Method and device for peer arrangement in single substream upload P2P overlay networks
US20130145405A1 (en) * 2011-12-06 2013-06-06 Fredrik Carpio Using different physical interface to request retransmission of packet lost on unidirectional interface
US8788680B1 (en) 2012-01-30 2014-07-22 Google Inc. Virtual collaboration session access
US8831999B2 (en) 2012-02-23 2014-09-09 Collegenet, Inc. Asynchronous video interview system
US10148716B1 (en) 2012-04-09 2018-12-04 Conviva Inc. Dynamic generation of video manifest files
US8719345B2 (en) * 2012-05-11 2014-05-06 Oracle International Corporation Database replication using collaborative data transfers
US9246965B1 (en) 2012-09-05 2016-01-26 Conviva Inc. Source assignment based on network partitioning
US10182096B1 (en) 2012-09-05 2019-01-15 Conviva Inc. Virtual resource locator
US9444712B2 (en) 2012-11-21 2016-09-13 Cisco Technology, Inc. Bandwidth on-demand services in multiple layer networks
US20140181049A1 (en) * 2012-12-20 2014-06-26 Idea Den Archival systems and methods for live performances
US20140181261A1 (en) * 2012-12-26 2014-06-26 Alcatel-Lucent Usa Inc. Method and apparatus for providing efficient transmission of streaming video through a complex ip network
GB2509323B (en) 2012-12-28 2015-01-07 Glide Talk Ltd Reduced latency server-mediated audio-video communication
US9614907B2 (en) * 2013-01-04 2017-04-04 Bittorrent, Inc. Expediting content retrieval using peer-to-peer networks
GB2524958A (en) * 2014-04-03 2015-10-14 Orbital Multi Media Holdings Corp Data flow control method
US10178043B1 (en) 2014-12-08 2019-01-08 Conviva Inc. Dynamic bitrate range selection in the cloud for optimized video streaming
US10305955B1 (en) 2014-12-08 2019-05-28 Conviva Inc. Streaming decision in the cloud
CN104811827A (en) * 2015-04-20 2015-07-29 中兴通讯股份有限公司 Message transmitting method, and code stream processing method and device
WO2017015082A1 (en) * 2015-07-17 2017-01-26 Tribune Broadcasting Company, Llc Media production system with social media content integration feature
US9774914B2 (en) * 2015-08-25 2017-09-26 Wowza Media Systems, LLC Scheduling video content from multiple sources for presentation via a streaming video channel
US9787735B2 (en) 2015-08-27 2017-10-10 Drop In, Inc. Methods, devices, and systems for live video streaming from a remote location based on a received request utilizing keep alive messages
CN105656986A (en) * 2015-11-26 2016-06-08 乐视云计算有限公司 Live video play method, device and system
CN105681387A (en) * 2015-11-26 2016-06-15 乐视云计算有限公司 Method, device and system for uploading live video
US10405023B2 (en) * 2016-08-16 2019-09-03 At&T Intellectual Property I, L.P. Method and apparatus for providing video content using collaborative end points
US9641566B1 (en) 2016-09-20 2017-05-02 Opentest, Inc. Methods and systems for instantaneous asynchronous media sharing
CN106453289B (en) * 2016-09-29 2020-05-22 广州市千钧网络科技有限公司 Data transmission method and device for live video
US10742998B2 (en) 2018-02-20 2020-08-11 Netgear, Inc. Transmission rate control of data communications in a wireless camera system
US11102492B2 (en) 2018-02-20 2021-08-24 Arlo Technologies, Inc. Multi-sensor motion detection
US11064208B2 (en) * 2018-02-20 2021-07-13 Arlo Technologies, Inc. Transcoding in security camera applications
US10805613B2 (en) 2018-02-20 2020-10-13 Netgear, Inc. Systems and methods for optimization and testing of wireless devices
US11558626B2 (en) 2018-02-20 2023-01-17 Netgear, Inc. Battery efficient wireless network connection and registration for a low-power device
US11756390B2 (en) 2018-02-20 2023-09-12 Arlo Technologies, Inc. Notification priority sequencing for video security
US11076161B2 (en) 2018-02-20 2021-07-27 Arlo Technologies, Inc. Notification priority sequencing for video security
US11272189B2 (en) 2018-02-20 2022-03-08 Netgear, Inc. Adaptive encoding in security camera applications
CN110392071B (en) * 2018-04-18 2021-06-22 网宿科技股份有限公司 Uploading and downloading methods of streaming media resources, distribution system and streaming media server
US10638366B2 (en) * 2018-06-18 2020-04-28 Verizon Patent And Licensing Inc. Flow level pacing by controlling socket send buffer size
CN112153394B (en) * 2020-07-01 2022-06-17 广州点云科技有限公司 Cloud game live broadcast interaction method and system
CN113497950B (en) * 2021-06-15 2023-03-17 杭州华橙软件技术有限公司 Method, device and storage medium for live streaming
CN114697695B (en) * 2022-03-28 2024-03-08 阿里巴巴(中国)有限公司 Live broadcast data pushing method, device, equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020162109A1 (en) * 2001-04-26 2002-10-31 Koninklijke Philips Electronics N.V. Distributed storage on a P2P network architecture
US20040030651A1 (en) * 2002-08-08 2004-02-12 Jin-Sung Kim Method and apparatus for distributing content through on-line network
JP2004213508A (en) * 2003-01-08 2004-07-29 Hitachi Software Eng Co Ltd File download method and system using peer to peer technique
KR100562427B1 (en) * 2005-10-04 2006-03-17 주식회사 솔루션박스 Apparatus for receiving contents and recording medium storing program therefor

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040261127A1 (en) 1991-11-25 2004-12-23 Actv, Inc. Digital interactive system for providing full interactivity with programming events
US7849393B1 (en) 1992-12-09 2010-12-07 Discovery Communications, Inc. Electronic book connection to world watch live
US5930473A (en) 1993-06-24 1999-07-27 Teng; Peter Video application server for mediating live video services
US6745237B1 (en) 1998-01-15 2004-06-01 Mci Communications Corporation Method and apparatus for managing delivery of multimedia content in a communications system
US6564380B1 (en) 1999-01-26 2003-05-13 Pixelworld Networks, Inc. System and method for sending live video on the internet
WO2001017246A1 (en) 1999-08-27 2001-03-08 Nitetime Networks, Inc. Method and apparatus to link a network of live cameras
US20030132939A1 (en) 2000-03-01 2003-07-17 Levin Moshe Interactive navigation through real-time live video space created in a given remote geographic location
US6941575B2 (en) 2001-06-26 2005-09-06 Digeo, Inc. Webcam-based interface for initiating two-way video communication and providing access to cached video
US20030140093A1 (en) 2002-01-23 2003-07-24 Factor Cory L. Method and apparatus for providing content over a distributed network
US20030163339A1 (en) 2002-02-25 2003-08-28 John Elliot Process of accessing live activities and events through internet
US20050055718A1 (en) 2003-09-05 2005-03-10 Stone Christopher J. Peer-to-peer architecture for sharing video on demand content
US20060156330A1 (en) * 2005-01-07 2006-07-13 Fu-Sheng Chiu Intelligent interactive multimedia
US8477658B2 (en) * 2006-04-25 2013-07-02 The Hong Kong University Of Science And Technology Intelligent peer-to-peer media streaming
US7733808B2 (en) 2006-11-10 2010-06-08 Microsoft Corporation Peer-to-peer aided live video sharing system


Also Published As

Publication number Publication date
US20100250678A1 (en) 2010-09-30
US8116235B2 (en) 2012-02-14
US20080112315A1 (en) 2008-05-15
US7733808B2 (en) 2010-06-08

Similar Documents

Publication Publication Date Title
US8116235B2 (en) Peer-to-peer aided live video sharing system
JP6612355B2 (en) Client node, virtual broadcast server and method
CN109889543B (en) Video transmission method, root node, child node, P2P server and system
US7742485B2 (en) Distributed system for delivery of information via a digital network
US8307024B2 (en) Assisted peer-to-peer media streaming
CN111479121B (en) Live broadcasting method and system based on streaming media server
CN102804727B (en) For processing the method and apparatus damaging path in peer-to-peer network
US20070160048A1 (en) Method for providing data and data transmission system
US20110106965A1 (en) Apparatus and method for peer-to-peer streaming and method of configuring peer-to-peer streaming system
JP2010504652A (en) Method and system for managing a video network
TWI530147B (en) Apparatus and method for level-based self-adjusting peer-to-peer media streaming
US20120239727A1 (en) Multimedia service network and method for providing the same
JP4170942B2 (en) Network device and data transfer method for efficient data transmission / reception in mobile ad hoc network environment
Gusev et al. Real-time streaming data delivery over named data networking
Nahrstedt et al. Next generation session management for 3D teleimmersive interactive environments
WO2011116060A1 (en) Multimedia service network and method for providing the same
KR102095563B1 (en) Apparatus and method for ethernet network using mac address
Piro et al. Providing crowd-sourced and real-time media services through an NDN-based platform
Ha et al. Topology and architecture design for peer to peer video live streaming system on mobile broadcasting social media
Schuett et al. A distributed recording system for high quality MBone archives
Guo et al. P cast: P2p patching scheme for vod service
CN108307149B (en) Video proxy system and monitoring method
Dybsjord DTSS-Signaling for Media Streaming and Delay Tolerant Network
JP2021034898A (en) Processing device, method and program
Nazeeruddin et al. An efficient and robust service discovery protocol for dynamic MANETs

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07864222

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07864222

Country of ref document: EP

Kind code of ref document: A1