WO2001022688A9 - Method and system for providing streaming media services - Google Patents

Method and system for providing streaming media services

Info

Publication number
WO2001022688A9
WO2001022688A9 (PCT/US2000/025899)
Authority
WO
WIPO (PCT)
Prior art keywords
media
server
network
depository
shared
Prior art date
Application number
PCT/US2000/025899
Other languages
French (fr)
Other versions
WO2001022688A1 (en)
Inventor
Horng-Juing Lee
Joe M-J Lin
Original Assignee
Streaming21 Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Streaming21 Inc filed Critical Streaming21 Inc
Priority to AU40225/01A priority Critical patent/AU4022501A/en
Publication of WO2001022688A1 publication Critical patent/WO2001022688A1/en
Publication of WO2001022688A9 publication Critical patent/WO2001022688A9/en

Links

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/16 Analogue secrecy systems; Analogue subscription systems
    • H04N 7/173 Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
    • H04N 7/17309 Transmission or handling of upstream communications
    • H04N 7/17336 Handling of requests in head-ends
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 65/00 Network arrangements, protocols or services for supporting real-time applications in data packet communication
    • H04L 65/60 Network streaming of media packets
    • H04L 65/61 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
    • H04L 65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/10 Protocols in which an application is distributed across nodes in the network
    • H04L 67/1001 Protocols in which an application is distributed across nodes in the network for accessing one among a plurality of replicated servers
    • H04L 67/1004 Server selection for load balancing
    • H04L 67/1008 Server selection for load balancing based on parameters of servers, e.g. available memory or workload
    • H04L 67/10015 Access to distributed or replicated servers, e.g. using brokers
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/21 Server components or server architectures
    • H04N 21/222 Secondary servers, e.g. proxy server, cable television head-end
    • H04N 21/2225 Local VOD servers
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23103 Content storage operation using load balancing strategies, e.g. by placing or distributing content on different disks, different memories or different servers
    • H04N 21/23106 Content storage operation involving caching operations
    • H04N 21/24 Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth, upstream requests
    • H04N 21/2402 Monitoring of the downstream path of the transmission network, e.g. bandwidth available
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/433 Content storage operation, e.g. storage operation in response to a pause request, caching operations
    • H04N 21/4331 Caching operations, e.g. of an advertisement for later insertion during playback
    • H04N 21/47 End-user applications
    • H04N 21/478 Supplemental services, e.g. displaying phone caller identification, shopping application
    • H04N 21/4788 Supplemental services communicating with other users, e.g. chatting
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/61 Network physical structure; Signal processing
    • H04N 21/6106 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network
    • H04N 21/6125 Network physical structure; Signal processing specially adapted to the downstream path of the transmission network involving transmission via Internet
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/632 Control signaling related to video distribution using a connection between clients on a wide area network, e.g. setting up a peer-to-peer communication via Internet for retrieving video segments from the hard-disk of other client devices
    • H04N 21/64 Addressing
    • H04N 21/6402 Address allocation for clients
    • H04N 21/643 Communication protocols

Definitions

  • the present invention relates generally to networked multimedia systems. More particularly, the invention relates to a method and system architecture for providing global streaming media service over a data network using multiple media servers to ensure that streaming signals are delivered with a high quality of service (QoS).
  • the Internet is a rapidly growing communication network of interconnected computers around the world. Together, these millions of connected computers form a vast repository of hypermedia information that is readily accessible by users through any of the connected computers, from anywhere, at any time. As an increasing number of users connect to the Internet and browse for information, they create tremendous demand both for more content to be made available and for methods to deliver it over the Internet. Currently, the most commonly available information deliverable on the Internet includes text, images and graphics, videos and audio clips.
  • Continuous information such as continuous videos, audio clips, audiovisual or other multimedia works (referred to below collectively as "media") may be one of the frequently requested network resources. It is therefore not uncommon to experience thousands of access requests simultaneously to a piece of popular video, audio or audiovisual program.
  • Most of the existing streaming technologies are based on a single media server architecture that ultimately suffers from resource/performance limitation.
  • the single media server architecture may support a few hundred streaming sessions at one time, but can hardly scale to thousands of simultaneous streaming sessions.
  • the single server architecture also suffers a pitfall on the reliability front: it represents a single point of failure when the computer system crashes or an unexpected power disruption occurs. Once the single server is down, the entire media streaming service is discontinued, and with it the revenue and profits of the service provider.
  • a reliable, fault-tolerant, 24-hour streaming media service is therefore of utmost importance to a streaming media service provider such as an Internet Service Provider (ISP) or an Internet Content Provider (ICP).
  • the disclosure hereof discloses a global media delivery system based on a hierarchical architecture of multiple servers to support a large number of streaming media sessions at any given time over data networks.
  • the data networks may include the Internet, the Intranet or a network of other private networks.
  • a media delivery system employing the present invention can advantageously provide streaming media over the data networks to a virtually unlimited number of devices coupled to the data networks at any given time without compromising the quality of service.
  • the global media delivery system comprises a plurality of video servers that may be located sparsely or remotely with respect to each other.
  • Each of the video servers may provide different video titles that are to be shared among the video servers based on requests received, regardless of where the requests come from.
  • the distribution of the video titles is a function of the request frequency that controls appropriate caching of the most frequently accessed video titles.
  • a video server receiving a high number of requests for the same video title may cache a portion of, or the entire, video title locally, so that network traffic is reduced while the quality of service is ensured.
  • a server that provides a streaming service to a client device is selected according to a load balance manager.
  • the load balance manager determines the most appropriate server to provide a requested media delivery. With the help of the load balance manager, the video servers in the global media delivery system will be neither overloaded nor idle. Further, in case one of the servers is disrupted, the load balance manager may immediately determine the next available server to continue the delivery service, ensuring a reliable, fault-tolerant media service provided by the global media delivery system.
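The selection and failover behaviour described above can be sketched roughly as follows. This is an illustrative approximation, not the patented implementation; the class and function names (`MediaServer`, `select_server`) and the load metric are assumptions.

```python
# Hypothetical sketch of a load balance manager: among the servers that own or
# cache the requested title, pick the one with the lowest current load; if the
# chosen server goes down, re-select the next available one.
from dataclasses import dataclass, field

@dataclass
class MediaServer:
    name: str
    active_sessions: int
    capacity: int
    titles: set = field(default_factory=set)
    up: bool = True

    def load(self) -> float:
        # Fraction of streaming capacity currently in use.
        return self.active_sessions / self.capacity

def select_server(servers, title):
    """Return the least-loaded live server that holds `title`, or None."""
    candidates = [s for s in servers
                  if s.up and title in s.titles and s.active_sessions < s.capacity]
    if not candidates:
        return None
    return min(candidates, key=MediaServer.load)

servers = [
    MediaServer("city-a", active_sessions=90, capacity=100, titles={"concert"}),
    MediaServer("city-b", active_sessions=20, capacity=100, titles={"concert"}),
]
assert select_server(servers, "concert").name == "city-b"

# Fault tolerance: when the chosen server is disrupted, the manager
# immediately determines the next available server.
servers[1].up = False
assert select_server(servers, "concert").name == "city-a"
```

Under this policy no server with spare capacity sits idle while another is saturated, which is the load-balancing property the text claims.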
  • each server is connected to a local media depository, with some of the servers additionally connected to a shared media depository.
  • a server can supply continuous media to the network from any of the media depositories to which it is connected. Service providers are assigned to servers within the system based upon their requirements as well as the relative abilities of the servers.
  • a service provider's request to use the system to stream media to users will include information such as the number of streams the provider wants, the amount of storage required, and any fault tolerance requirements. Using these parameters, the system assigns the service provider a set of servers from which to stream content to a user of the provider's service, the servers being selected based upon their relative abilities to meet the requirements. Once a service provider has been allocated to the servers, requests from a user are then routed based upon this allocation.
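One plausible reading of this allocation step is a greedy assignment over server capacities. The function below is a sketch under that assumption; the field names and the "at least two servers for fault tolerance" rule are illustrative, not taken from the patent.

```python
# Illustrative provider-to-server allocation: pick servers until the requested
# stream count and storage are covered; if fault tolerance is requested,
# assign at least two servers so a single failure does not stop the service.
def allocate_provider(servers, streams, storage_gb, fault_tolerant=False):
    chosen, need_streams, need_storage = [], streams, storage_gb
    # Prefer the servers with the most spare streaming capacity first.
    for s in sorted(servers, key=lambda s: s["free_streams"], reverse=True):
        if need_streams <= 0 and need_storage <= 0 and \
           (not fault_tolerant or len(chosen) >= 2):
            break
        chosen.append(s["name"])
        need_streams -= s["free_streams"]
        need_storage -= s["free_storage_gb"]
    if need_streams > 0 or need_storage > 0:
        return None  # the system cannot satisfy the request
    return chosen

servers = [
    {"name": "s1", "free_streams": 500, "free_storage_gb": 200},
    {"name": "s2", "free_streams": 300, "free_storage_gb": 400},
    {"name": "s3", "free_streams": 100, "free_storage_gb": 50},
]
# 600 streams / 250 GB needs two servers, which also satisfies fault tolerance.
assert allocate_provider(servers, 600, 250, fault_tolerant=True) == ["s1", "s2"]
```

User requests would then be routed only among the servers returned by this allocation, matching the last sentence above.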
  • Figure 1 shows the flow of streaming requests across a wide area network.
  • Figure 2A illustrates an exemplary configuration in which the present invention may be practiced.
  • Figure 2B illustrates another exemplary configuration in which the present invention may be practiced.
  • Figure 3A shows a block diagram of a global streaming gateway.
  • Figure 3B shows an exemplary functional block diagram of the global media delivery system.
  • Figure 4 shows a process flowchart of a gateway module that may be implemented as a server module to be installed in a server that is configured to function as a global streaming gateway.
  • Figure 5 shows a process flowchart in a selected video server.
  • Figure 6 shows a process flowchart in a second video server that supports the selected video server of Figure 5 when a selected video title is not in the local depository or cached.
  • Figure 7 presents an embodiment for global streaming with shared media repository storage in the media cluster.
  • Figure 8 presents an embodiment for global streaming with mixed shared and non-shared media repository storage.
  • Figure 9 shows a detail of Figure 8 to highlight the portions relevant to determining server capability.
  • Figure 10 is a flow chart showing an exemplary embodiment of a retailer allocation scheme.
  • Figure 11 shows an example of a retail allocation table construction.
  • Figure 12 is a media allocation table containing the information associated with a stored title.
  • Figure 13 shows the structure of a workload table.
  • Figure 14 is a block diagram showing a representative example logic device in which aspects of the present invention may be embodied.
  • the present invention presents a global media delivery system based on a hierarchical architecture of multiple servers to support a large number of streaming media sessions at any given time over data networks.
  • the data networks may include the Internet, the Intranet or a network of other private networks.
  • a media delivery system employing the present invention can advantageously provide streaming media over the data networks to a virtually unlimited number of devices coupled to the data networks at any given time without compromising the quality of service.
  • the global media delivery system comprises a plurality of video or other continuous media servers that may be located sparsely or remotely with respect to each other.
  • Each of the servers may provide different video titles that are to be shared among the servers based on requests received regardless where the requests are from.
  • the distribution of the continuous media titles is a function of the request frequency that controls appropriate caching of the most frequently accessed titles.
  • a server receiving a high number of requests for the same title may cache a portion of, or the entire, title locally so that network traffic is reduced while the quality of service is ensured.
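The request-frequency rule above can be sketched as a per-title counter that triggers local caching once a title proves popular. The threshold value and the class name are assumptions for illustration; the actual caching policy is described elsewhere in the patent.

```python
# Minimal sketch: record requests per title and cache a title locally once it
# has been requested often enough, so later requests avoid network traffic.
CACHE_THRESHOLD = 3  # illustrative; not a value from the patent

class RequestManager:
    def __init__(self):
        self.counts = {}
        self.cached = set()

    def record(self, title):
        """Record a request; returns True once the title is served from cache."""
        self.counts[title] = self.counts.get(title, 0) + 1
        if self.counts[title] >= CACHE_THRESHOLD:
            self.cached.add(title)  # a portion or all of the title is cached
        return title in self.cached

mgr = RequestManager()
assert mgr.record("news") is False      # 1st request: served remotely
assert mgr.record("news") is False      # 2nd request: still remote
assert mgr.record("news") is True       # 3rd request: now cached locally
assert mgr.record("weather") is False   # unpopular titles stay remote
```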
  • a server that provides a streaming service to a client device is selected according to a load balance manager.
  • the load balance manager determines the most appropriate server to provide a requested media delivery. With the help of the load balance manager, none of the video servers in the global media delivery system will be either overloaded or idle. Further, in case one of the servers is disrupted, the load balance manager may immediately determine the next available server to continue the delivery service, ensuring a reliable, fault-tolerant media service provided by the global media delivery system.
  • the global streaming technology enables users to watch the best-quality streaming media via the Internet.
  • This technology integrates web technology, web proxy technology, and large-scale media streaming technology to serve millions of on-line users with streaming media capability no matter where they are.
  • the global streaming approach proposes a hierarchical architecture of multiple web servers, web proxy servers, and multiple media servers.
  • Figure 1 illustrates the proposed technology via the following scenario: a user living in City A would like to access continuous media, for example to view a recorded concert broadcast in City B hosted by an Internet content or service provider over the Internet. The user can check the program listing from the home page of the content provider by specifying the appropriate URL. After browsing the web site, the user finds the desired program and clicks the corresponding streaming media link. The request from the user's media playback station 10 will reach the web proxy server 23 in City A. The request will then be redirected to a special global streaming gateway 25 hosted in the same proxy server.
  • the request will then be redirected up one level of the web hierarchy at a time via the Internet 30 until it reaches the main web server 41 in City B.
  • the request is redirected to the global streaming gateway 45 hosted in the main web server 41.
  • the global streaming gateway 45 checks the status of collected data related to media server loads and geographical information about each media server that owns or caches the requested media. Based on the techniques described below, the global streaming gateway 45 will select the media server best suited to serve the user in City A. All the related information collected by the main global streaming gateway 45 will migrate to the downstream global streaming gateways in each web proxy server.
  • This information is then passed back to the web server 41 or proxy server 43 in City B, and then through the Internet 30 back to City A.
  • the web server 21 or proxy server 23 can then cache the streaming information about the response for later usage.
  • the selected concert will then be streamed from the selected media server to the user in City A at the media playback station 10.
  • on a subsequent request, the nearby web proxy server 23 will intercept the request and redirect it to its global streaming gateway 25 based on the streaming information cached during the previous request. Based on the local information gathered from nearby media servers, the global streaming gateway will select the most appropriate media server to serve the current streaming request. This scheme dramatically improves the performance of subsequent streaming accesses.
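The proxy-side behaviour described above amounts to caching the resolution result: the first request for a title travels up the hierarchy, and the answer (which media server to use) is cached so later requests are resolved locally. A hedged sketch, where `resolve_remotely` stands in for the upstream main gateway and all names are assumptions:

```python
# Sketch of a proxy-hosted gateway that caches streaming information from a
# previous request so subsequent requests never leave the local proxy.
def make_gateway(resolve_remotely):
    cache = {}
    stats = {"remote_lookups": 0}

    def handle(title):
        if title not in cache:                 # first request: go upstream
            stats["remote_lookups"] += 1
            cache[title] = resolve_remotely(title)
        return cache[title]                    # later requests: answered locally

    return handle, stats

# Stand-in for the main global streaming gateway in City B.
def main_gateway(title):
    return {"concert": "media-server-near-city-a"}.get(title, "default-server")

handle, stats = make_gateway(main_gateway)
assert handle("concert") == "media-server-near-city-a"
assert handle("concert") == "media-server-near-city-a"
assert stats["remote_lookups"] == 1   # the second request never left the proxy
```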
  • global streaming expands the streaming service to an unlimited number of users accessing any media content around the globe.
  • the technology enables large-scale media service, fault-tolerance, automatic migration of media content, and 24-hour continuous media streaming by exploiting the web concept to enable the streaming media delivery. Users use the same web interface to select the media content.
  • logic or digital systems and/or methods can include a wide variety of different components and different functions in a modular fashion.
  • Different embodiments of the present invention can include different combinations of elements and/or functions.
  • Different embodiments of the present invention can include actions or steps performed in a different order than described in any specific example herein.
  • Different embodiments of the present invention can include groupings of parts or components into larger parts or components different than described in any specific example herein. For purposes of clarity, the invention is described in terms of systems that include many different innovative components and innovative combinations of innovative components and known components.
  • the invention therefore, in specific aspects, provides streaming of continuous media such as video/audio signals that can be played on various types of video-capable terminal devices operating under any type of operating system, regardless of what type of player is pre-installed in the terminal devices.
  • the present invention involves methods and systems suitable for providing multimedia streaming over a communication data network including a cable network, a local area network, a network of other private networks and the Internet.
  • the present invention is presented largely in terms of procedures, steps, logic blocks, processing, and other symbolic representations that resemble data processing devices. These process descriptions and representations are the means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art.
  • the method along with the system to be described in detail below is a self-consistent sequence of processes or steps leading to a desired result. These steps or processes are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities may take the form of electrical signals capable of being stored, transferred, combined, compared, displayed and otherwise manipulated in a computer system or electronic computing devices.
  • Figure 2A illustrates an exemplary configuration in which the present invention may be practiced.
  • Global media delivery system comprises a plurality of servers of which only three 102, 108 and 110 are shown.
  • Each of the servers stores or caches some of the video files in a video repository of the global media delivery system.
  • video files or titles refer to any video footage, video films, and/or video/audio clips or other continuous media, typically in a compressed format such as MPEG or MP3. It should be noted, however, that the exact format of the video files does not affect the operation of the present invention. As will be noted and appreciated, the present invention applies to any format of video file.
  • data network 106 is a data network backbone, namely a larger transmission line.
  • a backbone is a line or set of lines that local area networks connect to for a wide area network connection or within a local area network to span distances efficiently (for example, between buildings).
  • a backbone is a set of paths that local or regional networks connect to for long-distance interconnection.
  • Coupled to data network A 106 are two other networks 112 and 114, typically the Internet, a local area network, or a phone network, through which terminal devices can receive video files.
  • the terminal devices may include, but not be limited to, multimedia computers (e.g. 116 and 119), networked television sets or other video/audio players (e.g.
  • the terminal devices are equipped with applications or capabilities to execute and display received video files.
  • one of the popular applications is an MPEG player provided in WINDOWS 98 from Microsoft.
  • the video file can be displayed on a display screen of the computer.
  • one of the terminal devices, e.g. 116, must send in a request that may comprise the title of the desired video.
  • the request is in the form of a URL that may include a subscriber identification if the video service allows only authorized access.
  • upon receiving the request, the server (e.g. 108) will first check whether the selected video is provided in its cache; meanwhile the request is recorded by a request manager.
  • the selected video will be provided as a streaming video to the terminal device if some or all of the video is in the cache.
  • otherwise, server 108 proceeds to send a request to another server 102 for the rest of the video, if some units of the video are in a cache memory of server 108, or for the entire video if no unit of the video is cached.
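The serving behaviour just described, streaming cached units locally and fetching only the missing remainder from another server, can be sketched as follows. This is an illustration of the described behaviour, not the progressive cache technique itself (which is detailed in the referenced application); unit granularity and names are assumptions.

```python
# Sketch: yield the units of a title in order, serving cached units locally
# and requesting only the missing ones from a peer server (e.g. server 102).
def serve_title(cache_units, total_units, fetch_from_peer):
    """`cache_units` maps unit index -> data for locally cached units;
    `fetch_from_peer(i)` retrieves a missing unit from another server."""
    for i in range(total_units):
        if i in cache_units:
            yield cache_units[i]          # served from the local cache
        else:
            yield fetch_from_peer(i)      # only the rest crosses the network

peer = lambda i: f"unit-{i}-from-peer"
cache = {0: "unit-0-cached", 1: "unit-1-cached"}
stream = list(serve_title(cache, 4, peer))
assert stream == ["unit-0-cached", "unit-1-cached",
                  "unit-2-from-peer", "unit-3-from-peer"]
```

If no unit is cached, every unit comes from the peer, which matches the "entire video" case in the text.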
  • Figure 2B shows another exemplary configuration in which the present invention may be practiced.
  • all the video servers 120 in the global media delivery system as well as the terminal devices (e.g. 122) are coupled to a network 126 (e.g. the Internet).
  • when terminal device 122 sends in a request for a specific video title, the request is routed to one of the servers (e.g. 120-1).
  • Server 120-1, referred to as a global streaming gateway, comprises a request handler that processes the received request.
  • Further, server 120-1 comprises a load balancing manager that monitors the system performance of all the video servers in the global media delivery system. With the load balancing manager, an appropriate server is identified to provide the video service to terminal device 122.
  • the detailed description of the request processing and the load balancing manager is provided in the exemplary embodiments described below.
  • Figure 3A shows a block diagram of global streaming gateway (i.e. a server).
  • the gateway is loaded with a compiled and linked version of an embodiment of the present invention which, when executed by the processor, will perform at least the following: receiving a request for a video title from a terminal device; retrieving system performance information on each of the video servers in a global media delivery system; determining an appropriate video server in the global media delivery system with respect to the retrieved system performance information; and sending a response to the request, the response comprising an address identifier identifying the appropriate video server to serve the video title selected by a user.
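The four gateway steps just listed (receive a request, retrieve performance information, determine a server, answer with an address identifier) can be sketched roughly as below. The field names, the load metric, and the URL shape are assumptions for illustration only.

```python
# Sketch of the gateway's request handling: given per-server performance
# information, pick the least-loaded server and answer with its address.
def handle_request(request, performance_info):
    """`performance_info` maps server address -> current load in [0, 1]."""
    title = request["title"]
    # Determine the "appropriate" server from the retrieved performance info.
    best = min(performance_info, key=performance_info.get)
    # The response comprises an address identifier for that server.
    return {"title": title, "server": best,
            "url": f"stream://{best}/{title}"}

perf = {"10.0.0.1": 0.85, "10.0.0.2": 0.10, "10.0.0.3": 0.55}
resp = handle_request({"title": "concert"}, perf)
assert resp["server"] == "10.0.0.2"
assert resp["url"] == "stream://10.0.0.2/concert"
```

The commercial-information variant mentioned next would simply add an extra field to this response before it is returned.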
  • the compiled and linked version, when executed, additionally performs: inserting commercial information into the response so that the commercial information becomes available to the user when the video title is played.
  • Figure 3B shows an exemplary functional block diagram of the global media delivery system.
  • the video servers e.g. server 2
  • the video server that provides the video service is selected based on the system performance information on each of video servers in a global media delivery system. In other words, no single video server would be overloaded so as to affect the quality of services.
  • a progressive cache technique is used to cache the video title.
  • the detailed description of the progressive cache technique is provided in U.S. patent application / by Horng-Juing Lee, entitled "METHOD AND APPARATUS FOR CACHING FOR STREAMING DATA".
  • Figure 4 shows a process flowchart of the gateway module that may be implemented as a server module to be installed in a server that is configured to function as a global streaming gateway.
  • Each of the processes in the flowchart includes sub-processes that are described below.
  • Figure 5 shows a process flowchart in the selected video server.
  • the process flowchart may be implemented as a proxy server module to be installed in a server to function as one of the video servers in the global media delivery system.
  • Figure 6 shows a process flowchart in a second video server that supports the selected video server of Figure 5 when a selected video title is not in the local depository or cached.
  • the global media delivery system as described in accordance with one aspect of the present invention is robust, operationally efficient and cost effective.
  • the global streaming mechanism provides the best use of all proxy video servers and permits seamless delivery of streaming video with highest quality of services possible.
  • the present invention may be used in connection with presentations of any type, including sales presentations and product/service promotion, which provides the video service providers additional revenue resources. While the embodiment discussed herein may appear to include some limitations as to the presentation of the cache units and the way of managing the units, in terms of the format and arrangement, the invention has applicability well beyond such embodiment, which can be appreciated by those skilled in the art.
  • Figures 7 and 8 show some of the system components in two different embodiments.
  • Figure 7 presents the detailed layout of the architecture for global streaming with shared media repository storage in the media cluster.
  • the embodiment of Figure 8 has a more complicated storage arrangement with mixed shared and non-shared media repository storage.
  • the elements are numbered similarly in this pair of figures and will initially be described with respect to Figure 7.
  • Request handler 720 is shown connected to media server 701, remote manager 703, media recording station 705, and media playback station 707, and uses one or more web servers and web proxy servers as a front-end handler for all kinds of client streaming requests.
  • the embodiment of Figure 7 shows a request handler with a pair of web servers 721 and 723.
  • a client request, for example in URL format, for operations such as playback, recording or administration enters the system through a request handler interface.
  • these components are considered as common gateway interface (CGI) components.
  • these components have a number of tasks. These can include authentication, admission control, load balancing, performance monitoring, the global streaming gateway function, and advertisement insertion.
  • Admission control 739 is responsible for managing the concurrent number of streaming sessions and the total streaming bandwidth. These limits can arise from machine capability and/or product license control. The purpose of restricting the session count is to ensure the playback quality for all admitted requests.
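The two caps described for admission control 739 can be sketched as follows. The class name, limit values, and bandwidth units are illustrative assumptions; the patent does not specify a concrete interface.

```python
# Minimal sketch of the admission control policy described above: cap both the
# number of concurrent sessions and the total streaming bandwidth, whether the
# limits come from machine capability or product license control.

class AdmissionControl:
    def __init__(self, max_sessions: int, max_bandwidth_mbps: float):
        self.max_sessions = max_sessions
        self.max_bandwidth = max_bandwidth_mbps
        self.sessions = 0
        self.bandwidth = 0.0

    def admit(self, stream_mbps: float) -> bool:
        """Accept a session only if neither limit would be exceeded,
        preserving playback quality for already-admitted requests."""
        if self.sessions + 1 > self.max_sessions:
            return False
        if self.bandwidth + stream_mbps > self.max_bandwidth:
            return False
        self.sessions += 1
        self.bandwidth += stream_mbps
        return True

ac = AdmissionControl(max_sessions=2, max_bandwidth_mbps=10.0)
results = [ac.admit(4.0), ac.admit(4.0), ac.admit(4.0)]  # third hits the session cap
```

Rejecting the third request up front, rather than degrading all streams, is the quality-preserving behavior the text describes.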
  • Load balancer 735 is responsible for determining which server the incoming request should contact.
  • the component interacts with performance monitor 733 to get the media server workload information 743. Together with the client and video clip information, the component evens out the amount of workload among all working servers within the media server cluster 740.
  • Performance monitor 733 handles inquiries from the remote management block 703 as well as monitoring the performance and operational status of the media server 701.
  • Global streaming gateway 730 is responsible for receiving the request and replying to the request with an HTML page. This component will interact with the rest of the components to get the required information and send it back to the client.
  • Ad inserter 737 is responsible for generating an HTML template with the advertisement postings. This component interacts with the user database 741 to get user profile (if possible) and uses it to determine the appropriate ad template from ad template database 747 for better targeting to the audience.
  • Media server cluster 740 comprises several media servers 741-i. Each media server provides its own streaming service. The media servers need not be capability-equivalent. The goal of media server cluster 740 is to ensure smooth and uninterrupted media playback/recording.
  • a two-level media depository architecture is used: local media depositories 743-i and a shared media depository 745. Each local media depository 743-i is attached to a single media server 741-i. Therefore, only that particular media server can access its contents directly. The concurrent number of streaming sessions on clips stored on a particular server then is limited by that server's capability.
  • every media server within the cluster can access the contents stored on the shared media depository. Hence, the aggregated server capability can support significantly more streaming sessions on clips stored on the shared media depository.
  • Figure 8 differs from Figure 7 in that not all of the media servers 841-i are connected to the shared media depository 845.
  • media servers 841-1 and 841-2 are connected only to their respective local media depositories 843-1 and 843-2, while media servers 841-3 and 841-4 are connected to both their respective local media depositories 843-3 and 843-4 and the shared media depository 845.
  • each server will have its own local media depository, with some servers further connected to one or more shared media depositories which are in turn each connected to multiple servers.
  • although the servers and the depositories are shown together as part of the media cluster 740 in Figures 7 and 8, they need not have any actual physical proximity, but may be separated from each other. Although they form a single block conceptually in the present invention, generally they may be distributed over an extended physical region as described above with respect to Figures 1, 2A, and 2B.
  • the architecture of media cluster 740 differs from what is found in the prior art as described in the Background section above.
  • the approach using multiple commodity-oriented server platforms would completely do away with the shared media depository 745 and rely solely upon the local media depositories. In this arrangement, only those servers which contain the particular continuous media requested in their own local media depository can supply it to the user. In order to ensure a high quality of service, this requires a large amount of duplication and redundancy among the local depositories, as the same title must be stored in many different locations, often with every local depository having a complete collection of titles.
  • a client is a component asking for service.
  • the service can be playback, recording, upload of media clips, download of media clips, or query of operation status.
  • a client can be another media server system, a media proxy server, a remote management console, a media recording station or a media playback station.
  • it is employed as a general term to cover both the service or content provider as well as the user requesting content from the provider.
  • Case 1 Playback client requests to play back pre-recorded video. 1. Client sends out the request to play back movie A by clicking on a URL link.
  • Request handler receives the request.
  • Request handler dispatches the request to global streaming gateway. 4. Request handler sends the request to the authentication component.
  • Authentication component verifies the client identity by consulting with user database and making sure the client has been granted the right to perform the request.
  • the global streaming gateway passes the request to the admission control to make sure the new request will not interfere with existing service and also meet the product license agreement.
  • Admission control checks the existence of the movie A. If the content is not cached in the global streaming gateway, the request is passed to a nearby request handler for further processing, and Steps 2 - 6 are repeated. 8. When the admission control component approves the request, the global streaming gateway passes the request to the load balancer component.
  • the load balancer component consults with the performance monitor component to understand the current server workload. Then, based on the load balance scheme, it returns the address of the media server ready to serve the request. 10. The global streaming gateway then passes the user profile information to the ad inserter component.
  • the global streaming gateway combines the server address, video clip information, and ad template to create a response HTML page to the client.
  • the response HTML page contains an object tag indicating the player component and its associated parameters.
  • the HTML page from the global streaming gateway will automatically cause the Web browser to show the page content, which includes the invocation of the media player when the Web browser sees the object tag.
  • the media player then communicates with the assigned media server to start the media streaming service. Any interactive playback control now is between the client and the assigned media server.
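The response page at the end of Case 1 can be sketched as a small generator: it combines the assigned server address, the clip information, and an ad template into an HTML page containing an object tag. The tag attributes, parameter names, and example values below are purely illustrative assumptions, not the patent's actual page format.

```python
# Illustrative sketch of building the Case 1 response HTML page: an ad template
# plus an object tag naming the player and pointing it at the assigned media
# server. Attribute and parameter names are assumptions for illustration.

def build_response_page(server: str, clip: str, ad_template: str) -> str:
    """Combine server address, clip information, and ad template into one page."""
    player = (
        '<object id="player">'
        f'<param name="server" value="{server}"/>'
        f'<param name="clip" value="{clip}"/>'
        "</object>"
    )
    return f"<html><body>{ad_template}{player}</body></html>"

page = build_response_page("vs2.example.net", "movieA.mpg", "<div>ad</div>")
```

When a browser renders such a page, the object tag is what triggers the media player, which then contacts the named server directly, as described above.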
  • Case 2 Recording client requests to store pre-recorded media into media server.
  • Client sends out the request to record movie A by using client media recorder.
  • Request handler receives the request.
  • Request handler dispatches the request to global streaming gateway.
  • the global streaming gateway sends the request to the authentication component.
  • Authentication component verifies the client identity by consulting with the user database and makes sure the client has been granted the right to perform the request.
  • the global streaming gateway passes the request to the admission control to make sure the new request will not interfere with existing service and meet the product license agreement as well.
  • the global streaming gateway passes the request to the load balancer component.
  • the load balancer component consults with the performance monitor component to understand the current server workload. Then based on the load balance scheme, it returns the address of the media server ready to serve the request.
  • the global streaming gateway returns an HTML page with the assigned media server information.
  • Case 3 Management client requests to observe a server's status. 1. Client sends out the request to observe a server's status.
  • Request handler receives the request.
  • Request handler dispatches the request to global streaming gateway.
  • Global streaming gateway sends the request to the authentication component.
  • Authentication component verifies the client identity by consulting with the user database and makes sure the client has been granted the right to perform the request.
  • the global streaming gateway consults with the performance monitor component to understand the current server workload.
  • the performance monitor interacts with media servers listed on the configuration database and retrieves the performance data and passes it back to the global streaming gateway.
  • the global streaming gateway packages the performance data as an HTML page and returns it to the client.
  • the remote management client then displays the performance data in a graphic user interface (GUI) representation.
  • the following discussion describes how to determine a media server's streaming capability, allocate a new subscriber's media data to a particular media server, implement a load balancer component, and implement an admission control component.
  • the amount of storage space in a local or shared media depository is often referred to as disk space, although the data could clearly be stored on an alternate medium.
  • the continuous media is often referred to as video
  • the service or content provider is referred to as a retailer.
  • the system could be provided by a telecommunications company, which would then provide this system for retailers to store content for supplying to users.
  • the discussion begins with an approach for determination of a media server's streaming capability. Because maintaining streaming quality is an important requirement in any media streaming service, the system needs to control the number of streaming sessions to ensure every session is running at high quality. Therefore, determining a server's streaming capacity is considered first. There are several configuration factors affecting the capability of server machines, including CPU power, main memory size (M), network bandwidth (NB), and storage access bandwidth for the local and shared depositories (SBDL, SBDS). In the determination method below, CPU power is excluded from the calculations since it is generally found that streaming service is an input/output-intensive application and the CPU power is much less significant than the other factors; if CPU power is a limiting factor in a particular application, it may be readily included with the other limiting factors.
  • media data retrieval cycle time (p). Since media data is too large to fit into the main memory space of a processor in one access, it is reasonable to retrieve media data one block at a time from the media depository and send it to the client. The retrieve-and-send pattern repeats until the whole media data is sent.
  • the media data retrieval cycle time corresponds to the time required for one such block to be played back or consumed by the user.
  • the system can use a simple two-buffer scheme: one buffer holds the media data being sent by the networking module while the other stores the next media data block from the disk access module. The two buffers exchange roles at the next cycle.
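The two-buffer retrieve-and-send cycle can be sketched as below. Disk and network I/O are simulated here with an iterator and a callback; the function shape is an assumption for illustration, while the buffer-swapping pattern is the one described in the text.

```python
# Sketch of the two-buffer (double-buffering) scheme described above: one
# buffer is being sent to the client while the other is filled with the next
# block from the depository; the buffers swap roles each cycle.

def stream(blocks, send):
    """Stream media one block per cycle using two alternating buffers."""
    buffers = [None, None]
    filling = 0                               # index of the buffer being filled
    it = iter(blocks)
    buffers[filling] = next(it, None)         # prefill the first buffer
    while buffers[filling] is not None:
        sending, filling = filling, 1 - filling
        buffers[filling] = next(it, None)     # fill one buffer ...
        send(buffers[sending])                # ... while the other is sent
        buffers[sending] = None

sent = []
stream([b"blk1", b"blk2", b"blk3"], sent.append)
```

In a real server the fill and send steps of each cycle would run concurrently, which is exactly what makes the cycle time p the unit of interest in the capacity formulas that follow.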
  • an exemplary method to determine each media server's streaming capability can be defined.
  • the streaming capability is abstracted into one single value SB_b, the concurrent number of streams for streaming video at bit rate b. This represents the upper limit of the streaming requests the server can accept without jeopardizing the streaming quality.
  • the discussion below uses a single media data retrieval cycle time, p, and a single bit rate, b, for streaming video. In the more general situation, these are independent in each processor and would have distinct values p_i and b_i. The following formulae all generalize to this case in a straightforward manner.
  • Figure 9 shows a detail of Figure 8 to highlight the portions relevant to this discussion.
  • a list of n total servers S_1-S_n 841-i are shown, each connected to its respective local media depository LMD_1-LMD_n 843-i.
  • a single shared media depository SMD 845 is shown, to which m of the servers are connected, where 0 ≤ m ≤ n.
  • These m servers are connected through block 847, which could be a fibre channel arbitration loop or some other switching mechanism.
  • the ability of the servers can then be quantified by the quantities SB_i, server i's streaming bandwidth in number of streams, and SS_i, server i's space capacity.
  • the first of these can be defined by
  • SB_i = min(SBN_i, SBD_i, SBM_i), where this expression ignores the CPU as a limiting factor. Similarly, if any of these factors is not found limiting, it can be ignored, or, conversely, other limiting factors can be added.
  • SBN_i is server i's aggregated network interface card bandwidth in terms of number of streams.
  • SBD_i is SBDL_i, or SBDL_i + SBDS^t/m if server i is connected to the shared media depository.
  • SBDL_i denotes the concurrent number of streams limited by the disk access module for the local depository, and similarly SBDS^t is the server's disk access bandwidth for the shared media depository at time t. This last term assumes that the access to the shared depository is evenly divided among the m servers. In a more general arrangement, block 847 in Figure 9 could divide SBDS^t among the m servers asymmetrically, with a fraction other than 1/m for each.
  • SBM_i is server i's memory bandwidth in terms of number of streams.
  • the expression for the streaming bandwidth can be described with reference to Figure 9.
  • consider server S_1 841-1. It can stream media from LMD_1 843-1 through 851-1 into server S_1 841-1, where it must pass through main memory MM 857-1 before passing on to the network along 855-1.
  • the bottlenecks in this process can be the limit on the number of streams into the server from the depository along 851-1, the number of streams out of the server along 855-1, or the number of streams the server is able to concurrently handle through its main memory MM: these three effects correspond to the three terms in the expression for SB_i.
  • if the server is connected to the shared depository, such as server S_n 841-n, then the number of streams from the shared media depository into the server along 853-n should also be included with those along 851-n.
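The streaming-bandwidth formula above reduces to a minimum over the limiting factors, with the shared-depository term divided by the m connected servers. A hedged numeric sketch (all figures are illustrative, and the even 1/m split is the simplifying assumption stated in the text):

```python
# Sketch of SB_i = min(SBN_i, SBD_i, SBM_i), where SBD_i is SBDL_i alone for a
# local-only server, or SBDL_i + SBDS^t/m for a server connected to the shared
# media depository. CPU power is ignored, as in the text. All arguments are in
# concurrent streams at the common bit rate b.

def streaming_capability(sbn, sbdl, sbm, sbds=0.0, m=1, shared=False):
    sbd = sbdl + (sbds / m if shared else 0.0)   # disk-access term
    return min(sbn, sbd, sbm)

# A local-only server vs. one that also draws on the shared depository:
local_only = streaming_capability(sbn=100, sbdl=40, sbm=120)
with_shared = streaming_capability(sbn=100, sbdl=40, sbm=120, sbds=120, m=3, shared=True)
```

With these illustrative numbers, local disk access caps the first server at 40 streams, while the shared-depository share raises the second server's disk-access term to 80, making it the new binding limit.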
  • similarly, server i's space capacity SS_i is the disk space SSL_i of its local media depository plus, if server i is connected to the shared media depository, an allocated portion SSS^t of the shared space; SSS^t has a range of possible values from zero to SSS, the total disk space of the shared media depository, at a time t.
  • each media server's configuration may be different.
  • two media servers may have storage space of (10GB, 50GB), storage access bandwidth of (40 megabytes per second, 80 megabytes per second), and networking bandwidth of (240 megabits per second, 360 megabits per second).
  • a request from a new retailer, or for additional capability from a current retailer, can include a number of requirements.
  • a request is defined as <RB_i, RS_i, RF_i>, where RB_i is retailer i's streaming requirement in terms of number of streams; RS_i is retailer i's space requirement in terms of, for example, disk space; and RF_i is retailer i's fault tolerance requirement in fault tolerance degrees.
  • the concept of fault tolerance can be defined in a number of ways.
  • the requests are sorted such that RF_i ≤ RF_{i+1}, 1 ≤ i ≤ k-1.
  • This particular scheme emphasizes the number of streams requested, RB, relative to the other two quantities. This order could be rearranged, for example using the fault tolerance as the first stage in this sieve if the system wished to stress this quantity, or deleting the storage space requirement comparison RS if space limits are not a concern.
  • step 1001 would be skipped and the process would go directly to step 1005. If there are multiple requests, the process goes through them one at a time in the order established in step 1001, beginning with R_1 in step 1003.
  • SB_j ≥ RB_i and either SSL_j ≥ RS_i or SSS^t ≥ RS_i.
  • a given server will, for a given request, consider storage on either its local or the shared media depository.
  • the media will be distributed over both, for example caching in its local media depository a portion of a title which is stored in full in the shared media depository, using an arrangement such as that described in U.S. patent application / , by Horng-Juing Lee, entitled "METHOD AND APPARATUS FOR CACHING FOR STREAMING DATA", filed on September 8, 2000, which was included by reference above.
  • Step 1007 determines whether the server candidate list is large enough. If the number of servers in the server candidate list for request R_i is less than (RF_i+1), then the system will deny the retailer request in block 1008 and proceed to the next request. Otherwise, there are enough servers for the request and the list is sorted in step 1009.
  • step 1009 the servers in the server candidate list are ordered such that
  • Step 1011 picks the first (RF_i+1) servers from the sorted server candidate list. This is the selected server list.
  • case 1 is used if its condition is met; otherwise case 2 is used if its condition is met; failing that, case 3 is used.
  • the resultant updated values are the values which will be used in the next pass through the allocation process.
  • in step 1015 the current value of i is checked to see if all the requests have been dealt with. If not, the process continues back to step 1005, incrementing i and repeating until all k requests are treated.
  • a retailer allocation table is then created in step 1017.
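The allocation flow of Figure 10 can be sketched end to end. This is a simplified illustration under stated assumptions: requests and servers are plain tuples, the request ordering key and the candidate ordering key (remaining stream capacity, descending) are assumptions since the patent's exact sort criteria are not fully recoverable here, and only the simple "local space suffices" case of the space test is modeled.

```python
# Hedged sketch of Figure 10: order the requests (step 1001/1003), build a
# candidate list of servers with enough streams and space (1005), deny if fewer
# than RF+1 candidates remain (1007/1008), otherwise sort the candidates (1009),
# pick the first RF+1 servers (1011), and update server capacities (1013).

def allocate(requests, servers):
    """requests: list of (rb, rs, rf); servers: dict name -> (sb, ss).
    Returns {request index: list of assigned server names, or None if denied}."""
    order = sorted(range(len(requests)), key=lambda i: requests[i][0], reverse=True)
    result = {}
    for i in order:
        rb, rs, rf = requests[i]
        candidates = [n for n, (sb, ss) in servers.items() if sb >= rb and ss >= rs]
        if len(candidates) < rf + 1:
            result[i] = None                         # deny the retailer request
            continue
        candidates.sort(key=lambda n: servers[n][0], reverse=True)
        chosen = candidates[: rf + 1]                # RF+1 servers for fault tolerance
        for n in chosen:                             # update remaining capacity
            sb, ss = servers[n]
            servers[n] = (sb - rb, ss - rs)
        result[i] = chosen
    return result

servers = {"s1": (100, 50), "s2": (80, 50), "s3": (30, 10)}
out = allocate([(60, 20, 1), (50, 20, 1)], servers)
```

With these illustrative numbers, the first request consumes s1 and s2; the second then finds no server with 50 free streams and is denied, mirroring blocks 1007/1008.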
  • Figure 11 shows one example of how such a retailer allocation table can be constructed. This will list a retailer by its identifier and present the information from the request: the total number of streams, the total storage (or disk) space, and the fault tolerance requirement. It will also give the associated information assigned in the allocation scheme, namely the locations assigned to the retailer in both the shared and the local media depositories.
  • Figure 12 is a media allocation table containing the information associated with a title stored in the media server cluster. Associated with a title will be its owner, given by the retailer's identifier, the locations within the local media depositories where it is stored, whether it is stored in the shared media depository, and a title license specifying how many copies of the title the retailer is entitled to use.
  • when a media file from the retailer is ready to be put into a server, it is to be copied into the shared media depository, if the retailer has been allocated shared space, or into the local media depositories if local media depository space has been allocated.
  • the file may have multiple local media repositories containing it to allow multiple servers to provide the title from their respective local media repositories.
  • the admission control function determines if a request for a particular title will be accepted. When a subscriber's request comes in, if accepting the request would exceed either the retailer's stream license, in terms of the number of streams allocated for the retailer, or the title's stream license, in terms of the number of streams allocated for the title, the request is rejected. If this is not the case, the admission control accepts the request and determines the server to serve the request.
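The double license check described above can be sketched as follows. The counter layout and license structures are illustrative assumptions; the checks themselves (per-retailer stream license, per-title stream license) are the ones named in the text.

```python
# Sketch of the admission check described above: reject a subscriber request if
# accepting it would exceed either the retailer's stream license or the title's
# stream license; otherwise accept and account for the new stream.

def admit_request(retailer, title, active, retailer_license, title_license):
    """active: dict tracking current stream counts per retailer and per title."""
    if active.get(retailer, 0) + 1 > retailer_license.get(retailer, 0):
        return False                      # retailer stream license exhausted
    key = (retailer, title)
    if active.get(key, 0) + 1 > title_license.get(key, 0):
        return False                      # title stream license exhausted
    active[retailer] = active.get(retailer, 0) + 1
    active[key] = active.get(key, 0) + 1
    return True

active = {}
ok1 = admit_request("r1", "movieA", active, {"r1": 2}, {("r1", "movieA"): 1})
ok2 = admit_request("r1", "movieA", active, {"r1": 2}, {("r1", "movieA"): 1})
```

Here the second request fails on the title license even though the retailer still has a stream available, showing why both checks are needed.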
  • Determining the server that will serve the request (for a read operation) is part of the load balancing function.
  • when a request comes in, it will be in the form <subscriber ID, media title>.
  • this determination process can be implemented as: if the title is in local media depositories (possibly on multiple servers) only, then assign the request to the server with the lowest SB_i value within the group of servers on which the title resides.
  • Figure 13 shows the structure of a workload table. This will list the retailer by its identifier and the title associated with the request. The server filling the request is then listed, along with whether the continuous media is being supplied from the identified server's local media depository or from the shared media depository.
  • this determination process can be implemented as: if (the subscriber's allocation is shared media depository) then
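The two selection branches sketched in the fragments above can be combined into one function. This is a hedged reconstruction: the shared-depository branch is only partially recoverable from the text, so treating every shared-connected server as a candidate and breaking ties by current stream count are assumptions for illustration.

```python
# Sketch of the read-path server selection: if the title lives only in local
# media depositories, pick the least-loaded server among those holding it; if
# the title is on the shared depository, any server connected to it (plus any
# local holders) qualifies. "Load" here is the current stream count.

def pick_server(title, local_holders, shared_titles, shared_servers, load):
    """local_holders: title -> set of servers with the title locally;
    shared_titles: set of titles stored on the shared media depository;
    shared_servers: set of servers connected to the shared depository."""
    if title in shared_titles:
        pool = shared_servers | local_holders.get(title, set())
    else:
        pool = local_holders.get(title, set())
    if not pool:
        return None                          # title not available anywhere
    return min(pool, key=lambda s: load[s])  # least-loaded eligible server

load = {"s1": 5, "s2": 2, "s3": 9}
server = pick_server("movieA", {"movieA": {"s1", "s3"}}, set(), {"s2"}, load)
```

Here movieA is local-only, so the lightly loaded s2 is ineligible and the request goes to s1, the less-loaded of the two holders; this is the behavior that keeps any single server from being overloaded.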
  • the architecture of the present invention provides a number of features which improve upon the prior art as described in the Background section. Three aspects of this architectural structure are described in some detail below: scalability and reliability, load balancing, and value-added E-commerce aspects.
  • the aspect of server scalability and reliability provides access transparency in locating a media server. From a client's point of view, there is no need to know where a particular piece of continuous media is really stored and which one of the working media servers should be contacted to provide it. By exploiting web server(s) as the front-end request handler, all client requests go to the web server first. A component within the server called "global streaming gateway" will, based on the request and the media server workloads, redirect the request to the appropriate working media server. From that point on, the streaming service is between the client and the assigned media server. The client need not know the exact media server location. The only thing the client needs to know is the request handler's URL link, nothing more.
  • This design provides easily scalable deployment.
  • a system can start from a single media server, adding more media servers as the streaming demand increases. Adding/removing media servers from a media server cluster will not affect the client's connection practices. All client requests still go to the front-end web site.
  • the described structure also provides scalability and availability for request handler(s). Since all client requests go through a web front end first, the availability of the request handlers should be assured. In addition, as the system expands, the number of requests increases accordingly.
  • the request handlers preferably scale up as well to match the increase in requests.
  • by using web server(s) as the front-end component, the design can take advantage of all kinds of web server scalability/reliability products, such as Microsoft's WLBS (Windows NT Load Balancing Service). This kind of product can detect a faulty web server within a web server cluster and automatically redistribute the requests to other working web servers. In the meantime, it is easy to add an extra web server into the system without interfering with the operation. This design ensures that the request handler can receive all client requests.
  • Using the media server cluster concept also supports server scalability. Since it uses multiple media servers to service streaming requests, when the streaming demand increases, the system administrator can simply add one more media server into the media server cluster. A client need not know about the newly installed media server at all.
  • each media server handles multiple streaming sessions.
  • the server handles all media streaming from start to end.
  • This design cuts down the dependency between media servers. Therefore, when one server goes down, it will not bring down the other servers. When it is time to add a new server, it can simply plug into the media server cluster without affecting the rest. Server scalability is thus easy to reach and deploy.
  • the de-coupling design does not restrict the system to using a collection of equivalent media servers.
  • all of the servers within the media server cluster can be equipped with, say, Ultra disks having a speed of 40 Mega Bytes Per Second (MBPS).
  • the system may later be expanded by adding a newer server configured differently, such as with Ultra 2 disk having a speed of 80 MBPS, resulting in a heterogeneous server environment.
  • although the original media servers, having a 40 MBPS capability, have relatively less capability, lacking the advantage of the newer technology found in the newer server, all these servers of differing abilities may be retained in the system.
  • Load balancing is provided among the request handlers. Using web servers as the front-end request handlers enhances the architecture. By using an existing web server load balancing product such as Microsoft's Windows NT Load Balancing Service (WLBS), the system can distribute the requests to participating web servers and achieve the goal of load balancing. It also provides load balancing within the media server cluster.
  • each media server may not have the same capabilities.
  • the load balancer recognizes this and tries to balance out the streaming requests.
  • the load balancing scheme maintains the equal utilization ratio in each media server.
  • the design can also incorporate value-added E-commerce service by creating a reply web page based on the user profile: when a playback request comes in, it is represented as a URL link.
  • the URL link represents a request rather than accessing an existing web page.
  • the "global streaming gateway" component knows information such as user name and the requested media clip. By dynamically generating a response web page, the client will be redirected to the appropriate media server to start media playback.
  • the "ad inserter" component can select a template (filled with presentation style and ad postings) from a template database and pass it to the "global streaming gateway" module.
  • the "global streaming gateway” can send back the reply HTML page to the client.
  • Internet and Intranet environments. This includes (but is not limited to) video on demand service, video mail service, movie on demand service, etc.
  • the media delivery system as described herein is robust, operationally efficient and cost-effective.
  • the present invention may be used in connection with presentations of any type, including sales presentations and product/service promotion, which provides the video service providers additional revenue resources.
  • a user digital information appliance has generally been illustrated as a personal computer.
  • the digital computing device is meant to be any device for interacting with a remote data application, and could include such devices as a digitally enabled television, cell phone, personal digital assistant, etc.
  • client is intended to be understood broadly to comprise any logic used to access data from a remote system
  • server is intended to be understood broadly to comprise any logic used to provide data to a remote system.
  • the invention can be implemented in hardware and/or software.
  • different aspects of the invention can be implemented in either client-side logic or server-side logic.
  • the invention or components thereof may be embodied in a fixed media program component containing logic instructions and/or data that when loaded into an appropriately configured computing device cause that device to perform according to the invention.
  • a fixed media program may be delivered to a user on a fixed media for loading in a user's computer, or a fixed media program can reside on a remote server that a user accesses through a communication medium in order to download a program component.
  • Figure 14 shows an information appliance (or digital device) 1400 that may be understood as a logical apparatus that can read instructions from media 1417 and/or network port 1419. Apparatus 1400 can thereafter use those instructions to direct server or client logic, as understood in the art, to embody aspects of the invention.
  • One type of logical apparatus that may embody the invention is a computer system as illustrated in 1400, containing CPU 1407, optional input devices 1409 and 1411, disk drives 1415 and optional monitor 1405.
  • Fixed media 1417 may be used to program such a system and may represent a disk-type optical or magnetic media, magnetic tape, solid state memory, etc.
  • the invention may be embodied in whole or in part as software recorded on this fixed media.
  • Communication port 1419 may also be used to initially receive instructions that are used to program such a system and may represent any type of communication connection.
  • the invention also may be embodied in whole or in part within the circuitry of an application specific integrated circuit (ASIC) or a programmable logic device (PLD).
  • the invention may be embodied in a computer understandable descriptor language which may be used to create an ASIC or PLD that operates as herein described.

Abstract

The invention provides a system for the delivery of continuous media based on a hierarchical architecture of multiple servers to support a large number of streaming media sessions at any given time over data networks. The data networks may include the Internet, the Intranet or a network of other private networks. In an exemplary embodiment, each server is connected to a local media depository, with some of the servers additionally connected to a shared media depository. A server can supply continuous media to the network from any of the media depositories to which it is connected. Service providers are assigned to servers within the system based upon their requirements as well as the relative abilities of the servers. Once a service provider has been allocated to servers, requests from users are then routed based upon this allocation.

Description

METHOD AND SYSTEM FOR PROVIDING STREAMING MEDIA SERVICES
CROSS REFERENCE TO RELATED APPLICATIONS
This application claims priority from provisional patent application 60/155,000, entitled METHOD AND SYSTEM FOR PROVIDING WORLD WIDE STREAMING MEDIA SERVICES, filed September 21, 1999. The above referenced application is incorporated herein by reference for all purposes. The prior application, in some parts, may indicate earlier efforts at describing the invention or describing specific embodiments and examples. The present invention is, therefore, best understood as described herein.
FIELD OF THE INVENTION
The present invention relates generally to networked multimedia systems. More particularly, the invention relates to a method and system architecture for providing global streaming media service over a data network using multiple media servers to ensure that streaming signals are delivered with a high quality of service (QoS).
Copyright Notice
A portion of the disclosure of this patent document may contain materials that are subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
Prior Publications
The following publications may be related to the invention or provide background information. Listing of these references here should not be taken to indicate that any formal search has been completed or that any of these references constitute prior art.
BACKGROUND OF THE INVENTION
The Internet is a rapidly growing communication network of interconnected computers around the world. Together, these millions of connected computers form a vast repository of hypermedia information that is readily accessible by users through any of the connected computers from anywhere and at any time. As an increasing number of users connect to the Internet and browse for information, they create tremendous demand both for more content and for methods of delivering it over the Internet. Currently, the most commonly available information that is deliverable on the Internet includes text, images and graphics, videos and audio clips.
Continuous information such as videos, audio clips, audiovisual or other multimedia works (referred to below collectively as "media") may be among the most frequently requested network resources. It is therefore not uncommon to experience thousands of simultaneous access requests to a popular video, audio or audiovisual program. Most of the existing streaming technologies are based on a single media server architecture that ultimately suffers from resource/performance limitations. The single media server architecture may support a few hundred streaming sessions at one time, but can hardly scale to thousands of simultaneous streaming sessions. Furthermore, the single server architecture suffers a severe pitfall with respect to reliability: it represents a single point of failure when the computer system crashes or an unexpected power disruption occurs. Once the single server is down, the entire media streaming service is discontinued, and with it the revenue and profits of the service provider. A reliable, fault-tolerant, 24-hour streaming media service is of utmost importance to a streaming media service provider such as an Internet Service Provider (ISP) or Internet Content Provider (ICP). There is therefore a great need for media delivery systems that can be configured to support hundreds of thousands of streaming sessions at the same time while keeping the impact on quality of service to a minimum.
Two possible paradigms have been suggested to scale up the media streaming server: use a single powerful server machine, or have multiple servers working together to provide aggregated performance. The first approach may lead to a specially built, proprietary machine platform, which translates into high cost and maintenance complexity. Additionally, it does not solve the reliability problem inherent in a single server architecture. On the other hand, using multiple commodity-oriented server platforms, as suggested in the second approach, gives a much better performance/cost ratio and takes advantage of the whole PC industry's effort to further improve the performance/cost ratio down the road. However, how to implement the architectural structure of multiple media servers working together to produce the highest media service quality becomes a critical issue.
SUMMARY OF THE INVENTION
The present disclosure describes a global media delivery system based on a hierarchical architecture of multiple servers to support a large number of streaming media sessions at any given time over data networks. The data networks may include the Internet, the Intranet or a network of other private networks. A media delivery system employing the present invention can advantageously provide streaming media over the data networks to an unlimited number of devices coupled to the data networks at any given time without compromising the quality of service.
According to one aspect of the present invention, the global media delivery system comprises a plurality of video servers that may be located sparsely or remotely with respect to each other. Each of the video servers may provide different video titles that are to be shared among the video servers based on requests received, regardless of where the requests are from. According to another aspect of the present invention, the distribution of the video titles is a function of the request frequency, which controls appropriate caching of the most frequently accessed video titles. In other words, a video server receiving a high number of requests for the same video title may cache a portion of or the entire video title locally so that network traffic is reduced while the quality of service is ensured. According to still another aspect of the present invention, a server that provides a streaming service to a client device is selected by a load balance manager. Based on the monitored capabilities and working load of each of the servers, the load balance manager determines the most appropriate server to provide a requested media delivery. With the help of the load balance manager, the video servers in the global media delivery system will be neither overloaded nor idle. Further, in case one of the servers is disrupted, the load balance manager may immediately determine a next available server to continue the delivery service, which ensures a reliable, fault-tolerant media service provided by the global media delivery system. In an exemplary embodiment, each server is connected to a local media depository, with some of the servers additionally connected to a shared media depository. A server can supply continuous media to the network from any of the media depositories to which it is connected. Service providers are assigned to servers within the system based upon their requirements as well as the relative abilities of the servers.
A service provider's request to use the system to stream media to users will include information such as the number of streams the provider wants, the amount of storage required, and any fault tolerance requirements. Using these parameters, the system assigns the service provider a set of servers from which to stream content to a user of the provider's service, the servers being selected based upon their relative abilities to meet the requirements. Once a service provider has been allocated to the servers, requests from users are then routed based upon this allocation.
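For illustration only, the allocation just described might be sketched as follows. This is a hypothetical sketch, not the patented implementation: the `Server` fields, the ordering heuristic, and the idea that each fault-tolerance replica must carry the full stream load alone are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    free_streams: int   # remaining concurrent-stream capacity
    free_storage: int   # remaining storage, in MB

def allocate(servers, streams_needed, storage_needed, replicas=1):
    """Pick `replicas` servers that can each satisfy the request.

    Servers are considered in order of spare stream capacity so the
    least-loaded machines are assigned first; replicas > 1 models a
    simple fault-tolerance requirement (the same content is hosted on
    more than one server).
    """
    candidates = sorted(servers, key=lambda s: s.free_streams, reverse=True)
    chosen = []
    for s in candidates:
        if s.free_streams >= streams_needed and s.free_storage >= storage_needed:
            chosen.append(s)
            if len(chosen) == replicas:
                return chosen
    return None  # the request cannot be admitted as specified

servers = [Server("A", 300, 50000), Server("B", 80, 90000), Server("C", 500, 20000)]
print([s.name for s in allocate(servers, 100, 10000, replicas=2)])  # ['C', 'A']
```

A request that no server can satisfy (here, 1000 streams) returns `None`, which would correspond to rejecting the provider's request rather than over-committing the cluster.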
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 shows the flow of streaming requests across a wide area network.
Figure 2A illustrates an exemplary configuration in which the present invention may be practiced.
Figure 2B illustrates another exemplary configuration in which the present invention may be practiced.
Figure 3A shows a block diagram of a global streaming gateway.
Figure 3B shows an exemplary functional block diagram of the global media delivery system.
Figure 4 shows a process flowchart of a gateway module that may be implemented as a server module to be installed in a server that is configured to function as a global streaming gateway.
Figure 5 shows a process flowchart in a selected video server.
Figure 6 shows a process flowchart in a second video server that supports the selected video server of Figure 5 when a selected video title is not in the local depository or cached.
Figure 7 presents an embodiment for global streaming with shared media repository storage in the media cluster.
Figure 8 presents an embodiment for global streaming with mixed shared and non-shared media repository storage.
Figure 9 shows a detail of Figure 8 to highlight the portions relevant to determining server capability.
Figure 10 is a flow chart showing an exemplary embodiment of a retailer allocation scheme.
Figure 11 shows an example of a retail allocation table construction.
Figure 12 is a media allocation table containing the information associated with a title stored.
Figure 13 shows the structure of a workload table.
Figure 14 is a block diagram showing a representative example logic device in which aspects of the present invention may be embodied.
The invention and various specific aspects and embodiments will be better understood with reference to the drawings and detailed descriptions. In the different figures, similarly numbered items are intended to represent similar functions within the scope of the teachings provided herein.
DESCRIPTION OF THE PREFERRED EMBODIMENTS
Although the media delivery system described below is based on video streaming signals, those skilled in the art can appreciate that the description can be equally applied to audio streaming signals or other media or multimedia signals as well. The detailed description of the present invention here provides numerous specific details in order to provide a thorough understanding of the present invention. However, it will become obvious to those skilled in the art that the present invention may be practiced without these specific details. In other instances, well known methods, procedures, components, and circuitry have not been described in detail to avoid unnecessarily obscuring aspects of the present invention. Before the specifics of the embodiments are described, a few more embodiments are outlined generally below.
The present invention presents a global media delivery system based on a hierarchical architecture of multiple servers to support a large number of streaming media sessions at any given time over data networks. The data networks may include the Internet, the Intranet or a network of other private networks. A media delivery system employing the present invention can advantageously provide streaming media over the data networks to an unlimited number of devices coupled to the data networks at any given time without compromising the quality of service.
According to one aspect of the present invention, the global media delivery system comprises a plurality of video or other continuous media servers that may be located sparsely or remotely with respect to each other. Each of the servers may provide different video titles that are to be shared among the servers based on requests received, regardless of where the requests are from. According to another aspect of the present invention, the distribution of the continuous media titles is a function of the request frequency, which controls appropriate caching of the most frequently accessed titles. In other words, a server receiving a high number of requests for the same title may cache a portion of or the entire title locally so that network traffic is reduced while the quality of service is ensured. According to still another aspect of the present invention, a server that provides a streaming service to a client device is selected by a load balance manager. Based on the monitored capabilities and working load of each of the servers, the load balance manager determines the most appropriate server to provide a requested media delivery. With the help of the load balance manager, none of the video servers in the global media delivery system will be either overloaded or idle. Further, in case one of the servers is disrupted, the load balance manager may immediately determine a next available server to continue the delivery service, which ensures a reliable, fault-tolerant media service provided by the global media delivery system.
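The request-frequency caching policy described above can be illustrated with a minimal sketch. The class name, the fixed threshold, and the three return labels are all invented for the example; the patent leaves the exact caching policy open (and a more elaborate progressive caching technique is incorporated by reference).

```python
from collections import Counter

class CachingServer:
    """Caches a title locally once it has been requested often enough."""
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.requests = Counter()   # per-title request frequency
        self.cache = set()          # titles held in the local depository

    def handle(self, title):
        self.requests[title] += 1
        if title in self.cache:
            return "local"            # serve from the local cache
        if self.requests[title] >= self.threshold:
            self.cache.add(title)     # popular title: cache it locally
            return "remote+cache"     # fetched remotely, now cached
        return "remote"               # infrequent: stream through from origin

s = CachingServer(threshold=3)
print([s.handle("concert") for _ in range(4)])
# ['remote', 'remote', 'remote+cache', 'local']
```

The point of the sketch is only the shape of the policy: repeated requests for the same title shift the traffic from the remote origin to the local depository, reducing network load while preserving quality of service.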
Regardless of user location, the global streaming technology enables users to watch the best quality streaming media via the Internet. This technology integrates web technology, web proxy technology, and large-scale media streaming technology to serve millions of on-line users with streaming media capability no matter where they are.
Global streaming proposes a hierarchical architecture of multiple web servers, web proxy servers, and multiple media servers. Figure 1 illustrates the proposed technology via the following scenario: A user living in City A would like to access continuous media, for example to view a recorded concert broadcast in City B hosted by an Internet content or service provider over the Internet. The user could check the program listing from the home page of the content provider by specifying the appropriate URL. After browsing the web site, the user would find the desired program and click the corresponding streaming media link. The request from the user's media playback station 10 will reach the web proxy server 23 in City A. The request will then be redirected to a special global streaming gateway 25 hosted in the same proxy server. If this is the first time a request to retrieve the concert has been made, or if for some other reason the media is not stored in proxy server 23, the request will then be redirected one level higher in the web hierarchy via the Internet 30 until it reaches the main web server 41 in City B. When the request reaches the main web server 41, the request is redirected to the global streaming gateway 45 hosted in the main web server 41. The global streaming gateway 45 then checks the status of collected data related to media server loads and geographical information about each media server that owns or caches the requested media. Based on the techniques described below, the global streaming gateway 45 will select the best suited media server near the user located in City A. All the related information collected by the main global streaming gateway 45 will migrate to the downstream global streaming gateways in each web proxy server. This information is then passed back to the web server 41 or proxy server 43 in City B, and then through the Internet 30 back to City A.
The web server 21 or proxy server 23 can then cache the streaming information about the response for later use. The selected concert will then stream from the selected media server to the user in City A at the media playback station 10. If a similar request for the same continuous media content in City A is received at a later time in request handler 20, the nearby web proxy server 23 will intercept the request and redirect it to its global streaming gateway 25 based on the streaming information cached during the previous request. Based on the local information gathered from nearby media servers, the global streaming gateway will select the most appropriate media server to serve the current streaming request. This scheme dramatically improves performance for subsequent streaming accesses.
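The escalate-then-cache behavior of the gateway hierarchy in this scenario can be sketched as follows. The class, the city names, and the `catalog`/`cache` dictionaries are invented for the example; the sketch models only the control flow (local cache, then escalation to the parent gateway, with the answer cached on the way back), not the actual web-proxy protocol.

```python
class Gateway:
    """A streaming gateway in the web-proxy hierarchy.

    On a miss the request is escalated to the parent gateway; the
    answer (which media server to contact) is cached on the way back,
    so later requests for the same title resolve locally.
    """
    def __init__(self, name, parent=None, catalog=None):
        self.name = name
        self.parent = parent
        self.catalog = catalog or {}   # title -> media-server address (owned media)
        self.cache = {}                # title -> media-server address (learned)

    def resolve(self, title):
        if title in self.cache:
            return self.cache[title], self.name + " (cached)"
        if title in self.catalog:
            return self.catalog[title], self.name
        server, origin = self.parent.resolve(title)   # escalate one level up
        self.cache[title] = server                    # remember for next time
        return server, origin

main = Gateway("city-B", catalog={"concert": "media-server-7"})
proxy = Gateway("city-A", parent=main)
print(proxy.resolve("concert"))   # first request is answered by city-B
print(proxy.resolve("concert"))   # repeat request resolves from city-A's cache
```

The first call travels up to the main gateway in City B; the second is intercepted locally, which is the performance improvement the paragraph describes.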
In this way, global streaming extends the streaming service to an unlimited number of users accessing any media content around the globe. The technology enables large-scale media service, fault tolerance, automatic migration of media content, and 24-hour continuous media streaming by exploiting the web concept for streaming media delivery. Users use the same web interface to select the media content.
Furthermore, it is well known in the art that logic or digital systems and/or methods can include a wide variety of different components and different functions in a modular fashion. The following will be apparent to those of skill in the art from the teachings provided herein. Different embodiments of the present invention can include different combinations of elements and/or functions. Different embodiments of the present invention can include actions or steps performed in a different order than described in any specific example herein. Different embodiments of the present invention can include groupings of parts or components into larger parts or components different than described in any specific example herein. For purposes of clarity, the invention is described in terms of systems that include many different innovative components and innovative combinations of innovative components and known components. No inference should be taken to limit the invention to combinations containing all of the innovative components listed in any illustrative embodiment in this specification. The functional aspects of the invention, as will be understood from the teachings herein, may be implemented or accomplished using any appropriate implementation environment or programming language, such as C++, COBOL, Pascal, Java, Java-script, etc. All publications, patents, and patent applications cited herein are hereby incorporated by reference in their entirety for all purposes.
The invention therefore in specific aspects provides streaming of continuous media such as video/audio signals that can be played on various types of video-capable terminal devices operating under any type of operating system, regardless of what type of player is pre-installed in the terminal devices.
In specific embodiments, the present invention involves methods and systems suitable for providing multimedia streaming over a communication data network including a cable network, a local area network, a network of other private networks and the Internet.
The present invention is presented largely in terms of procedures, steps, logic blocks, processing, and other symbolic representations that resemble data processing devices. These process descriptions and representations are the means used by those experienced or skilled in the art to most effectively convey the substance of their work to others skilled in the art. The method along with the system to be described in detail below is a self-consistent sequence of processes or steps leading to a desired result. These steps or processes are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities may take the form of electrical signals capable of being stored, transferred, combined, compared, displayed and otherwise manipulated in a computer system or electronic computing devices. It proves convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, operations, messages, terms, numbers, or the like. It should be borne in mind that all of these similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following description, it is appreciated that throughout the present invention, discussions utilizing terms such as "processing" or "computing" or "verifying" or "displaying" or the like, refer to the actions and processes of a computing device that manipulates and transforms data represented as physical quantities within the device's registers and memories into analog output signals via resident transducers.
Global Streaming
The discussion of the exemplary functions designed for a media delivery system according to the present invention, and of the detailed design, features, use, advantages, configurations and characteristics of the present invention, begins with Figures 2A and 2B, which provide a context for the invention. Details of a caching technique suitable for the present application can be found in copending U.S. patent application __/ , by Horng-Juing Lee, entitled "METHOD AND APPARATUS FOR CACHING FOR STREAMING DATA", filed on September 8, 2000, which is hereby included by this reference and which describes an exemplary embodiment for caching frequently requested video titles in a local video server.
Referring to the drawings, in which like numerals refer to like parts throughout the several views, Figure 2A illustrates an exemplary configuration in which the present invention may be practiced. The global media delivery system comprises a plurality of servers, of which only three, 102, 108 and 110, are shown. Each of the servers stores or caches some of the video files in a video repository of the global media delivery system. As used herein, video files or titles refer to any video footage, video films and/or video/audio clips or other continuous media that typically are in a compressed format such as MPEG or MP3. It should be noted, however, that the exact format of the video files does not affect the operations of the present invention. As will be noted and appreciated, the present invention applies to any format of video file. Preferably data network 106 is a data network backbone, namely a larger transmission line. At the local level, a backbone is a line or set of lines that local area networks connect to for a wide area network connection, or within a local area network to span distances efficiently (for example, between buildings). On the Internet or other wide area network, a backbone is a set of paths that local or regional networks connect to for long-distance interconnection. Coupled to data network A 106 are two other networks 112 and 114 that are typically the Internet, a local area network, or a phone network through which terminal devices can receive video files. The terminal devices may include, but are not limited to, multimedia computers (e.g. 116 and 119) and networked television sets or other video/audio players (e.g. 117 and 118). Typically the terminal devices are equipped with applications or capabilities to execute and display received video files. For example, one popular application is the MPEG player provided in WINDOWS 98 from Microsoft.
When an MPEG video file is received as a stream from one of the proxy servers, the video file can be displayed on a display screen of the computer by executing the MPEG player in a multimedia computer.
To receive a desired video, one of the terminal devices (e.g. 116) must send in a request that may comprise the title of the desired video. Typically the request is in the form of a URL that may include a subscriber identification if the video services allow only authorized access. Upon receiving the request, the server (e.g. 108) will first check whether the selected video is in its cache, while the request is recorded by a request manager. The selected video will be provided as a streaming video to the terminal device if some or all of the video is in the cache. Otherwise, server 108 sends a request to another server 102, either for the rest of the video if some units of the video are in the cache memory of server 108, or for the entire video if no unit of the video is cached. Figure 2B shows another exemplary configuration in which the present invention may be practiced. As shown in the figure, all the video servers 120 in the global media delivery system as well as the terminal devices (e.g. 122) are coupled to a network 126 (e.g. the Internet). When terminal device 122 sends in a request for a specific video title, the request is routed to one of the servers (e.g. 120-1). Server 120-1, referred to as a global streaming gateway, comprises a request handler that processes the received request. The processing includes authentication of the request and other administrative control. Further, server 120-1 comprises a load balancing manager that monitors the system performance of all the video servers in the global media delivery system. With the load balancing manager, an appropriate server is identified to provide the video service to terminal device 122. The detailed description of the request processing and the load balancing manager is provided in the exemplary embodiments described below.
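The cache-check decision just described (serve from cache, fetch only the remainder, or fetch the whole title) might be sketched as follows. The function name, the unit-counting scheme, and the `fetch_remote` callback are all hypothetical; the patent does not fix a unit granularity.

```python
def serve(title, cached_units, total_units, fetch_remote):
    """Decide what to fetch from the upstream server for `title`.

    cached_units: number of leading units of the title already cached
    total_units:  number of units making up the whole title
    fetch_remote: callable(title, first_unit) that fetches units
                  starting at `first_unit` from the upstream server
    """
    if cached_units >= total_units:
        return "served entirely from cache"
    if cached_units > 0:
        fetch_remote(title, cached_units)    # fetch only the remaining units
        return f"cache hit on first {cached_units} units, rest fetched"
    fetch_remote(title, 0)                   # nothing cached: fetch whole title
    return "fetched entire title"

fetched = []
print(serve("concert", 3, 10, lambda t, u: fetched.append((t, u))))
print(fetched)   # [('concert', 3)] -- only units 3..9 were requested upstream
```

This mirrors the paragraph: server 108 contacts server 102 only for the portion it does not already hold.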
To better understand the present invention, Figure 3A shows a block diagram of a global streaming gateway (i.e., a server).
According to one embodiment of the present invention, the gateway is loaded with a compiled and linked version of an embodiment of the present invention that, when executed by the processor, performs at least the following: receiving a request for a video title from a terminal device; retrieving system performance information on each of the video servers in a global media delivery system; determining an appropriate video server in the global media delivery system with respect to the retrieved system performance information; and sending a response to the request, the response comprising an address identifier identifying the appropriate video server to service the video title selected by a user. Further, the compiled and linked version, when executed, additionally performs: inserting commercial information into the response so that the commercial information becomes available to the user when the video title is played.
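The sequence of gateway steps listed above can be condensed into a small sketch. Everything concrete here is invented for illustration: the function signature, the `rtsp://` address format, the least-loaded selection rule, and the ad lookup are placeholders for whatever the gateway actually does.

```python
def handle_request(title, user, perf_monitor, ad_db):
    """One pass through the gateway steps: receive, select, respond.

    perf_monitor: mapping of server name -> current load fraction (0.0-1.0)
    ad_db:        mapping of user -> ad snippet (absent if none targeted)
    Returns a small dict standing in for the response page.
    """
    # 1. retrieve system-performance information and pick the least-loaded server
    server = min(perf_monitor, key=perf_monitor.get)
    # 2. build the response: an address identifier (URL/URI) naming that server
    response = {"url": f"rtsp://{server}/{title}"}
    # 3. optionally insert commercial information targeted at this user
    ad = ad_db.get(user)
    if ad:
        response["ad"] = ad
    return response

perf = {"server-1": 0.9, "server-2": 0.2, "server-3": 0.5}
print(handle_request("concert", "alice", perf, {"alice": "banner-42"}))
# {'url': 'rtsp://server-2/concert', 'ad': 'banner-42'}
```

The terminal device then activates the returned address identifier, linking it directly to the selected video server, as described for Figure 3B.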
Figure 3B shows an exemplary functional block diagram of the global media delivery system. After the user receives the response and activates the address identifier that is typically a universal resource identifier (URI) or universal resource locator (URL), one of the video servers (e.g. server 2) is linked and starts to provide the video streaming data. One of the important features in the present invention is that the video server that provides the video service is selected based on the system performance information on each of video servers in a global media delivery system. In other words, no single video server would be overloaded so as to affect the quality of services.
In the case that the selected video server does not have the desired video title in the local depository, a progressive cache technique is used to cache the video title. The detailed description of the progressive cache technique is provided in U.S. patent application / by Horng-Juing Lee, entitled "METHOD AND APPARATUS FOR CACHING FOR STREAMING DATA", filed on September 8, 2000, which was included by reference above.
To further understand the present invention, Figure 4 shows a process flowchart of the gateway module that may be implemented as a server module to be installed in a server that is configured to function as a global streaming gateway. Each of the processes in the flowchart includes sub-processes that are described below. Figure 5 shows a process flowchart in the selected video server. The process flowchart may be implemented as a proxy server module to be installed in a server to function as one of the video servers in the global media delivery system. Figure 6 shows a process flowchart in a second video server that supports the selected video server of Figure 5 when a selected video title is not in the local depository or cached.
The global media delivery system as described in accordance with one aspect of the present invention is robust, operationally efficient and cost effective. The global streaming mechanism makes the best use of all proxy video servers and permits seamless delivery of streaming video with the highest quality of service possible. In addition, the present invention may be used in connection with presentations of any type, including sales presentations and product/service promotion, which provides video service providers additional revenue sources. While the embodiment discussed herein may appear to include some limitations as to the presentation of the cache units and the way of managing the units, in terms of format and arrangement, the invention has applicability well beyond such embodiment, as can be appreciated by those skilled in the art.
While the foregoing and attached are illustrative of various aspects/embodiments of the present invention, the disclosure of specific sequences/steps and the inclusion of specifics with regard to broader methods and systems are not intended to limit the scope of the invention, which finds itself in the various permutations of the features disclosed and described herein as conveyed to one of skill in the art. In the following paragraphs the proposed architecture of global streaming, described above with respect to Figure 1, is more fully developed. This shows the interactions among the web servers described in copending U.S. provisional patent application number 60/155,354, entitled "Method and System for Providing Real-Time Streaming Video Services", filed September 22, 1999, and which is hereby included by this reference, the web proxy servers, and the associated global streaming gateways.
System Components
Figures 7 and 8 show some of the system components in two different embodiments. Figure 7 presents the detailed layout of the architecture for global streaming with shared media repository storage in the media cluster. The embodiment of Figure 8 has a more complicated storage arrangement with mixed shared and non-shared media repository storage. The elements are numbered similarly in this pair of figures and will initially be described with respect to Figure 7. Request handler 720 is shown connected to media server 701, remote manager 703, media recording station 705, and media playback station 707, and uses one or more web servers and web proxy servers as front-end handlers for all kinds of client streaming requests. The embodiment of Figure 7 shows a request handler with a pair of web servers 721 and 723. A client request, for example in URL format, for operations such as playback, recording or administration enters the system through a request handler interface. This gives the client the capability of accessing system resources through a URL address, so the client can access system resources anywhere web access is available. Within the back-end of the web servers there are several components for handling client requests. These components are considered common gateway interface (CGI) components. However, their implementations are not limited to CGI scripts; they can alternatively be implemented using CGI alternatives such as ISAPI or Servlets. These components have a number of tasks, which can include authentication, admission control, load balancing, performance monitoring, the global streaming gateway function, and advertisement insertion.
Authentication block 731 is responsible for verifying the identity of the client against a user database 741 before granting any operations. Admission control 739 is responsible for managing the number of concurrent streaming sessions and the total streaming bandwidth. This limitation can be imposed by machine capability and/or product license control. The purpose of restricting the session count is to ensure the playback quality of all admitted requests.
Load balancer 735 is responsible for determining which server the incoming request should contact. The component interacts with performance monitor 733 to get the media server workload information 743. Together with the client and video clip information, the component evens out the workload among all working servers within the media server cluster 740. Performance monitor 733 handles inquiries from the remote management block 703 as well as monitoring the performance and operational status of the media server 701. Global streaming gateway 730 is responsible for receiving the request and replying to the request with an HTML page. This component interacts with the rest of the components to get the required information and send it back to the client. Ad inserter 737 is responsible for generating an HTML template with the advertisement postings. This component interacts with the user database 741 to get the user profile (if available) and uses it to determine the appropriate ad template from ad template database 747 for better targeting of the audience.
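As a rough illustration of how these back-end components cooperate on one request, the following Python sketch chains authentication, admission control, load balancing, and ad insertion. All class, field, and message names are hypothetical assumptions; the patent does not specify an implementation.

```python
# Hypothetical sketch of the request-handler pipeline described above.
# Component interfaces and return strings are illustrative assumptions.

class Gateway:
    def __init__(self, users, capacity, servers, ad_templates):
        self.users = users          # user database: user name -> profile key
        self.capacity = capacity    # licensed concurrent-session limit
        self.sessions = 0           # sessions currently admitted
        self.servers = servers      # media server name -> current load
        self.ads = ad_templates     # profile key -> ad template

    def handle(self, user, title):
        # Authentication: verify identity against the user database.
        if user not in self.users:
            return "DENY: unknown user"
        # Admission control: enforce the session/bandwidth limit.
        if self.sessions >= self.capacity:
            return "DENY: at capacity"
        # Load balancing: pick the least-loaded media server.
        server = min(self.servers, key=self.servers.get)
        self.servers[server] += 1
        self.sessions += 1
        # Ad insertion: choose a template based on the user's profile.
        ad = self.ads.get(self.users[user], "default-ad")
        return f"PLAY {title} from {server} with {ad}"
```

The same pipeline order (authenticate, admit, balance, insert ads) is used in each of the request scenarios described below.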
Media server cluster 740 comprises several media servers 741-i. Each media server provides its own streaming service. The media servers need not be capability-equivalent. The goal of media server cluster 740 is to ensure smooth and uninterrupted media playback/recording. Within the media server cluster, a two-level media depository architecture is used: local media depositories 743-i and a shared media depository 745. Each local media depository 743-i is attached to a single media server 741-i. Therefore, only that particular media server can access its contents directly. The number of concurrent streaming sessions on clips stored on a particular server is then limited by that server's capability. On the other hand, every media server within the cluster can access the contents stored on the shared media depository. Hence, the aggregated server capability can support significantly more streaming sessions on clips stored on the shared media depository. Figure 8 differs from Figure 7 in that not all of the media servers 841-i are connected to the shared media depository 845. In this generalization, media servers 841-1 and 841-2 are connected only to their respective local media depositories 843-1 and 843-2, while media servers 841-3 and 841-4 are connected to both their respective local media depositories 843-3 and 843-4 and the shared media depository 845. In a generic situation, each server will have its own local media depository, with some servers further connected to one or more shared media depositories, each of which is in turn connected to multiple servers.
Although the servers and the depositories are shown together as part of the media cluster 740 in Figures 7 and 8, they need not have any actual physical proximity, but may be separated from each other. Although they form a single block conceptually in the present invention, generically they are distributed over an extended physical region as described above with respect to Figures 1, 2A, and 2B. The architecture of media cluster 740 differs from what is found in the prior art as described in the Background section above. The approach using multiple commodity-oriented server platforms would completely do away with the shared media depository 745 and rely solely upon the local media depositories. In this arrangement, only those servers which contain the particular continuous media requested in their own local media depository can supply it to the user. In order to ensure a high quality of service, this requires a large amount of duplication and redundancy among the local depositories, as the same title must be stored in many different locations, often with every local depository holding a complete collection of titles.
The other approach found in the prior art relies solely on the shared media depository 745, dispensing with the local depositories. In this approach, the total number of streams is limited by the bandwidth of the interface 847. Additionally, as discussed in the Background section, it does not solve the reliability problem inherent in using only a single depository.
In this discussion, a client is a component asking for service. The service can be playback, recording, uploading media clips, downloading media clips, or querying operation status. In this aspect, a client can be another media server system, a media proxy server, a remote management console, a media recording station, or a media playback station. Thus, it is employed as a general term covering both the service or content provider and the user requesting content from the provider.
Scenario of Media Streaming Service
Based on these concepts, several scenarios related to a streaming service can be presented in terms of the following steps.
Case 1: Playback client requests to play back pre-recorded video.
1. Client sends out the request to play back movie A by clicking on a URL link.
2. Request handler receives the request.
3. Request handler dispatches the request to the global streaming gateway.
4. Request handler sends the request to the authentication component.
5. Authentication component verifies the client identity by consulting the user database and making sure the client has been granted the right to perform the request.
6. When the client gains the access right, the global streaming gateway passes the request to the admission control to make sure the new request will not interfere with existing service and also meets the product license agreement.
7. Admission control checks for the existence of movie A. If the content is not cached in the global streaming gateway, the request is passed to a nearby request handler for further processing, and Steps 2 - 6 are repeated.
8. When the admission control component approves the request, the global streaming gateway passes the request to the load balancer component.
9. The load balancer component consults the performance monitor component to understand the current server workload. Then, based on the load balancing scheme, it returns the address of the media server ready to serve the request.
10. The global streaming gateway then passes the user profile information (retrieved from the user database) to the ad inserter component, asking for the appropriate ad template.
11. The global streaming gateway combines the server address, video clip information, and ad template to create a response HTML page for the client. The response HTML page contains an object tag indicating the player component and its associated parameters.
12. The HTML page from the global streaming gateway will automatically cause the Web browser to show the page content, which includes the invocation of the media player when the Web browser sees the object tag.
13. The media player then communicates with the assigned media server to start the media streaming service. Any interactive playback control is now between the client and the assigned media server.
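The page construction of step 11 can be sketched as below. The object tag fields, the classid value, and the `{{CONTENT}}` placeholder convention are illustrative assumptions, not the actual page format used by the system.

```python
# Hypothetical sketch of step 11: combine the assigned server address,
# clip information, and ad template into a response HTML page whose
# object tag invokes the media player in the client's browser.

def build_response_page(server_addr, clip, ad_template_html):
    object_tag = (
        '<object classid="clsid:MEDIA-PLAYER">\n'
        f'  <param name="server" value="{server_addr}">\n'
        f'  <param name="clip" value="{clip}">\n'
        '</object>'
    )
    # The ad template supplies the page layout; the object tag is dropped
    # into its content slot so the browser starts the player automatically.
    return ad_template_html.replace("{{CONTENT}}", object_tag)
```

When the browser renders the returned page and encounters the object tag, it invokes the media player, which then contacts the assigned media server directly.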
Case 2: Recording client requests to store pre-recorded media into a media server.
1. Client sends out the request to record movie A by using the client media recorder.
2. Request handler receives the request.
3. Request handler dispatches the request to the global streaming gateway.
4. The global streaming gateway sends the request to the authentication component.
5. Authentication component verifies the client identity by consulting the user database and making sure the client has been granted the right to perform the request.
6. When the client gains the access right, the global streaming gateway passes the request to the admission control to make sure the new request will not interfere with existing service and also meets the product license agreement.
7. Admission control checks whether space has been allocated for storing movie A. If the space is not allocated in the global streaming gateway, the request is passed to a nearby request handler for further processing, and Steps 2 - 6 are repeated.
8. When the admission control component accepts the request, the global streaming gateway passes the request to the load balancer component.
9. The load balancer component consults the performance monitor component to understand the current server workload. Then, based on the load balancing scheme, it returns the address of the media server ready to serve the request.
10. The global streaming gateway returns an HTML page with the assigned media server information.
11. The media recorder then communicates with the assigned media server to start the media streaming service.
Case 3: Remote management client requests to monitor server operational status.
1. Client sends out the request to observe a server's status.
2. Request handler receives the request.
3. Request handler dispatches the request to the global streaming gateway.
4. Global streaming gateway sends the request to the authentication component.
5. Authentication component verifies the client identity by consulting the user database and making sure the client has been granted the right to perform the request.
6. The global streaming gateway consults the performance monitor component to understand the current server workload.
7. The performance monitor interacts with the media servers listed in the configuration database, retrieves the performance data, and passes it back to the global streaming gateway.
8. The global streaming gateway packages the performance data as an HTML page and returns it to the client.
9. The remote management client then displays the performance data in a graphical user interface (GUI) representation.
Implementation Approach:
In the following, methods are described to determine a media server's streaming capability, to allocate a new subscriber's media data to a particular media server, to implement a load balancer component, and to implement an admission control component. These will be discussed in terms of a particular embodiment, but all of this readily extends to the more general case. For example, the amount of storage space in a local or shared media depository is often referred to as disk space, although this could clearly be stored on alternate media; the continuous media is often referred to as video; and the service or content provider is referred to as a retailer. For example, the system could be provided by a telecommunications company which would then provide this system for retailers to store content for supplying to users. Although these are particular examples of the more general situation, they will often be referred to in this way for simplicity of exposition.
The discussion begins with an approach for determining a media server's streaming capability. Because maintaining streaming quality is an important requirement in any media streaming service, the system needs to control the number of streaming sessions to ensure every session is running at high quality. Therefore, determining a server's streaming capacity is considered first. There are several configuration factors affecting the capability of server machines, including CPU power, main memory size (M), network bandwidth (NB), and storage access bandwidth for the local and shared depositories (SBDL, SBDS). In the determination method below, CPU power is excluded from the calculations since it is generally found that streaming service is an input/output-intensive application and the CPU power is much less significant than the other factors; if it may be a limiting factor in a particular application, it may be readily included with the other limiting factors. Another parameter that affects the streaming server's operation is the media data retrieval cycle time (p). Since media data is too large to fit into the main memory space of a processor in one access, it is reasonable to retrieve media data one block at a time from the media depository and send it to the client. The retrieve-and-send pattern repeats until the whole media data is sent. The media data retrieval cycle time corresponds to the time required for one such block to be played back or consumed by the user. In this scenario, the system can use a simple two-buffer scheme: one buffer holds the media data being sent by the networking module while the other stores the next media data block from the disk access module. The two buffers exchange roles at the next cycle. Based on the above discussion, an exemplary method to determine each media server's streaming capability can be defined.
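The retrieve-and-send pattern with two exchanged buffers can be sketched as below. For clarity the sketch is sequential; in an actual server the read of the next block would overlap in time with the send of the current one. The callback interface is an assumption made for illustration.

```python
# Minimal sketch of the two-buffer (double-buffering) retrieve-and-send
# scheme described above: one buffer is drained to the network while the
# other is filled from the depository, and the two swap roles each cycle.

def stream(read_block, send_block, num_blocks):
    buffers = [None, None]
    buffers[0] = read_block(0)                     # prefetch the first block
    for i in range(num_blocks):
        filling = (i + 1) % 2
        if i + 1 < num_blocks:
            buffers[filling] = read_block(i + 1)   # fill the idle buffer
        send_block(buffers[i % 2])                 # drain the active buffer
        # At the next cycle the two buffers exchange roles.
```

One retrieval cycle time p thus bounds how long the system has to fetch each next block before the networking module needs it.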
The streaming capability is abstracted into one single value SB_i, the number of concurrent streams for streaming video at bit rate b. This represents the upper limit on the streaming requests the server can accept without jeopardizing the streaming quality. To again simplify the discussion, the following uses a single media data retrieval cycle time, p, and a single bit rate, b, for streaming video. In the more general situation, these are independent for each server and would have distinct values p_i and b_i. The following formulae all generalize to this case in a straightforward manner.
Figure 9 shows a detail of Figure 8 to highlight the portions relevant to this discussion. A list of n total servers S_1-S_n 841-i are shown, each connected to its respective local media depository LMD_1-LMD_n 843-i. A single shared media depository SMD 845 is shown, to which m of the servers are connected, where 0 ≤ m ≤ n. These m servers are connected through block 847, which could be a fibre channel arbitrated loop or some other switching mechanism.
The ability of the servers can then be quantified by the quantities SB_i, server i's streaming bandwidth in number of streams, and SS_i, server i's space capacity. The first of these can be defined by

    SB_i = min(SBN_i, SBD_i, SBM_i),

where this expression ignores the CPU as a limiting factor. Similarly, if any of these factors is not found limiting, it can be ignored, or, conversely, other limiting factors can be added. In this expression, SBN_i is server i's aggregated network interface card bandwidth in terms of number of streams,

    SBN_i = ⌊NB_i / b⌋,

where ⌊ ⌋ around a quantity indicates the integer part, and in the more general case b would also have the subscript i here and below. SBD_i is server i's disk access bandwidth in terms of number of streams,

    SBD_i = ⌊SBDL_i / b⌋

if server i is not connected to the shared media depository, or

    SBD_i = ⌊SBDL_i / b⌋ + ⌊SBDS^t / (m·b)⌋

if server i is connected to the shared media depository. In this last expression, SBDL_i denotes the number of concurrent streams limited by the access module for the local depository, and similarly SBDS^t is the server disk access bandwidth for the shared media depository at time t. This last term assumes that the access to the shared depository is evenly divided among the m servers. In a more general arrangement, block 847 in Figure 9 could divide up SBDS^t among the m servers asymmetrically, with a fraction other than 1/m for each. SBM_i is server i's memory bandwidth in terms of number of streams. Here

    SBM_i = ⌊M_i / (2·b·p)⌋,

where b is replaced by b_i in the more general case and the factor of two is due to the two-buffer scheme used for the memory in the exemplary embodiment.
The expression for the streaming bandwidth can be described with reference to Figure 9. For the example of a server not connected to the shared media depository, consider server S_1 841-1. This can stream media from LMD_1 843-1 through 851-1 into server S_1 841-1. There it must pass through main memory MM 857-1 before passing on to the network along 855-1. The bottlenecks in this process can be the limit on the number of streams into the server from the depository along 851-1, the number of streams out of the server along 855-1, or the number of streams the server is able to concurrently handle through its main memory MM: these three effects correspond to the three terms in the expression for SB_i. If the server is connected to the shared depository, such as server S_n 841-n, then the number of streams from the shared media depository into the server along 853-n should also be included with those along 851-n.
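Under the assumptions above, SB_i can be computed as in this sketch. The parameter units are arbitrary (any consistent bandwidth unit works), and the even 1/m split of the shared depository follows the simplifying assumption in the text.

```python
# Sketch of the streaming-capability formula above:
#   SB_i = min(SBN_i, SBD_i, SBM_i)
# with floor division supplying the integer part. Names mirror the text.

def streaming_capability(nb, m_mem, sbdl, b, p, shared=None):
    """Concurrent streams server i can support at bit rate b.

    nb     - aggregated network interface bandwidth (NB_i)
    m_mem  - main memory size (M_i)
    sbdl   - local depository access bandwidth (SBDL_i)
    b      - stream bit rate
    p      - media data retrieval cycle time
    shared - (sbds_t, m) if connected to the shared depository: the
             shared bandwidth at time t, split evenly over m servers
    """
    sbn = nb // b                  # network limit: SBN_i = floor(NB_i / b)
    sbd = sbdl // b                # disk limit from the local depository
    if shared is not None:
        sbds_t, m = shared
        sbd += sbds_t // (m * b)   # plus the even 1/m share of the shared depository
    sbm = m_mem // (2 * b * p)     # memory limit under the two-buffer scheme
    return min(sbn, sbd, sbm)
```

For instance, a server whose network interface supports 60 streams but whose local disk supports 80 is network-limited, and SB_i = 60.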
The space capacity of server i, SS_i, can be expressed as

    SS_i = SSL_i

if server i is not connected to the shared media depository, or

    SS_i = SSL_i + <0, ..., SSS> = SSL_i + SSS^t

if server i is connected to the shared media depository. Here <0, ..., SSS> is the range of possible values, from zero to SSS, the total disk space of the shared media depository, and SSS^t is the disk space of the shared media depository within this range at a time t. When a new service provider, or retailer, joins the media streaming service, the provider can more effectively be assigned servers if a retailer profile is provided. The system can then allocate the resources to satisfy the new subscriber's requested requirements in light of the different servers' relative abilities. In this way, a high-power server can service more requests than one with less power. The resources mentioned here include disk space, disk access bandwidth, and network bandwidth. Although represented the same in the figures, each media server's configuration may be different. For instance, two media servers may have storage space of (10GB, 50GB), storage access bandwidth of (40 megabytes per second, 80 megabytes per second), and networking bandwidth of (240 megabits per second, 360 megabits per second).
A request from a new retailer, or for additional capability from a current retailer, can include a number of requirements. In this exemplary embodiment, a request is defined as

    R_i = <RB_i, RS_i, RF_i>,

where RB_i is retailer i's streaming requirements in terms of number of streams; RS_i is retailer i's space requirements (in terms of, for example, disk space); and RF_i is retailer i's fault tolerance requirements in fault tolerance degrees. The concept of fault tolerance can be defined in a number of ways. Here it is given in terms of the number of servers which the service provider can accept as non-functioning: if RF_i = 0, a fault tolerance of no servers down; if RF_i = 1, a fault tolerance of 1 server down while the service to the subscribers of retailer i can still work; if RF_i = j, a fault tolerance of j servers down while the service to the subscribers of retailer i can still work. Using this information, the system can then assign the new subscribers. In this allocation, the storage space usage must be within the system's limitations. For storage access bandwidth and networking bandwidth, the assigned resources may exceed the system limitation. This is possible since the allocation is dealing with the static scenario. In other words, the worst case is that every request is asking for streaming service on a particular server. The admission control component will and should recognize this possibility and deny the request in the first place. This should keep the system from being overloaded.
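The request triple R_i = <RB_i, RS_i, RF_i> can be encoded as a small record, shown here with hypothetical field names; the RF_i + 1 minimum-server count follows directly from the fault tolerance definition above.

```python
from typing import NamedTuple

# Hypothetical encoding of a retailer request R_i = <RB_i, RS_i, RF_i>;
# field names follow the symbols used in the text.

class RetailerRequest(NamedTuple):
    rb: int   # streaming requirement, in number of streams
    rs: int   # space requirement, e.g. in units of disk space
    rf: int   # fault tolerance degree: servers that may be down

def min_servers_needed(req: RetailerRequest) -> int:
    # A fault tolerance of RF_i means the service must survive RF_i
    # servers going down, so at least RF_i + 1 servers are needed.
    return req.rf + 1
```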
Figure 10 is a flow chart showing an exemplary embodiment of a retailer allocation scheme based on a request as defined above. This assumes that k retailers with requests R_l, l = 1, ..., k, are asking for new or additional allocations. If no retailers are currently allocated, the flow will use the initial capacities corresponding to the capabilities as defined above for the servers. Otherwise, it will start with the quantities as currently updated in the previous allocation determination, corresponding to step 1013 in Figure 10.
If more than one retailer l is making a request R_l, these requests are sorted in step 1001. This step sorts the R_l based on the following criteria:
- first, sort on RB_l such that RB_l ≥ RB_{l+1}, 1 ≤ l ≤ k-1;
- for the same bandwidth requirement, then sort on RS_l such that RS_l ≤ RS_{l+1}, 1 ≤ l ≤ k-1;
- for the same bandwidth and space requirements, then sort on RF_l such that RF_l ≥ RF_{l+1}, 1 ≤ l ≤ k-1.
This particular scheme emphasizes the number of streams requested, RB, relative to the other two quantities. This order could be rearranged, for example using the fault tolerance as the first stage in this sieve if the system wished to stress this quantity, or deleting the storage space requirement comparison RS if space limits are not a concern.
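The three-level sort of step 1001 can be sketched with a single composite key, relying on the fact that tuple comparison implements the tie-breaking order; the requests here are hypothetical (RB, RS, RF) tuples.

```python
# Sketch of the step-1001 sort: descending RB, then ascending RS on ties,
# then descending RF. Requests are (RB, RS, RF) tuples.

def sort_requests(requests):
    # Negating RB and RF in the key yields descending order for those
    # fields while RS stays ascending; Python's sort compares the tuple
    # keys element by element, giving the three-level tie-breaking.
    return sorted(requests, key=lambda r: (-r[0], r[1], -r[2]))
```

Swapping the key's elements would implement the rearranged orderings mentioned above, such as putting fault tolerance first.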
If only one request is presented, step 1001 is skipped and the process goes directly to step 1005. If there are multiple requests, the process goes through them one at a time in the order established in step 1001, beginning with R_1 in step 1003.
Step 1005 finds a list of servers which satisfy the following two criteria for j = 1 to n: first, that both SB_j ≥ RB_l and either

    ⌊SBDL_j / b⌋ ≥ RB_l or ⌊SBDS^t / (m·b)⌋ ≥ RB_l,

and, second, that SS_j ≥ RS_l and either SSL_j ≥ RS_l or SSS^t ≥ RS_l. This restricts consideration to only those servers which can supply both the requested number of streams from either their local or the shared media depository and the requested amount of storage in either their local or the shared media depository. The servers which meet this requirement form the server candidate list. In this simplified exemplary embodiment, a given server will, for a given request, consider storage on either its local or the shared media depository. In a more general arrangement, the media will be distributed over both, for example caching in its local media depository a portion of a title which is stored in full in the shared media depository, using an arrangement such as that described in U. S. patent application / , by Horng-Juing Lee, entitled "METHOD AND APPARATUS FOR CACHING FOR STREAMING DATA", filed on September 8, 2000, which was included by reference above.
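A sketch of the step-1005 filter follows. The dictionary layout for per-server state (SB_j, SBDL_j, SS_j, SSL_j, and a shared-connection flag) is an assumption made for illustration, as are the global shared-depository quantities passed in.

```python
# Sketch of step 1005: server j is a candidate for request (RB, RS) if it
# can supply both the requested streams and the requested space from
# either its local depository or the shared depository.

def candidate_servers(servers, rb, rs, sss_t, sbds_t, m, b):
    candidates = []
    for name, s in servers.items():
        # bandwidth criterion: SB_j >= RB and local or shared disk streams suffice
        shared_streams = sbds_t // (m * b) if s["shared"] else 0
        bw_ok = s["sb"] >= rb and (s["sbdl"] // b >= rb or shared_streams >= rb)
        # space criterion: SS_j >= RS and local or shared space suffices
        space_ok = s["ss"] >= rs and (s["ssl"] >= rs or (s["shared"] and sss_t >= rs))
        if bw_ok and space_ok:
            candidates.append(name)
    return candidates
```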
Step 1007 determines whether the server candidate list is large enough. If the number of servers in the server candidate list for request R_l is less than (RF_l + 1), then the system will deny the retailer request in block 1008 and move on to the next request. Otherwise, there are enough servers for the request and the list is sorted in step 1009.
In step 1009, the servers in the server candidate list are ordered such that

    SB_e / SS_e ≤ SB_{e+1} / SS_{e+1},  e = 1, ..., |server candidate list| - 1.

Thus they are ranked in an order based on their relative abilities, varying inversely with their space capacity, as accessible on the local and shared depositories, and varying directly with their streaming bandwidth. Unless this is the initial assignment for the system, these quantities are the updated values from a previous step 1013. Step 1011 then picks the first p = (RF_l + 1) servers from the sorted server candidate list. This is the selected server list <S_sel1, ..., S_selp>.
From this selected server list, the actual allocation is determined in step 1013. In terms of pseudo-code, this can be described as:

    For l = 1 to p {
      let x = S_sel_l
      case 1: Server x has enough local disk space and enough local disk access bandwidth from its local media depository:
        {update SB_x, SBD_x, SS_x, SSL_x}
      case 2: Server x has enough shared disk space, enough shared disk access bandwidth, and the space has not been allocated on the shared media depository before:
        {update SB_x, SBDS^t, SS_x, SSS^t}
      case 3: Server x has enough shared disk space, enough shared disk access bandwidth, and the space has already been allocated on the shared media depository:
        {update SB_x, SBDS^t}
    }

Here, case 1 is used if its condition is met; otherwise case 2 is used, unless its condition is not met, in which case case 3 is used. The resultant updated values are the values which will be used the next time through the allocation process.
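The three-case update of step 1013 can be sketched as below, with simplified per-server and shared-depository state. The field names and the exact quantities decremented are illustrative assumptions rather than the patent's precise bookkeeping.

```python
# Sketch of the step-1013 loop: for each selected server, prefer local
# placement (case 1), then a fresh shared placement (case 2), then reuse
# of space already allocated on the shared depository (case 3).

def allocate(selected, rb, rs, shared, already_on_shared=False):
    for s in selected:
        if s["sbdl_streams"] >= rb and s["ssl_free"] >= rs:        # case 1
            s["sbdl_streams"] -= rb
            s["ssl_free"] -= rs
            s["placement"] = "local"
        elif shared["streams"] >= rb and (already_on_shared or shared["space"] >= rs):
            shared["streams"] -= rb                                # cases 2 and 3
            if not already_on_shared:
                shared["space"] -= rs                              # case 2 only
                already_on_shared = True                           # later servers hit case 3
            s["placement"] = "shared"
    return selected, shared
```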
In step 1015, the current value of l is checked to see if all the requests have been dealt with. If not, the process continues back to step 1005, incrementing l and repeating until all k requests are treated. Once they have been, a retailer allocation table is created in step 1017. Figure 11 shows one example of how such a retailer allocation table can be constructed. This will list a retailer by its identifier and present the information from the request: the total number of streams, total storage (or disk) space, and the fault tolerance requirement. It will also give the associated information assigned in the allocation scheme, namely the locations assigned to the retailer in both the shared and the local media depositories.
Figure 12 is a media allocation table containing the information associated with a title stored in the media server cluster. Associated with a title will be its owner, given by the retailer's identifier, the locations within the local media depositories where it is stored, whether it is stored in the shared media depository, and a title license specifying how many copies of the title the retailer is entitled to use. When a media file from the retailer is ready to be put into a server, it is copied into the shared media depository, if the retailer has been allocated space there, or into the local media depositories if local media depository space has been allocated. The file may be contained in multiple local media depositories to allow multiple servers to provide the title from their respective local media depositories.
The admission control function determines if a request for a particular title will be accepted. When a subscriber's request comes in, if accepting the request would exceed either the retailer's stream license, in terms of the number of streams allocated for the retailer, or the title's stream license, in terms of the number of streams allocated for the title, the request is rejected. If this is not the case, the admission control accepts the request and determines the server to serve the request.
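A minimal sketch of this admission check follows, assuming simple per-retailer and per-title stream counters and license limits; the data shapes are illustrative assumptions.

```python
# Sketch of the admission check above: reject a subscriber request if it
# would exceed either the retailer's stream license or the title's stream
# license; otherwise admit it and bump both counters.

def admit(retailer, title, active_streams, licenses):
    if active_streams[retailer] >= licenses[retailer]["streams"]:
        return False                     # retailer stream license exhausted
    if active_streams[title] >= licenses[title]["streams"]:
        return False                     # title stream license exhausted
    active_streams[retailer] += 1
    active_streams[title] += 1
    return True
```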
Determining the server that will serve the request (for a read operation) is part of the load balancing function. When a request comes in it will be in the form <subscriber ID, media title>.
In a string of pseudo-code, this determination process can be implemented as:

    if (the title is in local media depositories only (possibly on multiple servers)) then {
      assign the request to the server with the lowest SB_i value within the group of servers on which the title resides
    } else if (the title is in the shared media depository only) then {
      assign the request to the server with the lowest SB_i value
    }

The workload table is subsequently updated to reflect the assignment. Figure 13 shows the structure of a workload table. This will list the retailer by its identifier and the title associated with the request. The server filling the request is then listed, along with whether the continuous media is being supplied from the identified server's local media depository or from the shared media depository.
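The read-path assignment plus workload-table update can be sketched as follows. The media-table and workload-table shapes are assumptions loosely modeled on Figures 12 and 13, and a current-load counter stands in for the SB_i value the text selects on.

```python
# Sketch of the read-path load balancer: pick the least-loaded server
# among those that can supply the title, then record the assignment in
# the workload table as (retailer, title, server, source).

def assign_read(request, media_table, server_load, workload_table):
    retailer, title = request
    entry = media_table[title]               # where the title is stored
    if entry["local_servers"]:               # title in local depositories only
        group = entry["local_servers"]
        source = "local"
    else:                                    # title in the shared depository only
        group = entry["shared_servers"]
        source = "shared"
    server = min(group, key=lambda s: server_load[s])
    server_load[server] += 1                 # update the workload
    workload_table.append((retailer, title, server, source))
    return server
```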
Determining the server that will serve a request for a write operation, when a retailer wants to input a file with a title license, is a process similar to that for the read operation. As a string of pseudo-code, this determination process can be implemented as:

    if (the subscriber's allocation is in the shared media depository) then {
      copy it into the shared media depository and update the media allocation table
    } else if (the subscriber's allocation is in multiple local media depositories) then {
      copy it into the multiple server locations and update the media allocation table
    }

Once this information is stored in the media allocation table, it can then be accessed for the media supplier to provide to a user upon request according to the previous procedure.
Architectural Features
The architecture of the present invention provides a number of features which improve upon the prior art described in the Background section. Three aspects of this architectural structure are described in some detail below: scalability and reliability, load balancing, and value-added E-commerce aspects.
The aspect of server scalability and reliability provides access transparency in locating a media server. From a client's point of view, there is no need to know where a particular piece of continuous media is actually stored or which one of the working media servers should be contacted to provide it. By exploiting web server(s) as the front-end request handler, client requests always go to the web server first. A component within the server called the "global streaming gateway" will, based on the request and the media server workloads, redirect the request to the appropriate working media server. From that point on, the streaming service is between the client and the assigned media server. The client need not know the exact media server location. The only thing the client needs to know is the request handler's URL link, nothing more.
This design provides easily scalable deployment. A system can start from a single media server, adding more media servers as the streaming demand increases. Adding/removing media servers from a media server cluster will not affect the client's connection practices. All client requests still go to the front-end web site.
The described structure also provides scalability and availability for the request handler(s). Since all client requests go through a web front end first, the availability of the request handlers should be assured. In addition, as the system expands, the number of requests increases accordingly. The request handlers preferably scale up as well to match the increase in requests. By using web server(s) as the front-end component, the design can take advantage of all kinds of web server scalability/reliability products, such as Microsoft's WLBS (Windows NT Load Balancing Service). This kind of product can detect a faulty web server within a web server cluster and automatically redistribute the requests to the other working web servers. In the meantime, it is easy to add an extra web server into the system without interfering with its operation. This design can ensure the request handler receives all client requests.
Using the media server cluster concept also supports server scalability. Since it uses multiple media servers to service streaming requests, when the streaming demand increases, the system administrator can simply add one more media server into the media server cluster. A client need not know about the newly installed media server at all.
Using a de-coupled design among media servers enhances the scalability and reliability. In this architecture, each media server handles multiple streaming sessions. When one streaming session is established between a client and a media server, the server handles all media streaming from start to end. This design cuts down the dependency between media servers. Therefore, when one server goes down, it will not bring down the other servers. When it is time to add a new server, it can simply plug into the media server cluster without affecting the rest. Server scalability is thus easy to reach and deploy.
In addition to the above benefit, the de-coupled design does not restrict the system to using a collection of equivalent media servers. For instance, in the beginning, all of the servers within the media server cluster can be equipped with, say, Ultra disks having a speed of 40 megabytes per second (MBPS). The system may later be expanded by adding a newer server configured differently, such as with an Ultra 2 disk having a speed of 80 MBPS, resulting in a heterogeneous server environment. Although the original media servers, having a 40 MBPS capability, have relatively less capability, lacking the advantages of the newer technology found in the newer server, all these servers of differing abilities may be retained in the system. This saves replacing all old servers with new ones each time a new server is introduced to the system, which would cost a great deal of money and be impractical, as the technology may again improve within a short period of time. With the help of the described load balancer, the de-coupled design can utilize the full capability of each individual media server despite the differences in performance. There are several aspects of load balancing. Load balancing is provided among the request handlers. Using web servers as the front-end request handlers enhances the architecture. By using an existing web server balancing product such as Microsoft's Windows NT Load Balancing Service (WLBS), the system can distribute requests among the participating web servers and achieve the goal of load balancing. It also provides load balancing within the media server cluster. As discussed above, each media server may not have the same capabilities. The load balancer recognizes this and tries to balance out the streaming requests. The load balancing scheme maintains an equal utilization ratio on each media server.
The design can also incorporate value-added E-commerce service by creating a reply web page based on the user profile. When a playback request comes in, it is represented as a URL link. In the present invention, the URL link represents a request rather than accessing an existing web page. After interpreting the URL link, the "global streaming gateway" component knows information such as the user name and the requested media clip. By dynamically generating a response web page, the client is redirected to the appropriate media server to start media playback. Based upon this dynamic feature and the known user name (and its profile information in the user database), the "ad inserter" component can select a template (filled with presentation styling and ad postings) from a template database and pass it to the "global streaming gateway" module. By combining the template with the object tag (indicating the playback information), the "global streaming gateway" can send back the reply HTML page to the client.
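The gateway flow above can be sketched as follows. This is a hypothetical illustration, not the patented implementation: the function names, the template contents, the `rtsp://` URL scheme, and the object-tag layout are all assumptions made for the example.

```python
# Hypothetical sketch of the "global streaming gateway" / "ad inserter" flow:
# parse a request URL, pick a template from the user's profile, and build a
# reply HTML page that redirects the client to the assigned media server.

from urllib.parse import urlparse, parse_qs

# Stand-in for the template database (presentation style + ad postings).
TEMPLATES = {
    "sports_fan": "<html><body>{ad}<p>Sports gear sale!</p>{player}</body></html>",
    "default":    "<html><body>{ad}{player}</body></html>",
}

def select_template(user_profile):
    """'Ad inserter' role: choose a template based on the user profile."""
    return TEMPLATES.get(user_profile.get("interest", "default"), TEMPLATES["default"])

def build_reply(url, user_db, assigned_server):
    """'Global streaming gateway' role: interpret the URL and build the reply page."""
    query = parse_qs(urlparse(url).query)
    user = query["user"][0]
    clip = query["clip"][0]
    template = select_template(user_db.get(user, {}))
    # The object tag carries the playback information (media server + clip).
    player = f'<object data="rtsp://{assigned_server}/{clip}"></object>'
    return template.format(ad="<img src='banner.gif'>", player=player)

page = build_reply(
    "http://gateway.example/play?user=alice&clip=movie1.mpg",
    {"alice": {"interest": "sports_fan"}},
    "media1.example",
)
```

The returned page combines the profile-selected ad template with the playback object tag, so the client both sees the targeted posting and is directed to the appropriate media server.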
Application Domains As already noted, the described structures and methods are suitable for continuous media besides video-related services, and are deliverable over both the Internet and Intranet environments. This includes (but is not limited to) video on demand service, video mail service, movie on demand service, etc.
Additionally, although this discussion has focused on streaming continuous media, these techniques extend to non-continuous data. This is particularly so where the amount of data being transmitted for a single media data title is, although not continuous, very large. An example is the transmission of an image, for example a high-resolution X-ray. Here the amount of data may be of sufficient size that it is more practical to transmit the particular media data title broken up into blocks, as is done for the continuous case. The limits on transmitting this data then become the same as for the continuous case, with similar storage and transmission bandwidth concerns.
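A minimal sketch of this block-based handling of a large non-continuous title follows; the block size and the 1 MB image size are illustrative assumptions, not values from the specification.

```python
# Splitting a large non-continuous media title (e.g. a high-resolution X-ray)
# into fixed-size blocks for transmission, mirroring the continuous-media case.

BLOCK_SIZE = 256 * 1024  # 256 KB per block; an illustrative choice

def split_into_blocks(data: bytes, block_size: int = BLOCK_SIZE):
    """Divide a media title into a sequence of blocks for transmission."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def reassemble(blocks):
    """Receiver side: concatenate the blocks back into the original title."""
    return b"".join(blocks)

image = bytes(1_000_000)  # stand-in for a 1 MB X-ray image
blocks = split_into_blocks(image)
assert reassemble(blocks) == image
```

Once the title is divided this way, the same per-block storage and bandwidth accounting used for continuous media applies to it.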
The media delivery system as described herein is robust, operationally efficient and cost-effective. In addition, the present invention may be used in connection with presentations of any type, including sales presentations and product/service promotions, which provide video service providers with additional revenue sources.
The processes, sequences or steps and features discussed herein are related to each other and each is believed independently novel in the art. The disclosed processes and sequences may be performed alone or in any combination to provide a novel and nonobvious file structure system suitable for a media delivery system. It should be understood that the processes and sequences in combination yield an equally independently novel combination as well, even if combined in their broadest sense.
Other Embodiments
The invention has now been described with reference to specific embodiments. Other embodiments will be apparent to those of skill in the art. In particular, a user digital information appliance has generally been illustrated as a personal computer. However, the digital computing device is meant to be any device for interacting with a remote data application, and could include such devices as a digitally enabled television, cell phone, personal digital assistant, etc.
Furthermore, while the invention has in some instances been described in terms of client/server application environments, this is not intended to limit the invention to only those logic environments described as client/server. As used herein, "client" is intended to be understood broadly to comprise any logic used to access data from a remote system and "server" is intended to be understood broadly to comprise any logic used to provide data to a remote system.
It is understood that the examples and embodiments described herein are for illustrative purposes only and that various modifications or changes in light thereof will be suggested by the teachings herein to persons skilled in the art and are to be included within the spirit and purview of this application and scope of the claims and their equivalents.
Embodiment in a Programmed Information Appliance
As shown in Figure 14, the invention can be implemented in hardware and/or software. In some embodiments of the invention, different aspects of the invention can be implemented in either client-side logic or server-side logic. As will be understood in the art, the invention or components thereof may be embodied in a fixed media program component containing logic instructions and/or data that when loaded into an appropriately configured computing device cause that device to perform according to the invention. As will be understood in the art, a fixed media program may be delivered to a user on a fixed media for loading in a user's computer, or a fixed media program can reside on a remote server that a user accesses through a communication medium in order to download a program component.
Figure 14 shows an information appliance (or digital device) 1400 that may be understood as a logical apparatus that can read instructions from media 1417 and/or network port 1419. Apparatus 1400 can thereafter use those instructions to direct server or client logic, as understood in the art, to embody aspects of the invention. One type of logical apparatus that may embody the invention is a computer system as illustrated in 1400, containing CPU 1407, optional input devices 1409 and 1411, disk drives 1415 and optional monitor 1405. Fixed media 1417 may be used to program such a system and may represent a disk-type optical or magnetic media, magnetic tape, solid state memory, etc. The invention may be embodied in whole or in part as software recorded on this fixed media. Communication port 1419 may also be used to initially receive instructions that are used to program such a system and may represent any type of communication connection.
The invention also may be embodied in whole or in part within the circuitry of an application specific integrated circuit (ASIC) or a programmable logic device (PLD). In such a case, the invention may be embodied in a computer understandable descriptor language which may be used to create an ASIC or PLD that operates as herein described.

Claims

WHAT IS CLAIMED IS:
1. A method of operating a system to supply media titles over a network, wherein each of said media titles is divided into a plurality of blocks, comprising: providing a media delivery system comprising: a plurality of media servers connected to the network, each connected to a respective local media depository to supply media stored therein to the network; a shared media depository, wherein two or more of the media servers are connected to the shared media depository to supply media stored therein to the network; receiving a request for a new service provider to use the media delivery system to provide media titles to users; and allocating the new service provider to one or more media servers based upon the relative ability of the media servers to provide media titles stored on the local and shared depositories to the network at the time of receiving the request.
2. The method of claim 1, wherein said media titles include continuous media.
3. The method of claim 2, wherein the relative ability of a media server varies directly with the number of streams the media server can provide from the local and shared depositories to the network.
4. The method of claim 3, wherein the relative ability of a media server is proportional to the ratio of the number of streams the media server can provide from the local and shared depositories to the network to the storage capacity on the respective local media depository and shared media depository that the server is able to access.
5. The method of claim 2, wherein the request includes the number of streams of stored media to the network that the new service provider requires, and wherein media servers are allocated only from among those servers that can provide at least as many such streams as requested.
6. The method of claim 5, wherein each of the servers has a maximum number of streams between itself and its respective local media depository and a maximum number of streams between itself and the shared media depository, and wherein the media servers are allocated only from among those servers that can provide at least as many streams between itself and one of either its respective local media depository or the shared media depository as the number of streams that the new service provider requires.
7. The method of claim 6, wherein the maximum number of streams between a server and its respective local media depository is the server's local storage access bandwidth divided by the server's streaming bit rate, and wherein the maximum number of streams between a server and the shared media depository is the server's shared storage access bandwidth at the time of the request divided by the product of the server's streaming bit rate and the number of servers connected to the shared media depository.
8. The method of claim 2, wherein the request includes the amount of storage the service provider requires, and wherein media servers are allocated only from among those servers that can provide at least as much storage as requested.
9. The method of claim 8, wherein at the time of the request each of the servers has a maximum amount of accessible storage space on its respective local media depository and a maximum amount of accessible storage space on the shared media depository, and wherein the media servers are allocated only from those servers for which either the maximum amount of accessible storage space on its respective local media depository or the maximum amount of accessible storage space on the shared media depository exceeds the amount of storage requested at the time of the request.
10. The method of claim 2, wherein the request includes the number of server failures that the service provider can tolerate, and wherein the number of media servers allocated exceeds the number of server failures that the service provider can tolerate.
11. The method of claim 10, wherein the media servers allocated are those with the lowest ratio of the number of streams the media server can provide from the local and shared depositories to the network at the time of receiving the request to the storage capacity on the respective local media depository and shared media depository that it is able to access at the time of receiving the request.
12. The method of claim 2, wherein the request includes the number of streams of stored media to the network that the new service provider requires and the amount of storage the new service provider requires, and wherein if an allocated server can provide the requested number of streams and requested amount of storage from the server's respective media depository, the server stores media from the service provider on the allocated server's local media depository.
13. The method of claim 2, wherein the request includes the number of streams of stored media to the network that the new service provider requires and the amount of storage the new service provider requires, and wherein if an allocated server can provide both the requested number of streams and requested amount of storage from the shared media depository but not from the server's respective local media depository, the server stores media from the service provider on the shared media depository.
14. A method of operating a system to supply media titles over a network, wherein each of said media titles is divided into a plurality of blocks, comprising: providing a media delivery system comprising: a plurality of media servers connected to the network, each connected to a respective local media depository to supply media stored therein to the network; a shared media depository, wherein two or more of the media servers are connected to the shared media depository to supply media stored therein to the network; receiving a plurality of requests for new service providers to use the media delivery system to provide media titles to users, wherein each of the requests includes the number of streams to the network that the new service provider requires; and allocating each of the new service providers to one or more media servers based upon the relative ability of the media servers to provide media titles stored on the local and shared depositories to the network at the time of receiving the requests, wherein service providers are allocated servers based upon the number of streams from the local and shared depositories to the network that the new service provider requires, service providers requiring more such streams being given higher priority.
15. The method of claim 14, wherein said media titles include continuous media.
16. The method of claim 15, wherein the relative ability of a media server varies directly with the number of streams the media server can provide from the local and shared depositories to the network.
17. The method of claim 16, wherein the relative ability of a media server is proportional to the ratio of the number of streams the media server can provide from the local and shared depositories to the network to the storage capacity on the respective local media depository and shared media depository that the server is able to access.
18. The method of claim 15, wherein media servers are allocated to a respective one of the new service providers only from among those servers that can provide at least as many such streams as requested by the respective one of the new service providers.
19. The method of claim 18, wherein each of the servers has a maximum number of streams between itself and its respective local media depository and a maximum number of streams between itself and the shared media depository, and wherein the media servers are allocated to the respective one of the new service providers only from among those servers that can provide at least as many streams between itself and one of either its respective local media depository or the shared media depository as the number of streams that the respective new service provider requires.
20. The method of claim 19, wherein the maximum number of streams between a server and its respective local media depository is the server's local storage access bandwidth divided by the server's streaming bit rate, and wherein the maximum number of streams between a server and the shared media depository is the server's shared storage access bandwidth at the time of the request divided by the product of the server's streaming bit rate and the number of servers connected to the shared media depository.
21. The method of claim 15, wherein each of the requests additionally includes the amount of storage the service provider requires, and wherein for service providers requiring the same number of streams from the local and shared depositories to the network, service providers are additionally allocated servers based upon the amount of storage the service provider requires, service providers requiring lesser amounts of storage being given higher priority.
22. The method of claim 21, wherein media servers are allocated to a respective one of the new service providers only from among those servers that can provide at least as much storage as requested by the respective one of the new service providers.
23. The method of claim 22, wherein at the time of allocating servers each of the servers has a maximum amount of accessible storage space on its respective local media depository and a maximum amount of accessible storage space on the shared media depository, and wherein the media servers are allocated to a respective one of the new service providers only from those servers for which either the maximum amount of accessible storage space on its respective local media depository or the maximum amount of accessible storage space on the shared media depository exceeds the amount of storage requested at the time of the respective service provider's request.
24. The method of any of claims 3, 5, 8, 16, 18 or 22, wherein each of the servers has a first maximum number of streams to the network, has a second maximum number of streams from the combination of the server's respective local media depository and the shared media depository, and can process a third maximum number of streams through the server's memory, wherein the number of combined streams a server can provide from the server's respective local media depository and the shared media depository to the network is the minimum of the first, second, and third maximum numbers.
25. The method of claim 24, wherein the maximum number of streams a server can process through the server's memory is the main memory size divided by twice the product of the server's streaming bit rate times the server's media data retrieval cycle time, the maximum number of streams to the network the server can provide is the aggregated networking bandwidth divided by the server's streaming bit rate, and the maximum number of streams to its respective local media depository and the shared media depository the server can provide is the sum of the respective local media depository storage access bandwidth divided by the server's streaming bit rate plus the shared media depository shared storage access bandwidth accessible at the time of the request divided by the product of the server's bit rate and the number of servers connected to the shared media depository.
26. The method of claim 21, wherein each of the requests additionally includes the number of server failures that the service provider can tolerate, and wherein for service providers requiring the same number of streams from the local and shared depositories to the network and the same amount of storage space, service providers are additionally allocated servers based upon the number of server failures that the service provider can tolerate, service providers that can tolerate more server failures being given higher priority.
27. The method of claim 26, wherein the number of media servers allocated to a given service provider exceeds the number of server failures that the given service provider can tolerate.
28. The method of claim 27, wherein the media servers allocated are those with the lowest ratio of the number of streams the media server can provide from the local and shared depositories to the network at the time of allocating servers to the storage capacity on the respective local media depository and shared media depository that it is able to access network at the time of allocating servers.
30. The method of claim 16, wherein if an allocated server can provide the requested number of streams and requested amount of storage from the server's respective media depository, the server stores media from the service provider on the allocated server's local media depository.
31. The method of claim 16, wherein if an allocated server can provide both the requested number of streams and requested amount of storage from the shared media depository but not from the server's respective local media depository, the server stores media from the service provider on the shared media depository.
32. A method of operating a system to supply continuous media over a network, comprising: providing a media delivery system comprising: a plurality of local media depositories connected to the network to supply media stored therein to the network; a shared media depository connected to the network to supply media stored therein to the network; receiving a request for a new service provider to use the media delivery system to provide continuous media to users, wherein the request includes information on the service provider's performance requirements; and allocating the new service provider a respective maximum number of streams from each of the local media depositories to the network and a maximum number of streams from the shared media depository to the network based upon the service provider's performance requirements.
33. The method of claim 32, wherein the service provider's performance requirements includes the number of streams of stored media to the network that the new service provider requires.
34. The method of claim 33, wherein the service provider's performance requirements further includes the amount of storage the service provider requires.
35. The method of claim 33, wherein one of either the maximum number of streams from each of the local media depositories to the network or the maximum number of streams from the shared media depository to the network is zero.
36. A method of supplying continuous media from a service provider over a network to one or more users, comprising: providing a media delivery system comprising: a plurality of local media depositories connected to the network to supply media identified by title stored therein to the network, wherein the service provider is allocated a respective maximum number of streams from each of the local media depositories; a shared media depository connected to the network to supply media identified by title stored therein to the network, wherein the service provider is allocated a maximum number of streams from the shared media depository; receiving a request from one of the users to receive one or more new streams of continuous media identified by title from the service provider; and accepting the request only if the resultant number of streams that the service provider will provide to the users if the request is accepted does not exceed any of said maximum number of streams allocated to the service provider within the media delivery system.
37. The method of claim 36, wherein the titles stored on the shared media depository are distinct from those on the local media depositories.
38. The method of claim 36, wherein for each of the titles stored therein a respective maximum number of streams is allocated, and further comprising accepting the request only if the resultant number of streams that the service provider will provide if the request is accepted for each of the titles does not exceed any of the respective maximum number of streams for each of the titles.
39. The method of claim 36, further comprising: assigning an accepted request to the server connected to supply the title which has the lowest maximum number of combined streams at the time of receiving the request.
40. A method of operating a system to supply continuous media over a network to a user, comprising: providing a media delivery system comprising: a plurality of media servers connected to the network, each connected to a respective local media depository to supply media identified by title stored therein to the network; a shared media depository, wherein two or more of the media servers are connected to the shared media depository to supply media identified by title stored therein to the network, and wherein each of the servers has a maximum number of combined streams to the network from the server's respective local media depository and the shared media depository; receiving a request from a user to receive a new stream of continuous media identified by title stored on the media delivery system; and assigning the request to the server connected to supply the title which has the lowest maximum number of combined streams at the time of receiving the request.
41. The method of claim 40, wherein the titles stored on the shared media depository are distinct from those on the local media depositories.
42. The method of claim 40, wherein each of the servers has a first maximum number of streams to the network, has a second maximum number of streams from the combination of the server's respective local media depository and the shared media depository, and can process a third maximum number of streams through the server's memory, wherein the maximum number of combined streams to the network from the server's respective local media depository and the shared media depository is the minimum of the first, second, and third maximum numbers.
43. A method of operating a system to supply continuous media over a network, comprising: providing a media delivery system comprising: a plurality of media servers connected to the network, each connected to a respective local media depository to supply media stored therein to the network; a shared media depository, wherein two or more of the media servers are connected to the shared media depository to supply media stored therein to the network; receiving a request for a new service provider to use the media delivery system to provide continuous media to users, wherein the request includes a fault tolerance requirement; and allocating the new service provider to one or more media servers based upon the fault tolerance requirement.
44. The method of claim 43, wherein the fault tolerance requirement includes the number of server failures that the service provider can tolerate, and wherein the number of media servers allocated exceeds the number of server failures that the service provider can tolerate.
45. A media delivery system for providing continuous media over a network, comprising: a plurality N of local media depositories to store continuous media; a plurality N of media servers connected to the network, each connected to a respective one of said plurality of N local media depositories to stream continuous media stored therein to the network; and a shared media depository to store continuous media, wherein a plurality of the media servers are connected to the shared media depository to stream continuous media stored therein to the network.
46. The media delivery system of claim 45, wherein said media is identified by title, and wherein a given title is stored in one of either the shared media depository or in a set of one or more of the local media depositories.
47. The media delivery system of claim 45, wherein not all of said media servers are of equivalent capability.
48. The media delivery system of claim 45, wherein each of the servers has a first maximum number of streams to the network, has a second maximum number of streams from the combination of the server's respective local media depository and the shared media depository, and can process a third maximum number of streams through the server's memory, wherein the number of combined streams a server can stream from the server's respective local media depository and the shared media depository to the network is the minimum of the first, second, and third maximum numbers.
49. The media delivery system of claim 48, wherein the maximum number of streams a server can process through the server's memory is the main memory size divided by twice the product of the server's streaming bit rate times the server's media data retrieval cycle time, the maximum number of streams to the network the server can provide is the aggregated networking bandwidth divided by the server's streaming bit rate, and the maximum number of streams to its respective local media depository and the shared media depository the server can provide is the sum of the respective local media depository storage access bandwidth divided by the server's streaming bit rate plus the shared media depository shared storage access bandwidth accessible at the time of the request divided by the product of the server's bit rate and the number of servers connected to the shared media depository.
50. The media delivery system of claim 45, wherein a plurality of service providers each store continuous media in the shared and local media depositories for streaming to one or more users over the network.
51. The media delivery system of claim 50, wherein said continuous media is identified by title, and wherein each of the service providers is allocated a respective maximum number of streams from the shared and local media depositories to the network for a given title.
52. The media delivery system of claim 50, wherein said continuous media is identified by title, and wherein for each of the service providers each title stored is stored in one of either the shared media depository or in a set of one or more of the local media depositories.
53. The media delivery system of claim 50, wherein each of the service providers is allocated a respective maximum number of streams from the shared and local media depositories to the network.
54. The media delivery system of claim 53, wherein each of said respective maximum number of streams is distributed between the media servers based upon the relative abilities of the media servers to provide continuous media stored on the local and shared media depositories to the network.
55. The media delivery system of claim 53, wherein the relative ability of each of said media servers is proportional to the ratio of the number of streams the media server can provide from the local and shared depositories to the network to the storage capacity on the respective local media depository and shared media depository that the server is able to access.
56. The media delivery system of claim 53, wherein each of said respective maximum number of streams is distributed between the media servers based upon the service provider's requirements for streaming continuous media in the shared and local media depositories to one or more users over the network.
57. A computer readable storage device embodying a program of instructions executable by a computer to perform a method of operating a system to supply continuous media over a network, said method comprising: providing a media delivery system comprising: a plurality of media servers connected to the network, each connected to a respective local media depository to supply media stored therein to the network; a shared media depository, wherein two or more of the media servers are connected to the shared media depository to supply media stored therein to the network; receiving a request for a new service provider to use the media delivery system to provide continuous media to users; and allocating the new service provider to one or more media servers based upon the relative ability of the media servers to provide continuous media stored on the local and shared depositories to the network at the time of receiving the request.
58. The method of claim 57, wherein the relative ability of a media server is proportional to the ratio of the number of streams the media server can provide from the local and shared depositories to the network to the storage capacity on the respective local media depository and shared media depository that the server is able to access.
59. The method of claim 57, wherein the request includes the number of streams of stored media to the network that the new service provider requires, and wherein media servers are allocated only from among those servers that can provide at least as many such streams as requested.
60. The method of claim 57, wherein the request includes the amount of storage the service provider requires, and wherein media servers are allocated only from among those servers that can provide at least as much storage as requested.
61. The method of claim 57, wherein the request includes the number of server failures that the service provider can tolerate, and wherein the number of media servers allocated exceeds the number of server failures that the service provider can tolerate.
62. A method for transmitting a program of instructions executable by a computer to perform a process of operating a system to supply continuous media over a network, said method comprising: causing the transmission to a client device of a program of instructions, thereby enabling the client device to perform, by means of such program, the following process: providing a media delivery system comprising: a plurality of media servers connected to the network, each connected to a respective local media depository to supply media stored therein to the network; a shared media depository, wherein two or more of the media servers are connected to the shared media depository to supply media stored therein to the network; receiving a request for a new service provider to use the media delivery system to provide continuous media to users; and allocating the new service provider to one or more media servers based upon the relative ability of the media servers to provide continuous media stored on the local and shared depositories to the network at the time of receiving the request.
63. The process of claim 62, wherein the relative ability of a media server is proportional to the ratio of the number of streams the media server can provide from the local and shared depositories to the network to the storage capacity on the respective local media depository and shared media depository that the server is able to access.
64. The process of claim 62, wherein the request includes the number of streams of stored media to the network that the new service provider requires, and wherein media servers are allocated only from among those servers that can provide at least as many such streams as requested.
65. The process of claim 62, wherein the request includes the amount of storage the service provider requires, and wherein media servers are allocated only from among those servers that can provide at least as much storage as requested.
66. The process of claim 62, wherein the request includes the number of server failures that the service provider can tolerate, and wherein the number of media servers allocated exceeds the number of server failures that the service provider can tolerate.
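The server-ranking rule recited in claims 57–66 (rank servers by the ratio of deliverable streams to accessible storage, and allocate only among servers that can supply the requested stream count) can be illustrated with a minimal sketch. The names `MediaServer`, `relative_ability`, and `allocate` are hypothetical, introduced only for illustration; they are not part of the claimed system.

```python
from dataclasses import dataclass


@dataclass
class MediaServer:
    name: str
    max_streams: int    # streams the server can supply from its local + shared depositories
    storage_bytes: int  # capacity of the depositories the server can access


def relative_ability(server: MediaServer) -> float:
    # Claims 58/63: ability is proportional to the streams-to-storage ratio.
    return server.max_streams / server.storage_bytes


def allocate(servers, streams_needed, count=1):
    # Claims 59/64: only servers able to supply at least the requested
    # number of streams are eligible for the new service provider.
    eligible = [s for s in servers if s.max_streams >= streams_needed]
    # Rank eligible servers by relative ability, best first, and take `count`.
    eligible.sort(key=relative_ability, reverse=True)
    return eligible[:count]
```

Under this rule a server with few streams but very little storage can outrank a larger server, since the ratio (not the raw stream count) drives the ordering.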
67. A computer readable storage device embodying a program of instructions executable by a computer to perform a method of operating a system to supply continuous media over a network, said method comprising: providing a media deliver system comprising: a plurality of media servers connected to the network, each connected to a respective local media depository to supply media stored therein to the network; a shared media depository, wherein two or more of the media servers are connected to the shared media depository to supply media stored therein to the network; receiving a plurality of requests for new service providers to use the media delivery system to provide continuous media to users, wherein each of the requests includes the number of streams to the network that the new service provider requires; and allocating each of the new service providers to one or more media servers based upon the relative ability of the media servers to provide continuous media stored on the local and shared depositories to the network at the time of receiving the requests, wherein service providers are allocated servers based upon the number of streams from the local and shared depositories to the network that the new service provider requires, service providers requiring more such streams being given higher priority.
68. The method of claim 67, wherein each of the requests additionally includes the amount of storage the service provider requires, and wherein for service providers requiring the same number of streams from the local and shared depositories to the network, service providers are additionally allocated servers based upon the amount of storage the service provider requires, service providers requiring lesser amounts of storage being given higher priority.
69. The method of claim 68, wherein each of the requests additionally includes the number of server failures that the service provider can tolerate, and wherein for service providers requiring the same number of streams from the local and shared depositories to the network and the same amount of storage space, service providers are additionally allocated servers based upon the number of server failures that the service provider can tolerate, service providers that can tolerate more server failures being given higher priority.
70. A method for transmitting a program of instructions executable by a computer to perform a process of operating a system to supply continuous media over a network, said method comprising: causing the transmission to a client device a program of instructions, thereby enabling the client device to perform, by means of such program, the following process: providing a media delivery system comprising: a plurality of media servers connected to the network, each connected to a respective local media depository to supply media stored therein to the network; a shared media depository, wherein two or more of the media servers are connected to the shared media depository to supply media stored therein to the network; receiving a plurality of requests for new service providers to use the media delivery system to provide continuous media to users, wherein each of the requests includes the number of streams to the network that the new service provider requires; and allocating each of the new service providers to one or more media servers based upon the relative ability of the media servers to provide continuous media stored on the local and shared depositories to the network at the time of receiving the requests, wherein service providers are allocated servers based upon the number of streams from the local and shared depositories to the network that the new service provider requires, service providers requiring more such streams being given higher priority.
71. The process of claim 70, wherein each of the requests additionally includes the amount of storage the service provider requires, and wherein for service providers requiring the same number of streams from the local and shared depositories to the network, service providers are additionally allocated servers based upon the amount of storage the service provider requires, service providers requiring lesser amounts of storage being given higher priority.
72. The process of claim 71, wherein each of the requests additionally includes the number of server failures that the service provider can tolerate, and wherein for service providers requiring the same number of streams from the local and shared depositories to the network and the same amount of storage space, service providers are additionally allocated servers based upon the number of server failures that the service provider can tolerate, service providers that can tolerate more server failures being given higher priority.
73. A computer readable storage device embodying a program of instructions executable by a computer to perform a method of operating a system to supply continuous media over a network, said method comprising: providing a media delivery system comprising: a plurality of local media depositories connected to the network to supply media stored therein to the network; a shared media depository connected to the network to supply media stored therein to the network; receiving a request for a new service provider to use the media delivery system to provide continuous media to users, wherein the request includes information on the service provider's performance requirements; and allocating the new service provider a respective maximum number of streams from each of the local media depositories to the network and a maximum number of streams from the shared media depository to the network based upon the service provider's performance requirements.
74. The method of claim 73, wherein the service provider's performance requirements include a fault tolerance requirement.
75. A method for transmitting a program of instructions executable by a computer to perform a process of operating a system to supply continuous media over a network, said method comprising: causing the transmission to a client device a program of instructions, thereby enabling the client device to perform, by means of such program, the following process: providing a media delivery system comprising: a plurality of local media depositories connected to the network to supply media stored therein to the network; a shared media depository connected to the network to supply media stored therein to the network; receiving a request for a new service provider to use the media delivery system to provide continuous media to users, wherein the request includes information on the service provider's performance requirements; and allocating the new service provider a respective maximum number of streams from each of the local media depositories to the network and a maximum number of streams from the shared media depository to the network based upon the service provider's performance requirements.
76. The process of claim 75, wherein the service provider's performance requirements include a fault tolerance requirement.
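The three-level priority ordering recited in claims 67–69 (and mirrored in claims 70–72) can be sketched as a single sort: requests are ranked first by streams required (more streams, higher priority), ties broken by storage required (less storage, higher priority), and remaining ties broken by tolerable server failures (more tolerable failures, higher priority). The names `ProviderRequest` and `prioritize` are hypothetical, used only to illustrate the ordering.

```python
from dataclasses import dataclass


@dataclass
class ProviderRequest:
    name: str
    streams: int   # streams required (claim 67: more streams, higher priority)
    storage: int   # storage required (claim 68: less storage, higher priority)
    failures: int  # tolerable server failures (claim 69: more, higher priority)


def prioritize(requests):
    # The tuple key encodes the three tie-breaking rules in claim order:
    # negate the fields where a larger value means higher priority, so an
    # ascending sort yields the claimed ordering.
    return sorted(requests, key=lambda r: (-r.streams, r.storage, -r.failures))
```

A tuple sort key keeps the ordering stable and makes the precedence of the three rules explicit: the secondary and tertiary fields only matter when the earlier fields tie.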
PCT/US2000/025899 1999-09-21 2000-09-21 Method and system for providing streaming media services WO2001022688A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU40225/01A AU4022501A (en) 1999-09-21 2000-09-21 Method and system for providing streaming media services

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15500099P 1999-09-21 1999-09-21
US60/155,000 1999-09-21

Publications (2)

Publication Number Publication Date
WO2001022688A1 WO2001022688A1 (en) 2001-03-29
WO2001022688A9 true WO2001022688A9 (en) 2002-10-03

Family

ID=22553734

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/025899 WO2001022688A1 (en) 1999-09-21 2000-09-21 Method and system for providing streaming media services

Country Status (3)

Country Link
AU (1) AU4022501A (en)
TW (1) TW529279B (en)
WO (1) WO2001022688A1 (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9106959B2 (en) 2004-07-30 2015-08-11 Broadband Itv, Inc. Method for adding or updating video content from internet sources to existing video-on-demand application of digital TV services provider system
US9113228B2 (en) 2004-07-30 2015-08-18 Broadband Itv, Inc. Method of addressing on-demand TV program content on TV services platform of a digital TV services provider
US9529870B1 (en) 2000-09-14 2016-12-27 Network-1 Technologies, Inc. Methods for linking an electronic media work to perform an action

Families Citing this family (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8464302B1 (en) 1999-08-03 2013-06-11 Videoshare, Llc Method and system for sharing video with advertisements over a network
US20020056123A1 (en) 2000-03-09 2002-05-09 Gad Liwerant Sharing a streaming video
US8495167B2 (en) 2001-08-02 2013-07-23 Lauri Valjakka Data communications networks, systems, methods and apparatus
US20030126263A1 (en) * 2001-12-31 2003-07-03 Gregg Fenton Multimedia load balancing architecture
DE10320889B3 (en) * 2003-05-09 2004-11-04 Ingo Wolf Method and device for generating and transmitting a television program via Ip-based media, in particular the Internet
US8589473B2 (en) 2004-05-19 2013-11-19 Telefonaktiebolaget L M Ericsson (Publ) Technique for handling initiation requests
US9641902B2 (en) 2007-06-26 2017-05-02 Broadband Itv, Inc. Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection
US11259059B2 (en) 2004-07-30 2022-02-22 Broadband Itv, Inc. System for addressing on-demand TV program content on TV services platform of a digital TV services provider
US9584868B2 (en) 2004-07-30 2017-02-28 Broadband Itv, Inc. Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection
US9344765B2 (en) 2004-07-30 2016-05-17 Broadband Itv, Inc. Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection
US8438297B1 (en) 2005-01-31 2013-05-07 At&T Intellectual Property Ii, L.P. Method and system for supplying media over communication networks
US11570521B2 (en) 2007-06-26 2023-01-31 Broadband Itv, Inc. Dynamic adjustment of electronic program guide displays based on viewer preferences for minimizing navigation in VOD program selection
KR101531960B1 (en) * 2008-02-29 2015-06-26 톰슨 라이센싱 Methods and apparatuses for providing load balanced signal distribution
CN101540886B (en) * 2009-04-15 2012-09-05 中兴通讯股份有限公司 Realization method and system of video-on-demand business and home streaming server
GB2531242A (en) 2014-09-11 2016-04-20 Piksel Inc Decision logic
US11153087B1 (en) 2015-12-29 2021-10-19 Amazon Technologies, Inc. Hub-based token generation and endpoint selection for secure channel establishment
US10455296B2 (en) 2016-07-21 2019-10-22 Newblue, Inc. Intelligent title cache system

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2130395C (en) * 1993-12-09 1999-01-19 David G. Greenwood Multimedia distribution over wide area networks
US6006264A (en) * 1997-08-01 1999-12-21 Arrowpoint Communications, Inc. Method and system for directing a flow between a client and a server

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9529870B1 (en) 2000-09-14 2016-12-27 Network-1 Technologies, Inc. Methods for linking an electronic media work to perform an action
US9536253B1 (en) 2000-09-14 2017-01-03 Network-1 Technologies, Inc. Methods for linking an electronic media work to perform an action
US9538216B1 (en) 2000-09-14 2017-01-03 Network-1 Technologies, Inc. System for taking action with respect to a media work
US9544663B1 (en) 2000-09-14 2017-01-10 Network-1 Technologies, Inc. System for taking action with respect to a media work
US9558190B1 (en) 2000-09-14 2017-01-31 Network-1 Technologies, Inc. System and method for taking action with respect to an electronic media work
US9106959B2 (en) 2004-07-30 2015-08-11 Broadband Itv, Inc. Method for adding or updating video content from internet sources to existing video-on-demand application of digital TV services provider system
US9113228B2 (en) 2004-07-30 2015-08-18 Broadband Itv, Inc. Method of addressing on-demand TV program content on TV services platform of a digital TV services provider
US9338487B2 (en) 2004-07-30 2016-05-10 Broadband Itv, Inc. System for addressing on-demand TV program content on TV services platform of a digital TV services provider
US9578376B2 (en) 2004-07-30 2017-02-21 Broadband Itv, Inc. Video-on-demand content delivery method for providing video-on-demand services to TV service subscribers

Also Published As

Publication number Publication date
WO2001022688A1 (en) 2001-03-29
TW529279B (en) 2003-04-21
AU4022501A (en) 2001-04-24

Similar Documents

Publication Publication Date Title
WO2001022688A9 (en) Method and system for providing streaming media services
US5805804A (en) Method and apparatus for scalable, high bandwidth storage retrieval and transportation of multimedia data on a network
US8001471B2 (en) Systems and methods for providing a similar offline viewing experience of online web-site content
US8015491B2 (en) Systems and methods for a single development tool of unified online and offline content providing a similar viewing experience
AU716842B2 (en) System and method for delivery of video data over a computer network
US6925499B1 (en) Video distribution system using disk load balancing by file copying
EP0966715B1 (en) System and method for selection and retrieval of diverse types of video data on a computer network
CA2267953C (en) Web serving system with primary and secondary servers
JP4732667B2 (en) Selective routing
US7426546B2 (en) Method for selecting an edge server computer
EP1320994B1 (en) Systems and method for interacting with users over a communications network
US20070204115A1 (en) Systems and methods for storage shuffling techniques to download content to a file
US20070201502A1 (en) Systems and methods for controlling the delivery behavior of downloaded content
Laursen et al. Oracle media server: providing consumer based interactive access to multimedia data
JP2004513411A (en) Content exchange device
US20020059574A1 (en) Method and apparatus for management and delivery of electronic content to end users
JP2004501559A (en) Viewer object proxy
WO1998004985A9 (en) Web serving system with primary and secondary servers
CN1972311A (en) A stream media server system based on cluster balanced load
JP2004514961A (en) Content tracking
EP0983559A1 (en) A system and method for optimizing the delivery of audio and video data over a computer network
Korkea-aho Scalability in Distributed Multimedia Systems
JP2004507806A (en) Overall health check on the client side
JP2004508614A (en) Content Manager
Calvagna et al. Design and implementation of a low-cost/high-performance Video on Demand server

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: C2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: C2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

COP Corrected version of pamphlet

Free format text: PAGES 1/16-16/16, DRAWINGS, REPLACED BY NEW PAGES 1/14-14/14; DUE TO LATE TRANSMITTAL BY THE RECEIVING OFFICE

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP