WO1996008112A1 - Video server system


Info

Publication number
WO1996008112A1
Authority
WO
WIPO (PCT)
Application number
PCT/GB1995/002113
Other languages
French (fr)
Inventor
Ashok Raj Saxena
Lorenzo Falcon, Jr.
Original Assignee
International Business Machines Corporation
IBM United Kingdom Limited
Application filed by International Business Machines Corporation and IBM United Kingdom Limited
Publication of WO1996008112A1

Classifications

    • H04N21/21 Server components or server architectures
    • H04N21/23 Processing of content or additional data; elementary server operations; server middleware
    • H04N5/76 Television signal recording
    • H04N7/17309 Transmission or handling of upstream communications
    • H04N7/17336 Handling of requests in head-ends
    • G06F3/0601 Interfaces specially adapted for storage systems
    • G06F3/0656 Data buffering arrangements
    • G06F3/0673 Single storage device


Abstract

A media streamer (10) includes at least one storage node (16) including mass storage units for storing a digital representation of at least one video presentation. The video presentation requires a time T to present in its entirety, and is stored as a plurality of N data blocks each corresponding to approximately a T/N period of the video presentation. The media streamer includes a plurality of communication nodes (14) each having at least one input port that is coupled to an output of the storage node for receiving a digital representation of a video presentation. Each communication node also includes a plurality of output ports which transmit a digital representation as a data stream to a consumer of the digital representation. The N data blocks are partitioned into X stripes, wherein data blocks 1, X+1, 2*X+1, ... etc., are associated with a first one of the X stripes, data blocks 2, X+2, 2*X+2, ... etc., are associated with a second one of the X stripes, etc., and wherein individual X stripes are each stored on a different one of the mass storage units.

Description

VIDEO SERVER SYSTEM
Field of the Invention
This invention relates to a video server system.
Background of the Invention
The playing of movies and video is today accomplished with rather old technology. The primary storage medium is analog tape, ranging from VHS recorders/players up to the very high quality and very expensive D1 VTRs used by television studios and broadcasters. There are many problems with this technology. A few such problems include: the manual labour required to load the tapes, the wear and tear on the mechanical units, tape heads, and the tape itself, and also the expense. One significant limitation that troubles broadcast stations is that the VTRs can only perform one function at a time, sequentially. Each tape unit costs from $75,000 to $150,000.
TV stations want to increase their revenues from commercials, which are nothing more than short movies, by inserting special commercials into their standard programs and thereby targeting each city as a separate market. This is a difficult task with tape technology, even with the very expensive digital D1 tape systems or tape robots.
Traditional methods of delivery of multimedia data to end users fall into two categories: 1) broadcast industry methods and 2) computer industry methods. Broadcast methods (including motion picture, cable, television network, and record industries) generally provide storage in the form of analog or digitally recorded tape. The playing of tapes causes isochronous data streams to be generated which are then moved through broadcast industry equipment to the end user. Computer methods generally provide storage in the form of disks, or disks augmented with tape, and record data in compressed digital formats such as DVI, JPEG and MPEG. On request, computers deliver non-isochronous data streams to the end user, where hardware buffers and special application code smooths the data streams to enable continuous viewing or listening.
Video tape subsystems have traditionally exhibited a cost advantage over computer disk subsystems due to the cost of the storage media. However, video tape subsystems have the disadvantages of tape management, access latency, and relatively low reliability. These disadvantages are increasingly significant as computer storage costs have dropped, in combination with the advent of the real-time digital compression/decompression techniques. Though computer subsystems have exhibited compounding cost/performance improvements, they are not generally considered to be "video friendly". Computers interface primarily to workstations and other computer terminals with interfaces and protocols that are termed "non-isochronous". To assure smooth (isochronous) delivery of multimedia data to the end user, computer systems require special application code and large buffers to overcome inherent weaknesses in their traditional communication methods. Also, computers are not video friendly in that they lack compatible interfaces to equipment in the multimedia industry which handle isochronous data streams and switch among them with a high degree of accuracy.
With the introduction of the use of computers to compress and store video material in digital format, a revolution has begun in several major industries such as television broadcasting, movie studio production, "Video on Demand" over telephone lines, pay-per-view movies in hotels, etc. Compression technology has progressed to the point where acceptable results can be achieved with compression ratios of 100x to 180x. Such compression ratios make random access disk technology an attractive alternative to prior art tape systems.
With the ability to randomly access digital disk data and the very high bandwidth of disk systems, the required system function and performance is within the performance, hardware cost, and expandability of disk technology. In the past, the use of disk files to store video or movies was never really a consideration because of the cost of storage. That cost has seen significant reductions in the recent past.
Summary of the Invention
It is an object of the invention to provide an improved video stream server system capable of being implemented on disk systems, although not limited thereto.
Accordingly, the invention provides a media streamer comprising at least one storage node comprising a plurality of mass storage units for storing a digital representation of at least one video presentation requiring a time T to present in its entirety and stored as a plurality of N data blocks each storing data corresponding to approximately a T/N period of the video presentation, and a plurality of communication nodes each having at least one input port that is coupled to an output of the at least one storage node for receiving a digital representation of a video presentation therefrom, each communication node further having a plurality of output ports each of which transmits a digital representation as a data stream to a consumer of the digital representation, wherein the N data blocks are partitioned into X stripes, wherein data blocks 1, X+1, 2*X+1, ... etc., are associated with a first one of the X stripes, data blocks 2, X+2, 2*X+2, ... etc., are associated with a second one of the X stripes, etc., and wherein different ones of the X stripes are each stored on a different one of the mass storage units.
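The following minimal Python sketch (illustrative only; the names are not taken from the disclosure) shows this round-robin assignment of data blocks to the X stripes:

    def stripe_of_block(block_number, num_stripes):
        """Return the 1-based stripe holding a 1-based data block: blocks
        1, X+1, 2*X+1, ... fall on stripe 1; blocks 2, X+2, ... on stripe 2,
        and so on, so consecutive blocks land on different mass storage units."""
        return (block_number - 1) % num_stripes + 1

    # Example: 12 data blocks striped over X = 4 mass storage units.
    X = 4
    for block in range(1, 13):
        print("block", block, "-> stripe", stripe_of_block(block, X))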
An embodiment of the invention described herein provides a "video friendly" computer subsystem which enables isochronous data stream delivery in a multimedia environment over traditional interfaces for that industry. A media streamer in accordance with the embodiment is optimized for the delivery of isochronous data streams and can stream data into new computer networks with ATM (Asynchronous Transfer Mode) technology. The embodiment eliminates the disadvantages of video tape while providing a VTR (video tape recorder) metaphor for system control. The embodiment provides the following features: scaleability to deliver from 1 to 1000's of independently controlled data streams to end users; an ability to deliver many isochronous data streams from a single copy of data; mixed output interfaces; mixed data rates; a simple "open system" control interface; automation control support; storage hierarchy support; and low cost per delivered stream.
Brief Description of the Drawings
Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
Fig. 1 is a block diagram of a media streamer embodying the invention;
Fig. 1A is a block diagram which illustrates further details of a circuit switch shown in Fig. 1;
Fig. 1B is a block diagram which illustrates further details of a tape storage node shown in Fig. 1;
Fig. 1C is a block diagram which illustrates further details of a disk storage node shown in Fig. 1;
Fig. 1D is a block diagram which illustrates further details of a communication node shown in Fig. 1;
Fig. 2 illustrates a list of video stream output control commands which are executed at high priority and a further list of data management commands which are executed at lower priority;
Fig. 3 is a block diagram illustrating communication node data flow;
Fig. 4 is a block diagram illustrating disk storage node data flow;
Fig. 5 illustrates control message flow to enable a connect to be accomplished;
Fig. 6 illustrates control message flow to enable a play to occur;
Fig. 7 illustrates interfaces which exist between the media streamer and client control systems;
Fig. 8 illustrates a display panel showing a plurality of "soft" keys used to operate the media streamer;
Fig. 9 illustrates a load selection panel that is displayed upon selection of the load soft key on Fig. 8;
Fig. 10 illustrates a batch selection panel that is displayed when the batch key in Fig. 8 is selected;
Fig. 11 illustrates several client/server relationships which exist between a client control system and the media streamer;
Fig. 12 illustrates a prior art technique for accessing video data and feeding it to one or more output ports;
Fig. 13 is a block diagram indicating how plural video ports can access a single video segment contained in a communications node cache memory;
Fig. 14 is a block diagram illustrating how plural video ports have direct access to a video segment contained in cache memory on the disk storage node;
Fig. 15 illustrates a memory allocation scheme;
Fig. 16 illustrates a segmented logical file for a video 1;
Fig. 17 illustrates how the various segments of video 1 are striped across a plurality of disk drives;
Fig. 18 illustrates a prior art switch interface between a storage node and a cross bar switch;
Fig. 19 illustrates how the prior art switch interface shown in Fig. 18 is modified to provide extended output bandwidth for a storage node;
Fig. 20 is a block diagram illustrating a procedure for assuring constant video output to a video output bus;
Fig. 21 illustrates a block diagram of a video adapter used in converting digital video data to analog video data; and
Fig. 22 is a block diagram showing control modules that enable SCSI bus commands to be employed to control the video adapter card of Fig. 21.
Detailed Description of the Embodiments
GLOSSARY
In the following description, a number of terms are used that are described below:
AAL-5 ATM ADAPTATION LAYER-5: Refers to a class of ATM service suitable for data transmission.
ATM ASYNCHRONOUS TRANSFER MODE: A high speed switching and transport technology that can be used in a local or wide area network, or both. It is designed to carry both data and video/audio.
Betacam A professional quality analog video format.
CCIR 601 A standard resolution for digital television. 720 x 480 (for NTSC) or 720 x 576 (for PAL) luminance, with chrominance subsampled 2:1 horizontally.
CPU CENTRAL PROCESSING UNIT: In computer architecture, the main entity that processes computer instructions.
CRC CYCLIC REDUNDANCY CHECK: A data error detection scheme.
D1 Digital video recording format conforming to CCIR 601. Records on 19mm video tape.
D2 Digital video recording format conforming to SMPTE 244M. Records on 19mm video tape.
D3 Digital video recording format conforming to SMPTE 244M. Records on 1/2" video tape.
DASD DIRECT ACCESS STORAGE DEVICE: Any on-line data storage device or CD-ROM player that can be addressed is a DASD. Used synonymously with magnetic disk drive.
DMA DIRECT MEMORY ACCESS: A method of moving data in a computer architecture that does not require the CPU to move the data.
DVI A relatively low quality digital video compression format usually used to play video from CD-ROM disks to computer screens.
E1 European equivalent of T1.
FIFO FIRST IN FIRST OUT: Queue handling method that operates on a first-come, first-served basis.
GenLock Refers to a process of synchronization to another video signal. It is required in computer capture of video to synchronize the digitizing process with the scanning parameters of the video signal.
I/O INPUT/OUTPUT
Isochronous Used to describe information that is time sensitive and that is sent (preferably) without interruptions. Video and audio data sent in real time are isochronous.
JPEG JOINT PHOTOGRAPHIC EXPERTS GROUP: A working committee under the auspices of the International Standards Organization that is defining a proposed universal standard for digital compression of still images for use in computer systems.
KB KILO BYTES: 1024 bytes.
LAN LOCAL AREA NETWORK: High-speed transmission over twisted pair, coax, or fibre optic cables that connect terminals, computers and peripherals together at distances of about a mile or less.
LRU LEAST RECENTLY USED
MPEG MOVING PICTURE EXPERTS GROUP: A working committee under the auspices of the International Standards Organization that is defining standards for the digital compression/decompression of motion video/audio. MPEG-1 is the initial standard and is in use. MPEG-2 will be the next standard and will support digital, flexible, scaleable video transport. It will cover multiple resolutions, bit rates and delivery mechanisms.
MPEG-1, MPEG-2 See MPEG.
MRU MOST RECENTLY USED
MTNU MOST TIME TO NEXT USE
NTSC format NATIONAL TELEVISION STANDARDS COMMITTEE: The colour television format that is the standard in the United States and Japan.
PAL format PHASE ALTERNATION LINE: The colour television format that is the standard for Europe except for France.
PC PERSONAL COMPUTER: A relatively low cost computer that can be used for home or business.
RAID REDUNDANT ARRAY of INEXPENSIVE DISKS: A storage arrangement that uses several magnetic or optical disks working in tandem to increase bandwidth output and to provide redundant backup.
SCSI SMALL COMPUTER SYSTEM INTERFACE: An industry standard for connecting peripheral devices and their controllers to a computer.
SIF SOURCE INPUT FORMAT: One quarter the CCIR 601 resolution.
SMPTE SOCIETY OF MOTION PICTURE & TELEVISION ENGINEERS.
SSA SERIAL STORAGE ARCHITECTURE: A standard for connecting peripheral devices and their controllers to computers. A possible replacement for SCSI.
T1 Digital interface into the telephone network with a bit rate of 1.544 Mb/sec.
TCP/IP TRANSMISSION CONTROL PROTOCOL/INTERNET PROGRAM: A set of protocols developed by the Department of Defense to link dissimilar computers across networks.
VHS VERTICAL HELICAL SCAN: A common format for recording analog video on magnetic tape.
VTR VIDEO TAPE RECORDER: A device for recording video on magnetic tape.
VCR VIDEO CASSETTE RECORDER: Same as VTR.
A. GENERAL ARCHITECTURE
A video optimized stream server system 10 (hereafter referred to as media streamer) is shown in Fig. 1 and includes four architecturally distinct components to provide scaleability, high availability and configuration flexibility. The major components follow:
1) Low Latency Switch 12: a hardware/microcode component with a primary task of delivering data and control information between Communication Nodes 14, one or more Storage Nodes 16, 17 and one or more Control Nodes 18.
2) Communication Node 14: a hardware/microcode component with the primary task of enabling the "playing" (delivering data isochronously) or "recording" (receiving data isochronously) over an externally defined interface usually familiar to the broadcast industry: NTSC, PAL, D1, D2, etc. The digital-to-video interface is embodied in a video card contained in a plurality of video ports 15 connected at the output of each communication node 14.
3) Storage Node 16, 17: a hardware/microcode component with the primary task of managing a storage medium such as disk and associated storage availability options.
4) Control Node 18: a hardware/microcode component with the primary task of receiving and executing control commands from an externally defined subsystem interface familiar to the computer industry.
A typical 64-node media streamer implementation might contain 31 communication nodes, 31 storage nodes, and 2 control nodes interconnected by the low latency switch 12. A smaller system might contain no switch and a single hardware node that supports communications, storage and control functions. The design of media streamer 10 allows a small system to grow to a large system in the customer installation. In all configurations, the functional capability of media streamer 10 can remain the same except for the number of streams delivered and the number of multimedia hours stored.
In Fig. 1A, further details of low latency switch 12 are shown. A plurality of circuit switch chips (not shown) are interconnected on crossbar switch cards 20 which are interconnected via a planar board (schematically shown). The planar and a single card 20 constitute a low latency crossbar switch with 16 node ports. Additional cards 20 may be added to configure additional node ports and, if desired, active redundant node ports for high availability. Each port of the low latency switch 12 enables, by example, a 25 megabyte per second, full duplex communication channel.
Information is transferred through the switch 12 in packets. Each packet contains a header portion that controls the switching state of individual crossbar switch points in each of the switch chips. The control node 18 provides the other nodes (storage nodes 16, 17 and communication nodes 14) with the information necessary to enable peer-to-peer operation via the low latency switch 12.
In Fig. 1B, internal details of a tape storage node 17 are illustrated. As will be hereafter understood, tape storage node 17 provides a high capacity storage facility for storage of digital representations of video presentations.
As employed herein a video presentation can include one or more images that are suitable for display and/or processing. A video presentation may include an audio portion. The one or more images may be logically related, such as sequential frames of a film, movie, or animation sequence. The images may originally be generated by a camera, by a digital computer, or by a combination of a camera and a digital computer. The audio portion may be synchronized with the display of successive images. As employed herein a data representation of a video presentation can be any suitable digital data format for representing one or more images and possibly audio. The digital data may be encoded and/or compressed.
Referring again to Fig. 1B, a tape storage node 17 includes a tape library controller interface 24 which enables access to multiple tape records contained in a tape library 26. A further interface 28 enables access to other tape libraries via an SCSI bus interconnection. An internal system memory 30 enables a buffering of video data received from either of interfaces 24 or 28, or via DMA data transfer path 32. System memory block 30 may be a portion of a PC 34 which includes software 36 for tape library and file management actions. A switch interface and buffer module 38 (used also in disk storage nodes 16, communication nodes 14, and control nodes 18) enables interconnection between the tape storage node 17 and low latency switch 12. That is, the module 38 is responsible for partitioning a data transfer into packets and adding the header portion to each packet that the switch 12 employs to route the packet. When receiving a packet from the switch 12 the module 38 is responsible for stripping off the header portion before locally buffering or otherwise handling the received data.
Video data from tape library 26 is entered into system memory 30 in a first buffering action. Next, in response to initial direction from control node 18, the video data is routed through low latency switch 12 to a disk storage node 16 to be made ready for substantially immediate access when needed.
In Fig. 1C, internal details of a disk storage node 16 are shown. Each disk storage node 16 includes a switch interface and buffer module 40 which enables data to be transferred to and from a RAID buffer video cache and storage interface module 42. Interface 42 passes received video data onto a plurality of disks 45, spreading the data across the disks in a quasi-RAID fashion. Details of RAID memory storage are known in the prior art and are described in "A Case for Redundant Arrays of Inexpensive Disks (RAID)", Patterson et al., ACM SIGMOD Conference, Chicago, IL, June 1-3, 1988, pages 109-116.
A disk storage node 16 further has an internal PC 44 which includes software modules 46 and 48 which, respectively, provide storage node control, video file and disk control, and RAID mapping for data stored on disks 45. In essence, each disk storage node 16 provides a more immediate level of video data availability than a tape storage node 17. Each disk storage node 16 further is enabled to buffer (in a cache manner) video data in a semiconductor memory of switch interface and buffer module 40 so as to provide even faster availability of video data, upon receiving a request therefor.
In general, a storage node includes a mass storage unit (or an interface to a mass storage unit) and a capability to locally buffer data read from or to be written to the mass storage unit. The storage node may include sequential access mass storage in the form of one or more tape drives and/or disk drives, and may include random access storage, such as one or more disk drives accessed in a random access fashion and/or semiconductor memory.
In Fig. 1D, a block diagram is shown of internal components of a communications node 14. Similar to each of the above noted nodes, communication node 14 includes a switch interface and buffer module 50 which enables communications with low latency switch 12 as described previously. Video data is directly transferred between switch interface and buffer module 50 and a stream buffer and communication interface 52 for transfer to a user terminal (not shown). A PC 54 includes software modules 56 and 58 which provide, respectively, communication node control (e.g., stream start/stop actions) and enable the subsequent generation of an isochronous stream of data. An additional input 60 to stream buffer and communication interface 52 enables frame synchronization of output data. That data is received from automation control equipment 62 which is, in turn, controlled by a system controller 64 that exerts overall operational control of the stream server 10 (see Fig. 1). System controller 64 responds to inputs from user control set top boxes 65 to cause commands to be generated that enable media streamer 10 to access a requested video presentation. System controller 64 is further provided with a user interface and display facility 66 which enables a user to input commands, such as by hard or soft buttons, and other data to enable an identification of video presentations, the scheduling of video presentations, and control over the playing of a video presentation.
Each control node 18 is configured as a PC and includes a switch interface module for interfacing with low latency switch 12. Each control node 18 responds to inputs from system controller 64 to provide information to the communication nodes 14 and storage nodes 16, 17 to enable desired interconnections to be created via the low latency switch 12.
Furthermore, control node 18 includes software for enabling staging of requested video data from one or more of disk storage nodes 16 and the delivery of the video data, via a stream delivery interface, to a user display terminal. Control node 18 further controls the operation of both tape and disk storage nodes 16, 17 via commands sent through low latency switch 12.
The media streamer has three architected external interfaces, shown in Fig. 1. The external interfaces are:
1) Control Interface: an open system interface executing TCP/IP protocol (Ethernet LAN, TokenRing LAN, serial port, modem, etc.)
2) Stream Delivery Interface: one of several industry standard interfaces designed for the delivery of data streams (NTSC, D1, etc.).
3) Automation Control Interface: a collection of industry standard control interfaces for precise synchronization of stream outputs (GenLock, BlackBurst, SMPTE clock, etc.)
Application commands are issued to media streamer 10 over the control interface. When data load commands are issued, the control node breaks the incoming data file into segments (i.e. data blocks) and spreads them across one or more storage nodes. Material density and the number of simultaneous users of the data affect the placement of the data on storage nodes 16, 17. Increasing density and/or simultaneous users implies the use of more storage nodes for capacity and bandwidth.
When commands are issued over the control interface to start the streaming of data to an end user, control node 18 selects and activates an appropriate communication node 14 and passes control information indicating to it the location of the data file segments on the storage nodes 16, 17. The communications node 14 activates the storage nodes 16, 17 that need to be involved and proceeds to communicate with these nodes, via command packets sent through the low latency switch 12, to begin the movement of data.
Data is moved between disk storage nodes 16 and communication nodes 14 via low latency switch 12 and "just in time" scheduling algorithms. The technique used for scheduling and data flow control is more fully described below. The data stream that is emitted from a communication node interface 14 is multiplexed to/from disk storage nodes 16 so that a single communication node stream uses a fraction of the capacity and bandwidth of each disk storage node 16. In this way, many communication nodes 14 may multiplex access to the same or different data on the disk storage nodes 16. For example, media streamer 10 can provide 1500 individually controlled end user streams from the pool of communication nodes 14, each of which is multiplexing accesses to a single multimedia file spread across the disk storage nodes 16. This capability is termed "single copy multiple stream".
The commands that are received over the control interface are executed in two distinct categories. Those which manage data and do not relate directly to stream control are executed at "low priority". This enables an application to load new data into the media streamer 10 without interfering with the delivery of data streams to end users. The commands that affect stream delivery (i.e. output) are executed at "high priority".
The control interface commands are shown in Fig. 2. The low priority data management commands for loading and managing data in media streamer 10 include VS-CREATE, VS-OPEN, VS-READ, VS-WRITE, VS-GET_POSITION, VS-SET_POSITION, VS-CLOSE, VS-RENAME, VS-DELETE, VS-GET_ATTRIBUTES, and VS-GET_NAMES. The high priority stream control commands for starting and managing stream outputs include VS-CONNECT, VS-PLAY, VS-RECORD, VS-SEEK, VS-PAUSE, VS-STOP and VS-DISCONNECT. Control node 18 monitors stream control commands to assure that requests can be executed. This "admission control" facility in control node 18 may reject requests to start streams when the capabilities of media streamer 10 are exceeded. This may occur in several circumstances, which are listed below and sketched thereafter:
1) when some component fails in the system that prevents maximal operation;
2) when a specified number of simultaneous streams to a data file (as specified by parameters of a VS-CREATE command) is exceeded; and
3) when a specified number of simultaneous streams from the system, as specified by an installation configuration, is exceeded.
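A minimal illustrative sketch of such an admission check (the names and limit values here are hypothetical, not the patent's code) is:

    def admit_stream(file_limit, file_active, system_limit, system_active, degraded):
        """Illustrative admission check mirroring the three rejection cases:
        a failed component, the per-file simultaneous-stream limit set at
        VS-CREATE time, and the installation-wide stream limit."""
        if degraded:                          # 1) component failure prevents maximal operation
            return False
        if file_active >= file_limit:         # 2) per-file simultaneous-stream limit exceeded
            return False
        if system_active >= system_limit:     # 3) installation-wide stream limit exceeded
            return False
        return True

    # e.g. a file created for at most 100 simultaneous streams on a healthy system
    print(admit_stream(100, 99, 1500, 1200, False))   # True
    print(admit_stream(100, 100, 1500, 1200, False))  # False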
The communication nodes 14 are managed as a heterogeneous group, each with a potentially different bandwidth (stream) capability and physical definition. The VS-CONNECT command directs media streamer 10 to allocate a communication node 14 and some or all of its associated bandwidth enabling isochronous data stream delivery. For example, media streamer 10 can play uncompressed data stream(s) through communication node(s) 14 at 270 MBits/sec while simultaneously playing compressed data stream(s) at much lower data rates (usually 1-16 Mbits/sec) on other communication nodes 14.
Storage nodes 16, 17 are managed as a heterogeneous group, each with a potentially different bandwidth (stream) capability and physical definition. The VS-CREATE command directs media streamer 10 to allocate storage in one or more storage nodes 16, 17 for a multimedia file and its associated metadata. The VS-CREATE command specifies both the stream density and the maximum number of simultaneous users required.
Three additional commands support automation control systems in the broadcast industry: VS-CONNECT-LIST, VS-PLAY-AT-SIGNAL and VS-RECORD-AT-SIGNAL. VS-CONNECT-LIST allows applications to specify a sequence of play commands in a single command to the subsystem. Media streamer 10 will execute each play command as if it were issued over the control interface but will transition between the delivery of one stream and the next seamlessly. An example sequence follows:
1) Control node 18 receives a VS-CONNECT-LIST command with play subcommands indicating that all or part of FILE1, FILE2 and FILE3 are to be played in sequence. Control node 18 determines the maximum data rate of the files and allocates that resource on a communication node 14. The allocated communication node 14 is given the detailed play list and initiates the delivery of the isochronous stream.
2) Near the end of the delivery of FILE1, the communication node 14 initiates the delivery of FILE2 but does not enable it to the output port of the node. When FILE1 completes or a signal from the Automation Control Interface occurs, the communication node 14 switches the output port from the first stream to the second. This is done within 1/30th of a second or within one standard video frame time.
3) The communication node 14 deallocates resources associated with FILE1.
VS-PLAY-AT-SIGNAL and VS-RECORD-AT-SIGNAL allow signals from the external Automation Control Interface to enable data transfer for play and record operations with accuracy to a video frame boundary. In the previous example, the VS-CONNECT-LIST includes a PLAY-AT-SIGNAL subcommand to enable the transition from FILE1 to FILE2 based on the external automation control interface signal. If the subcommand were VS-PLAY instead, the transition would occur only when the FILE1 transfer was completed.
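The seamless transition behind VS-CONNECT-LIST and VS-PLAY-AT-SIGNAL can be pictured with a short, purely illustrative sketch; the file names, frame counts and signal hook below are invented for illustration:

    FRAME_TIME = 1.0 / 30.0   # one standard video frame time, in seconds

    def play_sequence(play_list, near_end_frames=60, signal=lambda: False):
        """Illustrative sketch: the next file is pre-started while its output
        stays disabled, and the output port is switched on completion (VS-PLAY)
        or on an external automation signal (VS-PLAY-AT-SIGNAL)."""
        switches = []
        for i, (name, frames) in enumerate(play_list):
            prestarted = False
            for f in range(frames):
                if not prestarted and i + 1 < len(play_list) and frames - f <= near_end_frames:
                    prestarted = True        # begin delivering the next file, output port disabled
                if signal():
                    break                    # PLAY-AT-SIGNAL: switch early on the external signal
            switches.append(name)            # switch the output port within one frame time
        return switches

    print(play_sequence([("FILE1", 300), ("FILE2", 300), ("FILE3", 300)]))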
Other commands that media streamer 10 executes provide the ability to manage storage hierarchies. These commands are: VS-DUMP, VS-RESTORE, VS-SEND, VS-RECEIVE and VS-RECEIVE_AND_PLAY. Each causes one or more multimedia files to move between storage nodes 16 and two externally defined hierarchical entities.
1) VS-DUMP and VS-RESTORE enable movement of data between disk storage nodes 16 and a tape storage unit 17 accessible to control node 18. Data movement may be initiated by the controlling application or automatically by control node 18.
2) VS-SEND and VS-RECEIVE provide a method for transmitting a multimedia file to another media streamer. Optionally, the receiving media streamer can play the incoming file immediately to a preallocated communication node without waiting for the entire file.
In addition to the modular design and function set defined in the media streamer architecture, data flow is optimized for isochronous data transfer to significantly reduce cost. In particular:
1) bandwidth of the low latency switch exceeds that of the attached nodes; communications between nodes is nearly non-blocking;
2) data movement into processor memory is avoided, so more bandwidth is available;
3) processing of data is avoided, so expensive processing units are eliminated; and
4) data movement is carefully scheduled so that large data caches are avoided.
In traditional computer terms, media streamer 10 functions as a system of interconnected adapters with an ability to perform peer-to-peer data movement between themselves through the low latency switch 12. The low latency switch 12 has access to data storage and moves data segments from one adapter's memory to that of another without "host computer" intervention.
B. HIERARCHICAL MANAGEMENT OF DIGITAL COMPRESSED VIDEO DATA FOR ISOCHRONOUS DELIVERY
Media streamer 10 provides hierarchical storage elements. It exhibits a design that allows scaleability from a very small video system to a very large system. It also provides a flexibility for storage management to adapt to the varied requirements necessary to satisfy functions of Video on Demand, Near Video on Demand, commercial insertion, and high quality uncompressed video storage, capture and playback.
B1. TAPE STORAGE
In media streamer 10, video presentations are moved from high performance digital tape to disk, to be played out at the much lower data rate required by the end user. In this way, only a minimum amount of video time is stored on the disk subsystem. If the system is "Near Video on Demand", then only, by example, 5 minutes of each movie need be in disk storage at any one time. This requires only 22 segments of 5 minutes each for a typical 2 hour movie. The result is that the total disk storage requirement for a video presentation is reduced, since not all of the video presentation is kept on the disk file at any one time. Only that portion of the presentation that is being played need be present in the disk file.
In other words, if a video presentation requires a time T to present in its entirety, and is stored as a digital representation having N data blocks, then each data block stores a portion of the video presentation that corresponds to approximately a T/N period of the video presentation. A last data block of the N data blocks may store less than a T/N period.
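A small illustrative sketch of this T/N relationship, using assumed values rather than figures from the claims, maps a playback position to the data block that must be resident:

    def block_for_position(position_s, total_s, n_blocks):
        """1-based index of the data block covering playback time position_s,
        when N blocks each hold roughly T/N of a presentation lasting T seconds
        (the last block may hold less)."""
        block_len = total_s / n_blocks            # ~T/N seconds of material per block
        return min(int(position_s // block_len) + 1, n_blocks)

    # e.g. a two-hour presentation held as 5-minute blocks (T = 7200 s, N = 24):
    print(block_for_position(1800.0, 7200.0, 24))   # playback minute 30 -> block 7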
As demand on the system grows and the number of streams increases, the statistical average is that about 25% of video stream requests will be for the same movie, but at different sub-second time intervals, and the distribution of viewers will be such that more than 50% of those sub-second demands will fall within a group of 15 movie segments.
An aspect of this invention is the ability to use the most appropriate technology that will satisfy this demand. A random access cartridge loader (such as produced by the IBM Corporation) is a digital tape system that has high storage capacity per tape, mechanical robotic loading of 100 tapes per drawer, and up to 2 tape drives per drawer. The result is an effective tape library for movie-on-demand systems. However, the invention also enables very low cost digital tape storage library systems to provide the mass storage of the movies, and further enables low demand movies to be played directly from tape to speed-matching buffers and then on to video decompression and distribution channels.
A second advantage of combining hierarchical tape storage with any video system is that it provides rapid backup to any movie that is stored on disk, in the event that a disk becomes inoperative. A typical system will maintain a "spare" disk such that if one disk unit fails, then movies can be reloaded from tape. This would typically be combined with a RAID or a RAID-like system.
B2. DISK STORAGE SYSTEMS
When demand for video streams increases to a higher level, it becomes more efficient to store an entire movie on disk and save the system performance overhead required to continually move video data from tape to disk. A typical system will still contain a library of movies that are stored on tape, since the usual number of movies in the library is 10x to 100x greater than the number that will be playing at any one time. When a user requests a specific movie, segments of it are loaded to a disk storage node 16 and started from there.
When there are large numbers of users wanting to see the same movie, it is beneficial to keep the movie on disk. These movies are typically the "Hot" movies of the current week and are pre-loaded from tape to disk prior to peak viewing hours. This tends to reduce the work load on the system during peak hours.
B3. MOVIES OUT OF CACHE
As demand for "hot" movies grows, media streamer 10, through an MRU-based algorithm, decides to move key movies up into cache. This requires substantial cache memory, but in terms of the ratio of cost to the number of active streams, the high volume that can be supported out of cache lowers the total cost of the media streamer 10. Because of the nature of video data, and the fact that the system always knows in advance what videos are playing and what data will be required next, and for how long, methods are employed to optimize the use of cache, internal buffers, disk storage, the tape loader, bus performance, etc.
Algorithms that control the placement and distribution of the content across all of the storage media enable delivery of isochronous data to a wide spectrum of bandwidth requirements. Because the delivery of isochronous data is substantially 100% predictable, the algorithms are very much different from the traditional ones used for other segments of the computer industry where caching of user-accessed data is not always predictable.
C. MEDIA STREAMER DATA FLOW ARCHITECTURE
As indicated above, media streamer 10 delivers video streams to various outputs such as TV sets and set top boxes attached via a network, such as a LAN, ATM, etc. To meet the requirements for storage capacity and the number of simultaneous streams, a distributed architecture consisting of multiple storage and communication nodes is preferred. The data is stored on storage nodes 16, 17 and is delivered by communication nodes. A communication node 14 obtains the data from appropriate storage nodes 16, 17. The control node 18 provides a single system image to the external world. The nodes are connected by the cross-connect, low latency switch 12.
Data rates and the data to be delivered are predictable for each stream. The embodiment makes use of this predictability to construct a data flow architecture that makes full use of resources and which ensures that the data for each stream is available at every stage when it is needed.
Data flow between the storage nodes 16, 17 and the communication nodes 14 can be set up in a number of different ways.
A communication node 14 is generally responsible for delivering multiple streams. It may have requests outstanding for data for each of these streams, and the required data may come from different storage nodes 16, 17. If different storage nodes were to attempt, simultaneously, to send data to the same communication node, only one storage node would be able to send the data, and the other storage nodes would be blocked. The blockage would cause these storage nodes to retry sending the data, degrading switch utilization and introducing a large variance in the time required to send data from a storage node to the communication node. In this embodiment, there is no contention for an input port of a communication node 14 among different storage nodes 16, 17. The amount of required buffering can be determined as follows: the communication node 14 determines the mean time required to send a request to the storage node 16, 17 and receive the data. This time is determined by adding the time to send a request to the storage node and the time to receive the response, to the time needed by the storage node to process the request. The storage node in turn determines the mean time required to process the request by adding the mean time required to read the data from disk and any delays involved in processing the request. This is the latency in processing the request. The amount of buffering required is the memory storage needed at the stream data rate to cover the latency.
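A minimal sketch of this buffer-sizing rule, with assumed latency components, is:

    def required_buffer_bytes(stream_rate, request_s, response_s, disk_read_s, processing_s):
        """Buffering needed to cover the request latency described above: the
        round trip to the storage node plus that node's mean disk read time and
        processing delays, multiplied by the stream data rate."""
        latency = request_s + response_s + disk_read_s + processing_s
        return stream_rate * latency

    # Illustrative values only: a 250,000 byte/sec stream with ~600 ms total latency
    print(required_buffer_bytes(250_000, 0.025, 0.025, 0.5, 0.05))   # 150000.0 bytes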
The solution described below takes advantage of special conditions in the media streamer environment to reduce latency and hence to reduce the resources required. The latency is reduced by using a just-in-time scheduling algorithm at every stage of the data flow (e.g., within storage nodes and communications nodes), in conjunction with anticipating requests for data from the previous stage.
Contention by the storage nodes 16, 17 for the input port of a communication node 14 is eliminated by employing the following two criteria:
1) A storage node 16, 17 only sends data to a communication node 14 on receipt of a specific request.
2) A given communication node 14 serializes all requests for data to be read from storage nodes so that only one request for receiving data from the communication node 14 is outstanding at any time, independent of the number of streams the communication node 14 is delivering.
As was noted above, the reduction of latency relies on a just-in-time scheduling algorithm at every stage. The basic principle is that at every stage in the data flow for a stream, the data is available when the request for that data arrives. This reduces latency to the time needed for sending the request and performing any data transfer. Thus, when the control node 18 sends a request to the storage node 16 for data for a specific stream, the storage node 16 can respond to the request almost immediately. This characteristic is important to the solution to the contention problem described above.
Since, in the media streamer environment, access to data is sequential and the data rate for a stream is predictable, a storage node 16 can anticipate when a next request for data for a specific stream can be expected. The identity of the data to be supplied in response to the request is also known. The storage node 16 also knows where the data is stored and the expected requests for the other streams. Given this information and the expected time to process a read request from a disk, the storage node 16 schedules a read operation so that the data is available just before the request from the communication node 14 arrives. For example, if the stream data rate is 250KB/sec, and a storage node 16 contains every 4th segment of a video, requests for data for that stream will arrive every 4 seconds. If the time to process a read request is 500 msec (with the requisite degree of confidence that the read request will complete in 500 msec) then the read request is scheduled at least 500 msec before the anticipated receipt of the request from the communication node 14.
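The read scheduling in this example can be sketched as follows; the function and its parameters are illustrative, not the patent's code:

    def schedule_disk_reads(first_request_s, request_interval_s, read_time_s, count):
        """Just-in-time read schedule for one stream on one storage node: each
        read is issued at least read_time_s before the anticipated request from
        the communication node, so the buffer is already full when it arrives."""
        return [first_request_s + i * request_interval_s - read_time_s for i in range(count)]

    # Text example: this node holds every 4th segment of a 250 KB/sec stream, so
    # requests arrive every 4 seconds; a disk read takes about 500 msec.
    print(schedule_disk_reads(4.0, 4.0, 0.5, 3))   # [3.5, 7.5, 11.5]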
C1. CONTROL NODE 18 FUNCTIONS
The control node 18 function is to provide an interface between media streamer 10 and the external world for control flow. It also presents a single system image to the external world even if the media streamer 10 is itself implemented as a distributed system. The control node functions are implemented by a defined Application Program Interface (API). The API provides functions for creating the video content in media streamer 10 as well as for real-time functions such as playing/recording of video data. The control node 18 forwards real-time requests to play or stop the video to the communication nodes 14.
C2. COMMUNICATION NODE 14
A communication node 14 has the following threads (in the same process) dedicated to handle a real time video interface: a thread to handle connect/disconnect requests, a thread to handle play/stop and pause/resume requests, and a thread to handle a jump request (seek forward or seek backward). In addition it has an input thread that reads data for a stream from the storage nodes 16 and an output thread that writes data to the output ports.
A data flow structure in a communication node 14 for handling data during the playing of a video is depicted in Fig. 3. The data flow structure includes an input thread 100 that obtains data from a storage node 16. The input thread 100 serializes receipt of data from storage nodes so that only one storage node is sending data at any one time. The input thread 100 ensures that when an output thread 102 needs to write out of a buffer for a stream, the buffer is already filled with data. In addition, there is a scheduler function 104 that schedules both the input and output operations for the streams. This function is used by both the input and output threads 100 and 102.
Each thread works off a queue of requests. The request queue 106 for the output thread 102 contains requests that identify the stream and that point to an associated buffer that needs to be emptied. These requests are arranged in order by the time at which they need to be written to the video output interface. When the output thread 102 empties a buffer, it marks it as empty and invokes the scheduler function 104 to queue the request in an input queue 108 for the stream to the input thread (for the buffer to be filled). The queue 108 for the input thread 100 is also arranged in order by the time at which buffers need to be filled.
Input thread 100 also works off the request queue 108 arranged by request time. Its task is to fill the buffer from a storage node 16. For each request in its queue, the input thread 100 takes the following actions. The input thread 100 determines the storage node 16 that has the next segment of data for the stream (the data for a video stream is preferably striped across a number of storage nodes). The input thread 100 then sends a request to the determined storage node (using messages through switch 12) requesting data for the stream, and then waits for the data to arrive. This protocol ensures that only one storage node 16 will be sending data to a particular communications node 14 at any time, i.e., it removes the conflict that may arise if the storage nodes were to send data asynchronously to a communications node 14. When the requested data is received from the storage node 16, the input thread 100 marks the buffer as full and invokes the scheduler 104 to queue a request (based on the stream's data rate) to the output thread 102 to empty the buffer.
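A schematic sketch of this buffer hand-off between the time-ordered queues (queue and function names are illustrative only, not the patent's code) is:

    import heapq

    input_queue = []    # (time buffer must be filled, stream id, buffer) - used by the input thread
    output_queue = []   # (time buffer must be written, stream id, buffer) - used by the output thread

    def buffer_emptied(stream_id, buf, now, fill_lead_s):
        """Scheduler action when the output thread empties a buffer: queue it,
        in time order, for the input thread to refill from a storage node."""
        heapq.heappush(input_queue, (now + fill_lead_s, stream_id, buf))

    def buffer_filled(stream_id, buf, now, play_interval_s):
        """Scheduler action when the input thread fills a buffer: queue it, in
        time order, for the output thread to write to the video output port."""
        heapq.heappush(output_queue, (now + play_interval_s, stream_id, buf))

    # Illustrative cycle for one stream with a one-second output interval:
    buffer_filled("stream-1", bytearray(250_000), now=0.0, play_interval_s=1.0)
    due, sid, buf = heapq.heappop(output_queue)
    buffer_emptied(sid, buf, now=due, fill_lead_s=0.5)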
C3. STORAGE NODE 16
The structure of the storage node 16 for data flow to support the playing of a stream is depicted in Fig. 4. The storage node 16 has a pool of buffers that contain video data. It has an input thread 110 for each of the logical disk drives and an output thread 112 that writes data out to the communications nodes 14 via the switch matrix 12. It also has a scheduler function 114 that is used by the input and output threads 110, 112 to schedule operations. It also has a message thread 116 that processes requests from communications nodes 14 requesting data.
When a message is received from a communications node 14 requesting data, the message thread 116 will normally find the requested data already buffered, and queues the request (queue 118) to the output thread. The requests are queued in time order. The output thread 112 will empty the buffer and add it to the list of free buffers. Each of the input threads 110 has its own request queue. For each of the active streams that have video data on the associated disk drive, a queue 120 ordered by request time (based on the data rate, level of striping, etc.) to fill the next buffer is maintained. The thread takes the first request in queue 120, associates a free buffer with it and issues an I/O request to fill the buffer with the data from the disk drive. When the buffer is filled, it is added to the list of full buffers. This is the list that is checked by the message thread 116 when the request for data for the stream is received. When a message for data is received from a communication node 14 and the required buffer is not full, it is considered to be a missed deadline.
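The message-thread behaviour, including the missed-deadline case, can be sketched as follows (names are illustrative):

    def handle_data_request(stream_id, full_buffers, output_queue, now):
        """Message-thread logic sketched from Fig. 4 (illustrative names): a
        request from a communication node should normally find its data already
        read from disk and waiting on the full-buffer list; otherwise the
        deadline was missed."""
        buf = full_buffers.pop(stream_id, None)
        if buf is None:
            return "missed deadline"                 # the disk read did not complete in time
        output_queue.append((now, stream_id, buf))   # queued in time order for the output thread
        return "queued"

    full, out = {"stream-1": bytearray(250_000)}, []
    print(handle_data_request("stream-1", full, out, now=5.0))   # queued
    print(handle_data_request("stream-2", full, out, now=5.0))   # missed deadline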
C4. JUST-IN-TIME SCHEDULING
A just-in-time scheduling technique is used in both the communications nodes 14 and the storage nodes 16. The technique employs the following parameters:
bc = buffer size at the communications node 14;
bs = buffer size at the storage node 16;
r = video stream data rate;
n = number of stripes of video containing the data for the video stream; and
sr = stripe data rate = r/n.
The algorithm used is as follows:
(1) sfc = frequency of requests at the communications node for a stream = r/bc; and
(2) dfc = frequency of disk read requests at the storage node = r/bs.
The "striping" of video data is described in detail below in section H.
The requests are scheduled at a frequency determined by the expressions given above, and are scheduled so that they complete in advance of when the data is needed. This is accomplished by "priming" the data pipe with data at the start of playing a video stream.
Calculations of sfc and dfc are made at connect time, in both the communication node 14 playing the stream and the storage nodes 16 containing the video data. The frequency (or its inverse, the interval) is used in scheduling input from disk in the storage node 16 (see Fig. 4) and in scheduling the output to the port (and input from the storage nodes) in the communication node 14 (see Fig. 3).
Example of Just-In-Time Scheduling:
Play a stream at 2.0 Mbits/sec (250,000 bytes/sec) from a video striped on four storage nodes. Also assume that the buffer size at the communication node is 250,000 bytes and the buffer size at the disk node is 250,000 bytes. Also, assume that the data is striped in segments of 250,000 bytes.
The values for the various parameters in the just-in-time algorithm are as follows: bc = 250,000 bytes (buffer size at the communication node 14); bs = 250,000 bytes (buffer size at the storage node 16); r = 250,000 bytes/sec (stream data rate); n = 4 (number of stripes that video for the stream is striped over); sr = r/n = 62,500 bytes/sec, or 250,000 bytes every four seconds; sfc = r/bc = 1/sec (frequency of requests at the communication node 14); and dfc = r/bs = 1/sec (frequency of requests at the storage nodes 16).
The communication node 14 reβponβible for playing the βtream will βchedule input and output requeβtβ at the frequency of 1/βec. or at intervale of 1.0 βecondβ. Assuming that the communication node 14 haβ two buffere dedicated for the βtream, the communication node 14 ensures that it haβ both bufferβ filled before it starts outputting the video βtream. At connect time the communication node 14 will have sent meββageβ to all four βtorage nodeβ 16 containing a stripe of the video data. The first two of the βtorage nodeβ will anticipate the requeβtβ for the first segment from the stripes and will βchedule diβk requests to fill the buffere. The communication node 14 will βchedule input requeβtβ (βee Fig. 3) to read the first two βegmentβ into two bufferβ, each of βize 250,000 bytes. When a play request comes, the communication node 14 will first insure that the two buffers are full, and then informs all storage nodeβ 16 that play is about to commence. It then starts playing the βtream. When the first buffer has been output (which at 2 M its/sec. or 250,000 byteβ/βec.) will take one βecond), the communication node 14 requeβtβ data from a βtorage node 16. The communication node 14 then requeβtβ data from each of the storage nodes, in sequence, at intervale of one βecond, i.e. it will requeβt data from a βpecific βtorage node at intervale of four βecondβ. It always requeβtβ 250,000 byteβ of data at a time. The calculations for the frequency at which a communication node requeβtβ data from the βtorage nodeβ 16 iβ done by the communication node 14 at connect time.
The βtorage nodeβ 16 anticipate the requeβtβ for the βtream data aβ follows. The βtorage node 16 containing βtripe 3 (βee section H below) can expect a request for the next 250,000 byte segment one second after the play has commenced, and every four seconds thereafter. The βtorage node 16 containing stripe 4 can expect a requeβt two seconds after the play haβ commenced and every four βecondβ thereafter. The βtorage node 16 containing stripe 2 can expect a request four βecondβ after play has commenced and four βecondβ thereafter. That iβ, each βtorage node 16 βcheduleβ the input from diβk at a frequency of 250,000 bytes every four seconds from some starting time (aβ described above). The scheduling iβ accompliβhed in the βtorage node 16 after receipt of the play command and after a buffer for the βtream haβ been output. The calculation of the requeβt frequency iβ done at the time the connect requeβt iβ received.
It is also possible to use different buffer sizes at the communication node 14 and the storage node 16. For example, the buffer size at the communication node 14 may be 50,000 bytes and the buffer size at the storage node 16 may be 250,000 bytes. In this case, the frequency of requests at the communication node 14 will be (250,000/50,000) 5/sec, or every 0.2 seconds, while the frequency at the storage node 16 will remain at 1/sec. The communication node 14 reads the first two buffers (100,000 bytes) from the storage node containing the first stripe (note that the segment size is 250,000 bytes and the storage node 16 containing the first segment will schedule the input from disk at connect time). When play commences, the communication node 14 informs the storage nodes 16 of same and outputs the first buffer. When the buffer empties, the communication node 14 schedules the next input. The buffers will empty every 0.2 seconds and the communication node 14 requests input from the storage nodes 16 at that frequency, and also schedules output at the same frequency.
In this example, storage nodes 16 can anticipate five requests to arrive at intervals of 0.2 seconds (except for the first segment, where 100,000 bytes have already been read, so initially only three requests will come after commencement of play); i.e., the next sequence of five requests (each for 50,000 bytes) will arrive four seconds after the last request of the previous sequence. Since the buffer size at the storage node is 250,000 bytes, the storage nodes 16 will schedule the input from disk every four seconds (just as in the example above).
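The anticipation schedule for the four-stripe example can likewise be sketched as follows. The sketch assumes, as in the example, that the first two segments are pre-read at connect time and that the communication node requests segment k (k >= 3) at (k - 2) seconds after play commences; all names are illustrative.

    #include <stdio.h>

    /* Illustrative anticipation schedule for the four-stripe example above.
     * Segments 1 and 2 are pre-read at connect time; after play starts the
     * communication node requests segment k (k >= 3) at (k - 2) seconds.
     * All names are assumptions made for this sketch.                      */
    int main(void)
    {
        int n = 4;                          /* number of stripes            */
        for (int stripe = 1; stripe <= n; stripe++) {
            /* find the first segment (>= 3) stored on this stripe */
            int k = stripe;
            while (k < 3)
                k += n;
            printf("stripe %d: first request %d s after play, then every %d s\n",
                   stripe, k - 2, n);
        }
        return 0;
    }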
C.5. DETAILS OF A PLAY ACTION
The following steps trace the control and data flow for the playing action of a stream. The steps are depicted in Figure 5 for setting up a video for play. The steps are in time order.
1. The user invokes a command to set up a port with a specific video that has been previously loaded. The request is sent to the control node 18.
2. A thread in the control node 18 receives the request and invokes a VS-CONNECT function.
3. The control node thread opens a catalog entry for the video, and sets up a memory descriptor for the video with the striped file information.
4. The control node 18 allocates a communication node 14 and an output port on that node for the request.
5. The control node 18 then sends a message to the allocated communication node 14.
6. A thread in the communication node 14 receives the message from the control node 18.
7. The communication node thread sends an open request to the storage nodes 16 containing the stripe files.
8, 9. A thread in each storage node 16 that the open request is sent to receives the request, opens the requested stripe file, and allocates any needed resources, as well as scheduling input from disk (if the stripe file contains the first few segments).
10. The storage node thread sends a response back to the communication node 14 with the handle (identifier) for the stripe file.
11. The thread in the communication node 14 waits on responses from all of the storage nodes involved and, on receiving successful responses, allocates resources for the stream, including setting up the output port.
12. The communication node 14 then schedules input to prime the video data pipeline.
13. The communication node 14 then sends a response back to the control node 18.
14. The control node thread, on receipt of a successful response from the communication node 14, returns a handle for the stream to the user to be used in subsequent requests related to this instance of the stream.
The following are the steps, in time order, for the actions that are taken on receipt of the play request after a video stream has been successfully set up. The steps are depicted in Fig. 6.
1. The user invokes the play command.
2. A thread in the control node 18 receives the request.
3. The thread in the control node 18 verifies that the request is for a stream that is set up, and then sends a play request to the allocated communication node 14.
4. A thread in the communication node 14 receives the play request. The communication node 14 sends the play request to all of the involved storage nodes 16 so that they can schedule their own operations in anticipation of subsequent requests for this stream. An "involved" storage node is one that stores at least one stripe of the video presentation of interest.
5. A thread in each involved storage node 16 receives the request and sets up schedules for servicing future requests for the stream.
6. Each involved storage node 16 sends a response back to the communication node 14.
7. The communication node thread ensures that the pipeline is primed (preloaded with video data) and enables the stream for output.
8. The communication node 14 then sends a response back to the control node 18.
9. The control node 18 sends a response back to the user that the stream is playing.
The input and output threads continue to deliver the video presentation to the specified port until a stop/pause command is received or the video completes.
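A heavily simplified sketch of the communication node's steady-state behaviour implied by the steps above is given below: output one buffer per scheduling interval and, behind it, request the next segment from the storage node holding the appropriate stripe. The helper functions are stubs invented for this sketch and do not correspond to actual media streamer routines.

    #include <stdio.h>

    /* Heavily simplified play loop for a communication node.  The helper
     * functions are stubs invented for this sketch; a real node would do
     * switch I/O and isochronous timing instead of printing messages.     */

    #define SEGMENTS_TOTAL 12                        /* length of demo video */

    static void request_segment(int stripe, int seg) /* stub request         */
    {
        printf("request segment %d from stripe %d\n", seg, stripe);
    }

    static int output_buffer(int seg)                /* stub output          */
    {
        printf("output segment %d to the port\n", seg);
        return seg < SEGMENTS_TOTAL;                 /* 0 when video ends    */
    }

    int main(void)
    {
        int num_stripes = 4;   /* storage nodes holding this video           */
        int next_req    = 3;   /* segments 1 and 2 were primed at connect    */
        int out         = 1;   /* segment currently being output             */

        /* one buffer is output per scheduling interval (1/sfc seconds);     */
        /* behind it, the next segment is requested from the proper stripe   */
        while (output_buffer(out)) {
            if (next_req <= SEGMENTS_TOTAL)
                request_segment(((next_req - 1) % num_stripes) + 1, next_req);
            next_req++;
            out++;
        }
        return 0;
    }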
D. USER AND APPLICATION INTERFACES TO MEDIA STREAMER
Media streamer 10 is a passive server, which performs video server operations when it receives control commands from an external control system. Figure 7 shows a system configuration for media streamer 10 applications and illustrates the interfaces present in the system.
Media streamer 10 provides two levels of interfaces for users and application programs to control its operations:
a user interface ((A) in Fig. 7); and
an application program interface ((B) in Fig. 7).
Both levels of interface are provided on client control systems, which communicate with the media streamer 10 through a remote procedure call (RPC) mechanism. By providing the interfaces on the client control systems, instead of on the media streamer 10, the separation of application software from media streamer 10 is achieved. This facilitates upgrading or replacing the media streamer 10, since it does not require changing or replacing the application software on the client control system.
D1. USER COMMUNICATIONS
Media streamer 10 provides two types of user interfaces: a command line interface; and a graphical user interface.
D1.1. COMMAND LINE INTERFACE
The command line interface displays a prompt on the user console or interface (65, 66 of Fig. 1). After the command prompt, the user enters a command, starting with a command keyword followed by parameters. After the command is executed, the interface displays a prompt again and waits for the next command input. The media streamer command line interface is especially suitable for the following two types of operations:
Batch Control: Batch control involves starting execution of a command script that contains a series of video control commands. For example, in the broadcast industry, a command script can be prepared in advance to include pre-recorded, scheduled programs for an extended period of time. At the scheduled start time, the command script is executed by a single batch command to start broadcasting without further operator intervention.
Automatic Control: Automatic control involves executing a list of commands generated by a program to update/play materials stored on media streamer 10. For example, a news agency may load new materials into the media streamer 10 every day. An application control program that manages the new materials can generate media streamer commands (for example, Load, Delete, Unload) to update the media streamer 10 with the new materials. The generated commands may be piped to the command line interface for execution.
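For example, a short command script for batch execution might read as follows. The command keywords correspond to the command line functions described in this section, but the exact syntax, parameter order, and video names are illustrative assumptions only:

    setup  port1  evening_news
    play   port1
    setup  port1  weather_update
    setup  port1  sports_wrap
    status port1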
D1.2. GRAPHICAL USER INTERFACE
Fig. 8 is an example of the media streamer graphical user interface. The interface resembles the control panel of a video cassette recorder, which has control buttons such as Play, Pause, Rewind, and Stop. In addition, it also provides selection panels when an operation involves a selection by the user (for example, Load requires the user to select a video presentation to be loaded). The graphical user interface is especially useful for direct user interactions.
A "Batch" button 130 and an "Import/Export" button 132 are included in the graphical user interface. Their functions are described below.
D2. USER FUNCTIONS
Media streamer 10 provides three general types of user functions: Import/Export; VCR-like play controls; and
Advanced user controls.
D2.1. IMPORT/EXPORT
Import/Export functions are used to move video data into and out of the media streamer 10. When a video is moved into media streamer 10 (Import) from the client control system, the source of the video data is specified as a file or a device of the client control system. The target of the video data is specified with a unique name within media streamer 10. When a video is moved out of media streamer 10 (Export) to the client control system, the source of the video data is specified by its name within media streamer 10, and the target of the video data is specified as a file or a device of the client control system.
In the Import/Export category of user functions, media streamer 10 also provides a "delete" function to remove a video and a "get attributes" function to obtain information about stored videos (such as name, data rate).
To invoke Import/Export functions through the graphical user interface, the user clicks on the "Import/Export" soft button 132 (Fig. 8). This brings up a new panel (not shown) that contains "Import", "Export", "Delete", and "Get Attribute" buttons to invoke the individual functions.
D2.2. VCR-LIKE PLAY CONTROLS
Media streamer 10 provides a set of VCR-like play controls. The media streamer graphical user interface in Fig. 8 shows that the following functions are available: Load, Eject, Play, Slow, Pause, Stop, Rewind, Fast Forward and Mute. These functions are activated by clicking on the corresponding soft buttons on the graphical user interface. The media streamer command line interface provides a similar set of functions:
Setup - sets up a video for a specific output port. Analogous to loading a video cassette into a VCR.
Play - initiates playing a video that has been set up or resumes playing a video that has been paused.
Pause - pauses playing a video.
Detach - analogous to ejecting a video cassette from a VCR.
Status - displays the status of ports, such as which video is playing, elapsed playing time, etc.
D2.3. ADVANCED USER CONTROLS
In order to support specific application requirements, such as those of the broadcasting industry, the present embodiment provides several advanced user controls:
Play list - set up multiple videos and their sequence to be played on a port
Play length - limit the time a video will be played
Batch operation - perform a list of operations stored in a command file.
The Play list and Play length controls are accomplished with a "Load" button 134 on the graphical user interface. Each "setup" command will specify a video to be added to the Play list for a specific port. It also specifies a time limit that the video will be played. Fig. 9 shows the panel which appears in response to clicking on the "Load" soft button 134 on the graphical user interface to select a video to be added to the play list and to specify the time limit for playing the video. When the user clicks on a file name in the "Files" box 136, the name is entered into the "File Name" box 138. When the user clicks on the "Add" button 140, the file name in the "File Name" box 138 is appended to the "Play List" box 142 with its time limit, and the current play list (with the time limit of each video on the play list) is displayed.
The batch operation is accomplished by using a "Batch" soft button 130 on the graphical user interface (see Fig. 8).
When the "Batch" button 130 is activated, a batch selection panel is displayed for the user to select or enter the command file name (see Fig. 10). Pressing an "Execute" button 144 on the batch selection panel starts the execution of the commands in the selected command file. Fig. 10 is an example of the "Batch" and "Execute" operation on the graphical user interface. For example, the user has first created a command script in a file "batch2" in the c:/batchcmd directory. The user then clicks on the "Batch" button 130 on the graphical user interface shown in Fig. 8 to bring up the Batch Selection panel. Next, the user clicks on "c:/batchcmd" in the "Directory" box 146 of the Batch Selection panel. This results in the display of a list of files in the "Files" box 148. Clicking on the "batch2" line in the "Files" box 148 enters it into the "File Name" box 150. Finally, the user clicks on the "Execute" button 144 to execute, in sequence, the commands stored in the "batch2" file.
D3. APPLICATION PROGRAM INTERFACE
Media streamer 10 provides the above-mentioned Application Program Interface (API) so that application control programs can interact with media streamer 10 and control its operations (reference may be made again to Fig. 7).
The API consists of remote procedure call (RPC)-based procedures. Application control programs invoke the API functions by making procedure calls. The parameters of the procedure call specify the functions to be performed. The application control programs invoke the API functions without regard to the logical and physical location of media streamer 10. The identity of a media streamer 10 to provide the video services is established at either the client control system startup time or, optionally, at the application control program initiation time. Once the identity of media streamer 10 is established, the procedure calls are directed to the correct media streamer 10 for servicing.
Except as indicated below, API functions are processed synchronously, i.e., once a function call is returned to the caller, the function is completed and no additional processing at media streamer 10 is needed. By configuring the API functions as synchronous operations, additional processing overheads for context switching, asynchronous signalling and feedback are avoided. This performance consideration is important in video server applications due to the stringent real-time requirements.
The processing of API functions is performed in the order that requests are received. This ensures that user operations are processed in the correct order. For example, a video must be connected (set up) before it can be played. Another example is that switching the order of a "Play" request followed by a "Pause" request would produce a completely different result for the user.
A VS-PLAY function initiates the playing of the video and returns control to the caller immediately (without waiting until the completion of the video play). The rationale for this architecture is that, since the time for playing a video is typically long (minutes to hours) and unpredictable (there may be pause or stop commands), making the VS-PLAY function asynchronous frees up the resources that would otherwise be allocated for an unpredictably long period of time. At completion of video play, media streamer 10 generates an asynchronous call to a system/port address specified by the application control program to notify the application control program of the video completion event. The system/port address is specified by the application control program when it calls the API VS-CONNECT function to connect the video. It should be noted that the callback system/port address for VS-PLAY is specified at the individual video level. That means the application control programs have the freedom of directing video completion messages to any control point. For example, one application may desire the use of one central system/port to process the video completion messages for many or all of the client control systems. In another application, several different system/port addresses may be employed to process the video completion messages for one client control system.
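The calling pattern described above can be sketched as follows. The function names echo the API functions named in this document (VS-CONNECT, VS-PLAY), but the parameter lists, types, and return values shown are assumptions made for illustration; the actual RPC signatures are not reproduced here.

    #include <stdio.h>

    /* Hypothetical wrappers around the RPC-based API functions.  The real
     * signatures are not given in this document; these stubs only show the
     * calling pattern: synchronous connect, asynchronous play, with the
     * completion callback address supplied at connect time.               */
    typedef int vs_handle_t;

    static vs_handle_t vs_connect(const char *video, int port,
                                  const char *callback_addr)
    {
        printf("connect %s to port %d, completion events -> %s\n",
               video, port, callback_addr);
        return 1;                      /* stream handle returned to caller */
    }

    static void vs_play(vs_handle_t h) /* returns immediately (asynchronous) */
    {
        printf("play started for stream %d\n", h);
    }

    int main(void)
    {
        /* video name, port and callback address are illustrative values   */
        vs_handle_t h = vs_connect("evening_news", 1, "client_host:5001");
        vs_play(h);                    /* control returns before playback ends */
        return 0;
    }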
With the API architecture, media streamer 10 is enabled to support multiple concurrent client control systems with heterogeneous hardware and software platforms, with efficient processing of both synchronous and asynchronous types of operations, while ensuring the correct sequencing of the operation requests. For example, the media streamer 10 may use an IBM OS/2 operating system running on a PS/2 system, while a client control system may use an IBM AIX operating system running on an RS/6000 system (IBM, OS/2, PS/2, AIX, and RS/6000 are all trademarks of the International Business Machines Corporation).
D4. CLIENT/MEDIA STREAMER COMMUNICATIONS
Communications between a client control system and the media streamer 10 are accomplished through, for example, a known type of Remote Procedure Call (RPC) facility. Fig. 11 shows the RPC structure for the communications between a client control system 11 and the media streamer 10. In calling media streamer functions, the client control system 11 functions as the RPC client and the media streamer 10 functions as the RPC server. This is indicated at (A) in Fig. 11. However, for an asynchronous function, i.e., VS-PLAY, its completion causes media streamer 10 to generate a call to the client control system 11. In this case, the client control system 11 functions as the RPC server, while media streamer 10 is the RPC client. This is indicated at (B) in Fig. 11.
D4.1. CLIENT CONTROL SYSTEM 11
In the client control system 11, the user command line interface is comprised of three internal parallel processes (threads). A first process parses a user command line input and performs the requested operation by invoking the API functions, which result in RPC calls to the media streamer 10 ((A) in Fig. 11). This process also keeps track of the status of videos being set up and played for various output ports. A second process periodically checks the elapsed playing time of each video against its specified time limit. If a video has reached its time limit, the video is stopped and disconnected, and the next video in the wait queue (if any) for the same output port is started. A third process in the client control system 11 functions as an RPC server to receive the VS-PLAY asynchronous termination notification from the media streamer 10 ((B) in Fig. 11).
D4.2 MEDIA STREAMER 10
During startup of media streamer 10, two parallel processes (threads) are invoked in order to support the RPCs between the client control system(s) 11 and media streamer 10. A first process functions as an RPC server for the API function calls coming from the client control system 11 ((A) in Fig. 11). The first process receives the RPC calls and dispatches the appropriate procedures to perform the requested functions (such as VS-CONNECT, VS-PLAY, VS-DISCONNECT). A second process functions as an RPC client for calling the appropriate client control system addresses to notify the application control programs of asynchronous termination events. The process blocks itself waiting on an internal pipe, which is written by other processes that handle the playing of videos. When one of the latter reaches the end of a video or an abnormal termination condition, it writes a message to the pipe. The blocked process reads the message and makes an RPC call ((B) in Fig. 11) to the appropriate client control system 11 port address so that the client control system can update its status and take actions accordingly.
E. MEDIA STREAMER MEMORY ORGANIZATION AND OPTIMIZATION FOR VIDEO DELIVERY
An aspect of this embodiment provides integrated mechanisms for tailoring cache management and related I/O operations to the video delivery environment. This aspect of the embodiment is now described in detail.
E1. PRIOR ART CACHE MANAGEMENT
Prior art mechanisms for cache management are built into cache controllers and the file subsystems of operating systems. They are designed for general purpose use, and are not specialized to meet the needs of video delivery.
Fig. 12 illustrates one possible way in which a conventional cache management mechanism may be configured for video delivery. This technique employs a video split between two disk files 160, 162 (because it is too large for one file), and a processor 164 containing a file system 166, a media server 168, and a video driver 170. Also illustrated are two video adapter ports 172, 174 for two video streams. Also illustrated is the data flow to read a segment of disk file 160 into main storage, and to subsequently write the data to a first video port 172, as well as the data flow to read the same segment and write it to a second video port 174. Fig. 12 is used to illustrate problems incurred by the prior art which are addressed and overcome by the media streamer 10 of this embodiment.
Description of steps A1-A12 in Fig. 12.
A1. Media server 168 calls file system 166 to read segment Sk into a buffer in video driver 170.
A2. File system 166 reads a part of Sk into a cache buffer in file system 166.
A3. File system 166 copies the cache buffer into a buffer in video driver 170. Steps A2 and A3 are repeated multiple times.
A4. File system 166 calls video driver 170 to write Sk to video port 1 (176).
A5. Video driver 170 copies part of Sk to a buffer in video driver 170.
A6. Video driver 170 writes the buffer to video port 1 (176).
Steps A5 and A6 are repeated multiple times.
Steps A7-A12 function in a similar manner, except that port 1 is changed to port 2. If a part of Sk is in the cache in file system 166 when needed for port 2, then step A8 may be skipped.
As can be realized, video delivery involves massive amounts of data being transferred over multiple data streams. The overall usage pattern fits neither of the two traditional patterns used to optimize caching: random and sequential. If the random option is selected, most cache buffers will probably contain data from video segments which have been recently read, but will have no video stream in line to read them before they have expired. If the sequential option is chosen, the most recently used cache buffers are re-used first, so there is even less chance of finding the needed segment part in the file system cache. As was described previously, an important element of video delivery is that the data stream be delivered isochronously, that is, without breaks and interruptions that a viewer or user would find objectionable. Prior art caching mechanisms, as just shown, cannot ensure the isochronous delivery of a video data stream to a user.
Additional problems illustrated by Fig. 12 are:
a. Disk and video port I/O is done in relatively small segments to satisfy general file system requirements. This requires more processing time, disk seek overhead, and bus overhead than would be required by video-segment-sized transfers.
b. The processing time to copy data between the file system cache buffer and media server buffers, and between media server buffers and video driver buffers, is an undesirable overhead that should be eliminated.
c. Using two video buffers (i.e., 172, 174) to contain copies of the same video segment at the same time is an inefficient use of main memory. There is even more waste when the same data is stored in the file system cache and also in the video driver buffers.
E2. VIDEO-OPTIMIZED CACHE MANAGEMENT
There are three principal facets of the cache management operation in accordance with this aspect of the embodiment: sharing segment size cache buffers across streams; predictive caching; and synchronizing to optimize caching.
E2.1. SHARING SEGMENT SIZE CACHE BUFFERS ACROSS STREAMS
Videos are stored and managed in fixed size segments. The segments are sequentially numbered so that, for example, segment 5 would store a portion of a video presentation that is nearer to the beginning of the presentation than would a segment numbered 6. The segment size is chosen to optimize disk I/O, video I/O, bus usage and processor usage. A segment of a video has a fixed content, which depends only on the video name and the segment number. All I/O to disk and to the video output, and all caching operations, are done aligned on segment boundaries.
This aspect of the embodiment takes two forms, depending on whether the underlying hardware supports peer-to-peer operations with data flow directly between disk and video output card in a communications node 14, without passing through cache memory in the communications node. For peer-to-peer operations, caching is done at the disk storage unit 16. For hardware which does not support peer-to-peer operations, data is read directly into page-aligned, contiguous cache memory (in a communications node 14) in segment-sized blocks to minimize I/O operations and data movement. (See Section F, Video Optimized Digital Memory Allocation, below.)
The data remains in the same location and is written directly from this location until the video segment is no longer needed. While the video segment is cached, all video streams needing to output the video segment access the same cache buffer. Thus, a single copy of the video segment is used by many users, and the additional I/O, processor, and buffer memory usage to read additional copies of the same video segment is avoided. For peer-to-peer operations, half of the remaining I/O and almost all of the processor and main memory usage are avoided at the communication nodes 14.
Fig. 13 illustrates an embodiment of the invention for the case of a system without peer-to-peer operations. The video data is striped on the disk storage nodes 16 so that odd numbered segments are on first disk storage node 180 and even numbered segments are on second disk storage node 182 (see Section H below).
The data flow for this configuration is also illustrated in Fig. 13. As can be seen, segment Sk is to be read from disk 182 into a cache buffer 184 in communication node 186, and is then to be written to video output ports 1 and 2. The Sk video data segment is read directly into cache buffer 184 with one I/O operation, and is then written to port 1. Next, the Sk video data segment is written from cache buffer 184 to port 2 with one I/O operation.
As can be realized, all of the problems described for the conventional approach of Fig. 12 are overcome by the system illustrated in Fig. 13.
Fig. 14 illustrates the data flow for a configuration containing support for peer-to-peer operations between a disk storage node and a video output card. A pair of disk drives 190, 192 contain a striped video presentation which is fed directly to a pair of video ports 194, 196 without passing through the main memory of an intervening communication node 14.
The data flow for this configuration is to read segment Sk from disk 192 directly to port 1 (with one I/O operation) via disk cache buffer 198.
If a call follows to read segment Sk to port 2, segment Sk is read directly from disk cache buffer 198 into port 2 (with one I/O operation).
When the data read into the disk cache buffer 198 for port 1 is still resident for the write to port 2, the best possible use of memory, bus, and processor resources results in the transfer of the video segment to ports 1 and 2.
It is possible to combine the peer-to-peer and main memory caching mechanisms, e.g., using peer-to-peer operations for video presentations which are playing to only one port of a communication node 14, and caching in the communications node 14 for video presentations which are playing to multiple ports of the communication node 14.
A policy for dividing the caching responsibility between disk storage nodes and the communication node is chosen to maximize the number of video streams which can be supported with a given hardware configuration. If the number of streams to be supported is known, then the amount and placement of caching storage can be determined.
E2.2. PREDICTIVE CACHING
A predictive caching mechanism meets the need for a caching policy well suited to video delivery. Video presentations are in general very predictable. Typically, they start playing at the beginning, play at a fixed rate for a fairly lengthy predetermined period, and stop only when the end is reached. The caching approach of the media streamer 10 takes advantage of this predictability to optimize the set of video segments which are cached at any one time.
The predictability is used both to schedule a read operation to fill a cache buffer, and to drive the algorithm for reclaiming cache buffers. Buffers whose contents are not predicted to be used before they would expire are reclaimed immediately, freeing the space for higher priority use. Buffers whose contents are in line for use within a reasonable time are not reclaimed, even if their last use was long ago.
More particularly, given videos v1, v2, ..., and streams s1, s2, ... playing these videos, each stream sj plays one video, v(sj), and the time predicted for writing the k-th segment of v(sj) is a linear function:

t(sj, k) = a(sj) + k * r(sj)

where a(sj) depends on the start time and starting segment number, r(sj) is the constant time it takes to play a segment, and t(sj, k) is the scheduled time to play the k-th segment of stream sj.
This information is used both to schedule a read operation to fill a cache buffer, and to drive the algorithm for re-using cache buffers. Some examples of the operation of the cache management algorithm follow:
EXAMPLE A
A cache buffer containing a video segment which is not predicted to be played by any of the currently playing video streams is re-used before re-using any buffers which are predicted to be played. After satisfying this constraint, the frequency of playing the video and the segment number are used as weights to determine a priority for keeping the video segment cached. The highest retention priority within this group is assigned to video segments that occur early in a frequently played video.
EXAMPLE B
For a cache buffer containing a video segment which is predicted to be played, the next predicted play time and the number of streams left to play the video segment are used as weights to determine the priority for keeping the video segment cached. The weights essentially allow the retention priority of a cache buffer to be set to the difference between the predicted number of I/Os (for any video segment) with the cache buffer reclaimed, and the predicted number with it retained.
For example, if v5 is playing on s7, v8 is playing on s2 and s3, with s2 running 5 seconds behind s3, and v4 is playing on streams s12 to s20 with each stream 30 seconds behind the next, then: buffers containing v5 data already used by s7 are reclaimed first, followed by buffers containing v8 data already used by s2 (the trailing stream for v8), followed by buffers containing v4 data already used by the trailing stream for v4, followed by remaining buffers with the lowest retention priority.
The cache management algorithm provides variations for special cases such as connection operations (where it is possible to predict that a video segment will be played in the near future, but not exactly when) and stop operations (when previous predictions must be revised).
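A simplified sketch of the retention decision described in Examples A and B is given below. The weighting formula is an assumption chosen only to illustrate the idea that buffers with no remaining predicted use are reclaimed at once, while buffers with many imminent predicted uses are retained longest; it is not the exact algorithm.

    #include <stdio.h>

    /* Simplified sketch of the predictive reclaim decision.  A buffer whose
     * segment no playing stream is predicted to reach is reclaimed at once;
     * otherwise its retention priority grows with the number of streams
     * still due to play it and with how soon the next play is predicted.
     * The formula is an assumption made for illustration.                  */
    typedef struct {
        double next_play_time;   /* t(sj,k) for the nearest stream, seconds */
        int    streams_left;     /* streams still scheduled to play segment */
    } cache_buffer_t;

    static double retention_priority(const cache_buffer_t *b, double now)
    {
        if (b->streams_left == 0)
            return 0.0;                          /* reclaim immediately      */
        double wait = b->next_play_time - now;   /* sooner use, higher value */
        if (wait < 1.0)
            wait = 1.0;
        return (double)b->streams_left / wait;
    }

    int main(void)
    {
        cache_buffer_t used_up = {  0.0, 0 };  /* already played by all streams   */
        cache_buffer_t soon    = { 12.0, 9 };  /* e.g. v4 segment, streams s12-s20 */
        cache_buffer_t later   = { 40.0, 1 };
        printf("%.2f %.2f %.2f\n",
               retention_priority(&used_up, 10.0),
               retention_priority(&soon, 10.0),
               retention_priority(&later, 10.0));
        return 0;
    }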
E2.3. SYNCHRONIZING STREAMS TO OPTIMIZE CACHING
It is desirable to cluster all streams that require a given video segment, to minimize the time that the cache buffer containing that segment must remain in storage and thus leave more of the system capacity available for other video streams. For video playing, there is usually little flexibility in the rate at which segments are played. However, in some applications of video delivery the rate of playing is flexible (that is, video and audio may be accelerated or decelerated slightly without evoking adverse human reactions). Moreover, videos may be delivered for purposes other than immediate human viewing. When a variation in rate is allowed, the streams out in front (timewise) are played at the minimum allowable rate and those in back (timewise) at the maximum allowable rate in order to close the gap between the streams and reduce the time that segments must remain buffered.
The clustering of streams using a same video presentation is also taken into account during connection and play operations. For example, VS-PLAY-AT-SIGNAL can be used to start playing a video on multiple streams at the same time. This improves clustering, leaving more system resources for other video streams and enhancing the effective capacity of the system. More specifically, clustering, by delaying one stream for a short period so that it coincides in time with a second stream, enables one copy of segments in cache to be used for both streams and thus conserves processing assets.
F. VIDEO OPTIMIZED DIGITAL MEMORY ALLOCATION
Digital video data has attributes unlike those of normal data processing data in that it is non-random (that is, sequential), large, and time critical rather than content critical. Multiple streams of data must be delivered at high bit rates, requiring all nonessential overhead to be minimized in the data path. Careful buffer management is required to maximize the efficiency and capacity of the media streamer 10. Memory allocation, deallocation, and access are key elements in this process, and improper usage can result in memory fragmentation, decreased efficiency, and delayed or corrupted video data.
The media streamer 10 of this embodiment employs a memory allocation procedure which allows high level applications to allocate and deallocate non-swappable, page-aligned, contiguous memory segments (blocks) for digital video data. The procedure provides a simple, high level interface to video transmission applications and utilizes low level operating system modules and code segments to allocate memory blocks in the requested size. The memory blocks are contiguous and fixed in physical memory, eliminating the delays or corruption possible from virtual memory swapping or paging, and the complexity of having to implement gather/scatter routines in the data transmission software.
The high level interface also returns a variety of addressing mode values for the requested memory block, eliminating the need to do costly dynamic address conversion to fit the various memory models that can be operating concurrently in a media streamer environment. The physical address is available for direct access by other device drivers, such as a fixed disk device, as well as the process linear and process segmented addresses that are used by various applications. A deallocation routine is also provided that returns a memory block to the system, eliminating fragmentation problems since the memory is all returned as a single block.
F1. COMMANDS EMPLOYED FOR MEMORY ALLOCATION
1. Allocate Physical Memory:
Allocate the requested size memory block; a control block is returned with the various memory model addresses of the memory area, along with the length of the block.
2. Deallocate Physical Memory:
Return the memory block to the operating system and free the associated memory pointers.
F2. APPLICATION PROGRAM INTERFACE
A device driver is defined in the system configuration files and is automatically initialized as the system starts. An application then opens the device driver as a pseudo device to obtain its label, then uses the interface to pass the commands and parameters. The supported commands are Allocate Memory and Deallocate Memory; the parameters are the memory size and pointers to the logical memory addresses. These addresses are set by the device driver once the physical block of memory has been allocated and the physical address is converted to logical addresses. A null is returned if the allocation fails.
Fig. 15 shows a typical set of applications that would use this procedure. Buffer 1 is requested by a 32-bit application for data that is modified and then placed into buffer 2. This buffer can then be directly manipulated by a 16-bit application using a segmented address, or by a physical device such as a fixed disk drive. By using this allocation scheme to preallocate the fixed, physical, and contiguous buffers, each application is enabled to use its native direct addressing to access the data, eliminating the address translation and dynamic memory allocation delays. A video application may use this approach to minimize data movement by placing the digital video data in the buffer directly from the physical disk, then transferring it directly to the output device without moving it several times in the process.
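A user-level approximation of the Allocate/Deallocate interface is sketched below. In the embodiment the memory is obtained through the device driver as pinned, page-aligned, physically contiguous storage with physical, process linear, and process segmented addresses returned in a control block; the sketch substitutes ordinary heap memory, so only one address field is meaningful, and all type and field names are assumptions made for illustration.

    #include <stdio.h>
    #include <stdlib.h>

    /* Illustrative control block for an "Allocate Physical Memory" style
     * call.  A real implementation would obtain pinned, page-aligned,
     * physically contiguous memory from the operating system and fill in
     * the physical and per-model logical addresses; here ordinary heap
     * memory stands in for it, so only one address is meaningful.         */
    typedef struct {
        void          *process_linear;   /* address for 32-bit applications */
        unsigned long  physical;         /* would be filled in by the driver */
        size_t         length;           /* size of the contiguous block    */
    } mem_block_t;

    static int alloc_video_buffer(size_t size, mem_block_t *blk)
    {
        blk->process_linear = malloc(size);      /* stand-in allocation     */
        blk->physical = 0;                       /* unknown at user level   */
        blk->length = size;
        return blk->process_linear != NULL;      /* null means failure      */
    }

    static void free_video_buffer(mem_block_t *blk)
    {
        free(blk->process_linear);               /* whole block returned    */
        blk->process_linear = NULL;
        blk->length = 0;
    }

    int main(void)
    {
        mem_block_t buf;
        if (alloc_video_buffer(256 * 1024, &buf))   /* one 256 KB segment   */
            printf("allocated %lu bytes\n", (unsigned long)buf.length);
        free_video_buffer(&buf);
        return 0;
    }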
G. DISK DRIVE OPTIMIZED FOR VIDEO APPLICATIONS
It is important that video streams be delivered to their destination isochronously, that is, without delays that can be perceived by the human eye as discontinuities in movement or by the ear as interruptions in sound. Current disk technology may involve periodic actions, such as performing predictive failure analysis, that may cause significant delays in data access. While most I/O operations complete within 100 ms, periodic delays of 100 ms are common and delays of three full seconds can occur.
The media streamer 10 must also be capable of efficiently sustaining high data transfer rates. A disk drive configured for general purpose data storage and retrieval will suffer inefficiencies in the use of memory, disk buffers, SCSI bus and disk capacity if not optimized for the video server application.
In accordance with an aspect of the embodiment, disk drives employed herewith are tailored for the role of smooth and timely delivery of large amounts of data by optimizing disk parameters. The parameters may be incorporated into the manufacture of disk drives specialized for video servers, or they may be variables that can be set through a command mechanism.
Parameters controlling periodic actions are set to minimize or eliminate delays. Parameters affecting buffer usage are set to allow for transfer of very large amounts of data in a single read or write operation.
Parameters affecting speed matching between a SCSI bus and a processor bus are tuned so that data transfer starts neither too soon nor too late. The disk media itself is formatted with a sector size that maximizes effective capacity and bandwidth.
To accomplish optimization:
The physical disk media is formatted with a maximum allowable physical sector size. This formatting option minimizes the amount of space wasted in gaps between sectors, maximizes device capacity, and maximizes the burst data rate. A preferred implementation is 744 byte sectors.
Disks may have an associated buffer. This buffer is used for reading data from the disk media asynchronously from availability of the bus for the transfer of the data. Likewise, the buffer is used to hold data arriving from the bus asynchronously from the transfer of that data to the disk media. The buffer may be divided into a number of segments, and the number is controlled by a parameter. If there are too many segments, each may be too small to hold the amount of data requested in a single transfer. When the buffer is full, the device must initiate reconnection and begin transfer; if the bus/device is not available at this time, a rotational delay will ensue. In the preferred implementation, this value is set so that any buffer segment is at least as large as the data transfer size.
As a buffer segment begins to fill on a read, the disk attempts to reconnect to the bus to effect a data transfer to the host. The point in time at which the disk attempts this reconnection affects the efficiency of bus utilization. The relative speeds of the bus and the disk determine the best point in time during the fill operation to begin data transfer to the host. Likewise, during write operations, the buffer will fill as data arrives from the host and, at a certain point in the fill process, the disk should attempt a reconnection to the bus. Accurate speed matching results in fewer disconnect/reselect cycles on the SCSI bus with resulting higher maximum throughput.
The parameters that control when the reconnection is attempted are called "read buffer full ratio" and "write buffer empty ratio". For video data, the preferred algorithm for calculating these ratios is 256 x (Instantaneous SCSI Data Transfer Rate - Sustainable Disk Data Transfer Rate) / Instantaneous SCSI Data Transfer Rate. Presently preferred values for the buffer-full and buffer-empty ratios are approximately 204.
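Using the algorithm above, the ratio can be computed as in the following sketch. The 20 MB/sec and 4 MB/sec transfer rates are example values only, chosen so that the result lands near the stated preferred value of approximately 204.

    #include <stdio.h>

    /* Buffer full/empty ratio calculation from the algorithm given above.
     * The transfer rates used are example values only; they are chosen so
     * that the result lands near the stated preferred value of about 204. */
    int main(void)
    {
        double scsi_rate = 20.0; /* instantaneous SCSI data transfer rate, MB/s */
        double disk_rate = 4.0;  /* sustainable disk data transfer rate, MB/s   */

        double ratio = 256.0 * (scsi_rate - disk_rate) / scsi_rate;
        printf("read buffer full / write buffer empty ratio = %d\n", (int)ratio);
        return 0;
    }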
Some disk drive designs require periodic recalibration of head position with changes in temperature. Some of these disk drive types further allow control over whether thermal compensation is done for all heads in an assembly at the same time, or whether thermal compensation is done one head at a time. If all heads are done at once, delays of hundreds of milliseconds during a read operation for video data may ensue. Longer delays in read times result in the need for larger main memory buffers to smooth data flow and prevent artifacts in the multimedia presentation. The preferred approach is to program the Thermal Compensation Head Control function to allow compensation of one head at a time.
The saving of error logs and the performance of predictive failure analysis can take several seconds to complete. These delays cannot be tolerated by video server applications without very large main memory buffers to smooth over the delays and prevent artifacts in the multimedia presentation. Limit Idle Time Function parameters can be used to inhibit the saving of error logs and the performing of idle time functions. The preferred implementation sets a parameter to limit these functions.
H. DATA STRIPING FOR VIDEO DATA
In video applications, there is a need to deliver multiple streams from the same data (e.g., a movie). This requirement translates to a need to read data at a high data rate; that is, a data rate needed for delivering one stream multiplied by the number of streams simultaneously accessing the same data. Conventionally, this problem was generally solved by having multiple copies of the data and thus resulted in additional expense. The media streamer 10 of this embodiment uses a technique for serving many simultaneous streams from a single copy of the data. The technique takes into account the data rate for an individual stream and the number of streams that may be simultaneously accessing the data.
The above-mentioned data striping involves the concept of a logical file whose data is partitioned to reside in multiple file components, called stripes. Each stripe is allowed to exist on a different disk volume, thereby allowing the logical file to span multiple physical disks. The disks may be either local or remote.
When the data is written to the logical file, it is separated into logical lengths (i.e., segments) that are placed sequentially into the stripes. As depicted in Fig. 16, a logical file for a video, video 1, is segmented into M segments or blocks, each of a specific size, e.g., 256 KB. The last segment may only be partially filled with data. A segment of data is placed in the first stripe, followed by a next segment that is placed in the second stripe, etc. When a segment has been written to each of the stripes, the next segment is written to the first stripe. Thus, if a file is being striped into N stripes, then stripe 1 will contain the segments 1, N+1, 2*N+1, etc., and stripe 2 will contain the segments 2, N+2, 2*N+2, etc.
A similar striping of data is known to be used in data processing RAID arrangements, where the purpose of striping is to assure data integrity in case a disk is lost. Such a RAID storage system dedicates one of N disks to the storage of parity data that is used when data recovery is required. The disk storage nodes 16 of the media streamer 10 are organized as a RAID-like structure, but parity data is not required (as a copy of the video data is available from a tape store).
Fig. 17 illustrates a first important aspect of this data arrangement, i.e., the separation of each video presentation into data blocks or segments that are spread across the available disk drives to enable each video presentation to be accessed simultaneously from multiple drives without requiring multiple copies. Thus, the concept is one of striping, not for data integrity reasons or performance reasons per se, but for concurrency or bandwidth reasons. Thus, the media streamer 10 stripes video presentations by play segments, rather than by byte, block, etc.
As is shown in Fig. 17, where a video data file 1 is segmented into M segments and split into four stripes, stripe 1 is a file containing segments 1, 5, 9, etc. of video file 1; stripe 2 is a file containing segments 2, 6, 10, etc. of video file 1; stripe 3 is a file containing segments 3, 7, 11, etc. of the video file; and stripe 4 is a file containing segments 4, 8, 12, etc. of video file 1, until all M segments of video file 1 are contained in one of the four stripe files.
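The mapping from segment number to stripe file described above can be written directly; the function name is illustrative.

    #include <stdio.h>

    /* Segment k (1-based) of a video striped over n stripe files lives in
     * stripe ((k - 1) % n) + 1, as described in the text above.           */
    static int stripe_of(int segment, int num_stripes)
    {
        return ((segment - 1) % num_stripes) + 1;
    }

    int main(void)
    {
        int n = 4;                        /* four stripe files, as in Fig. 17 */
        for (int k = 1; k <= 12; k++)
            printf("segment %2d -> stripe %d\n", k, stripe_of(k, n));
        return 0;
    }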
Given the described striping strategy, parameters are computed as follows to customize the striping of each individual video.
First, the segment size is selected so as to obtain a reasonably effective data rate from the disk. However, it cannot be so large as to adversely affect the latency. Further, it should be small enough to buffer/cache in memory. A preferred segment size is 256 KB, and is constant for video presentations with data rates in the range from 128 KB/sec to 512 KB/sec. If the video data rate is higher, then it may be preferable to use a larger segment size. The segment size depends on the basic unit of I/O operation for the range of video presentations stored on the same media. The principle employed is to use a segment size that contains approximately 0.5 to 2 seconds of video data.
Next, the number of stripes, i.e., the number of disks over which video data is distributed, is determined. This number must be large enough to sustain the total data rate required and is computed individually for each video presentation based on an anticipated usage rate. More specifically, each disk has a logical volume associated with it. Each video presentation is divided into component files, as many components as the number of stripes needed. Each component file is stored on a different logical volume. For example, if video data has to be delivered at 250 KB/sec per stream and 30 simultaneous streams are supported from the same video, started at, say, 15 second intervals, a total data rate of at least 7.5 MB/sec is obtained. If a disk drive can support on average 3 MB/sec, at least 3 stripes are required for the video presentation.
The effective rate at which data can be read from a disk is influenced by the size of the read operation. For example, if data is read from the disk in 4 KB blocks (from random positions on the disk), the effective data rate may be 1 MB/sec, whereas if the data is read in 256 KB blocks the rate may be 3 MB/sec. However, if data is read in very large blocks, the memory required for buffers also increases and the latency, the delay in using the data read, also increases because the operation has to complete before the data can be accessed. Hence there is a trade-off in selecting a size for data transfer. A size is selected based on the characteristics of the devices and the memory configuration. Preferably, the size of the data transfer is the selected segment size. For a given segment size the effective data rate from a device is determined. For example, for some disk drives, a 256 KB segment size provides a good balance between the effective use of the disk drives (effective data rate of 3 MB/sec) and buffer size (256 KB).
If striping is not used, the maximum number of streams that can be supported is limited by the effective data rate of the disk; e.g., if the effective data rate is 3 MB/s and the stream data rate is 200 KB/s, then no more than 15 streams can be supplied from the disk. If, for instance, 60 streams of the same video are needed, then the data has to be duplicated on 4 disks. However, if striping is used in accordance with this embodiment, 4 disks of 1/4 the capacity can be used. Fifteen streams can be simultaneously played from each of the 4 stripes for a total of 60 simultaneous streams from a single copy of the video data. The start times of the streams are skewed to ensure that the requests for the 60 streams are evenly spaced among the stripes. Note also that if the streams are started close to each other, the need for I/O can be reduced by using video data that is cached.
The number of stripes for a given video is influenced by two factors: the first is the maximum number of streams that are to be supplied at any time from the video, and the other is the total number of streams that need to be supplied at any time from all the videos stored on the same disks as the video.
The number of stripes (s) for a video is determined as follows: s = maximum (r*n/d, r*m/d), where: r = nominal data rate at which the stream is to be played; n = maximum number of simultaneous streams from this video presentation at the nominal rate; d = effective data rate from a disk (note that the effective data rate from disk is influenced by the segment size); m = maximum number of simultaneous streams at the nominal rate from all disks that contain any part of this video presentation; and s = number of stripes for a video presentation.
The disks over which data for a video presentation is striped are managed as a set, and can be thought of as a very large physical disk. Striping allows a video file to exceed the size limit of the largest file that a system's physical file system will allow. The video data, in general, will not always require the same amount of storage on all the disks in the set. To balance the usage of the disks, when a video is striped, the striping is begun from the disk that has the most free space. As an example, consider the case of a video presentation that needs to be played at 2 mbits/sec (250,000 bytes/sec), i.e., r is equal to 250,000 bytes/sec, and assume that it is necessary to deliver up to 30 simultaneous streams from this video, i.e., n is 30. Assume in this example that m is also 30, i.e., the total number of streams to be delivered from all disks is also 30. Further, assume that the data is striped in segments of 250,000 bytes and that the effective data rate from a disk for the given segment size (250,000 bytes) is 3,000,000 bytes/sec. Then s, the number of stripes needed, is (250,000 * 30 / 3,000,000) = 2.5, which is rounded up to 3 (s = ceiling(r*n/d)).
If the maximum number of streams from all disks that contain this data is, for instance, 45, then 250,000 * 45 / 3,000,000 or 3.75 stripes are needed, which is rounded up to 4 stripes.
Even though striping the video into 3 stripes is sufficient to meet the requirement for delivering the 30 streams from the single copy of the video, if the disks containing the video also contain other content, and the total number of streams to be supported from those disks is 45, then four disk drives are needed (striping level of 4).
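The stripe-count calculation for this example can be expressed as in the following sketch, using the values from the worked example above.

    #include <stdio.h>
    #include <math.h>

    /* Number of stripes s = ceiling(max(r*n, r*m) / d), using the worked
     * example above: r = 250,000 bytes/sec, n = 30 streams from this video,
     * m = 45 streams from all disks containing part of this video, and
     * d = 3,000,000 bytes/sec effective disk data rate.                    */
    int main(void)
    {
        double r = 250000.0, d = 3000000.0;
        int    n = 30, m = 45;

        double load = (r * n > r * m) ? r * n : r * m;
        int stripes = (int)ceil(load / d);

        printf("stripes required: %d\n", stripes);   /* prints 4 */
        return 0;
    }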
The manner in which the algorithm is used in the media streamer 10 is as follows. The storage (number of disk drives) is divided into groups of disks. Each group has a certain capacity and capability to deliver a given number of simultaneous streams (at an effective data rate per disk based on a predetermined segment size). The segment size for each group is constant. Different groups may choose different segment sizes (and hence have different effective data rates). When a video presentation is to be striped, a group is first chosen by the following criteria.
The segment size must be consistent with the data rate of the video, i.e., if the stream data rate is 250,000 bytes/sec, the segment size is in the range of 125 KB to 500 KB. The next criterion is to ensure that the number of disks in the group is sufficient to support the maximum number of simultaneous streams, i.e., the number of disks is at least r*n/d, where "r" is the stream data rate, "n" the maximum number of simultaneous streams, and "d" the effective data rate of a disk in the group. Finally, it should be ensured that the sum total of simultaneous streams that need to be supported from all of the videos in the disk group does not exceed its capacity. That is, if "m" is the capacity of the group, then "m - n" should be greater than or equal to the sum of all the streams that can be played simultaneously from the videos already stored in the group.
The calculation is done in the control node 18 at the time the video data is loaded into the media streamer 10. In the simplest case all disks will be in a single pool which defines the total capacity of the media streamer 10, both for storage and for the number of supportable streams. In this case the number of disks (or stripes) necessary to support a given number of simultaneous streams is calculated from the formula m*r/d, where m is the number of streams, r is the data rate for a stream, and d is the effective data rate for a disk. Note that if the streams can be of different rates, then m*r, in the above formula, should be replaced by: Max (sum of the data rates of all simultaneous streams).
The result of using this technique for writing the data is that the data can be read for delivering many streams at a specified rate without the need for multiple copies of the digital representation of the video presentation. By striping the data across multiple disk volumes, the reading of one part of the file for delivering one stream does not interfere with the reading of another part of the file for delivering another stream.
I. MEDIA STREAMER DATA TRANSFERS AND CONVERSION PROCEDURES
I.1. DYNAMIC BANDWIDTH ALLOCATION FOR VIDEO DELIVERY TO THE SWITCH 18
Conventionally, video servers generally fit one of two profiles. Either they use PC technology to build a low cost (but also low bandwidth) video server, or they use super-computing technology to build a high bandwidth (also expensive) video server. An object of this invention, then, is to be able to deliver high bandwidth video, but without the high cost of super-computer technology.
A preferred approach to achieving high bandwidth at low cost is to use the low latency switch (crossbar circuit switch matrix) 18 to interconnect low cost PC based "nodes" into a video server (as shown in Fig. 1). An important aspect of the media streamer architecture is efficient use of the video stream bandwidth that is available in each of the storage nodes 16 and communication nodes 14. The bandwidth is maximized by combining the special nature of video data (write once, read many times) with the dynamic, real time bandwidth allocation capability of a low-cost switch technology.
Fig. 18 shows a conventional logical connection between a switch interface and a storage node. The switch interface must be full duplex (i.e., information can be sent in either direction simultaneously) to allow the transfer of video (and control information) both into and out of the storage node. Because video content is written to the storage node once and then read many times, most of the bandwidth requirements for the storage node are in the direction towards the switch. In the case of a typical switch interface, the bandwidth of the storage node is under-utilized because the half of the bandwidth devoted to write capability is so infrequently used.
Fig. 19 shows a switch interface in accordance with this embodiment. This interface dynamically allocates its total bandwidth in real time either into or out of the switch 18 to meet the current demands of the node. (The storage node 16 is used as an example.) The communication nodes 14 have similar requirements, but most of their bandwidth is in the direction from the switch 18.
The dynamic allocation is achieved by grouping two or more of the physical switch interfaces, using appropriate routing headers for the switch 12, into one logical switch interface 18a. The video data (on a read, for example) is then split between the two physical interfaces. This is facilitated by striping the data across multiple storage units as described previously. The receiving node combines the video data back into a single logical stream.
As an example, in Fig. 18 the switch interface is rated at 2X MB/sec full duplex, i.e., X MB/sec in each direction. But video data is usually sent only in one direction (from the storage node into the switch). Therefore only X MB/sec of video bandwidth is delivered from the storage node, even though the node has twice that capability (2X). The storage node is under-utilized. The switch interface of Fig. 19 dynamically allocates the entire 2X MB/sec bandwidth to transmitting video from the storage node into the switch. The result is increased bandwidth from the node, higher bandwidth from the video server, and a lower cost per video stream.
J. ISOCHRONOUS VIDEO DATA DELIVERY USING COMMUNICATIONS ADAPTERS
Digital video data iβ βequential, continuouβ, large, and time critical, rather than content critical. Streamβ of video data muβt be delivered isochronouβly at high bit rateβ, requiring all noneββential overhead to be minimized in the data path. Typically, the receiving hardware iβ a video βet top box or βo e other βuitable video data receiver. Standard serial communication protocols insert additional bite and byteβ of data into the βtream for βynchronization and data verification, often at the hardware level. Thiβ corrupts the video data βtream if the receiver iβ not able to tranβparently remove the additional data. The additional overhead introduced by theβe bite and byteβ alβo decreaaeβ the effective data rate which creates video decompression and conversion errorβ.
It has been determined that the transmission of video data over standard communications adapters, to ensure isochronous delivery to a user, requires disabling most of the standard serial communications protocol attributes. The methods for achieving this vary depending on the communications adapters used, but the following describes the underlying concepts. In Fig. 20, a serial communications chip 200 in a communications node 14 disables data formatting and integrity information such as the parity, start and stop bits, cyclic redundancy check codes and sync bytes, and prevents idle characters from being generated. Input FIFO buffers 202, 204, 206, etc. are employed to insure a constant (isochronous) output video data stream while allowing bus cycles for loading of the data blocks. A 1000 byte FIFO buffer 208 simplifies the CPU and bus loading logic.
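A minimal sketch of the kind of chip set-up implied above, assuming a register-programmable serial controller; the register names, bit masks, and chip_read()/chip_write() helpers are hypothetical stand-ins, and only the intent (remove all protocol framing from the outgoing video stream) follows the text.

    #include <stdint.h>

    #define REG_LINE_CTRL   0x00u
    #define REG_SYNC_CTRL   0x01u
    #define REG_IDLE_CTRL   0x02u

    #define LC_PARITY_EN    0x01u   /* parity generation          */
    #define LC_STARTSTOP_EN 0x02u   /* start/stop bit insertion   */
    #define LC_CRC_EN       0x04u   /* cyclic redundancy check    */
    #define SC_SYNC_EN      0x01u   /* sync byte generation       */
    #define IC_IDLE_EN      0x01u   /* idle character generation  */

    void    chip_write(uint16_t reg, uint8_t val);  /* hypothetical register write */
    uint8_t chip_read(uint16_t reg);                /* hypothetical register read  */

    /* Disable every protocol attribute that would add bits or bytes to the
     * outgoing compressed video data stream. */
    void configure_raw_video_output(void)
    {
        chip_write(REG_LINE_CTRL,
                   chip_read(REG_LINE_CTRL) &
                   (uint8_t)~(LC_PARITY_EN | LC_STARTSTOP_EN | LC_CRC_EN));
        chip_write(REG_SYNC_CTRL, chip_read(REG_SYNC_CTRL) & (uint8_t)~SC_SYNC_EN);
        chip_write(REG_IDLE_CTRL, chip_read(REG_IDLE_CTRL) & (uint8_t)~IC_IDLE_EN);
    }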
If the communications output chip 200 does not allow the disabling of initial synchronization (sync) byte generation, then the value of the sync byte is programmed to the value of the first byte of each data block (and the data block pointer is incremented to the second byte). Byte alignment must also be managed with real data, since any padding bytes will corrupt the data stream if they are not part of the actual compressed video data.
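A small sketch of this sync-byte workaround, with a hypothetical program_sync_byte() chip call: the mandatory sync byte is made to carry the first real data byte, and the block pointer is advanced so that byte is not sent twice.

    #include <stddef.h>
    #include <stdint.h>

    void program_sync_byte(uint8_t value);   /* hypothetical chip register write */

    void start_block(const uint8_t **blockp, size_t *lenp)
    {
        program_sync_byte((*blockp)[0]);      /* sync byte carries real data      */
        (*blockp)++;                          /* transmit from the second byte on */
        (*lenp)--;
    }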
To achieve the constant, high speed serial data outputs required for the high quality levels of compressed video data, either a circular buffer or a plurality of large buffers (e.g. 202, 204, 206) must be used. This is necessary to allow sufficient time to fill an input buffer while outputting data from a previously filled buffer. Unless buffer packing is done earlier in the video data stream path, the end of video condition can result in a very small buffer that will be output before the next buffer transfer can complete, resulting in a data underrun. This necessitates a minimum of three large, independent buffers. A circular buffer in dual mode memory (writable while reading) is also a suitable embodiment.
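The buffering scheme can be pictured with the following sketch (not from the patent): three independent buffers rotate so that one can be filled from the bus while another is drained to the serial output, with the third guarding against an underrun at the short end-of-video buffer. The buffer count follows the text; the buffer size and routine names are assumptions.

    #include <stddef.h>

    #define NBUF      3
    #define BUF_SIZE  65536

    struct video_buf {
        unsigned char data[BUF_SIZE];
        size_t        len;      /* valid bytes in this buffer       */
        int           full;     /* 1 = filled and waiting to output */
    };

    static struct video_buf bufs[NBUF];
    static int fill_idx;        /* buffer currently being filled  */
    static int drain_idx;       /* buffer currently being output  */

    /* Called when the bus transfer into bufs[fill_idx] completes. */
    void on_fill_complete(size_t nbytes)
    {
        bufs[fill_idx].len  = nbytes;
        bufs[fill_idx].full = 1;
        fill_idx = (fill_idx + 1) % NBUF;
    }

    /* Called when the serial output has emptied bufs[drain_idx]. */
    void on_drain_complete(void)
    {
        bufs[drain_idx].full = 0;
        drain_idx = (drain_idx + 1) % NBUF;
        /* If bufs[drain_idx].full is still 0 here, a data underrun would occur. */
    }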
J1. CONVERSION OF VIDEO IMAGES AND MOVIES FROM COMPRESSED MPEG-1, 1+, OR MPEG-2, DIGITAL DATA FORMAT INTO INDUSTRY STANDARD TELEVISION FORMATS (NTSC OR PAL)
As described above, digital video data is moved from disk to buffer memory. Once enough data is in buffer memory, it is moved from memory to an interface adapter in a communications node 14. The interfaces used are the SCSI 20 MB/sec. fast/wide interface or the SSA serial SCSI interface. The SCSI interface is expanded to handle 15 addresses and the SSA architecture supports up to 256. Other suitable interfaces include, but are not limited to, RS422, V.35, V.36, etc.
As shown in Fig. 21, video data from the interface is passed from a communication node 14 across a communications bus 210 to NTSC adapter 212 (see also Fig. 20) where the data is buffered. Adapter 212 pulls the data from a local buffer 214, where multiple blocks of data are stored to maximize the performance of the bus. The key goal of adapter 212 is to maintain an isochronous flow of data from the memory 214 to MPEG chips 216, 218 and thus to NTSC chip 220 and D/A 222, to insure that there are no interruptions in the delivery of video and/or audio.
MPEG logic modules 216, 218 convert the digital (compressed) video data into component level video and audio. An NTSC encoder 220 converts the signal into NTSC baseband analog signals. MPEG audio decoder 216 converts the digital audio into parallel digital data which is then passed through a Digital to Analog converter 222 and filtered to generate audio Left and Right outputs.
The goal in creating a solution to the speed matching and isochronous delivery problem is an approach that not only maximizes the bandwidth delivery of the system but also imposes the fewest performance constraints.
Typically, application developers have used a bus structure, such as SSA and SCSI, for control and delivery of data between processors and mechanical storage devices such as disk files, tape files, optical storage units, etc. Both of these buses contain attributes that make them suitable for high bandwidth delivery of video data, provided that means are taken to control the speed and isochronous delivery of video data.
The SCSI bus allows for the bursting of data at 20 Mbytes/sec., which minimizes the amount of time that any one video signal is being moved from buffer memory to a specific NTSC adapter. The adapter card 212 contains a large buffer 214 with a performance capability to burst data into memory from bus 210 at high peak rates and to remove data from buffer 214 at much lower rates for delivery to NTSC decoder chips 216, 218. Buffer 214 is further segmented into smaller buffers and connected via software controls to act as multiple buffers connected in a circular manner.
This allows the system to deliver varying block sizes of data to separate buffers and controls the sequence of playout. An advantage of this approach is that it frees the system software to deliver blocks of video data well in advance of any requirement for the video data, and at very high delivery rates. This provides the media streamer 10 with the ability to manage many video streams on a dynamic throughput requirement. When a processor in a communications node has time, it can cause delivery of several large blocks of data that will be played in sequence. Once this is done, the processor is free to control other streams without an immediate need to deliver slow continuous isochronous data to each port.

To further improve the cost effectiveness of the decoder system, a small FIFO memory 224 is inserted between the larger decoder buffer 214 and MPEG decoders 216, 218. The FIFO memory 224 allows controller 226 to move smaller blocks, typically 512 bytes of data, from buffer 214 to FIFO 224 which, in turn, converts the data into serial bit streams for delivery to MPEG decoders 216, 218. Both the audio and the video decoder chips 216, 218 can take their input from the same serial data stream, and internally separate and decode the data required. The transmission of data from the output of the FIFO memory 224 occurs in an isochronous manner, or substantially isochronous manner, to ensure the delivery of an uninterrupted video presentation to a user or consumer of the video presentation.
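A sketch of the controller's role just described (the 512-byte move size comes from the text; the fifo_free_bytes() and fifo_write() helpers are hypothetical): small blocks are moved from the large segmented buffer into the output FIFO whenever the FIFO has room, so the FIFO can serialize an uninterrupted bit stream to the MPEG decoders.

    #include <stddef.h>

    #define MOVE_SIZE 512

    size_t fifo_free_bytes(void);                          /* hypothetical */
    void   fifo_write(const unsigned char *p, size_t n);   /* hypothetical */

    /* buf: current play buffer; *pos: read position within that buffer. */
    void service_fifo(const unsigned char *buf, size_t buf_len, size_t *pos)
    {
        while (fifo_free_bytes() >= MOVE_SIZE && *pos < buf_len) {
            size_t n = buf_len - *pos;
            if (n > MOVE_SIZE)
                n = MOVE_SIZE;
            fifo_write(buf + *pos, n);   /* FIFO serializes these bytes out */
            *pos += n;
        }
    }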
K. TRANSMISSION OF DIGITAL VIDEO TO SCSI DEVICES
As shown in Fig. 22, compressed digital video data and command streams from buffer memory are converted by device level software into SCSI commands and data streams, and are transmitted over SCSI bus 210 to a target adapter 212 at SCSI II fast data rates. The data is then buffered and fed at the required content output rate to MPEG logic for decompression and conversion to analog video and audio data. Feedback is provided across SCSI bus 210 to pace the data flow and insure proper buffer management.
The SCSI NTSC/PAL adapter 212 provides a high level interface to SCSI bus 210, supporting a subset of the standard SCSI protocol. The normal mode of operation is to open the adapter 212, write data (video and audio) streams to it, and close the adapter 212 only when completed. Adapter 212 pulls data as fast as necessary to keep its buffers full, with the communication nodes 14 and storage nodes 16 providing blocks of data that are sized to optimize the bus data transfer and minimize bus overhead.
System parameters can be overwritten via control packets using a Mode Select SCSI command if necessary. Video/Audio synchronization is internal to the adapter 212 and no external controls are required. Errors are minimized, with automatic resynchronization and continued audio/video output.
K1. SCSI LEVEL COMMAND DESCRIPTION
A mix of direct access device and sequential device commands is used, as well as standard common commands, to fit the functionality of the SCSI video output adapter. As with all SCSI commands, a valid status byte is returned after every command, and the sense data area is loaded with the error conditions if a check condition is returned. The standard SCSI commands used include RESET, INQUIRY, REQUEST SENSE, MODE SELECT, MODE SENSE, READ, WRITE, RESERVE, RELEASE, and TEST UNIT READY.
Video Commands:
The video control commands are user-level video output control commands, and are extensions to the standard commands listed above. They provide a simplified user level front end to the low level operating system or SCSI commands that directly interface to the SCSI video output adapter 212. The implementation of each command employs microcode to emulate the necessary video device function and avoid video and audio anomalies caused by invalid control states. A single SCSI command, the SCSI START/STOP UNIT command, is used to translate video control commands to the target SCSI video output adapter 212, with any necessary parameters moved along with the command. This simplifies both the user application interface and the adapter card 212 microcode. The following commands are employed.
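Purely as an illustration of this translation mechanism (not the patent's actual encoding), the sketch below packs a video command code and its parameter into a six-byte START/STOP UNIT command descriptor block; the byte positions chosen are assumptions, since the text states only that START/STOP UNIT carries the command code (Stop = 1 through Rewind = 7) together with any parameter.

    #include <stdint.h>

    #define SCSI_OP_START_STOP_UNIT 0x1Bu

    enum video_cmd {
        VCMD_STOP = 1, VCMD_PAUSE, VCMD_BLANK_MUTE, VCMD_SLOW_PLAY,
        VCMD_PLAY, VCMD_FAST_FORWARD, VCMD_REWIND
    };

    /* Build a 6-byte CDB carrying the video command and its parameter. */
    void build_video_cdb(uint8_t cdb[6], enum video_cmd cmd, uint8_t param)
    {
        cdb[0] = SCSI_OP_START_STOP_UNIT;
        cdb[1] = 0;
        cdb[2] = (uint8_t)cmd;    /* assumed: video command code here   */
        cdb[3] = param;           /* assumed: mode/rate/buffer selector */
        cdb[4] = 0;
        cdb[5] = 0;               /* control byte */
    }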
Stop (SCSI START/STOP 1 - parameter = mode)
The data input into the MPEG chip set (216, 218) is halted, the audio is muted, and the video is blanked. The parameter field selects the stop mode. The normal mode is for the buffer and position pointer to remain current, so that PLAY continues at the same location in the video stream. A second (end of movie or abort) mode is to set the buffer pointers to the start of the next buffer and release the current buffer. A third mode is also for end of movie conditions, but the stop (mute and blank) is delayed until the data buffer runs empty. A fourth mode may be employed with certain MPEG decoder implementations to provide for a delayed stop with audio, but freeze frame for the last valid frame when the data runs out. In each of these cases, the video adapter 212 microcode determines the stopping point so that the video and audio output is halted on the proper boundary to allow a clean restart.
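The four stop modes can be summarized as a simple enumeration, as adapter microcode might classify them (the names and numeric values are assumptions; only the behaviors follow the text).

    enum stop_mode {
        STOP_HOLD_POSITION = 0,   /* keep buffer and position; PLAY resumes here */
        STOP_RELEASE_BUFFER,      /* end-of-movie/abort: advance to next buffer  */
        STOP_DRAIN_THEN_STOP,     /* delay mute/blank until the buffer runs dry  */
        STOP_DRAIN_FREEZE_FRAME   /* drain, then freeze the last valid frame     */
    };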
Pause (SCSI START/STOP 2 - no parameters)
The data input into the MPEG chip set (216, 218) is halted and the audio is muted, but the video is not blanked. This causes the MPEG video chip set (216, 218) to hold a freeze frame of the last good frame. This is limited to avoid burn-in of the video tube. A Stop command is preferably issued by the control node 18, but the video output will automatically go to blank if no commands are received within 5 minutes. The adapter 212 microcode maintains the buffer positions and decoder states to allow for a smooth transition back to play.
Blank-Mute (SCSI START/STOP 3 - parameter = mode)
This command blanks the video output without impacting the audio output, mutes the audio output without impacting the video, or both. Both muting and blanking can be turned off with a single command using a Mode parameter, which allows a smoother transition and reduced command overhead. These are implemented on the video adapter 212 after decompression and conversion to analog, with hardware controls to ensure a positive, smooth transition.
Slow Play (SCSI START/STOP 4 - parameter = rate)
This command slows the data input rate into the MPEG chip set (216, 218), causing it to intermittently freeze frame, simulating a slow play function on a VCR. The audio is muted to avoid digital error noise. The parameter field specifies a relative speed from 0 to 100. An alternative implementation disables the decoder chip set (216, 218) error handling, and then modifies the data clocking speed into the decoder chip set to the desired playing speed. This is dependent on the flexibility of the video adapter's clock architecture.
Play (SCSI START/STOP 5 - parameter = buffer)
This command starts the data feed process into the MPEG chip set (216, 218), enabling the audio and video outputs. A buffer selection number is passed to determine which buffer to begin the playing sequence from, and a zero value indicates that the current play buffer should be used (typical operation). A non-zero value is only accepted if the adapter 212 is in STOPPED mode; if in PAUSED mode, the buffer selection parameter is ignored and playing is resumed using the current buffer selection and position.
When 'PLAYING', the controller 226 rotates through the buffers sequentially, maintaining a steady stream of data into the MPEG chip set (216, 218). Data is read from the buffer at the appropriate rate into the MPEG bus, starting at address zero, until N bytes are read; the controller 226 then switches to the next buffer and continues reading data. The adapter bus and microcode provide sufficient bandwidth for both the SCSI Fast data transfer into the adapter buffers 214 and the steady loading of the data onto the output FIFO 224 that feeds the MPEG decompression chips (216, 218).
Fast Forward (SCSI START/STOP 6 - parameter = rate)
This command is used to scan through data in a manner that emulates fast forward on a VCR. There are two modes of operation that are determined by the rate parameter. A rate of 0 means that it is a rapid fast forward where the video and audio should be blanked and muted, the buffers flushed, and an implicit play is executed when data is received from a new position forward in the video stream. An integer value between 1 and 10 indicates the rate at which the input stream is being forwarded. The video is 'sampled' by skipping over blocks of data to achieve the specified average data rate. The adapter 212 plays a portion of data at nearly the normal rate, jumps ahead, then plays the next portion to emulate the fast forward action.
Rewind (SCSI START/STOP 7 - parameter = buffer)
This command is used to scan backwards through data in a manner that emulates rewind on a VCR. There are two modes of operation that are determined by the rate parameter. A rate of 0 means that it is a rapid rewind where the video and audio should be blanked and muted, the buffers flushed, and an implicit play executed when data is received from a new position in the video stream. An integer value between 1 and 10 indicates the rate at which the input stream is being rewound. The video is 'sampled' by skipping over blocks of data to achieve the specified average data rate. The rewind data stream is built by assembling small blocks of data that are 'sampled' from progressively earlier positions in the video stream. The adapter card 212 smoothly handles the transitions and synchronization to play at the normal rate, skipping back to the next sampled portion to emulate rewind scanning.
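The 'sampling' used by both Fast Forward and Rewind can be sketched as follows (not from the patent): to emulate an average scan rate of roughly 'rate' times normal speed, one block is played and the intervening blocks are skipped, moving forward or backward through the presentation. The play_block() helper and the block numbering are hypothetical.

    void play_block(long block_no);   /* hypothetical: queue one block for playout */

    /* Scan forward (dir = +1) or backward (dir = -1) at roughly 'rate' times
     * normal speed, starting from block 'start' of 'total' blocks. */
    void scan(long start, long total, int rate, int dir)
    {
        for (long b = start; b >= 0 && b < total; b += (long)dir * rate)
            play_block(b);            /* the portions between samples are skipped */
    }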
K2. BUFFER MANAGEMENT
Digital video servers provide data to many concurrent output devices, but digital video data decompression and conversion requires a constant data stream. Data buffering techniques are used to take advantage of the SCSI data burst mode transmission, while still avoiding data underrun or buffer overrun, allowing media streamer 10 to transmit data to many streams with minimal intervention. SCSI video adapter card 212 (Figs. 21, 22) includes a large buffer 214 for video data to allow full utilization of the SCSI burst mode data transfer process. An exemplary configuration would be one buffer 214 of 768K, handled by local logic as a wrap-around circular buffer. Circular buffers are preferred to dynamically handle varying data block sizes, rather than fixed length buffers that are inefficient in terms of both storage and management overhead when transferring digital video data.
The video adapter card 212 microcode supports several buffer pointers, keeping the last top of data as well as the current length and top of data. This allows a retry to overwrite a failed transmission, or a pointer to be positioned to a byte position within the current buffer if necessary. The data block length is maintained exactly as transmitted (e.g., byte or word specific even if long word alignment is used by the intermediate logic) to insure valid data delivery to the decode chip set (216, 218). This approach minimizes the steady state operation overhead, while still allowing flexible control of the data buffers.
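A sketch of such a pointer set, assuming the 768K wrap-around buffer mentioned above; the field names and the retry helper are assumptions, while the idea of retaining the previous block's start and the exact current length (so a retry can overwrite a failed transmission) follows the text.

    #include <stddef.h>

    #define VIDEO_BUF_SIZE (768u * 1024u)

    struct circ_buf_state {
        unsigned char data[VIDEO_BUF_SIZE];
        size_t prev_top;      /* start of the last successfully received block */
        size_t cur_top;       /* start of the block currently being received   */
        size_t cur_len;       /* exact byte length of the current block        */
    };

    /* On a retry, rewind the write position so the retransmitted block
     * overwrites the one whose transmission failed. */
    static void retry_rewind(struct circ_buf_state *s)
    {
        s->cur_top = s->prev_top;
        s->cur_len = 0;
    }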
K2.1. BUFFER SELECTION AND POSITION
Assuming multiple sets of buffers are required, multiple pointers are available for all buffer related operations. For example, one set may be used to select the PLAY buffer and current position within that buffer, and a second set to select the write buffer and a position within that buffer (typically zero) for a data preload operation. A current length and a maximum length value are maintained for each block of data received, since variable length data blocks are also supported.
K2.2. AUTOMATIC MODE
The buffer operation is managed by the video adapter's controller 226, which places the N bytes of data in the next available buffer space starting at address zero of that buffer. Controller 226 keeps track of the length of data in each buffer and whether that data has been "played" or not. Whenever sufficient buffer space is free, the card accepts the next WRITE command and DMAs the data into that buffer. If not enough buffer space is free to accept the full data block (typically a Slow Play or Pause condition), the WRITE is not accepted and a buffer full return code is returned.
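The automatic-mode decision can be sketched as follows (the segment structure, dma_into() helper, and return codes are assumptions; the accept-or-report-buffer-full behavior follows the text).

    #include <stddef.h>

    #define NSEG 4

    struct seg { size_t len; int played; int in_use; };

    static struct seg segs[NSEG];
    static int next_free;                        /* next segment to fill, circularly */

    int dma_into(int seg_idx, size_t nbytes);    /* hypothetical DMA start */

    /* Returns 0 on acceptance, -1 for "buffer full" (host retries later). */
    int accept_write(size_t nbytes)
    {
        struct seg *s = &segs[next_free];
        if (s->in_use && !s->played)
            return -1;                           /* typical in Slow Play or Pause */
        s->in_use = 1;
        s->played = 0;
        s->len = nbytes;
        dma_into(next_free, nbytes);             /* data lands at offset zero */
        next_free = (next_free + 1) % NSEG;
        return 0;
    }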
K2.3. MANUAL MODE
A LOCATE command is used to select a 'current' write buffer and position within that buffer (typically zero) for each buffer access command (Write, Erase, etc.). The buffer position is relative to the start of data for the last block of data that was successfully transmitted. This is done preferably for video stream transition management, with the automatic mode reactivated as soon as possible to minimize command overhead in the system.
K2.4. ERROR MANAGEMENT
Digital video data transmission has different error management requirements than the random data access usage for which SCSI is normally employed in data processing applications. Minor data loss is less critical than transmission interruption, so the conventional retries and data validation schemes are modified or disabled. The normal SCSI error handling procedures are followed, with the status byte being returned during the status phase at the completion of each command. The status byte indicates either a GOOD (00h) condition, a BUSY (08h) if the target SCSI chip 227 is unable to accept a command, or a CHECK CONDITION (02h) if an error has occurred.
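On the host side, the status handling described above might look like the following sketch; the status values are the standard SCSI codes quoted in the text, while issue_request_sense() is a hypothetical helper standing in for the recovery path described in the next subsection.

    #include <stdint.h>

    #define SCSI_STATUS_GOOD            0x00u
    #define SCSI_STATUS_CHECK_CONDITION 0x02u
    #define SCSI_STATUS_BUSY            0x08u

    void issue_request_sense(void);   /* hypothetical: loads the sense data */

    /* Returns 0 if the command completed cleanly, 1 if it should be retried
     * later (device busy), and -1 if the sense data must be examined. */
    int handle_status(uint8_t status)
    {
        switch (status) {
        case SCSI_STATUS_GOOD:            return 0;
        case SCSI_STATUS_BUSY:            return 1;
        case SCSI_STATUS_CHECK_CONDITION: issue_request_sense(); return -1;
        default:                          return -1;
        }
    }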
K2.5. ERROR RECOVERY
The controller 226 of the SCSI video adapter 212 automatically generates a Request Sense command on a Check Condition response to load the error and status information, and determines if a recovery procedure is possible. The normal recovery procedure is to clear the error state, discard any corrupted data, and resume normal play as quickly as possible. In a worst case, the adapter 212 may have to be reset and the data reloaded before play can resume. Error conditions are logged and reported back to the host system with the next INQUIRY or REQUEST SENSE SCSI operation.
K2.6. AUTOMATIC RETRIES
For buffer full or device busy conditions, retries are automated up to X retries, where X is dependent on the stream data rate. This is allowed only up to the point in time that the next data buffer arrives. At that point, an error is logged if the condition is unexpected (i.e., buffer full but not PAUSED or in SLOW PLAY mode) and a device reset or clear may be necessary to recover and continue video play.
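A sketch of bounding the automatic retries (all names, and the way the retry budget scales with the stream data rate, are assumptions; only the idea of retrying until the next data buffer is due comes from the text).

    int  write_block(void);          /* hypothetical: returns 0 ok, 1 busy/full */
    long now_ms(void);               /* hypothetical millisecond clock          */
    void log_error(const char *msg); /* hypothetical error logger               */

    /* rate_kbps: stream data rate; next_buffer_ms: when the next buffer is due. */
    int write_with_retries(long rate_kbps, long next_buffer_ms)
    {
        long max_retries = rate_kbps / 100;        /* assumed scaling with rate */
        for (long i = 0; i <= max_retries && now_ms() < next_buffer_ms; i++) {
            if (write_block() == 0)
                return 0;                          /* accepted */
        }
        log_error("unexpected buffer full/busy; reset or clear may be required");
        return -1;
    }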
Although described primarily in the context of delivering a video presentation to a user, it should be realized that bidirectional video adapters can be employed to receive a video presentation, to digitize the video presentation as a data representation thereof, and to transmit the data representation over the bus 210 to a communication node 14 for storage, via low latency switch 18, within a storage node or nodes 16, 17 as specified by the control node 18.

Claims

1. A media streamer, comprising:
at least one storage node comprising a plurality of mass storage units for storing a digital representation of at least one video presentation requiring a time T to present in its entirety and stored as a plurality of N data blocks each storing data corresponding to approximately a T/N period of the video presentation; and
a plurality of communication nodes each having at least one input port that is coupled to an output of the at least one storage node for receiving a digital representation of a video presentation therefrom, each communication node further having a plurality of output ports each of which transmits a digital representation as a data stream to a consumer of the digital representation;
wherein the N data blocks are partitioned into X stripes, wherein data blocks 1, X+1, 2*X+1, ... etc., are associated with a first one of the X stripes, data blocks 2, X+2, 2*X+2, ... etc., are associated with a second one of the X stripes, etc., and
wherein different ones of the X stripes are each stored on a different one of the mass storage units.
2. A media streamer as claimed in claim 1 wherein the plurality of mass storage units store a single copy of a digital representation of a video presentation, and wherein the X stripes are read out in such a manner as to enable a plurality of data streams to simultaneously convey a same one of the N data blocks.
3. A media streamer as claimed in claim 1 wherein the plurality of mass storage units store a single copy of a digital representation of a video presentation, and wherein the X stripes are read out in such a manner as to enable a plurality of data streams to simultaneously convey a different one of the N data blocks.
4. A media streamer as claimed in claim 1 wherein a duration of the T/N period is in a range of approximately 0.2 second to approximately 2 seconds.
5. A media streamer as claimed in claim 1 wherein a value of X is determined in accordance with the expression: X = maximum(r*n/d, r*m/d); where
r is a nominal data rate for a data stream;
n is a maximum number of simultaneously output data streams at the nominal data rate;
d is an effective output data rate of one of the mass storage units; and
m is a maximum number of simultaneously output data streams at the nominal data rate from all of the mass storage units that store at least one of the N data units.
6. A media βtreamer aβ claimed in any preceding claim wherein the mass storage comprises a plurality of disk data storage units.
PCT/GB1995/002113 1994-09-08 1995-09-06 Video server system WO1996008112A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/302,624 US5712976A (en) 1994-09-08 1994-09-08 Video data streamer for simultaneously conveying same one or different ones of data blocks stored in storage node to each of plurality of communication nodes
US08/302,624 1994-09-08

Publications (1)

Publication Number Publication Date
WO1996008112A1 true WO1996008112A1 (en) 1996-03-14

Family

ID=23168552

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB1995/002113 WO1996008112A1 (en) 1994-09-08 1995-09-06 Video server system

Country Status (6)

Country Link
US (1) US5712976A (en)
JP (1) JP3096409B2 (en)
KR (1) KR0184627B1 (en)
CN (1) CN1122985A (en)
CA (1) CA2154511A1 (en)
WO (1) WO1996008112A1 (en)


Families Citing this family (198)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7188352B2 (en) * 1995-07-11 2007-03-06 Touchtunes Music Corporation Intelligent digital audiovisual playback system
US7424731B1 (en) 1994-10-12 2008-09-09 Touchtunes Music Corporation Home digital audiovisual information recording and playback system
US8661477B2 (en) * 1994-10-12 2014-02-25 Touchtunes Music Corporation System for distributing and selecting audio and video information and method implemented by said system
EP0786121B1 (en) 1994-10-12 2000-01-12 Touchtunes Music Corporation Intelligent digital audiovisual playback system
JP2833507B2 (en) * 1995-01-31 1998-12-09 日本電気株式会社 Server device data access control method
JPH0981497A (en) * 1995-09-12 1997-03-28 Toshiba Corp Real-time stream server, storing method for real-time stream data and transfer method therefor
DE69628798T2 (en) 1995-10-16 2004-04-29 Hitachi, Ltd. Method for the transmission of multimedia data
EP0812513B1 (en) * 1995-12-01 2000-10-04 Koninklijke Philips Electronics N.V. Method and system for reading data for a number of users
US6128467A (en) * 1996-03-21 2000-10-03 Compaq Computer Corporation Crosspoint switched multimedia system
JP3258236B2 (en) 1996-05-28 2002-02-18 株式会社日立製作所 Multimedia information transfer system
JP3279186B2 (en) * 1996-06-21 2002-04-30 日本電気株式会社 Playback control method for moving image data
US5995995A (en) * 1996-09-12 1999-11-30 Cabletron Systems, Inc. Apparatus and method for scheduling virtual circuit data for DMA from a host memory to a transmit buffer memory
US5922046A (en) * 1996-09-12 1999-07-13 Cabletron Systems, Inc. Method and apparatus for avoiding control reads in a network node
US5999980A (en) * 1996-09-12 1999-12-07 Cabletron Systems, Inc. Apparatus and method for setting a congestion indicate bit in an backwards RM cell on an ATM network
US5966546A (en) 1996-09-12 1999-10-12 Cabletron Systems, Inc. Method and apparatus for performing TX raw cell status report frequency and interrupt frequency mitigation in a network node
US5970229A (en) * 1996-09-12 1999-10-19 Cabletron Systems, Inc. Apparatus and method for performing look-ahead scheduling of DMA transfers of data from a host memory to a transmit buffer memory
US5941952A (en) * 1996-09-12 1999-08-24 Cabletron Systems, Inc. Apparatus and method for transferring data from a transmit buffer memory at a particular rate
US5870553A (en) * 1996-09-19 1999-02-09 International Business Machines Corporation System and method for on-demand video serving from magnetic tape using disk leader files
FR2753868A1 (en) * 1996-09-25 1998-03-27 Technical Maintenance Corp METHOD FOR SELECTING A RECORDING ON AN AUDIOVISUAL DIGITAL REPRODUCTION SYSTEM AND SYSTEM FOR IMPLEMENTING THE METHOD
US5913038A (en) * 1996-12-13 1999-06-15 Microsoft Corporation System and method for processing multimedia data streams using filter graphs
KR100251539B1 (en) * 1996-12-14 2000-04-15 구자홍 Method for uniforming load of continuous media service system
US6173329B1 (en) * 1997-02-19 2001-01-09 Nippon Telegraph And Telephone Corporation Distributed multimedia server device and distributed multimedia server data access method
US6654933B1 (en) 1999-09-21 2003-11-25 Kasenna, Inc. System and method for media stream indexing
GB2323963B (en) * 1997-04-04 1999-05-12 Sony Corp Data transmission apparatus and data transmission method
WO1998054657A2 (en) * 1997-05-26 1998-12-03 Koninklijke Philips Electronics N.V. System for retrieving data in a video server
US5845279A (en) * 1997-06-27 1998-12-01 Lucent Technologies Inc. Scheduling resources for continuous media databases
DE19835668A1 (en) 1997-08-07 1999-02-25 Matsushita Electric Ind Co Ltd Transmission media connection arrangement
US5987179A (en) * 1997-09-05 1999-11-16 Eastman Kodak Company Method and apparatus for encoding high-fidelity still images in MPEG bitstreams
FR2769165B1 (en) 1997-09-26 2002-11-29 Technical Maintenance Corp WIRELESS SYSTEM WITH DIGITAL TRANSMISSION FOR SPEAKERS
US6594699B1 (en) * 1997-10-10 2003-07-15 Kasenna, Inc. System for capability based multimedia streaming over a network
US5933834A (en) * 1997-10-16 1999-08-03 International Business Machines Incorporated System and method for re-striping a set of objects onto an exploded array of storage units in a computer system
ATE364203T1 (en) * 1997-11-04 2007-06-15 Collaboration Properties Inc DIVISABLE, NETWORKED MULTIMEDIA SYSTEMS AND APPLICATIONS
KR100455055B1 (en) * 1997-12-24 2005-01-05 주식회사 대우일렉트로닉스 Method for storing data of mulitinput HD monitor
US6374336B1 (en) 1997-12-24 2002-04-16 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
US6415373B1 (en) 1997-12-24 2002-07-02 Avid Technology, Inc. Computer system and process for transferring multiple high bandwidth streams of data between multiple storage units and multiple applications in a scalable and reliable manner
US5941972A (en) * 1997-12-31 1999-08-24 Crossroads Systems, Inc. Storage router and method for providing virtual local storage
USRE42761E1 (en) 1997-12-31 2011-09-27 Crossroads Systems, Inc. Storage router and method for providing virtual local storage
US6208640B1 (en) 1998-02-27 2001-03-27 David Spell Predictive bandwidth allocation method and apparatus
US6961801B1 (en) * 1998-04-03 2005-11-01 Avid Technology, Inc. Method and apparatus for accessing video data in memory across flow-controlled interconnects
US6202124B1 (en) * 1998-05-05 2001-03-13 International Business Machines Corporation Data storage system with outboard physical data transfer operation utilizing data path distinct from host
US7272298B1 (en) * 1998-05-06 2007-09-18 Burst.Com, Inc. System and method for time-shifted program viewing
US6018780A (en) * 1998-05-19 2000-01-25 Lucent Technologies Inc. Method and apparatus for downloading a file to a remote unit
US6480876B2 (en) * 1998-05-28 2002-11-12 Compaq Information Technologies Group, L.P. System for integrating task and data parallelism in dynamic applications
US6675189B2 (en) * 1998-05-28 2004-01-06 Hewlett-Packard Development Company, L.P. System for learning and applying integrated task and data parallel strategies in dynamic applications
FR2781582B1 (en) * 1998-07-21 2001-01-12 Technical Maintenance Corp SYSTEM FOR DOWNLOADING OBJECTS OR FILES FOR SOFTWARE UPDATE
FR2781580B1 (en) 1998-07-22 2000-09-22 Technical Maintenance Corp SOUND CONTROL CIRCUIT FOR INTELLIGENT DIGITAL AUDIOVISUAL REPRODUCTION SYSTEM
US7197570B2 (en) * 1998-07-22 2007-03-27 Appstream Inc. System and method to send predicted application streamlets to a client device
US20010044850A1 (en) 1998-07-22 2001-11-22 Uri Raz Method and apparatus for determining the order of streaming modules
US8028318B2 (en) * 1999-07-21 2011-09-27 Touchtunes Music Corporation Remote control unit for activating and deactivating means for payment and for displaying payment status
US6311221B1 (en) 1998-07-22 2001-10-30 Appstream Inc. Streaming modules
US6574618B2 (en) 1998-07-22 2003-06-03 Appstream, Inc. Method and system for executing network streamed application
FR2781591B1 (en) 1998-07-22 2000-09-22 Technical Maintenance Corp AUDIOVISUAL REPRODUCTION SYSTEM
US7558472B2 (en) 2000-08-22 2009-07-07 Tivo Inc. Multimedia signal processing system
US8577205B2 (en) * 1998-07-30 2013-11-05 Tivo Inc. Digital video recording system
US6233389B1 (en) * 1998-07-30 2001-05-15 Tivo, Inc. Multimedia time warping system
US8380041B2 (en) * 1998-07-30 2013-02-19 Tivo Inc. Transportable digital video recorder system
US6061720A (en) * 1998-10-27 2000-05-09 Panasonic Technologies, Inc. Seamless scalable distributed media server
CN1127857C (en) * 1999-01-06 2003-11-12 皇家菲利浦电子有限公司 Transmission system for transmitting multimedia signal
US8726330B2 (en) 1999-02-22 2014-05-13 Touchtunes Music Corporation Intelligent digital audiovisual playback system
US6408436B1 (en) 1999-03-18 2002-06-18 Next Level Communications Method and apparatus for cross-connection of video signals
AU4641300A (en) * 1999-04-21 2000-11-02 Toni Data, Llc Managed remote virtual mass storage for client data terminal
US6842422B1 (en) * 1999-06-15 2005-01-11 Marconi Communications, Inc. Data striping based switching system
US7222155B1 (en) * 1999-06-15 2007-05-22 Wink Communications, Inc. Synchronous updating of dynamic interactive applications
FR2796482B1 (en) 1999-07-16 2002-09-06 Touchtunes Music Corp REMOTE MANAGEMENT SYSTEM FOR AT LEAST ONE AUDIOVISUAL INFORMATION REPRODUCING DEVICE
KR100647412B1 (en) * 1999-12-18 2006-11-17 주식회사 케이티 Apparatus and Method for generating Image map and controlling Communication of Image map
US6738972B1 (en) 1999-12-30 2004-05-18 Opentv, Inc. Method for flow scheduling
US6983315B1 (en) 2000-01-18 2006-01-03 Wrq, Inc. Applet embedded cross-platform caching
FR2805377B1 (en) 2000-02-23 2003-09-12 Touchtunes Music Corp EARLY ORDERING PROCESS FOR A SELECTION, DIGITAL SYSTEM AND JUKE-BOX FOR IMPLEMENTING THE METHOD
FR2805072B1 (en) * 2000-02-16 2002-04-05 Touchtunes Music Corp METHOD FOR ADJUSTING THE SOUND VOLUME OF A DIGITAL SOUND RECORDING
FR2805060B1 (en) 2000-02-16 2005-04-08 Touchtunes Music Corp METHOD FOR RECEIVING FILES DURING DOWNLOAD
FR2808906B1 (en) 2000-05-10 2005-02-11 Touchtunes Music Corp DEVICE AND METHOD FOR REMOTELY MANAGING A NETWORK OF AUDIOVISUAL INFORMATION REPRODUCTION SYSTEMS
US7010788B1 (en) 2000-05-19 2006-03-07 Hewlett-Packard Development Company, L.P. System for computing the optimal static schedule using the stored task execution costs with recent schedule execution costs
FR2811175B1 (en) * 2000-06-29 2002-12-27 Touchtunes Music Corp AUDIOVISUAL INFORMATION DISTRIBUTION METHOD AND AUDIOVISUAL INFORMATION DISTRIBUTION SYSTEM
FR2811114B1 (en) 2000-06-29 2002-12-27 Touchtunes Music Corp DEVICE AND METHOD FOR COMMUNICATION BETWEEN A SYSTEM FOR REPRODUCING AUDIOVISUAL INFORMATION AND AN ELECTRONIC ENTERTAINMENT MACHINE
US6906999B1 (en) 2000-06-30 2005-06-14 Marconi Intellectual Property (Ringfence), Inc. Receiver decoding algorithm to allow hitless N+1 redundancy in a switch
US7318107B1 (en) 2000-06-30 2008-01-08 Intel Corporation System and method for automatic stream fail-over
US6498937B1 (en) 2000-07-14 2002-12-24 Trw Inc. Asymmetric bandwidth wireless communication techniques
US7277956B2 (en) * 2000-07-28 2007-10-02 Kasenna, Inc. System and method for improved utilization of bandwidth in a computer system serving multiple users
US7310678B2 (en) * 2000-07-28 2007-12-18 Kasenna, Inc. System, server, and method for variable bit rate multimedia streaming
GB2365557A (en) * 2000-08-04 2002-02-20 Quantel Ltd Stored data distribution in file server systems
KR100575527B1 (en) * 2000-08-22 2006-05-03 엘지전자 주식회사 Method for recording a digital data stream
FR2814085B1 (en) 2000-09-15 2005-02-11 Touchtunes Music Corp ENTERTAINMENT METHOD BASED ON MULTIPLE CHOICE COMPETITION GAMES
US6757894B2 (en) 2000-09-26 2004-06-29 Appstream, Inc. Preprocessed applications suitable for network streaming applications and method for producing same
US20020087717A1 (en) * 2000-09-26 2002-07-04 Itzik Artzi Network streaming of multi-application program code
US6990671B1 (en) * 2000-11-22 2006-01-24 Microsoft Corporation Playback control methods and arrangements for a DVD player
US6871012B1 (en) 2000-11-22 2005-03-22 Microsoft Corporation Unique digital content identifier generating methods and arrangements
US7451453B1 (en) 2000-11-22 2008-11-11 Microsoft Corporation DVD navigator and application programming interfaces (APIs)
US7085842B2 (en) 2001-02-12 2006-08-01 Open Text Corporation Line navigation conferencing system
US20030018978A1 (en) * 2001-03-02 2003-01-23 Singal Sanjay S. Transfer file format and system and method for distributing media content
EP1374080A2 (en) * 2001-03-02 2004-01-02 Kasenna, Inc. Metadata enabled push-pull model for efficient low-latency video-content distribution over a network
US20070230921A1 (en) * 2001-04-05 2007-10-04 Barton James M Multimedia time warping system
US20020147827A1 (en) * 2001-04-06 2002-10-10 International Business Machines Corporation Method, system and computer program product for streaming of data
JP2003067201A (en) * 2001-08-30 2003-03-07 Hitachi Ltd Controller and operating system
JP3719180B2 (en) * 2001-09-27 2005-11-24 ソニー株式会社 COMMUNICATION METHOD, COMMUNICATION SYSTEM AND OUTPUT DEVICE
US7350206B2 (en) * 2001-11-05 2008-03-25 Hewlett-Packard Development Company, L.P. Method to reduce provisioning time in shared storage systems by preemptive copying of images
US7437472B2 (en) * 2001-11-28 2008-10-14 Interactive Content Engines, Llc. Interactive broadband server system
US7788396B2 (en) * 2001-11-28 2010-08-31 Interactive Content Engines, Llc Synchronized data transfer system
US7644136B2 (en) * 2001-11-28 2010-01-05 Interactive Content Engines, Llc. Virtual file system
US6986015B2 (en) 2001-12-10 2006-01-10 Incipient, Inc. Fast path caching
US7013379B1 (en) 2001-12-10 2006-03-14 Incipient, Inc. I/O primitives
AU2002366270A1 (en) * 2001-12-10 2003-09-09 Incipient, Inc. Fast path for performing data operations
US6959373B2 (en) * 2001-12-10 2005-10-25 Incipient, Inc. Dynamic and variable length extents
US7173929B1 (en) * 2001-12-10 2007-02-06 Incipient, Inc. Fast path for performing data operations
EP1508862A1 (en) * 2003-08-21 2005-02-23 Deutsche Thomson-Brandt GmbH Method for seamless real-time splitting and concatenating of a data stream
FR2835141B1 (en) * 2002-01-18 2004-02-20 Daniel Lecomte DEVICE FOR SECURING THE TRANSMISSION, RECORDING AND VIEWING OF AUDIOVISUAL PROGRAMS
US8903089B2 (en) 2002-01-18 2014-12-02 Nagra France Device for secure transmission recording and visualization of audiovisual programs
US7010762B2 (en) * 2002-02-27 2006-03-07 At&T Corp. Pre-loading content to caches for information appliances
US7411901B1 (en) * 2002-03-12 2008-08-12 Extreme Networks, Inc. Method and apparatus for dynamically selecting timer durations
US7822687B2 (en) 2002-09-16 2010-10-26 Francois Brillon Jukebox with customizable avatar
US9646339B2 (en) 2002-09-16 2017-05-09 Touchtunes Music Corporation Digital downloading jukebox system with central and local music servers
US8584175B2 (en) 2002-09-16 2013-11-12 Touchtunes Music Corporation Digital downloading jukebox system with user-tailored music management, communications, and other tools
US8103589B2 (en) 2002-09-16 2012-01-24 Touchtunes Music Corporation Digital downloading jukebox system with central and local music servers
US8332895B2 (en) 2002-09-16 2012-12-11 Touchtunes Music Corporation Digital downloading jukebox system with user-tailored music management, communications, and other tools
US11029823B2 (en) 2002-09-16 2021-06-08 Touchtunes Music Corporation Jukebox with customizable avatar
US10373420B2 (en) * 2002-09-16 2019-08-06 Touchtunes Music Corporation Digital downloading jukebox with enhanced communication features
US8151304B2 (en) * 2002-09-16 2012-04-03 Touchtunes Music Corporation Digital downloading jukebox system with user-tailored music management, communications, and other tools
US20040199650A1 (en) * 2002-11-14 2004-10-07 Howe John E. System and methods for accelerating data delivery
US7878908B2 (en) * 2002-11-14 2011-02-01 Nintendo Co., Ltd. Multiplexed secure video game play distribution
KR100670578B1 (en) * 2002-11-21 2007-01-17 삼성전자주식회사 A sound card, a computer system using the sound card and control method thereof
US8964830B2 (en) * 2002-12-10 2015-02-24 Ol2, Inc. System and method for multi-stream video compression using multiple encoding formats
US20090118019A1 (en) * 2002-12-10 2009-05-07 Onlive, Inc. System for streaming databases serving real-time applications used through streaming interactive video
US9314691B2 (en) * 2002-12-10 2016-04-19 Sony Computer Entertainment America Llc System and method for compressing video frames or portions thereof based on feedback information from a client device
US9108107B2 (en) * 2002-12-10 2015-08-18 Sony Computer Entertainment America Llc Hosting and broadcasting virtual events using streaming interactive video
US9138644B2 (en) 2002-12-10 2015-09-22 Sony Computer Entertainment America Llc System and method for accelerated machine switching
US9192859B2 (en) 2002-12-10 2015-11-24 Sony Computer Entertainment America Llc System and method for compressing video based on latency measurements and other feedback
US10201760B2 (en) * 2002-12-10 2019-02-12 Sony Interactive Entertainment America Llc System and method for compressing video based on detected intraframe motion
US9077991B2 (en) * 2002-12-10 2015-07-07 Sony Computer Entertainment America Llc System and method for utilizing forward error correction with video compression
US8549574B2 (en) * 2002-12-10 2013-10-01 Ol2, Inc. Method of combining linear content and interactive content compressed together as streaming interactive video
US7287180B1 (en) 2003-03-20 2007-10-23 Info Value Computing, Inc. Hardware independent hierarchical cluster of heterogeneous media servers using a hierarchical command beat protocol to synchronize distributed parallel computing systems and employing a virtual dynamic network topology for distributed parallel computing system
KR100556844B1 (en) * 2003-04-19 2006-03-10 엘지전자 주식회사 Method for error detection of moving picture transmission system
US20050044250A1 (en) * 2003-07-30 2005-02-24 Gay Lance Jeffrey File transfer system
US7117333B2 (en) * 2003-08-25 2006-10-03 International Business Machines Corporation Apparatus, system, and method to estimate memory for recovering data
US20050160470A1 (en) * 2003-11-25 2005-07-21 Strauss Daryll J. Real-time playback system for uncompressed high-bandwidth video
DE602004020271D1 (en) * 2003-12-03 2009-05-07 Koninkl Philips Electronics Nv ENERGY SAVING PROCESS AND SYSTEM
US7349334B2 (en) * 2004-04-09 2008-03-25 International Business Machines Corporation Method, system and program product for actively managing central queue buffer allocation using a backpressure mechanism
US7408875B2 (en) * 2004-04-09 2008-08-05 International Business Machines Corporation System and program product for actively managing central queue buffer allocation
US20050262245A1 (en) * 2004-04-19 2005-11-24 Satish Menon Scalable cluster-based architecture for streaming media
US7228364B2 (en) * 2004-06-24 2007-06-05 Dell Products L.P. System and method of SCSI and SAS hardware validation
US7174385B2 (en) * 2004-09-03 2007-02-06 Microsoft Corporation System and method for receiver-driven streaming in a peer-to-peer network
KR100584323B1 (en) * 2004-10-04 2006-05-26 삼성전자주식회사 Method for streaming multimedia content
CA2588630C (en) 2004-11-19 2013-08-20 Tivo Inc. Method and apparatus for secure transfer of previously broadcasted content
US7533182B2 (en) * 2005-01-24 2009-05-12 Starz Media, Llc Portable screening room
US7793329B2 (en) * 2006-02-06 2010-09-07 Kasenna, Inc. Method and system for reducing switching delays between digital video feeds using multicast slotted transmission technique
JP4519082B2 (en) * 2006-02-15 2010-08-04 株式会社ソニー・コンピュータエンタテインメント Information processing method, moving image thumbnail display method, decoding device, and information processing device
US20080109557A1 (en) * 2006-11-02 2008-05-08 Vinay Joshi Method and system for reducing switching delays between digital video feeds using personalized unicast transmission techniques
US9171419B2 (en) 2007-01-17 2015-10-27 Touchtunes Music Corporation Coin operated entertainment system
US9330529B2 (en) * 2007-01-17 2016-05-03 Touchtunes Music Corporation Game terminal configured for interaction with jukebox device systems including same, and/or associated methods
US9953481B2 (en) * 2007-03-26 2018-04-24 Touchtunes Music Corporation Jukebox with associated video server
CN101335883B (en) * 2007-06-29 2011-01-12 国际商业机器公司 Method and apparatus for processing video stream in digital video broadcast system
US10290006B2 (en) 2008-08-15 2019-05-14 Touchtunes Music Corporation Digital signage and gaming services to comply with federal and state alcohol and beverage laws and regulations
US8332887B2 (en) 2008-01-10 2012-12-11 Touchtunes Music Corporation System and/or methods for distributing advertisements from a central advertisement network to a peripheral device via a local advertisement server
US20090100188A1 (en) * 2007-10-11 2009-04-16 Utstarcom, Inc. Method and system for cluster-wide predictive and selective caching in scalable iptv systems
US8165451B2 (en) * 2007-11-20 2012-04-24 Echostar Technologies L.L.C. Methods and apparatus for displaying information regarding interstitials of a video stream
US8165450B2 (en) 2007-11-19 2012-04-24 Echostar Technologies L.L.C. Methods and apparatus for filtering content in a video stream using text data
US8136140B2 (en) 2007-11-20 2012-03-13 Dish Network L.L.C. Methods and apparatus for generating metadata utilized to filter content from a video stream using text data
US11065552B2 (en) * 2007-12-05 2021-07-20 Sony Interactive Entertainment LLC System for streaming databases serving real-time applications used through streaming interactive video
US10058778B2 (en) * 2007-12-05 2018-08-28 Sony Interactive Entertainment America Llc Video compression system and method for reducing the effects of packet loss over a communication channel
US8510370B2 (en) * 2008-02-26 2013-08-13 Avid Technology, Inc. Array-based distributed storage system with parity
US8606085B2 (en) * 2008-03-20 2013-12-10 Dish Network L.L.C. Method and apparatus for replacement of audio data in recorded audio/video stream
US8156520B2 (en) 2008-05-30 2012-04-10 EchoStar Technologies, L.L.C. Methods and apparatus for presenting substitute content in an audio/video stream using text data
US8849435B2 (en) 2008-07-09 2014-09-30 Touchtunes Music Corporation Digital downloading jukebox with revenue-enhancing features
US8407735B2 (en) * 2008-12-24 2013-03-26 Echostar Technologies L.L.C. Methods and apparatus for identifying segments of content in a presentation stream using signature data
US8588579B2 (en) * 2008-12-24 2013-11-19 Echostar Technologies L.L.C. Methods and apparatus for filtering and inserting content into a presentation stream using signature data
US8510771B2 (en) * 2008-12-24 2013-08-13 Echostar Technologies L.L.C. Methods and apparatus for filtering content from a presentation stream using signature data
US10719149B2 (en) 2009-03-18 2020-07-21 Touchtunes Music Corporation Digital jukebox device with improved user interfaces, and associated methods
US10564804B2 (en) 2009-03-18 2020-02-18 Touchtunes Music Corporation Digital jukebox device with improved user interfaces, and associated methods
CA2754990C (en) 2009-03-18 2015-07-14 Touchtunes Music Corporation Entertainment server and associated social networking services
US9292166B2 (en) 2009-03-18 2016-03-22 Touchtunes Music Corporation Digital jukebox device with improved karaoke-related user interfaces, and associated methods
EP2264604A1 (en) * 2009-06-15 2010-12-22 Thomson Licensing Device for real-time streaming of two or more streams in parallel to a solid state memory device array
US8437617B2 (en) * 2009-06-17 2013-05-07 Echostar Technologies L.L.C. Method and apparatus for modifying the presentation of content
US8925034B1 (en) 2009-06-30 2014-12-30 Symantec Corporation Data protection requirements specification and migration
US8352937B2 (en) * 2009-08-03 2013-01-08 Symantec Corporation Streaming an application install package into a virtual environment
US8387047B1 (en) 2009-08-03 2013-02-26 Symantec Corporation Method of virtualizing file extensions in a computer system by determining an association between applications in virtual environment and a file extension
US8090744B1 (en) 2009-08-26 2012-01-03 Symantec Operating Corporation Method and apparatus for determining compatibility between a virtualized application and a base environment
US8473444B1 (en) 2009-08-28 2013-06-25 Symantec Corporation Management of actions in multiple virtual and non-virtual environments
US8438555B1 (en) 2009-08-31 2013-05-07 Symantec Corporation Method of using an encapsulated data signature for virtualization layer activation
US8458310B1 (en) 2009-09-14 2013-06-04 Symantec Corporation Low bandwidth streaming of application upgrades
US8566297B1 (en) 2010-01-14 2013-10-22 Symantec Corporation Method to spoof data formats from image backups
CN105355221A (en) 2010-01-26 2016-02-24 踏途音乐公司 Digital jukebox device with improved user interfaces, and associated methods
US8290912B1 (en) 2010-01-29 2012-10-16 Symantec Corporation Endpoint virtualization aware backup
US8934758B2 (en) 2010-02-09 2015-01-13 Echostar Global B.V. Methods and apparatus for presenting supplemental content in association with recorded content
US20110197224A1 (en) * 2010-02-09 2011-08-11 Echostar Global B.V. Methods and Apparatus For Selecting Advertisements For Output By A Television Receiver Based on Social Network Profile Data
WO2011111009A1 (en) * 2010-03-09 2011-09-15 Happy Cloud Inc. Data streaming for interactive decision-oriented software applications
US8495625B1 (en) 2010-07-27 2013-07-23 Symantec Corporation Method and system for creation of streamed files on-demand
CN102082696B (en) * 2011-03-10 2012-11-21 中控科技集团有限公司 Redundancy network system and message sending method based on same
JP6002770B2 (en) 2011-09-18 2016-10-05 タッチチューンズ ミュージック コーポレーション Digital jukebox device with karaoke and / or photo booth functions and related techniques
US11151224B2 (en) 2012-01-09 2021-10-19 Touchtunes Music Corporation Systems and/or methods for monitoring audio inputs to jukebox devices
US9880776B1 (en) 2013-02-22 2018-01-30 Veritas Technologies Llc Content-driven data protection method for multiple storage devices
IN2013MU03094A (en) * 2013-09-27 2015-07-17 Tata Consultancy Services Ltd
US9921717B2 (en) 2013-11-07 2018-03-20 Touchtunes Music Corporation Techniques for generating electronic menu graphical user interface layouts for use in connection with electronic devices
US10372361B2 (en) 2014-02-27 2019-08-06 Mitsubishi Electric Corporation Data storage device including multiple memory modules and circuitry to manage communication among the multiple memory modules
KR102303730B1 (en) 2014-03-25 2021-09-17 터치튠즈 뮤직 코포레이션 Digital jukebox device with improved user interfaces, and associated methods
CN106445403B (en) * 2015-08-11 2020-11-13 张一凡 Distributed storage method and system for paired storage of mass data
JPWO2017145781A1 (en) * 2016-02-25 2018-10-04 日本電信電話株式会社 Pacing control device, pacing control method, and program
US10423500B2 (en) * 2016-06-01 2019-09-24 Seagate Technology Llc Technologies for limiting performance variation in a storage device
US10997065B2 (en) * 2017-11-13 2021-05-04 SK Hynix Inc. Memory system and operating method thereof
US11172269B2 (en) 2020-03-04 2021-11-09 Dish Network L.L.C. Automated commercial content shifting in a video streaming system
CN113542822B (en) * 2021-07-12 2023-01-06 中国银行股份有限公司 Image file transmission method and device


Family Cites Families (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4355324A (en) * 1980-03-03 1982-10-19 Rca Corporation Sampled or digitized color in a high speed search record and replay system
US4679191A (en) * 1983-05-04 1987-07-07 Cxc Corporation Variable bandwidth switching system
US4604687A (en) * 1983-08-11 1986-08-05 Lucasfilm Ltd. Method and system for storing and retrieving multiple channel sampled data
US4616263A (en) * 1985-02-11 1986-10-07 Gte Corporation Video subsystem for a hybrid videotex facility
US5089885A (en) * 1986-11-14 1992-02-18 Video Jukebox Network, Inc. Telephone access display system with remote monitoring
IT1219727B (en) * 1988-06-16 1990-05-24 Italtel Spa BROADBAND COMMUNICATION SYSTEM
US4949187A (en) * 1988-12-16 1990-08-14 Cohen Jason M Video communications system having a remotely controlled central source of video and audio data
US5421031A (en) * 1989-08-23 1995-05-30 Delta Beta Pty. Ltd. Program transmission optimisation
US5099319A (en) * 1989-10-23 1992-03-24 Esch Arthur G Video information delivery method and apparatus
CA2022302C (en) * 1990-07-30 1995-02-28 Douglas J. Ballantyne Method and apparatus for distribution of movies
EP0529864B1 (en) * 1991-08-22 2001-10-31 Sun Microsystems, Inc. Network video server apparatus and method
CA2084575C (en) * 1991-12-31 1996-12-03 Chris A. Dinallo Personal computer with generalized data streaming apparatus for multimedia devices
US5526507A (en) * 1992-01-06 1996-06-11 Hill; Andrew J. W. Computer memory array control for accessing different memory banks simullaneously
US5471640A (en) * 1992-07-06 1995-11-28 Hewlett-Packard Programmable disk array controller having n counters for n disk drives for stripping data where each counter addresses specific memory location by a count n
JP3083663B2 (en) * 1992-12-08 2000-09-04 株式会社日立製作所 Disk array device
US5289461A (en) * 1992-12-14 1994-02-22 International Business Machines Corporation Interconnection method for digital multimedia communications
EP0609054A3 (en) * 1993-01-25 1996-04-03 Matsushita Electric Ind Co Ltd Method and apparatus for recording or reproducing video data on or from storage media.
US5550982A (en) * 1993-06-24 1996-08-27 Starlight Networks Video application server
US5442390A (en) * 1993-07-07 1995-08-15 Digital Equipment Corporation Video on demand with memory accessing and or like functions
US5522054A (en) * 1993-09-13 1996-05-28 Compaq Computer Corporation Dynamic control of outstanding hard disk read requests for sequential and random operations
US5510905A (en) * 1993-09-28 1996-04-23 Birk; Yitzhak Video storage server using track-pairing
US5528513A (en) * 1993-11-04 1996-06-18 Digital Equipment Corp. Scheduling and admission control policy for a continuous media server
US5473362A (en) * 1993-11-30 1995-12-05 Microsoft Corporation Video on demand system comprising stripped data across plural storable devices with time multiplex scheduling
US5544327A (en) * 1994-03-01 1996-08-06 International Business Machines Corporation Load balancing in video-on-demand servers by allocating buffer to streams with successively larger buffer requirements until the buffer requirements of a stream can not be satisfied
US5461415A (en) * 1994-03-15 1995-10-24 International Business Machines Corporation Look-ahead scheduling to support video-on-demand applications
US5606359A (en) * 1994-06-30 1997-02-25 Hewlett-Packard Company Video on demand system with multiple data sources configured to provide vcr-like services
US5586264A (en) * 1994-09-08 1996-12-17 Ibm Corporation Video optimized media streamer with cache management

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0368683A2 (en) * 1988-11-11 1990-05-16 Victor Company Of Japan, Limited Data handling apparatus
WO1993016557A1 (en) * 1992-02-11 1993-08-19 Koz Mark C Adaptive video file server and methods for its use
WO1994001964A1 (en) * 1992-07-08 1994-01-20 Bell Atlantic Network Services, Inc. Media server for supplying video and multi-media data over the public telephone switched network
WO1994012937A2 (en) * 1992-11-17 1994-06-09 Starlight Networks, Inc. Method of operating a disk storage system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
E. CHANG ET AL.: "Scalable Video Data Placement on Parallel Disk Arrays", PROC. SPIE : STORAGE AND RETRIEVAL FOR IMAGE AND VIDEO DATABASES II, 7 February 1994 (1994-02-07), SAN JOSE, CA, USA, pages 208 - 221 *
P. LOUGHER ET AL.: "The Design and Implementation of a Continuous Media Storage Server", NETWORK AND OPERATING SYSTEM SUPPORT FOR DIGITAL AUDIO AND VIDEO, 3RD INT. WORKSHOP, November 1992 (1992-11-01), LA JOLLA, CA, USA, pages 69 - 80 *
P. SCHEUERMANN ET AL.: "Adaptive Load Balancing in Disk Arrays", 4TH INT. CONF. FOUNDATIONS OF DATA ORGANIZATION AND ALGORITHMS, 13 October 1993 (1993-10-13), CHICAGO, IL, USA, pages 345 - 360 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2312318A (en) * 1996-04-15 1997-10-22 Discreet Logic Inc Video data storage
GB2312318B (en) * 1996-04-15 1998-11-25 Discreet Logic Inc Video data storage
EP1393560A1 (en) * 2001-04-20 2004-03-03 Concurrent Computer Corporation System and method for retrieving and storing multimedia data
EP1393560A4 (en) * 2001-04-20 2007-03-07 Concurrent Comp Corp System and method for retrieving and storing multimedia data
CN101710901B (en) * 2009-10-22 2012-12-05 乐视网信息技术(北京)股份有限公司 Distributed type storage system having p2p function and method thereof

Also Published As

Publication number Publication date
CA2154511A1 (en) 1996-03-09
JP3096409B2 (en) 2000-10-10
CN1122985A (en) 1996-05-22
US5712976A (en) 1998-01-27
KR0184627B1 (en) 1999-05-01
JPH08107542A (en) 1996-04-23
KR960011859A (en) 1996-04-20

Similar Documents

Publication Publication Date Title
EP0701371B1 (en) Video media streamer
US5805821A (en) Video optimized media streamer user interface employing non-blocking switching to achieve isochronous data transfers
US5761417A (en) Video data streamer having scheduler for scheduling read request for individual data buffers associated with output ports of communication node to one storage node
US5603058A (en) Video optimized media streamer having communication nodes received digital data from storage node and transmitted said data to adapters for generating isochronous digital data streams
US5586264A (en) Video optimized media streamer with cache management
US5712976A (en) Video data streamer for simultaneously conveying same one or different ones of data blocks stored in storage node to each of plurality of communication nodes
US5606359A (en) Video on demand system with multiple data sources configured to provide vcr-like services
US5987621A (en) Hardware and software failover services for a file server
US6005599A (en) Video storage and delivery apparatus and system
US5790794A (en) Video storage unit architecture
US5440336A (en) System and method for storing and forwarding audio and/or visual information on demand
EP0701373B1 (en) Video server system
EP1175776B1 (en) Video on demand system
JP2001028741A (en) Data distribution system, and distribution method and data input/output device

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): CZ HU PL RU

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase