WO2011094346A1 - Integrated concurrent multi-standard encoder, decoder and transcoder - Google Patents


Info

Publication number
WO2011094346A1
WO2011094346A1 (PCT/US2011/022624)
Authority
WO
WIPO (PCT)
Prior art keywords
data
signal
video
audio
encoding
Prior art date
Application number
PCT/US2011/022624
Other languages
French (fr)
Inventor
Barry L. Hobbs
Original Assignee
Hobbs Barry L
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hobbs Barry L filed Critical Hobbs Barry L
Publication of WO2011094346A1 publication Critical patent/WO2011094346A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/40 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/102 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
    • H04N19/12 Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/42 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
    • H04N19/436 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements

Definitions

  • the present invention relates generally to video and audio encoding. More particularly, the present invention relates to an integrated concurrent multi-standard encoder, decoder and transcoder for encoding, decoding and transcoding video and audio into multiple standards and/or at multiple data rates.
  • This apparatus will provide the users with capabilities to address concerns with security and information assurance.
  • the apparatus provides unique capabilities for automated reference time clock and positioning information with the ability to add additional metadata.
  • the system capabilities yield an apparatus providing rapid access to critical video in multiple modes and data rates. These qualities, in whole and in part, are required in surveillance, news-gathering, security, and commercial/broadcast video applications.
  • the apparatus described is capable of processing video, audio and data and placing the information on a serial stream of data for transmission over multiple media links.
  • the video processing is commonly referred to as encoding or compressing.
  • the apparatus may be programmed to process a single video signal with multiple audio and data services or to concurrently process dual video programs in multiple standards and modes.
  • the video standards and modes may be at different processing and output data rates.
  • the video may be supplemented with application software that would allow the following optional features:
  • An optional real time clock that will provide a time code reference to the video.
  • An optional Global Positioning System, GPS, capability, when enabled with additional application software, may allow the user to have an automated calculation, in conjunction with a camera, of the GPS position of the video being processed. The GPS position will be associated with the proper video frames and provided in separate private data packets.
  • An optional Program System Information Protocol capability to provide a minimum of sixteen days of program data information originated from a third party source and embedded in the transport stream.
  • An optional internal capability to generate, store and transmit transport stream Program System Information Protocol packets.
  • An optional Digital Video Broadcasting, DVB program guide generation capability to be generated via a third party through a predetermined interface.
  • the current art form allows items a and b, above, to be integrated on a single processing board today.
  • Items c, d and e, above, are generated through a combination of external hardware and application software today. Items g and h are generally not done in today's art form within the encoding environment, especially on a single board.
  • a concurrent item i is also not implemented on a single board in today's art form.
  • Item j is implemented through additional hardware in today's art form. It is the combination of these capabilities, which provide the unique art form for this design on the integrated multi-encoder apparatus.
  • This apparatus shall also process audio.
  • the audio processes supported may be Dolby AC-3, AC-3 5.1, Dolby-E, AAC, PCM, Musicam or other audio processes defined and carried in a compliant fashion in the International Standards Organization/International Electrotechnical Commission 13818-1 system and/or 13818-3 audio specifications.
  • the audio services can be synchronized with the video or they may be separate and independent audio processes that are independent of any processed video packets.
  • the Figures describe a high level step-by-step view of each of the concurrent processes. They provide examples of the processing elements provided to process the information. The actual processing elements and the number of processing elements will vary due to programmers' preferences and the video and audio standards being processed.
  • Figure 1 is a composite Figure of the functions, processing elements, inputs and outputs of a video encoder ("apparatus”) in accordance with one embodiment of the present invention
  • Figure 2 is a view of the parallel processor built by Coherent Logix of Austin, Texas.
  • Figure 3 is a view of the parallel processor's processing elements, data memory and routing capabilities along and its input and output structures;
  • Figure 4 is a view of the first video input with its signal routing and signal processing elements
  • Figure 5 is a view of the first section of audio inputs, up to eight, with their signal routing and signal processing elements;
  • Figure 6 is a view of the second section of audio inputs, up to four, with their signal routing and signal processing elements;
  • Figure 7 is a view of the network time reference and real time clock used to derive the Universal Time Code;
  • Figure 8 is a view of the optional global positioning system inputs, signal routing and signal processing elements
  • Figure 9 is a view of the optional software application program for object recognition, and its signal routing and signal processing elements
  • Figure 10 is a view of the optional stream analysis capability for a resident software application program. This depicts the signal routing of this information;
  • Figure 11 depicts the Digital Video Broadcasting, DVB, system information input, routing and processing elements
  • Figure 12 depicts the Internal Program Specific Information Protocol routing and processing element for an internal software applications program resident on the processing board
  • Figure 13 depicts the External Program Specific Information Protocol routing and processing element for an external input formatted for the processing board
  • Figure 14 depicts the command interface to the processing board
  • Figure 15 depicts the control function for the processor board
  • Figure 16 depicts the input, routing and processing element utilization for embedding conditional access information for network receiver control
  • Figure 17 depicts the second video input, routing and processing element utilization
  • Figure 18 depicts a third set of high data rate audio inputs, up to eight inputs, their routing and processing element utilization;
  • Figure 19 depicts the first Asynchronous Serial Output, processing element utilization and output routing
  • Figure 20 depicts the second Asynchronous Serial Output, processing element utilization and output routing
  • Figure 21 depicts the third Asynchronous Serial Output, processing element utilization and output routing
  • Figure 22 depicts the first Ethernet output, its processing elements and routing
  • Figure 23 depicts the second Ethernet output, its processing elements and routing
  • Figure 24 depicts a serial data output for an external monitor for the processed video and audio. This provides a view of the processing elements and routing;
  • Figure 25 depicts a high-resolution memory component, routing and associated processing elements
  • Figure 26 depicts an external ASI input, its routing, and associated processing elements
  • Figure 27 depicts the processor's boot function components
  • Figure 28 depicts the RS-232 input for closed captioning, its routing and associated processing elements
  • Figure 29 depicts the composite video input for synchronization to a studio input, its routing and associated processing elements
  • Figure 30 depicts the utilization of processing elements, in this example elements (9, 10) through (9, 19), for electronic image stabilization;
  • Figure 31 is a Figure of the 10 X 10 parallel processor and the ability to boot the system for a decoder in accordance with one embodiment of the present invention
  • Figure 32 is a view of the first RF input and routing
  • Figure 33 is a view of the addition of a second RF input channel and its routing
  • Figure 34 is a view of the addition of the first ASI input and its routing
  • Figure 35 is a view of the addition of the second ASI input and its routing
  • Figure 36 is a view of the addition of a first Ethernet input
  • Figure 37 is a view of the addition of a second Ethernet input
  • Figure 38 is a view of the addition of an input for a mobile transmission system
  • Figure 39 is a view of the addition of a signal level reference input for the mobile system
  • Figure 40 depicts the meta-data tagging and retrieval capability through an additional
  • Figure 41 depicts the ETR 290 analysis application
  • Figure 42 depicts the command channel for the apparatus
  • Figure 43 depicts the control channel for the apparatus
  • Figure 44 depicts the addition of a first ASI output
  • Figure 45 depicts the addition of a second ASI output
  • Figure 46 depicts the addition of a third ASI output
  • Figure 47 depicts the addition of a first Ethernet output
  • Figure 48 depicts the addition of a second Ethernet output
  • Figure 49 depicts the addition of a first digital SMPTE 292M or SMPTE 259M output
  • Figure 50 depicts the addition of a second digital SMPTE 292M or SMPTE 259M output
  • Figure 51 depicts the addition of a first output to support an analog high definition interface
  • Figure 52 depicts the addition of a second output to support an analog high definition interface.
  • Figure 53 depicts the ability to provide a first multi-channel audio output capability
  • Figure 54 depicts the ability to provide a second multi-channel audio output capability
  • Figure 55 depicts the ability to provide a Program Specific Information Protocol
  • Figure 56 depicts the first video and associated channel decoding and access to video memory
  • Figure 57 depicts the second video and associated channel decoding and access to video memory
  • Figure 58 depicts the ability to scale one of the two video decoding applications based on a received RF signal strength.
  • Figure 1 depicts a high level composite picture of the inputs and outputs with the associated signal routing and processing element utilization.
  • the Figure also provides a view of the critical processor communications structure through complementary Low Voltage Differential Signaling (LVDS).
  • the encoder is contained on a single printed circuit board. At the center of the board are two massive parallel processors each containing multiple processing elements. These processors contain 968Kbytes of data memory and 400Kbytes of instruction memory.
  • the DMRs provide data memory, control logic, registers and routers for fast routing services.
  • This structure provides the real-time programmable and adaptable communications fabric to support arbitrary network topologies and sophisticated algorithm implementations. There are twenty-four (24) Input/Output blocks to connect the periphery to DMRs. This structure also supports sustainable on-chip communications to other Hx3100 processors and allows the preservation of a consistent programming model.
  • the input/output structure also enables interfacing to other memory, processor buses, analog-to-digital converters, sensors and displays. There are, in addition, 24 user configurable timers, one associated with each Input/Output element.
  • a.) Two programmable input ports for either analog video or digital video; b.) Three programmable input ports for analog audio or digital audio; c.) Two programmable Ethernet ports that will support either Internet Protocol version 4 or Internet Protocol version 6; d.) An RS-232 interface with an analog to digital converter to support the closed captioning for the hearing impaired feature; e.) An analog input for a composite synchronization signal from a studio reference.
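The programmable-port arrangement listed above can be sketched as a small configuration model. This is an illustrative sketch only; the mode names, dictionary layout and function are assumptions, not taken from the patent.

```python
# Hypothetical model of the board's programmable ports; the mode names
# and data layout are illustrative assumptions, not from the patent.
VIDEO_MODES = {"analog_composite", "smpte_259m", "smpte_292m"}
AUDIO_MODES = {"analog", "digital"}
ETHERNET_MODES = {"ipv4", "ipv6"}

def configure_port(ports, kind, index, mode):
    """Record one port's mode after checking it against the allowed set."""
    allowed = {"video": VIDEO_MODES, "audio": AUDIO_MODES,
               "ethernet": ETHERNET_MODES}[kind]
    if mode not in allowed:
        raise ValueError(f"{mode!r} is not a valid {kind} mode")
    ports.setdefault(kind, {})[index] = mode
    return ports
```

Each port is configured independently, mirroring the per-port web services configuration the text describes.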
  • Ethernet output ports There are two Ethernet output ports that will support either IPv4 or IPv6 formats. These ports will be void of command, control and other extraneous information. They are reserved for processed video, audio and data services.
  • FIG. 2 depicts an internal view of the type of massive parallel processor that sits at the core of this apparatus.
  • the internal structure contains one- hundred (100) processing elements, (“PEs"), configured in a ten-by-ten (10X10) physical array.
  • PEs processing elements
  • DMR data memory and routing
  • Each processing element can be configured to perform a unique mathematical function on a cycle-by-cycle basis.
  • the DMR and the PE structure allows the processing elements to be efficiently configured for multiple functions and multiple program executions on a concurrent time basis.
  • a massive parallel processor of the type described in the previous paragraph provides the capability to utilize the Low Voltage Differential Signaling, LVDS, ports, extending the processing capabilities across multiple massive parallel processors.
  • the multiple processors can be unified under the control of one instruction set, either on board or through an external electrically erasable programmable read-only-memory component on the same board.
  • the Figure also provides a view of the interface ports for the boot information as well as multiple CMOS, DDR and LVDS interface capabilities.
  • Figure 3 provides an understanding of the data memory and routing capability and the relationship to the processing elements of the Hx3100 or similar type massive parallel processor. In the following Figures, not all of the DMR elements and address routing for each function are shown. It is important to understand this particular Figure and the ability to move across the fabric of the processor in an efficient manner.
  • A snapshot view of four (4) of the one hundred (100) processing elements, nine (9) data memory and routing elements, as well as nine input/output routers of one Coherent Logix Hx3100 component is also illustrated in Figure 3. This depicts the multiple address capabilities allowing data to be routed and stored from as many as eight (8) sources into DMR (1, 1). The efficiency of routing data through a multi-dimensional implementation provides the programmer with the capability to write software algorithms that are more efficient in processing, reducing the number of cycles required and reducing power consumption.
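The eight-source fan-in into DMR (1, 1) can be illustrated with a small grid model. The eight-neighbor adjacency rule below is an assumption used only to mirror the description; it is not a specification of the Hx3100 routing fabric.

```python
# Illustrative grid model of DMR fan-in: a DMR at (row, col) accepts data
# from up to eight adjacent sources. The adjacency rule is an assumption,
# not a description of the actual Hx3100 fabric.
def dmr_sources(row, col, rows=10, cols=10):
    """Return the grid coordinates that can route into DMR (row, col)."""
    sources = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue  # a DMR is not its own source
            r, c = row + dr, col + dc
            if 0 <= r < rows and 0 <= c < cols:
                sources.append((r, c))
    return sources
```

An interior DMR such as (1, 1) sees eight sources; corner and edge DMRs see fewer.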
  • Figure 4 depicts an example of how the first programmable video input for the apparatus may be routed.
  • the analog or digital input signal flow is from the input port (Figure 1A) to either the analog to digital converter (Figure 1B) or to the DDR memory (Figure 1C). From the analog to digital converter or the DDR memory the signal is directly routed to the input/output router (Figure 1D) of the processor. The signal is then routed to the data memory and routing element, DMR, (Figure 1E) of the processor. From the DMR, the signal is routed to processing elements (0,0) through (0,19) plus (1,0) through (1,19), plus (5,0) through (5,9) and (6,0) through (6,9).
  • the input port is user configurable, through a web services interface via the Ethernet input port, as an analog composite video input, a SMPTE 259M serial digital standard definition video signal, along with embedded audio and other data services, or a high definition SMPTE 292M serial digital input with embedded audio and data services. These three inputs are standard within the broadcast and commercial markets.
  • When an analog composite video input is utilized, the signal must be routed through an analog to digital converter component, located on the board and identified as Figure 1B. Such components are readily available, for example the Analog Devices AD 9203, commercially available from, among others, Analog Devices of Norwood, Massachusetts. From the analog to digital converter component the signal is routed to a data memory and routing element in the massive parallel processor.
  • When a digital input is utilized, it can be routed through a double data rate synchronous dynamic random access memory component, DDR SDRAM, such as an EOREX EM44CM1688LBA as shown in Figure 1C.
  • DDR SDRAM double data rate synchronous dynamic random access memory component
  • the EM44CM1688LBA is commercially available from EOREX of Chubei, Taiwan.
  • the input signal is then clocked into the massive parallel processor's input/output data port.
  • the signal is routed to the associated DMR on the processor. From the DMR a preprogrammed software application instructs the massive parallel processor to complete one or more of the following functions, as required: a. separation into video, audio or data elements by address;
  • ISO/IEC 13818-2 for the video format for MPEG-2, or H.264 for advanced video coding;
  • ISO/IEC 13818-3 for audio coding of Musicam, or formatting for packaging of alternate audio systems such as Dolby or Advanced Audio Coding.
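The address-based separation step (function a. above) resembles a transport stream demultiplexer keyed on packet identifiers. A minimal sketch, with illustrative PID values that are not taken from the patent:

```python
# Minimal sketch of "separation into video, audio or data elements by
# address", using MPEG-2 transport stream packet identifiers (PIDs) as
# the addresses. The PID numbers are illustrative assumptions.
PID_MAP = {0x100: "video", 0x101: "audio", 0x102: "data"}

def separate(packets):
    """Split (pid, payload) packets into per-service lists."""
    services = {"video": [], "audio": [], "data": []}
    for pid, payload in packets:
        kind = PID_MAP.get(pid)
        if kind is not None:
            services[kind].append(payload)
    return services
```

Packets whose address is not mapped to a service are simply dropped in this sketch; a real demultiplexer would route them by program map tables.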
  • Figure 5 depicts a programmable audio input port represented as Port 2 ( Figure 2A).
  • the input port is user-configurable through a web services interface as an analog audio input or a digital audio input.
  • This port is defined as a collection of ports, up to eight, for systems such as Dolby "E”.
  • the input audio signal flows from the input port(s) ( Figure 2A) to either the analog to digital converter ( Figure 2B) or directly into the input/output port of the parallel processor ( Figure 2C) if it is already formatted as a digital signal.
  • If the input signal source is analog, it must be converted to a digital signal.
  • a device such as the Analog Devices AD 9203 is used to convert the analog audio signal into a digital audio signal. From the Analog Devices component the signal is routed to the massive parallel processor input/output port. In the event the signal is a digital source, it can be routed directly to the input/output port of the massive parallel processor.
  • the DMR routes the signal to processing elements (2,0) through (2,3).
  • the processing elements provide the sampling, filtering and coding functions as defined by the audio processing algorithms. (Dependent upon the data rates used for audio coding, fewer or additional processing elements may be utilized.)
  • the processing elements support audio systems including Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E, Pulse coded modulation techniques and Advanced Audio Coding.
  • the processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1 , 2 and 3 documents.
  • the massive parallel processor's processing elements will provide the synchronized video to the proper output ports as defined by the programmer and/or end user.
  • Figure 6 depicts an alternative audio input on Port 3 ( Figure 3A) for additional audio services.
  • the signal flow is from an audio source (Figure 3A) to either the analog to digital converter (Figure 3B) or directly to the input/output port of the processor (Figure 3C). From the output of the analog to digital converter (Figure 3B), if utilized, the signal is sent to the input/output router of the processor (Figure 3C). From the input/output router the signal flows to the data memory and router (Figure 3D) and on to processing elements (2,4) and (2,5).
  • the input port is user configurable through a web services interface as an analog audio input or a digital audio input.
  • This port is defined as a collection of ports, up to four, depending on the audio sources chosen by the end user. If the input signal source is analog, it must be converted to a digital signal.
  • a device such as the Analog Devices AD 9203 is used to convert the analog audio signal into a digital audio signal.
  • the processing elements support audio systems that will include Musicam, Dolby AC-3, Dolby AC-3 5.1 , Dolby E, Pulse Coded Modulation techniques and Advanced Audio Coding.
  • the processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1, 2 and 3 documents.
  • the parallel processor's processing elements provide the synchronized video to the proper output ports as defined by the programmer and/or end user, if this audio is program related. This audio may be independent of the video.
  • Figure 7 depicts the Network Reference Clock (Figure 4A).
  • the signal flow is from a network provided clock (Figure 4A) to a real time clock (Figure 4B). From the output of the real time clock (Figure 4B) the signal is sent to an input/output port of the processor (Figure 4C). From the input/output port the signal flows to a data memory and routing element (Figure 4D) and on to processing element (2,6).
  • the network reference clock synchronizes time with specific networks and coordinates with broadcast clocks.
  • the real time clock on the board maintains synchronization when a network clock is not available.
  • The real time clock produces a reference for the on-board software application within the massive parallel processor to produce a Universal Time Code. This information marks contiguous video frames with metadata, which can then be used for future reference in video asset management systems.
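The frame-marking step can be sketched as attaching a UTC timestamp to each contiguous frame. The frame/metadata layout below is an assumption for illustration, not the patent's format.

```python
# Sketch of marking contiguous video frames with a time reference, as the
# real time clock section describes; the metadata layout is assumed.
from datetime import datetime, timedelta, timezone

def tag_frames(start, frame_rate, count):
    """Attach a UTC timestamp to each of `count` contiguous frames."""
    step = timedelta(seconds=1.0 / frame_rate)
    return [{"frame": i, "utc": (start + i * step).isoformat()}
            for i in range(count)]
```

At 25 frames per second, successive frames are tagged 40 ms apart, giving asset management systems a frame-accurate reference.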
  • Figure 8 depicts the input from a global positioning sensor (Figure 5A). From the sensor the signal flows to the on-board telemetry (Figure 5B). The GPS and telemetry information pass to the input port of the massive parallel processor (Figure 5C). The processor will host a software application in its memory to process the GPS information with information provided by a camera's target position within the telemetry information. The information passes from the data memory and routing element (Figure 5D) to the processing elements as defined by the software application. In this case an example is provided using processing elements (2,7) and (2,8). The processing elements will calculate the observation point of the camera with the GPS software application. This information is formatted and synchronized with the video in processing elements (2,7) and (2,8). The information is provided in the formatted output of the transmitted stream for video asset management systems.
  • GPS systems such as the Raytheon Anti-Jam Receiver are utilized on flight systems today. They interface at a 1394B specification level that is easily supported through the processor's input/output router configured for this digital input format.
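Calculating the camera's observation point from the GPS fix and telemetry can be sketched as a bearing-and-range offset. The flat-earth approximation and the field choices below are assumptions for illustration; the patent does not specify the algorithm.

```python
# Rough sketch of combining the board's GPS fix with camera telemetry
# (bearing and ground range) to estimate the observed target position.
# The flat-earth approximation and parameter names are assumptions.
import math

EARTH_RADIUS_M = 6_371_000.0

def target_position(lat, lon, bearing_deg, range_m):
    """Offset a GPS fix by range_m along bearing_deg (flat-earth approx.)."""
    d = range_m / EARTH_RADIUS_M          # angular distance in radians
    b = math.radians(bearing_deg)
    dlat = d * math.cos(b)
    dlon = d * math.sin(b) / math.cos(math.radians(lat))
    return lat + math.degrees(dlat), lon + math.degrees(dlon)
```

A result like this would be packaged into the private data packets described above and synchronized with the corresponding video frames.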
  • Figure 9 depicts the object recognition input (Figure 6A).
  • the input sensor is typically a "smart camera” provided by vendors such as Pittsburgh Pattern or Cogent Systems that provides information formatted and compliant to the ISO- 19794-5 standard.
  • the information is input through the sensor (Figure 6A) to Ethernet input ports, either (Figure 6I) or (Figure 6M).
  • Figures 6I and 6M are Ethernet input ports. These ports act independently of each other, and each can be configured to either Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6).
  • IPv4 Internet Protocol version 4
  • IPv6 Internet Protocol version 6
  • the input from Figure 6I is routed to the input/output port of the massive parallel processor (Figure 6J).
  • the input and output element (Figure 6J) routes the signal to the DMR (Figure 6K) and then to the processing elements (2,9) through (2,11), utilizing the object recognition software application located on Figure 6L.
  • the alternative route and object recognition software application storage is through the input from Figure 6M routed to the input/output port of the massive parallel processor ( Figure 6N).
  • the input and output element (Figure 6N) routes the signal to the DMR (Figure 6O) and on to processing elements (2,9) through (2,11), utilizing the object recognition software application located on Figure 6P.
  • in Figure 9 the sensor input is depicted as Figure 6A.
  • the smart camera information is provided through the Ethernet Port (Figure 6I) to the parallel processor's input port (Figure 6J) and routed through the data memory and routing element (Figure 6K).
  • the information may require access to the program located in Figure 6L or be immediately processed by processing elements (2,9) through (2, 11).
  • This information can be sent in user data packets to the end users in the transport stream following ISO/IEC Standard 13818-1. This information can be coordinated with video frame timing information as described in ISO/IEC Standard 13818-2.
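Carrying recognition results as user data can be sketched as filling fixed 188-byte transport stream packets. The packet size and the 0x47 sync byte come from ISO/IEC 13818-1; the header handling below is deliberately simplified and the PID value is illustrative.

```python
# Sketch of wrapping recognition results in a fixed-size transport stream
# packet on a private PID, loosely following the 188-byte ISO/IEC 13818-1
# packet layout. Header flags are simplified assumptions.
def private_packet(pid, payload):
    """Build one simplified 188-byte TS packet carrying `payload`."""
    if len(payload) > 184:
        raise ValueError("payload exceeds one TS packet")
    header = bytes([0x47,                     # sync byte (ISO/IEC 13818-1)
                    (pid >> 8) & 0x1F,        # PID high bits (flags omitted)
                    pid & 0xFF,               # PID low bits
                    0x10])                    # payload-only indicator
    body = payload + b"\xff" * (184 - len(payload))  # stuffing bytes
    return header + body
```

Downstream receivers would associate such packets with video frame timing via the presentation timestamps described in ISO/IEC 13818-2.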
  • Some of the applications of object recognition are facial recognition, scene change detection, license plate recognition and geo-spatial location recognition.
  • Figure 10 depicts the ETR-290 software application. This is a European Telecommunications Standards Institute (ETSI) measurement guideline for transport stream analysis.
  • the program resides on the processing board on non-volatile double data rate memory.
  • the program provides a web services interface for the user also stored in memory on the board ( Figure 6L).
  • the routing originates in DDR memory (Figure 6L) and monitors one of the three ASI outputs or two Ethernet output ports across the fabric of the massive parallel processor. Specifically, this application is depicted monitoring the output streams of the processing elements providing the output streams to Figures 9C, 10C, 11C, 12C and 13C. The external ASI stream in Figure 16C can also be monitored.
  • the web services interface provides compliance information for ISO/IEC 13818-1 or
  • Figure 11 depicts the DVB Standard system information being inserted into the program streams for non-ATSC PSIP applications.
  • the required tables of the standard for the transport stream are integrated and maintained in either DDR memory or memory on the massive parallel processor.
  • the information flows from the DDR memory ( Figure 6L) to the input/output element of the processor ( Figure 6J) through the data memory and routing element ( Figure 6K) and to a defined processing element on the processor.
  • processing element (2, 14) has been chosen.
  • the DVB program guide utilizes the DDR memory on the processing board.
  • This program guide information is provided through a system management external computer via the Ethernet Port ( Figure 6L).
  • This information is non-real time information and is downloaded to memory on an ad-hoc basis. This process occurs once every one to two weeks.
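The minimum sixteen days of guide data and the one-to-two-week download cadence fit together: even a worst-case fourteen-day gap between downloads leaves guide data ahead of the clock. A trivial check of that arithmetic (the helper is illustrative, not part of the patent):

```python
# Relates the minimum sixteen-day guide depth to the one-to-two-week
# download cadence described in the text. Illustrative helper only.
GUIDE_DAYS = 16  # minimum days of program data carried in the stream

def guide_days_remaining(days_since_download):
    """Days of guide data still ahead of the current time."""
    return max(GUIDE_DAYS - days_since_download, 0)
```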
  • the processing signal flow for the DVB program guide information would be from the DDR memory ( Figure 6L) to the input/output router of the processor ( Figure 6J) through a DMR ( Figure 6K) to the processing element (2, 14).
  • FIG. 12 depicts the Internal Program System Information Protocol (PSIP) software application. This is a software application that resides in non-volatile memory on the processor board. In the Figure the capacity for this software program resides in the DDR memory (Figure 6P). The data in the PSIP is populated by the end user through a web based interface.
  • PSIP Program System Information Protocol
  • the signal flows as follows.
  • the program resides in memory, Figure 6P. It is accessed on a continual basis by processing element (2, 15).
  • the information flows from the software application stored in DDR memory (Figure 6P) through the I/O router of the massive parallel processor (Figure 6N) to the data memory router (Figure 6O). The information is then placed within the stream by processing element (2,15).
  • Figure 13 depicts an external Program System Information Protocol program that resides on hardware external to the processor board. This program enters through either Ethernet Port (Figure 6I) or (Figure 6M). The information is routed through the appropriate I/O elements on the massive parallel processor, either Figure 6J or 6N. From the I/O element the signal is routed to the appropriate data memory and routing element (Figure 6K) or (Figure 6O). The information is then processed by processing element (2,16) and placed onto the output stream in conformance with the A/65 specification.
  • Figure 14 depicts Command information for the video/audio and data encoder processor board.
  • the control information is stored in memory (Figure 6P) and accessed through web pages stored in non-volatile memory (Figure 6P) on the processor board.
  • the program is accessed through the Ethernet Port (Figure 6M) through the processor's I/O port (Figure 6N) and routed to the data memory and routing element (Figure 6O).
  • the information flows back through the I/O port (Figure 6N) to the DDR memory (Figure 6P). All changes are implemented through the route of the DDR (Figure 6P) through the I/O port (Figure 6N) to the data memory router (Figure 6O) to the processing element (2,17).
  • the command sets system parameters. These parameters include:
  • Frame rates including 24fps, 25fps, 29.97fps, 30fps, 50fps, 59.94fps and 60fps for the appropriate resolutions;
  • Audio data rates per channel or system, such as 384 Kbs for Dolby AC-3 5.1, up to 640 Kbs for non-Dolby "E";
  • Dolby E rates of 1.536 Mbs at 16 bits, 1.920 Mbs at 20 bits or 2.304 Mbs at 24 bits sampled;
  • Audio sampling depth of 16, 20 or 24 bits for Dolby "E", at 32 kHz, 44.1 kHz or 48 kHz;
  • ASI port configuration, either byte or burst mode;
  • Ethernet input port configuration either IPv4 or IPv6 for each port individually;
  • Ethernet output port configuration, either IPv4 or IPv6 for each port individually;
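The command parameters listed above can be pictured as a small validation layer in front of the encoder configuration store. The sketch below is illustrative only — the parameter names are assumptions, not taken from the apparatus — but the permitted values and the Dolby E rate arithmetic (48 kHz × sample width × 2 AES channels) reproduce the figures given in the description.

```python
# Illustrative configuration check for the command parameters listed above.
# Parameter names are assumptions; the apparatus stores these settings in
# DDR memory and exposes them through web pages.

ALLOWED_FRAME_RATES = {24.0, 25.0, 29.97, 30.0, 50.0, 59.94, 60.0}
ALLOWED_SAMPLE_RATES_HZ = {32000, 44100, 48000}
DOLBY_E_SAMPLE_BITS = {16, 20, 24}

def dolby_e_rate_bps(sample_bits):
    """Dolby E rate = 48 kHz x sample width x 2 AES channels."""
    if sample_bits not in DOLBY_E_SAMPLE_BITS:
        raise ValueError("Dolby E uses 16-, 20- or 24-bit samples")
    return 48000 * sample_bits * 2

def validate_command(params):
    """Reject a parameter set outside the ranges given in the description."""
    if params["frame_rate"] not in ALLOWED_FRAME_RATES:
        raise ValueError("unsupported frame rate")
    if params["audio_sample_rate_hz"] not in ALLOWED_SAMPLE_RATES_HZ:
        raise ValueError("unsupported audio sampling rate")
    if params["asi_mode"] not in {"byte", "burst"}:
        raise ValueError("ASI port is byte or burst mode")
    if params["ip_version"] not in {4, 6}:
        raise ValueError("Ethernet ports are IPv4 or IPv6")
    return params
```

As a check, `dolby_e_rate_bps(16)` yields 1,536,000 b/s, matching the 1.536 Mb/s figure quoted above.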
  • Figure 15 depicts the Control function of the Processor Board.
  • the control parameters and web services interface pages are stored in non-volatile memory (Figure 6P).
  • the access is requested through the Ethernet Port ( Figure 6M).
  • the request is routed through the input/output router of the processor (Figure 6N) to the DDR non-volatile memory (Figure 6P) and/or the Boot section of the processor, or an external EEPROM holding boot information, through the data and memory routing element (Figure 6O).
  • the control function appears on web pages to the end user.
  • the functions addressed in the control include, but are not limited to:
  • Figure 16 depicts the Conditional Access "CA" information from an external subscriber management system. There are two separate pieces of conditional access information.
  • the first piece of conditional access information is related to the program(s) being processed on the processor board by the massive parallel processor. This information is an indication of whether the program utilizes conditional access or if it does not use conditional access information. This information is provided and stored for each program on the massive parallel processor.
  • the second piece of the conditional access information also originates in an external subscriber management computer and is transmitted to the reception devices in the network. This tells the reception device if it has access to the program. If there is conditional access information being transmitted to the receivers in the network it may be entered through the Ethernet port (Figure 6M). The routing of this information is from the external computer through the Ethernet Port (Figure 6M) to the I/O router of the massive parallel processor (Figure 6N) and onto the data memory and routing element (Figure 6O). From the DMR the information is routed to a processing element, in this example (2, 19). The processing element will route the CA information to the proper program DMR and place it on the transport stream for transmission. The conditional access information for the network reception devices is not stored on the processor board.
  • This "CA” information may also be entered downstream of this apparatus as an alternative in external multiplexers. This is a common practice in many networks.
  • the "CA” information is in the "CA” section of the 13818-1 transport stream that is set for each program elementary stream for video, audio and data service.
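In ISO/IEC 13818-1 this per-elementary-stream CA signalling is carried as a CA_descriptor (descriptor_tag 0x09), which holds a 16-bit CA_system_ID and the 13-bit PID carrying entitlement messages. A minimal sketch of constructing one (the function name is illustrative):

```python
def ca_descriptor(ca_system_id, ca_pid, private_data=b""):
    """Build an ISO/IEC 13818-1 CA_descriptor (descriptor_tag 0x09)."""
    if not 0 <= ca_pid <= 0x1FFF:
        raise ValueError("CA_PID is a 13-bit field")
    body = (
        ca_system_id.to_bytes(2, "big")
        + (0xE000 | ca_pid).to_bytes(2, "big")  # '111' reserved + 13-bit CA_PID
        + private_data
    )
    return bytes([0x09, len(body)]) + body
```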
  • Figure 17 depicts an example of how the second programmable video input for the apparatus may be routed.
  • the analog or digital input signal flow is from the input port ( Figure 7A) to either the analog to digital converter (Figure 7B) or to the DDR memory ( Figure 7C). From the analog to digital converter or the DDR memory the signal is directly routed to the input/output router ( Figure 7D) of the processor. The signal is then routed to the data memory and routing element, DMR, ( Figure 7E) of the processor. From the DMR, the signal is routed to processing elements (3,0) through (3, 19) plus (4,0) through (4, 19), plus (7,0) through (7,9) and (8,0) through (8,9).
  • the input port is user configurable, through a web services interface via the Ethernet input port, as an analog composite video input, a SMPTE 259M serial digital standard definition video signal, along with embedded audio and other data services, or a high definition SMPTE 292M serial digital input with embedded audio and data services. These three inputs are standard within the broadcast and commercial markets.
  • When an analog composite video input is utilized, the signal must be routed through an analog to digital converter component, located on the board and identified as Figure 1B. Such components are readily available, for example the Analog Devices AD9203, commercially available from, among others, Analog Devices of Norwood, Massachusetts. From the analog to digital converter component the signal is routed to a data memory and routing element in the massive parallel processor.
  • a DDR SDRAM (double data rate synchronous dynamic random access memory) component is identified as Figure 1C. A suitable component is the EOREX EM44CM1688LBA, commercially available from EOREX of Chubei County, Taiwan.
  • the input signal is then clocked into the massive parallel processor's input/output data port.
  • the signal is routed to the associated DMR on the processor.
  • a preprogrammed software application instructs the massive parallel processor to complete one or more of the following functions, as required: a. separation into video, audio or data elements by address;
  • i. variable length coding;
  • j. binary arithmetic coding.
  • the ISO/IEC 13818-1, -2 and -3 specifications also cover the clock synchronization for frame recovery by the receiving devices.
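The clock synchronization referenced above rests on the ISO/IEC 13818-1 system clock: a 27 MHz reference carried as a Program Clock Reference (PCR), split into a 33-bit base in 90 kHz units plus a 9-bit extension. A brief sketch of that split:

```python
PCR_CLOCK_HZ = 27_000_000  # ISO/IEC 13818-1 system clock

def pcr_fields(seconds):
    """Split a time into the 33-bit PCR base (90 kHz units) and 9-bit extension."""
    ticks = round(seconds * PCR_CLOCK_HZ)
    base, ext = divmod(ticks, 300)  # 27 MHz = 90 kHz * 300
    return base % (1 << 33), ext

def pcr_seconds(base, ext):
    """Recover the clock value a receiver would use for frame recovery."""
    return (base * 300 + ext) / PCR_CLOCK_HZ
```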
  • Figure 18 depicts a programmable audio input port represented as input port 8 (Figure 8A).
  • the input port is user-configurable through a web services interface as an analog audio input or a digital audio input.
  • This port is defined as a collection of ports, up to eight, for systems such as Dolby "E” .
  • the input audio signal flows from the input port(s) ( Figure 8A) to either the analog to digital converter ( Figure 8B) or directly into the input/output port of the parallel processor ( Figure 8D) if it is already formatted as a digital signal.
  • if the input signal source is analog, it must be converted to a digital signal.
  • a device such as the Analog Devices AD9203 is used to convert the analog audio signal into a digital audio signal. From the Analog Devices component the signal is routed to the massive parallel processor input/output port.
  • in the event the signal is a digital source, it can be routed directly to the input/output port of the massive parallel processor.
  • the signal is then routed to the DMR on the processor.
  • the DMR routes the signal to processing elements (5, 10) through (5, 13).
  • the processing elements provide the sampling, filtering and coding functions as defined by the audio processing algorithms. (Dependent upon the data rates used for audio coding, fewer or additional processing elements may be utilized.)
  • the processing elements support audio systems including Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E, pulse coded modulation techniques and Advanced Audio Coding.
  • the processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1, -2 and -3 documents.
  • the massive parallel processor's processing elements will provide the synchronized video to the proper output ports as defined by the programmer and/or end user.
  • Figure 19 depicts the first of three Asynchronous Serial Interface "ASI" output ports.
  • Figure 20 depicts the second of three Asynchronous Serial Interface "ASI" output ports.
  • the port is identical in function to the port of Figure 9C in Figure 19.
  • Each ASI is formatted per ISO/IEC specification 13818-1 and ITU-T H.264 specifications.
  • the processing element (5, 15) provides a multiplex of video, audio and data services in compliance with the previously referenced 13818-1 and H.264 specifications. This output port is void of extraneous information found on the input Ethernet ports.
  • the signal is routed from a processing element (5, 15) to the data memory and router (Figure 10A) onto the parallel processor input/output (Figure 10B) and then to the ASI output port (Figure 10C).
  • Figure 21 depicts the third of three Asynchronous Serial Interface "ASI" output ports.
  • the port is identical in function to the ports of Figures 9C and 10C in Figures 19 and 20.
  • Each ASI is formatted per ISO/IEC specification 13818-1 and ITU-T H.264 specifications.
  • the processing element (5, 16) provides a multiplex of video, audio and data services in compliance with the previously referenced 13818-1 and H.264 specifications.
  • This output port is void of extraneous information found on the input Ethernet ports.
  • the signal is routed from a processing element (5, 16) to the data memory and router (Figure 11A) onto the parallel processor input/output (Figure 11B) and then to the ASI output port ( Figure 11C).
  • FIG 22 depicts the first of two Ethernet outputs.
  • the Ethernet output ports can be user configured to be either formatted as IPv4 or IPv6.
  • the ports are configured to carry specific video, audio and data services information formatted as MPEG over IP.
  • the video, audio, and data services that make up the elementary program streams are multiplexed onto the Ethernet port through processing elements (5, 17) and (5, 18).
  • the information from processing elements (5, 17) and (5, 18) passes to the data memory and routing element ( Figure 12A) then onto the parallel processor input/output port element( Figure 12B) and then routed to the output Ethernet Port ( Figure 12C).
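The text does not fix a packing for "MPEG over IP", but common practice places seven 188-byte transport packets in each UDP datagram (7 × 188 = 1316 bytes, which fits a standard 1500-byte Ethernet MTU). A sketch under that assumption:

```python
TS_PACKET = 188
PACKETS_PER_DATAGRAM = 7  # 7 x 188 = 1316 bytes, within a 1500-byte MTU

def udp_payloads(ts_stream):
    """Split an MPEG-2 transport stream into MPEG-over-IP datagram payloads."""
    if len(ts_stream) % TS_PACKET:
        raise ValueError("stream must consist of whole 188-byte packets")
    step = TS_PACKET * PACKETS_PER_DATAGRAM
    return [ts_stream[i:i + step] for i in range(0, len(ts_stream), step)]
```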
  • FIG 23 depicts the second of two Ethernet outputs.
  • the Ethernet output ports can be user configured to be either formatted as IPv4 or IPv6.
  • the ports are configured to carry specific video, audio and data services information formatted as MPEG over IP.
  • video, audio, and data services that make up the elementary program streams are multiplexed onto the Ethernet port through processing elements (5, 19) and (6, 19).
  • Figure 24 depicts a video and audio decoder. This is designed as a confidence decoder that provides decompression of video and audio prior to transmission.
  • the decoder is designed to take the compressed video and audio from one set of processing elements and route the signal to a second set of processing elements. Depending upon the data rate and resolution of the video and audio streams additional or fewer processing elements can be assigned as required.
  • the decoder routes the signal from an assigned series of processing elements to the data memory and routing element ( Figure 14A). From the DMR the signal is then routed to the parallel processor input/output router ( Figure 14B) and out to a serial decoder output ( Figure 14C).
  • the serial decoder output ( Figure 14C) is formatted by processing elements to either the SMPTE 259M or SMPTE 292M specification for viewing on a monitor.
  • Figure 25 depicts a high bit rate storage capability on the processing board.
  • the function supports a feature that allows a capture of high bit rate video/audio that may be accessed at a later time.
  • the processing board provides the ability to concurrently process video at two resolutions and two different data rates. The user has the capability to store either of the video/audio streams on the high data rate storage.
  • the compressed signal is routed from a set of selected processing elements to processing elements (8, 12) and (8, 13). From these processing elements the information is routed to a data memory and router (Figure 15A) to the parallel processor input/output router (Figure 15B) and then routed to the storage device (Figure 15C).
  • Figure 26 depicts the monitoring and/or processing of an external ASI transport stream compliant to ISO/IEC 13818-1 standards.
  • the external input (Figure 16C) enters through the parallel processor input/output router (Figure 16B).
  • the stream is then routed to a predefined set of processing elements.
  • the external stream may be:
  • Figure 27 depicts the instruction and boot function of the parallel processing component. This Figure has three components: (1) the Electrically Erasable Programmable Read-Only Memory, EEPROM (Figure 17A); (2) the Boot Control SPI (Figure 17B); and (3) the Serial Bus Controller (Figure 17C).
  • the EEPROM is a separate component on the processor board and it works in conjunction with the Boot Control and Serial Bus Controller.
  • Figure 28 depicts the RS-232 closed captioning input.
  • the source is an external component with its input to a RS-232 connector ( Figure 18A). This is an analog signal and must be converted to digital through a converter ( Figure 18B).
  • the signal is then routed to a data memory and routing element ( Figure 18C).
  • the signal is then routed to the data memory and router element ( Figure 18D) and then routed to a predefined processing element.
  • the processed signal is timed with the video frames and embedded within the transport stream.
  • Figure 29 depicts the analog composite synchronization signal. This signal is used for frame synchronization in studio applications.
  • the signal is applied to the input port ( Figure 19A).
  • the signal is then routed to an analog to digital converter ( Figure 19B).
  • From the analog to digital converter the signal is routed to the parallel processor input/output element (Figure 19C) and then to the data memory and router element (Figure 19D). From the data memory and router the signal is routed to a predefined processing element.
  • Figure 30 depicts the utilization of processing elements, in this example elements (9, 10) through (9, 19), for electronic image stabilization.
  • the video will be monitored within processing elements, in this example (0,0) through (0,19) and (1,0) through (1,19), for the first video input (figure 1A) for horizontal and vertical movement. If the movement exceeds pre-defined parameters, the video will be pre-processed, in this example in processing elements (9,10) through (9,19), to provide electronic stabilization prior to applying a compression algorithm.
  • This processing can apply to two video compression processes or standards if the images being processed are from one image source for figures 1A and 7A.
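The decision described above — stabilize only when motion exceeds pre-defined parameters — can be sketched as follows. The names, the limits and the representation of motion as per-block (horizontal, vertical) vectors are all assumptions for illustration:

```python
def needs_stabilization(motion_vectors, h_limit=8.0, v_limit=8.0):
    """True when average global motion exceeds the pre-defined parameters.

    motion_vectors: per-block (horizontal, vertical) displacements in pixels.
    """
    n = len(motion_vectors)
    gx = sum(mx for mx, _ in motion_vectors) / n  # global horizontal motion
    gy = sum(my for _, my in motion_vectors) / n  # global vertical motion
    return abs(gx) > h_limit or abs(gy) > v_limit
```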
  • In FIGS. 31-58 an integrated concurrent multi-standard video/audio decoder and software applications processor is shown and described.
  • the following descriptions provide examples of applications being assigned to specific processing elements within a massive parallel processor.
  • the parallel processor design allows dynamic, cycle-by-cycle, real time programming. In the actual implementation the processing elements may be shared among multiple mathematical processes and functional applications.
  • Figure 31 depicts the processing board with the parallel processor.
  • the processor has multiple parallel processing elements.
  • This Figure addresses the ability to start the processor through a boot process. This process can originate from either stored code on the processor or through external devices such as an electronically erasable programmable-read-only-memory (EEPROM) as this Figure depicts. It is also capable of receiving the instructions from other devices such as RISC controllers.
  • the EEPROM (figure 31A) provides the boot information through the Boot Controller SPI interface internal to the processor (figure 31B) to the serial bus controller (figure 31C).
  • Figure 32 depicts the first RF input information process.
  • the RF input (figure 32 A) to the board is routed to a demodulator (figure 32B) which removes the signal from a carrier wave.
  • the digital output from the demodulator is routed to the I/O router (figure 32C) of the parallel processor.
  • the I/O router then provides the data to the data memory and routing device, DMR (figure 32D).
  • DMR then routes the information to the processing element, in this example processing element (0,0).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
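The parsing step above operates on ISO/IEC 13818-1 transport packets, each 188 bytes beginning with sync byte 0x47 and carrying a 13-bit packet identifier (PID); routing through the DMR structure can be pictured as a PID lookup. The routing-table shape and default element are assumptions for illustration:

```python
SYNC_BYTE = 0x47

def packet_pid(packet):
    """Extract the 13-bit PID from a 188-byte ISO/IEC 13818-1 packet."""
    if len(packet) != 188 or packet[0] != SYNC_BYTE:
        raise ValueError("not an aligned transport packet")
    return ((packet[1] & 0x1F) << 8) | packet[2]

def route(packet, pid_to_element):
    """Look up the processing element assigned to the packet's PID."""
    return pid_to_element.get(packet_pid(packet), (0, 0))  # default parser PE
```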
  • Figure 33 depicts the second RF input information process.
  • the RF input (figure 33A) to the board is routed to a demodulator (figure 33B) which removes the signal from a carrier wave.
  • the digital output from the demodulator is routed to the I/O router (figure 33C) of the parallel processor.
  • the I/O router then provides the data to the data memory and routing device, DMR (figure 33D).
  • DMR then routes the information to the processing element, in this example processing element (0, 1).
  • the processing element parses the packets and routes them to the appropriate processing elements through the DMR structure for further processing.
  • Figure 34 depicts the first Asynchronous Serial Input, ASI (figure 34A).
  • This input carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the information packets carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed from the ASI input to the I/O router (figure 34B) and then to the data memory and router (figure 34C).
  • the information is routed to the processing element (0,2).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 35 depicts the second Asynchronous Serial Input, ASI (figure 35 A).
  • This input carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the information packets carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed from the ASI input to the I/O router (figure 35B) and then to the data memory and router (figure 35C). In this example the information is routed to processing element (0,3).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 36 depicts the first Ethernet input (figure 36A).
  • This input carries information on either an Internet Protocol version 4 Standard or Internet Protocol version 6 Standard.
  • the information packets carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed from the I/O router (figure 36B) and then to the data memory and router (figure 36C).
  • the information is routed to the processing elements (0,4) and (0,5).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 37 depicts the second Ethernet input (figure 37A).
  • This input carries information on either an Internet Protocol version 4 Standard or Internet Protocol version 6 Standard.
  • the information packets carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed to the I/O router (figure 37B) and then to the data memory and router (figure 37C).
  • the information is routed to the processing elements (0,6) and (0,7).
  • the processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 38 depicts the mobile communications RF input port (figure 38A). This input is demodulated in (figure 38B) and provides information packets to the input/output router (figure 38C). The information packets carry multiple standards for video and audio in addition to other various data packets. The information is routed from the input/output router (figure 38C) and then to the data memory and router (figure 38D). In this example the information is routed to the processing elements (0,8) and (0,9). The processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
  • Figure 39 depicts the mobile communications RF reference level input (figure 39A).
  • This input is an analog voltage and is applied to an analog to digital converter (figure 39B).
  • the analog to digital converter provides a digitally sampled reference level to the I/O router, figure 39C.
  • the I/O router provides the signal to the data memory and router element (figure 39D).
  • the DMR information is routed to the processing element (1,0).
  • the processing element provides information to the decoder processing element section for scaling of the video dependent upon the signal strength at any given moment.
  • Figure 40 depicts the meta-data tagging input and output application of the apparatus.
  • the primary function is to allow the input of data to specific frames of video for future reference.
  • the decoder may supply the decoding of the universal time code and geo- positioning system data if it is present in the stream provided to the processing elements. This information can be enhanced with meta-data if required.
  • this application also provides an interface for the object recognition application software.
  • the object recognition application software can provide access for scene change detection, facial recognition and other pre-defined object recognition.
  • the information is routed from the application (figure 40A) to the Ethernet input (figure 40E) and onto the I/O router (figure 40G). From the input/output router the information can be routed either to the DDR memory (figure 40F) for reference information, or to the data memory and router element (figure 40H).
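The document does not specify how the object recognition software performs scene change detection; one common approach compares the mean absolute luma difference of consecutive frames against a threshold. A sketch under that assumption, with frames represented as flat lists of luma samples:

```python
def scene_changes(frames, threshold=40.0):
    """Flag frame indices where mean absolute luma difference jumps.

    frames: list of equal-length flat luma sample sequences (0-255).
    threshold: hypothetical cut-detection level in luma code values.
    """
    cuts = []
    for i in range(1, len(frames)):
        prev, cur = frames[i - 1], frames[i]
        mad = sum(abs(a - b) for a, b in zip(prev, cur)) / len(cur)
        if mad > threshold:
            cuts.append(i)
    return cuts
```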
  • FIG 41 depicts the ETR 290 software application.
  • This application allows the user to monitor the content and bandwidth of the program elements within either the input or output streams of this parallel processor.
  • This program will appear as a web service based application.
  • the application is started in (figure 40B).
  • the information in this example is routed from processing element (1,2) to the data memory and router element (figure 40H) through the input/output port (figure 40G) and back through the Ethernet Port (figure 40E) to the end user.
  • Figure 42 depicts the command channel information set-up.
  • the command channel allows the configuration of the decoder.
  • This function is a web based interface starting with the information request in (figure 40C).
  • the information is exchanged over the Ethernet interface (figure 40E) to the input/output interface (figure 40G).
  • the input/output interface information is routed to the processing element (1,3).
  • the information is processed and controls such functions as:
  • Figure 43 depicts the command channel information set-up.
  • the command channel allows or denies user access to the apparatus and to separate operational levels of the apparatus.
  • This function is a web based interface starting with the application request in (figure 40D).
  • the information is exchanged over the Ethernet interface (figure 40E) to the input/output interface (figure 40G).
  • the input/output interface information is routed to the processing element (1,4).
  • the information is processed and controls such functions as:
  • Figure 44 depicts the first Asynchronous Serial Output, ASI (figure 41 A).
  • This output carries video and audio in a compressed format on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the information packets may carry multiple standards for video and audio in addition to other various data packets.
  • the information is routed from processing element (1,5) to the ASI data memory router element (figure 41C). From the DMR the information is routed to the input/output router (figure 41B) and then to the ASI output port (figure 41A).
  • Figure 45 depicts the second Asynchronous Serial Output, ASI (figure 42A).
  • This output carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the video and audio are in a compressed format.
  • the information packets may carry multiple standards for video and audio plus other related and non-related data packets.
  • the information is routed from processing element (1,6) to the ASI data memory router element (figure 42C). From the DMR the information is routed to the input/output router (figure 42B) and then to the ASI output port (figure 42A).
  • Figure 46 depicts the third Asynchronous Serial Output, ASI (figure 43A).
  • This output carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets.
  • the video and audio are in a compressed format.
  • the information packets may carry multiple standards for video and audio plus other related and non-related data packets.
  • the information is routed from processing element (1,7) to the ASI data memory router element (figure 43C). From the DMR the information is routed to the input/output router (figure 43B) and then to the ASI output port (figure 43A).
  • the additional ASI output allows an output to a dedicated storage capability along with allowing redundant ASI outputs in Figures 44 and 45 to provide system level fail safe capability.
  • Figure 47 depicts the first Ethernet Output (figure 44A).
  • This output port carries information on an Internet Protocol version 4 or 6.
  • the user may select the format.
  • the information may be formatted in MPEG over IP configurations.
  • the information packets may carry multiple standards for video and audio in addition to various other data packets.
  • the information is routed from processing elements (1,8) and (1,9) to the data memory router element (figure 44C). From the DMR the information is routed to the I/O router (figure 44B) and then to the Ethernet output port (figure 44A).
  • Figure 48 depicts the second Ethernet Output (figure 45A).
  • This output carries information on an Internet Protocol version 4 or 6.
  • the user may select the format.
  • the information may be formatted in MPEG over IP configurations.
  • the information packets may carry multiple standards for video and audio in addition to various other data packets.
  • the information is routed from processing elements (2,0) and (2,1) to the data memory router element (figure 45C). From the DMR the information is routed to the I/O router (figure 45B) and then to the Ethernet output port (figure 45A).
  • Figure 49 depicts the first of two SMPTE 292M outputs. This is decoded, non- compressed video, non-compressed audio and data information.
  • the information is formatted in processing elements (2,2), (2,3) and (2,4). After formatting, the information is then routed to the data memory routing element (figure 46C). From the DMR the information is passed to the input/output router (figure 46B) and then supplied to the SMPTE-292M output element (figure 46A).
  • This non-compressed video, audio and data can also be routed to other digital interfaces such as HDMI.
  • Figure 50 depicts the second of two SMPTE 292M outputs. This is decoded, non- compressed video, non-compressed audio and data information.
  • the information is formatted in processing elements (2,5), (2,6) and (2,7). After formatting, the information is then routed to the data memory routing element (figure 47C). From the DMR the information is passed to the I/O router (figure 47B) and then supplied to the SMPTE-292M output element (figure 47A).
  • This non-compressed video, audio and data can also be routed to other digital interfaces such as HDMI.
  • Figure 51 depicts the first of two outputs from the parallel processor to be formatted to a digital to analog conversion for display purposes.
  • the output could also be R, G, B, H and V or other format.
  • This information is then passed to the data memory routing element (figure 48D).
  • the information is then passed to the input/output router (figure 48C) and then to the digital to analog converter (figure 48B).
  • the analog outputs are then provided to the component elements (figure 48A) for display processing.
  • Figure 52 depicts the second of two outputs from the parallel processor to be formatted to a digital to analog conversion for display purposes.
  • the output could also be R, G, B, H and V or other format.
  • This information is then passed to the data memory routing element (figure 49D).
  • the information is then passed to the input/output router (figure 49C) and then to the digital to analog converter (figure 49B).
  • the analog outputs are then provided to the component elements (figure 49A) for display processing.
  • FIG 53 depicts the first of two output processed audio routes.
  • the audio can be one of multiple standards including but not limited to Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E and Advanced Audio Coding.
  • the number of channels can be up to eight per output port configuration.
  • processing elements (3,4) and (3,5) format the information for the data memory router (figure 50D). From the DMR the information is routed to the input/output router (figure 50C). From the input/output router the information is provided to a digital to analog converter (figure 50B). The information is then provided to the audio output channel element (figure 50A).
  • FIG. 54 depicts the second of two output processed audio routes.
  • the audio can be one of multiple standards including but not limited to Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E and Advanced Audio Coding.
  • the number of channels can be up to eight per output port configuration.
  • processing elements (3,6) and (3,7) format the information for the data memory router (figure 51D). From the DMR the information is routed to the input/output router (figure 51C). From the input/output router the information is provided to a digital to analog converter (figure 51B). The information is then provided to the audio output channel element (figure 51A). The information may bypass the digital to analog converter and be routed directly to the output element (figure 51A) if an external digital audio decoder is utilized.
  • FIG. 55 depicts the Program Specific Information Protocol (PSIP) information file.
  • This information, if included in the received stream, is processed by processing element (3,8).
  • the information is then routed to the data memory router element (figure 52D) and then passed to the data memory routing element (figure 40H) supporting the Ethernet interface.
  • the information is then passed to the input/output router (figure 40G) and then routed to the Ethernet interface (figure 40E) and onto the Ethernet IP stream.
  • Figure 56 depicts the processing of the first of two compressed video and associated audio elementary streams by processing elements (4,0 through 4,9 and 5,0 through 5,9). These processes convert the information from a compressed format to a non-compressed format. The information is then routed and ported to either the digital output ports (figures 46A or 47A) and/or the analog output ports (figures 48A or 49A). This process utilizes memory capabilities of the additional memory located on the board (figure 53C).
  • Figure 57 depicts the processing of the second of two compressed video and associated audio elementary streams by processing elements (6,0 through 6,9 and 7,0 through 7,9). These processes convert the information from a compressed format to a non-compressed format. The information is then routed and ported to either the digital output ports (figures 46A or 47A) and/or the analog output ports (figures 48A or 49A). This process utilizes memory capabilities of the additional memory located on the board (figure 54C).
  • Figure 58 depicts the ability to direct the video decoder to process the video in a scaled format.
  • the scaled format is described in Annex G of the H.264 documentation. Processing elements (8,0 through 8,9) and (9,0 through 9,5) are utilized for this process.
  • the video can be scaled in conjunction with the signal level (figure 39A) of a mobile application (figure 38A) input.
  • the decoder and encoder functions described can be combined on a single board using a unified parallel processor to create a new platform for transcoding by using a concurrent multi-standard decoder for decoding a signal and re-encoding the signal to a new format using a multi-standard concurrent encoder.
  • in a concurrent multi-standard transcoder architecture, the decoded video, audio and data channels are received in one standard and are formatted in an alternative standard.
  • the apparatus described can concurrently decode multiple standards and concurrently encode the signals in multiple alternative standards.
  • the transcoder apparatus receives the signal from an external source as described with respect to the decoder above.
  • the decoded information is routed on a common internal bus structure of the parallel processors to the encoding processing elements on the unified parallel processor for video, audio and data processing.
  • the architecture allows the replacement of alternative system data.
  • the alternative system data can include conditional access information, program specific information protocol "PSIP" information, separate system information and alternatively formatted closed captioning data.
  • PSIP program specific information protocol
  • the command and control of the transcoder additionally allows the processing of the incoming streams to the apparatus.
  • the incoming stream processing may include dropping unwanted packets or program services.
  • the transcoder platform architecture additionally allows the insertion of new video, audio and data information for processing by the encoding function.
  • a combination of the decoder functionality and the encoder functionality may be combined using a common internal bus structure of the processor to provide for a Transcoder.
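
The decode-then-re-encode flow the points above describe can be sketched at a very high level. The classes below are hypothetical stand-ins for groups of processing elements on the unified parallel processor, not an API from the apparatus; they only illustrate how a shared internal path lets a decoder for one standard feed an encoder for an alternative standard.

```python
# Hedged sketch of the transcoder data flow: decode in one standard,
# carry the uncompressed frames over a shared internal path, and
# re-encode in an alternative standard. Class and field names are
# illustrative assumptions, not part of the described apparatus.

class Decoder:
    def __init__(self, standard):
        self.standard = standard

    def decode(self, packet):
        # Stand-in: unwrap the notional compressed payload
        return {"frame": packet["payload"], "src": self.standard}

class Encoder:
    def __init__(self, standard):
        self.standard = standard

    def encode(self, frame):
        # Stand-in: re-wrap the frame in the target standard
        return {"payload": frame["frame"], "codec": self.standard}

def transcode(packets, src_standard, dst_standard):
    """Decode each packet from src_standard and re-encode as dst_standard."""
    dec, enc = Decoder(src_standard), Encoder(dst_standard)
    return [enc.encode(dec.decode(p)) for p in packets]
```
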

Abstract

A system for encoding signals comprises a parallel processor, at least one input coupled to the parallel processor and configured to receive a first signal comprising at least one of a first video signal and a first audio signal, and at least one output coupled to the parallel processor. The parallel processor is configured to concurrently encode at least one of the first video signal and the first audio signal using at least one of two video standards, two audio standards and two data rates, and output a second signal comprising at least one of a second video signal and a second audio signal.

Description

PATENT
ATTORNEY DOCKET NO: 35511/09000-PCT
TITLE OF THE INVENTION
INTEGRATED CONCURRENT MULTI-STANDARD ENCODER, DECODER AND TRANSCODER
FIELD OF THE INVENTION
[001] The present invention relates generally to video and audio encoding. More particularly, the present invention relates to an integrated concurrent multi-standard encoder, decoder and transcoder for encoding, decoding and transcoding video and audio into multiple standards and/or at multiple data rates.
BACKGROUND OF THE INVENTION
[002] Video and audio compression techniques have continued to improve since the first professional MPEG video encoders were designed and sold commercially in the late 1980s. There have been significant reductions in physical size and power consumption, and improvements in the quality of video and audio processing and associated data applications, since the introduction of the MPEG technology and the ratification of the MPEG-2 standards in the early 1990s. The ISO/IEC MPEG specifications, 13818-1, 13818-2, 13818-3, and the International Telecommunications Union, ITU-T, H.264 specifications form the basis of the current video and audio processing designs that will be utilized in this device. The patents from the MPEG Licensing Authority for MPEG-2 and H.264 video and audio processing are recognized and will be subscribed to as required by law. There are additional patents that will be utilized from Dolby Laboratories and others that will be subscribed to for this apparatus. These patents are critical for interoperability within the industry but are not the basis for the use patent claims of this application.
[003] The patent claim of this application is enabled by massive parallel processing techniques and expanded memory capabilities. The video processing will use unique algorithms that will be covered under separate software copyrights and/or available patents. The parallel processing platform provides an opportunity to write more efficient processing algorithms that are not constrained by previous processor hardware designs. In addition, the combination of unique video processing algorithms with audio processing and software applications described in the Abstract and Claims sections creates an apparatus with integrated capabilities unavailable in an integrated package within any industry today.
[004] This apparatus will provide the users with capabilities to address concerns with security and information assurance. The apparatus provides unique capabilities for automated reference time clock and positioning information with the ability to add additional metadata. The system capabilities yield an apparatus providing rapid access to critical video in multiple modes and data rates. These qualities, in whole and in part, are required in surveillance, news-gathering, security, and commercial/broadcast video applications.
SUMMARY OF THE INVENTION
[005] The apparatus described is capable of processing video, audio and data and placing the information on a serial stream of data for transmission over multiple media links. The video processing is commonly referred to as encoding or compressing. The apparatus may be programmed to process a single video signal with multiple audio and data services or to concurrently process dual video programs in multiple standards and modes. The video standards and modes may be at different processing and output data rates. The video may be supplemented with application software that would allow the following optional features:
a. An optional real time clock that will provide a time code reference to the video.
b. An optional Global Positioning System, GPS, capability that, when enabled with additional application software, may allow the user to have an automated calculation with a camera for the GPS position of the video being processed. The GPS position will be associated with the proper video frames and provided in separate private data packets.
c. An optional Program System Information Protocol capability to provide a minimum of sixteen days of program data information originated from a third party source and embedded in the transport stream.
d. An optional internal capability to generate, store and transmit transport stream Program System Information Protocol packets.
e. An optional Digital Video Broadcasting, DVB, program guide generation capability to be generated via a third party through a predetermined interface.
f. An optional application to provide object and facial recognition information to be generated from video frames being processed. This information can generate key frames on scene changes, objects, or facial changes.
g. An optional capability to write video, audio and multiple data stream information to a storage device on the apparatus.
h. An optional capability to monitor the content of the transmitted stream and its compliance to the European Telecommunications Recommendation 290 for packet timing and data rates.
i. An option to enable the ability to concurrently process multiple modes of video: H.264, MPEG-2, JPEG 2000, Vector Quantization, Fractals, Wavelets or other processes.
j. An option to accept program data from the input of an external transport stream. One or more elementary streams from that input can be added to the current stream being processed. This capability is a re-multiplexing feature.
The current art form allows items a and b, above, to be integrated on a single processing board today. Items c, d and e, above, are generated through a combination of external hardware and application software today. Items g and h are generally not done in today's art form within the encoding environment, especially on a single board. Item i, concurrent multi-mode processing, is also not implemented on a single board in today's art form. Item j is implemented through additional hardware in today's art form. It is the combination of these capabilities which provides the unique art form for this design on the integrated multi-encoder apparatus.
This apparatus shall also process audio. The audio processes supported may be Dolby AC-3, AC-3 5.1, Dolby-E, AAC, PCM, Musicam or other audio processes defined and carried in a compliant fashion in the International Standards Organization/International Electrotechnical Committee 13818-1 system and/or 13818-3 audio specifications. The audio services can be synchronized with the video or they may be separate audio processes that are independent of any processed video packets. The Figures describe a high level step-by-step view of each of the concurrent processes. They provide examples of the processing elements provided to process the information. The actual processing elements and the number of processing elements will vary due to programmer's preferences and the video and audio standards being processed.
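
The frame-accurate synchronization referenced above rests on the common time base of ISO/IEC 13818-1, in which video and audio presentation time stamps (PTS) are expressed in 90 kHz ticks. The helper functions below are an illustrative sketch; the function names and default rates are assumptions, though the 90 kHz tick rate, the 29.97 fps frame period and the 1152-sample Musicam frame are standard values.

```python
# Sketch of audio/video presentation-time alignment per ISO/IEC 13818-1:
# both media share a 90 kHz PTS clock, so a video frame and an audio
# frame are synchronized by comparing their PTS values.

PTS_CLOCK_HZ = 90_000  # 13818-1 PTS/DTS tick rate

def video_pts(frame_index, fps_num=30000, fps_den=1001):
    """PTS of video frame n at 29.97 fps: n * 90000 * 1001 / 30000."""
    return frame_index * PTS_CLOCK_HZ * fps_den // fps_num

def audio_pts(frame_index, samples_per_frame=1152, sample_rate=48_000):
    """PTS of audio frame n (e.g. 1152-sample Musicam frames at 48 kHz)."""
    return frame_index * samples_per_frame * PTS_CLOCK_HZ // sample_rate
```

Consecutive 29.97 fps frames are thus 3003 ticks apart, and consecutive 48 kHz Musicam frames 2160 ticks apart; a multiplexer keeps the streams aligned by emitting whichever elementary stream has the smaller next PTS.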
BRIEF DESCRIPTION OF THE FIGURES
[006] A full and enabling disclosure of the present invention, including the best mode thereof, to one of ordinary skill in the art, is set forth more particularly in the remainder of the specification, including reference to the accompanying Figures, in which:
[007] Figure 1 is a composite Figure of the functions, processing elements, inputs and outputs of a video encoder ("apparatus") in accordance with one embodiment of the present invention;
[008] Figure 2 is a view of the parallel processor built by Coherent Logix of Austin, Texas.
This is a picture of one, ten-by-ten (10X10) processor illustrating its key components;
[009] Figure 3 is a view of the parallel processor's processing elements, data memory and routing capabilities along and its input and output structures;
[0010] Figure 4 is a view of the first video input with its signal routing and signal processing elements;
[0011] Figure 5 is a view of the first section of audio inputs, up to eight, with their signal routing and signal processing elements;
[0012] Figure 6 is a view of the second section of audio inputs, up to four, with their signal routing and signal processing elements;
[0013] Figure 7 is a view of the network time reference and real time clock to derive the
Universal Time Code for video frame references;
[0014] Figure 8 is a view of the optional global positioning system inputs, signal routing and signal processing elements;
[0015] Figure 9 is a view of the optional software application program for object recognition, and its signal routing and signal processing elements;
[0016] Figure 10 is a view of the optional stream analysis capability for a resident software application program. This depicts the signal routing of this information;
[0017] Figure 11 depicts the Digital Video Broadcasting, DVB, system information input, routing and processing elements;
[0018] Figure 12 depicts the Internal Program Specific Information Protocol routing and processing element for an internal software applications program resident on the processing board;
[0019] Figure 13 depicts the External Program Specific Information Protocol routing and processing element for an external input formatted for the processing board;
[0020] Figure 14 depicts the command interface to the processing board;
[0021] Figure 15 depicts the control function for the processor board;
[0022] Figure 16 depicts the input, routing and processing element utilization for embedding conditional access information for network receiver control;
[0023] Figure 17 depicts the second video input, routing and processing element utilization;
[0024] Figure 18 depicts a third set of high data rate audio inputs, up to eight inputs, their routing and processing element utilization;
[0025] Figure 19 depicts the first Asynchronous Serial Output, processing element utilization and output routing;
[0026] Figure 20 depicts the second Asynchronous Serial Output, processing element utilization and output routing;
[0027] Figure 21 depicts the third Asynchronous Serial Output, processing element utilization and output routing;
[0028] Figure 22 depicts the first Ethernet output, its processing elements and routing;
[0029] Figure 23 depicts the second Ethernet output, its processing elements and routing;
[0030] Figure 24 depicts a serial data output for an external monitor for the processed video and audio. This provides a view of the processing elements and routing;
[0031] Figure 25 depicts a high-resolution memory component, routing and associated processing elements;
[0032] Figure 26 depicts an external ASI input, its routing, and associated processing elements;
[0033] Figure 27 depicts the processor's boot function components;
[0034] Figure 28 depicts the RS-232 input for closed captioning, its routing and associated processing elements;
[0035] Figure 29 depicts the composite video input for synchronization to a studio input, its routing and associated processing elements;
[0036] Figure 30 depicts the utilization of processing elements, in this example elements (9, 10) through (9, 19), for electronic image stabilization;
[0037] Figure 31 is a Figure of the 10 X 10 parallel processor and the ability to boot the system for a decoder in accordance with one embodiment of the present invention;
[0038] Figure 32 is a view of the first RF input and routing;
[0039] Figure 33 is a view of the addition of a second RF input channel and its routing;
[0040] Figure 34 is a view of the addition of the first ASI input and its routing;
[0041] Figure 35 is a view of the addition of the second ASI input and its routing;
[0042] Figure 36 is a view of the addition of a first Ethernet input;
[0043] Figure 37 is a view of the addition of a second Ethernet input;
[0044] Figure 38 is a view of the addition of an input for a mobile transmission system;
[0045] Figure 39 is a view of the addition of a signal level reference input for the mobile system;
[0046] Figure 40 depicts the meta-data tagging and retrieval capability through an additional
Ethernet port;
[0047] Figure 41 depicts the ETR 290 analysis application;
[0048] Figure 42 depicts the command channel for the apparatus;
[0049] Figure 43 depicts the control channel for the apparatus;
[0050] Figure 44 depicts the addition of a first ASI output;
[0051] Figure 45 depicts the addition of a second ASI output;
[0052] Figure 46 depicts the addition of a third ASI output;
[0053] Figure 47 depicts the addition of a first Ethernet output;
[0054] Figure 48 depicts the addition of a second Ethernet output;
[0055] Figure 49 depicts the addition of a first digital SMPTE 292M or SMPTE 259M output;
[0056] Figure 50 depicts the addition of a second digital SMPTE 292M or SMPTE 259M output;
[0057] Figure 51 depicts the addition of a first output to support an analog high definition interface;
[0058] Figure 52 depicts the addition of a second output to support an analog high definition interface.
[0059] Figure 53 depicts the ability to provide a first multi-channel audio output capability;
[0060] Figure 54 depicts the ability to provide a second multi-channel audio output capability;
[0061] Figure 55 depicts the ability to provide a Program Specific Information Protocol
"PSIP" output file through an Ethernet interface;
[0062] Figure 56 depicts the first video and associated channel decoding and access to video memory;
[0063] Figure 57 depicts the second video and associated channel decoding and access to video memory; and
[0064] Figure 58 depicts the ability to scale one of the two video decoding applications based on a received RF signal strength.
[0065] Repeat use of reference characters in the present specification and Figures is intended to represent same or analogous features or elements of the invention according to the disclosure. The accompanying Figures, which are incorporated in and constitute a part of this specification, illustrate one or more embodiments of an integrated concurrent multi-standard encoder, decoder and transcoder of the present invention.
[0066] Various combinations and sub-combinations of the disclosed elements, as well as methods of utilizing same, which are discussed in detail below, provide other objects, features and aspects of the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0067] Reference will now be made in detail to presently preferred embodiments of the invention, one or more examples of which are illustrated in the accompanying Figures. Each example is provided by way of explanation, not limitation, of the invention. It is to be understood by one of ordinary skill in the art that the present discussion is a description of exemplary embodiments only, and is not intended as limiting the broader aspects of the present invention, which broader aspects are embodied in the exemplary constructions. In fact, it will be apparent to those skilled in the art that modifications and variations can be made in the present invention without departing from the scope and spirit thereof. For instance, features illustrated or described as part of one embodiment may be used on another embodiment to yield a still further embodiment. Thus, it is intended that the present invention covers such modifications and variations as come within the scope of the appended claims and their equivalents.
[0068] Figure 1 depicts a high level composite picture of the inputs and outputs with the associated signal routing and processing element utilization. The Figure also provides a view of the critical processor communications structure through complementary Low Voltage Differential Signaling (LVDS). The encoder is contained on a single printed circuit board. At the center of the board are two massive parallel processors each containing multiple processing elements. These processors contain 968Kbytes of data memory and 400Kbytes of instruction memory.
[0069] Within each massive parallel processor are the following elements, which perform the following functions:
a.) There are 100 processing elements in a ten-by-ten (10X10) array that are designed for computation intensive communications and image/video processing.
b.) There is an eleven-by-eleven (11X11) array of data memory and routing elements, DMRs. The DMRs provide data memory, control logic, registers and routers for fast routing services. This structure provides the real-time programmable and adaptable communications fabric to support arbitrary network topologies and sophisticated algorithm implementations.
c.) There are twenty-four (24) Input/Output blocks to connect the periphery to DMRs. This structure also supports sustainable on-chip communications to other Hx3100 processors and allows the preservation of a consistent programming model. The input/output structure also enables interfacing to other memory, processor buses, analog-to-digital converters, sensors and displays. There are, in addition, 24 user configurable timers, one associated with each Input/Output element.
[0070] Surrounding the two massive parallel processors are the following input components:
a.) Two programmable input ports for either analog video or digital video;
b.) Three programmable input ports for analog audio or digital audio;
c.) Two programmable Ethernet ports that will support either Internet Protocol version 4 or Internet Protocol version 6;
d.) A RS-232 interface with an analog to digital converter to support closed captioning for the hearing impaired feature;
e.) An analog input for a composite synchronization signal from a studio reference. This is converted to a digital signal through an analog to digital converter;
f.) There are a minimum of three double data rate memory components on the processor board; and
g.) There is one Asynchronous Serial Interface input port capable of handling an external transport stream compliant to ISO/IEC 13818-1.
[0071] On the output side of the processor board are the following components:
a) There are three Asynchronous Serial Interface output ports with the processed signal routed from the input/output ports of the parallel processor
b) There are two Ethernet output ports that will support either IPv4 or IPv6 formats. These ports will be void of command, control and other extraneous information. They are reserved for processed video, audio and data services.
[0072] Figure 2 depicts an internal view of the type of massive parallel processor that sits at the core of this apparatus. This is an illustration of the Hx3100 processor that is commercially available through Coherent Logix of Austin, Texas. The internal structure contains one hundred (100) processing elements, ("PEs"), configured in a ten-by-ten (10X10) physical array. Associated with each processing element are four (4) data memory and routing, ("DMR"), elements. Each processing element can be configured to perform a unique mathematical function on a cycle-by-cycle basis. The DMR and the PE structure allows the processing elements to be efficiently configured for multiple functions and multiple program executions on a concurrent time basis.
[0073] A massive parallel processor of the type described in the previous paragraph provides the capability to utilize the low voltage differential signaling, LVDS, ports extending the processing capabilities across multiple massive parallel processors. The multiple processors can be unified under the control of one instruction set, either on board or through an external electrically erasable programmable read-only-memory component on the same board.
[0074] The Figure also provides a view of the interface ports for the boot information as well as multiple CMOS, DDR and LVDS interface capabilities.
[0075] Figure 3 provides an understanding of the data memory and routing capability and the relationship to the processing elements of the Hx3100 or similar type massive parallel processor. In the following Figures, not all of the DMR elements and address routing for each function are shown. It is important to understand this particular Figure and ability to move across the fabric of the processor in an efficient manner.
[0076] A snapshot view of four (4)of the one hundred (lOO)processing elements, nine (9) data memory and routing elements as well as nine input/output routers of one Coherent Logix Hx3100 component is also illustrated in Figure 3. This depicts the multiple address capabilities allowing data to be routed and stored from as many as eight (8) sources into DMR (1, 1). The efficiency in routing data through a multi-dimensional implementation provides the programmer capabilities to write software algorithms that are more efficient in processing by reducing the number of cycles required, and reducing power consumption.
[0077] Figure 4 depicts an example of how the first programmable video input for the apparatus may be routed. The analog or digital input signal flow is from the input port (Figure 1A) to either the analog to digital converter (Figure IB) or to the DDR memory (Figure 1C). From the analog to digital converter or the DDR memory the signal is directly routed to the input/output router (Figure ID) of the processor. The signal is then routed to the data memory and routing element, DMR, (Figure IE) of the processor. From the DMR, the signal is routed to processing elements (0,0) through (0, 19) plus (1 ,0) through (1 , 19), plus (5,0) through (5,9) and (6,0) through (6,9). [0078] The input port is user configurable, through a web services interface via the Ethernet input port, as an analog composite video input, a SMPTE 259M serial digital standard definition video signal, along with embedded audio and other data services, or a high definition SMPTE 292M serial digital input with embedded audio and data services. These three inputs are standard within the broadcast and commercial markets.
[0079] When an analog composite video input is utilized, the signal must be routed through an analog to digital converter component, located on the board and is identified as Figure IB. These components are readily available such as Analog devices AD 9203. This device is commercially available from, among others, Analog Devices of Norwood, Massachusetts. From the analog to digital converter component the signal is routed to a data memory and routing element in the massive parallel processor.
[0080] When a digital input is utilized it can be routed through a double data rate synchronous dynamic random access component, DDR SDRAM, such as an EOREX EM44CM1688LBA as shown in Figure 1C. The EM44CM1688LBA is commercially available from EOREX is out of Chubei County, Taiwan. Out of the memory input the input signal is then clocked into the massive parallel processor's input/output data port. Next the signal is routed to the associated DMR on the processor. From the DMR a preprogrammed software application instruct the massive parallel processor to complete one or more of the following functions, as required: a. separation into video, audio or data elements by address;
b. routing to the appropriate Hx3100 processing element(s), in this case utilizing processing elements (0,0) through (0,19) plus (1,0) through (1,19), plus (5,0) through (5,9) and (6,0) through (6,9);
c. sampling by the processing element(s);
d. filtering in both spatial and temporal regions;
e. dividing into appropriate block sizes;
f. calculating block values;
g. calculating discrete cosine transforms;
h. performing motion estimation calculations and matching, both through exhaustive searches and/or hierarchical estimations;
i. variable length coding;
j. binary arithmetic coding;
k. Huffman coding; and
l. multiplexing processed video with processed audio and data components into the appropriate output formats. (Dependent upon the video resolutions and data rates used for video coding, fewer or additional processing elements of the massive parallel processor can be used.)
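
As one concrete illustration of item h above, a minimal exhaustive-search block matcher can be written as follows. This is a teaching sketch in pure Python with tiny blocks and a small search window, not the algorithm actually programmed into the processing elements; function names and parameters are assumptions.

```python
# Exhaustive-search block matching: for each candidate displacement in a
# small search window, compute the sum of absolute differences (SAD)
# against the reference frame and keep the minimum.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-size pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def full_search(cur, ref, bx, by, size=4, radius=2):
    """Exhaustively search a +/-radius window for the best motion vector."""
    block = [row[bx:bx + size] for row in cur[by:by + size]]
    best = None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x, y = bx + dx, by + dy
            # Skip candidates that fall outside the reference frame
            if 0 <= x <= len(ref[0]) - size and 0 <= y <= len(ref) - size:
                cand = [row[x:x + size] for row in ref[y:y + size]]
                cost = sad(block, cand)
                if best is None or cost < best[0]:
                    best = (cost, dx, dy)
    return best  # (minimum SAD, dx, dy)
```

A hierarchical estimator, also named in item h, would instead run this search on downsampled frames first and refine the winning vector at full resolution, trading a small accuracy loss for far fewer SAD evaluations.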
[0081] The output formats for the transmitted signal are defined by the International Standards
Organization/International Electrotechnical Committee in the following specifications:
a) ISO/IEC 13818-1 for the system format;
b) 13818-2 for the video format for MPEG-2, and H.264 for advanced video coding;
c) 13818-3 for audio coding of Musicam or formatting for packaging of alternate audio systems such as Dolby or Advanced Audio Coding; and
d) The ISO/IEC 13818-1, -2 and -3 also cover the clock synchronization for frame recovery by the receiving devices.
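
For reference, the ISO/IEC 13818-1 system format named above carries everything in fixed 188-byte transport packets that begin with the sync byte 0x47 and a 13-bit packet identifier (PID) selecting the video, audio or data elementary stream. A minimal sketch of that framing follows; the builder pads with 0xFF stuffing purely for illustration, whereas a compliant multiplexer would use an adaptation field.

```python
# Sketch of ISO/IEC 13818-1 transport packet framing: 188-byte packets,
# sync byte 0x47, 13-bit PID spanning header bytes 1-2, and a 4-bit
# continuity counter in byte 3.

TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def parse_ts_header(packet):
    """Return (pid, payload_unit_start, continuity_counter) of one packet."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid transport stream packet")
    pid = ((packet[1] & 0x1F) << 8) | packet[2]
    payload_unit_start = bool(packet[1] & 0x40)
    continuity_counter = packet[3] & 0x0F
    return pid, payload_unit_start, continuity_counter

def make_ts_packet(pid, counter, payload):
    """Build a minimal payload-only packet, padded with 0xFF for brevity."""
    header = bytes([SYNC_BYTE,
                    0x40 | (pid >> 8) & 0x1F,  # payload_unit_start set
                    pid & 0xFF,
                    0x10 | (counter & 0x0F)])  # payload only, counter
    body = payload[:TS_PACKET_SIZE - 4]
    return header + body + b"\xFF" * (TS_PACKET_SIZE - 4 - len(body))
```
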
[0082] Figure 5 depicts a programmable audio input port represented as Port 2 (Figure 2A).
The input port is user-configurable through a web services interface as an analog audio input or a digital audio input. This port is defined as a collection of ports, up to eight, for systems such as Dolby "E". The input audio signal flows from the input port(s) (Figure 2A) to either the analog to digital converter (Figure 2B) or directly into the input/output port of the parallel processor (Figure 2C) if it is already formatted as a digital signal.
[0083] If the input signal source is analog, it must be converted to a digital signal. A device such as the Analog Devices AD 9203 is used to convert the analog audio signal into a digital audio signal. From the Analog Devices component the signal is routed to the massive parallel processor input/output port. In the event the signal is a digital source, it can be routed directly to the input/output port of the massive parallel processor.
[0084] Within the parallel processor the original or converted digital audio signal is routed to a
DMR on the processor. In this Figure the DMR routes the signal to processing elements (2,0) through (2,3). The processing elements provide the sampling, filtering and coding functions as defined by the audio processing algorithms. (Dependent upon the data rates used for audio coding, fewer or additional processing elements may be utilized.) The processing elements support audio systems including Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E, Pulse coded modulation techniques and Advanced Audio Coding. The processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1, 2 and 3 documents. The massive parallel processor's processing elements will provide the synchronized audio to the proper output ports as defined by the programmer and/or end user.
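
The frame accuracy described above follows from the fixed number of PCM samples per coded frame in each supported audio system: 1152 for Musicam (MPEG Layer II), 1536 for AC-3 and 1024 for AAC. A small illustrative helper (the function names and 48 kHz default are assumptions):

```python
# Illustrative helper for frame-accurate audio clocking: samples per
# coded frame are fixed per audio system, so frame duration follows
# from the sample rate alone.

SAMPLES_PER_FRAME = {
    "musicam": 1152,   # MPEG-1/2 Layer II
    "ac3": 1536,       # Dolby AC-3 (including 5.1)
    "aac": 1024,       # Advanced Audio Coding
}

def frame_duration_ms(codec, sample_rate_hz=48_000):
    """Duration of one coded audio frame in milliseconds."""
    return SAMPLES_PER_FRAME[codec] * 1000.0 / sample_rate_hz

def frames_per_second(codec, sample_rate_hz=48_000):
    """How many coded frames the encoder must emit per second."""
    return sample_rate_hz / SAMPLES_PER_FRAME[codec]
```

At 48 kHz an AC-3 frame therefore spans exactly 32 ms, which is the interval the master clock reference must hold the encoder to.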
[0085] Figure 6 depicts an alternative audio input on Port 3 (Figure 3A) for additional audio services. The signal flow is from an audio source (Figure 3A) to either the analog to digital converter (Figure 3B) or directly to the input/output port of the processor (Figure 3C). From the output of the analog to digital converter (Figure 3B), if utilized, the signal is sent to the input/output router of the processor (Figure 3C). From the input/output router the signal flows to the data memory and router (Figure 3D) and on to processing elements (2,4) and (2,5).
[0086] The input port is user configurable through a web services interface as an analog audio input or a digital audio input. This port is defined as a collection of ports, up to four, depending on the audio sources chosen by the end user. If the input signal source is analog, it must be converted to a digital signal. A device such as the Analog Devices AD 9203 is used to convert the analog audio signal into a digital audio signal. The processing elements support audio systems that will include Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E, Pulse Coded Modulation techniques and Advanced Audio Coding. The processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1, 2 and 3 documents. The parallel processor's processing elements provide the synchronized audio to the proper output ports as defined by the programmer and/or end user, if this audio is program related. This audio may be independent of the video.
[0087] Figure 7 depicts the Network Reference Clock (Figure 4A). The signal flow is from a network provided clock (Figure 4A) to a real time clock (Figure 4B). From the output of the real time clock (Figure 4B) the signal is sent to an input/output port of the processor (Figure 4C). From the input/output port the signal flows to a data memory and routing element (4D) and onto processing element (2,6). The network reference clock synchronizes time with specific networks and coordinates with broadcast clocks. The real time clock on the board maintains synchronization when a network clock is not available. The real time clock produces a reference for the on-board software application within the massive parallel processor to produce a Universal Time Code. This information marks contiguous video frames with metadata, which can then be used for future reference in video asset management systems.
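
The Universal Time Code marking described above can be sketched as stamping each contiguous frame from the reference clock at the nominal frame period. The field names and 29.97 fps default below are illustrative assumptions, not the apparatus's metadata format.

```python
# Hedged sketch of UTC frame tagging: a reference clock (network-derived
# or the on-board real time clock) stamps each consecutive video frame
# so asset-management systems can retrieve it later.

from datetime import datetime, timedelta, timezone

def tag_frames(start_utc, frame_count, fps_num=30000, fps_den=1001):
    """Attach a UTC timestamp to each of frame_count consecutive frames."""
    frame_period = timedelta(seconds=fps_den / fps_num)  # ~33.37 ms
    return [{"frame": n, "utc": start_utc + n * frame_period}
            for n in range(frame_count)]
```
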
[0088] Figure 8 depicts the input from a global positioning sensor (Figure 5A). From the sensor the signal flows to the on-board telemetry (Figure 5B). The GPS and telemetry information pass to the input port of the massive parallel processor (Figure 5C). The processor will host a software application in its memory to process the GPS information with information provided by a camera's target position within the telemetry information. The information passes from the data memory and routing element (Figure 5D) to the processing elements as defined by the software application. In this case an example is provided using processing elements (2,7) and (2,8). The processing elements will calculate the observation point of the camera with the GPS software application. This information is formatted and synchronized with the video in the processing elements (2,7) and (2,8). The information is provided in the formatted output of the transmitted stream for video asset management systems.
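
A hypothetical sketch of the private data packets mentioned above: the byte layout below is an assumption chosen for illustration, since ISO/IEC 13818-1 leaves private data formats to the implementer. It simply binds a GPS fix to the 90 kHz PTS of the video frame it describes.

```python
# Illustrative packing of a GPS observation point into a private data
# payload, keyed by the PTS of the associated video frame. The big-endian
# layout (u64 PTS, three f64 coordinates) is an assumption.

import struct

def gps_private_packet(pts, latitude, longitude, altitude_m):
    """Pack a PTS (90 kHz ticks) plus a GPS fix into a 32-byte payload."""
    return struct.pack(">Qddd", pts, latitude, longitude, altitude_m)

def read_gps_private_packet(payload):
    """Recover the PTS and coordinates from a packed payload."""
    pts, lat, lon, alt = struct.unpack(">Qddd", payload)
    return {"pts": pts, "lat": lat, "lon": lon, "alt_m": alt}
```
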
[0089] GPS systems such as the Raytheon Anti-Jam Receiver are utilized on flight systems today. They interface at a 1394B specification level that is easily supported through the processor's input/output router configured for this digital input format.
[0090] There are multiple telemetry systems currently in use, and they vary based on the flight system platforms. Vendors that provide these GPS systems include, but are not limited to, Rockwell Collins, Raytheon and Boeing.
[0091] Figure 9 depicts the object recognition input (Figure 6A). The input sensor is typically a "smart camera" provided by vendors such as Pittsburgh Pattern or Cogent Systems that provides information formatted and compliant to the ISO/IEC 19794-5 standard. The information is input through the sensor (Figure 6A) to Ethernet input ports, either (Figure 6I) or (Figure 6M).
[0092] Figures 6I and 6M are Ethernet input ports. These ports act independently of each other. They can be configured to either Internet Protocol version 4 (IPv4) or Internet Protocol version 6 (IPv6). Each Ethernet port is independently configurable. For simplicity purposes, the input from Figure 6I is routed to the input/output port of the massive parallel processor (Figure 6J). The input and output element (Figure 6J) routes the signal to the DMR (Figure 6K) and then to the processing elements (2,9) through (2,11) and utilizes the object recognition software application located on Figure 6L.
[0093] The alternative route and object recognition software application storage is through the input from Figure 6M routed to the input/output port of the massive parallel processor (Figure 6N). The input and output element (Figure 6N) routes the signal to the DMR (Figure 6O) and onto processing elements (2,9) through (2,11) and utilizes the object recognition software application located on Figure 6P.
[0094] The user chooses the input port. Both IPv4 and IPv6 capabilities are available.
[0095] In Figure 9 the sensor input is depicted as Figure 6A. The smart camera information is provided through the Ethernet Port (Figure 6I) to the parallel processor's input port (Figure 6J) and routed through the data memory and routing element (Figure 6K). The information may require access to the program located in Figure 6L or be immediately processed by processing elements (2,9) through (2,11).
[0096] Alternative Ethernet Port Selection:
[0097] This information works identically to the original Ethernet Port selection as discussed above.

[0098] The object recognition software programs are provided through companies such as Pittsburgh Pattern Recognition, out of Pittsburgh, Pennsylvania. This information can be sent in user data packets to the end users in the transport stream following ISO/IEC Standard 13818-1. This information can be coordinated with video frame timing information as described in ISO/IEC Standard 13818-2.
[0099] Some of the applications of object recognition are facial recognition, scene change detection, license plate recognition and geo-spatial location recognition.
[00100] Figure 10 depicts the ETR-290 software application. This is a European Telecommunications recommendation for analysis of the MPEG transport stream. This software application would either be written internally or licensed from a third party such as Thomson, Grass Valley Division.
[00101] The program resides on the processing board on non-volatile double data rate memory.
It provides information on each program elementary stream within the transmitted stream, either on the ASI output or the Ethernet output. It measures the bit rate of each elementary stream program, the timing of required information within the 13818-1 Standard and provides the user with information on inconsistencies between transmission specifications and actual transmitted information.
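The per-stream bit rate measurement described above can be sketched as counting 188-byte transport packets per PID over a time window. The packet layout follows ISO/IEC 13818-1; the function names are illustrative:

```python
from collections import Counter

TS_PACKET_BYTES = 188

def pid_bitrates(packets, window_seconds):
    """Measure the bit rate of each elementary stream, keyed by PID, over a
    measurement window, in the spirit of an ETR 290 bit rate check."""
    counts = Counter()
    for pkt in packets:
        assert pkt[0] == 0x47, "transport stream sync lost"
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]   # 13-bit PID spans bytes 1-2
        counts[pid] += 1
    return {pid: n * TS_PACKET_BYTES * 8 / window_seconds
            for pid, n in counts.items()}

def make_packet(pid):
    """Build a minimal sync-aligned packet for the example."""
    return bytes([0x47, (pid >> 8) & 0x1F, pid & 0xFF, 0x10]) + bytes(184)

rates = pid_bitrates([make_packet(0x100)] * 10 + [make_packet(0x101)] * 5, 1.0)
```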
[00102] The program provides a web services interface for the user, also stored in memory on the board (Figure 6L).
[00103] The routing originates in DDR memory (Figure 6L) and monitors one of the three ASI outputs or two Ethernet output ports across the fabric of the massive parallel processor. Specifically, the way we depict this application is to monitor the output streams of the processing elements providing the output streams to Figures 9C, 10C, 11C, 12C and 13C. We can also monitor the external ASI stream in Figure 16C.
[00104] The web services interface provides compliance information for ISO/IEC 13818-1 or
ATSC standard A/65. This information is read through the processing elements (2,12) and (2,13) and the DMR (Figure 6K). The information is then routed to the I/O port of the massive parallel processor (Figure 6J) and routed through the Ethernet Port (Figure 6I) to the web services report pages.
[00105] Figure 11 depicts the DVB Standard system information being inserted into the program streams for non-ATSC PSIP applications. The required tables of the standard for the transport stream are integrated and maintained in either DDR memory or memory on the massive parallel processor. The information flows from the DDR memory (Figure 6L) to the input/output element of the processor (Figure 6J) through the data memory and routing element (Figure 6K) and to a defined processing element on the processor. In this example processing element (2,14) has been chosen.
[00106] The DVB program guide utilizes the DDR memory on the processing board. This program guide information is provided through an external system management computer via the Ethernet Port (Figure 6L). This information is non-real time information and is downloaded to memory on an ad-hoc basis. This process occurs once every one to two weeks. The processing signal flow for the DVB program guide information would be from the DDR memory (Figure 6L) to the input/output router of the processor (Figure 6J) through a DMR (Figure 6K) to the processing element (2,14).
[00107] Alternative Ethernet Port Selection: (Routing can be changed as in previous sections concerning Ethernet Port inputs.)

[00108] Figure 12 depicts the Internal Program System Information Protocol (PSIP) software application. This is a software application that resides in non-volatile memory on the processor board. In the Figure the capacity for this software program resides in the DDR memory (Figure 6P). The data in the PSIP is populated by the end user through a web based interface.
[00109] The signal flows as follows. The program resides in memory, Figure 6P. It is accessed on a continual basis by processing element (2,15). The information flows from the software application stored in DDR memory (Figure 6P) through the I/O router of the massive parallel processor (Figure 6N) to the data memory router (Figure 6O). The information is then placed within the stream by processing element (2,15).
[00110] Figure 13 depicts an external Program System Information Protocol program that resides on hardware external to the processor board. This program enters through either Ethernet Port (Figure 6I) or (Figure 6M). The information is routed through the appropriate I/O elements on the massive parallel processor, either Figure 6J or 6N. From the I/O element the signal is routed to the appropriate data memory and routing element (Figure 6K) or (Figure 6O). The information is then processed by processing element (2,16) and placed onto the output stream in conformance with the A/65 specification.
[00111] Figure 14 depicts command information for the video/audio and data encoder processor board. The control information is stored in memory (Figure 6P) and accessed through web pages stored in non-volatile memory (Figure 6P) on the processor board. The program is accessed through the Ethernet Port (Figure 6M) through the processor's I/O port (Figure 6N) and routed to the data memory and routing element (Figure 6O). The information flows back through the I/O port (Figure 6N) to the DDR memory (Figure 6P). All changes are implemented through the route of the DDR (Figure 6P) through the I/O port (Figure 6N) to the data memory router (Figure 6O) to the processing element (2,17).
The command sets system parameters. These parameters include:
a) Port configuration for each video input port. (Analog composite video, SMPTE 259M standard definition digital video, or SMPTE 292M, high definition digital video);
b) Resolution of the video to be compressed such as 352 X 240/288, 352 X 480/576, 480 X 480/576, 528 X 480/576, 544 X 480/576, 640 X 480/576, 704 X 480/576, 720 X 480/576, 720 X 512/608, 1280 X 720, 1440 X 1080, 1920 X 1080;
c) Frame rates including 24fps, 25fps, 29.97fps, 30fps, 50fps, 59.94fps and 60fps for the appropriate resolutions;
d) Bit rates for video coding;
e) Video standard either MPEG-2 or H.264;
f) Chroma sampling, either 4:2:2 or 4:2:0;
g) Port configuration for each audio port. (Analog audio, PCM audio, Compressed pass-through audio, AES Audio.);
h) Configuration of audio port impedance either 600 ohms or 110 ohms;
i) Configuration of audio processing, Advanced Audio Coding, Musicam, Dolby AC-3, Dolby AC-3 5.1 , Dolby "E" or PCM;
j) Audio data rates per channel or system, such as 384 Kb/s for Dolby AC-3 5.1 up to 640 Kb/s for non-Dolby "E". For Dolby "E", rates of 1.536 Mb/s at 16 bits, 1.920 Mb/s for 20 bits or 2.304 Mb/s for 24 bits sampled;
k) Audio sampling rate: 16 bit, 20 bit or 24 bit for Dolby "E"; 32 kHz, 44.1 kHz or 48 kHz;
l) ASI port configuration, either byte or burst mode;
m) Ethernet input port configuration, either IPv4 or IPv6 for each port individually;
n) Ethernet output port configuration, either IPv4 or IPv6 for each port individually;
o) Closed captioning port standard for cable or terrestrial; and
p) Commands for external receiver port controls, cue tones.
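A minimal sketch of how these user-set command parameters might be validated against the supported values listed above follows; the class name, fields and error messages are hypothetical, not part of the apparatus:

```python
from dataclasses import dataclass

VIDEO_STANDARDS = {"MPEG-2", "H.264"}
CHROMA_SAMPLING = {"4:2:2", "4:2:0"}
FRAME_RATES = {24, 25, 29.97, 30, 50, 59.94, 60}

@dataclass
class EncoderConfig:
    video_standard: str
    chroma: str
    frame_rate: float
    video_bitrate_bps: int

    def validate(self):
        """Return a list of error strings; empty means the command is acceptable."""
        errors = []
        if self.video_standard not in VIDEO_STANDARDS:
            errors.append(f"unsupported video standard: {self.video_standard}")
        if self.chroma not in CHROMA_SAMPLING:
            errors.append(f"unsupported chroma sampling: {self.chroma}")
        if self.frame_rate not in FRAME_RATES:
            errors.append(f"unsupported frame rate: {self.frame_rate}")
        return errors

ok = EncoderConfig("H.264", "4:2:0", 29.97, 4_000_000).validate()
bad = EncoderConfig("MPEG-4", "4:4:4", 23, 4_000_000).validate()
```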
Figure 15 depicts the Control function of the Processor Board. The control parameters and web services interface pages are stored in non-volatile memory in (Figure 6P). The access is requested through the Ethernet Port (Figure 6M). The request is routed through the input/output router of the processor (Figure 6N) to the DDR non-volatile memory (Figure 6P) and/or the Boot section of the processor, or an external EEPROM holding boot information, through the data and memory routing element (Figure 6O). The control function appears on web pages to the end user. The functions addressed in the control include, but are not limited to:
a) Super-user access through password;
b) End-user passwords and level of operation;
c) IP addresses for each of the two input Ethernet Ports and the two output Ethernet Ports;
d) Software downloads for boot function;
e) Software downloads for application software;
f) Web pages access for PSIP input;
g) Web pages access for DVB SI information;
h) Web pages access for ETR-290 analysis;
i) Web pages access for ASI external stream monitoring and grooming, Figure 16C;
j) Web Pages access for high resolution or high data rate storage, Figure 15C; and
k) Web pages access for Object Recognition information and storage, Figure 6A.

Figure 16 depicts the Conditional Access "CA" information from an external subscriber management system. There are two separate pieces of conditional access information.
1. The first piece of conditional access information is related to the program(s) being processed on the processor board by the massive parallel processor. This information indicates whether or not the program utilizes conditional access. This information is provided and stored for each program on the massive parallel processor.
2. The second piece of the conditional access information also originates in an external subscriber management computer and is transmitted to the reception devices in the network. This tells the reception device if it has access to the program. If there is conditional access information being transmitted to the receivers in the network it may be entered through the Ethernet port (Figure 6M). The routing of this information is from the external computer through the Ethernet Port (Figure 6M) to the I/O router of the massive parallel processor (Figure 6N) and onto the data memory and routing element (Figure 6O). From the DMR the information is routed to a processing element, in this example (2,19). The processing element will route the CA information to the proper program DMR and place it on the transport stream for transmission. The conditional access information for the network reception devices is not stored on the processor board.
[00115] Note: This "CA" information may also be entered downstream of this apparatus as an alternative in external multiplexers. This is a common practice in many networks.
[00116] The "CA" information is in the "CA" section of the 13818-1 transport stream that is set for each program elementary stream for video, audio and data service.
[00117] Figure 17 depicts an example of how the second programmable video input for the apparatus may be routed. The analog or digital input signal flow is from the input port (Figure 7A) to either the analog to digital converter (Figure 7B) or to the DDR memory (Figure 7C). From the analog to digital converter or the DDR memory the signal is directly routed to the input/output router (Figure 7D) of the processor. The signal is then routed to the data memory and routing element, DMR, (Figure 7E) of the processor. From the DMR, the signal is routed to processing elements (3,0) through (3, 19) plus (4,0) through (4, 19), plus (7,0) through (7,9) and (8,0) through (8,9).
[00118] The input port is user configurable, through a web services interface via the Ethernet input port, as an analog composite video input, a SMPTE 259M serial digital standard definition video signal, along with embedded audio and other data services, or a high definition SMPTE 292M serial digital input with embedded audio and data services. These three inputs are standard within the broadcast and commercial markets.
[00119] When an analog composite video input is utilized, the signal must be routed through an analog to digital converter component located on the board and identified as Figure 1B. These components are readily available, such as the Analog Devices AD9203. This device is commercially available from, among others, Analog Devices of Norwood, Massachusetts. From the analog to digital converter component the signal is routed to a data memory and routing element in the massive parallel processor.
When a digital input is utilized it can be routed through a double data rate synchronous dynamic random access memory component, DDR SDRAM, such as an EOREX EM44CM1688LBA as shown in Figure 1C. The EM44CM1688LBA is commercially available from EOREX of Chubei County, Taiwan. From the memory the input signal is then clocked into the massive parallel processor's input/output data port. Next the signal is routed to the associated DMR on the processor. From the DMR a preprogrammed software application instructs the massive parallel processor to complete one or more of the following functions, as required: a. separation into video, audio or data elements by address;
b. routing to the appropriate massive parallel processing element(s), in this case we show utilizing processing elements (3,0) through (3, 19) plus (4,0) through (4, 19) plus (7,0) through (7,9) and (8,0) through (8,9);
c. sampling by the processing element(s);
d. filtering in both spatial and temporal regions;
e. dividing into appropriate block sizes;
f. calculating block values;
g. calculating discrete cosine transforms;
h. performing motion estimation calculations and matching, both through exhaustive searches and/or hierarchical estimations;
i. variable length coding;
j. binary arithmetic coding;
k. Huffman coding; and
l. multiplexing processed video with processed audio and data components into the appropriate output formats. (Dependent upon the video resolutions and data rates used for video coding, additional or fewer processing elements of the massive parallel processor can be used.)
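Item h, the exhaustive motion estimation search, can be illustrated with a small sum-of-absolute-differences sketch. This is illustrative only; a real encoder works on 16x16 macroblocks over much larger search windows, and the frame data here is fabricated for the example:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def exhaustive_search(ref, cur_block, top, left, radius):
    """Try every displacement within +/-radius and keep the lowest-SAD match."""
    n = len(cur_block)
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + n > len(ref) or x + n > len(ref[0]):
                continue  # candidate block falls outside the reference frame
            candidate = [row[x:x + n] for row in ref[y:y + n]]
            cost = sad(candidate, cur_block)
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best, best_cost

# Reference frame with a bright 2x2 patch at row 3, column 4; the current
# block matches that patch, so the best vector is a (1, 2) displacement.
ref = [[0] * 8 for _ in range(8)]
ref[3][4] = ref[3][5] = ref[4][4] = ref[4][5] = 200
mv, cost = exhaustive_search(ref, [[200, 200], [200, 200]], 2, 2, 3)
```

A hierarchical estimation would run the same search on down-sampled frames first and refine, trading accuracy for far fewer comparisons.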
[00121] The output formats for the transmitted signal are defined by the International Organization for Standardization/International Electrotechnical Commission in the following specifications:
-ISO/IEC 13818-1 for the system format,
-13818-2 for the video format for MPEG-2, H.264 for advanced video coding,
-13818-3 for audio coding of Musicam or formatting for packaging of alternate audio systems such as Dolby or Advanced Audio Coding.
[00122] The ISO/IEC 13818-1 , -2 and -3 also cover the clock synchronization for frame recovery by the receiving devices.
[00123] Figure 18 depicts a programmable audio input port represented as input port 8 (Figure
8A). The input port is user-configurable through a web services interface as an analog audio input or a digital audio input. This port is defined as a collection of ports, up to eight, for systems such as Dolby "E" . The input audio signal flows from the input port(s) (Figure 8A) to either the analog to digital converter (Figure 8B) or directly into the input/output port of the parallel processor (Figure 8D) if it is already formatted as a digital signal.
[00124] If the input signal source is analog, it must be converted to a digital signal. A device such as the Analog Devices AD9203 is used to convert the analog audio signal into a digital audio signal. From the Analog Devices component the signal is routed to the massive parallel processor input/output port.
[00125] In the event the signal is a digital source, it can be routed directly to the input/output port of the massive parallel processor.
[00126] Within the parallel processor the original or converted digital audio signal is routed to a
DMR on the processor. In this Figure the DMR routes the signal to processing elements (5, 10) through (5, 13). The processing elements provide the sampling, filtering and coding functions as defined by the audio processing algorithms. (Dependent upon the data rates used for audio coding, fewer or additional processing elements may be utilized.)
[00127] The processing elements support audio systems including Musicam, Dolby AC-3,
Dolby AC-3 5.1 , Dolby E, Pulse coded modulation techniques and Advanced Audio Coding.
[00128] The processing elements utilize a master clock as a reference for frame accuracy as defined in the ISO/IEC 13818-1 , 2 and 3 documents.
[00129] The massive parallel processor's processing elements will provide the synchronized audio to the proper output ports as defined by the programmer and/or end user.
[00130] Figure 19 depicts the first of three Asynchronous Serial Interface "ASI" output ports.
These ports are identical in function for Figures 9C, 10C and 11C. Each ASI is formatted per ISO/IEC specification 13818-1 and ITU-T H.264 specifications. In Figure 19 the processing element (5,14) provides a multiplex of video, audio and data services in compliance with the previous 13818-1 and H.264 formatted specifications. This output port is void of extraneous information found on the input Ethernet ports. The signal is routed from a processing element (5,14) to the data memory and router (Figure 9A) onto the parallel processor input/output (Figure 9B) and then to the ASI output port (Figure 9C).

[00131] Figure 20 depicts the second of three Asynchronous Serial Interface "ASI" output ports. The port is identical in function to Figure 9C in Figure 19. Each ASI is formatted per ISO/IEC specification 13818-1 and ITU-T H.264 specifications. In Figure 20 the processing element (5,15) provides a multiplex of video, audio and data services in compliance with the previous 13818-1 and H.264 formatted specifications. This output port is void of extraneous information found on the input Ethernet ports. The signal is routed from a processing element (5,15) to the data memory and router (Figure 10A) onto the parallel processor input/output (Figure 10B) and then to the ASI output port (Figure 10C).
[00132] Figure 21 depicts the third of three Asynchronous Serial Interface "ASI" output ports.
The port is identical in function to Figures 9C and 10C in Figures 19 and 20. Each ASI is formatted per ISO/IEC specification 13818-1 and ITU-T H.264 specifications. In Figure 21 the processing element (5,16) provides a multiplex of video, audio and data services in compliance with the previous 13818-1 and H.264 formatted specifications. This output port is void of extraneous information found on the input Ethernet ports. The signal is routed from a processing element (5,16) to the data memory and router (Figure 11A) onto the parallel processor input/output (Figure 11B) and then to the ASI output port (Figure 11C).
[00133] Figure 22 depicts the first of two Ethernet outputs. The Ethernet output ports can be user configured to be formatted as either IPv4 or IPv6. The ports are configured to carry specific video, audio and data services information formatted as MPEG over IP. The video, audio, and data services that make up the elementary program streams are multiplexed onto the Ethernet port through processing elements (5,17) and (5,18).

[00134] The information from processing elements (5,17) and (5,18) passes to the data memory and routing element (Figure 12A) then onto the parallel processor input/output port element (Figure 12B) and then is routed to the output Ethernet Port (Figure 12C).
[00135] Figure 23 depicts the second of two Ethernet outputs. The Ethernet output ports can be user configured to be either formatted as IPv4 or IPv6. The ports are configured to carry specific video, audio and data services information formatted as MPEG over IP. In this example the video, audio, and data services that make up the elementary program streams are multiplexed onto the Ethernet port through processing elements (5, 19) and (6, 19).
[00136] The information from processing elements (5,19) and (6,19) passes to the data memory and routing element (Figure 13A) then onto the parallel processor input/output port element (Figure 13B) and then is routed to the output Ethernet Port (Figure 13C).
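MPEG over IP commonly groups seven 188-byte transport packets into each UDP payload (7 x 188 = 1316 bytes, which fits a standard Ethernet MTU). A sketch of that grouping, with illustrative names and a fabricated stream, follows:

```python
TS_PACKET_BYTES = 188
TS_PER_DATAGRAM = 7  # 7 x 188 = 1316 bytes, which fits a standard Ethernet MTU

def packetize(ts_stream: bytes):
    """Group a transport stream into UDP-sized payloads for MPEG over IP."""
    assert len(ts_stream) % TS_PACKET_BYTES == 0, "stream not packet aligned"
    chunk = TS_PACKET_BYTES * TS_PER_DATAGRAM
    return [ts_stream[i:i + chunk] for i in range(0, len(ts_stream), chunk)]

stream = bytes([0x47] + [0] * 187) * 14     # fourteen sync-aligned packets
datagrams = packetize(stream)
```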
[00137] Figure 24 depicts a video and audio decoder. This is designed as a confidence decoder that provides decompression of video and audio prior to transmission. The decoder is designed to take the compressed video and audio from one set of processing elements and route the signal to a second set of processing elements. Depending upon the data rate and resolution of the video and audio streams additional or fewer processing elements can be assigned as required.
[00138] The decoder routes the signal from an assigned series of processing elements to the data memory and routing element (Figure 14A). From the DMR the signal is then routed to the parallel processor input/output router (Figure 14B) and out to a serial decoder output (Figure 14C). The serial decoder output (Figure 14C) is formatted by processing elements to either the SMPTE 259M or SMPTE 292M specification for viewing on a monitor.

[00139] Figure 25 depicts a high bit rate storage capability on the processing board. The function supports a feature that allows a capture of high bit rate video/audio that may be accessed at a later time. The processing board provides the ability to concurrently dual process video at two resolutions and at two different data rates. The user has the capability to store either of the video/audio streams on the high data rate storage.
[00140] In this application the compressed signal is routed from a set of selected processing elements to processing elements (8,12) and (8,13). From these processing elements the information is routed to a data memory and router (Figure 15A) to the parallel processor input/output router (Figure 15B) and then routed to the storage device (Figure 15C).
[00141] Figure 26 depicts the monitoring and/or processing of an external ASI transport stream compliant to ISO/IEC 13818-1 standards. The external input (Figure 16C) is routed through the parallel processor input/output router (Figure 16B) and then routed to the data memory and routing device (Figure 16A). The stream is then routed to a predefined set of processing elements.
[00142] The external ASI stream is monitored through the ETR 290 web interface. (See Figure
10 and the accompanying description). The external stream may be:
a) Monitored for compliance purposes only;
b) Edited to drop services not required and pass the balance of the stream on for internal utilization; and
c) Prepared by an editing process to pass desired elementary stream programs to be added to the ASI and/or Ethernet output ports of the processor board.
[00143] These functions are presented as options in the web services pages under the ETR 290 software applications program.

[00144] Figure 27 depicts the instruction and boot function of the parallel processing component. This Figure has three components: (1) the Electrically Erasable Programmable Read-Only Memory, EEPROM (Figure 17A); (2) the Boot Control SPI (Figure 17B); and (3) the Serial Bus Controller (Figure 17C). The EEPROM is a separate component on the processor board and it works in conjunction with the Boot Control and Serial Bus Controller.
[00145] Figure 28 depicts the RS-232 closed captioning input. The source is an external component with its input to an RS-232 connector (Figure 18A). This is an analog signal and must be converted to digital through a converter (Figure 18B). The signal is then routed to a data memory and routing element (Figure 18C). The signal is then routed to the data memory and router element (Figure 18D) and then routed to a predefined processing element. The processed signal is timed with the video frames and embedded within the transport stream.
[00146] There are multiple specifications for closed captioning applications. The most commonly utilized specifications are EIA-708 for ATSC and SCTE 20 for cable.
[00147] Figure 29 depicts the analog composite synchronization signal. This signal is used for frame synchronization in studio applications. The signal is applied to the input port (Figure 19A). The signal is then routed to an analog to digital converter (Figure 19B). From the analog to digital converter the signal is routed to the parallel processor input/output element (Figure 19C) and then to the data memory and router element (Figure 19D). From the data memory and router the signal is routed to a predefined processing element.
[00148] Figure 30 depicts the utilization of processing elements, in this example elements (9,10) through (9,19), for electronic image stabilization. The video will be monitored within processing elements, in this example (0,0) through (0,19) and (1,0) through (1,19), for the first video input (Figure 1A) for horizontal and vertical movement. If the movement exceeds pre-defined parameters, the video will be pre-processed, in this example in processing elements (9,10) through (9,19), to provide electronic stabilization prior to applying a compression algorithm. This processing can apply to two video compression processes or standards if the images being processed are from one image source for Figures 1A and 7A.
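The movement detection and compensation described above can be sketched as a whole-frame displacement search followed by a corrective shift. The tiny frames, search radius and threshold here are illustrative assumptions, not the apparatus's pre-defined parameters:

```python
def global_motion(prev, cur, radius=2):
    """Estimate whole-frame horizontal and vertical movement by trying every
    shift within +/-radius and keeping the one with the lowest difference."""
    h, w = len(prev), len(prev[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cost = sum(abs(prev[y][x] - cur[y + dy][x + dx])
                       for y in range(radius, h - radius)
                       for x in range(radius, w - radius))
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best

def stabilize(cur, motion, threshold=1):
    """Shift the frame back only when movement exceeds the threshold."""
    dy, dx = motion
    if abs(dy) <= threshold and abs(dx) <= threshold:
        return cur  # within tolerance: pass the frame through untouched
    h, w = len(cur), len(cur[0])
    return [[cur[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
             for x in range(w)] for y in range(h)]

prev = [[0] * 6 for _ in range(6)]
prev[2][2] = 255                    # bright feature in the previous frame
cur = [[0] * 6 for _ in range(6)]
cur[4][4] = 255                     # same feature after a (+2, +2) drift
mv = global_motion(prev, cur)
stab = stabilize(cur, mv)
```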
[00149] In yet another embodiment as shown in Figures 31-58, an integrated concurrent multi-standard video/audio decoder and software applications processor is shown and described. The following descriptions provide examples of applications being assigned to specific processing elements within a massive parallel processor. The parallel processor design allows dynamic, cycle-by-cycle, real time programming. In the actual implementation the processing elements may be shared among multiple mathematical processes and functional applications.
[00150] Figure 31 depicts the processing board with the parallel processor. The processor has multiple parallel processing elements. This Figure addresses the ability to start the processor through a boot process. This process can originate from either stored code on the processor or through external devices such as an electrically erasable programmable read-only memory (EEPROM) as this Figure depicts. It is also capable of receiving the instructions from other devices such as RISC controllers. In this depiction the EEPROM is shown (figure 31A) providing the boot information through the Boot Controller SPI interface internal to the processor (figure 31B) to the actual serial bus controller (figure 31C).
[00151] Figure 32 depicts the first RF input information process. The RF input (figure 32A) to the board is routed to a demodulator (figure 32B) which removes the signal from a carrier wave. The digital output from the demodulator is routed to the I/O router (figure 32C) of the parallel processor. The I/O router then provides the data to the data memory and routing device, DMR (figure 32D). The DMR then routes the information to the processing element, in this example processing element (0,0). The processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
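The packet parsing step performed by the processing element can be illustrated by decoding the 4-byte transport packet header defined in ISO/IEC 13818-1 (sync byte, payload unit start indicator, PID and continuity counter); the function name and the example packet bytes are illustrative:

```python
def parse_ts_header(pkt: bytes):
    """Decode the 4-byte MPEG-2 transport packet header (ISO/IEC 13818-1)."""
    if len(pkt) != 188 or pkt[0] != 0x47:
        raise ValueError("not a sync-aligned 188-byte transport packet")
    return {
        "payload_unit_start": bool(pkt[1] & 0x40),        # PUSI flag
        "pid": ((pkt[1] & 0x1F) << 8) | pkt[2],           # 13-bit packet ID
        "continuity_counter": pkt[3] & 0x0F,
    }

pkt = bytes([0x47, 0x41, 0x00, 0x17]) + bytes(184)
hdr = parse_ts_header(pkt)
```

Routing "by address" then amounts to dispatching each packet to a destination selected by its PID.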
[00152] Figure 33 depicts the second RF input information process. The RF input (figure 33A) to the board is routed to a demodulator (figure 33B) which removes the signal from a carrier wave. The digital output from the demodulator is routed to the I/O router (figure 33C) of the parallel processor. The I/O router then provides the data to the data memory and routing device, DMR (figure 33D). The DMR then routes the information to the processing element, in this example processing element (0, 1). The processing element parses the packets and routes them to the appropriate processing elements through the DMR structure for further processing.
[00153] Figure 34 depicts the first Asynchronous Serial Input, ASI (figure 34A). This input carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets. The information packets carry multiple standards for video and audio in addition to other various data packets. The information is routed from the ASI input to the I/O router (figure 34B) and then to the data memory and router (figure 34C). In this example the information is routed to the processing element (0,2). The processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
[00154] Figure 35 depicts the second Asynchronous Serial Input, ASI (figure 35A). This input carries information on a DVB A010 electrically compliant stream formatted to the MPEG 13818-1 system format for information packets. The information packets carry multiple standards for video and audio in addition to other various data packets. The information is routed from the ASI input to the I/O router (figure 35B) and then to the data memory and router (figure 35C). In this example the information is routed to processing element (0,3). The processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
[00155] Figure 36 depicts the first Ethernet input (figure 36A). This input carries information on either an Internet Protocol version 4 Standard or Internet Protocol version 6 Standard. The information packets carry multiple standards for video and audio in addition to other various data packets. The information is routed through the I/O router (figure 36B) and then to the data memory and router (figure 36C). In this example the information is routed to the processing elements (0,4) and (0,5). The processing elements parse the packets and route them to the appropriate processing elements via the DMR structure for further processing.
[00156] Figure 37 depicts the second Ethernet input (figure 37A). This input carries information on either an Internet Protocol version 4 Standard or Internet Protocol version 6 Standard. The information packets carry multiple standards for video and audio in addition to other various data packets. The information is routed to the I/O router (figure 37B) and then to the data memory and router (figure 37C). In this example the information is routed to the processing elements (0,6) and (0,7). The processing element parses the packets and routes them to the appropriate processing elements via the DMR structure for further processing.
[00157] Figure 38 depicts the mobile communications RF input port (figure 38A). This input is demodulated (figure 38B), providing information packets to the input/output router (figure 38C). The information packets may carry multiple video and audio standards in addition to various other data packets. The information is routed from the input/output router (figure 38C) to the data memory and router (figure 38D). In this example the information is routed to processing elements (0,8) and (0,9), which parse the packets and route them via the DMR structure to the appropriate processing elements for further processing.
[00158] Figure 39 depicts the mobile communications RF reference level input (figure 39A).
This input is an analog voltage applied to an analog-to-digital converter (figure 39B). The converter provides a digitally sampled reference level to the I/O router (figure 39C), which in turn provides the signal to the data memory and router element (figure 39D). In this example the information is routed via the DMR to processing element (1,0), which provides information to the decoder processing element section for scaling the video dependent upon the signal strength at any given moment.
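The level-to-scaling step described here can be sketched as a pair of mappings: raw ADC code to signal strength, then signal strength to a scale factor for the decoder. The ADC range, dBm span, thresholds and scale factors below are all hypothetical examples; the specification does not give numeric values.

```python
# Illustrative sketch of how a sampled RF reference level might select a
# video scaling factor, as described for processing element (1,0).
# All numeric ranges and thresholds are hypothetical, not from the patent.

def adc_sample_to_dbm(sample: int, bits: int = 10,
                      dbm_min: float = -90.0, dbm_max: float = -20.0) -> float:
    """Map a raw ADC code linearly onto an assumed dBm range."""
    full_scale = (1 << bits) - 1
    return dbm_min + (dbm_max - dbm_min) * sample / full_scale

def select_scale(level_dbm: float) -> float:
    """Pick a spatial scale factor from signal strength (hypothetical bands)."""
    if level_dbm >= -50.0:
        return 1.0    # full resolution
    if level_dbm >= -70.0:
        return 0.5    # half resolution
    return 0.25       # quarter resolution for a weak signal

strong = adc_sample_to_dbm(900)
weak = adc_sample_to_dbm(100)
print(select_scale(strong), select_scale(weak))   # → 1.0 0.25
```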
[00159] Figure 40 depicts the meta-data tagging input and output application of the apparatus.
The primary function is to allow the input of data to specific frames of video for future reference. The decoder may supply the decoded universal time code and geo-positioning system data if present in the stream provided to the processing elements. This information can be enhanced with meta-data if required. In addition, this application provides an interface for the object recognition application software, which can provide scene change detection, facial recognition and other pre-defined object recognition. The information is routed from the application (figure 40A) to the Ethernet input (figure 40E) and on to the I/O router (figure 40G). From the input/output router the information can be routed either to the DDR memory (figure 40F) for reference information, or to the data memory and router element (figure 40H). In this example the information is provided to processing element (1,1) for processing and additional routing.

[00160] Figure 41 depicts the ETR 290 software application, which allows the user to monitor the content and bandwidth of the program elements within either the input or output streams of the parallel processor. This program appears as a web-service-based application. The application is started in (figure 40B). In this example the information is routed from processing element (1,2) to the data memory and router element (figure 40H), through the input/output port (figure 40G) and back through the Ethernet port (figure 40E) to the end user.
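Two of the first-priority measurements an ETR 290 monitoring application runs on a transport stream can be sketched as follows: transport sync loss and continuity-counter errors. This is a small illustrative subset of the ETR 290 check set, and the sample packets are fabricated.

```python
# Illustrative sketch of two ETR 290 first-priority checks: TS sync-byte
# errors and continuity-counter discontinuities. Real ETR 290 analysis
# covers many more measurements across three priority levels.

SYNC_BYTE, TS_PACKET = 0x47, 188

def check_stream(ts: bytes) -> dict:
    """Count sync-byte errors and per-PID continuity-counter errors."""
    sync_errors, cc_errors, last_cc = 0, 0, {}
    for off in range(0, len(ts) - TS_PACKET + 1, TS_PACKET):
        pkt = ts[off:off + TS_PACKET]
        if pkt[0] != SYNC_BYTE:
            sync_errors += 1
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        cc = pkt[3] & 0x0F
        if pid in last_cc and cc != (last_cc[pid] + 1) % 16:
            cc_errors += 1
        last_cc[pid] = cc
    return {"sync_errors": sync_errors, "cc_errors": cc_errors}

# Two good packets on PID 0x100 (cc 0 then 1), then one with a bad sync byte.
good0 = bytes([0x47, 0x01, 0x00, 0x00]) + bytes(184)
good1 = bytes([0x47, 0x01, 0x00, 0x01]) + bytes(184)
bad = bytes([0x00]) + bytes(187)
print(check_stream(good0 + good1 + bad))   # → {'sync_errors': 1, 'cc_errors': 0}
```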
[00161] Figure 42 depicts the command channel information set-up. The command channel allows the configuration of the decoder. This function is a web-based interface starting with the information request in (figure 40C). The information is exchanged over the Ethernet interface (figure 40E) to the input/output interface (figure 40G). In this example the input/output interface information is routed to processing element (1,3). The information is processed and controls such functions as:
a) Input port control;
b) Output port control, IPv4 or IPv6;
c) Output port content, program elementary streams;
d) ETR 290 control;
e) Audio output port set-up;
f) Frequency;
g) Symbol rates;
h) Coding standard for convolutional coding;
i) Video program selection from input streams;
j) Audio program selection from input streams;
k) Video decode program selection; and
l) Audio decode program selection.
[00162] Figure 43 depicts the command channel information set-up. The command channel allows or denies user access to the apparatus and to separate operational levels of the apparatus. This function is a web-based interface starting with the application request in (figure 40D). The information is exchanged over the Ethernet interface (figure 40E) to the input/output interface (figure 40G). In this example the input/output interface information is routed to processing element (1,4). The information is processed and controls such functions as:
a) User access;
b) User capability of decoder configuration;
c) User access for new code downloads; and
d) Additional information.
[00163] Figure 44 depicts the first Asynchronous Serial Interface (ASI) output (figure 41A). This output carries video and audio in a compressed format on a DVB A010 electrically compliant stream formatted as ISO/IEC 13818-1 (MPEG-2) system-format information packets. The information packets may carry multiple video and audio standards in addition to various other data packets. In this example the information is routed from processing element (1,5) to the ASI data memory router element (figure 41C). From the DMR the information is routed to the input/output router (figure 41B) and then to the ASI output port (figure 41A).
[00164] Figure 45 depicts the second Asynchronous Serial Interface (ASI) output (figure 42A). This output carries information on a DVB A010 electrically compliant stream formatted as ISO/IEC 13818-1 (MPEG-2) system-format information packets. (The video and audio are in a compressed format.) The information packets may carry multiple video and audio standards plus other related and non-related data packets. In this example the information is routed from processing element (1,6) to the ASI data memory router element (figure 42C). From the DMR the information is routed to the input/output router (figure 42B) and then to the ASI output port (figure 42A).
[00165] Figure 46 depicts the third Asynchronous Serial Interface (ASI) output (figure 43A). This output carries information on a DVB A010 electrically compliant stream formatted as ISO/IEC 13818-1 (MPEG-2) system-format information packets. (The video and audio are in a compressed format.) The information packets may carry multiple video and audio standards plus other related and non-related data packets. In this example the information is routed from processing element (1,7) to the ASI data memory router element (figure 43C). From the DMR the information is routed to the input/output router (figure 43B) and then to the ASI output port (figure 43A). The additional ASI output allows output to a dedicated storage capability, while the redundant ASI outputs of Figures 44 and 45 provide system-level fail-safe capability.
[00166] Figure 47 depicts the first Ethernet output (figure 44A). This output port carries information on Internet Protocol version 4 or 6; the user may select the format. The information may be formatted in MPEG-over-IP configurations. The information packets may carry multiple video and audio standards in addition to various other data packets. In this example the information is routed from processing elements (1,8) and (1,9) to the data memory router element (figure 44C). From the DMR the information is routed to the I/O router (figure 44B) and then to the Ethernet output port (figure 44A).
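The MPEG-over-IP framing mentioned here can be sketched as grouping transport-stream packets into UDP payloads. Seven 188-byte packets per datagram (1316 bytes) is a common convention because it fits a standard 1500-byte Ethernet MTU; the grouping size is an assumption of this sketch, not a requirement stated in the specification.

```python
# Illustrative sketch of MPEG-over-IP output framing: transport-stream
# packets are commonly grouped seven per UDP datagram (7 x 188 = 1316 bytes).

TS_PACKET = 188
PACKETS_PER_DATAGRAM = 7

def frame_datagrams(ts_stream: bytes):
    """Yield UDP payloads of up to seven whole TS packets each."""
    step = TS_PACKET * PACKETS_PER_DATAGRAM
    for offset in range(0, len(ts_stream), step):
        yield ts_stream[offset:offset + step]

stream = bytes([0x47] + [0] * 187) * 10   # ten dummy TS packets
payloads = list(frame_datagrams(stream))
print([len(p) for p in payloads])          # → [1316, 564]
```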
[00167] Figure 48 depicts the second Ethernet output (figure 45A). This output carries information on Internet Protocol version 4 or 6; the user may select the format. The information may be formatted in MPEG-over-IP configurations. The information packets may carry multiple video and audio standards in addition to various other data packets. In this example the information is routed from processing elements (2,0) and (2,1) to the data memory router element (figure 45C). From the DMR the information is routed to the I/O router (figure 45B) and then to the Ethernet output port (figure 45A).
[00168] Figure 49 depicts the first of two SMPTE 292M outputs. This output carries decoded, non-compressed video, non-compressed audio and data information. In this example the information is formatted in processing elements (2,2), (2,3) and (2,4). After formatting, the information is routed to the data memory routing element (figure 46C). From the DMR the information is passed to the input/output router (figure 46B) and then supplied to the SMPTE 292M output element (figure 46A). This non-compressed video, audio and data can also be routed to other digital interfaces such as HDMI.
[00169] Figure 50 depicts the second of two SMPTE 292M outputs. This output carries decoded, non-compressed video, non-compressed audio and data information. In this example the information is formatted in processing elements (2,5), (2,6) and (2,7). After formatting, the information is routed to the data memory routing element (figure 47C). From the DMR the information is passed to the I/O router (figure 47B) and then supplied to the SMPTE 292M output element (figure 47A). This non-compressed video, audio and data can also be routed to other digital interfaces such as HDMI.
[00170] Figure 51 depicts the first of two parallel processor outputs formatted for digital-to-analog conversion for display purposes. In this example the Y, Pb, Pr format is utilized; the output could also be R, G, B, H and V or another format. The formatting is performed in processing elements (2,8), (2,9) and (3,0). This information is passed to the data memory routing element (figure 48D), then to the input/output router (figure 48C) and then to the digital-to-analog converter (figure 48B). The analog outputs are then provided to the component elements (figure 48A) for display processing.
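The Y, Pb, Pr formatting step can be sketched as a color-matrix conversion from decoded R, G, B samples. The BT.709 luma coefficients used below are an assumption for illustration; the specification does not name a colorimetry standard, and BT.601 coefficients could equally apply depending on the source material.

```python
# Illustrative sketch of formatting decoded R, G, B samples into the
# Y, Pb, Pr component form used on the analog display outputs.
# BT.709 luma coefficients are assumed here (not stated in the patent).

KR, KG, KB = 0.2126, 0.7152, 0.0722   # BT.709 luma weights

def rgb_to_ypbpr(r: float, g: float, b: float):
    """Convert normalized [0,1] R,G,B to Y in [0,1] and Pb,Pr in [-0.5,0.5]."""
    y = KR * r + KG * g + KB * b
    pb = 0.5 * (b - y) / (1.0 - KB)   # scale blue difference to +/-0.5
    pr = 0.5 * (r - y) / (1.0 - KR)   # scale red difference to +/-0.5
    return y, pb, pr

print(rgb_to_ypbpr(1.0, 1.0, 1.0))   # white: Y ~ 1.0, Pb ~ 0, Pr ~ 0
print(rgb_to_ypbpr(0.0, 0.0, 0.0))   # black: all zero
```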
[00171] Figure 52 depicts the second of two parallel processor outputs formatted for digital-to-analog conversion for display purposes. In this example the Y, Pb, Pr format is utilized; the output could also be R, G, B, H and V or another format. The formatting is performed in processing elements (3,1), (3,2) and (3,3). This information is passed to the data memory routing element (figure 49D), then to the input/output router (figure 49C) and then to the digital-to-analog converter (figure 49B). The analog outputs are then provided to the component elements (figure 49A) for display processing.
[00172] Figure 53 depicts the first of two output processed audio routes. The audio can be one of multiple standards, including but not limited to Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E and Advanced Audio Coding. The number of channels can be up to eight per output port configuration. In this example processing elements (3,4) and (3,5) format the information for the data memory router (figure 50D). From the DMR the information is routed to the input/output router (figure 50C), which provides it to a digital-to-analog converter (figure 50B). The information is then provided to the audio output channel element (figure 50A). The information may bypass the digital-to-analog converter and be routed directly to the output element (figure 50A) if an external digital audio decoder is utilized.

[00173] Figure 54 depicts the second of two output processed audio routes. The audio can be one of multiple standards, including but not limited to Musicam, Dolby AC-3, Dolby AC-3 5.1, Dolby E and Advanced Audio Coding. The number of channels can be up to eight per output port configuration. In this example processing elements (3,6) and (3,7) format the information for the data memory router (figure 51D). From the DMR the information is routed to the input/output router (figure 51C), which provides it to a digital-to-analog converter (figure 51B). The information is then provided to the audio output channel element (figure 51A). The information may bypass the digital-to-analog converter and be routed directly to the output element (figure 51A) if an external digital audio decoder is utilized.
[00174] Figure 55 depicts the Program Specific Information Protocol (PSIP) information file. This information, if included in the received stream, is processed by processing element (3,8). The information is then routed to the data memory router element (figure 52D) and passed to the data memory routing element (figure 40H) supporting the Ethernet interface. The information is then passed to the input/output router (figure 40G), routed to the Ethernet interface (figure 40E) and placed onto the Ethernet IP stream.
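The PSIP handling can be sketched as classifying received table sections by their table_id before forwarding them to the Ethernet interface. The table_id assignments below follow ATSC A/65 as commonly documented; the classification helper itself is a hypothetical illustration, not part of the specification.

```python
# Illustrative sketch of sorting received PSIP table sections by table_id
# before forwarding. table_id values per ATSC A/65 (verify against the
# standard); the helper function is a hypothetical example.

PSIP_TABLES = {
    0xC7: "MGT",    # Master Guide Table
    0xC8: "TVCT",   # Terrestrial Virtual Channel Table
    0xC9: "CVCT",   # Cable Virtual Channel Table
    0xCA: "RRT",    # Rating Region Table
    0xCB: "EIT",    # Event Information Table
    0xCD: "STT",    # System Time Table
}

def classify_section(section: bytes) -> str:
    """Name a PSIP section from its first byte (table_id)."""
    return PSIP_TABLES.get(section[0], "unknown")

print(classify_section(bytes([0xC7, 0x00])))   # → MGT
print(classify_section(bytes([0x42, 0x00])))   # not a PSIP table_id
```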
[00175] Figure 56 depicts the processing of the first of two compressed video and associated audio elementary streams by processing elements (4,0 through 4,9 and 5,0 through 5,9). These processes convert the information from a compressed format to a non-compressed format. The information is then routed and ported to the digital output ports (figures 46A or 47A) and/or the analog output ports (figures 48A or 49A). This process utilizes the additional memory located on the board (figure 53C).

[00176] Figure 57 depicts the processing of the second of two compressed video and associated audio elementary streams by processing elements (6,0 through 6,9 and 7,0 through 7,9). These processes convert the information from a compressed format to a non-compressed format. The information is then routed and ported to the digital output ports (figures 46A or 47A) and/or the analog output ports (figures 48A or 49A). This process utilizes the additional memory located on the board (figure 54C).
[00177] Figure 58 depicts the ability to direct the video decoder to process the video in a scaled format, as described in Annex G (Scalable Video Coding) of the H.264 standard. Processing elements (8,0 through 8,9) and (9,0 through 9,5) are utilized for this process. The video can be scaled in conjunction with the signal level (figure 39A) of a mobile application input (figure 38A).
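The scaled-decoding decision can be sketched as selecting the largest H.264 Annex G (SVC) layer subset whose cumulative bitrate the mobile link can sustain at the current signal level. The layer definitions and the level-to-bitrate mapping below are hypothetical examples, not values from the specification.

```python
# Illustrative sketch of selecting an SVC layer subset to decode based on
# the mobile input's signal level. All numbers are hypothetical examples.

LAYERS = [  # (layer_id, width, height, cumulative_bitrate_kbps)
    (0, 320, 180, 400),     # base layer
    (1, 640, 360, 1200),    # + first spatial enhancement
    (2, 1280, 720, 3500),   # + second spatial enhancement
]

def usable_bitrate_kbps(level_dbm: float) -> int:
    """Hypothetical mapping from RF level to sustainable bitrate."""
    if level_dbm >= -55.0:
        return 4000
    if level_dbm >= -75.0:
        return 1500
    return 500

def select_layers(level_dbm: float):
    """Decode the largest layer subset whose bitrate the link can sustain."""
    budget = usable_bitrate_kbps(level_dbm)
    chosen = [layer for layer in LAYERS if layer[3] <= budget]
    return chosen[-1] if chosen else LAYERS[0]

print(select_layers(-50.0))   # strong signal → (2, 1280, 720, 3500)
print(select_layers(-80.0))   # weak signal   → (0, 320, 180, 400)
```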
[00178] In yet another preferred embodiment in accordance with the present invention, the decoder and encoder functions described can be combined on a single board using a unified parallel processor, creating a new platform for transcoding: a concurrent multi-standard decoder decodes a signal, and a multi-standard concurrent encoder re-encodes the signal to a new format. In a concurrent multi-standard transcoder architecture, the decoded video, audio and data channels are received in one standard and are formatted in an alternative standard. The apparatus described can concurrently decode multiple standards and concurrently encode the signals in multiple alternative standards.
[00179] The transcoder apparatus receives the signal from an external source as described with respect to the decoder above. The decoded information is routed on a common internal bus structure of the parallel processors to the encoding processing elements on the unified parallel processor for video, audio and data processing. In addition to encoding the video, audio and data into an alternative standard, the architecture allows the replacement of alternative system data. The alternative system data can include conditional access information, program specific information protocol ("PSIP") information, separate system information and alternatively formatted closed captioning data. The command and control of the transcoder additionally allows the processing of the incoming streams to the apparatus. The incoming stream processing may include dropping unwanted packets or program services. The transcoder platform architecture additionally allows the insertion of new video, audio and data information for processing by the encoding function. Thus, from the above, it should be understood by one of skill in the art that the decoder functionality and the encoder functionality may be combined using a common internal bus structure of the processor to provide a transcoder.
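The transcoder data flow described above can be sketched as a decode stage feeding a re-encode stage, with each service mapped to its own group of processing elements. The codec registries and stand-in decode/encode functions below are hypothetical placeholders for the processing-element firmware; real decoders and encoders would operate on elementary streams.

```python
# Minimal sketch of the transcoder flow: decode each service from its
# incoming standard, then re-encode it in a requested alternative standard.
# The lambdas are stand-ins for the actual codec processing elements.

DECODERS = {"MPEG-2": lambda es: ("raw", es),
            "H.264":  lambda es: ("raw", es)}
ENCODERS = {"MPEG-2": lambda raw: ("MPEG-2", raw[1]),
            "H.264":  lambda raw: ("H.264", raw[1])}

def transcode(service, target_standard):
    """Decode one (standard, payload) service and re-encode it in the target."""
    standard, es = service
    raw = DECODERS[standard](es)            # decoder processing elements
    return ENCODERS[target_standard](raw)   # encoder processing elements

def transcode_concurrently(services, targets):
    """Each (service, target) pair maps onto its own group of elements."""
    return [transcode(s, t) for s, t in zip(services, targets)]

out = transcode_concurrently(
    [("MPEG-2", b"video-a"), ("H.264", b"video-b")],
    ["H.264", "MPEG-2"])
print(out)   # → [('H.264', b'video-a'), ('MPEG-2', b'video-b')]
```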

Claims

1. A system for encoding signals comprising:
a. a parallel processor;
b. at least one input operatively coupled to said parallel processor and configured to receive a first signal comprising at least one of a first video signal and a first audio signal; and
c. at least one output operatively coupled to said parallel processor,
wherein
said parallel processor is configured to concurrently encode at least one of said first video signal and said first audio signal using at least one of two video standards, two audio standards and two data rates, and
output a second signal comprising at least one of a second video signal and a second audio signal on said at least one output.
2. The system for encoding of claim 1, wherein said parallel processor comprises a plurality of parallel processors having a unified control.
3. The system for encoding of claim 1, wherein said second signal comprises at least said second audio signal and said second video signal.
4. The system for encoding of claim 1, said parallel processor being configured to concurrently encode said first video signal and said first audio signal using a combination of said two video standards, said two audio standards and said two data rates.
5. The system for encoding of claim 2, wherein said plurality of parallel processors are located on a single integrated board.
6. The system for encoding of claim 3, further comprising a time clock for providing a universal time code for a respective frame in said second video signal and associating said universal time code with said respective second video frame in said second signal.
7. The system for encoding of claim 3, said parallel processor being configured to determine a global position of said system for a respective frame in said second video signal and associate said global position with said respective second video frame in said second signal.
8. The system for encoding of claim 3, said parallel processor being configured to receive external data and associate said external data with said second signal.
9. The system for encoding of claim 8, wherein said external data is at least one of a Digital Video Broadcasting (DVB) standard and an Advanced Television Systems Committee (ATSC) standard.
10. The system for encoding of claim 1, said parallel processor being configured to perform at least one of object recognition and scene change recognition for each frame in said first video signal and provide data regarding said at least one of said object recognition and scene change recognition.
11. The system for encoding of claim 10, wherein said at least one of said object recognition and scene change recognition data is written to at least one of a plurality of private data packets transmitted within said second data signal and a memory device located on the same board as said parallel processor.
12. The system for encoding of claim 10, wherein said at least one of said object recognition and scene change recognition data is routed to an external device for storage or immediate analysis.
13. The system for encoding of claim 1, wherein said processor is configured to analyze the contents of said second signal prior to transmission on said at least one output for compliance with certain provisions of European Telecommunication Standards Institute, technical report 290 (ETR 290).
14. The system for encoding of claim 1, wherein said parallel processor is configured to receive an external transport data stream and integrate said external data transport stream with said at least one of said first video signal and said first audio signal.
15. The system for encoding of claim 1, said system further comprising a user interface for various configurations of said parallel processor in real-time.
16. The system for encoding of claim 1, wherein said parallel processor is configured to operate in one of a fixed bit rate mode, a variable bit rate mode and a statistical multiplexed mode.
17. The system for encoding of claim 1, wherein said parallel processor is configured to provide electronic stabilization of said first video signal.
18. A method of encoding data comprising the steps of:
a. receiving a first input signal having at least one of first audio data and first video data;
b. concurrently encoding, using a parallel processor, at least one of said video data and said audio data using at least one of two video standards, two audio standards and two data rates; and
c. outputting at least one output signal containing said encoded at least one of said video data and said audio data.
19. The method of encoding data of claim 18, said method further comprising the step of associating additional data with said output signal.
20. The method of encoding of claim 19, said additional data further comprising at least one of a universal time code, global positioning data, object recognition data, scene change recognition data, Digital Video Broadcasting (DVB) standard data and Advanced Television Systems Committee (ATSC) standard data.
21. The method of encoding data of claim 18, wherein said parallel processor comprises a plurality of parallel processors under unified control.
22. The method of encoding data of claim 18, further comprising the steps of:
a. receiving a second input signal having at least one of second audio data and second video data; and
b. concurrently encoding, using said parallel processor, at least one of said second video data and said second audio data using at least one of a video standard, an audio standard and a data rate, with said first input signal.
PCT/US2011/022624 2010-01-26 2011-01-26 Integrated concurrent multi-standard encoder, decoder and transcoder WO2011094346A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US29844510P 2010-01-26 2010-01-26
US61/298,445 2010-01-26

Publications (1)

Publication Number Publication Date
WO2011094346A1 true WO2011094346A1 (en) 2011-08-04

Family

ID=44319752

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2011/022624 WO2011094346A1 (en) 2010-01-26 2011-01-26 Integrated concurrent multi-standard encoder, decoder and transcoder

Country Status (1)

Country Link
WO (1) WO2011094346A1 (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5629988A (en) * 1993-06-04 1997-05-13 David Sarnoff Research Center, Inc. System and method for electronic image stabilization
WO1999024903A1 (en) * 1997-11-07 1999-05-20 Bops Incorporated METHODS AND APPARATUS FOR EFFICIENT SYNCHRONOUS MIMD OPERATIONS WITH iVLIW PE-to-PE COMMUNICATION
US20020131763A1 (en) * 2000-04-05 2002-09-19 David Morgan William Amos Video processing and/or recording
US20030108105A1 (en) * 1999-04-06 2003-06-12 Amir Morad System and method for video and audio encoding on a single chip
US6674741B1 (en) * 1996-05-20 2004-01-06 Nokia Telecommunications Oy High speed data transmission in mobile communication networks
US6792441B2 (en) * 2000-03-10 2004-09-14 Jaber Associates Llc Parallel multiprocessing for the fast fourier transform with pipeline architecture
US20040218094A1 (en) * 2002-08-14 2004-11-04 Choi Seung Jong Format converting apparatus and method
US7254249B2 (en) * 2001-03-05 2007-08-07 Digimarc Corporation Embedding location data in video
US20070286275A1 (en) * 2004-04-01 2007-12-13 Matsushita Electric Industrial Co., Ltd. Integated Circuit For Video/Audio Processing



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11737603

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11737603

Country of ref document: EP

Kind code of ref document: A1