US20050201469A1 - Method and apparatus for improving the average image refresh rate in a compressed video bitstream - Google Patents



Publication number
US20050201469A1
Authority
US
United States
Prior art keywords
video
macroblocks
decoder
skipped
increasing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/798,519
Inventor
John Sievers
Stephen Botzko
David Lindbergh
Charles Crisler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Polycom Inc
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US10/798,519 priority Critical patent/US20050201469A1/en
Application filed by Individual filed Critical Individual
Assigned to POLYCOM, INC. reassignment POLYCOM, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BOTZKO, STEPHEN, CRISLER, CHARLES M., LINDBERGH, DAVID, SIEVERS, JOHN
Priority to EP04026781A priority patent/EP1575294B1/en
Priority to AU2004229063A priority patent/AU2004229063B2/en
Priority to DE602004024863T priority patent/DE602004024863D1/en
Priority to AT04026781T priority patent/ATE454013T1/en
Priority to CNB2004100817339A priority patent/CN100440975C/en
Priority to JP2005049362A priority patent/JP2005260935A/en
Publication of US20050201469A1 publication Critical patent/US20050201469A1/en
Priority to HK05109079.0A priority patent/HK1075159A1/en
Priority to US13/452,325 priority patent/US8374236B2/en
Abandoned legal-status Critical Current


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/266Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system, merging a VOD unicast channel into a multicast channel
    • H04N21/2662Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234381Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the temporal resolution, e.g. decreasing the frame rate by frame skipping
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/63Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N21/637Control signals issued by the client directed to the server or network components
    • H04N21/6377Control signals issued by the client directed to the server or network components directed to server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client 
    • H04N21/65Transmission of management data between client and server
    • H04N21/658Transmission by the client directed to the server
    • H04N21/6582Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number

Definitions

  • The rate at which a video compression system (in both the encoder and decoder) can process frames is limited by a number of factors, such as the input frame rate, the bitrate of the compressed video stream, and the amount of computation the compression system can perform in a given period of time. Usually, in cases where there are ample input frames and available bitrate, the computation limit becomes the dominant limit on frame rate.
  • What is needed in the art is a technique that allows an encoder to increase (i.e., speed up) the frame rate dynamically based on a computational model of the decoder.
  • ITU-T and MPEG compression systems define a fixed frame rate ceiling for a given picture size, which is based on the assumption that all macroblocks in each frame are coded.
  • Another aspect of what is needed in the art is to further take advantage of the lowered decoding computation requirements when “skipping” is used, allowing the encoder to encode a faster frame rate than would otherwise be possible.
  • The present invention is directed to such a system.
  • More particularly, the present invention is directed to a technique in which a video encoder, using information either communicated from the decoder or known in advance (for example from a published specification), determines a model of the decoder's computational load and adjusts its encoding dynamically in response thereto.
  • In many video compression systems, the encoder must constrain the content of the encoded bitstream such that the decoding process will not exceed the capability of the decoder. For example, the computational capability and storage in a decoder limit the bitrate, frame rate, picture size, or combinations thereof that can be decoded in real time. Appropriate bitstream constraints must be met when producing bitstreams for playback systems such as DVD players or video streaming media players, as well as for real-time communication systems such as video conferencing systems ("VCS"). These bitstream constraints may be specified by providing the encoder with prior knowledge of the limitations of prospective decoders (for example from a published specification), or by the transmission of a set of one or more parameters from the decoder to the encoder, which directly or indirectly signal the decoder's capability.
  • One bitstream constraint is the maximum frame rate that can be decoded under a given set of circumstances.
  • In H.264, the maximum frame rate for a given picture size is computed from a parameter that specifies the maximum number of luminance macroblocks per second (each macroblock contains 256 pixels in H.264) that can be decoded (this parameter is called "MaxMBPS").
  • For example, if an H.264 decoder is known to support Level 1.2 of the Baseline profile, then it can receive frames containing up to 396 luminance macroblocks and can decode 6,000 luminance macroblocks per second (MaxMBPS has a value of 6,000).
  • If the decoder is receiving common intermediate format ("CIF") frames (which contain 396 luminance macroblocks each), the maximum frame rate is 6,000 ÷ 396, or approximately 15 frames per second. If the decoder is receiving quarter common intermediate format ("QCIF") frames (which contain 99 luminance macroblocks each), the maximum frame rate is 6,000 ÷ 99, or approximately 60 frames per second. In this example, the encoder is not permitted to encode more frames per second than the decoder can handle, e.g., 15 frames per second in the case of CIF.
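The frame-rate ceiling arithmetic above can be sketched in a few lines of Python (an illustrative sketch only, not part of the patent disclosure; the function and constant names are ours):

```python
# Illustrative sketch: maximum decodable frame rate when every macroblock
# is coded, per the H.264 Baseline Level 1.2 example above.

MAX_MBPS = 6_000            # MaxMBPS for H.264 Baseline profile, Level 1.2
CIF_MB, QCIF_MB = 396, 99   # luminance macroblocks per CIF / QCIF frame

def max_frame_rate(max_mbps: int, picture_size_mb: int) -> float:
    """MaxFrameRate = MaxMBPS / PictureSize (picture size in macroblocks)."""
    return max_mbps / picture_size_mb

print(round(max_frame_rate(MAX_MBPS, CIF_MB), 1))   # approx. 15 fps for CIF
print(round(max_frame_rate(MAX_MBPS, QCIF_MB), 1))  # approx. 60 fps for QCIF
```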
  • The system disclosed herein exploits this possibility to maintain a higher frame rate, or to encode better video quality, without exceeding the peak computational capability of the decoder, and therefore permits a given compression system design to achieve better performance.
  • Although the invention is described with reference to a video conferencing application, it is foreseen that the invention would also find beneficial application in other applications involving digitization of video data, e.g., the recording of DVDs, digital television, streaming video, video telephony, tele-medicine, tele-working, etc.
  • FIG. 1 is a block diagram of an exemplary video conferencing system.
  • FIG. 2 is a block diagram of an exemplary video conference station of the video conferencing system of FIG. 1.
  • bitstream: A sequence of bits representing a video sequence. A bitstream can be stored or conveyed one bit at a time, or in groups of bits.
  • coded macroblock: A macroblock that is represented by coded bits which are to be decoded. Compare to "skipped macroblock".
  • frame: A single picture in a video sequence. Frames may or may not be interlaced (consisting of two or more "fields").
  • image: A single frame; same as a picture.
  • macroblock: A group of 1 or more pixels representing some particular area of a picture. Typically, a macroblock is a group of 256 pixels in a 16×16 array, but in the context of this invention the pixels in a macroblock are not necessarily in a rectangular group, or even adjacent to one another.
  • picture size: The number of pixels in each frame.
  • quality: The accuracy of the visual correspondence between the input and output of a coding/decoding process. Quality is improved by increasing frame rate, by increasing picture size, or by increasing the fidelity of each individual decompressed frame compared to the original.
  • rate: The inverse of an interval. A phrase like "a maximum frame rate of 30 Hz" is equivalent to "a minimum inter-frame interval of 1/30 seconds." This use of "rate" does not imply that each successive frame must necessarily be separated by the same interval of time.
  • skipped macroblock: A macroblock for which no coded bits, or substantially fewer than the normal number of bits, are generated by the encoder. Usually this is because the skipped macroblock represents a portion of the picture that has not changed, or has changed little, from the preceding frame. Usually the amount of computation required to decode such a skipped macroblock is less than for a normal macroblock. Note that some encoders signal in some fashion (sometimes using bits) that the macroblock is skipped (H.263's Coded macroblock indication, for instance).
  • video sequence: A sequence of frames.
  • FIG. 1 illustrates an exemplary video conferencing system 100 .
  • The video conferencing system 100 includes a local video conference station 102 and a remote video conference station 104 connected through a network 106.
  • Although FIG. 1 shows only two video conference stations 102 and 104, those skilled in the art will recognize that more video conference stations may be coupled, directly or indirectly, to the video conferencing system 100.
  • The network 106 may be any type of transmission medium including, but not limited to, POTS (Plain Old Telephone Service), cable, optical, and radio transmission media, or combinations thereof. Alternatively, any data storage and retrieval mechanism may be substituted for the network 106.
  • FIG. 2 is a block diagram of an exemplary video conference station 200 .
  • The video conference station 200 will be described as the local video conference station 102 (FIG. 1), although the remote video conference station 104 (FIG. 1) may have a similar configuration.
  • The video conference station 200 includes one or more of the following: a display device 202, a CPU 204, a memory 206, a video capture device 208, an image processing engine 210, and a communication interface 212.
  • Other devices may be provided in the video conference station 200, or not all of the above-named devices may be provided.
  • The video capture device 208 may be either a camera capturing natural scenes (of people, places, or any other things) or an input from any source of visual material (such as, but not limited to, a VCR or DVD player, a motion-picture projector, or the display output from a computer), and sends the images to the image processing engine 210. Certain functions of the image processing engine 210 will be discussed in more detail below. Similarly, the image processing engine 210 may also transform received data from the remote video conference station 104 into a video signal for display on the display device 202, for storage for later display, or for forwarding to other devices.
  • When an area of the image has not changed (or has changed very little) from the previous frame, the encoder may elect to "skip" that area of the image.
  • The decoder then outputs the same pixel data in the skipped area as was present in the previous frame (perhaps modified based on other factors such as picture areas that are not skipped, the history of object motion in the scene, error concealment techniques, etc.). Techniques to modify the output picture based on these other factors are known to those of ordinary skill in the art.
  • Because decoding usually requires very few computational resources for skipped areas of a picture, the decoder's computational capabilities are underutilized when picture areas are skipped.
  • As a result, the decoder is usually capable of much higher maximum frame rates when significant areas of the image are skipped. For example, suppose a given decoder is capable of receiving 15 frames per second (fps) at CIF picture size when no macroblocks are skipped. If 75% of the macroblocks in each image were skipped, the decoder might be capable of receiving 30 fps.
  • The system disclosed herein improves the average frame rate of a compressed video stream by taking advantage of the lowered decoding computational load when "skipping" is used. It comprises a method of specifying the decoder's processing capability and regulating the frame rate using this information. All other things being equal, the technique disclosed herein allows the encoder to encode a faster average frame rate than would otherwise be possible at a given picture size.
  • When choosing the tradeoff of picture size vs. frame rate, encoders generally take into account the frame rate the decoder can handle at a given picture size.
  • The system disclosed herein allows the encoder to run at the normal picture size it would have selected, taking advantage of the higher average frame rate that skipping permits.
  • Alternatively, the encoder can select a larger picture size than normally practical and maintain an acceptable frame rate, thereby improving the image quality. A combination of both benefits is also possible.
  • Described herein is an improved method of specifying the decoding system's computational capability, which is used together with the existing H.264 macroblocks per second limit MaxMBPS (or its equivalents in other video coding systems) to constrain the encoder bitstream in a new way described below.
  • The preferred embodiment includes a parameter that allows the decoder's peak frame rate to be calculated by the encoder for whatever particular picture size and proportion of "skipped" macroblocks the encoder is encoding. In most decoder implementations, this peak frame rate is considerably higher than the frame rate limit that applies when the entire image is coded.
  • The preferred embodiment defines a new parameter, "MaxSKIPPED": the number of macroblocks per second that can be processed by the decoder if all the macroblocks in the video sequence are skipped.
  • MaxSKIPPED specifies a theoretical limit of the decoding system speed. It is theoretical because it is not useful in practice to encode a video sequence in which all macroblocks are skipped.
  • The unit of "macroblocks per second" is a good choice because the decoding system speed tends to slow down approximately linearly as picture size increases.
  • Other signaling could have been used instead of MaxSKIPPED. It would be equivalent to specify a maximum frame rate (in units of Hz, for instance) or a minimum picture interval (in units of seconds, for instance). Alternatively, a more complex set of parameters indicating MaxSKIPPED values for different picture sizes (for example a formula, a complete set of values, or a series of sample values for interpolation) may be used. However, MaxSKIPPED allows a single parameter to span a range of picture sizes, whereas a maximum frame rate would have to be picture-size-specific. MaxSKIPPED also fits in well with the other signaling specified in H.264.
  • This MaxSKIPPED parameter can be conveyed to the encoder by the decoder (if the decoder has a communication path back to the encoder, as in many video-conferencing systems) or given to the encoder as prior knowledge (for example in a published specification) based on a given target type of decoder (if the decoder does not have a communication path back to the encoder, as in a DVD player).
  • MaxFrameRate = MaxMBPS ÷ PictureSize, with PictureSize in units of macroblocks.
  • Suppose the H.264 Level 1.2 decoder described above (which has a MaxMBPS of 6,000 MB/s) can alternatively process 24,000 skipped macroblocks per second (MaxSKIPPED is 24,000).
  • The traditional encoder regulation method would limit the frame rate at the 396-macroblock-per-picture CIF picture size to about 15.2 frames per second (6,000 ÷ 396).
  • The method described above allows the frame rate to be increased to 24.2 frames per second as long as 50% or more of the macroblocks are being skipped (1 ÷ (198 ÷ 6,000 + 198 ÷ 24,000)). If the percentage of skipped macroblocks is increased to 75%, this method gives a maximum frame rate of 34.6 frames per second (1 ÷ (99 ÷ 6,000 + 297 ÷ 24,000)), which is far faster than the traditional encoding method permits.
  • Now suppose the H.264 Level 1.2 decoder is receiving SVGA video, which contains 1,875 macroblocks per frame (800×600 pixels), and that only the mouse cursor is moving. Assume further that encoding the mouse cursor region requires only 16 macroblocks. Traditional encoder regulation would limit the frame rate to 3.2 frames per second (6,000 ÷ 1,875). The method described above gives a frame rate of 12.5 frames per second (1 ÷ (16 ÷ 6,000 + 1,859 ÷ 24,000)). Of course, if the entire picture is changing and all macroblocks are coded (for example during a camera pan), the frame rate will drop to the same value that the traditional method delivers.
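The blended limit used in the worked examples above — coded macroblocks charged against MaxMBPS and skipped macroblocks against MaxSKIPPED — can be sketched as follows (illustrative only; the function name is ours):

```python
# Illustrative sketch: frame-rate limit for a frame that mixes coded and
# skipped macroblocks. The minimum frame interval is the time to decode
# the coded macroblocks plus the time to process the skipped ones.

def blended_max_frame_rate(coded_mb: int, skipped_mb: int,
                           max_mbps: int = 6_000,
                           max_skipped: int = 24_000) -> float:
    return 1.0 / (coded_mb / max_mbps + skipped_mb / max_skipped)

print(round(blended_max_frame_rate(198, 198), 1))    # CIF, 50% skipped
print(round(blended_max_frame_rate(99, 297), 1))     # CIF, 75% skipped
print(round(blended_max_frame_rate(16, 1_859), 1))   # SVGA, cursor only
```

These calls reproduce the 24.2, 34.6, and 12.5 frames-per-second figures computed above.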
  • The end result is a system that can be automatically regulated to run at the highest possible frame rate by allowing the encoder to dynamically determine the minimum frame interval that the decoder can computationally handle, given the number of "skipped" macroblocks in the image stream.
  • This minimum frame interval is used by the encoder as described below.
  • In synchronous transmission systems, it is well known that the video bitrate must be matched to the synchronous transmission rate to ensure that the decoder receives an updated picture before that picture's display time. Since the number of bits in the picture is not always precisely known before the encoding process, in such cases system designs must account for some variation between the expected bits per compressed frame and the actual number of bits.
  • TargetPictureBits = ChannelCapacity × PictureSize ÷ MaxMBPS, where PictureSize is in macroblocks.
  • NextFrameInterval = max(PictureSize ÷ MaxMBPS, ActualPictureBits ÷ ChannelCapacity)
  • TargetPictureBits = ChannelCapacity ÷ ExpectedFrameRate
  • The expected frame rate could simply be the average frame rate that the invention yields on this image source, or it could be adaptively determined depending on the amount of change in the image, the amount of motion in the scene, or other factors.
  • This improved method ensures that the actual frame rate never exceeds the decoder's computational capability, and that the actual bitrate simultaneously never exceeds the channel capacity.
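This regulation method can be sketched as follows (an illustrative sketch; the names and the 384 kbit/s channel capacity are our assumptions, not the patent's):

```python
# Illustrative sketch of frame-interval regulation: the next frame may not
# be sent sooner than the decoder can finish decoding the current frame,
# nor sooner than the channel can finish transmitting its actual bits.

CHANNEL_CAPACITY = 384_000   # bits per second (assumed channel)
MAX_MBPS = 6_000             # decoder limit, H.264 Baseline Level 1.2

def target_picture_bits(expected_frame_rate: float) -> float:
    """Per-frame bit budget so the average bitrate matches the channel."""
    return CHANNEL_CAPACITY / expected_frame_rate

def next_frame_interval(picture_size_mb: int, actual_picture_bits: int) -> float:
    """NextFrameInterval = max(PictureSize/MaxMBPS, ActualBits/ChannelCapacity)."""
    decode_time = picture_size_mb / MAX_MBPS
    send_time = actual_picture_bits / CHANNEL_CAPACITY
    return max(decode_time, send_time)

print(target_picture_bits(15.0))          # 25600.0 bits per frame
print(next_frame_interval(396, 30_000))   # transmission-limited in this case
```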
  • Another method of frame rate regulation is to include a buffering model as part of the decoder capabilities.
  • This method allows for more variation in the bitrate for individual pictures than the first method, but also adds more delay to the decoding process.
  • In such a model, video bits are presumed to be received at a known rate.
  • The decoder buffers these bits in a buffer of a known size, and empties the buffer as pictures are decoded.
  • The picture decode time used in the buffering model may be the fixed frame rate limit averaged over some period of image transmission.
  • Well-known examples of such buffering models include the Hypothetical Reference Decoder ("HRD") of H.264 and the Video Buffering Verifier ("VBV") of MPEG-2. Other buffering models can also be employed.
  • For each frame, the permitted number of coded bits is limited to a range of values (i.e., to avoid overflow or underflow of the buffer).
  • The target number of coded picture bits is computed as described above, but is constrained to fall within these limits. It is common practice to increase or decrease the target number of coded picture bits to maintain an average level of buffer fullness.
  • The minimum frame interval described above is used in the encoder to calculate when bits are removed from the buffer by the decoder, e.g., to adjust the VBV buffer examination times described in ISO/IEC 13818-2 Annex C.9 through Annex C.12. Alternatively, the actual frame intervals may be used.
  • The encoder feeds its encoded bits into an encoder buffer for delivery to the actual channel. If the channel is synchronous (for instance ISDN), then the bits are drained from the buffer synchronously: periodically, groups of one or more video bits are removed from the buffer for transmission. If the channel is packet-oriented, then the bits are drained using a traffic-shaping algorithm that delivers the bits to the packet network at the media bitrate. The current fullness of the encoder buffer drives the bit-rate control algorithms used by the encoder.
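The buffering behavior described above can be sketched with a highly simplified leaky-bucket model (in the spirit of the HRD/VBV, but not taken from any standard; the class name and constants are ours):

```python
# Illustrative leaky-bucket sketch: bits arrive at the channel rate, the
# decoder drains the buffer one picture at a time, and the encoder keeps
# each picture's size within limits that avoid overflow and underflow.

class DecoderBufferModel:
    def __init__(self, size_bits: int, fill_rate_bps: int):
        self.size = size_bits
        self.rate = fill_rate_bps
        self.fullness = 0.0

    def receive(self, seconds: float) -> None:
        """Bits arrive from the channel for `seconds` at the known rate;
        arrivals beyond the buffer size would overflow, so we clamp."""
        self.fullness = min(self.size, self.fullness + self.rate * seconds)

    def decode_frame(self, frame_bits: float) -> None:
        """Remove one picture's bits; taking more than is buffered would
        underflow, which the encoder must never cause."""
        assert frame_bits <= self.fullness, "buffer underflow"
        self.fullness -= frame_bits

model = DecoderBufferModel(size_bits=256_000, fill_rate_bps=384_000)
model.receive(1 / 15)        # one frame interval at 15 fps
model.decode_frame(20_000)   # a 20,000-bit coded picture
```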
  • The system disclosed herein can also be used to run at a variable picture size and a fixed frame rate, in contrast to the fixed picture size and variable frame rate mode described above.
  • Many video compression algorithms (such as H.263 Annex P) have methods for adjusting the video picture size dynamically in the compressed bitstream.
  • Traditionally, however, these methods are of limited utility since the frame rate is generally reduced dramatically as the picture size increases.
  • With the present method, the system can be configured to run at a fixed frame rate (for instance 30 fps) at a guaranteed minimum picture size (for instance CIF). During times when sufficient macroblocks per second are being skipped, this invention allows the picture size of the compressed images to be automatically increased while maintaining the fixed frame rate.
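Under the same MaxMBPS/MaxSKIPPED budgets, the largest picture size sustainable at a fixed frame rate can be sketched as follows (illustrative; the function is ours and is not part of the patent):

```python
# Illustrative sketch: solve the per-frame time budget
#   1/frame_rate = size*(1-skip)/MaxMBPS + size*skip/MaxSKIPPED
# for the picture size (in macroblocks) at a fixed frame rate.

def max_picture_size(frame_rate: float, skip_fraction: float,
                     max_mbps: int = 6_000,
                     max_skipped: int = 24_000) -> int:
    per_mb_time = ((1 - skip_fraction) / max_mbps
                   + skip_fraction / max_skipped)
    return int(round((1.0 / frame_rate) / per_mb_time))

print(max_picture_size(30, 0.0))    # no skipping: small pictures only
print(max_picture_size(30, 0.75))   # heavy skipping permits larger pictures
```

With no skipping, this Level 1.2 budget sustains only 200 macroblocks per frame at 30 fps; at 75% skipped it sustains more than a CIF frame (457 macroblocks).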
  • Throughout this description, the H.264 video codec standard is used as an illustrative example. It should be noted, however, that the invention is generalizable and applicable to most video compression systems, including all modern video compression systems known to the inventors (H.261, H.263, H.264, Microsoft's WM9, MPEG-1, MPEG-2, MPEG-4, etc.).

Abstract

An apparatus and method for digital video encoding is disclosed. The disclosed system provides a way of improving video quality for a given video coding system design.

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates generally to video coding and compression, and more particularly to providing a method of improving the frame rate or picture size of a compressed video sequence beyond that which a given decoder would otherwise be able to process.
  • 2. Description of Related Art
  • Digitization of video has become increasingly important. Digitization of video in communication (e.g., videoconferencing) and for digital video recording has become increasingly common. In these applications, video is transmitted across telecommunication links such as telephone lines, computer networks, and radio, or stored on various media such as DVDs, hard disks, and SVCDs.
  • Presently, efficient transmission and/or storage of video data requires encoding and compression of video data. Video compression coding is a technique of processing and encoding digital video data such that less information (typically measured in bits) is required to represent a good-quality rendition of the video sequence. Most video compression/decompression systems are “lossy”, which means that the reproduced video output is only an approximate, and not exact, version of the original input video data. Various compression/decompression schemes are used to compress video sequences.
  • Several approaches and standards for encoding and compressing source video signals exist. These include ITU-T Recommendations H.120, H.261, H.262, H.263, and H.264 (hereafter "H.120", "H.261", "H.262", "H.263", and "H.264", respectively), and standards promulgated by the International Organization for Standardization/International Electrotechnical Commission ("ISO/IEC"): "MPEG-1" (ISO/IEC 11172-2:1993), "MPEG-2" (ISO/IEC 13818-2:2000/ITU-T Recommendation H.262), and "MPEG-4" (ISO/IEC 14496-2:2001). Each of these standards is incorporated by reference in its entirety.
  • In the most commonly used video compression systems, each uncompressed picture is represented by a rectangular array of pixels. In many operations, the whole image is not processed at one time, but is divided into rectangular groups of “macroblocks” (e.g., 16×16 pixels in each macroblock) that are individually processed. Each macroblock may represent either luminance or “luma” pixels or chrominance or “chroma” pixels, or some combination of both. A number of methods and techniques for macroblock-based processing of images are generally known to those skilled in the art, and thus are not repeated here in detail. All lossy video compression systems face a tradeoff between the fidelity of the decompressed video compared to the original and the number of bits used to represent the compressed video, all other factors being equal. For a given video sequence, different video quality may be produced by a video encoder for a fixed number of bits, if different compression techniques are used. Which techniques may be used, and their effectiveness, are in some cases dependent on the amount of computation, memory, and latency margin available to the compression system.
  • Existing methods of frame rate regulation allow an encoder to decrease (i.e., slow down) the frame rate to ensure that the image compression maintains an acceptable visual quality level, given other constraints such as computation, memory, and latency limits. In particular, if enough of the image is changing (meaning that a relatively larger number of bits will be required to maintain image quality), the encoder can slow down the frame rate to increase the available bits per frame. The prior art discloses a variety of existing methods that allow an encoder to run at a variable bitrate or variable frame rate. The prior art also discloses the idea of “skipping” coding for unchanged areas of the image. This technique has been used before for the purpose of reducing the bitrate of a video stream, or of increasing the image quality (by increasing the bitrate of the coded areas to take advantage of the bits saved by not coding the unchanged areas).
  • The rate at which a video compression system (in both the encoder and decoder) can process frames is limited by a number of factors, such as the input frame rate, the bitrate of the compressed video stream, and the amount of computation the compression system can perform in a given period of time. Usually, in cases where there are ample input frames and available bitrate, the computation limit becomes the dominant limit on frame rate.
  • What is needed in the art is a technique that allows an encoder to increase (i.e., speed up) the frame rate dynamically based on a computational model of the decoder. Instead, ITU-T and MPEG compression systems define a fixed frame rate ceiling for a given picture size, which is based on the assumption that all macroblocks in each frame are coded. Another aspect of what is needed in the art is to further take advantage of the lowered decoding computation requirements when “skipping” is used, allowing the encoder to encode a faster frame rate than would otherwise be possible. The present invention is directed to such a system.
  • SUMMARY OF THE INVENTION
  • The present invention is directed to a technique in which a video encoder, using information either communicated from the decoder or from prior knowledge (for example from a published specification), determines a model of the decoder's computational load and adjusts its encoding dynamically in response thereto.
  • In many video compression systems the encoder must constrain the content of the encoded bitstream such that the decoding process will not exceed the capability of the decoder. For example, the computational capability and storage in a decoder limits the bitrate, frame rate, picture size, or combinations thereof that can be decoded in real-time. Appropriate bitstream constraints must be met when producing bitstreams for playback systems such as DVD players or video streaming media players, as well as real-time communication systems such as video conferencing systems (VCS). These bitstream constraints may be specified by providing the encoder with prior knowledge of the limitations of prospective decoders (for example from a published specification), or by the transmission of a set of one or more parameters from the decoder to the encoder, which directly or indirectly signal the decoder's capability.
  • One bitstream constraint is the maximum frame rate that can be decoded under a given set of circumstances. For example, in the ITU-T H.264 video codec specification, the maximum frame rate for a given picture size is computed from a parameter that specifies the maximum number of luminance macroblocks per second (each macroblock contains 256 pixels in H.264) that can be decoded (this parameter is called “MaxMBPS”). For example, if an H.264 decoder is known to support Level 1.2 of the Baseline profile, then it can receive frames containing up to 396 luminance macroblocks and can decode 6,000 luminance macroblocks per second (MaxMBPS has a value of 6,000). This indicates that if the decoder is receiving common intermediate format “CIF” frames (which contain 396 luminance macroblocks each), the maximum frame rate is 6,000÷396 or approximately 15 frames per second. If the decoder is receiving quarter common intermediate format “QCIF” frames (which contain 99 luminance macroblocks each), the maximum frame rate is 6,000÷99, or approximately 60 frames per second. In this example, the encoder is not permitted to encode more frames per second than the decoder can handle, e.g., 15 frames per second in the case of CIF.
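The frame-rate ceiling described above can be illustrated with a short calculation. The following is a sketch in Python; the function and constant names are chosen here for illustration and are not part of any standard API:

```python
def max_frame_rate(max_mbps: float, picture_size_mb: int) -> float:
    """Traditional frame-rate ceiling: MaxMBPS divided by the picture size
    in macroblocks, assuming every macroblock in every frame is coded."""
    return max_mbps / picture_size_mb

MAX_MBPS = 6_000  # H.264 Baseline Level 1.2, from the example above
CIF_MB = 396      # 352x288 pixels = 22x18 macroblocks
QCIF_MB = 99      # 176x144 pixels = 11x9 macroblocks

print(f"CIF:  {max_frame_rate(MAX_MBPS, CIF_MB):.2f} fps")   # approximately 15 fps
print(f"QCIF: {max_frame_rate(MAX_MBPS, QCIF_MB):.2f} fps")  # approximately 60 fps
```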
  • Frequently, it is advantageous to encode a large picture size. For example, when sending a computer-based presentation in a video conference it is desirable to maintain the XGA (1024×768 pixels, which is 3072 luminance macroblocks in H.264) picture size of a typical computer screen. ITU-T Recommendation H.241, which is hereby incorporated by reference in its entirety, provides a method of signaling support for XGA video with H.264 Baseline Profile Level 1.2. However, the frame rate limit (as computed above) results in a very low frame rate at this picture size (approximately 1.95 frames per second). Often such computer presentations have large areas that do not change from frame to frame. Frequently the only motion will be the mouse cursor moving across the static picture on the screen. It would be a significant improvement to be able to increase the XGA frame rate in such situations so that the mouse cursor motion appears smooth on the far end.
  • There are other situations in which there is little motion in a scene where increasing the frame rate would result in more natural video. If the decoder can process unchanging areas of the picture more quickly than changing areas, then an encoder could in principle exploit this to encode a higher frame rate than might otherwise be possible.
  • The system disclosed herein exploits this possibility to maintain a higher frame rate, or encode better video quality without exceeding the peak computational capability of the decoder, and therefore permits a given compression system design to achieve better performance. Although the invention is described with reference to a video conferencing application, it is foreseen that the invention would also find beneficial application in other applications involving digitization of video data, e.g., the recording of DVDs, digital television, streaming video, video telephony, tele-medicine, tele-working, etc.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of an exemplary video conferencing system;
  • FIG. 2 is a block diagram of an exemplary video conference station of the video conferencing system of FIG. 1.
  • DETAILED DESCRIPTION OF THE INVENTION
  • For purposes of the following description and claims, these terms shall have the following meanings:
  • “bitstream”—A sequence of bits representing a video sequence. A bitstream can be stored or conveyed one bit at a time, or in groups of bits.
  • “coded macroblock”—A macroblock that is represented by coded bits which are to be decoded. Compare to “skipped macroblock”.
  • “frame”—A single picture in a video sequence. Frames may or may not be interlaced (consisting of two or more “fields”).
  • “image”—A single frame, same as a picture.
  • “macroblock”—A group of 1 or more pixels representing some particular area of a picture. In H.264 a macroblock is a group of 256 pixels in a 16×16 array, but in the context of this invention the pixels in a macroblock are not necessarily in a rectangular group, or even adjacent to one another.
  • “MB”—Abbreviation for macroblocks.
  • “picture size”—The number of pixels in each frame.
  • “quality”—The accuracy of the visual correspondence between the input and output of a coding/decoding process. Quality is improved by increasing frame rate, or by increasing picture size, or by increasing the fidelity of each individual decompressed frame compared to the original.
  • “rate”—The inverse of an interval. A phrase like “a maximum frame rate of 30 Hz” is equivalent to “a minimum inter-frame interval of 1/30 seconds.” This use of “rate” does not imply that each successive frame must necessarily be separated by the same interval of time.
  • “skipped macroblock”—A macroblock for which no coded bits or substantially fewer than the normal number of bits are generated by the encoder. Usually this is because the skipped MB represents a portion of the picture that has not changed, or has changed little, from the preceding frame. Usually the amount of computation required to decode such a skipped macroblock is less than for a normal macroblock. Note that some encoders signal in some fashion (sometimes using bits) that the macroblock is skipped (H.263's Coded Macroblock Indication for instance).
  • “video sequence”—A sequence of frames.
  • FIG. 1 illustrates an exemplary video conferencing system 100. The video conferencing system 100 includes a local video conference station 102 and a remote video conference station 104 connected through a network 106. Although FIG. 1 only shows two video conference stations 102 and 104, those skilled in the art will recognize that more video conference stations may be coupled, directly or indirectly, to the video conferencing system 100. The network 106 may be any type of transmission medium including, but not limited to, POTS (Plain Old Telephone Service), cable, optical, and radio transmission media, or combinations thereof. Alternatively, any data storage and retrieval mechanism may be substituted for the network 106.
  • FIG. 2 is a block diagram of an exemplary video conference station 200. For simplicity, the video conference station 200 will be described as the local video conference station 102 (FIG. 1), although the remote video conference station 104 (FIG. 1) may contain a similar configuration. In one embodiment, the video conference station 200 includes one or more of the following: a display device 202, a CPU 204, a memory 206, a video capture device 208, an image processing engine 210, and a communication interface 212. Alternatively, other devices may be provided in the video conference station 200, or not all of the above-named devices may be provided.
  • The video capture device 208 may be either a camera capturing natural scenes (of people, places, or any other things) or an input from any source of visual material (such as but not limited to a VCR or DVD player, a motion-picture projector, or the display output from a computer), and sends the images to the image processing engine 210. Certain functions of image processing engine 210 will be discussed in more detail below. Similarly, the image processing engine 210 may also transform received data from the remote video conference station 104 into a video signal for display on the display device 202, or for storage for later display, or for forwarding to other devices.
  • In a video compression system, if, for example, an area of the picture has not changed, or for other reasons, then the encoder (image processing engine 210) may elect to “skip” that area of the image. When this is done, the decoder outputs the same pixel data in the skipped area as was present in the previous frame (perhaps modified based on other factors such as picture areas that are not skipped, the history of object motion in the scene, error concealment techniques, etc.). Techniques to modify the output picture based on these other factors are known to those of ordinary skill in the art.
  • Because decoding usually requires very few computational resources for skipped areas of a picture, the decoder's computational capabilities are underutilized when picture areas are skipped. In fact, the decoder is usually capable of much higher maximum frame rates when significant areas of the image are skipped. For example, suppose a given decoder is capable of receiving 15 frames per second (fps) at CIF picture size when no macroblocks are skipped. If 75% of the macroblocks in each image were skipped, the decoder might be capable of receiving 30 fps. However, in the current art, there is no method for regulating frame rate that allows the encoder to exploit such a capability, and no technique for the encoder to be aware of such a dependency of the decoder's maximum frame rate on the proportion of skipped macroblocks, so the encoder must be limited to a 15 fps rate.
  • The system disclosed herein improves the average frame rate of a compressed video stream by taking advantage of the lowered decoding computational load when “skipping” is used. It comprises a method of specifying the decoder's processing capability and regulating the frame rate using this information. All other things being equal, the technique disclosed herein allows the encoder to encode a faster average frame rate than would otherwise be possible at a given picture size.
  • When choosing the tradeoff of picture size vs. frame rate, encoders generally take into account the frame rate the decoder can handle at a given picture size. The system disclosed herein allows the encoder to run at the normal picture size it would have selected, taking advantage of the higher average frame rate that skipping permits. Alternatively, the encoder can select a larger picture size than normally practical and maintain an acceptable frame rate, thereby improving the image quality. A combination of both benefits is also possible.
  • Described herein is an improved method of specifying the decoding system's computational capability, which is used together with the existing H.264 macroblocks per second limit MaxMBPS (or its equivalents in other video coding systems) to constrain the encoder bitstream in a new way described below. The preferred embodiment includes a parameter that allows the decoder's peak frame rate to be calculated by the encoder for whatever particular picture size and proportion of “skipped” macroblocks the encoder is encoding. In most decoder implementations, this peak frame rate is considerably higher than the frame rate limit that applies when the entire image is coded.
  • One such parameter that scales to different picture sizes is the number of macroblocks per second that can be processed by the decoder if all the macroblocks in the video sequence are skipped. We will call this parameter “MaxSKIPPED”, which for the purposes of this explanation we will consider to be in units of macroblocks per second (MB/s). Note that if this MaxSKIPPED value is not constant for all supported picture sizes, then the minimum of these values may be used. MaxSKIPPED specifies a theoretical limit of the decoding system speed. It is theoretical because it is not useful in practice to encode a video sequence in which all macroblocks are skipped. The unit of “macroblocks per second” is a good choice because the decoding system speed tends to slow down approximately linearly as picture size increases.
  • Other signaling could have been used instead of MaxSKIPPED. It would be equivalent to specify the maximum frame rate (in units of Hz for instance), or a minimum picture interval (in units of seconds for instance). Alternatively, a more complex set of parameters indicating MaxSKIPPED values for different picture sizes (for example a formula, complete set of values, or series of sample values for interpolation) may be used. However, MaxSKIPPED allows a single parameter to span a range of picture sizes, whereas maximum frame rate would have to be picture-size-specific. MaxSKIPPED also fits in well with the other signaling specified in H.264.
  • Like other decoder parameters, this MaxSKIPPED parameter can be conveyed to the encoder by the decoder (if the decoder has a communication path back to the encoder, e.g., many video-conferencing systems) or as prior knowledge given to the encoder (for example in a published specification), based on a given target type of decoder (if the decoder does not have a communication path back to the encoder, e.g., a DVD player).
  • Normally, the encoder determines the maximum frame rate (in frames per second) as: MaxFrameRate = MaxMBPS ÷ PictureSize
    with PictureSize in units of macroblocks. Instead, in connection with the system described herein, the minimum frame interval is determined as:
    MinFrameInterval = (Tcoded × Ncoded) + (Tskipped × Nskipped)
    with: MaxFrameRate = 1 ÷ MinFrameInterval
    which reduces to: MaxFrameRate = 1 ÷ ((Ncoded ÷ MaxMBPS) + (Nskipped ÷ MaxSKIPPED))
    where MaxMBPS is the macroblock per second limit specified in H.264 Annex A or its equivalent; MaxSKIPPED is the maximum number of macroblocks per second the decoder can process if all macroblocks are skipped; Ncoded is the number of coded macroblocks in a picture; Nskipped is the number of skipped macroblocks in a picture; Tcoded is the number of seconds to decode and output one coded macroblock (1÷MaxMBPS); and Tskipped is the number of seconds to output (but not decode) one skipped macroblock (1÷MaxSKIPPED).
  • As an example, assume that the H.264 Level 1.2 decoder described above (which has a MaxMBPS of 6,000 MB/s) can alternatively process 24,000 skipped macroblocks per second (MaxSKIPPED is 24,000). Assume also that only 50% of the macroblocks are being encoded each second, as might be the case with a stationary camera framing one or two people sitting at a table. The traditional encoder regulation method would limit the frame rate at the 396-macroblock-per-picture CIF picture size to approximately 15.2 frames per second (6,000÷396). The method described above allows the frame rate to be increased to 24.2 frames per second as long as 50% or more of the macroblocks are being skipped (1÷((198÷6,000)+(198÷24,000))). If the percentage of skipped macroblocks is increased to 75%, this method gives a maximum frame rate of 34.6 frames per second (1÷((99÷6,000)+(297÷24,000))), which is far faster than the traditional encoding method allows.
  • As another example, assume that the H.264 Level 1.2 decoder is receiving SVGA video, which contains 1,875 macroblocks per frame (800×600 pixels), and that only the mouse cursor is moving. Assume further that encoding the mouse cursor region requires only 16 macroblocks, leaving 1,859 macroblocks skipped. Traditional encoder regulation would limit the frame rate to 3.2 frames per second (6,000÷1,875). The method described above gives a frame rate of 12.5 frames per second (1÷((16÷6,000)+(1,859÷24,000))). Of course, if the entire picture is changing and all macroblocks are coded (for example during a camera pan), the frame rate will drop off to the same value that the traditional method delivers.
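The worked examples above can be checked with a short sketch of the MaxFrameRate formula (Python is used here for illustration; the function name is ours, not from any specification):

```python
def max_frame_rate_with_skipping(n_coded: int, n_skipped: int,
                                 max_mbps: float, max_skipped: float) -> float:
    """MaxFrameRate = 1 / (Ncoded/MaxMBPS + Nskipped/MaxSKIPPED)."""
    return 1.0 / (n_coded / max_mbps + n_skipped / max_skipped)

# H.264 Level 1.2 decoder: MaxMBPS = 6,000; MaxSKIPPED = 24,000 (assumed above)
# CIF (396 MB/frame), 50% skipped:
print(max_frame_rate_with_skipping(198, 198, 6_000, 24_000))   # ~24.2 fps
# CIF, 75% skipped:
print(max_frame_rate_with_skipping(99, 297, 6_000, 24_000))    # ~34.6 fps
# SVGA (1,875 MB/frame), only a 16-macroblock cursor region coded:
print(max_frame_rate_with_skipping(16, 1_859, 6_000, 24_000))  # ~12.5 fps
```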
  • The end result is a system that can be automatically regulated to run at the highest possible frame rate by allowing the encoder to dynamically determine the minimum frame interval that the decoder can computationally handle given the number of “skipped” macroblocks in the image stream. This minimum frame interval is used by the encoder as described below. When synchronous transmission systems are used, it is well known that the video bitrate must be matched to the synchronous transmission rate to ensure that the decoder receives an updated picture before that picture's display time. Since the number of bits in the picture is not always precisely known before the encoding process, in such cases system designs must account for some variation between the expected bits per compressed frame and the actual number of bits.
  • One well-known method of video bitrate matching operates on the principle that the encoder can account for an unexpectedly high number of bits per frame by encoding fewer frames per second (usually by not encoding one or more input frames) when this event occurs. In common practice, the encoder attempts to produce pictures that are precisely: TargetPictureBits = ChannelCapacity × PictureSize ÷ MaxMBPS
    where PictureSize is in macroblocks. The frame interval to the next encoded frame then is: NextFrameInterval = max(PictureSize ÷ MaxMBPS, ActualPictureBits ÷ ChannelCapacity)
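A minimal sketch of this conventional regulation rule, using the variable names defined above (the function names and the 384 kbit/s channel in the example are illustrative assumptions):

```python
def target_picture_bits(channel_capacity_bps: float, picture_size_mb: int,
                        max_mbps: float) -> float:
    """TargetPictureBits = ChannelCapacity x PictureSize / MaxMBPS."""
    return channel_capacity_bps * picture_size_mb / max_mbps

def next_frame_interval(picture_size_mb: int, max_mbps: float,
                        actual_picture_bits: float,
                        channel_capacity_bps: float) -> float:
    """Wait for whichever constraint binds: decoder speed or channel drain time."""
    return max(picture_size_mb / max_mbps,
               actual_picture_bits / channel_capacity_bps)

# Example: 384 kbit/s channel, CIF at Level 1.2 (MaxMBPS = 6,000)
target = target_picture_bits(384_000, 396, 6_000)                # 25,344 bits
on_budget = next_frame_interval(396, 6_000, target, 384_000)     # 0.066 s
oversized = next_frame_interval(396, 6_000, 2 * target, 384_000) # 0.132 s
```

When a picture comes in at twice its bit budget, the interval to the next frame doubles, so the frame rate temporarily halves while the bitrate stays within the channel capacity.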
  • This method of video bitrate matching ensures that the actual frame rate never exceeds the decoder's receive capability, and that the actual bitrate simultaneously never exceeds the channel capacity. When this method is employed in conjunction with the present invention, target picture bits is: TargetPictureBits = ChannelCapacity ÷ ExpectedFrameRate
    The expected frame rate could simply be the average frame rate that the invention yields on this image source, or it could be adaptively determined depending on the amount of change in the image, the amount of motion in the scene, or other factors. The frame interval to the next encoded image frame then is: NextFrameInterval = max((Ncoded ÷ MaxMBPS) + (Nskipped ÷ MaxSKIPPED), ActualPictureBits ÷ ChannelCapacity)
  • This improved method ensures that the actual frame rate never exceeds the decoder's computational capability, and that the actual bitrate simultaneously never exceeds the channel capacity.
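The improved interval computation differs from the conventional one only in the decode-time term. A hedged sketch, with illustrative names and an assumed 384 kbit/s channel:

```python
def next_frame_interval_skipping(n_coded: int, n_skipped: int,
                                 max_mbps: float, max_skipped: float,
                                 actual_picture_bits: float,
                                 channel_capacity_bps: float) -> float:
    """NextFrameInterval = max(Ncoded/MaxMBPS + Nskipped/MaxSKIPPED,
                               ActualPictureBits/ChannelCapacity)."""
    decode_time = n_coded / max_mbps + n_skipped / max_skipped
    drain_time = actual_picture_bits / channel_capacity_bps
    return max(decode_time, drain_time)

# CIF, 75% skipped, a small compressed picture on a 384 kbit/s channel:
interval = next_frame_interval_skipping(99, 297, 6_000, 24_000, 8_000, 384_000)
print(1 / interval)  # ~34.6 fps: the decoder's decode-time term binds here
```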
  • Another method of frame rate regulation is to include a buffering model as part of the decoder capabilities. This method allows for more variation in the bitrate for individual pictures than the first method, but also adds more delay to the decoding process. In this method, video bits are presumed to be received at a known rate. The decoder buffers these bits in a buffer of a known size, and empties the buffer as pictures are decoded. The picture decode time used in the buffering model may be the fixed frame rate limit averaged over some period of image transmission. The HRD (Hypothetical Reference Decoder) in the H.261 and H.263 standards, and the VBV (Video Buffering Verifier) described in ISO/IEC 13818-2 Annex C are examples of this method. Other buffering models can also be employed.
  • When this method of video bitrate matching is in use with the invention, for each newly encoded frame the permitted number of coded bits for that frame is limited to a range of values (i.e., to avoid overflow or underflow of the buffer). The target number of coded picture bits is computed as described above, but is constrained to fall within these limits. It is common practice to increase or decrease the target number of coded picture bits to maintain an average level of buffer fullness. The minimum frame interval described above is used in the encoder to calculate when bits are removed from the buffer by the decoder, e.g., to adjust the VBV buffer examination times described in ISO/IEC 13818-2 Annex C.9 through Annex C.12. Alternatively, the actual frame intervals may be used.
  • In an alternative buffering model, the encoder feeds its encoded bits into an encoder buffer for delivery to the actual channel. If the channel is synchronous (for instance ISDN), then the bits are drained from the buffer synchronously. Periodically groups of one or more video bits are removed from the buffer for transmission. If the channel is packet oriented, then the bits are drained using a traffic shaping algorithm that delivers the bits to the packet network at the media bitrate. The current fullness of the encoder buffer drives the bit-rate control algorithms used by the encoder.
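The encoder-side buffer described in the preceding paragraph can be sketched as a simple leaky bucket. This is an illustrative sketch only; real rate-control algorithms driven by buffer fullness are considerably more elaborate:

```python
class EncoderBuffer:
    """Leaky-bucket sketch: encoded bits enter per frame and drain at the
    channel rate (synchronously, or via traffic shaping on a packet channel)."""

    def __init__(self, channel_bps: float):
        self.channel_bps = channel_bps
        self.fullness_bits = 0.0  # current occupancy; drives rate control

    def add_frame(self, frame_bits: float) -> None:
        self.fullness_bits += frame_bits

    def drain(self, seconds: float) -> None:
        self.fullness_bits = max(0.0, self.fullness_bits - self.channel_bps * seconds)

buf = EncoderBuffer(384_000)
buf.add_frame(30_000)     # one encoded picture
buf.drain(1 / 30)         # one 30 fps frame interval drains 12,800 bits
print(buf.fullness_bits)  # 17200.0
```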
  • Other equivalent methods of frame rate regulation could also be used. More complex models are possible, and could possibly further improve the results.
  • Ways of further improving the encoder's model of decoder computation requirements are possible. For example, in most video coding systems there are several types of macroblocks, each of which has its own decoding computational cost. Also, the number of transform coefficients that are included in each macroblock can have an effect on the computational load depending on the transform technology that is used. Another possibility is to separate the cost to decode symbols in the bitstream from the cost of decoding macroblocks, which requires the encoder to track the number of symbols in each compressed picture. Note that there are at least three basic entropy coding schemes for these symbols: arithmetic, fixed field, and variable length. Arithmetic has the highest computational cost, fixed field has the lowest. For all of these improved methods, information about the relative decoding computational burden would be given to the encoder, and analogous procedures performed.
  • Additionally, the system disclosed herein can be used to run at variable image picture size and a fixed frame rate, in contrast to the fixed picture size and variable frame rate mode described above. Many video compression algorithms (such as H.263 Annex P) have methods for adjusting the video picture size dynamically in the compressed bitstream. However, these methods are of limited utility since the frame rate is generally reduced dramatically as the picture size increases. With the present invention, the system can be configured to run at a fixed frame rate (for instance 30 fps) at a guaranteed minimum picture size (for instance CIF). During times when sufficient macroblocks per second are being skipped, this invention allows the picture size of the compressed images to be automatically increased while maintaining the fixed frame rate.
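Inverting the minimum-frame-interval relation gives the largest picture size sustainable at a fixed frame rate for an expected skip fraction. This closed-form inversion is our own illustrative restatement of the idea, not a formula from the text:

```python
def max_picture_size_mb(frame_rate: float, skip_fraction: float,
                        max_mbps: float, max_skipped: float) -> float:
    """Largest picture size (in macroblocks) satisfying
    Ncoded/MaxMBPS + Nskipped/MaxSKIPPED <= 1/frame_rate,
    where Ncoded = (1 - skip_fraction) * size and Nskipped = skip_fraction * size."""
    per_mb_time = (1 - skip_fraction) / max_mbps + skip_fraction / max_skipped
    return (1.0 / frame_rate) / per_mb_time

# Level 1.2 numbers: at 30 fps with nothing skipped, only 200 MB fit per frame...
print(max_picture_size_mb(30, 0.0, 6_000, 24_000))  # 200.0
# ...but with 90% of macroblocks skipped, more than a CIF picture (396 MB) fits:
print(max_picture_size_mb(30, 0.9, 6_000, 24_000))  # ~615.4
```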
  • In the foregoing description, the H.264 video codec standard is used as an illustrative example. It should be noted, however, that the invention is generalizable and applicable to most video compression systems, including all modern video compression systems known to the inventors (H.261, H.263, H.264, Microsoft's WM9, MPEG-1, MPEG-2, MPEG-4, etc.).
  • The invention has been explained with reference to exemplary embodiments. It will be evident to those skilled in the art that various modifications may be made thereto without departing from the broader spirit and scope of the invention. Further, although the invention has been described in the context of its implementation in particular environments and for particular applications, those skilled in the art will recognize that the present invention's usefulness is not limited thereto and that the invention can be beneficially utilized in any number of environments and implementations. The foregoing description and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.

Claims (33)

1. A method of quality-improvement of a digitally-encoded video sequence, wherein the video sequence comprises information representing a sequence of encoded frames, each encoded frame comprising one or more encoded macroblocks, the method comprising:
determining one or more processing capabilities of a decoder that will decode the video sequence;
encoding macroblocks of a first image;
encoding macroblocks of subsequent images, wherein some macroblocks are skipped; and
increasing video quality as a function of a fraction of macroblocks that are skipped to take advantage of decoder processing capability that would otherwise be unused as a result of the skipped macroblocks.
2. The method of claim 1 wherein the step of determining one or more processing capabilities of a decoder comprises having prior knowledge of the decoder type.
3. The method of claim 1 wherein the step of determining one or more processing capabilities of the decoder comprises receiving processing capability information from the decoder.
4. The method of claim 1 wherein the step of determining one or more processing capabilities of the decoder comprises determining the number of macroblocks that can be decoded in a given interval if all macroblocks are skipped.
5. The method of claim 4 wherein the step of increasing video quality comprises determining the maximum frame rate in accordance with the following expression:
MaxFrameRate = 1 ÷ ((Ncoded ÷ MaxMBPS) + (Nskipped ÷ MaxSKIPPED))
where Ncoded is the number of coded macroblocks per frame, Nskipped is the number of skipped macroblocks per frame, MaxMBPS is the maximum number of macroblocks that can be decoded in a given interval, and MaxSKIPPED is the maximum number of macroblocks that can be decoded in a given interval if all macroblocks are skipped.
6. The method of claim 1 wherein the step of increasing video quality comprises increasing a video frame rate.
7. The method of claim 1 wherein the step of increasing video quality comprises increasing a video picture size.
8. The method of claim 1 wherein the step of increasing video quality further comprises increasing a video frame rate as a function of a computational cost of the decoder to decode various types of macroblocks.
9. The method of claim 1 wherein the step of increasing video quality further comprises increasing a video picture size as a function of a computational cost of the decoder to decode various types of macroblocks.
10. The method of claim 1 further comprising:
taking account of a number of coefficients included in the encoded macroblocks and a computational requirement of the decoder as a function of this number.
11. The method of claim 10 wherein the step of increasing video quality comprises increasing a video frame rate.
12. The method of claim 10 wherein the step of increasing video quality comprises increasing a video picture size.
13. The method of claim 10 wherein the step of increasing video quality further comprises increasing a video frame rate as a function of a computational cost of the decoder to decode various types of macroblocks.
14. The method of claim 10 wherein the step of increasing video quality further comprises increasing a video picture size as a function of a computational cost of the decoder to decode various types of macroblocks.
15. A video conferencing terminal adapted to produce encoded video including a sequence of encoded frames, each encoded frame comprising one or more encoded macroblocks, the video conferencing terminal comprising:
one or more image processing engines adapted to encode a video signal, wherein some macroblocks are skipped; and
a communication interface adapted to determine one or more processing capabilities of a decoder that will decode the encoded video and further adapted to increase video quality as a function of a fraction of macroblocks that are skipped to take advantage of decoder processing capability that would otherwise be unused as a result of the skipped macroblocks.
16. The video conferencing terminal of claim 15 wherein the processing capability of the decoder is determined as a function of the number of macroblocks that can be decoded in a given interval if all macroblocks are skipped.
17. The video conferencing terminal of claim 16 wherein a maximum frame rate is determined in accordance with the following expression:
MaxFrameRate = 1 ÷ ((Ncoded ÷ MaxMBPS) + (Nskipped ÷ MaxSKIPPED))
where Ncoded is the number of coded macroblocks per frame, Nskipped is the number of skipped macroblocks per frame, MaxMBPS is the maximum number of macroblocks that can be decoded in a given interval, and MaxSKIPPED is the maximum number of macroblocks that can be decoded in a given interval if all macroblocks are skipped.
18. The video conferencing terminal of claim 15 wherein video quality is increased by increasing a frame rate.
19. The video conferencing terminal of claim 15 wherein video quality is increased by increasing a picture size.
20. The video conferencing terminal of claim 18 wherein the frame rate is further determined as a function of a computational cost of the decoder to decode various types of macroblocks.
21. The video conferencing terminal of claim 19 wherein the picture size is further determined as a function of a computational cost of the decoder to decode various types of macroblocks.
22. A method of quality-improvement of a digitally-encoded video sequence, the method comprising:
determining one or more processing capabilities of a decoder that will decode the video sequence; and
increasing video quality as a function of an encoder model of decoder processing load to take advantage of decoder processing capability that would otherwise be unused.
23. The method of claim 22 wherein the step of determining one or more processing capabilities of a decoder comprises having prior knowledge of the decoder type.
24. The method of claim 22 wherein the step of determining one or more processing capabilities of the decoder comprises receiving processing capability information from the decoder.
25. The method of claim 22 wherein the step of increasing video quality comprises increasing a video frame rate.
26. The method of claim 22 wherein the step of increasing video quality comprises increasing a video picture size.
27. A video encoder for generating an encoded video sequence, comprising:
one or more image processing engines adapted to:
encode a video signal;
determine one or more processing capabilities of a decoder that will decode the encoded video sequence; and
increase video quality as a function of an encoder model of decoder processing load to take advantage of decoder processing capability that would otherwise be unused.
28. The video encoder of claim 27 wherein the processing capabilities of the decoder are determined as a function of a number of macroblocks that can be decoded in a given interval if all macroblocks are skipped.
29. The video encoder of claim 28 wherein a maximum frame rate is determined in accordance with the following expression:
MaxFrameRate = 1 / (N_coded / MaxMBPS + N_skipped / MaxSKIPPED)
where N_coded is the number of coded macroblocks per frame, N_skipped is the number of skipped macroblocks per frame, MaxMBPS is the maximum number of macroblocks that can be decoded in a given interval, and MaxSKIPPED is the maximum number of macroblocks that can be decoded in a given interval if all macroblocks are skipped.
30. The video encoder of claim 27 wherein video quality is increased by increasing a frame rate.
31. The video encoder of claim 27 wherein video quality is increased by increasing a picture size.
32. The video encoder of claim 30 wherein the frame rate is further determined as a function of a computational cost of the decoder to decode various types of macroblocks.
33. The video encoder of claim 31 wherein the picture size is further determined as a function of a computational cost of the decoder to decode various types of macroblocks.
US10/798,519 2004-03-11 2004-03-11 Method and apparatus for improving the average image refresh rate in a compressed video bitstream Abandoned US20050201469A1 (en)

Priority Applications (9)

Application Number Priority Date Filing Date Title
US10/798,519 US20050201469A1 (en) 2004-03-11 2004-03-11 Method and apparatus for improving the average image refresh rate in a compressed video bitstream
EP04026781A EP1575294B1 (en) 2004-03-11 2004-11-11 Method and apparatus for improving the average image refresh rate in a compressed video bitstream
AU2004229063A AU2004229063B2 (en) 2004-03-11 2004-11-11 Method and apparatus for improving the average image refresh rate in a compressed video bitstream
DE602004024863T DE602004024863D1 (en) 2004-03-11 2004-11-11 Method and apparatus for improving the refresh rate in a compressed video
AT04026781T ATE454013T1 (en) 2004-03-11 2004-11-11 METHOD AND APPARATUS FOR IMPROVING THE IMAGE REPRESENTATION FREQUENCY IN A COMPRESSED VIDEO
CNB2004100817339A CN100440975C (en) 2004-03-11 2004-12-24 Method and apparatus for improving the average image refresh rate in a compressed video bitstream
JP2005049362A JP2005260935A (en) 2004-03-11 2005-02-24 Method and apparatus for increasing average image refresh rate in compressed video bitstream
HK05109079.0A HK1075159A1 (en) 2004-03-11 2005-10-13 Method and apparatus for improving the average image refresh rate in a compressed video bitstream
US13/452,325 US8374236B2 (en) 2004-03-11 2012-04-20 Method and apparatus for improving the average image refresh rate in a compressed video bitstream

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/798,519 US20050201469A1 (en) 2004-03-11 2004-03-11 Method and apparatus for improving the average image refresh rate in a compressed video bitstream

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US13/452,325 Continuation US8374236B2 (en) 2004-03-11 2012-04-20 Method and apparatus for improving the average image refresh rate in a compressed video bitstream

Publications (1)

Publication Number Publication Date
US20050201469A1 true US20050201469A1 (en) 2005-09-15

Family

ID=34827663

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/798,519 Abandoned US20050201469A1 (en) 2004-03-11 2004-03-11 Method and apparatus for improving the average image refresh rate in a compressed video bitstream
US13/452,325 Expired - Lifetime US8374236B2 (en) 2004-03-11 2012-04-20 Method and apparatus for improving the average image refresh rate in a compressed video bitstream

Family Applications After (1)

Application Number Title Priority Date Filing Date
US13/452,325 Expired - Lifetime US8374236B2 (en) 2004-03-11 2012-04-20 Method and apparatus for improving the average image refresh rate in a compressed video bitstream

Country Status (7)

Country Link
US (2) US20050201469A1 (en)
EP (1) EP1575294B1 (en)
JP (1) JP2005260935A (en)
CN (1) CN100440975C (en)
AT (1) ATE454013T1 (en)
DE (1) DE602004024863D1 (en)
HK (1) HK1075159A1 (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070153692A1 (en) * 2005-12-29 2007-07-05 Xudong Song Methods and apparatuses for performing scene adaptive rate control
US20080039088A1 (en) * 2005-06-09 2008-02-14 Kyocera Corporation Radio Communication Terminal and Communication Method
US20080181314A1 (en) * 2007-01-31 2008-07-31 Kenjiro Tsuda Image coding apparatus and image coding method
US20090073005A1 (en) * 2006-09-11 2009-03-19 Apple Computer, Inc. Complexity-aware encoding
US20090161766A1 (en) * 2007-12-21 2009-06-25 Novafora, Inc. System and Method for Processing Video Content Having Redundant Pixel Values
US20090304086A1 (en) * 2008-06-06 2009-12-10 Apple Inc. Method and system for video coder and decoder joint optimization
US7643422B1 (en) * 2006-03-24 2010-01-05 Hewlett-Packard Development Company, L.P. Dynamic trans-framing and trans-rating for interactive playback control
US20130177071A1 (en) * 2012-01-11 2013-07-11 Microsoft Corporation Capability advertisement, configuration and control for video coding and decoding
US20140072032A1 (en) * 2007-07-10 2014-03-13 Citrix Systems, Inc. Adaptive Bitrate Management for Streaming Media Over Packet Networks
US8976856B2 (en) 2010-09-30 2015-03-10 Apple Inc. Optimized deblocking filters
US9179155B1 (en) 2012-06-14 2015-11-03 Google Inc. Skipped macroblock video encoding enhancements
US9591318B2 (en) 2011-09-16 2017-03-07 Microsoft Technology Licensing, Llc Multi-layer encoding and decoding
US9781418B1 (en) 2012-06-12 2017-10-03 Google Inc. Adaptive deadzone and rate-distortion skip in video processing

Families Citing this family (7)

Publication number Priority date Publication date Assignee Title
JP4747109B2 (en) * 2007-01-15 2011-08-17 パナソニック株式会社 Calculation amount adjustment device
US10003792B2 (en) 2013-05-27 2018-06-19 Microsoft Technology Licensing, Llc Video encoder for images
US10136140B2 (en) 2014-03-17 2018-11-20 Microsoft Technology Licensing, Llc Encoder-side decisions for screen content encoding
EP3041233A1 (en) * 2014-12-31 2016-07-06 Thomson Licensing High frame rate-low frame rate transmission technique
EP3254463A4 (en) 2015-02-06 2018-02-21 Microsoft Technology Licensing, LLC Skipping evaluation stages during media encoding
US10136132B2 (en) 2015-07-21 2018-11-20 Microsoft Technology Licensing, Llc Adaptive skip or zero block detection combined with transform size decision
CN106528278B (en) * 2015-09-14 2019-06-25 纬创资通(上海)有限公司 Hardware load method of adjustment and electronic device

Citations (6)

Publication number Priority date Publication date Assignee Title
US6490320B1 (en) * 2000-02-02 2002-12-03 Mitsubishi Electric Research Laboratories Inc. Adaptable bitstream video delivery system
US6526099B1 (en) * 1996-10-25 2003-02-25 Telefonaktiebolaget Lm Ericsson (Publ) Transcoder
US20030112366A1 (en) * 2001-11-21 2003-06-19 General Instrument Corporation Apparatus and methods for improving video quality delivered to a display device
US20050041740A1 (en) * 2002-04-06 2005-02-24 Shunichi Sekiguchi Video data conversion device and video data conversion method
US7114174B1 (en) * 1999-10-01 2006-09-26 Vidiator Enterprises Inc. Computer program product for transforming streaming video data
US20070120967A1 (en) * 2004-01-20 2007-05-31 Polycom, Inc. Method and apparatus for mixing compressed video

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US6289067B1 (en) * 1999-05-28 2001-09-11 Dot Wireless, Inc. Device and method for generating clock signals from a single reference frequency signal and for synchronizing data signals with a generated clock
US6990151B2 (en) * 2001-03-05 2006-01-24 Intervideo, Inc. Systems and methods for enhanced error concealment in a video decoder


Cited By (22)

Publication number Priority date Publication date Assignee Title
US20080039088A1 (en) * 2005-06-09 2008-02-14 Kyocera Corporation Radio Communication Terminal and Communication Method
US7689220B2 (en) * 2005-06-09 2010-03-30 Kyocera Corporation Radio communication terminal and communication method
US20070153692A1 (en) * 2005-12-29 2007-07-05 Xudong Song Methods and apparatuses for performing scene adaptive rate control
US8249070B2 (en) * 2005-12-29 2012-08-21 Cisco Technology, Inc. Methods and apparatuses for performing scene adaptive rate control
US7643422B1 (en) * 2006-03-24 2010-01-05 Hewlett-Packard Development Company, L.P. Dynamic trans-framing and trans-rating for interactive playback control
US20090073005A1 (en) * 2006-09-11 2009-03-19 Apple Computer, Inc. Complexity-aware encoding
US8830092B2 (en) 2006-09-11 2014-09-09 Apple Inc. Complexity-aware encoding
US7969333B2 (en) * 2006-09-11 2011-06-28 Apple Inc. Complexity-aware encoding
US20110234430A1 (en) * 2006-09-11 2011-09-29 Apple Inc. Complexity-aware encoding
US20080181314A1 (en) * 2007-01-31 2008-07-31 Kenjiro Tsuda Image coding apparatus and image coding method
US9191664B2 (en) * 2007-07-10 2015-11-17 Citrix Systems, Inc. Adaptive bitrate management for streaming media over packet networks
US20140072032A1 (en) * 2007-07-10 2014-03-13 Citrix Systems, Inc. Adaptive Bitrate Management for Streaming Media Over Packet Networks
US20090161766A1 (en) * 2007-12-21 2009-06-25 Novafora, Inc. System and Method for Processing Video Content Having Redundant Pixel Values
US20090304086A1 (en) * 2008-06-06 2009-12-10 Apple Inc. Method and system for video coder and decoder joint optimization
US8976856B2 (en) 2010-09-30 2015-03-10 Apple Inc. Optimized deblocking filters
US9591318B2 (en) 2011-09-16 2017-03-07 Microsoft Technology Licensing, Llc Multi-layer encoding and decoding
US9769485B2 (en) 2011-09-16 2017-09-19 Microsoft Technology Licensing, Llc Multi-layer encoding and decoding
US20130177071A1 (en) * 2012-01-11 2013-07-11 Microsoft Corporation Capability advertisement, configuration and control for video coding and decoding
US11089343B2 (en) * 2012-01-11 2021-08-10 Microsoft Technology Licensing, Llc Capability advertisement, configuration and control for video coding and decoding
US9781418B1 (en) 2012-06-12 2017-10-03 Google Inc. Adaptive deadzone and rate-distortion skip in video processing
US9179155B1 (en) 2012-06-14 2015-11-03 Google Inc. Skipped macroblock video encoding enhancements
US9888247B2 (en) 2012-06-14 2018-02-06 Google Llc Video coding using region of interest to omit skipped block information

Also Published As

Publication number Publication date
HK1075159A1 (en) 2005-12-02
CN100440975C (en) 2008-12-03
JP2005260935A (en) 2005-09-22
AU2004229063A1 (en) 2005-09-29
US8374236B2 (en) 2013-02-12
DE602004024863D1 (en) 2010-02-11
ATE454013T1 (en) 2010-01-15
EP1575294B1 (en) 2009-12-30
EP1575294A1 (en) 2005-09-14
CN1668110A (en) 2005-09-14
US20120200663A1 (en) 2012-08-09

Similar Documents

Publication Publication Date Title
US8374236B2 (en) Method and apparatus for improving the average image refresh rate in a compressed video bitstream
JP5180294B2 (en) Buffer-based rate control that utilizes frame complexity, buffer level, and intra-frame location in video encoding
US7170938B1 (en) Rate control method for video transcoding
JP4571489B2 (en) Method and apparatus for displaying quantizer parameters in a video coding system
US8355437B2 (en) Video error resilience
KR100329892B1 (en) Control strategy for dynamically encoding multiple streams of video data in parallel for multiplexing onto a constant bit rate channel
US6324217B1 (en) Method and apparatus for producing an information stream having still images
US20020122491A1 (en) Video decoder architecture and method for using same
US20020118755A1 (en) Video coding architecture and methods for using same
US20130308707A1 (en) Methods and device for data alignment with time domain boundary
US20040223549A1 (en) Video decoder architecture and method for using same
CA2504185A1 (en) High-fidelity transcoding
EP1829376A1 (en) Rate control with buffer underflow prevention
WO2006067373A1 (en) Processing video signals
US6961377B2 (en) Transcoder system for compressed digital video bitstreams
US7826529B2 (en) H.263/MPEG video encoder for efficiently controlling bit rates and method of controlling the same
JPH07312756A (en) Circuit, device and method for conversion of information quantity of compressed animation image code signal
WO2005065030A2 (en) Video compression device and a method for compressing video
KR20040048289A (en) Transcoding apparatus and method, target bit allocation, complexity prediction apparatus and method of picture therein
US6040875A (en) Method to compensate for a fade in a digital video input sequence
JPH08251597A (en) Moving image encoding and decoding device
KR101371507B1 (en) System and method for low-delay video telecommunication
JP2001346207A (en) Image information converter and method
KR100923961B1 (en) System and method for low-delay video telecommunication
KR100932727B1 (en) Video stream switching device and method

Legal Events

Date Code Title Description
AS Assignment

Owner name: POLYCOM, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SIEVERS, JOHN;LINDBERGH, DAVID;BOTZKO, STEPHEN;AND OTHERS;REEL/FRAME:015132/0250

Effective date: 20040311

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION