WO2007081177A1 - Processing multiview video - Google Patents

Processing multiview video

Info

Publication number
WO2007081177A1
WO2007081177A1 (PCT/KR2007/000226)
Authority
WO
WIPO (PCT)
Prior art keywords
view
information
neighboring block
block
views
Prior art date
Application number
PCT/KR2007/000226
Other languages
French (fr)
Inventor
Jeong Hyu Yang
Original Assignee
Lg Electronics Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from KR1020060037773A external-priority patent/KR20070076356A/en
Priority claimed from KR1020060110337A external-priority patent/KR20070076391A/en
Priority claimed from KR1020060110338A external-priority patent/KR20070076392A/en
Application filed by Lg Electronics Inc. filed Critical Lg Electronics Inc.
Priority to EP07700953A priority Critical patent/EP1977593A4/en
Priority to JP2008550242A priority patent/JP5199123B2/en
Publication of WO2007081177A1 publication Critical patent/WO2007081177A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/103 Selection of coding mode or of prediction mode
    • H04N19/105 Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
    • H04N19/136 Incoming video signal characteristics or properties
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H04N19/174 Adaptive coding where the coding unit is an image region, the region being a slice, e.g. a line of blocks or a group of blocks
    • H04N19/176 Adaptive coding where the coding unit is an image region, the region being a block, e.g. a macroblock
    • H04N19/187 Adaptive coding where the coding unit is a scalable video layer
    • H04N19/19 Adaptive coding using optimisation based on Lagrange multipliers
    • H04N19/197 Adaptive coding specially adapted for the computation of encoding parameters, including determination of the initial value of an encoding parameter
    • H04N19/30 Coding using hierarchical techniques, e.g. scalability
    • H04N19/44 Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • H04N19/597 Predictive coding specially adapted for multi-view video sequence encoding
    • H04N19/61 Transform coding in combination with predictive coding
    • H04N19/70 Syntax aspects related to video coding, e.g. related to compression standards
    • H04N5/00 Details of television systems
    • H04N5/455 Demodulation-circuits (receiver circuitry for the reception of television signals according to analogue transmission standards)

Definitions

  • The invention relates to processing multiview video.
  • Multiview Video Coding (MVC) relates to compression of video sequences (e.g., sequences of images or pictures) that are typically acquired by respective cameras.
  • The video sequences, or "views," can be encoded according to a standard such as MPEG.
  • A picture in a video sequence can represent a full video frame or a field of a video frame.
  • A slice is an independently coded portion of a picture that includes some or all of the macroblocks in the picture, and a macroblock includes blocks of picture elements (or "pixels").
  • The video sequences can be encoded as a multiview video sequence according to the H.264/AVC codec technology, and many developers are conducting research into amendments of standards to accommodate multiview video sequences.
  • The term "profile" indicates the standardization of technical components for use in the video encoding/decoding algorithms.
  • A profile is the set of technical components prescribed for decoding a bitstream of a compressed sequence, and may be considered a sub-standard.
  • Three such profiles are a baseline profile, a main profile, and an extended profile.
  • A variety of functions for the encoder and the decoder have been defined in the H.264 standard, such that the encoder and the decoder can be compatible with the baseline profile, the main profile, and the extended profile, respectively.
  • The bitstream for the H.264/AVC standard is structured according to a Video Coding Layer (VCL) for processing the moving-image coding (i.e., the sequence coding), and a Network Abstraction Layer (NAL) associated with a subsystem capable of transmitting/storing the encoded information.
  • The output data of the encoding process is VCL data, and is mapped into NAL units before it is transmitted or stored.
  • Each NAL unit includes a Raw Byte Sequence Payload (RBSP) corresponding to either compressed video data or header information.
  • The NAL unit includes a NAL header and an RBSP.
  • The NAL header includes flag information (e.g., "nal_ref_idc") and identification (ID) information (e.g., "nal_unit_type").
  • The flag information "nal_ref_idc" indicates the presence or absence of a slice used as a reference picture of the NAL unit.
  • The ID information "nal_unit_type" indicates the type of the NAL unit.
  • The RBSP stores compressed original data. RBSP trailing bits can be added to the last part of the RBSP, such that the length of the RBSP can be represented as a multiple of 8 bits.
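As a minimal sketch of this byte-alignment rule, following the H.264/AVC convention of a stop bit followed by alignment zero bits (the bit-list representation is illustrative only):

```python
def add_rbsp_trailing_bits(bits):
    """Append the RBSP stop bit ('1'), then zero bits, so that the
    total RBSP length becomes a multiple of 8 bits."""
    bits = bits + [1]                 # rbsp_stop_one_bit
    while len(bits) % 8 != 0:
        bits.append(0)                # rbsp_alignment_zero_bit
    return bits

# A 13-bit payload is padded out to 16 bits (two whole bytes).
padded = add_rbsp_trailing_bits([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0])
assert len(padded) % 8 == 0
```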
  • There are a variety of NAL unit types, for example, an Instantaneous Decoding Refresh (IDR) picture, a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), and Supplemental Enhancement Information (SEI).
  • The standard generally defines target products using various profiles and levels, such that a target product can be implemented at an appropriate cost.
  • The decoder satisfies a predetermined constraint at a corresponding profile and level.
  • The profile and the level indicate a function or parameter of the decoder, such that they indicate which compressed images can be handled by the decoder.
  • Specific information indicating which one of multiple profiles corresponds to the bitstream can be identified by profile ID information.
  • The profile ID information "profile_idc" provides a flag for identifying a profile associated with the bitstream.
  • The H.264/AVC standard includes three profile identifiers (IDs). If the profile ID information "profile_idc" is set to "66", the bitstream is based on the baseline profile. If "profile_idc" is set to "77", the bitstream is based on the main profile. If "profile_idc" is set to "88", the bitstream is based on the extended profile.
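A minimal sketch of how a decoder might branch on these "profile_idc" values. The three numeric codes come from the text above; the code assigned to the multiview profile is a placeholder assumption, since the text does not fix a value for "MULTI_VIEW_PROFILE":

```python
MULTI_VIEW_PROFILE = 118  # placeholder value, assumed for illustration

PROFILES = {
    66: "baseline",
    77: "main",
    88: "extended",
    MULTI_VIEW_PROFILE: "multiview",
}

def identify_profile(profile_idc):
    """Map the "profile_idc" value extracted from the SPS to a
    profile name; unknown values are reported as such."""
    return PROFILES.get(profile_idc, "unknown")

assert identify_profile(77) == "main"
assert identify_profile(MULTI_VIEW_PROFILE) == "multiview"
```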
  • The above-mentioned "profile_idc" information may be contained in the SPS (Sequence Parameter Set), for example.

Disclosure of Invention

  • A method for decoding a video signal comprises: receiving a bitstream comprising the video signal encoded according to a first profile that represents a selection from a set of multiple profiles that includes at least one profile for a multiview video signal, and profile information that identifies the first profile; extracting the profile information from the bitstream; and decoding the video signal according to the determined profile using illumination compensation between segments of pictures in respective views when the determined profile corresponds to a multiview video signal with each of multiple views comprising multiple pictures segmented into multiple segments (e.g., an image block segment such as a single block or a macroblock, or a segment such as a slice of an image).
  • Aspects can include one or more of the following features.
  • The method further comprises extracting from the bitstream configuration information associated with multiple views when the determined profile corresponds to a multiview video signal, wherein the configuration information comprises at least one of view-dependency information representing dependency relationships between respective views, view identification information indicating a reference view, view-number information indicating the number of views, view level information for providing view scalability, and view-arrangement information indicating a camera arrangement.
  • The profile information is located in a header of the bitstream.
  • The view level information corresponds to one of a plurality of levels associated with a hierarchical view prediction structure among the views of the multiview video signal.
  • The view-dependency information represents the dependency relationships in a two-dimensional data structure.
  • The two-dimensional data structure comprises a matrix.
  • The segments comprise image blocks.
  • Using illumination compensation for a first segment comprises obtaining an offset value for illumination compensation of the first segment by forming a sum that includes a predictor for illumination compensation of the first segment and a residual value.
  • The method further comprises selecting at least one neighboring block based on whether one or more conditions are satisfied for a neighboring block, in an order in which one or more vertical or horizontal neighbors are followed by one or more diagonal neighbors.
  • Selecting at least one neighboring block comprises determining whether one or more conditions are satisfied for a neighboring block in the order of: a left neighboring block, followed by an upper neighboring block, followed by a right-upper neighboring block, followed by a left-upper neighboring block.
  • Determining whether one or more conditions are satisfied for a neighboring block comprises extracting a value associated with the neighboring block from the bitstream indicating whether illumination compensation of the neighboring block is to be performed.
  • Selecting at least one neighboring block comprises determining whether to use an offset value for illumination compensation of a single neighboring block or multiple offset values for illumination compensation of respective neighboring blocks.
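The neighbor-checking order in the preceding bullets can be sketched as follows; the block representation and the usability predicate (e.g., a check of the neighbor's illumination-compensation flag) are assumptions for illustration:

```python
def select_neighboring_block(neighbors, usable):
    """Return the first neighboring block that satisfies the given
    conditions, checking the vertical/horizontal neighbors before
    the diagonal ones: left, upper, upper-right, then upper-left."""
    for position in ("left", "upper", "upper_right", "upper_left"):
        block = neighbors.get(position)
        if block is not None and usable(block):
            return block
    return None  # no neighbor qualifies; a default predictor is used

# Example: prefer neighbors whose illumination-compensation flag is set.
neighbors = {"left": {"ic_flag": 0}, "upper": {"ic_flag": 1}}
chosen = select_neighboring_block(neighbors, lambda b: b["ic_flag"] == 1)
```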
  • A method for decoding a multiview video signal comprises: receiving a bitstream comprising the multiview video signal encoded according to dependency relationships between respective views, and view-dependency data representing the dependency relationships; extracting the view-dependency data and determining the dependency relationships from the extracted data; and decoding the multiview video signal according to the determined dependency relationships using illumination compensation between segments of pictures in respective views, where the multiview video signal includes multiple views each comprising multiple pictures segmented into multiple segments.
  • Aspects can include one or more of the following features.
  • The view-dependency data represents the dependency relationships in a two-dimensional data structure.
  • The view-dependency data comprises a matrix.
  • The method further comprises extracting from the bitstream configuration information comprising at least one of view identification information indicating a reference view, view-number information indicating the number of views, view level information for providing view scalability, and view-arrangement information indicating a camera arrangement.
  • The segments comprise image blocks.
  • Using illumination compensation for a first segment comprises obtaining an offset value for illumination compensation of the first segment by forming a sum that includes a predictor for illumination compensation of the first segment and a residual value.
  • The method further comprises selecting at least one neighboring block based on whether one or more conditions are satisfied for a neighboring block, in an order in which one or more vertical or horizontal neighbors are followed by one or more diagonal neighbors.
  • Selecting at least one neighboring block comprises determining whether one or more conditions are satisfied for a neighboring block in the order of: a left neighboring block, followed by an upper neighboring block, followed by a right-upper neighboring block, followed by a left-upper neighboring block.
  • Determining whether one or more conditions are satisfied for a neighboring block comprises extracting a value associated with the neighboring block from the bitstream indicating whether illumination compensation of the neighboring block is to be performed.
  • Selecting at least one neighboring block comprises determining whether to use an offset value for illumination compensation of a single neighboring block or multiple offset values for illumination compensation of respective neighboring blocks.
  • The method further comprises, when multiple offset values are to be used, obtaining the predictor for performing illumination compensation of the first block by combining the multiple offset values.
  • Combining the multiple offset values comprises taking an average or median of the offset values.
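A sketch of the combination step, under the assumption that offsets are plain numbers; the average and the median are the two combinations named above:

```python
import statistics

def combine_offsets(offsets, method="average"):
    """Combine the offset values of several neighboring blocks into
    a single predictor, by average or by median."""
    if method == "average":
        return sum(offsets) / len(offsets)
    return statistics.median(offsets)

assert combine_offsets([4, 6, 11], method="median") == 6
```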
  • A method for encoding a video signal comprises generating a bitstream capable of being decoded into the video signal by the respective decoding method.
  • A method for encoding a bitstream comprises: forming the bitstream according to a first profile that represents a selection from a set of multiple profiles that includes at least one profile for a multiview video signal, and profile information that identifies the first profile; and providing information for illumination compensation between segments of pictures in respective views when the first profile corresponds to a multiview video signal with each of multiple views comprising multiple pictures segmented into multiple segments.
  • A method for encoding a bitstream comprises: forming the bitstream according to dependency relationships between respective views, and view-dependency data representing the dependency relationships; and providing information for illumination compensation between segments of pictures in respective views, where the multiview video signal includes multiple views each comprising multiple pictures segmented into multiple segments.
  • A computer program stored on a computer-readable medium comprises instructions for causing a computer to perform the respective decoding method.
  • Image data embodied on a machine-readable information carrier is capable of being decoded into a video signal by the respective decoding method.
  • A decoder comprises means for performing the respective decoding method.
  • An encoder comprises means for generating a bitstream capable of being decoded into a video signal by the respective decoding method.
  • FIG. 1 is a diagram of an exemplary decoding apparatus.
  • FIG. 2 is a structural diagram illustrating a sequence parameter set RBSP syntax.
  • FIG. 3A is a structural diagram illustrating a bitstream including only one sequence.
  • FIG. 3B is a structural diagram illustrating a bitstream including two sequences.
  • FIGS. 4A-4C are diagrams illustrating exemplary Group Of GOPs (GGOP) structures.
  • FIG. 5 is a flowchart illustrating a method for decoding a video sequence.
  • FIGS. 6A-6B, 7A-7B, and 8 are diagrams illustrating examples of multiview-sequence prediction structures.
  • FIGS. 9A-9B are diagrams illustrating a hierarchical prediction structure between several viewpoints of multiview sequence data.
  • FIGS. 10A-10B are diagrams illustrating a prediction structure of two-dimensional (2D) multiview sequence data.
  • FIGS. 11A-11C are diagrams illustrating a multiview-sequence prediction structure.
  • FIG. 12 is a diagram illustrating a hierarchical encoding/decoding system.
  • FIG. 13 is a flowchart illustrating a method for encoding a video sequence.
  • FIG. 14 is a block diagram illustrating a process for deriving a predicted average pixel value of a current block from reference blocks of other views.
  • FIG. 15 is a detailed block diagram illustrating a process for deriving a predicted average pixel value of a current block from reference blocks of other views.
  • FIG. 16 is a diagram illustrating a 16x16 macroblock.
  • FIGS. 17A-17B are diagrams illustrating 16x8 macroblocks.
  • FIGS. 18A-18B are diagrams illustrating 8x16 macroblocks.
  • FIGS. 19A-19B are diagrams illustrating 8x8 macroblocks.
  • FIG. 20 is a diagram illustrating a process for obtaining an offset value of a current block.
  • FIG. 21 is a flowchart illustrating a process for performing illumination compensation of a current block.
  • FIG. 22 is a flowchart illustrating a method for obtaining a predictor by determining whether a reference index of a current block is equal to a reference index of a neighboring block.
  • FIG. 23 is a flow chart illustrating a method for performing illumination compensation on the basis of a prediction type of a current block according to the present invention.
  • FIG. 24 is a flow chart illustrating a method for performing illumination compensation using flag information indicating whether the illumination compensation of a block is performed.
  • FIG. 25 is a flow chart illustrating a method for predicting flag information of a current block by determining whether a reference index of the current block is equal to a reference index of a neighboring block.
  • FIG. 26 is a flow chart illustrating a method for performing illumination compensation when a current block is predictively coded by two or more reference blocks.
  • FIG. 27 is a flow chart illustrating a method for performing illumination compensation using not only a flag indicating whether illumination compensation of a current block is performed, but also an offset value of a current block.
  • FIGS. 28A-28B are diagrams illustrating a method for performing illumination compensation using a flag and an offset value in association with blocks of P and B slices.
  • FIG. 29 is a flow chart illustrating a method for performing illumination compensation when a current block is predictively encoded by two or more reference blocks.
  • FIG. 30 is a flow chart illustrating a method for performing illumination compensation using a flag indicating whether illumination compensation of a current block is performed.
  • FIGS. 31A-31C are diagrams illustrating the scope of flag information indicating whether illumination compensation of a current block is performed.
  • FIG. 32 is a flow chart illustrating a method for obtaining a motion vector considering an offset value of a current block.
  • An input bitstream includes information that allows a decoding apparatus to determine whether the input bitstream relates to a multiview profile.
  • Supplementary information associated with the multiview sequence is added according to a syntax to the bitstream and transmitted to the decoder.
  • The multiview profile ID can indicate a profile mode for handling multiview video data according to an amendment of the H.264/AVC standard.
  • The MVC (Multiview Video Coding) technology is an amendment of the H.264/AVC standard. That is, a specific syntax is added as supplementary information for an MVC mode. Such an amendment to support MVC technology can be more effective than an alternative in which an unconditional syntax is used. For example, if the profile identifier of the AVC technology is indicative of a multiview profile, the addition of multiview sequence information may increase coding efficiency.
  • The sequence parameter set (SPS) of an H.264/AVC bitstream is indicative of header information including information (e.g., a profile and a level) associated with the encoding of the entire sequence.
  • The entire compressed moving-image sequence can begin at a sequence header, such that the sequence parameter set (SPS) corresponding to the header information arrives at the decoder earlier than the data that refers to the parameter set.
  • The sequence parameter set RBSP acts as header information for the compressed moving-image data at entry S1 (FIG. 2).
  • The profile ID information "profile_idc" identifies which one of several profiles corresponds to the received bitstream.
  • The profile ID information "profile_idc" can be set, for example, to "MULTI_VIEW_PROFILE", so that the syntax including the profile ID information can determine whether the received bitstream relates to a multiview profile.
  • The following configuration information can be added when the received bitstream relates to the multiview profile.
  • FIG. 1 is a block diagram illustrating an exemplary decoding apparatus (or "decoder") of a multiview video system for decoding a video signal containing a multiview video sequence.
  • The multiview video system includes a corresponding encoding apparatus (or "encoder") to provide the multiview video sequence as a bitstream that includes encoded image data embodied on a machine-readable information carrier (e.g., a machine-readable storage medium, or a machine-readable energy signal propagating between a transmitter and receiver).
  • The decoding apparatus includes a parsing unit 10, an entropy decoding unit 11, an inverse quantization/inverse transform unit 12, an inter-prediction unit 13, an intra-prediction unit 14, a deblocking filter 15, and a decoded-picture buffer 16.
  • The inter-prediction unit 13 includes a motion compensation unit 17, an illumination compensation unit 18, and an illumination-compensation offset prediction unit 19.
  • The parsing unit 10 parses the received video sequence in NAL units to decode the received video sequence.
  • One or more sequence parameter sets and picture parameter sets are transmitted to the decoder before a slice header and slice data are decoded.
  • The NAL header or an extended area of the NAL header may include a variety of configuration information, for example, temporal level information, view level information, anchor picture ID information, and view ID information.
  • Temporal level information is indicative of hierarchical-structure information for providing temporal scalability from a video signal, such that sequences of a variety of time zones can be provided to a user via the temporal level information.
  • View level information is indicative of hierarchical-structure information for providing view scalability from the video signal.
  • The multiview video sequence can define the temporal level and view level, such that a variety of temporal sequences and view sequences can be provided to the user according to the defined temporal level and view level.
  • If the level information is defined as described above, the user may employ the temporal scalability and the view scalability. Therefore, the user can view a sequence corresponding to a desired time and view, or can view a sequence corresponding to another limitation.
  • The above-mentioned level information may also be established in various ways according to reference conditions. For example, the level information may be changed according to a camera location, and may also be changed according to a camera arrangement type. In addition, the level information may also be arbitrarily established without a special reference.
  • An anchor picture is an encoded picture in which all slices refer only to slices in the current view and not to slices in other views. Random access between views can be used for multiview-sequence decoding.
  • Anchor picture ID information can be used to perform the random access process to access data of a specific view without requiring a large amount of data to be decoded.
  • View ID information is indicative of specific information for discriminating between a picture of a current view and a picture of another view. In order to discriminate one picture from other pictures when the video sequence signal is encoded, a Picture Order Count (POC) and frame number information ("frame_num") can be used.
  • Inter-view prediction can be performed.
  • An identifier is used to discriminate a picture of the current view from a picture of another view.
  • A view identifier can be defined to indicate a picture's view.
  • The decoding apparatus can obtain information of a picture in a view different from the view of the current picture using the above-mentioned view identifier, such that it can decode the video signal using the information of that picture.
  • The above-mentioned view identifier can be applied to the overall encoding/decoding process of the video signal. Also, the above-mentioned view identifier can be applied to the multiview video coding process using the frame number information "frame_num" considering a view.
  • The multiview sequence has a large amount of data, and a hierarchical encoding function for each view (also called "view scalability") can be used for processing the large amount of data.
  • A prediction structure considering the views of the multiview sequence may be defined.
  • The above-mentioned prediction structure may be defined by structuring the prediction order or direction of several view sequences. For example, if several view sequences to be encoded are given, a center location of the overall arrangement is set to a base view, such that view sequences to be encoded can be hierarchically selected. The end of the overall arrangement or other parts may also be set to the base view.
  • If the number of camera views is a power of 2, a hierarchical prediction structure between the several view sequences may be formed on that basis. Otherwise, if the number of camera views is not a power of 2, virtual views can be used, and the prediction structure may be formed on the basis of the virtual views. If the camera arrangement is a two-dimensional arrangement, the prediction order may be established by turns in the horizontal or vertical direction. A parsed bitstream is entropy-decoded by the entropy decoding unit 11, and data such as coefficients of each macroblock, motion vectors, etc., are extracted.
  • The inverse quantization/inverse transform unit 12 multiplies a received quantization value by a predetermined constant to acquire a transformed coefficient value, and performs an inverse transform of the acquired coefficient value, such that it reconstructs a pixel value.
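A deliberately simplified sketch of this rescaling step: real H.264/AVC inverse quantization uses per-position scaling and an integer inverse transform, but the "multiply by a constant" idea reads as follows.

```python
def inverse_quantize(levels, scale):
    """Rescale quantized coefficient levels back into transform
    coefficients by multiplying each level by a quantizer-dependent
    constant (a simplification of H.264/AVC scaling)."""
    return [level * scale for level in levels]

coefficients = inverse_quantize([3, -1, 0, 2], scale=16)
```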
  • The intra-prediction unit 14 performs intra prediction from decoded samples within the current picture, using the reconstructed pixel values.
  • The deblocking filter 15 is applied to each decoded macroblock to reduce block distortion.
  • The deblocking filter 15 smooths block edges, such that it improves the image quality of the decoded frame.
  • The selection of the filtering process depends on a boundary strength and a gradient of the image samples arranged in the vicinity of the boundary.
  • The filtered pictures are stored in the decoded-picture buffer 16, such that they can be output or used as reference pictures.
  • The decoded-picture buffer 16 stores or outputs pre-coded pictures to perform the inter-prediction function.
  • Frame number information "frame_num" and POC (Picture Order Count) information of the pictures are used to store or output the pre-coded pictures.
  • Pictures of other views may exist among the above-mentioned pre-coded pictures in the case of the MVC technology. Therefore, in order to use such pictures as reference pictures, not only the "frame_num" and POC information but also a view identifier indicating a picture's view may be used as necessary.
  • The inter-prediction unit 13 performs inter-prediction using the reference pictures stored in the decoded-picture buffer 16.
  • The inter-coded macroblock may be divided into macroblock partitions. Each macroblock partition can be predicted from one or two reference pictures.
  • The motion compensation unit 17 compensates for the motion of the current block using the information received from the entropy decoding unit 11.
  • The motion compensation unit 17 extracts motion vectors of blocks neighboring the current block from the video signal, and obtains a motion-vector predictor of the current block.
  • The motion compensation unit 17 compensates for the motion of the current block using the obtained motion-vector predictor and a difference value, extracted from the video signal, between the motion vector and the predictor.
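A sketch of this reconstruction, assuming the usual H.264/AVC-style median predictor over the neighbors' motion vectors; the tuple representation is illustrative:

```python
import statistics

def reconstruct_motion_vector(neighbor_mvs, mvd):
    """Form the motion-vector predictor as the component-wise median
    of the neighboring blocks' motion vectors, then add the decoded
    difference value (mvd) to recover the current block's vector."""
    pred_x = statistics.median(mv[0] for mv in neighbor_mvs)
    pred_y = statistics.median(mv[1] for mv in neighbor_mvs)
    return (pred_x + mvd[0], pred_y + mvd[1])

mv = reconstruct_motion_vector([(2, 0), (4, 1), (3, -1)], mvd=(1, 1))
# predictor is (3, 0), so mv == (4, 1)
```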
  • The above-mentioned motion compensation may be performed using only one reference picture, or using a plurality of reference pictures. Therefore, if the above-mentioned reference pictures are determined to be pictures of views different from the current view, the motion compensation may be performed according to a view identifier indicating the other views.
  • A direct mode is a coding mode for predicting motion information of the current block on the basis of the motion information of a block that has already been decoded.
  • The above-mentioned direct mode can reduce the number of bits required for encoding the motion information, resulting in increased compression efficiency.
  • A temporal direct mode predicts motion information of the current block using a correlation of motion information in the temporal direction. Similar to the temporal direct mode, the decoder can predict the motion information of the current block using a correlation of motion information in the view direction.
  • View sequences may be captured by different cameras, such that a difference in illumination may occur due to internal or external factors of the cameras.
  • An illumination compensation unit 18 performs an illumination compensation function.
  • Flag information may be used to indicate whether illumination compensation at a specific level of a video signal is performed.
  • The illumination compensation unit 18 may perform the illumination compensation function using flag information indicating whether illumination compensation of a corresponding slice or macroblock is performed.
  • The above-mentioned method for performing the illumination compensation using the above-mentioned flag information may be applied to a variety of macroblock types (e.g., an inter 16x16 mode, a B-skip mode, a direct mode, etc.).
  • Information of a neighboring block, or information of a block in a view different from that of the current block, may be used, and an offset value of the current block may also be used.
  • The offset value of the current block is a difference between the average pixel value of the current block and the average pixel value of a reference block corresponding to the current block.
  • A predictor of the current-block offset value may be obtained using the neighboring blocks of the current block, and a residual value between the offset value and the predictor may be used. Therefore, the decoder can reconstruct the offset value of the current block using the residual value and the predictor.
  • The offset value of the current block can be predicted using the offset value of a neighboring block. Prior to predicting the current-block offset value, it is determined whether the reference index of the current block is equal to the reference index of the neighboring blocks. According to the determined result, the illumination compensation unit 18 can determine which one of the neighboring blocks will be used or which value will be used.
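A sketch of this reference-index check, with an assumed (ref_idx, offset) pair per neighbor; the fallback of 0 when no neighbor matches is also an assumption:

```python
def predict_offset(current_ref_idx, neighbors):
    """Choose the offset predictor for the current block: take the
    offset of the first neighboring block whose reference index
    equals the current block's, falling back to 0 otherwise."""
    for ref_idx, offset in neighbors:
        if ref_idx == current_ref_idx:
            return offset
    return 0

# The decoder then reconstructs: offset = predictor + residual.
predictor = predict_offset(1, [(0, 5), (1, 7)])   # -> 7
offset = predictor + 2                            # residual of 2 -> 9
```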
  • The illumination compensation unit 18 may perform the illumination compensation using a prediction type of the current block. If the current block is predictively encoded from two reference blocks, the illumination compensation unit 18 may obtain an offset value corresponding to each reference block using the offset value of the current block. As described above, the inter-predicted or intra-predicted pictures acquired using the illumination compensation and motion compensation are selected according to a prediction mode, and the current picture is reconstructed. A variety of examples of encoding/decoding methods for reconstructing a current picture are described later in this document.
  • FIG. 2 is a structural diagram illustrating a sequence parameter set RBSP syntax.
  • A sequence parameter set is indicative of header information including information (e.g., a profile and a level) associated with the encoding of the entire sequence.
  • The entire compressed sequence can begin at a sequence header, such that the sequence parameter set corresponding to the header information arrives at the decoder earlier than the data referring to the parameter set.
  • The sequence parameter set acts as header information associated with the resultant data of the compressed moving images at step S1.
  • The "profile_idc" information determines which one of several profiles corresponds to the received bitstream at step S2. For example, if "profile_idc" is set to "66", this indicates the received bitstream is based on the baseline profile. If "profile_idc" is set to "77", this indicates the received bitstream is based on the main profile.
  • If the received bitstream relates to the multiview profile at step S3, a variety of information for the multiview sequence can be added to the received bitstream.
  • The "reference_view" information represents a reference view among the entire set of views, and information associated with the reference view may be added to the bitstream.
  • The MVC technique encodes or decodes a reference-view sequence using an encoding scheme capable of being used for a single sequence (e.g., the H.264/AVC codec). If the reference view is added to the syntax, the syntax indicates which one of the several views will be set to the reference view.
  • A base view acting as an encoding reference serves as the above-mentioned reference view. Images of the reference view are independently encoded without referring to images of other views.
  • The number-of-views information may add specific information indicating the number of views captured by several cameras.
  • The view number ("num_views") of each sequence may be set in various ways.
  • The "num_views" information is transmitted to the encoder and the decoder, such that the encoder and the decoder can freely use the "num_views" information at step S5.
  • Camera arrangement indicates the arrangement type of the cameras when a sequence is acquired. If the "view_arrangement" information is added to the syntax, the encoding process can be effectively performed to be appropriate for the individual arrangements. Thereafter, if a new encoding method is developed, different "view_arrangement" information can be used.
  • The number-of-frames information "temporal_units_size" indicates the number of successively encoded/decoded frames of each view. If required, specific information indicating the number of frames may also be added.
  • The "temporal_units_size" information indicates how many frames are processed at the N-th view before the M-th view is processed.
  • Processing may proceed at only one view for "temporal_units_size" frames before moving on to the next view.
  • the "temporal_units_size” information may be equal to or less than the conventional GOP length. For example, FIGS.
  • 4B—4C show the GGOP structure for explaining the
  • the MVC method arranges several frames on a time axis and a view axis, such that it may process a single frame of each view at the same time value, and may then process a single frame of each view at the next time value, corresponding to a "temporal_units_size" of "1".
  • the MVC method may process N frames at the same view, and may then process the N frames at the next view, corresponding to a "temporal_units_size" of X ⁇ N" Since generally at least one frame is processed, "temporal_units_size_minusl" may be added to the syntax to represent how many additional frames are processed.
  • temporary_units_size_minusl 0" and
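The view/time traversal implied by "temporal_units_size" can be sketched as below; the view and frame indices are illustrative:

```python
def processing_order(num_views, num_frames, temporal_units_size):
    """List (view, time) pairs in processing order: run through
    `temporal_units_size` successive frames of one view, then move
    to the next view. A size of 1 visits every view at one time
    instant before advancing to the next time instant."""
    order = []
    for t0 in range(0, num_frames, temporal_units_size):
        for view in range(num_views):
            for t in range(t0, min(t0 + temporal_units_size, num_frames)):
                order.append((view, t))
    return order

# temporal_units_size = 1: all views at T1, then all views at T2, ...
assert processing_order(3, 2, 1) == [(0, 0), (1, 0), (2, 0),
                                     (0, 1), (1, 1), (2, 1)]
```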
  • The profiles of the conventional encoding scheme have no common profile, such that a flag is further used to indicate compatibility.
  • The "constraint_set*_flag" information indicates which profile's decoder can decode the bitstream.
  • The "constraint_set0_flag" information indicates that the bitstream can be decoded by a decoder of the baseline profile at step S8.
  • The "constraint_set1_flag" information indicates that the bitstream can be decoded by a decoder of the main profile at step S9.
  • The "constraint_set2_flag" information indicates that the bitstream can be decoded by a decoder of the extended profile at step S10. Therefore, there is a need to define the "MULTI_VIEW_PROFILE" decoder, and the "MULTI_VIEW_PROFILE" decoder may be defined by "constraint_set4_flag" information at step S11.
  • The "level_idc" information indicates a level identifier.
  • The "level" generally indicates the capability of the decoder and the complexity of the bitstream, and relates to the technical elements prescribed in the above-mentioned profiles at step S12.
  • The "seq_parameter_set_id" information indicates the SPS ID information contained in the SPS (Sequence Parameter Set) in order to identify sequence types at step S13.
  • FIG. 3A is a structural diagram illustrating a bitstream including only one sequence.
  • The sequence parameter set is indicative of header information including information (e.g., a profile and a level) associated with the encoding of the entire sequence.
  • The supplemental enhancement information (SEI) is indicative of supplementary information that is not required for the decoding process of the moving-image (i.e., sequence) encoding layer.
  • The picture parameter set is header information indicating the encoding mode of the entire picture.
  • The I slice performs only the intra coding process.
  • The P slice performs the intra coding process or the inter-prediction coding process.
  • The picture delimiter indicates a boundary between video pictures.
  • The system applies the SPS RBSP syntax to the above-mentioned SPS. Therefore, the system employs the above-mentioned syntax during the generation of the bitstream, such that it can add a variety of information to a desired object.
  • FIG. 3B is a structural diagram illustrating a bitstream including two sequences.
  • The H.264/AVC technology can handle a variety of sequences using a single bitstream.
  • The SPS includes SPS ID information ("seq_parameter_set_id") so as to identify a sequence.
  • The SPS ID information is prescribed in the PPS (Picture Parameter Set), such that it can identify which one of the SPSs will be used. Likewise, the PPS ID information ("pic_parameter_set_id") is prescribed in the slice header, such that the "pic_parameter_set_id" information can identify which one of the PPSs will be used.
  • For example, a header of slice #1 of FIG. 3B includes the PPS ID information ("pic_parameter_set_id") of PPS #1, and PPS #1 includes the SPS ID information of SPS #1, so it can be recognized that slice #1 belongs to sequence #1. In the same way, it can be recognized that slice #2 belongs to sequence #2.
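The ID chain described above (slice header to PPS to SPS) can be sketched with illustrative data structures; the field names mirror the syntax elements, but the lookup tables themselves are assumptions:

```python
from dataclasses import dataclass

@dataclass
class SPS:
    seq_parameter_set_id: int

@dataclass
class PPS:
    pic_parameter_set_id: int
    seq_parameter_set_id: int  # refers to the SPS this PPS uses

@dataclass
class Slice:
    pic_parameter_set_id: int  # refers to the PPS this slice uses

def sequence_of(slc, pps_table, sps_table):
    """Follow the ID chain slice -> PPS -> SPS to identify which
    sequence a slice belongs to."""
    pps = pps_table[slc.pic_parameter_set_id]
    return sps_table[pps.seq_parameter_set_id]

sps_table = {1: SPS(1), 2: SPS(2)}
pps_table = {1: PPS(1, 1), 2: PPS(2, 2)}
assert sequence_of(Slice(1), pps_table, sps_table) is sps_table[1]
```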
  • If two bitstreams are added and edited to create a new video bitstream, the two bitstreams are assigned different SPS ID information. Any one of the two bitstreams may also be converted into a multiview profile as necessary.
  • FIG. 4A shows an exemplary Group Of GOPs (GGOP) structure.
  • FIG. 4B and FIG. 4C show a GGOP structure for explaining the "temporal_units_size" concept.
  • A GOP is a data group of some pictures.
  • The MVC uses the GGOP concept to perform spatial prediction and temporal prediction.
  • A first length between the I slice and the P slice of each view sequence, a second length between the P slices, or a third length corresponding to a multiple of the first or second length is set as the "temporal_units_size" information.
  • Processing may proceed at only one view for "temporal_units_size" frames before moving on to the next view.
  • The "temporal_units_size" value may be equal to or less than the conventional GOP length.
  • The "temporal_units_size" information may be set, for example, to "3" or to "1".
  • In FIG. 4B, if the "temporal_units_size" information satisfies "temporal_units_size > 1" and one or more views begin at the I frame, (temporal_units_size + 1) frames can be processed. Also, the system can recognize which one of the several views corresponds to each frame of the entire sequence by referring to the above-mentioned "temporal_units_size" and "num_views" information.
  • Pictures of V1-V8 each indicate a GOP, and V4, acting as a base GOP, is used as a reference GOP for the other GOPs. If the "temporal_units_size" information is set to "1", the MVC method processes frames of the individual views at the same time zone, and can then re-process the frames of the individual views at the next time zone.
  • For example, the MVC method can firstly process the T1 frames, and can then process a plurality of frames in the order of T4 -> T2 -> ...
  • Otherwise, the MVC method may firstly process N frames in the direction of the time axis within a single view, and may then process the N frames at the next view. In other words, if the "temporal_units_size" information is set to "4", the MVC method may firstly process the frames contained in the T1-T4 frames of the V4 GOP, and may then process the corresponding frames of the next view.
  • In FIG. 4A, the number of views ("num_views") is set to "8", and the reference view is set to the V4 GOP (Group Of Pictures).
  • The number of frames indicates the number of successively encoded/decoded frames of each view. Therefore, if the frames of each view are processed at the same time zone in FIG. 4A, the "temporal_units_size" information is set to "1". If the frames are processed in the direction of the time axis within a single view, the "temporal_units_size" information is set to "N". The above-mentioned information is added during the bitstream generating process.
  • FIG. 5 is a flow chart illustrating a method for decoding a video sequence.
  • One or more pieces of profile information are extracted from the received bitstream.
  • The extracted profile information may correspond to at least one of several profiles (e.g., the baseline profile, the main profile, and the multiview profile).
  • The above-mentioned profile information may be changed according to the input video sequences at step S51.
  • At least one item of configuration information contained in the above-mentioned profile is extracted from the extracted profile information. For example, if the extracted profile information relates to the multiview profile, one or more items of configuration information (i.e., the "reference_view", "view_arrangement", and "temporal_units_size" information) contained in the multiview profile are extracted at step S53. In this way, the extracted information is used for decoding the multiview-coded bitstream.
  • FIGS. 6A-6B are conceptual diagrams illustrating a multiview-sequence prediction structure according to a first example.
  • Given the number (m) of viewpoints (i.e., the multiview number), the bitstream includes a single base-view bitstream and n hierarchical auxiliary-view bitstreams.
  • The term "base view" is indicative of a reference view from among the several viewpoints (i.e., the multiview).
  • A sequence (i.e., moving images) corresponding to the base view is encoded by a general video encoding scheme (e.g., MPEG-2, MPEG-4, H.263, or H.264) and is formed as an independent bitstream; this independent bitstream is referred to as a "base-view bitstream".
  • An auxiliary view is indicative of a remaining view, other than the above-mentioned base view, from among the several viewpoints (i.e., the multiview).
  • A sequence corresponding to the auxiliary view forms a bitstream by performing disparity estimation with respect to the base-view sequence, and this bitstream is referred to as an "auxiliary-view bitstream".
  • In the case of performing a hierarchical encoding process (i.e., a view scalability process) between the several viewpoints (i.e., the multiview), the above-mentioned auxiliary-view bitstream is classified into a first auxiliary-view bitstream, a second auxiliary-view bitstream, and an n-th auxiliary-view bitstream.
  • The bitstream may include the above-mentioned base-view bitstream and the above-mentioned auxiliary-view bitstreams as necessary.
  • In FIGS. 6A-6B, the bitstream includes a single base view and three hierarchical auxiliary views. If the bitstream includes the single base view and n hierarchical auxiliary views, it is preferable that the location of the base view from among the multiview, and the location of each hierarchical auxiliary view, are defined by general rules. For reference, square areas of FIGS. 6A-6B indicate individual viewpoints.
  • the number "0" is indicative of a base-view
  • the number “1” is indicative of a first hierarchical auxiliary-view
  • the number “2” is indicative of a second hierarchical auxiliary-view
  • the number “3” is indicative of a third hierarchical auxiliary-view.
  • a maximum of 8 viewpoints are exemplarily disclosed as the multiview video sequence, however, it should be noted that the multiview number is not limited to "8" and any multiview number is applicable to other examples as necessary.
  • FIGS. 6A-6B show an exemplary case in which the beginning view is located at the rightmost side. A specific view corresponding to the fourth position from the rightmost view 61 is used as the base-view.
  • the base-view location may be located at a specific location in the vicinity of a center view from among the multiview or may be set to the center view from among the multiview, because the base-view may be used as a reference for performing the predictive coding (or predictive encoding) process of other auxiliary-views.
  • the first hierarchical auxiliary-view location may be set to a left-side view spaced apart from the above-mentioned base-view by a 2^(n-2)-th magnitude, or a right-side view spaced apart from the above-mentioned base-view by the 2^(n-2)-th magnitude.
  • FIG. 6B shows an exemplary case in which a viewpoint spaced apart from the base-view in the right direction by the 2^(n-2)-th magnitude is determined to be the first hierarchical auxiliary-view.
  • the number of the first hierarchical auxiliary-view is set to "1".
  • the second hierarchical auxiliary-view location may be set to a left-side view spaced apart from the base-view by a 2^(n-2)-th magnitude, or a right-side view spaced apart from the first hierarchical auxiliary-view by the 2^(n-2)-th magnitude.
  • the above-mentioned case of FIG. 6A generates two second hierarchical auxiliary-views. Since the above-mentioned case of FIG. 6B has no view spaced apart from the first hierarchical auxiliary-view in the right direction by the 2^(n-2)-th magnitude, a viewpoint spaced apart from the base-view in the left direction by the 2^(n-2)-th magnitude is determined to be the second hierarchical auxiliary-view.
  • a viewpoint spaced apart from the second hierarchical auxiliary-view in the left direction by the 2^(n-2)-th magnitude may also be determined to be the second hierarchical auxiliary-view 63. However, if the viewpoint corresponds to either end of the multiview, the above-mentioned viewpoint may be determined to be the third hierarchical auxiliary-view.
  • One or two second hierarchical auxiliary-views may be generated in the case of FIG. 6B.
  • the third hierarchical auxiliary-view location is set to the remaining viewpoints other than the above-mentioned viewpoints having been selected as the base-view and the first and second hierarchical auxiliary- views.
  • in FIG. 6A, four third hierarchical auxiliary-views are generated.
  • in FIG. 6B, four or five third hierarchical auxiliary-views are generated.
  • FIGS. 7A-7B are conceptual diagrams illustrating a multiview-sequence prediction structure according to a second example.
  • FIGS. 7A-7B show that the beginning-view for selecting the base-view is located at the leftmost side, differently from FIGS. 6A-6B. In other words, a fourth view spaced apart from the leftmost side 65 is selected as the base-view. In FIGS. 7A-7B, the remaining parts other than the above-mentioned difference are the same as those of FIGS. 6A-6B.
  • FIG. 8 is a conceptual diagram illustrating a multiview-sequence prediction structure according to a third example.
  • FIG. 8 shows an exemplary case in which the multiview number (m) satisfies 2^(n-1) < m < 2^n.
  • because the multiview number (m) is not a power of two in this case, the system applies a virtual-view concept, such that the above-mentioned prediction structure can still be applied.
  • if the multiview number (m) is an odd number, (2^n - m + 1)/2 virtual-views are generated at the left side (or the right side) of the multiview arrangement, and (2^n - m - 1)/2 virtual-views are generated at the right side (or the left side) of the multiview arrangement. If the multiview number (m) is an even number, (2^n - m)/2 virtual-views are generated at the left side and the right side of the multiview arrangement, respectively. Then, the above-mentioned prediction structure can be applied with the resultant virtual views in the same manner.
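  • A small worked example of the virtual-view counts described above (the helper name is ours; only the arithmetic comes from the rules just stated):

```python
def virtual_view_counts(m: int, n: int) -> tuple[int, int]:
    """Number of virtual views added on each side so that m real views
    fill a 2**n-view prediction structure (per the rules above)."""
    total = 2 ** n - m
    if m % 2 == 1:  # odd number of real views: asymmetric split
        return (total + 1) // 2, (total - 1) // 2
    return total // 2, total // 2  # even: one half on each side

# e.g., m = 6 real views, n = 3 (8-view structure): one virtual view per side
print(virtual_view_counts(6, 3))  # (1, 1)
# m = 7: a single virtual view on one side only
print(virtual_view_counts(7, 3))  # (1, 0)
```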
  • for example, if the multiview number (m) is set to "6", the necessary virtual-views are added, and the base-view and the first to third hierarchical auxiliary-views are selected according to the above-mentioned example of FIG. 6A.
  • in the odd-numbered case, a single virtual-view is added to the end of the left side, such that the base-view and the first to third hierarchical auxiliary-views are selected according to the above-mentioned example of FIG. 6A.
  • FIGS. 9A-9B are conceptual diagrams illustrating a hierarchical prediction structure between several viewpoints of multiview sequence data.
  • FIG. 9A shows the implementation example of the case of FIG. 6A
  • FIG. 9B shows the implementation example of the case of FIG. 7A.
  • when the multiview number (m) is set to "8", the base-view and three hierarchical auxiliary-views are provided, such that the hierarchical encoding (or "view scalability") between several viewpoints is made available during the encoding of the multiview sequence.
  • the first hierarchical auxiliary-view 92 performs the estimation/encoding process between viewpoints (i.e., estimation/encoding process of the multiview) by referring to the base-view 91.
  • the second hierarchical auxiliary-views (93a and 93b) perform the estimation/encoding process between viewpoints by referring to the base-view 91 and/or the first hierarchical auxiliary-view 92.
  • the third hierarchical auxiliary-views (94a, 94b, 94c, and 94d) perform the estimation/encoding process between viewpoints by referring to the base-view 91, the first hierarchical auxiliary-view 92, and/or the second hierarchical auxiliary-views (93a and 93b).
  • the arrows in the drawings indicate progressing directions of the above-mentioned estimation/encoding process of the multiview, and it can be recognized that auxiliary streams contained in the same hierarchy may refer to different views as necessary.
  • the above-mentioned hierarchically-encoded bitstream is selectively decoded in the reception end according to display characteristics, and a detailed description thereof will be given later with reference to FIG. 12.
  • the prediction structure of the encoder may be changed to another structure, such that the decoder can easily recognize the prediction structure relationship of individual view images by transmission of information indicating the relationship of individual views. Also, specific information, indicating which one of levels from among the entire view hierarchy includes the individual views, may also be transmitted to the decoder.
  • if the view level (view_level) is assigned to respective images (or slices) and a dependency relationship between the view images is given, even if the prediction structure is changed in various ways by the encoder, the decoder can easily recognize the changed prediction structure.
  • the prediction structure/direction information of the respective views may be configured in the form of a matrix, such that the matrix-type prediction structure/direction information is transmitted to a destination.
  • the number of views (num_view) is transmitted to the decoder, and the dependency relationship of the respective views may also be represented by a two-dimensional (2D) matrix. If the dependency relationship of the views is changed in time, for example, if the dependency relationship of first frames of each GOP is different from that of other frames of the remaining time zones, the dependency-relationship matrix information associated with individual cases may be transmitted.
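  • The dependency relationship might be conveyed as a num_view x num_view matrix, as sketched below (the matrix contents are illustrative only, not a syntax defined here):

```python
num_view = 4
# dep[i][j] == 1 means view i uses view j as a prediction reference.
# Here view 0 is the base view; views 1-3 reference it hierarchically.
dep = [
    [0, 0, 0, 0],  # base view: no inter-view references
    [1, 0, 0, 0],  # first auxiliary view refers to the base view
    [1, 1, 0, 0],  # second auxiliary view refers to views 0 and 1
    [0, 1, 1, 0],  # third auxiliary view refers to views 1 and 2
]

def references_of(view: int) -> list[int]:
    """List the views a given view depends on, per the matrix."""
    return [j for j, used in enumerate(dep[view]) if used]

for v in range(num_view):
    print(f"view {v} references {references_of(v)}")
```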
  • FIGS. 10A-10B are conceptual diagrams illustrating a prediction structure of a two-dimensional (2D) multiview sequence according to a fourth example.
  • the first to third examples have disclosed the multiview of a one-dimensional array. It should be noted that they can also be applied to a two-dimensional (2D) multiview sequence as necessary.
  • squares indicate individual views arranged in the form of a 2D array, and numerals contained in the squares indicate the relationship of hierarchical views.
  • the number "0" is indicative of a base-view
  • the number “1” is indicative of a first hierarchical auxiliary-view
  • the number “2-1” or “2-2” is indicative of a second hierarchical auxiliary-view
  • the number “3-1” or “3-2” is indicative of a third hierarchical auxiliary-view
  • the number "4-1", “4-2” or “4-3” is indicative of a fourth hierarchical auxiliary-view
  • the number “5-1", "5-2", or “5-3” is indicative of a fifth hierarchical auxiliary- view.
  • the above-mentioned bitstream includes a single base-view bitstream and (n+k) hierarchical auxiliary-view bitstreams.
  • the above-mentioned (n+k) hierarchical auxiliary-views are formed alternately on the horizontal axis and the vertical axis.
  • a first hierarchical auxiliary-view from among the (n+k) hierarchical auxiliary-views in FIG. 10A is positioned at the vertical axis including the base-view.
  • a first hierarchical auxiliary-view from among the (n+k) hierarchical auxiliary-views in FIG. 10B is positioned at the horizontal axis including the base-view.
  • the bitstream includes a single base-view and five hierarchical auxiliary-views.
  • FIG. 10A shows that the hierarchical auxiliary-views are selected in the order of "vertical axis -> horizontal axis -> vertical axis -> ...".
  • the base-view location is determined in the same manner as in the above-mentioned one-dimensional array. Therefore, the base-view location is determined to be a specific view corresponding to a 2^(n-1)-th location in the direction of the horizontal axis and a 2^(k-1)-th location in the direction of the vertical axis.
  • the first hierarchical auxiliary-view location is determined to be a top-side view or bottom-side view spaced apart from the base-view location in the direction of the vertical axis.
  • the second hierarchical auxiliary-view locations are
  • the third hierarchical auxiliary-view locations are determined to be the remaining views contained in the vertical axes including not only the first and second hierarchical auxiliary-views but also the base-view.
  • the fourth hierarchical auxiliary-view location is determined to be a left-side view or right-side view spaced apart from the first to third hierarchical auxiliary-views and the base-view in the direction of the horizontal axis by the 2^(n-2)-th magnitude.
  • the fifth hierarchical auxiliary-view locations are determined to be the remaining views other than the base-view and the first to fourth hierarchical auxiliary-views.
  • the bitstream includes a single base-view and five hierarchical auxiliary-views.
  • FIG. 10B shows that the hierarchical auxiliary-views are selected in the order of "horizontal axis -> vertical axis -> horizontal axis -> ...".
  • the base-view location is determined in the same manner as in the above-mentioned one-dimensional array. Therefore, the base-view location is determined to be a specific view corresponding to a 2^(n-1)-th location in the direction of the horizontal axis and a 2^(k-1)-th location in the direction of the vertical axis.
  • the first hierarchical auxiliary-view location is determined to be a left-side view or right-side view spaced apart from the base-view location in the direction of the horizontal axis.
  • the second hierarchical auxiliary-view locations are
  • the third hierarchical auxiliary-view locations are determined to be left- and right-direction views spaced apart from the base-view and the first to second hierarchical auxiliary-views in the direction of the horizontal axis by the 2^(n-2)-th magnitude.
  • the fourth hierarchical auxiliary-view locations are determined to be the remaining views contained in the vertical axes including not only the first to third hierarchical auxiliary-views but also the base- view.
  • the fifth hierarchical auxiliary-view locations are determined to be the remaining views other than the base-view and the first to fourth hierarchical auxiliary-views.
  • FIGS. 11A-11C are conceptual diagrams illustrating a multiview-sequence prediction structure according to a fifth example.
  • the fifth example of FIGS. 11A-11C has prediction-structure rules different from those of the above-mentioned first to fourth examples.
  • the square areas of FIGS. 11A-11C indicate individual views; however, the numerals contained in the square areas indicate the order of prediction of the views.
  • the number "0" is indicative of a first predicted view (or a first view)
  • the number "1” is indicative of a second predicted view (or a second view)
  • the number "2" is indicative of a third predicted view (or a third view)
  • the number "3" is indicative of a fourth predicted view (or a fourth view) .
  • FIG. 11A shows decision formats of the first to fourth views according to the multiview number (m).
  • both ends of the multiview are set to the first view (0), and the center view from among the multiview is set to the second view (1) .
  • Views successively arranged by skipping over at least one view in both directions on the basis of the second view (1) are set to the third views (2), respectively.
  • the remaining views other than the first to third views are set to the fourth views (3), respectively.
  • after the first to fourth views are determined as described above, there is a need to discriminate between the base-view and the auxiliary-view.
  • any one of the first view, the second view, and the third view is set to the base-view, and the remaining views other than the base-view may be set to the auxiliary-views.
  • identification (ID) information (i.e., "base_view_position") of the base-view location may be contained in the bitstream.
  • FIG. 11B shows another example of the decision of the second view (1).
  • the second view (1) of FIG. 11B may be different from the second view (1) of FIG. 11A as necessary.
  • upper views may be determined by sequentially skipping over a single view on the basis of the leftmost first view (0) .
  • the first hierarchical auxiliary-view is set to the third view (2)
  • the second hierarchical auxiliary-view is set to the first view (0)
  • the third hierarchical auxiliary- view is set to the fourth view (3) .
  • the base-view may also be set to the center view (1) as shown in FIG. 11C.
  • the reason is that if the base-view is located at a specific location in the vicinity of the center part of the multiview, or is located at the center part of the multiview, the estimation/encoding process of other auxiliary-views can be effectively performed. Therefore, the base-view location and the auxiliary-view location can be determined according to the following rules.
  • the base-view location is set to the center view (1) of the multiview
  • the second auxiliary-view location is set to both-end views (0) of the multiview
  • the first auxiliary-view location is set to the view (2) successively arranged by skipping over at least one view in both directions on the basis of the base-view.
  • the remaining views (3) other than the above-mentioned views are all set to the third auxiliary-views .
  • if the multiview number (m) is equal to or less than "7" (i.e., m <= 7) and only two or fewer views are arranged between the base-view (1) and the second auxiliary-view (0), all the views arranged between the base-view (1) and the second auxiliary-view (0) are set to the first auxiliary-views (2), respectively.
  • all the views arranged between the second auxiliary-view (0) and the first auxiliary-view (2) are set to the third auxiliary-views (3), respectively.
  • all the views arranged between the base-view (1) and the second auxiliary-view (0) may be set to the third auxiliary-views (3), respectively.
  • the view scalability between views (or viewpoints) can be performed.
  • if the multiview number (m) is equal to or less than "7" (i.e., m <= 7), a single base-view bitstream and two hierarchical auxiliary-view bitstreams are generated.
  • the second auxiliary-view (0) can be set to the first hierarchical auxiliary-view
  • the first auxiliary-view (2) can also be set to the second hierarchical auxiliary-view.
  • alternatively, if the first auxiliary-view (2) is selected as the first hierarchical auxiliary-view, the second auxiliary-view (0) is selected as the second hierarchical auxiliary-view, and the third auxiliary-view (3) is selected as the third hierarchical auxiliary-view.
  • FIG. 12 is a conceptual diagram illustrating a hierarchical method of encoding/decoding a multiview sequence.
  • the encoder of a transmission end performs the view scalability function of the multiview sequence, using the methods shown in the first to fifth examples or modified methods predictable therefrom, to generate a bitstream, and transmits the bitstream to the reception end.
  • the decoding method or apparatus receives the bitstream formed by the above-mentioned characteristics, decodes the received bitstream, and generates decoded data for each hierarchy. Thereafter, according to the selection of a user or display, a variety of displays can be implemented, using data decoded by each hierarchy.
  • a base layer 121 for reproducing data of only the base-view is appropriate for the 2D display 125.
  • a first enhancement layer #1 (122) for reproducing data of the base-view and data of the first hierarchical auxiliary- view together is appropriate for a stereo-type display 126 formed by a combination of two 2D images.
  • a second enhancement layer #2 (123) for reproducing data of the base-view, data of the first hierarchical auxiliary-view, and data of the second hierarchical auxiliary-view together is appropriate for a low multiview display 127 for 3D- reproduction of the multiview sequence.
  • a third enhancement layer #3 (124) for reproducing data of the base-view and data of all hierarchical auxiliary-views together is appropriate for a high multiview display 128 for 3D-reproduction of the multiview sequence.
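  • The layer selection at the reception end can be pictured as a simple lookup (a sketch with illustrative layer names, following FIG. 12):

```python
# Which decoded layers each display type consumes (per FIG. 12).
LAYERS_FOR_DISPLAY = {
    "2D":             ["base"],
    "stereo":         ["base", "enhancement_1"],
    "low_multiview":  ["base", "enhancement_1", "enhancement_2"],
    "high_multiview": ["base", "enhancement_1", "enhancement_2", "enhancement_3"],
}

def layers_to_decode(display: str) -> list[str]:
    """Decode only the hierarchy levels the attached display can use."""
    return LAYERS_FOR_DISPLAY[display]

print(layers_to_decode("stereo"))  # ['base', 'enhancement_1']
```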
  • FIG. 13 is a flow chart illustrating a method for encoding a video sequence.
  • an example of a video-sequence encoding method obtains an average pixel value of at least one block from among neighboring blocks of a current block and reference blocks of another view at step S131.
  • the video-sequence encoding method derives a predicted average pixel value of the current block using at least one mode from among several modes at step S132.
  • the video-sequence encoding method obtains a difference value between the predicted average pixel value and the actual average pixel value of the current block at step S133.
  • the video-sequence encoding method measures individual encoding efficiency of the above-mentioned several modes, and selects an optimum mode from among the several modes at step S134.
  • the above-mentioned optimum mode can be selected in various ways, for example, by a method for selecting a minimum difference value from among the obtained difference values, or by a method using an equation indicating the Rate-Distortion (RD) relationship.
  • the above-mentioned RD equation recognizes not only the number of encoding bits generated during the encoding of a corresponding block but also a distortion value indicating a difference value associated with an actual image, such that it calculates costs using the number of encoding bits and the distortion value.
  • the video-sequence encoding method multiplies the bit number by a Lagrange multiplier determined by a quantization coefficient, and adds the distortion value to the multiplied result, such that it calculates the costs. If the optimum mode is selected, the video-sequence encoding method can encode identification (ID) information indicating the selected mode, and transmit the encoded result.
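  • The cost computation just described is the usual Lagrangian form; a minimal sketch with hypothetical mode data:

```python
def rd_cost(distortion: float, bits: int, lam: float) -> float:
    """Lagrangian rate-distortion cost: distortion plus lambda-weighted rate."""
    return distortion + lam * bits

# pick the cheapest of several candidate modes
modes = {"Mode1": (120.0, 14), "Mode2": (95.0, 22), "Mode3": (110.0, 16)}
lam = 4.0  # Lagrange multiplier derived from the quantization parameter
best = min(modes, key=lambda m: rd_cost(*modes[m], lam))
print(best, rd_cost(*modes[best], lam))  # Mode3 174.0
```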
  • FIG. 14 is a block diagram illustrating a process for deriving a predicted average pixel value of a current block from reference blocks of another view.
  • an average pixel value of the B_c block is m_c
  • an average pixel value of the B_r,1 block is m_r,1
  • an average pixel value of the remaining blocks is represented by the above-mentioned block notation.
  • the reference frame #1 is used as a candidate reference frame in the case of encoding the B_c block.
  • a first method for predicting the m_c information according to information of one or more neighboring blocks is a first mode method (Mode1) for predicting the m_c information on the basis of an average pixel value of a reference block of another view corresponding to the current block.
  • the first mode method (Mode1) is indicative of the method for predicting the m_c information using the average pixel value of the B_r,1 block of the reference frame #1.
  • the difference value can be represented by the following equation 1: [Equation 1] e = m_c - m_r,1
  • a second method for predicting a difference value between an average pixel value of a current block and an average pixel value of a reference block of another view corresponding to the current block is a second mode method (Mode2) for predicting the difference value on the basis of a difference between average pixel values of each neighboring blocks of the current block and the reference block.
  • the second mode method (Mode2) predicts a difference value between an average pixel value of the current block and an average pixel value of the B_r,1 block of the reference frame #1 using a difference in average pixel value between the neighboring blocks of the current block and the neighboring blocks of the reference block.
  • the difference value can be represented by the following equation 2:
  • a third method for predicting a difference value between an average pixel value of a current block and an average pixel value of a reference block of another view corresponding to the current block is a third mode method (Mode3).
  • Mode3 predicts the m_c information on the basis of a difference between average pixel values of the neighboring blocks.
  • the difference value can be represented by the following equation 3:
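  • A sketch of the first two prediction modes as read from the descriptions above (numpy blocks; the exact neighbor sets used by each mode are an assumption here):

```python
import numpy as np

def mode1_pred(ref_block: np.ndarray) -> float:
    """Mode1: predict m_c directly by the reference block's mean."""
    return float(ref_block.mean())

def mode2_residual(cur_block, ref_block, cur_neighbors, ref_neighbors) -> float:
    """Mode2: predict (m_c - m_r) by the difference of the neighbors' means,
    and return the remaining prediction error."""
    actual_diff = cur_block.mean() - ref_block.mean()
    predicted_diff = cur_neighbors.mean() - ref_neighbors.mean()
    return float(actual_diff - predicted_diff)

rng = np.random.default_rng(0)
cur, ref = rng.integers(0, 255, (16, 16)), rng.integers(0, 255, (16, 16))
print(mode1_pred(ref))
print(mode2_residual(cur, ref, cur[:1, :], ref[:1, :]))  # top rows as stand-in neighbors
```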
  • FIG. 15 is a detailed block diagram illustrating a process for deriving a predicted average pixel value of a current block from reference blocks of other views.
  • FIG. 15 shows a current block, pre- encoded blocks, each of which shares a boundary with the current block, and other blocks, each of which shares a boundary with the reference block.
  • the Mode2-method equation, the Mode3-method equation, and the Mode4-method equation can be represented by the following equation 5:
  • in Equation 5, w_1 indicates a weighted coefficient
  • the neighboring blocks used for prediction are not limited to blocks sharing a boundary, and may also include other blocks adjacent to the above-mentioned neighboring blocks as necessary. Otherwise, the above-mentioned neighboring blocks may employ only some parts of the other blocks. The scope of the above-mentioned neighboring blocks may be extended or limited in this way as necessary.
  • the reference frames of the above-mentioned Mode1, Mode2, Mode3, and Mode4 methods are determined to be optimum frames in consideration of rate and distortion factors after calculating several steps up to an actual bitstream stage.
  • There are a variety of methods for selecting the optimum mode, for example, a method for selecting a specific mode of a minimum difference value from among the obtained difference values, and a method for using the RD relationship.
  • the above-mentioned RD-relationship method calculates actual bitstreams of individual modes, and selects an optimum mode in consideration of the rate and the distortion.
  • the above-mentioned RD-relationship method deducts an average pixel value of each block from the current block, deducts the average pixel value of each block from the reference block, and calculates a difference value between the deducted results of the current and reference blocks, as represented by the following equation 6: [Equation 6] f(x, y) = (I_c(x, y) - m_c) - (I_r(x + Δx, y + Δy) - m_r)
  • in Equation 6, (Δx, Δy) is indicative of a disparity vector, and m_r is indicative of an average pixel value of the reference block.
  • the encoding unit has the same m_r as the decoding unit.
  • the reference block is searched for in a time domain, and an optimum block is searched for in a space-time domain. Therefore, ID information indicating whether an illumination compensation will be used is set to "0" or "1" in association with individual frames and blocks, and the resultant ID information is entropy-encoded.
  • if the optimum mode is selected, it is possible to encode only the selected mode, such that the encoded result of the selected mode may be transmitted to the decoding unit.
  • a difference value obtained by the selected mode can also be encoded and transmitted.
  • the selected mode information is represented by index types, and can also be predicted by neighboring-mode information.
  • a difference value between the index of the currently- selected mode and the index of the predicted mode can also be encoded and transmitted.
  • All of the above-mentioned modes may be considered, some of the above-mentioned modes may be selected, or only one of the above-mentioned modes may also be selected as necessary. In the case of using a single method from among all available methods, there is no need to separately encode the mode index.
  • pre-decoded pixel values may be applied to current blocks of a reference frame and a target frame to be encoded.
  • pre-decoded values of left-side pixels and pre-decoded values of upper-side pixels are used to predict an average pixel value of the current block.
  • the video sequence is encoded on the basis of a macroblock.
  • the 16x16 macroblock is divided into 16x8 blocks, 8x16 blocks, and 8x8 blocks, and is then decoded.
  • the 8x8 blocks may also be divided into 8x4 blocks, 4x8 blocks, and 4x4 blocks.
  • FIG. 16 is a conceptual diagram illustrating a 16x16 macroblock for explaining usages of pre-decoded pixel values located at left- and upper- parts of an entire block in the case of deriving an average pixel value and a predicted average pixel value of a current block.
  • the 16x16 macroblock can use all the pixel values of the left- and upper- parts. Therefore, in the case of predicting an average pixel value of the current block, an average pixel value of the pixels located at the left- and upper- parts of the block can be used.
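  • A sketch of gathering the pre-decoded upper-row and left-column pixels of a block to form such an average (the frame indexing convention is ours):

```python
import numpy as np

def boundary_mean(frame: np.ndarray, top: int, left: int, size: int = 16) -> float:
    """Mean of the already-decoded pixels bordering a size x size block:
    one row above it and one column to its left."""
    upper = frame[top - 1, left:left + size]    # pixels just above the block
    left_col = frame[top:top + size, left - 1]  # pixels just left of the block
    return float(np.concatenate([upper, left_col]).mean())

frame = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
print(boundary_mean(frame, top=16, left=16))
```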
  • FIG. 17A is a conceptual diagram illustrating a 16x8 macroblock for explaining usages of all the pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks.
  • FIG. 17B is a conceptual diagram illustrating a 16x8 macroblock for explaining usages of only pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks.
  • an average value of the B16x8_0 block and the B16x8_1 block can be represented by the following equation 8:
  • an average value of the B16x8_0 block can be represented by the following equation 9, and an average value of the B16x8_1 block can be represented by the following equation 10:
  • an average pixel value of the B16x8_0 block of FIG. 17A can be represented by the following equation 11, and the average pixel value of the B16x8_0 block of FIG. 17B can be represented by the following equation 12:
  • an average pixel value of the B16x8_1 block of FIG. 17A can be represented by the following equation 13, and the average pixel value of the B16x8_1 block of FIG. 17B can be represented by the following equation 14:
  • FIG. 18A is a conceptual diagram illustrating a 8x16 macroblock for explaining usages of all the pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks.
  • FIG. 18B is a conceptual diagram illustrating a 8x16 macroblock for explaining usages of only pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks.
  • the method for deriving an average pixel value of the divided blocks is the same as that of FIGS. 17A-17B.
  • FIG. 19A is a conceptual diagram illustrating a 8x8 macroblock for explaining usages of all the pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks.
  • FIG. 19B is a conceptual diagram illustrating a 8x8 macroblock for explaining usages of only pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks.
  • the method for deriving an average pixel value of the divided blocks is the same as that of FIGS. 17A-17B.
  • the 8x8 block can be divided into a plurality of sub-blocks.
  • An average pixel value of a corresponding block of a current block of a current frame to be encoded is predicted
  • An average pixel value of a corresponding block of the reference frame is predicted, such that the predicted
  • average pixel value is set to m, .
  • in Equation 15, (Δx, Δy) is indicative of a disparity vector.
  • a reference block having a minimum block residual value is selected as an illumination-compensated optimum block.
  • the disparity vector is denoted by (Δx, Δy).
  • an average pixel value of the reference block is not predicted by pixel values of neighboring blocks, and is directly calculated by an average pixel value of all pixels contained in an actual block.
  • the number of left- and upper-part pixels may be increased.
  • pixels of two or more neighboring layers of a current layer may be used instead of pixels of only one layer next to a current layer.
  • the decoding unit determines whether to perform an illumination compensation of a corresponding block using the ID information. If the illumination compensation is performed, the decoding unit calculates a decoded value of the difference value (e), and obtains a predicted value according to an above-mentioned prediction method. The decoded value of the difference value (e) is added to the predicted value, such that the current block is reconstructed as B = prediction block + residual block + (m̂_c - m_r + e), where B is the reconstructed value of the current block, the prediction block is taken from the reference block, m̂_c is the predicted average pixel value of the current block, and m_r is the average pixel value of the reference block.
  • the decoding unit obtains a residual value between the offset value of the illumination compensation of the current block and its predicted value, and can reconstruct the offset value of the illumination compensation of the current block using the obtained residual value and the predicted value.
  • FIG. 20 is a diagram illustrating a process for obtaining an offset value of a current block.
  • the illumination compensation may be performed during the motion estimation. When the current block is compared with the reference block, a difference in illumination between the two blocks is considered. New motion estimation and new motion compensation are used to compensate for the illumination difference.
  • a new SAD (Sum of Absolute Differences) is used, of the mean-removed form NewSAD(Δx, Δy) = Σ | (I_c(x, y) - M_c) - (I_r(x + Δx, y + Δy) - M_r) |.
  • M_c is indicative of an average pixel value of the current block, and M_r is indicative of an average pixel value of the reference block; I_c(x, y) is indicative of a pixel value at a position (x, y) of the current block, and I_r(x + Δx, y + Δy) is indicative of a pixel value of the reference block displaced by (Δx, Δy).
  • by Equation 16, a difference value between an average pixel value of the current block and an average pixel value of the reference block can be obtained.
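  • A direct transcription of this mean-removed SAD (a sketch; the block extraction and data types are ours):

```python
import numpy as np

def new_sad(cur: np.ndarray, ref: np.ndarray) -> float:
    """SAD computed after removing each block's mean, so a global
    illumination offset between the blocks does not inflate the cost."""
    return float(np.abs((cur - cur.mean()) - (ref - ref.mean())).sum())

rng = np.random.default_rng(1)
block = rng.integers(0, 200, (16, 16)).astype(float)
brighter = block + 30  # same content, uniform illumination change
print(new_sad(block, brighter))  # 0.0: the offset is fully compensated
```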
  • the difference value in average pixel value between the current block and the reference block is referred to as an offset value (IC_offset) .
  • the illumination compensation can be performed by the following equation 18 using the offset value and the motion vector: [Equation 18] R(x, y) = I_c(x, y) - I_r(x + Δx, y + Δy) - (M_c - M_r)
  • the illumination compensation of the decoding unit can be performed by the following equation 19: [Equation 19] I'_c(x, y) = R'(x, y) + I_r(x + Δx, y + Δy) + (M_c - M_r), where I'_c(x, y) is indicative of a pixel value of the reconstructed current block and R'(x, y) is indicative of the reconstructed residual value.
  • the offset value is transmitted to the decoding unit, and the offset value can be predicted by data of the neighboring blocks.
  • a difference value (Ric_offset) between the current-block offset value (IC_offset) and the neighboring-block offset value (IC_offset_pred) can be transmitted to the decoding unit 50, as denoted by the following equation 20: [Equation 20] Ric_offset = IC_offset - IC_offset_pred
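  • Equation 20 in code form (names follow the text; the surrounding encode/decode plumbing is assumed):

```python
def encode_offset(ic_offset: int, ic_offset_pred: int) -> int:
    """Transmit only the residual between the offset and its prediction."""
    return ic_offset - ic_offset_pred  # Ric_offset

def decode_offset(ric_offset: int, ic_offset_pred: int) -> int:
    """Decoder rebuilds the offset from the residual and the same predictor."""
    return ric_offset + ic_offset_pred

pred = 12                           # predictor from neighboring blocks
residual = encode_offset(15, pred)  # 3 is coded instead of 15
assert decode_offset(residual, pred) == 15
print(residual)
```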
  • FIG. 21 is a flow chart illustrating a process for performing an illumination compensation of a current block.
  • if an illumination compensation flag of a current block is set to "0", the illumination compensation of the current block is not performed. Otherwise, if the illumination compensation flag of the current block is set to "1", a process for reconstructing the offset value of the current block is performed.
  • in this process, information of the neighboring block can be employed. It is determined whether a reference index of the current block is equal to a reference index of the neighboring block at step S210. A predictor for performing the illumination compensation of the current block is obtained on the basis of the determined result at step S211. An offset value of the current block is reconstructed by using the obtained predictor at step S212.
  • FIG. 22 is a flow chart illustrating a method for obtaining a predictor by determining whether a reference index of a current block is equal to a reference index of a neighboring block.
  • the decoding unit extracts a variety of information from a video signal, for example, flag information and offset values of neighboring blocks of the current block, and reference indexes of reference blocks of the current and neighboring blocks, such that the decoding unit can obtain the predictor of the current block using the extracted information.
  • the decoding unit obtains a residual value between the offset value of the current block and the predictor, and can reconstruct the offset value of the current block using the obtained residual value and the predictor.
  • information of the neighboring block can be employed.
  • the offset value of the current block can be predicted by the offset value of the neighboring block.
  • it is determined whether the reference index of the current block is equal to that of the neighboring block, such that which one of the values or which one of the neighboring blocks will be used can be decided by referring to the determined result.
  • it is also determined whether flag information of the neighboring block is set to "true", such that whether the neighboring block will be used can be decided by referring to the determined result.
  • If it is determined that three neighboring blocks, each of which has the same reference index as that of the current block, exist at step S220, a median value of the offset values of the three neighboring blocks is assigned to the predictor of the current block at step S223. If it is determined that there is no neighboring block having the same reference index as that of the current block according to the determined result at step S220, the predictor of the current block is set to "0" at step S224. If required, the step S220 for determining whether the reference index of the current block is equal to that of the neighboring block may further include another step for determining whether a flag of the neighboring block is set to "1".
  • a plurality of neighboring blocks may be checked in the order of a left neighboring block -> an upper neighboring block -> a right-upper neighboring block -> a left-upper neighboring block.
  • the neighboring blocks may also be checked in the order of the upper neighboring block -> the left neighboring block -> the right-upper neighboring block -> the left-upper neighboring block. If there is no neighboring block capable of satisfying the two conditions, and the flags of the three neighboring blocks are not set, the predictor of the current block may be set to "0".
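  • One reading of the predictor rules of FIG. 22 (a sketch; the two-neighbor case is not spelled out in the text, so this sketch falls back to "0" for it):

```python
from statistics import median

def offset_predictor(cur_ref_idx, neighbors):
    """neighbors: list of (ref_idx, ic_flag, offset) tuples for the candidate
    blocks, supplied in left -> upper -> upper-right -> upper-left order."""
    matching = [off for ref, flag, off in neighbors
                if flag == 1 and ref == cur_ref_idx]
    if len(matching) == 1:
        return matching[0]       # single usable neighbor: copy its offset
    if len(matching) == 3:
        return median(matching)  # three usable neighbors: median offset
    return 0                     # nothing usable (or ambiguous): zero

print(offset_predictor(2, [(2, 1, 5), (2, 1, 9), (2, 1, 7), (0, 1, 4)]))  # 7
```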
  • FIG. 23 is a flow chart illustrating a method for performing an illumination compensation on the basis of a prediction type of a current block.
  • the neighboring block acting as a reference block may be changed according to a prediction type of the current block. For example, if the current block has the same shape as that of the neighboring block, the current block is predicted by a median value of the neighboring blocks. Otherwise, if the shape of the current block is different from that of the neighboring block, another method will be employed.
  • the example of FIG. 23 determines a neighboring block to be referred to according to the prediction type of the current block at step S231. It is determined whether the reference index of the determined neighboring block is equal to a reference index of the current block at step S232.
  • the step S232 for determining whether the reference index of the neighboring block is equal to that of the current block may further include another step for determining whether a flag of the neighboring block is set to "1".
  • the predictor for performing an illumination compensation of the current block can be obtained on the basis of the determined result at step S233.
  • the offset value of the current block is reconstructed by the obtained predictor, such that the illumination compensation can be performed at step S234.
  • the process for performing the step S233 by referring to the result of step S232 is similar to that of FIG. 22.
  • if the prediction type of the current block indicates that the prediction is performed by using a neighboring block located at the left side of the current block, if the prediction is performed by referring to the left- and upper- neighboring blocks of the current block, or if the prediction is performed by referring to three neighboring blocks (i.e., the left neighboring block, the upper neighboring block, and the right-upper neighboring block), the individual cases will be handled similarly to the method of FIG. 22.
  • FIG. 24 is a flow chart illustrating a method for performing an illumination compensation using flag information indicating whether the illumination compensation of a block is performed.
  • flag information (IC_flag) indicating whether an illumination compensation of the current block is performed may also be used to reconstruct the offset value of the current block.
  • the predictor may also be obtained using both the method for checking the reference index of FIG. 22 and the method for predicting flag information. Firstly, it is determined whether a neighboring block having the same reference index as that of the current block exists at step S241. A predictor for performing an illumination compensation of the current block is obtained by the determined result at step S242. In this case, a process for determining whether the flag of the neighboring block is set to "1" may also be included in the step S242. The flag information of the current block is predicted on the basis of the determined result at step S243.
  • An offset value of the current block is reconstructed by using the obtained predictor and the predicted flag information, such that the illumination compensation can be performed at step S244.
  • the step S242 may be applied similarly as a method of FIG. 22, and the step S243 will hereinafter be described with reference to FIG. 25.
  • FIG. 25 is a flow chart illustrating a method for predicting flag information of a current block by determining whether a reference index of the current block is equal to a reference index of a neighboring block.
  • if it is determined that one neighboring block, which has the same reference index as that of the current block, exists at step S250, flag information of the current block is predicted by flag information of the neighboring block having the same reference index at step S251. If it is determined that two neighboring blocks, each of which has the same reference index as that of the current block, exist at step S250, flag information of the current block is predicted by any one of the flag information of the two neighboring blocks having the same reference index at step S252.
  • if it is determined that three neighboring blocks, each of which has the same reference index as that of the current block, exist at step S250, the flag information of the current block is predicted by a median value of the flag information of the three neighboring blocks at step S253. Also, if there is no neighboring block having the same reference index as that of the current block according to the determined result of step S250, the flag information of the current block is not predicted at step S254.
  • FIG. 26 is a flow chart illustrating a method for performing an illumination compensation when a current block is predictively coded by two or more reference blocks.
  • the decoding unit cannot directly recognize an offset value corresponding to each reference block, because it uses an average pixel value of the two reference blocks when obtaining the offset value of the current block. Therefore, in one example, an offset value corresponding to each reference block is obtained, resulting in the implementation of correct prediction.
  • the offset value of the current block is reconstructed by using the predictor of the current block and the residual value at step S261. If the current block is predictively encoded by using two reference blocks, an offset value corresponding to each reference block is obtained from the reconstructed offset value at step S262, as denoted by the following equation 21: [Equation 21]
  • IC_offset = m_c - w_1 × m_r,1 - w_2 × m_r,2
  • IC_offsetL0 = m_c - m_r,1 = IC_offset + (w_1 - 1) × m_r,1 + w_2 × m_r,2
  • IC_offsetL1 = m_c - m_r,2 = IC_offset + w_1 × m_r,1 + (w_2 - 1) × m_r,2
  • in Equation 21, m_c is an average pixel value of the current block, m_r,1 and m_r,2 are indicative of average pixel values of the two reference blocks, and w_1 and w_2 are indicative of weighted coefficients for bi-predictive coding.
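  • Equation 21 in code form, verifying that the per-list offsets recover m_c - m_r,1 and m_c - m_r,2 (the weights and pixel means below are illustrative):

```python
def bipred_offsets(m_c, m_r1, m_r2, w1, w2):
    """Split the bi-predictive offset into per-reference offsets
    (Equation 21): IC_offsetL0 = m_c - m_r1, IC_offsetL1 = m_c - m_r2."""
    ic_offset = m_c - w1 * m_r1 - w2 * m_r2
    ic_offset_l0 = ic_offset + (w1 - 1) * m_r1 + w2 * m_r2
    ic_offset_l1 = ic_offset + w1 * m_r1 + (w2 - 1) * m_r2
    assert abs(ic_offset_l0 - (m_c - m_r1)) < 1e-9  # algebraic identity
    assert abs(ic_offset_l1 - (m_c - m_r2)) < 1e-9
    return ic_offset, ic_offset_l0, ic_offset_l1

print(bipred_offsets(m_c=130.0, m_r1=118.0, m_r2=126.0, w1=0.5, w2=0.5))
```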
  • the system independently obtains an accurate offset value corresponding to each reference block, such that it can more correctly perform the predictive coding process.
  • the system adds the reconstructed residual value and the predictor value, such that it obtains an offset value.
  • the predictor of a reference picture of List0 and the predictor of a reference picture of List1 are obtained respectively and combined, such that the system can obtain a predictor used for reconstructing the offset value of the current block.
  • the system can also be applied to the skip-macroblock mode.
  • the prediction is performed to obtain information for the illumination compensation.
  • a value predicted by the neighboring block is used as flag information indicating whether the illumination compensation is performed.
  • An offset value predicted by the neighboring block may be used as the offset value of the current block. For example, if flag information is set to "true", the offset value is added to a reference block.
  • the prediction is performed by using flags and offset values of the left- and upper- neighboring blocks, such that the flag and the offset value of the macroblock can be obtained.
  • if only one of the two neighboring blocks has a flag of "1", the flag and the offset value of the current block may be set to the flag and the offset value of that block, respectively. If the two blocks both have the flag of "1", the flag of the current block is set to "1", and the offset value of the current block is set to an average offset value of the two neighboring blocks.
  • the system can also be applied to a direct mode, for example, temporal direct mode, B-skip mode, etc.
  • the prediction is performed to obtain information for the illumination compensation.
  • Each predictor can be obtained by using the various methods for predicting the flag and the offset. This predictor may be set to an actual flag and an actual offset value of the current block. If each block has a pair of flag and offset information, a prediction value for each block can be obtained. In this case, if there are two reference blocks and the reference indexes of the two reference blocks are checked, it is determined whether the reference index of the current block is equal to that of the neighboring block.
  • since each reference block includes a unique offset value, first predicted flag information, a first predicted offset value, second predicted flag information, and a second predicted offset value may be used.
  • a value predicted by the neighboring block may be used as the flag information.
  • the offset values of the two reference blocks may be used as the first predicted offset value and the second predicted offset value, respectively.
  • the offset value of the current block may be set to an average offset value of individual reference blocks.
  • the system may encode/decode the flag information indicating whether the direct mode or the skip-macroblock mode is applied to the current block.
  • an offset value is added or not added according to the flag value.
  • a residual value between the offset value and the predicted offset value may also be encoded/decoded.
  • desired data can be more correctly reconstructed, and an optimum mode may be selected in consideration of an RD (Rate-Distortion) relationship. If a reference picture cannot be used for the prediction process, i.e., if a reference picture number is less than "1", the flag information or predicted flag information may be set to "false", and the offset value or the predicted offset value may also be set to "0".
  • the system can also be applied to the entropy-coding process.
  • three context models may be used according to flag values of the neighboring blocks (e.g., blocks located at the left- and upper- parts of the current block) .
  • the flag information is encoded/decoded by using the three context models.
  • a transform-coefficient level coding method can be used for the predictive residual value of the offset values. In other words, data binarization is performed by UEG0, a single context model can be applied to the first bin value, and another context model is applied to the remaining bin values of the unary prefix part. A sign bit is encoded/decoded in a bypass mode.
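  • The three-context idea above can be sketched as a neighbor-based context increment (this mirrors common CABAC practice; the exact assignment here is an assumption):

```python
def ic_flag_context(left_flag: int, up_flag: int) -> int:
    """Select one of three context models (0, 1, 2) from the
    illumination-compensation flags of the left and upper neighbors."""
    return left_flag + up_flag

for left, up in [(0, 0), (1, 0), (0, 1), (1, 1)]:
    print(left, up, "-> context", ic_flag_context(left, up))
```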
  • FIG. 27 is a flow chart illustrating a method for performing illumination compensation using not only flag information indicating whether illumination compensation of a current block is performed, but also an offset value of the current block.
  • the decoding unit extracts a variety of information from a video signal, for example, flag information and offset values of the current block and its neighboring blocks, and index information of reference blocks of the current and neighboring blocks, such that the decoding unit can obtain the predictor of the current block using the above-mentioned extracted information.
  • the decoding unit 50 obtains a residual value between the offset value of the current block and the predictor, and can reconstruct the offset value of the current block using the obtained residual value and the predictor.
  • flag information indicating whether the illumination compensation of the current block is performed may be used.
  • the decoding unit obtains flag information indicating whether the illumination compensation of the current block is performed at step S271. If the illumination compensation is performed according to the above-mentioned flag information (IC_flag), the offset value of the current block, indicating a difference in average pixel value between the current block and the reference block, can be reconstructed at step S272. In this way, the above-mentioned illumination compensation technology encodes a difference value in average pixel value between blocks of different pictures. When the flag indicating whether the illumination compensation is applied is used for each block, if a corresponding block is contained in a P slice, single flag information and a single offset value are encoded/decoded. However, if the corresponding block is contained in a B slice, a variety of methods can be made available, and a detailed description thereof will hereinafter be given with reference to FIGS. 28A-28B.
  • FIGS. 28A-28B are diagrams illustrating a method for performing the illumination compensation of a current block using information of a neighboring block.
  • the encoding unit can transmit the residual value (Ric_offset) between the offset value (IC_offset) of the current block and the offset value (IC_offset_pred) of the neighboring block to a decoding unit, such that the decoding unit can reconstruct the offset value "IC_offset" of the current block (C).
  • the " Ric_offse t " information can also be represented by the above-mentioned Equation 20.
  • the illumination compensation can be performed using a single offset value and single flag information.
  • if the corresponding block is contained in the B slice, i.e., if the current block is predictively encoded by two or more reference blocks, a variety of methods can be made available.
  • a predictor of the current block can be obtained by combining information of two reference blocks via the motion compensation.
  • single flag information indicates whether the illumination compensation of the current block is performed. If the flag information is determined to be "true", a single offset value is obtained from the current block and the predictor, such that the encoding/decoding processes can be performed.
  • during the motion compensation process, it is determined whether the illumination compensation will be applied to each of two reference blocks. Flag information is assigned to each of the two reference blocks, and a single offset value obtained by using the above-mentioned flag information may be encoded or decoded.
  • two flag information may be used on the basis of the reference block, and a single offset value may be used on the basis of the current block.
  • single flag information may indicate whether the illumination compensation will be applied to a corresponding block on the basis of the current block. Individual offset values can be encoded/decoded for the two reference blocks. If the illumination compensation is not applied to any one of the reference blocks during the encoding process, a corresponding offset value is set to "0". In this case, single flag information may be used on the basis of the current block, and two offset values may be used on the basis of the reference block.
  • the flag information and the offset value can be encoded/decoded for individual reference blocks.
  • two flags and two offset values can be used on the basis of the reference block.
  • the offset value is not encoded directly; it is predicted by an offset value of the neighboring block, such that its residual value is encoded.
  • FIG. 29 is a flow chart illustrating a method for performing an illumination compensation when a current block is predictively encoded by two or more reference blocks.
  • flag information and offset values of the neighboring blocks of the current block are extracted from the video signal, and index information of corresponding reference blocks of the current and neighboring blocks is extracted, such that the predictor of the current block can be obtained by using the extracted information.
  • the decoding unit obtains a residual value between the offset value of the current block and the predictor, and can reconstruct the offset value of the current block using the obtained residual value and the predictor.
  • flag information (IC_flag) indicating whether the illumination compensation of the current block is performed may be used as necessary.
  • the decoding unit obtains flag information indicating whether the illumination compensation of the current block is performed at step S291. If the illumination compensation is performed according to the above-mentioned flag information (IC_flag), the offset value of the current block indicating a difference in average pixel value between the current block and the reference block can be reconstructed at step S292.
  • [Equation 22] IC_offsetL0 = m_c - m_r,1 = IC_offset + (w_1 - 1) × m_r,1 + w_2 × m_r,2
  • in Equation 22, m_c is an average pixel value of the current block, and m_r,1 and m_r,2 are indicative of average pixel values of the reference blocks.
  • the system independently obtains an accurate offset value corresponding to each reference block, such that it can more correctly perform the predictive coding process.
  • the system adds the reconstructed residual value and the predictor value, such that it obtains the offset value.
  • the predictor of List 0 and the predictor of List 1 are obtained and combined, such that the system can obtain a predictor value used for reconstructing the offset value of the current block.
  • FIG. 30 is a flow chart illustrating a method for performing an illumination compensation using flag information indicating whether the illumination compensation of a current block is performed.
  • the illumination compensation technology is adapted to compensate for an illumination difference or a difference in color. If the scope of the illumination compensation technology is extended, the extended illumination compensation technology may also be applied between sequences captured by the same camera. The illumination compensation technology can prevent the difference in illumination or color from greatly affecting the motion estimation. In practice, the encoding process employs flag information indicating whether the illumination compensation is performed.
  • the application scope of the illumination compensation may be extended to a sequence, a view, a GOP (Group Of Pictures) , a picture, a slice, a macroblock, and a sub-block, etc.
  • if the illumination compensation technology is applied to a small-sized area, a local area can be controlled; however, it should be noted that a large number of bits are consumed for the flag information.
  • in some areas, the illumination compensation technology may not be required. Therefore, a flag bit indicating whether the illumination compensation is performed is assigned to individual areas, such that the system can effectively use the illumination compensation technology.
  • the system obtains flag information capable of allowing a specific level of the video signal to be illumination-compensated at step S201.
  • "seq_IC_flag" information is assigned to a sequence level
  • "view_IC_flag" information is assigned to a view level
  • "GOP_IC_flag" information is assigned to a GOP level
  • "pic_IC_flag" information is assigned to a picture level
  • "slice_IC_flag" information is assigned to a slice level
  • "mb_IC_flag" information is assigned to a macroblock level
  • "blk_IC_flag" information is assigned to a block level.
  • FIGS. 31A-31C are conceptual diagrams illustrating flag information, indicating whether the illumination compensation is performed, that can be hierarchically classified. For example,
  • "seq_IC_flag" information 311 is assigned to a sequence level
  • "view__IC_flag” information 312 is assigned to a view level
  • "GOP_IC__flag” information 313 is assigned to a GOP level
  • xx pic_IC__flag” information 314 is assigned to a picture level
  • "slice_IC_flag” information 315 is assigned to a slice level
  • N ⁇ mb_IC_flag” information 316 is assigned to a macroblock level
  • blk_IC_flag” information 317 is assigned to a block level.
  • each flag is composed of 1 bit.
  • the number of the above-mentioned flags may be set to at least one.
  • the above-mentioned sequence/view/picture/slice-level flags may be located at a corresponding parameter set or header, or may also be located in another parameter set.
  • the "seq_IC_flag" information 311 may be located at a sequence parameter set
  • the "view__IC_flag” information 312 may be located at the view parameter set
  • the "pic_IC_flag” information 314 may be located at the picture parameter set
  • the "slice__IC_flag” information 315 may be located at the slice header.
  • specific information indicating whether the illumination compensation of an upper level is performed may control whether the illumination compensation of a lower level is performed. In other words, if each flag bit value is set to "1", the illumination compensation technology may be applied to a lower level.
  • the "slice_IC_flag” information of each slice contained in a corresponding picture may be set to “1” or “0”
  • the “mb_IC_flag” information of each macroblock may be set to “1” or “0”
  • the " blk_IC_flag” information of each block may be set to “ 1” or “0”.
  • the "seq_IC_flag” information is set to "1” on the condition that a view parameter set exists, the "view_IC_flag” value of each view may be set to "1” or "0".
  • in this case, a flag bit value of the GOP, picture, slice, macroblock, or block of a corresponding view may be set to "1" or "0", as shown in FIG. 31A.
  • however, the above-mentioned flag bit value of the GOP, picture, slice, macroblock, or block of the corresponding view may not be set to "1" or "0" as necessary; this indicates that the GOP flag, the picture flag, the slice flag, the macroblock flag, or the block flag is not controlled by the view flag information, as shown in FIG. 31B.
  • if a flag bit value of an upper scope is set to "0", the flag bit values of a lower scope are automatically set to "0". For example, if the "seq_IC_flag" information is set to "0", this indicates that the illumination compensation technology is not applied to a corresponding sequence. Therefore, the "view_IC_flag" information is set to "0", the "GOP_IC_flag" information is set to "0", the "pic_IC_flag" information is set to "0", the "slice_IC_flag" information is set to "0", the "mb_IC_flag" information is set to "0", and the "blk_IC_flag" information is set to "0".
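  • Under the FIG. 31A variant, this top-down gating amounts to AND-ing the flags along the hierarchy (a sketch with hypothetical flag values):

```python
FLAG_ORDER = ["seq_IC_flag", "view_IC_flag", "GOP_IC_flag", "pic_IC_flag",
              "slice_IC_flag", "mb_IC_flag", "blk_IC_flag"]

def ic_enabled(flags: dict) -> bool:
    """Illumination compensation applies to a block only if every level
    above it (and the block itself) has its flag set to 1."""
    return all(flags.get(name, 0) == 1 for name in FLAG_ORDER)

flags = {name: 1 for name in FLAG_ORDER}
flags["seq_IC_flag"] = 0  # an upper level set to 0 disables everything below
print(ic_enabled(flags))  # False
```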
  • only one "mb_IC_flag" information or only one "blk_IC_flag" information may be employed according to a specific implementation method of the illumination compensation technology.
  • the "view_IC_flag” information may be employed when the view parameter set is newly applied to the multiview video coding.
  • the offset value of the current block may be additionally encoded/decoded according to a flag bit value of the macroblock or sub-block acting as the lowest-level unit.
  • the flag indicating the IC technique application may also be applied to both the slice level and macroblock level. For example, if the "slice_IC_flag" information is set to "0", this indicates that the IC technique is not applied to a corresponding slice. If the "slice_IC_flag" information is set to "1", this indicates that the IC technique is applied to a corresponding slice. In this case, if the "mb_IC_flag" information is set to "1", "IC_offset" information of a corresponding macroblock is reconstructed. If the "mb_IC_flag" information is set to "0", this indicates that the IC technique is not applied to a corresponding macroblock. (A parsing sketch of this hierarchical gating follows this list.)
  • the system can obtain an offset value of a current block indicating a difference in average pixel value between the current block and the reference block.
  • the flag information of the macroblock level or the flag information of the block level may not be employed as necessary.
  • the illumination compensation technique can indicate whether the illumination compensation of each block is performed using the flag information.
  • the illumination compensation technique may also indicate whether the illumination compensation of each block is performed using a specific value such as a motion vector. The above-mentioned example can also be applied to a variety of applications of the illumination compensation technique.
  • the above-mentioned example can indicate whether the illumination compensation of a lower scope is performed using the flag information.
  • the macroblock or block level acting as the lowest scope can effectively indicate whether the illumination compensation is performed using the offset value without using the flag bit.
  • the predictive coding process can be performed. For example, if the predictive coding process is applied to the current block, the offset value of the neighboring block is assigned to an offset value of the current block. If the predictive coding scheme is determined to be the bi-predictive coding scheme, offset values of individual reference blocks are obtained by the calculation of the reference blocks detected from List 0 and List 1.
  • the offset value of each reference block is not directly encoded; it is predicted from the offset values of the neighboring blocks, and a residual value is encoded/decoded.
  • the method for predicting the offset value may be determined to be the above-mentioned offset prediction method or a method for obtaining a median value used for predicting the motion vector.
  • in a direct mode of bi-directional prediction, supplementary information is not encoded/decoded; in the same manner as for the motion vector, the offset values can be obtained from predetermined information.
  • a decoding unit (e.g., H.264-based decoding unit) is used instead of the MVC decoding unit.
  • a view sequence compatible with a conventional decoding unit should be decoded by the conventional decoding unit, such that the "view_IC_flag" information is set to "false" or "0".
  • there is a need to explain the base-view concept.
  • a single view sequence compatible with the H.264/AVC decoder may be required. Therefore, at least one view, which can be independently decoded, is defined and referred to as a base view.
  • the base view is indicative of a reference view from among several views (i.e., the multiview).
  • a sequence corresponding to the base view in the MVC scheme is encoded by general video encoding schemes
  • the above-mentioned base-view sequence can be compatible with the H.264/AVC scheme, or cannot be compatible with the same.
  • the view sequence compatible with the H.264/AVC scheme is always set to the base view.
  • FIG. 32 is a flow chart illustrating a method for obtaining a motion vector considering an offset value of a current block.
  • the system can obtain an offset value of the current block at step S321.
  • the system searches for a reference block optimally matched with the current block using the offset value at step S322.
  • the system obtains the motion vector from the reference block, and encodes the motion vector at step S323.
  • For the illumination compensation, a variety of factors are considered during the motion estimation. For example, in the case of a method for comparing a first block with a second block by offsetting average pixel values of the first and second blocks, the average pixel value of each block is subtracted from the pixel values of that block during the motion estimation, such that the similarity between the two blocks can be calculated.
  • the offset value between the two blocks is independently encoded, such that the costs for the independent encoding are reflected in the motion estimation process.
  • the conventional costs can be calculated by the following equation 23, in which the candidate reference block is displaced from the current block by (x, y):

[Equation 23]
SAD(x, y) = Σ_i Σ_j | I_c(i, j) - I_r(i + x, j + y) |

  • herein, I_c is indicative of a pixel value of the current block, I_r is indicative of a pixel value of the reference block, and M_c is indicative of an average pixel value of the current block, i.e., the average that is removed from the current block in the mean-offset comparison described above.
  • Equation 27 represents a method for predicting the offset coding bit. In this case, the coding bit can be predicted in proportion to the magnitude of an offset residual value (a sketch of this offset-aware cost also follows this list):

[Equation 27]
GenBit_IC = GenBit + Bit_IC
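The hierarchical gating described in the list above can be made concrete with a short sketch. Everything in the following Python fragment is illustrative: the BitReader class, the restriction to the sequence/slice/macroblock levels, and the flag layout are assumptions rather than the normative MVC syntax; the sketch only demonstrates the rule that lower-level flags are parsed (and an offset residual would follow them) only while every upper-level flag is set to "1".

```python
class BitReader:
    """Toy MSB-first bit reader over a bytes object (illustrative only)."""
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

    def read_flag(self) -> int:
        byte, bit = divmod(self.pos, 8)
        self.pos += 1
        return (self.data[byte] >> (7 - bit)) & 1


def parse_ic_flags(br: BitReader, num_slices: int, mbs_per_slice: int):
    """Upper-level flags gate lower-level ones: when a flag is "0",
    the flags of the lower scope are not present and are implied to be 0."""
    if not br.read_flag():                       # seq_IC_flag == 0
        return [[0] * mbs_per_slice for _ in range(num_slices)]
    mb_flags = []
    for _ in range(num_slices):
        if not br.read_flag():                   # slice_IC_flag == 0
            mb_flags.append([0] * mbs_per_slice)
            continue
        # mb_IC_flag is parsed only when slice_IC_flag == 1; when an
        # mb_IC_flag is 1, the IC offset residual of that macroblock
        # would be decoded next (omitted here).
        mb_flags.append([br.read_flag() for _ in range(mbs_per_slice)])
    return mb_flags


# Example: seq_IC_flag=1; slice 0 on with mb flags (1, 0); slice 1 off.
print(parse_ic_flags(BitReader(bytes([0b11100000])), 2, 2))  # [[1, 0], [0, 0]]
```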
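The offset-aware matching cost of Equations 23 and 27 can be sketched similarly. The mean-removed comparison and a rate term proportional to the offset residual follow the text above; the weighting constant bits_per_step and the list-of-rows block representation are made-up placeholders, not values from the specification.

```python
def mr_sad(cur, ref):
    """Mean-removed SAD: each block's own average pixel value is subtracted
    before comparison, so a pure illumination offset does not inflate the cost."""
    n = len(cur) * len(cur[0])
    m_c = sum(map(sum, cur)) / n          # M_c: average pixel value, current block
    m_r = sum(map(sum, ref)) / n          # average pixel value, reference block
    return sum(abs((c - m_c) - (r - m_r))
               for row_c, row_r in zip(cur, ref)
               for c, r in zip(row_c, row_r))


def ic_cost(cur, ref, predicted_offset, bits_per_step=0.5):
    """Matching error plus estimated offset-coding bits, in the spirit of
    GenBit_IC = GenBit + Bit_IC, with Bit_IC proportional to the residual."""
    n = len(cur) * len(cur[0])
    offset = sum(map(sum, cur)) / n - sum(map(sum, ref)) / n
    bit_ic = bits_per_step * abs(offset - predicted_offset)
    return mr_sad(cur, ref) + bit_ic


cur = [[100, 102], [101, 103]]
ref = [[90, 92], [91, 93]]                     # same texture, uniformly darker by 10
print(ic_cost(cur, ref, predicted_offset=8))   # MR-SAD is 0; only the rate term remains
```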

Abstract

Decoding a video signal comprises receiving a bitstream comprising the video signal encoded according to a first profile that represents a selection from a set of multiple profiles that includes at least one profile for a multiview video signal, and profile information that identifies the first profile. The profile information is extracted from the bitstream. The video signal is decoded according to the determined profile using illumination compensation between segments of pictures in respective views when the determined profile corresponds to a multiview video signal. Each of multiple views comprises multiple pictures segmented into multiple segments.

Description

PROCESSING MULTIVIEW VIDEO
Technical Field
The invention relates to processing multiview video.
Background Art
Multiview Video Coding (MVC) relates to compression of video sequences (e.g., a sequence of images or
"pictures") that are typically acquired by respective cameras. The video sequences or "views" can be encoded according to a standard such as MPEG. A picture in a video sequence can represent a full video frame or a field of a video frame. A slice is an independently coded portion of a picture that includes some or all of the macroblocks in the picture, and a macroblock includes blocks of picture elements (or "pixels") .
The video sequences can be encoded as a multiview video sequence according to the H.264/AVC codec technology, and many developers are conducting research into amendment of standards to accommodate multiview video sequences.
Three profiles for supporting specific functions are prescribed in the current H.264 standard. The term "profile" indicates the standardization of technical components for use in the video encoding/decoding algorithms. In other words, the profile is the set of technical components prescribed for decoding a bitstream of a compressed sequence, and may be considered to be a substandard. The above-mentioned three profiles are a baseline profile, a main profile, and an extended profile. A variety of functions for the encoder and the decoder have been defined in the H.264 standard, such that the encoder and the decoder can be compatible with the baseline profile, the main profile, and the extended profile, respectively. The bitstream for the H.264/AVC standard is structured according to a Video Coding Layer (VCL) for processing the moving-image coding (i.e., the sequence coding), and a Network Abstraction Layer (NAL) associated with a subsystem capable of transmitting/storing encoded information. The output data of the encoding process is VCL data, and is mapped into NAL units before it is transmitted or stored. Each NAL unit includes a Raw Byte Sequence Payload (RBSP) corresponding to either compressed video data or header information. The NAL unit includes a NAL header and an RBSP. The NAL header includes flag information (e.g., nal_ref_idc) and identification (ID) information (e.g., nal_unit_type). The flag information "nal_ref_idc" indicates the presence or absence of a slice used as a reference picture of the NAL unit. The ID information "nal_unit_type" indicates the type of the NAL unit. The RBSP stores compressed original data. An RBSP trailing bit can be added to the last part of the RBSP, such that the length of the RBSP can be represented by a multiple of 8 bits.
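As a concrete illustration of the NAL header fields just described, the sketch below splits the one-byte header into its forbidden zero bit, the 2-bit "nal_ref_idc", and the 5-bit "nal_unit_type". The field widths follow H.264/AVC; the dictionary output format is just for illustration.

```python
def parse_nal_header(first_byte: int) -> dict:
    """Split the one-byte NAL unit header into its three fields."""
    return {
        "forbidden_zero_bit": (first_byte >> 7) & 0x1,  # must be 0
        "nal_ref_idc":        (first_byte >> 5) & 0x3,  # nonzero: may serve as reference
        "nal_unit_type":      first_byte & 0x1F,        # e.g., 5 = IDR slice, 7 = SPS, 8 = PPS
    }

print(parse_nal_header(0x67))  # 0x67 -> nal_ref_idc=3, nal_unit_type=7 (an SPS NAL unit)
```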
There are a variety of the NAL units, for example, an Instantaneous Decoding Refresh (IDR) picture, a Sequence Parameter Set (SPS), a Picture Parameter Set (PPS), and Supplemental Enhancement Information (SEI), etc. The standard has generally defined a target product using various profiles and levels, such that the target product can be implemented with appropriate costs. The decoder satisfies a predetermined constraint at a corresponding profile and level. The profile and the level are able to indicate a function or parameter of the decoder, such that they indicate which compressed images can be handled by the decoder. Specific information indicating which one of multiple profiles corresponds to the bitstream can be identified by profile ID information. The profile ID information "profile_idc" provides a flag for identifying a profile associated with the bitstream. The H.264/AVC standard includes three profile identifiers (IDs). If the profile ID information "profile_idc" is set to "66", the bitstream is based on the baseline profile. If the profile ID information "profile_idc" is set to "77", the bitstream is based on the main profile. If the profile ID information "profile_idc" is set to "88", the bitstream is based on the extended profile. The above-mentioned "profile_idc" information may be contained in the SPS (Sequence Parameter Set), for example.
Disclosure of Invention
In one aspect, in general, a method for decoding a video signal comprises: receiving a bitstream comprising the video signal encoded according to a first profile that represents a selection from a set of multiple profiles that includes at least one profile for a multiview video signal, and profile information that identifies the first profile; extracting the profile information from the bitstream; and decoding the video signal according to the determined profile using illumination compensation between segments of pictures in respective views when the determined profile corresponds to a multiview video signal with each of multiple views comprising multiple pictures segmented into multiple segments (e.g., an image block segment such as a single block or a macroblock, or a segment such as a slice of an image). Aspects can include one or more of the following features.
The method further comprises extracting from the bitstream configuration information associated with multiple views when the determined profile corresponds to a multiview video signal, wherein the configuration information comprises at least one of view-dependency information representing dependency relationships between respective views, view identification information indicating a reference view, view-number information indicating the number of views, view level information for providing view scalability, and view-arrangement information indicating a camera arrangement.
The profile information is located in a header of the bitstream.
The view level information corresponds to one of a plurality of levels associated with a hierarchical view prediction structure among the views of the multiview video signal. The view-dependency information represents the dependency relationships in a two-dimensional data structure.
The two-dimensional data structure comprises a matrix. The segments comprise image blocks.
Using illumination compensation for a first segment comprises obtaining an offset value for illumination compensation of a neighboring block by forming a sum that includes a predictor for illumination compensation of the neighboring block and a residual value.
The method further comprises selecting at least one neighboring block based on whether one or more conditions are satisfied for a neighboring block in an order in which one or more vertical or horizontal neighbors are followed by one or more diagonal neighbors.
Selecting at least one neighboring block comprises determining whether one or more conditions are satisfied for a neighboring block in the order of: a left neighboring block, followed by an upper neighboring block, followed by a right-upper neighboring block, followed by a left-upper neighboring block.
Determining whether one or more conditions are satisfied for a neighboring block comprises extracting a value associated with the neighboring block from the bitstream indicating whether illumination compensation of the neighboring block is to be performed.
Selecting at least one neighboring block comprises determining whether to use an offset value for illumination compensation of a single neighboring block or multiple offset values for illumination compensation of respective neighboring blocks.
In another aspect, in general, a method for decoding a multiview video signal comprises: receiving a bitstream comprising the multiview video signal encoded according to dependency relationships between respective views, and view-dependency data representing the dependency relationships; extracting the view-dependency data and determining the dependency relationships from the extracted data; and decoding the multiview video signal according to the determined dependency relationships using illumination compensation between segments of pictures in respective views, where the multiview video signal includes multiple views each comprising multiple pictures segmented into multiple segments.
Aspects can include one or more of the following features.
The view-dependency data represents the dependency relationships in a two-dimensional data structure.
The view-dependency data comprises a matrix.
The method further comprises extracting from the bitstream configuration information comprising at least one of view identification information indicating a reference view, view-number information indicating the number of views, view level information for providing view scalability, and view-arrangement information indicating a camera arrangement. The segments comprise image blocks.
Using illumination compensation for a first segment comprises obtaining an offset value for illumination compensation of a neighboring block by forming a sum that includes a predictor for illumination compensation of the neighboring block and a residual value.
The method further comprises selecting at least one neighboring block based on whether one or more conditions are satisfied for a neighboring block in an order in which one or more vertical or horizontal neighbors are followed by one or more diagonal neighbors.
Selecting at least one neighboring block comprises determining whether one or more conditions are satisfied for a neighboring block in the order of: a left neighboring block, followed by an upper neighboring block, followed by a right-upper neighboring block, followed by a left-upper neighboring block.
Determining whether one or more conditions are satisfied for a neighboring block comprises extracting a value associated with the neighboring block from the bitstream indicating whether illumination compensation of the neighboring block is to be performed.
Selecting at least one neighboring block comprises determining whether to use an offset value for illumination compensation of a single neighboring block or multiple offset values for illumination compensation of respective neighboring blocks.
The method further comprises, when multiple offset values are to be used, obtaining the predictor for performing illumination compensation of the first block by combining the multiple offset values.
Combining the multiple offset values comprises taking an average or median of the offset values.
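The neighboring-block selection and offset-combination rules above can be sketched as follows. The Block structure and the eligibility test (illumination compensation enabled and a matching reference index) are simplifications of the conditions described in the text; the checking order and the average/median combination are the ones named above.

```python
from dataclasses import dataclass
from statistics import median
from typing import Optional

@dataclass
class Block:
    ic_flag: int      # was illumination compensation performed for this block?
    ref_idx: int      # reference index of the block
    ic_offset: int    # reconstructed IC offset of the block

def offset_predictor(cur_ref_idx: int,
                     left: Optional[Block], up: Optional[Block],
                     up_right: Optional[Block], up_left: Optional[Block]) -> int:
    """Predictor for the current block's IC offset from its neighbors."""
    # Vertical/horizontal neighbors are examined before diagonal ones:
    # left, then upper, then right-upper, then left-upper.
    ordered = [left, up, up_right, up_left]
    eligible = [b for b in ordered
                if b is not None and b.ic_flag and b.ref_idx == cur_ref_idx]
    if not eligible:
        return 0
    if len(eligible) == 1:
        return eligible[0].ic_offset          # single neighboring offset
    # Multiple usable neighbors: combine their offsets; the text allows
    # an average or a median; a median is used here.
    return int(median(b.ic_offset for b in eligible))

# The decoder then reconstructs: current offset = predictor + decoded residual.
print(offset_predictor(0, Block(1, 0, 4), Block(1, 0, 6), Block(0, 0, 9), None))  # -> 5
```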
In another aspect, in general, for each respective decoding method, a method for encoding a video signal comprises generating a bitstream capable of being decoded into the video signal by the respective decoding method. For example, in another aspect, in general, a method for encoding a bitstream comprises: forming the bitstream according to a first profile that represents a selection from a set of multiple profiles that includes at least one profile for a multiview video signal, and profile information that identifies the first profile; and providing information for illumination compensation between segments of pictures in respective views when the determined profile corresponds to a multiview video signal with each of multiple views comprising multiple pictures segmented into multiple segments. In another aspect, in general, a method for encoding a bitstream comprises: forming the bitstream according to dependency relationships between respective views, and view-dependency data representing the dependency relationships; and providing information for illumination compensation between segments of pictures in respective views when the determined profile corresponds to a multiview video signal with each of multiple views comprising multiple pictures segmented into multiple segments.
In another aspect, in general, for each respective decoding method, a computer program, stored on a computer-readable medium, comprises instructions for causing a computer to perform the respective decoding method.
In another aspect, in general, for each respective decoding method, image data embodied on a machine-readable information carrier is capable of being decoded into a video signal by the respective decoding method.
In another aspect, in general, for each respective decoding method, a decoder comprises means for performing the respective decoding method. In another aspect, in general, for each respective decoding method, an encoder comprises means for generating a bitstream capable of being decoded into a video signal by the respective decoding method. Other features and advantages will become apparent from the following description, and from the claims.
Brief Description of Drawings
FIG. 1 is an exemplary decoding apparatus. FIG. 2 is a structural diagram illustrating a sequence parameter set RBSP syntax.
FIG. 3A is a structural diagram illustrating a bitstream including only one sequence.
FIG. 3B is a structural diagram illustrating a bitstream including two sequences .
FIGS. 4A-4C are diagrams illustrating exemplary Group Of GOP (GGOP) structures.
FIG. 5 is a flowchart illustrating a method for decoding a video sequence. FIGS. 6A-6B, 7A-7B, and 8 are diagrams illustrating examples of multiview-sequence prediction structures.
FIGS. 9A-9B are diagrams illustrating a hierarchical prediction structure between several viewpoints of multiview sequence data. FIGS. 10A-10B are diagrams illustrating a prediction structure of two-dimensional (2D) multiview sequence data.
FIGS. 11A-11C are diagrams illustrating a multiview sequence prediction structure. FIG. 12 is a diagram illustrating a hierarchical encoding/decoding system.
FIG. 13 is a flowchart illustrating a method for encoding a video sequence.
FIG. 14 is a block diagram illustrating a process for deriving a predicted average pixel value of a current block from reference blocks of other views.
FIG. 15 is a detailed block diagram illustrating a process for deriving a predicted average pixel value of a current block from reference blocks of other views. FIG. 16 is a diagram illustrating a 16x16 macroblock.
FIGS. 17A-17B are diagrams illustrating 16x8 macroblocks.
FIGS. 18A-18B are diagrams illustrating 8x16 macroblocks. FIGS. 19A-19B are diagrams illustrating 8x8 macroblocks.
FIG. 20 is a diagram illustrating a process for obtaining an offset value of a current block. FIG. 21 is a flowchart illustrating a process for performing illumination compensation of a current block.
FIG. 22 is a flowchart illustrating a method for obtaining a predictor by determining whether a reference index of a current block is equal to a reference index of a neighboring block.
FIG. 23 is a flow chart illustrating a method for performing illumination compensation on the basis of a prediction type of a current block according to the present invention.
FIG. 24 is a flow chart illustrating a method for performing illumination compensation using flag information indicating whether the illumination compensation of a block is performed. FIG. 25 is a flow chart illustrating a method for predicting flag information of a current block by determining whether a reference index of the current block is equal to a reference index of a neighboring block.
FIG. 26 is a flow chart illustrating a method for performing illumination compensation when a current block is predictively coded by two or more reference blocks.
FIG. 27 is a flow chart illustrating a method for performing illumination compensation using not only a flag indicating whether illumination compensation of a current block is performed, but also an offset value of a current block.
FIGS. 28A-28B are diagrams illustrating a method for performing illumination compensation using a flag and an offset value in association with blocks of P and B slices.
FIG. 29 is a flow chart illustrating a method for performing illumination compensation when a current block is predictively encoded by two or more reference blocks.
FIG. 30 is a flow chart illustrating a method for performing illumination compensation using a flag indicating whether illumination compensation of a current block is performed.
FIGS. 31A-31C are diagrams illustrating the scope of flag information indicating whether illumination compensation of a current block is performed.
FIG. 32 is a flow chart illustrating a method for obtaining a motion vector considering an offset value of a current block.
Best Mode for Carrying Out the Invention
In order to effectively handle a multiview sequence, an input bitstream includes information that allows a decoding apparatus to determine whether the input bitstream relates to a multiview profile. In cases where it is determined that the input bitstream relates to the multiview profile, supplementary information associated with the multiview sequence is added according to a syntax to the bitstream and transmitted to the decoder. For example, the multiview profile ID can indicate a profile mode for handling multiview video data according to an amendment of the H.264/AVC standard.
The MVC (Multiview Video Coding) technology is an amendment technology of the H.264/AVC standards. That is, a specific syntax is added as supplementary information for an MVC mode. Such amendment to support MVC technology can be more effective than an alternative in which an unconditional syntax is used. For example, if the profile identifier of the AVC technology is indicative of a multiview profile, the addition of multiview sequence information may increase a coding efficiency.
The sequence parameter set (SPS) of an H.264/AVC bitstream is indicative of header information including information (e.g., a profile, and a level) associated with the entire-sequence encoding.
The entire compressed moving images (i.e., a sequence) can begin at a sequence header, such that a sequence parameter set (SPS) corresponding to the header information arrives at the decoder earlier than data referred to by the parameter set. As a result, the sequence parameter set RBSP acts as header information of compressed data of moving images at entry S1 (FIG. 2). If the bitstream is received, the profile ID information "profile_idc" identifies which one of profiles from among several profiles corresponds to the received bitstream.
The profile ID information "profile_idc" can be set, for example, to "MULTI_VIEW_PROFILE", so that the syntax including the profile ID information can determine whether the received bitstream relates to a multiview profile. The following configuration information can be added when the received bitstream relates to the multiview profile.
FIG. 1 is a block diagram illustrating an exemplary decoding apparatus (or "decoder") of a multiview video system for decoding a video signal containing a multiview video sequence. The multiview video system includes a corresponding encoding apparatus (or "encoder") to provide the multiview video sequence as a bitstream that includes encoded image data embodied on a machine-readable information carrier (e.g., a machine-readable storage medium, or a machine-readable energy signal propagating between a transmitter and receiver).
Referring to FIG. 1, the decoding apparatus includes a parsing unit 10, an entropy decoding unit 11, an Inverse Quantization/Inverse Transform unit 12, an inter-prediction unit 13, an intra-prediction unit 14, a deblocking filter 15, and a decoded-picture buffer 16.
The inter-prediction unit 13 includes a motion compensation unit 17, an illumination compensation unit 18, and an illumination-compensation offset prediction unit 19. The parsing unit 10 performs a parsing of the received video sequence in NAL units to decode the received video sequence. Typically, one or more sequence parameter sets and picture parameter sets are transmitted to a decoder before a slice header and slice data are decoded. In this case, the NAL header or an extended area of the NAL header may include a variety of configuration information, for example, temporal level information, view level information, anchor picture ID information, and view ID information, etc.
The term "time level information" is indicative of hierarchical-structure information for providing temporal scalability from a video signal, such that sequences of a variety of time zones can be provided to a user via the above-mentioned temporal level information.
The term "view level information" is indicative of hierarchical-structure information for providing view scalability from the video signal. The multiview video sequence can define the temporal level and view level, such that a variety of temporal sequences and view sequences can be provided to the user according to the defined temporal level and view level . In this way, if the level information is defined as described above, the user may employ the temporal scalability and the view scalability. Therefore, the user can view a sequence corresponding to a desired time and view, or can view a sequence corresponding to another limitation. The above-mentioned level information may also be established in various ways according to reference conditions. For example, the level information may be changed according to a camera location, and may also be changed according to a camera arrangement type. In addition, the level information may also be arbitrarily established without a special reference.
The term "anchor picture" is indicative of an encoded picture in which all slices refer to only slices in a current view and not slices in other views. A random access between views can be used for multiview-sequence decoding.
Anchor picture ID information can be used to perform the random access process to access data of a specific view without requiring a large amount of data to be decoded. The term "view ID information" is indicative of specific information for discriminating between a picture of a current view and a picture of another view. In order to discriminate one picture from other pictures when the video sequence signal is encoded, a Picture Order Count
(POC) and frame number information (frame_num) can be used.
If a current sequence is determined to be a multiview video sequence, inter-view prediction can be performed. An identifier is used to discriminate a picture of the current view from a picture of another view.
A view identifier can be defined to indicate a picture's view. The decoding apparatus can obtain information of a picture in a view different from a view of the current picture using the above-mentioned view identifier, such that it can decode the video signal using the information of the picture. The above-mentioned view identifier can be applied to the overall encoding/decoding process of the video signal. Also, the above-mentioned view identifier can also be applied to the multiview video coding process using the frame number information "frame_num" considering a view.
Typically, the multiview sequence has a large amount of data, and a hierarchical encoding function of each view (also called a "view scalability") can be used for processing the large amount of data. In order to perform the view scalability function, a prediction structure considering views of the multiview sequence may be defined.
The above-mentioned prediction structure may be defined by structuralizing the prediction order or direction of several view sequences. For example, if several view sequences to be encoded are given, a center location of the overall arrangement is set to a base view, such that view sequences to be encoded can be hierarchically selected. The end of the overall arrangement or other parts may be set to the base view.
If the number of camera views is denoted by an exponential power of "2", a hierarchical prediction structure between several view sequences may be formed on the basis of the above-mentioned case of the camera views denoted by the exponential power of "2". Otherwise, if the number of camera views is not denoted by the exponential power of "2", virtual views can be used, and the prediction structure may be formed on the basis of the virtual views. If the camera arrangement is indicative of a two-dimensional arrangement, the prediction order may be established by turns in a horizontal or vertical direction.

A parsed bitstream is entropy-decoded by an entropy decoding unit 11, and data such as a coefficient of each macroblock, a motion vector, etc., are extracted. The inverse quantization/inverse transform unit 12 multiplies a received quantization value by a predetermined constant to acquire a transformed coefficient value, and performs an inverse transform of the acquired coefficient value, such that it reconstructs a pixel value. The inter-prediction unit 13 performs an inter-prediction function from decoded samples of the current picture using the reconstructed pixel value.

At the same time, the deblocking filter 15 is applied to each decoded macroblock to reduce the degree of block distortion. The deblocking filter 15 performs a smoothing of the block edge, such that it improves an image quality of the decoded frame. The selection of a filtering process is dependent on a boundary strength and a gradient of image samples arranged in the vicinity of the boundary. The filtered pictures are stored in the decoded picture buffer 16, such that they can be outputted or be used as reference pictures.

The decoded picture buffer 16 stores or outputs pre-coded pictures to perform the inter-prediction function. In this case, frame number information "frame_num" and POC (Picture Order Count) information of the pictures are used to store or output the pre-coded pictures. Pictures of other views may exist among the above-mentioned pre-coded pictures in the case of the MVC technology. Therefore, in order to use the above-mentioned pictures as reference pictures, not only the "frame_num" and POC information but also a view identifier indicating a picture view may be used as necessary.
The inter-prediction unit 13 performs the inter-prediction using the reference pictures stored in the decoded picture buffer 16. The inter-coded macroblock may be divided into macroblock partitions. Each macroblock partition can be predicted by one or two reference pictures.
The motion compensation unit 17 compensates for a motion of the current block using the information received from the entropy decoding unit 11. The motion compensation unit 17 extracts motion vectors of neighboring blocks of the current block from the video signal, and obtains a motion-vector predictor of the current block. The motion compensation unit 17 compensates for the motion of the current block using a motion-vector difference value extracted from the video signal and the obtained motion-vector predictor. The above-mentioned motion compensation may be performed by only one reference picture, or may also be performed by a plurality of reference pictures. Therefore, if the above-mentioned reference pictures are determined to be pictures of other views different from the current view, the motion compensation may be performed according to a view identifier indicating the other views. A direct mode is indicative of a coding mode for predicting motion information of the current block on the basis of the motion information of a block which is completely decoded. The above-mentioned direct mode can reduce the number of bits required for encoding the motion information, resulting in the increased compression efficiency.
For example, a temporal direct mode predicts motion information of the current block using a correlation of motion information of a temporal direction. Similar to the temporal direct mode, the decoder can predict the motion information of the current block using a correlation of motion information of a view direction.
If the received bitstream corresponds to a multiview sequence, view sequences may be captured by different cameras respectively, such that a difference in illumination may occur due to internal or external factors of the cameras. In order to reduce potential inefficiency associated with the difference in illumination, an illumination compensation unit 18 performs an illumination compensation function.
In the case of performing the illumination compensation function, flag information may be used to indicate whether an illumination compensation at a specific level of a video signal is performed. For example, the illumination compensation unit 18 may perform the illumination compensation function using flag information indicating whether the illumination compensation of a corresponding slice or macroblock is performed. Also, the above-mentioned method for performing the illumination compensation using the above-mentioned flag information may be applied to a variety of macroblock types (e.g., an inter 16x16 mode, a B-skip mode, a direct mode, etc.).
In order to reconstruct the current block when performing the illumination compensation, information of a neighboring block or information of a block in views different from a view of the current block may be used, and an offset value of the current block may also be used.
In this case, the offset value of the current block is indicative of a difference value between an average pixel value of the current block and an average pixel value of a reference block corresponding to the current block. As an example for using the above-mentioned offset value, a predictor of the current-block offset value may be obtained by using the neighboring blocks of the current block, and a residual value between the offset value and the predictor may be used. Therefore, the decoder can reconstruct the offset value of the current block using the residual value and the predictor.
In order to obtain the predictor of the current block, information of the neighboring blocks may be used as necessary.
For example, the offset value of the current block can be predicted by using the offset value of a neighboring block. Prior to predicting the current-block offset value, it is determined whether the reference index of the current block is equal to a reference index of the neighboring blocks. According to the determined result, the illumination compensation unit 18 can determine which one of neighboring blocks will be used or which value will be used.
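A minimal sketch of the reconstruction just described, under the additive model above (the offset is a difference of average pixel values); the function names and the list-of-rows block layout are illustrative, not part of the specification.

```python
def reconstruct_offset(predictor: int, residual: int) -> int:
    """The decoder reconstructs the current block's offset from the
    predictor (derived from neighboring blocks) and the coded residual."""
    return predictor + residual

def compensate_block(ref_block, offset: int):
    """Additive illumination compensation of a motion-compensated block."""
    return [[pixel + offset for pixel in row] for row in ref_block]

ref = [[90, 92], [91, 93]]
print(compensate_block(ref, reconstruct_offset(8, 2)))  # block brightened by 10
```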
The illumination compensation unit 18 may perform the illumination compensation using a prediction type of the current block. If the current block is predictively encoded by two reference blocks, the illumination compensation unit 18 may obtain an offset value corresponding to each reference block using the offset value of the current block. As described above, the inter-predicted pictures or intra-predicted pictures acquired by the illumination compensation and motion compensation are selected according to a prediction mode, and the current picture is reconstructed. A variety of examples of encoding/decoding methods for reconstructing a current picture are described later in this document.

FIG. 2 is a structural diagram illustrating a sequence parameter set RBSP syntax.
Referring to FIG. 2, a sequence parameter set is indicative of header information including information (e.g., a profile, and a level) associated with the entire- sequence encoding.
The entire compressed sequence can begin at a sequence header, such that a sequence parameter set corresponding to the header information arrives at the decoder earlier than data referring to the parameter set. As a result, the sequence parameter set (RBSP) acts as header information associated with resultant data of compressed moving images at step S1. If the bitstream is received, "profile_idc" information determines which one of profiles from among several profiles corresponds to the received bitstream at step S2. For example, if "profile_idc" is set to "66", this indicates the received bitstream is based on a baseline profile. If "profile_idc" is set to "77", this indicates the received bitstream is based on a main profile. If "profile_idc" is set to "88", this indicates the received bitstream is based on an extended profile. A step S3 uses the syntax "if (profile_idc == MULTI_VIEW_PROFILE)" to determine whether the received bitstream relates to a multiview profile.
If the received bitstream relates to the multiview profile at step S3, a variety of information of the multiview sequence can be added to the received bitstream.
The "reference__view" information represents a reference view of an entire view, and may add information associated with the reference view to the bitstream. Generally, the MVC technique encodes or decodes a reference view sequence using an encoding scheme capable of being used for a single sequence (e.g., the H.264/AVC codec). If the reference view is added to the syntax, the syntax indicates which one of views from among several views will be set to the reference view. A base view acting as an encoding reference acts as the above-mentioned reference view. Images of the reference-view are independently encoded without referring to an image of another-view.
The number of views (num_views) may add specific information indicating the number of views of a multiview sequence captured by several cameras. The view number (num_views) of each sequence may be set in various ways. The "num_views" information is transmitted to an encoder and a decoder, such that the encoder and the decoder can freely use the "num_views" information at step S5.
Camera arrangement (view_arrangement) indicates an arrangement type of cameras when a sequence is acquired. If the "view_arrangement" information is added to the syntax, the encoding process can be effectively performed to be appropriate for individual arrangements. Thereafter, if a new encoding method is developed, different "view_arrangement" information can be used.
The number of frames "temporal_units_size" indicates the number of successively encoded/decoded frames of each view. If required, specific information indicating the number of frames may also be added. In more detail, provided that a current N-th view is being encoded/decoded, and an M-th view will be encoded/decoded at the next time, the "temporal_units_size" information indicates how many frames will be firstly processed at the N-th view before the M-th view is then processed. By the "temporal_units_size" information and the "num_views" information, the system can determine which one of views from among several views corresponds to each frame. If a first length from the I slice to the P slice of each view sequence, a second length between the P slices, or the length corresponding to a multiple of the first or second length is set to the "temporal_units_size" information, the "temporal_units_size" information may be processed at only one view, and may go to the next view. The "temporal_units_size" information may be equal to or less than the conventional GOP length. For example, FIGS. 4B-4C show the GGOP structure for explaining the "temporal_units_size" concept. In this case, in FIG. 4B, the "temporal_units_size" information is set to "3". In FIG. 4C, the "temporal_units_size" information is set to "1". In some examples, the MVC method arranges several frames on a time axis and a view axis, such that it may process a single frame of each view at the same time value, and may then process a single frame of each view at the next time value, corresponding to a "temporal_units_size" of "1". Alternatively, the MVC method may process N frames at the same view, and may then process the N frames at the next view, corresponding to a "temporal_units_size" of "N". Since generally at least one frame is processed, "temporal_units_size_minus1" may be added to the syntax to represent how many additional frames are processed. Thus, the above-mentioned examples may be denoted by "temporal_units_size_minus1 = 0" and "temporal_units_size_minus1 = N-1", respectively, at step S7.
The profiles of the conventional encoding scheme have no common profile, such that a flag is further used to indicate compatibility. "constraint_set*_flag" information indicates which one of profiles can decode the bitstream using a decoder. "constraint_set0_flag" information indicates that the bitstream can be decoded by a decoder of the baseline profile at step S8. "constraint_set1_flag" information indicates that the bitstream can be decoded by a decoder of the main profile at step S9. "constraint_set2_flag" information indicates that the bitstream can be decoded by a decoder of the extended profile at step S10. Therefore, there is a need to define the "MULTI_VIEW_PROFILE" decoder, and the "MULTI_VIEW_PROFILE" decoder may be defined by "constraint_set4_flag" information at step S11.
The "level_idc" information indicates a level identifier. The "level" generally indicates the capability of the decoder and the complexity of the bitstream, and relates to technical elements prescribed in the above-mentioned profiles at step S12.
The "seq_parameter_set_id" information indicates SPS (Sequence Parameter Set) ID information contained in the SPS (sequence parameter set) in order to identify sequence types at step S13.
FIG. 3A is a structural diagram illustrating a bitstream including only one sequence.
Referring to FIG. 3A, the sequence parameter set (SPS) is indicative of header information including information (e.g., a profile, and a level) associated with the entire-sequence encoding. The supplemental enhancement information (SEI) is indicative of supplementary information, which is not required for the decoding process of a moving-image (i.e., sequence) encoding layer. The picture parameter set (PPS) is header information indicating an encoding mode of the entire picture. The I slice performs only an intra coding process. The P slice performs the intra coding process or the inter prediction coding process. The picture delimiter indicates a boundary between video pictures. The system applies the SPS RBSP syntax to the above-mentioned SPS. Therefore, the system employs the above-mentioned syntax during the generation of the bitstream, such that it can add a variety of information to a desired object.

FIG. 3B is a structural diagram illustrating a bitstream including two sequences.
Referring to FIG. 3B, the H.264/AVC technology can handle a variety of sequences using a single bitstream. The SPS includes SPS ID information (seq_parameter_set_id) so as to identify a sequence. The SPS ID information is prescribed in the PPS (Picture Parameter Set), such that it can be identified which one of sequences includes the picture. Also, the PPS ID information (pic_parameter_set_id) is prescribed in the slice header, such that the "pic_parameter_set_id" information can identify which one of PPSs will be used.
For example, a header of the slice #1 of FIG. 3B includes PPS ID information (pic_parameter_set_id) to be referred, as denoted by ①. The PPS #1 includes the referred SPS ID information (SPS = 1), as denoted by ②. Therefore, it can be recognized that the slice #1 belongs to the sequence #1. In this way, it can also be recognized that the slice #2 belongs to the sequence #2, as denoted by ③ and ④. Indeed, the baseline profile and the main profile are added and edited to create a new video bitstream. In this case, two bitstreams are assigned different SPS ID information. Any one of the two bitstreams may also be converted into a multiview profile as necessary.
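The ID chaining just described (slice header to PPS, PPS to SPS) amounts to two table lookups; a minimal sketch with made-up table contents follows.

```python
# Which sequence does a slice belong to? Resolve its PPS, then the SPS
# that the PPS names. The dictionaries stand in for decoded parameter sets.
sps_table = {1: {"profile": "baseline"}, 2: {"profile": "main"}}
pps_table = {1: {"seq_parameter_set_id": 1}, 2: {"seq_parameter_set_id": 2}}

def sequence_of_slice(slice_header: dict) -> int:
    pps = pps_table[slice_header["pic_parameter_set_id"]]
    return pps["seq_parameter_set_id"]

print(sequence_of_slice({"pic_parameter_set_id": 1}))  # slice #1 -> sequence #1
print(sequence_of_slice({"pic_parameter_set_id": 2}))  # slice #2 -> sequence #2
```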
FIG. 4A shows an exemplary Group Of GOP (GGOP) structure. FIG. 4B and FIG. 4C show a GGOP structure for explaining a "temporal_units_size" concept. The GOP is indicative of a data group of some pictures. In order to effectively perform the encoding process, the MVC uses the GGOP concept to perform spatial prediction and temporal prediction.
If a first length between the I slice and the P slice of each view sequence, a second length between the P slices, or a third length corresponding to a multiple of the first or second length is set to the "temporal_units_size" information, the "temporal_units_size" information may be processed at only one view, and may go to the next view. The "temporal_units_size" information may be equal to or less than the conventional GOP length. For example, in FIG. 4B, the "temporal_units_size" information is set to "3". In FIG. 4C, the "temporal_units_size" information is set to "1". Specifically, in FIG. 4B, if the "temporal_units_size" information is denoted by "temporal_units_size > 1", and one or more views begin at the I frame, (temporal_units_size + 1) frames can be processed. Also, the system can recognize which one of views from among several views corresponds to each frame of the entire sequence by referring to the above-mentioned "temporal_units_size" and "num_views" information.
In FIG. 4A, individual frames are arranged on a time axis and a view axis. Pictures of V1~V8 indicate GOPs, respectively. The V4 acting as a base GOP is used as a reference GOP of other GOPs. If the "temporal_units_size" information is set to "1", the MVC method processes frames of individual views at the same time zone, and then can re-process the frames of the individual views at the next time zone. Pictures of T1~T4 indicate frames of individual views at the same time zone. In other words, the MVC method can firstly process the T1 frames, and then can process a plurality of frames in the order of T4 -> T2 -> T3 -> .... If the "temporal_units_size" information is set to "N", the MVC method may firstly process N frames in the direction of the time axis within a single view, and may process the N frames at the next view. In other words, if the "temporal_units_size" information is set to "4", the MVC method may firstly process frames contained in the T1~T4 frames of the V4 GOP, and then may process a plurality of frames in the order of V1 -> V2 -> V3 -> ....
Therefore, in the case of generating the bitstream in FIG. 4A, the number of views (num_views) is set to "8", and the reference view is set to the V4 GOP (Group Of Pictures). The number of frames (temporal_units_size) indicates the number of successively encoded/decoded frames of each view. Therefore, if the frames of each view are processed at the same time zone in FIG. 4A, the "temporal_units_size" information is set to "1". If the frames are processed in the direction of the time axis within a single view, the "temporal_units_size" information is set to "N". The above-mentioned information is added to the bitstream generating process.
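The traversal implied by "temporal_units_size" can be written down directly: process that many frames along the time axis within one view, then move to the next view. The sketch below uses a plain left-to-right view scan, a simplification of the reference-dependent ordering (e.g., the V4 GOP first) shown in FIG. 4A.

```python
def processing_order(num_views: int, num_frames: int, temporal_units_size: int):
    """Yield (view, time) pairs in the order frames are encoded/decoded."""
    order = []
    for t0 in range(0, num_frames, temporal_units_size):
        for v in range(num_views):                       # next view...
            for t in range(t0, min(t0 + temporal_units_size, num_frames)):
                order.append((v, t))                     # ...N frames along time
    return order

# temporal_units_size = 1: one frame of every view per time value, then advance.
print(processing_order(num_views=3, num_frames=2, temporal_units_size=1))
# -> [(0, 0), (1, 0), (2, 0), (0, 1), (1, 1), (2, 1)]
```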
FIG. 5 is a flow chart illustrating a method for decoding a video sequence.
Referring to FIG. 5, one or more items of profile information are extracted from the received bitstream. In this case, the extracted profile information may be at least one of several profiles (e.g., the baseline profile, the main profile, and the multiview profile). The above-mentioned profile information may be changed according to input video sequences at step S51. At least one configuration information contained in the above-mentioned profile is extracted based on the extracted profile information. For example, if the extracted profile information relates to the multiview profile, one or more configuration information (i.e., "reference_view", "view_arrangement", and "temporal_units_size" information) contained in the multiview profile is extracted at step S53. In this way, the above-mentioned extracted information is used for decoding the multiview-coded bitstream.
FIGS. 6A-6B are conceptual diagrams illustrating a multiview-sequence prediction structure according to a first example.
Referring to FIGS. 6A-6B, provided that the number (m) of several viewpoints (i.e., the multiview number) is set to 2^n (i.e., m = 2^n), if n=0, the multiview number (m) is set to "1". If n=1, the multiview number (m) is set to "2". If n=2, the multiview number (m) is set to "4". If n=3, the multiview number (m) is set to "8". Therefore, if the multiview number (m) satisfies 2^(n-1) < m ≤ 2^n, the bitstream includes a single base-view bitstream and n hierarchical auxiliary-view bitstreams.
Specifically, the term "base view" is indicative of a reference view from among several viewpoints (i.e., the multiview). In other words, a sequence (i.e., moving images) corresponding to the base view is encoded by general video encoding schemes (e.g., MPEG-2, MPEG-4, H.263, and H.264, etc.), such that it is generated in the form of an independent bitstream. For the convenience of description, this independent bitstream is referred to as a "base-view bitstream".
The term "auxiliary view" is indicative of the remaining view other than the above-mentioned base view from among several viewpoints (i.e., the multiview) . In other words, the sequence corresponding to the auxiliary view forms a bitstream by performing disparity estimation of the base-view sequence, and this bitstream is referred to as "auxiliary-view bitstream" . In the case of performing a hierarchical encoding process (i.e., a view scalability process) between several viewpoints (i.e., the multiview), the above-mentioned auxiliary-view bitstream is classified into a first auxiliary-view bitstream, a second auxiliary-view bistream, and a n-th auxiliary-view bistream.
The term "bitstream" may include the above-mentioned base-view bitstream and the above-mentioned auxiliary-view bitstream as necessary.
For example, if the multiview number (m) is set to "8" (n=3), the bitstream includes a single base-view and three hierarchical auxiliary-views. If the bitstream includes the single base-view and n hierarchical auxiliary- views, it is preferable that a location to be the base-view from among the multiview and a location to be each hierarchical auxiliary-view are defined by general rules. For reference, square areas of FIGS. 6A-6B indicate individual viewpoints. As for numerals contained in the square areas, the number "0" is indicative of a base-view, the number "1" is indicative of a first hierarchical auxiliary-view, the number "2" is indicative of a second hierarchical auxiliary-view, and the number "3" is indicative of a third hierarchical auxiliary-view. In this example of FIGS. 6A-6B, a maximum of 8 viewpoints are exemplarily disclosed as the multiview video sequence, however, it should be noted that the multiview number is not limited to "8" and any multiview number is applicable to other examples as necessary.
Referring to FIG. 6A, respective base-views and respective auxiliary-views are determined by the following rules. Firstly, the location of the base-view is set to a 2^(n-1)-th view. For example, if n=3, the base-view is set to a fourth view. FIGS. 6A-6B show an exemplary case in which the beginning view is located at the rightmost side. A specific view corresponding to a fourth order from the rightmost view 61 is used as the base-view. Preferably, the base-view location may be located at a specific location in the vicinity of a center view from among the multiview or may be set to the center view from among the multiview, because the base-view may be used as a reference for performing the predictive coding (or predictive encoding) process of other auxiliary-views.
For another example, the leftmost view is always set to the beginning view, and the number (m) of viewpoints (i.e., the multiview number) may be arranged in the order of m=0 -> m=1 -> m=2 -> m=3, .... For example, if n=3, the 2^(n-1)-th multiview number (i.e., m=4) may be set to the base-view. The first hierarchical auxiliary-view location may be set to a left-side view spaced apart from the above-mentioned base-view by a 2^(n-2)-th magnitude, or a right-side view spaced apart from the above-mentioned base-view by the 2^(n-2)-th magnitude. For example, FIG. 6A shows an exemplary case in which a viewpoint spaced apart from the base view in the left direction by the 2^(n-2)-th view (i.e., two viewpoints in the case of n=3) is determined to be the first hierarchical auxiliary-view. Otherwise, FIG. 6B shows an exemplary case in which a viewpoint spaced apart from the base view in the right direction by the 2^(n-2)-th view is determined to be the first hierarchical auxiliary-view. In the above-mentioned example, the number of the first hierarchical auxiliary-views is set to "1".
The second hierarchical auxiliary-view location may be set to a left-side view spaced apart from the base-view by a 2^(n-2)-th magnitude, or a right-side view spaced apart from the first hierarchical auxiliary-view by the 2^(n-2)-th magnitude. For example, the above-mentioned case of FIG. 6A generates two second hierarchical auxiliary-views. Since the above-mentioned case of FIG. 6B has no view spaced apart from the first hierarchical auxiliary-view in the right direction by the 2^(n-2)-th magnitude, a viewpoint spaced apart from the base-view in the left direction by the 2^(n-2)-th magnitude is determined to be the second hierarchical auxiliary-view.

A viewpoint spaced apart from the second hierarchical auxiliary-view in the left direction by the 2^(n-2)-th magnitude may also be determined to be the second hierarchical auxiliary-view 63. However, if the viewpoint corresponds to either end of the multiview, the above-mentioned viewpoint may be determined to be the third hierarchical auxiliary-view. One or two second hierarchical auxiliary-views may be generated in the case of FIG. 6B.
Finally, the third hierarchical auxiliary-view location is set to the remaining viewpoints other than the above-mentioned viewpoints having been selected as the base-view and the first and second hierarchical auxiliary-views. In FIG. 6A, four third hierarchical auxiliary-views are generated. In FIG. 6B, four or five third hierarchical auxiliary-views are generated.
FIGS. 7A-7B are conceptual diagrams illustrating a multiview-sequence prediction structure according to a second example.
The second example of FIGS. 7A-7B is conceptually similar to the above-mentioned first example of FIGS. 6A-6B, however, it should be noted that FIGS. 7A-7B show that the beginning-view for selecting the base-view is located at the leftmost side, differently from FIGS. 6A-6B. In other words, a fourth view spaced apart from the leftmost side 65 is selected as the base-view. In FIGS. 7A-7B, the remaining parts other than the above-mentioned difference are the same as those of FIGS. 6A-6B.
FIG. 8 is a conceptual diagram illustrating a multiview-sequence prediction structure according to a third example.
The third example of FIG. 8 shows an exemplary case in which the multiview number (m) satisfies 2^(n-1) < m ≤ 2^n. In more detail, FIG. 8 shows a variety of cases denoted by m=5, m=6, m=7, and m=8. If m=5, 6, and 7, the multiview number (m) does not satisfy the condition of m = 2^n, such that the system has difficulty in implementing the above-mentioned first example of FIGS. 6A-6B and the above-mentioned second example of FIGS. 7A-7B without any change. In order to solve the above-mentioned problem, the system applies a virtual-view concept, such that the above-mentioned problem is obviated by the virtual-view concept.
For example, if 2^(n-1) < m < 2^n, (2^n - m) virtual-views are generated. If the multiview number (m) is an odd number, (2^n - m + 1)/2 virtual-views are generated at the left side (or the right side) of the multiview arrangement, and (2^n - m - 1)/2 virtual-views are generated at the right side (or the left side) of the multiview arrangement. If the multiview number (m) is an even number, (2^n - m)/2 virtual-views are generated at the left side and the right side of the multiview arrangement, respectively. The above-mentioned prediction structure can then be applied to the resultant virtual views in the same manner.
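The padding rule can be sketched as follows (illustrative helper; the left/right assignment may be mirrored, as noted above):

    import math

    def virtual_view_padding(m):
        """Virtual views added on the (left, right) side for 2^(n-1) < m < 2^n,
        following the rule above (sketch; the side assignment may be mirrored).
        """
        n = math.ceil(math.log2(m))
        total = 2 ** n - m
        if m % 2:                        # odd m: (2^n-m+1)/2 and (2^n-m-1)/2
            return (total + 1) // 2, (total - 1) // 2
        return total // 2, total // 2    # even m: (2^n-m)/2 on each side

    for m in (5, 6, 7):
        print(m, virtual_view_padding(m))   # (2, 1), (1, 1), (1, 0)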
For example, if the multiview number (m) is set to "5", the multiview of m=8 is virtually formed by adding one or two virtual-views to both ends of the multiview, respectively, and the base-view location and three hierarchical auxiliary-view locations are selected. As can be seen from FIG. 8, two virtual-views are added to the end of the left side, and a single virtual-view is added to the end of the right side, such that the base-view and the first to third hierarchical auxiliary-views are selected according to the above-mentioned example of FIG. 6A.
For example, if the multiview number (m) is set to "6", the multiview of m=8 is virtually formed by adding a single virtual-view to both ends of the multiview, and the base-view location and three hierarchical auxiliary-view locations are selected, respectively. As can be seen from FIG. 8, the base-view and the first to third hierarchical auxiliary-views are selected according to the above- mentioned example of FIG. 6A.
For example, if the multiview number (m) is set to λX7", the multiview of m=8 is virtually formed by adding a single virtual-view to any one of both ends of the multiview, and the base-view location and three hierarchical auxiliary-view locations are selected, respectively. For example, as shown in FIG. 8, a single virtual-view is added to the end of the left side, such that the base-view and the first to third hierarchical auxiliary-views are selected according to the above- mentioned example of FIG. 6A.
FIGS. 9A-9B are conceptual diagrams illustrating a hierarchical prediction structure between several viewpoints of multiview sequence data. For example, FIG. 9A shows the implementation example of the case of FIG. 6A, and FIG. 9B shows the implementation example of the case of FIG. 7A. In more detail, if the multiview number (m) is set to "8", the base-view and three hierarchical auxiliary- views are provided, such that the hierarchical encoding (or "view scalability") between several viewpoints is made available during the encoding of the multiview sequence.
Individual pictures implemented by the above-mentioned hierarchical auxiliary-view bitstreams are estimated/predicted on the basis of a picture of the base-view and/or a picture of an upper hierarchical auxiliary-view, such that the encoding of the resultant pictures is performed. Specifically, disparity estimation is generally used as the above-mentioned estimation. For example, the first hierarchical auxiliary-view 92 performs the estimation/encoding process between viewpoints (i.e., the estimation/encoding process of the multiview) by referring to the base-view 91. The second hierarchical auxiliary-views (93a and 93b) perform the estimation/encoding process between viewpoints by referring to the base-view 91 and/or the first hierarchical auxiliary-view 92. The third hierarchical auxiliary-views (94a, 94b, 94c, and 94d) perform the estimation/encoding process between viewpoints by referring to the base-view 91, the first hierarchical auxiliary-view 92, and/or the second hierarchical auxiliary-views (93a and 93b). In association with the above-mentioned description, the arrows in the drawings indicate the progressing directions of the above-mentioned estimation/encoding process of the multiview, and it can be recognized that auxiliary streams contained in the same hierarchy may refer to different views as necessary. The above-mentioned hierarchically-encoded bitstream is selectively decoded at the reception end according to display characteristics, and a detailed description thereof will be given later with reference to FIG. 12.
Generally, the prediction structure may be changed by the encoder, such that information indicating the prediction relationship of the individual view images is transmitted to allow the decoder to easily recognize the prediction structure. Also, specific information, indicating which level of the entire view hierarchy includes each individual view, may also be transmitted to the decoder.
Provided that the view level (view_level) is assigned to the respective images (or slices) and a dependency relationship between the view images is given, even if the prediction structure is changed in various ways by the encoder, the decoder can easily recognize the changed prediction structure. In this case, the prediction structure/direction information of the respective views may be configured in the form of a matrix, such that the matrix-type prediction structure/direction information is transmitted to a destination. In other words, the number of views (num_view) is transmitted to the decoder, and the dependency relationship of the respective views may be represented by a two-dimensional (2D) matrix. If the dependency relationship of the views changes in time, for example, if the dependency relationship of the first frames of each GOP is different from that of the frames of the remaining time zones, the dependency-relationship matrix information associated with each individual case may be transmitted.
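As an illustrative sketch only (the variable names are assumptions, not normative syntax elements), the view count and the 2D dependency matrix might be represented as follows:

    import numpy as np

    # num_view and dep[i][j] = 1 when view i is predicted from view j; the
    # element names are assumptions for illustration, not normative syntax.
    num_view = 4
    dep = np.zeros((num_view, num_view), dtype=int)
    dep[1][0] = 1               # view 1 refers to the base-view 0
    dep[2][0] = dep[2][1] = 1   # view 2 refers to views 0 and 1
    dep[3][1] = 1               # view 3 refers to view 1
    # If the dependency changes in time (e.g., for the first frame of each
    # GOP), one such matrix per case would be transmitted.
    print(dep)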
FIGS. 10A-10B are conceptual diagrams illustrating a prediction structure of a two-dimensional (2D) multiview sequence according to a fourth example.
The above-mentioned first to third examples have disclosed the multiview of a one-dimensional array as examples. It should be noted that they can also be applied to a two-dimensional (2D) multiview sequence as necessary.
In FIGS. 10A-10B, squares indicate individual views arranged in the form of a 2D array, and numerals contained in the squares indicate the relationship of hierarchical views.
For example, if the square number is configured in the form of "A-B", "A" indicates a corresponding hierarchical auxiliary-view, and "B" indicates priority in the same hierarchical auxiliary-view.
As for numerals contained in the square areas, the number "0" is indicative of a base-view, the number "1" is indicative of a first hierarchical auxiliary-view, the number "2-1" or "2-2" is indicative of a second hierarchical auxiliary-view, the number "3-1" or "3-2" is indicative of a third hierarchical auxiliary-view, the number "4-1", "4-2" or "4-3" is indicative of a fourth hierarchical auxiliary-view, and the number "5-1", "5-2", or "5-3" is indicative of a fifth hierarchical auxiliary- view.
In conclusion, in the case of generating a bitstream by encoding images acquired from the two-dimensional (2D) multiview, if the 2D multiview number (m) on a horizontal axis satisfies 2^(n-1) < m ≤ 2^n and the 2D multiview number (p) on a vertical axis satisfies 2^(k-1) < p ≤ 2^k, the above-mentioned bitstream includes a single base-view bitstream and (n+k) hierarchical auxiliary-view bitstreams.
In more detail, the above-mentioned (n+k) hierarchical auxiliary-views are formed alternately on the horizontal axis and the vertical axis. For example, a first hierarchical auxiliary-view from among the (n+k) hierarchical auxiliary-views in FIG. 10A is positioned at the vertical axis including the base-view. A first hierarchical auxiliary-view from among the (n+k) hierarchical auxiliary-views in FIG. 10B is positioned at the horizontal axis including the base-view.
For example, as shown in FIG. 10A, if the multiview number of the horizontal axis (m) is set to "8" (i.e., n=3), and the multiview number of the vertical axis (p) is set to "4" (i.e., k=2), the bitstream includes a single base-view and five hierarchical auxiliary-views. In association with the above-mentioned description, FIG. 10A shows that the hierarchical auxiliary-views are selected in the order of "vertical axis -> horizontal axis -> vertical axis -> ...". A method for determining locations of the base-view and the auxiliary-views will hereinafter be described as follows.
Firstly, the base-view location is determined in the same manner as in the above-mentioned one-dimensional array. Therefore, the base-view location is determined to be a specific view corresponding to a 2^(n-1)-th location in the direction of the horizontal axis and a 2^(k-1)-th location in the direction of the vertical axis.
The first hierarchical auxiliary-view location is determined to be a top-side view or bottom-side view spaced apart from the base-view location in the direction of the vertical axis by the 2^(k-2)-th magnitude, as denoted by ①. The second hierarchical auxiliary-view locations are determined to be left-side views or right-side views spaced apart from the base-view location and the first hierarchical auxiliary-view in the direction of the horizontal axis by the 2^(n-2)-th magnitude, as denoted by ②. The third hierarchical auxiliary-view locations are determined to be the remaining views contained in the vertical axes including not only the first and second hierarchical auxiliary-views but also the base-view. The fourth hierarchical auxiliary-view location is determined to be a left-side view or right-side view spaced apart from the first to third hierarchical auxiliary-views and the base-view in the direction of the horizontal axis by the 2^(n-2)-th magnitude. Finally, the fifth hierarchical auxiliary-view locations are determined to be the remaining views other than the base-view and the first to fourth hierarchical auxiliary-views.
For example, as can be seen from FIG. 10B, if the multiview number of the horizontal axis (m) is set to "8" (i.e., n=3), and the multiview number of the vertical axis (p) is set to "4" (i.e., k=2), the bitstream includes a single base-view and five hierarchical auxiliary-views. In association with the above-mentioned description, FIG. 10B shows that the hierarchical auxiliary-views are selected in the order of "horizontal axis -> vertical axis -> horizontal axis -> ...". A method for determining locations of the base-view and the auxiliary-views will hereinafter be described as follows.
Firstly, the base-view location is determined in the same manner as in the above-mentioned one-dimensional array. Therefore, the base-view location is determined to be a specific view corresponding to a 2^(n-1)-th location in the direction of the horizontal axis and a 2^(k-1)-th location in the direction of the vertical axis.
The first hierarchical auxiliary-view location is determined to be a left-side view or right-side view spaced apart from the base-view location in the direction of the horizontal axis by the 2^(n-2)-th magnitude, as denoted by ①. The second hierarchical auxiliary-view locations are determined to be top-side views or bottom-side views spaced apart from the base-view and the first hierarchical auxiliary-view in the direction of the vertical axis by the 2^(k-1)-th magnitude, as denoted by ②. The third hierarchical auxiliary-view locations are determined to be left- and right-direction views spaced apart from the base-view and the first to second hierarchical auxiliary-views in the direction of the horizontal axis by the 2^(n-2)-th magnitude. The fourth hierarchical auxiliary-view locations are determined to be the remaining views contained in the vertical axes including not only the first to third hierarchical auxiliary-views but also the base-view. Finally, the fifth hierarchical auxiliary-view locations are determined to be the remaining views other than the base-view and the first to fourth hierarchical auxiliary-views.
FIGS. 11A-11C are conceptual diagrams illustrating a multiview-sequence prediction structure according to a fifth example. The fifth example of FIGS. 11A-11C has prediction-structure rules different from those of the above-mentioned first to fourth examples. For example, the square areas of FIGS. 11A-11C indicate individual views; however, the numerals contained in the square areas indicate the order of prediction of the views. In other words, as for the numerals contained in the square areas, the number "0" is indicative of a first predicted view (or a first view), the number "1" is indicative of a second predicted view (or a second view), the number "2" is indicative of a third predicted view (or a third view), and the number "3" is indicative of a fourth predicted view (or a fourth view). For example, FIG. 11A shows decision formats of the first to fourth views in case the multiview number (m) is denoted by m=1 to m=10. The first to fourth views are determined by the following rules. For example, both ends of the multiview are set to the first views (0), and the center view from among the multiview is set to the second view (1). Views successively arranged by skipping over at least one view in both directions on the basis of the second view (1) are set to the third views (2), respectively. The remaining views other than the first to third views are set to the fourth views (3), respectively. If the first to fourth views are determined as described above, there is a need to discriminate between the base-view and the auxiliary-views. For example, any one of the first view, the second view, and the third view is set to the base-view, and the remaining views other than the base-view may be set to the auxiliary-views.
Provided that the base-view is not determined by the prescribed rules described above and is arbitrarily selected by the encoder, identification (ID) information (i.e., "base_view_position") of the base-view location may be contained in the bitstream.
FIG. 11B shows another example of the decision of the second view (1). In more detail, FIG. 11B shows an example different from that of FIG. 11A for the case in which the number of the remaining views other than the first views (0) is an even number. In other words, if m=4, m=6, m=8, or m=10, the second view (1) of FIG. 11B may be different from the second view (1) of FIG. 11A as necessary. For another example, in the case of determining views located after the second view (1), upper views may be determined by sequentially skipping over a single view on the basis of the leftmost first view (0).
In association with the above-mentioned description, FIG. 11C shows an exemplary case in which the multiview number (m) is 10 (i.e., m=10), and the base-view from among the multiview is denoted by "base_view_position = 5" (corresponding to a sixth view) by the base-view ID information. For example, as can be seen from FIG. 11C, the first hierarchical auxiliary-view is set to the third view (2), the second hierarchical auxiliary-view is set to the first view (0), and the third hierarchical auxiliary-view is set to the fourth view (3).
In association with the above-mentioned description, in FIGS. 11A-11B, the base-view may also be set to the second view (1), as shown in FIG. 11C. The reason is that if the base-view is located at a specific location in the vicinity of the center part of the multiview, or is located at the center part of the multiview, the estimation/encoding process of the other auxiliary-views can be effectively performed. Therefore, the base-view location and the auxiliary-view locations can be determined according to the following rules.
In other words, the base-view location is set to the center view (1) of the multiview, the second auxiliary-view locations are set to the both-end views (0) of the multiview, and the first auxiliary-view locations are set to the views (2) successively arranged by skipping over at least one view in both directions on the basis of the base-view. The remaining views (3) other than the above-mentioned views are all set to the third auxiliary-views. In association with the above-mentioned description, if the multiview number (m) is equal to or less than "7" (i.e., m≤7), so that only two or fewer views are arranged between the base-view (1) and the second auxiliary-view (0), all the views arranged between the base-view (1) and the second auxiliary-view (0) are set to the first auxiliary-views (2), respectively.
If the multiview number (m) is equal to or more than "8" (i.e., m≥8) and only two or fewer views are arranged between the second auxiliary-view (0) and the first auxiliary-view (2), all the views arranged between the second auxiliary-view (0) and the first auxiliary-view (2) are set to the third auxiliary-views (3), respectively.
For example, as depicted in FIGS. 11A-11B, if m=8, m=9, or m=10, it can be recognized that the one or two views located between the second auxiliary-view (0) and the first auxiliary-view (2) are set to the third auxiliary-views (3), respectively.
For another example, if only two or fewer views are located between the base-view (1) and the second auxiliary-view (0), all the views arranged between the base-view (1) and the second auxiliary-view (0) may be set to the third auxiliary-views (3), respectively. For example, as shown in FIGS. 11A-11B, if m=8, it can be recognized that the two views located between the base-view (1) and the second auxiliary-view (0) are set to the third auxiliary-views (3), respectively.
Using the base-view and the auxiliary-views determined by the above-mentioned method, the view scalability between views (or viewpoints) can be performed.
For example, if the multiview number (m) is equal to or less than "7" (i.e., m≤7), a single base-view bitstream and two hierarchical auxiliary-view bitstreams are generated. For example, the second auxiliary-view (0) can be set to the first hierarchical auxiliary-view, and the first auxiliary-view (2) can be set to the second hierarchical auxiliary-view.
For example, if the multiview number (m) is equal to or higher than "8" (i.e., m≥8), i.e., if m=8, m=9, or m=10, a single base-view bitstream and three hierarchical auxiliary-view bitstreams are generated. For example, the first auxiliary-view (2) is selected as the first hierarchical auxiliary-view, the second auxiliary-view (0) is selected as the second hierarchical auxiliary-view, and the third auxiliary-view (3) is selected as the third hierarchical auxiliary-view.
FIG. 12 is a conceptual diagram illustrating a hierarchical method of encoding/decoding a multiview sequence.
Referring to FIG. 12, the encoder of a transmission end performs the view scalability function of the multiview sequence using the methods shown in the first to fifth examples, or modified methods derivable therefrom, for generating a bitstream, and transmits the bitstream to the reception end.
Therefore, the decoding method or apparatus receives the bitstream formed by the above-mentioned characteristics, decodes the received bitstream, and generates decoded data for each hierarchy. Thereafter, according to the selection of a user or a display, a variety of displays can be implemented using the data decoded for each hierarchy.
For example, a base layer 121 for reproducing data of only the base-view is appropriate for the 2D display 125. A first enhancement layer #1 (122) for reproducing data of the base-view and data of the first hierarchical auxiliary-view together is appropriate for a stereo-type display 126 formed by a combination of two 2D images. A second enhancement layer #2 (123) for reproducing data of the base-view, data of the first hierarchical auxiliary-view, and data of the second hierarchical auxiliary-view together is appropriate for a low multiview display 127 for 3D reproduction of the multiview sequence. A third enhancement layer #3 (124) for reproducing data of the base-view and data of all the hierarchical auxiliary-views together is appropriate for a high multiview display 128 for 3D reproduction of the multiview sequence.
FIG. 13 is a flow chart illustrating a method for encoding a video sequence.
Referring to FIG. 13, an example of a video-sequence encoding method obtains an average pixel value of at least one block from among the neighboring blocks of a current block and the reference blocks of another view at step S131. Upon receipt of the obtained value, the video-sequence encoding method derives a predicted average pixel value of the current block using at least one mode from among several modes at step S132. The video-sequence encoding method obtains a difference value between the predicted average pixel value and the actual average pixel value of the current block at step S133. The video-sequence encoding method measures the individual encoding efficiency of the above-mentioned several modes, and selects an optimum mode from among the several modes at step S134. The above-mentioned optimum mode can be selected in various ways, for example, by a method for selecting a minimum difference value from among the obtained difference values, or by a method using an equation indicating the Rate-Distortion (RD) relationship.
In this case, the above-mentioned RD equation takes into account not only the number of encoding bits generated during the encoding of a corresponding block but also a distortion value indicating a difference from the actual image, such that costs are calculated using the number of encoding bits and the distortion value. In more detail, the video-sequence encoding method multiplies the bit number by a Lagrange multiplier determined by a quantization coefficient, and adds the distortion value to the multiplied result, such that it calculates the costs. If the optimum mode is selected, the video-sequence encoding method can encode identification (ID) information indicating the selected mode, and transmit the encoded result. Alternatively, if the optimum mode is selected, the video-sequence encoding method can encode not only the ID information indicating the selected mode but also the difference value obtained by the selected mode, and transmit the encoded result at step S135.
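A minimal sketch of this RD-based mode selection, with purely illustrative distortion and bit figures, might look as follows:

    def select_mode(candidates, lmbda):
        """Optimum mode by the RD rule above: cost = distortion + lambda * bits.

        `candidates` maps a mode id to a (distortion, bits) pair; `lmbda` is
        the Lagrange multiplier determined by the quantization coefficient.
        """
        return min(candidates,
                   key=lambda m: candidates[m][0] + lmbda * candidates[m][1])

    # Mode 2 wins here although Mode 1 spends fewer bits:
    print(select_mode({1: (120.0, 18), 2: (80.0, 26), 3: (95.0, 30)}, 0.85))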
FIG. 14 is a block diagram illustrating a process for deriving a predicted average pixel value of a current block from reference blocks of another view.

Referring to FIG. 14, it is assumed that an average pixel value of the B_c block is m_c, an average pixel value of the B_{r,1} block is m_{r,1}, and the average pixel values of the remaining blocks are represented by the same block notation. There are a variety of methods for predicting the m_c information according to information of one or more neighboring blocks. For the convenience of description, it is assumed that the reference frame #1 is used as a candidate reference frame in the case of encoding the B_c block.
A first method for predicting the m_c information according to information of one or more neighboring blocks is a first mode method (Mode1) for predicting the m_c information on the basis of an average pixel value of a reference block of another view corresponding to the current block. In more detail, the first mode method (Mode1) predicts the m_c information using the average pixel value of the B_{r,1} block of the reference frame #1. The difference value can be represented by the following equation 1:

[Equation 1]

e = m_c - m_{r,1}
A second method for predicting a difference value between an average pixel value of a current block and an average pixel value of a reference block of another view corresponding to the current block is a second mode method (Mode2) for predicting the difference value on the basis of a difference between the average pixel values of the neighboring blocks of the current block and of the reference block. In more detail, the second mode method (Mode2) predicts the difference value between the average pixel value of the current block and the average pixel value of the B_{r,1} block of the reference frame #1 using a difference value in average pixel values between the neighboring blocks (B_c', B_{r,1}'). The difference value can be represented by the following equation 2:

[Equation 2]

e = (m_c - m_{r,1}) - (m_c' - m_{r,1}')
A third method for predicting a difference value between an average pixel value of a current block and an average pixel value of a reference block of another view corresponding to the current block is a third mode method (Mode3) for predicting the difference value using a difference between an average pixel value of a neighboring block of the current block and the average pixel value of the reference block. In more detail, the third mode method (Mode3) predicts the m_c information on the basis of a difference between an average pixel value of the neighboring block B_c' and the average pixel value of the B_{r,1} block of the reference frame #1. In this case, the difference value can be represented by the following equation 3:

[Equation 3]

e = (m_c - m_{r,1}) - (m_c' - m_{r,1}) = m_c - m_c'
In the case where a neighboring block of the current block has been encoded by using a neighboring block of the reference block of another view, there is a fourth mode method (Mode4) for predicting the m_c information on the basis of the predicted average pixel values of the neighboring blocks of the current block. In other words, if the B_c' block is pre-encoded by referring to the B_{r,2}' block of the reference frame #2, a difference value between the average pixel value of the current block (B_c) and that of the reference block (B_{r,1}) corresponding to the current block can be predicted by a difference value between the average pixel value of the neighboring block of the current block (B_c') and the average pixel value of the neighboring block of the other-view reference block (B_{r,2}'). In this case, the difference value can be represented by the following equation 4:

[Equation 4]

e = (m_c - m_{r,1}) - (m_c' - m_{r,2}')
In the case of using the neighboring-block information in the above-mentioned Mode2, Mode3, and Mode4 methods, although the above-mentioned Mode2, Mode3, and Mode4 methods have been described using only the information of a single upper neighboring block as an example, it should be noted that a combination of information of several neighboring blocks surrounding the current block may also be used.
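A minimal sketch of the difference value produced by the four modes, using the block averages defined above (the argument names are illustrative):

    def mode_difference(mode, m_c, m_r1, m_c_nb=None, m_r1_nb=None, m_rk_nb=None):
        """Difference value e of Equations 1-4 (sketch).

        m_c     : average pixel value of the current block Bc
        m_r1    : average of the corresponding reference block Br,1
        m_c_nb  : average of a pre-encoded neighboring block Bc'
        m_r1_nb : average of the reference block's neighboring block Br,1'
        m_rk_nb : average of Bc''s own reference block in frame #k (Mode4)
        """
        if mode == 1:                           # Equation 1
            return m_c - m_r1
        if mode == 2:                           # Equation 2
            return (m_c - m_r1) - (m_c_nb - m_r1_nb)
        if mode == 3:                           # Equation 3 (= m_c - m_c_nb)
            return (m_c - m_r1) - (m_c_nb - m_r1)
        if mode == 4:                           # Equation 4
            return (m_c - m_r1) - (m_c_nb - m_rk_nb)
        raise ValueError("unknown mode")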
FIG. 15 is a detailed block diagram illustrating a process for deriving a predicted average pixel value of a current block from reference blocks of other views. In more detail, FIG. 15 shows a current block, pre-encoded blocks, each of which shares a boundary with the current block, and other blocks, each of which shares a boundary with the reference block. In this case, the Mode2-method equation, the Mode3-method equation, and the Mode4-method equation can be generalized with weighted coefficients, as represented by the following equation 5:

[Equation 5]

Mode2: e = (m_c - m_{r,1}) - ( Σ_i w_i (m_i' - m_{r,1,i}') ) / ( Σ_i w_i )

Mode3: e = m_c - ( Σ_i w_i m_i' ) / ( Σ_i w_i )

Mode4: e = (m_c - m_{r,1}) - ( Σ_i w_i (m_i' - m_{r,i}'^(k)) ) / ( Σ_i w_i )

Here, m_i' denotes an average pixel value of the i-th neighboring block of the current block, and m_{r,1,i}' denotes that of the corresponding neighboring block of the reference block. In the above-mentioned Mode4 equation, m_{r,i}'^(k) indicates an average pixel value of the reference block of the neighboring block B_c' on the condition that the reference block is located at the reference frame #k. In Equation 5, w_i indicates a weighted coefficient.
The neighboring blocks used for prediction are not limited to blocks sharing a boundary, and may also include other blocks adjacent to the above-mentioned neighboring blocks as necessary. Otherwise, only some parts of the neighboring blocks may be employed. The scope of the above-mentioned neighboring blocks may be adjusted by the weights w_i. In this way, the difference value (e) is quantized and entropy-encoded, such that the entropy-encoded information is transmitted to the decoding unit.
The reference frames of the above-mentioned Mode1, Mode2, Mode3, and Mode4 methods are determined to be optimum frames in consideration of rate and distortion factors after carrying the calculation through to an actual bitstream stage. There are a variety of methods for selecting the optimum mode, for example, a method for selecting a specific mode of a minimum difference value from among the obtained difference values, and a method using the RD relationship. The above-mentioned RD-relationship method calculates the actual bitstreams of the individual modes, and selects an optimum mode in consideration of the rate and the distortion. In the case of calculating a block residual value, the above-mentioned RD-relationship method subtracts an average pixel value of each block from the current block, subtracts the average pixel value of each block from the reference block, and calculates a difference value between the subtracted results of the current and reference blocks, as represented by the following equation 6:
[Equation 6]

Σ_i Σ_j | (I_c(i, j) - m̂_c) - (I_r(i+Δx, j+Δy) - m̂_r) |
In Equation 6, (Δx, Δy) is indicative of a disparity vector, and I is a pixel value. If a value predicted by information of a neighboring block and a difference value are quantized, the quantized resultant values of the predicted value and the difference value are reconstructed, and the reconstructed resultant values are added, the added result is denoted by m̂_c of Equation 6. In this case, the value of m̂_c is adapted to obtain the same values in the encoding unit and the decoding unit. m̂_r is indicative of an average pixel value of the reference block; since the decoded image is used, the encoding unit has the same m̂_r as that of the decoding unit. Indeed, the reference block is searched for in a time domain, and an optimum block is searched for in a space-time domain. Therefore, ID information indicating whether an illumination compensation will be used is set to "0" or "1" in association with the individual frames and blocks, and the resultant ID information is entropy-encoded.
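A sketch of the mean-removed block residual of Equation 6, assuming non-negative disparities that keep the displaced block inside the reference frame:

    import numpy as np

    def mean_removed_residual(cur, ref, dx, dy, m_c, m_r):
        """Block residual of Equation 6: the (reconstructed) averages m_c and
        m_r are removed before the absolute differences are summed.  Assumes
        dx, dy >= 0 and that the displaced block stays inside `ref`.
        """
        h, w = cur.shape
        shifted = ref[dy:dy + h, dx:dx + w]
        return np.abs((cur - m_c) - (shifted - m_r)).sum()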
If the optimum mode is selected, it is possible to encode only the selected mode information, such that the encoded result of the selected mode may be transmitted to the decoding unit. In addition to the encoded result of the selected mode, a difference value obtained by the selected mode can also be encoded and transmitted. The selected mode information is represented by an index type, and can also be predicted from neighboring-mode information. In addition, a difference value between the index of the currently-selected mode and the index of the predicted mode can also be encoded and transmitted.
All of the above-mentioned modes may be considered, some of the above-mentioned modes may be selected, or only one of the above-mentioned modes may also be selected as necessary. In the case of using a single method from among all available methods, there is no need to separately encode the mode index.
In the case of obtaining an average pixel value and deriving a predicted average pixel value, pre-decoded pixel values may be applied to current blocks of a reference frame and a target frame to be encoded.
Basically, pre-decoded values of left-side pixels and pre-decoded values of upper-side pixels are used to predict an average pixel value of the current block. In the case of encoding an actual video sequence, the video sequence is encoded on the basis of a macroblock. The 16x16 macroblock is divided into 16x8 blocks, 8x16 blocks, and 8x8 blocks, and is then decoded. The 8x8 blocks may also be divided into 8x4 blocks, 4x8 blocks, and 4x4 blocks. There are a variety of methods for predicting an average pixel value of the sub-blocks on the basis of a single macroblock.
FIG. 16 is a conceptual diagram illustrating a 16x16 macroblock for explaining usages of pre-decoded pixel values located at left- and upper- parts of an entire block in the case of deriving an average pixel value and a predicted average pixel value of a current block.
Referring to FIG. 16, the 16x16 macroblock can use all the pixel values of the left and upper parts. Therefore, in the case of predicting an average pixel value of the current block, an average pixel value of the pixels (h1~h16) of the upper part and the pixels (v1~v16) of the left part is calculated, and the average pixel value of the current block is predicted by the calculated average pixel value of the pixels (v1~v16, h1~h16). In this case, the average pixel value of the 16x16 block (denoted by "B16x16") can be represented by the following equation 7:

[Equation 7]

m(B16x16) = ( Σ_{i=1}^{16} v_i + Σ_{i=1}^{16} h_i ) / 32
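A sketch of Equation 7, assuming the frame is stored as a 2D array indexed as frame[row, column] and that the decoded border pixels exist:

    import numpy as np

    def predicted_mean_16x16(frame, x, y):
        """Equation 7: predict the average of the 16x16 block at top-left
        (x, y) from the decoded pixels h1..h16 above it and v1..v16 to its
        left.  Assumes frame[row, col] indexing and x, y >= 1.
        """
        h = frame[y - 1, x:x + 16]      # upper border row h1..h16
        v = frame[y:y + 16, x - 1]      # left border column v1..v16
        return (h.sum() + v.sum()) / 32.0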
FIG. 17A is a conceptual diagram illustrating a 16x8 macroblock for explaining usages of all the pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks. FIG. 17B is a conceptual diagram illustrating a 16x8 macroblock for explaining usages of only pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks.
In FIG. 17A, in the case of using all the pixels enclosing the divided blocks, an average value of the B16x8_0 block and the B16x8_1 block can be represented by the following equation 8:

[Equation 8]

m(B16x8_0) = m(B16x8_1) = ( Σ_{i=1}^{16} v_i + Σ_{i=1}^{16} h_i ) / 32
In FIG. 17B, in the case of using only the pixels enclosing each divided block, an average value of the B16x8_0 block can be represented by the following equation 9, and an average value of the B16x8_1 block can be represented by the following equation 10:

[Equation 9]

m(B16x8_0) = ( Σ_{i=1}^{16} h_i + Σ_{i=1}^{8} v_i ) / 24

[Equation 10]

m(B16x8_1) = ( Σ_{i=1}^{16} h_i + Σ_{i=9}^{16} v_i ) / 24
In the above-mentioned cases of FIGS. 17A~17B, the value of h0 located at the corner of the macroblock may also be added to the calculation result as necessary. In this case, an average pixel value of the B16x8_0 block of FIG. 17A can be represented by the following equation 11, and the average pixel value of the B16x8_0 block of FIG. 17B can be represented by the following equation 12:

[Equation 11]

m(B16x8_0) = ( Σ_{i=0}^{16} h_i + Σ_{i=1}^{16} v_i ) / 33

[Equation 12]

m(B16x8_0) = ( Σ_{i=0}^{16} h_i + Σ_{i=1}^{8} v_i ) / 25
In the above-mentioned cases of FIGS. 17A~17B, the values of h0 and v8 located at the corners of the macroblock may also be added to the calculation result as necessary. In this case, an average pixel value of the B16x8_1 block of FIG. 17A can be represented by the following equation 13, and the average pixel value of the B16x8_1 block of FIG. 17B can be represented by the following equation 14:

[Equation 13]

m(B16x8_1) = ( Σ_{i=0}^{16} h_i + Σ_{i=1}^{16} v_i ) / 33

[Equation 14]

m(B16x8_1) = ( Σ_{i=1}^{16} h_i + Σ_{i=8}^{16} v_i ) / 25
FIG. 18A is a conceptual diagram illustrating an 8x16 macroblock for explaining usages of all the pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks. FIG. 18B is a conceptual diagram illustrating an 8x16 macroblock for explaining usages of only the pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks. The method for deriving an average pixel value of the divided blocks is the same as that of FIGS. 17A-17B.
FIG. 19A is a conceptual diagram illustrating an 8x8 macroblock for explaining usages of all the pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks. FIG. 19B is a conceptual diagram illustrating an 8x8 macroblock for explaining usages of only the pixels enclosing divided blocks in the case of deriving an average pixel value and a predicted average pixel value of the divided blocks. The method for deriving an average pixel value of the divided blocks is the same as that of FIGS. 17A-17B.
The 8x8 block can be divided into a plurality of sub-blocks. An average pixel value of a current block of a current frame to be encoded is predicted, such that the predicted average pixel value is set to m̂_c. An average pixel value of a corresponding block of the reference frame is predicted, such that the predicted average pixel value is set to m̂_r. Each predicted average pixel value is subtracted from all the pixels of each block, and a difference value between the predicted pixel value using the reference block and a pixel value of the current block can be calculated by the following equation 15:

[Equation 15]

Σ_i Σ_j | (I_c(i, j) - m̂_c) - (I_r(i+Δx, j+Δy) - m̂_r) |
In Equation 15, (Δx, Δy) is indicative of a disparity vector, and I is a pixel value. A reference block having a minimum block residual value is selected as an illumination-compensated optimum block, and the corresponding disparity vector is denoted by (Δx, Δy). Indeed, a system compares the above-mentioned illumination-compensated case with another case in which the illumination is not compensated, and selects the superior one of the two cases.
As a modified example of the above-mentioned scheme, an average pixel value of the reference block is not predicted from pixel values of neighboring blocks, but is directly calculated as the average pixel value of all the pixels contained in the actual block.
As another modified example of the above-mentioned scheme, the number of left- and upper-part pixels may be increased. In more detail, pixels of two or more layers neighboring the current block may be used instead of pixels of only the one layer next to the current block.
The decoding unit determines whether to perform an illumination compensation of a corresponding block using the ID information. If the illumination compensation is performed, the decoding unit calculates a decoded value of the difference value (e), and obtains a predicted value according to the above-mentioned prediction method. The decoded value of the difference value (e) is added to the predicted value, such that the value of m_c (= predicted value + e) can be decoded. The value of m_r - m_c is subtracted from the reference block, which is the prediction block (a so-called predictor) for the current block, and the result is added to the decoded value of the residual block, such that the value of the current block can be finally obtained. The current block can be reconstructed as follows:

B = prediction block + residual block + (m_c - m_r + e),

where B is the value of the current block, the prediction block is the predictor for the current block, m_c - m_r is a predicted difference of average pixel values, that is, the predicted offset value of the illumination compensation for the current block, and e is the difference value. The decoding unit obtains the residual between the offset value of the illumination compensation of the current block and the predicted difference, and can reconstruct the offset value of the illumination compensation of the current block using the obtained residual value and the predicted difference.
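A decoder-side sketch of this reconstruction (the names are illustrative):

    def reconstruct_block(pred_block, residual, m_r, m_c_pred, e):
        """Decoder-side sketch: B = prediction + residual + (m_c - m_r),
        where the current-block average m_c is recovered as the predicted
        value plus the decoded difference e.
        """
        m_c = m_c_pred + e               # decoded average of the current block
        offset = m_c - m_r               # illumination-compensation offset
        return pred_block + residual + offset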
FIG. 20 is a diagram illustrating a process for obtaining an offset value of a current block. The illumination compensation may be performed during the motion estimation. When the current block is compared with the reference block, a difference in illumination between the two blocks is considered. New motion estimation and new motion compensation are used to compensate for the illumination difference. A new SAD (Sum of Absolute Differences) can be represented by the following equations 16 and 17, in which the current block is of size M×N with its top-left corner at (m, n):

[Equation 16]

M_c = (1 / (M×N)) Σ_{x=m}^{m+M-1} Σ_{y=n}^{n+N-1} I_c(x, y)

M_r = (1 / (M×N)) Σ_{x=m}^{m+M-1} Σ_{y=n}^{n+N-1} I_r(x+Δx, y+Δy)

[Equation 17]

NewSAD(Δx, Δy) = Σ_{x=m}^{m+M-1} Σ_{y=n}^{n+N-1} | (I_c(x, y) - M_c) - (I_r(x+Δx, y+Δy) - M_r) |
With reference to Equations 16 and 17, M_c is indicative of an average pixel value of the current block, and M_r is indicative of an average pixel value of the reference block. I_c(x, y) is indicative of a pixel value at a specific coordinate (x, y) of the current block, and I_r(x+Δx, y+Δy) is indicative of a pixel value of the reference block displaced by a motion vector (Δx, Δy). The motion estimation is performed on the basis of the new SAD denoted by Equation 17, such that a difference value between the average pixel value of the current block and the average pixel value of the reference block can be obtained. The difference value in average pixel value between the current block and the reference block is referred to as an offset value (IC_offset).
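A sketch of motion estimation driven by the NewSAD of Equations 16 and 17 follows; the search-range handling is simplified and the function names are illustrative:

    import numpy as np

    def new_sad(cur, ref, dx, dy):
        """NewSAD of Equations 16-17: block means are removed before the SAD,
        so the match is insensitive to an illumination offset."""
        h, w = cur.shape
        cand = ref[dy:dy + h, dx:dx + w]
        return np.abs((cur - cur.mean()) - (cand - cand.mean())).sum()

    def motion_search(cur, ref, search=8):
        """Exhaustive search minimizing NewSAD; returns the motion vector and
        IC_offset = M_c - M_r of the best match.  Sketch only: all candidate
        blocks are assumed to stay inside `ref`."""
        h, w = cur.shape
        best_mv, best_sad = (0, 0), float("inf")
        for dy in range(search):
            for dx in range(search):
                s = new_sad(cur, ref, dx, dy)
                if s < best_sad:
                    best_mv, best_sad = (dx, dy), s
        dx, dy = best_mv
        ic_offset = cur.mean() - ref[dy:dy + h, dx:dx + w].mean()
        return best_mv, ic_offset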
If the motion estimation to which the illumination compensation is applied is performed, the offset value and the motion vector are obtained. The illumination compensation can be performed by the following equation 18 using the offset value and the motion vector:

[Equation 18]

R(x, y) = I_c(x, y) - I_r(x+Δx, y+Δy) - (M_c - M_r)

With reference to Equation 18, R(x, y) is indicative of an illumination-compensated residual value. The offset value (IC_offset = M_c - M_r) is transmitted to the decoding unit. The illumination compensation of the decoding unit can be performed by the following equation 19:
[Equation 19]

Î_c(x, y) = I_r(x+Δx, y+Δy) + R'(x, y) + (M_c - M_r)

With reference to Equation 19, R'(x, y) is indicative of a reconstructed, illumination-compensated residual value, and Î_c(x, y) is indicative of a reconstructed pixel value of the current block.
In order to reconstruct the current block, the offset value is transmitted to the decoding unit, and the offset value can be predicted from data of the neighboring blocks. In order to further reduce the number of bits for coding the offset value, a difference value (R_IC_offset) between the current-block offset value (IC_offset) and the neighboring-block offset value (IC_offset_pred) can be transmitted to the decoding unit 50, as denoted by the following equation 20:

[Equation 20]

R_IC_offset = IC_offset - IC_offset_pred
FIG. 21 is a flow chart illustrating a process for performing an illumination compensation of a current block.
Referring to FIG. 21, if an illumination compensation flag of a current block is set to "0", the illumination compensation of the current block is not performed. Otherwise, if the illumination compensation flag of the current block is set to "1", a process for reconstructing the offset value of the current block is performed. In the case of obtaining a predictor of the current block, information of the neighboring blocks can be employed. It is determined whether a reference index of the current block is equal to a reference index of the neighboring block at step S210. A predictor for performing the illumination compensation of the current block is obtained on the basis of the determined result at step S211. An offset value of the current block is reconstructed by using the obtained predictor at step S212. In this case, the step S210 for determining whether the reference index of the current block is equal to that of the neighboring block and the step S211 for obtaining the predictor on the basis of the determined result will hereinafter be described with reference to FIG. 22.

FIG. 22 is a flow chart illustrating a method for obtaining a predictor by determining whether a reference index of a current block is equal to a reference index of a neighboring block.
Referring to FIG. 22, in order to perform an illumination compensation, the decoding unit extracts a variety of information from a video signal, for example, flag information and offset values of neighboring blocks of the current block, and reference indexes of reference blocks of the current and neighboring blocks, such that the decoding unit can obtain the predictor of the current block using the extracted information. The decoding unit obtains a residual value between the offset value of the current block and the predictor, and can reconstruct the offset value of the current block using the obtained residual value and the predictor.
In the case of obtaining the predictor of the current block, information of the neighboring block can be employed. For example, the offset value of the current block can be predicted by the offset value of the neighboring block. Prior to predicting the offset value of the current block, it can be determined whether the reference index of the current block is equal to that of the neighboring block, such that it can be determined which one of values or which one of neighboring blocks will be used by referring to the determined result. Also, it is determined whether flag information of the neighboring block is set to "true", such that it can be determined whether the neighboring block will be used by referring to the determined result.
According to a first example, it is determined whether the neighboring block having the same reference index as that of the current block exists at step S220. If it is determined that only one neighboring block having the same reference index as that of the current block exists, an offset value of the neighboring block having the same reference index is assigned to the predictor of the current block at step S221. If it is determined that two neighboring blocks, each of which has the same reference index as that of the current block, exist at step S220, an average value of the offset values of the two neighboring blocks is assigned to the predictor of the current block at step S222. If it is determined that three neighboring blocks, each of which has the same reference index as that of the current block, exist at step S220, a median value of the offset values of the three neighboring blocks is assigned to the predictor of the current block at step S223. If it is determined that there is no neighboring block having the same reference index as that of the current block according to the determined result at step S220, the predictor of the current block is set to "0" at step S224. If required, the step S220 for determining whether the reference index of the current block is equal to that of the neighboring block may further include another step for determining whether a flag of the neighboring block is set to "1".
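A minimal sketch of this first-example predictor derivation (the optional flag check is omitted; the names are illustrative):

    def offset_predictor(neighbors, cur_ref_idx):
        """IC-offset predictor of the first example above.  `neighbors` is a
        list of (ref_idx, offset) pairs for the available neighboring blocks:
        one match -> its offset, two -> their average, three -> their median,
        none -> 0.
        """
        matches = [off for ref, off in neighbors if ref == cur_ref_idx]
        if len(matches) == 1:
            return matches[0]
        if len(matches) == 2:
            return (matches[0] + matches[1]) / 2.0
        if len(matches) == 3:
            return sorted(matches)[1]    # median of three
        return 0

    print(offset_predictor([(0, 4), (0, -2), (1, 7)], 0))   # -> 1.0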
According to a second example, it is determined whether the neighboring block has the same reference index as that of the current block, and it is determined whether a flag of the neighboring block is set to "1". If it is determined that the neighboring block has the same reference index as that of the current block, and has the flag of "1", an offset value of the neighboring block may be set to the predictor of the current block. In this case, a plurality of neighboring blocks may be checked in the order of a left neighboring block -> an upper neighboring block -> a right-upper neighboring block -> a left-upper neighboring block. If required, the neighboring blocks may also be checked in the order of the upper neighboring block -> the left neighboring block -> the right-upper neighboring block -> the left-upper neighboring block. If there is no neighboring block capable of satisfying the two conditions, and the flags of the three neighboring blocks (i.e., the left neighboring block, the upper neighboring block, and the right-upper (or left-upper) neighboring block) are set to "1", respectively, the median value of the offset values of the three blocks is set to the predictor. Otherwise, the predictor of the current block may be set to "0".
FIG. 23 is a flow chart illustrating a method for performing an illumination compensation on the basis of a prediction type of a current block.

Referring to FIG. 23, the neighboring block acting as a reference block may be changed according to the prediction type of the current block. For example, if the current block has the same shape as the neighboring blocks, the current block is predicted by a median value of the neighboring blocks. Otherwise, if the shape of the current block is different from that of the neighboring blocks, another method is employed.
For example, if a block located at the left side of the current block is divided into several sub-blocks, the uppermost sub-block from among the sub-blocks is used for the prediction. Also, if a block located at an upper part of the current block is divided into several sub-blocks, the leftmost sub-block is used for the prediction. In this case, a prediction value may be changed according to the prediction type of the current block. Therefore, the example of FIG. 23 determines a neighboring block to be referred to according to the prediction type of the current block at step S231. It is determined whether the reference index of the determined neighboring block is equal to the reference index of the current block at step S232. The step S232 for determining whether the reference index of the neighboring block is equal to that of the current block may further include another step for determining whether a flag of the neighboring block is set to "1". The predictor for performing an illumination compensation of the current block can be obtained on the basis of the determined result at step S233. The offset value of the current block is reconstructed by the obtained predictor, such that the illumination compensation can be performed at step S234. In this case, the process for performing the step S233 by referring to the result of step S232 is similar to that of FIG. 22.
For example, if the prediction type of the current block indicates that the prediction is performed by using a neighboring block located at the left side of the current block, it is determined whether the reference index of the left-side neighboring block is equal to that of the current block. If the reference index of the current block is equal to that of the left-side neighboring block, an offset value of the left-side neighboring block is assigned to the predictor of the current block. Also, if the prediction type of the current block indicates that the prediction is performed by referring to the left- and upper- neighboring blocks of the current block, or if the prediction is performed by referring to three neighboring blocks (i.e., the left neighboring block, the upper neighboring block, and the right-upper neighboring block) , the individual cases will be applied similarly as a method of FIG. 22.
FIG. 24 is a flow chart illustrating a method for performing an illumination compensation using flag information indicating whether the illumination compensation of a block is performed.
Referring to FIG. 24, flag information (IC_flag) indicating whether an illumination compensation of the current block is performed may also be used to reconstruct the offset value of the current block. In addition, the predictor may also be obtained using both the method for checking the reference index of FIG. 22 and the method for predicting flag information. Firstly, it is determined whether a neighboring block having the same reference index as that of the current block exists at step S241. A predictor for performing an illumination compensation of the current block is obtained on the basis of the determined result at step S242. In this case, a process for determining whether the flag of the neighboring block is set to "1" may also be included in the step S242. The flag information of the current block is predicted on the basis of the determined result at step S243. An offset value of the current block is reconstructed by using the obtained predictor and the predicted flag information, such that the illumination compensation can be performed at step S244. In this case, the step S242 may be applied similarly to the method of FIG. 22, and the step S243 will hereinafter be described with reference to FIG. 25.
FIG. 25 is a flow chart illustrating a method for predicting flag information of a current block by determining whether a reference index of the current block is equal to a reference index of a neighboring block.
Referring to FIG. 25, it is determined whether the neighboring block having the same reference index as that of the current block exists at step S250. If it is determined that only one neighboring block having the same reference index as that of the current block exists, flag information of the current block is predicted by flag information of the neighboring block having the same reference index at step S251. If it is determined that two neighboring blocks, each of which has the same reference index as that of the current block, exist at step S250, flag information of the current block is predicted by any one of flag information of the two neighboring blocks having the same reference index at step S252.
If it is determined that three neighboring blocks, each of which has the same reference index as that of the current block, exist at step S250, the flag information of the current block is predicted by a median value of the flag information of the three neighboring blocks at step S253. Also, if there is no neighboring block having the same reference index as that of the current block according to the determined result of step S250, the flag information of the current block is not predicted at step S254.
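A minimal sketch of this flag prediction, mirroring the offset-predictor logic above (for three binary flags the median is simply the majority value):

    def predict_ic_flag(neighbors, cur_ref_idx):
        """Predicted IC flag of the current block (sketch of FIG. 25).

        `neighbors` lists (ref_idx, flag) pairs: one neighbor with the same
        reference index -> its flag; two -> any one of them (the first here);
        three -> their median, i.e. the majority flag; none -> no prediction.
        """
        flags = [f for ref, f in neighbors if ref == cur_ref_idx]
        if len(flags) == 1:
            return flags[0]
        if len(flags) == 2:
            return flags[0]
        if len(flags) == 3:
            return sorted(flags)[1]      # median of the three flags
        return None                      # not predicted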
FIG. 26 is a flow chart illustrating a method for performing an illumination compensation when a current block is predictively coded by using two or more reference blocks.

Referring to FIG. 26, when the illumination compensation is performed, if the current block is predictively coded by using two reference blocks, the decoding unit cannot directly recognize an offset value corresponding to each reference block, because an average pixel value of the two reference blocks is used when the offset value of the current block is obtained. Therefore, in one example, an offset value corresponding to each reference block is obtained, resulting in the implementation of correct prediction. The offset value of the current block is reconstructed by using the predictor of the current block and the residual value at step S261. If the current block is predictively encoded by using two reference blocks, an offset value corresponding to each reference block is obtained from that offset value at step S262, as denoted by the following equation 21:
[Equation 21]

IC_offset = m_c - w_1 × m_{r,1} - w_2 × m_{r,2}

IC_offsetL0 = m_c - m_{r,1} = IC_offset + (w_1 - 1) × m_{r,1} + w_2 × m_{r,2}

IC_offsetL1 = m_c - m_{r,2} = IC_offset + w_1 × m_{r,1} + (w_2 - 1) × m_{r,2}

In Equation 21, m_c is an average pixel value of the current block, m_{r,1} and m_{r,2} are indicative of the average pixel values of the two reference blocks, respectively, and w_1 and w_2 are indicative of the weighted coefficients for a bi-predictive coding process, respectively.
In one example of the illumination compensation method, the system independently obtains an accurate offset value corresponding to each reference block, such that it can more correctly perform the predictive coding process. In the case of reconstructing the offset value of the current block at step S262, the system adds the reconstructed residual value and the predictor value, such that it obtains the offset value. In this case, the predictor of a reference picture of List0 and the predictor of a reference picture of List1 are obtained respectively and combined, such that the system can obtain a predictor used for reconstructing the offset value of the current block.
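A minimal sketch of the per-list offset recovery of Equation 21 (the names are illustrative):

    def biprediction_offsets(ic_offset, m_r1, m_r2, w1, w2):
        """Per-list offsets of Equation 21, recovered from the combined
        offset IC_offset = m_c - w1*m_r1 - w2*m_r2 (sketch)."""
        off_l0 = ic_offset + (w1 - 1) * m_r1 + w2 * m_r2   # = m_c - m_{r,1}
        off_l1 = ic_offset + w1 * m_r1 + (w2 - 1) * m_r2   # = m_c - m_{r,2}
        return off_l0, off_l1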
According to another example, the system can also be applied to a skip-macroblock. In this case, the prediction is performed to obtain the information for the illumination compensation. A value predicted by the neighboring blocks is used as flag information indicating whether the illumination compensation is performed, and an offset value predicted by the neighboring blocks may be used as the offset value of the current block. For example, if the flag information is set to "true", the offset value is added to a reference block. In the case of a macroblock to which a P-skip mode is applied, the prediction is performed by using the flags and offset values of the left and upper neighboring blocks, such that the flag and offset values of the macroblock can be obtained. If only one block has the flag of "1", the flag and the offset value of the current block may be set to the flag and the offset value of that block, respectively. If two blocks have the flag of "1", the flag of the current block is set to "1", and the offset value of the current block is set to an average offset value of the two neighboring blocks.
According to another example, the system can also be applied to a direct mode, for example, a temporal direct mode, a B-skip mode, etc. In this case, the prediction is performed to obtain the information for the illumination compensation. Each predictor can be obtained by using the above-described methods for predicting the flag and the offset, and this predictor may be set to the actual flag and the actual offset value of the current block. If each block has a pair of flag and offset information, a prediction value for each block can be obtained. In this case, if there are two reference blocks and the reference indexes of the two reference blocks are checked, it is determined whether the reference index of the current block is equal to that of the neighboring block. Also, if each reference block includes a unique offset value, first predicted flag information, a first predicted offset value, second predicted flag information, and a second predicted offset value can be obtained. In this case, a value predicted by the neighboring block may be used as the flag information, and the offset values of the two reference blocks may be used as the first predicted offset value and the second predicted offset value, respectively. In this case, the offset value of the current block may be set to an average of the offset values of the individual reference blocks.
In the direct mode or the skip-macroblock mode, the system may encode/decode the flag information indicating whether the direct mode or the skip-macroblock mode is applied to the current block. In more detail, an offset value is added or not according to the flag value. A residual value between the offset value and the predicted offset value may also be encoded/decoded. In this case, desired data can be more correctly reconstructed, and an optimum mode may be selected in consideration of an RD (Rate-Distortion) relationship. If a reference picture cannot be used for the prediction process, i.e., if a reference picture number is less than "1", the flag information or predicted flag information may be set to "false", and the offset value or the predicted offset value may also be set to "0".
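A minimal sketch of such an RD-based mode decision, assuming each candidate mode has already been characterized by a distortion and a bit count (all names and numbers below are illustrative, not from the specification):

    def choose_mode(candidates, lam):
        """Pick the mode minimizing the rate-distortion cost
        J = D + lambda * R; each candidate is (name, distortion, bits)."""
        return min(candidates, key=lambda c: c[1] + lam * c[2])

    modes = [("ic_off", 1200.0, 10),
             ("ic_predicted_offset", 900.0, 14),
             ("ic_explicit_residual", 850.0, 22)]
    print(choose_mode(modes, lam=20.0))  # -> ('ic_predicted_offset', 900.0, 14)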
According to another example, the system can also be applied to the entropy-coding process. In association with the flag information, three context models may be used according to the flag values of the neighboring blocks (e.g., blocks located at the left and upper parts of the current block).
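A sketch of this three-context selection, assuming the model index is simply the sum of the two neighboring flag values (names are illustrative):

    def ic_flag_context(left_flag, upper_flag):
        """Select one of three context models (0, 1, or 2) for coding the
        IC flag, from the flags of the left and upper neighboring blocks."""
        return int(bool(left_flag)) + int(bool(upper_flag))

    assert ic_flag_context(0, 0) == 0  # neither neighbor compensated
    assert ic_flag_context(1, 0) == 1  # exactly one neighbor compensated
    assert ic_flag_context(1, 1) == 2  # both neighbors compensated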
If it is determined that the flag value is set to "true", the value "1" occurs. If it is determined that the flag value is set to "false", the value "0" occurs. If the values of the two neighboring blocks are added, three cases can be obtained, and the flag information is encoded/decoded by using the three corresponding context models. A transform-coefficient level coding method can be used for the predictive residual value of the offset values. In other words, data binarization is performed by UEG0, a single context model is applied to the first bin value, and another context model is applied to the remaining bin values of the unary prefix part. A sign bit is encoded/decoded in a bypass mode. According to another example of the flag information, two contexts may be considered according to the predicted flag value, such that the encoding/decoding process can be performed.

FIG. 27 is a flow chart illustrating a method for performing illumination compensation using not only flag information indicating whether illumination compensation of a current block is performed, but also an offset value of the current block.

Referring to FIG. 27, in order to perform illumination compensation, the decoding unit extracts a variety of information from a video signal, for example, flag information and offset values of the current block and the neighboring blocks of the current block, and index information of the reference blocks of the current and neighboring blocks, such that the decoding unit can obtain the predictor of the current block using the above-mentioned extracted information. The decoding unit 50 obtains a residual value between the offset value of the current block and the predictor, and can reconstruct the offset value of the current block using the obtained residual value and the predictor. In the case of reconstructing the offset value of the current block, flag information (IC_flag) indicating whether the illumination compensation of the current block is performed may be used.
The decoding unit obtains flag information indicating whether the illumination compensation of the current block is performed at step S271. If the illumination compensation is performed according to the above-mentioned flag information (IC_flag), the offset value of the current block, indicating a difference in average pixel value between the current block and the reference block, can be reconstructed at step S272. In this way, the above-mentioned illumination compensation technology encodes a difference value in average pixel value between blocks of different pictures. If the flag indicating whether the illumination compensation is applied is used for each block, and a corresponding block is contained in a P slice, single flag information and a single offset value are encoded/decoded. However, if the corresponding block is contained in a B slice, a variety of methods can be made available, and a detailed description thereof will hereinafter be given with reference to FIGS. 28A~28B.
FIGS. 28A~28B are diagrams illustrating a method for performing illumination compensation using flag information and an offset value in association with blocks of P and B slices. Referring to FIG. 28A, "C" is indicative of a current block, "N" is indicative of a neighboring block of the current block (C), "R" is indicative of a reference block of the current block (C), "S" is indicative of a reference block of the neighboring block (N) of the current block (C), "mc" is indicative of an average pixel value of the current block (C), and "mr" is indicative of an average pixel value of the reference block of the current block (C). If the offset value of the current block (C) is denoted by "IC_offset", the "IC_offset" information can be denoted by "IC_offset = mc - mr".
In this way, if the offset value of the neighboring block (N) is denoted by "IC_offset_pred", the encoding unit can transmit the residual value (Ric_offset) between the offset value (IC_offset) of the current block and the offset value (IC_offset_pred) of the neighboring block to a decoding unit, such that the decoding unit can reconstruct the offset value "IC_offset" of the current block (C). In this case, the "Ric_offset" information can also be represented by the above-mentioned Equation 20.
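A minimal sketch of this residual coding of the offset (hypothetical helper names; the predictor here is simply the neighboring block's offset):

    def encode_ic_offset(ic_offset, ic_offset_pred):
        """Encoder side: only the residual Ric_offset is transmitted."""
        return ic_offset - ic_offset_pred

    def decode_ic_offset(ric_offset, ic_offset_pred):
        """Decoder side: reconstruct the offset from residual + predictor."""
        return ric_offset + ic_offset_pred

    pred = 5                              # predictor taken from a neighboring block
    residual = encode_ic_offset(8, pred)  # -> 3
    assert decode_ic_offset(residual, pred) == 8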
In the case of generating the predictor of the current block on the basis of the flag information or the offset value of the neighboring block, a variety of methods can be made available. For example, information of only one neighboring block may be employed, or information of two or more neighboring blocks may be employed. In the case of employing the information of two or more neighboring blocks, an average value or a median value may be employed. In this way, if the current block is predictively encoded by a single reference block, the illumination compensation can be performed using a single offset value and single flag information.
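These alternatives can be sketched as one helper (illustrative names; the default of 0 when no neighbor is available is an assumption):

    from statistics import median

    def ic_offset_predictor(neighbor_offsets, mode="median"):
        """Form the IC offset predictor from the offsets of one or more
        neighboring blocks: a single value, their average, or their median."""
        if not neighbor_offsets:
            return 0                      # assumed default when none available
        if len(neighbor_offsets) == 1 or mode == "single":
            return neighbor_offsets[0]
        if mode == "average":
            return sum(neighbor_offsets) / len(neighbor_offsets)
        return median(neighbor_offsets)

    print(ic_offset_predictor([4, 7, 100]))  # median -> 7, robust to the outlier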
However, if the corresponding block is contained in the B slice, i.e., if the current block is predictively encoded by two or more reference blocks, a variety of methods can be made available.
For example, as shown in FIG. 28B, it is assumed that "C" is indicative of a current block, "N" is indicative of a neighboring block of the current block (C), "R0" is indicative of a reference block located at a reference picture (1) of List 0 referred to by the current block, "S0" is indicative of a reference block located at the reference picture (1) of List 0 referred to by the neighboring block, "R1" is indicative of a reference block located at a reference picture (3) of List 1 referred to by the current block, and "S1" is indicative of a reference block located at the reference picture (3) of List 1 referred to by the neighboring block. In this case, the flag information and the offset value of the current block are associated with each reference block, such that each reference block includes two values. Therefore, at least one of the flag information and the offset value can be employed, respectively.
According to a first example, a predictor of the current block can be obtained by combining information of two reference blocks via the motion compensation. In this case, single flag information indicates whether the illumination compensation of the current block is performed. If the flag information is determined to be "true", a single offset value is obtained from the current block and the predictor, such that the encoding/decoding processes can be performed.

According to a second example, in the motion compensation process, it is determined whether the illumination compensation will be applied to each of the two reference blocks. Flag information is assigned to each of the two reference blocks, and a single offset value obtained by using the above-mentioned flag information may be encoded or decoded. In this case, two flags are used on the basis of the reference block, and a single offset value is used on the basis of the current block.

According to a third example, single flag information may indicate whether the illumination compensation will be applied to a corresponding block on the basis of the current block. Individual offset values can be encoded/decoded for the two reference blocks. If the illumination compensation is not applied to any one of the reference blocks during the encoding process, a corresponding offset value is set to "0". In this case, single flag information is used on the basis of the current block, and two offset values are used on the basis of the reference block.
According to a fourth example, the flag information and the offset value can be encoded/decoded for individual reference blocks. In this case, two flags and two offset values can be used on the basis of the reference block.
According to the above-mentioned first to fourth examples, the offset value is not encoded as it is; rather, it is predicted from an offset value of the neighboring block, and its residual value is encoded.

FIG. 29 is a flow chart illustrating a method for performing an illumination compensation when a current block is predictively encoded by two or more reference blocks.
Referring to FIG. 29, in order to perform the illumination compensation on the condition that the current block is contained in the B slice, flag information and offset values of the neighboring blocks of the current block are extracted from the video signal, and index information of the corresponding reference blocks of the current and neighboring blocks is extracted, such that the predictor of the current block can be obtained by using the extracted information. The decoding unit obtains a residual value between the offset value of the current block and the predictor, and can reconstruct the offset value of the current block using the obtained residual value and the predictor. In the case of reconstructing the offset value of the current block, flag information (IC_flag) indicating whether the illumination compensation of the current block is performed may be used as necessary.
The decoding unit obtains flag information indicating whether the illumination compensation of the current block is performed at step S291. If the illumination compensation is performed according to the above-mentioned flag information (IC_flag) , the offset value of the current block indicating a difference in average pixel value between the current block and the reference block can be reconstructed at step S292.
However, if the current block is predictively encoded by two reference blocks, a decoder cannot directly recognize an offset value corresponding to each reference block, because an average pixel value of the two reference blocks is used when obtaining the offset value of the current block. Therefore, according to a first example, an offset value corresponding to each reference block is obtained, resulting in the implementation of correct prediction. Therefore, if the current block is predictively encoded by two reference blocks, an offset value corresponding to each reference block can be obtained by using the above-mentioned offset value at step S293, as denoted by the following Equation 22:
[Equation 22]
IC_offset = mc - w1 × mr1 - w2 × mr2
IC_offset_L0 = mc - mr1 = IC_offset + (w1 - 1) × mr1 + w2 × mr2
IC_offset_L1 = mc - mr2 = IC_offset + w1 × mr1 + (w2 - 1) × mr2

In Equation 22, mc is an average pixel value of the current block, and mr1 and mr2 are indicative of average pixel values of the reference blocks, respectively. w1 and w2 are indicative of weighted coefficients for a bi-predictive coding process, respectively. In the case of performing the illumination compensation using the above-mentioned method, the system independently obtains an accurate offset value corresponding to each reference block, such that it can more correctly perform the predictive coding process. In the case of reconstructing the offset value of the current block, the system adds the reconstructed residual value and the predictor value, such that it obtains the offset value. In this case, the predictor of List 0 and the predictor of List 1 are obtained and combined, such that the system can obtain a predictor value used for reconstructing the offset value of the current block.
FIG. 30 is a flow chart illustrating a method for performing an illumination compensation using flag information indicating whether the illumination compensation of a current block is performed.

The illumination compensation technology is adapted to compensate for an illumination difference or a difference in color. If the scope of the illumination compensation technology is extended, the extended illumination compensation technology may also be applied between sequences captured by the same camera. The illumination compensation technology can prevent the difference in illumination or color from greatly affecting the motion estimation. In practice, however, the encoding process employs flag information indicating whether the illumination compensation is performed. The application scope of the illumination compensation may be extended to a sequence, a view, a GOP (Group Of Pictures), a picture, a slice, a macroblock, a sub-block, etc. If the illumination compensation technology is applied to a small-sized area, a local area can be controlled; however, a large number of bits are consumed for the flag information, and the illumination compensation may not be required in every area. Therefore, a flag bit indicating whether the illumination compensation is performed is assigned to the individual areas, such that the system can effectively use the illumination compensation technology. The system obtains flag information capable of allowing a specific level of the video signal to be illumination-compensated at step S301.
For example, the following flag information may be assigned to the individual areas: "seq_IC_flag" information is assigned to a sequence level, "view_IC_flag" information is assigned to a view level, "GOP_IC_flag" information is assigned to a GOP level, "pic_IC_flag" information is assigned to a picture level, "slice_IC_flag" information is assigned to a slice level, "mb_IC_flag" information is assigned to a macroblock level, and "blk_IC_flag" information is assigned to a block level. A detailed description of the above-mentioned flag information will be given with reference to FIGS. 31A~31C. A specific level of the video signal in which the illumination compensation is performed by the flag information can be decoded at step S302.
FIGS. 31A~31C are conceptual diagrams illustrating the scope of flag information indicating whether illumination compensation of a current block is performed.

Referring to FIGS. 31A~31C, the flag information indicating whether the illumination compensation is performed can be hierarchically classified. For example, as can be seen from FIGS. 31A~31C, "seq_IC_flag" information 311 is assigned to a sequence level, "view_IC_flag" information 312 is assigned to a view level, "GOP_IC_flag" information 313 is assigned to a GOP level, "pic_IC_flag" information 314 is assigned to a picture level, "slice_IC_flag" information 315 is assigned to a slice level, "mb_IC_flag" information 316 is assigned to a macroblock level, and "blk_IC_flag" information 317 is assigned to a block level.
In this case, each flag is composed of 1 bit. The number of the above-mentioned flags may be set to at least one. The above-mentioned sequence/view/picture/slice-level flags may be located at a corresponding parameter set or header, or may also be located at another parameter set. For example, the "seq_IC_flag" information 311 may be located at a sequence parameter set, the "view_IC_flag" information 312 may be located at the view parameter set, the "pic_IC_flag" information 314 may be located at the picture parameter set, and the "slice_IC_flag" information 315 may be located at the slice header.
If two or more flags exist, specific information indicating whether the illumination compensation of an upper level is performed may control whether the illumination compensation of a lower level is performed. In other words, if each flag bit value is set to "1", the illumination compensation technology may be applied to a lower level.
For example, if the "pic_IC_flag" information is set to "1", the "slice_IC_flag" information of each slice contained in a corresponding picture may be set to "1" or "0", the "mb_IC_flag" information of each macroblock may be set to "1" or "0", or the "blk_IC_flag" information of each block may be set to "1" or "0". If the "seq_IC_flag" information is set to "1" on the condition that a view parameter set exists, the "view_IC_flag" value of each view may be set to "1" or "0". Otherwise, if the "view_IC_flag" information is set to "1", a flag bit value of a GOP, picture, slice, macroblock, or block of a corresponding view may be set to "1" or "0", as shown in FIG. 31A. Needless to say, the above-mentioned flag bit value of a GOP, picture, slice, macroblock, or block of the corresponding view may not be set in this manner as necessary; in that case, the GOP flag, the picture flag, the slice flag, the macroblock flag, or the block flag is not controlled by the view flag information, as shown in FIG. 31B.
If the flag bit value of an upper scope is set to "0", the flag bit values of a lower scope are automatically set to "0". For example, if the "seq_IC_flag" information is set to "0", this indicates that the illumination compensation technology is not applied to a corresponding sequence. Therefore, the "view_IC_flag" information is set to "0", the "GOP_IC_flag" information is set to "0", the "pic_IC_flag" information is set to "0", the "slice_IC_flag" information is set to "0", the "mb_IC_flag" information is set to "0", and the "blk_IC_flag" information is set to "0". If required, only one "mb_IC_flag" information or only one "blk_IC_flag" information may be employed according to the specific implementation method of the illumination compensation technology. If required, the "view_IC_flag" information may be employed when the view parameter set is newly applied to the multiview video coding. The offset value of the current block may be additionally encoded/decoded according to a flag bit value of the macroblock or sub-block acting as the lowest-level unit.
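The cascading control described above can be sketched as a simple lookup over the flag hierarchy; this is an illustrative reading (the names and the treat-missing-as-uncontrolled convention of FIG. 31B are assumptions):

    LEVELS = ["seq", "view", "GOP", "pic", "slice", "mb", "blk"]

    def ic_enabled(flags, level):
        """IC applies at `level` only if no ancestor flag is 0. `flags`
        maps a level name to 0/1; a missing level is treated as not
        controlled (as in FIG. 31B) and does not disable lower levels."""
        for lvl in LEVELS[: LEVELS.index(level) + 1]:
            if flags.get(lvl, 1) == 0:   # an upper-level "0" forces "0" below
                return False
        return True

    print(ic_enabled({"seq": 1, "view": 1, "pic": 0}, "mb"))           # -> False
    print(ic_enabled({"seq": 1, "view": 1, "pic": 1, "mb": 1}, "mb"))  # -> True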
As can be seen from FIG. 31C, the flag indicating the IC technique application may also be applied to both the slice level and the macroblock level. For example, if the "slice_IC_flag" information is set to "0", this indicates that the IC technique is not applied to a corresponding slice. If the "slice_IC_flag" information is set to "1", this indicates that the IC technique is applied to a corresponding slice. In this case, if the "mb_IC_flag" information is set to "1", "IC_offset" information of a corresponding macroblock is reconstructed. If the "mb_IC_flag" information is set to "0", this indicates that the IC technique is not applied to a corresponding macroblock.
According to another example, if the flag information of an upper level higher than the macroblock level is determined to be "true", the system can obtain an offset value of a current block indicating a difference in average pixel value between the current block and the reference block. In this case, the flag information of the macroblock level or the flag information of the block level may not be employed as necessary. The illumination compensation technique can indicate whether the illumination compensation of each block is performed using the flag information. The illumination compensation technique may also indicate whether the illumination compensation of each block is performed using a specific value such as a motion vector. The above-mentioned example can also be applied to a variety of applications of the illumination compensation technique.

In association with the upper scopes (i.e., sequence, view, GOP, and picture), the above-mentioned example can indicate whether the illumination compensation of a lower scope is performed using the flag information. The macroblock or block level acting as the lowest scope can effectively indicate whether the illumination compensation is performed using the offset value without using the flag bit. Similar to the method for use of the motion vector, the predictive coding process can be performed. For example, if the predictive coding process is applied to the current block, the offset value of the neighboring block is assigned to the offset value of the current block. If the predictive coding scheme is determined to be the bi-predictive coding scheme, offset values of the individual reference blocks are obtained by the calculation on the reference blocks detected from List 0 and List 1. Therefore, in the case of encoding the offset values of the current block, each offset value is not directly encoded; it is predicted from the offset values of the neighboring blocks, and a residual value is encoded/decoded. The method for predicting the offset value may be the above-mentioned offset prediction method, or a method of obtaining a median value as used for predicting the motion vector. In the case of a direct mode of bi-directional prediction, supplementary information is not encoded/decoded, in the same manner as for the motion vector, and the offset values can be obtained from predetermined information.
According to another example, a decoding unit (e.g., an H.264-based decoding unit) is used instead of the MVC decoding unit. A view sequence compatible with a conventional decoding unit should be decoded by the conventional decoding unit, such that the "view_IC_flag" information is set to "false" or "0". In this case, there is a need to explain the base-view concept. It should be noted that a single view sequence compatible with the H.264/AVC decoder may be required. Therefore, at least one view, which can be independently decoded, is defined and referred to as a base view. The base view is indicative of a reference view from among the several views (i.e., the multiview). A sequence corresponding to the base view in the MVC scheme is encoded by general video encoding schemes
(e.g., MPEG-2, MPEG-4, H.263, and H.264, etc.), such that it is generated in the form of an independent bitstream.
The above-mentioned base-view sequence may or may not be compatible with the H.264/AVC scheme. However, a view sequence compatible with the H.264/AVC scheme is always set to the base view.
FIG. 32 is a flow chart illustrating a method for obtaining a motion vector considering an offset value of a current block.
Referring to FIG. 32, the system can obtain an offset value of the current block at step S321. The system searches for a reference block optimally matched with the current block using the offset value at step S322. The system obtains the motion vector from the reference block, and encodes the motion vector at step S323. For the illumination compensation, a variety of factors are considered during the motion estimation. For example, in the case of a method for comparing a first block with a second block after offsetting their average pixel values, the average pixel value of each block is subtracted from its pixel values during the motion estimation, such that the similarity between the two blocks can be calculated. In this case, the offset value between the two blocks is independently encoded, such that the costs for this independent encoding are reflected in the motion estimation process. The conventional costs can be calculated by the following Equation 23:

[Equation 23]
COST = SAD + λ_MOTION × GenBit
In the case of using the illumination compensation, the SAD (Sum of Absolute Differences) can be represented by the following Equation 24:

[Equation 24]
SAD = Σ(i,j) |(Ic(i, j) - Mc) - (Ir(i + x, j + y) - Mr)|

In Equation 24, Ic is indicative of a pixel value of the current block, and Ir is indicative of a pixel value of the reference block. Mc is indicative of an average pixel value of the current block, and Mr is indicative of an average pixel value of the reference block. The offset costs can be included in the above-mentioned SAD calculation process, as denoted by the following Equations 25 and 26:
[Equation 25]
COST_IC = SAD_IC + λ_MOTION × GenBit

[Equation 26]
SAD_IC = α × |Mc - Mr| + Σ(i,j) |(Ic(i, j) - Mc) - (Ir(i + x, j + y) - Mr)|
With reference to Equations 25 and 26, α is indicative of a weighted coefficient. If the value of α is set to "1", the absolute value of the offset value is fully reflected. As another method for reflecting the illumination compensation cost, there is a method of predicting the number of bits required for encoding the offset value. The following Equation 27 represents a method for predicting the offset coding bits. In this case, the coding bits can be predicted in proportion to the magnitude of the offset residual value.

[Equation 27]
GenBit_IC = GenBit + Bit_IC
In this case, a new cost can be calculated by the following Equation 28:

[Equation 28]
COST = SAD_IC + λ_MOTION × GenBit_IC
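A sketch of this IC-aware matching cost, following Equations 24 through 28 (the block arrays, α, λ, and bit counts below are illustrative placeholders, not values from the specification):

    import numpy as np

    def sad_ic(cur, ref, alpha=1.0):
        """Mean-removed SAD of Equation 26: block averages are subtracted
        before comparison, and alpha weights the offset term |Mc - Mr|,
        which must be coded separately."""
        mc, mr = cur.mean(), ref.mean()
        return np.abs((cur - mc) - (ref - mr)).sum() + alpha * abs(mc - mr)

    def cost_ic(cur, ref, gen_bit, bit_ic, lam, alpha=1.0):
        """Equations 25/27/28: COST = SAD_IC + lambda * (GenBit + Bit_IC)."""
        return sad_ic(cur, ref, alpha) + lam * (gen_bit + bit_ic)

    cur = np.full((4, 4), 120.0)
    ref = np.full((4, 4), 110.0)   # same texture, shifted in brightness by 10
    print(sad_ic(cur, ref))        # -> 10.0 (only the offset term remains)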

Claims

What is Claimed is:
1. A method for decoding a video signal, comprising: receiving a bitstream comprising the video signal encoded according to a first profile that represents a selection from a set of multiple profiles that includes at least one profile for a multiview video signal, and profile information that identifies the first profile; extracting the profile information from the bitstream; and decoding the video signal according to the determined profile using illumination compensation between segments of pictures in respective views when the determined profile corresponds to a multiview video signal with each of multiple views comprising multiple pictures segmented into multiple segments.
2. The method according to claim 1, further comprising extracting from the bitstream configuration information associated with multiple views when the determined profile corresponds to a multiview video signal, wherein the configuration information comprises at least one of view-dependency information representing dependency relationships between respective views, view identification information indicating a reference view, view-number information indicating the number of views, view level information for providing view scalability, and view-arrangement information indicating a camera arrangement.
3. The method according to claim 1, where the profile information is located in a header of the bitstream.
4. The method according to claim 1, wherein the view level information corresponds to one of a plurality of levels associated with a hierarchical view prediction structure among the views of the multiview video signal.
5. The method according to claim 1, wherein the view-dependency information represents the dependency relationships in a two-dimensional data structure.
6. The method according to claim 5, wherein the two- dimensional data structure comprises a matrix.
7. The method according to claim 1, wherein the segments comprise image blocks.
8. The method according to claim 7, wherein using illumination compensation for a first segment comprises obtaining an offset value for illumination compensation of a neighboring block by forming a sum that includes a predictor for illumination compensation of the neighboring block and a residual value.
9. The method according to claim 8, further comprising selecting at least one neighboring block based on whether one or more conditions are satisfied for a neighboring block in an order in which one or more vertical or horizontal neighbors are followed by one or more diagonal neighbors.
10. The method according to claim 9, wherein selecting at least one neighboring block comprises determining whether one or more conditions are satisfied for a neighboring block in the order of: a left neighboring block, followed by an upper neighboring block, followed by a right-upper neighboring block, followed by a left-upper neighboring block.
11. The method according to claim 9, wherein determining whether one or more conditions are satisfied for a neighboring block comprises extracting a value associated with the neighboring block from the bitstream indicating whether illumination compensation of the neighboring block is to be performed.
12. The method according to claim 9, wherein selecting at least one neighboring block comprises determining whether to use an offset value for illumination compensation of a single neighboring block or multiple offset values for illumination compensation of respective neighboring blocks.
13. A method for decoding a multiview video signal, comprising: receiving a bitstream comprising the multiview video signal encoded according to dependency relationships between respective views, and view-dependency data representing the dependency relationships; extracting the view-dependency data and determining the dependency relationships from the extracted data; and decoding the multiview video signal according to the determined dependency relationships using illumination compensation between segments of pictures in respective views, where the multiview video signal includes multiple views each comprising multiple pictures segmented into multiple segments.
14. The method according to claim 13, wherein the view-dependency data represents the dependency relationships in a two-dimensional data structure.
15. The method according to claim 14, wherein the view-dependency data comprises a matrix.
16. The method according to claim 13, further comprising extracting from the bitstream configuration information comprising at least one of view identification information indicating a reference view, view-number information indicating the number of views, view level information for providing view scalability, and view-arrangement information indicating a camera arrangement.
17. The method according to claim 13, wherein the segments comprise image blocks.
18. The method according to claim 17, wherein using illumination compensation for a first segment comprises obtaining an offset value for illumination compensation of a neighboring block by forming a sum that includes a predictor for illumination compensation of the neighboring block and a residual value.
19. The method according to claim 18, further comprising selecting at least one neighboring block based on whether one or more conditions are satisfied for a neighboring block in an order in which one or more vertical or horizontal neighbors are followed by one or more diagonal neighbors.
20. The method according to claim 19, wherein selecting at least one neighboring block comprises determining whether one or more conditions are satisfied for a neighboring block in the order of: a left neighboring block, followed by an upper neighboring block, followed by a right-upper neighboring block, followed by a left-upper neighboring block.
21. The method according to claim 19, wherein determining whether one or more conditions are satisfied for a neighboring block comprises extracting a value associated with the neighboring block from the bitstream indicating whether illumination compensation of the neighboring block is to be performed.
22. The method according to claim 19, wherein selecting at least one neighboring block comprises determining whether to use an offset value for illumination compensation of a single neighboring block or multiple offset values for illumination compensation of respective neighboring blocks.
23. The method according to claim 22, further comprising, when multiple offset values are to be used, obtaining the predictor for performing illumination compensation of the first block by combining the multiple offset values.
24. The method according to claim 23, wherein combining the multiple offset values comprises taking an average or median of the offset values.
PCT/KR2007/000226 2006-01-12 2007-01-12 Processing multiview video WO2007081177A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP07700953A EP1977593A4 (en) 2006-01-12 2007-01-12 Processing multiview video
JP2008550242A JP5199123B2 (en) 2006-01-12 2007-01-12 Multi-view video processing

Applications Claiming Priority (24)

Application Number Priority Date Filing Date Title
US75823406P 2006-01-12 2006-01-12
US60/758,234 2006-01-12
KR10-2006-0004956 2006-01-17
KR20060004956 2006-01-17
US75962006P 2006-01-18 2006-01-18
US60/759,620 2006-01-18
US76253406P 2006-01-27 2006-01-27
US60/762,534 2006-01-27
KR10-2006-0027100 2006-03-24
KR20060027100 2006-03-24
US78719306P 2006-03-30 2006-03-30
US60/787,193 2006-03-30
KR10-2006-0037773 2006-04-26
KR1020060037773A KR20070076356A (en) 2006-01-18 2006-04-26 Method and apparatus for coding and decoding of video sequence
US81827406P 2006-07-05 2006-07-05
US60/818,274 2006-07-05
US83008706P 2006-07-12 2006-07-12
US60/830,087 2006-07-12
US83032806P 2006-07-13 2006-07-13
US60/830,328 2006-07-13
KR10-2006-0110338 2006-11-09
KR10-2006-0110337 2006-11-09
KR1020060110337A KR20070076391A (en) 2006-01-18 2006-11-09 A method and apparatus for decoding/encoding a video signal
KR1020060110338A KR20070076392A (en) 2006-01-18 2006-11-09 A method and apparatus for decoding/encoding a video signal

Publications (1)

Publication Number Publication Date
WO2007081177A1 true WO2007081177A1 (en) 2007-07-19

Family

ID=46045583

Family Applications (3)

Application Number Title Priority Date Filing Date
PCT/KR2007/000226 WO2007081177A1 (en) 2006-01-12 2007-01-12 Processing multiview video
PCT/KR2007/000225 WO2007081176A1 (en) 2006-01-12 2007-01-12 Processing multiview video
PCT/KR2007/000228 WO2007081178A1 (en) 2006-01-12 2007-01-12 Processing multiview video

Family Applications After (2)

Application Number Title Priority Date Filing Date
PCT/KR2007/000225 WO2007081176A1 (en) 2006-01-12 2007-01-12 Processing multiview video
PCT/KR2007/000228 WO2007081178A1 (en) 2006-01-12 2007-01-12 Processing multiview video

Country Status (6)

Country Link
US (9) US7856148B2 (en)
EP (3) EP1982517A4 (en)
JP (3) JP5192393B2 (en)
KR (8) KR100943912B1 (en)
DE (1) DE202007019463U1 (en)
WO (3) WO2007081177A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011515940A (en) * 2008-03-18 2011-05-19 サムスン エレクトロニクス カンパニー リミテッド Video encoding and decoding method and apparatus
JP2013505615A (en) * 2009-09-17 2013-02-14 ミツビシ・エレクトリック・アールアンドディー・センター・ヨーロッパ・ビーヴィ Video weighted motion compensation
US9363432B2 (en) 2012-06-11 2016-06-07 Canon Kabushiki Kaisha Image processing apparatus and image processing method
CN104053003B (en) * 2008-07-02 2018-01-26 三星电子株式会社 Method for encoding images and device and its coding/decoding method and device
WO2020018524A1 (en) * 2018-07-17 2020-01-23 Qualcomm Incorporated Block-based adaptive loop filter design and signaling
US10547866B2 (en) 2012-03-02 2020-01-28 Sun Patent Trust Image encoding method, image decoding method, image encoding apparatus, image decoding apparatus, and image coding apparatus
EP4091323A4 (en) * 2020-01-14 2024-02-14 Tencent America LLC Method and apparatus for video coding

Families Citing this family (234)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6307487B1 (en) 1998-09-23 2001-10-23 Digital Fountain, Inc. Information additive code generator and decoder for communication systems
US7068729B2 (en) 2001-12-21 2006-06-27 Digital Fountain, Inc. Multi-stage code generator and decoder for communication systems
US7003035B2 (en) 2002-01-25 2006-02-21 Microsoft Corporation Video coding methods and apparatuses
US20040001546A1 (en) 2002-06-03 2004-01-01 Alexandros Tourapis Spatiotemporal prediction for bidirectionally predictive (B) pictures and motion vector prediction for multi-picture reference motion compensation
US9240810B2 (en) 2002-06-11 2016-01-19 Digital Fountain, Inc. Systems and processes for decoding chain reaction codes through inactivation
US7154952B2 (en) 2002-07-19 2006-12-26 Microsoft Corporation Timestamp-independent motion vector prediction for predictive (P) and bidirectionally predictive (B) pictures
EP2357732B1 (en) 2002-10-05 2022-04-06 QUALCOMM Incorporated Systematic encoding and decoding of chain reaction codes
WO2005112250A2 (en) 2004-05-07 2005-11-24 Digital Fountain, Inc. File download and streaming system
US7903737B2 (en) * 2005-11-30 2011-03-08 Mitsubishi Electric Research Laboratories, Inc. Method and system for randomly accessing multiview videos with known prediction dependency
JP4991757B2 (en) * 2006-01-09 2012-08-01 エルジー エレクトロニクス インコーポレイティド Video signal encoding / decoding method
KR101276847B1 (en) 2006-01-12 2013-06-18 엘지전자 주식회사 Processing multiview video
KR100943912B1 (en) * 2006-01-12 2010-03-03 엘지전자 주식회사 Method and apparatus for processing multiview video
US20070177671A1 (en) * 2006-01-12 2007-08-02 Lg Electronics Inc. Processing multiview video
CN101686107B (en) 2006-02-13 2014-08-13 数字方敦股份有限公司 Streaming and buffering using variable FEC overhead and protection periods
US9270414B2 (en) 2006-02-21 2016-02-23 Digital Fountain, Inc. Multiple-field based code generator and decoder for communications systems
KR101342587B1 (en) * 2006-03-22 2013-12-17 세종대학교산학협력단 Method and Apparatus for encoding and decoding the compensated illumination change
US20100091845A1 (en) * 2006-03-30 2010-04-15 Byeong Moon Jeon Method and apparatus for decoding/encoding a video signal
ES2636917T3 (en) 2006-03-30 2017-10-10 Lg Electronics, Inc. A method and apparatus for decoding / encoding a video signal
WO2007134196A2 (en) 2006-05-10 2007-11-22 Digital Fountain, Inc. Code generator and decoder using hybrid codes
US9419749B2 (en) 2009-08-19 2016-08-16 Qualcomm Incorporated Methods and apparatus employing FEC codes with permanent inactivation of symbols for encoding and decoding processes
US9432433B2 (en) 2006-06-09 2016-08-30 Qualcomm Incorporated Enhanced block-request streaming system using signaling or block creation
US9209934B2 (en) 2006-06-09 2015-12-08 Qualcomm Incorporated Enhanced block-request streaming using cooperative parallel HTTP and forward error correction
US9380096B2 (en) 2006-06-09 2016-06-28 Qualcomm Incorporated Enhanced block-request streaming system for handling low-latency streaming
US9178535B2 (en) 2006-06-09 2015-11-03 Digital Fountain, Inc. Dynamic stream interleaving and sub-stream based delivery
US9386064B2 (en) 2006-06-09 2016-07-05 Qualcomm Incorporated Enhanced block-request streaming using URL templates and construction rules
WO2007148909A1 (en) * 2006-06-19 2007-12-27 Lg Electronics, Inc. Method and apparatus for processing a vedeo signal
CN101485208B (en) * 2006-07-05 2016-06-22 汤姆森许可贸易公司 The coding of multi-view video and coding/decoding method and device
CN101491096B (en) * 2006-07-12 2012-05-30 Lg电子株式会社 Signal processing method and apparatus thereof
CN101518086B (en) * 2006-07-20 2013-10-30 汤姆森特许公司 Method and apparatus for signaling view scalability in multi-view video coding
WO2008023968A1 (en) * 2006-08-25 2008-02-28 Lg Electronics Inc A method and apparatus for decoding/encoding a video signal
KR101366092B1 (en) 2006-10-13 2014-02-21 삼성전자주식회사 Method and apparatus for encoding and decoding multi-view image
EP2087738B1 (en) * 2006-10-13 2016-04-13 Thomson Licensing Method for reference picture management involving multiview video coding
EP2090110A2 (en) 2006-10-13 2009-08-19 Thomson Licensing Reference picture list management syntax for multiple view video coding
BRPI0719536A2 (en) * 2006-10-16 2014-01-14 Thomson Licensing METHOD FOR USING A GENERAL LAYER UNIT IN THE WORK NETWORK SIGNALING AN INSTANT DECODING RESET DURING A VIDEO OPERATION.
CN101529921B (en) * 2006-10-18 2013-07-03 汤姆森特许公司 Local illumination and color compensation without explicit signaling
US20080095228A1 (en) * 2006-10-20 2008-04-24 Nokia Corporation System and method for providing picture output indications in video coding
WO2008051381A2 (en) * 2006-10-24 2008-05-02 Thomson Licensing Picture management for multi-view video coding
KR101370287B1 (en) * 2006-11-22 2014-03-07 세종대학교산학협력단 Method and apparatus for deblocking filtering
KR100856411B1 (en) * 2006-12-01 2008-09-04 삼성전자주식회사 Method and apparatus for compensating illumination compensation and method and apparatus for encoding moving picture based on illumination compensation, and method and apparatus for encoding moving picture based on illumination compensation
KR100905723B1 (en) * 2006-12-08 2009-07-01 한국전자통신연구원 System and Method for Digital Real Sense Transmitting/Receiving based on Non-Realtime
KR100922275B1 (en) * 2006-12-15 2009-10-15 경희대학교 산학협력단 Derivation process of a boundary filtering strength and deblocking filtering method and apparatus using the derivation process
EP2116063B1 (en) * 2007-01-04 2017-03-08 Thomson Licensing Methods and apparatus for multi-view information conveyed in high level syntax
CN101578874B (en) * 2007-01-04 2011-12-07 汤姆森特许公司 Methods and apparatus for reducing coding artifacts for illumination compensation and/or color compensation in multi-view coded video
JP5249242B2 (en) * 2007-01-24 2013-07-31 エルジー エレクトロニクス インコーポレイティド Video signal processing method and apparatus
KR20100014552A (en) * 2007-03-23 2010-02-10 엘지전자 주식회사 A method and an apparatus for decoding/encoding a video signal
US8548261B2 (en) * 2007-04-11 2013-10-01 Samsung Electronics Co., Ltd. Method and apparatus for encoding and decoding multi-view image
ES2858578T3 (en) 2007-04-12 2021-09-30 Dolby Int Ab Tiled organization in video encoding and decoding
JP5254565B2 (en) * 2007-04-24 2013-08-07 株式会社エヌ・ティ・ティ・ドコモ Moving picture predictive coding apparatus, method and program, and moving picture predictive decoding apparatus, method and program
WO2008140190A1 (en) * 2007-05-14 2008-11-20 Samsung Electronics Co, . Ltd. Method and apparatus for encoding and decoding multi-view image
CN105791864B (en) * 2007-05-16 2019-01-15 汤姆逊许可Dtv公司 The device of chip set is used in coding and transmission multi-view video coding information
KR101244917B1 (en) * 2007-06-11 2013-03-18 삼성전자주식회사 Method and apparatus for compensating illumination compensation and method and apparatus for encoding and decoding video based on illumination compensation
KR101460362B1 (en) * 2007-06-25 2014-11-14 삼성전자주식회사 Method and apparatus for illumination compensation of multi-view video coding
US20080317124A1 (en) * 2007-06-25 2008-12-25 Sukhee Cho Multi-view video coding system, decoding system, bitstream extraction system for decoding base view and supporting view random access
KR20080114482A (en) * 2007-06-26 2008-12-31 삼성전자주식회사 Method and apparatus for illumination compensation of multi-view video coding
US20100118942A1 (en) * 2007-06-28 2010-05-13 Thomson Licensing Methods and apparatus at an encoder and decoder for supporting single loop decoding of multi-view coded video
US8254455B2 (en) 2007-06-30 2012-08-28 Microsoft Corporation Computing collocated macroblock information for direct mode macroblocks
CN101816180B (en) * 2007-08-06 2013-01-16 汤姆森特许公司 Methods and apparatus for motion skip mode with multiple inter-view reference pictures
US20090060043A1 (en) * 2007-08-29 2009-03-05 Geert Nuyttens Multiviewer based on merging of output streams of spatio scalable codecs in a compressed domain
BRPI0816680A2 (en) 2007-09-12 2015-03-17 Qualcomm Inc Generate and communicate source identification information to enable reliable communications.
WO2009048502A2 (en) * 2007-10-05 2009-04-16 Thomson Licensing Methods and apparatus for incorporating video usability information (vui) within a multi-view video (mvc) coding system
KR101345287B1 (en) * 2007-10-12 2013-12-27 삼성전자주식회사 Scalable video encoding method and apparatus and scalable video decoding method and apparatus
CN101415114B (en) * 2007-10-17 2010-08-25 华为终端有限公司 Method and apparatus for encoding and decoding video, and video encoder and decoder
US8270472B2 (en) * 2007-11-09 2012-09-18 Thomson Licensing Methods and apparatus for adaptive reference filtering (ARF) of bi-predictive pictures in multi-view coded video
US20090154567A1 (en) * 2007-12-13 2009-06-18 Shaw-Min Lei In-loop fidelity enhancement for video compression
KR20090090152A (en) * 2008-02-20 2009-08-25 삼성전자주식회사 Method and apparatus for video encoding and decoding
US20090219985A1 (en) * 2008-02-28 2009-09-03 Vasanth Swaminathan Systems and Methods for Processing Multiple Projections of Video Data in a Single Video File
US8811499B2 (en) * 2008-04-10 2014-08-19 Imagine Communications Corp. Video multiviewer system permitting scrolling of multiple video windows and related methods
KR101591085B1 (en) * 2008-05-19 2016-02-02 삼성전자주식회사 Apparatus and method for generating and playing image file
US8326075B2 (en) 2008-09-11 2012-12-04 Google Inc. System and method for video encoding using adaptive loop filter
ES2515967T3 (en) * 2008-10-07 2014-10-30 Telefonaktiebolaget L M Ericsson (Publ) Multi-view multimedia data
KR20100040640A (en) * 2008-10-10 2010-04-20 엘지전자 주식회사 Receiving system and method of processing data
US8760495B2 (en) * 2008-11-18 2014-06-24 Lg Electronics Inc. Method and apparatus for processing video signal
KR101190891B1 (en) * 2008-12-17 2012-10-12 파나소닉 주식회사 Method for forming through electrode, and semiconductor device
KR101578740B1 (en) * 2008-12-18 2015-12-21 엘지전자 주식회사 Digital broadcasting reception method capable of displaying stereoscopic image, and digital broadcasting reception apparatus using same
CN101884220B (en) 2009-01-19 2013-04-03 松下电器产业株式会社 Coding method, decoding method, coding apparatus, decoding apparatus, program, and integrated circuit
BRPI1007163A2 (en) 2009-01-26 2018-09-25 Thomson Licensing frame compression for video encoding
US8947504B2 (en) * 2009-01-28 2015-02-03 Lg Electronics Inc. Broadcast receiver and video data processing method thereof
US8189666B2 (en) * 2009-02-02 2012-05-29 Microsoft Corporation Local picture identifier and computation of co-located information
KR20100089705A (en) * 2009-02-04 2010-08-12 삼성전자주식회사 Apparatus and method for encoding and decoding 3d video
WO2010092772A1 (en) * 2009-02-12 2010-08-19 日本電信電話株式会社 Multi-view image encoding method, multi-view image decoding method, multi-view image encoding device, multi-view image decoding device, multi-view image encoding program, and multi-view image decoding program
US8270495B2 (en) * 2009-02-13 2012-09-18 Cisco Technology, Inc. Reduced bandwidth off-loading of entropy coding/decoding
ES2524973T3 (en) 2009-02-23 2014-12-16 Nippon Telegraph And Telephone Corporation Multivist image coding and decoding using localized lighting and color correction
US9281847B2 (en) 2009-02-27 2016-03-08 Qualcomm Incorporated Mobile reception of digital video broadcasting—terrestrial services
JP4957823B2 (en) 2009-04-08 2012-06-20 ソニー株式会社 Playback apparatus and playback method
JP5267886B2 (en) * 2009-04-08 2013-08-21 ソニー株式会社 REPRODUCTION DEVICE, RECORDING MEDIUM, AND INFORMATION PROCESSING METHOD
JP4962525B2 (en) * 2009-04-08 2012-06-27 ソニー株式会社 REPRODUCTION DEVICE, REPRODUCTION METHOD, AND PROGRAM
JP4985884B2 (en) * 2009-04-08 2012-07-25 ソニー株式会社 REPRODUCTION DEVICE, REPRODUCTION METHOD, AND RECORDING METHOD
US8982183B2 (en) 2009-04-17 2015-03-17 Lg Electronics Inc. Method and apparatus for processing a multiview video signal
CN103124351A (en) * 2009-04-28 2013-05-29 松下电器产业株式会社 Image decoding method, image coding method, image decoding apparatus, and image coding apparatus
US8780999B2 (en) * 2009-06-12 2014-07-15 Qualcomm Incorporated Assembling multiview video coding sub-BITSTREAMS in MPEG-2 systems
US8411746B2 (en) 2009-06-12 2013-04-02 Qualcomm Incorporated Multiview video coding over MPEG-2 systems
KR101631270B1 (en) * 2009-06-19 2016-06-16 삼성전자주식회사 Method and apparatus for filtering image by using pseudo-random filter
KR20110007928A (en) * 2009-07-17 2011-01-25 삼성전자주식회사 Method and apparatus for encoding/decoding multi-view picture
US8948241B2 (en) * 2009-08-07 2015-02-03 Qualcomm Incorporated Signaling characteristics of an MVC operation point
KR101456498B1 (en) * 2009-08-14 2014-10-31 삼성전자주식회사 Method and apparatus for video encoding considering scanning order of coding units with hierarchical structure, and method and apparatus for video decoding considering scanning order of coding units with hierarchical structure
US9288010B2 (en) 2009-08-19 2016-03-15 Qualcomm Incorporated Universal file delivery methods for providing unequal error protection and bundled file delivery services
US9917874B2 (en) 2009-09-22 2018-03-13 Qualcomm Incorporated Enhanced block-request streaming using block partitioning or request controls for improved client-side handling
WO2011068807A1 (en) * 2009-12-01 2011-06-09 Divx, Llc System and method for determining bit stream compatibility
KR20110068792A (en) 2009-12-16 2011-06-22 한국전자통신연구원 Adaptive image coding apparatus and method
US9215445B2 (en) 2010-01-29 2015-12-15 Thomson Licensing Block-based interleaving
WO2011105337A1 (en) * 2010-02-24 2011-09-01 日本電信電話株式会社 Multiview video coding method, multiview video decoding method, multiview video coding device, multiview video decoding device, and program
KR101289269B1 (en) * 2010-03-23 2013-07-24 한국전자통신연구원 An apparatus and method for displaying image data in image system
JP2011216965A (en) * 2010-03-31 2011-10-27 Sony Corp Information processing apparatus, information processing method, reproduction apparatus, reproduction method, and program
US20110280311A1 (en) 2010-05-13 2011-11-17 Qualcomm Incorporated One-stream coding for asymmetric stereo video
WO2011146451A1 (en) 2010-05-20 2011-11-24 Thomson Licensing Methods and apparatus for adaptive motion vector candidate ordering for video encoding and decoding
JP5387520B2 (en) * 2010-06-25 2014-01-15 ソニー株式会社 Information processing apparatus and information processing method
US9485546B2 (en) 2010-06-29 2016-11-01 Qualcomm Incorporated Signaling video samples for trick mode video representations
JP5392199B2 (en) * 2010-07-09 2014-01-22 ソニー株式会社 Image processing apparatus and method
US9185439B2 (en) 2010-07-15 2015-11-10 Qualcomm Incorporated Signaling data for multiplexing video components
US9596447B2 (en) * 2010-07-21 2017-03-14 Qualcomm Incorporated Providing frame packing type information for video coding
US9456015B2 (en) 2010-08-10 2016-09-27 Qualcomm Incorporated Representation groups for network streaming of coded multimedia data
KR102185765B1 (en) * 2010-08-11 2020-12-03 지이 비디오 컴프레션, 엘엘씨 Multi-view signal codec
WO2012050832A1 (en) 2010-09-28 2012-04-19 Google Inc. Systems and methods utilizing efficient video compression techniques for providing static image data
US9055305B2 (en) 2011-01-09 2015-06-09 Mediatek Inc. Apparatus and method of sample adaptive offset for video coding
US9532059B2 (en) 2010-10-05 2016-12-27 Google Technology Holdings LLC Method and apparatus for spatial scalability for video coding
CA2810899C (en) 2010-10-05 2016-08-09 General Instrument Corporation Coding and decoding utilizing adaptive context model selection with zigzag scan
US20130250056A1 (en) * 2010-10-06 2013-09-26 Nomad3D Sas Multiview 3d compression format and algorithms
US20120269275A1 (en) * 2010-10-20 2012-10-25 Nokia Corporation Method and device for video coding and decoding
CN107105292B (en) 2010-12-13 2020-09-08 韩国电子通信研究院 Method for decoding video signal based on interframe prediction
GB2486692B (en) * 2010-12-22 2014-04-16 Canon Kk Method for encoding a video sequence and associated encoding device
US9161041B2 (en) 2011-01-09 2015-10-13 Mediatek Inc. Apparatus and method of efficient sample adaptive offset
US20120189060A1 (en) * 2011-01-20 2012-07-26 Industry-Academic Cooperation Foundation, Yonsei University Apparatus and method for encoding and decoding motion information and disparity information
US9215473B2 (en) * 2011-01-26 2015-12-15 Qualcomm Incorporated Sub-slices in video coding
US9270299B2 (en) 2011-02-11 2016-02-23 Qualcomm Incorporated Encoding and decoding using elastic codes with flexible source block mapping
KR20120095611A (en) * 2011-02-21 2012-08-29 삼성전자주식회사 Method and apparatus for encoding/decoding multi view video
KR20120095610A (en) * 2011-02-21 2012-08-29 삼성전자주식회사 Method and apparatus for encoding and decoding multi-view video
US8938001B1 (en) 2011-04-05 2015-01-20 Google Inc. Apparatus and method for coding using combinations
US8780971B1 (en) 2011-04-07 2014-07-15 Google, Inc. System and method of encoding using selectable loop filters
US8780996B2 (en) 2011-04-07 2014-07-15 Google, Inc. System and method for encoding and decoding video data
US8781004B1 (en) 2011-04-07 2014-07-15 Google Inc. System and method for encoding video using variable loop filter
US9247249B2 (en) 2011-04-20 2016-01-26 Qualcomm Incorporated Motion vector prediction in video coding
US8989256B2 (en) 2011-05-25 2015-03-24 Google Inc. Method and apparatus for using segmentation-based coding of prediction information
KR102083012B1 (en) * 2011-06-28 2020-02-28 엘지전자 주식회사 Method for setting motion vector list and apparatus using same
US8879826B2 (en) * 2011-07-05 2014-11-04 Texas Instruments Incorporated Method, system and computer program product for switching between 2D and 3D coding of a video sequence of images
US11496760B2 (en) 2011-07-22 2022-11-08 Qualcomm Incorporated Slice header prediction for depth maps in three-dimensional video codecs
US9521418B2 (en) * 2011-07-22 2016-12-13 Qualcomm Incorporated Slice header three-dimensional video extension for slice header prediction
US8891616B1 (en) 2011-07-27 2014-11-18 Google Inc. Method and apparatus for entropy encoding based on encoding cost
US9635355B2 (en) 2011-07-28 2017-04-25 Qualcomm Incorporated Multiview video coding
US9674525B2 (en) 2011-07-28 2017-06-06 Qualcomm Incorporated Multiview video coding
US9288505B2 (en) * 2011-08-11 2016-03-15 Qualcomm Incorporated Three-dimensional video with asymmetric spatial resolution
KR102163151B1 (en) 2011-08-30 2020-10-08 디빅스, 엘엘씨 Systems and methods for encoding and streaming video encoded using a plurality of maximum bitrate levels
US8818171B2 (en) 2011-08-30 2014-08-26 Kourosh Soroushian Systems and methods for encoding alternative streams of video for playback on playback devices having predetermined display aspect ratios and network connection maximum data rates
US9253233B2 (en) 2011-08-31 2016-02-02 Qualcomm Incorporated Switch signaling methods providing improved switching between representations for adaptive HTTP streaming
US8885706B2 (en) 2011-09-16 2014-11-11 Google Inc. Apparatus and methodology for a video codec system with noise reduction capability
US9237356B2 (en) 2011-09-23 2016-01-12 Qualcomm Incorporated Reference picture list construction for video coding
US9843844B2 (en) 2011-10-05 2017-12-12 Qualcomm Incorporated Network streaming of media data
US9781449B2 (en) * 2011-10-06 2017-10-03 Synopsys, Inc. Rate distortion optimization in image and video encoding
US9338463B2 (en) 2011-10-06 2016-05-10 Synopsys, Inc. Visual quality measure for real-time video processing
US9712819B2 (en) 2011-10-12 2017-07-18 Lg Electronics Inc. Image encoding method and image decoding method
US8855433B2 (en) * 2011-10-13 2014-10-07 Sharp Kabushiki Kaisha Tracking a reference picture based on a designated picture on an electronic device
US8768079B2 (en) 2011-10-13 2014-07-01 Sharp Laboratories Of America, Inc. Tracking a reference picture on an electronic device
US8787688B2 (en) * 2011-10-13 2014-07-22 Sharp Laboratories Of America, Inc. Tracking a reference picture based on a designated picture on an electronic device
US9077998B2 (en) 2011-11-04 2015-07-07 Qualcomm Incorporated Padding of segments in coded slice NAL units
US9124895B2 (en) 2011-11-04 2015-09-01 Qualcomm Incorporated Video coding with network abstraction layer units that include multiple encoded picture partitions
WO2013067942A1 (en) * 2011-11-08 2013-05-16 华为技术有限公司 Intra-frame prediction method and device
SG10201502731VA (en) 2011-11-08 2015-05-28 Samsung Electronics Co Ltd Method and apparatus for motion vector determination in video encoding or decoding
US9485503B2 (en) 2011-11-18 2016-11-01 Qualcomm Incorporated Inside view motion prediction among texture and depth view components
US9247257B1 (en) 2011-11-30 2016-01-26 Google Inc. Segmentation based entropy encoding and decoding
US9258559B2 (en) 2011-12-20 2016-02-09 Qualcomm Incorporated Reference picture list construction for multi-view and three-dimensional video coding
PL2805511T3 (en) 2012-01-20 2019-09-30 Sun Patent Trust Methods and apparatus for encoding and decoding video using temporal motion vector prediction
EP2811744A1 (en) * 2012-01-31 2014-12-10 Sony Corporation Image processing apparatus and image processing method
ES2865101T3 (en) 2012-02-03 2021-10-15 Sun Patent Trust Image encoding method, image decoding method, image encoding device, image decoding device and image encoding/decoding device
US9094681B1 (en) 2012-02-28 2015-07-28 Google Inc. Adaptive segmentation
US9131073B1 (en) 2012-03-02 2015-09-08 Google Inc. Motion estimation aided noise reduction
WO2013132792A1 (en) 2012-03-06 2013-09-12 Panasonic Corporation Method for coding video, method for decoding video, device for coding video, device for decoding video, and device for coding/decoding video
GB2500023A (en) * 2012-03-06 2013-09-11 Queen Mary & Westfield College Coding and Decoding a Video Signal Including Generating and Using a Modified Residual and/or Modified Prediction Signal
US11039138B1 (en) 2012-03-08 2021-06-15 Google Llc Adaptive coding of prediction modes using probability distributions
US20130243085A1 (en) * 2012-03-15 2013-09-19 Samsung Electronics Co., Ltd. Method of multi-view video coding and decoding based on local illumination and contrast compensation of reference frames without extra bitrate overhead
US10200709B2 (en) 2012-03-16 2019-02-05 Qualcomm Incorporated High-level syntax extensions for high efficiency video coding
WO2013137697A1 (en) * 2012-03-16 2013-09-19 LG Electronics Inc. Method for storing image information, method for parsing image information and apparatus using same
US9503720B2 (en) 2012-03-16 2016-11-22 Qualcomm Incorporated Motion vector coding and bi-prediction in HEVC and its extensions
US9294226B2 (en) 2012-03-26 2016-03-22 Qualcomm Incorporated Universal object delivery and template-based file delivery
JP2013247651A (en) * 2012-05-29 2013-12-09 Canon Inc Coding apparatus, coding method, and program
US9781447B1 (en) 2012-06-21 2017-10-03 Google Inc. Correlation based inter-plane prediction encoding and decoding
US20140003799A1 (en) * 2012-06-30 2014-01-02 Divx, Llc Systems and methods for decoding a video sequence encoded using predictions that include references to frames in reference segments from different video sequences
US10452715B2 (en) 2012-06-30 2019-10-22 Divx, Llc Systems and methods for compressing geotagged video
RU2608354C2 (en) * 2012-07-02 2017-01-18 Samsung Electronics Co., Ltd. Method and apparatus for encoding video and method and apparatus for decoding video determining inter-prediction reference picture list depending on block size
US9774856B1 (en) 2012-07-02 2017-09-26 Google Inc. Adaptive stochastic entropy coding
RU2510944C2 (en) * 2012-07-03 2014-04-10 Samsung Electronics Co., Ltd. Method of encoding/decoding multi-view video sequence based on adaptive local adjustment of brightness of key frames without transmitting additional parameters (versions)
JP5885604B2 (en) 2012-07-06 2016-03-15 NTT Docomo, Inc. Moving picture predictive coding apparatus, moving picture predictive coding method, moving picture predictive coding program, moving picture predictive decoding apparatus, moving picture predictive decoding method, and moving picture predictive decoding program
EP2854393A4 (en) * 2012-07-11 2015-12-30 Lg Electronics Inc Method and apparatus for processing video signal
US9344729B1 (en) 2012-07-11 2016-05-17 Google Inc. Selective prediction signal filtering
CN107454426A (en) * 2012-07-27 2017-12-08 HFI Innovation Inc. 3D video encoding or decoding method
US9167268B1 (en) 2012-08-09 2015-10-20 Google Inc. Second-order orthogonal spatial intra prediction
US9332276B1 (en) 2012-08-09 2016-05-03 Google Inc. Variable-sized super block based direct prediction mode
US9380298B1 (en) 2012-08-10 2016-06-28 Google Inc. Object-based intra-prediction
US9344742B2 (en) 2012-08-10 2016-05-17 Google Inc. Transform-domain intra prediction
AU2013305370B2 (en) * 2012-08-23 2016-07-07 Mediatek Inc. Method and apparatus of interlayer texture prediction
US20140079116A1 (en) * 2012-09-20 2014-03-20 Qualcomm Incorporated Indication of interlaced video data for video coding
US9554146B2 (en) 2012-09-21 2017-01-24 Qualcomm Incorporated Indication and activation of parameter sets for video coding
JP6074509B2 (en) 2012-09-29 2017-02-01 Huawei Technologies Co., Ltd. Video encoding and decoding method, apparatus and system
US9723321B2 (en) 2012-10-08 2017-08-01 Samsung Electronics Co., Ltd. Method and apparatus for coding video stream according to inter-layer prediction of multi-view video, and method and apparatus for decoding video stream according to inter-layer prediction of multi-view video
US9369732B2 (en) 2012-10-08 2016-06-14 Google Inc. Lossless intra-prediction video coding
TW201415898A (en) * 2012-10-09 2014-04-16 Sony Corp Image-processing device and method
US9774927B2 (en) * 2012-12-21 2017-09-26 Telefonaktiebolaget L M Ericsson (Publ) Multi-layer video stream decoding
WO2014103606A1 (en) * 2012-12-26 2014-07-03 Sharp Kabushiki Kaisha Image decoding device
US9628790B1 (en) 2013-01-03 2017-04-18 Google Inc. Adaptive composite intra prediction for image and video compression
US9509998B1 (en) 2013-04-04 2016-11-29 Google Inc. Conditional predictive multi-symbol run-length coding
CN104104958B (en) * 2013-04-08 2017-08-25 MediaTek Singapore Pte. Ltd. Picture decoding method and picture decoding apparatus
US9930363B2 (en) * 2013-04-12 2018-03-27 Nokia Technologies Oy Harmonized inter-view and view synthesis prediction for 3D video coding
KR102105323B1 (en) * 2013-04-15 2020-04-28 Intellectual Discovery Co., Ltd. A method for adaptive illuminance compensation based on object and an apparatus using it
WO2014203726A1 (en) * 2013-06-18 2014-12-24 Sharp Kabushiki Kaisha Illumination compensation device, LM prediction device, image decoding device, image coding device
US10284858B2 (en) * 2013-10-15 2019-05-07 Qualcomm Incorporated Support of multi-mode extraction for multi-layer video codecs
US9392288B2 (en) 2013-10-17 2016-07-12 Google Inc. Video coding using scatter-based scan tables
US9179151B2 (en) 2013-10-18 2015-11-03 Google Inc. Spatial proximity context entropy coding
FR3014278A1 (en) * 2013-11-29 2015-06-05 Orange Image encoding and decoding method, image encoding and decoding device and corresponding computer programs
US10554967B2 (en) * 2014-03-21 2020-02-04 Futurewei Technologies, Inc. Illumination compensation (IC) refinement based on positional pairings among pixels
US10102613B2 (en) 2014-09-25 2018-10-16 Google Llc Frequency-domain denoising
WO2016070363A1 (en) * 2014-11-05 2016-05-12 Mediatek Singapore Pte. Ltd. Merge with inter prediction offset
US9871967B2 (en) * 2015-01-22 2018-01-16 Huddly As Video transmission based on independently encoded background updates
US10356416B2 (en) 2015-06-09 2019-07-16 Qualcomm Incorporated Systems and methods of determining illumination compensation status for video coding
US10887597B2 (en) * 2015-06-09 2021-01-05 Qualcomm Incorporated Systems and methods of determining illumination compensation parameters for video coding
PL412844A1 (en) 2015-06-25 2017-01-02 Politechnika Poznańska System and method of coding of the exposed area in the multi-video sequence data stream
US10375413B2 (en) * 2015-09-28 2019-08-06 Qualcomm Incorporated Bi-directional optical flow for video coding
US10148989B2 (en) 2016-06-15 2018-12-04 Divx, Llc Systems and methods for encoding video content
KR102147447B1 (en) * 2016-09-22 2020-08-24 LG Electronics Inc. Inter prediction method and apparatus in video coding system
US20190200021A1 (en) * 2016-09-22 2019-06-27 Lg Electronics Inc. Illumination compensation-based inter-prediction method and apparatus in image coding system
US10742979B2 (en) * 2016-12-21 2020-08-11 Arris Enterprises Llc Nonlinear local activity for adaptive quantization
KR20180074000A (en) * 2016-12-23 2018-07-03 Samsung Electronics Co., Ltd. Method of decoding video data, video decoder performing the same, method of encoding video data, and video encoder performing the same
JP7248664B2 (en) 2017-10-05 2023-03-29 インターデジタル ヴイシー ホールディングス, インコーポレイテッド Method and apparatus for adaptive illumination compensation in video encoding and decoding
EP3468194A1 (en) * 2017-10-05 2019-04-10 Thomson Licensing Decoupled mode inference and prediction
EP3468198A1 (en) * 2017-10-05 2019-04-10 Thomson Licensing Method and apparatus for video encoding and decoding based on illumination compensation
US10652571B2 (en) * 2018-01-25 2020-05-12 Qualcomm Incorporated Advanced motion vector prediction speedups for video coding
US10958928B2 (en) * 2018-04-10 2021-03-23 Qualcomm Incorporated Decoder-side motion vector derivation for video coding
AU2018423422B2 (en) * 2018-05-16 2023-02-02 Huawei Technologies Co., Ltd. Video coding method and apparatus
MX2021000192A (en) * 2018-07-06 2021-05-31 Mitsubishi Electric Corp Bi-prediction with adaptive weights.
CN111263147B (en) 2018-12-03 2023-02-14 Huawei Technologies Co., Ltd. Inter-frame prediction method and related device
CN111726598B (en) * 2019-03-19 2022-09-16 Zhejiang University Image processing method and device
CN110139112B (en) * 2019-04-29 2022-04-05 Jinan University Video coding method based on JND model
KR20210066282A (en) 2019-11-28 2021-06-07 Samsung Electronics Co., Ltd. Display apparatus and control method for the same
WO2021108913A1 (en) * 2019-12-04 2021-06-10 Studio Thinkwell Montréal Inc. Video system, method for calibrating the video system and method for capturing an image using the video system
KR102475334B1 (en) * 2020-01-13 2022-12-07 Electronics and Telecommunications Research Institute Video encoding/decoding method and apparatus
US11412256B2 (en) 2020-04-08 2022-08-09 Tencent America LLC Method and apparatus for video coding
US20230024288A1 (en) * 2021-07-13 2023-01-26 Tencent America LLC Feature-based multi-view representation and coding

Family Cites Families (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0698312A (en) * 1992-09-16 1994-04-08 Fujitsu Ltd High efficiency picture coding system
IL112795A (en) 1994-03-04 2001-01-28 Astrazeneca Ab Peptide derivatives as antithrombic agents their preparation and pharmaceutical compositions containing them
KR20000064585A (en) 1997-01-13 2000-11-06 J.G.A. Rolfes Method and apparatus for inserting auxiliary data into digital video signal
JPH11252552A (en) 1998-03-05 1999-09-17 Sony Corp Compression coding method and compression coder for video signal, and multiplexing method and multiplexer for compression coded data
US6167084A (en) 1998-08-27 2000-12-26 Motorola, Inc. Dynamic bit allocation for statistical multiplexing of compressed and uncompressed digital video signals
KR100795255B1 (en) * 2000-04-21 2008-01-15 Sony Corporation Information processing apparatus and method, program, and recorded medium
KR100375708B1 (en) * 2000-10-28 2003-03-15 Korea Electronics Technology Institute 3D Stereoscopic Multiview Video System and Manufacturing Method
KR100397511B1 (en) * 2001-11-21 2003-09-13 Electronics and Telecommunications Research Institute The processing system and its method for the stereoscopic/multiview video
US20040190615A1 (en) 2002-05-22 2004-09-30 Kiyofumi Abe Moving image encoding method, moving image decoding method, and data recording medium
AU2003251484B2 (en) 2002-06-12 2009-06-04 The Coca-Cola Company Beverages containing plant sterols
EP1530370A4 (en) 2002-06-20 2008-12-03 Sony Corp Decoding device and decoding method
WO2004002141A1 (en) 2002-06-20 2003-12-31 Sony Corporation Decoding apparatus and decoding method
KR20040001354A (en) 2002-06-27 2004-01-07 KT Corporation Method for Wireless LAN Service in Wide Area
KR100475060B1 (en) 2002-08-07 2005-03-10 Electronics and Telecommunications Research Institute The multiplexing method and its device according to user's request for multi-view 3D video
KR100751422B1 (en) 2002-12-27 2007-08-23 Electronics and Telecommunications Research Institute A Method of Coding and Decoding Stereoscopic Video and An Apparatus for Coding and Decoding the Same
US7489342B2 (en) 2004-12-17 2009-02-10 Mitsubishi Electric Research Laboratories, Inc. Method and system for managing reference pictures in multiview videos
US7286689B2 (en) * 2003-06-07 2007-10-23 Hewlett-Packard Development Company, L.P. Motion estimation for compression of calibrated multi-view image sequences
JP2007519273A (en) 2003-06-30 2007-07-12 Koninklijke Philips Electronics N.V. System and method for video processing using overcomplete wavelet coding and cyclic prediction mapping
US7778328B2 (en) 2003-08-07 2010-08-17 Sony Corporation Semantics-based motion estimation for multi-view video coding
CN1212014C (en) 2003-08-18 2005-07-20 Beijing University of Technology Video coding method based on spatio-temporal correlation for fast motion estimation
US7613344B2 (en) 2003-12-08 2009-11-03 Electronics And Telecommunications Research Institute System and method for encoding and decoding an image using bitstream map and recording medium thereof
KR100987775B1 (en) 2004-01-20 2010-10-13 Samsung Electronics Co., Ltd. 3-Dimensional coding method of video
KR100679740B1 (en) * 2004-06-25 2007-02-07 Yonsei University Method for Coding/Decoding for Multiview Sequence where View Selection is Possible
US7444664B2 (en) * 2004-07-27 2008-10-28 Microsoft Corp. Multi-view video format
US7671893B2 (en) 2004-07-27 2010-03-02 Microsoft Corp. System and method for interactive multi-view video
KR100584603B1 (en) 2004-08-03 2006-05-30 Daeyang Foundation Direct mode motion prediction method and apparatus for multi-view video
US7924923B2 (en) 2004-11-30 2011-04-12 Humax Co., Ltd. Motion estimation and compensation method and device adaptive to change in illumination
EP2538675A1 (en) 2004-12-10 2012-12-26 Electronics and Telecommunications Research Institute Apparatus for universal coding for multi-view video
US7728878B2 (en) 2004-12-17 2010-06-01 Mitsubishi Electric Research Laboratories, Inc. Method and system for processing multiview videos for view synthesis using side information
US7710462B2 (en) 2004-12-17 2010-05-04 Mitsubishi Electric Research Laboratories, Inc. Method for randomly accessing multiview videos
US7468745B2 (en) 2004-12-17 2008-12-23 Mitsubishi Electric Research Laboratories, Inc. Multiview video decomposition and encoding
US8644386B2 (en) * 2005-09-22 2014-02-04 Samsung Electronics Co., Ltd. Method of estimating disparity vector, and method and apparatus for encoding and decoding multi-view moving picture using the disparity vector estimation method
KR101276720B1 (en) * 2005-09-29 2013-06-19 Samsung Electronics Co., Ltd. Method for predicting disparity vector using camera parameter, apparatus for encoding and decoding multi-view image using method thereof, and a recording medium having a program to implement thereof
EP1946563A2 (en) * 2005-10-19 2008-07-23 Thomson Licensing Multi-view video coding using scalable video coding
ZA200805337B (en) 2006-01-09 2009-11-25 Thomson Licensing Method and apparatus for providing reduced resolution update mode for multiview video coding
CN101375593A (en) 2006-01-12 2009-02-25 LG Electronics Inc. Processing multiview video
KR100943912B1 (en) * 2006-01-12 2010-03-03 LG Electronics Inc. Method and apparatus for processing multiview video
ES2636917T3 (en) 2006-03-30 2017-10-10 Lg Electronics, Inc. A method and apparatus for decoding / encoding a video signal
CN101529921B (en) * 2006-10-18 2013-07-03 汤姆森特许公司 Local illumination and color compensation without explicit signaling
US20100118942A1 (en) * 2007-06-28 2010-05-13 Thomson Licensing Methods and apparatus at an encoder and decoder for supporting single loop decoding of multi-view coded video
US8665958B2 (en) * 2008-01-29 2014-03-04 Electronics And Telecommunications Research Institute Method and apparatus for encoding and decoding video signal using motion compensation based on affine transformation
US8130277B2 (en) * 2008-02-20 2012-03-06 Aricent Group Method and system for intelligent and efficient camera motion estimation for video stabilization

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6055012A (en) * 1995-12-29 2000-04-25 Lucent Technologies Inc. Digital multi-view video compression with complexity and compatibility constraints
US20030202592A1 (en) * 2002-04-20 2003-10-30 Sohn Kwang Hoon Apparatus for encoding a multi-view moving picture

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
JENS-RAINER OHM: "Stereo/Multiview Video Encoding Using the MPEG Family of Standards", IS&T/SPIE CONFERENCE ON STEREOSCOPIC DISPLAYS IN SAN JOSE, January 1999 (1999-01-01)
JOAQUÍN LÓPEZ; JAE HOON KIM; ANTONIO ORTEGA; GEORGE CHEN: "Block-based Illumination Compensation and Search Techniques for Multiview Video Coding", PICTURE CODING SYMPOSIUM 2004 IN SAN FRANCISCO, 17 December 2004 (2004-12-17)
See also references of EP1977593A4

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011515940A (en) * 2008-03-18 2011-05-19 Samsung Electronics Co., Ltd. Video encoding and decoding method and apparatus
CN104053003B (en) * 2008-07-02 2018-01-26 Samsung Electronics Co., Ltd. Image encoding method and device, and decoding method and device thereof
JP2013505615A (en) * 2009-09-17 2013-02-14 Mitsubishi Electric R&D Centre Europe B.V. Weighted motion compensation for video
US10547866B2 (en) 2012-03-02 2020-01-28 Sun Patent Trust Image encoding method, image decoding method, image encoding apparatus, image decoding apparatus, and image coding apparatus
US11109063B2 (en) 2012-03-02 2021-08-31 Sun Patent Trust Image encoding method, image decoding method, image encoding apparatus, image decoding apparatus, and image coding apparatus
US9363432B2 (en) 2012-06-11 2016-06-07 Canon Kabushiki Kaisha Image processing apparatus and image processing method
WO2020018524A1 (en) * 2018-07-17 2020-01-23 Qualcomm Incorporated Block-based adaptive loop filter design and signaling
US11140418B2 (en) 2018-07-17 2021-10-05 Qualcomm Incorporated Block-based adaptive loop filter design and signaling
EP4091323A4 (en) * 2020-01-14 2024-02-14 Tencent America LLC Method and apparatus for video coding

Also Published As

Publication number Publication date
US7817865B2 (en) 2010-10-19
KR20090099589A (en) 2009-09-22
US7817866B2 (en) 2010-10-19
US20120121015A1 (en) 2012-05-17
KR100943912B1 (en) 2010-03-03
EP1982518A1 (en) 2008-10-22
JP5199124B2 (en) 2013-05-15
KR100943914B1 (en) 2010-03-03
KR20090099588A (en) 2009-09-22
US20070177811A1 (en) 2007-08-02
JP2009523355A (en) 2009-06-18
KR100943915B1 (en) 2010-03-03
US8115804B2 (en) 2012-02-14
US20070177672A1 (en) 2007-08-02
US20070177810A1 (en) 2007-08-02
KR100947234B1 (en) 2010-03-12
EP1982518A4 (en) 2010-06-16
KR20080094047A (en) 2008-10-22
KR20090099098A (en) 2009-09-21
KR100934677B1 (en) 2009-12-31
JP5192393B2 (en) 2013-05-08
US20070177674A1 (en) 2007-08-02
EP1982517A1 (en) 2008-10-22
JP5199123B2 (en) 2013-05-15
US8154585B2 (en) 2012-04-10
KR100953646B1 (en) 2010-04-21
US20070177813A1 (en) 2007-08-02
WO2007081178A1 (en) 2007-07-19
US20070177812A1 (en) 2007-08-02
JP2009536793A (en) 2009-10-15
DE202007019463U8 (en) 2013-03-21
JP2009523356A (en) 2009-06-18
KR100943913B1 (en) 2010-03-03
US7970221B2 (en) 2011-06-28
DE202007019463U1 (en) 2012-10-09
KR20090099097A (en) 2009-09-21
US7831102B2 (en) 2010-11-09
US20070177673A1 (en) 2007-08-02
EP1977593A4 (en) 2010-06-16
KR20080094046A (en) 2008-10-22
KR20090099590A (en) 2009-09-22
EP1982517A4 (en) 2010-06-16
US20090310676A1 (en) 2009-12-17
KR20090099591A (en) 2009-09-22
US7856148B2 (en) 2010-12-21
US8553073B2 (en) 2013-10-08
KR100934676B1 (en) 2009-12-31
EP1977593A1 (en) 2008-10-08
WO2007081176A1 (en) 2007-07-19

Similar Documents

Publication Publication Date Title
WO2007081177A1 (en) Processing multiview video
US20070177671A1 (en) Processing multiview video
JP5021739B2 (en) Signal processing method and apparatus
JP2010525724A (en) Method and apparatus for decoding/encoding a video signal
USRE44680E1 (en) Processing multiview video

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application
WWE Wipo information: entry into national phase (Ref document number: 2008550242; Country of ref document: JP)
WWE Wipo information: entry into national phase (Ref document number: 200780003112.0; Country of ref document: CN)
NENP Non-entry into the national phase (Ref country code: DE)
WWE Wipo information: entry into national phase (Ref document number: 2007700953; Country of ref document: EP)
WWE Wipo information: entry into national phase (Ref document number: 1020087019746; Country of ref document: KR)
WWE Wipo information: entry into national phase (Ref document numbers: 1020097017206 and 1020097017207; Country of ref document: KR)