WO1996041474A1 - System for time-varying selection and arrangement of data points for processing NTSC-compatible HDTV signals - Google Patents

System for time-varying selection and arrangement of data points for processing NTSC-compatible HDTV signals

Info

Publication number
WO1996041474A1
Authority
WO
WIPO (PCT)
Prior art keywords
col
row
dataout
points
data points
Prior art date
Application number
PCT/US1996/009815
Other languages
French (fr)
Inventor
David M. Geshwind
Original Assignee
Geshwind David M
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Geshwind David M filed Critical Geshwind David M
Priority to JP9502028A priority Critical patent/JPH11507484A/en
Priority to EP96919297A priority patent/EP0847648A4/en
Priority to AU61668/96A priority patent/AU6166896A/en
Publication of WO1996041474A1 publication Critical patent/WO1996041474A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0135Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes
    • H04N7/014Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving interpolation processes involving the use of motion vectors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/59Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/01Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
    • H04N7/0125Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards being a high definition standard
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/12Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
    • H04N7/122Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal involving expansion and subsequent compression of a signal segment, e.g. a frame, a line
    • H04N7/125Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal involving expansion and subsequent compression of a signal segment, e.g. a frame, a line the signal segment being a picture element

Definitions

  • Figure 29 is a flow chart of the generalized process of the instant invention, as described herein. Process steps are represented in the figure by boxes, information set inputs to and outputs from the processes are represented by lines.
  • the initial input to the process is the maximum definition source (1) which may be an HDTV signal, an audio signal, a motion picture film, a "real scene" which is to be scanned by a television camera, or a computer database from which a synthetic television signal will be generated.
  • a given frame of source information (e.g., an HDTV field; a "chunk”, say one second, of audio signal; a film frame; or a real or synthetic television frame) may be sampled at an arbitrarily high definition or resolution.
  • the first process step (2) is to select, from all possible samples which might be derived from the source signal, a subset of points which may potentially be sampled (3).
  • the second process step (4) is to select from the potential sample points (3) a subset of active points (5) which will actually be sampled.
  • Figure 30 depicts two main ways that the variable STS scheme may be implemented.
  • high-definition signal (20) is encoded by the variable STS encoder of the instant invention (21), transmitted over a low-definition channel (22) and then displayed directly on a television monitor (23) which may be of either low- or high-definition.
  • the lower system diagram is identical except that, prior to display on the monitor, the encoded signal is decoded by a companion variable STS decoder (24) which may, in practice, be incorporated into the monitor (23).
  • Figure 31 consists of two system diagrams of a way to implement a companion pair of a variable STS encoder (above) and decoder (below).
  • the encoder consists of a standard television camera (30) which is not shown in detail except for the subsystem (31) which controls the deflection of the scanning beam across the imaging element.
  • the non-standard element which implements the instant invention is the variable STS offset generator (32) which generates a "wobble" or perturbation signal which in turn modulates the standard repeated raster pattern of the deflection of the scanning beam.
  • Figure 32 consists of a system diagram of the equivalent encoder employing a newer type standard video camera with a digital CCD imaging element.
  • the encoder consists of a standard CCD television camera (36) which is not shown in detail except for the subsystem (37) which controls the generation of the addressing signals used to access the pixels of the CCD imaging element.
  • the non-standard element which implements the instant invention is the variable STS offset generator (38) which generates a "wobble" signal which in turn modulates the standard repeated raster pattern of the addressing of the imaging element.
  • Figure 33 consists of two block system diagrams of a more complex way to implement a companion pair of a variable STS encoder (above) and decoder (below).
  • the digital HD signal is then input to the input memory buffer (33).
  • the choice of the addresses at which the individual signal samples are stored is made by the input address generator (34) which is synchronized by the sync detector (31) which extracts timing information from the HD signal input (30).
  • the size of the input memory buffer (33) may range from a partial "window" of an input frame (as is common in the design of digital television TBCs — time base correctors), to an entire frame store, to storage for several frames as is described in the section entitled "FRAME STORE TECHNIQUES” , above. It may even be reduced to size zero if the CPU (35) is very fast and/or the encoding scheme is very simple. (The same statements may be applied to the other memory buffers (37, 53 & 57) where appropriate.)
  • the graphics processor (35/36) computes the encoded x'-point values, according to the variable STS encoding schemes or other techniques described herein by the accompanying text and diagrams, and places the results in the output memory buffer (37).
  • the operation of the variable STS decoder (below, 50-61) is virtually identical to that of the encoder (above, 30-41).
  • variable STS decoder is shown in block system form that is largely identical to Figure 33 (bottom), with the addition of pixel modification information element (65).
  • This additional information may be derived from the LD signal input (50) and/or created by graphics processor (55/56); and it may be input to the output address generator (58) from information stored in the input memory buffer (53), information stored in the output memory buffer (57), or information generated or stored in the graphics processor (55/56).
  • the pixel modification information (65) will be used to modify the scanning of data from output memory buffer (57) as controlled by the output address generator (58) and/or to modify the synchronization signal part generated by the sync generator (60) and which is included in the decoded signal output (61). It will be used to modify geometric information about the display of individual pixels, such as the position, size or shape of the pixel as displayed on a television monitor to which the decoded signal output (61) is sent.
  • NTSC frame N contains the "center portion" of HDTV frame N and encoded in its blanking intervals the "side strips" of later (by M frame times) frame N+M.
  • N+M contains the "center portion" of HDTV frame N+M with the encoded "side strips” of HDTV frame N+2M.
  • the encoded information containing the "side strips” of HDTV frame N+M (contained in the blanking intervals of NTSC frame N) is directed to a "slow” decoder (which may take up to M frame times to decode this information back into the visual information representing the HDTV "side strips") and storage unit.
  • M frame times later the standard NTSC image of frame N+M which contains the center section of HDTV frame N+M, is routed by the "quick pass-through" section to a unit which assembles that information with the delayed and stored "side strip” information and outputs a complete HDTV frame N+M at time N+M.
  • the five phases of the (4+1:1) scheme each define a linear correspondence (defined by the associations and weights of each phase of the pattern) and each takes a high-definition image frame and produces a low-definition image frame.
  • under the assumption that the high-definition image is constant for five frames, and that the noise component is zero, at least for certain resolutions, perfect reconstruction is possible.
  • hence, the notion of choosing an optimal solution from among many imperfect alternatives need not be considered, and the implementation becomes a matter of straightforward mathematical analysis. It is then observed that if the high-definition image stream does not vary too much from one frame to the next, the same inversion process can be used, and the errors will be small (because the inverse linear transformation is a bounded operator).
  • the algorithm consists of several steps.
  • a phase number will start off at 0 for the first frame, and increment by one for each succeeding frame. When the phase goes above 4 it gets set back to 0.
  • Each high-definition frame is down-sampled according to Figure 11, into a smaller array which represents the active pixels, (the precise indexing needed to do this is specified below).
  • one of 5 different linear transformations is applied to produce a low-definition image for transmission. More specifically, the transformation is computed by taking each pixel labeled X in Figure 11, and adding together a weighted sum of its value, and the value of several of the O pixels near it, as specified in Figure 9, with weights as specified elsewhere herein.
  • the high-definition image can be treated as a 2-dimensional array. It is down-sampled to a 2-dimensional array of active pixels. The down-sampled active pixel array is re-indexed to a 1-dimensional array (i.e., a vector). The linear transformations are applied simply as matrices to these vectors. The resulting vector is then re-indexed back into a 2-dimensional array resulting in a 2-dimensional, low-definition image for transmission.
  • a stream of incoming low-definition frames are received and stored in a cyclic array of 5 frame buffers. Again, a phase number will start off at 0 for the first frame, and increment by one for each succeeding frame. When the phase goes above 4 it gets set back to 0. Each incoming low-definition frame is thus tagged with a phase number. A store is kept of the current low-definition frame, together with the previous 4 frames (initialized to black before any information comes in, for example). Hence, at each given time, there will be exactly one frame in the store corresponding to each phase.
  • SK SKIP[ (ROW-1) % 5 ] /* arrays are indexed from 0, % is */
  • Figure 9 shows how the O pixels are associated to the X pixels, in each of the 4 phases 1, 2, 3 and 4.
  • each X pixel is associated with only itself.
  • in order to make Figure 11 correspond with Figure 9 it is necessary to rotate Figure 11 counterclockwise by about 28 degrees.
  • the following diagram shows the high-definition pixel array as a 2-D array, with the data from Figure 9 worked in around a particular X pixel (X').
  • the pixels in the 2-dimensional array will be indexed sequentially, starting at 1 in the upper left, going across the first row to N, then beginning again at the left of the second row, and so on, down to the lower right.
  • the pixel in the eighth row from the top, in the sixth column over, will be indexed by 7N+6.
  • the pixel m rows from the top, n columns over, will be indexed (m-1)N+n.
  • the first N values from the 1-dimensional array are put, in order, into the first row of the 2-dimensional array.
  • the next N values are put in the second row.
  • VH: 1-dimensionalized high-definition vector
  • A: stacked composite matrix
  • the last five low-definition frames received can be 1-dimensionalized to produce this same (VL5) data structure.
  • the high-definition monitor will display a high-definition frame with information computed by multiplying the inverse matrix by a sequenced composite of the 1-dimensionalized versions of the last five low-definition frames. This will yield high-definition data points of the number and for the positions of the "active" data points as shown above or in Figure 11. These data points may be displayed as is, or information for additional display points corresponding to the "inactive" data points (•) may be calculated by bilinear interpolation.
  • the full decoding procedure is shown here and in Figure 50. It includes optional calculation of pixels corresponding to the "inactive" positions in Figure 11. It keeps 5 arrays of low-definition pixels in a recirculating frame store (see Figure 51) where the oldest incoming frame is replaced by the newest.
  • the first approach is to just use the last five frames so that one of the phases will have two representative frames: the oldest and the most recent. That older version of the current phase can then be substituted for the X-only phase 0 frame data. (Alternately, the newer version of the current phase can be substituted for the X-only phase 0 frame data.)
  • a second approach uses only four frames, one each of phase 1, 2, 3 & 4, and then uses an average (or weighted average) of those four data sets as a substitute for the data set of the expected X-only or 0 phase data.
  • the weights may, for example, weight the oldest (or the newest) frames more heavily.
  • whichever phase (or combination) is used to substitute for the X-only, phase 0 data, its deviation from X-only will contribute to an error or deviation from perfect reconstruction.
  • the calculations will have to be carried out five times, with a different set of X-only data each time. However, only one fifth of the calculations will have to be carried out each time so, in total, the amount of calculation is about the same, although some overhead of calculating and reloading the X-only data is incurred.
  • the second practical consideration is that the high-definition image will not be static. However, generally, with television images, large amounts of the image for large amounts of time change little. Thus, for those areas, the reconstruction mechanism described thus far will be effective.
  • a motion map may be derived for any current high-definition frame by comparing it with its predecessor prior to transmission.
  • the map may be bi-variate; that is, two valued — moving or not.
  • the map may have (for the current application) values from 0 (moving) to S (still for S frames) where S is 4 or 5 depending upon whether a 4 or 5 frame cycle is being used.
  • Such a 2 (or 4 or 5) valued image can be compressed by many well-known methods into a very small data file. Further, although two high-definition frames are compared to create the map, a grosser resolution version can be used, reducing the file size much further. Generally this need be only at the resolution of the low-definition transmitted frames, at most. A technique described earlier (as a method for sending highly compressed versions of "side strips") will be used to transmit this compressed motion map. That is, it will be pre-computed and transmitted in the blanking interval of an earlier frame.
  • the motion map for a given frame will arrive well before the visual data to which it refers and it can be used to determine which technique (reconstruction, interpolation, or some combination) will be used for each area of the high-definition image.
  • a decision may be made for each area (as small as the resolution of the motion map will permit) whether to display the reconstructed image for that area or just the present data with fill-in or interpolation filtering. See Figure 52.
  • Both types of image data will be calculated (and perhaps held in two separate frame stores) so the choice may be made for each area displayed by consultation of the motion map held in a third buffer.
  • the two images (reconstructed and interpolated) may be "mixed" or cross-dissolved between rather than just selected between. Such is a standard practice called soft-edge keying or matting.
  • the encoding may be done off-line (i.e., not in real time) and may then employ a low-powered general purpose processor, such as an IBM-PC, for computational purposes, suitably outfitted with one or two frame stores. If the frame store is of such a type as may be programmed to display and digitize at different resolutions, then a single one may be used. Alternately, and more conveniently, a high-resolution digitizing frame store may be used in addition to a low-resolution display. This second approach will be assumed in the following discussion. Under computer control, a source tape player will synchronize with the frame store to permit digitization of individual frames.
  • a low-powered general purpose processor such as an IBM-PC
  • a destination tape recorder is synchronized to the output frame store for the single-frame collection of the processed frames.
  • data may be written to the low-definition frame store by a subroutine PIXELOUT(x, y, rgb) where x represents a column, y a row and rgb a pointer to a three integer array where the integers represent red, green and blue values of the pixel respectively.
  • the following computer subroutine, written in "C", SQUEEZE(P), where p is the phase, would be run on the host computer to effect the conversion of each frame.
  • An overall control program would coordinate this subroutine with others that control the videotape machine interfaces. See Figure 53 for a system diagram of this embodiment.
  • This "C” code fragment implements the (4+1 ):1 encoding pattern as shown in Figures 6 through 11.
  • squeeze(p) is the call to the code fragment where p is the phase of the pattern, 1 through 4
  • PIXELIN will access, as input, the pixels of a high-resolution image that is 2,000 by 2,000 pixels, with upper-left origin of (0,0).
  • pixelin(x, y), where x represents a column and y a row, returns a pointer to a three integer array where the integers represent red, green and blue values of the pixel respectively.
  • rgb[k] = *(rgbx+k) * wx + *(rgba+k) * wa + *(rgbb+k) * wb
  • PIP-512, PIP-1024 and PIP-EZ (software); PG-640 & PG-1280; MVP-AT & Imager-AT (software), all for the IBM-PC/AT, from Matrox Electronic Systems, Ltd. Que., Canada.
  • TARGA (several models) with software utilities
  • AT-Vista (with software available from the manufacturer and Texas Instruments, manufacturer of the TMS34010 onboard Graphics System Processor chip), for the IBM-PC/AT, from AT&T EPICenter/Truevision, Inc., Indiana.
  • the low-end Pepper Series and high-end Pepper Pro Series of boards (with NNIOS software, and including the Texas Instruments TMS34010 onboard Graphics System Processor chip) from Number Nine Computer Corporation, Massachusetts.
  • FGS-4000 and FGS-4500 high-resolution imaging systems from Broadcast Television Systems, Utah.
  • 911 Graphics Engine (and 911 Software Library that runs on an IBM-PC/AT connected by an interface cord) from Megatek Corporation, California.
  • Advanced Graphics Chip Set (including the RBG, BPU, VCG and VSR) from National Semiconductor Corporation, California.
  • TMS34010 Graphics System Processor (with available Software Development Board, Assembly Language Tools, "C” Cross-Compiler and other software) from Texas Instruments, Texas.
  • for a discussion of microsaccades see, for example: Movements of the Eyes, R. H. S. Carpenter, Pion, London.
  • This computer program implements a (4+1):1 sampling pattern, as described in figures 6 through 11 of the patent application.
  • the coordinate directions are called h and v, for horizontal and vertical.
  • this table gives the offsets of the (o) -type pixels associated with each (x) -type pixel, and the weights they are to be given when computing the output pixel.
  • stsresample computes a compressed image for a single frame, using an STS resampling scheme. (A minimal sketch of such a routine appears at the end of this section.)
  • the parameter 'phase' indicates which phase of the STS is in effect. stsresample calls two routines not defined here:
  • GetInputPixel(h, v) to get pixel values from the high-definition input image
  • PutOutputPixel to set pixel values in the compressed output image.
  • PutOutputPixel(h, v, Pixel/DENOM);
  • Typical examples include:
  • a sample A matrix was created (with a factor of 1/16 omitted) and the sample B (A⁻¹) matrix computed, both in sparse form, as demonstration examples; the images they handle being too small to be practical.
  • each of the A(0), A(1), A(2), A(3) and A(4) matrices is 9 x 45 and the composite A matrix is 45 x 45, as is its inverse.
  • the 45 active points are 1-dimensionalized into a vector of length 45 and the output is a vector of length 9 which is 2-dimensionalized into a 3 x 3 "low-definition" image.
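
The following is a minimal, illustrative "C" sketch of such an stsresample-style routine, not the BW.C listing itself. It assumes the GetInputPixel(h, v) and PutOutputPixel(h, v, value) helpers described above; the offset table is expressed in the horizontally compressed active-pixel coordinates of the diagrams in the description (x-rows every fifth row), the output row mapping (v/5) is one possible choice, and the periodic edge wrap-around is omitted for brevity.

    #define DENOM 16L   /* X + A + B + C + D = 6 + 4 + 3 + 2 + 1 */

    extern long GetInputPixel(int h, int v);             /* assumed helper */
    extern void PutOutputPixel(int h, int v, long val);  /* assumed helper */

    struct assoc { int dh, dv; long w; };

    /* Offsets and weights of the four o-pixels tied to each x-pixel, per
       phase 1..4, in compressed coordinates; phase 0 uses the x-pixel alone. */
    static const struct assoc tab[4][4] = {
        { {0, 1, 4}, {-1, 1, 3}, {0, -4, 2}, {-1, -4, 1} },   /* A1 B1 C1 D1 */
        { {0, 2, 4}, {0, -3, 3}, {1, 2, 2},  {1, -3, 1}  },   /* A2 B2 C2 D2 */
        { {0, -1, 4}, {1, -1, 3}, {0, 4, 2}, {1, 4, 1}   },   /* A3 B3 C3 D3 */
        { {0, -2, 4}, {0, 3, 3}, {-1, -2, 2}, {-1, 3, 1} }    /* A4 B4 C4 D4 */
    };

    void stsresample(int width, int height, int phase)
    {
        int h, v, k;
        for (v = 2; v < height; v += 5)          /* x-rows: rows 2, 7, 12, ... */
            for (h = 0; h < width; h++) {
                long sum;
                if (phase == 0)
                    sum = 16L * GetInputPixel(h, v);      /* x-pixel alone */
                else {
                    sum = 6L * GetInputPixel(h, v);       /* weight X = 6  */
                    for (k = 0; k < 4; k++) {
                        const struct assoc *a = &tab[phase - 1][k];
                        sum += a->w * GetInputPixel(h + a->dh, v + a->dv);
                    }
                }
                PutOutputPixel(h, v / 5, sum / DENOM);
            }
    }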

Abstract

Processes for selecting, manipulating and arranging data points or pixels derived from information bearing signals are useful to reduce the bandwidth of, or improve the perceived quality of, such a signal as transmitted and displayed. The techniques utilize time-varying sampling schemes and take into account the characteristics of the human visual system. For each information frame, a subset (3) of all possible data points (1) is selected (2). A further subset of active data points (5) is selected (4) for which data will actually be sampled. The active points (5) are further divided (6) into points for which a value will be transmitted (x-points) (7) and points which will be sampled but for which no separate value will be transmitted (o-points) (8). A mathematical association between the x-points and o-points is made (9) and new values to be transmitted are calculated for the x-points (10). The parameters of the selection and association processes are varied in a non-trivial manner and the, now modified, cycle repeated (11) for subsequent data frames. In particular, the techniques may be used to process a high-definition television signal prior to its storage, or transmission over a low-bandwidth channel.

Description

SYSTEM FOR TIME-VARYING SELECTION AND ARRANGEMENT OF DATA POINTS FOR PROCESSING NTSC-COMPATIBLE HDTV SIGNALS
TECHNICAL FIELD
The instant invention relates to a class of methods for selecting subsets of information, such as that derived by digitizing a high-definition television signal, and arranging that reduced amount of information into a second set of information which can then be, for example, converted to a low-definition signal which can then be transmitted over channels intended for low-definition signals, with minimal perceivable artifacts.
The instant application is a continuation-in-part of applicant's application 07/077,916 filed July 27, 1987, PCT Application No. PCT/US88/02542, published as WO 89/01270.
The instant application is also a continuation-in-part of applicant's application 08/110,230 filed August 23, 1993, which is a continuation-in-part of 07/435,487 (filed August 17, 1989), which was both: a continuation-in-part of application 07/227,403 filed December 17, 1986 (issued May 15, 1990 as US Patent 4,925,294); and, a continuation-in-part of application 07/006,291 filed January 20, 1987 (issued September 24, 1991 as US Patent 5,050,894) which was a continuation of 06/601,091 (filed April 20, 1984, now abandoned) which was a continuation-in-part of application 06/492,816 filed May 9, 1983 (issued August 19, 1986 as US Patent 4,606,625).
The instant application is also a continuation-in-part of applicant's application 07/951,267 filed September 25, 1992 which is a continuation-in-part of the above referenced application 07/435,487 and also of the above referenced application 07/077,916.
All of these documents (except for those abandoned) are hereby incorporated in their entirety.
FURTHER DETAILS OF IMPLEMENTATION: Figure 29 is a flow chart of the generalized process of the instant invention, as described herein. Process steps are represented in the figure by boxes; information set inputs to and outputs from the processes are represented by lines.
The initial input to the process is the maximum definition source (1) which may be an HDTV signal, an audio signal, a motion picture film, a "real scene" which is to be scanned by a television camera, or a computer database from which a synthetic television signal will be generated.
A given frame of source information (e.g., an HDTV field; a "chunk", say one second, of audio signal; a film frame; or a real or synthetic television frame) may be sampled at an arbitrarily high definition or resolution.
The first process step (2) is to select, from all possible samples which might be derived from the source signal, a subset of points which may potentially be sampled (3). The second process step (4) is to select from the potential sample points (3) a subset of active points (5) which will actually be sampled.
The third process step (6) is to select from the active points (5) two further subsets: the points for which data will be transmitted, the x-points (7); and the points which will be sampled but for which data will not be separately transmitted, the o-points (8). The fourth process step (9) is to create mathematical associations between the x-points and o-points. This results in the encoded values that will be transmitted for the x-points (denoted here as x'-points — read x-prime-points) (10) which will each, in general, be a combination of the sample value of an x-point and the sample values of some subset of the o-points. The last process step (11) is to vary the parameters of the four previous processes in a non-trivial manner — the variable STS — as described herein; and then to repeat the cycle with the next frame of source information.
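By way of illustration only, the following self-contained "C" toy walks a 1-dimensional "frame" of samples through the five steps above. Every selection rule in it (even samples as potential points, every fifth active point as an x-point, a plain neighbor average as the association) is an arbitrary placeholder chosen to make the cycle concrete; none of these rules is part of the disclosed schemes.

    #include <stdio.h>

    #define SRC 20

    int main(void)
    {
        double src[SRC];
        int frame, phase, i;

        for (i = 0; i < SRC; i++) src[i] = (double)i;   /* stand-in source */

        for (frame = 0, phase = 0; frame < 3; frame++, phase = (phase + 1) % 5) {
            printf("frame %d (phase %d): x' =", frame, phase);
            /* steps (2)/(4): potential points = even samples; all active  */
            /* step (6): every fifth active point, offset by the phase,
               is an x-point; its neighbors play the role of o-points      */
            for (i = phase; i < SRC / 2; i += 5) {
                /* steps (9)/(10): x' = average of the x-point and its
                   neighboring o-points (placeholder association)          */
                double xp = src[2 * i];
                int n = 1;
                if (i > 0)           { xp += src[2 * (i - 1)]; n++; }
                if (i + 1 < SRC / 2) { xp += src[2 * (i + 1)]; n++; }
                printf(" %.1f", xp / n);
            }
            printf("\n");   /* step (11): the phase, and hence the
                               selection, changes for the next frame       */
        }
        return 0;
    }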
Figure 30 depicts two main ways that the variable STS scheme may be implemented. In the upper system diagram, high-definition signal (20) is encoded by the variable STS encoder of the instant invention (21), transmitted over a low-definition channel (22) and then displayed directly on a television monitor (23) which may be of either low- or high-definition.
The lower system diagram is identical except that, prior to display on the monitor, the encoded signal is decoded by a companion variable STS decoder (24) which may, in practice, be incorporated into the monitor (23).
Figure 31 consists of two system diagrams of a way to implement a companion pair of a variable STS encoder (above) and decoder (below).
The encoder consists of a standard television camera (30) which is not shown in detail except for the subsystem (31) which controls the deflection of the scanning beam across the imaging element. The non-standard element which implements the instant invention is the variable STS offset generator (32) which generates a "wobble" or perturbation signal which in turn modulates the standard repeated raster pattern of the deflection of the scanning beam.
Similarly, the decoder consists of a standard television monitor (40) which is not shown in detail except for the subsystem (41) which controls the deflection of the scanning beam across the picture tube. The non-standard element which implements the instant invention is the variable STS offset generator (42) which generates a "wobble" signal which in turn modulates the standard repeated raster pattern of the deflection of the scanning beam. The offset may be applied uniformly to an entire image frame or may vary over the image frame. The wobble of the monitor raster would be synchronized with the wobble of the camera by information imbedded in the transmitted signal and/or pattern information incorporated into the variable STS offset generator.
Figure 32 consists of a system diagram of the equivalent encoder employing a newer type standard video camera with a digital CCD imaging element. The encoder consists of a standard CCD television camera (36) which is not shown in detail except for the subsystem (37) which controls the generation of the addressing signals used to access the pixels of the CCD imaging element. The non-standard element which implements the instant invention is the variable STS offset generator (38) which generates a "wobble" signal which in turn modulates the standard repeated raster pattern of the addressing of the imaging element.
Figure 33 consists of two block system diagrams of a more complex way to implement a companion pair of a variable STS encoder (above) and decoder (below).
The encoder consists of hardware elements that, taken together, constitute a standard type of computer display device known as a Frame Store or Frame Buffer. For example, elements 31 through 34 would constitute a scanning or digitizing frame buffer. If we extended the list to elements 31 through 36, it would be an "intelligent" scanning frame buffer. Similarly, elements 37 through 40, or 35 through 40, would constitute a display frame buffer or "intelligent" display frame buffer. Elements 31 through 40 would constitute an "intelligent" frame buffer capable of both digitizing and displaying; however, in that case elements 33 and 37 might be combined into a single memory buffer.
Similar comments can be made about the hardware elements of the decoder 51 through 60.
Also note that there are four pairs of elements (33/34, 37/38, 53/54 & 57/58) each consisting of a memory buffer and an address generator. For each pair, the two halves are interconnected and an input/output line communicating with one element may be considered to communicate with both. Similarly, the computer and memory elements 35/36 and 55/56 are also interrelated.
Referring now to the encoder (above), the high-definition signal input (30) may consist of an HDTV camera, computer generated graphic or any other high-definition information source. If the signal is an analog signal, it is converted to digital form by the analog-to-digital converter (ADC) element (32); otherwise, if the signal originates in digital form, this element may be removed. (Similar statements about the optionality of the other converter elements (39, 52 & 59) are implied by this and will not be repeated.)
The digital HD signal is then input to the input memory buffer (33). The choice of the addresses at which the individual signal samples are stored is made by the input address generator (34) which is synchronized by the sync detector (31) which extracts timing information from the HD signal input (30).
Depending upon the complexity of the particular encoding scheme to be implemented and the computational speed of the graphics processor CPU (35), the size of the input memory buffer (33) may range from a partial "window" of an input frame (as is common in the design of digital television TBCs — time base correctors), to an entire frame store, to storage for several frames as is described in the section entitled "FRAME STORE TECHNIQUES" , above. It may even be reduced to size zero if the CPU (35) is very fast and/or the encoding scheme is very simple. (The same statements may be applied to the other memory buffers (37, 53 & 57) where appropriate.)
The graphics processor CPU (35) is a specialized, high-speed microprocessor of which many now exist (for example, the Texas Instruments TMS34010) and which may also be called a DSP, for digital signal processor. It generally will also have an amount of computational or "scratch pad" memory, or registers, associated with it, as indicated by "& MEM" in (35). As is common practice in high speed computing, if additional speed or computing power is required, several CPUs may be employed to operate in tandem as a composite CPU (for example, the systems made by Broadcast Television Systems or Pixar). The graphics processor (35) also has associated with it additional program memory to hold software (36). Working from data input from the input memory buffer (33), the graphics processor (35/36) computes the encoded x'-point values, according to the variable STS encoding schemes or other techniques described herein by the accompanying text and diagrams, and places the results in the output memory buffer (37).
For some implementations either memory buffer (33 or 37 in the encoder, 53 or 57 in the decoder) may be absent, or both functions may be implemented by a common memory buffer. Also, the scheme described in the section entitled "FRAME STORE TECHNIQUES" , above, which employs two separate frame stores may also be implemented as shown by buffers 33 and 37, or 53 and 57.
The data placed in the output memory buffer (37) by the graphics processor (35/36) may then be scanned out, under control of the output address generator (38), converted by the digital-to-analog converter or DAC (39), and output as a low-definition encoded data signal (41). The output signal (41) will also contain synchronization information, generated by the sync generator (40) based on timing information from the output address generator (38).
The combined, data and synchronization, encoded low-definition signal output (41) is then available for transmission, for recording on videotape, videodisc or other media, or for direct undecoded display in certain instances. The operation of the variable STS decoder (below, 50-61) is virtually identical to that of the encoder (above, 30-41) except that: the low-definition signal input (50) is now the encoded low-definition output of the encoder (41) which is available via cable, radio transmission, from a storage medium or any other appropriate means; the graphics processor (55/56) now implements the variable STS decoding and reconstruction schemes or other techniques described herein by the accompanying text and diagrams; and, the output (61) is now a decoded/reconstructed signal which is available for display on a television monitor.
Referring now to Figure 34, a variable STS decoder is shown in block system form that is largely identical to Figure 33 (bottom), with the addition of pixel modification information element (65). This additional information may be derived from the LD signal input (50) and/or created by graphics processor (55/56); and it may be input to the output address generator (58) from information stored in the input memory buffer (53), information stored in the output memory buffer (57), or information generated or stored in the graphics processor (55/56).
The pixel modification information (65) will be used to modify the scanning of data from output memory buffer (57) as controlled by the output address generator (58) and/or to modify the synchronization signal part generated by the sync generator (60) and which is included in the decoded signal output (61). It will be used to modify geometric information about the display of individual pixels, such as the position, size or shape of the pixel as displayed on a television monitor to which the decoded signal output (61) is sent.
Figure 35 shows one example of a data structure which could be stored for each pixel of the input memory buffer (53) or the output memory buffer (57). For each pixel 8 bits of data would be stored representing any of 256 levels of red intensity (70), 8 bits of data representing one of 256 levels of green intensity (71), 8 bits of data representing one of 256 levels of blue intensity (72), and two bits representing pixel modification information (73).
The two bits of pixel modification data would allow for a choice among four possibilities (00, 01, 10, 11); other numbers of bits in (73) would permit other numbers of choices. Two bits might be used to pick a position (upper left, lower left, upper right, lower right) for the pixel within a 2-by-2 pixel cell. It might be used to determine a shape/size for the pixel (1x1, 1x2, 2x1, 2x2). It might be used to modify how long a pixel was to be displayed (not at all, just once, twice, until replaced) for the "accumulation mode" as described herein; the "not at all" option allowing the RGB locations (70, 71 & 72) for that pixel in that data frame to be used for other purposes (for example, additional resolution, area boundary or side strip data, as described elsewhere herein). Or it might be used in other ways, as described herein, to modify the display characteristics of individual pixels in individual frames.
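One possible "C" packing of the Figure 35 record (the 8 + 8 + 8 + 2 = 26 bits specified above, padded here to a 32-bit word as an implementation convenience; the padding and the code-to-position mapping are assumptions, not part of the disclosure) might be:

    #include <stdint.h>

    struct sts_pixel {
        uint32_t red    : 8;   /* (70) one of 256 levels of red intensity   */
        uint32_t green  : 8;   /* (71) one of 256 levels of green intensity */
        uint32_t blue   : 8;   /* (72) one of 256 levels of blue intensity  */
        uint32_t pixmod : 2;   /* (73) pixel modification information       */
        uint32_t        : 6;   /* padding: the text specifies only 26 bits  */
    };

    /* one hypothetical assignment of the four codes, if pixmod selects a
       position within a 2-by-2 pixel cell: */
    enum pixmod_position { UPPER_LEFT, LOWER_LEFT, UPPER_RIGHT, LOWER_RIGHT };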
Figure 36 shows one example of an embodiment of the instant invention described above in the section entitled "FRAME STORE TECHNIQUES". Here, two arrangements of pixels are shown above and below, each employing four different shaped pixels. Two bits of data stored with the content information for each pixel would indicate: 00 = 1x1 cell, 01 = 1x2 cell, 10 = 2x1 cell, and 11 = 2x2 cell. Each arrangement uses the same number of the various types of cells: 16 of 00-type, 8 each of 01-type and 10-type, and 4 of 11-type. These, and other, patterns may be alternated to implement a variable STS scheme. Figure 37 shows an example of how these variously shaped picture elements may be used to advantage with "imagery with horizontal, vertical or diagonal lines, edges or details" and as described above in the section entitled "VARIABLE ALGORITHM OVER IMAGE FRAME". Here a pattern is shown which exhibits better vertical than horizontal resolution in the upper-left-diagonal half, and better horizontal than vertical resolution in the lower-right-diagonal half. Although this pattern is extremely simple, it is meant to be illustrative only. In practice, such patterns may be varied in subtle and complex ways, over time, to implement some of the variable STS schemes as described herein.
Figure 38 depicts the encoding process and device described in the section entitled "PRACTICAL APPLICATIONS", above, for use with an HDTV signal with "side strips". Shown are two HDTV frames (which may be the same frame if M = 0) separated by M frame times. The center portion of HDTV frame N (roughly corresponding to an NTSC 4/3 aspect ratio frame) is extracted and stored or delayed for M frames. The "side strips" of subsequent HDTV frame N+M are encoded and inserted into the blanking intervals of the same NTSC frame into which the earlier center section of N is now also inserted to create an NTSC frame containing the "center portion" of HDTV frame N and the encoded "side strips" of HDTV frame N+M.
Figure 39 shows the complementary process/device. Two frames separated by M frame times (which may be the same frame if M = 0) are depicted. NTSC frame N contains the "center portion" of HDTV frame N and encoded in its blanking intervals the "side strips" of later (by M frame times) frame N+M. Similarly, NTSC frame
N+M contains the "center portion" of HDTV frame N+M with the encoded "side strips" of HDTV frame N+2M. At time N the encoded information containing the "side strips" of HDTV frame N+M (contained in the blanking intervals of NTSC frame N) is directed to a "slow" decoder (which may take up to M frame times to decode this information back into the visual information representing the HDTV "side strips") and storage unit. M frame times later, the standard NTSC image of frame N+M, which contains the center section of HDTV frame N+M, is routed by the "quick pass-through" section to a unit which assembles that information with the delayed and stored "side strip" information and outputs a complete HDTV frame N+M at time N+M.
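The timing bookkeeping of this scheme can be sketched, purely for illustration, with the following self-contained "C" toy, in which frame numbers stand in for actual image data and an M-slot store stands in for the "slow" decoder and storage unit:

    #include <stdio.h>

    #define M      4    /* side strips travel M frames ahead of their centers */
    #define FRAMES 12

    int main(void)
    {
        int side_store[M] = {0};   /* the "slow" decoder and storage unit */
        int n;

        for (n = 0; n < FRAMES; n++) {
            if (n >= M) {
                /* quick pass-through: center of frame n arrives now; its
                   side strips (sent at time n-M) have had M frame times
                   to finish decoding                                      */
                printf("t=%2d: assemble HDTV frame %d from center %d + sides %d\n",
                       n, n, n, side_store[n % M]);
            }
            /* blanking intervals of NTSC frame n carry the side strips of
               frame n+M; hand them to the slow decoder                    */
            side_store[n % M] = n + M;
        }
        return 0;
    }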
Figure 40 depicts how the "variable STS" encoding principle could be applied to audio signals as indicated above. Since a standard audio signal is one-dimensional (with respect to time only) the spatial/temporal variability is pretty much limited to temporal. Figure 40-A shows the standard method for sampling audio — samples uniformly spread in time — such as employed on a CD audio disc. Figure 40-B shows how this scheme may be varied by sampling at specifically "random" times, although, on the average, at the same density of samples per unit time (although this is not a requirement). Figure 40-C shows how different variable STS sampling patterns may be applied to each channel of a stereo audio signal.
Figure 41 depicts the myriad of optional communication devices that may be inserted between an encoder and decoder along a communications chain.
Figure 42 depicts the "off-line recording" process described above. The dotted box in Figure 42 indicates which system elements one might expect to find combined in a typical "advanced television receiver" which are currently available at least in laboratory settings.
Figure 43 depicts a typical "shotgun" pattern such as described, above, in the section entitled "ADDING COMPLEXITY AND VARIATION". Four such patterns, which each fit within a 6 x 6 rectangle, are depicted which might be alternated in practice. These are just examples. Each pattern has the property that the entire raster may be tiled with the pattern as depicted in the lower illustration. With respect to a 6 x 6 box, the 0s represent the basic pattern in "frame 1" without any offset. The 1s represent the pattern offset 3 boxes up, the 2s 3 boxes down, the 3s to the left, the 4s to the right, the 5s up and left, the 6s up and right, the 7s down and left and, finally, the 8s three boxes to the right and down. Note that the basic 6 x 6 rectangle is completely covered with no overlapping pixels. Each of these "sprays" now represents an arbitrarily related set of points (rather than a geometrically coherent set of points, such as a square or an M x N rectangle) which may, for example, be thought of as o-points. The combined, and possibly weighted, values of these points would then be transmitted as an x-point value, whose location might be one of the points within the "spray" or at some other location on the raster.
Figure 44 depicts an N X M rectangular multi-point cell pattern, as described, above, in the section entitled "ADDING COMPLEXITY AND VARIATION". In Example A, four distributions of an x-point and 3 o-points are shown. There are four other related patterns (symmetrical about the horizontal mid-line) not shown which would allow extension to an eight frame cycle. Other variations from the disclosure may also be applied, such as shown in
Example B.
Figure 45 depicts a portion of a television receiver or video display monitor screen. The upper left image depicts transmitted points only (by Xs) in a (4+1):1 pattern. The upper right shows (by the letters A, B, C, and D), in addition, intermediate display points reconstructed by interpolation or a more complex computational method, such as is discussed, above, in the section entitled "INTERMEDIATE POINT RECONSTRUCTION". The lower left image instead shows, with the X-points, points which have been retained or "accumulated" for 1, 2, 3, or 4 frame times, as indicated, while the position of the X-points has changed, as discussed, above, in the section entitled "FRAME STORE TECHNIQUES". The lower right image shows how two screen sections (a central rectangle and the border area) would display the transmitted and either the "reconstructed" or "accumulated" information. Figure 46 depicts a camera system capable of "irregular frame rate" operation, as discussed, above, in the section entitled "ADDING COMPLEXITY AND VARIATION". Shown are a high frame-rate video camera and input and output frame buffers connected to a high-speed graphics processor with program memory and software, all of which are standard off-the-shelf items. These may be combined in a single package or plugged together. As information comes into the input frame buffer from the camera, it is selectively transferred to the output frame buffer, by the graphics processor, at irregular time intervals, by the "frame-full" or in smaller sections, for output at standard display rates and formats.
FURTHER EMBODIMENTS: As has been described above, some of the weighting factors of the (4+1:1) pattern (Ka, Kb, Kc, Kd & Kx) can be set to zero for some applications. One will be described here.
If, in addition to the four phases of the (4+1:1) described above, a fifth phase (sometimes called the 0th or zeroth phase) is added where Ka, Kb, Kc and Kd are all set to zero and Kx is set to 1 (the sum of all 5 should equal one in a normalized fashion), this fifth image in the cycle will consist of information from the x-pixels alone. There are two advantages to this.
First, if these phases are to be applied to an interleaved video signal, with four phases the repeat is four fields, or two frames. With five phases (an increase of 25%), the repeat cycle becomes 10 fields, or 5 frames (before the same phase type and field type coincide), an increase to 250%. The repeat cycle is as follows:
Field: e o e o e o e o e o
Frame: 1 1 2 2 3 3 4 4 5 5
Phase: 1 2 3 4 5 1 2 3 4 5
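
This 10-field repeat cycle can be generated, for example, by the following "C" fragment, which derives field parity, frame and phase from a running field index:

    #include <stdio.h>

    int main(void)
    {
        int i;
        for (i = 0; i < 10; i++)
            printf("field %c  frame %d  phase %d\n",
                   (i % 2 == 0) ? 'e' : 'o',   /* even/odd field     */
                   i / 2 + 1,                  /* frame number, 1..5 */
                   i % 5 + 1);                 /* phase number, 1..5 */
        return 0;
    }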
As described above, the encoding patterns of the instant invention may be applied to entire frames, or to individual fields. For simplicity, however, only frames will be discussed below, but the discussion also applies where the phase is incremented on a field- rather than frame-basis.
The second advantage is that such a scheme permits theoretically (in the absence of noise) perfect reconstruction of a high-definition still image, as will be discussed below.
With moving images and/or in the absence of such a fifth phase, approximate reconstruction may be accomplished by several techniques described after that.
THEORETICAL BACKGROUND:
The five phases of the (4+1:1) scheme (the four X+O phases and the zeroth X-alone phase) each define a linear correspondence (defined by the associations and weights of each phase of the pattern) and each takes a high-definition image frame and produces a low-definition image frame. Under the assumption that the high-definition image is constant for five frames, and that the noise component is zero, at least for certain resolutions, perfect reconstruction is possible. Hence, the notion of choosing an optimal solution from among many imperfect alternatives need not be considered, and the implementation becomes a matter of straightforward mathematical analysis. It is then observed that if the high-definition image stream does not vary too much from one frame to the next, the same inversion process can be used, and the errors will be small (because the inverse linear transformation is a bounded operator).
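In the notation of the data structures listed in the definitions above (VH, the stacked composite matrix A, and VL5), and under the five-frame-constant assumption, this can be written, for example, in LaTeX notation as:

    V_L^{(p)} = A^{(p)} V_H \quad (p = 0, \ldots, 4), \qquad
    V_{L5} = \begin{pmatrix} V_L^{(0)} \\ V_L^{(1)} \\ V_L^{(2)} \\ V_L^{(3)} \\ V_L^{(4)} \end{pmatrix}
           = \begin{pmatrix} A^{(0)} \\ A^{(1)} \\ A^{(2)} \\ A^{(3)} \\ A^{(4)} \end{pmatrix} V_H
           = A \, V_H ,
    \qquad\text{hence}\qquad V_H = A^{-1} V_{L5} .

For the demonstration sizes given herein, each A^{(p)} is 9 x 45, so the composite A is 45 x 45 and, as stated, invertible.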
DESCRIPTION OF THE ALGORITHMS: The basic algorithm for five phase, (4+1:1) STS encoding, with periodic boundary conditions, is as follows.
The input is a sequence of high-definition frames, assumed for the purposes of specification to be stored, one at a time, sequentially, in a frame buffer with N0 pixels per row, and M0 pixels per column.
The algorithm consists of several steps. A phase number will start off at 0 for the first frame, and increment by one for each succeeding frame. When the phase goes above 4 it gets set back to 0. Each high-definition frame is down-sampled according to Figure 11, into a smaller array which represents the active pixels (the precise indexing needed to do this is specified below). Next, depending on the phase number, one of 5 different linear transformations is applied to produce a low-definition image for transmission. More specifically, the transformation is computed by taking each pixel labeled X in Figure 11, and adding together a weighted sum of its value, and the value of several of the O pixels near it, as specified in Figure 9, with weights as specified elsewhere herein. However, in order to invert this process for decoding, it is useful to observe that this weighted sum is a linear mapping. Its inverse can be computed by standard matrix inversion algorithms. For decoding, this inverse will simply be specified as a matrix, and for purposes of uniform exposition, the encoding algorithm will be specified in this way as well; although, the actual implementation may be software that performs the mathematics as algebraic formulae that are not implemented as matrix operations.
The high-definition image can be treated as a 2-dimensional array. It is down-sampled to a 2-dimensional array of active pixels. The down-sampled active pixel array is re-indexed to a 1-dimensional array (i.e., a vector). The linear transformations are applied simply as matrices to these vectors. The resulting vector is then re-indexed back into a 2-dimensional array resulting in a 2-dimensional, low-definition image for transmission.
The basic algorithm for decoding is as follows.
A stream of incoming low-definition frames is received and stored in a cyclic array of 5 frame buffers. Again, a phase number will start off at 0 for the first frame, and increment by one for each succeeding frame. When the phase goes above 4 it gets set back to 0. Each incoming low-definition frame is thus tagged with a phase number. A store is kept of the current low-definition frame, together with the previous 4 frames (initialized to black before any information comes in, for example). Hence, at each given time, there will be exactly one frame in the store corresponding to each phase.
A vector is built from the 1-dimensionally indexed values in the phase 0 frame, followed by the 1-dimensionally indexed values in the phase 1 frame, etc., up to the 1-dimensionally indexed values in the phase 4 frame. A matrix linear transformation is applied to the vector. The output of this linear transformation is re-indexed into a single high-definition frame which is then displayed.
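A minimal "C" sketch of this recirculating store and vector build (with LDSIZE chosen as the 3 x 3 demonstration size mentioned herein; any per-frame pixel count would do) might look like:

    #include <string.h>

    #define LDSIZE 9   /* pixels per 1-dimensionalized low-definition frame */

    static double store[5][LDSIZE];   /* one frame per phase; static storage
                                         begins zeroed, i.e. "black"        */

    /* an arriving low-definition frame simply replaces the stored frame
       of the same phase */
    void receive_ld_frame(const double frame[LDSIZE], int phase)
    {
        memcpy(store[phase], frame, sizeof store[phase]);
    }

    /* build the stacked vector: phase 0 values first, then phase 1, ...,
       up to phase 4 */
    void build_vl5(double vl5[5 * LDSIZE])
    {
        int p;
        for (p = 0; p < 5; p++)
            memcpy(vl5 + p * LDSIZE, store[p], sizeof store[p]);
    }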
DETAILED DESCRIPTION OF ALGORITHMS:
First, here is how to convert from a high-definition image array, to an array of active pixels. This algorithm is taken from the pattern of Figure 11. A periodic portion of that figure is repeated here for convenience. The (•) symbol represents inactive points.
[The periodic portion of the Figure 11 sampling pattern, showing staggered rows of o points with every fifth row containing a run of x points, and with (•) marking the inactive points, does not reproduce legibly here.]
Start with a high-definition frame buffer assumed to have N0 pixels per row, and M0 pixels per column. Further, assume that these numbers are multiples of 5, and let N and M be such that N0 = 5N, and M0 = 5M. A sub-image of the high-definition image will be extracted which has dimensions M0 by N. Note: for this first step it is only necessary that N0 be a multiple of 5, but the other index will be important later.
The following pseudo-code performs the extraction.
INPUTS:  N and M0 as above; 2-D array of HD pixels: HD_PIXEL[ROW,COL],
         ROW = 1...M0, COL = 1...N0
OUTPUT:  2-D array of active pixels: AC_PIXEL[ROW,AC_COL],
         ROW = 1...M0, AC_COL = 1...N

/* SKIP is an array which holds the number of inactive pixels to the left
   of the first active pixel in a particular row. This is a pattern that
   repeats every 5 rows */
SKIP[] = {3, 0, 2, 4, 1}

FOR ROW = 1 ... M0
    FOR AC_COL = 1 ... N
        SK = SKIP[ (ROW-1) % 5 ]    /* arrays are indexed from 0; % is the */
                                    /* modular arithmetic function         */
        AC_PIXEL[ROW,AC_COL] = HD_PIXEL[ROW, SK + 5*(AC_COL - 1) + 1]
                                    /* +1 since columns are indexed from 1 */
    END FOR
END FOR
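
For illustration, the same extraction transcribed to self-contained "C" with 0-based arrays (the pseudo-code above is 1-based) could read:

    static const int SKIP[5] = {3, 0, 2, 4, 1};

    /* HD is an M0 x N0 high-definition image and AC an M0 x N (N = N0/5)
       active-pixel image, both row-major with 0-based indices */
    void extract_active(int M0, int N0, const int *HD, int *AC)
    {
        int N = N0 / 5;
        int row, ac_col;
        for (row = 0; row < M0; row++) {
            int sk = SKIP[row % 5];   /* inactive pixels to the left of
                                         the first active one in this row */
            for (ac_col = 0; ac_col < N; ac_col++)
                AC[row * N + ac_col] = HD[row * N0 + sk + 5 * ac_col];
        }
    }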
Now Figure 9 shows how the O pixels are associated to the X pixels, in each of the 4 phases 1, 2, 3 and 4. In phase 0, each X pixel is associated with only itself. In order to make Figure 11 correspond with Figure 9 it is necessary to rotate Figure 11 counterclockwise by about 28 degrees. The following diagram shows the high-definition pixel array as a 2-D array, with the data from Figure 9 worked in around a particular X pixel (X').
[The staggered high-definition array diagram does not reproduce legibly here; it places the Figure 9 labels (A1-D1, A2-D2, A3-D3, A4-D4) around a particular X pixel (X') in the same relative positions shown, after horizontal compression, in the diagram below.]
This diagram shows the horizontally compressed active pixel array as a 2-dimensional array, with the data from Figure 9 worked in around a particular pixel (X'):
0  0  0  0  0  0  0
0  0  0  0  0  0  0
X  X  X  X  X  X  X
0  0  D1 C1 0  0  0
0  0  0  B2 D2 0  0
0  0  C4 A4 0  0  0
0  0  0  A3 B3 0  0
X  X  X  X' X  X  X
0  0  B1 A1 0  0  0
0  0  0  A2 C2 0  0
0  0  D4 B4 0  0  0
0  0  0  C3 D3 0  0
X  X  X  X  X  X  X
0  0  0  0  0  0  0
0  0  0  0  0  0  0
0  0  0  0  0  0  0
0  0  0  0  0  0  0
X  X  X  X  X  X  X
0  0  0  0  0  0  0
0  0  0  0  0  0  0
From the above diagram, it is clear how to find the index of a particular type of O pixel, given the index of X'. For example, if X' has 2-dimensional index (ROW, COL), then its A1 pixel (that is, its A pixel in phase 1) has index (ROW+1, COL). See the "C" code, below, for the other relationships.
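Expressed as 1-D index arithmetic (for an active array NACT pixels wide, numbered as in the program below; NACT = 7 is just an illustrative value), the phase 1 relationships are, for example:

    enum { NACT = 7 };   /* active pixels per row; illustrative value */

    static long a1_of(long k) { return k + NACT;         }   /* (ROW+1, COL)   */
    static long b1_of(long k) { return k + NACT - 1;     }   /* (ROW+1, COL-1) */
    static long c1_of(long k) { return k - 4 * NACT;     }   /* (ROW-4, COL)   */
    static long d1_of(long k) { return k - 4 * NACT - 1; }   /* (ROW-4, COL-1) */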
For an X' pixel which is at an edge, periodic boundary conditions are used. For example, this results in "edge wrap-around" for an X' at a left edge as shown in the following diagram:
0  0  0  0  0  0  0
0  0  0  0  0  0  0
X  X  X  X  X  X  X
C1 0  0  0  0  0  D1
B2 D2 0  0  0  0  0
A4 0  0  0  0  0  C4
A3 B3 0  0  0  0  0
X' X  X  X  X  X  X
A1 0  0  0  0  0  B1
A2 C2 0  0  0  0  0
B4 0  0  0  0  0  D4
C3 D3 0  0  0  0  0
X  X  X  X  X  X  X
0  0  0  0  0  0  0
0  0  0  0  0  0  0
See the "C" code, below, for details of the other periodic conditions.
Now it is illustrated exactly how the 1-dimensional re-indexing into the frame buffers is carried out. Since a linear transformation must be specified, it will be convenient to index the frame buffers as linear arrays (i.e., not as 2-dimensional arrays). Hence, it must be specified how to re-index a 2-dimensional array into a 1-dimensional array, and then how to re-index back.
Assume that the active pixels are stored in a 2-dimensional array that has N columns and M0 rows of pixels.
To convert to a 1-dimensional array, the pixels in the 2-dimensional array will be indexed sequentially, starting at 1 in the upper left, going across the first row to N, then beginning again at the left of the second row, and so on, down to the lower right. For example, the pixel which is in the third row from the top, in the fourth column from the left will be indexed by K = 2N+4. The pixel in the eighth row from the top, in the sixth column over, will be indexed by 7N+6. The pixel m rows from the top, n columns over, will be indexed (m-1)N+n.
To convert from a 1-dimensional array to a 2-dimensional array, the first N values from the 1-dimensional array are put, in order, into the first row of the 2-dimensional array. The next N values are put in the second row, and so on.
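As a sketch of this re-indexing (using the 1-based convention just described; the function names are illustrative assumptions):

/* Illustrative sketch: 1-based (row, col) <-> 1-based linear index K,  */
/* for a 2-dimensional array with N columns.                            */
long to_linear(long row, long col, long N) { return (row - 1)*N + col; }
long row_of(long K, long N)                { return (K - 1)/N + 1; }
long col_of(long K, long N)                { return (K - 1)%N + 1; }

For example, to_linear(3, 4, N) yields 2N+4, matching the example above.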
Using the correspondences from the diagrams above, and obvious similar ones for the other types of boundary points (top, right and bottom), the following "C" computer program produces a matrix to perform the forward encoding process. The matrix is specified in a standard format called sparse form. This is a list of triples of numbers:
R1, C1, A1
R2, C2, A2
R3, C3, A3
...
RK, CK, AK
The entry Ri, Ci, Ai indicates that the number Ai is the entry of the matrix at row Ri and column Ci. The value of the matrix is zero at row-column pairs for which there is no entry. There will be at most one entry for any row-column pair.

/* Program to produce the sparse matrix for an operator for 5 phase    */
/* (4+1:1) STS scheme.  The scheme is applied to a grid of pixels      */
/* as in figure 11, with the top 3 rows, left 2 columns and rightmost  */
/* column deleted, to be specific about how we deal with edge-effects. */
/* Moreover, we fill in the missing o-pixels on the outer edges using  */
/* the outer two edges of o-pixels on the opposite edge.               */
/* Such deletions produce a 7 x 4 array of x's.  This program makes    */
/* these two numbers into parameters N and M.                          */
/* The STS scheme is applied by rotating figure 11 (roughly) 28        */
/* degrees counter-clockwise, and then identifying with figure 9.      */
/* Active pixels are numbered from 1 to 5*N*M starting in the upper    */
/* left, going across, and then down to the next row, etc.             */
/* Pixels in the 5 frames of resulting low definition images are       */
/* numbered similarly, as if the 5 frames are stacked one atop the     */
/* next.                                                               */
#include <stdio.h>
#include <stdlib.h>

/* Number of x pixels across:        */
/* #define N 3 (made into variable)  */
/* Number of x pixels up and down:   */
/* #define M 3 (made into variable)  */

/* Weights for different types of pixels */
#define A (unsigned long)4
#define B (unsigned long)3
#define C (unsigned long)2
#define D (unsigned long)1
#define X (unsigned long)6

main()
{
    unsigned long int m, n, p, row, col, xcol, M, N;
    FILE *dataout;
    char fin[256];

    printf("File for output data: ");
    gets(fin);
    if ((dataout = fopen(fin, "wb")) == NULL)
    {
        printf("Error: couldn't open output file %s\n", fin);
        exit(-1);
    }
    printf("\nNumber of x pixels across: ");
    gets(fin);
    N = (unsigned long int)atoi(fin);
    printf("\nNumber of x pixels up&down: ");
    gets(fin);
    M = (unsigned long int)atoi(fin);
    printf("\nPhase (0,1,2,3 or 4): ");
    gets(fin);
    p = (unsigned long int)atoi(fin);

    /* Do non-edge part of matrix first */
    /* loop thru non-edge x rows */
    for (m=2; m < M; m++)
    /* loop thru non-edge x columns */
    for (n=2; n < N; n++)
    {
        /* x pixel at m,n is sample data point #:  2*N + 5*N*(m-1) + n */
        /* We map it to output data point #:       N*(m-1) + n         */
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 2*N+5*N*(m-1)+n;  fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 2*N+5*N*(m-1)+n;  fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + N;          fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 1 - 5*N;    fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 2*N+5*N*(m-1)+n;  fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + 2*N;        fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 5*N;        fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 2*N+5*N*(m-1)+n;  fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - N;          fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 5*N;        fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 2*N+5*N*(m-1)+n;  fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - 2*N;        fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col + 5*N;        fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col + 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }

    /* Top edge, except for corners */
    /* loop thru non-edge x columns */
    for (n=2; n < N; n++)
    {
        /* fixed x row = 1 */
        m = 1;
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 1 - 5*N + 5*M*N;  fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 5*N + 5*M*N;      fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col + 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }

    /* Bottom edge, except for corners */
    for (n=2; n < N; n++)
    {
        /* fixed x row = M */
        m = M;
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 1 - 5*N;          fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 5*N - 5*N*M;      fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col + 5*N - 5*N*M;      fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }
    /* Left edge, except for corners */
    /* fix x column = 1 */
    n = 1;
    /* loop thru non-edge x rows */
    for (m=2; m < M; m++)
    {
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1 + N;            fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 1 - 6*N;          fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col + N - 1;            fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + N - 1;            fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col + 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col + 1 - N;            fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }
    /* Right edge, except for corners */
    /* set x column = N */
    n = N;
    /* loop thru non-edge x rows */
    for (m=2; m < M; m++)
    {
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 1 - 5*N;          fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1 - N;            fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1 + N;            fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1 - N;            fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1 + N;            fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col + 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }
    /* upper left corner */
    /* set x column = 1 */
    n = 1;
    /* set x row = 1 */
    m = 1;
    {
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 2*N+1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 2*N+1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 3*N+1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = 4*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = 5*N*M + 1 - 2*N;        fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col + N - 1;            fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 2*N+1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + 2*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = 5*N*M - N + 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 2*N+5*N*(m-1)+n;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = col + 5*N;              fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 2*N+1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 1;                      fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = N;                      fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = 6*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = 5*N + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }

    /* upper right corner */
    m = 1;
    n = N;
    {
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 3*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 3*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 4*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = 4*N - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = 5*N*M - N;              fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 3*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 5*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = 4*N + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = 5*N*M - N + 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = 5*N*M;                  fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 3*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 2*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = N+1;                    fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = 6*N+1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = 7*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 3*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = N;                      fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = N-1;                    fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = 7*N-1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = 6*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }
    /* lower left corner */
    m = M;
    n = 1;
    {
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 5*M*N - 3*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 5*M*N - 3*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = 5*M*N - N;              fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = 5*M*N - 7*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = 5*M*N - 6*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 5*M*N - 3*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 5*M*N - N + 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col + 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = 5*M*N - 6*N + 2;        fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 5*M*N - 3*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = 5*M*N - 4*N + 2;        fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = N+2;                    fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 5*M*N - 3*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 5*M*N - 5*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = 5*M*N - 4*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = N;                      fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = 1;                      fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }

    /* lower right corner */
    m = M;
    n = N;
    {
        row = N*(m-1)+n;    /* this won't change */
        switch (p)
        {
        case 0:    /* phase 0, pass x data thru */
            col = 5*M*N - 2*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, (unsigned long)16);
            break;
        case 1:    /* phase 1 */
            col = 5*M*N - 2*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col + N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = 5*M*N - 6*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            break;
        case 2:    /* phase 2 */
            col = 5*M*N - 2*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 5*M*N;                  fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = 5*M*N - N + 1;          fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = 5*M*N - 6*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = 5*M*N - 5*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        case 3:    /* phase 3 */
            col = 5*M*N - 2*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = col - N;                fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = 5*M*N - 4*N + 1;        fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            col = N + 1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = 2*N;                    fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            break;
        case 4:    /* phase 4 */
            col = 5*M*N - 2*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, X);
            col = 5*M*N - 4*N;            fprintf(dataout, "%lu %lu %lu\n", row, col, A);
            col = col - 1;                fprintf(dataout, "%lu %lu %lu\n", row, col, C);
            col = N - 1;                  fprintf(dataout, "%lu %lu %lu\n", row, col, D);
            col = N;                      fprintf(dataout, "%lu %lu %lu\n", row, col, B);
            break;
        }
    }

    return(0);    /* just to please operating system */
}

The program requires that the user specify the output filename, the resolution of the resulting low-definition frames (N and M), and the phase desired. The program then outputs the sparse form of the matrix to the specified file.
For a fixed resolution M0 by N0, let us denote the 5 matrices corresponding to phases 0 through 4 by A(0), A(1), A(2), A(3), and A(4).
Then the full procedure for encoding is shown in Figure 47 and is as follows:
1. Pre-compute the 5 matrices A(0), A(1), A(2), A(3), and A(4).
2. Initialize PHASE = 0.
3. Get the next high-definition frame into the M0 by N0 array HD_PIXEL (if all frames have been encoded and transmitted, then quit).
4. Compute the M0 by N array of active pixels AC_PIXEL.
5. Re-index the array AC_PIXEL into a 1-dimensional array (of size NM0).
6. Apply the NM by NM0 matrix A(PHASE) to the 1-dimensional array from step 5 to produce an output 1-dimensional array of size NM (a sketch of this step follows the list).
7. Re-index the output 1-dimensional array from step 6 into an M by N 2-dimensional array.
8. Transmit the array from step 7 as a low-definition image.
9. Increment PHASE. Reset it to 0 if it is > 4.
10. Go to step 3.
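Because each matrix is stored as sparse triples, step 6 reduces to one multiply-accumulate per stored entry. The following "C" sketch is illustrative only; the triple arrays, the 1-based indices, and the final division by 16 (per the un-normalized integer weights noted below for the sample matrices) are assumptions of the sketch, not the program above.

/* Illustrative sketch: apply a sparse matrix, stored as K triples      */
/* (R[i], C[i], A[i]) with 1-based row/column numbers, to vector x,     */
/* giving y of length nrows.  Weights are un-normalized integers, so    */
/* the result is divided by 16 at the end.                              */
void sparse_apply(long K, const long *R, const long *C, const long *A,
                  const long *x, long *y, long nrows)
{
    long i;
    for (i = 0; i < nrows; i++) y[i] = 0;
    for (i = 0; i < K; i++)
        y[R[i] - 1] += A[i] * x[C[i] - 1];
    for (i = 0; i < nrows; i++) y[i] /= 16;   /* normalize */
}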
DECODING PROCEDURE:
The encoding process used five matrices A(0), A(1), A(2), A(3) and A(4), of size NM by NM0. Since 5M = M0, it follows that a square matrix results from stacking the matrices atop one another, in order. That is, if the entry of matrix A(p) at row i, column j, is denoted by A(p)i,j, then one can form the matrix A such that Ai,j = A(p)k,j, where p is the integer part of i divided by NM, and k is the remainder plus 1. See Figure 48 for an example of this A matrix for a small, 15 x 15 pixel (active and inactive), high-definition source image. Note: this matrix was calculated with un-normalized, integer weighting factors (expressed in scientific notation) and, in practice, each entry in this matrix would have to be divided by 16 in order for it to perform properly.
Multiplying the 1-dimensionalized high-definition vector (VH) by any one of the five smaller matrices (A(0), A(1), A(2), A(3) or A(4)) results in a 1-dimensionalized low-definition vector of one of the five phases (VL0, VL1, VL2, VL3 or VL4). Multiplying the 1-dimensionalized high-definition vector (VH) by the stacked composite matrix (A) results in a 1-dimensionalized vector that constitutes data for five low-definition frames, one each of the five phases (VL5).
At a receiver, at any time, the last five low-definition frames received can be 1-dimensionalized to produce this same (VL5) data structure.
Since A x VH = VL5, it follows that A-1 x VL5 = VH, under appropriate conditions. These conditions are that the matrix A must be invertible and that (for perfect reconstruction) the high-definition image (VH) must be constant for the five-frame period. See Figure 49 for an example of the inverse (A-1, or B) matrix for a small, 15 x 15 pixel (active and inactive), high-definition source image. Since the "full matrix" A is a square matrix, it is potentially invertible. And at least for certain resolutions, with the (4+1):1 pattern, the matrix A is in fact invertible; that is, at least, in cases where M and N are both odd. Further, the theoretical basis for the "perfect reconstruction" model, that is, that the high-definition image remain constant, can be relaxed in practice.
In the case where the high-definition image does change somewhat over the five-frame period, it is reasonable, for small changes, to compute the same inverse and use the results as approximate reconstructed values. In cases where there has been significant motion in parts of the image, additional techniques, discussed below, will improve performance.
At each frame time, the high-definition monitor will display a high-definition frame with information computed by multiplying a sequenced composite of the 1-dimensionalized versions of the last five low-definition frames by the inverse matrix B. This will yield high-definition data points of the number and for the positions of the "active" data points as shown above or in Figure 11. These data points may be displayed as is, or information for additional display points corresponding to the "inactive" data points (.) may be calculated by bilinear interpolation.
The following pseudo-code takes an array of pixels for the "active" positions (which are generated by the matrix multiplication and are an intermediate step in the reconstruction algorithm given below) and produces a high-definition array (with room for both active and inactive places) with those values inserted in the correct places. Alternately, the computed high-definition pixels may be displayed alone.
INPUTS:  N and M0 as above; 2D array of active pixels: AC_PIXEL[ROW, AC_COL], ROW = 1...M0, AC_COL = 1...N
OUTPUT:  2D array of HD pixels: HD_PIXEL[ROW, COL], ROW = 1...M0, COL = 1...N0

/* SKIP is an array which holds the number of inactive pixels to the left of the first active pixel in a particular row. This is a pattern that repeats every 5 rows */
SKIP[] = {3, 0, 2, 4, 1}

INITIALIZE HD_PIXEL[] = ALL 0's
FOR ROW = 1 ... M0
    FOR AC_COL = 1 ... N
        SK = SKIP[ (ROW-1) % 5 ]   /* arrays are indexed from 0, % is the modular arithmetic function */
        HD_PIXEL[ROW, SK + 5*(AC_COL - 1)] = AC_PIXEL[ROW, AC_COL]
    END FOR
END FOR
The full decoding procedure is shown here and in Figure 50. It includes optional calculation of pixels corresponding to the "inactive" positions in Figure 11. It keeps 5 arrays of low-definition pixels in a recirculating frame store (see Figure 51) in which the oldest incoming frame is replaced by the newest.
1. Pre-compute the matrix B = A-1.
2. Initialize each 1-dimensional array L0, L1, L2, L3, and L4 to 0.
3. Initialize PHASE = 0.
4. Get the next incoming low-definition frame (or quit if done displaying all frames).
5. Re-index and place pixel values from the frame into the 1-dimensional array L_PHASE.
6. Construct a vector V with the values of L0 followed by L1, followed by L2, followed by L3, followed by L4. (That is, V[k] = Lj[i] where j is the integer part of k divided by NM, and i is the remainder plus 1; see the sketch after this list.)
7. Let W = B x V (i.e., W is the vector resulting from applying the matrix B to the vector V).
8. Re-index the values of the length-NM0 vector W into an M0 by N 2-dimensional array AC_PIXEL.
9. Insert the values of AC_PIXEL into HD_PIXEL as described above.
10. Fill in any empty HD_PIXEL values by bilinear interpolation and display the resulting high-definition image.
11. Increment PHASE, resetting it to 0 if it is > 4.
12. Go to step 4.
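Steps 5 and 6 amount to maintaining a small recirculating store and concatenating its five slots. The slot layout, pointer types and function names below are assumptions of this illustration only.

/* Illustrative sketch: L[0]..L[4] are the 1-dimensionalized last five  */
/* low-definition frames, one per phase; the newest frame overwrites    */
/* the slot for its phase (step 5), and V is the five slots             */
/* concatenated in phase order (step 6).                                */
void store_frame(int phase, long frame_len, const long *frame, long *L[5])
{
    long i;
    for (i = 0; i < frame_len; i++)
        L[phase][i] = frame[i];   /* replace the oldest frame of this phase */
}

void build_v(long frame_len, long *const L[5], long *V)
{
    int j;
    long i;
    for (j = 0; j < 5; j++)
        for (i = 0; i < frame_len; i++)
            V[j*frame_len + i] = L[j][i];
}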
PRACTICAL CONSIDERATIONS:
Two practical considerations will be discussed below which will modify the theoretical embodiment described above. The first is that the fifth (or 0) phase may not be present. In that case several alternatives can be considered or used in combination.
The first approach is to just use the last five frames, so that one of the phases will have two representative frames: the oldest and the most recent. That older version of the current phase can then be substituted for the X-only phase 0 frame data. (Alternately, the newer version of the current phase can be substituted for the X-only phase 0 frame data.)
A second approach uses only four frames, one each of phase 1, 2, 3 & 4, and then uses an average (or weighted average) of those four data sets as a substitute for the data set of the expected X-only or 0 phase data. The weights may, for example, weight the oldest (or the newest) frames more heavily.
The problem is that whatever phase (or combination) is used to substitute for the X-only, phase 0 data, its deviation from X-only will contribute to an error or deviation from perfect reconstruction.
Conceptually, for example, if one has an X pixel value and an X+P pixel value (where P is some other data) then P can be derived by subtracting X from X+P. However, if X+P is substituted for the X value then when subtracting X from X+P, instead, X+P is subtracted from X+P leaving 0.
Thus, substituting an average (or weighted average) of phases 1, 2, 3 & 4 for the 0 phase will achieve less than perfect reconstruction; and whatever data is substituted will diminish the reconstruction of that data where it should be displayed.
Therefore, the following is suggested as the preferred method. As is noted from Figure 10, with the particular scheme depicted there, 1/4th of the O points are sampled in each of the four phases. When calculating O points that correspond to a particular phase (say phase 2), the average substituted for the X-only pixels will be an average of the three other phases (1, 3 and 4 in this example). Similarly, when calculating O points that are transmitted during phase 3, the phase 0 data will be substituted by an average of phases 1, 2 and 4. And so on. For calculating the X pixels themselves, an average of all four phases (1, 2, 3 and 4) is suggested.
Thus, the calculations will have to be carried out five times, with a different set of X-only data each time. However, only one fifth of the calculations will have to be carried out each time so, in total, the amount of calculation is about the same, although some overhead of calculating and reloading the X-only data is incurred.
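By way of illustration, the substitution rule can be sketched as follows; the equal-weight average and the names are assumptions of this sketch (a weighted average works the same way).

/* Illustrative sketch: build the substitute "phase 0" data used when   */
/* reconstructing the O-points of phase p (1..4), as the average of the */
/* three other O-bearing phases; pass p = 0 to average all four phases, */
/* as suggested for the X-pixels themselves.                            */
void substitute_phase0(int p, long frame_len, long *const L[5], long *sub)
{
    long i;
    int j, count = 0;
    for (i = 0; i < frame_len; i++) sub[i] = 0;
    for (j = 1; j <= 4; j++) {
        if (j == p) continue;      /* omit the phase being reconstructed */
        for (i = 0; i < frame_len; i++) sub[i] += L[j][i];
        count++;
    }
    for (i = 0; i < frame_len; i++) sub[i] /= count;
}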
The second practical consideration is that the high-definition image will not be static. However, generally, with television images, large amounts of the image for large amounts of time change little. Thus, for those areas, the reconstruction mechanism described thus far will be effective.
For areas where there is fast or large motion, it would be preferable to display a high-definition image computed just from the most recent data, and "fill in" the high-definition pixels intermediate to those transmitted by bilinear interpolation.
The following addition will permit the smart receiver to choose between these alternatives for sections of the image.
First, it is noted that techniques for determining what parts of an image have motion (i.e., have changed since the last frame) are well developed. These range from simple frame-to-frame pixel comparisons (such as used in security motion detectors) to more sophisticated algorithms such as those used to calculate "optical flow" for robot vision or computer vision applications. Such techniques and algorithms are well known and well developed, and are not, in and of themselves, the subject of this invention.
From such techniques a motion map may be derived for any current high-definition frame by comparing it with its predecessor prior to transmission. The map may be bi-variate, that is, two-valued: moving or not. Or, the map may have (for the current application) values from 0 (moving) to S (still for S frames), where S is 4 or 5 depending upon whether a 4 or 5 frame cycle is being used.
Such a 2 (or 4 or 5) valued image can be compressed by many well-known methods into a very small data file. Further, although two high-definition frames are compared to create the map, a grosser-resolution version can be used, reducing the file size much further. Generally this need only be at the resolution of the low-definition transmitted frames, at most.

A technique described earlier (as a method for sending highly compressed versions of "side strips") will be used to transmit this compressed motion map. That is, it will be pre-computed and transmitted in the blanking interval of an earlier frame. Thus, at the receiver, the motion map for a given frame will arrive well before the visual data to which it refers, and it can be used to determine which technique (reconstruction, interpolation, or some combination) will be used for each area of the high-definition image.

With the simple two-valued map a decision may be made for each area (as small as the resolution of the motion map will permit) whether to display the reconstructed image for that area or just the present data with fill-in or interpolation filtering. See Figure 52. Both types of image data will be calculated (and perhaps held in two separate frame stores) so the choice may be made for each area displayed by consultation of the motion map held in a third buffer. With 4 or 5 values, the two images (reconstructed and interpolated) may be "mixed" or cross-dissolved between, rather than just selected between; such is a standard practice called soft-edge keying or matting.
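A per-pixel sketch of this selection or cross-dissolve, assuming a motion map with values 0 (moving) through S (still), is:

/* Illustrative sketch: mix reconstructed and interpolated images per   */
/* pixel under control of the motion map.  map[i] == S selects pure     */
/* reconstruction, map[i] == 0 pure interpolation, and values between   */
/* cross-dissolve, as in soft-edge keying.                              */
void mix_images(long npix, int S, const unsigned char *map,
                const unsigned char *recon, const unsigned char *interp,
                unsigned char *out)
{
    long i;
    for (i = 0; i < npix; i++)
        out[i] = (unsigned char)((map[i]*recon[i] + (S - map[i])*interp[i]) / S);
}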
Additionally, with such a motion map (either 2 or 4/5 valued) an additional technique may be used, which the Inventor calls "back propagation". In this case, when a new low-definition frame comes in, wherever the image is motionless the data will be entered in the current frame of the recirculating frame store only. If the image portion (X-point) has changed, its value is not only entered in the current frame, but is entered (at the equivalent position) into the other (3 or 4) frames as well.
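A sketch of back propagation, under the same assumed frame-store layout as the earlier sketches:

/* Illustrative sketch: enter one new low-definition value at position  */
/* k.  Where the motion map says the point is still, only the current   */
/* phase slot is written; where it has changed, the value is propagated */
/* into all five slots at the equivalent position.                      */
void enter_value(int phase, long k, long value, int changed, long *L[5])
{
    int j;
    if (!changed)
        L[phase][k] = value;
    else
        for (j = 0; j < 5; j++)
            L[j][k] = value;
}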
OFF-LINE ENCODING:
The above describes matrix algebra that would be embodied as high-speed special purpose hardware within a smart HDTV receiver, with the matrix coefficients most likely pre-computed and stored as data on a PROM.
Similarly a special purpose (or programmable) digital signal processing system may embody the underlying algebraic encoding functions for real-time encoding; for example, as might be insinuated between a high-definition camera and a transmitter or a recorder.
Alternately, as described above, the encoding may be done off-line (i.e., not in real time) and may then employ a low-powered general-purpose processor, such as an IBM-PC, for computational purposes, suitably outfitted with one or two frame stores. If the frame store is of such a type as may be programmed to display and digitize at different resolutions, then a single one may be used. Alternately, and more conveniently, a high-resolution digitizing frame store may be used in addition to a low-resolution display. This second approach will be assumed in the following discussion. Under computer control, a source tape player will synchronize with the frame store to permit digitization of individual frames. (Alternately, these may be digitized on another system and imported via a communication line or removable mass storage device.) Similarly, a destination tape recorder is synchronized to the output frame store for the single-frame collection of the processed frames.
While a high-definition frame is in the high-resolution frame store, data from its individual pixels will be made available via a subroutine PIXELIN(x, y), where x represents a column and y a row; which returns a pointer to a three-integer array where the integers represent red, green and blue values of the pixel respectively; or returns a pointer to a (0, 0, 0) triple if x or y are out of range.
Similarly, data may be written to the low-definition frame store by a subroutine PIXELOUT(x, y, rgb), where x represents a column, y a row and rgb a pointer to a three-integer array where the integers represent red, green and blue values of the pixel respectively. The following computer subroutine, written in "C", SQUEEZE(p), where p is the phase, would be run on the host computer to effect the conversion of each frame. An overall control program would coordinate this subroutine with others that control the videotape machine interfaces. See Figure 53 for a system diagram of this embodiment.
This "C" code fragment, the subroutine SQUEEZE, implements the (4+1 ):1 encoding pattern as shown in Figures 6 through 11. squeeze(p) is the call to the code fragment where p is the phase of the pattern, 1 through 4
It is assumed that two low-level, system-specific subroutines exist. PIXELIN will access, as input, the pixels of a high-resolution image that is 2,000 by 2,000 pixels, upper-left origin of (0,0). pixelin(x, y), where x represents a column and y a row, returns a pointer to a three-integer array where the integers represent red, green and blue values of the pixel respectively; it returns a pointer to a (0, 0, 0) triple if out of range. PIXELOUT will access, as output, the pixels of a low-resolution image that is 400 by 400 pixels, upper-left origin of (0,0). pixelout(x, y, rgb), where x represents a column, y a row and rgb a pointer to a three-integer array where the integers represent red, green and blue values of the pixel respectively.
NOTE: This code has been written for clarity, not efficiency.
/* ------------------------ BEGINNING OF SQUEEZE ------------------------ */
squeeze(
    int p                       /* phase index */
)
{
    int x;                      /* high-resolution column index */
    int y;                      /* high-resolution row index */
    int wx, wa, wb, wc, wd;     /* digital weighting factors */
    int i;                      /* low-resolution column counter */
    int j;                      /* low-resolution row counter */
    int k;                      /* rgb index */
    int ax, bx, cx, dx;         /* column offsets for O-points */
    int ay, by, cy, dy;         /* row offsets for O-points */
    int *rgbx, *rgba, *rgbb, *rgbc, *rgbd;
                                /* pointers to input rgb pixel values */
    int rgb[3];                 /* holders for output rgb pixel values */

    /* Set Digital Weighting Factors */
    wx = 6; wa = 4; wb = 3; wc = 2; wd = 1;

    /* Set Row & Column Offsets for O-Points for Correct Phase */
    if (p == 1)                 /* For Phase 1 */
    {
        ax = +2; ay = +1;
        bx = -3; by = +1;
        cx = +2; cy = -4;
        dx = -3; dy = -4;
    }
    else if (p == 2)            /* For Phase 2 */
    {
        ax = -1; ay = +2;
        bx = -1; by = -3;
        cx = +4; cy = +2;
        dx = +4; dy = -3;
    }
    else if (p == 3)            /* For Phase 3 */
    {
        ax = -2; ay = -1;
        bx = +3; by = -1;
        cx = -2; cy = +4;
        dx = +3; dy = +4;
    }
    else if (p == 4)            /* For Phase 4 */
    {
        ax = +1; ay = -2;
        bx = +1; by = +3;
        cx = -4; cy = -2;
        dx = -4; dy = +3;
    }
    else                        /* Otherwise Return with Error Condition */
    {
        printf("%d is not a valid phase (1 - 4).", p);
        return(-1);
    }

    /* Calculate low-resolution values from high-resolution data */
    for (i=0; i<400; i++)       /* for 400 low-resolution columns */
    {
        x = i * 5 + 2;          /* calculate X-point x-coordinate */
        for (j=0; j<400; j++)   /* for 400 low-resolution rows */
        {
            y = j * 5 + 2;      /* calculate X-point y-coordinate */
            rgbx = pixelin(x, y);           /* get pointer to X-point */
            rgba = pixelin(x+ax, y+ay);     /* get pointer to A-O-point */
            rgbb = pixelin(x+bx, y+by);     /* get pointer to B-O-point */
            rgbc = pixelin(x+cx, y+cy);     /* get pointer to C-O-point */
            rgbd = pixelin(x+dx, y+dy);     /* get pointer to D-O-point */
            for (k=0; k<3; k++) /* calculate low-resolution r, g & b */
            {
                rgb[k] = *(rgbx+k) * wx + *(rgba+k) * wa + *(rgbb+k) * wb
                       + *(rgbc+k) * wc + *(rgbd+k) * wd;
                rgb[k] /= 16;   /* normalize value */
            }
            pixelout(i, j, rgb);    /* write out low-resolution pixel */
        }
    }

    /* Done -- Return with OK Code */
    return(0);                  /* normal return */
}
/* --------------------------- END OF SQUEEZE --------------------------- */
The flows depicted in the software flow diagrams herein are exemplary; some items may be ordered differently, combined in a single step, skipped entirely, or accomplished in a different manner. However, the depicted flows will work. In particular, some of these functions may be carried out by hardware components, or by software routines residing on, or supplied with, such a component.
Similarly, the systems depicted in the system diagrams herein are exemplary; some items may be organized differently, combined in a single element, omitted entirely, or accomplished in a different manner. However, the depicted systems will work. In particular, some of these functions may be carried out by hardware components, or by software routines residing on, or supplied with, such a component.
It will thus be seen that the objects set forth above, among those made apparent from the preceding description, are efficiently attained, and certain changes may be made in carrying out the above method and in the construction set forth. Accordingly, it is intended that all matter contained in the above description or shown in the accompanying figures shall be interpreted as illustrative and not in a limiting sense.
While there has been shown and described what are considered to be preferred embodiments of the invention, it will, of course, be understood that various modifications and changes in form or detail could readily be made without departing from the spirit of the invention. It is, therefore, intended that the invention be not limited to the exact form and detail herein shown and described, nor to anything less than the whole of the invention herein disclosed as hereinafter claimed.
I claim:
3. For example:
PIP-512, PIP-1024 and PIP-EZ (software); PG-640 & PG-1280; MVP-AT & Imager-AT (software), all for the IBM-PC/AT, from Matrox Electronic Systems, Ltd. Que., Canada.
The Clipper Graphics Series (hardware and software), for the IBM-PC/ AT, from Pixelworks, New Hampshire.
TARGA (several models with software utilities) and AT- VISTA (with software available from the manufacturer and Texas Instruments, manufacturer of the TMS34010 onboard Graphics System Processor chip), for the IBM-PC/ AT, from AT&T EPICenter/Truevision, Inc., Indiana.
The low-end Pepper Series and high-end Pepper Pro Series of boards (with NNIOS software, and including the Texas Instruments TMS34010 onboard Graphics System Processor chip) from Number Nine Computer Corporation, Massachusetts.
4. For example:
FGS-4000 and FGS-4500 high-resolution imaging systems from Broadcast Television Systems, Utah.
911 Graphics Engine and 911 Software Library (that runs on an IBM-PC/AT connected by an interface cord) from Megatek, Corporation, California.
One/80 and One/380 frame buffers (with software from manufacturer and third parties) from Raster Technologies, Inc., Massachusetts.
Image processing systems manufactured by Pixar, Inc., California.
And many different models of graphic-capable workstations from companies such as Apollo, SUN and Silicon Graphics, Inc.
5. For Example:
GMP VLSI Graphics Microprocessor from Xtar Electronics, Inc., Illinois.
Advanced Graphics Chip Set (including the RBG, BPU, VCG and VSR) from National Semiconductor Corporation, California.
TMS34010 Graphics System Processor (with available Software Development Board, Assembly Language Tools, "C" Cross-Compiler and other software) from Texas Instruments, Texas.
6. Such as inventor's own film colorization system, subject of US Patent Number 4,606,625, issued August 19, 1986.
7. The Media Lab: Inventing the Future at MIT, Stewart Brand, Viking Press, New York 1987, page 72.
8. Reference to be supplied.
9. Also referred to as microsaccades; see, for example: Movements of the Eyes, R. H. S. Carpenter, Pion,
MIZE function subroutines in "C" compiler libraries.
The Laplacian Pyramid as a Compact Image Code, Burt and Adelson, IEEE Transactions on Communications, Vol. COM-31, No. 4, April 1983, pp. 532-540.
25. See, for example, the devices mentioned in Note 3 as having "onboard processor chips" or most of the devices or systems mentioned in Note 4.
26. See Notes 3 and 5.
27. See Note 4.
28. Example of code implementing (4+1):1 variable STS encoding.
/*
This computer program implements a (4+1):1 sampling pattern, as described
in figures 6 through 11 of the patent application.  The coordinate
directions are called h and v, for horizontal and vertical, just to avoid
possible confusion between x coordinates and (x)-type pixels.
*/

#define NH 102      /* size of output image in h direction */
#define NV 96       /* size of output image in v direction */
#define NPHASE 4    /* number of phases */
#define NOTYPE 4    /* number of (o)-type pixels for each (x)-type */
#define XSPACE 5    /* spacing in h or v of pixels in input image */
#define XWEIGHT 6   /* the weight of (x)-type pixels */
#define DENOM 16    /* the denominator of the weight function */

/*
For each phase, this table gives the offsets of the (o)-type pixels
associated with each (x)-type pixel, and the weights they are to be
given when computing the output pixel.
*/
struct sts {
    int hoffs, voffs;
    int weight;
} sts[NPHASE][NOTYPE] =
{
    { { 2, -1, 4}, {-3, -1, 3}, { 2,  4, 2}, {-3,  4, 1} },  /* phase 1 of 4 */
    { {-1, -2, 4}, {-1,  3, 3}, { 4, -2, 2}, { 4,  3, 1} },  /* phase 2 of 4 */
    { {-2,  1, 4}, { 3,  1, 3}, {-2, -4, 2}, { 3, -4, 1} },  /* phase 3 of 4 */
    { { 1,  2, 4}, { 1, -3, 3}, {-4,  2, 2}, {-4, -3, 1} },  /* phase 4 of 4 */
};

/*
stsresample computes a compressed image for a single frame, using an STS
resampling scheme.  The parameter 'phase' indicates which phase of the
STS is in effect.  stsresample calls two routines not defined here:
GetInputPixel(h, v) to get pixel values from the high-definition input
image, and PutOutputPixel(h, v, value) to set pixel values in the
compressed output image.
*/
stsresample(phase)
int phase;
{
    int h, v;           /* horizontal and vertical coordinates */
    int pixel;          /* the output pixel value at a particular (h,v) */
    struct sts *stsp;   /* pointer to sts information */
    int i;

    for (v=0; v<=NV; v++) {
        for (h=0; h<=NH; h++) {
            pixel = GetInputPixel(h*XSPACE, v*XSPACE) * XWEIGHT;
            stsp = &sts[phase-1][0];
            for (i=0; i != NOTYPE; i++, stsp++)
                pixel += stsp->weight * GetInputPixel(
                             h*XSPACE + stsp->hoffs,
                             v*XSPACE + stsp->voffs);
            PutOutputPixel(h, v, pixel/DENOM);
        }
    }
}
29. A mathematical specification of optimal would be, for example, "minimum expected energy in the error". A precise but non-mathematical specification might be, "maximize the root mean square of the subjective level of satisfaction with image quality, as reported by a particular group of test subjects".
30. Typical examples include:
Real Linear Algebra, Antal E. Fekete, Marcel Dekker, Inc., New York 1985.
Finite Dimensional Multilinear Algebra, Parts I & II, Marvin Marcus, Marcel Dekker, Inc., New York 1973.
Sparse Matrix Computations, Ed. Bunch & Rose, Academic Press, Inc., New York 1976.
Matrix Computations and Mathematical Software, John R. Rice, McGraw-Hill Book Company, New York 1981.
LINPACK Users' Guide, Dongarra, Bunch, Moler and Stewart, SIAM, Philadelphia 1979.
31. See Note 30.
32. The sample A matrix was created (with a factor of 1/16 omitted) and the sample B (A-1) matrix computed, both in sparse form, as demonstration examples; the images they handle being too small to be practical.
The "high-definition" frame is 15 x 15 pixels, thus, having 225 pixels, both active and inactive. The total active pixels are 1/5 of that or 45. Of those, 1/5 or 9 are x-pixels and 36 are o-pixels. The o- pixels are in four groups of nine, each associated with one phase, a fifth phase having no o-pixels; the x- pixels are associated with all five phases.
Thus, each of the A(0), A(1), A(2), A(3) and A(4) matrices is 9 x 45, and the composite A matrix is 45 x 45, as is its inverse.
At each phase, the 45 active points are 1-dimensionalized into a vector of length 45 and the output is a vector of length 9 which is 2-dimensionalized into a 3 x 3 "low-definition" image.
Five such sequential low-definition images are again 1-dimensionalized and sequenced to create a vector of length 45 which, when multiplied by the inverse 45 x 45 matrix, yields another vector of length 45. These 45 points are then re-positioned into the appropriate 45 "active" positions of the 15 x 15 pixel high-definition display; the other positions being empty (just used to space and position the recomputed x-points and reconstructed o-points) or may have values computed for them by interpolation.
For an example of computer software suitable to carry out large matrix calculations see:
LINPACK Users' Guide, Dongarra, Bunch, Moler and Stewart, SIAM, Philadelphia 1979.
33. For just one example see chapter 13 of: Handbook of Pattern Recognition and Image Processing, Ed. Tzay Y. Young, Academic Press, Inc., New York 1986.
34. See Note 3.

Claims

1. A process for deriving a second information bearing signal from a first information bearing signal, each information bearing signal comprising a succession of data frames, which process comprises the steps of: a. designating a first set of data points from a first data frame of said first signal; b. designating an active set of data points from said first set; c. designating a to-be-transmitted set of data points from said active set; d. designating a to-be-sampled-but-not-transmitted set of data points from said active set; e. designating a set of associations between said to-be-transmitted set and said to-be-sampled-but-not-transmitted set to derive a set of processed data points; f. designating an arrangement of said set of processed data points into a first data frame of said second information bearing signal; and g. repeating steps a through f on successive frames of said first signal to derive successive frames of said second signal while varying at least one of said designations.
2. A process for deriving a second information bearing signal comprising a second succession of data frames, from a first information bearing signal comprising a first succession of data frames, which process comprises the steps of: a. designating a first set of data points from a first data frame of said first signal; b. designating an active set of data points from said first set; c. designating a to-be-transmitted set of data points from said active set; d. designating a to-be-sampled-but-not-transmitted set of data points from said active set; e. designating at least one to-be-sampled-but-not-transmitted subset of said to-be-sampled-but-not-transmitted set; f. designating at least one to-be-transmitted subset of said to-be-transmitted set; g. designating a set of associations between said subsets to derive a set of processed data points; h. designating an arrangement of said set of processed data points into a first data frame of said second information bearing signal; and i. repeating steps a through h on successive frames of said first signal to derive successive frames of said second signal while varying at least one of said designations.
3. The process of claim 1, wherein said set of to-be-sampled-but-not-transmitted points is the null set.
4. A process for deriving a second information bearing signal comprising a second spatial-temporal distribution of data points, from a first information bearing signal comprising a first spatial-temporal distribution of data points, wherein the individual data points comprising said first spatial-temporal distribution of data points of said first information bearing signal are sampled at irregularly varying time intervals in order to generate said second spatial-temporal distribution of data points comprising said second information bearing signal.
5. The process of claim 1, wherein said first information bearing signal is a video signal and said set of to-be-transmitted data points has a resolution, comprising the additional step of displaying said second information bearing signal on a video monitor of a resolution comparable to the resolution of the set of to-be-transmitted data points.
6. The process of claim 1, wherein said first information bearing signal is a video signal and said set of active data points has a resolution, comprising the additional step of displaying said second information bearing signal on a monitor of a resolution comparable to the resolution of the set of active data points.
7. The process of claim 6, comprising the additional step prior to the displaying step, of creating and inserting additional information intermediate to said processed data points.
8. The process of claim 6, wherein the additional information is created by a process selected from the group consisting of interpolation, reconstruction, and accumulation.
9. A product produced by the process set forth in claim 1 and conveyed on an information-bearing distribution medium.
10. The process of claim 1, wherein at least one of said variations is applied at irregular time intervals.
11. The process of claim 1, wherein at least some of the data points comprising said signals are modulated by weighting factors.
12. The process of claim 1, wherein at least one of said information bearing signals is separated into at least two areas and, for at least one of said designations, said designation is varied differently in at least two signal areas.
13. The process of claim 1, wherein the first information bearing signal comprises a computer-generated data base.
14. The process of claim 1, wherein data from said to-be-transmitted points is selected at every frame, and data from at least some of said to-be-sampled-but-not-transmitted points is selected for less than every frame.
PCT/US1996/009815 1995-06-07 1996-06-07 System for time-varying selection and arrangement of data points for processing ntsc-compatible hdtv signals WO1996041474A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP9502028A JPH11507484A (en) 1995-06-07 1996-06-07 Method and apparatus for time transition selection and data point arrangement for processing NTSC compatible HDTV signals
EP96919297A EP0847648A4 (en) 1995-06-07 1996-06-07 System for time-varying selection and arrangement of data points for processing ntsc-compatible hdtv signals
AU61668/96A AU6166896A (en) 1995-06-07 1996-06-07 System for time-varying selection and arrangement of data po ints for processing ntsc-compatible hdtv signals

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US48538595A 1995-06-07 1995-06-07
US08/485,385 1995-06-07

Publications (1)

Publication Number Publication Date
WO1996041474A1 true WO1996041474A1 (en) 1996-12-19

Family

ID=23927946

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1996/009815 WO1996041474A1 (en) 1995-06-07 1996-06-07 System for time-varying selection and arrangement of data points for processing ntsc-compatible hdtv signals

Country Status (4)

Country Link
EP (1) EP0847648A4 (en)
JP (1) JPH11507484A (en)
AU (1) AU6166896A (en)
WO (1) WO1996041474A1 (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US2940005A (en) * 1950-07-19 1960-06-07 Moore And Hall Variable discontinuous interlaced scanning system
US4713688A (en) * 1984-09-26 1987-12-15 Ant Nachrichtentechnik Gmbh Method for increasing resolution in a compatible television system
US5291281A (en) * 1992-06-18 1994-03-01 General Instrument Corporation Adaptive coding level control for video compression systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP0847648A4 *

Also Published As

Publication number Publication date
AU6166896A (en) 1996-12-30
EP0847648A1 (en) 1998-06-17
JPH11507484A (en) 1999-06-29
EP0847648A4 (en) 2001-01-24

Similar Documents

Publication Publication Date Title
US5430486A (en) High resolution video image transmission and storage
JP2534534B2 (en) Television system for transferring a coded digital image signal from a coding station to a decoding station
Dubois The sampling and reconstruction of time-varying imagery with application in video systems
US5384904A (en) Image scaling using real scale factors
US6456745B1 (en) Method and apparatus for re-sizing and zooming images by operating directly on their digital transforms
US6239847B1 (en) Two pass multi-dimensional data scaling arrangement and method thereof
JP2000270326A (en) Method for transmitting high definition television through channel of narrow band width
CN1051900C (en) Digital video signal processor apparatus
GB2314720A (en) De-interlacing using a non-separable spatio-temporal interpolation filter
JPH07118627B2 (en) Interlaced digital video input filter / decimator and / or expander / interpolator filter
US5159453A (en) Video processing method and apparatus
AU593394B2 (en) Interpolator for television special effects system
JPH04222186A (en) Method of obtaining video signal and video frame receiver
US4745462A (en) Image storage using separately scanned color component variables
US4163257A (en) Spatial toner for image reconstitution
US5646696A (en) Continuously changing image scaling performed by incremented pixel interpolation
US5995990A (en) Integrated circuit discrete integral transform implementation
US5483474A (en) D-dimensional, fractional bandwidth signal processing apparatus
EP0700016B1 (en) Improvements in and relating to filters
US5668602A (en) Real-time television image pixel multiplication methods and apparatus
US5986707A (en) Methods and devices for the creation of images employing variable-geometry pixels
EP0847648A1 (en) System for time-varying selection and arrangement of data points for processing ntsc-compatible hdtv signals
GB2265783A (en) Bandwidth reduction employing a DATV channel
Lukacs The Personal Presence System—Hardware Architecture
US5978035A (en) Methods and devices for encoding high-definition signals by computing selection algorithms and recording in an off-line manner

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AM AT AU BB BG BR BY CA CH CN CZ DE DK EE ES FI GB GE HU IL JP KE KG KP KR KZ LK LR LT LU LV MD MG MN MW MX NO NZ PL PT RO RU SD SE SI SK TJ TT UA US UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
ENP Entry into the national phase

Ref country code: JP

Ref document number: 1997 502028

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1996919297

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1996919297

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: CA

WWW Wipo information: withdrawn in national office

Ref document number: 1996919297

Country of ref document: EP