CA2368396A1 - Copy protection for digital motion picture image data - Google Patents

Copy protection for digital motion picture image data

Info

Publication number
CA2368396A1
Authority
CA
Canada
Prior art keywords
pattern
modulation
frequency
pixels
copy
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002368396A
Other languages
French (fr)
Inventor
Babak Tehranchi
Paul W. Jones
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Eastman Kodak Co
Original Assignee
Eastman Kodak Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eastman Kodak Co filed Critical Eastman Kodak Co
Publication of CA2368396A1
Current legal status: Abandoned

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/0021 - Image watermarking
    • G06T1/0085 - Time domain based watermarking, e.g. watermarks spread over several images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/0021 - Image watermarking
    • G06T1/005 - Robust watermarking, e.g. average attack or collusion attack resistant
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording
    • H04N5/91 - Television signal processing therefor
    • H04N5/913 - Television signal processing therefor for scrambling; for copy protection
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 - Details of colour television systems
    • H04N9/12 - Picture reproducers
    • H04N9/31 - Projection devices for colour picture display, e.g. using electronic spatial light modulators [ESLM]
    • H04N9/3179 - Video signal processing therefor
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00 - General purpose image data processing
    • G06T2201/005 - Image watermarking
    • G06T2201/0064 - Image watermarking for copy protection or copy management, e.g. CGMS, copy only once, one-time copy
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording
    • H04N5/91 - Television signal processing therefor
    • H04N5/913 - Television signal processing therefor for scrambling; for copy protection
    • H04N2005/91357 - Television signal processing therefor for scrambling; for copy protection by modifying the video signal
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/76 - Television signal recording
    • H04N5/91 - Television signal processing therefor
    • H04N5/913 - Television signal processing therefor for scrambling; for copy protection
    • H04N2005/91392 - Television signal processing therefor for scrambling; for copy protection using means for preventing making copies of projected video images

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Signal Processing For Digital Recording And Reproducing (AREA)

Abstract

An apparatus and method for displaying a copy-deterrent pattern (104) within a digital motion picture in order to discourage recording of the motion picture using a video camera or other sampling recording device. A copy-deterrent pattern (104) could be, for example, one or more symbols, a random pattern, a digital watermark or a text message (106). The copy-deterrent pattern (104) comprises a plurality of pixels within each frame of the digital motion picture, and the displayed pixel intensities are modulated at a temporal frequency using modulation characteristics deliberately selected to be imperceptible to human observers while simultaneously producing objectionable aliasing in any copy made using a video camera.

Description

COPY PROTECTION FOR DIGITAL MOTION PICTURE IMAGE DATA
This invention generally relates to an apparatus for displaying a time-varying copy-deterrent pattern when projecting a digital motion picture, the copy-deterrent pattern not visible to a viewing audience but visible in a recording of the motion picture made using a video capture device such as a video camera.
Movie piracy is a cause of substantial revenue loss to the motion picture industry. Illegally copied movies, filmed during projection with video cameras or camcorders and similar devices, are a significant contributing factor to revenue loss. Even the questionable quality of copies pirated in this fashion does not prevent them from broad distribution in the "black market", especially in some overseas markets, and on the Internet. As video cameras improve in imaging quality and become smaller and more capable, the threat of illegal copying activity becomes more menacing to motion picture providers. While it may not be possible to completely eliminate theft by copying, it can be advantageous to provide display delivery techniques that frustrate anyone who attempts to copy a motion picture using a portable video camera device.
It is known to provide a distinct symbol or watermark to an original still image as a means of image or copy identification, such as in order to authenticate a copy. As examples, U.S. Patent No. 5,875,249 (Mintzer et al.), U.S. Patent No. 6,031,914 (Tewfik et al.), U.S. Patent No. 5,912,972 (Barton), and U.S. Patent No. 5,949,885 (Leighton) disclose methods of applying a perceptually invisible watermark to image data as verification of authorship or ownership or as evidence that an image has not been altered. However, where such methods identify and validate image data, they provide no direct means of protection against copying an image, such as using a conventional scanner and color printer. In contrast, U.S. Patent No. 5,530,759 (Braudaway et al.) discloses providing a visible, color correct watermark that is generated by altering brightness characteristics but not chromaticity of specific pixels in the image. But the approach used in U.S. Patent No. 5,530,759 could be objectionable if used for a motion picture, since the continuing display of a watermark on film could annoy an audience and adversely affect the viewing experience.
The above examples for still-frame images illustrate a key problem: an invisible watermark identifies but does not adversely affect the quality of an illegal copy, while a visible watermark can be distracting and annoying. With video and motion picture images, there can be yet other problems with conventional image watermarking. For example, U.S. Patent No. 5,960,081 (Vynne et al.) discloses applying a hidden watermark to MPEG data using motion vector data. But this method identifies and authenticates the original compressed data stream and would not provide identification for a motion picture that was copied using a camcorder. Other patents, such as U.S. Patent No. 5,809,139 (Girod et al.), U.S. Patent No. 6,069,914 (Cox), and U.S. Patent No. 6,037,984 (Isnardi et al.), disclose adding an imperceptible watermark directly to the discrete cosine transform (DCT) coefficients of an MPEG-compressed video signal. If such watermarked images are subsequently recompressed using a lossy compression method (such as by a camcorder, for example) or are modified by some other image processing operation, the watermark may no longer be detectable.
The invisible watermarking schemes disclosed in the patents listed above add a watermark directly to the compressed bit stream of an image or image sequence. Alternatively, there are other watermarking schemes that add the watermark to the image data itself, rather than to the compressed data representation. An example of such a scheme is given in U.S. Patent No. 6,044,156 (Honsinger et al.), which discloses a spread spectrum technique using a random phase carrier.
However, regardless of the specific method that is used to embed a watermark, there is always a concern that a watermarking method be robust, that is, able to withstand various "attacks" that can remove or alter the watermark. Some attacks may be deliberately aimed at the underlying structure of a given watermarking scheme and require detailed knowledge of the watermarking techniques applied. However, most attack methods are less sophisticated, performing common modifications to the image such as using lossy compression, introducing lowpass filtering, or cropping the image, for example. Such modifications can be made when a video camera is used to capture a displayed motion picture. These methods present a constant threat that a watermark may be removed during the recording process.
The watermarking schemes noted above are directed to copy identification, ownership, or authentication. However, even if a watermarking approach is robust, provides copy control management, and succeeds in identifying the source of a motion picture, an invisible watermark may not be a sufficient deterrent for illegal copying.

As an alternative to watermarking, some copy deterrent schemes used in arts other than video or movie display operate by modifying a signal or inserting a different signal to degrade the quality of any illegal copies. The modified or inserted signal does not affect playback of a legally obtained manufactured copy, but adversely impacts the quality of an illegally produced copy. As one example, U.S. Patent No. 5,883,959 (Kori) discloses deliberate modification of a burst signal to foil copying of a video. Similarly, U.S. Patent No. 6,041,158 (Sato) and U.S. Patent No. 5,663,927 (Ryan) disclose modification of expected video signals in order to degrade the quality of an illegal copy. As yet another example of this principle, U.S. Patent No. 4,644,422 (Bedini) discloses adding a degrading signal to discourage copying of audio recordings. An audio signal having a frequency at and above the high threshold frequency range for human hearing is selectively inserted into a recording. The inserted signal is not detectable to the listener. However, any unauthorized attempt to copy the recording onto tape obtains a degraded copy, since the inserted audio signal interacts adversely with the bias oscillator frequency of a tape recording head.
The above-mentioned copy protection schemes disclose the use of a deliberately injected signal introduced in order to degrade the quality of an electronic copy. While such methods may be effective for copy protection of data from a tape or optical storage medium, these methods do not discourage copying of a motion picture image using a video camera.
As a variation of the general method where a signal is inserted that does not impact viewability but degrades copy quality, U.S. Patent No. 6,018,374 (Wrobleski) discloses the use of a second projector in video and motion picture presentation. This second projector is used to project an infrared (IR) message onto the display screen, where the infrared message can contain, for example, a date/time stamp, theater identifying text, or other information. The infrared message is not visible to the human eye. However, because a video camera has broader spectral sensitivity that includes the IR range, the message will be clearly visible in any video camera copy made from the display screen. The same technique can be used to distort a recorded image with an "overlaid" infrared image. While the method disclosed in U.S. Patent No. 6,018,374 can be effective for frustrating casual camcorder recording, the method has some drawbacks. A more sophisticated video camera operator could minimize the effect of a projected infrared watermark using a filter designed to block infrared light. Video cameras are normally provided with some amount of IR filtering to compensate for silicon sensitivity to IR. With a focused watermark image, such as a text message projected using infrared light, retouching techniques could be applied to alter or remove a watermark, especially if the infrared signal can be located within frame coordinates and is consistent, frame to frame. A further drawback of the method disclosed in U.S. Patent No. 6,018,374 relates to the infrared light source itself. Since an infrared lamp can generate significant amounts of heat, it may not be practical to project a watermark or copy deterrent image over a large area of the display screen using only an IR source.
Motion picture display and video recording standards have well-known frame-to-frame refresh rates. In standard motion picture projection, for example, each film frame is typically displayed for a time duration of 1/24 second. Respective refresh rates for interlaced NTSC and PAL video recording standards are 1/60 second and 1/50 second. Video camera capabilities such as variable shutter speeds allow close synchronization of a video camera with film projection, making it easier for illegal copies to be filmed within a theater. Attempts to degrade the quality of such a copy include that disclosed in U.S. Patent No. 5,680,454 (Mead), which discloses use of a pseudo-random variation in frame rate, causing successive motion picture frames to be displayed at slightly different rates than nominal. Using this method, for example, frame display periods would randomly change between 1/23 and 1/25 second for a nominal 1/24 second display period. Timing shifts within this range would be imperceptible to the human viewer, but significantly degrade the quality of any copy filmed using a video camera. Randomization, as used in the method of U.S. Patent No. 5,680,454, would prevent resynchronization of the video camera to a changed display frequency. While the method of U.S. Patent No. 5,680,454 may degrade the image quality of a copy made by video camera, this method does have limitations. As noted in the disclosure of U.S. Patent No. 5,680,454, the range of frame rate variability is constrained, since the overall frame rate must track reasonably closely with accompanying audio. Also, such a method does not provide a mechanism for including any type of spatial pattern or watermark in each frame, which could be used to provide a human-readable warning message or to trace the individual copy of the film that was illegally recorded.
U.S. Patent No. 5,959,717 (Chaum) also discloses a method and apparatus for copy prevention of a displayed motion picture work. The apparatus of U.S. Patent No. 5,959,717 includes a film projector along with a separate video projector. The video projector can be used, for example, to display an identifying or cautionary message or an obscuring pattern that is imperceptible to human viewers but can be recorded using a video camera. Alternately, the video projector may even display part of the motion picture content itself. By controlling the timing of the video projector relative to film projector timing, a message or pattern can be made that will be recorded when using a video camera, but will be imperceptible to a viewing audience. The method of U.S. Patent No. 5,959,717, however, has some drawbacks. Notably, this method requires distribution of a motion picture in multiple parts, which greatly complicates film replication and distribution. Separate projectors are required for the film-based and video-based image components, adding cost and complexity to the system and to its operation. Image quality, particularly for large-screen environments, may not be optimal for video projection, and alignment of both projectors to each other and to the display surface must be precisely maintained.
Conventional methods such as those described above could be adapted to provide some measure of copy deterrence and watermarking for digital motion pictures. However, none of the methods noted above is wholly satisfactory, for the reasons stated. None of the existing copy protection or watermarking methods takes advantage of key characteristics of the digital motion picture environment that would prevent successful recording using a video camera.
While the capability for encoding "passive" invisible digital watermarks within digital image data has been developed, there is a need for more aggressive copy-deterrence techniques that can be embedded within digital motion picture data content and can take full advantage of digital projector technology.
Image aliasing is a well-known effect that results from a difference between the scan line or frame refresh rate of an electronic display or motion picture and the sampling rate of a video camera. Inherently, image aliasing imposes some constraints on the image quality of a video camera recording made from a display screen. Thus it is known that simply varying a scan or refresh rate may result in increased levels of aliasing. For example, video projectors from Silicon Light Machines, Sunnyvale, CA, use a high scan rate and complex segmented scanning sequence that can corrupt a video-taped copy by producing vertical black bars in the captured image. Similar effects are also observed when one tries to capture an image from a computer screen with a camcorder. These are the result of differences in scan rates between the display and the video camera systems. These techniques, however, offer a somewhat limited capability for protection, since scan synchronization of video camera apparatus makes it feasible to override this protection.
Moreover, aliasing caused by simple scan rate differences does not provide a suitable vehicle for display of a warning message or other pattern in a taped copy or for digital watermarking in order to identify the source of the original image.
In a fully digital motion picture system, the spectral content and timing of each displayed pixel is known and can be controlled for each frame. While there can be a standard refresh rate for screen pixels (corresponding to the 1/24 or 1/30 second frame rate used for motion picture film or video displays), there may be advantages in altering the conventional "frame-based" model for motion picture display. Each displayed pixel on the screen can be individually addressed within any frame, and its timing characteristics can be modified as needed. This capability has, however, not been used for displaying a copy-deterrent pattern.
Therefore, it can be seen that there is clearly a need for a method that allows embedding of a copy-deterrent pattern within motion picture content, where the content is projected from digital data. It would be most advantageous for such a pattern to be invisible to a viewer but recordable using a video camera.
Further, it can be seen that there is a need for a method that uses the opportunity for control of timing and of individual screen pixel content that digital motion picture technology offers in order to discourage movie piracy using a video camera.
With the above description in mind, it is an object of the present invention to provide a method and apparatus for displaying, within a projected frame of a digital motion picture, said frame comprising an array of pixels, a copy-deterrent pattern, said pattern comprising a plurality of pixels selected from said frame, said pattern not visible to a human viewer but perceptible when sampled and displayed using a video capture device. In a preferred embodiment, the pattern is modulated at a modulation rate and modulation scheme selected to maximize signal aliasing when said digital motion picture is sampled by said video capture device.
It is another object of the present invention to provide a copy-deterrent projection apparatus for projecting a digital motion picture onto a display screen, said digital motion picture comprising a sequential plurality of frames, each of said frames comprising an array of pixels, each said pixel assigned to be projected at a predetermined intensity for the duration of each said frame, said apparatus comprising:
(a) a pattern generator capable of specifying a pattern of pixels within each said frame and capable of modulating said pattern at a modulating frequency and using a modulation timing capable of being varied so as to produce a modulated pattern, said modulating frequency chosen for aliasing with the displayed digital motion picture when sampled by a video capture device, said pattern generator also capable of specifying a variable intensity level for each said pixel within said pattern of pixels;
(b) an image forming assembly capable of accepting said modulated pattern specified by said pattern generator and of projecting onto said display screen, within each said frame, said modulated pattern.
A feature of the present invention is the deliberate use of modulation frequency and modulation timing in order to obtain aliasing of the projected image when sampled using a video capture device. At the same time, however, modulation effects are not perceptible to a human viewer.
It is an advantage of the present invention that it provides an apparatus and method for obscuring an illegal copy of a projected digital motion picture, where said apparatus and method apply copy protection at the time of projection.
It is a further advantage of the present invention that it provides a method for displaying a copy-deterrent effect that is imperceptible to a viewing audience.
It is a further advantage of the present invention that it allows digital watermarking of projected digital motion picture frames using modulation of projected pixels at frequencies that are not perceptible to a viewing audience.
These and other objects, features, and advantages of the present invention will become apparent to those skilled in the art upon a reading of the following detailed description when taken in conjunction with the drawings wherein there is shown and described an illustrative embodiment of the invention.
While the specification concludes with claims particularly pointing out and distinctly claiming the subject matter of the present invention, it is believed that the invention will be better understood from the following description when taken in conjunction with the accompanying drawings, wherein:
Figure 1 is a graph showing relative sensitivity of the human eye to flicker;
Figure 2 is a graph showing the timing arrangement used for double-shuttering with conventional film projection;
Figure 3 is a graph showing the timing arrangement for a single pixel in digital motion picture projection;
Figure 4 is a graph showing a time-domain representation of a raised cosine having an oscillation frequency of 10 Hz;
Figure 5 is a graph showing a frequency-domain representation of a visible signal, flickering at 10 Hz;
Figure 6 is a graph showing a frequency-domain representation of a signal oscillating at 80 Hz;
Figure 7 is a graph showing a time-domain representation of a function oscillating at 10 Hz and sampled at 80 Hz;
Figure 8 is a graph showing a frequency-domain representation of the sampled function of Figure 7;
Figure 9 is a graph showing a frequency-domain representation of the function of Figure 7 sampled at 30 Hz;
Figure 10 is a graph showing a frequency-domain representation of an aliasing condition for a sampled, oscillating function;
Figure 11a is a graph showing a frequency-domain representation of a sinusoidally oscillating function truncated to contain only 10 cycles;
Figure 11b is a graph showing a frequency-domain representation of a sinusoidally oscillating function truncated to contain only 5 cycles;
Figure 12 is a graph showing a time-domain representation of an introduced flicker effect for a pixel;
Figure 13 is a graph showing changing pixel intensity values for n consecutive frames of a digital motion picture;
Figure 14 is a graph showing a frequency-domain representation of the changing pixel intensity values of Figure 13;
Figure 15 is a graph showing a frequency-domain representation of the intensity values of Figure 13, modulated;
Figure 16 is a graph showing a frequency-domain representation of the time-varying function of Figure 13, not modulated, but sampled;
Figure 17 is a graph showing a frequency-domain representation of the time-varying function of Figure 13, modulated and sampled;

Figure 18 is a plan view showing a digital motion picture frame having a pattern;
Figure 19 is a plan view showing a digital motion picture frame with a message as a pattern;
Figure 20 is a plan view showing a digital motion picture frame with multiple regions defined for modulation of patterns;
Figure 21 is a flow chart showing the decision process followed for the method of the present invention;
Figure 22 is a schematic block diagram showing major components of the copy-deterrent projection apparatus of the present invention;
Figure 23 is a block diagram showing the signal conversion path for a digital projection system;
Figure 24 is a schematic block diagram showing major components of an alternative embodiment of the copy-deterrent projection apparatus of the present invention; and Figures 25-28 are tables 2, 3, 4 and 5, respectively, as referred to in the specification.
The present description is directed in particular to elements forming part of, or cooperating more directly with, apparatus in accordance with the invention. It is to be understood that elements not specifically shown or described may take various forms well known to those skilled in the art.
The present invention provides a method and apparatus capable of providing copy protection and watermarking for digital motion picture display. The present invention accomplishes this purpose by introducing, as part of the displayed images, a copy-deterrent pattern that is imperceptible to a human observer, but that is clearly perceptible when captured using a video camera or related image capture device that uses sampling for image capture. In order to adequately disclose an implementation for practice of the present invention, it is first necessary to describe specific boundaries within which the method and apparatus of the present invention operate.
Sensitivity of the Human Visual System to Time-Varying Stimuli
The first boundary of interest relates to the flicker sensitivity of the human visual system. Flicker sensitivity refers to the perception of a light source with time-varying intensity (e.g., a strobe light) as a steady illumination source.

For a time-varying stimulus at a given temporal frequency and under a given set of viewing conditions, such as image display size and adaptation level, an average threshold amplitude can be identified at which the time-varying stimulus is no longer perceived as flickering (that is, the flicker-fusion threshold). Studies show that sensitivity of the human visual system to sinusoidal intensity oscillations decreases dramatically at higher temporal frequencies. (Reference is made to Kelly, D. H., "Visual Responses to Time-Dependent Stimuli: Amplitude Sensitivity Measurements," Journal of the Optical Society of America, Volume 51, No. 4, p. 422, and to Kelly, D. H., "Visual Responses to Time-Dependent Stimuli: III. Individual Variations," Journal of the Optical Society of America, Volume 52, No. 1, p. 89.)
Referring to Fig. 1, which shows the flicker-fusion threshold as a function of temporal frequency, human visual system sensitivity to flicker is maximized near the 10-cycles/sec range, drops off rapidly at just above 30 cycles/sec, and continues to drop as temporal frequency increases. For temporal frequencies above a cutoff frequency, there is essentially no perception of flicker regardless of the stimulus amplitude. This cutoff frequency occurs somewhere around 50-70 Hz for the light adaptation levels that occur in typical display systems.
Although the flicker sensitivity results shown in Fig. 1 refer to sinusoidal intensity oscillations, it is possible to derive similar curves for other types of time-varying stimuli. Of particular interest are square waves, which have distinct "ON" and "OFF" phases, as opposed to the continuously varying characteristics of sinusoids. For square waves, it is possible to compute a "duty cycle," which is the proportion of the ON interval in a full ON/OFF cycle. A typical duty cycle is 50%, meaning that the ON and OFF intervals are equal. However, it is also possible to create stimuli with longer duty cycles, so that the ON duration is greater than the OFF duration within a single cycle. In addition, other stimuli might have a non-zero light intensity during the OFF phase, so that the stimuli are never completely dark.
In general, studies have shown that increasing the duty cycle and/or increasing the intensity during the OFF phase results in reduced flicker sensitivity. It has been demonstrated that the sensitivity variations for many different stimuli can be explained by considering the amplitude of the fundamental frequency component of the stimulus waveform. (Reference is made to Kelly, D. H., "Flicker Fusion and Harmonic Analysis," Journal of the Optical Society of America, Volume 51, p. 917.)
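As a rough numerical illustration of the point just cited (an editorial sketch, not part of the original disclosure), the Python/NumPy snippet below computes the fundamental-frequency amplitude of a simple 0-to-1 pulse-train stimulus for several duty cycles; the analytic value is (2/π)·sin(π·duty), which shrinks as the duty cycle grows, consistent with the reduced flicker sensitivity noted above. The stimulus and the duty-cycle values are hypothetical choices, not data from the cited studies.

```python
import numpy as np

fs, period = 10000, 1.0                       # fine grid over one cycle of the stimulus
t = np.arange(0, period, 1.0 / fs)
for duty in (0.50, 0.75, 0.90):
    stimulus = (t < duty * period).astype(float)          # ON for `duty`, OFF for the rest
    fundamental = 2.0 * np.abs(np.fft.rfft(stimulus)[1]) / len(stimulus)
    print(duty, round(float(fundamental), 3))             # analytically (2/pi)*sin(pi*duty)
```

The printed amplitudes decrease monotonically with duty cycle, which is the mechanism Kelly's harmonic-analysis result uses to explain reduced flicker visibility for longer ON intervals.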
Relevant to the present invention, when a sequence of motion picture frames is displayed at a sufficiently high temporal frequency, a human observer does not detect flicker but instead integrates the sequence of frames to perceive the effect of images in smooth motion. However, video cameras do not use the same detection mechanisms as the human visual system. Thus, it is entirely possible for a time-varying illumination to be captured by a video camera while the human observer detects only a steady illumination.
The object of the present invention is to provide, utilizing this inherent sensitivity of the human visual system and using the ability of a digital motion picture projection system to control timing and intensity levels at each individual pixel, an apparatus and method for frustrating illegal filming of a digital motion picture using a video camera. The present invention operates by inserting a time-varying pattern within successive projected digital motion picture frames, where the time-varying pattern cannot be detected by the unaided eye but is clearly visible from a video camera.
Conventional versus Digital Motion Picture Projection
Another boundary of interest relates to the nature of motion picture projection as it has evolved using film during the past century, and to new capabilities inherent to digital motion picture projection. It is instructive to distinguish the mode of operation used by display projectors for digital motion pictures from the mode used for film projectors with conventional motion picture films.
A conventional film projection system consists generally of a high brightness arc lamp and a lens assembly that are used to illuminate and project film frames onto the display screen. Film frames are typically captured at 24 frames/sec, but projection at this same rate is undesirable as a 24 Hz frequency is within the region of high flicker sensitivity as noted in the preceding discussion. Therefore, in order to reduce the perceptible flicker of projected films, a technique known as double shuttering is used. Double shuttering increases the effective display rate to 48 frames/sec by alternately blocking and unblocking the projected light twice during the projection of each frame. This concept is shown in Fig. 2 for an ideal case of two consecutive frames, n and n+1, at intensity levels In and In+1, respectively. Double shuttering increases the presentation rate to a frequency at which flicker sensitivity is greatly reduced as compared to the sensitivity at 24 Hz, which allows a film to be viewed without flicker. It is also noted that double shuttering further improves the quality of the projected images by allowing a new frame to settle (during the first "OFF" time) before it is projected on the screen. In a practical situation, there is a finite transition time between the "ON" and "OFF" periods and therefore, the square-shaped blocks of Fig. 2 may be replaced by saw-tooth functions, for example.
In a conventional film projection system, each film frame is illuminated by a light source that has approximately constant intensity across the full extent of the frame. Moreover, each frame is sequentially projected from a film reel, and the average illumination intensity is held constant from frame to frame, as controlled by the shutter. In contrast, digital motion picture display projectors are capable of controlling, for each pixel in a two-dimensional array of pixels, multiple characteristics such as intensity, color, and refresh timing. With digital motion picture projection, the "image frame" presented to the viewer is a projection of this two-dimensional pixel array.
In a digitally projected movie, there is no need for shuttering. The projected frames consist of individual pixels, typically made up of three primary component colors (Red, Green, and Blue, abbreviated RGB) and having variable intensity, where frames are refreshed at regular intervals. This refresh rate may be 1/24 of a second or higher. The transition time for the display of new pixel values, indicated as a pixel transition period in Fig. 3, is short enough that no perceptible flicker artifacts are produced. Fig. 3 shows a typical timing arrangement for a single pixel having two different intensity values, I1 and I2, for frames n and n+1, respectively.
Because motion pictures are typically captured at 24 frames/sec, the description that follows uses a 24 Hz frame refresh rate as the fundamental rate to be used for digital motion picture projection. However, the actual refresh rate could vary. The present invention is capable of adaptation to any standard refresh rate selected. As mentioned, the object of the present invention is to provide an apparatus and method for frustrating illegal filming of a digital motion picture using a video camera, by using the ability of a digital motion picture display system to control timing and intensity levels at each individual pixel.
Sampling of Movie Content by Video Camera
A video camera operates by sampling a scene at regular time intervals. By sampling at a fast enough rate, a video camera can reproduce time-varying scenes with sufficient accuracy for the human visual system to perceive the temporally sampled data as continuous movement.

However, the complication with video camera sampling of a motion picture is that the motion picture display is not truly continuous, as is noted above.
Thus, attempting to capture a motion picture using a video camera introduces the complexity of sampling a time-varying image display using time-varying sampling apparatus. Intuitively, it can be seen that some synchronization of sampling rate to refresh rate would be most likely to yield satisfactory results.
It may be possible to adjust the sampling rate of a capturing device to provide synchronization between the video camera capture frequency and the motion picture projector frequency. Frame-to-frame synchronization of a video camera capture frequency to a motion picture projector frequency then enables illegal filming of a displayed motion picture with few, if any, imaging anomalies due to timing differences. The method and apparatus of the present invention is intended to prevent any type of adequate synchronization, thereby deliberately causing interference due to frequency differences to obscure or mark any copy of a motion picture obtained using a video camera.
The baseline sampling rates for video cameras can vary over a range of discrete values. Typical sampling rates for most video cameras commercially available are in a range between 50 and 120 Hz. For example, the PAL and NTSC video standards, conventionally used for commercially available video cameras, use discrete rates of 50 and 60 fields per second, respectively. Optionally, in some of the so-called flickerless video cameras, multiples of these base rates can be used, allowing higher sampling rates of 100 or 120 Hz, respectively. These rates are, in turn, easily convertible to the 50 and 60 fields per second replay rates that are used in most TVs and VCRs.
It must be noted that the present invention is not constrained to any assumption of video camera sampling rate being at a specific value. However, for the purpose of description, a standard, discrete sampling rate within the 50-120 Hz range is assumed. In the subsequent description, the sampling rate is represented as ξs.
With these bounds of human visual system flicker sensitivity, pixel refresh rate of the display, and video camera sampling rate as outlined above, it is next instructive to describe the tools and techniques used for analyzing and describing frequency-related phenomena in general.

Time Domain and Frequency Domain
As is well known in the signal processing arts, it is possible to describe and quantify a time-varying signal in either a time domain or in a frequency domain. The frequency domain is assuredly the less intuitive of the two. However, in order to clearly disclose the functions performed by the apparatus and method of the present invention, it is most illustrative to utilize the descriptive tools and representation of the frequency domain. (The following discussion will highlight those features most pertinent to description of the present invention. A more detailed theoretical description can be found in an upper-level undergraduate or graduate text in linear systems analysis, from which the following description can be derived. The nomenclature used in the subsequent description substantially follows the conventions used in a standard upper-level text, Linear Systems, Fourier Transforms, and Optics, by Jack D. Gaskill, published by John Wiley & Sons, New York, NY, 1978.)
The frequency domain representation is also sometimes called the signal "spectrum". Within certain constraints, the mathematical transformation between the time and frequency domains is accomplished via Fourier transformation. Using this transformation tool, the relationship between the function in the time domain, f(t), and the function in the frequency domain, F(ξ), may be written as:

F(ξ) = ∫ f(t) e^(-i2πξt) dt        (1)

f(t) = ∫ F(ξ) e^(+i2πξt) dξ        (2)

where both integrals are taken from -∞ to +∞. Here, f(t) and F(ξ) are referred to as Fourier transform pairs. It can be seen from the above equations that Fourier transformation is reversible or invertible. In other words, if F(ξ) is the transform of f(t), then f(ξ) will be the Fourier transform of F(t).
There have been a number of corresponding pairs of invertible functions derived in working between the two domains. Some of the more useful Fourier transform pairs for the present discussion are given in Table 1.

Table 1. Exemplary Fourier Transform Pairs

Time Domain                  Frequency Domain
1                            δ(ξ)
cos(2πξ0t)                   ½[δ(ξ-ξ0) + δ(ξ+ξ0)]
rect(t)                      sinc(ξ)
comb(t/b)                    |b| comb(bξ)
f(t/b)                       |b| F(bξ)
f(t-c)                       F(ξ) e^(-i2πcξ)
f(t)*g(t)                    F(ξ) G(ξ)

Notes to Table 1:
• b, c and ξ0 are constants;
• δ( ) is Dirac's delta function;
• rect[(x-c)/b] is the rectangular function of height 1 and of width b that is centered around c;
• sinc[(x-c)/b] = sin[π(x-c)/b] / [π(x-c)/b];
• comb(x/b) = |b| Σ δ(x-nb), summed over all integers n, and represents an infinite series of delta functions that are separated by b;
• Convolution is denoted by "*".

Note that multiplication in one domain corresponds to convolution in the other domain. A shift in one domain corresponds to a linear phase multiplier in the other domain (e^(-i2πcξ) represents the phase).
The functions listed in Table 1 above are ideal, mathematical functions. Such idealized functions are rarely found under actual, measured conditions.
However, functions such as these are useful for modeling and for high-level assessment of actual conditions, as will be apparent in subsequent description.
Raised Cosine as Modulation Model
It was noted above that the present invention takes advantage of differences between human eye sensitivity to a flickering pattern and video camera sensitivity to a flickering pattern. In order to describe the present invention clearly, it is beneficial to consider a model type of oscillation that is conceptually simple. For this purpose, the raised cosine function, as shown in Fig. 4, is a suitable model that illustrates how the present invention uses a modulation function to induce flicker in motion image sequences. Peak values of the raised cosine waveform would correspond to periods of highest intensity and valleys would correspond to the periods of lowest intensity on the screen.
The following is the time-domain equation for a "raised cosine" function:
f(t) = a[1 + cos(2πξm t)]        (3)

This function has an oscillation frequency ξm and an average or DC level equal to "a". The Fourier transform of f(t) may be calculated as:

F(ξ) = a{δ(ξ) + ½[δ(ξ-ξm) + δ(ξ+ξm)]}        (4)

Figs. 4 and 5 show f(t) and its spectrum, F(ξ), respectively, for a raised cosine function. For convenience, the constant amplitude multiplier, a, is omitted from the figures. The function, f(t), may be used to represent an intensity pattern on a movie screen, varying sinusoidally with respect to time. As indicated in Fig. 5, the spectrum of a raised cosine contains three components: the DC component, which is represented by a delta function at the origin, and two discrete frequency components at ±ξm (ξm = 10 Hz in Fig. 5). A region of flicker perceptibility is also indicated in Fig. 5 by the shaded region, with a range ±ξc, where ξc is the cutoff threshold frequency for flicker visibility as in Fig. 1. As the example of Figs. 4 and 5 shows, a flicker frequency, approximated by a sinusoid at 10 cycles per second, or 10 Hz, as shown in Fig. 4, has spectral frequencies ±ξm that are well within the perceptible range.
In contrast to the conditions of Figs. 4 and 5, where intensity flicker occurs at a 10 Hz oscillation rate, Fig. 6 shows what happens when the rate of intensity flicker is increased, to 80 Hz in this example. Notice that, for the conditions of Fig. 6, spectral frequencies ±ξm are not within the perceptible range. In contrast to what occurs under the conditions of Figs. 4 and 5, a viewer cannot perceive any flickering in pixel intensity if this flickering occurs at a rate of 80 Hz. In terms of Fig. 6, the left and right sidebands that represent spectral frequencies ±ξm have moved outside the perceptible range. The viewer perceives only a constant (that is, DC) intensity level.
The spectrum representations shown in Figs. 5 and 6 are idealized. In practice, actual measured modulated signals do not exhibit a spectrum that is as easily visualized. However, as will be seen subsequently, the representation given in Figs. 5 and 6 is sufficiently close for illustrating actual modulation behavior.
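A minimal numerical sketch of the raised cosine model (an editorial addition, assuming Python with NumPy): one second of f(t) = a[1 + cos(2πξm t)] with ξm = 10 Hz is evaluated on a fine time grid standing in for continuous time, and its spectrum shows only the DC term and the line at ξm, as in Fig. 5.

```python
import numpy as np

a, xi_m = 1.0, 10.0                      # DC level and oscillation frequency (Hz), as in Figs. 4-5
fs_fine = 1000.0                         # fine grid standing in for continuous time
t = np.arange(0, 1.0, 1.0 / fs_fine)
f = a * (1.0 + np.cos(2 * np.pi * xi_m * t))

spectrum = np.abs(np.fft.rfft(f)) / len(f)
freqs = np.fft.rfftfreq(len(f), 1.0 / fs_fine)
print(freqs[spectrum > 0.01])            # -> [ 0. 10.] : the DC component plus the line at xi_m
```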
Sampling Frequency Considerations
As noted above, a video camera operates by periodically sampling an image, unlike the human eye. The rate at which this sampling occurs, that is, the sampling frequency, affects how the video camera responds to a flicker pattern having a specified flicker frequency. The interplay of video camera sampling frequency and display flicker frequency must be considered in order to make effective use of the present invention.
In order to reproduce a time-varying signal, a capture system acquires samples of that signal at given instants in time. Intuitively, if the samples are "close enough" to each other, the time-varying signal function may be reproduced with great accuracy. The Shannon-Whittaker sampling theorem quantifies the above statement by indicating how close the samples have to be in order to exactly reproduce a signal function. According to this theorem, if a function, f(t), is band limited (i.e., the frequency components of F(ξ) are contained within a limited range of frequencies, ±ξmax), then it is only necessary to sample f(t) at intervals of tN = 1/(2ξmax), or smaller, in order to perfectly reproduce f(t). This theorem is sometimes referred to as the Nyquist theorem, and 2ξmax is referred to as the Nyquist frequency. Mathematically, sampling of a function in the time and frequency domains can be described as follows:

fs(t) = f(t) · (1/ts) comb(t/ts)        (5)

Fs(ξ) = F(ξ) * comb(ξ/ξs) = ξs Σ F(ξ - nξs)        (6)

where the sum is taken over all integers n, ts is the sampling interval, ξs = 1/ts is the sampling frequency, and "*" indicates convolution. It should be noted that an idealized "zero-width" sampling function, (1/ts) comb(t/ts), is used in the following description to facilitate the understanding of the underlying concepts. In practice, rectangular functions of finite width are used for sampling. The ramifications of such sampling functions will be discussed below.
According to the above equation, sampling of a function, f(t), results in the replication of its spectrum, F(ξ), at intervals ξs along the frequency axis. The replicates of F(ξ) are referred to as the spectral orders of fs(t), with F(ξ - nξs) known as the nth spectral order. As a straightforward first approximation for initial analysis, if f(t) is selected to be a raised cosine function, then the sampled function and its spectrum will take the following forms:

fs(t) = a[1 + cos(2πξm t)] · (1/ts) comb(t/ts)        (7)

Fs(ξ) = Σ (aξs){δ(ξ - nξs) + ½ δ(ξ - ξm - nξs) + ½ δ(ξ + ξm - nξs)}        (8)

where the sum is again taken over all integers n. Figures 7 and 8 depict fs(t) and Fs(ξ), respectively, for a sampling frequency ξs of 80 Hz. The constant multipliers are once again omitted for convenience.
Note that in Fig. 8, because of the high sampling rate, all spectral orders other than the 0th order are outside the region of flicker visibility. In contrast, Fig. 9 shows the spectrum of the same raised cosine function that is sampled at a lower rate of ξs = 30 Hz. It is evident that some of the frequency components of the ±1st orders have moved into the region of flicker visibility when using this lower sampling rate. If the temporally sampled version of f(t) with ξs = 30 Hz were to be subsequently displayed and viewed by a human observer, extraneous frequency artifacts (i.e., flickers) would be observed in addition to the inherent flicker of the raised cosine at 10 Hz.
Aliasing Recalling that the purpose of the present invention is to cause frequency artifacts, it can be seen that there would be advantages in causing signal aliasing, where such aliasing could not then be remedied using low-pass filter techniques.
Abasing is often the cause of visual artifacts in the display of sampled images that contain high frequency components (a familiar example resulting from aliasing is the effect by which carriage or locomotive wheels appear to rotate backwards in early movi es).
If the sampling rate, ~S, is below the Nyquist frequency, the components from higher order harmonics will overlap the 0"' order components. This phenomenon is known as aliasing. This can be readily visualized by examining Figs. 8, 9 and 10 in progression. In Fig. 8, only the fundamental or "0 order" frequency components are within the perceptible region; none of the +1 or -1 order components will be perceptible to a viewer. Moving from Fig. 8 to Fig. 9, the affect of a change in sampling frequency ~S is illustrated. Fig. 9 shows how reducing the sampling frequency can cause lower order frequency components (shown in dashed lines) to creep toward the origin, into the perceptible region. In Fig. 9, components from the +1 and -1 orders now are within perceptible range.
Fig. 10 shows an example in which a raised cosine intensity pattern has an oscillation frequency of 50 Hz. This results in spectral components at ~ 50 Hz. As noted above, the eye does not easily perceive the flicker of this pattern since this oscillation frequency ~m is near the visible threshold. Now, if the displayed pattern is captured by a camcorder that has a sampling rate ~S 30 Hz, aliased components from higher order harmonics now fall within the region of perceptibility and degrade the viewed image. This is depicted in Fig. 10 with only the ~1 St order harmonics present.

It is evident that the ~2nd spectral orders will also introduce some additional frequency artifacts into the perceptible region. For clarity, however, these orders are not shown to avoid cluttering the figure.
Aliasing occurs whenever any portion of the higher order components overlap S the 0 order frequency components, as is illustrated in Fig. I 0. It was noted above that, given the conditions shown in Fig. 9, a temporal low-pass filter could be employed to isolate only those frequency components of the fundamental order. However, it must be pointed out that abasing, as illustrated in Fig. 10, does not permit a straightforward remedy using filtering techniques.
At this point, it is instructive to re-emphasize that abasing, as is used by the present invention, occurs as a resultof sampling over discrete intervals, as performed by a video camera. The human eye does not "sample" a motion picture image in the same manner. Thus, abasing effects as illustrated in Fig. I0 occur only with respect to video camera sampling; a human viewer would not perceive any such abasing IS effect in watching the displayed motion picture itself.
The preceding discussion has focused on the interaction of oscillation and sampling frequencies from an idealized theoretical perspective. With the principles of this interaction in mind, it is now instructive to describe some more practical aspects of actual sampling conditions, in order to provide a framework for understanding how to implement the present invention.
Effects of a Finite Sampling Duration The above description of spectral frequencies and aliasing used an ideal "zero-width" comb sampling function ~ 1/ts~comb(t/ts) as a straightforward first approximation to show the interaction of oscillation and sampling frequencies.
The comb function is familiar to those skilled in the digital signal processing arts. In practice, however, sampling functions have a finite integration time duration.
For the purpose of this disclosure, this is termed a "finite sampling duration".
A good approximation of a practical function with some finite width is a rectangular function of width (i.e., duration) "d" that is used to sample the function f(t) at fixed intervals ts. In such a case, the sampled time-varying function takes the following form:

fs(t) _ .1~(t)'~~d reef dJ* r comer ~~ (9) s s that is, equivalent to f(t) times a sampling function. Its corresponding spectrum equivalent takes the following form:
Fs(~)= F(~)* Sinc~d~)~comb~~s~~ ~~5~- ~~Sinc(dn~SyF(yn~s)~ (10) JJ n=_ao Here, the result in the frequency domain can be interpreted again as series of F(~) functions that are replicated at ~s intervals. But, as equation (10) above shows, the amplitude of each nth spectral order is attenuated by a constant value, that is, by a sine( ) function evaluated at ~= dn~s. Thus, the amplitude of the ~l St orders is attenuated by sinc(d~s), the amplitude of the ~2"d orders is attenuated by sinc(2d ~S), l0 etc. It is evident that if d«ts (that is, the width of the rectangular sampling function is much smaller than the interval between samples), the attenuation due to the sine( ) function is negligible for the first few spectral orders. However, if the value of d becomes comparable to TS, all spectral orders (other than the 0'h order) will undergo attenuation dictated by the sine( ) envelope. The ramification of this phenomenon is reduced visibility of the higher spectral orders if they fall in the region of flicker perceptibility.
Time-Varying Functions of Finite Extent
Referring back to Figs. 4 and 5, the function f(t) was represented as a raised cosine extending from negative infinity to positive infinity. In practice, any time-varying function f(t) must be limited to some finite time interval. In the case of motion pictures, this finite time interval corresponds to some portion of, or the entire, presentation length of the movie. This truncation can be mathematically represented by multiplying f(t) by a rectangular function for the duration of interest.
Here, the duration (i.e., width of the rectangle) is selected to be D = n·tf, where n is an integer typically much greater than 1 and tf is an arbitrary time interval, usually representing the duration of one movie frame. Then the finite time-varying function ff(t) and its spectrum Ff(ξ) can be represented by the following:
ff(t) = f(t) · rect(t/D)        (11)

Ff(ξ) = F(ξ) * sinc(ξ/ξD)        (12)

where ξD = 1/D. Note that 2ξD is the width of the main lobe of the sinc( ) function. The main lobe contains the majority of the energy (i.e., area) of the sinc( ). Examination of the above equations reveals that if the spectral width of the sinc( ) is small compared to the spectral width of F(ξ) (which means that the width, D, of the rectangular function is large), the sinc( ) function effectively acts as a delta function. For example, the spectrum of the raised cosine intensity pattern shown in Fig. 5 would contain narrow sinc( ) functions in place of delta functions. In this case, the pure spectral components at ±ξm are replaced by broadened components at ±ξm that have the shape of a sinc( ) function. Fig. 11a shows this broadening effect for the raised cosine function truncated to contain 10 cycles (i.e., ξm = 10ξD). Fig. 11b shows the spectrum for a raised cosine function truncated to contain 5 cycles (i.e., ξm = 5ξD).
As ξD becomes comparable to the spectral extent of F(ξ) (i.e., ξD ≈ ξm), the spectral components overlap. This has no impact on the concepts developed so far, but makes visualization of the spectrum plots somewhat difficult. In a practical implementation of the present invention, the target sequence of images usually contains a large number of frames, and the spectral broadening effects are negligible.
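The broadening described by equations (11)-(12) can be checked numerically. The editorial sketch below (assuming Python with NumPy; the grid and band limits are arbitrary choices) truncates the 10 Hz raised cosine to 10 and then 5 cycles and measures the half-power width of the component near 10 Hz, which roughly doubles for the shorter truncation, as in Figs. 11a and 11b.

```python
import numpy as np

xi_m, fs_fine = 10.0, 1000.0
for cycles in (10, 5):
    D = cycles / xi_m                                   # truncation length in seconds
    t = np.arange(0, D, 1.0 / fs_fine)
    x = 1.0 + np.cos(2 * np.pi * xi_m * t)
    X = np.abs(np.fft.rfft(x, n=1 << 16))               # zero-pad for a smooth spectrum estimate
    freqs = np.fft.rfftfreq(1 << 16, 1.0 / fs_fine)
    band = (freqs > 5.0) & (freqs < 15.0)               # examine the lobe around +xi_m
    width = (freqs[1] - freqs[0]) * np.sum(X[band] >= 0.5 * X[band].max())
    print(cycles, round(float(width), 2))               # half-power width ~1.2/D Hz, doubling at 5 cycles
```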
Realistic Intensity Function
Using the raised cosine function as a model, the above description illustrates basic concepts underlying pixel modulation, frequency sampling, and aliasing, all viewed with respect to the frequency domain. The next part of this description considers a more realistic time-varying pixel intensity function f(t).
Referring to Fig. 13, there is shown an example intensity function for a pixel over n consecutive image frames. (Pixel transition time, assumed to be small relative to frame duration, is not shown.) For the example represented in Fig. 13, tf is the frame duration and parameters a0, a1, ..., an are used to indicate the height of each rectangular function (i.e., the amplitude of the intensity function for the duration of frames 0, 1, ..., n, respectively).

(13) f(t~=aprect t +airect t tf +a2rect t 2~tf +...+a"rect t n~t f tf tf tf tf =~aos(t~+a~8(t-t f)+a28(t-2~tf)+...+a~8(t-n~t f)~*rect r 1f ( I 4) s = ~aos(t)+ais(t-t f)+a28(t -2~t f)+...+a"8(t-n.r f))*h(t) .
(IS) Equations 13 and 14 are mathematically equivalent. Insight can be gained from examination of equation 14, which presents the time-varying function f(t) as the to convolution of a series of weighted and shifted delta functions with a rectangular function of width tf. In the context of digital projection of pixels, each delta function can represent the pixel intensity value for the duration of one frame.
Convolution with the rect( ) function effectively spreads (that is, interpolates) the value of each delta function over its frame duration, tf. The rect( ) function, described in this manner, can Is be considered to be an interpolation or spread function. This description uses rectangular spread functions to represent projected movie pixels. It is significant to note that other forms of spread functions can be used in place of rectangles without compromising the validity of this analysis. Equation (I S) describes the projected frame pixels in their most generic form.
The Fourier transform of f(t) from Equation (15) may be written generally as:

F(ξ) = [a_0 + a_1·e^(−i2πξ·t_f) + a_2·e^(−i2πξ·2t_f) + ... + a_n·e^(−i2πξ·n·t_f)] · H(ξ)    (16a)

and, in this particular case:

F(ξ) = [a_0 + a_1·e^(−i2πξ·t_f) + a_2·e^(−i2πξ·2t_f) + ... + a_n·e^(−i2πξ·n·t_f)] · sinc(ξ/ξ_f)    (16b)

where ξ_f = 1/t_f and H(ξ) is the Fourier transform of the interpolation function h(t). For the example of Fig. 13, the Fourier transform of a rect( ) function is a sinc( ) function, as presented in Equation (16b) above. The term in square brackets is a series of weighted linear phase functions that contains both real and imaginary parts.
In order to produce the spectrum plot of F(ξ), the amplitude of the term in square brackets must be calculated. This will result in a complicated mathematical amplitude term, A(ξ), that is multiplied by a sinc( ) function:

|F(ξ)| = A(ξ) · |sinc(ξ/ξ_f)|    (17)

It is possible to numerically or symbolically evaluate A(ξ) using scientific software packages, as is well known in the linear systems analysis arts. Fig. 14 shows a plot of |F(ξ)| for the case where n=9, t_f=1/24 and a_n coefficients that are arbitrarily selected to be: (a_0, a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8, a_9) = (1, 0.7, 0.8, 0.7, 0.6, 0.4, 0.5, 0.3, 0.4, 0.2). The plot of Fig. 14 indicates that the spectrum of this signal is nearly band limited to within ±24 Hz, but the energy of the signal becomes negligible beyond roughly ±12 Hz.
For the purposes of the present invention, it is not necessary to know the exact form of |F(ξ)| in equation 17, as this quantity depends on varying parameters such as n and a_n. Instead, it is beneficial to examine some of the general characteristics of F(ξ).
One point to note is that if A(ξ) is evaluated at different values of ξ, the highest value for A(ξ) is obtained at ξ = 0, namely A(0) = a_0 + a_1 + ... + a_n. Thus one property of A(ξ) is that its highest value occurs at the origin (that is, the DC value). This property, plus the fact that A(ξ) is multiplied by a sinc( ) function, limits the spectral spread of |F(ξ)| to within the first few side lobes of the sinc( ) function. In other words, regardless of the exact form of A(ξ), the spectrum plot of F(ξ) is band limited and follows a sinc( ) envelope (more generally, follows the envelope of the Fourier transform of the spread function, h(t)). The sinc( ) envelope is shown in dashed lines in Fig. 14.
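By way of illustration of the numerical evaluation mentioned above, the following minimal Python sketch (not part of the original disclosure; the coefficient values are simply the example parameters quoted for Fig. 14) evaluates the magnitude spectrum of Equation (16b) on a frequency grid and compares it against the sinc( ) envelope:

```python
import numpy as np

# Example parameters quoted in the text for Fig. 14 (assumed, for illustration only)
t_f = 1.0 / 24.0                        # frame duration in seconds
a = np.array([1, 0.7, 0.8, 0.7, 0.6, 0.4, 0.5, 0.3, 0.4, 0.2])  # a_0 ... a_9
xi_f = 1.0 / t_f                        # 24 Hz

xi = np.linspace(-120, 120, 2401)       # frequency axis in Hz

# Term in square brackets of Eq. (16b): a sum of weighted linear-phase terms
phase_sum = np.zeros_like(xi, dtype=complex)
for k, a_k in enumerate(a):
    phase_sum += a_k * np.exp(-1j * 2 * np.pi * xi * k * t_f)

# |F(xi)| = A(xi) * |sinc(xi/xi_f)|  (Eq. 17); np.sinc(x) = sin(pi x)/(pi x)
spectrum = np.abs(phase_sum) * np.abs(np.sinc(xi / xi_f))
envelope = a.sum() * np.abs(np.sinc(xi / xi_f))   # sinc envelope; its peak at DC equals a_0+...+a_n

print("Peak of |F| at DC:", spectrum[np.argmin(np.abs(xi))])       # equals a.sum()
print("Largest value beyond +/-24 Hz:", spectrum[np.abs(xi) > 24].max())
```

Plotting `spectrum` against `envelope` reproduces the qualitative behavior described for Fig. 14: a band-limited spectrum that decays inside the first few side lobes of the sinc( ) envelope.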
Modulating the Projected Pixel
There are a number of possible modulation schemes that can be employed in order to introduce a flickering pattern that is not perceptible to a human viewer, but that degrades a video camera copy by using the interaction of frequencies described here. For the pixel represented (without flicker) in Fig. 3, the basic concept for adding a flicker effect is illustrated in Fig. 12. Here, a frame interval of 1/24 second is further subdivided into 8 segments which are then used to provide flickering on/off intervals.

It is important to note, as shown in the example of Fig. 12, that average pixel intensity must be maintained in whatever modulation scheme is used. For example, using the timing arrangement of Fig. 12, a pixel is on for 1/2 of the total time when compared against Fig. 3. This represents a 50% duty cycle. To provide the same average intensity for the pixel, then, the pixel intensity during an on-time must be doubled.
The pulse-width modulation (PWM) technique shown in Fig. 12 gives a flicker frequency of 96 Hz, outside the perceptible range of the human eye. However, such a PWM modulated pixel appears to flicker when recorded and viewed from a typical video camera.
In Fig. 12, an idealized square wave modulation scheme is used. More generally, given a modulation function m(t), the modulated time-varying signal can be written as:

f_m(t) = f(t)·m(t) = {[a_0·δ(t) + a_1·δ(t − t_f) + a_2·δ(t − 2·t_f) + ... + a_n·δ(t − n·t_f)] * h(t)} · m(t)    (18)

For the time-varying function of Fig. 13, with a rectangular interpolation function, this becomes:

f_m(t) = {[a_0·δ(t) + a_1·δ(t − t_f) + a_2·δ(t − 2·t_f) + ... + a_n·δ(t − n·t_f)] * rect(t/t_f)} · m(t)    (19)

In practice, m(t) may be a sinusoidal function (such as the raised cosine discussed earlier) or a train of square waves with 50% duty cycle (as depicted in Fig. 12) or some other time-varying function. In order to simplify the analysis, a raised cosine modulation function is used. This modulation function is shown as dashed lines in Fig. 12. The modulated time-varying function then becomes:
f_m(t) = {[a_0·δ(t) + a_1·δ(t − t_f) + a_2·δ(t − 2·t_f) + ... + a_n·δ(t − n·t_f)] * rect(t/t_f)} · [1 + cos(2πξ_m·t)]    (20)

The Fourier transform of this signal becomes:

F_m(ξ) = A(ξ)·sinc(ξ/ξ_f) * [δ(ξ) + (1/2)·δ(ξ − ξ_m) + (1/2)·δ(ξ + ξ_m)]

       = A(ξ)·sinc(ξ/ξ_f) + (1/2)·A(ξ − ξ_m)·sinc((ξ − ξ_m)/ξ_f) + (1/2)·A(ξ + ξ_m)·sinc((ξ + ξ_m)/ξ_f)

       = F(ξ) + (1/2)·F(ξ − ξ_m) + (1/2)·F(ξ + ξ_m)    (21)

The modulated spectrum, F_m(ξ), consists of three replicates of the un-modulated signal spectrum, F(ξ), given by equation 17 and plotted in Fig. 14. One of these replicates is located at the origin and the other two, with one-half the amplitude of the original spectrum, are centered respectively around ±ξ_m. In conventional linear systems analysis terminology, F(ξ), F(ξ − ξ_m) and F(ξ + ξ_m) are sometimes referred to as the main band, right sideband and left sideband, respectively. The plot of F_m(ξ) is shown in Fig. 15 for the parameters n=9, t_f=1/24, t_m=1/48 and a_n coefficients (a_0, a_1, a_2, a_3, a_4, a_5, a_6, a_7, a_8, a_9) = (1, 0.7, 0.8, 0.7, 0.6, 0.4, 0.5, 0.3, 0.4, 0.2).
Note that in the absence of modulation, the signal spectrum only contains the first term, F(ξ). In effect, by introducing modulation into the system, we have advantageously increased the bandwidth (that is, the spectral spread) of the original signal by adding the components F(ξ ± ξ_m). It must be stressed once again that if a high enough modulation frequency, ξ_m, is selected, the left and right sidebands cannot be seen by the observer. However, the interaction of the modulation frequency with the video camera sampling frequency, ξ_s, will result in an aliased signal that is perceptible in the video camera copy.
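To make the three-replicate structure of Equation (21) concrete, the following short Python sketch (an illustration only; it reuses the assumed Fig. 14/15 parameters quoted above and, like Equation (21), simply adds the band magnitudes) builds |F_m(ξ)| from the un-modulated spectrum by adding the two half-amplitude sidebands at ±ξ_m:

```python
import numpy as np

t_f, xi_m = 1.0 / 24.0, 48.0            # frame duration and modulation frequency (assumed example values)
a = np.array([1, 0.7, 0.8, 0.7, 0.6, 0.4, 0.5, 0.3, 0.4, 0.2])
xi_f = 1.0 / t_f

def F(xi):
    """Magnitude spectrum of the un-modulated pixel signal, Eq. (16b)/(17)."""
    xi = np.atleast_1d(xi).astype(float)
    bracket = sum(a_k * np.exp(-1j * 2 * np.pi * xi * k * t_f) for k, a_k in enumerate(a))
    return np.abs(bracket) * np.abs(np.sinc(xi / xi_f))

xi = np.linspace(-150, 150, 3001)
# Eq. (21): main band plus half-amplitude left and right sidebands
F_mod = F(xi) + 0.5 * F(xi - xi_m) + 0.5 * F(xi + xi_m)

print("Main band peak (DC):", F_mod[np.argmin(np.abs(xi))])
print("Sideband peak near +48 Hz:", F_mod[np.argmin(np.abs(xi - xi_m))])
```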
Sampling the Projected Pixel
As a summary and wrap-up of the theoretical material provided above, it is instructive to illustrate, for a realistic example, the spectral characteristics of a sampled un-modulated signal and to compare these against the spectral characteristics of a sampled modulated signal.
As has been described above, if an un-modulated signal, f(t), is sampled at t_s intervals (t_s < t_f), spectral orders appear in the spectrum plot. Referring to Fig. 16, this is shown for the time-varying function of Fig. 13, using a sampling frequency of 180 Hz. Mathematically, the Fourier transform of the un-modulated and sampled signal can be written as follows:

|F_s(ξ)| = |A(ξ)·sinc(ξ/ξ_f) + A(ξ − ξ_s)·sinc((ξ − ξ_s)/ξ_f) + A(ξ + ξ_s)·sinc((ξ + ξ_s)/ξ_f) + ...|

         = |F(ξ) + F(ξ − ξ_s) + F(ξ + ξ_s) + F(ξ − 2ξ_s) + F(ξ + 2ξ_s) + ...|    (22)
If, instead of an un-modulated signal, sinusoidally modulated pixel values are sampled at a rate ξ_s = 1/t_s (as with a video camera, for example), the spectrum contains broader spectral orders. This is depicted in Fig. 17, using a sampling frequency of 180 Hz and a modulation frequency of 48 Hz. Mathematically, the spectrum of the sinusoidally modulated and sampled signal can be represented by the following:

F_ms(ξ) = F_m(ξ) + F_m(ξ − ξ_s) + F_m(ξ + ξ_s) + F_m(ξ − 2ξ_s) + F_m(ξ + 2ξ_s) + ...

        = [F(ξ) + (1/2)·F(ξ − ξ_m) + (1/2)·F(ξ + ξ_m)]

        + [F(ξ − ξ_s) + (1/2)·F(ξ − ξ_m − ξ_s) + (1/2)·F(ξ + ξ_m − ξ_s)] + [F(ξ + ξ_s) + (1/2)·F(ξ − ξ_m + ξ_s) + (1/2)·F(ξ + ξ_m + ξ_s)]

        + [F(ξ − 2ξ_s) + (1/2)·F(ξ − ξ_m − 2ξ_s) + (1/2)·F(ξ + ξ_m − 2ξ_s)] + [F(ξ + 2ξ_s) + (1/2)·F(ξ − ξ_m + 2ξ_s) + (1/2)·F(ξ + ξ_m + 2ξ_s)]

        + ...    (23)
The fundamental order is given by the terms on the second line of equation 23, the ±1st orders appear in the third line, etc. According to the Nyquist Theorem, in order to produce aliasing, the sampling frequency must be less than twice the bandwidth of the signal. If the spectral spread, W, of F(ξ) is taken into account, this criterion becomes ξ_s < 2(ξ_m + W/2).
It is instructive to note that in the equations derived above, care has been taken to express intensity, modulation, and interpolation functions in a generic form, using f(t), m(t), and h(t). This allows these equations to be applied to any type of real-world function as it may be encountered in digital projection of frames and pixels. In addition, parametric representation of quantities such as modulation frequency ξ_m, sampling frequency ξ_s, and bandwidth W allows straightforward manipulation using the underlying concepts outlined above.

Therefore, in a practical application for deliberately causing aliasing effects in digital motion picture projection, it is sufficient to employ best approximations of f(t), m(t), h(t), and other variables and, with the aid of linear systems and Fourier analysis tools as developed above, to determine the best conditions for maximum aliasing.
Exemplary Tables and Calculations
For the purpose of describing the method and apparatus of the present invention in detail, it is instructive to provide a summary listing of those variables that must be considered in order to deliberately cause aliasing artifacts when using a video camera to record a digital motion picture. Using the naming conventions employed in the above description, the following parameters must be considered:
(a) Sampling frequency ξ_s. This parameter is not in the control of any copy protection method or apparatus. The method of the present invention will be effectively implemented by making some assumptions on likely sampling frequencies that could be used by a video camera when attempting to make a copy of a displayed motion picture. Standard sampling rates widely used by commercially available video cameras include 1/60 second (the standard NTSC rate) and 1/50 second (the standard PAL rate), and so-called "flickerless" speeds of 1/100 second and 1/120 second. Other rates are possible; the method of the present invention can be employed where image aliasing appears at the most widely used sampling frequencies as well as in cases where non-standard sampling rates are utilized.
(b) Bandwidth, W, of the unmodulated signal or the one-sided bandwidth, W/2, as is used in some instances throughout this application to denote the spectral extent in either the positive or negative frequency domains.
As an upper limit, the bandwidth of the unmodulated signal is the displayed frame rate. As noted in the above description, the frame rate has a likely value of 24 Hz. However, the present invention can be used with other frame rates.
It must be noted that the "unmodulated" signal, as this term is used here, refers to continuous pixel intensity as is shown in Fig. 13. The actual bandwidth value is not controlled by the present invention; instead, W is dependent on the frame rate and on the scene content, frame to frame. Thus, a practical limit of bandwidth less than or equal to the frame rate is sufficient for purposes of the present invention.
(c) Modulation frequency ξ_m. This parameter is controlled using the method of the present invention, taking into consideration parameters (a) and (b) above and also considering practical constraints imposed by intensity limitations of the projection system. For any individual pixel or set of pixels, a modulation frequency ξ_m is selected so that the effective bandwidth of the modulated signal is such that aliasing will be perceptible when sampled using a video camera.
The method of the present invention, then, consists in careful selection of a modulation frequency that is not visible to the unaided eye, but causes aliasing when sampled by a video camera.
As was noted with respect to Fig. 14, a result of using a finite sampling duration is the attenuation of higher order spectral components. For this reason, it is generally considered most practical to consider the first order (±1 order) spectral components for creating the aliasing effects of the present invention. Second order spectral components may also have sufficient energy to cause visible aliasing in some instances.
Referring to the examples illustrated by Tables 2, 3, 4, and 5 (see Figures 25-28, respectively), there are shown values of spectral spread for various modulation frequencies, using "flickerless" conditions of sampling frequency (i.e., ξ_s = 120 Hz), varied for modulated signal bandwidth.
Recall from the above discussion that the spectral spread of the zero order modulated signal with modulation frequency ξ_m and bandwidth W ranges from −(ξ_m + W/2) to +(ξ_m + W/2). Table 6 below summarizes the spectral spread values for different spectral orders. In the example of Fig. 15, ξ_m = 48 Hz and W/2 = 12 Hz, so that the spectral spread of the zero order component, according to Table 6, is −60 to +60 Hz.
Aliasing due to the +1st spectral order begins as soon as the following is true:
|ξ_s − (ξ_m + W/2)| < (ξ_m + W/2)    (24)

Solving the inequality for ξ_m results in the lower limit for the modulation frequency:

ξ_m > (ξ_s − W)/2    (25)

Aliasing continues until the following condition holds:

−ξ_s + (ξ_m − W/2) > −(ξ_m − W/2)    (26)

Solving inequality (26) for ξ_m results in the upper limit for the modulation frequency. Thus, the condition for modulation frequency to produce first order aliasing can be expressed as follows:

(ξ_s − W)/2 < ξ_m < (ξ_s + W)/2    (27)

Similar calculations can be carried out to determine the range of modulation frequencies that can produce aliasing for other orders. Table 7 summarizes these results for the first few orders, using the spectral spread values of Table 6.
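As a check on these ranges, the following small Python sketch (illustrative only; the sampling frequency and bandwidth are assumed example values, not part of the original disclosure) evaluates the Equation (27) bounds and their higher-order analogues, i.e. the pattern summarized in Table 7 below:

```python
def aliasing_ranges(xi_s, W, max_order=2):
    """Modulation-frequency ranges (Table 7 pattern) that can produce aliasing
    for spectral orders +/-1 ... +/-max_order, given sampling frequency xi_s
    and two-sided signal bandwidth W, all in Hz."""
    ranges = {}
    for k in range(1, max_order + 1):
        for sign in (+1, -1):
            lo = (sign * k * xi_s - W) / 2.0
            hi = (sign * k * xi_s + W) / 2.0
            ranges[f"{'+' if sign > 0 else '-'}{k} order"] = (lo, hi)
    return ranges

# Example: "flickerless" camcorder sampling at 120 Hz with W/2 = 12 Hz (assumed values)
for order, (lo, hi) in aliasing_ranges(xi_s=120.0, W=24.0).items():
    print(f"{order}: {lo:.1f} Hz < xi_m < {hi:.1f} Hz")
```

For the +1st order with ξ_s = 120 Hz and W = 24 Hz, this yields 48 Hz < ξ_m < 72 Hz, matching the Table 7 expression (ξ_s − W)/2 < ξ_m < (ξ_s + W)/2.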

Table 6. Spectral spread of the fundamental, ±1st and ±2nd orders for a signal with bandwidth W and modulation frequency ξ_m.

Spectral Range    From                      To
0 order           −(ξ_m + W/2)              +(ξ_m + W/2)
+1st order        ξ_s − (ξ_m + W/2)         ξ_s + (ξ_m + W/2)
−1st order        −ξ_s − (ξ_m + W/2)        −ξ_s + (ξ_m + W/2)
+2nd order        2ξ_s − (ξ_m + W/2)        2ξ_s + (ξ_m + W/2)
−2nd order        −2ξ_s − (ξ_m + W/2)       −2ξ_s + (ξ_m + W/2)

Table 7. Modulation frequency range that can produce aliasing for different orders

Aliasing order    Modulation Range
+1st order        (ξ_s − W)/2 < ξ_m < (ξ_s + W)/2
−1st order        (−ξ_s − W)/2 < ξ_m < (−ξ_s + W)/2
+2nd order        (2ξ_s − W)/2 < ξ_m < (2ξ_s + W)/2
−2nd order        (−2ξ_s − W)/2 < ξ_m < (−2ξ_s + W)/2

It must be noted that the aliasing conditions summarized in Table 7 are necessary---but not sufficient---conditions for visible aliasing. In order to obtain the necessary and sufficient conditions, it is necessary to look more closely at the actual frequency spread within each of the spectral orders. This is because it is unlikely that the entire spectral range of the modulated signal (i.e., ±(ξ_m + W/2)) is populated with spectral components. To illustrate this point, re-examine Figs. 14 and 15. If W/2 and ξ_m are selected to be 12 and 48 Hz respectively, the modulated signal of Fig. 15, by definition, has a bandwidth of ±(ξ_m + W/2) = ±60 Hz, as was noted above. But, as evident from the figure, the energy of the spectrum is not uniformly distributed within the band of ±60 Hz. In fact, most of the energy is concentrated around three spectral frequencies of 0, +48 and −48 Hz. Thus, if this signal is sampled, only the components around the main band, the left sideband and the right sideband of the higher orders have sufficient energy to create strong aliasing effects. In a practical system design, care must be taken to ensure that aliasing is caused by the portions of the spectrum with a significant amount of energy (i.e., by the main, left and right sidebands).
Tables 2-5 provide spectral spread calculations for different values of modulation frequencies, ξ_m. Entries in each table indicate the spectral spread of the main, left and right bands of the first and second spectral orders for the modulated and sampled signal. For example, in Table 2, the spectral spread of the left sideband of the +1st order for a signal with W/2 = 30, ξ_s = 120 and ξ_m = 50 Hz is calculated to be from 40 to 100 Hz. The results for the corresponding negative spectral orders can be obtained by reversing the signs of the entries. Also note that aliasing only becomes visible if the spectral spread of these higher orders falls somewhere within the frequency range ±40 Hz.
Using the straightforward calculations described above, any number of tables for a given sampling frequency ξ_s and bandwidth can be derived, over any number of modulation frequencies, ξ_m. It is important to note that Tables 2-5 each tabulate frequency spread values for a different estimate of W/2, namely, W/2 of 30, 24, 12, and 5 Hz, respectively. Alternatively, an exact value of W/2 could have been obtained by examining the Fourier transform of the time-varying function if the intensity values of the projected pixels within the target frames were precisely known. The selection of suitable modulation frequencies for causing aliasing can be accomplished by inspection of such tables.
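The entries of such a table follow directly from the band-centering rules used here; the sketch below (an illustration, not taken from the patent, with assumed parameter values) prints the main, left, and right band spreads of a given spectral order over a range of modulation frequencies:

```python
def band_spreads(xi_s, half_W, xi_m, order):
    """Return (from, to) spreads in Hz for the main band and the left/right
    sidebands of the given positive spectral order (1, 2, ...)."""
    center = order * xi_s
    main = (center - half_W, center + half_W)
    left = (center - xi_m - half_W, center - xi_m + half_W)
    right = (center + xi_m - half_W, center + xi_m + half_W)
    return main, left, right

# Example in the style of Table 2: W/2 = 30 Hz, sampling at 120 Hz (assumed values)
for xi_m in range(50, 211, 10):
    main, left, right = band_spreads(xi_s=120.0, half_W=30.0, xi_m=xi_m, order=1)
    print(f"xi_m={xi_m:3d} Hz  +1st order: main {main}, left {left}, right {right}")
```

For ξ_m = 50 Hz this reproduces the worked example above: the +1st order left sideband spans 40 to 100 Hz.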
Table 2 lists spectral spread values for 1st and 2nd spectral orders given a one-sided bandwidth W/2 of 30 Hz, sampling frequency ξ_s of 120 Hz, for modulation frequencies from 50 Hz to 210 Hz.
Table 3 lists spectral spread values for 1st and 2nd spectral orders given a one-sided bandwidth W/2 of 24 Hz, sampling frequency ξ_s of 120 Hz, for modulation frequencies from 50 Hz to 210 Hz.
Table 4 lists spectral spread values for 1st and 2nd spectral orders given a one-sided bandwidth W/2 of 12 Hz, sampling frequency ξ_s of 120 Hz, for modulation frequencies from 50 Hz to 210 Hz.
Table 5 lists spectral spread values for 1st and 2nd spectral orders given a one-sided bandwidth W/2 of 5 Hz, sampling frequency ξ_s of 120 Hz, for modulation frequencies from 50 Hz to 210 Hz.
For the purpose of finding a suitable modulation frequency ξ_m under given conditions, the preferred approach is to consider worst-case conditions. Among the factors to be considered is bandwidth. A narrow bandwidth is a worst-case condition, since it is more difficult to cause aliasing with a narrow bandwidth signal.
Table 5, with values for a one-sided bandwidth W/2 of 5 Hz, represents worst-case conditions among Tables 2-5. Using just one example from Table 5, it can be seen that aliasing at frequencies within the visible range can be caused using modulation frequencies ξ_m between 70 and 165 Hz. The optimum values appear in the middle of this range.
For example, with a modulation frequency ξ_m of 110 Hz, the left sideband of the 1st order is centered roughly around 10 Hz. This creates a distinctly visible aliasing condition, no matter how narrow the bandwidth. Other modulation frequency ξ_m values near this 110 Hz frequency are also likely candidates for causing aliasing.
Note that negative spectral orders (−1 order, −2 order) are not listed in Tables 2-5. However, these values can be expressed simply by changing the sign for each spectral value listed.
Simple arithmetic calculations are all that is needed to obtain the values that populate successive rows of Tables 2-5. For the 1st and 2nd spectral orders in Tables 2-5, the location of the main band is determined by the sampling frequency ξ_s. Since the sampling frequency is 120 Hz, the 1st spectral order main band is centered at 120 Hz (ξ_s) and the 2nd spectral order main band is centered at 240 Hz (2ξ_s), respectively. The From/To spread of the main band and of both left and right side bands is set by the bandwidth W, which differs for each of Tables 2-5. The left and right side bands are centered using the following simple calculation:
Center of main band − Modulation frequency ξ_m = Center of left band    (28)

Center of main band + Modulation frequency ξ_m = Center of right band    (29)

It should be noted that the previous analysis of aliasing assumed the simplest case, which is the use of a sinusoidal modulation frequency. In actual practice, it may prove advantageous to employ a different type of modulation signal waveform, as we will discuss in the next section.
Method of Preferred Embodiment
The present invention temporally modulates the displayed pixel intensities in such a way that objectionable patterns will be produced when the displayed pixel intensities are recorded with a video camcorder. More specifically, the displayed pixel intensities are modulated so that an observer of a displayed movie will not see any degradation in image quality, but any attempt to capture the displayed movie with a camcorder will result in aliased temporal frequency components that will be readily visible when the camcorder copy is subsequently viewed. According to the present invention, individual pixels or groups of pixels are modulated in various ways to produce specific spatial patterns in the video copy and to prevent a video pirate from circumventing the degradations that are produced by the patterns. The key aspects of the present invention are 1) the spatial arrangements of pixels that undergo temporal modulation; 2) the temporal modulation signal waveform; and 3) the temporal modulation frequency (or frequencies). We now discuss each of these aspects.
Referring to Fig. 18, there is represented an arbitrary frame 100 of a digital motion picture. Frame 100 comprises an array of pixels 102 that display the scene content. To display scene content in color, each individual pixel 102 actually comprises a red pixel 102R, a green pixel 102G, and a blue pixel 102B, where the red, green, and blue color pixels are visually overlapped and intensities are varied using color representation techniques well known in the imaging arts. However, for simplicity of description, the model of a single, generic pixel 102 is used here as a generalization. The description provided here refers to individual RGB color components only when necessary.
As Fig. 18 shows, a spatial pattern 104 of pixels 102 within frame 100 can be identified for modulation using the method of the present invention. There are numerous options for selecting and modulating one or more patterns 104 during display of a digital motion picture. Chief among selection options for specifying pattern 104 composition are the following, including combinations of the following:

(1a) random arrangement of pattern 104. In a preferred embodiment, a random selection of pixels 102 is used to produce a pattern 104 that is optimal for obscuring a video camera copy of the digital motion picture. Pattern 104 can be changed for each frame 100 or for each set of n frames. The use of a random modulated pattern 104 that is constantly changing obviates the use of temporal filters or other image processing techniques for re-creating an acceptable image from the degraded video copy.
(1b) arrangement of pattern 104 as a text message. In another preferred embodiment, pattern 104 could be employed as a "bitmap" for a message 106 comprising text and symbols, as shown in Fig. 19. Message 106 could be the same for each frame 100, such as a message that displays the name of the theater where the projection apparatus is located, but concurrently part of message 106 could be changed every n frames to display time-varying information such as the date and time. Message 106 could also display a simple message such as "ILLEGAL COPY" or "STOLEN," or a more detailed message could be displayed, such as a reward notice or other incentive for return of the illegal copy.
(1c) arrangement of pattern 104 as a pictorial image, possibly as part of an animation. Pattern 104 could be embodied as an image intended to obscure or provide information. Animation techniques could be employed to generate an informational or annoying pattern 104.
(1d) arrangement of pattern 104 as a watermark message. A watermark pattern 104 could comprise a plurality of modulated pixels 102 within various parts of frame 100. For example, an algorithm could be used for determining which pixels 102 are used to create a digital watermark, assigning a spatial distribution to the watermark. Cryptographic methods could be employed, in conjunction with the algorithm used for pixel 102 selection, to securely encode watermark information. In order to recover a message contained within a watermark pattern 104, a decryption key may be required for deciphering contents in accordance with the watermark embedding algorithm.
(1e) entire frame 100 considered as pattern 104. In certain implementations, it may be advantageous to modulate every pixel 102 within frame 100, typically with the same modulation frequency for every pixel.

However, if such a method were used, it would then be preferable to change modulation frequency periodically or randomly, to prevent synchronization of video camera timing with modulation frequency.
The above techniques for pattern 104 selection are just some of the more likely techniques that could be used, all within the scope of the present invention.
There are some practical limitations that may constrain the modulation of an individual pixel within the chosen pattern 104. This is because the process of modulating the pixel intensity inherently results in a lower average intensity for a given peak intensity value. For example, if pixel 102 corresponds to a very bright displayed value, it may not be possible to modulate pixel 102 because the peak intensity that is needed to maintain the average intensity for pixel 102 may exceed the projector capabilities. Thus, it may be necessary to modulate other nearby pixels 102 or other areas of frame 100, where the peak intensity requirements can be met. Thus, pixels in an image frame may be analyzed according to a peak intensity criterion and pixels meeting the criterion further determine the pattern that is subject to the temporal modulation. This criterion may be pixels not exceeding a certain brightness level. Additionally, there may be some types of scene content or locations within the scene for which induced flicker may be perceptible at higher modulation frequencies than average. This may necessitate applying modulation to other regions of a displayed frame.
Some of these limitations are addressed in the present invention by choosing the proper mode for the temporal modulation. In a preferred embodiment, the modulation signal waveform is a sinusoid, as we described previously in the analysis of aliasing. A sinusoidal function will minimize the spectral extent of the frequency sidebands that are produced by the modulation, which makes it somewhat easier to place the sidebands at the desired frequency position. However, many projection systems may not be capable of modulating the pixel intensities according to a pure sinusoidal waveform, so another preferred embodiment is to approximate a sinusoidal waveform using a rectangular or sawtooth modulation signal waveform. In a preferred embodiment using a rectangular modulation waveform, a 50% duty cycle is used, which requires a doubling of the peak pixel intensity to maintain the same average intensity as an unmodulated signal. In still another preferred embodiment, the rectangular waveform may not be completely dark during the OFF period, in order to reduce the peak pixel intensities that are required to maintain a desired average intensity. This potentially allows for more pixels to be modulated when a display device has a limited maximum intensity. The tradeoff is that the resulting aliased components will not be as severe as a full ON/OFF modulation. Another useful variation of the modulation waveform is to use a duty cycle of more than 50%, which also lessens the peak intensity that is required to maintain the same average intensity.
Any number of other possible modulation waveforms can be employed, all within the scope of the present invention.
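As a rough illustration of the intensity bookkeeping described above (a sketch under assumed definitions, not part of the patent text), the peak intensity required for a rectangular modulation to preserve a given average can be computed from the duty cycle and the relative level during the OFF period:

```python
def required_peak(avg_intensity, duty_cycle, off_level_fraction=0.0):
    """Peak ON intensity needed so that a rectangular modulation preserves the
    original average intensity.  duty_cycle is the ON fraction (0..1];
    off_level_fraction is the OFF level as a fraction of the peak (0 = fully dark)."""
    effective = duty_cycle + (1.0 - duty_cycle) * off_level_fraction
    return avg_intensity / effective

print(required_peak(100.0, 0.5))        # 200.0 -> a 50% ON/OFF duty cycle doubles the peak
print(required_peak(100.0, 0.75))       # ~133.3 -> a duty cycle above 50% lessens the peak demand
print(required_peak(100.0, 0.5, 0.4))   # ~142.9 -> a partially dark OFF period also helps
```

The three cases echo the tradeoffs named in the text: the gentler the modulation, the lower the required peak, but the weaker the resulting aliased components.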
In addition to the basic shape of the modulation signal waveform, there are a number of temporal modulation options for pattern 104 as part of the present invention. These temporal modulation options include the following, and combinations of the following:
(2a) using a single modulation frequency. In a preferred embodiment, a single modulation frequency is used to modulate the pixels that comprise pattern 104. It is advantageous to change the modulation frequency periodically or randomly, in order to frustrate attempts to synchronize video camera recording equipment to the modulation rates in use.
(2b) using multiple modulation frequencies. In another preferred embodiment, two or more different modulation frequencies are used in two or more regions 108 of frame 100, as is shown in Fig. 20. The use of different modulation frequencies within a frame 100 allows for video cameras with different sampling rates to be simultaneously affected to the maximum extent. Even with a single camera operating at a single sampling rate, the use of two or more modulation frequencies in different regions will introduce patterns that flicker at different rates, which can be highly objectionable. When multiple modulation frequencies are used within the same frame, it is extremely difficult to synchronize video camera recording equipment to all modulation rates simultaneously. However, it again may be advantageous to change the modulation frequencies periodically or randomly, in order to further frustrate attempts to synchronize the video camera to the modulation rates in use.
(2c) modulation of pixels 102R, 102G, 102B in pattern 104. Pixels in each R, G, and B color plane can be modulated independently of one another or synchronously, within the scope of the present invention. It may be advantageous to modulate pixels within only one or another color plane to simplify apparatus design, for example.
(2d) using frequency modulation techniques as a form of digital watermarking. In addition to obscuring any video camera copy, modulation of pattern 104 can also be used to encode an information signal. Changes in modulation frequency can thereby be used as an encoding technique for copy watermarking and for information such as location, time and date of projection, identifying number of the film copy sent, and related information.
(2e) using pulse-width modulation. This variation on a frequency modulation technique uses manipulation of the duty cycle of pixels 102. Pulse-width modulation can be particularly useful where intensity levels needed for pixel 102 modulation may not be possible for a projector to achieve at 2X, as is needed for a 50% duty cycle modulation. Using this technique, however, the average intensity must be preserved for each modulated pixel 102.
(2f) using amplitude modulation. This modulation technique can employ the inherent capability for digital cinema to control intensity of each pixel 102 over a range. An amplitude modulation scheme would enable encoding of watermark information within a sequence of variable intensity values.
It would also be feasible to combine or to alternate any of methods (2a) through (2f) to implement a hybrid modulation scheme for digital watermarking, within the scope of the present invention.
Once selections have been made for the spatial arrangement of pattern 104 (as in methods (1a)-(1e)) and the temporal modulation waveform and mode (as in methods (2a)-(2f)), it is necessary to choose the specific modulation frequency (or frequencies). In general, a reasonable design approach is to select the modulation frequency so that one of the first-order side bands of the sampled signal is centered in the frequency range of approximately 10 to 30 Hz. This produces an aliased component in the sampled signal that is in the peak sensitivity range of the human visual system as shown in Fig. 1. From equations 28 and 29, we can see that this design approach is equivalent to satisfying the following equation:

10 Hz ≤ |Sampling frequency ξ_s − Modulation frequency ξ_m| ≤ 30 Hz    (30)

For example, if the sampling frequency ξ_s of the specified camcorder is 120 Hz, then a modulation frequency of either 100 Hz or 140 Hz will place the center of a first-order side band at 20 Hz, thus producing a strong aliased component in the sampled video signal. Since many camcorders use a sampling frequency of either 60 Hz or 120 Hz, the selection of a modulation frequency at 90 Hz will produce an aliased component at 30 Hz for either sampling frequency. In this way, a single modulation frequency can produce the desired effect regardless of the particular camcorder that is used.
However, as described in methods (2a)-(2b), it may be advantageous to use different modulation frequencies, either for different regions within a single frame or for a given region across multiple frames, to provide an even greater deterrent to video piracy. For example, using modulation frequencies of 80 Hz and 100 Hz for different regions in a frame 100 will produce aliased components at 20 Hz and 40 Hz regardless of whether the sampling frequency is 60 Hz or 120 Hz. In this way, there is always an aliased component in the camcorder video at 20 Hz, which is near the peak flicker sensitivity, thus producing a highly objectionable pattern. In other applications, it may be desirable to select a modulation frequency that moves the aliased component to very low temporal frequencies, say less than 10 Hz. The visual appearance of a slowly varying pattern may not be as visually objectionable as a rapidly varying pattern, but if pattern 104 represents a text message, the slowly varying pattern may be more easily comprehended. Finally, when changing the modulation frequency (or frequencies) periodically or randomly to prevent the synchronization of video recording equipment, the amount of change from the preferred frequency (or frequencies) does not need to be extremely large. Even small changes, such as ±5 Hz, will be sufficient to prevent continuous synchronization.
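The frequency bookkeeping behind these examples is simple enough to script; the sketch below (illustrative only, with assumed candidate values, and considering only the component that folds closest to DC) folds a modulation frequency against each candidate camcorder sampling rate to find where the first-order aliased component lands:

```python
def aliased_component(mod_freq, sample_freq):
    """Frequency (Hz) at which the lowest-frequency aliased component of a
    sinusoidal modulation appears after sampling: the modulation frequency
    folded against the nearest multiple of the sampling frequency."""
    k = round(mod_freq / sample_freq)
    return abs(mod_freq - k * sample_freq)

for mod in (80, 90, 100, 110, 140):
    landings = {fs: aliased_component(mod, fs) for fs in (60, 120)}
    print(f"modulation {mod:3d} Hz -> aliased component (Hz) per sampling rate: {landings}")

# e.g. 90 Hz lands at 30 Hz for both 60 Hz and 120 Hz sampling, while 80 Hz and
# 100 Hz land at 20 Hz or 40 Hz depending on which camera sampling rate is in use.
```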
Summary of steps for implementation of method
The basic steps for implementing the method of the present invention are as follows:

1. Identify pattern 104 to be used. Decide upon a strategy for displaying pattern 104, using options (1a)-(1e) given above, or some other technique for pattern 104. This decision depends, in large part, on the purpose for which this invention is to be applied. For example, when maximally obscuring any copied movie content is the goal, as in the preferred mode of the present invention, a randomly changing pattern 104 has advantages.
2. Select the appropriate mode for temporal modulation. Options (2a)-(2f) given above provide the preferred options for temporal modulation available in implementing the present invention.
3. Select the appropriate modulation frequency or frequencies. Suitable modulation frequencies can be identified using the techniques described above and those used to generate Tables 2-5. As was noted, the best option is generally to employ first order frequencies for aliasing; however, there can be applications where second order aliasing has advantages.
The general flow diagram for implementation of the copy-deterrent pattern is shown in Fig. 21. In a Frame Selection step 200, n consecutive frames 100 are selected for implementation of the copy-deterrent pattern. Next, in a Pattern Selection step 202, specific pixel 102 locations within copy-deterrent pattern 104 are selected.
As mentioned above, pixels 102 for modulation may be simply selected at random or may comprise a text or encrypted message. In a Pixel Value Calculation step 204, the intensity values of selected pixels 102 (that make up copy-deterrent pattern 104 and are spanned over n frames 100) are calculated. This calculation may simply entail computation of individual R, G and B values or may include computation of an average or hybrid luminance or chrominance measure. In a Decision step 206, it is made certain that output intensity characteristics/limitations of the projection system do not prevent the selected pixels 102 from being modulated and projected. If projection is not possible, new pixels 102 must be selected. In a Bandwidth Calculation step 208, the bandwidth of the selected pixel intensity pattern (that is spanned over n frames) is calculated and/or estimated. As noted earlier, this bandwidth is less than the frame 100 projection rate but its exact value depends on the movie content (i.e., number of selected frames 100 and the pixel 102 intensity values within the selected frames 100). Since the number of frames 100 and the intensity values of the selected pixels 102 are precisely known, bandwidth calculations may be carried out by taking the Fourier transform of the selected pixels 102 and examining its spectrum. However, since this process may require a considerable amount of computation, a simple estimate of the bandwidth (e.g., W/2 ≈ 5 Hz for a very conservative estimation) may be used.
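For the Bandwidth Calculation step, one direct way to estimate W/2 when the selected pixel intensities are known is to examine the discrete Fourier transform of the intensity sequence across the n frames. The following minimal sketch (an illustration with assumed intensity values and an assumed energy threshold, not part of the patent text) estimates the one-sided bandwidth as the frequency below which most of the spectral energy is contained:

```python
import numpy as np

def one_sided_bandwidth(intensities, frame_rate=24.0, energy_fraction=0.95):
    """Estimate W/2 (Hz) as the smallest frequency containing the given
    fraction of the spectral energy of the per-frame intensity sequence."""
    x = np.asarray(intensities, dtype=float)
    spectrum = np.abs(np.fft.rfft(x - x.mean())) ** 2       # mean removed, so the DC term is negligible
    freqs = np.fft.rfftfreq(len(x), d=1.0 / frame_rate)
    cumulative = np.cumsum(spectrum) / spectrum.sum()
    return freqs[np.searchsorted(cumulative, energy_fraction)]

# Example: the intensity values quoted for Fig. 13/14, treated here as one pixel's
# per-frame intensities over 10 frames projected at 24 frames per second
pixel_track = [1, 0.7, 0.8, 0.7, 0.6, 0.4, 0.5, 0.3, 0.4, 0.2]
print("Estimated W/2:", one_sided_bandwidth(pixel_track), "Hz")
```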
The next step is a Sampling Frequency Selection step 210. This can be selected to be one of several standard sampling rates (e.g., 50, 60, 100, 120 Hz, etc.) used in today's camcorders. It is also possible to select one sampling frequency value for the first n1 set of frames 100 and select a different sampling rate for the next n2 set of frames 100. Alternatively, or in conjunction with the above, different sampling rates for different regions 108 within the frame 100 may be used to carry out the calculations. This way, copy-deterrent pattern 104 would affect a wider variety of camcorders. In a Modulation Selection step 212, the appropriate modulation scheme (or a combination of them) as is outlined in (2a)-(2e) above is used at the appropriate modulation frequency to produce aliasing for the chosen sampling rate. In order to broaden the effectiveness of this technique, a range of modulation schemes and frequencies may be used to affect the same set of pixels 102 but in different sets of "n" frames 100. Finally, once the appropriate modulation scheme and frequency is selected, the necessary information is supplied to image-forming assembly 16 in a Modulate step 214 in order to affect the projection of pixels 102.
Apparatus of Preferred Embodiment
Referring to Fig. 22, there is shown a block diagram of a copy-deterrent projection apparatus 10 with an arrangement of components for implementing the present invention in a preferred embodiment. Image data for projection of each frame 100 is typically provided in compressed form and may be transmitted to the projecting site or provided on storage media. In any event, compressed image data is input to a decompression circuit 12. Decompression circuit 12 decompresses frame 100 data and provides this data to a display logic assembly 14. Display logic assembly 14 formats the image data for frame 100 into pixel array format and provides the necessary color correction, image resizing, and related functions. Then the data is passed to an image forming assembly 16 for display onto a display screen 20.
The decompression and display sequence and apparatus just described are familiar to those skilled in the digital motion picture projection arts and may be embodied using a number of different types of components.

Pattern data and control parameters are provided to a pattern generator/modulator assembly 18 for generating and modulating pattern 104. Pattern data and control parameters can be provided from a number of sources. For example, pattern data and control parameters can be provided by the motion picture supplier, provided along with the compressed image data. This arrangement would put the movie supplier in control of pattern 104 generation for copy protection. At the other extreme, copy protection could be solely in the domain of the projection site itself. In such a case, pattern data and control parameters can be provided by optional pattern logic circuitry 22, indicated by a dashed box in Fig. 22, or can be generated on the fly at the time of projection, based on known characteristics of the movie that must be provided to the projection site. Other possibilities include some combination of control by the movie supplier and local control by the projection site.
The object is to provide the location of pixels 102 as well as modulation frequency and intensity variation data. As is described previously for methods (1a)-(1e) and (2a)-(2f), plaintext or encrypted information can be sent by selection of specific location, frequency, and intensity data for modulation of pixels 102.
Significantly, pattern generator/modulator assembly 18 provides this location, frequency, and intensity data for modulation, assigned to existing pixels 102 in frame 100. That is, pattern generator/modulator assembly 18 does not provide "new" pixels 102 relative to the scene content for frame 100. Instead, the output from pattern generator/modulator assembly 18 can be considered as control signals, sent to image forming assembly 16, in order to manipulate the scene content that is sent from display logic assembly 14.
Pattern generator/modulator assembly 18 provides modulation information for pixels 102 within pattern 104 that are sent to the display logic assembly 14 and the image forming assembly 16 for projection.
It is instructive to note that, in addition to image forming assembly 16, decompression circuit 12, and display logic assembly 14, pattern generator/modulator assembly 18 can be part of projector 10 as in the preferred embodiment or can be separate components, such as components running on a separate computer or other processor.
Image forming assembly 16 may employ any one of a number of display technologies for projection of a sequence of frames 100 onto display screen 20. In a preferred embodiment, image forming assembly 16 comprises a transmissive Liquid-Crystal Device (LCD) spatial light modulator and support components for projection of frames 100. Pattern 104 data may be used to modulate individual pixels within the LCD for each color. Or, a separate LCD may be employed for modulation of pattern 104. Other types of modulator could alternatively be used, including a digital micromirror device (DMD) or reflective LCDs, for example.
Alternate techniques for applying modulation:
The apparatus of the present invention can employ any of a number of different techniques for applying modulation to pattern 104. Referring to Fig. 23, which shows the signal path that applies for projection of each pixel 102 in frame 100, methods for applying modulation include the following:
(a) Modulation at digital control circuitry. In the preferred embodiment, modulation is applied to a digital signal 30 within the digital control circuitry used for generation of the array of pixels 102 in each frame 100. With this method, the digital values that correspond to pixels 102 in each frame 100 are modulated in accordance with the modulation scheme. An example of a 50% duty-cycle square wave modulation is depicted in Fig. 23 as an "AND" operation that can be readily implemented in the digital domain (a minimal code sketch of such digital gating is given after this list).
(b) Analog modulation. Analog modulation is possible, using some method for modulation of an analog signal 32 that controls intensity of the light source used for pixels 102 within pattern 104, for example. Analog modulation techniques, as opposed to the digital techniques of part (a), manipulate continuous-time analog signals. Nevertheless, similar concepts of modulation can be carried over from the digital domain to the analog domain.
(c) Optical modulation. Optical modulation could be implemented using techniques of projection optics 34, such as shuttering, masking, or controlled pixel emission. Optical modulation could be applied within image forming assembly 16 as well as at display screen 20, using masking or filtering techniques. Referring to Fig. 23, the optical modulation mask 216, for example, can be used either in front of the projection optics or on top of the display screen to selectively manipulate the projected pixel locations. Liquid crystal display material, with electrically controlled transmission characteristics, may be used to construct such a spatial light modulation mask; the opacity of different regions of such a mask may be controlled by changing the applied electrical signal to that region of the mask.
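The digital gating referred to in option (a) above can be sketched as follows (an illustration only; the frame size, segment count, and normalized intensity handling are assumptions, not taken from the patent). It gates the digital value of each selected pixel with a 50% duty-cycle square wave, doubling the ON value so that the average intensity over one frame period is preserved:

```python
import numpy as np

def modulate_frame(frame, mask, segment_index, segments_per_frame=8):
    """Apply a 50% duty-cycle square-wave modulation to the pixels selected by
    `mask`, for one sub-frame segment.  ON segments are doubled, OFF segments
    are zeroed, so the average over a full frame period is unchanged
    (provided the doubled value does not exceed the projector's peak output)."""
    out = frame.astype(np.float32).copy()
    on = (segment_index % 2) == 0                 # alternate ON/OFF every segment
    out[mask] = out[mask] * 2.0 if on else 0.0
    return np.clip(out, 0.0, 1.0)                 # projector cannot exceed peak output

# Example: a 4x4 frame with a 2x2 copy-deterrent pattern in one corner
frame = np.full((4, 4), 0.4, dtype=np.float32)
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True

segments = [modulate_frame(frame, mask, s) for s in range(8)]
print("Average of a modulated pixel over one frame:", np.mean([s[0, 0] for s in segments]))  # ~0.4
```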
As was noted above, modulation can be differently applied for each color component of frame 100.
Alternative Embodiments
Referring to Fig. 24, there is shown an alternate embodiment for copy-deterrent projection apparatus 10, in which image forming assembly 16 does not project onto display screen 20. Instead, image forming assembly 16 communicates with an emissive display panel 24. Emissive display panel 24 may be in the form of a panel that comprises an LED array 26, similar to the display panels widely used in large sports facilities, for example. Using display panel 24 of this type, image forming assembly 16 directly controls the intensity and modulation of red, green, and blue LEDs 28R, 28G, and 28B, respectively.
While the invention has been described with particular reference to its preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements in the preferred embodiments without departing from the scope of the invention. For example, copy-deterrent pattern 104 can comprise any of a number of arrangements of pixels 102 to form messages 106 or regions 108 anywhere within frame 100.
Copy-deterrent patterns 104 can be provided primarily in order to obscure a recording or to provide digital watermarking, or both. The present invention is adaptable to a number of possible configurations of digital motion projector and display screen apparatus, such as using micromirror technology or a light valve array, for example.
Therefore, what is provided is a copy-deterrent projection apparatus for digital motion pictures and a method for applying modulation to pixels within a displayed motion picture frame in order to discourage recording of the image using a video camera.

Claims (22)

1. A copy-deterrent display apparatus for displaying a sequential plurality of image frames, each of said image frames comprising an array of pixels, each said pixel assigned to be displayed at a predetermined intensity within each frame, said apparatus comprising:
(a) a pattern generator/modulator assembly capable of providing control signals specifying a pattern of pixels within each said frame, said control signals specifying temporal modulation of said pattern of pixels, wherein said temporal modulation is chosen to be imperceptible to a human observer while simultaneously producing objectionable artifacts due to aliasing when said displayed frames are captured by a video capturing device;
(b) an image forming assembly capable of accepting said control signals from said pattern generator/modulator assembly and of modifying the displayed pixel intensities within each said frame, in response to said control signals.
2. The apparatus of claim 1 wherein said modulated pattern of pixels comprises a text message.
3. The apparatus of claim 1 wherein said modulated pattern of pixels comprises a predetermined copy obscuration pattern.
4. The apparatus of claim 1 wherein said modulated pattern of pixels comprises a random pattern.
5. The apparatus of any of claims 1-4 and wherein in order to provide copy deterrence to a video capturing device having a sampling frequency of 60 Hertz and/or 120 Hertz a temporal modulation frequency is used that meets the criteria of the absolute value of the difference between the sampling frequency and the temporal modulation frequency is greater than or equal to 10 Hertz and less than or equal to 30 Hertz.
6. A method for displaying a copy deterrent pattern in a sequential plurality of image frames, each of said image frames comprising an array of pixels, said pattern comprising a plurality of pixels selected from said frame, the method comprising temporally modulating said pattern wherein said temporal modulation is chosen to be imperceptible to a human observer while simultaneously producing objectionable artifacts due to aliasing when said displayed frames are captured by a video capturing device.
7. The method of claim 6 wherein said copy deterrent pattern comprises a message.
8. The method of claim 6 wherein said copy deterrent pattern comprises a predetermined copy obscuration pattern.
9. The method of claim 6 wherein said copy deterrent pattern comprises a random pattern.
10. The method of claim 6 wherein said copy deterrent pattern comprises a digital watermark.
11. The method of claim 6 wherein said copy deterrent pattern substantially comprises said frame.
12. The method of claim 6 wherein the plurality of image frames form part of a digital movie or video image having a predetermined refresh rate and wherein said pattern is temporally modulated at a frequency higher than the refresh rate.
13. The method of claim 12 and wherein the temporal modulation is sinusoidal.
14. The method of claim 13 and wherein intensity of the pixels comprising the modulated pattern is adjusted to compensate for the modulation.
15. The method of claim 6 or 12 and wherein the modulation is a rectangular wave.
16. The method of claim 6 or 12 and wherein one or more color components of an image in the frame is subject to said temporal modulation but other color components of the same image frame are not subject to said temporal modulation.
17. The method according to claim 6 or 12 and wherein the pattern comprises plural portions and the plural portions are temporally modulated at different frequencies from each other.
18. The method according to claim 6 or 12 and wherein the frequency of modulation is changed from one frequency to another frequency during the course of display.
19. The method according to claim 6 or 12 and wherein the plurality of pixels are analyzed according to a peak intensity criterion and pixels meeting the criterion further determine the pattern that is subject to the temporal modulation.
20. The method of claim 6 or 12 and wherein the temporal modulation is sinusoidal.
21. The method according to any of claims 6 through 20 and wherein in order to provide copy deterrence to a video capturing device having a sampling frequency of 60 Hertz and/or 120 Hertz a temporal modulation frequency is used that meets the criteria of the absolute value of the difference between the sampling frequency and the temporal modulation frequency is greater than or equal to 10 Hertz and less than or equal to 30 Hertz.
22. A method for identifying a candidate modulation frequency to be applied to a selected pattern of pixels within a sequence of digital motion picture frames, said candidate modulation frequency intended to cause aliasing when the sequence of digital motion picture frames is sampled using a video capture device, said method comprising:

(a) selecting a plurality of sequential frames;
(b) obtaining intensity values for each pixel within said selected pattern of pixels;
(c) calculating a modulation intensity value for each said pixel within said selected pattern of pixels, said modulation intensity value conditioned by the duty cycle for a modulation signal waveform;
(d) calculating a bandwidth for said pixels within said selected pattern of pixels;
(e) choosing a target sampling frequency; and (f) calculating said candidate modulation frequency based on said target sampling frequency and based on predetermined thresholds for visible modulation, said candidate modulation frequency selected based on having a first order or a second order side band within bounds of said predetermined thresholds for visible modulation.
CA002368396A 2001-02-28 2002-01-17 Copy protection for digital motion picture image data Abandoned CA2368396A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US09/796,201 US7043019B2 (en) 2001-02-28 2001-02-28 Copy protection for digital motion picture image data
US09/796,201 2001-02-28

Publications (1)

Publication Number Publication Date
CA2368396A1 true CA2368396A1 (en) 2002-08-28

Family

ID=25167594

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002368396A Abandoned CA2368396A1 (en) 2001-02-28 2002-01-17 Copy protection for digital motion picture image data

Country Status (4)

Country Link
US (1) US7043019B2 (en)
EP (1) EP1237369A3 (en)
JP (1) JP2002314938A (en)
CA (1) CA2368396A1 (en)

Families Citing this family (133)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100302436B1 (en) * 1998-03-24 2001-09-26 포만 제프리 엘 Motion picture electronic watermark system
EP1202250A4 (en) * 1999-10-29 2006-12-06 Sony Corp Signal processing device and method therefor and program storing medium
US7391929B2 (en) * 2000-02-11 2008-06-24 Sony Corporation Masking tool
US6950532B1 (en) * 2000-04-24 2005-09-27 Cinea Inc. Visual copyright protection
US6664976B2 (en) 2001-04-18 2003-12-16 Digimarc Corporation Image management system and methods using digital watermarks
US7254249B2 (en) 2001-03-05 2007-08-07 Digimarc Corporation Embedding location data in video
US7197160B2 (en) 2001-03-05 2007-03-27 Digimarc Corporation Geographic information systems using digital watermarks
US7061510B2 (en) 2001-03-05 2006-06-13 Digimarc Corporation Geo-referencing of aerial imagery using embedded image identifiers and cross-referenced data sets
US7249257B2 (en) 2001-03-05 2007-07-24 Digimarc Corporation Digitally watermarked maps and signs and related navigational tools
US7098931B2 (en) 2001-03-05 2006-08-29 Digimarc Corporation Image management system and methods using digital watermarks
US6950519B2 (en) 2001-03-05 2005-09-27 Digimarc Corporation Geographically watermarked imagery and methods
US7042470B2 (en) * 2001-03-05 2006-05-09 Digimarc Corporation Using embedded steganographic identifiers in segmented areas of geographic images and characteristics corresponding to imagery data derived from aerial platforms
US9363409B2 (en) 2001-03-05 2016-06-07 Digimarc Corporation Image management system and methods using digital watermarks
JP3518528B2 (en) * 2001-08-10 2004-04-12 ソニー株式会社 Imaging disturbance method and system
GB2379295A (en) * 2001-08-31 2003-03-05 Sony Uk Ltd A system for distributing audio/video material to a potential buyer
EP1294189A3 (en) * 2001-09-18 2004-01-14 Sony Corporation Optical state modulation
US7030956B2 (en) * 2002-03-11 2006-04-18 Sony Corporation Optical intensity modulation method and system, and optical state modulation apparatus
US20040125125A1 (en) * 2002-06-29 2004-07-01 Levy Kenneth L. Embedded data windows in audio sequences and video frames
US7623115B2 (en) * 2002-07-27 2009-11-24 Sony Computer Entertainment Inc. Method and apparatus for light input device
US20060110004A1 (en) * 2002-08-07 2006-05-25 Agency For Science, Technology And Research Method and system for deterence of unauthorised reuse of display content
US7302162B2 (en) * 2002-08-14 2007-11-27 Qdesign Corporation Modulation of a video signal with an impairment signal to increase the video signal masked threshold
ES2385876T3 (en) * 2002-09-27 2012-08-02 Technicolor, Inc. Anti-piracy coding of movie films
US7206409B2 (en) * 2002-09-27 2007-04-17 Technicolor, Inc. Motion picture anti-piracy coding
JP3901072B2 (en) 2002-10-23 2007-04-04 ソニー株式会社 Video display device and video display method
US7386125B2 (en) * 2002-10-28 2008-06-10 Qdesign Usa, Inc. Techniques of imperceptibly altering the spectrum of a displayed image in a manner that discourages copying
US20040091110A1 (en) * 2002-11-08 2004-05-13 Anthony Christian Barkans Copy protected display screen
US8141159B2 (en) * 2002-12-31 2012-03-20 Portauthority Technologies Inc. Method and system for protecting confidential information
US20040150794A1 (en) * 2003-01-30 2004-08-05 Eastman Kodak Company Projector with camcorder defeat
JP2004266345A (en) * 2003-02-05 2004-09-24 Sony Corp Method, processor, and system for displaying video image
US7221759B2 (en) * 2003-03-27 2007-05-22 Eastman Kodak Company Projector with enhanced security camcorder defeat
JP2004310386A (en) * 2003-04-04 2004-11-04 Canon Inc Image verification device, image verification method, computer program, and computer-readable storage medium
KR100948381B1 (en) * 2003-05-15 2010-03-22 삼성전자주식회사 Image Watermarking Method Using Human Visual System
US7756288B2 (en) * 2003-05-29 2010-07-13 Jeffrey Lubin Method and apparatus for analog insertion of low frequency watermarks
US20050055228A1 (en) * 2003-09-08 2005-03-10 Aircraft Protective Systems, Inc. Management method of in-flight entertainment device rentals having self-contained audio-visual presentations
US8406453B2 (en) * 2003-09-08 2013-03-26 Digecor, Inc. Security system and method of in-flight entertainment device rentals having self-contained audiovisual presentations
GB2407227B (en) * 2003-09-08 2006-11-08 Deluxe Lab Inc Program encoding and counterfeit tracking system and method
US7818257B2 (en) * 2004-07-16 2010-10-19 Deluxe Laboratories, Inc. Program encoding and counterfeit tracking system and method
FR2859857A1 (en) * 2003-09-17 2005-03-18 Thomson Licensing Sa Source image processing method for e.g. projector, involves compensating modified colors of pixels on processed images to obtain color corresponding to color of pixel in source image, where luminance of pixels in images are equal
JP4520809B2 (en) * 2003-10-08 2010-08-11 パナソニック株式会社 Data processing device for controlling video recording and quality
JP2007515678A (en) * 2003-12-11 2007-06-14 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Method and apparatus for detecting a watermark in a signal
WO2005076985A2 (en) 2004-02-04 2005-08-25 Digimarc Corporation Digital watermarking image signals on-chip and photographic travel logs through digital watermarking
TWI288873B (en) * 2004-02-17 2007-10-21 Mitsubishi Electric Corp Method for burying watermarks, method and device for inspecting watermarks
US7693330B2 (en) * 2004-03-15 2010-04-06 Vincent So Anti-piracy image display methods and systems with sub-frame intensity compensation
US7634134B1 (en) 2004-03-15 2009-12-15 Vincent So Anti-piracy image display methods and systems
US7289644B2 (en) * 2004-04-27 2007-10-30 Thomson Licensing Anti-piracy coding of motion pictures
FR2869752A1 (en) * 2004-04-28 2005-11-04 Thomson Licensing Sa APPARATUS AND METHOD FOR DISPLAYING IMAGES
FR2869751A1 (en) * 2004-04-28 2005-11-04 Thomson Licensing Sa APPARATUS AND METHOD FOR PROCESSING IMAGES
US7483059B2 (en) * 2004-04-30 2009-01-27 Hewlett-Packard Development Company, L.P. Systems and methods for sampling an image sensor
US8509472B2 (en) * 2004-06-24 2013-08-13 Digimarc Corporation Digital watermarking methods, programs and apparatus
US20060051061A1 (en) * 2004-09-09 2006-03-09 Anandpura Atul M System and method for securely transmitting data to a multimedia device
WO2006053023A2 (en) 2004-11-09 2006-05-18 Digimarc Corporation Authenticating identification and security documents
US7272240B2 (en) * 2004-12-03 2007-09-18 Interdigital Technology Corporation Method and apparatus for generating, sensing, and adjusting watermarks
US7321761B2 (en) 2004-12-03 2008-01-22 Interdigital Technology Corporation Method and apparatus for preventing unauthorized data from being transferred
US20070242852A1 (en) * 2004-12-03 2007-10-18 Interdigital Technology Corporation Method and apparatus for watermarking sensed data
FR2887389A1 (en) * 2005-06-21 2006-12-22 Thomson Licensing Sa APPARATUS AND METHOD FOR DISPLAYING IMAGES
EP1932339A1 (en) * 2005-09-08 2008-06-18 Thomson Licensing Digital cinema projector watermarking system and method
FR2890517A1 (en) * 2005-09-08 2007-03-09 Thomson Licensing Sas METHOD AND DEVICE FOR DISPLAYING IMAGES
EP1814073A1 (en) * 2006-01-26 2007-08-01 THOMSON Licensing Method and device for processing a sequence of video images
TW200746037A (en) * 2006-02-14 2007-12-16 Sony Corp Display control device and display control method
EP1830582A1 (en) * 2006-03-01 2007-09-05 THOMSON Licensing Method for processing a video sequence for preventing unauthorized recording and apparatus implementing said method
US20070217612A1 (en) * 2006-03-17 2007-09-20 Vincent So Method and system of key-coding a video
EP1843584A1 (en) * 2006-04-03 2007-10-10 THOMSON Licensing Digital light processing display device
JP5140939B2 (en) * 2006-04-14 2013-02-13 株式会社ニコン Image recording / playback device
JP4834473B2 (en) * 2006-06-23 2011-12-14 キヤノン株式会社 Image processing system and image processing method
JP2009543443A (en) * 2006-06-29 2009-12-03 トムソン ライセンシング System and method for object-oriented fingerprinting of digital video
EP1931143A1 (en) * 2006-12-06 2008-06-11 Thomson Licensing Method and device for processing a sequence of source pictures
US8374382B2 (en) 2006-12-06 2013-02-12 Thomson Licensing Device for processing video images, video projection system and signal intended for use by the projection system
US20090316890A1 (en) * 2006-12-11 2009-12-24 Mark Alan Schultz Text based anti-piracy system and method for digital cinema
EP1936975A1 (en) * 2006-12-20 2008-06-25 Thomson Licensing Method and device for processing source pictures to generate aliasing
WO2008078236A1 (en) * 2006-12-21 2008-07-03 Koninklijke Philips Electronics N.V. A system, method, computer-readable medium, and user interface for displaying light radiation
US8150097B2 (en) * 2007-01-24 2012-04-03 Sony Corporation Concealed metadata transmission system
WO2008107731A1 (en) * 2007-03-06 2008-09-12 Thomson Licensing Digital cinema anti-camcording method and apparatus based on image frame post-sampling
WO2009002315A1 (en) * 2007-06-27 2008-12-31 Thomson Licensing Frequency and spectral domain solutions for prevention of video recording
EP2162793A4 (en) * 2007-06-27 2010-10-20 Thomson Licensing Frequency and spectral domain solutions for prevention of video recording
FR2918239A1 (en) * 2007-06-29 2009-01-02 Thomson Licensing Sas METHOD FOR SELECTING THE IMAGE PIXELS TO BE WATERMARKED AND WATERMARKING PROCESS USING THAT SELECTION
CN101803374A (en) * 2007-08-21 2010-08-11 汤姆逊许可公司 Digital light processing anti-camcorder switch
US20110206349A1 (en) * 2007-11-08 2011-08-25 Thomson Licensing Method, apparatus and system for anti-piracy protection and verification
US8320729B2 (en) * 2007-11-21 2012-11-27 Sri International Camcorder jamming techniques using high frame rate displays
US8108681B2 (en) * 2007-12-03 2012-01-31 International Business Machines Corporation Selecting bit positions for storing a digital watermark
US20090161909A1 (en) * 2007-12-19 2009-06-25 Nstreams Technologies, Inc. Method of dynamically showing a watermark
US20090180079A1 (en) * 2008-01-16 2009-07-16 Oakley Willliam S Projected Overlay for Copy Degradation
US20110064218A1 (en) * 2008-05-15 2011-03-17 Donald Henry Willis Method, apparatus and system for anti-piracy protection in digital cinema
EP2133839A1 (en) * 2008-06-12 2009-12-16 Thomson Licensing Image treatment method to prevent copying
WO2009157905A1 (en) * 2008-06-27 2009-12-30 Thomson Licensing Method and system for efficient transmission of anti-camcorder video
EP2321964B1 (en) 2008-07-25 2018-12-12 Google LLC Method and apparatus for detecting near-duplicate videos using perceptual video signatures
JP4636142B2 (en) * 2008-08-29 2011-02-23 ソニー株式会社 Video playback device, video playback method and program
EP2335418A1 (en) * 2008-09-08 2011-06-22 Telefonaktiebolaget L M Ericsson (PUBL) Provision of marked data content to user devices of a communications network
KR20100095245A (en) * 2009-02-20 2010-08-30 삼성전자주식회사 Method and apparatus for embedding watermark
US8890892B2 (en) * 2009-04-24 2014-11-18 Pixar System and method for steganographic image display
US8817043B2 (en) * 2009-04-24 2014-08-26 Disney Enterprises, Inc. System and method for selective viewing of a hidden presentation within a displayed presentation
JP5596943B2 (en) * 2009-07-09 2014-09-24 キヤノン株式会社 Image display apparatus and control method thereof
KR20120060350A (en) * 2010-12-02 2012-06-12 삼성전자주식회사 Image processing apparatus and control method thereof
US8845110B1 (en) 2010-12-23 2014-09-30 Rawles Llc Powered augmented reality projection accessory display device
US8905551B1 (en) 2010-12-23 2014-12-09 Rawles Llc Unpowered augmented reality projection accessory display device
US8845107B1 (en) 2010-12-23 2014-09-30 Rawles Llc Characterization of a scene with structured light
US9721386B1 (en) 2010-12-27 2017-08-01 Amazon Technologies, Inc. Integrated augmented reality environment
US9508194B1 (en) 2010-12-30 2016-11-29 Amazon Technologies, Inc. Utilizing content output devices in an augmented reality environment
US9607315B1 (en) 2010-12-30 2017-03-28 Amazon Technologies, Inc. Complementing operation of display devices in an augmented reality environment
CN103548079B (en) * 2011-08-03 2015-09-30 Nds有限公司 Audio frequency watermark
US9118782B1 (en) * 2011-09-19 2015-08-25 Amazon Technologies, Inc. Optical interference mitigation
JP5933030B2 (en) 2011-12-29 2016-06-08 インテル・コーポレーション Display backlight modulation
JP6003098B2 (en) * 2012-03-02 2016-10-05 大日本印刷株式会社 Device for embedding interference noise for acoustic signals
JP6003099B2 (en) * 2012-03-02 2016-10-05 大日本印刷株式会社 Device for embedding a different acoustic signal into an acoustic signal
JP2013197843A (en) * 2012-03-19 2013-09-30 Toshiba Corp Transmission system and transmission device
KR101981685B1 (en) * 2012-10-04 2019-08-28 삼성전자주식회사 Display apparatus, user terminal apparatus, external apparatus, display method, data receiving method and data transmitting method
EP2926553A1 (en) 2012-11-27 2015-10-07 Koninklijke Philips N.V. Use of ambience light for copy protection of video content displayed on a screen
US9679053B2 (en) 2013-05-20 2017-06-13 The Nielsen Company (Us), Llc Detecting media watermarks in magnetic field data
WO2015010859A1 (en) * 2013-07-23 2015-01-29 Koninklijke Philips N.V. Registration system for registering an imaging device with a tracking device
JP6152787B2 (en) 2013-11-29 2017-06-28 富士通株式会社 Information embedding device, information detecting device, information embedding method, and information detecting method
EP2961157A1 (en) * 2014-06-23 2015-12-30 Thomson Licensing Method for inserting a message into a rendering of video content by a display device, corresponding reading method, devices, and programs
JP6433014B2 (en) * 2014-09-02 2018-12-05 国立大学法人 奈良先端科学技術大学院大学 Information acquisition apparatus and information transmission system
US9832338B2 (en) * 2015-03-06 2017-11-28 Intel Corporation Conveyance of hidden image data between output panel and digital camera
US10104047B2 (en) * 2015-04-08 2018-10-16 Microsemi Solutions (U.S.), Inc. Method and system for encrypting/decrypting payload content of an OTN frame
CN107431759B (en) * 2015-04-09 2021-06-15 索尼公司 Imaging device, imaging method, electronic apparatus, and in-vehicle electronic apparatus
KR101797042B1 (en) * 2015-05-15 2017-11-13 삼성전자주식회사 Method and apparatus for synthesizing medical images
WO2017049221A1 (en) * 2015-09-16 2017-03-23 Shotblock Technologies Pty Ltd. System and method of pixel manipulation and screen display disruption
US10474745B1 (en) 2016-04-27 2019-11-12 Google Llc Systems and methods for a knowledge-based form creation platform
US11039181B1 (en) 2016-05-09 2021-06-15 Google Llc Method and apparatus for secure video manifest/playlist generation and playback
US10595054B2 (en) 2016-05-10 2020-03-17 Google Llc Method and apparatus for a virtual online video channel
US11069378B1 (en) 2016-05-10 2021-07-20 Google Llc Method and apparatus for frame accurate high resolution video editing in cloud using live video streams
US10750216B1 (en) 2016-05-10 2020-08-18 Google Llc Method and apparatus for providing peer-to-peer content delivery
US10771824B1 (en) 2016-05-10 2020-09-08 Google Llc System for managing video playback using a server generated manifest/playlist
US10785508B2 (en) 2016-05-10 2020-09-22 Google Llc System for measuring video playback events using a server generated manifest/playlist
US10750248B1 (en) 2016-05-10 2020-08-18 Google Llc Method and apparatus for server-side content delivery network switching
US11032588B2 (en) 2016-05-16 2021-06-08 Google Llc Method and apparatus for spatial enhanced adaptive bitrate live streaming for 360 degree video playback
CN108234977B (en) 2018-01-12 2021-03-09 京东方科技集团股份有限公司 Video playing method and display system
KR102523167B1 (en) 2018-07-02 2023-04-19 삼성전자주식회사 Display apparatus and controlling method thereof
CN109360140B (en) * 2018-09-10 2023-08-29 五邑大学 Reversible image watermarking method and device based on prediction error addition expansion
CN109410113B (en) * 2018-09-13 2023-08-29 五邑大学 Error modeling method and device for prediction context of reversible image watermark
US11636183B2 (en) * 2018-12-30 2023-04-25 DISH Technologies L.L.C. Automated piracy detection
JP2021068145A (en) * 2019-10-23 2021-04-30 セイコーエプソン株式会社 Operation method of head-mounted display device and head-mounted display device
US11409917B2 (en) * 2020-08-26 2022-08-09 Red Hat, Inc. Un-photographable screens and displays
US11521578B2 (en) * 2021-01-12 2022-12-06 Lenovo (Singapore) Pte. Ltd. Securely presenting content on a display
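The FR2859857 entry above outlines a color-compensation scheme: the colors of pixels in the processed (displayed) images are modified and then compensated so that each source pixel's color is recovered, while the luminance of the pixels is kept equal across the images. The following minimal numpy sketch illustrates that general idea for a split into two temporally averaged sub-frames; the Rec.601 luma weights, the perturbation amplitude, and the function name split_into_compensating_subframes are illustrative assumptions, not details taken from the cited patent.

# Minimal sketch (assumptions noted above): split a frame into two sub-frames
# whose colors are pushed in opposite directions along a zero-luma direction,
# so their average reproduces the source color and both keep the source luminance.
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # Rec.601 luma weights (assumed choice)

def split_into_compensating_subframes(frame: np.ndarray, amplitude: float = 0.05):
    """frame: float RGB image in [0, 1], shape (H, W, 3); returns two sub-frames."""
    # A color direction with zero luma contribution: LUMA . d == 0.
    d = np.array([LUMA[2], 0.0, -LUMA[0]])
    d /= np.linalg.norm(d)
    delta = amplitude * d                      # same offset for every pixel in this sketch
    sub_a = np.clip(frame + delta, 0.0, 1.0)   # chrominance pushed one way
    sub_b = np.clip(frame - delta, 0.0, 1.0)   # and the opposite way
    return sub_a, sub_b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.uniform(0.2, 0.8, size=(4, 4, 3))   # keep values away from clipping limits
    a, b = split_into_compensating_subframes(frame)
    assert np.allclose((a + b) / 2.0, frame)        # average recovers the source color
    assert np.allclose(a @ LUMA, frame @ LUMA)      # luminance equal in sub-frame A
    assert np.allclose(b @ LUMA, frame @ LUMA)      # luminance equal in sub-frame B
    print("sub-frames average to the source frame and preserve its luminance")

In this sketch, averaging the two sub-frames over a frame period returns the source color exactly, while a recording device that samples individual sub-frames captures the opposing chrominance offsets.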

Family Cites Families (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4644422A (en) 1982-07-22 1987-02-17 Tvi Systems, Ltd. Anti-copy system
CA1292056C (en) * 1987-09-07 1991-11-12 Masatoshi Tanaka Video signal scrambling system
US5303294A (en) 1991-06-18 1994-04-12 Matsushita Electric Industrial Co., Ltd. Video theater system and copy preventive method
US5757910A (en) 1993-04-06 1998-05-26 Goldstar Co., Ltd. Apparatus for preventing illegal copying of a digital broadcasting signal
US6614914B1 (en) * 1995-05-08 2003-09-02 Digimarc Corporation Watermark embedder and reader
US5646997A (en) 1994-12-14 1997-07-08 Barton; James M. Method and apparatus for embedding authentication information within digital data
US5530759A (en) 1995-02-01 1996-06-25 International Business Machines Corporation Color correct digital watermarking of images
US5706061A (en) * 1995-03-31 1998-01-06 Texas Instruments Incorporated Spatial light image display system with synchronized and modulated light source
US5680454A (en) 1995-08-04 1997-10-21 Hughes Electronics Method and system for anti-piracy using frame rate dithering
JP3430750B2 (en) 1995-10-27 2003-07-28 ソニー株式会社 Video signal copy guard apparatus and method
US5949885A (en) 1996-03-12 1999-09-07 Leighton; F. Thomson Method for protecting content using watermarking
JP3694981B2 (en) 1996-04-18 2005-09-14 ソニー株式会社 Video signal processing apparatus and video signal processing method
US5663927A (en) 1996-05-23 1997-09-02 The United States Of America As Represented By The Secretary Of The Navy Buoyed sensor array communications system
US6018374A (en) 1996-06-25 2000-01-25 Macrovision Corporation Method and system for preventing the off screen copying of a video or film presentation
US6031914A (en) 1996-08-30 2000-02-29 Regents Of The University Of Minnesota Method and apparatus for embedding data, including watermarks, in human perceptible images
US6069914A (en) 1996-09-19 2000-05-30 Nec Research Institute, Inc. Watermarking of image data using MPEG/JPEG coefficients
US5809139A (en) 1996-09-13 1998-09-15 Vivo Software, Inc. Watermarking method and apparatus for compressed digital video
US5875249A (en) 1997-01-08 1999-02-23 International Business Machines Corporation Invisible image watermark for image verification
US6044156A (en) 1997-04-28 2000-03-28 Eastman Kodak Company Method for generating an improved carrier for use in an image data embedding application
US5960081A (en) 1997-06-05 1999-09-28 Cray Research, Inc. Embedding a digital signature in a video sequence
US5959717A (en) 1997-12-12 1999-09-28 Chaum; Jerry Motion picture copy prevention, monitoring, and interactivity system
US6037984A (en) 1997-12-24 2000-03-14 Sarnoff Corporation Method and apparatus for embedding a watermark into a digital image or image sequence
US6529600B1 (en) * 1998-06-25 2003-03-04 Koninklijke Philips Electronics N.V. Method and device for preventing piracy of video material from theater screens
AU5448100A (en) 1999-05-27 2000-12-18 Digital Electronic Cinema, Inc. Systems and methods for preventing camcorder piracy of motion picture images
US7324646B1 (en) 1999-10-29 2008-01-29 Sarnoff Corporation Method and apparatus for film anti-piracy
AU2001234600A1 (en) 2000-01-28 2001-08-07 Sarnoff Corporation Cinema anti-piracy measures
KR100775774B1 (en) * 2000-02-29 2007-11-12 코닌클리케 필립스 일렉트로닉스 엔.브이. Embedding and detecting a watermark in an information signal
KR20020027569A (en) * 2000-06-23 2002-04-13 요트.게.아. 롤페즈 Watermark embedding method and arrangement
US6674876B1 (en) * 2000-09-14 2004-01-06 Digimarc Corporation Watermarking in the time-frequency domain

Also Published As

Publication number Publication date
US7043019B2 (en) 2006-05-09
EP1237369A3 (en) 2004-08-25
US20020168069A1 (en) 2002-11-14
EP1237369A2 (en) 2002-09-04
JP2002314938A (en) 2002-10-25

Similar Documents

Publication Publication Date Title
US7043019B2 (en) Copy protection for digital motion picture image data
US7865034B2 (en) Image display methods and systems with sub-frame intensity compensation
US6809792B1 (en) Spectral watermarking for motion picture image data
US7324646B1 (en) Method and apparatus for film anti-piracy
US7634134B1 (en) Anti-piracy image display methods and systems
EP1557031B1 (en) Techniques of imperceptibly altering the spectrum of a displayed image in a manner that discourages copying
Lubin et al. Robust content-dependent high-fidelity watermark for tracking in digital cinema
US7302162B2 (en) Modulation of a video signal with an impairment signal to increase the video signal masked threshold
JP2002519724A (en) Method and apparatus for preventing illegal access to video material from theater screens
US8130258B2 (en) Method for processing a video sequence and apparatus implementing said method
JP5040908B2 (en) Display control apparatus and display control method
JP2004266345A (en) Method, processor, and system for displaying video image
JP2004234007A (en) Projector having means for obstructing illicit duplication by camcorder
WO2009067295A1 (en) Camcorder jamming techniques using high frame rate displays
Gao et al. DLP based anti-piracy display system
JP2007219512A (en) Method and device for processing sequence of video image
WO2001056279A2 (en) Cinema anti-piracy measures
KR20090085637A (en) Method and device for processing a sequence of source pictures
US20110064218A1 (en) Method, apparatus and system for anti-piracy protection in digital cinema
US7634089B1 (en) Cinema anti-piracy measures

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued