US20120081404A1 - Simulating animation during slideshow - Google Patents

Simulating animation during slideshow

Info

Publication number
US20120081404A1
Authority
US
United States
Prior art keywords
computer usable
picture
main subject
usable code
selected portion
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US12/896,271
Other versions
US8576234B2
Inventor
Scot MacLellan
Ivan Orlandi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Application filed by International Business Machines Corp
Priority to US12/896,271
Assigned to INTERNATIONAL BUSINESS MACHINES CORPORATION (Assignors: MACLELLAN, SCOT; ORLANDI, IVAN)
Publication of US20120081404A1
Application granted
Publication of US8576234B2
Status: Expired - Fee Related

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/80 - 2D [Two Dimensional] animation, e.g. using sprites

Definitions

  • the method of the present invention can help to solve the problem of the prior art by providing an automated application of the Ken Burns Effect (or another effect which emphasizes/de-emphasizes a portion of an image) that produces a result close to the optimal result obtained by manual application.
  • the method according to a preferred embodiment of the present invention identifies the main subject of an image, and then applies an animation effect selected from a predetermined set of effects to highlight the subject.
  • the subject of the image is identified e.g. by using a digital filter which measures the quantity of information in various sectors of the image, and the subject is derived from the overall pattern across the image.
  • the effect to be used can be chosen from a list according to subject type and position.
  • a further aspect provides a computer program which realizes the method above when executed on a computer.
  • the proposed solution lends itself to being implemented with an equivalent method (having similar or additional steps, even in a different order).
  • the program may take any form suitable to be used by or in connection with any data processing system, such as external or resident software, firmware, or microcode (either in object code or in source code).
  • the program may be provided on any computer-usable medium; the medium can be any element suitable to contain, store, communicate, propagate, or transfer the program. Examples of such media are fixed disks (where the program can be pre-loaded), removable disks, tapes, cards, wires, fibres, wireless connections, networks, broadcast waves, and the like; for example, the medium may be of the electronic, magnetic, optical, electromagnetic, infrared, or semiconductor type.
  • the solution according to the present invention lends itself to being carried out with a hardware structure (for example, integrated in a chip of semiconductor material), or with a combination of software and hardware.

Abstract

A method and system for simulating an animation effect during the display of a digitally encoded picture, including the steps of: storing a plurality of predetermined animation effects; identifying at least one selected portion of the picture by means of a main subject identification algorithm; selecting at least one of the predetermined animation effects; modifying the display of the picture according to the selected at least one predetermined animation effect, so that the at least one selected portion of the picture is emphasised.

Description

    TECHNICAL FIELD
  • The present invention relates to the field of image processing, and more particularly to simulating animation during a slideshow.
  • BACKGROUND OF INVENTION
  • Displaying a sequence of still images is used for many purposes, such as performing a slideshow, viewing a collection of photographs, or providing something interesting to look at when a computer is locked (a screensaver). A sequence of static images, however, does not always capture the attention of the viewer. To combat this and to make a sequence of still images more attractive, techniques such as zooming in or out and scrolling from one part of the image to another are often used to give the impression of motion. One such prior art technique was made famous by Ken Burns, and it is often referred to as the Ken Burns Effect. The Ken Burns Effect is well known among those skilled in the art (see e.g. the description on Wikipedia). It is a technique of embedding still photographs in motion pictures, displayed with slow zooming and panning effects, and fading transitions between them. It can be used to give the impression of animation to still pictures by e.g. slowly zooming in on subjects of interest and then moving from one subject to another. For example, in a photograph with several persons, the focus might slowly pan across the faces and come to rest on one person. Here, "focus" means that the center (or an emphasized part) of the image is displayed.
  • The Ken Burns effect can be used as a transition between clips as well. For example, to segue from one person in the story to another, an operator might open a clip with a close-up of one person in a photo, then zoom out so that another person in the photo becomes visible. This is especially practicable when covering older subjects where there is little or no available film. The zooming and panning across photographs gives the feeling of motion, and keeps the viewer visually entertained. This technique has become a staple of documentaries, slide shows, presentations, and even screen savers. Existing editing systems, some of which also use this effect for screensavers, often include the Ken Burns Effect or transition, with which a still image may be incorporated into a film using this kind of slow pan and zoom. Some picture slideshow systems or photo editing programs present such an option labelled “Ken Burns Effect”.
  • The Ken Burns effect is customizable and can be applied together with other effects using code like the one described in the following page: http://forums.slideshowpro.net/viewtopic.php?pid=29056
  • The Ken Burns effect works very well when applied manually to each image, but current attempts to apply the effect automatically (as in iPhoto slideshows) are less effective: the program applying the technique does not know what the subject of the image is, and therefore cannot apply the most appropriate effect. The result can often be unpleasant, as it excludes all or part of the interesting portions of the image. To combat this, some programs minimize the impact of the animation by ensuring that very little of the image area is ever excluded from view, but this also reduces the positive effects.
  • It is an object of the present invention to provide a technique which alleviates the above drawback of the prior art.
  • SUMMARY OF THE INVENTION
  • A method, system, and computer program for simulating an animation effect during the display of a digitally encoded picture are provided.
  • A set of predetermined animation effects is accessed. A first selected portion of the picture is identified with a computer implemented main subject identification algorithm. The image display is centered on the first selected portion of the picture. A predetermined animation effect is selected and applied to the picture.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • Embodiments of the invention will now be described, by way of example only, by reference to the accompanying drawings, in which:
  • FIG. 1 is a block diagram of a generic computer system adapted to perform the method of a preferred embodiment of the present invention;
  • FIG. 2 a shows an example of an image with a clearly identifiable Main Subject;
  • FIG. 2 b shows an example graph representing portions of an image according to their proximity to the border;
  • FIG. 2 c shows an example of an image with two possible Main Subjects; and
  • FIG. 3 shows a flowchart representing the steps to perform a method according to a preferred embodiment of the present invention.
  • DESCRIPTION OF A PREFERRED EMBODIMENT
  • As shown in FIG. 1, a generic computer of the system (e.g. computer, Internet server, router, remote servers) is denoted with 150. The computer 150 includes several units that are connected in parallel to system bus 153. In detail, one or more microprocessors 156 control the operation of computer 150; RAM 159 is used as a working memory by microprocessors 156, and ROM 162 stores basic code for a bootstrap of computer 150. Peripheral units are clustered around local bus 165 (using respective interfaces). Particularly, a mass memory consists of hard-disk 168 and drive 171 for reading CD-ROMs 174. Moreover, computer 150 includes input devices 177 (for example, a keyboard and a mouse), and output devices 180 (for example, a monitor and a printer). Network Interface Card 183 is used to connect computer 150 to the network. Bridge unit 186 interfaces system bus 153 with local bus 165. Each microprocessor 156 and bridge unit 186 can operate as master agents requesting an access to system bus 153 for transmitting information. Arbiter 189 manages the granting of the access with mutual exclusion to system bus 153. Similar considerations apply if the system has a different topology, or it is based on other networks. Alternatively, the computers can have a different structure, include equivalent units, or consist of other data processing entities (such as PDAs, mobile phones, and the like).
  • A preferred embodiment of the present invention exploits the so-called Main Subject Identification algorithm, which is known in the art. Examples of prior art documents which disclose such an algorithm are the following:
  • Luo, J. [Jiebo], Singhal, A. [Amit], Etz, S. P. [Stephen P.], Gray, R. T. [Robert T.], "A computational approach to determination of main subject regions in photographic images". This is an example description of Main Subject Identification.
  • U.S. Pat. No. 7,212,668: according to the method of this patent, an image is segmented into regions and various features of the regions are extracted and used to calculate a confidence factor that a particular region is a main subject. This is another example of the known Main Subject Identification technique.
  • In a preferred embodiment of the present invention, this technique is combined with the Ken Burns effect, giving the impression of animation in an automatic way.
  • In a preferred embodiment of the present invention, in order to identify the subject of an image (e.g. the one represented in FIG. 2 a), a grid is overlaid on the image and the amount of information in each square is measured. Naturally, the finer the grid, the more precise the result will be. In a preferred embodiment of the present invention, this selection of dense sectors (i.e. groups of neighbouring squares having similar density, and therefore being candidates to represent an object) can be done in the following way: a square is selected and assigned an object ID and a density value; any surrounding square with a similar density value (i.e. having a density value within a predetermined range centered on the first square's value) is assigned the same object ID, and this operation is repeated recursively until an object is identified which is completely surrounded by squares with different density values (or by the border of the overall image).
  • The process is repeated for any unassigned square. In this way, ‘busy’ sectors and ‘empty’ sectors can be identified. Not all of the identified sectors or objects will be selected as “main subjects”, but normally only the largest one. Also a large object could represent a background, e.g. ‘empty’ sectors around the edges of an image may represent the sky, the sea, or a lawn which is background to the main subject, which may be represented by a cluster of busy sectors in the centre of the image, representing e.g. a person, a boat, or an animal.
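The recursive grouping described above is essentially a flood fill over the density grid. A minimal sketch in Python follows; the `tolerance` value and the use of the seed square's density as the comparison reference are assumptions, since the patent leaves both as predetermined parameters:

```python
from collections import deque

def label_objects(density, tolerance=0.15):
    """Group neighbouring grid squares with similar density into objects.

    `density` is a 2-D list of per-square information measures
    (normalised to 0..1); `tolerance` is the half-width of the range
    considered 'similar' (an assumed default).
    Returns a grid of object IDs of the same shape.
    """
    rows, cols = len(density), len(density[0])
    labels = [[None] * cols for _ in range(rows)]
    next_id = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Seed a new object and grow it over similar neighbours.
            seed = density[r][c]
            labels[r][c] = next_id
            queue = deque([(r, c)])
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < rows and 0 <= nx < cols
                            and labels[ny][nx] is None
                            and abs(density[ny][nx] - seed) <= tolerance):
                        labels[ny][nx] = next_id
                        queue.append((ny, nx))
            next_id += 1
    return labels
```

Every square ends up labelled, so both 'busy' and 'empty' sectors emerge as objects, matching the repetition over unassigned squares described above.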
  • In a preferred embodiment of the present invention, large sectors with similar density which are placed at the border of the image will be filtered out, because it is assumed that the main subject (or the plurality of main subjects) will not lie along the borders. A simple way of filtering out these large border sectors (e.g. a sky or a sea) is to check whether an object has a large number of squares along the border of the image: when the percentage of border squares in a selected object is higher than a predetermined threshold, it can be assumed that the object is not a candidate main subject. FIG. 2 b represents a graph where the identified objects are displayed according to their size and their percentage of border squares. Objects A and B represent large portions of the image, but lie close to the border. Object D represents an object in the middle of the overall image (i.e. not on the border), and it is the best candidate to be the main subject of the image.
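The border filter described above can be sketched as follows; the 0.5 threshold is an assumed default, since the patent describes it only as a predetermined, customizable value:

```python
def border_fraction(labels, object_id):
    """Fraction of an object's grid squares lying on the image border."""
    rows, cols = len(labels), len(labels[0])
    total = border = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] == object_id:
                total += 1
                if r in (0, rows - 1) or c in (0, cols - 1):
                    border += 1
    return border / total if total else 0.0

def candidate_main_subjects(labels, max_border_fraction=0.5):
    """Object IDs whose border fraction is below the threshold.

    Objects dominated by border squares (sky, sea, lawn) are filtered
    out as presumed background.
    """
    ids = {label for row in labels for label in row}
    return [i for i in ids
            if border_fraction(labels, i) <= max_border_fraction]
```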
  • As mentioned above, the main subject can be identified as an area of high information density with respect to a background which is relatively empty. Of course, the opposite may also be true: the subject of the photo may be represented by areas of low information density against a background relatively dense with information. A dull fish in front of a brightly-coloured coral reef may be an example of such a case. In both cases, however, the subject is identified as a cluster of similar sectors that contrasts with the rest of the image (and in particular with the sectors around the outside of the image).
  • In the example shown on FIG. 2 a, a main subject is clearly identifiable with the house in the centre of the picture. Here we would easily identify the main subject, and the animation would ensure that this portion of the picture is included in the image being displayed.
  • Alternatively, there might be more than one main subject which can be identified (see example in FIG. 2 c). In this case, the animation would either begin with a zoomed image that includes both subjects, and would zoom out slowly to include the full image, or it would begin with the full image and zoom in to include only one of them.
  • The zoom-in/zoom-out choice may alternate during the course of the slideshow, or may be applied randomly.
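The zoom-in/zoom-out animation can be sketched as a sequence of crop rectangles interpolated between the full image and a view centred on the subject; the step count, final magnification, and linear interpolation are illustrative choices, not taken from the patent:

```python
def ken_burns_frames(image_size, subject_box, steps=30, zoom=2.0, zoom_in=True):
    """Yield crop rectangles moving between the full image and a view
    centred on the subject.

    `subject_box` is (x, y, w, h) of the main subject; `zoom` is the
    final magnification (an assumed parameter). Each yielded rectangle
    is (x, y, w, h) in image coordinates; a renderer would crop to it
    and scale up to the display size.
    """
    img_w, img_h = image_size
    sx, sy, sw, sh = subject_box
    cx, cy = sx + sw / 2, sy + sh / 2          # subject centre
    tight_w, tight_h = img_w / zoom, img_h / zoom

    def crop(w, h):
        # Centre the crop on the subject, clamped inside the image.
        x = min(max(cx - w / 2, 0), img_w - w)
        y = min(max(cy - h / 2, 0), img_h - h)
        return (x, y, w, h)

    start = (0, 0, img_w, img_h) if zoom_in else crop(tight_w, tight_h)
    end = crop(tight_w, tight_h) if zoom_in else (0, 0, img_w, img_h)
    for i in range(steps + 1):
        t = i / steps
        yield tuple(a + (b - a) * t for a, b in zip(start, end))
```

Alternating the `zoom_in` flag from picture to picture, or choosing it at random, reproduces the alternation described above.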
  • Other images present a more difficult problem of identification of the main subject. An example could be a group of people that occupies the entire image (school photo or regimental photo) which would be represented by an entire grid of similarly-busy sectors, and there is no single main subject.
  • In such a case the system would conclude that all parts of the image are equally important, and would either scroll slowly across the image or zoom very slightly in order to exclude very little of the image at any point. Other examples include many potential main subjects, in which case a defensive animation would be applied (a very slight zoom or scroll, in order to exclude little). Similarly, two potential subjects at the two extremities of the image would result in a very slight animation.
  • Always applying the same approach, as in the prior art, yields unsatisfactory results, i.e. little benefit from an always defensive approach, or annoying errors from an always aggressive approach. The combination according to an embodiment of the invention of defensive animation where the main subject is not clear (such as landscapes) and more aggressive animation where the subject is clear (portrait) provides the advantage of a solution which is tailored to the characteristics of the picture.
  • According to a preferred embodiment of the present invention, after the main subject identification algorithm has been applied, a picture falls into one of the following categories: an image with a single main subject, an image with more than one main subject, or an image with no real main subject. According to which of these categories better describes the picture, the method according to a preferred embodiment of the present invention provides an appropriate animation effect by selecting the animation effect from a set of predetermined effects for that category. Single-subject animation effects include e.g. zoom-in or zoom-out (the Ken Burns effect), and panning left or right with the subject. When a number of main subjects are identified (as mentioned above, this can depend on the thresholds imposed on the system), possible solutions include: moving from one subject to another and then applying the same effects as in the single-subject case; or alternatively selecting a common area including all main subjects (in other words, the minimum subset of the image in which all identified main subjects are included) and then handling it as a single main subject with the same effects.
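The category-based selection can be sketched as follows; the effect names and the random choice within a category are illustrative assumptions, since the patent requires only a predetermined set of effects per category:

```python
import random

# Assumed effect names, one predetermined set per category.
EFFECTS = {
    "single":   ["zoom_in", "zoom_out", "pan_left", "pan_right"],
    "multiple": ["pan_between_subjects", "zoom_to_common_area"],
    "none":     ["slow_scroll", "slight_zoom"],
}

def categorize(subjects):
    """Classify a picture by its number of identified main subjects."""
    if not subjects:
        return "none"
    return "single" if len(subjects) == 1 else "multiple"

def select_effect(subjects, rng=random):
    """Pick a predetermined effect from the set for the picture's category."""
    return rng.choice(EFFECTS[categorize(subjects)])
```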
  • Finally, if no main subject is found (again, this can depend on the imposed thresholds and other parameters), the animation effect can be zooming in and out and moving around the picture. Other adjustments are possible, e.g. assigning an additional parameter to the animation effect which depends on the relative size of the identified main subject (or group of subjects) compared to the overall image. Just as an example, we could set an "amplitude" parameter at High if the main subject is less than 10% of the whole image, at Medium when the main subject is between 10% and 50% of the whole image, and at Low when it is more than 50%. By "amplitude" is meant the maximum zoom percentage applied to an image: e.g. in the present example we could say that the parameter High corresponds to zooming an image at 400%, while Low corresponds to 25%. This is only an example; of course other effects, parameters and thresholds could be created and customized.
  • The subject identification may result from an automatic process but may also result from previously performed authoring steps.
  • One or a plurality of predefined marks may be previously associated with an image (as embedded metadata or as files attached to or associated with the considered image). These predefined marks may define areas of interest to which the described animation effect may be applied. For example, such predefined marks may be defined at authoring time. As another example, the predefined marks may also be indicated upon user selection, in real-time.
  • Among automatic methods for subject identification, face recognition algorithms may be leveraged. Today, such algorithms are fast and efficient. Face detection algorithms are widely available; these algorithms detect the presence of human faces in a picture, as shown in the example of FIG. 2 c. They should be carefully distinguished from face recognition algorithms which try to associate identity data to a detected face.
  • Image recognition techniques perform poorly to date, but these algorithms may also be leveraged by the present method (with a step of filtering by identity of people to simulate an animation). According to alternative embodiments of the present invention, one or more human faces may be detected in the content of the image, and the simulation of the animation is performed on the detected faces successively, randomly or selectively.
  • Pattern recognition algorithms, though at an earlier stage of development, may also be leveraged. According to other embodiments, one or more predetermined patterns may be detected and the method according to an embodiment is applied to each such detected pattern. Examples of patterns are images of the sun, a tree, a flower, a car, a wheel, a box, etc. Objects in the foreground, or objects presenting specific color characteristics, are more likely to be the objects of interest. The considered reference patterns may be stored in a local or remote database.
  • FIG. 3 schematically shows the method steps according to a preferred embodiment of the present invention. The process starts at step 301 where a main subject is identified (if possible) as described above. If the algorithm determines (step 303) that no portion of the picture can be considered a “main subject”, then the control goes to step 305 where alternative motion effects are implemented as described above. Those skilled in the art will appreciate that the user can customize the algorithm by setting different parameters, e.g. predetermined thresholds which should be considered to determine whether a “main subject” is available or not.
  • If a main subject is found, then one of the predetermined effects available in the system is selected (step 307) and applied at step 309 to emphasize the selected main subject, e.g. by zooming in on it. In general, the selected main subject becomes the center of the displayed image or, in other words, the focus is directed to the selected main subject.
  • The duration and speed of the motion effect can again be easily varied and customized according to the user's needs. Whenever, according to its settings, the system determines that it is time to change, another main subject, if available (see step 311), is selected and control goes back to step 307, where a new motion effect (or even the same one as before) is selected. If no more main subjects are available, the method goes to step 313, where a different effect is selected among those available within the system, and control goes back to step 309, where the new effect is applied to the same main subject.
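The FIG. 3 control flow described in the two bullets above can be sketched as follows. Function and effect names are placeholders, and the timing/termination logic is simplified to a single pass (the real loop would run on a timer until the slide changes):

```python
def simulate_slideshow_plan(subjects, effects, fallback="pan-and-zoom"):
    """Sketch of the FIG. 3 control flow (steps 301-313).

    Returns the ordered list of (effect, subject) applications the
    slideshow would perform for one picture.  Step numbers in the
    comments refer to FIG. 3.
    """
    plan = []
    if not subjects:                       # step 303: no main subject found
        return [(fallback, None)]          # step 305: alternative motion effect
    for i, subject in enumerate(subjects):
        effect = effects[i % len(effects)]  # step 307: select an effect
        plan.append((effect, subject))      # step 309: apply it
    # step 313: no more subjects, so pick a different effect and
    # go back to step 309, applying it to the same (last) main subject
    different = effects[len(subjects) % len(effects)]
    plan.append((different, subjects[-1]))
    return plan
```

The user-customizable parameters mentioned in the text (thresholds, durations, speeds) would appear here as additional arguments rather than hard-coded values.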
  • The method of the present invention can help solve the problem of the prior art by providing an automated application of the Ken Burns effect (or another effect which emphasizes/de-emphasizes a portion of an image) that produces a result close to the optimal result obtained by manual application. The method according to a preferred embodiment of the present invention identifies the main subject of an image, and then applies an animation effect selected from a predetermined set of effects to highlight the subject.
  • The subject of the image is identified e.g. by using a digital filter which measures the quantity of information in various sectors of the image, and the subject is derived from the overall pattern across the image. Once the subject is identified, the effect to be used can be chosen from a list according to subject type and position.
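The density-filter identification just described (detailed in claims 8 and 9 below) can be sketched in simplified form. Adjacency handling over the grid is omitted for brevity: the sketch works on the flat list of per-square density factors, forms the set above the upper threshold and the set below the lower one, and keeps the set whose mean deviates most from the picture's overall mean:

```python
def pick_main_subject(densities, upper, lower):
    """Simplified sketch of the density-filter main subject selection.

    `densities` holds the per-square information measure over the grid.
    Returns the chosen set of density values, or None when neither
    threshold isolates any square (i.e. no main subject is found and
    the fallback animation of FIG. 3 step 305 would apply).
    """
    overall = sum(densities) / len(densities)      # average over the grid
    high = [d for d in densities if d > upper]     # first candidate set
    low = [d for d in densities if d < lower]      # second candidate set
    best, best_dev = None, -1.0
    for candidate in (high, low):
        if not candidate:
            continue
        # keep the set whose average deviates most from the overall mean
        dev = abs(sum(candidate) / len(candidate) - overall)
        if dev > best_dev:
            best, best_dev = candidate, dev
    return best
```

A full implementation would additionally track square positions so that the largest *adjacent* run is selected, as the claims require.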
  • In a further embodiment of the present invention, a system comprising components adapted to implement the method above is provided.
  • In another embodiment a computer program is provided which realizes the method above when executed on a computer.
  • Alterations and modifications may be made to the above without departing from the scope of the invention. Naturally, in order to satisfy local and specific requirements, a person skilled in the art may apply to the solution described above many modifications and alterations. Particularly, although the present invention has been described with a certain degree of particularity with reference to preferred embodiment(s) thereof, it should be understood that various omissions, substitutions and changes in the form and details as well as other embodiments are possible.
  • Additionally, it is expressly intended that specific elements and/or method steps described in connection with any disclosed embodiment of the invention may be incorporated in any other embodiment as a general matter of design choice.
  • For example, similar considerations apply if the computers have different structure or include equivalent units. It is possible to replace the computers with any code execution entity (such as a PDA, a mobile phone, and the like).
  • Similar considerations apply if the program (which may be used to implement each embodiment of the invention) is structured in a different way, or if additional modules or functions are provided. Likewise, the memory structures may be of other types, or may be replaced with equivalent entities (not necessarily consisting of physical storage media).
  • Moreover, the proposed solution lends itself to be implemented with an equivalent method (having similar or additional steps, even in a different order). In any case, the program may take any form suitable to be used by or in connection with any data processing system, such as external or resident software, firmware, or microcode (either in object code or in source code).
  • Moreover, the program may be provided on any computer-usable medium; the medium can be any element suitable to contain, store, communicate, propagate, or transfer the program. Examples of such medium are fixed disks (where the program can be pre-loaded), removable disks, tapes, cards, wires, fibres, wireless connections, networks, broadcast waves, and the like; for example, the medium may be of the electronic, magnetic, optical, electromagnetic, infrared, or semiconductor type.
  • In any case, the solution according to the present invention lends itself to be carried out with a hardware structure (for example, integrated in a chip of semiconductor material), or with a combination of software and hardware.

Claims (20)

1. A method for simulating an animation effect during the display of a digitally encoded picture, comprising:
accessing a set of predetermined animation effects;
identifying a first selected portion of the picture with a computer implemented main subject identification algorithm;
centering the image display on the first selected portion of the picture;
selecting at least one of the predetermined animation effects; and
applying to the picture the selected at least one predetermined animation effect.
2. The method of claim 1, wherein the set of predetermined animation effects includes the Ken Burns effect.
3. The method of claim 1, wherein the set of predetermined animation effects includes automatic zoom-in and zoom-out effects.
4. The method of claim 1, wherein the at least one predetermined animation effect includes emphasizing the first selected portion of the picture.
5. The method of claim 1, further comprising:
selecting a second portion of the picture by means of a computer implemented main subject identification algorithm; and
moving the center of the image display on the second selected portion of the picture.
6. The method of claim 5, wherein the moving the center of the image display includes:
de-emphasizing the first selected portion of the picture;
moving the center of the image display to the second selected portion of the picture; and
emphasizing the second selected portion of the picture.
7. The method of claim 6, wherein the step of emphasizing includes applying a zoom-in effect, and wherein the de-emphasizing includes applying a zoom-out effect.
8. The method of claim 1, wherein the main subject identification algorithm includes:
mapping a picture with a grid having a plurality of adjacent squares, each square being assigned a density factor indicative of the amount of information included in the square;
predetermining a density upper threshold and a density lower threshold;
identifying on the picture a first largest possible set of adjacent squares all having a density factor higher than the predetermined upper threshold and a second largest possible set of adjacent squares all having a density factor less than the predetermined lower threshold; and
selecting as the main subject one of the first and second set.
9. The method of claim 8, further including:
calculating an average density value of the plurality of squares in the picture; and wherein the selecting as the main subject includes:
selecting as the main subject, between the first and second set, the set having an average density factor value with the absolute greatest difference from the average density value.
10. The method of claim 1, wherein the main subject identification algorithm includes a face detection algorithm or a pattern recognition algorithm.
11. A data processing system for simulating an animation effect during the display of a digitally encoded picture, the data processing system comprising:
a storage device including a storage medium, wherein the storage device stores computer usable program code; and
a processor, wherein the processor executes the computer usable program code, and wherein the computer usable program code comprises:
computer usable code for accessing a set of predetermined animation effects;
computer usable code for identifying a first selected portion of the picture with a computer implemented main subject identification algorithm;
computer usable code for centering the image display on the first selected portion of the picture;
computer usable code for selecting at least one of the predetermined animation effects; and
computer usable code for applying to the picture the selected at least one predetermined animation effect.
12. A computer usable program product comprising a computer usable non-transitory storage medium including computer usable code for simulating an animation effect during the display of a digitally encoded picture, the computer usable code comprising:
computer usable code for accessing a set of predetermined animation effects;
computer usable code for identifying a first selected portion of the picture with a computer implemented main subject identification algorithm;
computer usable code for centering the image display on the first selected portion of the picture;
computer usable code for selecting at least one of the predetermined animation effects; and
computer usable code for applying to the picture the selected at least one predetermined animation effect.
13. The computer usable program product of claim 12, wherein the set of predetermined animation effects includes the Ken Burns effect.
14. The computer usable program product of claim 12, wherein the set of predetermined animation effects includes automatic zoom-in and zoom-out effects.
15. The computer usable program product of claim 12, wherein the at least one predetermined animation effect includes emphasizing the first selected portion of the picture.
16. The computer usable program product of claim 12, further comprising:
computer usable code for selecting a second portion of the picture by means of a computer implemented main subject identification algorithm; and
computer usable code for moving the center of the image display on the second selected portion of the picture.
17. The computer usable program product of claim 16, wherein the moving the center of the image display includes:
computer usable code for de-emphasizing the first selected portion of the picture;
computer usable code for moving the center of the image display to the second selected portion of the picture; and
computer usable code for emphasizing the second selected portion of the picture.
18. The computer usable program product of claim 17, wherein the emphasizing includes applying a zoom-in effect, and wherein the de-emphasizing includes applying a zoom-out effect.
19. The computer usable program product of claim 12, wherein the main subject identification algorithm includes:
computer usable code for mapping a picture with a grid having a plurality of adjacent squares, each square being assigned a density factor indicative of the amount of information included in the square;
computer usable code for predetermining a density upper threshold and a density lower threshold;
computer usable code for identifying on the picture a first largest possible set of adjacent squares all having a density factor higher than the predetermined upper threshold and a second largest possible set of adjacent squares all having a density factor less than the predetermined lower threshold; and
computer usable code for selecting as the main subject one of the first and second set.
20. The computer usable program product of claim 19, further including:
computer usable code for calculating an average density value of the plurality of squares in the picture; and
wherein the selecting as the main subject includes:
computer usable code for selecting as the main subject, between the first and second set, the set having an average density factor value with the absolute greatest difference from the average density value.
US12/896,271 2010-10-01 2010-10-01 Simulating animation during slideshow Expired - Fee Related US8576234B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US12/896,271 US8576234B2 (en) 2010-10-01 2010-10-01 Simulating animation during slideshow

Publications (2)

Publication Number Publication Date
US20120081404A1 true US20120081404A1 (en) 2012-04-05
US8576234B2 US8576234B2 (en) 2013-11-05

Family

ID=45889390

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/896,271 Expired - Fee Related US8576234B2 (en) 2010-10-01 2010-10-01 Simulating animation during slideshow

Country Status (1)

Country Link
US (1) US8576234B2 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5880743A (en) * 1995-01-24 1999-03-09 Xerox Corporation Apparatus and method for implementing visual animation illustrating results of interactive editing operations
US6404433B1 (en) * 1994-05-16 2002-06-11 Apple Computer, Inc. Data-driven layout engine
US20030146915A1 (en) * 2001-10-12 2003-08-07 Brook John Charles Interactive animation of sprites in a video production
US20050041872A1 (en) * 2003-08-20 2005-02-24 Wai Yim Method for converting PowerPoint presentation files into compressed image files
US8352397B2 (en) * 2009-09-10 2013-01-08 Microsoft Corporation Dependency graph in data-driven model
US8409000B1 (en) * 2012-03-09 2013-04-02 Hulu Llc Configuring advertisements in a video segment based on a game result

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11082773B2 (en) 2012-06-05 2021-08-03 Apple Inc. Context-aware voice guidance
US11290820B2 (en) 2012-06-05 2022-03-29 Apple Inc. Voice instructions during navigation
US10508926B2 (en) 2012-06-05 2019-12-17 Apple Inc. Providing navigation instructions while device is in locked mode
US20170052672A1 (en) * 2012-06-05 2017-02-23 Apple Inc. Mapping application with 3d presentation
US11727641B2 (en) 2012-06-05 2023-08-15 Apple Inc. Problem reporting in maps
US10156455B2 (en) 2012-06-05 2018-12-18 Apple Inc. Context-aware voice guidance
US10318104B2 (en) 2012-06-05 2019-06-11 Apple Inc. Navigation application with adaptive instruction text
US11055912B2 (en) 2012-06-05 2021-07-06 Apple Inc. Problem reporting in maps
US10911872B2 (en) 2012-06-05 2021-02-02 Apple Inc. Context-aware voice guidance
US10732003B2 (en) 2012-06-05 2020-08-04 Apple Inc. Voice instructions during navigation
US10323701B2 (en) 2012-06-05 2019-06-18 Apple Inc. Rendering road signs during navigation
US10718625B2 (en) 2012-06-05 2020-07-21 Apple Inc. Voice instructions during navigation
KR20200047782A (en) * 2014-02-12 2020-05-07 소니 주식회사 A method for presentation of images
WO2015121699A1 (en) * 2014-02-12 2015-08-20 Sony Corporation A method for presentation of images
KR102362997B1 (en) 2014-02-12 2022-02-16 소니그룹주식회사 A method for presentation of images
KR102108606B1 (en) 2014-02-12 2020-05-07 소니 주식회사 A method for presentation of images
US10063791B2 (en) 2014-02-12 2018-08-28 Sony Mobile Communications Inc. Method for presentation of images
KR20160122144A (en) * 2014-02-12 2016-10-21 소니 주식회사 A method for presentation of images
US20160140748A1 (en) * 2014-11-14 2016-05-19 Lytro, Inc. Automated animation for presentation of images
US11956609B2 (en) 2021-01-28 2024-04-09 Apple Inc. Context-aware voice guidance

Also Published As

Publication number Publication date
US8576234B2 (en) 2013-11-05

Similar Documents

Publication Publication Date Title
US8576234B2 (en) Simulating animation during slideshow
CN108537859B (en) Image mask using deep learning
Korayem et al. Enhancing lifelogging privacy by detecting screens
Jian et al. The extended marine underwater environment database and baseline evaluations
JP5035432B2 (en) Method for generating highly condensed summary images of image regions
Cheng et al. Learning to photograph
EP2356631B1 (en) Systems and methods for evaluating robustness
Kumar et al. Automatic segmentation of liver and tumor for CAD of liver
CN109064390A (en) A kind of image processing method, image processing apparatus and mobile terminal
WO2010021625A1 (en) Automatic creation of a scalable relevance ordered representation of an image collection
JP5524219B2 (en) Interactive image selection method
KR20140041557A (en) Hierarchical, zoomable presentations of media sets
CA2739023A1 (en) Systems and methods for optimizing a scene
CN110832583A (en) System and method for generating a summary storyboard from a plurality of image frames
Chen et al. Improved seam carving combining with 3D saliency for image retargeting
CN107944420A (en) The photo-irradiation treatment method and apparatus of facial image
CN111066026A (en) Techniques for providing virtual light adjustments to image data
WO2021010974A1 (en) Automatically segmenting and adjusting images
Goudelis et al. Fall detection using history triple features
US9471967B2 (en) Relighting fragments for insertion into content
US8218823B2 (en) Determining main objects using range information
Agus et al. Data-driven analysis of virtual 3D exploration of a large sculpture collection in real-world museum exhibitions
CN105141974B (en) A kind of video clipping method and device
CN108776959B (en) Image processing method and device and terminal equipment
CN110378840A (en) Image processing method and device

Legal Events

Date Code Title Description
AS Assignment

Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW Y

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MACLELLAN, SCOT;ORLANDI, IVAN;REEL/FRAME:025183/0835

Effective date: 20101004

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees

Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.)

STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20171105