US20120224833A1 - Apparatus and method for segmenting video data in mobile communication terminal - Google Patents

Apparatus and method for segmenting video data in mobile communication terminal

Info

Publication number
US20120224833A1
Authority
US
United States
Prior art keywords
sensor
video
data
video data
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US13/411,458
Other versions
US8929713B2 (en)
Inventor
Sun-hee Youm
Jin-guk Jeong
Mi-hwa Park
Soo-Hong Park
Min-Ho Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Assigned to SAMSUNG ELECTRONICS CO., LTD reassignment SAMSUNG ELECTRONICS CO., LTD ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JEONG, JIN-GUK, LEE, MIN-HO, PARK, MI-HWA, PARK, SOO-HONG, YOUM, SUN-HEE
Publication of US20120224833A1 publication Critical patent/US20120224833A1/en
Application granted granted Critical
Publication of US8929713B2 publication Critical patent/US8929713B2/en
Legal status: Active

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M1/00 Substation equipment, e.g. for use by subscribers
    • H04M1/72 Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 User interfaces specially adapted for cordless or mobile telephones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/765 Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • H04N5/772 Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera the recording apparatus and the television camera being placed in the same enclosure
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/91 Television signal processing therefor
    • H04N5/93 Regeneration of the television signal or of selected parts thereof
    • H04N5/9305 Regeneration of the television signal or of selected parts thereof involving the mixing of the reproduced video signal with a non-recorded signal, e.g. a text signal
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/12 Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04M TELEPHONIC COMMUNICATION
    • H04M2250/00 Details of telephonic subscriber devices
    • H04M2250/52 Details of telephonic subscriber devices including functional features of a camera

Definitions

  • In step 215, the mobile communication terminal determines whether it is requested to stop video shooting according to the user's key manipulation. If so, the mobile communication terminal stores, in step 217, the video data segmented based on the sensor data acquired periodically through the driven sensor(s). Otherwise, the mobile communication terminal returns to step 209 and repeats the subsequent steps.
  • After step 217, the mobile communication terminal ends the process of FIG. 2.
  • FIG. 3 illustrates an example method for playing video data segmented based on sensor data in a mobile communication terminal according to an embodiment of the present invention.
  • the mobile communication terminal determines whether video playback is requested according to a user's key manipulation in step 301. If so, the mobile communication terminal displays a sensor list on a screen in step 303, in order to allow the user to select one or more of the sensors that were used to segment the video data during shooting.
  • the sensor list may include a position sensor, such as a GPS, a temperature sensor, an orientation sensor, a noise sensor, and the like.
  • the mobile communication terminal determines whether one or more sensors are selected from the sensor list displayed on the screen. If so, the mobile communication terminal plays the corresponding video in step 307 and displays on the screen a list of the sensor data that was previously stored, with respect to each selected sensor, during the shooting of the corresponding video.
  • the sensor data may include a position value, a temperature value, an orientation value, a noise value, and the like.
  • In step 309, the mobile communication terminal determines whether one piece of sensor data is selected from the sensor data list displayed on the screen. If so, the mobile communication terminal extracts, in step 311, a playback position (or playback time) offset value of the video data that was previously stored together with the selected sensor data during the shooting of the corresponding video.
  • In step 313, the mobile communication terminal determines a playback position (or playback time) of the corresponding video data based on the extracted playback position (or playback time) offset value.
  • the mobile communication terminal then randomly accesses the determined playback position (or playback time) and plays the corresponding video data. For example, suppose that, during video shooting, the shooting position is checked through a position sensor and the moving path of the shooting is stored together with the video data, or that several videos shot in this manner are edited into a single video and stored. If, when viewing the video after the shooting is complete, the user wants to find and play the portion shot in a specific area, the mobile communication terminal may jump directly to the video of the corresponding area and play it, based on the sensor data that was provided from the position sensor and stored together with the video data during the video shooting.
  • the mobile communication terminal ends the process of FIG. 3 .
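The offset lookup and random access described for FIG. 3 can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function and parameter names are assumptions, and `seek` stands in for the terminal's actual player control.

```python
# Hypothetical lookup for the FIG. 3 flow: given the sensor data value the
# user picked from the on-screen list, find its stored playback-position
# offset and hand it to the player's seek callback.

def seek_to_sensor_value(marks_for_sensor, selected_value, seek):
    """marks_for_sensor: list of (offset_seconds, value) pairs stored during
    shooting; seek: player callback taking an offset in seconds."""
    for offset, value in marks_for_sensor:
        if value == selected_value:
            seek(offset)          # random access to the stored offset
            return offset
    return None                   # value was never recorded during shooting

# Example: position values stored during shooting, keyed by offset.
positions = [(0.0, "FR"), (42.5, "DE"), (88.0, "SE")]
offset = seek_to_sensor_value(positions, "DE", lambda t: None)
# offset == 42.5
```

A real terminal would pass the player's seek function instead of the no-op lambda used here for illustration.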
  • as described above, the video data and the sensor data acquired from the various sensors (for example, a position value, a temperature value, an orientation value, a noise value, and the like) are stored together with the playback position (or playback time) offset value of the corresponding video data, and the corresponding video data is segmented based on that sensor data. Accordingly, more diverse video services may be provided to the user based on the sensor data. As one example, a video search based on the sensor data may be performed through a video search service.
  • conventionally, a video shot across several regions and stored as a single file could be found only when the search word matched its description, for example, only when the search word is 'France.'
  • with segmentation based on position data, video data related to the corresponding regions within the video file can be searched and played even when the search word is 'Germany' or 'Sweden.' Therefore, search accuracy and convenience can be increased.
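The region-search idea above can be sketched as below. The names and the mapping from stored position values to country names are assumed for illustration; the patent does not specify how position data is encoded or matched.

```python
# Hypothetical sensor-data search: match a search word against the stored
# position values rather than only the video's title or description, so
# 'Germany' or 'Sweden' also resolves to a playback position in the file.

COUNTRY_NAMES = {"FR": "France", "DE": "Germany", "SE": "Sweden"}

def search_by_region(marks_for_position, query):
    """Return playback offsets whose stored position value matches the query."""
    return [offset for offset, code in marks_for_position
            if COUNTRY_NAMES.get(code, code).lower() == query.lower()]

hits = search_by_region([(0.0, "FR"), (42.5, "DE"), (88.0, "SE")], "Germany")
# hits == [42.5]
```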

Abstract

In one embodiment, a method for segmenting video data in a mobile communication terminal includes acquiring sensor data periodically together with video data during video shooting, and segmenting the video data based on the acquired sensor data.

Description

    CROSS-REFERENCE TO RELATED APPLICATION(S) AND CLAIM OF PRIORITY
  • The present application is related to and claims priority under 35 U.S.C. §119 to an application filed in the Korean Intellectual Property Office on Mar. 2, 2011 and assigned Serial No. 10-2011-0018377, the contents of which are incorporated herein by reference.
  • TECHNICAL FIELD OF THE INVENTION
  • The present invention relates generally to a mobile communication terminal, and in particular, to an apparatus and method for segmenting video data in a mobile communication terminal.
  • BACKGROUND OF THE INVENTION
  • Video segmentation is a technique that segments video data for jumping to a specific playback position within a video and playing the video from the specific playback position.
  • One particular conventional video segmentation technique uses a video analyzing method to separate news services by distinguishing speakers through a voice analysis or the like, or to notify the change of an image by distinguishing a difference between images.
  • However, such a video analyzing method may often fail to provide an adequate segmentation technique. Therefore, as in the case of a chaptering service on a Digital Video Disk (DVD) or the like, video data segmented by an author's arbitrary decision at the content creation step has been distributed.
  • SUMMARY OF THE INVENTION
  • To address the above-discussed deficiencies of the prior art, it is a primary object to provide at least the advantages below. Accordingly, an object of the present invention is to provide an apparatus and method for segmenting video data in a mobile communication terminal.
  • Another object of the present invention is to provide an apparatus and method for segmenting video data based on sensor data in a mobile communication terminal.
  • Another object of the present invention is to provide an apparatus and method for segmenting video data based on sensor data by storing video data and sensor data acquired from a variety of sensors, together with a playback position (or playback time) offset value of the corresponding video data, during video shooting in a mobile communication terminal.
  • Another object of the present invention is to provide an apparatus and method for storing a changed sensor data value of a current period in comparison with a previous period and a playback position (or playback time) offset value of corresponding video data for each sensor in a metadata storage space within a video file configured with the video data in a metadata format in a mobile communication terminal.
  • According to an aspect of the present invention, a method for segmenting video data in a mobile communication terminal includes: acquiring sensor data periodically together with video data during video shooting; and segmenting the video data based on the acquired sensor data.
  • According to another aspect of the present invention, an apparatus for segmenting video data in a mobile communication terminal includes: a camera unit for acquiring video data through video shooting; a sensor unit for acquiring sensor data periodically during the video shooting; and a video segmenting unit for segmenting the video data based on the acquired sensor data.
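The apparatus decomposition above can be sketched as a minimal data model. This is a hedged illustration only: the claim prescribes neither classes, an implementation language, nor these names, and the camera unit is omitted.

```python
class SensorUnit:
    """Acquires sensor data periodically during video shooting (stubbed)."""
    def read_all(self):
        # A real unit would poll position, temperature, orientation,
        # and noise sensors; this stub returns one fixed sample.
        return {"position": "FR"}

class VideoSegmentingUnit:
    """Records a segment boundary whenever a sensor's value changes
    between periods, together with the playback-position offset."""
    def __init__(self):
        self.previous = {}
        self.marks = []   # (sensor, changed value, offset in seconds)

    def on_period(self, offset_s, samples):
        for sensor, value in samples.items():
            if sensor in self.previous and self.previous[sensor] != value:
                self.marks.append((sensor, value, offset_s))
            self.previous[sensor] = value
```

Usage: feed one `{sensor: value}` dict per period into `on_period`; only changed values produce entries in `marks`.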
  • Before undertaking the DETAILED DESCRIPTION OF THE INVENTION below, it may be advantageous to set forth definitions of certain words and phrases used throughout this patent document: the terms “include” and “comprise,” as well as derivatives thereof, mean inclusion without limitation; the term “or” is inclusive, meaning and/or; the phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like; and the term “controller” means any device, system or part thereof that controls at least one operation; such a device may be implemented in hardware, firmware or software, or some combination of at least two of the same. It should be noted that the functionality associated with any particular controller may be centralized or distributed, whether locally or remotely. Definitions for certain words and phrases are provided throughout this patent document; those of ordinary skill in the art should understand that in many, if not most instances, such definitions apply to prior, as well as future, uses of such defined words and phrases.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of the present disclosure and its advantages, reference is now made to the following description taken in conjunction with the accompanying drawings, in which like reference numerals represent like parts:
  • FIG. 1 illustrates an example configuration of a mobile communication terminal according to an embodiment of the present invention;
  • FIG. 2 illustrates an example method for segmenting video data based on sensor data in a mobile communication terminal according to an exemplary embodiment of the present invention;
  • FIG. 3 illustrates an example method for playing video data segmented based on sensor data in a mobile communication terminal according to an exemplary embodiment of the present invention;
  • FIG. 4 illustrates an example method for segmenting video data based on sensor data in a mobile communication terminal according to an exemplary embodiment of the present invention; and
  • FIG. 5 illustrates an example format of video data according to an exemplary embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • FIGS. 1 through 5, discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged mobile communications terminals. In the following description, detailed descriptions of well-known functions or configurations will be omitted since they would unnecessarily obscure the subject matters of the present invention. Also, the terms used herein are defined according to the functions of the present invention. Thus, the terms may vary depending on users' or operators' intentions or practices. Therefore, the terms used herein must be understood based on the descriptions made herein.
  • Hereinafter, an apparatus and method for segmenting video data based on sensor data in a mobile communication terminal according to exemplary embodiments of the present invention will be described.
  • Although a mobile communication terminal will be described below as one example, it is apparent that the present invention can also be applied to any device that is provided with a built-in sensor (or connected to an external sensor) and is capable of shooting or playing a video.
  • FIG. 1 illustrates an example configuration of a mobile communication terminal according to an exemplary embodiment of the present invention. The mobile communication terminal includes a control unit 100, a camera unit 102, a video processing unit 104, a display unit 106, an input unit 108, a video segmenting unit 110, a sensor unit 112, a memory unit 114, and an audio processing unit 118.
  • The control unit 100 controls an overall operation of the mobile communication terminal and processes a function of segmenting video data based on sensor data.
  • The camera unit 102 includes a camera sensor and a signal processor. The camera sensor converts an optical signal detected during video shooting into an electrical signal. The signal processor converts an analog video signal from the camera sensor into digital video data. In certain embodiments, the camera sensor may be a Charge Coupled Device (CCD) sensor, and the signal processor may be a Digital Signal Processor (DSP).
  • The video processing unit 104 generates screen data for displaying camera video data received from the camera unit 102. The video processing unit 104 includes a video codec (not shown) that codes video data in accordance with a specified protocol or decodes coded video data into original video data.
  • The display unit 106 displays information, such as numerals and characters, moving pictures and still pictures, and status information generated during the operation of the mobile communication terminal. The display unit 106 may be a color Liquid Crystal Display (LCD) or other display device.
  • The input unit 108 includes alpha-numeric keys and a plurality of function keys. The input unit 108 provides the control unit 100 with key input data that corresponds to a key pressed by a user.
  • Upon video shooting, the video segmenting unit 110 segments video data based on sensor data by storing the video data and the sensor data acquired from a variety of sensors in the memory unit 114, together with a playback position (or playback time) offset value of the corresponding video data.
  • The sensor unit 112 includes one or more sensors. For example, the sensor unit 112 may include a position sensor (for example, a Global Positioning System (GPS)), a temperature sensor, an orientation sensor, a noise sensor, and the like. The sensor may be built in the mobile communication terminal, or may be configured outside the mobile communication terminal and coupled to the mobile communication terminal via a wired or wireless connection.
  • The memory unit 114 stores a variety of reference data, instructions of an executable program for processing and controlling the control unit 100, temporary data generated during the execution of various programs, and/or updatable backup data. In particular, the memory unit 114 stores an executable program for segmenting the video data based on the sensor data. In addition, the memory unit 114 may store a video file 116.
  • The audio processing unit 118 includes an audio codec 107 that processes an audio signal such as a voice signal. In particular, the audio processing unit 118 may process an audio signal from the control unit 100 and plays the processed audio signal through a speaker, or may process an audio signal from a microphone and provides the processed audio signal to the control unit 100.
  • FIG. 2 illustrates an example method for segmenting video data based on sensor data in a mobile communication terminal according to an exemplary embodiment of the present invention. The mobile communication terminal determines whether video shooting is requested according to a user's key manipulation in step 201. If so, the mobile communication terminal displays a sensor list on a screen in order to allow the user to select one or more sensors to be used to segment video data in step 203. The sensor list may include a position sensor, such as a GPS device, a temperature sensor, an orientation sensor, a noise sensor, and the like.
  • In step 205, the mobile communication terminal determines whether one or more sensors are selected from the sensor list displayed on the screen. For example, if it is intended to check a shooting position of a current video and store a moving path of the video shooting together with the video data, the user may select the position sensor and use the selected position sensor to segment the video data. When it is determined in step 205 that one or more sensors are selected from the sensor list displayed on the screen, the mobile communication terminal drives a camera and the selected sensor(s) in step 207.
  • In step 209, the mobile communication terminal acquires video data by shooting a video through the driven camera, and acquires sensor data periodically through the driven sensor(s). For example, the sensor data may include a position value, a temperature value, an orientation value, a noise value, and the like.
  • In step 211, the mobile communication terminal compares, for each selected sensor, the sensor data acquired at the current period with the sensor data acquired at the previous period, and checks whether the sensor data value has changed. If so, the mobile communication terminal determines that the current playback position (or playback time) of the video data is a playback position (or playback time) at which the video data should be segmented based on the sensor data, and stores the changed sensor data value together with a playback position (or playback time) offset value of the video data in step 213. Then, the mobile communication terminal proceeds to step 215. As such, while the mobile communication terminal shoots the video, it segments the video data by storing, for each sensor, the sensor data value that changed between the previous period and the current period together with the playback position (or playback time) offset value of the corresponding video data. Accordingly, as illustrated in FIG. 4, a plurality of video data segments based on the sensor data of each sensor may be created.
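The change-detection loop of steps 209 through 213 can be sketched in a few lines. This is an illustrative reduction, not the patented implementation; the function name `record_segment_marks`, the tuple layout of the readings, and the treatment of the very first reading of each sensor as a change (so that each sensor's segment list starts at the beginning of the video, a point the text leaves unspecified) are all assumptions.

```python
# Sketch of steps 209-213: for each selected sensor, whenever the value read
# at the current period differs from the previous period, store the changed
# value together with the playback offset at which the change was observed.

def record_segment_marks(readings):
    """readings: iterable of (playback_offset, sensor_name, value) tuples,
    ordered by capture time. Returns sensor_name -> [(offset, value), ...]."""
    previous = {}  # last value seen per sensor
    marks = {}     # per-sensor list of (playback offset, changed value)
    for offset, sensor, value in readings:
        if sensor in previous and previous[sensor] == value:
            continue  # unchanged since the previous period: store nothing
        previous[sensor] = value  # assumption: first reading counts as a change
        marks.setdefault(sensor, []).append((offset, value))
    return marks
```

Each entry in `marks` then corresponds to one boundary of FIG. 4: the video between two consecutive offsets of a sensor is the segment during which that sensor's value stayed constant.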
  • As one example, which is illustrated in FIG. 5, the sensor data value that changed between the previous period and the current period for each sensor, and the playback position (or playback time) offset value of the corresponding video data, may be stored in a metadata format in the metadata 504 storage space within the video file 500 configured with the video data 502. That is, the changed sensor data value and the playback position (or playback time) offset value may be stored separately from the video data within the corresponding video file. As another example, the changed sensor data value and the playback position (or playback time) offset value may be stored within the corresponding video data.
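As a concrete illustration of the FIG. 5 layout, the per-sensor marks could be serialized into the metadata storage space of the video file. The patent fixes no concrete encoding, so the JSON structure and field names below are purely assumptions standing in for the "metadata format."

```python
import json

# Hypothetical serialization of the per-sensor marks into the metadata 504
# storage space of video file 500; the field names are assumptions.
marks = {
    "position":    [{"offset": 0.0,   "value": "France"},
                    {"offset": 312.4, "value": "Germany"}],
    "temperature": [{"offset": 0.0,   "value": 18.5}],
}
metadata_blob = json.dumps(marks).encode("utf-8")  # bytes written alongside the video data
```

On playback, the terminal would parse this blob back into the same structure before building the sensor data list of FIG. 3.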
  • On the other hand, if it is determined from the sensor data comparison of step 211 that the sensor data value has not changed, the mobile communication terminal proceeds directly to step 215. In step 215, the mobile communication terminal determines whether it is requested to stop the video shooting according to the user's key manipulation. If so, the mobile communication terminal stores, in step 217, the video data segmented based on the sensor data acquired periodically through the driven sensor(s). Otherwise, the mobile communication terminal returns to step 209 and repeats the subsequent steps.
  • Thereafter, the mobile communication terminal ends the process of FIG. 2.
  • FIG. 3 illustrates an example method for playing video data segmented based on sensor data in a mobile communication terminal according to an embodiment of the present invention. Referring to FIG. 3, the mobile communication terminal checks whether playback of video data segmented based on sensor data is requested according to a user's key manipulation in step 301. If so, the mobile communication terminal displays a sensor list on a screen in step 303 in order to allow the user to select one or more of the sensors that were used to segment the video data. The sensor list may include a position sensor, such as a GPS device, a temperature sensor, an orientation sensor, a noise sensor, and the like.
  • In step 305, the mobile communication terminal determines whether one or more sensors are selected from the sensor list displayed on the screen. If so, the mobile communication terminal plays the corresponding video in step 307 and displays on the screen a list of the sensor data that was previously stored, with respect to each selected sensor, during the shooting of the corresponding video. For example, the sensor data may include a position value, a temperature value, an orientation value, a noise value, and the like.
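The on-screen list of step 307 follows directly from the stored marks. The helper below is an illustrative assumption (its name and the `marks` layout are not from the patent), showing how the values recorded during shooting could be gathered per selected sensor for display.

```python
def sensor_data_list(marks, selected_sensors):
    """Build the step-307 display list: the sensor data values stored during
    shooting, keyed by selected sensor.
    marks: sensor_name -> [(playback_offset, value), ...]."""
    return {sensor: [value for _offset, value in marks.get(sensor, [])]
            for sensor in selected_sensors}
```

A sensor that recorded no change during shooting simply contributes an empty list.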
  • In step 309, the mobile communication terminal determines whether one sensor data value is selected from the sensor data list displayed on the screen. If so, in step 311 the mobile communication terminal extracts the playback position (or playback time) offset value of the video data that was previously stored together with the selected sensor data during the shooting of the corresponding video.
  • In step 313, the mobile communication terminal determines a playback position (or playback time) of the corresponding video data based on the extracted playback position (or playback time) offset value of the video data.
  • In step 315, the mobile communication terminal randomly accesses the determined playback position (or playback time) and plays the corresponding video data. For example, suppose that during shooting the shooting position was checked through a position sensor and the moving path was stored together with the video data, or that videos shot in this manner were edited into a single video and stored. If, when viewing the video after the shooting is complete, the user wants to directly find and play the portion shot in a specific area, the mobile communication terminal may jump directly to the video of the corresponding area and play it, based on the sensor data that was provided by the position sensor and stored together with the video data during the video shooting.
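Steps 311 through 315 reduce to looking up the offset stored with the selected sensor data value and seeking to it. A minimal sketch under the `marks` layout assumed earlier; `seek_offset` and the commented player call are hypothetical names, not part of the patent or any real player API.

```python
def seek_offset(marks, sensor, wanted_value):
    """Return the playback offset stored with the first mark of `sensor`
    whose value equals wanted_value (steps 311-313), or None if the value
    was never recorded for that sensor.
    marks: sensor_name -> [(playback_offset, value), ...]."""
    for offset, value in marks.get(sensor, []):
        if value == wanted_value:
            return offset
    return None

# Step 315 would then randomly access the returned offset, e.g.:
#   player.seek(seek_offset(marks, "position", "Germany"))  # hypothetical player API
```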
  • Thereafter, the mobile communication terminal ends the process of FIG. 3.
  • As such, when the mobile communication terminal shoots the video, the sensor data acquired from the various sensors is stored together with the playback position (or playback time) offset value of the corresponding video data, and the corresponding video data is segmented based on the sensor data. Accordingly, more diverse video services may be provided to the user based on the sensor data. As one example, a video search based on the sensor data may be achieved through a video search service. For instance, consider a video shot by a user while traveling through France, Germany, and Sweden and entitled 'France.' Conventionally, the video could be found only when the search word is 'France.' According to an embodiment of the present invention, however, the video data related to the corresponding regions within the video file can be searched and played even when the search word is 'Germany' or 'Sweden.' Therefore, search accuracy and convenience can be increased.
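The France/Germany/Sweden search example can be sketched as a scan over the stored marks of each file in a video library. The function name and the library layout are assumptions made for illustration, reusing the per-sensor `marks` structure assumed earlier.

```python
def search_by_sensor_value(library, word):
    """library: filename -> (sensor_name -> [(playback_offset, value), ...]).
    Return (filename, offset) hits whose stored sensor value matches `word`,
    even when the file title does not contain it."""
    hits = []
    for filename, marks in library.items():
        for segment_list in marks.values():
            for offset, value in segment_list:
                if str(value).lower() == word.lower():
                    hits.append((filename, offset))
    return hits
```

A search for 'Germany' thus lands on the offset where the position value changed to Germany, inside a file titled 'France.'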
  • While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims. Therefore, the scope of the invention is defined not by the detailed description of the invention but by the appended claims, and all differences within the scope will be construed as being included in the present invention.

Claims (20)

1. A method for segmenting video data in a mobile communication terminal, the method comprising:
acquiring sensor data periodically together with video data during video shooting; and
segmenting the video data based on the acquired sensor data.
2. The method of claim 1, wherein the sensor data comprises at least one of a position value, a temperature value, an orientation value, and a noise value.
3. The method of claim 1, further comprising:
when the video shooting is requested, displaying a sensor list on a screen; and
when one or more sensors are selected from the sensor list, controlling a camera using the one or more sensors.
4. The method of claim 3, wherein the sensor list comprises at least one of a position sensor, a temperature sensor, an orientation sensor, and a noise sensor.
5. The method of claim 1, wherein the segmenting of the video data comprises:
determining whether a sensor data value has changed by comparing sensor data acquired at a current period with sensor data acquired at a previous period; and
if the sensor data value has changed, storing the changed sensor data value together with a playback position or playback time offset value of the video data.
6. The method of claim 5, wherein the changed sensor data value and the playback position or playback time offset value of the video data are stored in a metadata format in a metadata storage space within a video file configured with the video data.
7. The method of claim 5, wherein the changed sensor data value and the playback position or playback time offset value of the video data are stored in the video data.
8. The method of claim 1, further comprising:
when the playback of the segmented video data is requested, displaying a sensor list;
when one or more sensors are selected from the sensor list, displaying a list of sensor data used to segment the video data by the selected one or more sensors;
when one sensor data is selected from the list of sensor data, extracting a playback position or playback time offset value of a video data, which is previously stored together with the selected sensor data;
determining a playback position or playback time of a corresponding video data based on the extracted playback position or playback time offset value of the video data; and
randomly accessing the determined playback position or playback time and playing the corresponding video.
9. An apparatus configured to segment video data in a mobile communication terminal, the apparatus comprising:
a camera unit configured to acquire video data through video shooting;
a sensor unit configured to acquire sensor data periodically during the video shooting; and
a video segmenting unit configured to segment the video data based on the acquired sensor data.
10. The apparatus of claim 9, wherein the sensor data comprises at least one of a position value, a temperature value, an orientation value, and a noise value.
11. The apparatus of claim 9, further comprising a display unit configured to display a sensor list when the video shooting is requested, wherein, when one or more sensors are selected from the sensor list, the video segmenting unit configured to control a camera using the selected one or more sensors.
12. The apparatus of claim 11, wherein the sensor list comprises at least one of a position sensor, a temperature sensor, an orientation sensor, and a noise sensor.
13. The apparatus of claim 9, wherein the video segmenting unit is configured to compare sensor data acquired at a current period with sensor data acquired at a previous period and determine whether a sensor data value is changed and, if the sensor data value is changed, the video segmenting unit is configured to store a playback position or playback time offset value of the video data and the changed sensor data value.
14. The apparatus of claim 13, wherein the changed sensor data value and the playback position or playback time offset value of the video data are configured to be stored in a metadata format in a metadata storage space within a video file configured with the video data.
15. The apparatus of claim 13, wherein the changed sensor data value and the playback position or playback time offset value of the video data are stored in the corresponding video data.
16. The apparatus of claim 9, further comprising a display unit configured to display a sensor list when the playback of the segmented video data is requested, and display a list of sensor data used to segment the video data by one or more sensors that are selected from the sensor list;
wherein when one sensor data is selected from the list of sensor data, the video segmenting unit configured to extract a playback position or playback time offset value of the video data previously stored together with the selected sensor data, determine a playback position or playback time of a corresponding video data based on the extracted playback position or playback time offset value of the video data, randomly access the determined playback position or playback time, and play the corresponding video.
17. A non-transitory computer readable medium embodying a computer program, the computer program comprising computer readable program code configured to:
acquire sensor data periodically together with video data during video shooting; and
segment the video data based on the acquired sensor data.
18. The computer program of claim 17, further comprising code configured to:
when the video shooting is requested, display a sensor list on a screen; and
when one or more sensors are selected from the sensor list, control a camera using the one or more sensors.
19. The computer program of claim 18, wherein the sensor list comprises at least one of a position sensor, a temperature sensor, an orientation sensor, and a noise sensor.
20. The computer program of claim 17, further comprising code configured to:
when the playback of the segmented video data is requested, display a sensor list;
when one or more sensors are selected from the sensor list, display a list of sensor data used to segment the video data by the selected sensor or sensors;
when one sensor data is selected from the list of sensor data, extract a playback position or playback time offset value of a video data, which is previously stored together with the selected sensor data;
determine a playback position or playback time of a corresponding video data based on the extracted playback position or playback time offset value of the video data; and
randomly access the determined playback position or playback time and play the corresponding video.
US13/411,458 2011-03-02 2012-03-02 Apparatus and method for segmenting video data in mobile communication terminal Active US8929713B2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020110018377A KR101748576B1 (en) 2011-03-02 2011-03-02 Apparatus and method for segmenting video data in mobile communication teminal
KR10-2011-0018377 2011-03-02

Publications (2)

Publication Number Publication Date
US20120224833A1 true US20120224833A1 (en) 2012-09-06
US8929713B2 US8929713B2 (en) 2015-01-06

Family

ID=46753359

Family Applications (1)

Application Number Title Priority Date Filing Date
US13/411,458 Active US8929713B2 (en) 2011-03-02 2012-03-02 Apparatus and method for segmenting video data in mobile communication terminal

Country Status (3)

Country Link
US (1) US8929713B2 (en)
KR (1) KR101748576B1 (en)
WO (1) WO2012118298A2 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150199995A1 (en) * 2014-01-10 2015-07-16 ModCon IP LLC Modular content generation, modification, and delivery system
US20160307598A1 (en) * 2015-04-16 2016-10-20 Daniel Laurence Ford JOHNS Automated editing of video recordings in order to produce a summarized video recording

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5714997A (en) * 1995-01-06 1998-02-03 Anderson; David P. Virtual reality television system
US20020097322A1 (en) * 2000-11-29 2002-07-25 Monroe David A. Multiple video display configurations and remote control of multiple video signals transmitted to a monitoring station over a network
US20120173927A1 (en) * 2010-12-30 2012-07-05 American Power Conversion Corporation System and method for root cause analysis
US20120188376A1 (en) * 2011-01-25 2012-07-26 Flyvie, Inc. System and method for activating camera systems and self broadcasting
US8310542B2 (en) * 2007-11-28 2012-11-13 Fuji Xerox Co., Ltd. Segmenting time based on the geographic distribution of activity in sensor data

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3879170B2 (en) 1997-03-31 2007-02-07 ソニー株式会社 Imaging apparatus and imaging method
US20040143434A1 (en) 2003-01-17 2004-07-22 Ajay Divakaran Audio-Assisted segmentation and browsing of news videos
JP2004342252A (en) 2003-05-16 2004-12-02 Matsushita Electric Ind Co Ltd Apparatus and method for recording data
KR20070002159A (en) * 2005-06-30 2007-01-05 주식회사 팬택 Method and apparatus for managing moving picture
JP4654153B2 (en) 2006-04-13 2011-03-16 信越化学工業株式会社 Heating element
KR20080041837A (en) * 2006-11-08 2008-05-14 주식회사 히타치엘지 데이터 스토리지 코리아 Multimedia data recorder
KR20080041835A (en) * 2006-11-08 2008-05-14 주식회사 히타치엘지 데이터 스토리지 코리아 Multimedia data recorder
JP5191240B2 (en) 2008-01-09 2013-05-08 オリンパス株式会社 Scene change detection apparatus and scene change detection program
KR20090080720A (en) * 2008-01-22 2009-07-27 삼성전자주식회사 A system of management multimedia files and a method thereof


Also Published As

Publication number Publication date
US8929713B2 (en) 2015-01-06
KR20120099877A (en) 2012-09-12
WO2012118298A2 (en) 2012-09-07
KR101748576B1 (en) 2017-06-20
WO2012118298A3 (en) 2012-12-20

Similar Documents

Publication Publication Date Title
US10219011B2 (en) Terminal device and information providing method thereof
CN108259991B (en) Video processing method and device
US9589596B2 (en) Method and device of playing multimedia and medium
US9723366B2 (en) System and method to provide supplemental content to a video player
EP2947891A1 (en) Method for providing episode selection of video and apparatus thereof
US20130195427A1 (en) Method and apparatus for developing and utilizing multi-track video files
KR20210090262A (en) Information processing method and apparatus, electronic device and recording medium
EP2811751A1 (en) Method for providing media-content related information, device, server, and computer-readable storage medium for executing the method
EP2297990B1 (en) System and method for continuous playing of moving picture between two devices
US8875213B2 (en) Video player and portable computer with detection
US20130312026A1 (en) Moving-image playing apparatus and method
US20170186440A1 (en) Method, device and storage medium for playing audio
US20160026638A1 (en) Method and apparatus for displaying video
US8929713B2 (en) Apparatus and method for segmenting video data in mobile communication terminal
US9538119B2 (en) Method of capturing moving picture and apparatus for reproducing moving picture
US8874370B1 (en) Remote frames
KR101000924B1 (en) Caption presentation method and apparatus thereof
KR101619150B1 (en) The cimema screenings digital video content made by other foreign language easy seeing or hearing system and method for foreigners using smart devices
KR101619155B1 (en) The digital video content made by other foreign language easy seeing or hearing system and method for foreigners using smart devices
US20120063739A1 (en) User terminal, server, displaying method and information providing method thereof
KR102231163B1 (en) Electronic device for editing a video and method for operating thereof
US9294706B2 (en) Method and apparatus for playing back a moving picture
KR102607703B1 (en) Content playback program and content playback device
US20080168094A1 (en) Data Relay Device, Digital Content Reproduction Device, Data Relay Method, Digital Content Reproduction Method, Program, And Computer-Readable Recording Medium
KR20160114430A (en) Method and apparatus for providing second screen service

Legal Events

Date Code Title Description
AS Assignment

Owner name: SAMSUNG ELECTRONICS CO., LTD, KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:YOUM, SUN-HEE;JEONG, JIN-GUK;PARK, MI-HWA;AND OTHERS;REEL/FRAME:027800/0569

Effective date: 20120222

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551)

Year of fee payment: 4

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 8