US20050212753A1 - Motion controlled remote controller - Google Patents


Info

Publication number
US20050212753A1
US20050212753A1 (application US10/807,562)
Authority
US
United States
Prior art keywords
gesture
motion
remote
handheld device
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/807,562
Inventor
David L. Marvit
Albert H. M. Reinhardt
B. Thomas Adler
Bruce A. Wilcox
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to US10/807,562 (US20050212753A1)
Assigned to FUJITSU LIMITED: assignment of assignors' interest (see document for details). Assignors: WILCOX, BRUCE A.; ADLER, B. THOMAS; MARVIT, DAVID L.; REINHARDT, ALBERT H.M.
Priority to EP05724864A (EP1728142B1)
Priority to KR1020067019664A (KR100853605B1)
Priority to DE602005022685T (DE602005022685D1)
Priority to PCT/US2005/007409 (WO2005103863A2)
Priority to JP2007504983A (JP2007531113A)
Publication of US20050212753A1
Priority to JP2008192455A (JP4812812B2)
Priority to US12/826,439 (US7990365B2)

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/017 - Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/10 - File systems; File servers
    • G06F 16/17 - Details of further file system functions
    • G06F 16/176 - Support for shared access to files; File sharing support
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 17/00 - Arrangements for transmitting signals characterised by the use of a wireless electrical link
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/10 - Power supply of remote control devices
    • G08C 2201/12 - Power saving techniques of remote control or controlled devices
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/30 - User interface
    • G08C 2201/32 - Remote control based on movements, attitude of remote control device
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/30 - User interface
    • G08C 2201/33 - Remote control using macros, scripts
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/30 - User interface
    • G08C 2201/34 - Context aware guidance
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/40 - Remote control systems using repeaters, converters, gateways
    • G08C 2201/42 - Transmitting or receiving remote control signals via a network
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/60 - Security, fault tolerance
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/60 - Security, fault tolerance
    • G08C 2201/61 - Password, biometric
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/70 - Device selection
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/90 - Additional features
    • G08C 2201/92 - Universal remote control
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08C - TRANSMISSION SYSTEMS FOR MEASURED VALUES, CONTROL OR SIMILAR SIGNALS
    • G08C 2201/00 - Transmission systems of control signals via wireless link
    • G08C 2201/90 - Additional features
    • G08C 2201/93 - Remote control using other portable devices, e.g. mobile phone, PDA, laptop

Definitions

  • the present invention relates generally to portable devices and, more particularly, to portable devices with a motion interface.
  • The use of computing devices, such as cellular phones and personal digital assistants (PDAs), has grown rapidly. Such devices provide many different functions to users through different types of interfaces, such as keypads and displays.
  • Some computing devices utilize motion as an interface by detecting tilt of the device by a user.
  • Some implementations of a motion interface involve tethering a computing device with fishing lines or carrying large magnetic tracking units that require large amounts of power.
  • a handheld device with a motion interface is provided.
  • a handheld device includes a display having a viewable surface and operable to generate an image indicating a currently controlled remote device and a gesture database maintaining a plurality of remote command gestures.
  • Each remote command gesture is defined by a motion of the device with respect to a first position of the handheld device.
  • the device includes a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device and a motion detection module operable to detect motion of the handheld device within three dimensions and to identify components of the motion in relation to the viewable surface.
  • the device includes a control module operable to track movement of the handheld device using the motion detection module, to compare the tracked movement against the remote command gestures to determine a matching gesture, and to identify the one of the commands corresponding to the matching gesture.
  • the device also includes a wireless interface operable to transmit the identified command to a remote receiver for delivery to the remote device.
  • a method for remotely controlling devices includes generating, on a viewable surface of a handheld device, an image indicating a currently controlled remote device and maintaining a gesture database comprising a plurality of remote command gestures.
  • Each remote command gesture is defined by a motion of the device with respect to a first position of the handheld device.
  • the method includes maintaining a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device, tracking movement of the handheld device in relation to the viewable surface, comparing the tracked movement against the remote command gestures to determine a matching gesture, identifying the one of the commands corresponding to the matching gesture, and transmitting the identified command to a remote receiver for delivery to the remote device.
  • Particular embodiments allow a handheld device to control various other local and remote devices through motion input of the handheld device.
  • gestures of the handheld device may be used to communicate commands to a device selected for control.
  • motion input of one handheld device may be used to control a plurality of other devices, thus facilitating control of the other devices for a user.
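  • As a hedged illustration of this summary, the following minimal Python sketch shows the flow of matching a tracked movement against a gesture database, looking up the mapped command, and handing it to a wireless transmit callback; the names (Gesture, RemoteController, the nearest-pattern matcher) are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Gesture:
    name: str
    # Motion relative to a first (reference) position of the device,
    # stored here as a coarse sequence of acceleration samples.
    pattern: List[tuple]


class RemoteController:
    def __init__(self, gesture_db, gesture_map, transmit):
        self.gesture_db = gesture_db    # plurality of remote command gestures
        self.gesture_map = gesture_map  # gesture name -> remote command
        self.transmit = transmit        # wireless interface callback

    def handle_motion(self, tracked_movement) -> Optional[str]:
        """Compare tracked movement against stored gestures and, on a match,
        send the mapped command toward the remote receiver."""
        match = self._best_match(tracked_movement)
        if match is None:
            return None
        command = self.gesture_map[match.name]
        self.transmit(command)          # delivered to the controlled device
        return command

    def _best_match(self, movement) -> Optional[Gesture]:
        # Placeholder matcher: nearest stored pattern by summed squared distance.
        def dist(g):
            return sum(sum((a - b) ** 2 for a, b in zip(p, q))
                       for p, q in zip(g.pattern, movement))
        candidates = [g for g in self.gesture_db if len(g.pattern) == len(movement)]
        return min(candidates, key=dist, default=None)


db = [Gesture("volume_up", [(0, 1), (0, 1)]),
      Gesture("volume_down", [(0, -1), (0, -1)])]
rc = RemoteController(db, {"volume_up": "VOL+", "volume_down": "VOL-"}, transmit=print)
rc.handle_motion([(0.1, 0.9), (0.0, 1.1)])   # prints "VOL+"
```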
  • FIG. 1 illustrates a handheld device with motion interface capability, in accordance with a particular embodiment
  • FIG. 2 illustrates a motion detector of the handheld device of FIG. 1 , in accordance with a particular embodiment
  • FIG. 3 illustrates the use of motion detector components of the handheld device of FIG. 1 , in accordance with a particular embodiment
  • FIG. 4 illustrates an example handheld device with motion detection capability, in accordance with a particular embodiment
  • FIG. 5 illustrates an example of selection and amplification of a dominant motion of a handheld device, in accordance with a particular embodiment
  • FIG. 6 is a flowchart illustrating preferred motion selection, in accordance with a particular embodiment
  • FIG. 7 is a flowchart illustrating the setting of a zero-point for a handheld device, in accordance with a particular embodiment
  • FIG. 8 illustrates an example of scrubbing functionality with a handheld device for virtual desktop navigation, in accordance with a particular embodiment
  • FIG. 9 is a flowchart illustrating the scrubbing process of FIG. 8 , in accordance with a particular embodiment
  • FIG. 10A illustrates an example of menu navigation using gesture input, in accordance with a particular embodiment
  • FIG. 10B illustrates example gestures which may be used to perform various functions at a handheld device, in accordance with a particular embodiment
  • FIG. 11 illustrates an example of map navigation using motion input, in accordance with a particular embodiment
  • FIG. 12A illustrates a form of motion input cursor navigation, in accordance with a particular embodiment
  • FIG. 12B illustrates another form of motion input cursor navigation, in accordance with a particular embodiment
  • FIG. 13 is a flowchart illustrating a process for utilizing feedback in response to motion input, in accordance with a particular embodiment
  • FIG. 14 illustrates an example system utilizing spatial signatures with a handheld device, in accordance with a particular embodiment
  • FIG. 15 illustrates an example system in which motion input of a handheld device controls multiple other devices, in accordance with a particular embodiment
  • FIG. 16 is a flowchart illustrating an environmental modeling process of a handheld device, in accordance with a particular embodiment
  • FIG. 17 illustrates example gestures which may be mapped to different functions of a handheld device, in accordance with a particular embodiment
  • FIG. 18 is a flowchart illustrating the utilization of a preexisting symbol gesture, in accordance with a particular embodiment
  • FIG. 19 is a flowchart illustrating the use of context-based gesture mapping, in accordance with a particular embodiment
  • FIG. 20 is a flowchart illustrating the use of user-based gesture mapping, in accordance with a particular embodiment
  • FIG. 21 is a flowchart illustrating the assignment process for user-created gestures, in accordance with a particular embodiment
  • FIG. 22 illustrates three gestures input using a handheld device with varying levels of precision, in accordance with a particular embodiment
  • FIG. 23 is a flowchart illustrating a gesture recognition process utilizing a number of features, in accordance with a particular embodiment.
  • FIG. 1 illustrates a handheld device 10 with motion interface capability, in accordance with a particular embodiment of the present invention.
  • Handheld device 10 can recognize movement of the device and can perform various functions corresponding to such movement. Thus, movement of the device operates as a form of input for the device. Such movement input may directly alter what is being displayed on a device display or may perform other functions.
  • Handheld device 10 may comprise a mobile phone, personal digital assistant (PDA), still camera, video camera, pocket calculator, portable radio or other music or video player, digital thermometer, game device, portable electronic device, watch or any other device capable of being held or worn by a user.
  • handheld device 10 may include wearable portable devices such as watches as well.
  • a watch may include any computing device worn around a user's wrist.
  • Handheld device 10 includes a display 12 , input 14 , processor 16 , memory 18 , communications interface 20 and motion detector 22 .
  • Display 12 presents visual output of the device and may comprise a liquid crystal display (LCD), a light emitting diode (LED) or any other type of display for communicating output to a user.
  • Input 14 provides an interface for a user to communicate input to the device.
  • Input 14 may comprise a keyboard, keypad, track wheel, knob, touchpad, stencil or any other component through which a user may communicate an input to device 10 .
  • display 12 and input 14 may be combined into the same component, such as a touchscreen.
  • Processor 16 may be a microprocessor, controller or any other suitable computing device or resource. Processor 16 is adapted to execute various types of computer instructions in various computer languages for implementing functions available within handheld device 10 . Processor 16 may include any suitable controllers for controlling the management and operation of handheld device 10 .
  • Memory 18 may be any form of volatile or nonvolatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read only memory (ROM), removable media or any other suitable local or remote memory component.
  • Memory 18 includes components, logic modules or software executable by processor 16 .
  • Memory 18 may include various applications 19 with user interfaces utilizing motion input, such as mapping, calendar and file management applications, as further discussed below.
  • Memory 18 may also include various databases, such as gesture databases and function or gesture mapping databases, as further discussed below. Components of memory 18 may be combined and/or divided for processing according to particular needs or desires within the scope of the present invention.
  • Communications interface 20 supports wireless or wireline communication of data and information with other devices, such as other handheld devices, or components.
  • Motion detector 22 tracks movement of handheld device 10 which may be used as a form of input to perform certain functions. Such input movement may result from a user moving the device in a desired fashion to perform desired tasks, as further discussed below.
  • handheld device 10 in accordance with particular embodiments may include any suitable processing and/or memory modules for performing the functions as described herein, such as a control module, a motion tracking module, a video analysis module, a motion response module, a display control module and a signature detection module.
  • input movement may be in the form of translation and/or gestures.
  • Translation-based input focuses on a beginning point and endpoint of a motion and differences between such beginning points and endpoints.
  • Gesture-based input focuses on an actual path traveled by the device and is a holistic view of a set of points traversed.
  • With translation-based input, motion in the form of an “O” may change the display during the movement but may ultimately yield no change between the information displayed prior to the movement and the information displayed at the end of the movement, since the device presumably will be at the same point at which it started when the motion ends.
  • the device will recognize that it has traveled in the form of an “O” because in gesture-based input the device focuses on the path traveled during the motion or movement between a beginning point and an endpoint of the gesture (e.g., even though the beginning and endpoints may be the same).
  • This gesture “O” movement may be mapped to particular functions such that when the device recognizes it has traveled along a path to constitute an “O” gesture, it may perform the functions, as further elaborated upon below.
  • movement of the device intended as a gesture may be recognized by the device as a gesture by matching a series, sequence or pattern of accelerations of the movement to those defining gestures of a gesture database.
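  • As a hedged illustration of the distinction above (not the patent's algorithm), the sketch below contrasts a translation-based view, which sees only net displacement, with a gesture-based view that examines the whole traversed path, so a closed "O" path is recognizable even though it ends where it began; the circle test and tolerance are arbitrary choices.

```python
import math


def net_displacement(path):
    """Translation-based view: only the difference between endpoint and start."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    return (x1 - x0, y1 - y0)


def matches_circle(path, tol=0.25):
    """Gesture-based view: does the traversed path look like an 'O'?
    Rough test: points stay near a common radius about their centroid
    and the path returns close to its start."""
    cx = sum(p[0] for p in path) / len(path)
    cy = sum(p[1] for p in path) / len(path)
    radii = [math.hypot(x - cx, y - cy) for x, y in path]
    mean_r = sum(radii) / len(radii)
    round_enough = all(abs(r - mean_r) <= tol * mean_r for r in radii)
    closed = math.hypot(path[-1][0] - path[0][0],
                        path[-1][1] - path[0][1]) <= tol * mean_r
    return round_enough and closed


# A sampled circular path that ends where it started.
circle = [(math.cos(t), math.sin(t))
          for t in (i * 2 * math.pi / 16 for i in range(17))]
print(net_displacement(circle))   # ~(0, 0): translation input sees no change
print(matches_circle(circle))     # True: gesture input recognizes the "O"
```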
  • Handheld devices in accordance with other embodiments may not include some of the components of the device illustrated in FIG. 1 .
  • some embodiments may include a handheld device 10 without an input 14 separate from a motion detector such that motion of the device provides the sole or primary input for the device.
  • handheld devices in accordance with other embodiments may include additional components not specifically illustrated with respect to device 10 .
  • FIG. 2 illustrates motion detector 22 of FIG. 1 , in accordance with a particular embodiment of the present invention.
  • motion detector 22 includes accelerometers 24 a , 24 b and 24 c ; cameras 26 a , 26 b and 26 c ; gyros 28 a , 28 b and 28 c ; rangefinders 30 a , 30 b and 30 c ; and a processor 32 .
  • Accelerometers 24 a , 24 b and 24 c detect movement of the device by detecting acceleration along a respective sensing axis.
  • a particular movement of the device may comprise a series, sequence or pattern of accelerations detected by the accelerometers.
  • When the device is tilted, the gravitational acceleration along the sensing axis changes. This change in gravitational acceleration is detected by the accelerometer and reflects the tilt of the device.
  • Translation of the handheld device, or movement of the device without rotation or tilt, also produces a change in acceleration along a sensing axis which is also detected by the accelerometers.
  • accelerometer 24 a comprises an x-axis accelerometer that detects movement of the device along an x-axis
  • accelerometer 24 b comprises a y-axis accelerometer that detects movement of the device along a y-axis
  • accelerometer 24 c comprises a z-axis accelerometer that detects movement of the device along a z-axis.
  • accelerometers 24 a , 24 b and 24 c are able to detect rotation and translation of device 10 . As indicated above, rotation and/or translation of device 10 may serve as an input from a user to operate the device.
  • the use of three accelerometers for motion detection provides certain advantages. For example, if only two accelerometers were used, the motion detector may not be able to disambiguate translation of the handheld device from tilt in the plane of translation. However, using a third, z-axis accelerometer (an accelerometer with a sensing axis at least approximately perpendicular to the sensing axes of the other two accelerometers) enables many cases of tilt to be disambiguated from many cases of translation.
  • Certain movements of device 10 , however, may not be discernible from each other by accelerometers 24 a , 24 b and 24 c alone.
  • movement comprising a certain rotation and a certain translation may appear to accelerometers 24 a , 24 b and 24 c as the same movement as a different movement that comprises a different particular rotation and a different particular translation.
  • If a motion detector 22 merely included three accelerometers to detect movement (without any additional components to ensure greater accuracy), some unique, undiscernible movements may be mapped to the same function or may not be mapped to a function, to avoid confusion.
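  • A minimal sketch of why gravity on three orthogonal sensing axes reveals tilt: with the device otherwise at rest, the static acceleration components give a standard pitch/roll estimate. The formula below is a common textbook tilt computation offered only for illustration; it is not code from the patent.

```python
import math


def tilt_from_gravity(ax, ay, az):
    """Return (pitch, roll) in degrees from static accelerations in g units."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll


# Device lying flat: gravity entirely on the z axis.
print(tilt_from_gravity(0.0, 0.0, 1.0))    # ~ (0, 0)
# Device rolled about its x axis by roughly 30 degrees.
print(tilt_from_gravity(0.0, 0.5, 0.866))  # roll ~ 30 degrees
```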
  • motion detector 22 also includes cameras 26 a , 26 b and 26 c , which may comprise charge coupled device (CCD) cameras or other optical sensors. Cameras 26 a , 26 b and 26 c provide another way to detect movement of the handheld device (both tilt and translation). If only one camera were installed on a device for movement detection, tilt of the device may be indistinguishable from translation (without using other motion detection components, such as accelerometers). However, by using at least two cameras, tilt and translation may be distinguished from each other. For example, if two cameras were installed on handheld device 10 (one on the top of the device and one on the bottom of the device), each camera would see the world moving to the right when the device was translated to the left.
  • If both cameras see the world moving in the same direction, then the motion detector knows that the device is being translated. If both cameras see the world moving in opposite directions, then the motion detector knows that the device is being rotated.
  • the magnitude of the movement of the world to the cameras is directly related to the magnitude of the rotation of the device.
  • the amount of the rotation can accurately be determined based on such movement of the world to the cameras.
  • the magnitude of the translation is related to both the magnitude of the movement of the world to the cameras and to the distance to the objects in the field of view of the cameras. Therefore, to accurately determine amount of translation using cameras alone, some form of information concerning the distance to objects in the camera fields of view must be obtained. However, in some embodiments cameras with rangefinding capability may be used.
  • optical information can be of significant value when correlated against the information from accelerometers or other sensors.
  • optical camera input may be used to inform the device that no significant motion is taking place. This could provide a solution to problems of drift which may be inherent in using acceleration data to determine absolute position information for certain device functions.
  • Rangefinders 30 a , 30 b and 30 c may comprise ultrasound rangefinders, laser rangefinders or any other suitable distance measuring component. Other components may also be used to determine distance information. For example, cameras with rangefinding capability may be used, and multiple cameras may be utilized on the same side of the device to function as a range-finder using stereopsis. Determined distance information allows for accurate and explicit computation of the portion of any apparent translation that is due to translation and the portion that is due to rotation.
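  • The following sketch, under the simplifying assumption that both opposing cameras report apparent image motion in a shared frame, separates the common-mode (translation) and differential (rotation) parts of the flow and scales the translational part by a range measurement, as described qualitatively above; the function names and one-dimensional flow are illustrative.

```python
def split_flow(flow_top, flow_bottom):
    """Return (translational_part, rotational_part) of 1-D apparent motion
    seen by cameras on opposite faces of the device."""
    translation = (flow_top + flow_bottom) / 2.0   # common-mode component
    rotation = (flow_top - flow_bottom) / 2.0      # differential component
    return translation, rotation


def translation_distance(translational_flow, range_to_scene):
    """Scale apparent translation by distance to the viewed objects
    (e.g., from a rangefinder); farther scenes shift less per unit of
    device translation."""
    return translational_flow * range_to_scene


print(split_flow(0.04, 0.04))    # (0.04, 0.0): both agree -> pure translation
print(split_flow(0.04, -0.04))   # (0.0, 0.04): they disagree -> pure rotation
print(translation_distance(0.04, range_to_scene=0.5))  # apparent shift scaled by range
```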
  • motion detector 22 additionally includes gyros 28 a , 28 b and 28 c .
  • Gyros 28 a , 28 b and 28 c are used in combination with the other components of motion detector 22 to provide increased accuracy in detecting movement of device 10 .
  • Processor 32 processes data from accelerometers 24 , cameras 26 , gyros 28 and rangefinders 30 to produce an output indicative of the motion of device 10 .
  • Processor 32 may comprise a microprocessor, controller or any other suitable computing device or resource, such as a video analysis module for receiving a video stream from each camera.
  • the processing described herein with respect to processor 32 of motion detector 22 may be performed by processor 16 of handheld device 10 or any other suitable processor, including processors located remote to the device.
  • motion detector 22 includes three accelerometers, three cameras, three gyros and three rangefinders.
  • Motion detectors in accordance with other embodiments may include fewer or different components than motion detector 22 .
  • some embodiments may include a motion detector with three accelerometers and no cameras, gyros or rangefinders; two or three accelerometers and one or more gyros; two or three accelerometers and one or more cameras; or two or three accelerometers and one or more rangefinders.
  • the location of the motion detection components on the device may vary on different embodiments.
  • some embodiments may include cameras on different surfaces of a device while other embodiments may include two cameras on the same surface (e.g., to add rangefinding functionality).
  • Altering the type, number and location of components of motion detector 22 may affect the ability of motion detector to detect or accurately measure various types of movement.
  • the type and number of components of motion detectors may vary in different embodiments in order to fulfill particular needs. Fewer or less accurate components may be used in particular embodiments when it is desired to sacrifice accuracy to reduce manufacturing cost of a handheld device with motion detection capabilities. For example, some handheld devices may only need to detect that the device has been translated and may not need to detect exact amount of such translation to perform desired functions of the device. Such handheld devices may thus include a motion detector with cameras and without any sort of rangefinder or other component providing distance information. In particular embodiments, components described above, such as cameras and rangefinders, may also be used for other purposes by the device than those described above relating to motion detection functionality.
  • FIG. 3 is a diagram illustrating the use of the motion detector components of handheld device 10 of FIG. 1 .
  • Raw data from motion detection components is processed at processor 32 .
  • Such raw data includes x-axis accelerometer raw data 23 a , y-axis accelerometer raw data 23 b and z-axis accelerometer raw data 23 c from accelerometers 24 a , 24 b and 24 c , respectively; camera raw data 25 a , camera raw data 25 b and camera raw data 25 c from cameras 26 a , 26 b and 26 c , respectively; gyro raw data 27 a , gyro raw data 27 b and gyro raw data 27 c from gyros 28 a , 28 b and 28 c ; and rangefinder raw data 29 a , rangefinder raw data 29 b and rangefinder raw data 29 c from rangefinders 30 a , 30 b and 30 c , respectively.
  • In embodiments in which the handheld device includes more, fewer or different motion detection components, raw data from those components is processed in a similar manner.
  • Processing of the raw data yields motion detector output 34 identifying movement of device 10 .
  • motion detector output 34 comprises translation along x, y and z axes and rotation with respect to the x, y and z axes.
  • the motion detector output is communicated to a processor 16 of the handheld device which identifies the operation, function or task the device should perform (i.e., device behavior 36 ) based on the device motion. The performance of certain operations, functions or tasks based on particular movements is further discussed below.
  • FIG. 4 is an isometric illustration of an example handheld device 31 with motion detection capability, in accordance with particular embodiments.
  • Handheld device 31 includes an x-axis accelerometer 33 , a y-axis accelerometer 35 and a camera 37 oriented towards the z-axis.
  • X-axis 38 , y-axis 39 and z-axis 40 are also illustrated with respect to device 31 for reference.
  • Handheld device 31 may detect movement, including tilt and translation in various directions, using accelerometers 33 and 35 and camera 37 .
  • Handheld device 31 may also include other components, such as components illustrated and described above with respect to handheld device 10 , such as display 12 , input 14 , processor 16 , memory 18 and communications interface 20 .
  • particular embodiments may include handheld devices having various types of motion detection components (including accelerometers, gyros, cameras, rangefinders or other suitable components) in any combination and positioned or oriented in any suitable manner upon the devices.
  • a user interface function may utilize input motion along one axis of motion at a time.
  • a device application may allow a user to scroll through a list displayed on the handheld device by moving the device along a particular axis (e.g., in one direction or in two opposite directions). It may be very difficult for a user to constrain the motion of the device to that particular axis as desired. In other words, some user generated device rotation or movement along another axis may be difficult to avoid.
  • the device may include preferred motion selection including the selection and amplification of a dominant motion and the minimization of movement in other directions or axes.
  • FIG. 5 illustrates an example of selection and amplification of a dominant motion and minimization of movement in another direction as discussed above.
  • actual motion 41 represents movement of a handheld device.
  • Actual motion 41 comprises movement 42 along one axis 44 and movement 46 along another axis 48 perpendicular to axis 44 . Since the amount of movement 42 is greater than the amount of movement 46 , the handheld device may select movement 42 as the dominant motion. The handheld device may then amplify this dominant motion and minimize movement 46 (the other motion) such that actual motion 41 is treated by the device as represented motion 50 .
  • the amount or size of amplification of the dominant motion may vary in various embodiments according to particular factors, such as the particular application being run on the device at the time.
  • amplification of a dominant motion may also be based on magnitude of acceleration, speed of motion, ratio of a motion in one direction (e.g., movement 42 ) to motion in another direction (e.g., movement 46 ), size of underlying desktop being navigated or user preferences.
  • a handheld device may implement preferred motion selection only when certain motion characteristics occur. For example, in some cases the handheld device may select and amplify a dominant motion if motion in one axis is more than two times greater than any other motion. The other, smaller motion may then be minimized.
  • the selection and amplification of the dominant motion and minimization of other motion may further expand a user's ability to take advantage of motion user interfaces and may also allow the handheld device, or applications running on the device, to filter out undesired, user-induced noise. With this capability, the user may be able to, for example, move the device left to pick a list to examine, then scroll that list by moving up and down. Motion along inappropriate axes may be ignored or substantially reduced by the device.
  • the selection and amplification of a dominant motion and minimization of other motion may also be applied to rotational motion of the device. Dominant motion around an axis may be selected and amplified in the same manner as motion along an axis as described above with respect to translational motion. Moreover, rotation around another axis (that is not dominant rotation) may be minimized.
  • FIG. 6 illustrates a preferred motion selection flowchart 60 , in accordance with a particular embodiment of the present invention.
  • raw data corresponding to movement of a handheld device is received.
  • the movement raw data includes x-acceleration data 62 a , y-acceleration data 62 b and z-acceleration data 62 c that is processed at step 64 to yield an output indicating movement of the device.
  • Other embodiments may include other types of movement raw data, such as optical or camera data, gyro data and/or rangefinder data.
  • a dominant axis of motion is selected at step 66 .
  • If the selected dominant axis of motion is the x-axis, then the movement along the x-axis is augmented at step 68 a . If the selected dominant axis of motion is the y-axis, then the movement along the y-axis is augmented at step 68 b . If the selected dominant axis of motion is the z-axis, then the movement along the z-axis is augmented at step 68 c .
  • the amount of augmentation of movement in the dominant axis of motion may vary in different embodiments according to the application being utilized or other characteristics. In some embodiments, user preferences 69 may be utilized to determine type or amount of movement augmentation.
  • the augmented movement is processed to yield device behavior 72 .
  • This processing step may include accessing an application being used to determine the particular device behavior to perform based on the augmented movement. Augmented movement may yield any of a number of types of device behavior according to the application in use, a particular user or otherwise.
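  • One possible reading of the preferred motion selection of FIGS. 5 and 6 is sketched below; the dominance ratio, gain and data layout are assumed values that the description leaves to the implementation.

```python
def prefer_dominant_motion(motion, gain=1.5, ratio=2.0):
    """motion: dict of axis -> signed displacement, e.g. {'x': 4.0, 'y': 1.0, 'z': 0.2}.
    Returns the motion as the device would treat it: the dominant axis
    amplified and the other axes minimized (zeroed here). If no axis
    dominates by `ratio`, the motion is returned unchanged."""
    dominant = max(motion, key=lambda axis: abs(motion[axis]))
    others = [abs(v) for axis, v in motion.items() if axis != dominant]
    if others and abs(motion[dominant]) < ratio * max(others):
        return dict(motion)                      # no clearly dominant motion
    treated = {axis: 0.0 for axis in motion}
    treated[dominant] = motion[dominant] * gain  # amplify the dominant motion
    return treated


print(prefer_dominant_motion({'x': 4.0, 'y': 1.0, 'z': 0.2}))
# {'x': 6.0, 'y': 0.0, 'z': 0.0}
print(prefer_dominant_motion({'x': 2.0, 'y': 1.5, 'z': 0.0}))
# unchanged: x is not at least twice the next-largest motion
```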
  • In particular embodiments, the position of a virtual display, or the information displayed at display 12 of handheld device 10 , may be linked to the position of the device.
  • the position of the handheld device may directly determine the portion of the map displayed at display 12 .
  • If device position information is kept in absolute terms (e.g., as with global positioning satellite (GPS) based systems), the utility for many tasks such as map or menu navigation may be impaired.
  • If a zero point is defined when the device is at a point A, then motion between point A and a point B may be used as input.
  • Particularly useful applications of setting a zero point may include external behaviors such as moving the virtual display or locating applications in the space around a user's body. Setting a zero point also addresses internal behaviors such as instructing the device to ignore the gravitational acceleration at the current orientation to allow the device to act only on additional, and presumably user generated, accelerations.
  • Handheld devices may include application user interfaces that utilize motion input only at certain times. At other times, for example, the motion of the device may not be utilized as input, and it may be useful to disengage or “turn off” motion sensitivity or the motion detection capability of the device. Disengagement of motion sensitivity may comprise, for example, deactivation of motion detector 22 of device 10 or other component, such as a motion response module of the device. Particular embodiments thus allow for the selective engagement and disengagement of the motion sensitivity of the device.
  • a motion response module which modifies display 12 based on motion detected at motion detector 22 , may have a mode of operation in which it awaits a trigger for switching to another mode of operation in which motion sensitivity is enabled. When motion sensitivity is not enabled, any motion of the device may be disregarded.
  • the trigger may also set a zero-point for the device. When the zero-point is set, the motion response module may measure a baseline orientation of the device based on measurement from motion detection components.
  • the baseline orientation may comprise the position of the device (determined from information from motion detector components) when the trigger is received. Future movement of the device will be compared against the baseline orientation to determine the functions to perform or the modifications which should be made at display 12 based on the user's motion of the device.
  • Particular embodiments provide for any number of user-initiated actions to act as a single trigger for zero-point selection and selective engagement/disengagement of the motion sensitivity of the device.
  • Such actions may include, for example, the pressing of a key on input 14 , moving device 10 in a particular way (e.g., movement corresponding to a particular gesture), and tapping display 12 . It should be understood that any user-initiated action may set a zero-point and engage motion sensitivity of the device at the same time.
  • a period of inactivity or minimal activity may also set a zero-point and engage or disengage motion sensitivity.
  • FIG. 7 illustrates a flowchart 80 for the passive setting of a zero-point for a handheld device. Change in acceleration with respect to an x-axis is detected at step 82 a , change in acceleration with respect to a y-axis is detected at step 82 b and change in acceleration with respect to a z-axis is detected at step 82 c . At steps 84 a , 84 b and 84 c , it is determined whether any acceleration change detected is greater than a particular respective threshold.
  • If no detected acceleration change is greater than its respective threshold, the device may be considered at rest, and at step 86 a zero-point will be set.
  • An at rest position may be determined, for example, from stabilization of the raw data of the motion detection components of motion detector 22 .
  • If any detected acceleration change is greater than its respective threshold, the process returns to acceleration change detection at steps 82 .
  • this method of passively setting a zero-point may ensure that when the handheld device is at rest, a zero point will be set.
  • When the device is held still, a zero-point will be set since there will be no detected change in acceleration.
  • the use of thresholds to determine whether an acceleration change is high enough so as not to trigger the setting of a zero-point enables a user to hold the device still to passively set the zero-point. Otherwise, this may be difficult since a device with highly sensitive accelerometers may detect acceleration change as a result of very minor unintended, user-initiated movement. It should be understood that similar methods may be used in connection with motion detectors with components other than accelerometers. Thresholds may also be used in such similar methods to account for small, unintended movements that may otherwise prevent setting of a zero point.
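  • A minimal sketch of the passive zero-point setting of FIG. 7, assuming illustrative per-axis thresholds and a simple sample-to-sample test: if the change in acceleration on every axis stays under its threshold, the device is treated as at rest and the current reading becomes the zero-point.

```python
class ZeroPointTracker:
    def __init__(self, thresholds=(0.05, 0.05, 0.05)):
        self.thresholds = thresholds   # per-axis limits in g units (tuned in practice)
        self.last = None               # previous (ax, ay, az) sample
        self.zero_point = None         # baseline orientation, once set

    def update(self, sample):
        """Feed one (ax, ay, az) sample; returns True if a zero-point was set."""
        if self.last is None:
            self.last = sample
            return False
        deltas = [abs(c - p) for c, p in zip(sample, self.last)]
        at_rest = all(d <= t for d, t in zip(deltas, self.thresholds))
        if at_rest:
            self.zero_point = sample   # step 86: set the zero-point
        self.last = sample
        return at_rest


tracker = ZeroPointTracker()
tracker.update((0.01, 0.00, 0.98))
print(tracker.update((0.02, 0.01, 0.98)))  # small change on every axis -> True
print(tracker.zero_point)                  # (0.02, 0.01, 0.98)
```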
  • Particular embodiments of the present invention include the ability to allow a user to repeatedly selectively engage and disengage the motion sensitivity of the handheld device in order to allow greater movement through a virtual desktop (or information space) using motion input in a limited amount of physical space.
  • This process can be analogized to “scrubbing” with a mouse controlling a cursor, or lifting the mouse off of a surface and replacing the mouse on the surface at a different location to allow greater movement of the cursor. Lifting the mouse breaks the connection between the motion of the mouse and the motion of the cursor.
  • a user may be able to engage and disengage the connection between the motion of a handheld device, such as device 10 , and the operations, functions or actions based on movement of the device.
  • FIG. 8 illustrates an example of the use of scrubbing functionality to navigate across a virtual desktop, or information space, larger than the display of a handheld device.
  • a handheld device is used to navigate through virtual desktop 90 .
  • Virtual desktop 90 is illustrated as a grid map and may represent any suitable information through which a user may desire to navigate.
  • Information of virtual desktop displayed at the handheld device is represented by box 92 .
  • translation of the handheld device is used to navigate through virtual desktop 90 .
  • a user may move the handheld device from right to left to navigate from right to left through information of virtual desktop 90 .
  • handheld devices of particular embodiments may be moved in any suitable manner to implement the scrubbing process.
  • box 92 represents the information of virtual desktop 90 currently displayed at the device. If a user desired to view information represented at box 94 , the user may move the handheld device from left to right. For purposes of this example, imagine that the user moves the device to the right, and the information of virtual desktop 90 contained in box 94 is displayed at the device. Also imagine that the user's arm is now outstretched to the user's right such that the user must walk or otherwise move further right in order to view at the display of the device information of virtual desktop 90 that is to the right of box 94 .
  • the user could selectively disengage the motion sensitivity of the handheld device, move the device back to the left, selectively reengage the motion sensitivity of the device and move the device back to the right to display information to the right of box 94 .
  • the user could display information of virtual desktop 90 contained in box 96 , and this process could be repeated to display information contained in box 98 , which is further to the right of box 96 .
  • the selective disengagement and reengagement of the motion sensitivity of the device in order to allow greater movement within a virtual desktop in a limited amount of physical space may be enabled in any of a variety of ways, such as by a key on an input of the device, moving the device according to a particular gesture or movement (e.g., an arc movement) or tapping the device display. Any other user-initiated action may be used to disengage and reengage motion sensitivity for this purpose. Particular embodiments may allow multiple actions to disengage and reengage motion sensitivity of the device. Moreover, a user action that disengages motion sensitivity of the device may be different from a user action that reengages motion sensitivity. This scrubbing process may be performed in any suitable application, such as map navigation, menu navigation and scrolling through a list.
  • FIG. 9 is a flowchart illustrating the steps of the scrubbing process described above with respect to FIG. 8 , in accordance with a particular embodiment.
  • the flowchart begins at step 100 , where the handheld device is moved to the right to go from displaying information of box 92 of virtual display 90 to information of box 94 .
  • the user may desire to display information further to the right of box 94 but may have run out of physical space to move the device further to the right.
  • the user disengages the motion sensitivity of the device at step 102 . Any suitable user action may perform such disengagement, such as the pressing of a button on the device or moving the device according to a particular gesture.
  • the user moves the device to the left so that the user will have more physical space through which the user may move the device to the right when motion sensitivity is reengaged.
  • the user reengages motion sensitivity of the device. Again, such reengagement may be performed by any suitable user action, and such user action may be different from the user action performed to disengage motion sensitivity in step 102 . Since motion sensitivity has been reengaged, the user moves the device to the right at step 108 in order to change the information being displayed at the device from the information of box 94 to the information of box 96 . At step 110 , it is determined whether further movement of the device to the right is needed. If further movement is needed (e.g., to display the information of virtual display 90 in box 98 ), then the process returns to step 102 where motion sensitivity of the device is again disengaged. If no further movement is needed, then the process ends. As indicated above, this scrubbing process may be utilized in any suitable application of the device that supports motion input, and the device may be moved in any suitable manner in order to implement this functionality.
  • a particular movement of the device may be utilized in the scrubbing process to signal to the device not to change the information presented on the display during such movement.
  • This allows the user to return the device to a position from which the user may move the device to further change the information presented on the display.
  • the device may be at a base reference position from which movement of the device changes the information displayed.
  • Upon a particular predetermined movement (e.g., an arc movement), the base reference position may be reset such that future movement of the device from the base reference position further changes the information displayed.
  • the base reference position may identify a baseline orientation of the device represented by baseline components of the motion data received by the motion detection components of the device.
  • gestures may be received, as determined by movement from the base reference position, to perform particular commands which change the information displayed at the device.
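  • The scrubbing behavior of FIGS. 8 and 9 can be sketched as follows (the class name, units and one-dimensional panning model are assumptions): while motion sensitivity is disengaged, device movement does not pan the view, so the device can be brought back and reengaged to keep panning within limited physical space, much like lifting a mouse.

```python
class ScrubbingViewport:
    def __init__(self, x=0):
        self.x = x            # position of the view over the virtual desktop
        self.engaged = True   # motion sensitivity on/off

    def disengage(self):      # e.g., key press or a predetermined arc gesture
        self.engaged = False

    def engage(self):         # reengage; the current position is the new baseline
        self.engaged = True

    def move_device(self, dx):
        """Device translation dx pans the view only while engaged."""
        if self.engaged:
            self.x += dx
        return self.x


view = ScrubbingViewport()
view.move_device(+2)         # pan right: box 92 -> box 94
view.disengage()
view.move_device(-2)         # bring the device back; the view does not change
view.engage()
print(view.move_device(+2))  # pan right again: box 94 -> box 96 (x == 4)
```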
  • handheld devices in accordance with particular embodiments may utilize multiple types or modes of input to operate the device.
  • Such input modes include motion input modes, such as a translation input mode and a gesture input mode. While multiple input modes may sometimes be used in combination with each other, in some cases the handheld device may be set to recognize a certain mode type at one time. In some situations, the handheld device may be set to function based on multiple types of non-motion input and only one type of motion input (e.g., translation or gesture) at a particular time.
  • a certain trigger may be used to switch between input modes. For example, a user may press a particular key or may move the device in a certain manner (e.g., a particular gesture) to switch input modes. In some cases where an application of the device recognizes and functions based upon multiple types of motion input, a particular key may be pressed or a particular gesture may be formed using the device to switch between a translation motion input mode and a gesture motion input mode.
  • the trigger may also comprise the mere switch from one application to another or the switch from one displayed image to another. In some situations, the trigger may switch between a non-motion input mode and a motion input mode.
  • Any particular user-action may be implemented to act as a trigger to switch between different input modes, such as between different motion input modes.
  • A voice command or physical action upon the device (e.g., a device or screen tap) may also act as a trigger to switch between input modes.
  • a user action that reengages motion sensitivity of a device may also contain other information that might otherwise affect device behavior. For example, if a user makes one motion to reengage translation sensitivity, it may render the device more sensitive to motion than if the user makes a different motion to reengage motion sensitivity.
  • a reengaging motion may comprise a gesture that indicates the user's identity or context, thereby engaging a variety of operational settings, such as user preferences.
  • particular embodiments include the ability to receive motion input to control various functions, tasks and operations of a handheld device and may be used to alter information displayed at the device in the process.
  • motion input may be in the form of gestures, as opposed to mere translation-based input.
  • Gesture input may be used to navigate through a multidimensional menu or grid in some applications.
  • a display of a handheld device may be smaller than the amount of information (e.g., menu options, map information) that can be presented on the display. This may lead to a menu structure that is narrow and deep. In many cases, broad, shallow menu structures may be preferred over a narrow, deep menu structure because a user is not required to remember as much information concerning where functionalities are located.
  • FIG. 10A illustrates an example of menu navigation using gesture input, in accordance with a particular embodiment.
  • a handheld device is used to navigate through virtual desktop 120 .
  • Virtual desktop 120 includes a menu tree with menu categories 122 for selection.
  • Each menu category 122 may include respective sub-categories for selection.
  • menu categories 122 may comprise categories of functions, while sub-categories of each menu selection may include the actual functions under each such category.
  • menu categories may comprise nouns (e.g., “folder,” “document,” “picture”), while sub-categories comprise verbs (e.g., “move,” “paste,” “cut”).
  • menu categories 122 may include “calls,” “phone book,” “messages,” “planner,” “sounds,” “setup” or other items. Each menu category 122 may include functions which may be accessed once a menu category 122 is selected. While two menu levels are illustrated in FIG. 10A , it should be understood that a multidimensional desktop or display of information for motion interface navigation may include any number of selections (e.g., menus) utilizing any number of levels.
  • menu category 122 e has been selected, and sub-categories 124 of menu category 122 e are displayed as available for selection.
  • Boxes 126 and 128 represent information displayed at the handheld device for the user.
  • virtual desktop 120 includes more information, or menus, than can be displayed at the device at one time.
  • a user may move the device according to particular gestures to navigate across or through the virtual desktop. Gestures may also be used to navigate through different menu levels and to make menu selections.
  • a user may move device 10 in the form of a clockwise circle 130 to navigate a predetermined amount to the right across virtual desktop 120 (e.g., moving from information of box 126 to information of box 128 ).
  • a particular menu category 122 may be selected by an away gesture 132 , or a downward gesture (e.g., to select menu category 122 e ), and thus display sub-categories 124 for selection.
  • a user may move device 10 in the form of a counterclockwise circle 134 .
  • navigation may be accomplished through four gestures: a forward gesture, a backward gesture, a left gesture and a right gesture.
  • gestures comprising motion vectors in perpendicular directions may be used for navigation.
  • gestures may be used that are mirror images of other used gestures to execute opposite functions from those accomplished by the other gestures. For example, a motion toward a user might zoom in while an opposite motion, a motion away from the user, may zoom out. Using mirror image or reciprocal gestures mapped to opposite functions may make a motion user interface for a device easier to learn and to use.
  • In some cases, the menu item at the center of the display may be highlighted for selection, while in other cases a particular gesture may indicate which menu selection of a plurality of displayed selections a user desires to select. It should be understood that the menus or other information through which a user may navigate using gestures may be presented at the handheld device in any number of ways. In some embodiments, only one level of information (i.e., one menu level) may be displayed at once, while sub-levels or higher levels are not displayed until they are available for selection.
  • FIG. 10B illustrates example gestures which may be utilized to perform various functions, such as functions enabling a user to navigate through a virtual desktop.
  • the illustrated example gestures include an “up” gesture 133 to navigate in an upward direction through the desktop, a “down” gesture 135 to navigate down, a “left” gesture 136 to navigate left, a “right” gesture 137 to navigate right, an “in” gesture 138 to navigate in a direction towards the user and an “out” gesture 139 to navigate away from the user.
  • these are mere example gestures and commands for particular embodiments, and other embodiments may include different gestures or similar gestures mapped to different commands for navigating through a desktop or performing other functions with a handheld device.
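  • One possible gesture-to-command mapping in the spirit of FIGS. 10A and 10B is sketched below; the dictionary contents, the counterclockwise-circle assignment and the simple cursor state are illustrative assumptions, since the description only requires that recognized gestures map to navigation commands and that the mapping may vary by application.

```python
GESTURE_COMMANDS = {
    "up": "move_up", "down": "move_down",
    "left": "move_left", "right": "move_right",
    "clockwise_circle": "move_right",        # e.g., box 126 -> box 128 in FIG. 10A
    "counterclockwise_circle": "move_left",  # assumed opposite of the clockwise circle
    "away": "select",                        # descend into sub-categories 124
}


def dispatch(gesture, state):
    """Apply the command mapped to a recognized gesture to a simple menu
    cursor state of the form {'row': int, 'col': int, 'level': int}."""
    command = GESTURE_COMMANDS.get(gesture)
    moves = {"move_up": (-1, 0), "move_down": (1, 0),
             "move_left": (0, -1), "move_right": (0, 1)}
    if command in moves:
        dr, dc = moves[command]
        state["row"] += dr
        state["col"] += dc
    elif command == "select":
        state["level"] += 1                  # show the selected category's sub-menu
    return state


state = {"row": 0, "col": 0, "level": 0}
dispatch("clockwise_circle", state)          # navigate right across the menu level
print(dispatch("away", state))               # {'row': 0, 'col': 1, 'level': 1}
```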
  • FIG. 11 illustrates another example of map navigation using motion input, in accordance with a particular embodiment of the present invention.
  • FIG. 11 includes a virtual desktop 140 representing an information grid divided into sixteen portions, each referenced by a respective letter (A, B, C, . . . P). Portions of virtual desktop 140 are identified using reference letters only for purposes of describing particular embodiments, and portions of virtual desktops in accordance with other embodiments may or may not be identified in a device application by a reference character or otherwise.
  • Virtual desktop 140 includes more information than can be displayed at a particular handheld device at one time. Virtual desktop 140 may represent any suitable information through which a user may desire to navigate using a handheld device, such as a street map.
  • a user may desire to navigate across virtual desktop 140 to display different portions of information on the handheld device display and may also desire to zoom in (and out of) virtual desktop 140 (i.e., change the granularity of the information displayed) to more clearly view certain portions of the information of virtual desktop 140 .
  • box 142 represents information currently displayed at handheld device 10 .
  • Box 142 includes portions A, B, E and F of virtual desktop 140 .
  • motion input may comprise translation input (moving the handheld device 10 to the right an applicable amount to change the information displayed) or a gesture input (moving the handheld device 10 according to a particular gesture mapped to this function).
  • one gesture may be mapped to moving the display one portion to the right, while another gesture may be mapped to moving the display two portions to the right.
  • the user may navigate across desktop 140 .
  • the handheld device 10 may also allow the user to zoom in on certain information displayed for a clearer view of such information, for example, through translation input or gesture input.
  • With gesture input, if the information displayed at the device included four of the sixteen portions (e.g., box 142 displaying portions A, B, E and F), then a user may use one of four gestures, each mapped to zoom in on a particular portion, to zoom in on one of the four displayed portions.
  • For example, if the user zooms in on portion B, the device may display information represented by box 144 (portions B 1 , B 2 , B 3 , B 4 , B 5 , B 6 , B 7 , B 8 and B 9 ) collectively forming the information of portion B of virtual desktop 140 in an enlarged view.
  • Thus, the information of portion B may be displayed in a larger and clearer view.
  • the user may zoom out or zoom in again on a particular portion currently displayed using appropriately mapped gestures.
  • If the user then zooms in on portion B 2 , the device may display information of box 146 (portions B 2 a , B 2 b , B 2 c , B 2 d , B 2 e , B 2 f , B 2 g , B 2 h and B 2 i ).
  • the user may also be able to navigate across the virtual desktop when zoomed in on a particular portion. For example, when zoomed in on portion B (viewing information of box 144 ), the user may use translation or gesture input to move across the virtual desktop to view enlarged views of portions other than portion B.
  • the user may make a gesture that moves the information displayed to the right such that the entire display shows only information of portion C of virtual desktop 140 (i.e., zoomed in on portion C showing portions C 1 , C 2 , C 3 , C 4 , C 5 , C 6 , C 7 , C 8 and C 9 ). It should be understood that a user may navigate through the information of virtual desktop 140 (both navigating across and zooming in and out) in any suitable manner using motion input.
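  • The quadrant zooming of FIG. 11 can be sketched as follows, assuming a hypothetical 4x4 grid layout and four zoom gestures, one per displayed portion; zooming in on portion B yields its sub-portions B1 through B9 as described above.

```python
GRID = [["A", "B", "C", "D"],
        ["E", "F", "G", "H"],
        ["I", "J", "K", "L"],
        ["M", "N", "O", "P"]]

ZOOM_GESTURES = {"zoom_top_left": 0, "zoom_top_right": 1,
                 "zoom_bottom_left": 2, "zoom_bottom_right": 3}


def displayed_quadrants(row, col):
    """Return the 2x2 block of portions whose top-left portion is at (row, col)."""
    return [GRID[row][col], GRID[row][col + 1],
            GRID[row + 1][col], GRID[row + 1][col + 1]]


def zoom(gesture, row, col):
    """Map a zoom gesture to one of the four displayed portions; a real device
    would then display that portion's sub-grid (e.g., B1..B9)."""
    portion = displayed_quadrants(row, col)[ZOOM_GESTURES[gesture]]
    return [f"{portion}{i}" for i in range(1, 10)]


print(displayed_quadrants(0, 0))     # ['A', 'B', 'E', 'F'] -> box 142
print(zoom("zoom_top_right", 0, 0))  # ['B1', ..., 'B9']    -> box 144
```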
  • any suitable gestures may be used to both navigate across a virtual desktop (or across a particular level) and to navigate between or through different levels or dimensions of a multidimensional desktop.
  • motion, such as gestures, may be used to navigate within a dimension, while non-motion actions may be used to select or navigate between dimensions.
  • Such non-motion actions may include the pressing of a key of a device input.
  • a combination of motion and non-motion actions may be used for multidimensional virtual desktop or menu navigation in particular embodiments.
  • Particular embodiments may allow gesture-based navigation through any suitable application, such as a multidimensional grid, menu, calendar or other hierarchical application.
  • a calendar application certain gestures may be used to navigate within one level, such as months, while other gestures may be used to navigate between levels, such as between years, months, days, hours and events.
  • different applications implemented in the handheld device that use such gesture navigation may use different gestures.
  • the particular navigation gestures may change according to the particular application in use.
  • translation-based interfaces may be used to navigate through multidimensional information of a virtual desktop, as opposed to merely using gesture-based movements. For example, movement along the x- and y-axes may be used to travel within a level of a hierarchy, while movement along a z-axis may be used to travel between levels of the hierarchy.
  • Another example might involve a telephone directory with institutions, letters of the alphabet, names, contact details (e.g., office, cellular and home phone numbers, email) and actions to initiate contact, all at different levels of a hierarchy.
  • the hierarchy may contain information (nouns) and actions (verbs).
  • a z-axis could be used to confirm the actions and help prevent inadvertent execution of actions.
  • the number of levels traversed may depend on the magnitude of the motion, particularly in translation-based navigation. Moving the device a small amount may navigate one level at a time, while moving the device a large amount may navigate multiple levels at a time. The greater the magnitude of motion, the more levels may be navigated at once. As applied to gesture-based motion inputs, different gestures may be used to navigate different numbers of levels in the hierarchy at once. These gestures may be different magnitudes of the same motion or entirely different motions.
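  • A minimal sketch of magnitude-dependent level traversal (the step size and level cap are assumed values, not from the description):

```python
# Sketch (assumed thresholds): translate motion magnitude along the z-axis into
# a number of hierarchy levels to traverse, as described above.

def levels_to_traverse(z_displacement_cm, step_cm=3.0, max_levels=4):
    """Small motions move one level; larger motions move proportionally more."""
    levels = int(abs(z_displacement_cm) // step_cm) + 1
    levels = min(levels, max_levels)
    return levels if z_displacement_cm >= 0 else -levels

print(levels_to_traverse(2.0))    # -> 1  (small push: one level deeper)
print(levels_to_traverse(10.0))   # -> 4  (large push: several levels at once)
print(levels_to_traverse(-5.0))   # -> -2 (pull back: up two levels)
```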
  • handheld devices in particular embodiments allow a user to navigate across a virtual desktop using motion input.
  • a user may utilize a cursor to navigate across information displayed at the handheld device. For example, certain information may be displayed at the device, and the user may utilize motion input to move the cursor around the device display and to select particular items displayed to perform certain functions.
  • motion input may be utilized to move a cursor, while a non-motion action (such as pressing a button) may be used to select an item currently indicated by the cursor. It should be understood that both gesture and translation motion input may be utilized in various embodiments of cursor navigation.
  • FIG. 12A illustrates an example utilizing this form of motion input cursor navigation.
  • Display 147 represents a display of a handheld device. In order to describe this cursor navigation example, the display has been divided into a grid to show the information being displayed. The grid includes portions A-P. Display 147 includes a cursor 148 between portions C, D, G and H. As stated above, in this embodiment the information displayed remains fixed with respect to the device when the device is moved, while the cursor remains fixed in space. However, the cursor's position with respect to the information displayed changes according to motion input. When the device is translated to the right, according to right movement 149, the cursor is translated according to a motion opposite to the translation of the device.
  • Display 150 represents a possible display after the device is moved according to right movement 149 , with cursor 148 now between portions A, B, E and F. It should be understood that since this example involves translation-based input, the magnitude of the movement of the device (e.g., to the right in this example) may directly affect the magnitude of the movement of the cursor with respect to the information displayed.
  • Display 152 represents another display after the handheld device has been moved according to up movement 151 , with cursor 148 now between portions I, J, M and N. As evident, the cursor will have moved down with respect to information displayed since it remains fixed in space.
  • Display 154 represents another display after the handheld device has been moved according to left movement 153 , with cursor 148 now between portions K, L, O and P.
  • the cursor will have moved to the right with respect to information displayed.
  • motion of the device changes the position of the cursor on the information.
  • the handheld device may be moved, instead of a stylus, to point to certain portions of the information displayed.
  • a user may utilize any form of input (e.g., gesture, key press, etc.) to select or otherwise perform a function according to the information currently indicated at the cursor. For example, with respect to display 152 , a user may use a particular gesture or press a button to zoom in, select or perform some other function based on information between portions I, J, M and N currently indicated by cursor 148 .
  • particular embodiments may translate the cursor in a motion opposite from the motion of the device to move the cursor across the information displayed.
  • input motion of the device may be divided into motion along each of three axes, two of which are parallel to the device display (e.g., an x-axis and a y-axis). While motion of the device in the x-axis and y-axis plane changes information displayed at the device based on such motion, the cursor may be moved at the same time according to a translation vector that is opposite to the sum of movement in the x-axis direction and the y-axis direction to substantially maintain the position of the cursor in space.
  • when the cursor would otherwise move past a display edge and off of the display according to the translation vector, the vector may be reduced in order to keep the cursor within the display. Such reduction may include reducing one or more components of the translation vector to maintain the cursor within a certain distance from the display edge.
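  • A minimal sketch of this counter-translation with edge clamping (the display size, margin and function name are assumptions):

```python
# Sketch (assumed display size and margin): keep the cursor fixed in space by
# translating it opposite to the device's x/y motion, reducing the vector so the
# cursor never leaves the display, as described for FIG. 12A.

def move_cursor(cursor, device_motion, display=(320, 240), margin=4):
    cx, cy = cursor
    dx, dy = device_motion                  # device translation in screen units
    # Translation vector opposite to the sum of x and y motion.
    nx, ny = cx - dx, cy - dy
    # Reduce the vector components to keep the cursor within the display edges.
    nx = max(margin, min(display[0] - margin, nx))
    ny = max(margin, min(display[1] - margin, ny))
    return (nx, ny)

print(move_cursor((160, 120), (50, 0)))    # device moves right -> cursor moves left
print(move_cursor((10, 120), (50, 0)))     # vector reduced at the left display edge
```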
  • FIG. 12B illustrates another form of motion input cursor navigation, in accordance with a particular embodiment.
  • the cursor remains in a fixed position with respect to the display while motion input is used to navigate across a virtual desktop larger than the device display.
  • FIG. 12B includes virtual desktop 158 which comprises information through which a user may navigate using motion input at a handheld device, such as a street map.
  • Virtual desktop 158 includes more information than can be displayed at a particular handheld device at one time.
  • virtual desktop 158 has been divided into a grid to differentiate between information presented at the desktop. The grid includes 6 rows (A-F) and 7 columns (1-7).
  • Portions of the grid may be identified herein for this example using their row letter and column number (e.g., portion B7 or D2). It should be understood that the division of virtual desktop 158 into portions referenced by row and column number is done only for illustrating and describing the embodiments described above, and the virtual desktops of particular embodiments may not include such a division or other type of reference information.
  • Box 160 represents information of virtual desktop 158 currently displayed at a handheld device.
  • Display 161 represents the display of a handheld device showing information of box 160 .
  • Display 161 also includes a cursor 159 positioned at the intersection of portions B2, B3, C2 and C3.
  • the cursor remains in a fixed position with respect to the display.
  • the cursor's position changes with respect to the information of the virtual desktop displayed at the handheld device.
  • a user may utilize motion input to change the information displayed at the device to that represented at box 162 .
  • the information displayed at the device changes (to portions B5, B6, C5 and C6), and cursor 159 remains fixed at the device display (e.g., in this case at the center of the display) such that its position changes with respect to the information of virtual desktop 158, as illustrated at display 163. If the user desired to use motion input to change the information displayed at the device to that represented at box 164, the information displayed at the device changes to portions E3, E4, F3 and F4, as illustrated at display 165. Cursor 159 is positioned between these illustrated portions in the center of the display since its position relative to the display remains fixed in this embodiment.
  • a user may utilize any form of input (e.g., gesture, key press, etc.) to select or otherwise perform a function according to the information currently indicated at the cursor.
  • a user may use a particular gesture or press a button to zoom in, select or perform some other function based on information between portions B5, B6, C5 and C6 currently indicated by cursor 159.
  • any particular input, such as a gesture or key press, may be used to switch cursor navigation modes at the device.
  • a user may switch between the translation-controlled cursor mode of FIG. 12A and the fixed cursor mode of FIG. 12B .
  • particular embodiments allow a user to move handheld device 10 according to a gesture to perform particular functions or operations.
  • a user may not move the device according to the particular gesture intended, and the device may, as a result, not be able to recognize the movement as the intended gesture.
  • handheld devices in some embodiments provide feedback to notify the user that the movement was in fact recognized as a gesture.
  • This feedback may comprise an audio format (such as speech, a beep, a tone or music), a visual format (such as an indication on the device display), a vibratory format or any other suitable feedback format.
  • Audio feedback may be provided through a user interface speaker or headphone jack of device 10, and vibratory feedback may be provided through a user interface vibration generation module of device 10.
  • Audio, visual and vibratory feedback may be varied to provide capability for multiple feedback indicators.
  • vibratory feedback may be varied in duration, frequency and amplitude, singly or in different combinations over time. The richness and complexity of the feedback may be expanded by using feedback of different types in combination with one another, such as by using vibratory feedback in combination with audio feedback.
  • the feedback may be gesture-specific, such that one or more recognized gestures have their own respective feedback. For example, when a certain gesture is recognized the device may beep in a particular tone or a particular number of times, while when one or more other gestures are recognized the beep tones or number of beeps may change.
  • the use of audio feedback may be especially valuable for gestures that do not have an immediately visible on-screen manifestation or function, such as calling a certain number with a cellular phone.
  • Different types of feedback may also be context or application specific in some embodiments. Different contexts might include device state, such as what application is in focus or use, battery level, and available memory, as well as states defined by the user, such as quiet or silent mode.
  • a handheld device may utilize vibratory feedback in response to gesture input while in silent mode when audio feedback would otherwise be used. This feedback process may also be utilized by a handheld motion input device of a computer or other component.
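  • A minimal sketch of gesture- and context-specific feedback selection (the gesture names, feedback details and silent-mode rule are illustrative assumptions):

```python
# Sketch (assumed gesture names and states): choose gesture-specific feedback,
# substituting vibratory feedback for audio when the device is in silent mode.

FEEDBACK_BY_GESTURE = {
    "O": ("audio", "two short beeps"),
    "X": ("audio", "single low tone"),
}

def feedback_for(gesture, silent_mode=False):
    kind, detail = FEEDBACK_BY_GESTURE.get(gesture, ("audio", "failure sound"))
    if silent_mode and kind == "audio":
        return ("vibratory", "short pulse")   # silent mode: vibrate instead
    return (kind, detail)

print(feedback_for("O"))                      # ('audio', 'two short beeps')
print(feedback_for("O", silent_mode=True))    # ('vibratory', 'short pulse')
```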
  • handheld devices in particular embodiments may also provide feedback for the user in the event that a particular user movement was not recognized as a gesture when the device is in a gesture input mode. For example, if a motion appears intended to be a gesture, but cannot be assigned to a particular gesture that is known to the device, it may play a sound indicating failure. This notifies the user that the user must make another attempt at moving the device according to the intended gesture for the device to perform the operation or function desired.
  • the feedback notifying the user that a movement was not recognized may also comprise an audio, visual, vibratory or other suitable format and may be a different feedback than that communicated when a particular movement is recognized by the device as a particular gesture.
  • handheld device 10 may look at certain characteristics of the movement that imply that the motion was intended to be a gesture. Such characteristics may include, for example, the amplitude of the motion, the time course of the above-threshold motion and the number and spacing of accelerations. If a particular gesture is unrecognized by the device, a system of gestural feedback may be used to determine the gesture intended. For example, audio feedback may indicate possibilities determined by the handheld device, and the user may utilize gestures to navigate an auditory menu to select the intended gesture.
  • a system of audio or vibratory feedback may be used such that a user could operate handheld device 10 without having to resort to viewing display 12 .
  • handheld devices in some embodiments may provide audio, visual or vibratory feedback to a user navigating a menu or other information of a virtual desktop.
  • this device feedback combined with the motion input of a user could act as a type of “conversation” between the user and the device.
  • multiple types and complexities of feedback may be utilized. The feedback process could be particularly advantageous in environments where it may be inconvenient, unsafe or impractical to look at the device display (e.g., while driving or while in a dark environment).
  • a feedback indicator, such as audio, visual or vibratory feedback, may be given when a user reaches a limit or edge of a virtual desktop using translation input.
  • FIG. 13 is a flowchart 170 illustrating a process for utilizing feedback in response to motion input, in accordance with a particular embodiment.
  • raw motion data is received at handheld device 10 .
  • the raw motion data may be received by any combination of accelerometers, gyros, cameras, rangefinders or any other suitable motion detection components.
  • the raw motion data is processed to produce a motion detector output indicative of the motion of the device. Such processing may include various filtering techniques and fusion of data from multiple detection components.
  • the device state may be checked, because in some embodiments the feedback for a particular motion depends on the state of the device when the motion is received.
  • example device states may include the particular application in focus or use, battery level, available memory and a particular mode, such as a silent mode.
  • the motion detector output is analyzed with respect to the device state.
  • feedback may be in an audio, visual or vibratory format.
  • the feedback may merely be an indication that the device recognizes the gesture given the state of the device.
  • the feedback may be a further query for additional input, for example if the user was utilizing a particular application of the device that provided for a series of inputs to perform one or more functions.
  • the device behaves according to the motion input and device state, and the process may return to step 172 where additional raw motion data is received.
  • If it is determined at step 180 that the motion indicated by the motion detector output is not meaningful or recognizable given the particular device state, then the process proceeds to step 186.
  • At step 186, it is determined whether the motion is above a particular threshold. This determination may be made to determine whether particular motion input was, for example, intended to be a gesture. As indicated above, threshold characteristics for this determination may include the amplitude of the motion input, the time course of the motion input and the number and spacing of accelerations of the motion. If it is determined that the motion input was not above a particular threshold, then the process may return to step 172 where additional raw motion data is received.
  • the feedback may include audio, visual and/or vibratory feedback and may indicate that the gesture was not recognizable or meaningful.
  • the feedback may also provide a query regarding the intended gesture or may otherwise provide the user with a number of potentially intended gestures from which the user may select the particular gesture intended by the motion.
  • particular embodiments may not include some of the steps described (e.g., some embodiments may not include the threshold determination of step 186 ), while other embodiments may include additional steps or the same steps in a different order.
  • particular embodiments may utilize motion input feedback (e.g., including feedback “conversations”) in any of a number of applications and ways and the type and complexity of feedback systems may vary greatly in different embodiments.
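  • A minimal sketch of the overall feedback flow of flowchart 170 (the data structures, threshold value and state names are assumptions):

```python
# Sketch (assumed data structures) of the feedback flow of flowchart 170:
# recognized motion yields behavior plus feedback; unrecognized motion above an
# intent threshold yields "not recognized" feedback; otherwise it is ignored.

RECOGNIZED = {("gesture_O", "phone"): "dial stored number",
              ("gesture_X", "phone"): "end call"}

def handle_motion(motion_id, amplitude, device_state, threshold=1.0):
    if (motion_id, device_state) in RECOGNIZED:
        action = RECOGNIZED[(motion_id, device_state)]
        return {"behavior": action, "feedback": "gesture recognized"}
    if amplitude >= threshold:                   # looks like an intended gesture
        return {"behavior": None, "feedback": "gesture not recognized"}
    return {"behavior": None, "feedback": None}  # below threshold: ignore

print(handle_motion("gesture_O", 1.5, "phone"))
print(handle_motion("gesture_Z", 1.5, "phone"))
print(handle_motion("jitter", 0.2, "phone"))
```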
  • handheld devices may receive gesture motion input to control any number of functions of any number of applications running at the device.
  • Some applications that utilize gesture input may comprise mobile commerce (mCommerce) applications in which a mobile device, such as handheld device 10, is used to conduct various transactions, such as commercial or consumer purchases.
  • mCommerce applications utilize some form of authentication to authenticate a user, such as personal identification numbers (PINs), credit card information and/or possession of the mobile device.
  • Another form of authentication is a user's written signature, which does not suffer from such leaking problems (as PINs and credit card information can), since forgery is typically difficult to accomplish and may be easy to detect.
  • Particular embodiments may utilize motion input to receive a user's signature as a form of authentication in mCommerce or other transactions through the handheld device.
  • a written signature may be considered a two dimensional record of a gesture.
  • a user's signature When utilizing a handheld device with motion input, a user's signature may be in three dimensions and may thus comprise a “spatial signature.” Moreover, when combined with other forms of input received at the device, a user's signature can take on any number of dimensions (e.g., four, five or even more dimensions). For example, a three-dimensional gesture “written” in space using the device and detected at motion detector 22 may be combined with key-presses or other inputs to increase the number of dimensions of the signature.
  • spatial signatures can be tracked, recorded, and analyzed by motion detectors 22 of handheld devices. They can be recorded with varying degrees of precision with varying numbers of motion detector components to serve as an effective form of authentication.
  • a user's spatial signature may comprise a three-dimensional form based on the user's traditional two-dimensional written signature or may comprise any other suitable gesture which the user records at the handheld device as his or her signature.
  • the process for recognizing a spatial signature may involve pattern recognition and learning algorithms.
  • the process may analyze relative timings of key accelerations associated with the signature. These may correspond to starts and stops of motions, curves in motions and other motion characteristics.
  • some hash of a data set of points of a signature motion may be stored, and subsequent signatures may be compared against the hash for recognition. This may further verify whether the signature was genuine by determining whether it was unique.
  • a signature may be detected (e.g., by a signature detection module of device 10 ) by comparing a particular movement of the device with respect to an initial or reference position. Such comparison may be made by comparing a sequence of accelerations of the movement with a predetermined sequence of accelerations of a stored spatial signature. This determination may be made regardless of the scale of the user's input motion signature.
  • the device can detect whether motion of the device matches the signature by determining whether positions of the device in motion relative to an initial position match the spatial signature.
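  • A minimal sketch of scale-independent signature matching by comparing acceleration sequences (the normalization scheme and tolerance value are assumptions):

```python
# Sketch (assumed tolerance and normalization): compare a movement's sequence of
# accelerations against a stored spatial signature, independent of the scale of
# the user's motion.

import math

def normalize(seq):
    """Scale an acceleration sequence to unit magnitude so overall size is ignored."""
    mag = math.sqrt(sum(x * x + y * y + z * z for x, y, z in seq)) or 1.0
    return [(x / mag, y / mag, z / mag) for x, y, z in seq]

def matches_signature(movement, stored, tolerance=0.15):
    if len(movement) != len(stored):
        return False
    a, b = normalize(movement), normalize(stored)
    error = sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)
    return error <= tolerance

stored_sig = [(0.0, 1.0, 0.0), (1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]
attempt    = [(0.0, 2.1, 0.0), (2.0, 0.1, 0.0), (0.0, -1.9, 0.0)]  # same shape, larger scale
print(matches_signature(attempt, stored_sig))   # True (within tolerance)
```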
  • FIG. 14 illustrates an example system 200 utilizing spatial signatures as authentication for mCommerce transactions.
  • System 200 includes handheld device 10 , mCommerce application 202 , authenticator 204 and communication network 206 .
  • mCommerce application 202 may comprise any suitable application for transacting business with a handheld device of a user. Such transactions may include consumer purchases, such as products or services of a business or other user from a website, online bill paying, account management or any other commercial transaction.
  • Authenticator 204 authenticates, or verifies, a spatial signature input by the user at handheld device 10 to complete an mCommerce transaction.
  • Authenticator 204 may store one or more spatial signatures of one or more users for authentication in an mCommerce transaction.
  • authenticator may be located within handheld device 10 , within mCommerce application 202 or at any other suitable location.
  • Communication network 206 is capable of transmitting information or data between components of system 200 and may include one or more wide area networks (WANs), public switched telephone networks (PSTNs), local area networks (LANs), the Internet and/or global distributed networks such as intranets, extranets or other form of wireless or wireline communication networks.
  • Communication network 206 may include any suitable combination of routers, hubs, switches, gateways or other hardware, software or embedded logic implementing any number of communication protocols that allow for the exchange of information or data in system 200 .
  • the user may utilize motion input to communicate an authentication signature, for example by moving the device according to the user's three-dimensional signature.
  • a user might use their cellular phone at a point-of-purchase (e.g., a store) instead of a credit card.
  • the user could simply move device 10 according to the user's spatial signature.
  • the user's signature may include more than three dimensions in some embodiments.
  • the signature may have been previously recorded by the user using handheld device 10 or another mobile device, and the recorded signature may be stored at handheld device 10 , mCommerce application 202 , authenticator 204 or other suitable location such as a signature storage database for signatures of multiple mCommerce users.
  • the motion of handheld device 10 may be processed at the device, and motion output indicative of the motion may be transmitted to mCommerce application 202 .
  • mCommerce application 202 may communicate the motion output to authenticator 204 for verification that the motion input received at device 10 was in fact the signature of the user attempting to transact mCommerce. If authenticator 204 verifies the user's signature, then mCommerce application may complete the transaction with the user.
  • authenticator 204 may be located within handheld device 10 or at mCommerce application 202 in particular embodiments and may access signatures for verification stored at device 10 , mCommerce application 202 or any other suitable location.
  • Authentication may also be used by handheld devices in a non-mCommerce application, for example when electronic security is desired to perform functions such as sending private or secure data using the device.
  • a user desiring to transmit data or other information using handheld device 10 may use their spatial signature in the encryption process.
  • Spatial signatures may be used in any of a variety of ways to secure data for communication through a network and may be utilized in connection with public/private key encryption systems.
  • handheld device 10 may authenticate a user's signature received through motion input and then use its own private key to encrypt data for transmission.
  • data may be communicated to handheld device 10 such that an intended recipient must input their spatial signature to receive the decrypted data.
  • data may be communicated to a computer wirelessly connected to handheld device 10 , and the intended recipient must use handheld device 10 as a way to communicate the user's signature to the computer for data decryption.
  • a user's spatial signature itself may represent an encryption key such that motion of the device generates the encryption key instead of the signature motion merely being used for authentication.
  • a device may recognize a combination of accelerations as a signature by converting the signature into the equivalent of a private key. The handheld device may then use the private key as part of an authentication process for a transaction.
  • a spatial signature may be used to manage physical access to a building or event.
  • the signature input by a user at a device may be checked against a list of people allowed to enter as IDs are checked at “will call” for an event.
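  • A minimal, illustrative-only sketch (not production cryptography; the quantization scheme is an assumption) of how a spatial signature's acceleration sequence might generate a repeatable key:

```python
# Sketch (illustrative only, not production cryptography): derive a repeatable key
# from a spatial signature by quantizing its normalized acceleration sequence and
# hashing the result, so the motion itself generates the key.

import hashlib

def signature_key(accel_seq, buckets=8):
    # Quantize each component coarsely so small execution differences map to the
    # same bucket, then hash the quantized sequence.
    quantized = [tuple(round(c * buckets) for c in sample) for sample in accel_seq]
    data = repr(quantized).encode("utf-8")
    return hashlib.sha256(data).hexdigest()

sig = [(0.0, 0.6, 0.0), (0.58, 0.0, 0.0), (0.0, -0.55, 0.0)]
print(signature_key(sig))   # same quantized motion -> same key
```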
  • a user may utilize motion input for handheld device 10 to control other devices, such as audio/video equipment, home appliances and devices, computing devices or any other device capable of being controlled by a handheld device.
  • Devices may be controlled by handheld device 10 through communications interface 20 of device 10 utilizing any of a number of wireless or wireline protocols, including cellular, Bluetooth and 802.11 protocols.
  • device 10 may receive motion input to control, through wireless or wireline communication, other devices through a network.
  • devices controlled through motion input of device 10 may be at any location with respect to device 10 , such as in the same room or across a country.
  • control of the other device may be implemented through any number of intermediate devices (e.g., through a network).
  • if handheld device 10 were a Bluetooth-enabled cellular phone, particular gestures or other motion of the cellular phone may wirelessly communicate commands to control another device, such as a laptop across a room, to drive a PowerPoint presentation.
  • Other devices which may be controlled through motion input of handheld device 10 may include televisions, radios, stereo equipment, satellite receivers, cable boxes, DVD players, digital video recorders, lights, air conditioners, heaters, thermostats, security systems, kitchen appliances (e.g., ovens, refrigerators, freezers, microwaves, coffee makers, bread makers, toasters), PDAs, desktop and laptop PCs, computer peripheral equipment, projectors, radio controlled cars, boats and planes and any other device.
  • a commuter may shake their cellular phone in a certain manner to tell their heater at home to turn on before the commuter arrives at home.
  • a handheld device may receive and process raw motion data to determine commands or intended functions for communication to other devices.
  • a motion detector of a handheld device may output raw data received from its motion detection components for communication to one or more devices controlled by device 10 through the motion of device 10 .
  • different devices controlled by device 10 may treat the same raw motion data of device 10 differently. For example, a particular gesture of device 10 may perform different functions of different devices controlled by device 10 .
  • FIG. 15 illustrates an example system 220 in which handheld device 10 controls multiple other devices through motion input of device 10 .
  • System 220 includes handheld device 10 , laptop 222 and remote device 224 connected, through wireless or wireline links, to handheld device 10 through communication network 226 .
  • Handheld device 10 receives raw motion data of a particular motion of the device through motion detection components, such as accelerometers, cameras, rangefinders and/or gyros. The raw motion data is processed at the handheld device.
  • Particular databases, such as gesture and gesture mapping databases, may be accessed to determine a matching gesture and intended function based on motion tracked by a control module of the device.
  • the intended function may be for another device to be controlled by handheld device 10 , such as laptop 222 or remote device 224 .
  • the motion input is the interface for the underlying operational signal communicated from device 10 to the controlled device.
  • the raw motion data or other data merely indicating a particular motion input for device 10 may be directly sent to laptop 222 and/or remote device 224 without determining a function at device 10 .
  • laptop 222 and/or remote device 224 may themselves process the raw motion data received from handheld device 10 to determine one or more intended functions or operations they should perform based on the raw motion data.
  • a user of device 10 may indicate to device 10 through motion or otherwise the other devices that handheld device 10 should communicate raw motion data or intended functions of such other devices, as applicable. While two devices controlled by handheld device 10 are illustrated, it should be understood that particular embodiments may include any number of devices of varying types to be controlled by handheld device 10 through motion input as discussed above.
  • particular embodiments include the ability to control other devices, such as other local or remote devices, through motion input of handheld device 10 .
  • a user of handheld device 10 selects the other device that a particular motion input of device 10 is intended to control.
  • a user may use input 14 of handheld device 10 (e.g., by pressing a button or moving a trackwheel) to select a local or remote device to control before moving device 10 according to a particular motion mapped to a function or operation desired for the other device.
  • a user may move handheld device 10 according to a particular gesture in order to select the other device (e.g., other local or remote device) to be controlled at the time through motion input of device 10 .
  • particular embodiments provide gesture motion selection of other devices to be controlled by handheld device 10 .
  • Handheld device 10 may include a device selection module operable to detect a device selection gesture which indicates that a user desires to control a particular device.
  • Each controllable device may include its own gesture command maps which correlate gestures to be input using device 10 and commands of the controllable device.
  • a control module of the handheld device may select a particular command map corresponding to the controllable device selected for control.
  • device 10 may include a device locator operable to detect, for each of a plurality of remote devices, a direction from the handheld device to each remote device. In this case, the user may move handheld device 10 in the direction of a particular remote device the user desires to control in order to select that remote device for control.
  • While motion input for device 10 may be used for such control of the other devices, other types of input (e.g., utilizing input 14 ) may also be used to control other local or remote devices selected for control by gesture input.
  • different gestures may be each mapped to control a different device.
  • device 10 may display possible other devices for control and particular gesture(s) to utilize to indicate a user's selection as to which other device the user desires to presently control through device 10 .
  • Handheld devices according to the present invention may utilize any particular manner of gesture selection of one or more local or remote devices to be controlled by the handheld devices.
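  • A minimal sketch of gesture-based device selection with per-device command maps (the gesture names, device names and commands are assumptions):

```python
# Sketch (assumed gestures and commands): a device-selection gesture chooses which
# controllable device's command map subsequent gesture inputs are looked up in.

SELECTION_GESTURES = {"circle": "laptop", "triangle": "thermostat"}

COMMAND_MAPS = {
    "laptop":     {"flick_right": "next slide", "flick_left": "previous slide"},
    "thermostat": {"flick_up": "raise temperature", "flick_down": "lower temperature"},
}

class RemoteController:
    def __init__(self):
        self.selected = None

    def handle_gesture(self, gesture):
        if gesture in SELECTION_GESTURES:           # device selection gesture
            self.selected = SELECTION_GESTURES[gesture]
            return f"now controlling {self.selected}"
        if self.selected:                           # command for selected device
            command = COMMAND_MAPS[self.selected].get(gesture, "unmapped gesture")
            return f"{self.selected}: {command}"
        return "no device selected"

rc = RemoteController()
print(rc.handle_gesture("circle"))        # now controlling laptop
print(rc.handle_gesture("flick_right"))   # laptop: next slide
```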
  • particular embodiments include handheld devices 10 capable of detecting motion of the device through a motion detector 22 to modify the behavior of the device in some way according to the motion detected.
  • Handheld devices 10 in some embodiments are capable of modeling their particular environment and subsequently modifying their behavior based on such environment.
  • One distinction between modeling an environment of a handheld device and detecting a particular motion of the device is that in the former case there may be reasoning involved and in the latter case there may be no such reasoning.
  • if a handheld device changes its behavior when moved according to a particular gesture, that may be considered sensing or detecting a particular motion and reacting based on the motion detected.
  • if the handheld device determines that it is sitting face down on a table and reacts accordingly, that may be considered modeling its environment.
  • if a handheld device moves to the left and changes its behavior based on such movement, that may be considered detecting motion and reacting. If the handheld device finds itself in free fall and powers-down so as to survive an impending collision with the ground, that may be considered modeling its environment.
  • environmental modeling may not require an immediate response to a user input, while detecting an event, such as a particular motion, generally does require such an immediate response.
  • Modeling an environment may thus involve sensing or detecting a pattern of motion (or lack thereof), matching it to a predefined set of environmental conditions and modifying the behavior of the device based on the modeled environment.
  • the behavior implemented based on the environment modeled may also change based on a particular application in use or in focus.
  • the device may change its sensitivity to particular motions based on the environment modeled.
  • a handheld device may recognize, through accelerometers or other motion detection components, that it is at rest on an approximately horizontal surface. Such recognition may result from a determination that the device is not moving, or still, with a static 1 g of acceleration orthogonal to a surface.
  • the device may be able to differentiate resting on a table from resting in a user's hand, for example, because a user's hand typically will not be able to hold the device perfectly still.
  • the device may, as a result, behave in a certain manner according to the recognition that it is at rest on an approximately horizontal surface. For example, if handheld device 10 recognized that it was lying at rest on a table, it may power off after lying in such position for a certain amount of time.
  • a cellular phone in a vibrate mode may vibrate more gently if it recognizes it is on a table upon receipt of a call or upon any other event that may trigger vibration of the phone.
  • the device may recognize its orientation while lying on a table such that it may behave in one manner when lying in a “face down” position (e.g., it may power off) while it may behave in a different manner when lying in a non-face down position.
  • if handheld device 10 comprised a cellular phone, it may enter a speaker mode when it is on a call and recognizes that it is placed by a user in a “face up” position on a table while on the call.
  • if the cellular phone is on a call and is placed face down on the table, it may enter a mute mode.
  • handheld device 10 may recognize through a brief period of approximately 0 g that it is in free-fall and then may behave to reduce damage due to impending impact with the ground or another surface. Such behavior may include, for example, powering down chips and/or hard drives, retracting lenses, applying covers or any other device behavior.
  • non-hand-held devices or devices that do not otherwise detect motion for input may also be able to model their environment and to behave based on the environment modeled.
  • acceleration patterns may be detected to recognize that a handheld device 10 is in a moving environment (e.g., being held by a user in a car or on a train), and the device may adjust various sensitivities, thresholds and/or other characteristics to enable better performance of the device in that environment.
  • handheld device 10 may comprise a digital camera. Through its motion detection components, the camera may determine whether it is on a tripod or is being held by a user when a picture is taken. The camera may set a shutter speed for the picture based on that determination (e.g., slow shutter speed if on a tripod or fast shutter speed if being held by a user).
  • if handheld device 10 comprised a device that utilized a cradle for syncing up with another device, such as a PC, device 10 may recognize that it is in the cradle based on its stillness (or supported state) and its particular orientation. The device may then operate or function according to its state of being in the cradle (e.g., it may then sync up with its associated PC).
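  • A minimal sketch of classifying such environments from recent accelerometer samples (the thresholds and category names are assumptions):

```python
# Sketch (assumed thresholds): classify a coarse environment from recent
# accelerometer samples, e.g. free fall (~0 g), at rest on a surface
# (still, ~1 g along the z-axis), or held/in a moving vehicle (noisy).

import statistics

def classify_environment(samples):
    """samples: list of (x, y, z) accelerations in g."""
    mags = [(x * x + y * y + z * z) ** 0.5 for x, y, z in samples]
    mean_mag = statistics.mean(mags)
    jitter = statistics.pstdev(mags)
    mean_z = statistics.mean(z for _, _, z in samples)

    if mean_mag < 0.2:
        return "free fall"
    if jitter < 0.02 and abs(mean_z) > 0.95:
        return "at rest face down" if mean_z < 0 else "at rest face up"
    return "held or moving"

table = [(0.0, 0.0, -1.0)] * 10                      # still, face down
print(classify_environment(table))                   # at rest face down
print(classify_environment([(0.0, 0.0, 0.05)] * 10)) # free fall
```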
  • FIG. 16 is a flowchart 230 illustrating an environmental modeling process, in accordance with a particular embodiment.
  • raw motion data is received at handheld device 10 .
  • the raw motion data may be received by any combination of accelerometers, gyros, cameras, rangefinders or any other suitable motion detection components.
  • the raw motion data is processed to produce a motion detector output from which motion and orientation of the device is determined at step 236 .
  • Boxes 237 represent example motions and orientations of the device, such as rotating around the z-axis in box 237a, translating along the x-axis in box 237b, oriented at particular angles in box 237c and still in box 237n. These are merely example motions and orientations of the device, and any number of motions and orientations determined at step 236 may be utilized.
  • the determined orientations may comprise an orientation of the device with respect to gravity.
  • handheld device 10 determines its environment based on the motion and orientation determined at step 236 .
  • Boxes 239 represent example environments of the device, such as face down on a table in box 239 a , falling in box 239 b , on a train in box 239 c and held in hand at box 239 n . Any number of environments may be determined based on motions and orientations determined at step 236 . In particular embodiments, the environmental determination may also be based on a history of the device, such as a motion/orientation history.
  • the device may detect a quiet period when horizontal in the middle of a call after a short jarring is detected (e.g., the short jarring caused by a user placing the phone face up on a table).
  • the phone can detect that it was jarred so that stillness and a perpendicular position relative to gravity may take on a different meaning than had the jarring not occurred.
  • the determination of the environment may be based on the motion and orientation of the device and its history.
  • the history may comprise a previous motion/orientation of the device or any other information relating to a device's history.
  • the determined environment is mapped to a particular behavior.
  • the mapped behavior may depend on any number of factors in addition to the determined environment, such as desired characteristics of the particular user using the device at the time or the particular application in use or focus at the time.
  • the behavior according to a particular modeled environment may include engaging a mute function of the handheld device in box 241 a , powering down chips of the device to survive an impact in box 241 b and increasing a motion activation threshold of the device in box 241 n .
  • the mute behavior indicated in box 241 a may be implemented when a cell phone's environment comprises laying face down on a table while on a call.
  • the powering down chips behavior in box 241 b may be implemented when the environment of handheld device 10 comprises a free fall of the device.
  • the increasing a motion activation threshold behavior of box 241 n may be implemented when a handheld device's environment comprises being in a car or train where bumpiness may require a greater movement threshold for a user's motion input to register as an intended input.
  • Particular embodiments may include any number of behaviors mapped to one or more modeled environments.
  • the handheld device behaves according to the behavior to which its environment is mapped at step 240 .
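  • A minimal sketch of mapping a modeled environment (optionally qualified by the application in focus) to a behavior, as in boxes 241a-241n (the specific mappings are assumptions):

```python
# Sketch (assumed mappings): map a modeled environment, optionally qualified by
# the application in focus, to a device behavior as in boxes 241a-241n.

BEHAVIOR_MAP = {
    ("face down on table", "phone call"): "engage mute",
    ("free fall", None):                  "power down chips",
    ("in car or train", None):            "raise motion activation threshold",
}

def behavior_for(environment, application=None):
    return (BEHAVIOR_MAP.get((environment, application))
            or BEHAVIOR_MAP.get((environment, None))
            or "no change")

print(behavior_for("face down on table", "phone call"))  # engage mute
print(behavior_for("free fall", "camera"))               # power down chips
```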
  • gestures used as motion input for the device may comprise pre-existing symbols, such as letters of the alphabet, picture symbols or any other alphanumeric character or pictographic symbol or representation.
  • gestures used as motion input may mimic upper and lower case members of an alphabet in any language, Arabic and Roman numerals and shorthand symbols.
  • Preexisting gestures may be used for handheld input devices for other local and remote devices as well. Using preexisting gestures for handheld device input may facilitate the learning process for users with respect to gesture motion interfaces.
  • FIG. 17 illustrates example gestures which may be mapped to particular functions.
  • a user may move device 10 in the form of heart 250 to call the user's girlfriend, boyfriend or spouse or house 252 to call the user's home.
  • moving the device in the form of C-gesture 254 may be a command for copying data
  • O-gesture 256 may be a command for opening a file
  • D-gesture 258 may be a command for deleting data
  • X-gesture 260 may be an exit command for a file or application.
  • the logical connection between gestures and their intended functions or operations (e.g., “O” for opening a file) further facilitates user interaction and learning.
  • any number of pre-existing symbols may be used as gestures for motion input as commands for performing any number of functions, operations or tasks of a handheld device.
  • Many preexisting gestures typically exist in two dimensions.
  • Handheld device 10 may recognize such gestures.
  • handheld device 10 may disable receipt of a particular dimension so that any movement in a third dimension when a user is attempting to input a two-dimensional gesture is not received or detected in order to facilitate recognition of the two-dimensional gesture.
  • handheld device 10 may receive three-dimensional gestures that may be based on preexisting two-dimensional gestures. Receiving and detecting three-dimensional gestures increases the capabilities of the device by, for example, increasing the number and types of gestures which may be used as motion input.
  • FIG. 18 is a flowchart 270 illustrating the utilization of a preexisting symbol gesture, the letter “O,” as motion input.
  • a user moves handheld device 10 in the form of the letter “O.”
  • handheld device 10 receives raw motion data of the “O” movement from motion detection components and processes such raw motion data at step 276 to determine the actual motion of the device.
  • handheld device 10 accesses a gesture database 280 which may include a plurality of gestures recognizable by the device to map the motion to the gesture “O.”
  • the plurality of gestures of the gesture database may each be defined by a series of accelerations of a movement.
  • handheld device 10 maps the gesture “O” to a particular function by accessing a function database 284 (or a gesture mapping database) which may include a plurality of functions that may be performed by one or more applications running on the device.
  • the gesture and function databases may be comprised in memory 18 of the device.
  • the particular function mapped to the gesture “O” may depend on a particular application in focus or being used by the user at the time. For example, in some applications “O” may comprise a command to open a file, while in other applications it may comprise a command to call a certain number. In some cases, one gesture may be mapped to the same function for all applications of the device.
  • the device behaves according to the mapped function, such as opening a file.
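  • A minimal sketch of the gesture-database and function-database lookups of FIG. 18 (the database contents and application names are assumptions):

```python
# Sketch (assumed database contents) of the flow of FIG. 18: match the detected
# motion to a gesture in a gesture database, then look the gesture up in a
# function (gesture mapping) database keyed by the application in focus.

GESTURE_DB = {"O": "acceleration signature for the letter O",
              "X": "acceleration signature for the letter X"}

FUNCTION_DB = {
    ("O", "file manager"): "open file",
    ("O", "phone"):        "call stored number",
    ("X", None):           "exit application",      # same function in all apps
}

def perform(gesture, app_in_focus):
    if gesture not in GESTURE_DB:
        return "gesture not recognized"
    return (FUNCTION_DB.get((gesture, app_in_focus))
            or FUNCTION_DB.get((gesture, None))
            or "no function mapped")

print(perform("O", "file manager"))   # open file
print(perform("O", "phone"))          # call stored number
print(perform("X", "calendar"))       # exit application
```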
  • gestures used as motion input for handheld device 10 may have different meanings (e.g., functions, operations, tasks) based on a particular context, such as a particular application in use or in focus, a particular device state with respect to an application or otherwise, a particular modeled environment or any combination of these or any other context.
  • a particular gesture may be mapped as a command to scroll a page up when running a web browser at the device, while the gesture may be mapped as a command to examine a different date when running a calendar program.
  • the ability for particular gestures to be mapped to different commands depending on the context, such as the application in use increases the functionality of the device.
  • Handheld devices in some embodiments may be able to utilize simpler motion detection components if gestures are mapped to different commands depending on the context.
  • a handheld device may include particular motion detection components such that the handheld device may only be able to recognize and distinguish between twenty different gestures. If each gesture is mapped to a different function for each of four different applications, then the ability to recognize only twenty unique gestures still provides eighty different functions on the device (twenty for each application). If each gesture were mapped to its own function no matter what application was in focus, then the overall capability of the device would be reduced, and some gestures would likely not be used in some applications.
  • gestures may be mapped to different functions, operations or tasks depending on the application in use, device state, modeled environment or other context. In some cases, gestures may be mapped to different functions depending on the state of a particular application.
  • gestures may have some functions when in one state of a word processing program (e.g., a menu state), while the same gestures may have different functions when in another state of the word processing program (e.g., a document editing state).
  • a command map associated with the gesture function mappings may include gesture mappings for each such state.
  • FIG. 19 is a flowchart 290 illustrating the use of context-based gesture mapping, in accordance with a particular embodiment.
  • a gesture has different functions assigned based on the application in focus.
  • handheld device 10 receives raw motion data of a particular gesture movement and processes such raw motion data at step 294 to determine the actual motion of the device.
  • handheld device 10 maps the motion to a gesture, for example, by accessing a gesture database.
  • handheld device 10 determines which application is in focus. For example, if the device were capable of running four different applications, then it would determine which of the four was in focus or was being used at the time. The device then performs the function mapped to the gesture according to the application in focus.
  • the identification of such function may be accomplished in some embodiments by accessing a function database which may also be referred to as a gesture mapping database since it correlates gestures of a gesture database to functions.
  • If Application 1 is in focus, then the device performs Function 1 at step 300a; if Application 2 is in focus, then the device performs Function 2 at step 300b; if Application 3 is in focus, then the device performs Function 3 at step 300c; and if Application 4 is in focus, then the device performs Function 4 at step 300d.
  • a handheld device with phone and PDA capabilities may run four applications: a phone application, a calendar application, a file management application and an e-mail application.
  • a gesture input mimicking the letter “S” may have different functions depending on the application in focus. For example, if the phone application is in focus, then receiving the gesture input “S” may be a command for calling a particular number designated to the “S” gesture. If the calendar application is in focus, then receiving the gesture input “S” may be a command for scrolling to the month of September in the calendar. If the file management application is in focus, then receiving the gesture input “S” may be a command for saving a file. If the e-mail application is in focus, then receiving the gesture input “S” may be a command for sending an e-mail. Particular embodiments contemplate great flexibility in the ability to map gestures to different functions depending on the context.
  • gestures may have different functions depending on a particular context at the time.
  • handheld devices may be customizable to allow users to assign device functions to pre-defined gestures.
  • the functions may be context-based such that some gestures may have different functions depending on an application in use, a device state or a modeled environment.
  • Handheld devices in some embodiments may allow different users of the same device to assign different functions to the same gesture, and such functions may also be context-based as discussed above.
  • a handheld device 10 may be utilized by a number of different users at different times. Each user may assign different functions for the same gestures. When the handheld device receives a gesture input, it must thus know which user is using the device at the time to determine which function the user intends the device to perform.
  • the device may determine the user in any of a variety of ways. In some embodiments, users may log into the device prior to use by using a username and password or otherwise. In other embodiments, the handheld device may be able to identify the user based on the manner in which the user moves the device for motion input, such as the way the user forms a gesture using the device. As indicated above, each user may also assign commands to gestures based on context, such as based on the application in focus at the device. The ability for the handheld device to map functions to gestures based on particular users further increases the device's capabilities and flexibility, particularly if the device is able to recognize and distinguish only a particular number of gestures.
  • FIG. 20 is a flowchart 310 illustrating the use of user-based gesture mapping, in accordance with a particular embodiment.
  • a gesture has different functions assigned based on the user using the device.
  • handheld device 10 receives raw motion data of a particular gesture movement and processes such raw motion data at step 314 to determine the actual motion of the device.
  • handheld device 10 maps the motion to a gesture, for example, by accessing a gesture database.
  • handheld device 10 determines which user is using the device. Such determination may be made, for example, through a log in system in which users log into the device prior to use. Handheld device 10 may determine the current user through other suitable methods as well.
  • the device performs the functions assigned to the gesture input based on the user using the device.
  • If User 1 is using the device, then the device performs Function 1 at step 320a; if User 2 is using the device, then the device performs Function 2 at step 320b; if User 3 is using the device, then the device performs Function 3 at step 320c; and if User 4 is using the device, then the device performs Function 4 at step 320d.
  • gestures may be assigned different functions based on both users using the device and a context.
  • the illustrated flowchart 310 described above may have an additional step to determine the context at the time (e.g., step 298 of flowchart 290 determining the application in focus).
  • the particular function desired for performance by a certain gesture thus depends on both the user using the device at the time and the context, such as the particular application in focus at the time.
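  • A minimal sketch of a combined user- and context-based lookup (the user names and mappings are assumptions):

```python
# Sketch (assumed users and mappings): extend the gesture lookup of flowchart 310
# so the function performed depends on both the current user and the application
# in focus.

USER_GESTURE_MAPS = {
    "alice": {("S", "phone"): "call home", ("S", "calendar"): "go to September"},
    "bob":   {("S", "phone"): "call office", ("S", "email"): "send message"},
}

def perform(user, gesture, app_in_focus):
    return USER_GESTURE_MAPS.get(user, {}).get((gesture, app_in_focus),
                                               "no function mapped")

print(perform("alice", "S", "phone"))     # call home
print(perform("bob", "S", "phone"))       # call office
print(perform("bob", "S", "calendar"))    # no function mapped
```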
  • some embodiments include handheld devices with the ability to receive preexisting symbols as gestures for motion input. Some of those embodiments, as well as other embodiments, may include the ability for users to create their own gestures for mapping to functions and/or keys.
  • the gestures may comprise any user-created symbol or other motion that the user desires to utilize as motion input for one or more particular functions, operations or tasks that the device is able to perform. Users may be able to create motions with some personal significance so they may more easily remember the motion's command or intended function.
  • FIG. 21 is a flowchart 330 illustrating the assignment process for user-created gestures, in accordance with a particular embodiment.
  • an indication is received from a user for gesture creation.
  • the indication may be received in any of a variety of ways using any suitable input format (e.g., keys, trackwheel, motion, etc.).
  • the user may move the device according to the user-created gesture such that raw motion data for the user-created gesture is received at the handheld device at step 334 .
  • the raw motion data may comprise a sequence of accelerations of a movement after stabilization of the device from a base reference position until an indication is received to stop recording the reference positions.
  • Indications to start and stop recording a user-created gesture may include motion or non-motion indications (e.g., key presses and key releases).
  • the raw motion data is processed at step 336 .
  • the motion is stored as a gesture, for example, at a gesture database.
  • the indication for gesture creation may be received after the user moves the device according to the user-created gesture.
  • the user may move the device according to a user-created gesture that is currently unrecognizable by the device.
  • the device may query the user to determine if the user desires to store the unrecognized gesture for a particular function. The user may respond in the affirmative so that the user may utilize the gesture as motion input in the future.
  • function mapping information for the gesture is received from the user.
  • the function mapping information may comprise functions, operations or tasks of the device that the user desires the user-created gesture to command.
  • function mapping information may comprise a series of functions (e.g., a macro) that one gesture may command.
  • the user may assign different functions for a gesture according to an application in focus.
  • a user may desire to map different gestures to different keys or keystrokes of the device.
  • One example of mapping a series of functions to a gesture may include mapping a long string of characters to a gesture (e.g., telephone numbers including pauses, where appropriate).
  • the function mapping information is stored, for example, at a function database or gesture mapping database.
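  • A minimal sketch of recording a user-created gesture and storing its function mapping, as in flowchart 330 (the storage layout and class name are assumptions):

```python
# Sketch (assumed storage layout) of flowchart 330: record the acceleration
# sequence of a user-created gesture between start and stop indications, store it
# in a gesture database, and store the user's function mapping (which may be a
# macro, i.e. a series of functions).

gesture_db = {}        # name -> recorded acceleration sequence
function_db = {}       # name -> function(s) to perform

class GestureRecorder:
    def __init__(self):
        self.recording = False
        self.samples = []

    def start(self):                      # e.g. key press
        self.recording, self.samples = True, []

    def add_sample(self, accel):          # raw motion data while recording
        if self.recording:
            self.samples.append(accel)

    def stop_and_store(self, name, functions):   # e.g. key release
        self.recording = False
        gesture_db[name] = list(self.samples)
        function_db[name] = functions             # single function or macro

rec = GestureRecorder()
rec.start()
for sample in [(0.1, 0.0, 0.0), (0.0, 0.2, 0.0), (-0.1, 0.0, 0.0)]:
    rec.add_sample(sample)
rec.stop_and_store("my_squiggle", ["open dialer", "dial 555-0100"])
print(function_db["my_squiggle"])
```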
  • the precision required for gesture input describes how accurately a gesture must be executed in order to constitute a match to a gesture recognized by the device, such as a gesture included in a gesture database accessed by the device. The more closely a user-generated motion must match a gesture in a gesture database, the harder it will be to successfully execute such a gesture motion.
  • movements may be matched to gestures of a gesture database by matching a detected series of accelerations of the movements to those of the gestures of the gesture database.
  • As the precision of gestures required for recognition increases, one may have more gestures (at the same level of complexity) that can be distinctly recognized. As an example, if the precision required was zero, then the device could only recognize a single gesture, but it would recognize it easily because anything the user did would be recognized as that gesture. If, however, the precision required was infinite, then it would be virtually impossible for a user to form a gesture that was recognized by the device, but the device could support an infinite number of gestures with only infinitesimal differences between them.
  • One area in which the precision requirement is especially applicable is in the area of spatial signatures. With spatial signatures, the level of precision correlates well with the level of security.
  • the precision required by handheld device 10 for gesture input may be varied. Different levels of precision may be required for different users, different regions of the “gesture space” (e.g., similar gestures may need more precise execution for recognition while gestures that are very unique may not need as much precision in execution), different individual gestures, such as signatures, and different functions mapped to certain gestures (e.g., more critical functions may require greater precision for their respective gesture inputs to be recognized).
  • users may be able to set the level(s) of precision required for some or all gestures or gestures of one or more gesture spaces. As an example, a user may set the precision required for spatial signatures higher than for other gestures for the user thus increasing security for spatial signature input.
  • gestures may be recognized by detecting a series of accelerations of the device as the device is moved along a path by a user according to an intended gesture. Recognition occurs when the series of accelerations is matched by the device to a gesture of a gesture database.
  • each gesture recognizable by handheld device 10 includes a matrix of three-dimensional points.
  • a user movement intended as a gesture input includes a matrix of three-dimensional points.
  • Handheld device 10 may compare the matrix of the movement with the matrices of each recognizable gesture (or each gesture in the gesture database) to determine the intended gesture. If a user moves the device such that the movement's matrix correlates to each point of an intended gesture's matrix, then the user may be deemed to have input the intended gesture with perfect precision. As the precision required for gesture input is reduced, greater differences are allowed between a user's gesture movement and an intended gesture of the gesture database while still yielding recognition, as sketched below.
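  • The following Python sketch illustrates the kind of precision-based matching just described, assuming every gesture (template and user movement) has been resampled to the same number of three-dimensional points. The Euclidean metric and the conversion from a precision setting to a tolerance are illustrative assumptions, not the specific method of this disclosure.

```python
# Minimal sketch of precision-based gesture matching. It assumes every gesture
# (template and user movement) has been resampled to the same number of 3-D
# points; the Euclidean metric and the precision-to-tolerance conversion are
# illustrative choices only.
import math

def point_distance(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def matches(movement, template, precision):
    """Return True if every point of the movement falls within the tolerance
    implied by the precision requirement (higher precision -> tighter match)."""
    tolerance = 1.0 / precision          # e.g. precision 10 -> tolerance 0.1
    return all(point_distance(m, t) <= tolerance
               for m, t in zip(movement, template))

def recognize(movement, gesture_database, precision):
    """Return the name of the first template the movement matches, if any."""
    for name, template in gesture_database.items():
        if matches(movement, template, precision):
            return name
    return None

gesture_database = {"O": [(0, 1, 0), (1, 0, 0), (0, -1, 0), (-1, 0, 0)]}
movement = [(0.05, 0.95, 0), (0.9, 0.1, 0), (0, -1.02, 0), (-0.97, 0, 0)]
print(recognize(movement, gesture_database, precision=5))   # loose: matches "O"
print(recognize(movement, gesture_database, precision=50))  # strict: no match
```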
  • FIG. 22 illustrates three gestures input using a handheld device with varying levels of precision.
  • the intended gesture comprises an “O.”
  • Gesture movement 350 is input as a perfect “O,” or with 100% precision for the intended gesture.
  • Gesture movement 352 is input with less than 100% precision as it does not form a perfect “O.”
  • Gesture movement 354 is input with less precision than gesture movement 352 .
  • the precision requirement for the input of gesture “O” may be set at the handheld device to accept varying levels of precision.
  • the precision level may be set such that only gesture movement 350 is recognized as gesture “O,” gesture movements 350 and 352 are both recognized as gesture “O,” or gesture movements 350, 352 and 354 are all recognized as gesture “O.” As indicated above, the higher the precision requirement, the more space is available for additional recognizable gestures. For example, if the precision level for handheld device 10 were set such that only gesture movement 350 was recognized as gesture “O,” then gesture movements 352 and 354 may be recognized as other, distinct gestures.
  • handheld devices may alter gestures recognized for performing particular functions based on a user's personal precision.
  • a handheld device may have dynamic learning capability for gesture mappings. For example, if a particular gesture of a gesture database is mapped to a particular function, and a user's repeated attempts to input the gesture lack precision in a consistent manner, then the handheld device may alter the gesture in the gesture database to match the consistent gesture movement of the user such that the user's consistent gesture motion input will be mapped to the particular function.
  • if, for example, a particular gesture comprises a square motion and a user's motion intended for the gesture consistently comprises more of a triangular motion (e.g., over multiple consecutive attempts), the handheld device may recognize this consistent difference between the intended gesture and the actual user motion and change the intended gesture (e.g., the square) in the gesture database mapped to that function to the user's actual, consistent motion (e.g., the triangle).
  • the device may determine the gesture intended in any of a variety of ways, such as through two-way communication with the user through any form of input.
  • this dynamic learning of users' input characteristics may be applied on a user-specific basis. For example, in the example described above another user may still input the square gesture using the same handheld device to command the same function.
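  • A minimal sketch of such dynamic learning is shown below: once a user's attempts at a mapped gesture differ from the stored template consistently enough, the template is replaced by an average of the recent attempts. The attempt count and the averaging scheme are assumptions for illustration only.

```python
# Illustrative sketch of dynamic gesture learning: when a user's attempts at a
# mapped gesture consistently differ from the stored template, the template is
# adapted (here, by averaging the recent attempts). The attempt count and the
# averaging scheme are assumed, not taken from this disclosure.

RETRAIN_AFTER = 3   # number of consistent attempts before adapting the template

class AdaptiveGestureStore:
    def __init__(self, templates):
        self.templates = dict(templates)          # {name: [(x, y, z), ...]}
        self._recent = {name: [] for name in templates}

    def record_attempt(self, name, movement):
        """Remember the user's movement intended as gesture `name`, and adapt
        the stored template once enough consistent attempts accumulate."""
        self._recent[name].append(movement)
        if len(self._recent[name]) >= RETRAIN_AFTER:
            attempts = self._recent[name][-RETRAIN_AFTER:]
            # New template: pointwise average of the recent attempts.
            self.templates[name] = [
                tuple(sum(pt[i] for pt in pts) / len(pts) for i in range(3))
                for pts in zip(*attempts)
            ]
            self._recent[name].clear()

store = AdaptiveGestureStore({"square": [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]})
# The user consistently draws something closer to a triangle.
for _ in range(3):
    store.record_attempt("square", [(0, 0, 0), (1, 0, 0), (0.5, 1, 0), (0, 0, 0)])
print(store.templates["square"])   # template has drifted toward the user's motion
```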
  • handheld devices may recognize that a user's precision increases over time and the devices may, as a result, increase the gestures available for use.
  • Increasing gestures available for input may also increase the functions capable of being commanded through gesture input.
  • a user's personal precision for inputting gestures may be such that the user is only able to input a certain number of gestures that will be recognized by the handheld device.
  • the user's personal precision may increase. This increase may be recognized by the handheld device and, as a result, the device may enable additional gestures that the user may use as gesture input.
  • the enabling of additional gestures may occur when the user's precision increases over a particular precision threshold, or a certain precision level. Since the user's precision has increased, the handheld device will be able to recognize when the user attempts to input these additional gestures.
  • providing additional gestures for input by a user may also increase the number of functions that the user is able to command through gesture input, since each gesture may be mapped to command a different function.
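  • The sketch below illustrates enabling additional gestures as a user's measured precision crosses per-gesture thresholds. The 0-to-1 precision metric, the gesture names and the threshold values are assumed for illustration.

```python
# Sketch of enabling extra gestures as a user's measured precision improves.
# The precision metric (0..1) and the per-gesture thresholds are assumed values.

GESTURE_PRECISION_THRESHOLDS = {
    "circle": 0.2,      # easy gestures available to everyone
    "square": 0.4,
    "signature": 0.8,   # demanding gestures unlock only for precise users
}

def available_gestures(user_precision):
    """Return the gestures the device will accept from this user right now."""
    return sorted(name for name, needed in GESTURE_PRECISION_THRESHOLDS.items()
                  if user_precision >= needed)

print(available_gestures(0.3))   # ['circle']
print(available_gestures(0.9))   # ['circle', 'signature', 'square']
```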
  • Handheld devices in particular embodiments may also allow users to set and vary noise thresholds of the device.
  • Noise thresholds are the magnitude of motion of the device that must be detected in order to be considered intended motion input (e.g., an intended gesture) of the user. For example, if noise thresholds are set low, then minimal motion of the device may be considered by the device as motion input. However, if noise thresholds are set high, then greater movement of the device would be required before the motion is considered intended input from the user. If, for example, a user is traveling in a car on a bumpy road, the user may desire to set the noise threshold higher so that when the device moves as a result of bumps in the road, such movement is not considered by the device to be intended motion input.
  • noise thresholds may automatically change at the device based on a modeled environment. For example, if a device determines that the environment comprises traveling in a car, then the device may automatically increase the noise threshold so that minimal movements resulting from the car will not register as user-intended motion.
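  • A minimal sketch of such a noise threshold, including automatic adjustment for a modeled environment, might look as follows; the threshold values, units and environment names are assumptions rather than values from this disclosure.

```python
# Sketch of a noise threshold gate. The magnitude units, threshold values and
# environment names are hypothetical; the idea is simply that motion below the
# active threshold is discarded rather than treated as intended input.
import math

ENVIRONMENT_THRESHOLDS = {      # assumed values, arbitrary acceleration units
    "still": 0.05,
    "walking": 0.20,
    "train": 0.45,
    "car": 0.60,
}

class NoiseGate:
    def __init__(self, environment="still"):
        self.threshold = ENVIRONMENT_THRESHOLDS[environment]

    def set_environment(self, environment):
        """Automatically raise or lower the threshold for a modeled environment."""
        self.threshold = ENVIRONMENT_THRESHOLDS[environment]

    def is_intended_motion(self, ax, ay, az):
        """Treat a sample as intended motion input only above the threshold."""
        return math.sqrt(ax * ax + ay * ay + az * az) >= self.threshold

gate = NoiseGate()
print(gate.is_intended_motion(0.1, 0.0, 0.0))   # True when the device is still
gate.set_environment("car")                      # bumpy road: raise the threshold
print(gate.is_intended_motion(0.1, 0.0, 0.0))   # False: treated as road noise
```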
  • FIG. 23 is a flowchart 370 illustrating a gesture recognition process utilizing a number of features described herein, in accordance with a particular embodiment.
  • raw motion data of a particular gesture movement is received.
  • the raw motion data is processed at step 374 where the actual motion of the device is determined. Such processing may include various filtering techniques and fusion of data from multiple detection or sensing components.
  • the actual motion is mapped to a gesture. Mapping the actual motion to a gesture may include accessing a user settings database 378 , which may include user data 379 comprising, for example, user precision and noise characteristics or thresholds, user-created gestures and any other user-specific data or information including user identities 381 .
  • User-specific information may be important, for example, because different users of the handheld device may have different settings and motion input characteristics. For example, an older user may have less precision than a younger user when inputting gestures such that the older person may have fewer gestures available. Moreover, a more experienced user may have more device functionality available through gesture input.
  • User settings database 378 may also include environmental model information 380 which may factor in determining the gesture applicable at the time. As discussed above, through environmental modeling, the device can internally represent its environment and the effect that environment is likely to have on gesture recognition. For example, if the user is on a train, then the device may automatically raise the noise threshold level. The device may also reduce the precision required, depending upon how crowded the gesture space is near the gesture under consideration. Mapping the actual motion to a gesture may also include accessing gesture database 382 .
  • the gesture is mapped to a function for the device.
  • This step may include accessing a function mapping database 386 which may include correlation between gestures and functions. Different users may have different mappings of gestures to functions and different user-created functions.
  • function mapping database 386 may also include user-specific mapping instructions or characteristics, user-created functions (e.g., macros and/or phone numbers) and any other function information which may be applicable to mapping a particular gesture to one or more functions.
  • gestures may be mapped to individual keystrokes.
  • User identities 381 may also be accessed in this step.
  • device context information 388 may also be accessed and utilized in mapping the gesture, which may include environmental model information 389 , application in focus information 390 and device state information 391 , such as time and date information, location information, battery condition and mode information (e.g., silent mode).
  • the device performs the appropriately-mapped one or more function(s), such as Function 1 at step 392 a , Function 2 at step 392 b or Function 3 at step 392 c.
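  • A compressed Python sketch of this flow is given below: raw motion data is filtered into actual motion, mapped to a gesture using user settings and a gesture database, and then mapped to a function for the application in focus. The filter, the matching metric and every database entry are illustrative stand-ins, not this disclosure's prescribed implementation.

```python
# Compressed, illustrative sketch of the FIG. 23 flow: raw motion data (372) is
# filtered into actual motion (374), mapped to a gesture using user settings and
# a gesture database (376-382), then mapped to a function and performed (384-392).

def process_raw_motion(raw_samples, window=3):
    """Step 374: smooth the raw data with a simple moving average."""
    out = []
    for i in range(len(raw_samples)):
        chunk = raw_samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def map_motion_to_gesture(motion, settings, gesture_db):
    """Steps 376-382: reject sub-threshold noise, then match against templates."""
    if max(abs(m) for m in motion) < settings["noise_threshold"]:
        return None
    for name, template in gesture_db.items():
        error = sum(abs(m - t) for m, t in zip(motion, template)) / len(template)
        if error <= settings["tolerance"]:
            return name
    return None

def map_gesture_to_function(gesture, function_map, app_in_focus):
    """Steps 384-386: different apps may map the same gesture to different functions."""
    return function_map.get((gesture, app_in_focus))

# Hypothetical per-user data (user settings database 378 / gesture database 382).
settings = {"noise_threshold": 0.3, "tolerance": 0.25}
gesture_db = {"shake": [1.0, -1.0, 1.0, -1.0], "tap": [0.8, 0.0, 0.0, 0.0]}
function_map = {("shake", "music"): lambda: print("next track"),
                ("shake", "phone"): lambda: print("silence ringer")}

raw = [1.1, -0.9, 1.2, -1.0]
motion = process_raw_motion(raw, window=1)          # window=1: no smoothing here
gesture = map_motion_to_gesture(motion, settings, gesture_db)
function = map_gesture_to_function(gesture, function_map, app_in_focus="music")
if function:
    function()                                      # steps 392a-392c
```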
  • handheld device 10 may comprise a cellular phone with many of the capabilities described herein.
  • cellular phones with motion input capabilities may use motion input to flatten menus as discussed above.
  • the cellular phone may detect device states and environments, such as free fall or the cellular phone being face down or face up to map to behaviors such as mute, speaker phone and power-off. Other detection of device states may include detecting that the phone is being held to disengage mute or speakerphone states.
  • the cellular phone may utilize gestures to control dialing (e.g., through gestural speed dial) or to lock/unlock a keypad of the device. For example, the device may be moved in a clockwise circle to dial home, a counterclockwise circle to dial work and in the shape of a heart to dial a significant other. Users may also be able to program the cellular phone to customized gestural mappings.
  • handheld device 10 may comprise a digital camera utilizing motion input for at least some of the functions described herein.
  • digital cameras with motion input capabilities may use motion input to flatten menus as discussed above.
  • Motion may also be used to allow a user to zoom in on (and back out of) still photos or video to examine them more closely, providing smoother and more intuitive functionality.
  • Motion may be used to zoom in and out of a number of thumbnails of photographs or video clips so that it is easy to select one or more to review.
  • Virtual desktops may be used to review many thumbnails of digital photos or video clips, or to review the photos or clips themselves, by translating the camera or using gestural input.
  • Gestures and simple motions may be used alone or in combination with other interface mechanisms to modify various settings on digital still and video cameras, such as flash settings, type of focus and light sensing mode.
  • free fall may be detected to induce the camera to protect itself in some way from damage in an impending collision.
  • Such protection may include dropping power from some or all parts of the camera, closing the lens cover and retracting the lens.
  • handheld device 10 may comprise a digital watch utilizing motion input for at least some of the functions described herein.
  • digital watches with motion input capabilities may use motion input to flatten menus as discussed above.
  • the tapping of the watch or particular gestures may be used to silence the watch.
  • Other functions may also be accessed through taps, rotations, translations and other more complex gestures. These functions may include changing time zones, setting the watch (e.g., setting the time and other adjustable settings), changing modes (e.g., timers, alarms, stopwatch), activating the backlight, using a stopwatch (e.g., starting, stopping and splitting the stopwatch) and starting and stopping other timers.
  • motion detection may be separate from a display.
  • a display may be worn on glasses or contacts, and other parts of the handheld device may be dispersed across a user's body such that the display may not be part of the same physical component as the motion input device or component.

Abstract

A handheld device includes a display having a viewable surface and operable to generate an image indicating a currently controlled remote device and a gesture database maintaining a plurality of remote command gestures. Each remote command gesture is defined by a motion of the device with respect to a first position of the handheld device. The device includes a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device and a motion detection module operable to detect motion of the handheld device within three dimensions and to identify components of the motion in relation to the viewable surface. The device includes a control module operable to track movement of the handheld device using the motion detection module, to compare the tracked movement against the remote command gestures to determine a matching gesture, and to identify the one of the commands corresponding to the matching gesture. The device also includes a wireless interface operable to transmit the identified command to a remote receiver for delivery to the remote device.

Description

    TECHNICAL FIELD
  • The present invention relates generally to portable devices and, more particularly, to portable devices with a motion interface.
  • BACKGROUND
  • The use of computing devices, such as cellular phones and personal digital assistants (PDAs) has grown rapidly. Such devices provide many different functions to users through different types of interfaces, such as keypads and displays. Some computing devices utilize motion as an interface by detecting tilt of the device by a user. Some implementations of a motion interface involve tethering a computing device with fishing lines or carrying large magnetic tracking units that require large amounts of power.
  • SUMMARY
  • In accordance with the present invention, a handheld device with a motion interface is provided.
  • In accordance with a particular embodiment, a handheld device includes a display having a viewable surface and operable to generate an image indicating a currently controlled remote device and a gesture database maintaining a plurality of remote command gestures. Each remote command gesture is defined by a motion of the device with respect to a first position of the handheld device. The device includes a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device and a motion detection module operable to detect motion of the handheld device within three dimensions and to identify components of the motion in relation to the viewable surface. The device includes a control module operable to track movement of the handheld device using the motion detection module, to compare the tracked movement against the remote command gestures to determine a matching gesture, and to identify the one of the commands corresponding to the matching gesture. The device also includes a wireless interface operable to transmit the identified command to a remote receiver for delivery to the remote device.
  • In accordance with another embodiment, a method for remotely controlling devices includes generating, on a viewable surface of a handheld device, an image indicating a currently controlled remote device and maintaining a gesture database comprising a plurality of remote command gestures. Each remote command gesture is defined by a motion of the device with respect to a first position of the handheld device. The method includes maintaining a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device, tracking movement of the handheld device in relation to the viewable surface, comparing the tracked movement against the remote command gestures to determine a matching gesture, identifying the one of the commands corresponding to the matching gesture, and transmitting the identified command to a remote receiver for delivery to the remote device.
  • Technical advantages of particular embodiments include the ability of a handheld device to control various other local and remote devices through motion input of the handheld device. For example, gestures of the handheld device may be used to communicate commands to a device selected for control. Accordingly, motion input of one handheld device may be used to control a plurality of other devices thus facilitating control of the other devices for a user.
  • Other technical advantages will be readily apparent to one skilled in the art from the following figures, descriptions and claims. Moreover, while specific advantages have been enumerated above, various embodiments may include all, some or none of the enumerated advantages.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of particular embodiments of the invention and their advantages, reference is now made to the following descriptions, taken in conjunction with the accompanying drawings, in which:
  • FIG. 1 illustrates a handheld device with motion interface capability, in accordance with a particular embodiment;
  • FIG. 2 illustrates a motion detector of the handheld device of FIG. 1, in accordance with a particular embodiment;
  • FIG. 3 illustrates the use of motion detector components of the handheld device of FIG. 1, in accordance with a particular embodiment;
  • FIG. 4 illustrates an example handheld device with motion detection capability, in accordance with a particular embodiment;
  • FIG. 5 illustrates an example of selection and amplification of a dominant motion of a handheld device, in accordance with a particular embodiment;
  • FIG. 6 is a flowchart illustrating preferred motion selection, in accordance with a particular embodiment;
  • FIG. 7 is a flowchart illustrating the setting of a zero-point for a handheld device, in accordance with a particular embodiment;
  • FIG. 8 illustrates an example of scrubbing functionality with a handheld device for virtual desktop navigation, in accordance with a particular embodiment;
  • FIG. 9 is a flowchart illustrating the scrubbing process of FIG. 8, in accordance with a particular embodiment;
  • FIG. 10A illustrates an example of menu navigation using gesture input, in accordance with a particular embodiment;
  • FIG. 10B illustrates example gestures which may be used to perform various functions at a handheld device, in accordance with a particular embodiment;
  • FIG. 11 illustrates an example of map navigation using motion input, in accordance with a particular embodiment;
  • FIG. 12A illustrates a form of motion input cursor navigation, in accordance with a particular embodiment;
  • FIG. 12B illustrates another form of motion input cursor navigation, in accordance with a particular embodiment;
  • FIG. 13 is a flowchart illustrating a process for utilizing feedback in response to motion input, in accordance with a particular embodiment;
  • FIG. 14 illustrates an example system utilizing spatial signatures with a handheld device, in accordance with a particular embodiment;
  • FIG. 15 illustrates an example system in which motion input of a handheld device controls multiple other devices, in accordance with a particular embodiment;
  • FIG. 16 is a flowchart illustrating an environmental modeling process of a handheld device, in accordance with a particular embodiment;
  • FIG. 17 illustrates example gestures which may be mapped to different functions of a handheld device, in accordance with a particular embodiment;
  • FIG. 18 is a flowchart illustrating the utilization of a preexisting symbol gesture, in accordance with a particular embodiment;
  • FIG. 19 is a flowchart illustrating the use of context-based gesture mapping, in accordance with a particular embodiment;
  • FIG. 20 is a flowchart illustrating the use of user-based gesture mapping, in accordance with a particular embodiment;
  • FIG. 21 is a flowchart illustrating the assignment process for user-created gestures, in accordance with a particular embodiment;
  • FIG. 22 illustrates three gestures input using a handheld device with varying levels of precision, in accordance with a particular embodiment; and
  • FIG. 23 is a flowchart illustrating a gesture recognition process utilizing a number of features, in accordance with a particular embodiment.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates a handheld device 10 with motion interface capability, in accordance with a particular embodiment of the present invention. Handheld device 10 can recognize movement of the device and can perform various functions corresponding to such movement. Thus, movement of the device operates as a form of input for the device. Such movement input may directly alter what is being displayed on a device display or may perform other functions. Handheld device 10 may comprise a mobile phone, personal digital assistant (PDA), still camera, video camera, pocket calculator, portable radio or other music or video player, digital thermometer, game device, portable electronic device, watch or any other device capable of being held or worn by a user. As indicated in the examples listed above, handheld device 10 may include wearable portable devices such as watches as well. A watch may include any computing device worn around a user's wrist.
  • Handheld device 10 includes a display 12, input 14, processor 16, memory 18, communications interface 20 and motion detector 22. Display 12 presents visual output of the device and may comprise a liquid crystal display (LCD), a light emitting diode (LED) or any other type of display for communicating output to a user. Input 14 provides an interface for a user to communicate input to the device. Input 14 may comprise a keyboard, keypad, track wheel, knob, touchpad, stencil or any other component through which a user may communicate an input to device 10. In particular embodiments, display 12 and input 14 may be combined into the same component, such as a touchscreen.
  • Processor 16 may be a microprocessor, controller or any other suitable computing device or resource. Processor 16 is adapted to execute various types of computer instructions in various computer languages for implementing functions available within handheld device 10. Processor 16 may include any suitable controllers for controlling the management and operation of handheld device 10.
  • Memory 18 may be any form of volatile or nonvolatile memory including, without limitation, magnetic media, optical media, random access memory (RAM), read only memory (ROM), removable media or any other suitable local or remote memory component. Memory 18 includes components, logic modules or software executable by processor 16. Memory 18 may include various applications 19 with user interfaces utilizing motion input, such as mapping, calendar and file management applications, as further discussed below. Memory 18 may also include various databases, such as gesture databases and function or gesture mapping databases, as further discussed below. Components of memory 18 may be combined and/or divided for processing according to particular needs or desires within the scope of the present invention. Communications interface 20 supports wireless or wireline communication of data and information with other devices, such as other handheld devices, or components.
  • Motion detector 22 tracks movement of handheld device 10 which may be used as a form of input to perform certain functions. Such input movement may result from a user moving the device in a desired fashion to perform desired tasks, as further discussed below.
  • It should be understood that handheld device 10 in accordance with particular embodiments may include any suitable processing and/or memory modules for performing the functions as described herein, such as a control module, a motion tracking module, a video analysis module, a motion response module, a display control module and a signature detection module.
  • In particular embodiments, input movement may be in the form of translation and/or gestures. Translation-based input focuses on a beginning point and endpoint of a motion and differences between such beginning points and endpoints. Gesture-based input focuses on an actual path traveled by the device and is a holistic view of a set of points traversed. As an example, when navigating a map using translation-based input, motion in the form of an “O” may change the display during the movement but may ultimately yield no change between the information displayed prior to the movement and the information displayed at the end of the movement, since the device presumably will be at the same point as it started when the motion ends. However, in a gesture input mode the device will recognize that it has traveled in the form of an “O” because in gesture-based input the device focuses on the path traveled during the motion or movement between a beginning point and an endpoint of the gesture (even though the beginning and endpoints may be the same). This gesture “O” movement may be mapped to particular functions such that when the device recognizes it has traveled along a path that constitutes an “O” gesture, it may perform the functions, as further elaborated upon below. In particular embodiments, movement of the device intended as a gesture may be recognized by the device as a gesture by matching a series, sequence or pattern of accelerations of the movement to those defining gestures of a gesture database.
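  • A toy Python sketch of this distinction is shown below: translation-based input reduces a movement to its net displacement (zero for a closed “O” path), while gesture-based input keeps the whole traversed path for matching. The sample path and the two-dimensional simplification are assumptions for illustration.

```python
# Toy illustration of translation-based versus gesture-based input. The sample
# path and the 2-D simplification are assumptions for illustration only.

def translation_input(path):
    """Net change between beginning point and endpoint."""
    (x0, y0), (x1, y1) = path[0], path[-1]
    return (x1 - x0, y1 - y0)

def gesture_input(path):
    """The holistic set of points traversed, available for gesture matching."""
    return list(path)

o_path = [(0, 1), (1, 0), (0, -1), (-1, 0), (0, 1)]   # a rough closed "O"
print(translation_input(o_path))   # (0, 0): the display would end up unchanged
print(len(gesture_input(o_path)))  # 5 points: enough to recognize an "O" gesture
```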
  • Handheld devices in accordance with other embodiments may not include some of the components of the device illustrated in FIG. 1. For example, some embodiments may include a handheld device 10 without an input 14 separate from a motion detector such that motion of the device provides the sole or primary input for the device. It should be noted that handheld devices in accordance with other embodiments may include additional components not specifically illustrated with respect to device 10.
  • FIG. 2 illustrates motion detector 22 of FIG. 1, in accordance with a particular embodiment of the present invention. In this embodiment, motion detector 22 includes accelerometers 24 a, 24 b and 24 c; cameras 26 a, 26 b and 26 c; gyros 28 a, 28 b and 28 c; rangefinders 30 a, 30 b and 30 c; and a processor 32.
  • Accelerometers 24 a, 24 b and 24 c detect movement of the device by detecting acceleration along a respective sensing axis. A particular movement of the device may comprise a series, sequence or pattern of accelerations detected by the accelerometers. When the handheld device is tilted along a sensing axis of a particular accelerometer, the gravitational acceleration along the sensing axis changes. This change in gravitational acceleration is detected by the accelerometer and reflects the tilt of the device. Similarly, translation of the handheld device, or movement of the device without rotation or tilt also produces a change in acceleration along a sensing axis which is also detected by the accelerometers.
  • In the illustrated embodiment, accelerometer 24 a comprises an x-axis accelerometer that detects movement of the device along an x-axis, accelerometer 24 b comprises a y-axis accelerometer that detects movement of the device along a y-axis and accelerometer 24 c comprises a z-axis accelerometer that detects movement of the device along a z-axis. In combination, accelerometers 24 a, 24 b and 24 c are able to detect rotation and translation of device 10. As indicated above, rotation and/or translation of device 10 may serve as an input from a user to operate the device.
  • The use of three accelerometers for motion detection provides certain advantages. For example, if only two accelerometers were used, the motion detector may not be able to disambiguate translation of the handheld device from tilt in the plane of translation. However, using a third, z-axis accelerometer (an accelerometer with a sensing axis at least approximately perpendicular to the sensing axes of the other two accelerometers) enables many cases of tilt to be disambiguated from many cases of translation.
  • It should be understood that some unique movements may exist that may not be discernible from each other by accelerometers 24 a, 24 b and 24 c. For example, movement comprising a certain rotation and a certain translation may appear to accelerometers 24 a, 24 b and 24 c as the same movement as a different movement that comprises a different particular rotation and a different particular translation. If a motion detector 22 merely included three accelerometers to detect movement (without any additional components to ensure greater accuracy), some unique, undiscernible movements may be mapped to the same function or may not be mapped to a function to avoid confusion.
  • As indicated above, motion detector 22 also includes cameras 26 a, 26 b and 26 c, which may comprise charge coupled device (CCD) cameras or other optical sensors. Cameras 26 a, 26 b and 26 c provide another way to detect movement of the handheld device (both tilt and translation). If only one camera were installed on a device for movement detection, tilt of the device may be indistinguishable from translation (without using other motion detection components, such as accelerometers). However, by using at least two cameras, tilt and translation may be distinguished from each other. For example, if two cameras were installed on handheld device 10 (one on the top of the device and one on the bottom of the device), each camera would see the world moving to the right when the device was translated to the left. If the device is lying horizontally and is rotated by lifting its left edge while lowering its right edge, the camera on the bottom will perceive the world as moving to the right while the camera on the top will perceive the world as moving to the left. Thus, when a device is translated, cameras on opposite surfaces will see the world move in the same direction (left in the example given). When a device is rotated, cameras on opposite surfaces will see the world move in opposite directions. This deductive process can be reversed. If both cameras see the world moving in the same direction, then the motion detector knows that the device is being translated. If both cameras see the world moving in opposite directions, then the motion detector knows that the device is being rotated.
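  • The deductive rule just described can be summarized in a small Python sketch, in which each camera reports only the apparent direction of the world's motion; the +1/-1 encoding and the sign convention are assumptions for illustration.

```python
# Toy sketch of the two-camera rule: cameras on opposite faces of the device
# report the apparent direction of the world's motion. Same direction on both
# cameras -> translation; opposite directions -> rotation. The +1/-1 encoding
# and sign convention are illustrative assumptions.

def classify_motion(top_camera_flow, bottom_camera_flow):
    """Each argument is +1 (world appears to move right) or -1 (left)."""
    if top_camera_flow == bottom_camera_flow:
        return "translation"     # both cameras see the world move the same way
    return "rotation"            # opposite apparent motion implies rotation/tilt

print(classify_motion(+1, +1))   # device translated -> "translation"
print(classify_motion(-1, +1))   # device tilted -> "rotation"
```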
  • When the device is rotated, the magnitude of the movement of the world to the cameras is directly related to the magnitude of the rotation of the device. Thus, the amount of the rotation can accurately be determined based on such movement of the world to the cameras. However, when the device is translated, the magnitude of the translation is related to both the magnitude of the movement of the world to the cameras and to the distance to the objects in the field of view of the cameras. Therefore, to accurately determine amount of translation using cameras alone, some form of information concerning the distance to objects in the camera fields of view must be obtained. However, in some embodiments cameras with rangefinding capability may be used.
  • It should be understood that even without distance information, optical information can be of significant value when correlated against the information from accelerometers or other sensors. For example, optical camera input may be used to inform the device that no significant motion is taking place. This could provide a solution to problems of drift which may be inherent in using acceleration data to determine absolute position information for certain device functions.
  • As discussed above, distance information may be useful to determine amount of translation when cameras are being used to detect movement. In the illustrated embodiment, such distance information is provided by rangefinders 30 a, 30 b and 30 c. Rangefinders 30 a, 30 b and 30 c may comprise ultrasound rangefinders, laser rangefinders or any other suitable distance measuring component. Other components may also be used to determine distance information. For example, cameras with rangefinding capability may be used, and multiple cameras may be utilized on the same side of the device to function as a range-finder using stereopsis. Determined distance information allows for accurate and explicit computation of the portion of any apparent translation that is due to translation and the portion that is due to rotation.
  • As indicated above, motion detector 22 additionally includes gyros 28 a, 28 b and 28 c. Gyros 28 a, 28 b and 28 c are used in combination with the other components of motion detector 22 to provide increased accuracy in detecting movement of device 10.
  • Processor 32 processes data from accelerometers 24, cameras 26, gyros 28 and rangefinders 30 to produce an output indicative of the motion of device 10. Processor 32 may comprise a microprocessor, controller or any other suitable computing device or resource, such as a video analysis module for receiving a video stream from each camera. In some embodiments, the processing described herein with respect to processor 32 of motion detector 22 may be performed by processor 16 of handheld device 10 or any other suitable processor, including processors located remote to the device.
  • As discussed above, motion detector 22 includes three accelerometers, three cameras, three gyros and three rangefinders. Motion detectors in accordance with other embodiments may include fewer or different components than motion detector 22. For example, some embodiments may include a motion detector with three accelerometers and no cameras, gyros or rangefinders; two or three accelerometers and one or more gyros; two or three accelerometers and one or more cameras; or two or three accelerometers and one or more rangefinders. In addition, the location of the motion detection components on the device may vary on different embodiments. For example, some embodiments may include cameras on different surfaces of a device while other embodiments may include two cameras on the same surface (e.g., to add rangefinding functionality).
  • Altering the type, number and location of components of motion detector 22 may affect the ability of motion detector to detect or accurately measure various types of movement. As indicated above, the type and number of components of motion detectors may vary in different embodiments in order to fulfill particular needs. Fewer or less accurate components may be used in particular embodiments when it is desired to sacrifice accuracy to reduce manufacturing cost of a handheld device with motion detection capabilities. For example, some handheld devices may only need to detect that the device has been translated and may not need to detect exact amount of such translation to perform desired functions of the device. Such handheld devices may thus include a motion detector with cameras and without any sort of rangefinder or other component providing distance information. In particular embodiments, components described above, such as cameras and rangefinders, may also be used for other purposes by the device than those described above relating to motion detection functionality.
  • FIG. 3 is a diagram illustrating the use of the motion detector components of handheld device 10 of FIG. 1. Raw data from motion detection components is processed at processor 32. Such raw data includes x-axis accelerometer raw data 23 a, y-axis accelerometer raw data 23 b and z-axis accelerometer raw data 23 c from accelerometers 24 a, 24 b and 24 c, respectively; camera raw data 25 a, camera raw data 25 b and camera raw data 25 c from cameras 26 a, 26 b and 26 c, respectively; gyro raw data 27 a, gyro raw data 27 b and gyro raw data 27 c from gyros 28 a, 28 b and 28 c, respectively; and rangefinder raw data 29 a, rangefinder raw data 29 b and rangefinder raw data 29 c from rangefinders 30 a, 30 b and 30 c, respectively. If the handheld device includes more, fewer or different motion detection components, as may be the case in some embodiments, the raw data may correspond to the components included.
  • The raw data is processed at processor 32 to produce motion detector output 34 identifying movement of device 10. In the illustrated embodiment, motion detector output 34 comprises translation along x, y and z axes and rotation with respect to the x, y and z axes. The motion detector output is communicated to a processor 16 of the handheld device which identifies the operation, function or task the device should perform (i.e., device behavior 36) based on the device motion. The performance of certain operations, functions or tasks based on particular movements is further discussed below.
  • FIG. 4 is an isometric illustration of an example handheld device 31 with motion detection capability, in accordance with particular embodiments. Handheld device 31 includes an x-axis accelerometer 33, a y-axis accelerometer 35 and a camera 37 oriented towards the z-axis. X-axis 38, y-axis 39 and z-axis 40 are also illustrated with respect to device 31 for reference. Handheld device 31 may detect movement, including tilt and translation in various directions, using accelerometers 33 and 35 and camera 37. Handheld device 31 may also include other components, such as components illustrated and described above with respect to handheld device 10, such as display 12, input 14, processor 16, memory 18 and communications interface 20. As indicated above, particular embodiments may include handheld devices having various types of motion detection components (including accelerometers, gyros, cameras, rangefinders or other suitable components) in any combination and positioned or oriented in any suitable manner upon the devices.
  • In particular embodiments, a user interface function may utilize input motion along one axis of motion at a time. For example, a device application may allow a user to scroll through a list displayed on the handheld device by moving the device along a particular axis (e.g., in one direction or in two opposite directions). It may be very difficult for a user to constrain the motion of the device to that particular axis as desired. In other words, some user generated device rotation or movement along another axis may be difficult to avoid. To counter this problem, the device may include preferred motion selection including the selection and amplification of a dominant motion and the minimization of movement in other directions or axes.
  • FIG. 5 illustrates an example of selection and amplification of a dominant motion and minimization of movement in another direction as discussed above. In the illustrated example, actual motion 41 represents movement of a handheld device. Actual motion 41 comprises movement 42 along one axis 44 and movement 46 along another axis 48 perpendicular to axis 44. Since the amount of movement 42 is greater than the amount of movement 46, the handheld device may select movement 42 as the dominant motion. The handheld device may then amplify this dominant motion and minimize movement 46 (the other motion) such that actual motion 41 is treated by the device as represented motion 50. The amount or size of amplification of the dominant motion may vary in various embodiments according to particular factors, such as the particular application being run on the device at the time. Moreover, amplification of a dominant motion may also be based on magnitude of acceleration, speed of motion, ratio of a motion in one direction (e.g., movement 42) to motion in another direction (e.g., movement 46), size of underlying desktop being navigated or user preferences. In some embodiments, a handheld device may implement preferred motion selection only when certain motion characteristics occur. For example, in some cases the handheld device may select and amplify a dominant motion if motion in one axis is more than two times greater than any other motion. The other, smaller motion may then be minimized.
  • The selection and amplification of the dominant motion and minimization of other motion may further expand a user's ability to take advantage of motion user interfaces and may also allow the handheld device, or applications running on the device, to filter out undesired, user-induced noise. With this capability, the user may be able to, for example, move the device left to pick a list to examine, then scroll that list by moving up and down. Motion along inappropriate axes may be ignored or substantially reduced by the device.
  • In particular embodiments, the selection and amplification of a dominant motion and minimization of other motion may also be applied to rotational motion of the device. Dominant motion around an axis may be selected and amplified in the same manner as motion along an axis as described above with respect to translational motion. Moreover, rotation around another axis (that is not dominant rotation) may be minimized.
  • FIG. 6 illustrates a preferred motion selection flowchart 60, in accordance with a particular embodiment of the present invention. In flowchart 60, raw data corresponding to movement of a handheld device is received. In the illustrated embodiment, the movement raw data includes x-acceleration data 62 a, y-acceleration data 62 b and z-acceleration data 62 c that is processed at step 64 to yield an output indicating movement of the device. Other embodiments may include other types of movement raw data, such as optical or camera data, gyro data and/or rangefinder data. After the processing of the raw acceleration data 62, a dominant axis of motion is selected at step 66. If the selected dominant axis of motion is the x-axis, then the movement along the x-axis is augmented at step 68 a. If the selected dominant axis of motion is the y-axis, then the movement along the y-axis is augmented at step 68 b. If the selected dominant axis of motion is the z-axis, then the movement along the z-axis is augmented at step 68 c. The amount of augmentation of movement in the dominant axis of motion may vary in different embodiments according to the application being utilized or other characteristics. In some embodiments, user preferences 69 may be utilized to determine type or amount of movement augmentation. Movement along axes other than the dominant axis of motion may be minimized as discussed above, such that such movement is ignored by a particular application in use. At step 70, the augmented movement is processed to yield device behavior 72. This processing step may include accessing an application being used to determine the particular device behavior to perform based on the augmented movement. Augmented movement may yield any of a number of types of device behavior according to the application in use, a particular user or otherwise.
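  • A minimal Python sketch of preferred motion selection is shown below, under assumed numbers: the dominant axis is amplified and the other axes are minimized, but only when the dominant motion is at least twice as large as any other motion. The 2x dominance ratio and the 1.5x gain are illustrative; a real device might draw them from user preferences 69 or the application in use.

```python
# Sketch of preferred motion selection (FIG. 6) under assumed parameters: the
# dominant axis is amplified and the other axes minimized, but only when the
# dominant motion is at least `dominance_ratio` times larger than the others.

def prefer_dominant_motion(motion, gain=1.5, dominance_ratio=2.0):
    """motion is {'x': dx, 'y': dy, 'z': dz}; returns the adjusted motion."""
    dominant = max(motion, key=lambda axis: abs(motion[axis]))
    others = [abs(v) for axis, v in motion.items() if axis != dominant]
    if abs(motion[dominant]) < dominance_ratio * max(others):
        return dict(motion)                   # ambiguous: leave the motion alone
    adjusted = {axis: 0.0 for axis in motion}     # minimize the other axes
    adjusted[dominant] = motion[dominant] * gain  # amplify the dominant axis
    return adjusted

print(prefer_dominant_motion({"x": 10.0, "y": 2.0, "z": 1.0}))  # x amplified
print(prefer_dominant_motion({"x": 5.0, "y": 4.0, "z": 1.0}))   # left unchanged
```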
  • For particular user interfaces utilizing motion input, there may be value in having the position of a virtual display, or the information displayed at display 12 of handheld device 10, linked to the position of the device. For example, in particular embodiments using translation-based input such as for navigating a map displayed at the device, the position of the handheld device may directly determine the portion of the map displayed at display 12. However, if device position information is kept in absolute terms (e.g., as with global positioning satellite (GPS) based systems) the utility for many tasks such as map or menu navigation may be impaired. Thus, it is beneficial in certain circumstances to define a “zero point,” or an origin in a local context, that may be used to determine the behavior of the device. For example, if a zero point is defined when the device is at a point A, then motion between point A and a point B may be used as input. Particularly useful applications of setting a zero point may include external behaviors such as moving the virtual display or locating applications in the space around a user's body. Setting a zero point also addresses internal behaviors such as instructing the device to ignore the gravitational acceleration at the current orientation to allow the device to act only on additional, and presumably user generated, accelerations.
  • Handheld devices according to particular embodiments may include application user interfaces that utilize motion input only at certain times. At other times, for example, the motion of the device may not be utilized as input, and it may be useful to disengage or “turn off” motion sensitivity or the motion detection capability of the device. Disengagement of motion sensitivity may comprise, for example, deactivation of motion detector 22 of device 10 or other component, such as a motion response module of the device. Particular embodiments thus allow for the selective engagement and disengagement of the motion sensitivity of the device.
  • As an example, a motion response module which modifies display 12 based on motion detected at motion detector 22, may have a mode of operation in which it awaits a trigger for switching to another mode of operation in which motion sensitivity is enabled. When motion sensitivity is not enabled, any motion of the device may be disregarded. The trigger may also set a zero-point for the device. When the zero-point is set, the motion response module may measure a baseline orientation of the device based on measurement from motion detection components. The baseline orientation may comprise the position of the device (determined from information from motion detector components) when the trigger is received. Future movement of the device will be compared against the baseline orientation to determine the functions to perform or the modifications which should be made at display 12 based on the user's motion of the device.
  • Particular embodiments provide for any number of user-initiated actions to act as a single trigger for zero-point selection and selective engagement/disengagement of the motion sensitivity of the device. Such actions may include, for example, the pressing of a key on input 14, moving device 10 in a particular way (e.g., movement corresponding to a particular gesture), and tapping display 12. It should be understood that any user-initiated action may set a zero-point and engage motion sensitivity of the device at the same time.
  • In some embodiments, a period of inactivity or minimal activity (i.e., relative stillness) may also set a zero-point and engage or disengage motion sensitivity. FIG. 7 illustrates a flowchart 80 for the passive setting of a zero-point for a handheld device. Change in acceleration with respect to an x-axis is detected at step 82 a, change in acceleration with respect to a y-axis is detected at step 82 b and change in acceleration with respect to a z-axis is detected at step 82 c. At steps 84 a, 84 b and 84 c, it is determined whether any acceleration change detected is greater than a particular respective threshold. If detected acceleration change along each of the three axes is not greater than a set threshold, then the device may be considered at rest, and at step 86 a zero-point will be set. An at rest position may be determined, for example, from stabilization of the raw data or motion components of components of motion detector 22. However, if detected acceleration change along any of the three axes is greater than a set threshold, then the process returns to acceleration change detection at steps 82. Thus, this method of passively setting a zero-point may ensure that when the handheld device is at rest, a zero point will be set. Moreover, if a device is in constant motion but not being moved by a user at a particular time (e.g., resting in a train moving at a constant speed), a zero-point will be set since there will be no detected change in acceleration. The use of thresholds to determine whether an acceleration change is high enough so as not to trigger the setting of a zero-point enables a user to hold the device still to passively set the zero-point. Otherwise, this may be difficult since a device with highly sensitive accelerometers may detect acceleration change as a result of very minor unintended, user-initiated movement. It should be understood that similar methods may be used in connection with motion detectors with components other than accelerometers. Thresholds may also be used in such similar methods to account for small, unintended movements that may otherwise prevent setting of a zero point.
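  • The passive zero-point logic of flowchart 80 might be sketched as follows, where a zero point is set whenever the per-axis change in acceleration stays below a threshold; the threshold value and the sample format are assumptions for illustration.

```python
# Sketch of the passive zero-point logic of FIG. 7: if the change in acceleration
# on every axis stays below a threshold, the device is treated as at rest and the
# current reading is stored as the zero point. Threshold value and sample format
# are assumed.

REST_THRESHOLD = 0.02   # assumed maximum per-axis change to count as "at rest"

class ZeroPointTracker:
    def __init__(self):
        self.previous = None
        self.zero_point = None

    def update(self, sample):
        """sample is (ax, ay, az); sets the zero point when the device is still."""
        if self.previous is not None:
            changes = [abs(c - p) for c, p in zip(sample, self.previous)]
            if all(change <= REST_THRESHOLD for change in changes):
                self.zero_point = sample          # step 86: set the zero point
        self.previous = sample

tracker = ZeroPointTracker()
tracker.update((0.00, 0.00, 9.81))
tracker.update((0.01, 0.00, 9.80))    # nearly still: zero point is set
print(tracker.zero_point)
```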
  • Particular embodiments of the present invention include the ability to allow a user to repeatedly selectively engage and disengage the motion sensitivity of the handheld device in order to allow greater movement through a virtual desktop (or information space) using motion input in a limited amount of physical space. This process can be analogized to “scrubbing” with a mouse controlling a cursor, or lifting the mouse off of a surface and replacing the mouse on the surface at a different location to allow greater movement of the cursor. Lifting the mouse breaks the connection between the motion of the mouse and the motion of the cursor. Similarly, a user may be able to engage and disengage the connection between the motion of a handheld device, such as device 10, and the operations, functions or actions based on movement of the device.
  • FIG. 8 illustrates an example of the use of scrubbing functionality to navigate across a virtual desktop, or information space, larger than the display of a handheld device. In the illustrated example, a handheld device is used to navigate through virtual desktop 90. Virtual desktop 90 is illustrated as a grid map and may represent any suitable information through which a user may desire to navigate. Information of the virtual desktop displayed at the handheld device is represented by box 92. In this embodiment, translation of the handheld device is used to navigate through virtual desktop 90. For example, a user may move the handheld device from right to left to navigate from right to left through information of virtual desktop 90. While the illustrated example describes moving the device to the right to implement the scrubbing process, it should be understood that handheld devices of particular embodiments may be moved in any suitable manner to implement the scrubbing process.
  • As indicated above, box 92 represents the information of virtual desktop 90 currently displayed at the device. If a user desired to view information represented at box 94, the user may move the handheld device from left to right. For purposes of this example, imagine that the user moves the device to the right, and the information of virtual desktop 90 contained in box 94 is displayed at the device. Also imagine that the user's arm is now outstretched to the user's right such that the user must walk or otherwise move further right in order to view at the display of the device information of virtual desktop 90 that is to the right of box 94. If this were the case, and the user could not, or did not desire to, walk or otherwise move further to the right so that the device would be moved further to the right, then the user could selectively disengage the motion sensitivity of the handheld device, move the device back to the left, selectively reengage the motion sensitivity of the device and move the device back to the right to display information to the right of box 94. In this manner, the user could display information of virtual desktop 90 contained in box 96, and this process could be repeated to display information contained in box 98, which is further to the right of box 96.
  • The selective disengagement and reengagement of the motion sensitivity of the device in order to allow greater movement within a virtual desktop in a limited amount of physical space may be enabled in any of a variety of ways, such as by a key on an input of the device, moving the device according to a particular gesture or movement (e.g., an arc movement) or tapping the device display. Any other user-initiated action may be used to disengage and reengage motion sensitivity for this purpose. Particular embodiments may allow multiple actions to disengage and reengage motion sensitivity of the device. Moreover, a user action that disengages motion sensitivity of the device may be different from a user action that reengages motion sensitivity. This scrubbing process may be performed in any suitable application, such as map navigation, menu navigation and scrolling through a list.
  • FIG. 9 is a flowchart illustrating the steps of the scrubbing process described above with respect to FIG. 8, in accordance with a particular embodiment. The flowchart begins at step 100, where the handheld device is moved to the right to go from displaying information of box 92 of virtual display 90 to information of box 94. As indicated above, the user may desire to display information further to the right of box 94 but may have run out of physical space to move the device further to the right. Thus, the user disengages the motion sensitivity of the device at step 102. Any suitable user action may perform such disengagement, such as the pressing of a button on the device or moving the device according to a particular gesture. At step 104, the user moves the device to the left so that the user will have more physical space through which the user may move the device to the right when motion sensitivity is reengaged.
  • At step 106, the user reengages motion sensitivity of the device. Again, such reengagement may be performed by any suitable user action, and such user action may be different from the user action performed to disengage motion sensitivity in step 102. Since motion sensitivity has been reengaged, the user moves the device to the right at step 108 in order to change the information being displayed at the device from the information of box 94 to the information of box 96. At step 110, it is determined whether further movement of the device to the right is needed. If further movement is needed (e.g., to display the information of virtual display 90 in box 98), then the process returns to step 102 where motion sensitivity of the device is again disengaged. If no further movement is needed, then the process ends. As indicated above, this scrubbing process may be utilized in any suitable application of the device that supports motion input, and the device may be moved in any suitable manner in order to implement this functionality.
  • As indicated above, a particular movement of the device (e.g., a particular gesture) may be utilized in the scrubbing process to signal to the device not to change the information presented on the display during such movement. This allows the user to return the device to a position from which the user may move the device to further change the information presented on the display. For example, the device may be at a base reference position from which movement of the device changes the information displayed. A particular predetermined movement (e.g., an arc movement) may be used which may signal to the device not to change the information displayed based on motion until the movement is completed. Once the predetermined movement is completed, the base reference position may be reset such that future movement of the device from the base reference position further changes the information displayed. The base reference position may identify a baseline orientation of the device represented by baseline components of the motion data received by the motion detection components of the device. In particular embodiments, gestures may be received, as determined by movement from the base reference position, to perform particular commands which change the information displayed at the device.
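  • A minimal Python sketch of the scrubbing behavior is given below: while motion sensitivity is disengaged, device movement does not alter the view, and reengaging resets the base reference position. The one-dimensional, one-to-one mapping of device position to view offset is an assumption for illustration.

```python
# Sketch of the scrubbing idea: disengaging motion sensitivity is analogous to
# lifting a mouse off the desk; reengaging resets the base reference position.
# The 1-D, one-to-one mapping of device position to view offset is assumed.

class ScrubbingNavigator:
    def __init__(self):
        self.engaged = True
        self.base_position = 0.0     # base reference position of the device
        self.view_offset = 0.0       # where we are within the virtual desktop

    def disengage(self):
        self.engaged = False         # like lifting a mouse off the surface

    def reengage(self, device_position):
        self.engaged = True
        self.base_position = device_position   # reset the base reference

    def on_move(self, device_position):
        if self.engaged:
            self.view_offset += device_position - self.base_position
            self.base_position = device_position
        return self.view_offset

nav = ScrubbingNavigator()
print(nav.on_move(0.3))    # move right: view scrolls right (0.3)
nav.disengage()
nav.on_move(0.0)           # bring the arm back: view does not change
nav.reengage(0.0)
print(nav.on_move(0.3))    # move right again: view scrolls further (0.6)
```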
  • As discussed with respect to various embodiments described above, handheld devices in accordance with particular embodiments may utilize multiple types or modes of input to operate the device. Such input modes include motion input modes, such as a translation input mode and a gesture input mode. While multiple input modes may sometimes be used in combination with each other, in some cases the handheld device may be set to recognize a certain mode type at one time. In some situations, the handheld device may be set to function based on multiple types of non-motion input and only one type of motion input (e.g., translation or gesture) at a particular time.
  • To facilitate this flexibility of the handheld device in recognizing multiple input modes, in particular embodiments a certain trigger may be used to switch between input modes. For example, a user may press a particular key or may move the device in a certain manner (e.g., a particular gesture) to switch input modes. In some cases where an application of the device recognizes and functions based upon multiple types of motion input, a particular key may be pressed or a particular gesture may be formed using the device to switch between a translation motion input mode and a gesture motion input mode. The trigger may also comprise the mere switch from one application to another or the switch from one displayed image to another. In some situations, the trigger may switch between a non-motion input mode and a motion input mode. Any particular user-action may be implemented to act as a trigger to switch between different input modes, such as between different motion input modes. In some embodiments, a voice command or physical action upon the device (e.g., a device or screen tap) may be utilized to switch input modes.
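  • The mode switching described above might be sketched as follows; the mode names and trigger labels are assumptions for illustration rather than terms used by the device:

```python
# Hypothetical sketch of an input-mode switch; mode names and trigger
# handling are assumptions for illustration only.

MODES = ("non_motion", "translation", "gesture")

class InputModeManager:
    def __init__(self, mode="non_motion"):
        self.mode = mode

    def on_trigger(self, trigger):
        """A key press, gesture, tap or application switch may change modes."""
        if trigger == "mode_key":                 # cycle through all modes
            self.mode = MODES[(MODES.index(self.mode) + 1) % len(MODES)]
        elif trigger == "switch_gesture":         # toggle translation/gesture
            self.mode = "gesture" if self.mode == "translation" else "translation"
        return self.mode

mgr = InputModeManager("translation")
print(mgr.on_trigger("switch_gesture"))   # -> "gesture"
print(mgr.on_trigger("mode_key"))         # -> "non_motion"
```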
  • In particular embodiments, a user action that reengages motion sensitivity of a device may also contain other information that might otherwise affect device behavior. For example, if a user makes one motion to reengage translation sensitivity, it may render the device more sensitive to motion than if the user makes a different motion to reengage motion sensitivity. A reengaging motion may comprise a gesture that indicates the user's identity or context, thereby engaging a variety of operational settings, such as user preferences.
  • As indicated above, particular embodiments include the ability to receive motion input to control various functions, tasks and operations of a handheld device and may be used to alter information displayed at the device in the process. In some cases, such motion input may be in the form of gestures, as opposed to mere translation-based input. Gesture input may be used to navigate through a multidimensional menu or grid in some applications. For example, as discussed above with respect to the scrubbing process, a display of a handheld device may be smaller than the amount of information (e.g., menu options, map information) that can be presented on the display. This may lead to a menu structure that is narrow and deep. In many cases, broad, shallow menu structures may be preferred over a narrow, deep menu structure because a user is not required to remember as much information concerning where functionalities are located.
  • FIG. 10A illustrates an example of menu navigation using gesture input, in accordance with a particular embodiment. In the illustrated embodiment, a handheld device is used to navigate through virtual desktop 120. Virtual desktop 120 includes a menu tree with menu categories 122 for selection. Each menu category 122 may include respective sub-categories for selection. In some embodiments, menu categories 122 may comprise categories of functions, while sub-categories of each menu selection may include the actual functions under each such category. In other embodiments, menu categories may comprise nouns (e.g., “folder,” “document,” “picture”), while sub-categories comprise verbs (e.g., “move,” “paste,” “cut”). If the handheld device comprised a cellular phone, menu categories 122 may include “calls,” “phone book,” “messages,” “planner,” “sounds,” “setup” or other items. Each menu category 122 may include functions which may be accessed once a menu category 122 is selected. While two menu levels are illustrated in FIG. 10A, it should be understood that a multidimensional desktop or display of information for motion interface navigation may include any number of selections (e.g., menus) utilizing any number of levels.
  • In the illustrated example, menu category 122 e has been selected, and sub-categories 124 of menu category 122 e are displayed as available for selection. Boxes 126 and 128 represent information displayed at the handheld device for the user. As illustrated, virtual desktop 120 includes more information, or menus, than can be displayed at the device at one time. A user may move the device according to particular gestures to navigate across or through the virtual desktop. Gestures may also be used to navigate through different menu levels and to make menu selections. As an example, a user may move device 10 in the form of a clockwise circle 130 to navigate a predetermined amount to the right across virtual desktop 120 (e.g., moving from information of box 126 to information of box 128). A particular menu category 122 may be selected by an away gesture 132, or a downward gesture (e.g., to select menu category 122 e), and thus display sub-categories 124 for selection. Similarly, to move to the left across virtual desktop 120, a user may move device 10 in the form of a counterclockwise circle 134. In some cases, navigation may be accomplished through four gestures: a forward gesture, a backward gesture, a left gesture and a right gesture. In some embodiments, gestures comprising motion vectors in perpendicular directions may be used for navigation.
  • In particular embodiments, gestures may be used that are mirror images of other used gestures to execute opposite functions from those accomplished by the other gestures. For example, a motion toward a user might zoom in while an opposite motion, a motion away from the user, may zoom out. Using mirror image or reciprocal gestures mapped to opposite functions may make a motion user interface for a device easier to learn and to use.
  • In some cases, the menu item at the center of the display may be highlighted for selection, while in other cases a particular gesture may indicate which menu selection of a plurality of displayed selections a user desires to select. It should be understood that the menus or other information through which a user may navigate using gestures may be presented at the handheld device in any number of ways. In some embodiments, only one level of information (i.e., one menu level) may be displayed at once, while sub-levels or higher levels are not displayed until they are available for selection.
  • FIG. 10B illustrates example gestures which may be utilized to perform various functions, such as functions enabling a user to navigate through a virtual desktop. The illustrated example gestures include an “up” gesture 133 to navigate in an upward direction through the desktop, a “down” gesture 135 to navigate down, a “left” gesture 136 to navigate left, a “right” gesture 137 to navigate right, an “in” gesture 138 to navigate in a direction towards the user and an “out” gesture 139 to navigate away from the user. It should be understood that these are mere example gestures and commands for particular embodiments, and other embodiments may include different gestures or similar gestures mapped to different commands for navigating through a desktop or performing other functions with a handheld device.
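  • One possible sketch of mapping such gestures to navigation commands across a virtual desktop is shown below; the gesture names and the coordinate model are illustrative assumptions:

```python
# Hypothetical gesture-to-command map for menu navigation (FIGS. 10A-10B);
# gesture names and the position model are assumptions for illustration.

GESTURE_COMMANDS = {
    "up": (0, -1), "down": (0, +1),
    "left": (-1, 0), "right": (+1, 0),
    "clockwise_circle": (+1, 0),         # move a preset amount right (gesture 130)
    "counterclockwise_circle": (-1, 0),  # move a preset amount left (gesture 134)
}

def navigate(position, gesture):
    """Return a new (column, row) position on the virtual desktop."""
    dx, dy = GESTURE_COMMANDS.get(gesture, (0, 0))
    return position[0] + dx, position[1] + dy

pos = (0, 0)
pos = navigate(pos, "clockwise_circle")   # e.g., box 126 -> box 128
pos = navigate(pos, "down")               # descend to sub-categories
print(pos)                                # (1, 1)
```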
  • FIG. 11 illustrates another example of map navigation using motion input, in accordance with a particular embodiment of the present invention. FIG. 11 includes a virtual desktop 140 representing an information grid divided into sixteen portions, each referenced by a respective letter (A, B, C, . . . P). Portions of virtual desktop 140 are identified using reference letters only for purposes of describing particular embodiments, and portions of virtual desktops in accordance with other embodiments may or may not be identified in a device application by a reference character or otherwise. Virtual desktop 140 includes more information than can be displayed at a particular handheld device at one time. Virtual desktop 140 may represent any suitable information through which a user may desire to navigate using a handheld device, such as a street map. A user may desire to navigate across virtual desktop 140 to display different portions of information on the handheld device display and may also desire to zoom in (and out of) virtual desktop 140 (i.e., change the granularity of the information displayed) to more clearly view certain portions of the information of virtual desktop 140.
  • In the illustrated example, box 142 represents information currently displayed at handheld device 10. Box 142 includes portions A, B, E and F of virtual desktop 140. In particular embodiments, if a user desires to change the information of desktop 140 displayed at the device to, for example, information of boxes C, D, G and H, then the user can use motion input to move box 142 representing the display of the device to the right the necessary amount (two portions to the right in the illustrated example). Such motion input may comprise translation input (moving the handheld device 10 to the right an applicable amount to change the information displayed) or a gesture input (moving the handheld device 10 according to a particular gesture mapped to this function). As an example, one gesture may be mapped to moving the display one portion to the right, while another gesture may be mapped to moving the display two portions to the right. Thus, using translation input or gesture input, the user may navigate across desktop 140.
  • The handheld device 10 may also allow the user to zoom in on certain information displayed for a clearer view of such information, for example, through translation input or gesture input. As an example using gesture input, if the information displayed at the device included four of the sixteen portions (e.g., box 142 displaying portions A, B, E and F), then a user may use one of four gestures, each mapped to zoom in on a particular portion, to zoom in on one of the four displayed portions. If the user moved the handheld device according to the gesture mapped to zoom in on portion B, then the device may display information represented by box 144 (portions B1, B2, B3, B4, B5, B6, B7, B8 and B9) collectively forming the information of portion B of virtual desktop 140 in an enlarged view. Thus, the information of portion B may be displayed larger and more clearly. When viewing the information of box 144 on the device, the user may zoom out or zoom in again on a particular portion currently displayed using appropriately mapped gestures. If the user moved the handheld device according to a gesture mapped as zooming in on portion B2 (this could be the same gesture as that used to zoom in on portion B when information of box 142 was displayed), then the device may display information of box 146 (portions B2 a, B2 b, B2 c, B2 d, B2 e, B2 f, B2 g, B2 h and B2 i). The user may also be able to navigate across the virtual desktop when zoomed in on a particular portion. For example, when zoomed in on portion B (viewing information of box 144), the user may use translation or gesture input to move across the virtual desktop to view enlarged views of portions other than portion B. As an example, when viewing information of box 144 the user may make a gesture that moves the information displayed to the right such that the entire display shows only information of portion C of virtual desktop 140 (i.e., zoomed in on portion C showing portions C1, C2, C3, C4, C5, C6, C7, C8 and C9). It should be understood that a user may navigate through the information of virtual desktop 140 (both navigating across and zooming in and out) in any suitable manner using motion input.
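  • A simplified sketch of the zoom navigation described above follows; the labeling scheme and function names are assumptions chosen to mirror the reference letters of FIG. 11 (numeric suffixes are used at every level here purely for simplicity):

```python
# Hypothetical model of zoom navigation over the grid of FIG. 11;
# label generation and function names are assumptions for illustration.

def zoom_in(label):
    """Zooming in on a portion exposes its nine sub-portions."""
    return [f"{label}{i}" for i in range(1, 10)]

def zoom_out(label):
    """Zooming out returns to the parent portion (drop the last suffix)."""
    return label[:-1] if len(label) > 1 else label

print(zoom_in("B"))     # ['B1', ..., 'B9']  -> analogous to box 144
print(zoom_in("B2"))    # ['B21', ..., 'B29'] -> analogous to box 146
print(zoom_out("B2"))   # 'B'
```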
  • As indicated above, any suitable gestures may be used to both navigate across a virtual desktop (or across a particular level) and to navigate between or through different levels or dimensions of a multidimensional desktop. Moreover, in some embodiments motion, such as gestures, may be used to navigate across the multidimensional desktop, and non-motion actions may be used to select or navigate between dimensions. Such non-motion actions may include the pressing of a key in a device input. Thus, a combination of motion and non-motion actions may be used for multidimensional virtual desktop or menu navigation in particular embodiments.
  • Particular embodiments may allow gesture-based navigation through any suitable application, such as a multidimensional grid, menu, calendar or other hierarchical application. For example, in a calendar application certain gestures may be used to navigate within one level, such as months, while other gestures may be used to navigate between levels, such as between years, months, days, hours and events. Moreover, different applications implemented in the handheld device that use such gesture navigation may use different gestures. Thus, the particular navigation gestures may change according to the particular application in use. In some embodiments, translation-based interfaces may be used to navigate through multidimensional information of a virtual desktop, as opposed to merely using gesture-based movements. For example, movement along x and y-axes may be used to travel within a level of a hierarchy, while movement along a z-axis may be used to travel between levels of the hierarchy.
  • Another example might involve using a telephone directory with institutions, letters of the alphabet, names, contact details (e.g., office, cellular and home phone numbers, email) and actions to initiate contact, all along different levels of a hierarchy. In this example, the hierarchy may contain information (nouns) and actions (verbs). One could map this example onto only two axes with, for example, a y-axis being used to select within a level of hierarchy and an x-axis being used to move between levels. A z-axis could be used to confirm the actions and help prevent inadvertent execution of actions.
  • In some cases, the number of levels traversed may depend on the magnitude of the motion, particularly in translation-based navigation. Moving the device a small amount may navigate one level at a time, while moving the device a large amount may navigate multiple levels at a time. The greater the magnitude of motion, the more levels may be navigated at once. As applied to gesture-based motion inputs, different gestures may be used to navigate different numbers of levels in the hierarchy at once. These gestures may be different magnitudes of the same motion or entirely different motions.
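  • A sketch of magnitude-dependent traversal might look like the following; the step and cap values are arbitrary assumptions:

```python
# Illustrative mapping from motion magnitude to the number of hierarchy
# levels traversed at once; threshold values are arbitrary assumptions.

def levels_for_magnitude(magnitude, step=1.0, max_levels=5):
    """Larger translations (or larger gestures) jump more levels at once."""
    return max(1, min(max_levels, int(magnitude // step) + 1))

print(levels_for_magnitude(0.4))   # 1 level for a small motion
print(levels_for_magnitude(2.7))   # 3 levels for a larger motion
```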
  • The use of motion interface navigation through a multidimensional desktop or information display may allow menus to be flattened, since a user may more easily navigate across a particular menu or dimension of a virtual desktop too large to fit on a device's display. As a result of the menu flattening, the user may be required to memorize less information, thus increasing the functionality and capability of the device for the user.
  • As discussed above, handheld devices in particular embodiments allow a user to navigate across a virtual desktop using motion input. In some cases, a user may utilize a cursor to navigate across information displayed at the handheld device. For example, certain information may be displayed at the device, and the user may utilize motion input to move the cursor around the device display and to select particular items displayed to perform certain functions. In some cases, motion input may be utilized to move a cursor, while a non-motion action (such as pressing a button) may be used to select an item currently indicated by the cursor. It should be understood that both gesture and translation motion input may be utilized in various embodiments of cursor navigation.
  • In particular embodiments, information displayed may be fixed with respect to the device, and the cursor may remain fixed in space such that movement of the device is used to navigate the cursor across the information. FIG. 12A illustrates an example utilizing this form of motion input cursor navigation. Display 147 represents a display of a handheld device. In order to describe this cursor navigation example, the display has been divided into a grid to show information being displayed. The grid includes portions A-P. Display 147 includes a cursor 148 between portions C, D, G and H. As stated above, in this embodiment the information displayed remains fixed with respect to the device when the device is moved, and the cursor remains fixed in space. However, the cursor's position with respect to the information displayed changes according to motion input. When the device is translated to the right, according to right movement 149, the cursor is translated according to a motion opposite to the translation of the device.
  • Display 150 represents a possible display after the device is moved according to right movement 149, with cursor 148 now between portions A, B, E and F. It should be understood that since this example involves translation-based input, the magnitude of the movement of the device (e.g., to the right in this example) may directly affect the magnitude of the movement of the cursor with respect to the information displayed. Display 152 represents another display after the handheld device has been moved according to up movement 151, with cursor 148 now between portions I, J, M and N. As is evident, the cursor will have moved down with respect to information displayed since it remains fixed in space. Display 154 represents another display after the handheld device has been moved according to left movement 153, with cursor 148 now between portions K, L, O and P. As is evident, the cursor will have moved to the right with respect to information displayed. Thus, in this form of cursor navigation, motion of the device changes the position of the cursor on the information. In this manner, for example, the handheld device may be moved instead of using a stylus to point to certain portions of information displayed.
  • At any point in the cursor navigation process, a user may utilize any form of input (e.g., gesture, key press, etc.) to select or otherwise perform a function according to the information currently indicated at the cursor. For example, with respect to display 152, a user may use a particular gesture or press a button to zoom in, select or perform some other function based on information between portions I, J, M and N currently indicated by cursor 148.
  • As described above with respect to FIG. 12A, particular embodiments may translate the cursor in a motion opposite from the motion of the device to move the cursor across the information displayed. In one example implementation, input motion of the device may be divided into motion along each of three axes, two of which are parallel to the device display (e.g., an x-axis and a y-axis). While motion of the device in the x-axis and y-axis plane changes information displayed at the device based on such motion, the cursor may be moved at the same time according to a translation vector that is opposite to the sum of movement in the x-axis direction and the y-axis direction to substantially maintain the position of the cursor in space. In some cases, when the cursor would move past a display edge off of the display according to the translation vector, the vector may be reduced in order to keep the cursor within the display. Such reduction may include reducing one or more components of the translation vector to maintain the cursor within a certain distance from the display edge.
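  • The opposite-translation cursor behavior, including the edge clamping described above, might be sketched as follows; the coordinate units and margin value are assumptions:

```python
# Sketch of the opposite-translation cursor of FIG. 12A: the cursor is moved
# by the negative of the device's in-plane translation, then clamped so it
# stays a margin away from the display edge. All names are assumptions.

def update_cursor(cursor, device_delta, display_size, margin=2):
    cx, cy = cursor
    dx, dy = device_delta                      # device motion in the x-y plane
    nx, ny = cx - dx, cy - dy                  # translate the cursor oppositely
    w, h = display_size
    nx = max(margin, min(w - margin, nx))      # reduce the vector if it would
    ny = max(margin, min(h - margin, ny))      # push the cursor off-screen
    return nx, ny

cursor = (50, 40)
cursor = update_cursor(cursor, (30, 0), (100, 80))   # device moved right
print(cursor)   # (20, 40): cursor shifted left relative to the display
```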
  • It should be understood that the division of information displayed into portions A-P is done only for illustrating and describing the embodiments described above, and information displayed at handheld devices of particular embodiments may not include such a division or other type of reference information.
  • FIG. 12B illustrates another form of motion input cursor navigation, in accordance with a particular embodiment. In this example, the cursor remains in a fixed position with respect to the display while motion input is used to navigate across a virtual desktop larger than the device display. FIG. 12B includes virtual desktop 158 which comprises information through which a user may navigate using motion input at a handheld device, such as a street map. Virtual desktop 158 includes more information than can be displayed at a particular handheld device at one time. In order to describe this cursor navigation example, virtual desktop 158 has been divided into a grid to differentiate between information presented at the desktop. The grid includes 6 rows (A-F) and 7 columns (1-7). Portions of the grid may be identified herein for this example using their row letter and column number (e.g., portion B7 or D2). It should be understood that the division of virtual desktop 158 into portions referenced by row and column number is done only for illustrating and describing the embodiments described above, and the virtual desktops of particular embodiments may not include such a division or other type of reference information.
  • Box 160 represents information of virtual desktop 158 currently displayed at a handheld device. Display 161 represents the display of a handheld device showing information of box 160. Display 161 also includes a cursor 159 positioned at the intersection of portions B2, B3, C2 and C3. As indicated above, as a user utilizes motion input to move around the virtual desktop (i.e., to change the information displayed at the device), the cursor remains in a fixed position with respect to the display. However, the cursor's position changes with respect to the information of the virtual desktop displayed at the handheld device. For example, a user may utilize motion input to change the information displayed at the device to that represented at box 162. The information displayed at the device changes (to portions B5, B6, C5 and C6); and cursor 159 will remain fixed at the device display (e.g., in this case at the center of the display) such that its position changes with respect to the information of virtual desktop 158, as illustrated at display 163. If the user desired to use motion input to change the information displayed at the device to that represented at box 164, the information displayed at the device changes to portions E3, E4, F3 and F4, as illustrated at display 165. Cursor 159 is positioned between these illustrated portions in the center of the display since its position relative to the display remains fixed in this embodiment.
  • Thus, according to the form of cursor navigation illustrated with respect to FIG. 12B, the cursor will remain at a position fixed with respect to the device display, while its position with respect to the information of a virtual desktop changes. As discussed above with respect to the embodiment described and illustrated with respect to FIG. 12A, at any point in the navigation process, a user may utilize any form of input (e.g., gesture, key press, etc.) to select or otherwise perform a function according to the information currently indicated at the cursor. For example, with respect to display 163, a user may use a particular gesture or press a button to zoom in, select or perform some other function based on information between portions B5, B6, C5 and C6 currently indicated by cursor 159.
  • It should be understood that any particular input, such as a gesture or key press, may be used to switch cursor navigation modes at the device. For example, a user may switch between the translation-controlled cursor mode of FIG. 12A and the fixed cursor mode of FIG. 12B.
  • As discussed above, particular embodiments allow a user to move handheld device 10 according to a gesture to perform particular functions or operations. In some cases, however, a user may not move the device according to the particular gesture intended, and the device may, as a result, not be able to recognize the movement as the intended gesture. In order to indicate that a particular movement of the device by a user is recognized as a particular gesture, handheld devices in some embodiments provide feedback to notify the user that the movement was in fact recognized as a gesture.
  • This feedback may comprise an audio format, such as speech, a beep, a tone or music; a visual format, such as an indication on the device display; a vibratory format; or any other suitable feedback format. Audio feedback may be provided through a user interface speaker or headphone jack of device 10, and vibratory feedback may be provided through a user interface vibration generation module of device 10. Audio, visual and vibratory feedback may be varied to provide capability for multiple feedback indicators. As an example, vibratory feedback may be varied in duration, frequency and amplitude, singly or in different combinations over time. The richness and complexity of the feedback may be expanded by using feedback of different types in combination with one another, such as by using vibratory feedback in combination with audio feedback. In some cases, the feedback may be gesture-specific, such that one or more recognized gestures have their own respective feedback. For example, when a certain gesture is recognized the device may beep in a particular tone or a particular number of times, while when one or more other gestures are recognized the beep tones or number of beeps may change. The use of audio feedback may be especially valuable for gestures that do not have an immediately visible on-screen manifestation or function, such as calling a certain number with a cellular phone. Different types of feedback may also be context or application specific in some embodiments. Different contexts might include device state, such as what application is in focus or use, battery level, and available memory, as well as states defined by the user, such as quiet or silent mode. For example, a handheld device may utilize vibratory feedback in response to gesture input while in silent mode when audio feedback would otherwise be used. This feedback process may also be utilized by a handheld motion input device of a computer or other component.
  • Similar to the feedback relating to gesture recognition described above, handheld devices in particular embodiments may also provide feedback for the user in the event that a particular user movement was not recognized as a gesture when the device is in a gesture input mode. For example, if a motion appears intended to be a gesture, but cannot be assigned to a particular gesture that is known to the device, the device may play a sound indicating failure. This notifies the user that the user must make another attempt at moving the device according to the intended gesture for the device to perform the operation or function desired. The feedback notifying the user that a movement was not recognized may also comprise an audio, visual, vibratory or other suitable format and may be a different feedback than that communicated when a particular movement is recognized by the device as a particular gesture. In order to determine whether a user's intent was to input a gesture, handheld device 10 may look at certain characteristics of the movement that imply that the motion was intended to be a gesture. Such characteristics may include, for example, the amplitude of the motion, the time course of the above-threshold motion and the number and spacing of accelerations. If a particular gesture is unrecognized by the device, a system of gestural feedback may be used to determine the gesture intended. For example, audio feedback may indicate possibilities determined by the handheld device, and the user may utilize gestures to navigate an auditory menu to select the intended gesture.
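  • The feedback behaviors described above, including gesture-specific tones, a failure indication and substitution of vibratory feedback in a silent mode, might be sketched as a simple selector; the tone names and gesture names are assumptions:

```python
# Hypothetical feedback selector combining behaviors described above:
# gesture-specific audio feedback, a failure sound for unrecognized motion,
# and vibration instead of audio while in silent mode. All names assumed.

GESTURE_TONES = {"circle": "beep_high", "shake": "beep_low"}   # example map

def feedback_for(gesture, recognized, silent_mode=False):
    if silent_mode:
        return "vibrate_short" if recognized else "vibrate_long"
    if recognized:
        return GESTURE_TONES.get(gesture, "beep_default")
    return "failure_tone"   # prompts the user to retry the intended gesture

print(feedback_for("circle", recognized=True))          # beep_high
print(feedback_for("wiggle", recognized=False))         # failure_tone
print(feedback_for("circle", True, silent_mode=True))   # vibrate_short
```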
  • In particular embodiments, a system of audio or vibratory feedback may be used such that a user could operate handheld device 10 without having to resort to viewing display 12. For example, handheld devices in some embodiments may provide audio, visual or vibratory feedback to a user navigating a menu or other information of a virtual desktop. In effect, this device feedback combined with the motion input of a user could act as a type of “conversation” between the user and the device. As indicated above, multiple types and complexities of feedback may be utilized. The feedback process could be particularly advantageous in environments where it may be inconvenient, unsafe or impractical to look at the device display (e.g., while driving or while in a dark environment).
  • It should be understood that feedback, such as audio, visual and vibratory feedback, may also be used in some embodiments in connection with translation motion input. For example, a feedback indicator may be given when a user reaches a limit or edge of a virtual desktop using translation input.
  • FIG. 13 is a flowchart 170 illustrating a process for utilizing feedback in response to motion input, in accordance with a particular embodiment. At step 172 of the process raw motion data is received at handheld device 10. As described above, the raw motion data may be received by any combination of accelerometers, gyros, cameras, rangefinders or any other suitable motion detection components. At step 174, the raw motion data is processed to produce a motion detector output indicative of the motion of the device. Such processing may include various filtering techniques and fusion of data from multiple detection components.
  • At step 176, the device state may be checked, because in some embodiments the feedback for a particular motion depends on the state of the device when the motion is received. As indicated above, example device states may include the particular application in focus or use, battery level, available memory and a particular mode, such as a silent mode. At step 178, the motion detector output is analyzed with respect to the device state. At step 180, it is determined whether the motion indicated by the motion detector output is meaningful or otherwise recognizable given the particular device state. For example, a particular gesture may perform a certain function in one application (e.g., a calendar application) while the gesture serves no function in another application. If the gesture is recognizable or meaningful given the state of the handheld device, then at step 182 feedback is provided. As indicated above, in particular embodiments feedback may be in an audio, visual or vibratory format. In some cases the feedback may merely be an indication that the device recognizes the gesture given the state of the device. In other cases, the feedback may be a further query for additional input, for example if the user was utilizing a particular application of the device that provided for a series of inputs to perform one or more functions. At step 184, the device behaves according to the motion input and device state, and the process may return to step 172 where additional raw motion data is received.
  • If it is determined at step 180 that the motion indicated by the motion detector output is not meaningful or recognizable given the particular device state, then the process proceeds to step 186. At step 186, it is determined whether the motion is above a particular threshold. This determination may be made to determine whether particular motion input was, for example, intended to be a gesture. As indicated above, threshold characteristics for this determination may include the amplitude of the motion input, the time course of the motion input and the number and spacing of accelerations of the motion. If it is determined that the motion input was not above a particular threshold, then the process may return to step 172 where additional raw motion data is received. If, however, the motion input was above a threshold such that a gesture may have been intended but was not recognized or meaningful given the device state, feedback is provided at step 188. The feedback may include audio, visual and/or vibratory feedback and may indicate that the gesture was not recognizable or meaningful. In particular embodiments, the feedback may also provide a query regarding the intended gesture or may otherwise provide the user with a number of potentially intended gestures from which the user may select the particular gesture intended by the motion. It should be understood that particular embodiments may not include some of the steps described (e.g., some embodiments may not include the threshold determination of step 186), while other embodiments may include additional steps or the same steps in a different order. As suggested above, particular embodiments may utilize motion input feedback (e.g., including feedback "conversations") in any of a number of applications and ways, and the type and complexity of feedback systems may vary greatly in different embodiments.
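  • A condensed sketch of the decision flow of FIG. 13 follows; the threshold values and data structures are assumptions used only to illustrate the recognizable and above-threshold branches:

```python
# Sketch of the decision flow of FIG. 13: motion meaningful for the current
# device state triggers feedback and behavior; otherwise, motion exceeding an
# intent threshold triggers "unrecognized" feedback. Thresholds and names
# are assumptions for illustration.

def process_motion(motion, device_state, gesture_map,
                   min_amplitude=0.5, min_accel_count=2):
    gesture = gesture_map.get(device_state.get("app"), {}).get(motion["name"])
    if gesture:                                   # steps 180-184
        return {"feedback": "recognized", "action": gesture}
    intended = (motion["amplitude"] >= min_amplitude
                and motion["accel_count"] >= min_accel_count)
    if intended:                                  # steps 186-188
        return {"feedback": "unrecognized_gesture", "action": None}
    return {"feedback": None, "action": None}     # below threshold: ignore

state = {"app": "calendar", "mode": "normal"}
gmap = {"calendar": {"circle": "next_month"}}     # per-application mapping
print(process_motion({"name": "circle", "amplitude": 0.8, "accel_count": 3},
                     state, gmap))
```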
  • As indicated above, handheld devices according to particular embodiments may receive gesture motion input to control any number of functions of any number of applications running at the device. Some applications that utilize gesture input may comprise mobile commerce (mCommerce) applications in which a mobile device, such as handheld device 10, is used to conduct various transactions, such as commercial or consumer purchases. Many mCommerce applications utilize some form of authentication to authenticate a user, such as personal identification numbers (PINs), credit card information and/or possession of the mobile device. However, many forms of authentication can be "leaked." They can be shared intentionally or inadvertently. Another form of authentication is a user's written signature, which does not suffer from such leaking problems since forgery is typically difficult to accomplish and may be easy to detect. Particular embodiments may utilize motion input to receive a user's signature as a form of authentication in mCommerce or other transactions through the handheld device.
  • A written signature may be considered a two dimensional record of a gesture. When utilizing a handheld device with motion input, a user's signature may be in three dimensions and may thus comprise a “spatial signature.” Moreover, when combined with other forms of input received at the device, a user's signature can take on any number of dimensions (e.g., four, five or even more dimensions). For example, a three-dimensional gesture “written” in space using the device and detected at motion detector 22 may be combined with key-presses or other inputs to increase the number of dimensions of the signature.
  • These spatial signatures can be tracked, recorded, and analyzed by motion detectors 22 of handheld devices. They can be recorded with varying degrees of precision with varying numbers of motion detector components to serve as an effective form of authentication. A user's spatial signature may comprise a three-dimensional form based on the user's traditional two-dimensional written signature or may comprise any other suitable gesture which the user records at the handheld device as his or her signature.
  • In some embodiments, the process for recognizing a spatial signature may involve pattern recognition and learning algorithms. The process may analyze relative timings of key accelerations associated with the signature. These may correspond to starts and stops of motions, curves in motions and other motion characteristics. In some cases, some hash of a data set of points of a signature motion may be stored, and subsequent signatures may be compared against the hash for recognition. This may further verify if the signature was genuine by determining whether it was unique. For example, in particular embodiments, a signature may be detected (e.g., by a signature detection module of device 10) by comparing a particular movement of the device with respect to an initial or reference position. Such comparison may be made by comparing a sequence of accelerations of the movement with a predetermined sequence of accelerations of a stored spatial signature. This determination may be made regardless of the scale of the user's input motion signature.
  • In some embodiments, the device can detect whether motion of the device matches the signature by determining whether positions of the device in motion relative to an initial position match the spatial signature.
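  • One possible, greatly simplified way to compare a candidate acceleration sequence against a stored spatial signature independently of scale is sketched below; the correlation-based matching criterion and the threshold are assumptions rather than the recognition algorithm of any particular embodiment:

```python
# Very simplified sketch of scale-independent signature comparison using
# correlation of normalized acceleration sequences. The matching criterion
# is an assumption; the description only requires that sequences of
# accelerations be compared.

import math

def normalize(seq):
    norm = math.sqrt(sum(v * v for v in seq)) or 1.0
    return [v / norm for v in seq]               # removes overall scale

def matches_signature(candidate, stored, threshold=0.95):
    if len(candidate) != len(stored):
        return False
    a, b = normalize(candidate), normalize(stored)
    similarity = sum(x * y for x, y in zip(a, b))
    return similarity >= threshold

stored_sig = [0.0, 1.2, -0.8, 0.4, -1.5, 0.9]     # recorded signature
attempt    = [0.0, 2.4, -1.6, 0.8, -3.0, 1.8]     # same shape, larger scale
print(matches_signature(attempt, stored_sig))     # True
```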
  • FIG. 14 illustrates an example system 200 utilizing spatial signatures as authentication for mCommerce transactions. System 200 includes handheld device 10, mCommerce application 202, authenticator 204 and communication network 206. mCommerce application 202 may comprise any suitable application for transacting business with a handheld device of a user. Such transactions may include consumer purchases, such as products or services of a business or other user from a website, online bill paying, account management or any other commercial transaction. Authenticator 204 authenticates, or verifies, a spatial signature input by the user at handheld device 10 to complete an mCommerce transaction. Authenticator 204 may store one or more spatial signatures of one or more users for authentication in an mCommerce transaction. In some embodiments, authenticator 204 may be located within handheld device 10, within mCommerce application 202 or at any other suitable location. Communication network 206 is capable of transmitting information or data between components of system 200 and may include one or more wide area networks (WANs), public switched telephone networks (PSTNs), local area networks (LANs), the Internet and/or global distributed networks such as intranets, extranets or other form of wireless or wireline communication networks. Communication network 206 may include any suitable combination of routers, hubs, switches, gateways or other hardware, software or embedded logic implementing any number of communication protocols that allow for the exchange of information or data in system 200.
  • In operation, when a user utilizes handheld device 10 to conduct a transaction with mCommerce application 202, the user may utilize motion input to communicate an authentication signature, for example by moving the device according to the user's three-dimensional signature. As an example, a user might use their cellular phone at a point of purchase (e.g., a store) instead of a credit card. Instead of signing a paper document that then needs to be shipped and processed, the user could simply move device 10 according to the user's spatial signature. As indicated above, the user's signature may include more than three dimensions in some embodiments. The signature may have been previously recorded by the user using handheld device 10 or another mobile device, and the recorded signature may be stored at handheld device 10, mCommerce application 202, authenticator 204 or other suitable location such as a signature storage database for signatures of multiple mCommerce users.
  • The motion of handheld device 10 may be processed at the device, and motion output indicative of the motion may be transmitted to mCommerce application 202. mCommerce application 202 may communicate the motion output to authenticator 204 for verification that the motion input received at device 10 was in fact the signature of the user attempting to transact mCommerce. If authenticator 204 verifies the user's signature, then mCommerce application may complete the transaction with the user. As indicated above, authenticator 204 may be located within handheld device 10 or at mCommerce application 202 in particular embodiments and may access signatures for verification stored at device 10, mCommerce application 202 or any other suitable location.
  • Authentication may also be used by handheld devices in a non-mCommerce application, for example when electronic security is desired to perform functions such as sending private or secure data using the device. A user desiring to transmit data or other information using handheld device 10 may use their spatial signature in the encryption process. Spatial signatures may be used in any of a variety of ways to secure data for communication through a network and may be utilized in connection with public/private key encryption systems. For example, in some embodiments handheld device 10 may authenticate a user's signature received through motion input and then use its own private key to encrypt data for transmission. In other cases, data may be communicated to handheld device 10 such that an intended recipient must input their spatial signature to receive the decrypted data. In some embodiments, data may be communicated to a computer wirelessly connected to handheld device 10, and the intended recipient must use handheld device 10 as a way to communicate the user's signature to the computer for data decryption. Moreover, in particular embodiments, a user's spatial signature itself may represent an encryption key such that motion of the device generates the encryption key instead of the signature motion merely being used for authentication. In some cases a device may recognize a combination of accelerations as a signature by converting the signature into the equivalent of a private key. The handheld device may then use the private key as part of an authentication process for a transaction.
  • In particular embodiments, a spatial signature may be used to manage physical access to a building or event. For example, the signature input by a user at a device may be checked against a list of people allowed to enter as IDs are checked at “will call” for an event.
  • In particular embodiments, a user may utilize motion input for handheld device 10 to control other devices, such as audio/video equipment, home appliances and devices, computing devices or any other device capable of being controlled by a handheld device. Devices may be controlled by handheld device 10 through communications interface 20 of device 10 utilizing any of a number of wireless or wireline protocols, including cellular, Bluetooth and 802.11 protocols. In some embodiments, device 10 may receive motion input to control, through wireless or wireline communication, other devices through a network. Thus, devices controlled through motion input of device 10 may be at any location with respect to device 10, such as in the same room or across a country. Moreover, control of the other device may be implemented through any number of intermediate devices (e.g., through a network).
  • As an example, if handheld device 10 were a Bluetooth-enabled cellular phone, then particular gestures or other motion of the cellular phone may wirelessly communicate commands to control another device, such as a laptop across a room to drive a PowerPoint presentation. Other devices which may be controlled through motion input of handheld device 10 may include televisions, radios, stereo equipment, satellite receivers, cable boxes, DVD players, digital video recorders, lights, air conditioners, heaters, thermostats, security systems, kitchen appliances (e.g., ovens, refrigerators, freezers, microwaves, coffee makers, bread makers, toasters), PDAs, desktop and laptop PCs, computer peripheral equipment, projectors, radio-controlled cars, boats and planes and any other device. As another example, a commuter may shake their cellular phone in a certain manner to tell their heater at home to turn on before the commuter arrives at home. In some embodiments, a handheld device may receive and process raw motion data to determine commands or intended functions for communication to other devices. In other embodiments, a motion detector of a handheld device may output raw data received from its motion detection components for communication to one or more devices controlled by device 10 through the motion of device 10. As a result, different devices controlled by device 10 may treat the same raw motion data of device 10 differently. For example, a particular gesture of device 10 may perform different functions of different devices controlled by device 10.
  • FIG. 15 illustrates an example system 220 in which handheld device 10 controls multiple other devices through motion input of device 10. System 220 includes handheld device 10, laptop 222 and remote device 224 connected, through wireless or wireline links, to handheld device 10 through communication network 226. Handheld device 10 receives raw motion data of a particular motion of the device through motion detection components, such as accelerometers, cameras, rangefinders and/or gyros. The raw motion data is processed at the handheld device. Particular databases, such as gesture and gesture mapping databases, may be accessed to determine a matching gesture and intended function based on motion tracked by a control module of the device. The intended function may be for another device to be controlled by handheld device 10, such as laptop 222 or remote device 224. Thus, the motion input is the interface for the underlying operational signal communicated from device 10 to the controlled device. In other embodiments, the raw motion data or other data merely indicating a particular motion input for device 10 may be directly sent to laptop 222 and/or remote device 224 without determining a function at device 10. In these embodiments, laptop 222 and/or remote device 224 may themselves process the raw motion data received from handheld device 10 to determine one or more intended functions or operations they should perform based on the raw motion data. In some embodiments, a user of device 10 may indicate to device 10, through motion input or otherwise, the other devices to which handheld device 10 should communicate raw motion data or intended functions, as applicable. While two devices controlled by handheld device 10 are illustrated, it should be understood that particular embodiments may include any number of devices of varying types to be controlled by handheld device 10 through motion input as discussed above.
  • As indicated above, particular embodiments include the ability to control other devices, such as other local or remote devices, through motion input of handheld device 10. In some embodiments a user of handheld device 10 selects the other device that a particular motion input of device 10 is intended to control. For example, a user may use input 14 of handheld device 10 (e.g., by pressing a button or moving a trackwheel) to select a local or remote device to control before moving device 10 according to a particular motion mapped to a function or operation desired for the other device. In particular embodiments however, a user may move handheld device 10 according to a particular gesture in order to select the other device (e.g., other local or remote device) to be controlled at the time through motion input of device 10. Thus, particular embodiments provide gesture motion selection of other devices to be controlled by handheld device 10.
  • Handheld device 10 may include a device selection module operable to detect a device selection gesture which indicates that a user desires to control a particular device. Each controllable device may include its own gesture command maps which correlate gestures to be input using device 10 and commands of the controllable device. A control module of the handheld device may select a particular command map corresponding to the controllable device selected for control. In some embodiments, device 10 may include a device locator operable to detect, for each of a plurality of remote devices, a direction from the handheld device to each remote device. In this case, the user may move handheld device 10 in the direction of a particular remote device the user desires to control in order to select that remote device for control.
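  • A sketch of device selection gestures combined with per-device gesture command maps follows; the device names, gestures and commands are assumptions for illustration:

```python
# Hypothetical sketch of device selection and per-device command maps: a
# selection gesture picks the controllable device, and the same input
# gesture may then map to different commands. All names are assumptions.

COMMAND_MAPS = {
    "laptop":     {"flick_right": "next_slide", "flick_left": "previous_slide"},
    "thermostat": {"shake": "heat_on", "flick_right": "raise_setpoint"},
}

SELECTION_GESTURES = {"point_at_laptop": "laptop", "point_at_wall": "thermostat"}

class RemoteControl:
    def __init__(self):
        self.target = None

    def on_gesture(self, gesture):
        if gesture in SELECTION_GESTURES:          # device selection gesture
            self.target = SELECTION_GESTURES[gesture]
            return f"selected {self.target}"
        if self.target:                            # command for selected device
            command = COMMAND_MAPS[self.target].get(gesture)
            return f"send '{command}' to {self.target}" if command else None
        return None

rc = RemoteControl()
print(rc.on_gesture("point_at_laptop"))    # selected laptop
print(rc.on_gesture("flick_right"))        # send 'next_slide' to laptop
```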
  • While motion input for device 10 may be used for such control of the other devices, other types of input (e.g., utilizing input 14) may also be used to control other local or remote devices selected for control by gesture input. In some embodiments, different gestures may each be mapped to control a different device. In other embodiments, device 10 may display possible other devices for control and particular gesture(s) to utilize to indicate a user's selection as to which other device the user desires to presently control through device 10. Handheld devices according to the present invention may utilize any particular manner of gesture selection of one or more local or remote devices to be controlled by the handheld devices.
  • As discussed above, particular embodiments include handheld devices 10 capable of detecting motion of the device through a motion detector 22 to modify the behavior of the device in some way according to the motion detected. Handheld devices 10 in some embodiments are capable of modeling their particular environment and subsequently modifying their behavior based on such environment. One distinction between modeling an environment of a handheld device and detecting a particular motion of the device is that in the former case there may be reasoning involved and in the latter case there may be no such reasoning. As an example, if a handheld device changes its behavior when moved according to a particular gesture, that may be considered sensing or detecting a particular motion and reacting based on the motion detected. If, on the other hand, the handheld device determines that it is sitting face down on a table and reacts accordingly, that may be considered modeling its environment. As another example, if a handheld device moves to the left and changes its behavior based on such movement, that may be considered detecting motion and reacting. If the handheld device finds itself in free fall and powers-down so as to survive an impending collision with the ground, that may be considered modeling its environment. A further distinction may be that environmental modeling may not require an immediate response to a user input, while detecting an event, such as a particular motion, generally does require such an immediate response. Modeling an environment may thus involve sensing or detecting a pattern of motion (or lack thereof), matching it to a predefined set of environmental conditions and modifying the behavior of the device based on the modeled environment. The behavior implemented based on the environment modeled may also change based on a particular application in use or in focus. In some cases, the device may change its sensitivity to particular motions based on the environment modeled.
  • As an example, a handheld device may recognize, through accelerometers or other motion detection components, that it is at rest on an approximately horizontal surface. Such recognition may result from a determination that the device is not moving, or still, with a static 1 g of acceleration orthogonal to a surface. The device may be able to differentiate resting on a table from resting in a user's hand, for example, because a user's hand typically will not be able to hold the device perfectly still. The device may, as a result, behave in a certain manner according to the recognition that it is at rest on an approximately horizontal surface. For example, if handheld device 10 recognized that it was lying at rest on a table, it may power off after lying in such position for a certain amount of time. As another example, a cellular phone in a vibrate mode may vibrate more gently if it recognizes it is on a table upon receipt of a call or upon any other event that may trigger vibration of the phone. In some embodiments, the device may recognize its orientation while lying on a table such that it may behave in one manner when lying in a “face down” position (e.g., it may power off) while it may behave in a different manner when lying in a non-face down position. If handheld device 10 comprised a cellular phone, it may enter a speaker mode when it is on a call and recognizes that it is placed by a user in a “face up” position on a table while on the call. If, on the other hand, the cellular phone is on a call and is placed face down on the table, it may enter a mute mode.
  • As another example, handheld device 10 may recognize through a brief period of approximately 0 g that it is in free-fall and then may behave to reduce damage due to impending impact with the ground or another surface. Such behavior may include, for example, powering down chips and/or hard drives, retracting lenses, applying covers or any other device behavior. In particular embodiments, non-hand-held devices or devices that do not otherwise detect motion for input may also be able to model their environment and to behave based on the environment modeled. As an additional example, acceleration patterns may be detected to recognize that a handheld device 10 is in a moving environment (e.g., being held by a user in a car or on a train) and may adjust various sensitivities, thresholds and/or other characteristics to enable better performance of the device in that environment.
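  • The free-fall and at-rest determinations described above might be sketched as a simple classifier over recent accelerometer readings; the numeric tolerances are assumptions rather than values from any embodiment:

```python
# Illustrative classifier for two of the modeled environments described
# above: free fall (sustained near-0 g) and at rest on a roughly horizontal
# surface (still, ~1 g orthogonal to the surface). Tolerances are assumed.

def classify_environment(samples, g=9.81):
    """samples: recent accelerometer magnitudes in m/s^2."""
    mean = sum(samples) / len(samples)
    spread = max(samples) - min(samples)
    if mean < 0.2 * g:
        return "free_fall"            # power down drives, retract lens, etc.
    if abs(mean - g) < 0.05 * g and spread < 0.02 * g:
        return "at_rest_on_surface"   # too still to be hand-held
    return "unknown"

print(classify_environment([0.3, 0.4, 0.2]))       # free_fall
print(classify_environment([9.80, 9.81, 9.82]))    # at_rest_on_surface
print(classify_environment([9.2, 10.4, 9.7]))      # unknown (e.g., in hand)
```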
  • In other embodiments, handheld device 10 may comprise a digital camera. Through its motion detection components, the camera may determine whether it is on a tripod or is being held by a user when a picture is taken. The camera may set a shutter speed for the picture based on that determination (e.g., slow shutter speed if on a tripod or fast shutter speed if being held by a user).
  • If handheld device 10 comprised a device that utilized a cradle for syncing up with another device, such as a PC, then device 10 may recognize that it is in the cradle based on its stillness (or supported state) and its particular orientation. The device may then operate or function according to its state of being in the cradle (e.g., it may then sync up with its associated PC).
  • FIG. 16 is a flowchart 230 illustrating an environmental modeling process, in accordance with a particular embodiment. At step 232, raw motion data is received at handheld device 10. As described above, the raw motion data may be received by any combination of accelerometers, gyros, cameras, rangefinders or any other suitable motion detection components. At step 234, the raw motion data is processed to produce a motion detector output from which motion and orientation of the device is determined at step 236. Boxes 237 represent example motions and orientations of the device, such as rotating around the z-axis in box 237 a, translating along the x-axis in box 237 b, oriented at particular angles α, θ, ω in box 237 c and still in box 237 n. These are merely example motions and orientations of the device, and any number of motions and orientations may be determined at step 236. In some embodiments, the determined orientations may comprise an orientation of the device with respect to gravity.
  • At step 238, handheld device 10 determines its environment based on the motion and orientation determined at step 236. Boxes 239 represent example environments of the device, such as face down on a table in box 239 a, falling in box 239 b, on a train in box 239 c and held in hand at box 239 n. Any number of environments may be determined based on motions and orientations determined at step 236. In particular embodiments, the environmental determination may also be based on a history of the device, such as a motion/orientation history. For example, when implementing a speaker mode function of a cellular phone, the device may detect a quiet period when horizontal in the middle of a call after a short jarring is detected (e.g., the short jarring caused by a user placing the phone face up on a table). The phone can detect that it was jarred so that stillness and a perpendicular position relative to gravity may take on a different meaning than had the jarring not occurred. Thus, the determination of the environment may be based on the motion and orientation of the device and its history. The history may comprise a previous motion/orientation of the device or any other information relating to a device's history.
  • At step 240, the determined environment is mapped to a particular behavior. The mapped behavior may depend on any number of factors in addition to the determined environment, such as desired characteristics of the particular user using the device at the time or the particular application in use or focus at the time. For example, the behavior according to a particular modeled environment may include engaging a mute function of the handheld device in box 241 a, powering down chips of the device to survive an impact in box 241 b and increasing a motion activation threshold of the device in box 241 n. The mute behavior indicated in box 241 a may be implemented when a cell phone's environment comprises laying face down on a table while on a call. The powering down chips behavior in box 241 b may be implemented when the environment of handheld device 10 comprises a free fall of the device. The increasing a motion activation threshold behavior of box 241 n may be implemented when a handheld device's environment comprises being in a car or train where bumpiness may require a greater movement threshold for a user's motion input to register as an intended input. Particular embodiments may include any number of behaviors mapped to one or more modeled environments. At step 242, the handheld device behaves according to the behavior to which its environment is mapped at step 240.
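  • The mapping of a modeled environment (optionally combined with the application in focus) to a behavior at steps 240 and 242 might be sketched as follows; the environment and behavior names are assumptions based on boxes 239 and 241:

```python
# Sketch of the mapping at steps 240-242: a modeled environment, possibly
# combined with the application in focus, selects a behavior. The specific
# environment and behavior names are assumptions for illustration.

BEHAVIOR_MAP = {
    ("face_down_on_table", "phone_call"): "mute",
    ("free_fall", None): "power_down_components",
    ("on_train", None): "raise_motion_threshold",
}

def behavior_for(environment, app_in_focus=None):
    return (BEHAVIOR_MAP.get((environment, app_in_focus))
            or BEHAVIOR_MAP.get((environment, None))
            or "no_change")

print(behavior_for("face_down_on_table", "phone_call"))   # mute
print(behavior_for("free_fall"))                          # power_down_components
print(behavior_for("held_in_hand"))                       # no_change
```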
• As indicated above, users may move handheld devices according to particular gestures to cause the devices to perform desired functions, operations or tasks. In particular embodiments, gestures used as motion input for the device may comprise pre-existing symbols, such as letters of the alphabet, picture symbols or any other alphanumeric character or pictographic symbol or representation. For example, gestures used as motion input may mimic upper and lower case letters of an alphabet in any language, Arabic and Roman numerals and shorthand symbols. Preexisting gestures may also be used when the handheld device serves as an input device for other local and remote devices. Using preexisting gestures for handheld device input may facilitate the learning process for users with respect to gesture motion interfaces.
• FIG. 17 illustrates example gestures which may be mapped to particular functions. For example, if handheld device 10 comprised a cellular phone, a user may move device 10 in the form of heart 250 to call the user's girlfriend, boyfriend or spouse, or in the form of house 252 to call the user's home. As another example, if handheld device 10 were a PDA or other device running an application managing files or data, moving the device in the form of C-gesture 254 may be a command for copying data, O-gesture 256 may be a command for opening a file, D-gesture 258 may be a command for deleting data and X-gesture 260 may be an exit command for a file or application. The logical connection between gestures and their intended functions or operations (e.g., “O” for opening a file) further facilitates user interaction and learning.
• Any number of pre-existing symbols may be used as gestures for motion input as commands for performing any number of functions, operations or tasks of a handheld device. Many preexisting gestures typically exist in two dimensions, and handheld device 10 may recognize such gestures. In some cases, for example, handheld device 10 may disable receipt of a particular dimension, so that any movement in a third dimension while a user is attempting to input a two-dimensional gesture is not received or detected, in order to facilitate recognition of the two-dimensional gesture. In some embodiments, handheld device 10 may receive three-dimensional gestures that may be based on preexisting two-dimensional gestures. Receiving and detecting three-dimensional gestures increases the capabilities of the device by, for example, increasing the number and types of gestures which may be used as motion input.
• FIG. 18 is a flowchart 270 illustrating the utilization of a preexisting symbol gesture, the letter “O,” as motion input. As illustrated in step 272, a user moves handheld device 10 in the form of the letter “O.” At step 274, handheld device 10 receives raw motion data of the “O” movement from motion detection components and processes such raw motion data at step 276 to determine the actual motion of the device. At step 278, handheld device 10 accesses a gesture database 280, which may include a plurality of gestures recognizable by the device, to map the motion to the gesture “O.” The plurality of gestures of the gesture database may each be defined by a series of accelerations of a movement. The actual motion of the device may be matched to the series of accelerations of one of the gestures of the gesture database. At step 282, handheld device 10 maps the gesture “O” to a particular function by accessing a function database 284 (or a gesture mapping database) which may include a plurality of functions that may be performed by one or more applications running on the device. In particular embodiments, the gesture and function databases may be comprised in memory 18 of the device. The particular function mapped to the gesture “O” may depend on the particular application in focus or being used by the user at the time. For example, in some applications “O” may comprise a command to open a file, while in other applications it may comprise a command to call a certain number. In some cases, one gesture may be mapped to the same function for all applications of the device. At step 286, the device behaves according to the mapped function, such as opening a file.
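• One illustrative way to realize steps 274 through 286 is to store each gesture template as a series of accelerations, pick the template with the smallest accumulated difference, and then look up the function for the application in focus. The sketch below is a simplification under stated assumptions: the database contents, the tolerance value and the two-dimensional acceleration samples are hypothetical, not taken from the disclosure.

```python
import math

# Hypothetical gesture database: each gesture is a short series of 2-D
# acceleration samples (a real device would use 3-D data from the motion
# detection components of FIG. 18).
GESTURE_DB = {
    "O": [(0, 1), (-1, 0), (0, -1), (1, 0)],
    "X": [(1, -1), (-1, -1), (-1, 1), (1, 1)],
}

# Hypothetical function database keyed by (application in focus, gesture).
FUNCTION_DB = {
    ("file_manager", "O"): "open_file",
    ("phone", "O"): "call_stored_number",
}

def match_gesture(accel_series, tolerance=0.5):
    """Return the gesture whose acceleration series best matches the movement."""
    best_name, best_err = None, float("inf")
    for name, template in GESTURE_DB.items():
        if len(template) != len(accel_series):
            continue                      # naive: require equal-length series
        err = sum(math.dist(a, b) for a, b in zip(accel_series, template))
        if err < best_err and err / len(template) < tolerance:
            best_name, best_err = name, err
    return best_name

def map_to_function(gesture, app_in_focus):
    return FUNCTION_DB.get((app_in_focus, gesture))

movement = [(0.1, 0.9), (-0.9, 0.1), (0.1, -1.1), (1.0, 0.0)]   # a sloppy "O"
gesture = match_gesture(movement)
print(gesture, map_to_function(gesture, "file_manager"))         # O open_file
```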
  • As indicated above, gestures used as motion input for handheld device 10 may have different meanings (e.g., functions, operations, tasks) based on a particular context, such as a particular application in use or in focus, a particular device state with respect to an application or otherwise, a particular modeled environment or any combination of these or any other context. For example, a particular gesture may be mapped as a command to scroll a page up when running a web browser at the device, while the gesture may be mapped as a command to examine a different date when running a calendar program. The ability for particular gestures to be mapped to different commands depending on the context, such as the application in use, increases the functionality of the device.
• Handheld devices in some embodiments may be able to utilize simpler motion detection components if gestures are mapped to different commands depending on the context. As an example, a handheld device may include particular motion detection components such that the handheld device is only able to recognize and distinguish between twenty different gestures. If each gesture is mapped to a different function for each of four different applications, then the ability to recognize only twenty unique gestures still provides eighty different functions on the device (twenty for each application). If each gesture were mapped to its own function no matter what application was in focus, then the overall capability of the device would be reduced, and some gestures would likely not be used in some applications. Because gestures can be mapped to a plurality of functions depending on the context, less complex components that recognize and distinguish between fewer gestures may be used, which can reduce the cost of the components utilized in the device and can also simplify the task of physically learning the gestures required to control the device. As indicated above, gestures may be mapped to different functions, operations or tasks depending on the application in use, device state, modeled environment or other context. In some cases, gestures may be mapped to different functions depending on the state of a particular application. For example, in the case of a word processing program, some gestures may have certain functions when in one state of the program (e.g., a menu state) while the same gestures may have different functions when in another state of the word processing program (e.g., a document editing state). In this case, a command map associated with the gesture function mappings may include gesture mappings for each such state.
• FIG. 19 is a flowchart 290 illustrating the use of context-based gesture mapping, in accordance with a particular embodiment. In the illustrated embodiment, a gesture has different functions assigned based on the application in focus. At step 292, handheld device 10 receives raw motion data of a particular gesture movement and processes such raw motion data at step 294 to determine the actual motion of the device. At step 296, handheld device 10 maps the motion to a gesture, for example, by accessing a gesture database. At step 298, handheld device 10 determines which application is in focus. For example, if the device were capable of running four different applications, then it would determine which of the four was in focus or was being used at the time. The device then performs the function mapped to the gesture according to the application in focus. The identification of such function may be accomplished in some embodiments by accessing a function database, which may also be referred to as a gesture mapping database since it correlates gestures of a gesture database to functions. In the illustrated embodiment, if Application 1 is in focus, then the device performs Function 1 at step 300 a; if Application 2 is in focus, then the device performs Function 2 at step 300 b; if Application 3 is in focus, then the device performs Function 3 at step 300 c; and if Application 4 is in focus, then the device performs Function 4 at step 300 d.
  • As a further example of context-based gesture mapping, a handheld device with phone and PDA capabilities may run four applications: a phone application, a calendar application, a file management application and an e-mail application. A gesture input mimicking the letter “S” may have different functions depending on the application in focus. For example, if the phone application is in focus, then receiving the gesture input “S” may be a command for calling a particular number designated to the “S” gesture. If the calendar application is in focus, then receiving the gesture input “S” may be a command for scrolling to the month of September in the calendar. If the file management application is in focus, then receiving the gesture input “S” may be a command for saving a file. If the e-mail application is in focus, then receiving the gesture input “S” may be a command for sending an e-mail. Particular embodiments contemplate great flexibility in the ability to map gestures to different functions depending on the context.
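• Using the “S” example above, the context-based lookup of steps 298 through 300 might be represented as a nested mapping from gesture to application to function, as in the following minimal sketch. The application names and function labels are hypothetical assumptions rather than elements of the disclosure.

```python
# Hypothetical gesture mapping database: the same "S" gesture resolves to a
# different function depending on the application in focus (steps 298-300).
GESTURE_MAPPING_DB = {
    "S": {
        "phone":        "speed_dial_s_contact",
        "calendar":     "scroll_to_september",
        "file_manager": "save_file",
        "email":        "send_email",
    },
}

def function_for(gesture, application_in_focus):
    mapping = GESTURE_MAPPING_DB.get(gesture, {})
    return mapping.get(application_in_focus, "gesture_not_mapped_in_this_context")

print(function_for("S", "calendar"))      # scroll_to_september
print(function_for("S", "phone"))         # speed_dial_s_contact
```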
  • As discussed above, gestures may have different functions depending on a particular context at the time. In particular embodiments, handheld devices may be customizable to allow users to assign device functions to pre-defined gestures. The functions may be context-based such that some gestures may have different functions depending on an application in use, a device state or a modeled environment. Handheld devices in some embodiments may allow different users of the same device to assign different functions to the same gesture, and such functions may also be context-based as discussed above.
  • For example, a handheld device 10 may be utilized by a number of different users at different times. Each user may assign different functions for the same gestures. When the handheld device receives a gesture input, it must thus know which user is using the device at the time to determine which function the user intends the device to perform. The device may determine the user in any of a variety of ways. In some embodiments, users may log into the device prior to use by using a username and password or otherwise. In other embodiments, the handheld device may be able to identify the user based on the manner in which the user moves the device for motion input, such as the way the user forms a gesture using the device. As indicated above, each user may also assign commands to gestures based on context, such as based on the application in focus at the device. The ability for the handheld device to map functions to gestures based on particular users further increases the device's capabilities and flexibility, particularly if the device is able to recognize and distinguish only a particular number of gestures.
• FIG. 20 is a flowchart 310 illustrating the use of user-based gesture mapping, in accordance with a particular embodiment. In the illustrated embodiment, a gesture has different functions assigned based on the user using the device. At step 312, handheld device 10 receives raw motion data of a particular gesture movement and processes such raw motion data at step 314 to determine the actual motion of the device. At step 316, handheld device 10 maps the motion to a gesture, for example, by accessing a gesture database. At step 318, handheld device 10 determines which user is using the device. Such determination may be made, for example, through a log-in system in which users log into the device prior to use. Handheld device 10 may determine the current user through other suitable methods as well. At step 320, the device performs the function assigned to the gesture input based on the user using the device. In the illustrated embodiment describing the process with four possible users, if User 1 is using the device, then the device performs Function 1 at step 320 a; if User 2 is using the device, then the device performs Function 2 at step 320 b; if User 3 is using the device, then the device performs Function 3 at step 320 c; and if User 4 is using the device, then the device performs Function 4 at step 320 d.
• As indicated above, in some embodiments, gestures may be assigned different functions based on both the user using the device and a context. In this situation, the illustrated flowchart 310 described above may have an additional step to determine the context at the time (e.g., step 298 of flowchart 290 determining the application in focus). The particular function performed for a certain gesture thus depends on both the user using the device at the time and the context, such as the particular application in focus at the time.
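• Combining the user determination of flowchart 310 with the context determination of flowchart 290 can be pictured as a lookup keyed on user, gesture and application in focus, as in the minimal sketch below. The user names, gestures and functions are hypothetical.

```python
# Hypothetical mapping keyed on (user, gesture, application in focus): the
# same gesture can resolve differently for different users and contexts.
USER_GESTURE_MAPPING_DB = {
    ("alice", "heart", "phone"): "call_spouse",
    ("alice", "heart", "email"): "email_spouse",
    ("bob",   "heart", "phone"): "call_home",
}

def function_for(user, gesture, application_in_focus):
    key = (user, gesture, application_in_focus)
    return USER_GESTURE_MAPPING_DB.get(key, "not_mapped_for_this_user_and_context")

print(function_for("alice", "heart", "phone"))   # call_spouse
print(function_for("bob",   "heart", "phone"))   # call_home
```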
• As previously discussed, some embodiments include handheld devices with the ability to receive preexisting symbols as gestures for motion input. Some of those embodiments, as well as other embodiments, may include the ability for users to create their own gestures for mapping to functions and/or keys. The gestures may comprise any user-created symbol or other motion that the user desires to utilize as motion input for one or more particular functions, operations or tasks that the device is able to perform. Users may be able to create motions with some personal significance so they may more easily remember each motion's command or intended function.
• FIG. 21 is a flowchart 330 illustrating the assignment process for user-created gestures, in accordance with a particular embodiment. At step 332, an indication is received from a user for gesture creation. The indication may be received in any of a variety of ways using any suitable input format (e.g., keys, trackwheel, motion, etc.). The user may move the device according to the user-created gesture such that raw motion data for the user-created gesture is received at the handheld device at step 334. The raw motion data may comprise a sequence of accelerations of a movement after stabilization of the device from a base reference position until an indication is received to stop recording the reference positions. Indications to start and stop recording a user-created gesture may include motion or non-motion indications (e.g., key presses and key releases). The raw motion data is processed at step 336. At step 338, the motion is stored as a gesture, for example, at a gesture database. In particular embodiments, the indication for gesture creation may be received after the user moves the device according to the user-created gesture. For example, the user may move the device according to a user-created gesture that is currently unrecognizable by the device. The device may query the user to determine if the user desires to store the unrecognized gesture for a particular function. The user may respond in the affirmative so that the user may utilize the gesture as motion input in the future.
• At step 340, function mapping information for the gesture is received from the user. The function mapping information may comprise functions, operations or tasks of the device that the user desires the user-created gesture to command. In particular embodiments, such function mapping information may comprise a series of functions (e.g., a macro) that one gesture may command. The user may assign different functions for a gesture according to an application in focus. In some cases, a user may desire to map different gestures to different keys or keystrokes of the device. One example of mapping a series of functions to a gesture may include mapping a long string of characters to a gesture (e.g., telephone numbers including pauses, where appropriate). At step 342, the function mapping information is stored, for example, at a function database or gesture mapping database.
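• A minimal sketch of flowchart 330 follows: the user-created gesture is stored as a recorded acceleration sequence, and the function mapping information (possibly a macro, i.e., a series of functions) is stored alongside it. The in-memory dictionaries and the sample values are hypothetical stand-ins for the gesture database and gesture mapping database.

```python
# Hypothetical in-memory stand-ins for the gesture and gesture mapping databases.
gesture_db = {}          # gesture name -> recorded acceleration sequence
gesture_mapping_db = {}  # gesture name -> one function, or a list of functions (macro)

def record_gesture(name, accel_samples):
    """Steps 334-338: store the acceleration sequence captured between the
    start and stop indications as a new user-created gesture."""
    gesture_db[name] = list(accel_samples)

def map_gesture_to_functions(name, functions):
    """Steps 340-342: store the function mapping information for the gesture."""
    gesture_mapping_db[name] = functions

record_gesture("my_squiggle", [(0.2, 0.9), (-0.7, 0.3), (0.5, -0.8)])
map_gesture_to_functions("my_squiggle", ["open_dialer", "dial_home_number"])  # a macro
print(gesture_mapping_db["my_squiggle"])
```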
• As indicated above, it may be difficult for a user to move handheld device 10 in the same precise manner for one or more gestures each time those gestures are used as input. Particular embodiments thus allow for varying levels of precision in gesture input. Precision describes how accurately a gesture must be executed in order to constitute a match to a gesture recognized by the device, such as a gesture included in a gesture database accessed by the device. The more closely a user-generated motion must match a gesture in a gesture database, the harder it will be to execute that gesture motion successfully. As discussed above, in particular embodiments movements may be matched to gestures of a gesture database by matching a detected series of accelerations of the movements to those of the gestures of the gesture database.
• As the precision required for recognition increases, more gestures (at the same level of complexity) can be distinctly recognized. As an example, if the precision required were zero, then the device could only recognize a single gesture, but it would recognize it easily because anything the user did would be recognized as that gesture. If, however, the precision required were infinite, then it would be virtually impossible for a user to form a gesture that was recognized by the device, but the device could support an infinite number of gestures with only infinitesimal differences between them. One area in which the precision requirement is especially applicable is the area of spatial signatures. With spatial signatures, the level of precision correlates well with the level of security.
• In particular embodiments, the precision required by handheld device 10 for gesture input may be varied. Different levels of precision may be required for different users, different regions of the “gesture space” (e.g., similar gestures may need more precise execution for recognition while gestures that are very distinctive may not need as much precision in execution), different individual gestures, such as signatures, and different functions mapped to certain gestures (e.g., more critical functions may require greater precision for their respective gesture inputs to be recognized). Moreover, in some embodiments users may be able to set the level(s) of precision required for some or all gestures, or for gestures of one or more gesture spaces. As an example, a user may set the precision required for spatial signatures higher than for the user's other gestures, thus increasing security for spatial signature input.
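• The variable precision requirement might be represented as a per-category (or per-user) setting consulted before matching, as in the hypothetical sketch below; the category names and numeric levels are illustrative assumptions only.

```python
# Hypothetical precision settings: categories and numeric levels are
# illustrative only (1.0 would demand a perfect match).
DEFAULT_PRECISION = 0.6
PRECISION_OVERRIDES = {
    "spatial_signature":      0.95,  # tighter match doubles as higher security
    "crowded_gesture_region": 0.8,
    "critical_function":      0.9,
}

def required_precision(category, user_setting=None):
    """Return the precision required for a gesture, honoring a user override."""
    if user_setting is not None:
        return user_setting
    return PRECISION_OVERRIDES.get(category, DEFAULT_PRECISION)

print(required_precision("spatial_signature"))        # 0.95
print(required_precision("ordinary_gesture"))         # 0.6
print(required_precision("ordinary_gesture", 0.85))   # user raised the bar
```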
  • As indicated above, in particular embodiments gestures may be recognized by detecting a series of accelerations of the device as the device is moved along a path by a user according to an intended gesture. Recognition occurs when the series of accelerations is matched by the device to a gesture of a gesture database.
• In some embodiments, each gesture recognizable by handheld device 10, or each gesture of a gesture database, includes a matrix of three-dimensional points. In addition, a user movement intended as a gesture input includes a matrix of three-dimensional points. Handheld device 10 may compare the matrix of the movement with the matrices of each recognizable gesture (or each gesture in the gesture database) to determine the intended gesture. If a user moves the device such that the movement's matrix correlates to each point of an intended gesture's matrix, then the user may be deemed to have input the intended gesture with perfect precision. As the precision required for gesture input is reduced, greater differences are allowed between a user's gesture movement and an intended gesture of the gesture database while still resulting in gesture recognition.
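• The point-matrix comparison described above can be sketched as follows: a movement matches a stored gesture when every point of the movement lies within a tolerance of the corresponding template point, with the tolerance derived from the precision setting. The tolerance formula and template values below are hypothetical illustrations.

```python
import math

# Hypothetical template: a gesture stored as a matrix of three-dimensional points.
GESTURE_TEMPLATES = {
    "O": [(0, 1, 0), (-1, 0, 0), (0, -1, 0), (1, 0, 0)],
}

def matches(movement, template, precision):
    """precision in (0, 1]; higher precision means a tighter per-point tolerance.
    The tolerance formula is a hypothetical illustration."""
    tolerance = (1.0 - precision) + 0.05
    return len(movement) == len(template) and all(
        math.dist(p, q) <= tolerance for p, q in zip(movement, template)
    )

sloppy_o = [(0.1, 0.9, 0.0), (-0.8, 0.2, 0.1), (0.1, -1.1, 0.0), (0.9, 0.1, 0.0)]
print(matches(sloppy_o, GESTURE_TEMPLATES["O"], precision=0.5))   # True
print(matches(sloppy_o, GESTURE_TEMPLATES["O"], precision=0.99))  # False
```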
• FIG. 22 illustrates three gestures input using a handheld device with varying levels of precision. In the illustrated embodiment, the intended gesture comprises an “O.” Gesture movement 350 is input as a perfect “O,” or with 100% precision for the intended gesture. Gesture movement 352 is input with less than 100% precision, as it does not form a perfect “O.” Gesture movement 354 is input with less precision than gesture movement 352. The precision requirement for the input of gesture “O” may be set at the handheld device to accept varying levels of precision. For example, the precision level may be set such that only gesture movement 350 is recognized as gesture “O,” such that gesture movements 350 and 352 are both recognized as gesture “O,” or such that gesture movements 350, 352 and 354 are all recognized as gesture “O.” As indicated above, the higher the precision requirement, the more space is available for additional recognizable gestures. For example, if the precision level for handheld device 10 were set such that only gesture movement 350 was recognized as gesture “O,” then gesture movements 352 and 354 may be recognized as other, distinct gestures.
• In particular embodiments, handheld devices may alter the gestures recognized for performing particular functions based on a user's personal precision. In this manner, a handheld device may have a dynamic learning capability for gesture mappings. For example, if a particular gesture of a gesture database is mapped to a particular function, and a user's repeated attempts to input the gesture lack precision in a consistent manner, then the handheld device may alter the gesture in the gesture database to match the consistent gesture movement of the user such that the user's consistent gesture motion input will be mapped to the particular function.
• As an example, if a particular gesture comprises a square motion and a user's motion intended for that gesture consistently comprises more of a triangular motion (e.g., on multiple consecutive attempts), then the handheld device may be able to recognize this consistent difference between the intended gesture and the actual user motion and change the gesture in the gesture database mapped to the intended function (e.g., the square) to the user's actual, consistent motion (e.g., the triangle). Thus, after such a change is made, any time the user inputs the triangle gesture, the function previously mapped to the square gesture will be commanded. The device may determine the gesture intended in any of a variety of ways, such as through two-way communication with the user through any form of input. In particular embodiments, this dynamic learning of users' input characteristics may be applied on a user-specific basis. For example, in the example described above, another user may still input the square gesture using the same handheld device to command the same function.
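• The dynamic-learning behavior can be sketched as a consistency check over a user's recent attempts, replacing that user's stored template with the averaged motion when the attempts cluster tightly. The consistency test, the spread value and the sample data below are hypothetical simplifications.

```python
import math

def attempts_are_consistent(attempts, spread=0.3):
    """Return (True, averaged_motion) when repeated attempts cluster tightly
    around their own per-point average; the spread value is hypothetical."""
    average = [
        tuple(sum(coord) / len(attempts) for coord in zip(*points))
        for points in zip(*attempts)
    ]
    consistent = all(
        math.dist(p, a) <= spread
        for attempt in attempts
        for p, a in zip(attempt, average)
    )
    return consistent, average

# Hypothetical per-user gesture database: Alice's "save" gesture is a square.
user_gesture_db = {("alice", "save"): [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]}

# Alice repeatedly produces a triangle-like motion instead.
attempts = [
    [(0.0, 0, 0), (1.0, 0.0, 0), (0.5, 1.0, 0), (0.0, 0.0, 0)],
    [(0.0, 0, 0), (1.1, 0.0, 0), (0.5, 0.9, 0), (0.1, 0.0, 0)],
    [(0.1, 0, 0), (1.0, 0.1, 0), (0.4, 1.0, 0), (0.0, 0.1, 0)],
]

consistent, averaged = attempts_are_consistent(attempts)
if consistent:
    user_gesture_db[("alice", "save")] = averaged   # relearn Alice's template only
print(user_gesture_db[("alice", "save")])
```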
• As indicated above, as the precision of user motion with respect to intended gestures increases, the number of gestures available for mapping to functions increases. In some embodiments, handheld devices may recognize that a user's precision increases over time and the devices may, as a result, increase the number of gestures available for use. Increasing the gestures available for input may also increase the number of functions capable of being commanded through gesture input.
  • As an example, a user's personal precision for inputting gestures may be such that the user is only able to input a certain number of gestures that will be recognized by the handheld device. However, over time, the user's personal precision may increase. This increase may be recognized by the handheld device and, as a result, the device may enable additional gestures that the user may use as gesture input. In some embodiments, the enabling of additional gestures may occur when the user's precision increases over a particular precision threshold, or a certain precision level. Since the user's precision has increased, the handheld device will be able to recognize when the user attempts to input these additional gestures. As indicated above, providing additional gestures for input by a user may also increase the number of functions that the user is able to command through gesture input, since each gesture may be mapped to command a different function.
• Handheld devices in particular embodiments may also allow users to set and vary noise thresholds of the device. A noise threshold is the magnitude of motion of the device that must be detected in order to be considered intended motion input (e.g., an intended gesture) of the user. For example, if noise thresholds are set low, then minimal motion of the device may be considered by the device as motion input. However, if noise thresholds are set high, then greater movement of the device would be required before the motion is considered intended input from the user. If, for example, a user is travelling in a car on a bumpy road, the user may desire to set the noise threshold higher so that when the device moves as a result of bumps in the road, such movement is not considered by the device to be intended motion input.
  • In particular embodiments, noise thresholds may automatically change at the device based on a modeled environment. For example, if a device determines that the environment comprises traveling in a car, then the device may automatically increase the noise threshold so that minimal movements resulting from the car will not register as user-intended motion.
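• A noise threshold driven by the modeled environment might look like the following minimal sketch, in which motion below the threshold is simply ignored; the threshold values and environment names are hypothetical.

```python
# Hypothetical noise thresholds, expressed as a minimum acceleration magnitude
# (in g) below which motion is ignored rather than treated as intended input.
NOISE_THRESHOLDS_G = {
    "still_room": 0.05,
    "walking":    0.15,
    "in_vehicle": 0.40,   # bumpy car or train ride needs larger movements
}

def is_intended_input(accel_magnitude_g, environment, user_override=None):
    threshold = (user_override if user_override is not None
                 else NOISE_THRESHOLDS_G.get(environment, 0.05))
    return accel_magnitude_g >= threshold

print(is_intended_input(0.2, "still_room"))   # True  - clearly deliberate
print(is_intended_input(0.2, "in_vehicle"))   # False - likely just road bumps
```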
  • FIG. 23 is a flowchart 370 illustrating a gesture recognition process utilizing a number of features described herein, in accordance with a particular embodiment. At step 372, raw motion data of a particular gesture movement is received. The raw motion data is processed at step 374 where the actual motion of the device is determined. Such processing may include various filtering techniques and fusion of data from multiple detection or sensing components. At step 376, the actual motion is mapped to a gesture. Mapping the actual motion to a gesture may include accessing a user settings database 378, which may include user data 379 comprising, for example, user precision and noise characteristics or thresholds, user-created gestures and any other user-specific data or information including user identities 381. User-specific information may be important, for example, because different users of the handheld device may have different settings and motion input characteristics. For example, an older user may have less precision than a younger user when inputting gestures such that the older person may have fewer gestures available. Moreover, a more experienced user may have more device functionality available through gesture input.
  • User settings database 378 may also include environmental model information 380 which may factor in determining the gesture applicable at the time. As discussed above, through environmental modeling, the device can internally represent its environment and the effect that environment is likely to have on gesture recognition. For example, if the user is on a train, then the device may automatically raise the noise threshold level. The device may also reduce the precision required, depending upon how crowded the gesture space is near the gesture under consideration. Mapping the actual motion to a gesture may also include accessing gesture database 382.
• At step 384, the gesture is mapped to a function for the device. This step may include accessing a function mapping database 386, which may include a correlation between gestures and functions. Different users may have different mappings of gestures to functions and different user-created functions. Thus, function mapping database 386 may also include user-specific mapping instructions or characteristics, user-created functions (e.g., macros and/or phone numbers) and any other function information which may be applicable to mapping a particular gesture to one or more functions. In some embodiments, gestures may be mapped to individual keystrokes. User identities 381 may also be accessed in this step. In addition, device context information 388 may also be accessed and utilized in mapping the gesture, which may include environmental model information 389, application in focus information 390 and device state information 391, such as time and date information, location information, battery condition and mode information (e.g., silent mode). At step 392, the device performs the appropriately mapped function or functions, such as Function 1 at step 392 a, Function 2 at step 392 b or Function 3 at step 392 c.
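• Tying the pieces of flowchart 370 together, the sketch below runs raw motion through a noise filter derived from user settings and the modeled environment, matches the cleaned-up motion to a gesture, and then selects a function based on the user and the application in focus. Every database, helper and value shown is a hypothetical stand-in, and the one-dimensional motion samples and rounding-based matcher are deliberate oversimplifications of the matching described above.

```python
def recognize_and_perform(raw_motion, user, environment, app_in_focus,
                          user_settings_db, gesture_db, function_mapping_db):
    # Step 374 (simplified): drop samples below the noise threshold taken from
    # the user settings database and the modeled environment.
    threshold = user_settings_db[user]["noise_threshold"].get(environment, 0.05)
    motion = [sample for sample in raw_motion if abs(sample) >= threshold]
    if not motion:
        return None

    # Step 376 (simplified): map the cleaned-up motion to a gesture.  A real
    # matcher would compare acceleration series; here we just round and look up.
    gesture = gesture_db.get(tuple(round(sample) for sample in motion))
    if gesture is None:
        return None

    # Step 384: map the gesture to a function for this user and context.
    return function_mapping_db.get((user, gesture, app_in_focus))

# Hypothetical databases and a one-dimensional motion trace.
user_settings_db = {"alice": {"noise_threshold": {"still_room": 0.05, "in_vehicle": 0.4}}}
gesture_db = {(1, -1): "O"}
function_mapping_db = {("alice", "O", "file_manager"): "open_file"}

print(recognize_and_perform([1.0, 0.02, 0.03, -0.9, 0.01], "alice", "still_room",
                            "file_manager", user_settings_db, gesture_db,
                            function_mapping_db))   # open_file
```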
• As discussed above, in particular embodiments handheld device 10 may comprise a cellular phone with many of the capabilities described herein. For example, cellular phones with motion input capabilities may use motion input to flatten menus as discussed above. The cellular phone may detect device states and environments, such as free fall or the cellular phone being face down or face up, to map to behaviors such as mute, speaker phone and power-off. Other detection of device states may include detecting that the phone is being held in order to disengage mute or speakerphone states. The cellular phone may utilize gestures to control dialing (e.g., through gestural speed dial) or to lock/unlock a keypad of the device. For example, the device may be moved in a clockwise circle to dial home, a counterclockwise circle to dial work and in the shape of a heart to dial a significant other. Users may also be able to program the cellular phone with customized gestural mappings.
• In particular embodiments handheld device 10 may comprise a digital camera utilizing motion input for at least some of the functions described herein. For example, digital cameras with motion input capabilities may use motion input to flatten menus as discussed above. Motion may also be used to allow a user to zoom in (and back out) on still photos or video to examine them more closely, for smoother and more intuitive functionality. Motion may be used to zoom in and out of a number of thumbnails of photographs or video clips so that it is easy to select one or more to review. Virtual desktops may be used to review many thumbnails of digital photos or video clips, or to review many digital photos or video clips, by translating the camera or using gestural input. Gestures and simple motions may be used alone or in combination with other interface mechanisms to modify various settings on digital still and video cameras, such as flash settings, type of focus and light sensing mode. Moreover, free fall may be detected to induce the camera to protect itself in some way from damage in an impending collision. Such protection may include dropping power from some or all parts of the camera, closing the lens cover and retracting the lens.
  • In particular embodiments handheld device 10 may comprise a digital watch utilizing motion input for at least some of the functions described herein. For example, digital watches with motion input capabilities may use motion input to flatten menus as discussed above. In some embodiments, the tapping of the watch or particular gestures may be used to silence the watch. Other functions may also be accessed through taps, rotations, translations and other more complex gestures. These functions may include changing time zones, setting the watch (e.g., setting the time and other adjustable settings), changing modes (e.g., timers, alarms, stopwatch), activating the backlight, using a stopwatch (e.g., starting, stopping and splitting the stopwatch) and starting and stopping other timers.
• In some embodiments, motion detection may be separate from a display. For example, a display may be worn on glasses or contacts, while other parts of the handheld device are dispersed across a user's body, such that the display may not be part of the same physical component as the motion input device or component.
  • As discussed above, particular figures illustrate various methods, flowcharts and processes which may be performed in particular embodiments. It should be understood that steps may be performed in any order, and steps from a particular method, flowchart or process may be combined with other methods, flowcharts or processes or other steps from the same method, flowchart or process in various embodiments without departing from the scope of the invention.
  • Although the present invention has been described in detail with reference to particular embodiments, it should be understood that various other changes, substitutions, and alterations may be made hereto without departing from the spirit and scope of the present invention. For example, although the present invention has been described with reference to a number of elements included within handheld device 10, these elements may be combined, rearranged or positioned in order to accommodate particular architectures or needs. In addition, any of these elements may be provided as separate external elements to each other where appropriate. The present invention contemplates great flexibility in the arrangement of these elements as well as their internal components.
  • Numerous other changes, substitutions, variations, alterations and modifications may be ascertained by those skilled in the art and it is intended that the present invention encompass all such changes, substitutions, variations, alterations and modifications as falling within the spirit and scope of the appended claims.

Claims (20)

1. A handheld device comprising:
a display having a viewable surface and operable to generate an image indicating a currently controlled remote device;
a gesture database maintaining a plurality of remote command gestures, each remote command gesture defined by a motion of the device with respect to a first position of the handheld device;
a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device;
a motion detection module operable to detect motion of the handheld device within three dimensions and to identify components of the motion in relation to the viewable surface;
a control module operable to track movement of the handheld device using the motion detection module, to compare the tracked movement against the remote command gestures to determine a matching gesture, and to identify the one of the commands corresponding to the matching gesture; and
a wireless interface operable to transmit the identified command to a remote receiver for delivery to the remote device.
2. The handheld device of claim 1, wherein the remote receiver comprises a wireless interface of the remote device.
3. The handheld device of claim 1, wherein the remote receiver comprises an element of a public wireless telephone network.
4. The handheld device of claim 1, wherein the remote device comprises audio/visual equipment.
5. The handheld device of claim 4, wherein the identified command controls output of the audio/visual equipment.
6. The handheld device of claim 1, wherein the wireless interface is further operable to transmit the matching gesture to the remote receiver for delivery to the remote device.
7. The handheld device of claim 1, further comprising:
a first accelerometer operable to detect acceleration along a first axis;
a second accelerometer operable to detect acceleration along a second axis, the second axis perpendicular to the first axis; and
a third accelerometer operable to detect acceleration along a third axis, the third axis perpendicular to the first axis and perpendicular to the second axis; and wherein:
the gesture database further defines each of the remote command gestures using a sequence of accelerations;
the motion detection module is further operable to detect motion of the device using accelerations measured by the first accelerometer, the second accelerometer, and the third accelerometer; and
the control module is further operable to match the accelerations measured by the motion detection module against gesture definitions in the gesture database to identify particular ones of the remote command gestures.
8. A method for remotely controlling devices comprising:
generating, on a viewable surface of a handheld device, an image indicating a currently controlled remote device;
maintaining a gesture database comprising a plurality of remote command gestures, each remote command gesture defined by a motion of the device with respect to a first position of the handheld device;
maintaining a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device;
tracking movement of the handheld device in relation to the viewable surface;
comparing the tracked movement against the remote command gestures to determine a matching gesture;
identifying the one of the commands corresponding to the matching gesture; and
transmitting the identified command to a remote receiver for delivery to the remote device.
9. The method of claim 8, wherein the remote receiver comprises a wireless interface of the remote device.
10. The method of claim 8, wherein the remote receiver comprises an element of a public wireless telephone network.
11. The method of claim 8, wherein the remote device comprises audio/visual equipment.
12. The method of claim 11, wherein the identified command controls output of the audio/visual equipment.
13. The method of claim 8, wherein the gesture database further defines each of the remote command gestures using a sequence of accelerations; the method further comprising:
detecting acceleration of the handheld device along a first axis;
detecting acceleration of the handheld device along a second axis, the second axis perpendicular to the first axis; and
detecting acceleration of the handheld device along a third axis, the third axis perpendicular to the first axis and perpendicular to the second axis;
detecting motion of the device using accelerations measured by the first accelerometer, the second accelerometer, and the third accelerometer; and
matching the accelerations against gesture definitions in the gesture database to identify potential indicated ones of the remote command gestures.
14. Logic for controlling a handheld device, the logic embodied in a computer readable medium and operable when executed to perform the steps of:
generating, on a viewable surface of a handheld device, an image indicating a currently controlled remote device;
maintaining a gesture database comprising a plurality of remote command gestures, each remote command gesture defined by a motion of the device with respect to a first position of the handheld device;
maintaining a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device;
tracking movement of the handheld device in relation to the viewable surface;
comparing the tracked movement against the remote command gestures to determine a matching gesture;
identifying the one of the commands corresponding to the matching gesture; and
transmitting the identified command to a remote receiver for delivery to the remote device.
15. The logic of claim 14, wherein the remote receiver comprises a wireless interface of the remote device.
16. The logic of claim 14, wherein the remote receiver comprises an element of a public wireless telephone network.
17. The logic of claim 14, wherein the remote device comprises audio/visual equipment.
18. The logic of claim 17, wherein the identified command controls output of the audio/visual equipment.
19. The logic of claim 14, wherein the gesture database further defines each of the remote command gestures using a sequence of accelerations; the logic further operable when executed to perform the steps of:
detecting acceleration of the handheld device along a first axis;
detecting acceleration of the handheld device along a second axis, the second axis perpendicular to the first axis; and
detecting acceleration of the handheld device along a third axis, the third axis perpendicular to the first axis and perpendicular to the second axis;
detecting motion of the device using accelerations measured by the first accelerometer, the second accelerometer, and the third accelerometer; and
matching the accelerations against gesture definitions in the gesture database to identify potential indicated ones of the remote command gestures.
20. A motion controlled handheld device comprising:
means for generating, on a viewable surface of a handheld device, an image indicating a currently controlled remote device;
means for maintaining a gesture database maintaining a plurality of remote command gestures, each remote command gesture defined by a motion of the device with respect to a first position of the handheld device;
means for maintaining a gesture mapping database comprising a mapping of each of the remote command gestures to an associated command for controlling operation of the remote device;
means for tracking movement of the handheld device in relation to the viewable surface;
means for comparing the tracked movement against the remote command gestures to determine a matching gesture;
means for identifying the one of the commands corresponding to the matching gesture; and
means for transmitting the identified command to a remote receiver for delivery to the remote device.
US10/807,562 2004-03-23 2004-03-23 Motion controlled remote controller Abandoned US20050212753A1 (en)

Priority Applications (8)

Application Number Priority Date Filing Date Title
US10/807,562 US20050212753A1 (en) 2004-03-23 2004-03-23 Motion controlled remote controller
EP05724864A EP1728142B1 (en) 2004-03-23 2005-03-07 Distinguishing tilt and translation motion components in handheld devices
KR1020067019664A KR100853605B1 (en) 2004-03-23 2005-03-07 Distinguishing tilt and translation motion components in handheld devices
DE602005022685T DE602005022685D1 (en) 2004-03-23 2005-03-07 DISTINCTION OF TILTING AND TRANSLATION MOVEMENT COMPONENTS IN FACILITIES HELD IN THE HAND
PCT/US2005/007409 WO2005103863A2 (en) 2004-03-23 2005-03-07 Distinguishing tilt and translation motion components in handheld devices
JP2007504983A JP2007531113A (en) 2004-03-23 2005-03-07 Identification of mobile device tilt and translational components
JP2008192455A JP4812812B2 (en) 2004-03-23 2008-07-25 Identification of mobile device tilt and translational components
US12/826,439 US7990365B2 (en) 2004-03-23 2010-06-29 Motion controlled remote controller

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US10/807,562 US20050212753A1 (en) 2004-03-23 2004-03-23 Motion controlled remote controller

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US12/826,439 Division US7990365B2 (en) 2004-03-23 2010-06-29 Motion controlled remote controller

Publications (1)

Publication Number Publication Date
US20050212753A1 true US20050212753A1 (en) 2005-09-29

Family

ID=34989195

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/807,562 Abandoned US20050212753A1 (en) 2004-03-23 2004-03-23 Motion controlled remote controller
US12/826,439 Expired - Fee Related US7990365B2 (en) 2004-03-23 2010-06-29 Motion controlled remote controller

Family Applications After (1)

Application Number Title Priority Date Filing Date
US12/826,439 Expired - Fee Related US7990365B2 (en) 2004-03-23 2010-06-29 Motion controlled remote controller

Country Status (1)

Country Link
US (2) US20050212753A1 (en)

Cited By (135)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050227217A1 (en) * 2004-03-31 2005-10-13 Wilson Andrew D Template matching on interactive surface
US20050277071A1 (en) * 2004-06-14 2005-12-15 Microsoft Corporation Method for controlling an intensity of an infrared source used to detect objects adjacent to an interactive display surface
US20060040739A1 (en) * 2004-08-19 2006-02-23 Igt, A Nevada Corporation Virtual input system
US20060202997A1 (en) * 2005-03-10 2006-09-14 Lavalley Zachery Apparatus, system and method for interpreting and reproducing physical motion
US20060256074A1 (en) * 2005-05-13 2006-11-16 Robert Bosch Gmbh Sensor-initiated exchange of information between devices
US20070046625A1 (en) * 2005-08-31 2007-03-01 Microsoft Corporation Input method for surface of interactive display
US20070157095A1 (en) * 2005-12-29 2007-07-05 Microsoft Corporation Orientation free user interface
US20070200658A1 (en) * 2006-01-06 2007-08-30 Samsung Electronics Co., Ltd. Apparatus and method for transmitting control commands in home network system
US20080036732A1 (en) * 2006-08-08 2008-02-14 Microsoft Corporation Virtual Controller For Visual Displays
US20080169930A1 (en) * 2007-01-17 2008-07-17 Sony Computer Entertainment Inc. Method and system for measuring a user's level of attention to content
US20080192007A1 (en) * 2002-02-07 2008-08-14 Microsoft Corporation Determining a position of a pointing device
US20090033618A1 (en) * 2005-07-04 2009-02-05 Rune Norager Unit, an Assembly and a Method for Controlling in a Dynamic Egocentric Interactive Space
EP2034389A1 (en) * 2007-09-07 2009-03-11 Nederlandse Organisatie voor toegepast- natuurwetenschappelijk onderzoek TNO Method and system for linking appliances
US20090113354A1 (en) * 2007-10-30 2009-04-30 Samsung Electronics Co., Ltd. Broadcast receiving apparatus and control method thereof
US20090121894A1 (en) * 2007-11-14 2009-05-14 Microsoft Corporation Magic wand
US20090174567A1 (en) * 2008-01-04 2009-07-09 Primax Electronics Ltd. Remote controller for controlling playback of multimedia file
US20090252311A1 (en) * 2008-04-07 2009-10-08 Martijn Kuiken Electronic device with motion controlled functions
US20090262070A1 (en) * 2004-06-16 2009-10-22 Microsoft Corporation Method and System for Reducing Effects of Undesired Signals in an Infrared Imaging System
US20090268945A1 (en) * 2003-03-25 2009-10-29 Microsoft Corporation Architecture for controlling a computer using hand gestures
WO2009139010A1 (en) * 2008-05-14 2009-11-19 Sist&Matica S.R.L. Remote control system
US20090315995A1 (en) * 2008-06-19 2009-12-24 Microsoft Corporation Mobile computing devices, architecture and user interfaces based on dynamic direction information
US20090319181A1 (en) * 2008-06-20 2009-12-24 Microsoft Corporation Data services based on gesture and location information of device
US20090318168A1 (en) * 2008-06-19 2009-12-24 Microsoft Corporation Data synchronization for devices supporting direction-based services
EP2146490A1 (en) * 2008-07-18 2010-01-20 Alcatel, Lucent User device for gesture based exchange of information, methods for gesture based exchange of information between a plurality of user devices, and related devices and systems
US20100031203A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation User-defined gesture set for surface computing
US20100026470A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation Fusing rfid and vision for surface object tracking
US7716008B2 (en) 2007-01-19 2010-05-11 Nintendo Co., Ltd. Acceleration data processing program, and storage medium, and acceleration data processing apparatus for use with the same
US20100149090A1 (en) * 2008-12-15 2010-06-17 Microsoft Corporation Gestures, interactions, and common ground in a surface computing environment
US7774155B2 (en) 2006-03-10 2010-08-10 Nintendo Co., Ltd. Accelerometer-based controller
US20100223549A1 (en) * 2009-02-27 2010-09-02 Greg Edwards System and method for controlling entertainment devices using a display
WO2010138520A1 (en) * 2009-05-26 2010-12-02 Dp Technologies, Inc. Method and apparatus for a motion state aware headset
US20110025603A1 (en) * 2006-02-08 2011-02-03 Underkoffler John S Spatial, Multi-Modal Control Device For Use With Spatial Operating System
US7907128B2 (en) 2004-04-29 2011-03-15 Microsoft Corporation Interaction between objects and a virtual environment display
US7927216B2 (en) 2005-09-15 2011-04-19 Nintendo Co., Ltd. Video game system with wireless modular handheld controller
WO2011054720A1 (en) * 2009-11-05 2011-05-12 Continental Automotive Gmbh Portable remote control device for a vehicle
WO2011054717A1 (en) * 2009-11-05 2011-05-12 Continental Automotive Gmbh Portable remote control device for a vehicle
KR20110062704A (en) * 2009-12-04 2011-06-10 엘지전자 주식회사 Display device and method for setting of password the same
US20110159958A1 (en) * 2008-07-23 2011-06-30 Sega Corporation Game device, method for controlling game, game control program and computer readable recording medium storing program
EP2362299A1 (en) * 2008-11-28 2011-08-31 Fujitsu Limited Control device, control system, control method, and computer program
US8089458B2 (en) 2000-02-22 2012-01-03 Creative Kingdoms, Llc Toy devices and methods for providing an interactive play experience
CN102332203A (en) * 2011-05-31 2012-01-25 福建物联天下信息科技有限公司 System for operating and controlling other apparatuses through motion behavior
US20120042246A1 (en) * 2010-06-10 2012-02-16 Microsoft Corporation Content gestures
US8144944B2 (en) 2007-08-14 2012-03-27 Olympus Corporation Image sharing system and method
US8150384B2 (en) 2010-06-16 2012-04-03 Qualcomm Incorporated Methods and apparatuses for gesture based remote control
US8157651B2 (en) 2005-09-12 2012-04-17 Nintendo Co., Ltd. Information processing program
CN102460510A (en) * 2009-05-27 2012-05-16 奥布隆工业有限公司 Spatial, multi-modal control device for use with spatial operating system
US8212857B2 (en) 2007-01-26 2012-07-03 Microsoft Corporation Alternating light sources to reduce specular reflection
US8226493B2 (en) 2002-08-01 2012-07-24 Creative Kingdoms, Llc Interactive play devices for water play attractions
EP2483811A2 (en) * 2009-10-02 2012-08-08 Qualcomm Incorporated User interface gestures and methods for providing file sharing functionality
US8267786B2 (en) 2005-08-24 2012-09-18 Nintendo Co., Ltd. Game controller and game system
US8285344B2 (en) 2008-05-21 2012-10-09 DP Technlogies, Inc. Method and apparatus for adjusting audio for a user environment
US8282487B2 (en) 2008-10-23 2012-10-09 Microsoft Corporation Determining orientation in an external reference frame
US8308563B2 (en) 2005-08-30 2012-11-13 Nintendo Co., Ltd. Game system and storage medium having game program stored thereon
US8313379B2 (en) 2005-08-22 2012-11-20 Nintendo Co., Ltd. Video game system with wireless modular handheld controller
US8320578B2 (en) 2008-04-30 2012-11-27 Dp Technologies, Inc. Headset
US20130002576A1 (en) * 2011-05-03 2013-01-03 Lg Electronics Inc. Remote controller and image display apparatus controllable by remote controller
US8409003B2 (en) 2005-08-24 2013-04-02 Nintendo Co., Ltd. Game controller and game system
US8460103B2 (en) 2004-06-18 2013-06-11 Igt Gesture controlled casino gaming system
WO2013087994A2 (en) * 2011-12-16 2013-06-20 Nokia Corporation Methods, apparatuses, and computer program products for enabling use of remote devices with pre-defined gestures
US8475275B2 (en) 2000-02-22 2013-07-02 Creative Kingdoms, Llc Interactive toys and games connecting physical and virtual play environments
US20130179812A1 (en) * 2012-01-10 2013-07-11 Gilles Serge BianRosa System and method for navigating a user interface using a touch-enabled input device
US8555282B1 (en) 2007-07-27 2013-10-08 Dp Technologies, Inc. Optimizing preemptive operating system with motion sensing
US8560972B2 (en) 2004-08-10 2013-10-15 Microsoft Corporation Surface UI for gesture-based interaction
US8608535B2 (en) 2002-04-05 2013-12-17 Mq Gaming, Llc Systems and methods for providing an interactive game
US8612641B1 (en) * 2011-05-31 2013-12-17 Amazon Technologies, Inc. Portable computing device as control mechanism
US8620353B1 (en) 2007-01-26 2013-12-31 Dp Technologies, Inc. Automatic sharing and publication of multimedia from a mobile device
US20140086428A1 (en) * 2007-09-27 2014-03-27 Samsung Electronics Co., Ltd. Portable terminal having bluetooth module and bluetooth communication method thereof
US8684839B2 (en) 2004-06-18 2014-04-01 Igt Control of wager-based game using gesture recognition
EP2275907A3 (en) * 2009-07-16 2014-04-02 Apple Inc. Ground detection for touch sensitive device
US8702515B2 (en) 2002-04-05 2014-04-22 Mq Gaming, Llc Multi-platform gaming system using RFID-tagged toys
US20140115542A1 (en) * 2012-10-19 2014-04-24 Hon Hai Precision Industry Co., Ltd. Remotely controllable electronic device allowing a user to associate two menu items with a control signal
US8708821B2 (en) 2000-02-22 2014-04-29 Creative Kingdoms, Llc Systems and methods for providing interactive game play
US8745541B2 (en) 2003-03-25 2014-06-03 Microsoft Corporation Architecture for controlling a computer using hand gestures
US8753165B2 (en) 2000-10-20 2014-06-17 Mq Gaming, Llc Wireless toy systems and methods for interactive entertainment
US8758136B2 (en) 1999-02-26 2014-06-24 Mq Gaming, Llc Multi-platform gaming systems and methods
WO2014163749A1 (en) * 2013-03-13 2014-10-09 Cambridgesoft Corporation Systems and methods for gesture-based sharing of data between separate electronic devices
US8872646B2 (en) 2008-10-08 2014-10-28 Dp Technologies, Inc. Method and system for waking up a device due to motion
US8902154B1 (en) 2006-07-11 2014-12-02 Dp Technologies, Inc. Method and apparatus for utilizing motion user interface
US20140368429A1 (en) * 2012-01-06 2014-12-18 Movea Device for gestural control of a system, and associated method
CN104238743A (en) * 2013-06-21 2014-12-24 卡西欧计算机株式会社 Information processing apparatus, and information processing method
US8949070B1 (en) 2007-02-08 2015-02-03 Dp Technologies, Inc. Human activity monitoring device with activity identification
US8996332B2 (en) 2008-06-24 2015-03-31 Dp Technologies, Inc. Program setting adjustments based on activity identification
CN104516503A (en) * 2014-12-18 2015-04-15 深圳市宇恒互动科技开发有限公司 Method and system both for sensing scene movement and reminding device
US9011248B2 (en) 2005-08-22 2015-04-21 Nintendo Co., Ltd. Game operating device
US20150193095A1 (en) * 2012-07-24 2015-07-09 Tencent Technology (Shenzhen) Company Limited Electronic apparatus and method for interacting with application in electronic apparatus
WO2015117052A1 (en) * 2014-01-31 2015-08-06 Putman Matthew C Apparatus and method for manipulating objects with gesture controls
EP2908292A1 (en) * 2014-02-14 2015-08-19 Siemens Aktiengesellschaft Access system for a technical installation
TWI499226B (en) * 2009-04-22 2015-09-01 Univ Southern Taiwan A single button to generate a variety of remote control signal remote control
US20150317055A1 (en) * 2011-04-29 2015-11-05 Google Inc. Remote device control using gestures on a touch sensitive device
CN105190483A (en) * 2013-03-15 2015-12-23 高通股份有限公司 Detection of a gesture performed with at least two control objects
US20150370290A1 (en) * 2014-06-24 2015-12-24 Kabushiki Kaisha Toshiba Electronic apparatus, method, and storage medium
US9244530B1 (en) 2011-01-31 2016-01-26 Google Inc. Virtual artifacts using mobile devices
US9317128B2 (en) 2009-04-02 2016-04-19 Oblong Industries, Inc. Remote devices used in a markerless installation of a spatial operating environment incorporating gestural control
US20160134658A1 (en) * 2013-07-05 2016-05-12 Nippon Telegraph And Telephone Corporation Unauthorized access detecting system and unauthorized access detecting method
EP2438534A4 (en) * 2009-06-05 2016-05-18 Microsoft Technology Licensing Llc Scrubbing variable content paths
US9390229B1 (en) 2006-04-26 2016-07-12 Dp Technologies, Inc. Method and apparatus for a health phone
US9446319B2 (en) 2003-03-25 2016-09-20 Mq Gaming, Llc Interactive gaming toy
US20160287989A1 (en) * 2012-08-31 2016-10-06 Blue Goji Llc Natural body interaction for mixed or virtual reality applications
US9471148B2 (en) 2009-04-02 2016-10-18 Oblong Industries, Inc. Control system for navigating a principal dimension of a data space
US9471147B2 (en) 2006-02-08 2016-10-18 Oblong Industries, Inc. Control system for navigating a principal dimension of a data space
US9495228B2 (en) 2006-02-08 2016-11-15 Oblong Industries, Inc. Multi-process interactive systems and methods
US9495013B2 (en) 2008-04-24 2016-11-15 Oblong Industries, Inc. Multi-modal gestural interface
DE102015005863A1 (en) * 2015-05-11 2016-11-17 SENIC GmbH Input device for electronic devices
WO2016200417A1 (en) * 2015-06-07 2016-12-15 Apple Inc. Content browsing user interface
US9596643B2 (en) 2011-12-16 2017-03-14 Microsoft Technology Licensing, Llc Providing a user interface experience based on inferred vehicle state
US9606630B2 (en) 2005-02-08 2017-03-28 Oblong Industries, Inc. System and method for gesture based control system
US9661468B2 (en) 2009-07-07 2017-05-23 Microsoft Technology Licensing, Llc System and method for converting gestures into digital graffiti
US9684380B2 (en) 2009-04-02 2017-06-20 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
EP3069215A4 (en) * 2013-11-13 2017-07-26 National University of Singapore Method and hand held laboratory device to control screen navigation
US9740922B2 (en) 2008-04-24 2017-08-22 Oblong Industries, Inc. Adaptive tracking system for spatial input devices
US9740293B2 (en) 2009-04-02 2017-08-22 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US9772815B1 (en) * 2013-11-14 2017-09-26 Knowles Electronics, Llc Personalized operation of a mobile device using acoustic and non-acoustic information
US9779131B2 (en) 2008-04-24 2017-10-03 Oblong Industries, Inc. Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes
US9804902B2 (en) 2007-04-24 2017-10-31 Oblong Industries, Inc. Proteins, pools, and slawx in processing environments
US20170344175A1 (en) * 2016-05-30 2017-11-30 Quanta Computer Inc. Portable electronic devices and operating methods thereof
US9910497B2 (en) 2006-02-08 2018-03-06 Oblong Industries, Inc. Gestural control of autonomous and semi-autonomous systems
US9933852B2 (en) 2009-10-14 2018-04-03 Oblong Industries, Inc. Multi-process interactive systems and methods
US9952673B2 (en) 2009-04-02 2018-04-24 Oblong Industries, Inc. Operating environment comprising multiple client devices, multiple displays, multiple users, and gestural control
US9990046B2 (en) 2014-03-17 2018-06-05 Oblong Industries, Inc. Visual collaboration interface
US20180217396A1 (en) * 2017-01-30 2018-08-02 Semiconductor Components Industries, Llc Systems and methods for an optical image stabilizer system
US10241564B2 (en) 2013-06-07 2019-03-26 Seiko Epson Corporation Electronic apparatus and method of detecting tap operation
WO2019080902A1 (en) * 2017-10-27 2019-05-02 Zyetric Inventions Limited Interactive intelligent virtual object
US10353495B2 (en) 2010-08-20 2019-07-16 Knowles Electronics, Llc Personalized operation of a mobile device using sensor signatures
US10529302B2 (en) 2016-07-07 2020-01-07 Oblong Industries, Inc. Spatially mediated augmentations of and interactions among distinct devices and applications via extended pixel manifold
US10642364B2 (en) 2009-04-02 2020-05-05 Oblong Industries, Inc. Processing tracking and recognition data in gestural recognition systems
US10824238B2 (en) 2009-04-02 2020-11-03 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US10990454B2 (en) 2009-10-14 2021-04-27 Oblong Industries, Inc. Multi-process interactive systems and methods
DE112009000596B4 (en) * 2008-03-19 2021-05-20 Computime Ltd. Remote control and method therefor
US11051002B2 (en) 2009-06-17 2021-06-29 3Shape A/S Focus scanning apparatus
US11181938B2 (en) * 2012-08-31 2021-11-23 Blue Goji Llc Full body movement control of dual joystick operated devices
US20220050529A1 (en) * 2013-08-26 2022-02-17 Paypal, Inc. Gesture identification
US11269457B1 (en) 2021-02-03 2022-03-08 Apple Inc. Systems and methods for improved touch screen selectivity and sensitivity
US11455882B2 (en) * 2017-10-31 2022-09-27 Hewlett-Packard Development Company, L.P. Actuation module to control when a sensing module is responsive to events
US11467673B2 (en) 2019-10-24 2022-10-11 Samsung Electronics Co., Ltd Method for controlling camera and electronic device therefor
US11701208B2 (en) 2014-02-07 2023-07-18 3Shape A/S Detecting tooth shade

Families Citing this family (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8121361B2 (en) 2006-05-19 2012-02-21 The Queen's Medical Center Motion tracking system for real time adaptive imaging and spectroscopy
US9779403B2 (en) * 2007-12-07 2017-10-03 Jpmorgan Chase Bank, N.A. Mobile fraud prevention system and method
KR101509245B1 (en) * 2008-07-31 2015-04-08 삼성전자주식회사 User interface apparatus and method for using pattern recognition in a handheld terminal
US8213914B2 (en) * 2008-08-04 2012-07-03 Lg Electronics Inc. Mobile terminal capable of providing web browsing function and method of controlling the mobile terminal
JP5522349B2 (en) * 2009-04-14 2014-06-18 任天堂株式会社 Input system, information processing system, peripheral device control method, and operation device control program
KR101589501B1 (en) * 2009-08-24 2016-01-28 삼성전자주식회사 Method and apparatus for controlling zoom using touch screen
US8451312B2 (en) * 2010-01-06 2013-05-28 Apple Inc. Automatic video stream selection
JP5440222B2 (en) * 2010-02-03 2014-03-12 富士ゼロックス株式会社 Information processing apparatus and program
JP5659830B2 (en) * 2011-02-03 2015-01-28 ソニー株式会社 Control device, control method and program
WO2012123788A1 (en) * 2011-03-16 2012-09-20 Sony Ericsson Mobile Communications Ab System and method for providing direct access to an application when unlocking a consumer electronic device
US8959459B2 (en) * 2011-06-15 2015-02-17 Wms Gaming Inc. Gesture sensing enhancement system for a wagering game
US9606209B2 (en) 2011-08-26 2017-03-28 Kineticor, Inc. Methods, systems, and devices for intra-scan motion correction
US8949745B2 (en) 2011-10-21 2015-02-03 Konntech Inc. Device and method for selection of options by motion gestures
CN103095341A (en) * 2011-10-31 2013-05-08 联想(北京)有限公司 Data transmission control method and electronic equipment
CN103197774A (en) * 2012-01-09 2013-07-10 西安智意能电子科技有限公司 Method and system for mapping the motion track of an emitting light source to an application track
US9423877B2 (en) 2012-02-24 2016-08-23 Amazon Technologies, Inc. Navigation approaches for multi-dimensional input
WO2013180687A1 (en) 2012-05-29 2013-12-05 Hewlett-Packard Development Company, L.P. Translation of touch input into local input based on a translation profile for an application
WO2014008438A1 (en) * 2012-07-03 2014-01-09 Tourwrist, Inc Systems and methods for tracking user postures and motions to control display of and navigate panoramas
TW201403446A (en) * 2012-07-09 2014-01-16 Hon Hai Prec Ind Co Ltd System and method for displaying software interface
US9007465B1 (en) * 2012-08-31 2015-04-14 Vce Company, Llc Obtaining customer support for electronic system using first and second cameras
US9120226B2 (en) 2012-10-23 2015-09-01 Lincoln Global, Inc. System and method for remotely positioning an end effector
US10327708B2 (en) 2013-01-24 2019-06-25 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US9305365B2 (en) 2013-01-24 2016-04-05 Kineticor, Inc. Systems, devices, and methods for tracking moving targets
US9717461B2 (en) 2013-01-24 2017-08-01 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
CN105392423B (en) 2013-02-01 2018-08-17 凯内蒂科尔股份有限公司 Motion tracking system for real-time adaptive motion compensation in biomedical imaging
US20140228073A1 (en) * 2013-02-14 2014-08-14 Lsi Corporation Automatic presentation of an image from a camera responsive to detection of a particular type of movement of a user device
JP2015026891A (en) * 2013-07-24 2015-02-05 ソニー株式会社 Image processing device and storage medium
US10004462B2 (en) 2014-03-24 2018-06-26 Kineticor, Inc. Systems, methods, and devices for removing prospective motion correction from medical imaging scans
WO2016014718A1 (en) 2014-07-23 2016-01-28 Kineticor, Inc. Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
CN104680771A (en) * 2015-03-20 2015-06-03 蒋海兵 Device and method for testing infrared remote controller
US20160304004A1 (en) * 2015-04-16 2016-10-20 Thorley Industries Llc Child restraint system
US9943247B2 (en) 2015-07-28 2018-04-17 The University Of Hawai'i Systems, devices, and methods for detecting false movements for motion correction during a medical imaging scan
CN108697367A (en) 2015-11-23 2018-10-23 凯内蒂科尓股份有限公司 Systems, devices, and methods for tracking and compensating for patient motion during a medical imaging scan
US10127371B2 (en) * 2015-12-11 2018-11-13 Roku, Inc. User identification based on the motion of a device
CN106710196B (en) * 2016-12-12 2020-07-07 奇酷互联网络科技(深圳)有限公司 Method and apparatus for using images as a simulated remote control panel
US11042262B2 (en) * 2017-02-01 2021-06-22 Opentv, Inc. Menu modification based on controller manipulation data
CN107168162B (en) * 2017-05-25 2021-10-08 北京东软医疗设备有限公司 Control device
CN109871114B (en) * 2017-12-04 2023-02-03 北京搜狗科技发展有限公司 Keyboard operation method and device
CN110225415B (en) * 2018-03-01 2022-06-21 中兴通讯股份有限公司 Media file playing method
CN108563335B (en) * 2018-04-24 2021-03-23 网易(杭州)网络有限公司 Virtual reality interaction method and device, storage medium and electronic equipment

Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4812831A (en) * 1987-02-10 1989-03-14 Amp Incorporated Key switch with controllable illumination
US5112785A (en) * 1987-06-16 1992-05-12 Atochem Method for the treatment of a catalytic component on a porous metal oxide support for the polymerization of olefins in the gas phase and method of polymerizing olefins
US5142655A (en) * 1987-10-14 1992-08-25 Wang Laboratories, Inc. Computer input device using an orientation sensor
US5506605A (en) * 1992-07-27 1996-04-09 Paley; W. Bradford Three-dimensional mouse with tactile feedback
US5543588A (en) * 1992-06-08 1996-08-06 Synaptics, Incorporated Touch pad driven handheld computing device
US5598187A (en) * 1993-05-13 1997-01-28 Kabushiki Kaisha Toshiba Spatial motion pattern input system and input method
US5602566A (en) * 1993-08-24 1997-02-11 Hitachi, Ltd. Small-sized information processor capable of scrolling screen in accordance with tilt, and scrolling method therefor
US5734371A (en) * 1994-12-19 1998-03-31 Lucent Technologies Inc. Interactive pointing device
US5766015A (en) * 1996-07-11 1998-06-16 Digispeech (Israel) Ltd. Apparatus for interactive language training
US6008810A (en) * 1997-03-07 1999-12-28 International Business Machines Corporation Mobile client computer programmed for system message display
US6057554A (en) * 1997-05-12 2000-05-02 Plesko; George A. Reflective switch
US6088023A (en) * 1996-12-10 2000-07-11 Willow Design, Inc. Integrated pointing and drawing graphics system for computers
US6121960A (en) * 1996-08-28 2000-09-19 Via, Inc. Touch screen systems and methods
US6184847B1 (en) * 1998-09-22 2001-02-06 Vega Vista, Inc. Intuitive control of portable data displays
US6201554B1 (en) * 1999-01-12 2001-03-13 Ericsson Inc. Device control apparatus for hand-held data processing device
US6245014B1 (en) * 1999-11-18 2001-06-12 Atlantic Limited Partnership Fitness for duty testing device and method
US6288704B1 (en) * 1999-06-08 2001-09-11 Vega, Vista, Inc. Motion detection and tracking system to control navigation and display of object viewers
US20020093483A1 (en) * 2000-11-30 2002-07-18 Kaplan Alan Edward Display control for hand-held devices
US6466198B1 (en) * 1999-11-05 2002-10-15 Innoventions, Inc. View navigation and magnification of a hand-held device with a display
US6564144B1 (en) * 2002-01-10 2003-05-13 Navigation Technologies Corporation Method and system using a hand-gesture responsive device for collecting data for a geographic database
US6573883B1 (en) * 1998-06-24 2003-06-03 Hewlett Packard Development Company, L.P. Method and apparatus for controlling a computing device with gestures
US20040027330A1 (en) * 2001-03-29 2004-02-12 Bradski Gary R. Intuitive mobile device interface to virtual spaces
US20040061621A1 (en) * 2002-09-27 2004-04-01 Alps Electric Co., Ltd. Remote control system
US6791536B2 (en) * 2000-11-10 2004-09-14 Microsoft Corporation Simulating gestures of a pointing device using a stylus and providing feedback thereto
US20040178995A1 (en) * 2001-06-29 2004-09-16 Sterling Hans Rudolf Apparatus for sensing the position of a pointing object
US6834249B2 (en) * 2001-03-29 2004-12-21 Arraycomm, Inc. Method and apparatus for controlling a computing system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2256605B1 (en) 1998-01-26 2017-12-06 Apple Inc. Method and apparatus for integrating manual input
US5989608A (en) * 1998-07-15 1999-11-23 Mizuno; Maki Food container for cooking with microwave oven
DE29918753U1 (en) * 1999-08-09 2000-02-17 Keller Karl Food packaging
WO2001086920A2 (en) 2000-05-12 2001-11-15 Zvi Lapidot Apparatus and method for the kinematic control of hand-held devices
WO2003001340A2 (en) 2001-06-22 2003-01-03 Motion Sense Corporation Gesture recognition system and method
GB2378878B (en) 2001-06-28 2005-10-05 Ubinetics Ltd A handheld display device

Patent Citations (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4812831A (en) * 1987-02-10 1989-03-14 Amp Incorporated Key switch with controllable illumination
US5112785A (en) * 1987-06-16 1992-05-12 Atochem Method for the treatment of a catalytic component on a porous metal oxide support for the polymerization of olefins in the gas phase and method of polymerizing olefins
US5142655A (en) * 1987-10-14 1992-08-25 Wang Laboratories, Inc. Computer input device using an orientation sensor
US5543588A (en) * 1992-06-08 1996-08-06 Synaptics, Incorporated Touch pad driven handheld computing device
US5506605A (en) * 1992-07-27 1996-04-09 Paley; W. Bradford Three-dimensional mouse with tactile feedback
US5598187A (en) * 1993-05-13 1997-01-28 Kabushiki Kaisha Toshiba Spatial motion pattern input system and input method
US5602566A (en) * 1993-08-24 1997-02-11 Hitachi, Ltd. Small-sized information processor capable of scrolling screen in accordance with tilt, and scrolling method therefor
US5734371A (en) * 1994-12-19 1998-03-31 Lucent Technologies Inc. Interactive pointing device
US5766015A (en) * 1996-07-11 1998-06-16 Digispeech (Israel) Ltd. Apparatus for interactive language training
US6121960A (en) * 1996-08-28 2000-09-19 Via, Inc. Touch screen systems and methods
US6088023A (en) * 1996-12-10 2000-07-11 Willow Design, Inc. Integrated pointing and drawing graphics system for computers
US6008810A (en) * 1997-03-07 1999-12-28 International Business Machines Corporation Mobile client computer programmed for system message display
US6057554A (en) * 1997-05-12 2000-05-02 Plesko; George A. Reflective switch
US6573883B1 (en) * 1998-06-24 2003-06-03 Hewlett Packard Development Company, L.P. Method and apparatus for controlling a computing device with gestures
US6184847B1 (en) * 1998-09-22 2001-02-06 Vega Vista, Inc. Intuitive control of portable data displays
US6201554B1 (en) * 1999-01-12 2001-03-13 Ericsson Inc. Device control apparatus for hand-held data processing device
US6288704B1 (en) * 1999-06-08 2001-09-11 Vega, Vista, Inc. Motion detection and tracking system to control navigation and display of object viewers
US6466198B1 (en) * 1999-11-05 2002-10-15 Innoventions, Inc. View navigation and magnification of a hand-held device with a display
US6245014B1 (en) * 1999-11-18 2001-06-12 Atlantic Limited Partnership Fitness for duty testing device and method
US20020190947A1 (en) * 2000-04-05 2002-12-19 Feinstein David Y. View navigation and magnification of a hand-held device with a display
US6791536B2 (en) * 2000-11-10 2004-09-14 Microsoft Corporation Simulating gestures of a pointing device using a stylus and providing feedback thereto
US20020093483A1 (en) * 2000-11-30 2002-07-18 Kaplan Alan Edward Display control for hand-held devices
US20040027330A1 (en) * 2001-03-29 2004-02-12 Bradski Gary R. Intuitive mobile device interface to virtual spaces
US6834249B2 (en) * 2001-03-29 2004-12-21 Arraycomm, Inc. Method and apparatus for controlling a computing system
US20040178995A1 (en) * 2001-06-29 2004-09-16 Sterling Hans Rudolf Apparatus for sensing the position of a pointing object
US6564144B1 (en) * 2002-01-10 2003-05-13 Navigation Technologies Corporation Method and system using a hand-gesture responsive device for collecting data for a geographic database
US20040061621A1 (en) * 2002-09-27 2004-04-01 Alps Electric Co., Ltd. Remote control system

Cited By (331)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8758136B2 (en) 1999-02-26 2014-06-24 Mq Gaming, Llc Multi-platform gaming systems and methods
US10300374B2 (en) 1999-02-26 2019-05-28 Mq Gaming, Llc Multi-platform gaming systems and methods
US9186585B2 (en) 1999-02-26 2015-11-17 Mq Gaming, Llc Multi-platform gaming systems and methods
US9861887B1 (en) 1999-02-26 2018-01-09 Mq Gaming, Llc Multi-platform gaming systems and methods
US8888576B2 (en) 1999-02-26 2014-11-18 Mq Gaming, Llc Multi-media interactive play system
US9731194B2 (en) 1999-02-26 2017-08-15 Mq Gaming, Llc Multi-platform gaming systems and methods
US9468854B2 (en) 1999-02-26 2016-10-18 Mq Gaming, Llc Multi-platform gaming systems and methods
US8790180B2 (en) 2000-02-22 2014-07-29 Creative Kingdoms, Llc Interactive game and associated wireless toy
US8368648B2 (en) 2000-02-22 2013-02-05 Creative Kingdoms, Llc Portable interactive toy with radio frequency tracking device
US10307671B2 (en) 2000-02-22 2019-06-04 Mq Gaming, Llc Interactive entertainment system
US9474962B2 (en) 2000-02-22 2016-10-25 Mq Gaming, Llc Interactive entertainment system
US8686579B2 (en) 2000-02-22 2014-04-01 Creative Kingdoms, Llc Dual-range wireless controller
US8531050B2 (en) 2000-02-22 2013-09-10 Creative Kingdoms, Llc Wirelessly powered gaming device
US8491389B2 (en) 2000-02-22 2013-07-23 Creative Kingdoms, Llc. Motion-sensitive input device and interactive gaming system
US9579568B2 (en) 2000-02-22 2017-02-28 Mq Gaming, Llc Dual-range wireless interactive entertainment device
US8475275B2 (en) 2000-02-22 2013-07-02 Creative Kingdoms, Llc Interactive toys and games connecting physical and virtual play environments
US8708821B2 (en) 2000-02-22 2014-04-29 Creative Kingdoms, Llc Systems and methods for providing interactive game play
US9713766B2 (en) 2000-02-22 2017-07-25 Mq Gaming, Llc Dual-range wireless interactive entertainment device
US8814688B2 (en) 2000-02-22 2014-08-26 Creative Kingdoms, Llc Customizable toy for playing a wireless interactive game having both physical and virtual elements
US8184097B1 (en) 2000-02-22 2012-05-22 Creative Kingdoms, Llc Interactive gaming system and method using motion-sensitive input device
US8169406B2 (en) 2000-02-22 2012-05-01 Creative Kingdoms, Llc Motion-sensitive wand controller for a game
US8164567B1 (en) 2000-02-22 2012-04-24 Creative Kingdoms, Llc Motion-sensitive game controller with optional display screen
US9814973B2 (en) 2000-02-22 2017-11-14 Mq Gaming, Llc Interactive entertainment system
US8915785B2 (en) 2000-02-22 2014-12-23 Creative Kingdoms, Llc Interactive entertainment system
US8089458B2 (en) 2000-02-22 2012-01-03 Creative Kingdoms, Llc Toy devices and methods for providing an interactive play experience
US10188953B2 (en) 2000-02-22 2019-01-29 Mq Gaming, Llc Dual-range wireless interactive entertainment device
US9149717B2 (en) 2000-02-22 2015-10-06 Mq Gaming, Llc Dual-range wireless interactive entertainment device
US9480929B2 (en) 2000-10-20 2016-11-01 Mq Gaming, Llc Toy incorporating RFID tag
US8961260B2 (en) 2000-10-20 2015-02-24 Mq Gaming, Llc Toy incorporating RFID tracking device
US9931578B2 (en) 2000-10-20 2018-04-03 Mq Gaming, Llc Toy incorporating RFID tag
US8753165B2 (en) 2000-10-20 2014-06-17 Mq Gaming, Llc Wireless toy systems and methods for interactive entertainment
US9320976B2 (en) 2000-10-20 2016-04-26 Mq Gaming, Llc Wireless toy systems and methods for interactive entertainment
US10307683B2 (en) 2000-10-20 2019-06-04 Mq Gaming, Llc Toy incorporating RFID tag
US8711094B2 (en) 2001-02-22 2014-04-29 Creative Kingdoms, Llc Portable gaming device and gaming system combining both physical and virtual play elements
US10179283B2 (en) 2001-02-22 2019-01-15 Mq Gaming, Llc Wireless entertainment device, system, and method
US8913011B2 (en) 2001-02-22 2014-12-16 Creative Kingdoms, Llc Wireless entertainment device, system, and method
US8384668B2 (en) 2001-02-22 2013-02-26 Creative Kingdoms, Llc Portable gaming device and gaming system combining both physical and virtual play elements
US9162148B2 (en) 2001-02-22 2015-10-20 Mq Gaming, Llc Wireless entertainment device, system, and method
US9393491B2 (en) 2001-02-22 2016-07-19 Mq Gaming, Llc Wireless entertainment device, system, and method
US10758818B2 (en) 2001-02-22 2020-09-01 Mq Gaming, Llc Wireless entertainment device, system, and method
US8248367B1 (en) 2001-02-22 2012-08-21 Creative Kingdoms, Llc Wireless gaming system combining both physical and virtual play elements
US9737797B2 (en) 2001-02-22 2017-08-22 Mq Gaming, Llc Wireless entertainment device, system, and method
US20110004329A1 (en) * 2002-02-07 2011-01-06 Microsoft Corporation Controlling electronic components in a computing environment
US8456419B2 (en) 2002-02-07 2013-06-04 Microsoft Corporation Determining a position of a pointing device
US10488950B2 (en) 2002-02-07 2019-11-26 Microsoft Technology Licensing, Llc Manipulating an object utilizing a pointing device
US9454244B2 (en) 2002-02-07 2016-09-27 Microsoft Technology Licensing, Llc Recognizing a movement of a pointing device
US8707216B2 (en) 2002-02-07 2014-04-22 Microsoft Corporation Controlling objects via gesturing
US10331228B2 (en) 2002-02-07 2019-06-25 Microsoft Technology Licensing, Llc System and method for determining 3D orientation of a pointing device
US20080192007A1 (en) * 2002-02-07 2008-08-14 Microsoft Corporation Determining a position of a pointing device
US8827810B2 (en) 2002-04-05 2014-09-09 Mq Gaming, Llc Methods for providing interactive entertainment
US11278796B2 (en) 2002-04-05 2022-03-22 Mq Gaming, Llc Methods and systems for providing personalized interactive entertainment
US8608535B2 (en) 2002-04-05 2013-12-17 Mq Gaming, Llc Systems and methods for providing an interactive game
US9616334B2 (en) 2002-04-05 2017-04-11 Mq Gaming, Llc Multi-platform gaming system using RFID-tagged toys
US8702515B2 (en) 2002-04-05 2014-04-22 Mq Gaming, Llc Multi-platform gaming system using RFID-tagged toys
US10507387B2 (en) 2002-04-05 2019-12-17 Mq Gaming, Llc System and method for playing an interactive game
US10010790B2 (en) 2002-04-05 2018-07-03 Mq Gaming, Llc System and method for playing an interactive game
US9463380B2 (en) 2002-04-05 2016-10-11 Mq Gaming, Llc System and method for playing an interactive game
US10478719B2 (en) 2002-04-05 2019-11-19 Mq Gaming, Llc Methods and systems for providing personalized interactive entertainment
US9272206B2 (en) 2002-04-05 2016-03-01 Mq Gaming, Llc System and method for playing an interactive game
US8226493B2 (en) 2002-08-01 2012-07-24 Creative Kingdoms, Llc Interactive play devices for water play attractions
US8373659B2 (en) 2003-03-25 2013-02-12 Creative Kingdoms, Llc Wirelessly-powered toy for gaming
US10022624B2 (en) 2003-03-25 2018-07-17 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US10583357B2 (en) 2003-03-25 2020-03-10 Mq Gaming, Llc Interactive gaming toy
US11052309B2 (en) 2003-03-25 2021-07-06 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US9039533B2 (en) 2003-03-25 2015-05-26 Creative Kingdoms, Llc Wireless interactive game having both physical and virtual elements
US20090268945A1 (en) * 2003-03-25 2009-10-29 Microsoft Corporation Architecture for controlling a computer using hand gestures
US9652042B2 (en) 2003-03-25 2017-05-16 Microsoft Technology Licensing, Llc Architecture for controlling a computer using hand gestures
US10369463B2 (en) 2003-03-25 2019-08-06 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US8745541B2 (en) 2003-03-25 2014-06-03 Microsoft Corporation Architecture for controlling a computer using hand gestures
US8961312B2 (en) 2003-03-25 2015-02-24 Creative Kingdoms, Llc Motion-sensitive controller and associated gaming applications
US10551930B2 (en) 2003-03-25 2020-02-04 Microsoft Technology Licensing, Llc System and method for executing a process using accelerometer signals
US9707478B2 (en) 2003-03-25 2017-07-18 Mq Gaming, Llc Motion-sensitive controller and associated gaming applications
US9993724B2 (en) 2003-03-25 2018-06-12 Mq Gaming, Llc Interactive gaming toy
US9446319B2 (en) 2003-03-25 2016-09-20 Mq Gaming, Llc Interactive gaming toy
US9393500B2 (en) 2003-03-25 2016-07-19 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US9770652B2 (en) 2003-03-25 2017-09-26 Mq Gaming, Llc Wireless interactive game having both physical and virtual elements
US20100146455A1 (en) * 2003-03-25 2010-06-10 Microsoft Corporation Architecture For Controlling A Computer Using Hand Gestures
US20050227217A1 (en) * 2004-03-31 2005-10-13 Wilson Andrew D Template matching on interactive surface
US7907128B2 (en) 2004-04-29 2011-03-15 Microsoft Corporation Interaction between objects and a virtual environment display
US7787706B2 (en) 2004-06-14 2010-08-31 Microsoft Corporation Method for controlling an intensity of an infrared source used to detect objects adjacent to an interactive display surface
US20050277071A1 (en) * 2004-06-14 2005-12-15 Microsoft Corporation Method for controlling an intensity of an infrared source used to detect objects adjacent to an interactive display surface
US8670632B2 (en) 2004-06-16 2014-03-11 Microsoft Corporation System for reducing effects of undesired signals in an infrared imaging system
US8165422B2 (en) 2004-06-16 2012-04-24 Microsoft Corporation Method and system for reducing effects of undesired signals in an infrared imaging system
US20090262070A1 (en) * 2004-06-16 2009-10-22 Microsoft Corporation Method and System for Reducing Effects of Undesired Signals in an Infrared Imaging System
US8460103B2 (en) 2004-06-18 2013-06-11 Igt Gesture controlled casino gaming system
US9230395B2 (en) 2004-06-18 2016-01-05 Igt Control of wager-based game using gesture recognition
US9798391B2 (en) 2004-06-18 2017-10-24 Igt Control of wager-based game using gesture recognition
US8684839B2 (en) 2004-06-18 2014-04-01 Igt Control of wager-based game using gesture recognition
US8560972B2 (en) 2004-08-10 2013-10-15 Microsoft Corporation Surface UI for gesture-based interaction
US8668584B2 (en) 2004-08-19 2014-03-11 Igt Virtual input system
US9606674B2 (en) 2004-08-19 2017-03-28 Iii Holdings 1, Llc Virtual input system
US9116543B2 (en) 2004-08-19 2015-08-25 Iii Holdings 1, Llc Virtual input system
US20110212778A1 (en) * 2004-08-19 2011-09-01 Igt Virtual input system
US7942744B2 (en) 2004-08-19 2011-05-17 Igt Virtual input system
US20060040739A1 (en) * 2004-08-19 2006-02-23 Igt, A Nevada Corporation Virtual input system
US8398488B2 (en) 2004-08-19 2013-03-19 Igt Virtual input system
US10564776B2 (en) 2004-08-19 2020-02-18 American Patents Llc Virtual input system
US9675878B2 (en) 2004-09-29 2017-06-13 Mq Gaming, Llc System and method for playing a virtual game by sensing physical movements
US9606630B2 (en) 2005-02-08 2017-03-28 Oblong Industries, Inc. System and method for gesture based control system
US7492367B2 (en) 2005-03-10 2009-02-17 Motus Corporation Apparatus, system and method for interpreting and reproducing physical motion
US20060202997A1 (en) * 2005-03-10 2006-09-14 Lavalley Zachery Apparatus, system and method for interpreting and reproducing physical motion
WO2006127270A1 (en) * 2005-05-13 2006-11-30 Robert Bosch Gmbh Sensor-initiated exchange of information between devices
US8339363B2 (en) * 2005-05-13 2012-12-25 Robert Bosch Gmbh Sensor-initiated exchange of information between devices
US20060256074A1 (en) * 2005-05-13 2006-11-16 Robert Bosch Gmbh Sensor-initiated exchange of information between devices
US8125444B2 (en) * 2005-07-04 2012-02-28 Bang And Olufsen A/S Unit, an assembly and a method for controlling in a dynamic egocentric interactive space
US20090033618A1 (en) * 2005-07-04 2009-02-05 Rune Norager Unit, an Assembly and a Method for Controlling in a Dynamic Egocentric Interactive Space
US9498728B2 (en) 2005-08-22 2016-11-22 Nintendo Co., Ltd. Game operating device
US10155170B2 (en) 2005-08-22 2018-12-18 Nintendo Co., Ltd. Game operating device with holding portion detachably holding an electronic device
US10238978B2 (en) 2005-08-22 2019-03-26 Nintendo Co., Ltd. Game operating device
US10661183B2 (en) 2005-08-22 2020-05-26 Nintendo Co., Ltd. Game operating device
US9700806B2 (en) 2005-08-22 2017-07-11 Nintendo Co., Ltd. Game operating device
US9011248B2 (en) 2005-08-22 2015-04-21 Nintendo Co., Ltd. Game operating device
US8313379B2 (en) 2005-08-22 2012-11-20 Nintendo Co., Ltd. Video game system with wireless modular handheld controller
US9044671B2 (en) 2005-08-24 2015-06-02 Nintendo Co., Ltd. Game controller and game system
US9498709B2 (en) 2005-08-24 2016-11-22 Nintendo Co., Ltd. Game controller and game system
US8267786B2 (en) 2005-08-24 2012-09-18 Nintendo Co., Ltd. Game controller and game system
US11027190B2 (en) 2005-08-24 2021-06-08 Nintendo Co., Ltd. Game controller and game system
US10137365B2 (en) 2005-08-24 2018-11-27 Nintendo Co., Ltd. Game controller and game system
US8834271B2 (en) 2005-08-24 2014-09-16 Nintendo Co., Ltd. Game controller and game system
US8870655B2 (en) 2005-08-24 2014-10-28 Nintendo Co., Ltd. Wireless game controllers
US8409003B2 (en) 2005-08-24 2013-04-02 Nintendo Co., Ltd. Game controller and game system
US8308563B2 (en) 2005-08-30 2012-11-13 Nintendo Co., Ltd. Game system and storage medium having game program stored thereon
US7911444B2 (en) 2005-08-31 2011-03-22 Microsoft Corporation Input method for surface of interactive display
US8519952B2 (en) 2005-08-31 2013-08-27 Microsoft Corporation Input method for surface of interactive display
US20070046625A1 (en) * 2005-08-31 2007-03-01 Microsoft Corporation Input method for surface of interactive display
US8157651B2 (en) 2005-09-12 2012-04-17 Nintendo Co., Ltd. Information processing program
US8708824B2 (en) 2005-09-12 2014-04-29 Nintendo Co., Ltd. Information processing program
US8430753B2 (en) 2005-09-15 2013-04-30 Nintendo Co., Ltd. Video game system with wireless modular handheld controller
US7927216B2 (en) 2005-09-15 2011-04-19 Nintendo Co., Ltd. Video game system with wireless modular handheld controller
USRE45905E1 (en) 2005-09-15 2016-03-01 Nintendo Co., Ltd. Video game system with wireless modular handheld controller
US20070157095A1 (en) * 2005-12-29 2007-07-05 Microsoft Corporation Orientation free user interface
US8060840B2 (en) 2005-12-29 2011-11-15 Microsoft Corporation Orientation free user interface
US20070200658A1 (en) * 2006-01-06 2007-08-30 Samsung Electronics Co., Ltd. Apparatus and method for transmitting control commands in home network system
US10061392B2 (en) * 2006-02-08 2018-08-28 Oblong Industries, Inc. Control system for navigating a principal dimension of a data space
US20170068324A1 (en) * 2006-02-08 2017-03-09 Oblong Industries, Inc. Control system for navigating a principal dimension of a data space
US9910497B2 (en) 2006-02-08 2018-03-06 Oblong Industries, Inc. Gestural control of autonomous and semi-autonomous systems
US9495228B2 (en) 2006-02-08 2016-11-15 Oblong Industries, Inc. Multi-process interactive systems and methods
US9471147B2 (en) 2006-02-08 2016-10-18 Oblong Industries, Inc. Control system for navigating a principal dimension of a data space
US10565030B2 (en) 2006-02-08 2020-02-18 Oblong Industries, Inc. Multi-process interactive systems and methods
US9823747B2 (en) * 2006-02-08 2017-11-21 Oblong Industries, Inc. Spatial, multi-modal control device for use with spatial operating system
US20110025603A1 (en) * 2006-02-08 2011-02-03 Underkoffler John S Spatial, Multi-Modal Control Device For Use With Spatial Operating System
US7774155B2 (en) 2006-03-10 2010-08-10 Nintendo Co., Ltd. Accelerometer-based controller
US9390229B1 (en) 2006-04-26 2016-07-12 Dp Technologies, Inc. Method and apparatus for a health phone
US9495015B1 (en) 2006-07-11 2016-11-15 Dp Technologies, Inc. Method and apparatus for utilizing motion user interface to determine command availability
US8902154B1 (en) 2006-07-11 2014-12-02 Dp Technologies, Inc. Method and apparatus for utilizing motion user interface
US8049719B2 (en) 2006-08-08 2011-11-01 Microsoft Corporation Virtual controller for visual displays
US7907117B2 (en) 2006-08-08 2011-03-15 Microsoft Corporation Virtual controller for visual displays
US20090208057A1 (en) * 2006-08-08 2009-08-20 Microsoft Corporation Virtual controller for visual displays
US20080036732A1 (en) * 2006-08-08 2008-02-14 Microsoft Corporation Virtual Controller For Visual Displays
US8115732B2 (en) 2006-08-08 2012-02-14 Microsoft Corporation Virtual controller for visual displays
US20110025601A1 (en) * 2006-08-08 2011-02-03 Microsoft Corporation Virtual Controller For Visual Displays
US8552976B2 (en) 2006-08-08 2013-10-08 Microsoft Corporation Virtual controller for visual displays
KR101141370B1 (en) * 2007-01-17 2012-05-03 소니 컴퓨터 엔터테인먼트 인코포레이티드 Method and system for measuring a user's level of attention to content
US20080169930A1 (en) * 2007-01-17 2008-07-17 Sony Computer Entertainment Inc. Method and system for measuring a user's level of attention to content
US7716008B2 (en) 2007-01-19 2010-05-11 Nintendo Co., Ltd. Acceleration data processing program, and storage medium, and acceleration data processing apparatus for use with the same
US8620353B1 (en) 2007-01-26 2013-12-31 Dp Technologies, Inc. Automatic sharing and publication of multimedia from a mobile device
US8212857B2 (en) 2007-01-26 2012-07-03 Microsoft Corporation Alternating light sources to reduce specular reflection
US10744390B1 (en) 2007-02-08 2020-08-18 Dp Technologies, Inc. Human activity monitoring device with activity identification
US8949070B1 (en) 2007-02-08 2015-02-03 Dp Technologies, Inc. Human activity monitoring device with activity identification
US10664327B2 (en) 2007-04-24 2020-05-26 Oblong Industries, Inc. Proteins, pools, and slawx in processing environments
US9804902B2 (en) 2007-04-24 2017-10-31 Oblong Industries, Inc. Proteins, pools, and slawx in processing environments
US9940161B1 (en) 2007-07-27 2018-04-10 Dp Technologies, Inc. Optimizing preemptive operating system with motion sensing
US8555282B1 (en) 2007-07-27 2013-10-08 Dp Technologies, Inc. Optimizing preemptive operating system with motion sensing
US9183044B2 (en) 2007-07-27 2015-11-10 Dp Technologies, Inc. Optimizing preemptive operating system with motion sensing
US10754683B1 (en) 2007-07-27 2020-08-25 Dp Technologies, Inc. Optimizing preemptive operating system with motion sensing
US8144944B2 (en) 2007-08-14 2012-03-27 Olympus Corporation Image sharing system and method
US8433537B2 (en) 2007-09-07 2013-04-30 Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno Identifying mobile devices
EP2034389A1 (en) * 2007-09-07 2009-03-11 Nederlandse Organisatie voor toegepast- natuurwetenschappelijk onderzoek TNO Method and system for linking appliances
US20110004436A1 (en) * 2007-09-07 2011-01-06 Berco Beute Identifying mobile devices
WO2009031899A1 (en) * 2007-09-07 2009-03-12 Nederlandse Organisatie Voor Toegepast-Natuurwetenschappelijk Onderzoek Tno Identifying mobile devices
US20140086428A1 (en) * 2007-09-27 2014-03-27 Samsung Electronics Co., Ltd. Portable terminal having bluetooth module and bluetooth communication method thereof
US20180309951A1 (en) * 2007-10-30 2018-10-25 Samsung Electronics Co., Ltd. Broadcast receiving apparatus and control method thereof
US20090113354A1 (en) * 2007-10-30 2009-04-30 Samsung Electronics Co., Ltd. Broadcast receiving apparatus and control method thereof
US9171454B2 (en) 2007-11-14 2015-10-27 Microsoft Technology Licensing, Llc Magic wand
US20090121894A1 (en) * 2007-11-14 2009-05-14 Microsoft Corporation Magic wand
US20090174567A1 (en) * 2008-01-04 2009-07-09 Primax Electronics Ltd. Remote controller for controlling playback of multimedia file
US7839297B2 (en) * 2008-01-04 2010-11-23 Primax Electronics Ltd. Remote controller for controlling playback of multimedia file
DE112009000596B4 (en) * 2008-03-19 2021-05-20 Computime Ltd. Remote control and method therefor
US11209913B2 (en) 2008-03-19 2021-12-28 Computime Ltd. User action remote control
US20220083155A1 (en) * 2008-03-19 2022-03-17 Computime Ltd. User Action Remote Control
US20090252311A1 (en) * 2008-04-07 2009-10-08 Martijn Kuiken Electronic device with motion controlled functions
WO2009125244A1 (en) * 2008-04-07 2009-10-15 Sony Ericsson Mobile Communications Ab Electronic device with motion controlled functions
CN101983394A (en) * 2008-04-07 2011-03-02 索尼爱立信移动通讯有限公司 Electronic device with motion controlled functions
US8170186B2 (en) * 2008-04-07 2012-05-01 Sony Mobile Communications Ab Electronic device with motion controlled functions
US10739865B2 (en) 2008-04-24 2020-08-11 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US10255489B2 (en) 2008-04-24 2019-04-09 Oblong Industries, Inc. Adaptive tracking system for spatial input devices
US9984285B2 (en) 2008-04-24 2018-05-29 Oblong Industries, Inc. Adaptive tracking system for spatial input devices
US10067571B2 (en) 2008-04-24 2018-09-04 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US9740922B2 (en) 2008-04-24 2017-08-22 Oblong Industries, Inc. Adaptive tracking system for spatial input devices
US9495013B2 (en) 2008-04-24 2016-11-15 Oblong Industries, Inc. Multi-modal gestural interface
US10235412B2 (en) 2008-04-24 2019-03-19 Oblong Industries, Inc. Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes
US10353483B2 (en) 2008-04-24 2019-07-16 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US9779131B2 (en) 2008-04-24 2017-10-03 Oblong Industries, Inc. Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes
US10521021B2 (en) 2008-04-24 2019-12-31 Oblong Industries, Inc. Detecting, representing, and interpreting three-space input: gestural continuum subsuming freespace, proximal, and surface-contact modes
US8320578B2 (en) 2008-04-30 2012-11-27 Dp Technologies, Inc. Headset
US20110068891A1 (en) * 2008-05-14 2011-03-24 Sist&Matica S.R.L. Remote control system
WO2009139010A1 (en) * 2008-05-14 2009-11-19 Sist&Matica S.R.L. Remote control system
US8358194B2 (en) 2008-05-14 2013-01-22 Sist & Matica S.R.L. Remote control system
US8285344B2 (en) 2008-05-21 2012-10-09 DP Technlogies, Inc. Method and apparatus for adjusting audio for a user environment
US8200246B2 (en) * 2008-06-19 2012-06-12 Microsoft Corporation Data synchronization for devices supporting direction-based services
US8700301B2 (en) 2008-06-19 2014-04-15 Microsoft Corporation Mobile computing devices, architecture and user interfaces based on dynamic direction information
US20090318168A1 (en) * 2008-06-19 2009-12-24 Microsoft Corporation Data synchronization for devices supporting direction-based services
US8615257B2 (en) 2008-06-19 2013-12-24 Microsoft Corporation Data synchronization for devices supporting direction-based services
US10057724B2 (en) 2008-06-19 2018-08-21 Microsoft Technology Licensing, Llc Predictive services for devices supporting dynamic direction information
US8700302B2 (en) 2008-06-19 2014-04-15 Microsoft Corporation Mobile computing devices, architecture and user interfaces based on dynamic direction information
US20090315995A1 (en) * 2008-06-19 2009-12-24 Microsoft Corporation Mobile computing devices, architecture and user interfaces based on dynamic direction information
US9200901B2 (en) 2008-06-19 2015-12-01 Microsoft Technology Licensing, Llc Predictive services for devices supporting dynamic direction information
US20090319181A1 (en) * 2008-06-20 2009-12-24 Microsoft Corporation Data services based on gesture and location information of device
US10509477B2 (en) 2008-06-20 2019-12-17 Microsoft Technology Licensing, Llc Data services based on gesture and location information of device
US9703385B2 (en) 2008-06-20 2017-07-11 Microsoft Technology Licensing, Llc Data services based on gesture and location information of device
US8868374B2 (en) 2008-06-20 2014-10-21 Microsoft Corporation Data services based on gesture and location information of device
US8467991B2 (en) 2008-06-20 2013-06-18 Microsoft Corporation Data services based on gesture and location information of device
US20100008255A1 (en) * 2008-06-20 2010-01-14 Microsoft Corporation Mesh network services for devices supporting dynamic direction information
US8996332B2 (en) 2008-06-24 2015-03-31 Dp Technologies, Inc. Program setting adjustments based on activity identification
US11249104B2 (en) 2008-06-24 2022-02-15 Huawei Technologies Co., Ltd. Program setting adjustments based on activity identification
US9797920B2 (en) 2008-06-24 2017-10-24 DPTechnologies, Inc. Program setting adjustments based on activity identification
US20100013762A1 (en) * 2008-07-18 2010-01-21 Alcatel- Lucent User device for gesture based exchange of information, methods for gesture based exchange of information between a plurality of user devices, and related devices and systems
EP2146490A1 (en) * 2008-07-18 2010-01-20 Alcatel, Lucent User device for gesture based exchange of information, methods for gesture based exchange of information between a plurality of user devices, and related devices and systems
US20110159958A1 (en) * 2008-07-23 2011-06-30 Sega Corporation Game device, method for controlling game, game control program and computer readable recording medium storing program
US20100031202A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation User-defined gesture set for surface computing
US20100026470A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation Fusing rfid and vision for surface object tracking
US8847739B2 (en) 2008-08-04 2014-09-30 Microsoft Corporation Fusing RFID and vision for surface object tracking
US20100031203A1 (en) * 2008-08-04 2010-02-04 Microsoft Corporation User-defined gesture set for surface computing
US8872646B2 (en) 2008-10-08 2014-10-28 Dp Technologies, Inc. Method and system for waking up a device due to motion
US8282487B2 (en) 2008-10-23 2012-10-09 Microsoft Corporation Determining orientation in an external reference frame
EP2362299A1 (en) * 2008-11-28 2011-08-31 Fujitsu Limited Control device, control system, control method, and computer program
EP2362299A4 (en) * 2008-11-28 2013-04-03 Fujitsu Ltd Control device, control system, control method, and computer program
US20110221623A1 (en) * 2008-11-28 2011-09-15 Fujitsu Limited Control device, control system and control method
US9111441B2 (en) * 2008-11-28 2015-08-18 Fujitsu Limited Control device, control system and control method
US10409381B2 (en) 2008-12-15 2019-09-10 Microsoft Technology Licensing, Llc Gestures, interactions, and common ground in a surface computing environment
US9134798B2 (en) * 2008-12-15 2015-09-15 Microsoft Technology Licensing, Llc Gestures, interactions, and common ground in a surface computing environment
US20100149090A1 (en) * 2008-12-15 2010-06-17 Microsoft Corporation Gestures, interactions, and common ground in a surface computing environment
US20100223549A1 (en) * 2009-02-27 2010-09-02 Greg Edwards System and method for controlling entertainment devices using a display
US9880635B2 (en) 2009-04-02 2018-01-30 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US10296099B2 (en) 2009-04-02 2019-05-21 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US9471149B2 (en) 2009-04-02 2016-10-18 Oblong Industries, Inc. Control system for navigating a principal dimension of a data space
US9684380B2 (en) 2009-04-02 2017-06-20 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US9471148B2 (en) 2009-04-02 2016-10-18 Oblong Industries, Inc. Control system for navigating a principal dimension of a data space
US10642364B2 (en) 2009-04-02 2020-05-05 Oblong Industries, Inc. Processing tracking and recognition data in gestural recognition systems
US10656724B2 (en) 2009-04-02 2020-05-19 Oblong Industries, Inc. Operating environment comprising multiple client devices, multiple displays, multiple users, and gestural control
US9952673B2 (en) 2009-04-02 2018-04-24 Oblong Industries, Inc. Operating environment comprising multiple client devices, multiple displays, multiple users, and gestural control
US10824238B2 (en) 2009-04-02 2020-11-03 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
US9317128B2 (en) 2009-04-02 2016-04-19 Oblong Industries, Inc. Remote devices used in a markerless installation of a spatial operating environment incorporating gestural control
US9740293B2 (en) 2009-04-02 2017-08-22 Oblong Industries, Inc. Operating environment with gestural control and multiple client devices, displays, and users
TWI499226B (en) * 2009-04-22 2015-09-01 Univ Southern Taiwan Remote control using a single button to generate a variety of remote control signals
WO2010138520A1 (en) * 2009-05-26 2010-12-02 Dp Technologies, Inc. Method and apparatus for a motion state aware headset
US9529437B2 (en) 2009-05-26 2016-12-27 Dp Technologies, Inc. Method and apparatus for a motion state aware device
CN102460510A (en) * 2009-05-27 2012-05-16 奥布隆工业有限公司 Spatial, multi-modal control device for use with spatial operating system
EP2438534A4 (en) * 2009-06-05 2016-05-18 Microsoft Technology Licensing Llc Scrubbing variable content paths
US11622102B2 (en) 2009-06-17 2023-04-04 3Shape A/S Intraoral scanning apparatus
US11051002B2 (en) 2009-06-17 2021-06-29 3Shape A/S Focus scanning apparatus
US11076146B1 (en) 2009-06-17 2021-07-27 3Shape A/S Focus scanning apparatus
US11368667B2 (en) 2009-06-17 2022-06-21 3Shape A/S Intraoral scanning apparatus
US11539937B2 (en) 2009-06-17 2022-12-27 3Shape A/S Intraoral scanning apparatus
US11671582B2 (en) 2009-06-17 2023-06-06 3Shape A/S Intraoral scanning apparatus
US11831815B2 (en) 2009-06-17 2023-11-28 3Shape A/S Intraoral scanning apparatus
US9661468B2 (en) 2009-07-07 2017-05-23 Microsoft Technology Licensing, Llc System and method for converting gestures into digital graffiti
EP2275907A3 (en) * 2009-07-16 2014-04-02 Apple Inc. Ground detection for touch sensitive device
US10359884B2 (en) 2009-07-16 2019-07-23 Apple Inc. Ground detection for touch sensitive device
US9632622B2 (en) 2009-07-16 2017-04-25 Apple Inc. Ground detection for touch sensitive device
EP2797015A1 (en) * 2009-10-02 2014-10-29 Qualcomm Incorporated Device Movement User Interface Gestures for File Sharing Functionality
EP2483810A1 (en) * 2009-10-02 2012-08-08 Qualcomm Incorporated Device movement user interface gestures for file sharing functionality
EP2483811A2 (en) * 2009-10-02 2012-08-08 Qualcomm Incorporated User interface gestures and methods for providing file sharing functionality
US10990454B2 (en) 2009-10-14 2021-04-27 Oblong Industries, Inc. Multi-process interactive systems and methods
US9933852B2 (en) 2009-10-14 2018-04-03 Oblong Industries, Inc. Multi-process interactive systems and methods
WO2011054720A1 (en) * 2009-11-05 2011-05-12 Continental Automotive Gmbh Portable remote control device for a vehicle
WO2011054717A1 (en) * 2009-11-05 2011-05-12 Continental Automotive Gmbh Portable remote control device for a vehicle
KR20110062704A (en) * 2009-12-04 2011-06-10 엘지전자 주식회사 Display device and method for setting a password for the same
KR101634807B1 (en) * 2009-12-04 2016-06-29 엘지전자 주식회사 Display device and method for setting a password for the same
US9009594B2 (en) * 2010-06-10 2015-04-14 Microsoft Technology Licensing, Llc Content gestures
US20120042246A1 (en) * 2010-06-10 2012-02-16 Microsoft Corporation Content gestures
US8150384B2 (en) 2010-06-16 2012-04-03 Qualcomm Incorporated Methods and apparatuses for gesture based remote control
US10353495B2 (en) 2010-08-20 2019-07-16 Knowles Electronics, Llc Personalized operation of a mobile device using sensor signatures
US9244530B1 (en) 2011-01-31 2016-01-26 Google Inc. Virtual artifacts using mobile devices
US10031581B1 (en) 2011-01-31 2018-07-24 Google Inc. Virtual artifacts using mobile devices
US11543956B2 (en) 2011-04-29 2023-01-03 Google Llc Remote device control using gestures on a touch sensitive device
US20150317055A1 (en) * 2011-04-29 2015-11-05 Google Inc. Remote device control using gestures on a touch sensitive device
US8933881B2 (en) * 2011-05-03 2015-01-13 Lg Electronics Inc. Remote controller and image display apparatus controllable by remote controller
US20130002576A1 (en) * 2011-05-03 2013-01-03 Lg Electronics Inc. Remote controller and image display apparatus controllable by remote controller
US9043502B1 (en) 2011-05-31 2015-05-26 Amazon Technologies, Inc. Portable computing device as control mechanism
US8612641B1 (en) * 2011-05-31 2013-12-17 Amazon Technologies, Inc. Portable computing device as control mechanism
CN102332203A (en) * 2011-05-31 2012-01-25 福建物联天下信息科技有限公司 System for operating and controlling other apparatuses through motion behavior
US9596643B2 (en) 2011-12-16 2017-03-14 Microsoft Technology Licensing, Llc Providing a user interface experience based on inferred vehicle state
WO2013087994A3 (en) * 2011-12-16 2013-12-05 Nokia Corporation Methods, apparatuses, and computer program products for enabling use of remote devices with pre-defined gestures
WO2013087994A2 (en) * 2011-12-16 2013-06-20 Nokia Corporation Methods, apparatuses, and computer program products for enabling use of remote devices with pre-defined gestures
US8902180B2 (en) 2011-12-16 2014-12-02 Nokia Corporation Methods, apparatuses, and computer program products for enabling use of remote devices with pre-defined gestures
CN104246657A (en) * 2012-01-06 2014-12-24 莫韦公司 Device for gestural control of a system, and associated method
US9874937B2 (en) * 2012-01-06 2018-01-23 Movea Device for gestural control of a system, and associated method
US20140368429A1 (en) * 2012-01-06 2014-12-18 Movea Device for gestural control of a system, and associated method
US20130179812A1 (en) * 2012-01-10 2013-07-11 Gilles Serge BianRosa System and method for navigating a user interface using a touch-enabled input device
US20150193095A1 (en) * 2012-07-24 2015-07-09 Tencent Technology (Shenzhen) Company Limited Electronic apparatus and method for interacting with application in electronic apparatus
US9244594B2 (en) * 2012-07-24 2016-01-26 Tencent Technology (Shenzhen) Company Limited Electronic apparatus and method for interacting with application in electronic apparatus
US11181938B2 (en) * 2012-08-31 2021-11-23 Blue Goji Llc Full body movement control of dual joystick operated devices
US20160287989A1 (en) * 2012-08-31 2016-10-06 Blue Goji Llc Natural body interaction for mixed or virtual reality applications
US20140115542A1 (en) * 2012-10-19 2014-04-24 Hon Hai Precision Industry Co., Ltd. Remotely controllable electronic device allowing a user to associate two menu items with a control signal
WO2014163749A1 (en) * 2013-03-13 2014-10-09 Cambridgesoft Corporation Systems and methods for gesture-based sharing of data between separate electronic devices
CN105190483A (en) * 2013-03-15 2015-12-23 高通股份有限公司 Detection of a gesture performed with at least two control objects
US10241564B2 (en) 2013-06-07 2019-03-26 Seiko Epson Corporation Electronic apparatus and method of detecting tap operation
CN104238743A (en) * 2013-06-21 2014-12-24 卡西欧计算机株式会社 Information processing apparatus, and information processing method
US10033761B2 (en) * 2013-07-05 2018-07-24 Nippon Telegraph And Telephone Corporation System and method for monitoring falsification of content after detection of unauthorized access
US20160134658A1 (en) * 2013-07-05 2016-05-12 Nippon Telegraph And Telephone Corporation Unauthorized access detecting system and unauthorized access detecting method
US20220050529A1 (en) * 2013-08-26 2022-02-17 Paypal, Inc. Gesture identification
EP3069215A4 (en) * 2013-11-13 2017-07-26 National University of Singapore Method and hand held laboratory device to control screen navigation
US9772815B1 (en) * 2013-11-14 2017-09-26 Knowles Electronics, Llc Personalized operation of a mobile device using acoustic and non-acoustic information
US11747911B2 (en) 2014-01-31 2023-09-05 Nanotronics Imaging, Inc. Apparatus and method for manipulating objects with gesture controls
US11409367B2 (en) 2014-01-31 2022-08-09 Nanotronics Imaging, Inc. Apparatus and method for manipulating objects with gesture controls
US10901521B2 (en) 2014-01-31 2021-01-26 Nanotronics Imaging, Inc. Apparatus and method for manipulating objects with gesture controls
WO2015117052A1 (en) * 2014-01-31 2015-08-06 Putman Matthew C Apparatus and method for manipulating objects with gesture controls
US20170010676A1 (en) * 2014-01-31 2017-01-12 Matthew C. Putman Apparatus and method for manipulating objects with gesture controls
US10691215B2 (en) * 2014-01-31 2020-06-23 Nanotronics Imaging, Inc. Apparatus and method for manipulating objects with gesture controls
US11723759B2 (en) 2014-02-07 2023-08-15 3Shape A/S Detecting tooth shade
US11707347B2 (en) 2014-02-07 2023-07-25 3Shape A/S Detecting tooth shade
US11701208B2 (en) 2014-02-07 2023-07-18 3Shape A/S Detecting tooth shade
EP2908292A1 (en) * 2014-02-14 2015-08-19 Siemens Aktiengesellschaft Access system for a technical installation
US10627915B2 (en) 2014-03-17 2020-04-21 Oblong Industries, Inc. Visual collaboration interface
US9990046B2 (en) 2014-03-17 2018-06-05 Oblong Industries, Inc. Visual collaboration interface
US10338693B2 (en) 2014-03-17 2019-07-02 Oblong Industries, Inc. Visual collaboration interface
US20150370290A1 (en) * 2014-06-24 2015-12-24 Kabushiki Kaisha Toshiba Electronic apparatus, method, and storage medium
CN104516503A (en) * 2014-12-18 2015-04-15 深圳市宇恒互动科技开发有限公司 Method and system for sensing scene movement, and reminding device
DE102015005863A1 (en) * 2015-05-11 2016-11-17 SENIC GmbH Input device for electronic devices
WO2016200417A1 (en) * 2015-06-07 2016-12-15 Apple Inc. Content browsing user interface
US10318525B2 (en) 2015-06-07 2019-06-11 Apple Inc. Content browsing user interface
EP3304250A1 (en) * 2015-06-07 2018-04-11 Apple Inc. Content browsing user interface
US20170344175A1 (en) * 2016-05-30 2017-11-30 Quanta Computer Inc. Portable electronic devices and operating methods thereof
US10529302B2 (en) 2016-07-07 2020-01-07 Oblong Industries, Inc. Spatially mediated augmentations of and interactions among distinct devices and applications via extended pixel manifold
US20180217396A1 (en) * 2017-01-30 2018-08-02 Semiconductor Components Industries, Llc Systems and methods for an optical image stabilizer system
US10473949B2 (en) * 2017-01-30 2019-11-12 Semiconductor Components Industries, Llc Systems and methods for an optical image stabilizer system
WO2019080902A1 (en) * 2017-10-27 2019-05-02 Zyetric Inventions Limited Interactive intelligent virtual object
US11455882B2 (en) * 2017-10-31 2022-09-27 Hewlett-Packard Development Company, L.P. Actuation module to control when a sensing module is responsive to events
US11467673B2 (en) 2019-10-24 2022-10-11 Samsung Electronics Co., Ltd Method for controlling camera and electronic device therefor
US11269457B1 (en) 2021-02-03 2022-03-08 Apple Inc. Systems and methods for improved touch screen selectivity and sensitivity

Also Published As

Publication number Publication date
US7990365B2 (en) 2011-08-02
US20110050569A1 (en) 2011-03-03

Similar Documents

Publication Publication Date Title
US11119575B2 (en) Gesture based user interface supporting preexisting symbols
US7173604B2 (en) Gesture identification of controlled devices
US7301526B2 (en) Dynamic adaptation of gestures for motion controlled handheld devices
US7365737B2 (en) Non-uniform gesture precision
US7903084B2 (en) Selective engagement of motion input modes
US7176888B2 (en) Selective engagement of motion detection
US7365735B2 (en) Translation controlled cursor
US7301529B2 (en) Context dependent gesture response
US7990365B2 (en) Motion controlled remote controller
US7301527B2 (en) Feedback based user interface for motion controlled handheld devices
US7301528B2 (en) Distinguishing tilt and translation motion components in handheld devices
US7280096B2 (en) Motion sensor engagement for a handheld device
US7176887B2 (en) Environmental modeling for motion controlled handheld devices
US7180500B2 (en) User definable gestures for motion controlled handheld devices
US7176886B2 (en) Spatial signatures
US7180502B2 (en) Handheld device with preferred motion selection
US7365736B2 (en) Customizable gesture mappings for motion controlled handheld devices
US7180501B2 (en) Gesture based navigation of a handheld user interface
EP1728142B1 (en) Distinguishing tilt and translation motion components in handheld devices

Legal Events

Date Code Title Description
AS Assignment

Owner name: FUJITSU LIMITED, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MARVIT, DAVID L.;REINHARDT, ALBERT H.M.;ADLER, B. THOMAS;AND OTHERS;REEL/FRAME:015647/0681;SIGNING DATES FROM 20040709 TO 20040710

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION