WO2014107186A1 - Controlled headset computer displays - Google Patents

Controlled headset computer displays Download PDF

Info

Publication number
WO2014107186A1
Authority
WO
WIPO (PCT)
Prior art keywords
cue
control
user interface
voice command
user
Prior art date
Application number
PCT/US2013/041070
Other languages
French (fr)
Inventor
James Woodall
Christopher Parkinson
Original Assignee
Kopin Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/799,790 external-priority patent/US10013976B2/en
Application filed by Kopin Corporation filed Critical Kopin Corporation
Priority to CN201380072806.5A priority Critical patent/CN104981767A/en
Priority to JP2015551666A priority patent/JP2016508271A/en
Priority to EP13728571.4A priority patent/EP2941690A1/en
Publication of WO2014107186A1 publication Critical patent/WO2014107186A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 Sound input; Sound output
    • G06F3/167 Audio in a user interface, e.g. using voice commands for navigating, audio feedback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F1/00 Details not covered by groups G06F3/00 - G06F13/00 and G06F21/00
    • G06F1/16 Constructional details or arrangements
    • G06F1/1613 Constructional details or arrangements for portable computers
    • G06F1/163 Wearable computers, e.g. on a belt
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality


Abstract

In headset computers that leverage voice commands, often the user does not know what voice commands are available. In one embodiment, a method includes providing a user interface in a headset computer and, in response to user utterance of a cue toggle command, displaying at least one cue in the user interface. Each cue can correspond to a voice command associated with code to execute. In response to user utterance of the voice command, the method can also include executing the code associated with the voice command. The user can therefore ascertain what voice commands are available.

Description

CONTROLLED HEADSET COMPUTER DISPLAYS RELATED APPLICATIONS
[0001] This application is a continuation of U.S. Application No. 13/799,790, filed March 13, 2013, which claims the benefit of U.S. Application No. 61/749,240, filed January 4, 2013, and is a continuation-in-part of U.S. Application No. 13/234,916, filed September 16, 2011, which claims the benefit of U.S. Application No. 61/384,586, filed September 20, 2010. The entire teachings of the above applications are incorporated herein by reference.
BACKGROUND OF THE INVENTION
[0001] Mobile computing devices, such as notebook PCs, smart phones, and tablet computing devices, are now common tools used for producing, analyzing, communicating, and consuming data in both business and personal life. Consumers continue to embrace a mobile digital lifestyle as the ease of access to digital information increases with high-speed wireless communications technologies becoming ubiquitous. Popular uses of mobile computing devices include displaying large amounts of high-resolution computer graphics information and video content, often wirelessly streamed to the device. While these devices typically include a display screen, the preferred visual experience of a high-resolution, large format display cannot be easily replicated in such mobile devices because the physical size of such device is limited to promote mobility. Another drawback of the aforementioned device types is that the user interface is hands-dependent, typically requiring a user to enter data or make selections using a keyboard (physical or virtual) or touch-screen display. As a result, consumers are now seeking a hands-free, high-quality, portable, color display solution to augment or replace their hands-dependent mobile devices.
SUMMARY OF THE INVENTION
[0002] Recently developed micro-displays can provide large-format, high-resolution color pictures and streaming video in a very small form factor. One application for such displays can be integrated into a wireless headset computer worn on the head of the user with a display within the field of view of the user, similar in format to eyeglasses, an audio headset, or video eyewear. A "wireless computing headset" device includes one or more small, high-resolution micro-displays and optics to magnify the image. The WVGA microdisplays can provide super video graphics array (SVGA) (800 x 600) resolution or extended graphics array (XGA) (1024 x 768) or even higher resolutions. A wireless computing headset contains one or more wireless computing and communication interfaces, enabling data and streaming video capability, and provides greater convenience and mobility than hands-dependent devices. For more information concerning such devices, see co-pending patent applications entitled "Mobile Wireless Display Software Platform for Controlling Other Systems and Devices," U.S. Application No. 12/348,648, filed January 5, 2009, "Handheld Wireless Display Devices Having High Resolution Display Suitable For Use as a Mobile Internet Device," PCT International Application No. PCT/US09/38601, filed March 27, 2009, and "Improved Headset Computer," U.S. Application No. 61/638,419, filed April 25, 2012, each of which is incorporated herein by reference in its entirety.
[0003] In one embodiment, a method includes providing a user interface in a headset computer and, in response to user utterance of a cue toggle command, displaying at least one cue in the user interface. Each cue can correspond to a voice command associated with code to execute. In response to user utterance of the voice command, the method can also include executing the code associated with the voice command. In another embodiment, the method can further include displaying the interface without the cue at least one of prior to the cue toggle command and after a subsequent cue toggle command. Displaying the cue can include displaying words that activate the voice command. Displaying the cue can also include displaying the cue in the user interface corresponding to the voice command associated with the control, where the control is displayed in the user interface. Displaying the cue can include displaying the cue in the user interface corresponding to the voice command associated with the control, where the control is hidden from the user interface. Displaying the cue can include displaying the cue in the user interface corresponding to the voice command associated with the control, where the control is a global headset control. The cue can be loaded from a control, the control indicating the cue and voice command.
[0004] In another embodiment, a system for displaying a user interface in a headset computer can include a display module configured to provide a user interface in a headset computer. The display module can be further configured to, in response to user utterance of a cue toggle command, display at least one cue in the user interface. Each cue can correspond to a voice command associated with code to execute. The system can further include a command module configured to, in response to user utterance of the voice command, execute code associated with the voice command.
[0005] In another embodiment, a method of developing a user interface in a headset computer includes embedding a cue and a voice command in a control for the user interface. The method also includes providing the control to the user interface, the user interface configured to display the cue responsive to a cue toggle command.
BRIEF DESCRIPTION OF DRAWINGS
[0006] The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating embodiments of the present invention.
[0007] Figs. 1A-1B are schematic illustrations of a headset computer cooperating with a host computer (e.g., smart phone, laptop, etc.) according to principles of the present invention.
[0008] Fig. 2 is a block diagram of flow of data and control in the embodiment of Figs. 1A-1B.
[0009] Fig. 3 is a diagram illustrating an example embodiment of a user interface employed in the HSC.
[0010] Fig. 4 is a diagram illustrating an example embodiment of a user interface after receiving a show commands voice command.
[0011] Fig. 5 is a flow diagram illustrating an example embodiment of a method employed by the present invention.
[0012] Fig. 6 illustrates a computer network or similar digital processing environment in which the present invention may be implemented.
[0013] Fig. 7 is a diagram of the internal structure of a computer (e.g., client processor/device or server computers) in the computer system of Fig. 6.
DETAILED DESCRIPTION OF THE INVENTION
[0014] Figs. 1A and 1B show an example embodiment of a wireless computing headset device 100 (also referred to herein as a headset computer (HSC)) that incorporates a high-resolution (VGA or better) microdisplay element 1010 and other features described below. The HSC 100 can include audio input and/or output devices, including one or more microphones, input and output speakers, geo-positional sensors (GPS), three- to nine-axis degrees of freedom orientation sensors, atmospheric sensors, health condition sensors, digital compass, pressure sensors, environmental sensors, energy sensors, acceleration sensors, position, attitude, motion, velocity and/or optical sensors, cameras (visible light, infrared, etc.), multiple wireless radios, auxiliary lighting, rangefinders, or the like, and/or an array of sensors embedded and/or integrated into the headset and/or attached to the device via one or more peripheral ports (not shown in detail in Fig. 1B). Typically located within the housing of headset computing device 100 are various electronic circuits including a microcomputer (single or multicore processors), one or more wired and/or wireless communications interfaces, memory or storage devices, various sensors, and a peripheral mount or mounts, such as a "hot shoe."
[0015] Example embodiments of the HSC 100 can receive user input through sensing voice commands, head movements 110, 111, 112, and hand gestures 113, or any combination thereof. Microphone(s) operatively coupled to or preferably integrated into the HSC 100 can be used to capture speech commands, which are then digitized and processed using automatic speech recognition techniques. Gyroscopes, accelerometers, and other micro-electromechanical system sensors can be integrated into the HSC 100 and used to track the user's head movement to provide user input commands. Cameras or other motion tracking sensors can be used to monitor a user's hand gestures for user input commands. Such a user interface overcomes the hands-dependent formats of other mobile devices.
[0016] The HSC 100 can be used in various ways. It can be used as a remote display for streaming video signals received from a remote host computing device 200 (shown in Fig. 1A). The host 200 may be, for example, a notebook PC, smart phone, tablet device, or other computing device having less or greater computational complexity than the wireless computing headset device 100, such as cloud-based network resources. The host may be further connected to other networks 210, such as the Internet. The headset computing device 100 and host 200 can wirelessly communicate via one or more wireless protocols, such as Bluetooth®, Wi-Fi, WiMAX, 4G LTE, or other wireless radio link 150. (Bluetooth is a registered trademark of Bluetooth SIG, Inc. of 5209 Lake Washington Boulevard, Kirkland, Washington 98033.) In an example embodiment, the host 200 may be further connected to other networks, such as through a wireless connection to the Internet or other cloud-based network resources, so that the host 200 can act as a wireless relay. Alternatively, some example embodiments of the HSC 100 can wirelessly connect to the Internet and cloud-based network resources without the use of a host wireless relay.
[0017] Fig. 1B is a perspective view showing some details of an example embodiment of a headset computer 100. The example embodiment HSC 100 generally includes a frame 1000, strap 1002, rear housing 1004, speaker 1006, cantilever (alternatively referred to as an arm or boom) 1008 with a built-in microphone, and a micro-display subassembly 1010.
[0018] A head-worn frame 1000 and strap 1002 are generally configured so that a user can wear the headset computer device 100 on the user's head. A housing 1004 is generally a low-profile unit which houses the electronics, such as the microprocessor, memory or other storage device, along with other associated circuitry. Speakers 1006 provide audio output to the user so that the user can hear information. Microdisplay subassembly 1010 is used to render visual information to the user. It is coupled to the arm 1008. The arm 1008 generally provides physical support such that the microdisplay subassembly is able to be positioned within the user's field of view 300 (Fig. 1A), preferably in front of the eye of the user or within the user's peripheral vision, preferably slightly below or above the eye. Arm 1008 also provides the electrical or optical connections between the microdisplay subassembly 1010 and the control circuitry housed within housing unit 1004.
[0019] According to aspects that will be explained in more detail below, the HSC display device 100 allows a user to select a field of view 300 within a much larger area defined by a virtual display 400. The user can typically control the position, extent (e.g., X-Y or 3D range), and/or magnification of the field of view 300.
[0020] While what is shown in Fig. 1A is a monocular microdisplay presenting a single fixed display element supported on the face of the user with a cantilevered boom, it should be understood that other mechanical configurations for the remote control display device 100 are possible.
[0021] Fig. 2 is a block diagram showing more detail of the HSC device 100, host 200, and the data that travels between them. The HSC device 100 receives vocal input from the user via the microphone, hand movements or body gestures via positional and orientation sensors, the camera or optical sensor(s), and head movement inputs via the head tracking circuitry, such as 3-axis to 9-axis degree-of-freedom orientation sensing. These are translated by software in the HSC device 100 into keyboard and/or mouse commands that are then sent over the Bluetooth or other wireless interface 150 to the host 200. The host 200 then interprets these translated commands in accordance with its own operating system/application software to perform various functions. Among the commands is one to select a field of view 300 within the virtual display 400 and return that selected screen data to the HSC device 100. Thus, it should be understood that a very large format virtual display area might be associated with application software or an operating system running on the host 200. However, only a portion of that large virtual display area 400 within the field of view 300 is returned to and actually displayed by the micro display 1010 of HSC device 100.
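To make this data flow concrete, the following is a minimal sketch of the translation step, under stated assumptions: the WirelessLink and InputTranslator names, the method signatures, and the string command format are all illustrative, as the patent does not disclose a wire protocol.

```java
// Head tracking and speech become generic keyboard/mouse commands for the host.
interface WirelessLink {
    void send(String command);   // e.g., carried over the Bluetooth or other wireless interface 150
}

class InputTranslator {
    private final WirelessLink link;

    InputTranslator(WirelessLink link) {
        this.link = link;
    }

    // A head turn pans the field of view 300: send relative mouse motion.
    void onHeadMovement(int deltaYaw, int deltaPitch) {
        link.send("MOUSE_MOVE " + deltaYaw + " " + deltaPitch);
    }

    // A recognized utterance maps to a key sequence on the host.
    void onSpeech(String utterance) {
        if ("page down".equalsIgnoreCase(utterance)) {
            link.send("KEY PAGE_DOWN");
        }
    }
}
```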
[0022] In one embodiment, the HSC 100 may take the form of the HSC described in co-pending US Patent Publication Number 2011/0187640, which is hereby incorporated by reference in its entirety.
[0023] In another embodiment, the invention relates to the concept of using a Head Mounted Display (HMD) 1010 in conjunction with an external 'smart' device 200 (such as a smartphone or tablet) to provide information and control to the user hands-free. The invention requires transmission of only small amounts of data, providing a more reliable data transfer method running in real-time.
[0024] In this sense, therefore, the amount of data to be transmitted over the connection 150 is small: simple instructions on how to lay out a screen, which text to display, and other stylistic information such as arrows to draw, background colours, or images to include.
[0025] Additional data, such as a video stream, could be streamed over the same connection 150 or another connection and displayed on screen 1010 if required by the Host 200.
[0026] This invention relates to the viewing of context-sensitive overlays within applications on voice-controlled HSCs 100.
[0027] The concept is the contextual, on-demand presentation of data over the current visual. Overlays can be called up by the user with a voice command, typically "Show commands." The voice command is standard across the system 100 and available at all times. This command causes the HSC 100 to display applicable voice commands and other information in a context-sensitive and intuitive way.
[0028] The applicable commands are shown on a semi-transparent overlay of the current screen view of display unit 1010. This allows the user to retain the context of the screen for which he or she called up the overlay.
[0029] The overlay and displayed applicable commands fade away after a short period of time. This is accomplished by a timing mechanism that refreshes the screen view.
[0030] The applicable commands are displayed in order of relevance: the most relevant command is given more prominence in placement than less relevant commands. The HSC 100 determines relevancy based on the current context of the display 1010 contents.
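A minimal sketch of this fade-away and relevance behavior might look like the following. The 5-second timeout, the integer relevance scale, and all type names are assumptions; the patent specifies only "a short period of time" and relative prominence.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Timer;
import java.util.TimerTask;

class CommandOverlay {
    private static final long FADE_AFTER_MS = 5_000;   // "short period of time" is unspecified
    private final Timer timer = new Timer(true);       // daemon timer thread

    // Draws cues most-relevant-first as a semi-transparent overlay, then
    // schedules a screen-view refresh that removes the overlay.
    void show(List<Cue> cues, Screen screen) {         // cues must be a mutable list
        cues.sort(Comparator.comparingInt(Cue::relevance).reversed());
        screen.drawOverlay(cues);
        timer.schedule(new TimerTask() {
            @Override public void run() { screen.refresh(); }
        }, FADE_AFTER_MS);
    }
}

record Cue(String text, int relevance) {}   // higher relevance = more prominent placement

interface Screen {
    void drawOverlay(List<Cue> cues);   // semi-transparent, over the current view
    void refresh();                     // redraw without the overlay
}
```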
[0031] Each screen in the relevant system is made up of user-interface (UI) components, some of which are 'controls'. A control is a UI component that provides information to the user or enables some form of functionality. Examples of controls are buttons, radio buttons, text boxes, check boxes, drop down menus, file menus, ribbon menus, live tiles, etc. Within the software developer's component library, these are available in their various forms, allowing customization of certain features. For example, one such control might be a 'button,' simply enabling the user to press it using a voice command available on the button. Controls, such as the 'button' control, are available to the developer, for example, as part of the developer's component library or other library. The developer can insert the pre-coded control and customize it to his or her liking, instead of manually coding the control from scratch.
[0032] A "Show Commands" function is built into the controls of the developer library. When the developer, for example, creates a button and specifies a text string to be written onto the button, the text string becomes the default voice command to activate the button, unless the developer (or user) overrides the voice command. The control (e.g., the button) is configured to react to a "show commands" voice command by overlaying the text string to activate the control over the control itself, or near the control.
[0033] Every User Interface screen made available on the HSC 100 has the ability to receive a "Show Commands" voice command (e.g., a system command, available by default). Therefore, when a screen is constructed using controls from the UI library, show commands functionality is built in, providing guidance as to the voice commands available to the user. These voice commands are (by default) shown in context of the current displayed contents (screen view).
[0034] Other available voice commands can also be placed within the show commands overlay that are not associated with a visible control. These are placed in the overlay by the developer adding a voice-command-only control or a hidden control, and they provide a visual cue for voice commands that are not associated with a button or other visible control.
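Continuing the hypothetical API sketched above, a voice-command-only control might look like this: it draws nothing itself, but still contributes a cue to the show-commands overlay and runs its associated code when the utterance matches.

```java
class HiddenVoiceControl extends VoiceControl {
    private final String voiceCommand;
    private final Runnable action;   // the code associated with the command

    HiddenVoiceControl(String voiceCommand, Runnable action) {
        this.voiceCommand = voiceCommand;
        this.action = action;
    }

    @Override
    void onShowCommands(Overlay overlay) {
        // No visible control to anchor to; a real overlay would place this cue
        // in a reserved region, like the implicit-command strip of Fig. 4.
        overlay.drawCueNear(this, voiceCommand);
    }

    @Override
    boolean matches(String utterance) {
        if (voiceCommand.equalsIgnoreCase(utterance)) {
            action.run();
            return true;
        }
        return false;
    }
}
```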
[0035] Fig. 3 is a diagram 250 illustrating an example embodiment of a user interface 202 employed in the HSC. The user interface 202, in this embodiment, is an email application displaying a user's inbox. Each email listing is a respective email control 1-6 204a-f. The user can open each email by selecting each control. Each email control 204a-f is programmed to be selected at least by a respective voice command.
[0036] Fig. 4 is a diagram 300 illustrating an example embodiment of a user interface 302 after receiving a show commands voice command. The user interface displays voice commands corresponding to each of the email controls 204a-f of Fig. 3. Voice commands 1-6 304a-f correspond with email controls 1-6 204a-f, respectively. For example, saying voice command 1 304a (i.e., "Open E-mail 1") causes the HSC to open the first email in the list.
[0037] The user interface 302 in show commands mode also shows a plurality of implicit voice commands 1-9 306a-i. The implicit voice commands 1-9 306a-i do not correspond to any particular visual control of the user interface; rather, they are voice commands that are available to the user. For example, the user can say implicit voice commands 1 and 2 306a-b to move to the previous and next page,
respectively. The user can draft an email by saying implicit command 3 306c. The user can manage his or her email account by saying implicit command 4 306d. The user can see his or her accounts by saying implicit command 5 306e. The user can switch folders by saying implicit voice command 6 306f. The user can refresh the inbox by saying implicit voice command 7 306g.
[0038] Further, the user can go back to a previous screen by saying implicit voice command 8 306h. The user can return to a home screen by saying implicit voice command 9 306i. Implicit voice commands 8 and 9 can be universal to all screens on the HSC. Voice commands 1-6 304a-f and implicit voice commands 1-7 306a-g are local commands for this particular application. However, in other embodiments, implicit voice commands 1-2 306a-b can be global commands for moving to previous and next pages of applications.
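Tying the sketches together, the inbox screen of Figs. 3-4 might be assembled as follows, using the same hypothetical Button and HiddenVoiceControl types defined above; the EmailApp callbacks are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

interface EmailApp {
    void previousPage();
    void nextPage();
    void draftEmail();
}

class InboxScreen {
    static List<VoiceControl> build(EmailApp app) {
        List<VoiceControl> controls = new ArrayList<>();
        for (int i = 1; i <= 6; i++) {
            // Each email listing is a control; its label doubles as its voice command cue.
            controls.add(new Button("Open E-mail " + i));
        }
        // Implicit commands have no visible control but still surface cues in the overlay.
        controls.add(new HiddenVoiceControl("Previous Page", app::previousPage));
        controls.add(new HiddenVoiceControl("Next Page", app::nextPage));
        controls.add(new HiddenVoiceControl("Draft E-mail", app::draftEmail));
        return controls;
    }
}
```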
[0039] The voice command overlay aids the user by de-cluttering the screen of options and buttons. The voice commands further help prompt the user on how to use the system, which is especially useful while the user is learning how to use the device and voice commands.
[0040] Fig. 5 is a flow diagram 500 illustrating an example embodiment of a method employed by the present invention. First, the method provides a user interface in a headset computer (502). Then, the method determines whether it has received a cue toggle command, for example, over an audio channel from a user utterance (504). If not, the method continues listening for the cue toggle command (504). If so, however, the method then displays at least one cue in the user interface (506). Each cue is associated with a corresponding voice command, which, when uttered, causes the system to execute code.
[0041] The system then determines whether it has received a voice command (e.g., a voice command shown by one of the cues) (508). If not, it keeps listening for a voice command (508). If so, however, it executes the code associated with the voice command (510).
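The Fig. 5 flow (steps 502-510) might be sketched as a simple listen loop, assuming hypothetical recognizer and UI interfaces; the patent specifies the flow, not an API.

```java
import java.util.Optional;

interface SpeechRecognizer {
    String listen();   // blocks until an utterance is recognized
}

interface CueInterface {
    void show();                                        // 502: provide the user interface
    void displayCues();                                 // 506: overlay the available cues
    Optional<Runnable> findCommand(String utterance);   // code bound to a voice command, if any
}

class ShowCommandsLoop {
    static final String CUE_TOGGLE = "show commands";

    void run(SpeechRecognizer recognizer, CueInterface ui) {
        ui.show();
        while (true) {
            String utterance = recognizer.listen();     // 504/508: keep listening
            if (CUE_TOGGLE.equalsIgnoreCase(utterance)) {
                ui.displayCues();                       // 506: display the cues
            } else {
                // 510: execute the code associated with a matching voice command
                ui.findCommand(utterance).ifPresent(Runnable::run);
            }
        }
    }
}
```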
[0042] Fig. 6 illustrates a computer network or similar digital processing environment in which the present invention may be implemented.
[0043] Client computer(s)/devices 50 and server computer(s) 60 provide processing, storage, and input/output devices executing application programs and the like. Client computer(s)/devices 50 can also be linked through communications network 70 to other computing devices, including other client devices/processes 50 and server computer(s) 60. Communications network 70 can be part of a remote access network, a global network (e.g., the Internet), a worldwide collection of computers, Local area or Wide area networks, and gateways that currently use respective protocols (TCP/IP, Bluetooth, etc.) to communicate with one another. Other electronic device/computer network architectures are suitable.
[0044] Fig. 7 is a diagram of the internal structure of a computer (e.g., client processor/device 50 or server computers 60) in the computer system of Fig. 6. Each computer 50, 60 contains system bus 79, where a bus is a set of hardware lines used for data transfer among the components of a computer or processing system. Bus 79 is essentially a shared conduit that connects different elements of a computer system (e.g., processor, disk storage, memory, input/output ports, network ports, etc.) and enables the transfer of information between the elements. Attached to system bus 79 is I/O device interface 82 for connecting various input and output devices (e.g., keyboard, mouse, displays, printers, speakers, etc.) to the computer 50, 60. Network interface 86 allows the computer to connect to various other devices attached to a network (e.g., network 70 of Fig. 6). Memory 90 provides volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present invention (e.g., the context-sensitive overlay user interface code detailed above). Disk storage 95 provides non-volatile storage for computer software instructions 92 and data 94 used to implement an embodiment of the present invention. Central processor unit 84 is also attached to system bus 79 and provides for the execution of computer instructions.
[0045] In one embodiment, the processor routines 92 and data 94 are a computer program product (generally referenced 92), including a computer readable medium (e.g., a removable storage medium such as one or more DVD-ROMs, CD-ROMs, diskettes, tapes, etc.) that provides at least a portion of the software instructions for the invention system. Computer program product 92 can be installed by any suitable software installation procedure, as is well known in the art. In another embodiment, at least a portion of the software instructions may also be downloaded over a cable, communication and/or wireless connection. In other embodiments, the invention programs are a computer program propagated signal product 107 embodied on a propagated signal on a propagation medium (e.g., a radio wave, an infrared wave, a laser wave, a sound wave, or an electrical wave propagated over a global network such as the Internet, or other network(s)). Such carrier medium or signals provide at least a portion of the software instructions for the present invention routines/program 92.
[0046] In alternate embodiments, the propagated signal is an analog carrier wave or digital signal carried on the propagated medium. For example, the propagated signal may be a digitized signal propagated over a global network (e.g., the Internet), a telecommunications network, or other network. In one embodiment, the propagated signal is a signal that is transmitted over the propagation medium over a period of time, such as the instructions for a software application sent in packets over a network over a period of milliseconds, seconds, minutes, or longer. In another embodiment, the computer readable medium of computer program product 92 is a propagation medium that the computer system 50 may receive and read, such as by receiving the propagation medium and identifying a propagated signal embodied in the propagation medium, as described above for computer program propagated signal product.
[0047] Generally speaking, the term "carrier medium" or transient carrier encompasses the foregoing transient signals, propagated signals, propagated medium, storage medium and the like.
[0048] While this invention has been particularly shown and described with references to example embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.

Claims

1. A method comprising:
providing a user interface in a headset computer;
in response to user utterance of a cue toggle command, displaying at least one cue, each cue corresponding to a voice command associated with code to execute, in the user interface; and
in response to user utterance of the voice command, executing the code associated with the voice command.
2. The method of Claim 1, further comprising:
displaying the interface without the cue at least one of prior to the cue toggle command and after a subsequent cue toggle command.
3. The method of Claim 1, wherein displaying the cue includes displaying words that activate the voice command.
4. The method of Claim 1, wherein displaying the cue includes displaying the cue in the user interface corresponding to the voice command associated with the control, the control displayed in the user interface.
5. The method of Claim 1, wherein displaying the cue includes displaying the cue in the user interface corresponding to the voice command associated with the control, the control hidden from the user interface.
6. The method of Claim 1, wherein displaying the cue includes displaying the cue in the user interface corresponding to the voice command associated with the control, the control being a global headset control.
7. The method of Claim 1, wherein the cue is loaded from a control, the control indicating the cue and voice command.
8. A system for displaying a user interface in a headset computer, the system comprising:
a display module configured to provide a user interface in a headset computer and further configured to, in response to user utterance of a cue toggle command, display at least one cue, each cue corresponding to a voice command associated with code to execute, in the user interface; and
a command module configured to, in response to user utterance of the voice command, execute code associated with the voice command.
9. The system of Claim 8, wherein the display module is further configured to display the interface without the cue at least one of prior to the cue toggle command and after a subsequent cue toggle command.
10. The system of Claim 8, wherein the display module is further configured to display words that activate the voice command.
11. The system of Claim 8, wherein the display module is further configured to display the cue in the user interface corresponding to a voice command associated with the control, the control displayed in the user interface.
12. The system of Claim 8, wherein the display module is further configured to display the cue in the user interface corresponding to the voice command associated with the control, the control hidden from the user interface.
13. The system of Claim 8, wherein the display module is further configured to display the cue in the user interface corresponding to the voice command associated with the control, the control being a global headset control.
14. The system of Claim 8, wherein the cue is loaded from a control, the control indicating the cue and voice command.
15. A method of developing a user interface in a headset computer, the method comprising:
embedding a cue and a voice command in a control for the user interface; and
providing the control to the user interface, the user interface configured to display the cue responsive to a cue toggle command.
16. The method of Claim 15, wherein the cue includes words that activate the voice command.
17. The method of Claim 15, wherein the cue corresponds to the voice command associated with the control displayed in the user interface.
18. The method of Claim 15, wherein the cue corresponds to the voice command associated with the control hidden from the user interface.
19. The method of Claim 15, wherein the cue corresponds to the voice command associated with the control being a global headset control.
20. The method of Claim 15, wherein the cue is loaded from a control, the control indicating the cue and voice command.
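
To make the claimed arrangement concrete, here is a minimal, hypothetical Java sketch of what the claims above describe: a control that embeds its own cue text and voice command (Claims 7, 14, 15, and 20), a display module that overlays cues only while a cue toggle utterance is active (Claims 1, 2, 8, and 9), and a command module that executes the code tied to a recognized utterance (Claims 1 and 8). Every class, method, and utterance name here (SpeechControl, DisplayModule, CommandModule, the "show commands" toggle) is an illustrative assumption; the application discloses no source code or API.

import java.util.List;

// Hypothetical control embedding its own cue and voice command (cf. Claims 7, 15, 20).
class SpeechControl {
    final String cue;          // cue text shown in the overlay, e.g. the words that activate the command (Claim 3)
    final String voiceCommand; // utterance that triggers this control
    final Runnable action;     // code executed when the command is recognized

    SpeechControl(String cue, String voiceCommand, Runnable action) {
        this.cue = cue;
        this.voiceCommand = voiceCommand;
        this.action = action;
    }
}

// Display module: renders cues only while the cue toggle is on (Claims 1, 2, 8, 9).
class DisplayModule {
    private boolean cuesVisible = false;
    private final List<SpeechControl> controls;

    DisplayModule(List<SpeechControl> controls) { this.controls = controls; }

    void toggleCues() { cuesVisible = !cuesVisible; render(); }

    void render() {
        if (!cuesVisible) {
            System.out.println("(interface shown without cues)"); // Claim 2
            return;
        }
        // The cue is loaded from the control itself (Claims 7, 14, 20); the control
        // may be visible, hidden from the interface, or a global headset control.
        for (SpeechControl c : controls) System.out.println("[cue] " + c.cue);
    }
}

// Command module: executes the code associated with a recognized utterance (Claims 1, 8).
class CommandModule {
    static final String CUE_TOGGLE = "show commands"; // assumed toggle utterance
    private final List<SpeechControl> controls;
    private final DisplayModule display;

    CommandModule(List<SpeechControl> controls, DisplayModule display) {
        this.controls = controls;
        this.display = display;
    }

    void onUtterance(String utterance) {
        if (CUE_TOGGLE.equalsIgnoreCase(utterance)) {
            display.toggleCues();
            return;
        }
        for (SpeechControl c : controls) {
            if (c.voiceCommand.equalsIgnoreCase(utterance)) {
                c.action.run(); // run the code associated with the voice command
                return;
            }
        }
    }
}

public class CueToggleDemo {
    public static void main(String[] args) {
        List<SpeechControl> controls = List.of(
            new SpeechControl("Say 'open file'", "open file",
                    () -> System.out.println("opening file...")),
            new SpeechControl("Say 'go home'", "go home",
                    () -> System.out.println("navigating home...")));
        DisplayModule display = new DisplayModule(controls);
        CommandModule commands = new CommandModule(controls, display);

        commands.onUtterance("show commands"); // cues become visible
        commands.onUtterance("open file");     // associated code executes
        commands.onUtterance("show commands"); // cues hidden again (Claim 2)
    }
}

Compiled as CueToggleDemo.java (Java 9 or later for List.of), the demo prints both cues after the first toggle utterance, runs the "open file" action, and hides the cues after the second toggle. The design point the claims turn on is that the cue text and the command live on the control itself, so the overlay can be populated for visible, hidden, and global controls from a single registry.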
PCT/US2013/041070 2013-01-04 2013-05-15 Controlled headset computer displays WO2014107186A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201380072806.5A CN104981767A (en) 2013-01-04 2013-05-15 Controlled headset computer displays
JP2015551666A JP2016508271A (en) 2013-01-04 2013-05-15 Controllable headset computer display
EP13728571.4A EP2941690A1 (en) 2013-01-04 2013-05-15 Controlled headset computer displays

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US201361749240P 2013-01-04 2013-01-04
US61/749,240 2013-01-04
US13/799,790 US10013976B2 (en) 2010-09-20 2013-03-13 Context sensitive overlays in voice controlled headset computer displays
US13/799,790 2013-03-13

Publications (1)

Publication Number Publication Date
WO2014107186A1 (en)

Family

ID=48614125

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/041070 WO2014107186A1 (en) 2013-01-04 2013-05-15 Controlled headset computer displays

Country Status (4)

Country Link
EP (1) EP2941690A1 (en)
JP (2) JP2016508271A (en)
CN (1) CN104981767A (en)
WO (1) WO2014107186A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107832036B (en) * 2017-11-22 2022-01-18 Beijing Xiaomi Mobile Software Co., Ltd. Voice control method, device and computer readable storage medium
JP2022045262A (en) * 2020-09-08 2022-03-18 Sharp Corp. Voice processing system, voice processing method, and voice processing program

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0284686A (en) * 1988-09-21 1990-03-26 Matsushita Electric Ind Co Ltd Image sound output device
US8275617B1 (en) * 1998-12-17 2012-09-25 Nuance Communications, Inc. Speech command input recognition system for interactive computer display with interpretation of ancillary relevant speech query terms into commands
GB2388209C (en) * 2001-12-20 2005-08-23 Canon Kk Control apparatus
JP2004054879A (en) * 2002-05-27 2004-02-19 Sony Corp Display device and display method
JP4135520B2 (en) * 2003-01-29 2008-08-20 Toyota Motor Corp. Vehicle communication device
JP4526307B2 (en) * 2004-06-09 2010-08-18 Fujitsu Ten Ltd. Function selection device
CN102483333B (en) * 2009-07-09 2016-03-09 TomTom Technology Co., Ltd. Navigation device using map data with route search acceleration data
WO2012025956A1 (en) * 2010-08-25 2012-03-01 Mitsubishi Electric Corp. Navigation apparatus
US9122307B2 (en) * 2010-09-20 2015-09-01 Kopin Corporation Advanced remote control of host application using motion and voice commands
US8706170B2 (en) * 2010-09-20 2014-04-22 Kopin Corporation Miniature communications gateway for head mounted display
JP5977922B2 (en) * 2011-02-24 2016-08-24 Seiko Epson Corp. Information processing apparatus, information processing apparatus control method, and transmissive head-mounted display apparatus

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090182562A1 (en) * 2008-01-14 2009-07-16 Garmin Ltd. Dynamic user interface for automated speech recognition
US20110187640A1 (en) 2009-05-08 2011-08-04 Kopin Corporation Wireless Hands-Free Computing Headset With Detachable Accessories Controllable by Motion, Body Gesture and/or Vocal Commands
WO2011097226A1 (en) * 2010-02-02 2011-08-11 Kopin Corporation Wireless hands-free computing headset with detachable accessories controllable by motion, body gesture and/or vocal commands
US20120110456A1 (en) * 2010-11-01 2012-05-03 Microsoft Corporation Integrated voice command modal user interface

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10474418B2 (en) 2008-01-04 2019-11-12 BlueRadios, Inc. Head worn wireless computer having high-resolution display suitable for use as a mobile internet device
US10579324B2 (en) 2008-01-04 2020-03-03 BlueRadios, Inc. Head worn wireless computer having high-resolution display suitable for use as a mobile internet device
JP2018503857A (en) * 2015-07-02 2018-02-08 Baidu Online Network Technology (Beijing) Co., Ltd. On-vehicle voice command recognition method, apparatus, and storage medium
EP3816789A1 (en) * 2018-05-07 2021-05-05 Google LLC Methods, systems, and apparatus for providing composite graphical assistant interfaces for controlling connected devices
US11237796B2 (en) 2018-05-07 2022-02-01 Google Llc Methods, systems, and apparatus for providing composite graphical assistant interfaces for controlling connected devices
US11941322B2 (en) 2018-12-27 2024-03-26 Saturn Licensing Llc Display control device for selecting item on basis of speech

Also Published As

Publication number Publication date
EP2941690A1 (en) 2015-11-11
CN104981767A (en) 2015-10-14
JP2016508271A (en) 2016-03-17
JP2018032440A (en) 2018-03-01

Similar Documents

Publication Publication Date Title
US20180277114A1 (en) Context Sensitive Overlays In Voice Controlled Headset Computer Displays
US9383816B2 (en) Text selection using HMD head-tracker and voice-command
US9442290B2 (en) Headset computer operation using vehicle sensor feedback for remote control vehicle
EP2845075B1 (en) Headset computer (hsc) as auxiliary display with asr and ht input
US9830909B2 (en) User configurable speech commands
US9134793B2 (en) Headset computer with head tracking input used for inertial control
JP6082695B2 (en) Advanced remote control of host applications using movement and voice commands
US20180210544A1 (en) Head Tracking Based Gesture Control Techniques For Head Mounted Displays
US20150220142A1 (en) Head-Tracking Based Technique for Moving On-Screen Objects on Head Mounted Displays (HMD)
US9500867B2 (en) Head-tracking based selection technique for head mounted displays (HMD)
US9378028B2 (en) Headset computer (HSC) with docking station and dual personality
JP2018032440A (en) Controllable headset computer displays
US20190279636A1 (en) Context Sensitive Overlays in Voice Controlled Headset Computer Displays
US20190369400A1 (en) Head-Mounted Display System

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 13728571
Country of ref document: EP
Kind code of ref document: A1

ENP Entry into the national phase
Ref document number: 2015551666
Country of ref document: JP
Kind code of ref document: A

NENP Non-entry into the national phase
Ref country code: DE

WWE Wipo information: entry into national phase
Ref document number: 2013728571
Country of ref document: EP