Publication number: US 20080195396 A1
Publication type: Application
Application number: US 12/104,195
Publication date: 14 Aug 2008
Filing date: 16 Apr 2008
Priority date: 11 Jul 2005
Also published as: US7424431, US7567907, US7953599, US20070011007, US20080215337, US20080221888, US20110196683, WO2007008248A2, WO2007008248A3
Inventors: Mark Greene, Michael Hegarty, Dermot Cantwell
Original Assignee: Mark Greene, Michael Hegarty, Dermot Cantwell
External links: USPTO, USPTO Assignment, Espacenet
System, method and computer program product for adding voice activation and voice control to a media player
US 20080195396 A1
Abstract
A media player system, method and computer program product are provided. In use, an utterance is received. A command for a media player is then generated based on the utterance. Such command is utilized for providing wireless control of the media player.
Images(9)
Claims(20)
1. A sub-system, comprising:
logic for controlling an assembly coupled to an automobile for receiving power therefrom and further connectable to a satellite radio player adapted for playing music, news, and non-fiction information, the assembly including a speaker and a microphone;
logic for receiving a trigger signal;
logic for, after the receipt of the trigger signal, receiving an utterance utilizing the microphone of the assembly;
logic for verifying the utterance utilizing the speaker of the assembly;
logic for, after the verification of the utterance, generating a corresponding command for the satellite radio player based on the utterance, the corresponding command selected from a command set including a play command, a search command, an artist command, a volume up command, and a volume down command;
logic for channeling output of the satellite radio player;
wherein the corresponding command provides wireless control of the satellite radio player.
2. The sub-system of claim 1, wherein the trigger signal is audible.
3. The sub-system of claim 1, wherein the trigger signal includes an audible trigger word.
4. The sub-system of claim 1, wherein it is determined whether a timeout has occurred before the utterance is received.
5. The sub-system of claim 4, wherein if the timeout has occurred, the receiving of the trigger signal is repeated.
6. The sub-system of claim 1, wherein it is determined whether the utterance is verified.
7. The sub-system of claim 6, wherein it is determined whether the utterance is verified, based on whether the utterance is programmed.
8. The sub-system of claim 6, wherein it is determined whether the utterance is verified, based on whether the utterance is registered.
9. The sub-system of claim 6, wherein if the utterance is not verified, the corresponding command is not generated.
10. The sub-system of claim 6, wherein if the utterance is not verified, the utterance is outputted via the speaker.
11. The sub-system of claim 10, wherein a user is allowed to accept or reject the utterance outputted via the speaker.
12. The sub-system of claim 1, wherein the logic includes hardware logic.
13. The sub-system of claim 1, wherein the logic includes software logic.
14. The sub-system of claim 1, wherein the command set further includes a search command.
15. The sub-system of claim 1, wherein the utterance is compared against a library of words.
16. The sub-system of claim 15, wherein the corresponding command is generated in response to identifying a match between the utterance and the library of words.
17. The sub-system of claim 16, wherein the library of words includes an artist name and a voice tag.
18. The sub-system of claim 16, wherein the library of words includes an artist name and an application specific voice tag.
19. A system, comprising:
an automobile;
an assembly coupled to the automobile for receiving power therefrom and further connected to a satellite radio player adapted for playing music, news, and non-fiction information, the assembly including a power source for providing power to the satellite radio player, a speaker, and a microphone;
logic for receiving a trigger signal;
logic for, after the receipt of the trigger signal, receiving an utterance utilizing the microphone of the assembly;
logic for verifying the utterance utilizing the speaker of the assembly;
logic for, after the verification of the utterance, generating a corresponding command for the satellite radio player based on the utterance, the corresponding command selected from a command set including a play command, a search command, an artist command, a volume up command, and a volume down command;
logic for channeling output of the satellite radio player;
wherein the corresponding command provides wireless control of the satellite radio player.
20. A computer program product embodied on a computer readable medium, comprising:
computer code for controlling an assembly coupled to an automobile for receiving power therefrom and further connectable to a satellite radio player adapted for playing music, news, and non-fiction information, the assembly including a speaker and a microphone;
computer code for receiving a trigger signal;
computer code for, after the receipt of the trigger signal, receiving an utterance utilizing the microphone of the assembly;
computer code for verifying the utterance utilizing the speaker of the assembly;
computer code for, after the verification of the utterance, generating a corresponding command for the satellite radio player based on the utterance, the corresponding command selected from a command set including a play command, a search command, an artist command, a volume up command, and a volume down command;
computer code for channeling output of the satellite radio player;
wherein the corresponding command provides wireless control of the satellite radio player.
Description
    SUMMARY
  • [0001]
    A media player system, method and computer program product are provided. In use, an utterance is received. A command for a media player is then generated based on the utterance. Such command is utilized for providing wireless control of the media player.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • [0002]
    FIG. 1 illustrates a network architecture, in accordance with one embodiment.
  • [0003]
    FIG. 2 shows a representative hardware environment that may be associated with the devices of FIG. 1, in accordance with one embodiment.
  • [0004]
    FIG. 3 shows a method for providing wireless control of a media player, in accordance with one embodiment.
  • [0005]
    FIG. 4 shows a method for providing wireless control of a media player, in accordance with another embodiment.
  • [0006]
    FIG. 5 shows a media player in connection with an assembly for receiving utterances, in accordance with another embodiment.
  • [0007]
    FIG. 6 shows a media player in connection with an assembly for receiving utterances, in accordance with still yet another embodiment.
  • [0008]
    FIG. 7 shows a method for providing wireless control of a media player when a library is loaded on the media player, in accordance with one embodiment.
  • [0009]
    FIG. 8 shows a method for providing wireless control of a media player when a library is not loaded on the media player, in accordance with another embodiment.
  • DETAILED DESCRIPTION
  • [0010]
    FIG. 1 illustrates a network architecture 100, in accordance with one embodiment. As shown, a network 102 is provided. In the context of the present network architecture 100, the network 102 may take any form including, but not limited to a telecommunications network, a local area network (LAN), a wireless network, a wide area network (WAN) such as the Internet, etc. While only one network is shown, it should be understood that two or more similar or different networks 102 may be provided.
  • [0011]
    Coupled to the network 102 is a plurality of devices. For example, a server device 104 and an end user computer 106 may be coupled to the network 102 for communication purposes. Such end user computer 106 may include a desktop computer, laptop computer, and/or any other type of logic. Still yet, various other devices may be coupled to the network 102 including a media player 108, a mobile phone 110, etc.
  • [0012]
    It should be noted that any of the foregoing devices in the present network architecture 100, as well as any other unillustrated hardware and/or software, may be equipped with voice control of an associated media player. More exemplary information regarding such architecture and associated functionality will be set forth hereinafter in greater detail.
  • [0013]
    FIG. 2 illustrates an exemplary system 200, in accordance with one embodiment. As an option, the system 200 may be implemented in the context of any of the devices of the network architecture 100 of FIG. 1. Of course, the system 200 may be implemented in any desired environment.
  • [0014]
    As shown, a system 200 is provided including at least one central processor 201 which is connected to a communication bus 202. The system 200 also includes main memory 204 [e.g. random access memory (RAM), etc.]. The system 200 also includes an optional graphics processor 206 and a display 208.
  • [0015]
    The system 200 may also include a secondary storage 210. The secondary storage 210 includes, for example, a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, a compact disk drive, etc. The removable storage drive reads from and/or writes to a removable storage unit in a well known manner.
  • [0016]
    Computer programs, or computer control logic algorithms, may be stored in the main memory 204 and/or the secondary storage 210. Such computer programs, when executed, enable the system 200 to perform various functions. The memory 204, storage 210 and/or any other storage are possible examples of computer-readable media.
  • [0017]
    FIG. 3 shows a method 300 for providing wireless control of a media player, in accordance with one embodiment. As an option, the present method 300 may be implemented in the context of the architecture and environment of FIGS. 1 and/or 2. Of course, however, the method 300 may be carried out in any desired environment.
  • [0018]
    Initially, an utterance is received, as shown in operation 302. The utterance may include any audible word, number, and/or sound capable of being received. Still yet, the utterance may be received by a microphone and/or any other desired input device capable of receiving the utterance. As an option, the utterance may be received utilizing an input device including an integrated microphone in a set of headphones which enables voice control in an active environment. As another option, the utterance may be received utilizing an input device including a wireless device which can be positioned for optimum voice control in an automobile, an indoor environment, and/or an outdoor environment.
  • [0019]
    Next, in operation 304, a command for a media player is generated based on the utterance. In the context of the present description, a media player may include an iPod®, a portable satellite radio player, and/or any portable software and/or hardware capable of outputting any sort of media [e.g. audible media (e.g. music, news, non-fiction information, fictional stories, etc.), visual media (e.g. pictures, video in the form of movies, news, programming, etc.), etc.].
  • [0020]
    Still yet, as an option, the media player may be used in conjunction with (e.g. built into, retrofitted with, coupled to, etc.) any desired device including, but not limited to, a cellular phone, personal digital assistant, etc. (see, for example, any of the devices of FIG. 1, etc.). Of course, it is contemplated that the media player may further be a stand-alone product.
  • [0021]
    Even still, the commands may include, for example, commands that operate the media player, commands that change states of the media player, etc. Specifically, for media players that play music, such commands may include play, pause, fast forward, rewind, on, off, shuffle, repeat, search, volume up, volume down, playlist, next playlist, etc. As an example, the search command may provide a user with the ability to command the media player to search for a particular song or artist, and the play command may provide the user with the ability to command a particular song to be played. In various embodiments, the commands may be programmable and/or registered by the media player. Of course, the command may include any signal, instruction, code, data, etc. that is capable of being utilized for providing wireless control of the media player. For example, the command may be an utterance that is translated into a hex code capable of being recognized by the media player. Table 1 illustrates examples of such hex codes capable of being recognized by the media player, in an embodiment where the media player includes the aforementioned iPod®.
  • [0000]
    TABLE 1
    Shuffle = FF 55 04 02 00 00 80 7A
    Pause = FF 55 03 02 00 01 FA
    Playlist = FF 55 04 02 00 00 40 BA
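The translation of a recognized command word into one of the hex codes of Table 1 can be pictured as a simple lookup. The sketch below is hypothetical (the patent specifies no implementation language or interface); the dictionary and function names are assumptions, and the codes are taken verbatim from Table 1:

```python
# Hypothetical lookup table mapping recognized command words to the
# hex code sequences of Table 1 (bytes.fromhex ignores spaces).
COMMAND_CODES = {
    "shuffle":  bytes.fromhex("FF 55 04 02 00 00 80 7A"),
    "pause":    bytes.fromhex("FF 55 03 02 00 01 FA"),
    "playlist": bytes.fromhex("FF 55 04 02 00 00 40 BA"),
}

def encode_command(utterance: str) -> bytes:
    """Return the byte sequence for a recognized command word."""
    word = utterance.strip().lower()
    if word not in COMMAND_CODES:
        raise ValueError(f"unrecognized command: {utterance!r}")
    return COMMAND_CODES[word]
```

The resulting bytes would then be sent to the media player over whatever link the assembly provides.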
  • [0022]
    It should be noted that the operations of the method 300 of FIG. 3 may be carried out by the media player itself, and/or by way of a separate assembly that is either built-in the media player or capable of being retrofitted on the media player. One exemplary assembly will be described in more detail with respect to FIGS. 4-6.
  • [0023]
    More illustrative information will now be set forth regarding various optional architectures and features with which the foregoing method 300 may or may not be implemented, per the desires of the user. It should be strongly noted that the following information is set forth for illustrative purposes and should not be construed as limiting in any manner. Any of the following features may be optionally incorporated with or without the exclusion of other features described.
  • [0024]
    FIG. 4 shows a method for providing wireless control of a media player, in accordance with another embodiment. As an option, the present method 600 may be implemented in the context of the architecture and environment of FIGS. 1-3. Of course, however, the method 600 may be carried out in any desired environment.
  • [0025]
    First, program variables within an assembly attached to a media player are initialized, as shown in operation 602. This initialization may include setting up trigger and command words along with any input or output utilized. An exemplary assembly will be described in more detail with respect to FIGS. 5 and 6. Next, the assembly listens for a trigger word, as in operation 604. The trigger word may be any word capable of being recognized and understood by the assembly. It may also optionally be any word capable of initiating a voice recognition capability of the assembly. For example, a user may be required to first say “start” in order to trigger the assembly.
  • [0026]
    Once it is determined that a trigger word is detected, as in operation 606, a command word is awaited, such as in operation 608. As an option, the command word may include any of the utterances described with respect to FIG. 3. Of course the command word may include any word capable of being recognized by the assembly and capable of being translated into computer code that the media player can act upon. As another option, a time limit (e.g. 120 seconds, etc.) may be utilized such that if no trigger word is detected during operation 606 within the time limit, the method may return to operation 604.
  • [0027]
    Next, upon determination that a command word has been detected, as in operation 610, the command is converted into computer code that can be read by the media player, as shown in operation 612. However, if a specified period of time expires in operation 608, the determination of operation 610 may be false (or be subject to greater scrutiny, etc.) and the method may return to operation 604. In addition, if a word is detected but it is not a registered or programmed command word, the method may also return to operation 608.
  • [0028]
    The computer code is then sent from the assembly to the media player, as shown in operation 614, and a send button releases the command to the media player, as in operation 616. Such send button may indicate to the media player that the computer code is complete. Further, the computer code may be sent as a bit stream to the media player. For example, the bit stream may consist of seven codes of ten bits each. Still yet, each bit may be sent approximately every 55 microseconds with a code repetition rate of 66 codes per second (i.e. each code is sent about every 15 milliseconds).
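The bit-stream timing just described (seven ten-bit codes, one bit roughly every 55 microseconds, one code roughly every 15 milliseconds) could be sketched as below. The start/stop-bit layout and the write_bit callback are assumptions, since the description does not define the ten-bit code format:

```python
import time

BIT_PERIOD_S = 55e-6    # ~55 microseconds per bit (from the description)
CODE_PERIOD_S = 1 / 66  # ~15 ms between code starts (66 codes per second)

def frame_ten_bits(data_byte: int) -> list:
    """Frame one byte as a 10-bit code: start bit, 8 data bits (LSB
    first), stop bit. This framing layout is an assumption."""
    bits = [0]                                        # start bit
    bits += [(data_byte >> i) & 1 for i in range(8)]  # data, LSB first
    bits.append(1)                                    # stop bit
    return bits

def send_command(code: bytes, write_bit=lambda b: None):
    """Send up to seven codes, pacing bits and codes per the timing
    above. write_bit is a hypothetical hardware output callback."""
    for byte in code[:7]:                   # seven codes per command
        for bit in frame_ten_bits(byte):
            write_bit(bit)
            time.sleep(BIT_PERIOD_S)
        # idle out the remainder of the ~15 ms code period
        time.sleep(max(CODE_PERIOD_S - 10 * BIT_PERIOD_S, 0))
```

Note that time.sleep cannot actually guarantee 55 microsecond resolution on a general-purpose OS; real hardware would use a timer peripheral.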
  • [0029]
    It is next determined whether the command is a sleep command, as shown in operation 619. For example, such command may be created by a user uttering “sleep” and such utterance then being translated into computer code that tells the media player to power down. If the determination in operation 619 is positive, the media player is powered down and all program variables are released, as depicted in operation 620. Powering down may, as an option, include setting the media player to a low power mode. Alternatively, if the command is not for the media player to sleep, the method 600 returns to operation 602 where it continues to listen for a trigger word. Again, method 600 is set forth to illustrate just one example of a method for wirelessly controlling a media player, and should not be construed as limiting in any manner.
  • [0030]
    FIG. 5 shows a media player in connection with an assembly for receiving utterances, in accordance with another embodiment. As an option, the media player/assembly may be implemented in the context of the architecture and environment of FIGS. 1-4. Of course, however, the media player/assembly may be implemented in any desired environment.
  • [0031]
    A media player 401 is shown connected to an assembly 403. In the present embodiment, media player 401 is shown with the assembly 403 holding the media player 401. Of course, such assembly 403 could be optionally mounted on the personal media player 401 or connected in any other manner. In any case, media player 401 is configured to connect to the assembly 403.
  • [0032]
    The assembly 403 includes an electrical connection (not shown), voice activation software (not shown), voice activation hardware (not shown), a memory integrated circuit (not shown), an FM transmitter 409, and a power unit 402. The voice activation software may be capable of detecting an utterance, translating the utterance into computer code, and transmitting the code to the media player 401. In addition, the power unit 402 may be capable of charging the media player 401 and may include a plug that connects to an automobile cigarette lighter device or AC/DC converter device to provide a required voltage. Optionally, the connection may be located on the end of a flexible metal rod that supports the assembly.
  • [0033]
    The FM transmitter 409 may further include up and down arrows on the front of the assembly 403 as shown in FIG. 5 for sweeping across 88 MHz to 108 MHz such that a frequency that has little outside traffic may be locked in for transmitting a stereo signal at least six feet. FM transmitter 409 may also include left and right channel programming capabilities.
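An automated variant of the sweep performed with the up and down arrows could be sketched as follows. The measure_rssi callback and the 0.2 MHz channel step are assumptions; the description only specifies the 88 MHz to 108 MHz range and the goal of finding a frequency with little outside traffic:

```python
def pick_clear_frequency(measure_rssi, step_mhz=0.2):
    """Sweep 88.1-107.9 MHz and return the frequency with the lowest
    measured signal strength (least outside traffic). measure_rssi is
    a hypothetical callback returning signal strength at a frequency."""
    best_freq, best_rssi = None, float("inf")
    freq = 88.1
    while freq <= 107.9:
        rssi = measure_rssi(round(freq, 1))
        if rssi < best_rssi:
            best_freq, best_rssi = round(freq, 1), rssi
        freq += step_mhz
    return best_freq
```

The chosen frequency would then be locked in and shown on the FM frequency display 411.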
  • [0034]
    Additionally, a power LED 406 and a charging LED 407 may be utilized for displaying whether the media player 401 is charging or listening for an utterance. Further included may be a verification speaker 404 for verifying received utterances and a directional microphone 405 for receiving and transferring utterances to a processing circuit (not shown) that translates the utterances into computer code capable of being read by the media player 401. The processing circuit may also include a voice engine, onboard memory, and a plurality of circuit peripherals.
  • [0035]
    In use, the utterance may be verified by replaying the utterance for the user who provided the utterance and allowing the user to either accept or reject the utterance. For example, in playing the utterance back for the user, the user may be prompted to either state “yes” or “no.” If the user rejects the utterance, the user may then be prompted to give the utterance again. In this way, the verifying may allow for the adjusting of the received utterance if it is not verified. Of course, any type of verification process may optionally be utilized for verifying the utterance.
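The accept/reject loop just described might look like the following sketch, where play_back and listen are hypothetical audio I/O callbacks standing in for the verification speaker 404 and the microphone 405:

```python
def verify_utterance(utterance, play_back, listen):
    """Replay the captured utterance and ask the user to accept or
    reject it; on rejection, capture the utterance again.
    play_back and listen are hypothetical audio I/O callbacks."""
    while True:
        play_back(utterance)              # replay via the speaker
        answer = listen().strip().lower() # expect "yes" or "no"
        if answer == "yes":
            return utterance              # verified
        if answer == "no":
            utterance = listen()          # prompt for the utterance again
```

A rejected utterance is thus re-captured and replayed until the user accepts, matching the adjustment behavior described above.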
  • [0036]
    Also included may be a connector (not shown) for sending and receiving data between the assembly 403 and the media player 401 as well as providing power to the media player 401. Still yet, a mounting brace 408, a front face 410, and an FM frequency display 411 may be provided.
  • [0037]
    As an option, output from the media player 401 may be sent through the assembly 403. Such output may be FM modulated with the FM transmitter 409 of the assembly 403 for reception by nearby FM receivers.
  • [0038]
    FIG. 6 shows a media player in connection with an assembly for receiving utterances, in accordance with still yet another embodiment. As an option, the present media player/assembly may be implemented in the context of the architecture and environment of FIGS. 1-5. Of course, however, the media player/assembly may be implemented in any desired environment.
  • [0039]
    In the present embodiment, the assembly 507 is shown connected to a media player 501 by being mounted on top of the media player 501. Such assembly 507 may optionally include a signal line out 502, a directional or omni-directional microphone 503, a power LED 504 and listening LED 505 (which indicates whether the assembly 507 is awaiting a command; see, for example, operation 608 of FIG. 4), an FM frequency adjustment 506, and/or an FM frequency display 508. Such features may include the same functionalities described with respect to FIG. 5.
  • [0040]
    FIG. 7 shows a method 700 for providing wireless control of a media player when a library is loaded on the media player, in accordance with one embodiment. As an option, the present method 700 may be implemented in the context of the architecture and environment of FIGS. 1-6. Of course, however, the method 700 may be implemented in any desired environment.
  • [0041]
    First, program variables within an assembly attached to a media player (or within the media player itself) are initialized, as shown in operation 702. This initialization may include setting up trigger and/or command words along with any input or output utilized. In an embodiment where a separate assembly is utilized, the assembly may, for example, include the embodiments described in FIGS. 5 and 6 above.
  • [0042]
    Next, the assembly listens for a trigger word, as in operation 704. The trigger word may be any word capable of being recognized and understood. It may also optionally be any word capable of initiating a voice recognition capability. For example, a user may be required to first say “start” in order to trigger the present embodiment.
  • [0043]
    Once it is determined that a trigger word is detected, as in decision 706, a command word is awaited, such as in operation 708. As an option, the command word may include any of the utterances described with respect to FIG. 3. Of course, the command word may include any word capable of being recognized and capable of being translated into computer code that the media player can act upon. In the present embodiment, if the command word received is “Search”, as shown in decision 710, the method 700 continues to listen for a next utterance in operation 712. The next utterance may be, for example, an artist name or any other word.
  • [0044]
    As another option, a time limit (e.g. 120 seconds, etc.) may be utilized such that if no utterance is detected during operation 712 within the time limit, the method 700 may terminate, as shown in decision 716. Otherwise, if an utterance is detected during operation 712 within the time limit, the method 700 may navigate to a first item in a library of words associated with the utterance received in operation 712. In the present embodiment, and shown just by way of example, the method 700 may navigate to a first item located in an artist library, as shown in operation 714. Of course, it should be noted that any type of data capable of being located within a library may be utilized. An artist name and voice tag associated with the first item in the library may then be read, such as in operation 718, and it may be determined whether the voice tag matches the utterance received in operation 712 (see decision 720).
  • [0045]
    If the voice tag associated with the first item in the library does not match the utterance received in operation 712, the method 700 may navigate to a next item located in the artist library, as shown in operation 724. If it is determined in decision 722 that there is not a next item in the artist library to navigate to (e.g. the method 700 has reached the end of the library), the method 700 may terminate. Otherwise, the method 700 may return to operation 718 where the artist name and voice tag associated with the next item are read.
  • [0046]
    The method 700 continues until it is determined in decision 720 that the voice tag matches the utterance received in operation 712, in which case a command is issued to the media player, such as, for example, a command to play a first among a set of songs associated with the artist name received in operation 712, as shown in operation 726.
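The linear walk through the artist library in method 700 can be sketched as below. Representing the library as a list of (artist name, voice tag) pairs and the issued command as a tuple are assumptions for illustration only:

```python
def search_artist_library(spoken_tag, library):
    """Walk the artist library in order (operations 714-724), comparing
    each stored voice tag against the spoken utterance. Return a play
    command for the first match (operation 726), or None when the end
    of the library is reached. library is a hypothetical list of
    (artist_name, voice_tag) pairs."""
    for artist_name, voice_tag in library:
        if voice_tag == spoken_tag:        # decision 720
            return ("play", artist_name)   # issue command to the player
    return None                            # end of library: terminate
```

This is a plain linear scan; the patent does not suggest any indexing, so each search reads every item until a tag matches.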
  • [0047]
    FIG. 8 shows a method 800 for providing wireless control of a media player when a library is not loaded on the media player, in accordance with another embodiment. As an option, the present method 800 may be implemented in the context of the architecture and environment of FIGS. 1-6. Of course, however, the method 800 may be implemented in any desired environment.
  • [0048]
    First, program variables within an assembly attached to a media player (or within the media player itself) are initialized, as shown in operation 802. This initialization may include setting up trigger and command words along with any input or output utilized. Next, the assembly listens for a trigger word, as in operation 804. The trigger word may be any word capable of being recognized and understood. It may also optionally be any word capable of initiating a voice recognition capability. For example, a user may be required to first say “start” in order to trigger the present embodiment.
  • [0049]
    Once it is determined that a trigger word is detected, as in decision 806, a command word is awaited, such as in operation 808. As an option, the command word may include any of the utterances described with respect to FIG. 3. Of course, the command word may include any word capable of being recognized and capable of being translated into computer code that the media player can act upon. In the present embodiment, if the command word received is “Search”, as shown in decision 810, the method 800 continues to listen for a next utterance in operation 812. The next utterance may be, for example, an artist name or any other word associated with a file located in a library loaded on the media player. Just by way of example, the file may be a song located in a library of songs loaded on the media player. Of course, it should be noted that any type of data capable of being located within a library may be utilized.
  • [0050]
    As another option, a time limit (e.g. 120 seconds, etc.) may be utilized such that if no utterance is detected during operation 812 within the time limit, the method 800 may terminate, as shown in decision 814. Otherwise, if an utterance is detected during operation 812 within the time limit, the method 800 converts the detected utterance into an application specific voice tag, as shown in operation 816.
  • [0051]
    Next, the method 800 may navigate to a first item in a library associated with the utterance received in operation 812. In the present embodiment, and shown just by way of example, the method 800 may navigate to a first item located in an artist library as shown in operation 818. For example, an artist name associated with the first item may be read, such as in operation 820, and the artist name may be converted to an application specific voice tag (see operation 822).
  • [0052]
    It is next determined in decision 824 whether the application specific voice tag generated from the first item in the library (see operation 822) matches the application specific voice tag generated from the utterance received in operation 812 (see operation 816). If the application specific voice tag associated with the first item in the library from operation 822 does not match the application specific voice tag associated with the utterance from operation 816, the method 800 may navigate to a next item located in the library, as shown in operation 828.
  • [0053]
    If it is determined in decision 826 that there is not a next item in the library to navigate to (e.g. the method 800 has reached the end of the library), the method 800 may terminate. Otherwise, the method 800 may return to operation 820 where the artist name and voice tag associated with the next item in the artist library are read.
  • [0054]
    The method 800 continues until it is determined in decision 824 that the voice tags match, in which case a command is issued to the media player, such as, for example, a command to play a song (or set of songs) associated with the artist name received in operation 812 (as shown in operation 830).
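Method 800 differs from method 700 in that voice tags are generated on the fly from both the utterance and each stored artist name. The tag encoding below (lowercased alphanumerics) is a placeholder assumption, since the patent does not specify how application specific voice tags are computed:

```python
def make_voice_tag(text):
    """Convert text to a hypothetical application-specific voice tag:
    lowercase, alphanumerics only. This is a stand-in for the real
    (unspecified) encoding of operations 816 and 822."""
    return "".join(ch for ch in text.lower() if ch.isalnum())

def search_without_library_tags(utterance, artist_names):
    """Method 800's flow: convert the utterance to a voice tag
    (operation 816), then convert each artist name in turn (operations
    820-822) and compare (decision 824)."""
    target = make_voice_tag(utterance)
    for name in artist_names:
        if make_voice_tag(name) == target:
            return ("play", name)   # matched: issue command (operation 830)
    return None                     # end of library: terminate
```

Because tags are derived at search time, no pre-built voice-tag library needs to be loaded on the media player, at the cost of re-encoding every artist name on each search.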
  • [0055]
    While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. For example, any of the network elements may employ any of the desired functionality set forth hereinabove. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
Classifications
U.S. Classification: 704/275, 381/86
International Classification: G10L 21/00
Cooperative Classification: H03J 9/02, G11B 19/02, G10L 15/26
European Classification: G11B 19/02, G10L 15/26A, H03J 9/02
Legal Events
Date: 16 Apr 2008; Code: AS; Event: Assignment
Owner name: STRAGENT, LLC, CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:VOICEDEMAND, INC.;REEL/FRAME:020815/0571
Effective date: 20071029
Date: 31 May 2011; Code: AS; Event: Assignment
Owner name: SEESAW FOUNDATION, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STRAGENT, LLC;REEL/FRAME:026365/0075
Effective date: 20110524
Date: 16 Oct 2014; Code: AS; Event: Assignment
Owner name: TAG FOUNDATION, TEXAS
Free format text: CHANGE OF NAME;ASSIGNOR:SEESAW FOUNDATION;REEL/FRAME:034012/0764
Effective date: 20111012
Owner name: STRAGENT, LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STRAGENT, LLC;REEL/FRAME:033967/0138
Effective date: 20080829
Owner name: SEESAW FOUNDATION, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STRAGENT, LLC;REEL/FRAME:033967/0150
Effective date: 20110524
Owner name: VOICEDEMAND, INC., CALIFORNIA
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:GREENE, MARK;HEGARTY, MICHAEL;CANTWELL, DERMOT;REEL/FRAME:033967/0130
Effective date: 20051115
Owner name: STRAGENT, LLC, TEXAS
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:TAG FOUNDATION;REEL/FRAME:033967/0159
Effective date: 20141001
Date: 6 Dec 2014; Code: AS; Event: Assignment
Owner name: KILPAUK GARDEN, SERIES 64 OF ALLIED SECURITY TRUST I
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STRAGENT, LLC;REEL/FRAME:034411/0025
Effective date: 20141111
Date: 15 Sep 2015; Code: AS; Event: Assignment
Owner name: INTERNATIONAL BUSINESS MACHINES CORPORATION, NEW YORK
Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:KILPAUK GARDEN, SERIES 64 OF ALLIED SECURITY TRUST I;REEL/FRAME:036565/0060
Effective date: 20150803