US20030115552A1 - Method and system for automatic creation of multilingual immutable image files - Google Patents

Method and system for automatic creation of multilingual immutable image files

Info

Publication number
US20030115552A1
US20030115552A1 (application US10/303,819)
Authority
US
United States
Prior art keywords
text
language
translation
image
immutable
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/303,819
Inventor
Jorg Jahnke
Dietmar Cordes
Nils Fuhrmann
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sun Microsystems Inc
Original Assignee
Sun Microsystems Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from EP01128179A external-priority patent/EP1315084A1/en
Priority claimed from EP02010132A external-priority patent/EP1315085B1/en
Priority claimed from EP02010133A external-priority patent/EP1315086B1/en
Application filed by Sun Microsystems Inc filed Critical Sun Microsystems Inc
Assigned to SUN MICROSYSTEMS, INC. reassignment SUN MICROSYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: CORDES, DIETMAR, FUHRMANN, NILS, JAHNKE, JORG
Publication of US20030115552A1 publication Critical patent/US20030115552A1/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 40/00 — Handling natural language data
    • G06F 40/40 — Processing or translation of natural language
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 — Arrangements for program control, e.g. control units
    • G06F 9/06 — Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44 — Arrangements for executing specific programs
    • G06F 9/451 — Execution arrangements for user interfaces
    • G06F 9/454 — Multi-language systems; Localisation; Internationalisation

Definitions

  • the present invention relates generally to data processing systems and, more particularly, to data processing systems for automatically generating multilingual immutable image files containing text.
  • Data processing devices have become valuable assistants in a rapidly expanding number of fields, where access to, and/or processing of data is necessary.
  • Applications for data processing devices range from office applications such as text processing, spreadsheet processing, and graphics applications to personal applications such as e-mail services, personal communication services, entertainment applications, banking applications, purchasing or sales applications, and information services.
  • Data processing devices for executing such applications include any type of computing device, such as multipurpose data processing devices including desktop or personal computers, laptop, palmtop computing devices, personal digital assistants (PDAs), and mobile communication devices such as mobile telephones or mobile communicators.
  • a data processing device used to provide a service such as a server maintaining data which can be accessed through a communication network, may be located at a first location, while another data processing device for obtaining a service, such as a client device operated by a user for accessing and manipulating the data, may be located at a second location remote from the first location.
  • Servers which provide services are frequently configured to be accessed by more than one client at a time.
  • a server application may enable users at any location and from virtually any data processing device to access personal data maintained at a server device or a network of server devices. Accordingly, users of the client devices may be located in different geographic areas that support different languages, or may individually have different language preferences.
  • the term “user” as used herein refers to a human user, software, hardware, or any other entity using the system.
  • a user operating a client device may connect to the server device or network of server devices using personal identification information to access data maintained at the server device or network of server devices, such as an e-mail application or a text document.
  • the server in turn may provide information to be displayed at the client device, enabling the user to access and/or control applications and/or data at the server.
  • This may include the display of a desktop screen, including menu information, or any other kind of information allowing selection and control of applications and data at the server.
  • At least some portions of this information may be displayed at the client device using a screen object whose visual elements are stored in an immutable image file containing text information.
  • a “screen object” is an entity that is displayed on a display screen. Examples of screen objects include screen buttons and icons.
  • An “immutable image file” is a file that is not character based, but rather bit based. Thus, editing such files to change any text therein is very difficult. Examples of such files include files created in formats such as .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, .pcx, and other graphic files.
  • This text information may provide information on a specific function which may be activated by selecting the screen object (e.g., by clicking on a button using a mouse and/or cursor device).
  • the formatting of immutable image files such as files in .gif format does not generally support extraction of text strings (e.g., in ASCII format) which may be perceived by viewing a screen display of a rendering of the image file, as the immutable image files (e.g., bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, .pcx, and other graphic files) are generally created based on graphics information instead of text string information (i.e., in bitmap format instead of ASCII, or character-based format).
  • the text information of the images should preferably be provided in the respective language understood by each individual user. This, however, would require that the text information of the images be provided in different languages.
  • the text strings in the images may be translated into different languages. While this does not pose a problem for a small number of images, where manual translation and creation of the immutable images is possible by manually recreating the immutable image for each different language, a manual translation is neither feasible nor cost effective when a large number of immutable images must be provided in different languages.
  • an application provided by a server or network of servers such as an office application may include a very large number of different menus in different layers including hundreds of individual menu or immutable images containing text. If the service is available in a large number of different geographic areas supporting different languages, a very large number of translation operations and image creation operations is necessary. For example, if an application provides 100 different immutable images (e.g., .gif files or bitmap files) containing text, and if a translation into 20 different languages is necessary, a total of 2000 translation and image creation operations may be necessary.
  • Methods and systems consistent with the present invention provide an improved translation system that allows users to view screen objects of a user interface in different languages even though the screen object's visual elements are stored in an immutable image file (e.g., a file having a graphic format such as .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, or .pcx).
  • the textual elements of the screen objects are associated with the image files as text strings.
  • the improved translation system automatically translates the text strings into different languages and generates image files that contain the translated textual elements.
  • these image files are made available so that when a user wishes to display a user interface, it will be displayed in the language of their choice. For example, a browser user may select a language of their choice and receive web pages from a web server in this language.
  • Such functionality facilitates the international use of web sites.
  • a method in a data processing system for localizing an immutable image file containing text in a first language is provided.
  • the method translates the text from the first language into a second language that is different from the first language, and automatically generates a translated immutable image file containing the text in the second language.
  • a computer-readable medium is provided.
  • This computer-readable medium is encoded with instructions that cause a data processing system for localizing an immutable image file containing text in a first language to perform a method.
  • the method translates the text from the first language into a second language that is different from the first language, and automatically generates a translated immutable image file containing the text in the second language.
  • a data processing system for localizing an immutable image file containing text in a first language.
  • the data processing system comprises an immutable image file creation system that translates the text from the first language into a second language that is different from the first language, and automatically generates a translated immutable image file containing the text in the second language; and a processor for running the immutable image file creation system.
  • FIGS. 1 a - 1 b depict block diagrams of an exemplary data processing system suitable for using immutable image files in a number of languages.
  • FIG. 2 depicts a block diagram of a data processing system for automatically generating localized immutable image files containing text, the system suitable for practicing methods and systems consistent with the present invention.
  • FIG. 3 depicts a flowchart illustrating steps of a method for automatically generating localized immutable image files containing text in accordance with methods, systems, and articles of manufacture consistent with the present invention.
  • FIG. 4 depicts a block diagram illustrating a logical flow of an exemplary system for automatically generating localized immutable image files containing text in accordance with methods, systems, and articles of manufacture consistent with the present invention.
  • FIG. 5 depicts a block diagram illustrating a logical flow of an exemplary system for automatically generating localized immutable image files containing text in accordance with methods, systems, and articles of manufacture consistent with the present invention.
  • FIG. 6 depicts a flowchart illustrating steps of an exemplary method for automatically generating immutable image files containing text in a number of languages in accordance with methods, systems, and articles of manufacture consistent with the present invention.
  • FIG. 7 depicts a flowchart illustrating steps of an exemplary method for automatically creating immutable image files having text in a number of languages in accordance with methods, systems, and articles of manufacture consistent with the present invention.
  • FIG. 8 depicts a flowchart illustrating steps of an exemplary method for automatically creating immutable image files in a number of languages in accordance with methods, systems, and articles of manufacture consistent with the present invention.
  • FIGS. 9 a - 9 b depict flowcharts illustrating steps of an exemplary method for identifying text and translating the text.
  • FIG. 10 depicts a flowchart illustrating steps of an exemplary method for identifying text and translating the text both in source code text strings and in software related text.
  • FIGS. 11 a - 11 b depict flowcharts illustrating steps of an exemplary method for identifying text and translating the text both in source code text strings and in software related text.
  • FIG. 12 depicts a flowchart illustrating steps of an exemplary method for identifying text element types and searching for matching strings in a database.
  • FIG. 13 depicts a flowchart illustrating steps of an exemplary method for validating a translation element and storing the validated element in a database.
  • FIG. 14 depicts an exemplary table in a pretranslation database.
  • FIG. 15 a depicts an exemplary display for a user processing a translation of text using exact matching.
  • FIG. 15 b depicts a flowchart illustrating exemplary method steps for a user, a computer, and a database for translating text using exact matching.
  • FIG. 16 a depicts an exemplary display for a user processing the translation of text using fuzzy matching.
  • FIG. 16 b depicts a flowchart which depicts exemplary method steps for a user, a computer, and a database for translating text using fuzzy matching.
  • FIG. 17 depicts an exemplary system for translating text using a pretranslation database.
  • FIG. 18 depicts an exemplary sentence based electronic translation dictionary.
  • FIG. 19 depicts an exemplary networked system for use with an exemplary translation technique.
  • FIG. 1 a depicts a block diagram of an exemplary data processing system including a client 102 , a client 104 and a server 100 , similar to a data processing system for implementing StarOffice™ running in Sun One Webtop, developed by Sun Microsystems, in which a number of client devices may access the server device 100 to use a service application such as a word processor, a spreadsheet application or a graphics application.
  • Client 102 or client 104 may include a human user or may include a user agent.
  • Clients 102 and 104 each include a browser 106 and 108 , respectively.
  • Browsers 106 and 108 may be used for displaying data provided by the server 100 (e.g., HTML data or XML data).
  • the data may include references to immutable image files containing text (e.g., .gif files or bitmap files) which may be downloaded to the client browsers 106 and 108 for display as images such as control buttons or icons.
  • Clients 102 and 104 may communicate with server 100 through communication networks which include a large number of computers, such as local area networks or the Internet. Access to information located in either client 102 , client 104 , or server 100 may be obtained through wireless communication links or fixed wired communication links or any other communication means. Standard protocols for accessing and/or retrieving data files over a communication link, for example, over a communication network, may be employed, such as a HyperText Transfer Protocol (“HTTP”).
  • FIG. 1 b depicts a block diagram of the exemplary data processing system of FIG. 1 a , more particularly showing browser displays 120 and 130 on clients 102 and 104 , respectively, of immutable images 122 and 132 having corresponding text in different languages.
  • Server 100 includes a secondary storage device 140 which includes a German image file 142 which is displayed as a button 122 on the browser display 120 of client 102 , and an English image file 144 which is displayed as a button 132 on the browser display 130 of client 104 .
  • Each of client 102 and 104 may specify a language in order to be provided with image files in the appropriate language for display during execution of the same general application such as a word processing application which may be executed as a separate process for each individual user.
  • each individual user may use the application and receive screen displays including text information in a preferred language of the individual user.
  • FIG. 2 depicts a block diagram of a data processing system 200 suitable for practicing methods and systems consistent with the present invention, which provides immutable image files having text 232 and 234 in a number of languages.
  • FIG. 2 particularly illustrates how immutable images containing text strings in different languages may be created at reduced complexity and costs.
  • System 200 includes a server 201 for generating the immutable image files and a server 260 for translating strings in particular languages into different languages.
  • Servers 201 and 260 communicate with each other via a connection 245 which may be a direct connection, a network connection such as a LAN connection or a WAN connection such as the Internet.
  • Text strings in a first language are identified on the server 201 and are transferred to the server 260 by a translator module 218 on the server 201 as a file of text strings 280 for translation into at least one second language that is different from the first language.
  • the text strings are transferred back to the server 201 for processing to generate immutable image files containing the text strings in different languages.
  • the immutable image files containing the text strings may then be used in applications so that users may view screen objects on a display in their choice of different languages.
  • Server 260 includes a CPU 262 , a secondary storage device 268 which includes a translation database 270 , a display 265 , an I/O device 264 , and a memory 266 , which communicate with each other via a bus 291 .
  • Memory 266 embodies an operating system 295 and a translation module 278 which inputs text strings from the file of text strings 280 and translates the strings into different languages using the translation database 270 .
  • One suitable translation process for use with methods and systems consistent with the present invention is described below. However, other processes may also be used.
  • the translated text strings are then transferred back to server 201 via the connection 245 .
  • Alternatively, the elements of servers 260 and 201 may all be located on one device.
  • Server 201 includes a CPU 202 , a secondary storage device 208 , a display 205 , an I/O device 204 , and a memory 206 , which communicate with each other via a bus 231 .
  • Memory 206 embodies an operating system 250 , a translator module 218 , a parser 220 , and a script 222 which is associated with an image manipulation program 224 to input text information from a text string file 226 and a template 230 to generate immutable image files 232 and 234 , each of which contains text in a language typically different from the other, when the script 222 is initiated.
  • the script 222 may be initiated by a call entered by a user using the operating system 250 , or the script 222 may be initiated by a call included in a batch file which is executed by a user or system command.
  • a batch file of a number of calls to the script 222 may be stored in the secondary storage device 208 for access by a user or a system in order to generate a number of immutable image files in a number of different languages, and for a number of different base images (e.g., cancel button, system setting buttons).
  • the parser 220 may be used to parse text information from a text element file 228 to create the text string file 226 .
  • a “text element file” is a file which contains text elements. An exemplary text element file written in XML is shown below.
  • a “text element” is a property or attribute associated with a text string, including the text string itself.
  • text elements include the text string, a language of the text string, information regarding an associated immutable image file which includes the text string (e.g., display characteristics), and location information indicating a storage location for the associated immutable file.
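The text element file promised above does not survive in this copy of the document. The following sketch shows what such an XML file and the role of parser 220 might look like; the element and attribute names are illustrative assumptions, not taken from the patent.

```python
import xml.etree.ElementTree as ET

# Hypothetical text element file; tag and attribute names are
# assumptions, chosen to carry the properties the patent lists:
# the string, its language, a template, and an output location.
TEXT_ELEMENT_FILE = """\
<textelements>
  <textelement id="cancel" language="en" template="Cancel" output="Cancel_en.gif">
    <string>Cancel</string>
  </textelement>
  <textelement id="sysdef" language="en" template="SysDef" output="SysDef_en.gif">
    <string>System Defaults</string>
  </textelement>
</textelements>
"""

def parse_text_elements(xml_text):
    """Extract the text elements, mirroring the role of a parser that
    creates a text string file from a text element file."""
    root = ET.fromstring(xml_text)
    elements = []
    for el in root.findall("textelement"):
        elements.append({
            "string": el.findtext("string"),
            "language": el.get("language"),
            "template": el.get("template"),
            "output": el.get("output"),
        })
    return elements

if __name__ == "__main__":
    for e in parse_text_elements(TEXT_ELEMENT_FILE):
        print(e["string"], "->", e["output"])
```

The extracted dictionaries correspond to entries of a text string file ready for translation or image generation.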
  • the parser 220 may also parse the text element file 228 to obtain text strings for the translator module 218 to transfer to the server 260 for translation. After translation, the translated strings may be transferred back to the translator module 218 to be merged back into the text element file 228 to enable multilingual applications by accessing the text elements in different languages.
  • the translation database 270 may be used to store strings in different languages, as well as other information such as template information associated with the template 230 .
  • An example of a very simple database table containing template information is:

    Template   File Name    Width   Height   Color
    SysDef     SysDef.gif   300     60       16
    Cancel     Cancel.gif   200     60       25
  • information is provided for two immutable image files, one for a system user button and one for a cancel button.
  • a base file name is provided for storing an immutable image file for each template.
  • width, height, and color information of the buttons is provided. Additional information such as shape, a base language text string (e.g., “System Defaults” or “Cancel” if English is the base language for translating the strings for these buttons), placement of the string within the button, and information regarding a previously generated immutable image file are stored in additional columns (not shown).
  • the information regarding a previously generated immutable image file may be used to generate an empty (not having text) immutable image file, so that a translated string may be merged into the file.
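The template table above can be modeled in a small relational store. A minimal sketch, assuming an in-memory SQLite table with columns named after the table's headers (the patent does not specify a schema):

```python
import sqlite3

# In-memory stand-in for the template table in the translation database.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE templates (template TEXT PRIMARY KEY, file_name TEXT, "
    "width INTEGER, height INTEGER, color INTEGER)"
)
conn.executemany(
    "INSERT INTO templates VALUES (?, ?, ?, ?, ?)",
    [("SysDef", "SysDef.gif", 300, 60, 16),
     ("Cancel", "Cancel.gif", 200, 60, 25)],
)

def template_info(name):
    """Fetch the template row consulted when generating an immutable
    image file for a given base image."""
    return conn.execute(
        "SELECT file_name, width, height, color FROM templates "
        "WHERE template = ?",
        (name,),
    ).fetchone()

if __name__ == "__main__":
    print(template_info("Cancel"))  # ('Cancel.gif', 200, 60, 25)
```

Additional columns (base language string, string placement, prior image reference) would extend this schema in the same way.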
  • a call to the script 222 may be generated by requesting template information in the translation database 270 , and may be included in the call to the script 222 in lieu of using the template file 230 .
  • a specific example of a script, a call to the script, a text element file, and a text string file are shown below.
  • An exemplary image manipulation program used for the example is GIMP (GNU Image Manipulation Program).
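The specific script and call referred to above are not reproduced in this copy. As a sketch, a call to a script like script 222 could be assembled as a GIMP batch command line; the Script-Fu procedure name, its parameter order, and the flags shown are purely illustrative assumptions.

```python
def build_script_call(template_file, text, output_file, width, height):
    """Assemble a hypothetical batch-mode invocation of an image
    manipulation script (e.g., a GIMP Script-Fu procedure).
    All names here are illustrative, not from the patent."""
    script = ('(make-text-image "{t}" "{s}" "{o}" {w} {h})'
              .format(t=template_file, s=text, o=output_file,
                      w=width, h=height))
    # "-i" runs without a UI; "-b" passes a batch command.
    return ["gimp", "-i", "-b", script, "-b", "(gimp-quit 0)"]

# One entry of a batch file generating a German cancel button.
call = build_script_call("Cancel.gif", "Abbrechen", "Cancel_de.gif", 200, 60)
```

A batch file would simply contain one such command line per language and per base image.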
  • a commercially available program may be used for translating the text strings, such as a translation program available from Trados. Such a program may be used by configuring code to obtain the strings for translation, provide the strings to the translation program, receive the translated strings, and add the translated strings into the text element file.
  • the parser 220 may be configured to extract the text for translation from at least one image creation file including position information and text elements from a number of immutable image files containing text in the first language.
  • the text elements may be extracted from an image creation file such as a HTML (Hyper Text Markup Language) file or an XML (Extensible Markup Language) file or any other image creation file suitable for interpretation at a data processing device (e.g., user 102 or 104 ) for display.
  • the parser 220 may automatically extract text elements from the image creation file and collect the text elements in the text element file 228 for further processing.
  • An image creation file may generally be used to create a screen display (e.g., at a client accessing a service application at a server or group of servers).
  • the parser 220 may further merge the translated text elements in a second language into an image creation file for the second language.
  • the text elements for the immutable image files in the second language may be provided in the image creation file for the second language, so that a client operated by a user who is fluent in the second language may be provided with a display screen including text elements in the second language and a display of immutable image files containing text in the second language.
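Merging translated text elements into a per-language image creation file can be sketched as a substitution pass over an HTML fragment. The placeholder convention and file names below are assumptions; the parser described above would operate on complete HTML or XML documents.

```python
# Hypothetical image creation file using {key} placeholders for the
# localized image reference and its text; both names are illustrative.
IMAGE_CREATION_TEMPLATE = '<img src="{cancel_img}" alt="{cancel_text}">'

# Fabricated per-language values for illustration.
TRANSLATIONS = {
    "en": {"cancel_img": "Cancel_en.gif", "cancel_text": "Cancel"},
    "de": {"cancel_img": "Cancel_de.gif", "cancel_text": "Abbrechen"},
}

def localize(template, language):
    """Produce the image creation file for one language by merging in
    the translated text elements and localized image file names."""
    return template.format(**TRANSLATIONS[language])

german_page = localize(IMAGE_CREATION_TEMPLATE, "de")
```

A client that prefers German would then be served `german_page`, referencing the German immutable image file.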
  • This example is particularly suitable for application in network environments such as the Internet.
  • the original uses of the Internet were electronic mail (e-mail), file transfers (ftp or file transfer protocol), bulletin boards and newsgroups, and remote computer access (telnet).
  • the World Wide Web (web), which enables simple and intuitive navigation of Internet sites through a graphical interface, expanded dramatically during the 1990s to become the most important component of the Internet.
  • the web gives users access to a vast array of documents that are connected to each other by means of links, which are electronic connections that link related pieces of information in order to allow a user easy access to them.
  • Hypertext allows the user to select a word from text and thereby access other documents that contain additional information pertaining to that word; hypermedia documents feature links to images, sounds, animations, and movies.
  • the web operates within the Internet's basic client-server format.
  • Servers include computer programs that may store and transmit documents (i.e., web pages) to other computers on the network when requested, while clients may include programs that request documents from a server as the user requests them.
  • Browser software enables users to view the retrieved documents.
  • a web page with its corresponding text and hyperlinks may be written in HTML or XML and is assigned an online address called a Uniform Resource Locator (URL).
  • Information may be presented to a user through a graphical user interface called a web browser (e.g., browser 106 or 108 ).
  • the web browser displays formatted text, pictures, sounds, videos, colors, and other data.
  • HTML was originally used for this purpose. HTML is a markup language whereby a file is created that contains the necessary data as well as information relating to the format of the data.
  • XML has emerged as a next generation of markup languages.
  • XML is a language similar to HTML, except that it also includes information (called metadata) relating to the type of data as well as the formatting for the data and the data itself.
  • this example may be used to extract text elements from HTML or XML pages or other types of pages such as image creation files, in order to facilitate provision of immutable image files in different languages for internationalizing service applications accessible over the Internet or any other network.
  • the text element file 228 may include any kind of collection of text elements stored in a memory area of the data processing system 200 , or stored at any other location.
  • the text element file 228 may be provided at an external location, accessible from the data processing system 200 through a communication network.
  • the text element file 228 may be a centralized resource accessible by a number of data processing systems such as the data processing system 200 from arbitrary locations. Accordingly, a decentralized generation of immutable image files in various languages may be possible, including a consistent use of translated text elements. This allows consistent presentation of textual information to users, in contrast to environments in which different translation operations yield different translations of text elements for use in the same or different service applications.
  • Collecting text elements in the text element file 228 may be facilitated by a check-in process, allowing registration of text elements and translations thereof in the text element file 228 , in order to maintain information on languages in which the individual immutable image files are available.
  • the text element file 228 may include an XML file and an XML editor may be used for generating the translated text elements. Further, the text element file 228 may include information of a desired output file name of the respective immutable image files and information regarding a template, such as template 230 , which may be needed for creation of an immutable image file.
  • the text element file 228 may also serve as a basis of information enabling proper handling of the created immutable image files and enabling selection of a suitable template for the text elements which are collected.
  • different types of text elements may be associated with different templates, enabling creation of immutable image files in certain categories (e.g., characterized by size, color or any other graphic information).
  • For example, immutable image files related to data handling may be generated using a first template, while immutable image files enabling a setting of references (e.g., for display) may be generated using a second template.
  • the translation module 278 for obtaining translated text elements in at least one second language may include a program or a sequence of instructions (as described below) to be executed by the CPU 202 , by corresponding hardware or a combination of hardware and software for realizing the functionality of obtaining translated text elements in the second language.
  • the translation of the text elements may be accomplished by using pretranslation databases.
  • the translator module 218 may be configured to extract relevant information from the text element file 228 or may be configured to import this information into a database such as the translation database 270 for improved handling of the text elements. After identifying individual text elements, the translator module 218 may invoke a translation service providing a translation of the text element into the second language.
  • the translation service may include an internal application, such as a thesaurus provided in a memory such as memory 206 , within or accessible from the data processing system 201 . Further, the translation service may be an external translation service provided by an external application program (e.g., offered by a third party). For example, the translator module 218 may be configured to invoke a web-based translation tool, accessed through a computer network such as a local area network or the Internet, or a combination thereof.
  • the translated image text elements may be collected in the translation database 270 and/or may then be merged into the text element file 228 .
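The extract-translate-merge flow described in the bullets above can be sketched as follows. All names here are illustrative, not taken from the patent; the `lookup` callback stands in for whichever internal or external translation service the translator module invokes, here backed by a small pretranslation database as mentioned above.

```python
def extract_text_elements(lines):
    """Collect 'key=value' text elements from a text element file (illustrative format)."""
    elements = {}
    for line in lines:
        line = line.strip()
        if line and "=" in line:
            key, _, value = line.partition("=")
            elements[key.strip()] = value.strip()
    return elements

def translate_elements(elements, translate, target_lang):
    """Obtain translated text elements via a translation service callback."""
    return {key: translate(text, target_lang) for key, text in elements.items()}

def merge_translated(elements, translated, source_lang, target_lang):
    """Merge translated elements back, keyed by language identifier."""
    return {key: {source_lang: text, target_lang: translated[key]}
            for key, text in elements.items()}

# An illustrative pretranslation database serving as the translation service:
pretranslations = {("system settings", "de"): "Systemeinstellungen",
                   ("user groups", "de"): "Benutzergruppen"}

def lookup(text, lang):
    return pretranslations.get((text, lang), text)

elements = extract_text_elements(["btn1=system settings", "btn2=user groups"])
merged = merge_translated(elements, translate_elements(elements, lookup, "de"),
                          "en", "de")
```

In a real deployment the `lookup` callback would call a thesaurus application or a web-based translation tool, as the surrounding text describes.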
  • a program may be provided having instructions configured to cause a data processing device to carry out the method of at least one of the above operations.
  • a computer readable medium may be provided, in which a program is embodied, where the program is to make a computer execute steps of the method discussed above.
  • a computer-readable medium may be provided having a program embodied thereon, where the program is to make a computer or a system of data processing devices to execute functions or operations of the features and elements of the examples described previously.
  • a computer-readable medium may be a magnetic or optical or other tangible medium on which a program is recorded, but may also be a signal, e.g., analog or digital, electronic, magnetic or optical, in which the program is embodied for transmission.
  • a computer program product may be provided comprising the computer-readable medium.
  • the text string file 226 may include information of a desired output file name of the respective immutable image files and information regarding a template, such as the template 230 , used for creation of an immutable image file.
  • a template 230 may include any kind of image file information.
  • a template may include information related to at least one of image size; image shape; color; and position of a text element in an image.
  • a number of templates for various images may be provided and stored in the memory 206 or at an external location.
  • the template information may be included in the call to the script 222 .
  • the text string file 226 may also serve as a basis of information enabling proper handling of the created images and enabling selection of a suitable template for the text elements collected.
  • different types of text elements may be associated with different templates, allowing creation of images in various categories such as size, color or any other type of graphic information.
  • images related to data handling may be generated using a first template, whereas images enabling a setting of references such as for display may be generated using a second template.
  • the image size may specify an area of the image such as an area in a display screen to be displayed at a client unit. Further, the image size may specify a minimum size of the image, such as a size suitable to fit the text element.
  • the image shape information may specify any graphic parameters for obtaining certain graphic effects such as shading or 3-D effects. Further, the shape may also specify geometric shapes such as rectangles, circles and polygons.
  • the image color may include information related to shading of the image to obtain further graphic effects.
  • the image information may further include information regarding a position of a text element within an image, enabling proper placement of the text element within the image, and/or may relate to a position of the image in a display screen, as specified in an image creation file (e.g., an HTML file), enabling proper placement of the image, such as in a menu including different images to be displayed at a client device.
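A template of this kind can be modeled as a small record; the fields below mirror the image information listed above (size, shape, color, and text position), while the field names and example values are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class Template:
    width: int     # image size (e.g., a minimum size suitable to fit the text element)
    height: int
    shape: str     # geometric shape or graphic effect, e.g. "rectangle" or "3d"
    color: str     # image color, which may include shading information
    text_x: int    # position of the text element within the image
    text_y: int

# e.g. a first template for data-handling images and a second for reference-setting images
data_template = Template(width=120, height=24, shape="rectangle",
                         color="#c0c0c0", text_x=8, text_y=4)
refs_template = Template(width=90, height=20, shape="rectangle",
                         color="#e0e0e0", text_x=6, text_y=3)
```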
  • the system of FIG. 2 facilitates a generation of immutable images for screen displays similar to those discussed with regard to FIG. 1, wherein the images have text elements with different languages, in order to adapt a service application to different languages of different users.
  • a first user who is fluent in a first language may be provided with display screens displaying screen objects with text information in the first language
  • a second user who prefers a second language may be provided with a display screen displaying screen objects with text elements in the second language.
  • the data processing system 200 may generally include any computing device or group of computing devices connected with one another.
  • the data processing system 200 may be a general purpose computer such as a desktop computer, a laptop computer, a workstation or combinations thereof.
  • the functionality of the data processing system 200 may be realized by distributed data processing devices connected by a local area network such as a company-wide computer network, by a wide area network such as the Internet, or by combinations thereof.
  • the image manipulation program 224 may be a graphics application provided as a stand-alone program, e.g., offered by a third party. Further, the image manipulation program 224 may be a commercially available graphics application enabling generation of immutable image files based on given text elements and given template information. For example, the image manipulation program 224 may include the GNU (Gnu's Not UNIX) image manipulation program GIMP, which is described at www.gimp.org.
  • the system 200 is configured to instruct the image manipulation program 224 to generate immutable image files based on given parameters and may be adapted to generate a script to instruct the image manipulation program 224 to generate the immutable image files based on the given parameters.
  • a script such as script 222 may conveniently instruct the image manipulation program 224 based on the parameters.
  • the script may be a sequence of instructions provided to the image manipulation program 224 , instructing the image manipulation program 224 to generate an immutable image file having certain properties and including a specific text element.
  • the script 222 may include a text element for insertion into the immutable image file.
  • the script may be written and intermediately stored in a file (e.g., as a batch file stored in secondary storage 208 ), to be provided to the image manipulation program 224 , and may include the parameters discussed above (i.e., parameters needed to generate the immutable image file).
  • the parameters may include a template and the text element of the immutable image.
  • the parameters may include immutable image file information as discussed above, including image size, shape, color and/or position of a text element within the image.
  • the parameters may include a number of text elements of the image and at least one template, enabling a concurrent generation of various immutable image files with different text elements and/or templates.
  • Each text element may be associated with a particular template (e.g., specified in the input file), or the text elements may be categorized in groups, each group associated with a particular template.
  • the parser 220 may be configured to generate the text string file 226 including at least one text element (e.g., from the text element file 228 ) and the image manipulation program 224 may be instructed, using the script 222 , to access the text string file 226 in order to retrieve at least one text element of the text string file 226 .
  • the text string file 226 may be intermediately stored in the memory 206 , or at any other location. Thus, the text string file 226 may be based on at least a portion of the text element file 228 .
  • the text string file 226 may further include at least one of a language identifier; and a desired name of an immutable image file to be created.
  • the language ID facilitates easier classification of the immutable image files created based on the text elements, and a desired name of an immutable image file to be created facilitates easy access to the generated graphic elements including the image files, such as by updated HTML code used to generate a screen display at a client unit.
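One plausible layout for such a text string file holds one entry per line with a language identifier, a desired output file name, and the text element itself; the semicolon-separated format below is an assumption for illustration, not the patent's actual file format.

```python
def parse_text_string_file(lines):
    """Parse lines of the form 'language ID;output file name;text element'."""
    entries = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        lang, out_name, text = line.split(";", 2)
        entries.append({"lang": lang, "file": out_name, "text": text})
    return entries

sample = [
    "# language ID;output file name;text element",
    "en_US;en_US_settings.png;system settings",
    "de;de_settings.png;Systemeinstellungen",
]
entries = parse_text_string_file(sample)
```

An image manipulation program instructed by a script could then iterate over `entries` to retrieve each text element together with its language ID and target file name.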
  • the image manipulation program 224 may access the text string file 226 , retrieve the text elements and generate the immutable image file based on the immutable image file information.
  • the immutable image file information may be directly included in the text string file 226 .
  • the script 222 and image manipulation program 224 may be configured to generate an empty immutable image file based on the template 230 and may be configured to merge the translated text element into the empty immutable image file.
  • a collection of immutable image files with no text may be created, and text elements may be included on demand.
  • This enables generation of a generic image creation file (e.g., an HTML file) for generating a screen display at a client unit.
  • the text elements in the respective languages can then be included in the generic image creation file in order to generate image creation files in different languages.
  • the image manipulation program 224 may generate immutable image files for storage at the data processing system 200 or at any other location.
  • the immutable image files may also be directly included into image creation files for generating screen displays at client units or image creation files with references to the immutable image files stored may be created.
  • Two exemplary immutable image files which may be created using system 200 are an immutable image file in a first language, English, displaying the English expression “system defaults”, and a second image button in the second language, German, which includes the corresponding German expression “Systemeinstellungen.”
  • the buttons 122 and 132 of FIG. 1 b illustrate screen displays of the immutable image files including these text elements.
  • FIG. 3 depicts a high-level flowchart illustrating steps of a method for automatically localizing an immutable image file containing text in a first language, the method suitable for use with methods, systems and articles of manufacture consistent with the present invention, such as the system of FIG. 2.
  • a translation system such as the system shown on server 260 translates the text from the first language into a second language that is different from the first language (Step 300 ).
  • script 222 and image manipulation program 224 automatically generate a translated immutable image file containing the text in the second language (Step 302 ).
  • FIG. 4 depicts an exemplary high level logical flow using elements of a system for generating immutable image files containing specific exemplary text in different languages according to another example.
  • FIG. 4 particularly illustrates extraction of text elements and merging of translated image text elements into image creation files such as HTML or XML files.
  • screen displays may be generated for client devices accessing services at a server. As the client devices may be located in different geographic areas supporting different languages, the service provided at the server may be enabled to generate image creation files or screen displays for the client devices in different languages.
  • FIG. 4 illustrates two screen displays based on image creation files, in different languages.
  • a first screen display 410 includes English language buttons 411 and 412 .
  • the displayed buttons in the present example are used as menu items allowing control of a corresponding service application at a server.
  • the button 411 in the present example is assumed to relate to system settings and therefore shows the English expression “system settings.”
  • the second image button 412 relates to user group functionality, and therefore displays the expression “user groups.”
  • An image creation file corresponding to the screen display 410 may be used for users preferring the English language and may thus be provided from the server to corresponding client devices which have selected English as the preferred language.
  • the second screen display 450 shown in FIG. 4 shows German language buttons and therefore is suitable for users in a region that supports the German language, or users who have selected German as their preferred language.
  • the screen display 450 includes buttons 451 and 452 .
  • the button 451 includes the German expression “Systemeinstellungen,” indicating that the corresponding button also relates to system functions.
  • the button 452 corresponding to the button 412 contains the German expression “Benutzergruppen” and indicates that this button also relates to a user group functionality.
  • the screen displays may include a larger number of immutable images and/or sub-menus or sub-screens, so that there may well be a very large number of different immutable image files to be considered.
  • the exemplary text elements may be extracted from the screen display 410 corresponding to the English language and may be translated into text elements for the screen display 450 for the German language. This extraction and creation process may be carried out offline. Thus, immutable image files for screen displays in different languages may be created and stored beforehand and made accessible to a service application such as a text processing application.
  • the extraction and creation process for providing immutable image files containing text in different languages may be provided on demand (e.g., if a user with a particular preferred language logs into the server).
  • the screen display (i.e., the image creation file) in the corresponding language, including text elements and images in the preferred language, may be created dynamically (i.e., on demand).
  • an arrow 460 illustrates a corresponding extraction process enabling extraction of the text elements from the image buttons 411 and 412 and provision to the text element file 420 .
  • the extraction process may be carried out as described previously.
  • the text element file 420 therefore will contain, in the present simplified example, the expressions “system settings” and “user groups.” Then, in steps 461 and 462 , translated text elements are obtained using a translation service 430 .
  • the text element file 420 may include corresponding language identifiers (i.e., showing that the original text elements are in the English language and that needed text elements should be in the German language).
  • the text element file 420 will further include the German expressions “Systemeinstellungen” and “Benutzergruppen.” Thereafter, as illustrated by arrow 463 , a graphic program 440 is instructed to generate immutable image files in the German language, corresponding to the buttons 411 and 412 in the English language. The creation of the immutable image files was described previously.
  • immutable image files corresponding to the buttons 451 and 452 are generated.
  • the immutable image files may be checked into a database of immutable image files or may be stored intermediately in any other way. Thereafter, as outlined by an arrow 464 , the screen display 450 is generated for display at a client device, or a corresponding image creation file is generated.
  • an English language user accessing a particular service controlled by the screen displays 410 or 450 at the client units may be provided with the English language screen display 410
  • a German language user may be provided with the German language screen display 450 .
  • FIG. 5 depicts an exemplary logical flow of elements of a system for obtaining and using immutable image files in a number of different languages according to another example.
  • FIG. 5 particularly shows how the immutable image files in different languages may be used to supply users with display screens in different languages.
  • FIG. 5 shows a data processing device 500 for obtaining image buttons in a number of different languages.
  • the data processing device 500 may include a single processing device or a number of processing devices in communication with one another.
  • the data processing device 500 includes a text element file 501 (e.g., as described previously).
  • the text element file 501 may be stored in an internal memory or may be provided at an external memory location.
  • the data processing device 500 includes an input file 502 .
  • the input file 502 may include at least one text element, and may further include a language ID and/or name of an immutable image file to create.
  • the input file may also be stored at an internal location or at any other external memory location.
  • the data processing device 500 also includes a template 503 including immutable image file information such as image size and/or image shape and/or color and/or a position of a text element within the image.
  • the data processing device 500 may store a number of templates at an internal memory location or an external memory accessible from the data processing device 500 .
  • a suitable template (e.g., the template 503 ) may be selected based on information contained in the text element file 501 or on information on image parameters obtained from another source. Further, the template 503 may be dynamically generated.
  • the data processing device 500 includes a script 504 , as described previously.
  • the script 504 may be generated based on the input file 502 , and may include information of the input file 502 or a reference or storage location enabling access to the input file. Further, the script 504 may include image information from the template 503 .
  • the script 504 may be intermediately stored at the data processing device 500 or may be dynamically generated and transferred to the graphics application program 505 .
  • the graphics application program 505 may be a graphics application (e.g., GIMP) as described previously, provided at the data processing device, as shown in FIG. 5, or may be provided at an external location, arranged to be accessed from the data processing device 500 through a communication link.
  • the graphics application program 505 is instructed to generate at least one immutable image file, as described previously.
  • the immutable image files with text elements in different languages, as illustrated by reference numeral 506 may be stored in the data processing device 500 , or may be stored at an external location.
  • the data processing device 500 invokes a translation service 510 in order to obtain translated text elements.
  • the translation may be performed by a translation system as described below in more detail.
  • FIG. 5 also shows a server 520 which provides services to a number of users 541 and 542 . The services enable users to access and/or manipulate data at the server 520 from remote locations through client devices.
  • the server 520 may include a large computing device providing services such as office applications, communications applications or any other applications.
  • Exemplary clients 541 and 542 are arranged to access the server 520 through the Internet, as illustrated at 530 , in order to control execution of a service application, as described previously.
  • the application at the server 520 generates image creation files or screen displays for the clients 541 and 542 in different languages.
  • FIG. 5 illustrates two exemplary screen displays 521 and 522 in a first and a second language.
  • the server 520 provides image creation files to the clients 541 and 542 in the preferred language.
  • the data processing device 500 may be configured to extract text elements (e.g., from an image creation file corresponding to screen display 521 at the server 520 ), including text elements in a first language TE 1 , TE 2 , TE 3 , and TE 4 .
  • the text elements are stored in the text element file 501 .
  • the data processing device 500 obtains translated text elements in the second language using the translation service 510 .
  • the translated text elements are then merged into the text element file 501 .
  • the input file 502 is generated and, based thereon and on the template 503 the script 504 is generated.
  • the script 504 then instructs the graphics application program 505 to generate the immutable image files with text elements TE 1 , TE 2 , TE 3 , and TE 4 in the second language.
  • the server 520 uses the immutable image files in the various languages to generate image creation files (e.g., in HTML or XML), such as image creation files corresponding to the screen display 521 and 522 .
  • FIG. 6 depicts a flowchart depicting steps of an exemplary method for generating screen objects (e.g., image buttons) in a number of languages. The steps of FIG. 6 may be carried out using the system shown in FIG. 2; however, FIG. 6 is not limited thereto.
  • a parser generates an input file including at least one image text element, and/or a language ID and/or a name of an immutable image file to be created (Step 601 ).
  • the input file may resemble the text string file 226 and may correspond to at least part of the text element file 228 .
  • a parser obtains image information (i.e., information on the image such as color, shape, dimensions, and position of a text string within the image) from a template (Step 602 ).
  • the image information may relate to one or a number of templates (as required for the text elements). For example, image information for different groups of text elements and/or different languages may be obtained.
  • Obtaining the image information may also include dynamically generating image information based on information from the input file (e.g., a size of a text element in a particular language).
  • a script is initiated to instruct a graphic application program (e.g., the image manipulation program 224 which may be implemented using the GNU (Gnu's Not UNIX) image manipulation program GIMP) to generate at least one immutable image file, the script including information on the input file (e.g., a file name or address, and the at least one immutable image file) (Step 603 ).
  • the script may be used to instruct a graphic application program on a remote server to create an immutable image file based on given parameters.
  • the given parameters may be obtained from the input file and may be provided with the image information.
  • the graphic application program may access or retrieve the input file from a memory including at least one of text elements, language IDs and immutable image file names. Further, the input file may also include the image information.
  • the graphic application program generates at least one immutable image file based on the image information and the translated text element (Step 604 ). For example, a number of immutable image files for a particular text element in different languages may be created simultaneously (e.g., if the input file includes one text element in a number of languages). Further, a number of immutable image files may be created simultaneously for a number of text elements in one language. Moreover, a combination of both cases is possible, wherein a number of immutable image files for a number of text elements in a number of languages may be created.
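The three cases above (one text element in several languages, several text elements in one language, or both) reduce to iterating over the cross product of text elements and languages. The sketch below only plans the generation jobs; the job fields and the language-prefixed file names are illustrative assumptions.

```python
from itertools import product

def image_jobs(element_keys, languages, translations):
    """Build one image-generation job per (text element, language) pair.

    `translations` maps (element key, language) to the translated text element.
    """
    jobs = []
    for key, lang in product(element_keys, languages):
        jobs.append({
            "name": f"{lang}_{key}.png",        # illustrative language-prefixed file name
            "text": translations[(key, lang)],  # text element to render into the image
        })
    return jobs

jobs = image_jobs(["settings"], ["en_US", "de"],
                  {("settings", "en_US"): "system settings",
                   ("settings", "de"): "Systemeinstellungen"})
```

Each job could then be passed, together with a template, to the graphic application program for simultaneous generation of the immutable image files.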
  • a number of templates may be employed to generate immutable image files for different groups of text elements.
  • the text elements may be classified according to language ID or according to content, in order to be able to make characteristics or appearance of an immutable image file language dependent and/or content dependent.
  • Different templates may be used for each classified group. For example, text elements for data manipulation may be classified into a first group for images having a first appearance, and text elements for setting preferences may be collected in a second group for images having a second appearance.
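Classifying text elements into template groups, as described, can be as simple as a lookup based on a naming convention; the ID prefixes and group names below are assumed for illustration only.

```python
def template_group(element_id):
    """Assign a text element to a template group; the ID prefixes are an assumed convention."""
    if element_id.startswith("data_"):
        return "first_template"    # text elements for data manipulation
    if element_id.startswith("pref_"):
        return "second_template"   # text elements for setting preferences
    return "default_template"

groups = {eid: template_group(eid) for eid in ("data_sort", "pref_language", "help")}
```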
  • the graphic application program then stores at least one immutable image file using the at least one immutable image file name (Step 605 ).
  • the file names may enable a convenient identification of the immutable image files in later processing steps (e.g., during generation of image creation files for clients fluent in different languages).
  • the graphic application program may generate an empty immutable image file (i.e., an immutable image file free of a text element) based on the image button information (Step 606 ).
  • immutable image files for receiving text elements may be created.
  • the translated text elements may be merged into the respective empty immutable image files (Step 607 ).
  • a library of immutable image files may be created and the text elements may be merged into the retrieved empty immutable image files.
  • the graphic application program described previously includes the GNU (Gnu's Not UNIX) image manipulation program GIMP.
  • creation of immutable image files 232 and 234 is accomplished by a GIMP call, a GIMP input file (e.g., text string file 226 ) and a GIMP script (e.g., script 222 ).
  • the GIMP call is initiated (e.g., on a command line of an operating system such as Windows or from a batch file by the operating system) and in turn instructs the GIMP to input text string information from the input file and template information from the call itself.
  • FIG. 7 depicts a flowchart depicting steps of another exemplary method for generating immutable image files in different languages. The steps of FIG. 7 may be carried out using the system shown in FIG. 2; however, FIG. 7 is not limited thereto.
  • a parser transfers text elements into an XML text file or a database (Step 701 ). Transferring the text elements may be effected by extracting text elements from image creation files or from screen displays including immutable image files (e.g., HTML or XML files for display at a client device). Transferring the text elements to an XML file or a database facilitates efficient handling of the text elements.
  • the database may be provided internal to a data processing device such as the data processing system 200 shown in FIG. 2, or at any other location.
  • the XML file including the text elements may be visualized for an operator using an XML editor.
  • a web-based tool or an XML editor obtains translated image text elements (Step 702 ).
  • the text elements may be automatically extracted from the XML text file or database and transferred to the translation service, which returns translated text elements as described previously.
  • Commands for the translation service may include a desired language.
  • the translation service may be a commercially available translation service or a translation service using an application at the data processing system 200 shown in FIG. 2 and as described below.
  • the web-based tool or the XML editor merges the translated text elements received from the translation service back into the XML file or the data base (Step 703 ). Accordingly, a collection of different text elements, each of the text elements in different languages, may be provided in the XML file or database. Thereafter, the graphic application program (e.g., GIMP) as discussed above generates immutable image files based on the translated image text elements as described previously (Step 704 ).
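The XML round trip of Steps 701 through 703 can be sketched with the Python standard library; the element and attribute names below are illustrative, not the patent's actual schema.

```python
import xml.etree.ElementTree as ET

def build_text_file(elements, source_lang):
    """Step 701: transfer extracted text elements into an XML structure."""
    root = ET.Element("textelements")
    for key, text in elements.items():
        el = ET.SubElement(root, "element", id=key)
        ET.SubElement(el, "text", lang=source_lang).text = text
    return root

def merge_into_xml(root, translations, target_lang):
    """Step 703: merge translated text elements back into the XML structure."""
    for el in root.findall("element"):
        ET.SubElement(el, "text", lang=target_lang).text = translations[el.get("id")]

root = build_text_file({"btn1": "system settings"}, "en")
merge_into_xml(root, {"btn1": "Systemeinstellungen"}, "de")
texts = {t.get("lang"): t.text for t in root.find("element").findall("text")}
```

The resulting XML file then holds each text element in every available language and can be handed to the graphic application program in Step 704 .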
  • FIG. 8 depicts a flowchart depicting exemplary options of steps for a method for generating immutable image files in a number of languages. The steps of FIG. 8 may be carried out using the system shown in FIG. 2; however, FIG. 8 is not limited thereto.
  • text elements in a first language are input (Step 801 ).
  • a user may generate the text elements manually.
  • a parser may obtain an image creation file (e.g., an HTML or XML file) (Step 802 ).
  • the parser extracts text elements in a first language from the image creation file (Step 803 ).
  • the parser generates a text element file (e.g., text element file 228 as discussed above) with the text elements (Step 804 ).
  • the parser transfers the text elements from the text element file into a database (Step 805 ).
  • a translator module invokes a translation service (e.g., the system associated with server 260 ) to translate the text elements into a second language (Step 806 ).
  • the translator module 218 invokes a translation service for translating the text elements in the first language into text elements in the second language (Step 806 a ).
  • the translation module 278 then inserts the text elements in the second language into the database (Step 806 b ).
  • An example of a technique for translation is given in more detail below.
  • the translator module 218 obtains the translated text elements and merges the text elements in the second language into the text element file 228 as discussed above (Step 807 ).
  • the text element file 228 now includes text elements in the first language and corresponding text elements in the second language.
  • a parser then obtains display characteristics of the image (e.g., color, shape, dimensions, and position of text within image) from at least one template 230 as discussed above (Step 808 ).
  • an operating system (e.g., operating system 250 ) obtains at least one image creation script (e.g., script 222 ) (Step 809 ).
  • a user may create at least one script (e.g., the script 222 ) for generating immutable image files.
  • the script may be generated automatically.
  • a script may be generated for each individual text element in each particular language.
  • a script for a number of text elements in one language may be generated or a script for one text element in a number of languages may be generated for use by the operating system in conjunction with an image manipulation program (e.g., image manipulation program 224 ).
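Such a script might be handed to GIMP as a batch invocation. GIMP's `-i` (no interface) and `-b` (batch command) flags are real, but the Script-Fu procedure name `script-fu-make-button` and its parameter list are hypothetical; the sketch only composes the command line and does not run GIMP.

```python
def gimp_batch_command(script_proc, text, out_file, width, height):
    """Compose a GIMP batch invocation for one text element (procedure name is hypothetical)."""
    call = f'({script_proc} "{text}" "{out_file}" {width} {height})'
    return ["gimp", "-i",            # -i: run GIMP without its user interface
            "-b", call,              # -b: execute a batch (Script-Fu) command
            "-b", "(gimp-quit 0)"]   # quit GIMP when the batch is done

cmd = gimp_batch_command("script-fu-make-button", "Systemeinstellungen",
                         "de_settings.png", 120, 24)
```

One such command could be generated per text element and language, or a single Script-Fu procedure could loop over an input file, matching the variants described above.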
  • the script 222 instructs the image manipulation program 224 to generate at least one immutable image file 232 with a text element in the second language as discussed above (Step 810 ).
  • the image manipulation program then stores the immutable image file in a location specified by desired location information stored in the text string file 226 as discussed above (Step 811 ).
  • the immutable image file 232 may then be used to serve users with different language preferences.
  • the image manipulation program 224 described previously includes the GNU (Gnu's Not UNIX) image manipulation program GIMP.
  • Code examples of a GIMP call, a GIMP input file (file name: .utf8.txt) and a GIMP script are given below.
  • the GIMP call is initiated (e.g., on a command line of an operating system such as Windows or from a batch file by the operating system) and in turn instructs the GIMP to input text string information from the input file and template information from the call itself.
  • the exemplary GIMP input file shown below (which corresponds to the text string file 226 ) includes exemplary desired location information for storing generated immutable image files.
  • the exemplary desired location information is designed so that each file generated for a particular template is assigned a file name with the same general base name, with a distinguishing prefix which includes a language code (e.g., “de” for German, “en_US” for U.S. English).
  • the files are stored in a common directory.
  • This naming and storage convention enables convenient update of image creation programs such as HTML or XML code by simply modifying the prefix of a referenced image file to agree with the language preference of a particular user to enable a browser of the user to access the immutable image files which have been generated for the user's preferred language.
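Under this convention, adapting an image creation file to a user's language preference amounts to swapping the language-code prefix on each referenced image file. A minimal sketch, assuming the prefixed file-name pattern described above (the regular-expression details are an assumption):

```python
import re

def localize_image_refs(html, target_lang, known_langs=("en_US", "de")):
    """Rewrite '<lang>_<base>.png' image references to use the target language prefix."""
    pattern = re.compile(r"\b(%s)_([\w\-]+\.png)" % "|".join(map(re.escape, known_langs)))
    return pattern.sub(lambda m: f"{target_lang}_{m.group(2)}", html)

html = '<img src="images/en_US_settings.png">'
localized = localize_image_refs(html, "de")
```

Because all variants share a common directory and base name, the HTML or XML code itself needs no other change to serve a different language.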
  • the XML file is an example of a file initially created by a developer when developing a system using immutable image files for display, in order to describe elements of the immutable image file such as text strings which may need to be translated for multilingual users.
  • the XML file as shown has been processed by extracting the original German text strings, translating the strings into other languages, and then merging the translated strings back into the text element file as discussed above.
  • the file may be used to generate a GIMP input file by extracting strings in different languages for insertion into the GIMP input file as discussed above.
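  • Extracting per-language strings from such a text element file can be sketched as below (a minimal Python sketch; the XML schema shown is a hypothetical illustration, as the patent does not fix a particular layout):

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: one <text> element per string, one <trans> child
# per language. The actual XML layout is an assumption for illustration.
XML = """
<imagetexts>
  <text id="greeting">
    <trans lang="de">Willkommen</trans>
    <trans lang="en_US">Welcome</trans>
  </text>
</imagetexts>
"""

def strings_for_language(xml_source: str, lang: str) -> dict:
    """Extract all text strings of one language, e.g. to build a GIMP input file."""
    root = ET.fromstring(xml_source)
    return {
        text.get("id"): trans.text
        for text in root.findall("text")
        for trans in text.findall("trans")
        if trans.get("lang") == lang
    }

print(strings_for_language(XML, "de"))  # {'greeting': 'Willkommen'}
```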
  • the discussion which follows relates to database application methods and programs for localizing software, translating texts contained therein and to support translators in localizing software and translating texts. It also relates to a translation system for localizing software, translating texts and supporting translators.
  • Conventional translation programs attempt to translate texts on a “sentence by sentence” basis. Such translation programs encompass the analysis of the grammatical structure of a sentence to be translated and the transfer of it into a grammatical structure of a target language sentence, by searching for grammar patterns in the sentence to be translated.
  • the text to be translated may also be called the source language text. Translation is usually accomplished by analyzing the syntax of the sentence by searching for main and subordinate clauses. For this purpose the individual words of the sentence need to be analyzed for their attributes (e.g., the part of speech, declination, plurality, and case). Further, conventional translation programs attempt to transform the grammatical form from the source language grammar to the target language grammar, to translate the individual words and insert them into the transformed grammatical form.
  • Ideally, the translated sentence exactly represents or matches the sentence to be translated.
  • One difficulty is that the translation of many words in the text to be translated is equivocal and represents many different words in the target language. Therefore, as long as translation programs cannot use thesauruses specifically adapted for detecting the correct semantic expressions and meaning of sentences, according to the context, machine translations are imperfect.
  • Another problem is connected with software localization (i.e., the adaptation of software to a certain local, geographic or linguistic environment in which it is used). Localization must be done with internationally available software tools in order to ensure proper functionality with users from different cultural and linguistic backgrounds. This problem is also related to the fact that different translators work simultaneously on different aspects of software. So, for example, one team translates the text entries in the source code of software, another team translates the manuals, while a third team translates the “Help” files. To achieve a correct localized version of the software it is beneficial that the three teams use the same translations for common words so that the manuals, “Help” files, and the text elements in the software itself consistently match each other. To achieve this, the teams conventionally have to establish the keywords and the respective translations used in the software. This is time consuming and requires good communication among the teams.
  • An exemplary method described below localizes software by adapting the language of texts in source code of the software by extracting text elements from the source code.
  • untranslated text elements are searched in the extracted text elements.
  • Translation elements are retrieved for each untranslated text element (e.g., by searching a database with pre-stored matching source text elements and respective translations).
  • the next step associates each untranslated element of the source language with the retrieved translation elements, if such translation elements are found.
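  • The four steps just described (extract, search, retrieve, associate) can be sketched as follows (an illustrative Python sketch; the toy database entries and the string-literal extraction rule are assumptions, not data or code from the patent):

```python
import re

# Toy translation database: source text element -> candidate translations.
DB = {"File": ["Datei"], "Save document": ["Dokument speichern"]}

def extract_text_elements(source_code: str) -> list:
    """Extract string literals (the text elements) from source code."""
    return re.findall(r'"([^"]*)"', source_code)

def pretranslate(source_code: str, translated: set) -> dict:
    """Associate each untranslated text element with retrieved translations."""
    association = {}
    for element in extract_text_elements(source_code):  # extract
        if element in translated:                       # search untranslated
            continue
        association[element] = DB.get(element, [])      # retrieve
    return association                                  # associate

code = 'label = "File"; tooltip = "Save document"'
print(pretranslate(code, translated=set()))
```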
  • the method supports a software localizer (or user) by offering one or more possible translations of a text element contained in the source code of a software program. In this way the user may simply review the translation proposals retrieved from a database.
  • the method ensures the adaptability of different software to certain cultural, geographic or linguistic environments.
  • the method further includes the steps of validating the retrieved translation elements, and merging the source code with the validated translation elements. With these steps the user can easily decide if he/she wants to accept (validate) or discard the suggested translation.
  • the validation of the suggested translation may be executed automatically, if the user has validated the same translation element in another section of the source code.
  • Merging the source code with the validated translation elements indicates the enlargement of the collection of text elements for one language in the source code with the validated other language(s) translation elements.
  • the merging may be executed after translation of the whole text.
  • the method further includes a compiling step to compile the merged source code to generate a localized software program in binary code.
  • This software program preferably includes one or more text libraries in languages different from the language of the original source code. Compilation may be the last step in the generation of a localized software version, depending on the programming language.
  • An additional step of binding may be executed when the programming language requires the binding of sub-modules or sub-libraries after compilation. In this case compilation is followed by one or more binding steps. Depending on the programming language, the compilation may include an assembling operation.
  • the method includes an adaptation of the language of texts related to the software (e.g., manuals and Help files) for illustrating the use of the software and supporting the user accordingly.
  • Software is often provided with additional texts such as "Help" files, manuals, and packaging prints.
  • Such texts are provided with translations matching the translations used in the source code of the software.
  • the translation of software related texts may be executed by the additional steps of searching untranslated elements of the software related text, retrieving translation elements for each untranslated text element of the software related text on the basis and in accordance with the association of the retrieved source code translation elements, and associating the untranslated elements of the software related text with the retrieved translation elements.
  • the step of retrieving translation elements for each untranslated text element of the software related text on the basis of the association of the retrieved source code translation elements may include previous validation, merging or compilation of the associated source code translation elements, with a preference on the validated entries.
  • the method may be executed in a local area network (LAN), wherein different teams of translators access and/or retrieve the translation suggestion from the same table and/or dictionary and/or library and/or storage unit.
  • the storage unit may be related to a single language software localization (i.e., there is one translation sub-dictionary for each language in the localized software version). This feature may be combined with a composed translation dictionary structure, so that a number of keywords are stored in and retrieved from a localization specific database, by which keywords of minimum requirements are defined. Text elements not containing the keywords may be retrieved from a common database.
  • Another exemplary method is provided by which text elements are translated, either in dependence on or independently of the translation of software source code.
  • the method utilizes text elements of the software related text and their respective translations stored in a database.
  • Each searched untranslated element of the source language is associated with the retrieved translation elements, if and after such translation elements are found.
  • the method supports a translator by offering one or more possible translations of a source text element.
  • the user or translator may easily decide if he wants to accept or discard the suggested translation.
  • the main difference from conventional translation programs is that the grammar of the source text element is not checked. This is not necessary, as only the text structure of the source language has to match a previously translated text element. It is similar to having an illiterate person compare a pattern (the source text element) with previously stored patterns in a card index and attach the respective file card (with the translation element) at the place where the pattern was found in the source text.
  • a translator may accept or discard the index card, instead of translating each text element by himself, thereby improving his translation speed.
  • the performance of the pretranslation is therefore only dependent on the retrieval speed of the file card and on the size of the card index, and not on the linguistic abilities of the user performing the translation.
  • two or more users may sweep the card index at the same time. Additionally, in another example, the users can take newly written index cards from another user and sort them into the card index.
  • the associating is performed by entering the retrieved translation elements into a table.
  • the use of tables simplifies the processing of the text elements and enables the developers to use database management systems (DBMS) to realize the example in a computer.
  • the use of tables further simplifies the access of retrieved translation elements, if they are retrieved from a database.
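  • The pretranslation table held in a DBMS can be sketched as follows (a minimal in-memory sketch using SQLite; the table columns and sample entries are illustrative assumptions, not the patent's actual schema):

```python
import sqlite3

# In-memory sketch of a pretranslation table managed by a DBMS.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pretranslation (
    source TEXT, proposal TEXT, validated INTEGER DEFAULT 0)""")

def propose(source: str, proposal: str) -> None:
    """Enter a retrieved translation element into the table."""
    conn.execute("INSERT INTO pretranslation (source, proposal) VALUES (?, ?)",
                 (source, proposal))

def validate(source: str, proposal: str) -> None:
    """User accepts a proposal; the entry is marked as validated."""
    conn.execute("UPDATE pretranslation SET validated = 1 "
                 "WHERE source = ? AND proposal = ?", (source, proposal))

propose("File", "Datei")
validate("File", "Datei")
rows = conn.execute("SELECT source, proposal, validated "
                    "FROM pretranslation").fetchall()
print(rows)  # [('File', 'Datei', 1)]
```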
  • the user may accept or discard entries in the pretranslation table by interaction.
  • the interaction with a user has various advantages. First, a user may decide for himself if he wants to accept the proposed entry. Second, if multiple entries are proposed the user can select the best one, or may select one of the proposed lower rated translation suggestions and adapt it to the present text to be translated. The user may choose whether he wants to select one of the proposed translation elements in the pre-translation table, or whether he wants to translate the text element manually.
  • the translation method may include updating the dictionary database with the validated translation element.
  • This extension may be executed automatically (e.g., by automatically updating the dictionary database with the validation of a translation element).
  • the updating of the database may be executed according to different procedures. For example, the database update may be executed by transferring all validated translation elements from the user device to the database, so that the database itself may determine whether that validated translation element is already stored, or is to be stored as a new translation element. As another example, the database may be updated by transferring only newly generated database entries to the database.
  • a user, a user device, or the database checks whether a validated text element/translation element pair is already present in the database, to prevent the accumulation of a large number of similar or identical entries.
  • An additional validation process initiated from a second translator may be performed to update the translation database to prevent invalid translation elements from being dragged into the translation database.
  • the second validation may easily be implemented, as the translator for the second validation has only to determine whether a translation element matches the respective text element or not. It may also be possible to discard certain translation proposals as invalid by a user input, to prevent the system from offering this translation element a second time. This feature may be embodied as a “negative key ID”-entry in a predetermined column of the translation table.
  • An index may be generated in an additional step indicating the grade of conformity of an element of the text and of the dictionary database. This index is useful, if the text element to be translated and the text entries stored in the database differ only slightly, so that a translator may easily derive a correct translation from a slightly different database entry.
  • matches in which an untranslated text element and a stored translated text element agree only approximately may be described as fuzzy matches.
  • the generated index may be described as a “fuzziness” index of the matching element.
  • the number of entries in the pretranslation table of one text element is related to the grade of matching.
  • the number of matching translations to be retrieved from the translation database has not been limited in the preceding discussion, so more than one matching entry of the database may be retrieved. If more than one entry is found, the retrieved entries may be limited to those exceeding a predetermined value of the matching grade index. Alternatively, the number of retrieved database entries may be capped at a predetermined count. Both limits prevent a human translator from having to process a large number of translation proposals.
  • the number of translation proposals may be limited by a predetermined value (e.g., of the matching index). If the predetermined matching index is set to zero, the user in the above example would retrieve the whole database for each source text element in the table.
  • a translation entry of the source language text element entry is accepted as a valid translation automatically, if the index indicates an exact match of the untranslated text element with the source language text element entry.
  • This option automates the translation support method described previously to an automatic translator for text elements with exactly matching database entries.
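  • The fuzziness index, the proposal limit, and the exact-match auto-acceptance can be sketched together (an illustrative Python sketch using a simple sequence-similarity ratio as the matching grade index; the dictionary entries and threshold values are assumptions):

```python
from difflib import SequenceMatcher

# Toy sentence-based dictionary; entries are illustrative assumptions.
DB = {
    "The file could not be saved.": "Die Datei konnte nicht gespeichert werden.",
    "The file was saved.": "Die Datei wurde gespeichert.",
}

def proposals(source: str, min_index: float = 0.6, limit: int = 3):
    """Return (fuzziness index, entry, translation) tuples for a source element.

    Entries below min_index are dropped and at most `limit` proposals are
    returned, so the translator never faces an unbounded list. An index of
    1.0 is an exact match, which could be validated automatically.
    """
    scored = [
        (SequenceMatcher(None, source, entry).ratio(), entry, translation)
        for entry, translation in DB.items()
    ]
    scored = [s for s in scored if s[0] >= min_index]
    return sorted(scored, reverse=True)[:limit]

for index, entry, translation in proposals("The file could not be saved."):
    auto = " (exact match, auto-validated)" if index == 1.0 else ""
    print(f"{index:.2f} {translation}{auto}")
```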
  • This automatic translation method may economize the translator, if the texts to be translated are limited to a small lingual field combined with a very large database.
  • Very large databases suggest a centralized use of the translation method in computer networks such as local area networks or the Internet. This enables fast growth of a very large universal translation database, as any network user can access the database for translation tasks and improves it with each accepted translation.
  • the entries may be marked as authorized by the translators of the provider.
  • the method may include marking translated or untranslated or partially translated text elements.
  • the markings may be used to indicate the status of the translation, distinguishing between different grades of translations (e.g., fully, partially, or untranslated).
  • the markings may be incorporated in the text, or may be added in a separate column into the translation table.
  • the markings may be used to indicate whether the specific table entry has been translated by a translator, or is an automatically generated proposal.
  • the markings are especially useful if the front-end used by the translator depicts the table entries as conventional text without a visible table structure.
  • the markings may be colored text, colored background, bold or oblique depiction of the text, bold or colored tables or frames.
  • Markings of the table entries may indicate a count of the number of times this database entry has been used or has been accepted by a translator.
  • the markings may indicate the differences between the translation suggestion and the actual text, to simplify the work of the translator.
  • the marking may indicate a single word differing in the database entry and in the translation.
  • the marking may indicate the quality of the suggested translation.
  • the marking may indicate the similarity to pre-translated database entries, so that the user may recognize whether the user selected an unusual grammar or word translation.
  • a marking may be applied to the source text, indicating a grammar structure, so that the main clause is marked with one color and the subordinate clauses in another color. Additionally, the relation of subordinate clauses to the words they modify in the main clause may be indicated.
  • Single text elements may be stored in any logical form in the database.
  • a text may be stored as a tuple of key identification numbers of sentences, and/or the texts and sentences may be stored as a tuple of key identification numbers of words and/or grammar references.
  • the texts, words and/or grammar references may be stored as single texts in the database.
  • the various text elements described above may be combined in a single translation method.
  • the database is first searched for text sections, then for sentences and finally for words.
  • the three searches may be executed successively with or without intermediate user interactions.
  • the user first checks the suggested text sections; the computer then searches for translations of sentences, which are checked by the user, and finally searches for single words, which are also checked by the user.
  • the computer may successively check the three hierarchic levels of text elements, preferably only continuing in a lower hierarchic level if no suggestion has been found in the higher hierarchic level, leading to a mix of suggested text section-, sentence- and word translations.
  • a front-end of the translation support program may mark the hierarchic level of the suggested translation.
  • This technique may be compared with cutting down a grove: you can first cut down all trees and then cut all branches, or you can cut down one tree, cut its branches, cut down the next tree, and so on.
  • Accordingly, the translation program may first compare all text sections with the database, then all sentences, and then all words. Alternatively, it may compare one text section at a time; if that text section cannot be found in the database, check the database for all sentences in that text section; and then search the database for all words of the sentences that could not be found.
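  • The hierarchic descent from text sections to sentences to single words can be sketched as follows (an illustrative Python sketch; the three toy dictionaries and the sentence-splitting rule are assumptions for illustration only):

```python
# Hypothetical three-level dictionary: text sections, sentences, words.
SECTIONS = {}  # no whole-section matches in this toy example
SENTENCES = {"Save the file.": "Speichern Sie die Datei."}
WORDS = {"Open": "Öffnen", "the": "die", "file": "Datei"}

def translate_section(section: str):
    """Check the three hierarchic levels, only descending to a lower
    level when the higher level yields no suggestion. Each suggestion
    is tagged with its hierarchic level (1=section, 2=sentence, 3=words)."""
    if section in SECTIONS:                        # level 1: whole section
        return [(1, SECTIONS[section])]
    suggestions = []
    for sentence in section.split(". "):           # naive sentence split
        sentence = sentence if sentence.endswith(".") else sentence + "."
        if sentence in SENTENCES:                  # level 2: sentences
            suggestions.append((2, SENTENCES[sentence]))
        else:                                      # level 3: single words
            words = [WORDS.get(w.strip("."), w) for w in sentence.split()]
            suggestions.append((3, " ".join(words)))
    return suggestions

print(translate_section("Save the file. Open the file."))
```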
  • In this way the search for text sections or for words may be economized. With these combined features a translator can easily utilize a database to generate a pretranslation, relieving the user of manually searching paper dictionaries and grammar books.
  • the method further includes processing text elements and translated text elements stored in the database.
  • the processing may include a preprocessing technique to transfer an arbitrary source text to a database table, including such steps as converting different text files to a file format used in the database to simplify the retrieval of matching database entries.
  • translated text in the database may easily be transformed to an arbitrary file format.
  • the transfer of the text to a table requires splitting the source text into single text elements.
  • the processing may include a post-processing technique (e.g., by applying statistics to extract translation rules). By applying statistics to the behavior pattern of a user, a translation device may be enabled to translate sentences automatically.
  • the method may include analyzing the user input on particular difference patterns between source text and database entries, enabling generation of translation suggestions assembled from database entries and user input patterns.
  • the method may further include transferring operations texts to and from a database, affording useful options such as simplification of the pretranslation of the source code text, as the source code text may be entered in the format of text elements in a database compatible form such as a table.
  • the method may further include sorting the text elements.
  • a sorting operation could be performed prior to or after the generation of translation suggestions.
  • a user may sort the text elements such as the text elements of the source text into alphabetic order or sort the source text for certain “keywords”.
  • a special application of a sorting operation may be sorting the sentences according to grammar structures or the length of sentences or the number of words or characters used in a sentence.
  • sorting grammar structures the method may be used to simplify the translation and the use of the method, as the translator starts with simple grammar structures working himself/herself through to the complex grammar structures. This enables a user to practice with simple sentences to prepare for complex sentences.
  • the user may adapt the order of the sorting by transferring a text element to the end of a sequence in the ordering.
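  • The sorting operation can be sketched as below (an illustrative Python sketch that approximates "simple to complex" by word count and character length, as a stand-in for a full grammar-structure analysis; the sample sentences are hypothetical):

```python
def sort_for_translation(sentences):
    """Sort source sentences from simple to complex, approximated here
    by word count and then by character length."""
    return sorted(sentences, key=lambda s: (len(s.split()), len(s)))

text = [
    "The file, which you modified yesterday, could not be saved.",
    "Save the file.",
    "Saving failed.",
]
# The translator starts with the simplest sentence and works up.
for sentence in sort_for_translation(text):
    print(sentence)
```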
  • An exemplary software tool includes program code portions for carrying out the steps of the methods described above, when the program is run on a computer.
  • the software tool may be implemented in a database management system (DBMS) to offer the user additional translation features.
  • the software tool may be implemented in a translation program to offer a combination of the features from conventional features and features according to the examples described above.
  • the combination of a conventional translation program and a program according to these examples enables a user to use an intelligent and adaptive translation program, the performance of which increases with the number of translated sentences.
  • a combination of a conventional translation program and an example as described above may utilize an additional grammar filter, to check the source text and the user input to prevent errors in the translation, and to improve the quality of a translation by marking the translated passage in the text with an “equivocal source text”-mark.
  • a dictionary database may contain the data of a sentence based dictionary.
  • the database especially a database containing a sentence based translation dictionary may be applied in translation devices, or to extend the capabilities of an existing conventional translation device.
  • the system may include a sentence based dictionary for translation purposes.
  • the sentence based dictionary is different from conventional dictionaries in that the translated units are not single words but whole sentences.
  • Such a dictionary, if printed as a conventional book, may be larger than the “Encyclopedia Britannica” and may appear a bit bulky, but it would require only a single access per sentence, without the need to check grammar.
  • the dictionary may be incorporated in an electronic book, simplifying the access, reducing the weight and improving the user interface of the dictionary.
  • the electronic dictionary may include a scanner for simplified source language text input.
  • Another example provides a network system for enabling and/or supporting at least one user to localize software by translating texts in the software by means of a database.
  • the network system includes at least one of the translation or software localization apparatuses described previously, and at least one translation database for exchanging data with the at least one apparatus.
  • the software to be localized may be transferred to the apparatuses via the network.
  • FIG. 9 a depicts a flowchart of exemplary steps for translating strings.
  • The flowchart illustrates a method for localizing software by changing, adapting or translating the language in the source code of the software (e.g., by translation).
  • the method includes extracting text elements from the source code of the software (step 901 ); searching untranslated elements of the extracted text elements (step 903 ); retrieving translation elements for each untranslated text element (step 905 ); and associating the untranslated elements of the source code with the retrieved translation elements (step 909 ).
  • the translation elements are retrieved from a database 907 .
  • the associated untranslated elements and the translation elements can then be transferred to a user interface to be displayed on a screen 915 .
  • This basic method for translating texts can be extended to the method described below.
  • FIG. 9 b shows a block diagram illustrating an exemplary apparatus for localizing software by changing, adapting or translating the language in a source code of software (e.g., by translation).
  • the apparatus 920 includes a component for extracting 922 text elements from source code 930 of the software.
  • the component for extracting 922 may include a data processing device or a program module executed at a data processing device. Moreover, the component for extraction 922 may be realized by a code module containing instructions for extracting.
  • the text elements may be extracted from the source code of the software according to the step 901 shown in FIG. 9 a.
  • the apparatus 920 includes a component for searching 924 untranslated elements of the extracted text elements.
  • the component for searching 924 includes a data processing device or a program module executed at a processing device.
  • the component for searching 924 may be realized by a code module containing instructions for searching.
  • the untranslated elements of the extracted text elements may be searched according to step 903 shown in FIG. 9 a.
  • the apparatus 920 includes a component for retrieving 926 translation elements for each untranslated text element.
  • the component for retrieving 926 may include a data processing device or a program module executed at a processing device.
  • the component for retrieving 926 may include a code module containing instructions for retrieving.
  • the translation elements are retrieved from a database 907 .
  • the translation elements are retrieved for each untranslated text element according to step 905 shown in FIG. 9 a.
  • the apparatus 920 includes a component for associating 928 the untranslated elements of the source code with the retrieved translation elements.
  • the component for associating 928 includes a data processing device or a program module executed at a data processing device.
  • the component for associating 928 includes a code module containing instructions for associating. Untranslated elements of the source code are associated with the retrieved translation elements according to step 909 shown in FIG. 9 a.
  • FIG. 10 depicts a flowchart of an exemplary software localization and related text translation method.
  • the method is divided by a dotted line separating a branch 1002 to 1014 describing the localization of the software from a branch 1022 to 1032 describing the translation of software related texts.
  • the method depicted in FIG. 10 starts with a start element 1000 .
  • the start element presumes the presence of software source code and/or a software related text.
  • both branches start with an extraction step 1002 , 1022 to extract text elements from the source code and/or from software related text.
  • the extracted text elements are searched for untranslated text elements (steps 1004 , 1024 ).
  • translation elements are retrieved for each untranslated text element from a database 1050 .
  • Both branches may access the same database 1050 to retrieve the translation elements.
  • Accessing the same database 1050 provides the advantage that both translations are based on the same set of keyword translations, thus avoiding confusing results such as the manual describing an icon as “Memory” while the same icon in the user interface of the software is labeled “Storage”.
  • this method minimizes the occurrence of inconsistencies between the software and the manuals by using the same unambiguous expressions for describing the same notions.
  • the steps 1008 , 1028 describe the association of the text elements and the translation elements. As described below, this may include the use of pre-translation tables as depicted in the FIGS. 15 a , 15 b , 16 a and 16 b . It is to be noted that the software source code translation and the software related text translation may be executed with the same or with different translation tables.
  • the steps 1010 , 1030 describe a validation of translation elements. The validation may be executed automatically (or manually by user input). If none of the proposed translation elements can be validated, the method requests a user input of a valid translation element, in step 1016 . With the user input of a valid translation, the method can access a new pair of translation elements, that may be utilized in the translation of the other branch.
  • This updating results in a growing database 1050 , with an increasing number of entries and an increasing precision of the translation proposals.
  • One possible updating procedure is depicted in FIG. 13. It should further be noted that the method of both of the branches may be executed in an interleaving fashion.
  • the steps of the flowchart may be executed by first localizing the software, and then translating the software related texts, wherein the translation of the texts is determined by the translations previously used for the localization.
  • the steps of the flowchart may also be executed by first translating the software related texts, and then localizing the software, wherein the localization of the software is determined by the translation previously used for translating the software related texts.
  • the steps of the flowchart may also be executed by simultaneously translating the software related texts and localizing the software, wherein the first validation of a translation element determines the following translations. This example enables two teams to work simultaneously on localizing software and its manuals, economizing the operations of determining a translation catalogue of keywords.
  • If all translation elements are valid, they are merged (in the first branch) with the source code of the software (step 1012 ), and the source code of the software is compiled (or assembled) into a localized software version (step 1014 ). If all translation elements are valid in the second branch, they are exchanged with the related text elements of the software related text (step 1032 ). Then the localization of the software is terminated (step 1090 ).
  • FIG. 11 a depicts a flowchart of the steps of an exemplary method for translating elements of source language text.
  • the method includes searching untranslated elements of the source language text (step 1102 ), retrieving translation elements for each untranslated text element (step 1104 ), and associating the searched untranslated elements of the source language with the retrieved translation elements (step 1108 ).
  • the translation elements are retrieved from a storage unit 1106 .
  • the associated untranslated elements and the translation elements are then transferred to a user interface to be displayed on a screen 1110 .
  • This basic method for translating texts can be extended to the method described in FIG. 12.
  • the storage unit 1106 may have been previously provided with different keyword translations from a software localization, and can be used to translate manuals and help files.
  • FIG. 11 b depicts a block diagram illustrating an example of an apparatus for translating elements of a source language text.
  • the apparatus 1120 includes a component for searching 1122 untranslated elements of the software related text 1130 .
  • the component for searching 1122 may include a data processing device or a program module executed at a data processing device.
  • the component for searching 1122 may be realized by a code module containing instructions for searching. Untranslated elements of the source language text may be searched according to step 1102 shown in FIG. 11 a.
  • the apparatus 1120 includes a component for retrieving 1124 translation elements for each untranslated text element.
  • the component for retrieving 1124 may include a data processing device or a program module executed at a processing device.
  • the component for retrieving 1124 may include a code module containing instructions for retrieving.
  • Translation elements for each untranslated text element may be retrieved from a database 1106 according to step 1104 shown in FIG. 11 a.
  • the apparatus 1120 includes a component for associating 1126 the untranslated elements of the source code with the retrieved translation elements.
  • the component for associating 1126 may include a data processing device or a program module executed at a data processing device.
  • the component for associating 1126 may be realized by a code module containing instructions for associating.
  • the searched untranslated elements of the source language may be associated with the retrieved translation elements according to step 1108 shown in FIG. 11 a.
  • FIG. 12 is a flowchart depicting steps of an exemplary method of translating text.
  • the start box 1210 of the flowchart presumes the presence of a source language text, a database with stored source language text elements and related translated target text elements, and presumes that the source and the target language are already entered.
  • a system program or a user selects the type of text element which the present source language text has to be split into (step 1212 ).
  • the text element includes any language or text unit (e.g., text sections, sentences, or words).
  • the text elements are entered in fields of a translation table (see FIG. 14).
  • an untranslated source text element is searched (step 1214 ), followed by checking the database for a matching source text element (step 1216 ). If a matching text element is not found, the text element remains untranslated and a respective field in the pretranslation table remains empty or is marked with an “untranslatable” mark (step 1224 ). Then, the system searches for the next untranslated source text element (step 1222 ).
  • the system copies the related translation to a pretranslation field in the translation table (step 1218 ).
  • the text elements stored in the database can be exact matches or fuzzy matches. An exact match results when the source text element and the database text element are identical. A fuzzy match results when the source text element and the database text element differ only in one or a few words or punctuation marks. Fuzzy matches can be marked with an index indicating the fuzziness of the match (i.e., the quality of the match). In both cases the related translations are copied to a pretranslation table.
  • the pretranslation table entries may be sorted (e.g., by their matching quality) to minimize the time a user spends searching for the best match. More than one translation of a single entry is possible, as translations of single words can be equivocal, and so the translations of whole sentences can also be equivocal.
  • the method searches for the next untranslated source text element (step 1222 ). If a next text element is found, control returns to step 1216 , to loop steps 1216 to 1222 , until a next text element is not found.
  • the method determines whether the actual text element is the smallest text element available (step 1226 ). If the actual text element is not the smallest available, a smaller text element is selected (step 1227 ) and the control returns to step 1214 , to loop steps 1216 to 1224 , until no next smaller text element is found. If no smaller text element is found the translation is terminated (step 1228 ).
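  • The loop of steps 1212 to 1228 may be sketched as follows; this is a hedged illustration in which the splitting rules (sentences, then words) and the database layout are simplifying assumptions.

```python
# Hedged sketch of the pretranslation loop of FIG. 12: try the larger
# text elements (sentences) first, then retry the untranslated remainder
# with smaller text elements (words). Names and splitting rules are
# illustrative assumptions, not taken from the patent.

def pretranslate(text, database):
    table = {}
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    untranslated = []
    for sentence in sentences:                    # steps 1214-1222
        match = database.get(sentence)            # step 1216: check database
        if match is None:
            table[sentence] = "untranslatable"    # step 1224: mark the field
            untranslated.append(sentence)
        else:
            table[sentence] = match               # step 1218: copy translation
    for sentence in untranslated:                 # step 1227: smaller elements
        for word in sentence.split():
            table[word] = database.get(word, "untranslatable")
    return table                                  # step 1228: terminate

db = {"hello world": "hallo welt", "cat": "katze"}
print(pretranslate("hello world. cat runs.", db))
```

The sentence "hello world" is translated as a whole; "cat runs" has no sentence-level match, so its words are looked up individually in the second pass.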
  • FIG. 13 depicts a flowchart of an exemplary dictionary database updating method.
  • the flowchart represents a detailed execution example of the arrow between the element 1016 and the database 1050 in FIG. 10.
  • the element 1016 is depicted as used in FIG. 10.
  • the element is located between the steps 1010 , 1030 and the steps 1012 , 1032 . These connections are indicated with the two arrows and the respective reference numerals.
  • the validating step 1016 leads to a new validated pair of a text element and a respective translation element.
  • the two elements are provided with a preliminary key identification number (key ID) (step 1300 ).
  • the pair of elements is transferred to the database 1050 (step 1302 ).
  • the database analyzes both elements to generate a new key ID (in step 1304 ).
  • the text element and the translation element are stored (step 1306 ). With each new entry stored in database 1050 , the translation database grows, enabling the database to improve the grade of matching of retrieved translation elements.
  • the preliminary key ID may be unnecessary if the database itself generates a key ID. Further, the steps of updating the database may only include the updating of statistical information. The database may only be informed about the number indicating how often a known translation element has been used to translate a certain text element. The statistical information may increase the efficiency to provide a translation program with an improved standard database.
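  • The update of FIG. 13 may be sketched as follows; the class and method names are assumptions, and the sketch also illustrates the alternative in which a known pair triggers only a statistical update.

```python
# Illustrative sketch of the database update of FIG. 13 (steps 1300-1306).
# A preliminary key ID (step 1300) is unnecessary here, because the
# database generates the key ID itself (step 1304). Known pairs trigger
# only the statistical update mentioned in the text.

import itertools

class DictionaryDatabase:
    def __init__(self):
        self._entries = {}                 # key ID -> (text, translation)
        self._usage = {}                   # key ID -> times the pair was used
        self._next_id = itertools.count(1)

    def store_pair(self, text_element, translation_element):
        for key, pair in self._entries.items():
            if pair == (text_element, translation_element):
                self._usage[key] += 1      # statistical update only
                return key
        key = next(self._next_id)          # step 1304: generate a new key ID
        self._entries[key] = (text_element, translation_element)
        self._usage[key] = 1               # step 1306: store the pair
        return key

db = DictionaryDatabase()
k1 = db.store_pair("table", "Tisch")
k2 = db.store_pair("table", "Tisch")   # known pair: only statistics change
print(k1, k2)   # the same key ID twice
```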
  • FIG. 14 depicts a section of a translation table for use in the example discussed above for a translation method.
  • the first three columns of the translation table as shown represent the minimum required information for a translation table.
  • the table as shown contains four columns: In the first column the table contains a key identification (ID), to identify the entry in the translation table.
  • ID may be a single key, or a main ID with attached sub-IDs.
  • the second column contains language IDs.
  • the language IDs indicate information about the language in use, and may be represented as in the present table by the characterizing numbers of the telephone area code of a particular country (e.g., “1” for U.S. English and “49” for German).
  • the language ID may be a single key, or a main ID with attached sub-IDs, indicating dialects.
  • the third column of the table contains text elements (words) in the different languages.
  • One problem with translations is that synonyms lead to equivocal translations.
  • the table may contain a fourth column, providing additional information about how to use the present word.
  • the fourth column may contain synonyms or background information of the table entry in the third column.
  • the different keywords may comprise different columns with different ID numbers (not shown). Different columns may contain the sorting index in the language of the word, for sorting the text element entries according to various sorting rules (e.g., alphabetic, number of characters, and grammar information). Additional columns can contain information related to the actual software number, to enable the users of the system to distinguish whether a translation is validated for the present software localization or has been validated for another software localization.
  • the table may further include different columns to relate the present entry to different other languages, so that only one table with n−1 additional columns is needed for each language, to provide translations between all n languages.
  • Instead of n×(n−1) tables to connect each language with every other, another possible table structure with single tables for each language and a central (e.g., n-dimensional) relation matrix (or tensor) would enable the method to use transitive translations. So if a text element has never been translated from a first language to a third language, the system may determine that the text element has already been validated in a translation to a second language, and has been validated in a translation from the second language to the third language. Thus, the system may be able to derive a translation proposal via the second language.
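  • The transitive translation idea may be sketched as follows; the flat dictionary of validated pairs and the function name are illustrative assumptions, and the language codes follow the telephone-area-code convention used in the table (1 for U.S. English, 49 for German, 44 assumed here for a pivot language).

```python
# Sketch of deriving a translation proposal via a second (pivot) language
# when no validated pair exists between the first and third language.
# The data layout is an assumption: validated pairs are keyed by
# (source language, target language, source text element).

def transitive_translation(pairs, text, src, dst):
    direct = pairs.get((src, dst, text))
    if direct is not None:
        return direct                     # a validated direct translation exists
    # search a pivot language validated in both directions
    for (s, mid, t), intermediate in pairs.items():
        if s == src and t == text:
            via = pairs.get((mid, dst, intermediate))
            if via is not None:
                return via                # proposal derived via the second language
    return None                           # no transitive path found

pairs = {(1, 44, "computer"): "ordinateur",
         (44, 49, "ordinateur"): "Computer"}
print(transitive_translation(pairs, "computer", 1, 49))  # derived via language 44
```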
  • the described examples may also utilize an object oriented database structure for the dictionary database.
  • the translation table may further include information about the origin of the text or the translation, if the software localization system supports translation specific keyword databases.
  • FIG. 15 a depicts three user interaction interfaces (“screenshots”) embodied on a computer device.
  • the first screenshot 1540 shows a computer program or device depicting all found matches down to a matching of 84%.
  • the screenshot 1540 comprises four columns.
  • the first column represents a key ID of the text element (sentence) in the text.
  • the second column represents the language ID of the text element entry.
  • the third column represents the database text element entries. To simplify the decision which of the pretranslation table entries matches the source text element most closely, the entries are shown in the source language, and therefore all entries in the language column are “1”.
  • the fourth column can represent two kinds of information. For a source text element the fourth column represents a status.
  • the status of the source text element in the first line and third column of screenshot 1540 is “untranslated”.
  • the fourth column represents a “Quality” value, indicating how exactly the source language entry (lines 2 to 5) in the pretranslation table matches the source text element (in line 1).
  • a_ste represents the number of characters of the source text element, and b_common represents the number of characters that the source text element and the source language database entry have in common and in sequence.
  • the differing characters are depicted in bold italics and are surrounded with a box. Other markings of the differences may be applied similarly.
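  • The quality value described above may be reconstructed as the ratio b_common / a_ste; the sketch below is a hedged approximation in which the characters "in common and in sequence" are taken from the matching blocks of Python's difflib, which may differ slightly from the exact matching used in the patent.

```python
# Hedged reconstruction of the matching quality value: the number of
# characters common to both strings and common in sequence (b_common),
# divided by the number of characters of the source text element (a_ste).
# difflib's matching blocks approximate "common and in sequence".

from difflib import SequenceMatcher

def match_quality(source_element, database_element):
    matcher = SequenceMatcher(None, source_element, database_element)
    b_common = sum(block.size for block in matcher.get_matching_blocks())
    a_ste = len(source_element)
    return round(100 * b_common / a_ste)

print(match_quality("The computer is standing on the table",
                    "The computer is standing on the table"))  # 100
print(match_quality("The computer is standing on the table",
                    "The printer is standing on the table"))
```

An identical database entry yields 100%, while a fuzzy match such as the second pair yields a lower percentage, matching the behavior shown in the screenshots.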
  • In screenshot 1540 the field in the third column, second line is surrounded by a bold line to indicate that this translation suggestion is marked. With a user input the user may select the surrounded translation suggestion by a “mouseclick”, an “enter”, or by contact with a touch screen.
  • the depicted table changes to table 1544 .
  • the table contains only one line: In the first field, the key ID of the source text element, in the second field the language ID, in the third field the translated text element, and finally in the fourth field the status indicating “fully translated”.
  • the first two columns of the tables 1540 , 1542 , 1544 may be economized. The markings may be different. The method described above may skip the screenshot 1542 , if the quality of the selected suggestion is 100%.
  • FIG. 15 b illustrates schematically a flow diagram according to the user interaction shown in FIG. 15 a .
  • the untranslated text may be the same as shown in FIG. 15 a and references to FIG. 15 a will be made to complete the flow diagram of FIG. 15 b .
  • FIG. 15 b illustrates the steps of the user interactive interface with respect to the example presented in FIG. 15 a.
  • an untranslated text may be processed.
  • the text may be “The computer is standing on the table”.
  • the untranslated text may be a text according to language “1”.
  • step S 3 the database may be accessed for retrieving translation elements for the untranslated text of the same language, herein all related translation elements of the language “1”.
  • the retrieving may include a matching step wherein the matching step may result in exact matches or in fuzzy matches.
  • step S 5 the retrieved related elements may be received by the computer executing the user interaction interface.
  • step S 6 the untranslated text and the translation related elements are sequenced in a list by the user interaction interface according to the list 1540 depicted in FIG. 15 a . All retrieved entries are listed according to their matching quality.
  • step S 7 the user may select an entry of the presented list 1540 shown in FIG. 15 a . Since the second list entry may have assigned a quality of 100%, the untranslated text and the translating related element match exactly. Due to the matching quality the user may select the second entry of the list, indicated by the bold surrounding lines according to the depicted list 1540 in FIG. 15 a.
  • step S 9 the translation of the translation related element selected in step S 7 may be retrieved from the database.
  • the respective key-ID of the selected translation related element may be used for retrieval.
  • the retrieved translation may be a translation of the selected translation related element into the language “49”.
  • step S 11 the translation of language “49” of the selected translation related element may be received by the user interactive interface from the database.
  • a list may be prepared presenting the untranslated text, the selected translation related element and the translation to the user desired language, herein a translation from the language “1” which is the language of the untranslated text to the language “49”.
  • the matching quality has been determined in step S 5 so that a new determination may be skipped.
  • the matching quality value of the translation related element may be assigned to the corresponding retrieved translation, since the above defined matching quality can be determined only between texts of the same language.
  • step S 12 the respective list comprising the key-ID, the language code, the corresponding element and the status or quality, respectively, may be presented in a list illustrated as list 1542 in FIG. 15 a.
  • step S 13 the user confirms the translation of the untranslated text by selecting the translation.
  • the selection may be indicated by bold surrounding lines which is shown in list 1542 in FIG. 15 a.
  • as the matching quality of the untranslated text and the translation related element retrieved from the database indicates a value of 100%, the translation is an exact translation.
  • the respective entry is available in the database such that no additional entry may have to be added to the translation database.
  • step S 14 the translation of the untranslated text has been done successfully and is finished.
  • the user interactive interface may present the next untranslated text for translation by the user and may start again with step S 1 .
  • FIG. 16 a and FIG. 16 b present an example of a non-exactly matching untranslated text involving the generation of a new database entry.
  • FIG. 16 a depicts another set of user interaction interfaces (“screenshots”).
  • FIG. 16 a shows an example of how the examples described above may be embodied on a computer device.
  • the first screenshot 1650 shows a screenshot of a computer program or device depicting all found matches in a matching range of 66% down to 61%.
  • the screenshot 1650 comprises four columns. As in FIG. 15 a , the first column represents a key ID of the text element (sentence) in the text.
  • the second column represents the language of the text element entry.
  • the third column represents the source text element and database text elements entries. To simplify the decision which of the pretranslation table entries match the source text element most closely, the pre-translation entries are shown in the source language, and therefore all entries in the language column are “1”.
  • the fourth column can represent two kinds of information.
  • the fourth column represents a status.
  • the status of the source text element in the screenshot 1650 is “untranslated”.
  • the fourth column represents a “Quality” value, indicating the quality of the match between the source language entry in the pretranslation table and the source text element.
  • the depicted percentage values are determined according to the same formula as in FIG. 15 a .
  • the formula used in FIG. 15 a may be modified by replacing a_ste with a_dte, representing the number of characters of the database text element. It should be noted that the formula is not limited to the use of characters, but can be applied to words and sentences. The formula may also comprise other terms using grammar structure related values to characterize the matching. Different from FIG. 15 a , the retrieved translation suggestions reach a maximum match of only 66%. The closest source text entry in the table is marked with a bold frame.
  • the system depicts the screenshot 1652 .
  • the screenshot 1652 depicts the source text element in the first line, the retrieved database text entry in the second line, and the translation of the database entry in the third line.
  • as the system can detect and mark the differences between the source text element and the database entry, the system also marks the differences in the translation of the database entry. All marked elements are depicted bold, italic, and with a small frame in the figure.
  • the user may edit the translation entry in the pretranslation table in an additional operation by keyboard input.
  • Advanced systems may utilize voice recognition algorithms.
  • the user accepted the proposed translation by a user input.
  • the screenshot 1654 represents a partially translated element in the pretranslation table.
  • the table 1654 may further depict the source text element and the retrieved database entry translation.
  • the table depicts the key ID “1” of the source text element.
  • in the second column the table depicts no entry, as a partially translated sentence is not related to a single language.
  • the entry in the second column can be “1”, or “1-49” as part of the text element remains untranslated.
  • the partially translated text is depicted, with a bold, italic underlined untranslated part depicted in capitals.
  • the untranslated sentence is surrounded by a dotted bold frame to indicate that the present state of the sentence requires additional processing.
  • the user may select another text element size to post-process the entry in the preprocessing table, or the user may first translate all other text elements and proceed then with a smaller text element.
  • the next screenshot 1656 depicts a second translation stage with smaller text elements.
  • the text elements are words and/or word/article combinations.
  • the present table depicts a newly generated key ID 127 in the first column, to distinguish the different stages of the translation algorithm.
  • the language ID has not been changed, as the sentence is still only partially translated, while in the second line the retrieved database entry is marked with a “1” and in the third line the translation suggestion is marked as the target language with a “49”.
  • the untranslated part of the sentence is marked as in table 1654 , together with its related database entry and its translation suggestion retrieved from a translation database.
  • the depicted table changes to table 1658 , containing only one line: In the first field, the key ID of the source text element (in this case the sentence), in the second field the language ID “49”, in the third field the translated text element, and finally in the fourth field the status indicating “fully translated”.
  • With the step of accepting the suggested translation proposal, the system generates a new database entry 1660 containing the source text element and the accepted translation.
  • the entries include key IDs and language IDs.
  • the new database entry is not necessarily depicted, as the storage operation can be executed automatically.
  • the translation database may include different 100% matching database entries, due to different equivocal translations.
  • FIG. 16 b illustrates schematically a flow diagram according to the user interaction shown in FIG. 16 a .
  • the untranslated text may be the same as shown in FIG. 16 a and references to FIG. 16 a will be made in order to complete the flow diagram of FIG. 16 b .
  • FIG. 16 b illustrates the operations of the user interactive interface with respect to the example presented in FIG. 16 a.
  • an untranslated text may be processed.
  • the text may be “The computer is standing on the table”.
  • the untranslated text may further include a text according to language “1”.
  • step S 22 the database may be accessed for retrieving translation elements for the untranslated text of the same language, herein all related translation elements of the language “1”.
  • the retrieving may include a matching step wherein the matching step may result in exact matches or in fuzzy matches.
  • step S 24 the retrieved related elements may be received by the computer executing the user interaction interface.
  • step S 25 the untranslated text and the translation related elements are sequenced in a list by the user interaction interface according to the list 1650 depicted in FIG. 16 a . All retrieved entries are listed according to their matching quality. The list may present the key-ID, the language of the untranslated text or the translation related elements, respectively, and the matching quality. The non-fitting text parts of the translation related elements in relation to the untranslated text may be indicated by italic characters.
  • step S 26 the user may select an entry of the presented list 1650 shown in FIG. 16 a .
  • no entry of the list shows a matching quality of 100% indicating that no exact match was retrieved from the database.
  • the second entry of list 1650 shown in FIG. 16 a may have a matching quality value of 66%.
  • the user may select the best matching entry, herein the second list entry which may be indicated by bold surrounding lines.
  • step S 28 the translation of the translation related element selected in step S 26 may be retrieved from the database.
  • the respective key-ID of the selected translation related element may be used.
  • the retrieved translation may be a translation of the selected translation related element to the language “49”.
  • step S 30 the translation of language “49” of the selected translation related element may be received by the user interactive interface from the database.
  • a list may be prepared presenting the untranslated text, the selected translation related element and the translation to the user desired language, herein a translation from the language “1” which is the language of the untranslated text to the language “49”.
  • the matching quality has been determined in step S 24 so that a new determination may be skipped.
  • the matching quality value of the translation related element may be assigned to the corresponding retrieved translation, since the above defined matching quality can be determined only between texts of the same language.
  • step S 31 the respective list including the key-ID, the language code, the corresponding element and the status or quality, respectively, may be presented in a list illustrated as list 1652 in FIG. 16 a .
  • the non-fitting text parts of the translation related elements in relation to the untranslated text may be indicated by italic characters.
  • step S 32 the user confirms the translation of the untranslated text by selecting the translation.
  • the selection may be indicated by bold surrounding lines which is shown in list 1652 in FIG. 16 a.
  • step S 33 as no exact matching translation related element was found in the database, a mixed language translation result may be presented to the user by the user interactive interface. This can be seen in list 1654 shown in FIG. 16 a .
  • the text “The computer” may have not been translated. However, the remaining part of the sentence may have been translated by the operations described above.
  • step S 34 the user may select the untranslated text part for further translation. This is shown in list 1654 depicted in FIG. 16 a.
  • the translation may be continued by attempting to translate an untranslated text part, defining this part as untranslated text and performing translation operations as described above.
  • the untranslated text “The computer” of the language “1” may be processed as described below.
  • a translation related element according to the untranslated text part may be retrieved from the database.
  • the retrieval may include a matching step wherein the matching step may result in exact matches or in fuzzy matches.
  • the translation related elements of the language “1” according to the untranslated text part are searched.
  • the retrieval of translation related elements with respect to the untranslated text part may only return a single matching translation related element.
  • the respective translation (language “49”) may be automatically retrieved from the database since no further translation related elements may be presented to the user for selecting.
  • step S 37 after the retrieval of the translation related element and the translation into the language “49” thereof a list of the partially translated text, the translation related element according to the untranslated text part and the translation may be prepared.
  • the matching quality value may be determined. The determination of the quality may be derived regarding the untranslated text part and the retrieved translation related element.
  • the translation may be assigned the same matching quality value.
  • a new key-ID may be assigned to the list entries. As indicated in list 1656 shown in FIG. 16 a , the key-ID may be “127”.
  • step S 38 the list of the partially untranslated text, the retrieved translation related element and the translation (language “49”) may be shown to the user.
  • the respective list can be seen in list 1656 shown in FIG. 16 a .
  • the partially translated text may be assigned no language ID. Status information of the partially translated text may indicate a percentage value according to the translated text part.
  • step S 39 the user may select the second list entry of the list 1656 shown in FIG. 16 a .
  • the selection may be indicated by bold surrounding lines.
  • step S 40 the combination of the two independent translation operations of the untranslated text of step S 20 may lead to the complete translation thereof.
  • the complete translation may be presented by the user interactive interface to the user.
  • the key-ID of the translation may show the former key-ID value of step S 31 .
  • the status information indicates a successful translation.
  • the respective list 1658 is shown in FIG. 16 a.
  • step S 42 as the untranslated text was not found in the database a new database entry may be generated.
  • the contents of the new database entry are shown in list 1660 of FIG. 16 a .
  • the key-ID may be given by an available key-ID of the database and may be generated automatically as a result of the generation of the database entry.
  • the user interactive interface may present the next untranslated text for translation by the user and may start again with step S 20 .
  • a new entry may have been generated for the untranslated text presented in step S 20 . Further occurrence of this untranslated text may lead to the retrieval of a translation related element of a matching quality value of 100%.
  • the operations of the user interactive interface for exact matches are described above in detail in FIG. 15 a and FIG. 15 b.
  • FIG. 17 depicts an apparatus for translating texts according to another example by means of a database and a database management system, comprising a module 1774 to communicate with a database 1778 , a module 1775 for exchanging data, and a processing device 1772 for retrieving elements from a database.
  • the apparatus 1770 is connected via a communication module 1774 to a network 1776 . Via the network 1776 the apparatus 1770 can communicate with other devices (not shown) to exchange data (e.g., the server 201 of FIG. 2).
  • the network 1776 can be the internet, or any other wide area, or local area network.
  • the communication module 1774 is connected to a Central Processing Unit (CPU) 1772 , to execute the exchange and filtering of data.
  • the CPU 1772 is connected to a data exchange interface 1775 (e.g., a user interaction interface).
  • the data exchange interface 1775 can be connected to a keyboard and a display, or via a network to a terminal device to interact with a user.
  • the database 1778 contains the translation data to be accessed, via the communication module 1774 and the network 1776 .
  • To be able to translate texts correctly, the CPU 1772 must be able to receive the source text and to relate the source text to the data stored in the database 1778 .
  • the source text may be received via the data exchange interface 1775 , or via the communication module 1774 from the network 1776 .
  • the user interaction device 1773 and the database 1778 may be incorporated in a single device which may additionally be a portable device (see FIG. 18).
  • FIG. 18 depicts a sentence based electronic translation apparatus.
  • the translation apparatus is embodied as an interactive electronic dictionary 1880 .
  • the electronic dictionary can be an electronic book including a display 1881 for depicting a pretranslation table containing a source text, translation proposals, a translated text and translation related information.
  • the first column 1885 indicates a key ID, to identify every text element in the source text.
  • the second column 1886 contains source text elements and translation proposals.
  • the third column 1887 contains matching indices to indicate the similarity between the source text entry and the source text element.
  • the fourth column 1888 contains status symbols to indicate whether the source text element requires further processing or translating.
  • the user can interact with the dictionary by a scroll wheel 1883 , keys, a scroll roll 1884 and thumb index menu keys 1890 .
  • a user can pre-select a character or source text element with the thumb index 1890 , scroll the alphabet or the source text elements with the scroll wheel 1883 , and select translation proposals with the keys and the scroll roll 1884 .
  • the electronic dictionary can further contain an interface 1889 to communicate with a network to update its database or exchange data with another database.
  • the dictionary may further upload new source texts via the interface 1889 , or print translated texts on a printer connected to the interface.
  • the electronic dictionary (ED) may further comprise a built in scanner to recognize source texts autonomously.
  • the dictionary may further comprise an extension slot, to connect the ED with handheld scanners, mobile communication devices, or wireless networks.
  • the ED may be used as a translation device for customs officers, to translate foreign bills of lading.
  • the ED may also be used as a conventional electronic book.
  • An electronic book application has the advantage that users may read a book in its original language, with the option to translate single sentences.
  • FIG. 19 depicts a network system with an apparatus as described in FIG. 17 or 18 and at least one database 1917 .
  • the apparatus can be a server 1900 or client devices 1911 , 1912 , 1914 , 1915 , 1922 , or 1931 .
  • the database may include the database 1917 , or any other database integrated in one of the client devices 1911 - 1915 , 1922 , 1931 or the server 1900 .
  • the client devices are connected via networks 1910 , 1930 , 1920 to the server 1900 . It is assumed that the server 1900 contains an internal database (not shown) or can at least access an external database 1917 or a database in one of the client devices 1911 - 1915 , 1922 , 1931 .
  • the server is directly connected to two networks, a local area network (LAN) 1930 and a wide area network (WAN) 1910 .
  • the WAN 1910 is connected to a personal computer 1911 , a personal digital assistant (PDA) 1912 , a browser 1914 , laptop 1915 , a database 1917 and a gateway 1918 to a communication network 1920 .
  • the gateway 1918 indirectly connects a communication device 1922 (e.g., a mobile telephone) with computation abilities with the server 1900 .
  • All client devices may comprise user interfaces, CPUs and memories or databases.
  • the client devices 1911 - 1915 , 1922 , 1931 and/or the server 1900 are enabled to access a translation database.
  • the database 1917 may include the translation database and offer the client devices 1911 - 1915 , 1922 , 1931 access to the stored translation elements.
  • each client device 1911 - 1915 , 1922 , 1931 includes its own translation database enabling each device to translate text elements received from an internal or remote device.
  • the method described above is executable as long as a translation database is accessible (e.g., via the networks 1910 , 1920 , 1930 ).
  • the client devices 1911 - 1915 , 1922 , 1931 themselves include a translation database.
  • the client devices are autonomous software-localizing devices, connected to the network only to access new software to be localized.
  • a very complex case involves accessing a database 1917 via a server 1900 which in turn accesses distinct databases to retrieve text elements to be translated.
  • the data source may be able to use relational operations between tables.
  • the DBMS can store the text elements in a directory table and the translation elements in a related user specific translation table.
  • the data source of the DBMS may utilize relational algebra to interconnect the text elements with the translation elements.
  • the DBMS user interface enables the user to translate text elements by accessing pre-translation tables.
  • the translation elements can be selected and/or validated in each field of the pre-translation table by a user input.
  • the user input can be a keyboard input, a mouse input and the like.
  • the validated text elements and validated translation elements are stored in the database, so that they are automatically retrieved the next time the user accesses the database with a DBMS to retrieve the text element.
  • a computer-readable medium may be provided having a program embodied thereon, where the program is to make a computer or system of data processing devices execute functions or steps of the features and elements of the above described examples.
  • a computer-readable medium may include a magnetic or optical or other tangible medium on which a program is embodied, but can also be a signal (e.g., analog or digital, electromagnetic or optical) in which the program is embodied for transmission.
  • a computer program product may be provided comprising the computer-readable medium.
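The directory-table/translation-table arrangement described in the DBMS bullets above can be sketched with a relational database. This is a minimal sketch using an in-memory SQLite database; the table and column names are illustrative assumptions, not taken from the disclosure.

```python
import sqlite3

# Hypothetical minimal schema: a directory table of text elements and a
# related user-specific translation table, as described above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE directory (
        text_id   INTEGER PRIMARY KEY,
        source    TEXT NOT NULL            -- text element in the base language
    );
    CREATE TABLE user_translation (
        text_id   INTEGER REFERENCES directory(text_id),
        language  TEXT NOT NULL,           -- e.g. 'de', 'fr'
        target    TEXT NOT NULL,           -- translation element
        validated INTEGER DEFAULT 0        -- set to 1 once the user confirms
    );
""")
conn.execute("INSERT INTO directory VALUES (1, 'Cancel')")
conn.execute("INSERT INTO user_translation VALUES (1, 'de', 'Abbrechen', 1)")

# A relational join interconnects text elements with translation elements;
# only validated rows are retrieved automatically on the next access.
row = conn.execute("""
    SELECT d.source, t.target FROM directory d
    JOIN user_translation t ON t.text_id = d.text_id
    WHERE d.source = ? AND t.language = ? AND t.validated = 1
""", ("Cancel", "de")).fetchone()
print(row)  # ('Cancel', 'Abbrechen')
```

Keeping translations in a table related to the directory table, rather than in the image files themselves, is what lets a previously validated translation be reused automatically on the next lookup.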

Abstract

An improved translation system is provided that allows users to view screen objects of a user interface in different languages even though the screen object's visual elements are stored in an immutable image file (e.g., .gif or bitmap). To accomplish this, a method and system automatically generate a number of immutable image files, each containing text in different languages. These files are then made available to users so that they may select a preferred language in which to view the screen objects of the user interface.

Description

    CROSS REFERENCES TO RELATED APPLICATIONS
  • This application is related to, and claims priority to, European Patent Application No. 01 128 179.7, commonly owned, and entitled “Method and Apparatus for Localizing Software,” filed on Nov. 27, 2001, and to European Patent Application No. 02 010 132.5, filed on May 10, 2002, commonly owned, and entitled “Automatic Image-button Creation Process,” and to European Patent Application No. 02 010 133.3, filed on May 10, 2002, commonly owned, and entitled “Generation of Localized Software Applications,” all of which are hereby incorporated by reference herein in their entirety. Additionally, this application is related to U.S. patent application Attorney Docket No. 30014200-1050, commonly owned, and entitled “Generation of Localized Software Applications,” and which is being filed concurrently, and is hereby incorporated by reference herein in its entirety. [0001]
  • BACKGROUND
  • 1. Field of the Invention [0002]
  • The present invention relates generally to data processing systems, and, more particularly, to data processing systems for automatically generating multilingual immutable image files containing text. [0003]
  • 2. Background of the Invention [0004]
  • Data processing devices have become valuable assistants in a rapidly expanding number of fields, where access to, and/or processing of data is necessary. Applications for data processing devices range from office applications such as text processing, spreadsheet processing, and graphics applications to personal applications such as e-mail services, personal communication services, entertainment applications, banking applications, purchasing or sales applications, and information services. Data processing devices for executing such applications include any type of computing device, such as multipurpose data processing devices including desktop or personal computers, laptop, palmtop computing devices, personal digital assistants (PDAs), and mobile communication devices such as mobile telephones or mobile communicators. [0005]
  • Many such applications are currently implemented using communication networks including individual data processing devices positioned at arbitrary locations. Thus, a data processing device used to provide a service, such as a server maintaining data which can be accessed through a communication network, may be located at a first location, while another data processing device for obtaining a service, such as a client device operated by a user for accessing and manipulating the data, may be located at a second location remote from the first location. [0006]
  • Servers which provide services are frequently configured to be accessed by more than one client at a time. For example, a server application may enable users at any location and from virtually any data processing device to access personal data maintained at a server device or a network of server devices. Accordingly, users of the client devices may be located in different geographic areas that support different languages, or may individually have different language preferences. The term “user” as used herein refers to a human user, software, hardware, or any other entity using the system. [0007]
  • For example, a user operating a client device may connect to the server device or network of server devices using personal identification information to access data maintained at the server device or network of server devices, such as an e-mail application or a text document. The server in turn may provide information to be displayed at the client device, enabling the user to access and/or control applications and/or data at the server. This may include the display of a desktop screen, including menu information, or any other kind of information allowing selection and control of applications and data at the server. [0008]
  • At least some portions of this information may be displayed at the client device using a screen object whose visual elements are stored in an immutable image file containing text information. A “screen object” is an entity that is displayed on a display screen. Examples of screen objects include screen buttons and icons. An “immutable image file” is a file that is not character based, but rather bit based. Thus, editing such files to change any text therein is very difficult. Examples of such files include files created in formats such as .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, .pcx, and other graphic files. This text information may provide information on a specific function which may be activated by selecting the screen object (e.g., by clicking on a button using a mouse and/or cursor device). The formatting of immutable image files such as files in .gif format does not generally support extraction of text strings (e.g., in ASCII format) which may be perceived by viewing a screen display of a rendering of the image file, as the immutable image files (e.g., .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, .pcx, and other graphic files) are generally created based on graphics information instead of text string information (i.e., in bitmap format instead of ASCII, or character-based format). Conventionally, if it is desired to produce immutable images in different languages, it may be necessary to view a rendering of the image to perceive the desired text string in a first language, perform a translation on the perceived individual string into another language, and then manually create a new immutable image file which embodies the translated string information. [0009]
  • If a number of users access the service from different geographic regions supporting different languages, or have different individual language preferences, the text information of the images is preferably provided in the respective language understood by each individual user. This, however, requires that the text information of the images be provided in different languages; for example, the text strings in the images may be translated into different languages. While this does not pose a problem for a small number of images, where manual translation and creation of the immutable images is possible by manually recreating the immutable image for each different language, a manual translation is neither feasible nor cost effective when a large number of immutable images must be provided in different languages. [0010]
  • For example, an application provided by a server or network of servers such as an office application may include a very large number of different menus in different layers including hundreds of individual menu or immutable images containing text. If the service is available in a large number of different geographic areas supporting different languages, a very large number of translation operations and image creation operations is necessary. For example, if an application provides 100 different immutable images (e.g., .gif files or bitmap files) containing text, and if a translation into 20 different languages is necessary, a total of 2000 translation and image creation operations may be necessary. [0011]
  • Therefore, a need has long existed for a method and system that overcome the problems noted above and others previously experienced. [0012]
  • SUMMARY OF THE INVENTION
  • Methods and systems consistent with the present invention provide an improved translation system that allows users to view screen objects of a user interface in different languages even though the screen object's visual elements are stored in an immutable image file (e.g., a file having a graphic format such as .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, or .pcx). To do so, methods and systems consistent with the present invention automatically generate a number of immutable image files, each containing text in different languages. These files are then made available to users so that they may select a preferred language in which to view the screen objects of the user interface. [0013]
  • To facilitate the automatic generation of the image files in different languages, when the screen objects are initially developed, the textual elements of the screen objects are associated with the image files as text strings. Then, the improved translation system automatically translates the text strings into different languages and generates image files that contain the translated textual elements. As a result, these image files are made available so that when a user wishes to display a user interface, it will be displayed in the language of their choice. For example, a browser user may select a language of their choice and receive web pages from a web server in this language. Such functionality facilitates the international use of web sites. [0014]
  • In accordance with methods, systems and articles of manufacture consistent with the present invention, a method in a data processing system for localizing an immutable image file containing text in a first language is provided. The method translates the text from the first language into a second language that is different from the first language, and automatically generates a translated immutable image file containing the text in the second language. [0015]
  • In accordance with methods, systems and articles of manufacture consistent with the present invention, a computer-readable medium is provided. This computer-readable medium is encoded with instructions that cause a data processing system for localizing an immutable image file containing text in a first language to perform a method. The method translates the text from the first language into a second language that is different from the first language, and automatically generates a translated immutable image file containing the text in the second language. [0016]
  • In accordance with methods, systems and articles of manufacture consistent with the present invention, a data processing system is provided for localizing an immutable image file containing text in a first language. The data processing system comprises an immutable image file creation system that translates the text from the first language into a second language that is different from the first language, and automatically generates a translated immutable image file containing the text in the second language; and a processor for running the immutable image file creation system.[0017]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIGS. 1a-1b depict block diagrams of an exemplary data processing system suitable for using immutable image files in a number of languages. [0018]
  • FIG. 2 depicts a block diagram of a data processing system for automatically generating localized immutable image files containing text, the system suitable for practicing methods and systems consistent with the present invention. [0019]
  • FIG. 3 depicts a flowchart illustrating steps of a method for automatically generating localized immutable image files containing text in accordance with methods, systems, and articles of manufacture consistent with the present invention. [0020]
  • FIG. 4 depicts a block diagram illustrating a logical flow of an exemplary system for automatically generating localized immutable image files containing text in accordance with methods, systems, and articles of manufacture consistent with the present invention. [0021]
  • FIG. 5 depicts a block diagram illustrating a logical flow of an exemplary system for automatically generating localized immutable image files containing text in accordance with methods, systems, and articles of manufacture consistent with the present invention. [0022]
  • FIG. 6 depicts a flowchart illustrating steps of an exemplary method for automatically generating immutable image files containing text in a number of languages in accordance with methods, systems, and articles of manufacture consistent with the present invention. [0023]
  • FIG. 7 depicts a flowchart illustrating steps of an exemplary method for automatically creating immutable image files having text in a number of languages in accordance with methods, systems, and articles of manufacture consistent with the present invention. [0024]
  • FIG. 8 depicts a flowchart illustrating steps of an exemplary method for automatically creating immutable image files in a number of languages in accordance with methods, systems, and articles of manufacture consistent with the present invention. [0025]
  • FIGS. 9a-9b depict flowcharts illustrating steps of an exemplary method for identifying text and translating the text. [0026]
  • FIG. 10 depicts a flowchart illustrating steps of an exemplary method for identifying text and translating the text both in source code text strings and in software related text. [0027]
  • FIGS. 11a-11b depict flowcharts illustrating steps of an exemplary method for identifying text and translating the text both in source code text strings and in software related text. [0028]
  • FIG. 12 depicts a flowchart illustrating steps of an exemplary method for identifying text element types and searching for matching strings in a database. [0029]
  • FIG. 13 depicts a flowchart illustrating steps of an exemplary method for validating a translation element and storing the validated element in a database. [0030]
  • FIG. 14 depicts an exemplary table in a pretranslation database. [0031]
  • FIG. 15a depicts an exemplary display for a user processing a translation of text using exact matching. [0032]
  • FIG. 15b depicts a flowchart illustrating exemplary method steps for a user, a computer, and a database for translating text using exact matching. [0033]
  • FIG. 16a depicts an exemplary display for a user processing the translation of text using fuzzy matching. [0034]
  • FIG. 16b depicts a flowchart illustrating exemplary method steps for a user, a computer, and a database for translating text using fuzzy matching. [0035]
  • FIG. 17 depicts an exemplary system for translating text using a pretranslation database. [0036]
  • FIG. 18 depicts an exemplary sentence based electronic translation dictionary. [0037]
  • FIG. 19 depicts an exemplary networked system for use with an exemplary translation technique.[0038]
  • DETAILED DESCRIPTION
  • Methods and systems consistent with the present invention provide an improved translation system that allows users to view screen objects of a user interface in different languages even though the screen object's visual elements are stored in an immutable image file (e.g., a file having a graphic format such as .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, or .pcx). To do so, methods and systems consistent with the present invention automatically generate a number of immutable image files, each containing text in different languages. These files are then made available to users so that they may select a preferred language in which to view the screen objects of the user interface. [0039]
  • To facilitate the automatic generation of the image files in different languages, when the screen objects are initially developed, the textual elements of the screen objects are associated with the image files as text strings. Then, the improved translation system automatically translates the text strings into different languages and generates image files that contain the translated textual elements. As a result, these image files are made available so that when a user wishes to display a user interface, it will be displayed in the language of their choice. For example, a browser user may select a language of their choice and receive web pages from a web server in this language. Such functionality facilitates the international use of web sites. [0040]
  • Reference will now be made in detail to an implementation in accordance with methods, systems and articles of manufacture consistent with the present invention as illustrated in the accompanying drawings. The same reference numerals may be used throughout the drawings and the following description to refer to the same or like parts. [0041]
  • FIG. 1a depicts a block diagram of an exemplary data processing system including a client 102, a client 104, and a server 100, similar to a data processing system for implementing StarOffice™ running in Sun One Webtop, developed by Sun Microsystems, in which a number of client devices may access the server device 100 to use a service application such as a word processor, a spreadsheet application, or a graphics application. Client 102 or client 104 may include a human user or may include a user agent. [0042]
  • Clients 102 and 104 each include a browser 106 and 108, respectively. Browsers 106 and 108 may be used for displaying data provided by the server 100 (e.g., HTML data or XML data). The data may include references to immutable image files containing text (e.g., .gif files or bitmap files) which may be downloaded to the client browsers 106 and 108 for display as images such as control buttons or icons. [0043]
  • Clients 102 and 104 may communicate with server 100 through communication networks which include a large number of computers, such as local area networks or the Internet. Access to information located in either client 102, client 104, or server 100 may be obtained through wireless communication links, fixed wired communication links, or any other communication means. Standard protocols for accessing and/or retrieving data files over a communication link, for example, over a communication network, may be employed, such as the HyperText Transfer Protocol (HTTP). [0044]
  • FIG. 1b depicts a block diagram of the exemplary data processing system of FIG. 1a, more particularly showing browser displays 120 and 130 on clients 102 and 104, respectively, of immutable images 122 and 132 having corresponding text in different languages. Server 100 includes a secondary storage device 140 which includes a German image file 142, which is displayed as a button 122 on the browser display 120 of client 102, and an English image file 144, which is displayed as a button 132 on the browser display 130 of client 104. Each of clients 102 and 104 may specify a language in order to be provided with image files in the appropriate language for display during execution of the same general application, such as a word processing application, which may be executed as a separate process for each individual user. Thus, each individual user may use the application and receive screen displays including text information in a preferred language of the individual user. [0045]
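Serving each client the pre-generated image file that matches its language preference amounts to a lookup keyed on the user's locale. The following is a hypothetical sketch; the file paths and the fallback policy are assumptions for illustration, not part of the disclosure.

```python
# Hypothetical mapping from language code to the localized immutable image
# file, e.g. the German and English cancel buttons of FIG. 1b.
LOCALIZED_IMAGES = {
    "de": "buttons/de/cancel.gif",
    "en": "buttons/en/cancel.gif",
}

def image_for_locale(locale, default="en"):
    """Return the pre-generated immutable image file for a user's language,
    falling back to the base language if no localized file exists."""
    lang = locale.split("-")[0].lower()   # 'de-DE' -> 'de'
    return LOCALIZED_IMAGES.get(lang, LOCALIZED_IMAGES[default])

print(image_for_locale("de-DE"))  # buttons/de/cancel.gif
print(image_for_locale("ja"))     # buttons/en/cancel.gif (fallback)
```

Because all language variants are generated ahead of time, the per-request work reduces to this dictionary lookup rather than any image manipulation.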
  • FIG. 2 depicts a block diagram of a data processing system 200 suitable for practicing methods and systems consistent with the present invention, which provides immutable image files having text 232 and 234 in a number of languages. FIG. 2 particularly illustrates how immutable images containing text strings in different languages may be created at reduced complexity and costs. [0046]
  • System 200 includes a server 201 for generating the immutable image files and a server 260 for translating strings in particular languages into different languages. Servers 201 and 260 communicate with each other via a connection 245 which may be a direct connection, a network connection such as a LAN connection, or a WAN connection such as the Internet. Text strings in a first language are identified on the server 201 and are transferred to the server 260 by a translator module 218 on the server 201 as a file of text strings 280 for translation into at least one second language that is different from the first language. After translation, the text strings are transferred back to the server 201 for processing to generate immutable image files containing the text strings in different languages. The immutable image files containing the text strings may then be used in applications so that users may view screen objects on a display in their choice of different languages. [0047]
  • Server 260 includes a CPU 262, a secondary storage device 268 which includes a translation database 270, a display 265, an I/O device 264, and a memory 266, which communicate with each other via a bus 291. Memory 266 embodies an operating system 295 and a translation module 278 which inputs text strings from the file of text strings 280 and translates the strings into different languages using the translation database 270. One suitable translation process for use with methods and systems consistent with the present invention is described below. However, other processes may also be used. The translated text strings are then transferred back to server 201 via the connection 245. Alternatively, the elements of servers 260 and 201 are all located on one device. [0048]
  • Server 201 includes a CPU 202, a secondary storage device 208, a display 205, an I/O device 204, and a memory 206, which communicate with each other via a bus 231. Memory 206 embodies an operating system 250, a translator module 218, a parser 220, and a script 222 which is associated with an image manipulation program 224 to input text information from a text string file 226 and a template 230 to generate immutable image files 232 and 234, each of which contains text in a language typically different from the other, when the script 222 is initiated. The script 222 may be initiated by a call entered by a user using the operating system 250, or the script 222 may be initiated by a call included in a batch file which is executed by a user or system command. A batch file of a number of calls to the script 222 may be stored in the secondary storage device 208 for access by a user or a system in order to generate a number of immutable image files in a number of different languages, and for a number of different base images (e.g., cancel button, system setting buttons). The parser 220 may be used to parse text information from a text element file 228 to create the text string file 226. A "text element file" is a file which contains text elements. An exemplary text element file written in XML is shown below. A "text element" is a property or attribute associated with a text string, including the text string itself. Examples of text elements include the text string, a language of the text string, information regarding an associated immutable image file which includes the text string (e.g., display characteristics), and location information indicating a storage location for the associated immutable file. [0049]
  • The parser 220 may also parse the text element file 228 to obtain text strings for the translator module 218 to transfer to the server 260 for translation. After translation, the translated strings may be transferred back to the translator module 218 to be merged back into the text element file 228 to enable multilingual applications by accessing the text elements in different languages. [0050]
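The round trip through the text element file, extracting strings for translation and later merging the translations back, can be illustrated with a small XML parser. The element and attribute names below are invented for illustration; the disclosure does not fix a schema for the text element file.

```python
import xml.etree.ElementTree as ET

# Hypothetical text element file: each element carries an output file name
# and the text string in each language in which it is available.
TEXT_ELEMENT_FILE = """
<textelements>
  <element template="Cancel" output="Cancel.gif">
    <string lang="en">Cancel</string>
    <string lang="de">Abbrechen</string>
  </element>
</textelements>
"""

def extract_strings(xml_data, lang):
    """Collect (output file, text string) pairs for one language."""
    root = ET.fromstring(xml_data)
    pairs = []
    for element in root.iter("element"):
        for string in element.iter("string"):
            if string.get("lang") == lang:
                pairs.append((element.get("output"), string.text))
    return pairs

print(extract_strings(TEXT_ELEMENT_FILE, "de"))  # [('Cancel.gif', 'Abbrechen')]
```

Merging a new translation back would amount to appending another `string` child with its language attribute, so the file accumulates every language in which the image is available.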
  • The translation database 270 may be used to store strings in different languages, as well as other information such as template information associated with the template 230. An example of a very simple database table containing template information is: [0051]
    Template   File Name    Width   Height   Color
    SysDef     SysDef.gif   300     60       16
    Cancel     Cancel.gif   200     60       25
  • In the table as shown, information is provided for two immutable image files, one for a system defaults button and one for a cancel button. A base file name is provided for storing an immutable image file for each template. Additionally, width, height, and color information of the buttons is provided. Additional information, such as shape, a base language text string (e.g., "System Defaults" or "Cancel" if English is the base language for translating the strings for these buttons), placement of the string within the button, and information regarding a previously generated immutable image file, is stored in additional columns (not shown). The information regarding a previously generated immutable image file may be used to generate an empty (not having text) immutable image file, so that a translated string may be merged into the file. Thus, the template information for a call to the script 222 may be obtained by requesting it from the translation database 270 and included in the call in lieu of using the template file 230. A specific example of a script, a call to the script, a text element file, and a text string file is shown below. An exemplary image manipulation program used for the example is GIMP (GNU Image Manipulation Program). A commercially available program may be used for translating the text strings, such as a translation program available from Trados. Such a program may be used by configuring code to obtain the strings for translation, provide the strings to the translation program, receive the translated strings, and add the translated strings into the text element file. [0052]
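A batch of calls to the image-creation script, one per (template, language) pair, could be assembled roughly as follows. The script name `make_button` and its argument order are hypothetical; only the template values mirror the example table above.

```python
# Template information as it might be retrieved from the translation
# database; values mirror the example table above.
TEMPLATES = {
    "SysDef": {"file": "SysDef.gif", "width": 300, "height": 60, "colors": 16},
    "Cancel": {"file": "Cancel.gif", "width": 200, "height": 60, "colors": 25},
}
# Translated strings for one template, keyed by language.
STRINGS = {"Cancel": {"en": "Cancel", "de": "Abbrechen"}}

def batch_calls(template_name, translations):
    """Build one script call per language for a given template."""
    t = TEMPLATES[template_name]
    calls = []
    for lang, text in sorted(translations.items()):
        out = "{}/{}".format(lang, t["file"])   # e.g. de/Cancel.gif
        calls.append('make_button --width {w} --height {h} --text "{s}" --out {o}'
                     .format(w=t["width"], h=t["height"], s=text, o=out))
    return calls

for call in batch_calls("Cancel", STRINGS["Cancel"]):
    print(call)
```

Writing these lines to a batch file and executing it corresponds to the batch of script calls stored in the secondary storage device 208, generating every language variant of every base image in one run.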
  • Alternatively, the parser 220 may be configured to extract the text for translation from at least one image creation file including position information and text elements from a number of immutable image files containing text in the first language. For example, the text elements may be extracted from an image creation file such as an HTML (HyperText Markup Language) file or an XML (Extensible Markup Language) file or any other image creation file suitable for interpretation at a data processing device (e.g., client 102 or 104) for display. [0053]
  • Thus, the parser 220 may automatically extract text elements from the image creation file and collect the text elements in the text element file 228 for further processing. An image creation file may generally be used to create a screen display (e.g., at a client accessing a service application at a server or group of servers). [0054]
  • After the translated text elements are obtained, the parser 220 may further merge the translated text elements in a second language into an image creation file for the second language. Thus, the text elements for the immutable image files in the second language may be provided in the image creation file for the second language, so that a client operated by a user who is fluent in the second language may be provided with a display screen including text elements in the second language and a display of immutable image files containing text in the second language. [0055]
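Extracting text elements from an HTML image creation file might look like the following sketch, which assumes, purely for illustration, that each immutable image's text string is carried in the `alt` attribute of its `img` tag.

```python
from html.parser import HTMLParser

# Illustrative only: the text element of each immutable image is assumed to
# be carried in the img tag's alt attribute of the image creation file.
class ImageTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.elements = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            a = dict(attrs)
            if "src" in a and "alt" in a:
                # Collect (image file, text string) pairs for translation.
                self.elements.append((a["src"], a["alt"]))

PAGE = ('<html><body>'
        '<img src="Cancel.gif" alt="Cancel">'
        '<img src="SysDef.gif" alt="System Defaults">'
        '</body></html>')
parser = ImageTextExtractor()
parser.feed(PAGE)
print(parser.elements)  # [('Cancel.gif', 'Cancel'), ('SysDef.gif', 'System Defaults')]
```

The collected pairs would then be written into the text element file 228; the reverse merge step would rewrite the page with `src` paths pointing at the second-language image files.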
  • This example is particularly suitable for application in network environments such as the Internet. [0056]
  • The original uses of the Internet were electronic mail (e-mail), file transfers (ftp or file transfer protocol), bulletin boards and newsgroups, and remote computer access (telnet). The World Wide Web (web), which enables simple and intuitive navigation of Internet sites through a graphical interface, expanded dramatically during the 1990s to become the most important component of the Internet. The web gives users access to a vast array of documents that are connected to each other by means of links, which are electronic connections that link related pieces of information in order to allow a user easy access to them. Hypertext allows the user to select a word from text and thereby access other documents that contain additional information pertaining to that word; hypermedia documents feature links to images, sounds, animations, and movies. [0057]
  • The web operates within the Internet's basic client-server format. Servers include computer programs that may store and transmit documents (i.e., web pages) to other computers on the network when requested, while clients may include programs that request documents from a server as the user requests them. Browser software enables users to view the retrieved documents. A web page with its corresponding text and hyperlinks may be written in HTML or XML and is assigned an online address called a Uniform Resource Locator (URL). [0058]
  • Information may be presented to a user through a graphical user interface called a web browser (e.g., browser 106 or 108). When presented with data in the proper format, the web browser displays formatted text, pictures, sounds, videos, colors, and other data. To instruct a web browser to present the data in the desired manner, HTML was originally used. HTML is a language whereby a file is created that has the necessary data and also information relating to the format of the data. [0059]
  • Further, XML has emerged as a next generation of markup languages. XML is a language similar to HTML, except that it also includes information (called metadata) relating to the type of data as well as the formatting for the data and the data itself. [0060]
  • Thus, this example may be used to extract text elements from HTML or XML pages or other types of pages such as image creation files, in order to facilitate provision of immutable image files in different languages for internationalizing service applications accessible over the Internet or any other network. [0061]
  • The [0062] text element file 228 may include any kind of collection of text elements stored in a memory area of the data processing system 200, or stored at any other location. For example, the text element file 228 may be provided at an external location, accessible from the data processing system 200 through a communication network. Thus, the text element file 228 may be a centralized resource accessible by a number of data processing systems such as the data processing system 200 from arbitrary locations. Accordingly, a decentralized generation of immutable image files in various languages may be possible, including a consistent use of translated text elements. This allows consistent presentation of textual information to users, in contrast to environments in which different translation operations yield different translations of text elements for use in the same or different service applications.
  • Collecting text elements in the [0063] text element file 228 may be facilitated by a check-in process, allowing registration of text elements and translations thereof in the text element file 228, in order to maintain information on languages in which the individual immutable image files are available.
  • The [0064] text element file 228 may include an XML file and an XML editor may be used for generating the translated text elements. Further, the text element file 228 may include information of a desired output file name of the respective immutable image files and information regarding a template, such as template 230, which may be needed for creation of an immutable image file.
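The text does not prescribe a concrete XML layout for the text element file; the following Python sketch, using the standard xml.etree.ElementTree module, shows one possible structure in which each text element carries a desired output file name, a template reference, and translations tagged by language (all tag and attribute names are assumptions):

```python
import xml.etree.ElementTree as ET

# Hypothetical structure of a text element file; tag and attribute names
# are illustrative only.
root = ET.Element('textelements')
entry = ET.SubElement(root, 'textelement',
                      output='sysset',          # desired output file name
                      template='button.xcf')    # template for image creation
ET.SubElement(entry, 'text', lang='en').text = 'system settings'
ET.SubElement(entry, 'text', lang='de').text = 'Systemeinstellungen'

xml_bytes = ET.tostring(root, encoding='utf-8')
print(xml_bytes.decode('utf-8'))
```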
  • Thus, the [0065] text element file 228 may also serve as a basis of information enabling proper handling of the created immutable image files and enabling selection of a suitable template for the text elements which are collected. For example, different types of text elements may be associated with different templates, enabling creation of immutable image files in certain categories (e.g., characterized by size, color or any other graphic information). For example, immutable image files related to data handling may be generated using a first template, whereas immutable image files enabling a setting of references (e.g., for display) may be generated using a second template.
  • The [0066] translation module 278 for obtaining translated text elements in at least one second language may include a program or a sequence of instructions (as described below) to be executed by the CPU 202, by corresponding hardware or a combination of hardware and software for realizing the functionality of obtaining translated text elements in the second language. The translation of the text elements may be accomplished by using pretranslation databases.
  • The [0067] translator module 218 may be configured to extract relevant information from the text element file 228 or may be configured to import this information into a database such as the translation database 270 for improved handling of the text elements. After identifying individual text elements, the translator module 218 may invoke a translation service providing a translation of the text element into the second language.
  • The translation service may include an internal application, such as a thesaurus provided in a memory such as [0068] memory 206, within or accessible from the data processing system 200. Further, the translation service may be an external translation service provided by an external application program (e.g., offered by a third party). For example, the translator module 218 may be configured to invoke a web-based translation tool, accessed through a computer network such as a local area network or the Internet, or a combination thereof.
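A minimal sketch of such a translation step, assuming a pretranslation database modeled as a plain dictionary; the dictionary contents and function name are illustrative, and the call to an external service is only indicated:

```python
# Hypothetical pretranslation database: (text element, target language) pairs
# mapped to translated text elements.
PRETRANSLATED = {
    ('system settings', 'de'): 'Systemeinstellungen',
    ('user groups', 'de'): 'Benutzergruppen',
}

def translate(text_element, target_lang):
    """Return a translated text element, consulting the pretranslation database first."""
    try:
        return PRETRANSLATED[(text_element, target_lang)]
    except KeyError:
        # Here a web-based or third-party translation service would be
        # invoked; raising keeps the sketch self-contained.
        raise LookupError(f'no pretranslation for {text_element!r} ({target_lang})')

print(translate('system settings', 'de'))   # Systemeinstellungen
```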
  • The translated image text elements may be collected in the [0069] translation database 270 and/or may then be merged into the text element file 228.
  • A program may be provided having instructions configured to cause a data processing device to carry out the method of at least one of the above operations. Further, a computer readable medium may be provided, in which a program is embodied, where the program is to make a computer execute steps of the method discussed above. [0070]
  • Also, a computer-readable medium may be provided having a program embodied thereon, where the program is to make a computer or a system of data processing devices execute functions or operations of the features and elements of the examples described previously. A computer-readable medium may be a magnetic or optical or other tangible medium on which a program is recorded, but may also be a signal, e.g., analog or digital, electronic, magnetic or optical, in which the program is embodied for transmission. Further, a computer program product may be provided comprising the computer-readable medium. [0071]
  • The text string file [0072] 226 may include information of a desired output file name of the respective immutable image files and information regarding a template, such as the template 230, used for creation of an immutable image file. A template 230 may include any kind of image file information. For example, a template may include information related to at least one of image size; image shape; color; and position of a text element in an image. A number of templates for various images may be provided and stored in the memory 206 or at an external location. Alternatively, the template information may be included in the call to the script 222.
  • Thus, the text string file [0073] 226 may also serve as a basis of information enabling proper handling of the created images and enabling selection of a suitable template for the text elements collected. For example, different types of text elements may be associated with different templates, allowing creation of images in various categories such as size, color or any other type of graphic information. For example, images related to data handling may be generated using a first template, whereas images enabling a setting of references such as for display may be generated using a second template.
  • The image size may specify an area of the image such as an area in a display screen to be displayed at a client unit. Further, the image size may specify a minimum size of the image, such as a size suitable to fit the text element. The image shape information may specify any graphic parameters for obtaining certain graphic effects such as shading or 3-D effects. Further, the shape may also specify geometric shapes such as rectangles, circles and polygons. [0074]
  • The image color may include information related to shading of the image to obtain further graphic effects. The image information may further include information regarding a position of a text element within an image, enabling a proper placement of the text element, and/or may relate to a position of the image in a display screen, as specified in an image creation file (e.g., an HTML file), enabling proper placement of the image such as in a menu including different images to be displayed at a client device. [0075]
  • The system of FIG. 2 facilitates a generation of immutable images for screen displays similar to those discussed with regard to FIG. 1, wherein the images have text elements with different languages, in order to adapt a service application to different languages of different users. For example, a first user who is fluent in a first language may be provided with display screens displaying screen objects with text information in the first language, and a second user who prefers a second language may be provided with a display screen displaying screen objects with text elements in the second language. [0076]
  • The provision of display screens displaying screen objects with text elements in the first language and the second language can be simplified, particularly by collecting the text elements of the images in the [0077] text element file 228 and effecting creation of images for the text elements in the text element file 228 in the different languages.
  • The [0078] data processing system 200 may generally include any computing device or group of computing devices connected with one another. For example, the data processing system 200 may be a general purpose computer such as a desktop computer, a laptop computer, a workstation or combinations thereof. Further, the functionality of the data processing system 200 may be realized by distributed data processing devices connected in a local area network such as a company-wide computer network, or by a wide area network such as the Internet, or combinations thereof.
  • The [0079] image manipulation program 224 may be a graphics application provided as a stand-alone program, e.g., offered by a third party. Further, the image manipulation program 224 may be a commercially available graphics application enabling generation of immutable image files based on given text elements and given template information. For example, the image manipulation program 224 may include the GNU (Gnu's Not UNIX) image manipulation program GIMP, which is described at www.gimp.org.
  • The [0080] system 200, as shown, is configured to instruct the image manipulation program 224 to generate immutable image files based on given parameters and may be adapted to generate a script to instruct the image manipulation program 224 to generate the immutable image files based on the given parameters. If the image manipulation program 224 is script enabled, a script such as script 222 may conveniently instruct the image manipulation program 224 based on the parameters. The script may be a sequence of instructions provided to the image manipulation program 224, instructing the image manipulation program 224 to generate an immutable image file having certain properties and including a specific text element. Further, the script 222 may include a text element for insertion into the immutable image file. The script may be written and intermediately stored in a file (e.g., as a batch file stored in secondary storage 208), to be provided to the image manipulation program 224, and may include the parameters discussed above (i.e., parameters needed to generate the immutable image file).
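How such a script-based call might be assembled can be sketched as follows; GIMP accepts batch commands through its "-b" option, and the script file name used here is an assumption (the command is only constructed, not executed):

```python
import shlex

def build_gimp_call(script_file):
    """Return the argument list for a batch-mode GIMP call.

    GIMP's "-i" flag runs without a user interface and "-b" passes a batch
    command; the script file name is an assumption of this sketch.
    """
    batch_cmd = f'(load "{script_file}")'
    return ['gimp', '-i', '-b', batch_cmd, '-b', '(gimp-quit 0)']

cmd = build_gimp_call('create_buttons.scm')
print(' '.join(shlex.quote(part) for part in cmd))
```

In practice, such a command list could be handed to a process launcher (e.g., subprocess.run) once GIMP is available on the system.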
  • The parameters may include a template and the text element of the immutable image. For example, the parameters may include immutable image file information as discussed above, including image size, shape, color and/or position of a text element within the image. [0081]
  • Additionally, the parameters may include a number of text elements of the image and at least one template, enabling a concurrent generation of various immutable image files with different text elements and/or templates. [0082]
  • Each text element may be associated with a particular template (e.g., specified in the input file), or the text elements may be categorized in groups, each group associated with a particular template. [0083]
  • Further, the [0084] parser 220 may be configured to generate the text string file 226 including at least one text element (e.g., from the text element file 228) and the image manipulation program 224 may be instructed, using the script 222, to access the text string file 226 in order to retrieve at least one text element of the text string file 226. The text string file 226 may be intermediately stored in the memory 206, or at any other location. Thus, the text string file 226 may be based on at least a portion of the text element file 228.
  • The text string file [0085] 226 may further include at least one of a language identifier; and a desired name of an immutable image file to be created. The language ID facilitates classification of the immutable image files created based on the text elements, and a desired name of an immutable image file to be created facilitates easy access to the generated graphic elements including the image files, such as by updated HTML code used to generate a display screen at a client unit.
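One possible line-oriented layout for such an input file, pairing a language identifier and a desired output file name with each text element, might look as follows; the "|"-separated format and the file names are assumptions of this sketch:

```python
# Hypothetical input-file lines: language ID, desired output file name,
# and the text element, separated by "|".
lines = [
    'en|sysset_en.gif|system settings',
    'de|sysset_de.gif|Systemeinstellungen',
]

entries = []
for line in lines:
    lang, out_name, text = line.split('|', 2)
    entries.append({'lang': lang, 'file': out_name, 'text': text})

print(entries[1]['file'])   # sysset_de.gif
```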
  • By including the name of the input file and the immutable image file information in the [0086] script 222, the image manipulation program 224 may access the text string file 226, retrieve the text elements and generate the immutable image file based on the immutable image file information. Alternatively the immutable image file information may be directly included in the text string file 226.
  • Alternatively, the [0087] script 222 and image manipulation program 224 may be configured to generate an empty immutable image file based on the template 230 and may be configured to merge the translated text element into the empty immutable image file. Thus, a collection of immutable image files with no text may be created, and text elements may be included on demand. This enables generation of a generic image creation file (e.g., an HTML file) for generating a screen display at a client unit. The text elements in the respective languages can then be included in the generic image creation file in order to generate image creation files in different languages.
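The on-demand generation of language-specific image creation files from a generic one can be sketched with Python's string.Template; the placeholder names and image file names are hypothetical:

```python
from string import Template

# A generic image creation file with placeholders; per-language pages
# reference the immutable image files for that language.
generic_page = Template('<html><body>'
                        '<img src="$sysset" alt="$sysset_text">'
                        '</body></html>')

pages = {}
for lang, image, alt in [('en', 'sysset_en.gif', 'system settings'),
                         ('de', 'sysset_de.gif', 'Systemeinstellungen')]:
    pages[lang] = generic_page.substitute(sysset=image, sysset_text=alt)

print(pages['de'])
```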
  • The [0088] image manipulation program 224 may generate immutable image files for storage at the data processing system 200 or at any other location. The immutable image files may also be directly included into image creation files for generating screen displays at client units or image creation files with references to the immutable image files stored may be created.
  • Two exemplary immutable image files which may be created using [0089] system 200 are an immutable image file in a first language, English, displaying the English expression “system defaults”, and a second image button in the second language, German, which includes the corresponding German expression “Systemeinstellungen.” The buttons 122 and 132 of FIG. 1b illustrate screen displays of the immutable image files including these text elements.
  • FIG. 3 depicts a high-level flowchart illustrating steps of a method for automatically localizing an immutable image file containing text in a first language, the method suitable for use with methods, systems and articles of manufacture consistent with the present invention, such as the system of FIG. 2. First, a translation system such as the system shown on [0090] server 260 translates the text from the first language into a second language that is different from the first language (Step 300). Then, script 222 and image manipulation program 224 automatically generate a translated immutable image file containing the text in the second language (Step 302).
  • FIG. 4 depicts an exemplary high level logical flow using elements of a system for generating immutable image files containing specific exemplary text in different languages according to another example. FIG. 4 particularly illustrates extraction of text elements and merging of translated image text elements into image creation files such as HTML or XML files. As described previously, screen displays may be generated for client devices accessing services at a server. As the client devices may be located in different geographic areas supporting different languages, the service provided at the server may be enabled to generate image creation files or screen displays for the client devices in different languages. [0091]
  • FIG. 4 illustrates two screen displays based on image creation files, in different languages. A [0092] first screen display 410 includes English language buttons 411 and 412. The displayed buttons in the present example are used as menu items allowing control of a corresponding service application at a server. The button 411 in the present example is assumed to relate to system settings and therefore shows the English expression “system settings.” The second image button 412 relates to user group functionality, and therefore displays the expression “user groups.”
  • An image creation file corresponding to the [0093] screen display 410 may be used for users preferring the English language and may thus be provided from the server to corresponding client devices which have selected English as the preferred language.
  • The [0094] second screen display 450 shown in FIG. 4 shows German language buttons and therefore is suitable for users in a region that supports the German language, or users who have selected German as their preferred language. Corresponding to the buttons 411 and 412 of the screen display 410, the screen display 450 includes buttons 451 and 452. In correspondence to button 411, the button 451 includes the German expression “Systemeinstellungen,” indicating that the corresponding button also relates to system functions. Further, the button 452 corresponding to the button 412 contains the German expression “Benutzergruppen” and indicates that this button also relates to a user group functionality. The screen displays may include a larger number of immutable images and/or sub-menus or sub-screens, so that there may well be a very large number of different immutable image files to be considered.
  • The exemplary text elements may be extracted from the [0095] screen display 410 corresponding to the English language and may be translated into text elements for the screen display 450 for the German language. This extraction and creation process may be carried out offline. Thus, immutable image files for screen displays in different languages may be created and stored beforehand and made accessible to a service application such as a text processing application.
  • Further, the extraction and creation process for providing immutable image files containing text in different languages may be provided on demand (e.g., if a user with a particular preferred language logs into the server). In this case the screen display (i.e., the image creation file) in the corresponding language including text elements and images in the preferred language may be created dynamically (i.e., on demand). [0096]
  • In FIG. 4, an [0097] arrow 460 illustrates a corresponding extraction process enabling extraction of the text elements from the image buttons 411 and 412 and provision to the text element file 420. The extraction process may be carried out as described previously. The text element file 420 therefore will contain, in the present simplified example, the expressions “system settings” and “user groups.” Then, in steps 461 and 462, translated text elements are obtained using a translation service 430.
  • The [0098] text element file 420 may include corresponding language identifiers (i.e., showing that the original text elements are in the English language and that needed text elements should be in the German language).
  • After the translation step, the [0099] text element file 420 will further include the German expressions “Systemeinstellungen” and “Benutzergruppen.” Thereafter, as illustrated by arrow 463, a graphic program 440 is instructed to generate immutable image files in the German language, corresponding to the buttons 411 and 412 in the English language. The creation of the immutable image files was described previously.
  • As a result of the immutable image file creation process, immutable image files corresponding to the [0100] buttons 451 and 452 are generated. The immutable image files may be checked into a database of immutable image files or may be stored intermediately in any other way. Thereafter, as outlined by an arrow 464, the immutable image files are used to generate the screen display 450 at a client device or a corresponding image creation file.
  • Accordingly, an English language user accessing a particular service controlled by the screen displays [0101] 410 or 450 at the client units may be provided with the English language screen display 410, whereas a German language user may be provided with the German language screen display 450.
  • FIG. 5 depicts an exemplary logical flow of elements of a system for obtaining and using immutable image files in a number of different languages according to another example. FIG. 5 particularly shows how the immutable image files in different languages may be used to supply users with display screens in different languages. FIG. 5 shows a [0102] data processing device 500 for obtaining image buttons in a number of different languages.
  • The [0103] data processing device 500 may include a single processing device or a number of processing devices in communication with one another. The data processing device 500 includes a text element file 501 (e.g., as described previously). The text element file 501 may be stored in an internal memory or may be provided at an external memory location.
  • Further, the [0104] data processing device 500 includes an input file 502. The input file 502, as described previously, may include at least one text element, and may further include a language ID and/or name of an immutable image file to create. The input file may also be stored at an internal location or at any other external memory location.
  • The [0105] data processing device 500 also includes a template 503 including immutable image file information such as image size and/or image shape and/or color and/or a position of a text element within the image.
  • The [0106] data processing device 500 may store a number of templates at an internal memory location or an external memory accessible from the data processing device 500. A suitable template (e.g., the template 503) may be selected based on information contained in the text element file 501 or on information on image parameters obtained from another source. Further, the template 503 may be dynamically generated.
  • Further, the [0107] data processing device 500 includes a script 504, as described previously. The script 504 may be generated based on the input file 502, and may include information of the input file 502 or a reference or storage location enabling access to the input file. Further, the script 504 may include image information from the template 503.
  • The [0108] script 504 may be intermediately stored at the data processing device 500 or may be dynamically generated and transferred to the graphics application program 505. The graphics application program 505 may be a graphics application (e.g., GIMP) as described previously, provided at the data processing device, as shown in FIG. 5, or may be provided at an external location, arranged to be accessed from the data processing device 500 through a communication link. Based on the script 504, the graphics application program 505 is instructed to generate at least one immutable image file, as described previously. The immutable image files with text elements in different languages, as illustrated by reference numeral 506, may be stored in the data processing device 500, or may be stored at an external location.
  • Further, the [0109] data processing device 500 invokes a translation service 510 in order to obtain translated text elements. The translation may be performed by a translation system as described below in more detail. Additionally, FIG. 5 shows a server 520 which provides services to a number of users 541 and 542. The services enable users to access and/or manipulate data at the server 520 from remote locations through client devices. The server 520 may include a large computing device providing services such as office applications, communications applications or any other applications.
  • [0110] Exemplary clients 541 and 542 are arranged to access the server 520 through the Internet, as illustrated at 530, in order to control execution of a service application, as described previously. As the clients 542 and 541 may be located in different geographic areas that support different languages, or as users operating the clients may have different language preferences, the application at the server 520 generates image creation files or screen displays for the clients 541 and 542 in different languages.
  • FIG. 5 illustrates two exemplary screen displays [0111] 521 and 522 in a first and a second language. The server 520 provides image creation files to the clients 541 and 542 in the preferred language. In operation, in order to obtain text elements in a number of languages, the data processing device 500 may be configured to extract text elements (e.g., from an image creation file corresponding to screen display 521 at the server 520), including text elements in a first language TE1, TE2, TE3, and TE4. The text elements are stored in the text element file 501.
  • Thereafter, the [0112] data processing device 500 obtains translated text elements in the second language using the translation service 510. The translated text elements are then merged into the text element file 501. Thereafter, based on the text element file 501 the input file 502 is generated and, based thereon and on the template 503 the script 504 is generated. The script 504 then instructs the graphics application program 505 to generate the immutable image files with text elements TE1, TE2, TE3, and TE4 in the second language.
  • The [0113] server 520 then uses the immutable image files in the various languages to generate image creation files (e.g., in HTML or XML), such as image creation files corresponding to the screen display 521 and 522.
  • FIG. 6 depicts a flowchart depicting steps of an exemplary method for generating screen objects (e.g., image buttons) in a number of languages. The steps of FIG. 6 may be carried out using the system shown in FIG. 2; however, FIG. 6 is not limited thereto. [0114]
  • First, a parser generates an input file including at least one image text element, and/or a language ID and/or a name of an immutable image file to be created (Step [0115] 601). The input file may resemble the text string file 226 and may correspond to at least part of the text element file 228.
  • Next, a parser obtains image information (i.e., information on the image such as color, shape, dimensions, and position of a text string within the image) from a template (Step [0116] 602). The image information may relate to one or a number of templates (as required for the text elements). For example, image information for different groups of text elements and/or different languages may be obtained. Obtaining the image information may also include dynamically generating image information based on information from the input file (e.g., a size of a text element in a particular language).
  • After the image information is obtained, a script is initiated to instruct a graphic application program (e.g., the [0117] image manipulation program 224 which may be implemented using the GNU (Gnu's Not UNIX) image manipulation program GIMP) to generate at least one immutable image file, the script including information on the input file (e.g., a file name or address, and the at least one immutable image file) (Step 603). For example, the script may be used to instruct a graphic application program on a remote server to create an immutable image file based on given parameters. The given parameters may be obtained from the input file and may be provided with the image information. Based on the script the graphic application program may access or retrieve the input file from a memory including at least one of text elements, language IDs and immutable image file names. Further, the input file may also include the image information.
  • Next, under the direction of the script, the graphic application program generates at least one immutable image file based on the image information and the translated text element (Step [0118] 604). For example, a number of immutable image files for a particular text element in different languages may be created simultaneously (e.g., if the input file includes one text element in a number of languages). Further, a number of immutable image files may be created simultaneously for a number of text elements in one language. Moreover, a combination of both cases is possible, wherein a number of immutable image files for a number of text elements in a number of languages may be created.
  • Further, a number of templates may be employed to generate immutable image files for different groups of text elements. The text elements may be classified according to language ID or according to content, in order to be able to make characteristics or appearance of an immutable image file language dependent and/or content dependent. Different templates may be used for each classified group. For example, text elements for data manipulation may be classified into a first group for images having a first appearance, and text elements for setting preferences may be collected in a second group for images having a second appearance. [0119]
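The association of classified text element groups with templates described above can be sketched as a simple lookup table; the group labels and template file names are illustrative only:

```python
# Hypothetical mapping of text element groups to templates.
TEMPLATE_BY_GROUP = {
    'data_manipulation': 'template_data.xcf',
    'preferences': 'template_prefs.xcf',
}

# Text elements classified by content into groups.
elements = [('copy', 'data_manipulation'),
            ('paste', 'data_manipulation'),
            ('system settings', 'preferences')]

# Pair each text element with the template for its group.
plan = [(text, TEMPLATE_BY_GROUP[group]) for text, group in elements]
print(plan)
```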
  • The graphic application program then stores at least one immutable image file using the at least one immutable image file name (Step [0120] 605). The file names may enable a convenient identification of the immutable image files in later processing steps (e.g., during generation of image creation files for clients fluent in different languages).
  • As an alternative, subsequent to Step [0121] 603, the graphic application program may generate an empty immutable image file (i.e., an immutable image file free of a text element) based on the image information (Step 606). Thus, immutable image files for receiving text elements may be created. Next, the translated text elements may be merged into the respective empty immutable image files (Step 607). Thus, a library of immutable image files may be created and the text elements may be merged into the retrieved empty immutable image files.
  • In an example shown below, the graphic application program described previously includes the GNU (Gnu's Not UNIX) image manipulation program GIMP. [0122]
  • In the example, creation of immutable image files [0123] 232 and 234 is accomplished by a GIMP call, a GIMP input file (e.g., text string file 226) and a GIMP script (e.g., script 222). Code examples of a GIMP call, a GIMP input file (file name: .utf8.txt) and a GIMP script are given below. The GIMP call is initiated (e.g., on a command line of an operating system such as Windows or from a batch file by the operating system) and in turn instructs the GIMP to input text string information from the input file and template information from the call itself.
  • FIG. 7 depicts a flowchart depicting steps of another exemplary method for generating immutable image files in different languages. The steps of FIG. 7 may be carried out using the system shown in FIG. 2; however, FIG. 7 is not limited thereto. [0124]
  • The example of FIG. 7 particularly outlines steps for efficient handling of text elements and for obtaining translated text elements. First, a parser transfers text elements into an XML text file or a database (Step [0125] 701). Transferring the text elements may be effected by extracting text elements from image creation files or from screen displays including immutable image files (e.g., HTML or XML files for display at a client device). Transferring the text elements to an XML file or a database facilitates efficient handling of the text elements. The database may be provided internal to a data processing device such as the data processing system 200 shown in FIG. 2, or at any other location. The XML file including the text elements may be visualized for an operator using an XML editor.
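Transferring text elements into a database, as in Step 701, might be sketched with the standard sqlite3 module; the table schema is an assumption of this sketch:

```python
import sqlite3

# Transfer extracted text elements into a database for efficient handling;
# the schema is illustrative only.
conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE text_elements (lang TEXT, element TEXT)')
conn.executemany('INSERT INTO text_elements VALUES (?, ?)',
                 [('en', 'system settings'), ('en', 'user groups')])

# Retrieve the text elements for a given language.
rows = conn.execute('SELECT element FROM text_elements '
                    'WHERE lang = ? ORDER BY element', ('en',)).fetchall()
print([r[0] for r in rows])   # ['system settings', 'user groups']
conn.close()
```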
  • Next, a web-based tool or an XML editor obtains translated image text elements (Step [0126] 702). The text elements may be automatically extracted from the XML text file or database and transferred to the translation service, which returns translated text elements as described previously. Commands for the translation service may include a desired language. The translation service may be a commercially available translation service or a translation service using an application at the data processing system 200 shown in FIG. 2 and as described below.
  • After translation, the web-based tool or the XML editor merges the translated text elements received from the translation service back into the XML file or the database (Step [0127] 703). Accordingly, a collection of different text elements, each available in several languages, may be provided in the XML file or database. Thereafter, the graphic application program (e.g., GIMP) as discussed above generates immutable image files based on the translated image text elements as described previously (Step 704).
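The merge of Step 703 can be sketched as follows: translated strings returned from the translation service are inserted as additional language-tagged elements next to their source-language counterparts. The XML layout (`entries`/`entry`/`text` with a `lang` attribute) is an assumed structure, since the patent's actual text element file is shown only as a figure.

```python
# Sketch of Step 703: merge translated text elements back into the
# XML file, so each entry carries the string in several languages.
# The XML layout here is an illustrative assumption.
import xml.etree.ElementTree as ET

def merge_translations(xml_text, translations, target_lang):
    """Add a <text> child in target_lang next to each source entry."""
    root = ET.fromstring(xml_text)
    for entry in root.findall("entry"):
        src = entry.find("text")
        translated = translations.get(src.text)
        if translated is not None:
            t = ET.SubElement(entry, "text", lang=target_lang)
            t.text = translated
    return ET.tostring(root, encoding="unicode")

source = '<entries><entry><text lang="de">Zurück</text></entry></entries>'
merged = merge_translations(source, {"Zurück": "Back"}, "en_US")
```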
  • FIG. 8 is a flowchart depicting exemplary optional steps of a method for generating immutable image files in a number of languages. The steps of FIG. 8 may be carried out using the system shown in FIG. 2; however, FIG. 8 is not limited thereto. [0128]
  • First, in one option, text elements in a first language are input (Step [0129] 801). A user may generate the text elements manually. Alternatively, a parser may obtain an image creation file (e.g., an HTML or XML file) (Step 802). Next, the parser extracts text elements in a first language from the image creation file (Step 803). Then, the parser generates a text element file (e.g., text element file 228 as discussed above) with the text elements (Step 804). After generating the text element file 228, the parser transfers the text elements from the text element file into a database (Step 805).
  • Next, a translator module (e.g., translator module [0130] 218) invokes a translation service (e.g., the system associated with server 260) to translate the text elements into a second language (Step 806). As a first part of Step 806, the translator module 218 invokes a translation service for translating the text elements in the first language into text elements in the second language (Step 806 a). As a second part of Step 806, the translation module 278 then inserts the text elements in the second language into the database (Step 806 b). An example of a technique for translation is given in more detail below.
  • Next, the [0131] translator module 218 obtains the translated text elements and merges the text elements in the second language into the text element file 228 as discussed above (Step 807). Thus, the text element file 228 now includes text elements in the first language and corresponding text elements in the second language.
  • A parser then obtains display characteristics of the image (e.g., color, shape, dimensions, and position of text within the image) from at least one [0132] template 230 as discussed above (Step 808). Next, an operating system (e.g., operating system 250) obtains at least one image creation script (e.g., script 222) (Step 809). A user may create the at least one script 222 for the immutable image files manually. Alternatively, the script may be generated automatically. According to one example, a script may be generated for each individual text element in each particular language. Further, a single script may be generated for a number of text elements in one language, or for one text element in a number of languages, for use by the operating system in conjunction with an image manipulation program (e.g., image manipulation program 224).
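The automatic per-element, per-language script generation just described might look like the following sketch. The generated script body is a hypothetical placeholder (not real Script-Fu), and the template and file names are assumptions.

```python
# Sketch: generate one image-creation script per language for a given
# text element. The script body is a hypothetical placeholder; real
# scripts appear only as figures in the original publication.
SCRIPT_TEMPLATE = '(make-image "{template}" "{text}" "{lang}_{base}")'

def generate_scripts(elements, template, base):
    """elements maps language codes to translated text strings."""
    return {
        lang: SCRIPT_TEMPLATE.format(template=template, text=text,
                                     lang=lang, base=base)
        for lang, text in elements.items()
    }

scripts = generate_scripts({"de": "Zurück", "en_US": "Back"},
                           "button.xcf", "button.gif")
```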
  • Next, the [0133] script 222 instructs the image manipulation program 224 to generate at least one immutable image file 232 with a text element in the second language as discussed above (Step 810). The image manipulation program then stores the immutable image file in a location specified by desired location information stored in the text string file 226 as discussed above (Step 811). The immutable image file 232 may then be used to serve users with different language preferences.
  • In a further example shown below, the [0134] image manipulation program 224 described previously includes the GNU (GNU's Not UNIX) Image Manipulation Program (GIMP). As in the example above, creation of immutable image files [0135] 232 and 234 is accomplished by a GIMP call, a GIMP input file (e.g., text string file 226), and a GIMP script (e.g., script 222). Code examples of a GIMP call, a GIMP input file (file name: .utf8.txt), and a GIMP script are given below. The GIMP call is initiated (e.g., on a command line of an operating system such as Windows, or from a batch file by the operating system) and in turn instructs the GIMP to input text string information from the input file and template information from the call itself.
  • The exemplary GIMP input file shown below (which corresponds to the text string file [0136] 226) includes exemplary desired location information for storing generated immutable image files. The desired location information is designed so that each file generated for a particular template is assigned a file name with the same general base name and a distinguishing prefix that includes a language code (e.g., “de” for German, “en_US” for U.S. English). The files are stored in a common directory. This naming and storage convention enables convenient updating of image creation programs such as HTML or XML code: the prefix of a referenced image file is simply modified to agree with the language preference of a particular user, enabling the user's browser to access the immutable image files generated for the user's preferred language.
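The naming convention just described can be sketched in a few lines of Python: every image generated from one template shares a base name, prefixed with a language code, and an HTML reference is retargeted by swapping that prefix. Directory and file names are illustrative; the set of language codes is an assumption.

```python
# Sketch of the prefix-based naming convention. The language codes in
# LANGS and the file names are illustrative assumptions.
import posixpath

LANGS = ("en_US", "de", "fr")  # known language-code prefixes

def image_path(directory, lang, base):
    """Build a path like images/de_button.gif in the common directory."""
    return posixpath.join(directory, "{}_{}".format(lang, base))

def retarget(path, new_lang):
    """Swap the language-code prefix of a referenced image file."""
    directory, name = posixpath.split(path)
    for lang in LANGS:
        if name.startswith(lang + "_"):
            name = name[len(lang) + 1:]
            break
    return image_path(directory, new_lang, name)

p = image_path("images", "de", "button.gif")
q = retarget(p, "en_US")
```

Retargeting only touches the prefix, so the HTML or XML code referencing the image needs no other change when a user's language preference differs.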
  • An example of an XML text element file is also shown below. The XML file is an example of a file initially created by a developer when developing a system using immutable image files for display, in order to describe elements of the immutable image file such as text strings which may need to be translated for multilingual users. The XML file as shown has been processed by extracting the original German text strings, translating the strings into other languages, and then merging the translated strings back into the text element file as discussed above. The file may be used to generate a GIMP input file by extracting strings in different languages for insertion into the GIMP input file as discussed above. [0137]
    [Figures: the code listings of the exemplary GIMP call, GIMP input file, GIMP script, and XML text element file referenced above appear as images in the original publication.]
  • Translation of Text
  • A technique for translating text strings is discussed below. [0138]
  • The discussion which follows relates to database application methods and programs for localizing software, translating texts contained therein and to support translators in localizing software and translating texts. It also relates to a translation system for localizing software, translating texts and supporting translators. [0139]
  • Conventional translation programs attempt to translate texts on a “sentence by sentence” basis. Such programs analyze the grammatical structure of a sentence to be translated and transfer it into the grammatical structure of a target language sentence by searching for grammar patterns in the sentence to be translated. The text to be translated may also be called the source language text. Translation is usually accomplished by analyzing the syntax of the sentence, searching for main and subordinate clauses. For this purpose the individual words of the sentence need to be analyzed for their attributes (e.g., part of speech, declension, number, and case). Further, conventional translation programs attempt to transform the grammatical form from the source language grammar to the target language grammar, translate the individual words, and insert them into the transformed grammatical form. If the translation program works correctly, the translated sentence exactly represents the sentence to be translated. One difficulty is that many words in the text to be translated are ambiguous and correspond to several different words in the target language. Therefore, as long as translation programs cannot use thesauruses specifically adapted to detect the correct semantic meaning of sentences according to context, machine translations remain imperfect. [0140]
  • Another drawback of the translation programs discussed above is that slang and slang grammar are not translatable, as slang uses grammar rules different from the standard grammar rules. Additionally, the complex grammar rules are difficult to describe with mathematical laws and therefore incur large computational costs even for simple sentences. Furthermore, ambiguous source language sentences are especially difficult to translate into target sentences with the same ambiguous grammar structure. [0141]
  • Another drawback of conventional translation programs is that translation performance is limited: apart from update options for the dictionaries and the grammar rules, such programs are not adaptive and repeat the same translation errors, as the user cannot change the grammar rules by himself/herself without risking a total breakdown of the translation apparatus. [0142]
  • Another problem is connected with software localization (i.e., the adaptation of software to a certain local, geographic, or linguistic environment in which it is used). Localization must be done with internationally available software tools in order to ensure proper functionality for users from different cultural and linguistic backgrounds. The problem is compounded by the fact that different translators work simultaneously on different aspects of the software. For example, one team translates the text entries in the source code of the software, another team translates the manuals, while a third team translates the “Help” files. To achieve a correct localized version of the software it is beneficial that the three teams use the same translations for common words, so that the manuals, “Help” files, and the text elements in the software itself consistently match each other. To achieve this, the teams conventionally have to agree on the keywords and the respective translations used in the software. This is time consuming and requires good communication among the teams. [0143]
  • An exemplary method described below localizes software by adapting the language of texts in the source code of the software. Text elements are extracted from the source code, and untranslated text elements are searched for among the extracted text elements. Translation elements are retrieved for each untranslated text element (e.g., by searching a database with pre-stored matching source text elements and respective translations). The untranslated element of the source language is then associated with the retrieved translation elements, if such translation elements are found. [0144]
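The four steps above can be sketched with an in-memory stand-in for the translation database. The regex extraction of quoted string literals stands in for a real source-code parser, and the example strings are illustrative.

```python
# Sketch of extract / search / retrieve / associate, using a dict as a
# stand-in for the translation database. Regex extraction of quoted
# string literals is a simplification of real source-code parsing.
import re

def localize(source_code, db):
    """Return {source string: translation or None} for string literals."""
    extracted = re.findall(r'"([^"]+)"', source_code)   # extract
    association = {}
    for element in extracted:                           # search untranslated
        association[element] = db.get(element)          # retrieve + associate
    return association

db = {"File": "Datei", "Edit": "Bearbeiten"}
src = 'menu.add("File"); menu.add("Quit")'
result = localize(src, db)
```

An element with no database match (here `"Quit"`) stays associated with `None`, leaving it for manual translation.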
  • For this example, the method supports a software localizer (or user) by offering one or more possible translations of a text element contained in the source code of a software program. In this way the user may simply review the translation proposals retrieved from a database. The method ensures the adaptability of different software to certain cultural, geographic, or linguistic environments. [0145]
  • The method further includes the steps of validating the retrieved translation elements and merging the source code with the validated translation elements. With these steps the user can easily decide whether he/she wants to accept (validate) or discard a suggested translation. The validation of the suggested translation may be executed automatically if the user has validated the same translation element in another section of the source code. Merging the source code with the validated translation elements enlarges the collection of text elements for one language in the source code with the validated translation elements in the other language(s). The merging may be executed after translation of the whole text. [0146]
  • The method further includes a compiling step to compile the merged source code to generate a localized software program in binary code. This software program preferably includes one or more text libraries in languages different from the language of the original source code. Compilation may be the last step in the generation of a localized software version, depending on the programming language. An additional step of binding may be executed when the programming language requires the binding of sub-modules or sub-libraries after compilation. In this case compilation is followed by one or more binding steps. Depending on the programming language, the compilation may include an assembling operation. [0147]
  • Further, the method includes an adaptation of the language of texts related to the software (e.g., manuals and Help files) for illustrating the use of the software and supporting the user accordingly. Software is often provided with additional texts such as “Help” files, manuals, and packaging prints. Such texts are provided with translations matching the translations used in the source code of the software. The translation of software related texts may be executed by the additional steps of searching untranslated elements of the software related text, retrieving translation elements for each untranslated text element of the software related text on the basis of the association of the retrieved source code translation elements, and associating the untranslated elements of the software related text with the retrieved translation elements. Thereby it is ensured that a consistent translation is performed for both the source code texts and the software related texts. [0148]
  • The step of retrieving translation elements for each untranslated text element of the software related text on the basis of the association of the retrieved source code translation elements may include previous validation, merging or compilation of the associated source code translation elements, with a preference on the validated entries. [0149]
  • The method may be executed in a local area network (LAN), wherein different teams of translators access and/or retrieve the translation suggestion from the same table and/or dictionary and/or library and/or storage unit. To simplify the access and to standardize the translation of a software, the storage unit may be related to a single language software localization (i.e., there is one translation sub-dictionary for each language in the localized software version). This feature may be combined with a composed translation dictionary structure, so that a number of keywords are stored in and retrieved from a localization specific database, by which keywords of minimum requirements are defined. Text elements not containing the keywords may be retrieved from a common database. [0150]
  • Another exemplary method is provided by which text elements are translated, either in conjunction with or independently of the translation of software source code. The method utilizes text elements of the software related text and their respective translations stored in a database. First, all untranslated text elements are searched for in the software related text. Next, translation elements are retrieved for each untranslated text element (e.g., by searching a database with pre-stored matching source text elements and respective translations). The untranslated element of the source language is then associated with the retrieved translation elements, once such translation elements are found. [0151]
  • In this example the method supports a translator by offering one or more possible translations of a source text element. The user or translator may easily decide whether he/she wants to accept or discard the suggested translation. The main difference from conventional translation programs is that the grammar of the source text element is not checked. This is not necessary, as only the text structure of the source language has to match a previously translated text element. It is similar to having a person who cannot read the language compare a pattern (the source text element) with previously stored patterns in a card index, and attach the respective file card (with the translation element) at the place where the pattern was found in the source text. The translator may then accept or discard each index card instead of translating every text element himself/herself, thereby improving translation speed. The performance of the pretranslation is therefore dependent only on the retrieval speed of the file cards and on the size of the card index, and not on the linguistic abilities of the user performing the translation. In another example, two or more users may search the card index at the same time. Additionally, in another example, the users can take newly written index cards from another user and sort them into the card index. [0152]
  • Further, the associating is performed by entering the retrieved translation elements into a table. The use of tables simplifies the processing of the text elements and enables the developers to use database management systems (DBMS) to realize the example in a computer. The use of tables further simplifies the access of retrieved translation elements, if they are retrieved from a database. [0153]
  • The use and interaction of tables in a database are well known and easily implemented tasks, especially when the tables include pre-defined translated phrases, as compared to an ambiguous grammar analysis. The transfer of text to database tables offers additional benefits, as comments, markings, and translation related background information, such as a language code and a translation code, may be stored in additional columns of the table. With the “patterns” stored in columns, the job of users performing the translation in the previous example is simplified to taking the next pattern in the column. [0154]
  • The user may accept or discard entries in the pretranslation table by interaction. The interaction with a user has various advantages. First, a user may decide for himself if he wants to accept the proposed entry. Second, if multiple entries are proposed the user can select the best one, or may select one of the proposed lower rated translation suggestions and adapt it to the present text to be translated. The user may choose whether he wants to select one of the proposed translation elements in the pre-translation table, or whether he wants to translate the text element manually. [0155]
  • The translation method may include updating the dictionary database with the validated translation element. By updating the database with new text elements and translation entries the database is extended and improved. This extension may be executed automatically (e.g., by automatically updating the dictionary database upon validation of a translation element). The updating of the database may be executed according to different procedures. For example, the database update may be executed by transferring all validated translation elements from the user device to the database, so that the database itself may determine whether a validated translation element is already stored or is to be stored as a new translation element. As another example, the database may be updated by transferring only newly generated database entries to the database. It may be advantageous for a user, a user device, or the database to check whether a validated text element/translation element pair is already present in the database, to prevent a large number of similar or identical entries. An additional validation process initiated by a second translator may be performed before updating the translation database, to prevent invalid translation elements from being dragged into the translation database. The second validation may easily be implemented, as the translator for the second validation has only to determine whether a translation element matches the respective text element or not. It may also be possible to discard certain translation proposals as invalid by a user input, to prevent the system from offering such a translation element a second time. This feature may be embodied as a “negative key ID” entry in a predetermined column of the translation table. [0156]
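The duplicate check described above can be sketched as follows, with a dict of lists standing in for the translation table. The storage layout is an assumption.

```python
# Sketch: update the dictionary database with a validated pair only if
# it is not already present, avoiding identical duplicate entries.
# The dict-of-lists layout stands in for a real translation table.
def update_dictionary(db, source, translation):
    """Add a validated pair; return True if the database changed."""
    entries = db.setdefault(source, [])
    if translation in entries:   # pair already stored: nothing to do
        return False
    entries.append(translation)
    return True

db = {}
changed1 = update_dictionary(db, "Open file", "Datei öffnen")
changed2 = update_dictionary(db, "Open file", "Datei öffnen")
```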
  • An index may be generated in an additional step indicating the grade of conformity between an element of the text and an entry of the dictionary database. This index is useful if the text element to be translated and the text entries stored in the database differ only slightly, so that a translator may easily derive a correct translation from a slightly different database entry. Matches between an untranslated text element and a stored translated text element that are close but not exact may be described as fuzzy matches, and the generated index may be described as a “fuzziness” index of the matching element. [0157]
  • The number of entries in the pretranslation table of one text element is related to the grade of matching. The number of matching translations to be retrieved from the translation database has not been limited in the preceding discussion, so that more than one matching entry of the database may be retrieved. If more than one entry may be found, the number of retrieved entries may be limited to a predetermined value of the matching grade index. Alternatively, the number of retrieved database entries may be limited by a predetermined value. Both limits avoid the possibility that a human translator has to process a large number of translation proposals. The number of translation proposals may be limited by a predetermined value (e.g., of the matching index). If the predetermined matching index is set to zero, the user in the above example would retrieve the whole database for each source text element in the table. [0158]
  • A translation entry of the source language text element entry is accepted as a valid translation automatically if the index indicates an exact match of the untranslated text element with the source language text element entry. This option extends the translation support method described previously into an automatic translator for text elements with exactly matching database entries. This automatic translation method may relieve the translator if the texts to be translated are limited to a small lingual field combined with a very large database. Very large databases suggest a centralized use of the translation method in computer networks such as local area networks or the Internet, enabling fast generation of a very large universal translation database if any network user can access the database for translation tasks, improving the database with each accepted translation. To prevent abuse of the database, entries may be marked as authorized by the translators of the provider. [0159]
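The last three paragraphs, taken together, can be sketched with a similarity ratio as the matching index. The choice of `difflib.SequenceMatcher` and of a ratio in [0, 1] as the "fuzziness" index is an illustrative assumption, not the patent's definition; the threshold and example strings are likewise assumptions.

```python
# Sketch: a matching ("fuzziness") index per database entry, a threshold
# limiting the proposals, and automatic acceptance on an exact match.
# The ratio-based index is an illustrative choice.
from difflib import SequenceMatcher

def propose(text, db, threshold=0.6):
    """Return (auto_accepted, proposals sorted by descending index)."""
    scored = []
    for source, translation in db.items():
        index = SequenceMatcher(None, text, source).ratio()
        if index == 1.0:
            return translation, []   # exact match: accept automatically
        if index >= threshold:
            scored.append((index, source, translation))
    scored.sort(reverse=True)
    return None, scored

db = {"Save file": "Datei speichern",
      "Save all files": "Alle Dateien speichern"}
auto, proposals = propose("Save file", db)     # exact match
auto2, proposals2 = propose("Save files", db)  # fuzzy matches only
```

Setting the threshold to zero would, as noted above, return the whole database for each source text element.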
  • The method may include marking translated or untranslated or partially translated text elements. The markings may be used to indicate the status of the translation, distinguishing between different grades of translations (e.g., fully, partially, or untranslated). The markings may be incorporated in the text, or may be added in a separate column into the translation table. The markings may be used to indicate whether the specific table entry has been translated by a translator, or is an automatically generated proposal. The markings are especially useful if the front-end used by the translator depicts the table entries as conventional text without a visible table structure. The markings may be colored text, colored background, bold or oblique depiction of the text, bold or colored tables or frames. [0160]
  • Markings of the table entries may indicate a count of the number of times this database entry has been used or has been accepted by a translator. The markings may indicate the differences between the translation suggestion and the actual text, to simplify the work of the translator. The marking may indicate a single word differing in the database entry and in the translation. The marking may indicate the quality of the suggested translation. The marking may indicate the similarity to pre-translated database entries, so that the user may recognize whether the user selected an unusual grammar or word translation. A marking may be applied to the source text, indicating a grammar structure, so that the main clause is marked with one color and the subordinate clauses in another color. Additionally, the flexion of subordinate clauses to words in the main sentences may be indicated. [0161]
  • Single text elements may be stored in any logical form in the database. For example, a text may be stored as a tuple of key identification numbers of sentences, and/or the texts and sentences may be stored as a tuple of key identification numbers of words and/or grammar references. The texts, words and/or grammar references may be stored as single texts in the database. [0162]
  • The various text elements described above may be combined in a single translation method. In this method the database is first searched for text sections, then for sentences, and finally for words. The three searches may be executed successively with or without intermediate user interactions. In the first case the user first checks the suggested text sections, then the computer searches for translations of sentences, to be checked by the user, and finally searches for single words to be checked by the user. The computer may successively check the three hierarchic levels of text elements, preferably only continuing at a lower hierarchic level if no suggestion has been found at the higher hierarchic level, leading to a mix of suggested text section, sentence, and word translations. To clarify such a mix of suggestions, a front-end of the translation support program may mark the hierarchic level of each suggested translation. This technique may be compared with cutting down a grove: you can first cut all trees and then cut all branches, or you can cut down one tree, cut its branches, cut down the next tree, and so on. Similarly, the translation program may first compare all text sections with the database, then all sentences, and next all words; or it may compare a text section, then compare all sentences of that text section if the text section could not be found in the database, and continue by searching the database for all words of sentences that could not be found in the database. The search for text sections or for words may be omitted. With these combined features a translator can easily utilize a database to generate a pretranslation, relieving the user of manually searching paper dictionaries and grammar books. [0163]
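The tree-by-tree variant of the hierarchic search can be sketched as follows: the database is consulted for the whole section first, each sentence is tried next, and only sentences with no match are broken into words. The splitting rules and example strings are simplifications for illustration.

```python
# Sketch: hierarchic lookup, descending to a lower level only when the
# higher level yields no suggestion. Sentence/word splitting is
# deliberately simplistic.
def translate_hierarchically(section, section_db, sentence_db, word_db):
    if section in section_db:                    # level 1: whole section
        return [("section", section_db[section])]
    results = []
    for sentence in section.split(". "):
        sentence = sentence.rstrip(".")
        if sentence in sentence_db:              # level 2: sentence
            results.append(("sentence", sentence_db[sentence]))
        else:                                    # level 3: single words
            for word in sentence.split():
                results.append(("word", word_db.get(word)))
    return results

out = translate_hierarchically(
    "Open the file. Close it.",
    {},                                          # no whole-section match
    {"Open the file": "Öffnen Sie die Datei"},
    {"Close": "Schließen", "it": "es"})
```

Tagging each result with its hierarchic level mirrors the front-end marking described above.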
  • The method further includes processing text elements and translated text elements stored in the database. The processing may include a preprocessing technique to transfer an arbitrary source text to a database table, including such steps as converting different text file formats to the file format used in the database to simplify the retrieval of matching database entries. Similarly, translated text in the database may easily be transformed to an arbitrary file format. The transfer of the text to a table requires splitting the source text into single text elements. The processing may include a post-processing technique (e.g., applying statistics to extract translation rules). By means of statistics, analyzing the behavior pattern of a user, a translation device may be enabled to translate sentences automatically. Thus, the method may include analyzing the user input for particular difference patterns between source text and database entries, enabling the generation of translation suggestions assembled from database entries and user input patterns. The method may further include operations for transferring texts to and from a database, affording useful options such as simplification of the pretranslation of the source code text, as the source code text may be entered in the format of text elements in a database compatible form such as a table. [0164]
  • The method may further include sorting the text elements. A sorting operation could be performed prior to or after the generation of translation suggestions. Thus, a user may sort the text elements such as the text elements of the source text into alphabetic order or sort the source text for certain “keywords”. A special application of a sorting operation may be sorting the sentences according to grammar structures or the length of sentences or the number of words or characters used in a sentence. By sorting grammar structures, the method may be used to simplify the translation and the use of the method, as the translator starts with simple grammar structures working himself/herself through to the complex grammar structures. This enables a user to practice with simple sentences to prepare for complex sentences. The user may adapt the order of the sorting by transferring a text element to the end of a sequence in the ordering. [0165]
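The sorting step can be sketched by ordering source sentences from simple to complex; word count and character length serve here as a crude proxy for grammatical complexity, which is an illustrative assumption.

```python
# Sketch: sort source sentences from simple to complex so a translator
# can start with the easy ones. Word count and length approximate
# grammatical complexity for illustration.
def sort_by_complexity(sentences):
    return sorted(sentences, key=lambda s: (len(s.split()), len(s)))

sentences = [
    "The file could not be opened because the path is invalid.",
    "Save file",
    "Open the selected document",
]
ordered = sort_by_complexity(sentences)
```

A user could still override this order, e.g. by moving a text element to the end of the sequence as described above.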
  • An exemplary software tool includes program code portions for carrying out the steps of the methods described above when the program is run on a computer. The software tool may be implemented in a database management system (DBMS) to offer the user additional translation features. The software tool may be implemented in a translation program to combine conventional features with features according to the examples described above. The combination of a conventional translation program and a program according to these examples enables a user to use an intelligent and adaptive translation program, the performance of which increases with the number of translated sentences. Such a combination may utilize an additional grammar filter to check the source text and the user input, to prevent errors in the translation, and to improve the quality of a translation by marking the translated passage in the text with an “equivocal source text” mark. [0166]
  • A dictionary database may contain the data of a sentence based dictionary. The database, especially a database containing a sentence based translation dictionary may be applied in translation devices, or to extend the capabilities of an existing conventional translation device. [0167]
  • The system may include a sentence based dictionary for translation purposes. The sentence based dictionary differs from conventional dictionaries in that the translated units are not words but whole sentences. Such a dictionary, if printed as a conventional book, might be larger than the “Encyclopedia Britannica” and may appear a bit bulky, but it would require only a single access per sentence, without the need of checking the grammar. The dictionary may be incorporated in an electronic book, simplifying the access, reducing the weight, and improving the user interface of the dictionary. The electronic dictionary may include a scanner for simplified source language text input. [0168]
  • Another example provides a network system for enabling and/or supporting at least one user to localize software by translating texts in the software by means of a database. The network system includes at least one of the translation or software localization apparatuses described previously, and at least one translation database for exchanging data with the at least one apparatus. The software to be localized may be transferred to the apparatuses via the network. [0169]
  • FIG. 9[0170] a is a flowchart depicting exemplary steps for translating strings. There are shown the steps of a method for localizing software by changing, adapting or translating the language in a source code of the software (e.g., by translation), wherein the method includes extracting text elements from the source code of the software (step 901); searching untranslated elements among the extracted text elements (step 903); retrieving translation elements for each untranslated text element (step 905); and associating the untranslated elements of the source code with the retrieved translation elements (step 909).
  • In this basic method, the translation elements are retrieved from a [0171] database 907. The associated untranslated elements and the translation elements can then be transferred to a user interface to be displayed on a screen 915. This basic method for translating texts can be extended to the method described below.
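The four steps of FIG. 9a may be sketched, for illustration, as follows; the extraction rule (double-quoted string literals), the sample source code and the database layout are assumptions made for this example only:

```python
import re

def localize(source_code, database):
    """Hypothetical sketch of steps 901-909 of FIG. 9a."""
    # Step 901: extract text elements from the source code
    # (here assumed to be double-quoted string literals).
    text_elements = re.findall(r'"([^"]*)"', source_code)
    # Step 903: search for untranslated elements among the extracted elements.
    untranslated = [t for t in text_elements if t not in database.get("done", ())]
    # Steps 905 and 909: retrieve a translation element for each untranslated
    # element from the database and associate the two.
    return {t: database["translations"].get(t) for t in untranslated}

# Illustrative database and source code fragment.
db = {"done": set(), "translations": {"Open": "Öffnen", "Close": "Schließen"}}
pairs = localize('menu.add("Open"); menu.add("Close");', db)
```

The resulting pairs of untranslated elements and retrieved translation elements would then be handed to the user interface, as described for screen 915.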
  • FIG. 9[0172] b shows a block diagram illustrating an exemplary apparatus for localizing software by changing, adapting or translating the language in a source code of software (e.g., by translation).
  • The [0173] apparatus 920 includes a component for extracting 922 text elements from source code 930 of the software. The component for extracting 922 may include a data processing device or a program module executed at a data processing device. Moreover, the component for extraction 922 may be realized by a code module containing instructions for extracting. The text elements may be extracted from the source code of the software according to the step 901 shown in FIG. 9a.
  • Further, the [0174] apparatus 920 includes a component for searching 924 untranslated elements of the extracted text elements. The component for searching 924 may include a data processing device or a program module executed at a processing device. Moreover, the component for searching 924 may be realized by a code module containing instructions for searching. The untranslated elements of the extracted text elements may be searched according to step 903 shown in FIG. 9a.
  • Further, the [0175] apparatus 920 includes a component for retrieving 926 translation elements for each untranslated text element. The component for retrieving 926 may include a data processing device or a program module executed at a processing device. Moreover, the component for retrieving 926 may include a code module containing instructions for retrieving. The translation elements are retrieved from a database 907. The translation elements are retrieved for each untranslated text element according to step 905 shown in FIG. 9a.
  • Further, the [0176] apparatus 920 includes a component for associating 928 the untranslated elements of the source code with the retrieved translation elements. The component for associating 928 may include a data processing device or a program module executed at a data processing device. Moreover, the component for associating 928 may include a code module containing instructions for associating. Untranslated elements of the source code are associated with the retrieved translation elements according to step 909 shown in FIG. 9a.
  • FIG. 10 depicts a flowchart of an exemplary software localization and related text translation method. In FIG. 10 the method is divided by a dotted line separating a [0177] branch 1002 to 1014 describing the localization of the software from a branch 1022 to 1032 describing the translation of software related texts. The method depicted in FIG. 10 starts with a start element 1000. The start element presumes the presence of software source code and/or a software related text. As described in FIG. 9a, both branches start with an extraction step 1002, 1022 to extract text elements from the source code and/or from software related text. Next, the extracted text elements are searched for untranslated text elements (steps 1004, 1024). In the next steps 1006, 1026 translation elements are retrieved for each untranslated text element from a database 1050. Both branches may access the same database 1050 to retrieve the translation elements. Accessing the same database 1050 provides the advantage that both translations are based on the same set of keyword translations, thus avoiding confusing results such as the manual describing an icon as indicating “Memory” while the same icon in the user interface of the software indicates “Storage”. Thus, this method minimizes the occurrence of inconsistencies between the software and the manuals by using the same unambiguous expressions for describing the same notions.
  • The [0178] steps 1008, 1028 describe the association of the text elements and the translation elements. As described below, this may include the use of pre-translation tables as depicted in the FIGS. 15a, 15 b, 16 a and 16 b. It is to be noted that the software source code translation and the software related text translation may be executed with the same or with different translation tables. The steps 1010, 1030 describe a validation of translation elements. The validation may be executed automatically (or manually by user input). If none of the proposed translation elements can be validated, the method requests a user input of a valid translation element, in step 1016. With the user input of a valid translation, the method can access a new pair of translation elements, that may be utilized in the translation of the other branch. This updating (depicted as the arrow between step 1016 and the database 1050) results in a growing database 1050, with an increasing number of entries and an increasing precision of the translation proposals. One possible updating procedure is depicted in FIG. 13. It should further be noted that the method of both of the branches may be executed in an interleaving fashion.
  • It should further be noted that the temporal distribution of the execution of the steps is not important. Therefore, the steps of the flowchart may be executed by first localizing the software, and then translating the software related texts, wherein the translation of the texts is determined by the translations previously used for the localization. The steps of the flowchart may also be executed by first translating the software related texts, and then localizing the software, wherein the localization of the software is determined by the translation previously used for translating the software related texts. The steps of the flowchart may also be executed by simultaneously translating the software related texts and localizing the software, wherein the first validation of a translation element determines the following translations. This example enables two teams to work simultaneously on localizing software and its manuals, economizing the operations of determining a translation catalogue of keywords. [0179]
  • If all translation elements are valid, these are merged (in the first branch) with the source code of the software (step [0180] 1012), and the source code of the software is compiled (or assembled) to a localized software version (step 1014). If all translation elements are valid in the second branch they are exchanged with the related text elements of the software related text (step 1032). Then the localization of the software is terminated (step 1090).
  • FIG. 11[0181] a depicts a flowchart depicting the steps of an exemplary method for translating elements of source language text. The method includes searching untranslated elements of the source language text (step 1102), retrieving translation elements for each untranslated text element (step 1104), and associating the searched untranslated elements of the source language with the retrieved translation elements (step 1108).
  • In this basic method, the translation elements are retrieved from a [0182] storage unit 1106. The associated untranslated elements and the translation elements are then transferred to a user interface to be displayed on a screen 1110. This basic method for translating texts can be extended to the method described in FIG. 12. To extend this method to a software localizing method, the storage unit 1106 may have been previously provided with different keyword translations from a software localization, and can be used to translate manuals and help files.
  • FIG. 11[0183] b depicts a block diagram illustrating an example of an apparatus for translating elements of a source language text. The apparatus 1120 includes a component for searching 1122 untranslated elements of the software related text 1130. The component for searching 1122 may include a data processing device or a program module executed at a data processing device. Moreover, the component for searching 1122 may be realized by a code module containing instructions for searching. Untranslated elements of the source language text may be searched according to step 1102 shown in FIG. 11a.
  • Further, the [0184] apparatus 1120 includes a component for retrieving 1124 translation elements for each untranslated text element. The component for retrieving 1124 may include a data processing device or a program module executed at a processing device. Moreover, the component for retrieving 1124 may include a code module containing instructions for retrieving. Translation elements for each untranslated text element may be retrieved from a database 1106 according to step 1104 shown in FIG. 11a.
  • Further, the [0185] apparatus 1120 includes a component for associating 1126 the untranslated elements of the source language text with the retrieved translation elements. The component for associating 1126 may include a data processing device or a program module executed at a data processing device. Moreover, the component for associating 1126 may be realized by a code module containing instructions for associating. The searched untranslated elements of the source language may be associated with the retrieved translation elements according to step 1108 shown in FIG. 11a.
  • FIG. 12 depicts a flowchart depicting steps of an exemplary method of translating text. The [0186] start box 1210 of the flowchart presumes the presence of a source language text, a database with stored source language text elements and related translated target text elements, and presumes that the source and the target language are already entered. First, a system program or a user selects the type of text element into which the present source language text has to be split (step 1212). The text element may be any language or text unit (e.g., text sections, sentences, or words). After the splitting of the source text into text elements, the text elements are entered in fields of a translation table (see FIG. 14). Then an untranslated source text element is searched (step 1214), followed by checking the database for a matching source text element (step 1216). If a matching text element is not found, the text element remains untranslated and a respective field in the pretranslation table remains empty or is marked with an “untranslatable” mark (step 1224). Then, the system searches for the next untranslated source text element (step 1222).
  • If a matching text element is found in the database, the system copies the related translation to a pretranslation field in the translation table (step [0187] 1218). The text elements stored in the database can be exact matches or fuzzy matches. An exact match results when the source text element and the database text element are identical. A fuzzy match results when the source text element and the database text element differ only in one or a few words or punctuation marks. Fuzzy matches can be marked with an index indicating the fuzziness of the match (i.e., the quality of the match). In both cases the related translations are copied to a pretranslation table. In the case of fuzzy matches, or in the case of more than one exact match, the pretranslation table entries may be sorted (e.g., by their matching quality) to minimize the time a user spends searching for the best match. More than one translation of a single entry is possible, as translations of single words can be equivocal, and so translations of whole sentences may also be equivocal. Next, the method searches for the next untranslated source text element (step 1222). If a next text element is found, control returns to step 1216, to loop steps 1216 to 1222, until a next text element is not found.
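The matching and sorting of steps 1216 to 1220 may be sketched as follows; the word-difference count used as the fuzziness index, the quality scale and the sample database are illustrative assumptions, not the patent's actual matching rule:

```python
def find_matches(source_element, database, max_word_diff=2):
    """Classify database entries as exact or fuzzy matches of the source
    text element and sort the pretranslation entries by match quality."""
    source_words = set(source_element.split())
    matches = []
    for stored, translation in database.items():
        # Words present in only one of the two elements (symmetric difference).
        diff = len(source_words ^ set(stored.split()))
        if stored == source_element:
            matches.append((100, "exact", stored, translation))
        elif diff <= max_word_diff:
            quality = 100 - 10 * diff      # hypothetical fuzziness index
            matches.append((quality, "fuzzy", stored, translation))
    return sorted(matches, reverse=True)   # best matching quality first

# Illustrative database entries.
db = {
    "The computer is standing on the table": "Der Computer steht auf dem Tisch",
    "The printer is standing on the table": "Der Drucker steht auf dem Tisch",
}
ranked = find_matches("The computer is standing on the table", db)
```

Sorting the pretranslation entries by this quality value places exact matches before fuzzy ones, minimizing the time a user spends searching for the best match.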
  • If the system does not find a next source text element, the method determines whether the actual text element is the smallest text element available (step [0188] 1226). If the actual text element is not the smallest available, a smaller text element is selected (step 1227) and the control returns to step 1214, to loop steps 1216 to 1224, until no next smaller text element is found. If no smaller text element is found the translation is terminated (step 1228).
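The coarse-to-fine loop of FIG. 12 (translate with the largest text element type first, then fall back to smaller elements) may be sketched as follows; the splitting rules, the capitalization of untranslated words and the sample dictionary are assumptions for illustration only:

```python
def translate_coarse_to_fine(text, database):
    """Try sentence-level matches first; retry the remainder word by word."""
    translated, leftover = {}, []
    for sentence in text.split(". "):          # largest element type: sentences
        sentence = sentence.rstrip(".")
        if sentence in database:               # sentence-level match found
            translated[sentence] = database[sentence]
        else:
            leftover.append(sentence)          # postpone for a smaller element type
    for sentence in leftover:                  # smaller element type: words
        words = [database.get(w, w.upper()) for w in sentence.split()]
        # Untranslated words are kept in capitals, echoing the marking
        # of untranslated parts described for FIG. 16a.
        translated[sentence] = " ".join(words)
    return translated

# Illustrative dictionary with one sentence entry and one word entry.
db = {"Good morning": "Guten Morgen", "table": "Tisch"}
result = translate_coarse_to_fine("Good morning. The table", db)
```

Only when no smaller text element type remains does the loop terminate, as in step 1228.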
  • FIG. 13 depicts a flowchart of an exemplary dictionary database updating method. The flowchart represents a detailed execution example of the arrow between the [0189] element 1016 and the database 1050 in FIG. 10. As this figure is related to FIG. 10, the element 1016 is depicted as used in FIG. 10. In FIG. 10 the element is located between the steps 1010, 1030 and the steps 1012, 1032. These connections are indicated with the two arrows and the respective reference numerals. The validating step 1016 leads to a new validated pair of a text element and a respective translation element. To extend the translation database 1050, the two elements are provided with a preliminary key identification number (key ID) (step 1300). Next, the pair of elements is transferred to the database 1050 (step 1302). Next, the database analyzes both elements to generate a new key ID (step 1304). Finally, the text element and the translation element are stored (step 1306). With each new entry stored in database 1050, the translation database grows, improving the matching quality of the translation elements it can retrieve.
  • The preliminary key ID may be unnecessary if the database itself generates a key ID. Further, the steps of updating the database may only include the updating of statistical information. The database may only be informed of how often a known translation element has been used to translate a certain text element. Such statistical information may make it more efficient to provide a translation program with an improved standard database. [0190]
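The updating procedure of FIG. 13 may be sketched as follows; the key-ID scheme (a preliminary key 0 replaced by a database-generated key) and the usage counter are illustrative assumptions:

```python
class TranslationDatabase:
    """Hypothetical sketch of the database 1050 update path of FIG. 13."""

    def __init__(self):
        self.entries = {}   # key ID -> (text element, translation element)
        self.usage = {}     # statistical information: how often a pair was used

    def store_validated_pair(self, text_element, translation_element):
        pair = (text_element, translation_element)
        self.entries[0] = pair                      # step 1302: transfer under a
                                                    # preliminary key ID (here 0)
        key_id = len(self.entries)                  # step 1304: generate a new key ID
        self.entries[key_id] = self.entries.pop(0)  # step 1306: store under final key
        self.usage[key_id] = self.usage.get(key_id, 0) + 1
        return key_id

db = TranslationDatabase()
key = db.store_validated_pair("The computer", "Der Computer")
```

Each validated pair grows the database, and the usage counter reflects the purely statistical updating mentioned above.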
  • FIG. 14 depicts a section of a translation table for use in the example discussed above for a translation method. The first three columns of the translation table as shown represent the minimum required information for a translation table. The table as shown contains four columns: In the first column the table contains a key identification (ID), to identify the entry in the translation table. The key ID may be a single key, or a main ID with attached sub-IDs. The second column contains language IDs. The language IDs indicate information about the language in use, and may be represented, as in the present table, by the international telephone country code of a particular country (e.g., “1” for U.S. English and “49” for German). The language ID may be a single key, or a main ID with attached sub-IDs indicating dialects. The third column of the table contains text elements (words) in the different languages. One problem with translations is that synonyms lead to equivocal translations. To simplify the work of the user the table may contain a fourth column, providing additional information about how to use the present word. The fourth column may contain synonyms or background information for the table entry in the third column. [0191]
  • It may be noted that the different keywords may comprise different columns with different ID numbers (not shown). Different columns may contain the sorting index in the language of the word, for sorting the text element entries according to various sorting rules (e.g., alphabetic, number of characters, and grammar information). Additional columns can contain information related to the actual software number, to enable the users of the system to distinguish whether a translation is validated for the present software localization or has been validated for another software localization. The table may further include different columns to relate the present entry to different other languages, so that only one table with n−1 additional columns is needed for each language, to provide translations between all n languages. Instead of n·(n−1) tables to connect each language with every other, another possible table structure with single tables for each language and a central (e.g., n-dimensional) relation matrix (or tensor) would enable the method to use transitive translations. So if a text element has never been translated from a first language to a third language, the system may determine that the text element has been already validated in a translation to a second language, and has been validated in a translation from the second language to the third language. Thus, the system may be able to derive a translation proposal via the second language. The described examples may also utilize an object oriented database structure for the dictionary database. [0192]
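The transitive-translation idea described above may be sketched as follows: if no validated entry exists from a first language to a third language, a proposal may be derived via a second language in which both partial translations have been validated. The table contents, language codes and function name are hypothetical:

```python
# Validated translation pairs, keyed by (source language, target language).
# All entries are illustrative assumptions.
validated = {
    ("en", "de"): {"memory": "Speicher"},
    ("de", "fr"): {"Speicher": "mémoire"},
}

def transitive_lookup(word, source, target, languages=("en", "de", "fr")):
    """Return a direct translation if validated, else derive a proposal
    via an intermediate (pivot) language."""
    direct = validated.get((source, target), {}).get(word)
    if direct is not None:
        return direct
    for pivot in languages:                    # try each language as a pivot
        step1 = validated.get((source, pivot), {}).get(word)
        if step1 is not None:
            step2 = validated.get((pivot, target), {}).get(step1)
            if step2 is not None:
                return step2                   # proposal derived transitively
    return None
```

Here no en-to-fr entry exists, yet a proposal for "memory" can still be derived via the validated en-to-de and de-to-fr entries.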
  • The translation table may further include information about the origin of the text or the translation, if the software localization system supports translation specific keyword databases. [0193]
  • FIG. 15[0194] a depicts three user interaction interfaces (“screenshots”) embodied on a computer device. The first screenshot 1540 shows a computer program or device depicting all found matches down to a matching quality of 84%. The screenshot 1540 comprises four columns. The first column represents a key ID of the text element (sentence) in the text. The second column represents the language ID of the text element entry. The third column represents the database text element entries. To simplify the decision as to which of the pretranslation table entries matches the source text element most closely, the entries are shown in the source language, and therefore all entries in the language column are “1”. The fourth column can represent two kinds of information. For a source text element the fourth column represents a status. The status of the source text element in the first line and third column of screenshot 1540 is “untranslated”. In the case of pretranslation entries (lines 2 to 5) the fourth column represents a “Quality” value, indicating how exactly the source language entry (lines 2 to 5) in the pretranslation table matches the source text element (in line 1). The depicted percentage values may be determined according to the formula:

x = (b_common / a_ste) · 100,
  • wherein a_ste represents the number of characters of the source text element and b_common represents the number of characters that the source text element and the source language database entry have in common and in the same sequence. [0195]
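One plausible reading of this formula may be sketched as follows; taking b_common as the total length of the character runs that appear in the same sequence in both strings (computed here with Python's difflib, which the patent does not prescribe) is an assumption of this example:

```python
from difflib import SequenceMatcher

def match_quality(source_text_element, database_entry):
    """Quality value x = (b_common / a_ste) * 100, with b_common approximated
    as the summed length of the in-sequence matching character runs."""
    matcher = SequenceMatcher(None, source_text_element, database_entry)
    b_common = sum(block.size for block in matcher.get_matching_blocks())
    a_ste = len(source_text_element)
    return b_common / a_ste * 100
```

An identical database entry yields 100%, while an entry differing in a few characters yields a proportionally lower quality value, matching the “Quality” column of screenshot 1540.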
  • To simplify use, the differing characters are depicted bold, italic and are surrounded with a box. Other markings of the differences may be applied similarly. In [0196] screenshot 1540 the field in the third column, second line is surrounded by a bold line to indicate that this translation suggestion is marked. With a user input the user may select the surrounded translation suggestion by a “mouse click”, an “enter”, or by contact with a touch screen.
  • With the selection the displayed table changes to screenshot [0197] 1542: all unselected translation suggestions are deleted from the screen, and the third line of the table contains the translation of the database entry. Therefore, line 3 column 1 contains the same key ID as the other two lines, and line 3 column 2 contains the language ID of the target language. Line 3 column 3 contains the translation of the source language database entry, and column 4 indicates an exact match of the source text element and the source language database entry. The translation suggestion is marked with a bold frame to indicate that the translation suggestion is selected.
  • With a second user input the depicted table changes to table [0198] 1544. The table contains only one line: in the first field, the key ID of the source text element; in the second field, the language ID; in the third field, the translated text element; and finally, in the fourth field, the status indicating “fully translated”. The first two columns of the tables 1540, 1542, 1544 may be economized. The markings may differ. The method described above may skip the screenshot 1542 if the quality of the selected suggestion is 100%.
  • FIG. 15[0199] b illustrates schematically a flow diagram according to the user interaction shown in FIG. 15a. The untranslated text may be the same as shown in FIG. 15a and references to FIG. 15a will be made to complete the flow diagram of FIG. 15b. FIG. 15b illustrates the steps of the user interactive interface with respect to the example presented in FIG. 15a.
  • In step S[0200] 1, an untranslated text may be processed. The text may be “The computer is standing on the table”. The untranslated text may be a text according to language “1”.
  • In step S[0201] 3, the database may be accessed for retrieving translation elements for the untranslated text of the same language, herein all related translation elements of the language “1”. The retrieving may include a matching step wherein the matching step may result in exact matches or in fuzzy matches.
  • In step S[0202] 5, the retrieved related elements may be received by the computer executing the user interaction interface. The matching quality value according to the formula

x = (b_common / a_ste) · 100

  • is determined for the received related translation elements. [0203]
  • In step S[0204] 6, the untranslated text and the translation related elements are sequenced in a list by the user interaction interface according to the list 1540 depicted in FIG. 15a. All retrieved entries are listed according to their matching quality.
  • In step S[0205] 7, the user may select an entry of the presented list 1540 shown in FIG. 15a. Since the second list entry may have a quality of 100% assigned, the untranslated text and the translation related element match exactly. Due to the matching quality the user may select the second entry of the list, indicated by the bold surrounding lines in the depicted list 1540 in FIG. 15a.
  • In step S[0206] 9, the translation of the translation related element selected in step S7 may be retrieved from the database. The respective key-ID of the selected translation related element may be used for retrieval. The retrieved translation may be a translation of the selected translation related element into the language “49”.
  • In step S[0207] 11, the translation of language “49” of the selected translation related element may be received by the user interactive interface from the database. A list may be prepared presenting the untranslated text, the selected translation related element and the translation into the user's desired language, herein a translation from the language “1”, which is the language of the untranslated text, to the language “49”. The matching quality has been determined in step S5, so that a new determination may be skipped. The matching quality value of the translation related element may be assigned to the corresponding retrieved translation, since the matching quality defined above can be determined only between texts of the same language.
  • In step S[0208] 12, the respective list comprising the key-ID, the language code, the corresponding element and the status or quality, respectively, may be presented in a list illustrated as list 1542 in FIG. 15a.
  • In step S[0209] 13, the user confirms the translation of the untranslated text by selecting the translation. The selection may be indicated by bold surrounding lines, as shown in list 1542 in FIG. 15a.
  • As the matching quality of the untranslated text and the translation related element retrieved from the database indicates a value of 100% the translation is an exact translation. The respective entry is available in the database such that no additional entry may have to be added to the translation database. [0210]
  • In step S[0211] 14, the translation of the untranslated text has been done successfully and is finished. The user interactive interface may present the next untranslated text for translating the user and may start again with step S1.
  • The example shown is related to a 100% matching translation. Thus, the exact translation of the untranslated text was found in the database. Therefore, no further database entries may need to be included in the database. FIG. 16[0212] a and FIG. 16b present an example of a non-exactly matching untranslated text involving the generation of a new database entry.
  • FIG. 16[0213] a depicts another set of user interaction interfaces (“screenshots”). FIG. 16a shows an example of how the examples described above may be embodied on a computer device. The first screenshot 1650 shows a screenshot of a computer program or device depicting all found matches in a matching range of 66% down to 61%. The screenshot 1650 comprises four columns. As in FIG. 15a, the first column represents a key ID of the text element (sentence) in the text. The second column represents the language of the text element entry. The third column represents the source text element and database text elements entries. To simplify the decision which of the pretranslation table entries match the source text element most closely, the pre-translation entries are shown in the source language, and therefore all entries in the language column are “1”. Similar to FIG. 15a, the fourth column can represent two kinds of information. For a source text element the fourth column represents a status. The status of the source text element in the screenshot 1650 is “untranslated”. For pretranslation entries (line 2 to 3) the fourth column represents a “Quality” value, indicating the quality of the match between the source language entry in the pretranslation table and the source text element.
  • The depicted percentage values are determined according to the same formula as in FIG. 15[0214] a. The formula used in FIG. 15a may be modified by replacing a_ste by a_dte, representing the number of characters of the database text element. It should be noted that the formula is not limited to the use of characters, but can be applied to words and sentences. The formula may also comprise other terms using grammar structure related values to characterize the matching. Different from FIG. 15a, the retrieved translation suggestions reach a maximum match of only 66%. The closest source text entry in the table is marked with a bold frame.
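The modified formula with a_dte in the denominator may be sketched analogously; as before, approximating b_common with the in-sequence matching character runs of Python's difflib is an assumption of this example, not the patent's prescribed procedure:

```python
from difflib import SequenceMatcher

def match_quality_dte(source_text_element, database_text_element):
    """Variant quality value x = (b_common / a_dte) * 100, where a_dte is
    the number of characters of the database text element."""
    matcher = SequenceMatcher(None, source_text_element, database_text_element)
    b_common = sum(block.size for block in matcher.get_matching_blocks())
    a_dte = len(database_text_element)
    return b_common / a_dte * 100
```

Normalizing by the database text element length penalizes database entries that are much longer than the source text element, which the a_ste form would not.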
  • With a selection of the bold frame by user input, the system depicts the [0215] screenshot 1652. The screenshot 1652 depicts the source text element in the first line, the retrieved database text entry in the second line, and the translation of the database entry in the third line. As the system can detect and mark the differences between the source text element and the database entry, the system marks the differences in the translation of the database entry. All marked elements are depicted bold, italic, and with a small frame in the figure.
  • Next the user may edit the translation entry in the pretranslation table in an additional operation by keyboard input. Advanced systems may utilize voice recognition algorithms. In the pretranslation table as shown, the user accepted the proposed translation by a user input. [0216]
  • By accepting the proposed translation matching only 66% with the source text element, the system only translates the matching part of the source text element. The [0217] screenshot 1654 represents a partially translated element in the pretranslation table. The table 1654 may further depict the source text element and the retrieved database entry translation. In the first column the table depicts the key ID “1” of the source text element. In the second column the table depicts no entry, as a partially translated sentence is not related to a single language. The entry in the second column can be “1”, or “1-49” as part of the text element remains untranslated. In the third column the partially translated text is depicted, with a bold, italic underlined untranslated part depicted in capitals. The untranslated sentence is surrounded by a dotted bold frame to indicate that the present state of the sentence requires additional processing.
  • To reach the next step, the user may select another text element size to post-process the entry in the preprocessing table, or the user may first translate all other text elements and proceed then with a smaller text element. [0218]
  • The [0219] next screenshot 1656 depicts a second translation stage with smaller text elements. In this stage the text elements are words and/or word/article combinations. The present table depicts a newly generated key ID 127 in the first column, to distinguish the different stages of the translation algorithm. In the second column first line, the language ID has not been changed, as the sentence is still only partially translated, while in the second line the retrieved database entry is marked with a “1” and in the third line the translation suggestion is marked as the target language with a “49”. In the third column the untranslated part of the sentence is marked as in table 1654, its related database entry and its translation suggestion retrieved from a translation database.
  • With a user input accepting the proposed translation, the depicted table changes to table [0220] 1658, containing only one line: In the first field, the key ID of the source text element (in this case the sentence), in the second field the language ID “49”, in the third field the translated text element, and finally in the fourth field the status indicating “fully translated”.
  • With the step of accepting the suggested translation proposal, the system generates a [0221] new database entry 1660 containing the source text element and the accepted translation. For efficient retrieval of the pair, the entries include key IDs and language IDs. The new database entry is not necessarily depicted, as the storage operation can be executed automatically.
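The generation of the new database entry 1660 may be sketched as follows; the record layout and the sample texts are illustrative, with language IDs following the telephone-country-code scheme of FIG. 14 (“1” for U.S. English, “49” for German):

```python
def make_database_entry(key_id, source_text, translation):
    """Build the pair of records stored for an accepted translation:
    the source text element and its translation share one key ID and
    carry their respective language IDs for efficient retrieval."""
    return [
        {"key_id": key_id, "language_id": "1", "text": source_text},
        {"key_id": key_id, "language_id": "49", "text": translation},
    ]

# Illustrative accepted pair from the FIG. 16a example.
entry = make_database_entry(127, "The computer", "Der Computer")
```

The storage of such an entry can be executed automatically, so it need not be depicted to the user.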
  • The translation database may include different 100% matching database entries, due to different equivocal translations. [0222]
  • FIG. 16[0223] b illustrates schematically a flow diagram according to the user interaction shown in FIG. 16a. The untranslated text may be the same as shown in FIG. 16a and references to FIG. 16a will be made in order to complete the flow diagram of FIG. 16b. FIG. 16b illustrates the operations of the user interactive interface with respect to the example presented in FIG. 16a.
  • In step S[0224] 20, an untranslated text may be processed. The text may be “The computer is standing on the table”. The untranslated text may further include a text according to language “1”.
  • In step S[0225] 22, the database may be accessed for retrieving translation elements for the untranslated text of the same language, herein all related translation elements of the language “1”. The retrieving may include a matching step wherein the matching step may result in exact matches or in fuzzy matches.
  • In step S[0226] 24, the retrieved related elements may be received by the computer executing the user interaction interface. The matching quality value according to the formula

x = (b_common / a_dte) · 100

  • is determined for the received related translation elements. [0227]
  • In step S[0228] 25, the untranslated text and the translation related elements are sequenced in a list by the user interaction interface according to the list 1650 depicted in FIG. 16a. All retrieved entries are listed according to their matching quality. The list may present the key-ID, the language of the untranslated text or of the translation related elements, respectively, and the matching quality. The non-fitting text parts of the translation related elements in relation to the untranslated text may be indicated by italic characters.
  • In step S[0229] 26, the user may select an entry of the presented list 1650 shown in FIG. 16a. In this example no entry of the list shows a matching quality of 100%, indicating that no exact match was retrieved from the database. The second entry of list 1650 shown in FIG. 16a may have a matching quality value of 66%. The user may select the best matching entry, herein the second list entry, which may be indicated by bold surrounding lines.
  • [0230] In step S28, the translation of the translation related element selected in step S26 may be retrieved from the database. For this retrieval, the key-ID of the selected translation related element may be used. The retrieved translation may be a translation of the selected translation related element into the language “49”.
  • [0231] In step S30, the translation into language “49” of the selected translation related element may be received by the user interactive interface from the database. A list may be prepared presenting the untranslated text, the selected translation related element and the translation into the desired language, here a translation from the language “1”, which is the language of the untranslated text, into the language “49”. The matching quality has already been determined in step S24, so that a new determination may be skipped. The matching quality value of the translation related element may be assigned to the corresponding retrieved translation, since the matching quality defined above may be determined only between texts of the same language.
  • [0232] In step S31, the respective list including the key-ID, the language code, the corresponding element and the status or quality, respectively, may be presented as list 1652 in FIG. 16a. The non-fitting text parts of the translation related elements in relation to the untranslated text may be indicated by italic characters.
  • [0233] In step S32, the user confirms the translation of the untranslated text by selecting the translation. The selection may be indicated by bold surrounding lines, as shown in list 1652 in FIG. 16a.
  • [0234] In step S33, as no exactly matching translation related element was found in the database, a mixed-language translation result may be presented to the user by the user interactive interface. This can be seen in list 1654 shown in FIG. 16a. The text “The computer” may not have been translated; however, the remaining part of the sentence may have been translated by the operations described above.
  • [0235] In step S34, the user may select the untranslated text part for further translation. This is shown in list 1654 depicted in FIG. 16a.
  • The translation may be continued by attempting to translate an untranslated text part, defining this part as untranslated text and performing the translation operations described above. The untranslated text “The computer” of the language “1” may be processed as described below. [0236]
  • [0237] In step S35, a translation related element corresponding to the untranslated text part may be retrieved from the database. The retrieval may include a matching step, wherein the matching step may result in exact matches or in fuzzy matches. The translation related elements of the language “1” corresponding to the untranslated text part are searched.
  • The retrieval of translation related elements with respect to the untranslated text part may return only a single matching translation related element. In this case, the respective translation (language “49”) may be retrieved automatically from the database, since no further translation related elements need be presented to the user for selection. [0238]
  • [0239] In step S37, after the retrieval of the translation related element and of its translation into the language “49”, a list of the partially translated text, the translation related element corresponding to the untranslated text part, and the translation may be prepared. The matching quality value may be determined. The quality may be determined from the untranslated text part and the retrieved translation related element. The translation may be assigned the same matching quality value.
  • [0240] To indicate the new translation operation with respect to the untranslated text part, a new key-ID may be assigned to the list entries. As indicated in list 1656 shown in FIG. 16a, the key-ID may be “127”.
  • [0241] In step S38, the list of the partially translated text, the retrieved translation related element and the translation (language “49”) may be shown to the user. The respective list can be seen as list 1656 in FIG. 16a. The partially translated text may be assigned no language ID. Status information of the partially translated text may indicate a percentage value corresponding to the translated text part.
  • [0242] In step S39, the user may select the second entry of the list 1656 shown in FIG. 16a. The selection may be indicated by bold surrounding lines.
  • [0243] In step S40, the combination of the two independent translation operations on the untranslated text of step S20 may lead to its complete translation. The complete translation may be presented to the user by the user interactive interface. The key-ID of the translation may show the former key-ID value of step S31. The status information indicates a successful translation. The respective list 1658 is shown in FIG. 16a.
  • [0244] In step S42, as the untranslated text was not found in the database, a new database entry may be generated. The contents of the new database entry are shown in list 1660 of FIG. 16a. The key-ID may be an available key-ID of the database and may be generated automatically as a result of the generation of the database entry. The user interactive interface may present the next untranslated text for translation by the user and may start again with step S20.
  • [0245] A new entry may have been generated for the untranslated text presented in step S20. Further occurrences of this untranslated text may lead to the retrieval of a translation related element with a matching quality value of 100%. The operations of the user interactive interface for exact matches are described above in detail with reference to FIG. 15a and FIG. 15b.
  • The continuous addition of new database entries according to the example described above may ensure that untranslated text is always translated in the same manner into the respective translation language, minimizing the use of synonyms or synonymous text constructions which might irritate and confuse a reader of the translated text. [0246]
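The overall behavior of steps S20 through S42 — rank database candidates by matching quality, reuse an exact match directly, and otherwise record a new entry so that future occurrences of the same text match exactly — can be sketched as follows. The in-memory dictionary and the sample texts are illustrative stand-ins for the database:

```python
# Illustrative in-memory translation memory: key-ID -> {language code: text}.
memory = {
    126: {"1": "The book is standing on the table",
          "49": "Das Buch steht auf dem Tisch"},
}

def quality(untranslated: str, element: str) -> float:
    # Simplified word-overlap matching quality, per the formula above.
    remaining = element.lower().split()
    n = len(remaining)
    common = 0
    for word in untranslated.lower().split():
        if word in remaining:
            remaining.remove(word)
            common += 1
    return 100.0 * common / n if n else 0.0

def translate(text: str, src: str = "1", dst: str = "49"):
    # Steps S22-S25: retrieve and rank candidates of the source language.
    ranked = sorted(
        ((quality(text, entry[src]), key) for key, entry in memory.items()
         if src in entry and dst in entry),
        reverse=True)
    if ranked and ranked[0][0] == 100.0:
        # Exact match: reuse the stored translation (FIG. 15a/15b path).
        return memory[ranked[0][1]][dst]
    # Step S42: no exact match -- create a new entry under a fresh key-ID
    # so this text is translated consistently next time; the translation
    # itself still has to be completed interactively.
    memory[max(memory, default=0) + 1] = {src: text}
    return None
```

A first call with a new sentence returns no translation and stores the sentence; once its translation is added to the entry, subsequent calls retrieve it automatically as a 100% match.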
  • [0247] FIG. 17 depicts an apparatus for translating texts according to another example by means of a database and a database management system, comprising a module 1774 to communicate with a database 1778, a module 1775 for exchanging data, and a processing device 1772 for retrieving elements from a database.
  • [0248] There is also depicted an exemplary realization of the apparatus 1770. The apparatus 1770 is connected via a communication module 1774 to a network 1776. Via the network 1776, the apparatus 1770 can communicate with other devices (not shown) to exchange data (e.g., with the server 201 of FIG. 2). The network 1776 can be the Internet or any other wide area or local area network. The communication module 1774 is connected to a Central Processing Unit (CPU) 1772, which executes the exchange and filtering of data. The CPU 1772 is connected to a data exchange interface 1775 (e.g., a user interaction interface). The data exchange interface 1775 can be connected to a keyboard and a display, or via a network to a terminal device, to interact with a user. The database 1778 contains the translation data to be accessed via the communication module 1774 and the network 1776. To be able to translate texts correctly, the CPU 1772 must be able to receive the source text and to relate the source text to the data stored in the database 1778. The source text may be received via the data exchange interface 1775, or via the communication module 1774 from the network 1776. To simplify the reception of the source text, the user interaction device 1773 and the database 1778 may be incorporated in a single device, which may additionally be a portable device (see FIG. 18).
  • [0249] FIG. 18 depicts a sentence-based electronic translation apparatus. The translation apparatus is embodied as an interactive electronic dictionary 1880. The electronic dictionary can be an electronic book including a display 1881 for depicting a pretranslation table containing a source text, translation proposals, a translated text and translation related information. In the translation table, the first column 1885 indicates a key-ID to identify every text element in the source text. The second column 1886 contains source text elements and translation proposals. The third column 1887 contains matching indices to indicate the similarity between the source text entry and the source text element. The fourth column 1888 contains status symbols to indicate whether the source text element requires further processing or translating. The user can interact with the dictionary by means of a scroll wheel 1883, keys, a scroll roll 1884 and thumb index menu keys 1890. A user can pre-select a character or source text element with the thumb index 1890, scroll the alphabet or the source text elements with the scroll wheel 1883, and select translation proposals with the keys and the scroll roll 1884. The electronic dictionary can further contain an interface 1889 to communicate with a network to update its database or exchange data with another database. The dictionary may further upload new source texts via the interface 1889, or print translated texts on a printer connected to the interface. The electronic dictionary (ED) may further comprise a built-in scanner to recognize source texts autonomously. The dictionary may further comprise an extension slot to connect the ED with handheld scanners, mobile communication devices, or wireless networks. Thus, the ED may be used as a translation device for customs officers, for example to translate foreign bills of lading. The ED may also be used as a conventional electronic book.
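The four-column pretranslation table shown on the display 1881 can be modeled with a small data structure. The row values below are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Row:
    key_id: int   # column 1885: key-ID identifying the text element
    text: str     # column 1886: source text element or translation proposal
    match: int    # column 1887: matching index (similarity in percent)
    status: str   # column 1888: status symbol (further processing needed?)

def render(rows):
    # Render the table roughly as the display 1881 would present it.
    lines = [f"{'key':>4}  {'text':<38}  {'match':>5}  status"]
    lines += [f"{r.key_id:>4}  {r.text:<38}  {r.match:>4}%  {r.status}"
              for r in rows]
    return "\n".join(lines)

table = render([
    Row(126, "The computer is standing on the table", 66, "fuzzy"),
    Row(127, "The computer", 100, "ok"),
])
print(table)
```

The status column could equally hold the percentage of a partially translated text, as in step S38 above.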
An electronic book application has the advantage that users may read a book in its original language, with the option to translate single sentences.
  • [0250] FIG. 19 depicts a network system with an apparatus as described in FIG. 17 or 18 and at least one database 1917. The apparatus can be the server 1900 or one of the client devices 1911, 1912, 1914, 1915, 1922, or 1931. The database may be the database 1917, or any other database integrated in one of the client devices 1911-1915, 1922, 1931 or in the server 1900. The client devices are connected via networks 1910, 1920, 1930 to the server 1900. It is assumed that the server 1900 contains an internal database (not shown) or can at least access an external database 1917 or a database in one of the client devices 1911-1915, 1922, 1931. In the figure, the server is directly connected to two networks, a local area network (LAN) 1930 and a wide area network (WAN) 1910. In the LAN 1930 only one client is depicted, the personal computer 1931. The WAN 1910 is connected to a personal computer 1911, a personal digital assistant (PDA) 1912, a browser 1914, a laptop 1915, a database 1917 and a gateway 1918 to a communication network 1920. The gateway 1918 indirectly connects a communication device 1922 (e.g., a mobile telephone) with computation abilities to the server 1900. Thus, all the client devices 1911-1915, 1922, 1931 are connected to the server 1900.
  • [0251] All client devices may comprise user interfaces, CPUs and memories or databases. To execute the method described above, the client devices 1911-1915, 1922, 1931 and/or the server 1900 are enabled to access a translation database. The exact location of the translation database is not important: the database 1917 may include the translation database and offer the client devices 1911-1915, 1922, 1931 access to the stored translation elements. Alternatively, each client device 1911-1915, 1922, 1931 may include its own translation database, enabling each device to translate text elements received from an internal or remote device. The method described above is executable as long as a translation database is accessible (e.g., via the networks 1910, 1920, 1930). In the simplest case, the client devices 1911-1915, 1922, 1931 themselves include a translation database, making them autonomous software localizing devices connected to the network only to access new software to be localized. A very complex case includes accessing a database 1917 via a server 1900 which accesses distinct databases to retrieve text elements to be translated.
  • As the data of the database is usually retrieved by a database management system (DBMS), the data source may be able to use relational operations between tables. The DBMS can store the text elements in a directory table and the translation elements in a related user-specific translation table. Thus, the data source of the DBMS may utilize relational algebra to interconnect the text elements with the translation elements. The DBMS user interface enables the user to translate text elements by accessing pre-translation tables. The translation elements can be selected and/or validated in each field of the pre-translation table by a user input. The user input can be a keyboard input, a mouse input and the like. In a final step, the validated text elements and validated translation elements are stored in the database, to be retrieved automatically the next time the user accesses the database with a DBMS to retrieve the text element. [0252]
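A minimal sketch of the directory table and related translation table, joined via the key-ID, might look as follows. SQLite stands in for the DBMS; the table names and the German translation are illustrative:

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Directory table holding the source text elements ...
cur.execute("CREATE TABLE directory "
            "(key_id INTEGER PRIMARY KEY, lang TEXT, text TEXT)")
# ... and a related translation table, interconnected via the key-ID.
cur.execute("CREATE TABLE translation "
            "(key_id INTEGER REFERENCES directory(key_id), lang TEXT, text TEXT)")

cur.execute("INSERT INTO directory VALUES "
            "(126, '1', 'The computer is standing on the table')")
cur.execute("INSERT INTO translation VALUES "
            "(126, '49', 'Der Computer steht auf dem Tisch')")

# Relational join interconnecting a text element with its translation.
row = cur.execute(
    "SELECT d.key_id, t.text FROM directory d "
    "JOIN translation t ON d.key_id = t.key_id "
    "WHERE d.text = ? AND t.lang = '49'",
    ("The computer is standing on the table",),
).fetchone()
print(row)  # (126, 'Der Computer steht auf dem Tisch')
```

Validated translations inserted into the translation table are then retrieved automatically by the same join the next time the text element is looked up.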
  • Although aspects of methods and systems consistent with the present invention are described as being stored in memory, one having skill in the art will appreciate that all or part of methods and systems consistent with the present invention may be stored on or read from other computer-readable media, such as secondary storage devices (e.g., hard disks, floppy disks, and CD-ROM); a carrier wave received from a network such as the Internet; or other forms of ROM or RAM either currently known or later developed. Further, although specific components of the system for localization have been described, one skilled in the art will appreciate that a data processing system suitable for use with methods, systems, and articles of manufacture consistent with the present invention may contain additional or different components. [0253]
  • It is noted that the above elements of the above examples may be at least partially realized as software and/or hardware. Further, it is noted that a computer-readable medium may be provided having a program embodied thereon, where the program is to make a computer or system of data processing devices execute functions or steps of the features and elements of the above described examples. A computer-readable medium may include a magnetic or optical or other tangible medium on which a program is embodied, but can also be a signal (e.g., analog or digital, electromagnetic or optical) in which the program is embodied for transmission. Further, a computer program product may be provided comprising the computer-readable medium. [0254]
  • The foregoing description of an implementation of the invention has been presented for purposes of illustration and description. It is not exhaustive and does not limit the invention to the precise form disclosed. Modifications and variations are possible in light of the above teachings or may be acquired from practicing the invention. For example, the described implementation includes software, but the present invention may be implemented as a combination of hardware and software or in hardware alone. Note also that the implementation may vary between systems. The invention may be implemented with both object-oriented and non-object-oriented programming systems. The claims and their equivalents define the scope of the invention. [0255]
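The text-string file of claim 1 below, pairing each translated string with the desired location of its immutable image file in a directory structure, can be sketched as follows. The strings, file names and pipe-separated line format are illustrative; an image manipulation program such as GIMP (claim 3) would then consume such a file to render one image per language:

```python
import os

# Illustrative translated text strings for one screen object.
translations = {"en": "Cancel", "de": "Abbrechen", "fr": "Annuler"}

def string_file_lines(base: str = "images", name: str = "cancel.png"):
    # One line per language: "<text string>|<desired image location>".
    # The language of each generated immutable image file is indicated
    # by the directory component of its target path (claim 1).
    return [f"{text}|{os.path.join(base, lang, name)}"
            for lang, text in translations.items()]

for line in string_file_lines():
    print(line)
```

Writing these lines to a file and feeding them to a scripted image manipulation program, together with template information such as image size and text position (claim 8), yields one immutable image file per language at the indicated location.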

Claims (22)

What is claimed is:
1. A method in a data processing system for displaying screen objects in a plurality of different languages, each screen object having a text string in a first language, and an associated immutable image file, and associated display characteristics of the associated immutable image file, the method comprising the steps of:
translating the text strings into the plurality of different languages;
automatically generating a text string file containing the text strings in the different languages and desired locations in a directory structure;
using an image manipulation program to generate an immutable image file for each of the text strings in the text string file by using the text strings and the associated display characteristics; and
storing the generated immutable image files at the desired locations in a directory structure such that the different language of the text string in each of the immutable image files is indicated.
2. The method of claim 1, wherein each of the associated immutable image file and the generated immutable image file includes a file in a format of at least one of .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, .pcx, and a graphic format.
3. The method of claim 1, wherein the image manipulation program includes GIMP (GNU Image Manipulation Program).
4. A method in a data processing system for localizing an immutable image file containing text in a first language, the method comprising the steps of:
translating the text from the first language into a second language that is different from the first language; and
automatically generating a translated immutable image file containing the text in the second language.
5. The method of claim 4, wherein the step of automatically generating further includes the step of initiating execution of a script in association with an image manipulation program, to instruct the image manipulation program to input the text in the second language and template information.
6. The method of claim 5, further comprising the step of storing the translated immutable image file at a desired location, wherein the desired location is determined based on location information provided as input to the image manipulation program by the script.
7. The method of claim 6, further comprising the step of accessing, by a user, the translated immutable image file stored at the desired location.
8. The method of claim 5, wherein the template information includes at least one of image size, image shape, image color, position of a text element within the image for the translated immutable image file, and a previously generated immutable image file.
9. The method of claim 4, wherein the step of automatically translating is performed by a translator program associated with a pretranslation database.
10. The method of claim 4, wherein the translated immutable image file includes a file in a format of at least one of .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, .pcx, and a graphic format.
11. The method of claim 4, further comprising the step of identifying the text in the first language by parsing source code of a user interface program.
12. A data processing system for localizing an immutable image file containing text in a first language, comprising:
means for translating the text from the first language into a second language that is different from the first language; and
means for automatically generating a translated immutable image file containing the text in the second language.
13. A computer-readable medium encoded with instructions that cause a data processing system for localizing an immutable image file containing text in a first language to perform a method comprising the steps of:
translating the text from the first language into a second language that is different from the first language; and
automatically generating a translated immutable image file containing the text in the second language.
14. The computer-readable medium of claim 13, wherein the step of automatically generating further includes the step of initiating execution of a script in association with an image manipulation program, to instruct the image manipulation program to input the text in the second language and template information.
15. The computer-readable medium of claim 14, wherein the method further comprises the step of storing the translated immutable image file at a desired location, wherein the desired location is determined based on location information provided as input to the image manipulation program by the script.
16. The computer-readable medium of claim 14, wherein the template information includes at least one of image size, image shape, image color, position of a text element within the image for the translated immutable image file, and a previously generated immutable image file.
17. The computer-readable medium of claim 13, wherein the translated immutable image file includes a file in a format of at least one of .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, .pcx, and a graphic format.
18. A data processing system for localizing an immutable image file containing text in a first language, the system comprising:
a memory comprising an immutable image file creation system that translates the text from the first language into a second language that is different from the first language, and automatically generates a translated immutable image file containing the text in the second language; and
a processor for running the immutable image file creation system.
19. The data processing system of claim 18, wherein the step of automatically generating further includes the step of initiating execution of a script in association with an image manipulation program, to instruct the image manipulation program to input the text in the second language and template information.
20. The data processing system of claim 19, wherein the image file creation system further stores the translated immutable image file at a desired location, wherein the desired location is determined based on location information provided as input to the image manipulation program by the script.
21. The data processing system of claim 19, wherein the template information includes at least one of image size, image shape, image color, position of a text element within the image for the translated immutable image file, and a previously generated immutable image file.
22. The data processing system of claim 18, wherein the translated immutable image file includes a file in a format of at least one of .gif, bitmap (.bmp), .tiff, .jpg, .png, .xpm, .tga, .mpeg, .ps, .pdf, .pcx, and a graphic format.
US10/303,819 2001-11-27 2002-11-26 Method and system for automatic creation of multilingual immutable image files Abandoned US20030115552A1 (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
EP01128179.7 2001-11-27
EP01128179A EP1315084A1 (en) 2001-11-27 2001-11-27 Method and apparatus for localizing software
EP02010132A EP1315085B1 (en) 2001-11-27 2002-05-10 Automatic image-button creation process
EP02010133A EP1315086B1 (en) 2001-11-27 2002-05-10 Generation of localized software applications
EP02010133.3 2002-05-10
EP02010132.5 2002-05-10

Publications (1)

Publication Number Publication Date
US20030115552A1 true US20030115552A1 (en) 2003-06-19

Family

ID=27224248

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/304,198 Active 2024-06-19 US7447624B2 (en) 2001-11-27 2002-11-26 Generation of localized software applications
US10/303,819 Abandoned US20030115552A1 (en) 2001-11-27 2002-11-26 Method and system for automatic creation of multilingual immutable image files

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/304,198 Active 2024-06-19 US7447624B2 (en) 2001-11-27 2002-11-26 Generation of localized software applications

Country Status (1)

Country Link
US (2) US7447624B2 (en)

Cited By (202)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040122791A1 (en) * 2002-12-19 2004-06-24 Sea Brian S Method and system for automated source code formatting
US20040168132A1 (en) * 2003-02-21 2004-08-26 Motionpoint Corporation Analyzing web site for translation
US20050071324A1 (en) * 2003-09-30 2005-03-31 Microsoft Corporation Label system-translation of text and multi-language support at runtime and design
US20050076342A1 (en) * 2003-10-01 2005-04-07 International Business Machines Corporation System and method for application sharing
US20050097109A1 (en) * 2003-10-30 2005-05-05 Microsoft Corporation Term database extension for label system
US20050222981A1 (en) * 2004-03-31 2005-10-06 Lawrence Stephen R Systems and methods for weighting a search query result
US20050267733A1 (en) * 2004-06-01 2005-12-01 Rainer Hueber System and method for a translation process within a development infrastructure
US20060004835A1 (en) * 2004-06-30 2006-01-05 International Business Machines Corporation Standard text method, system, and program product for configuring and publishing text to multiple applications
US20060059192A1 (en) * 2004-09-15 2006-03-16 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
US20060059424A1 (en) * 2004-09-15 2006-03-16 Petri Jonah W Real-time data localization
WO2006032846A2 (en) * 2004-09-24 2006-03-30 University Of Abertay Dundee Computer games localisation
US7038693B2 (en) * 2001-07-18 2006-05-02 Dr. Johannes Heidenhain Gmbh Method for creating pixel-oriented pictorial datasets for representing graphic symbols by a numerical control
DE102004054548A1 (en) * 2004-11-11 2006-05-24 Brand Ad Gmbh Image production method, involves producing image without additional graphical unit by using color palette which shows color applications, and storing produced image to which color palette is assigned
US20060116864A1 (en) * 2004-12-01 2006-06-01 Microsoft Corporation Safe, secure resource editing for application localization with automatic adjustment of application user interface for translated resources
US20060242621A1 (en) * 2005-04-22 2006-10-26 Microsoft Corporation System and method for managing resource loading in a multilingual user interface operating system
US20060263038A1 (en) * 2005-05-23 2006-11-23 Gilley Thomas S Distributed scalable media environment
US20070027670A1 (en) * 2005-07-13 2007-02-01 Siemens Medical Solutions Health Services Corporation User Interface Update System
US20070100597A1 (en) * 2005-10-27 2007-05-03 Jacquot Bryan J Utility, method and device providing vector images that may be formatted for display in different locales
US20070154190A1 (en) * 2005-05-23 2007-07-05 Gilley Thomas S Content tracking for movie segment bookmarks
US20070276829A1 (en) * 2004-03-31 2007-11-29 Niniane Wang Systems and methods for ranking implicit search results
US20070276814A1 (en) * 2006-05-26 2007-11-29 Williams Roland E Device And Method Of Conveying Meaning
WO2008005957A2 (en) * 2006-07-07 2008-01-10 Honeywell International Inc. Supporting multiple languages in the operation and management of a process control system
US20080040315A1 (en) * 2004-03-31 2008-02-14 Auerbach David B Systems and methods for generating a user interface
US20080040316A1 (en) * 2004-03-31 2008-02-14 Lawrence Stephen R Systems and methods for analyzing boilerplate
US20080049267A1 (en) * 2004-07-12 2008-02-28 Canon Kabushiki Kaisha Image Processing Apparatus And Control Method Of The Same
US20080077558A1 (en) * 2004-03-31 2008-03-27 Lawrence Stephen R Systems and methods for generating multiple implicit search queries
US20080172637A1 (en) * 2007-01-15 2008-07-17 International Business Machines Corporation Method and system for using image globalization in dynamic text generation and manipulation
US20080252919A1 (en) * 2004-07-07 2008-10-16 Canon Kabushiki Kaisha Image Processing Apparatus and Control Method of the Same
US20080262671A1 (en) * 2005-02-24 2008-10-23 Yoshio Suzuki Vehicle Quality Analyzing System and Plural Data Management Method
US20080301564A1 (en) * 2007-05-31 2008-12-04 Smith Michael H Build of material production system
US20090070094A1 (en) * 2007-09-06 2009-03-12 Best Steven F User-configurable translations for electronic documents
US20090111585A1 (en) * 2007-10-25 2009-04-30 Disney Enterprises, Inc. System and method of localizing assets using text substitutions
US20100011354A1 (en) * 2008-07-10 2010-01-14 Apple Inc. System and method for localizing display of applications for download
US7707142B1 (en) 2004-03-31 2010-04-27 Google Inc. Methods and systems for performing an offline search
US20100157990A1 (en) * 2008-12-19 2010-06-24 Openpeak, Inc. Systems for providing telephony and digital media services
US7788274B1 (en) 2004-06-30 2010-08-31 Google Inc. Systems and methods for category-based search
US20100305940A1 (en) * 2009-06-01 2010-12-02 Microsoft Corporation Language translation using embeddable component
WO2010139595A1 (en) * 2009-06-05 2010-12-09 International Business Machines Corporation Platform agnostic screen capture tool
US20100318743A1 (en) * 2009-06-10 2010-12-16 Microsoft Corporation Dynamic screentip language translation
US20100328349A1 (en) * 2009-06-29 2010-12-30 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd . System and method for fitting images in an electronic device
US7873632B2 (en) 2004-03-31 2011-01-18 Google Inc. Systems and methods for associating a keyword with a user interface area
US20110035660A1 (en) * 2007-08-31 2011-02-10 Frederick Lussier System and method for the automated creation of a virtual publication
US20110071832A1 (en) * 2009-09-24 2011-03-24 Casio Computer Co., Ltd. Image display device, method, and program
US20110246175A1 (en) * 2010-03-30 2011-10-06 Young Hee Yi E-book reader language mapping system and method
US20110264439A1 (en) * 2008-02-29 2011-10-27 Ichiko Sata Information processing device, method and program
US8131754B1 (en) 2004-06-30 2012-03-06 Google Inc. Systems and methods for determining an article association measure
US8316291B1 (en) * 2005-07-28 2012-11-20 Adobe Systems Incorporated Packaging an electronic document and/or a method of displaying the package
US20130185051A1 (en) * 2012-01-16 2013-07-18 Google Inc. Techniques for generating outgoing messages based on language, internationalization, and localization preferences of the recipient
EP2395423A3 (en) * 2010-06-08 2013-08-14 Canon Kabushiki Kaisha Information transmission apparatus, control method of information transmission apparatus, and computer program
US20130346063A1 (en) * 2012-06-21 2013-12-26 International Business Machines Corporation Dynamic Translation Substitution
EP2704025A1 (en) * 2011-04-28 2014-03-05 Rakuten, Inc. Browsing system, terminal, image server, program, computer-readable recording medium recording said program, and method
US20140076475A1 (en) * 2012-09-17 2014-03-20 Tomás GALICIA Translating application labels
US20140089382A1 (en) * 2012-09-26 2014-03-27 Google Inc. Techniques for context-based grouping of messages for translation
US8739205B2 (en) 2005-05-23 2014-05-27 Open Text S.A. Movie advertising playback techniques
US8818092B1 (en) * 2011-09-29 2014-08-26 Google, Inc. Multi-threaded text rendering
CN104123275A (en) * 2013-04-24 2014-10-29 国际商业机器公司 Translation validation
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US20140351798A1 (en) * 2013-05-24 2014-11-27 Medidata Solutions, Inc. Apparatus and method for managing software translation
EP2704014A4 (en) * 2011-04-28 2015-03-11 Rakuten Inc Server, server control method, program and recording medium
US9009153B2 (en) 2004-03-31 2015-04-14 Google Inc. Systems and methods for identifying a named entity
US20150106790A1 (en) * 2013-10-15 2015-04-16 International Business Machines Corporation Detecting merge conflicts and compilation errors in a collaborative integrated development environment
US9128918B2 (en) 2010-07-13 2015-09-08 Motionpoint Corporation Dynamic language translation of web site content
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
EP2783300A4 (en) * 2011-11-25 2015-12-23 Google Inc Providing translation assistance in application localization
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
WO2016053920A1 (en) * 2014-09-30 2016-04-07 Microsoft Technology Licensing, Llc Visually differentiating strings for testing
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US20160147745A1 (en) * 2014-11-26 2016-05-26 Naver Corporation Content participation translation apparatus and method
US9361294B2 (en) 2007-05-31 2016-06-07 Red Hat, Inc. Publishing tool for translating documents
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US9372672B1 (en) * 2013-09-04 2016-06-21 Tg, Llc Translation in visual context
US20160179481A1 (en) * 2013-08-29 2016-06-23 Nomura Research Institute, Ltd. Web server system, application development support system, multilingual support method in web server system, multi-device support method in web server system, and application development support method
US20160203126A1 (en) * 2015-01-13 2016-07-14 Alibaba Group Holding Limited Displaying information in multiple languages based on optical code reading
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US9436727B1 (en) * 2013-04-01 2016-09-06 Ca, Inc. Method for providing an integrated macro module
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
EP2984579A4 (en) * 2013-04-11 2017-01-18 Hewlett-Packard Enterprise Development LP Automated contextual-based software localization
US9558158B2 (en) * 2015-03-06 2017-01-31 Translation Management Systems, Ltd Automated document translation
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US9654735B2 (en) 2005-05-23 2017-05-16 Open Text Sa Ulc Movie advertising placement optimization based on behavior and content analysis
US9659010B1 (en) 2015-12-28 2017-05-23 International Business Machines Corporation Multiple language screen capture
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US20170322944A1 (en) * 2016-05-09 2017-11-09 Coupa Software Incorporated Automatic entry of suggested translated terms in an online application program
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US9953030B2 (en) * 2016-08-24 2018-04-24 International Business Machines Corporation Automated translation of screen images for software documentation
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US20180232364A1 (en) * 2017-02-15 2018-08-16 International Business Machines Corporation Context-aware translation memory to facilitate more accurate translation
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US20180260363A1 (en) * 2017-03-09 2018-09-13 Fuji Xerox Co., Ltd. Information processing apparatus and non-transitory computer readable medium storing program
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US20190108222A1 (en) * 2017-10-10 2019-04-11 International Business Machines Corporation Real-time translation evaluation services for integrated development environments
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
CN109871550A (en) * 2019-01-31 2019-06-11 沈阳雅译网络技术有限公司 Method for improving machine translation quality based on post-processing technology
US10318644B1 (en) 2017-07-26 2019-06-11 Coupa Software Incorporated Dynamic update of translations for electronic resources
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US10372830B2 (en) * 2017-05-17 2019-08-06 Adobe Inc. Digital content translation techniques and systems
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10540452B1 (en) * 2018-06-21 2020-01-21 Amazon Technologies, Inc. Automated translation of applications
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10671698B2 (en) 2009-05-26 2020-06-02 Microsoft Technology Licensing, Llc Language translation using embeddable component
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10826878B2 (en) * 2016-07-22 2020-11-03 International Business Machines Corporation Database management system shared ledger support
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US11281465B2 (en) * 2018-04-13 2022-03-22 Gree, Inc. Non-transitory computer readable recording medium, computer control method and computer device for facilitating multilingualization without changing existing program data
US11282064B2 (en) 2018-02-12 2022-03-22 Advanced New Technologies Co., Ltd. Method and apparatus for displaying identification code of application
US20220129646A1 (en) * 2020-04-29 2022-04-28 Vannevar Labs, Inc. Foreign language machine translation of documents in a variety of formats
US11328113B1 (en) * 2021-03-03 2022-05-10 Micro Focus Llc Dynamic localization using color
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification

Families Citing this family (114)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8656372B2 (en) 2002-11-20 2014-02-18 Purenative Software Corporation System for translating diverse programming languages
US8332828B2 (en) * 2002-11-20 2012-12-11 Purenative Software Corporation System for translating diverse programming languages
US9086931B2 (en) 2002-11-20 2015-07-21 Purenative Software Corporation System for translating diverse programming languages
US9965259B2 (en) 2002-11-20 2018-05-08 Purenative Software Corporation System for translating diverse programming languages
US20040123233A1 (en) * 2002-12-23 2004-06-24 Cleary Daniel Joseph System and method for automatic tagging of documents
US8244712B2 (en) * 2003-03-18 2012-08-14 Apple Inc. Localized viewing of file system names
US7149971B2 (en) * 2003-06-30 2006-12-12 American Megatrends, Inc. Method, apparatus, and system for providing multi-language character strings within a computer
US7533372B2 (en) * 2003-08-05 2009-05-12 Microsoft Corporation Cross language migration
US20050066315A1 (en) * 2003-09-23 2005-03-24 Nguyen Liem Manh Localization tool
US7392519B2 (en) * 2003-09-23 2008-06-24 Hewlett-Packard Development Company, L.P. Localization cataloguing tool
US7620888B2 (en) * 2003-12-04 2009-11-17 Microsoft Corporation Quality enhancement systems and methods for technical documentation
US7472118B2 (en) * 2003-12-05 2008-12-30 Microsoft Corporation Systems and methods for improving information discovery
CA2453971C (en) * 2003-12-23 2009-08-11 Daniel A. Rose On-demand creation of java locale source
US7434213B1 (en) * 2004-03-31 2008-10-07 Sun Microsystems, Inc. Portable executable source code representations
TWI268445B (en) * 2004-08-11 2006-12-11 Via Tech Inc A method and system for integrating software setup packages
US7440888B2 (en) * 2004-09-02 2008-10-21 International Business Machines Corporation Methods, systems and computer program products for national language support using a multi-language property file
US7617092B2 (en) * 2004-12-01 2009-11-10 Microsoft Corporation Safe, secure resource editing for application localization
US7716641B2 (en) * 2004-12-01 2010-05-11 Microsoft Corporation Method and system for automatically identifying and marking subsets of localizable resources
US7512936B2 (en) * 2004-12-17 2009-03-31 Sap Aktiengesellschaft Code diversification
US8225232B2 (en) * 2005-02-28 2012-07-17 Microsoft Corporation Dynamic configuration of unified messaging state changes
US7788648B2 (en) * 2005-02-28 2010-08-31 International Business Machines Corporation System and method for the localization of released computer program
US8219907B2 (en) * 2005-03-08 2012-07-10 Microsoft Corporation Resource authoring with re-usability score and suggested re-usable data
US20060206797A1 (en) * 2005-03-08 2006-09-14 Microsoft Corporation Authorizing implementing application localization rules
US20060241932A1 (en) * 2005-04-20 2006-10-26 Carman Ron C Translation previewer and validator
US7882116B2 (en) * 2005-05-18 2011-02-01 International Business Machines Corporation Method for localization of programming modeling resources
US20060271920A1 (en) * 2005-05-24 2006-11-30 Wael Abouelsaadat Multilingual compiler system and method
US7657511B2 (en) * 2005-05-31 2010-02-02 Sap, Ag Multi-layered data model for generating audience-specific documents
US7640255B2 (en) 2005-05-31 2009-12-29 Sap, Ag Method for utilizing a multi-layered data model to generate audience specific documents
US20070033579A1 (en) * 2005-08-02 2007-02-08 International Business Machines Corporation System and method for searching for multiple types of errors in file following translation into a new natural language
US20070174818A1 (en) * 2005-08-18 2007-07-26 Pasula Markus I Method and apparatus for generating application programs for multiple hardware and/or software platforms
US7982739B2 (en) 2005-08-18 2011-07-19 Realnetworks, Inc. System and/or method for adjusting for input latency in a handheld device
US7747588B2 (en) * 2005-09-12 2010-06-29 Microsoft Corporation Extensible XML format and object model for localization data
US7921138B2 (en) * 2005-09-12 2011-04-05 Microsoft Corporation Comment processing
US7676359B2 (en) * 2005-10-06 2010-03-09 International Business Machines Corporation System and method for synchronizing languages and data elements
US8265924B1 (en) * 2005-10-06 2012-09-11 Teradata Us, Inc. Multiple language data structure translation and management of a plurality of languages
US20070233456A1 (en) * 2006-03-31 2007-10-04 Microsoft Corporation Document localization
US8548795B2 (en) * 2006-10-10 2013-10-01 Abbyy Software Ltd. Method for translating documents from one language into another using a database of translations, a terminology dictionary, a translation dictionary, and a machine translation system
US20080177528A1 (en) * 2007-01-18 2008-07-24 William Drewes Method of enabling any-directional translation of selected languages
US10002189B2 (en) 2007-12-20 2018-06-19 Apple Inc. Method and apparatus for searching using an active ontology
US20090172657A1 (en) * 2007-12-28 2009-07-02 Nokia, Inc. System, Method, Apparatus, Mobile Terminal and Computer Program Product for Providing Secure Mixed-Language Components to a System Dynamically
US8719693B2 (en) * 2008-02-22 2014-05-06 International Business Machines Corporation Method for storing localized XML document values
US8595710B2 (en) * 2008-03-03 2013-11-26 Microsoft Corporation Repositories and related services for managing localization of resources
US9262409B2 (en) 2008-08-06 2016-02-16 Abbyy Infopoisk Llc Translation of a selected text fragment of a screen
US8412511B2 (en) 2008-09-03 2013-04-02 United Parcel Service Of America, Inc. Systems and methods for providing translations of applications using decentralized contributions
US8676904B2 (en) 2008-10-02 2014-03-18 Apple Inc. Electronic devices with voice command and contextual data processing capabilities
US9087337B2 (en) * 2008-10-03 2015-07-21 Google Inc. Displaying vertical content on small display devices
US8527945B2 (en) * 2009-05-07 2013-09-03 Verisign, Inc. Method and system for integrating multiple scripts
US9417851B2 (en) * 2009-05-13 2016-08-16 Adobe Systems Incorporated Location-based application development for runtime environments
US8645936B2 (en) * 2009-09-30 2014-02-04 Zynga Inc. Apparatuses, methods and systems for an API call abstractor
US20110144972A1 (en) * 2009-12-11 2011-06-16 Christoph Koenig Method and System for Generating a Localized Software Product
US8521513B2 (en) * 2010-03-12 2013-08-27 Microsoft Corporation Localization for interactive voice response systems
US8515977B2 (en) 2010-09-10 2013-08-20 International Business Machines Corporation Delta language translation
US8670973B2 (en) * 2010-09-16 2014-03-11 International Business Machines Corporation Language translation reuse in different systems
CN102455997A (en) * 2010-10-27 2012-05-16 鸿富锦精密工业(深圳)有限公司 Component name extraction system and method
US8843360B1 (en) * 2011-03-04 2014-09-23 Amazon Technologies, Inc. Client-side localization of network pages
US9015030B2 (en) * 2011-04-15 2015-04-21 International Business Machines Corporation Translating prompt and user input
CN102890693A (en) * 2011-07-20 2013-01-23 中强光电股份有限公司 Document file viewing method and projection device using document file viewing method
US8806453B1 (en) * 2011-09-15 2014-08-12 Lockheed Martin Corporation Integrating disparate programming languages to form a new programming language
US8452814B1 (en) 2011-10-24 2013-05-28 Google Inc. Gathering context in action to support in-context localization
US9195653B2 (en) * 2011-10-24 2015-11-24 Google Inc. Identification of in-context resources that are not fully localized
US8942971B1 (en) * 2011-11-28 2015-01-27 Google Inc. Automated project localization into multiple languages by using machine translation
JP5528420B2 (en) * 2011-12-05 2014-06-25 シャープ株式会社 Translation apparatus, translation method, and computer program
US9489184B2 (en) * 2011-12-30 2016-11-08 Oracle International Corporation Adaptive selection of programming language versions for compilation of software programs
US8989485B2 (en) 2012-04-27 2015-03-24 Abbyy Development Llc Detecting a junction in a text line of CJK characters
US8971630B2 (en) 2012-04-27 2015-03-03 Abbyy Development Llc Fast CJK character recognition
US9047276B2 (en) * 2012-11-13 2015-06-02 Red Hat, Inc. Automatic translation of system messages using an existing resource bundle
US10296160B2 (en) 2013-12-06 2019-05-21 Apple Inc. Method for extracting salient dialog usage from live data
US10613840B1 (en) * 2014-01-17 2020-04-07 TG, Inc Converting programs to visual representation with intercepting screen draws
US9805028B1 (en) 2014-09-17 2017-10-31 Google Inc. Translating terms using numeric representations
CN104281711B (en) * 2014-10-27 2018-04-27 浪潮(北京)电子信息产业有限公司 Multilingual processing method and apparatus for web applications
US10909176B1 (en) * 2014-10-28 2021-02-02 Intelligent Medical Objects, Inc. System and method for facilitating migration between electronic terminologies
US10261996B2 (en) * 2014-12-19 2019-04-16 Dropbox, Inc. Content localization using fallback translations
US10152299B2 (en) 2015-03-06 2018-12-11 Apple Inc. Reducing response latency of intelligent automated assistants
US9690549B2 (en) * 2015-03-25 2017-06-27 Ca, Inc. Editing software products using text mapping files
WO2017019056A1 (en) * 2015-07-29 2017-02-02 Hewlett Packard Enterprise Development Lp Context oriented translation
US9740463B2 (en) 2015-08-10 2017-08-22 Oracle International Corporation Mechanism for increasing the performance of multiple language programs by inserting called language IR into the calling language
US10409623B2 (en) * 2016-05-27 2019-09-10 Microsoft Technology Licensing, Llc Graphical user interface for localizing a computer program using context data captured from the computer program
US9983870B2 (en) 2016-06-27 2018-05-29 International Business Machines Corporation Automated generation and identification of screenshots for translation reference
US9792282B1 (en) * 2016-07-11 2017-10-17 International Business Machines Corporation Automatic identification of machine translation review candidates
US10474753B2 (en) 2016-09-07 2019-11-12 Apple Inc. Language identification using recurrent neural networks
US11281993B2 (en) 2016-12-05 2022-03-22 Apple Inc. Model and ensemble compression for metric learning
US11204787B2 (en) 2017-01-09 2021-12-21 Apple Inc. Application integration with a digital assistant
DK201770383A1 (en) 2017-05-09 2018-12-14 Apple Inc. User interface for correcting recognition errors
US10417266B2 (en) 2017-05-09 2019-09-17 Apple Inc. Context-aware ranking of intelligent response suggestions
US10726832B2 (en) 2017-05-11 2020-07-28 Apple Inc. Maintaining privacy of personal information
US10395654B2 (en) 2017-05-11 2019-08-27 Apple Inc. Text normalization based on a data-driven learning network
US11301477B2 (en) 2017-05-12 2022-04-12 Apple Inc. Feedback analysis of a digital assistant
DK201770428A1 (en) 2017-05-12 2019-02-18 Apple Inc. Low-latency intelligent automated assistant
US10311144B2 (en) 2017-05-16 2019-06-04 Apple Inc. Emoji word sense disambiguation
US10403278B2 (en) 2017-05-16 2019-09-03 Apple Inc. Methods and systems for phonetic matching in digital assistant services
US10303715B2 (en) 2017-05-16 2019-05-28 Apple Inc. Intelligent automated assistant for media exploration
US10657328B2 (en) 2017-06-02 2020-05-19 Apple Inc. Multi-task recurrent neural network architecture for efficient morphology handling in neural language modeling
US10445429B2 (en) 2017-09-21 2019-10-15 Apple Inc. Natural language understanding using vocabularies with compressed serialized tries
US10755051B2 (en) 2017-09-29 2020-08-25 Apple Inc. Rule-based natural language processing
US10636424B2 (en) 2017-11-30 2020-04-28 Apple Inc. Multi-turn canned dialog
US10733982B2 (en) 2018-01-08 2020-08-04 Apple Inc. Multi-directional dialog
US10733375B2 (en) 2018-01-31 2020-08-04 Apple Inc. Knowledge-based framework for improving natural language understanding
US10789959B2 (en) 2018-03-02 2020-09-29 Apple Inc. Training speaker recognition models for digital assistants
US10592604B2 (en) 2018-03-12 2020-03-17 Apple Inc. Inverse text normalization for automatic speech recognition
US10818288B2 (en) 2018-03-26 2020-10-27 Apple Inc. Natural assistant interaction
US10909331B2 (en) 2018-03-30 2021-02-02 Apple Inc. Implicit identification of translation payload with neural machine translation
US11145294B2 (en) 2018-05-07 2021-10-12 Apple Inc. Intelligent automated assistant for delivering content from user experiences
US10928918B2 (en) 2018-05-07 2021-02-23 Apple Inc. Raise to speak
US10984780B2 (en) 2018-05-21 2021-04-20 Apple Inc. Global semantic word embeddings using bi-directional recurrent neural networks
US10892996B2 (en) 2018-06-01 2021-01-12 Apple Inc. Variable latency device coordination
DK179822B1 (en) 2018-06-01 2019-07-12 Apple Inc. Voice interaction at a primary device to access call functionality of a companion device
DK201870355A1 (en) 2018-06-01 2019-12-16 Apple Inc. Virtual assistant operation in multi-device environments
DK180639B1 (en) 2018-06-01 2021-11-04 Apple Inc Attention-aware virtual assistant dismissal
US11386266B2 (en) 2018-06-01 2022-07-12 Apple Inc. Text correction
US11076039B2 (en) 2018-06-03 2021-07-27 Apple Inc. Accelerated task performance
US11847425B2 (en) * 2018-08-01 2023-12-19 Disney Enterprises, Inc. Machine translation system for entertainment and media
US11048885B2 (en) * 2018-09-25 2021-06-29 International Business Machines Corporation Cognitive translation service integrated with context-sensitive derivations for determining program-integrated information relationships
US11126644B2 (en) * 2019-01-31 2021-09-21 Salesforce.Com, Inc. Automatic discovery of locations of translated text in localized applications
US11556318B2 (en) * 2021-03-24 2023-01-17 Bank Of America Corporation Systems and methods for assisted code development

Citations (10)

Publication number Priority date Publication date Assignee Title
US5146587A (en) * 1988-12-30 1992-09-08 Pitney Bowes Inc. System with simultaneous storage of multilingual error messages in plural loop connected processors for transmission automatic translation and message display
US5175684A (en) * 1990-12-31 1992-12-29 Trans-Link International Corp. Automatic text translation and routing system
US5201042A (en) * 1986-04-30 1993-04-06 Hewlett-Packard Company Software process and tools for development of local language translations of text portions of computer source code
US5434776A (en) * 1992-11-13 1995-07-18 Microsoft Corporation Method and system for creating multi-lingual computer programs by dynamically loading messages
US5551055A (en) * 1992-12-23 1996-08-27 Taligent, Inc. System for providing locale dependent user interface for presenting control graphic which has different contents or same contents displayed in a predetermined order
US5664206A (en) * 1994-01-14 1997-09-02 Sun Microsystems, Inc. Method and apparatus for automating the localization of a computer program
US5678039A (en) * 1994-09-30 1997-10-14 Borland International, Inc. System and methods for translating software into localized versions
US5974372A (en) * 1996-02-12 1999-10-26 Dst Systems, Inc. Graphical user interface (GUI) language translator
US6587596B1 (en) * 2000-04-28 2003-07-01 Shutterfly, Inc. System and method of cropping an image
US6587569B2 (en) * 1999-12-28 2003-07-01 Thomson, Licensing, S.A. Differential-pressure microphone

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US5675803A (en) * 1994-01-28 1997-10-07 Sun Microsystems, Inc. Method and apparatus for a fast debugger fix and continue operation
US5813019A (en) * 1995-07-06 1998-09-22 Sun Microsystems, Inc. Token-based computer program editor with program comment management
US6029002A (en) * 1995-10-31 2000-02-22 Peritus Software Services, Inc. Method and apparatus for analyzing computer code using weakest precondition
US5974568A (en) * 1995-11-17 1999-10-26 Mci Communications Corporation Hierarchical error reporting system
US5754858A (en) * 1996-05-01 1998-05-19 Microsoft Corporation Customizable application project generation process and system
US6092036A (en) * 1998-06-02 2000-07-18 Davox Corporation Multi-lingual data processing system and system and method for translating text used in computer software utilizing an embedded translator
IE981076A1 (en) 1998-12-21 2000-06-28 Transware Dev Ltd Localisation of software products
US6199195B1 (en) * 1999-07-08 2001-03-06 Science Application International Corporation Automatically generated objects within extensible object frameworks and links to enterprise resources
US7024365B1 (en) 1999-10-04 2006-04-04 Hewlett-Packard Development Company, L.P. Method for generating localizable message catalogs for Java-based applications
US6782529B2 (en) * 2001-03-29 2004-08-24 International Business Machines Corporation Method, apparatus and computer program product for editing in a translation verification test procedure
US20030004703A1 (en) * 2001-06-28 2003-01-02 Arvind Prabhakar Method and system for localizing a markup language document

Cited By (404)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9646614B2 (en) 2000-03-16 2017-05-09 Apple Inc. Fast, language-independent method for user authentication by voice
US7038693B2 (en) * 2001-07-18 2006-05-02 Dr. Johannes Heidenhain Gmbh Method for creating pixel-oriented pictorial datasets for representing graphic symbols by a numerical control
US20040122791A1 (en) * 2002-12-19 2004-06-24 Sea Brian S Method and system for automated source code formatting
US8566710B2 (en) * 2003-02-21 2013-10-22 Motionpoint Corporation Analyzing web site for translation
US9367540B2 (en) 2003-02-21 2016-06-14 Motionpoint Corporation Dynamic language translation of web site content
US8065294B2 (en) 2003-02-21 2011-11-22 Motion Point Corporation Synchronization of web site content between languages
US20100169764A1 (en) * 2003-02-21 2010-07-01 Motionpoint Corporation Automation tool for web site content language translation
US20100030550A1 (en) * 2003-02-21 2010-02-04 Motionpoint Corporation Synchronization of web site content between languages
US20110209038A1 (en) * 2003-02-21 2011-08-25 Motionpoint Corporation Dynamic language translation of web site content
US7627479B2 (en) 2003-02-21 2009-12-01 Motionpoint Corporation Automation tool for web site content language translation
US9626360B2 (en) * 2003-02-21 2017-04-18 Motionpoint Corporation Analyzing web site for translation
US20040167768A1 (en) * 2003-02-21 2004-08-26 Motionpoint Corporation Automation tool for web site content language translation
US9652455B2 (en) 2003-02-21 2017-05-16 Motionpoint Corporation Dynamic language translation of web site content
US7627817B2 (en) * 2003-02-21 2009-12-01 Motionpoint Corporation Analyzing web site for translation
US20040167784A1 (en) * 2003-02-21 2004-08-26 Motionpoint Corporation Dynamic language translation of web site content
US9910853B2 (en) 2003-02-21 2018-03-06 Motionpoint Corporation Dynamic language translation of web site content
US8433718B2 (en) 2003-02-21 2013-04-30 Motionpoint Corporation Dynamic language translation of web site content
US20040168132A1 (en) * 2003-02-21 2004-08-26 Motionpoint Corporation Analyzing web site for translation
US10409918B2 (en) 2003-02-21 2019-09-10 Motionpoint Corporation Automation tool for web site content language translation
US10621287B2 (en) 2003-02-21 2020-04-14 Motionpoint Corporation Dynamic language translation of web site content
US20090281790A1 (en) * 2003-02-21 2009-11-12 Motionpoint Corporation Dynamic language translation of web site content
US7584216B2 (en) 2003-02-21 2009-09-01 Motionpoint Corporation Dynamic language translation of web site content
US8949223B2 (en) 2003-02-21 2015-02-03 Motionpoint Corporation Dynamic language translation of web site content
US11308288B2 (en) 2003-02-21 2022-04-19 Motionpoint Corporation Automation tool for web site content language translation
US7580960B2 (en) 2003-02-21 2009-08-25 Motionpoint Corporation Synchronization of web site content between languages
US20140058719A1 (en) * 2003-02-21 2014-02-27 Motionpoint Corporation Analyzing Web Site for Translation
US7996417B2 (en) 2003-02-21 2011-08-09 Motionpoint Corporation Dynamic language translation of web site content
US20100174525A1 (en) * 2003-02-21 2010-07-08 Motionpoint Corporation Analyzing web site for translation
US7783637B2 (en) 2003-09-30 2010-08-24 Microsoft Corporation Label system-translation of text and multi-language support at runtime and design
US20050071324A1 (en) * 2003-09-30 2005-03-31 Microsoft Corporation Label system-translation of text and multi-language support at runtime and design
US20050076342A1 (en) * 2003-10-01 2005-04-07 International Business Machines Corporation System and method for application sharing
US20050097109A1 (en) * 2003-10-30 2005-05-05 Microsoft Corporation Term database extension for label system
US20060074987A1 (en) * 2003-10-30 2006-04-06 Microsoft Corporation Term database extension for label system
US7814101B2 (en) * 2003-10-30 2010-10-12 Microsoft Corporation Term database extension for label system
US20080077558A1 (en) * 2004-03-31 2008-03-27 Lawrence Stephen R Systems and methods for generating multiple implicit search queries
US8041713B2 (en) 2004-03-31 2011-10-18 Google Inc. Systems and methods for analyzing boilerplate
US20050222981A1 (en) * 2004-03-31 2005-10-06 Lawrence Stephen R Systems and methods for weighting a search query result
US7707142B1 (en) 2004-03-31 2010-04-27 Google Inc. Methods and systems for performing an offline search
US8631001B2 (en) 2004-03-31 2014-01-14 Google Inc. Systems and methods for weighting a search query result
US7693825B2 (en) * 2004-03-31 2010-04-06 Google Inc. Systems and methods for ranking implicit search results
US7873632B2 (en) 2004-03-31 2011-01-18 Google Inc. Systems and methods for associating a keyword with a user interface area
US20080040316A1 (en) * 2004-03-31 2008-02-14 Lawrence Stephen R Systems and methods for analyzing boilerplate
US7664734B2 (en) 2004-03-31 2010-02-16 Google Inc. Systems and methods for generating multiple implicit search queries
US20080040315A1 (en) * 2004-03-31 2008-02-14 Auerbach David B Systems and methods for generating a user interface
US20070276829A1 (en) * 2004-03-31 2007-11-29 Niniane Wang Systems and methods for ranking implicit search results
US9009153B2 (en) 2004-03-31 2015-04-14 Google Inc. Systems and methods for identifying a named entity
US20050267733A1 (en) * 2004-06-01 2005-12-01 Rainer Hueber System and method for a translation process within a development infrastructure
US20060004835A1 (en) * 2004-06-30 2006-01-05 International Business Machines Corporation Standard text method, system, and program product for configuring and publishing text to multiple applications
US7788274B1 (en) 2004-06-30 2010-08-31 Google Inc. Systems and methods for category-based search
US8131754B1 (en) 2004-06-30 2012-03-06 Google Inc. Systems and methods for determining an article association measure
US7865825B2 (en) * 2004-06-30 2011-01-04 International Business Machines Corporation Standard text method, system, and program product for configuring and publishing text to multiple applications
US9210285B2 (en) * 2004-07-07 2015-12-08 Canon Kabushiki Kaisha Image processing apparatus and control method for reducing an amount of data for a plurality of languages
US20080252919A1 (en) * 2004-07-07 2008-10-16 Canon Kabushiki Kaisha Image Processing Apparatus and Control Method of the Same
US20080049267A1 (en) * 2004-07-12 2008-02-28 Canon Kabushiki Kaisha Image Processing Apparatus And Control Method Of The Same
US7889379B2 (en) * 2004-07-12 2011-02-15 Canon Kabushiki Kaisha Image processing apparatus using path management and control method of the same
US8473475B2 (en) * 2004-09-15 2013-06-25 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
US20080109414A1 (en) * 2004-09-15 2008-05-08 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
US20060059192A1 (en) * 2004-09-15 2006-03-16 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
US20080109449A1 (en) * 2004-09-15 2008-05-08 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
US20060059424A1 (en) * 2004-09-15 2006-03-16 Petri Jonah W Real-time data localization
US8108449B2 (en) * 2004-09-15 2012-01-31 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
US20080109460A1 (en) * 2004-09-15 2008-05-08 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
US8135695B2 (en) * 2004-09-15 2012-03-13 Samsung Electronics Co., Ltd. Information storage medium for storing metadata supporting multiple languages, and systems and methods of processing metadata
WO2006032846A3 (en) * 2004-09-24 2007-07-26 Univ Abertay Dundee Computer games localisation
US20070245321A1 (en) * 2004-09-24 2007-10-18 University Of Abertay Dundee Computer games localisation
WO2006032846A2 (en) * 2004-09-24 2006-03-30 University Of Abertay Dundee Computer games localisation
DE102004054548B4 (en) * 2004-11-11 2013-11-07 Brandad Systems Ag Method for generating an image
DE102004054548A1 (en) * 2004-11-11 2006-05-24 Brand Ad Gmbh Image production method, involves producing image without additional graphical unit by using color palette which shows color applications, and storing produced image to which color palette is assigned
US20060116864A1 (en) * 2004-12-01 2006-06-01 Microsoft Corporation Safe, secure resource editing for application localization with automatic adjustment of application user interface for translated resources
US20080262671A1 (en) * 2005-02-24 2008-10-23 Yoshio Suzuki Vehicle Quality Analyzing System and Plural Data Management Method
US7869914B2 (en) * 2005-02-24 2011-01-11 Honda Motor Co., Ltd. Vehicle quality analyzing system and plural data management method
US20060242621A1 (en) * 2005-04-22 2006-10-26 Microsoft Corporation System and method for managing resource loading in a multilingual user interface operating system
US7669124B2 (en) * 2005-04-22 2010-02-23 Microsoft Corporation System and method for managing resource loading in a multilingual user interface operating system
US20060263038A1 (en) * 2005-05-23 2006-11-23 Gilley Thomas S Distributed scalable media environment
US9330723B2 (en) 2005-05-23 2016-05-03 Open Text S.A. Movie advertising playback systems and methods
US9648281B2 (en) 2005-05-23 2017-05-09 Open Text Sa Ulc System and method for movie segment bookmarking and sharing
US9653120B2 (en) 2005-05-23 2017-05-16 Open Text Sa Ulc Movie advertising playback systems and methods
US9654735B2 (en) 2005-05-23 2017-05-16 Open Text Sa Ulc Movie advertising placement optimization based on behavior and content analysis
US10789986B2 (en) 2005-05-23 2020-09-29 Open Text Sa Ulc Method, system and computer program product for editing movies in distributed scalable media environment
US10796722B2 (en) 2005-05-23 2020-10-06 Open Text Sa Ulc Method, system and computer program product for distributed video editing
US10672429B2 (en) 2005-05-23 2020-06-02 Open Text Sa Ulc Method, system and computer program product for editing movies in distributed scalable media environment
US10863224B2 (en) 2005-05-23 2020-12-08 Open Text Sa Ulc Video content placement optimization based on behavior and content analysis
US10950273B2 (en) 2005-05-23 2021-03-16 Open Text Sa Ulc Distributed scalable media environment for advertising placement in movies
US9934819B2 (en) 2005-05-23 2018-04-03 Open Text Sa Ulc Distributed scalable media environment for advertising placement in movies
US9940971B2 (en) * 2005-05-23 2018-04-10 Open Text Sa Ulc Method, system and computer program product for distributed video editing
US10958876B2 (en) 2005-05-23 2021-03-23 Open Text Sa Ulc System and method for movie segment bookmarking and sharing
US9947365B2 (en) * 2005-05-23 2018-04-17 Open Text Sa Ulc Method, system and computer program product for editing movies in distributed scalable media environment
US10090019B2 (en) 2005-05-23 2018-10-02 Open Text Sa Ulc Method, system and computer program product for editing movies in distributed scalable media environment
US10192587B2 (en) 2005-05-23 2019-01-29 Open Text Sa Ulc Movie advertising playback systems and methods
US11626141B2 (en) 2005-05-23 2023-04-11 Open Text Sa Ulc Method, system and computer program product for distributed video editing
US10650863B2 (en) 2005-05-23 2020-05-12 Open Text Sa Ulc Movie advertising playback systems and methods
US10594981B2 (en) 2005-05-23 2020-03-17 Open Text Sa Ulc System and method for movie segment bookmarking and sharing
US20140212109A1 (en) * 2005-05-23 2014-07-31 Open Text S.A. Method, system and computer program product for distributed video editing
US20140212111A1 (en) * 2005-05-23 2014-07-31 Open Text S.A. Method, system and computer program product for editing movies in distributed scalable media environment
US8755673B2 (en) 2005-05-23 2014-06-17 Open Text S.A. Method, system and computer program product for editing movies in distributed scalable media environment
US8739205B2 (en) 2005-05-23 2014-05-27 Open Text S.A. Movie advertising playback techniques
US11153614B2 (en) 2005-05-23 2021-10-19 Open Text Sa Ulc Movie advertising playback systems and methods
US11589087B2 (en) 2005-05-23 2023-02-21 Open Text Sa Ulc Movie advertising playback systems and methods
US20070154190A1 (en) * 2005-05-23 2007-07-05 Gilley Thomas S Content tracking for movie segment bookmarks
US10510376B2 (en) 2005-05-23 2019-12-17 Open Text Sa Ulc Method, system and computer program product for editing movies in distributed scalable media environment
US10491935B2 (en) 2005-05-23 2019-11-26 Open Text Sa Ulc Movie advertising placement optimization based on behavior and content analysis
US10504558B2 (en) 2005-05-23 2019-12-10 Open Text Sa Ulc Method, system and computer program product for distributed video editing
US11381779B2 (en) 2005-05-23 2022-07-05 Open Text Sa Ulc System and method for movie segment bookmarking and sharing
US20070027670A1 (en) * 2005-07-13 2007-02-01 Siemens Medical Solutions Health Services Corporation User Interface Update System
US8316291B1 (en) * 2005-07-28 2012-11-20 Adobe Systems Incorporated Packaging an electronic document and/or a method of displaying the package
US10318871B2 (en) 2005-09-08 2019-06-11 Apple Inc. Method and apparatus for building an intelligent automated assistant
US7882430B2 (en) * 2005-10-27 2011-02-01 Hewlett-Packard Development Company, L.P. Utility, method and device providing vector images that may be formatted for display in different locales
US20070100597A1 (en) * 2005-10-27 2007-05-03 Jacquot Bryan J Utility, method and device providing vector images that may be formatted for display in different locales
US20070276814A1 (en) * 2006-05-26 2007-11-29 Williams Roland E Device And Method Of Conveying Meaning
US8166418B2 (en) * 2006-05-26 2012-04-24 Zi Corporation Of Canada, Inc. Device and method of conveying meaning
US20080016112A1 (en) * 2006-07-07 2008-01-17 Honeywell International Inc. Supporting Multiple Languages in the Operation and Management of a Process Control System
WO2008005957A2 (en) * 2006-07-07 2008-01-10 Honeywell International Inc. Supporting multiple languages in the operation and management of a process control system
WO2008005957A3 (en) * 2006-07-07 2008-07-31 Honeywell Int Inc Supporting multiple languages in the operation and management of a process control system
US9117447B2 (en) 2006-09-08 2015-08-25 Apple Inc. Using event alert text as input to an automated assistant
US8942986B2 (en) 2006-09-08 2015-01-27 Apple Inc. Determining user intent based on ontologies of domains
US8930191B2 (en) 2006-09-08 2015-01-06 Apple Inc. Paraphrasing of user requests and results by automated digital assistant
US20080172637A1 (en) * 2007-01-15 2008-07-17 International Business Machines Corporation Method and system for using image globalization in dynamic text generation and manipulation
US10568032B2 (en) 2007-04-03 2020-02-18 Apple Inc. Method and system for operating a multi-function portable electronic device using voice-activation
US20080301564A1 (en) * 2007-05-31 2008-12-04 Smith Michael H Build of material production system
US10296588B2 (en) * 2007-05-31 2019-05-21 Red Hat, Inc. Build of material production system
US9361294B2 (en) 2007-05-31 2016-06-07 Red Hat, Inc. Publishing tool for translating documents
US20110035660A1 (en) * 2007-08-31 2011-02-10 Frederick Lussier System and method for the automated creation of a virtual publication
US20090070094A1 (en) * 2007-09-06 2009-03-12 Best Steven F User-configurable translations for electronic documents
US8527260B2 (en) 2007-09-06 2013-09-03 International Business Machines Corporation User-configurable translations for electronic documents
US8650553B2 (en) 2007-10-25 2014-02-11 Disney Enterprises, Inc. System and method for localizing assets using automatic generation of alerts
US9910850B2 (en) 2007-10-25 2018-03-06 Disney Enterprises, Inc. System and method of localizing assets using text substitutions
US20090111585A1 (en) * 2007-10-25 2009-04-30 Disney Enterprises, Inc. System and method of localizing assets using text substitutions
US20090113445A1 (en) * 2007-10-25 2009-04-30 Disney Enterprises, Inc. System and method for localizing assets using automatic generation of alerts
US20090112575A1 (en) * 2007-10-25 2009-04-30 Disney Enterprises, Inc. System and method for localizing assets using flexible metadata
US8275606B2 (en) * 2007-10-25 2012-09-25 Disney Enterprises, Inc. System and method for localizing assets using flexible metadata
US9594748B2 (en) 2007-10-25 2017-03-14 Disney Enterprises, Inc. System and method for localization of assets using dictionary file build
US10381016B2 (en) 2008-01-03 2019-08-13 Apple Inc. Methods and apparatus for altering audio output signals
US9330720B2 (en) 2008-01-03 2016-05-03 Apple Inc. Methods and apparatus for altering audio output signals
US8407040B2 (en) * 2008-02-29 2013-03-26 Sharp Kabushiki Kaisha Information processing device, method and program
US20110264439A1 (en) * 2008-02-29 2011-10-27 Ichiko Sata Information processing device, method and program
US9626955B2 (en) 2008-04-05 2017-04-18 Apple Inc. Intelligent text-to-speech conversion
US9865248B2 (en) 2008-04-05 2018-01-09 Apple Inc. Intelligent text-to-speech conversion
US20100011354A1 (en) * 2008-07-10 2010-01-14 Apple Inc. System and method for localizing display of applications for download
US8650561B2 (en) * 2008-07-10 2014-02-11 Apple Inc. System and method for localizing display of applications for download
US10108612B2 (en) 2008-07-31 2018-10-23 Apple Inc. Mobile device having human language translation capability with positional feedback
US9535906B2 (en) 2008-07-31 2017-01-03 Apple Inc. Mobile device having human language translation capability with positional feedback
US9959870B2 (en) 2008-12-11 2018-05-01 Apple Inc. Speech recognition involving a mobile device
US9753746B2 (en) * 2008-12-19 2017-09-05 Paul Krzyzanowski Application store and intelligence system for networked telephony and digital media services devices
US20100157990A1 (en) * 2008-12-19 2010-06-24 Openpeak, Inc. Systems for providing telephony and digital media services
US20100157989A1 (en) * 2008-12-19 2010-06-24 Openpeak, Inc. Application store and intelligence system for networked telephony and digital media services devices
US10671698B2 (en) 2009-05-26 2020-06-02 Microsoft Technology Licensing, Llc Language translation using embeddable component
US20100305940A1 (en) * 2009-06-01 2010-12-02 Microsoft Corporation Language translation using embeddable component
US9405745B2 (en) * 2009-06-01 2016-08-02 Microsoft Technology Licensing, Llc Language translation using embeddable component
US8797335B2 (en) 2009-06-05 2014-08-05 International Business Machines Corporation Platform agnostic screen capture tool
US11080012B2 (en) 2009-06-05 2021-08-03 Apple Inc. Interface for a virtual digital assistant
US10475446B2 (en) 2009-06-05 2019-11-12 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US8797338B2 (en) 2009-06-05 2014-08-05 International Business Machines Corporation Platform agnostic screen capture tool
WO2010139595A1 (en) * 2009-06-05 2010-12-09 International Business Machines Corporation Platform agnostic screen capture tool
US20100309212A1 (en) * 2009-06-05 2010-12-09 International Business Machines Corporation Platform agnostic screen capture tool
US9858925B2 (en) 2009-06-05 2018-01-02 Apple Inc. Using context information to facilitate processing of commands in a virtual assistant
US10795541B2 (en) 2009-06-05 2020-10-06 Apple Inc. Intelligent organization of tasks items
US8612893B2 (en) 2009-06-10 2013-12-17 Microsoft Corporation Dynamic screentip language translation
US8312390B2 (en) 2009-06-10 2012-11-13 Microsoft Corporation Dynamic screentip language translation
US20100318743A1 (en) * 2009-06-10 2010-12-16 Microsoft Corporation Dynamic screentip language translation
US20100328349A1 (en) * 2009-06-29 2010-12-30 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd . System and method for fitting images in an electronic device
US8300053B2 (en) * 2009-06-29 2012-10-30 Hong Fu Jin Precision Industry (Shenzhen) Co., Ltd. System and method for fitting images in an electronic device
US10283110B2 (en) 2009-07-02 2019-05-07 Apple Inc. Methods and apparatuses for automatic speech recognition
US8793129B2 (en) * 2009-09-24 2014-07-29 Casio Computer Co., Ltd. Image display device for identifying keywords from a voice of a viewer and displaying image and keyword
US20110071832A1 (en) * 2009-09-24 2011-03-24 Casio Computer Co., Ltd. Image display device, method, and program
US8903716B2 (en) 2010-01-18 2014-12-02 Apple Inc. Personalized vocabulary for digital assistant
US11423886B2 (en) 2010-01-18 2022-08-23 Apple Inc. Task flow identification based on user intent
US10705794B2 (en) 2010-01-18 2020-07-07 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US9318108B2 (en) 2010-01-18 2016-04-19 Apple Inc. Intelligent automated assistant
US10679605B2 (en) 2010-01-18 2020-06-09 Apple Inc. Hands-free list-reading by intelligent automated assistant
US10496753B2 (en) 2010-01-18 2019-12-03 Apple Inc. Automatically adapting user interfaces for hands-free interaction
US9548050B2 (en) 2010-01-18 2017-01-17 Apple Inc. Intelligent automated assistant
US10706841B2 (en) 2010-01-18 2020-07-07 Apple Inc. Task flow identification based on user intent
US10553209B2 (en) 2010-01-18 2020-02-04 Apple Inc. Systems and methods for hands-free notification summaries
US10276170B2 (en) 2010-01-18 2019-04-30 Apple Inc. Intelligent automated assistant
US8892446B2 (en) 2010-01-18 2014-11-18 Apple Inc. Service orchestration for intelligent automated assistant
US9190062B2 (en) 2010-02-25 2015-11-17 Apple Inc. User profiling for voice input processing
US10049675B2 (en) 2010-02-25 2018-08-14 Apple Inc. User profiling for voice input processing
US9633660B2 (en) 2010-02-25 2017-04-25 Apple Inc. User profiling for voice input processing
US9009022B2 (en) * 2010-03-30 2015-04-14 Young Hee Yi E-book reader language mapping system and method
US9619462B2 (en) 2010-03-30 2017-04-11 Young Hee Yi E-book reader language mapping system and method
US20110246175A1 (en) * 2010-03-30 2011-10-06 Young Hee Yi E-book reader language mapping system and method
US9116651B2 (en) 2010-06-08 2015-08-25 Canon Kabushiki Kaisha Image processing apparatus, control method, and recording medium storing computer program for image processing
EP2395423A3 (en) * 2010-06-08 2013-08-14 Canon Kabushiki Kaisha Information transmission apparatus, control method of information transmission apparatus, and computer program
US10387517B2 (en) 2010-07-13 2019-08-20 Motionpoint Corporation Dynamic language translation of web site content
US9858347B2 (en) 2010-07-13 2018-01-02 Motionpoint Corporation Dynamic language translation of web site content
US10210271B2 (en) 2010-07-13 2019-02-19 Motionpoint Corporation Dynamic language translation of web site content
US9465782B2 (en) 2010-07-13 2016-10-11 Motionpoint Corporation Dynamic language translation of web site content
US10936690B2 (en) 2010-07-13 2021-03-02 Motionpoint Corporation Dynamic language translation of web site content
US10296651B2 (en) 2010-07-13 2019-05-21 Motionpoint Corporation Dynamic language translation of web site content
US11409828B2 (en) 2010-07-13 2022-08-09 Motionpoint Corporation Dynamic language translation of web site content
US9864809B2 (en) 2010-07-13 2018-01-09 Motionpoint Corporation Dynamic language translation of web site content
US11157581B2 (en) 2010-07-13 2021-10-26 Motionpoint Corporation Dynamic language translation of web site content
US11481463B2 (en) 2010-07-13 2022-10-25 Motionpoint Corporation Dynamic language translation of web site content
US10146884B2 (en) 2010-07-13 2018-12-04 Motionpoint Corporation Dynamic language translation of web site content
US9213685B2 (en) 2010-07-13 2015-12-15 Motionpoint Corporation Dynamic language translation of web site content
US9128918B2 (en) 2010-07-13 2015-09-08 Motionpoint Corporation Dynamic language translation of web site content
US10977329B2 (en) 2010-07-13 2021-04-13 Motionpoint Corporation Dynamic language translation of web site content
US10073917B2 (en) 2010-07-13 2018-09-11 Motionpoint Corporation Dynamic language translation of web site content
US10089400B2 (en) 2010-07-13 2018-10-02 Motionpoint Corporation Dynamic language translation of web site content
US9411793B2 (en) 2010-07-13 2016-08-09 Motionpoint Corporation Dynamic language translation of web site content
US11030267B2 (en) 2010-07-13 2021-06-08 Motionpoint Corporation Dynamic language translation of web site content
US10922373B2 (en) 2010-07-13 2021-02-16 Motionpoint Corporation Dynamic language translation of web site content
US9311287B2 (en) 2010-07-13 2016-04-12 Motionpoint Corporation Dynamic language translation of web site content
US10762293B2 (en) 2010-12-22 2020-09-01 Apple Inc. Using parts-of-speech tagging and named entity recognition for spelling correction
US10102359B2 (en) 2011-03-21 2018-10-16 Apple Inc. Device access using voice authentication
US9262612B2 (en) 2011-03-21 2016-02-16 Apple Inc. Device access using voice authentication
US20150128015A1 (en) * 2011-04-28 2015-05-07 Rakuten, Inc. Browsing system, terminal, image server, program, computer-readable recording medium storing program, and method
EP2704025A4 (en) * 2011-04-28 2014-12-03 Rakuten Inc Browsing system, terminal, image server, program, computer-readable recording medium recording said program, and method
US9396392B2 (en) 2011-04-28 2016-07-19 Rakuten, Inc. Server, server control method, program and recording medium
US10013403B2 (en) * 2011-04-28 2018-07-03 Rakuten, Inc. Browsing system, terminal, image server, program, computer-readable recording medium storing program, and method
EP2704014A4 (en) * 2011-04-28 2015-03-11 Rakuten Inc Server, server control method, program and recording medium
EP2704025A1 (en) * 2011-04-28 2014-03-05 Rakuten, Inc. Browsing system, terminal, image server, program, computer-readable recording medium recording said program, and method
US10706373B2 (en) 2011-06-03 2020-07-07 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10057736B2 (en) 2011-06-03 2018-08-21 Apple Inc. Active transport based notifications
US11120372B2 (en) 2011-06-03 2021-09-14 Apple Inc. Performing actions associated with task items that represent tasks to perform
US10241644B2 (en) 2011-06-03 2019-03-26 Apple Inc. Actionable reminder entries
US9798393B2 (en) 2011-08-29 2017-10-24 Apple Inc. Text correction processing
US8818092B1 (en) * 2011-09-29 2014-08-26 Google, Inc. Multi-threaded text rendering
US10241752B2 (en) 2011-09-30 2019-03-26 Apple Inc. Interface for a virtual digital assistant
EP2783300A4 (en) * 2011-11-25 2015-12-23 Google Inc Providing translation assistance in application localization
US20130185051A1 (en) * 2012-01-16 2013-07-18 Google Inc. Techniques for generating outgoing messages based on language, internationalization, and localization preferences of the recipient
US9268762B2 (en) * 2012-01-16 2016-02-23 Google Inc. Techniques for generating outgoing messages based on language, internationalization, and localization preferences of the recipient
US9747271B2 (en) 2012-01-16 2017-08-29 Google Inc. Techniques for generating outgoing messages based on language, internationalization, and localization preferences of the recipient
US10134385B2 (en) 2012-03-02 2018-11-20 Apple Inc. Systems and methods for name pronunciation
US9483461B2 (en) 2012-03-06 2016-11-01 Apple Inc. Handling speech synthesis of content for multiple languages
US9953088B2 (en) 2012-05-14 2018-04-24 Apple Inc. Crowd sourcing information to fulfill user requests
US10079014B2 (en) 2012-06-08 2018-09-18 Apple Inc. Name recognition system
US9678951B2 (en) * 2012-06-21 2017-06-13 International Business Machines Corporation Dynamic translation substitution
US10289682B2 (en) 2012-06-21 2019-05-14 International Business Machines Corporation Dynamic translation substitution
US20130346064A1 (en) * 2012-06-21 2013-12-26 International Business Machines Corporation Dynamic Translation Substitution
US20130346063A1 (en) * 2012-06-21 2013-12-26 International Business Machines Corporation Dynamic Translation Substitution
US9672209B2 (en) * 2012-06-21 2017-06-06 International Business Machines Corporation Dynamic translation substitution
US9495129B2 (en) 2012-06-29 2016-11-15 Apple Inc. Device, method, and user interface for voice-activated navigation and browsing of a document
US9576574B2 (en) 2012-09-10 2017-02-21 Apple Inc. Context-sensitive handling of interruptions by intelligent digital assistant
US9216835B2 (en) * 2012-09-17 2015-12-22 Intel Corporation Translating application labels
US20140076475A1 (en) * 2012-09-17 2014-03-20 Tomás GALICIA Translating application labels
US9971774B2 (en) 2012-09-19 2018-05-15 Apple Inc. Voice-based media searching
KR102194675B1 (en) * 2012-09-26 2020-12-23 구글 엘엘씨 Techniques for context-based grouping of messages for translation
US9400848B2 (en) * 2012-09-26 2016-07-26 Google Inc. Techniques for context-based grouping of messages for translation
KR20150063443A (en) * 2012-09-26 2015-06-09 구글 인코포레이티드 Techniques for context-based grouping of messages for translation
CN104813318A (en) * 2012-09-26 2015-07-29 谷歌公司 Techniques for context-based grouping of messages for translation
US20140089382A1 (en) * 2012-09-26 2014-03-27 Google Inc. Techniques for context-based grouping of messages for translation
US10978090B2 (en) 2013-02-07 2021-04-13 Apple Inc. Voice trigger for a digital assistant
US10199051B2 (en) 2013-02-07 2019-02-05 Apple Inc. Voice trigger for a digital assistant
US9368114B2 (en) 2013-03-14 2016-06-14 Apple Inc. Context-sensitive handling of interruptions
US10652394B2 (en) 2013-03-14 2020-05-12 Apple Inc. System and method for processing voicemail
US11388291B2 (en) 2013-03-14 2022-07-12 Apple Inc. System and method for processing voicemail
US9697822B1 (en) 2013-03-15 2017-07-04 Apple Inc. System and method for updating an adaptive speech recognition model
US9922642B2 (en) 2013-03-15 2018-03-20 Apple Inc. Training an at least partial voice command system
US9436727B1 (en) * 2013-04-01 2016-09-06 Ca, Inc. Method for providing an integrated macro module
EP2984579A4 (en) * 2013-04-11 2017-01-18 Hewlett-Packard Enterprise Development LP Automated contextual-based software localization
US9928237B2 (en) 2013-04-11 2018-03-27 Entit Software Llc Automated contextual-based software localization
US9852128B2 (en) * 2013-04-24 2017-12-26 International Business Machines Corporation Translation validation
CN104123275A (en) * 2013-04-24 2014-10-29 国际商业机器公司 Translation validation
US20140324411A1 (en) * 2013-04-24 2014-10-30 International Business Machines Corporation Translation validation
US20140351798A1 (en) * 2013-05-24 2014-11-27 Medidata Solutions, Inc. Apparatus and method for managing software translation
US9292271B2 (en) * 2013-05-24 2016-03-22 Medidata Solutions, Inc. Apparatus and method for managing software translation
US9582608B2 (en) 2013-06-07 2017-02-28 Apple Inc. Unified ranking with entropy-weighted information for phrase-based semantic auto-completion
US9620104B2 (en) 2013-06-07 2017-04-11 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9966060B2 (en) 2013-06-07 2018-05-08 Apple Inc. System and method for user-specified pronunciation of words for speech synthesis and recognition
US9633674B2 (en) 2013-06-07 2017-04-25 Apple Inc. System and method for detecting errors in interactions with a voice-based digital assistant
US9966068B2 (en) 2013-06-08 2018-05-08 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10657961B2 (en) 2013-06-08 2020-05-19 Apple Inc. Interpreting and acting upon commands that involve sharing information with remote devices
US10176167B2 (en) 2013-06-09 2019-01-08 Apple Inc. System and method for inferring user intent from speech inputs
US10185542B2 (en) 2013-06-09 2019-01-22 Apple Inc. Device, method, and graphical user interface for enabling conversation persistence across two or more instances of a digital assistant
US9300784B2 (en) 2013-06-13 2016-03-29 Apple Inc. System and method for emergency calls initiated by voice command
US10791216B2 (en) 2013-08-06 2020-09-29 Apple Inc. Auto-activating smart responses based on activities from remote devices
US20160179481A1 (en) * 2013-08-29 2016-06-23 Nomura Research Institute, Ltd. Web server system, application development support system, multilingual support method in web server system, multi-device support method in web server system, and application development support method
US9952836B2 (en) * 2013-08-29 2018-04-24 Nomura Research Institute, Ltd. Web server system, application development support system, multilingual support method in web server system, multi-device support method in web server system, and application development support method
US9372672B1 (en) * 2013-09-04 2016-06-21 Tg, Llc Translation in visual context
US9158658B2 (en) * 2013-10-15 2015-10-13 International Business Machines Corporation Detecting merge conflicts and compilation errors in a collaborative integrated development environment
US20150106790A1 (en) * 2013-10-15 2015-04-16 International Business Machines Corporation Detecting merge conflicts and compilation errors in a collaborative integrated development environment
US9454459B2 (en) 2013-10-15 2016-09-27 International Business Machines Corporation Detecting merge conflicts and compilation errors in a collaborative integrated development environment
US9940219B2 (en) 2013-10-15 2018-04-10 International Business Machines Corporation Detecting merge conflicts and compilation errors in a collaborative integrated development environment
US9620105B2 (en) 2014-05-15 2017-04-11 Apple Inc. Analyzing audio input for efficient speech and music recognition
US10592095B2 (en) 2014-05-23 2020-03-17 Apple Inc. Instantaneous speaking of content on touch devices
US9502031B2 (en) 2014-05-27 2016-11-22 Apple Inc. Method for supporting dynamic grammars in WFST-based ASR
US10289433B2 (en) 2014-05-30 2019-05-14 Apple Inc. Domain specific language for encoding assistant dialog
US9842101B2 (en) 2014-05-30 2017-12-12 Apple Inc. Predictive conversion of language input
US9715875B2 (en) 2014-05-30 2017-07-25 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US10170123B2 (en) 2014-05-30 2019-01-01 Apple Inc. Intelligent assistant for home automation
US10169329B2 (en) 2014-05-30 2019-01-01 Apple Inc. Exemplar-based natural language processing
US10497365B2 (en) 2014-05-30 2019-12-03 Apple Inc. Multi-command single utterance input method
US9734193B2 (en) 2014-05-30 2017-08-15 Apple Inc. Determining domain salience ranking from ambiguous words in natural speech
US10083690B2 (en) 2014-05-30 2018-09-25 Apple Inc. Better resolution when referencing to concepts
US10078631B2 (en) 2014-05-30 2018-09-18 Apple Inc. Entropy-guided text prediction using combined word and character n-gram language models
US9760559B2 (en) 2014-05-30 2017-09-12 Apple Inc. Predictive text input
US9785630B2 (en) 2014-05-30 2017-10-10 Apple Inc. Text prediction using combined word N-gram and unigram language models
US9633004B2 (en) 2014-05-30 2017-04-25 Apple Inc. Better resolution when referencing to concepts
US9966065B2 (en) 2014-05-30 2018-05-08 Apple Inc. Multi-command single utterance input method
US11133008B2 (en) 2014-05-30 2021-09-28 Apple Inc. Reducing the need for manual start/end-pointing and trigger phrases
US9430463B2 (en) 2014-05-30 2016-08-30 Apple Inc. Exemplar-based natural language processing
US11257504B2 (en) 2014-05-30 2022-02-22 Apple Inc. Intelligent assistant for home automation
US9668024B2 (en) 2014-06-30 2017-05-30 Apple Inc. Intelligent automated assistant for TV user interactions
US10659851B2 (en) 2014-06-30 2020-05-19 Apple Inc. Real-time digital assistant knowledge updates
US10904611B2 (en) 2014-06-30 2021-01-26 Apple Inc. Intelligent automated assistant for TV user interactions
US9338493B2 (en) 2014-06-30 2016-05-10 Apple Inc. Intelligent automated assistant for TV user interactions
US10446141B2 (en) 2014-08-28 2019-10-15 Apple Inc. Automatic speech recognition based on user feedback
US9818400B2 (en) 2014-09-11 2017-11-14 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10431204B2 (en) 2014-09-11 2019-10-01 Apple Inc. Method and apparatus for discovering trending terms in speech requests
US10789041B2 (en) 2014-09-12 2020-09-29 Apple Inc. Dynamic thresholds for always listening speech trigger
US9986419B2 (en) 2014-09-30 2018-05-29 Apple Inc. Social reminders
US10127911B2 (en) 2014-09-30 2018-11-13 Apple Inc. Speaker identification and unsupervised speaker adaptation techniques
US9668121B2 (en) 2014-09-30 2017-05-30 Apple Inc. Social reminders
US9594749B2 (en) 2014-09-30 2017-03-14 Microsoft Technology Licensing, Llc Visually differentiating strings for testing
US9646609B2 (en) 2014-09-30 2017-05-09 Apple Inc. Caching apparatus for serving phonetic pronunciations
US10216727B2 (en) * 2014-09-30 2019-02-26 Microsoft Technology Licensing, Llc Visually differentiating strings for testing
US9886432B2 (en) 2014-09-30 2018-02-06 Apple Inc. Parsimonious handling of word inflection via categorical stem + suffix N-gram language models
WO2016053920A1 (en) * 2014-09-30 2016-04-07 Microsoft Technology Licensing, Llc Visually differentiating strings for testing
US10074360B2 (en) 2014-09-30 2018-09-11 Apple Inc. Providing an indication of the suitability of speech recognition
US20170147562A1 (en) * 2014-09-30 2017-05-25 Microsoft Technology Licensing, Llc Visually differentiating strings for testing
CN106716398A (en) * 2014-09-30 2017-05-24 微软技术许可有限责任公司 Visually differentiating strings for testing
US20160147745A1 (en) * 2014-11-26 2016-05-26 Naver Corporation Content participation translation apparatus and method
US20160147746A1 (en) * 2014-11-26 2016-05-26 Naver Corporation Content participation translation apparatus and method
US20160147742A1 (en) * 2014-11-26 2016-05-26 Naver Corporation Apparatus and method for providing translations editor
US10733388B2 (en) * 2014-11-26 2020-08-04 Naver Webtoon Corporation Content participation translation apparatus and method
US10496757B2 (en) * 2014-11-26 2019-12-03 Naver Webtoon Corporation Apparatus and method for providing translations editor
US10713444B2 (en) 2014-11-26 2020-07-14 Naver Webtoon Corporation Apparatus and method for providing translations editor
US9881008B2 (en) * 2014-11-26 2018-01-30 Naver Corporation Content participation translation apparatus and method
US11556230B2 (en) 2014-12-02 2023-01-17 Apple Inc. Data detection
US10552013B2 (en) 2014-12-02 2020-02-04 Apple Inc. Data detection
US9711141B2 (en) 2014-12-09 2017-07-18 Apple Inc. Disambiguating heteronyms in speech synthesis
US20160203126A1 (en) * 2015-01-13 2016-07-14 Alibaba Group Holding Limited Displaying information in multiple languages based on optical code reading
US10157180B2 (en) * 2015-01-13 2018-12-18 Alibaba Group Holding Limited Displaying information in multiple languages based on optical code reading
US11062096B2 (en) * 2015-01-13 2021-07-13 Advanced New Technologies Co., Ltd. Displaying information in multiple languages based on optical code reading
US9558158B2 (en) * 2015-03-06 2017-01-31 Translation Management Systems, Ltd Automated document translation
US9865280B2 (en) 2015-03-06 2018-01-09 Apple Inc. Structured dictation using intelligent automated assistants
US10311871B2 (en) 2015-03-08 2019-06-04 Apple Inc. Competing devices responding to voice triggers
US9886953B2 (en) 2015-03-08 2018-02-06 Apple Inc. Virtual assistant activation
US11087759B2 (en) 2015-03-08 2021-08-10 Apple Inc. Virtual assistant activation
US10567477B2 (en) 2015-03-08 2020-02-18 Apple Inc. Virtual assistant continuity
US9721566B2 (en) 2015-03-08 2017-08-01 Apple Inc. Competing devices responding to voice triggers
US9899019B2 (en) 2015-03-18 2018-02-20 Apple Inc. Systems and methods for structured stem and suffix language models
US9842105B2 (en) 2015-04-16 2017-12-12 Apple Inc. Parsimonious continuous-space phrase representations for natural language processing
US10083688B2 (en) 2015-05-27 2018-09-25 Apple Inc. Device voice control for selecting a displayed affordance
US10127220B2 (en) 2015-06-04 2018-11-13 Apple Inc. Language identification from short strings
US10101822B2 (en) 2015-06-05 2018-10-16 Apple Inc. Language input correction
US10356243B2 (en) 2015-06-05 2019-07-16 Apple Inc. Virtual assistant aided communication with 3rd party service in a communication session
US10186254B2 (en) 2015-06-07 2019-01-22 Apple Inc. Context-based endpoint detection
US11025565B2 (en) 2015-06-07 2021-06-01 Apple Inc. Personalized prediction of responses for instant messaging
US10255907B2 (en) 2015-06-07 2019-04-09 Apple Inc. Automatic accent detection using acoustic models
US10671428B2 (en) 2015-09-08 2020-06-02 Apple Inc. Distributed personal assistant
US10747498B2 (en) 2015-09-08 2020-08-18 Apple Inc. Zero latency digital assistant
US11500672B2 (en) 2015-09-08 2022-11-15 Apple Inc. Distributed personal assistant
US9697820B2 (en) 2015-09-24 2017-07-04 Apple Inc. Unit-selection text-to-speech synthesis using concatenation-sensitive neural networks
US11010550B2 (en) 2015-09-29 2021-05-18 Apple Inc. Unified language modeling framework for word prediction, auto-completion and auto-correction
US10366158B2 (en) 2015-09-29 2019-07-30 Apple Inc. Efficient word encoding for recurrent neural network language models
US11587559B2 (en) 2015-09-30 2023-02-21 Apple Inc. Intelligent device identification
US10691473B2 (en) 2015-11-06 2020-06-23 Apple Inc. Intelligent automated assistant in a messaging environment
US11526368B2 (en) 2015-11-06 2022-12-13 Apple Inc. Intelligent automated assistant in a messaging environment
US10049668B2 (en) 2015-12-02 2018-08-14 Apple Inc. Applying neural network language models to weighted finite state transducers for automatic speech recognition
US10223066B2 (en) 2015-12-23 2019-03-05 Apple Inc. Proactive assistance based on dialog communication between devices
US9659010B1 (en) 2015-12-28 2017-05-23 International Business Machines Corporation Multiple language screen capture
US10446143B2 (en) 2016-03-14 2019-10-15 Apple Inc. Identification of voice inputs providing credentials
US10229220B2 (en) * 2016-05-09 2019-03-12 Coupa Software Incorporated Automatic entry of suggested translated terms in an online application program
US20170322944A1 (en) * 2016-05-09 2017-11-09 Coupa Software Incorporated Automatic entry of suggested translated terms in an online application program
US10565282B1 (en) 2016-05-09 2020-02-18 Coupa Software Incorporated Automatic entry of suggested translated terms in an online application program
US9934775B2 (en) 2016-05-26 2018-04-03 Apple Inc. Unit-selection text-to-speech synthesis based on predicted concatenation parameters
US9972304B2 (en) 2016-06-03 2018-05-15 Apple Inc. Privacy preserving distributed evaluation framework for embedded personalized systems
US10249300B2 (en) 2016-06-06 2019-04-02 Apple Inc. Intelligent list reading
US11069347B2 (en) 2016-06-08 2021-07-20 Apple Inc. Intelligent automated assistant for media exploration
US10049663B2 (en) 2016-06-08 2018-08-14 Apple, Inc. Intelligent automated assistant for media exploration
US10354011B2 (en) 2016-06-09 2019-07-16 Apple Inc. Intelligent automated assistant in a home environment
US10490187B2 (en) 2016-06-10 2019-11-26 Apple Inc. Digital assistant providing automated status report
US11037565B2 (en) 2016-06-10 2021-06-15 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10192552B2 (en) 2016-06-10 2019-01-29 Apple Inc. Digital assistant providing whispered speech
US10733993B2 (en) 2016-06-10 2020-08-04 Apple Inc. Intelligent digital assistant in a multi-tasking environment
US10509862B2 (en) 2016-06-10 2019-12-17 Apple Inc. Dynamic phrase expansion of language input
US10067938B2 (en) 2016-06-10 2018-09-04 Apple Inc. Multilingual word prediction
US10297253B2 (en) 2016-06-11 2019-05-21 Apple Inc. Application integration with a digital assistant
US11152002B2 (en) 2016-06-11 2021-10-19 Apple Inc. Application integration with a digital assistant
US10269345B2 (en) 2016-06-11 2019-04-23 Apple Inc. Intelligent task discovery
US10089072B2 (en) 2016-06-11 2018-10-02 Apple Inc. Intelligent device arbitration and control
US10521466B2 (en) 2016-06-11 2019-12-31 Apple Inc. Data driven natural language event detection and classification
US10826878B2 (en) * 2016-07-22 2020-11-03 International Business Machines Corporation Database management system shared ledger support
US9953030B2 (en) * 2016-08-24 2018-04-24 International Business Machines Corporation Automated translation of screen images for software documentation
US10255276B2 (en) * 2016-08-24 2019-04-09 International Business Machines Corporation Automated translation of screen images for software documentation
US10043516B2 (en) 2016-09-23 2018-08-07 Apple Inc. Intelligent automated assistant
US10553215B2 (en) 2016-09-23 2020-02-04 Apple Inc. Intelligent automated assistant
US10593346B2 (en) 2016-12-22 2020-03-17 Apple Inc. Rank-reduced token representation for automatic speech recognition
US10235361B2 (en) * 2017-02-15 2019-03-19 International Business Machines Corporation Context-aware translation memory to facilitate more accurate translation
US20180232364A1 (en) * 2017-02-15 2018-08-16 International Business Machines Corporation Context-aware translation memory to facilitate more accurate translation
US10528675B2 (en) 2017-02-15 2020-01-07 International Business Machines Corporation Context-aware translation memory to facilitate more accurate translation
US20180260363A1 (en) * 2017-03-09 2018-09-13 Fuji Xerox Co., Ltd. Information processing apparatus and non-transitory computer readable medium storing program
US10755703B2 (en) 2017-05-11 2020-08-25 Apple Inc. Offline personal assistant
US10410637B2 (en) 2017-05-12 2019-09-10 Apple Inc. User-specific acoustic models
US11405466B2 (en) 2017-05-12 2022-08-02 Apple Inc. Synchronization and task delegation of a digital assistant
US10791176B2 (en) 2017-05-12 2020-09-29 Apple Inc. Synchronization and task delegation of a digital assistant
US10810274B2 (en) 2017-05-15 2020-10-20 Apple Inc. Optimizing dialogue policy decisions for digital assistants using implicit feedback
US10482874B2 (en) 2017-05-15 2019-11-19 Apple Inc. Hierarchical belief states for digital assistants
US11217255B2 (en) 2017-05-16 2022-01-04 Apple Inc. Far-field extension for digital assistant services
US10528678B2 (en) * 2017-05-17 2020-01-07 Adobe Inc. Digital content translation techniques and systems
US10372830B2 (en) * 2017-05-17 2019-08-06 Adobe Inc. Digital content translation techniques and systems
US10318644B1 (en) 2017-07-26 2019-06-11 Coupa Software Incorporated Dynamic update of translations for electronic resources
US10552547B2 (en) * 2017-10-10 2020-02-04 International Business Machines Corporation Real-time translation evaluation services for integrated development environments
US20190108222A1 (en) * 2017-10-10 2019-04-11 International Business Machines Corporation Real-time translation evaluation services for integrated development environments
US11790344B2 (en) 2018-02-12 2023-10-17 Advanced New Technologies Co., Ltd. Method and apparatus for displaying identification code of application
US11282064B2 (en) 2018-02-12 2022-03-22 Advanced New Technologies Co., Ltd. Method and apparatus for displaying identification code of application
US11281465B2 (en) * 2018-04-13 2022-03-22 Gree, Inc. Non-transitory computer readable recording medium, computer control method and computer device for facilitating multilingualization without changing existing program data
US10540452B1 (en) * 2018-06-21 2020-01-21 Amazon Technologies, Inc. Automated translation of applications
CN109871550A (en) * 2019-01-31 2019-06-11 沈阳雅译网络技术有限公司 A method of the raising digital translation quality based on post-processing technology
US11640233B2 (en) * 2020-04-29 2023-05-02 Vannevar Labs, Inc. Foreign language machine translation of documents in a variety of formats
US20220129646A1 (en) * 2020-04-29 2022-04-28 Vannevar Labs, Inc. Foreign language machine translation of documents in a variety of formats
US11328113B1 (en) * 2021-03-03 2022-05-10 Micro Focus Llc Dynamic localization using color

Also Published As

Publication number Publication date
US7447624B2 (en) 2008-11-04
US20030126559A1 (en) 2003-07-03

Similar Documents

Publication Publication Date Title
US20030115552A1 (en) Method and system for automatic creation of multilingual immutable image files
EP1315086B1 (en) Generation of localized software applications
EP1315084A1 (en) Method and apparatus for localizing software
US7318021B2 (en) Machine translation system, method and program
US8645405B2 (en) Natural language expression in response to a query
JP3666004B2 (en) Multilingual document search system
US7209876B2 (en) System and method for automated answering of natural language questions and queries
US6745181B1 (en) Information access method
US8346536B2 (en) System and method for multi-lingual information retrieval
US8423537B2 (en) Method and arrangement for handling of information search results
CN103927375B (en) The flicker annotation callout of cross-language search result is highlighted
US6658408B2 (en) Document information management system
Alexa et al. A review of software for text analysis
US5999939A (en) System and method for displaying and entering interactively modified stream data into a structured form
US6704728B1 (en) Accessing information from a collection of data
US20080019281A1 (en) Reuse of available source data and localizations
US20020065857A1 (en) System and method for analysis and clustering of documents for search engine
US20080235567A1 (en) Intelligent form filler
EP0927939A1 (en) Method and system to obtain data for information retrieval
KR20210041007A (en) Patent document creating device, method, computer program, computer-readable recording medium, server and system
Watt Beginning regular expressions
US6035338A (en) Document browse support system and document processing system
EP1315085B1 (en) Automatic image-button creation process
US7127450B1 (en) Intelligent discard in information access system
KR20040048548A (en) Method and System for Searching User-oriented Data by using Intelligent Database and Search Editing Program

Legal Events

Date Code Title Description
AS Assignment

Owner name: SUN MICROSYSTEMS, INC., CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JAHNKE, JORG;CORDES, DIETMAR;FUHRMANN, NILS;REEL/FRAME:013728/0423

Effective date: 20030124

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION