CN104753766A - Expression sending method and device - Google Patents
- Publication number: CN104753766A
- Application number: CN201510093000.5A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention discloses an expression sending method and device, belonging to the field of Internet technology. The method comprises: collecting a face image of a user while a communication message is being sent or received; generating, from the face image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and sending the user expression to a peer terminal in the course of sending and receiving the communication message. The method solves the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state, and achieves the effect that the communication messages the user sends and receives carry a user expression matching the user's current mood, so that the user's current emotional state is expressed accurately.
Description
Technical field
The present disclosure relates to the field of Internet technology, and in particular to an expression sending method and device.
Background technology
When users chat through various communication clients, they can not only send plain text messages but also manually pick an expression picture from the expression library provided by the communication client and send it, together with the text message, to another communication client. However, the expression pictures in the expression library provided by the communication client cannot accurately convey the user's current emotional state.
Summary of the invention
Embodiments of the present disclosure provide an expression sending method and device. The technical solution is as follows:
According to a first aspect of the embodiments of the present disclosure, an expression sending method is provided, the method comprising:
collecting a face image of a user while a communication message is being sent or received;
generating, from the face image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
sending the user expression to a peer terminal in the course of sending and receiving the communication message.
Optionally, generating, from the face image, the user expression corresponding to the user's current mood comprises:
extracting a face region from the face image; and
performing image processing on the face region to generate the user expression corresponding to the user's current mood, the image processing comprising at least one of filter processing, stylization processing, and grayscale processing.
Optionally, generating, from the face image, the user expression corresponding to the user's current mood comprises:
extracting a face region from the face image;
identifying an expression type of the face region; and
selecting, from a preset expression library, an expression matching the expression type as the user expression.
Optionally, sending the user expression to the peer terminal in the course of sending and receiving the communication message comprises:
determining the user expression as the user's real-time avatar and sending it to the peer terminal, the peer terminal being configured to replace the original user avatar with the real-time avatar;
or,
adding the user expression to the communication message to be sent and sending the communication message to the peer terminal, the peer terminal being configured to display the communication message carrying the user expression.
Optionally, the method further comprises:
when there are at least two user expressions, displaying the at least two user expressions;
receiving a selection signal for one of the user expressions; and
determining, according to the selection signal, the corresponding user expression as the user expression to be sent.
Optionally, collecting the face image of the user while the communication message is being sent or received comprises:
when the application running in the foreground is a communication application, collecting one face image through a front-facing camera at every predetermined time interval.
According to a second aspect of the embodiments of the present disclosure, an expression sending device is provided, the device comprising:
a collection module configured to collect a face image of a user while a communication message is being sent or received;
a generation module configured to generate, from the face image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
a sending module configured to send the user expression to a peer terminal in the course of sending and receiving the communication message.
Optionally, the generation module comprises:
a first extraction submodule configured to extract a face region from the face image; and
a processing submodule configured to perform image processing on the face region to generate the user expression corresponding to the user's current mood, the image processing comprising at least one of filter processing, stylization processing, and grayscale processing.
Optionally, the generation module comprises:
a second extraction submodule configured to extract a face region from the face image;
a recognition submodule configured to identify an expression type of the face region; and
a selection submodule configured to select, from a preset expression library, an expression matching the expression type as the user expression.
Optionally, the sending module comprises:
a first sending submodule configured to determine the user expression as the user's real-time avatar and send it to the peer terminal, the peer terminal being configured to replace the original user avatar with the real-time avatar;
or,
a second sending submodule configured to add the user expression to the communication message to be sent and send the communication message to the peer terminal, the peer terminal being configured to display the communication message carrying the user expression.
Optionally, the device further comprises:
a display module configured to display at least two user expressions when there are at least two user expressions;
a receiving module configured to receive a selection signal for one of the user expressions; and
a determination module configured to determine, according to the selection signal, the corresponding user expression as the user expression to be sent.
Optionally, the collection module is further configured to collect one face image through a front-facing camera at every predetermined time interval when the application running in the foreground is a communication application.
According to a third aspect of the embodiments of the present disclosure, an expression sending device is provided, the device comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
collect a face image of a user while a communication message is being sent or received;
generate, from the face image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
send the user expression to a peer terminal in the course of sending and receiving the communication message.
The technical solution provided by the embodiments of the present disclosure may have the following beneficial effects:
A user expression is generated from the face image collected while the user is sending and receiving communication messages, and is sent to the peer terminal. This solves the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state, and achieves the effect that the communication messages the user sends and receives carry a user expression matching the user's current mood, so that the user's current emotional state is expressed accurately.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a flow chart of an expression sending method according to an exemplary embodiment;
Fig. 2A is a flow chart of an expression sending method according to another exemplary embodiment;
Fig. 2B is a schematic diagram of an implementation of the expression sending method according to another exemplary embodiment;
Fig. 2C is a schematic diagram of an implementation of the expression sending method according to another exemplary embodiment;
Fig. 3 is a flow chart of an expression sending method according to yet another exemplary embodiment;
Fig. 4 is a block diagram of an expression sending device according to an exemplary embodiment;
Fig. 5 is a block diagram of an expression sending device according to another exemplary embodiment;
Fig. 6 is a block diagram of an expression sending device according to an exemplary embodiment.
Specific embodiments of the present disclosure have been shown in the above drawings and will be described in more detail below. These drawings and the accompanying text are not intended to limit the scope of the disclosed concept in any way, but to illustrate the concept of the present disclosure to those skilled in the art by reference to specific embodiments.
Detailed description
Exemplary embodiments will now be described in detail, examples of which are illustrated in the accompanying drawings. Where the following description refers to the drawings, the same numerals in different drawings denote the same or similar elements unless otherwise indicated. The implementations described in the following exemplary embodiments do not represent all implementations consistent with the present disclosure; rather, they are merely examples of apparatuses and methods consistent with some aspects of the present disclosure as detailed in the appended claims.
The expression sending method provided by the embodiments of the present disclosure may be implemented by an electronic device on which a communication client is installed. The electronic device may be a smartphone, a tablet computer, an e-book reader, a laptop computer, or the like.
For simplicity, the following description takes the expression sending method as being performed by the communication client, which does not constitute a limitation.
Fig. 1 is a flow chart of an expression sending method according to an exemplary embodiment. In this embodiment, the method is described as applied to a communication client, and may comprise the following steps.
In step 102, a face image of a user is collected while a communication message is being sent or received.
When the application running in the foreground is a communication application, one face image is collected through a front-facing camera at every predetermined time interval.
In step 104, a user expression corresponding to the user's current mood is generated from the face image, the user expression being at least one of a picture expression and a text expression.
In step 106, the user expression is sent to a peer terminal in the course of sending and receiving the communication message.
In summary, in the expression sending method provided by this exemplary embodiment, a user expression is generated from the face image collected while the user is sending and receiving communication messages, and is sent to the peer terminal. This solves the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state, and achieves the effect that the communication messages the user sends and receives carry a user expression matching the user's current mood, so that the user's current emotional state is expressed accurately.
The communication client may collect the user's face image through the front-facing camera, perform image processing on the collected face image to generate a corresponding user expression, and send the user expression to the peer terminal as the user's real-time avatar, so that the real-time avatar seen by the peer user matches the user's expression at that moment and is more vivid. This is described below with an exemplary embodiment.
Fig. 2A is a flow chart of an expression sending method according to another exemplary embodiment. In this embodiment, the method is described as applied to a smartphone, and may comprise the following steps.
In step 201, when the application running in the foreground is a communication application, one face image is collected through a front-facing camera at every predetermined time interval.
The user sends and receives communication messages through a communication client installed in the smartphone. When the application running in the foreground of the smartphone is a communication application, the front-facing camera of the smartphone collects one face image at every predetermined time interval; the intervals may be equal or unequal. The communication client may be an instant messaging client, a rich communication client, or the like, which is not limited by the present disclosure.
For example, when the smartphone detects that the user is sending and receiving communication messages through the communication client, it may collect one face image through the front-facing camera every second. As another example, one face image may be collected when no input is detected, that is, when the user is reading a received communication message, and another face image may be collected when input such as text or voice is detected, that is, when the user is replying to a communication message.
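The capture trigger described in step 201 can be sketched as follows. This is only an illustrative sketch: the function name, the constants, and the idea of using a shorter interval while the user types are assumptions for illustration, not the patent's disclosed implementation.

```python
# Illustrative sketch of the capture trigger in step 201. All names
# and interval values here are assumptions for illustration.

CAPTURE_INTERVAL_READING = 2.0   # seconds between captures while reading
CAPTURE_INTERVAL_TYPING = 1.0    # capture more often while the user replies

def should_capture(foreground_app: str, user_is_typing: bool,
                   last_capture_time: float, now: float) -> bool:
    """Capture a face image only when a communication app is in the
    foreground and the predetermined interval has elapsed."""
    if foreground_app != "communication":
        return False
    interval = (CAPTURE_INTERVAL_TYPING if user_is_typing
                else CAPTURE_INTERVAL_READING)
    return now - last_capture_time >= interval
```

Keeping the interval state outside the function makes it easy to use unequal intervals for reading versus replying, matching the example in the text.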
In step 202, a face region is extracted from the face image.
Because of factors such as the shooting angle of the front-facing camera, a face image collected by the front-facing camera may not actually contain a face. To improve the accuracy of the generated user expression, the communication client therefore performs face detection on the collected face images, detects whether each collected image contains a face, and filters out the images that do not. The face detection may use an iterative face detection algorithm, which is not limited by the present disclosure.
Since the user expression is generated from the face image, the communication client extracts the face region from the face image and generates the corresponding user expression from that region. The face region is typically extracted by locating facial feature points in the face image with a statistical model built from a training set, and then extracting the face region according to the located feature points; the present disclosure does not limit the method of extracting the face region.
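Once the facial feature points have been located, cutting out the face region can be as simple as taking their bounding box with a margin. The sketch below is an assumption for illustration (the landmark format and the margin are hypothetical), not the statistical-model method itself.

```python
# Illustrative sketch: extract the face region as the bounding box of
# located feature points plus a margin. Landmarks as (x, y) tuples and
# the margin value are assumptions for illustration.

def face_bounding_box(landmarks, margin=10):
    """Return (left, top, right, bottom) enclosing all feature points,
    expanded by `margin` pixels and clipped at zero."""
    xs = [p[0] for p in landmarks]
    ys = [p[1] for p in landmarks]
    return (max(min(xs) - margin, 0), max(min(ys) - margin, 0),
            max(xs) + margin, max(ys) + margin)
```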
In step 203, image processing is performed on the face region to generate a user expression corresponding to the user's current mood; the image processing comprises at least one of filter processing, stylization processing, and grayscale processing.
To make the generated user expression more vivid and richer in form, the communication client performs corresponding image processing on the extracted face region to generate the user expression corresponding to the user's current mood.
Since the facial feature points have already been located in step 202, the communication client can further recognize the facial expression from those feature points, that is, determine the user's current mood, and perform the corresponding image processing according to the determined mood, thereby highlighting it. The image processing may be filter processing, stylization processing, grayscale processing, or the like.
For example, when the image processing is stylization processing and the communication client determines from the facial feature points that the user's current mood is happy, it stylizes the face region to generate a user expression with happy characteristics.
As another example, when the image processing is filter processing and the communication client determines from the facial feature points that the user's current mood is happy, it may apply a filter that brightens the colors of the face region, so that the generated user expression corresponds to a happy mood.
It should be noted that the communication client may also add a corresponding preset mark to the generated user expression according to the user's current mood. The preset mark may be a text mark or an image mark; for example, a sun image may be added to the generated user expression when the user's current mood is happy, and a dark-cloud image may be added when the user's current mood is sad.
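The mood-dependent processing and preset marks described above amount to a lookup from mood to a processing plan. The following sketch uses the sun and dark-cloud examples from the text; the table layout, filter names, and fallback behavior are assumptions for illustration.

```python
# Illustrative sketch of the mood-dependent post-processing in step 203.
# Filter names and the fallback are assumptions for illustration; the
# "sun" and "dark_cloud" marks are the examples given in the text.

MOOD_STYLES = {
    "happy": {"filter": "brighten", "mark": "sun"},
    "sad":   {"filter": "desaturate", "mark": "dark_cloud"},
    "angry": {"filter": "high_contrast", "mark": None},
}

def style_for_mood(mood: str) -> dict:
    """Return the processing plan for a mood; fall back to grayscale
    with no mark for moods the table does not cover."""
    return MOOD_STYLES.get(mood, {"filter": "grayscale", "mark": None})
```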
In step 204, when there are at least two user expressions, the at least two user expressions are displayed.
Since the front-facing camera collects one face image at every predetermined time interval, at least two face images of the user may be collected while the user sends and receives communication messages, and correspondingly the communication client may generate at least two user expressions. When at least two user expressions are generated, the communication client may display them so that the user can select the one to send.
As one possible implementation, the communication client may classify the at least two user expressions according to the current mood to which each corresponds, select at least one user expression from each category, and display the selected expressions for the user to choose from.
For example, as shown in Fig. 2B, the communication client 21 generates user expressions 22, 23 and 24 from three collected face images, classifies user expressions 22 and 23 as "sad" and user expression 24 as "angry", and displays at least one user expression from each category for the user to select.
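The per-mood grouping in step 204 can be sketched as follows; the pair format `(expression_id, mood)` and the choice of the first expression as each category's representative are assumptions for illustration.

```python
# Illustrative sketch of step 204's grouping: bucket the generated
# expressions by mood and surface one representative per bucket.

from collections import OrderedDict

def representatives_by_mood(expressions):
    """expressions: iterable of (expression_id, mood) pairs.
    Returns the first expression seen for each mood, in order."""
    buckets = OrderedDict()
    for expr_id, mood in expressions:
        buckets.setdefault(mood, expr_id)
    return buckets
```

With the Fig. 2B example, `representatives_by_mood([(22, "sad"), (23, "sad"), (24, "angry")])` keeps expression 22 for "sad" and expression 24 for "angry".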
In step 205, a selection signal for one of the user expressions is received.
The communication client receives the user's selection signal for one of the displayed user expressions, and thereby determines the user expression to be sent.
In step 206, according to the selection signal, the corresponding user expression is determined as the user expression to be sent.
In step 207, the user expression is determined as the user's real-time avatar and sent to the peer terminal; the peer terminal replaces the original user avatar with the real-time avatar.
The communication client determines the user expression selected by the user as the user's real-time avatar and sends it to the peer terminal. When the peer user's communication client receives the real-time avatar, it replaces the original user avatar with it, so that the peer user can more intuitively perceive the user's expression while communication messages are being exchanged.
For example, as shown in Fig. 2C, the communication client 21 displays the generated user expressions 22, 23 and 24, receives the user's selection signal for user expression 24, determines user expression 24 as the user's real-time avatar, and sends it to the communication client 25; the communication client 25 replaces the original user avatar with user expression 24 and displays it.
It should be noted that steps 204 to 206 are optional: the communication client may instead determine all of the generated user expressions as the user's real-time avatars and send them to the peer terminal together.
After receiving the at least two real-time avatars, the peer user's communication client displays them in rotation, so that the peer user can intuitively perceive how the other user's expression changed from reading the communication message to sending a reply.
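The rotating display of several received avatars can be sketched with a simple cycle; using `itertools.cycle` here is an assumption about one way to implement the rotation, not the patent's code.

```python
# Illustrative sketch of the peer-side rotating-avatar display: loop
# over the received real-time avatars for a given number of frames.

from itertools import cycle, islice

def avatar_rotation(avatars, frames):
    """Yield `frames` avatar ids, looping over the received list."""
    return list(islice(cycle(avatars), frames))
```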
The above embodiment is described with a picture expression as the user expression only by way of example, which does not limit the present disclosure.
In summary, in the expression sending method provided by this exemplary embodiment, a user expression is generated from the face image collected while the user is sending and receiving communication messages, and is sent to the peer terminal. This solves the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state, and achieves the effect that the communication messages the user sends and receives carry a user expression matching the user's current mood, so that the user's current emotional state is expressed accurately.
In the expression sending method provided by this exemplary embodiment, the face region is also extracted from the face image and image processing is performed on it to generate the user expression, which makes the generated user expression richer in form and expresses the user's current emotional state more accurately.
In the expression sending method provided by this exemplary embodiment, the generated user expression is also determined as the user's real-time avatar and sent to the peer terminal, where it replaces the original user avatar, so that the peer user can more intuitively perceive the changes in the user's expression while communication messages are being exchanged.
Fig. 3 is a flow chart of an expression sending method according to yet another exemplary embodiment. In this embodiment, the method is described as applied to a smartphone, and may comprise the following steps.
In step 301, when the application running in the foreground is a communication application, one face image is collected through a front-facing camera at every predetermined time interval.
This step is implemented similarly to step 201 above and is not repeated here.
In step 302, a face region is extracted from the face image.
Similarly to step 202 above, the communication client may use an iterative face detection algorithm to detect whether a collected face image contains a face; when a face is detected, it locates the facial feature points using a statistical model built from a training set, and extracts the face region according to the located feature points.
In step 303, an expression type of the face region is identified.
After extracting the face region from the face image, the communication client further identifies the expression type of that face region.
Since identifying the expression type of a face region from the located facial feature points is known in the art, it is not described in detail here.
In step 304, an expression matching the expression type is selected from a preset expression library as the user expression.
The communication client stores at least one expression in the preset expression library. An expression may have been generated in advance from a collected face image of the user, or may have been deposited by the user in advance, and each expression corresponds to at least one expression type. The correspondence between the stored expressions and the expression types may be as shown in Table 1.
Table 1
| Expression type | Expression | Expression storage address |
| Happy, glad | Expression A | Address A |
| Sad, upset | Expression B | Address B |
| Angry | Expression C | Address C |
According to the expression type identified for the face region, the communication client looks up the matching expression in the expression library and retrieves it from the expression's storage address as the user expression.
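The Table 1 lookup in step 304 can be sketched as follows. The Python representation of the library is an assumption for illustration; the entries mirror the table above.

```python
# Illustrative sketch of the step 304 library lookup: each entry maps
# a set of expression-type keywords to an expression and its storage
# address, mirroring Table 1.

EXPRESSION_LIBRARY = [
    ({"happy", "glad"}, "Expression A", "Address A"),
    ({"sad", "upset"}, "Expression B", "Address B"),
    ({"angry"}, "Expression C", "Address C"),
]

def lookup_expression(expression_type: str):
    """Return (expression, storage_address) matching the identified
    expression type, or None when no entry matches."""
    for keywords, expression, address in EXPRESSION_LIBRARY:
        if expression_type in keywords:
            return expression, address
    return None
```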
In step 305, the user expression is added to the communication message to be sent, and the communication message is sent to the peer terminal; the peer terminal displays the communication message carrying the user expression.
The communication client automatically adds the user expression to the communication message to be sent and sends it to the peer terminal together with the message. When there are multiple user expressions, the communication client may also generate a dynamic picture from the multiple user expressions and add the dynamic picture to the communication message; after receiving the dynamic picture, the peer terminal can intuitively present how the other user's expression changed from reading the communication message to sending a reply.
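Attaching the expression (or a multi-expression dynamic sequence) to the outgoing message, as step 305 describes, can be sketched as follows; the message dictionary layout is an assumption for illustration, not a disclosed wire format.

```python
# Illustrative sketch of step 305: attach one expression directly, or
# fold several into a "dynamic" frame sequence, as described above.

def build_outgoing_message(text, expressions):
    """Build a message dict carrying the text and the expression(s)."""
    message = {"text": text}
    if len(expressions) == 1:
        message["expression"] = expressions[0]
    elif len(expressions) > 1:
        message["expression"] = {"dynamic": True, "frames": list(expressions)}
    return message
```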
In summary, in the expression sending method provided by this exemplary embodiment, a user expression is generated from the face image collected while the user is sending and receiving communication messages, and is sent to the peer terminal. This solves the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state, and achieves the effect that the communication messages the user sends and receives carry a user expression matching the user's current mood, so that the user's current emotional state is expressed accurately.
In the expression sending method provided by this exemplary embodiment, the expression type of the face region is also identified, an expression matching that type is selected from the preset expression library as the user expression, and the expression is added to the communication message to be sent, which improves the efficiency with which the user sends a user expression.
It should be noted that in the above exemplary embodiments, steps 207 and 305 are interchangeable; that is, steps 201 to 206 combined with step 305 may form an independent embodiment, and steps 301 to 304 combined with step 207 may form an independent embodiment, which is not limited by the present disclosure.
The following are device embodiments of the present disclosure, which may be used to perform the method embodiments of the present disclosure. For details not disclosed in the device embodiments, please refer to the method embodiments of the present disclosure.
Fig. 4 is a block diagram of an expression sending device according to an exemplary embodiment. The expression sending device may be implemented, by software, hardware or a combination of both, as part or all of an electronic device on which a communication client is installed. The expression sending device may comprise:
a collection module 402 configured to collect a face image of a user while a communication message is being sent or received;
a generation module 404 configured to generate, from the face image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a text expression; and
a sending module 406 configured to send the user expression to a peer terminal in the course of sending and receiving the communication message.
In summary, the expression sending device provided by this exemplary embodiment generates a user expression from the face image collected while the user is sending and receiving communication messages, and sends the user expression to the peer terminal. This solves the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state, and achieves the effect that the communication messages the user sends and receives carry a user expression matching the user's current mood, so that the user's current emotional state is expressed accurately.
Fig. 5 is the block diagram of the expression dispensing device according to another exemplary embodiment, and this expression dispensing device can realize becoming the some or all of of the electronic equipment being provided with telecommunication customer end by software, hardware or both combinations.This expression dispensing device can comprise:
Acquisition module 502, is configured to gather the facial image of user when receiving and dispatching communication message;
Generation module 504, is configured to express one's feelings according to the user of Face image synthesis corresponding to user's current emotional, and user's expression is at least one in picture expression and letter expressing;
Sending module 506, is configured in the process of transmitting-receiving communication message, sends user's expression to opposite end.
Alternatively, generation module 504, comprising:
First extracts submodule 504A, is configured to extract the human face region in facial image;
Process submodule 504B, is configured to carry out image procossing to human face region, and the user generated corresponding to user's current emotional expresses one's feelings; Image procossing comprises at least one in filter process, stylization process and gray proces.
Optionally, the generation module 504 comprises:
a second extraction submodule 504C, configured to extract the face region from the facial image;
a recognition submodule 504D, configured to identify the expression type of the face region; and
a selection submodule 504E, configured to select, from a preset expression library, an expression matching that expression type as the user expression.
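This second generation path (classify the expression type of the face region, then look up a matching entry in a preset library) could be sketched as follows. The single mouth-curvature feature, its thresholds, and the library file names are invented placeholders standing in for a real facial-expression classifier and library.

```python
# Sketch of the classify-then-look-up generation path. All names and
# values here are illustrative assumptions, not from the disclosure.

PRESET_LIBRARY = {
    "happy": "emoji_smile.png",
    "sad": "emoji_cry.png",
    "neutral": "emoji_blank.png",
}

def classify_expression(mouth_curvature):
    """Stand-in for a real facial-expression classifier."""
    if mouth_curvature > 0.2:
        return "happy"
    if mouth_curvature < -0.2:
        return "sad"
    return "neutral"

def select_user_expression(mouth_curvature):
    """Pick the library expression matching the recognized type."""
    return PRESET_LIBRARY[classify_expression(mouth_curvature)]

print(select_user_expression(0.5))  # → emoji_smile.png
```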
Optionally, the sending module 506 comprises:
a first sending submodule 506A, configured to set the user expression as the user's real-time avatar and send it to the peer, the peer replacing the original user avatar with the real-time avatar;
or
a second sending submodule 506B, configured to add the user expression to the communication message to be sent and to send that message to the peer, the peer displaying the communication message carrying the user expression.
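A minimal sketch of the two delivery alternatives, assuming dictionary-based session and message shapes that the disclosure does not specify:

```python
# Sketch of the two delivery alternatives of the sending module.
# The dict shapes are assumptions; no wire format is fixed here.

def send_as_avatar(session, expression):
    """Alternative 1: the expression becomes the real-time avatar,
    which the peer substitutes for the original user avatar."""
    return dict(session, avatar=expression)

def send_with_message(text, expression):
    """Alternative 2: the expression is added to the outgoing message,
    and the peer displays the message carrying it."""
    return {"text": text, "expression": expression}

print(send_as_avatar({"peer": "alice", "avatar": "old.png"}, "smile.png"))
print(send_with_message("hello", "smile.png"))
```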
Optionally, the device further comprises:
a display module 507, configured to display the user expressions when at least two have been generated;
a receiving module 508, configured to receive a selection signal for one of the displayed user expressions; and
a determination module 509, configured to set, according to the selection signal, the corresponding user expression as the one to be sent.
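The display/select/determine flow might look like the sketch below, where the selection signal is modeled as a list index; on a device it would arrive as a touch event. Both choices are assumptions.

```python
# Sketch of the candidate-selection flow for multiple expressions.

def display_candidates(candidates):
    """Display module: label each candidate for presentation."""
    return [f"{i}: {name}" for i, name in enumerate(candidates)]

def determine_expression(candidates, selection_index):
    """Determination module: map the selection signal to the
    expression that will actually be sent."""
    return candidates[selection_index]

candidates = ["smile.png", "grin.png"]
if len(candidates) >= 2:  # the choice is only offered for two or more
    print(display_candidates(candidates))
print(determine_expression(candidates, 1))  # → grin.png
```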
Optionally, the acquisition module 502 is further configured to collect one facial image through a front-facing camera at every predetermined time interval while the application running in the foreground is a communication application.
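One way to realize this sampling policy is sketched below. Timestamps and the foreground-app name are injected explicitly so the logic can be exercised without a real clock, camera, or platform API; both hooks are assumptions.

```python
# Sketch of the capture policy: sample at most one frame per interval,
# and only while a communication app is in the foreground.

class CapturePolicy:
    def __init__(self, interval_s):
        self.interval_s = interval_s
        self.last_capture_s = None

    def should_capture(self, now_s, foreground_app):
        """Return True when a frame should be grabbed now."""
        if foreground_app != "communication":
            return False
        if (self.last_capture_s is not None
                and now_s - self.last_capture_s < self.interval_s):
            return False
        self.last_capture_s = now_s
        return True

policy = CapturePolicy(interval_s=5.0)
print(policy.should_capture(0.0, "communication"))  # → True
print(policy.should_capture(3.0, "communication"))  # → False (too soon)
print(policy.should_capture(6.0, "browser"))        # → False (wrong app)
print(policy.should_capture(6.0, "communication"))  # → True
```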
In summary, the expression sending device provided by this exemplary embodiment generates a user expression from the facial image of the user collected while communication messages are sent and received, and transmits that expression to the peer. It thereby solves the problem that the expression pictures in the expression library provided by a communication client cannot accurately convey the user's current emotional state, and achieves the effect that the communication messages the user sends and receives carry a user expression matching the user's current mood, so that the user's current emotional state is expressed accurately.
The device provided by this exemplary embodiment also extracts the face region from the facial image and performs image processing on it to generate the user expression, so that the generated expressions take richer forms and express the user's current emotional state more accurately.
The device also sets the generated user expression as the user's real-time avatar and sends it to the peer, which replaces the original user avatar with it, so that the peer user can perceive the sender's changing expression more intuitively while communication messages are exchanged.
The device also identifies the expression type of the face region, selects a matching expression from a preset expression library as the user expression, and adds it to the message to be sent, which improves the efficiency with which the user sends expressions.
Fig. 6 is a block diagram of an expression sending device 600 according to an exemplary embodiment. For example, the device 600 may be an electronic device on which a communication client is installed.
Referring to Fig. 6, the device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 typically controls the overall operation of the device 600, such as operations associated with display, telephone calls, data communication, camera operation, and recording. The processing component 602 may include one or more processors 620 to execute instructions so as to perform all or part of the steps of the methods described above. In addition, the processing component 602 may include one or more modules that facilitate interaction between it and the other components; for example, it may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operation of the device 600. Examples of such data include instructions for any application or method operated on the device 600, contact data, phonebook data, messages, pictures, video, and so on. The memory 604 may be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random-access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disc.
The power component 606 provides power to the various components of the device 600. It may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 600.
The multimedia component 608 includes a screen providing an output interface between the device 600 and the user. In some embodiments, the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, it may be implemented as a touchscreen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the panel; the touch sensors may sense not only the boundary of a touch or swipe action but also the duration and pressure associated with it. In some embodiments, the multimedia component 608 includes a front-facing camera and/or a rear-facing camera. When the device 600 is in an operating mode, such as a photographing mode or a video mode, the front-facing and/or rear-facing camera can receive external multimedia data. Each front-facing or rear-facing camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a microphone (MIC) configured to receive external audio signals when the device 600 is in an operating mode, such as a call mode, a recording mode, or a speech recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, the audio component 610 also includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, such as a keyboard, a click wheel, or buttons. These buttons may include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors that provide status assessments of various aspects of the device 600. For example, the sensor component 614 can detect the open/closed state of the device 600 and the relative positioning of components, such as the display and keypad of the device 600; it can also detect a change in position of the device 600 or of one of its components, the presence or absence of user contact with the device 600, the orientation or acceleration/deceleration of the device 600, and a change in its temperature. The sensor component 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact, and may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the device 600 and other equipment. The device 600 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In one exemplary embodiment, the communication component 616 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 also includes a near-field communication (NFC) module to facilitate short-range communication; the NFC module may be implemented based on radio-frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the device 600 may be implemented with one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above methods.
In an exemplary embodiment, there is also provided a non-transitory computer-readable storage medium including instructions, such as the memory 604 including instructions, which are executable by the processor 620 of the device 600 to perform the above methods. For example, the non-transitory computer-readable storage medium may be a ROM, a random-access memory (RAM), a CD-ROM, a magnetic tape, a floppy disc, an optical data storage device, and the like.
A non-transitory computer-readable storage medium: when the instructions in the storage medium are executed by the processor of the device 600, the device 600 is enabled to perform the expression sending method applied to an electronic device on which a communication client is installed.
Other embodiments of the disclosure will readily occur to those skilled in the art upon consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow its general principles and include such departures from the present disclosure as come within common knowledge or customary technical means in the art. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It should be understood that the disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the disclosure is limited only by the appended claims.
Claims (13)
1. An expression sending method, characterized in that the method comprises:
collecting a facial image of a user while communication messages are sent and received;
generating, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a character expression; and
sending the user expression to a peer in the course of sending and receiving the communication messages.
2. The method according to claim 1, characterized in that generating, from the facial image, the user expression corresponding to the user's current mood comprises:
extracting a face region from the facial image; and
performing image processing on the face region to generate the user expression corresponding to the user's current mood, the image processing comprising at least one of filter processing, stylization processing, and grayscale processing.
3. The method according to claim 1, characterized in that generating, from the facial image, the user expression corresponding to the user's current mood comprises:
extracting a face region from the facial image;
identifying an expression type of the face region; and
selecting, from a preset expression library, an expression matching the expression type as the user expression.
4. The method according to any one of claims 1 to 3, characterized in that sending the user expression to the peer in the course of sending and receiving the communication messages comprises:
setting the user expression as the user's real-time avatar and sending it to the peer, the peer being configured to replace the original user avatar with the real-time avatar;
or
adding the user expression to a communication message to be sent and sending the communication message to the peer, the peer being configured to display the communication message carrying the user expression.
5. The method according to any one of claims 1 to 3, characterized in that the method further comprises:
displaying the user expressions when at least two have been generated;
receiving a selection signal for one of the displayed user expressions; and
setting, according to the selection signal, the corresponding user expression as the user expression to be sent.
6. The method according to any one of claims 1 to 3, characterized in that collecting the facial image of the user while communication messages are sent and received comprises:
collecting one facial image through a front-facing camera at every predetermined time interval while the application running in the foreground is a communication application.
7. An expression sending device, characterized in that the device comprises:
an acquisition module, configured to collect a facial image of a user while communication messages are sent and received;
a generation module, configured to generate, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a character expression; and
a sending module, configured to send the user expression to a peer in the course of sending and receiving the communication messages.
8. The device according to claim 7, characterized in that the generation module comprises:
a first extraction submodule, configured to extract a face region from the facial image; and
a processing submodule, configured to perform image processing on the face region to generate the user expression corresponding to the user's current mood, the image processing comprising at least one of filter processing, stylization processing, and grayscale processing.
9. The device according to claim 7, characterized in that the generation module comprises:
a second extraction submodule, configured to extract a face region from the facial image;
a recognition submodule, configured to identify an expression type of the face region; and
a selection submodule, configured to select, from a preset expression library, an expression matching the expression type as the user expression.
10. The device according to any one of claims 7 to 9, characterized in that the sending module comprises:
a first sending submodule, configured to set the user expression as the user's real-time avatar and send it to the peer, the peer being configured to replace the original user avatar with the real-time avatar;
or
a second sending submodule, configured to add the user expression to a communication message to be sent and to send the communication message to the peer, the peer being configured to display the communication message carrying the user expression.
11. The device according to any one of claims 7 to 9, characterized in that the device further comprises:
a display module, configured to display the user expressions when at least two have been generated;
a receiving module, configured to receive a selection signal for one of the displayed user expressions; and
a determination module, configured to set, according to the selection signal, the corresponding user expression as the user expression to be sent.
12. The device according to any one of claims 7 to 9, characterized in that the acquisition module is further configured to collect one facial image through a front-facing camera at every predetermined time interval while the application running in the foreground is a communication application.
13. An expression sending device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
collect a facial image of a user while communication messages are sent and received;
generate, from the facial image, a user expression corresponding to the user's current mood, the user expression being at least one of a picture expression and a character expression; and
send the user expression to a peer in the course of sending and receiving the communication messages.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510093000.5A CN104753766B (en) | 2015-03-02 | 2015-03-02 | Expression sending method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104753766A | 2015-07-01 |
CN104753766B | 2019-03-22 |
Family
ID=53592908
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510093000.5A Active CN104753766B (en) | 2015-03-02 | 2015-03-02 | Expression sending method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104753766B (en) |
Cited By (33)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105578113A (en) * | 2016-02-02 | 2016-05-11 | 北京小米移动软件有限公司 | Video communication method, device and system |
CN105744206A (en) * | 2016-02-02 | 2016-07-06 | 北京小米移动软件有限公司 | Video communication method, device and system |
CN105871696A (en) * | 2016-05-25 | 2016-08-17 | 维沃移动通信有限公司 | Information transmitting and receiving methods and mobile terminal |
CN106059890A (en) * | 2016-05-09 | 2016-10-26 | 珠海市魅族科技有限公司 | Information display method and system |
CN106339103A (en) * | 2016-08-15 | 2017-01-18 | 珠海市魅族科技有限公司 | Image checking method and device |
CN106886606A (en) * | 2017-03-21 | 2017-06-23 | 联想(北京)有限公司 | Method and system for recommending expression according to user speech |
WO2017198210A1 (en) * | 2016-05-19 | 2017-11-23 | 腾讯科技(深圳)有限公司 | Emoji sending method, computer device and computer readable storage medium |
CN107451560A (en) * | 2017-07-31 | 2017-12-08 | 广东欧珀移动通信有限公司 | User's expression recognition method, device and terminal |
CN107729543A (en) * | 2017-10-31 | 2018-02-23 | 上海掌门科技有限公司 | Expression picture recommends method and apparatus |
CN109215007A (en) * | 2018-09-21 | 2019-01-15 | 维沃移动通信有限公司 | A kind of image generating method and terminal device |
CN109496289A (en) * | 2018-06-20 | 2019-03-19 | 优视科技新加坡有限公司 | A kind of terminal control method and device |
CN109691074A (en) * | 2016-09-23 | 2019-04-26 | 苹果公司 | The image data of user's interaction for enhancing |
CN109688041A (en) * | 2017-10-18 | 2019-04-26 | 腾讯科技(深圳)有限公司 | Information processing method, device and server, intelligent terminal, storage medium |
WO2019090603A1 (en) * | 2017-11-09 | 2019-05-16 | 深圳传音通讯有限公司 | Expression adding method and adding apparatus based on photography function |
CN110019883A (en) * | 2017-07-18 | 2019-07-16 | 腾讯科技(深圳)有限公司 | Obtain the method and device of expression picture |
CN110264544A (en) * | 2019-05-30 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium and electronic device |
CN110597384A (en) * | 2019-08-23 | 2019-12-20 | 苏州佳世达光电有限公司 | Information communication method and system |
WO2020207041A1 (en) * | 2019-04-10 | 2020-10-15 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | System and method for dynamically recommending inputs based on identification of user emotions |
US10845968B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending |
US10846905B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending |
US10860096B2 (en) | 2018-09-28 | 2020-12-08 | Apple Inc. | Device control using gaze information |
US10861248B2 (en) | 2018-05-07 | 2020-12-08 | Apple Inc. | Avatar creation user interface |
US10902424B2 (en) | 2014-05-29 | 2021-01-26 | Apple Inc. | User interface for payments |
US10956550B2 (en) | 2007-09-24 | 2021-03-23 | Apple Inc. | Embedded authentication systems in an electronic device |
US11100349B2 (en) | 2018-09-28 | 2021-08-24 | Apple Inc. | Audio assisted enrollment |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
US11170085B2 (en) | 2018-06-03 | 2021-11-09 | Apple Inc. | Implementation of biometric authentication |
US11200309B2 (en) | 2011-09-29 | 2021-12-14 | Apple Inc. | Authentication with secondary approver |
US11206309B2 (en) | 2016-05-19 | 2021-12-21 | Apple Inc. | User interface for remote authorization |
US11287942B2 (en) | 2013-09-09 | 2022-03-29 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces |
US11386189B2 (en) | 2017-09-09 | 2022-07-12 | Apple Inc. | Implementation of biometric authentication |
US11393258B2 (en) | 2017-09-09 | 2022-07-19 | Apple Inc. | Implementation of biometric authentication |
US11676373B2 (en) | 2008-01-03 | 2023-06-13 | Apple Inc. | Personal computing device control using face detection and recognition |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040235531A1 (en) * | 2003-05-20 | 2004-11-25 | Ntt Docomo, Inc. | Portable terminal, and image communication program |
US20100057875A1 (en) * | 2004-02-04 | 2010-03-04 | Modu Ltd. | Mood-based messaging |
CN101710910A (en) * | 2009-12-09 | 2010-05-19 | 深圳华为通信技术有限公司 | Method for transmitting emotion information of terminal user and mobile terminal |
CN102479388A (en) * | 2010-11-22 | 2012-05-30 | 北京盛开互动科技有限公司 | Expression interaction method based on face tracking and analysis |
US20130144937A1 (en) * | 2011-12-02 | 2013-06-06 | Samsung Electronics Co., Ltd. | Apparatus and method for sharing user's emotion |
CN103886632A (en) * | 2014-01-06 | 2014-06-25 | 宇龙计算机通信科技(深圳)有限公司 | Method for generating user expression head portrait and communication terminal |
CN103916536A (en) * | 2013-01-07 | 2014-07-09 | 三星电子株式会社 | Mobile device user interface method and system |
CN103926997A (en) * | 2013-01-11 | 2014-07-16 | 北京三星通信技术研究有限公司 | Method for determining emotional information based on user input and terminal |
CN104063683A (en) * | 2014-06-06 | 2014-09-24 | 北京搜狗科技发展有限公司 | Expression input method and device based on face identification |
-
2015
- 2015-03-02 CN CN201510093000.5A patent/CN104753766B/en active Active
Patent Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040235531A1 (en) * | 2003-05-20 | 2004-11-25 | Ntt Docomo, Inc. | Portable terminal, and image communication program |
US20100057875A1 (en) * | 2004-02-04 | 2010-03-04 | Modu Ltd. | Mood-based messaging |
CN101710910A (en) * | 2009-12-09 | 2010-05-19 | 深圳华为通信技术有限公司 | Method for transmitting emotion information of terminal user and mobile terminal |
CN102479388A (en) * | 2010-11-22 | 2012-05-30 | 北京盛开互动科技有限公司 | Expression interaction method based on face tracking and analysis |
US20130144937A1 (en) * | 2011-12-02 | 2013-06-06 | Samsung Electronics Co., Ltd. | Apparatus and method for sharing user's emotion |
CN103916536A (en) * | 2013-01-07 | 2014-07-09 | 三星电子株式会社 | Mobile device user interface method and system |
CN103926997A (en) * | 2013-01-11 | 2014-07-16 | 北京三星通信技术研究有限公司 | Method for determining emotional information based on user input and terminal |
CN103886632A (en) * | 2014-01-06 | 2014-06-25 | 宇龙计算机通信科技(深圳)有限公司 | Method for generating user expression head portrait and communication terminal |
CN104063683A (en) * | 2014-06-06 | 2014-09-24 | 北京搜狗科技发展有限公司 | Expression input method and device based on face identification |
Cited By (55)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11468155B2 (en) | 2007-09-24 | 2022-10-11 | Apple Inc. | Embedded authentication systems in an electronic device |
US10956550B2 (en) | 2007-09-24 | 2021-03-23 | Apple Inc. | Embedded authentication systems in an electronic device |
US11676373B2 (en) | 2008-01-03 | 2023-06-13 | Apple Inc. | Personal computing device control using face detection and recognition |
US11200309B2 (en) | 2011-09-29 | 2021-12-14 | Apple Inc. | Authentication with secondary approver |
US11755712B2 (en) | 2011-09-29 | 2023-09-12 | Apple Inc. | Authentication with secondary approver |
US11287942B2 (en) | 2013-09-09 | 2022-03-29 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces |
US11494046B2 (en) | 2013-09-09 | 2022-11-08 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs |
US11768575B2 (en) | 2013-09-09 | 2023-09-26 | Apple Inc. | Device, method, and graphical user interface for manipulating user interfaces based on unlock inputs |
US11836725B2 (en) | 2014-05-29 | 2023-12-05 | Apple Inc. | User interface for payments |
US10902424B2 (en) | 2014-05-29 | 2021-01-26 | Apple Inc. | User interface for payments |
US10977651B2 (en) | 2014-05-29 | 2021-04-13 | Apple Inc. | User interface for payments |
CN105578113A (en) * | 2016-02-02 | 2016-05-11 | 北京小米移动软件有限公司 | Video communication method, device and system |
CN105744206A (en) * | 2016-02-02 | 2016-07-06 | 北京小米移动软件有限公司 | Video communication method, device and system |
CN106059890A (en) * | 2016-05-09 | 2016-10-26 | 珠海市魅族科技有限公司 | Information display method and system |
CN106059890B (en) * | 2016-05-09 | 2019-04-12 | 珠海市魅族科技有限公司 | Information displaying method and system |
WO2017198210A1 (en) * | 2016-05-19 | 2017-11-23 | 腾讯科技(深圳)有限公司 | Emoji sending method, computer device and computer readable storage medium |
US10708207B2 (en) * | 2016-05-19 | 2020-07-07 | Tencent Technology (Shenzhen) Company Limited | Emoticon sending method, computer device and computer-readable storage medium |
US11206309B2 (en) | 2016-05-19 | 2021-12-21 | Apple Inc. | User interface for remote authorization |
CN105871696A (en) * | 2016-05-25 | 2016-08-17 | 维沃移动通信有限公司 | Information transmitting and receiving methods and mobile terminal |
CN105871696B (en) * | 2016-05-25 | 2020-02-18 | 维沃移动通信有限公司 | Information sending and receiving method and mobile terminal |
CN106339103A (en) * | 2016-08-15 | 2017-01-18 | 珠海市魅族科技有限公司 | Image checking method and device |
CN109691074A (en) * | 2016-09-23 | 2019-04-26 | 苹果公司 | The image data of user's interaction for enhancing |
CN106886606A (en) * | 2017-03-21 | 2017-06-23 | 联想(北京)有限公司 | Method and system for recommending expression according to user speech |
US10997768B2 (en) | 2017-05-16 | 2021-05-04 | Apple Inc. | Emoji recording and sending |
US11532112B2 (en) | 2017-05-16 | 2022-12-20 | Apple Inc. | Emoji recording and sending |
US10845968B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending |
US10846905B2 (en) | 2017-05-16 | 2020-11-24 | Apple Inc. | Emoji recording and sending |
CN110019883A (en) * | 2017-07-18 | 2019-07-16 | 腾讯科技(深圳)有限公司 | Obtain the method and device of expression picture |
CN107451560A (en) * | 2017-07-31 | 2017-12-08 | 广东欧珀移动通信有限公司 | User's expression recognition method, device and terminal |
US11393258B2 (en) | 2017-09-09 | 2022-07-19 | Apple Inc. | Implementation of biometric authentication |
US11765163B2 (en) | 2017-09-09 | 2023-09-19 | Apple Inc. | Implementation of biometric authentication |
US11386189B2 (en) | 2017-09-09 | 2022-07-12 | Apple Inc. | Implementation of biometric authentication |
CN109688041A (en) * | 2017-10-18 | 2019-04-26 | 腾讯科技(深圳)有限公司 | Information processing method, device and server, intelligent terminal, storage medium |
CN107729543A (en) * | 2017-10-31 | 2018-02-23 | 上海掌门科技有限公司 | Expression picture recommends method and apparatus |
WO2019085625A1 (en) * | 2017-10-31 | 2019-05-09 | 上海掌门科技有限公司 | Emotion picture recommendation method and apparatus |
WO2019090603A1 (en) * | 2017-11-09 | 2019-05-16 | 深圳传音通讯有限公司 | Expression adding method and adding apparatus based on photography function |
CN111656318A (en) * | 2017-11-09 | 2020-09-11 | 深圳传音通讯有限公司 | Facial expression adding method and facial expression adding device based on photographing function |
US10861248B2 (en) | 2018-05-07 | 2020-12-08 | Apple Inc. | Avatar creation user interface |
US11380077B2 (en) | 2018-05-07 | 2022-07-05 | Apple Inc. | Avatar creation user interface |
US11682182B2 (en) | 2018-05-07 | 2023-06-20 | Apple Inc. | Avatar creation user interface |
US11928200B2 (en) | 2018-06-03 | 2024-03-12 | Apple Inc. | Implementation of biometric authentication |
US11170085B2 (en) | 2018-06-03 | 2021-11-09 | Apple Inc. | Implementation of biometric authentication |
CN109496289A (en) * | 2018-06-20 | 2019-03-19 | 优视科技新加坡有限公司 | A kind of terminal control method and device |
CN109215007B (en) * | 2018-09-21 | 2022-04-12 | 维沃移动通信有限公司 | Image generation method and terminal equipment |
CN109215007A (en) * | 2018-09-21 | 2019-01-15 | 维沃移动通信有限公司 | A kind of image generating method and terminal device |
US11100349B2 (en) | 2018-09-28 | 2021-08-24 | Apple Inc. | Audio assisted enrollment |
US10860096B2 (en) | 2018-09-28 | 2020-12-08 | Apple Inc. | Device control using gaze information |
US11619991B2 (en) | 2018-09-28 | 2023-04-04 | Apple Inc. | Device control using gaze information |
US11809784B2 (en) | 2018-09-28 | 2023-11-07 | Apple Inc. | Audio assisted enrollment |
US11107261B2 (en) | 2019-01-18 | 2021-08-31 | Apple Inc. | Virtual avatar animation based on facial feature movement |
WO2020207041A1 (en) * | 2019-04-10 | 2020-10-15 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | System and method for dynamically recommending inputs based on identification of user emotions |
CN113785539A (en) * | 2019-04-10 | 2021-12-10 | Oppo广东移动通信有限公司 | System and method for dynamically recommending input based on recognition of user emotion |
CN110264544A (en) * | 2019-05-30 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Image processing method and device, storage medium and electronic device |
CN110264544B (en) * | 2019-05-30 | 2023-08-25 | 腾讯科技(深圳)有限公司 | Picture processing method and device, storage medium and electronic device |
CN110597384A (en) * | 2019-08-23 | 2019-12-20 | 苏州佳世达光电有限公司 | Information communication method and system |
Also Published As
Publication number | Publication date |
---|---|
CN104753766B (en) | 2019-03-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104753766A (en) | Expression sending method and device | |
CN104021350B (en) | Privacy information hidden method and device | |
CN105528606A (en) | Region identification method and device | |
CN104320525A (en) | Method and device for identifying telephone number | |
CN104850828A (en) | Person identification method and person identification device | |
CN104378441A (en) | Schedule creating method and device | |
CN104035558A (en) | Terminal device control method and device | |
CN104933170A (en) | Information exhibition method and device | |
CN104834559A (en) | Notification message display method and device | |
CN105096144A (en) | Social relation analysis method and social relation analysis device | |
CN105472583A (en) | Message processing method and apparatus | |
CN104731868A (en) | Method and device for intercepting advertisements | |
CN105404401A (en) | Input processing method, apparatus and device | |
CN105430146A (en) | Telephone number identification method and device | |
CN105117207A (en) | Album creating method and apparatus | |
CN104850849A (en) | Method, device and terminal for sending characters | |
CN104598534A (en) | Picture folding method and device | |
CN105224601A (en) | Method and apparatus for extracting time information | |
CN105139378A (en) | Card boundary detection method and apparatus | |
CN105335714A (en) | Photograph processing method, device and apparatus | |
CN104615663A (en) | File sorting method and device and terminal | |
CN104391878A (en) | Book search method and book search device | |
CN105095868A (en) | Picture matching method and apparatus | |
CN105511777A (en) | Session display method and device for a touch display screen | |
CN105100193A (en) | Cloud business card recommendation method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||