US20150161508A1 - Multiple output relaxation machine learning model - Google Patents
- Publication number
- US20150161508A1
- Authority
- US
- United States
- Prior art keywords
- output
- lead
- components
- agent
- machine learning
- Prior art date
- Legal status
- Abandoned
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N20/00—Machine learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
Definitions
- MOR multiple output relaxation
- Machine learning is a form of artificial intelligence that is employed to allow computers to evolve behaviors based on empirical data.
- Machine learning may take advantage of training examples to capture characteristics of interest of their unknown underlying probability distribution. Training data may be seen as examples that illustrate relations between observed variables.
- A major focus of machine learning research is to automatically learn to recognize complex patterns and make intelligent decisions based on data.
- SP traditional structured prediction
- Traditional SP is a single model approach to dependent output.
- the output vector z is fully conditioned on the input feature vector x and the different output components of output vector z (z 1 , z 2 , . . . ) are conditionally independent of each other given the input feature vector x.
- the probability of z 1 given x is equal to the probability of z 1 given x and z 2 , i.e. p(z 1 | x) = p(z 1 | x, z 2 ).
- traditional SP cannot handle an interdependent relationship between different output components.
- traditional SP cannot handle a problem having multiple correct output decisions for a given input.
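To make this limitation concrete, the following sketch (hypothetical weights and dimensions, not taken from the patent) shows a traditional SP predictor in which each output component is scored from the input x alone, so no choice of one component can ever influence another:

```python
import numpy as np

def sp_predict(x, weights):
    """Traditional SP: each output component z_i is scored from the input
    x alone, so by construction p(z_i | x) = p(z_i | x, z_j)."""
    probs = []
    for W in weights:                 # one weight matrix per component
        logits = W @ x
        e = np.exp(logits - logits.max())
        probs.append(e / e.sum())     # independent softmax per component
    return probs

# Hypothetical toy setup: two components, three possible values each.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)) for _ in range(2)]
x = np.array([1.0, 0.5, -0.2, 0.3])
p_z1, p_z2 = sp_predict(x, weights)
# p_z1 is fixed once x is fixed; nothing lets a choice of z2 change it.
```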
- example embodiments described herein relate to methods of employing a multiple output relaxation (MOR) machine learning model to predict multiple interdependent output components of a multiple output dependency (MOD) output decision.
- MOD multiple output dependency
- a method for employing an MOR machine learning model to predict multiple interdependent output components of an MOD output decision may include training a classifier for each of multiple interdependent output components of an MOD output decision to predict the component based on an input and based on all of the other components.
- the method may also include initializing each possible value for each of the components to a predetermined output value.
- the method may further include running relaxation iterations on each of the classifiers to update the output value of each possible value for each of the components until a relaxation state reaches an equilibrium or a maximum number of relaxation iterations is reached.
- the method may also include retrieving an optimal component from each of the classifiers.
- FIG. 1 is a schematic block diagram illustrating an example lead response management (LRM) system including an example contact server;
- LRM lead response management
- FIG. 2 is a schematic block diagram illustrating additional details of the example contact server of FIG. 1 ;
- FIG. 3A is a schematic flow chart diagram illustrating an example multiple output relaxation (MOR) machine learning model
- FIG. 3B is a text diagram illustrating an example input feature vector
- FIG. 3C is a schematic flow chart diagram illustrating a first example multilayer perceptron (MLP) neural network that is employed to predict a first interdependent output component;
- MLP multilayer perceptron
- FIG. 3D is a schematic flow chart diagram illustrating a second example MLP neural network that is employed to predict a second interdependent output component
- FIG. 4 is a schematic flow chart diagram of an example method of employing an MOR machine learning model to predict multiple interdependent output components of a multiple output dependency (MOD) output decision;
- FIG. 5 is a schematic flow chart diagram of multiple correct MOD output decisions
- FIG. 6 illustrates an example computer screen image of a user interface of an example customer relationship management (CRM) system
- FIG. 7 illustrates an example computer screen image of a user interface of an example LRM system
- FIG. 8A illustrates an example computer screen image of an example lead advisor display before a lead has been selected by an agent
- FIG. 8B illustrates an example computer screen image of the example lead advisor display of FIG. 8A after a lead has been selected by an agent.
- Some embodiments described herein include methods of employing a multiple output relaxation (MOR) machine learning model to predict multiple interdependent output components of a multiple output dependency (MOD) output decision.
- Some example MOD problems include, but are not limited to: 1) which combination of stocks to purchase to balance a mutual fund given current stock market conditions, 2) which combination of players to substitute into a lineup of a sports team given the current lineup of the opposing team, and 3) which combination of shirt, pants, belt, and shoes to wear given the current weather conditions.
- each component of the output decision depends on both the input (current stock market conditions, an opposing team lineup, or current weather conditions) and the other components (the other stocks purchased, the other substituted player, or the other clothing selected).
- Other examples of MOD problems may relate to hostage negotiations, retail sales, online shopping carts, web content management systems, customer service, contract negotiations, or crisis management, or any other situation that requires an output decision with multiple interdependent output components.
- leads may come from a variety of sources including, but not limited to, a web form, a referral, and a list purchased from a lead vendor.
- the output decision of how to respond to the lead may include multiple interdependent components such as, but not limited to, who should respond to the lead, what method should be employed to respond to the lead, what content should be included in the response message, and when should the response take place.
- Each of these components of the output decision depends on both the input (the lead information) and the other components. For example, the timing of the response may depend on the availability of the person selected to respond.
- the content of the message may depend on the method of response (e.g. since the length of an email message is not limited like the length of a text message).
- the example methods disclosed herein are generally explained in the context of LRM, it is understood that the example methods disclosed herein may be employed to solve any MOD problem.
- FIG. 1 is a schematic block diagram illustrating an example LRM system 100 .
- the example LRM system 100 includes various components such as a public switched telephone network (PSTN) 110 , user communication and/or computing devices 112 , a TDM gateway 120 connecting the PSTN 110 to an internet 130 , remote agent stations 121 , workstations 128 , a call center 140 , an internet gateway 150 connecting a local area network 160 to the internet 130 , a web server 170 , a contact server 200 , a lead data server 190 , local agent workstations 192 , and control workstations 194 .
- the various components of the example LRM system 100 are operably interconnected to collaboratively improve a process of responding to leads in a manner that optimizes contact or qualification rates.
- the remote agent stations 121 include wireless phones 122 , wired phones 124 , wireless computing devices 126 , and workstations 128 .
- the wireless phones 122 or the wired phones 124 may be voice over internet protocol (VOIP) phones.
- the computing devices 126 or the workstations 128 may be equipped with a soft phone.
- the remote agent stations 121 enable agents to respond to leads from remote locations similar to agents stationed at the local agent workstations 192 and directly connected to the local area network 160 .
- the local area network 160 resides within a call center 140 that uses VoIP and other messaging services to contact users connected to the PSTN 110 and/or the internet 130 .
- the various servers in the call center 140 function cooperatively to acquire leads, store lead information, analyze lead information to decide how best to respond to each lead, distribute leads to agents via agent terminals such as the local agent workstations 192 and the remote agent stations 121 for example, facilitate communication between agents and leads via the PSTN 110 or the internet 130 for example, track attempted and successful agent interaction with leads, and store updated lead information.
- the web server 170 may provide one or more web forms 172 to users via browser displayable web pages.
- the web forms may be displayed to the users via a variety of communication and/or computing devices 112 including phones, smart phones, tablet computers, laptop computers, desktop computers, media players, and the like that are equipped with a browser.
- the web forms 172 may prompt the user for contact data such as name, title, industry, company information, address, phone number, fax number, email address, instant messaging address, referral information, availability information, and interest information.
- the web server 170 may receive the lead information associated with the user in response to the user submitting the web form and provide the lead information to the contact server 200 and the lead data server 190 , for example.
- the contact server 200 and the lead data server 190 may receive the lead information and retrieve additional data associated with the associated user such as web analytics data, reverse lookup data, credit check data, web site data, web site rank information, do-not-call registry data, data from a customer relationship management (CRM) database, and background check information.
- the lead data server 190 may store the collected data in a lead profile (not shown) and associate the user with an LRM plan (not shown).
- the contact server 200 may contact a lead in accordance with an associated LRM plan and deliver lead information to an agent to enable the agent to respond to the lead in a manner that optimizes contact or qualification rates.
- the particular purpose of such contact or qualification may include, for example, establishing a relationship with the lead, thanking the lead for their interest in a product, answering questions from the lead, informing the lead of a product or service offering, selling a product or service, surveying the lead on their needs and preferences, and providing support to the lead.
- the contact server 200 may deliver the information to the agent using a variety of delivery services such as email services, instant messaging services, short message services, enhanced messaging services, text messaging services, telephony-based text-to-speech services, and multimedia delivery services.
- the agent terminals 121 or 192 may present the lead information to the agent and enable the agent to respond to the lead by communicating with the lead.
- FIG. 2 is a schematic block diagram illustrating additional details of the example contact server 200 of FIG. 1 .
- the contact server 200 includes a contact manager 210 , a dialing module 220 , a messaging module 230 , a PBX module 240 and termination hardware 250 .
- the contact manager includes an MOR machine learning module 212 , an LRM plan selection module 214 , an agent selection module 216 , and a lead data server access module 218 .
- the depicted modules may reside partially or wholly on other servers such as the web server 170 and the lead data server 190 for example.
- the contact server 200 enables an agent to communicate with a lead in conjunction with an LRM plan.
- the contact manager 210 establishes contact with users and agents and manages contact sessions where needed.
- the contact manager 210 may initiate contact via the dialing module 220 and/or the messaging module 230 .
- the MOR machine learning module 212 employs an MOR machine learning model to predict multiple interdependent output components of an MOD output decision, according to the example methods disclosed herein.
- the MOR machine learning module 212 utilizes the lead data server access module 218 to access and analyze lead information stored on the lead data server 190 of FIG. 1 . Once one or more response decisions are predicted for a particular lead, the one or more response decisions may be conveyed to the LRM plan selection module 214 .
- the LRM plan selection module 214 presents and/or selects one or more LRM plans for a particular lead and/or offering.
- the agent selection module 216 selects an agent, class of agent, or agent skill set that is designated in each LRM plan.
- the lead data server access module 218 enables the contact manager 210 to access lead information that is useful for contacting a lead.
- the data storage access module 218 enables the contact manager 210 to access the lead data server 190 .
- the dialing module 220 establishes telephone calls including VOIP telephone calls and PSTN calls. In one embodiment, the dialing module 220 receives a unique call identifier, establishes a telephone call, and notifies the contact manager 210 that the call has been established. Various embodiments of the dialing module 220 incorporate auxiliary functions such as retrieving telephone numbers from a database, comparing telephone numbers against a restricted calling list, transferring a call, conferencing a call, monitoring a call, playing recorded messages, detecting answering machines, recording voice messages, and providing interactive voice response (IVR) capabilities. In some instances, the dialing module 220 directs the PBX module 240 to perform the auxiliary functions.
- IVR interactive voice response
- the messaging module 230 sends and receives messages to agents and leads. To send and receive messages, the messaging module 230 may leverage one or more delivery or messaging services such as email services, instant messaging services, short message services, text message services, and enhanced messaging services.
- delivery or messaging services such as email services, instant messaging services, short message services, text message services, and enhanced messaging services.
- the PBX module 240 connects a private phone network to the PSTN 110 .
- the contact manager 210 or dialing module 220 may direct the PBX module 240 to connect a line on the private phone network with a number on the PSTN 110 or internet 130 .
- the PBX module 240 provides some of the auxiliary functions invoked by the dialing module 220 .
- the termination hardware 250 routes calls from a local network to the PSTN 110 .
- the termination hardware 250 interfaces to conventional phone terminals.
- the termination hardware 250 provides some of the auxiliary functions invoked by the dialing module 220 .
- FIG. 3A is a schematic flow chart diagram illustrating an example MOR machine learning model 300 .
- the model 300 is configured to be employed in sequential decision making to predict multiple interdependent output components, namely z 1 , z 2 , z 3 , and z 4 , of an MOD output decision z.
- Although the output decision z includes four (4) components, it is understood that an MOR machine learning model could be employed in connection with any output decision having two (2) or more interdependent components.
- the model 300 may be trained based on recorded historical data so that it can make optimal (or near-optimal) decisions, especially when a decision is comprised of many variables that need to be determined at the same time.
- Although the model 300 may be employed in any number of applications to produce MOD output decisions, the model 300 is employed in FIG. 3A to produce an LRM MOD output decision. In particular, the model 300 is employed to decide for a given lead what response should be performed next in a sequence that will optimize the contact or qualification of the lead.
- z 1 response agent title
- z 2 response method
- z 3 response message type
- z 4 response timing.
- the input x may be an input feature vector that includes information about a particular lead.
- response agent title, response method, response message type, and response timing are only example components of an LRM MOD output decision.
- Other example components may include, but are not limited to, agent or lead demographic profile, agent or lead histographic profile (i.e. a profile of events in the life of the agent or the lead which could include past interactions between the agent and the lead), lead contact title (i.e. the title of a particular contact person within a lead organization), agent or lead psychographic profile (i.e. a profile of the psychological characteristics of the agent or the lead), agent or lead social network profile (i.e. a profile of the social network connections of the agent or the lead), agent or lead geographic profile (i.e. cities, states, or other geographic designations that define current and/or past locations of the agent or the lead), response frequency (i.e. how often an agent contacts a lead), and response persistence (i.e. how long an agent persists in contacting a lead).
- FIG. 3B is a text diagram illustrating an example input feature vector x.
- the example input feature vector x of FIG. 3B includes information about a particular lead.
- the example input feature vector x includes constant features about a lead, such as lead title and lead industry, and interactive features related to interactions between an agent and the lead, such as previous number of dials and previous action.
- the lead information provided by the example input feature vector x may be employed as input by the model 300 of FIG. 3A in order to determine what is the next sequential response that should be performed that will optimize the contact or qualification of the lead.
- the input features of lead source, lead title, lead industry, lead state, lead created date, lead company size, lead status, number of previous dials, number of previous emails, previous action, and hours since last action are only example input features to an LRM MOD output decision.
- Other example input features may include, but are not limited to, response agent title, response method, response message type, response timing, agent or lead demographic profile, agent or lead histographic profile, agent or lead psychographic profile, agent or lead social network profile, agent or lead geographic profile, response frequency, and response persistence.
- input features could include data on current events, such as current events related to politics, economics, natural phenomena, society, and culture. It is further understood that where a particular input feature is employed as an input to a particular LRM MOD output decision, the particular input feature will not be included among the output components of the particular LRM MOD output decision.
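As an illustration only, an input feature vector with the fields listed above might be represented as follows; all field names and values here are hypothetical, and a real system would encode these numerically before feeding them to the model:

```python
# Hypothetical representation of the example input feature vector x,
# using the input features listed above. Values are invented for
# illustration and do not come from the patent.
input_feature_vector = {
    "lead_source": "web form",
    "lead_title": "VP Sales",
    "lead_industry": "software",
    "lead_state": "UT",
    "lead_created_date": "2014-11-03",
    "lead_company_size": 250,
    "lead_status": "new",
    "previous_dials": 2,
    "previous_emails": 1,
    "previous_action": "email",
    "hours_since_last_action": 36,
}
```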
- a decision on the component z 2 (response method) may have an influence on the decision for the component z 4 (response timing).
- the components of z are dependent both on an input x and on the other components of z.
- the probability of z 1 given x is not necessarily equal to the probability of z 1 given x and z 2 , i.e. p(z 1 | x) ≠ p(z 1 | x, z 2 ).
- the model 300 of FIG. 3A employs a base classifier.
- the model 300 employs multilayer perceptron (“MLP”) neural networks MLP1, MLP2, MLP3, and MLP4 as base classifiers. It is understood, however, that the model 300 could alternatively employ other types of base classifiers including, but not limited to other multilayer neural networks, decision trees, and support vector machines.
- FIG. 3C is a schematic flow chart diagram illustrating the MLP neural network MLP1 that is employed to predict the first interdependent output component z 1 based on the input feature vector x of FIG. 3B and based on the predicted second interdependent output component z 2 of FIG. 3D as well as the predicted third and fourth interdependent output components z 3 and z 4 .
- the input feature vector x and the input components z 2 , z 3 , and z 4 are received by an input layer of the MLP neural network MLP1 and then processed by a hidden layer and an output layer to predict z 1 ∈ {z 11 , z 12 , z 13 }.
- FIG. 3D is a schematic flow chart diagram illustrating the MLP neural network MLP2 that is employed to predict the second interdependent output component z 2 based on the input feature vector x of FIG. 3B and based on the predicted first interdependent output component z 1 of FIG. 3C as well as the predicted third and fourth interdependent output components z 3 and z 4 .
- the input feature vector x and the input components z 1 , z 3 , and z 4 are received by an input layer of the MLP neural network MLP2 and then processed by a hidden layer and an output layer to predict z 2 ∈ {z 21 , z 22 , z 23 }.
- MLP3 and MLP4 function in a similar manner to MLP1 and MLP2.
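A minimal forward pass for one such base classifier might look like the following sketch. The tanh hidden layer, the softmax output, and all dimensions are illustrative assumptions rather than details taken from the patent:

```python
import numpy as np

def mlp_forward(x, other_components, W_hidden, W_out):
    """Sketch of one base classifier (e.g. MLP1): the input layer sees x
    concatenated with the activations of the other three components, and
    a hidden layer plus a softmax output layer yield a distribution over
    the component's possible values (e.g. p(z_11), p(z_12), p(z_13))."""
    inp = np.concatenate([x] + list(other_components))
    h = np.tanh(W_hidden @ inp)           # hidden layer (assumed tanh)
    logits = W_out @ h                    # output layer
    e = np.exp(logits - logits.max())
    return e / e.sum()                    # softmax over possible values
```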
- FIG. 4 is a schematic flow chart diagram of an example method 400 of employing an MOR machine learning model to predict multiple interdependent output components of an MOD output decision.
- the method 400 may be implemented, in at least some embodiments, by the MOR machine learning module 212 of the contact manager 210 of the contact server 200 of FIG. 1 .
- the MOR machine learning module 212 may be configured to execute computer instructions to perform operations of employing the MOR machine learning model 300 of FIG. 3A to predict multiple interdependent output components z 1 , z 2 , z 3 , and z 4 of an LRM MOD output decision z, as represented by one or more of blocks 402 , 404 , 406 , 408 , 410 , and 412 of the method 400 .
- Although illustrated as discrete blocks, various blocks of the method 400 may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the desired implementation.
- the method 400 may begin at block 402 , in which a classifier for each of multiple interdependent output components of an output decision is trained to predict the component based on an input and based on all of the other components.
- the MOR machine learning module 212 may train the MLP neural networks MLP1, MLP2, MLP3, and MLP4 to predict each of the components z 1 , z 2 , z 3 , and z 4 based on the input feature vector x of FIG. 3B and based on all of the other predicted components.
- z 1 response agent title
- z 2 response method
- z 3 response message type
- z 4 response timing.
- MLP1 is trained from (x, z 2 , z 3 , z 4 ; z 1 ) to predict response agent title z 1 using x, z 2 , z 3 , and z 4 as input;
- MLP2 is trained from (x, z 1 , z 3 , z 4 ; z 2 ) to predict response method z 2 using x, z 1 , z 3 , and z 4 as input;
- MLP3 is trained from (x, z 1 , z 2 , z 4 ; z 3 ) to predict response message type z 3 using x, z 1 , z 2 , and z 4 as input; and
- MLP4 is trained from (x, z 1 , z 2 , z 3 ; z 4 ) to predict response timing z 4 using x, z 1 , z 2 , and z 3 as input.
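The four training sets above share one pattern: each classifier's input pairs the feature vector x with the labels of the other components. A generic sketch of building them (assuming per-example labels are available) might be:

```python
def augmented_training_sets(X, Z):
    """Build one training set per component: the inputs pair each feature
    vector x with the labels of the *other* components, mirroring
    (x, z2, z3, z4; z1) for MLP1, (x, z1, z3, z4; z2) for MLP2, etc."""
    k = len(Z[0])                          # number of output components
    sets = []
    for i in range(k):
        inputs = [list(x) + [z[j] for j in range(k) if j != i]
                  for x, z in zip(X, Z)]
        targets = [z[i] for z in Z]        # component i is the target
        sets.append((inputs, targets))
    return sets
```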
- each possible value for each output component is initialized to a predetermined output value.
- the MOR machine learning module 212 may initialize each possible value for each of the output components z 1 , z 2 , z 3 , and z 4 to the same output value of 1/N, where N is the number of possible values for the output component, so that the sum of the initial output values of possible values for the output component is equal to 1.
- the MOR machine learning module 212 may initialize each possible value for each of the output components z 1 , z 2 , z 3 , and z 4 to another predetermined output value including, but not limited to, an output value based on resource availability, based on a baseline, or based on Bayes priors.
- For the MLP neural networks MLP1, MLP2, MLP3, and MLP4 there is a total of twelve (12) possible input values z ij , where i ∈ {1, 2, 3, 4} and j ∈ {1, 2, 3}.
- the inputs for the MLP neural network MLP1 are (x, z 2 , z 3 , z 4 ).
- There is a total of nine (9) possible values for the components z 2 , z 3 , and z 4 , namely, three (3) possible values z 21 , z 22 , z 23 for z 2 , three (3) possible values z 31 , z 32 , z 33 for z 3 , and three (3) possible values z 41 , z 42 , z 43 for z 4 .
- the output value of each of the nine (9) possible values for the input components of each of the MLP neural networks MLP2, MLP3, and MLP4 may also be initialized to 1 ⁇ 3.
- z 2 ∈ {z 21 , z 22 , z 23 } = {call, email, fax}
- the output value of each of the twelve (12) possible values for the input components of each of the MLP neural networks MLP1, MLP2, MLP3, and MLP4 may also be initialized to another identical output value, such as an output value less than 1 ⁄ 3, for example, or to non-identical output values based on resource availability, based on a baseline, or based on Bayes priors, for example.
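Sketching the uniform 1/N initialization described above:

```python
def init_activations(num_values_per_component):
    """Initialization sketch: each possible value of component z_i starts
    at 1/N_i, so each component's activations sum to 1. Non-uniform
    starts (a baseline or Bayes priors) would replace the constant."""
    return [[1.0 / n] * n for n in num_values_per_component]
```

For the four components with three possible values each, this yields the twelve values of 1/3 discussed above.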
- a relaxation iteration is run on each classifier to update the output value of each possible value for each output component.
- the MOR machine learning module 212 may run a relaxation iteration on each of the MLP neural networks MLP1, MLP2, MLP3, and MLP4 to update the output value of each possible value for each of the output components z 1 , z 2 , z 3 , and z 4 .
- running a relaxation iteration on the MLP neural network MLP1 will generate three (3) output values that are retrieved directly from MLP1, namely p(z 11 ), p(z 12 ), and p(z 13 ).
- a(z 21 )(t) is the output value of MLP2 at iteration number t and it is used as an input for MLP1, MLP3, and MLP4 in the next iteration, namely iteration number t+1.
- p(z 21 )(t) is the output value retrieved directly from MLP2 at iteration number t and it is used as a target for updating a(z 21 )(t+1).
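The text describes p(z 21 )(t) as a "target" for updating a(z 21 )(t+1) without fixing the update rule; one hedged reading is a step of assumed size eta toward that target:

```python
def relax_step(a, p, eta=0.5):
    """One possible relaxation update: each activation a(z_ij)(t+1)
    moves a step of size eta toward its target, the classifier output
    p(z_ij)(t). The step size eta is an assumed parameter, not a value
    specified by the patent."""
    return [[a_ij + eta * (p_ij - a_ij) for a_ij, p_ij in zip(a_i, p_i)]
            for a_i, p_i in zip(a, p)]
```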
- At decision block 408 it is determined whether a relaxation state has reached an equilibrium. If the relaxation state has reached an equilibrium (“Yes” at decision block 408 ), then the method 400 proceeds to block 412 . If the relaxation state has not reached an equilibrium (“No” at decision block 408 ), then the method 400 proceeds to decision block 410 .
- At decision block 410 it is determined whether a maximum number of relaxation iterations has been reached. If the maximum number of relaxation iterations has been reached (“Yes” at decision block 410 ), then the method 400 proceeds to block 412 . If the maximum number of relaxation iterations has not been reached (“No” at decision block 410 ), then the method 400 returns to block 406 for another relaxation iteration.
- the MOR machine learning module 212 may determine whether a maximum number of relaxation iterations has been reached. In this example, once the following equation is false, the maximum number of relaxation iterations may be considered to have been reached: t ⁇ M; where t is the iteration number, and M is the maximum number of relaxation iterations.
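Combining the two tests of decision blocks 408 and 410, a stopping check might look like this sketch; the equilibrium tolerance eps is an assumed parameter, since the patent only specifies the iteration bound t < M:

```python
def should_stop(a_prev, a_next, t, M, eps=1e-4):
    """Stop when the relaxation state reaches equilibrium (no activation
    moved by more than eps between iterations, an assumed tolerance) or
    when the iteration count t reaches the maximum M."""
    delta = max(abs(x - y) for prev, nxt in zip(a_prev, a_next)
                for x, y in zip(prev, nxt))
    return delta < eps or t >= M
```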
- MLP2 receives input from corresponding a(z ij ) (2) values retrieved from MLP1, MLP3, and MLP4. From these inputs, MLP2 generates an output value p(z 21 )(2).
- an optimal output component is retrieved from each classifier.
- the MOR machine learning module 212 may retrieve an optimal output component for each of the components z 1 , z 2 , z 3 , and z 4 from the MLP neural networks MLP1, MLP2, MLP3, and MLP4, respectively.
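This retrieval can be sketched as an argmax over each component's final activations; the label sets are hypothetical except where the patent names them (e.g. call, email, fax for z 2 ):

```python
def optimal_components(activations, value_labels):
    """Retrieval sketch: read off the highest-activation value of each
    output component from its classifier's final activations."""
    return [labels[max(range(len(a)), key=a.__getitem__)]
            for a, labels in zip(activations, value_labels)]
```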
- FIG. 5 is a schematic flow chart diagram 500 of multiple correct MOD output decisions.
- the MOR machine learning model 300 may generate multiple correct output decisions 502 and 504 for a given input feature vector x.
- the term “correct” may refer to multiple output decisions each having a substantially similar output value.
- each of the output decisions 502 and 504 of FIG. 5 may have an identical or substantially similar output value, which indicates that performing either output decision would produce similar favorable results.
- the term “correct” may refer to multiple output decisions each having an output value above a predetermined threshold. The threshold may be predetermined to be relatively high or relatively low, depending on the application. Although only two correct output decisions are disclosed in FIG. 5 , it is understood that the MOR machine learning model 300 may generate more than two correct output decisions.
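Under the threshold reading of "correct", selecting the set of correct output decisions is a simple filter; the candidate names and output values below are hypothetical:

```python
def correct_decisions(candidates, threshold):
    """Keep every candidate output decision whose overall output value
    exceeds the predetermined threshold; more than one decision may
    qualify, giving multiple 'correct' MOD output decisions."""
    return [name for name, value in candidates if value > threshold]
```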
- example systems and user interfaces that enable agents to access and implement the resulting output decisions will be described with respect to FIGS. 6-8B . It is understood that these specific systems and user interfaces are only some of countless systems and user interfaces in which example embodiments may be employed. The scope of the example embodiments is not intended to be limited to any particular system or user interface.
- FIG. 6 illustrates an example computer screen image of a user interface 600 of an example customer relationship management (CRM) system.
- the user interface 600 includes various controls that allow an agent to manage customer relationships and, in particular, manage leads that are provided by the CRM system.
- the user interface 600 may be presented to an agent by the web server 170 on the workstations 128 or on the local agent workstations 192 of FIG. 1 , for example.
- the agent may use the user interface 600 to respond to leads that have been previously stored on the lead data server 190 of FIG. 1 .
- the lead advisor display 800 may allow the agent to respond to leads in a manner that optimizes contact or qualification rates, as discussed below in connection with FIGS. 8A and 8B .
- FIG. 7 illustrates an example computer screen image of a user interface 700 of an example LRM system, such as the LRM system of FIG. 1 .
- the user interface 700 includes various controls that allow an agent to respond to leads.
- the user interface 700 may be presented to an agent in a similar manner as the user interface 600 .
- the user interface also includes a lead advisor display 800 .
- FIG. 8A illustrates an example computer screen image of the example lead advisor display 800 before a lead has been selected by an agent
- FIG. 8B illustrates an example computer screen image of the example lead advisor display 800 after a lead has been selected by an agent.
- the lead advisor display 800 lists five leads. Each lead includes a name 802 , a likelihood of success meter 804 , and a likelihood of success category indicator 806 . As disclosed in FIG. 8A , the leads are listed by highest likelihood of success to lowest likelihood of success.
- the lead may expand as shown in FIG. 8A for lead “Mark Littlefield.”
- the lead may present the agent with additional options, such as a confirm button 808 , a delete button 810 , and a “more info” link 812 .
- the agent may be presented with a pop-out display 814 as disclosed in FIG. 8B .
- the pop-out display 814 may present the agent with an LRM plan associated with the lead.
- This LRM plan may have been generated by the example methods disclosed herein and may reflect the output decision with the highest, or among the highest, output value for the lead.
- the LRM plan for the lead named “Mark Littlefield” may include employing a sales manager to send an email with message type MT1 in a short timeframe, which corresponds to the output decision 502 of FIG. 5 .
- the agent may then simply click on the pop-out display 814 to have the lead advisor display 800 automatically generate an email to the lead with message type MT1 that will be sent by a sales manager immediately. Alternatively, the agent may manually override the response plan and manually perform a different response.
- the embodiments disclosed herein include methods of employing an MOR machine learning model to predict multiple interdependent output components of an MOD output decision.
- the example methods disclosed herein enable the prediction of each output component based on an input and based on all of the other output components. Therefore, the example methods disclosed herein may be employed to solve MOD problems such as LRM problems.
- inventions described herein may include the use of a special purpose or general-purpose computer including various computer hardware or software modules, as discussed in greater detail below.
- Embodiments described herein may be implemented using computer-readable media for carrying or having computer-executable instructions or data structures stored thereon.
- Such computer-readable media may be any available media that may be accessed by a general purpose or special purpose computer.
- Such computer-readable media may include non-transitory computer-readable storage media including RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other storage medium which may be used to carry or store desired program code in the form of computer-executable instructions or data structures and which may be accessed by a general purpose or special purpose computer. Combinations of the above may also be included within the scope of computer-readable media.
- Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions.
- the term "module" may refer to software objects or routines that execute on the computing system.
- the different modules described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads). While the system and methods described herein are preferably implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated.
Abstract
A multiple output relaxation (MOR) machine learning model. In one example embodiment, a method for employing an MOR machine learning model to predict multiple interdependent output components of a multiple output dependency (MOD) output decision may include training a classifier for each of multiple interdependent output components of an MOD output decision to predict the component based on an input and based on all of the other components. The method may also include initializing each possible value for each of the components to a predetermined output value. The method may further include running relaxation iterations on each of the classifiers to update the output value of each possible value for each of the components until a relaxation state reaches an equilibrium or a maximum number of relaxation iterations is reached. The method may also include retrieving an optimal component from each of the classifiers.
Description
- This application is a continuation of International Patent Application No. PCT/US13/55859, filed on Aug. 20, 2013, which is a continuation of U.S. patent application Ser. No. 13/590,028, filed Aug. 20, 2012, each of which is incorporated herein by reference in its entirety.
- The embodiments discussed herein are related to a multiple output relaxation (MOR) machine learning model.
- Machine learning is a form of artificial intelligence that is employed to allow computers to evolve behaviors based on empirical data. Machine learning may take advantage of training examples to capture characteristics of interest of their unknown underlying probability distribution. Training data may be seen as examples that illustrate relations between observed variables. A major focus of machine learning research is to automatically learn to recognize complex patterns and make intelligent decisions based on data.
- One main difficulty in machine learning lies in the fact that the set of all possible behaviors, given all possible inputs, is too large to be covered by a set of training data. Hence, a machine learning model must generalize from the training data so as to be able to produce a useful output in new cases.
- One example of machine learning is traditional structured prediction (SP). Traditional SP is a single model approach to dependent output. With SP, once an input feature vector x is specified, a single correct output vector z can be fully specified. Thus the output vector z is fully conditioned on the input feature vector x and the different output components of output vector z (z1, z2, . . . ) are conditionally independent of each other given the input feature vector x. Thus, the probability of z1 given x is equal to the probability of z1 given x and z2, or p(z1|x)=p(z1|x, z2). However, traditional SP cannot handle an interdependent relationship between different output components. In addition, traditional SP cannot handle a problem having multiple correct output decisions for a given input.
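The conditional-independence assumption of traditional SP, p(z1|x)=p(z1|x, z2), can be made concrete with a toy joint distribution; the probabilities below are invented for illustration only.

```python
# Toy joint distributions p(z1, z2 | x) for one fixed input x.
# Under traditional SP, z1 and z2 are conditionally independent given x,
# so p(z1|x) equals p(z1|x, z2). For interdependent (MOD-style) outputs,
# the two quantities differ.

sp_joint = {   # p(z1, z2 | x) = p(z1|x) * p(z2|x): conditionally independent
    ("a", "c"): 0.06, ("a", "d"): 0.24,
    ("b", "c"): 0.14, ("b", "d"): 0.56,
}
mod_joint = {  # interdependent outputs: knowing z2 changes belief about z1
    ("a", "c"): 0.20, ("a", "d"): 0.10,
    ("b", "c"): 0.10, ("b", "d"): 0.60,
}

def p_z1(joint, z1):
    """Marginal p(z1 | x)."""
    return sum(p for (v1, _), p in joint.items() if v1 == z1)

def p_z1_given_z2(joint, z1, z2):
    """Conditional p(z1 | x, z2)."""
    p_z2 = sum(p for (_, v2), p in joint.items() if v2 == z2)
    return joint[(z1, z2)] / p_z2

# Traditional SP: p(z1|x) == p(z1|x, z2)
assert abs(p_z1(sp_joint, "a") - p_z1_given_z2(sp_joint, "a", "c")) < 1e-9
# MOD-style dependence: p(z1|x) != p(z1|x, z2)
assert abs(p_z1(mod_joint, "a") - p_z1_given_z2(mod_joint, "a", "c")) > 0.1
```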
- The subject matter claimed herein is not limited to embodiments that solve any disadvantages or that operate only in environments such as those described above. Rather, this background is only provided to illustrate one example technology area where some embodiments described herein may be practiced.
- In general, example embodiments described herein relate to methods of employing a multiple output relaxation (MOR) machine learning model to predict multiple interdependent output components of a multiple output dependency (MOD) output decision. The example methods disclosed herein may be employed to solve MOD problems.
- In one example embodiment, a method for employing an MOR machine learning model to predict multiple interdependent output components of an MOD output decision may include training a classifier for each of multiple interdependent output components of an MOD output decision to predict the component based on an input and based on all of the other components. The method may also include initializing each possible value for each of the components to a predetermined output value. The method may further include running relaxation iterations on each of the classifiers to update the output value of each possible value for each of the components until a relaxation state reaches an equilibrium or a maximum number of relaxation iterations is reached. The method may also include retrieving an optimal component from each of the classifiers.
- It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention, as claimed.
- Example embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings in which:
-
FIG. 1 is a schematic block diagram illustrating an example lead response management (LRM) system including an example contact server; -
FIG. 2 is a schematic block diagram illustrating additional details of the example contact server of FIG. 1 ; -
FIG. 3A is a schematic flow chart diagram illustrating an example multiple output relaxation (MOR) machine learning model; -
FIG. 3B is a text diagram illustrating an example input feature vector; -
FIG. 3C is a schematic flow chart diagram illustrating a first example multilayer perceptron (MLP) neural network that is employed to predict a first interdependent output component; -
FIG. 3D is a schematic flow chart diagram illustrating a second example MLP neural network that is employed to predict a second interdependent output component; -
FIG. 4 is a schematic flow chart diagram of an example method of employing an MOR machine learning model to predict multiple interdependent output components of a multiple output dependency (MOD) output decision; -
FIG. 5 is a schematic flow chart diagram of multiple correct MOD output decisions; -
FIG. 6 illustrates an example computer screen image of a user interface of an example customer relationship management (CRM) system; -
FIG. 7 illustrates an example computer screen image of a user interface of an example LRM system; -
FIG. 8A illustrates an example computer screen image of an example lead advisor display before a lead has been selected by an agent; and -
FIG. 8B illustrates an example computer screen image of the example lead advisor display of FIG. 8A after a lead has been selected by an agent. - Some embodiments described herein include methods of employing a multiple output relaxation (MOR) machine learning model to predict multiple interdependent output components of a multiple output dependency (MOD) output decision. The example methods disclosed herein may be employed to solve MOD problems.
- As used herein, the term “multiple output dependency” or “MOD” refers to an output decision, or a problem having an output decision, that includes multiple output components which are interdependent in that each component is dependent not only on an input but also on the other components. Some example MOD problems include, but are not limited to: 1) which combination of stocks to purchase to balance a mutual fund given current stock market conditions, 2) which combination of players to substitute into a lineup of a sports team given the current lineup of the opposing team, and 3) which combination of shirt, pants, belt, and shoes to wear given the current weather conditions. In each of these examples, each component of the output decision depends on both the input (current stock market conditions, an opposing team lineup, or current weather conditions) and the other components (the other stocks purchased, the other substituted player, or the other clothing selected). Other examples of MOD problems may relate to hostage negotiations, retail sales, online shopping carts, web content management systems, customer service, contract negotiations, or crisis management, or any other situation that requires an output decision with multiple interdependent output components.
- Another example MOD problem is lead response management (LRM). LRM is the process of responding to leads in a manner that optimizes contact or qualification rates. Leads may come from a variety of sources including, but not limited to, a web form, a referral, and a list purchased from a lead vendor. When a lead comes into an organization, the output decision of how to respond to the lead may include multiple interdependent components such as, but not limited to, who should respond to the lead, what method should be employed to respond to the lead, what content should be included in the response message, and when should the response take place. Each of these components of the output decision depends on both the input (the lead information) and the other components. For example, the timing of the response may depend on the availability of the person selected to respond. Also, the content of the message may depend on the method of response (e.g. since the length of an email message is not limited like the length of a text message). Although the example methods disclosed herein are generally explained in the context of LRM, it is understood that the example methods disclosed herein may be employed to solve any MOD problem.
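The interdependence among the four LRM response components described above can be sketched as a simple data structure with an illustrative consistency check. The component names follow the text; the sample dependency rule (a dial must land in a window when the lead can answer, while an email may go out any time) and the specific timing values are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class ResponseDecision:
    agent_title: str   # who should respond (e.g. "sales manager")
    method: str        # how to respond (e.g. "dial", "email")
    message_type: str  # what content to send (e.g. "MT1")
    timing: str        # when to respond (e.g. "short", "long")

def respects_dependencies(d: ResponseDecision) -> bool:
    """Illustrative interdependency: a dial is only consistent with a
    timing window in which the lead is assumed reachable by phone."""
    if d.method == "dial":
        return d.timing in ("short", "medium")  # assumed 'answerable' windows
    return True  # an email may be sent at any time

assert respects_dependencies(ResponseDecision("sales manager", "email", "MT1", "long"))
assert not respects_dependencies(ResponseDecision("sales manager", "dial", "MT1", "long"))
```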
- Example embodiments will be explained with reference to the accompanying drawings.
-
FIG. 1 is a schematic block diagram illustrating an example LRM system 100. As depicted, the example LRM system 100 includes various components such as a public switched telephone network (PSTN) 110, user communication and/or computing devices 112, a TDM gateway 120 connecting the PSTN 110 to an internet 130, remote agent stations 121, workstations 128, a call center 140, an internet gateway 150 connecting a local area network 160 to the internet 130, a web server 170, a contact server 200, a lead data server 190, local agent workstations 192, and control workstations 194. The various components of the example LRM system 100 are operably interconnected to collaboratively improve a process of responding to leads in a manner that optimizes contact or qualification rates. - As disclosed in
FIG. 1 , the remote agent stations 121 include wireless phones 122, wired phones 124, wireless computing devices 126, and workstations 128. In certain embodiments, the wireless phones 122 or the wired phones 124 may be voice over internet protocol (VOIP) phones. In some embodiments, the computing devices 126 or the workstations 128 may be equipped with a soft phone. The remote agent stations 121 enable agents to respond to leads from remote locations similar to agents stationed at the local agent workstations 192 and directly connected to the local area network 160. - In one example embodiment, the
local area network 160 resides within a call center 140 that uses VoIP and other messaging services to contact users connected to the PSTN 110 and/or the internet 130. The various servers in the call center 140 function cooperatively to acquire leads, store lead information, analyze lead information to decide how best to respond to each lead, distribute leads to agents via agent terminals such as the local agent workstations 192 and the remote agent stations 121 for example, facilitate communication between agents and leads via the PSTN 110 or the internet 130 for example, track attempted and successful agent interaction with leads, and store updated lead information. - The
web server 170 may provide one or more web forms 172 to users via browser-displayable web pages. The web forms may be displayed to the users via a variety of communication and/or computing devices 112 including phones, smart phones, tablet computers, laptop computers, desktop computers, media players, and the like that are equipped with a browser. The web forms 172 may prompt the user for contact data such as name, title, industry, company information, address, phone number, fax number, email address, instant messaging address, referral information, availability information, and interest information. The web server 170 may receive the lead information associated with the user in response to the user submitting the web form and provide the lead information to the contact server 200 and the lead data server 190, for example. - The
contact server 200 and the lead data server 190 may receive the lead information and retrieve additional data associated with the user such as web analytics data, reverse lookup data, credit check data, web site data, web site rank information, do-not-call registry data, data from a customer relationship management (CRM) database, and background check information. The lead data server 190 may store the collected data in a lead profile (not shown) and associate the user with an LRM plan (not shown). - The
contact server 200 may contact a lead in accordance with an associated LRM plan and deliver lead information to an agent to enable the agent to respond to the lead in a manner that optimizes contact or qualification rates. The particular purpose of such contact or qualification may include, for example, establishing a relationship with the lead, thanking the lead for their interest in a product, answering questions from the lead, informing the lead of a product or service offering, selling a product or service, surveying the lead on their needs and preferences, and providing support to the lead. The contact server 200 may deliver the information to the agent using a variety of delivery services such as email services, instant messaging services, short message services, enhanced messaging services, text messaging services, telephony-based text-to-speech services, and multimedia delivery services. The agent terminals -
FIG. 2 is a schematic block diagram illustrating additional details of the example contact server 200 of FIG. 1 . As disclosed in FIG. 2 , the contact server 200 includes a contact manager 210, a dialing module 220, a messaging module 230, a PBX module 240, and termination hardware 250. In the depicted embodiment, the contact manager includes an MOR machine learning module 212, an LRM plan selection module 214, an agent selection module 216, and a lead data server access module 218. Although shown within the contact server 200, the depicted modules may reside partially or wholly on other servers such as the web server 170 and the lead data server 190 for example. The contact server 200 enables an agent to communicate with a lead in conjunction with an LRM plan. - The
contact manager 210 establishes contact with users and agents and manages contact sessions where needed. The contact manager 210 may initiate contact via the dialing module 220 and/or the messaging module 230. - The MOR
machine learning module 212 employs an MOR machine learning model to predict multiple interdependent output components of an MOD output decision, according to the example methods disclosed herein. In at least some example embodiments, the MOR machine learning module 212 utilizes the lead data server access module 218 to access and analyze lead information stored on the lead data server 190 of FIG. 1 . Once one or more response decisions are predicted for a particular lead, the one or more response decisions may be conveyed to the LRM plan selection module 214. - The LRM
plan selection module 214 presents and/or selects one or more LRM plans for a particular lead and/or offering. Similarly, the agent selection module 216 selects an agent, class of agent, or agent skill set that is designated in each LRM plan. - The lead data
server access module 218 enables the contact manager 210 to access lead information that is useful for contacting a lead. In one embodiment, the lead data server access module 218 enables the contact manager 210 to access the lead data server 190. - The
dialing module 220 establishes telephone calls including VOIP telephone calls and PSTN calls. In one embodiment, the dialing module 220 receives a unique call identifier, establishes a telephone call, and notifies the contact manager 210 that the call has been established. Various embodiments of the dialing module 220 incorporate auxiliary functions such as retrieving telephone numbers from a database, comparing telephone numbers against a restricted calling list, transferring a call, conferencing a call, monitoring a call, playing recorded messages, detecting answering machines, recording voice messages, and providing interactive voice response (IVR) capabilities. In some instances, the dialing module 220 directs the PBX module 240 to perform the auxiliary functions. - The
messaging module 230 sends messages to and receives messages from agents and leads. To send and receive messages, the messaging module 230 may leverage one or more delivery or messaging services such as email services, instant messaging services, short message services, text message services, and enhanced messaging services. - The
PBX module 240 connects a private phone network to the PSTN 110. The contact manager 210 or dialing module 220 may direct the PBX module 240 to connect a line on the private phone network with a number on the PSTN 110 or the internet 130. In some embodiments, the PBX module 240 provides some of the auxiliary functions invoked by the dialing module 220. - The
termination hardware 250 routes calls from a local network to the PSTN 110. In one embodiment, the termination hardware 250 interfaces to conventional phone terminals. In some embodiments and instances, the termination hardware 250 provides some of the auxiliary functions invoked by the dialing module 220. - Having described a specific environment (an LRM system) and specific application (LRM) with respect to
FIGS. 1 and 2 , it is understood that this specific environment and application are only examples of the countless environments and applications in which example embodiments may be employed. The scope of the example embodiments is not intended to be limited to any particular environment or application. -
FIG. 3A is a schematic flow chart diagram illustrating an example MOR machine learning model 300. The model 300 is configured to be employed in sequential decision making to predict multiple interdependent output components, namely z1, z2, z3, and z4, of an MOD output decision z. Although the output decision z includes four (4) components, it is understood that an MOR machine learning model could be employed in connection with any output decision having two (2) or more interdependent components. The model 300 may be trained based on recorded historical data so that it can make optimal (or near-optimal) decisions, especially when a decision comprises many variables that need to be determined at the same time. - Although the
model 300 may be employed in any number of applications to produce MOD output decisions, the model 300 is employed in FIG. 3A to produce an LRM MOD output decision. In particular, the model 300 is employed to decide, for a given lead, what response should be performed next in a sequence that will optimize the contact or qualification of the lead. - For example, the
model 300 may be employed to produce an LRM MOD output decision z=(z1, z2, z3, z4), where z1, z2, z3, and z4 are four components of the output decision z, based on an input x. In this example, z1=response agent title, z2=response method, z3=response message type, and z4=response timing. The input x may be an input feature vector that includes information about a particular lead. - It is understood that the components of response agent title, response method, response message type, and response timing are only example components of an LRM MOD output decision. Other example components may include, but are not limited to, agent or lead demographic profile, agent or lead histographic profile (i.e. a profile of events in the life of the agent or the lead which could include past interactions between the agent and the lead), lead contact title (i.e. the title of a particular contact person within a lead organization), agent or lead psychographic profile (i.e. a profile of the psychological characteristics of the agent or the lead), agent or lead social network profile (i.e. the proximity of the agent to the lead in an online social network such as LinkedIn® or FaceBook® or in an offline social network such as the Entrepreneurs Organization®, civic clubs, fraternities, or religions), agent or lead geographic profile (i.e. cities, states, or other geographic designations that define current and/or past locations of the agent or the lead), response frequency (i.e. how often an agent contacts a lead), and response persistence (i.e. how long an agent persists in contacting a lead).
-
FIG. 3B is a text diagram illustrating an example input feature vector x. The example input feature vector x of FIG. 3B includes information about a particular lead. In particular, the example input feature vector x includes constant features about a lead, such as lead title and lead industry, and interactive features related to interactions between an agent and the lead, such as previous number of dials and previous action. The lead information provided by the example input feature vector x may be employed as input by the model 300 of FIG. 3A in order to determine what is the next sequential response that should be performed that will optimize the contact or qualification of the lead.
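One plausible way to turn such lead information into a numeric input feature vector x is sketched below; the feature names, category vocabularies, and one-hot encoding are illustrative assumptions, not the actual encoding of FIG. 3B.

```python
# Hypothetical lead record using feature names mentioned in the text.
lead = {
    "lead_source": "web form",
    "lead_title": "manager",
    "previous_dials": 2,
    "previous_emails": 1,
    "hours_since_last_action": 6.5,
}

# Assumed category vocabularies for one-hot encoding.
SOURCES = ["web form", "referral", "purchased list"]
TITLES = ["representative", "manager", "vice president"]

def encode(lead):
    """Concatenate one-hot categorical features with raw numeric features."""
    source = [1.0 if s == lead["lead_source"] else 0.0 for s in SOURCES]
    title = [1.0 if t == lead["lead_title"] else 0.0 for t in TITLES]
    numeric = [float(lead["previous_dials"]), float(lead["previous_emails"]),
               float(lead["hours_since_last_action"])]
    return source + title + numeric

x = encode(lead)
assert x == [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 2.0, 1.0, 6.5]
```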
- As disclosed in
FIG. 3A , there is a dependency among components z1, z2, z3, and z4. For example, a decision on the component z2 (response method) may have an influence on the decision for the component z4 (response timing). For example, if z2=dial, an agent may need to consider when a lead is available to talk on a phone (e.g. usually during business hours of the time zone where the lead resides). If z2=email, the agent may send the email at any time. - Therefore, in the example application of
FIG. 3A , and as is the case with other MOD output decisions, the components of z are dependent both on an input x and on the other components of z. Thus, in this example, the probability of z1 given x is not necessarily equal to the probability of z1 given x and z2, or p(z1|x)≠p(z1|x, z2). In other words, it cannot be decided what value a specific component of z should take on without considering x and the values of the other components of z. - The
model 300 of FIG. 3A employs a base classifier. In particular, and as disclosed in FIG. 3A , the model 300 employs multilayer perceptron ("MLP") neural networks MLP1, MLP2, MLP3, and MLP4 as base classifiers. It is understood, however, that the model 300 could alternatively employ other types of base classifiers including, but not limited to, other multilayer neural networks, decision trees, and support vector machines. -
FIG. 3C is a schematic flow chart diagram illustrating the MLP neural network MLP1 that is employed to predict the first interdependent output component z1 based on the input feature vector x of FIG. 3B and based on the predicted second interdependent output component z2 of FIG. 3D as well as the predicted third and fourth interdependent output components z3 and z4. In FIG. 3C , the input feature vector x and the input components z2, z3, and z4 are received by an input layer of the MLP neural network MLP1 and then processed by a hidden layer and an output layer to predict z1ε{z11, z12, z13}. -
FIG. 3D is a schematic flow chart diagram illustrating the MLP neural network MLP2 that is employed to predict the second interdependent output component z2 based on the input feature vector x of FIG. 3B and based on the predicted first interdependent output component z1 of FIG. 3C as well as the predicted third and fourth interdependent output components z3 and z4. In FIG. 3D , the input feature vector x and the input components z1, z3, and z4 are received by an input layer of the MLP neural network MLP2 and then processed by a hidden layer and an output layer to predict z2ε{z21, z22, z23}. As disclosed in FIG. 3A , MLP3 and MLP4 function in a similar manner to MLP1 and MLP2. -
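The per-component classifier structure described for MLP1 and MLP2 can be sketched as a one-hidden-layer forward pass. The weights below are random (an untrained network) and the layer sizes are illustrative assumptions; only the input/hidden/output structure follows the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(v, W1, b1, W2, b2):
    """Input layer -> tanh hidden layer -> softmax output layer,
    mirroring the layered structure described for MLP1 through MLP4."""
    h = np.tanh(v @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())
    return e / e.sum()

# MLP1 predicts z1 from (x, z2, z3, z4): its input concatenates the
# feature vector x with one-hot encodings of the other three components.
n_x, n_vals, n_hidden = 8, 3, 5            # assumed sizes
W1 = rng.normal(size=(n_x + 3 * n_vals, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_vals))
b2 = np.zeros(n_vals)

x = rng.normal(size=n_x)
one_hot = np.eye(n_vals)
z2, z3, z4 = one_hot[0], one_hot[1], one_hot[2]

# The output is a probability distribution over the three possible z1 values.
p_z1 = mlp_forward(np.concatenate([x, z2, z3, z4]), W1, b1, W2, b2)
assert p_z1.shape == (n_vals,) and abs(p_z1.sum() - 1.0) < 1e-9
```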
FIG. 4 is a schematic flow chart diagram of an example method 400 of employing an MOR machine learning model to predict multiple interdependent output components of an MOD output decision. The method 400 may be implemented, in at least some embodiments, by the MOR machine learning module 212 of the contact manager 210 of the contact server 200 of FIG. 1 . For example, the MOR machine learning module 212 may be configured to execute computer instructions to perform operations of employing the MOR machine learning model 300 of FIG. 3A to predict multiple interdependent output components z1, z2, z3, and z4 of an LRM MOD output decision z, as represented by one or more of the blocks of the method 400. Although illustrated as discrete blocks, various blocks may be divided into additional blocks, combined into fewer blocks, or eliminated, depending on the desired implementation. The method 400 will now be discussed with reference to FIGS. 1-4 . - The
method 400 may begin at block 402 , in which a classifier for each of multiple interdependent output components of an output decision is trained to predict the component based on an input and based on all of the other components. For example, the MOR machine learning module 212 may train the MLP neural networks MLP1, MLP2, MLP3, and MLP4 to predict each of the components z1, z2, z3, and z4 based on the input feature vector x of FIG. 3B and based on all of the other predicted components. In the example embodiment disclosed in FIG. 3A , z1=response agent title, z2=response method, z3=response message type, and z4=response timing. Thus, MLP1 is trained from (x, z2, z3, z4; z1) to predict response agent title z1 using x, z2, z3, and z4 as input; MLP2 is trained from (x, z1, z3, z4; z2) to predict response method z2 using x, z1, z3, and z4 as input; MLP3 is trained from (x, z1, z2, z4; z3) to predict response message type z3 using x, z1, z2, and z4 as input; and MLP4 is trained from (x, z1, z2, z3; z4) to predict response timing z4 using x, z1, z2, and z3 as input. - It is understood that since the order of components in an output decision that is produced using an MOR machine learning model can be determined simultaneously, the use herein of the subscripts 1, 2, 3, and 4 does not denote any particular order in which the components are predicted.
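One recorded historical decision can thus be expanded into one training example per classifier. The helper below is a hypothetical sketch of that expansion, not training code from the disclosure; the stand-in feature values are invented.

```python
# A single recorded historical decision: input features plus the four
# output components chosen for that lead (component values from the text).
x = {"lead_title": "manager", "previous_dials": 2}   # stand-in feature vector
z = {"z1": "sales manager", "z2": "email", "z3": "MT1", "z4": "short"}

def training_example(target):
    """Build the (input, label) pair for the classifier that predicts
    `target` from x and all of the other components."""
    other = {k: v for k, v in z.items() if k != target}
    return {"inputs": {"x": x, **other}, "label": z[target]}

# One example per classifier: MLP1 learns (x, z2, z3, z4; z1), and so on.
examples = {f"MLP{i}": training_example(f"z{i}") for i in (1, 2, 3, 4)}

assert examples["MLP1"]["label"] == "sales manager"
assert "z1" not in examples["MLP1"]["inputs"]          # z1 is the target
assert examples["MLP2"]["inputs"]["z1"] == "sales manager"
```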
block 404, each possible value for each output component is initialized to a predetermined output value. For example, the MORmachine learning module 212 may initialize each possible value for each of the output components z1, z2, z3, and z4 to the same output value of 1/N, where N is the number of possible values for the output component, so that the sum of the initial output values of possible values for the output component is equal to 1. Alternatively, the MORmachine learning module 212 may initialize each possible value for each of the output components z1, z2, z3, and z4 to another predetermined output value including, but not limited to, an output value based on resource availability, based on a baseline, or based on Bayes priors. - In this example, assume that each of the components z1, z2, z3, and z4 has three (3) possible values as follows: z1ε{z11, z12, z13}={sales vice president, sales manager, sales representative}; z2ε{z21, z22, z23}={call, email, fax}; z3ε{z31, z32, z33}={MT1, MT2, MT3}; and z4ε{z41, z42, z43}={short, medium, long}. In this example, for MLP1, MLP2, MLP3, and MLP4, there are total of twelve (12) possible input values zij; where iε{1, 2, 3, 4} and jε{1, 2, 3}. The inputs for the MLP neural network MLP1 are (x, z2, z3, z4). There are total of nine (9) possible values for components z2, z3 and z4, namely, three (3) possible values z21, z22, z23 for z2, three (3) possible values z31, z32, z33 for z3, and three (3) possible values z41, z42, z43 for z4.
- The output value of each of the nine (9) possible values for the input components of the MLP neural network MLP1 may be initialized to ⅓ since N=3, namely, a(z21), a(z22), a(z23), a(z31), a(z32), a(z33), a(z41), a(z42), and a(z43) can each be initialized to ⅓, where “a(zij)” is an activation that represents an output value, iε{1, 2, 3, 4}; and jε{1, 2, 3}. In a similar manner, the output value of each of the nine (9) possible values for the input components of each of the MLP neural networks MLP2, MLP3, and MLP4 may also be initialized to ⅓. For example, where z2ε{z21, z22, z23}={call, email, fax}, a(z21) is the activation of one of the possible values, namely “call”, and may be initialized to a(z21)(t)=0.33 at iteration number t=1.
- Alternatively, the output value of each of the twelve (12) possible values for the input components of each of the MLP neural networks MLP1, MLP2, MLP3, and MLP4 may be initialized to another identical output value, such as an output value less than ⅓, for example, or to non-identical output values based on resource availability, based on a baseline, or based on Bayes priors, for example.
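- The uniform 1/N initialization of block 404 can be sketched as follows. This is a hypothetical illustration: the component values mirror the example above, but the `initialize_activations` helper is not part of the disclosed system:

```python
# Possible values for each output component in the example above.
COMPONENTS = {
    "z1": ["sales vice president", "sales manager", "sales representative"],
    "z2": ["call", "email", "fax"],
    "z3": ["MT1", "MT2", "MT3"],
    "z4": ["short", "medium", "long"],
}

def initialize_activations(components):
    """Initialize each possible value of each component to 1/N, where N is
    the number of possible values for that component, so that the initial
    activations for each component sum to 1."""
    return {
        comp: {value: 1.0 / len(values) for value in values}
        for comp, values in components.items()
    }

activations = initialize_activations(COMPONENTS)
# Each of the twelve activations a(zij) starts at 1/3 here, since N = 3.
```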
- At block 406, a relaxation iteration is run on each classifier to update the output value of each possible value for each output component. For example, the MOR machine learning module 212 may run a relaxation iteration on each of the MLP neural networks MLP1, MLP2, MLP3, and MLP4 to update the output value of each possible value for each of the output components z1, z2, z3, and z4. In this example, running a relaxation iteration on the MLP neural network MLP1 will generate three (3) output values that are retrieved directly from MLP1, namely p(z11), p(z12), and p(z13). Similarly, running a relaxation iteration on the MLP neural networks MLP2, MLP3, and MLP4 will generate a total of nine (9) output values, namely p(z21), p(z22), p(z23), p(z31), p(z32), p(z33), p(z41), p(z42), and p(z43). These twelve (12) output values p(zij) (i∈{1, 2, 3, 4} and j∈{1, 2, 3}) may be considered initial estimate values for the twelve (12) output values a(zij), and will be used as learning targets for updating the output values a(zij) in the next relaxation iteration, namely iteration number t+1, using the formula a(zij)(t+1)=a(zij)(t)+R·(p(zij)(t)−a(zij)(t)), which is discussed in greater detail below. For example, where iteration number t=1, a(z21)(t) is updated by running a relaxation iteration on MLP2 to produce the output value p(z21)(t)=0.47 of z21. In this example, a(z21)(t) is the output value of MLP2 at iteration number t and is used as an input for MLP1, MLP3, and MLP4 in the next iteration, namely iteration number t+1. In this example, p(z21)(t) is the output value retrieved directly from MLP2 at iteration number t and is used as a target for updating a(z21)(t+1). - At
decision block 408, it is determined whether a relaxation state has reached an equilibrium. If the relaxation state has reached an equilibrium (“Yes” at decision block 408), then the method 400 proceeds to block 412. If the relaxation state has not reached an equilibrium (“No” at decision block 408), then the method 400 proceeds to decision block 410. - For example, the MOR
machine learning module 212 may determine whether a relaxation state has reached an equilibrium. Whether an equilibrium has been reached may be determined according to the following two formulas. First, a relaxation rate is applied to update the output value of each possible value for each of the output components z1, z2, z3, and z4 as follows: a(zij)(t+1)=a(zij)(t)+R·(p(zij)(t)−a(zij)(t)); where R is a relaxation rate; t is the iteration number; i∈{1, 2, 3, 4}; and j∈{1, 2, 3}. Second, once the following inequality is true, the relaxation may be considered to have reached an equilibrium: |a(zij)(t+1)−a(zij)(t)|≦T; where T is a threshold. - For example, where iteration number t=1, and where R=0.1, and using the example values of a(z21)(t) and p(z21)(t) of z21 from above, namely a(z21)(1)=0.33 and p(z21)(1)=0.47, the formula a(zij)(t+1)=a(zij)(t)+R·(p(zij)(t)−a(zij)(t)) is first processed as follows: a(z21)(2)=0.33+0.1·(0.47−0.33)=0.344. Second, where T=0.01, the formula |a(zij)(t+1)−a(zij)(t)| is processed as follows: |0.344−0.33|=0.014. Since 0.014 is greater than 0.01, the statement |a(zij)(t+1)−a(zij)(t)|≦T is false, and the relaxation is not considered to have reached an equilibrium.
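- The worked example above can be checked numerically with a short sketch (hypothetical code; the values R=0.1, T=0.01, a(z21)(1)=0.33, and p(z21)(1)=0.47 are taken from the text):

```python
R, T = 0.1, 0.01  # relaxation rate and equilibrium threshold

def relax_step(a_t, p_t, rate=R):
    """One update: a(zij)(t+1) = a(zij)(t) + R * (p(zij)(t) - a(zij)(t))."""
    return a_t + rate * (p_t - a_t)

a_1, p_1 = 0.33, 0.47        # a(z21)(1) and p(z21)(1) from the example
a_2 = relax_step(a_1, p_1)   # 0.33 + 0.1 * (0.47 - 0.33) = 0.344
delta = abs(a_2 - a_1)       # |0.344 - 0.33| = 0.014
at_equilibrium = delta <= T  # 0.014 > 0.01, so no equilibrium yet
```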
- At decision block 410, it is determined whether a maximum number of relaxation iterations has been reached. If the maximum number of relaxation iterations has been reached (“Yes” at decision block 410), then the method 400 proceeds to block 412. If the maximum number of relaxation iterations has not been reached (“No” at decision block 410), then the method 400 returns to block 406 for another relaxation iteration. - For example, the MOR
machine learning module 212 may determine whether a maximum number of relaxation iterations has been reached. In this example, once the following condition is false, the maximum number of relaxation iterations may be considered to have been reached: t≦M; where t is the iteration number, and M is the maximum number of relaxation iterations. - For example, where iteration number t=1, and M=100, since 1 is less than 100, the statement t≦M is true, and the maximum number of relaxation iterations is not considered to have been reached. Therefore, the
method 400 may return to block 406 for another relaxation iteration, where, at iteration number t=2, a(z21)(2)=0.344 is used as an input to MLP1, MLP3, and MLP4. Similarly, MLP2 receives input from the corresponding a(zij)(2) values retrieved from MLP1, MLP3, and MLP4. From these inputs, MLP2 generates an output value p(z21)(2). The value of a(z21)(3) can then be calculated from a(z21)(2) and the output value p(z21)(2) using the formula a(z21)(3)=a(z21)(2)+R·(p(z21)(2)−a(z21)(2)). In this example, at iteration number t=2, the other eleven (11) values a(zij)(3) will also be updated using the outputs p(zij)(2) of the appropriate classifiers and the values a(zij)(2) at iteration number t=2 as inputs to the formula a(zij)(3)=a(zij)(2)+R·(p(zij)(2)−a(zij)(2)). - At
block 412, an optimal output component is retrieved from each classifier. For example, the MOR machine learning module 212 may retrieve an optimal output component for each of the components z1, z2, z3, and z4 from the MLP neural networks MLP1, MLP2, MLP3, and MLP4, respectively. - It is understood that the above-illustrated example is but one example of employing an MOR machine learning model to predict multiple interdependent output components of an MOD output decision, and the
method 400 is not limited to the particular application of this example or to the LRM MOD problem solved in this example.
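- The iterative procedure of blocks 406 through 412 can be sketched end to end as follows. This is a hypothetical illustration, not the disclosed implementation: the `predict` stand-in replaces the actual MLP neural networks, and selecting the highest-activation value at block 412 is one plausible reading of retrieving an optimal output component:

```python
def relax(a, predict, R=0.1, T=0.01, M=100):
    """Run relaxation iterations (block 406) until every activation changes
    by at most T (equilibrium, decision block 408) or M iterations have been
    run (decision block 410).

    `a` maps each possible value zij to its activation a(zij); `predict`
    stands in for the classifiers and returns the outputs p(zij) given the
    current activations."""
    for t in range(1, M + 1):
        p = predict(a)
        # a(zij)(t+1) = a(zij)(t) + R * (p(zij)(t) - a(zij)(t))
        a_next = {z: a[z] + R * (p[z] - a[z]) for z in a}
        if all(abs(a_next[z] - a[z]) <= T for z in a):
            return a_next, t  # equilibrium reached
        a = a_next
    return a, M  # maximum number of relaxation iterations reached

def optimal_value(a, values):
    """Block 412: pick the possible value with the highest final activation
    (one plausible selection rule)."""
    return max(values, key=lambda z: a[z])

# Toy stand-in classifier for component z2 that always predicts the same
# fixed outputs p(zij); a real system would query MLP2 here.
targets = {"z21": 0.47, "z22": 0.30, "z23": 0.23}
initial = {z: 1 / 3 for z in targets}  # block 404 initialization
final, iters = relax(initial, lambda a: dict(targets))
best = optimal_value(final, ["z21", "z22", "z23"])  # "z21", i.e. "call"
```

With a fixed target, each iteration shrinks the gap p(zij)−a(zij) by the factor (1−R), so the activations converge toward the classifier outputs until the per-iteration change falls below T.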
FIG. 5 is a schematic flow chart diagram 500 of multiple correct MOD output decisions. As disclosed in the diagram 500, the MOR machine learning model 300 may generate multiple correct output decisions. The correct output decisions disclosed in FIG. 5 may have an identical or substantially similar output value, which indicates that performing either output decision would produce similarly favorable results. Additionally or alternatively, the term “correct” may refer to multiple output decisions each having an output value above a predetermined threshold. The threshold may be predetermined to be relatively high or relatively low, depending on the application. Although only two correct output decisions are disclosed in FIG. 5, it is understood that the MOR machine learning model 300 may generate more than two correct output decisions. - Having described example methods of employing an MOR machine learning model to predict multiple interdependent output components of an MOD output decision with respect to
FIGS. 3A-5, example systems and user interfaces that enable agents to access and implement the resulting output decisions will be described with respect to FIGS. 6-8B. It is understood that these specific systems and user interfaces are only some of countless systems and user interfaces in which example embodiments may be employed. The scope of the example embodiments is not intended to be limited to any particular system or user interface. -
FIG. 6 illustrates an example computer screen image of a user interface 600 of an example customer relationship management (CRM) system. The user interface 600 includes various controls that allow an agent to manage customer relationships and, in particular, manage leads that are provided by the CRM system. The user interface 600 may be presented to an agent by the web server 170 on the workstations 128 or on the local agent workstations 192 of FIG. 1, for example. The agent may use the user interface 600 to respond to leads that have been previously stored on the lead data server 190 of FIG. 1. In particular, the lead advisor display 800 may allow the agent to respond to leads in a manner that optimizes contact or qualification rates, as discussed below in connection with FIGS. 8A and 8B. -
FIG. 7 illustrates an example computer screen image of a user interface 700 of an example LRM system, such as the LRM system of FIG. 1. Like the user interface 600 of FIG. 6, the user interface 700 includes various controls that allow an agent to respond to leads. The user interface 700 may be presented to an agent in a similar manner as the user interface 600. The user interface 700 also includes a lead advisor display 800. -
FIG. 8A illustrates an example computer screen image of the example lead advisor display 800 before a lead has been selected by an agent, and FIG. 8B illustrates an example computer screen image of the example lead advisor display 800 after a lead has been selected by an agent. As disclosed in FIG. 8A, the lead advisor display 800 lists five leads. Each lead includes a name 802, a likelihood of success meter 804, and a likelihood of success category indicator 806. As disclosed in FIG. 8A, the leads are listed from highest likelihood of success to lowest likelihood of success. Upon inquiry by the agent, by mousing over a lead with a mouse pointer for example, the lead may expand as shown in FIG. 8A for the lead “Mark Littlefield.” Upon expansion, the lead may present the agent with additional options, such as a confirm button 808, a delete button 810, and a “more info” link 812. - Upon selection of the “more info”
link 812 by the agent, by clicking on the “more info” link 812 with a mouse pointer for example, the agent may be presented with a pop-out display 814 as disclosed in FIG. 8B. The pop-out display 814 may present the agent with an LRM plan associated with the lead. This LRM plan may have been generated by the example methods disclosed herein and may reflect the output decision with the highest, or among the highest, output value for the lead. As disclosed in FIG. 8B, the LRM plan for the lead named “Mark Littlefield” may include employing a sales manager to send an email with message type MT1 in a short timeframe, which corresponds to the output decision 502 of FIG. 5. The agent may then simply click on the pop-out display 814 to have the lead advisor display 800 automatically generate an email to the lead with message type MT1 that will be sent by a sales manager immediately. Alternatively, the agent may manually override the response plan and manually perform a different response. - Therefore, the embodiments disclosed herein include methods of employing an MOR machine learning model to predict multiple interdependent output components of an MOD output decision. The example methods disclosed herein enable the prediction of each output component based on an input and based on all of the other output components. Therefore, the example methods disclosed herein may be employed to solve MOD problems such as LRM problems.
- The embodiments described herein may include the use of a special purpose or general-purpose computer including various computer hardware or software modules, as discussed in greater detail below.
- Embodiments described herein may be implemented using computer-readable media for carrying or having computer-executable instructions or data structures stored thereon. Such computer-readable media may be any available media that may be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media may include non-transitory computer-readable storage media including RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other storage medium which may be used to carry or store desired program code in the form of computer-executable instructions or data structures and which may be accessed by a general purpose or special purpose computer. Combinations of the above may also be included within the scope of computer-readable media.
- Computer-executable instructions comprise, for example, instructions and data which cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
- As used herein, the term “module” may refer to software objects or routines that execute on the computing system. The different modules described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads). While the system and methods described herein are preferably implemented in software, implementations in hardware or a combination of software and hardware are also possible and contemplated.
- All examples and conditional language recited herein are intended for pedagogical objects to aid the reader in understanding the example embodiments and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
Claims (1)
1. A method for employing a multiple output relaxation (MOR) machine learning model to predict multiple interdependent output components of a multiple output dependency (MOD) output decision, the method comprising:
training a classifier for each of multiple interdependent output components of an MOD output decision to predict the component based on an input and based on all of the other components;
initializing each possible value for each of the components to a predetermined output value;
running relaxation iterations on each of the classifiers to update the output value of each possible value for each of the components until a relaxation state reaches an equilibrium or a maximum number of relaxation iterations is reached; and
retrieving an optimal component from each of the classifiers.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/625,945 US20150161508A1 (en) | 2012-08-20 | 2015-02-19 | Multiple output relaxation machine learning model |
US14/966,422 US20160357790A1 (en) | 2012-08-20 | 2015-12-11 | Resolving and merging duplicate records using machine learning |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/590,028 US8352389B1 (en) | 2012-08-20 | 2012-08-20 | Multiple output relaxation machine learning model |
PCT/US2013/055859 WO2014031685A2 (en) | 2012-08-20 | 2013-08-20 | Multiple output relaxation machine learning model |
US14/625,945 US20150161508A1 (en) | 2012-08-20 | 2015-02-19 | Multiple output relaxation machine learning model |
Related Parent Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/590,028 Continuation US8352389B1 (en) | 2012-08-20 | 2012-08-20 | Multiple output relaxation machine learning model |
PCT/US2013/055859 Continuation WO2014031685A2 (en) | 2012-08-20 | 2013-08-20 | Multiple output relaxation machine learning model |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/590,000 Continuation-In-Part US8812417B2 (en) | 2012-08-20 | 2012-08-20 | Hierarchical based sequencing machine learning model |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150161508A1 true US20150161508A1 (en) | 2015-06-11 |
Family
ID=47427980
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/590,028 Active US8352389B1 (en) | 2012-08-20 | 2012-08-20 | Multiple output relaxation machine learning model |
US14/625,945 Abandoned US20150161508A1 (en) | 2012-08-20 | 2015-02-19 | Multiple output relaxation machine learning model |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/590,028 Active US8352389B1 (en) | 2012-08-20 | 2012-08-20 | Multiple output relaxation machine learning model |
Country Status (7)
Country | Link |
---|---|
US (2) | US8352389B1 (en) |
EP (1) | EP2885719A2 (en) |
JP (1) | JP5819572B1 (en) |
CN (1) | CN104769575A (en) |
AU (1) | AU2013305924B2 (en) |
CA (1) | CA2882701A1 (en) |
WO (1) | WO2014031685A2 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9460401B2 (en) | 2012-08-20 | 2016-10-04 | InsideSales.com, Inc. | Using machine learning to predict behavior based on local conditions |
US10067990B1 (en) * | 2016-03-03 | 2018-09-04 | Amdocs Development Limited | System, method, and computer program for identifying significant attributes of records |
US10140345B1 (en) | 2016-03-03 | 2018-11-27 | Amdocs Development Limited | System, method, and computer program for identifying significant records |
US10353888B1 (en) | 2016-03-03 | 2019-07-16 | Amdocs Development Limited | Event processing system, method, and computer program |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9015093B1 (en) | 2010-10-26 | 2015-04-21 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US8775341B1 (en) | 2010-10-26 | 2014-07-08 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
US9137370B2 (en) | 2011-05-09 | 2015-09-15 | Insidesales.com | Call center input/output agent utilization arbitration system |
US8788439B2 (en) * | 2012-12-21 | 2014-07-22 | InsideSales.com, Inc. | Instance weighted learning machine learning model |
US9454732B1 (en) * | 2012-11-21 | 2016-09-27 | Amazon Technologies, Inc. | Adaptive machine learning platform |
US9904725B1 (en) | 2014-12-29 | 2018-02-27 | Velocify, Inc. | Computer system for generation, storage, and analysis of connection data and utilization of connection data in scoring and distribution systems |
CN107820619B (en) * | 2017-09-21 | 2019-12-10 | 达闼科技(北京)有限公司 | hierarchical interaction decision-making method, interaction terminal and cloud server |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5408588A (en) * | 1991-06-06 | 1995-04-18 | Ulug; Mehmet E. | Artificial neural network method and architecture |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7970718B2 (en) * | 2001-05-18 | 2011-06-28 | Health Discovery Corporation | Method for feature selection and for evaluating features identified as significant for classifying data |
US20030140023A1 (en) * | 2002-01-18 | 2003-07-24 | Bruce Ferguson | System and method for pre-processing input data to a non-linear model for use in electronic commerce |
US7152051B1 (en) | 2002-09-30 | 2006-12-19 | Michael Lamport Commons | Intelligent control with hierarchical stacked neural networks |
WO2008033439A2 (en) * | 2006-09-13 | 2008-03-20 | Aurilab, Llc | Robust pattern recognition system and method using socratic agents |
US7958068B2 (en) * | 2007-12-12 | 2011-06-07 | International Business Machines Corporation | Method and apparatus for model-shared subspace boosting for multi-label classification |
US8192289B2 (en) * | 2007-12-26 | 2012-06-05 | Scientific Games Holdings Limited | System and method for collecting and using player information |
US8781989B2 (en) | 2008-01-14 | 2014-07-15 | Aptima, Inc. | Method and system to predict a data value |
US8412525B2 (en) * | 2009-04-30 | 2013-04-02 | Microsoft Corporation | Noise robust speech classifier ensemble |
US8560471B2 (en) * | 2009-08-10 | 2013-10-15 | Yaacov Shama | Systems and methods for generating leads in a network by predicting properties of external nodes |
US8458074B2 (en) * | 2010-04-30 | 2013-06-04 | Corelogic Solutions, Llc. | Data analytics models for loan treatment |
-
2012
- 2012-08-20 US US13/590,028 patent/US8352389B1/en active Active
-
2013
- 2013-08-20 CN CN201380054239.0A patent/CN104769575A/en active Pending
- 2013-08-20 CA CA2882701A patent/CA2882701A1/en active Pending
- 2013-08-20 AU AU2013305924A patent/AU2013305924B2/en not_active Ceased
- 2013-08-20 EP EP13831802.7A patent/EP2885719A2/en active Pending
- 2013-08-20 JP JP2015528605A patent/JP5819572B1/en not_active Expired - Fee Related
- 2013-08-20 WO PCT/US2013/055859 patent/WO2014031685A2/en active Application Filing
-
2015
- 2015-02-19 US US14/625,945 patent/US20150161508A1/en not_active Abandoned
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5408588A (en) * | 1991-06-06 | 1995-04-18 | Ulug; Mehmet E. | Artificial neural network method and architecture |
Non-Patent Citations (1)
Title |
---|
Timotheou "A Novel Weight Initialization Method for the Random Neural Network", ISNN, 2008, pages: 10 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9460401B2 (en) | 2012-08-20 | 2016-10-04 | InsideSales.com, Inc. | Using machine learning to predict behavior based on local conditions |
US10067990B1 (en) * | 2016-03-03 | 2018-09-04 | Amdocs Development Limited | System, method, and computer program for identifying significant attributes of records |
US10140345B1 (en) | 2016-03-03 | 2018-11-27 | Amdocs Development Limited | System, method, and computer program for identifying significant records |
US10353888B1 (en) | 2016-03-03 | 2019-07-16 | Amdocs Development Limited | Event processing system, method, and computer program |
Also Published As
Publication number | Publication date |
---|---|
JP2015534149A (en) | 2015-11-26 |
JP5819572B1 (en) | 2015-11-24 |
AU2013305924A1 (en) | 2015-03-12 |
CN104769575A (en) | 2015-07-08 |
EP2885719A2 (en) | 2015-06-24 |
CA2882701A1 (en) | 2014-02-27 |
WO2014031685A3 (en) | 2014-05-08 |
AU2013305924B2 (en) | 2015-05-14 |
US8352389B1 (en) | 2013-01-08 |
WO2014031685A2 (en) | 2014-02-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8812417B2 (en) | Hierarchical based sequencing machine learning model | |
US8352389B1 (en) | Multiple output relaxation machine learning model | |
US8788439B2 (en) | Instance weighted learning machine learning model | |
US9460401B2 (en) | Using machine learning to predict behavior based on local conditions | |
US9813556B2 (en) | Method for connecting users with agents based on user values dynamically determined according to a set of rules or algorithms | |
US10469664B2 (en) | System and method for managing multi-channel engagements | |
CN108476230B (en) | Optimal routing of machine learning based interactions to contact center agents | |
US9843681B2 (en) | Method for connecting users with agents based on dynamic user interactions with content | |
US20160241648A1 (en) | Method for connecting a user with an agent based on workflow stages of a workflow dynamically created using a workflow template | |
EP3761624A2 (en) | System and method for managing multi-channel engagements | |
US20230057877A1 (en) | Consumer - oriented adaptive cloud conversation platform | |
US20160112302A1 (en) | Dynamic voice or data routing systems | |
US10552920B2 (en) | Detecting location data of co-located users having a common interest |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: INSIDESALES.COM, INC., UTAH Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MARTINEZ, TONY RAMON;ZENG, XINCHUAN;REEL/FRAME:034989/0504 Effective date: 20150218 |
|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |
|
AS | Assignment |
Owner name: XANT, INC., TEXAS Free format text: CHANGE OF NAME;ASSIGNOR:INSIDESALES.COM;REEL/FRAME:057177/0618 Effective date: 20191104 |