WO2004095303A1 - Knowledge governing system and method - Google Patents

Knowledge governing system and method

Info

Publication number
WO2004095303A1
WO2004095303A1 (PCT/US2003/008726)
Authority
WIPO (PCT)
Prior art keywords
information, knowledge, rules, gscript, module
Prior art date
Application number
PCT/US2003/008726
Other languages
French (fr)
Inventor
Moshe Klein
Alon Shwartz
Jim Haim Zafrani
Original Assignee
Synthean, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Synthean, Inc. filed Critical Synthean, Inc.
Priority to PCT/US2003/008726 priority Critical patent/WO2004095303A1/en
Publication of WO2004095303A1 publication Critical patent/WO2004095303A1/en


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04L: TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 63/00: Network architectures or network communication protocols for network security
    • H04L 63/08: Network architectures or network communication protocols for network security, for authentication of entities

Definitions

  • the present invention relates to computer software for business solutions. More specifically, the invention relates to a method and system capable of collaborative, intelligent assembling of data, information, and content from structured and unstructured data sources.
  • Typical ERP systems are essentially large, integrated packaged applications that support core business functions, such as manufacturing, payroll, general ledger, marketing, sales, and human resources.
  • today's ERP systems cannot replace all of a corporation's custom solutions. They must, therefore, communicate effectively with other legacy systems in place.
  • it is not atypical for a corporation to employ more than one completely different ERP system, because a single vendor usually cannot meet every organizational need.
  • ERPs require interaction at the business object level which deals with specific business entities such as general ledgers, budgets or accounts payable.
  • the present invention describes an Enterprise middleware software system (tool) that offers mass customization ability via intelligent governing using both event driven and content driven "Smart Rule based" architecture.
  • the system focuses on the end user while offering real time bi-directional delivery of content (knowledge) to multiple output devices including wireless and cellular devices via proprietary push technology.
  • the invention is a system for intelligent assembling of information from a plurality of data sources comprising: a knowledge processing module for intelligently assembling information into a plurality of knowledge containers; a rules processing module for evaluating and executing a plurality of rules; an information processing module for interfacing with the plurality of data sources and interacting with a second system; an action module for invoking actions in the second system; a presentation module for outputting and formatting the plurality of knowledge containers to a respective connected device; and a broadcast module for communicating with a disconnected device.
  • the invention is a method for intelligently assembling information from a plurality of data sources comprising the steps of: interfacing with the plurality of data sources; interacting with an external system; evaluating and executing a plurality of rules; invoking actions in the external system responsive to evaluating and executing a rule; intelligently assembling information into a plurality of knowledge containers; and outputting the plurality of knowledge containers to an external device.
  • FIG. 1 is an exemplary block diagram of a Knowledge Governing Architecture, according to one embodiment of the present invention
  • FIG. 2 is an exemplary three layers architecture of each module, according to one embodiment of the present invention.
  • FIG. 3 is an exemplary primary and secondary MR/C configuration, according to one embodiment of the present invention.
  • FIG. 4 is an exemplary multiple networks support overview
  • FIG. 5 is an exemplary multiple network support detailed view
  • FIG. 6 is an exemplary monitoring overview
  • FIG. 7 is an exemplary architectural block diagram according to one embodiment of the present invention.
  • FIG. 8 is an exemplary primary MR/C fail over for primary MR/C restart
  • FIG. 9 is an exemplary primary MR/C fail over for secondary MR/C process
  • FIG. 10 is an exemplary primary MR/C fail over for manager process
  • FIG. 11 is an exemplary block diagram for communication module interaction
  • FIG. 12 is an exemplary process flow for communication module sending and receiving messages
  • FIG. 13 is an exemplary architectural block diagram for a communication module
  • FIG. 14 is an exemplary block diagram for a security gateway
  • FIG. 15 is an exemplary block diagram for access points
  • FIG. 16 is an exemplary block diagram for communication gateway (proxy mode).
  • FIG. 17 is an exemplary block diagram for gateway authentication mode
  • FIG. 18 is an exemplary process flow for privilege check process
  • FIG. 19 is an exemplary architectural block diagram for rules processing module
  • FIG. 20 is an exemplary process flow diagram for gScript processing
  • FIG. 21 is an exemplary process flow diagram for event rule processing
  • FIG. 22 is an exemplary architectural block diagram for knowledge processing module architecture
  • FIG. 23 is an exemplary iJob script logical presentation
  • FIG. 24 is an exemplary architectural block diagram for information processing module
  • FIG. 25 is an exemplary process flow for DCI module process
  • FIG. 26 is an exemplary architectural block diagram for action module
  • FIG. 27 is an exemplary process flow for broadcast server pull process
  • FIG. 28 is an exemplary process flow for broadcast server push process
  • FIG. 29 is an exemplary process flow for a request by a presentation module to run a gScript and format its results
  • FIG. 30 is an exemplary process flow for format processing
  • FIG. 31 is an exemplary block diagram of a UI overview, according to one embodiment of the present invention.
  • FIG. 32 is an exemplary Design Studio console
  • FIG. 33 displays an exemplary mapping format
  • FIG. 34 is an example of mapping a Customer business object to a Customer XML schema
  • FIG. 35 is an example for defining a parameter
  • FIG. 36 is an exemplary UI for a KC builder
  • FIG. 37 is an example for a DCI to DCI parameter mapping
  • FIGS. 38A-38F are exemplary user interface representations for various examples.
  • FIG. 39 is an exemplary layout of a KC Analyzer
  • FIG. 40 is an example of using a KC Analyzer to view the connection of two KFs
  • FIG. 41 is an exemplary layout for a KC Emulator
  • FIG. 42 is an exemplary layout of a gScript Builder
  • FIG. 43 describes one solution using Global and Linked KCs
  • FIG. 44 is an exemplary layout for a gScript Analyzer
  • FIGS. 45A-45C are exemplary layouts for various examples using gScript Analyzer
  • FIGS. 46A and 46B are exemplary Palm Pilot™ and WAP Publisher layouts
  • FIG. 47 is an example where a gScript is segmented
  • FIG. 48 is an example of the gScript segmentation of FIG. 47.
  • FIG. 49 is an exemplary block diagram depicting configuration components and their relationship.
  • the present invention allows the collaborative, intelligent assembling of information, data, and content from structured and unstructured sources across a myriad of corporate applications, as well as the World Wide Web, and produces the knowledge necessary to govern business decision-making.
  • FIG. 1 is an exemplary block diagram of a Knowledge Governing Architecture, according to one embodiment of the present invention.
  • the present invention offers a new breed of enterprise middleware application that defines a Knowledge Governing System (KGS™) 10; a Knowledge Governing Architecture (KGA™) 11 as the architectural backbone; and a Knowledge to User (K2U™) as the workflow and audience definition.
  • the system is capable of integrating corporate infrastructure 12 including front-office systems (CRM) 12a, back-office systems (ERP) 12b, Knowledge Management, Decision Support, legacy systems 12c, portal technologies, the web 12d, email systems 12e, and any other corporate and non-corporate systems.
  • the system is also capable of interfacing with business-to-business 13a (B2B), business-to-consumer 13b (B2C), and EAI 13c systems.
  • the system intelligently draws information from all of these sources, assembles the information to produce knowledge, and then delivers the knowledge on a need basis, allowing users at different locations to change the information that created the knowledge or to save it back into the system.
  • the power of the system is derived from its ability to enable any organization to produce knowledge from mere information by concentrating on content rather than on processes and systems. Knowledge is essential in creating and maintaining loyal customers, in outselling competitors, and in more effective marketing, and is a significant competitive tool.
  • the present invention provides the facility to produce this knowledge from data, information, and content. In addition, the invention makes this knowledge available at the right time via both push and pull technologies.
  • the system of the present invention also provides the ability to feed information back to the myriad of systems that originally produced the knowledge. This knowledge is available to the users 14a via LAN/WAN portals 14 and any supported wireless device 15a via a wireless portal 15.
  • the invention provides a comprehensive infrastructure that allows a complete enterprise solution.
  • the system is modular, based on open architecture, and accommodates rapidly evolving technologies without the need to repeatedly update the product.
  • FIG 7 shows the logical layers of the system.
  • the system includes the following main modules: Knowledge Processing Module (KPM) 70, Rules Processing Module (RM) 71, Information Processing Module (LM) 72, Action Module (AM) 73, Presentation Module (PM) 74, Broadcast Module (BM) 75, and a database server 76.
  • Every one of these modules can be located on one or more physical servers (to support a high volume of transactions) and communicates with a Message Router/Controller (MR/C) 77.
  • the MR/C is used to navigate messages between the different modules, provides load-balancing capability, and has a fail over mechanism. The rest of the communication is done within each module.
  • Each module is a three-layer architecture including a Message Router/Controller, a Manager, and an Agent.
  • FIG.2 is an exemplary block diagram of a three layers architecture.
  • MR/C (communication processor) 22 communicates with every Manager 24 in the system, receives task 28 requests from one manager, and forwards them to one or more of the other managers connected to it. Manager 24 then instantiates a new Agent 26 based on the request and its own availability. Agent 26 is the actual entity that has the knowledge of how to carry out the task. When the Agent is done performing the task, it sends a message back to the Manager 24 with the returned data (if any). Based on predefined parameters, the Manager can save the returned information to the database or can keep it in memory. The Manager then sends a response back to the requestor through the MR/C 22 with the returned information, or with the information id if the information was saved to the database 30.
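The task flow described above (a request arrives via the MR/C, a Manager instantiates an Agent, and either the result or a stored information id is returned) can be sketched as follows. All class and method names here are illustrative assumptions, not terms from the patent.

```python
# Illustrative sketch of the MR/C -> Manager -> Agent task flow.
# The names Manager, Agent, and handle_task are hypothetical.

class Agent:
    """The entity that knows how to carry out one task."""
    def run(self, task):
        # Placeholder for real work; returns whatever data the task produces.
        return {"task": task, "result": f"processed:{task}"}

class Manager:
    def __init__(self, save_to_db=False):
        self.save_to_db = save_to_db   # predefined parameter: persist or keep in memory
        self.db = {}                   # stands in for the database
        self.next_id = 0

    def handle_task(self, task):
        agent = Agent()                # instantiate a new Agent for this request
        data = agent.run(task)         # Agent performs the task and returns data
        if self.save_to_db:            # persist and answer with an information id
            self.next_id += 1
            self.db[self.next_id] = data
            return {"info_id": self.next_id}
        return data                    # otherwise return the data directly

manager = Manager(save_to_db=True)
print(manager.handle_task("assemble-report"))   # {'info_id': 1}
```

A real Manager would also queue requests and track Agent availability, as described later in the text.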
  • MR/C 22 is a service that is responsible for message routing between all managers in the system. Since the MR/C communicates with all managers of the system, it also balances the load of the system by equally directing the messages based on manager type and load. This service can run on the same physical server as one of the managers, or it can run on a different physical server.
  • the system can support multiple MR/Cs in two configurations: fail over backup, and multiple networks - MR/C bridge.
  • the MR/C serves as a means of communication between applications, whether they reside on the same machine, on another machine connected on a LAN, or on a machine connected over a WAN.
  • the MR/C can perform load- balancing and recovery duties if more than one of the same type of application are connected and request these features.
  • the MR/C includes the following functions:
  • Connectivity - Clients can connect to the server using TCP/IP, a shared memory architecture, or other connectivity protocols.
  • the physical communications layer is implemented in a component, allowing for easy extensibility to other network architectures as needed.
  • the system supports multiple servers that share the workload and can take over for one another in the case of a system failure.
  • the communications layer is designed to be robust and transparently corrects transmission errors, resending data as necessary. An originating application will not consider a send operation as complete, until all parties to which the message is being delivered have acknowledged receipt of the message.
  • Access Control - All applications connected to the system are required to be authenticated, and the system can be configured to provide different levels of service to applications depending on what permissions are assigned to them.
  • Encryption - All data being transmitted over a network, or stored where it may be accessed, is encrypted to prevent interception.
  • Performance - Message throughput and scalability are maximized by such measures as server load balancing, compression, and the extraction of message content into alternate means of transport.
  • the system is designed along the concept of a distributed client-server architecture, with one or possibly more servers and various clients. From the point of view of the client application, the client component is the only point of interface with the system; that component internally handles all the details of connecting to the various servers that might be present, and the server or servers handle the details of routing messages from one client to another.
  • the architecture can support multiple MR/Cs in the system, where the rest act as backups to the primary.
  • An example of a system configuration is depicted in FIG. 3. Every manager 37-39 in the system is connected to the primary MR/C 34 running on one server, while the secondary MR/C 36 runs on another server.
  • the primary MR/C 34 sends every new connection and disconnection to the secondary MR/C 36 as well as every message it receives. This way, the secondary MR/C keeps an exact copy of the information the primary MR/C 34 has.
  • the secondary MR/C 36 performs the same work as the primary MR/C 34, without actually sending out messages or having managers connected to it.
  • the system can support multiple MR/Cs over multiple networks or sub networks, as shown in FIG. 4.
  • the system can also work over one network in order to split the load between multiple MR/Cs.
  • this configuration is best applicable for multiple-network enterprises, especially with different types of networks, or a network with a few sub networks. In the case of multiple sub networks, this configuration is beneficial only if applying multiple MR/Cs reduces the network traffic between the sub networks.
  • every primary MR/C communicates with all other primary MR/Cs in the system. Since every message can potentially "cross" multiple MR/Cs, every MR/C has a complete copy of the others' subscribed message types.
  • An exemplary block diagram for a multiple network support configuration is shown in FIG. 5.
  • the primary MR/Cs 52a and 52b reside on respective networks 50a and 50b.
  • Each of the primary MR/Cs 52a and 52b includes a secondary MR/C 54a and 54b, respectively, and is connected to its respective managers.
  • in this configuration, a remote manager may receive a message even where there is no local subscription to that message.
  • FIG. 6 is an exemplary configuration for a monitor service. As depicted in FIG. 6, monitor service 60 communicates with the primary MR/C 63 and receives status messages of warnings or errors within the system.
  • MR/C 63 sends periodic heartbeat messages to all managers connected to it. These messages are then returned to MR/C 63 with a simple status report including current load (based on predefined parameters). Like the MR/C 63, manager 65 sends periodic heartbeat messages to all connected agents 66, which return activity status as well as load. In case of failure in one of the agents 66 or managers 65, a message is sent to the monitor service 60 by the manager 65 or the MR/C. Monitor service 60 then logs the failure in the database and/or displays it on the monitoring console 61. Each primary MR/C 63 may include a secondary MR/C 64.
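The heartbeat scheme above (ping each manager, collect status and load, report failures to the monitor service) can be sketched as follows; the class names and the load/status fields are illustrative assumptions.

```python
# Hypothetical sketch of the heartbeat/status mechanism described above.

class Monitor:
    """Stands in for the monitor service that logs failures."""
    def __init__(self):
        self.log = []
    def report(self, source, error):
        self.log.append((source, error))   # real system: log to DB and/or console

class ManagerStub:
    def __init__(self, name, load=0, alive=True):
        self.name, self.load, self.alive = name, load, alive
    def heartbeat(self):
        if not self.alive:
            raise ConnectionError(f"{self.name} not responding")
        return {"name": self.name, "status": "ok", "load": self.load}

def poll_managers(managers, monitor):
    """Send a heartbeat to every manager; report failures to the monitor."""
    statuses = []
    for m in managers:
        try:
            statuses.append(m.heartbeat())
        except ConnectionError as exc:
            monitor.report(m.name, str(exc))   # failure goes to monitor service
    return statuses

monitor = Monitor()
managers = [ManagerStub("rules", load=3), ManagerStub("dci", alive=False)]
print(poll_managers(managers, monitor))   # one "ok" status for the rules manager
print(monitor.log)                        # [('dci', 'dci not responding')]
```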
  • if the primary MR/C 63 fails, the secondary MR/C 64 takes over. Every manager 65 then establishes a connection with the secondary MR/C 64. The fail over process is described below from the primary, secondary, and manager views.
  • FIG. 8 is an exemplary process flow for primary MR/C 63 recovery process.
  • when the primary MR/C 63 is restarted, it initiates the following process to regain control of the system.
  • when the primary MR/C 63 first loads (block 81), it tries to establish a connection with the secondary MR/C and requests status and mode, as shown in block 82. If the secondary MR/C sends a message saying it is the primary MR/C, the current MR/C switches to secondary mode.
  • if the primary MR/C gets a message from a manager indicating there is another primary MR/C in the system, the primary MR/C sends a message to the secondary MR/C verifying the message, as shown in block 85. If the other MR/C is in primary mode, it then switches to secondary mode, as shown in block 87. If the primary MR/C could not validate operational mode with the secondary MR/C (e.g., could not establish connection), it sends a message to the manager to pause operation.
  • FIG. 9 is an exemplary process flow for the MR/C fail over process from the secondary MR/C view.
  • the secondary MR/C tries to reconnect to the primary MR/C for a predefined number of seconds (RT, recovery time), as depicted in block 92. If the secondary MR/C fails to reconnect, it switches to primary mode in block 94. The secondary MR/C then tries to establish connection to all managers, as shown in blocks 96 and 97. On connection to a manager in block 98, the secondary MR/C sends a message declaring itself as being the primary MR/C (in case the manager is still connected to the previous primary MR/C).
  • the new primary MR/C can also receive connection request from managers who lost connection with the old primary MR/C. These managers are then added to the new primary. In block 95, the new primary MR/C buffers all messages addressed to managers who are not yet connected. If the old primary MR/C establishes connection, the new primary MR/C sends it a message about it being the primary.
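The secondary-takeover behaviour above (switch to primary mode after the recovery time, buffer messages for managers that have not yet reconnected, flush them on connection) can be sketched as follows. The class and method names are illustrative assumptions.

```python
# Hypothetical sketch of the secondary MR/C fail over process.

class SecondaryMRC:
    def __init__(self):
        self.mode = "secondary"
        self.connected = set()
        self.buffer = []          # messages for not-yet-connected managers

    def primary_lost(self):
        """Reconnect attempts to the primary exhausted after RT seconds."""
        self.mode = "primary"

    def on_manager_connect(self, manager):
        """A manager connects; flush any messages buffered for it."""
        self.connected.add(manager)
        delivered = [m for m in self.buffer if m[0] == manager]
        self.buffer = [m for m in self.buffer if m[0] != manager]
        return delivered

    def route(self, manager, message):
        if manager in self.connected:
            return ("delivered", message)
        self.buffer.append((manager, message))   # buffer until it reconnects
        return ("buffered", message)

mrc = SecondaryMRC()
mrc.primary_lost()
print(mrc.route("rules-mgr", "task-1"))      # ('buffered', 'task-1')
print(mrc.on_manager_connect("rules-mgr"))   # [('rules-mgr', 'task-1')]
print(mrc.route("rules-mgr", "task-2"))      # ('delivered', 'task-2')
```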
  • FIG. 10 is an exemplary process flow for MR/C fail over process from a manager view.
  • the manager tries to reestablish a connection with the primary MR/C for a predefined number of seconds (RT, recovery time), as shown in block 102. If the reconnection attempt fails, the manager tries to establish a connection with a secondary MR/C in block 105. In block 108, if more than Re connection attempts have been performed, the manager waits for a connection from a primary MR/C in block 109. Once connected to the new primary MR/C, communication between the manager and the new primary MR/C is established.
  • if the manager receives a connection request from the secondary MR/C while it is still connected to the primary MR/C, it sends an MR/C switch message to the primary MR/C, disconnects from the primary MR/C, and establishes a connection with the secondary MR/C (now the primary).
  • the manager module 22 is the component that is responsible for managing and monitoring all Agents 26. Normally, there would be one Manager running per physical machine that would manage and control all of the assigned Agents that are running on that physical machine.
  • the system includes the following managers: Rules Manager, DCI Manager, Knowledge Processing Manager, Action Manager, and Broadcast Manager.
  • Each Manager manages an internal queue of all requests sent to the Manager by the MR/C. As requests come in from the MR/C, the Manager is responsible for processing the requests and queuing the requests as needed. Each Manager manages all agents and their instances. With each request that comes in, the Manager determines which Agent is available for use. Once the correct Agent has been identified, the Manager instructs the Agent to start processing the request. If an Agent is not found, then the Manager instantiates another Agent or sends the request back to the MR/C.
  • each Manager manages a pool of agents.
  • the Manager has the option and ability to instantiate multiple Agents on startup and keep those Agents in a pool. These Agents are idle until the Manager receives a request to process. At that point, the Manager determines which Agent is most appropriate for running the request and assigns it to that specific Agent. If all Agents are busy, the Manager has the option to transfer the request to another Manager if one is available, or to wait for an Agent to become available.
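The agent-pool behaviour above can be sketched as follows: pre-instantiated Agents sit idle, an incoming request is assigned to a free one, and when all are busy the request would be transferred or queued. Names are invented for illustration.

```python
# Illustrative agent-pool sketch; PooledAgent and PoolManager are hypothetical names.

class PooledAgent:
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.busy = False
    def start(self, request):
        self.busy = True                       # agent is now occupied
        return f"agent-{self.agent_id}:{request}"
    def finish(self):
        self.busy = False                      # agent returns to the idle pool

class PoolManager:
    def __init__(self, pool_size):
        # instantiate multiple Agents on startup and keep them in a pool
        self.pool = [PooledAgent(i) for i in range(pool_size)]

    def dispatch(self, request):
        for agent in self.pool:
            if not agent.busy:
                return agent.start(request)    # hand the request to an idle agent
        return None  # all busy: transfer to another Manager or wait

mgr = PoolManager(pool_size=2)
print(mgr.dispatch("r1"))   # agent-0:r1
print(mgr.dispatch("r2"))   # agent-1:r2
print(mgr.dispatch("r3"))   # None -> would be transferred or queued
```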
  • each Manager manages error handling and fault tolerance. Each Agent is responsible for its own error handling. However, each Agent is also responsible for sending this error information to the Manager for logging and system wide handling. It is the responsibility of the Manager to log all of the errors and handle any issues that might arise. The manager is able to re-route a request that resulted in an error.
  • a Communication module is responsible for all communications between the module managers and the MR/C.
  • the communication module exposes an API set that is used by both the module Managers and the MR/C to communicate back and forth.
  • the communication module is an exchangeable module that allows the system integrator to use a different communication module for each installation. In situations where the MR/C is replaced by an external messaging system, the communication module is replaced with one that knows how to communicate with the external messaging system. This allows a corporation to continue utilizing its existing infrastructure by plugging the software of the present invention into its existing messaging system.
  • the messaging system is the MR/C. In another embodiment, the messaging system is an external system that exposes a message bus and API to the system.
  • FIG. 11 is an exemplary diagram of the communication module's place in the system, according to one embodiment of the present invention.
  • the communication module listens in for any messages that are sent directly to the manager to which the communication module is connected. Depending on the message bus that the communication module is connected to, its operation is different as far as the message retrieval is concerned.
  • a communication module is utilized to communicate to Information module 111, Knowledge processing module 112, Action modules 113, Monitoring and Tracing module 114, Broadcast module 115, Presentation module 116, Rule Processing module 117, and User Interface 118.
  • FIGS. 12a and 12b are exemplary process flow diagrams for receiving and sending messages, respectively. Since the communication module needs to support different messaging systems, it needs to have an architecture that supports easily replaceable messaging-system integration components. This allows the system developers to support multiple systems without major changes.
  • the communication module listens for the messages in block 122a. If the message bus does not support listeners (block 120a), it polls the message bus for messages until a message is received, as shown in block 121a. Once a message arrives (block 123a), the message is processed in block 124a. If the processed message is valid (block 125a), the message is passed to the Manager in block 127a. If the processed message is not valid (block 125a), the error is logged and a response message is sent in block 126a.
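The receive flow above (listen or poll, validate, then forward to the Manager or log an error) can be sketched as follows. The function and class names, and the validity rule, are illustrative assumptions.

```python
# Hypothetical sketch of the receive flow in FIG. 12a.

def is_valid(msg):
    # stand-in validity check; the real format is system-specific
    return isinstance(msg, dict) and "body" in msg

def receive_loop(bus, manager, error_log):
    while True:
        msg = bus.next_message()   # listener callback, or poll result
        if msg is None:
            break                  # no more messages in this sketch
        if is_valid(msg):
            manager.append(msg)    # pass the message to the Manager
        else:
            error_log.append(msg)  # log the error (real system also responds)

class FakeBus:
    """Stands in for a message bus that is polled for messages."""
    def __init__(self, messages):
        self.messages = list(messages)
    def next_message(self):
        return self.messages.pop(0) if self.messages else None

manager, errors = [], []
receive_loop(FakeBus([{"body": "a"}, "garbled", {"body": "b"}]), manager, errors)
print(manager)   # [{'body': 'a'}, {'body': 'b'}]
print(errors)    # ['garbled']
```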
  • FIG. 13 is an exemplary block diagram of the architecture of the communication module.
  • Communication module API 130 is used by the system Managers to communicate with the message bus.
  • Message formatter 132 is in charge of formatting outgoing messages.
  • Message analyzer 134 is responsible for breaking up received messages and forwarding the body of the message to the manager via the communication API. To support the various messaging systems that are available, the only piece that needs to change is the message bus integration component 136.
  • the security system is a two layer security system.
  • a Gateway, which is responsible for all authentication and communications with the system from the outside world.
  • the second layer of security is responsible for the security of system functionality.
  • the second layer receives the user id from the Gateway once the user has been authenticated. At that point, the second layer checks the user's security policy every time the user is using the system.
  • the two layers ensure that the security subsystem is both extremely secure and yet highly manageable.
  • FIG. 14 illustrates an overall system architecture for Gateway security, according to one embodiment of the present invention.
  • the Gateway module is responsible for authenticating users before they are granted access to the system.
  • the gateway supports user authentication via Windows NT security, or System security.
  • the Gateway module controls all access to the system from any external system 145.
  • External systems are any systems that are not part of the core server or any modules that need to interact with the system via an external device. This means that all of the user interfaces 143, presentation 142 module and broadcasting 141 module have to use the gateway to communicate with the core system, as shown in FIG. 14.
  • FIG. 15 illustrates two exemplary paths via which different users may access the system.
  • a user connects to the Gateway module (block 155) via a broadcasting module (block 154) using a hand-held device in block 151.
  • the user connects to the Gateway module (block 155) via a gateway client (block 153) and a firewall (block 157) using a browser in block 152.
  • the system block 158
  • a user directory block 156
  • the Gateway includes two modes of operations, proxy gateway, and authentication gateway.
  • in the proxy gateway mode of operation, the gateway is the module responsible for authenticating the user and passing all commands, requests, and responses from the external client to the core system.
  • any external client needs to send the requests via a specific port that will be assigned to the gateway.
  • the gateway will do the same on its port.
  • the gateway understands a set of commands that allow the user to access every function of the system.
  • the system administrator is also able to define custom commands that allow a user, for example, to request data with a specialized command.
  • FIG. 16 is an exemplary block diagram of the Gateway module operating in the proxy gateway mode.
  • in this mode, a gateway client 163 communicates with the Gateway module 161.
  • the gateway client 163 is preferably a COM object that communicates with the Gateway module through a firewall 162.
  • the client object exposes properties and methods that automatically package the commands into small XML requests that are sent to the gateway module.
  • the gateway module interprets the commands, performs the command, and sends the results back to the gateway client object.
  • System commands such as authentication and configuration commands return status codes for the operation. Commands to retrieve data return both a status code for the operation and the data that was retrieved.
  • the gateway module supports batch operations for commands. This means that the client could batch a series of commands and execute them as one. For example, a batch for logging in, retrieving information, and logging out could be saved as a command file that could be retrieved by the gateway client. Since all information in the system is preferably represented in XML, the command files are also XML files that follow a specific format.
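A batched command file of the kind described above might look like the following. The element and attribute names are invented for illustration; the patent only states that command files are XML in a specific format.

```python
# Hypothetical XML batch command file (login, retrieve data, logout) and a
# minimal interpreter loop of the kind a gateway might run.
import xml.etree.ElementTree as ET

batch_xml = """
<batch>
  <command name="login" user="demo"/>
  <command name="getData" query="customers"/>
  <command name="logout"/>
</batch>
"""

def run_batch(xml_text):
    results = []
    for cmd in ET.fromstring(xml_text):
        # each command would be interpreted and executed by the gateway;
        # here we just record the command name in order
        results.append(cmd.get("name"))
    return results

print(run_batch(batch_xml))   # ['login', 'getData', 'logout']
```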
  • the system applications can all be run either on the same network or in disconnected networks. When these applications are run on the same network, there is no need for the communication between the user interfaces and any other clients to use the proxy gateway. Instead, the gateway could be used as a simple authentication gateway that is responsible for simply authenticating the users of the system.
  • FIG. 17 is an exemplary block diagram of the Gateway module operating in the authentication gateway mode.
  • when the gateway is used as an authentication gateway, it simply passes the login information to the gateway server 173 for authentication using security database 174.
  • the gateway server 173 attempts to authenticate the login information and sends a response back to the application (175-177).
  • the response is comprised of a SID (security id) that is attached to each request that client 170 makes from the system database 171 or MR/C 172.
  • the SID is checked each time a request is received for authenticity.
  • Each time a user authenticates via the gateway a new SID is generated.
  • the SID is then encrypted and expires as soon as the user logs off.
  • the authentication gateway supports multiple authentication services. Such services could include the system's own proprietary security system 177, Windows NT authentication 175, Kerberos™ 176, and any other security systems that are addressable via an API. A corporation could use its own security system to control system authentication if that system is addressable via an API and a security layer could be developed for it.
  • the Security subsystem is responsible for controlling the user's privileges. Each function that can be performed by the user is controlled via a named privilege.
  • the System user and privileges database is based largely on the model of groups and users. In this model, users belong to groups and groups can be comprised of other groups. Users can belong to multiple groups and inherit the security from the group itself. Additionally, users can be granted access to certain privileges directly. In one embodiment, the privilege system only supports explicit rights. This means that the privilege is explicitly assigned to a user or group. Under this model, users do not have any privileges unless they are directly assigned to the user or to a group to which the user belongs. Because the design of the privilege system does not allow for a situation where a privilege is stripped from a user or group, conflicts in the privilege system will not occur.
  • privileges in the system are checked on a per access basis rather than per login basis. This means that each time the user attempts to perform a function, the privilege system is queried to check if the user has access to that function.
  • per access privilege checking method allows administrators to make changes to the privilege system in real time and have these changes take effect immediately.
  • FIG. 18 is an exemplary process flow for the privilege checking process.
  • the Gateway returns a SID (security id) to the gateway client in block 181.
  • This SID includes the user's GUID (global unique identifier). All user interfaces, APIs, DCIs, and other means of accessing the system require the security SID as one of the parameters.
  • a privilege check is performed in block 184. The privilege check makes sure that the user has access to the function being attempted. If the privilege is cleared (block 186), the function is then performed in block 188. Otherwise, the operation is aborted in block 187.
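The explicit-rights model with nested groups and per-access checking can be sketched as below. All names are assumptions for illustration; the sketch assumes acyclic group membership.

```python
class PrivilegeSystem:
    """Sketch of the explicit-rights model: users inherit privileges from
    groups, groups may nest, and rights are never stripped, so no conflicts
    can arise. Group membership is assumed to be acyclic."""

    def __init__(self):
        self.group_members = {}   # group -> set of users/subgroups
        self.grants = {}          # user or group -> set of privilege names

    def groups_of(self, principal):
        # Collect direct groups and, recursively, the groups containing them.
        result = set()
        for group, members in self.group_members.items():
            if principal in members:
                result.add(group)
                result |= self.groups_of(group)
        return result

    def has_privilege(self, user, privilege):
        # Checked on every access, so administrative changes apply immediately.
        principals = {user} | self.groups_of(user)
        return any(privilege in self.grants.get(p, set()) for p in principals)
```
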
  • each one of the modules can run on one or many computers.
  • the MR/C is responsible for routing all messages back and forth between the various modules.
  • the MR/C is also responsible for handling the load balancing of the various module managers.
  • One of the features of the system of the present invention is the ability to integrate with other EAI systems.
  • The integration that is required varies depending on the customer requirements. Exemplary options for system deployment include: complete system deployment; complete system and EAI integration deployment, in which the system communicates with the EAI system via a specialized DCI; and EAI and selected modules integration, in which only some of the modules are integrated into the EAI message bus and the MR/C is not utilized.
  • the complete deployment option involves utilizing all of the modules and components to achieve all needed functionality.
  • a DCI is provided or developed for each system that needs to be integrated into the system.
  • The system assumes the functionality of the EAI system that otherwise would be in place. Because the entire integration is based on the system, the system is responsible for all monitoring, tracing, and message routing. Systems that comply with SNMP and MIB-II could be monitored by the system.
  • The communication module has the ability to talk to various EAI systems via their message buses. This allows the modules (which use the communication module to communicate with the MR/C) to talk to other EAI systems without having to use the MR/C as a message bus or a specialized DCI.
  • the individual modules could be connected directly to the EAI message bus and, in turn, be used in the EAI system's workflow and process flow engine. Only some of the modules are used in this deployment. The MR/C is not needed because the EAI system's message bus is used instead.
  • When modules are deployed this way, they act independently of each other. For example, when the Rules Processing Module is executing a gScript and it needs to retrieve a KC, it simply sends a message over the message bus specifying that it needs a specific KC. The EAI system's message controller automatically sends the message to the correct module based on the workflow that has been predetermined.
  • Rules Processing Module 71 stores, evaluates, and executes rules.
  • A rule is defined as a conditional statement that tells the program how to respond to particular input, or a combination of IF x THEN y ELSE z statements.
  • An event, request, or other type of input is sent to the Rules Server, which then evaluates the conditional clause of the rule (the IF clause) and responds accordingly (based on the THEN and ELSE clauses).
  • The Rules Server supports rule nesting, parallel execution, and encryption.
  • The Rule Manager spawns a thread that communicates with the appropriate DCI for further information, such as what information is being requested, the parameters, and so on. If the event comes from an outside source, the information may already be within the event itself. Once it has all the necessary information, the Rule Manager transfers it to the Rules Engine. The same happens when the Knowledge Processing Server requests the processing of a rule.
  • the Rules Engine checks the rules repository for all rules that are affected by the event, that is, checking the IF clause of all rules. If there are any affected rules, the Rules Engine spawns a Rule Agent thread, which executes the appropriate rules with the submitted information. If the Knowledge Processing Server submitted a request, the Rules Engine knows which rule needs to be processed. The Rules Engine then spawns a Rule Agent that processes the requested rule.
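The IF/THEN/ELSE rule evaluation described above can be sketched as follows. The `Rule` shape and function names are hypothetical, not the patent's actual data model.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Rule:
    condition: Callable[[dict], bool]    # the IF clause
    then_action: Callable[[dict], Any]   # the THEN clause
    else_action: Callable[[dict], Any]   # the ELSE clause

def process_event(event: dict, rules: list) -> list:
    """For each rule affected by an event, evaluate the IF clause and
    respond with the THEN or ELSE branch accordingly."""
    results = []
    for rule in rules:
        if rule.condition(event):
            results.append(rule.then_action(event))
        else:
            results.append(rule.else_action(event))
    return results
```
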
  • FIG. 19 is an exemplary block diagram of the Rules Processing Module Subsystems Architecture.
  • Communication layer 192 is the layer responsible for all communications with the Rules Processing Module 191. This layer is preferably universal to all system objects and processes in the System.
  • Rules manager 193 is responsible for managing and monitoring all Rules Agents 196. Normally, there is one Rules Manager running per physical machine that manages and controls all of the Agents that are running on that physical machine.
  • The Rules Manager 193 is also responsible for receiving notifications of events from the message router/controller. Once a notification is received, the Rules Manager retrieves the rules that are attached to the event from Rules database 195. The Rules Manager then retrieves the gScripts that are attached to the rule from the Metadata Repository 194 and starts processing them.
  • Rules Agent 196 is responsible for running all rules and processing gScripts. Rules Agent 196 evaluates the rules and runs the gScripts that are attached to, or contain these rules. The Rules Agent interacts with the Rules database 195, and gScript (Metadata) Repository 194. Rules Studio 197 is a plug-in to the User Interface that allows users to create new Event Rules and Content Rules. Rule Studio 197 also allows users to create gScripts and assign them to specific rules.
  • A gScript language is used as the scripting language that represents the logic and functionality of a gScript.
  • The language allows the users to run iJobs, evaluate the results of the iJobs, and make decisions based on the data returned by the iJobs and actions.
  • the gScript also allows users to send requests to the Action Module, broadcast server, and information server.
  • The gScript language allows the users to define variables and share them throughout the gScript processing. Variables are used for passing data between various knowledge containers and for condition evaluation purposes.
  • a variable could be declared as private or public to the gScript.
  • the gScript language allows the users to define error handling flow control. The users are able to tell the script processor how to handle errors.
  • Some of the possibilities could be running a different iJob, stopping execution, or any command supported by the gScript language.
  • The gScript language also provides the users with feedback about error information. The users can then determine the flow of the gScript according to the error message.
  • Each gScript is capable of receiving runtime parameters. These parameters could be used anywhere in the gScript to control the flow of the gScript or as variables for other actions. Additionally, the gScript could be split into multiple gScript segments that could be run as one or individually. The segmentation could be nested and, therefore, would allow multiple gScript segments to be run as one item within a bigger gScript. Each gScript segment knows which data elements it needs to run. In order for a gScript segment to run, the needed parameters must be supplied.
  • the following table includes an exemplary list of the functions and statements that the gScript language supports.
  • All gScripts are created using the gScript Studio user interface described below. Once a gScript has been visually created, the user has the option to save the gScript as either an executable gScript or a private gScript.
  • An executable gScript is a gScript that is ready for production and could be implemented in the production system.
  • A private gScript is either a gScript that is not complete and therefore should not be run, or a gScript fragment that is simply used inside other gScripts and cannot run on its own.
  • the user also has the option to export the gScript so that it can be read by a different system installation. An import feature is provided so that gScripts could be imported into the system.
  • the gScripts are compiled when they are saved and activated into the system.
  • the compilation process involves the following steps.
  • The conversion of gScript graphical layout to gScript code step converts the graphical layout of a gScript to the XML representation of a gScript.
  • The dynamic linking of all gScript fragments step takes all fragments that were copied into the master gScript and converts their placeholders to code.
  • The determine gScript execution plan step determines the execution plan of a gScript.
  • the execution plan specifies to the gScript processor all the dependencies that have been programmed into the gScript.
  • the gScript processor is able to determine which processes can be run concurrently.
  • Pre- Process all iJob, Action, and Presentation requests step prepares all of the requests.
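The execution-plan step above, which lets the gScript processor find processes that can run concurrently, can be sketched as a wave-based dependency analysis. The dictionary representation and function name below are assumptions, not the patent's actual format.

```python
def execution_plan(dependencies):
    """Group steps into waves: steps in the same wave have no unmet
    dependencies and may run concurrently. `dependencies` maps each
    step name to the collection of steps it depends on."""
    remaining = {step: set(deps) for step, deps in dependencies.items()}
    plan = []
    while remaining:
        # Steps whose dependencies are all satisfied form the next wave.
        ready = [step for step, deps in remaining.items() if not deps]
        if not ready:
            raise ValueError("circular dependency in gScript")
        plan.append(sorted(ready))
        for step in ready:
            del remaining[step]
        for deps in remaining.values():
            deps.difference_update(ready)
    return plan
```
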
  • FIG. 20 is an exemplary flow diagram depicting gScript processing.
  • The gScript code and execution plan are retrieved from the gScript Repository 201.
  • The XML code is converted to VBA-compatible code and the concurrent processing plan is determined in block 203.
  • The gScript language is evaluated and, in block 207, requests are sent for external processing to servers 208a-e. If there is no more code to process (block 204), the processing is completed in block 206.
  • Rules include Event rules and Content rules.
  • Event rules are rules that are attached to events in the system and, therefore, do not have a condition to evaluate. In other words, the condition in an event rule is actually whether the event occurred.
  • a notification of the event is pushed to the Rules Manager.
  • the Rules Manager then takes over the processing of the rule.
  • FIG. 21 is an exemplary flow diagram describing the processing of an event rule.
  • event notification is received by the Rule Manager and the event information and rule information are retrieved in blocks 211 and 212, respectively.
  • The gScript information is retrieved, and messages are sent by the Rule Manager to process the gScripts in block 214.
  • gScript code and execution plans are retrieved in block 200, and the gScript code is processed in block 217.
  • the gScript retrieve object retrieves all code that is related to the gScript being processed. It then creates a comprehensive gScript that contains all of the code for the gScript and sends that code to the gScript processor.
  • The processing includes evaluating conditional and looping statements (block 217a), determining the concurrent processing plan (block 217b), and sending requests for external processing (block 217c).
  • Concurrent processing is only allowed for data sources from the Internet or other data sources that do not support modification of data. If the data sources require modification of data, the processing is carried out sequentially.
  • a "Finished Processing Message" is sent to the Rule Manager in block 219.
  • Content rules are rules that are based on the content of a Knowledge Container or a Knowledge Fragment. Content Rules are run by requesting a gScript that contains the rule. The steps for processing Content rules are similar to steps for processing Event rules, except that steps 210 and 211 in FIG. 21 for retrieving event request and information are not performed.
  • Knowledge Processing Module (KPM) 70 is responsible for intelligently assembling the information into Knowledge Containers based on an iJob. This module has the following components: a Knowledge Processing Manager, which accepts knowledge generation requests and manages the Knowledge Agents; and Knowledge Agents, which are responsible for the knowledge creation by running an iJob and creating Knowledge Containers.
  • a Knowledge Processing Manager is used to execute Knowledge Agents based on ad- hoc requests.
  • The Manager provides the ability for every component, in or outside the system, to request the creation of Knowledge Containers.
  • a Knowledge Container is an object that contains the elements that define the mapping of a business object.
  • The information in the Knowledge Container can be assembled from multiple data sources, corporate and non-corporate (Internet, or others), and is not limited to one-dimensional information. In other words, information in the knowledge container can be assembled from one data source based on previously retrieved information from another data source.
  • a Knowledge Container is also known as a Knowledge Cube.
  • the principal idea behind the Knowledge Container (and the rest of the system, for that matter) is using every data element as a building block that can be used to build more information.
  • a simple example might be the retrieval of driving directions based on the address field in every contact.
  • the Knowledge Agents are the components that actually run an iJob and construct a Knowledge Container.
  • An iJob is, in essence, a script that describes where to extract the information from, as well as how to do it.
  • the Agent then communicates with the appropriate DCI to actually retrieve the information.
  • A Knowledge Container is a predefined set of fields that describe a business entity. The Agent also has the ability to process rules that can affect the content of the container, but not its structure; that can be done using the Rules Server.
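The chained assembly described above, where one fragment's data drives the retrieval of the next (e.g. driving directions keyed on a contact's address), can be sketched as follows. The function names and container shape are hypothetical.

```python
def assemble_contact_container(crm_lookup, directions_lookup, contact_id):
    """Sketch of multi-source assembly: a fragment retrieved from one
    source (the contact record) feeds the request to the next source
    (driving directions). Callables stand in for DCI-mediated sources."""
    contact = crm_lookup(contact_id)                     # first data source
    directions = directions_lookup(contact["address"])   # chained second source
    return {                                             # the Knowledge Container
        "contact": contact,
        "driving_directions": directions,
    }
```
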
  • FIG. 22 is an exemplary block diagram for a KPM architecture.
  • When a request to process a certain iJob comes in from Message Router 221, the request is forwarded to the KPS Manager 225 by the KPM 222.
  • the KPS Manager 225 determines which KPS Agent 226 to send the request to.
  • The agent takes over the processing of the iJob. Similar to the Rules Processing Module architecture in FIG. 19, KPS Agent 226 evaluates the request and runs the iJobs that are attached to, or contain, the relevant rules.
  • The KPS Agent interacts with the iJob database 228 and the Metadata Repository 227.
  • iJob Studio 224 is a plug-in to the User Interface that allows users to create new KCs.
  • An iJob script is a script that defines the content of a Knowledge Container (KC).
  • the KC is made up of one or more Knowledge Fragment (KF).
  • An iJob is simply the script that instructs the system how to create the KC by taking one or more related KFs and putting them together in the right order.
  • FIG. 23 is an example of the making of an iJob KC from some contact information. All data in an iJob has to relate to each other. There has to be a connection between the company information coming out of SiebelTM and the contact information. Unrelated information cannot be contained in a Knowledge Container. For gathering unrelated data, two or more iJobs are required.
  • An iJob is comprised of one or more Knowledge Fragments (KFs) that are related to each other.
  • A KF is comprised of data that fits a specific Data Definition.
  • a KF does not have to contain data that comes from the same source each time the data is requested.
  • the data could come from any number of sources. As long as the data is mapped to the knowledge definition (KD), it will be accepted into the KC.
  • the different data sources could be selected based on availability and conditional evaluation of the data.
  • the driving directions could come from MapQuestTM if the contact's office is in the United States, but come from MapBlastTM if the contact is international.
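The conditional source selection above (MapQuestTM for domestic contacts, MapBlastTM for international ones) can be sketched as below. The provider callables and field names are assumptions; the point is that both sources are mapped to the same KD fields before the fragment is accepted.

```python
def retrieve_directions_fragment(contact, mapquest, mapblast):
    """Sketch of conditional data-source selection for one Knowledge
    Fragment: the fragment's Knowledge Definition stays the same, only
    the source differs based on evaluation of the data."""
    if contact["country"] == "US":
        raw = mapquest(contact["address"])   # domestic contacts
    else:
        raw = mapblast(contact["address"])   # international contacts
    # Both sources are mapped to the same KD fields before acceptance.
    return {"origin": raw["from"], "destination": raw["to"], "steps": raw["steps"]}
```
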
  • a Single KC could contain multiple nested records.
  • the SiebelTM Contact Information could contain multiple contacts, which will produce multiple records for all related data.
  • the KC has a tree structure.
  • the structure of a KD does not change for any specific KC.
  • the KC structure depends on the KD structure. Any changes in the KD's structure could have major effects on the KC.
  • An example of a KD is driving directions.
  • The KD that defines the driving directions makes sure that the driving directions always contain the same fields and follow the same field names and formats regardless of the source of the information.
  • KDs are set up by users of the system. To set up a KD, the user determines and defines the fields and their formats. Once the KD is defined, the various data sources that can retrieve this information are mapped to the KD. The mapping information is then stored and used when a KF is requested based on the KD that was just created.
  • Data Sources are the systems that supply the data for the KF. These systems are accessible via a DCI component that knows how to interact with the system. Each DS is assigned to one or more categories. The user registering the DCI maps the fields that are retrieved by the DCI to one or more knowledge definitions. When an iJob is created, the user selects the category of data that the KF needs to contain. Once the selection is made, a list of sources is presented to the user. At that point, the user is able to select which data sources to use. Regardless of which data source is selected, all data coming from the data sources is in the same format.
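The stored mapping from a source's field names to a KD's field names can be sketched as a simple lookup applied per field. The representation below is an assumption made for illustration.

```python
def map_to_kd(kd_fields, source_record, field_map):
    """Sketch of applying a stored mapping from a data source's field
    names to a Knowledge Definition's field names. A fragment is accepted
    into the KC only when every KD field can be filled."""
    fragment = {}
    for kd_field in kd_fields:
        source_field = field_map[kd_field]   # mapping set up at DCI registration
        fragment[kd_field] = source_record[source_field]
    return fragment
```
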
  • a knowledge fragment is defined as data that makes up a portion of the data contained in the Knowledge container.
  • an iJob is a collection of KFs that relate to each other.
  • the iJob Script language supports operators, math functions, string functions, time/date functions, and the like.
  • The request is forwarded to the KPS Manager, which determines which KPS Agent to send the request to.
  • the agent takes over the processing of the iJob. Once all of the processing is done, the completed KC contains all the data that was retrieved.
  • This KC is then saved in the database for further processing by the Rules Processing Module.
  • Information Processing Module 72 provides the means for the system to interact with the outside world (i.e., other systems, programs, data sources). Since it is impossible to anticipate all the ways the system can interact with external systems, the DCI system must be flexible enough to let developers add extra functionality when needed without radically modifying the system.
  • the DCI system includes the following functions:
  • Some examples are being able to access a database, hook into Microsoft Exchange to send an email, or to monitor an event in an ERP system.
  • DCI Managers are responsible for instantiating and running the DCI Agents. The DCI Managers are also responsible for controlling the load and queuing up requests as they come in to the system.
  • the MR/C relies on the DCI Managers to launch the DCI Agents and manage them. It is the DCI Manager's responsibility to inform the MRC of its presence and how busy it is.
  • DCI Agents are the only components in the DCI system that can interact with the outside world. While all DCI Managers are the same, each DCI Agent is different. This is because each Agent interacts with external systems in a specific way. This allows third party developers to create additional DCI Agents and add them to the system to add more functionality to the overall system. There can be multiple agents on a server where a DCI Manager is installed.
  • The DCI Manager instantiates each of the Agents using MicrosoftTM Transaction Server (MTS). This allows the manager to run the agents out of process and still control them. MTS further allows the manager to use connection pooling and transactions as needed.
  • FIG. 24 is an exemplary block diagram of the Information Processing Module Architecture.
  • DCI Manager 245 is a Windows NT service whose job is to launch DCI Agents 246 based upon the requests it receives from the MRC 241. It also has the job of managing and monitoring every instance of the DCI Agents 246.
  • DCI Agent 246 is a COM executable running out-of-process (using MicrosoftTM Transaction Server) that serves as a gateway or interface to external systems. It is the job of the DCI Agent to interact with these external systems to retrieve information or to perform some kind of activity. Each agent will probably be unique; however, each agent needs to adhere to the DCI Agent model.
  • the DCI Agent is responsible for the following functions:
  • the agent is responsible for communicating with the data sources, retrieving the data that is requested, and converting the information into knowledge fragments.
  • the conversion is done by first mapping the fields from the data source to the knowledge fragment definition. Once the fields are mapped, the agent is able to map the data to the correct fields on the knowledge fragment.
  • Interacting with systems outside of the system - the agent is able to communicate with systems that are outside of the system. This allows the agents to call specific functions or procedures that each system might need.
  • FIG. 25 is an exemplary process flow of a DCI module.
  • a request from MR/C is received in block 2501.
  • The DCI Manager retrieves the message parameters in block 2502 and determines the needed Agent in block 2504. If there is an available Agent (block 2506), the parameters are sent to the Agent in block 2512. If there is no free Agent, a new Agent is created in block 2510, and then the parameters are sent to the Agent in block 2512.
  • the agent process step is executed in block 2514 and the message with job details is sent to the MR/C in block 2516.
  • the request is processed by the Rules Agent.
  • the message parameters are processed and if there is a connection available to the needed data source (block 2520), data is requested from the data source (e. g. 3 rd party data source 2526) in block 2524.
  • a Knowledge Fragment (KF) is built based on the retrieved data and if needed, in block 2528, the KF is saved in the KC Repository 2532.
  • the KF is encoded into the MR/C message and the message is sent to the DCI Manager for routing in block 2536.
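The agent-dispatch step of the flow above (blocks 2506, 2510, 2512, 2514) can be sketched as a small pool: reuse a free agent if one exists, otherwise create one. The class and interfaces below are hypothetical.

```python
class DCIManager:
    """Sketch of the DCI Manager's dispatch step: reuse a free agent if
    one exists, otherwise instantiate a new one, then run the request."""

    def __init__(self, agent_factory):
        self._agent_factory = agent_factory
        self._free_agents = []

    def dispatch(self, params):
        if self._free_agents:
            agent = self._free_agents.pop()   # block 2506: free agent found
        else:
            agent = self._agent_factory()     # block 2510: create a new agent
        result = agent.process(params)        # block 2514: agent process step
        self._free_agents.append(agent)       # agent returns to the pool
        return result
```
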
  • Agent architecture includes the following objects:
  • System Configuration and Monitoring - This object is responsible for any system configuration item retrieval and setting. This object is also in charge of providing the monitoring information requested by the manager.
  • XML Translator - This object is responsible for converting the retrieved data into the XML format. The translator retrieves the correct data map and maps all of the values to the field names.
  • Gateway - The actual object that interfaces with the external systems.
  • all of the above objects interact via a standard API.
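The XML Translator step, emitting the mapped field/value pairs as an XML knowledge-fragment document, can be sketched with the standard library. The fragment shape is an assumption.

```python
from xml.etree.ElementTree import Element, SubElement, tostring

def to_xml_fragment(fragment_name, mapped_values):
    """Sketch of the XML Translator: the mapped field/value pairs are
    emitted as an XML document representing the knowledge fragment."""
    root = Element(fragment_name)
    for field, value in mapped_values.items():
        # One child element per mapped KD field.
        SubElement(root, field).text = str(value)
    return tostring(root, encoding="unicode")
```
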
  • Action Module 73 provides the means for the system to perform actions or carry out activities external to the system (i.e., in other systems, programs, data sources). The Action Module does not actually carry out the action but rather invokes the action in an external system/component.
  • Action Managers are responsible for instantiating and running the Action Agents.
  • the Action Managers are also responsible for controlling the load and queuing up requests as they come in to the system.
  • The MR/C relies on the Action Managers to launch the Action Agents and manage them. It is the Action Manager's responsibility to inform the MRC of its presence and how busy it is. There should be only one Action Manager installed on a server.
  • Action Agents are the only components in the Action Module that can interact with the outside world. While all Action Managers are the same, each Action Agent is different. This is because each Agent interacts with external systems in a specific way. This allows third party developers to create additional Action Agents and add them to the system to add more functionality to the overall system. There can be multiple agents on a server where an Action Manager is installed. The Action Manager instantiates each of the Agents using MicrosoftTM Transaction Server (MTS). This allows the manager to run the agents out of process and still control the agent. MTS further allows the manager to use connection pooling and transactions as needed.
  • FIG. 26 is an exemplary block diagram of the Action Module Architecture.
  • The Action Manager 265 is a Windows NT service whose job is to launch Action Agents 266 based upon the requests it receives from the MRC via the message router 261.
  • the Action Manager 265 also has the job of managing and monitoring every instance of the Action Agents 266.
  • the Action Agent 266 is responsible for interacting with 3 rd party systems databases 264. This allows the agents to be able to call specific functions or procedures that each system might need.
  • The Action Agent 266 is also responsible for providing status updates to Action Manager 265. Since the bulk of the processing in the system occurs on the Action Agent, the agents are required to constantly monitor their progress and report it to the manager. The manager is then able to perform load balancing effectively among the various agents that are running.
  • All Action Agents share the same basic architecture. This architecture is common to all agents and allows all agents to be able to perform the functions correctly. The only section of the agents that is different is the data retrieval classes. These classes directly depend on the data source that is being accessed.
  • The Agent architecture includes a System Configuration and Monitoring object that is responsible for any system configuration item retrieval and setting. This object is also in charge of providing the monitoring information requested by the manager.
  • a Gateway object interfaces with the external systems. Preferably, all of these objects interact via a standard API.
  • the Broadcast Module 75 is responsible for communicating with disconnected devices such as the Palm PilotTM and Pocket PC.
  • the current trend of the industry is to provide wireless Internet access to hand-held devices (HHD).
  • This trend is being translated, in most cases, into the ability to "browse the web" simply by providing web-browser software with the device.
  • the browsing is done by browsing regular HTML sites or by building special web sites for new types of devices like WAP enabled devices.
  • Browsing the web has a few limitations that limit the usefulness of the HHD. For one, the user has to be online (connected) all the time in order to get the information. Low bandwidth and low reception, especially in buildings, are further limitations of web browsing.
  • Connected Devices are devices that need to be constantly connected to the network (Internet) in order to submit and retrieve information, just like a desktop browser (or portal). They connect to the Presentation Module, and the only software they need is a browser.
  • Disconnected Devices are devices with additional software that maintains the retrieved information in a local database even when the device is not connected to the network, while still allowing push messages and download of information upon request. Using this configuration allows the user to download all the requested information to the HHD and use it without being connected to the network. Taking into account that push messages are only a small subset of the information a user requires, this configuration provides a much faster, more economical, and more reliable solution.
  • the system provides the necessary software for Disconnected Devices.
  • Because the disconnected devices do not have a constant connection to the network or Internet, they need to have the information pushed to them when it becomes available. This is done via a publish-subscribe method.
  • the disconnected device subscribes to information that gets pushed when needed.
  • the devices also need to have a way to request information to be pushed to them.
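The publish-subscribe push to disconnected devices can be sketched as an in-memory registry of subscriptions. The class shape is hypothetical; in the described system, pages would be formatted by the Presentation Module before sending.

```python
class BroadcastRegistry:
    """Sketch of publish-subscribe delivery to disconnected devices:
    devices subscribe to topics, and published pages are queued for
    every subscriber of that topic."""

    def __init__(self):
        self._subscriptions = {}   # topic -> set of device ids
        self.outbox = []           # (device, formatted page) pairs

    def subscribe(self, device, topic):
        self._subscriptions.setdefault(topic, set()).add(device)

    def publish(self, topic, page):
        # Pages are assumed pre-formatted by the Presentation Module.
        for device in self._subscriptions.get(topic, set()):
            self.outbox.append((device, page))
```
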
  • the Broadcast Module needs to be able to handle the formatting (via the presentation module) and sending of all pages to any one device. Further, the Broadcast Module needs to act as a server that can accept TCP/IP connections and keep the connections open until the transfer is completed.
  • the Broadcast Module 75 relies on the Presentation Module 74 to format all of the data that needs to be sent out. The request for information is forwarded to the Presentation Module 74. The Presentation Module then retrieves the data and formats it accordingly. Once the formatting is done, the Broadcast Module is responsible for sending out the information to the requesting device.
  • FIG. 27 is an exemplary block diagram of a pull process initiated by a disconnected device 270.
  • the disconnected device 270 makes the request for information to the Broadcast Module 75.
  • the Broadcast Module sends the request to the Presentation Module 74, which in turn, sends a request for a gScript to be executed to the Rules Processing module 71.
  • Once the Rules Processing Module 71 is done executing the gScript, it sends a collection of knowledge containers back to the Presentation Module 74 for formatting.
  • the Presentation module formats the collection via the Formatter Object 272 and database 271 and sends it back to the Broadcast module 75.
  • the Broadcast module then sends the information back to the disconnected device 270.
  • a push scenario is one in which the system sends the device information of the type that might not have been requested.
  • the push is initiated by a business rule that is implemented in a gScript. This allows both event-based and content-based rules to push information to a disconnected device.
  • FIG. 28 is an exemplary block diagram of a push scenario.
  • a gScript is instructed to send data out to a disconnected device 280.
  • the gScript sends a message to the MR/C 285 with the details of the data to be pushed and the recipients of the pushed data.
  • The Broadcast Module 75 uses the Presentation Module 74 to format the data accordingly. Once the data is formatted, the Broadcast Module 75 pushes the data out to the disconnected device 280.
  • The Presentation Module 74 is responsible for outputting and formatting knowledge containers to connected devices.
  • This external view could be a web server or another application that would like to receive the data in a formatted output.
  • the presentation server allows users to create their own formatting templates and format a knowledge container according to this template.
  • the Presentation Module is the only module that does not share the same architecture as the rest of the system.
  • the Presentation Module is implemented as a COM object that could be called from any application.
  • This COM object is responsible for retrieving the knowledge container and formatting it according to a retrieved template. This object could be used by a web server.
  • the Presentation Module COM object is a component that allows users to format data in their own format and present it. This component is able to format any data that is retrieved by the system. Further, the component is able to format data to any format. Users have complete control over the size, color, placement, and other layout and formatting options.
  • the Presentation Module COM object has the following functionality:
  • FIG. 29 depicts an exemplary process flow of a request to run a gScript segment and format the returned data.
  • the request is made from a Web server.
  • the gScript request is sent to the MR/C 292 in block 291.
  • the iFormat script is retrieved in block 296.
  • respective KCs are retrieved from KC repository 295 in block 294.
  • the KC data is formatted in block 297 and returned to the requester in block 298.
  • the iFormat scripting language is based on XSL and uses many of the XSL features.
  • XSL is the formatting language that is used to format an XML file. Although there are other technologies available, presently, XSL has become the standard for formatting XML due to its power and extensibility. XSL uses XML syntax and creates its own flow objects, so it can be used for advanced formatting such as rearranging, reformatting and sorting elements. This enables the same XML document to be used to create several sub document views. XSL also adds provisions for the formatting of elements based on their context in the document, allows for the generation of text, and the definition of formatting macros.
  • When a gScript is created, the user is able to create an iFormat script for each segment of the gScript. The iFormat script is then saved with a unique name in the system database. A gScript could have multiple iFormat scripts attached to it. All iFormat scripts are grouped accordingly. The system has a predefined set of groups for connected device types. These groups allow the system to automatically choose a default iFormat script for the device type.
  • the Formatter object is responsible for taking a knowledge container and formatting it according to an iFormat script.
  • the Formatter object receives the knowledge container and the iFormat script and outputs an HTML file based on them.
  • FIG. 30 is an exemplary flow diagram of the formatting process.
  • a request for formatting a KC is received by the Formatter object. If the KC is included in the request (block 302), the iFormat information is retrieved in block 304 and the KC is formatted according to the iFormat.
  • the steps for the formatting process include instantiating XML DOM in block 305a, loading the KC XML data and iFormat XSL in block 305b, and translating the XML to HTML in block 305c.
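The three steps above (instantiate a DOM, load the KC XML with its iFormat XSL, translate to HTML) can be approximated in a short sketch. Since Python's standard library has no XSLT engine, a trivial tag-to-template mapping stands in for the iFormat script here; the KC structure and tag names are invented for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical knowledge-container XML (structure invented for illustration).
KC_XML = "<kc><customer><name>Acme Corp</name><phone>555-0100</phone></customer></kc>"

# Stand-in for an iFormat script: maps KC tags to HTML templates.
IFORMAT = {
    "name":  "<h1>{text}</h1>",
    "phone": "<p>Phone: {text}</p>",
}

def format_kc(kc_xml, iformat):
    """Load the KC XML (blocks 305a/305b) and translate it to HTML (305c)."""
    root = ET.fromstring(kc_xml)      # build the document tree
    parts = []
    for elem in root.iter():          # walk elements in document order
        template = iformat.get(elem.tag)
        if template:
            parts.append(template.format(text=elem.text))
    return "".join(parts)

html = format_kc(KC_XML, IFORMAT)
```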
  • a device inspector object determines the type of device that is requesting the information.
  • the information that the device inspector reports to the Formatter object include browser type, browser capabilities, and screen size. This information tells the Formatter object which default format to use when formatting the page.
  • a disconnected devices inspector (DDI) object allows the Presentation module to determine the type of browser that is trying to access system information. Once the type is known, the Presentation module is able to select the correct format for the information that is requested.
  • the presentation module is responsible for formatting and serving requested information from the system.
  • the requested information is a knowledge container that contains data. Since more than one device type can access the data, different iFormat scripts could be mapped to a specific KC Schema.
  • the Presentation module uses the connected devices inspector to determine the type of browser that the user is using. The Presentation module then uses the correct formatting script to format the KC data and serve it to the requesting user.
  • the DDI object is able to determine the browser type by utilizing a set of JavaScript scripts to determine the browser capabilities, screen resolution, connection speed, etc.
  • JavaScript supports the ability to report back the browser type, name, and capabilities. Different versions of JavaScript support only a subset of these parameters, but can still respond with the browser name and type. Any device or browser that does not support JavaScript can still report the browser type and id via the request headers.
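The request-header fallback above can be sketched server-side by inspecting the User-Agent header. The substrings and device classes below are illustrative assumptions, not the patent's actual detection table:

```python
# Fallback for devices without JavaScript: infer the device class from
# the User-Agent request header, then pick a default format for it.

def detect_device(user_agent):
    ua = user_agent.lower()
    if "up.browser" in ua or "wap" in ua:
        return "wap-phone"
    if "palm" in ua or "windows ce" in ua:
        return "pda"
    return "desktop"

# Default format per device class (assumed mapping for illustration).
fmt = {"wap-phone": "wml", "pda": "compact-html", "desktop": "html"}

device = detect_device("Mozilla/4.0 (compatible; MSIE 6.0; Windows CE)")
```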
  • the DDI needs to be placed in the web page and the values retrieved need to be passed to the Presentation module.
  • the system features a monitoring system that allows system administrators to monitor and control all parts of the system.
  • the monitoring service is responsible for monitoring the individual modules and agents and making sure that they are running.
  • the system uses heartbeat messages and SNMP messages that are sent by the individual applications.
  • a heartbeat object is an object that each application uses to send heartbeat messages to the monitoring service.
  • the heartbeat messages are simply messages that let the monitoring service know that the application is still alive.
  • the information sent to the monitoring service includes the application id, instance id, and server id. Using this information, the monitoring service is able to know if the application is still alive.
  • All heartbeat messages are sent via a network protocol and bypass the MR/C or any other message bus that the system is using.
  • the heartbeat message is received by the monitoring service and logged.
  • the heartbeat object is included in every application and DCI. Additionally, the heartbeat object is included in the DCI Framework and the SDK.
  • the monitoring service registers the application, instance, and server in its log, generates a unique id, and returns it to the heartbeat object along with the heartbeat interval. Once the heartbeat object receives the unique id, it only needs to send the unique id, and the monitoring service automatically correlates the unique id to the specific application, instance, and server.
  • the time interval for the heartbeat is a configurable item. Additionally, the interval could be different for each application. This allows the system administrator to prioritize the incoming heartbeat messages. An administrator, for example, might be more concerned with the Rules Processing agent than the Action agent. In this case, the administrator is able to set the heartbeat interval on the Rules Processing agent to fire off every 30 seconds and the Action agent every 60 seconds.
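The registration and correlation steps above can be sketched as follows. The class names and the "alive if a beat arrived within two intervals" rule are illustrative assumptions:

```python
import itertools

class MonitoringService:
    """Sketch of heartbeat registration, correlation, and liveness checks."""
    def __init__(self):
        self._ids = itertools.count(1)
        self._apps = {}        # unique id -> (app, instance, server, interval)
        self._last_beat = {}   # unique id -> timestamp of last heartbeat

    def register(self, app_id, instance_id, server_id, interval):
        uid = next(self._ids)
        self._apps[uid] = (app_id, instance_id, server_id, interval)
        # The heartbeat object stores the uid and sends only it from now on.
        return uid, interval

    def heartbeat(self, uid, now):
        self._last_beat[uid] = now

    def is_alive(self, uid, now):
        interval = self._apps[uid][3]
        last = self._last_beat.get(uid)
        return last is not None and now - last <= 2 * interval

svc = MonitoringService()
# Rules Processing agent beats every 30 s, Action agent every 60 s:
rules_uid, _ = svc.register("rules-agent", 1, "srv-a", 30)
action_uid, _ = svc.register("action-agent", 1, "srv-a", 60)
svc.heartbeat(rules_uid, now=0)
svc.heartbeat(action_uid, now=0)
```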
  • the present invention utilizes a graphical user interface (GUI) for ease of use.
  • the user interface is largely divided into three groups: Configuration Applications, Administration Application, and User Applications.
  • Configuration Applications: the intelligent retrieval, assembly, and formatting of information and knowledge is achieved by building a collection of scripts that provide powerful directions on how to perform each task. These scripts are developed using the Configuration Applications, mainly by the system's integrators and internal IT, and are maintained by trained system managers.
  • the system provides the DeskPortal for the desktop users and other connected handheld devices such as WAP phones, and the HandPortal for all disconnected devices like Palm PilotsTM and Pocket PCs.
  • the Administration Applications provide a powerful way of managing, monitoring, and configuring the system and are mainly used by the system administrator.
  • FIG. 13 is an exemplary block diagram of the UI overview including the following exemplary applications in each category:
  • the above applications could be Windows applications, UNIX applications, LINUX applications, or the like.
  • every application has a small registry that holds system information. This file is distributed with the application.
  • all applications require a username and password in order to grant access.
  • the configuration applications are a collection of tools and applications that allow building, maintaining, and analyzing the DCIs, iJobs, gScripts, and iFormats that are the directions by which the system retrieves, synthesizes, governs, formats, and delivers the information. These tools are used by system integrators, internal IT, or any user with some technical knowledge.
  • Data Design Studio (DDS)
  • all the Configuration Applications can be activated and executed from within the DDS based on user security level. Every activity within every application is also security dependent and can be granted or revoked.
  • the DDS 311 is the main console for the Configuration Applications and allows their execution depending on user security level.
  • the DDS acts as an application launcher for all other applications and provides a consistent look-and-feel across them.
  • FIG.31 shows an exemplary DDS console.
  • the DDS includes the following functions:
  • Windows Selector: provides functionality like tile, cascade, and arrange icons.
  • the application displays a login screen, which requires username, password, server name, and database name (normally hidden).
  • username and password are taken from the operating system.
  • the application then logs into the system database and retrieves security information on what the user can (and cannot) perform in the system. Based on these values, some of the menu items are disabled or invisible.
  • the Application Launcher console loads the requested application into the working area. The new application then connects to the database to retrieve security information as well.
  • FIG. 32 is an exemplary Design Studio console.
  • the DCI Designer 312 is an application that allows the configuration of any DCI in the system. This configuration includes system information like default username/password, server name, timeout, parameters, events, fields mapping, etc.
  • the DCI Designer includes the following functions:
  • System Information allows the configuration of system type information, information that is used while connecting to the data source like default username & password, server name, database name, etc for Internet sources.
  • Schema Mapping 312a creates the fields mapping between the data source fields and a business object XML schema.
  • Parameters Definition 312b creates the XML schema of the parameters accepted by this DCI by selecting the required data source fields.
  • One of the major features of the DCI is the ability to configure the returned information structure, that is, map the data source entities into one of the system's business object XML schemas.
  • the system provides multiple predefined business object XML schema types, like a Customer schema, Driving Directions schema, Stock Information schema, etc. These schemas define business objects and are saved in the system's repository. The schema repository can be modified and new schemas can be added as necessary.
  • An exemplary mapping format is displayed in FIG. 33.
  • On the left side of the screen are the data source fields, in the middle the mapping lines, and on the right the XML schema. Since the DCI's fields are stored in the database, the application retrieves them. If the DCI supports the RefreshEntityFields function, the list can automatically be refreshed. If the DCI doesn't support this functionality (most Internet sites don't), the list can be manually edited.
  • the XML schema can be edited using the Schema Editor application.
  • the field mapping is done by drawing a line between one data source field on the left and one or more of the XML schema fields on the right. When a more complicated mapping is needed, the user can use any of the provided operators, math functions, string functions, and/or time/date functions.
  • This feature allows removing unnecessary fields (by not mapping them) and formatting the fields' type, layout, and content by using system and user-defined functions. This is especially powerful not only because it reduces the time it takes to roll out the system, but also because the data source can change (and often does): entities can be added, removed, or changed. Having a user interface tool reduces the time to apply these changes to the system.
  • FIG. 34 is an example of mapping a Customer business object (for example, from Siebel™) to a Customer XML schema.
  • the Middle Name field is not mapped and therefore not retrieved as part of this KF.
  • the Phone and Fax fields in the data source are separated into Area Code, Prefix, and Number, while the schema represents each as one field. In this case, two functions need to be created:
  • the user has the ability to control each field data type, length, format, etc.
  • the user can create and save his own functions, which allows encapsulation of functions and processes, since these functions can then be used in other functions (in the same mapping).
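The Phone/Fax case above can be sketched as a mapping table plus a user-defined combining function. The field names follow the FIG. 34 example; the mapping syntax itself is an illustration, not the system's actual expression language:

```python
# Three source fields are combined into one schema field with a
# user-defined concatenation function; unmapped fields are dropped.

def concat_phone(area_code, prefix, number):
    return f"({area_code}) {prefix}-{number}"

# Mapping table: schema field -> (source fields, combining function)
MAPPING = {
    "Phone": (("PhoneAreaCode", "PhonePrefix", "PhoneNumber"), concat_phone),
    "Fax":   (("FaxAreaCode", "FaxPrefix", "FaxNumber"), concat_phone),
    "FirstName": (("First Name",), lambda v: v),
    # "Middle Name" is deliberately unmapped, so it is never retrieved.
}

def apply_mapping(record, mapping):
    out = {}
    for schema_field, (source_fields, fn) in mapping.items():
        out[schema_field] = fn(*(record[f] for f in source_fields))
    return out

record = {"First Name": "Ada", "Middle Name": "X",
          "PhoneAreaCode": "310", "PhonePrefix": "555", "PhoneNumber": "0100",
          "FaxAreaCode": "310", "FaxPrefix": "555", "FaxNumber": "0101"}
customer = apply_mapping(record, MAPPING)
```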
  • FIG. 35 illustrates an example for defining a parameter.
  • Parameters definition functionality allows the creation of XML schema describing the parameters accepted by this DCI. The functionality is similar to the schema mapping functionality only reversed. On the left side is the requested XML schema (from the repository or new) and on the right side the DCI parameters. Since every DCI can have more than one parameter (even for the same business object) a parameter definition mapping needs to be created. The parameters field lists are part of the DCI, since the DCI needs to be programmed to accept them. The mapping provides the ability to change the parameters XML representation without changing the DCI.
  • the Knowledge Container Builder 313 (KC Builder) is the application used to create Knowledge Containers by graphically creating an iJob. Functions include: • KC Builder - the main purpose of this application; creates the iJob script for a KC based on the graphical layout of the KFs and their relationship.
  • KC Analyzer - provides the ability to test run the KC and see the returned information.
  • FIG. 36 shows an exemplary KC builder UI.
  • a user can create KC from one or more Knowledge Fragments (KF).
  • the KF type (that is XML schema) is defined by the first DCI added to it.
  • Each KF is based on one or more DCIs (see below) and together define the KC.
  • the result of the KC Builder is an iJob which is basically a script describing how to build the KC. The script is then translated into an XML document.
  • the user can also define input parameters as an XML document. Using the parameter mapping (see above), those parameters can be mapped to one or more KFs. Examples of parameters are UserName and CompanyName, for returning information regarding a specific user and a specific company, etc.
  • FIG. 37 shows an example of such a mapping.
  • the address fields from a customer KF (SiebelTM for example) are mapped to the "To Address" fields of a driving directions KF (MapQuest for example). Similar to the DCI Designer's mapping functionality, the mapping is done simply by drawing the line.
  • the user can also apply formatting functions or use system values described below:
  • each KF can contain multiple DCI Objects (DCIs) using the AND or OR commands.
  • the KF can support both AND and OR logical operations, where AND is the sum of all information returned from all data sources, while OR is used to get only one set of information from only one data source.
  • FIG. 38B displays an example of three driving directions DCIs that ensure the return of driving directions.
  • FIG. 38C describes one solution; however, based on this logic, the returned information will be the sum of information from systems A and B, or the information from system C, which is not the requested solution. Even if the order of DCIs in the KF is switched as shown in FIG. 38D, the result is still the same. Only by using DCI grouping can the right solution be provided in a simple and easy way.
  • FIG. 38F describes the graphical representation of three systems required to provide driving directions.
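The AND/OR grouping described above can be sketched as follows: AND accumulates results from every listed DCI, while OR returns the first group that yields anything. The nested-list group syntax and the canned results are illustrative assumptions:

```python
# Sketch of KF evaluation with AND/OR and DCI grouping.
# "(System A AND System B) OR System C" becomes [["A", "B"], ["C"]].

def run_dci(results, name):
    """Stand-in for invoking a DCI: look up canned results by name."""
    return results.get(name, [])

def eval_and(results, names):
    """AND: the sum of all information returned from all data sources."""
    combined = []
    for name in names:
        combined.extend(run_dci(results, name))
    return combined

def eval_or(results, groups):
    """OR: information from the first group that returns anything."""
    for group in groups:           # each group is a list of AND-ed DCIs
        data = eval_and(results, group)
        if data:
            return data
    return []

results = {"A": ["route-part-1"], "B": ["route-part-2"], "C": ["full-route"]}
directions = eval_or(results, [["A", "B"], ["C"]])
```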
  • An exemplary layout of a KC Analyzer is shown in FIG. 39.
  • the user can actually view the content of the KC.
  • Running the KC Analyzer causes execution of the iJob and the population of the KC with information from the appropriate systems.
  • although the user might have access to use or build a KF, he might not have access to view the information retrieved by it. In this case, the secured KF(s) will be empty. Since the result is an XML document, the information is represented in a tree format.
  • the KC Analyzer can also execute parts of the KC. In this case, all the parameters usually provided by other KFs need to be manually entered.
  • FIG. 40 shows how to use the KC Analyzer to view the connection of only two KFs.
  • the Analyzer also supports multiple statistical reports on the execution of the KC.
  • FIG. 41 depicts an exemplary layout for a KC Emulator.
  • the KC Emulator is an extension to the KC Analyzer. As shown, the KC Emulator displays an animated representation of the creation of the KC in real time. In addition the emulator provides statistical information on memory usage, disk I/O, performance, etc. Using the KC Emulator, the user can actually "see" how the data is retrieved from the multiple systems, change KFs dependencies, identify bottlenecks, compare retrieval time between different systems, etc.
  • An important functionality of the Emulator is the ability to apply Color Filters, that is, the ability to color a KF in a different color depending on a certain threshold.
  • For example, a color filter can color red all KFs that exceed 2 seconds in retrieving information. Using this color filter, the user can easily identify bottlenecks.
  • the Business Process Builder 314 is the application used to create all business rules and business processes by graphically creating a Governing Script (gScript). Using the builder, a user can create processes that include KCs, content type rules, perform actions, push information to hand-held devices, etc.
  • the Business Process Builder 314 includes the following functions:
  • a gScript Builder is used in creating business rules and business processes using a GUI. Every gScript can be based on KC, predefined actions (like sending e-mail, executing applications, etc), writing to other applications, broadcasting to users (push), etc.
  • the Builder implements a flowchart look-and-feel mechanism that allows the creation of these rules and processes.
  • although the gScript Builder shares some of the KC Builder's functionality, the gScript Builder has a much richer language. Table 1 includes some of the functions and commands the language has. Before the script is saved, it is translated into an XML document. The syntax of the document is checked by the syntax validation process.
  • FIG. 42 displays an exemplary layout of the gScript Builder.
  • the Builder is divided into three main sections: the Toolbox 422, the Working Area (gScript View) 424, and the Available KCs 426.
  • the Toolbox 422 includes some of the available functions (Table 1) that can be used during the creation of the gScript.
  • the Available KCs 426 area is a collection of the KCs needed in the gScript. This collection is maintained by the user and additional KCs can be added at any time.
  • Every KC is by default Private to the gScript being used, which means that the KC's values can't be viewed by any other object using the gScript (like another gScript or an iFormat). Since all KCs are private by default, the default output of a gScript is nothing. If the information stored in a KC needs to be exposed, its mode can be changed to Public. Having KCs as private allows encapsulation of information that is needed for business rules only and can't be exposed in public. When one or more KCs are public, the output of the gScript is a collection of these public KCs. This collection can also be converted to one KC containing all other KCs if needed.
  • a KC can support content persistence, which basically caches the returned information in the database and returns it instead of returning the information from the data source.
  • Persistency is defined as a time frame in which the data is not refreshed. This time ranges from seconds to days. The persistency can be overridden by the gScript if needed, providing an even wider range of possibilities.
  • persisted KCs in a gScript are local to that specific gScript and can't be shared by other gScripts.
  • For example, a complicated gScript may contain the CNN News KC as one of its KCs. This KC is persisted in a 12-hour time frame, that is, it refreshes itself at 12:00 AM and 12:00 PM. However, when a relevant event triggers, the KC needs to be refreshed regardless.
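The persistency behavior above can be sketched as a time-windowed cache with a gScript-initiated override. Class and method names are illustrative assumptions:

```python
# Sketch of KC content persistence: cached data is returned until the
# persistency window expires; a gScript can force an early refresh.

class PersistedKC:
    def __init__(self, fetch, persistency_seconds):
        self.fetch = fetch
        self.ttl = persistency_seconds
        self._cached = None
        self._fetched_at = None

    def get(self, now, force_refresh=False):
        expired = (self._fetched_at is None
                   or now - self._fetched_at >= self.ttl)
        if force_refresh or expired:
            self._cached = self.fetch()     # hit the data source
            self._fetched_at = now
        return self._cached                 # otherwise serve from cache

calls = []
def fetch_news():
    calls.append(1)
    return f"headlines #{len(calls)}"

# 12-hour persistency window (43200 seconds), as in the CNN News example:
news = PersistedKC(fetch_news, 12 * 3600)
a = news.get(now=0)                        # first call hits the data source
b = news.get(now=3600)                     # within the window: cached
c = news.get(now=7200, force_refresh=True) # event-triggered override
```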
  • a global KC is one that belongs to a special type of gScript, a System gScript.
  • a System gScript is a gScript that doesn't belong to any group, but that any group (or user) can potentially have access to. All System gScripts start immediately or when the system first starts. Every System gScript has its own security settings and therefore can't be exposed without the right permissions.
  • to access a Global KC, a Linked KC has to be created. This KC is an "image" of the Global KC and it can display the content of the Global KC, change it, or send commands to refresh it.
  • the gScript Analyzer allows stepping through the script and viewing the execution path while outlining the actual path.
  • the Analyzer also allows displaying the content of every KC in the script (KC Zoom).
  • FIG. 44 shows an exemplary layout for a gScript Analyzer.
  • the Analyzer also allows running just part of the gScript by selecting a different starting point, as shown in FIG. 45A.
  • FIG. 45B shows a case where the first record generated a path that causes an action (Action A).
  • FIG. 45C shows that the next record generated a different path (the result of the IF...THEN...ELSE was Yes this time), which caused saving some information in the database.
  • the gScript Emulator 314b of FIG. 31 shares the same functionality as the KC Emulator.
  • the Event Mapping 314c allows the mapping of events (both for the System and other systems) to one or more gScripts.
  • the Publisher application 315 is used to create UI screens for Palm Pilot™ PDAs and WAP-enabled phones.
  • the user can create HTML, WML, or HDML forms for multiple KCs using a simple drag-and-drop user interface.
  • the user doesn't need to know HTML, WML, or HDML in order to use the application.
  • After the user finishes designing the appropriate forms, the Publisher creates XSL templates, or Intelligent Templates (iTemplates), which generate the necessary HTML, WML, or HDML once executed by the Broadcast Server or the Presentation Server. Those templates are saved in the system's repository.
  • FIGS. 46A and 46B illustrate exemplary Palm PilotTM and WAP Publisher - layout.
  • the layout of the publishers is simple and resembles a report generator (like Crystal Reports™), in which fields, labels, lines, and other objects are dropped on a design sheet.
  • the design sheet is a Palm Pilot or phone.
  • the system can support more than one iTemplate for every KC. When saving, the user specifies to which user or group of users this iTemplate applies. This allows every user or group of users to have their own look and feel.
  • the publisher allows creating links (anchors) to other forms, which allows creating multiple forms for one or more KCs. This, however, raises an important issue of breaking up a gScript into multiple sub scripts.
  • FIG. 47 provides an example for a situation where a gScript needs to be segmented.
  • a gScript was created to provide contact information that includes the following KCs: contact information, driving directions, and news about the contact's company. Since the gScript describes a business process or a business entity, it should not change depending on the user interface or device used to view the information. However, a WAP phone does not have enough screen "real estate" to actually display the information.
  • the information is divided into three separate pages.
  • the first page is the contact information.
  • This page includes two links to the next two pages, the news pages and the driving directions page. Based on this, the user might never view the driving directions page or the news page, although the KCs containing the information were already loaded.
  • any gScript can be segmented into one or more sub scripts. This allows just-in-time (JIT) execution of the sub scripts, providing faster response time and less use of memory. This segmentation is not part of the gScript definition and does not change the gScript itself (it is not visible in the gScript Builder).
  • FIG. 48 depicts an example of segmentation of the previous example.
  • the "Master" gScript is segmented into three sub scripts: A1, A2, and A3. This way, each sub script executes only upon request. This functionality is possible since every sub script automatically defines what parameters are expected based on the original gScript.
  • sub script A2 requires address fields and sub script A3 requires the company name. Every gScript can have multiple segmentation plans that can be used by different devices. This is powerful since all the business logic is located in one gScript while all formatting is done outside. Any change to the gScript is automatically trickled down to all sub scripts, and therefore to every device or format.
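The segmentation described above can be sketched as a segment table whose expected parameters are derived automatically from the fields each sub script consumes. All names and the table layout are illustrative assumptions:

```python
# Sketch of gScript segmentation: the "Master" script is split into sub
# scripts A1-A3; each sub script's expected parameters are the fields
# that earlier segments would normally have provided.

SEGMENTS = {
    "A1": {"kcs": ["contact-info"], "consumes": []},
    "A2": {"kcs": ["driving-directions"], "consumes": ["street", "city", "zip"]},
    "A3": {"kcs": ["company-news"], "consumes": ["company_name"]},
}

def expected_parameters(segment):
    """A sub script's parameters are exactly the fields it consumes."""
    return SEGMENTS[segment]["consumes"]

def run_segment(segment, params):
    """Just-in-time execution: run one sub script only when requested."""
    missing = [p for p in expected_parameters(segment) if p not in params]
    if missing:
        raise ValueError(f"segment {segment} missing parameters: {missing}")
    return f"ran {segment} with {sorted(params)}"
```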
  • the Expression Builder 316 of FIG. 31 is the element used to build formatting functions such as the ones used in schema mapping, parameters definitions, and gScripts.
  • the Expression Builder 316 provides two levels of complexity: Simple and Advanced. After the expression is completed, the Builder checks the syntax for grammatical errors. The Expression Builder then translates the expression to an XML document and returns it to the calling application.
  • the Expression Builder includes the following functions:
  • the Advanced Expression Builder is a window containing four areas: elements tree, element description, expression area, and available KC(s).
  • the elements tree contains the system functions and system constants.
  • the element description area provides a description about the selected function or constant.
  • the expression builder area is the place where the user inputs the expression, and the available KC(s) area includes the calling application parameters (like iJob or gScript parameters).
  • the Simple Expression Builder provides a much simpler interface for building simple expressions.
  • the user of this builder doesn't need to know how to build expressions, but gets a simple screen with Name/Operator/Value combinations to fill in.
  • This builder can be used by less technical people to apply or change business rules.
  • an XML interpreter translates the expression string into an XML document.
  • a Syntax validation function validates the syntax of the expression string.
  • a 3rd-party component is used to check the validity of the expression.
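The validation and XML-interpretation steps above can be sketched for the simple Name/Operator/Value case. The expression grammar and the XML layout are illustrative assumptions, not the system's actual formats:

```python
import re
import xml.etree.ElementTree as ET

# Sketch of the Simple Expression Builder pipeline: validate a
# Name/Operator/Value expression string, then translate it to XML.

EXPR_RE = re.compile(r"^\s*(\w+)\s*(>=|<=|<>|=|<|>)\s*(\S+)\s*$")

def validate(expr):
    """Syntax validation: True if the string is Name Operator Value."""
    return EXPR_RE.match(expr) is not None

def to_xml(expr):
    """XML interpreter: translate the expression string to an XML document."""
    m = EXPR_RE.match(expr)
    if m is None:
        raise ValueError(f"invalid expression: {expr!r}")
    name, op, value = m.groups()
    root = ET.Element("expression")
    ET.SubElement(root, "name").text = name
    ET.SubElement(root, "operator").text = op
    ET.SubElement(root, "value").text = value
    return ET.tostring(root, encoding="unicode")

xml_doc = to_xml("Quantity >= 10")
```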
  • the administration applications are a collection of applications that allow the maintenance, configuration, and monitoring of the system. Preferably, these applications are only used by the system administrator(s).
  • System configuration application is used to maintain all aspects of the system, like adding users, groups, preferences, etc.
  • the application also provides a view on the current elements installed and running in the system, like services, load balancing, etc.
  • the application layout has two main parts: the Information Tree, which includes some categories and sub categories, and the Information Panel, which displays detailed info on the selected item in the tree. Included functions are: Configuration:
  • FIG. 49 is an exemplary block diagram depicting configuration components and their relationship.
  • User configuration 4902 allows adding, editing, and removing users from the system.
  • User information includes personal information, contact information, etc. By default, all user information is stored in the local database, however, the system supports other repositories and directories like NT, LDAP, Active Directory and others.
  • an Information Panel displays the list of users. Listed below is other information available from this point: • Groups the user belongs to
  • This list doesn't include KCs that are part of gScripts this user can execute.
  • Group configuration 4906 allows adding, editing, and removing groups from the system.
  • Group information includes name, description, etc. Listed are other information available:
  • KCs the group can or can't view or use in designing gScripts. This list typically doesn't include KCs that are part of gScripts this group can execute.
  • KC categories allow adding, editing, and removing of KCs categories.
  • a category includes name, description, etc.
  • KC categories are just an easy way to categorize KCs and are not an entity of the system.
  • iJobs/KCs configuration 4910 allows adding, editing, and removing of KCs by launching the KC Builder application.
  • the configuration includes name, description, private/public flag, persistency information, dependency information, etc. This functionality also allows enabling or disabling the KC. Typically, a disabled KC does not run the associated iJob and therefore does not return any information. Listed are other information available:
  • gScripts configuration 4912 allows adding, editing, and removing of gScripts by launching the gScript Builder application.
  • the configuration includes name, description, private/public flag, persistency information, dependency information, etc. This functionality also allows enabling or disabling the gScript. Typically, a disabled gScript does not run and therefore does not return any information. Listed is other information available:
  • iFormats configuration 4914 allows adding, editing, and removing of iFormats by launching the Publisher application.
  • the configuration includes name, description, etc. This functionality also allows enabling or disabling the iFormat. A disabled iFormat is not accessible to use.
  • DCI configuration 4916 allows adding, editing, removing, enabling, and disabling of DCIs in the system.
  • the configuration includes name, description, type, etc. Since every DCI can have more than one associated schema, information such as Schemas Information is also available.
  • Load Balancing configuration 4922 allows configuration of the load balancing mechanism.
  • Fail over configuration 4924 allows configuration of the fail over mechanism.
  • Server status and configuration function allows adding, editing, removing, enabling, disabling, and pausing of servers in the system.
  • the configuration includes name, description, type, etc. Other information, such as Services/Managers for managers running on the server, may be used to enable or disable a manager.
  • a Managers status and configuration function allows adding, editing, removing, enabling, disabling, and pausing of managers in the system.
  • the configuration includes name, description, type, etc. Listed below is other information available:
  • Agents running under the manager. Allows enabling/disabling an agent.
  • Agent Processes - all agents' processes. Allows killing a process.
  • An Actions status and configuration function allows adding, editing, removing, enabling, and disabling of actions in the system.
  • the configuration includes name, description, type, etc. Other available information includes Action Processes.
  • a System tracing application allows the tracing of events and messages routed in the system including messages containing specific information. Using this tool a user can trace every piece of information in the system and outside.
  • the tracing mechanism itself is a COM component that can be included in other applications, like the System Monitoring.
  • the trace supports capturing the information to a file or a database table. It includes the following functions:
  • Trace Console provides the ability to initiate multiple trace windows without the need to log into the system when creating a new trace window.
  • the Trace Console is an MDI window that can host multiple trace windows and acts as the tracing framework. When starting the trace client, a login window appears, which requires the user to log into the system.
  • Trace Console provides additional functions:
  • Get a list of all saved traces (public and private)
  • Load a specific trace
  • Trace Information is: trace name, trace description, trace server name, trace server port number, capture file path, messages to trace (this is a collection), color and font of each message, included elements in each message, other stand alone elements and corresponding values, Private/Public flag.
  • the trace information window has a SAVE and an OK button. Once the user hits SAVE or OK, the trace information is saved to the database. Pressing the OK button starts the trace.
  • the Trace Window is actually divided into two parts:
  • Trace Window - the hosting window that displays the traced information in a grid format with the appropriate font/color.
  • the window also provides additional functionality like Print, Clear, Copy, etc.
  • the Trace Window is an ActiveX control.
  • Trace Client - a COM component that communicates with the trace server and retrieves the requested information. This component can work without the trace window.
  • the Trace Window provides a framework for the trace information.
  • the traced messages are displayed in the main pane of the window in a grid format.
  • the grid itself is fully customizable and can be changed by the user.
  • the window itself is implemented as an ActiveX control, which allows including it in other applications (like the System Monitor).
  • the Trace Window supports the following functionality:
  • a System Monitoring application allows monitoring the status of servers, services, managers, and agents in the system.
  • the application provides a GUI in multiple levels, starting from the servers level and ending in a single Agent view.
  • the application provides detailed information regarding every monitored component in the system, such as: memory usage, I/O, CPU usage, number of processes, number of threads, etc.
  • the System Monitoring application includes: a Monitoring Console, and a Monitoring Window.
  • a Monitoring console is provided as an MDI window that can host multiple monitoring windows and acts as the monitoring framework. When starting the Console, a login window appears, which requires the user to log into the system.
  • the Monitoring Console is similar to the Trace Console.
  • a Monitoring Window provides a graphical representation of the status of the system or parts of the system. It provides a clear and detailed picture of the status of the system and allows applying color masks. The monitor provides four levels of status:
  • zoom in is achieved by double clicking a component, which in turn is displayed at the center of the screen with all other connected modules and components around it.
  • Zoom out is accomplished by double clicking the component immediately below the centered component.
  • the monitor provides a wide variety of statistical information that can be selected for monitoring. Each piece of information can be colored differently and is displayed as a bar. The user can then define different thresholds by which the status changes from Normal (green) to Warning (yellow) to Critical (red). In addition, the user can apply color masks on the main tracing area, that is, define threshold filters that cause the displayed components to be colored appropriately.
  • An example for a mask can be: color red all servers with CPU usage more than 70%.
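As an illustrative sketch (not part of the patent's disclosure), the threshold and color-mask behavior described above could be expressed as follows; the function names, metric names, and threshold values are assumptions:

```python
# Hypothetical sketch of the threshold/color-mask logic described above.
# Threshold values and server names are illustrative, not from the patent.

def status_color(value, warning, critical):
    """Map a metric value to a status color using two user-defined thresholds."""
    if value >= critical:
        return "red"      # Critical
    if value >= warning:
        return "yellow"   # Warning
    return "green"        # Normal

def apply_mask(servers, metric, warning, critical):
    """Color every server according to one metric, e.g. CPU usage."""
    return {name: status_color(stats[metric], warning, critical)
            for name, stats in servers.items()}

servers = {
    "srv-a": {"cpu": 45, "mem": 60},
    "srv-b": {"cpu": 82, "mem": 30},  # exceeds the example 70% CPU mask
}
colors = apply_mask(servers, "cpu", warning=70, critical=80)
```

With these example thresholds, srv-b is colored red because its CPU usage exceeds the critical threshold, matching the "color red all servers with CPU usage more than 70%" mask above.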

Abstract

The present invention describes an Enterprise middleware software system that offers mass customization ability via intelligent governing using both event driven and content driven 'Smart Rule based' architecture. The system focuses on the end user while offering real time bi-directional delivery of content (knowledge) to multiple output devices including wireless and cellular devices via proprietary push technology. In one aspect of the invention is a system for intelligent assembling of information from a plurality of data sources comprising: a knowledge processing module (70) for intelligently assembling information into a plurality of knowledge containers; a rules processing module (71) for evaluating and executing a plurality of rules; an information processing module (72) for interfacing with the plurality of data sources and interacting with a second system; an action module (73) for invoking actions in the second system; a presentation module (74) for outputting and formatting the plurality of knowledge containers to a respective connected device; and a broadcast module (75) for communicating with a disconnected device.

Description

KNOWLEDGE GOVERNING SYSTEM AND METHOD
CROSS-REFERENCE TO RELATED APPLICATIONS
This patent application claims the benefit of the filing date of United States Provisional Application Serial Nos. 60/226,738, filed August 21, 2000 and entitled "KNOWLEDGE GOVERNING SYSTEM;" and 60/226,963, filed August 22, 2000 and entitled "KNOWLEDGE GOVERNING SYSTEM;" the entire contents of which are hereby expressly incorporated by reference.
FIELD OF THE INVENTION
The present invention relates to computer software for business solutions. More specifically, the invention relates to a method and system capable of collaborative, intelligent assembling of data, information, and content from structured and unstructured data sources.
BACKGROUND OF THE INVENTION
The growth of information processing solutions available to a business enterprise has proven advantageous to most modern enterprises, providing an opportunity to apply the benefits of computer processing technology to most areas of the enterprise and accordingly to better serve customers in a more efficient manner. However, the value provided by these numerous advances in information technology (IT) has come at a cost, specifically the burden of managing the many disparate IT solutions that have been integrated into different areas of the typical business enterprise.
For a typical organization in today's corporate world, the number and types of application programs have grown exponentially into a collection of ad hoc application integration programs. Corporations have many diverse new and old or existing applications that have been developed as stand-alone programs. For example, IT systems may have been written by or for the marketing department, accounting department, or the receiving department. These systems often have different designs and user interfaces, different applications, and run on different computer platforms. Furthermore, these systems often store data in many different types of databases.
Thus, these stand-alone application programs may require different user interfaces for each function, the use of multiple network navigation functions, multiple menu systems, and specific knowledge of each application in order to know when and how to use them. This results in lost opportunity, reinvention and rework, and unproductive time spent searching for data and other information about the enterprise and its human, tangible, and intangible resources and assets. Because of the difficult nature of these problems, an effective enterprise application integration (EAI) solution has yet to be found. Moreover, the emergence of the Internet, client/server computing, corporate mergers and acquisitions, globalization and business process re-engineering, have forced corporate IT departments to constantly look for new, and often manual, ways to make different systems talk to each other. These recent trends in IT have increased the amount of inter-application interfacing needed to support them. Most recently, enterprise applications have performed such functions as data warehousing and enterprise resource planning (ERP), and facilitated electronic commerce.
Typical ERP systems are essentially large, integrated packaged applications that support core business functions, such as manufacturing, payroll, general ledger, marketing, sales, and human resources. However, today's ERP systems cannot replace all of a corporation's custom solutions. They must, therefore, communicate effectively with other legacy systems in place. Moreover, it is not atypical for a corporation to employ more than one completely different ERP system because a single vendor cannot usually meet every organizational need.
As a result, the options for getting data into, and out of, an ERP system preclude known approaches used for data warehousing. Each ERP system has a proprietary data model that is constantly being enhanced by its vendor. Developing extract or load routines that manipulate such models is complicated and is discouraged by the vendor, since data validation and business rules inherent in the enterprise application are likely to be bypassed. Instead, ERPs require interaction at the business object level, which deals with specific business entities such as general ledgers, budgets, or accounts payable.
Consequently, there is a need for a method and system that permits use of the numerous applications and data, including data sources and platforms that are disparate. There is also a need for a method and system that provides users with the ability to interface numerous intelligent and non-intelligent interfaces at the enterprise level and execute different applications on different platforms. There is a further need for a method and system that is capable of collaborative, intelligent assembling of data, information, and content from structured and unstructured sources across a myriad of corporate applications, as well as the World Wide Web, and produces the knowledge necessary to govern business decision making.
SUMMARY OF THE INVENTION
The present invention describes an Enterprise middleware software system (tool) that offers mass customization ability via intelligent governing using both event driven and content driven "Smart Rule based" architecture. The system focuses on the end user while offering real time bi-directional delivery of content (knowledge) to multiple output devices including wireless and cellular devices via proprietary push technology. In one aspect, the invention is a system for intelligent assembling of information from a plurality of data sources comprising: a knowledge processing module for intelligently assembling information into a plurality of knowledge containers; a rules processing module for evaluating and executing a plurality of rules; an information processing module for interfacing with the plurality of data sources and interacting with a second system; an action module for invoking actions in the second system; a presentation module for outputting and formatting the plurality of knowledge containers to a respective connected device; and a broadcast module for communicating with a disconnected device.
In another aspect, the invention is a method for intelligently assembling information from a plurality of data sources comprising the steps of: interfacing with the plurality of data sources; interacting with an external system; evaluating and executing a plurality of rules; invoking actions in the external system responsive to evaluating and executing a rule; intelligently assembling information into a plurality of knowledge containers; and outputting the plurality of knowledge containers to an external device.
BRIEF DESCRIPTION OF THE DRAWINGS
The objects, advantages and features of this invention will become more apparent from a consideration of the following detailed description and the drawings, in which:
FIG. 1 is an exemplary block diagram of a Knowledge Governing Architecture, according to one embodiment of the present invention;
FIG. 2 is an exemplary three layers architecture of each module, according to one embodiment of the present invention;
FIG. 3 is an exemplary primary and secondary MR/C configuration, according to one embodiment of the present invention;
FIG. 4 is an exemplary multiple networks support overview;
FIG. 5 is an exemplary multiple network support detailed view;
FIG. 6 is an exemplary monitoring overview;
FIG. 7 is an exemplary architectural block diagram according to one embodiment of the present invention;
FIG. 8 is an exemplary primary MR/C fail over for primary MR/C restart;
FIG. 9 is an exemplary primary MR/C fail over for secondary MR/C process;
FIG. 10 is an exemplary primary MR/C fail over for manager process;
FIG. 11 is an exemplary block diagram for communication module interaction;
FIG. 12 is an exemplary process flow for communication module sending and receiving messages;
FIG. 13 is an exemplary architectural block diagram for a communication module;
FIG. 14 is an exemplary block diagram for a security gateway;
FIG. 15 is an exemplary block diagram for access points;
FIG. 16 is an exemplary block diagram for communication gateway (proxy mode);
FIG. 17 is an exemplary block diagram for gateway authentication mode;
FIG. 18 is an exemplary process flow for privilege check process;
FIG. 19 is an exemplary architectural block diagram for rules processing module;
FIG. 20 is an exemplary process flow diagram for gScript processing;
FIG. 21 is an exemplary process flow diagram for event rule processing;
FIG. 22 is an exemplary architectural block diagram for knowledge processing module architecture;
FIG. 23 is an exemplary iJob script logical presentation;
FIG. 24 is an exemplary architectural block diagram for information processing module;
FIG. 25 is an exemplary process flow for DCI module process;
FIG. 26 is an exemplary architectural block diagram for action module;
FIG. 27 is an exemplary process flow for broadcast server pull process;
FIG. 28 is an exemplary process flow for broadcast server push process;
FIG. 29 is an exemplary process flow for a request by a presentation module to run a gScript and format its results;
FIG. 30 is an exemplary process flow for format processing;
FIG. 31 is an exemplary block diagram of a UI overview, according to one embodiment of the present invention;
FIG. 32 is an exemplary Design Studio console;
FIG. 33 displays an exemplary mapping format;
FIG. 34 is an example of mapping a Customer business object to a Customer XML schema;
FIG. 35 is an example for defining a parameter;
FIG. 36 is an exemplary UI for a KC builder;
FIG. 37 is an example for a DCI to DCI parameter mapping;
FIGS. 38A-38F are exemplary user interface representations for various examples;
FIG. 39 is an exemplary layout of a KC Analyzer;
FIG. 40 is an example of using a KC Analyzer to view the connection of two KFs;
FIG. 41 is an exemplary layout for a KC Emulator;
FIG. 42 is an exemplary layout of a gScript Builder;
FIG. 43 describes one solution using Global and Linked KCs;
FIG. 44 is an exemplary layout for a gScript Analyzer;
FIGS. 45A-45C are exemplary layouts for various examples using the gScript Analyzer;
FIGS. 46A and 46B are exemplary Palm Pilot™ and WAP Publisher layouts;
FIG. 47 is an example where a gScript is segmented;
FIG. 48 is an example of the gScript segmentation of FIG. 47; and
FIG. 49 is an exemplary block diagram depicting configuration components and their relationship.
DETAILED DESCRIPTION
The present invention allows the collaborative, intelligent assembling of information, data, and content from structured and unstructured sources across a myriad of corporate applications, as well as the World Wide Web, and produces the knowledge necessary to govern business decision-making.
FIG. 1 is an exemplary block diagram of a Knowledge Governing Architecture, according to one embodiment of the present invention. The present invention offers a new breed of enterprise middleware application that defines a Knowledge Governing System (KGS™) 10; a Knowledge Governing Architecture (KGA™) 11 as the architectural backbone; and a Knowledge to User (K2U™) as the workflow and audience definition. The system is capable of integrating corporate infrastructure 12 including front-office systems (CRM) 12a, back-office systems (ERP) 12b, Knowledge Management, Decision Support, legacy systems 12c, portal technologies, the web 12d, email systems 12e, and any other corporate and non-corporate systems. The system is also capable of interfacing to business-to-business 13a (B2B), business-to-consumer 13b (B2C), and EAI 13c systems. The system intelligently draws information from all of these sources, assembles the information to produce knowledge, and then delivers the information, on a need basis, allowing users at different locations to change the information that created the knowledge or to save it back in the system.
The power of the system is derived from its ability to enable any organization to produce knowledge from mere information by concentrating on content rather than on processes and systems. Knowledge is essential in creating and maintaining loyal customers, outselling competitors, and marketing more effectively, and is a significant competitive tool. The present invention provides the facility to produce this knowledge from data, information, and content. In addition, the invention makes this knowledge available at the right time via both push and pull technologies. The system of the present invention also provides the ability to feed information back to the myriad of systems that originally produced the knowledge. This knowledge is available to the users 14a via LAN/WAN portals 14 and to any supported wireless device 15a via a wireless portal 15.
The invention provides a comprehensive infrastructure that allows a complete enterprise solution. The system is modular, based on open architecture, and accommodates rapidly evolving technologies without the need to repeatedly update the product. FIG. 7 shows the logical layers of the system. As shown, the system includes the following main modules: Knowledge Processing Module (KPM) 70, Rules Processing Module (RM) 71, Information Processing Module (IM) 72, Action Module (AM) 73, Presentation Module (PM) 74, Broadcast Module (BM) 75, and a database server 76.
Every one of these modules can be located on one or more physical servers (to provide a high level of transactions) and communicates with a Message Router (processor) 77 (MR/C). The MR/C is used to navigate messages between the different modules, provides load-balancing capability, and has a fail over mechanism. The rest of the communication is done within each module.
Each module is a three-layer architecture including a Message Router/Controller, a Manager, and an Agent. FIG. 2 is an exemplary block diagram of a three-layer architecture. MR/C (communication processor) 22 communicates with every Manager 24 in the system, receives task 28 requests from one manager, and forwards them to one or more of the other managers connected to it. Manager 24 then instantiates a new Agent 26 based on the request and its own availability. Agent 26 is the actual entity that has the knowledge on how to carry out the task. When the Agent is done performing the task, it sends a message back to the Manager 24 with the returned data (if any). Based on predefined parameters, the Manager can save the returned information to the database or can keep it in memory. The Manager then sends a response back to the requestor through the MR/C 22 with the returned information, or with the information id if the information was saved to the database 30.
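The task flow just described (MR/C to Manager to Agent and back) can be sketched roughly as follows; the class names, the persistence threshold, and the result format are illustrative assumptions, not part of the patent's disclosure:

```python
# Minimal sketch of the Manager/Agent task flow described above.
# Names and thresholds are illustrative assumptions.

class Agent:
    """The entity that knows how to carry out a task."""
    def run(self, task):
        return f"done:{task}"   # placeholder for the real work

class Manager:
    def __init__(self, persist_threshold=1024):
        self.persist_threshold = persist_threshold
        self.database = {}      # stands in for the database (30 in FIG. 2)

    def handle_request(self, task_id, task):
        agent = Agent()                        # instantiate a new Agent for the request
        data = agent.run(task)                 # Agent performs the task, returns data
        if len(data) > self.persist_threshold:
            self.database[task_id] = data      # save large results to the database
            return {"task_id": task_id, "info_id": task_id}  # respond with the id only
        return {"task_id": task_id, "data": data}  # small results returned in memory

manager = Manager()
response = manager.handle_request(7, "extract-customer-info")
```

The response either carries the data itself or, for results persisted to the database, only an information id, mirroring the two response modes described above.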
MR/C 22 is a service that is responsible for message routing between all managers in the system. Since the MR/C communicates with all managers of the system, it also balances the load of the system by equally directing the messages based on manager type and load. This service can run on the same physical server as one of the managers, or it could run on a different physical server. The system can support multiple MR/Cs in two configurations: fail over backup, and multiple networks - MR/C bridge.
Communications between applications is based on messages. An application begins by starting a session, informing the server of what messages it wishes to receive, and sending messages as appropriate. When another application sends one of the messages that an application has requested, that message will be delivered to it asynchronously. This model allows an originating application to send messages without possessing information about the recipient, and allows recipients to act on an event-based model, rather than having to explicitly know what message will be received when. The MR/C serves as a means of communication between applications, whether they reside on the same machine, on another machine connected on a LAN, or on a machine connected over a WAN. In addition, the MR/C can perform load-balancing and recovery duties if more than one application of the same type is connected and requests these features. In one embodiment, the MR/C includes the following functions:
Connectivity - Clients can connect to the server using either TCP/IP, a shared memory architecture, or other connectivity protocols. In one embodiment, the physical communications layer is implemented in a component, allowing for easy extensibility to other network architectures as needed.
Reliability - The system supports multiple servers that share the workload and can take over for one another in the case of a system failure. In addition, the communications layer is designed to be robust and transparently corrects transmission errors, resending data as necessary. An originating application will not consider a send operation as complete, until all parties to which the message is being delivered have acknowledged receipt of the message.
Security - The system implements both access control and encryption.
Access Control - All applications connected to the system are required to be authenticated, and the system can be configured to provide different levels of service to applications depending on what permissions are assigned to them.
Encryption - All data being transmitted over a network or stored where it may be accessed is encrypted to prevent interception.
Performance - Message throughput and scalability are maximized by such measures as server load balancing, compression, and the extraction of message content into alternate means of transport.
Development - Client applications interact through an open API.
The system is designed along the concept of a distributed client-server architecture, with one or possibly more servers, and various clients. From the point of view of the client application, the client component is the only point of interface with the system; that component handles internally all the details of connecting to the various servers that might be present, and the server or servers handle the details of routing messages from one client to another.
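The session and subscription model described above can be sketched as follows; the class and method names are assumptions for illustration, and delivery is shown synchronously for brevity where the real system delivers asynchronously:

```python
# Hedged sketch of the subscription-based messaging model described above.
# The router delivers a message to every session that subscribed to its type;
# the sender never names its recipients. Names are illustrative assumptions.

from collections import defaultdict

class MessageRouter:
    def __init__(self):
        self.subscriptions = defaultdict(list)   # message type -> subscribed sessions

    def start_session(self, session, wanted_types):
        """An application begins by declaring which messages it wants to receive."""
        for mtype in wanted_types:
            self.subscriptions[mtype].append(session)

    def send(self, mtype, payload):
        """Deliver to all subscribers; the sender holds no recipient information."""
        for session in self.subscriptions.get(mtype, []):
            session.deliver(mtype, payload)

class Session:
    def __init__(self, name):
        self.name, self.inbox = name, []
    def deliver(self, mtype, payload):
        self.inbox.append((mtype, payload))      # asynchronous in the real system

router = MessageRouter()
kpm = Session("KPM")
router.start_session(kpm, ["run_ijob"])          # subscribe to one message type
router.send("run_ijob", {"ijob": "assemble-kc"})
```

A sender that emits a message type with no subscribers simply reaches no one, which matches the event-based model above: recipients react to what arrives rather than knowing in advance what will be sent.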
In order to prevent system failure in case of an MR/C failure, the architecture can support multiple MR/Cs in the system, where the rest act as a backup to the primary. An example of a system configuration is depicted in FIG. 3. Every manager 37-39 in the system is connected to the primary MR/C 34 running on server 34, while the secondary MR/C 36 is running on server 36. The primary MR/C 34 sends every new connection and disconnection to the secondary MR/C 36, as well as every message it receives. This way, the secondary MR/C 36 keeps an exact copy of the information the primary MR/C 34 has. In essence, the secondary MR/C 36 performs the same work the primary MR/C 34 does, without actually sending out messages or having managers connected to it.
Additionally, the system can support multiple MR/Cs over multiple networks or sub-networks, as shown in FIG. 4. The system can also work over one network in order to split the load between multiple MR/Cs. However, this configuration is best applicable for a multiple-network enterprise, especially one with different types of networks, or a network with a few sub-networks. In the case of multiple sub-networks, this configuration is beneficial only if applying multiple MR/Cs reduces the network traffic between the sub-networks.
As shown in FIG. 4, when there are multiple (primary) MR/Cs 40a-40d in the system, every primary MR/C communicates with all other primary MR/Cs in the system. Since every message can potentially "cross" multiple MR/Cs, every MR/C has a complete copy of the others' subscribed message types. An exemplary block diagram for a multiple network support configuration is shown in FIG. 5. The primary MR/Cs 52a and 52b reside on respective networks 50a and 50b. Each of the primary MR/Cs 52a and 52b includes a secondary MR/C 54a and 54b, respectively, and is connected to its respective managers.
Preferably, there is no load balancing between different networks and all messages are routed internally within the network if possible. For example, if a gScript requires an Intelligent Job (iJob) (a script that describes where and how to extract information) that can be handled by a connected KPM as well as a remote KPM, the remote KPM does not receive the message unless one of following occurs:
A remote manager receives a message where there is no local subscription to that message.
No local manager can accept any more requests. In this case, the MR/Cs communicate, trying to find a manager that can execute the task.
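A rough sketch of the local-first routing rule described by the two conditions above; the data structures, field names, and the least-loaded tie-break are illustrative assumptions:

```python
# Illustrative sketch of the local-first routing rule above: a message stays
# inside its own network unless no local manager subscribes to it, or no
# local manager can accept more work. Structure and names are assumptions.

def route(message_type, local_managers, remote_managers):
    local = [m for m in local_managers if message_type in m["subscribes"]]
    if local:
        available = [m for m in local if m["load"] < m["capacity"]]
        if available:
            return min(available, key=lambda m: m["load"])  # least-loaded local manager
        # All local subscribers are saturated: fall through and ask remote MR/Cs.
    candidates = [m for m in remote_managers if message_type in m["subscribes"]]
    available = [m for m in candidates if m["load"] < m["capacity"]]
    return min(available, key=lambda m: m["load"]) if available else None

local = [{"name": "kpm1", "subscribes": {"run_ijob"}, "load": 2, "capacity": 4}]
remote = [{"name": "kpm2", "subscribes": {"run_ijob"}, "load": 0, "capacity": 4}]
chosen = route("run_ijob", local, remote)
```

Here kpm1 is chosen even though the remote kpm2 is less loaded, reflecting the preference above for routing messages internally within a network whenever possible.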
For the system to work properly, every component of the system needs to know the status of all other components being managed by it. For that purpose, the system provides a monitoring system that tracks and logs all errors within the system. FIG. 6 is an exemplary configuration for a monitor service. As depicted in FIG. 6, monitor service 60 communicates with the primary MR/C 63 and receives status messages of warnings or errors within the system.
MR/C 63 sends periodic heartbeat messages to all managers connected to it. These messages are then returned back to MR/C 63 with a simple status report including current load (based on predefined parameters). In the same manner as MR/C 63, manager 65 sends periodic heartbeat messages to all connected agents 66, which return activity status as well as load. In case of failure in one of the agents 66 or managers 65, a message is sent to the monitor service 60 by the manager 65 or the MR/C. Monitor service 60 then logs the error in the database and/or displays it on the monitoring console 61. Each primary MR/C 63 may include a secondary MR/C 64.
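The heartbeat cycle described above might be sketched as follows; the function names and the shape of the status report are assumptions for illustration:

```python
# A minimal sketch of the heartbeat cycle described above. Names and the
# shape of the status report are illustrative assumptions, not from the patent.

def failing_manager():
    raise RuntimeError("not responding")   # simulates a failed manager

def heartbeat_cycle(managers, monitor_log):
    """Ping every manager; collect status reports, report failures to the monitor."""
    statuses = {}
    for name, ping in managers.items():
        try:
            statuses[name] = ping()        # manager answers with its current load
        except Exception as exc:
            monitor_log.append((name, str(exc)))  # failure is sent to the monitor service
    return statuses

managers = {
    "rules_mgr": lambda: {"load": 0.25},   # healthy manager reports its load
    "dci_mgr": failing_manager,            # failed manager does not respond
}
monitor_log = []
statuses = heartbeat_cycle(managers, monitor_log)
```

Healthy components answer with their load, and any non-responder ends up in the monitor log, mirroring the error path to monitor service 60 described above.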
In case of failure in the primary MR/C 63, the secondary MR/C 64 takes over. Every manager 65 then establishes connection with the secondary MR/C 64. The fail over process is described below from the primary, secondary, and manager view.
FIG. 8 is an exemplary process flow for the primary MR/C 63 recovery process. When the primary MR/C 63 is restarted, it initiates the following process to regain control of the system. When the primary MR/C 63 first loads (block 81), it tries to establish a connection with the secondary MR/C and requests status and mode, as shown in block 82. If the secondary MR/C sends a message saying it is the primary MR/C, the current MR/C switches to secondary mode. When the connection is established in block 83, if the primary MR/C gets a message from a manager indicating there is another primary MR/C in the system, the primary MR/C sends a message to the secondary MR/C verifying the message, as shown in block 85. If the other MR/C is in primary mode, it then switches to secondary mode, as shown in block 87. If the primary MR/C cannot validate the operational mode with the secondary MR/C (e.g., it could not establish a connection), it sends a message to the manager to pause operation.
FIG. 9 is an exemplary process flow for the MR/C fail over process from a secondary MR/C view. In block 91, when the primary MR/C fails (non-responsive), the secondary MR/C tries to reconnect to the primary MR/C for a predefined number of seconds (RT, recovery time), as depicted in block 92. If the secondary MR/C fails to reconnect, it switches to primary mode in block 94. The secondary MR/C then tries to establish connections to all managers, as shown in blocks 96 and 97. On connection to a manager in block 98, the secondary MR/C sends a message declaring itself as being the primary MR/C (in case the manager is still connected to the previous primary MR/C). The new primary MR/C can also receive connection requests from managers who lost connection with the old primary MR/C. These managers are then added to the new primary. In block 95, the new primary MR/C buffers all messages addressed to managers who are not yet connected. If the old primary MR/C establishes a connection, the new primary MR/C sends it a message about it being the primary.
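The secondary MR/C take-over decision can be sketched as follows; the class name, timing mechanism, and reconnect callback are illustrative assumptions rather than the patent's implementation:

```python
# Sketch of the secondary MR/C take-over logic described above. A real
# implementation would use sockets and timers; names here are assumptions.

import time

class SecondaryMRC:
    def __init__(self, recovery_time, try_reconnect):
        self.recovery_time = recovery_time   # RT, the recovery window from the text
        self.try_reconnect = try_reconnect   # callback: attempt to reach the primary
        self.mode = "secondary"
        self.buffered = []                   # messages for not-yet-connected managers

    def on_primary_failure(self):
        """Retry the primary for RT seconds; if it stays down, become primary."""
        deadline = time.monotonic() + self.recovery_time
        while time.monotonic() < deadline:
            if self.try_reconnect():
                return self.mode             # primary came back: remain secondary
        self.mode = "primary"                # take over as the new primary MR/C
        return self.mode

mrc = SecondaryMRC(recovery_time=0.01, try_reconnect=lambda: False)
mode = mrc.on_primary_failure()
```

With a primary that never answers, the secondary exits the recovery window in primary mode; if `try_reconnect` had succeeded within RT, it would have stayed secondary.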
FIG. 10 is an exemplary process flow for MR/C fail over process from a manager view.
If the connection to the primary MR/C is lost or the primary MR/C is not responding (block 101), the manager tries to reestablish the connection with the primary MR/C for a predefined number of seconds (RT, recovery time), as shown in block 102. If the reconnection attempt fails, the manager tries to establish a connection with a secondary MR/C in block 105. In block 108, if more than Re attempts for connection are performed, the manager waits for a connection from a primary MR/C in block 109. Once connected to the new primary MR/C, the communication between the manager and the new primary MR/C is established.
If the manager receives connection request from the secondary MR/C while it is still connected to the primary MR/C, it sends a MR/C switch message to the primary MR/C, disconnecting the connection to the primary MR/C, and establishing a connection with the secondary MR/C (now primary).
Referring back to FIG. 2, the manager module 24 is the component that is responsible for managing and monitoring all Agents 26. Normally, there would be one Manager running per physical machine that would manage and control all of the assigned Agents that are running on that physical machine. The system includes the following managers: Rules Manager, DCI Manager, Knowledge Processing Manager, Action Manager, and Broadcast Manager.
The common functions of the Manager modules are first described; however, each manager has specific functionality as it relates to its function. Each Manager manages an internal queue of all requests sent to the Manager by the MR/C. As requests come in from the MR/C, the Manager is responsible for processing the requests and queuing the requests as needed. Each Manager manages all agents and their instances. With each request that comes in, the Manager determines which Agent is available for use. Once the correct Agent has been identified, the Manager instructs the Agent to start processing the request. If an Agent is not found, then the Manager instantiates another Agent or sends the request back to the MR/C.
Furthermore, each Manager manages a pool of agents. The Manager has the option and ability to instantiate multiple Agents on start up and keep those Agents in a pool. These Agents are idle until the Manager receives a request to process. At that point, the Manager determines which Agent is most appropriate for running the request and assigns it to that specific Agent. If all Agents are busy, the Manager has the option to transfer the request to another Manager if one is available, or to wait for an Agent to become available. Additionally, each Manager manages error handling and fault tolerance. Each Agent is responsible for its own error handling. However, each Agent is also responsible for sending this error information to the Manager for logging and system-wide handling. It is the responsibility of the Manager to log all of the errors and handle any issues that might arise. The Manager is able to re-route a request that resulted in an error.
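The agent-pool behavior described above can be sketched as follows; the class name, pool mechanics, and the `None` return for an exhausted pool are illustrative assumptions:

```python
# Illustrative sketch of the agent-pool behavior described above: agents are
# pre-instantiated and idle, requests are assigned to a free agent, and an
# exhausted pool signals the caller to hand the request off elsewhere.

class AgentPool:
    def __init__(self, size):
        self.idle = [f"agent-{i}" for i in range(size)]  # pre-instantiated on start up
        self.busy = []

    def assign(self, request):
        """Give the request to a free agent, or None when all agents are busy."""
        if not self.idle:
            return None          # caller may transfer the request to another Manager
        agent = self.idle.pop()
        self.busy.append(agent)
        return agent

    def release(self, agent):
        """Return an agent to the pool once its request is done."""
        self.busy.remove(agent)
        self.idle.append(agent)

pool = AgentPool(size=2)
first = pool.assign("req-1")
second = pool.assign("req-2")
overflow = pool.assign("req-3")  # pool exhausted: hand off or wait
```

When `assign` returns `None`, the caller faces the choice described above: transfer the request to another Manager or wait for `release` to free an agent.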
A Communication module is responsible for all communications between the module managers and the MR/C. The communication module exposes an API set that is used by both the module Managers and the MR/C to communicate back and forth. In one embodiment, the communication module is an exchangeable module that allows the system integrator to use a different communication module for each installation. In situations where the MR/C is replaced by an external messaging system, the communication module is replaced with one that knows how to communicate with the external messaging system. This allows a corporation to continue utilizing its existing infrastructure by plugging the software of the present invention into its existing messaging system.
In one embodiment, the messaging system is the MR/C. In another embodiment, the messaging system is an external system that exposes a message bus and API to the system. FIG. 11 is an exemplary diagram of the communication module's place in the system, according to one embodiment of the present invention. When MR/C 110 is used as a message bus, the communication module listens for any messages that are sent directly to the manager to which the communication module is connected. Message retrieval differs depending on the message bus to which the communication module is connected. As shown, a communication module is utilized to communicate with Information module 111, Knowledge Processing module 112, Action modules 113, Monitoring and Tracing module 114, Broadcast module 115, Presentation module 116, Rule Processing module 117, and User Interface 118.
FIGS. 12a and 12b are exemplary process flow diagrams for receiving and sending messages, respectively. Since the communication module needs to support different messaging systems, it needs an architecture that supports easily replaceable messaging-system integration components. This allows the system developers to support multiple systems without major changes. On the receiving side shown in FIG. 12a, if the message bus supports listeners (block 120a), the communication module listens for messages in block 122a. If the message bus does not support listeners (block 120a), it polls the message bus until a message is received, as shown in block 121a. Once a message has arrived (block 123a), the message is processed in block 124a. If the processed message is valid (block 125a), the message is passed to the Manager in block 127a. If the processed message is not valid (block 125a), the error is logged and a response message is sent in block 126a.
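The receive-side decision in FIG. 12a, listener versus polling, followed by validation, can be sketched as below. The `PollingBus` stand-in and all method names are assumptions made for illustration; a real bus integration component would wrap the actual messaging API.

```python
class PollingBus:
    """Minimal in-memory stand-in for a message bus without listener support."""
    supports_listeners = False

    def __init__(self, messages):
        self.messages = list(messages)

    def poll(self):
        # Return the next pending message, or None if nothing is waiting yet.
        return self.messages.pop(0) if self.messages else None

    def is_valid(self, message):
        return isinstance(message, dict) and "body" in message

    def send_error_response(self, message):
        pass  # a real bus would answer the sender with an error message

def receive_message(bus, pass_to_manager, log_error):
    """Receive one message following the FIG. 12a flow."""
    if bus.supports_listeners:
        message = bus.wait_for_message()       # block 122a: listen
    else:
        message = None
        while message is None:                 # block 121a: poll until arrival
            message = bus.poll()
    if bus.is_valid(message):                  # block 125a: validity check
        return pass_to_manager(message)        # block 127a: pass to Manager
    log_error(message)                         # block 126a: log and respond
    bus.send_error_response(message)
    return None
```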
On the sending side shown in FIG. 12b, the message is formatted according to the message bus requirements in block 120b. The communication module connects to the message bus in block 121b, and if the connection is successful (block 122b), it sends the message in block 124b. If the connection is not successful, or the message is not sent (block 125b), an error is logged in block 126b. FIG. 13 is an exemplary block diagram of the architecture of the communication module. Communication module API 130 is used by the system Managers to communicate with the message bus. Message formatter 132 is in charge of formatting outgoing messages. Message analyzer 134 is responsible for breaking up received messages and forwarding the body of the message to the manager via the communication API. To support the various messaging systems that are available, the only piece that needs to change is the message bus integration component 136.
The present invention features a robust and flexible security system that allows system administrators to control many aspects of the system. In one embodiment, the security system is a two-layer security system. At the first layer sits a Gateway, which is responsible for all authentication and communications with the system from the outside world. The second layer is responsible for the security of system functionality. The second layer receives the user id from the Gateway once the user has been authenticated. At that point, the second layer checks the user's security policy every time the user uses the system. The two layers ensure that the security subsystem is both extremely secure and highly manageable.
FIG. 14 illustrates an overall system architecture for Gateway security, according to one embodiment of the present invention. The Gateway module is responsible for authenticating users before they are granted access to the system. In one embodiment, the gateway supports user authentication via Windows NT security or System security. In this embodiment, the Gateway module controls all access to the system from any external system 145. External systems are any systems that are not part of the core server, or any modules that need to interact with the system via an external device. This means that the user interfaces 143, presentation module 142, and broadcasting module 141 all have to use the gateway to communicate with the core system, as shown in FIG. 14.
FIG. 15 illustrates two exemplary paths via which different users may access the system. In the first path, a user connects to the Gateway module (block 155) via a broadcasting module (block 154) using a hand-held device in block 151. In the second path, the user connects to the Gateway module (block 155) via a gateway client (block 153) and a firewall (block 157) using a browser in block 152. From the Gateway module, the system (block 158), or a user directory (block 156) may be accessed.
The Gateway includes two modes of operation: proxy gateway and authentication gateway. Under the proxy gateway mode of operation, the gateway is the module responsible for authenticating the user and passing all commands, requests, and responses from the external client to the core system. To access the gateway, any external client needs to send requests via a specific port that is assigned to the gateway. Much like the way a SQL Server listens on port 1433 for any commands, the gateway does the same on its port. The gateway understands a set of commands that allow the user to access every function of the system. The system administrator is also able to define custom commands that allow a user, for example, to request data with a specialized command.
FIG. 16 is an exemplary block diagram of the Gateway module operating in the proxy gateway mode. A gateway client 163 communicates with the Gateway module 161 through a firewall 162. The gateway client 163 is preferably a COM object. The client object exposes properties and methods that automatically package the commands into small XML requests that are sent to the gateway module. The gateway module interprets the commands, performs them, and sends the results back to the gateway client object. System commands, such as authentication and configuration commands, return status codes for the operation. Commands to retrieve data return both a status code for the operation and the data that was retrieved.
The gateway module supports batch operations for commands. This means that the client can batch a series of commands and execute them as one. For example, a batch for logging in, retrieving information, and logging out could be saved as a command file that can be retrieved by the gateway client. Since all information in the system is preferably represented in XML, the command files are also XML files that follow a specific format.
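The patent does not reproduce the command-file format, so the following is a purely hypothetical illustration of what a batch of login, retrieve, and logout commands might look like in XML; every element and attribute name here is invented for the example.

```xml
<CommandBatch>
  <!-- hypothetical batch: authenticate, retrieve a Knowledge Container, log off -->
  <Command name="Login">
    <Parameter name="UserName">jsmith</Parameter>
    <Parameter name="Password">********</Parameter>
  </Command>
  <Command name="RetrieveKC">
    <Parameter name="KCID">12345</Parameter>
  </Command>
  <Command name="Logout"/>
</CommandBatch>
```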
In the authentication gateway mode of operation, the system applications can all be run either on the same network or in disconnected networks. When these applications are run on the same network, there is no need for the communication between the user interfaces and any other clients to use the proxy gateway. Instead, the gateway could be used as a simple authentication gateway that is responsible for simply authenticating the users of the system.
FIG. 17 is an exemplary block diagram of the Gateway module operating in the authentication gateway mode. When the gateway is used as an authentication gateway, it simply passes the login information to the gateway server 173 for authentication using security database 174. The gateway server 173 attempts to authenticate the login information and sends a response back to the application (175-177). The response is comprised of a SID (security id) that is attached to each request that client 170 makes from the system database 171 or MR/C 172. The SID is checked for authenticity each time a request is received. Each time a user authenticates via the gateway, a new SID is generated. The SID is then encrypted and expires as soon as the user logs off.
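The SID lifecycle described above, a fresh SID per authentication, checked on every request, invalidated at logoff, can be sketched as follows. The class, the dictionary-backed security database, and the omission of the encryption step are all simplifying assumptions for illustration.

```python
import secrets

class AuthenticationGateway:
    """Sketch of the SID lifecycle: generate on login, check per request,
    expire on logoff. (A real gateway would also encrypt the SID.)"""

    def __init__(self, security_db):
        self.security_db = security_db   # user -> password (toy stand-in)
        self.active_sids = {}            # sid -> authenticated user

    def login(self, user, password):
        if self.security_db.get(user) != password:
            return None                  # authentication failed
        sid = secrets.token_hex(16)      # a new SID on every authentication
        self.active_sids[sid] = user
        return sid

    def check(self, sid):
        """Called for each request the client attaches a SID to."""
        return sid in self.active_sids

    def logoff(self, sid):
        self.active_sids.pop(sid, None)  # SID expires as soon as user logs off
```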
The authentication gateway supports multiple authentication services. Such services could include the system's own proprietary security system 177, Windows NT authentication 175, Kerberos™ 176, and any other security system that is addressable via an API. A corporation could use its own security system to control system authentication if that system is addressable via an API and a security layer can be developed for it.
Once a user has been authenticated via the Gateway, the security subsystem is responsible for controlling the user's privileges. Each function that can be performed by the user is controlled via a named privilege. The system user and privileges database is based largely on a model of groups and users. In this model, users belong to groups, and groups can be comprised of other groups. Users can belong to multiple groups and inherit security from the group itself. Additionally, users can be granted certain privileges directly. In one embodiment, the privilege system only supports explicit rights, meaning that a privilege must be explicitly assigned to a user or group. Under this model, users have no privileges unless the privileges are directly assigned to the user or to a group to which the user belongs. Because the design of the privilege system does not allow for a situation where a privilege is stripped from a user or group, conflicts in the privilege system cannot occur.
In one embodiment, privileges in the system are checked on a per access basis rather than per login basis. This means that each time the user attempts to perform a function, the privilege system is queried to check if the user has access to that function. Using the per access privilege checking method allows administrators to make changes to the privilege system in real time and have these changes take effect immediately.
FIG. 18 is an exemplary process flow for the privilege checking process. During the authentication process, the Gateway returns a SID (security id) to the gateway client in block 181. This SID includes the user's GUID (globally unique identifier). All user interfaces, APIs, DCIs, and other means of accessing the system require the SID as one of the parameters. Once the SID has been supplied, a privilege check is performed in block 184. The privilege check ensures that the user has access to the function being attempted. If the privilege is cleared (block 186), the function is then performed in block 188. Otherwise, the operation is aborted in block 187.
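An explicit-rights, per-access check over the group model described above can be sketched as a walk of the transitive group membership. The data structures (`direct_grants`, `group_grants`, `memberships`) are assumptions made for the sketch; the patent does not specify the database layout.

```python
def has_privilege(user, privilege, direct_grants, group_grants, memberships):
    """Per-access, explicit-rights privilege check.

    direct_grants: user -> set of privileges granted directly to that user
    group_grants:  group -> set of privileges granted to the group
    memberships:   member (user or group) -> set of groups it belongs to
    """
    # A privilege explicitly assigned to the user always clears.
    if privilege in direct_grants.get(user, set()):
        return True
    # Walk group membership transitively: groups may contain other groups,
    # and users inherit privileges from every group they reach.
    to_visit = list(memberships.get(user, set()))
    seen = set()
    while to_visit:
        group = to_visit.pop()
        if group in seen:
            continue
        seen.add(group)
        if privilege in group_grants.get(group, set()):
            return True
        to_visit.extend(memberships.get(group, set()))
    # Explicit rights only: no grant found anywhere means access is denied.
    return False
```

Because the check is evaluated on every access rather than cached at login, an administrator's change to any of these tables takes effect on the very next request, as the text notes.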
The system uses a distributed architecture, so each one of the modules can run on one or many computers. The MR/C is responsible for routing all messages back and forth between the various modules. The MR/C is also responsible for handling the load balancing of the various module managers. One of the features of the system of the present invention is the ability to integrate with other EAI systems. The integration that is required varies depending on the customer requirements. Exemplary options for system deployment include: complete system deployment; complete system and EAI integration deployment, in which the system communicates with the EAI system via a specialized DCI; and EAI and selected modules integration, in which only some of the modules are integrated into the EAI message bus and the MR/C is not utilized.
The complete deployment option involves utilizing all of the modules and components to achieve all needed functionality. For this deployment option, a DCI is provided or developed for each system that needs to be integrated into the system. In this scenario, the system assumes the functionality of the EAI system that otherwise would be in place. Because the entire integration is based on the system, the system is responsible for all monitoring, tracing, and message routing. Systems that conform to SNMP and MIB-II compliance can be monitored by the system.
Most large corporations are already running an EAI solution or have one in the works. In these situations, the system supports communication with the EAI system. Since integration costs for the majority of EAI systems are large, the corporation might opt to have both the system and the EAI system running side by side. Additionally, the corporation might not want to integrate the modules into their EAI system. When this happens, the combined deployment is used. In this deployment, the system is able to communicate with the EAI system's message bus via a specialized DCI. This DCI has the ability to send and receive messages via the EAI message bus. This technique can be used to retrieve information from other enterprise systems that are already connected to the EAI system. A specialized DCI is developed for each EAI system that needs to be integrated.
In the EAI and modules deployment, the communication module has the ability to talk to various EAI systems via their message bus. This allows the modules (which use the communication module to communicate with the MR/C) to talk to other EAI systems without having to use the MR/C as a message bus or a specialized DCI. In this instance, the individual modules can be connected directly to the EAI message bus and, in turn, be used in the EAI system's workflow and process flow engine. Only some of the modules are used in this deployment. The MR/C is not needed because the EAI system's message bus is used instead.
When modules are deployed this way, they act independently of each other. For example, when the Rules Processing Module is executing a gScript and it needs to retrieve a KC, it simply sends a message over the message bus specifying that it needs a specific KC. The EAI System's message controller automatically sends the message to the correct module based on the workflow that has been predetermined.
Referring back to FIG. 7, Rules Processing Module 71 stores, evaluates, and executes rules. A rule is defined as a conditional statement that tells the program how to respond to particular input, or a combination of IF x THEN y ELSE z statements. An event, request, or other type of input is sent to the Rules Server, which then evaluates the conditional clause of the rule (the IF clause) and responds accordingly (based on the THEN/ELSE clauses). The Rules Server supports rules, nesting, parallel execution, and encryption.
Although there are a few ways to trigger the execution of a rule, there are only small differences in the process. When the Information Server (actually a DCI) or an outside source fires an event, the Rule Manager spawns a thread that communicates with the appropriate DCI for further information, such as what information is being requested, the parameters, and so on. If the event comes from an outside source, the information may already be within the event itself. Once it gets all the necessary information, the Rule Manager transfers it to the Rules Engine. The same thing happens when the Knowledge Processing Server requests the processing of a rule.
The difference is that, in case of an event, the Rules Engine checks the rules repository for all rules that are affected by the event, that is, checking the IF clause of all rules. If there are any affected rules, the Rules Engine spawns a Rule Agent thread, which executes the appropriate rules with the submitted information. If the Knowledge Processing Server submitted a request, the Rules Engine knows which rule needs to be processed. The Rules Engine then spawns a Rule Agent that processes the requested rule.
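The IF/THEN/ELSE rule shape and the event-driven selection of affected rules can be sketched as below. The `Rule` class and `fire_event` function are illustrative assumptions; the actual Rules Engine works on stored rules and gScripts rather than Python callables.

```python
class Rule:
    """A rule as IF condition THEN then_action ELSE else_action."""

    def __init__(self, name, condition, then_action, else_action=None):
        self.name = name
        self.condition = condition        # the IF clause
        self.then_action = then_action    # the THEN clause
        self.else_action = else_action    # the optional ELSE clause

    def evaluate(self, context):
        if self.condition(context):
            return self.then_action(context)
        return self.else_action(context) if self.else_action else None

def fire_event(event, repository):
    """Check the IF clause of every rule in the repository against the
    event, and run the rules the event affects (each on its own Rule
    Agent in the real system)."""
    return {rule.name: rule.evaluate(event)
            for rule in repository if rule.condition(event)}
```

A Knowledge Processing Server request would instead name the single rule to run, skipping the repository scan, which is exactly the difference the paragraph above describes.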
FIG. 19 is an exemplary block diagram of the Rules Processing Module subsystems architecture. Communication layer 192 is the layer responsible for all communications with the Rules Processing Module 191. This layer is preferably universal to all system objects and processes in the system. Rules manager 193 is responsible for managing and monitoring all Rules Agents 196. Normally, there is one Rules Manager running per physical machine that manages and controls all of the Agents that are running on that physical machine. The Rules Manager 193 is also responsible for receiving notifications of events from the message router/controller. Once a notification is received, the Rules Manager retrieves the rules that are attached to the event from Rules database 195. The Rules Manager then retrieves the gScripts that are attached to the rule from the Metadata Repository 194 and starts processing them.
Rules Agent 196 is responsible for running all rules and processing gScripts. Rules Agent 196 evaluates the rules and runs the gScripts that are attached to, or contain these rules. The Rules Agent interacts with the Rules database 195, and gScript (Metadata) Repository 194. Rules Studio 197 is a plug-in to the User Interface that allows users to create new Event Rules and Content Rules. Rule Studio 197 also allows users to create gScripts and assign them to specific rules.
A gScript language is used as the scripting language that represents the logic and functionality of a gScript. The language allows users to run iJobs, evaluate the results of the iJobs, and make decisions based on the data returned by the iJobs and actions. The gScript also allows users to send requests to the Action Module, broadcast server, and information server. Moreover, the gScript language allows users to define variables and share them throughout the gScript processing. Variables are used for passing data between various knowledge containers and for condition evaluation purposes. A variable can be declared as private or public to the gScript. The gScript language allows users to define error-handling flow control. Users are able to tell the script processor how to handle errors; some of the possibilities could be running a different iJob, stopping execution, or any command supported by the gScript language. The gScript language also provides users with feedback about error information. Users can then determine the flow of the gScript according to the error message.
Each gScript is capable of receiving runtime parameters. These parameters can be used anywhere in the gScript to control its flow or as variables for other actions. Additionally, the gScript can be split into multiple gScript segments that can be run as one or individually. The segmentation can be nested, allowing multiple gScript segments to be run as one item within a bigger gScript. Each gScript segment knows which data elements it needs to run. In order for a gScript segment to run, the needed parameters must be supplied.
The following table includes an exemplary list of the functions and statements that the gScript language supports.
[The table of gScript functions and statements is reproduced as images in the original publication.]
All gScripts are created using the gScript Studio user interface described below. Once a gScript has been visually created, the user has the option to save the gScript as either an executable gScript or a private gScript. An executable gScript is a gScript that is ready for production and can be implemented in the production system. A private gScript is either a gScript that is not complete and therefore should not be run, or a gScript fragment that is simply used inside other gScripts and cannot run on its own. The user also has the option to export the gScript so that it can be read by a different system installation. An import feature is provided so that gScripts can be imported into the system.
The gScripts are compiled when they are saved and activated in the system. The compilation process involves the following steps. The conversion step converts the graphical layout of a gScript to the XML representation of a gScript. The dynamic linking step takes all fragments that were copied into the master gScript and converts their placeholders to code. The execution-plan step determines the execution plan of a gScript; the execution plan specifies to the gScript processor all the dependencies that have been programmed into the gScript, and the gScript processor, in turn, is able to determine which processes can be run concurrently. The pre-processing step prepares all of the iJob, Action, and Presentation requests; the requests are saved with the gScript code, allowing the gScript processor to simply send a request instead of processing it and then sending it. Finally, the save step saves the gScript graphical layout, gScript XML code, gScript execution plan, and all related information regarding the gScript to the gScript repository.

Once a gScript has been added and activated in the system, the Rules Agent is responsible for processing the gScript. FIG. 20 is an exemplary flow diagram depicting gScript processing. In block 200, the gScript code and execution plan are retrieved from the gScript Repository 201. In block 202, the XML code is converted to VBA-compatible code, and the concurrent processing plan is determined in block 203. In block 204, the gScript language is evaluated, and in block 207, requests are sent for external processing to servers 208a-e. If there is no more code to process (block 204), the processing is completed in block 206.
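The execution-plan step can be sketched as grouping gScript steps into stages whose dependencies are already satisfied, so each stage may be dispatched concurrently. The patent does not specify the algorithm, so the function below, a standard staged topological sort, is an illustrative assumption.

```python
def execution_plan(dependencies):
    """Group steps into stages that may run concurrently.

    dependencies: step name -> set of step names it depends on.
    Every step in a stage has all its dependencies satisfied by earlier
    stages, so the gScript processor could run a whole stage in parallel.
    """
    remaining = {step: set(deps) for step, deps in dependencies.items()}
    done, stages = set(), []
    while remaining:
        # Steps whose dependencies have all completed are ready to run.
        ready = sorted(s for s, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("circular dependency in gScript")
        stages.append(ready)
        done.update(ready)
        for step in ready:
            del remaining[step]
    return stages
```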
Rules include Event rules and Content rules. Event rules are rules that are attached to events in the system and, therefore, do not have a condition to evaluate; in other words, the condition in an event rule is actually whether the event occurred. A notification of the event is pushed to the Rules Manager, which then takes over the processing of the rule. FIG. 21 is an exemplary flow diagram describing the processing of an event rule. In block 210, an event notification is received by the Rule Manager, and the event information and rule information are retrieved in blocks 211 and 212, respectively. In block 213, gScript information is retrieved, and messages are sent by the Rule Manager to process the gScripts in block 214. Responsive to the request from the Rule Manager (block 216), gScript code and execution plans are retrieved in block 200, and the gScript code is processed in block 217. The gScript retrieve object retrieves all code that is related to the gScript being processed. It then creates a comprehensive gScript that contains all of the code for the gScript and sends that code to the gScript processor.
The processing includes evaluating conditional and looping statements (block 217a), determining the concurrent processing plan (block 217b), and sending requests for external processing (block 217c). Concurrent processing is only allowed for data sources from the Internet or other data sources that do not support modification of data. If the data sources require modification of data, the processing is carried out sequentially. When all the gScript lines are processed (block 218), a "Finished Processing Message" is sent to the Rule Manager in block 219.
Content rules are rules that are based on the content of a Knowledge Container or a Knowledge Fragment. Content Rules are run by requesting a gScript that contains the rule. The steps for processing Content rules are similar to steps for processing Event rules, except that steps 210 and 211 in FIG. 21 for retrieving event request and information are not performed.
Referring back to FIG. 7, Knowledge Processing Module (KPM) 70 is responsible for intelligently assembling the information into Knowledge Containers based on an iJob. This module has the following components:
Knowledge Processing Manager - Accepts knowledge generation requests and manages Knowledge Agents.
Knowledge Agents - Responsible for the knowledge creation by running an iJob and creating Knowledge Containers.
Data Design Studio - The user interface application by which a user can create an iJob.
Studio Analyzer - An application that allows an administrator to monitor and analyze the system.
A Knowledge Processing Manager is used to execute Knowledge Agents based on ad-hoc requests. The Manager provides the ability for every component, in or outside the system, to request the creation of Knowledge Containers. A Knowledge Container is an object that contains the elements that define the mapping of a business object. The information in the Knowledge Container can be assembled from multiple data sources, corporate and non-corporate (Internet or others), and is not limited to one-dimensional information. In other words, information in the knowledge container can be assembled from one data source based on previously retrieved information from another data source. A Knowledge Container is also known as a Knowledge Cube. The principal idea behind the Knowledge Container (and the rest of the system, for that matter) is using every data element as a building block that can be used to build more information. A simple example might be the retrieval of driving directions based on the address field in every contact.
The Knowledge Agents are the components that actually run an iJob and construct a Knowledge Container. An iJob is, in essence, a script that describes where to extract the information from, as well as how to do it. The Agent then communicates with the appropriate DCI to actually retrieve the information. Although a Knowledge Container is a predefined set of fields that describe a business entity, the Agent also has the ability to process rules that can affect the content of the container, but not the structure of it; that can be done using the Rules Server.
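The building-block idea, each retrieved fragment can drive the next retrieval, can be sketched as an ordered run of related fetch steps. Everything here (`run_ijob`, the fetcher names, the stand-in data) is invented for illustration; real fetchers would go through a DCI to external systems such as a CRM or a mapping service.

```python
def run_ijob(steps, fetchers):
    """Assemble a Knowledge Container by running related steps in order.

    Each step pairs a Knowledge Fragment name with a fetcher. A fetcher
    receives the container built so far, so later fragments can be
    retrieved based on fields produced by earlier ones.
    """
    kc = {}
    for fragment_name, fetcher_name in steps:
        kc[fragment_name] = fetchers[fetcher_name](kc)
    return kc

# Stand-in data sources; a real system would reach them via DCI Agents.
fetchers = {
    "crm_contact": lambda kc: {
        "name": "Jim Zafrani",
        "address": "6300 Variel Ave., Woodland Hills",
    },
    # The directions fetcher depends on the address retrieved above.
    "map_service": lambda kc: {
        "directions": f"route to {kc['contact']['address']}",
    },
}

kc = run_ijob([("contact", "crm_contact"),
               ("driving_directions", "map_service")], fetchers)
```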
FIG. 22 is an exemplary block diagram of a KPM architecture. When a request from a Message Router 221 comes in to process a certain iJob, the request is forwarded to the KPS Manager 225 by the KPM 222. The KPS Manager 225 determines which KPS Agent 226 to send the request to. Once the request has been forwarded to an agent, the agent takes over the processing of the iJob. Similar to the Rule Processing Module architecture in FIG. 19, KPS Agent 226 evaluates the request and runs the iJobs that are attached to, or contain, these rules. The KPS Agent interacts with the iJob database 228 and Metadata Repository 227. iJob Studio 224 is a plug-in to the User Interface that allows users to create new KCs.
An iJob script is a script that defines the content of a Knowledge Container (KC). The KC is made up of one or more Knowledge Fragments (KF). An iJob is simply the script that instructs the system how to create the KC by taking one or more related KFs and putting them together in the right order. FIG. 23 is an example of the making of an iJob KC from some contact information. All data in an iJob has to relate to each other; there has to be a connection between the company information coming out of Siebel™ and the contact information. Unrelated information cannot be contained in a Knowledge Container. For gathering unrelated data, two or more iJobs are required.
As shown in FIG. 23, an iJob is comprised of one or more Knowledge Fragments (KF) that are related to each other. A KF is comprised of data that fits a specific Data Definition.
A KF, however, does not have to contain data that comes from the same source each time the data is requested. In the case of the driving directions KF, the data could come from any number of sources. As long as the data is mapped to the knowledge definition (KD), it will be accepted into the KC. The different data sources could be selected based on availability and conditional evaluation of the data. For example, the driving directions could come from MapQuest™ if the contact's office is in the United States, but come from MapBlast™ if the contact is international. A Single KC could contain multiple nested records. In the example above, the Siebel™ Contact Information could contain multiple contacts, which will produce multiple records for all related data.
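The conditional selection of a data source for a KF, mirroring the MapQuest™/MapBlast™ example above, can be sketched as a small selection function. The source names follow the text; the `country` field and the selection logic are illustrative assumptions.

```python
def select_directions_source(contact):
    """Pick the data source for the driving-directions KF.

    Per the example in the text: a US office uses MapQuest, an
    international office uses MapBlast. Either way, the retrieved data is
    mapped to the same knowledge definition before entering the KC.
    """
    if contact.get("country", "US") == "US":
        return "MapQuest"
    return "MapBlast"
```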
The KC has a tree structure. The structure of a KD does not change for any specific KC. The KC structure depends on the KD structure. Any changes in the KD's structure could have major effects on the KC.
A knowledge definition (KD) is the format of the data that is returned by the data source. An example of a KD is driving directions. The KD that defines the driving directions ensures that the driving directions always contain the same fields and follow the same field names and formats regardless of the source of the information. KDs are set up by users of the system. To set up a KD, the user determines and defines the fields and their formats. Once the KD is defined, the various data sources that can retrieve this information are mapped to the KD. The mapping information is then stored and used when a KF is requested based on the KD that was just created.
Data Sources are the systems that supply the data for the KF. These systems are accessible via a DCI component that knows how to interact with the system. Each data source is assigned to one or more categories. The user registering the DCI maps the fields that are retrieved by the DCI to one or more knowledge definitions. When an iJob is created, the user selects the category of data that the KF needs to contain. Once the selection is made, a list of sources is presented to the user. At that point, the user is able to select which data sources to use. Regardless of which data source is selected, all data coming from the data sources is in the same format.
A knowledge fragment (KF) is defined as data that makes up a portion of the data contained in the Knowledge Container. Thus, an iJob is a collection of KFs that relate to each other. When an iJob is saved, it is converted to an XML representation. In one embodiment, the iJob Script language supports operators, math functions, string functions, time/date functions, and the like.
As described above, when a request comes in to process a certain iJob, the request is forwarded to the KPS Manager, which determines which KPS Agent to send the request to. Once the request has been forwarded to an agent, the agent takes over the processing of the iJob. Once all of the processing is done, the completed KC contains all the data that was retrieved. The following is an example of a complete KC:
<KC CreatedDateTime="051220010805am" ID="12345">
  <Contact>
    <FirstName>Jim</FirstName>
    <LastName>Zafrani</LastName>
    <Address1>6300 Variel Ave. Suite H</Address1>
    <Address2></Address2>
    <City>Woodland Hills</City>
    <State>CA</State>
    <Zip>91367</Zip>
    <CompanyName>Sophisticated Technologies</CompanyName>
    <CompanyID>SophTech</CompanyID>
    <CompanyInformation>
      <CompanyName>Sophisticated Technologies</CompanyName>
      <CompanyID>SophTech</CompanyID>
      <Address1>6300 Variel Ave. Suite H</Address1>
      <Address2></Address2>
      <City>Woodland Hills</City>
      <State>CA</State>
      <Zip>91367</Zip>
      <CompanyNews>
        <NewsItem>
          <Title>Sophisticated Technologies to revolutionize Knowledge Management</Title>
          <URL>http://www.isyndicate.com/news/getnews.asp?newsid=143</URL>
          <URL>http://www.yahoo.com/news/getnews.asp?newsid=8864</URL>
        </NewsItem>
        <NewsItem>
          <Title>Sophisticated Technologies signs a 30 million dollar deal with Microsoft</Title>
          <URL>http://www.bloomberg.com/news/getnews.asp?newsid=155435</URL>
        </NewsItem>
      </CompanyNews>
    </CompanyInformation>
    <DrivingDirections>
      <Steps>
        <Step ID="1">
          <DirectionText>Head west on the 101 freeway</DirectionText>
        </Step>
        <Step ID="2">
          <DirectionText>Exit DeSoto Avenue</DirectionText>
        </Step>
        <Step ID="3">
          <DirectionText>Head North on DeSoto Avenue</DirectionText>
        </Step>
        <Step ID="4">
          <DirectionText>Turn Left on Erwin Blvd.</DirectionText>
        </Step>
      </Steps>
    </DrivingDirections>
  </Contact>
</KC>
This KC is then saved in the database for further processing by the Rules Processing Module.
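Since the KC is plain XML, downstream modules can read fields back out of it with any XML parser. The following is an illustrative sketch only, using a trimmed-down KC in place of the full example above:

```python
import xml.etree.ElementTree as ET

# Illustrative only: extracting fields from a completed KC,
# assuming it is well-formed XML as shown above.
kc_xml = """<KC ID="12345">
  <Contact><FirstName>Jim</FirstName><LastName>Zafrani</LastName></Contact>
  <DrivingDirections><Steps>
    <Step ID="1"><DirectionText>Head west on the 101 freeway</DirectionText></Step>
  </Steps></DrivingDirections>
</KC>"""

kc = ET.fromstring(kc_xml)
first_name = kc.findtext("Contact/FirstName")
steps = [s.findtext("DirectionText") for s in kc.iter("Step")]
```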
Referring back to FIG. 7, Information Processing Module 72 provides the means for the system to interact with the outside world (i.e., other systems, programs, and data sources). Since it is impossible to anticipate all the ways the system can interact with external systems, the DCI system must be flexible enough to add extra functionality when developers need it without radically modifying the system. The DCI system includes the following functions:
Interacting with a variety of systems (internally and externally). Some examples are being able to access a database, hook into Microsoft Exchange to send an email, or to monitor an event in an ERP system.
Providing an easy way of adding extra functionality (i.e. more interfaces to other systems) by simply adding more modules to the system.
Transforming any data, event, or message that goes into the system into the corresponding XML that the system expects.
DCI Managers are responsible for instantiating and running the DCI Agents. The DCI Managers are also responsible for controlling the load and queuing up requests as they come in to the system. The MR/C relies on the DCI Managers to launch the DCI Agents and manage them. It is the DCI Manager's responsibility to inform the MRC of its presence and how busy it is. Preferably, there should be only one DCI Manager installed on a server. DCI Agents are the only components in the DCI system that can interact with the outside world. While all DCI Managers are the same, each DCI Agent is different. This is because each Agent interacts with external systems in a specific way. This allows third party developers to create additional DCI Agents and add them to the system to add more functionality to the overall system. There can be multiple agents on a server where a DCI Manager is installed.
In one embodiment, the DCI Manager instantiates each of the Agents using Microsoft™ Transaction Server (MTS). This allows the manager to run the agents out of process and still control the agent. MTS further allows the manager to use connection pooling and transactions as needed.
FIG. 24 is an exemplary block diagram of the Information Processing Module Architecture. In one embodiment, DCI Manager 245 is a Windows NT service whose job is to launch DCI Agents 246 based upon the requests it receives from the MRC 241. It also has the job of managing and monitoring every instance of the DCI Agents 246. DCI Agent 246 is a COM executable running out-of-process (using Microsoft™ Transaction Server) that serves as a gateway or interface to external systems. It is the job of the DCI Agent to interact with these external systems to retrieve information or to perform some kind of activity. Each agent will probably be unique; however, each agent needs to adhere to the DCI Agent model.
The DCI Agent is responsible for the following functions:
Retrieving information and converting it into knowledge fragments - The agent is responsible for communicating with the data sources, retrieving the data that is requested, and converting the information into knowledge fragments. The conversion is done by first mapping the fields from the data source to the knowledge fragment definition. Once the fields are mapped, the agent is able to map the data to the correct fields on the knowledge fragment.
Interacting with systems outside of the system - The agent is able to communicate with systems that are outside of the system. This allows the agents to be able to call specific functions or procedures that each system might need.
Providing status updates to DCI Manager - Since the bulk of the processing in the system occurs in the DCI Agent, the agents are required to constantly monitor their progress and report it to the manager. The manager is then able to perform load-balancing effectively among the various agents that are running.
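The three agent responsibilities above can be sketched as a common base interface. This is a hedged illustration: the patent describes COM components, not Python classes, and the method names, field-map shape, and progress callback are all assumptions.

```python
from abc import ABC, abstractmethod

# Sketch of the DCI Agent model: retrieve data, convert it to
# knowledge fragments via a field map, and report progress to the
# DCI Manager. All names are illustrative assumptions.
class DCIAgent(ABC):
    def __init__(self, field_map, report_progress):
        self.field_map = field_map              # data-source field -> KF field
        self.report_progress = report_progress  # callback to the DCI Manager

    @abstractmethod
    def retrieve(self, params):
        """Communicate with the external data source; return raw rows."""

    def to_fragment(self, row):
        # Map source fields onto the knowledge fragment definition.
        return {kf_field: row[src] for src, kf_field in self.field_map.items()}

    def run(self, params):
        rows = self.retrieve(params)
        fragments = [self.to_fragment(r) for r in rows]
        self.report_progress(done=len(fragments), total=len(rows))
        return fragments

class InMemoryAgent(DCIAgent):
    def retrieve(self, params):
        return [{"fname": "Jim", "lname": "Zafrani"}]

status = {}
agent = InMemoryAgent({"fname": "FirstName", "lname": "LastName"},
                      lambda **kw: status.update(kw))
kfs = agent.run({})
```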
FIG. 25 is an exemplary process flow of a DCI module. A request from the MR/C is received in block 2501. The DCI manager retrieves the message parameters in block 2502 and determines the needed Agent in block 2504. If there is an available Agent (block 2506), the parameters are sent to the Agent in block 2512. If there is no free Agent, a new Agent is created in block 2510, and then the parameters are sent to the Agent in block 2512. After the request parameters are received by the Agent, the agent process step is executed in block 2514 and a message with job details is sent to the MR/C in block 2516.
The request is then processed by the DCI Agent. In block 2518, the message parameters are processed, and if there is a connection available to the needed data source (block 2520), data is requested from the data source (e.g., 3rd party data source 2526) in block 2524. A Knowledge Fragment (KF) is built based on the retrieved data and, if needed, in block 2528, the KF is saved in the KC Repository 2532. In block 2534, the KF is encoded into the MR/C message, and the message is sent to the DCI Manager for routing in block 2536.
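The manager-side dispatch step in FIG. 25 (reuse a free agent if one exists, otherwise create one, then hand it the parameters) can be sketched as follows. Class and method names are illustrative assumptions, not the patent's actual components.

```python
# Sketch of DCI Manager dispatch: find a free agent of the needed
# kind (block 2506), create one if none is free (block 2510), then
# send it the request parameters (block 2512). Illustrative only.
class Agent:
    def __init__(self, kind):
        self.kind, self.busy = kind, False

    def process(self, params):
        self.busy = True
        result = {"kind": self.kind, "params": params}
        self.busy = False
        return result

class DCIManager:
    def __init__(self):
        self.agents = []

    def dispatch(self, kind, params):
        free = next((a for a in self.agents
                     if a.kind == kind and not a.busy), None)
        if free is None:
            free = Agent(kind)          # no free agent: create a new one
            self.agents.append(free)
        return free.process(params)     # forward the parameters

mgr = DCIManager()
job = mgr.dispatch("news", {"company": "SophTech"})
```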
All DCI Agents share the same basic architecture. This common architecture allows all agents to perform their functions correctly. The only section of the agents that may be different is the data retrieval classes. These classes directly depend on the data source that is being accessed. The Agent architecture includes the following objects:
System Configuration and Monitoring - This object is responsible for any system configuration item retrieval and setting. This object is also in charge of providing the monitoring information requested by the manager.
XML Translator - This object is responsible for converting the retrieved data into the XML format. The translator retrieves the correct data map and maps all of the values to the field names.
Gateway - The actual object that interfaces with the external systems.
In one embodiment, all of the above objects interact via a standard API.
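The XML Translator's job can be sketched in a few lines: apply a data map (source field to KF field name) and emit the result as XML. This is an assumption-laden illustration; the map format and `KF` root tag are made up for the example.

```python
import xml.etree.ElementTree as ET

# Hedged sketch of the XML Translator object: retrieve the data map,
# then map every source value onto its KF field name and emit XML.
def translate(data_map, record, root_tag="KF"):
    root = ET.Element(root_tag)
    for src_field, kf_field in data_map.items():
        ET.SubElement(root, kf_field).text = str(record[src_field])
    return ET.tostring(root, encoding="unicode")

xml_out = translate({"cust_city": "City", "cust_zip": "Zip"},
                    {"cust_city": "Woodland Hills", "cust_zip": "91367"})
```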
Referring back to FIG. 7, Action Module 73 provides the means for the system to perform actions or carry out activities external to the system (i.e., other systems, programs, data sources). The Action Module does not actually carry out the action but rather invokes the action in an external system/component.
Action Managers are responsible for instantiating and running the Action Agents. The Action Managers are also responsible for controlling the load and queuing up requests as they come in to the system. The MR/C relies on the Action Managers to launch the Action Agents and manage them. It is the Action Manager's responsibility to inform the MRC of its presence and how busy it is. There should be only one Action Manager installed on a server.
Action Agents are the only components in the Action module that can interact with the outside world. While all Action Managers are the same, each Action Agent is different. This is because each Agent interacts with external systems in a specific way. This allows third party developers to create additional Action Agents and add them to the system to add more functionality to the overall system. There can be multiple agents on a server where an Action Manager is installed. The Action Manager instantiates each of the Agents using Microsoft™ Transaction Server (MTS). This allows the manager to run the agents out of process and still control the agent. MTS further allows the manager to use connection pooling and transactions as needed.
FIG. 26 is an exemplary block diagram of the Action Module Architecture. In one embodiment, the Action Manager 265 is a Windows NT service whose job is to launch Action Agents 266 based upon the requests it receives from the MRC via the message router 261. The Action Manager 265 also has the job of managing and monitoring every instance of the Action Agents 266. The Action Agent 266 is responsible for interacting with 3rd party systems/databases 264. This allows the agents to be able to call specific functions or procedures that each system might need. The Action Agent 266 is also responsible for providing status updates to Action Manager 265. Since the bulk of the processing in the system occurs on the Action Agent, the agents are required to constantly monitor their progress and report it to the manager. The manager is then able to perform load balancing effectively among the various agents that are running.
Preferably, all Action Agents share the same basic architecture, which allows all agents to perform their functions correctly. The only section of the agents that is different is the data retrieval classes. These classes directly depend on the data source that is being accessed. The Agent architecture includes a System Configuration and Monitoring object that is responsible for any system configuration item retrieval and setting. This object is also in charge of providing the monitoring information requested by the manager. Moreover, a Gateway object interfaces with the external systems. Preferably, all of these objects interact via a standard API.
Referring back to FIG. 7, the Broadcast Module 75 is responsible for communicating with disconnected devices such as the Palm Pilot™ and Pocket PC. The current trend of the industry is to provide wireless Internet access to hand-held devices (HHD). In most cases, this is translated as the ability to "browse the web" simply by providing web-browser software with the device. The browsing is done by browsing regular HTML sites or by building special web sites for new types of devices, like WAP-enabled devices. However, browsing the web has a few limitations that limit the ability and use of the HHD. For one, the user has to be on-line (connected) all the time in order to get the information. Low bandwidth and low reception, especially in buildings, are further limitations of web browsing.
In order to overcome these limitations, the system offers two types of configuration for HHD support: Connected Devices and Disconnected Devices. Connected Devices are devices that need to be constantly connected to the network (Internet) in order to submit and retrieve information, just like a desktop browser (or portal), by connecting to the Presentation Module; the only software they need is a browser. Disconnected Devices have additional software that maintains the retrieved information in a local database even when the device is not connected to the network, while still allowing push messages and download of information upon request. Using this configuration allows the user to download all the requested information to the HHD and use it without being connected to the network. Taking into account that push messages are only a small subset of the information a user requires, this configuration provides a much faster, more economical, and more reliable solution. The system provides the necessary software for Disconnected Devices.
Since the disconnected devices do not have a constant connection to the network or Internet, they need to have the information pushed to them when it is available. This is done via publish-subscribe method. The disconnected device subscribes to information that gets pushed when needed. The devices also need to have a way to request information to be pushed to them.
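The publish-subscribe method described above can be sketched as a broker that queues pushed items per device until the device connects and drains its queue. This is a minimal illustration; the class and method names are assumptions, and the real system routes these messages through the Broadcast and Presentation Modules.

```python
from collections import defaultdict, deque

# Minimal publish-subscribe sketch for disconnected devices:
# subscriptions map a topic to device ids, and pushed items are
# queued until the device connects and drains its queue.
class Broker:
    def __init__(self):
        self.subs = defaultdict(set)      # topic -> device ids
        self.queues = defaultdict(deque)  # device id -> pending pushes

    def subscribe(self, device_id, topic):
        self.subs[topic].add(device_id)

    def publish(self, topic, item):
        for device_id in self.subs[topic]:
            self.queues[device_id].append(item)

    def drain(self, device_id):
        # Called when the disconnected device reconnects.
        q = self.queues[device_id]
        items = list(q)
        q.clear()
        return items

broker = Broker()
broker.subscribe("palm-1", "company-news")
broker.publish("company-news", "SophTech signs deal")
pending = broker.drain("palm-1")
```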
The major differences between connected devices and disconnected devices is that whereas connected devices download one page at a time, disconnected devices could potentially request hundreds of pages to be pushed or pulled at any one time. This means that the Broadcast Module needs to be able to handle the formatting (via the presentation module) and sending of all pages to any one device. Further, the Broadcast Module needs to act as a server that can accept TCP/IP connections and keep the connections open until the transfer is completed. The Broadcast Module 75 relies on the Presentation Module 74 to format all of the data that needs to be sent out. The request for information is forwarded to the Presentation Module 74. The Presentation Module then retrieves the data and formats it accordingly. Once the formatting is done, the Broadcast Module is responsible for sending out the information to the requesting device.
FIG. 27 is an exemplary block diagram of a pull process initiated by a disconnected device 270. The disconnected device 270 makes the request for information to the Broadcast Module 75. The Broadcast Module sends the request to the Presentation Module 74, which in turn, sends a request for a gScript to be executed to the Rules Processing module 71. Once the Rules Processing module 71 is done executing the gScript, it sends a collection of knowledge containers back to the Presentation Module 74 for formatting. The Presentation module formats the collection via the Formatter Object 272 and database 271 and sends it back to the Broadcast module 75. The Broadcast module then sends the information back to the disconnected device 270.
In a push scenario, the logic is reversed. A push scenario is one in which the system sends the device information of a type that might not have been requested. The push is initiated by a business rule that is implemented in a gScript. This allows both event-based and content-based rules to push information to a disconnected device. FIG. 28 is an exemplary block diagram of a push scenario. In this scenario, a gScript is instructed to send data out to a disconnected device 280. The gScript sends a message to the MR/C 285 with the details of the data to be pushed and the recipients of the pushed data. The Broadcast Module 75 uses the Presentation Module 74 to format the data accordingly. Once the data is formatted, the Broadcast Module 75 pushes the data out to the disconnected device 280.
As described above, the Presentation Module 74 is responsible for outputting and formatting knowledge containers to connected devices. This external view could be a web server or another application that would like to receive the data in a formatted output. The presentation server allows users to create their own formatting templates and format a knowledge container according to this template. The Presentation Module is the only module that does not share the same architecture as the rest of the system. In one embodiment, the Presentation Module is implemented as a COM object that could be called from any application. This COM object is responsible for retrieving the knowledge container and formatting it according to a retrieved template. This object could be used by a web server. The Presentation Module COM object is a component that allows users to format data in their own format and present it. This component is able to format any data that is retrieved by the system. Further, the component is able to format data to any format. Users have complete control over the size, color, placement, and other layout and formatting options.
The Presentation Module COM object has the following functionality:
Retrieving formatting information from System Repository - the iFormat.
Applying formatting information to the knowledge containers.
Returning the formatted data to the requesting application.
Executing gScripts or gScript segments and waiting for the data to be returned.
FIG. 29 depicts an exemplary process flow of a request to run a gScript segment and format the returned data. In block 290, the request is made from a Web server. The gScript request is sent to the MR/C 292 in block 291. If KCs were included in the response from the MR/C (block 293), the iFormat script is retrieved in block 296. If not, the respective KCs are retrieved from KC repository 295 in block 294. Utilizing the retrieved iFormat script, the KC data is formatted in block 297 and returned to the requester in block 298. In one embodiment, the iFormat scripting language is based on XSL and uses many of the XSL features.
All knowledge containers are preferably saved as XML data. XSL is the formatting language that is used to format an XML file. Although there are other technologies available, presently, XSL has become the standard for formatting XML due to its power and extensibility. XSL uses XML syntax and creates its own flow objects, so it can be used for advanced formatting such as rearranging, reformatting and sorting elements. This enables the same XML document to be used to create several sub document views. XSL also adds provisions for the formatting of elements based on their context in the document, allows for the generation of text, and the definition of formatting macros.
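Given the XSL-based approach described above, an iFormat script might look like the following sketch. This is a hypothetical fragment: it assumes the cleaned-up KC structure shown earlier in this section, and the element names are illustrative rather than the patent's actual iFormat grammar.

```xml
<!-- Hypothetical iFormat sketch: an XSL template that renders the
     Contact portion of a KC as HTML. Element names are illustrative. -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/KC/Contact">
    <html><body>
      <h1><xsl:value-of select="FirstName"/>
          <xsl:text> </xsl:text>
          <xsl:value-of select="LastName"/></h1>
      <p><xsl:value-of select="City"/>, <xsl:value-of select="State"/></p>
    </body></html>
  </xsl:template>
</xsl:stylesheet>
```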
When a gScript is created, the user is able to create an iFormat script for each segment of the gScript. The iFormat script is then saved with a unique name in the system database. A gScript could have multiple iFormat scripts attached to it. All iFormat scripts are grouped accordingly. The system has a predefined set of groups for connected device types. These groups allow the system to automatically choose a default iFormat script for the device type.
The Formatter object is responsible for taking a knowledge container and formatting it according to an iFormat script. The Formatter object receives the knowledge container and the iFormat script and outputs an HTML file based on the knowledge container and iFormat script. FIG. 30 is an exemplary flow diagram of the formatting process. In block 301, a request for formatting a KC is received by the Formatter object. If the KC is included in the request (block 302), the iFormat information is retrieved in block 304 and the KC is formatted according to the iFormat. The steps for the formatting process include instantiating the XML DOM in block 305a, loading the KC XML data and iFormat XSL in block 305b, and translating the XML to HTML in block 305c.
A device inspector object determines the type of device that is requesting the information. The information that the device inspector reports to the Formatter object includes browser type, browser capabilities, and screen size. This information tells the Formatter object which default format to use when formatting the page. A disconnected devices inspector (DDI) object allows the Presentation module to determine the type of browser that is trying to access system information. Once the type is known, the Presentation module is able to select the correct format for the information that is requested. The presentation module is responsible for formatting and serving requested information from the system. The requested information is a knowledge container that contains data. Since more than one device type can access the data, different iFormat scripts could be mapped to a specific KC Schema.
When a KC request is sent to the Presentation module, the Presentation module uses the connected devices inspector to determine the type of browser that the user is using. The Presentation module then uses the correct formatting script to format the KC data and serve it to the requesting user.
The DDI object is able to determine the browser type by utilizing a set of JavaScript scripts to determine the browser capabilities, screen resolution, connection speed, etc. JavaScript supports the ability to report back the browser type, name, and capabilities. Different versions of JavaScript support only a subset of these parameters, but can still respond back with the browser name and type. Any device or browser that does not support JavaScript can still report the browser type and id via the request headers. In order for the Presentation module to retrieve the correct values, the DDI needs to be placed in the web page and the values retrieved need to be passed to the Presentation module.
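The request-header fallback described above can be sketched as a simple classifier over the User-Agent string. The pattern table here is a made-up illustration, not the patent's device list.

```python
# Sketch of the non-JavaScript fallback path: classify the device
# from the User-Agent request header. Patterns are illustrative.
def inspect_device(user_agent):
    ua = user_agent.lower()
    if "windows ce" in ua or "pocket pc" in ua:
        return "pocketpc"
    if "palm" in ua:
        return "palm"
    return "desktop"

device_type = inspect_device("Mozilla/2.0 (compatible; Elaine/3.0) Palm")
```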
The system features a monitoring system that allows system administrators to monitor and control all parts of the system. The monitoring service is responsible for monitoring the individual modules and agents and making sure that they are running. To achieve this, the system uses heartbeat messages and SNMP messages that are sent by the individual applications. A heartbeat object is an object that each application uses to send heartbeat messages to the monitoring service. The heartbeat messages are simply messages that let the monitoring service know that the application is still alive. The information sent to the monitoring service includes the application id, instance id, and server id. Using this information, the monitoring service is able to know if the application is still alive.
All heartbeat messages are sent via a network protocol and bypass the MR/C or any other message bus that the system is using. The heartbeat message is received by the monitoring service and logged. In one embodiment, the heartbeat object is included in every application and DCI. Additionally, the heartbeat object is included in the DCI Framework and the SDK. Each time the application is run, the heartbeat object registers itself with the monitoring service. If one is not available, the heartbeat object keeps looking for one on a set interval. The registration includes sending the first heartbeat message. The monitoring service registers the application, instance, and server in its log, generates a unique id, and returns it to the heartbeat object along with the heartbeat interval. Once the heartbeat object receives the unique id, it only needs to send the unique id, and the monitoring service automatically correlates the unique id to the specific application, instance, and server.
The time interval for the heartbeat is a configurable item. Additionally, the interval could be different for each application. This allows the system administrator to prioritize the heartbeat messages that are coming in. An administrator, for example, might be more concerned with the Rules Processing agent than the Action agent. In this case, the administrator is able to set the heartbeat interval on the Rules Processing agent to fire off every 30 seconds and the Action agent every 60 seconds.
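The registration-then-beat protocol described above can be sketched as follows. Class names, the interval table, and the return shapes are illustrative assumptions.

```python
import itertools

# Heartbeat sketch: each application registers once, receives a unique
# id plus its per-application interval, then sends only the id; the
# monitoring service correlates ids back to (app, instance, server).
class MonitoringService:
    def __init__(self, intervals):
        self.intervals = intervals   # app id -> seconds between beats
        self.registry = {}           # unique id -> (app, instance, server)
        self.last_beat = {}
        self._ids = itertools.count(1)

    def register(self, app_id, instance_id, server_id):
        uid = next(self._ids)
        self.registry[uid] = (app_id, instance_id, server_id)
        return uid, self.intervals.get(app_id, 60)

    def beat(self, uid, now):
        self.last_beat[uid] = now
        return self.registry[uid]    # correlate id back to the application

svc = MonitoringService({"rules-agent": 30, "action-agent": 60})
uid, interval = svc.register("rules-agent", 1, "srv-a")
app = svc.beat(uid, now=0)
```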
The present invention utilizes a graphical user interface (GUI) for ease of use. In one embodiment, the user interface is largely divided into three groups: Configuration Applications, Administration Applications, and User Applications. The intelligent retrieval, assembly, and formatting of information and knowledge is achieved by building a collection of scripts that are powerful directions on how to perform each task. These scripts are developed using the Configuration Applications, mainly by the system's integrators and internal IT, and maintained by trained system managers. In order to use the system, that is, to view the delivered knowledge, the system provides the DeskPortal for desktop users and other connected handheld devices such as WAP phones, and the HandPortal for all disconnected devices like Palm Pilots™ and Pocket PCs. The Administration Applications provide a powerful way of managing, monitoring, and configuring the system and are mainly used by the system administrator.
FIG. 13 is an exemplary block diagram of the UI overview including the following exemplary applications in each category:
Configuration Applications
• Data Design Studio 311
• DCI Designer 312
• Knowledge Container Builder (Synthesis) 313
• Business Logic Builder (Governing) 314
• Studio Analyzer
• Publisher 315
• Expression Builder 316
• Schema Studio 312c
Administration Applications
• System Configurations - users, groups, security, user preferences, group preferences, servers, services, agents, timeouts, threads/processes, caching, load balancing, failover configuration, etc.
• System Monitoring
• System Tracing
• System Log Viewer
User Applications
• DeskPortal
• HandPortal
The above applications could be Windows applications, UNIX applications, LINUX applications, or the like. For data repository and distribution requirements, every application has a small registry that holds system information. This file is distributed with the application. Preferably, all applications require a username and password in order to grant access.
The configuration applications are a collection of tools and applications that allow building, maintaining, and analyzing DCIs, iJobs, gScripts, and iFormats, which are the directions by which the system retrieves, synthesizes, governs, formats, and delivers the information. These tools are used by system integrators, internal IT, or any user with some technical knowledge. In order to provide a consistent interface across all Configuration Applications, all other applications are hosted in a generic application called the Data Design Studio (DDS). Preferably, all the Configuration Applications can be activated and executed from within the DDS based on user security level. Every activity within every application is also security dependent and can be granted or revoked.
The DDS 311 is the main console for the Configuration Applications and allows their execution depending on user security level. The DDS acts as an application launcher for all other applications and provides a consistent look-and-feel for all of them. FIG. 31 shows an exemplary DDS console. The DDS includes the following functions:
• Login 310 - The user should provide a valid username and password in order to load the DDS Console. Preferably, no additional login is required for the rest of the applications.
• Application Launcher - Loads an application into the console working area. Applications can be any of the User Applications described above and the like, depending on the specific user's security level.
• Add-ins - Add/remove additional applications and accessories like Data Viewer, User Log, Schema Editor, etc.
• Windows Selector - Provides functionality like tile, cascade and arrange icons.
• Help.
Generally, on execution the application displays a login screen, which requires username, password, server name, and database name (normally hidden). When installed with the NT Security option, the username and password are taken from the operating system. The application then logs into the system database and retrieves security information on what the user can (and cannot) perform in the system. Based on these values, some of the menu items are disabled or invisible. Upon request, the Application Launcher console loads the requested application into the working area. The new application then connects to the database to retrieve security information as well.
FIG. 32 is an exemplary Design Studio console. The DCI Designer 312 is an application that allows the configuration of any DCI in the system. This configuration includes system information like default username/password, server name, timeout, parameters, events, fields mapping, etc.
The DCI includes the following functions:
• System Information allows the configuration of system-type information, that is, information used while connecting to the data source, like default username & password, server name, database name, etc., for Internet sources.
• Schema Mapping 312a creates the fields mapping between the data source fields and a business object XML schema.
• Parameters Definition 312b creates the XML schema of the parameters accepted by this DCI by selecting the required data source fields.
One of the major features of the DCI is the ability to configure the returned information structure, that is, map the data source entities into one of the system's business object XML schemas. The system provides multiple predefined business object XML schema types, such as a Customer schema, Driving Directions schema, Stock Information schema, etc. These schemas define business objects and are saved in the system's repository. This schema repository can be modified and new schemas can be added as necessary.
An exemplary mapping format is displayed in FIG. 33. On the left side of the screen are the data source fields, in the middle the mapping lines, and on the right the XML schema. Since the DCI's fields are stored in the database, the application retrieves them. If the DCI supports the RefreshEntityFields function, the list can automatically be refreshed. If the DCI doesn't support this functionality (most Internet sites don't), the list can be manually edited. The XML schema on the right can be edited using the Schema Editor application. The field mapping is done by drawing a line between one data source field on the left and one or more of the XML schema fields on the right. When a more complicated mapping is needed, the user can use any of the provided operators, math functions, string functions, and/or time/date functions. This feature allows removing unnecessary fields (by not mapping them) and formatting the fields' type, layout, and content by using system and user-defined functions. This is especially powerful not only because it reduces the time it takes to roll out the system, but also because the data source can change (and often does): entities can be added, removed, or changed. Having a user interface tool reduces the time to apply these changes to the system.
FIG. 34 is an example of mapping a Customer business object (for example, from Siebel™) to a Customer XML schema. In this example, the Middle Name field is not mapped and is therefore not retrieved as part of this KF. Also, the Phone and Fax fields in the data source are separated into Area Code, Prefix, and Number, while the schema represents them as one field. In this case, two functions need to be created:
F1 (for phone number): "(" + PhoneArea + ") " + PhonePrefix + "-" + PhoneNumber; and
F2 (for fax number): "(" + FaxArea + ") " + FaxPrefix + "-" + FaxNumber.
Using these functions, the user has the ability to control each field's data type, length, format, etc. In addition to the functions provided by the system, the user can create and save his own functions, which allows encapsulation of functions and processes, since these functions can then be used in other functions (in the same mapping).
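Written out as plain code, the F1/F2 functions above simply concatenate three source fields into the single schema field. The function name and sample field values below are illustrative.

```python
# The F1/F2 formatting functions from the mapping example: three
# source fields (area, prefix, number) are concatenated into one
# schema field of the form "(area) prefix-number".
def format_number(area, prefix, number):
    return "(" + area + ") " + prefix + "-" + number

f1 = format_number("818", "555", "0142")   # Phone field
f2 = format_number("818", "555", "0199")   # Fax field
```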
FIG. 35 illustrates an example for defining a parameter. Parameters definition functionality allows the creation of XML schema describing the parameters accepted by this DCI. The functionality is similar to the schema mapping functionality only reversed. On the left side is the requested XML schema (from the repository or new) and on the right side the DCI parameters. Since every DCI can have more than one parameter (even for the same business object) a parameter definition mapping needs to be created. The parameters field lists are part of the DCI, since the DCI needs to be programmed to accept them. The mapping provides the ability to change the parameters XML representation without changing the DCI.
The Knowledge Container Builder 313 (KC Builder) is the application used to create Knowledge Containers by graphically creating an iJob. Functions include:
• KC Builder - The main purpose of this application. Creates the iJob script for a KC based on the graphical layout of the KFs and their relationship.
• Syntax Validation.
• KC Analyzer - Provides the ability to test run the KC and see the returned information.
• KC Emulator - Part of the KC Analyzer. This part graphically and in real time emulates the creation of the KC. Together with the KC Analyzer, it provides a powerful tool for testing and debugging a KC.
FIG. 36 shows an exemplary KC builder UI. Using this UI, a user can create a KC from one or more Knowledge Fragments (KF). Using drag-and-drop, the user adds DCIs from the available DCIs in the system. Every DCI creates a KF of a certain type, that is, a business object type like Customer, Product, etc. However, use of DCIs is based on the user's security level and group; that is, a user in the sales group might not have access to Marketing DCIs. The KF type (that is, its XML schema) is defined by the first DCI added to it. Each KF is based on one or more DCIs (see below), and together they define the KC. The result of the KC Builder is an iJob, which is basically a script describing how to build the KC. The script is then translated into an XML document.
The user can also define input parameters as an XML document. Using the parameter mapping (see above), those parameters can be mapped to one or more KFs. Examples of parameters can be UserName or CompanyName for returning information regarding a specific user and a specific company, etc.
Since most DCIs require parameters, the user can click on the connector line between two DCIs and define the "to" DCI's input parameters using the "from" DCI's available schema fields. FIG. 37 shows an example of such a mapping. In this example, the address fields from a customer KF (Siebel™ for example) are mapped to the "To Address" fields of a driving directions KF (MapQuest for example). Similar to the DCI Designer's mapping functionality, the mapping is done simply by drawing the line. The user can also apply formatting functions or use the system values described below:
%Username% Requestor's username
%ServerName% Current server name
%DatabaseName% Repository database name
%Err% Error object. Holds the error number, description, etc.
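The system-value substitution listed above can be sketched as a simple token replacement performed at execution time. The token names come from the table above; the resolution mechanism shown here is an assumption for illustration only.

```python
# A minimal sketch of system-value substitution: tokens such as %Username%
# embedded in a mapped parameter value are replaced with their current
# runtime values; unknown tokens are left untouched.
import re

def resolve_system_values(text: str, values: dict) -> str:
    """Replace %Token% markers with their runtime values; keep unknown tokens."""
    return re.sub(r"%(\w+)%", lambda m: values.get(m.group(1), m.group(0)), text)

# Illustrative runtime values (assumptions):
runtime = {"Username": "jdoe", "ServerName": "srv01", "DatabaseName": "repo"}
print(resolve_system_values("Requested by %Username% on %ServerName%", runtime))
```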
In some cases, in order to provide a complete representation of a business object, the content of a KF needs to be "filled" from multiple DCIs. For example, user information may be stored on multiple systems, yet the user KF information needs to be represented as one KF in the KC. One way to achieve this functionality is to create multiple KFs from multiple DCIs in one KC and take care of the logic in the gScript using this KC. However, one of the KC's goals is to encapsulate all technical and system information, leaving the gScript that uses it with logic and business rules only. In order to provide this functionality, each KF can contain multiple DCI Objects (DCIs) combined using the AND or OR commands. FIG. 38A displays the user interface representation of this example.
In one embodiment, there is no limitation on the number of DCIs in one KF as long as they share the same schema. This is important since trying to include multiple DCIs with different schemas will generate an error, because there is no way to consolidate the returned information into a single schema, which is the KF's schema. The KF can support both AND and OR logical operations, where AND is the sum of all information returned from all data sources, while OR is used to get only one set of information from only one data source.
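The AND/OR semantics described above can be sketched as follows, assuming each DCI returns a list of records sharing the KF's schema. The function and data names are illustrative assumptions.

```python
# AND is the sum of all information returned by all sources; OR returns
# only one set of information, from the first source that yields data.
def build_kf(dci_results: list, op: str) -> list:
    if op == "AND":  # concatenate every source's records
        merged = []
        for records in dci_results:
            merged.extend(records)
        return merged
    if op == "OR":   # first non-empty result wins
        for records in dci_results:
            if records:
                return records
        return []
    raise ValueError("op must be AND or OR")

# Illustrative records from two hypothetical sources:
crm = [{"name": "Jane"}]
hr = [{"name": "Jane", "dept": "Sales"}]
print(build_kf([crm, hr], "AND"))  # both sources contribute
print(build_kf([[], hr], "OR"))    # falls through to the first source with data
```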
Another usage for multiple DCIs using the OR operation in one KF is fail over support. Some sources may not respond due to some internal failure; however, the system should always provide information. Using a fail over DCI can prevent KC creation failures. FIG. 38B displays an example of three driving directions DCIs that ensure the return of driving directions.
Since the order in which the AND/OR phrases are positioned in the KF can usually change its meaning, the system can support DCI grouping. Let's examine the following requirement:
Extract the information from system A and from systems B or C whichever return the information first.
FIG. 38C describes one solution; however, based on this logic the returned information will be the sum of the information from systems A and B, or the information from system C, which is not the requested solution. Even if the order of DCIs in the KF is switched as shown in FIG. 38D, the result is still the same. Only by using DCI grouping can the right solution be provided in a simple and easy way.
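The grouping requirement above amounts to evaluating "system A AND (system B OR system C)" with the OR resolved inside its group before the AND combines results. The following sketch expresses this as a nested expression; the names are illustrative, and real DCIs would call out to their data sources.

```python
# Evaluate a nested ('AND'|'OR', [children]) tree; leaves are record lists.
def evaluate(expr):
    if isinstance(expr, list):  # a leaf: records returned by one DCI
        return expr
    op, children = expr
    results = [evaluate(c) for c in children]
    if op == "AND":
        return [r for rs in results for r in rs]
    # OR: whichever grouped source returned information
    return next((rs for rs in results if rs), [])

system_a = [{"src": "A"}]
system_b = []                  # B failed to return information
system_c = [{"src": "C"}]
grouped = ("AND", [system_a, ("OR", [system_b, system_c])])
print(evaluate(grouped))
```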
Using this configuration allows the creation of sophisticated KCs from multiple systems. However, sometimes this functionality is not sufficient, and a more powerful mechanism is required in order to decide from what system the information needs to be returned. A simple example is the retrieval of driving directions when different countries are involved. While no one system can provide the directions for all countries, one KF represents the Driving Directions business object, encapsulating the complexity of deciding from which system the information is retrieved. In this case multiple DCIs need to be used with the appropriate logic to select the appropriate DCI for the right data.
In order to achieve this functionality the system supports the "When...Is...Use" command. This command is a simple case statement based on the value of one or more of the KF's schema fields. (Since all DCIs share the same schema, it is sometimes referred to as the KF schema.) FIG. 38F describes the graphical representation of three systems required to provide driving directions.
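The "When...Is...Use" command can be sketched as a case statement on one schema field that selects which DCI supplies the information. The country values and DCI names below are illustrative assumptions.

```python
# A case statement on a KF schema field choosing the DCI to use.
def select_dci(params: dict, when_field: str, cases: dict, default=None):
    """Return the DCI chosen by the value of one schema field."""
    return cases.get(params.get(when_field), default)

# When Country Is "US" Use MapQuestDCI, Is "UK" Use UKDirectionsDCI, ...
# (DCI names are hypothetical)
cases = {"US": "MapQuestDCI", "UK": "UKDirectionsDCI", "FR": "FranceRoutesDCI"}
print(select_dci({"Country": "UK"}, "Country", cases, default="MapQuestDCI"))
```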
An exemplary layout of a KC Analyzer is shown in FIG. 39. Using the KC Analyzer, the user can actually view the content of the KC. Running the KC Analyzer causes execution of the iJob and the population of the KC with information from the appropriate systems. Although a user might have access to use or build a KF, he might not have access to view the information retrieved by it; in this case the secured KF(s) will be empty. Since the result is an XML document, the information is represented in a tree format. The KC Analyzer can also execute parts of the KC; in this case all the parameters usually provided by other KFs need to be manually entered. FIG. 40 shows how to use the KC Analyzer to view the connection of only two KFs. The Analyzer also supports multiple statistical reports on the execution of the KC.
FIG. 41 depicts an exemplary layout for a KC Emulator. The KC Emulator is an extension to the KC Analyzer. As shown, the KC Emulator displays an animated representation of the creation of the KC in real time. In addition the emulator provides statistical information on memory usage, disk I/O, performance, etc. Using the KC Emulator, the user can actually "see" how the data is retrieved from the multiple systems, change KFs dependencies, identify bottlenecks, compare retrieval time between different systems, etc.
An important functionality of the Emulator is the ability to apply Color Filters, that is, the ability to color a KF in a different color depending on a certain threshold. A simple example would be to color red all KFs that exceed 2 seconds in retrieving information. Using this color filter the user can easily identify bottlenecks.
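The Color Filter idea above can be sketched as a threshold rule that maps each KF's retrieval time to a display color so bottlenecks stand out. The 2-second threshold matches the example in the text; the color names and KF names are assumptions.

```python
# Color red all KFs that exceed the threshold in retrieving information.
def color_filter(retrieval_seconds: float, threshold: float = 2.0) -> str:
    return "red" if retrieval_seconds > threshold else "default"

# Hypothetical per-KF retrieval timings:
timings = {"CustomerKF": 0.4, "NewsKF": 3.1, "DirectionsKF": 1.9}
flagged = {kf: color_filter(t) for kf, t in timings.items()}
print(flagged)
```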
The Business Process Builder 314 is the application used to create all business rules and business processes by graphically creating a Governing Script (gScript). Using the builder, a user can create processes that include KCs, content type rules, perform actions, push information to hand held devices, etc. The Business Process Builder 314 includes the following functions:
• gScript Builder - The main purpose of this application. Creates a gScript for a process based on the graphical layout of the components.
• Syntax Validation.
• gScript Analyzer - Provides the ability to test run the gScript and see the returned information, actions, etc.
• gScript Emulator - Part of the gScript Analyzer. This part graphically and in real time emulates the execution of the gScript. Together with the gScript Analyzer, it provides a powerful tool for testing and debugging a gScript.
• Events Mapping - Maps an event to one or more gScripts.
A gScript Builder is used in creating business rules and business processes using a GUI. Every gScript can be based on KCs, predefined actions (like sending e-mail, executing applications, etc.), writing to other applications, broadcasting to users (push), etc. The Builder implements a flowchart look-and-feel mechanism that allows the creation of these rules and processes. Although the gScript Builder shares some of the KC Builder's functionality, the gScript Builder has a much richer language. Table 1 includes some of the functions and commands the language has. Before the script is saved, it is translated into an XML document. The syntax of the document is checked by the syntax validation process.
FIG. 42 displays an exemplary layout of the gScript Builder. In order to build a gScript, the Builder is divided into three main sections: the Toolbox 422, the Working Area (gScript View) 424, and the Available KCs 426. The Toolbox 422 includes some of the available functions (Table 1) that can be used during the creation of the gScript. The Available KCs 426 area is a collection of the KCs needed in the gScript. This collection is maintained by the user and additional KCs can be added at any time.
The mode of every KC is by default Private to the gScript being used, which means that the KC's values can't be viewed by any other object using the gScript (like another gScript or an iFormat). Since all KCs are private by default, the default output of a gScript is nothing. If the information stored in a KC needs to be exposed, its mode can be changed to Public. Having KCs as private allows encapsulation of information that is needed for business rules only and should not be exposed publicly. When one or more KCs are public, the output of the gScript is a collection of these public KCs. This collection can also be converted to one KC containing all other KCs if needed.
Sometimes there is a need to persist a KC, that is, keep the same contents of the KC between different executions of the gScript. An example is general news that refreshes once a day: it is not efficient to retrieve the information every time the gScript is executed if the result is the same. To prevent reloading the same data, a KC can support content persistence, which basically caches the returned information in the database and returns it instead of retrieving the information from the data source. Persistency is defined as a time frame in which the data is not refreshed; this time frame can range from seconds to days. The persistency can be overridden by the gScript if needed, providing an even wider range of possibilities.
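The content persistence described above can be sketched as a time-framed cache: within the persistence window the cached information is returned, and the gScript can override the window to force a refresh. The cache structure and clock handling are illustrative assumptions.

```python
# A minimal sketch of KC content persistence: cache the returned information
# for a defined time frame and serve it instead of re-querying the source.
import time

class PersistedKC:
    def __init__(self, fetch, ttl_seconds: float):
        self.fetch = fetch        # function that queries the data source
        self.ttl = ttl_seconds    # persistence time frame
        self._cached = None
        self._loaded_at = None

    def get(self, force_refresh: bool = False):
        now = time.monotonic()
        expired = self._loaded_at is None or now - self._loaded_at > self.ttl
        if force_refresh or expired:  # gScript can override persistence
            self._cached = self.fetch()
            self._loaded_at = now
        return self._cached

calls = []
def fetch_news():
    calls.append(1)
    return ["general news"]

kc = PersistedKC(fetch_news, ttl_seconds=3600)
kc.get(); kc.get()  # second call is served from the cache
print(len(calls))
```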
However, persisted KCs in a gScript are local to that specific gScript and can't be shared by other gScripts. Consider the following example:
A complicated gScript contains the CNN News KC as one of its KCs. This KC is persisted in a 12-hour time frame, that is, it refreshes itself at 12:00 AM and 12:00 PM. However, when a certain event triggers, the KC needs to be refreshed immediately.
This requirement can't be achieved using the gScript's local KCs, hence Global KCs. A Global KC is one that belongs to a special type of gScript: a System gScript. A System gScript is a gScript that doesn't belong to any group but that any group (or user) can potentially have access to. All System gScripts start immediately when the system first starts. Every System gScript has its own security settings and therefore can't be exposed without the right permissions. In order to use a Global KC in another gScript, a Linked KC has to be created. This KC is an "image" of a Global KC, and it can display the content of the Global KC, change it, or send commands to refresh it. FIG. 43 describes one solution using Global and Linked KCs.
The gScript Analyzer allows stepping through the script and viewing the execution path while outlining the actual path. The Analyzer also allows displaying the content of every KC in the script (KC Zoom). FIG. 44 shows an exemplary layout for a gScript Analyzer. The Analyzer also allows running just part of the gScript by selecting a different starting point, as shown in FIG. 45 A.
In cases when a KC returns multiple records that are used as the parameters to another KC, an IF...THEN...ELSE command, or any other command, stepping through the KC records outlines the path selected for each record. FIG. 45B shows a case where the first record generated a path that causes an action (Action A). FIG. 45C shows the next record generating a different path (the result of the IF...THEN...ELSE was Yes this time), which caused saving some information in the database. The gScript Emulator 314b of FIG. 31 shares the same functionality with the KC Emulator. The Event Mapping 314c allows the mapping of events (both for the System and other systems) to one or more gScripts.
The system further includes the following publishing functions:
• Palm Pilot and WAP Phone Publisher.
• gScript Segmentation.
The Publisher application 315 is used to create UI screens for Palm Pilot™ PDAs and WAP enabled phones. Using the Publisher 315, the user can create HTML, WML, or HDML forms for multiple KCs using a simple drag-and-drop user interface. The user doesn't need to know HTML, WML, or HDML in order to use the application. After the user has finished designing the appropriate forms, the Publisher creates XSL templates or Intelligent Templates (iTemplates), which generate the necessary HTML, WML, or HDML once executed by the Broadcast Server or the Presentation Server. Those templates are saved in the system's repository.
FIGS. 46A and 46B illustrate exemplary Palm Pilot™ and WAP Publisher layouts. The layout of the publishers is simple and resembles a report generator (like Crystal Reports), in which fields, labels, lines, and other objects are dropped on a design sheet. In this case the design sheet is a Palm Pilot or phone. The system can support more than one iTemplate for every KC. When saving, the user specifies for which user or group of users the iTemplate applies. This allows every user or group of users to have their own look and feel. The publisher allows creating links (anchors) to other forms, which allows creating multiple forms for one or more KCs. This, however, raises an important issue of breaking up a gScript into multiple sub scripts.
When connected users are using the system, like local users using the browser or remote users using WAP phones, most of the time there is no need to execute the complete gScripts and generate multiple KCs unless the user actually progresses to the next step of the script. This is because of the limited screen space, which prevents displaying the complete information gathered in a gScript.
FIG. 47 provides an example of a situation where a gScript needs to be segmented. In this example, a gScript was created to provide contact information that includes the following KCs: contact information, driving directions, and news about the contact's company. Since the gScript describes a business process or a business entity, it should not change depending on the user interface or device used to view the information. However, a WAP phone does not have enough screen "real estate" to actually display the information.
In one embodiment, the information is divided into three separate pages. The first page is the contact information. This page includes two links to the next two pages, the news pages and the driving directions page. Based on this, the user might never view the driving directions page or the news page, although the KCs containing the information were already loaded.
In order to prevent unnecessary waste of time, memory, and resources, any gScript can be segmented into one or more sub scripts. This allows just in time (JIT) execution of the sub script, providing faster response time and less use of memory. This formatting is not part of the gScript definition and does not change the gScript itself (it is not visible in the gScript Builder).
FIG. 48 depicts an example of segmentation of the previous example. Here the "Master" gScript is segmented into three sub scripts: A1, A2, and A3. This way each sub script executes only upon request. This functionality is possible since every sub script automatically defines what parameters are expected based on the original gScript. In this example, sub script A2 requires address fields and sub script A3 requires a company name. Every gScript can have multiple segmentation plans that can be used by different devices. This is powerful since all the business logic is located in one gScript while all other formatting is done outside. Any changes to the gScript are automatically trickled down to all sub scripts and therefore to every device or format.
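The segmentation above can be sketched as sub scripts that run only when requested, each declaring the parameters it expects. The sub-script names and parameter lists follow the example in the text; the execution mechanics are assumptions.

```python
# Just-in-time execution of sub scripts, each declaring its expected
# parameters derived from the original "Master" gScript.
class SubScript:
    def __init__(self, name, required_params, run):
        self.name = name
        self.required_params = required_params
        self.run = run

def execute(sub: SubScript, params: dict):
    """Run a sub script just in time, checking its expected parameters first."""
    missing = [p for p in sub.required_params if p not in params]
    if missing:
        raise ValueError(f"{sub.name} missing parameters: {missing}")
    return sub.run(params)

# A2 requires address fields; A3 requires a company name (bodies are stubs):
a2 = SubScript("A2", ["Address"], lambda p: f"directions to {p['Address']}")
a3 = SubScript("A3", ["CompanyName"], lambda p: f"news about {p['CompanyName']}")

# Only the page the user actually navigates to triggers its sub script:
print(execute(a3, {"CompanyName": "Acme"}))
```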
The Expression Builder 316 of FIG. 31 is the element used to build formatting functions such as the ones used in schema mapping, parameters definitions, and gScripts. The Expression Builder 316 provides two levels of complexity: Simple and Advanced. After the expression is completed, the Builder checks the syntax for grammatical errors. The Expression Builder then translates the expression to an XML document and returns it to the calling application. The Expression Builder includes the following functions:
• Advanced Expression Builder,
• Simple Expression Builder,
• XML Interpreter, and
• Syntax Validation.
The Advanced Expression Builder is a window containing four areas: elements tree, element description, expression area, and available KC(s). The elements tree contains the system functions and system constants. The element description area provides a description of the selected function or constant. The expression builder area is the place where the user inputs the expression, and the available KC(s) area includes the calling application parameters (like iJob or gScript parameters).
The Simple Expression Builder provides a much simpler interface for building simple expressions. The user of this builder doesn't need to know how to build expressions but gets a simple screen with Name/Operator/Value combinations to fill in. This builder can be used by less technical people to apply or change business rules. An XML interpreter translates the expression string into an XML document. A syntax validation function validates the syntax of the expression string. Depending on the supported scripting language (currently VB Script only), a 3rd party component is used to check the validity of the expression.
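The Name/Operator/Value combinations can be sketched as rows evaluated against a record, letting less technical users express a business rule without writing an expression. The operator set and field names shown are illustrative assumptions.

```python
# Each Name/Operator/Value row is one condition; all rows must hold.
OPERATORS = {
    "=":  lambda a, b: a == b,
    "<>": lambda a, b: a != b,
    ">":  lambda a, b: a > b,
    "<":  lambda a, b: a < b,
}

def evaluate_rows(rows, record) -> bool:
    """All Name/Operator/Value rows must hold for the rule to apply."""
    return all(OPERATORS[op](record.get(name), value) for name, op, value in rows)

# Hypothetical rule: Country = "US" AND OrderTotal > 1000
rule = [("Country", "=", "US"), ("OrderTotal", ">", 1000)]
print(evaluate_rows(rule, {"Country": "US", "OrderTotal": 2500}))
```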
The administration applications are a collection of applications that allow the maintenance, configuration, and monitoring of the system. Preferably, these applications are only used by the system administrator(s). The system configuration application is used to maintain all aspects of the system, like adding users, groups, preferences, etc. The application also provides a view on the current elements installed and running in the system, like services, load balancing, etc. The application layout has two main parts: the Information Tree, which includes categories and sub categories, and the Information Panel, which displays detailed info on the selected item in the tree. Included functions are:
Configuration:
• User Configuration
• Groups Configuration
• KC Categories
• iJobs/KC Configuration
• gScripts Configuration
• iFormats Configuration
• DCIs Configuration
• Security Configuration
• User Preferences Configuration
• Group Preferences Configuration
• Load Balancing Configuration
• Fail Over Configuration
System:
• Servers Status and Configuration
• Managers Status and Configuration
• Agents Status and Configuration
• Actions Status and Configuration
• Formats Status and Configuration
• Events Status and Configuration
Tasks:
• Scheduler
• Current Activities
Cache:
• Knowledge Containers
FIG. 49 is an exemplary block diagram depicting configuration components and their relationships. User configuration 4902 allows adding, editing, and removing users from the system. User information includes personal information, contact information, etc. By default, all user information is stored in the local database; however, the system supports other repositories and directories like NT, LDAP, Active Directory, and others. When selected in the tree, an Information Panel displays the list of users. Listed below is other information available from this point:
• Groups the user belongs to
• KCs the user can or can't view or use in designing gScripts. This list doesn't include KCs that are part of gScripts this user can execute.
• gScripts the user can execute
• User preferences:
  a. Registered gScripts
  b. Push preferences
Group configuration 4906 allows adding, editing, and removing groups from the system. Group information includes name, description, etc. Listed below is other available information:
• User in this group
• KCs the group can or can't view or use in designing gScripts. This list typically doesn't include KCs that are part of gScripts this group can execute
• gScripts the group can execute
• User Preferences:
  a. Registered gScripts
  b. Push preferences
KC categories allow adding, editing, and removing of KC categories. A category includes a name, description, etc. KC categories are just an easy way to categorize KCs and are not an entity of the system. iJobs/KCs configuration 4910 allows adding, editing, and removing of KCs by launching the KC Builder application. The configuration includes name, description, private/public flag, persistency information, dependency information, etc. This functionality also allows enabling or disabling the KC. Typically, a disabled KC does not run the associated iJob and therefore does not return any information. Listed below is other available information:
• Default iFormat information
• Users and groups and their appropriate security information
gScripts configuration 4912 allows adding, editing, and removing of gScripts by launching the gScript Builder application. The configuration includes name, description, private/public flag, persistency information, dependency information, etc. This functionality also allows enabling or disabling the gScript. Typically, a disabled gScript does not run and therefore does not return any information. Listed below is other available information:
• iFormats Information - every gScript can have multiple iFormats. Allows enabling or disabling an iFormat
• Associated Events - events that cause the execution of this gScript. Allows association and disassociation.
• Included KCs - allows disabling a KC in this gScript only
• Included Actions - allows disabling an Action in this gScript only
• Users
• Groups
iFormats configuration 4914 allows adding, editing, and removing of iFormats by launching the Publisher application. The configuration includes name, description, etc. This functionality also allows enabling or disabling the iFormat. A disabled iFormat is not accessible for use.
DCI configuration 4916 allows adding, editing, removing, enabling, and disabling of DCIs in the system. The configuration includes name, description, type, etc. Since every DCI can have more than one associated schema, information such as Schemas Information is also available. Load Balancing configuration 4922 allows configuration of the load balancing mechanism. Fail over configuration 4924 allows configuration of the fail over mechanism.
The Server status and configuration function allows adding, editing, removing, enabling, disabling, and pausing of servers in the system. The configuration includes name, description, type, etc. Other information, such as Services/Managers for managers running on the server, may be used to enable/disable a manager. The Managers status and configuration function allows adding, editing, removing, enabling, disabling, and pausing of managers in the system. The configuration includes name, description, type, etc. Listed below is other information available:
• Agents - agents running under the manager. Allows enable/disable an agent.
• Processes - other real time processes used by the manager. Allows killing a process.
• Agent Processes - all agents' processes. Allows killing a process.
The Actions status and configuration function allows adding, editing, removing, enabling, and disabling of actions in the system. The configuration includes name, description, type, etc. Other available information includes Action Processes.
The System tracing application allows the tracing of events and messages routed in the system, including messages containing specific information. Using this tool a user can trace every piece of information in the system and outside. The tracing mechanism itself is a COM component that can be included in other applications, like the System Monitoring. The trace supports capturing the information to a file or a database table. It includes the following functions:
• Trace console
• Trace Information (Information Window)
• Trace Client and Trace Window
A Trace Console provides the ability to initiate multiple trace windows without the need to log in to the system when creating a new trace window. The Trace Console is an MDI window that can host multiple trace windows and acts as the tracing framework. When starting the trace client, a login window appears which requires the user to log in to the system.
Additional functions that the Trace Console provides are:
• Get a list of all saved traces (public and private)
• Load a specific trace
• Kill/Stop active trace
A Trace Information or Information Window is provided wherein the user can define the trace definitions. Trace information is: trace name, trace description, trace server name, trace server port number, capture file path, messages to trace (this is a collection), color and font of each message, included elements in each message, other stand alone elements and corresponding values, and a Private/Public flag. Among other buttons, the trace information window has a SAVE and an OK button. Once the user hits SAVE or OK, the trace information is saved to the database. Pressing the OK button starts the trace.
The ability to save a trace is very useful because the user can simply run an already saved trace without the need to redefine it. By default, all traces are private, which means they can only be used by the user who created them. A Public trace is a trace that can be used by all users.
The Trace Window is actually divided into two parts:
1. Trace Window - the hosting window that displays the traced information in a grid format with the appropriate font/color. The window also provides additional functionality like Print, Clear, Copy, etc. Preferably, the Trace Window is an ActiveX control.
2. Trace Client - a COM component that communicates with the trace server and retrieves the requested information. This component can work without the trace window.
The Trace Window provides a framework for the trace information. The traced messages are displayed in the main pane of the window in a grid format. The grid itself is fully customizable and can be changed by the user. The window itself is implemented as an ActiveX control, which allows including it in other applications (like the System Monitor). The Trace Window supports the following functionality:
• Edit the trace - change the trace definitions
• Start/Stop/Pause
• Copy/Clear
• Save/Print
A Trace Client, a COM component that communicates with the Trace Server and retrieves the requested trace messages, is also available.
A System Monitoring application allows monitoring the status of servers, services, managers, and agents in the system. The application provides a GUI in multiple levels, starting from the servers level and ending in a single Agent view. The application provides detailed information regarding every monitored component in the system, like memory usage, I/O, CPU usage, number of processes, number of threads, etc. The System Monitoring application includes a Monitoring Console and a Monitoring Window.
A Monitoring Console is provided as an MDI window that can host multiple monitoring windows and acts as the monitoring framework. When starting the Console, a login window appears which requires the user to log in to the system. The Monitoring Console is similar to the Trace Console. A Monitoring Window provides a graphical representation of the status of the system or parts of the system. It provides a clear and detailed picture of the status of the system and allows applying color masks. The monitor provides four levels of status:
• System Architecture Status
• Server Status
• Manager Status
• Agent Status
The user can navigate from one level to another (zoom in) and back (zoom out). Zooming in is achieved by double clicking a component, which in return is displayed at the center of the screen with all other connected modules and components around it. Zooming out is accomplished by double clicking the component immediately below the centered component.
The monitor provides a wide variety of statistical information that can be selected for monitoring. Each piece of information can be colored differently and is displayed as a bar. The user can then define different thresholds by which the status changes from Normal (green) to Warning (yellow) to Critical (red). In addition, the user can apply color masks on the main tracing area, that is, define threshold filters that cause the coloring of the displayed components appropriately. An example of a mask can be: color red all servers with CPU usage of more than 70%.
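The threshold coloring and masking above can be sketched as follows: each monitored value is mapped to Normal (green), Warning (yellow), or Critical (red), and a color mask flags components crossing a filter such as "CPU usage above 70%". The 70% figure comes from the example in the text; the other numbers and server names are assumptions.

```python
# Map a monitored value to its status color via two thresholds.
def status_color(value: float, warning: float, critical: float) -> str:
    if value >= critical:
        return "red"
    if value >= warning:
        return "yellow"
    return "green"

# A color mask: flag every server whose metric exceeds the filter threshold.
def apply_mask(servers: dict, metric_threshold: float) -> list:
    return [name for name, cpu in servers.items() if cpu > metric_threshold]

# Hypothetical CPU usage readings:
servers = {"srv01": 45.0, "srv02": 82.0, "srv03": 71.5}
print(status_color(82.0, warning=60, critical=80))
print(apply_mask(servers, metric_threshold=70))
```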
It is understood that the exemplary schemes and the respective implementations described herein and shown in the drawings represent only exemplary embodiments of the present invention. Indeed, various modifications and additions may be made to such embodiments without departing from the scope of the invention as defined by the attached claims. For example, the present invention can be implemented utilizing a variety of computer operating systems and programming languages.

Claims

WHAT IS CLAIMED IS:
1. A system for intelligent assembling of information from a plurality of data sources comprising: a knowledge processing module for intelligently assembling information into a plurality of knowledge containers; a rules processing module for evaluating and executing a plurality of rules; an information processing module for interfacing with the plurality of data sources and interacting with a second system; an action module for invoking actions in the second system; a presentation module for outputting and formatting the plurality of knowledge containers to a respective connected device; and a broadcast module for communicating with a disconnected device.
2. The system of claim 1, further comprising a message processor for communication between each of the modules.
3. The system of claim 1, wherein each of the modules includes an agent for processing a task, a manager for instantiating an agent responsive to a request, and a communication processor for receiving a request from a manager and forwarding the received request to one or more managers.
4. The system of claim 1 , wherein the knowledge processing module comprises: a plurality of knowledge agents for creating the plurality of knowledge containers; a knowledge processing manager for managing the plurality of knowledge agents; and a user interface application for creating a script for extracting information.
5. The system of claim 4, wherein the knowledge processing module further comprises a monitoring application for monitoring and analyzing the system.
6. The system of claim 4, wherein each of the knowledge containers includes a data element as a building block to build more information.
7. The system of claim 1, wherein the rule processing module comprises: a user interface for creating a script and assigning the created script to a rule; a rules agent for evaluating the rule and executing the assigned script responsive to the rule; a rules manager for managing the rules agent; and a rules database for storing rules.
8. The system of claim 1, wherein the action module comprises: an action agent for interacting with an external system; and an action manager for instantiating and running the action agent in the external system for performing a task in the external system.
9. The system of claim 8, wherein the action agent comprises: a monitoring object for providing monitoring information; and a gateway object for interfacing with the external system.
10. The system of claim 1, wherein the connected device includes an Internet browser.
11. The system of claim 1 , wherein the connected device includes one or more of a hand held device and a pocket PC.
12. The system of claim 1, wherein the knowledge processing module, the rules processing module, the information processing module, the action module, the presentation module, and the broadcast module reside in one or more servers.
13. A method for intelligently assembling information from a plurality of data sources comprising the steps of: interfacing with the plurality of data sources; interacting with an external system; evaluating and executing a plurality of rules; invoking actions in the external system responsive to evaluating and executing a rule; intelligently assembling information including information from the plurality of data sources into a plurality of knowledge containers; and outputting the plurality of knowledge containers to an external device.
14. The method of claim 13, wherein the step of intelligently assembling information comprises the steps of: creating the plurality of knowledge containers; managing the plurality of knowledge agents; and creating a script for extracting information.
15. The method of claim 13, further comprising the step of monitoring and analyzing processes.
16. The method of claim 14, further comprising the step of building more information by using a data element as a building block.
17. The method of claim 13, wherein the step of evaluating and executing comprises the steps of: creating a script and assigning the created script to a rule; evaluating the rule and executing the assigned script responsive to the rule; and storing the created rule.
18. The method of claim 13, wherein the step of invoking actions comprises the steps of: interacting with an external system; and instantiating and running an action agent in the external system for performing a task in the external system.
19. The method of claim 18, further comprising the step of providing monitoring information.
20. The method of claim 13, wherein the connected device includes one or more of a hand held device, a pocket PC, and an Internet browser.
21. A computer readable medium having stored thereon a set of instructions including instructions for intelligently assembling information from a plurality of data sources, the instructions, when executed by a plurality of computers connected to a computer network, causing the computers to perform the steps of: interfacing with the plurality of data sources; interacting with an external system; evaluating and executing a plurality of rules; invoking actions in the external system responsive to evaluating and executing a rule; intelligently assembling information including information from the plurality of data sources into a plurality of knowledge containers; and outputting the plurality of knowledge containers to an external device.
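The agent/manager pattern recited in the claims above — a manager that instantiates agents in response to requests (claim 3), and a rules agent that evaluates a rule and executes its assigned script against a rules database (claim 7) — can be sketched roughly as follows. This is an illustrative assumption by the editor, not code disclosed in the specification; all class, method, and field names here are hypothetical.

```python
# Hypothetical sketch of the rules agent / rules manager of claims 3 and 7.
# The patent does not disclose this code; names are illustrative only.

class RulesAgent:
    """Evaluates a rule and executes the assigned script responsive to it (claim 7)."""
    def __init__(self, rule, script):
        self.rule = rule        # predicate over a knowledge container
        self.script = script    # action executed when the rule fires

    def evaluate(self, container):
        # Execute the assigned script only when the rule evaluates true.
        if self.rule(container):
            return self.script(container)
        return None


class RulesManager:
    """Instantiates and manages rules agents in response to requests (claim 3)."""
    def __init__(self):
        self.rules_db = []      # stands in for the rules database of claim 7

    def add_rule(self, rule, script):
        # Manager instantiates an agent responsive to a request to register a rule.
        self.rules_db.append(RulesAgent(rule, script))

    def process(self, container):
        # Run every stored rule against the knowledge container; keep fired results.
        results = [agent.evaluate(container) for agent in self.rules_db]
        return [r for r in results if r is not None]


# Usage: invoke an action in an external system when a knowledge
# container reports a low inventory level.
manager = RulesManager()
manager.add_rule(lambda c: c["inventory"] < 10,
                 lambda c: f"reorder {c['sku']}")
print(manager.process({"sku": "A-42", "inventory": 3}))   # ['reorder A-42']
```

In this reading, the "script" of claim 7 is simply a callable bound to a rule, and the manager's rules database is the collection of agent instances it has created.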
PCT/US2003/008726 2003-03-21 2003-03-21 Knowledge governing system and method WO2004095303A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2003/008726 WO2004095303A1 (en) 2003-03-21 2003-03-21 Knowledge governing system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/US2003/008726 WO2004095303A1 (en) 2003-03-21 2003-03-21 Knowledge governing system and method

Publications (1)

Publication Number Publication Date
WO2004095303A1 true WO2004095303A1 (en) 2004-11-04

Family

ID=33308978

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2003/008726 WO2004095303A1 (en) 2003-03-21 2003-03-21 Knowledge governing system and method

Country Status (1)

Country Link
WO (1) WO2004095303A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8150798B2 (en) 2006-10-10 2012-04-03 Wells Fargo Bank, N.A. Method and system for automated coordination and organization of electronic communications in enterprises

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6269393B1 (en) * 1999-03-23 2001-07-31 Microstrategy, Inc. System and method for automatic transmission of personalized OLAP report output
US6279033B1 (en) * 1999-05-28 2001-08-21 Microstrategy, Inc. System and method for asynchronous control of report generation using a network interface
US6567796B1 (en) * 1999-03-23 2003-05-20 Microstrategy, Incorporated System and method for management of an automatic OLAP report broadcast system



Similar Documents

Publication Publication Date Title
US11714665B2 (en) Method and apparatus for composite user interface creation
US8341595B2 (en) System and method for developing rich internet applications for remote computing devices
US6918088B2 (en) Service portal with application framework for facilitating application and feature development
US7627658B2 (en) Presentation service which enables client device to run a network based application
US7962565B2 (en) Method, apparatus and system for a mobile web client
US7415607B2 (en) Obtaining and maintaining real time certificate status
US7581011B2 (en) Template based workflow definition
KR100600959B1 (en) Provisioning aggregated services in a distributed computing environment
US7937655B2 (en) Workflows with associated processes
US7272782B2 (en) System and method for providing offline web application, page, and form access in a networked environment
KR101693229B1 (en) Communicating with data storage systems
EP1727045A2 (en) Application framework for use with net-centric application program architectures
US20020143943A1 (en) Support for multiple data stores
US20040024610A1 (en) Transaction execution system interface and enterprise system architecture thereof
US20070283317A1 (en) Inter domain services manager
WO2001025919A2 (en) Architectures for netcentric computing systems
EP1308016A2 (en) System and method for integrating disparate networks for use in electronic communication and commerce
JP2008538040A (en) Apparatus and method for managing a network of intelligent devices
WO2008011227A2 (en) System and method for playing rich internet applications in remote computing devices
WO2004095303A1 (en) Knowledge governing system and method
JP2003337767A (en) Basic system for constructing information system
Bykovskykh Application of Integration Patterns in Salesforce Enterprise Environments
CA2551059C (en) Pipeline architecture for use with net-centric application program architectures
Context-Aware Context of Use Runtime Infrastructure (R2)
Srinivasan Development and deployment of Web application using Oracle Application Server

Legal Events

Date Code Title Description
AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PT RO SE SI SK TR

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase