WO2005072114A2 - Enterprise interoperability using shared data services - Google Patents

Enterprise interoperability using shared data services

Info

Publication number
WO2005072114A2
WO2005072114A2 (PCT/US2004/044032)
Authority
WO
WIPO (PCT)
Prior art keywords
data
model
data sources
divergent
exchange model
Prior art date
Application number
PCT/US2004/044032
Other languages
French (fr)
Other versions
WO2005072114A3 (en)
Inventor
Esther Jesurum
Jason Horman
Andrew Armstrong
Adam Abrevaya
Original Assignee
Pantero Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Pantero Corporation filed Critical Pantero Corporation
Publication of WO2005072114A2 publication Critical patent/WO2005072114A2/en
Publication of WO2005072114A3 publication Critical patent/WO2005072114A3/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/20 - Information retrieval of structured data, e.g. relational data
    • G06F 16/25 - Integrating or interfacing systems involving database management systems
    • G06F 16/252 - Integrating or interfacing between a Database Management System and a front-end application
    • G06F 16/258 - Data format conversion from or to a database

Definitions

  • loose coupling allows one application to be swapped out, or to undergo major changes without affecting other applications.
  • providing loose coupling alone is not enough.
  • rules can't be agreed to, or the rules themselves are hand-coded on a "one-off" basis, making re-use impossible. Accordingly, change requires ongoing maintenance of each system, resulting in inconsistency across the enterprise.
  • SUMMARY OF THE INVENTION The present invention addresses the high cost of manipulating and interoperating with disparate data by capturing in one place the semantics of the data.
  • the architecture of the present invention supports different views simultaneously for different applications.
  • the invention relates to a process for providing interoperability between consumers of shared services and divergent data sources.
  • the process includes providing a reconfigurable exchange model that is configured to accommodate data from a variety of divergent data sources.
  • each of the data sources includes respective data elements and is described by a respective schema.
  • the process also includes associating at least one rule with the exchange model.
  • the rules operate on selected data elements of the accommodated data.
  • the process includes defining a transformation between at least one of the divergent data sources and the reconfigurable exchange model and, using a defined transformation, providing a view of the reconfigurable exchange model.
  • the view generally supports consumer access to a data element of at least one of the plurality of divergent data sources.
  • tiered models are used.
  • the rules simultaneously support disparate and competing definitions.
  • the tiered model allows semantic agreement without forcing universal agreement.
  • Each tier provides the backdrop for categorizing semantics.
  • the tiered model focuses on reconciliation and support of different users; quick agreement on data semantics; and removes the need to define one unified data model for the entire enterprise.
  • the captured semantics include transformations and dictionaries, data integrity rules, business rules related to data, and aggregation rules.
  • the services-based model of the present invention enables rules to be re-used, thereby lowering the cost of deployment and maintenance.
  • the present invention provides features for mapping data transformations, including impact analysis (performed before changes are implemented) and run-time statistics.
  • the impact analysis and centralized semantics repository reduce ongoing maintenance.
  • the invention relates to a system for providing interoperability between consumers of shared services and divergent data sources. Interoperability is accomplished using a metadata repository coupled to a shared data services (SDS) engine.
  • SDS engine includes (i) a user interface configurable to support communications between multiple consumers and the shared data services engine, (ii) an access framework configurable to support interconnection to a plurality of divergent data sources, and (iii) a run-time semantics engine for executing a pre-defined exchange model with associated rules stored in the metadata repository.
  • the executed exchange model accesses data from one or more of the divergent data sources, as needed, transforming accessed data to a common exchange model, and further transforming from the common exchange model to one or more different views.
  • the different views can be tailored to facilitate access by their respective consumers.
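The component split described in the preceding items can be pictured as three cooperating interfaces. The following Java sketch is purely illustrative; the interface and method names are assumptions, not the patent's API:

```java
import java.util.List;
import java.util.Map;

// Consumer-facing user interface: business applications call shared data services by view name.
interface SharedDataService {
    Map<String, Object> execute(String viewName, Map<String, Object> request);
}

// Access framework: interconnection to a plurality of divergent data sources.
interface AccessFramework {
    List<Map<String, Object>> fetch(String sourceId, String nativeQuery);
}

// Run-time semantics engine: executes the exchange model and its associated rules.
interface SemanticsEngine {
    Map<String, Object> toExchange(String sourceId, Map<String, Object> sourceRecord);
    Map<String, Object> toView(String viewName, Map<String, Object> exchangeRecord);
}
```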
  • FIG. 1 is a schematic diagram of an embodiment of the present invention within a services-oriented architecture including a Shared Data Services (SDS) element;
  • FIG. 2 is a more detailed schematic diagram of the SDS component shown in FIG. 1;
  • FIG. 3 is a schematic diagram of exemplary semantics represented within the embodiment shown in FIG. 1;
  • FIG. 4 is a schematic diagram of a representative multi-tiered model representing a process flow of an embodiment of the invention
  • FIG. 5 is a schematic diagram of the run-time architecture of the embodiment shown in FIG. 4
  • FIG. 6 is a more detailed schematic diagram of exemplary mapping semantics of the embodiment shown in FIGs. 4 and 5
  • FIG. 7 is a more detailed schematic diagram of exemplary business process semantics of the embodiment shown in FIGs. 4 and 5
  • FIG. 8 is a more detailed schematic diagram of alternative exemplary mapping semantics of the embodiment shown in FIG. 4
  • FIG. 9 is a more detailed schematic diagram of alternative exemplary mapping and aggregating semantics of the embodiment shown in FIG. 4
  • FIG. 10 is a schematic diagram of exemplary shared services data models of an embodiment of the invention as observable on the SDS editor of FIG. 2;
  • FIG. 11 is a more detailed schematic diagram of one of the source data models shown in FIG. 10, and as observable on the SDS editor of FIG. 2;
  • FIG. 12 is a schematic diagram of an exemplary Java object model of an embodiment of the invention;
  • FIG. 13 is a schematic diagram of a system implementing an exemplary shared data service command; and
  • FIG. 14 is a more detailed schematic diagram of the system shown in FIG. 13.
  • the present invention solves the problems of the prior art by providing interoperability between consumers of shared services and divergent data sources. Interoperability is accomplished, in part, by using a metadata repository coupled to a Shared Data Services (SDS) engine. Concrete schema can be imported from heterogeneous data sources, and then abstracted, so that end-user, or business application software developers need not know about the underlying source. An exchange model accessed through maps is also used. The resulting object models can be easily extended and re-used to add new data sources without having to change specialized views. Further, each tier can be extended without necessitating changes to other tiers.
  • a first query can be directed from a consumer to a common object, or exchange tier within the SDS engine.
  • the SDS engine in response to receiving the first query, automatically translates the query, as required, to a second query directed to one or more of the heterogeneous data sources.
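As an illustration of this query translation, the hypothetical sketch below turns a single exchange-model query into one native query per mapped source; all class, record, and field names are assumptions:

```java
import java.util.ArrayList;
import java.util.List;

final class ExchangeQuery {
    final String exchangeEntity;   // e.g. "Customer" in the common exchange model
    final String filterField;      // exchange-model field name
    final String filterValue;
    ExchangeQuery(String entity, String field, String value) {
        this.exchangeEntity = entity; this.filterField = field; this.filterValue = value;
    }
}

final class QueryTranslator {
    // One mapping entry: which source table/column corresponds to an exchange-model field.
    record SourceMapping(String sourceId, String table, String column) {}

    List<String> translate(ExchangeQuery q, List<SourceMapping> mappings) {
        List<String> nativeQueries = new ArrayList<>();
        for (SourceMapping m : mappings) {
            // A real engine would build a parameterized query in each source's own dialect.
            nativeQueries.add("SELECT * FROM " + m.table()
                    + " WHERE " + m.column() + " = '" + q.filterValue + "' /* " + m.sourceId() + " */");
        }
        return nativeQueries;
    }
}
```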
  • the SDS engine includes a user interface that is configurable to support communications between multiple consumers, or business applications, and the shared data services engine.
  • the SDS engine also includes an access framework that is configurable to support interconnection to a number of divergent data sources.
  • the SDS engine includes a run-time semantics engine for executing a pre-defined exchange model.
  • the exchange model includes associated rules that are stored together with the model in a metadata repository.
  • the exchange model when executing, accesses data from one or more of the divergent data sources, as required. In order to share the accessed data with the consumers, the data is transformed, or mapped, to a common exchange model. It is not necessary for the common exchange model to adhere to the particular schema of any application or data source. Rather, the common exchange model can be defined according to its own schema. In this manner data being handled through the common exchange model can be further transformed from the common exchange model to one or more different, or "specialized" views. As suggested by their name, these specialized views can be tailored to facilitate access by their respective consumers, or business application.
  • A generalized representation of an IT infrastructure, referred to as an enterprise 100, is shown in FIG. 1.
  • the enterprise 100 supports a number of users 110' that access software applications (e.g., business applications) through an application server 120.
  • the application server 120 can include one or more servers that individually host the business applications, such as a Business Process Management (BPM) application suite 130.
  • the application server 120 may link to other platforms that host the business applications.
  • the application server 120 and other platforms can be computers running various operating systems, including LINUX, UNIX, any of Microsoft's Windows suites, and Apple's Mac OS.
  • a first group of users 110' can access the application server 120 directly, using any of a number of available client-server architectures.
  • the application server 120 can include a partner portal 140' configurable for interactions with one or more business partners 110".
  • the application server 120 can include a sales portal 140" configurable for interactions with one or more sales forces 110'".
  • each of the various users 110', 110", 110'" accesses different enterprise information as they may require, using different business applications that provide different views to the users 110.
  • some of the data being used by some of the different users 110 may also be shared with others of the different users 110.
  • the data typically resides in more than one location for a given enterprise 100.
  • some of the data may be stored in one or more DataBase Management Systems (DBMS) 150.
  • Exemplary Structured Query Language (SQL) DBMS include Oracle, DB2, MySQL, PostgreSQL, Sybase, SAP DB, HypersonicSQL, Microsoft SQL Server, Informix, FrontBase, Ingres, Progress, Mckoi SQL, Pointbase and Interbase.
  • some or all of the data may be stored on a data warehouse system 160.
  • the data warehouse 160 represents a repository of integrated information, available for queries and analysis.
  • the data warehouse 160 itself may extract data from other heterogeneous data sources (not shown), as they are generated, providing a single repository for that data. Still further, some or all of the data may also be stored within one or more Enterprise Resource Planning (ERP) systems 170', 170", 170'" (generally 170).
  • ERP systems 170 represent solutions that also seek to streamline and integrate operation processes and information flows within a company. Examples of such ERP systems 170 include SAP R/3, BaaN, Oracle Apps, Peoplesoft, and QAD.
  • Such a varied and complex data storage scenario for the enterprise 100 complicates the problem of interfacing user applications with the data for at least two reasons.
  • user applications, such as the BPM suite 130, generally transact data with at least one of the enterprise data sources 150, 160, 170. Accordingly, the application 130 will require a means for translating data, as needed, between each of the different data sources 150, 160, 170 and the application 130. Additionally, some of the same data may reside at more than one of the different data sources 150, 160, 170.
  • the application 130 will require a means for interpreting and handling conflicts with these data as they are transacted between the application 130 and the multiple data sources 150, 160, 170.
  • a customer's address stored on the Data sources 150 may be different than the same customer's address stored on the data warehouse 160 (e.g., the customer may have moved with only one of the storage systems having the current address, or one of the addresses may be more complete than the other).
  • the present invention uses an Enterprise Application Infrastructure (EAI) 180 coupled to each of the different data sources 150, 160, 170.
  • the EAI 180 connects the user applications represented by the application server 120 to each other, and to the different data sources 150, 160, 170.
  • the system includes a Shared Data Services (SDS) component 190 coupled between the EAI 180 and the application server 120.
  • the SDS component 190 provides a centralized repository that supports interoperability and centralizes the majority of the specialized coding.
  • the SDS component 190 includes an SDS engine 200 coupled between the user applications (e.g., the BPM 130 application and a portal 140) and various Data sources 150', 150", 150'" (generally 150), as shown in FIG. 2.
  • the SDS engine 200 is a process running on one or more servers.
  • the SDS engine 200 supports at least two modes of operation referred to generally as (i) design-time and (ii) run-time.
  • the SDS engine 200 at run-time includes an executable version of a defined multi-tiered interoperability model, including all of the defined rules relating to that model.
  • the SDS engine 200 can include a Java interpreter executing a Java object that corresponds to the above-defined multi-tiered model and associated rules.
  • the SDS engine 200 at design time generally enables creation and/or editing of the exchange models.
  • Each of the different Data sources 150 and the user applications 130, 140 can couple to the SDS engine 200 using one or more of any of a number of available networking techniques supported by interface networks 205', 205".
  • the network can be an Ethernet network, a Web-based network, a dial-up line, leased lines, wireless, etc.
  • the SDS engine 200 is coupled to an SDS metadata repository 210.
  • the metadata repository 210 stores metadata related to the transformation of data between the various Data sources 150 and the user applications 130, 140, as well as other metadata related to semantics. The contents of the metadata repository 210 will be discussed in more detail below.
  • the SDS engine 200 may reside on one or more server computers and includes a run-time semantic engine 215.
  • the run-time semantic engine 215 provides interoperability using the metadata stored on the SDS metadata repository 210.
  • the SDS engine 200 also includes an access framework 220 coupled between the run-time semantic engine 215 and the various Data sources 150.
  • the access framework 220 provides the necessary means to couple the run-time semantic engine 215 to the various Data sources 150 through a first interface network 205".
  • access framework 220 multiplexes or otherwise manages access using various access protocols, nomenclature and query semantics of the Data sources 150.
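One plausible way to organize such multiplexed access is an adapter per source protocol; the sketch below is an assumption-laden illustration, not the patent's design:

```java
import java.util.List;
import java.util.Map;

// Common adapter interface: the run-time semantics engine issues one kind of call
// regardless of the underlying source's protocol or query semantics.
interface SourceAdapter {
    List<Map<String, Object>> query(String nativeQuery);
}

final class JdbcAdapter implements SourceAdapter {        // e.g. a SQL DBMS source
    public List<Map<String, Object>> query(String sql) {
        // ... open a JDBC connection, run the statement, map rows to Maps ...
        return List.of();
    }
}

final class WebServiceAdapter implements SourceAdapter {   // e.g. an ERP exposed over HTTP
    public List<Map<String, Object>> query(String request) {
        // ... post the request, parse the response payload into Maps ...
        return List.of();
    }
}
```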
  • one or more shared data services 225', 225", 225'" are coupled between the run-time semantic engine 215 and the user applications 130, 140 through a second interface network 205'.
  • exemplary semantics that can be used to provide interoperability to and among multiple user applications are shown in FIG. 3.
  • a first source (i.e., user) application 300' represents store sales
  • a second source application 300" represents Web sales
  • a destination application 310 represents a sales portal.
  • a user accessing the enterprise system through the sales portal 310 may require information, such as total sales, from both of the source applications 300', 300" (generally 300).
  • the meaning and representation of the data content of each application 300 (i.e., the semantics) may be different.
  • one or more rules 320, 330, 340, 350 are provided that correctly address the different semantics in order to facilitate consistent and accurate transfer of information between the source applications 300 and the sales portal 310.
  • the rules can be categorized according to their function.
  • transformation rules 320 can be used to transform, or map, the data as needed.
  • an exemplary transformation rule 320 converts a zip code obtained from the first source application 300' into a five-digit-plus-four-digit format, while providing no transformation for a zip code obtained from the second source application 300", as none is required - presumably it is already in the prescribed format.
  • data integrity rules 330 can be used to verify the integrity of certain data elements, as needed.
  • a data integrity rule verifies that a sales price from the first source application 300' must be greater than zero, and the sales item must be valid. Additionally, the exemplary data integrity rule 330 requires that the revision number from the second source application 300" must be greater than zero. Had the results of either of these rules 330 been untrue for the particular data being passed, a user at the sales portal 310 could be notified, or some other action could be taken.
  • Other rules can include aggregation rules 340 that combine predetermined elements of data. For example, a rule can be defined to aggregate one value (e.g., NORCAL sales) separately from another value (e.g., SOCAL sales) using a value of sales taken from the second source application 300". Additionally, a further aggregation rule 340 can be defined to aggregate all Nevada sales into yet another value (e.g., an NV bucket). Thus, the destination application 310 can directly access these aggregated values.
  • additional rules can include business rules 350 that are directed to the business processes themselves. For example, a rule can be defined to confirm that contract approval has been obtained (e.g., reading a value from the first source application 300'), based on a revenue value (e.g., read from the second source application 300") being above a predetermined threshold amount.
  • a number of business rules can be defined within the semantics, further relieving the destination application developers from requiring detailed knowledge of the particular business processes.
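Taken together, the transformation, integrity, aggregation, and business rules above suggest a common rule interface applied to a payload. The sketch below is a minimal, hypothetical illustration; the field names (zipCode, salesPrice, region, saleAmount) are assumptions drawn from the FIG. 3 examples:

```java
import java.util.Map;

// A rule operates on a payload of named data elements drawn from the exchange model.
interface Rule {
    void apply(Map<String, Object> payload);
}

// Transformation rule: normalize a nine-digit zip code into the five-plus-four format.
final class ZipTransformationRule implements Rule {
    public void apply(Map<String, Object> payload) {
        String zip = (String) payload.get("zipCode");
        if (zip != null && zip.length() == 9 && !zip.contains("-")) {
            payload.put("zipCode", zip.substring(0, 5) + "-" + zip.substring(5));
        }
    }
}

// Data integrity rule: a sales price must be greater than zero.
final class PositivePriceRule implements Rule {
    public void apply(Map<String, Object> payload) {
        Number price = (Number) payload.get("salesPrice");
        if (price == null || price.doubleValue() <= 0) {
            throw new IllegalStateException("sales price must be greater than zero");
        }
    }
}

// Aggregation rule: accumulate Nevada sales into an NV bucket. A business rule
// (e.g. requiring contract approval above a revenue threshold) would follow the same pattern.
final class RegionalSalesAggregationRule implements Rule {
    public void apply(Map<String, Object> payload) {
        String region = (String) payload.get("region");
        Number sale = (Number) payload.get("saleAmount");
        if ("NV".equals(region) && sale != null) {
            double bucket = ((Number) payload.getOrDefault("nvBucket", 0.0)).doubleValue();
            payload.put("nvBucket", bucket + sale.doubleValue());
        }
    }
}
```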
  • the rules can be defined at multiple layers within the multi- tiered model, as required. In some instances the rules can even be conflicting, yet still operate and provide the desired interoperability.
  • An example of a multi-tiered rule with a conflicting element would be a business rule requiring that all purchase orders must have a defined sales representative.
  • a further rule would be that all sales representatives for a Northeast region must be Jane.
  • a conflicting rule would be that all sales representatives for the Southeast region must be Andy.
  • the SDS component 190 ensures, however, that the rules 320, 330, 340, 350 are applied appropriately according to the different business application accessing the interoperability model.
  • the semantics as described by the rules 320, 330, 340, 350 above can accommodate much more than simply mapping between different formats and verifying the integrity of the data. Rather, the semantics can capture elements of the business processes themselves. This concept is exemplified above using the aggregation rule 340 and business rule 350.
  • interoperability is provided using a tiered model as illustrated in FIG. 4.
  • the tiered model 400 schematically captures the rules and definitions of an interoperability solution coupled between each of the various data sources 410', 410", 410'" (generally 410) and each of a number of user applications (at 28). More generally, the tiered model 400 represents a process flow of an embodiment of the invention.
  • a first notable feature of the model 400 is that all paths between the data sources 410 and the applications lead through a single common exchange tier 420.
  • the exchange tier 420 allows for the definition of a single common object model that can be mapped to and from. Thus, rather than requiring each of the various data sources 410 and user applications at 28 to support a separate interface with each of the other data sources 410 and user applications (on the order of n×m maps), each need only support a single interface to the common exchange tier 420 (on the order of n+m maps).
  • each of the data sources 410 couples to the common exchange tier 420 through a respective storage-side mapping tier 430', 430" (generally 430).
  • the storage-side mapping tiers 430 provide transformations converting data described according to the schema of the respective data sources 410 to a common object schema of the exchange tier 420. The transformation may involve re-formatting of the data, parsing of the data, re-ordering of the data, etc.
  • a single storage-side mapping tier 430 can be used for more than one storage system 410.
  • the common schema of the exchange tier 420 may be the same as the schema of one of the data sources 410, thereby negating the need for a storage-side mapping tier 430.
  • each storage system 410 will have an associated map 430.
  • the schema of the data sources 410 can be abstracted prior to the storage-side mapping tier 430, using a data abstraction tier. For example, the schema from one data source 410'" can be abstracted using XMLBeans 440 (known in the industry).
  • the schema from one or more of the other data sources 410', 410" can be abstracted using HIBERNATE 440'.
  • Hibernate is a persistence service that stores Java objects in relational databases, or provides an object-oriented view of existing relational data (a minimal sketch of such a mapping appears below).
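A minimal sketch of the kind of object-oriented view Hibernate can provide is shown below. It uses JPA-style annotations for brevity (mappings of the patent's era were typically XML files, but the idea is the same); the CUSTOMER table and its columns are assumptions:

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Table;

// An object-oriented view of an existing relational table: application code works
// with Customer objects while the persistence layer reads and writes the underlying rows.
@Entity
@Table(name = "CUSTOMER")
public class Customer {

    @Id
    @Column(name = "CUSTOMER_ID")
    private Long id;

    @Column(name = "NAME")
    private String name;

    @Column(name = "ZIP_CODE")
    private String zipCode;

    public Long getId() { return id; }
    public String getName() { return name; }
    public String getZipCode() { return zipCode; }
}
```

With such a mapping in place, application code can retrieve a row as an object, for example session.get(Customer.class, 42L), without issuing SQL directly.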
  • the model 400 includes an aggregation tier 450.
  • the aggregation tier 450 can be used to combine selected data elements from the various data sources 410.
  • the rules, captured in metadata, apply to a "payload" and not simply the data itself.
  • a packet of information describes the payload, which moves from one place to another (as in XML).
  • the model 400 also includes a second, user-side mapping tier 460', 460" (generally 460) that can be used to transform data from the common exchange tier 420 to one or more specialized views 470', 470" (generally 470).
  • Each of the specialized views 470 can be developed using requirements of a specific business entity, or organization.
  • end-user applications 28 can be developed within the context of the particular business organization. Other developers write the mapping within the exchange layer once, thereby decoupling the views from the data source.
  • the exchange model and the data map can be defined, re-defined and provided so that the task for business application developers is simplified as they need only implement changes to the view map.
  • end-user applications 28 are coupled to the data sources 410 through the model 400, using a Web service 480.
  • other end-user applications are coupled to the data sources 410 through the model 400 using Enterprise JavaBeans (EJB) 485.
  • A high-level schematic diagram of the run-time architecture of the semantic engine 215 and exchange (tiered) model 400 is shown in FIG. 5. As illustrated, a number of user applications 500', 500", 500'" (generally 500), or clients, are supported by a server suite 510 that may include one or more of Web services 515', Enterprise JavaBeans 515", WebLogic 515'", and a Java Application Programming Interface (API) 515"".
  • the server suite 510 is coupled to the various storage systems 520 through an instantiation 530 of the multi-tiered model 400.
  • the instantiated model includes a data-abstraction tier 535, aggregation and mapping tiers 540, 545, an exchange tier 550, and a business tier 555.
  • the data is passed between the user applications 500 and the model 530 using payload mapping.
  • the exchange tier 550 is a common object model 420 and includes rules 560, as well as common object models for particular applications, such as those defined by the Mortgage Industry Standards Maintenance Organization (MISMO), other industry standards, user-defined models, and the like.
  • the business tier 555 can also include rules 565, as well as aggregation and/or mapping required between the exchange tier 550 and the specialized business views 570.
  • the data mapping 545 and aggregation 540 between the abstraction tier 535 and the exchange tier 550 are the run-time counterparts of the mapping 430 and aggregation 450 of model 400 discussed in FIG. 4, and thus operate as described above.
  • mapping 51 and aggregating 52 between the exchange tier 550 and the business views 570 (semantics to specialized views 470) operate as the mappings 460 discussed in FIG. 4.
  • Rules 565, 560 are the run-time counterparts to rules 320, 330, 340, 350 discussed in FIGs. 3 and 4.
  • the number of tiers defined within the model is selectable and may be varied to suit the particular enterprise 100.
  • a relatively simple enterprise may require only three tiers: a storage-side mapping tier, a common exchange tier and an application-side mapping tier. Even fewer tiers may be needed if mapping is not required.
  • the model may include additional tiers. For example, additional tiers can be provided to map and/or aggregate from one specialized view to another specialized view. The additional layers and/or specialized views can be used to facilitate the transfer of information between different user applications in a complex enterprise.
  • One specialized view may be for corporate, whereas a lower-level specialized view may be for a particular department or business organization within the corporation.
  • application developers can use a single, data-source-independent query language, again removing the need for the business application developer to know anything about the particular data source.
  • An exemplary mapping is illustrated in FIG. 6.
  • the mapping transforms between the Unified Modeling Language (UML) and the eXtensible Markup Language (XML).
  • XML can be used to describe the schema of a particular data storage system 410
  • the UML can be used to describe a common object model of the exchange tier 420.
  • the mapping transformation can operate in both directions; note that arrows appear at both ends of the lines interconnecting the UML and XML elements. That is, a query from a user application directed to one or more of the data sources 410 may be mapped, or transformed, at the application-side mapping tier 460 from the originating application query language to UML, presuming that UML is used in the exchange tier 420. The same query may then be further mapped, at the storage-side mapping tier 430, from UML to the schema of the target storage system 410, as illustrated in FIG. 6. Similarly, the results of the query may be returned to the requesting application through the same path, using the same transformations in the opposite direction.
  • the query results may be mapped from the storage system schema (e.g., XML schema) to UML, then mapped again from UML to a format suitable for interpretation by the requesting application.
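A hypothetical sketch of such a bidirectional map, between a storage-side XML document and an exchange-tier object standing in for the UML model, might look as follows (element and attribute names are assumptions):

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Exchange-tier object (stand-in for a class in the UML common object model).
class CustomerExchange {
    String name;
    String zipCode;
}

// Bidirectional map between the storage-side XML representation and the exchange-tier
// object, mirroring the double-headed arrows described above.
final class CustomerXmlMap {

    CustomerExchange fromXml(Document doc) {
        Element root = doc.getDocumentElement();   // e.g. <customer name=".." zip=".."/>
        CustomerExchange c = new CustomerExchange();
        c.name = root.getAttribute("name");
        c.zipCode = root.getAttribute("zip");
        return c;
    }

    Document toXml(CustomerExchange c) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder().newDocument();
        Element root = doc.createElement("customer");
        root.setAttribute("name", c.name);
        root.setAttribute("zip", c.zipCode);
        doc.appendChild(root);
        return doc;
    }
}
```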
  • FIG. 7 illustrates implementation of an exemplary complex business rule (e.g., 350, 565).
  • a Document Type Definition (DTD) is shown for a credit report.
  • the DTD can be used to specify the schema of the underlying data.
  • the business rule is defined to compare the number of entries for dependents' age in years with the value of the dependent-count attribute. This particular rule can be used as a verification to confirm data accuracy (a sketch of such a check appears below).
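A hedged sketch of that verification is shown below; the element and attribute names are assumptions, since the patent does not spell out the DTD:

```java
import org.w3c.dom.Document;
import org.w3c.dom.Element;

// Business rule sketch for the credit-report example: the number of dependent-age
// entries must match the declared dependent count.
final class DependentCountRule {

    boolean isConsistent(Document creditReport) {
        Element root = creditReport.getDocumentElement();
        int ageEntries = root.getElementsByTagName("DEPENDENT_AGE_IN_YEARS").getLength();
        int declaredCount = Integer.parseInt(root.getAttribute("DependentCount"));
        return ageEntries == declaredCount;   // flag the payload for review when false
    }
}
```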
  • The use of Hibernate and XMLBeans to provide mapping from a data source 800 to an instantiated Java object model 810 is shown in FIG. 8.
  • the data source 800 such as a DBMS or a data warehouse, stores data according to a native schema.
  • the stored data can be mapped as described above to an exemplary Java object model 810.
  • This object model 810 may represent the common object model of the exchange tier also described above.
  • the object model 810 may represent a data source model, or data abstraction between the data source 800 and the exchange tier.
  • the Java object model 810 includes a number of data elements 820-825 (e.g., representing rows of tables in the data source 800).
  • the data elements 820-825 can be related to each other as indicated by the arrows shown interconnecting the different data elements 820-825.
  • An excerpt of exemplary code 830 represents a portion of the code defining a mapping tier 840.
  • the Java object model 810 described above represents a data source model 900
  • a further mapping is required to translate between the data source model and a second data model at the exchange tier.
  • An exemplary transformation from a data source model 900 to an exchange tier model is illustrated in FIG. 9.
  • the exchange tier model 910 includes different elements 920-922 (in the exemplary case, fewer elements) than the data source model 900.
  • a second mapping is used to transform between the data source model 900 and the exchange tier model 910.
  • an aggregation tier 940 can also be defined to combine one or more of the elements of one of the models with a corresponding element of the other of the models.
  • Exemplary code excerpts 950, 960 illustrate portions of the code related to the respective transformation and aggregation tiers 930, 940.
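In the spirit of FIG. 9, the sketch below maps a source model with more elements onto an exchange-tier model with fewer, aggregating several source elements into one exchange element; all class and field names are illustrative assumptions:

```java
// Data source model (more elements).
final class SourceCustomer {
    String firstName;
    String lastName;
    String street;
    String city;
    String state;
}

// Exchange-tier model (fewer elements).
final class ExchangeCustomer {
    String fullName;
    String address;
}

// Mapping that both renames and aggregates source elements into exchange elements.
final class CustomerMap {
    ExchangeCustomer toExchange(SourceCustomer s) {
        ExchangeCustomer e = new ExchangeCustomer();
        e.fullName = s.firstName + " " + s.lastName;             // aggregation of two elements
        e.address = s.street + ", " + s.city + ", " + s.state;   // aggregation of three elements
        return e;
    }
}
```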
  • the SDS 190 includes an SDS workbench 250 to support design-time operational mode. During design time, a developer defines the multi-tiered model, as shown in FIG. 4. Thus, the workbench 250 includes an SDS editor 255 for defining and/or editing any maps necessary for transformation, as well as any of the various rules that may be required, as previously described in relation to FIG. 3.
  • the models and/or rules can be described using a code language, such as XML. Alternatively, or in addition, the models and/or rules can be described graphically using a graphical user interface (GUI).
  • the SDS editor 255 includes a model edit/display component 256 providing a developer with a means to edit and/or display the model and its various components.
  • the SDS editor 255 includes a map editor 260 for defining and/or editing maps, or transformation descriptions.
  • the map editor 260 can further include (i) a transformation component 264 for defining and/or editing transformations, and (ii) an aggregation component 262 for defining and/or editing aggregation rules.
  • the SDS editor 255 includes a semantics editor 265 for defining and/or editing the rules related to the multi-tiered model.
  • the semantics editor 265 includes an aggregation component 266 for defining and/or editing aggregation rules.
  • the semantics editor 265 includes a business rules component 267 for defining and/or editing business rules.
  • the semantics editor 265 includes an integrity rule component 268 for defining and/or editing integrity rules.
  • SDS workbench 250 is coupled to heterogeneous data sources. These data sources can include XML and UML data sources 282' and/or ACORD and XSD data sources 282" (generally 282).
  • the SDS workbench 250 can include a model import and/or export component 280 that can be used to import and/or export information describing the various data sources 282.
  • the model import/export component 280 can be configured to automatically input the respective schema of the various data sources 282.
  • the automatic input can be supported by a source data import component 290 that inspects the various data sources 282 and determines/imports their respective schema.
  • the SDS workbench 250 is further coupled to an SDS model repository 285 used for storing the defined models with the associated mappings and/or rules. Once created and/or edited, a model together with its associated rules is transformed into a corresponding object supporting run-time operation. For example, a model can be compiled into a Java object to support run-time operation using a Java engine. Accordingly, the SDS workbench 250 can include a dynamic code generation component 292. The dynamic code generation component can transform the model and rules into an appropriate format as required by the SDS engine 200 to support run-time operation. Additionally, the SDS workbench 250 can optionally include an impact analysis component 294.
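One plausible shape for such a dynamic code generation step, sketched here with the standard javax.tools compiler API, is to emit Java source for the edited model and compile it for the run-time engine to load. This is an assumption about mechanism, not the patent's implementation:

```java
import javax.tools.JavaCompiler;
import javax.tools.ToolProvider;
import java.nio.file.Files;
import java.nio.file.Path;

final class ModelCodeGenerator {

    void generateAndCompile(String modelName, String generatedSource) throws Exception {
        // Write out the generated source for the edited model and its rules.
        Path sourceFile = Path.of(modelName + ".java");
        Files.writeString(sourceFile, generatedSource);

        // Compile it so the run-time engine can load the resulting class (requires a JDK).
        JavaCompiler compiler = ToolProvider.getSystemJavaCompiler();
        int status = compiler.run(null, null, null, sourceFile.toString());
        if (status != 0) {
            throw new IllegalStateException("compilation of generated model failed");
        }
    }
}
```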
  • the impact analysis component can be configured to inspect the created and/or edited multi-tiered model and/or its associated rules during design time, to determine its impact to run-time operation.
  • the impact analysis feature is supported by the separate design-time/run-time operation of the SDS 190.
  • a schematic diagram of a graphical representation 1000 of the multi-tiered model is shown in FIG. 10.
  • this graphical representation may be viewed on the SDS editor 255 of the SDS workbench 250 during design time.
  • the model can include a number of source data models 1010', 1010", 1010'" (generally 1010) and a number of shared services data models 1020', 1020", 1020'" (generally 1020), all interconnected through a common domain model 1030.
  • each of the data models 1010 generally connects to the common domain model 1030 through a respective storage-side map 1035', 1035", 1035'" (generally 1035).
  • each of the shared data services models 1020 generally connects to the common domain model 1030 through a respective user-side map 1040', 1040", 1040'" (generally 1040).
  • some of the mapped data sources 1010 can be combined using an aggregation tier 1045.
  • the rules can apply at any of the model tiers.
  • transformation rules 1050 can be applied at some, or all of the maps 1035, 1040.
  • transformation rules can be defined in the SDS editor 255 that will be applied to the payloads as the data is mapped from one model to another.
  • integrity rules 1055 can be defined in the SDS editor 255 that will be selectively applied to the payloads of some or all of the data models 1010, 1020, and additionally to the common domain model 1030.
  • business rules 1055 can also be defined in the SDS editor 255 that will be selectively applied to the payloads of some or all of the models 1010, 1020, 1030.
  • the SDS editor 255 provides a Graphical User Interface (GUI) that allows an operator to design the model graphically, as shown.
  • the particular data models 1010, 1020, 1030 can be individually constructed in a graphical manner.
  • the data models 1010, 1020, 1030, once constructed, can be interconnected as shown to form the overall multi-tiered model.
  • the different rules can also be identified graphically and located at their respective positions within the multi-tiered model at which they apply.
  • the SDS editor 255 can provide further support to allow an operator to define the models and rules and edit them once created.
  • a more detailed illustration of how one of the models 1010, 1020, 1030 can be defined and edited using the GUI of the SDS editor 255 is shown in FIG. 11.
  • An exemplary model for a claim is shown having a number of elements (e.g., account, business party, etc.) related as indicated by the interconnecting arrows.
  • the exchange model results in a "tree," rather than a flat row in a table.
  • This representation also provides infrastructure for determining an impact analysis, such as determining the results due to a change of the model.
  • the editor 255 can use the elements of an imported schema, thereby simplifying an operator's task of defining the model.
  • the SDS workbench includes a model import/export capability 280.
  • the SDS editor 255 also allows the elements of one model to be mapped to elements of another model graphically, using interconnects similar to those shown in FIG. 11.
  • the SDS editor 255 then allows an operator to define semantics at any one or more of the elements of the model. After an editing session is concluded, the edited model and rules are stored within the SDS model repository 285 of FIG. 2. A representation of an exemplary Java object model 1200 for a purchase order is shown in FIG. 12.
  • the SDS editor 255 can be used to develop the model 1200 having a number of elements (e.g., P.O #, Customer, Item #, Price and Sales Rep.), with each of these elements representing data (e.g., a row) from one or more of the data sources.
  • the data entries identified as "XXXXXX" represent a payload of data being handled by the model 1200.
  • the model may also include one or more rules that are associated with it.
  • the model 1200 can be defined in metadata.
  • a schematic representation of the metadata 1210 associated with the model 1200 includes a portion 1220 that defines the different fields of the model 1200.
  • the metadata 1210 also includes a portion 1230 that defines the metadata associated with any of the associated rules.
  • the rules can include inferences, such as inferences drawn upon the schema of a data source.
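A hypothetical sketch of such a metadata-defined model, with one portion listing fields and another attaching rules as in FIG. 12, might look as follows (the structure is illustrative; only the field names come from the example):

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Purchase-order exchange model defined in metadata: one portion lists the model's
// fields, another attaches rules to individual fields.
final class PurchaseOrderModel {

    final Map<String, Class<?>> fields = new LinkedHashMap<>();
    final Map<String, List<String>> rules = new LinkedHashMap<>();

    PurchaseOrderModel() {
        fields.put("poNumber", String.class);
        fields.put("customer", String.class);
        fields.put("itemNumber", String.class);
        fields.put("price", Double.class);
        fields.put("salesRep", String.class);

        rules.put("price", List.of("must be greater than zero"));
        rules.put("salesRep", List.of("every purchase order must name a sales representative"));
    }
}
```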
  • a process 1300 is running on a server within an enterprise supporting, among other things, online sales.
  • the process 1300 receives a request from a user application 1310, such as a Web service supporting a purchase request via online sales.
  • the process 1300 may request information from the customer, such as the customer's identity, address, shipping information, billing information, etc.
  • the user application 1310 formulates the request as a first query directed to the common exchange tier 420. The customer may have already provided this information to the system during an earlier transaction.
  • the process 1300 needs to check enterprise data (e.g., checking data sources 1315 containing master customer records) to locate this customer information, if available, then update any information as required.
  • the process translates the query to one or more queries, as required, directed to the various data sources 1315.
  • the process may require that certain business rules be imposed, for example, checking the customer's creditworthiness with a business partner 1320, such as Dun & Bradstreet.
  • the process 1300 uses the SDS engine 200 to run a previously defined multi-tiered model with all of the applicable transformations, mappings, aggregations and business rules already defined.
  • the process 1300 need only interface with a predefined specialized view in the multi-tiered model.
  • the SDS engine 1400 allows a user application 1410', 1410" to perform operations including updating a customer 1420, adding a new customer 1422, and/or adding a new customer site 1424.
  • One of the operations for example, adding customer 1420, may itself impose a number of more detailed operations.
  • add customer 1420 may also use a validation to validate the requested operation, and a payload transformation converting the query and/or the results of the query, as required. Further, add customer 1420 may also use a business rule to check the DUNS number, and perform an additional validation, such as validate address, to ensure the integrity of the data.
  • add customer 1420 may aggregate, or merge some of the payload information, such as merging an ERP customer reference identification with the original query and/or results.
  • add customer 1420 inserts the customer information into one of the heterogeneous data sources, such as an Oracle DBMS.
  • Because the business application developer need only provide the add-customer feature, the SDS engine 1400, running a multi-tiered model, performs all of the above-described actions to ensure that the data is handled consistently and accurately according to the business rules of the enterprise (a sketch of such a command appears below). This relieves the application developer of that burden and results in less complex and less costly development of business applications, while ensuring the integrity and reusability of the enterprise's data.
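A hedged sketch of such an add-customer command is shown below; the step order follows the description above, but the table, column, and method names are assumptions:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Map;

// Add-customer flow: validate, transform the payload, apply a business rule (a DUNS
// check against a partner), merge an ERP reference, then insert into a relational source.
final class AddCustomerCommand {

    void execute(Map<String, Object> payload, String jdbcUrl) throws Exception {
        validateAddress(payload);                                    // integrity rule
        normalizeZip(payload);                                       // payload transformation
        checkDunsNumber(payload);                                    // business rule via partner service
        payload.put("erpCustomerRef", lookupErpReference(payload));  // aggregation / merge

        try (Connection c = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO CUSTOMER (NAME, ZIP_CODE, ERP_REF) VALUES (?, ?, ?)")) {
            ps.setString(1, (String) payload.get("name"));
            ps.setString(2, (String) payload.get("zipCode"));
            ps.setString(3, (String) payload.get("erpCustomerRef"));
            ps.executeUpdate();
        }
    }

    private void validateAddress(Map<String, Object> p) { /* reject incomplete addresses */ }
    private void normalizeZip(Map<String, Object> p) { /* e.g. reformat to five-plus-four */ }
    private void checkDunsNumber(Map<String, Object> p) { /* call the credit partner */ }
    private String lookupErpReference(Map<String, Object> p) { return "ERP-REF"; }
}
```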

Abstract

A system and process for providing interoperability between consumers of shared services and divergent data sources. Interoperability is accomplished using a metadata repository (210) coupled to a shared data services (SDS) engine (200). The SDS engine includes a user interface configurable to support communications between multiple consumers and the shared data services engine (200), an access framework (220) configurable to support interconnection to a plurality of divergent data sources, and a run-time semantics engine (215) for executing a pre-defined exchange model with associated rules stored in the metadata repository (210). The executed exchange model accesses data from one or more of the divergent data sources, as required, transforming accessed data to a common exchange model, and further transforming from the common exchange model to one or more different views. The different views can be tailored to facilitate access by their respective consumers.

Description

ENTERPRISE INTEROPERABILITY USING SHARED DATA SERVICES
RELATED APPLICATION
This application is a continuation of and claims priority to U.S. Application No. 10/759,524, filed January 19, 2004, the entire teachings of which are incorporated herein by reference.
BACKGROUND OF THE INVENTION
Today's businesses are increasingly leveraging data by sharing information across different business organizations, including those outside the business such as suppliers and other strategic business partners. To meet this growing demand for data accessibility and portability, businesses require a more dynamic and responsive (i.e., on demand) Information Technology (IT) infrastructure. Unfortunately, the seemingly simple task of electronically sharing data in the above-described manner can in fact be a complex and costly endeavor. These challenges are largely due to the manner in which the business software applications were originally developed. Historically, large business organizations, or enterprises, often automated the different business functions, or departments, independently. As a result, each functional group (e.g., sales, marketing, manufacturing, etc.) developed or procured its own software applications. To further complicate matters, the functional groups having their own applications tended to independently manage their own electronic (digital) data. These legacy business applications are often referred to as monolithic, or "stovepipe" systems, suggesting that they are vertically integrated within the functional group, but offer little or no capability for sharing information and/or interacting with applications of other functional groups.

As business applications became more sophisticated, so did the businesses themselves. New enabling technologies, such as the World Wide Web, allowed businesses to use the electronic data stored within their organization in new and better ways. Some examples that leverage a business's stored information include supply chain management and on-line transactions. One solution for sharing information would be for businesses to develop and/or procure new IT systems using the latest available technologies to provide integrated business applications (software). Unfortunately, companies that already have an IT infrastructure are much less willing/able to invest in wholesale replacement. To remain competitive, companies are looking to implement new IT solutions that offer a rapid return on investment. Thus, being unwilling to make a substantial investment for the long-term benefits, companies are often left to patch their existing applications and databases together.

Despite a substantial demand, data integration remains an enormous, growing, and unsolved problem in today's enterprise. A recent report identified application integration as the number one priority among surveyed Chief Information Officers. And, the primary problem with application integration is data manipulation, due to the high cost associated with manipulating and interoperating with data from disparate sources. The high cost of data manipulation is largely due to labor costs associated with custom software development, or "handcoding." Much of the labor relates to securing agreement on the data, aggregating and transforming the data, and providing for different uses of the data. Some estimates suggest that up to 40% of IT budgets are spent on integration, with up to 70% of that focused on the data itself.

Some applications attempt to solve the data manipulation problem by using XML and/or messaging to provide "loose coupling" between systems. In theory, loose coupling allows one application to be swapped out, or to undergo major changes, without affecting other applications. Unfortunately, however, providing loose coupling alone is not enough. For example, rules can't be agreed to, or the rules themselves are hand-coded on a "one-off" basis, making re-use impossible. Accordingly, change requires ongoing maintenance of each system, resulting in inconsistency across the enterprise. Thus, without a solution that includes an architectural approach, the problems that exist now will simply be recreated in the future.

SUMMARY OF THE INVENTION
The present invention addresses the high cost of manipulating and interoperating with disparate data by capturing in one place the semantics of the data. In particular, by incorporating rules, and more generally semantics, the architecture of the present invention supports different views simultaneously for different applications. Accordingly, in one aspect, the invention relates to a process for providing interoperability between consumers of shared services and divergent data sources. The process includes providing a reconfigurable exchange model that is configured to accommodate data from a variety of divergent data sources. In general, each of the data sources includes respective data elements and is described by a respective schema. The process also includes associating at least one rule with the exchange model. The rules operate on selected data elements of the accommodated data. Further, the process includes defining a transformation between at least one of the divergent data sources and the reconfigurable exchange model and, using a defined transformation, providing a view of the reconfigurable exchange model. The view generally supports consumer access to a data element of at least one of the plurality of divergent data sources.

In some embodiments, tiered models are used. The rules simultaneously support disparate and competing definitions. The tiered model allows semantic agreement without forcing universal agreement. Each tier provides the backdrop for categorizing semantics. As such, the tiered model focuses on reconciliation and support of different users; quick agreement on data semantics; and removes the need to define one unified data model for the entire enterprise. In a preferred embodiment, the captured semantics include transformations and dictionaries, data integrity rules, business rules related to data, and aggregation rules. The services-based model of the present invention enables rules to be re-used, thereby lowering the cost of deployment and maintenance. In some embodiments, the present invention provides features for mapping data transformations, including impact analysis (performed before changes are implemented) and run-time statistics. The impact analysis and centralized semantics repository reduce ongoing maintenance.

In another aspect, the invention relates to a system for providing interoperability between consumers of shared services and divergent data sources. Interoperability is accomplished using a metadata repository coupled to a shared data services (SDS) engine.
The SDS engine includes (i) a user interface configurable to support communications between multiple consumers and the shared data services engine, (ii) an access framework configurable to support interconnection to a plurality of divergent data sources, and (iii) a run-time semantics engine for executing a pre-defined exchange model with associated rules stored in the metadata repository. The executed exchange model accesses data from one or more of the divergent data sources, as needed, transforming accessed data to a common exchange model, and further transforming from the common exchange model to one or more different views. The different views can be tailored to facilitate access by their respective consumers.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing and other objects, features and advantages of the invention will be apparent from the following more particular description of preferred embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention. FIG. 1 is a schematic diagram of an embodiment of the present invention within a services-oriented architecture including a Shared Data Services (SDS) element; FIG. 2 is a more detailed schematic diagram of the SDS component shown in FIG. 1; FIG. 3 is a schematic diagram of exemplary semantics represented within the embodiment shown in FIG. 1; FIG. 4 is a schematic diagram of a representative multi-tiered model representing a process flow of an embodiment of the invention; FIG. 5 is a schematic diagram of the run-time architecture of the embodiment shown in FIG. 4; FIG. 6 is a more detailed schematic diagram of exemplary mapping semantics of the embodiment shown in FIGs. 4 and 5; FIG. 7 is a more detailed schematic diagram of exemplary business process semantics of the embodiment shown in FIGs. 4 and 5; FIG. 8 is a more detailed schematic diagram of alternative exemplary mapping semantics of the embodiment shown in FIG. 4; FIG. 9 is a more detailed schematic diagram of alternative exemplary mapping and aggregating semantics of the embodiment shown in FIG. 4; FIG. 10 is a schematic diagram of exemplary shared services data models of an embodiment of the invention as observable on the SDS editor of FIG. 2; FIG. 11 is a more detailed schematic diagram of one of the source data models shown in FIG. 10, and as observable on the SDS editor of FIG. 2; FIG. 12 is a schematic diagram of an exemplary Java object model of an embodiment of the invention; FIG. 13 is a schematic diagram of a system implementing an exemplary shared data service command; and FIG. 14 is a more detailed schematic diagram of the system shown in FIG. 13.
DETAILED DESCRIPTION OF THE INVENTION
A description of preferred embodiments of the invention follows. The present invention solves the problems of the prior art by providing interoperability between consumers of shared services and divergent data sources. Interoperability is accomplished, in part, by using a metadata repository coupled to a Shared Data Services (SDS) engine. Concrete schema can be imported from heterogeneous data sources, and then abstracted, so that end-user, or business application, software developers need not know about the underlying source. An exchange model accessed through maps is also used. The resulting object models can be easily extended and re-used to add new data sources without having to change specialized views. Further, each tier can be extended without necessitating changes to other tiers.

This approach enables business applications to be developed at the business layer. The queries and results of the software applications get mapped to the data such that the builder of the business application does not need to know the details (e.g., query structure). For example, a first query can be directed from a consumer to a common object, or exchange tier, within the SDS engine. The SDS engine, in response to receiving the first query, automatically translates the query, as required, to a second query directed to one or more of the heterogeneous data sources.

In more detail, the SDS engine includes a user interface that is configurable to support communications between multiple consumers, or business applications, and the shared data services engine. The SDS engine also includes an access framework that is configurable to support interconnection to a number of divergent data sources. The SDS engine includes a run-time semantics engine for executing a pre-defined exchange model. The exchange model includes associated rules that are stored together with the model in a metadata repository. The exchange model, when executing, accesses data from one or more of the divergent data sources, as required. In order to share the accessed data with the consumers, the data is transformed, or mapped, to a common exchange model. It is not necessary for the common exchange model to adhere to the particular schema of any application or data source. Rather, the common exchange model can be defined according to its own schema. In this manner, data being handled through the common exchange model can be further transformed from the common exchange model to one or more different, or "specialized," views. As suggested by their name, these specialized views can be tailored to facilitate access by their respective consumers, or business applications. Thus, the SDS engine of the present invention provides a tool that can be used for building shared data services.

A generalized representation of an IT infrastructure, referred to as an enterprise 100, is shown in FIG. 1. In general, the enterprise 100 supports a number of users 110' that access software applications (e.g., business applications) through an application server 120. The application server 120 can include one or more servers that individually host the business applications, such as a Business Process Management (BPM) application suite 130. Alternatively or in addition, the application server 120 may link to other platforms that host the business applications. The application server 120 and other platforms can be computers running various operating systems, including LINUX, UNIX, any of Microsoft's Windows suites, and Apple's Mac OS.
As illustrated, a first group of users 110' can access the application server 120 directly using any of a number of available client-server architectures. Alternatively, or in addition, other users 110", 110'" may access the application server 120 through one or more portals 140', 140" (generally 140). For example, the application server 120 can include a partner portal 140' configurable for interactions with one or more business partners 110". Additionally, the application server 120 can include a sales portal 140" configurable for interactions with one or more sales forces 110'". Generally, each of the various users 110', 110", 110'" (generally 110) accesses different enterprise information as they may require, using different business applications that provide different views to the users 110. Notably, however, some of the data being used by some of the different users 110 may also be shared with others of the different users 110. It may even be that the users 110 communicate with each other through their respective business applications. Thus, there is the need for both consistency and conversion of the enterprise data.

The data, on the other hand, typically resides in more than one location for a given enterprise 100. For example, some of the data may be stored in one or more DataBase Management Systems (DBMS) 150. Exemplary Structured Query Language (SQL) DBMS include Oracle, DB2, MySQL, PostgreSQL, Sybase, SAP DB, HypersonicSQL, Microsoft SQL Server, Informix, FrontBase, Ingres, Progress, Mckoi SQL, Pointbase and Interbase. Alternatively, or in addition, some or all of the data may be stored on a data warehouse system 160. The data warehouse 160 represents a repository of integrated information, available for queries and analysis. The data warehouse 160 itself may extract data from other heterogeneous data sources (not shown), as they are generated, providing a single repository for that data. Still further, some or all of the data may also be stored within one or more Enterprise Resource Planning (ERP) systems 170', 170", 170'" (generally 170).
These ERP systems 170 represent solutions that also seek to streamline and integrate operation processes and information flows within a company. Examples of such ERP systems 170 include SAP R/3, BaaN, Oracle Apps, Peoplesoft, and QAD.

Such a varied and complex data storage scenario for the enterprise 100 complicates the problem of interfacing user applications with the data for at least two reasons. First, user applications, such as the BPM suite 130, generally transact data with at least one of the enterprise data sources 150, 160, 170. Accordingly, the application 130 will require a means for translating data, as needed, between each of the different data sources 150, 160, 170 and the application 130. Additionally, some of the same data may reside at more than one of the different data sources 150, 160, 170. Accordingly, the application 130 will require a means for interpreting and handling conflicts with these data as they are transacted between the application 130 and the multiple data sources 150, 160, 170. For example, a customer's address stored on the data sources 150 may be different from the same customer's address stored on the data warehouse 160 (e.g., the customer may have moved with only one of the storage systems having the current address, or one of the addresses may be more complete than the other).

To address the above-referenced challenges and to provide for information access and sharing, the present invention uses an Enterprise Application Infrastructure (EAI) 180 coupled to each of the different data sources 150, 160, 170. Accordingly, the EAI 180 connects the user applications represented by the application server 120 to each other, and to the different data sources 150, 160, 170. This allows for both the integration of existing applications and the creation of new composite applications. In particular, the system includes a Shared Data Services (SDS) component 190 coupled between the EAI 180 and the application server 120. Generally, the SDS component 190 provides a centralized repository for providing interoperability and centralizing a majority of the specialized coding.

In more detail, the SDS component 190 includes an SDS engine 200 coupled between the user applications (e.g., the BPM 130 application and a portal 140) and various data sources 150', 150", 150'" (generally 150), as shown in FIG. 2. In some embodiments, the SDS engine 200 is a process running on one or more servers. The SDS engine 200 supports at least two modes of operation referred to generally as (i) design-time and (ii) run-time. The SDS engine 200 at run-time (illustrated in FIG. 5) includes an executable version of a defined multi-tiered interoperability model, including all of the defined rules relating to that model. For example, the SDS engine 200 can include a Java interpreter executing a Java object that corresponds to the above-defined multi-tiered model and associated rules. The SDS engine 200 at design time generally enables creation and/or editing of the exchange models. Each of the different data sources 150 and the user applications 130, 140 can couple to the SDS engine 200 using one or more of any of a number of available networking techniques supported by interface networks 205', 205". For example, the network can be an Ethernet network, a Web-based network, a dial-up line, leased lines, wireless, etc. Additionally, the SDS engine 200 is coupled to an SDS metadata repository 210.
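As a purely structural sketch, the run-time pieces described above could be organized along the following lines in Java. The interface names and method signatures are illustrative assumptions, not the actual components numbered 200, 210, 215 and 220.

    // A hedged structural sketch of the run-time components; all names and
    // signatures are assumptions used only to show the division of labor.
    import java.util.List;
    import java.util.Map;

    interface MetadataRepository {
        // Returns a stored exchange model and its associated rules by name.
        Map<String, Object> loadModel(String modelName);
    }

    interface AccessFramework {
        // Executes a source-specific query against one divergent data source.
        List<Map<String, Object>> query(String dataSourceId, String sourceQuery);
    }

    interface RunTimeSemanticsEngine {
        // Executes the pre-defined exchange model: reads source data via the
        // access framework, applies rules held in the metadata repository, and
        // returns a payload shaped for a specialized consumer view.
        Map<String, Object> execute(String viewName, Map<String, Object> request);
    }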
The metadata repository 210 stores metadata related to the transformation of data between the various data sources 150 and the user applications 130, 140, as well as other metadata related to semantics. The contents of the metadata repository 210 will be discussed in more detail below. The SDS engine 200 may reside on one or more server computers and includes a run-time semantic engine 215. The run-time semantic engine 215 provides interoperability using the metadata stored on the SDS metadata repository 210. The SDS engine 200 also includes an access framework 220 coupled between the run-time semantic engine 215 and the various data sources 150. The access framework 220 provides the necessary means to couple the run-time semantic engine 215 to the various data sources 150 through a first interface network 205". For example, the access framework 220 multiplexes or otherwise manages access using the various access protocols, nomenclature and query semantics of the data sources 150. Additionally, one or more shared data services 225', 225", 225'" (generally 225) are coupled between the run-time semantic engine 215 and the user applications 130, 140 through a second interface network 205'.

A brief description of exemplary semantics will be helpful before proceeding further with a description of the architecture. In particular, exemplary semantics that can be used to provide interoperability to and among multiple user applications are shown in FIG. 3. For example, a first source (i.e., user) application 300' represents store sales, while a second source application 300" represents Web sales. Similarly, a destination application 310 represents a sales portal. A user accessing the enterprise system through the sales portal 310 may require information, such as total sales, from both of the source applications 300', 300" (generally 300). As each of the source applications 300 may have been independently developed and/or independently operated, the meaning and representation of the data content of each application 300 (i.e., the semantics) may be different. Accordingly, one or more rules 320, 330, 340, 350 are provided that correctly address the different semantics in order to facilitate consistent and accurate transfer of information between the source applications 300 and the sales portal 310.

In more detail, the rules can be categorized according to their function. For example, transformation rules 320 can be used to transform, or map, the data as needed. As shown, an exemplary transformation rule 320 converts a zip code obtained from the first source application 300' into a five-digit plus four-digit format, while providing no transformation for a zip code obtained from the second source application 300", as none is required (presumably it is already in the prescribed format). Similarly, data integrity rules 330 can be used to verify the integrity of certain data elements, as needed. As shown, a data integrity rule 330 verifies that a sales price from the first source application 300' is greater than zero and that the sales item is valid. Additionally, the exemplary data integrity rule 330 requires that the revision number from the second source application 300" be greater than zero. Had the results of either of these rules 330 been untrue for the particular data being passed, a user at the sales portal 310 could be notified, or some other action could be taken. Other rules can include aggregation rules 340 that combine predetermined elements of data.
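The transformation and integrity rules of the FIG. 3 example could be expressed in code roughly as follows. This is a hedged sketch under assumed names; the actual rules are stored as metadata and evaluated by the run-time semantic engine rather than hand-coded this way.

    // A minimal sketch of the FIG. 3 rule categories; names and padding
    // behavior are assumptions used only to illustrate the rule semantics.
    final class ExampleRules {

        // Transformation rule 320: normalize a store-sales zip code to the
        // five-digit plus four-digit format (e.g., "021390000" -> "02139-0000").
        static String transformZipCode(String zip) {
            String digits = zip.replaceAll("[^0-9]", "");
            if (digits.length() == 9) {
                return digits.substring(0, 5) + "-" + digits.substring(5);
            }
            return digits.substring(0, Math.min(5, digits.length())) + "-0000";
        }

        // Data integrity rule 330: a store-sales price must be greater than
        // zero and the sales item must be present.
        static boolean checkStoreSale(double salesPrice, String salesItem) {
            return salesPrice > 0 && salesItem != null && !salesItem.isEmpty();
        }

        // Data integrity rule 330: a Web-sales revision number must exceed zero.
        static boolean checkWebSaleRevision(int revisionNumber) {
            return revisionNumber > 0;
        }
    }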
For example, a rule can be defined to aggregate one value (e.g., NORCAL sales) separately from another value (e.g., SOCAL sales) using a value of sales taken from the second source application 300". Additionally, a further aggregation rule 340 can be defined to aggregate all Nevada sales into yet another value (e.g., NV bucket). Thus, the destination application 310 can access directly
the aggregate values (i.e., NORCAL sales, SOCAL sales, and NV bucket) from the SDS engine 200, despite none of these aggregate values existing in either of the first or second source applications 300. Thus, such aggregation rules can simplify application development for developers of the destination application 310. Still further, additional rules can include business rules 350 that are directed to the business processes themselves. For example, a rule can be defined to confirm that contract approval has been obtained (e.g., reading a value from the first source application 300'), based on a revenue value (e.g., read from the second source application 300") being above a predetermined threshold amount. Thus, a number of business rules can be defined within the semantics, further relieving the destination application developers from requiring detailed knowledge of the particular business processes. Further advantages relating to the location of these rules within the SDS engine 200 will be described in more detail below.

Importantly, the rules can be defined at multiple layers within the multi-tiered model, as required. In some instances the rules can even be conflicting, yet still operate and provide the desired interoperability. An example of a multi-tiered rule with a conflicting element would be a business rule requiring that all purchase orders must have a defined sales representative. A further rule would be that all sales representatives for a Northeast region must be Jane. A conflicting rule would be that all sales representatives for the Southeast region must be Andy. These rules are defined within the same interoperability model. The SDS component 190 ensures, however, that the rules 320, 330, 340, 350 are applied appropriately according to the different business application accessing the interoperability model. In this sense, the semantics as described by the rules 320, 330, 340, 350 above can accommodate much more than simply mapping between different formats and verifying the integrity of the data. Rather, the semantics can capture elements of the business processes themselves. This concept is exemplified above using the aggregation rule 340 and business rule 350.

In one embodiment, interoperability is provided using a tiered model as illustrated in FIG. 4. The tiered model 400 schematically captures the rules and definitions of an interoperability solution coupled between each of the various data sources 410', 410", 410'" (generally 410) and each of a number of user applications (at 28). More generally, the tiered model 400 represents a process flow of an embodiment of the invention. A first notable feature of the model 400 is that all paths between the data sources 410 and the applications lead through a single common exchange tier 420. The exchange tier 420 allows for the definition of a single common object model that can be mapped to and from. Thus, rather than requiring each of the various data sources 410 and user applications at 28 to support a separate interface with each of the other data sources 410 and user applications, each need only support a single interface to the common exchange tier 420. This architecture simplifies system maintenance and development in that a change to any one of the interconnected data sources 410 or user applications at 28 will generally require a change only to its respective interface. Interoperability with the rest of the enterprise will be maintained.
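Referring back to the FIG. 3 example above, the aggregation rule 340 and business rule 350 could be sketched as follows. The bucket keys, field names and revenue threshold are assumptions introduced for illustration; the point is only that these derived values and checks live in the model rather than in either source application.

    // A hedged sketch of aggregation rule 340 and business rule 350; names,
    // region codes and the threshold value are assumptions, not figure content.
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    final class AggregationAndBusinessRules {

        // Aggregation rule 340: bucket Web-sales amounts into NORCAL, SOCAL
        // and NV totals that exist only in the exchange model.
        static Map<String, Double> aggregateSales(List<Map<String, Object>> webSales) {
            Map<String, Double> buckets = new HashMap<>();
            for (Map<String, Object> sale : webSales) {
                String region = (String) sale.get("region");   // e.g., "NORCAL", "SOCAL", "NV"
                double amount = ((Number) sale.get("amount")).doubleValue();
                buckets.merge(region + " sales", amount, Double::sum);
            }
            return buckets;
        }

        // Business rule 350: if revenue exceeds a threshold, contract approval
        // (read from the first source application) must have been obtained.
        static boolean contractApprovalSatisfied(double revenue, boolean approvalObtained) {
            double threshold = 100_000.0;   // assumed threshold, not from the disclosure
            return revenue <= threshold || approvalObtained;
        }
    }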
In more detail, each of the data sources 410 couples to the common exchange tier 420 through a respective storage-side mapping tier 430', 430" (generally 430). The storage-side mapping tiers 430 provide transformations converting the data described according to the schema of the respective data sources 410 to a common object schema of the exchange tier 420. The transformation may result in re-formatting of the data, parsing of the data, re-ordering of the data, etc. As some data sources 410 may store data according to a common schema, a single storage-side mapping tier 430 can be used for more than one storage system 410. Such re-use of software is beneficial for reducing costs and complexities associated with software development and maintenance. Alternatively, should one choose to do so, the common schema of the exchange tier 420 may be the same as the schema of one of the data sources 410, thereby negating the need for a storage-side mapping tier 430. Generally, however, each storage system 410 will have an associated map 430. In some embodiments, the schema of the data sources 410 can be abstracted prior to the storage-side mapping tier 430, using a data abstraction tier. For example, the schema from one data source 410'" can be abstracted using
XMLBeans 440" (known in the industry). Similarly, the schema from one or more of the other data sources 410', 410" can be abstracted using HIBERNATE 440'. Hibernate is a persistence service that stores Java objects in relational databases, or provides an object-oriented view of existing relational data. Additionally, in some embodiments, the model 400 includes an aggregation tier 450. The aggregation tier 450 can be used to combine selected data elements from the various data sources 410.

At this point, it is important to note that the rules, captured in metadata, apply to a "payload" and not simply the data itself. The payload is described by a packet of information moving from one place to another (as in XML). By associating the semantics, or rules, with the payload, the invention truly supports loose coupling. In particular, this is accomplished by using common metadata on all transformations. For example, a payload will be invalid if the shipped date is earlier than the received date.

The model 400 also includes a second, user-side mapping tier 460', 460" (generally 460) that can be used to transform data from the common exchange tier 420 to one or more specialized views 470', 470" (generally 470). Each of the specialized views 470, in turn, can be developed using the requirements of a specific business entity, or organization. As the specialized views are in the language of the particular client's business (e.g., billing, sales, supply chain, etc.), the end-user applications 28 can be developed within the context of the particular business organization. Other developers write the mapping within the exchange layer once, thereby decoupling the views from the data source. Thus, the exchange model and the data map can be defined, re-defined and provided so that the task for business application developers is simplified, as they need only implement changes to the view map. In some embodiments, end-user applications 28 are coupled to the data sources 410 through the model 400 using a Web service 480. Alternatively, or in addition, other end-user applications are coupled to the data sources 410 through the model 400 using Enterprise JavaBeans (EJB) 485. Business applications can access the specialized views 470 using a programming platform, such as Java controls 490', 490". Implicit in this model 400, and selectively applicable at different tiers, are the rules, as described earlier in FIG. 3.

A high-level schematic diagram of the run-time architecture of the semantic engine 215 and exchange (tiered) model 400 is shown in FIG. 5. As illustrated, a number of user applications 500', 500", 500'" (generally 500), or clients, are supported by a server suite 510 that may include one or more of Web services 515', Enterprise JavaBeans 515", WebLogic 515'", and a Java Application Programming Interface (API) 515"". The server suite 510, in turn, is coupled to the various storage systems 520 through an instantiation 530 of the multi-tiered model 400. In particular, the instantiated model includes a data-abstraction tier 535, aggregation and mapping tiers 540, 545, an exchange tier 550, and a business tier 555. The data is passed between the user applications 500 and the model 530 using payload mapping. In more detail, the exchange tier 550 is a common object model 420 and includes rules 560, as well as common object models for particular applications, such as those defined by the Mortgage Industry Standards Maintenance Organization (MISMO), other industry standards, or user-defined models, and the like.
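A payload-level rule of the kind mentioned above (a payload is invalid if the shipped date precedes the received date) could be sketched as follows. The payload shape and field names are assumptions for illustration; in the described system this rule would be carried as metadata attached to every transformation of the payload, not hard-coded per application.

    // A minimal sketch of a payload-level integrity rule; the Map payload
    // shape and the field names are assumptions, not the patented format.
    import java.time.LocalDate;
    import java.util.Map;

    final class PayloadRules {

        // Returns true when the payload satisfies the rule that the shipped
        // date must not be earlier than the received date.
        static boolean shippedNotBeforeReceived(Map<String, Object> payload) {
            LocalDate shipped = LocalDate.parse((String) payload.get("shippedDate"));
            LocalDate received = LocalDate.parse((String) payload.get("receivedDate"));
            return !shipped.isBefore(received);
        }
    }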
Still further, the business tier 555 can also include rules 565, as well as any aggregation and/or mapping required between the exchange tier 550 and the specialized business views 570. The data mapping 545 and aggregation 540 between the abstraction tier 535 and the exchange tier 550 are the run-time counterparts of the mapping 430 and aggregation 450 of model 400 discussed in FIG. 4, and thus operate as described above. Likewise, mapping 51 and aggregating 52 between the exchange tier 550 and the business views 570 (semantics to specialized views 470) operate as the mappings 460 discussed in FIG. 4. Rules 565, 560 are the run-time counterparts to rules 320, 330, 340, 350 discussed in FIGS. 3 and 4.

More generally, the number of tiers defined within the model is selectable and may be varied to suit the particular enterprise 100. For example, a relatively simple enterprise may require only three tiers: a storage-side mapping tier, a common exchange tier and an application-side mapping tier. Even fewer tiers may be required if mapping is not required. Similarly, for a more complex enterprise, the model may include additional tiers. For example, additional tiers can be provided to map and/or aggregate from one specialized view to another specialized view. The additional layers and/or specialized views can be used to facilitate the transfer of information between different user applications in a complex enterprise. One specialized view may be for corporate use, whereas a lower-level specialized view may be for a particular department or business organization within the corporation. Thus, one query language can be used by application developers that is data-source independent, again removing knowledge of the particular data source entirely from the business application developer.

An exemplary mapping is illustrated in FIG. 6. In particular, the mapping transforms between Unified Modeling Language (UML) and Extensible Markup Language (XML) representations. For example, XML can be used to describe the schema of a particular data storage system 410, whereas UML can be used to describe a common object model of the exchange tier 420. Thus, the mapping transformation, as with all of the rules, can operate in both directions; note that arrows appear at both ends of the lines interconnecting the UML to the XML. That is, a query from a user application directed to one or more of the data sources 410 may be mapped, or transformed, at the application-side mapping tier 460 from the originating application query language to UML, presuming that UML is used in the exchange tier 420. The same query may then be further mapped, at the storage-side mapping tier 430, from UML to the schema of the target storage system 410, as illustrated in FIG. 6. Similarly, the results of the query may be returned to the requesting application through the same path, using the same transformations in the opposite direction. Thus, the query results may be mapped from the storage system schema (e.g., XML schema) to UML, then mapped again from UML to a format suitable for interpretation by the requesting application.

FIG. 7 illustrates implementation of an exemplary complex business rule (e.g., 350, 565). In particular, a Document Type Definition (DTD) is shown for a credit report. The DTD can be used to specify the schema of the underlying data. The business rule is defined to compare the number of entries for dependents' age in years with the value in the dependent count attribute. This particular rule can be used as a verification to confirm data accuracy.
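The FIG. 7 style verification could be sketched as below. The element and attribute names implied here (a list of dependent ages and a declared dependent count) are assumptions standing in for the actual DTD content, which is not reproduced in this text.

    // A hedged sketch of the credit-report verification rule: the number of
    // dependent-age entries in the payload must match the declared count.
    import java.util.List;

    final class CreditReportRules {

        static boolean dependentCountConsistent(List<Integer> dependentAgesYears,
                                                int declaredDependentCount) {
            return dependentAgesYears.size() == declaredDependentCount;
        }
    }

If the check fails, the payload would be flagged in the same way as the integrity rules described earlier, before it ever reaches the requesting application.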
A more detailed description of how Hibernate and XMLBeans can be used to provide mapping from a data source 800 to an instantiated Java object model 810 is shown in FIG. 8. The data source 800, such as a DBMS or a data warehouse, stores data according to a native schema. The stored data can be mapped as described above to an exemplary Java object model 810. This object model 810 may represent the common object model of the exchange tier also described above. Alternatively, the object model 810 may represent a data source model, or data abstraction, between the data source 800 and the exchange tier. In more detail, the Java object model 810 includes a number of data elements 820-825 (e.g., representing rows of tables in the data source 800). The data elements 820-825 can be related to each other as indicated by the arrows shown interconnecting the different data elements 820-825. An excerpt of exemplary code 830 represents a portion of the code defining a mapping tier 840.

Thus, for embodiments in which the Java object model 810 described above represents a data source model 900, a further mapping is required to translate between the data source model and a second data model at the exchange tier. An exemplary transformation from a data source model 900 to an exchange tier model is illustrated in FIG. 9. Note that the exchange tier model 910 includes different elements 920-922 (in the exemplary case, fewer elements) than the data source model 900. Similar to the mapping transformation described above in relation to FIG. 8, a second mapping, provided by a mapping tier 930, is used to transform between the data source model 900 and the exchange tier model 910. Further, an aggregation tier 940 can also be defined to combine one or more of the elements of one of the models with a corresponding element of the other of the models. Exemplary code excerpts 950, 960 illustrate portions of the code related to the respective transformation and aggregation tiers 930, 940.

Referring again to FIG. 2, the SDS 190 includes an SDS workbench 250 to support the design-time operational mode. During design time, a developer defines the multi-tiered model, as shown in FIG. 4. Thus, the workbench 250 includes an SDS editor 255 for defining and/or editing any maps necessary for transformation, as well as any of the various rules that may be required, as previously described in relation to FIG. 3. In general, the models and/or rules can be described using a code language, such as XML. Alternatively, or in addition, the models and/or rules can be described graphically using a graphical user interface (GUI). To support design-time operation, the SDS editor 255 includes a model edit/display component 256 providing a developer with a means to edit and/or display the model and its various components.

In more detail, the SDS editor 255 includes a map editor 260 for defining and/or editing maps, or transformation descriptions. The map editor 260 can further include (i) a transformation component 264 for defining and/or editing transformations, and (ii) an aggregation component 262 for defining and/or editing aggregation rules. Additionally, the SDS editor 255 includes a semantics editor 265 for defining and/or editing the rules related to the multi-tiered model. In more detail, the semantics editor 265 includes an aggregation component 266 for defining and/or editing aggregation rules. Similarly, the semantics editor 265 includes a business rules component 267 for defining and/or editing business rules.
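The FIG. 9 idea of translating a richer data source model into a smaller exchange tier model, with some elements combined, could look roughly as follows. The record and field names are assumptions for illustration; the actual code excerpts 830, 950 and 960 are shown only in the figures.

    // A hedged sketch of a source-model to exchange-model mapping with one
    // aggregated element; all class and field names here are assumptions.
    final class SourceToExchangeMapping {

        // Assumed data source model, as abstracted from the source schema.
        record SourceCustomer(String firstName, String lastName,
                              String street, String city, String zip) {}

        // Assumed exchange tier model: fewer elements than the source model.
        record ExchangeCustomer(String fullName, String mailingAddress) {}

        // Mapping tier: converts one model to the other; the aggregation step
        // combines several source elements into a single exchange element.
        static ExchangeCustomer map(SourceCustomer src) {
            String fullName = src.firstName() + " " + src.lastName();
            String mailingAddress = src.street() + ", " + src.city() + " " + src.zip();
            return new ExchangeCustomer(fullName, mailingAddress);
        }
    }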
Still further, the semantics editor 265 includes an integrity rule component 268 for defining and/or editing integrity rules. To support the description and later operation of the multi-tiered model, the
SDS workbench 250 is coupled to heterogeneous data sources. These data sources can include XML and UML data sources 282' and/or ACORD and XSD data sources 282" (generally 282). In some embodiments, the SDS workbench 250 can include a model import and/or export component 280 that can be used to import and/or export information describing the various data sources 282. For example, the model import/export component 280 can be configured to automatically input the respective schema of the various data sources 282. The automatic input can be supported by a source data import component 290 that inspects the various data sources 282 and determines/imports their respective schema. The SDS workbench 250 is further coupled to an SDS model repository 285 used for storing the defined models with the associated mappings and/or rules.

Once created and/or edited, a model together with its associated rules is transformed into a corresponding object supporting run-time operation. For example, a model can be compiled into a Java object to support run-time operation using a Java engine. Accordingly, the SDS workbench 250 can include a dynamic code generation component 292. The dynamic code generation component can transform the model and rules into an appropriate format as required by the SDS engine 200 to support run-time operation. Additionally, the SDS workbench 250 can optionally include an impact analysis component 294. The impact analysis component can be configured to inspect the created and/or edited multi-tiered model and/or its associated rules during design time, to determine its impact on run-time operation. Advantageously, the impact analysis feature is supported by the separate design-time/run-time operation of the SDS 190.

A schematic diagram of a graphical representation 1000 of the multi-tiered model is shown in FIG. 10. For example, this graphical representation may be viewed on the SDS editor 255 of the SDS workbench 250 during design time. As shown, the model can include a number of source data models 1010', 1010", 1010'" (generally 1010) and a number of shared services data models 1020', 1020", 1020'" (generally 1020), all interconnected through a common domain model 1030. As illustrated, each of the data models 1010 generally connects to the common domain model 1030 through a respective storage-side map 1035', 1035", 1035'" (generally 1035). Similarly, each of the shared data services models 1020 generally connects to the common domain model 1030 through a respective user-side map 1040', 1040", 1040'" (generally 1040). Additionally, some of the mapped data sources 1010 can be combined using an aggregation tier 1045.

As shown, the rules can apply at any of the model tiers. For example, transformation rules 1050 can be applied at some, or all, of the maps 1035, 1040. That is, transformation rules can be defined in the SDS editor 255 that will be applied to the payloads as the data is mapped from one model to another. Similarly, integrity rules 1055 can be defined in the SDS editor 255 that will be selectively applied to the payloads of some or all of the data models 1010, 1020, and additionally to the common domain model 1030. Still further, business rules 1055 can also be defined in the SDS editor 255 that will be selectively applied to the payloads of some or all of the models 1010, 1020, 1030. In some embodiments, the SDS editor 255 provides a Graphical User Interface (GUI) that allows an operator to design the model graphically, as shown.
Thus, the particular data models 1010, 1020, 1030 can be individually constructed in a graphical manner. The data models 1010, 1020, 1030, once constructed, can be interconnected as shown to form the overall multi-tiered model. Additionally, the different rules can also be identified graphically and located at the respective positions within the multi-tiered model at which they apply. The SDS editor 255 can provide further support to allow an operator to define the models and rules and edit them once created.

A more detailed illustration of how one of the models 1010, 1020, 1030 can be defined and edited using the GUI of the SDS editor 255 is shown in FIG. 11. An exemplary model for a claim is shown having a number of elements (e.g., account, business party, etc.) related as indicated by the interconnecting arrows. Notably, the exchange model results in a "tree," rather than a flat row in a table. This representation also provides infrastructure for determining an impact analysis, such as determining the effects of a change to the model. Advantageously, the editor 255 can use the elements of an imported schema, thereby simplifying an operator's task of defining the model. As shown in FIG. 2, the SDS workbench includes a model import/export capability 280. The SDS editor 255 also allows the elements of one model to be mapped to elements of another model graphically, using interconnects similar to those shown in FIG. 11. The SDS editor 255 then allows an operator to define semantics at any one or more of the elements of the model. After an editing session is concluded, the edited model and rules are stored within the SDS model repository 285 of FIG. 2.

A representation of an exemplary Java object model 1200 for a purchase order is shown in FIG. 12. The SDS editor 255 can be used to develop the model 1200 having a number of elements (e.g., P.O. #, Customer, Item #, Price and Sales Rep.), with each of these elements representing data (e.g., a row) from one or more of the data sources. The data entries identified as "XXXXXX" represent a payload of data being handled by the model 1200. As described above, the model may also include one or more rules that are associated with it. These rules can include transformation rules directed to the transformation of payload data for one of the elements, aggregation rules directed to the combination of payload data from more than one data source, validation rules directed to the validation of the payloads associated with any or all of the elements, and business process rules that again operate on the payloads of one or more of the elements. In one embodiment, the model 1200 can be defined in metadata. For example, a schematic representation of the metadata 1210 associated with the model 1200 includes a portion 1220 that defines the different fields of the model 1200; an illustrative sketch of such metadata in code form follows.
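The sketch below shows, under assumed names, how a purchase order model and its associated metadata portions might be represented. The field names, source element bindings and rule expressions are assumptions; only the general shape (a field-defining portion and a rule-defining portion) follows the description of metadata 1210.

    // A hedged illustration of a purchase order model defined in metadata;
    // field names, types, source bindings and rule text are assumptions.
    import java.util.List;

    final class PurchaseOrderModelMetadata {

        record FieldDef(String name, String type, String sourceElement) {}
        record RuleDef(String kind, String expression) {}

        // Portion defining the fields of the model (cf. portion 1220).
        static final List<FieldDef> FIELDS = List.of(
                new FieldDef("poNumber", "string", "ORDERS.PO_NO"),
                new FieldDef("customer", "string", "CUSTOMER.NAME"),
                new FieldDef("itemNumber", "string", "ORDER_LINES.ITEM_NO"),
                new FieldDef("price", "decimal", "ORDER_LINES.PRICE"),
                new FieldDef("salesRep", "string", "SALES.REP_NAME"));

        // Portion defining the rules associated with the model (cf. portion 1230).
        static final List<RuleDef> RULES = List.of(
                new RuleDef("validation", "price > 0"),
                new RuleDef("business", "every purchase order must have a defined sales representative"));
    }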
The metadata 1210 also includes a portion 1230 that defines the metadata associated with any of the associated rules. In some embodiments, the rules can include inferences, such as inferences drawn from the schema of a data source.

In operation, referring now to FIG. 13, a process 1300 is running on a server within an enterprise supporting, among other things, online sales. The process 1300 receives a request from a user application 1310, such as a Web service supporting a purchase request via online sales. The process 1300 may request information from the customer, such as the customer's identity, address, shipping information, billing information, etc. Further, the user application 1310 formulates the request as a first query directed to the common exchange tier 420. The customer may have already provided this information to the system during an earlier transaction. So, the process 1300 needs to check enterprise data (e.g., checking data sources 1315 containing master customer records) to locate this customer information, if available, and then update any information as required. Thus, the process translates the query to one or more queries, as required, directed to the various data sources 1315. Additionally, the process may require that certain business rules be imposed, for example, checking the customer's creditworthiness with a business partner 1320, such as Dunn & Bradstreet. Accordingly, the process 1300 uses the SDS engine 200 to run a previously defined multi-tiered model with all of the applicable transformations, mappings, aggregations and business rules already defined. Thus, the process 1300 need only interface with a predefined specialized view in the multi-tiered model. All data queries and rules will be imposed, as required, and the process 1300 will receive the requested information through the multi-tiered model. If necessary, the multi-tiered model can also be used by the process 1300 to update customer information within the applicable data sources 1315. A different representation of the above-described example is also shown in
FIG. 14. In this example, the SDS engine 1400 allows user applications 1410', 1410" to perform operations including updating a customer 1420, adding a new customer 1422, and/or adding a new customer site 1424. One of the operations, for example, add customer 1420, may itself entail a number of more detailed operations. As shown, add customer 1420 may use a validation to validate the requested operation, and a payload transformation converting the query and/or the results of the query, as required. Further, add customer 1420 may also use a business rule to check the DUNS number, and perform an additional validation, such as validate address, to ensure the integrity of the data. Still further, add customer 1420 may aggregate, or merge, some of the payload information, such as merging an ERP customer reference identification with the original query and/or results. Ultimately, add customer 1420 inserts the customer information into one of the heterogeneous data sources, such as an Oracle DBMS. Importantly, the business application developer need only provide the add customer feature; the SDS engine 1400, running a multi-tiered model, performs all of the above-described actions to ensure that the data is handled consistently and accurately according to the business rules of the enterprise. This releases that burden from the application developer and results in less complex and less costly development of business applications, while ensuring integrity and reusability of the data of the enterprise.

While this invention has been particularly shown and described with references to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the invention encompassed by the appended claims.
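For illustration, the FIG. 14 style add-customer flow could be sketched as follows. The helper names, the data source interface and the merge logic are assumptions; in the described system these steps would be driven by the multi-tiered model rather than coded by the business application developer.

    // A hedged sketch of an add-customer flow: validation, a DUNS business
    // rule, merging an ERP customer reference, and insertion into a source.
    import java.util.HashMap;
    import java.util.Map;

    final class AddCustomerService {

        interface DataSourceWriter {
            void insertCustomer(Map<String, Object> row);   // e.g., backed by an RDBMS
        }

        private final DataSourceWriter writer;

        AddCustomerService(DataSourceWriter writer) {
            this.writer = writer;
        }

        // The business application calls only this method; the steps below
        // stand in for what the multi-tiered model would apply automatically.
        void addCustomer(Map<String, Object> requestPayload) {
            if (!isValidAddress((String) requestPayload.get("address"))) {
                throw new IllegalArgumentException("validate address failed");
            }
            if (!checkDunsNumber((String) requestPayload.get("dunsNumber"))) {
                throw new IllegalArgumentException("DUNS number check failed");
            }
            Map<String, Object> row = new HashMap<>(requestPayload);                 // payload transformation (identity here)
            row.put("erpCustomerRef", lookupErpCustomerReference(requestPayload));   // aggregation/merge step
            writer.insertCustomer(row);
        }

        private boolean isValidAddress(String address) {
            return address != null && !address.isBlank();
        }

        private boolean checkDunsNumber(String duns) {
            return duns != null && duns.matches("\\d{9}");   // DUNS numbers are nine digits
        }

        private String lookupErpCustomerReference(Map<String, Object> payload) {
            return "ERP-" + payload.get("customerName");     // placeholder merge value
        }
    }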

Claims

What is claimed is:
1. A method for providing interoperability between consumers of shared services and divergent data sources comprising: providing a reconfigurable exchange model adapted to accommodate data from a plurality of divergent data sources, each data source including data elements and being defined by a respective schema; associating a rule with the exchange model, the rule operating on selected data elements of the accommodated data; defining a transformation between at least one of the plurality of divergent data sources and the reconfigurable exchange model; and using a defined transformation, providing a view of the reconfigurable exchange model, the view supporting consumer access to a data element of at least one of the plurality of divergent data sources.
2. The method of claim 1, wherein the step of providing a reconfigurable exchange model comprises defining the reconfigurable exchange model using meta data.
3. The method of claim 2, wherein the meta data comprises XML.
4. The method of claim 2, wherein the step of associating a rule comprises: defining the rule using meta data; and combining the rule meta data and the exchange model meta data.
5. The method of claim 1, wherein the rule is one of a transformation rule, a validation rule, a business process rule, an aggregation rule, and combinations thereof.
6. The method of claim 1, wherein the step of defining a transformation includes importing the schema of the divergent data source; and mapping the imported schema to the reconfigurable exchange model.
7. The method of claim 6, further comprising abstracting the imported schema.
8. The method of claim 1, further including providing an editor for reconfiguring the exchange model during a design time.
9. The method of claim 1, further including providing an editor for defining the transformation during a design time.
10. The method of claim 1, wherein the divergent data sources are any one or combination of XML data sources, DBMS databases, ORACLE databases, proprietary data sources, Web Services, and JAVA Control.
11. The method of claim 1, further comprising the steps of: receiving from a consumer a first query directed to the exchange model; translating the first query to a second query directed to at least one of the divergent data sources.
12. An apparatus for providing interoperability between consumers of shared services and divergent data sources comprising: a metadata repository for storing metadata; and a shared data services (SDS) engine in electrical communication with the metadata repository, the SDS engine including: a user interface configurable to support communications between a plurality of consumers and the shared data services engine; an access framework configurable to support interconnection to a plurality of divergent data sources; a run-time semantics engine for executing a pre-defined exchange model with associated rules stored in the metadata repository, the executed exchange model accessing data from the plurality of divergent data sources, transforming accessed data to a common exchange model, and further transforming from the common exchange model to a view that supports access from the plurality of consumers.
13. The apparatus of claim 12, further comprising an SDS workbench coupled between the plurality of divergent data sources and the SDS engine.
14. The apparatus of claim 13, wherein the SDS workbench further includes: a model import/export module for importing and exporting information related to the plurality of data sources; and an SDS editor for defining and modifying data models and/or rules.
15. The apparatus of claim 14, wherein the SDS editor further comprises: a model editor and display module for editing and displaying a representation of a data model; a semantic editor for editing rules; and a map editor for editing and modifying a map defining a relationship between data models.
16. The apparatus of claim 14, wherein the SDS workbench further includes an impact analysis module for determining the impact of a model before running the model on the SDS engine.
17. The apparatus of claim 14, wherein the SDS workbench further includes a dynamic code generation module for automatically generating computer code corresponding to the results of the edited model.
18. The apparatus of claim 14, wherein the imported information includes schema relating to one of the plurality of divergent data sources.
19. The apparatus of claim 12, wherein the run-time semantics engine comprises a server executing a multi-tiered model.
20. The apparatus of claim 19, wherein the semantics engine executes a Java program.
21. The apparatus of claim 12, wherein the divergent data sources are any one or combination of XML data sources, DBMS databases, ORACLE databases, proprietary data sources, Web Services, and JAVA Control.
22. The apparatus of claim 12, wherein the run-time semantics engine receives from a consumer a first query directed to the exchange model and translates the first query to a second query directed to at least one of the divergent data sources.
23. A computer program product comprising: a computer usable medium for providing interoperability between consumers of shared services and divergent data sources; a set of computer program instructions embodied on the computer usable medium, including instructions to: provide a reconfigurable exchange model adapted to accommodate data from a plurality of divergent data sources, each data source including data elements and being defined by a respective schema; associate a rule with the exchange model, the rule operating on selected data elements of the accommodated data; define a transformation between at least one of the plurality of divergent data sources and the reconfigurable exchange model; and use a defined transformation, providing a view of the reconfigurable exchange model, the view supporting consumer access to a data element of at least one of the plurality of divergent data sources.
24. A computer data signal embodied in a carrier wave comprising a code segment for providing interoperability between consumers of shared services and divergent data sources, including instructions to: provide a reconfigurable exchange model adapted to accommodate data from a plurality of divergent data sources, each data source including data elements and being defined by a respective schema; associate a rule with the exchange model, the rule operating on selected data elements of the accommodated data; define a transformation between at least one of the plurality of divergent data sources and the reconfigurable exchange model; and use a defined transformation, providing a view of the reconfigurable exchange model, the view supporting consumer access to a data element of at least one of the plurality of divergent data sources.
25. An apparatus for providing interoperability between consumers of shared services and divergent data sources comprising: means for providing a reconfigurable exchange model adapted to accommodate data from a plurality of divergent data sources, each data source including data elements and being defined by a respective schema; means for associating a rule with the exchange model, the rule operating on selected data elements of the accommodated data; means for defining a transformation between at least one of the plurality of divergent data sources and the reconfigurable exchange model; and means for using a defined transformation, providing a view of the reconfigurable exchange model, the view supporting consumer access to a data element of at least one of the plurality of divergent data sources.
PCT/US2004/044032 2004-01-19 2004-12-30 Enterprise interoperability using shared data services WO2005072114A2 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US75952404A 2004-01-19 2004-01-19
US10/759,524 2004-01-19

Publications (2)

Publication Number Publication Date
WO2005072114A2 true WO2005072114A2 (en) 2005-08-11
WO2005072114A3 WO2005072114A3 (en) 2006-05-26

Family

ID=34826440

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2004/044032 WO2005072114A2 (en) 2004-01-19 2004-12-30 Enterprise interoperability using shared data services

Country Status (1)

Country Link
WO (1) WO2005072114A2 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5627972A (en) * 1992-05-08 1997-05-06 Rms Electronic Commerce Systems, Inc. System for selectively converting a plurality of source data structures without an intermediary structure into a plurality of selected target structures
US5778373A (en) * 1996-07-15 1998-07-07 At&T Corp Integration of an information server database schema by generating a translation map from exemplary files
US20010056504A1 (en) * 1999-12-21 2001-12-27 Eugene Kuznetsov Method and apparatus of data exchange using runtime code generator and translator
US20030014617A1 (en) * 2001-05-07 2003-01-16 Aderbad Tamboli Method, system, and product for data integration through a dynamic common model
US20030014500A1 (en) * 2001-07-10 2003-01-16 Schleiss Trevor D. Transactional data communications for process control systems

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8364759B2 (en) 2007-05-04 2013-01-29 Microsoft Corporation Mesh-managing data across a distributed set of devices
US8484174B2 (en) 2008-03-20 2013-07-09 Microsoft Corporation Computing environment representation
US8572033B2 (en) 2008-03-20 2013-10-29 Microsoft Corporation Computing environment configuration
US9298747B2 (en) 2008-03-20 2016-03-29 Microsoft Technology Licensing, Llc Deployable, consistent, and extensible computing environment platform
US9332063B2 (en) 2008-03-20 2016-05-03 Microsoft Technology Licensing, Llc Versatile application configuration for deployable computing environments
US9753712B2 (en) 2008-03-20 2017-09-05 Microsoft Technology Licensing, Llc Application management within deployable object hierarchy
US10514901B2 (en) 2008-03-20 2019-12-24 Microsoft Technology Licensing, Llc Application management within deployable object hierarchy
FR2939934A1 (en) * 2008-12-16 2010-06-18 Thales Sa DATA REPORTING AND SUBSCRIPTION SYSTEM
WO2010070006A1 (en) * 2008-12-16 2010-06-24 Thales Data publication and subscription system
US20210056116A1 (en) * 2010-07-09 2021-02-25 State Street Corporation Systems and Methods for Data Warehousing
US11960496B2 (en) * 2020-06-01 2024-04-16 State Street Corporation Systems and methods for data warehousing

Also Published As

Publication number Publication date
WO2005072114A3 (en) 2006-05-26

Similar Documents

Publication Publication Date Title
US7814142B2 (en) User interface service for a services oriented architecture in a data integration platform
US8041760B2 (en) Service oriented architecture for a loading function in a data integration platform
US8060553B2 (en) Service oriented architecture for a transformation function in a data integration platform
US7814470B2 (en) Multiple service bindings for a real time data integration service
US8307109B2 (en) Methods and systems for real time integration services
US7761406B2 (en) Regenerating data integration functions for transfer from a data integration platform
US20050262193A1 (en) Logging service for a services oriented architecture in a data integration platform
US20050240354A1 (en) Service oriented architecture for an extract function in a data integration platform
US20050228808A1 (en) Real time data integration services for health care information data integration
US20050262190A1 (en) Client side interface for real time data integration jobs
US20050234969A1 (en) Services oriented architecture for handling metadata in a data integration platform
US20050262189A1 (en) Server-side application programming interface for a real time data integration service
US20050240592A1 (en) Real time data integration for supply chain management
US20050235274A1 (en) Real time data integration for inventory management
US20050222931A1 (en) Real time data integration services for financial information data integration
US20050232046A1 (en) Location-based real time data integration services
US20060069717A1 (en) Security service for a services oriented architecture in a data integration platform
US20050223109A1 (en) Data integration through a services oriented architecture
US20060010195A1 (en) Service oriented architecture for a message broker in a data integration platform
US20050243604A1 (en) Migrating integration processes among data integration platforms
US7743391B2 (en) Flexible architecture component (FAC) for efficient data integration and information interchange using web services
US20050251533A1 (en) Migrating data integration processes through use of externalized metadata representations
CN102622675B (en) Method and system for realizing interoperation of enterprises under cluster supply chain environment
WO2006026673A2 (en) Architecture for enterprise data integration systems
US20070169016A1 (en) Systems and methods for providing mockup business objects

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

NENP Non-entry into the national phase in:

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase