US20150294048A1 - Future Reliability Prediction Based on System Operational and Performance Data Modelling


Info

Publication number
US20150294048A1
Authority
US
United States
Prior art keywords
data
maintenance
reliability
asset
maintenance expense
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
US14/684,358
Other versions
US10409891B2 (en)
Inventor
Richard B. Jones
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hartford Steam Boiler Inspection and Insurance Co
Original Assignee
Hartford Steam Boiler Inspection and Insurance Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hartford Steam Boiler Inspection and Insurance Co filed Critical Hartford Steam Boiler Inspection and Insurance Co
Priority to US14/684,358 (granted as US10409891B2)
Assigned to HARTFORD STEAM BOILER INSPECTION AND INSURANCE COMPANY (assignment of assignors interest; see document for details). Assignors: JONES, RICHARD BRADLEY
Publication of US20150294048A1
Application granted
Priority to US16/566,845 (granted as US11550874B2)
Publication of US10409891B2
Priority to US18/094,835 (published as US20230169146A1)
Legal status: Active
Expiration: Adjusted

Classifications

    • G06F17/5009
    • G06F17/18: Complex mathematical operations for evaluating statistical data, e.g. average values, frequency distributions, probability functions, regression analysis
    • G06Q10/06: Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
    • G06Q10/0635: Risk analysis of enterprise or organisation activities
    • G06F2111/10: Numerical modelling (details relating to CAD techniques)
    • Y02P90/845: Inventory and reporting systems for greenhouse gases [GHG]

Definitions

  • the disclosure generally relates to the field of modelling and predicting future reliability of measurable systems based on operational and performance data, such as current and historical data regarding production and/or cost associated with maintaining equipment. More particularly, but not by way of limitation, embodiments within the disclosure perform comparative performance analysis and/or determine model coefficients used to model and estimate future reliability of one or more measurable systems.
  • for repairable systems, there is a general correlation between the methodology and process used to maintain the systems and the future reliability of those systems.
  • individuals who have owned or operated a bicycle, a motor vehicle, and/or any other transportation vehicle are typically aware that the operating condition and reliability of the transportation vehicles can be dependent to some extent on the degree and quality of activities to maintain the transportation vehicles.
  • although a correlation may exist between maintenance quality and future reliability, quantifying and/or modelling this relationship may be difficult.
  • similar relationships and/or correlations may hold for a wide variety of measurable systems where operational and/or performance data is available, or otherwise where the data used to evaluate a system can be measured.
  • the value or amount of maintenance spending may not necessarily be an accurate indicator for predicting future reliability of the repairable system.
  • Individuals can accrue maintenance costs that are spent on task items that have relatively minimal effect on improving future reliability. For example, excessive maintenance spending may originate from actual system failures rather than from performing preventive maintenance related tasks.
  • system failures, breakdowns, and/or unplanned maintenance can cost more than a preventive and/or predictive maintenance program that utilizes comprehensive maintenance schedules. As such, improvements are needed to increase the accuracy of modelling and predicting the future reliability of a measurable system.
  • a system for modelling future reliability of a facility based on operational and performance data comprising an input interface configured to: receive maintenance expense data corresponding to a facility; receive first principle data corresponding to the facility; and receive asset reliability data corresponding to the facility.
  • the system may also comprise a processor coupled to a non-transitory computer readable medium, wherein the non-transitory computer readable medium comprises instructions that, when executed by the processor, cause the system to: obtain one or more comparative analysis models associated with the facility; obtain a maintenance standard that generates a plurality of category values that categorize the maintenance expense data by a designated interval based upon at least the maintenance expense data, the first principle data, and the one or more comparative analysis models; and determine an estimated future reliability of the facility based on the asset reliability data and the plurality of category values.
  • the computing node may also comprise a user interface that displays the results of the future reliability estimation.
  • a method for modelling future reliability of a measurable system based on operational and performance data comprising: receiving maintenance expense data via an input interface associated with a measurable system; receiving first principle data via an input interface associated with the measurable system; receiving asset reliability data via an input interface associated with the measurable system; generating, using a processor, a plurality of category values that categorize the maintenance expense data by a designated interval using a maintenance standard that is generated from one or more comparative analysis models associated with the measurable system; determining, using a processor, an estimated future reliability of the measurable system based on the asset reliability data and the plurality of category values; and outputting the results of the estimated future reliability using an output interface.
  • an apparatus for modelling future reliability of an equipment asset based on operational and performance data comprising an input interface comprising a receiving device configured to: receive maintenance expense data corresponding to an equipment asset; receive first principle data corresponding to the equipment asset; receive asset reliability data corresponding to the equipment asset; a processor coupled to a non-transitory computer readable medium, wherein the non-transitory computer readable medium comprises instructions that, when executed by the processor, cause the apparatus to: generate a plurality of category values that categorize the maintenance expense data by a designated interval from a maintenance standard; and determine an estimated future reliability of the equipment asset comprising estimated future reliability data based on the asset reliability data and the plurality of category values; and an output interface comprising a transmission device configured to transmit a processed data set that comprises the estimated future reliability data to a control center for comparing different equipment assets based on the processed data set.
  • FIG. 1 is a flow chart of an embodiment of a data analysis method that receives data from one or more various data sources relating to a measureable system, such as a power generation plant;
  • FIG. 2 is a schematic diagram of an embodiment of a data compilation table generated in the data compilation of the data analysis method described in FIG. 1 ;
  • FIG. 3 is a schematic diagram of an embodiment of a categorized maintenance table generated in the categorized time based maintenance data of the data analysis method described in FIG. 1 ;
  • FIG. 4 is a schematic diagram of an embodiment of a categorized reliability table generated in the categorized time based reliability data of the data analysis method described in FIG. 1 ;
  • FIG. 5 is a schematic diagram of an embodiment of a future reliability data table generated in the future reliability prediction of the data analysis method described in FIG. 1 ;
  • FIG. 6 is a schematic diagram of an embodiment of a future reliability statistic table generated in the future reliability prediction of the data analysis method described in FIG. 1 ;
  • FIG. 7 is a schematic diagram of an embodiment of a user interface input screen configured to display information a user may need to input to determine a future reliability prediction using the data analysis method described in FIG. 1 ;
  • FIG. 8 is a schematic diagram of an embodiment of a user interface input screen configured for EFOR prediction using the data analysis method described in FIG. 1 ;
  • FIG. 9 is a schematic diagram of an embodiment of a computing node for implementing one or more embodiments.
  • FIG. 10 is a flow chart of an embodiment of a method for determining model coefficients for use in comparative performance analysis of a measureable system, such as a power generation plant.
  • FIG. 11 is a flow chart of an embodiment of a method for determining primary first principle characteristics as described in FIG. 10 .
  • FIG. 12 is a flow chart of an embodiment of a method for developing constraints for use in solving the comparative analysis model as described in FIG. 10 .
  • FIG. 13 is a schematic diagram of an embodiment of a model coefficient matrix for determining model coefficients as described in FIGS. 10-12 .
  • FIG. 14 is a schematic diagram of an embodiment of a model coefficient matrix with respect to a fluidized catalytic cracking unit (Cat Cracker) for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12 .
  • FIG. 15 is a schematic diagram of an embodiment of a model coefficient matrix with respect to the pipeline and tank farm for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12 .
  • FIG. 16 is a schematic diagram of another embodiment of a computing node for implementing one or more embodiments.
  • disclosed herein are one or more embodiments for estimating future reliability of measurable systems.
  • one or more embodiments may obtain model coefficients for use in comparative performance analysis by determining one or more target variables and one or more characteristics for each of the target variables.
  • the target variables may represent different parameters for a measureable system.
  • the characteristics of a target variable may be collected and sorted according to a data collection classification.
  • the data collection classification may be used to quantitatively measure the differences in characteristics.
  • a comparative analysis model may be developed to compare predicted target variables to actual target variables for one or more measureable systems.
  • the comparative analysis model may be used to obtain a set of complexity factors that attempts to minimize the differences in predicted versus actual target variable values within the model.
  • the comparative analysis model may then be used to develop a representative value for activities performed periodically on the measurable system to predict future reliability.
  • FIG. 1 is a flow chart of an embodiment of a data analysis method 60 that receives data from one or more various data sources relating to a measureable system, such as a power generation plant.
  • the data analysis method 60 may be implemented by a user, a computing node, or combinations thereof to estimate future reliability of a measureable system.
  • the data analysis method 60 may automatically receive updated available data, such as updated operational and performance data, from various data sources, update one or more comparative analysis models using the received updated data, and subsequently provide updated estimates of future reliability for one or more measurable systems.
  • a measurable system is any system that is associated with performance data, conditioned data, operation data, and/or other types of measurable data (e.g., quantitative and/or qualitative data) used to evaluate the status of the system.
  • the measurable system may be monitored using a variety of parameters and/or performance factors associated with one or more components of the measurable system, such as in a power plant, facility, or commercial building.
  • the measurable system may be associated with available performance data, such as stock prices, safety records, and/or company finance.
  • the terms “measurable system,” “facility,” “asset,” or “plant,” may be used interchangeably throughout this disclosure.
  • the data from the various data sources may be applied at different computational stages to model and/or improve future reliability predictions based on available data for a measureable system.
  • the available data may be current and historic maintenance data that relates to one or more measurable parameters of the measureable system. For instance, in terms of maintenance and repairable equipment, one way to describe maintenance quality is to compute the annual or periodic maintenance cost for a measurable system, such as an equipment asset.
  • the annual or periodic maintenance number denotes the amount of money spent over a given period of time, which may not necessarily accurately reflect future reliability.
  • a vehicle owner may spend money to wash and clean a vehicle weekly, but spend relatively little or no money on maintenance that could potentially increase the future reliability of the car, such as replacing tires and/or oil or filter changes.
  • although the annual maintenance costs for washing and cleaning the car may be sizeable when performed frequently, those maintenance tasks and/or activities may have relatively little or no effect on improving the car's reliability.
  • FIG. 1 illustrates that the data analysis method 60 may be used to predict the future Equivalent Forced Outage Rate (EFOR) estimates for Rankine and Brayton cycle based power generation plants.
  • EFOR is defined as the hours of unit failure (e.g., unplanned outage hours and equivalent unplanned derated hours) given as a percentage of the total hours of the availability of that unit (e.g., unplanned outage, unplanned derated, and service hours).
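Written compactly (the symbols below are ours, not the patent's), with $H_{uo}$ the unplanned outage hours, $H_{ud}$ the equivalent unplanned derated hours, and $H_s$ the service hours, this definition reads:

```latex
\mathrm{EFOR} = \frac{H_{uo} + H_{ud}}{H_{uo} + H_{ud} + H_s} \times 100\%
```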
  • the data analysis method 60 may initially obtain asset maintenance expense data 62 and asset unit first principle data or other asset-level data 64 that relate to the measurable system, such as a power generation plant.
  • Asset maintenance expense data 62 for a variety of facilities may typically be obtained directly from the plant facilities.
  • the asset maintenance expense data 62 may represent the cost associated with maintaining a measurable system for a specified time period (e.g., seconds, minutes, hours, months, or years).
  • the asset maintenance expense data 62 may be the annual or periodic maintenance cost for one or more measurable systems.
  • the asset unit first principle data or other asset-level data 64 may represent physical or fundamental characteristics of a measurable system.
  • the asset unit first principle data or other asset-level data 64 may be operational and performance data, such as turbine inlet temperature, age of the asset, size, horsepower, amount of fuel consumed, and actual power output compared to nameplate rating, that correspond to one or more measurable systems.
  • the data obtained in the first data collection stage may be subsequently received or entered to generate a maintenance standard 66 .
  • the maintenance standard 66 may be an annualized maintenance standard where a user supplies in advance one or more modelling equations that compute the annualized maintenance standard. The result may be used to normalize the asset maintenance expense data 62 and provide a benchmark indicator to measure the adequacy of spending relative to other power generation plants of a similar type.
  • a divisor or standard can be computed based on the asset unit's first principle data or other asset-level data 64 , which are explained in more detail in FIGS. 10-12 .
  • Alternative embodiments may produce the maintenance standard 66 , for example, from simple regression analysis with data from available plant related target variables.
  • the data analysis method 60 may generate a maintenance standard 66 that develops a representative value for maintenance activities on a periodic basis. For example, to generate the maintenance standard 66 , the data analysis method 60 may normalize maintenance expenses to some other time period. In another embodiment, the data analysis method 60 may generate a periodic maintenance spending divisor to normalize the actual periodic maintenance spending and measure under-spending (Actual Expense/Divisor ratio < 1) or over-spending (Actual Expense/Divisor ratio > 1).
  • the maintenance spending divisor may be a value computed from a semi-empirical analysis of data using asset maintenance expense data 62 , asset unit first principle data or other asset-level data 64 (e.g., asset characteristics), and/or documented expert opinions.
  • asset unit first principle data or other asset-level data 64 such as plant size, plant type, and/or plant output, in conjunction with computed annualized maintenance expenses, may be used to compute a standard maintenance expense (divisor) value for each asset in the analysis, as described in U.S. Pat. No. 7,233,910, filed Jul. 18, 2006, titled "System and Method for Determining Equivalency Factors for use in Comparative Performance Analysis of Industrial Facilities," which is hereby incorporated by reference as if reproduced in its entirety.
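As a rough illustration of the normalization just described, the sketch below computes a spending ratio from a hypothetical standard. The weighted-sum form of the standard (divisor), the coefficient values, and all names here are illustrative assumptions; the patent derives the divisor from a semi-empirical comparative analysis (see U.S. Pat. No. 7,233,910).

```python
def maintenance_standard(first_principle: dict, coefficients: dict) -> float:
    """Standard (divisor) maintenance expense from first principle data,
    assuming a simple weighted-sum model."""
    return sum(coefficients[k] * first_principle[k] for k in coefficients)

def spending_ratio(actual_annualized_expense: float, standard: float) -> float:
    """Ratio < 1 suggests under-spending, ratio > 1 over-spending,
    relative to the computed standard."""
    return actual_annualized_expense / standard

plant = {"size_mw": 500.0, "age_years": 22.0}   # hypothetical first principle data
coeffs = {"size_mw": 18.0, "age_years": 95.0}   # hypothetical model coefficients (k$)
std = maintenance_standard(plant, coeffs)       # 11,090 k$
print(round(spending_ratio(10_500.0, std), 3))  # 0.947: modest under-spending
```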
  • the calculation may be performed with a historical dataset that may include the assets under current analysis.
  • the maintenance standard calculation may be applied as a model that includes one or more equations for modelling a measurable system's future reliability prediction.
  • the data used to compute the maintenance standard divisor may be supplied by the user, transferred from a remote storage device, and/or received via a network from a remote network node, such as a server or database.
  • FIG. 1 illustrates that the data analysis method 60 may receive the asset reliability data 70 in a second data collection stage.
  • the asset reliability data 70 may correspond to each of the measureable systems.
  • the asset reliability data 70 is any data that corresponds to determining the reliability, failure rate and/or unexpected down time of a measurable system.
  • the asset reliability data 70 may be compiled and linked to each measurable system's maintenance spending ratio, which may be associated with or shown on the same line as the other measurable-system and time-specific data.
  • the asset reliability data 70 may be obtained from the North American Electric Reliability Corporation's Generating Availability Database (NERC-GADS). Other types of measurable systems may also obtain asset reliability data 70 from similar databases.
  • the data analysis method 60 compiles the computed maintenance standard 66 , asset maintenance expense data 62 , and asset reliability data 70 into a common file.
  • the data analysis method 60 may add an additional column to the data arrangement within the common file.
  • the additional column may represent the ratios of actual annualized maintenance expenses and the computed standard value for each measureable system.
  • the data analysis method 60 may also add another column within the data compilation 68 that categorizes the maintenance spending ratios divided by some percentile intervals or categories. For example, the data analysis method 60 may use nine different intervals or categories to categorize the maintenance spending ratios.
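A minimal sketch of this percentile categorization, assuming pandas is available; the nine-way split matches the example above, while the tool choice and the sample ratio values are ours, not the patent's.

```python
import pandas as pd

ratios = pd.Series([0.55, 0.78, 0.81, 0.92, 0.97, 1.03, 1.10, 1.25, 1.60,
                    0.70, 0.88, 1.01, 1.15, 1.40, 0.95, 1.05, 0.60, 1.30])
# Nine ordinal categories labelled 1..9 by percentile rank of the ratio.
categories = pd.qcut(ratios, q=9, labels=list(range(1, 10)))
print(categories.tolist())
```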
  • the data analysis method 60 may place the maintenance category values into a matrix, such as a 2×2 matrix, that defines each measurable system, such as a power generation plant, and time unit.
  • the data analysis method 60 assigns the reliability for each measureable system using the same matrix structure as described in the categorized time based maintenance data 72 .
  • the data is statistically analyzed from the categorized time based maintenance data 72 and the categorized time based reliability data 74 to compute an average and/or other statistical calculations to determine the future reliability of the measureable system.
  • the number of computed time periods or years in the future may be a function of the available data, such as the asset maintenance expense data 62 , asset reliability data 70 , and asset unit first principle data or other asset-level data.
  • the future interval may be one year in advance because of the available data, but other embodiments may select two or three years in the future depending on the available data sets.
  • other embodiments may use other time periods besides years, such as seconds, minutes, hours, days, and/or months, depending on the granularity of the available data.
  • the data analysis method 60 may be also applied to other industries where similar maintenance and reliability databases exist. For example, in the refining and petrochemical industries, maintenance and reliability data exists for process plants and/or other measureable systems over many years. Thus, the data analysis method 60 may also forecast future reliability for process plants and/or other measureable systems using current and previous year maintenance spending ratio values. Other embodiments of the data analysis method 60 may also be applied to the pipeline industry and maintenance of buildings (e.g., office buildings) and other structures.
  • other industries may utilize a wide variety of metrics or parameters for the asset reliability data 70 that differ from the power industry's EFOR measure applied in FIG. 1 .
  • examples of asset reliability data 70 that could be used in the data analysis method 60 include, but are not limited to, "unavailability," "availability," "commercial unavailability," and "mean time between failures." These metrics or parameters may have definitions often unique to a given situation, but their general interpretation is known to one skilled in the reliability analysis and reliability prediction field.
  • FIG. 2 is a schematic diagram of an embodiment of a data compilation table 250 generated in the data compilation 68 of the data analysis method 60 described in FIG. 1 .
  • the data compilation table 250 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device.
  • FIG. 2 illustrates that the data compilation table 250 comprises a client number column 252 that indicates the asset owner, a plant name column 254 that indicates the measureable system and/or where the data is being collected, and a study year column 256 .
  • each asset owner within the data compilation table 250 owns a single measurable system. In other words, each of the measurable systems is owned by a different asset owner.
  • Other embodiments of the data compilation table 250 may have a plurality of measureable systems owned by the same asset owner.
  • the study year column 256 refers to the time period of when the data is collected or analyzed from the measureable system.
  • the data compilation table 250 may comprise additional columns calculated using the data analysis method 60 .
  • the computed maintenance (Mx) standard column 258 may comprise data values that represent the computational result of the maintenance standard as described for maintenance standard 66 in FIG. 1 . Recall that in one embodiment, the maintenance standard 66 may be generated as described in U.S. Pat. No. 7,233,910. Other embodiments may compute the maintenance standard using other methods known to persons of ordinary skill in the art.
  • the actual annualized Mx expense column 260 may comprise computed data values that represent the normalized actual maintenance data based on the maintenance standard as described in maintenance standard 66 in FIG. 1 . The actual maintenance data may be the effective annual expense over several years (e.g., about 5 years).
  • the ratio actual (Act) Mx/standard (Std) Mx column 262 may comprise data values that represent the normalized maintenance spending ratio that is used to assess the adequacy or effectiveness of maintenance spending in relationship to future reliability.
  • the last column, the EFOR column 266 , comprises data values that represent the reliability or, in this case, un-reliability value for the current time period.
  • the data values of the EFOR column 266 are a summation of hours of unplanned outages and de-rates divided by the hours in the operating period.
  • the definition of EFOR in this example follows the notation as documented in NERC-GADS literature. For example, an EFOR value of 9.7 signifies that the measureable system was effectively down about 9.7% of its operating period due to unplanned outage events.
  • the Act Mx/Std Mx: Decile column 264 may comprise data values that represent the maintenance spending ratios categorized into value intervals relating to distinct ranges as discussed in data compilation 68 in FIG. 1 . Duo-deciles, deciles, sextiles, quintiles, or quartiles could be used, but in this example the data is divided into nine categories based on the percentile ranking of the maintenance spending ratio data values found in the Act Mx/Std Mx column 262 .
  • the number of intervals or categories used to divide the maintenance spending ratios may depend on the dataset size, where more detailed divisions that are statistically possible may be generated with a relatively larger dataset size. A variety of methods or algorithms known by persons of ordinary skill in the art may be used to determine the number of intervals based on the dataset size.
  • the transformation of maintenance spending ratios into ordinal categories may serve as a reference to assign future EFOR reliability values that were actually achieved.
  • FIG. 3 is a schematic diagram of an embodiment of a categorized maintenance table 350 generated in the categorized time based maintenance data 72 of the data analysis method 60 described in FIG. 1 .
  • the categorized maintenance table 350 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device.
  • the categorized maintenance table 350 is a transformation of the maintenance spending ratio ordinal category data values found within FIG. 2 's data compilation table 250 .
  • FIG. 3 illustrates that the plant name column 352 may identify the different measureable systems.
  • the year columns 354 - 382 represent the different years or time periods for each of the measureable systems.
  • Plants 1 and 2 have data values from 1999-2013 and Plants 3 and 4 have data values from 2002-2013.
  • the type of data found within the year columns 354 - 382 is substantially similar to the type of data within the Act Mx/Std Mx: Decile column 264 in FIG. 2 .
  • the data within the year columns 354 - 382 represent intervals relating to distinct ranges of the maintenance spending ratio and may be generally referred to as the maintenance spending ratio ordinal category. For example, for the year 1999, Plant 1 has a maintenance spending ratio categorized as "5" and Plant 2 has a maintenance spending ratio categorized as "1."
  • FIG. 4 is a schematic diagram of an embodiment of a categorized reliability table 400 generated in the categorized time based reliability data 74 of the data analysis method 60 described in FIG. 1 .
  • the categorized reliability table 400 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device.
  • the categorized reliability table 400 is a transformation of EFOR data values found within FIG. 2 's data compilation table 250 .
  • FIG. 4 illustrates that the plant name column 452 may identify the different measureable systems.
  • the year columns 404 - 432 represent the different years for each of the measureable systems. Using FIG. 4 as an example, Plants 1 and 2 have data values from 1999-2013 and Plants 3 and 4 have data values from 2002-2013.
  • the type of data found within the year columns 404 - 432 is substantially similar to the type of data within the EFOR column 266 in FIG. 2 .
  • the data within the year columns 404 - 432 represent EFOR values that denote the percentage of unplanned outage events. For example, for the year 1999, Plant 1 has an EFOR of 2.4, which indicates that Plant 1 was down about 2.4% of its operating period due to unplanned outage events, and Plant 2 has an EFOR of 5.5, which indicates that Plant 2 was down about 5.5% of its operating period due to unplanned outage events.
  • FIG. 5 is a schematic diagram of an embodiment of a future reliability data table 500 generated in the future reliability prediction 76 of the data analysis method 60 described in FIG. 1 .
  • the future reliability data table 500 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device.
  • the process of computing future reliability starts with selecting the future reliability interval, for example, in FIG. 5 , the interval is about two years.
  • the data shown in FIG. 3 is scanned horizontally, on a row-by-row basis, within the categorized maintenance table 350 to determine rows whose entries are separated by only about one year.
  • using FIG. 3 as an example, the row associated with Plant 1 would satisfy the data separation of about one year, but Plant 11 would not because Plant 11 in the categorized maintenance table 350 has a data gap between years 2006 and 2008. In other words, Plant 11 is missing data at year 2007, and thus, entries for Plant 11 are not separated by about one year.
  • Other embodiments may select future reliability interval with different time intervals measured in seconds, minutes, hours, days, and/or months in the future. The time interval used to determine future reliability depends on the level of data granularity.
  • the maintenance spending ratio ordinal category for each separated row can be subsequently paired up with a time forward EFOR value from the categorized reliability data table 400 to form ordered pairs.
  • the generated ordered pairs comprise the maintenance spending ratio ordinal category and the time-forward EFOR value. Since the selected future reliability interval is about two years, the year associated with the maintenance spending ratio ordinal category and the year for the EFOR value within each generated ordered pair may be two years apart.
  • First ordered pair: (maintenance spending ratio ordinal category in 1999, EFOR value in 2001)
  • Second ordered pair: (maintenance spending ratio ordinal category in 2000, EFOR value in 2002)
  • Third ordered pair: (maintenance spending ratio ordinal category in 2001, EFOR value in 2003)
  • Fourth ordered pair: (maintenance spending ratio ordinal category in 2002, EFOR value in 2004)
  • the years that separate the maintenance spending ratio ordinal category and the EFOR value are based on the future reliability interval, which is about two years.
  • the matrices of FIGS. 3 and 4 may be scanned for possible data pairs separated by two years (e.g., 1999 and 2001).
  • the middle year data is not used (e.g., 2000) for the data pairs.
  • This process can be repeated for other future reliability intervals (e.g., one year in advance of the maintenance ratio ordinal value), at the discretion of the user and depending on the information desired from the analysis.
  • the ordered pair examples above depict that the years of the maintenance spending ratio ordinal category and the EFOR value are incremented by one for each successive ordered pair.
  • the first ordered pair has a maintenance spending ratio ordinal category in 1999 and the second ordered pair has a maintenance spending ratio ordinal category in 2000.
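The pairing step can be sketched as follows. The dictionary layout, function name, and data values are hypothetical, and the simple existence check stands in for the row-scanning and gap test described above.

```python
# Sketch of the ordered-pair construction: for each plant, pair the
# maintenance spending ratio ordinal category in year t with the EFOR
# observed in year t + k, where k is the chosen future reliability interval.

def ordered_pairs(category_by_year: dict, efor_by_year: dict, k: int) -> list:
    pairs = []
    for year, cat in category_by_year.items():
        fwd_efor = efor_by_year.get(year + k)
        if fwd_efor is not None:          # skip gaps (e.g., Plant 11's missing 2007)
            pairs.append((cat, fwd_efor))
    return pairs

plant1_cat  = {1999: 5, 2000: 4, 2001: 5, 2002: 6}
plant1_efor = {1999: 2.4, 2000: 3.1, 2001: 2.9, 2002: 4.0, 2003: 3.5, 2004: 4.2}
print(ordered_pairs(plant1_cat, plant1_efor, k=2))
# [(5, 2.9), (4, 4.0), (5, 3.5), (6, 4.2)]
```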
  • column 502 comprises EFOR values with a maintenance spending ratio ordinal category of “1”
  • column 504 comprises EFOR values with a maintenance spending ratio ordinal category of “2”
  • column 506 comprises EFOR values with a maintenance spending ratio ordinal category of “3”
  • column 508 comprises EFOR values with a maintenance spending ratio ordinal category of “4”
  • column 510 comprises EFOR values with a maintenance spending ratio ordinal category of “5”
  • column 512 comprises EFOR values with a maintenance spending ratio ordinal category of “6”
  • column 514 comprises EFOR values with a maintenance spending ratio ordinal category of “7”
  • column 516 comprises EFOR values with a maintenance spending ratio ordinal category of “8”
  • column 518 comprises EFOR values with a maintenance spending ratio ordinal category of “9.”
  • FIG. 6 is a schematic diagram of an embodiment of a future reliability statistic table 600 generated in the future reliability prediction 76 of the data analysis method 60 described in FIG. 1 .
  • the future reliability statistic table 600 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device.
  • the future reliability statistic table 600 comprises the maintenance spending ratio ordinal category columns 602 - 618 . As shown in FIG. 6 , each of the maintenance spending ratio ordinal category columns 602 - 618 corresponds to a maintenance spending ratio ordinal category.
  • maintenance spending ratio ordinal category column 602 corresponds to the maintenance spending ratio ordinal category “1” and maintenance spending ratio ordinal category column 604 corresponds to the maintenance spending ratio ordinal category “2.”
  • the compiled data in each maintenance ratio ordinal value column 602 - 618 is analyzed using the data within the future reliability data table 500 to compute various statistics that indicate future reliability information.
  • rows 620 , 622 , and 624 represent the average, the median, and the value at the 90th percentile of the distribution of the future reliability data for each of the maintenance ratio ordinal values.
  • the future reliability information is interpreted as the future reliability prediction, or EFOR, for a measurable system whose current year has a specific maintenance spending ratio ordinal value.
  • Future EFOR predictions can be computed utilizing current and previous years' maintenance spending ratios.
  • the maintenance spending ratios are computed by adding the annualized expenses for the included years and dividing by the sum of the maintenance standards for those years. This way, the spending ratio reflects performance over several years relative to a general standard that is the summation of the standards computed for each of the included years.
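The statistics of FIG. 6 and the multi-year ratio described above might be computed along these lines; all data values are hypothetical, and the 90th percentile here uses a simple nearest-rank rule rather than any prescribed method.

```python
import statistics

# Per-category future EFOR statistics (average, median, 90th percentile).
pairs = [(1, 6.2), (1, 8.0), (2, 3.1), (2, 2.5), (2, 4.0), (3, 5.1), (3, 4.4)]
by_category = {}
for cat, efor in pairs:
    by_category.setdefault(cat, []).append(efor)
for cat, values in sorted(by_category.items()):
    values.sort()
    p90 = values[round(0.9 * (len(values) - 1))]
    print(cat, round(statistics.mean(values), 2), statistics.median(values), p90)

# Multi-year spending ratio: sum of annualized expenses over the included
# years divided by the sum of the maintenance standards for those years.
expenses  = {2011: 9_800.0, 2012: 10_200.0, 2013: 11_000.0}
standards = {2011: 10_000.0, 2012: 10_400.0, 2013: 10_900.0}
print(round(sum(expenses.values()) / sum(standards.values()), 3))  # 0.99
```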
  • FIG. 7 is a schematic diagram of an embodiment of a user interface input screen 700 configured to display information a user may need to input to determine a future reliability prediction 76 using the data analysis method 60 described in FIG. 1 .
  • the user interface input screen 700 comprises a measurable system selection column 702 that a user may use to select the type of measureable system.
  • the user may select the “Coal-Rankine” plant as the type of power generation unit or measureable system.
  • Other selections shown in FIG. 7 include “Gas-Rankine” and “Combustion Turbine.”
  • the user interface input screen 700 may generate the required data items 704 associated with the type of measureable system a user selects.
  • the data items 704 that appear within the user interface input screen 700 may vary depending on the selected measureable system within the measurable system selection column 702 .
  • FIG. 7 illustrates that a user has selected a Coal-Rankine plant and the user may enter all fields that are shown blank with an underscore line. This may also include the annualized maintenance expenses for the specific year.
  • the blank fields may be entered using information received from a remote data storage or via a network.
  • the current model also allows a user, if desired, to enter previous year data to add more information for the future reliability prediction. Other embodiments may import and obtain the additional information from a storage medium or via network.
  • the calculation fields 706 , such as the annual maintenance standard (k$) field and the risk modification factor field, at the bottom of the user interface input screen 700 may automatically populate based on the information entered by the user.
  • the annual maintenance standard (k$) field may be computed substantially similarly to the computed Mx standard column 258 shown in FIG. 2 .
  • the risk modification factor field may represent the overall risk modification factor for the comparative analysis model and may be a ratio of the computed future one year average EFOR to the overall average EFOR. In other words, the data result automatically generated within the risk modification factor field represents the relative reliability risk of a particular measurable system compared to an overall average.
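For instance, a minimal sketch of this ratio with hypothetical values:

```python
# Risk modification factor as described above: the ratio of the predicted
# one-year-future average EFOR for the asset's spending category to the
# overall average EFOR across all analyzed data (values hypothetical).
predicted_one_year_avg_efor = 4.1
overall_avg_efor = 5.3
risk_modification_factor = predicted_one_year_avg_efor / overall_avg_efor
print(round(risk_modification_factor, 2))  # 0.77: lower-than-average reliability risk
```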
  • FIG. 8 is a schematic diagram of an embodiment of a user interface input screen 800 configured for EFOR prediction using the data analysis method 60 described in FIG. 1 .
  • the curve 802 is a ranking curve that represents the distribution of maintenance spending ratios
  • the triangle 804 on the curve 802 shows the location of the current measureable system or measureable system under consideration by a user (e.g., the “Coal-Rankine” plant selected in FIG. 7 ).
  • the user interface input screen 800 illustrates to a user both the range of known performance and where in the range the specific measureable system under consideration falls.
  • the numbers below this curve are the quintile values of the maintenance spending ratio, where the maintenance spending ratios are categorized into five different value intervals.
  • the data results illustrated in FIG. 8 were computed for quintiles in this embodiment; however, other divisions are possible based on the amount of data available and the objectives of the analyst and user.
  • the histogram 806 represents the average 1-year future EFOR as a function of the specific quintile into which the maintenance spending ratio falls. For example, the lowest 1-year future EFOR appears for plants that have a maintenance spending ratio in the second quintile, i.e., maintenance spending ratios between about 0.8 and about 0.92. This level of spending suggests the unit is successfully managing the asset with the better practices that assure long-term reliability. Notice that the first quintile, or plants with maintenance spending ratios of about zero to about 0.8, actually exhibits a higher EFOR value, suggesting that operators are not performing the required or sufficient maintenance to produce long-term reliability. If a plant falls into the fifth quintile, one interpretation is that operators could be overspending because of breakdowns. Since maintenance costs from unplanned maintenance events can be larger than planned maintenance expenses, high maintenance spending ratios may produce high EFOR values.
  • the dotted line 810 represents the average EFOR for all of the data analyzed for the current measureable system.
  • the diamond 812 represents the actual 1-year future EFOR estimate, located directly above the triangle 804 , which represents the maintenance spending ratio. The two symbols correlate or connect the current maintenance spending levels (triangle 804 ) to a future 1-year estimate of EFOR (diamond 812 ).
  • FIG. 10 is a flow chart of an embodiment of a method 100 for determining model coefficients for use in comparative performance analysis of a measureable system, such as a power generation plant.
  • Method 100 may be used to generate the one or more comparative analysis models used within the maintenance standard 66 described in FIG. 1 . Specifically, method 100 determines the usable characteristics and model coefficients associated with one or more comparative analysis models that illustrate the correlation between the maintenance quality and future reliability.
  • Method 100 may be implemented using a user and/or computing node configured to receive inputted data for determining model coefficients. For example, a computing node may automatically receive data and update model coefficients based on received updated data.
  • Method 100 starts at step 102 and selects one or more target variables (“Target Variables”).
  • the target variable is a quantifiable attribute associated with the measureable system, such as total operating expense, financial result, capital cost, operating cost, staffing, product yield, emissions, energy consumption, or any other quantifiable attribute of performance.
  • Target Variables could relate to manufacturing, refining, and chemical industries (including petrochemicals, organic and inorganic chemicals, plastics, agricultural chemicals, and pharmaceuticals), olefins plants, chemical manufacturing, pipelines, power generation, distribution, and other industrial facilities. Other embodiments of the Target Variables could also cover different environmental aspects, maintenance of buildings and other structures, and other forms and types of industrial and commercial industries.
  • First principle characteristics are the physical or fundamental characteristics of a measurable system or process that are expected to determine the Target Variable.
  • the first principle characteristics may be the asset unit first principle data or other asset-level data 64 described in FIG. 1 .
  • Common brainstorming or team knowledge management techniques can be used to develop the first list of possible characteristics for the Target Variable.
  • all of the characteristics of an industrial facility that may cause variation in the Target Variable when comparing different measurable systems, such as industrial facilities, are identified as first principle characteristics.
  • method 100 determines the primary first principle characteristics from all of the first principle characteristics identified at step 104 . As will be understood by those skilled in the art, many different options are available to determine the primary first principle characteristics. One such option is shown in FIG. 11 , which will be discussed in more detail below. Afterwards, method 100 moves to step 108 to classify the primary characteristics. Potential classifications for the primary characteristics include discrete, continuous, or ordinal.
  • Discrete characteristics are those characteristics that can be measured using a selection between two or more states, for example a binary determination, such as “yes” or “no.”
  • An example discrete characteristic could be “Duplicate Equipment.” The determination of “Duplicate Equipment” is “yes, the facility has duplicate equipment” or “no, there is no duplicate equipment.”
  • Continuous characteristics are directly measurable.
  • An example of a continuous characteristic could be the “Feed Capacity,” since it is directly measured as a continuous variable.
  • Ordinal characteristics are characteristics that are not readily measurable. Instead, ordinal characteristics can be scored along an ordinal scale reflecting physical differences that are not directly measurable. It is also possible to create ordinal characteristics for variables that are measurable or binary. An example of an ordinal characteristic would be refinery configuration, choosing between three typical major industry options. Presented in ordinal scale by unit complexity, these are: atmospheric distillation (1.0), catalytic cracking unit (2.0), and coking unit (3.0).
  • the above measurable systems are ranked in order based on ordinal variables and generally do not contain information about any quantifiable quality of measurement.
  • the difference between the complexity of the 1.0 measureable system or atmospheric distillation and the 2.0 measureable system or catalytic cracking unit does not necessarily equal the complexity difference between the 3.0 measureable system or coking unit and the 2.0 measureable system or catalytic cracking unit.
  • Variables placed in an ordinal scale may be converted to an interval scale for development of model coefficients.
  • the conversion of ordinal variables to interval variables may use a scale developed so that the differences between units are on a measurable scale.
  • the process to develop an interval scale for ordinal characteristic data can rely on the understanding of a team of experts of the characteristic's scientific drivers.
  • the team of experts can first determine, based on their understanding of the process being measured and scientific principle, the type of relationship between different physical characteristics and the Target Variable.
  • the relationship may be linear, logarithmic, a power function, a quadratic function or any other mathematical relationship.
  • the experts can optionally estimate a complexity factor to reflect the relationship between characteristics and variation in Target Variable.
  • Complexity factors may be the exponential power used to make the relationship linear between the ordinal variable to the Target Variable resulting in an interval variable scale. Additionally, in circumstances where no data exist, the determination of primary characteristics may be based on expert experience.
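As a sketch of this ordinal-to-interval conversion, assume the complexity factor is applied as an exponent; the factor value (1.4) and the ordinal scores below are hypothetical, since in practice they come from expert consensus as described above.

```python
ordinal_scores = {"atmospheric distillation": 1.0,
                  "catalytic cracking unit": 2.0,
                  "coking unit": 3.0}
complexity_factor = 1.4  # exponent chosen so the Target Variable relationship becomes ~linear
interval_scale = {name: score ** complexity_factor
                  for name, score in ordinal_scores.items()}
print(interval_scale)  # e.g., catalytic cracking unit maps to ~2.64 on the interval scale
```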
  • method 100 may develop a data collection classification arrangement.
  • the method 100 may quantify the characteristics categorized as continuous such that data is collected in a consistent manner.
  • a simple yes/no questionnaire may be used to collect data.
  • a system of definitions may need to be developed to collect data in a consistent manner.
  • a measurement scale can be developed as described above.
  • method 100 may employ at least four methods to develop a consensus function.
  • an expert or team of experts can be used to determine the type of relationship that exists between the characteristics and the variation in Target Variable.
  • the ordinal characteristics can be scaled (for example 1, 2, 3 . . . n for n configurations). By plotting the target value versus the configuration, the configurations are placed in progressive order of influence.
  • the determination of the Target Variable value relationship to the ordinal characteristic is forced into the optimization analysis, as described in more detail below.
  • the general optimization model described in Equation 1.0 can be modified to accommodate a potential non-linear relationship.
  • the ordinal measurement can be scaled as discussed above, and then regressed against the data to make a plot of Target Variable versus the ordinal characteristic to be as nearly linear as possible.
  • a combination of the foregoing embodiments can be utilized to make use of the available expert experience and the available data quality and quantity.
  • method 100 may develop a measurement scale at step 110 .
  • a single characteristic may take the form of five different physical configurations.
  • the physical configuration resulting in the lowest effect on variation in the Target Variable may be given a scale-setting score. This value may be assigned any non-zero value; in this example, the value assigned is 1.0.
  • the configuration with the second-largest influence on variation in the Target Variable is given a value that is a function of the scale-setting value, as determined by a consensus function. The consensus function is arrived at by using the measurement scale for ordinal characteristics as described above. This is repeated until a scale for all applicable physical configurations is developed.
  • method 100 uses the classification system developed at step 110 to collect data.
  • the data collection process can begin with the development of data input forms and instructions.
  • data collection training seminars are conducted to assist in data collection. Training seminars may improve the consistency and accuracy of data submissions.
  • a consideration in data collection may involve defining the analyzed boundaries of the measurable system, such as an industrial facility.
  • Data input instructions may provide definitions of what measureable systems' costs and staffing are to be included in data collection.
  • the data collection input forms may provide worksheets for many of the reporting categories to aid in the preparation of data for entry.
  • the data that is collected can originate from several sources, including existing historical data, newly gathered historical data from existing facilities and processes, simulation data from model(s), or synthesized experiential data derived from experts in the field.
  • method 100 may validate the data.
  • Many data checks can be programmed at step 114 of method 100 such that method 100 accepts data that passes the validation checks or for which a failed check is over-ridden with appropriate authority.
  • Validation routines may be developed to validate the data as it is collected.
  • the validation routines can take many forms, including: (1) a range of acceptable data is specified, or the ratio of one data point to another is specified; (2) where applicable, data is cross-checked against all other similar data submitted to determine outlier data points for further investigation; and (3) data is cross-referenced against any previous data submission and the judgment of experts. After all input data validation is satisfied, the data is examined relative to all the data collected in a broad "cross-study" validation. This "cross-study" validation may highlight further areas requiring examination and may result in changes to input data.
  • method 100 may develop constraints for use in solving the comparative analysis model. These constraints could include constraints on the model coefficient values. These can be minimum or maximum values, or constraints on groupings of values, or any other mathematical constraint forms. One method of determining the constraints is shown in FIG. 12 , which is discussed in more detail below.
  • method 100 solves the comparative analysis model by applying optimization methods of choice, such as linear regression, with the collected data to determine the optimum set of factors relating the Target Variable to the characteristics.
  • the generalized reduced gradient non-linear optimization method can be used.
  • method 100 may utilize many other optimization methods.
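A sketch of this model-solving step under coefficient constraints. The patent names linear regression and the generalized reduced gradient (GRG) method; SciPy's SLSQP solver is a stand-in here, and the characteristic data, bounds, and Target Variable values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[500.0, 22.0],
              [300.0, 10.0],
              [800.0, 35.0],
              [450.0, 18.0]])                          # first principle characteristics
y = np.array([11_000.0, 6_100.0, 18_200.0, 9_700.0])  # actual Target Variable values

def sse(beta):
    # Sum of squared differences between predicted and actual Target Variable.
    return float(np.sum((X @ beta - y) ** 2))

bounds = [(0.0, 50.0), (0.0, 200.0)]  # example constraints on the model coefficients
result = minimize(sse, x0=np.array([1.0, 1.0]), bounds=bounds, method="SLSQP")
print(result.x)                       # optimized model coefficients
```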
  • method 100 may determine the developed characteristics. Developed characteristics are the result of any mathematical relationship that exists between one or more first principle characteristics and may be used to express the information represented by that mathematical relationship. In addition, if a linear general optimization model is utilized, then nonlinear information in the characteristics can be captured in developed characteristics. Determination of the developed characteristics' form is accomplished through discussion with experts, modelling expertise, and trial and refinement.
  • method 100 applies the optimization model to the primary first principle characteristics and the developed characteristics to determine the model coefficients. In one embodiment, if developed characteristics are utilized, step 116 through step 122 may be repeated in an iterative fashion until method 100 achieves the desired level of model accuracy.
  • FIG. 11 is a flow chart of an embodiment of a method 200 for determining primary first principle characteristics 106 as described in FIG. 10 .
  • method 200 determines the effect of each characteristic on the variation in the Target Variable between measureable systems.
  • the method may be iteratively repeated, and a comparative analysis model can be used to determine the effect of each characteristic.
  • method 200 may use a correlation matrix.
  • the effect of each characteristic may be expressed as a percentage of the total variation in the Target Variable in the initial data set.
  • method 200 may rank each characteristic from highest to lowest based on its effect on the Target Variable. Persons of ordinary skill in the art are aware that method 200 could use other ranking criteria.
  • the characteristics may be grouped into one or more categories.
  • the characteristics are grouped into three categories.
  • the first category contains characteristics that affect a Target Variable at a percentage less than a lower threshold (for example, about five percent).
  • the second category may comprise one or more characteristics with a percentage between the lower threshold and a second threshold (for example, between about 5% and about 20%).
  • the third category may comprise one or more characteristics with a percentage over the second threshold (for example, about 20%).
  • Other embodiments of method 200 at step 206 may include additional or fewer categories and/or different ranges.
  • method 200 may remove characteristics from the list of characteristics with Target Variable average variations below a specific threshold. For example, method 200 could remove the characteristics in the first category described above at step 206 (e.g., characteristics with a percentage of less than about five percent). Persons of ordinary skill in the art are aware that other thresholds could be used, and multiple categories could be removed from the list of characteristics. In one embodiment, if characteristics are removed, the process may repeat at step 202 above. In another embodiment, no characteristics are removed from the list until determining whether another co-variant relationship exists, as described in step 212 below.
  • Method 200 determines the relationships between the mid-level characteristics.
  • Mid-level characteristics are characteristics that have a certain level of effect on the Target Variable, but individually do not influence the Target Variable in a significant manner.
  • those characteristics in the second category are mid-level characteristics.
  • Example relationships between the characteristics are co-variant, dependent, and independent.
  • a co-variant relationship occurs when modifying one characteristic causes the Target Variable to vary, but only when another characteristic is present. For instance, in the scenario where characteristic “A” is varied, which causes the Target Variable to vary, but only when characteristic “B” is present, then “A” and “B” have a co-variant relationship.
  • a dependent relationship occurs when a characteristic is a derivative of or directly related to another characteristic. For instance, when the characteristic “A” is only present when characteristic “B” is present, then A and B have a dependent relationship. Characteristics that are neither co-variant nor dependent are categorized as having independent relationships. One illustrative way to screen for such relationships is sketched below.
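One illustrative screen for dependent relationships, using the correlation matrix mentioned above; the 0.9 cutoff and the data are assumptions for illustration, not values from the disclosure.

```python
import pandas as pd

# Hypothetical facility-level data for three mid-level characteristics.
df = pd.DataFrame({
    "staff_training": [3, 5, 4, 2, 5, 1],
    "age_of_unit":    [20, 8, 12, 25, 7, 30],
    "staffing_level": [40, 55, 48, 38, 60, 35],
})

corr = df.corr().abs()
DEPENDENCE_CUTOFF = 0.9  # illustrative cutoff, not from the disclosure

# Flag pairs whose absolute correlation suggests a dependent relationship;
# remaining pairs would be examined for co-variance or independence.
cols = list(corr.columns)
for i in range(len(cols)):
    for j in range(i + 1, len(cols)):
        r = corr.iloc[i, j]
        label = "dependent (candidate)" if r > DEPENDENCE_CUTOFF else "co-variant or independent"
        print(f"{cols[i]} vs {cols[j]}: |r| = {r:.2f} -> {label}")
```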
  • method 200 may remove dependencies and high correlations in order to resolve characteristics displaying dependence with each other.
  • the process may be repeated from step 202. In one embodiment, if the difference variable is insignificant it can be removed from the analysis in the repeated step 208.
  • method 200 may analyze the characteristics to determine the extent of the inter-relationships. In one embodiment, if any of the previous steps resulted in repeating the process, the repetition should be conducted prior to step 214 . In some embodiments, the process may be repeated multiple times before continuing to step 214 .
  • the characteristics that result in less than a minimum threshold change in the impact on Target Variable variation caused by another characteristic are dropped from the list of potential characteristics.
  • An illustrative threshold could be about 10 percent. For instance, if the variation in Target Variable caused by characteristic “A” is increased when characteristic “B” is present, the percent increase in the Target Variable variation caused by the presence of characteristic “B” must be estimated.
  • If the variation of characteristic “B” is estimated to increase the variation in the Target Variable by less than about 10% of the increase caused by characteristic “A” alone, characteristic “B” can be eliminated from the list of potential characteristics. The co-variant effect of characteristic “B” on characteristic “A” can then also be deemed insignificant to the Target Variable. The remaining characteristics are deemed to be the primary characteristics. A sketch of this threshold check appears below.
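A rough sketch of the ~10% threshold check. Treating "characteristic B present" as B above its median, and measuring effect as the explained variance of a simple linear fit, are both illustrative assumptions, not the disclosure's method; the data is synthetic.

```python
import numpy as np

def explained_variance(x, y):
    """Fraction of variance in y explained by a simple linear fit on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

rng = np.random.default_rng(0)
a = rng.normal(size=200)                    # characteristic "A"
b = rng.normal(size=200)                    # characteristic "B"
tv = 2.0 * a + 0.1 * a * b + rng.normal(scale=0.5, size=200)

# Variation driven by A alone vs. by A when B is "present"
# (here: B above its median, an illustrative operationalization).
r2_alone = explained_variance(a, tv)
present = b > np.median(b)
r2_with_b = explained_variance(a[present], tv[present])

relative_increase = (r2_with_b - r2_alone) / r2_alone
print(f"relative increase from B: {relative_increase:.1%}")
if relative_increase < 0.10:  # the ~10% threshold from the text
    print('characteristic "B" can be eliminated from the candidate list')
```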
  • FIG. 12 is a flow chart of an embodiment of a method 300 for developing constraints for use in solving the comparative analysis model as described in step 116 in FIG. 10.
  • Constraints are developed on the model coefficients at step 302 .
  • constraints are any limits placed on model coefficients.
  • a model coefficient may have a constraint limiting its contribution to a target variable to a maximum of about 20%.
  • method 300's objective function, as described below, is optimized to determine an initial set of model coefficients.
  • method 300 may calculate the percent contribution of each characteristic to the Target Variable. There are several methods of calculating the percent contribution of each characteristic, such as the “Average Method” described in U.S. Pat. No. 7,233,910.
  • at step 308, each percent contribution is compared against expert knowledge. Domain experts may have an intuitive or empirical feel for the relative impacts of key characteristics on the overall target value. The contribution of each characteristic is judged against this expert knowledge.
  • at step 310, method 300 may make a decision about the acceptability of the individual contributions. If a contribution is found to be unacceptable, method 300 continues to step 312. If the contribution is found to be acceptable, method 300 continues to step 316.
  • method 300 makes a decision on how to address or handle unacceptable results of the individual contributions.
  • the options may include adjusting the constraints on the model coefficients to effect a solution or deciding that the characteristic set chosen cannot be helped through constraint adjustment. If the user decides that the characteristic set cannot be helped through constraint adjustment, method 300 proceeds to step 316. If the decision is made to achieve acceptable results through constraint adjustment, then method 300 continues to step 314.
  • the constraints are adjusted to increase or decrease the impact of individual characteristics in an effort to obtain acceptable results from the individual contributions.
  • Method 300 continues to step 302 with the revised constraints.
  • peer and expert review of the developed model coefficients may be performed to determine their acceptability. If the factors pass the expert and peer review, method 300 continues to step 326. If the model coefficients are found to be unacceptable, method 300 continues to step 318.
  • method 300 may obtain additional approaches and suggestions for modification of the characteristics developed by working with experts in the particular domain. This may include the creation of new or updated developed characteristics, or the addition of new or updated first principle characteristics to the analysis data set.
  • a determination is made as to whether data exists to support the investigation of the approaches and suggestions for modification of the characteristics. If the data exists, method 300 proceeds to step 324 . If the data does not exist, method 300 proceeds to step 322 .
  • method 300 collects additional data in an effort to make the corrections required to obtain a satisfactory solution.
  • method 300 revises the set of characteristics in view of the new approaches and suggestions.
  • method 300 may document the reasoning behind the selection of characteristics. The documentation can be used in explaining results for use of the model coefficients.
  • FIG. 13 is a schematic diagram of an embodiment of a model coefficient matrix 10 for determining model coefficients as described in FIGS. 10-12.
  • while the model coefficient matrix 10 can be expressed in a variety of configurations, in this particular example the model coefficient matrix 10 may be constructed with the first principle characteristics 12 and first developed characteristics 14 on one axis, and the different facilities 16 for which data has been collected on the other axis.
  • For each first principle characteristic 12 and developed characteristic 14 there is the model coefficient 22 that will be computed with an optimization model.
  • the constraints 20 limit the range of the model coefficients 22 . Constraints can be minimum or maximum values, or other mathematical functions or algebraic relationships.
  • constraints 20 can be grouped and further constrained. Additional constraints 20 on facility data and on relationships between data points, similar to those used in the data validation step, can also be employed; constraints 20 can employ any mathematical relationship on the input data. In one embodiment, the constraints 20 to be satisfied during optimization apply only to the model coefficients.
  • the Target Variable (actual) column 24 comprises actual values of the Target Variable as measured for each facility.
  • the Target Variable (predicted) column 26 comprises the values for the target value as calculated using the determined model coefficients.
  • the error column 28 comprises the error values for each facility as determined by the optimization model.
  • the error sum 30 is the summation of the error values in error column 28 .
  • the optimization analysis, which comprises the Target Variable equation and an objective function, solves for the model coefficients to minimize the error sum 30.
  • the model coefficients $\lambda_j$ are computed to minimize the error $\varepsilon_i$ over all facilities.
  • the non-linear optimization process determines the set of model coefficients that minimizes this equation for a given set of first principle characteristics, constraints, and a selected value of $p$.
  • the Target Variable may be computed as a function of the characteristics and the to-be-determined model coefficients.
  • the Target Variable equation is expressed as $TV_i = \sum_j \lambda_j \, f_{i,j} + \varepsilon_i$, where:
  • $TV_i$ represents the measured Target Variable for facility $i$
  • the characteristic variable represents a first principle characteristic
  • $f_{i,j}$ is either a value of a first principle characteristic or a developed characteristic
  • i represents the facility number
  • j represents the characteristic number
  • $\lambda_j$ represents the jth model coefficient, corresponding to the jth characteristic
  • $\varepsilon_i$ represents the error of the model's TV prediction, defined as the actual Target Variable value minus the predicted Target Variable value for facility $i$.
  • the objective function has the general form $\min_{\lambda} \sum_i \left| \varepsilon_i \right|^p$ for a selected exponent $p$.
  • the analysis results are not dependent on the specific value of p.
  • a third form of the objective function is to solve for the simple sum of errors squared, as given in Equation 5: $\sum_i \varepsilon_i^2$.
  • the determined model coefficients are those that result in the least difference between the predicted and actual values of the Target Variable. The model iteratively moves through each facility and characteristic: each potential model coefficient, subject to the constraints, is multiplied against the data value for the corresponding characteristic, and the products are summed for the particular facility. A sketch of evaluating this objective appears below.
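A small sketch evaluating the Target Variable equation and the general objective above for a candidate coefficient vector; the data and the choice p = 2 are hypothetical.

```python
import numpy as np

def objective(lam, X, tv_actual, p=2):
    """Sum over facilities of |epsilon_i|**p, where epsilon_i is the actual
    minus predicted Target Variable (see the Target Variable equation)."""
    tv_pred = X @ lam  # sum_j lambda_j * f_ij for each facility i
    return float(np.sum(np.abs(tv_actual - tv_pred) ** p))

X = np.array([[1.0, 0.4], [0.8, 0.9], [1.3, 0.2]])  # hypothetical f_ij values
tv_actual = np.array([6.2, 7.9, 5.1])
print(objective(np.array([2.0, 5.0]), X, tv_actual, p=2))
```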
  • a Cat Cracker (fluidized catalytic cracking unit) is a processing unit found in most petroleum refineries.
  • a Cat Cracker cracks long molecules into shorter molecules within the gasoline boiling range and lighter. The process is typically conducted at relatively high temperatures in the presence of a catalyst. In the process of cracking the feed, coke is produced and deposited on the catalyst. The coke is burned off the catalyst to recover heat and to reactivate the catalyst.
  • the Cat Cracker has several main sections: Reactor, Regenerator, Main Fractionator, and Emission Control Equipment. Refiners may desire to compare the performance of their Cat Crackers to the performance of Cat Crackers operated by their competitors. The comparison of different Cat Crackers here is an illustrative example and may not represent the actual results of applying this methodology to Cat Crackers, or any other industrial facility. Moreover, the Cat Cracker example is but one example of many potential embodiments used to compare measurable systems.
  • method 100 starts at step 102 and determines that the Target Variable will be “Cash Operating Costs” or “Cash OPEX” in a Cat Cracker facility.
  • the first principle characteristics that may affect Cash Operating Costs for a Cat Cracker may include one or more of the following: (1) feed quality; (2) regenerator design; (3) staff experience; (4) location; (5) age of unit; (6) catalyst type; (7) feed capacity; (8) staff training; (9) trade union; (10) reactor temperature; (11) duplicate equipment; (12) reactor design; (13) emission control equipment; (14) main fractionator design; (15) maintenance practices; (16) regenerator temperature; (17) degree of feed preheat; (18) staffing level.
  • method 100 may, at step 106, determine the effects of the first principle characteristics.
  • method 100 may implement step 106 by determining primary characteristics as shown in FIG. 11 .
  • method 200 may assign a variation percentage for each characteristic.
  • method 200 may rate and rank the characteristics from the Cat Cracker Example. The following chart shows the relative influence and ranking for at least some of the example characteristics in Table 1:
  • method 200 groups the characteristics according to category at step 206 .
  • method 200 may discard characteristics in Category 3 as being minor.
  • Method 200 may analyze characteristics in Category 2 to determine the type of relationship they exhibit with other characteristics at step 210 .
  • Method 200 may classify each characteristic as exhibiting either co-variance, dependence, or independence at step 212 .
  • Table 3 is an example of classifying the characteristics of the Cat Cracker facility:
  • method 200 may analyze the degree of the relationship of these characteristics.
  • Staffing Level, which is classified as having an independent relationship, may stay in the analysis process.
  • Age of Unit is classified as having a dependent relationship with Staff Training
  • a dependent relationship means Age of Unit is a derivative of Staff Training or vice versa.
  • method 200 may decide to drop the Age of Unit characteristic from the analysis and the broader characteristic of Staff Training may remain in the analysis.
  • the three characteristics classified as having a co-variant relationship, Staff Training, Emission Equipment, Maintenance Practices, must be examined to determine the degree of co-variance.
  • Method 200 may determine that the change in Cash Operating Costs caused by the variation in Staff Training may be modified by more than 30% by the variation in Maintenance Practices. Along the same lines, the change in Cash Operating Costs caused by the variation in Emission Equipment may be modified by more than 30% by the variation in Maintenance Practices, causing Maintenance Practices, Staff Training, and Emission Equipment to be retained in the analysis process. Method 200 may also determine that the change in Cash Operating Costs caused by the variation in Maintenance Practices is not modified by more than the selected threshold of 30% by the variation in Staff Experience, causing Staff Experience to be dropped from the analysis.
  • method 100 categorizes the remaining characteristics as continuous, ordinal, or binary type measurements at step 108, as shown in Table 4.
  • Maintenance Practices may have an “economy of scale” relationship with Cash Operating Costs (which is the Target Variable).
  • The Target Variable improves at a decreasing rate as Maintenance Practices improve.
  • a complexity factor is assigned to reflect the economy of scale. In this particular example, a factor of 0.6 is selected. The complexity factor is often estimated to follow a power curve relationship, for example:
  • $$\text{Target Variable}_{\text{facility A}} = \left( \frac{\text{Capacity}_{\text{facility A}}}{\text{Capacity}_{\text{facility B}}} \right)^{\text{Complexity Factor}} \times \text{Target Variable}_{\text{facility B}}$$
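A worked application of the scaling relationship above, with hypothetical capacities and Target Variable value: with a 0.6 complexity factor, doubling capacity scales the Target Variable by 2^0.6, roughly 1.52.

```python
# Applying the scaling equation with the 0.6 "economy of scale" factor.
complexity_factor = 0.6
capacity_a, capacity_b = 200.0, 100.0   # hypothetical capacities
tv_b = 50.0                             # hypothetical Target Variable, facility B

tv_a = (capacity_a / capacity_b) ** complexity_factor * tv_b
print(round(tv_a, 1))  # ~75.8: doubling capacity scales the Target Variable ~1.52x
```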
  • method 100 may develop a data collection classification system.
  • a questionnaire may be developed to measure how many of ten key Maintenance Practices are in regular use at each facility.
  • a system of definitions may be used such that the data is collected in a consistent manner.
  • the data in terms of number of Maintenance Practices in regular use is converted to a Maintenance Practices Score using the 0.6 factor and “economy of scale” relationship as illustrated in Table 5.
  • method 100 may collect data and at step 114 , method 100 may validate the data as shown in Table 6:
  • Constraint ranges were developed for each characteristic by an expert team to control the model so that the results are within a reasonable range of solutions as shown in Table 7.
  • at step 116, method 100 produces the results of the model optimization runs, which are shown below in Table 8.
  • the model indicates Emission Equipment and Maintenance Practices are not significant drivers of variations in Cash Operating Costs between different Cat Crackers.
  • the model may indicate this by finding near-zero values for the model coefficients for these two characteristics.
  • Reactor Design, Staff Training, and Emission Equipment are found to be significant drivers.
  • experts may agree that these characteristics may not be significant in driving variation in Cash Operating Cost.
  • the experts may determine that a dependence effect may not have been previously identified that fully compensates for the impact of Emission Equipment and Maintenance Practices.
  • FIG. 14 is a schematic diagram of an embodiment of a model coefficient matrix 10 with respect to the Cat Cracker for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12.
  • a sample model configuration for the illustrative Cat Cracker example is shown in FIG. 14 .
  • the data 18 , actual values 24 , and the resulting model coefficients 22 are shown.
  • the error sum 30 is relatively minimal, so developed characteristics are not necessary in this instance.
  • in other embodiments, an error sum of a different magnitude may be determined to be significant, resulting in having to determine developed characteristics.
  • Pipelines and tank farms are assets used by industry to store and distribute liquid and gaseous feed stocks and products.
  • the example is illustrative for development of equivalence factors for: (1) pipelines and pipeline systems; (2) tank farm terminals; and (3) any combination of pipelines, pipeline systems and tank farm terminals.
  • the example is for illustrative purposes and may not represent the actual results of applying this methodology to any particular pipeline and tank farm terminal, or any other industrial facility.
  • method 100 selects the desired Target Variable to be “Cash Operating Costs” or “Cash OPEX” in a pipeline asset.
  • the first principle characteristics that may affect Cash Operating Costs may include for the pipe related characteristics: (1) type of fluid transported; (2) average fluid density; (3) number of input and output stations; (4) total installed capacity; (5) total main pump driver kilowatt (KW); (6) length of pipeline; (7) altitude change in pipeline; (8) total utilized capacity; (9) pipeline replacement value; and (10) pump station replacement value.
  • the first principle characteristics that may affect Cash Operating Costs for the tank related characteristics include: (1) fluid class; (2) number of tanks; (3) total number of valves in terminal; (4) total nominal tank capacity; (5) annual number of tank turnovers; and (6) tank terminal replacement value.
  • method 100 determines the effect of the first characteristics at step 106 .
  • method 100 may implement step 106 by determining primary characteristics as shown in FIG. 11 .
  • method 100 may assign an impact percentage to each characteristic. This analysis shows that the pipeline replacement value and tank terminal replacement value may be used widely in the industry but are characteristics that are dependent on more fundamental characteristics. Accordingly, in this instance, those values are removed from consideration as primary first principle characteristics.
  • method 200 may rate and rank the characteristics. Table 9 shows the relative impact and ranking for the example characteristics; method 200 may assign a variation percentage to each characteristic.
  • method 200 groups the characteristics according to category at step 206 .
  • method 200 discards those characteristics in Category 3 as being minor.
  • Method 200 may further analyze the characteristics in Category 2 to determine the type of relationship they exhibit with other characteristics at step 210 .
  • Method 200 classifies each characteristic as exhibiting either co-variance, dependence, or independence as shown below in Table 11:
  • method 200 may resolve the dependent characteristics. In this example, there are no dependent characteristics that method 200 needs to resolve.
  • method 200 may analyze the degree of the co-variance of the remaining characteristics and determine that no characteristics are dropped. Method 200 may deem the remaining variables as primary characteristics in step 218 .
  • method 100 may categorize the remaining characteristics as continuous, ordinal or binary type measurement at step 108 as shown in Table 12.
  • method 100 may develop a data collection classification system.
  • a questionnaire may be developed to collect information from participating facilities on the measurements above.
  • method 100 may collect the data and at step 114 , method 100 may validate the data as shown in Tables 13 and 14.
  • at step 116, method 100 may develop constraints on the model coefficients with expert input, as shown below in Table 15.
  • at step 116, method 100 produces the results of the model optimization runs, which are shown below in Table 16.
  • at step 118, method 100 may determine that there is no need for developed characteristics in this example.
  • the final model coefficients may include model coefficients determined in the comparative analysis model step above.
  • FIG. 15 is a schematic diagram of an embodiment of a model coefficient matrix 10 with respect to the pipeline and tank farm for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12.
  • This example shows but one of many potential applications of this invention to the pipeline and tank farm industry.
  • the methodology described and illustrated in FIGS. 10-15 could be applied to many other different industries and facilities.
  • this methodology could be applied to the power generation industry, such as developing model coefficients for predicting operating expense for single cycle and combined cycle generating stations that generate electrical power from any combination of boilers, steam turbine generators, combustion turbine generators and heat recovery steam generators.
  • this methodology could be applied to develop model coefficients for predicting the annual cost for ethylene manufacturers of compliance with environmental regulations associated with continuous emissions monitoring and reporting from ethylene furnaces.
  • the model coefficients would apply to both environmental applications and chemical industry applications.
  • FIG. 9 is a schematic diagram of an embodiment of a computing node for implementing one or more embodiments described in this disclosure, such as methods 60, 100, 200, and 300 as described in FIGS. 1 and 10-12, respectively.
  • the computing node may correspond to or may be part of a computer and/or any other computing device, such as a handheld computer, a tablet computer, a laptop computer, a portable device, a workstation, a server, a mainframe, a super computer, and/or a database.
  • the hardware comprises a processor 900 that contains adequate system memory 905 to perform the required numerical computations.
  • the processor 900 executes a computer program residing in system memory 905, which may be a non-transitory computer readable medium, to perform the methods 60, 100, 200, and 300 as described in FIGS. 1 and 10-12, respectively.
  • Video and storage controllers 910 may be used to enable the operation of display 915 to display a variety of information, such as the tables and user interfaces described in FIGS. 2-8.
  • the computing node includes various data storage devices for data input such as floppy disk units 920 , internal/external disk drives 925 , internal CD/DVDs 930 , tape units 935 , and other types of electronic storage media 940 .
  • the aforementioned data storage devices are illustrative and exemplary only.
  • the computing node may also comprise one or more other input interfaces (not shown in FIG. 9) that comprise at least one receiving device configured to receive data via electrical, optical, and/or wireless connections using one or more communication protocols.
  • the input interface may be a network interface that comprises a plurality of input ports configured to receive and/or transmit data via a network.
  • the network may transmit operation and performance data via wired links, wireless links, and/or logical links.
  • Other examples of the input interface may include but are not limited to a keyboard, universal serial bus (USB) interfaces and/or graphical input devices (e.g., onscreen and/or virtual keyboards).
  • the input interfaces may comprise one or more measuring devices and/or sensing devices for measuring asset unit first principle data or other asset-level data 64 described in FIG. 1 .
  • a measuring device and/or sensing device may be used to measure various physical attributes and/or characteristics associated with the operation and performance of a measurable system.
  • These storage media are used to enter the data set and outlier removal criteria into the computing node, store the outlier removed data set, store calculated factors, and store the system-produced trend lines and trend line iteration graphs.
  • the calculations can be performed using statistical software packages or from data entered in spreadsheet formats using Microsoft Excel®, for example. In one embodiment, the calculations are performed either using customized software programs designed for company-specific system implementations or using commercially available software that is compatible with Microsoft Excel® or other database and spreadsheet programs.
  • the computing node can also interface with proprietary or public external storage media 955 to link with other databases to provide data to be used with the calculations of future reliability based on current maintenance spending.
  • An output interface comprises an output device for transmitting data.
  • the output devices can be a telecommunication device 945 , a transmission device, and/or any other output device used to transmit the processed future reliability data, such as the calculation data worksheets, graphs and/or reports, via one or more networks, an intranet or the Internet to other computing nodes, network nodes, a control center, printers 950 , electronic storage media similar to those mentioned as input devices 920 , 925 , 930 , 935 , 940 and/or proprietary storage databases 960 .
  • These output devices used herein are illustrative and exemplary only.
  • system memory 905 interfaces with a computer bus or other connection so as to communicate and/or transmit information stored in system memory 905 to processor 900 during execution of software programs, such as an operating system, application programs, device drivers, and software modules that comprise program code, and/or computer executable process steps, incorporating functionality described herein, e.g., methods 60 , 100 , 200 , and 300 .
  • Processor 900 first loads computer executable process steps from storage, e.g., system memory 905 , storage medium/media, removable media drive, and/or other non-transitory storage devices.
  • Processor 900 can then execute the stored process steps in order to execute the loaded computer executable process steps.
  • Stored data, e.g., data stored by a storage device, can be accessed by processor 900 during the execution of computer executable process steps to instruct one or more components within the computing node.
  • Programming and/or loading executable instructions onto system memory 905 and/or one or more processing units, such as a processor or microprocessor, in order to transform a computing node 40 into a non-generic particular machine or apparatus that performs modelling used to estimate future reliability of a measurable system is well-known in the art.
  • Instructions, real-time monitoring, and other functions implemented by loading executable software into a microprocessor and/or processor can be converted to a hardware implementation by well-known design rules and/or by transforming a general-purpose processor into a processor programmed for a specific application. For example, decisions between implementing a concept in software versus hardware may depend on a number of design choices that include stability of the design, numbers of units to be produced, and issues involved in translating from the software domain to the hardware domain.
  • FIG. 16 is a schematic diagram of another embodiment of a computing node 40 for implementing one or more embodiments within this disclosure, such as methods 60, 100, 200, and 300 as described in FIGS. 1 and 10-12, respectively.
  • Computing node 40 can be any form of computing device, including computers, workstations, handhelds, mainframes, embedded computing devices, holographic computing devices, biological computing devices, nanotechnology computing devices, virtual computing devices, and/or distributed systems.
  • Computing node 40 includes a microprocessor 42, an input device 44, a storage device 46, a video controller 48, a system memory 50, a display 54, and a communication device 56, all interconnected by one or more buses, wires, or other communications pathways 52.
  • the storage device 46 could be a floppy drive, hard drive, CD-ROM, optical drive, bubble memory or any other form of storage device.
  • the storage device 46 may be capable of receiving a floppy disk, CD-ROM, DVD-ROM, memory stick, or any other form of computer-readable medium that may contain computer-executable instructions or data.
  • Further, the communication device 56 could be a modem, network card, or any other device to enable the node to communicate with humans or other nodes.

Abstract

Systems, methods, and apparatuses for improving future reliability prediction of a measurable system by receiving operational and performance data, such as maintenance expense data, first principle data, and asset reliability data via an input interface associated with the measurable system. A plurality of category values may be generated that categorizes the maintenance expense data by a designated interval using a maintenance standard that is generated from one or more comparative analysis models associated with the measureable system. The estimated future reliability of the measurable system is determined based on the asset reliability data and the plurality of category values and the results of the future reliability are displayed on an output interface.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit, and priority benefit, of U.S. Provisional Patent Application Ser. No. 61/978,683 filed Apr. 11, 2014, titled “System and Method for the Estimation of Future Reliability Based on Historical Maintenance Spending,” the disclosure of which is incorporated herein in its entirety.
  • STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT
  • Not applicable.
  • REFERENCE TO A MICROFICHE APPENDIX
  • Not applicable.
  • FIELD OF TECHNOLOGY
  • The disclosure generally relates to the field of modelling and predicting future reliability of measurable systems based on operational and performance data, such as current and historical data regarding production and/or cost associated with maintaining equipment. More particularly, but not by way of limitation, embodiments within the disclosure perform comparative performance analysis and/or determine model coefficients used to model and estimate future reliability of one or more measurable systems.
  • BACKGROUND
  • Typically, for repairable systems, there is a general correlation between the methodology and process used to maintain the repairable systems and future reliability of the systems. For example, individuals who have owned or operated a bicycle, a motor vehicle, and/or any other transportation vehicle are typically aware that the operating condition and reliability of the transportation vehicles can be dependent to some extent on the degree and quality of activities to maintain the transportation vehicles. However, although a correlation may exist between maintenance quality and future reliability, quantifying and/or modelling this relationship may be difficult. In addition to repairable systems, similar relationships and/or correlations may be true for a wide variety of measureable systems where operation and/or performance data is available or otherwise where data used to evaluate a system may be measured.
  • Unfortunately, the value or amount of maintenance spending may not necessarily be an accurate indicator for predicting future reliability of the repairable system. Individuals can accrue maintenance costs that are spent on task items that have relatively minimal effect on improving future reliability. For example, excessive maintenance spending may originate from actual system failures rather than performing preventive maintenance related tasks. Generally, system failures, breakdowns, and/or unplanned maintenance can cost more than a preventive and/or predictive maintenance program that utilizes comprehensive maintenance schedules. As such, improvements need to be made that improve the accuracy of modelling and predicting future reliability of a measureable system.
  • BRIEF SUMMARY
  • The following presents a simplified summary of the disclosed subject matter in order to provide a basic understanding of some aspects of the subject matter disclosed herein. This summary is not an exhaustive overview of the technology disclosed herein. It is not intended to identify key or critical elements of the invention or to delineate the scope of the invention. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is discussed later.
  • In one embodiment, a system for modelling future reliability of a facility based on operational and performance data comprises an input interface configured to: receive maintenance expense data corresponding to a facility; receive first principle data corresponding to the facility; and receive asset reliability data corresponding to the facility. The system may also comprise a processor coupled to a non-transitory computer readable medium, wherein the non-transitory computer readable medium comprises instructions that, when executed by the processor, cause the system to: obtain one or more comparative analysis models associated with the facility; obtain a maintenance standard that generates a plurality of category values that categorizes the maintenance expense data by a designated interval based upon at least the maintenance expense data, the first principle data, and the one or more comparative analysis models; and determine an estimated future reliability of the facility based on the asset reliability data and the plurality of category values. The system may also comprise a user interface that displays the results of the future reliability.
  • In another embodiment, a method for modelling future reliability of a measurable system based on operational and performance data, comprising: receiving maintenance expense data via an input interface associated with a measurable system; receiving first principle data via an input interface associated with the measureable system; receiving asset reliability data via an input interface associated with the measureable system; generating, using a processor, a plurality of category values that categorizes the maintenance expense data by a designated interval using a maintenance standard that is generated from one or more comparative analysis models associated with the measureable system; determining, using a processor, an estimated future reliability of the measureable system based on the asset reliability data and the plurality of category values; and outputting the results of the estimated future reliability using an output interface.
  • In yet another embodiment, an apparatus for modelling future reliability of an equipment asset based on operational and performance data comprises an input interface comprising a receiving device configured to: receive maintenance expense data corresponding to an equipment asset; receive first principle data corresponding to the equipment asset; and receive asset reliability data corresponding to the equipment asset; a processor coupled to a non-transitory computer readable medium, wherein the non-transitory computer readable medium comprises instructions that, when executed by the processor, cause the apparatus to: generate a plurality of category values that categorizes the maintenance expense data by a designated interval from a maintenance standard; and determine an estimated future reliability of the equipment asset comprising estimated future reliability data based on the asset reliability data and the plurality of category values; and an output interface comprising a transmission device configured to transmit a processed data set that comprises the estimated future reliability data to a control center for comparing different equipment assets based on the processed data set.
  • BRIEF DESCRIPTION OF THE DRAWING
  • FIG. 1 is a flow chart of an embodiment of a data analysis method that receives data from one or more various data sources relating to a measureable system, such as a power generation plant;
  • FIG. 2 is a schematic diagram of an embodiment of a data compilation table generated in the data compilation of the data analysis method described in FIG. 1;
  • FIG. 3 is a schematic diagram of an embodiment of a categorized maintenance table generated in the categorized time based maintenance data of the data analysis method described in FIG. 1;
  • FIG. 4 is a schematic diagram of an embodiment of a categorized reliability table generated in the categorized time based reliability data of the data analysis method described in FIG. 1;
  • FIG. 5 is a schematic diagram of an embodiment of a future reliability data table generated in the future reliability prediction of the data analysis method described in FIG. 1;
  • FIG. 6 is a schematic diagram of an embodiment of a future reliability statistic table generated in the future reliability prediction of the data analysis method described in FIG. 1;
  • FIG. 7 is a schematic diagram of an embodiment of a user interface input screen configured to display information a user may need to input to determine a future reliability prediction using the data analysis method described in FIG. 1;
  • FIG. 8 is a schematic diagram of an embodiment of a user interface input screen configured for EFOR prediction using the data analysis method described in FIG. 1;
  • FIG. 9 is a schematic diagram of an embodiment of a computing node for implementing one or more embodiments.
  • FIG. 10 is a flow chart of an embodiment of a method for determining model coefficients for use in comparative performance analysis of a measureable system, such as a power generation plant.
  • FIG. 11 is a flow chart of an embodiment of a method for determining primary first principle characteristics as described in FIG. 10.
  • FIG. 12 is a flow chart of an embodiment of a method for developing constraints for use in solving the comparative analysis model as described in FIG. 10.
  • FIG. 13 is a schematic diagram of an embodiment of a model coefficient matrix for determining model coefficients as described in FIGS. 10-12.
  • FIG. 14 is a schematic diagram of an embodiment of a model coefficient matrix with respect to a fluidized catalytic cracking unit (Cat Cracker) for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12.
  • FIG. 15 is a schematic diagram of an embodiment of a model coefficient matrix with respect to the pipeline and tank farm for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12.
  • FIG. 16 is a schematic diagram of another embodiment of a computing node for implementing one or more embodiments.
  • While certain embodiments will be described in connection with the preferred illustrative embodiments shown herein, it will be understood that it is not intended to limit the invention to those embodiments. On the contrary, it is intended to cover all alternatives, modifications, and equivalents, as may be included within the spirit and scope of the invention as defined by claims that are included within this disclosure. In the drawing figures, which are not to scale, the same reference numerals are used throughout the description and in the drawing figures for components and elements having the same structure, and primed reference numerals are used for components and elements having a similar function and construction to those components and elements having the same unprimed reference numerals.
  • DETAILED DESCRIPTION
  • It should be understood that, although an illustrative implementation of one or more embodiments are provided below, the various specific embodiments may be implemented using any number of techniques known by persons of ordinary skill in the art. The disclosure should in no way be limited to the illustrative embodiments, drawings, and/or techniques illustrated below, including the exemplary designs and implementations illustrated and described herein. Furthermore, the disclosure may be modified within the scope of the appended claims along with their full scope of equivalents.
  • Disclosed herein are one or more embodiments for estimating future reliability of measurable systems. In particular, one or more embodiments may obtain model coefficients for use in comparative performance analysis by determining one or more target variables and one or more characteristics for each of the target variables. The target variables may represent different parameters for a measureable system. The characteristics of a target variable may be collected and sorted according to a data collection classification. The data collection classification may be used to quantitatively measure the differences in characteristics. After collecting and validating the data, a comparative analysis model may be developed to compare predicted target variables to actual target variables for one or more measureable systems. The comparative analysis model may be used to obtain a set of complexity factors that attempts to minimize the differences in predicted versus actual target variable values within the model. The comparative analysis model may then be used to develop a representative value for activities performed periodically on the measurable system to predict future reliability.
  • FIG. 1 is a flow chart of an embodiment of a data analysis method 60 that receives data from one or more various data sources relating to a measureable system, such as a power generation plant. The data analysis method 60 may be implemented by a user, a computing node, or combinations thereof to estimate future reliability of a measureable system. In one embodiment, the data analysis method 60 may automatically receive updated available data, such as updated operational and performance data, from various data sources, update one or more comparative analysis models using the received updated data, and subsequently provide updates on estimations of future reliability for one or more measurable systems. A measurable system is any system that is associated with performance data, conditioned data, operation data, and/or other types of measurable data (e.g., quantitative and/or qualitative data) used to evaluate the status of the system. For example, the measurable system may be monitored using a variety of parameters and/or performance factors associated with one or more components of the measurable system, such as in a power plant, facility, or commercial building. In another embodiment, the measurable system may be associated with available performance data, such as stock prices, safety records, and/or company finance. The terms “measurable system,” “facility,” “asset,” or “plant,” may be used interchangeably throughout this disclosure.
  • As shown in FIG. 1, the data from the various data sources may be applied at different computational stages to model and/or improve future reliability predictions based on available data for a measureable system. In one embodiment, the available data may be current and historic maintenance data that relates to one or more measurable parameters of the measureable system. For instance, in terms of maintenance and repairable equipment, one way to describe maintenance quality is to compute the annual or periodic maintenance cost for a measurable system, such as an equipment asset. The annual or periodic maintenance number denotes the amount of money spent over a given period of time, which may not necessarily accurately reflect future reliability. For example, a vehicle owner may spend money to wash and clean a vehicle weekly, but spend relatively little or no money for maintenance that could potentially increase the future reliability of the car, such as replacing tires and/or oil or filter changes. Although the annual maintenance costs for washing and cleaning the car may be a sizeable number when performed frequently, the maintenance tasks and/or activities of washing and cleaning may have relatively little or no effect on improving a car's reliability.
  • FIG. 1 illustrates that the data analysis method 60 may be used to predict the future Equivalent Forced Outage Rate (EFOR) estimates for Rankine and Brayton cycle based power generation plants. EFOR is defined as the hours of unit failure (e.g., unplanned outage hours and equivalent unplanned derated hours) given as a percentage of the total hours of the availability of that unit (e.g., unplanned outage, unplanned derated, and service hours). As shown in FIG. 1, within a first data collection stage, the data analysis method 60 may initially obtain asset maintenance expense data 62 and asset unit first principle data or other asset-level data 64 that relate to the measureable data system, such as a power generation plant. Asset maintenance expense data 62 for a variety of facilities may typically be obtained directly from the plant facilities. The asset maintenance expense data 62 may represent the cost associated with maintaining a measurable system for a specified time period (e.g., in seconds, minutes, hours, months, and years). For example, the asset maintenance expense data 62 may be the annual or periodic maintenance cost for one or more measurable systems. The asset unit first principle data or other asset-level data 64 may represent physical or fundamental characteristics of a measurable system. For example, the asset unit first principle data or other asset-level data 64 may be operational and performance data, such as turbine inlet temperature, age of the asset, size, horsepower, amount of fuel consumed, and actual power output compared to nameplate, that correspond to one or more measureable systems.
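A simplified sketch of the EFOR calculation per the definition above; the full NERC-GADS formulation includes additional terms, and the hours used here are hypothetical.

```python
def efor(unplanned_outage_hours, equiv_unplanned_derated_hours, service_hours):
    """Equivalent Forced Outage Rate: hours of unit failure as a percentage
    of the unit's total available hours (simplified relative to the full
    NERC-GADS formulation)."""
    failure_hours = unplanned_outage_hours + equiv_unplanned_derated_hours
    available_hours = (unplanned_outage_hours
                       + equiv_unplanned_derated_hours
                       + service_hours)
    return 100.0 * failure_hours / available_hours

# Hypothetical year: 500 unplanned outage hours, 200 equivalent derated
# hours, 7,000 service hours.
print(f"{efor(500, 200, 7000):.1f}%")  # about 9.1%
```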
  • The data obtained in the first data collection stage may be subsequently received or entered to generate a maintenance standard 66. In one embodiment, the maintenance standard 66 may be an annualized maintenance standard where a user supplies in advance one or more modelling equations that compute the annualized maintenance standard. The result may be used to normalize the asset maintenance expense data 62 and provide a benchmark indicator to measure the adequacy of spending relative to other power generation plants of a similar type. In one embodiment, a divisor or standard can be computed based on the asset unit's first principle data or other asset-level data 104, which are explained in more detail in FIGS. 10-12. Alternative embodiments may produce the maintenance standard 66, for example, from simple regression analysis with data from available plant related target variables.
  • Maintenance expenses for the replacement of components that normally wear out over time may occur at different time intervals, causing variations in periodic maintenance expenses. To address the potential issue, the data analysis method 60 may generate a maintenance standard 66 that develops a representative value for maintenance activities on a periodic basis. For example, to generate the maintenance standard 66, the data analysis method 60 may normalize maintenance expenses to some other time period. In another embodiment, the data analysis method 60 may generate a periodic maintenance spending divisor to normalize the actual periodic maintenance spending to measure the under (Actual Expense/Divisor ratio <1) or over (Actual Expense/Divisor ratio >1) spending. The maintenance spending divisor may be a value computed from a semi-empirical analysis of data using asset maintenance expense data 62, asset unit first principle data or other asset-level data 64 (e.g., asset characteristics), and/or documented expert opinions. In this embodiment, asset unit first principle data or other asset-level data 64, such as plant size, plant type, and/or plant output, in conjunction with computed annualized maintenance expenses, may be used to compute a standard maintenance expense (divisor) value for each asset in the analysis as described in U.S. Pat. No. 7,233,910, filed Jul. 18, 2006, titled “System and Method for Determining Equivalency Factors for use in Comparative Performance Analysis of Industrial Facilities,” which is hereby incorporated by reference as if reproduced in its entirety. The calculation may be performed with a historical dataset that may include the assets under current analysis. The maintenance standard calculation may be applied as a model that includes one or more equations for modelling a measurable system's future reliability prediction. The data used to compute the maintenance standard divisor may be supplied by the user, transferred from a remote storage device, and/or received via a network from a remote network node, such as a server or database.
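A minimal sketch of the under/over-spending ratio described above, assuming the standard (divisor) values have already been computed from the comparative analysis model; the figures are hypothetical.

```python
# Hypothetical annualized maintenance expenses and computed standards.
assets = {
    "Plant 1": {"actual_annualized_mx": 4.2e6, "mx_standard": 5.0e6},
    "Plant 2": {"actual_annualized_mx": 6.3e6, "mx_standard": 5.1e6},
}

for name, a in assets.items():
    ratio = a["actual_annualized_mx"] / a["mx_standard"]
    label = "under-spending (ratio < 1)" if ratio < 1 else "over-spending (ratio > 1)"
    print(f"{name}: Act Mx / Std Mx = {ratio:.2f} -> {label}")
```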
  • FIG. 1 illustrates that the data analysis method 60 may receive the asset reliability data 70 in a second data collection stage. The asset reliability data 70 may correspond to each of the measureable systems. The asset reliability data 70 is any data that corresponds to determining the reliability, failure rate, and/or unexpected down time of a measurable system. Once the data analysis method 60 receives the asset reliability data 70 for each measureable system, the data may be compiled and linked to the measureable systems' maintenance spending ratio, which may be associated or shown on the same line as the other measureable systems and time specific data. For power generation plants, the asset reliability data 70 may be obtained from the North American Electric Reliability Corporation's Generating Availability Database (NERC-GADS). Other types of measureable systems may also obtain asset reliability data 70 from similar databases.
  • At data compilation 68, the data analysis method 60 compiles the computed maintenance standard 66, asset maintenance expense data 62, and asset reliability data 70 into a common file. In one embodiment, the data analysis method 60 may add an additional column to the data arrangement within the common file. The additional column may represent the ratios of actual annualized maintenance expenses and the computed standard value for each measureable system. The data analysis method 60 may also add another column within the data compilation 68 that categorizes the maintenance spending ratios divided into percentile intervals or categories. For example, the data analysis method 60 may use nine different intervals or categories to categorize the maintenance spending ratios.
  • In the categorized time based maintenance data 72, the data analysis method 60 may place the maintenance category values into a matrix, such as a two-dimensional matrix, that defines each measureable system, such as a power generation plant, and time unit. In the categorized time based reliability data 74, the data analysis method 60 assigns the reliability for each measureable system using the same matrix structure as described in the categorized time based maintenance data 72. In the future reliability prediction 76, the data is statistically analyzed from the categorized time based maintenance data 72 and the categorized time based reliability data 74 to compute an average and/or other statistical calculations to determine the future reliability of the measureable system. The number of computed time periods or years in the future may be a function of the available data, such as the asset maintenance expense data 62, asset reliability data 70, and asset unit first principle data or other asset-level data. For instance, the future interval may be one year in advance because of the available data, but other embodiments may utilize selection of two or three years in the future depending on the available data sets. Also, other embodiments may use other time periods besides years, such as seconds, minutes, hours, days, and/or months, depending on the granularity of the available data.
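A sketch of the future reliability step, assuming a one-year-ahead interval as described above: each plant-year's maintenance category is paired with the EFOR actually achieved the following year, and the mean per category serves as the prediction. All data values are hypothetical.

```python
import pandas as pd

# Hypothetical long-format data: one row per plant and year, with the
# maintenance spending ratio category and that year's EFOR.
df = pd.DataFrame({
    "plant": ["P1"] * 4 + ["P2"] * 4,
    "year": [2010, 2011, 2012, 2013] * 2,
    "mx_category": [5, 4, 4, 3, 1, 2, 1, 2],
    "efor": [2.4, 3.1, 2.8, 2.2, 9.7, 8.1, 10.2, 7.9],
})

# Pair each plant-year's category with the EFOR achieved the NEXT year
# (a one-year-ahead prediction interval).
df = df.sort_values(["plant", "year"])
df["efor_next_year"] = df.groupby("plant")["efor"].shift(-1)

# Average future EFOR per maintenance category is the prediction table.
prediction = df.dropna().groupby("mx_category")["efor_next_year"].mean()
print(prediction)
```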
  • It should be noted that while the discussion involving FIG. 1 was specific to power generation plants and industry, the data analysis method 60 may also be applied to other industries where similar maintenance and reliability databases exist. For example, in the refining and petrochemical industries, maintenance and reliability data exists for process plants and/or other measureable systems over many years. Thus, the data analysis method 60 may also forecast future reliability for process plants and/or other measureable systems using current and previous year maintenance spending ratio values. Other embodiments of the data analysis method 60 may also be applied to the pipeline industry and maintenance of buildings (e.g., office buildings) and other structures.
  • Persons of ordinary skill in the art are aware that other industries may utilize a wide variety of metrics or parameters for the asset reliability data 70 that differ from the power industry's EFOR measure that was applied in FIG. 1. For example, other appropriate asset reliability data 70 that could be used in the data analysis method 60 include but are not limited to “unavailability,” “availability,” “commercial unavailability,” and “mean time between failures.” These metrics or parameters may have definitions often unique to a given situation, but their general interpretation is known to one skilled in the reliability analysis and reliability prediction field.
  • FIG. 2 is a schematic diagram of an embodiment of a data compilation table 250 generated in the data compilation 68 of the data analysis method 60 described in FIG. 1. The data compilation table 250 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device. FIG. 2 illustrates that the data compilation table 250 comprises a client number column 252 that indicates the asset owner, a plant name column 254 that indicates the measureable system and/or where the data is being collected, and a study year column 256. As shown in FIG. 2, each asset owner within table 250 owns a single measureable system. In other words, each of the measureable systems is owned by different asset owners. Other embodiments of the data compilation table 250 may have a plurality of measureable systems owned by the same asset owner. The study year column 256 refers to the time period of when the data is collected or analyzed from the measureable system.
  • The data compilation table 250 may comprise additional columns calculated using the data analysis method 60. The computed maintenance (Mx) standard column 258 may comprise data values that represent the computational result of the maintenance standard as described in maintenance standard 66 in FIG. 1. Recall that in one embodiment, the maintenance standard 66 may be generated as described in U.S. Pat. No. 7,233,910. Other embodiments may compute results of the maintenance standard known by persons of ordinary skill in the art. The actual annualized Mx expense column 260 may comprise computed data values that represent the normalized actual maintenance data based on the maintenance standard as described in maintenance standard 66 in FIG. 1. The actual maintenance data may be the effective annual expense over several years (e.g., about 5 years). The ratio actual (Act) Mx/standard (Std) Mx column 262 may comprise data values that represent the normalized maintenance spending ratio that is used to assess the adequacy or effectiveness of maintenance spending in relationship to future reliability. The last column, the EFOR column 266, comprises data values that represent the reliability or, in this case, un-reliability value for the current time period. The data values of the EFOR column 266 are a summation of hours of unplanned outages and de-rates divided by the hours in the operating period. The definition of EFOR in this example follows the notation as documented in NERC-GADS literature. For example, an EFOR value of 9.7 signifies that the measureable system was effectively down about 9.7% of its operating period due to unplanned outage events.
  • The Act Mx/Std Mx: Decile column 264 may comprise data values that represent the maintenance spending ratios categorized into value intervals relating to distinct ranges as discussed in data compilation 68 in FIG. 1. Duo-deciles, deciles, sextiles, quintiles, or quartiles could be used, but in this example the data is divided into nine categories based on the percentile ranking of the maintenance spending ratio data values found in the Act Mx/Std Mx column 262. The number of intervals or categories used to divide the maintenance spending ratios may depend on the dataset size, where more detailed divisions become statistically possible with a relatively larger dataset. A variety of methods or algorithms known by persons of ordinary skill in the art may be used to determine the number of intervals based on the dataset size. The transformation of maintenance spending ratios into ordinal categories may serve as a reference to assign future EFOR reliability values that were actually achieved.
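  • As a minimal sketch of the percentile-based categorization, assuming equal-count bins (any binning algorithm known in the art could be substituted):

```python
import numpy as np

def ordinal_categories(ratios, n_bins=9):
    """Assign each maintenance spending ratio an ordinal category 1..n_bins
    based on its percentile rank, so each category covers roughly an equal
    share of the dataset."""
    ratios = np.asarray(ratios, dtype=float)
    # Bin edges at equally spaced percentiles of the observed ratios.
    edges = np.percentile(ratios, np.linspace(0, 100, n_bins + 1))
    # Map each value to its interval; clip keeps results in 1..n_bins.
    return np.clip(np.searchsorted(edges, ratios, side="right"), 1, n_bins)

print(ordinal_categories([0.55, 0.81, 0.95, 1.10, 1.60], n_bins=5))  # [1 2 3 4 5]
```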
  • FIG. 3 is a schematic diagram of an embodiment of a categorized maintenance table 350 generated in the categorized time based maintenance data 72 of the data analysis method 60 described in FIG. 1. The categorized maintenance table 350 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device. Specifically, the categorized maintenance table 350 is a transformation of the maintenance spending ratio ordinal category data values found within FIG. 2's data compilation table 250. FIG. 3 illustrates that the plant name column 352 may identify the different measureable systems. The year columns 354-382 represent the different years or time periods for each of the measureable systems. Using FIG. 3 as an example, Plants 1 and 2 have data values from 1999-2013 and Plants 3 and 4 have data values from 2002-2013. The type of data found within the year columns 354-382 is substantially similar to the type of data within the Act Mx/Std Mx: Decile column 264 in FIG. 2. In particular, the data within the year columns 354-382 represents intervals relating to distinct ranges of the maintenance spending ratio and may be generally referred to as the maintenance spending ratio ordinal category. For example, for the year 1999, Plant 1 has a maintenance spending ratio categorized as "5" and Plant 2 has a maintenance spending ratio categorized as "1."
  • FIG. 4 is a schematic diagram of an embodiment of a categorized reliability table 400 generated in the categorized time based reliability data 74 of the data analysis method 60 described in FIG. 1. The categorized reliability table 400 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device. The categorized reliability table 400 is a transformation of EFOR data values found within FIG. 2's data compilation table 250. FIG. 4 illustrates that the plant name column 452 may identify the different measureable systems. The year columns 404-432 represent the different years for each of the measureable systems. Using FIG. 4 as an example, Plants 1 and 2 have data values from 1999-2013 and Plants 3 and 4 have data values from 2002-2013. The type of data found within the year columns 404-432 is substantially similar to the type of data within the EFOR column 266 in FIG. 2. In particular, the data within the year columns 404-432 represents EFOR values that denote the percentage of unplanned outage events. For example, for the year 1999, Plant 1 has an EFOR of 2.4, which indicates that Plant 1 was down about 2.4% of its operating period due to unplanned outage events, and Plant 2 has an EFOR of 5.5, which indicates that Plant 2 was down about 5.5% of its operating period due to unplanned outage events.
  • FIG. 5 is a schematic diagram of an embodiment of a future reliability data table 500 generated in the future reliability prediction 76 of the data analysis method 60 described in FIG. 1. The future reliability data table 500 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device. The process of computing future reliability starts with selecting the future reliability interval; for example, in FIG. 5, the interval is about two years. After selecting the future reliability interval, the data shown in FIG. 3 is scanned horizontally, on a row-by-row basis, within the categorized maintenance table 350 to determine rows whose entries are separated by only about one year. Using FIG. 3 for example, the row associated with Plant 1 would satisfy the data separation of about one year, but Plant 11 would not because Plant 11 in the categorized maintenance table 350 has a data gap between years 2006 and 2008. In other words, Plant 11 is missing data for year 2007, and thus, the entries for Plant 11 are not all separated by about one year. Other embodiments may select future reliability intervals measured in seconds, minutes, hours, days, and/or months in the future. The time interval used to determine future reliability depends on the level of data granularity.
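  • A minimal sketch of the row scan, assuming a hypothetical mapping of plant name to {year: ordinal category}; only plants whose recorded years form an unbroken run qualify:

```python
def rows_without_gaps(table):
    """Keep only plants whose yearly entries are consecutive, i.e.,
    separated by about one year with no missing years."""
    eligible = {}
    for plant, by_year in table.items():
        years = sorted(by_year)
        if all(b - a == 1 for a, b in zip(years, years[1:])):
            eligible[plant] = by_year
    return eligible

table = {
    "Plant 1": {1999: 5, 2000: 4, 2001: 5},
    "Plant 11": {2005: 3, 2006: 2, 2008: 4},  # missing 2007 -> excluded
}
print(list(rows_without_gaps(table)))  # ['Plant 1']
```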
  • The maintenance spending ratio ordinal category for each qualifying row can subsequently be paired with a time forward EFOR value from the categorized reliability table 400 to form ordered pairs. Each generated ordered pair comprises the maintenance spending ratio ordinal category and the time forward EFOR value. Since the selected future reliability interval is about two years, the year associated with the maintenance spending ratio ordinal category and the year for the EFOR value within each generated ordered pair may be two years apart. Some examples of these ordered pairs, for the same plant or same row, for analyzing reliability about two years in advance are:
  • First ordered pair: (maintenance spending ratio ordinal category in 1999, EFOR value in 2001)
    Second ordered pair: (maintenance spending ratio ordinal category in 2000, EFOR value in 2002)
    Third ordered pair: (maintenance spending ratio ordinal category in 2001, EFOR value in 2003)
    Fourth ordered pair: (maintenance spending ratio ordinal category in 2002, EFOR value in 2004)
    As shown above, in each of the ordered pairs, the years that separate the maintenance spending ratio ordinal category and the EFOR value are based on the future reliability interval, which is about two years. To form the ordered pairs, the matrices of FIGS. 3 and 4 may be scanned for possible data pairs separated by two years (e.g., 1999 and 2001). In this case, the middle year data (e.g., 2000) is not used for the data pairs. This process can be repeated for other future reliability intervals (e.g., one year in advance of the maintenance ratio ordinal value) at the discretion of the user and depending on the information desired from the analysis. Moreover, the ordered pair examples above depict that the maintenance spending ratio ordinal category year and the EFOR year are incremented by one for each successive ordered pair. For example, the first ordered pair has a maintenance spending ratio ordinal category in 1999 and the second ordered pair has a maintenance spending ratio ordinal category in 2000.
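  • A minimal sketch of the pairing step, assuming hypothetical mappings of plant to {year: value} for both tables:

```python
def ordered_pairs(mx_categories, efor_values, interval=2):
    """Pair the ordinal category in year t with the EFOR observed
    `interval` years later, for each plant present in both tables."""
    pairs = []
    for plant, cats in mx_categories.items():
        efors = efor_values.get(plant, {})
        for year, cat in cats.items():
            future_efor = efors.get(year + interval)
            if future_efor is not None:
                pairs.append((cat, future_efor))
    return pairs

cats = {"Plant 1": {1999: 5, 2000: 4}}
efors = {"Plant 1": {2001: 2.4, 2002: 3.1}}
print(ordered_pairs(cats, efors))  # [(5, 2.4), (4, 3.1)]
```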
  • The maintenance spending ratio ordinal category value of each ordered pair is used to place the corresponding time forward EFOR value into the correct column within the future reliability data table 500. As shown in FIG. 5, columns 502, 504, 506, 508, 510, 512, 514, 516, and 518 comprise the EFOR values with maintenance spending ratio ordinal categories of "1" through "9," respectively.
  • FIG. 6 is a schematic diagram of an embodiment of a future reliability statistic table 600 generated in the future reliability prediction 76 of the data analysis method 60 described in FIG. 1. The future reliability statistic table 600 may be displayed or transmitted using an output interface, such as a graphical user interface or a printing device. In FIG. 6, the future reliability statistic table 600 comprises the maintenance spending ratio ordinal category columns 602-618. As shown in FIG. 6, each of the maintenance spending ratio ordinal category columns 602-618 corresponds to a maintenance spending ratio ordinal category. For example, column 602 corresponds to the maintenance spending ratio ordinal category "1" and column 604 corresponds to the maintenance spending ratio ordinal category "2." The compiled data in each maintenance ratio ordinal value column 602-618 is analyzed using the data within the future reliability data table 500 to compute various statistics that indicate future reliability information. As shown in FIG. 6, rows 620, 622, and 624 represent the average, the median, and the value at the 90th percentile of the distribution of the future reliability data for each of the maintenance ratio ordinal values. In FIG. 6, the future reliability information is interpreted as the future reliability prediction, or EFOR, for a measurable system whose current-year maintenance spending ratio falls within a specific ordinal category.
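  • A minimal sketch of these per-category statistics, assuming the (category, future EFOR) ordered pairs produced above:

```python
import numpy as np

def future_reliability_stats(pairs, n_bins=9):
    """Compute the FIG. 6 statistics (average, median, 90th percentile)
    of the time forward EFOR values within each ordinal category."""
    stats = {}
    for cat in range(1, n_bins + 1):
        efors = np.array([e for c, e in pairs if c == cat])
        if efors.size:
            stats[cat] = {"average": efors.mean(),
                          "median": np.median(efors),
                          "p90": np.percentile(efors, 90)}
    return stats

pairs = [(1, 9.7), (1, 4.2), (2, 2.4), (2, 3.1), (2, 2.8)]
print(future_reliability_stats(pairs)[2])
```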
  • Future EFOR predictions can be computed utilizing current and previous years' maintenance spending ratios. For multi-year cases, the maintenance spending ratio is computed by adding the annualized expenses for the included years and dividing by the sum of the maintenance standards for those same years. In this way, the spending ratio reflects performance over several years relative to a general standard that is the summation of the standards computed for each of the included years.
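  • Expressed as a formula, with $Y$ denoting the set of included years (notation added here only for clarity):

$$\text{Mx ratio}_{\text{multi-year}} = \frac{\sum_{y \in Y} \text{Actual Annualized Mx}_y}{\sum_{y \in Y} \text{Standard Mx}_y}$$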
  • FIG. 7 is a schematic diagram of an embodiment of a user interface input screen 700 configured to display the information a user may need to input to determine a future reliability prediction 76 using the data analysis method 60 described in FIG. 1. The user interface input screen 700 comprises a measurable system selection column 702 that a user may use to select the type of measureable system. Using FIG. 7 as an example, the user may select the "Coal-Rankine" plant as the type of power generation unit or measureable system. Other selections shown in FIG. 7 include "Gas-Rankine" and "Combustion Turbine." Once the type of measureable system is selected, the user interface input screen 700 may generate the required data items 704 associated with the selected type of measureable system. The data items 704 that appear within the user interface input screen 700 may vary depending on the measureable system selected within the measurable system selection column 702. FIG. 7 illustrates that a user has selected a Coal-Rankine plant, and the user may enter all fields that are shown blank with an underscore line. This may also include the annualized maintenance expenses for the specific year. In other embodiments, the blank fields may be populated using information received from a remote data storage or via a network. The current model also allows a user, if desired, to enter previous-year data to add more information for the future reliability prediction. Other embodiments may import and obtain the additional information from a storage medium or via a network.
  • Once this information is entered, the calculation fields 706, such as the annual maintenance standard (k$) field and the risk modification factor field, at the bottom of user interface input screen 700 may automatically populate based on the information entered by the user. The annual maintenance standard (k$) field may be computed substantially similarly to the computed Mx standard column 258 shown in FIG. 2. The risk modification factor field may represent the overall risk modification factor for the comparative analysis model and may be a ratio of the computed future one year average EFOR to the overall average EFOR. In other words, the data result automatically generated within the risk modification factor field represents the relative reliability risk of a particular measurable system compared to an overall average.
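  • A minimal sketch of that ratio (names hypothetical):

```python
def risk_modification_factor(future_one_year_avg_efor, overall_avg_efor):
    """Relative reliability risk: predicted one-year-forward average EFOR
    for this unit divided by the overall average EFOR across all data."""
    return future_one_year_avg_efor / overall_avg_efor

print(risk_modification_factor(4.8, 6.0))  # 0.8 -> lower risk than average
```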
  • FIG. 8 is a schematic diagram of an embodiment of a user interface input screen 800 configured for EFOR prediction using the data analysis method 60 described in FIG. 1. In FIG. 8, there are several results for consideration by the user. The curve 802 is a ranking curve that represents the distribution of maintenance spending ratios, and the triangle 804 on the curve 802 shows the location of the current measureable system or the measureable system under consideration by a user (e.g., the "Coal-Rankine" plant selected in FIG. 7). The user interface input screen 800 illustrates to a user both the range of known performance and where in that range the specific measureable system under consideration falls. The numbers below this curve are the quintile values of the maintenance spending ratio, where the maintenance spending ratios are categorized into five different value intervals. The data results illustrated in FIG. 8 were computed for quintiles in this embodiment; however, other divisions are possible based on the amount of data available and the objectives of the analyst and user.
  • The histogram 806 represents the average 1 year future EFOR dependent on the specific quintile the maintenance spending ratio falls under. For example, the lowest 1 year future EFOR appears for plants that have a maintenance spending ratio in the second quintile, that is, maintenance spending ratios between about 0.8 and about 0.92. This level of spending suggests the unit is successfully managing the asset with better practices that assure long-term reliability. Notice that the first quintile, plants with maintenance spending ratios of about zero to about 0.8, actually exhibits a higher EFOR value, suggesting that operators are not performing the required or sufficient maintenance to produce long-term reliability. If a plant falls into the fifth quintile, one interpretation is that operators could be overspending because of breakdowns. Since maintenance costs from unplanned maintenance events can be larger than planned maintenance expenses, high maintenance spending ratios may accompany high EFOR values.
  • The dotted line 810 represents the average EFOR for all of the data analyzed for the current measureable system. The diamond 812 represents the actual 1 year future EFOR estimate located directly above the triangle 804, which represents the maintenance spending ratio. The two symbols correlate or connect the current maintenance spending level, triangle 804, to a future 1 year estimate of EFOR, the diamond 812.
  • FIG. 10 is a flow chart of an embodiment of a method 100 for determining model coefficients for use in comparative performance analysis of a measureable system, such as a power generation plant. Method 100 may be used to generate the one or more comparative analysis models used within the maintenance standard 66 described in FIG. 1. Specifically, method 100 determines the usable characteristics and model coefficients associated with one or more comparative analysis models that illustrate the correlation between maintenance quality and future reliability. Method 100 may be implemented by a user and/or a computing node configured to receive input data for determining model coefficients. For example, a computing node may automatically receive data and update model coefficients based on the received updated data.
  • Method 100 starts at step 102 and selects one or more target variables ("Target Variables"). A target variable is a quantifiable attribute associated with the measureable system, such as total operating expense, financial result, capital cost, operating cost, staffing, product yield, emissions, energy consumption, or any other quantifiable attribute of performance. Target Variables could relate to manufacturing, refining, and chemical facilities, including petrochemicals, organic and inorganic chemicals, plastics, agricultural chemicals, and pharmaceuticals, as well as olefins plants, pipelines, power generation and distribution, and other industrial facilities. Other embodiments of the Target Variables could also address different environmental aspects, maintenance of buildings and other structures, and other forms and types of industrial and commercial industries.
  • At step 104, method 100 identifies the first principle characteristics. First principle characteristics are the physical or fundamental characteristics of a measurable system or process that are expected to determine the Target Variable. In one embodiment, the first principle characteristics may be the asset unit first principle data or other asset-level data 64 described in FIG. 1. Common brainstorming or team knowledge management techniques can be used to develop the first list of possible characteristics for the Target Variable. In one embodiment, all of the characteristics of an industrial facility that may cause variation in the Target Variable when comparing different measureable systems, such as industrial facilities, are identified as first principle characteristics.
  • At step 106, method 100 determines the primary first principle characteristics from all of the first principle characteristics identified at step 104. As will be understood by those skilled in the art, many different options are available to determine the primary first principle characteristics. One such option is shown in FIG. 11, which will be discussed in more detail below. Afterwards, method 100 moves to step 108 to classify the primary characteristics. Potential classifications for the primary characteristics include discrete, continuous, or ordinal. Discrete characteristics are those characteristics that can be measured using a selection between two or more states, for example a binary determination, such as "yes" or "no." An example discrete characteristic could be "Duplicate Equipment." The determination of "Duplicate Equipment" is "yes, the facility has duplicate equipment" or "no, there is no duplicate equipment." Continuous characteristics are directly measurable. An example of a continuous characteristic could be "Feed Capacity," since it is directly measured as a continuous variable. Ordinal characteristics are characteristics that are not readily measurable. Instead, ordinal characteristics can be scored along an ordinal scale reflecting physical differences that are not directly measurable. It is also possible to create ordinal characteristics for variables that are measurable or binary. An example of an ordinal characteristic would be refinery configuration among three typical major industry options. These are presented in ordinal scale by unit complexity:
    1.0 Atmospheric Distillation
    2.0 Catalytic Cracking Unit
    3.0 Coking Unit

    The above measurable systems are ranked in order based on ordinal variables, which generally do not contain information about any quantifiable quality of measurement. In the above example, the difference in complexity between the 1.0 measureable system (atmospheric distillation) and the 2.0 measureable system (catalytic cracking unit) does not necessarily equal the difference in complexity between the 3.0 measureable system (coking unit) and the 2.0 measureable system (catalytic cracking unit).
  • Variables placed on an ordinal scale may be converted to an interval scale for development of model coefficients. The conversion of ordinal variables to interval variables may use a scale developed to place the differences between units on a measurable scale. The process to develop an interval scale for ordinal characteristic data can rely on the understanding that a team of experts has of the characteristic's scientific drivers. The team of experts can first determine, based on their understanding of the process being measured and scientific principle, the type of relationship between different physical characteristics and the Target Variable. The relationship may be linear, logarithmic, a power function, a quadratic function, or any other mathematical relationship. The experts can then optionally estimate a complexity factor to reflect the relationship between characteristics and variation in the Target Variable. A complexity factor may be the exponential power used to linearize the relationship between the ordinal variable and the Target Variable, resulting in an interval variable scale. Additionally, in circumstances where no data exist, the determination of primary characteristics may be based on expert experience.
  • At step 110, method 100 may develop a data collection classification arrangement. The method 100 may quantify the characteristics categorized as continuous such that data is collected in a consistent manner. For characteristics categorized as binary, a simple yes/no questionnaire may be used to collect data. A system of definitions may need to be developed to collect data in a consistent manner. For characteristics categorized as ordinal, a measurement scale can be developed as described above.
  • To develop a measurement scale for ordinal characteristics, method 100 may employ at least four methods to develop a consensus function. In one embodiment, an expert or team of experts can be used to determine the type of relationship that exists between the characteristics and the variation in the Target Variable. In another embodiment, the ordinal characteristics can be scaled arbitrarily (for example 1, 2, 3 . . . n for n configurations). By plotting the target value versus the configuration, the configurations are placed in progressive order of influence. In utilizing the arbitrary scaling method, the determination of the Target Variable value relationship to the ordinal characteristic is forced into the optimization analysis, as described in more detail below. In this case, the general optimization model described in Equation 1.0 can be modified to accommodate a potential non-linear relationship. In another embodiment, the ordinal measurement can be scaled as discussed above and then regressed against the data to make the plot of Target Variable versus the ordinal characteristic as nearly linear as possible. In a further embodiment, a combination of the foregoing embodiments can be utilized to make use of the available expert experience and the quality and quantity of available data.
  • Once method 100 establishes a relationship, method 100 may develop a measurement scale at step 110. For instance, a single characteristic may take the form of five different physical configurations. The configuration resulting in the lowest effect on variation in the Target Variable may be given a scale-setting score. This score may be set to any non-zero value; in this example, the value assigned is 1.0. The configuration with the next-lowest influence on variation in the Target Variable is then assigned a value that is a function of the scale-setting value, as determined by a consensus function. The consensus function is arrived at by using the measurement scale for ordinal characteristics as described above. This is repeated until a scale for the applicable physical configurations is developed.
  • At step 112, method 100 uses the classification system developed at step 110 to collect data. The data collection process can begin with the development of data input forms and instructions. In many cases, data collection training seminars are conducted to assist in data collection. Training seminars may improve the consistency and accuracy of data submissions. A consideration in data collection may involve defining the analyzed boundaries of the measureable system, such as an industrial facility. Data input instructions may provide definitions of which measureable systems' costs and staffing are to be included in data collection. The data collection input forms may provide worksheets for many of the reporting categories to aid in the preparation of data for entry. The data that is collected can originate from several sources, including existing historical data, newly gathered historical data from existing facilities and processes, simulation data from model(s), or synthesized experiential data derived from experts in the field.
  • At step 114, method 100 may validate the data. Many data checks can be programmed at step 114 of method 100 such that method 100 accepts data only when it passes the validation check or the check is over-ridden with appropriate authority. Validation routines may be developed to validate the data as it is collected. The validation routines can take many forms, including: (1) a range of acceptable data is specified, or a ratio of one data point to another is specified; (2) where applicable, data is cross-checked against all other similar data submitted to determine outlier data points for further investigation; and (3) data is cross-referenced against any previous data submissions and the judgment of experts. After all input data validation is satisfied, the data is examined relative to all the data collected in a broad "cross-study" validation. This "cross-study" validation may highlight further areas requiring examination and may result in changes to input data.
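  • A minimal sketch of such validation routines, assuming hypothetical record fields; the range, ratio, and outlier checks mirror forms (1) and (2) above:

```python
import numpy as np

def validate(record, peers):
    """Return a list of validation issues; an empty list means the record
    passes (otherwise it may be accepted only with an authorized override)."""
    issues = []
    if not (0.0 <= record["efor"] <= 100.0):                 # (1) range check
        issues.append("EFOR outside 0-100%")
    if record["actual_mx"] > 10 * record["standard_mx"]:     # (1) ratio check
        issues.append("Act Mx implausibly high versus Std Mx")
    peer_vals = np.array([p["actual_mx"] for p in peers])
    if peer_vals.size > 1 and peer_vals.std() > 0:           # (2) outlier check
        z = abs(record["actual_mx"] - peer_vals.mean()) / peer_vals.std()
        if z > 3:
            issues.append("Act Mx is an outlier versus similar submissions")
    return issues
```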
  • At step 116, method 100 may develop constraints for use in solving the comparative analysis model. These constraints could include constraints on the model coefficient values, such as minimum or maximum values, constraints on groupings of values, or any other mathematical constraint forms. One method of determining the constraints is shown in FIG. 12, which is discussed in more detail below. Afterwards, at step 118, method 100 solves the comparative analysis model by applying an optimization method of choice, such as linear regression, to the collected data to determine the optimum set of factors relating the Target Variable to the characteristics. In one embodiment, the generalized reduced gradient non-linear optimization method can be used. However, method 100 may utilize many other optimization methods.
  • At step 120, method 100 may determine the developed characteristics. Developed characteristics are the result of any mathematical relationship that exists between one or more first principle characteristics and may be used to express the information represented by that mathematical relationship. In addition, if a linear general optimization model is utilized, then nonlinear information in the characteristics can be captured in developed characteristics. Determination of the form of the developed characteristics is accomplished through discussion with experts, modelling expertise, and trial and refinement. At step 122, method 100 applies the optimization model to the primary first principle characteristics and the developed characteristics to determine the model coefficients. In one embodiment, if developed characteristics are utilized, step 116 through step 122 may be repeated in an iterative fashion until method 100 achieves the desired level of model accuracy.
  • FIG. 11 is a flow chart of an embodiment of a method 200 for determining primary first principle characteristics 106 as described in FIG. 10. At step 202, method 200 determines the effect of each characteristic on the variation in the Target Variable between measureable systems. In one embodiment, the method may be iteratively repeated, and a comparative analysis model can be used to determine the effect of each characteristic. In another embodiment, method 200 may use a correlation matrix. The effect of each characteristic may be expressed as a percentage of the total variation in the Target Variable in the initial data set. At step 204, method 200 may rank each characteristic from highest to lowest based on its effect on the Target Variable. Persons of ordinary skill in the art are aware that method 200 could use other ranking criteria.
  • At step 206, the characteristics may be grouped into one or more categories. In one embodiment, the characteristics are grouped into three categories. The first category contains characteristics that affect the Target Variable at a percentage less than a lower threshold (for example, about five percent). The second category may comprise one or more characteristics with a percentage between the lower threshold and a second threshold (for example, between about 5% and about 20%). The third category may comprise one or more characteristics with a percentage over the second threshold (for example, about 20%). Other embodiments of method 200 at step 206 may include additional or fewer categories and/or different ranges.
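  • A minimal sketch of this grouping, using the illustrative ~5% and ~20% thresholds (descriptive labels are used here rather than category numbers, since the example tables below number their categories in the opposite order):

```python
def categorize(effect_pct, lower=5.0, upper=20.0):
    """Group a characteristic by its percentage effect on Target Variable
    variation between measureable systems."""
    if effect_pct < lower:
        return "minor"      # candidates for removal at step 208
    if effect_pct <= upper:
        return "mid-level"  # examined for relationships at step 210
    return "major"

print([categorize(p) for p in (2.0, 12.5, 35.0)])
# ['minor', 'mid-level', 'major']
```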
  • At step 208, method 200 may remove characteristics from the list of characteristics whose average effect on Target Variable variation is below a specific threshold. For example, method 200 could remove the characteristics in the first category described above in step 206 (e.g., characteristics with a percentage of less than about five percent). Persons of ordinary skill in the art are aware that other thresholds could be used, and multiple categories could be removed from the list of characteristics. In one embodiment, if characteristics are removed, the process may repeat at step 202 above. In another embodiment, no characteristics are removed from the list until determining whether another co-variant relationship exists, as described in step 212 below.
  • At step 210, method 200 determines the relationships between the mid-level characteristics. Mid-level characteristics are characteristics that have a certain level of effect on the Target Variable but individually do not influence the Target Variable in a significant manner. Using the illustrative categories, the characteristics in the second category are mid-level characteristics. Example relationships between characteristics are co-variant, dependent, and independent. A co-variant relationship occurs when modifying one characteristic causes the Target Variable to vary, but only when another characteristic is present. For instance, in the scenario where characteristic "A" is varied, which causes the Target Variable to vary, but only when characteristic "B" is present, then "A" and "B" have a co-variant relationship. A dependent relationship occurs when a characteristic is a derivative of or directly related to another characteristic. For instance, when characteristic "A" is only present when characteristic "B" is present, then "A" and "B" have a dependent relationship. Characteristics that are neither co-variant nor dependent are categorized as having independent relationships.
  • At step 212, method 200 may remove dependencies and high correlations in order to resolve characteristics displaying dependence with each other. There are several potential methods for resolving dependencies. Some examples include: (i) grouping multiple dependent characteristics into a single characteristic, (ii) removing all but one of the dependent characteristics, and (iii) keeping one of the dependent characteristics and creating a new characteristic that is the difference between the kept characteristic and the other characteristics. After method 200 removes the dependencies, the process may be repeated from step 202. In one embodiment, if the difference variable is insignificant, it can be removed from the analysis in the repeated step 208.
  • At step 214, method 200 may analyze the characteristics to determine the extent of the inter-relationships. In one embodiment, if any of the previous steps resulted in repeating the process, the repetition should be conducted prior to step 214. In some embodiments, the process may be repeated multiple times before continuing to step 214. At step 216, the characteristics that result in less than a minimum threshold change in the impact on Target Variable variation caused by another characteristic are dropped from the list of potential characteristics. An illustrative threshold could be about 10 percent. For instance, if the variation in the Target Variable caused by characteristic "A" is increased when characteristic "B" is present, the percent increase in the Target Variable variation caused by the presence of characteristic "B" must be estimated. If the variation of characteristic "B" is estimated to increase the variation in the Target Variable by less than about 10% of the increase caused by characteristic "A" alone, characteristic "B" can be eliminated from the list of potential characteristics. Characteristic "A" can also then be deemed to have an insignificant impact on the Target Variable. The remaining characteristics are deemed to be the primary characteristics.
  • FIG. 12 is a flow chart of an embodiment of a method 300 for developing constraints for use in solving the comparative analysis model as described in step 116 in FIG. 10. Constraints are developed on the model coefficients at step 302. In other words, constraints are any limits placed on model coefficients. For example, a model coefficient may have a constraint of a maximum of about a 20% effect on contributing to a target variable. At step 304, method 300's objective function, as described below, is optimized to determine an initial set of model coefficients. At step 306, method 300 may calculate the percent contribution of each characteristic to the Target Variable. There are several methods of calculating the percent contribution of each characteristic, such as the "Average Method" described in U.S. Pat. No. 7,233,910.
  • With the individual percent contributions developed, method 300 proceeds to step 308, where each percent contribution is compared against expert knowledge. Domain experts may have an intuitive or empirical feel for the relative impacts of key characteristics to the overall target value. The contribution of each characteristic is judged against this expert knowledge. At step 310, method 300 may make a decision about the acceptability of the individual contributions. If the contribution is found to be unacceptable the method 300 continues to step 312. If the contribution is found to be acceptable the method 300 continues to step 316.
  • At step 312, method 300 makes a decision on how to address or handle the unacceptable results of the individual contributions. At step 312, the options may include adjusting the constraints on the model coefficients to affect a solution or deciding that the chosen characteristic set cannot be helped through constraint adjustment. If the user decides that constraint adjustment cannot help, then method 300 proceeds to step 316. If the decision is made to achieve acceptable results through constraint adjustment, then method 300 continues to step 314. At step 314, the constraints are adjusted to increase or decrease the impact of individual characteristics in an effort to obtain acceptable results from the individual contributions. Method 300 then continues to step 302 with the revised constraints. At step 316, peer and expert review of the developed model coefficients may be performed to determine their acceptability. If the factors pass the expert and peer review, method 300 continues to step 326. If the model coefficients are found to be unacceptable, method 300 continues to step 318.
  • At step 318, method 300 may obtain additional approaches and suggestions for modification of the characteristics by working with experts in the particular domain. This may include the creation of new or updated developed characteristics, or the addition of new or updated first principle characteristics to the analysis data set. At step 320, a determination is made as to whether data exists to support the investigation of the approaches and suggestions for modification of the characteristics. If the data exists, method 300 proceeds to step 324. If the data does not exist, method 300 proceeds to step 322. At step 322, method 300 collects additional data in an effort to make the corrections required to obtain a satisfactory solution. At step 324, method 300 revises the set of characteristics in view of the new approaches and suggestions. At step 326, method 300 may document the reasoning behind the selection of characteristics. The documentation can be used in explaining results when the model coefficients are used.
  • FIG. 13 is a schematic diagram of an embodiment of a model coefficient matrix 10 for determining model coefficients as described in FIGS. 10-12. While model coefficient matrix 10 can be expressed in a variety of configurations, in this particular example, the model coefficient matrix 10 may be constructed with the first principle characteristics 12 and first developed characteristics 14 on one axis, and the different facilities 16 for which data has been collected on the other axis. For each first principle characteristic 12 at each facility 16, there is the actual data value 18. For each first principle characteristic 12 and developed characteristic 14, there is the model coefficient 22 that will be computed with an optimization model. The constraints 20 limit the range of the model coefficients 22. Constraints can be minimum or maximum values, or other mathematical functions or algebraic relationships. Moreover, constraints 20 can be grouped and further constrained. Additional constraints 20 on facility data, and on relationships between data points similar to those used in the data validation step, can also be employed; constraints 20 can employ any mathematical relationship on the input data. In one embodiment, the constraints 20 to be satisfied during optimization apply only to the model coefficients.
  • The Target Variable (actual) column 24 comprises actual values of the Target Variable as measured for each facility. The Target Variable (predicted) column 26 comprises the values of the Target Variable as calculated using the determined model coefficients. The error column 28 comprises the error values for each facility as determined by the optimization model. The error sum 30 is the summation of the error values in error column 28. The optimization analysis, which comprises the Target Variable equation and an objective function, solves for the model coefficients to minimize the error sum 30. In the optimization analysis, the model coefficients αj are computed to minimize the error εi over all facilities. The non-linear optimization process determines the set of model coefficients that minimizes this equation for a given set of first principle characteristics, constraints, and a selected value of p.
  • The Target Variable may be computed as a function of the characteristics and the to-be-determined model coefficients. The Target Variable equation is expressed as:
  • Target Variable equation:

$$TV_i = \sum_j \alpha_j \, f(\text{characteristic})_{ij} + \varepsilon_i$$
  • where TVi represents the measured Target Variable for facility i; the characteristic variable represents a first principle characteristic; f is either a value of the first principle characteristic or a developed principle characteristic; i represents the facility number; j represents the characteristic number; αj represents the jth model coefficient, which corresponds to the jth principle characteristic; and εi represents the error of the model's TV prediction, defined as the actual Target Variable value minus the predicted Target Variable value for facility i.
  • The objective function has the general form:
  • Objective Function:

$$\min \left[ \sum_{i=1}^{m} \left| \varepsilon_i \right|^p \right]^{1/p}, \quad p \ge 1$$
  • where i is the facility; m represents the total number of facilities; and p represents a selected value.
  • One common usage of the general form of the objective function is to minimize the sum of the absolute errors by using p=1, as shown below:
  • Objective Function:

$$\min \left[ \sum_{i=1}^{m} \left| \varepsilon_i \right| \right]$$
  • Another common usage of the general form of objective function is using the least squares version corresponding to p=2 as shown below:
  • Objective Function:

$$\min \left[ \sum_{i=1}^{m} \varepsilon_i^2 \right]^{1/2}$$
  • Since the analysis involves a finite number of first principle characteristics and the objective function form corresponds to a mathematical norm, the analysis results are not dependent on the specific value of p. The analyst can select a value of p based on the specific problem being solved or on additional statistical applications of the objective function. For example, p=2 is often used because of its statistical application in measuring data and Target Variable variation and Target Variable prediction error.
  • A third form of the objective function is to solve for the simple sum of errors squared as given in Equation 5 below.
  • Objective Function:

$$\min \left[ \sum_{i=1}^{m} \varepsilon_i^2 \right]$$
  • While several forms of the objective function have been shown, other forms of the objective function for specialized purposes could also be used. Under the optimization analysis, the determined model coefficients are those that result in the least difference between the predicted and actual values of the Target Variable, after the model iteratively moves through each facility and characteristic such that each potential model coefficient, subject to the constraints, is multiplied by the data value for the corresponding characteristic and summed for the particular facility.
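  • A minimal sketch of this constrained fit, assuming the least-squares form (p = 2) and simple min/max bounds on each coefficient; SciPy's SLSQP solver stands in here for the generalized reduced gradient method named above, and the data and bounds are taken from the Cat Cracker example discussed below (Tables 6 and 7):

```python
import numpy as np
from scipy.optimize import minimize

# Facility data (Table 6); columns: Reactor Design, Staff Training,
# Staffing Levels, Emission Equipment, Feed Capacity, Maintenance Practices.
X = np.array([[1.50, 30, 50, 1, 45, 3.74],
              [1.35, 25, 28, 1, 40, 2.30],
              [1.10, 60,  8, 0, 30, 1.93],
              [2.10, 35, 23, 1, 50, 3.74],
              [1.00, 25,  5, 0, 25, 2.63]])
tv = np.array([3.20, 3.33, 2.75, 4.26, 2.32])  # actual Target Variable

# Coefficient constraint ranges (Table 7), in the same column order.
bounds = [(-3, 0), (-3, 1), (-1, 40), (-1, 0), (0, 4), (0, 4)]

def objective(alpha):
    residual = tv - X @ alpha      # epsilon_i = actual - predicted
    return np.sum(residual ** 2)   # p = 2: sum of squared errors

result = minimize(objective, x0=np.zeros(6), bounds=bounds, method="SLSQP")
print(result.x)              # constrained model coefficients
print(objective(result.x))   # error sum
```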
  • For illustrative purposes, a more specific example of the one or more embodiments used to determine model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12 is discussed below. A Cat Cracker is a processing unit found in most petroleum refineries. A Cat Cracker cracks long molecules into shorter molecules within the gasoline boiling range and lighter. The process is typically conducted at relatively high temperatures in the presence of a catalyst. In the process of cracking the feed, coke is produced and deposited on the catalyst. The coke is burned off the catalyst to recover heat and to reactivate the catalyst. The Cat Cracker has several main sections: Reactor, Regenerator, Main Fractionator, and Emission Control Equipment. Refiners may desire to compare the performance of their Cat Crackers to the performance of Cat Crackers operated by their competitors. The example of comparing different Cat Crackers is for illustrative purposes and may not represent the actual results of applying this methodology to Cat Crackers or any other industrial facility. Moreover, the Cat Cracker example is but one example of many potential embodiments used to compare measurable systems.
  • Using FIG. 10 as an example, method 100 starts at step 102 and determines that the Target Variable will be “Cash Operating Costs” or “Cash OPEX” in a Cat Cracker facility. At step 104, the first principle characteristics that may affect Cash Operating Costs for a Cat Cracker may include one or more of the following: (1) feed quality; (2) regenerator design; (3) staff experience; (4) location; (5) age of unit; (6) catalyst type; (7) feed capacity; (8) staff training; (9) trade union; (10) reactor temperature; (11) duplicate equipment; (12) reactor design; (13) emission control equipment; (14) main fractionator design; (15) maintenance practices; (16) regenerator temperature; (17) degree of feed preheat; (18) staffing level.
  • To determine the primary characteristics, method 100 may at step 106 determine the effects of the first principle characteristics. In one embodiment, method 100 may implement step 106 by determining primary characteristics as shown in FIG. 11. In FIG. 11, at step 202, method 200 may assign a variation percentage to each characteristic. At step 204, method 200 may rate and rank the characteristics from the Cat Cracker example. Table 1 shows the relative influence and ranking for at least some of the example characteristics:
  • TABLE 1

    Characteristics              Category  Comment
    Feed Quality                 3         Several aspects of feed quality are key
    Catalyst Type                3         Little effect on costs, large impact on yields
    Reactor Design               1         Several key design factors are key
    Regenerator Design           3         Several design factors are key
    Staffing Levels              2
    Feed Capacity                1         Probably the single highest impact
    Emission Control Equipment   2         Wet versus dry is a key difference
    Staff Experience             3         Little effect on costs
    Staff Training               2         Little effect on costs
    Main Fractionator Design     3         Little effect on costs, large impact on yields
    Location                     3         Previous data analysis shows this characteristic has little effect on costs
    Trade Union                  3         Previous data analysis shows this characteristic has little effect on costs
    Maintenance Practices        2         Effect on reliability and "lost opportunity cost"
    Age of Unit                  2         Previous data analysis shows this characteristic has little effect on costs
    Reactor Temperature          3         Little effect on costs
    Regenerator Temperature      3         Little effect on costs
    Duplicate Equipment          3         Little effect on costs

    In this embodiment, the categories are defined as shown in Table 2:
  • TABLE 2

                                            Percent of Average Variation in the
                                            Target Variable Between Facilities
    Category 1 (Major Characteristics)      >20%
    Category 2 (Midlevel Characteristics)   5-20%
    Category 3 (Minor Characteristics)      <5%

    Other embodiments could have any number of categories, and the percentage values that delineate between the categories may be altered in any manner.
  • Based on the above example rankings, method 200 groups the characteristics according to category at step 206. At step 208, method 200 may discard characteristics in Category 3 as being minor. Method 200 may analyze characteristics in Category 2 to determine the type of relationship they exhibit with other characteristics at step 210. Method 200 may classify each characteristic as exhibiting either co-variance, dependence, or independence at step 212. Table 3 is an example of classifying the characteristics of the Cat Cracker facility:
  • TABLE 3

    Classification of Category 2 Characteristics Based on Type of Relationship

                                Type of        If Co-variant or Dependent,
    Category 2 characteristics  Relationship   Related Partner(s)
    Staffing Levels             Independent
    Emission Equipment          Co-variant     Maintenance Practices
    Maintenance Practices       Co-variant     Staff Experience
    Age of Unit                 Dependent      Staff Training
    Staff Training              Co-variant     Maintenance Practices
  • At step 214, method 200 may analyze the degree of the relationship of these characteristics. Using this embodiment for the Cat Cracker example: Staffing Levels, which is classified as having an independent relationship, may stay in the analysis process. Age of Unit is classified as having a dependent relationship with Staff Training. A dependent relationship means Age of Unit is a derivative of Staff Training or vice versa. After further consideration, method 200 may decide to drop the Age of Unit characteristic from the analysis while the broader characteristic of Staff Training remains in the analysis. The three characteristics classified as having a co-variant relationship, Staff Training, Emission Equipment, and Maintenance Practices, must be examined to determine the degree of co-variance.
  • Method 200 may determine that the change in Cash Operating Costs caused by the variation in Staff Training may be modified by more than 30% by the variation in Maintenance Practices. Along the same lines, the change in Cash Operating Costs caused by the variation in Emission Equipment may be modified by more than 30% by the variation in Maintenance Practices causing Maintenance Practices, Staff Training and Emission Equipment to be retained in the analysis process. Method 200 may also determine that the change in Cash Operating Costs caused by the variation in Maintenance Practice is not modified by more than the selected threshold of 30% by the variation in Staff Experience causing Staff Experience to be dropped from the analysis.
  • Continuing with the Cat Cracker example and returning to FIG. 10, method 100 categorizes the remaining characteristics as continuous, ordinal or binary type measurement in step 108 as shown in Table 4.
  • TABLE 4

    Classification of Remaining Characteristics Based on Measurement Type

    Remaining characteristics   Measurement Type
    Staffing Levels             Continuous
    Emission Equipment          Binary
    Maintenance Practices       Ordinal
    Staff Training              Continuous

    In this Cat Cracker example, Maintenance Practices may have an "economy of scale" relationship with Cash Operating Costs (the Target Variable): the Target Variable improves at a decreasing rate as Maintenance Practices improve. Based on historical data and experience, a complexity factor is assigned to reflect the economy of scale. In this particular example, a factor of 0.6 is selected. The complexity factor is often estimated to follow a power curve relationship. Using Cash Operating Costs as an example of a characteristic that typically exhibits an "economy of scale," the effect can be described with the following:
  • $$\text{Target Variable}_{\text{facility A}} = \left( \frac{\text{Capacity}_{\text{facility A}}}{\text{Capacity}_{\text{facility B}}} \right)^{\text{Complexity Factor}} \times \text{Target Variable}_{\text{facility B}}$$
  • At step 110, method 100 may develop a data collection classification system. In this example, a questionnaire may be developed to measure how many of ten key Maintenance Practices are in regular use at each facility. A system of definitions may be used such that the data is collected in a consistent manner. The data in terms of number of Maintenance Practices in regular use is converted to a Maintenance Practices Score using the 0.6 factor and “economy of scale” relationship as illustrated in Table 5.
  • TABLE 5

    Maintenance Practices Score

    Number Maintenance         Maintenance
    Practices In Regular Use   Practices Score
     1                         1.00
     2                         1.52
     3                         1.93
     4                         2.30
     5                         2.63
     6                         2.93
     7                         3.21
     8                         3.48
     9                         3.74
    10                         3.98
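  • As a check, the Table 5 scores follow a power curve with the 0.6 complexity factor, i.e., score = n^0.6; a minimal sketch:

```python
# Maintenance Practices Score as a power curve with complexity factor 0.6.
for n in range(1, 11):
    print(n, round(n ** 0.6, 2))  # 1 -> 1.00, 2 -> 1.52, ..., 10 -> 3.98
```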
  • For illustrative purposes with respect to the Cat Cracker example, at step 112, method 100 may collect data and at step 114, method 100 may validate the data as shown in Table 6:
  • TABLE 6

    Cat Cracker Data

                 Reactor  Staff     Staffing   Emission   Feed      Maintenance  Cash Operating
                 Design   Training  Levels     Equipment  Capacity  Practices    Cost
    Unit of      Score    Man       Number     Yes = 1    Barrels   Score        Dollars
    Measurement           Weeks     of People  No = 0     per Day                per Barrel
    Facility #1  1.50     30        50         1          45        3.74         3.20
    Facility #2  1.35     25        28         1          40        2.30         3.33
    Facility #3  1.10     60         8         0          30        1.93         2.75
    Facility #4  2.10     35        23         1          50        3.74         4.26
    Facility #5  1.00     25         5         0          25        2.63         2.32
  • Constraint ranges were developed for each characteristic by an expert team to control the model so that the results are within a reasonable range of solutions as shown in Table 7.
  • TABLE 7

    Cat Cracker Model Constraint Ranges

             Reactor  Staff     Staffing  Emission   Maintenance  Feed
             Design   Training  Levels    Equipment  Practices    Capacity
    Minimum  -3.00    -3.00     -1.0      -1.0       0.0          0.0
    Maximum   0.00     1.00     40         0.0       4.0          4.0
  • At step 118, method 100 produces the results of the model optimization runs, which are shown below in Table 8.
  • TABLE 8

    Model Results

    Characteristics         Equivalency Factors
    Reactor Design          -0.9245
    Staff Training          -0.0021
    Staffing Levels         -0.0313
    Emission Equipment       0.0000
    Maintenance Practices    0.0000
    Feed Capacity            0.1382
  • The model indicates that Emission Equipment and Maintenance Practices are not significant drivers of variations in Cash Operating Costs between different Cat Crackers. The model may indicate this by finding values of about zero for the model coefficients of these two characteristics. Reactor Design, Staff Training, Staffing Levels, and Feed Capacity are found to be significant drivers. In the case of both Emission Equipment and Maintenance Practices, experts may agree that these characteristics are not significant in driving variation in Cash Operating Cost, or the experts may determine that a previously unidentified dependence effect fully compensates for the impact of Emission Equipment and Maintenance Practices.
  • FIG. 14 is a schematic diagram of an embodiment of a model coefficient matrix 10 with respect to the Cat Cracker for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12. A sample model configuration for the illustrative Cat Cracker example is shown in FIG. 14. The data 18, actual values 24, and the resulting model coefficients 22 are shown. In this example, the error sum 30 is relatively minimal, so developed characteristics are not necessary in this instance. In other examples, an error sum of a differing value may be determined to be significant, resulting in having to determine developed characteristics.
  • For additional illustrative purposes, another example for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12 is discussed below. This embodiment relates to pipelines and tank farm terminals. Pipelines and tank farms are assets used by industry to store and distribute liquid and gaseous feed stocks and products. The example is illustrative of the development of equivalence factors for: (1) pipelines and pipeline systems; (2) tank farm terminals; and (3) any combination of pipelines, pipeline systems, and tank farm terminals. The example is for illustrative purposes and may not represent the actual results of applying this methodology to any particular pipeline and tank farm terminal, or any other industrial facility.
  • Using FIG. 10 as an example, method 100, at step 102, selects the desired Target Variable to be "Cash Operating Costs" or "Cash OPEX" in a pipeline asset. For step 104, the pipe related first principle characteristics that may affect Cash Operating Costs may include: (1) type of fluid transported; (2) average fluid density; (3) number of input and output stations; (4) total installed capacity; (5) total main pump driver kilowatt (KW); (6) length of pipeline; (7) altitude change in pipeline; (8) total utilized capacity; (9) pipeline replacement value; and (10) pump station replacement value. The tank related first principle characteristics that may affect Cash Operating Costs may include: (1) fluid class; (2) number of tanks; (3) total number of valves in terminal; (4) total nominal tank capacity; (5) annual number of tank turnovers; and (6) tank terminal replacement value.
  • To determine the primary first principle characteristics, method 100 determines the effect of the first principle characteristics at step 106. In one embodiment, method 100 may implement step 106 by determining primary characteristics as shown in FIG. 11. In FIG. 11, at step 202, method 200 may assign an impact percentage to each characteristic. This analysis shows that the pipeline replacement value and tank terminal replacement value, while used widely in the industry, are characteristics that are dependent on more fundamental characteristics. Accordingly, in this instance, those values are removed from consideration for primary first principle characteristics. At step 204, method 200 may rate and rank the characteristics. Table 9 shows the relative impact and ranking for the example characteristics:
  • TABLE 9

    Characteristics                      Category  Comment
    Type of Fluid Transported            2         products and crude
    Average Fluid Density                3         affects power consumption
    Number of Input and Output Stations  1         more stations means more cost
    Total Installed Capacity             3         surprisingly minor effect
    Total Main Pump Driver KW            1         power consumption
    Length of Pipeline                   3         no effect
    Altitude Change in Pipeline          3         small effect, related to KW
    Total Utilized Capacity              3         no effect
    Pipeline Replacement Value           3         industry standard has no effect
    Pump Station Replacement Value       3         industry standard has little effect
    Fluid Class                          3         no effect
    Number of Tanks                      2         important tank farm parameter
    Total Number of Valves in Terminal   3         no effect
    Total Nominal Tank Capacity          2         important tank farm parameter
    Annual Number of Tank Turnovers      3         no effect
    Tank Terminal Replacement Value      3         industry standard has little effect
  • In this embodiment, the categories are defined as shown in Table 10:
  • TABLE 10

    Category                                  Percent of Average Variation in the
                                              Target Variable Between Facilities
    Category 1 (Major Characteristics)        >15%
    Category 2 (Midlevel Characteristics)     7-15%
    Category 3 (Minor Characteristics)        <7%
    Other embodiments could have any number of categories, and the percentage values that delineate the categories may be altered in any manner.
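  • As a concrete illustration of the Table 10 thresholds, the sketch below assigns each characteristic to a category from its variation percentage. The percentages shown are hypothetical placeholders, not values from the pipeline and tank farm example.

    def categorize(variation_pct):
        """Map a characteristic's average variation in the target variable
        between facilities to a category per Table 10."""
        if variation_pct > 15.0:
            return 1  # Major characteristic
        if variation_pct >= 7.0:
            return 2  # Midlevel characteristic
        return 3      # Minor characteristic

    # Hypothetical variation percentages, for illustration only.
    variations = {"Number of Input and Output Stations": 20.0,
                  "Type of Fluid Transported": 10.0,
                  "Length of Pipeline": 2.0}
    categories = {name: categorize(v) for name, v in variations.items()}
    # {'Number of Input and Output Stations': 1, 'Type of Fluid Transported': 2,
    #  'Length of Pipeline': 3}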
  • Based on the above example rankings, method 200 groups the characteristics according to category at step 206. At step 208, method 200 discards the characteristics in Category 3 as minor. Method 200 may further analyze the characteristics in Category 2 to determine the type of relationship they exhibit with other characteristics at step 210. Method 200 classifies each characteristic as exhibiting co-variance, dependence, or independence, as shown below in Table 11:
  • TABLE 11

    Classification of Category 2 Characteristics Based on Type of Relationship

    Category 2 Characteristic               Type of         If Co-variant or Dependent,
                                            Relationship    Related Partner(s)
    Type of Fluid Transported               Independent     (none)
    Number of Input and Output Stations     Independent     (none)
    Total Main Pump Driver KW               Independent     (none)
    Number of Tanks                         Independent     (none)
    Total Nominal Tank Capacity             Independent     (none)
  • At step 212, method 200 may resolve the dependent characteristics. In this example, there are no dependent characteristics to resolve. At step 214, method 200 may analyze the degree of co-variance among the remaining characteristics and determine that no characteristics are dropped. Method 200 may deem the remaining variables primary characteristics at step 218.
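  • One way to implement the co-variance screen of step 214 is to compute pairwise correlations among the Category 2 characteristics across facilities and flag highly correlated pairs; characteristics that survive the screen are retained as primary. The sketch below reflects that assumption rather than a test prescribed by this disclosure, and the 0.8 threshold is hypothetical.

    import numpy as np

    def covariant_pairs(X, names, threshold=0.8):
        """Return characteristic pairs whose absolute correlation across
        facilities (columns of X are characteristics) exceeds the threshold."""
        corr = np.corrcoef(X, rowvar=False)
        pairs = []
        for i in range(len(names)):
            for j in range(i + 1, len(names)):
                if abs(corr[i, j]) > threshold:
                    pairs.append((names[i], names[j], float(corr[i, j])))
        return pairs

    # In this example every Category 2 characteristic proved independent,
    # so the screen would return an empty list and nothing is dropped.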
  • Continuing with the pipeline and tank farm example and returning to FIG. 10, method 100 may categorize each remaining characteristic by measurement type (continuous, ordinal, or binary) at step 108, as shown in Table 12; an encoding sketch follows the table.
  • TABLE 12

    Classification of Remaining Characteristics Based on Measurement Type

    Remaining Characteristic                Measurement Type
    Type of Fluid Transported               Binary
    Number of Input and Output Stations     Continuous
    Total Main Pump Driver KW               Continuous
    Number of Tanks                         Continuous
    Total Nominal Tank Capacity             Continuous
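  • The measurement type assigned in Table 12 determines how each characteristic enters the model: binary measurements take one of two coded values, while continuous measurements enter as numeric quantities. A minimal encoding sketch follows; the helper names are hypothetical.

    MEASUREMENT_TYPES = {
        "Type of Fluid Transported": "binary",       # 1 = product, 2 = crude
        "Number of Input and Output Stations": "continuous",
        "Total Main Pump Driver KW": "continuous",
        "Number of Tanks": "continuous",
        "Total Nominal Tank Capacity": "continuous",
    }

    def encode(name, raw_value):
        """Validate binary codes; pass continuous values through as floats."""
        if MEASUREMENT_TYPES[name] == "binary":
            if raw_value not in (1, 2):
                raise ValueError("%s: expected code 1 or 2, got %r" % (name, raw_value))
            return raw_value
        return float(raw_value)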
  • At step 110, method 100 may develop a data collection classification system. In this example, a questionnaire may be developed to collect information from participating facilities on the measurements above. At step 112, method 100 may collect the data, and at step 114, method 100 may validate the data, as shown in Tables 13 and 14; a validation sketch follows Table 14.
  • TABLE 13

    Pipeline and Tank Farm Data

    Characteristic:     Type of Fluid    Number of Input        Total Main      Number of    Total Nominal
    Measurement units:  (1 = Product,    and Output Stations    Pump Driver     Tanks        Tank Capacity
                        2 = Crude)       (Count)                (KW)            (Count)      (KMT)

    Facility 1          1                8                      74.0            34           1,158
    Facility 2          2                16                     29.0            0            0
    Facility 3          1                2                      5.8             7            300
    Facility 4          1                5                      4.9             6            490
    Facility 5          1                2                      5.4             8            320
    Facility 6          2                2                      2.5             33           191
    Facility 7          1                3                      8.2             0            0
    Facility 8          2                2                      8.7             0            0
    Facility 9          1                3                      15.0            10           180
    Facility 10         1                9                      12.0            22           860
    Facility 11         1                4                      20.0            5            206
    Facility 12         2                9                      9.3             0            0
    Facility 13         2                12                     6.2             0            0
  • TABLE 14

    Pipeline and Tank Farm Data (continued)

    Characteristic:     Type of Fluid    Number of Input        Total Main      Number of    Total Nominal
    Measurement units:  (1 = Product,    and Output Stations    Pump Driver     Tanks        Tank Capacity
                        2 = Crude)       (Count)                (KW)            (Count)      (KMT)

    Facility 14         1                5                      41.4            19           430
    Facility 15         2                8                      8.2             0            0
    Facility 16         1                8                      96.8            31           1,720
    Facility 17         1                2                      15.0            8            294
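  • Validation at step 114 can be as simple as checking each collected measurement against its expected range before admitting it to the data set. The sketch below applies hypothetical plausibility ranges (not part of the example) to the Table 13 record for Facility 1.

    # Hypothetical plausibility ranges for validating questionnaire responses.
    VALID_RANGES = {
        "Type of Fluid": (1, 2),                    # coded 1 = product, 2 = crude
        "Input/Output Stations": (0, 50),           # count
        "Total Main Pump Driver KW": (0.0, 500.0),
        "Number of Tanks": (0, 100),                # count
        "Total Nominal Tank Capacity": (0, 5000),   # KMT
    }

    def validate_facility(record):
        """Return a list of validation problems for one facility's record."""
        problems = []
        for field, (lo, hi) in VALID_RANGES.items():
            value = record.get(field)
            if value is None:
                problems.append("missing %s" % field)
            elif not (lo <= value <= hi):
                problems.append("%s=%s outside [%s, %s]" % (field, value, lo, hi))
        return problems

    facility_1 = {"Type of Fluid": 1, "Input/Output Stations": 8,
                  "Total Main Pump Driver KW": 74.0, "Number of Tanks": 34,
                  "Total Nominal Tank Capacity": 1158}
    assert validate_facility(facility_1) == []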
  • In step 116, the expert may develop constraints on the model coefficients, as shown below in Table 15.
  • TABLE 15

                Type of    Number of Input        Total Main     Number of    Total Nominal
                Fluid      and Output Stations    Pump Driver    Tanks        Tank Capacity

    Minimum     0          0                      0              134          0
    Maximum     2000       700                    500            500          100
  • At step 116, method 100 produces the results of the model optimization runs, which are shown below in Table 16; an optimization sketch follows the table.
  • TABLE 16

    Model Results

    Characteristic                          Equivalency Factor
    Type of Fluid Transported               1301.1
    Number of Input and Output Stations     435.4
    Total Main Pump Driver KW               170.8
    Number of Tanks                         134.0
    Total Nominal Tank Capacity             6.11
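  • For illustration, the optimization of step 116 can be posed as a bounded least-squares problem: find coefficients (equivalency factors) within the expert constraints of Table 15 that best reproduce each facility's Cash OPEX from its Table 13-14 characteristics. The sketch below uses SciPy's bounded linear least squares; it is one plausible formulation rather than the solver prescribed by this disclosure, and the Cash OPEX vector is hypothetical because the example does not publish the facilities' actual costs.

    import numpy as np
    from scipy.optimize import lsq_linear

    # Characteristic data for Facilities 1-3 (subset of Table 13):
    # [type of fluid, input/output stations, pump driver KW, tanks, capacity KMT]
    X = np.array([[1, 8, 74.0, 34, 1158],
                  [2, 16, 29.0, 0, 0],
                  [1, 2, 5.8, 7, 300]], dtype=float)

    # Hypothetical Cash OPEX values; not disclosed in the example.
    y = np.array([95000.0, 45000.0, 12000.0])

    # Expert constraints on the coefficients (Table 15).
    lower = np.array([0, 0, 0, 134, 0], dtype=float)
    upper = np.array([2000, 700, 500, 500, 100], dtype=float)

    result = lsq_linear(X, y, bounds=(lower, upper))
    equivalency_factors = result.x  # analogous to the Table 16 model results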
  • In step 118, method 100 may determine that there is no need for developed characteristics in this example. The final model coefficients may include model coefficients determined in the comparative analysis model step above.
  • FIG. 15 is a schematic diagram of an embodiment of a model coefficient matrix 10 with respect to the pipeline and tank farm for determining model coefficients for use in comparative performance analysis as illustrated in FIGS. 10-12. This example shows but one of many potential applications of this invention to the pipeline and tank farm industry. The methodology described and illustrated in FIGS. 10-15 could be applied to many other different industries and facilities. For example, this methodology could be applied to the power generation industry, such as developing model coefficients for predicting operating expense for single cycle and combined cycle generating stations that generate electrical power from any combination of boilers, steam turbine generators, combustion turbine generators and heat recovery steam generators. In another example, this methodology could be applied to develop model coefficients for predicting the annual cost for ethylene manufacturers of compliance with environmental regulations associated with continuous emissions monitoring and reporting from ethylene furnaces. In one embodiment, the model coefficients would apply to both environmental applications and chemical industry applications.
  • FIG. 9 is a schematic diagram of an embodiment of a computing node for implementing one or more embodiments described in this disclosure, such as methods 60, 100, 200, and 300 as described in FIGS. 1 and 10-12, respectively. The computing node may correspond to or may be part of a computer and/or any other computing device, such as a handheld computer, a tablet computer, a laptop computer, a portable device, a workstation, a server, a mainframe, a supercomputer, and/or a database. The hardware comprises a processor 900 with adequate system memory 905 to perform the required numerical computations. The processor 900 executes a computer program residing in system memory 905, which may be a non-transitory computer readable medium, to perform methods 60, 100, 200, and 300 as described in FIGS. 1 and 10-12, respectively. Video and storage controllers 910 may be used to enable the operation of display 915, which displays a variety of information, such as the tables and user interfaces described in FIGS. 2-8. The computing node includes various data storage devices for data input, such as floppy disk units 920, internal/external disk drives 925, internal CD/DVDs 930, tape units 935, and other types of electronic storage media 940. The aforementioned data storage devices are illustrative and exemplary only.
  • The computing node may also comprise one or more other input interfaces (not shown in FIG. 9) that comprise at least one receiving device configured to receive data via electrical, optical, and/or wireless connections using one or more communication protocols. In one embodiment, the input interface may be a network interface that comprises a plurality of input ports configured to receive and/or transmit data via a network. In particular, the network may transmit operation and performance data via wired links, wireless links, and/or logical links. Other examples of the input interface may include, but are not limited to, a keyboard, universal serial bus (USB) interfaces, and/or graphical input devices (e.g., onscreen and/or virtual keyboards). In another embodiment, the input interfaces may comprise one or more measuring devices and/or sensing devices for measuring asset unit first principle data or other asset-level data 64 described in FIG. 1. In other words, a measuring device and/or sensing device may be used to measure various physical attributes and/or characteristics associated with the operation and performance of a measurable system.
  • These storage media are used to enter the data set and outlier removal criteria into the computing node, store the outlier-removed data set, store calculated factors, and store the system-produced trend lines and trend line iteration graphs. The calculations can apply statistical software packages or can be performed from data entered in spreadsheet formats using Microsoft Excel®, for example. In one embodiment, the calculations are performed using either customized software programs designed for company-specific system implementations or commercially available software that is compatible with Microsoft Excel® or other database and spreadsheet programs. The computing node can also interface with proprietary or public external storage media 955 to link with other databases that provide data used by the method for predicting future reliability based on current maintenance spending. An output interface comprises an output device for transmitting data. The output devices can be a telecommunication device 945, a transmission device, and/or any other output device used to transmit the processed future reliability data, such as the calculation data worksheets, graphs, and/or reports, via one or more networks, an intranet, or the Internet to other computing nodes, network nodes, a control center, printers 950, electronic storage media similar to those mentioned as input devices 920, 925, 930, 935, 940, and/or proprietary storage databases 960. These output devices are illustrative and exemplary only.
  • In one embodiment, system memory 905 interfaces with a computer bus or other connection so as to communicate and/or transmit information stored in system memory 905 to processor 900 during execution of software programs, such as an operating system, application programs, device drivers, and software modules that comprise program code, and/or computer executable process steps, incorporating functionality described herein, e.g., methods 60, 100, 200, and 300. Processor 900 first loads computer executable process steps from storage, e.g., system memory 905, storage medium/media, removable media drive, and/or other non-transitory storage devices. Processor 900 can then execute the stored process steps in order to execute the loaded computer executable process steps. Stored data, e.g., data stored by a storage device, can be accessed by processor 900 during the execution of computer executable process steps to instruct one or more components within the computing node.
  • Programming and/or loading executable instructions onto system memory 905 and/or one or more processing units, such as a processor or microprocessor, in order to transform a computing node 40 into a non-generic particular machine or apparatus that performs modelling used to estimate future reliability of a measurable system is well-known in the art. Implementing instructions, real-time monitoring, and other functions by loading executable software into a microprocessor and/or processor can be converted to a hardware implementation by well-known design rules and/or transform a general-purpose processor to a processor programmed for a specific application. For example, decisions between implementing a concept in software versus hardware may depend on a number of design choices that include stability of the design and numbers of units to be produced and issues involved in translating from the software domain to the hardware domain. Often a design may be developed and tested in a software form and subsequently transformed, by well-known design rules, to an equivalent hardware implementation in an ASIC or application specific hardware that hardwires the instructions of the software. In the same manner as a machine controlled by a new ASIC is a particular machine or apparatus, likewise a computer that has been programmed and/or loaded with executable instructions is viewed as a non-generic particular machine or apparatus.
  • FIG. 18 is a schematic diagram of another embodiment of a computing node 40 for implementing one or more embodiments within this disclosure, such as methods 60, 100, 200, and 300 as described in FIGS. 1 and 10-12, respectively. Computing node 40 can be any form of computing device, including computers, workstations, handhelds, mainframes, embedded computing devices, holographic computing devices, biological computing devices, nanotechnology computing devices, virtual computing devices, and/or distributed systems. Computing node 40 includes a microprocessor 42, an input device 44, a storage device 46, a video controller 48, a system memory 50, a display 54, and a communication device 56, all interconnected by one or more buses, wires, or other communications pathways 52. The storage device 46 could be a floppy drive, hard drive, CD-ROM, optical drive, bubble memory, or any other form of storage device. In addition, the storage device 46 may be capable of receiving a floppy disk, CD-ROM, DVD-ROM, memory stick, or any other form of computer-readable medium that may contain computer-executable instructions or data. Further, the communication device 56 could be a modem, network card, or any other device that enables the node to communicate with humans or other nodes.
  • At least one embodiment is disclosed, and variations, combinations, and/or modifications of the embodiment(s) and/or features of the embodiment(s) made by a person having ordinary skill in the art are within the scope of the disclosure. Alternative embodiments that result from combining, integrating, and/or omitting features of the embodiment(s) are also within the scope of the disclosure. Where numerical ranges or limitations are expressly stated, such express ranges or limitations may be understood to include iterative ranges or limitations of like magnitude falling within the expressly stated ranges or limitations (e.g., from about 1 to about 10 includes 2, 3, 4, etc.; greater than 0.10 includes 0.11, 0.12, 0.13, etc.). The use of the term "about" means ±10% of the subsequent number, unless otherwise stated.
  • Use of the term “optionally” with respect to any element of a claim means that the element is required, or alternatively, the element is not required, both alternatives being within the scope of the claim. Use of broader terms such as comprises, includes, and having may be understood to provide support for narrower terms such as consisting of, consisting essentially of, and comprised substantially of. Accordingly, the scope of protection is not limited by the description set out above but is defined by the claims that follow, that scope including all equivalents of the subject matter of the claims. Each and every claim is incorporated as further disclosure into the specification and the claims are embodiment(s) of the present disclosure.
  • While several embodiments have been provided in the present disclosure, it may be understood that the disclosed embodiments might be embodied in many other specific forms without departing from the spirit or scope of the present disclosure. The present examples are to be considered as illustrative and not restrictive, and the intention is not to be limited to the details given herein. For example, the various elements or components may be combined or integrated in another system or certain features may be omitted, or not implemented. Well-known elements are presented without detailed description in order not to obscure the present invention in unnecessary detail. For the most part, details unnecessary to obtain a complete understanding of the present invention have been omitted inasmuch as such details are within the skills of persons of ordinary skill in the relevant art.
  • In addition, techniques, systems, subsystems, and methods described and illustrated in the various embodiments as discrete or separate may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the present disclosure. Other items shown or discussed as coupled or directly coupled or communicating with each other may be indirectly coupled or communicating through some interface, device, or intermediate component, whether electrically, mechanically, or otherwise. Other examples of changes, substitutions, and alterations are ascertainable by one skilled in the art and may be made without departing from the spirit and scope disclosed herein.
  • Although the systems and methods described herein have been described in detail, it should be understood that various changes, substitutions, and alterations can be made without departing from the spirit and scope of the invention as defined by the following claims. Those skilled in the art may be able to study the preferred embodiments and identify other ways to practice the invention that are not exactly as described herein. It is the intent of this disclosure that variations and equivalents of the invention are within the scope of the claims while the description, abstract, and drawings are not to be used to limit the scope of the invention. The invention is specifically intended to be as broad as the claims below and their equivalents.
  • In closing, it should be noted that the discussion of any reference is not an admission that it is prior art to the present invention, especially any reference that may have a publication date after the priority date of this application. At the same time, each and every claim below is hereby incorporated into this detailed description or specification as additional embodiments of the disclosure.

Claims (20)

What is claimed is:
1. A system for modelling future reliability of a facility based on operational and performance data, comprising:
an input interface configured to:
receive maintenance expense data corresponding to a facility;
receive first principle data corresponding to the facility; and
receive asset reliability data corresponding to the facility;
a processor coupled to a non-transitory computer readable medium, wherein the non-transitory computer readable medium comprises instructions that, when executed by the processor, cause the system to:
obtain one or more comparative analysis models associated with the facility;
obtain a maintenance standard that generates a plurality of category values that categorizes the maintenance expense data by a designated interval based upon at least the maintenance expense data, the first principle data, and the one or more comparative analysis models; and
determine an estimated future reliability of the facility based on the asset reliability data and the plurality of category values; and
a user interface configured to display the results of the estimated future reliability.
2. The system of claim 1, wherein the asset reliability data is Equivalent Forced Outage Rate data.
3. The system of claim 1, wherein the instructions, when executed by the processor, further cause the system to compile the maintenance standard and asset reliability data into a compiled data file.
4. The system of claim 3, wherein the instructions, when executed by the processor, further cause the system to:
generate categorized time-based maintenance expense data based upon at least the compiled data file; and
generate categorized time-based reliability data based upon at least the compiled data file.
5. The system of claim 4, wherein generating the categorized time-based maintenance expense data comprises arranging the category values according to the one or more time intervals for the plurality of other facilities.
6. The system of claim 4, wherein generating the categorized time-based reliability data comprises arranging the reliability data values according to the one or more time intervals for the plurality of other facilities.
7. The system of claim 1, wherein a future reliability interval of the estimated future reliability is based upon the amount of maintenance expense data, asset reliability data, and first principle data.
8. The system of claim 1, wherein the maintenance standard normalizes the maintenance expense data.
9. The system of claim 8, wherein normalizing the maintenance expense data comprises generating a periodic maintenance spending divisor for a time period.
10. The system of claim 1, wherein displaying the results of the future reliability comprises displaying the asset reliability data according to the plurality of category values in a graph.
11. A method for modelling future reliability of a measurable system based on operational and performance data, comprising:
receiving maintenance expense data via an input interface associated with a measurable system;
receiving first principle data via an input interface associated with the measurable system;
receiving asset reliability data via an input interface associated with the measurable system;
generating, using a processor, a plurality of category values that categorizes the maintenance expense data by a designated interval using a maintenance standard that is generated from one or more comparative analysis models associated with the measurable system;
determining, using a processor, an estimated future reliability of the measurable system based on the asset reliability data and the plurality of category values; and
outputting the results of the estimated future reliability using an output interface.
12. The method of claim 11, further comprising generating the maintenance standard using the maintenance expense data and the first principle data.
13. The method of claim 12, wherein the maintenance standard generates normalized maintenance expense data from the maintenance expense data and the one or more comparative analysis models.
14. The method of claim 13, wherein the maintenance standard uses a periodic maintenance spending divisor to generate the normalized maintenance expense data.
15. The method of claim 11, further comprising:
compiling the maintenance standard and asset reliability data into a compiled data file;
generating categorized time-based maintenance expense data based upon at least the compiled data file; and
generating categorized time-based reliability data based upon at least the compiled data file.
16. The method of claim 15, wherein generating the categorized time-based maintenance expense data comprises arranging the category values according to the one or more time intervals for the plurality of other facilities.
17. The method of claim 15, wherein generating the categorized time-based reliability data comprises arranging the reliability data values according to the one or more time intervals for the plurality of other facilities.
18. An apparatus for modelling future reliability of an equipment asset based on operational and performance data, comprising:
an input interface comprising a receiving device configured to:
receive maintenance expense data corresponding to an equipment asset;
receive first principle data corresponding to the equipment asset;
receive asset reliability data corresponding to the equipment asset;
a processor coupled to a non-transitory computer readable medium, wherein the non-transitory computer readable medium comprises instructions that, when executed by the processor, cause the apparatus to:
generate a plurality of category values that categorizes the maintenance expense data by a designated interval from a maintenance standard; and
determine an estimated future reliability of the equipment asset comprising estimated future reliability data based on the asset reliability data and the plurality of category values; and
an output interface comprising a transmission device configured to transmit a processed data set that comprises the estimated future reliability data to a control center for comparing different equipment assets based on the processed data set.
19. The apparatus of claim 18, wherein the maintenance standard normalizes the maintenance expense data based on the one or more comparative analysis models, and wherein the asset reliability data is Equivalent Forced Outage Rate data corresponding to a plurality of other equipment assets.
20. The apparatus of claim 18, wherein the instructions, when executed by the processor, further cause the apparatus to:
compile the maintenance standard and asset reliability data into a compiled data file;
generate categorized time-based maintenance expense data based upon at least the compiled data file; and
generate categorized time-based reliability data based upon at least the compiled data file, and
wherein at least some of the first principle data corresponding to the equipment asset is measured from one or more sensing devices.
US14/684,358 2014-04-11 2015-04-11 Future reliability prediction based on system operational and performance data modelling Active 2037-04-18 US10409891B2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US14/684,358 US10409891B2 (en) 2014-04-11 2015-04-11 Future reliability prediction based on system operational and performance data modelling
US16/566,845 US11550874B2 (en) 2014-04-11 2019-09-10 Future reliability prediction based on system operational and performance data modelling
US18/094,835 US20230169146A1 (en) 2014-04-11 2023-01-09 Future reliability prediction based on system operational and performance data modelling

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201461978683P 2014-04-11 2014-04-11
US14/684,358 US10409891B2 (en) 2014-04-11 2015-04-11 Future reliability prediction based on system operational and performance data modelling

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/566,845 Continuation US11550874B2 (en) 2014-04-11 2019-09-10 Future reliability prediction based on system operational and performance data modelling

Publications (2)

Publication Number Publication Date
US20150294048A1 true US20150294048A1 (en) 2015-10-15
US10409891B2 US10409891B2 (en) 2019-09-10

Family

ID=54265262

Family Applications (3)

Application Number Title Priority Date Filing Date
US14/684,358 Active 2037-04-18 US10409891B2 (en) 2014-04-11 2015-04-11 Future reliability prediction based on system operational and performance data modelling
US16/566,845 Active 2036-10-27 US11550874B2 (en) 2014-04-11 2019-09-10 Future reliability prediction based on system operational and performance data modelling
US18/094,835 Pending US20230169146A1 (en) 2014-04-11 2023-01-09 Future reliability prediction based on system operational and performance data modelling

Country Status (7)

Country Link
US (3) US10409891B2 (en)
EP (1) EP3129309A4 (en)
JP (1) JP6795488B2 (en)
KR (3) KR20230030044A (en)
CN (2) CN115186844A (en)
CA (2) CA3116974A1 (en)
WO (1) WO2015157745A2 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10402311B2 (en) * 2017-06-29 2019-09-03 Microsoft Technology Licensing, Llc Code review rebase diffing

Family Cites Families (127)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5982489A (en) 1982-10-28 1984-05-12 株式会社岩科製作所 Rotor for beater
JPS6297855A (en) 1985-10-24 1987-05-07 Seiko Instr & Electronics Ltd Ink jet printer
US5339392A (en) 1989-07-27 1994-08-16 Risberg Jeffrey S Apparatus and method for creation of a user definable video displayed document showing changes in real time data
ES2202433T3 (en) 1995-10-12 2004-04-01 Yazaki Corporation DEVICE FOR CALCULATING A BAD DISTRIBUTION OF THE LOAD IN A VEHICLE AND DEVICE FOR CALCULATING THE LOAD OF THE VEHICLE.
US7010336B2 (en) 1997-08-14 2006-03-07 Sensys Medical, Inc. Measurement site dependent data preprocessing method for robust calibration and prediction
US6085216A (en) 1997-12-31 2000-07-04 Xerox Corporation Method and system for efficiently allocating resources for solving computationally hard problems
JP2001318745A (en) 2000-05-11 2001-11-16 Sony Corp Data processor, data processing method and recording medium
US6847976B1 (en) 2000-06-15 2005-01-25 Terrence B. Peace Method and apparatus for significance testing and confidence interval construction based on user-specified distribution
US20040172401A1 (en) 2000-06-15 2004-09-02 Peace Terrence B. Significance testing and confidence interval construction based on user-specified distributions
US6832205B1 (en) 2000-06-30 2004-12-14 General Electric Company System and method for automatically predicting the timing and costs of service events in a life cycle of a product
US7124059B2 (en) * 2000-10-17 2006-10-17 Accenture Global Services Gmbh Managing maintenance for an item of equipment
US6988092B1 (en) 2000-12-28 2006-01-17 Abb Research Ltd. Method for evaluation of energy utilities
US7043461B2 (en) 2001-01-19 2006-05-09 Genalytics, Inc. Process and system for developing a predictive model
US7039654B1 (en) 2002-09-12 2006-05-02 Asset Trust, Inc. Automated bot development system
US7313550B2 (en) 2002-03-27 2007-12-25 Council Of Scientific & Industrial Research Performance of artificial neural network models in the presence of instrumental noise and measurement errors
US20070219741A1 (en) 2005-05-20 2007-09-20 Emilio Miguelanez Methods and apparatus for hybrid outlier detection
JP4042492B2 (en) 2002-08-07 2008-02-06 トヨタ自動車株式会社 Method and system for adapting engine control parameters
JP2004118471A (en) * 2002-09-26 2004-04-15 Mitsubishi Heavy Ind Ltd Lease business support system
JP2004145496A (en) * 2002-10-23 2004-05-20 Hitachi Ltd Maintenance supporting method for equipment and facility
JP2004191359A (en) * 2002-10-24 2004-07-08 Mitsubishi Heavy Ind Ltd Risk management device
JP3968039B2 (en) * 2003-03-06 2007-08-29 東京電力株式会社 Maintenance planning support method and apparatus
US7634384B2 (en) * 2003-03-18 2009-12-15 Fisher-Rosemount Systems, Inc. Asset optimization reporting in a process plant
US8478534B2 (en) 2003-06-11 2013-07-02 The Research Foundation For The State University Of New York Method for detecting discriminatory data patterns in multiple sets of data and diagnosing disease
DE10331207A1 (en) * 2003-07-10 2005-01-27 Daimlerchrysler Ag Method and apparatus for predicting failure frequency
WO2005015476A2 (en) 2003-08-07 2005-02-17 Hsb Solomon Associates, Llc System and method for determining equivalency factors for use in comparative performance analysis of industrial facilities
US20050125322A1 (en) 2003-11-21 2005-06-09 General Electric Company System, method and computer product to detect behavioral patterns related to the financial health of a business entity
US20050131794A1 (en) 2003-12-15 2005-06-16 Lifson Kalman A. Stock portfolio and method
EP1548623A1 (en) 2003-12-23 2005-06-29 Sap Ag Outlier correction
JP4728968B2 (en) 2004-02-06 2011-07-20 テスト アドバンテージ, インコーポレイテッド Data analysis method and apparatus
US9164067B2 (en) 2004-02-13 2015-10-20 Waters Technologies Corporation System and method for tracking chemical entities in an LC/MS system
US7469228B2 (en) 2004-02-20 2008-12-23 General Electric Company Systems and methods for efficient frontier supplementation in multi-objective portfolio analysis
CA2501003C (en) 2004-04-23 2009-05-19 F. Hoffmann-La Roche Ag Sample analysis to provide characterization data
EP1768552A4 (en) 2004-06-21 2009-06-03 Aorora Technologies Pty Ltd Cardiac monitoring system
DE102004032822A1 (en) 2004-07-06 2006-03-23 Micro-Epsilon Messtechnik Gmbh & Co Kg Method for processing measured values
US20060069667A1 (en) 2004-09-30 2006-03-30 Microsoft Corporation Content evaluation
JP2006202171A (en) * 2005-01-24 2006-08-03 Chugoku Electric Power Co Inc:The Maintenance cost distribution system and maintenance cost distribution method
US7536364B2 (en) 2005-04-28 2009-05-19 General Electric Company Method and system for performing model-based multi-objective asset optimization and decision-making
US20060247798A1 (en) 2005-04-28 2006-11-02 Subbu Rajesh V Method and system for performing multi-objective predictive modeling, monitoring, and update for an asset
US8195484B2 (en) 2005-06-15 2012-06-05 Hartford Steam Boiler Inspection And Insurance Company Insurance product, rating system and method
US7966150B2 (en) 2005-11-17 2011-06-21 Florida Power & Light Company Data analysis applications
EP2013844A4 (en) 2006-04-07 2010-07-07 Hsb Solomon Associates Llc Emission trading product and method
US7729890B2 (en) 2006-08-22 2010-06-01 Analog Devices, Inc. Method for determining the change of a signal, and an apparatus including a circuit arranged to implement the method
WO2008028004A2 (en) 2006-08-31 2008-03-06 Nonlinear Medicine, Inc. Automated noise reduction system for predicting arrhythmic deaths
US20080104624A1 (en) 2006-11-01 2008-05-01 Motorola, Inc. Method and system for selection and scheduling of content outliers
KR100877061B1 (en) * 2006-12-14 2009-01-08 엘에스산전 주식회사 Multivariable Predictive Control System and Method thereof
JP5116307B2 (en) 2007-01-04 2013-01-09 ルネサスエレクトロニクス株式会社 Integrated circuit device abnormality detection device, method and program
JP2008191900A (en) * 2007-02-05 2008-08-21 Toshiba Corp Reliability-oriented plant maintenance operation support system, and operation support method
US8346691B1 (en) 2007-02-20 2013-01-01 Sas Institute Inc. Computer-implemented semi-supervised learning systems and methods
KR101109913B1 (en) 2007-03-27 2012-03-13 후지쯔 가부시끼가이샤 Method, device, and recording medium having program for making prediction model by multiple regression analysis
CN101122993A (en) * 2007-09-10 2008-02-13 胜利油田胜利评估咨询有限公司 Pumping unit inspection maintenance and fee determination method
JP2009098093A (en) * 2007-10-19 2009-05-07 Gyoseiin Genshino Iinkai Kakuno Kenkyusho Effective maintenance monitoring device for facility
US8040246B2 (en) 2007-12-04 2011-10-18 Avaya Inc. Systems and methods for facilitating a first response mission at an incident scene
JP5003566B2 (en) 2008-04-01 2012-08-15 三菱電機株式会社 Network performance prediction system, network performance prediction method and program
JP4991627B2 (en) * 2008-05-16 2012-08-01 株式会社日立製作所 Plan execution management device and program thereof
US8386412B2 (en) 2008-12-12 2013-02-26 At&T Intellectual Property I, L.P. Methods and apparatus to construct histogram and wavelet synopses for probabilistic data
US9111212B2 (en) 2011-08-19 2015-08-18 Hartford Steam Boiler Inspection And Insurance Company Dynamic outlier bias reduction system and method
JP2010250674A (en) 2009-04-17 2010-11-04 Nec Corp Working hour estimation device, method, and program
US10739741B2 (en) 2009-06-22 2020-08-11 Johnson Controls Technology Company Systems and methods for detecting changes in energy usage in a building
JP2011048688A (en) * 2009-08-27 2011-03-10 Hitachi Ltd Plant life cycle evaluation device and method
CN101763582A (en) * 2009-09-17 2010-06-30 宁波北电源兴电力工程有限公司 Scheduled overhaul management module for EAM system of power plant
WO2011047918A1 (en) * 2009-10-21 2011-04-28 International Business Machines Corporation Method and system for improving software execution time by optimizing a performance model
KR101010717B1 (en) * 2009-11-10 2011-01-24 한국동서발전(주) Condition-based plant operation and maintenance management system
US8311772B2 (en) 2009-12-21 2012-11-13 Teradata Us, Inc. Outlier processing
KR101733393B1 (en) * 2009-12-31 2017-05-10 에이비비 리써치 리미티드 Method and control system for scheduling load of a power plant
JP5581965B2 (en) 2010-01-19 2014-09-03 オムロン株式会社 MPPT controller, solar cell control device, photovoltaic power generation system, MPPT control program, and MPPT controller control method
US20110246409A1 (en) 2010-04-05 2011-10-06 Indian Statistical Institute Data set dimensionality reduction processes and machines
CN101900660B (en) * 2010-06-25 2012-01-04 北京工业大学 Method for detecting and diagnosing faults of intermittent low-speed and heavy-load device
CN102081765A (en) * 2011-01-19 2011-06-01 西安交通大学 Systematic control method for repair based on condition of electricity transmission equipment
JP5592813B2 (en) 2011-01-28 2014-09-17 株式会社日立ソリューションズ東日本 Lifetime demand forecasting method, program, and lifetime demand forecasting device
US9069725B2 (en) 2011-08-19 2015-06-30 Hartford Steam Boiler Inspection & Insurance Company Dynamic outlier bias reduction system and method
US10557840B2 (en) 2011-08-19 2020-02-11 Hartford Steam Boiler Inspection And Insurance Company System and method for performing industrial processes across facilities
US9158303B2 (en) 2012-03-27 2015-10-13 General Electric Company Systems and methods for improved reliability operations
US8812331B2 (en) 2012-04-27 2014-08-19 Richard B. Jones Insurance product, rating and credit enhancement system and method for insuring project savings
KR101329395B1 (en) * 2012-06-04 2013-11-14 한국남동발전 주식회사 Power plant equipment management system and control method for thereof
US8686364B1 (en) 2012-09-17 2014-04-01 Jp3 Measurement, Llc Method and system for determining energy content and detecting contaminants in a fluid stream
CN103077428B (en) * 2012-12-25 2016-04-06 上海发电设备成套设计研究院 A kind of level of factory multiple stage Generating Unit Operation Reliability on-line prediction method
CA2843276A1 (en) 2013-02-20 2014-08-20 Hartford Steam Boiler Inspection And Insurance Company Dynamic outlier bias reduction system and method
US9536364B2 (en) 2013-02-25 2017-01-03 GM Global Technology Operations LLC Vehicle integration of BLE nodes to enable passive entry and passive start features
US9646262B2 (en) 2013-06-17 2017-05-09 Purepredictive, Inc. Data intelligence using machine learning
CA3116974A1 (en) 2014-04-11 2015-10-15 Hartford Steam Boiler Inspection And Insurance Company Improving future reliability prediction based on system operational and performance data modelling
US9568519B2 (en) 2014-05-15 2017-02-14 International Business Machines Corporation Building energy consumption forecasting procedure using ambient temperature, enthalpy, bias corrected weather forecast and outlier corrected sensor data
US9489630B2 (en) 2014-05-23 2016-11-08 DataRobot, Inc. Systems and techniques for predictive data analytics
US10452992B2 (en) 2014-06-30 2019-10-22 Amazon Technologies, Inc. Interactive interfaces for machine learning model evaluations
US20190050510A1 (en) 2017-08-10 2019-02-14 Clearag, Inc. Development of complex agricultural simulation models from limited datasets
US9996933B2 (en) 2015-12-22 2018-06-12 Qualcomm Incorporated Methods and apparatus for outlier detection and correction of structured light depth maps
US9760690B1 (en) 2016-03-10 2017-09-12 Siemens Healthcare Gmbh Content-based medical image rendering based on machine learning
JP6457421B2 (en) 2016-04-04 2019-01-23 ファナック株式会社 Machine learning device, machine system, manufacturing system, and machine learning method for learning using simulation results
US10198339B2 (en) 2016-05-16 2019-02-05 Oracle International Corporation Correlation-based analytic for time-series data
US20190213446A1 (en) 2016-06-30 2019-07-11 Intel Corporation Device-based anomaly detection using random forest models
US11429859B2 (en) 2016-08-15 2022-08-30 Cangrade, Inc. Systems and processes for bias removal in a predictive performance model
WO2018075945A1 (en) 2016-10-20 2018-04-26 Consolidated Research, Inc. System and method for benchmarking service providers
US11315045B2 (en) 2016-12-29 2022-04-26 Intel Corporation Entropy-based weighting in random forest models
CN107391569B (en) 2017-06-16 2020-09-15 阿里巴巴集团控股有限公司 Data type identification, model training and risk identification method, device and equipment
US10638979B2 (en) 2017-07-10 2020-05-05 Glysens Incorporated Analyte sensor data evaluation and error reduction apparatus and methods
US10474667B2 (en) 2017-07-29 2019-11-12 Vmware, Inc Methods and systems to detect and correct outliers in a dataset stored in a data-storage device
JP6837949B2 (en) 2017-09-08 2021-03-03 株式会社日立製作所 Prediction system and method
US20190108561A1 (en) 2017-10-05 2019-04-11 Mindtree Ltd. Purchase Intent Determination And Real Time In-store Shopper Assistance
EP3483797A1 (en) 2017-11-13 2019-05-15 Accenture Global Solutions Limited Training, validating, and monitoring artificial intelligence and machine learning models
US10521654B2 (en) 2018-03-29 2019-12-31 Fmr Llc Recognition of handwritten characters in digital images using context-based machine learning
US11423336B2 (en) 2018-03-29 2022-08-23 Nec Corporation Method and system for model integration in ensemble learning
JP6592813B2 (en) 2018-06-29 2019-10-23 アクアインテック株式会社 Sand collection method for sand basin
CN109299156A (en) 2018-08-21 2019-02-01 平安科技(深圳)有限公司 Electronic device, the electric power data predicting abnormality method based on XGBoost and storage medium
US20200074269A1 (en) 2018-09-05 2020-03-05 Sartorius Stedim Data Analytics Ab Computer-implemented method, computer program product and system for data analysis
US11636292B2 (en) 2018-09-28 2023-04-25 Hartford Steam Boiler Inspection And Insurance Company Dynamic outlier bias reduction system and method
US20200160229A1 (en) 2018-11-15 2020-05-21 Adobe Inc. Creating User Experiences with Behavioral Information and Machine Learning
US11461702B2 (en) 2018-12-04 2022-10-04 Bank Of America Corporation Method and system for fairness in artificial intelligence based decision making engines
US11204847B2 (en) 2018-12-21 2021-12-21 Microsoft Technology Licensing, Llc Machine learning model monitoring
US11797550B2 (en) 2019-01-30 2023-10-24 Uptake Technologies, Inc. Data science platform
US11551156B2 (en) 2019-03-26 2023-01-10 Hrl Laboratories, Llc. Systems and methods for forecast alerts with programmable human-machine hybrid ensemble learning
US11593650B2 (en) 2019-03-27 2023-02-28 GE Precision Healthcare LLC Determining confident data samples for machine learning models on unseen data
US11210587B2 (en) 2019-04-23 2021-12-28 Sciencelogic, Inc. Distributed learning anomaly detector
US20200387836A1 (en) 2019-06-04 2020-12-10 Accenture Global Solutions Limited Machine learning model surety
US11354602B2 (en) 2019-06-04 2022-06-07 Bank Of America Corporation System and methods to mitigate poisoning attacks within machine learning systems
US20200402665A1 (en) 2019-06-19 2020-12-24 GE Precision Healthcare LLC Unplanned readmission prediction using an interactive augmented intelligent (iai) system
CN110378386A (en) 2019-06-20 2019-10-25 平安科技(深圳)有限公司 Based on unmarked abnormality recognition method, device and the storage medium for having supervision
EP3987444A1 (en) 2019-06-24 2022-04-27 Telefonaktiebolaget LM Ericsson (publ) Method for detecting uncommon input
CN110458374A (en) 2019-08-23 2019-11-15 山东浪潮通软信息科技有限公司 A kind of business electrical maximum demand prediction technique based on ARIMA and SVM
CN110411957B (en) 2019-08-28 2021-11-19 北京农业质量标准与检测技术研究中心 Nondestructive rapid prediction method and device for shelf life and freshness of fruits
CN110543618A (en) 2019-09-05 2019-12-06 上海应用技术大学 roundness uncertainty evaluation method based on probability density function estimation
GB2603358B (en) 2019-09-18 2023-08-30 Hartford Steam Boiler Inspection And Insurance Company Computer-based systems, computing components and computing objects configured to implement dynamic outlier bias reduction in machine learning models
EP4055801A1 (en) 2019-11-06 2022-09-14 Centurylink Intellectual Property LLC Predictive resource allocation in an edge computing network
CN110909822B (en) 2019-12-03 2022-11-11 中国科学院微小卫星创新研究院 Satellite anomaly detection method based on improved Gaussian process regression model
CN111080502B (en) 2019-12-17 2023-09-08 清华苏州环境创新研究院 Big data identification method for regional enterprise data abnormal behaviors
CN111157698B (en) 2019-12-24 2022-10-21 核工业北京地质研究院 Inversion method for obtaining total potassium content of black soil by using emissivity data
CN111709447A (en) 2020-05-14 2020-09-25 中国电力科学研究院有限公司 Power grid abnormality detection method and device, computer equipment and storage medium
US11007891B1 (en) 2020-10-01 2021-05-18 Electricfish Energy Inc. Fast electric vehicle charging and distributed grid resource adequacy management system
CN112257963B (en) 2020-11-20 2023-08-29 北京轩宇信息技术有限公司 Defect prediction method and device based on spaceflight software defect data distribution outlier

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030171879A1 (en) * 2002-03-08 2003-09-11 Pittalwala Shabbir H. System and method to accomplish pipeline reliability
US20040122625A1 (en) * 2002-08-07 2004-06-24 Nasser Loren A. Apparatus and method for predicting total ownership cost
US20080015827A1 (en) * 2006-01-24 2008-01-17 Tryon Robert G Iii Materials-based failure analysis in design of electronic devices, and prediction of operating life
US20090093996A1 (en) * 2006-05-09 2009-04-09 Hsb Solomon Associates Performance analysis system and method
US20100262442A1 (en) * 2006-07-20 2010-10-14 Standard Aero, Inc. System and method of projecting aircraft maintenance costs
US20080300888A1 (en) * 2007-05-30 2008-12-04 General Electric Company Systems and Methods for Providing Risk Methodologies for Performing Supplier Design for Reliability
US20100152962A1 (en) * 2008-12-15 2010-06-17 Panasonic Avionics Corporation System and Method for Performing Real-Time Data Analysis
US20130173325A1 (en) * 2011-12-08 2013-07-04 Copperleaf Technologies Inc. Capital asset investment planning systems

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ford, A.P. et al., "IEEE Standard definitions for use in reporting electric generating unit reliability, availability and productivity", IEEE Power Engineering Society, March 2007. *
North American Electric Reliability Council, "Predicting Generating Unit Reliability", December 1995. *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10557840B2 (en) 2011-08-19 2020-02-11 Hartford Steam Boiler Inspection And Insurance Company System and method for performing industrial processes across facilities
US11868425B2 (en) 2011-08-19 2024-01-09 Hartford Steam Boiler Inspection And Insurance Company Dynamic outlier bias reduction system and method
US11334645B2 (en) 2011-08-19 2022-05-17 Hartford Steam Boiler Inspection And Insurance Company Dynamic outlier bias reduction system and method
US11550874B2 (en) 2014-04-11 2023-01-10 Hartford Steam Boiler Inspection And Insurance Company Future reliability prediction based on system operational and performance data modelling
US10346237B1 (en) * 2015-08-28 2019-07-09 EMC IP Holding Company LLC System and method to predict reliability of backup software
US20190073619A1 (en) * 2016-03-31 2019-03-07 Nuovo Pignone Tecnologie Srl Methods and systems for optimizing filter change interval
US11042821B2 (en) * 2016-03-31 2021-06-22 Nuovo Pignone Tecnologie Srl Methods and systems for optimizing filter change interval
US10712239B2 (en) * 2016-08-12 2020-07-14 Siemens Aktiengesellschaft Technique for monitoring technical equipment
US20180045614A1 (en) * 2016-08-12 2018-02-15 Daniela Oelke Technique for monitoring technical equipment
US10055891B2 (en) 2016-10-07 2018-08-21 Bank Of America Corporation System for prediction of future circumstances and generation of real-time interactive virtual reality user experience
US11093518B1 (en) 2017-09-23 2021-08-17 Splunk Inc. Information technology networked entity monitoring with dynamic metric and threshold selection
US20190095478A1 (en) * 2017-09-23 2019-03-28 Splunk Inc. Information technology networked entity monitoring with automatic reliability scoring
US11934417B2 (en) 2017-09-23 2024-03-19 Splunk Inc. Dynamically monitoring an information technology networked entity
US11106442B1 (en) 2017-09-23 2021-08-31 Splunk Inc. Information technology networked entity monitoring with metric selection prior to deployment
US11843528B2 (en) 2017-09-25 2023-12-12 Splunk Inc. Lower-tier application deployment for higher-tier system
US11055280B2 (en) 2017-11-27 2021-07-06 Snowflake Inc. Batch data ingestion in database systems
US10997163B2 (en) 2017-11-27 2021-05-04 Snowflake Inc. Data ingestion using file queues
US10977245B2 (en) 2017-11-27 2021-04-13 Snowflake Inc. Batch data ingestion
US11294890B2 (en) 2017-11-27 2022-04-05 Snowflake Inc. Batch data ingestion in database systems
US10896172B2 (en) * 2017-11-27 2021-01-19 Snowflake Inc. Batch data ingestion in database systems
CN109299846A (en) * 2018-07-20 2019-02-01 岭东核电有限公司 A kind of nuclear power plant equipment analysis method for reliability, system and terminal device
US11803612B2 (en) 2018-09-28 2023-10-31 Hartford Steam Boiler Inspection And Insurance Company Systems and methods of dynamic outlier bias reduction in facility operating data
US11636292B2 (en) 2018-09-28 2023-04-25 Hartford Steam Boiler Inspection And Insurance Company Dynamic outlier bias reduction system and method
CN110163436A (en) * 2019-05-23 2019-08-23 西北工业大学 Intelligent workshop production optimization method based on bottleneck prediction
US11615348B2 (en) 2019-09-18 2023-03-28 Hartford Steam Boiler Inspection And Insurance Company Computer-based systems, computing components and computing objects configured to implement dynamic outlier bias reduction in machine learning models
US11328177B2 (en) 2019-09-18 2022-05-10 Hartford Steam Boiler Inspection And Insurance Company Computer-based systems, computing components and computing objects configured to implement dynamic outlier bias reduction in machine learning models
US11288602B2 (en) 2019-09-18 2022-03-29 Hartford Steam Boiler Inspection And Insurance Company Computer-based systems, computing components and computing objects configured to implement dynamic outlier bias reduction in machine learning models
US11808260B2 (en) 2020-06-15 2023-11-07 Schlumberger Technology Corporation Mud pump valve leak detection and forecasting
US11676072B1 (en) 2021-01-29 2023-06-13 Splunk Inc. Interface for incorporating user feedback into training of clustering model
CN114035466A (en) * 2021-11-05 2022-02-11 肇庆高峰机械科技有限公司 Control system of duplex position magnetic sheet arrangement machine

Also Published As

Publication number Publication date
KR102357659B1 (en) 2022-02-04
US11550874B2 (en) 2023-01-10
US10409891B2 (en) 2019-09-10
EP3129309A4 (en) 2018-03-28
KR102503653B1 (en) 2023-02-24
JP6795488B2 (en) 2020-12-02
CN106471475A (en) 2017-03-01
KR20230030044A (en) 2023-03-03
CA2945543C (en) 2021-06-15
KR20170055935A (en) 2017-05-22
JP2017514252A (en) 2017-06-01
CA2945543A1 (en) 2015-10-15
CA3116974A1 (en) 2015-10-15
EP3129309A2 (en) 2017-02-15
CN106471475B (en) 2022-07-19
CN115186844A (en) 2022-10-14
WO2015157745A3 (en) 2015-12-03
KR20220017530A (en) 2022-02-11
WO2015157745A2 (en) 2015-10-15
US20230169146A1 (en) 2023-06-01
US20200004802A1 (en) 2020-01-02

Similar Documents

Publication Publication Date Title
US11550874B2 (en) Future reliability prediction based on system operational and performance data modelling
US7233910B2 (en) System and method for determining equivalency factors for use in comparative performance analysis of industrial facilities
US10180680B2 (en) Tuning system and method for improving operation of a chemical plant with a furnace
Andersson et al. Big data in spare parts supply chains: The potential of using product-in-use data in aftermarket demand planning
Garg et al. Behavior analysis of synthesis unit in fertilizer plant
Chin et al. Asset maintenance optimisation approaches in the chemical and process industries–A review
Cerdeiral et al. Software project management in high maturity: A systematic literature mapping
Bayzid et al. Prediction of maintenance cost for road construction equipment: A case study
Ervural et al. A fully data-driven FMEA framework for risk assessment on manufacturing processes using a hybrid approach
Simard et al. A Method to Classify Data Quality for Decision Making Under Uncertainty
Mohril et al. XGBoost based residual life prediction in the presence of human error in maintenance
Barabady Production Assurance: Concept, implementation and improvement
Tsarouhas Maintenance scheduling of a cheddar cheese manufacturing plant based on RAM analysis
Li Integrated workload allocation and condition-based maintenance threshold optimisation
Vinod et al. Supply Chain Management Efficiency Improvement in the Automobile Industry Using Lean Six Sigma and Artificial Neural Network
Gorisse et al. Developing a decision support tool to determine and improve equipment performance considering maintenance, with an application to a BP refinery
AHMED et al. DECISION-MAKING MODELS FOR PREDICTIVE MAINTENANCE SERVICE SUPPORT SYSTEMS
Van der Westhuizen The effect of combining reliability, availability and maintainability modelling and stochastic simulation modelling on production efficiency
Bosco Practical Methods for Optimizing Equipment Maintenance Strategies Using an Analytic Hierarchy Process and Prognostic Algorithms
Rao et al. Efficient software cost estimation using partitioning techniques
Oosthuizen A maintenance strategy for a network of automated fluid management systems
Yun et al. A Novel Concept for Integrating and Delivering Automobile Warranty Reliability Information Via a Comprehensive Digital Dashboard

Legal Events

Date Code Title Description
AS Assignment

Owner name: HARTFORD STEAM BOILER INSPECTION AND INSURANCE COM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:JONES, RICHARD BRADLEY;REEL/FRAME:035404/0611

Effective date: 20150413

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE

CC Certificate of correction
MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4