HOW CAN WE IMPROVE THE TRUST IN PROCESS ANALYZERS?
HOW CAN WE IMPROVE THE TRUST IN PROCESS ANALYZERS?

Hans Cusell / Senior Metering & Allocation Consultant at Hint
Wout Last / CEO at Hint

KEYWORDS
Process & Laboratory Analyzers, Quality Measurement, Analyzer Maintenance And Data Acquisition System (AMADAS), IT Integration, Improve Trust

ABSTRACT
Problem statement: The Oil & Gas and Petrochemical industry invests millions of dollars in process analyzers in its plants, yet operations staff do not trust them and therefore rely on the analysis results of the laboratory instead. What are the sources of error in quality measurement, and how can the trust level of process analyzers be improved by integrating an Analyzer Maintenance And Data Acquisition System (AMADAS) with existing business and other systems?
Objectives: How can we achieve that process analyzers are used for quality control by implementing an AMADAS?
Method: What is the project approach to achieve these objectives, and what are the steps and procedures? Why does the Operations department not trust process analyzers? Why is an analyzer maintenance management system needed, and what are the cost savings?
Conclusions: By implementing an analyzer maintenance management system you can reduce the number of routine samples in the laboratory, increase the trust level of process analyzers, save manpower cost and minimize giveaway by operating closer to your limits.

INTRODUCTION
On-line analyzer systems are widely used in the Oil & Gas and Petrochemical industry. They range from single-component analyzers such as pH analyzers to multi-component analyzers such as gas chromatographs and mass spectrometers, and from wet-chemistry types to optical devices.

Modern process instruments such as temperature and pressure transmitters are designed to withstand the harsh conditions of the operational environment. They are expected to perform steadily despite widely ranging climatic conditions: extreme temperatures and large temperature differences over short time intervals, humidity and splash water, dust, etc. Repeatability is an important property under process control conditions. Precision comes into play in fiscal and custody transfer applications, whilst reliability is a key factor in safety systems. On top of that, process instrumentation should be robust in use and require only limited maintenance attention. Most equipment is designed with (close to) lifetime calibration, typically requiring only zero adjustment at set time intervals once installed.

On-line process analyzers are used in similar fields of operation:
• Process control; e.g. to monitor the quality of distillation in a distiller;
• Product quality and custody transfer; e.g. to measure the energy content of sales gas at a custody transfer point;
• Fiscal metering, for taxation purposes;
• Allocation in upstream production systems: the process of assigning to the shareholders equitable shares in the sales of a joint venture, based on the value of their contribution;
• Environmental emissions; e.g. venting or flaring of gas, wastewater;
• Process safety systems; e.g. to measure air pollution and toxicity of the ambient surroundings.

FIGURE 1. TYPICAL ANALYZER HOUSE (COURTESY PETROCHEMICAL PLANT)

An on-line process analyzer is generally a complex system compared with other, more general-purpose process instrumentation. It often needs an additional sample preparation and sample disposal system, which in its own right may be complex.
Hence, the system design often results in a measurement system that is less stable than traditional process instruments, and that therefore needs more maintenance attention from better than average qualified maintenance staff. On top of that, analyzers often require an expensive infrastructure to provide them with stable environmental conditions, to shield them from adverse weather conditions and to exclude hazardous and possibly explosive surrounding atmospheres; the latter facilitates the safe operation and maintenance of the systems inside. Altogether this makes them not only expensive capital items, but also items that are expensive to operate.

Control charts helped monitor the quality of mass production systems during the days of World War II. The method is based on periodic sampling and testing. It is a semi-graphical tool that plots the scores of a stated manufacturing property or quality in relation to the target value and the natural variability of the production process. It helps to unveil undue variability and other errors in the production process in a timely manner, simply by judging the size and recurrence of the observed deviations. This way the decision to intervene in an ongoing production process is statistically underpinned, and the method is classified as Statistical Process Control. The dynamic factor is the sampling frequency, which determines the interval during which the process runs "autonomously" and errors may develop unnoticed.

The methods described above have been developed over the years into a suite of useful hardware and software tools for the process industry. They have proven indispensable for the quality and sustainability of equipment performance in a maintenance-constrained environment. Statistical Process Control and the governing hardware and software systems have contributed significantly over the years to improving the maintainability of production systems and to limiting off-spec production, or giveaway. Without them, maintenance attention would be distributed less effectively, tying down more staff than strictly required.

FIGURE 2. TYPICAL SAMPLE CONDITIONING AND SAMPLE HANDLING AREA

METHODOLOGY

Introduction
On-line process analyzers are often derived from laboratory-type equipment that has been made suitable for automated plant use. Over the years their design became more tailored to the typical needs of a process environment, and the multipurpose duty of the early days was narrowed down to specific tasks. Not only is the excellent repeatability often inherited from their ancestors, but also properties that make them less suitable as on-line equipment, such as sensitivity to environmental conditions, the need for regular calibration, complexity of design, the need for sample handling systems, etc.

Many of the currently available on-line analyzers are offspring of equipment developed during the second half of the 20th century. The early types were indeed maintenance intensive and suffered from reduced robustness in operation. This gave them a doubtful reputation with the process operators, whilst the maintenance departments often could not cope with the level of specialism required to keep them going. Justification for continued operation, or for implementation in new projects, has since become an important issue to resolve. For new projects a structured justification approach named "analyzer narratives" is in place; these provide the economic and technical justification for implementation.
In an attempt to curb the frequent maintenance problems, companies started implementing optical control charts in the sixties and seventies of the 20th century. These were simple paper devices with pre-printed masking positions for marking the scores. Periodically, say half-yearly, the control charts were collected plant-wide and dispatched to specialized companies like Honeywell Bull for data processing and the subsequent issuance of result reports. From then on, large-scale implementation of performance monitoring became a realistic option. It enabled zooming in on the maintenance and calibration problems of specific equipment, but also comparison between large numbers of analyzers, analyzer applications, plants and plant units. The use of control charts improved maintenance efficiency because it reduces unnecessary maintenance intervention, as will be explained later. Their modern offspring often feature automated validation facilities, allowing the system to take control over the analyzer when in authorized maintenance mode and to steer the testing sequence by controlling the sample handling system. Results are automatically uploaded and reported to stakeholders such as the maintenance and process operations departments and the plant laboratory.

FIGURE 2. NORMALISED DETECTION TIME

Today, modern Statistical Process Control systems are often part of a suite of maintenance management systems. They are known under the acronym AMADAS: Analyzer Maintenance And Data Acquisition Systems.

When not tested, any system will eventually fail. The failure characteristic may typically be of a mechanical or electronic nature, or due to calibration deficiencies such as calibration drift. Failure to test erodes the justification of continued operation, which is why a Statistical Process Control (SPC) system is a basic requirement in modern plant design. At the other end of the scale, however, it is less obvious what is to be gained from SPC, hence the following remarks.

The direct cost of non-performance is simply the product of the time that an error stands and the cost per unit of time. The indirect costs typically relate to penalties included in commercial agreements or licenses to operate. The nature of an error may vary. Natural variability is a given; when natural variability increases beyond the equipment specification it should be addressed. Systematic errors are detectable and should be addressed when significant. The same holds for non-random distribution patterns of the scores, e.g. drift. All of the aforementioned failure characteristics are easily detectable, but only if sufficient historic data is available; hence the importance of data gathering and data processing.

Van der Grinten and Lenoir described a model for the average detection time, in multiples of the sampling interval, as a function of the analyzer deviation from the reference sample value. Integration of this function over time yields a normalized error volume. The larger the error, the sooner it can be detected. Conversely, concluding on small errors requires more test data, and therefore results in longer detection times.

FIGURE 3. AVERAGE COST OF RUNNING AN ERROR BEFORE DETECTION

Although large errors have the upper hand in the day-to-day maintenance strategy, simply because they are more pronounced, more visible and simpler to detect, small errors create scope for relatively large error volumes over longer periods; a reason to remain alert.
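The relation between error size and detection time can be illustrated with a small simulation. The following is a minimal sketch, not the Van der Grinten and Lenoir model itself: it assumes a single-sample validation scheme that intervenes only on the ±3-sigma control-limit rule, and estimates the average detection time and the accrued error volume for a sustained error.

```python
# Minimal Monte Carlo sketch: average detection time of a sustained error
# under a single-sample validation scheme using only the +/-3-sigma rule.
# An illustration of the principle, not the published model.
import random

def average_detection_time(error, sigma=1.0, runs=5000):
    """Average number of sampling intervals until the error is detected."""
    total = 0
    for _ in range(runs):
        intervals = 0
        while True:
            intervals += 1
            # observed deviation from the reference sample value
            score = random.gauss(error, sigma)
            if abs(score) > 3 * sigma:  # outside the control limits
                break
        total += intervals
    return total / runs

for err in (0.5, 1.0, 2.0, 3.0):
    t = average_detection_time(err)
    # error volume accrued before detection = detection time x error size
    print(f"error {err:.1f} sigma: ~{t:.0f} intervals, volume ~{t * err:.0f} sigma")
```

In this single-rule scheme small errors dominate the accrued error volume; adding intervention rules, as discussed next, shortens the detection time at the price of a higher compounded significance level.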
Small errors are not always easily detectable and are therefore often a cause of intense scrutiny and fault finding. The error volume accrued on average before detection depends on the sampling strategy, i.e. a single or a multiple sample validation scheme, and on the significance level of detection. The significance level of detection, i.e. the probability of not detecting the error and therefore of wrong inference, commonly known as the Type II error, depends on the number of intervention rules.

FIGURE 4. AVERAGE COST OF RUNNING AN ERROR BEFORE DETECTION

For example: the compounded significance level when applying 4 intervention rules simultaneously, each of them with a significance level of 0.27%, equals 1 − (1 − 0.0027)^4, or approximately 1%. Under these conditions, an error equivalent to 0.5 times the standard deviation (the natural variability) is likely to be detected within 50 testing periods on average in a single sample validation scheme, which causes an error volume of 25 times the standard deviation. Note that in all cases an error of about half the standard deviation yields the maximum error volume.

INTERVENTION CONTROL
The following describes a simple algorithm for developing a set of intervention rules. When testing an analyzer, the instrument reading is compared to the sample reference value; hence, on average the deviation between the values is zero if the process is under control. Intervention is justified when a single validation result, or a series of consecutive validation results on one side of the aim line, has a probability below a predefined significance limit, traditionally set at 0.0027, i.e. the value that coincides with an eccentricity of 3 standard deviations if the results are normally distributed. This limit value is defined as the Control Limit; the process is assumed to be out of control under such a condition. The traditional interval between the Upper and Lower Control Limits is therefore ±3 standard deviations. The interval between the Upper and Lower Warning Limits is traditionally defined as the 95% confidence interval and is therefore ±2 standard deviations wide.

The purpose of maintenance intervention is to avoid lasting errors (deviation from target, or systematic error). A single validation result that exceeds 3 standard deviations justifies intervention; the probability of such an event is less than 0.5 × 0.0027 = 0.00135 (the factor 0.5 is the probability that a result plots above or below the aim line). Two consecutive results just on the same Warning Limit (single sided) yield a probability of 0.5 × 0.0455 × 0.0455 < 0.00135. Generalizing, intervention is justified if:

0.5 × P1 × P2 × … × Pn ≤ 0.5 × 0.0027

where n is the number of consecutive results on one side of the aim line. The number n can be developed in relation to the eccentricity of the results, thus leading to a simple set of rules defining when a process is assumed to be out of control. The most common intervention rules, as implemented in the sketch below, are:
1. A single point plots outside the Control Limits;
2. Two out of three consecutive points plot outside the Warning Limit on the same side of the centerline;
3. Four out of five consecutive points plot outside a value equal to half the Warning Limit on the same side of the centerline;
4. Eight consecutive results plot on one side of the centerline.
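A minimal sketch of these four rules, applied to a series of validation deviations (analyzer reading minus sample reference value), could look as follows; the function name and structure are illustrative.

```python
# Sketch of the four intervention rules above; scores are deviations from
# the aim line, most recent last, and sigma is the natural variability.

def first_rule_violated(scores, sigma):
    """Return the number of the first intervention rule that fires, else None."""
    # Rule 1: a single point outside the control limits (+/-3 sigma)
    if abs(scores[-1]) > 3 * sigma:
        return 1
    for sign in (1, -1):  # check each side of the centerline
        # Rule 2: two out of three consecutive points beyond the
        # warning limit (2 sigma) on the same side
        last3 = scores[-3:]
        if len(last3) == 3 and sum(sign * x > 2 * sigma for x in last3) >= 2:
            return 2
        # Rule 3: four out of five consecutive points beyond half the
        # warning limit (1 sigma) on the same side
        last5 = scores[-5:]
        if len(last5) == 5 and sum(sign * x > 1 * sigma for x in last5) >= 4:
            return 3
        # Rule 4: eight consecutive points on one side of the centerline
        last8 = scores[-8:]
        if len(last8) == 8 and all(sign * x > 0 for x in last8):
            return 4
    return None

# Compounded significance of applying the four rules simultaneously:
print(1 - (1 - 0.0027) ** 4)  # ~0.0108, i.e. roughly 1%
```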
Note: the above algorithm can also be used to predict the tolerance of the next validation score, thus allowing the maintainer to be instructed what to do if the score does not comply with expectations.

It can also be argued on statistical grounds that a process is out of control when:
5. Eight consecutive points increase or decrease in magnitude.
The latter is the case when the distribution pattern exhibits drift or a step change.

The most common testing and validation schemes apply to single validation results. However, during one validation session more tests can be run, e.g. 4 consecutive tests, from which the sample mean and the sample standard deviation can be derived. Both values relate to the mean and standard deviation of the mother distribution in the following way (see the sketch further below):
• The ratio between the standard deviation of the mean and the standard deviation of the target population is a factor 1/√n. Hence, the Warning and Control Limits shrink by the same factor when testing an average of a number of scores.
• The ratio between the sample standard deviation and the standard deviation of the target population follows the chi-squared distribution: √(χ² / (n−1)). χ² (chi-squared) follows from the stated significance level and the degrees of freedom (n−1).
• The Control and Warning Limits of the sample standard deviation are drawn accordingly. Usually only a single Upper Control and Warning Limit is applied, with significance levels of 0.0027 and 0.05 respectively, single tailed.

FIGURE 5. PROCESS UNDER CONTROL WITH A VARYING NUMBER OF MEASUREMENTS PER SESSION

The above scheme yields two types of control charts for a single application, both updated with the results of a validation session:
• a control chart for the sample mean;
• a control chart for the sample standard deviation.
The depicted control chart is sampled from a target population with a standard deviation of 4 (arbitrary units). The run ends at Sample #5 due to an excursion of the sample mean and sample standard deviation, and a new run starts after intervention. The second example depicts a process under control, combined with a variable number of measurements per session. Beyond the above types, other charts such as CuSum and trend charts are hardly used for analyzer maintenance.

The cost of validation
The occurrence of errors is uncertain and subject to performance evaluation. Their distribution usually depends on equipment age; during the useful lifetime of equipment the failure distribution is ideally expected to be exponential, due to a constant failure rate. Increasing both the validation frequency and the number of samples per validation session offers significant scope to reduce the potential volume of long-standing errors. Both can be achieved by an automated validation process. Although expensive in CAPEX (capital expenditure), this should be balanced against the cost of manual calibration and the reduction of what could otherwise possibly be giveaway.

Analyzer validation is only part of the maintenance duty and usually not a full-time job for a maintenance engineer; most of the time is spent on keeping the analyzers and their associated support systems running and in a healthy condition. An automated validation system helps the maintenance engineer to perform his task more efficiently. Hence, an automated system with a comparably higher output in the number of validations does not necessarily mean a significant increase in maintenance load.
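Returning to the session limits derived above, here is a minimal sketch of how the warning and control limits follow for the sample mean (shrinking by 1/√n) and for the sample standard deviation (from the chi-squared distribution); it assumes scipy is available, and the numbers match the depicted chart's population.

```python
# Sketch: limits for a validation session of n tests, following the
# relations above; sigma is the natural variability of the target population.
from math import sqrt
from scipy.stats import chi2

sigma = 4.0  # arbitrary units, as in the depicted control chart
n = 4        # tests per validation session
df = n - 1   # degrees of freedom

# Limits for the sample mean shrink by a factor 1/sqrt(n)
warning_mean = 2 * sigma / sqrt(n)
control_mean = 3 * sigma / sqrt(n)

# Single-tailed upper limits for the sample standard deviation:
# s / sigma = sqrt(chi-squared / (n - 1)) at the stated significance level
upper_control_sd = sigma * sqrt(chi2.ppf(1 - 0.0027, df) / df)
upper_warning_sd = sigma * sqrt(chi2.ppf(1 - 0.05, df) / df)

print(f"sample mean: warning +/-{warning_mean:.2f}, control +/-{control_mean:.2f}")
print(f"sample sd:   upper warning {upper_warning_sd:.2f}, "
      f"upper control {upper_control_sd:.2f}")
```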
Validation directly results in unavailability of the subjected equipment. The frequency of validation and the number of samples during a single validation run affect equipment unavailability. Because the equipment is presumed to be at least essential to the process, operations must exercise control over the validation proceedings. Normally, maintenance is placed under the plant PTWS (Permit To Work System). In the case of automated validation schemes, operations should at a minimum be given the means to release the scheduled validation and to intervene when opportune, e.g. through the DCS (distributed control system). Ideally, validation should be scheduled during a "sweet spot", i.e. at a time when the equipment is not required (for example when lining up a batch process, and after the batch has been produced). Beyond that, one should realize that system outage due to maintenance is always a burden to operations because it requires operational attention or even interruption of the process. Redundancy to resolve unavailability should be avoided because it adds significantly to CAPEX and to system complexity whilst doubling the maintenance effort. A redundant setup is used in some custody transfer systems where a Pay-and-Check configuration is a strict commercial requirement.

INTERPRETATION OF PERFORMANCE AND PERFORMANCE INDICATORS

Analytical performance and statistical inference
From the test and validation results gathered over time, valuable performance parameters can be derived. Intervention rules call for immediate action when due; performance indicators develop over a somewhat longer review period, with performance typically reported half-yearly or annually. When kept under control by the set of intervention rules, equipment performance should not come as a surprise; e.g. the relative number of scores within the warning limits during the period of review is expected to be 95% on average, as will be explained next. The most prominent performance parameters follow from simple statistical inference:

Test on non-randomness
Patterns in the distribution of results will affect the conclusions. A steady, slowly increasing deviation over time will cause overestimation of the measure of spread and will therefore signal that the warning and control limits are set too tight. Readjusting the warning and control limits accordingly would reward poor performance; not correcting the drift behavior will therefore cause loss of control. Similarly, erratic behavior will seemingly indicate that the warning and control limits are set too widely, whilst the appropriate action is the opposite of tightening them. Not addressing erratic behavior will cause too-tight control and unnecessary maintenance interventions. Abnormalities will be signaled and alternative warning and control limits will be advised. The test evaluates the MSSD (Mean Square Successive Difference) of a series: the ratio of the MSSD and the variance of the series should not deviate significantly from the value 2 at the stated significance level.

Test on systematic error
Systematic errors should be addressed. The causes are manifold; they are typically introduced during calibration. Systematic errors may lead to giveaway. The test simply follows Student's t-test.

Test on variance
The warning and control limits are based on the natural variability of the process to be monitored. Increasing variance is typically due to, for example, fouling.
Control over the process will be lost if the limits are set too widely, whilst this will also cause reporting of a too optimistic performance: excursions beyond the intervention limits will score unnaturally low. Too-tight limits will cause strained control and undue maintenance intervention, and will cause the performance of the device to be underestimated. Deviation of the variability from the target value can be unveiled with the chi-squared test.

Test on reproducibility rate
The relative number of scores within the warning limits is on average 95%, simply because the confidence interval coincides by definition with the upper and lower significance limits of an assumed normally distributed data set. Any score significantly lower than 95% should be addressed and investigated. The number of scores within the warning limits is tested against the binomial distribution with a success rate of 95%.

Test on bi-directional adjustment
Most equipment behaves bi-directionally, i.e. adjustment is randomly "clockwise or anticlockwise". Symptoms like aging components may cause different behavior; bias in adjustment may be addressed for that reason. The number of successive single-sided adjustments is tested against the binomial distribution with a success rate of 50%. (A sketch of these tests is given further below.)

Availability Performance and Maintainability
Maintenance management systems often keep track of the various rates of equipment availability, e.g. due to:
• process unit or plant shutdown;
• equipment not required by operations;
• equipment under maintenance;
• equipment breakdown.
AMADAS-like systems can in general be configured to do the same, with the added benefit that they may be programmed to detect the fault status, thus accounting for outages automatically. Modern analyzers often feature diagnostics that indicate the operational state of the equipment, such as in error, in calibration, out of range, etc.

FIGURE 6. SAMPLE CONDITIONING SYSTEM FACILITATING A 26-PORT STREAM SELECTOR

Cost Performance
An overall performance picture is not complete without a spending profile. Resources are normally recorded in the plant-wide maintenance management system; this normally concerns the variable costs of manpower and materials, including calibration materials. AMADAS-like systems are usually not configured to perform such tasks.

Laboratory versus on-line
In the Oil & Gas and Petrochemical industry the choice between laboratory and continuous on-line solutions is mostly driven by economics, and only seldom the outcome of technical considerations (e.g. a method that cannot be performed on-line). Laboratory analysis is in essence a manual version of what an on-line analyzer performs in the field, and vice versa. Often the misplaced idea holds that a laboratory is more reliable because of the controlled environment and/or better equipment, which is not the whole truth: when placed under statistical control as meant in this paper, the statistical relevance of the test results is, or can be, identical in both cases. When strict control is required, such as in the case of a batch delivery of gasoil from tankage, the options are either to sample manually and test at agreed intervals during the delivery, or to monitor the process continuously by means of on-line analyzers. In general, AMADAS-like systems are capable of lining up the analyzer for duty, performing a pre-delivery analyzer validation, continuing frequent and (close to) continuous on-line analyses, and concluding with a post-delivery validation.
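A minimal sketch of the period-review performance tests described above (non-randomness via the MSSD ratio, systematic error via Student's t-test, variance via chi-squared, and the reproducibility rate via the binomial distribution) could look as follows; numpy and scipy are assumed, and the data and function name are illustrative.

```python
# Sketch of the period-review performance tests, applied to a series of
# validation deviations; sigma is the target (natural) variability.
import numpy as np
from scipy import stats

def review_period(deviations, sigma):
    x = np.asarray(deviations, dtype=float)
    n = len(x)

    # Non-randomness: MSSD / variance should not deviate significantly from 2
    mssd = np.mean(np.diff(x) ** 2)
    print("von Neumann ratio (expect ~2):", mssd / np.var(x, ddof=1))

    # Systematic error: Student's t-test against a zero mean deviation
    print("systematic error p-value:", stats.ttest_1samp(x, 0.0).pvalue)

    # Variance: chi-squared test of the sample variance against sigma^2
    q = (n - 1) * np.var(x, ddof=1) / sigma ** 2
    p_var = 2 * min(stats.chi2.cdf(q, n - 1), stats.chi2.sf(q, n - 1))
    print("variance p-value:", p_var)

    # Reproducibility rate: scores within the warning limits (+/-2 sigma)
    # tested against a binomial distribution with a 95% success rate
    within = int(np.sum(np.abs(x) < 2 * sigma))
    p_rep = stats.binomtest(within, n, 0.95, alternative="less").pvalue
    print(f"within warning limits: {within}/{n}, p-value {p_rep:.3f}")

# Example review: 26 deviations drawn from the target population itself
rng = np.random.default_rng(1)
review_period(rng.normal(0.0, 4.0, size=26), sigma=4.0)
```

The bi-directional adjustment test follows the same binomial pattern, with a success rate of 50% instead of 95%.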
Hence, such an automated process has the capability to validate the delivery process at the same level of detail as the more traditional way of sampling and analysis by the laboratory. It also produces far more process measurements and is therefore more representative and precise than an infrequently sampled system. Another idea, that laboratory equipment is essentially better than on-line analyzers, is defused by realizing that on-line analyzers are specialized and tuned for their task, and therefore often hold equal and sometimes even better repeatability than their laboratory equivalents, which are often designed for multitasking (if not by default). A last argument in credit of on-line analyzer systems concerns the reduced variability compared with laboratory methodology, given the vulnerability of sample withdrawal, sample transport and subsequent sample handling in the case of laboratory analyses.

The management of laboratory data should be treated the same way as data collected from on-line analyzer systems; AMADAS-like systems offer that possibility. Integrating this way, i.e. Laboratory Information and Management Systems (LIMS) with AMADAS-like systems, holds the attractive prospect of single-point responsibility for the quality of all analytical measurements on site, including the management of traceability to external regulatory bodies.

System design aspects
On-line process analyzers are usually designed and maintained by instrumentation engineers in the discipline of Process Control and Automation. This environment has helped significantly in developing automated systems for validation and calibration and for uploading test data to higher-tier management systems. Over the years many styles have been developed and many solutions applied. Automation has been exercised either through the plant DCS (distributed control system) or through a dedicated computer system with minimal interfacing to the DCS, the latter to allow operator control over the proceedings.

AMADAS-like systems provide an integrated infrastructure of hardware and software. They require a tie-in to the Process Control and Automation Domain (PCAD) in order to perform their basic duties. Data is also uploaded to higher-tier systems through the office network; hence these systems are subject to scrutiny concerning IT security. The PCAD is highly specific: interaction with the DCS and connectivity to field equipment is the responsibility of instrument engineering, and requires in-depth knowledge of the plant systems and the as-built status in order to be qualified to implement or modify facilities. E.g. process areas are often rated as hazardous to a certain degree, which in turn constrains the equipment selection. The challenge of cyber security emerges when connecting systems to the office network.

FIGURE 7. PLANT & IT CYBER SECURITY NETWORK

AMADAS-like systems are best implemented in new projects. Implementation of a fully rigged-up AMADAS-like system in an existing plant can be costly if the data transmission infrastructure is not in place, which is often the case; a cost-effective alternative is the use of wireless connections and equipment. It is recommended to execute a study and work out a business case with a return-on-investment calculation. One may consider starting without the automation layer and using only the data management part of the system.
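As an illustration of the data upload to higher-tier systems, the following sketch posts a validation session to a web-based AMADAS server. The endpoint, payload fields and token are hypothetical; a real installation would use whatever interface (OPC, web services, ODBC) the chosen platform exposes.

```python
# Hypothetical sketch: uploading a validation session from an I/O server
# to a higher-tier AMADAS web service over the office network. All names,
# fields and the endpoint URL are invented for illustration only.
import requests

validation_session = {
    "analyzer_tag": "AT-1001",             # hypothetical analyzer tag
    "timestamp": "2016-05-01T10:30:00Z",
    "reference_value": 54.2,               # certified sample value
    "readings": [54.5, 54.1, 54.3, 54.4],  # one session of n tests
    "mode": "automatic",
}

response = requests.post(
    "https://amadas.example.com/api/validations",  # hypothetical endpoint
    json=validation_session,
    headers={"Authorization": "Bearer <token>"},   # placeholder credential
    timeout=10,
)
response.raise_for_status()  # fail loudly if the upload was not accepted
```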
There are different AMADAS network implementation scenarios possible, e.g.:
• Lite version (manual validation only). Costs are very low: only a software license. The AMADAS software runs on a web server in the cloud, directly connected to the Internet; no connections with real-time equipment or systems are available. The investment is for the software only.
• Standard version (stand-alone system). For the stand-alone AMADAS solution, one server is installed in the process control network (control building) of the client and connected in real time to equipment and systems. The functionality of the system covers automatic, semi-automatic or manual validation, as well as diagnostic information from intelligent analyzers.
• Enhanced version (enterprise server with one or more I/O servers). For the enhanced AMADAS solution, the enterprise server is installed with connections to business systems in the office domain (Level 4) or the process control access domain (Level 3). The enterprise server can connect with multiple I/O servers in the field or plants, installed in different control buildings, auxiliary rooms and analyzer houses; see the typical layout below.

FIGURE 8. TYPICAL AMADAS NETWORK INFRASTRUCTURE

Software Architecture
Looking at today's AMADAS software platforms, we see many different solutions and technologies in use, e.g.:
1. PC-based stand-alone AMADAS systems;
2. DCS-based AMADAS systems.
Some analyzer system integrators use their Distributed Control System (DCS) as a platform to implement an AMADAS solution. In most cases such an AMADAS is designed to work preferably with their own type of analyzers, and is not always suitable to work with analyzers from other manufacturers. The configuration of a DCS-based AMADAS system is in most cases a higher investment; diagnostic information is also very expensive to configure and most of the time not available.

Stand-alone PC-based systems are separated from the DCS; they are open and independent, and can interface with any type of analyzer, analyzer network, DCS or business system. Some AMADAS software system integrators use new technology like web-based software, while others use traditional Windows-based software. The advantage of web-based software is its open, standard interfaces to other control and business systems, networks, databases and equipment. It is web-browser independent, and no software installation on the client side is necessary. Using open standards such as OPC, HTML5, ODBC and web services makes integration with other systems easier; see the typical architecture below. Regarding the investment for AMADAS: the implementation time is short and the configuration and installation are easy.

FIGURE 9. TYPICAL AMADAS SOFTWARE ARCHITECTURE

The total investment of an automated system depends not only on the software, hardware and the data transmission systems, but also on the added complexity of the sample handling system. Typical cost per analyzer loop varies from €5,000 to €15,000, additional to basic cost. The investment for a cloud solution starts, for 1-10 analyzers, from €275 per month. See below for a typical complete AMADAS solution, consisting of a system cabinet including servers, software and I/O, excluding the analyzers and the sampling/conditioning system.

FIGURE 10. TYPICAL AMADAS SYSTEM CABINET

DISCUSSION AND CONCLUSIONS
In the Oil & Gas and Petrochemical industry we invest millions of dollars in process analyzers, but during operation and maintenance we do not have the right tools to manage them.
Most failures originate in engineering and design; approximately 70% of the failures during operations come from the sample conditioning system. Human errors also have a significant share in failures in day-to-day operation. We, as an industry, should be more critical when accepting analyzer designs during the EPC phase; an independent analyzer consultant should check the designs before construction and commissioning start.

By implementing an AMADAS we can improve the performance of analyzers, turning data into information and finally into knowledge. Big errors are easy to detect, but smaller errors need more test data. Making use of the diagnostic information within AMADAS, we can do predictive maintenance, which can prevent or avoid trips and shutdowns; the technician is also able to detect problems in less time than usual. Our advice: to make an AMADAS implementation successful, the company needs a champion.

ACKNOWLEDGMENTS
The authors wish to thank their current customers and previous employers: their customers for having faith in the consultants of Hint and in their advice and realization, and their previous employers for investing time and training in their staff to become professionals in their field. Thanks to Imtech for sharing the pictures of the analyzer houses and conditioning systems, and to WIB for issuing the DACA standard for cyber security.

REFERENCES
The references below formed the basis for this technical paper; the authors used the latest versions.
ASTM E178, Standard Practice for Dealing with Outlying Observations.
ASTM D3764, Standard Practice for Validation of the Performance of Process Stream Analyzer Systems.
ASTM D6299, Standard Practice for Applying Statistical Quality Assurance and Control Charting Techniques to Evaluate Analytical Measurement System Performance.
ISO 7870, Control Charts.
WIB M278-X-10, Process Control Domain - Security Requirements for Vendors, WIB document version 2.0, since transformed into IEC standard 62443-2-4.