Archive for January, 2012
Efficiency, uptime, and profits can be increased with data-driven predictive maintenance
In just the last few months, several research reports were released identifying how organizations are managing their equipment assets – “Operational Risk Management Strategies for Asset Intensive Industries” by the Aberdeen Group, the “Asset Performance Management Study” by Texas A&M University, and the “Best Practices in Asset Management and Reliability Study” conducted by Virginia Tech. It is fascinating stuff, rife with information every organization can use to begin identifying new competitive opportunities and minimizing risk, and it offers valuable benchmark data to help companies measure their progress. One of the findings that particularly sparked my interest was how participants were using their asset-related data and how that usage impacted the performance of their organizations—specifically within the area of reactive maintenance.
Downtime costs every factory at least 5% of its productive capacity, and many lose up to 20%. But an estimated 80% of industrial facilities are unable to accurately estimate their total downtime cost (TDC). Many of these facilities are underestimating their downtime by 200-300% according to downtime consultants.
The cost of not knowing your TDC compounds when you set priorities for capital investments. As your organization becomes more sophisticated in its use of financial tools, such as return on investment (ROI) and other leverage metrics, those tools become the key criteria for selecting and approving projects, and an underestimated TDC skews every one of those calculations.
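To see how an underestimated TDC distorts project selection, consider a minimal sketch in Python. All figures here are hypothetical, chosen only to illustrate the 200-300% underestimation range the consultants cite; they are not from the studies above.

```python
def roi(annual_savings, investment):
    """Simple first-year return on investment."""
    return annual_savings / investment

# Hypothetical predictive-maintenance project (illustrative numbers only):
downtime_hours_avoided = 40      # hours/year the project would recover
investment = 500_000             # project cost, $

booked_tdc = 8_000               # $/hr the plant *thinks* downtime costs
true_tdc = 24_000                # $/hr it actually costs (a 200% underestimate)

roi_booked = roi(downtime_hours_avoided * booked_tdc, investment)  # 0.64
roi_true = roi(downtime_hours_avoided * true_tdc, investment)      # 1.92

# At the booked TDC the project fails a typical ROI hurdle; at the true
# TDC it clears it easily. The ranking of capital projects flips.
```

The point is not the specific numbers but the mechanism: ROI is linear in TDC, so a 3x error in TDC produces a 3x error in the computed return.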
Increased emphasis on more environmentally friendly, efficient, and safe processes has led companies to focus optimization efforts across plants, including refining, chemical, and pulp and paper. Plant control systems, which rely on a concert of supervisory and loop-level controls to hold set points and reject disturbances, present notorious optimization challenges. Multiphase flows, entrained solids, hybrid continuous-batch operations, and other highly nonlinear behaviors contribute to this complexity. Even plants with the same process for producing the same product often have different capacities and layouts, and require separate optimizations to maximize production and minimize operational costs. (more…)
An enhanced PID controller simplifies tuning and improves loop stability and reliability for loops dominated by discontinuous measurement updates
Wireless measurements offer significant life-cycle cost savings by eliminating the installation, troubleshooting, and modification of wiring systems for new and relocated measurements. Less recognized benefits include the elimination of EMI spikes from pump and agitator variable-speed drives, the optimization of sensor location, and the demonstration of process control improvements. However, loss of transmission can leave process conditions outside the normal operating range. Long update periods and exception-reporting settings chosen to extend battery life can cause loop instability and limit cycles when a traditional PID (proportional–integral–derivative) controller is used. Analyzers offer composition measurements key to a higher level of process control but often have a less-than-ideal reliability record, sample system, cycle time, and resolution or sensitivity limit. A modification of the integral and derivative mode calculations can inherently prevent PID response problems, simplify tuning requirements, and improve loop performance for wireless measurements and sampled analyzers. (more…)
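One way such a modification can work is a sketch like the following, assuming a positive-feedback (filter) form of reset and a derivative computed over the elapsed time since the last received measurement. The class and parameter names are mine for illustration, not any vendor's algorithm: the controller acts only when a new value actually arrives, so long gaps between wireless updates do not wind up the integral action.

```python
import math

class WirelessPID:
    """Illustrative PID variant for slow or irregular measurement updates.

    Integral action is a positive-feedback filter driven by the elapsed
    time since the last *received* measurement; derivative action uses
    the same elapsed time. Between updates the output is simply held.
    """

    def __init__(self, kp, reset_time, rate_time, out=0.0):
        self.kp = kp
        self.reset_time = reset_time   # integral (reset) time, s
        self.rate_time = rate_time     # derivative (rate) time, s
        self.out = out                 # held controller output
        self.filt = out                # positive-feedback filter state
        self.last_pv = None
        self.last_time = None

    def update(self, pv, setpoint, now):
        """Call only when a new measurement `pv` arrives at time `now`."""
        if self.last_time is None:     # first measurement: initialize only
            self.last_pv, self.last_time = pv, now
            return self.out
        dt = now - self.last_time      # elapsed time since last update
        error = setpoint - pv
        # Reset: the filter moves toward the held output over the elapsed
        # time, so a long silent interval does not accumulate windup.
        self.filt += (self.out - self.filt) * (1.0 - math.exp(-dt / self.reset_time))
        # Derivative over elapsed time, not the controller execution period.
        deriv = self.kp * self.rate_time * (pv - self.last_pv) / dt
        self.out = self.kp * error + self.filt - deriv
        self.last_pv, self.last_time = pv, now
        return self.out
```

With a conventional PID, a measurement that is simply held between wireless updates looks like zero error change, yet the integral keeps ramping; the elapsed-time filter above removes that ramp, which is the behavior the enhancement described in the post is aiming for.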
Initially, when control and safety systems moved away from hardwired, relay-based designs to computerized systems, vendors and asset owners were more interested in functionality than security. Typically, especially in high-risk environments such as refineries and offshore oil installations, the systems were standalone, with a dedicated Safety Instrumented System. Advances in computer technology during the 1980s and 1990s caused a rapid shift from these proprietary systems to commodity Intel hardware running Microsoft operating systems. This was driven primarily by end users seeking to reduce costs and to standardize with the rest of the IT infrastructure. At that time, patches and updates to the base operating system came out sporadically from Microsoft, and security aspects were rarely considered.
The new millennium saw this situation change rapidly: the Code Red and Nimda malware followed quickly on from the events of 9/11, and a rapid re-evaluation of control system security was undertaken. Many companies assumed their control systems were still operated as islands of automation—completely separate from the business network. In the majority of cases this proved to be a misconception, and these “islands” were firmly attached to the IT business mainland. What had been regarded as a low security risk quickly escalated into something firmly on the radar of risk managers. (more…)
Modernization of a municipal waterworks with SCADA standardization: Past, present, and planning for the future
The need for standardization led the City of Guelph’s Water Services department in Canada to embark on a multi-year program to modernize and enhance its Supervisory Control and Data Acquisition (SCADA) system using a standards-based approach. Starting with a comprehensive SCADA Master Plan, the city developed a set of SCADA standardization documents that were incorporated into its procurement program. The result was a smooth and orderly transition from an aging feature-poor infrastructure to a standardized SCADA system designed to meet present and future needs.
Sensors vital for accurate control are often overlooked in system migrations and upgrades
The goal of every system migration is a relatively easy, low-risk, and cost-effective transition from a legacy system to a new system that immediately and dramatically improves plant performance. Many equipment makers promise easy-as-pie “plug in” migration modules that require no replacement of existing field wiring, termination assemblies, system enclosures, or power supplies and cut migration downtime from weeks and months to “a day or less.”
The reality is a bit different. Upgrading a system to advanced enterprise control computers and software without evaluating the performance of the sensors that supply these systems with data is an exercise in futility. To properly sense and communicate a process parameter, sensors must be accurate. Accuracy describes how well a sensor measures the true value of a process parameter. To deliver data at the frequency required by the plant or by industry regulations, sensors must also be reasonably fast in revealing a sudden change in that value. Accuracy and response time are, for the most part, independent of each other.
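The independence of accuracy and response time can be seen with a standard first-order lag model of sensor dynamics. The numbers below (a temperature sensor with a 12-second time constant) are hypothetical, chosen only to illustrate the point:

```python
import math

def first_order_response(initial, final, tau, t):
    """Reading of a sensor with first-order time constant `tau` (s),
    `t` seconds after a step change from `initial` to `final`."""
    return final + (initial - final) * math.exp(-t / tau)

# Hypothetical thermowell-mounted temperature sensor, tau = 12 s;
# the process steps from 100 to 110 degrees.
reading_at_tau = first_order_response(100.0, 110.0, 12.0, 12.0)

# After one time constant the sensor shows ~63.2% of the change
# (about 106.3 degrees), no matter how well it is calibrated: a
# perfectly accurate but slow sensor still reports stale values
# during a transient.
```

This is why evaluating only calibration accuracy during a migration misses half the picture: a sluggish time constant degrades control just as surely as a biased reading.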