Archive for January, 2012

Improving Decision-Making for Equipment Assets

Efficiency, uptime, and profits can be increased with data-driven predictive maintenance

In just the last few months, several research reports have been released examining how organizations manage their equipment assets: "Operational Risk Management Strategies for Asset Intensive Industries" by the Aberdeen Group, the "Asset Performance Management Study" by Texas A&M University, and the "Best Practices in Asset Management and Reliability Study" conducted by Virginia Tech. It is fascinating material, rife with information every organization can use to identify new competitive opportunities and minimize risk, and it offers valuable benchmark data to help companies measure their progress. One finding that particularly sparked my interest was how participants were using their asset-related data and how that usage affected the performance of their organizations, specifically in the area of reactive maintenance.


January 7, 2012 at 10:37 am

How much is downtime costing you?

Downtime costs every factory at least 5% of its productive capacity, and many lose up to 20%. But an estimated 80% of industrial facilities are unable to accurately estimate their total downtime cost (TDC). Many of these facilities underestimate their downtime costs by 200–300%, according to downtime consultants.
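A TDC figure is more than lost production hours; it also includes restart and scrap costs, which is one reason estimates come in so low. The sketch below shows one way to total the major components. All figures and cost categories are illustrative assumptions, not from the article; plug in your own plant numbers.

```python
# Rough total-downtime-cost (TDC) sketch. The cost components and
# dollar values here are hypothetical examples.

def total_downtime_cost(downtime_hours, lost_margin_per_hour,
                        restart_cost, scrap_cost):
    """Sum the major downtime cost components for a period."""
    lost_production = downtime_hours * lost_margin_per_hour
    return lost_production + restart_cost + scrap_cost

# Hypothetical month: 12 h down, $8,000/h contribution margin,
# $5,000 in restart costs, $3,000 in scrapped material.
tdc = total_downtime_cost(12, 8_000, 5_000, 3_000)
print(f"Estimated TDC: ${tdc:,.0f}")  # Estimated TDC: $104,000
```

Counting only the lost production hours in this example would miss $8,000 of real cost, and facilities that also undercount their downtime hours compound the error.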

Not knowing your TDC becomes a compounding problem when you set priorities for capital investments. As your organization becomes more sophisticated at using financial tools, such as return on investment (ROI) and other leverage metrics, these tools become the key criteria for selecting and approving projects.
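To see how the error compounds, consider a hypothetical downtime-reduction project evaluated twice: once with the true TDC avoided, and once with the same figure underestimated roughly threefold, as the consultants' numbers above suggest is common. All values are illustrative assumptions.

```python
# Sketch of how an underestimated TDC skews ROI-based project ranking.
# All figures are hypothetical.

def roi(annual_benefit, investment):
    """Return on investment as a fraction of the up-front cost."""
    return (annual_benefit - investment) / investment

investment = 50_000
true_tdc_avoided = 104_000       # actual annual downtime cost avoided
reported_tdc_avoided = 34_000    # same figure, underestimated ~3x

print(f"ROI with true TDC:     {roi(true_tdc_avoided, investment):.0%}")
print(f"ROI with reported TDC: {roi(reported_tdc_avoided, investment):.0%}")
```

With the true TDC the project roughly doubles its money; with the underestimated TDC it appears to lose money and would never clear an approval hurdle, so the downtime persists.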

January 7, 2012 at 10:23 am

Using Modeling, Simulation to Optimize Plant Control Systems

Increased emphasis on more environmentally friendly, efficient, and safe processes has led companies to focus optimization efforts across plants in industries including refining, chemicals, and pulp and paper. Plant control systems, which rely on a concert of supervisory and loop-level controls to hold set points and reject disturbances, present notorious optimization challenges. Multiphase flows, entrained solids, hybrid continuous-batch operations, and other highly nonlinear behaviors contribute to this complexity. Even plants with the same process for producing the same product often have different capacities and layouts, and require separate optimizations to maximize production and minimize operational costs.
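At the loop level, the kind of simulation the excerpt describes can start very small: a process model stepped to a new set point under its controller, so that tuning can be explored before touching the plant. The sketch below simulates a first-order process under PI control with simple Euler integration; the gains, process gain, and time constant are illustrative assumptions, and a real study would fit the model to process data.

```python
# Minimal loop-level simulation sketch: a first-order process under
# PI control, stepped to a new set point. All tuning and model
# parameters are illustrative, not from any particular plant.

def simulate(kp=0.8, ki=0.2, setpoint=1.0, steps=600, dt=0.1,
             gain=2.0, tau=5.0):
    pv, integral, history = 0.0, 0.0, []
    for _ in range(steps):
        error = setpoint - pv
        integral += error * dt
        out = kp * error + ki * integral        # PI controller output
        pv += dt * (gain * out - pv) / tau      # first-order process model
        history.append(pv)
    return history

trace = simulate()
print(f"final PV: {trace[-1]:.3f}")
```

Even a toy model like this lets you compare tuning candidates offline; real plant optimization adds the nonlinear behaviors the excerpt lists, which is exactly why higher-fidelity modeling and simulation tools are needed.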

January 7, 2012 at 9:56 am

Wireless – Overcoming Challenges of PID Control & Analyzer Applications

An enhanced PID controller simplifies tuning and improves loop stability and reliability for loops dominated by discontinuous measurement updates

Wireless measurements offer significant life-cycle cost savings by eliminating the installation, troubleshooting, and modification of wiring systems for new and relocated measurements. Some of the less recognized benefits are the eradication of EMI spikes from pump and agitator variable-speed drives, the optimization of sensor location, and the demonstration of process control improvements. However, loss of transmission can result in process conditions outside the normal operating range. Large periodic reporting and exception reporting settings, used to extend battery life, can cause loop instability and limit cycles when a traditional PID (proportional–integral–derivative) controller is used. Analyzers offer composition measurements key to a higher level of process control, but often have a less-than-ideal reliability record, sample system, cycle time, and resolution or sensitivity limit. A modification of the integral and derivative mode calculations can inherently prevent PID response problems, simplify tuning requirements, and improve loop performance for wireless measurements and sampled analyzers.
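In the spirit of that modification, one way to avoid reset windup and limit cycles with large reporting intervals is to recompute the integral contribution only when a fresh measurement arrives, using the actual elapsed time since the last update rather than a fixed execution period. The sketch below shows the idea for a PI controller; the class name, tuning values, and interface are illustrative assumptions, not any vendor's implementation.

```python
# Sketch of a PI controller adapted for wireless/sampled measurements:
# the integral term advances only on new measurement updates, scaled
# by the actual elapsed time, so slow or irregular reporting does not
# wind up the reset action. Names and numbers are hypothetical.

class WirelessPI:
    def __init__(self, kp, ki):
        self.kp, self.ki = kp, ki
        self.integral = 0.0
        self.last_update_time = None

    def update(self, setpoint, measurement, now):
        """Call only when a new measurement value is reported."""
        if self.last_update_time is not None:
            elapsed = now - self.last_update_time  # actual interval
            self.integral += self.ki * (setpoint - measurement) * elapsed
        self.last_update_time = now
        return self.kp * (setpoint - measurement) + self.integral

pid = WirelessPI(kp=0.5, ki=0.1)
out = pid.update(setpoint=10.0, measurement=8.0, now=0.0)
# later, after a 30 s reporting interval:
out = pid.update(setpoint=10.0, measurement=9.0, now=30.0)
```

Because the controller output is held between updates rather than integrating a stale error every execution cycle, the loop tolerates long exception-reporting intervals, and the same pattern applies to sampled analyzer measurements with long cycle times.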

January 7, 2012 at 9:49 am

Balancing Security and Safety with Risk

Initially, when control and safety systems moved from hardwired, relay-based designs to computerized systems, vendors and asset owners were more interested in functionality than security. Typically, especially in high-risk environments such as refineries and offshore oil installations, the systems were standalone, with a dedicated Safety Instrumented System. Advances in computer technology during the 1980s and 1990s caused a rapid shift from these proprietary systems to ones typically built on Intel hardware running Microsoft-based operating systems. This shift was driven primarily by end users seeking to reduce costs and standardize with the rest of their IT infrastructure. At that time, patches and updates to the base operating system came out sporadically from Microsoft, and security aspects were rarely considered.

The new millennium saw this situation change rapidly: the Code Red and Nimda malware outbreaks followed quickly on the events of 9/11, and a rapid re-evaluation of control system security was undertaken. Many companies assumed their control systems were still operated as islands of automation, completely separate from the business network. In the majority of cases this proved to be a misconception, and these "islands" were firmly attached to the IT business mainland. What had been considered a low security risk quickly escalated to something squarely on the radar of risk managers.

January 7, 2012 at 9:41 am

