Spread spectrum communication techniques operating in the time and frequency domains, including direct sequence, frequency hopping, and time hopping, are currently used in a large number of wireless applications. This article provides an overview of these techniques. Results of laboratory tests of a ZigBee network are presented, and the experimental results are compared with theoretical expectations. Part 2 of this paper will present an application we developed for a wireless distributed measurement sensing and actuating system for water quality assessment.
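The direct-sequence technique mentioned above can be illustrated with a minimal sketch: each data bit is XOR-ed with a fast pseudo-noise (PN) chip sequence before transmission, and the receiver recovers the bit by reapplying the same chips. The 7-chip code and function names below are illustrative assumptions, not details from the article or the ZigBee standard.

```python
# Minimal direct-sequence spread-spectrum sketch (illustrative only; the
# 7-chip PN code and function names are assumptions, not from the article).

PN = [1, 0, 1, 1, 0, 0, 1]  # example pseudo-noise chip sequence

def spread(bits, chips=PN):
    """Spread each data bit by XOR-ing it with every chip in the PN code."""
    return [b ^ c for b in bits for c in chips]

def despread(chip_stream, chips=PN):
    """XOR with the same chips, then majority-vote each group back to a bit."""
    n = len(chips)
    bits = []
    for i in range(0, len(chip_stream), n):
        group = [chip_stream[i + j] ^ chips[j] for j in range(n)]
        bits.append(1 if sum(group) > n // 2 else 0)
    return bits

tx = spread([1, 0, 1])   # 21 chips transmitted for 3 data bits
rx = despread(tx)        # recovers [1, 0, 1] over a clean channel
```

The majority vote is what gives direct sequence its robustness: with 7 chips per bit, up to 3 chip errors per bit can occur on the channel and the bit is still recovered correctly.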
Early Wireless Applications
Many innovative people have faced the challenge of developing long distance communications, with various levels of success. One of the earliest techniques was the use of fire and smoke as visual signals. A major early technical contribution to the field of telecommunication was made by Guglielmo Marconi (1874–1937), who developed a practical wireless system to transmit telegraph messages. Although rudimentary, Marconi’s system introduced wireless telegraphy for marine signaling. A ship’s crew could be warned of potential dangers such as rocky coastlines if wireless telegraphs were installed. This breakthrough led to substantial improvements in safety warning systems, with performance that was independent of weather conditions such as rain, wind, and fog.
Subsequently, the American Telephone & Telegraph (AT&T) company took the lead in moving the communication field forward after Alexander Graham Bell invented the telephone [1-2]. AT&T’s satellite communications enabled the first live television transmission across the Atlantic. In the early 1980s, mobile telephones were introduced, and the number of wireless spread spectrum applications has grown steadily ever since. Development of mobile telephone systems, in particular, has been driven by concurrent technological progress in highly integrated component devices and in the interoperability of equipment from different manufacturers.
Iterative Learning Control for Discrete Linear Systems with Zero Markov Parameters using Repetitive Process Stability Theory
Abstract—This paper considers iterative learning control for the practically relevant case of deterministic discrete linear plants where the first Markov parameter is zero. A 2D systems approach that uses a strong form of stability for linear repetitive processes is used to develop a one step control law design for both trial-to-trial error convergence and along the trial performance. The resulting design computations are completed using linear matrix inequalities, and results from applying the control law to one axis of a gantry robot are also given by way of experimental verification.
Efficiency, uptime, and profits can be increased with data-driven predictive maintenance
In just the last few months, several research reports were released examining how organizations are managing their equipment assets: “Operational Risk Management Strategies for Asset Intensive Industries” by the Aberdeen Group, the “Asset Performance Management Study” by Texas A&M University, and the “Best Practices in Asset Management and Reliability Study” conducted by Virginia Tech. They make fascinating reading, rife with information every organization can use to begin identifying new competitive opportunities and minimizing risk, and they offer valuable benchmark data to help companies measure their progress. One finding that particularly sparked my interest was how participants were using their asset-related data and how that usage affected the performance of their organizations, specifically in the area of reactive maintenance.
Downtime costs every factory at least 5% of its productive capacity, and many lose up to 20%. Yet an estimated 80% of industrial facilities are unable to accurately estimate their total downtime cost (TDC), and according to downtime consultants, many of these facilities underestimate their downtime by 200–300%.
Not knowing your TDC compounds the problem when you set priorities on capital investments. As your organization becomes more sophisticated at using financial tools, such as return on investment (ROI) and other leverage metrics, these tools become the key criteria for selecting and approving projects.
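The compounding effect is easy to see with a small worked example. All of the numbers below are assumed for illustration, not taken from the reports cited above: a 200% underestimate of TDC makes the same maintenance project look roughly three times less attractive on an ROI screen, so it may never clear the approval threshold.

```python
# Illustrative arithmetic only; every figure here is an assumption,
# not data from the cited studies.
hours_down = 120                    # annual downtime hours (assumed)
true_cost_per_hour = 12_000.0       # $/h incl. lost margin, scrap, labor
estimated_cost_per_hour = 4_000.0   # a 200% underestimate (1/3 of true)

true_tdc = hours_down * true_cost_per_hour            # $1,440,000
estimated_tdc = hours_down * estimated_cost_per_hour  # $480,000

project_cost = 250_000.0    # hypothetical predictive-maintenance project
downtime_avoided = 0.5      # fraction of downtime the project eliminates

roi_true = downtime_avoided * true_tdc / project_cost       # 2.88
roi_estimated = downtime_avoided * estimated_tdc / project_cost  # 0.96
```

With the true TDC the project returns nearly three times its cost in the first year; with the underestimated TDC it barely breaks even on paper and loses the capital-allocation contest.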
Increased emphasis on more environmentally friendly, efficient, and safe processes has led companies to focus optimization efforts across plants, including refining, chemical, and pulp and paper. Plant control systems, which rely on a concert of supervisory and loop-level controls to hold set points and reject disturbances, present notorious optimization challenges. Multiphase flows, entrained solids, hybrid continuous-batch operations, and other highly nonlinear behaviors contribute to this complexity. Even plants with the same process for producing the same product often have different capacities and layouts, and require separate optimizations to maximize production and minimize operational costs.
An enhanced PID controller simplifies tuning and improves loop stability and reliability for loops dominated by discontinuous measurement updates
Wireless measurements offer significant life-cycle cost savings by eliminating the installation, troubleshooting, and modification of wiring systems for new and relocated measurements. Some of the less recognized benefits are the elimination of EMI spikes from pump and agitator variable speed drives, the optimization of sensor location, and the demonstration of process control improvements. However, loss of transmission can result in process conditions outside the normal operating range, and the large periodic and exception reporting settings used to extend battery life can cause loop instability and limit cycles when a traditional PID (proportional–integral–derivative) controller is used. Analyzers offer composition measurements that are key to a higher level of process control but often have less-than-ideal reliability, sample systems, cycle times, and resolution or sensitivity limits. A modification of the integral and derivative mode calculations can inherently prevent these PID response problems, simplify tuning requirements, and improve loop performance for wireless measurements and sampled analyzers.
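The modification described above can be sketched as a PID that advances its integral and derivative terms only when a new measurement actually arrives, scaled by the elapsed time since the last update, and holds them otherwise. This is a hedged illustration of the general idea, not the article's or any vendor's specific algorithm; the class and parameter names are assumptions.

```python
# Sketch of a PID whose integral and derivative act only on new measurement
# updates, using the elapsed time since the last update. Illustrative only;
# names and structure are assumptions, not a specific vendor implementation.
class UpdateAwarePID:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.last_pv = None
        self.last_update_time = None

    def step(self, pv, now, new_measurement):
        """Compute controller output at time `now` for process value `pv`."""
        error = self.setpoint - pv
        derivative = 0.0
        if new_measurement and self.last_update_time is not None:
            dt = now - self.last_update_time   # time since last real update
            self.integral += self.ki * error * dt
            derivative = self.kd * (self.last_pv - pv) / dt
        # Between updates, I and D are simply held, so a slow or exception
        # reporting rate cannot wind up the integral or cause limit cycles.
        if new_measurement:
            self.last_pv = pv
            self.last_update_time = now
        return self.kp * error + self.integral + derivative

pid = UpdateAwarePID(kp=1.0, ki=0.1, kd=0.0, setpoint=10.0)
out1 = pid.step(8.0, now=0.0, new_measurement=True)    # P only: 2.0
out2 = pid.step(8.0, now=4.0, new_measurement=False)   # held: still 2.0
out3 = pid.step(9.0, now=8.0, new_measurement=True)    # P + I over dt=8: 1.8
```

Because the integral accumulates over the measured elapsed interval rather than every execution cycle, tuning becomes largely independent of the wireless reporting rate, which is the simplification the article's abstract refers to.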
When control and safety systems first moved away from hardwired, relay-based designs to computerized systems, vendors and asset owners were more interested in functionality than security. Typically, especially in high-risk environments such as refineries and offshore oil installations, the systems were standalone, with a dedicated Safety Instrumented System. Advances in computer technology during the 1980s and 1990s caused a rapid shift from these proprietary systems to commodity Intel hardware running Microsoft operating systems. This shift was driven primarily by end users seeking to reduce costs and to standardize with the rest of the IT infrastructure. At that time, patches and updates to the base operating system came out sporadically from Microsoft, and security aspects were rarely considered.
The new millennium saw this situation change rapidly: the Code Red and Nimda malware followed quickly on from the events of 9/11, and a rapid re-evaluation of control system security was undertaken. Many companies assumed their control systems were still operated as islands of automation, completely separate from the business network. In the majority of cases this proved to be a misconception, and these "islands" were found to be firmly attached to the IT business mainland. What had been considered a low security risk quickly became something firmly on the radar of risk managers.