Common Linear Models Used in Model Predictive Control

October 24, 2010

This article will make no apologies for giving just a summary statement about models. The art of modelling, and the subtle differences between ARX, ARMA and ARMAX models to name just a few, is not central to the theme of this article and is covered extensively elsewhere. However, the reader should be aware that the selection of the model is the most important part of an MPC design: unexpectedly poor performance of an MPC controller is often due to poor modelling assumptions. We will assume that the model is given, and hence the purpose of this article is solely to show that different model types can be used in an MPC framework. How one can modify models to achieve benefits in an MPC design will be considered in later articles.

You can use pretty much any model you like in an MPC strategy; however, if the model is nonlinear, then the implied optimisation may be nontrivial and moreover may not even converge. We will concentrate only on linear models and allow that any nonlinearity is mild and hence can be dealt with well enough by assuming model uncertainty and some gain scheduling of control laws. One article is not enough to cover the linear and the nonlinear case properly.

1.1 Modelling uncertainty

1.1.1 Integral action and disturbance models

In this article the assumption will be made that offset free tracking is required, as this is the most common case and it would be difficult to generalize under the alternative assumption that offset is allowed. Hence it is also implicit that the control law must include integral action. Workers in MPC have developed convenient mechanisms for incorporating integral action and this article will illustrate them. In the absence of uncertainty and disturbances one could easily obtain offset free control without integral action, but in practice parameter uncertainty and disturbances necessitate the use of an integrator. In MPC the starting point is different and one takes the viewpoint that:

“Disturbance rejection is best achieved by having an internal model of the disturbance.”

That is, if one models the disturbance appropriately, then the MPC control law can be set up to automatically reject the disturbance with zero offset. In fact the disturbance model most commonly used implicitly introduces an integrator into the control law and hence one also gets offset free tracking in the presence of parameter uncertainty.

So the question to be answered in the modelling stage is, how are disturbance effects best included within the model?

The precise details of how to model disturbances are process dependent and, where not straightforward, are in the realm of a modelling specialist. However, that need not bother us here as long as we remember the following two simple guidelines.

  1. In general we can control a process only as accurately as we can model it. You need only improve your model if more accuracy is required.
  2. There have been thousands of successful applications of MPC using relatively simplistic assumptions for the disturbance model.

1.1.2 Modelling measurement noise

Similar statements can be made about modelling the noise (and other uncertainty in the process). It is usual to make simplistic assumptions rather than to seek supposedly precise analytic answers. For instance, a Kalman filter will only give optimal state estimates if the model is exact and the assumptions on the covariance of the uncertainty are also correct. In practice, of course, neither of these is true and one resorts to common-sense assumptions (process knowledge) and some on-line tuning. In practice measurement noise is often assumed to be white and hence simply ignored. Coloured noise can be included in the models systematically, but the corresponding filters (implicit in this) usually have other effects and so are tuned with mixed objectives. Practical experience has shown that some form of low-pass filtering is nearly always required and/or beneficial, but a systematic design of such filters for MPC is still an open question in general.
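As a small illustration of the sort of simple, pragmatic filtering mentioned above, the sketch below applies a first-order low-pass filter to a noisy measurement sequence. It is only an illustrative fragment in Python/NumPy under my own assumptions: the filter structure, the pole alpha = 0.8 and the function name are not taken from the text.

```python
import numpy as np

def lowpass_filter(y_measured, alpha=0.8):
    """First-order low-pass filter: y_f[k] = alpha*y_f[k-1] + (1 - alpha)*y[k].

    alpha closer to 1 gives heavier smoothing (and more phase lag)."""
    y_filtered = np.zeros(len(y_measured))
    for k, yk in enumerate(y_measured):
        y_filtered[k] = yk if k == 0 else alpha * y_filtered[k - 1] + (1 - alpha) * yk
    return y_filtered

# Example: white measurement noise on a constant signal
rng = np.random.default_rng(0)
y_noisy = 1.0 + 0.1 * rng.standard_normal(200)
y_smooth = lowpass_filter(y_noisy)
```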

1.2 Typical models

This section will give a brief summary of common linear models. The favoured model type depends very much on the reader and the process to be controlled, and hence this article does not attempt to make a value judgement as to which is best. The focus is on the discrete time case, as MPC is usually implemented in discrete time. Academia, and the USA in particular, has put far more emphasis on state-space models. The advantage of these is that they extend easily to the multivariable case and there is a huge body of theoretical results which can be applied to produce controllers/observers and to analyse the models and the resulting control laws. Academics in Europe have also made extensive use of transfer function models and polynomial methods. Historically one advantage of this was the close relationship to popular black box identification techniques, although this is much less of an issue now with the development of subspace techniques for identifying black box state-space models. The disadvantage of transfer function models is that their use in the multivariable case can be somewhat cumbersome and they are nonminimal representations. The advantage is that no state observer is required, although one may argue that the need to filter measurements implies the use of an observer in practice anyway.

Traditionally (this is changing now) industrialists have favoured neither of the above models and instead have used finite impulse response (FIR) models. These are easy to understand and interpret, being, for instance, the process step response. Although in practice these models could be determined by a single step test, identifying FIR models requires far more data than identifying state-space or transfer function models; moreover there are issues of when to truncate the FIR and of unnecessarily large data storage requirements. FIR models, however, generally give lower sensitivity to measurement noise without the need for an observer (and associated design), and this can be a significant benefit for some industries. MPC can also make use of the so-called independent model (IM) or internal model. This can take the form of any model; the differences come in how it is used.

1.3 State-space models

This section gives the terminology adopted in this article for representing state-space models and typical modelling assumptions used in MPC. The assumption will be made that the reader is familiar with state-space models.

1.3.1 Nominal state-space model

Using the notation (·)(k) or (·)k to denote a value at the kth sampling instant, the state-space model is given as:

x(k+1) = A x(k) + B u(k)
y(k) = C x(k) + D u(k)

In abbreviated form the model is

x_{k+1} = A x_k + B u_k          (2.2a)
y_k = C x_k + D u_k              (2.2b)
x denotes the state vector (dimension n), y (dimension l) denotes the process outputs (or measurements) to be controlled, u (dimension m) denotes the process inputs (or controller outputs), and A, B, C, D are the matrices defining the state-space model. Ordinarily for real processes D = 0.
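As a concrete illustration, a minimal Python/NumPy sketch of simulating such a model is given below; the numerical values of A, B, C, D are purely hypothetical placeholders, not taken from the text.

```python
import numpy as np

# Hypothetical second-order example; in practice A, B, C, D come from your model.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.005],
              [0.1]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))  # D = 0 for most real processes

def simulate(A, B, C, D, u_seq, x0=None):
    """Roll the model x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k] forward."""
    x = np.zeros(A.shape[0]) if x0 is None else np.asarray(x0, dtype=float)
    ys = []
    for u in u_seq:
        u = np.atleast_1d(u)
        ys.append(C @ x + D @ u)
        x = A @ x + B @ u
    return np.array(ys)

y = simulate(A, B, C, D, u_seq=np.ones(50))  # step response of the model
```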

1.3.2 Nonsquare systems

Although MPC can cope with nonsquare systems (l ≠ m), it is more usual to do some squaring down and hence control a square system. When MPC is applied directly to a nonsquare system, the precise objectives and associated tuning are process dependent and nongeneric; hence we omit this topic. In simple terms, if m > l, several inputs can be used to achieve the same output and so the optimisation must be set up to make optimal use of the spare degrees of freedom (d.o.f.), the definition of optimum being process dependent. If l > m, there are too few d.o.f. and so we must accept offset in some output loops; in this case additional criteria are required to set up the control strategy and again these are process dependent.

1.3.3 Including a disturbance model

A good discussion of this can be found in the literature. First decide whether the disturbance is a simple perturbation to the output or affects the states directly. We will treat each in turn.

1.3.3.1 Output disturbance

A common model of the disturbance is integrated white noise. That is, disturbance dk is modelled as

d_{k+1} = d_k + v_k          (2.3)
where vk is unknown and zero mean. It is assumed throughout that dk is also unknown though of course it can be partly inferred via an observer.

Such a disturbance can be incorporated into the state-space model by replacing (2.2b) as follows:

y_k = C x_k + d_k   (taking D = 0, as is usual)
Of course the disturbance is unknown, as is the state xk, so both must be estimated. By including the disturbance in the system dynamics, the assumed process model becomes

z_{k+1} = Ã z_k + B̃ u_k,   y_k = C̃ z_k          (2.4)

where

z_k = [x_k; d_k],   Ã = [A 0; 0 I],   B̃ = [B; 0],   C̃ = [C I]
An observer can be constructed for this model, under the usual assumption of observability, to give estimates of both the state x and disturbance d.
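A sketch of how the augmentation and a standard Luenberger observer might look in code is given below (Python/NumPy). The function names are mine, and the observer gain L is assumed to have been designed offline, for example by pole placement or as a Kalman gain; none of this is prescribed by the text.

```python
import numpy as np

def augment_output_disturbance(A, B, C):
    """Augmented model with state z = [x; d] and random-walk disturbance d[k+1] = d[k]."""
    n, m, l = A.shape[0], B.shape[1], C.shape[0]
    A_aug = np.block([[A,                np.zeros((n, l))],
                      [np.zeros((l, n)), np.eye(l)       ]])
    B_aug = np.vstack([B, np.zeros((l, m))])
    C_aug = np.hstack([C, np.eye(l)])
    return A_aug, B_aug, C_aug

def observer_update(z_hat, u, y_meas, A_aug, B_aug, C_aug, L):
    """One step of a standard Luenberger observer on the augmented model.

    L is an observer gain designed offline (pole placement, Kalman design, ...)."""
    y_hat = C_aug @ z_hat
    return A_aug @ z_hat + B_aug @ np.atleast_1d(u) + L @ (np.atleast_1d(y_meas) - y_hat)
```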

1.3.3.2 State disturbance

In this case one still uses assumption (2.3), but the disturbance is included in the state update equation; that is, replace (2.2) by

x_{k+1} = A x_k + B u_k + F d_k,   y_k = C x_k

where F maps the disturbance onto the states.
Again, the overall process model should be augmented to include the disturbance dynamic as follows:

z_{k+1} = Ã z_k + B̃ u_k,   y_k = C̃ z_k          (2.7)

where

z_k = [x_k; d_k],   Ã = [A F; 0 I],   B̃ = [B; 0],   C̃ = [C 0]
1.3.4 Systematic inclusion of integral action with state-space models

Typical state feedback does not incorporate integral action. This section shows one method by which this limitation can be overcome.

1.3.4.1 Offset with typical state feedback

Let us assume hereafter that for all real processes there is no instantaneous feedthrough from the input to the output, that is D = 0. Then, even with a zero setpoint, the typical stabilising state feedback of the form

u_k = −K x_k          (2.10)
will not give offset free control in the presence of nonzero disturbances. This is self-evident from substitution of (2.10) into, for example, (2.2, 2.4), which implies


1.3.4.2 A form of state feedback giving no offset

Consider now an alternative form of state feedback

u_k − u_ss = −K (x_k − x_ss)          (2.12)
where xss, uss are estimates of the steady-state values of the state and input giving offset free tracking. Under the assumption that xss, uss are consistent, then for fixed dk such a control law will necessarily drive

x_k → x_ss,   u_k → u_ss
 

Again, this is self-evident by substitution of (2.12) into (2.2, 2.4) or (2.7). Clearly offset free tracking follows automatically if

C x_ss + d = r   (equivalently y_ss = r)

where r is the set point.

1.3.4.3 Estimating steady-state values of the state and input

In order to implement control law (2.12), we need a means of estimating mutually consistent values for xss, uss. First assume observability, so that the state estimates converge and are mutually consistent by way of the model (even if, due to model uncertainty, they do not match the true process exactly). Define the desired output as the set point r.

  1. Assume (see 1.4) that the current estimate of d is the best estimate of its value in the future (i.e. E[vk] = 0).
  2. Given that d, r are known, one can estimate the required steady-state values (xss,uss) of x,u to get the correct output from the relevant consistency conditions (equations (2.2, 2.4) or (2.7) respectively, for output and state disturbances).

  3. These are simple simultaneous equations and give a solution of the form

     [x_ss; u_ss] = M [r; d]

     where matrix M depends solely on the matrices A, B, C, F representing the model. A sketch of this calculation is given below.
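The sketch below shows one way such a calculation could be coded for the output-disturbance case, assuming Python/NumPy and a square system (as many inputs as outputs); the function name and the exact arrangement of the equations are illustrative, not taken from the text. For the state-disturbance case the zero block on the right-hand side becomes F d and the d term drops out of the output condition.

```python
import numpy as np

def steady_state_targets(A, B, C, r, d):
    """Solve the consistency conditions for (x_ss, u_ss), output-disturbance case:

        x_ss = A x_ss + B u_ss
        r    = C x_ss + d

    i.e.  [[I - A, -B], [C, 0]] [x_ss; u_ss] = [0; r - d].
    Assumes a square system so that the stacked matrix is invertible."""
    n, m, l = A.shape[0], B.shape[1], C.shape[0]
    M = np.block([[np.eye(n) - A, -B],
                  [C,             np.zeros((l, m))]])
    rhs = np.concatenate([np.zeros(n), np.atleast_1d(r) - np.atleast_1d(d)])
    sol = np.linalg.solve(M, rhs)
    return sol[:n], sol[n:]  # x_ss, u_ss
```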

 

1.4 Transfer function models (single-input/single-output)

A popular model is the so-called controlled auto-regressive integrated moving average (CARIMA) model. This can capture most of the variability in transfer function models by a suitable selection of parameters. It is given as

a(z) y_k = b(z) u_k + (T(z)/Δ) v_k,   Δ(z) = 1 − z^{−1}          (2.16)
where vk is an unknown zero mean random variable which can represent disturbance effects and measurement noise simultaneously. Although there exist modelling techniques to compute best fit values for the parameters of a(z), b(z) and T(z), in MPC it is commonplace to treat T(z) as a design parameter; this is because it has direct effects on loop sensitivity, and so one may get better closed-loop performance with a T(z) which is notionally not the best fit. It is common to write transfer function models in the equivalent difference equation form. For instance, with T = 1, (2.16) is given as

y_k = −a_1 y_{k−1} − ... − a_n y_{k−n} + b_1 u_{k−1} + ... + b_n u_{k−n} + d_k          (2.17)
where

a(z) = 1 + a_1 z^{−1} + ... + a_n z^{−n},   b(z) = b_1 z^{−1} + ... + b_n z^{−n}
and dk is the unknown disturbance term derived from

d_k = v_k/Δ,   that is   d_k = d_{k−1} + v_k
1.4.1 Disturbance modelling

Recall the earlier statement that "disturbance rejection is best achieved by having an internal model of the disturbance". Then notice that the choice of T(z) = 1 gives an equivalence to (2.3):

Δ d_k = v_k,   i.e.   d_k = d_{k−1} + v_k

Hence it is clear that the term (T(z)/Δ) vk deployed in (2.16) is a disturbance model and is similar to that implied in (2.4, 2.7). The term d = v/Δ represents an integrated white noise term, or a random walk; this is a well accepted model for disturbances, as it allows a nonzero mean with random changes. The choice of T determines equivalence to either (2.4) or (2.7) or other possibilities.
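As a small numerical illustration of this random-walk disturbance, the fragment below simulates a hypothetical first-order model with T = 1 (the coefficients are placeholders, not from the text), assuming Python/NumPy. The output drifts with the integrated noise d, which is exactly why an internal model of d, equivalently integral action, is needed for offset-free control.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical first-order model: (1 - 0.8 z^-1) y_k = 0.4 z^-1 u_k + d_k, with T(z) = 1
a1, b1 = -0.8, 0.4
N = 300
u = np.ones(N)                     # constant input
v = 0.02 * rng.standard_normal(N)  # zero-mean white noise v_k
d = np.cumsum(v)                   # d_k = d_{k-1} + v_k : integrated white noise
y = np.zeros(N)
for k in range(1, N):
    y[k] = -a1 * y[k - 1] + b1 * u[k - 1] + d[k]
# y wanders with the random-walk disturbance d rather than settling at a fixed value.
```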

1.4.2 Consistent steady-state estimates with CARIMA models

As seen in Section 1.3, the key to achieving integral action in an MPC control law is a consistent and correct assessment of the expected steady-state value of the input (the state is not used for transfer function models) such that one gets offset free tracking. Clearly the desired output is the set point, but the corresponding value of the input depends upon the unknown disturbance. Obviously, assuming that d_k = d_{k−1} and v_k = 0, dk can be inferred by writing (2.17) at the current and previous sampling instants and solving the two implied simultaneous equations.

However, in MPC it is usual to use a different method. Write an incremental version of (2.16); that is, relate the outputs to control increments Δu_k = u_k − u_{k−1}. This is equivalent to either: (i) multiplying (2.16) by Δ or (ii) subtracting (2.17) at k−1 from (2.17) at k. Hence

a(z) Δy_k = b(z) Δu_k + T(z) v_k          (2.19)
Clearly this operation has eliminated the nonzero mean unknown variable d_k, and the only remaining unknown, vk, is zero mean, can be assumed zero in the future, and hence does not affect the predictions.

A second and equally useful benefit of using (2.19) instead of (2.17) is that the input is now written in terms of increments, and clearly in steady state the increments will be zero. That is

u_k = u_{k−1}   ⇒   Δu_k = 0   (in steady state)

Hence the consistent estimates required to give offset free tracking in the steady state are y = r, Δu = 0.
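The fragment below sketches how the incremental model can be used for a one-step-ahead prediction without ever knowing d_k; the function name and argument conventions are my own (Python/NumPy), not from the text.

```python
import numpy as np

def one_step_prediction(y_past, u_past, a, b):
    """One-step-ahead prediction from the incremental (T = 1) model a(z) Δy_k = b(z) Δu_k.

    y_past, u_past: past values, oldest to newest (at least len(a)+1 and len(b)+1 long).
    a = [a_1, ..., a_n], b = [b_1, ..., b_n] with a(z) = 1 + a_1 z^-1 + ..., b(z) = b_1 z^-1 + ...
    The unknown d_k never appears: differencing removed it, and the zero-mean v_k is taken as zero."""
    dy = np.diff(y_past)   # ..., Δy_{k-2}, Δy_{k-1}
    du = np.diff(u_past)   # ..., Δu_{k-2}, Δu_{k-1}
    na, nb = len(a), len(b)
    dy_next = -np.dot(a, dy[::-1][:na]) + np.dot(b, du[::-1][:nb])
    return y_past[-1] + dy_next

# Example with the first-order model used earlier: a = [-0.8], b = [0.4]
y_hat = one_step_prediction([0.0, 0.1, 0.2], [1.0, 1.0, 1.0], a=[-0.8], b=[0.4])
```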

1.4.3 Achieving integral action with CARIMA models

The details of this will be more transparent after later sections. One needs to assume the form of the control law (much as we assumed (2.12)) before we can establish how to ensure integral action. It is known that MPC control laws based on transfer function models depend upon the predictions and hence must take the form:

D_k(z) Δu_k = P_k(z) r_k − N_k(z) y_k
It is clear from the presence of the D_k(z)Δ term that there is an integrator in the forward path and hence disturbances will be rejected. Furthermore, in order to get no tracking offset in the steady state, one must check consistency of the following steady-state conditions:

Δu_k = 0,   y_k = y_ss,   r_k = r   ⇒   0 = P_k(1) r − N_k(1) y_ss

Clearly this implies

P_k(1) = N_k(1)   (so that y_ss = r)
1.4.4 Selection of T(z) for MPC

Although T is a design polynomial used to model the disturbance signal and hence improve disturbance rejection, it can also be used to enhance the robustness of the closed loop. However, the design of T is not systematic in general and only a few basic guidelines are given:

  1. Let T = â T̂, where â contains the dominant system pole(s) and T̂ contains further poles near one; such a choice works fairly well for a system sampled at typical rates (see the sketch after this list).

  2. Design 1/T to be a low-pass filter which removes high frequency noise but not the dominant frequencies in the model.
  3. Use some trial and error. If T = 1 works well, then you may get little benefit from more complicated T.
  4. The choice of T = a has equivalence to the use of an FIR model.
  5. Systematic designs for T are nonlinear and therefore not simple.
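As a small illustration of guideline 1, the fragment below forms the coefficients of T(z) = â(z)T̂(z) by polynomial multiplication; the pole values 0.7 and 0.9 are hypothetical placeholders (Python/NumPy), not a recommendation from the text.

```python
import numpy as np

a_hat = np.array([1.0, -0.7])  # (1 - 0.7 z^-1): dominant pole of the model (assumed)
T_hat = np.array([1.0, -0.9])  # (1 - 0.9 z^-1): an extra pole near one
T = np.convolve(a_hat, T_hat)  # coefficients of T(z) = a_hat(z) * T_hat(z) -> [1, -1.6, 0.63]
# 1/T(z) then acts as a low-pass filter on the signals used to form the predictions.
```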

1.5 FIR models

Historically these were the most common model form encountered in industrial MPC packages although that is beginning to change.

1.5.1 Impulse response models

Take the model with inputs, outputs and disturbance u, y, d respectively,

y_k = G(z) u_k + d_k

Then the process G(z) (for the stable case, which predominates in practice) can be represented by a Taylor series in the delay operator, that is

G(z) = G_1 z^{−1} + G_2 z^{−2} + G_3 z^{−3} + ...          (2.24)
Equivalently, this sequence can be viewed as the impulse response: the expected values of the output y in the event that u comprises a single unit impulse. As with transfer function models, the parameters Gi can be identified using standard methods.
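One simple way such coefficients could be obtained from a known transfer function is to simulate its response to a unit impulse and truncate after N terms; the sketch below does this in Python/NumPy for a hypothetical first-order example (the coefficients and truncation length are placeholders).

```python
import numpy as np

def impulse_response_coeffs(a, b, N):
    """First N impulse-response coefficients G_1..G_N of G(z) = b(z)/a(z).

    a = [1, a_1, ..., a_na], b = [b_1, ..., b_nb] (b carries an implicit one-step delay).
    Obtained by simulating the difference equation with a unit impulse input."""
    na, nb = len(a) - 1, len(b)
    y = np.zeros(N + 1)
    u = np.zeros(N + 1)
    u[0] = 1.0  # unit impulse at k = 0
    for k in range(1, N + 1):
        y[k] = (-sum(a[i] * y[k - i] for i in range(1, min(na, k) + 1))
                + sum(b[i - 1] * u[k - i] for i in range(1, min(nb, k) + 1)))
    return y[1:]  # G_1, ..., G_N

G = impulse_response_coeffs(a=[1, -0.8], b=[0.4], N=40)  # hypothetical example
```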


1.5.2 Step response models

A step response model is a sequence of values representing the step response of the process and can be written as

H(z) = H_1 z^{−1} + H_2 z^{−2} + H_3 z^{−3} + ...

where clearly H(z) = G(z)/Δ. The corresponding input/output equation can be derived from (2.24) and takes the form

y_k = H_1 Δu_{k−1} + H_2 Δu_{k−2} + ... + d_k          (2.27)

The logic for getting offset free prediction follows the same lines as that given in the previous section; that is, subtract (2.27) at the previous sample from the current one to eliminate the unknown d_k.
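A short illustrative fragment (Python/NumPy, hypothetical numbers) showing the relationship between the two coefficient sets and the differenced, offset-free form:

```python
import numpy as np

# Hypothetical impulse-response coefficients G_1..G_N, e.g. of 0.4 z^-1 / (1 - 0.8 z^-1).
G = 0.4 * 0.8 ** np.arange(40)

# H(z) = G(z)/Δ(z) means the step-response coefficients are running sums H_i = G_1 + ... + G_i.
H = np.cumsum(G)

# Differencing y_k = Σ_i H_i Δu_{k-i} + d_k at k and k-1 removes the unknown d_k and
# leaves y_k - y_{k-1} = Σ_i G_i Δu_{k-i}, which is the offset-free prediction form.
```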


1.6 Independent models

Independent models (IM) are not a different form of model; however, it is important to include a short discussion in this chapter because there is a key difference in the philosophy of how the model is used. The difference in usage is particularly relevant in the context of MPC, as it can give substantial changes in the closed loop sensitivity, although not in nominal performance. Hence it is an alternative that should be considered. It has connections with Smith predictor ideas and internal model control (IMC).

  • The IM approach is not restricted to any given model type. Use whichever is easiest.
  • Its use is equivalent to an FIR model without truncation errors.
  • Even if the IM is a state-space model, an observer is not required!

This topic will be considered more carefully in the next chapter on prediction, as MPC uses the model solely to form predictions. For now it is sufficient to note that the IM is a process model which is simulated in parallel with the process using the same inputs; see Figure 1.1. If the process has output y, then the IM has output ŷ, which in general will be different, but similar.


Figure 1.1 Independent Model
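A minimal sketch of this parallel simulation is given below (Python/NumPy); the class name and the choice of a state-space IM are my own illustrative assumptions.

```python
import numpy as np

class IndependentModel:
    """A process model run in parallel with the plant, driven by the same inputs.

    Here the IM happens to be a state-space model (A, B, C); no observer is used,
    the model state is simply propagated open loop from the applied inputs."""
    def __init__(self, A, B, C):
        self.A, self.B, self.C = A, B, C
        self.x = np.zeros(A.shape[0])

    def step(self, u):
        y_hat = self.C @ self.x
        self.x = self.A @ self.x + self.B @ np.atleast_1d(u)
        return y_hat

# At each sample: apply u to the real plant, read y, and form the offset
# correction e_k = y_k - y_hat_k used when building the predictions.
```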

1.7 Matrix-fraction descriptions

Although state-space models are usually favoured for the multi-input/multi-output (MIMO) case, an observer is still required, which is not the case with transfer function models (although some may argue any filtering is equivalent). A MIMO transfer function model can be represented as a matrix fraction description (MFD), e.g.

D(z) y_k = N(z) u_k + (T(z)/Δ) v_k

where N(z), D(z) are matrix polynomials in the delay operator. In difference equation form (with T = 1) this gives

y_k = −D_1 y_{k−1} − ... − D_p y_{k−p} + N_1 u_{k−1} + ... + N_q u_{k−q} + d_k

with D(z) = I + D_1 z^{−1} + ... + D_p z^{−p} and N(z) = N_1 z^{−1} + ... + N_q z^{−q}.
Hereafter in the context of MPC, these models can be used in the same way as SISO CARIMA models, so long as one remembers that matrix multiplication is not commutative in general. FIR models are particular forms of MFD with D(z) = I.
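A small fragment illustrating how the MFD difference equation might be stepped forward in the MIMO case, with the matrix ordering preserved; the function name and conventions are mine (Python/NumPy).

```python
import numpy as np

def mfd_step(y_hist, u_hist, D_coeffs, N_coeffs):
    """One step of the MFD difference equation (T = 1, disturbance omitted):

        y_k = -D_1 y_{k-1} - ... - D_p y_{k-p} + N_1 u_{k-1} + ... + N_q u_{k-q}

    D_coeffs = [D_1, ..., D_p] and N_coeffs = [N_1, ..., N_q] are matrices; y_hist and
    u_hist hold past output/input vectors, most recent first. Matrix order matters."""
    y = np.zeros(D_coeffs[0].shape[0])
    for i, Di in enumerate(D_coeffs, start=1):
        y -= Di @ y_hist[i - 1]
    for i, Ni in enumerate(N_coeffs, start=1):
        y += Ni @ u_hist[i - 1]
    return y
```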


1.8 Modelling the dead times in a process

In work that concentrates solely on modelling as an end in itself, a knowledge of, or ability to infer, the system dead time is important. It is well known from simple Nyquist stability analysis that dead times make systems harder to control, and moreover a mismatch between the assumed and actual dead time can make large differences to control quality. A classical solution for controlling systems with large dead times is to deploy a Smith predictor, which simulates the expected value of the process in the future and controls an offset-corrected version of this rather than the plant. This gives stability margins equivalent to those of the process without a dead time, but the approach is of course sensitive to errors in the dead time.

MPC uses a similar strategy in that by predicting the behaviour at future points it can deal with process dead time systematically. However, there is a key difference from the Smith predictor which improves robustness of MPC and implies that an exact estimate of the dead time is not as critical. The control law calculation is based on a whole trajectory, not just a single point, hence the emphasis is placed more on where the responses settle rather than transients. The implication for modelling is that in the identification stage one can afford to underestimate the dead time and let the identification algorithm insert small values against coefficients that perhaps should be zero (as long as one avoids near cancellation of unstable pole/zero pairs). This simplifies modelling and has a negligible effect on control design.

For instance, if the true plant contains a dead time somewhat larger than that assumed in the model, then using the model with the smaller assumed dead time, and letting the identification place small coefficients where zeros should strictly appear, would often be fine in practice, even where the mismatch is not small.

 

References

[1] M. Cannon, B. Kouvaritakis and J.A. Rossiter, Efficient active set optimization in triple mode MPC, IEEE Transactions on Automatic Control, 46(8), 1307-1313, 2001.

[2] D.W. Clarke, C. Mohtadi and P.S. Tuffs, Generalised predictive control, Parts 1 and 2, Automatica, 23, 137-160, 1987.

[3] C.R. Cutler and B.L. Ramaker, Dynamic matrix control – a computer control algorithm, Proceedings American Control Conference, 1980.

[4] B. De Moor, P. Van Overschee and W. Favoreel, Numerical algorithms for subspace state space system identification – an overview, Birkhauser, 1998.

[5] C.E. Garcia and M. Morari, Internal Model control 1. A unifying review and some new results, I&EC Process Design and Development, 21, 308-323, 1982.

[6] L. Ljung, System identification: theory for the user, Prentice Hall, 1987.

[7] K.R. Muske and J. R. Rawlings, Model predictive control with linear models, AIChE Journal, 39(2), 262-287, 1993.

[8] J.P. Norton, An Introduction to Identification, Academic Press, 1992.

[9] J. Richalet, A. Rault, J.L. Testud and J. Papon, Model predictive heuristic control: applications to industrial processes, Automatica, 14(5), 413-428, 1978.

[10] T.-W. Yoon and D.W. Clarke, Observer design in receding horizon predictive control, International Journal of Control, 61, 171-191, 1995.

[11] J.A. Rossiter, Model-Based Predictive Control: A Practical Approach, CRC Press, 2003.
