Sistemas de Controle
UPE
V CONGRESO INTERNACIONAL DE INGENIERÍA MECATRÓNICA Y AUTOMATIZACIÓN (CIIMA 2016)

Modelling and Response of a Car Suspension (Modelado y Respuesta de una Suspensión de Carro)

Jorge Guillermo Díaz Rodríguez, Departamento de Engenharia Mecânica, PUC-Rio, CEP 22451-900, RJ, Brasil, jorgegdiaz@aluno.puc-rio.br
Andrés José Rodríguez Torres, Departamento de Engenharia Mecânica, PUC-Rio, CEP 22451-900, RJ, Brasil, anrodrigueztorres@gmail.com

Resumen—A model for a car suspension was derived, simplified to a mass-spring-damper model. It was implemented in Simulink, as a transfer function, and in state space. The open-loop response of the methods was the same, the system being found stable by the Nyquist criterion. A PID controller was designed and the closed-loop response was compared with the open loop using Nyquist, Bode, and step plots, finding stability, and the overshoot was reduced considerably.

Índice de Términos—Active suspension, state-space control, modelling.

Abstract—The modelling of a car suspension was deduced for one tire set, simplifying it to a mass-spring-damper system. It was done using Simulink, the transfer function, and the state-space method. The response of the two methods in open loop was the same, finding the system to be stable using the Nyquist criterion. A PID controller was designed, and the response in closed loop was compared to the open loop with Nyquist, Bode, and step plots, finding that the system remained stable and the overshoot was lowered considerably.

Keywords—Active suspension control, state
space modelling.

1. INTRODUCTION
The paper shows the modelling and control of one quarter of a car suspension. If roads were smooth and even, suspensions would not need to exist. That is just not the case: roads have bumps and holes that make tires lose contact with the ground, thereby losing friction, stability, and steering capability. Car manufacturers have been building suspensions with elastic springs since the 1920s and added a viscous damper later on [1]. When a wheel passes over an imperfection, it experiences a vertical acceleration. The suspension absorbs that energy, minimizing vibration and creating a sensation of comfort for the passengers. Lately, demands for improved ride comfort and controllability of vehicles, and the high availability of electronic systems, have motivated the development of active and semi-active suspension systems. These are typically electronically controlled and improve comfort as well as road handling [1], [2]. An active suspension system has the capability to adjust itself continuously to changing road conditions [3]. This paper focuses on obtaining a physical model, detailing the process using the laws of mechanics, and explores the model by performing frequency-domain and state-space transformations along with evaluation of the models. In future work, more advanced techniques will be applied.

2. MODELLING
For the simplification used, the car's weight was partitioned into four equal masses, each one attached to one suspension system, as shown in Figure 1, where ms represents the mass of the car (herein ms represents ¼ of the car mass), mt the tire's mass, ks and bs the suspension spring and damper, and kt the tire's elasticity. The controlled variable is the position x and the actuating variable is the hydraulic cylinder force Fa.

Figure 1. Suspension Schematics

Although Lagrangian mechanics would be the first choice, the presence of non-conservative forces makes Lagrange more difficult to use in this case. Following a modelling example of [9] or [10], where a similar system is modelled in
an analogous way (which, however, introduced nonlinear terms when including vibration angles), this paper uses Newtonian mechanics. Remembering that a spring produces a force proportional to displacement, the viscous damper a force proportional to the position's first derivative, and the mass a force proportional to the position's second derivative, the system is modelled as particles. Figure 2 shows in a free-body diagram (FBD) the forces acting on ms.

Figure 2. FBD for ms, Showing Inertial Forces

The balance of forces for ms can be described as

ms·ẍs + bs(ẋs − ẋt) + ks(xs − xt) = Fa.    (1)

For mt, the FBD is shown in Figure 3.

Figure 3. FBD for mt, Showing Inertial Forces

The balance of forces for mt is shown in (2), which summarizes its movement, where Fa is the hydraulic actuator force placed between ms and mt, as shown in Figure 1:

mt·ẍt + bs(ẋt − ẋs) + ks(xt − xs) + kt(xt − xr) = −Fa.    (2)

The values for (1) and (2) are ms = 250 kg, ks = 18 600 N/m, bs = 1000 N·s/m, mt = 50 kg, and kt = 196 000 N/m.

A physical model was drafted in SolidWorks in order to import it into Mathworks SimMechanics. However, the imported model exhibited erratic behaviour, and no further work was done with that approach. Equations (1) and (2) were modelled in Simulink [5]. To test the model, an arbitrary constant force of 1500 N was applied, simulating Fa, together with an external disturbance (an uneven road) modelled as a sine wave, as shown in Figure 4. The response of the Simulink model to a step is shown in Figure 4. A passenger feels the movement of xs. It can be seen that both masses experience a deviation from the road profile; such a situation would give a passenger an uneasy feeling. The system reaches 63% of the final value at approximately 0.11 s.

Figure 4. Model Response to a Step External Disturbance

Analysis in the Frequency Domain
The system's transfer function (TF) was calculated from Newton's second law, as shown in (1) and (2). They can be rewritten as (3) and (4). Applying the Laplace transform to (3) and (4), (5) and (6) were obtained. For the suspension, the output is the car position xs and the input is the road profile xr, and remembering that the system's transfer
function (TF) is the quotient of output over input [6], an already simplified TF is shown in (7). Replacing numerical values and simplifying, (7) becomes (8). Equation (8) is a fourth-order TF with four complex roots in the denominator. Figure 5 shows the response via the TF to a 10 cm bump: the system has an overshoot of 17.6 cm and reaches 63% of the final value (6.3 cm) in 0.3 s.

Figure 5. Model Response to a 10 cm Bump

Bode, Nyquist, and root-locus plots for the open loop were obtained using Matlab. The Bode plot is shown in Figure 6. It shows a first natural frequency of 2.223 Hz, corresponding to the natural resonant frequency of the suspension, and a second natural resonant frequency of 9.42 Hz, corresponding to the tire; the system attenuates the input signal. The bandwidth frequency is 56 rad/s, the DC gain 41 dB, the phase margin 32°, and the roll-off gain 0.5786 dB.

Figure 6. Bode Diagram for the System

Figure 7 shows the Nyquist diagram, which depicts a crossing at 0.0003, leaving good room for modifying the system. By trial and error, a value of K was found so that the open-loop TF, with G(s) the system and H(s) the controller, satisfies the condition |G(s)H(s)| = 1 at 180°:

G(s)H(s) = 16 600 538 / (s⁴ + 24 s³ + 22 896 s² + 15 680 s + 291 680).

The actual point on the Nyquist plot was (−0.994, 0), as shown in Figure 8(a). Checking the response on the Bode plot of Figure 8(b), the gain at 180° is almost 0 dB; therefore K = 166 fulfills the magnitude and angle conditions.

Figure 7. Nyquist Diagram for the System
Figure 8. (a) Nyquist Diagram with K; (b) Bode Diagram with K

The zoomed root-locus diagram in Figure 9 displays the four roots of the TF denominator in open loop: −11.66 + 150.76i, −11.66 − 150.76i, −0.34 + 3.56i, and −0.34 − 3.56i. Because all the poles have negative real parts, the system can be said to be stable [6].

Figure 9. Zoom of the Root-Locus Diagram for the System

State-Space Analysis
Because there are four energy accumulators in the suspension (ms, mt, ks and kt), the system has four state variables. These are the variables that describe
the system's energy [7]. The first two are potential and the last two represent kinetic energy:

x1 = xs, vertical displacement of the car (ms);
x2 = xt, vertical displacement of the tire (mt);
x3 = ẋs, vertical speed of the car;
x4 = ẋt, vertical speed of the tire.

It is worth saying that xr cannot be a state variable, because it does not directly affect the output xs. Remembering that the state-space equations are (9),

ẋ = A·x + B·u,    y = C·x + D·u,

and because the output xs is already known, Y must be equal to C·x, making D equal to 0 and C = [1 0 0 0]. Now, differentiating the proposed state variables, equations (10) are obtained. Then (10) can be rewritten in the matrix form (9), as shown in (11) and (12). Replacing numeric values gives the numerical form of (12).

The eigenvalues of matrix A indicate the type of system response. If all have negative real part, the system is stable. If any has a positive real part, the system is unstable and the response will grow without limit as time goes on. If all eigenvalues are purely real, the response is exponential; if at least two eigenvalues form a complex-conjugate pair, the response will oscillate. The eigenvalues found for matrix A are −10.3139 ± 64.1988i and −1.6861 ± 8.1326i. They all have negative real components; therefore, the system is stable. The response of the open-loop system modelled in state space is shown in Figure 10. It can be seen that the response is almost identical to the response when the system was modelled via the TF (Figure 5).

Figure 10. State-Space Response to a Step Disturbance

Observability and Controllability
According to the Popov–Belevitch–Hautus test [7], a system is controllable if the matrix V = [B AB A²B ... A^(n−1)B] has rank n. For this case, matrix V has rank 4, equal to the order of A; therefore, the system is controllable. Likewise, a system is observable if S = [C; CA; CA²; ...; CA^(n−1)] has rank n. For this case, matrix S has rank 4, equal to the order of A; therefore, the system is observable.

Gain Matrix K
Recalling that the system poles are the eigenvalues of A, a matrix K is sought that can modify the input, which will modify the
eigenvalues and, in turn, change the system's behaviour. The response of a system with gain is given by (14). A new closed-loop matrix can be defined as in (15),

Acl = A − B·K,    (15)

leaving the response of the system as in (16). With K = [k1 k2 k3 k4]ᵀ, the new Acl defined in (15) was formed and its determinant computed. Comparing terms of the same order to find K, simplifying coefficients, and writing them in extended matrix form gives a linear system; solving it yields the values of K.

The response of the system with the gain matrix K is shown in Figure 11 for a 10 cm step disturbance. One can see the peak has lowered to an acceptable value of 12 cm, as opposed to Figure 10, where the peak reaches 16 cm.

Figure 11. System Response with the Gain Matrix

PID
The Ziegler–Nichols method was used to find the PID constants. From the step response the following values were obtained: ks = 0.063, Tu = 0.001, Tr = 0.0631. Table 1 shows the resulting constants for P, PI, and PID control.

Table 1. Values for P, PI and PID Controllers

However, when modelling the step response, none of these sets of values gave an acceptable response, making the system unstable or amplifying the response. Changes using the criteria of Table 2, which shows recommended changes of the PID values to enhance the system's response [8], were tried, with negative results as well.

Table 2. Recommended Actions to Modify PID Values

The PID tuning tool in Simulink (Figure 12) was then used, obtaining Kp = 2000, Ki = 14009577, and Kd = 0.05. The response with these values is shown in Figure 13, where one can see the overshoot decreased to about 9 mm despite a noisy step function.

Figure 12. Suspension in Closed Loop Modelled in Simulink
Figure 13. Response from the Closed Loop with PID via Simulink

With Simulink's linearization tool, Bode, Nyquist, and root-locus diagrams were obtained, as shown in Figure 14, Figure 15, and Figure 16, respectively. In the Bode diagram one can see that the natural frequencies did not change. In the Nyquist diagram the system runs clockwise; therefore it
remains stable, and in the root-locus diagram the poles did not change when compared to the values in Figure 9.

Figure 14. Bode Diagram for the System with PID
Figure 15. Nyquist Diagram for the System with PID
Figure 16. Root-Locus Diagram for the System with PID

Modelling the PID in Simulink
With the suspension modelled in blocks, the selected type of controller was the PID. From Figures 4 and 5 it can be seen that the measured outputs are xs (position of ms) and xt (position of mt); the measured input is xr. The PID block is fed by the error. A PID controller has three independent parameters, which can be interpreted in terms of time: Kp depends on the present error, Ki on the accumulation of past errors, and Kd is a forecast of the error based on its current rate of change [4]. A PID controller produces an output that drives the output variable to a desired target. A block for this controller was built as shown in (17):

C(s) = P + I·(1/s) + D·(N / (1 + N·(1/s))).    (17)

The constants for the PID controller were taken from [1]: Kp = 7955, Ki = 500, Kd = 0.001. Four events were tested on the model built as blocks in Simulink: a sinusoid, a step, a saw wave, and a ramp. The system responses are shown in Figures 17 to 19 for the sine, step, and saw disturbances, respectively. The sine signal represents an uneven road. The step signal aims to reproduce a speed bump. The saw signal had a period of 2 s and an amplitude of 0.05 m, and simulates an unpaved road. All of the disturbances started at time zero except the ramp; this was done to check the system's response to a disturbance after it has stabilized.

Figure 17. Response with a Sine Disturbance

The sine excitation reaches 63% of its stable value at approximately 0.27 s, whereas the step does so at 0.14 s and the ramp at 0.24 s. Because the saw disruption is continuous over time, the system does not have time to settle to a stable value. Even so, in all cases the system hovers over a stable value or a
range.

Figure 18. Response with a Step Disturbance
Figure 19. Response with a Saw Disturbance
Figure 20. Response with a Ramp Disturbance

CONCLUDING REMARKS
A partition of the car was made in order to simplify the modelling. A disadvantage of doing so is that rotation between axles is ignored, which would introduce two extra degrees of freedom in the suspension: one rotation along connected wheels and another between the front and rear axles. Further work ought to introduce such rotations in order to reproduce a car's suspension more accurately.

The proposed model reproduces the behaviour of the selected problem, modelled in the time, state-space, and frequency domains. Tests were performed to check the model's stability (gain, eigenvalues, roots, observability and controllability), all of them favourable. The system was stable before and after the control action, showing acceptable agreement with the results of [2]. Following the example of evaluating the response to several external disturbances, as presented in [11], a couple of them were modelled as signals representing bumps or even gravel. The system's response showed constant behaviour in form, although with different values, for all of them.

The selected control method creates a fast and prompt response to the system's disturbances, lowering the amplitude of oscillation and therefore creating a more comfortable ride for the passenger. For the modelling as blocks, the PID parameters were adjusted by fine-tuning. At the end, the error response was hovering around 20. However, ms (the ¼ car mass) did not move, which is acceptable, since that is the movement that a passenger and the car's structure would actually feel. The ramp disturbance showed that, after stabilization, the controller does not allow the system to deviate from the established error. Without the use of advanced control techniques such as fuzzy control [12], a reduction of vibration was accomplished.
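The stability, controllability, and observability findings above can be cross-checked numerically. The sketch below (Python/NumPy, as a stand-in for the paper's MATLAB workflow) rebuilds the quarter-car A and B matrices from the reported parameters; the state ordering [xs, xt, ẋs, ẋt] and the sign convention for Fa are assumptions consistent with the force balances of Section 2.

```python
import numpy as np

# Quarter-car parameters reported in the paper
ms, mt = 250.0, 50.0        # sprung (1/4 car) mass and tire mass [kg]
ks, kt = 18600.0, 196000.0  # suspension and tire stiffness [N/m]
bs = 1000.0                 # suspension damping [N*s/m]

# States x = [xs, xt, xs_dot, xt_dot]; input u = Fa (actuator force)
A = np.array([
    [0.0,      0.0,         1.0,     0.0],
    [0.0,      0.0,         0.0,     1.0],
    [-ks/ms,   ks/ms,      -bs/ms,   bs/ms],
    [ ks/mt, -(ks+kt)/mt,   bs/mt,  -bs/mt],
])
B = np.array([[0.0], [0.0], [1.0/ms], [-1.0/mt]])
C = np.array([[1.0, 0.0, 0.0, 0.0]])   # output y = xs

# Eigenvalues of A decide stability (all real parts negative => stable)
eig = np.linalg.eigvals(A)
stable = bool(np.all(eig.real < 0.0))

# Controllability matrix V = [B AB A^2B A^3B] and observability
# matrix S = [C; CA; CA^2; CA^3]; full rank (4) in both cases
V = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(4)])
S = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(4)])
rank_V = np.linalg.matrix_rank(V)
rank_S = np.linalg.matrix_rank(S)
```

Run as-is, this reproduces (to rounding) the eigenvalues −10.31 ± 64.20i and −1.69 ± 8.13i reported in the state-space analysis, and full-rank controllability and observability matrices.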
Nevertheless, this paper produced a model which was tested, compared, and validated under different methods, for future use in implementing new control methods. Some of the mentioned methods, such as fuzzy control, are already implemented in Matlab's control toolbox.

REFERENCES
[1] M. Senthil Kumar, "Development of Active Suspension System for Automobiles using PID Controller," in Proceedings of the World Congress on Engineering 2008 (WCE 2008), London.
[2] M. J. Crosby and D. C. Karnopp, "The Active Damper: a New Concept for Shock and Vibration Control," Shock and Vibration Bulletin, 43(4), 1973, pp. 119-133.
[3] R. Rajamani and J. K. Hedrick, "Performance of Active Automotive Suspensions with Hydraulic Actuators: Theory and Experiment," in Proceedings of the ACC, June 1994.
[4] R. Dorf and R. Bishop, Sistemas de Control Moderno, Ch. 12, 10th ed., Pearson, Madrid, 2005.
[5] Mathworks, Matlab Simulink User Guide, R2013b.
[6] K. Ogata, Modern Control Engineering, 5th ed., Pearson, 2009.
[7] B. Friedland, Control System Design: An Introduction to State-Space Methods, Dover.
[8] MIT OpenCourseWare on Feedback Control Systems, last accessed August 20, 2015, http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-30-feedback-control-systems-fall-2010/index.htm
[9] E. Flores and I. Laguado, "Frequencies and Natural Modes of Free Vibration without Damping of a Cantilever Beam," RCTA, Vol. 2, No. 10, 2007.
[10] J. Camargo, F. Camacho, and A. Fernandez, "Controller Design for an Inverted Pendulum Using a Nonlinear Model," RCTA, Vol. 2, No. 18, 2011.
[11] O. Higuera and J. Salmanca, "Continuous and Discrete Control Design Based on LMI," RCTA, Vol. 2, No. 18, 2011.
[12] C. H. Valencia, M. Vellasco, R. Tanscheit, and K. Figueiredo, "Magnetorheological Damper Control in a Leg Prosthesis Mechanical," in Robot Intelligence Technology and Applications, ISBN 978-3-319-05581-7, Springer, USA, pp. 805-818, 2014.

Physical Model of a Quarter-Car Active Suspension System
Radu-Gheorghe Chetan, Roxana Both-Rusu, Eva-H. Dulf, Clement Festila
Department of Automation, Technical University of Cluj-Napoca, Cluj-Napoca, Romania, clement.festila@aut.utcluj.ro

Abstract—The
potential advantages of a modern active suspension system are recognized for race cars, for road cars, and even for large-series cars. Besides passenger ride comfort, the control of the vehicle's vertical acceleration directly affects road holding. In passive suspension models, the car structure is optimized in the initial design stage, taking into consideration the spring stiffness, damper coefficient, sprung and unsprung mass, various tire performances, different velocities, etc. The performance of the passive suspension system is limited in the first place by the inherent variation of these parameters. The active suspension system has the advantage of a closed-loop control system, because its actuators are controlled by devices which receive and process direct information from appropriate sensors. In order to evaluate active suspension performance, several possibilities are known: mathematical, analog, or digital models; the design and implementation of test setups; or physical test models. Obviously, physical test models are able to act like the common car structure, but such manufactured equipment is not available on the market. In this sense, the authors conceived, designed, and built a test setup, a physical model, as a robust, simplified solution with only one acceleration/speed transducer in the control loop. The actuator is an electromagnet operating against an internal spring. The structure comprises a cam driven by a DC motor, which simulates the road conditions, an articulated wheel with tire, a spring, and a damper. The car body is simulated by an articulated plate. To test the active suspension performance, two position sensors are attached: one for the wheel and one for the car body. Based on this structure, various modern control strategies can be designed, implemented, and tested.

Keywords—active suspension system, quarter-car model, physical model, advanced control strategies

I. INTRODUCTION
The car suspension is a mechanism that sustains the
vehicle weight on the road, diminishes the influence of road irregularities, maintains the tire-ground contact, provides sustainable ride comfort for passengers, and improves the handling capabilities of the vehicle. In fact, the suspension system physically separates the car body from the wheels. From the practical point of view, the suspension system minimizes the vertical acceleration transmitted to the passengers, which directly provides the ride comfort. The stability of the suspension system and the improvement of its performance remain an important challenge today. Numerous suspension systems are already in production; they can be divided into three groups: passive, semi-active, and active systems. The fully active system can supply power to the system by means of active force generation. Suspension systems are implemented very differently by the various car manufacturers; the main versions are hydraulic and electric technical solutions, with particular important details. In the case of a passive suspension system, the car structure design takes into consideration the rated values of the parameters of the suspension components, but these values vary in time and with car velocity and load, so the passive suspension cannot adapt its performance to all road conditions. The semi-active suspension system has the possibility to adapt the values of the component parameters in accordance with different ride conditions. The most frequent approach for the study of suspension performance is the quarter-car model [1], [3], in which the car is divided into four sections, one attached to each wheel. In this case (the quarter car), the quarter of the car chassis is the sprung mass, while the tire with its auxiliary components is the unsprung mass. The suspension system consists of an energy-dissipating element, the damper or shock absorber, and an energy-storing element, the spring. The main versions of the active suspension system are shown in Fig.
1.

Fig. 1. Main versions of active suspension: (a) parallel active suspension; (b) series active suspension.

The coefficients k are the spring constants (stiffness coefficients), c are the damper coefficients, and F is the active force, which adapts the behavior of the suspension system to various conditions.

978-1-5090-4862-5/17/$31.00 © 2017 IEEE

m2ẍ2 + k1(x2 − x1) + k2(x2 − w) + m2g + b1(ẋ2 − ẋ1) + b2(ẋ2 − ẇ) = −F.    (6)

The mathematical state of a dynamic system is described by the state variables. State variables often relate to a physical process in engineering systems, where the stored quantities (mass, impulse, charge) are to be calculated. The state variables define a state space, and in this state space the state vector x(t) is specified; the movement of the system is the displacement of its end point [7]. A state-space representation is a mathematical model of a physical system in control engineering: a set of input, output, and state variables related by first-order differential equations. Four state variables are defined for the present system, leading to the state equations

ẋ1 = x3,    (7)
ẋ2 = (b1/m2)x1 − (b1/m2)x2 − (b2/m2)x2 + (b2/m2)w + x4,    (8)
ẋ3 = (b1²/(m1m2) − k1/m1)x1 − (b1²/(m1m2) + b1b2/(m1m2) − k1/m1)x2 − (b1/m1)x3 + (b1/m1)x4 + (b1b2/(m1m2))w + (1/m1)F − g,    (9)
ẋ4 = (k1/m2)x1 − (k1/m2)x2 − (k2/m2)x2 + (k2/m2)w − (1/m2)F − g.    (10)

In the case of a linear system, the general form [8] of the state-variable equations is

ẋ = A·x + B·u.    (11)

IV. SIMULATION OF THE ACTIVE SUSPENSION SYSTEM
Using the Matlab/SIMULINK [5] environment, the authors analyzed the behavior of a typical active suspension system using equations (6)-(9). The simulation example is based on the following set of values in a per-unit system: ms = 2.5, mu = 0.32, ks = 80, ku = 500, cs = 1. The values were given in [4] with a scale factor of 1:100. The step responses of Z(t) = ZS(t) − ZU(t) for Fa = 10·1(t) and Zr = 2·1(t), where 1(t) denotes the unit step, are given in Fig. 4, the oscillatory character being expected [4].

Fig. 4. Step responses for inputs Zr and Fa

Fig. 5 depicts the variation of the distance Z = ZS − ZU for periodic road shocks.

Fig. 5. Evolution of the distance Z = ZS − ZU for periodic road irregularities

If the coordinate ZU is chosen as
reference, the same evolution of Z = ZS − ZU is given in Fig. 6.

Fig. 6. Evolution of the distance Z = ZS − ZU for periodic road shocks

This mode of presentation of the simulation results, in negative form, will be useful in the analysis of the physical model (laboratory test bench). The control principle given in Fig. 3 is used in the simulation scheme, the results being given in Fig. 7.

Fig. 7. Open-loop and closed-loop suspension behavior

V. PHYSICAL MODEL FOR THE ACTIVE SUSPENSION CAR SYSTEM
The scheme of the physical model is given in Fig. 8. A cam driven by a DC motor (M) simulates the road shocks. The auxiliary electric position transducers, PT1 for the wheel and PT2 for the car body, are used only to estimate the efficiency of the control algorithms; they are simple transformers with variable air gap, whose AC output voltage is rectified and filtered. The main transducer is the speed transducer, based on a permanent magnet (an undamped mechanical arrangement) and a coil. The signal given by this transducer activates the electromagnetic actuator.

Fig. 8. The simplified scheme for the active suspension physical model

In order to compare the simulation results with the signals given by the transducers, Fig. 9 presents the evolution of the distance Z(t) from the transducer and from Simulink. The signals have practically the same evolution.

Fig. 9. Comparison for the distance Z(t): (a) from the physical model; (b) from SIMULINK

Some differences between the results obtained from the physical model (Fig. 9a) and the simulated values (Fig. 9b) are caused by the noise level of the acceleration transducer and, additionally, by the input converter of the oscilloscope. Fig. 10 shows the performance of the closed-loop active suspension system compared to the same evolution Z(t) in the open-loop structure. The motion of the chassis is diminished but not rejected, calling for a more powerful control algorithm.

Fig. 10. Active suspension performance: (a) in open loop; (b) in closed loop

VI. CONCLUSIONS
The control of suspensions is a difficult control problem
due to the complicated relationships between its components and parameters. The most appropriate way to test such a control system is the use of a physical model; unfortunately, no such equipment, able to act like the common car structure, is available on the market. In this sense, the authors conceived, designed, and built a test setup, a physical model, as a robust, simplified solution with only one acceleration/speed transducer in the control loop. In order to prove the efficiency of the proposed equipment, the simulation results of the active car suspension models were compared with the signals given by the transducers. Based on this structure, various modern control strategies can be implemented and tested.

REFERENCES
[1] Quanser, "Innovate, educate: Active Suspension," www.quanser.com, 2016.
[2] N. M. Ghazaly and A. O. Moaaz, "The Future Development and Analysis of Vehicle Active Suspension System," IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE), vol. 11, 2014, pp. 19-25.
[3] T. P. Phalke and A. C. Mitra, "Design and Analysis of Vehicle Suspension System," International Engineering Research Journal, 2011, pp. 165-172.
[4] T. P. J. van der Sande, "Control of an automotive electromagnetic suspension system," Master's Thesis, Eindhoven University of Technology, Department of Mechanical Engineering, 2011.
[5] Mathworks, "Active Suspension Control Design," www.mathworks.com, 2016.
[6] Abd El-Nasser S. Ahmed, Ahmed S. Ali, Nouby M. Ghazaly, and G. T. Abd el-Jaber, "PID Controller of active suspension system for a quarter car model," International Journal of Advances in Engineering & Technology, Dec. 2015.
[7] R. Rosli, M. Mailah, and G. Priyandoko, "Active Suspension System for Passenger Vehicle using Active Force Control with Iterative Learning Algorithm," WSEAS Transactions on Systems and Control, 2014.
[8] J. Fang, "Active Suspension System of Quarter Car," Master's Thesis, University of Florida, 2014.
[9] R. Rusu-Both and E. H. Dulf, "Auto-tuning Fractional Order Control of a Laboratory Scale Equipment," 2016 International Conference on Mechatronics, Control and Automation Engineering, 2016, DOI
10.2991/mcae-16.2016.12.

Acta Technica Jaurinensis, Vol. 12, No. 3, pp. 178-190, 2019. DOI: 10.14513/actatechjaur.v12.n3.502. Available online at acta.sze.hu.

State Feedback Controller Design of an Active Suspension System for Vehicles Using Pole Placement Technique
A. Weber, M. Kuczmann
Szechenyi Istvan University, Department of Automation, Egyetem ter 1, 9026 Gyor. Email: mandycandyart@gmail.com

Abstract—The paper presents a method for designing a state feedback controller for the active suspension system of a quarter-car model. This is a survey based on a specific example. The designed controller of the active suspension system improves the driving control, safety, and stability, because during the ride the periodic swinging motion generated on the wheels by road irregularities can be decreased. This periodic motion damages driving comfort and may cause traffic accidents. The state feedback controller is designed to withstand road-induced displacements. Computer simulations of the designed controller have been performed in the frame of Scilab and XCos.

Keywords: state feedback, pole placement, active suspension system

1. Introduction
Much research on active suspension systems has been presented in recent years, leading to more sophisticated regulatory approaches such as linear fuzzy [1], PID [2], and nonlinear control systems with artificial neural network controllers [3]. The active suspension system is a mechatronic suspension [4] and is important for improving ride comfort. Because of the adverse impacts caused by road imbalances, the wheel can lose contact with the road and can no longer deliver force, and therefore the driving of the vehicle becomes uncertain. The periodic swinging motion can damage the driving comfort, the car parts, and the cargo, and this motion can cause health damage too. The primary purpose of the active suspension system is to minimize the vertical displacement of the vehicle and guarantee road
maintenance. For modeling and simulation, a quarter-car model has been chosen, see Fig. 1.

Figure 1. Quarter car model

2. Mathematical modeling
2.1. The quarter car model
Dynamic systems are described by several scientific and engineering branches and are modeled by state equations. Using differential equations, the operation of complicated dynamic systems can be modeled with relatively high precision. For defining the state variables of the quarter-car model, the Euler-Lagrange equation is used [5]:

d/dt(∂K/∂ẋ) − ∂K/∂x + ∂P/∂x + ∂R/∂ẋ = F.    (1)

The equation is described by the kinetic energy K, the potential energy P, and the Rayleigh dissipation function R, as follows [6]:

K = (1/2)m1ẋ1² + (1/2)m2ẋ2²,    (2)
P = (1/2)k1(x1 − x2)² + m1gx1 + (1/2)k2(x2 − w)² + m2gx2,    (3)
R = (1/2)b1(ẋ1 − ẋ2)² + (1/2)b2(ẋ2 − ẇ)²,    (4)

where m1 is the sprung mass, m2 the unsprung mass, k1 the suspension stiffness, k2 the tire stiffness, b1 and b2 the damping coefficients, F the control force, x1 the car body displacement, x2 the wheel displacement, and w the road-induced displacement.

2.2. State-space representation of a quarter car model
After obtaining the partial derivatives and substituting them into the Euler-Lagrange equation, the following equation is obtained:

m1ẍ1 + k1(x1 − x2) + m1g + b1(ẋ1 − ẋ2) = F.    (5)

II. ACTIVE SUSPENSION MATHEMATICAL MODEL
The simplified quarter-car model is given in Fig. 2, where ms is the sprung mass, mu the unsprung mass, ks the spring constant, kt the tire elastic coefficient, and Fa the active force.

Fig. 2. Quarter car parallel active suspension model

Here Zr is the road surface unevenness relative to the horizontal ground, ZS the chassis displacement relative to the plain ground, and ZU the displacement of the wheel relative to the plain ground. The relative position chassis-wheel is Z = ZS − ZU. On the sprung mass act the inertia force msZ̈s, the viscous friction force of the shock absorber cs(Żs − Żu), the elastic spring force ks(Zs − Zu), the road action through the tire kt(Zu − Zr), and the active force Fa, in our case an electromagnetic
force, so that

m_s \ddot{Z}_s + c_s(\dot{Z}_s - \dot{Z}_u) + k_s(Z_s - Z_u) - F_a = 0   (1)

For the unsprung mass, the following equation is valid:

m_u \ddot{Z}_u + c_s(\dot{Z}_u - \dot{Z}_s) + k_s(Z_u - Z_s) + k_t(Z_u - Z_r) + F_a = 0   (2)

The chassis and wheel accelerations are

\ddot{Z}_s = \left[ -c_s(\dot{Z}_s - \dot{Z}_u) - k_s(Z_s - Z_u) + F_a \right] / m_s   (3)

\ddot{Z}_u = \left[ -c_s(\dot{Z}_u - \dot{Z}_s) - k_s(Z_u - Z_s) - k_t(Z_u - Z_r) - F_a \right] / m_u   (4)

If the Laplace transformation with zero initial conditions is applied, the result is

m_s s^2 Z_s + (c_s s + k_s)(Z_s - Z_u) - F_a = 0
m_u s^2 Z_u + (c_s s + k_s)(Z_u - Z_s) + k_t(Z_u - Z_r) + F_a = 0   (5)

The equivalent matrix equation is

\begin{bmatrix} m_s s^2 + c_s s + k_s & -(c_s s + k_s) \\ -(c_s s + k_s) & m_u s^2 + c_s s + k_s + k_t \end{bmatrix} \begin{bmatrix} Z_s \\ Z_u \end{bmatrix} = \begin{bmatrix} F_a \\ k_t Z_r - F_a \end{bmatrix}   (6)

It is useful to solve the previous system for Z_s(s) and Z_u(s). In our application the distance Z = Z_s - Z_u is important, so that

Z(s)/F_a(s) = \left[ (m_s + m_u) s^2 + k_t \right] / \Delta(s)   (7)

and

Z(s)/Z_r(s) = -m_s k_t s^2 / \Delta(s)   (8)

where

\Delta(s) = (m_s s^2 + c_s s + k_s)(m_u s^2 + c_s s + k_s + k_t) - (c_s s + k_s)^2   (9)

The behavior of the open-loop quarter-car active suspension may be described by equations (7)-(9).

III. PRINCIPLE OF THE ACTIVE SUSPENSION CONTROL

Numerous methods from control system theory [6]-[9] are known in the literature and have been applied to active suspension control. Conventional PID control is applied in [4], where Linear Quadratic Control methods and Robust Control are also used for comparison. Fuzzy and Sliding Mode Control are used in [2]. Genetic Algorithm optimization techniques to design an active suspension are analyzed in [2]. Methods based on the Kalman filter are applied in [7] and [8]. The authors chose another solution, based on a simple discontinuous algorithm that is easy to implement. The system structure is depicted in Fig. 3. An inductive permanent magnet transducer directly measures the speed of the movement between chassis and wheel:

\dot{Z} = \dot{Z}_s - \dot{Z}_u   (10)

Fig. 3. Active suspension control principle (didactic equipment used in the teaching process)

If the uneven road exhibits a periodic disturbance, the trigger T gives an output signal if \dot{Z} exceeds a threshold chosen by mechanism tuning. The voltage signal \varepsilon is amplified and drives the winding of the electromagnet used as actuator. The width of the signal \varepsilon decides the mean value of the active force F_a. It is to be noted that this system may be applied only in a discontinuous manner, because the scheme presented in Fig. 3, implemented in the actual test model, operates with positive feedback. For the real car
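Equations (7)-(9) can be checked numerically. The short Python sketch below uses illustrative parameter values that are assumptions of this example (the paper lists no numeric values for this model). It builds Δ(s) by polynomial algebra and confirms two physical sanity checks: a constant force F_a compresses the suspension statically by F_a/k_s, a constant road offset leaves Z = Z_s − Z_u unchanged, and all roots of Δ(s) lie in the left half plane, consistent with the open-loop stability reported above.

```python
import numpy as np

# Illustrative parameter values (assumptions, not from the paper):
ms, mu = 250.0, 35.0        # sprung / unsprung mass, kg
cs = 1000.0                 # shock absorber damping, N*s/m
ks, kt = 16000.0, 160000.0  # suspension / tire stiffness, N/m

# Polynomial coefficient arrays in s, highest power first.
p1 = np.array([ms, cs, ks])        # ms*s^2 + cs*s + ks
p2 = np.array([mu, cs, ks + kt])   # mu*s^2 + cs*s + ks + kt
p3 = np.array([cs, ks])            # cs*s + ks

# Eq. (9): Delta(s) = p1(s)*p2(s) - p3(s)^2
delta = np.polysub(np.polymul(p1, p2), np.polymul(p3, p3))

# Eq. (7) numerator: (ms+mu)*s^2 + kt; Eq. (8) numerator: -ms*kt*s^2
num_fa = np.array([ms + mu, 0.0, kt])
num_zr = np.array([-ms * kt, 0.0, 0.0])

# Static gains: Z/Fa at s=0 equals 1/ks; Z/Zr at s=0 equals 0.
dc_fa = np.polyval(num_fa, 0.0) / np.polyval(delta, 0.0)
dc_zr = np.polyval(num_zr, 0.0) / np.polyval(delta, 0.0)
print(dc_fa, dc_zr)

# Open-loop poles are the roots of Delta(s); for this passive, damped
# suspension they all have negative real parts.
print(np.roots(delta))
```

The same polynomial arrays can be fed directly to scipy.signal.TransferFunction if a step or frequency response is wanted.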
application, intricate actuators are used: hydraulic and electrohydraulic [2], mechanical solutions [4], and electric solutions based on linear induction motors [4]. The solution given by the authors must be cheap and simple.

\dot{x} = A x + B u
y = C x + D u

Here x is the state vector, u and y are the column vectors containing excitations and responses, A is the system matrix, and B, C and D are matrices containing the appropriate coefficients [9]. The state-space representation of the quarter car model is

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} =
\begin{bmatrix}
0 & 0 & 1 & 0 \\
\frac{b_1}{m_2} & -\frac{b_1+b_2}{m_2} & 0 & 1 \\
\frac{b_1^2}{m_1 m_2} - \frac{k_1}{m_1} & \frac{k_1}{m_1} - \frac{b_1(b_1+b_2)}{m_1 m_2} & -\frac{b_1}{m_1} & \frac{b_1}{m_1} \\
\frac{k_1}{m_2} & -\frac{k_1+k_2}{m_2} & 0 & 0
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} +
\begin{bmatrix}
0 & 0 & 0 \\
0 & \frac{b_2}{m_2} & 0 \\
\frac{1}{m_1} & \frac{b_1 b_2}{m_1 m_2} & -g \\
-\frac{1}{m_2} & \frac{k_2}{m_2} & -g
\end{bmatrix}
\begin{bmatrix} F \\ w \\ 1 \end{bmatrix}

The performance parameters of the vehicle are given in Table 1 [6]. After substituting the values, the state-space representation is

\begin{bmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{bmatrix} =
\begin{bmatrix}
0 & 0 & 1 & 0 \\
12.5 & -40.5 & 0 & 1 \\
-59.482759 & 11.206897 & -1.7241379 & 1.7241379 \\
587.5 & -5337.5 & 0 & 0
\end{bmatrix}
\begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} +
\begin{bmatrix}
0 & 0 & 0 \\
0 & 28 & 0 \\
0.0034483 & 48.275862 & -9.81 \\
-0.025 & 4750 & -9.81
\end{bmatrix}
\begin{bmatrix} F \\ w \\ 1 \end{bmatrix}

The output variable of the quarter car model is

y = \begin{bmatrix} 1 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix}

Table 1. Parameters

Parameter | Value | Unit
m_1 | 290 | kg
m_2 | 40 | kg
k_1 | 23500 | N/m
k_2 | 190000 | N/m
b_1 | 500 | N·s/m
b_2 | 1120 | N·s/m
g | 9.81 | m/s^2

3 Simulations of the quarter car model

3.1 Full state feedback

Controllability is an important property of a controlled plant. The system can be controlled when the rank of the controllability matrix M_c is maximal, i.e. the matrix is invertible (the determinant of the matrix is not zero) [10]. Kalman's controllability matrix is (with n = 4)

M_c = \begin{bmatrix} b & Ab & A^2 b & \dots & A^{n-1} b \end{bmatrix}

M_c = \begin{bmatrix}
0 & 0.0034483 & -0.0490488 & -0.4007186 \\
0 & -0.025 & 1.0556034 & 92.098313 \\
0.0034483 & -0.0490488 & -0.4007186 & 248.99601 \\
-0.025 & 0 & 135.46336 & -5663.0995
\end{bmatrix}

After determining the controllability matrix, Ackermann's pole placement can be used, because the state transformation and the feedback matrix can be given directly [6]. Because the system is controllable, Ackermann's pole
placement is used for the state feedback. Ackermann's formula is a control design method for solving the pole allocation problem.

Figure 2. State feedback in continuous time

The task is to move the system's eigenvalues to new places in the closed loop system. This is the pole placement, which is why the state feedback vector k is to be determined, see Fig. 2 [10]. The characteristic polynomial of a closed loop system in the general case is

\lambda^n + p_1 \lambda^{n-1} + p_2 \lambda^{n-2} + \dots + p_n = 0   (18)

When using the pole placement method the eigenvalues are changed, which can be written as

\varphi_{cl}(\lambda) = \det(\lambda E - A + b k^T) = 0   (19)

The eigenvalues of the original system are

\lambda = \{ -20.40805 \pm 70.147933i,\ -0.7040194 \pm 8.4630446i \}   (20)

The new poles are selected as p = \{-200, -30, -30, -30\}, and the gain vector designed by Ackermann's formula is

k = \begin{bmatrix} 237767.73 & -221169.83 & 32173.44 & -5473.3186 \end{bmatrix}

Using the pole placement method, the new eigenvalues of the system are

\lambda = \{ -200,\ -30.001573,\ -29.999214 \pm 0.0013619i \}

3.2 Simulating the system

To realize the simulations, the Scilab program with the Xcos interface has been used. Two cases have been examined: the first when the displacement induced by the road is zero, and the second when this displacement is 50 mm. These simulations are analysed with and without the designed control.

3.2.1 Modeling without controller

If w = 0, then only the gravitational force presses on the car body, see Fig. 3, and this shows that the system left alone settles into a stationary state after some swinging. In the case of a w = 50 mm jump, the car body displacement is affected by the road induced displacement, see Fig. 4: the system starts in steady state, at 10 seconds it reaches a pothole causing mass m_1 to swing, and after the transient section it becomes steady again. It can be seen that this value is 50 mm higher.

3.2.2 Modeling with controller

In the case when there is no road induced displacement but there is a controller, see Fig. 5, it can be seen that the swings are eliminated and the stationary state is smoother. By the
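The controllability test and the Ackermann design above can be reproduced outside Scilab. The following NumPy sketch is an independent re-implementation (not the paper's code), using the substituted A matrix and the force column of B; Ackermann's formula is k = [0 … 0 1] M_c^{-1} φ_d(A), where φ_d is the desired characteristic polynomial.

```python
import numpy as np

# Quarter car state matrix with the Table 1 values substituted,
# and b = force column of the input matrix (1/m1, -1/m2 entries).
A = np.array([
    [0.0,        0.0,        1.0,        0.0],
    [12.5,      -40.5,       0.0,        1.0],
    [-59.482759, 11.206897, -1.7241379,  1.7241379],
    [587.5,     -5337.5,     0.0,        0.0],
])
b = np.array([[0.0], [0.0], [1.0 / 290.0], [-1.0 / 40.0]])

# Kalman controllability matrix Mc = [b, Ab, A^2 b, A^3 b]
Mc = np.hstack([np.linalg.matrix_power(A, i) @ b for i in range(4)])
assert np.linalg.matrix_rank(Mc) == 4  # the plant is controllable

# Ackermann's formula: k = [0 0 0 1] Mc^{-1} phi_d(A), where phi_d has
# its roots at the desired closed-loop poles.
poles = [-200.0, -30.0, -30.0, -30.0]
coeffs = np.poly(poles)                      # [1, p1, p2, p3, p4]
phi_d = sum(c * np.linalg.matrix_power(A, 4 - i)
            for i, c in enumerate(coeffs))
k = np.array([0.0, 0.0, 0.0, 1.0]) @ np.linalg.solve(Mc, phi_d)
print(k)   # ~ [237767.7, -221169.8, 32173.4, -5473.3], as in the paper

# Eigenvalues of A - b*k sit at the chosen poles (the triple pole at
# -30 is recovered only approximately, as in the paper's Scilab run).
print(np.linalg.eigvals(A - b @ k.reshape(1, 4)))
```

Note that library pole-placement routines based on the KNV algorithm (e.g. scipy.signal.place_poles) reject poles repeated more often than the rank of B, so the explicit Ackermann formula is used here for the triple pole.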
reason of the design of the controller, the damping force is more effective in the transient phase. This situation is more favorable for car drivers, travelers and cargoes.

Figure 3. w = 0, without controller
Figure 4. w = 0.05, without controller
Figure 5. w = 0, with controller
Figure 6. w = 0.05, with controller

The effect of the 50 mm road induced displacement is visible, see Fig. 6: the swinging motions disappear 10 seconds after reaching the pothole, and after the jump the stationary state supervenes without swinging. The state-space equation (22), without road induced displacement and with state feedback, is modeled as shown in Fig. 7:

\dot{x} = A x + B_1 u + B_2 w + B_3 \cdot 1   (24)

Figure 7. State-space equation model in Xcos
Figure 8. State feedback model in Xcos

The state feedback model, where k is the gain factor, is shown in Fig. 8. The road induced displacement and the gravitational acceleration act on the system; there is no reference signal.

4 Conclusion

The design of the active suspension system of a quarter car model produced different results as the road induced displacement was changed. According to the simulation results, the model has much better characteristics with the designed controller. The simulation of the active suspension system showed that the swinging motions were gone and the stationary state was quickly reached, which favors the driver and the passengers and avoids cargo damage.

References

[1] A. Hofmann, M. Hanss, Fuzzy arithmetical controller design for active road vehicle suspension in the presence of uncertainties, 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), 2017, pp. 582-587. doi: 10.1109/MMAR.2017.8046893

[2] L. Bao, S. Chen, S. Yu, Research on active fault-tolerant control on active suspension of vehicle based on fuzzy PID
control, Chinese Automation Congress (CAC), 2017, pp. 5911-5916. doi: 10.1109/CAC.2017.8243840

[3] V. Vidya, M. Dharmana, Model reference based intelligent control of an active suspension system for vehicles, International Conference on Circuits, Power and Computing Technologies (ICCPCT), 2017, pp. 1-5. doi: 10.1109/ICCPCT.2017.8074362

[4] L. R. Miller, Tuning passive, semi-active and fully active suspension systems, Proceedings of the 27th IEEE Conference on Decision and Control, 1988, pp. 2047-2053. doi: 10.1109/CDC.1988.194694

[5] B. Lantos, Control Systems Theory and Design II, 1st Edition, Akademiai Kiado, Budapest, 2003.

[6] J. Bokor, P. Gaspar, State-space representation, in: L. Nadai (Ed.), Control Technology with Vehicle Dynamics Applications, 1st Edition, Typotex Elektronikus Kiado Kft., Budapest, 2008, p. 125.

[7] L. Keviczky, R. Bars, H. J. A. Barta, C. Banyasz, Control Engineering, 1st Edition, Universitas-Gyor Kht., Gyor, 2006.

[8] J. Bokor, P. Gaspar, A. Soumelidis, Control Engineering II, 1st Edition, Typotex Elektronikus Kiado Kft., Budapest, 2011.

[9] M. Kuczmann, Signals and Systems, 1st Edition, Universitas-Gyor Kht., Gyor, 2005.

[10] B. Lantos, Control Systems Theory and Design I, 1st Edition, Akademiai Kiado, Budapest, 2000.

Suspension: System Modeling

Key MATLAB commands used in this tutorial are: ss, step

Contents
- Physical setup
- System parameters
- Equations of motion
- Transfer function models
- Entering equations in MATLAB

Physical setup

Designing an automotive suspension system is an interesting and challenging control problem. When the suspension system is designed, a 1/4 model (one of the four wheels) is used to simplify the problem to a 1-D multiple spring-damper system. A diagram of this system is shown below. This model is for an active suspension system where an actuator is included that is able to generate the control force U to control the motion of the bus body.

Related Tutorial Links: Intro to Modeling, Mech System Activity
Related External Links: System Rep in MATLAB Video, Modeling Intro Video

Model of
Bus Suspension System: 1/4 Bus

System parameters

M1 = 2500 kg (1/4 bus body mass)
M2 = 320 kg (suspension mass)
K1 = 80,000 N/m (spring constant of suspension system)
K2 = 500,000 N/m (spring constant of wheel and tire)
b1 = 350 N·s/m (damping constant of suspension system)
b2 = 15,020 N·s/m (damping constant of wheel and tire)
U = control force

Equations of motion

From the picture above and Newton's law we can obtain the dynamic equations as the following.

Transfer function models

Assume that all of the initial conditions are zero, so that these equations represent the situation where the vehicle wheel goes up a bump. The dynamic equations above can be expressed in the form of transfer functions by taking the Laplace transform. The specific derivation from the above equations to the transfer functions G1(s) and G2(s) is shown below, where each transfer function has an output of X1 - X2 and inputs of U and W, respectively. Find the inverse of matrix A and then multiply it with the inputs U(s) and W(s) on the right-hand side. When we want to consider the control input U(s) only, we set W(s) = 0; thus we get the transfer function G1(s). When we want to consider the disturbance input W(s) only, we set U(s) = 0; thus we get the transfer function G2(s).

Entering equations in MATLAB

We can generate the above transfer function models in MATLAB by entering the following commands in the MATLAB command window:

M1 = 2500;
M2 = 320;
K1 = 80000;
K2 = 500000;
b1 = 350;
b2 = 15020;
s = tf('s');
G1 = ((M1+M2)*s^2 + b2*s + K2) / ((M1*s^2 + b1*s + K1)*(M2*s^2 + (b1+b2)*s + (K1+K2)) - (b1*s + K1)*(b1*s + K1));
G2 = (-M1*b2*s^3 - M1*K2*s^2) / ((M1*s^2 + b1*s + K1)*(M2*s^2 + (b1+b2)*s + (K1+K2)) - (b1*s + K1)*(b1*s + K1));

Published with MATLAB 9.2. All contents licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

Feedback Systems: An Introduction for Scientists and Engineers
Karl Johan Åström and Richard M. Murray
Princeton University Press, Princeton and Oxford

Copyright 2008 by Princeton University
Press. Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540. In the United Kingdom: Princeton University Press, 6 Oxford Street, Woodstock, Oxfordshire OX20 1TW. All Rights Reserved.

Library of Congress Cataloging-in-Publication Data
Åström, Karl J. (Karl Johan), 1934-
Feedback systems: an introduction for scientists and engineers / Karl Johan Åström and Richard M. Murray.
p. cm.
Includes bibliographical references and index.
ISBN-13: 978-0-691-13576-2 (alk. paper)
ISBN-10: 0-691-13576-2 (alk. paper)
1. Feedback control systems. I. Murray, Richard M., 1963-. II. Title.
TJ216.A78 2008
629.8'3-dc22   2007061033

British Library Cataloging-in-Publication Data is available.

This book has been composed in LaTeX. The publisher would like to acknowledge the authors of this volume for providing the camera-ready copy from which this book was printed. Printed on acid-free paper. press.princeton.edu. Printed in the United States of America. 10 9 8 7 6 5 4 3 2 1

Contents

Preface
Chapter 1. Introduction
1.1 What Is Feedback
1.2 What Is Control
1.3 Feedback Examples
1.4 Feedback Properties
1.5 Simple Forms of Feedback
1.6 Further Reading
Exercises
Chapter 2. System Modeling
2.1 Modeling Concepts
2.2 State Space Models
2.3 Modeling Methodology
2.4 Modeling Examples
2.5 Further Reading
Exercises
Chapter 3. Examples
3.1 Cruise Control
3.2 Bicycle Dynamics
3.3 Operational Amplifier Circuits
3.4 Computing Systems and Networks
3.5 Atomic Force Microscopy
3.6 Drug Administration
3.7 Population Dynamics
Exercises
Chapter 4. Dynamic Behavior
4.1 Solving Differential Equations
4.2 Qualitative Analysis
4.3 Stability
4.4 Lyapunov Stability Analysis
4.5 Parametric and Nonlocal Behavior
4.6 Further Reading
Exercises
Chapter 5. Linear Systems
5.1 Basic Definitions
5.2 The Matrix Exponential
5.3 Input/Output Response
5.4 Linearization
5.5 Further Reading
Exercises
Chapter 6. State Feedback
6.1 Reachability
6.2 Stabilization
by State Feedback
6.3 State Feedback Design
6.4 Integral Action
6.5 Further Reading
Exercises
Chapter 7. Output Feedback
7.1 Observability
7.2 State Estimation
7.3 Control Using Estimated State
7.4 Kalman Filtering
7.5 A General Controller Structure
7.6 Further Reading
Exercises
Chapter 8. Transfer Functions
8.1 Frequency Domain Modeling
8.2 Derivation of the Transfer Function
8.3 Block Diagrams and Transfer Functions
8.4 The Bode Plot
8.5 Laplace Transforms
8.6 Further Reading
Exercises
Chapter 9. Frequency Domain Analysis
9.1 The Loop Transfer Function
9.2 The Nyquist Criterion
9.3 Stability Margins
9.4 Bode's Relations and Minimum Phase Systems
9.5 Generalized Notions of Gain and Phase
9.6 Further Reading
Exercises
Chapter 10. PID Control
10.1 Basic Control Functions
10.2 Simple Controllers for Complex Systems
10.3 PID Tuning
10.4 Integrator Windup
10.5 Implementation
10.6 Further Reading
Exercises
Chapter 11. Frequency Domain Design
11.1 Sensitivity Functions
11.2 Feedforward Design
11.3 Performance Specifications
11.4 Feedback Design via Loop Shaping
11.5 Fundamental Limitations
11.6 Design Example
11.7 Further Reading
Exercises
Chapter 12. Robust Performance
12.1 Modeling Uncertainty
12.2 Stability in the Presence of Uncertainty
12.3 Performance in the Presence of Uncertainty
12.4 Robust Pole Placement
12.5 Design for Robust Performance
12.6 Further Reading
Exercises
Bibliography
Index

Preface

This book provides an introduction to the basic principles and tools for the design and analysis of feedback systems. It is intended to serve a diverse audience of scientists and engineers who are interested in understanding and utilizing feedback in physical, biological, information and social systems. We have attempted to keep the mathematical
prerequisites to a minimum while being careful not to sacrifice rigor in the process. We have also attempted to make use of examples from a variety of disciplines, illustrating the generality of many of the tools while at the same time showing how they can be applied in specific application domains.

A major goal of this book is to present a concise and insightful view of the current knowledge in feedback and control systems. The field of control started by teaching everything that was known at the time and, as new knowledge was acquired, additional courses were developed to cover new techniques. A consequence of this evolution is that introductory courses have remained the same for many years, and it is often necessary to take many individual courses in order to obtain a good perspective on the field. In developing this book, we have attempted to condense the current knowledge by emphasizing fundamental concepts. We believe that it is important to understand why feedback is useful, to know the language and basic mathematics of control and to grasp the key paradigms that have been developed over the past half century. It is also important to be able to solve simple feedback problems using back-of-the-envelope techniques, to recognize fundamental limitations and difficult control problems and to have a feel for available design methods.

This book was originally developed for use in an experimental course at Caltech involving students from a wide set of backgrounds. The course was offered to undergraduates at the junior and senior levels in traditional engineering disciplines, as well as first- and second-year graduate students in engineering and science. This latter group included graduate students in biology, computer science and physics. Over the course of several years, the text has been classroom tested at Caltech and at Lund University, and the feedback from many students and colleagues has been incorporated to help improve the readability and accessibility of the material.

Because of
its intended audience, this book is organized in a slightly unusual fashion compared to many other books on feedback and control. In particular, we introduce a number of concepts in the text that are normally reserved for second-year courses on control and hence often not available to students who are not control systems majors. This has been done at the expense of certain traditional topics, which we felt that the astute student could learn independently and which are often explored through the exercises. Examples of topics that we have included are nonlinear dynamics, Lyapunov stability analysis, the matrix exponential, reachability and observability, and fundamental limits of performance and robustness. Topics that we have deemphasized include root locus techniques, lead/lag compensation and detailed rules for generating Bode and Nyquist plots by hand.

Several features of the book are designed to facilitate its dual function as a basic engineering text and as an introduction for researchers in natural, information and social sciences. The bulk of the material is intended to be used regardless of the audience and covers the core principles and tools in the analysis and design of feedback systems. Advanced sections, marked by the "dangerous bend" symbol, contain material that requires a slightly more technical background, of the sort that would be expected of senior undergraduates in engineering. A few sections are marked by two dangerous bend symbols and are intended for readers with more specialized backgrounds, identified at the beginning of the section. To limit the length of the text, several standard results and extensions are given in the exercises, with appropriate hints toward their solutions.

To further augment the printed material contained here, a companion web site has been developed and is available from the publisher's web page: http://press.princeton.edu/titles/8701.html. The web site contains a database of frequently asked questions, supplemental examples and exercises, and lecture
material for courses based on this text. The material is organized by chapter and includes a summary of the major points in the text as well as links to external resources. The web site also contains the source code for many examples in the book, as well as utilities to implement the techniques described in the text. Most of the code was originally written using MATLAB M-files but was also tested with LabView MathScript to ensure compatibility with both packages. Many files can also be run using other scripting languages such as Octave, SciLab, SysQuake and Xmath.

The first half of the book focuses almost exclusively on state space control systems. We begin in Chapter 2 with a description of modeling of physical, biological and information systems using ordinary differential equations and difference equations. Chapter 3 presents a number of examples in some detail, primarily as a reference for problems that will be used throughout the text. Following this, Chapter 4 looks at the dynamic behavior of models, including definitions of stability and more complicated nonlinear behavior. We provide advanced sections in this chapter on Lyapunov stability analysis because we find that it is useful in a broad array of applications and is frequently a topic that is not introduced until later in one's studies.

The remaining three chapters of the first half of the book focus on linear systems, beginning with a description of input/output behavior in Chapter 5. In Chapter 6, we formally introduce feedback systems by demonstrating how state space control laws can be designed. This is followed in Chapter 7 by material on output feedback and estimators. Chapters 6 and 7 introduce the key concepts of reachability and observability, which give tremendous insight into the choice of actuators and sensors, whether for engineered or natural systems.

The second half of the book presents material that is often considered to be from the field of classical control. This includes the transfer function,
introduced in Chapter 8, which is a fundamental tool for understanding feedback systems. Using transfer functions, one can begin to analyze the stability of feedback systems using frequency domain analysis, including the ability to reason about the closed loop behavior of a system from its open loop characteristics. This is the subject of Chapter 9, which revolves around the Nyquist stability criterion. In Chapters 10 and 11, we again look at the design problem, focusing first on proportional-integral-derivative (PID) controllers and then on the more general process of loop shaping. PID control is by far the most common design technique in control systems and a useful tool for any student. The chapter on frequency domain design introduces many of the ideas of modern control theory, including the sensitivity function. In Chapter 12, we combine the results from the second half of the book to analyze some of the fundamental tradeoffs between robustness and performance. This is also a key chapter illustrating the power of the techniques that have been developed and serving as an introduction for more advanced studies.

The book is designed for use in a 10- to 15-week course in feedback systems that provides many of the key concepts needed in a variety of disciplines. For a 10-week course, Chapters 1-2, 4-6 and 8-11 can each be covered in a week's time, with the omission of some topics from the final chapters. A more leisurely course, spread out over 14-15 weeks, could cover the entire book, with 2 weeks on modeling (Chapters 2 and 3), particularly for students without much background in ordinary differential equations, and 2 weeks on robust performance (Chapter 12).

The mathematical prerequisites for the book are modest and in keeping with our goal of providing an introduction that serves a broad audience. We assume familiarity with the basic tools of linear algebra, including matrices, vectors and eigenvalues. These are typically covered in a sophomore-level course on the subject, and the textbooks by Apostol [10],
Arnold [13] and Strang [187] can serve as good references. Similarly, we assume basic knowledge of differential equations, including the concepts of homogeneous and particular solutions for linear ordinary differential equations in one variable. Apostol [10] and Boyce and DiPrima [42] cover this material well. Finally, we also make use of complex numbers and functions and, in some of the advanced sections, more detailed concepts in complex variables that are typically covered in a junior-level engineering or physics course in mathematical methods. Apostol [9] or Stewart [186] can be used for the basic material, with Ahlfors [6], Marsden and Hoffman [146] or Saff and Snider [172] being good references for the more advanced material. We have chosen not to include appendices summarizing these various topics since there are a number of good books available.

One additional choice that we felt was important was the decision not to rely on a knowledge of Laplace transforms in the book. While their use is by far the most common approach to teaching feedback systems in engineering, many students in the natural and information sciences may lack the necessary mathematical background. Since Laplace transforms are not required in any essential way, we have included them only in an advanced section intended to tie things together for students with that background. Of course, we make tremendous use of transfer functions, which we introduce through the notion of response to exponential inputs, an approach we feel is more accessible to a broad array of scientists and engineers. For classes in which students have already had Laplace transforms, it should be quite natural to build on this background in the appropriate sections of the text.

Acknowledgments

The authors would like to thank the many people who helped during the preparation of this book. The idea for writing this book came in part from a report on future directions in control [155], to which Stephen Boyd, Roger Brockett, John Doyle and Gunter Stein were
major contributors. Kristi Morgansen and Hideo Mabuchi helped teach early versions of the course at Caltech on which much of the text is based, and Steve Waydo served as the head TA for the course taught at Caltech in 2003-2004 and provided numerous comments and corrections. Charlotta Johnsson and Anton Cervin taught from early versions of the manuscript in Lund in 2003-2007 and gave very useful feedback. Other colleagues and students who provided feedback and advice include Leif Andersson, John Carson, K. Mani Chandy, Michel Charpentier, Domitilla Del Vecchio, Kate Galloway, Per Hagander, Toivo Henningsson, Perby, Joseph Hellerstein, George Hines, Tore Hägglund, Cole Lepine, Anders Rantzer, Anders Robertsson, Dawn Tilbury and Francisco Zabala. The reviewers for Princeton University Press and Tom Robbins at NI Press also provided valuable comments that significantly improved the organization, layout and focus of the book. Our editor, Vickie Kearn, was a great source of encouragement and help throughout the publishing process. Finally, we would like to thank Caltech, Lund University and the University of California at Santa Barbara for providing many resources, stimulating colleagues and students, and pleasant working environments that greatly aided in the writing of this book.

Karl Johan Åström, Lund, Sweden and Santa Barbara, California
Richard M. Murray, Pasadena, California

Chapter One. Introduction

Feedback is a central feature of life. The process of feedback governs how we grow, respond to stress and challenge, and regulate factors such as body temperature, blood pressure and cholesterol level. The mechanisms operate at every level, from the interaction of proteins in cells to the interaction of organisms in complex ecologies.
(M. B. Hoagland and B. Dodson, The Way Life Works, 1995 [99])

In this chapter we provide an introduction to the basic concept of feedback and the related engineering discipline of control. We focus on both historical and current examples, with the intention of providing the context for current
tools in feedback and control. Much of the material in this chapter is adapted from [155], and the authors gratefully acknowledge the contributions of Roger Brockett and Gunter Stein to portions of this chapter.

1.1 What Is Feedback

A dynamical system is a system whose behavior changes over time, often in response to external stimulation or forcing. The term feedback refers to a situation in which two or more dynamical systems are connected together such that each system influences the other and their dynamics are thus strongly coupled. Simple causal reasoning about a feedback system is difficult because the first system influences the second and the second system influences the first, leading to a circular argument. This makes reasoning based on cause and effect tricky, and it is necessary to analyze the system as a whole. A consequence of this is that the behavior of feedback systems is often counterintuitive, and it is therefore necessary to resort to formal methods to understand them.

Figure 1.1 illustrates in block diagram form the idea of feedback.

Figure 1.1. Open and closed loop systems. (a) Closed loop: the output of system 1 is used as the input of system 2, and the output of system 2 becomes the input of system 1, creating a closed loop system. (b) Open loop: the interconnection between system 2 and system 1 is removed, and the system is said to be open loop.

Figure 1.2. The centrifugal governor and the steam engine. The centrifugal governor on the left consists of a set of flyballs that spread apart as the speed of the engine increases. The steam engine on the right uses a centrifugal governor (above and to the left of the flywheel) to regulate its speed. (Credit: Machine a Vapeur Horizontale de Philip Taylor, 1828)

We often use the terms open loop and closed loop when referring to such systems. A system is said to be a closed loop system if the systems are interconnected in a cycle, as shown in Figure 1.1a. If we break the interconnection, we refer to
the configuration as an open loop system, as shown in Figure 1.1b.

As the quote at the beginning of this chapter illustrates, a major source of examples of feedback systems is biology. Biological systems make use of feedback in an extraordinary number of ways, on scales ranging from molecules to cells to organisms to ecosystems. One example is the regulation of glucose in the bloodstream through the production of insulin and glucagon by the pancreas. The body attempts to maintain a constant concentration of glucose, which is used by the body's cells to produce energy. When glucose levels rise (after eating a meal, for example), the hormone insulin is released and causes the body to store excess glucose in the liver. When glucose levels are low, the pancreas secretes the hormone glucagon, which has the opposite effect. Referring to Figure 1.1, we can view the liver as system 1 and the pancreas as system 2. The output from the liver is the glucose concentration in the blood, and the output from the pancreas is the amount of insulin or glucagon produced. The interplay between insulin and glucagon secretions throughout the day helps to keep the blood-glucose concentration constant, at about 90 mg per 100 mL of blood.

An early engineering example of a feedback system is a centrifugal governor, in which the shaft of a steam engine is connected to a flyball mechanism that is itself connected to the throttle of the steam engine, as illustrated in Figure 1.2. The system is designed so that as the speed of the engine increases (perhaps because of a lessening of the load on the engine), the flyballs spread apart and a linkage causes the throttle on the steam engine to be closed. This in turn slows down the engine, which causes the flyballs to come back together. We can model this system as a closed loop system by taking system 1 as the steam engine and system 2 as the governor. When properly designed, the flyball governor maintains a constant speed of the engine, roughly independent of the
loading conditions. The centrifugal governor was an enabler of the successful Watt steam engine, which fueled the industrial revolution.

Feedback has many interesting properties that can be exploited in designing systems. As in the case of glucose regulation or the flyball governor, feedback can make a system resilient toward external influences. It can also be used to create linear behavior out of nonlinear components, a common approach in electronics. More generally, feedback allows a system to be insensitive both to external disturbances and to variations in its individual elements. Feedback has potential disadvantages as well. It can create dynamic instabilities in a system, causing oscillations or even runaway behavior. Another drawback, especially in engineering systems, is that feedback can introduce unwanted sensor noise into the system, requiring careful filtering of signals. It is for these reasons that a substantial portion of the study of feedback systems is devoted to developing an understanding of dynamics and a mastery of techniques in dynamical systems.

Feedback systems are ubiquitous in both natural and engineered systems. Control systems maintain the environment, lighting and power in our buildings and factories; they regulate the operation of our cars, consumer electronics and manufacturing processes; they enable our transportation and communications systems; and they are critical elements in our military and space systems. For the most part they are hidden from view, buried within the code of embedded microprocessors, executing their functions accurately and reliably. Feedback has also made it possible to increase dramatically the precision of instruments such as atomic force microscopes (AFMs) and telescopes.

In nature, homeostasis in biological systems maintains thermal, chemical and biological conditions through feedback. At the other end of the size scale, global climate dynamics depend on the feedback interactions between the atmosphere, the oceans, the land and the sun. Ecosystems are
filled with examples of feedback due to the complex interactions between animal and plant life. Even the dynamics of economies are based on the feedback between individuals and corporations through markets and the exchange of goods and services.

1.2 What Is Control

The term control has many meanings and often varies between communities. In this book, we define control to be the use of algorithms and feedback in engineered systems. Thus, control includes such examples as feedback loops in electronic amplifiers, setpoint controllers in chemical and materials processing, "fly-by-wire" systems on aircraft, and even router protocols that control traffic flow on the Internet. Emerging applications include high-confidence software systems, autonomous vehicles and robots, real-time resource management systems, and biologically engineered systems. At its core, control is an information science and includes the use of information in both analog and digital representations.

Figure 1.3: Components of a computer-controlled system. The upper dashed box represents the process dynamics, which include the sensors and actuators in addition to the dynamical system being controlled. Noise and external disturbances can perturb the dynamics of the process. The controller is shown in the lower dashed box. It consists of a filter and analog-to-digital (A/D) and digital-to-analog (D/A) converters, as well as a computer that implements the control algorithm. A system clock controls the operation of the controller, synchronizing the A/D, D/A, and computing processes. The operator input is also fed to the computer as an external input.

A modern controller senses the operation of a system, compares it against the desired behavior, computes corrective actions based on a model of the system's response to external inputs, and actuates the system to effect the desired change. This basic feedback loop of sensing, computation, and actuation is the central concept in control. The
key issues in designing control logic are ensuring that the dynamics of the closed loop system are stable (bounded disturbances give bounded errors) and that they have additional desired behavior (good disturbance attenuation, fast responsiveness to changes in operating point, etc.). These properties are established using a variety of modeling and analysis techniques that capture the essential dynamics of the system and permit the exploration of possible behaviors in the presence of uncertainty, noise, and component failure. A typical example of a control system is shown in Figure 1.3. The basic elements of sensing, computation, and actuation are clearly seen. In modern control systems, computation is typically implemented on a digital computer, requiring the use of analog-to-digital (A/D) and digital-to-analog (D/A) converters. Uncertainty enters the system through noise in sensing and actuation subsystems, external disturbances that affect the underlying system operation, and uncertain dynamics in the system (parameter errors, unmodeled effects, etc.). The algorithm that computes the control action as a function of the sensor values is often called a control law. The system can be influenced externally by an operator who introduces command signals to the system. Control engineering relies on and shares tools from physics (dynamics and modeling), computer science (information and software), and operations research (optimization, probability theory, and game theory), but it is also different from these subjects in both insights and approach. Perhaps the strongest area of overlap between control and other disciplines is in the modeling of physical systems, which is common across all areas of engineering and science. One of the fundamental differences between control-oriented modeling and modeling in other disciplines is the way in which interactions between subsystems are represented. Control relies on a type of input/output modeling that allows many new insights into the behavior of systems, such
as disturbance attenuation and stable interconnection. Model reduction, where a simpler (lower-fidelity) description of the dynamics is derived from a high-fidelity model, is also naturally described in an input/output framework. Perhaps most importantly, modeling in a control context allows the design of robust interconnections between subsystems, a feature that is crucial in the operation of all large engineered systems. Control is also closely associated with computer science, since virtually all modern control algorithms for engineering systems are implemented in software. However, control algorithms and software can be very different from traditional computer software because of the central role of the dynamics of the system and the real-time nature of the implementation.

1.3 Feedback Examples

Feedback has many interesting and useful properties. It makes it possible to design precise systems from imprecise components and to make relevant quantities in a system change in a prescribed fashion. An unstable system can be stabilized using feedback, and the effects of external disturbances can be reduced. Feedback also offers new degrees of freedom to a designer by exploiting sensing, actuation, and computation. In this section we survey some of the important applications and trends for feedback in the world around us.

Early Technological Examples

The proliferation of control in engineered systems occurred primarily in the latter half of the 20th century. There are some important exceptions, such as the centrifugal governor described earlier and the thermostat (Figure 1.4a), designed at the turn of the century to regulate the temperature of buildings. The thermostat, in particular, is a simple example of feedback control that everyone is familiar with. The device measures the temperature in a building, compares that temperature to a desired setpoint, and uses the feedback error between the two to operate the heating plant, e.g., to turn heat on when the temperature is too low and to turn it off when the temperature
is too high. This explanation captures the essence of feedback, but it is a bit too simple even for a basic device such as the thermostat.

Figure 1.4: Early control devices. (a) Honeywell T87 thermostat, originally introduced in 1953. The thermostat controls whether a heater is turned on by comparing the current temperature in a room to a desired value that is set using a dial. (b) Chrysler cruise control system, introduced in the 1958 Chrysler Imperial [170]. A centrifugal governor is used to detect the speed of the vehicle and actuate the throttle. The reference speed is specified through an adjustment spring. (Left figure courtesy of Honeywell International, Inc.)

Because lags and delays exist in the heating plant and sensor, a good thermostat does a bit of anticipation, turning the heater off before the error actually changes sign. This avoids excessive temperature swings and cycling of the heating plant. This interplay between the dynamics of the process and the operation of the controller is a key element in modern control systems design. There are many other control system examples that have developed over the years with progressively increasing levels of sophistication. An early system with broad public exposure was the cruise control option introduced on automobiles in 1958 (see Figure 1.4b). Cruise control illustrates the dynamic behavior of closed loop feedback systems in action: the slowdown error as the system climbs a grade, the gradual reduction of that error due to integral action in the controller, the small overshoot at the top of the climb, etc. Later control systems on automobiles, such as emission controls and fuel-metering systems, have achieved major reductions of pollutants and increases in fuel economy.
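The anticipatory thermostat behavior described above can be sketched as a simple on/off controller with hysteresis plus an early-switch-off term. This is an illustrative sketch only: the switching rule, the room model, and all numbers below are hypothetical, not taken from the text.

```python
# Illustrative sketch (hypothetical parameters): a thermostat relay with a
# simple "anticipation" term, switching the heater off slightly before the
# setpoint is reached to compensate for heat still stored in the plant.

def thermostat_step(temp, setpoint, heater_on, band=1.0, anticipation=0.5):
    """Return the new heater state given the current temperature.

    band         -- hysteresis width (deg) to avoid rapid on/off cycling
    anticipation -- how far below the setpoint the heater switches off,
                    anticipating the lag in the heating plant
    """
    if heater_on:
        # Switch off early: the plant keeps delivering heat after shutoff.
        return temp < setpoint - anticipation
    else:
        # Switch back on only once the temperature drops below the band.
        return temp < setpoint - band

# Tiny simulation: first-order room model with heat loss to the outside.
temp, heater_on, history = 15.0, False, []
for _ in range(200):
    heater_on = thermostat_step(temp, setpoint=20.0, heater_on=heater_on)
    heat_in = 0.5 if heater_on else 0.0
    temp += heat_in - 0.05 * (temp - 10.0)   # heating minus loss outdoors
    history.append(temp)
```

With these made-up numbers the temperature settles into a narrow cycle just below the setpoint instead of swinging past it, which is the effect anticipation is meant to produce.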
Power Generation and Transmission

Access to electrical power has been one of the major drivers of technological progress in modern society. Much of the early development of control was driven by the generation and distribution of electrical power. Control is mission critical for power systems, and there are many control loops in individual power stations. Control is also important for the operation of the whole power network, since it is difficult to store energy and it is thus necessary to match production to consumption. Power management is a straightforward regulation problem for a system with one generator and one power consumer, but it is more difficult in a highly distributed system with many generators and long distances between consumption and generation. Power demand can change rapidly in an unpredictable manner, and combining generators and consumers into large networks makes it possible to share loads among many suppliers and to average consumption among many customers. Large transcontinental and transnational power systems have therefore been built, such as the one shown in Figure 1.5.

Figure 1.5: A small portion of the European power network. By 2008, European power suppliers will operate a single interconnected network covering a region from the Arctic to the Mediterranean and from the Atlantic to the Urals. In 2004 the installed power was more than 700 GW (7 × 10^11 W). (Source: UCTE, www.ucte.org)

Most electricity is distributed by alternating current (AC) because the transmission voltage can be changed with small power losses using transformers. Alternating current generators can deliver power only if the generators are synchronized to the voltage variations in the network. This means that the rotors of all generators in a network must be synchronized. To achieve this with local decentralized controllers and a small amount of interaction is a challenging problem. Sporadic low-frequency oscillations between distant regions have been observed when regional power grids have
been interconnected [134]. Safety and reliability are major concerns in power systems. There may be disturbances due to trees falling down on power lines, lightning, or equipment failures. There are sophisticated control systems that attempt to keep the system operating even when there are large disturbances. The control actions can be to reduce voltage, to break up the net into subnets, or to switch off lines and power users. These safety systems are an essential element of power distribution systems, but in spite of all precautions there are occasionally failures in large power systems. The power system is thus a nice example of a complicated distributed system where control is executed on many levels and in many different ways.

Figure 1.6: Military aerospace systems. (a) The F/A-18 aircraft is one of the first production military fighters to use fly-by-wire technology. (b) The X-45 (UCAV) unmanned aerial vehicle is capable of autonomous flight, using inertial measurement sensors and the global positioning system (GPS) to monitor its position relative to a desired trajectory. (Photographs courtesy of NASA Dryden Flight Research Center)

Aerospace and Transportation

In aerospace, control has been a key technological capability tracing back to the beginning of the 20th century. Indeed, the Wright brothers are correctly famous not for demonstrating simply powered flight but controlled powered flight. Their early Wright Flyer incorporated moving control surfaces: vertical fins and canards, and warpable wings that allowed the pilot to regulate the aircraft's flight. In fact, the aircraft itself was not stable, so continuous pilot corrections were mandatory. This early example of controlled flight was followed by a fascinating success story of continuous improvements in flight control technology, culminating in the high-performance, highly reliable automatic flight control systems we see in modern commercial and military aircraft today (Figure 1.6). Similar success
stories for control technology have occurred in many other application areas. Early World War II bombsights and fire control servo systems have evolved into today's highly accurate radar-guided guns and precision-guided weapons. Early failure-prone space missions have evolved into routine launch operations, manned landings on the moon, permanently manned space stations, robotic vehicles roving Mars, orbiting vehicles at the outer planets, and a host of commercial and military satellites serving various surveillance, communication, navigation, and earth observation needs. Cars have advanced from manually tuned mechanical/pneumatic technology to computer-controlled operation of all major functions, including fuel injection, emission control, cruise control, braking, and cabin comfort. Current research in aerospace and transportation systems is investigating the application of feedback to higher levels of decision making, including logical regulation of operating modes, vehicle configurations, payload configurations, and health status. These have historically been performed by human operators, but today that boundary is moving, and control systems are increasingly taking on these functions.

Figure 1.7: Materials processing. Modern materials are processed under carefully controlled conditions, using reactors such as the metal organic chemical vapor deposition (MOCVD) reactor shown on the left, which was used for manufacturing superconducting thin films. Using lithography, chemical etching, vapor deposition, and other techniques, complex devices can be built, such as the IBM cell processor shown on the right. (MOCVD image courtesy of Bob Kee. IBM cell processor photograph courtesy Tom Way, IBM Corporation; unauthorized use not permitted.)

Another dramatic trend on the horizon is the use of large collections of distributed entities with local computation, global communication connections, little regularity imposed by the laws of physics, and no possibility of imposing centralized control actions.
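The distributed-entity trend just described, many agents acting on local information with no central coordinator, can be illustrated with a consensus (distributed averaging) sketch. This is an assumed example, not taken from the text: each node on a hypothetical ring network repeatedly nudges its value toward those of its two neighbors, and the whole network converges to the global average without any centralized action.

```python
# Illustrative sketch (assumed example): distributed averaging ("consensus")
# on a ring of nodes. Each node updates using only its neighbors' values,
# yet the network as a whole converges to the global average.

values = [10.0, 2.0, 7.0, 5.0, 1.0, 8.0]   # one state per node
gain = 0.2                                  # local update gain

for _ in range(200):
    n = len(values)
    values = [
        v + gain * ((values[(i - 1) % n] - v) + (values[(i + 1) % n] - v))
        for i, v in enumerate(values)
    ]

# The symmetric updates conserve the sum, so every node approaches the
# average of the initial values (5.5 here).
```

The design point is that each update uses only neighbor-to-neighbor information, mirroring the "local computation, no centralized control" setting; the gain must be small enough (relative to the node degree) for the iteration to converge.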
Examples of this trend include the national airspace management problem, automated highway and traffic management, and command and control for future battlefields.

Materials and Processing

The chemical industry is responsible for the remarkable progress in developing new materials that are key to our modern society. In addition to the continuing need to improve product quality, several other factors in the process control industry are drivers for the use of control. Environmental statutes continue to place stricter limitations on the production of pollutants, forcing the use of sophisticated pollution control devices. Environmental safety considerations have led to the design of smaller storage capacities to diminish the risk of major chemical leakage, requiring tighter control on upstream processes and, in some cases, supply chains. And large increases in energy costs have encouraged engineers to design plants that are highly integrated, coupling many processes that used to operate independently. All of these trends increase the complexity of these processes and the performance requirements for the control systems, making control system design increasingly challenging. Some examples of materials-processing technology are shown in Figure 1.7. As in many other application areas, new sensor technology is creating new opportunities for control. Online sensors, including laser backscattering, video microscopy, and ultraviolet, infrared, and Raman spectroscopy, are becoming more robust and less expensive and are appearing in more manufacturing processes. Many of these sensors are already being used by current process control systems, but more sophisticated signal-processing and control techniques are needed to use more effectively the real-time information provided by these sensors. Control engineers also contribute to the design of even better sensors, which are still needed, for example, in the microelectronics industry. As elsewhere, the challenge is making use of the large amounts of data provided by these new sensors in an effective manner. In addition, a control-oriented approach to modeling the essential physics of the underlying processes is required to understand the fundamental limits on observability of the internal state through sensor data.

Figure 1.8: The voltage clamp method for measuring ion currents in cells using feedback. A pipette is used to place an electrode in a cell (left and middle) and maintain the potential of the cell at a fixed level. The internal voltage in the cell is v_i and the voltage of the external fluid is v_e. The feedback system (right) controls the current I into the cell so that the voltage drop across the cell membrane, Δv = v_i − v_e, is equal to its reference value Δv_r. The current I is then equal to the ion current.

Instrumentation

The measurement of physical variables is of prime interest in science and engineering. Consider, for example, an accelerometer, where early instruments consisted of a mass suspended on a spring with a deflection sensor. The precision of such an instrument depends critically on accurate calibration of the spring and the sensor. There is also a design compromise, because a weak spring gives high sensitivity but low bandwidth. A different way of measuring acceleration is to use force feedback. The spring is replaced by a voice coil that is controlled so that the mass remains at a constant position. The acceleration is proportional to the current through the voice coil. In such an instrument, the precision depends entirely on the calibration of the voice coil and does not depend on the sensor, which is used only as the feedback signal. The sensitivity/bandwidth compromise is also avoided. This way of using feedback has been applied to many different engineering fields and has resulted in instruments with dramatically improved performance. Force feedback is also used in haptic devices for manual control. Another important application of feedback is in instrumentation for biological
systems. Feedback is widely used to measure ion currents in cells using a device called a voltage clamp, which is illustrated in Figure 1.8. Hodgkin and Huxley used the voltage clamp to investigate propagation of action potentials in the axon of the giant squid. In 1963 they shared the Nobel Prize in Medicine with Eccles for "their discoveries concerning the ionic mechanisms involved in excitation and inhibition in the peripheral and central portions of the nerve cell membrane." A refinement of the voltage clamp, called a patch clamp, made it possible to measure exactly when a single ion channel is opened or closed. This was developed by Neher and Sakmann, who received the 1991 Nobel Prize in Medicine "for their discoveries concerning the function of single ion channels in cells." There are many other interesting and useful applications of feedback in scientific instruments. The development of the mass spectrometer is an early example. In a 1935 paper, Nier observed that the deflection of ions depends on both the magnetic and the electric fields [158]. Instead of keeping both fields constant, Nier let the magnetic field fluctuate, and the electric field was controlled to keep the ratio between the fields constant. Feedback was implemented using vacuum tube amplifiers. This scheme was crucial for the development of mass spectroscopy. The Dutch engineer van der Meer invented a clever way to use feedback to maintain a good-quality, high-density beam in a particle accelerator [153]. The idea is to sense particle displacement at one point in the accelerator and apply a correcting signal at another point. This scheme, called stochastic cooling, was awarded the Nobel Prize in Physics in 1984. The method was essential for the successful experiments at CERN, where the existence of the particles W and Z associated with the weak force was first demonstrated. The 1986 Nobel Prize in Physics, awarded to Binnig and Rohrer for their design of the scanning tunneling microscope, is another example
of an innovative use of feedback. The key idea is to move a narrow tip on a cantilever beam across a surface and to register the forces on the tip [34]. The deflection of the tip is measured using tunneling. The tunneling current is used by a feedback system to control the position of the cantilever base so that the tunneling current is constant, an example of force feedback. The accuracy is so high that individual atoms can be registered. A map of the atoms is obtained by moving the base of the cantilever horizontally. The performance of the control system is directly reflected in the image quality and scanning speed. This example is described in additional detail in Chapter 3.

Robotics and Intelligent Machines

The goal of cybernetic engineering, already articulated in the 1940s and even before, has been to implement systems capable of exhibiting highly flexible or "intelligent" responses to changing circumstances. In 1948 the MIT mathematician Norbert Wiener gave a widely read account of cybernetics [200]. A more mathematical treatment of the elements of engineering cybernetics was presented by H. S. Tsien in 1954, driven by problems related to the control of missiles [195]. Together, these works and others of that time form much of the intellectual basis for modern work in robotics and control. Two accomplishments that demonstrate the successes of the field are the Mars Exploratory Rovers and entertainment robots such as the Sony AIBO, shown in Figure 1.9.

Figure 1.9: Robotic systems. (a) Spirit, one of the two Mars Exploratory Rovers that landed on Mars in January 2004. (b) The Sony AIBO Entertainment Robot, one of the first entertainment robots to be mass-marketed. Both robots make use of feedback between sensors, actuators, and computation to function in unknown environments. (Photographs courtesy of Jet Propulsion Laboratory and Sony Electronics, Inc.)

The two Mars Exploratory Rovers, launched by the Jet Propulsion Laboratory (JPL), maneuvered on the surface of Mars for more than 4
years, starting in January 2004, and sent back pictures and measurements of their environment. The Sony AIBO robot debuted in June 1999 and was the first entertainment robot to be mass-marketed by a major international corporation. It was particularly noteworthy because of its use of artificial intelligence (AI) technologies that allowed it to act in response to external stimulation and its own judgment. This higher level of feedback is a key element in robotics, where issues such as obstacle avoidance, goal seeking, learning, and autonomy are prevalent. Despite the enormous progress in robotics over the last half-century, in many ways the field is still in its infancy. Today's robots still exhibit simple behaviors compared with humans, and their ability to locomote, interpret complex sensory inputs, perform higher-level reasoning, and cooperate together in teams is limited. Indeed, much of Wiener's vision for robotics and intelligent machines remains unrealized. While advances are needed in many fields to achieve this vision (including advances in sensing, actuation, and energy storage), the opportunity to combine the advances of the AI community in planning, adaptation, and learning with the techniques in the control community for modeling, analysis, and design of feedback systems presents a renewed path for progress.

Networks and Computing Systems

Control of networks is a large research area spanning many topics, including congestion control, routing, data caching, and power management. Several features of these control problems make them very challenging. The dominant feature is the extremely large scale of the system; the Internet is probably the largest feedback control system humans have ever built. Another is the decentralized nature of the control problem; decisions must be made quickly and based only on local information. Stability is complicated by the presence of varying time lags, as information about the network state can be observed or relayed to controllers only after a delay, and the effect of a local control action can be felt throughout the network only after substantial delay. Uncertainty and variation in the network, through network topology, transmission channel characteristics, traffic demand, and available resources, may change constantly and unpredictably. Other complicating issues are the diverse traffic characteristics, in terms of arrival statistics at both the packet and flow time scales, and the different requirements for quality of service that the network must support.

Figure 1.10: A multitier system for services on the Internet. In the complete system, shown schematically in (a), users request information from a set of computers (tier 1), which in turn collect information from other computers (tiers 2 and 3). The individual server shown in (b) has a set of reference parameters set by a (human) system operator, with feedback used to maintain the operation of the system in the presence of uncertainty. (Based on Hellerstein et al. [97])

Related to the control of networks is control of the servers that sit on these networks. Computers are key components of the systems of routers, web servers, and database servers used for communication, electronic commerce, advertising, and information storage. While hardware costs for computing have decreased dramatically, the cost of operating these systems has increased because of the difficulty in managing and maintaining these complex, interconnected systems. The situation is similar to the early phases of process control, when feedback was first introduced to control industrial processes. As in process control, there are interesting possibilities for increasing performance and decreasing costs by applying feedback. Several promising uses of feedback in the operation of computer systems are described in the book by Hellerstein et al. [97]. A typical example of a multilayer system for e-commerce
is shown in Figure 1.10a. The system has several tiers of servers. The edge server accepts incoming requests and routes them to the HTTP server tier, where they are parsed and distributed to the application servers. The processing for different requests can vary widely, and the application servers may also access external servers managed by other organizations. Control of an individual server in a layer is illustrated in Figure 1.10b. A quantity representing the quality of service or cost of operation, such as response time, throughput, service rate, or memory usage, is measured in the computer. The control variables might represent incoming messages accepted, priorities in the operating system, or memory allocation. The feedback loop then attempts to maintain quality-of-service variables within a target range of values.

Economics

The economy is a large dynamical system with many actors: governments, organizations, companies, and individuals. Governments control the economy through laws and taxes, the central banks by setting interest rates, and companies by setting prices and making investments. Individuals control the economy through purchases, savings, and investments. Many efforts have been made to model the system, both at the macro level and at the micro level, but this modeling is difficult because the system is strongly influenced by the behaviors of the different actors in the system. Keynes [122] developed a simple model to understand relations among gross national product, investment, consumption, and government spending. One of Keynes' observations was that under certain conditions, e.g., during the 1930s depression, an increase in the investment of government spending could lead to a larger increase in the gross national product. This idea was used by several governments to try to alleviate the depression. Keynes' ideas can be captured by a simple model that is discussed in Exercise 2.4. A perspective on the modeling and control of economic systems can be obtained from the
work of some economists who have received the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel, popularly called the Nobel Prize in Economics. Paul A. Samuelson received the prize in 1970 for "the scientific work through which he has developed static and dynamic economic theory and actively contributed to raising the level of analysis in economic science." Lawrence Klein received the prize in 1980 for the development of large dynamical models with many parameters that were fitted to historical data [126], e.g., a model of the U.S. economy in the period 1929–1952. Other researchers have modeled other countries and other periods. In 1997 Myron Scholes shared the prize with Robert Merton for a new method to determine the value of derivatives. A key ingredient was a dynamic model of the variation of stock prices that is widely used by banks and investment companies. In 2004 Finn E. Kydland and Edward C. Prescott shared the economics prize for "their contributions to dynamic macroeconomics: the time consistency of economic policy and the driving forces behind business cycles," a topic that is clearly related to dynamics and control. One of the reasons why it is difficult to model economic systems is that there are no conservation laws. A typical example is that the value of a company, as expressed by its stock, can change rapidly and erratically. There are, however, some areas with conservation laws that permit accurate modeling. One example is the flow of products from a manufacturer to a retailer, as illustrated in Figure 1.11. The products are physical quantities that obey a conservation law, and the system can be modeled by accounting for the number of products in the different inventories. There are considerable economic benefits in controlling supply chains so that products are available to customers while minimizing products that are in storage. The real problems are more complicated than indicated in the figure because there may be
Figure 1.11: Supply chain dynamics (after Forrester [75]). Products flow from the producer to the customer through distributors and retailers, as indicated by the solid lines. There are typically many factories and warehouses, and even more distributors and retailers. The dashed lines show the upward flow of orders. The numbers in the circles represent the delays in the flow of information or materials. Multiple feedback loops are present as each agent tries to maintain the proper inventory level.

many different products, there may be different factories that are geographically distributed, and the factories may require raw material or subassemblies. Control of supply chains was proposed by Forrester in 1961 [75] and is now growing in importance. Considerable economic benefits can be obtained by using models to minimize inventories. Their use accelerated dramatically when information technology was applied to predict sales, keep track of products, and enable just-in-time manufacturing. Supply chain management has contributed significantly to the growing success of global distributors. Advertising on the Internet is an emerging application of control. With network-based advertising it is easy to measure the effect of different marketing strategies quickly. The response of customers can then be modeled, and feedback strategies can be developed.

Feedback in Nature

Many problems in the natural sciences involve understanding aggregate behavior in complex large-scale systems. This behavior emerges from the interaction of a multitude of simpler systems with intricate patterns of information flow. Representative examples can be found in fields ranging from embryology to seismology. Researchers who specialize in the study of specific complex systems often develop an intuitive emphasis on analyzing the role of feedback (or interconnection) in facilitating and stabilizing aggregate behavior. While sophisticated theories have been developed by domain experts for the analysis of various
complex systems, the development of a rigorous methodology that can discover and exploit common features and essential mathematical structure is just beginning to emerge. Advances in science and technology are creating a new understanding of the underlying dynamics and the importance of feedback in a wide variety of natural and technological systems. We briefly highlight three application areas here.

Biological Systems. A major theme currently of interest to the biology community is the science of reverse (and eventually forward) engineering of biological control networks such as the one shown in Figure 1.12. There are a wide variety of biological phenomena that provide a rich source of examples of control, including gene regulation and signal transduction; hormonal, immunological, and cardiovascular feedback mechanisms; muscular control and locomotion; active sensing, vision, and proprioception; attention and consciousness; and population dynamics and epidemics. Each of these (and many more) provide opportunities to figure out what works, how it works, and what we can do to affect it.

Figure 1.12: The wiring diagram of the growth-signaling circuitry of the mammalian cell [95]. The major pathways that are thought to play a role in cancer are indicated in the diagram. Lines represent interactions between genes and proteins in the cell. Lines ending in arrowheads indicate activation of the given gene or pathway; lines ending in a T-shaped head indicate repression. (Used with permission of Elsevier Ltd. and the authors.)

One interesting feature of biological systems is the frequent use of positive feedback to shape the dynamics of the system. Positive feedback can be used to create switchlike behavior through autoregulation of a gene, and to create oscillations such as those present in the cell cycle, central pattern generators, or circadian rhythm.

Ecosystems. In contrast to individual cells and organisms, emergent properties of aggregations and ecosystems inherently reflect selection mechanisms that act on
multiple levels, and primarily on scales well below that of the system as a whole. Because ecosystems are complex multiscale dynamical systems, they provide a broad range of new challenges for the modeling and analysis of feedback systems. Recent experience in applying tools from control and dynamical systems to bacterial networks suggests that much of the complexity of these networks is due to the presence of multiple layers of feedback loops that provide robust functionality to the individual cell. Yet in other instances, events at the cell level benefit the colony at the expense of the individual. Systems level analysis can be applied to ecosystems with the goal of understanding the robustness of such systems and the extent to which decisions and events affecting individual species contribute to the robustness and/or fragility of the ecosystem as a whole.

Environmental Science. It is now indisputable that human activities have altered the environment on a global scale. Problems of enormous complexity challenge researchers in this area, and first among these is to understand the feedback systems that operate on the global scale. One of the challenges in developing such an understanding is the multiscale nature of the problem, with detailed understanding of the dynamics of microscale phenomena such as microbiological organisms being a necessary component of understanding global phenomena such as the carbon cycle.

1.4 Feedback Properties

Feedback is a powerful idea which, as we have seen, is used extensively in natural and technological systems. The principle of feedback is simple: base correcting actions on the difference between desired and actual performance. In engineering, feedback has been rediscovered and patented many times in many different contexts. The use of feedback has often resulted in vast improvements in system capability, and these improvements have sometimes been revolutionary, as discussed above. The reason for this is that feedback has some truly remarkable
properties. In this section we will discuss some of the properties of feedback that can be understood intuitively. This intuition will be formalized in subsequent chapters.

Robustness to Uncertainty

One of the key uses of feedback is to provide robustness to uncertainty. By measuring the difference between the sensed value of a regulated signal and its desired value, we can supply a corrective action. If the system undergoes some change that affects the regulated signal, then we sense this change and try to force the system back to the desired operating point. This is precisely the effect that Watt exploited in his use of the centrifugal governor on steam engines.

As an example of this principle, consider the simple feedback system shown in Figure 1.13. In this system, the speed of a vehicle is controlled by adjusting the amount of gas flowing to the engine. Simple proportional-integral (PI) feedback is used to make the amount of gas depend on both the error between the current and the desired speed and the integral of that error. The plot on the right shows the results of this feedback for a step change in the desired speed and a variety of different masses for the car, which might result from having a different number of passengers or towing a trailer. Notice that independent of the mass (which varies by a factor of 3), the steady-state speed of the vehicle always approaches the desired speed and achieves that speed within approximately 5 s. Thus the performance of

Figure 1.13 A feedback system for controlling the speed of a vehicle. In the block diagram on the left, the speed of the vehicle is measured and compared to the desired speed within the Compute block. Based on the difference in the actual and desired speeds, the throttle (or brake) is used to modify the force applied to the vehicle by the engine, drivetrain and wheels. The figure on the right shows the response of the control system to a
commanded change in speed from 25 m/s to 30 m/s. The three different curves correspond to differing masses of the vehicle, between 1000 and 3000 kg, demonstrating the robustness of the closed loop system to a very large change in the vehicle characteristics.

the system is robust with respect to this uncertainty.

Another early example of the use of feedback to provide robustness is the negative feedback amplifier. When telephone communications were developed, amplifiers were used to compensate for signal attenuation in long lines. A vacuum tube was a component that could be used to build amplifiers. Distortion caused by the nonlinear characteristics of the tube amplifier, together with amplifier drift, were obstacles that prevented the development of line amplifiers for a long time. A major breakthrough was the invention of the feedback amplifier in 1927 by Harold S. Black, an electrical engineer at Bell Telephone Laboratories. Black used negative feedback, which reduces the gain but makes the amplifier insensitive to variations in tube characteristics. This invention made it possible to build stable amplifiers with linear characteristics despite the nonlinearities of the vacuum tube amplifier.

Design of Dynamics

Another use of feedback is to change the dynamics of a system. Through feedback, we can alter the behavior of a system to meet the needs of an application: systems that are unstable can be stabilized, systems that are sluggish can be made responsive, and systems that have drifting operating points can be held constant. Control theory provides a rich collection of techniques to analyze the stability and dynamic response of complex systems and to place bounds on the behavior of such systems by analyzing the gains of linear and nonlinear operators that describe their components.

An example of the use of control in the design of dynamics comes from the area of flight control. The following quote from a lecture presented by Wilbur Wright to the Western Society of Engineers in 1901 [149]
illustrates the role of control in the development of the airplane:

    Men already know how to construct wings or airplanes, which when driven through the air at sufficient speed, will not only sustain the weight of the wings themselves, but also that of the engine, and of the engineer as well. Men also know how to build engines and screws of sufficient lightness and power to drive these planes at sustaining speed. Inability to balance and steer still confronts students of the flying problem. When this one feature has been worked out, the age of flying will have arrived, for all other difficulties are of minor importance.

Figure 1.14 Aircraft autopilot system. The Sperry autopilot (left) contained a set of four gyros coupled to a set of air valves that controlled the wing surfaces. The 1912 Curtiss used an autopilot to stabilize the roll, pitch and yaw of the aircraft and was able to maintain level flight as a mechanic walked on the wing (right) [105].

The Wright brothers thus realized that control was a key issue to enable flight. They resolved the compromise between stability and maneuverability by building an airplane, the Wright Flyer, that was unstable but maneuverable. The Flyer had a rudder in the front of the airplane, which made the plane very maneuverable. A disadvantage was the necessity for the pilot to keep adjusting the rudder to fly the plane: if the pilot let go of the stick, the plane would crash. Other early aviators tried to build stable airplanes. These would have been easier to fly, but because of their poor maneuverability they could not be brought up into the air. By using their insight and skillful experiments, the Wright brothers made the first successful flight at Kitty Hawk in 1903.

Since it was quite tiresome to fly an unstable aircraft, there was strong motivation to find a mechanism that would stabilize an aircraft. Such a device, invented by Sperry, was based on the concept of feedback. Sperry used a gyro-stabilized pendulum to provide an indication
of the vertical. He then arranged a feedback mechanism that would pull the stick to make the plane go up if it was pointing down, and vice versa. The Sperry autopilot was the first use of feedback in aeronautical engineering, and Sperry won a prize in a competition for the safest airplane in Paris in 1914. Figure 1.14 shows the Curtiss seaplane and the Sperry autopilot. The autopilot is a good example of how feedback can be used to stabilize an unstable system and hence design the dynamics of the aircraft.

One of the other advantages of designing the dynamics of a device is that it allows for increased modularity in the overall system design. By using feedback to create a system whose response matches a desired profile, we can hide the complexity and variability that may be present inside a subsystem. This allows us to create more complex systems by not having to simultaneously tune the responses of a large number of interacting components. This was one of the advantages of Black's use of negative feedback in vacuum tube amplifiers: the resulting device had a well-defined linear input/output response that did not depend on the individual characteristics of the vacuum tubes being used.

Higher Levels of Automation

A major trend in the use of feedback is its application to higher levels of situational awareness and decision making. This includes not only traditional logical branching based on system conditions but also optimization, adaptation, learning and even higher levels of abstract reasoning. These problems are in the domain of the artificial intelligence community, with an increasing role of dynamics, robustness and interconnection in many applications.

One of the interesting areas of research in higher levels of decision is autonomous control of cars. Early experiments with autonomous driving were performed by Ernst Dickmanns, who in the 1980s equipped cars with cameras and other sensors [60]. In 1994 his group demonstrated autonomous driving with human supervision on a
highway near Paris, and in 1995 one of his cars drove autonomously (with human supervision) from Munich to Copenhagen at speeds of up to 175 km/hour. The car was able to overtake other vehicles and change lanes automatically.

This application area has been recently explored through the DARPA Grand Challenge, a series of competitions sponsored by the US government to build vehicles that can autonomously drive themselves in desert and urban environments. Caltech competed in the 2005 and 2007 Grand Challenges using a modified Ford E-350 offroad van nicknamed Alice. It was fully automated, including electronically controlled steering, throttle, brakes, transmission and ignition. Its sensing systems included multiple video cameras scanning at 10-30 Hz, several laser ranging units scanning at 10 Hz and an inertial navigation package capable of providing position and orientation estimates at 5 ms temporal resolution. Computational resources included 12 high-speed servers connected together through a 1 Gb/s Ethernet switch. The vehicle is shown in Figure 1.15, along with a block diagram of its control architecture.

The software and hardware infrastructure that was developed enabled the vehicle to traverse long distances at substantial speeds. In testing, Alice drove itself more than 500 km in the Mojave Desert of California, with the ability to follow dirt roads and trails (if present) and avoid obstacles along the path. Speeds of more than 50 km/h were obtained in the fully autonomous mode. Substantial tuning of the algorithms was done during desert testing, in part because of the lack of systems-level design tools for systems of this level of complexity. Other competitors in the race, including Stanford, which won the 2005 competition, used algorithms for adaptive

Figure 1.15 DARPA Grand Challenge. Alice, Team Caltech's entry in the 2005 and
2007 competitions, and its networked control architecture [54]. The feedback system fuses data from terrain sensors (cameras and laser range finders) to determine a digital elevation map. This map is used to compute the vehicle's potential speed over the terrain, and an optimization-based path planner then commands a trajectory for the vehicle to follow. A supervisory control module performs higher-level tasks such as handling sensor and actuator failures.

control and learning, increasing the capabilities of their systems in unknown environments. Together, the competitors in the Grand Challenge demonstrated some of the capabilities of the next generation of control systems and highlighted many research directions in control at higher levels of decision making.

Drawbacks of Feedback

While feedback has many advantages, it also has some drawbacks. Chief among these is the possibility of instability if the system is not designed properly. We are all familiar with the effects of positive feedback when the amplification on a microphone is turned up too high in a room. This is an example of feedback instability, something that we obviously want to avoid. This is tricky because we must design the system not only to be stable under nominal conditions but also to remain stable under all possible perturbations of the dynamics.

In addition to the potential for instability, feedback inherently couples different parts of a system. One common problem is that feedback often injects measurement noise into the system. Measurements must be carefully filtered so that the actuation and process dynamics do not respond to them, while at the same time ensuring that the measurement signal from the sensor is properly coupled into the closed loop dynamics (so that the proper levels of performance are achieved).

Another potential drawback of control is the complexity of embedding a control system in a product. While the cost of sensing, computation and actuation has decreased dramatically in the past few decades, the fact
remains that control systems are often complicated, and hence one must carefully balance the costs and benefits. An early engineering example of this is the use of microprocessor-based feedback systems in automobiles. The use of microprocessors in automotive applications began in the early 1970s and was driven by increasingly strict emissions standards, which could be met only through electronic controls. Early systems were expensive and failed more often than desired, leading to frequent customer dissatisfaction. It was only through aggressive improvements in technology that the performance, reliability and cost of these systems allowed them to be used in a transparent fashion. Even today, the complexity of these systems is such that it is difficult for an individual car owner to fix problems.

Feedforward

Feedback is reactive: there must be an error before corrective actions are taken. However, in some circumstances it is possible to measure a disturbance before it enters the system, and this information can then be used to take corrective action before the disturbance has influenced the system. The effect of the disturbance is thus reduced by measuring it and generating a control signal that counteracts it. This way of controlling a system is called feedforward. Feedforward is particularly useful in shaping the response to command signals because command signals are always available. Since feedforward attempts to match two signals, it requires good process models; otherwise the corrections may have the wrong size or may be badly timed.

The ideas of feedback and feedforward are very general and appear in many different fields. In economics, feedback and feedforward are analogous to a market-based economy versus a planned economy. In business, a feedforward strategy corresponds to running a company based on extensive strategic planning, while a feedback strategy corresponds to a reactive approach. In biology, feedforward has been suggested as an essential element for
motion control in humans that is tuned during training. Experience indicates that it is often advantageous to combine feedback and feedforward, and the correct balance requires insight and understanding of their respective properties.

Positive Feedback

In most of this text, we will consider the role of negative feedback, in which we attempt to regulate the system by reacting to disturbances in a way that decreases the effect of those disturbances. In some systems, particularly biological systems, positive feedback can play an important role. In a system with positive feedback, the increase in some variable or signal leads to a situation in which that quantity is further increased through its dynamics. This has a destabilizing effect and is usually accompanied by a saturation that limits the growth of the quantity. Although often considered undesirable, this behavior is used in biological (and engineering) systems to obtain a very fast response to a condition or signal.

One example of the use of positive feedback is to create switching behavior, in which a system maintains a given state until some input crosses a threshold. Hysteresis is often present so that noisy inputs near the threshold do not cause the system to jitter. This type of behavior is called bistability and is often associated with memory devices.

Figure 1.16 Input/output characteristics of on-off controllers. Each plot shows the input on the horizontal axis and the corresponding output on the vertical axis. Ideal on-off control is shown in (a), with modifications for a dead zone (b) or hysteresis (c). Note that for on-off control with hysteresis, the output depends on the value of past inputs.

1.5 Simple Forms of Feedback

The idea of feedback to make corrective actions based on the difference between the desired and the actual values of a quantity can be implemented in many different ways. The benefits of feedback can be obtained by very simple feedback laws such as on-off control,
proportional control and proportional-integral-derivative control. In this section we provide a brief preview of some of the topics that will be studied more formally in the remainder of the text.

On-Off Control

A simple feedback mechanism can be described as follows:

    u = u_max if e > 0,   u = u_min if e < 0,    (1.1)

where the control error e = r − y is the difference between the reference signal (or command signal) r and the output of the system y, and u is the actuation command. Figure 1.16a shows the relation between error and control. This control law implies that maximum corrective action is always used.

The feedback in equation (1.1) is called on-off control. One of its chief advantages is that it is simple and there are no parameters to choose. On-off control often succeeds in keeping the process variable close to the reference, such as the use of a simple thermostat to maintain the temperature of a room. It typically results in a system where the controlled variables oscillate, which is often acceptable if the oscillation is sufficiently small. Notice that in equation (1.1) the control variable is not defined when the error is zero. It is common to make modifications by introducing either a dead zone or hysteresis (see Figure 1.16b and 1.16c).

PID Control

The reason why on-off control often gives rise to oscillations is that the system overreacts, since a small change in the error makes the actuated variable change over the full range. This effect is avoided in proportional control, where the characteristic of the controller is proportional to the control error for small errors. This can be achieved with the control law

    u = u_max if e > e_max,   u = k_p e if e_min ≤ e ≤ e_max,   u = u_min if e < e_min,

where k_p is the controller gain, e_min = u_min/k_p and e_max = u_max/k_p. The interval (e_min, e_max) is called the proportional band because the behavior of the controller is linear when the error is in this interval:

    u = k_p (r − y) = k_p e   if e_min ≤ e ≤ e_max.

While a vast improvement over on-off control, proportional control has the drawback that the process variable often deviates
from its reference value. In particular, if some level of control signal is required for the system to maintain a desired value, then we must have e ≠ 0 in order to generate the requisite input.

This can be avoided by making the control action proportional to the integral of the error:

    u(t) = k_i ∫₀ᵗ e(τ) dτ.    (1.4)

This control form is called integral control, and k_i is the integral gain. It can be shown through simple arguments that a controller with integral action has zero steady-state error (Exercise 1.5). The catch is that there may not always be a steady state because the system may be oscillating.

An additional refinement is to provide the controller with an anticipative ability by using a prediction of the error. A simple prediction is given by the linear extrapolation

    e(t + T_d) ≈ e(t) + T_d de(t)/dt,

which predicts the error T_d time units ahead. Combining proportional, integral and derivative control, we obtain a controller that can be expressed mathematically as

    u(t) = k_p e(t) + k_i ∫₀ᵗ e(τ) dτ + k_d de(t)/dt.    (1.5)

The control action is thus a sum of three terms: the past as represented by the integral of the error, the present as represented by the proportional term, and the future as represented by a linear extrapolation of the error (the derivative term). This form of feedback is called a proportional-integral-derivative (PID) controller, and its action is illustrated in Figure 1.17.

A PID controller is very useful and is capable of solving a wide range of control problems. More than 95% of all industrial control problems are solved by PID control, although many of these controllers are actually proportional-integral (PI) controllers because derivative action is often not included [58]. There are also more advanced controllers, which differ from PID controllers by using more sophisticated methods for prediction.

Figure 1.17 Action of a PID controller. At time t, the proportional term depends on the instantaneous value of the error. The integral portion of the feedback is based on the integral
of the error up to time t (shaded portion). The derivative term provides an estimate of the growth or decay of the error over time by looking at the rate of change of the error. T_d represents the approximate amount of time in which the error is projected forward (see text).

1.6 Further Reading

The material in this section draws heavily from the report of the Panel on Future Directions on Control, Dynamics and Systems [155]. Several additional papers and reports have highlighted the successes of control [159] and new vistas in control [45, 130, 204]. The early development of control is described by Mayr [148] and in the books by Bennett [28, 29], which cover the period 1800-1955. A fascinating examination of some of the early history of control in the United States has been written by Mindell [152]. A popular book that describes many control concepts across a wide range of disciplines is Out of Control by Kelly [121]. There are many textbooks available that describe control systems in the context of specific disciplines. For engineers, the textbooks by Franklin, Powell and Emami-Naeini [79], Dorf and Bishop [61], Kuo and Golnaraghi [133], and Seborg, Edgar and Mellichamp [178] are widely used. More mathematically oriented treatments of control theory include Sontag [182] and Lewis [136]. The book by Hellerstein et al. [97] provides a description of the use of feedback control in computing systems. A number of books look at the role of dynamics and feedback in biological systems, including Milhorn [151] (now out of print), J. D. Murray [154] and Ellner and Guckenheimer [70]. The book by Fradkov [77] and the tutorial article by Bechhoefer [25] cover many specific topics of interest to the physics community.

Exercises

1.1 (Eye motion) Perform the following experiment and explain your results: Holding your head still, move one of your hands left and right in front of your face, following it with your eyes. Record how quickly you can move your hand before you begin to lose track of it. Now hold your hand still and shake your head left to right,
once again recording how quickly you can move before losing track of it.

1.2 Identify five feedback systems that you encounter in your everyday environment. For each system, identify the sensing mechanism, actuation mechanism and control law. Describe the uncertainty with respect to which the feedback system provides robustness and/or the dynamics that are changed through the use of feedback.

1.3 (Balance systems) Balance yourself on one foot with your eyes closed for 15 s. Using Figure 1.3 as a guide, describe the control system responsible for keeping you from falling down. Note that the controller will differ from that in the diagram (unless you are an android reading this in the far future).

1.4 (Cruise control) Download the MATLAB code used to produce simulations for the cruise control system in Figure 1.13 from the companion web site. Using trial and error, change the parameters of the control law so that the overshoot in speed is not more than 1 m/s for a vehicle with mass m = 1000 kg.

1.5 (Integral action) We say that a system with a constant input reaches steady state if the output of the system approaches a constant value as time increases. Show that a controller with integral action, such as those given in equations (1.4) and (1.5), gives zero error if the closed loop system reaches steady state.

1.6 Search the web and pick an article in the popular press about a feedback and control system. Describe the feedback system using the terminology given in the article. In particular, identify the control system and describe (a) the underlying process or system being controlled, along with the (b) sensor, (c) actuator and (d) computational element. If some of the information is not available in the article, indicate this and take a guess at what might have been used.

Chapter Two

System Modeling

    I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, "How many arbitrary parameters did you use for your calculations?" I thought
for a moment about our cut-off procedures and said, "Four." He said, "I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk."

    Freeman Dyson, on describing the predictions of his model for meson-proton scattering to Enrico Fermi in 1953 [67]

A model is a precise representation of a system's dynamics used to answer questions via analysis and simulation. The model we choose depends on the questions we wish to answer, and so there may be multiple models for a single dynamical system, with different levels of fidelity depending on the phenomena of interest. In this chapter we provide an introduction to the concept of modeling and present some basic material on two specific methods commonly used in feedback and control systems: differential equations and difference equations.

2.1 Modeling Concepts

A model is a mathematical representation of a physical, biological or information system. Models allow us to reason about a system and make predictions about how a system will behave. In this text, we will mainly be interested in models of dynamical systems describing the input/output behavior of systems, and we will often work in state space form.

Roughly speaking, a dynamical system is one in which the effects of actions do not occur immediately. For example, the velocity of a car does not change immediately when the gas pedal is pushed, nor does the temperature in a room rise instantaneously when a heater is switched on. Similarly, a headache does not vanish right after an aspirin is taken, requiring time for it to take effect. In business systems, increased funding for a development project does not increase revenues in the short term, although it may do so in the long term if it was a good investment. All of these are examples of dynamical systems in which the behavior of the system evolves with time.

In the remainder of this section we provide an overview of some of the key concepts in modeling. The mathematical details
introduced here are explored more fully in the remainder of the chapter.

Figure 2.1 Spring-mass system with nonlinear damping. The position of the mass is denoted by q, with q = 0 corresponding to the rest position of the spring. The forces on the mass are generated by a linear spring with spring constant k and a damper with force dependent on the velocity q̇.

The Heritage of Mechanics

The study of dynamics originated in attempts to describe planetary motion. The basis was detailed observations of the planets by Tycho Brahe and the results of Kepler, who found empirically that the orbits of the planets could be well described by ellipses. Newton embarked on an ambitious program to try to explain why the planets move in ellipses, and he found that the motion could be explained by his law of gravitation and the formula stating that force equals mass times acceleration. In the process he also invented calculus and differential equations.

One of the triumphs of Newton's mechanics was the observation that the motion of the planets could be predicted based on the current positions and velocities of all planets. It was not necessary to know the past motion. The state of a dynamical system is a collection of variables that completely characterizes the motion of a system for the purpose of predicting future motion. For a system of planets the state is simply the positions and the velocities of the planets. We call the set of all possible states the state space.

A common class of mathematical models for dynamical systems is ordinary differential equations (ODEs). In mechanics, one of the simplest such differential equations is that of a spring-mass system with damping:

    m q̈ + c(q̇) + k q = 0.    (2.1)

This system is illustrated in Figure 2.1. The variable q ∈ ℝ represents the position of the mass m with respect to its rest position. We use the notation q̇ to denote the derivative of q with respect to time (i.e., the velocity of the mass) and q̈ to represent the second derivative (acceleration).
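Equation (2.1) is straightforward to explore numerically. The sketch below assumes the special case of linear damping, c(q̇) = c·q̇, together with illustrative parameter values m = 1, c = 0.5 and k = 4 (none of these values come from the text); it rewrites the second-order equation as two first-order equations in the state (q, q̇) and integrates them with a classical Runge-Kutta step.

```python
def step(state, dt, m=1.0, c=0.5, k=4.0):
    """Advance the state (q, qdot) one time step with the classical
    fourth-order Runge-Kutta method."""
    def f(x):
        q, qd = x
        # x' = f(x) for m*q'' + c*q' + k*q = 0, written as two first-order ODEs
        return (qd, -(c * qd + k * q) / m)
    def nudge(x, y, s):
        # Elementwise x + s*y for two-element state tuples
        return (x[0] + s * y[0], x[1] + s * y[1])
    k1 = f(state)
    k2 = f(nudge(state, k1, dt / 2))
    k3 = f(nudge(state, k2, dt / 2))
    k4 = f(nudge(state, k3, dt))
    q, qd = state
    return (q + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            qd + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def simulate(q0=1.0, qd0=0.0, dt=0.01, t_final=15.0):
    """Return the trajectory of the state starting from (q0, qd0)."""
    x = (q0, qd0)
    traj = [x]
    for _ in range(int(round(t_final / dt))):
        x = step(x, dt)
        traj.append(x)
    return traj

traj = simulate()
# The damper dissipates energy, so the state spirals toward the origin of
# the (q, qdot) phase plane; the final position is near the rest position q = 0.
print(traj[-1])
```

Plotting q against t gives a time plot, and plotting q̇ against q gives a phase portrait of the kind discussed below; with the light damping chosen here, the trajectory spirals in toward the origin.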
The spring is assumed to satisfy Hooke's law, which says that the force is proportional to the displacement. The friction element (damper) is taken as a nonlinear function c(q̇), which can model effects such as stiction and viscous drag. The position q and velocity q̇ represent the instantaneous state of the system. We say that this system is a second-order system since the dynamics depend on the first two derivatives of q.

The evolution of the position and velocity can be described using either a time plot or a phase portrait, both of which are shown in Figure 2.2. The time plot, on the left, shows the values of the individual states as a function of time. The phase portrait, on the right, shows the vector field for the system, which gives the state velocity (represented as an arrow) at every point in the state space. In addition, we have superimposed the traces of some of the states from different conditions. The phase portrait gives a strong intuitive representation of the equation as a vector field or a flow. While systems of second order (two states) can be represented in this way, unfortunately it is difficult to visualize equations of higher order using this approach.

Figure 2.2 Illustration of a state model. A state model gives the rate of change of the state as a function of the state. The plot on the left shows the evolution of the state as a function of time. The plot on the right shows the evolution of the states relative to each other, with the velocity of the state denoted by arrows.

The differential equation (2.1) is called an autonomous system because there are no external influences. Such a model is natural for use in celestial mechanics because it is difficult to influence the motion of the planets. In many examples, it is useful to model the effects of external disturbances or controlled forces on the system. One way to capture this is to
replace equation (2.1) by

    m q̈ + c(q̇) + k q = u,    (2.2)

where u represents the effect of external inputs. The model (2.2) is called a forced or controlled differential equation. It implies that the rate of change of the state can be influenced by the input u(t). Adding the input makes the model richer and allows new questions to be posed. For example, we can examine what influence external disturbances have on the trajectories of a system. Or, in the case where the input variable is something that can be modulated in a controlled way, we can analyze whether it is possible to "steer" the system from one point in the state space to another through proper choice of the input.

The Heritage of Electrical Engineering

A different view of dynamics emerged from electrical engineering, where the design of electronic amplifiers led to a focus on input/output behavior. A system was considered a device that transforms inputs to outputs, as illustrated in Figure 2.3. Conceptually, an input/output model can be viewed as a giant table of inputs and outputs: given an input signal u(t) over some interval of time, the model should produce the resulting output y(t).

Figure 2.3 Illustration of the input/output view of a dynamical system. The figure on the left shows a detailed circuit diagram for an electronic amplifier; the one on the right is its representation as a block diagram.

The input/output framework is used in many engineering disciplines, since it allows us to decompose a system into individual components connected through their inputs and outputs. Thus, we can take a complicated system such as a radio or a television and break it down into manageable pieces such as the receiver, demodulator, amplifier and speakers. Each of these pieces has a set of inputs and outputs and, through proper design, these components can be interconnected to form the entire system. The
input/output view is particularly useful for the special class of linear time-invariant systems. This term will be defined more carefully later in this chapter, but roughly speaking a system is linear if the superposition (addition) of two inputs yields an output that is the sum of the outputs that would correspond to the individual inputs being applied separately. A system is time-invariant if the output response for a given input does not depend on when that input is applied.

Many electrical engineering systems can be modeled by linear time-invariant systems, and hence a large number of tools have been developed to analyze them. One such tool is the step response, which describes the relationship between an input that changes from zero to a constant value abruptly (a step input) and the corresponding output. As we shall see later in the text, the step response is very useful in characterizing the performance of a dynamical system, and it is often used to specify the desired dynamics. A sample step response is shown in Figure 2.4a.

Another way to describe a linear time-invariant system is to represent it by its response to sinusoidal input signals. This is called the frequency response, and a rich, powerful theory with many concepts and strong, useful results has emerged. The results are based on the theory of complex variables and Laplace transforms.

[Figure 2.4: Input/output response of a linear system. The step response (a) shows the output of the system due to an input that changes from 0 to 1 at time t = 5 s. The frequency response (b) shows the amplitude gain and phase change due to a sinusoidal input at different frequencies.]

The basic idea behind frequency response is that we can completely characterize the behavior of a system by its steady-state response to sinusoidal inputs. Roughly speaking, this is done by
decomposing any arbitrary signal into a linear combination of sinusoids (e.g., by using the Fourier transform) and then using linearity to compute the output by combining the response to the individual frequencies. A sample frequency response is shown in Figure 2.4b.

The input/output view lends itself naturally to experimental determination of system dynamics, where a system is characterized by recording its response to particular inputs, e.g., a step or a set of sinusoids over a range of frequencies.

The Control View

When control theory emerged as a discipline in the 1940s, the approach to dynamics was strongly influenced by the electrical engineering input/output view. A second wave of developments in control, starting in the late 1950s, was inspired by mechanics, where the state space perspective was used. The emergence of space flight is a typical example, where precise control of the orbit of a spacecraft is essential. These two points of view gradually merged into what is today the state space representation of input/output systems.

The development of state space models involved modifying the models from mechanics to include external actuators and sensors and utilizing more general forms of equations. In control, the model given by equation (2.2) was replaced by

$$ \frac{dx}{dt} = f(x, u), \qquad y = h(x, u), \qquad (2.3) $$

where x is a vector of state variables, u is a vector of control signals and y is a vector of measurements. The term dx/dt represents the derivative of x with respect to time, now considered a vector, and f and h are (possibly nonlinear) mappings of their arguments to vectors of the appropriate dimension. For mechanical systems, the state consists of the position and velocity of the system, so that x = (q, q̇) in the case of a damped spring-mass system. Note that in the control formulation we model dynamics as first-order differential equations, but we will see that this can capture the dynamics of higher-order differential equations by appropriate definition of the state and the maps f and h.

Adding inputs and outputs has increased the
richness of the classical problems and led to many new concepts. For example, it is natural to ask whether possible states x can be reached with the proper choice of u (reachability) and whether the measurement y contains enough information to reconstruct the state (observability). These topics will be addressed in greater detail in Chapters 6 and 7.

A final development in building the control point of view was the emergence of disturbances and model uncertainty as critical elements in the theory. The simple way of modeling disturbances as deterministic signals like steps and sinusoids has the drawback that such signals can be predicted precisely. A more realistic approach is to model disturbances as random signals. This viewpoint gives a natural connection between prediction and control. The dual views of input/output representations and state space representations are particularly useful when modeling uncertainty, since state models are convenient to describe a nominal model but uncertainties are easier to describe using input/output models (often via a frequency response description). Uncertainty will be a constant theme throughout the text and will be studied in particular detail in Chapter 12.

An interesting observation in the design of control systems is that feedback systems can often be analyzed and designed based on comparatively simple models. The reason for this is the inherent robustness of feedback systems. However, other uses of models may require more complexity and more accuracy. One example is feedforward control strategies, where one uses a model to precompute the inputs that cause the system to respond in a certain way. Another area is system validation, where one wishes to verify that the detailed response of the system performs as it was designed. Because of these different uses of models, it is common to use a hierarchy of models having different complexity and fidelity.

Multidomain Modeling

Modeling is an essential element of many disciplines, but traditions and methods from
individual disciplines can differ from each other, as illustrated by the previous discussion of mechanical and electrical engineering. A difficulty in systems engineering is that it is frequently necessary to deal with heterogeneous systems from many different domains, including chemical, electrical, mechanical and information systems.

To model such multidomain systems, we start by partitioning a system into smaller subsystems. Each subsystem is represented by balance equations for mass, energy and momentum, or by appropriate descriptions of information processing in the subsystem. The behavior at the interfaces is captured by describing how the variables of the subsystem behave when the subsystems are interconnected. These interfaces act by constraining variables within the individual subsystems to be equal (such as mass, energy or momentum fluxes). The complete model is then obtained by combining the descriptions of the subsystems and the interfaces.

Using this methodology it is possible to build up libraries of subsystems that correspond to physical, chemical and informational components. The procedure mimics the engineering approach where systems are built from subsystems that are themselves built from smaller components. As experience is gained, the components and their interfaces can be standardized and collected in model libraries. In practice, it takes several iterations to obtain a good library that can be reused for many applications.

State models or ordinary differential equations are not suitable for component-based modeling of this form because states may disappear when components are connected. This implies that the internal description of a component may change when it is connected to other components. As an illustration we consider two capacitors in an electrical circuit. Each capacitor has a state corresponding to the voltage across it, but one of the states will disappear if the capacitors are connected in parallel. A similar situation happens with
two rotating inertias, each of which is individually modeled using the angle of rotation and the angular velocity. Two states will disappear when the inertias are joined by a rigid shaft.

This difficulty can be avoided by replacing differential equations by differential-algebraic equations, which have the form

$$ F(z, \dot z) = 0, $$

where z ∈ ℝⁿ. A simple special case is

$$ \dot x = f(x, y), \qquad g(x, y) = 0, \qquad (2.4) $$

where z = (x, y) and F = (ẋ − f(x, y), g(x, y)). The key property is that the derivative ż is not given explicitly, and there may be pure algebraic relations between the components of the vector z. The model (2.4) captures the examples of the parallel capacitors and the linked rotating inertias. For example, when two capacitors are connected, we simply add the algebraic equation expressing that the voltages across the capacitors are the same.

Modelica is a language that has been developed to support component-based modeling. Differential-algebraic equations are used as the basic description, and object-oriented programming is used to structure the models. Modelica is used to model the dynamics of technical systems in domains such as mechanical, electrical, thermal, hydraulic, thermofluid and control subsystems. Modelica is intended to serve as a standard format so that models arising in different domains can be exchanged between tools and users. A large set of free and commercial Modelica component libraries are available and are used by a growing number of people in industry, research and academia. For further information about Modelica, see http://www.modelica.org or Tiller [192].

2.2 State Space Models

In this section we introduce the two primary forms of models that we use in this text: differential equations and difference equations. Both make use of the notions of state, inputs, outputs and dynamics to describe the behavior of a system.

Ordinary Differential Equations

The state of a system is a collection of variables that summarize the past of a system for the purpose of predicting the future. For a physical system the
state is composed of the variables required to account for storage of mass, momentum and energy. A key issue in modeling is to decide how accurately this storage has to be represented. The state variables are gathered in a vector x ∈ ℝⁿ called the state vector. The control variables are represented by another vector u ∈ ℝᵖ, and the measured signal by the vector y ∈ ℝᵠ. A system can then be represented by the differential equation

$$ \frac{dx}{dt} = f(x, u), \qquad y = h(x, u), \qquad (2.5) $$

where f : ℝⁿ × ℝᵖ → ℝⁿ and h : ℝⁿ × ℝᵖ → ℝᵠ are smooth mappings. We call a model of this form a state space model. The dimension of the state vector is called the order of the system.

The system (2.5) is called time-invariant because the functions f and h do not depend explicitly on time t; there are more general time-varying systems where the functions do depend on time. The model consists of two functions: the function f gives the rate of change of the state vector as a function of state x and control u, and the function h gives the measured values as functions of state x and control u.

A system is called a linear state space system if the functions f and h are linear in x and u. A linear state space system can thus be represented by

$$ \frac{dx}{dt} = Ax + Bu, \qquad y = Cx + Du, \qquad (2.6) $$

where A, B, C and D are constant matrices. Such a system is said to be linear and time-invariant, or LTI for short. The matrix A is called the dynamics matrix, the matrix B is called the control matrix, the matrix C is called the sensor matrix and the matrix D is called the direct term. Frequently systems will not have a direct term, indicating that the control signal does not influence the output directly.

A different form of linear differential equations, generalizing the second-order dynamics from mechanics, is an equation of the form

$$ \frac{d^n y}{dt^n} + a_1 \frac{d^{n-1} y}{dt^{n-1}} + \cdots + a_n y = u, \qquad (2.7) $$

where t is the independent (time) variable, y(t) is the dependent (output) variable and u(t) is the input. The notation dᵏy/dtᵏ is used to denote the kth derivative of y with respect to t, sometimes also written as y⁽ᵏ⁾. The controlled differential equation (2.7) is
said to be an nth-order system. This system can be converted into state space form by defining

$$ x = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix} = \begin{pmatrix} d^{n-1}y/dt^{n-1} \\ d^{n-2}y/dt^{n-2} \\ \vdots \\ dy/dt \\ y \end{pmatrix}, $$

and the state space equations become

$$ \frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_{n-1} \\ x_n \end{pmatrix} = \begin{pmatrix} -a_1 x_1 - \cdots - a_n x_n \\ x_1 \\ \vdots \\ x_{n-2} \\ x_{n-1} \end{pmatrix} + \begin{pmatrix} u \\ 0 \\ \vdots \\ 0 \\ 0 \end{pmatrix}, \qquad y = x_n. $$

With the appropriate definitions of A, B, C and D, this equation is in linear state space form.

An even more general system is obtained by letting the output be a linear combination of the states of the system, i.e.,

$$ y = b_1 x_1 + b_2 x_2 + \cdots + b_n x_n + du. $$

This system can be modeled in state space as

$$ \frac{d}{dt}\begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ \vdots \\ x_n \end{pmatrix} = \begin{pmatrix} -a_1 & -a_2 & \cdots & -a_{n-1} & -a_n \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ & & \ddots & & \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix} x + \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} u, \qquad y = \begin{pmatrix} b_1 & b_2 & \cdots & b_n \end{pmatrix} x + du. \qquad (2.8) $$

This particular form of a linear state space system is called reachable canonical form and will be studied in more detail in later chapters.

Example 2.1 (Balance systems). An example of a type of system that can be modeled using ordinary differential equations is the class of balance systems. A balance system is a mechanical system in which the center of mass is balanced above a pivot point. Some common examples of balance systems are shown in Figure 2.5. The Segway Personal Transporter (Figure 2.5a) uses a motorized platform to stabilize a person standing on top of it. When the rider leans forward, the transportation device propels itself along the ground but maintains its upright position. Another example is a rocket (Figure 2.5b), in which a gimbaled nozzle at the bottom of the rocket is used to stabilize the body of the rocket above it. Other examples of balance systems include humans or other animals standing upright or a person balancing a stick on their hand.

[Figure 2.5: Balance systems: (a) Segway Personal Transporter, (b) Saturn rocket and (c) inverted pendulum on a cart. Each of these examples uses forces at the bottom of the system to keep it upright.]

Balance systems are a generalization of the spring-mass system we saw earlier. We can write the dynamics for a mechanical system in the general form

$$ M(q)\ddot q + C(q, \dot q)\dot q + K(q) = B(q)u, $$

where M(q) is the inertia matrix for the system, C(q, q̇) represents the Coriolis forces as well as the damping, K(q) gives the forces due to potential energy and B(q) describes how the external applied forces couple into the dynamics. The specific form of the equations can be derived using Newtonian mechanics. Note that each of the terms depends on the configuration of the system q and that these terms are often nonlinear in the configuration variables.

Figure 2.5c shows a simplified diagram for a balance system consisting of an inverted pendulum on a cart. To model this system, we choose state variables that represent the position and velocity of the base of the system, p and ṗ, and the angle and angular rate of the structure above the base, θ and θ̇. We let F represent the force applied at the base of the system, assumed to be in the horizontal direction (aligned with p), and choose the position and angle of the system as outputs. With this set of definitions, the dynamics of the system can be computed using Newtonian mechanics and have the form

$$ \begin{pmatrix} M + m & ml\cos\theta \\ ml\cos\theta & J + ml^2 \end{pmatrix} \begin{pmatrix} \ddot p \\ \ddot\theta \end{pmatrix} + \begin{pmatrix} c\dot p - ml\sin\theta\,\dot\theta^2 \\ \gamma\dot\theta - mgl\sin\theta \end{pmatrix} = \begin{pmatrix} F \\ 0 \end{pmatrix}, \qquad (2.9) $$

where M is the mass of the base, m and J are the mass and moment of inertia of the system to be balanced, l is the distance from the base to the center of mass of the balanced body, c and γ are coefficients of viscous friction and g is the acceleration due to gravity.

We can rewrite the dynamics of the system in state space form by defining the state as x = (p, θ, ṗ, θ̇), the input as u = F and the output as y = (p, θ). If we define the total mass and total inertia as

$$ M_t = M + m, \qquad J_t = J + ml^2, $$

the equations of motion then become

$$ \frac{d}{dt}\begin{pmatrix} p \\ \theta \\ \dot p \\ \dot\theta \end{pmatrix} = \begin{pmatrix} \dot p \\ \dot\theta \\ \dfrac{-ml s_\theta \dot\theta^2 + mg(ml^2/J_t) s_\theta c_\theta - c\dot p - (\gamma/J_t) lm c_\theta \dot\theta + u}{M_t - m(ml^2/J_t)c_\theta^2} \\[2ex] \dfrac{-ml^2 s_\theta c_\theta \dot\theta^2 + M_t g l s_\theta - c l c_\theta \dot p - \gamma (M_t/m)\dot\theta + l c_\theta u}{J_t(M_t/m) - m(l c_\theta)^2} \end{pmatrix}, \qquad y = \begin{pmatrix} p \\ \theta \end{pmatrix}, $$

where we have used the shorthand c_θ = cos θ and s_θ = sin θ.

In many cases the angle θ will be very close to 0, and hence we can use the approximations sin θ ≈ θ and cos θ ≈ 1. Furthermore, if θ̇ is small, we can ignore quadratic and higher terms in θ̇.
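The solved accelerations above can be coded directly as a dynamics function f(x, u) and stepped forward in time. The following is a minimal Python sketch, not an implementation from the text: all numerical parameter values are illustrative placeholders, and the fixed-step update used is the simplest possible integration scheme.

```python
import math

# Illustrative (assumed) parameters: cart mass M, pendulum mass m,
# inertia J, length l, friction coefficients c and gamma, gravity g.
M, m = 10.0, 80.0
J, l = 100.0, 1.0
c, gamma = 0.1, 0.01
g = 9.8
Mt, Jt = M + m, J + m * l**2  # total mass and total inertia

def f(x, u):
    """Nonlinear cart-pendulum dynamics dx/dt = f(x, u), x = (p, theta, pdot, thetadot)."""
    p, th, pd, thd = x
    s, ct = math.sin(th), math.cos(th)
    pdd = (-m*l*s*thd**2 + m*g*(m*l**2/Jt)*s*ct - c*pd
           - (gamma/Jt)*l*m*ct*thd + u) / (Mt - m*(m*l**2/Jt)*ct**2)
    thdd = (-m*l**2*s*ct*thd**2 + Mt*g*l*s - c*l*ct*pd
            - gamma*(Mt/m)*thd + l*ct*u) / (Jt*(Mt/m) - m*(l*ct)**2)
    return (pd, thd, pdd, thdd)

def simulate(x0, u, h, n):
    """Fixed-step update x(t+h) ~ x(t) + h f(x(t), u); returns the state trajectory."""
    xs = [x0]
    for _ in range(n):
        x = xs[-1]
        xs.append(tuple(xi + h * fi for xi, fi in zip(x, f(x, u))))
    return xs

# With no applied force, a small initial tilt grows: the upright
# equilibrium theta = 0 is unstable.
traj = simulate((0.0, 0.01, 0.0, 0.0), u=0.0, h=0.001, n=2000)
print(max(abs(x[1]) for x in traj))
```

Note that with the state exactly at the upright equilibrium and u = 0, the dynamics function returns zero and the state stays put, which is a quick consistency check on the equations.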
Substituting these approximations into our equations, we see that we are left with a linear state space equation

$$ \frac{d}{dt}\begin{pmatrix} p \\ \theta \\ \dot p \\ \dot\theta \end{pmatrix} = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & m^2l^2g/\mu & -cJ_t/\mu & -\gamma l m/\mu \\ 0 & M_t mgl/\mu & -clm/\mu & -\gamma M_t/\mu \end{pmatrix} \begin{pmatrix} p \\ \theta \\ \dot p \\ \dot\theta \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ J_t/\mu \\ lm/\mu \end{pmatrix} u, \qquad y = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix} x, $$

where μ = M_t J_t − m²l².

Example 2.2 (Inverted pendulum). A variation of the previous example is one in which the location of the base p does not need to be controlled. This happens, for example, if we are interested only in stabilizing a rocket's upright orientation without worrying about the location of the base of the rocket. The dynamics of this simplified system are given by

$$ \frac{d}{dt}\begin{pmatrix} \theta \\ \dot\theta \end{pmatrix} = \begin{pmatrix} \dot\theta \\ \dfrac{mgl}{J_t}\sin\theta - \dfrac{\gamma}{J_t}\dot\theta + \dfrac{l}{J_t}\cos\theta\, u \end{pmatrix}, \qquad y = \theta, \qquad (2.10) $$

where γ is the coefficient of rotational friction, J_t = J + ml² and u is the force applied at the base. This system is referred to as an inverted pendulum.

Difference Equations

In some circumstances it is more natural to describe the evolution of a system at discrete instants of time rather than continuously in time. If we refer to each of these times by an integer k = 0, 1, 2, …, then we can ask how the state of the system changes for each k. Just as in the case of differential equations, we define the state to be those sets of variables that summarize the past of the system for the purpose of predicting its future. Systems described in this manner are referred to as discrete-time systems.

The evolution of a discrete-time system can be written in the form

$$ x[k+1] = f(x[k], u[k]), \qquad y[k] = h(x[k], u[k]), \qquad (2.11) $$

where x[k] ∈ ℝⁿ is the state of the system at time k (an integer), u[k] ∈ ℝᵖ is the input and y[k] ∈ ℝᵠ is the output. As before, f and h are smooth mappings of the appropriate dimension. We call equation (2.11) a difference equation since it tells us how x[k+1] differs from x[k]. The state x[k] can be either a scalar- or a vector-valued quantity; in the case of the latter, we write x_j[k] for the value of the jth state at time k.

Just as in the case of differential equations, it is often the case that the equations are linear in the state and input, in which case we can describe the system by

$$ x[k+1] = A x[k] + B u[k], \qquad y[k] = C x[k] + D u[k]. \qquad (2.12) $$
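A linear difference equation of this form is simulated simply by iterating the update. The sketch below is a minimal illustration in Python; the particular matrices chosen at the bottom are hypothetical, not taken from the text.

```python
def dlsim(A, B, C, D, x0, u_seq):
    """Iterate x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k].

    Matrices are given as nested lists; returns the list of outputs y[k].
    """
    def matvec(M, v):
        return [sum(Mij * vj for Mij, vj in zip(row, v)) for row in M]

    x, ys = list(x0), []
    for u in u_seq:
        # Output at time k, then advance the state to time k+1.
        y = [ci + di for ci, di in zip(matvec(C, x), matvec(D, u))]
        ys.append(y)
        x = [ai + bi for ai, bi in zip(matvec(A, x), matvec(B, u))]
    return ys

# Illustrative first-order example: x[k+1] = 0.5 x[k] + u[k], y = x.
A, B, C, D = [[0.5]], [[1.0]], [[1.0]], [[0.0]]
ys = dlsim(A, B, C, D, x0=[0.0], u_seq=[[1.0]] * 6)
print([y[0] for y in ys])  # prints [0.0, 1.0, 1.5, 1.75, 1.875, 1.9375]
```

For a step input the iterates converge geometrically toward the steady-state value 1/(1 − 0.5) = 2, a behavior that reappears in the email server example later in the section.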
As before, we refer to the matrices A, B, C and D as the dynamics matrix, the control matrix, the sensor matrix and the direct term. The solution of a linear difference equation with initial condition x[0] and input u[0], …, u[T] is given by

$$ x[k] = A^k x[0] + \sum_{j=0}^{k-1} A^{k-j-1} B u[j], \qquad y[k] = C A^k x[0] + \sum_{j=0}^{k-1} C A^{k-j-1} B u[j] + D u[k], \qquad k > 0. $$

Difference equations are also useful as an approximation of differential equations, as we will show later.

Example 2.3 (Predator-prey). As an example of a discrete-time system, consider a simple model for a predator-prey system. The predator-prey problem refers to an ecological system in which we have two species, one of which feeds on the other. This type of system has been studied for decades and is known to exhibit interesting dynamics. Figure 2.6 shows a historical record taken over 90 years for a population of lynxes versus a population of hares [142]. As can be seen from the graph, the annual records of the populations of each species are oscillatory in nature.

[Figure 2.6: Predator versus prey. The photograph on the left shows a Canadian lynx and a snowshoe hare, the lynx's primary prey. The graph on the right shows the populations of hares and lynxes between 1845 and 1935 in a section of the Canadian Rockies [142]. The data were collected on an annual basis over a period of 90 years. Photograph copyright Tom and Pat Leeson.]

A simple model for this situation can be constructed using a discrete-time model by keeping track of the rate of births and deaths of each species. Letting H represent the population of hares and L represent the population of lynxes, we can describe the state in terms of the populations at discrete periods of time. Letting k be the discrete-time index (e.g., the month number), we can write

$$ H[k+1] = H[k] + b_r(u)H[k] - aL[k]H[k], \qquad L[k+1] = L[k] + cL[k]H[k] - d_f L[k], \qquad (2.13) $$

where b_r(u) is the hare birth rate per unit period (and a function of the food supply u), d_f is the lynx mortality rate, and a and c are the interaction
coefficients. The interaction term aL[k]H[k] models the rate of predation, which is assumed to be proportional to the rate at which predators and prey meet and is hence given by the product of the population sizes. The interaction term cL[k]H[k] in the lynx dynamics has a similar form and represents the rate of growth of the lynx population. This model makes many simplifying assumptions (such as the fact that hares decrease in number only through predation by lynxes), but it often is sufficient to answer basic questions about the system.

To illustrate the use of this model, we can compute the number of lynxes and hares at each time point from some initial population. This is done by starting with x[0] = (H[0], L[0]) and then using equation (2.13) to compute the populations in the following period. By iterating this procedure, we can generate the populations over time. The output of this process for a specific choice of parameters and initial conditions is shown in Figure 2.7. While the details of the simulation are different from the experimental data (to be expected given the simplicity of our assumptions), we see qualitatively similar trends and hence we can use the model to help explore the dynamics of the system.

[Figure 2.7: Discrete-time simulation of the predator-prey model (2.13). Using the parameters a = c = 0.014, b_r(u) = 0.6 and d_f = 0.7 in equation (2.13), the period and magnitude of the lynx and hare population cycles approximately match the data in Figure 2.6.]

Example 2.4 (Email server). The IBM Lotus server is a collaborative software system that administers users' email, documents and notes. Client machines interact with end users to provide access to data and applications. The server also handles other administrative tasks. In the early development of the system, it was observed that the performance was poor when the central processing unit (CPU) was overloaded because of too many service requests, and mechanisms to control the load were therefore introduced.

The interaction between the client and the server is in the form of remote procedure calls (RPCs). The server maintains a log of statistics of completed requests. The total number of requests being served, called RIS (RPCs in server), is also measured. The load on the server is controlled by a parameter called MaxUsers, which sets the total number of client connections to the server. This parameter is controlled by the system administrator. The server can be regarded as a dynamical system with MaxUsers as the input and RIS as the output. The relationship between input and output was first investigated by exploring the steady-state performance and was found to be linear.

In [97] a dynamic model in the form of a first-order difference equation is used to capture the dynamic behavior of this system. Using system identification techniques, a model of the form

$$ y[k+1] = a y[k] + b u[k] $$

was constructed, where u = MaxUsers − \overline{MaxUsers} and y = RIS − \overline{RIS}. The parameters a = 0.43 and b = 0.47 describe the dynamics of the system around the operating point, and \overline{MaxUsers} = 165 and \overline{RIS} = 135 represent the nominal operating point of the system. The number of requests was averaged over a sampling period of 60 s.

Simulation and Analysis

State space models can be used to answer many questions. One of the most common, as we have seen in the previous examples, involves predicting the evolution of the system state from a given initial condition. While for simple models this can be done in closed form, more often it is accomplished through computer simulation. One can also use state space models to analyze the overall behavior of the system without making direct use of simulation.

Consider again the damped spring-mass system from Section 2.1, but this time with an external force applied, as shown in Figure 2.8. We wish to predict the motion of the system for a periodic forcing function with a given initial condition and determine the amplitude, frequency and decay rate of the resulting
motion.

[Figure 2.8: A driven spring-mass system with damping. Here we use a linear damping element with coefficient of viscous friction c. The mass is driven with a sinusoidal force of amplitude A.]

We choose to model the system with a linear ordinary differential equation. Using Hooke's law to model the spring and assuming that the damper exerts a force that is proportional to the velocity of the system, we have

$$ m\ddot q + c\dot q + kq = u, \qquad (2.14) $$

where m is the mass, q is the displacement of the mass, c is the coefficient of viscous friction, k is the spring constant and u is the applied force. In state space form, using x = (q, q̇) as the state and choosing y = q as the output, we have

$$ \frac{dx}{dt} = \begin{pmatrix} x_2 \\ -\dfrac{c}{m}x_2 - \dfrac{k}{m}x_1 + \dfrac{u}{m} \end{pmatrix}, \qquad y = x_1. $$

We see that this is a linear second-order differential equation with one input u and one output y.

We now wish to compute the response of the system to an input of the form u = A sin ωt. Although it is possible to solve for the response analytically, we instead make use of a computational approach that does not rely on the specific form of this system. Consider the general state space system

$$ \frac{dx}{dt} = f(x, u). $$

Given the state x at time t, we can approximate the value of the state at a short time h > 0 later by assuming that the rate of change f(x, u) is constant over the interval t to t + h. This gives

$$ x(t + h) \approx x(t) + h f(x(t), u(t)). \qquad (2.15) $$

Iterating this equation, we can thus solve for x as a function of time. This approximation is known as Euler integration and is in fact a difference equation if we let h represent the time increment and write x[k] = x(kh). Although modern simulation tools such as MATLAB and Mathematica use more accurate methods than Euler integration, they still have some of the same basic tradeoffs.

Returning to our specific example, Figure 2.9 shows the results of computing x(t) using equation (2.15), along with the analytical computation. We see that as h gets smaller, the computed solution converges to the exact solution.

[Figure 2.9: Simulation of the forced spring-mass system with different simulation time constants. The dashed line represents the analytical solution. The solid lines represent the approximate solutions via the method of Euler integration, using decreasing step sizes h = 1, h = 0.5 and h = 0.1.]

The form of the solution is also worth noticing: after an initial transient, the system settles into a periodic motion. The portion of the response after the transient is called the steady-state response to the input.

In addition to generating simulations, models can also be used to answer other types of questions. Two that are central to the methods described in this text concern the stability of an equilibrium point and the input/output frequency response. We illustrate these two computations through the examples below and return to the general computations in later chapters.

Returning to the damped spring-mass system, the equations of motion with no input forcing are given by

$$ \frac{dx}{dt} = \begin{pmatrix} x_2 \\ -\dfrac{c}{m}x_2 - \dfrac{k}{m}x_1 \end{pmatrix}, \qquad (2.16) $$

where x₁ is the position of the mass (relative to the rest position) and x₂ is its velocity. We wish to show that if the initial state of the system is away from the rest position, the system will eventually return to the rest position (we will later define this situation to mean that the rest position is asymptotically stable). While we could heuristically show this by simulating many, many initial conditions, we seek instead to prove that this is true for any initial condition. To do so, we construct a function V : ℝⁿ → ℝ that maps the system state to a positive real number. For mechanical systems, a convenient choice is the energy of the system,

$$ V(x) = \frac{1}{2}kx_1^2 + \frac{1}{2}mx_2^2. \qquad (2.17) $$

If we look at the time derivative of the energy function, we see that

$$ \frac{dV}{dt} = kx_1\dot x_1 + mx_2\dot x_2 = kx_1x_2 + mx_2\Bigl(-\frac{c}{m}x_2 - \frac{k}{m}x_1\Bigr) = -cx_2^2, $$

which is always either negative or zero. Hence V(x(t)) is never increasing and, using a bit of analysis that we will see formally later, the individual states must remain
bounded.

If we wish to show that the states eventually return to the origin, we must use a slightly more detailed analysis. Intuitively, we can reason as follows: suppose that for some period of time, V(x(t)) stops decreasing. Then it must be true that V̇(x(t)) = 0, which in turn implies that x₂(t) = 0 for that same period. In that case ẋ₂(t) = 0, and we can substitute into the second line of equation (2.16) to obtain

$$ 0 = \dot x_2 = -\frac{c}{m}x_2 - \frac{k}{m}x_1 = -\frac{k}{m}x_1. $$

Thus we must have that x₁ also equals zero, and so the only time that V(x(t)) can stop decreasing is if the state is at the origin, and hence this system is at its rest position. Since we know that V(x(t)) is never increasing (because V̇ ≤ 0), we therefore conclude that the origin is stable (for any initial condition). This type of analysis, called Lyapunov stability analysis, is considered in detail in Chapter 4. It shows some of the power of using models for the analysis of system properties.

Another type of analysis that we can perform with models is to compute the output of a system to a sinusoidal input. We again consider the spring-mass system, but this time keeping the input and leaving the system in its original form,

$$ m\ddot q + c\dot q + kq = u. \qquad (2.18) $$

We wish to understand how the system responds to a sinusoidal input of the form u(t) = A sin ωt. We will see how to do this analytically in Chapter 6, but for now we make use of simulations to compute the answer.

We first begin with the observation that if q(t) is the solution to equation (2.18) with input u(t), then applying an input 2u(t) will give a solution 2q(t) (this is easily verified by substitution). Hence it suffices to look at an input with unit magnitude, A = 1. A second observation, which we will prove in Chapter 5, is that the long-term response of the system to a sinusoidal input is itself a sinusoid at the same frequency, and so the output has the form

$$ q(t) = g(\omega)\sin(\omega t + \varphi(\omega)), $$

where g(ω) is called the gain of the system and φ(ω) is called the phase (or phase offset).

To compute the frequency response numerically, we can simulate the system at a set of frequencies ω₁, …, ω_N and
plot the gain and phase at each of these frequencies. An example of this type of computation is shown in Figure 2.10.

[Figure 2.10: A frequency response (gain only), computed by measuring the response of individual sinusoids. The figure on the left shows the response of the system as a function of time to a number of different unit-magnitude inputs at different frequencies. The figure on the right shows this same data in a different way, with the magnitude of the response plotted as a function of the input frequency. The filled circles correspond to the particular frequencies shown in the time responses.]

2.3 Modeling Methodology

To deal with large, complex systems, it is useful to have different representations of the system that capture the essential features and hide irrelevant details. In all branches of science and engineering, it is common practice to use some graphical description of systems, called schematic diagrams. They can range from stylistic pictures to drastically simplified standard symbols. These pictures make it possible to get an overall view of the system and to identify the individual components. Examples of such diagrams are shown in Figure 2.11. Schematic diagrams are useful because they give an overall picture of a system, showing different subprocesses and their interconnection and indicating variables that can be manipulated and signals that can be measured.

Block Diagrams

A special graphical representation called a block diagram has been developed in control engineering. The purpose of a block diagram is to emphasize the information flow and to hide details of the system. In a block diagram, different process elements are shown as boxes, and each box has inputs denoted by lines with arrows pointing toward the box and outputs denoted by lines with arrows going out of the box. The inputs denote the variables that influence a process, and
the outputs denote the signals that we are interested in or signals that influence other subsystems. Block diagrams can also be organized in hierarchies, where individual blocks may themselves contain more detailed block diagrams.

[Figure 2.11: Schematic diagrams for different disciplines. Each diagram is used to illustrate the dynamics of a feedback system: (a) electrical schematics for a power system [132], (b) a biological circuit diagram for a synthetic clock circuit [21], (c) a process diagram for a distillation column [178] and (d) a Petri net description of a communication protocol.]

Figure 2.12 shows some of the notation that we use for block diagrams. Signals are represented as lines, with arrows to indicate inputs and outputs. The first diagram is the representation for a summation of two signals. An input/output response is represented as a rectangle with the system name (or mathematical description) in the block.

[Figure 2.12: Standard block diagram elements: (a) summing junction, (b) gain block, (c) saturation, (d) nonlinear map, (e) integrator and (f) input/output system. The arrows indicate the inputs and outputs of each element, with the mathematical operation corresponding to the block labeled at the output. The system block (f) represents the full input/output response of a dynamical system.]

[Figure 2.13: A block diagram representation of the flight control system for an insect flying against the wind. The mechanical portion of the model consists of the rigid-body dynamics of the fly, the drag due to flying through the air and the forces generated by the wings. The motion of the body causes the visual environment of the fly to change, and this information is then used to control the motion of the wings (through the sensory motor system), closing the loop.]

Two special
cases are a proportional gain, which scales the input by a multiplicative factor, and an integrator, which outputs the integral of the input signal.

Figure 2.13 illustrates the use of a block diagram, in this case for modeling the flight response of a fly. The flight dynamics of an insect are incredibly intricate, involving careful coordination of the muscles within the fly to maintain stable flight in response to external stimuli. One known characteristic of flies is their ability to fly upwind by making use of the optical flow in their compound eyes as a feedback mechanism. Roughly speaking, the fly controls its orientation so that the point of contraction of the visual field is centered in its visual field.

To understand this complex behavior, we can decompose the overall dynamics of the system into a series of interconnected subsystems (or blocks). Referring to Figure 2.13, we can model the insect navigation system through an interconnection of five blocks. The sensory motor system (a) takes the information from the visual system (e) and generates muscle commands that attempt to steer the fly so that the point of contraction is centered. These muscle commands are converted into forces through the flapping of the wings (b) and the resulting aerodynamic forces that are produced. The forces from the wings are combined with the drag on the fly (d) to produce a net force on the body of the fly. The wind velocity enters through the drag aerodynamics. Finally, the body dynamics (c) describe how the fly translates and rotates as a function of the net forces that are applied to it. The insect position, speed, and orientation are fed back to the drag aerodynamics and vision system blocks as inputs.

Each of the blocks in the diagram can itself be a complicated subsystem. For example, the visual system of a fruit fly consists of two complicated compound eyes (with about 700 elements per eye), and the sensory motor system has about 200,000 neurons that are used to process information. A more
detailed block diagram of the insect flight control system would show the interconnections between these elements, but here we have used one block to represent how the motion of the fly affects the output of the visual system and a second block to represent how the visual field is processed by the fly's brain to generate muscle commands. The choice of the level of detail of the blocks and what elements to separate into different blocks often depends on experience and the questions that one wants to answer using the model. One of the powerful features of block diagrams is their ability to hide information about the details of a system that may not be needed to gain an understanding of the essential dynamics of the system.

Modeling from Experiments

Since control systems are provided with sensors and actuators, it is also possible to obtain models of system dynamics from experiments on the process. The models are restricted to input/output models, since only these signals are accessible to experiments, but modeling from experiments can also be combined with modeling from physics through the use of feedback and interconnection.

A simple way to determine a system's dynamics is to observe the response to a step change in the control signal. Such an experiment begins by setting the control signal to a constant value; then, when steady state is established, the control signal is changed quickly to a new level and the output is observed. The experiment gives the step response of the system, and the shape of the response gives useful information about the dynamics. It immediately gives an indication of the response time, and it tells if the system is oscillatory or if the response is monotone.

Example 2.5 (Spring-mass system). Consider the spring-mass system from Section 2.1, whose dynamics are given by

  m q̈ + c q̇ + k q = u.    (2.19)

We wish to determine the constants m, c, and k by measuring the response of the system to a step input of magnitude F0. We will show in Chapter 6 that when c² < 4km, the step response for this system from the rest configuration is given by

  q(t) = (F0/k) [ 1 − (ω0/ωd) e^(−ct/2m) sin(ωd t + φ) ],
  ωd = √(4km − c²)/(2m),  φ = tan⁻¹( √(4km − c²)/c ),

where ω0 = √(k/m). From the form of the solution we see that the form of the response is determined by the parameters of the system. Hence, by measuring certain features of the step response, we can determine the parameter values. Figure 2.14 shows the response of the system to a step of magnitude F0 = 20 N, along with some measurements. We start by noting that the steady-state position is q(∞) = F0/k, which determines k.

Figure 2.14: Step response for a spring-mass system. The magnitude of the step input is F0 = 20 N. The period of oscillation T is determined by looking at the time between two subsequent local maxima in the response. The period, combined with the steady-state value q(∞) and the relative decrease between local maxima, can be used to estimate the parameters in a model of the system.

Normalization and Scaling

Scaling can also improve the numerical conditioning of the model to allow faster and more accurate simulations. The procedure of scaling is straightforward: choose units for each independent variable and introduce new variables by dividing the variables by the chosen normalization unit. We illustrate the procedure with two examples.

Example 2.6 (Spring-mass system). Consider again the spring-mass system introduced earlier. Neglecting the damping, the system is described by

  m q̈ + k q = u.

The model has two parameters, m and k. To normalize the model we introduce dimension-free variables x = q/l and τ = ω0 t, where ω0 = √(k/m) and l is the chosen length scale. We scale force by mlω0² and introduce υ = u/(mlω0²). The scaled equation then becomes

  d²x/dτ² = d²(q/l)/d(ω0 t)² = (1/(mlω0²)) (−kq + u) = −x + υ,

which is the normalized undamped spring-mass system. Notice that the normalized model has no parameters, while the original model had two parameters, m and k. Introducing the scaled, dimension-free state variables z1 = x = q/l and z2 = dx/dτ = q̇/(lω0), the model can be written as

  d/dτ [z1; z2] = [0 1; −1 0] [z1; z2] + [0; υ].

This simple linear equation describes the dynamics of any spring-mass system, independent of the particular
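The normalized model just derived can be sanity-checked numerically: with υ = 0 the scaled dynamics dz1/dτ = z2, dz2/dτ = −z1 + υ should oscillate with period 2π for any m and k. A minimal sketch (the fixed-step RK4 integrator and step size are choices of this note, not from the text):

```python
# Numerical check of the normalized spring-mass model
#   dz1/dtau = z2,  dz2/dtau = -z1 + v
# With v = 0 the solution is a pure oscillation with period 2*pi
# in scaled time, regardless of the physical m and k.

import math

def step_rk4(z, v, h):
    """One fixed-step RK4 step of the normalized oscillator."""
    def f(z):
        z1, z2 = z
        return (z2, -z1 + v)
    k1 = f(z)
    k2 = f((z[0] + 0.5*h*k1[0], z[1] + 0.5*h*k1[1]))
    k3 = f((z[0] + 0.5*h*k2[0], z[1] + 0.5*h*k2[1]))
    k4 = f((z[0] + h*k3[0], z[1] + h*k3[1]))
    return (z[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            z[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

h = 0.001
z = (1.0, 0.0)                      # start displaced, at rest
for _ in range(int(2*math.pi/h)):   # integrate one scaled period
    z = step_rk4(z, 0.0, h)
# After tau = 2*pi the state should have returned to (1, 0)
```

Recovering physical time or amplitude simply means undoing the scaling, t = τ/ω0 and q = l x.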
parameters and hence gives us insight into the fundamental dynamics of this oscillatory system. To recover the physical frequency of oscillation or its magnitude, we must invert the scaling we have applied.

Example 2.7 (Balance system). Consider the balance system described in Section 2.1. Neglecting damping by putting c = 0 and γ = 0 in equation (2.9), the model can be written as

  (M + m) d²q/dt² − ml cos θ d²θ/dt² + ml sin θ (dθ/dt)² = F,
  −ml cos θ d²q/dt² + (J + ml²) d²θ/dt² − mgl sin θ = 0.

Let ω0 = √(mgl/(J + ml²)), choose the length scale as l, let the time scale be 1/ω0, choose the force scale as (M + m)lω0², and introduce the scaled variables τ = ω0 t, x = q/l, and u = F/((M + m)lω0²). The equations then become

  d²x/dτ² − α cos θ d²θ/dτ² + α sin θ (dθ/dτ)² = u,
  −β cos θ d²x/dτ² + d²θ/dτ² − sin θ = 0,

where α = m/(M + m) and β = ml²/(J + ml²). Notice that the original model has five parameters, m, M, J, l, and g, but the normalized model has only two parameters.

Figure 2.15: Characterization of model uncertainty: (a) static uncertainty, (b) uncertainty lemon, (c) model uncertainty. Uncertainty of a static system is illustrated in (a), where the solid line indicates the nominal input/output relationship and the dashed lines indicate the range of possible uncertainty. The uncertainty lemon [83] in (b) is one way to capture uncertainty in dynamical systems, emphasizing that a model is valid only in some amplitude and frequency ranges. In (c), a model is represented by a nominal model M and another model Δ representing the uncertainty, analogous to the representation of parameter uncertainty.

There are also effects of aging that can cause changes or drift in the systems, as well as high-frequency effects: a resistor will no longer be a pure resistance at very high frequencies, and a beam has stiffness and will exhibit additional dynamics when subject to high-frequency excitation. The uncertainty lemon [83] shown in Figure 2.15b is one way to conceptualize the uncertainty of a system. It illustrates that a model is valid only in certain amplitude and frequency ranges. We will introduce some formal tools for representing uncertainty in Chapter
12, using figures such as Figure 2.15c. These tools make use of the concept of a transfer function, which describes the frequency response of an input/output system. For now, we simply note that one should always be careful to recognize the limits of a model and not to make use of models outside their range of applicability. For example, one can describe the uncertainty lemon and then check to make sure that signals remain in this region. In early analog computing, a system was simulated using operational amplifiers, and it was customary to give alarms when certain signal levels were exceeded. Similar features can be included in digital simulation.

2.4 Modeling Examples

In this section we introduce additional examples that illustrate some of the different types of systems for which one can develop differential equation and difference equation models. These examples are specifically chosen from a range of different fields to highlight the broad variety of systems to which feedback and control concepts can be applied. A more detailed set of applications that serve as running examples throughout the text are given in the next chapter.

Motion Control Systems

Motion control systems involve the use of computation and feedback to control the movement of a mechanical system. Motion control systems range from nanopositioning systems (atomic force microscopes, adaptive optics), to control systems for the read/write heads in a disk drive or CD player, to manufacturing systems (transfer machines and industrial robots), to automotive control systems (antilock brakes, suspension control, traction control), to air and space flight control systems (airplanes, satellites, rockets, and planetary rovers).

Example 2.8 (Vehicle steering: the bicycle model). A common problem in motion control is to control the trajectory of a vehicle through an actuator that causes a change in the orientation. A steering wheel on an automobile and the front wheel of a bicycle are two examples, but similar dynamics occur in the steering of ships
or control of the pitch dynamics of an aircraft. In many cases we can understand the basic behavior of these systems through the use of a simple model that captures the basic kinematics of the system.

Consider a vehicle with two wheels as shown in Figure 2.16. For the purpose of steering we are interested in a model that describes how the velocity of the vehicle depends on the steering angle δ.

Figure 2.16: Vehicle steering dynamics. The left figure shows an overhead view of a vehicle with four wheels. The wheel base is b, and the center of mass is at a distance a forward of the rear wheels. By approximating the motion of the front and rear pairs of wheels by a single front wheel and a single rear wheel, we obtain an abstraction called the bicycle model, shown on the right. The steering angle is δ, and the velocity at the center of mass has the angle α relative to the length axis of the vehicle. The position of the vehicle is given by (x, y) and the orientation (heading) by θ.

To be specific, consider the velocity υ at the center of mass, a distance a from the rear wheel, and let b be the wheel base, as shown in Figure 2.16. Let x and y be the coordinates of the center of mass, θ the heading angle, and α the angle between the velocity vector υ and the centerline of the vehicle. Since b = ra tan δ and a = ra tan α, it follows that tan α = (a/b) tan δ, and we get the following relation between α and the steering angle δ:

  α(δ) = arctan( (a tan δ)/b ).    (2.23)

Assume that the wheels are rolling without slip and that the velocity of the rear wheel is υ0. The vehicle speed at its center of mass is υ = υ0/cos α, and we find that the motion of this point is given by

  dx/dt = υ cos(α + θ) = υ0 cos(α + θ)/cos α,
  dy/dt = υ sin(α + θ) = υ0 sin(α + θ)/cos α.    (2.24)

To see how the angle θ is influenced by the steering angle, we observe from Figure 2.16 that the vehicle rotates with the angular velocity υ0/ra around the point O. Hence

  dθ/dt = υ0/ra = (υ0/b) tan δ.    (2.25)

Equations (2.23)-(2.25) can be used to model an automobile under the assumptions that there is no slip between the wheels
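Equations (2.23)-(2.25) are easy to integrate numerically. The sketch below (the parameter values a, b, υ0 and the forward-Euler integration are illustrative choices, not from the text) drives the model with a constant steering angle, so the heading grows linearly and the path traced out is a circle:

```python
# Kinematic bicycle model, equations (2.23)-(2.25):
#   alpha = arctan(a*tan(delta)/b)
#   dx/dt = v0*cos(alpha+theta)/cos(alpha)
#   dy/dt = v0*sin(alpha+theta)/cos(alpha)
#   dtheta/dt = (v0/b)*tan(delta)
# Parameter values are illustrative only.

import math

def simulate(delta, v0=1.0, a=1.0, b=2.0, T=5.0, h=1e-3):
    """Integrate the bicycle model for a constant steering angle delta."""
    x = y = theta = 0.0
    alpha = math.atan(a*math.tan(delta)/b)   # equation (2.23)
    for _ in range(int(T/h)):
        x += h*v0*math.cos(alpha + theta)/math.cos(alpha)
        y += h*v0*math.sin(alpha + theta)/math.cos(alpha)
        theta += h*(v0/b)*math.tan(delta)    # equation (2.25)
    return x, y, theta

# Constant steering: heading grows at the constant rate (v0/b)*tan(delta)
x, y, theta = simulate(delta=0.1)
```

With δ = 0 the same function traces a straight line along the x axis, which is a quick consistency check on the implementation.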
and the road and that the two front wheels can be approximated by a single wheel at the center of the car. The assumption of no slip can be relaxed by adding an extra state variable, giving a more realistic model. Such a model also describes the steering dynamics of ships as well as the pitch dynamics of aircraft and missiles. It is also possible to choose coordinates so that the reference point is at the rear wheels (corresponding to setting α = 0), a model often referred to as the Dubins car [66]. Figure 2.16 represents the situation when the vehicle moves forward and has front-wheel steering. The case when the vehicle reverses is obtained by changing the sign of the velocity, which is equivalent to a vehicle with rear-wheel steering.

Figure 2.17: Vectored thrust aircraft: (a) Harrier jump jet, (b) simplified model. The Harrier AV-8B military aircraft (a) redirects its engine thrust downward so that it can hover above the ground. Some air from the engine is diverted to the wing tips to be used for maneuvering. As shown in (b), the net thrust on the aircraft can be decomposed into a horizontal force F1 and a vertical force F2 acting at a distance r from the center of mass.

Example 2.9 (Vectored thrust aircraft). Consider the motion of a vectored thrust aircraft, such as the Harrier jump jet shown in Figure 2.17a. The Harrier is capable of vertical takeoff by redirecting its thrust downward and through the use of smaller maneuvering thrusters located on its wings. A simplified model of the Harrier is shown in Figure 2.17b, where we focus on the motion of the vehicle in a vertical plane through the wings of the aircraft. We resolve the forces generated by the main downward thruster and the maneuvering thrusters as a pair of forces F1 and F2 acting at a distance r below the aircraft (determined by the geometry of the thrusters). Let (x, y, θ) denote the position and orientation of the center of mass of the aircraft. Let m be the mass of the vehicle, J the moment of inertia, g the
gravitational constant, and c the damping coefficient. Then the equations of motion for the vehicle are given by

  m ẍ = F1 cos θ − F2 sin θ − c ẋ,
  m ÿ = F1 sin θ + F2 cos θ − mg − c ẏ,
  J θ̈ = r F1.    (2.26)

It is convenient to redefine the inputs so that the origin is an equilibrium point of the system with zero input. Letting u1 = F1 and u2 = F2 − mg, the equations become

  m ẍ = −mg sin θ − c ẋ + u1 cos θ − u2 sin θ,
  m ÿ = mg (cos θ − 1) − c ẏ + u1 sin θ + u2 cos θ,
  J θ̈ = r u1.    (2.27)

These equations describe the motion of the vehicle as a set of three coupled second-order differential equations.

Information Systems

Information systems range from communication systems like the Internet to software systems that manipulate data or manage enterprise-wide resources. Feedback is present in all these systems, and designing strategies for routing, flow control, and buffer management is a typical problem. Many results in queuing theory emerged from design of telecommunication systems and later from development of the Internet and computer communication systems [32, 127, 177]. Management of queues to avoid congestion is a central problem, and we will therefore start by discussing the modeling of queuing systems.

Figure 2.18: Schematic diagram of a queuing system. Messages arrive at rate λ and are stored in a queue. Messages are processed and removed from the queue at rate μ. The average size of the queue is given by x ∈ ℝ.

Example 2.10 (Queuing systems). A schematic picture of a simple queue is shown in Figure 2.18. Requests arrive and are then queued and processed. There can be large variations in arrival rates and service rates, and the queue length builds up when the arrival rate is larger than the service rate. When the queue becomes too large, service is denied using an admission control policy.

The system can be modeled in many different ways. One way is to model each incoming request, which leads to an event-based model where the state is an integer that represents the queue length.
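One way to make this event-based view concrete is a small stochastic simulation in which the integer queue length rises on arrivals and falls on service completions. The exponential (Poisson) timing and the numerical rates used below are assumptions for illustration:

```python
# Event-based model of the queue in Figure 2.18: the state is an
# integer queue length that increases on arrivals (rate lam) and
# decreases on service completions (rate mu). Events are generated
# as a race between exponential timers; rates are illustrative.

import random

def simulate_queue(lam, mu, t_end, seed=1):
    rng = random.Random(seed)
    t, x = 0.0, 0          # time, queue length
    area = 0.0             # integral of x dt, for the time average
    while t < t_end:
        t_arr = rng.expovariate(lam)                       # next arrival
        t_srv = rng.expovariate(mu) if x > 0 else float("inf")
        dt = min(t_arr, t_srv)
        area += x*dt
        t += dt
        x = x + 1 if t_arr < t_srv else x - 1
    return area/t          # time-averaged queue length

avg = simulate_queue(lam=0.5, mu=1.0, t_end=20000.0)
# For this arrival/service model with lam/mu = 0.5, standard M/M/1
# theory gives a long-run average of lam/(mu - lam) = 1, so the
# estimate should land near 1
```

The same event machinery, with time-varying λ, produces the solid line of Figure 2.19b.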
The queue changes when a request arrives or a request is serviced. The statistics of arrival and servicing are typically modeled as random processes. In many cases it is possible to determine statistics of quantities like queue length and service time, but the computations can be quite complicated.

A significant simplification can be obtained by using a flow model. Instead of keeping track of each request, we instead view service and requests as flows, similar to what is done when replacing molecules by a continuum when analyzing fluids.

Figure 2.19: Queuing dynamics: (a) the steady-state queue length as a function of λ/μmax; (b) the behavior of the queue length when there is a temporary overload in the system. The solid line shows a realization of an event-based simulation, and the dashed line shows the behavior of the flow model (2.29).

Assuming that the average queue length x is a continuous variable and that arrivals and services are flows with rates λ and μ, the system can be modeled by the first-order differential equation

  dx/dt = λ − μ = λ − μmax f(x),  x ≥ 0,    (2.28)

where μmax is the maximum service rate and f(x) is a number between 0 and 1 that describes the effective service rate as a function of the queue length. It is natural to assume that the effective service rate depends on the queue length, because larger queues require more resources. In steady state we have f(x) = λ/μmax, and we assume that the queue length goes to zero when λ/μmax goes to zero and that it goes to infinity when λ/μmax goes to 1. This implies that f(0) = 0 and that f(∞) = 1. In addition, if we assume that the effective service rate deteriorates monotonically with queue length, then the function f(x) is monotone and concave. A simple function that satisfies the basic requirements is f(x) = x/(1 + x), which gives the model

  dx/dt = λ − μmax x/(x + 1).    (2.29)

This model was proposed by Agnew [5]. It can be shown that if arrival and service processes are Poisson processes, the average queue length is given by equation (2.29), and that equation (2.29) is a good approximation even for short queue lengths; see Tipper [193].

To explore the properties of the model (2.29), we will first investigate the equilibrium value of the queue length when the arrival rate λ is constant. Setting the derivative dx/dt to zero in equation (2.29) and solving for x, we find that the queue length x approaches the steady-state value

  xe = λ/(μmax − λ).    (2.30)

Figure 2.19a shows the steady-state queue length as a function of λ/μmax, the effective service rate excess. Notice that the queue length increases rapidly as λ approaches μmax. To have a queue length less than 20 requires λ/μmax < 0.95. The average time to service a request is Ts = (x + 1)/μmax, and it increases dramatically as λ approaches μmax.

Figure 2.19b illustrates the behavior of the server in a typical overload situation. The maximum service rate is μmax = 1, and the arrival rate starts at λ = 0.5. The arrival rate is increased to λ = 4 at time 20, and it returns to λ = 0.5 at time 25. The figure shows that the queue builds up quickly and clears very slowly. Since the response time is proportional to queue length, it means that the quality of service is poor for a long period after an overload. This behavior is called the rush-hour effect and has been observed in web servers and many other queuing systems, such as automobile traffic.

Figure 2.20: Illustration of feedback in the virtual memory system of the IBM 370: (a) the effect of feedback on execution times in a simulation, following [43]; results with no feedback are shown with o, and results with feedback with x. Notice the dramatic decrease in execution time for the system with feedback. (b) How the three states are obtained based on process measurements.

The dashed
line in Figure 2.19b shows the behavior of the flow model, which describes the average queue length. The simple model captures behavior qualitatively, but there are variations from sample to sample when the queue length is short.

Many complex systems use discrete control actions. Such systems can be modeled by characterizing the situations that correspond to each control action, as illustrated in the following example.

Example 2.11 (Virtual memory paging control). An early example of the use of feedback in computer systems was applied in the operating system OS/VS for the IBM 370 [43, 55]. The system used virtual memory, which allows programs to address more memory than is physically available as fast memory. Data in current fast memory (random access memory, RAM) is accessed directly, but data that resides in slower memory (disk) is automatically loaded into fast memory. The system is implemented in such a way that it appears to the programmer as a single large section of memory. The system performed very well in many situations, but very long execution times were encountered in overload situations, as shown by the open circles in Figure 2.20a. The difficulty was resolved with a simple discrete feedback system.

Figure 2.21: Consensus protocols for sensor networks: (a) a simple sensor network with five nodes; in this network, node 1 communicates with node 2, and node 2 communicates with nodes 1, 3, 4, 5, etc. (b) A simulation demonstrating the convergence of the consensus protocol (2.31) to the average value of the initial conditions.

The load of the central processing unit (CPU) was measured together with the number of page swaps between fast memory and slow memory. The operating region was classified as being in one of three states: normal, underload, or overload. The normal state is characterized by high CPU activity; the underload state is characterized by low CPU activity and few page
replacements; the overload state has moderate to low CPU load but many page replacements (see Figure 2.20b). The boundaries between the regions and the time for measuring the load were determined from simulations using typical loads. The control strategy was to do nothing in the normal load condition, to exclude a process from memory in the overload condition, and to allow a new process (or a previously excluded process) in the underload condition. The crosses in Figure 2.20a show the effectiveness of the simple feedback system in simulated loads. Similar principles are used in many other situations, e.g., in fast on-chip cache memory.

Example 2.12 (Consensus protocols in sensor networks). Sensor networks are used in a variety of applications where we want to collect and aggregate information over a region of space using multiple sensors that are connected together via a communications network. Examples include monitoring environmental conditions in a geographical area (or inside a building), monitoring the movement of animals or vehicles, and monitoring the resource loading across a group of computers. In many sensor networks the computational resources are distributed along with the sensors, and it can be important for the set of distributed agents to reach a consensus about a certain property, such as the average temperature in a region or the average computational load among a set of computers.

We model the connectivity of the sensor network using a graph, with nodes corresponding to the sensors and edges corresponding to the existence of a direct communications link between two nodes. We use the notation Ni to represent the set of neighbors of a node i. For example, in the network shown in Figure 2.21a, N2 = {1, 3, 4, 5} and N3 = {2, 4}.

To solve the consensus problem, let xi be the state of the ith sensor, corresponding to that sensor's estimate of the average value that we are trying to compute. We initialize the state to the value of the quantity measured by the individual sensor. The consensus protocol
algorithm can now be realized as a local update law:

  xi[k+1] = xi[k] + γ Σ_{j∈Ni} (xj[k] − xi[k]).    (2.31)

This protocol attempts to compute the average by updating the local state of each agent based on the value of its neighbors. The combined dynamics of all agents can be written in the form

  x[k+1] = x[k] − γ (D − A) x[k],    (2.32)

where A is the adjacency matrix and D is a diagonal matrix with entries corresponding to the number of neighbors of each node. The constant γ describes the rate at which the estimate of the average is updated based on information from neighboring nodes. The matrix L = D − A is called the Laplacian of the graph.

The equilibrium points of equation (2.32) are the set of states such that xe[k+1] = xe[k]. It can be shown that xe = (α, α, ..., α) is an equilibrium state for the system, corresponding to each sensor having an identical estimate α for the average. Furthermore, we can show that α is indeed the average value of the initial states. Since there can be cycles in the graph, it is possible that the state of the system could enter into an infinite loop and never converge to the desired consensus state. A formal analysis requires tools that will be introduced later in the text, but it can be shown that for any connected graph we can always find a γ such that the states of the individual agents converge to the average. A simulation demonstrating this property is shown in Figure 2.21b.

Biological Systems

Biological systems provide perhaps the richest source of feedback and control examples. The basic problem of homeostasis, in which a quantity such as temperature or blood sugar level is regulated to a fixed value, is but one of the many types of complex feedback interactions that can occur in molecular machines, cells, organisms, and ecosystems.

Example 2.13 (Transcriptional regulation). Transcription is the process by which messenger RNA (mRNA) is generated from a segment of DNA. The promoter region of a gene allows transcription to be controlled by the presence of other proteins, which bind to the promoter region and either repress
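A consensus iteration of the form (2.31) is straightforward to simulate. In the sketch below, the neighbor sets N2 = {1, 3, 4, 5} and N3 = {2, 4} are taken from the text; the remaining symmetric links, the initial values, and the gain γ = 0.1 are illustrative assumptions:

```python
# Consensus update (2.31) on a small five-node sensor network.
# N2 = {1,3,4,5} and N3 = {2,4} come from the text; the rest of the
# (undirected) edge list is an assumption about Figure 2.21a.

edges = [(1, 2), (2, 3), (2, 4), (2, 5), (3, 4), (4, 5)]
neighbors = {i: set() for i in range(1, 6)}
for i, j in edges:
    neighbors[i].add(j)
    neighbors[j].add(i)

x = {1: 10.0, 2: 20.0, 3: 30.0, 4: 40.0, 5: 50.0}  # initial measurements
gamma = 0.1                                         # small update gain

for _ in range(200):
    # synchronous update: every agent moves toward its neighbors' values
    x = {i: x[i] + gamma*sum(x[j] - x[i] for j in neighbors[i])
         for i in x}
# Each state should approach the average of the initial values (30),
# and the sum of the states is preserved at every step
```

Note that the update conserves the total Σ xi, which is why the common limit is exactly the initial average.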
or activate RNA polymerase, the enzyme that produces an mRNA transcript from DNA. The mRNA is then translated into a protein according to its nucleotide sequence. This process is illustrated in Figure 2.22.

Figure 2.22: Biological circuitry. The cell on the left is a bovine pulmonary cell, stained so that the nucleus, actin, and chromatin are visible. The figure on the right gives an overview of the process by which proteins in the cell are made: RNA is transcribed from DNA by an RNA polymerase enzyme, and the RNA is then translated into a protein by an organelle called a ribosome.

A simple model of the transcriptional regulation process is through the use of a Hill function [56, 154]. Consider the regulation of a protein A with a concentration given by pa and a corresponding mRNA concentration ma. Let B be a second protein with concentration pb that represses the production of protein A through transcriptional regulation. The resulting dynamics of pa and ma can be written as

  dma/dt = αab/(1 + kab pb^nab) + αa0 − γa ma,
  dpa/dt = βa ma − δa pa,    (2.33)

where αab + αa0 is the unregulated transcription rate, γa represents the rate of degradation of mRNA, αab, kab, and nab are parameters that describe how B represses A, βa represents the rate of production of the protein from its corresponding mRNA, and δa represents the rate of degradation of the protein A. The parameter αa0 describes the leakiness of the promoter, and nab is called the Hill coefficient and relates to the cooperativity of the promoter.

A similar model can be used when a protein activates the production of another protein rather than repressing it. In this case the equations have the form

  dma/dt = αab kab pb^nab/(1 + kab pb^nab) + αa0 − γa ma,
  dpa/dt = βa ma − δa pa,    (2.34)

where the variables are the same as described previously. Note that in the case of the activator, if pb is zero, then the production rate is αa0 (versus αab + αa0 for the repressor). As pb gets large, the first term in the
expression for ma approaches 1 and the transcription rate becomes αab + αa0 (versus αa0 for the repressor). Thus we see that the activator and repressor act in opposite fashion from each other.

As an example of how these models can be used, we consider the model of a repressilator, originally due to Elowitz and Leibler [71]. The repressilator is a synthetic circuit in which three proteins each repress another in a cycle. This is shown schematically in Figure 2.23a, where the three proteins are TetR, λ cI, and LacI. The basic idea of the repressilator is that if TetR is present, then it represses the production of λ cI. If λ cI is absent, then LacI is produced (at the unregulated transcription rate), which in turn represses TetR. Once TetR is repressed, then λ cI is no longer repressed, and so on. If the dynamics of the circuit are designed properly, the resulting protein concentrations will oscillate.

Figure 2.23: The repressilator genetic regulatory network: (a) a schematic diagram of the repressilator, showing the layout of the genes in the plasmid that holds the circuit as well as the circuit diagram (center); (b) a simulation of a simple model for the repressilator, showing the oscillation of the individual protein concentrations. Figure courtesy M. Elowitz.

We can model this system using three copies of equation (2.33), with A and B replaced by the appropriate combination of TetR, cI, and LacI. The state of the system is then given by x = (mTetR, pTetR, mcI, pcI, mLacI, pLacI). Figure 2.23b shows the traces of the three protein concentrations for parameters n = 2, α = 0.5, k = 6.25 × 10⁻⁴, α0 = 5 × 10⁻⁴, γ = 5.8 × 10⁻³, β = 0.12, and δ = 1.2 × 10⁻³, with initial conditions x(0) = (1, 0, 0, 200, 0, 0), following [71].

Example 2.14 (Wave propagation in neuronal networks). The dynamics of the membrane potential in a cell are a fundamental mechanism
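A rough simulation of the repressilator can be built from three copies of equation (2.33), using the parameter values and initial condition quoted above. The forward-Euler scheme, step size, and horizon below are arbitrary choices for this sketch, and time units are taken as minutes:

```python
# Three copies of the repression model (2.33) wired in a cycle
# (TetR -| cI -| LacI -| TetR), with the parameter values quoted
# in the text. Forward-Euler integration; step size and horizon
# are arbitrary choices for this sketch.

n, alpha, k = 2, 0.5, 6.25e-4
alpha0, gamma = 5e-4, 5.8e-3
beta, delta = 0.12, 1.2e-3

def deriv(state):
    """Right-hand side for the six states (m, p) of the three genes."""
    m = state[0::2]                      # mRNA concentrations
    p = state[1::2]                      # protein concentrations
    d = []
    for i in range(3):
        prev = p[(i - 1) % 3]            # protein repressing gene i
        dm = alpha/(1 + k*prev**n) + alpha0 - gamma*m[i]
        dp = beta*m[i] - delta*p[i]
        d += [dm, dp]
    return d

state = [1.0, 0.0, 0.0, 200.0, 0.0, 0.0]   # (m, p) for TetR, cI, LacI
dt = 0.5
history = []
for _ in range(int(500/dt)):
    history.append(state[1])               # track the TetR protein
    state = [s + dt*ds for s, ds in zip(state, deriv(state))]
# All states stay nonnegative, and the unrepressed TetR protein
# climbs well into the hundreds, as in the scale of Figure 2.23b
```

Integrating over a longer horizon (and plotting all three proteins) reproduces the cyclic hand-off between repressors shown in Figure 2.23b.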
in understanding signaling in cells, particularly in neurons and muscle cells. The Hodgkin-Huxley equations give a simple model for studying propagation waves in networks of neurons. The model for a single neuron has the form

  C dV/dt = −INa − IK − Ileak + Iinput,

where V is the membrane potential, C is the capacitance, INa and IK are the currents caused by the transport of sodium and potassium across the cell membrane, Ileak is a leakage current, and Iinput is the external stimulation of the cell. Each current obeys Ohm's law, i.e.,

  I = g (V − E),

where g is the conductance and E is the equilibrium voltage. The equilibrium voltage is given by Nernst's law,

  E = (RT/nF) log(ce/ci),

where R is the gas constant, T is the absolute temperature, F is Faraday's constant, n is the charge (or valence) of the ion, and ci and ce are the ion concentrations inside the cell and in the external fluid. At 20 °C we have RT/F = 20 mV.

The Hodgkin-Huxley model was originally developed as a means to predict the quantitative behavior of the squid giant axon [100]. Hodgkin and Huxley shared the 1963 Nobel Prize in Physiology (along with J. C. Eccles) for analysis of the electrical and chemical events in nerve cell discharges. The voltage clamp described in Section 1.3 was a key element in Hodgkin and Huxley's experiments.

2.5 Further Reading

Modeling is ubiquitous in engineering and science and has a long history in applied mathematics. For example, the Fourier series was introduced by Fourier when he modeled heat conduction in solids [76]. Models of dynamics have been developed in many different fields, including mechanics [12, 86], heat conduction [50], fluids [37], vehicles [1, 38, 69], robotics [156, 183], circuits [92], power systems [132], acoustics [30], and micromechanical systems [179]. Control theory requires modeling from many different domains, and most control theory texts contain several chapters on modeling using ordinary differential equations and difference equations (see, for example, [79]). A classic book on the modeling of physical systems, especially mechanical, electrical, and thermofluid systems, is Cannon [49]. The book by Aris [11] is highly original and has a detailed discussion of the use of dimension-free variables. Two of the authors' favorite books on modeling of biological systems are J. D. Murray [154] and Wilson [203].

Exercises

2.1 (Chain of integrators form). Consider the linear ordinary differential equation (2.7). Show that by choosing a state space representation with x1 = y, the dynamics can be written as

  A = [ 0    1    0   ...  0
        0    0    1   ...  0
        ...
        −an  −an−1  ...  −a1 ],
  B = [0; ...; 0; 1],  C = [1 0 ... 0].

This canonical form is called the chain of integrators form.

2.2 (Inverted pendulum). Use the equations of motion for a balance system to derive a dynamic model for the inverted pendulum described in Example 2.2 and verify that for small θ the dynamics are approximated by equation (2.10).

2.3 (Discrete-time dynamics). Consider the following discrete-time system:

  x[k+1] = A x[k] + B u[k],
  y[k] = C x[k],

where x = (x1, x2) and

  A = [ a11  a12
        0    a22 ],  B = [0; 1],  C = [1 0].

In this problem we will explore some of the properties of this discrete-time system as a function of the parameters, the initial conditions, and the inputs.
(a) For the case when a12 = 0 and u = 0, give a closed form expression for the output of the system.
(b) A discrete system is in equilibrium when x[k+1] = x[k] for all k. Let u = r be a constant input and compute the resulting equilibrium point for the system. Show that if |aii| < 1 for all i, all initial conditions give solutions that converge to the equilibrium point.
(c) Write a computer program to plot the output of the system in response to a unit step input, u[k] = 1 for k ≥ 0. Plot the response of your system with x[0] = 0 and A given by a11 = 0.5, a12 = 1, and a22 = 0.25.

2.4 (Keynesian economics). Keynes' simple model for an economy is given by

  Y[k] = C[k] + I[k] + G[k],

where Y, C, I, and G are gross national product (GNP), consumption, investment, and government expenditure for year k. Consumption and investment are modeled by difference equations of the form

  C[k+1] = a Y[k],  I[k+1] = b (C[k+1] − C[k]),

where a and b are parameters. The first equation implies that consumption increases with GNP but that the effect is
delayed. The second equation implies that investment is proportional to the rate of change of consumption.

Show that the equilibrium value of the GNP is given by

Y_e = (1/(1 - a)) (I_e + G_e),

where the parameter 1/(1 - a) is the Keynes multiplier (the gain from I or G to Y). With a = 0.25 an increase of government expenditure will result in a fourfold increase of GNP. Also show that the model can be written as the following discrete-time state model:

[C[k+1]; I[k+1]] = [[a, a]; [ab - b, ab]] [C[k]; I[k]] + [[a]; [ab]] G[k],
Y[k] = C[k] + I[k] + G[k].

2.5 (Least squares system identification) Consider a nonlinear differential equation that can be written in the form

dx/dt = Σ_{i=1}^{M} α_i f_i(x),

where f_i(x) are known nonlinear functions and α_i are unknown but constant parameters. Suppose that we have measurements (or estimates) of the full state x at time instants t1, t2, ..., tN, with N > M. Show that the parameters α_i can be determined by finding the least squares solution to a linear equation of the form

H α = b,

where α ∈ R^M is the vector of all parameters and H ∈ R^{N×M} and b ∈ R^N are appropriately defined.

2.6 (Normalized oscillator dynamics) Consider a damped spring-mass system with dynamics

m d²q/dt² + c dq/dt + k q = F.

Let ω0 = sqrt(k/m) be the natural frequency and ζ = c/(2 sqrt(km)) be the damping ratio.

(a) Show that by rescaling the equations, we can write the dynamics in the form

d²q/dt² + 2 ζ ω0 dq/dt + ω0² q = ω0² u,   (2.35)

where u = F/k. This form of the dynamics is that of a linear oscillator with natural frequency ω0 and damping ratio ζ.

(b) Show that the system can be further normalized and written in the form

dz1/dτ = z2,  dz2/dτ = -z1 - 2 ζ z2 + v.   (2.36)

The essential dynamics of the system are governed by a single damping parameter ζ. The Q-value, defined as Q = 1/(2ζ), is sometimes used instead of ζ.

2.7 (Electric generator) An electric generator connected to a strong power grid can be modeled by a momentum balance for the rotor of the generator:

J d²φ/dt² = Pm - Pe = Pm - (EV/X) sin φ,

where J is the effective moment of inertia of the generator, φ the angle of rotation, Pm the mechanical power that drives the generator, Pe is the active electrical power, E
the generator voltage, V the grid voltage and X the reactance of the line. Assuming that the line dynamics are much faster than the rotor dynamics, Pe = V I = (EV/X) sin φ, where I is the current component in phase with the voltage E and φ is the phase angle between voltages E and V. Show that the dynamics of the electric generator have a normalized form that is similar to the inverted pendulum in Example 2.2, with no damping.

2.8 (Admission control for a queue) The long delays created by temporary overloads can be reduced by rejecting requests when the queue gets large. This allows requests that are accepted to be serviced quickly and requests that cannot be accommodated to receive a rejection quickly so that they can try another server. Consider the simple proportional control with saturation described by

u = sat_(0,1)(k (r - x)),   (2.37)

where sat_(a,b) is defined in equation (3.9) and r is the desired (reference) queue length. Use a simulation to show that this controller reduces the rush-hour effect and explain how the choice of r affects the system dynamics.

2.9 (Biological switch) A genetic switch can be formed by connecting two repressors together in a cycle, as shown below. Using the models from Example 2.13 (assuming that the parameters are the same for both genes and that the mRNA concentrations reach steady state quickly) show that the dynamics can be written in normalized coordinates as

dz1/dτ = μ/(1 + z2^n) - z1 - v1,  dz2/dτ = μ/(1 + z1^n) - z2 - v2,   (2.38)

where z1 and z2 are scaled versions of the protein concentrations and the time scale has also been changed. Show that μ ≈ 200 using the parameters in Example 2.13, and use simulations to demonstrate the switch-like behavior of the system.

2.10 (Motor drive) Consider a system consisting of a motor driving two masses that are connected by a torsional spring, as shown in the diagram below. This system can represent a motor with a flexible shaft that drives a load. Assuming that the motor delivers a torque that is proportional to the current, the dynamics of the system can be described by the equations

J1 d²φ1/dt²
+ c (dφ1/dt - dφ2/dt) + k (φ1 - φ2) = kI I,
J2 d²φ2/dt² + c (dφ2/dt - dφ1/dt) + k (φ2 - φ1) = Td.   (2.39)

Similar equations are obtained for a robot with flexible arms and for the arms of DVD and optical disk drives. Derive a state space model for the system by introducing the (normalized) state variables x1 = φ1, x2 = φ2, x3 = ω1/ω0 and x4 = ω2/ω0, where ω0 = sqrt(k (J1 + J2)/(J1 J2)) is the undamped natural frequency of the system when the control signal is zero.

Chapter Three

Examples

"Don't apply any model until you understand the simplifying assumptions on which it is based, and you can test their validity. Catch phrase: use only as directed. Don't limit yourself to a single model: More than one model may be useful for understanding different aspects of the same phenomenon. Catch phrase: legalize polygamy."

Solomon Golomb, "Mathematical Models: Uses and Limitations," 1970 [87]

In this chapter we present a collection of examples spanning many different fields of science and engineering. These examples will be used throughout the text and in exercises to illustrate different concepts. First-time readers may wish to focus on only a few examples with which they have had the most prior experience or insight to understand the concepts of state, input, output and dynamics in a familiar setting.

3.1 Cruise Control

The cruise control system of a car is a common feedback system encountered in everyday life. The system attempts to maintain a constant velocity in the presence of disturbances primarily caused by changes in the slope of a road. The controller compensates for these unknowns by measuring the speed of the car and adjusting the throttle appropriately.

To model the system we start with the block diagram in Figure 3.1. Let v be the speed of the car and vr the desired (reference) speed. The controller, which typically is of the proportional-integral (PI) type described briefly in Chapter 1, receives the signals v and vr and generates a control signal u that is sent to an actuator that controls the throttle position. The throttle in turn controls the torque T delivered by the engine,
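The feedback loop just described (measure v, compare with vr, let a PI controller set the throttle) can be sketched in a short simulation. The car model below is a deliberately oversimplified first-order force balance with linear drag; the mass, drag coefficient, controller gains and disturbance force are all hypothetical numbers chosen for illustration, not the engine model developed in this section.

```python
# Minimal sketch of a PI cruise controller on a simplified car model:
#   m dv/dt = u_force - c*v + f_dist,   u_force = kp*e + ki*integral(e)
# All parameter values are illustrative assumptions (no throttle saturation,
# no engine torque curve).

def simulate_cruise(vr=20.0, t_final=60.0, dt=0.01,
                    m=1600.0, c=50.0, kp=4000.0, ki=400.0, f_dist=-500.0):
    """Euler integration of the closed loop; returns the final speed."""
    v, z = 0.0, 0.0              # car speed and controller integrator state
    for _ in range(int(t_final / dt)):
        e = vr - v               # speed error seen by the controller
        z += e * dt              # integral of the error
        u_force = kp * e + ki * z
        v += (u_force - c * v + f_dist) / m * dt
    return v

final_speed = simulate_cruise()
print(f"speed after 60 s: {final_speed:.2f} m/s (reference 20 m/s)")
```

Because of the integrator state z, the speed settles at the reference even though the constant disturbance force would otherwise cause a steady-state offset.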
which is transmitted through the gears and the wheels, generating a force F that moves the car. There are disturbance forces Fd due to variations in the slope of the road, the rolling resistance and aerodynamic forces. The cruise controller also has a human-machine interface that allows the driver to set and modify the desired speed. There are also functions that disconnect the cruise control when the brake is touched.

The system has many individual components (actuator, engine, transmission, wheels and car body) and a detailed model can be very complicated. In spite of this, the model required to design the cruise controller can be quite simple. To develop a mathematical model we start with a force balance for the car body. Let v be the speed of the car, m the total mass (including passengers), F the force generated by the contact of the wheels with the road, and Fd the disturbance force

Figure 3.1: Block diagram of a cruise control system for an automobile. The throttle-controlled engine generates a torque T that is transmitted to the ground through the gearbox and wheels. Combined with the external forces from the environment, such as aerodynamic drag and gravitational forces on hills, the net force causes the car to move. The velocity of the car v is measured by a control system that adjusts the throttle through an actuation mechanism. A driver interface allows the system to be turned on and off and the reference speed vr to be established.

due to gravity, friction and aerodynamic drag. The equation of motion of the car is simply

m dv/dt = F - Fd.   (3.1)

The force F is generated by the engine, whose torque is proportional to the rate of fuel injection, which is itself proportional to a control signal 0 <= u <= 1 that controls the throttle position. The torque also depends on engine speed ω. A simple representation of the torque at full throttle is given by the torque curve

T(ω) = Tm (1 - β (ω/ωm - 1)²),   (3.2)

where the maximum torque Tm is obtained at engine speed ωm. Typical parameters are Tm = 190 N·m, ωm = 420 rad/s (about 4000 RPM) and β =
0.4. Let n be the gear ratio and r the wheel radius. The engine speed is related to the velocity through the expression

ω = (n/r) v =: αn v,

and the driving force can be written as

F = (n u / r) T(ω) = αn u T(αn v).

Typical values of αn for gears 1 through 5 are α1 = 40, α2 = 25, α3 = 16, α4 = 12 and α5 = 10. The inverse of αn has a physical interpretation as the effective wheel radius. Figure 3.2 shows the torque as a function of engine speed and vehicle speed. The figure shows that the effect of the gear is to "flatten" the torque curve so that an almost full torque can be obtained almost over the whole speed range.

The disturbance force Fd has three major components: Fg, the forces due to gravity; Fr, the forces due to rolling friction; and Fa, the aerodynamic drag.

Figure 3.2: Torque curves for a typical car engine. The graph on the left shows the torque generated by the engine as a function of the angular velocity of the engine, while the curve on the right shows torque as a function of car speed for different gears.

Letting the slope of the road be θ, gravity gives the force Fg = mg sin θ, as illustrated in Figure 3.3a, where g = 9.8 m/s² is the gravitational constant. A simple model of rolling friction is

Fr = m g Cr sgn(v),

where Cr is the coefficient of rolling friction and sgn(v) is the sign of v (±1) or zero if v = 0. A typical value for the coefficient of rolling friction is Cr = 0.01. Finally, the aerodynamic drag is proportional to the square of the speed:

Fa = (1/2) ρ Cd A v²,

where ρ is the density of air, Cd is the shape-dependent aerodynamic drag coefficient and A is the frontal area of the car. Typical parameters are ρ = 1.3 kg/m³, Cd = 0.32 and A = 2.4 m².

Summarizing, we find that the car can be modeled by

m dv/dt = αn u T(αn v) - m g Cr sgn(v) - (1/2) ρ Cd A v² - m g sin θ,   (3.3)

where the function T is given by equation (3.2). The model (3.3) is a dynamical system of first order. The state is the car velocity v, which is also the output. The
input is the signal u that controls the throttle position, and the disturbance is the force Fd, which depends on the slope of the road. The system is nonlinear because of the torque curve, the gravity term and the nonlinear character of rolling friction and aerodynamic drag. There can also be variations in the parameters; e.g., the mass of the car depends on the number of passengers and the load being carried in the car.

We add to this model a feedback controller that attempts to regulate the speed of the car in the presence of disturbances. We shall use a proportional-integral controller, which has the form

u(t) = kp e(t) + ki ∫_0^t e(τ) dτ.

This controller can itself be realized as an input/output dynamical system by defining a controller state z and implementing the differential equation

dz/dt = vr - v,  u = kp (vr - v) + ki z,   (3.4)

where vr is the desired (reference) speed. As discussed briefly in Section 1.5, the integrator (represented by the state z) ensures that in steady state the error will be driven to zero, even when there are disturbances or modeling errors. The design of PI controllers is the subject of Chapter 10.

Figure 3.3: Car with cruise control encountering a sloping road. A schematic diagram is shown in (a), and (b) shows the response in speed and throttle when a slope of 4° is encountered. The hill is modeled as a net change of 4° in hill angle θ, with a linear change in the angle between t = 5 and t = 6. The PI controller has proportional gain kp = 0.5 and integral gain ki = 0.1.

Figure 3.3b shows the response of the closed loop system, consisting of equations (3.3) and (3.4), when it encounters a hill. The figure shows that even if the hill is so steep that the throttle changes from 0.17 to almost full throttle, the largest speed error is less than 1 m/s, and the desired velocity is recovered after 20 s.

Many approximations were made when deriving the model (3.3). It may seem surprising that such a seemingly complicated system can be described by the simple model (3.3). It is important to make sure that we restrict our use of the model
to the uncertainty lemon conceptualized in Figure 2.15b. The model is not valid for very rapid changes of the throttle, because we have ignored the details of the engine dynamics; neither is it valid for very slow changes, because the properties of the engine will change over the years. Nevertheless the model is very useful for the design of a cruise control system. As we shall see in later chapters, the reason for this is the inherent robustness of feedback systems: even if the model is not perfectly accurate, we can use it to design a controller and make use of the feedback in the controller to manage the uncertainty in the system.

Figure 3.4: Finite state machine for a cruise control system. The figure on the left shows some typical buttons used to control the system. The controller can be in one of four modes, corresponding to the nodes in the diagram on the right. Transition between the modes is controlled by pressing one of the five buttons on the cruise control interface: on, off, set, resume or cancel.

The cruise control system also has a human-machine interface that allows the driver to communicate with the system. There are many different ways to implement this system; one version is illustrated in Figure 3.4. The system has four buttons: on-off, set/decelerate, resume/accelerate and cancel. The operation of the system is governed by a finite state machine that controls the modes of the PI controller and the reference generator. Implementation of controllers and reference generators will be discussed more fully in Chapter 10.

The use of control in automotive systems goes well beyond the simple cruise control system described here. Applications include emissions control, traction control, power control (especially in hybrid vehicles) and adaptive cruise control. Many automotive applications are discussed in detail in the book by Kiencke and Nielsen [124] and in the survey papers by Powers et al. [22, 166].

3.2 Bicycle
Dynamics

The bicycle is an interesting dynamical system with the feature that one of its key properties is due to a feedback mechanism that is created by the design of the front fork. A detailed model of a bicycle is complex because the system has many degrees of freedom and the geometry is complicated. However, a great deal of insight can be obtained from simple models.

To derive the equations of motion we assume that the bicycle rolls on the horizontal xy plane. Introduce a coordinate system that is fixed to the bicycle with the ξ-axis through the contact points of the wheels with the ground, the η-axis horizontal and the ζ-axis vertical, as shown in Figure 3.5. Let v0 be the velocity of the bicycle at the rear wheel, b the wheel base, φ the tilt angle and δ the steering angle. The coordinate system rotates around the point O with the angular velocity ω = v0 δ / b, and an observer fixed to the bicycle experiences forces due to the motion of the coordinate system.

Figure 3.5: Schematic views of a bicycle: (a) top view, (b) rear view, (c) side view. The steering angle is δ and the roll angle is φ. The center of mass has height h and distance a from a vertical through the contact point P1 of the rear wheel. The wheel base is b and the trail is c.

The tilting motion of the bicycle is similar to an inverted pendulum, as shown in the rear view in Figure 3.5b. To model the tilt, consider the rigid body obtained when the wheels, the rider and the front fork assembly are fixed to the bicycle frame. Let m be the total mass of the system, J the moment of inertia of this body with respect to the ξ-axis, and D the product of inertia with respect to the ξζ axes. Furthermore, let the ξ and ζ coordinates of the center of mass with respect to the rear wheel contact point P1 be a and h, respectively. We have J ≈ m h² and D = m a h. The torques acting on the system are due to gravity and centripetal action. Assuming that the steering angle δ is small, the equation of motion
becomes

J d²φ/dt² - (D v0 / b) dδ/dt = m g h sin φ + (m v0² h / b) δ.   (3.5)

The term m g h sin φ is the torque generated by gravity. The terms containing δ and its derivative are the torques generated by steering, with the term (D v0 / b) dδ/dt due to inertial forces and the term (m v0² h / b) δ due to centripetal forces.

The steering angle is influenced by the torque the rider applies to the handlebar. Because of the tilt of the steering axis and the shape of the front fork, the contact point of the front wheel with the road, P2, is behind the axis of rotation of the front wheel assembly, as shown in Figure 3.5c. The distance c between the contact point of the front wheel P2 and the projection of the axis of rotation of the front fork assembly P3 is called the trail. The steering properties of a bicycle depend critically on the trail. A large trail increases stability but makes the steering less agile.

A consequence of the design of the front fork is that the steering angle δ is influenced both by steering torque T and by the tilt of the frame φ. This means that a bicycle with a front fork is a feedback system, as illustrated by the block diagram in Figure 3.6. The steering angle δ influences the tilt angle φ, and the tilt angle influences the steering angle, giving rise to the circular causality that is characteristic of reasoning about feedback.

Figure 3.6: Block diagram of a bicycle with a front fork. The steering torque applied to the handlebars is T, the roll angle is φ and the steering angle is δ. Notice that the front fork creates a feedback from the roll angle φ to the steering angle δ that under certain conditions can stabilize the system.

For a front fork with a positive trail, the bicycle will steer into the lean, creating a centrifugal force that attempts to diminish the lean. Under certain conditions, the feedback can actually stabilize the bicycle. A crude empirical model is obtained by assuming that the block B can be modeled as the static system

δ = k1 T - k2 φ.
(3.6)

This model neglects the dynamics of the front fork, the tire-road interaction and the fact that the parameters depend on the velocity. A more accurate model, called the Whipple model, is obtained using the rigid-body dynamics of the front fork and the frame. Assuming small angles, this model becomes

M [d²φ/dt²; d²δ/dt²] + C v0 [dφ/dt; dδ/dt] + (K0 + K2 v0²) [φ; δ] = [0; T],   (3.7)

where the elements of the 2x2 matrices M, C, K0 and K2 depend on the geometry and the mass distribution of the bicycle. Note that this has a form somewhat similar to that of the spring-mass system introduced in Chapter 2 and the balance system in Example 2.1. Even this more complex model is inaccurate because the interaction between the tire and the road is neglected; taking this into account requires two additional state variables. Again, the uncertainty lemon in Figure 2.15b provides a framework for understanding the validity of the model under these assumptions.

Interesting presentations on the development of the bicycle are given in the books by D. Wilson [202] and Herlihy [98]. The model (3.7) was presented in a paper by Whipple in 1899 [197]. More details on bicycle modeling are given in the paper [17], which has many references.

3.3 Operational Amplifier Circuits

An operational amplifier (op amp) is a modern implementation of Black's feedback amplifier. It is a universal component that is widely used for instrumentation, control and communication. It is also a key element in analog computing.

Schematic diagrams of the operational amplifier are shown in Figure 3.7. The amplifier has one inverting input (v-), one noninverting input (v+) and one output (vout). There are also connections for the supply voltages, e- and e+, and a zero adjustment (offset null).

Figure 3.7: An operational amplifier and two schematic diagrams: (a) the amplifier pin connections on an integrated circuit chip, (b) a schematic with all connections, (c) only the signal connections.

A simple model is obtained by assuming that the input currents i- and i+ are zero and that the output is given by the static relation

vout = sat_(vmin,vmax)(k (v+ - v-)),   (3.8)

where sat denotes the saturation function

sat_(a,b)(x) = a if x < a;  x if a <= x <= b;  b if x > b.   (3.9)

We assume that the gain k is large, in the range of 10^6 to 10^8, and the voltages vmin and vmax satisfy e- <= vmin < vmax <= e+ and hence are in the range of the supply voltages. More accurate models are obtained by replacing the saturation function with a smooth function, as shown in Figure 3.8. For small input signals the amplifier characteristic (3.8) is linear:

vout = k (v+ - v-) = k v.   (3.10)

Since the open loop gain k is very large, the range of input signals where the system is linear is very small.

Figure 3.8: Input/output characteristics of an operational amplifier. The differential input is given by v+ - v-. The output voltage is a linear function of the input in a small range around 0, with saturation at vmin and vmax. In the linear regime the op amp has high gain.

Figure 3.9: Stable amplifier using an op amp. The circuit (a) uses negative feedback around an operational amplifier and has a corresponding block diagram (b). The resistors R1 and R2 determine the gain of the amplifier.

A simple amplifier is obtained by arranging feedback around the basic operational amplifier, as shown in Figure 3.9a. To model the feedback amplifier in the linear range, we assume that the current i0 is zero and that the gain of the amplifier is so large that the voltage v is also zero. It follows from Ohm's law that the currents through resistors R1 and R2 are given by

v1/R1 = -v2/R2,

and hence the closed loop gain of the amplifier is

v2/v1 = -kcl, where kcl = R2/R1.   (3.11)

A more accurate model is obtained by continuing to neglect the current i0 but assuming that the voltage v is small but not negligible. The current balance is then

(v1 - v)/R1 = (v - v2)/R2.   (3.12)

Assuming that the amplifier operates in the linear range and using equation (3.10), the gain of the closed loop system becomes

kcl = -v2/v1 = (R2/R1) · (k R1/(R1 + R2)) / (1 + k R1/(R1 + R2)).   (3.13)

If the open loop gain k of the operational amplifier is large, the closed loop gain kcl is the same as in the simple model given by equation (3.11). Notice that the closed loop gain depends only on the passive components and that variations in k have only a marginal effect on the closed loop gain. For example, if k = 10^6 and R2/R1 = 100, a variation of k by 100% gives only a variation of 0.01% in the closed loop gain. The drastic reduction in sensitivity is a nice illustration of how feedback can be used to make precise systems from uncertain components. In this particular case, feedback is used to trade high gain and low robustness for low gain and high robustness. Equation (3.13) was the formula that inspired Black when he invented the feedback amplifier [35] (see the quote at the beginning of Chapter 12).

It is instructive to develop a block diagram for the feedback amplifier in Figure 3.9a. To do this we will represent the pure amplifier with input v and output v2 as one block. To complete the block diagram, we must describe how v depends on v1 and v2. Solving equation (3.12) for v gives

v = (R2/(R1 + R2)) v1 + (R1/(R1 + R2)) v2 = (R1/(R1 + R2)) ((R2/R1) v1 + v2),

and we obtain the block diagram shown in Figure 3.9b. The diagram clearly shows that the system has feedback and that the gain from v2 to v is R1/(R1 + R2), which can also be read from the circuit diagram in Figure 3.9a. If the loop is stable and the gain of the amplifier is large, it follows that the error e is small, and we find that v2 = -(R2/R1) v1. Notice that the resistor R1 appears in two blocks in the block diagram. This situation is typical in electrical circuits, and it is one reason why block diagrams are not always well suited for some types of physical modeling.

Figure 3.10: Circuit diagram of a PI controller obtained by feedback around an operational amplifier. The capacitor C is used to store charge and represents the integral of the input.

The simple model of the amplifier given by equation (3.10) provides qualitative insight, but it
neglects the fact that the amplifier is a dynamical system. A more realistic model is

dvout/dt = -a vout + b v.   (3.14)

The parameter b, which has dimensions of frequency, is called the gain-bandwidth product of the amplifier. Whether a more complicated model is used depends on the questions to be answered and the required size of the uncertainty lemon. The model (3.14) is still not valid for very high or very low frequencies, since drift causes deviations at low frequencies and there are additional dynamics that appear at frequencies close to b. The model is also not valid for large signals (an upper limit is given by the voltage of the power supply, typically in the range of 5-10 V); neither is it valid for very low signals, because of electrical noise. These effects can be added if needed, but they increase the complexity of the analysis.

The operational amplifier is very versatile, and many different systems can be built by combining it with resistors and capacitors. In fact, any linear system can be implemented by combining operational amplifiers with resistors and capacitors. Exercise 3.5 shows how a second-order oscillator is implemented, and Figure 3.10 shows the circuit diagram for an analog proportional-integral controller. To develop a simple model for the circuit, we assume that the current i0 is zero and that the open loop gain k is so large that the input voltage v is negligible. The current i through the capacitor is i = C dvc/dt, where vc is the voltage across the capacitor. Since the same current goes through the resistor R1, we get

i = v1/R1 = C dvc/dt,

which implies that

vc(t) = (1/C) ∫ i dt = (1/(R1 C)) ∫_0^t v1(τ) dτ.

The output voltage is thus given by

v2(t) = -(R2 i + vc) = -((R2/R1) v1(t) + (1/(R1 C)) ∫_0^t v1(τ) dτ),

which is the input/output relation for a PI controller.

The development of operational amplifiers was pioneered by Philbrick [139, 165], and their usage is described in many textbooks (e.g., [53]). Good information is also available from suppliers [112, 145].

3.4 Computing Systems and Networks

The application of feedback to computing
systems follows the same principles as the control of physical systems, but the types of measurements and control inputs that can be used are somewhat different. Measurements (sensors) are typically related to resource utilization in the computing system or network and can include quantities such as the processor load, memory usage or network bandwidth. Control variables (actuators) typically involve setting limits on the resources available to a process. This might be done by controlling the amount of memory, disk space or time that a process can consume, turning on or off processing, delaying availability of a resource or rejecting incoming requests to a server process. Process modeling for networked computing systems is also challenging, and empirical models based on measurements are often used when a first-principles model is not available.

Web Server Control

Web servers respond to requests from the Internet and provide information in the form of web pages. Modern web servers start multiple processes to respond to requests, with each process assigned to a single source until no further requests are received from that source for a predefined period of time. Processes that are idle become part of a pool that can be used to respond to new requests. To provide a fast response to web requests, it is important that the web server processes do not overload the server's computational capabilities or exhaust its memory. Since other processes may be running on the server, the amount of available processing power and memory is uncertain, and feedback can be used to provide good performance in the presence of this uncertainty.

Figure 3.11: Feedback control of a web server. Connection requests arrive on an input queue, where they are sent to a server process. A finite state machine keeps track of the state of the individual server processes and
responds to requests. A control algorithm can modify the server's operation by controlling parameters that affect its behavior, such as the maximum number of requests that can be serviced at a single time (MaxClients) or the amount of time that a connection can remain idle before it is dropped (KeepAlive).

Figure 3.11 illustrates the use of feedback to modulate the operation of an Apache web server. The web server operates by placing incoming connection requests on a queue and then starting a subprocess to handle requests for each accepted connection. This subprocess responds to requests from a given connection as they come in, alternating between a Busy state and a Wait state. Keeping the subprocess active between requests is known as the persistence of the connection and provides a substantial reduction in latency to requests for multiple pieces of information from a single site. If no requests are received for a sufficiently long period of time, controlled by the KeepAlive parameter, then the connection is dropped and the subprocess enters an Idle state, where it can be assigned another connection. A maximum of MaxClients simultaneous requests will be served, with the remainder remaining on the incoming request queue.

The parameters that control the server represent a trade-off between performance (how quickly requests receive a response) and resource usage (the amount of processing power and memory used by the server). Increasing the MaxClients parameter allows connection requests to be pulled off of the queue more quickly but increases the amount of processing power and memory usage that is required. Increasing the KeepAlive timeout means that individual connections can remain idle for a longer period of time, which decreases the processing load on the machine but increases the size of the queue and hence the amount of time required for a user to initiate a connection. Successful operation of a busy server requires a proper choice of these parameters, often based on trial and error. To
model the dynamics of this system in more detail, we create a discrete-time model with states given by the average processor load xcpu and the percentage memory usage xmem. The inputs to the system are taken as the maximum number of clients umc and the keep-alive time uka. If we assume a linear model around the equilibrium point, the dynamics can be written as

[xcpu[k+1]; xmem[k+1]] = [[A11, A12]; [A21, A22]] [xcpu[k]; xmem[k]] + [[B11, B12]; [B21, B22]] [uka[k]; umc[k]],   (3.15)

where the coefficients of the A and B matrices can be determined based on empirical measurements or detailed modeling of the web server's processing and memory usage. Using system identification, Diao et al. [59, 97] identified the linearized dynamics as

A = [[0.54, -0.11]; [-0.026, 0.63]],  B = [[-85, 4.4]; [-2.5, 2.8]] × 10^-4,

where the system was linearized about the equilibrium point xcpu = 0.58, uka = 11 s, xmem = 0.55, umc = 600.

This model shows the basic characteristics that were described above. Looking first at the B matrix, we see that increasing the KeepAlive timeout (first column of the B matrix) decreases both the processor usage and the memory usage, since there is more persistence in connections and hence the server spends a longer time waiting for a connection to close rather than taking on a new active connection. The MaxClients parameter increases both the processing and memory requirements. Note that the largest effect on the processor load is the KeepAlive timeout. The A matrix tells us how the processor and memory usage evolve in a region of the state space near the equilibrium point. The diagonal terms describe how the individual resources return to equilibrium after a transient increase or decrease. The off-diagonal terms show that there is coupling between the two resources, so that a change in one could cause a later change in the other.

Although this model is very simple, we will see in later examples that it can be used to modify the parameters controlling the server in real time and provide robustness with respect to uncertainties in the load on the machine.
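The identified model above can be checked numerically. The sketch below iterates the dynamics (3.15) in deviation coordinates for a step change in the KeepAlive input (the step size and iteration count are arbitrary choices for illustration) and compares the result with the steady-state gain (I - A)^{-1} B.

```python
# Iterate the identified web-server model x[k+1] = A x[k] + B u[k]
# (deviations from the equilibrium xcpu = 0.58, xmem = 0.55) for a
# +1 s step in KeepAlive, and verify against the static gain.
import numpy as np

A = np.array([[0.54, -0.11],
              [-0.026, 0.63]])
B = np.array([[-85e-4, 4.4e-4],
              [-2.5e-4, 2.8e-4]])

u = np.array([1.0, 0.0])   # +1 s in KeepAlive, MaxClients unchanged
x = np.zeros(2)
for _ in range(200):       # both eigenvalues of A lie inside the unit circle
    x = A @ x + B @ u

x_ss = np.linalg.solve(np.eye(2) - A, B @ u)   # predicted steady state
print("simulated   :", x)
print("steady state:", x_ss)
```

The simulated state converges to the predicted steady state, with the processor-load deviation negative, matching the observation that a longer KeepAlive timeout reduces the processing load.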
Similar types of mechanisms have been used for other types of servers. It is important to remember the assumptions on the model and their role in determining when the model is valid. In particular, since we have chosen to use average quantities over a given sample time, the model will not provide an accurate representation for high-frequency phenomena.

Congestion Control

The Internet was created to obtain a large, highly decentralized, efficient and expandable communication system. The system consists of a large number of interconnected gateways. A message is split into several packets, which are transmitted over different paths in the network, and the packets are rejoined to recover the message at the receiver. An acknowledgment ("ack") message is sent back to the sender when a packet is received. The operation of the system is governed by a simple but powerful decentralized control structure that has evolved over time.

Figure 3.12: Internet congestion control. (a) Source computers send information to routers, which forward the information to other routers that eventually connect to the receiving computer. When a packet is received, an acknowledgment packet is sent back through the routers (not shown). The routers buffer information received from the sources and send the data across the outgoing link. (b) The equilibrium buffer size be for a set of N identical computers sending packets through a single router with drop probability ρ.

The system has two control mechanisms, called protocols: the Transmission Control Protocol (TCP) for end-to-end network communication and the Internet Protocol (IP) for routing packets and for host-to-gateway or gateway-to-gateway communication. The current protocols evolved after some spectacular congestion collapses occurred in the mid 1980s, when throughput unexpectedly could drop by a factor of 1000 [108].
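The acknowledgment-driven control structure described above can be illustrated with a minimal additive-increase/multiplicative-decrease (AIMD) sketch of a congestion window: the window grows slowly while packets are acknowledged and is halved when one is lost. The deterministic loss schedule and all constants here are purely illustrative assumptions, not a model of any real link.

```python
# Toy AIMD congestion window: grow by 1/w per acknowledged packet
# (roughly +1 window per round trip), halve on a loss. Losses occur
# on a fixed schedule purely for illustration.

def aimd(n_packets=10000, loss_every=500, w0=1.0):
    w = w0
    history = []
    for k in range(1, n_packets + 1):
        if k % loss_every == 0:
            w = max(w / 2.0, 1.0)   # multiplicative decrease on loss
        else:
            w += 1.0 / w            # additive increase on acknowledgment
        history.append(w)
    return history

hist = aimd()
print(f"window after {len(hist)} packets: {hist[-1]:.1f}")
```

The resulting trajectory is the characteristic sawtooth: the window ramps up between losses and collapses at each loss, staying bounded rather than growing without limit.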
The control mechanism in TCP is based on conserving the number of packets in the loop from the sender to the receiver and back to the sender. The sending rate is increased exponentially when there is no congestion, and it is dropped to a low level when there is congestion.

To derive an overall model for congestion control, we model three separate elements of the system: the rate at which packets are sent by individual sources (computers), the dynamics of the queues in the links (routers) and the admission control mechanism for the queues. Figure 3.12a is a block diagram of the system.

The current source control mechanism on the Internet is a protocol known as TCP/Reno [137]. This protocol operates by sending packets to a receiver and waiting to receive an acknowledgment from the receiver that the packet has arrived. If no acknowledgment is sent within a certain timeout period, the packet is retransmitted. To avoid waiting for the acknowledgment before sending the next packet, Reno transmits multiple packets up to a fixed window around the latest packet that has been acknowledged. If the window length is chosen properly, packets at the beginning of the window will be acknowledged before the source transmits packets at the end of the window, allowing the computer to continuously stream packets at a high rate.

To determine the size of the window to use, TCP/Reno uses a feedback mechanism in which (roughly speaking) the window size is increased by 1 every time a packet is acknowledged and the window size is cut in half when packets are lost. This mechanism allows a dynamic adjustment of the window size in which each computer acts in a greedy fashion as long as packets are being delivered but backs off quickly when congestion occurs.

A model for the behavior of the source can be developed by describing the dynamics of the window size. Suppose we have N computers and let w_i be the current window size (measured in number of packets) for the ith computer. Let q_i represent the end-to-end probability that a packet will be dropped
someplace between the source and the receiver. We can model the dynamics of the window size by the differential equation

  dw_i/dt = (1 - q_i) r_i(t - τ_i)/w_i - q_i (w_i/2) r_i(t - τ_i),   r_i = w_i/τ_i,   (3.16)

where τ_i is the end-to-end transmission time for a packet to reach its destination and the acknowledgment to be sent back, and r_i is the resulting rate at which packets are cleared from the list of packets that have been received. The first term in the dynamics represents the increase in window size when a packet is received, and the second term represents the decrease in window size when a packet is lost. Notice that r_i is evaluated at time t - τ_i, representing the time required to receive additional acknowledgments.

The link dynamics are controlled by the dynamics of the router queue and the admission control mechanism for the queue. Assume that we have L links in the network and use l to index the individual links. We model the queue in terms of the current number of packets in the router's buffer b_l and assume that the router can contain a maximum of b_l,max packets and transmits packets at a rate c_l, equal to the capacity of the link. The buffer dynamics can then be written as

  db_l/dt = s_l - c_l,   s_l = Σ_{i: l ∈ L_i} r_i(t - τ_l^f),   (3.17)

where L_i is the set of links that are being used by source i, τ_l^f is the time it takes a packet from source i to reach link l and s_l is the total rate at which packets arrive at link l.

The admission control mechanism determines whether a given packet is accepted by a router. Since our model is based on the average quantities in the network and not the individual packets, one simple model is to assume that the probability that a packet is dropped depends on how full the buffer is: p_l = m_l(b_l, b_max). For simplicity, we will assume for now that p_l = ρ_l b_l (see Exercise 3.6 for a more detailed model). The probability that a packet is dropped at a given link can be used to determine the end-to-end probability that a packet is lost in transmission:

  q_i = 1 - Π_{l ∈ L_i} (1 - p_l) ≈ Σ_{l ∈ L_i} p_l(t - τ_l^b),   (3.18)

where τ_l^b is the backward delay from link l to
source i, and the approximation is valid as long as the individual drop probabilities are small. We use the backward delay since this represents the time required for the acknowledgment packet to be received by the source. Together, equations (3.16), (3.17) and (3.18) represent a model of congestion control dynamics.

We can obtain substantial insight by considering a special case in which we have N identical sources and 1 link. In addition, we assume for the moment that the forward and backward time delays can be ignored, in which case the dynamics can be reduced to the form

  dw_i/dt = 1/τ - ρb (2 + w_i²)/(2τ),   db/dt = Σ_{i=1}^N w_i/τ - c,   τ = b/c,   (3.19)

where w_i ∈ ℝ, i = 1, ..., N, are the window sizes for the sources of data, b ∈ ℝ is the current buffer size of the router, ρ controls the rate at which packets are dropped and c is the capacity of the link connecting the router to the computers. The variable τ represents the amount of time required for a packet to be processed by a router, based on the size of the buffer and the capacity of the link. Substituting τ into the equations, we write the state space dynamics as

  dw_i/dt = c/b - ρc (1 + w_i²/2),   db/dt = Σ_{i=1}^N c w_i/b - c.   (3.20)

More sophisticated models can be found in [101, 137].

The nominal operating point for the system can be found by setting dw_i/dt = db/dt = 0:

  0 = c/b - ρc (1 + w_i²/2),   0 = Σ_{i=1}^N c w_i/b - c.

Exploiting the fact that all of the source dynamics are identical, it follows that all of the w_i should be the same, and it can be shown that there is a unique equilibrium satisfying the equations

  w_i,e = b_e/N = c τ_e/N,   (1/(2ρ²N²)) (ρ b_e)³ + ρ b_e - 1 = 0.   (3.21)

The solution of the second equation is a bit messy but can easily be determined numerically. A plot of its solution as a function of 2ρ²N² is shown in Figure 3.12b. We also note that at equilibrium we have the following additional equalities:

  τ_e = b_e/c = N w_e/c,   q_e = N p_e = N ρ b_e,   r_e = w_e/τ_e.   (3.22)

Figure 3.13 shows a simulation of 60 sources communicating across a single link, with 20 sources dropping out at t = 500 ms and the remaining sources increasing their rates (window sizes) to compensate. Note
that the buffer size and window sizes automatically adjust to match the capacity of the link.

A comprehensive treatment of computer networks is given in the textbook by Tanenbaum [189]. A good presentation of the ideas behind the control principles for the Internet is given by one of its designers, Van Jacobson, in [108]. F. Kelly [120] presents an early effort on the analysis of the system. The book by Hellerstein et al. [97] gives many examples of the use of feedback in computer systems.

Figure 3.13: Internet congestion control for N identical sources across a single link. As shown on the left, multiple sources attempt to communicate through a router across a single link. An ack packet sent by the receiver acknowledges that the message was received; otherwise the message packet is resent and the sending rate is slowed down at the source. The simulation on the right is for 60 sources starting at random rates, with 20 sources dropping out at t = 500 ms. The buffer size is shown at the top and the individual source rates for 6 of the sources are shown at the bottom.

3.5 Atomic Force Microscopy

The 1986 Nobel Prize in Physics was shared by Gerd Binnig and Heinrich Rohrer for their design of the scanning tunneling microscope. The idea of the instrument is to bring an atomically sharp tip so close to a conducting surface that tunneling occurs. An image is obtained by traversing the tip across the sample and measuring the tunneling current as a function of tip position. This invention has stimulated the development of a family of instruments that permit visualization of surface structure at the nanometer scale, including the atomic force microscope (AFM), where a sample is probed by a tip on a cantilever. An AFM can operate in two modes. In tapping mode the cantilever is vibrated, and the amplitude of vibration is controlled by feedback. In contact mode the
cantilever is in contact with the sample, and its bending is controlled by feedback. In both cases control is actuated by a piezo element that controls the vertical position of the cantilever base (or the sample). The control system has a direct influence on picture quality and scanning rate.

A schematic picture of an atomic force microscope is shown in Figure 3.14a. A microcantilever with a tip having a radius of the order of 10 nm is placed close to the sample. The tip can be moved vertically and horizontally using a piezoelectric scanner. It is clamped to the sample surface by attractive van der Waals forces and repulsive Pauli forces. The cantilever tilt depends on the topography of the surface and the position of the cantilever base, which is controlled by the piezo element. The tilt is measured by sensing the deflection of the laser beam using a photodiode. The signal from the photodiode is amplified and sent to a controller that drives the amplifier for the vertical position of the cantilever. By controlling the piezo element so that the deflection of the cantilever is constant, the signal driving the vertical deflection of the piezo element is a measure of the atomic forces between the cantilever tip and the atoms of the sample. An image of the surface is obtained by scanning the cantilever along the sample. The resolution makes it possible to see the structure of the sample on the atomic scale, as illustrated in Figure 3.14b, which shows an AFM image of DNA.

Figure 3.14: Atomic force microscope. (a) A schematic diagram of an atomic force microscope, consisting of a piezo drive that scans the sample under the AFM tip. A laser reflects off of the cantilever and is used to measure the deflection of the tip through a feedback controller. (b) An AFM image of strands of DNA. (Image courtesy Veeco Instruments.)

The
horizontal motion of an AFM is typically modeled as a spring-mass system with low damping. The vertical motion is more complicated. To model the system, we start with the block diagram shown in Figure 3.15. Signals that are easily accessible are the input voltage u to the power amplifier that drives the piezo element, the voltage v applied to the piezo element and the output voltage y of the signal amplifier for the photodiode. The controller is a PI controller implemented by a computer, which is connected to the system by analog-to-digital (A/D) and digital-to-analog (D/A) converters. The deflection of the cantilever ϕ is also shown in the figure. The desired reference value for the deflection is an input to the computer.

Figure 3.15: Block diagram of the system for vertical positioning of the cantilever for an atomic force microscope in contact mode. The control system attempts to keep the cantilever deflection equal to its reference value. Cantilever deflection is measured, amplified and converted to a digital signal, then compared with its reference value. A correcting signal is generated by the computer, converted to analog form, amplified and sent to the piezo element.

Figure 3.16: Modeling of an atomic force microscope. (a) A measured step response. The top curve shows the voltage u applied to the drive amplifier (50 mV/div), the middle curve is the output Vp of the power amplifier (500 mV/div) and the bottom curve is the output y of the signal amplifier (500 mV/div). The time scale is 25 μs/div. (Data have been supplied by Georg Schitter.) (b) A simple mechanical model for the vertical positioner and the piezo crystal.

There are several different configurations that have different dynamics. Here we will discuss a high-performance system from [176] where the cantilever base is
positioned vertically using a piezo stack. We begin the modeling with a simple experiment on the system. Figure 3.16a shows a step response of a scanner from the input voltage u to the power amplifier to the output voltage y of the signal amplifier for the photodiode. This experiment captures the dynamics of the chain of blocks from u to y in the block diagram in Figure 3.15. Figure 3.16a shows that the system responds quickly but that there is a poorly damped oscillatory mode with a period of about 35 μs. A primary task of the modeling is to understand the origin of the oscillatory behavior. To do so we will explore the system in more detail.

The natural frequency of the clamped cantilever is typically several hundred kilohertz, which is much higher than the observed oscillation of about 30 kHz. As a first approximation we will model it as a static system. Since the deflections are small, we can assume that the bending ϕ of the cantilever is proportional to the difference in height between the cantilever tip at the probe and the piezo scanner. A more accurate model can be obtained by modeling the cantilever as a spring-mass system of the type discussed in Chapter 2.

Figure 3.16a also shows that the response of the power amplifier is fast. The photodiode and the signal amplifier also have fast responses and can thus be modeled as static systems. The remaining block is a piezo system with suspension. A schematic mechanical representation of the vertical motion of the scanner is shown in Figure 3.16b. We will model the system as two masses separated by an ideal piezo element. The mass m1 is half of the piezo system, and the mass m2 is the other half of the piezo system plus the mass of the support.

A simple model is obtained by assuming that the piezo crystal generates a force F between the masses and that there is a damping c2 in the spring. Let the positions of the center of the masses be z1 and z2. A momentum balance gives the following model for the system:
  m1 d²z1/dt² = F,   m2 d²z2/dt² = -c2 dz2/dt - k2 z2 - F.

Let the elongation of the piezo element l = z1 - z2 be the control variable and the height z1 of the cantilever base be the output. Eliminating the variable F in the equations above and substituting z1 - l for z2 gives the model

  (m1 + m2) d²z1/dt² + c2 dz1/dt + k2 z1 = m2 d²l/dt² + c2 dl/dt + k2 l.   (3.23)

Summarizing, we find that a simple model of the system is obtained by modeling the piezo by (3.23) and all the other blocks by static models. Introducing the linear equations l = k3 u and y = k4 z1, we now have a complete model relating the output y to the control signal u. A more accurate model can be obtained by introducing the dynamics of the cantilever and the power amplifier. As in the previous examples, the concept of the uncertainty lemon in Figure 2.15b provides a framework for describing the uncertainty: the model will be accurate up to the frequencies of the fastest modeled modes and over a range of motion in which linearized stiffness models can be used.

The experimental results in Figure 3.16a can be explained qualitatively as follows. When a voltage is applied to the piezo, it expands by l0; the mass m1 moves up and the mass m2 moves down instantaneously. The system settles after a poorly damped oscillation.

It is highly desirable to design a control system for the vertical motion so that it responds quickly with little oscillation. The instrument designer has several choices: to accept the oscillation and have a slow response time, to design a control system that can damp the oscillations, or to redesign the mechanics to give resonances of higher frequency. The last two alternatives give a faster response and faster imaging.

Since the dynamic behavior of the system changes with the properties of the sample, it is necessary to tune the feedback loop. In simple systems this is currently done manually by adjusting parameters of a PI controller. There are interesting possibilities for making AFM systems easier to use by introducing automatic tuning and adaptation. The book by Sarid [173]
gives a broad coverage of atomic force microscopes. The interaction of atoms close to surfaces is fundamental to solid state physics; see Kittel [125]. The model discussed in this section is based on Schitter [175].

3.6 Drug Administration

The phrase "Take two pills three times a day" is a recommendation with which we are all familiar. Behind this recommendation is a solution of an open loop control problem. The key issue is to make sure that the concentration of a medicine in a part of the body is sufficiently high to be effective but not so high that it will cause undesirable side effects. The control action is quantized (take two pills) and sampled (every 8 hours). The prescriptions are based on simple models captured in empirical tables, and the dose is based on the age and weight of the patient.

Figure 3.17: Abstraction used to compartmentalize the body for the purpose of describing drug distribution (based on Teorell [190]). The body is abstracted by a number of compartments with perfect mixing, and the complex transport processes are approximated by assuming that the flow is proportional to the concentration differences in the compartments. The constants ki parameterize the rates of flow between different compartments.

Drug administration is a control problem. To solve it we must understand how a drug spreads in the body after it is administered. This topic, called pharmacokinetics, is now a discipline of its own, and the models used are called compartment models. They go back to the 1920s, when Widmark modeled the propagation of alcohol in the body [199]. Compartment models are now important for the screening of all drugs used by humans. The schematic diagram in Figure 3.17 illustrates the idea of a compartment model. The body is viewed as a number of compartments, like blood plasma, kidney, liver and tissues, that are separated by membranes. It is assumed that there is
perfect mixing, so that the drug concentration is constant in each compartment. The complex transport processes are approximated by assuming that the flow rates between the compartments are proportional to the concentration differences in the compartments.

To describe the effect of a drug it is necessary to know both its concentration and how it influences the body. The relation between concentration c and its effect e is typically nonlinear. A simple model is

  e = (c/(c0 + c)) e_max.   (3.24)

The effect is linear for low concentrations, and it saturates at high concentrations. The relation can also be dynamic, and it is then called pharmacodynamics.

Compartment Models

The simplest dynamic model for drug administration is obtained by assuming that the drug is evenly distributed in a single compartment after it has been administered and that the drug is removed at a rate proportional to the concentration. The compartments behave like stirred tanks with perfect mixing. Let c be the concentration, V the volume and q the outflow rate. Converting the description of the system into differential equations gives the model

  V dc/dt = -qc,   c ≥ 0.   (3.25)

This equation has the solution c(t) = c0 e^(-qt/V) = c0 e^(-kt), which shows that the concentration decays exponentially with the time constant T = V/q after an injection. The input is introduced implicitly as an initial condition in the model (3.25). More generally, the way the input enters the model depends on how the drug is administered. For example, the input can be represented as a mass flow into the compartment where the drug is injected. A pill that is dissolved can also be interpreted as an input in terms of a mass flow rate.

The model (3.25) is called a one-compartment model or a single-pool model. The parameter q/V is called the elimination rate constant. This simple model is often used to model the concentration in the blood plasma. By measuring the concentration at a few times, the initial concentration can be obtained by extrapolation. If the total amount of
injected substance is known, the volume V can then be determined as V = m/c0; this volume is called the apparent volume of distribution. This volume is larger than the real volume if the concentration in the plasma is lower than in other parts of the body. The model (3.25) is very simple, and there are large individual variations in the parameters. The parameters V and q are often normalized by dividing by the weight of the person. Typical parameters for aspirin are V = 0.2 L/kg and q = 0.01 L/(h kg). These numbers can be compared with a blood volume of 0.07 L/kg, a plasma volume of 0.05 L/kg, an intracellular fluid volume of 0.4 L/kg and an outflow of 0.0015 L/(min kg).

The simple one-compartment model captures the gross behavior of drug distribution, but it is based on many simplifications. Improved models can be obtained by considering the body as composed of several compartments. Examples of such systems are shown in Figure 3.18, where the compartments are represented as circles and the flows by arrows.

Modeling will be illustrated using the two-compartment model in Figure 3.18a. We assume that there is perfect mixing in each compartment and that the transport between the compartments is driven by concentration differences. We further assume that a drug with concentration c0 is injected in compartment 1 at a volume flow rate of u and that the concentration in compartment 2 is the output. Let c1 and c2 be the concentrations of the drug in the compartments, and let V1 and V2 be the volumes of the compartments. The mass balances for the compartments are

  V1 dc1/dt = q(c2 - c1) - q0 c1 + c0 u,   c1 ≥ 0,
  V2 dc2/dt = q(c1 - c2),   c2 ≥ 0,   (3.26)
  y = c2.

Figure 3.18: Schematic diagrams of compartment models. (a) A simple two-compartment model. Each compartment is labeled by its volume, and arrows indicate the flow of chemical into, out of and between compartments. (b) A system with six
compartments used to study the metabolism of thyroid hormone [85]. The notation k_ij denotes the transport from compartment j to compartment i.

Introducing the variables k0 = q0/V1, k1 = q/V1, k2 = q/V2 and b0 = c0/V1 and using matrix notation, the model can be written as

  dc/dt = [-k0 - k1  k1; k2  -k2] c + [b0; 0] u,   y = [0 1] c.   (3.27)

Comparing this model with its graphical representation in Figure 3.18a, we find that the mathematical representation (3.27) can be written by inspection.

It should also be emphasized that simple compartment models such as the one in equation (3.27) have a limited range of validity. Low-frequency limits exist because the human body changes with time, and since the compartment model uses average concentrations, they will not accurately represent rapid changes. There are also nonlinear effects that influence transportation between the compartments.

Compartment models are widely used in medicine, engineering and environmental science. An interesting property of these systems is that variables like concentration and mass are always positive. An essential difficulty in compartment modeling is deciding how to divide a complex system into compartments. Compartment models can also be nonlinear, as illustrated in the next section.

Insulin-Glucose Dynamics

It is essential that the blood glucose concentration in the body is kept within a narrow range (0.7-1.1 g/L). Glucose concentration is influenced by many factors like food intake, digestion and exercise. A schematic picture of the relevant parts of the body is shown in Figures 3.19a and b.

There is a sophisticated mechanism that regulates glucose concentration. Glucose concentration is maintained by the pancreas, which secretes the hormones insulin and glucagon. Glucagon is released into the bloodstream when the glucose
level is low. It acts on cells in the liver that release glucose. Insulin is secreted when the glucose level is high, and the glucose level is lowered by causing the liver and other cells to take up more glucose. In diseases like juvenile diabetes the pancreas is unable to produce insulin, and the patient must inject insulin into the body to maintain a proper glucose level.

Figure 3.19: Insulin-glucose dynamics. (a) Sketch of body parts involved in the control of glucose. (b) Schematic diagram of the system. (c) Responses of insulin and glucose when glucose is injected intravenously. (From [164].)

The mechanisms that regulate glucose and insulin are complicated; dynamics with time scales that range from seconds to hours have been observed. Models of different complexity have been developed. The models are typically tested with data from experiments where glucose is injected intravenously and insulin and glucose concentrations are measured at regular time intervals.

A relatively simple model called the minimal model was developed by Bergman and coworkers [31]. This model uses two compartments, one representing the concentration of glucose in the bloodstream and the other representing the concentration of insulin in the interstitial fluid. Insulin in the bloodstream is considered an input. The reaction of glucose to insulin can be modeled by the equations

  dx1/dt = -(p1 + x2) x1 + p1 g_e,   dx2/dt = -p2 x2 + p3 (u - i_e),   (3.28)

where g_e and i_e represent the equilibrium values of glucose and insulin, x1 is the concentration of glucose and x2 is proportional to the concentration of interstitial insulin. Notice the presence of the term x2 x1 in the first equation. Also notice that the model does not capture the complete feedback loop because it does not describe how the pancreas reacts to the glucose. Figure 3.19c shows a fit of the model to a test on a normal person where glucose was injected intravenously at time t = 0. The glucose concentration rises rapidly, and the pancreas responds with a rapid
spike-like injection of insulin. The glucose and insulin levels then gradually approach the equilibrium values.

Models of the type in equation (3.28), and more complicated models having many compartments, have been developed and fitted to experimental data. A difficulty in modeling is that there are significant variations in model parameters over time and for different patients. For example, the parameter p1 in equation (3.28) has been reported to vary by an order of magnitude for healthy individuals. The models have been used for diagnosis and to develop schemes for the treatment of persons with diseases. Attempts to develop a fully automatic artificial pancreas have been hampered by the lack of reliable sensors.

The papers by Widmark and Tandberg [199] and Teorell [190] are classics in pharmacokinetics, which is now an established discipline with many textbooks [62, 109, 84]. Because of its medical importance, pharmacokinetics is now an essential component of drug development. The book by Riggs [168] is a good source for the modeling of physiological systems, and a more mathematical treatment is given in [119]. Compartment models are discussed in [85]. The problem of determining rate coefficients from experimental data is discussed in [26] and [85]. There are many publications on the insulin-glucose model. The minimal model is discussed in [52, 31], and more recent references are [143, 72].

3.7 Population Dynamics

Population growth is a complex dynamic process that involves the interaction of one or more species with their environment and the larger ecosystem. The dynamics of population groups are interesting and important in many different areas of social and environmental policy. There are examples where new species have been introduced into new habitats, sometimes with disastrous results. There have also been attempts to control population growth, both through incentives and through legislation. In this section we describe some of the models that can be used to understand how populations evolve with
time and as a function of their environments.

Logistic Growth Model

Let x be the population of a species at time t. A simple model is to assume that the birth rates and mortality rates are proportional to the total population. This gives the linear model

  dx/dt = bx - dx = (b - d)x = rx,   x ≥ 0,   (3.29)

where birth rate b and mortality rate d are parameters. The model gives an exponential increase if b > d or an exponential decrease if b < d. A more realistic model is to assume that the birth rate decreases when the population is large. The following modification of the model (3.29) has this property:

  dx/dt = rx(1 - x/k),   x ≥ 0,   (3.30)

where k is the carrying capacity of the environment. The model (3.30) is called the logistic growth model.

Predator-Prey Models

A more sophisticated model of population dynamics includes the effects of competing populations, where one species may feed on another. This situation, referred to as the predator-prey problem, was introduced in Example 2.3, where we developed a discrete-time model that captured some of the features of historical records of lynx and hare populations. In this section we replace the difference equation model used there with a more sophisticated differential equation model.

Let H(t) represent the number of hares (prey) and let L(t) represent the number of lynxes (predator). The dynamics of the system are modeled as

  dH/dt = rH(1 - H/k) - aHL/(c + H),   H ≥ 0,
  dL/dt = b aHL/(c + H) - dL,   L ≥ 0.

In the first equation, r represents the growth rate of the hares, k represents the maximum population of the hares (in the absence of lynxes), a represents the interaction term that describes how the hares are diminished as a function of the lynx population and c controls the prey consumption rate for low hare population. In the second equation, b represents the growth coefficient of the lynxes and d represents the mortality rate of the lynxes. Note that the hare dynamics include a term that resembles the logistic growth model (3.30).

Of particular interest are the values at which the population values remain constant, called equilibrium
points. The equilibrium points for this system can be determined by setting the right-hand side of the above equations to zero. Letting H_e and L_e represent the equilibrium state, from the second equation we have

  L_e = 0   or   H_e = cd/(ab - d).   (3.32)

Substituting this into the first equation, we have that for L_e = 0 either H_e = 0 or H_e = k. For L_e ≠ 0 we obtain

  L_e = (r H_e (c + H_e))/(a H_e) (1 - H_e/k) = (bcr (abk - cd - dk))/((ab - d)² k).   (3.33)

Thus we have three possible equilibrium points x_e = (H_e, L_e):

  x_e = (0, 0),   x_e = (k, 0),   x_e = (H_e*, L_e*),

where H_e* and L_e* are given in equations (3.32) and (3.33). Note that the equilibrium populations may be negative for some parameter values, corresponding to a nonachievable equilibrium point.

Figure 3.20: Simulation of the predator-prey system. The figure on the left shows a simulation of the two populations as a function of time. The figure on the right shows the populations plotted against each other, starting from different values of the population. The oscillation seen in both figures is an example of a limit cycle. The parameter values used for the simulations are a = 3.2, b = 0.6, c = 50, d = 0.56, k = 125 and r = 1.6.

Figure 3.20 shows a simulation of the dynamics starting from a set of population values near the nonzero equilibrium values. We see that for this choice of parameters the simulation predicts an oscillatory population count for each species, reminiscent of the data shown in Figure 2.6.

Volume I of the two-volume set by J. D. Murray [154] gives a broad coverage of population dynamics.

Exercises

3.1 (Cruise control) Consider the cruise control example described in Section 3.1. Build a simulation that recreates the response to a hill shown in Figure 3.3b and show the effects of increasing and decreasing the mass of the car by 25%. Redesign the controller (using trial and error is fine) so that it returns to within 10% of the desired speed within 3 s of encountering the beginning of the hill.

3.2 (Bicycle dynamics) Show that the dynamics of a
bicycle frame given by equation (3.5) can be written in state space form as

  d/dt [x1; x2] = [0  mgh/J; 1  0] [x1; x2] + [1; 0] u,   y = [D v0/(b J)  m v0² h/(b J)] x,   (3.34)

where the input u is the torque applied to the handlebars and the output y is the tilt angle ϕ. What do the states x1 and x2 represent?

3.3 (Bicycle steering) Combine the bicycle model given by equation (3.5) and the model for steering kinematics in Example 2.8 to obtain a model that describes the path of the center of mass of the bicycle.

3.4 (Operational amplifier circuit) Consider the op amp circuit shown below. Show that the dynamics can be written in state space form as

  dx/dt = [-1/(R1 C1) - 1/(Ra C1)   0; -Rb/(Ra R2 C2)   -1/(R2 C2)] x + [1/(R1 C1); 0] u,   y = [0 1] x,

where u = v1 and y = v3. (Hint: Use v2 and v3 as your state variables.)

3.5 (Operational amplifier oscillator) The op amp circuit shown below is an implementation of an oscillator. Show that the dynamics can be written in state space form as

  dx/dt = [0   R4/(R1 R3 C1); -1/(R1 C1)   0] x,

where the state variables represent the voltages across the capacitors, x1 = v1 and x2 = v2.

3.6 (Congestion control using RED [138]) A number of improvements can be made to the model for Internet congestion control presented in Section 3.4. To ensure that the router's buffer size remains positive, we can modify the buffer dynamics to satisfy

  db_l/dt = s_l - c_l if b_l > 0,   db_l/dt = sat_(0,∞)(s_l - c_l) if b_l = 0.

In addition, we can model the drop probability of a packet based on how close we are to the buffer limits, a mechanism known as random early detection (RED):

  p_l = m_l(a_l) = 0 if a_l ≤ b_l^lower,
  p_l = m_l(a_l) = ρ_l (a_l - b_l^lower) if b_l^lower < a_l ≤ b_l^upper,
  p_l = m_l(a_l) = η_l a_l + 1 - 2 η_l b_l^upper if b_l^upper < a_l ≤ 2 b_l^upper,
  p_l = m_l(a_l) = 1 if a_l > 2 b_l^upper,
  da_l/dt = -α_l c_l (a_l - b_l),

where α_l, b_l^lower, b_l^upper, ρ_l and η_l are parameters for the RED protocol. Using the model above, write a simulation for the system and find a set of parameter values for which there is a stable equilibrium point and a set for which the system exhibits oscillatory solutions. The following sets of parameters should be explored:

  N = 20, 30, 60,   b_l^lower = 40 pkts,   ρ_l = 0.1,
  c = 8, 9, 15 pkts/ms,   b_l^upper = 540 pkts,   α_l = 10^-4,
  τ = 55, 60,
100 ms.

3.7 (Atomic force microscope with piezo tube). A schematic diagram of an AFM, where the vertical scanner is a piezo tube with preloading, is shown below. Show that the dynamics can be written as

(m1 + m2) d²z1/dt² + (c1 + c2) dz1/dt + (k1 + k2) z1 = m2 d²l/dt² + c2 dl/dt + k2 l.

Are there parameter values that make the dynamics particularly simple?

3.8 (Drug administration). The metabolism of alcohol in the body can be modeled by the nonlinear compartment model

Vb dcb/dt = q(cl − cb) + qiv,
Vl dcl/dt = q(cb − cl) − qmax cl/(c0 + cl) + qgi,

where Vb = 48 L and Vl = 0.6 L are the apparent volumes of distribution of body water and liver water, cb and cl are the concentrations of alcohol in the compartments, qiv and qgi are the injection rates for intravenous and gastrointestinal intake, q = 1.5 L/min is the total hepatic blood flow, qmax = 2.75 mmol/min and c0 = 0.1 mmol/L. Simulate the system and compute the concentration in the blood for oral and intravenous doses of 12 g and 40 g of alcohol.

3.9 (Population dynamics). Consider the model for logistic growth given by equation (3.30). Show that the maximum growth rate occurs when the size of the population is half of the steady-state value.

3.10 (Fisheries management). The dynamics of a commercial fishery can be described by the following simple model:

dx/dt = f(x) − h(x, u),   y = b h(x, u) − cu,

where x is the total biomass, f(x) = rx(1 − x/k) is the growth rate and h(x, u) = axu is the harvesting rate. The output y is the rate of revenue, and the parameters a, b and c are constants representing the price of fish and the cost of fishing. Show that there is an equilibrium where the steady-state biomass is xe = c/(ab). Compare with the situation when the biomass is regulated to a constant value, and find the maximum sustainable return in that case.

Chapter Four
Dynamic Behavior

It Don't Mean a Thing If It Ain't Got That Swing.
Duke Ellington (1899–1974)

In this chapter we present a broad discussion of the behavior of dynamical systems, focused on systems modeled by nonlinear differential equations. This allows us to
consider equilibrium points, stability, limit cycles and other key concepts in understanding dynamic behavior. We also introduce some methods for analyzing the global behavior of solutions.

4.1 Solving Differential Equations

In the last two chapters we saw that one of the methods of modeling dynamical systems is through the use of ordinary differential equations (ODEs). A state space, input/output system has the form

dx/dt = f(x, u),   y = h(x, u),   (4.1)

where x = (x1, ..., xn) ∈ Rⁿ is the state, u ∈ Rᵖ is the input and y ∈ R^q is the output. The smooth maps f : Rⁿ × Rᵖ → Rⁿ and h : Rⁿ × Rᵖ → R^q represent the dynamics and measurements for the system. In general, they can be nonlinear functions of their arguments. We will sometimes focus on single-input, single-output (SISO) systems, for which p = q = 1.

We begin by investigating systems in which the input has been set to a function of the state, u = α(x). This is one of the simplest types of feedback, in which the system regulates its own behavior. The differential equations in this case become

dx/dt = f(x, α(x)) =: F(x).   (4.2)

To understand the dynamic behavior of this system, we need to analyze the features of the solutions of equation (4.2). While in some simple situations we can write down the solutions in analytical form, often we must rely on computational approaches. We begin by describing the class of solutions for this problem.

We say that x(t) is a solution of the differential equation (4.2) on the time interval from t0 ∈ R to tf ∈ R if

dx(t)/dt = F(x(t)) for all t0 < t < tf.

A given differential equation may have many solutions. We will most often be interested in the initial value problem, where x(t) is prescribed at a given time t0 ∈ R and we wish to find a solution valid for all future time t > t0. We say that x(t) is a solution of the differential equation (4.2) with initial value x0 ∈ Rⁿ at t0 ∈ R if

x(t0) = x0 and dx(t)/dt = F(x(t)) for all t0 < t < tf.

For most differential equations we will encounter, there is a unique solution that is defined for t0 < t < tf. The solution may be defined for all time t > t0, in which case we take tf = ∞. Because we will primarily be
interested in solutions of the initial value problem for ODEs, we will usually refer to this simply as the solution of an ODE.

We will typically assume that t0 is equal to 0. In the case when F is independent of time (as in equation (4.2)), we can do so without loss of generality by choosing a new independent (time) variable τ = t − t0 (Exercise 4.1).

Example 4.1 (Damped oscillator). Consider a damped linear oscillator with dynamics of the form

q̈ + 2ζω0 q̇ + ω0² q = 0,

where q is the displacement of the oscillator from its rest position. These dynamics are equivalent to those of a spring–mass system, as shown in Exercise 2.6. We assume that ζ < 1, corresponding to a lightly damped system (the reason for this particular choice will become clear later). We can rewrite this in state space form by setting x1 = q and x2 = q̇/ω0, giving

dx1/dt = ω0 x2,   dx2/dt = −ω0 x1 − 2ζω0 x2.

In vector form, the right-hand side can be written as

F(x) = ( ω0 x2, −ω0 x1 − 2ζω0 x2 ).

The solution to the initial value problem can be written in a number of different ways and will be explored in more detail in Chapter 5. Here we simply assert that the solution can be written as

x1(t) = e^(−ζω0 t) ( x1(0) cos ωd t + (ω0/ωd)(ζ x1(0) + x2(0)) sin ωd t ),
x2(t) = e^(−ζω0 t) ( x2(0) cos ωd t − (ω0/ωd)(x1(0) + ζ x2(0)) sin ωd t ),

where x0 = (x1(0), x2(0)) is the initial condition and ωd = ω0 √(1 − ζ²). This solution can be verified by substituting it into the differential equation. We see that the solution is explicitly dependent on the initial condition, and it can be shown that this solution is unique. A plot of the initial condition response is shown in Figure 4.1.

Figure 4.1. Response of the damped oscillator to the initial condition x0 = (1, 0). The solution is unique for the given initial conditions and consists of an oscillatory solution for each state, with an exponentially decaying magnitude.

We note that this form of the solution holds only for 0 < ζ < 1, corresponding to an underdamped oscillator.

Without imposing some mathematical conditions on the function F, the differential equation (4.2) may not have a solution for all t, and there is no
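The asserted solution can be checked numerically by comparing it against a direct integration of the state equations. A minimal sketch, assuming example values ω0 = 2 and ζ = 0.1 and a crude explicit-Euler integrator (not the book's method, just the simplest possible one):

```python
import math

w0, zeta = 2.0, 0.1          # assumed natural frequency and damping ratio
wd = w0 * math.sqrt(1 - zeta ** 2)
x10, x20 = 1.0, 0.0          # initial condition x0

def analytic(t):
    """Closed-form damped-oscillator solution quoted in Example 4.1."""
    e = math.exp(-zeta * w0 * t)
    x1 = e * (x10 * math.cos(wd * t) + (w0 / wd) * (zeta * x10 + x20) * math.sin(wd * t))
    x2 = e * (x20 * math.cos(wd * t) - (w0 / wd) * (x10 + zeta * x20) * math.sin(wd * t))
    return x1, x2

# Explicit Euler on dx1/dt = w0*x2, dx2/dt = -w0*x1 - 2*zeta*w0*x2.
x1, x2, dt = x10, x20, 1e-5
for _ in range(100000):      # integrate to t = 1
    x1, x2 = x1 + dt * w0 * x2, x2 + dt * (-w0 * x1 - 2 * zeta * w0 * x2)
```

At t = 1 the two answers agree to roughly the Euler truncation error, which is the expected outcome if the closed form is correct.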
guarantee that the solution is unique. We illustrate these possibilities with two examples.

Example 4.2 (Finite escape time). Let x ∈ R and consider the differential equation

dx/dt = x²   (4.3)

with the initial condition x(0) = 1. By differentiation we can verify that the function

x(t) = 1/(1 − t)

satisfies the differential equation and that it also satisfies the initial condition. A graph of the solution is given in Figure 4.2a; notice that the solution goes to infinity as t goes to 1. We say that this system has finite escape time. Thus the solution exists only in the time interval 0 ≤ t < 1.

Example 4.3 (Nonunique solution). Let x ∈ R and consider the differential equation

dx/dt = 2√x   (4.4)

with initial condition x(0) = 0. We can show that the function

x(t) = { 0,  0 ≤ t ≤ a;   (t − a)²,  t > a }

satisfies the differential equation for all values of the parameter a > 0. To see this,

Figure 4.2. Existence and uniqueness of solutions. Equation (4.3) has a solution only for time t < 1, at which point the solution goes to ∞, as shown in (a). Equation (4.4) is an example of a system with many solutions, as shown in (b). For each value of a we get a different solution starting from the same initial condition.

we differentiate x(t) to obtain

dx/dt = { 0,  0 ≤ t ≤ a;   2(t − a),  t > a },

and hence ẋ = 2√x for all t ≥ 0 with x(0) = 0. A graph of some of the possible solutions is given in Figure 4.2b. Notice that in this case there are many solutions to the differential equation.

These simple examples show that there may be difficulties even with simple differential equations. Existence and uniqueness can be guaranteed by requiring that the function F have the property that, for some fixed c ∈ R,

‖F(x) − F(y)‖ ≤ c ‖x − y‖ for all x, y,

which is called Lipschitz continuity. A sufficient condition for a function to be Lipschitz is that the Jacobian ∂F/∂x is uniformly bounded for all x. The difficulty in Example 4.2 is that the derivative ∂F/∂x becomes large for large x, and the difficulty in Example 4.3 is that the derivative ∂F/∂x = 1/√x is infinite at the origin.

4.2 Qualitative Analysis

The qualitative behavior of
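Finite escape time is easy to observe numerically: integrating Example 4.2 with a simple Euler scheme (our choice, not the book's), the state crosses any threshold shortly before t = 1, consistent with x(t) = 1/(1 − t):

```python
def escape_time(dt=1e-5, xmax=1e6):
    """Euler-integrate dx/dt = x**2 from x(0) = 1 until x exceeds xmax.

    Returns the time at which the threshold is crossed; for small dt this
    approaches the analytic blow-up time t = 1.
    """
    x, t = 1.0, 0.0
    while x < xmax:
        x += dt * x * x
        t += dt
    return t

t_escape = escape_time()   # close to 1 for any large xmax
```

The threshold `xmax` is arbitrary: because the true solution reaches +∞ at t = 1, the crossing time is insensitive to it.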
nonlinear systems is important in understanding some of the key concepts of stability in nonlinear dynamics. We will focus on an important class of systems known as planar dynamical systems. These systems have two state variables x ∈ R², allowing their solutions to be plotted in the (x1, x2) plane. The basic concepts that we describe hold more generally and can be used to understand dynamical behavior in higher dimensions.

Phase Portraits

A convenient way to understand the behavior of dynamical systems with state x ∈ R² is to plot the phase portrait of the system, briefly introduced in Chapter 2.

Figure 4.3. Phase portraits. (a) This plot shows the vector field for a planar dynamical system. Each arrow shows the velocity at that point in the state space. (b) This plot includes the solutions (sometimes called streamlines) from different initial conditions, with the vector field superimposed.

We start by introducing the concept of a vector field. For a system of ordinary differential equations

dx/dt = F(x),

the right-hand side of the differential equation defines at every x ∈ Rⁿ a velocity F(x) ∈ Rⁿ. This velocity tells us how x changes and can be represented as a vector F(x) ∈ Rⁿ. For planar dynamical systems, each state corresponds to a point in the plane and F(x) is a vector representing the velocity of that state. We can plot these vectors on a grid of points in the plane and obtain a visual image of the dynamics of the system, as shown in Figure 4.3a. The points where the velocities are zero are of particular interest, since they define stationary points of the flow: if we start at such a state, we stay at that state.

A phase portrait is constructed by plotting the flow of the vector field corresponding to the planar dynamical system. That is, for a set of initial conditions, we plot the solution of the differential equation in the plane R². This corresponds to following the arrows at each point in the
phase plane and drawing the resulting trajectory. By plotting the solutions for several different initial conditions, we obtain a phase portrait, as shown in Figure 4.3b. Phase portraits are also sometimes called phase plane diagrams.

Phase portraits give insight into the dynamics of the system by showing the solutions plotted in the two-dimensional state space of the system. For example, we can see whether all trajectories tend to a single point as time increases or whether there are more complicated behaviors. In the example in Figure 4.3, corresponding to a damped oscillator, the solutions approach the origin for all initial conditions. This is consistent with our simulation in Figure 4.1, but it allows us to infer the behavior for all initial conditions rather than a single initial condition. However, the phase portrait does not readily tell us the rate of change of the states (although this can be inferred from the lengths of the arrows in the vector field plot).

Figure 4.4. Equilibrium points for an inverted pendulum. An inverted pendulum is a model for a class of balance systems in which we wish to keep a system upright, such as a rocket (a). Using a simplified model of an inverted pendulum (b), we can develop a phase portrait that shows the dynamics of the system (c). The system has multiple equilibrium points, marked by the solid dots along the x2 = 0 line.

Equilibrium Points and Limit Cycles

An equilibrium point of a dynamical system represents a stationary condition for the dynamics. We say that a state xe is an equilibrium point for a dynamical system

dx/dt = F(x)

if F(xe) = 0. If a dynamical system has an initial condition x(0) = xe, then it will stay at the equilibrium point: x(t) = xe for all t ≥ 0, where we have taken t0 = 0.

Equilibrium points are one of the most important features of a dynamical system since they define the states corresponding to constant operating conditions. A dynamical system can have zero, one or more equilibrium
points.

Example 4.4 (Inverted pendulum). Consider the inverted pendulum in Figure 4.4, which is a part of the balance system we considered in Chapter 2. The inverted pendulum is a simplified version of the problem of stabilizing a rocket: by applying forces at the base of the rocket, we seek to keep the rocket stabilized in the upright position. The state variables are the angle θ = x1 and the angular velocity dθ/dt = x2, the control variable is the acceleration u of the pivot and the output is the angle θ.

For simplicity we assume that mgl/Jt = 1 and ml/Jt = 1, so that the dynamics (equation (2.10)) become

dx/dt = ( x2, sin x1 − c x2 + u cos x1 ).   (4.5)

This is a nonlinear, time-invariant system of second order. This same set of equations can also be obtained by appropriate normalization of the system dynamics, as illustrated in Example 2.7.

Figure 4.5. Phase portrait and time domain simulation for a system with a limit cycle. The phase portrait (a) shows the states of the solution plotted for different initial conditions. The limit cycle corresponds to a closed loop trajectory. The simulation (b) shows a single solution plotted as a function of time, with the limit cycle corresponding to a steady oscillation of fixed amplitude.

We consider the open loop dynamics by setting u = 0. The equilibrium points for the system are given by

xe = ( ±nπ, 0 ),

where n = 0, 1, 2, .... The equilibrium points for n even correspond to the pendulum pointing up and those for n odd correspond to the pendulum hanging down. A phase portrait for this system (without corrective inputs) is shown in Figure 4.4c. The phase portrait shows −2π ≤ x1 ≤ 2π, so five of the equilibrium points are shown.

Nonlinear systems can exhibit rich behavior. Apart from equilibria, they can also exhibit stationary periodic solutions. This is of great practical value in generating sinusoidally varying voltages in power systems or in generating periodic signals for animal locomotion. A simple example is
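The claim that every (±nπ, 0) is an equilibrium of the open loop dynamics (4.5) can be verified directly: plug each candidate into the vector field and check that it vanishes. A minimal sketch (the damping value c = 0.1 is an assumption, not from the text):

```python
import math

def pendulum_rhs(x1, x2, u=0.0, c=0.1):
    """Vector field of equation (4.5); c = 0.1 is an assumed damping value."""
    return x2, math.sin(x1) - c * x2 + u * math.cos(x1)

# With u = 0, every (n*pi, 0) should satisfy F(xe) = 0 (up to floating point).
residuals = [pendulum_rhs(n * math.pi, 0.0) for n in range(-2, 3)]
```

The residuals are zero to machine precision; sin(nπ) is only approximately zero in floating point, which is why an exact-zero test would fail for n ≠ 0.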
given in Exercise 4.12, which shows the circuit diagram for an electronic oscillator. A normalized model of the oscillator is given by the equation

dx1/dt = x2 + x1(1 − x1² − x2²),   dx2/dt = −x1 + x2(1 − x1² − x2²).   (4.6)

The phase portrait and time domain solutions are given in Figure 4.5. The figure shows that the solutions in the phase plane converge to a circular trajectory. In the time domain this corresponds to an oscillatory solution. Mathematically, the circle is called a limit cycle. More formally, we call an isolated solution x(t) a limit cycle of period T > 0 if x(t + T) = x(t) for all t ∈ R.

There are methods for determining limit cycles for second-order systems, but for general higher-order systems we have to resort to computational analysis. Computer algorithms find limit cycles by searching for periodic trajectories in state space that satisfy the dynamics of the system. In many situations, stable limit cycles can be found by simulating the system with different initial conditions.

Figure 4.6. Illustration of Lyapunov's concept of a stable solution. The solution represented by the solid line is stable if we can guarantee that all solutions remain within a tube of diameter ϵ by choosing initial conditions sufficiently close to the solution.

4.3 Stability

The stability of a solution determines whether or not solutions nearby the solution remain close, get closer or move further away. We now give a formal definition of stability and describe tests for determining whether a solution is stable.

Definitions

Let x(t; a) be a solution to the differential equation with initial condition a. A solution is stable if other solutions that start near a stay close to x(t; a). Formally, we say that the solution x(t; a) is stable if for all ϵ > 0 there exists a δ > 0 such that

‖b − a‖ < δ  ⟹  ‖x(t; b) − x(t; a)‖ < ϵ for all t > 0.

Note that this definition does not imply that x(t; b) approaches x(t; a) as time increases, but just that it stays nearby. Furthermore, the value of δ may depend on ϵ, so that if we wish to
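The convergence to the circular trajectory described for the oscillator model (4.6) can be reproduced with a few lines of simulation. A minimal sketch using explicit Euler (our choice of integrator) from an initial condition inside the unit circle:

```python
import math

def osc_step(x1, x2, dt=1e-3):
    """One explicit-Euler step of the oscillator model (4.6)."""
    s = 1 - x1 * x1 - x2 * x2
    return x1 + dt * (x2 + x1 * s), x2 + dt * (-x1 + x2 * s)

x1, x2 = 0.5, 0.0            # start inside the unit circle
for _ in range(20000):       # integrate to t = 20
    x1, x2 = osc_step(x1, x2)
radius = math.hypot(x1, x2)  # settles near 1, the limit cycle radius
```

Starting outside the circle (e.g., from (2, 0)) gives the same limiting radius, which is what makes this limit cycle stable.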
stay very close to the solution, we may have to start very, very close (δ ≪ ϵ). This type of stability, which is illustrated in Figure 4.6, is also called stability in the sense of Lyapunov. If a solution is stable in this sense and the trajectories do not converge, we say that the solution is neutrally stable.

An important special case is when the solution x(t; a) = xe is an equilibrium solution. Instead of saying that the solution is stable, we simply say that the equilibrium point is stable. An example of a neutrally stable equilibrium point is shown in Figure 4.7. From the phase portrait, we see that if we start near the equilibrium point, then we stay near the equilibrium point. Indeed, for this example, given any ϵ that defines the range of possible initial conditions, we can simply choose δ = ϵ to satisfy the definition of stability, since the trajectories are perfect circles.

A solution x(t; a) is asymptotically stable if it is stable in the sense of Lyapunov and also x(t; b) → x(t; a) as t → ∞ for b sufficiently close to a. This corresponds to the case where all nearby trajectories converge to the stable solution for large time. Figure 4.8 shows an example of an asymptotically stable equilibrium point.

Figure 4.7. Phase portrait and time domain simulation for a system with a single stable equilibrium point. The equilibrium point xe at the origin is stable since all trajectories that start near xe stay near xe.

Note from the phase portraits that not only do all trajectories stay near the equilibrium point at the origin but that they also all approach the origin as t gets large (the directions of the arrows on the phase portrait show the direction in which the trajectories move).

A solution x(t; a) is unstable if it is not stable. More specifically, we say that a solution x(t; a) is unstable if, given some ϵ > 0, there does not exist a δ > 0 such that if ‖b − a‖ < δ, then ‖x(t; b) − x(t; a)‖ < ϵ for all t. An example of an unstable equilibrium point is shown in
Figure 4.9.

The definitions above are given without careful description of their domain of applicability. More formally, we define a solution to be locally stable (or locally asymptotically stable) if it is stable for all initial conditions x ∈ Br(a), where

Br(a) = { x : ‖x − a‖ < r }

is a ball of radius r around a and r > 0. A system is globally stable if it is stable for all r > 0. Systems whose equilibrium points are only locally stable can have

Figure 4.8. Phase portrait and time domain simulation for a system with a single asymptotically stable equilibrium point. The equilibrium point xe at the origin is asymptotically stable since the trajectories converge to this point as t → ∞.

Figure 4.9. Phase portrait and time domain simulation for a system with a single unstable equilibrium point. The equilibrium point xe at the origin is unstable since not all trajectories that start near xe stay near xe. The sample trajectory on the right shows that the trajectories very quickly depart from zero.

interesting behavior away from equilibrium points, as we explore in the next section.

For planar dynamical systems, equilibrium points have been assigned names based on their stability type. An asymptotically stable equilibrium point is called a sink or sometimes an attractor. An unstable equilibrium point can be either a source, if all trajectories lead away from the equilibrium point, or a saddle, if some trajectories lead to the equilibrium point and others move away (this is the situation pictured in Figure 4.9). Finally, an equilibrium point that is stable but not asymptotically stable (i.e., neutrally stable, such as the one in Figure 4.7) is called a center.

Example 4.5 (Congestion control). The model for congestion control in a network consisting of N identical computers connected to a single router, introduced in Section 3.4, is given by

dw/dt = c/b − ρc (1 + w²/2),   db/dt = N wc/b − c,

where w is the window size and b is the buffer size of the router. Phase portraits are shown
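The convergence claimed for the congestion control model can be checked by simulation. A minimal sketch, assuming the Figure 4.10a parameter values N = 60, c = 10 pkts/ms and ρ = 2·10⁻⁴ (our reading of the caption), again with explicit Euler:

```python
def tcp_rhs(w, b, N=60, rho=2e-4, c=10.0):
    """Congestion-control vector field: window w (pkts), buffer b (pkts), time in ms."""
    dw = c / b - rho * c * (1 + w * w / 2)
    db = N * w * c / b - c
    return dw, db

# Euler integration from a state near the equilibrium.
w, b, dt = 5.0, 300.0, 0.1
for _ in range(50000):       # integrate for 5 seconds
    dw, db = tcp_rhs(w, b)
    w, b = w + dt * dw, b + dt * db
```

After the transient dies out, the state sits at a fixed point of the vector field with the buffer well below the 500-packet capacity mentioned in the text.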
in Figure 4.10 for two different sets of parameter values. In each case we see that the system converges to an equilibrium point in which the buffer is below its full capacity of 500 packets. The equilibrium size of the buffer represents a balance between the transmission rates for the sources and the capacity of the link. We see from the phase portraits that the equilibrium points are asymptotically stable, since all initial conditions result in trajectories that converge to these points.

Figure 4.10. Phase portraits for a congestion control protocol running with N = 60 identical source computers. The equilibrium values correspond to a fixed window at the source, which results in a steady-state buffer size and corresponding transmission rate. (a) ρ = 2·10⁻⁴, c = 10 pkts/ms. (b) ρ = 4·10⁻⁴, c = 20 pkts/ms; a faster link uses a smaller buffer size since it can handle packets at a higher rate.

Stability of Linear Systems

A linear dynamical system has the form

dx/dt = Ax,   x(0) = x0,

where A ∈ Rⁿˣⁿ is a square matrix, corresponding to the dynamics matrix of a linear control system (2.6). For a linear system, the stability of the equilibrium at the origin can be determined from the eigenvalues of the matrix A:

λ(A) = { s ∈ C : det(sI − A) = 0 }.

The polynomial det(sI − A) is the characteristic polynomial, and the eigenvalues are its roots. We use the notation λj for the jth eigenvalue of A, so that λj ∈ λ(A). In general λ can be complex-valued, although if A is real-valued, then for any eigenvalue λ its complex conjugate λ̄ will also be an eigenvalue. The origin is always an equilibrium for a linear system. Since the stability of a linear system depends only on the matrix A, we find that stability is a property of the system. For a linear system we can therefore talk about the stability of the system rather than the stability of a particular solution or equilibrium point.

The easiest class of linear
systems to analyze are those whose system matrices are in diagonal form. In this case, the dynamics have the form

dx/dt = diag( λ1, λ2, ..., λn ) x.   (4.8)

It is easy to see that the state trajectories for this system are independent of each other, so that we can write the solution in terms of n individual systems ẋj = λj xj. Each of these scalar solutions is of the form

xj(t) = e^(λj t) xj(0).

We see that the equilibrium point xe = 0 is stable if λj ≤ 0 and asymptotically stable if λj < 0.

Another simple case is when the dynamics are in the block diagonal form

dx/dt = blockdiag( [σ1  ω1; −ω1  σ1], ..., [σm  ωm; −ωm  σm] ) x.

In this case the eigenvalues can be shown to be λj = σj ± iωj. We once again can separate the state trajectories into independent solutions for each pair of states, and the solutions are of the form

x_{2j−1}(t) = e^(σj t) ( x_{2j−1}(0) cos ωj t + x_{2j}(0) sin ωj t ),
x_{2j}(t) = e^(σj t) ( −x_{2j−1}(0) sin ωj t + x_{2j}(0) cos ωj t ),

where j = 1, 2, ..., m. We see that this system is asymptotically stable if and only if σj = Re λj < 0. It is also possible to combine real and complex eigenvalues in (block) diagonal form, resulting in a mixture of solutions of the two types.

Very few systems are in one of the diagonal forms above, but some systems can be transformed into these forms via coordinate transformations. One such class of systems is those for which the dynamics matrix has distinct (nonrepeating) eigenvalues. In this case there is a matrix T ∈ Rⁿˣⁿ such that the matrix TAT⁻¹ is in (block) diagonal form, with the block diagonal elements corresponding to the eigenvalues of the original matrix A (see Exercise 4.14). If we choose new coordinates z = Tx, then

dz/dt = T ẋ = TAx = TAT⁻¹ z,

and the linear system has a block diagonal dynamics matrix. Furthermore, the eigenvalues of the transformed system are the same as those of the original system, since if v is an eigenvector of A, then w = Tv can be shown to be an eigenvector of TAT⁻¹. We can reason about the stability of the original system by noting that x(t) = T⁻¹ z(t), and so if the transformed system is stable (or asymptotically stable), then the original system has the
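For a 2×2 dynamics matrix, the eigenvalue test reduces to the roots of s² − tr(A)s + det(A). A minimal sketch applying it to the two running examples (the helper names and the specific numeric matrices are ours):

```python
import cmath

def eig2(a11, a12, a21, a22):
    """Eigenvalues of a 2x2 matrix via s^2 - trace*s + det = 0."""
    tr = a11 + a22
    det = a11 * a22 - a12 * a21
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def asymptotically_stable(A):
    """Eigenvalue criterion for the 2x2 case: all eigenvalues strictly in the left half-plane."""
    return all(lam.real < 0 for lam in eig2(*A))

oscillator = (0.0, 2.0, -2.0, -0.4)   # damped oscillator with w0 = 2, zeta = 0.1 (assumed values)
pendulum_up = (0.0, 1.0, 1.0, -0.1)   # linearized upright pendulum with gamma = 0.1 (assumed)
```

The oscillator matrix passes the test (complex pair with negative real part); the upright-pendulum matrix fails it (one real eigenvalue in the right half-plane), matching the phase-portrait discussion.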
same type of stability. This analysis shows that for linear systems with distinct eigenvalues, the stability of the system can be completely determined by examining the real parts of the eigenvalues of the dynamics matrix. For more general systems, we make use of the following theorem, proved in the next chapter.

Theorem 4.1 (Stability of a linear system). The system dx/dt = Ax is asymptotically stable if and only if all eigenvalues of A have a strictly negative real part, and is unstable if any eigenvalue of A has a strictly positive real part.

Example 4.6 (Compartment model). Consider the two-compartment model for drug delivery introduced in Section 3.6. Using concentrations as state variables and denoting the state vector by x, the system dynamics are given by

dx/dt = [ −k0 − k1   k1; k2   −k2 ] x + [ b0; 0 ] u,   y = [0  1] x,

where the input u is the rate of injection of a drug into compartment 1 and the concentration of the drug in compartment 2 is the measured output y. We wish to design a feedback control law that maintains a constant output given by y = yd. We choose an output feedback control law of the form

u = −k(y − yd) + ud,

where ud is the rate of injection required to maintain the desired concentration and k is a feedback gain that should be chosen such that the closed loop system is stable. Substituting the control law into the system, we obtain

dx/dt = [ −k0 − k1   k1 − b0 k; k2   −k2 ] x + [ b0; 0 ] ud =: Ax + B ud,   y = [0  1] x =: Cx.

The equilibrium concentration xe ∈ R² is given by xe = −A⁻¹ B ud and

ye = −C A⁻¹ B ud = ( b0 k2 / (k0 k2 + b0 k k2) ) ud.

Choosing ud such that ye = yd provides the constant rate of injection required to maintain the desired output. We can now shift coordinates to place the equilibrium point at the origin, which yields (with z = x − xe)

dz/dt = [ −k0 − k1   k1 − b0 k; k2   −k2 ] z.

We can now apply the results of Theorem 4.1 to determine the stability of the system. The eigenvalues of the system are given by the roots of the characteristic polynomial

λ(s) = s² + (k0 + k1 + k2) s + (k0 k2 + b0 k k2).

While the specific form of the roots is messy, it can be shown that the roots
have negative real parts as long as the linear term and the constant term are both positive (Exercise 4.16). Hence the system is stable for any k ≥ 0.

Stability Analysis via Linear Approximation

An important feature of differential equations is that it is often possible to determine the local stability of an equilibrium point by approximating the system by a linear system. The following example illustrates the basic idea.

Example 4.7 (Inverted pendulum). Consider again an inverted pendulum whose open loop dynamics are given by

dx/dt = ( x2, sin x1 − γ x2 ),

where we have defined the state as x = (θ, θ̇). We first consider the equilibrium point at x = (0, 0), corresponding to the straight-up position. If we assume that the angle θ = x1 remains small, then we can replace sin x1 with x1 and cos x1 with 1, which gives the approximate system

dx/dt = ( x2, x1 − γ x2 ) = [ 0  1; 1  −γ ] x.   (4.9)

Intuitively, this system should behave similarly to the more complicated model as long as x1 is small. In particular, it can be verified that the equilibrium point (0, 0) is unstable by plotting the phase portrait or computing the eigenvalues of the dynamics matrix in equation (4.9).

We can also approximate the system around the stable equilibrium point at x = (π, 0). In this case we have to expand sin x1 and cos x1 around x1 = π, according to the expansions

sin(π + θ) = −sin θ ≈ −θ,   cos(π + θ) = −cos θ ≈ −1.

If we define z1 = x1 − π and z2 = x2, the resulting approximate dynamics are given by

dz/dt = ( z2, −z1 − γ z2 ) = [ 0  1; −1  −γ ] z.   (4.10)

Note that z = (0, 0) is the equilibrium point for this system and that it has the same basic form as the dynamics shown in Figure 4.8. Figure 4.11 shows the phase portraits for the original system and the approximate system around the corresponding equilibrium points. Note that they are very similar, although not exactly the same. It can be shown that if a linear approximation has either asymptotically stable or unstable equilibrium points, then the local stability of the original system must be the same (Theorem 4.3).

More generally, suppose that we have a nonlinear system dx/dt = F(x) that has an equilibrium
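The two linearizations in Example 4.7 can also be obtained numerically, by finite-differencing the vector field at each equilibrium. A minimal sketch (central differences; the step size h and damping γ = 0.1 are our assumptions):

```python
import math

def F(x1, x2, g=0.1):
    """Open-loop pendulum vector field (x2, sin x1 - g*x2); g = 0.1 is assumed."""
    return x2, math.sin(x1) - g * x2

def jacobian(x1, x2, h=1e-6):
    """Central-difference Jacobian of F; row i holds the partials of F_i."""
    cols = []
    for j in range(2):
        d = [0.0, 0.0]
        d[j] = h
        fp = F(x1 + d[0], x2 + d[1])
        fm = F(x1 - d[0], x2 - d[1])
        cols.append([(fp[0] - fm[0]) / (2 * h), (fp[1] - fm[1]) / (2 * h)])
    return [[cols[0][0], cols[1][0]], [cols[0][1], cols[1][1]]]

A_up = jacobian(0.0, 0.0)        # approximately [[0, 1], [ 1, -g]], cf. equation (4.9)
A_down = jacobian(math.pi, 0.0)  # approximately [[0, 1], [-1, -g]], cf. equation (4.10)
```

The sign flip in the (2, 1) entry between the two Jacobians is exactly what changes the equilibrium from unstable (upright) to stable (hanging down).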
point at xe. Computing the Taylor series expansion of the vector field, we can write

dx/dt = F(xe) + (∂F/∂x)|_{xe} (x − xe) + higher-order terms in (x − xe).

Since F(xe) = 0, we can approximate the system by choosing a new state variable z = x − xe and writing

dz/dt = Az,   where A = (∂F/∂x)|_{xe}.   (4.11)

We call the system (4.11) the linear approximation of the original nonlinear system, or the linearization at xe.

The fact that a linear model can be used to study the behavior of a nonlinear system near an equilibrium point is a powerful one. Indeed, we can take this even further and use a local linear approximation of a nonlinear system to design a feedback law that keeps the system near its equilibrium point (design of dynamics). Thus, feedback can be used to make sure that solutions remain close to the equilibrium point, which in turn ensures that the linear approximation used to stabilize it is valid.

Linear approximations can also be used to understand the stability of nonequilibrium solutions, as illustrated by the following example.

Example 4.8 (Stable limit cycle). Consider the system given by equation (4.6),

dx1/dt = x2 + x1(1 − x1² − x2²),   dx2/dt = −x1 + x2(1 − x1² − x2²),

whose phase portrait is shown in Figure 4.5. The differential equation has a periodic solution

x1(t) = x1(0) cos t + x2(0) sin t,   (4.12)

with x1²(0) + x2²(0) = 1. To explore the stability of this solution, we introduce polar coordinates r and φ, which are related to the state variables x1 and x2 by

x1 = r cos φ,   x2 = r sin φ.

Differentiation gives the following linear equations for ṙ and φ̇:

ẋ1 = ṙ cos φ − r φ̇ sin φ,   ẋ2 = ṙ sin φ + r φ̇ cos φ.

Solving this linear system for ṙ and φ̇ gives, after some calculation,

dr/dt = r(1 − r²),   dφ/dt = −1.

Notice that the equations are decoupled; hence we can analyze the stability of each state separately.

The equation for r has three equilibria: r = 0, r = 1 and r = −1 (not realizable since r must be positive). We can analyze the stability of these equilibria by linearizing the radial dynamics with F(r) = r(1 − r²). The corresponding linear dynamics are given by

dr/dt = (∂F/∂r)|_{re} r = (1 − 3re²) r,   re = 0, 1,

where we have abused notation and used r to
represent the deviation from the equilibrium point. It follows from the sign of (1 − 3re²) that the equilibrium r = 0 is unstable and the equilibrium r = 1 is asymptotically stable. Thus for any initial condition r > 0, the solution goes to r = 1 as time goes to infinity, but if the system starts with r = 0, it will remain at the equilibrium for all times. This implies that all solutions to the original system that do not start at x1 = x2 = 0 will approach the circle x1² + x2² = 1 as time increases.

To show the stability of the full solution (4.12), we must investigate the behavior of neighboring solutions with different initial conditions. We have already shown that the radius r will approach that of the solution (4.12) as long as r(0) > 0. The equation for the angle φ can be integrated analytically to give φ(t) = −t + φ(0), which shows that solutions starting at different angles φ will neither converge nor diverge. Thus the unit circle is attracting, but the solution (4.12) is only stable, not asymptotically stable. The behavior of the system is illustrated by the simulation in Figure 4.12. Notice that the solutions approach the circle rapidly, but that there is a constant phase shift between the solutions.

4.4 Lyapunov Stability Analysis

We now return to the study of the full nonlinear system

dx/dt = F(x),   x ∈ Rⁿ.   (4.13)

Having defined when a solution for a nonlinear dynamical system is stable, we can now ask how to prove that a given solution is stable, asymptotically stable or unstable. For physical systems, one can often argue about stability based on dissipation of energy. The generalization of that technique to arbitrary dynamical systems is based on the use of Lyapunov functions in place of energy.

Figure 4.12. Solution curves for a stable limit cycle. The phase portrait on the left shows that the trajectory for the system rapidly converges to the stable limit cycle. The starting points for the trajectories are marked by
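The decoupled radial dynamics make the attractivity of r = 1 easy to confirm numerically: integrate dr/dt = r(1 − r²) from one initial radius inside the unit circle and one outside. A minimal Euler-based sketch (function name and step sizes are ours):

```python
def radial(r0, dt=1e-3, steps=20000):
    """Euler-integrate the radial dynamics dr/dt = r*(1 - r**2) from r0."""
    r = r0
    for _ in range(steps):
        r += dt * r * (1 - r * r)
    return r

r_inside, r_outside = radial(0.1), radial(2.0)   # both approach the limit cycle r = 1
```

Starting exactly at r = 0 the state never moves, matching the unstable-equilibrium analysis: r = 0 is a fixed point, but any perturbation grows toward r = 1.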
circles in the phase portrait. The time domain plots on the right show that the states do not converge to the solution but instead maintain a constant phase error.

In this section we will describe techniques for determining the stability of solutions for a nonlinear system (4.13). We will generally be interested in stability of equilibrium points, and it will be convenient to assume that xe = 0 is the equilibrium point of interest. (If not, rewrite the equations in a new set of coordinates z = x − xe.)

Lyapunov Functions

A Lyapunov function V : Rⁿ → R is an energy-like function that can be used to determine the stability of a system. Roughly speaking, if we can find a nonnegative function that always decreases along trajectories of the system, we can conclude that the minimum of the function is a (locally) stable equilibrium point.

To describe this more formally, we start with a few definitions. We say that a continuous function V is positive definite if V(x) > 0 for all x ≠ 0 and V(0) = 0. Similarly, a function is negative definite if V(x) < 0 for all x ≠ 0 and V(0) = 0. We say that a function V is positive semidefinite if V(x) ≥ 0 for all x, but V(x) can be zero at points other than just x = 0.

To illustrate the difference between a positive definite function and a positive semidefinite function, suppose that x ∈ R² and let

V1(x) = x1²,   V2(x) = x1² + x2².

Both V1 and V2 are always nonnegative. However, it is possible for V1 to be zero even if x ≠ 0. Specifically, if we set x = (0, c), where c ∈ R is any nonzero number, then V1(x) = 0. On the other hand, V2(x) = 0 if and only if x = (0, 0). Thus V1 is positive semidefinite and V2 is positive definite.

We can now characterize the stability of an equilibrium point xe = 0 for the system (4.13).

Theorem 4.2 (Lyapunov stability theorem). Let V be a nonnegative function on
contour then the trajectories of the system will always cause V x to decrease along the trajectory Rn and let V represent the time derivative of V along trajectories of the system dynamics 413 V V x dx dt V x Fx Let Br Br0 be a ball of radius r around the origin If there exists r 0 such that V is positive definite and V is negative semidefinite for all x Br then x 0 is locally stable in the sense of Lyapunov If V is positive definite and V is negative definite in Br then x 0 is locally asymptotically stable If V satisfies one of the conditions above we say that V is a local Lyapunov function for the system These results have a nice geometric interpretation The level curves for a positive definite function are the curves defined by V x c c 0 and for each c this gives a closed contour as shown in Figure 413 The condition that V x is negative simply means that the vector field points toward lowerlevel contours This means that the trajectories move to smaller and smaller values of V and if V is negative definite then x must approach 0 Example 49 Scalar nonlinear system Consider the scalar nonlinear system dx dt 2 1 x x This system has equilibrium points at x 1 and x 2 We consider the equilib rium point at x 1 and rewrite the dynamics using z x 1 dz dt 2 2 z z 1 which has an equilibrium point at z 0 Now consider the candidate Lyapunov function V x 1 2z2 44 LYAPUNOV STABILITY ANALYSIS 113 which is globally positive definite The derivative of V along trajectories of the system is given by V z zz 2z 2 z z2 z If we restrict our analysis to an interval Br where r 2 then 2 z 0 and we can multiply through by 2 z to obtain 2z z2 z2 z z3 3z2 z2z 3 0 z Br r 2 It follows that V z 0 for all z Br z 0 and hence the equilibrium point xe 1 is locally asymptotically stable A slightly more complicated situation occurs if V is negative semidefinite In this case it is possible that Vx 0 when x 0 and hence x could stop decreasing in value The following example illustrates this case Example 
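The decrease of $V$ along trajectories in Example 4.9 can be checked numerically. The following sketch (plain Euler integration; the step size, horizon and initial conditions are my own choices) simulates the scalar system and verifies that $V(z) = z^2/2$ never increases and that the state converges to $x_e = 1$.

```python
# Numerical check of Example 4.9: dx/dt = 2/(1+x) - x has a locally
# asymptotically stable equilibrium at x = 1, with V(z) = z^2/2, z = x - 1.
def f(x):
    return 2.0 / (1.0 + x) - x

def simulate(x0, dt=1e-3, T=10.0):
    """Euler integration; returns the trajectory of x."""
    xs = [x0]
    for _ in range(int(T / dt)):
        xs.append(xs[-1] + dt * f(xs[-1]))
    return xs

for x0 in (0.2, 1.8):                  # initial conditions inside |x - 1| < 2
    traj = simulate(x0)
    V = [0.5 * (x - 1.0) ** 2 for x in traj]
    assert abs(traj[-1] - 1.0) < 1e-3  # converges to x_e = 1
    assert all(b <= a + 1e-12 for a, b in zip(V, V[1:]))  # V never increases
print("V decreases along trajectories; x -> 1")
```

The monotone decrease of the sampled $V$ is exactly what the sign condition $\dot{V}(z) < 0$ on $B_r$ predicts.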
Example 4.10 (Hanging pendulum). A normalized model for a hanging pendulum is

  $\frac{dx_1}{dt} = x_2, \qquad \frac{dx_2}{dt} = -\sin x_1,$

where $x_1$ is the angle between the pendulum and the vertical, with positive $x_1$ corresponding to counterclockwise rotation. The equation has an equilibrium $x_1 = x_2 = 0$, which corresponds to the pendulum hanging straight down. To explore the stability of this equilibrium we choose the total energy as a Lyapunov function:

  $V(x) = 1 - \cos x_1 + \tfrac{1}{2} x_2^2 \approx \tfrac{1}{2} x_1^2 + \tfrac{1}{2} x_2^2.$

The Taylor series approximation shows that the function is positive definite for small $x$. The time derivative of $V(x)$ is

  $\dot{V} = \dot{x}_1 \sin x_1 + x_2 \dot{x}_2 = x_2 \sin x_1 - x_2 \sin x_1 = 0.$

Since $\dot{V}$ is only negative semidefinite (indeed identically zero), it follows from Lyapunov's theorem that the equilibrium is stable but not necessarily asymptotically stable. When perturbed, the pendulum actually moves in a trajectory that corresponds to constant energy.

Lyapunov functions are not always easy to find, and they are not unique. In many cases energy functions can be used as a starting point, as was done in Example 4.10. It turns out that Lyapunov functions can always be found for any stable system (under certain conditions), and hence one knows that if a system is stable, a Lyapunov function exists (and vice versa). Recent results using sum-of-squares methods have provided systematic approaches for finding Lyapunov functions [167]. Sum-of-squares techniques can be applied to a broad variety of systems, including systems whose dynamics are described by polynomial equations, as well as hybrid systems, which can have different models for different regions of state space.

For a linear dynamical system of the form

  $\frac{dx}{dt} = Ax,$

it is possible to construct Lyapunov functions in a systematic manner. To do so, we consider quadratic functions of the form

  $V(x) = x^T P x,$

where $P \in \mathbb{R}^{n \times n}$ is a symmetric matrix ($P = P^T$). The condition that $V$ be positive definite is equivalent to the condition that $P$ be a positive definite matrix:

  $x^T P x > 0 \quad \text{for all } x \neq 0,$

which we write as $P > 0$. It can be shown that if $P$ is symmetric, then $P$ is positive definite if and only if all of its eigenvalues are real and positive.
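The eigenvalue test for positive definiteness is easy to apply numerically. The sketch below (the matrices are illustrative choices of mine, corresponding to the quadratic forms $V_1$ and $V_2$ from earlier in this section) shows that $P_1 = \mathrm{diag}(1, 0)$ is only positive semidefinite while $P_2 = I$ is positive definite.

```python
import numpy as np

def is_positive_definite(P, tol=1e-12):
    """A symmetric matrix is positive definite iff all its eigenvalues are > 0."""
    P = np.asarray(P, dtype=float)
    assert np.allclose(P, P.T), "test applies to symmetric matrices"
    return bool(np.all(np.linalg.eigvalsh(P) > tol))

P1 = np.diag([1.0, 0.0])   # V1(x) = x1^2         -> positive semidefinite only
P2 = np.eye(2)             # V2(x) = x1^2 + x2^2  -> positive definite

print(is_positive_definite(P1))  # False (one zero eigenvalue)
print(is_positive_definite(P2))  # True
```

`eigvalsh` is used because it exploits symmetry and returns guaranteed-real eigenvalues.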
Given a candidate Lyapunov function $V(x) = x^T P x$, we can now compute its derivative along flows of the system:

  $\dot{V} = \frac{\partial V}{\partial x}\frac{dx}{dt} = x^T\left(A^T P + P A\right)x =: -x^T Q x.$

The requirement that $\dot{V}$ be negative definite (for asymptotic stability) becomes a condition that the matrix $Q$ be positive definite. Thus, to find a Lyapunov function for a linear system it is sufficient to choose a $Q > 0$ and solve the Lyapunov equation

  $A^T P + P A = -Q. \qquad (4.14)$

This is a linear equation in the entries of $P$, and hence it can be solved using linear algebra. It can be shown that the equation always has a solution if all of the eigenvalues of the matrix $A$ are in the left half-plane. Moreover, the solution $P$ is positive definite if $Q$ is positive definite. It is thus always possible to find a quadratic Lyapunov function for a stable linear system. We will defer a proof of this until Chapter 5, where more tools for the analysis of linear systems will be developed.

Knowing that we have a direct method to find Lyapunov functions for linear systems, we can now investigate the stability of nonlinear systems. Consider the system

  $\frac{dx}{dt} = F(x) =: Ax + \tilde{F}(x), \qquad (4.15)$

where $\tilde{F}(0) = 0$ and $\tilde{F}(x)$ contains terms that are second order and higher in the elements of $x$. The function $Ax$ is an approximation of $F(x)$ near the origin, and we can determine the Lyapunov function for the linear approximation and investigate if it is also a Lyapunov function for the full nonlinear system. The following example illustrates the approach.

Example 4.11 (Genetic switch). Consider the dynamics of a set of repressors connected together in a cycle, as shown in Figure 4.14a. The normalized dynamics for this system were given in Exercise 2.9:

  $\frac{dz_1}{d\tau} = \frac{\mu}{1 + z_2^n} - z_1, \qquad \frac{dz_2}{d\tau} = \frac{\mu}{1 + z_1^n} - z_2, \qquad (4.16)$

where $z_1$ and $z_2$ are scaled versions of the protein concentrations, and $n$ and $\mu$ are

Figure 4.14: Stability of a genetic switch. (a) Circuit diagram. (b) Equilibrium points. The
circuit diagram in (a) represents two proteins that are each repressing the production of the other. The inputs $u_1$ and $u_2$ interfere with this repression, allowing the circuit dynamics to be modified. The equilibrium points for this circuit can be determined by the intersection of the two curves shown in (b).

parameters that describe the interconnection between the genes; we have set the external inputs $u_1$ and $u_2$ to zero.

The equilibrium points for the system are found by equating the time derivatives to zero. We define

  $f(u) = \frac{\mu}{1 + u^n}, \qquad f'(u) = \frac{df}{du} = \frac{-\mu n u^{n-1}}{(1 + u^n)^2},$

and the equilibrium points are defined as the solutions of the equations

  $z_1 = f(z_2), \qquad z_2 = f(z_1).$

If we plot the curves $z_1 = f(z_2)$ and $z_2 = f(z_1)$ on a graph, then these equations will have a solution when the curves intersect, as shown in Figure 4.14b. Because of the shape of the curves, it can be shown that there will always be three solutions: one at $z_{1e} = z_{2e}$, one with $z_{1e} < z_{2e}$ and one with $z_{1e} > z_{2e}$. If $\mu \gg 1$, then we can show that the solutions are given approximately by

  $z_{1e} \approx \mu,\ z_{2e} \approx \frac{1}{\mu^{n-1}}; \qquad z_{1e} = z_{2e}; \qquad z_{1e} \approx \frac{1}{\mu^{n-1}},\ z_{2e} \approx \mu. \qquad (4.17)$

To check the stability of the system, we write $f(u)$ in terms of its Taylor series expansion about $u_e$:

  $f(u) = f(u_e) + f'(u_e)(u - u_e) + \tfrac{1}{2} f''(u_e)(u - u_e)^2 + \text{higher-order terms},$

where $f'$ represents the first derivative of the function and $f''$ the second. Using these approximations, the dynamics can then be written as

  $\frac{dw}{dt} = \begin{pmatrix} -1 & f'(z_{2e}) \\ f'(z_{1e}) & -1 \end{pmatrix} w + \tilde{F}(w),$

where $w = z - z_e$ is the shifted state and $\tilde{F}(w)$ represents quadratic and higher-order terms.

We now use equation (4.14) to search for a Lyapunov function. Choosing $Q = I$ and letting $P \in \mathbb{R}^{2 \times 2}$ have elements $p_{ij}$, we search for a solution of the equation

  $\begin{pmatrix} -1 & f'_2 \\ f'_1 & -1 \end{pmatrix} \begin{pmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{pmatrix} + \begin{pmatrix} p_{11} & p_{12} \\ p_{12} & p_{22} \end{pmatrix} \begin{pmatrix} -1 & f'_1 \\ f'_2 & -1 \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},$

where $f'_1 = f'(z_{1e})$ and $f'_2 = f'(z_{2e})$. Note that we have set $p_{21} = p_{12}$ to force $P$ to be symmetric. Multiplying out the matrices, we obtain

  $\begin{pmatrix} -2p_{11} + 2 f'_2 p_{12} & p_{11} f'_1 - 2p_{12} + p_{22} f'_2 \\ p_{11} f'_1 - 2p_{12} + p_{22} f'_2 & -2p_{22} + 2 f'_1 p_{12} \end{pmatrix} = \begin{pmatrix} -1 & 0 \\ 0 & -1 \end{pmatrix},$

which is a set of linear equations for the unknowns $p_{ij}$. We can solve these linear equations to obtain

  $p_{11} = \frac{f_2'^{\,2} - f'_1 f'_2 + 2}{4(1 - f'_1 f'_2)}, \qquad p_{12} = \frac{f'_1 + f'_2}{4(1 - f'_1 f'_2)}, \qquad p_{22} = \frac{f_1'^{\,2} - f'_1 f'_2 + 2}{4(1 - f'_1 f'_2)}.$

To check that $V(w) = w^T P w$ is a Lyapunov function, we must verify that $V(w)$ is a positive definite function, or equivalently that $P > 0$. Since $P$ is a $2 \times 2$ symmetric matrix, it has two real eigenvalues $\lambda_1$ and $\lambda_2$ that satisfy

  $\lambda_1 + \lambda_2 = \operatorname{trace}(P), \qquad \lambda_1 \cdot \lambda_2 = \det(P).$

In order for $P$ to be positive definite we must have that $\lambda_1$ and $\lambda_2$ are positive, and we thus require that

  $\operatorname{trace}(P) = \frac{(f'_1 - f'_2)^2 + 4}{4 - 4 f'_1 f'_2} > 0, \qquad \det(P) = \frac{(f'_1 - f'_2)^2 + 4}{16 - 16 f'_1 f'_2} > 0.$

We see that $\operatorname{trace}(P) = 4\det(P)$ and the numerator of the expressions is just $(f'_1 - f'_2)^2 + 4 > 0$, so it suffices to check the sign of $1 - f'_1 f'_2$. In particular, for $P$ to be positive definite we require that

  $f'(z_{1e})\, f'(z_{2e}) < 1.$

We can now make use of the expressions for $f'$ defined earlier and evaluate at the approximate locations of the equilibrium points derived in equation (4.17). For the equilibrium points where $z_{1e} \neq z_{2e}$, we can show that

  $f'(z_{1e})\, f'(z_{2e}) = f'(\mu)\, f'\!\left(\mu^{1-n}\right) = \frac{-\mu n \mu^{n-1}}{(1 + \mu^n)^2} \cdot \frac{-\mu n \mu^{-(n-1)^2}}{(1 + \mu^{-n(n-1)})^2} \approx n^2 \mu^{-n(n-1)}.$

Using $n = 2$ and $\mu = 200$ from Exercise 2.9, we see that $f'(z_{1e})\, f'(z_{2e}) \ll 1$, and hence $P$ is positive definite. This implies that $V$ is a positive definite function and hence a potential Lyapunov function for the system. To determine if the system (4.16) is stable, we now compute $\dot{V}$ at the equilibrium point.

Figure 4.15: Dynamics of a genetic switch. The phase portrait on the left shows that the switch has three equilibrium points, corresponding to protein A having a concentration greater than, equal to or less than protein B. The concentration with equal protein concentrations is unstable, but the other equilibrium points are stable. The simulation on the right shows the time response of the system starting from two different initial conditions. The initial portion of the curve corresponds to initial concentrations $z(0) = (1, 5)$ and converges to the equilibrium where $z_{1e} < z_{2e}$. At time $t = 10$ the concentrations are perturbed by $+2$ in $z_1$ and $-2$ in $z_2$, moving the state into the region of the state space whose solutions converge to the equilibrium point where $z_{2e} < z_{1e}$.
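The search for $P$ can also be carried out numerically. The sketch below is my own check using SciPy: it builds the genetic-switch linearization at the approximate equilibrium for $\mu = 200$, $n = 2$, solves the Lyapunov equation $A^T P + P A = -I$, and confirms that $P$ is positive definite.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

mu, n = 200.0, 2                      # parameter values from Exercise 2.9

f = lambda u: mu / (1.0 + u**n)       # repression nonlinearity
fp = lambda u: -mu * n * u**(n - 1) / (1.0 + u**n) ** 2   # its derivative

z1e, z2e = mu, mu ** (1 - n)          # approximate equilibrium, equation (4.17)
A = np.array([[-1.0, fp(z2e)],
              [fp(z1e), -1.0]])       # linearization dw/dt = A w

# Solve A^T P + P A = -Q with Q = I.  SciPy's solver handles a X + X a^H = q,
# so pass a = A^T and q = -I.
P = solve_continuous_lyapunov(A.T, -np.eye(2))

assert np.all(np.linalg.eigvalsh((P + P.T) / 2) > 0)   # P is positive definite
print(np.round(P, 3))
```

The eigenvalue check on $P$ is the numerical counterpart of the trace/determinant conditions derived above.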
By construction,

  $\dot{V} = w^T\left(P A + A^T P\right)w + \tilde{F}^T(w) P w + w^T P \tilde{F}(w) = -w^T w + \tilde{F}^T(w) P w + w^T P \tilde{F}(w).$

Since all terms in $\tilde{F}$ are quadratic or higher order in $w$, it follows that $\tilde{F}^T(w) P w$ and $w^T P \tilde{F}(w)$ consist of terms that are at least third order in $w$. Therefore, if $w$ is sufficiently close to zero, then the cubic and higher-order terms will be smaller than the quadratic terms. Hence, sufficiently close to $w = 0$, $\dot{V}$ is negative definite, allowing us to conclude that these equilibrium points are both stable.

Figure 4.15 shows the phase portrait and time traces for a system with $\mu = 4$, illustrating the bistable nature of the system. When the initial condition starts with a concentration of protein B greater than that of A, the solution converges to the equilibrium point at (approximately) $(1/\mu^{n-1}, \mu)$. If A is greater than B, then it goes to $(\mu, 1/\mu^{n-1})$. The equilibrium point with $z_{1e} = z_{2e}$ is unstable.

More generally, we can investigate what the linear approximation tells about the stability of a solution to a nonlinear equation. The following theorem gives a partial answer for the case of stability of an equilibrium point.

Theorem 4.3. Consider the dynamical system (4.15) with $\tilde{F}(0) = 0$ and $\tilde{F}$ such that $\lim \|\tilde{F}(x)\|/\|x\| = 0$ as $\|x\| \to 0$. If the real parts of all eigenvalues of $A$ are strictly less than zero, then $x_e = 0$ is a locally asymptotically stable equilibrium point of equation (4.15).

This theorem implies that asymptotic stability of the linear approximation implies local asymptotic stability of the original nonlinear system. The theorem is very important for control because it implies that stabilization of a linear approximation of a nonlinear system results in a stable equilibrium for the nonlinear system. The proof of this theorem follows the technique used in Example 4.11. A formal proof can be found in [123].

Krasovski-Lasalle Invariance Principle

For general nonlinear systems, especially those in symbolic form, it
can be difficult to find a positive definite function $V$ whose derivative is strictly negative definite. The Krasovski-Lasalle theorem enables us to conclude the asymptotic stability of an equilibrium point under less restrictive conditions, namely in the case where $\dot{V}$ is negative semidefinite, which is often easier to construct. However, it applies only to time-invariant or periodic systems. This section makes use of some additional concepts from dynamical systems; see Hahn [94] or Khalil [123] for a more detailed description.

We will deal with the time-invariant case and begin by introducing a few more definitions. We denote the solution trajectories of the time-invariant system

  $\frac{dx}{dt} = F(x) \qquad (4.18)$

as $x(t; a)$, which is the solution of equation (4.18) at time $t$ starting from $a$ at $t_0 = 0$. The $\omega$ limit set of a trajectory $x(t; a)$ is the set of all points $z \in \mathbb{R}^n$ such that there exists a strictly increasing sequence of times $t_n$ such that $x(t_n; a) \to z$ as $n \to \infty$. A set $M \subset \mathbb{R}^n$ is said to be an invariant set if for all $b \in M$ we have $x(t; b) \in M$ for all $t \geq 0$. It can be proved that the $\omega$ limit set of every trajectory is closed and invariant. We may now state the Krasovski-Lasalle principle.

Theorem 4.4 (Krasovski-Lasalle principle). Let $V : \mathbb{R}^n \to \mathbb{R}$ be a locally positive definite function such that on the compact set $\Omega_r = \{x \in \mathbb{R}^n : V(x) \leq r\}$ we have $\dot{V}(x) \leq 0$. Define

  $S = \{x \in \Omega_r : \dot{V}(x) = 0\}.$

As $t \to \infty$, the trajectory tends to the largest invariant set inside $S$; i.e., its $\omega$ limit set is contained inside the largest invariant set in $S$. In particular, if $S$ contains no invariant sets other than $x = 0$, then 0 is asymptotically stable.

Proofs are given in [128] and [135].

Lyapunov functions can often be used to design stabilizing controllers, as is illustrated by the following example, which also illustrates how the Krasovski-Lasalle principle can be applied.

Example 4.12 (Inverted pendulum). Following the analysis in Example 2.7, an inverted pendulum can be described by the following normalized model:

  $\frac{dx_1}{dt} = x_2, \qquad \frac{dx_2}{dt} = \sin x_1 + u \cos x_1, \qquad (4.19)$

Figure 4.16 ((a) Physical system; (b) Phase portrait; (c) Manifold view): Stabilized
inverted pendulum. A control law applies a force $u$ at the bottom of the pendulum to stabilize the inverted position (a). The phase portrait (b) shows that the equilibrium point corresponding to the vertical position is stabilized. The shaded region indicates the set of initial conditions that converge to the origin. The ellipse corresponds to a level set of a Lyapunov function $V(x)$ for which $V(x) > 0$ and $\dot{V}(x) < 0$ for all points inside the ellipse; this can be used as an estimate of the region of attraction of the equilibrium point. The actual dynamics of the system evolve on a manifold (c).

Here $x_1$ is the angular deviation from the upright position and $u$ is the scaled acceleration of the pivot, as shown in Figure 4.16a. The system has an equilibrium at $x_1 = x_2 = 0$, which corresponds to the pendulum standing upright. This equilibrium is unstable.

To find a stabilizing controller, we consider the following candidate for a Lyapunov function:

  $V(x) = (\cos x_1 - 1) + a(1 - \cos^2 x_1) + \tfrac{1}{2} x_2^2 \approx \left(a - \tfrac{1}{2}\right) x_1^2 + \tfrac{1}{2} x_2^2.$

The Taylor series expansion shows that the function is positive definite near the origin if $a > 0.5$. The time derivative of $V(x)$ is

  $\dot{V} = -\dot{x}_1 \sin x_1 + 2a \dot{x}_1 \sin x_1 \cos x_1 + x_2 \dot{x}_2 = x_2 (u + 2a \sin x_1) \cos x_1.$

Choosing the feedback law

  $u = -2a \sin x_1 - x_2 \cos x_1$

gives

  $\dot{V} = -x_2^2 \cos^2 x_1.$

It follows from Lyapunov's theorem that the equilibrium is locally stable. However, since the function is only negative semidefinite, we cannot conclude asymptotic stability using Theorem 4.2. Note, however, that $\dot{V} = 0$ implies that $x_2 = 0$ or $x_1 = \pi/2 + n\pi$.

If we restrict our analysis to a small neighborhood of the origin $\Omega_r$, $r \ll \pi/2$, then we can define

  $S = \{(x_1, x_2) \in \Omega_r : x_2 = 0\}$

and compute the largest invariant set inside $S$. For a trajectory to remain in this set we must have $x_2 = 0$ for all $t$, and hence $\dot{x}_2(t) = 0$ as well. Using the dynamics of the system (4.19), we see that $x_2(t) = 0$ and $\dot{x}_2(t) = 0$ imply $x_1(t) = 0$ as well. Hence the largest invariant set inside $S$ is $(x_1, x_2) = 0$, and we can use the Krasovski-Lasalle principle to conclude that the origin is locally asymptotically stable. A phase portrait of the closed-loop system is shown in Figure 4.16b.
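A quick simulation supports this conclusion. The sketch below (Euler integration; the gain $a = 2$ and the initial condition are my own choices, subject only to $a > 0.5$) applies the feedback law $u = -2a \sin x_1 - x_2 \cos x_1$ to the model (4.19) and checks that the state converges to the upright equilibrium.

```python
import math

a = 2.0   # Lyapunov-function parameter, must satisfy a > 0.5 (assumed value)

def step(x1, x2, dt):
    """One Euler step of the closed-loop inverted pendulum (4.19)."""
    u = -2.0 * a * math.sin(x1) - x2 * math.cos(x1)   # stabilizing feedback
    dx1 = x2
    dx2 = math.sin(x1) + u * math.cos(x1)
    return x1 + dt * dx1, x2 + dt * dx2

x1, x2 = 0.5, 0.0          # start 0.5 rad from upright, at rest
dt = 1e-3
for _ in range(int(30.0 / dt)):
    x1, x2 = step(x1, x2, dt)

assert abs(x1) < 1e-2 and abs(x2) < 1e-2   # converged to the origin
print(x1, x2)
```

Starting inside a level set of $V$ on which $\cos x_1 > 0$, the trajectory stays in that level set and, per the invariance argument, ends at the origin.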
In the analysis and the phase portrait, we have treated the angle of the pendulum $\theta = x_1$ as a real number. In fact, $\theta$ is an angle, with $\theta = 2\pi$ equivalent to $\theta = 0$. Hence the dynamics of the system actually evolve on a manifold (smooth surface), as shown in Figure 4.16c. Analysis of nonlinear dynamical systems on manifolds is more complicated, but it uses many of the same basic ideas presented here.

4.5 Parametric and Nonlocal Behavior

Most of the tools that we have explored are focused on the local behavior of a fixed system near an equilibrium point. In this section we briefly introduce some concepts regarding the global behavior of nonlinear systems and the dependence of a system's behavior on parameters in the system model.

Regions of Attraction

To get some insight into the behavior of a nonlinear system, we can start by finding the equilibrium points. We can then proceed to analyze the local behavior around the equilibria. The behavior of a system near an equilibrium point is called the local behavior of the system.

The solutions of the system can be very different far away from an equilibrium point. This is seen, for example, in the stabilized pendulum in Example 4.12. The inverted equilibrium point is stable, with small oscillations that eventually converge to the origin. But far away from this equilibrium point there are trajectories that converge to other equilibrium points, or even cases in which the pendulum swings around the top multiple times, giving very long oscillations that are topologically different from those near the origin.

To better understand the dynamics of the system, we can examine the set of all initial conditions that converge to a given asymptotically stable equilibrium point. This set is called the region of attraction for the equilibrium point. An example is shown by the shaded region of the phase portrait in Figure 4.16b. In general, computing regions of attraction is difficult. However, even if we cannot determine the region of attraction, we can often obtain patches around the stable equilibria that are attracting. This gives partial information about the behavior of the system.
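One crude but direct way to obtain such partial information is to sample initial conditions and simulate. The sketch below is my own construction, reusing the closed-loop pendulum of Example 4.12 with an assumed gain $a = 2$: it classifies a small grid of initial conditions by whether the trajectory converges to the origin, giving a pointwise numerical approximation of the region of attraction.

```python
import math

a = 2.0   # feedback gain from the stabilized pendulum, a > 0.5 (assumed value)

def converges(x1, x2, dt=1e-3, T=40.0):
    """Simulate the closed-loop pendulum; True if the state ends near the origin."""
    for _ in range(int(T / dt)):
        u = -2.0 * a * math.sin(x1) - x2 * math.cos(x1)
        x1, x2 = x1 + dt * x2, x2 + dt * (math.sin(x1) + u * math.cos(x1))
    return abs(x1) < 1e-2 and abs(x2) < 1e-2

# Classify a grid of initial conditions (angle, angular velocity).
attracting = [(q, v) for q in (-0.4, 0.0, 0.4) for v in (-0.4, 0.0, 0.4)
              if converges(q, v)]
print(len(attracting), "of 9 grid points lie in the estimated region of attraction")
```

A finer grid (and a longer horizon) sharpens the estimate, at the cost of many more simulations; this brute-force picture is what the shaded region of Figure 4.16b summarizes.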
One method for approximating the region of attraction is through the use of Lyapunov functions. Suppose that $V$ is a local Lyapunov function for a system around an equilibrium point $x_0$. Let $\Omega_r$ be a set on which $V(x)$ has a value less than $r$,

  $\Omega_r = \{x \in \mathbb{R}^n : V(x) \leq r\},$

and suppose that $\dot{V}(x) \leq 0$ for all $x \in \Omega_r$, with equality only at the equilibrium point $x_0$. Then $\Omega_r$ is inside the region of attraction of the equilibrium point. Since this approximation depends on the Lyapunov function, and the choice of Lyapunov function is not unique, it can sometimes be a very conservative estimate.

It is sometimes the case that we can find a Lyapunov function $V$ such that $V$ is positive definite and $\dot{V}$ is negative (semi)definite for all $x \in \mathbb{R}^n$. In this case it can be shown that the region of attraction for the equilibrium point is the entire state space, and the equilibrium point is said to be globally stable.

Example 4.13 (Stabilized inverted pendulum). Consider again the stabilized inverted pendulum from Example 4.12. The Lyapunov function for the system was

  $V(x) = (\cos x_1 - 1) + a(1 - \cos^2 x_1) + \tfrac{1}{2} x_2^2,$

and $\dot{V}$ was negative semidefinite for all $x$ and nonzero when $x_1 \neq \pm\pi/2$. Hence any $x$ such that $|x_1| < \pi/2$ and $V(x) > 0$ will be inside the invariant set defined by the level curves of $V(x)$. One of these level sets is shown in Figure 4.16b.

Bifurcations

Another important property of nonlinear systems is how their behavior changes as the parameters governing the dynamics change. We can study this in the context of models by exploring how the location of equilibrium points, their stability, their regions of attraction and other dynamic phenomena, such as limit cycles, vary based on the values of the parameters in the model. Consider a differential equation of the form

  $\frac{dx}{dt} = F(x, \mu), \qquad x \in \mathbb{R}^n,\ \mu \in \mathbb{R}^k, \qquad (4.20)$

where $x$ is the state and $\mu$ is a set of parameters that describe the family of equations. The equilibrium solutions satisfy $F(x, \mu) = 0$, and as $\mu$ is varied, the
corresponding solutions $x_e(\mu)$ can also vary. We say that the system (4.20) has a bifurcation at $\mu = \mu^*$ if the behavior of the system changes qualitatively at $\mu^*$. This can occur either because of a change in stability type or because of a change in the number of solutions at a given value of $\mu$.

Example 4.14 (Predator-prey). Consider the predator-prey system described in Section 3.7. The dynamics of the system are given by

  $\frac{dH}{dt} = rH\left(1 - \frac{H}{k}\right) - \frac{aHL}{c + H}, \qquad \frac{dL}{dt} = b\,\frac{aHL}{c + H} - dL, \qquad (4.21)$

where $H$ and $L$ are the numbers of hares (prey) and lynxes (predators), and $a$, $b$, $c$, $d$, $k$ and $r$ are parameters that model a given predator-prey system (described in more detail in Section 3.7). The system has an equilibrium point at $H_e > 0$ and $L_e > 0$ that can be found numerically.

Figure 4.17: Bifurcation analysis of the predator-prey system. (a) Parametric stability diagram showing the regions in parameter space for which the system is stable. (b) Bifurcation diagram showing the location and stability of the equilibrium point as a function of $a$. The solid line represents a stable equilibrium point, and the dashed line represents an unstable equilibrium point. The dashed-dotted lines indicate the upper and lower bounds for the limit cycle at that parameter value (computed via simulation). The nominal values of the parameters in the model are $a = 3.2$, $b = 0.6$, $c = 50$, $d = 0.56$, $k = 125$ and $r = 1.6$.

To explore how the parameters of the model affect the behavior of the system, we choose to focus on two specific parameters of interest: $a$, the interaction coefficient between the populations, and $c$, a parameter affecting the prey consumption rate. Figure 4.17a is a numerically computed parametric stability diagram showing the regions in the chosen parameter space for which the equilibrium point is stable (leaving the other parameters at their nominal values). We see from this figure that for certain combinations of $a$ and $c$ we get a stable equilibrium point, while at other values this equilibrium point is unstable.
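This stability boundary can be probed numerically. The sketch below is my own check: it uses the closed-form equilibrium of (4.21) and a hand-computed Jacobian to evaluate the equilibrium eigenvalues at two values of $a$, with the other parameters held at their nominal values.

```python
import numpy as np

b, c, d, k, r = 0.6, 50.0, 0.56, 125.0, 1.6   # nominal parameter values

def max_real_eig(a):
    """Largest real part of the Jacobian eigenvalues at the H, L > 0 equilibrium."""
    He = d * c / (a * b - d)                   # from b*a*He/(c+He) = d
    Le = r * (1 - He / k) * (c + He) / a       # from dH/dt = 0
    # Jacobian of (4.21) evaluated at (He, Le):
    J = np.array([
        [r - 2 * r * He / k - a * Le * c / (c + He) ** 2, -a * He / (c + He)],
        [b * a * Le * c / (c + He) ** 2, b * a * He / (c + He) - d],
    ])
    return np.max(np.linalg.eigvals(J).real)

print(max_real_eig(2.0))   # negative: equilibrium stable
print(max_real_eig(3.5))   # positive: equilibrium unstable (limit cycle appears)
```

Scanning a range of $a$ values and recording where the sign flips reproduces, point by point, a horizontal slice of the stability diagram in Figure 4.17a.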
Figure 4.17b is a numerically computed bifurcation diagram for the system. In this plot, we choose one parameter to vary ($a$) and then plot the equilibrium value of one of the states ($H$) on the vertical axis. The remaining parameters are set to their nominal values. A solid line indicates that the equilibrium point is stable; a dashed line indicates that the equilibrium point is unstable. Note that the stability in the bifurcation diagram matches that in the parametric stability diagram for $c = 50$ (the nominal value) and $a$ varying from 1.35 to 4. For the predator-prey system, when the equilibrium point is unstable, the solution converges to a stable limit cycle. The amplitude of this limit cycle is shown by the dashed-dotted line in Figure 4.17b.

A particular form of bifurcation that is very common when controlling linear systems is that the equilibrium remains fixed but the stability of the equilibrium changes as the parameters are varied. In such a case it is revealing to plot the eigenvalues of the system as a function of the parameters. Such plots are called root locus diagrams, because they give the locus of the eigenvalues when parameters change. Bifurcations occur when parameter values are such that there are eigenvalues with zero real part.

Figure 4.18: Stability plots for a bicycle moving at constant velocity. (a) The real part of the system eigenvalues as a function of the bicycle velocity $v$; the system is stable when all eigenvalues have negative real part (shaded region). (b) The locus of eigenvalues on the complex plane as the velocity $v$ is varied, giving a different view of the stability of the system. This type of plot is called a root locus diagram.

Computing environments such as LabVIEW, MATLAB and Mathematica have tools for plotting root loci.
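The raw data of a root locus diagram is easy to generate by hand. As a generic illustration (a hypothetical second-order system of my own choosing, not the bicycle model), the sketch below sweeps a scalar gain $k$ in an output feedback $u = -ky$ and records the closed-loop eigenvalues.

```python
import numpy as np

# Hypothetical system dx/dt = A x + B u, y = C x, with output feedback u = -k y,
# giving closed-loop dynamics dx/dt = (A - k B C) x.
A = np.array([[0.0, 1.0], [0.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

for kgain in (0.0, 1.0, 4.0, 10.0):
    eig = np.linalg.eigvals(A - kgain * B @ C)
    print(f"k = {kgain:5.1f}  eigenvalues = {np.round(eig, 3)}")
```

Plotting these eigenvalues in the complex plane as $k$ varies traces out the root locus; a bifurcation would appear at any $k$ where an eigenvalue crosses the imaginary axis.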
Example 4.15 (Root locus diagram for a bicycle model). Consider the linear bicycle model given by equation (3.7) in Section 3.2. Introducing the state variables $x_1 = \varphi$, $x_2 = \delta$, $x_3 = \dot{\varphi}$ and $x_4 = \dot{\delta}$, and setting the steering torque $T = 0$, the equations can be written as

  $\frac{dx}{dt} = \begin{pmatrix} 0 & I \\ -M^{-1}(K_0 + K_2 v_0^2) & -M^{-1} C v_0 \end{pmatrix} x =: Ax,$

where $I$ is a $2 \times 2$ identity matrix and $v_0$ is the velocity of the bicycle. Figure 4.18a shows the real parts of the eigenvalues as a function of velocity, and Figure 4.18b shows the dependence of the eigenvalues of $A$ on the velocity $v_0$. The figures show that the bicycle is unstable for low velocities because two eigenvalues are in the right half-plane. As the velocity increases, these eigenvalues move into the left half-plane, indicating that the bicycle becomes self-stabilizing. As the velocity is increased further, there is an eigenvalue close to the origin that moves into the right half-plane, making the bicycle unstable again. However, this eigenvalue is small, and so it can easily be stabilized by a rider. Figure 4.18a shows that the bicycle is self-stabilizing for velocities between 6 and 10 m/s.

Parametric stability diagrams and bifurcation diagrams can provide valuable insights into the dynamics of a nonlinear system. It is usually necessary to carefully choose the parameters that one plots, including combining the natural parameters of the system to eliminate extra parameters when possible. Computer programs such as AUTO, LOCBIF and XPPAUT provide numerical algorithms for producing stability and bifurcation diagrams.

Figure 4.19: Headphones with noise cancellation. Noise is sensed by the exterior microphone (a) and sent to a filter in such a way that it cancels the noise that penetrates the headphone (b). The filter parameters $a$ and $b$ are adjusted by the controller; $S$ represents the input signal to the headphones.

Design of Nonlinear Dynamics Using Feedback

In
most of the text we will rely on linear approximations to design feedback laws that stabilize an equilibrium point and provide a desired level of performance. However, for some classes of problems the feedback controller must be nonlinear to accomplish its function. By making use of Lyapunov functions, we can often design a nonlinear control law that provides stable behavior, as we saw in Example 4.12.

One way to systematically design a nonlinear controller is to begin with a candidate Lyapunov function $V(x)$ and a control system $\dot{x} = f(x, u)$. We say that $V(x)$ is a control Lyapunov function if for every $x$ there exists a $u$ such that

  $\dot{V}(x) = \frac{\partial V}{\partial x} f(x, u) < 0.$

In this case it may be possible to find a function $\alpha(x)$ such that $u = \alpha(x)$ stabilizes the system. The following example illustrates the approach.

Example 4.16 (Noise cancellation). Noise cancellation is used in consumer electronics and in industrial systems to reduce the effects of noise and vibrations. The idea is to locally reduce the effect of noise by generating opposing signals. A pair of headphones with noise cancellation, such as those shown in Figure 4.19a, is a typical example. A schematic diagram of the system is shown in Figure 4.19b. The system has two microphones: one outside the headphones that picks up exterior noise $n$, and another inside the headphones that picks up the signal $e$, which is a combination of the desired signal and the external noise that penetrates the headphone.

The signal from the exterior microphone is filtered and sent to the headphones in such a way that it cancels the external noise that penetrates into the headphones. The parameters of the filter are adjusted by a feedback mechanism to make the noise signal in the internal microphone as small as possible. The feedback is inherently nonlinear, because it acts by changing the parameters of the filter.

To analyze the system, we assume for simplicity that the propagation of external noise into the headphones is modeled by a first-order dynamical system described by

  $\frac{dz}{dt} = a_0 z + b_0 n, \qquad (4.22)$

where $z$
is the sound level and the parameters $a_0 < 0$ and $b_0$ are not known. Assume that the filter is a dynamical system of the same type,

  $\frac{dw}{dt} = aw + bn.$

We wish to find a controller that updates $a$ and $b$ so that they converge to the unknown parameters $a_0$ and $b_0$. Introduce $x_1 = e = w - z$, $x_2 = a - a_0$ and $x_3 = b - b_0$; then

  $\frac{dx_1}{dt} = a_0(w - z) + (a - a_0)w + (b - b_0)n = a_0 x_1 + x_2 w + x_3 n. \qquad (4.23)$

We will achieve noise cancellation if we can find a feedback law for changing the parameters $a$ and $b$ so that the error $e$ goes to zero. To do this we choose

  $V(x_1, x_2, x_3) = \tfrac{1}{2}\left(\alpha x_1^2 + x_2^2 + x_3^2\right)$

as a candidate Lyapunov function for (4.23). The derivative of $V$ is

  $\dot{V} = \alpha x_1 \dot{x}_1 + x_2 \dot{x}_2 + x_3 \dot{x}_3 = \alpha a_0 x_1^2 + x_2(\dot{x}_2 + \alpha w x_1) + x_3(\dot{x}_3 + \alpha n x_1).$

Choosing

  $\dot{x}_2 = -\alpha w x_1 = -\alpha w e, \qquad \dot{x}_3 = -\alpha n x_1 = -\alpha n e, \qquad (4.24)$

we find that $\dot{V} = \alpha a_0 x_1^2 < 0$, and it follows that the quadratic function will decrease as long as $e = x_1 = w - z \neq 0$. The nonlinear feedback (4.24) thus attempts to change the parameters so that the error between the signal and the noise is small. Notice that the feedback law (4.24) does not use the model (4.22) explicitly.

A simulation of the system is shown in Figure 4.20. In the simulation we have represented the signal as a pure sinusoid and the noise as broadband noise. The figure shows the dramatic improvement with noise cancellation: the sinusoidal signal is not visible without it. The filter parameters change quickly from their initial values $a = b = 0$. Filters of higher order, with more coefficients, are used in practice.

Figure 4.20: Simulation of noise cancellation. The top left figure shows the headphone signal without noise cancellation, and the bottom left figure shows the signal with noise cancellation. The right figures show the parameters $a$ and $b$ of the filter.

4.6 Further Reading

The field of dynamical systems has a rich literature that characterizes the possible features of dynamical systems and describes how parametric changes in the dynamics can lead to topological changes in behavior.
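Returning briefly to Example 4.16, the adaptation law (4.24) can be reproduced in a few lines. The sketch below is a minimal check with values I have assumed for illustration ($a_0 = -1$, $b_0 = 2$, gain $\alpha = 1$, a two-tone stand-in for the noise); it verifies that $V$ decreases and that the cancellation error shrinks.

```python
import math

a0, b0 = -1.0, 2.0        # "unknown" true parameters (assumed for this check)
alpha = 1.0               # adaptation gain
dt, T = 1e-3, 200.0

z = w = a = b = 0.0       # noise-path state, filter state, filter parameters
V0 = Vend = None
err_first = err_last = 0.0
nsteps = int(T / dt)

for i in range(nsteps):
    t = i * dt
    n = math.sin(3.0 * t) + 0.5 * math.sin(7.0 * t)   # stand-in exterior noise
    e = w - z                                          # internal microphone error
    # Lyapunov function 0.5*(alpha*e^2 + (a-a0)^2 + (b-b0)^2), using the truth
    V = 0.5 * (alpha * e * e + (a - a0) ** 2 + (b - b0) ** 2)
    if i == 0:
        V0 = V
    Vend = V
    if i < nsteps // 4:
        err_first += abs(e)
    if i >= 3 * nsteps // 4:
        err_last += abs(e)
    # Euler updates: noise path (4.22), filter, and adaptation law (4.24)
    z += dt * (a0 * z + b0 * n)
    w += dt * (a * w + b * n)
    a += dt * (-alpha * w * e)
    b += dt * (-alpha * n * e)

assert Vend < V0                    # V decreases along the trajectory
assert err_last < 0.5 * err_first   # cancellation error shrinks
print(Vend, err_last / err_first)
```

As the analysis predicts, $V$ is non-increasing regardless of the noise signal, and the error $e$ is driven toward zero even though $a_0$ and $b_0$ never appear in the update law.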
Readable introductions to dynamical systems are given by Strogatz [188] and the highly illustrated text by Abraham and Shaw [2]. More technical treatments include Andronov, Vitt and Khaikin [8], Guckenheimer and Holmes [91], and Wiggins [201]. For students with a strong interest in mechanics, the texts by Arnold [13] and Marsden and Ratiu [147] provide an elegant approach using tools from differential geometry. Finally, good treatments of dynamical systems methods in biology are given by Wilson [203] and Ellner and Guckenheimer [70]. There is a large literature on Lyapunov stability theory, including the classic texts by Malkin [144], Hahn [94] and Krasovski [128]. We highly recommend the comprehensive treatment by Khalil [123].

Exercises

4.1 (Time-invariant systems). Show that if we have a solution of the differential equation (4.1) given by $x(t)$ with initial condition $x(t_0) = x_0$, then $\tilde{x}(\tau) = x(t - t_0)$ is a solution of the differential equation

  $\frac{d\tilde{x}}{d\tau} = F(\tilde{x})$

with initial condition $\tilde{x}(0) = x_0$.

4.2 (Flow in a tank). A cylindrical tank has cross section $A$ m$^2$, effective outlet area $a$ m$^2$ and inflow $q_{in}$ m$^3$/s. An energy balance shows that the outlet velocity is $v = \sqrt{2gh}$ m/s, where $g$ m/s$^2$ is the acceleration of gravity and $h$ m is the distance between the outlet and the water level in the tank. Show that the system can be modeled by

  $\frac{dh}{dt} = -\frac{a}{A}\sqrt{2gh} + \frac{1}{A} q_{in}, \qquad q_{out} = a\sqrt{2gh}.$

Use the parameters $A = 0.2$, $a = 0.01$. Simulate the system when the inflow is zero and the initial level is $h = 0.2$. Do you expect any difficulties in the simulation?

4.3 (Cruise control). Consider the cruise control system described in Section 3.1. Generate a phase portrait for the closed-loop system on flat ground ($\theta = 0$), in third gear, using a PI controller with $k_p = 0.5$ and $k_i = 0.1$, $m = 1000$ kg and desired speed 20 m/s. Your system model should include the effects of saturating the input between 0 and 1.

4.4 (Lyapunov functions). Consider the second-order system

  $\frac{dx_1}{dt} = -a x_1, \qquad \frac{dx_2}{dt} = -b x_1 - c x_2,$

where $a, b, c > 0$. Investigate whether the functions

  $V_1(x) = \tfrac{1}{2} x_1^2 + \tfrac{1}{2} x_2^2, \qquad V_2(x) = \tfrac{1}{2} x_1^2 + \tfrac{1}{2}\left(x_2 + \frac{b x_1}{c - a}\right)^2$

are Lyapunov functions
for the system and give any conditions that must hold.

4.5 (Damped spring-mass system). Consider a damped spring-mass system with dynamics

  $m\ddot{q} + c\dot{q} + kq = 0.$

A natural candidate for a Lyapunov function is the total energy of the system, given by

  $V = \tfrac{1}{2} m \dot{q}^2 + \tfrac{1}{2} k q^2.$

Use the Krasovski-Lasalle theorem to show that the system is asymptotically stable.

4.6 (Electric generator). The following simple model for an electric generator connected to a strong power grid was given in Exercise 2.7:

  $J \frac{d^2\varphi}{dt^2} = P_m - P_e = P_m - \frac{EV}{X} \sin\varphi.$

The parameter

  $a = \frac{P_{max}}{P_m} = \frac{EV}{X P_m} \qquad (4.25)$

is the ratio between the maximum deliverable power $P_{max} = EV/X$ and the mechanical power $P_m$.

(a) Consider $a$ as a bifurcation parameter and discuss how the equilibria depend on $a$.

(b) For $a > 1$, show that there is a center at $\varphi_0 = \arcsin(1/a)$ and a saddle at $\varphi = \pi - \varphi_0$.

(c) Show that there is a solution through the saddle that satisfies

  $\frac{1}{2}\left(\frac{d\varphi}{dt}\right)^2 - \frac{\varphi - \pi + \varphi_0}{a} - \cos\varphi - \frac{\sqrt{a^2 - 1}}{a} = 0. \qquad (4.26)$

Use simulation to show that the stability region is the interior of the area enclosed by this solution. Investigate what happens if the system is in equilibrium with a value of $a$ that is slightly larger than 1 and $a$ suddenly decreases, corresponding to the reactance of the line suddenly increasing.

4.7 (Lyapunov equation). Show that Lyapunov equation (4.14) always has a solution if all of the eigenvalues of $A$ are in the left half-plane. (Hint: Use the fact that the Lyapunov equation is linear in $P$ and start with the case where $A$ has distinct eigenvalues.)

4.8 (Congestion control). Consider the congestion control problem described in Section 3.4. Confirm that the equilibrium point for the system is given by equation (3.21) and compute the stability of this equilibrium point using a linear approximation.

4.9 (Swinging up a pendulum). Consider the inverted pendulum discussed in Example 4.4, described by

  $\ddot{\theta} = \sin\theta + u\cos\theta,$

where $\theta$ is the angle between the pendulum and the vertical and the control signal $u$ is the acceleration of the pivot. Using the energy function

  $V(\theta, \dot{\theta}) = \cos\theta - 1 + \tfrac{1}{2}\dot{\theta}^2,$

show that the state feedback $u = k(V_0 - V)\dot{\theta}\cos\theta$ causes the pendulum
to swing up to the upright position.

4.10 (Root locus diagram) Consider the linear system

dx/dt = [ 0 1; 0 −3 ] x + [ −1; 4 ] u,    y = [ 1 0 ] x,

with the feedback u = −ky. Plot the location of the eigenvalues as a function of the parameter k.

4.11 (Discrete-time Lyapunov function) Consider a nonlinear discrete-time system with dynamics x(k+1) = f(x(k)) and equilibrium point xe = 0. Suppose there exists a positive definite function V: ℝⁿ → ℝ such that V(x(k+1)) − V(x(k)) < 0 for x(k) ≠ 0. Show that xe = 0 is asymptotically stable.

4.12 (Operational amplifier oscillator) An op amp circuit for an oscillator was shown in Exercise 3.5. The oscillatory solution for that linear circuit was stable but not asymptotically stable. A schematic of a modified circuit that has nonlinear elements is shown in the figure below.

Chapter Five
Linear Systems

Few physical elements display truly linear characteristics. For example the relation between force on a spring and displacement of the spring is always nonlinear to some degree. The relation between current through a resistor and voltage drop across it also deviates from a straight-line relation. However, if in each case the relation is reasonably linear, then it will be found that the system behavior will be very close to that obtained by assuming an ideal, linear physical element, and the analytical simplification is so enormous that we make linear assumptions wherever we can possibly do so in good conscience.

Robert H. Cannon, Dynamics of Physical Systems, 1967 [49]

In Chapters 2–4 we considered the construction and analysis of differential equation models for dynamical systems. In this chapter we specialize our results to the case of linear time-invariant input/output systems. Two central concepts are the matrix exponential and the convolution equation, through which we can completely characterize the behavior of a linear system. We also describe some properties of the input/output response and show how to approximate a nonlinear system by a linear one.

5.1 Basic Definitions

We have seen several instances of linear differential equations in
the examples in the previous chapters, including the spring–mass system (damped oscillator) and the operational amplifier in the presence of small (nonsaturating) input signals. More generally, many dynamical systems can be modeled accurately by linear differential equations. Electrical circuits are one example of a broad class of systems for which linear models can be used effectively. Linear models are also broadly applicable in mechanical engineering, for example as models of small deviations from equilibria in solid and fluid mechanics. Signal-processing systems, including digital filters of the sort used in CD and MP3 players, are another source of good examples, although these are often best modeled in discrete time (as described in more detail in the exercises).

In many cases, we create systems with a linear input/output response through the use of feedback. Indeed, it was the desire for linear behavior that led Harold S. Black to the invention of the negative feedback amplifier. Almost all modern signal processing systems, whether analog or digital, use feedback to produce linear or near-linear input/output characteristics. For these systems, it is often useful to represent the input/output characteristics as linear, ignoring the internal details required to get that linear response.

For other systems, nonlinearities cannot be ignored, especially if one cares about the global behavior of the system. The predator–prey problem is one example of this: to capture the oscillatory behavior of the interdependent populations, we must include the nonlinear coupling terms. Other examples include switching behavior and generating periodic motion for locomotion. However, if we care about what happens near an equilibrium point, it often suffices to approximate the nonlinear dynamics by their local linearization, as we already explored briefly in Section 4.3. The linearization is essentially an approximation of the nonlinear dynamics around the desired operating point.

Linearity

We
now proceed to define linearity of input/output systems more formally. Consider a state space system of the form

dx/dt = f(x, u),    y = h(x, u),    (5.1)

where x ∈ ℝⁿ, u ∈ ℝᵖ and y ∈ ℝᵠ. As in the previous chapters, we will usually restrict ourselves to the single-input, single-output case by taking p = q = 1. We also assume that all functions are smooth and that for a reasonable class of inputs (e.g., piecewise continuous functions of time) the solutions of equation (5.1) exist for all time.

It will be convenient to assume that the origin x = 0, u = 0 is an equilibrium point for this system (ẋ = 0) and that h(0, 0) = 0. Indeed, we can do so without loss of generality. To see this, suppose that (xe, ue) ≠ (0, 0) is an equilibrium point of the system with output ye = h(xe, ue). Then we can define a new set of states, inputs and outputs,

x̃ = x − xe,    ũ = u − ue,    ỹ = y − ye,

and rewrite the equations of motion in terms of these variables:

d x̃/dt = f(x̃ + xe, ũ + ue),    ỹ = h(x̃ + xe, ũ + ue) − ye.

In the new set of variables, the origin is an equilibrium point with output 0, and hence we can carry out our analysis in this set of variables. Once we have obtained our answers in this new set of variables, we simply "translate" them back to the original coordinates using x = x̃ + xe, u = ũ + ue and y = ỹ + ye.

Returning to the original equations (5.1), now assuming without loss of generality that the origin is the equilibrium point of interest, we write the output y(t) corresponding to the initial condition x(0) = x0 and input u(t) as y(t; x0, u). Using this notation, a system is said to be a linear input/output system if the following

Figure 5.1: Superposition of homogeneous and particular solutions. The first row shows the input, state and output corresponding to the initial condition response. The second row shows the same variables corresponding to zero
initial condition but nonzero input. The third row is the complete solution, which is the sum of the two individual solutions.

conditions are satisfied:

(i) y(t; αx1 + βx2, 0) = α y(t; x1, 0) + β y(t; x2, 0),
(ii) y(t; αx0, δu) = α y(t; x0, 0) + δ y(t; 0, u),    (5.2)
(iii) y(t; 0, δu1 + γu2) = δ y(t; 0, u1) + γ y(t; 0, u2).

Thus, we define a system to be linear if the outputs are jointly linear in the initial condition response (u = 0) and the forced response (x0 = 0). Property (iii) is a statement of the principle of superposition: the response of a linear system to the sum of two inputs u1 and u2 is the sum of the outputs y1 and y2 corresponding to the individual inputs.

The general form of a linear state space system is

dx/dt = Ax + Bu,    y = Cx + Du,    (5.3)

where A ∈ ℝⁿˣⁿ, B ∈ ℝⁿˣᵖ, C ∈ ℝᵠˣⁿ and D ∈ ℝᵠˣᵖ. In the special case of a single-input, single-output system, B is a column vector, C is a row vector and D is a scalar. Equation (5.3) is a system of linear first-order differential equations with input u, state x and output y. It is easy to show that given solutions x1(t) and x2(t) for this set of equations, they satisfy the linearity conditions.

We define xh(t) to be the solution with zero input (the homogeneous solution) and the solution xp(t) to be the solution with zero initial condition (a particular solution). Figure 5.1 illustrates how these two individual solutions can be superimposed to form the complete solution.

5.2 The Matrix Exponential

Since any solution x(t) can be written in terms of a solution z(t) with z(0) = T x0, it follows that it is sufficient to prove the theorem in the transformed coordinates. The solution z(t) can be written in terms of the elements of the matrix exponential. From equation (5.11), these elements all decay to zero for arbitrary z(0) if and only if Re λi < 0. Furthermore, if any λi has positive real part, then there exists an initial condition z(0) such that the corresponding solution increases without bound. Since we can scale this initial condition to be arbitrarily small, it follows that the equilibrium point is unstable if any eigenvalue has positive real part.

The existence of a
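The eigenvalue criterion above is easy to check numerically. A sketch using scipy's matrix exponential; the matrix here is an arbitrary stable example, not one taken from the text:

```python
import numpy as np
from scipy.linalg import expm

# an arbitrary stable example: eigenvalues -1 and -2, both in the left half-plane
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
print("eigenvalues:", np.linalg.eigvals(A))

# for a stable system the entries of exp(A t) decay to zero as t grows
for t in (0.0, 1.0, 5.0):
    print(f"||exp(A*{t})|| = {np.linalg.norm(expm(A * t)):.4f}")
```

Flipping the sign of one eigenvalue (e.g., replacing −2 by +2 in the characteristic polynomial) makes the norm grow without bound instead.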
canonical form allows us to prove many properties of linear systems by changing to a set of coordinates in which the A matrix is in Jordan form. We illustrate this in the following proposition, which follows along the same lines as the proof of Theorem 4.1.

Proposition 5.3. Suppose that the system dx/dt = Ax has no eigenvalues with strictly positive real part and one or more eigenvalues with zero real part. Then the system is stable if and only if the Jordan blocks corresponding to each eigenvalue with zero real part are scalar (1 × 1) blocks.

Proof. See Exercise 5.6b.

The following example illustrates the use of the Jordan form.

Example 5.4 (Linear model of a vectored thrust aircraft). Consider the dynamics of a vectored thrust aircraft such as that described in Example 2.9. Suppose that we choose u1 = u2 = 0, so that the dynamics of the system become

dz/dt = ( z4,  z5,  z6,  −g sin z3 − (c/m) z4,  g(cos z3 − 1) − (c/m) z5,  0 ),    (5.12)

where z = (x, y, θ, ẋ, ẏ, θ̇). The equilibrium points for the system are given by setting the velocities ẋ, ẏ and θ̇ to zero and choosing the remaining variables to satisfy

−g sin z3,e = 0,    g(cos z3,e − 1) = 0    ⟹    z3,e = θe = 0.

This corresponds to the upright orientation for the aircraft. Note that xe and ye are not specified: we can translate the system to a new (upright) position and still obtain an equilibrium point.

5.3 Input/Output Response

Figure 5.12: Active band-pass filter. The circuit diagram (a) shows an op amp with two RC filters arranged to provide a band-pass filter. The plot in (b) shows the gain and phase of the filter as a function of frequency. Note that the phase starts at −90° due to the negative gain of the operational amplifier.

The filter passes frequencies at about 10 rad/s but attenuates frequencies below 5 rad/s and above 50 rad/s. At 0.1 rad/s the input signal is attenuated by a factor of 20 (0.05). This type of circuit is called a band-pass filter since it passes through signals in the band of
frequencies between 5 and 50 rad/s.

As in the case of the step response, a number of standard properties are defined for frequency responses. The gain of a system at ω = 0 is called the zero frequency gain and corresponds to the ratio between a constant input and the steady output:

M0 = D − C A⁻¹ B.

The zero frequency gain is well defined only if A is invertible (and in particular if it does not have eigenvalues at 0). It is also important to note that the zero frequency gain is a relevant quantity only when a system is stable about the corresponding equilibrium point. So, if we apply a constant input u = r, then the corresponding equilibrium point xe = −A⁻¹ B r must be stable in order to talk about the zero frequency gain. (In electrical engineering, the zero frequency gain is often called the DC gain. DC stands for direct current and reflects the common separation of signals in electrical engineering into a direct current (zero frequency) term and an alternating current (AC) term.)

The bandwidth ωb of a system is the frequency range over which the gain has decreased by no more than a factor of 1/√2 from its reference value. For systems with nonzero, finite zero frequency gain, the bandwidth is the frequency where the gain has decreased by 1/√2 from the zero frequency gain. For systems that attenuate low frequencies but pass through high frequencies, the reference gain is taken as the high-frequency gain. For a system such as the band-pass filter in Example 5.8, bandwidth is defined as the range of frequencies where the gain is larger than 1/√2 of the gain at the center of the band; for Example 5.8 this would give a bandwidth of approximately 50 rad/s.

Figure 5.13: AFM frequency response. (a) A block diagram for the vertical dynamics of an atomic
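These frequency-domain quantities can be computed directly from a state space model. A sketch for an illustrative second-order low-pass system (not the AFM model discussed nearby): the zero frequency gain is M0 = D − CA⁻¹B, and the bandwidth is found by scanning the gain |C(iωI − A)⁻¹B + D|:

```python
import numpy as np

# illustrative second-order low-pass system (not a model from the text):
# natural frequency w0 = 1 rad/s, damping ratio zeta = 0.5, unit DC gain
w0, zeta = 1.0, 0.5
A = np.array([[0.0, 1.0], [-w0**2, -2 * zeta * w0]])
B = np.array([[0.0], [w0**2]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

def gain(w):
    """|C (iw I - A)^{-1} B + D|, the gain at frequency w."""
    return abs((C @ np.linalg.inv(1j * w * np.eye(2) - A) @ B + D)[0, 0])

# zero frequency gain M0 = D - C A^{-1} B
M0 = float((D - C @ np.linalg.inv(A) @ B)[0, 0])
print("M0 =", M0)

# bandwidth: lowest frequency where the gain drops below M0/sqrt(2)
ws = np.linspace(1e-3, 5.0, 5000)
gains = np.array([gain(w) for w in ws])
wb = ws[np.argmax(gains < M0 / np.sqrt(2))]
print(f"bandwidth approx {wb:.2f} rad/s")
```

For ζ = 0.5 the scan should land near 1.27 ω0, in agreement with the second-order formulas tabulated later in the text.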
force microscope in contact mode. The plot in (b) shows the gain and phase for the piezo stack. The response contains two frequency peaks at resonances of the system, along with an antiresonance at ω = 268 krad/s. The combination of a resonant peak followed by an antiresonance is common for systems with multiple lightly damped modes.

Another important property of the frequency response is the resonant peak Mr, the largest value of the frequency response, and the peak frequency ωmr, the frequency where the maximum occurs. These two properties describe the frequency of the sinusoidal input that produces the largest possible output and the gain at that frequency.

Example 5.9 (Atomic force microscope in contact mode). Consider the model for the vertical dynamics of the atomic force microscope in contact mode, discussed in Section 3.5. The basic dynamics are given by equation (3.23). The piezo stack can be modeled by a second-order system with undamped natural frequency ω3 and damping ratio ζ3. The dynamics are then described by the linear system

dx/dt = [ 0  1  0  0;
          −k/(m1+m2)  −c/(m1+m2)  1/m2  0;
          0  0  0  ω3;
          0  0  −ω3  −2ζ3ω3 ] x + [ 0; 0; 0; ω3 ] u,

y = (m2/(m1+m2)) [ m1k/(m1+m2)   m1c/(m1+m2)   1   0 ] x,

where the input signal is the drive signal to the amplifier and the output is the elongation of the piezo. The frequency response of the system is shown in Figure 5.13b. The zero frequency gain of the system is M0 = 1. There are two resonant poles with peaks Mr1 = 2.12 at ωmr1 = 238 krad/s and Mr2 = 4.29 at ωmr2 = 746 krad/s. The bandwidth of the system, defined as the lowest frequency where the gain is a factor of √2 less than the zero frequency gain, is ωb = 292 krad/s. There is also a dip in the gain, Md = 0.556 at ωmd = 268 krad/s. This dip, called an antiresonance, is associated with a dip in the phase and limits the performance when the system is controlled by simple controllers, as we will see in Chapter 10.

5.4 Linearization

As described at the beginning of the chapter, a common source of linear system
models is through the approximation of a nonlinear system by a linear one. These approximations are aimed at studying the local behavior of a system, where the nonlinear effects are expected to be small. In this section we discuss how to locally approximate a system by its linearization and what can be said about the approximation in terms of stability. We begin with an illustration of the basic concept using the cruise control example from Chapter 3.

Example 5.11 (Cruise control). The dynamics for the cruise control system were derived in Section 3.1 and have the form

m dv/dt = αn u T(αn v) − mg Cr sgn(v) − ½ ρ Cd A v² − mg sin θ,    (5.29)

where the first term on the right-hand side of the equation is the force generated by the engine and the remaining three terms are the rolling friction, aerodynamic drag and gravitational disturbance force. There is an equilibrium (ve, ue) when the force applied by the engine balances the disturbance forces.

To explore the behavior of the system near the equilibrium, we will linearize the system. A Taylor series expansion of equation (5.29) around the equilibrium gives

d(v − ve)/dt = a(v − ve) − bg(θ − θe) + b(u − ue) + higher-order terms,    (5.30)

where

a = (ue αn² T′(αn ve) − ρ Cd A ve)/m,    bg = g cos θe,    b = αn T(αn ve)/m.    (5.31)

Notice that the term corresponding to rolling friction disappears if v > 0. For a car in fourth gear with ve = 25 m/s, θe = 0 and the numerical values for the car from Section 3.1, the equilibrium value for the throttle is ue = 0.1687 and the parameters are a = −0.0101, b = 1.32 and bg = 9.8. This linear model describes how small perturbations in the velocity about the nominal speed evolve in time.

Figure 5.14 shows a simulation of a cruise controller with linear and nonlinear models; the differences between the linear and nonlinear models are small, and hence the linearized model provides a reasonable approximation.

Figure 5.14: Simulated response of a vehicle with PI cruise control as it climbs a hill
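The linearization coefficients in (5.31) can also be recovered numerically by differentiating the nonlinear force balance. The sketch below uses a torque curve T(ω) = Tm(1 − β(ω/ωm − 1)²) and parameter values that are guesses in the spirit of Section 3.1, not the exact ones used in the text, so the numbers will not reproduce a = −0.0101 and b = 1.32 exactly:

```python
import numpy as np

# Numerical Jacobian linearization of a cruise-control-style model.
# All parameter values below are illustrative assumptions.
m, rho, Cd, Af, Cr, g = 1600.0, 1.3, 0.32, 2.4, 0.01, 9.8
alpha_n, Tm, wm, beta = 12.0, 190.0, 420.0, 0.4

def torque(w):                       # assumed engine torque curve T(w)
    return Tm * (1 - beta * (w / wm - 1) ** 2)

def f(v, u, theta=0.0):              # dv/dt for the nonlinear model, v > 0
    F_engine = alpha_n * u * torque(alpha_n * v)
    F_drag = 0.5 * rho * Cd * Af * v ** 2
    return (F_engine - F_drag - m * g * Cr - m * g * np.sin(theta)) / m

# equilibrium throttle u_e at v_e = 25 m/s: solve f(v_e, u_e) = 0 for u_e
ve = 25.0
ue = (0.5 * rho * Cd * Af * ve ** 2 + m * g * Cr) / (alpha_n * torque(alpha_n * ve))

# linearization dv/dt ~ a (v - v_e) + b (u - u_e), by central differences
eps = 1e-4
a = (f(ve + eps, ue) - f(ve - eps, ue)) / (2 * eps)
b = (f(ve, ue + eps) - f(ve, ue - eps)) / (2 * eps)
print(f"ue = {ue:.3f}, a = {a:.4f}, b = {b:.3f}")
```

The sign of a is what matters: it is negative, so small velocity perturbations about the operating point decay on their own.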
with a slope of 4°. The solid line is the simulation based on a nonlinear model, and the dashed line shows the corresponding simulation using a linear model. The controller gains are kp = 0.5 and ki = 0.1.

Jacobian Linearization Around an Equilibrium Point

To proceed more formally, consider a single-input, single-output nonlinear system

dx/dt = f(x, u),    x ∈ ℝⁿ, u ∈ ℝ,
y = h(x, u),    y ∈ ℝ,    (5.32)

with an equilibrium point at x = xe, u = ue. Without loss of generality, we can assume that xe = 0 and ue = 0, although initially we will consider the general case to make the shift of coordinates explicit.

To study the local behavior of the system around the equilibrium point (xe, ue), we suppose that x − xe and u − ue are both small, so that nonlinear perturbations around this equilibrium point can be ignored compared with the (lower-order) linear terms. This is roughly the same type of argument that is used when we do small-angle approximations, replacing sin θ with θ and cos θ with 1 for θ near zero.

As we did in Chapter 4, we define a new set of state variables z, as well as inputs v and outputs w:

z = x − xe,    v = u − ue,    w = y − h(xe, ue).

These variables are all close to zero when we are near the equilibrium point, and so in these variables the nonlinear terms can be thought of as the higher-order terms in a Taylor series expansion of the relevant vector fields (assuming for now that these exist). Formally, the Jacobian linearization of the nonlinear system (5.32) is

dz/dt = Az + Bv,    w = Cz + Dv.    (5.33)

Exercises

5.9 (Keynesian economics) Consider the following simple Keynesian macroeconomic model in the form of a linear discrete-time system, discussed in Exercise 5.8:

[ C(t+1); I(t+1) ] = [ a  a; ab − a  ab ] [ C(t); I(t) ] + [ a; ab ] G(t),
Y(t) = C(t) + I(t) + G(t).

Determine the eigenvalues of the dynamics matrix. When are the magnitudes of the eigenvalues less than 1? Assume that the system is in equilibrium with constant values of consumption C, investment I and government expenditure G. Explore what happens when government expenditure increases by 10%. Use the values a = 0.25 and b = 0.5.

5.10 Consider a scalar system
dx/dt = 1 − x³ + u.

Compute the equilibrium points for the unforced system (u = 0) and use a Taylor series expansion around the equilibrium point to compute the linearization. Verify that this agrees with the linearization in equation (5.33).

5.11 (Transcriptional regulation) Consider the dynamics of a genetic circuit that implements self-repression: the protein produced by a gene is a repressor for that gene, thus restricting its own production. Using the models presented in Example 2.13, the dynamics for the system can be written as

dm/dt = α/(1 + kp²) + α0 − γm + u,    dp/dt = βm − δp,    (5.40)

where u is a disturbance term that affects RNA transcription and m, p ≥ 0. Find the equilibrium points for the system and use the linearized dynamics around each equilibrium point to determine the local stability of the equilibrium point and the step response of the system to a disturbance.

Chapter Six
State Feedback

Figure 6.1: The reachable set for a control system. The set R(x0, ≤ T), shown in (a), is the set of points reachable from x0 in time less than T. The phase portrait in (b) shows the dynamics for a double integrator, with the natural dynamics drawn as horizontal arrows and the control inputs drawn as vertical arrows. The set of achievable equilibrium points is the x axis. By setting the control inputs as a function of the state, it is possible to steer the system to the origin, as shown on the sample path.

The definition of reachability addresses whether it is possible to reach all points in the state space in a transient fashion. In many applications, the set of points that we are most interested in reaching is the set of equilibrium points of the system (since we can remain at those points once we get there). The set of all possible equilibria for constant controls is given by

E = { xe : A xe + B ue = 0 for some ue ∈ ℝ }.

This means that possible equilibria lie in a one- (or possibly higher-) dimensional subspace. If the matrix A is invertible, this subspace is spanned by A⁻¹B. The following example
provides some insight into the possibilities.

Example 6.1 (Double integrator). Consider a linear system consisting of a double integrator, whose dynamics are given by

dx1/dt = x2,    dx2/dt = u.

Figure 6.1b shows a phase portrait of the system. The open loop dynamics (u = 0) are shown as horizontal arrows, pointed to the right for x2 > 0 and to the left for x2 < 0. The control input is represented by a double-headed arrow in the vertical direction, corresponding to our ability to set the value of ẋ2. The set of equilibrium points E corresponds to the x1 axis, with ue = 0.

Suppose first that we wish to reach the origin from an initial condition (a, 0). We can directly move the state up and down in the phase plane, but we must rely on the natural dynamics to control the motion to the left and right. If a > 0, we can move toward the origin by first setting u < 0, which will cause x2 to become negative. Once x2 < 0, the value of x1 will begin to decrease and we will move to the left. After a while, we can set u to be positive, moving x2 back toward zero and slowing the motion in the x1 direction. By bringing x2 > 0, we can move the system state in the opposite direction.

Figure 6.2: Balance system. The Segway Personal Transporter, shown in (a), is an example of a balance system that uses torque applied to the wheels to keep the rider upright. A simplified diagram for a balance system is shown in (b). The system consists of a mass m on a rod of length l, connected by a pivot to a cart with mass M.

where μ = Mt Jt − m²l², Mt = M + m and Jt = J + ml². The reachability matrix is

Wr = [ 0      Jt/μ   0                 gl³m³/μ²;
       0      lm/μ   0                 gl²m²(M+m)/μ²;
       Jt/μ   0      gl³m³/μ²          0;
       lm/μ   0      gl²m²(M+m)/μ²     0 ].    (6.5)

The determinant of this matrix is

det Wr = g²l⁴m⁴/μ⁴ ≠ 0,

and we can conclude that the system is reachable. This implies that we can move the system from any initial state to any final state and, in particular, that we can always find an input to bring the system from an initial state to an equilibrium point.

It is useful to have an intuitive understanding
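Reachability computations like these are easy to automate. The sketch below builds Wr = [B AB ⋯ Aⁿ⁻¹B] and applies it to the double integrator of Example 6.1, and then to two identical copies of it driven by the same input, a simple stand-in for the unreachable two-pendulum construction discussed next:

```python
import numpy as np

def reachability_matrix(A, B):
    """Wr = [B, AB, A^2 B, ..., A^(n-1) B]."""
    n = A.shape[0]
    cols = [B]
    for _ in range(n - 1):
        cols.append(A @ cols[-1])
    return np.hstack(cols)

# double integrator: reachable (full rank)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
print("rank:", np.linalg.matrix_rank(reachability_matrix(A, B)))

# two identical copies driven by the same input: not reachable (rank 2 < 4)
A2 = np.block([[A, np.zeros((2, 2))], [np.zeros((2, 2)), A]])
B2 = np.vstack([B, B])
print("rank:", np.linalg.matrix_rank(reachability_matrix(A2, B2)))
```

The rank deficiency in the duplicated system reflects exactly the argument in the text: the difference between the two copies' states is a linear combination that no input can change.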
of the mechanisms that make a system unreachable. An example of such a system is given in Figure 6.3. The system consists of two identical systems with the same input. Clearly, we cannot separately cause the first and the second systems to do something different, since they have the same input. Hence we cannot reach arbitrary states, and so the system is not reachable (Exercise 6.3).

Figure 6.3: An unreachable system. The cart–pendulum system shown on the left has a single input that affects two pendula of equal length and mass. Since the forces affecting the two pendula are the same and their dynamics are identical, it is not possible to arbitrarily control the state of the system. The figure on the right is a block diagram representation of this situation.

More subtle mechanisms for nonreachability can also occur. For example, if there is a linear combination of states that always remains constant, then the system is not reachable. To see this, suppose that there exists a row vector H such that

0 = d/dt (Hx) = H(Ax + Bu),    for all u.

Then H is in the left null space of both A and B, and it follows that

H Wr = H [ B  AB  ⋯  Aⁿ⁻¹B ] = 0.

Hence the reachability matrix is not full rank. In this case, if we have an initial condition x0 and we wish to reach a state xf for which Hx0 ≠ Hxf, then since Hx(t) is constant, no input u can move the system from x0 to xf.

Reachable Canonical Form

As we have already seen in previous chapters, it is often convenient to change coordinates and write the dynamics of the system in the transformed coordinates z = Tx. One application of a change of coordinates is to convert a system into a canonical form in which it is easy to perform certain types of analysis. A linear state space system is in reachable canonical form if its dynamics are given by

dz/dt = [ −a1  −a2  −a3  ⋯  −an;
           1    0    0   ⋯   0;
           0    1    0   ⋯   0;
           ⋮                 ⋮;
           0    0   ⋯   1    0 ] z + [ 1; 0; 0; ⋮; 0 ] u,    (6.6)

y = [ b1  b2  b3  ⋯  bn ] z + d u.

A block diagram for a system in reachable canonical form is shown in Figure 6.4. We see that the coefficients that appear in the A and B
matrices show up directly in the block diagram. Furthermore, the output of the system is a simple linear combination of the outputs of the integration blocks. The characteristic polynomial for a system in reachable canonical form is given by

λ(s) = sⁿ + a1 sⁿ⁻¹ + ⋯ + an−1 s + an.    (6.7)

Transforming each element of the reachability matrix individually, we have

B̃ = TB,
ÃB̃ = TAT⁻¹TB = TAB,
Ã²B̃ = (TAT⁻¹)² TB = TAT⁻¹TAT⁻¹TB = TA²B,
⋮
ÃⁿB̃ = TAⁿB,

and hence the reachability matrix for the transformed system is

W̃r = [ B̃  ÃB̃  ⋯  Ãⁿ⁻¹B̃ ] = T Wr.    (6.8)

Since Wr is invertible, we can thus solve for the transformation T that takes the system into reachable canonical form:

T = W̃r Wr⁻¹.

The following example illustrates the approach.

Example 6.3 (Transformation to reachable form). Consider a simple two-dimensional system of the form

dx/dt = [ α  ω; −ω  α ] x + [ 0; 1 ] u.

We wish to find the transformation that converts the system into reachable canonical form:

Ã = [ −a1  −a2; 1  0 ],    B̃ = [ 1; 0 ].

The coefficients a1 and a2 can be determined from the characteristic polynomial for the original system:

λ(s) = det(sI − A) = s² − 2αs + α² + ω²    ⟹    a1 = −2α,  a2 = α² + ω².

The reachability matrix for each system is

Wr = [ 0  ω; 1  α ],    W̃r = [ 1  −a1; 0  1 ].

The transformation T becomes

T = W̃r Wr⁻¹ = [ (−a1 − α)/ω   1; 1/ω   0 ] = [ α/ω  1; 1/ω  0 ],

and hence the coordinates

(z1, z2) = T x = ( α x1/ω + x2,  x1/ω )

put the system in reachable canonical form. We summarize the results of this section in the following theorem.

6.2 Stabilization by State Feedback

Notice that kr is exactly the inverse of the zero frequency gain of the closed loop system. (The solution for D ≠ 0 is left as an exercise.)

Using the gains K and kr, we are thus able to design the dynamics of the closed loop system to satisfy our goal. To illustrate how to construct such a state feedback control law, we begin with a few examples that provide some basic intuition and insights.

Example 6.4 (Vehicle steering). In Example 5.12 we derived a normalized linear model for vehicle steering. The dynamics describing the lateral deviation were given by

A = [ 0  1; 0  0 ],    B = [ γ; 1 ],    C = [ 1  0 ],    D = 0.

The reachability matrix for the system is thus

Wr = [ B  AB ] = [ γ  1; 1  0 ].

The system is reachable
since det Wr = −1 ≠ 0.

We now want to design a controller that stabilizes the dynamics and tracks a given reference value r of the lateral position of the vehicle. To do this we introduce the feedback

u = −Kx + kr r = −k1 x1 − k2 x2 + kr r,

and the closed loop system becomes

dx/dt = (A − BK)x + B kr r = [ −γk1   1 − γk2; −k1   −k2 ] x + [ γkr; kr ] r,
y = Cx + Du = [ 1  0 ] x.    (6.14)

The closed loop system has the characteristic polynomial

det(sI − A + BK) = det [ s + γk1   γk2 − 1; k1   s + k2 ] = s² + (γk1 + k2)s + k1.

Suppose that we would like to use feedback to design the dynamics of the system to have the characteristic polynomial

p(s) = s² + 2ζc ωc s + ωc².

Comparing this polynomial with the characteristic polynomial of the closed loop system, we see that the feedback gains should be chosen as

k1 = ωc²,    k2 = 2ζc ωc − γωc².

Equation (6.13) gives kr = k1 = ωc², and the control law can be written as

u = k1(r − x1) − k2 x2 = ωc²(r − x1) − (2ζc ωc − γωc²) x2.

Figure 6.6: State feedback control of a steering system. Step responses obtained with controllers designed with ζc = 0.7 and ωc = 0.5, 1 and 2 are shown in (a). Notice that response speed increases with increasing ωc, but that large ωc also give large initial control actions. Step responses obtained with a controller designed with ωc = 1 and ζc = 0.5, 0.7 and 1 are shown in (b).

The step responses for the closed loop system for different values of the design parameters are shown in Figure 6.6. The effect of ωc is shown in Figure 6.6a, which shows that the response speed increases with increasing ωc. The responses for ωc = 0.5 and 1 have reasonable overshoot. The settling time is about 15 car lengths for ωc = 0.5 (beyond the end of the plot) and decreases to about 6 car lengths for ωc = 1. The control signal δ is large initially and goes to zero as time increases because the closed
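The steering gains just derived are easy to verify numerically. In the sketch below, γ = 0.5 is only an illustrative value for the model parameter, while ζc = 0.7 and ωc = 1 are among the design values used in Figure 6.6:

```python
import numpy as np

# vehicle steering example: k1 = wc^2, k2 = 2*zc*wc - gamma*wc^2 (from the text)
gamma, zc, wc = 0.5, 0.7, 1.0   # gamma here is an assumed, illustrative value

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[gamma], [1.0]])
K = np.array([[wc**2, 2 * zc * wc - gamma * wc**2]])
kr = wc**2

eigs = np.linalg.eigvals(A - B @ K)
print("closed loop eigenvalues:", eigs)

# they should be the roots of the desired polynomial s^2 + 2*zc*wc*s + wc^2
desired = np.roots([1.0, 2 * zc * wc, wc**2])
print("desired eigenvalues:   ", desired)
```

Note that the eigenvalues of A − BK do not depend on γ: the γ terms cancel in the characteristic polynomial, exactly as the hand calculation shows.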
loop dynamics have an integrator. The initial value of the control signal is u(0) = kr r = ωc² r, and thus the achievable response time is limited by the available actuator signal. Notice in particular the dramatic increase in control signal when ωc changes from 1 to 2. The effect of ζc is shown in Figure 6.6b: the response speed and the overshoot increase with decreasing damping. Using these plots, we conclude that reasonable values of the design parameters are to have ωc in the range of 0.5 to 1 and ζc ≈ 0.7.

The example of the vehicle steering system illustrates how state feedback can be used to set the eigenvalues of a closed loop system to arbitrary values.

State Feedback for Systems in Reachable Canonical Form

The reachable canonical form has the property that the parameters of the system are the coefficients of the characteristic polynomial. It is therefore natural to consider systems in this form when solving the eigenvalue assignment problem. Consider a system in reachable canonical form, i.e.,

dz/dt = Ãz + B̃u = [ −a1  −a2  −a3  ⋯  −an;
                      1    0    0   ⋯   0;
                      0    1    0   ⋯   0;
                      ⋮                 ⋮;
                      0    0   ⋯   1    0 ] z + [ 1; 0; ⋮; 0 ] u,

y = C̃z = [ b1  b2  ⋯  bn ] z.    (6.15)

It follows from (6.7) that the open loop system has the characteristic polynomial

det(sI − A) = sⁿ + a1 sⁿ⁻¹ + ⋯ + an−1 s + an.

Before making a formal analysis, we can gain some insight by investigating the block diagram of the system shown in Figure 6.4. The characteristic polynomial is given by the parameters ak in the figure. Notice that the parameter ak can be changed by feedback from state zk to the input u. It is thus straightforward to change the coefficients of the characteristic polynomial by state feedback.

Returning to equations, introducing the control law

u = −K̃z + kr r = −k̃1 z1 − k̃2 z2 − ⋯ − k̃n zn + kr r,    (6.16)

the closed loop system becomes

dz/dt = [ −a1−k̃1  −a2−k̃2  −a3−k̃3  ⋯  −an−k̃n;
            1       0        0      ⋯    0;
            0       1        0      ⋯    0;
            ⋮                            ⋮;
            0       0       ⋯   1        0 ] z + [ kr; 0; ⋮; 0 ] r,

y = [ b1  b2  ⋯  bn ] z.    (6.17)

The feedback changes the elements of the first row of the A matrix, which corresponds to the parameters of the characteristic polynomial. The closed loop system thus has the characteristic polynomial

sⁿ + (a1 + k̃1) sⁿ⁻¹ + (a2 + k̃2) sⁿ⁻² + ⋯ + (an−1 + k̃n−1) s + (an + k̃n).

Requiring this polynomial to be equal to the desired closed loop polynomial

p(s) = sⁿ + p1 sⁿ⁻¹ + ⋯ + pn−1 s + pn,

we find that the controller gains should be chosen as

k̃1 = p1 − a1,    k̃2 = p2 − a2,    …,    k̃n = pn − an.

This feedback simply replaces the parameters ai in the system (6.17) by pi. The feedback gain for a system in reachable canonical form is thus

K̃ = ( p1 − a1   p2 − a2   ⋯   pn − an ).    (6.18)

To have zero frequency gain equal to unity, the parameter kr should be chosen as

kr = (an + k̃n)/bn = pn/bn.    (6.19)

Notice that it is essential to know the precise values of the parameters an and bn in order to obtain the correct zero frequency gain. The zero frequency gain is thus obtained by precise calibration. This is very different from obtaining the correct steady-state value by integral action, which we shall see in later sections.

Eigenvalue Assignment

We have seen through the examples how feedback can be used to design the dynamics of a system through assignment of its eigenvalues. To solve the problem in the general case, we simply change coordinates so that the system is in reachable canonical form. Consider the system

dx/dt = Ax + Bu,    y = Cx + Du.    (6.20)

We can change the coordinates by a linear transformation z = Tx so that the transformed system is in reachable canonical form (6.15). For such a system the feedback is given by equation (6.16), where the coefficients are given by equation (6.18). Transforming back to the original coordinates gives the feedback

u = −K̃z + kr r = −K̃Tx + kr r.

The results obtained can be summarized as follows.

Theorem 6.3 (Eigenvalue assignment by state feedback). Consider the system given by equation (6.20), with one input and one output. Let λ(s) = sⁿ + a1 sⁿ⁻¹ + ⋯ + an−1 s + an be the characteristic polynomial of A. If the system is reachable, then there exists a feedback

u = −Kx + kr r

that gives a closed loop system with the characteristic polynomial

p(s) = sⁿ + p1 sⁿ⁻¹ + ⋯ + pn−1 s + pn

and unity zero frequency gain between r and y. The feedback gain is given by

K = K̃T = ( p1 − a1   p2 − a2   ⋯   pn − an ) W̃r Wr⁻¹,    kr = pn/an,    (6.21)

where ai are the coefficients of
the characteristic polynomial of the matrix A, and W̃r and Wr are the reachability matrices of the transformed system (6.15) and the original system, respectively.

Linearizing the predator–prey model around the equilibrium point He = 20.6, Le = 29.5 yields the linear dynamical system

d/dt [ z1; z2 ] = [ 0.13  0.93; −0.57  0 ] [ z1; z2 ] + [ 1.72; 0 ] v,    w = [ 0  1 ] [ z1; z2 ],

where z1 = L − Le, z2 = H − He and v = u. It is easy to check that the system is reachable around the equilibrium (z, v) = (0, 0), and hence we can assign the eigenvalues of the system using state feedback.

Determining the eigenvalues of the closed loop system requires balancing the ability to modulate the input against the natural dynamics of the system. This can be done by the process of trial and error or by using some of the more systematic techniques discussed in the remainder of the text. For now, we simply choose the desired closed loop eigenvalues to be at λ = {−0.1, −0.2}. We can then solve for the feedback gains using the techniques described earlier, which results in

K = ( 0.025   0.052 ).

Finally, we solve for the reference gain kr using equation (6.13), obtaining kr = 0.002. Putting these steps together, our control law becomes

v = −Kz + kr r.

In order to implement the control law, we must rewrite it using the original coordinates for the system, yielding

u = ue − K(x − xe) + kr(r − ye)
  = −0.025(H − 20.6) − 0.052(L − 29.5) + 0.002(r − 29.5).

This rule tells us how much we should modulate rh as a function of the current number of lynxes and hares in the ecosystem. Figure 6.7a shows a simulation of the resulting closed loop system using the parameters defined above and starting with an initial population of 15 hares and 20 lynxes. Note that the system quickly stabilizes the population of lynxes at the reference value (L = 30). A phase portrait of the system is given in Figure 6.7b, showing how other initial conditions converge to the stabilized equilibrium population. Notice that the dynamics are very different from the natural dynamics shown in Figure 3.20.

The results of this section show that we can use state feedback to design the dynamics of a system, under the strong assumption that we can measure all of the states. We shall address the availability of the states in the next chapter, when
we consider output feedback and state estimation. In addition, Theorem 6.3, which states that the eigenvalues can be assigned to arbitrary locations, is also highly idealized and assumes that the dynamics of the process are known to high precision. The robustness of state feedback combined with state estimators is considered in Chapter 12, after we have developed the requisite tools.

Figure 6.9: Frequency response of a second-order system (6.23). (a) Eigenvalues as a function of $\zeta$. (b) Frequency response as a function of $\zeta$. The upper curve shows the gain ratio $M$, and the lower curve shows the phase shift $\theta$. For small $\zeta$ there is a large peak in the magnitude of the frequency response and a rapid change in phase centered at $\omega = \omega_0$. As $\zeta$ is increased, the magnitude of the peak drops and the phase changes more smoothly between $0$ and $-180°$.

The frequency response can be computed explicitly and is given by
$$Me^{j\theta} = \frac{k\omega_0^2}{(i\omega)^2 + 2\zeta\omega_0(i\omega) + \omega_0^2} = \frac{k\omega_0^2}{\omega_0^2 - \omega^2 + 2i\zeta\omega_0\omega}.$$
A graphical illustration of the frequency response is given in Figure 6.9. Notice the resonant peak that increases with decreasing $\zeta$. The peak is often characterized by its $Q$-value, defined as $Q = 1/(2\zeta)$. The properties of the frequency response for a second-order system are summarized in Table 6.2.

Table 6.2: Properties of the frequency response for a second-order system with $0 < \zeta < 1$.

  Property                           | ζ = 0.1         | ζ = 0.5          | ζ = 1/√2
  Zero frequency gain $M_0$          | $k$             | $k$              | $k$
  Bandwidth $\omega_b$               | $1.54\,\omega_0$| $1.27\,\omega_0$ | $\omega_0$
  Resonant peak gain $M_r$           | $5.03\,k$       | $1.15\,k$        | $k$
  Resonant frequency $\omega_{mr}$   | $\omega_0$      | $0.707\,\omega_0$| $0$

Example 6.6 (Drug administration). To illustrate the use of these formulas, consider the two-compartment model for drug administration described in Section 3.6. The dynamics of the system are
$$\frac{dc}{dt} = \begin{pmatrix} -k_0 - k_1 & k_1 \\ k_2 & -k_2 \end{pmatrix}c + \begin{pmatrix} b_0 \\ 0 \end{pmatrix}u, \qquad y = \begin{pmatrix} 0 & 1 \end{pmatrix}x,$$
where $c_1$ and $c_2$ are the concentrations of the drug in each compartment, $k_i$, $i = 0, \dots, 2$, and $b_0$ are parameters of the system, and $u$ is the flow rate of the drug into

6.3 STATE FEEDBACK DESIGN
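The entries of Table 6.2 follow directly from the magnitude of the frequency response above; the sketch below (Python/NumPy, an assumed tooling choice) recomputes the resonant peak, resonant frequency and bandwidth numerically for the three damping ratios:

```python
import numpy as np

# Gain M = |k*w0^2 / (w0^2 - w^2 + 2i*zeta*w0*w)| of the second-order
# frequency response, evaluated on a fine grid (k = w0 = 1) to recover
# the Table 6.2 entries numerically.
k, w0 = 1.0, 1.0
results = {}
for zeta in (0.1, 0.5, 1 / np.sqrt(2)):
    w = np.linspace(1e-4, 3 * w0, 300001)
    M = np.abs(k * w0**2 / (w0**2 - w**2 + 2j * zeta * w0 * w))
    Mr, wmr = M.max(), w[M.argmax()]                    # resonant peak gain and frequency
    wb = w[np.nonzero(M >= M[0] / np.sqrt(2))[0][-1]]   # bandwidth: last freq with gain >= M0/sqrt(2)
    results[round(zeta, 3)] = (Mr, wmr, wb)
    print(f"zeta={zeta:.3f}: Mr={Mr:.2f}k  wmr={wmr:.2f}w0  wb={wb:.2f}w0")
```

For $\zeta < 1/\sqrt 2$ the analytic values are $M_r = k/(2\zeta\sqrt{1-\zeta^2})$ at $\omega_{mr} = \omega_0\sqrt{1 - 2\zeta^2}$, which the grid search reproduces to plotting accuracy.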
Figure 6.10: Open loop versus closed loop drug administration. Comparison between drug administration using a sequence of doses versus continuously monitoring the concentrations and adjusting the dosage continuously. In each case the concentration is approximately maintained at the desired level, but the closed loop system has substantially less variability in drug concentration.

compartment 1, and $y$ is the concentration of the drug in compartment 2. We assume that we can measure the concentrations of the drug in each compartment, and we would like to design a feedback law to maintain the output at a given reference value $r$.

We choose $\zeta = 0.9$ to minimize the overshoot and choose the rise time to be $T_r = 10$ min. Using the formulas in Table 6.1, this gives a value for $\omega_0 = 0.22$. We can now compute the gain to place the eigenvalues at this location. Setting $u = -Kx + k_r r$, the closed loop eigenvalues for the system are placed at $\lambda = -0.198 \pm 0.0959i$. Choosing $k_1 = 0.2027$ and $k_2 = 0.2005$ gives the desired closed loop behavior. Equation (6.13) gives the reference gain $k_r = 0.0645$. The response of the controller is shown in Figure 6.10 and compared with an open loop strategy involving administering periodic doses of the drug.

Higher-Order Systems

Our emphasis so far has considered only second-order systems. For higher-order systems, eigenvalue assignment is considerably more difficult, especially when trying to account for the many trade-offs that are present in a feedback design. One of the other reasons why second-order systems play such an important role in feedback systems is that even for more complicated systems the response is often characterized by the dominant eigenvalues. To define these more precisely, consider a system with eigenvalues $\lambda_j$, $j = 1, \dots, n$. We define the damping ratio for a complex eigenvalue $\lambda$ to be
$$\zeta = \frac{-\operatorname{Re}\lambda}{|\lambda|}.$$
We say that a complex conjugate pair of eigenvalues $\lambda$, $\lambda^*$ is a dominant pair if it has the lowest damping ratio compared with all other eigenvalues of the system.
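The damping-ratio definition is easy to apply numerically; the following sketch (Python/NumPy, assumed tooling; the spectrum is hypothetical, not from the text) computes $\zeta = -\operatorname{Re}\lambda/|\lambda|$ for each eigenvalue and picks out the dominant pair:

```python
import numpy as np

# Hypothetical closed loop spectrum: two complex pairs and one fast real eigenvalue.
eigs = np.array([-0.2 + 1j, -0.2 - 1j, -5.0 + 0j, -1.0 + 0.5j, -1.0 - 0.5j])

# Damping ratio of each eigenvalue: zeta = -Re(lambda) / |lambda|.
zeta = -eigs.real / np.abs(eigs)

# The dominant pair is the one with the lowest damping ratio.
dominant = eigs[np.argmin(zeta)]
print(dominant, zeta.min())  # -0.2 +/- 1j is dominant (zeta ~ 0.196)
```

Note that the fast real eigenvalue at $-5$ has $\zeta = 1$, so despite its large magnitude it is not dominant; it is the lightly damped slow pair that shapes the step response.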
Assuming that a system is stable, the dominant pair of eigenvalues tends to be the most important element of the response. To see this, assume that we have a system in Jordan form with a simple Jordan block corresponding to the dominant pair of eigenvalues:
$$\frac{dz}{dt} = \begin{pmatrix} \lambda & & & & \\ & \lambda^* & & & \\ & & J_2 & & \\ & & & \ddots & \\ & & & & J_k \end{pmatrix}z + Bu, \qquad y = Cz.$$
(Note that the state $z$ may be complex because of the Jordan transformation.) The response of the system will be a linear combination of the responses from each of the individual Jordan subsystems. As we see from Figure 6.8, for $\zeta < 1$ the subsystem with the slowest response is precisely the one with the smallest damping ratio. Hence, when we add the responses from each of the individual subsystems, it is the dominant pair of eigenvalues that will be the primary factor after the initial transients due to the other terms in the solution die out. While this simple analysis does not always hold (e.g., if some nondominant terms have larger coefficients because of the particular form of the system), it is often the case that the dominant eigenvalues determine the step response of the system.

The only formal requirement for eigenvalue assignment is that the system be reachable. In practice there are many other constraints because the selection of eigenvalues has a strong effect on the magnitude and rate of change of the control signal. Large eigenvalues will in general require large control signals as well as fast changes of the signals. The capability of the actuators will therefore impose constraints on the possible locations of closed loop eigenvalues. These issues will be discussed in depth in Chapters 11 and 12. We illustrate some of the main ideas using the balance system as an example.

Example 6.7 (Balance system). Consider the problem of stabilizing a balance system, whose dynamics were given in Example 6.2. The dynamics are given by
$$A = \begin{pmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & m^2l^2g/\mu & -cJ_t/\mu & -\gamma lm/\mu \\ 0 & M_t mgl/\mu & -clm/\mu & -\gamma M_t/\mu \end{pmatrix}, \qquad B = \begin{pmatrix} 0 \\ 0 \\ J_t/\mu \\ lm/\mu \end{pmatrix},$$
where $M_t = M + m$, $J_t = J + ml^2$, $\mu = M_tJ_t - m^2l^2$, and we have left $c$ and $\gamma$

By choosing the matrices $Q_x$ and $Q_u$, we can balance the rate of convergence of the solutions with the cost of the control. The solution to the LQR problem is given by a linear control law of the form
$$u = -Q_u^{-1}B^TPx,$$
where $P \in \mathbb{R}^{n\times n}$ is a positive definite, symmetric matrix that satisfies the equation
$$PA + A^TP - PBQ_u^{-1}B^TP + Q_x = 0. \tag{6.27}$$
Equation (6.27) is called the algebraic Riccati equation and can be solved numerically (e.g., using the lqr command in MATLAB).

One of the key questions in LQR design is how to choose the weights $Q_x$ and $Q_u$. To guarantee that a solution exists, we must have $Q_x \geq 0$ and $Q_u > 0$. In addition, there are certain "observability" conditions on $Q_x$ that limit its choice. Here we assume $Q_x > 0$ to ensure that solutions to the algebraic Riccati equation always exist.

To choose specific values for the cost function weights $Q_x$ and $Q_u$, we must use our knowledge of the system we are trying to control. A particularly simple choice is to use diagonal weights
$$Q_x = \begin{pmatrix} q_1 & & 0 \\ & \ddots & \\ 0 & & q_n \end{pmatrix}, \qquad Q_u = \begin{pmatrix} \rho_1 & & 0 \\ & \ddots & \\ 0 & & \rho_n \end{pmatrix}.$$
For this choice of $Q_x$ and $Q_u$, the individual diagonal elements describe how much each state and input (squared) should contribute to the overall cost. Hence we can take states that should remain small and attach higher weight values to them. Similarly, we can penalize an input versus the states and other inputs through choice of the corresponding input weight $\rho$.

Example 6.8 (Vectored thrust aircraft). Consider the original dynamics of the system (2.26), written in state space form as
$$\frac{dz}{dt} = \begin{pmatrix} z_4 \\ z_5 \\ z_6 \\ -g\sin z_3 - \frac{c}{m}z_4 \\ g(\cos z_3 - 1) - \frac{c}{m}z_5 \\ 0 \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{m}\cos z_3\,F_1 - \frac{1}{m}\sin z_3\,F_2 \\ \frac{1}{m}\sin z_3\,F_1 + \frac{1}{m}\cos z_3\,F_2 \\ \frac{r}{J}F_1 \end{pmatrix}$$
(see Example 5.4). The system parameters are
$$m = 4\text{ kg}, \quad J = 0.0475\text{ kg m}^2, \quad r = 0.25\text{ m}, \quad g = 9.8\text{ m/s}^2, \quad c = 0.05\text{ N s/m},$$
which corresponds to a scaled model of the system. The equilibrium point for the system is given by $F_1 = 0$, $F_2 = mg$ and $z_e = (x_e, y_e, 0, 0, 0, 0)$. To derive the linearized model near an equilibrium
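The algebraic Riccati equation (6.27) can be solved numerically outside MATLAB as well; below is a minimal sketch using SciPy's `solve_continuous_are` (an assumed equivalent of the `lqr` command) for a hypothetical double integrator with identity weights, a case simple enough to check against the analytic answer:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical double integrator with diagonal weights Qx = I, Qu = 1.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Qx = np.eye(2)
Qu = np.array([[1.0]])

# Solve the algebraic Riccati equation (6.27):
#   P A + A^T P - P B Qu^-1 B^T P + Qx = 0.
P = solve_continuous_are(A, B, Qx, Qu)

# Optimal state feedback u = -Qu^-1 B^T P x.
K = np.linalg.inv(Qu) @ B.T @ P
print(K)  # analytic solution for this system: K = [1, sqrt(3)]
```

Working the $2 \times 2$ Riccati equation by hand gives $P = \begin{pmatrix} \sqrt 3 & 1 \\ 1 & \sqrt 3 \end{pmatrix}$ and hence $K = (1 \;\; \sqrt 3)$, so the numerical solver can be validated directly.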
Figure 6.14: Web server with LQR control. The plot in (a) shows the state of the system under a change in external load applied at $k = 10$ ms. The corresponding web server parameters (system inputs), KeepAlive and MaxClients, are shown in (b). The controller is able to reduce the effect of the disturbance by approximately 40%.

6.4 Integral Action

Controllers based on state feedback achieve the correct steady-state response to command signals by careful calibration of the gain $k_r$. However, one of the primary uses of feedback is to allow good performance in the presence of uncertainty, and hence requiring that we have an exact model of the process is undesirable. An alternative to calibration is to make use of integral feedback, in which the controller uses an integrator to provide zero steady-state error. The basic concept of integral feedback was given in Section 1.5 and in Section 3.1; here we provide a more complete description and analysis.

The basic approach in integral feedback is to create a state within the controller that computes the integral of the error signal, which is then used as a feedback term. We do this by augmenting the description of the system with a new state $z$:
$$\frac{d}{dt}\begin{pmatrix} x \\ z \end{pmatrix} = \begin{pmatrix} Ax + Bu \\ y - r \end{pmatrix} = \begin{pmatrix} Ax + Bu \\ Cx - r \end{pmatrix}. \tag{6.28}$$
The state $z$ is seen to be the integral of the difference between the actual output $y$ and the desired output $r$. Note that if we find a compensator that stabilizes the system, then we will necessarily have $\dot z = 0$ in steady state and hence $y = r$ in steady state.

Given the augmented system, we design a state space controller in the usual fashion, with a control law of the form
$$u = -Kx - k_iz + k_rr, \tag{6.29}$$
where $K$ is the usual state feedback term, $k_i$ is the integral term and $k_r$ is used to set the nominal input for the desired steady state. The resulting equilibrium point for the system is given as
$$x_e = -(A - BK)^{-1}B(k_rr - k_iz_e).$$
Note that the value of $z_e$ is not specified but rather will automatically settle to the value that makes $\dot z = y - r = 0$, which implies that at equilibrium the output will equal the reference value.
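The augmentation in equation (6.28) is mechanical to set up; the sketch below (Python with NumPy/SciPy, an assumed tooling choice) builds the augmented matrices for a hypothetical double integrator and assigns the eigenvalues of the combined state-plus-integrator system in one step:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical process: double integrator with position output.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])
n = A.shape[0]

# Augmented system (6.28): extra state z with dz/dt = y - r = C x - r.
# (The reference r enters affinely and does not change the dynamics matrix.)
Aaug = np.block([[A, np.zeros((n, 1))],
                 [C, np.zeros((1, 1))]])
Baug = np.vstack([B, np.zeros((1, 1))])

# Design (K, ki) together by eigenvalue assignment on the augmented system.
fsf = place_poles(Aaug, Baug, [-1.0, -2.0, -3.0])
Kaug = fsf.gain_matrix          # first n entries are K, last entry is ki
eigs = np.linalg.eigvals(Aaug - Baug @ Kaug)
print(np.sort(eigs.real))       # -3, -2, -1
```

The resulting gain vector splits as $(K, k_i)$ for use in the control law (6.29).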
This holds independently of the specific values of $A$, $B$ and $K$.

Figure 6.15: Velocity and throttle for a car with cruise control based on proportional (dashed) and PI control (solid). The PI controller is able to adjust the throttle to compensate for the effect of the hill and maintain the speed at the reference value of $v_r = 25$ m/s.

The resulting controller stabilizes the system and hence brings $\dot z = y - v_r$ to zero, resulting in perfect tracking. Notice that even if we have a small error in the values of the parameters defining the system, as long as the closed loop eigenvalues are still stable, the tracking error will approach zero. Thus the exact calibration required in our previous approach (using $k_r$) is not needed here. Indeed, we can even choose $k_r = 0$ and let the feedback controller do all of the work.

Integral feedback can also be used to compensate for constant disturbances. Figure 6.15 shows the results of a simulation in which the car encounters a hill with angle $\theta = 4°$ at $t = 8$ s. The stability of the system is not affected by this external disturbance, and so we once again see that the car's velocity converges to the reference speed. This ability to handle constant disturbances is a general property of controllers with integral feedback (see Exercise 6.4).

6.5 Further Reading

The importance of state models and state feedback was discussed in the seminal paper by Kalman [113], where the state feedback gain was obtained by solving an optimization problem that minimized a quadratic loss function. The notions of reachability and observability (Chapter 7) are also due to Kalman [115]; see also [82, 118]. Kalman defines controllability and reachability as the ability to reach the origin and an arbitrary state, respectively [117]. We note that in most textbooks the term "controllability" is used instead of
"reachability," but we prefer the latter term because it is more descriptive of the fundamental property of being able to reach arbitrary states. Most undergraduate textbooks on control contain material on state space systems, including, for example, Franklin, Powell and Emami-Naeini [79] and Ogata [162]. Friedland's textbook [80] covers the material in the previous, current and next chapter in considerable detail, including the topic of optimal control.

Exercises

6.1 (Double integrator). Consider the double integrator. Find a piecewise constant control strategy that drives the system from the origin to the state $x = (1, 1)$.

6.2 (Reachability from nonzero initial state). Extend the argument in Section 6.1 to show that if a system is reachable from an initial state of zero, it is reachable from a nonzero initial state.

6.3 (Unreachable systems). Consider the system shown in Figure 6.3. Write the dynamics of the two systems as
$$\frac{dx}{dt} = Ax + Bu, \qquad \frac{dz}{dt} = Az + Bu.$$
If $x$ and $z$ have the same initial condition, they will always have the same state regardless of the input that is applied. Show that this violates the definition of reachability, and further show that the reachability matrix $W_r$ is not full rank.

6.4 (Integral feedback for rejecting constant disturbances). Consider a linear system of the form
$$\frac{dx}{dt} = Ax + Bu + Fd,$$
where $d$ is a disturbance that enters the system through a disturbance vector $F \in \mathbb{R}^n$. Show that integral feedback can be used to compensate for a constant disturbance by giving zero steady-state error, even when $d \neq 0$.

6.5 (Rear-steered bicycle). A simple model for a bicycle was given by equation (3.5) in Section 3.2. A model for a bicycle with rear-wheel steering is obtained by reversing the sign of the velocity in the model. Determine the conditions under which this system is reachable, and explain any situations in which the system is not reachable.

6.6 (Characteristic polynomial for reachable canonical form). Show that the characteristic polynomial for a system in reachable canonical form is given by equation (6.7), and that
$$\frac{d^nz_k}{dt^n} + a_1\frac{d^{n-1}z_k}{dt^{n-1}} + \dots + a_{n-1}\frac{dz_k}{dt} + a_nz_k = \frac{d^{n-k}u}{dt^{n-k}},$$
where $z_k$ is the $k$th state.

6.7 (Reachability matrix for reachable canonical form). Consider a system in reachable canonical form. Show that the inverse of the reachability matrix is given by
$$\tilde W_r^{-1} = \begin{pmatrix} 1 & a_1 & a_2 & \cdots & a_n \\ 0 & 1 & a_1 & \cdots & a_{n-1} \\ 0 & 0 & 1 & \cdots & a_{n-2} \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \end{pmatrix}.$$

6.8 (Non-maintainable equilibria). Consider the normalized model of a pendulum on a cart
$$\frac{d^2x}{dt^2} = u, \qquad \frac{d^2\theta}{dt^2} = \theta + u,$$
where $x$ is the cart position and $\theta$ is the pendulum angle. Can the equilibrium $\theta = \theta_0$ for $\theta_0 \neq 0$ be maintained?

6.9 (Eigenvalue assignment for unreachable system). Consider the system
$$\frac{dx}{dt} = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}x + \begin{pmatrix} 1 \\ 0 \end{pmatrix}u, \qquad y = \begin{pmatrix} 1 & 0 \end{pmatrix}x,$$
with the control law $u = -k_1x_1 - k_2x_2 + k_rr$. Show that the eigenvalues of the system cannot be assigned to arbitrary values.

6.10 (Cayley–Hamilton theorem). Let $A \in \mathbb{R}^{n\times n}$ be a matrix with characteristic polynomial $\lambda(s) = \det(sI - A) = s^n + a_1s^{n-1} + \dots + a_{n-1}s + a_n$. Show that the matrix satisfies
$$\lambda(A) = A^n + a_1A^{n-1} + \dots + a_{n-1}A + a_nI = 0,$$
and use this to show that $A^k$, $k \geq n$, can be rewritten in terms of powers of $A$ of order less than $n$.

6.11 (Motor drive). Consider the normalized model of the motor drive in Exercise 2.10. Using the following normalized parameters,
$$J_1 = 10/9, \quad J_2 = 10, \quad c = 0.1, \quad k = 1, \quad k_I = 1,$$
verify that the eigenvalues of the open loop system are $0, 0, -0.05 \pm i$. Design a state feedback that gives a closed loop system with eigenvalues $-2$, $-1$ and $-1 \pm i$. This choice implies that the oscillatory eigenvalues will be well damped and that the eigenvalues at the origin are replaced by eigenvalues on the negative real axis. Simulate the responses of the closed loop system to step changes in the command signal and a step change in a disturbance torque on the second rotor.

6.12 (Whipple bicycle model). Consider the Whipple bicycle model given by equation (3.7) in Section 3.2. The model is unstable at the velocity $v = 5$ m/s, and the open loop eigenvalues are $-1.84$, $-14.29$ and $1.30 \pm 4.60i$. Find the gains of a controller that stabilizes the bicycle and gives closed loop eigenvalues at $-2$, $-10$ and $-1 \pm i$. Simulate the response of the system to a step change in the steering reference of 0.002 rad.

6.13 (Atomic force microscope).
Consider the model of an AFM in contact mode given in Example 5.9:
$$\frac{dx}{dt} = \begin{pmatrix} 0 & 1 & 0 & 0 \\ -k/(m_1 + m_2) & -c/(m_1 + m_2) & 1/m_2 & 0 \\ 0 & 0 & 0 & \omega_3 \\ 0 & 0 & -\omega_3 & -2\zeta_3\omega_3 \end{pmatrix}x + \begin{pmatrix} 0 \\ 0 \\ 0 \\ \omega_3^2 \end{pmatrix}u,$$
$$y = \frac{m_2}{m_1 + m_2}\begin{pmatrix} \dfrac{m_1k}{m_1 + m_2} & \dfrac{m_1c}{m_1 + m_2} & 1 & 0 \end{pmatrix}x.$$
Use the MATLAB script afm_data.m from the companion web site to generate the system matrices.

(a) Compute the reachability matrix of the system and determine its rank. Scale the model by using milliseconds instead of seconds as time units. Repeat the calculation of the reachability matrix and its rank.

(b) Find a state feedback controller that gives a closed loop system with complex poles having damping ratio 0.707. Use the scaled model for the computations.

(c) Compute state feedback gains using linear quadratic control. Experiment by using different weights. Compute the gains for $q_1 = q_2 = 0$, $q_3 = q_4 = 1$, $R = 1$ and $\rho = 0.1$, and explain the result. Choose $q_1 = q_2 = q_3 = q_4 = \rho_1 = 1$ and explore what happens to the feedback gains and closed loop eigenvalues when you change $\rho$. Use the scaled system for this computation.

6.14. Consider the second-order system
$$\frac{d^2y}{dt^2} + 0.5\frac{dy}{dt} + y = a\frac{du}{dt} + u.$$
Let the initial conditions be zero.

(a) Show that the initial slope of the unit step response is $a$. Discuss what it means when $a < 0$.

(b) Show that there are points on the unit step response that are invariant with $a$. Discuss qualitatively the effect of the parameter $a$ on the solution.

(c) Simulate the system and explore the effect of $a$ on the rise time and overshoot.

6.15 (Bryson's rule). Bryson and Ho [47] have suggested the following method for choosing the matrices $Q_x$ and $Q_u$ in equation (6.26): Start by choosing $Q_x$ and $Q_u$ as diagonal matrices whose elements are the inverses of the squares of the maxima of the corresponding variables. Then modify the elements to obtain a compromise among response time, damping and control effort. Apply this method to the motor drive in Exercise 6.11. Assume that the largest values of $\varphi_1$ and $\varphi_2$ are 1, the largest values of $\dot\varphi_1$ and $\dot\varphi_2$ are 2, and the largest control signal is 10. Simulate the closed loop system for $\varphi_2(0) = 1$ and all
other states are initialized to 0. Explore the effects of different values of the diagonal elements for $Q_x$ and $Q_u$.

Chapter Seven

Output Feedback

  "One may separate the problem of physical realization into two stages: computation of the best approximation $\hat x(t_1)$ of the state from knowledge of $y(t)$ for $t \le t_1$, and computation of $u(t_1)$ given $\hat x(t_1)$."
  — R. E. Kalman, "Contributions to the Theory of Optimal Control," 1960 [113].

In this chapter we show how to use output feedback to modify the dynamics of the system through the use of observers. We introduce the concept of observability and show that if a system is observable, it is possible to recover the state from measurements of the inputs and outputs to the system. We then show how to design a controller with feedback from the observer state. An important concept is the separation principle, quoted above, which is also proved. The structure of the controllers derived in this chapter is quite general and is obtained by many other design methods.

7.1 Observability

In Section 6.2 of the previous chapter it was shown that it is possible to find a state feedback law that gives desired closed loop eigenvalues, provided that the system is reachable and that all the states are measured. For many situations, it is highly unrealistic to assume that all the states are measured. In this section we investigate how the state can be estimated by using a mathematical model and a few measurements. It will be shown that computation of the states can be carried out by a dynamical system called an observer.

Definition of Observability

Consider a system described by a set of differential equations
$$\frac{dx}{dt} = Ax + Bu, \qquad y = Cx + Du, \tag{7.1}$$
where $x \in \mathbb{R}^n$ is the state, $u \in \mathbb{R}^p$ the input and $y \in \mathbb{R}^q$ the measured output. We wish to estimate the state of the system from its inputs and outputs, as illustrated in Figure 7.1. In some situations we will assume that there is only one measured signal, i.e., that the signal $y$ is a scalar and that $C$ is a (row) vector. This signal may be corrupted by noise $n$, although we shall start
by considering the noise-free case. We write $\hat x$ for the state estimate given by the observer.

a system in observable canonical form, which is given by
$$\tilde W_o = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ -a_1 & 1 & 0 & \cdots & 0 \\ * & -a_1 & 1 & \cdots & 0 \\ \vdots & & \ddots & \ddots & \\ * & \cdots & * & -a_1 & 1 \end{pmatrix},$$
where $*$ represents an entry whose exact value is not important. The rows of this matrix are linearly independent (since it is lower triangular), and hence $\tilde W_o$ is full rank. A straightforward but tedious calculation shows that the inverse of the observability matrix has a simple form, given by
$$\tilde W_o^{-1} = \begin{pmatrix} 1 & 0 & 0 & \cdots & 0 \\ a_1 & 1 & 0 & \cdots & 0 \\ a_2 & a_1 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \\ a_{n-1} & a_{n-2} & a_{n-3} & \cdots & 1 \end{pmatrix}.$$
As in the case of reachability, it turns out that if a system is observable then there always exists a transformation $T$ that converts the system into observable canonical form. This is useful for proofs, since it lets us assume that a system is in observable canonical form without any loss of generality. The observable canonical form may be poorly conditioned numerically.

7.2 State Estimation

Having defined the concept of observability, we now return to the question of how to construct an observer for a system. We will look for observers that can be represented as a linear dynamical system that takes the inputs and outputs of the system we are observing and produces an estimate of the system's state. That is, we wish to construct a dynamical system of the form
$$\frac{d\hat x}{dt} = F\hat x + Gu + Hy,$$
where $u$ and $y$ are the input and output of the original system and $\hat x \in \mathbb{R}^n$ is an estimate of the state, with the property that $\hat x(t) \to x(t)$ as $t \to \infty$.

The Observer

We consider the system in equation (7.1) with $D$ set to zero to simplify the exposition:
$$\frac{dx}{dt} = Ax + Bu, \qquad y = Cx. \tag{7.6}$$
We can attempt to determine the state simply by simulating the equations with the correct input. An estimate of the state is then given by
$$\frac{d\hat x}{dt} = A\hat x + Bu. \tag{7.7}$$
To find the properties of this estimate, introduce the estimation error $\tilde x = x - \hat x$. It follows from equations (7.6) and (7.7) that
$$\frac{d\tilde x}{dt} = A\tilde x.$$
If matrix $A$ has all its eigenvalues in the left half-plane, the error $\tilde x$ will go to zero, and hence equation (7.7) is
a dynamical system whose output converges to the state of the system (7.6).

The observer given by equation (7.7) uses only the process input $u$; the measured signal does not appear in the equation. We must also require that the system be stable, and essentially our estimator converges because the states of both the observer and the process are going to zero. This is not very useful in a control design context, since we want to have our estimate converge quickly to a nonzero state so that we can make use of it in our controller. We will therefore attempt to modify the observer so that the output is used and its convergence properties can be designed to be fast relative to the system's dynamics. This version will also work for unstable systems.

Consider the observer
$$\frac{d\hat x}{dt} = A\hat x + Bu + L(y - C\hat x). \tag{7.8}$$
This can be considered as a generalization of equation (7.7). Feedback from the measured output is provided by adding the term $L(y - C\hat x)$, which is proportional to the difference between the observed output and the output predicted by the observer. It follows from equations (7.6) and (7.8) that
$$\frac{d\tilde x}{dt} = (A - LC)\tilde x.$$
If the matrix $L$ can be chosen in such a way that the matrix $A - LC$ has eigenvalues with negative real parts, the error $\tilde x$ will go to zero. The convergence rate is determined by an appropriate selection of the eigenvalues.

Notice the similarity between the problems of finding a state feedback and finding the observer. State feedback design by eigenvalue assignment is equivalent to finding a matrix $K$ so that $A - BK$ has given eigenvalues. Designing an observer with prescribed eigenvalues is equivalent to finding a matrix $L$ so that $A - LC$ has given eigenvalues. Since the eigenvalues of a matrix and its transpose are the same, we can establish the following equivalences:
$$A \leftrightarrow A^T, \qquad B \leftrightarrow C^T, \qquad K \leftrightarrow L^T, \qquad W_r \leftrightarrow W_o^T.$$
The observer design problem is the dual of the state feedback design problem. Using the results of Theorem 6.3, we get the following theorem on observer design.
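The duality $A \leftrightarrow A^T$, $B \leftrightarrow C^T$, $K \leftrightarrow L^T$ means that any pole-placement routine doubles as an observer designer; a sketch (Python/SciPy, assumed tooling; the system is a hypothetical double integrator, not an example from the text):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical system: double integrator with position measurement.
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Observer design by duality: place the eigenvalues of A^T - C^T L^T
# as if it were a state feedback problem, then transpose the gain.
L = place_poles(A.T, C.T, [-2.0, -3.0]).gain_matrix.T

# A - L C now has the prescribed observer eigenvalues.
print(L.ravel(), np.sort(np.linalg.eigvals(A - L @ C).real))
# For this system, s^2 + l1*s + l2 = (s + 2)(s + 3) gives l1 = 5, l2 = 6.
```

This is exactly the trick described in the text for reusing `acker`/`place` on the transposed matrices.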
Figure 7.4: Observer for a two-compartment system. A two-compartment model is shown on the left. The observer measures the input concentration $u$ and the output concentration $y = c_1$ to determine the compartment concentrations, shown on the right. The true concentrations are shown by solid lines and the estimates generated by the observer by dashed lines.

Let the desired characteristic polynomial of the observer be $s^2 + p_1s + p_2$. Solving for the gains then gives the observer gain
$$L = \begin{pmatrix} p_1 - k_0 - k_1 - k_2 \\ \dfrac{p_2 - p_1k_2 + k_1k_2 + k_2^2}{k_1} \end{pmatrix}.$$
Notice that the observability condition $k_1 \neq 0$ is essential. The behavior of the observer is illustrated by the simulation in Figure 7.4b. Notice how the observed concentrations approach the true concentrations.

The observer is a dynamical system whose inputs are the process input $u$ and the process output $y$. The rate of change of the estimate is composed of two terms. One term, $A\hat x + Bu$, is the rate of change computed from the model with $\hat x$ substituted for $x$. The other term, $L(y - \hat y)$, is proportional to the difference $e = y - \hat y$ between the measured output $y$ and its estimate $\hat y = C\hat x$. The observer gain $L$ is a matrix that tells how the error $e$ is weighted and distributed among the states. The observer thus combines measurements with a dynamical model of the system. A block diagram of the observer is shown in Figure 7.5.

Computing the Observer Gain

For simple low-order problems it is convenient to introduce the elements of the observer gain $L$ as unknown parameters and solve for the values required to give the desired characteristic polynomial, as illustrated in the following example.

Example 7.3 (Vehicle steering). The normalized linear model for vehicle steering derived in Examples 5.12 and 6.4 gives the following state space model relating lateral path deviation $y$ to
Figure 7.6: Simulation of an observer for a vehicle driving on a curvy road (left). The observer has an initial velocity error. The plots in the middle show the lateral deviation $x_1$ and the lateral velocity $x_2$ (solid lines) and their estimates $\hat x_1$ and $\hat x_2$ (dashed lines). The plots on the right show the estimation errors.

A simulation of the observer for a vehicle driving on a curvy road is shown in Figure 7.6. The vehicle length is the time unit in the normalized model. The figure shows that the observer error settles in about 3 vehicle lengths.

For systems of high order we have to use numerical calculations. The duality between the design of a state feedback and the design of an observer means that the computer algorithms for state feedback can also be used for the observer design; we simply use the transpose of the dynamics matrix and the output matrix. The MATLAB command acker, which essentially is a direct implementation of the calculations given in Theorem 7.2, can be used for systems with one output. The MATLAB command place can be used for systems with many outputs. It is also better conditioned numerically.

7.3 Control Using Estimated State

In this section we will consider a state space system of the form
$$\frac{dx}{dt} = Ax + Bu, \qquad y = Cx. \tag{7.13}$$
Notice that we have assumed that there is no direct term in the system ($D = 0$). This is often a realistic assumption. The presence of a direct term in combination with a controller having proportional action creates an algebraic loop, which will be discussed in Section 8.3. The problem can be solved even if there is a direct term, but the calculations are more complicated.

We wish to design a feedback controller for the system, where only the output is measured. As before, we will assume that $u$ and $y$ are scalars. We also assume that the system is reachable and observable. In Chapter 6 we found a feedback of the form
$$u = -Kx + k_rr$$
for the case that all states could be measured, and in Section 7.2 we
developed an observer that can generate estimates of the state $\hat x$ based on inputs and outputs. In this section we will combine the ideas of these sections to find a feedback that gives desired closed loop eigenvalues for systems where only outputs are available for feedback.

If all states are not measurable, it seems reasonable to try the feedback
$$u = -K\hat x + k_rr, \tag{7.14}$$
where $\hat x$ is the output of an observer of the state, i.e.,
$$\frac{d\hat x}{dt} = A\hat x + Bu + L(y - C\hat x). \tag{7.15}$$
Since the system (7.13) and the observer (7.15) are both of state dimension $n$, the closed loop system has state dimension $2n$, with state $(x, \hat x)$. The evolution of the states is described by equations (7.13)-(7.15). To analyze the closed loop system, the state variable $\hat x$ is replaced by
$$\tilde x = x - \hat x. \tag{7.16}$$
Subtraction of equation (7.15) from equation (7.13) gives
$$\frac{d\tilde x}{dt} = Ax - A\hat x - L(Cx - C\hat x) = A\tilde x - LC\tilde x = (A - LC)\tilde x.$$
Returning to the process dynamics, introducing $u$ from equation (7.14) into equation (7.13) and using equation (7.16) to eliminate $\hat x$ gives
$$\frac{dx}{dt} = Ax + Bu = Ax - BK\hat x + Bk_rr = Ax - BK(x - \tilde x) + Bk_rr = (A - BK)x + BK\tilde x + Bk_rr.$$
The closed loop system is thus governed by
$$\frac{d}{dt}\begin{pmatrix} x \\ \tilde x \end{pmatrix} = \begin{pmatrix} A - BK & BK \\ 0 & A - LC \end{pmatrix}\begin{pmatrix} x \\ \tilde x \end{pmatrix} + \begin{pmatrix} Bk_r \\ 0 \end{pmatrix}r. \tag{7.17}$$
Notice that the state $\tilde x$, representing the observer error, is not affected by the reference signal $r$. This is desirable, since we do not want the reference signal to generate observer errors.

Since the dynamics matrix is block triangular, we find that the characteristic polynomial of the closed loop system is
$$\lambda(s) = \det(sI - A + BK)\,\det(sI - A + LC).$$
This polynomial is a product of two terms: the characteristic polynomial of the closed loop system obtained with state feedback and the characteristic polynomial of the observer error. The feedback (7.14), which was motivated heuristically, thus provides a neat solution to the eigenvalue assignment problem. The result is summarized as follows.
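The block-triangular structure in equation (7.17) can be confirmed numerically; the sketch below (Python/SciPy, assumed tooling, on a hypothetical double integrator) designs $K$ and $L$ independently and checks that the $2n$ closed loop eigenvalues are exactly the union of the controller and observer eigenvalues:

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical reachable and observable system (double integrator).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
C = np.array([[1.0, 0.0]])

# State feedback and observer gains designed independently.
K = place_poles(A, B, [-1.0 + 1.0j, -1.0 - 1.0j]).gain_matrix
L = place_poles(A.T, C.T, [-4.0, -5.0]).gain_matrix.T

# Closed loop dynamics matrix from equation (7.17), state (x, x~).
Acl = np.block([[A - B @ K, B @ K],
                [np.zeros((2, 2)), A - L @ C]])
eigs = np.sort_complex(np.linalg.eigvals(Acl))
print(eigs)  # union of {-1 +/- 1j} and {-4, -5}
```

This is the separation principle in computational form: the spectrum of the combined system factors into the state feedback spectrum and the observer spectrum, so the two designs do not interfere.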
Figure 7.8: Simulation of a vehicle driving on a curvy road with a controller based on state feedback and an observer. The left plot shows the lane boundaries (dotted), the vehicle position (solid) and its estimate (dashed); the upper right plot shows the velocity (solid) and its estimate (dashed); and the lower right plot shows the control signal using state feedback (solid) and the control signal using the estimated state (dashed).

The controller contains a dynamical model of the plant. This is called the internal model principle: the controller contains a model of the process being controlled.

Example 7.4 (Vehicle steering). Consider again the normalized linear model for vehicle steering in Example 6.4. The dynamics relating the steering angle $u$ to the lateral path deviation $y$ are given by the state space model (7.12). Combining the state feedback derived in Example 6.4 with the observer determined in Example 7.3, we find that the controller is given by
$$\frac{d\hat x}{dt} = A\hat x + Bu + L(y - C\hat x) = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\hat x + \begin{pmatrix} \gamma \\ 1 \end{pmatrix}u + \begin{pmatrix} l_1 \\ l_2 \end{pmatrix}(y - \hat x_1),$$
$$u = -K\hat x + k_rr = k_1(r - \hat x_1) - k_2\hat x_2.$$
Elimination of the variable $u$ gives
$$\frac{d\hat x}{dt} = (A - BK - LC)\hat x + Ly + Bk_rr = \begin{pmatrix} -l_1 - \gamma k_1 & 1 - \gamma k_2 \\ -k_1 - l_2 & -k_2 \end{pmatrix}\hat x + \begin{pmatrix} l_1 \\ l_2 \end{pmatrix}y + \begin{pmatrix} \gamma \\ 1 \end{pmatrix}k_1r.$$
The controller is a dynamical system of second order, with two inputs, $y$ and $r$, and one output, $u$. Figure 7.8 shows a simulation of the system when the vehicle is driven along a curvy road. Since we are using a normalized model, the length unit is the vehicle length and the time unit is the time it takes to travel one vehicle length. The estimator is initialized with all states equal to zero, but the real system has an initial velocity of 0.5. The figures show that the estimates converge quickly to their true values. The vehicle tracks the desired path, which is in the middle of the road, but there are errors because the road is irregular. The tracking error can be improved by introducing feedforward (Section 7.5).

7.4 KALMAN FILTERING

The Kalman filter can also be applied to continuous-time stochastic processes. The mathematical derivation of this result requires more sophisticated tools, but the final form of the estimator is relatively straightforward. Consider
a continuous stochastic system
$$\frac{dx}{dt} = Ax + Bu + Fv, \qquad E\{v(s)v^T(t)\} = R_v(t)\,\delta(t - s),$$
$$y = Cx + w, \qquad E\{w(s)w^T(t)\} = R_w(t)\,\delta(t - s),$$
where $\delta(\tau)$ is the unit impulse function. Assume that the disturbance $v$ and noise $w$ are zero mean and Gaussian (but not necessarily stationary):
$$\operatorname{pdf}(v) = \frac{1}{\sqrt{(2\pi)^n\det R_v}}\,e^{-\frac{1}{2}v^TR_v^{-1}v}, \qquad \operatorname{pdf}(w) = \frac{1}{\sqrt{(2\pi)^n\det R_w}}\,e^{-\frac{1}{2}w^TR_w^{-1}w}.$$
We wish to find the estimate $\hat x(t)$ that minimizes the mean square error $E\{(x(t) - \hat x(t))(x(t) - \hat x(t))^T\}$ given $\{y(\tau) : 0 \le \tau \le t\}$.

Theorem 7.5 (Kalman–Bucy, 1961). The optimal estimator has the form of a linear observer
$$\frac{d\hat x}{dt} = A\hat x + Bu + L(y - C\hat x),$$
where $L(t) = P(t)C^TR_w^{-1}$ and $P(t) = E\{(x(t) - \hat x(t))(x(t) - \hat x(t))^T\}$ satisfies
$$\frac{dP}{dt} = AP + PA^T - PC^TR_w^{-1}(t)CP + FR_v(t)F^T, \qquad P(0) = E\{x(0)x^T(0)\}.$$

As in the discrete case, when the system is stationary and if $P(t)$ converges, the observer gain is constant:
$$L = PC^TR_w^{-1}, \quad \text{where} \quad AP + PA^T - PC^TR_w^{-1}CP + FR_vF^T = 0.$$
The second equation is the algebraic Riccati equation.

Example 7.5 (Vectored thrust aircraft). We consider the lateral dynamics of the system, consisting of the subsystems whose states are given by $z = (x, \theta, \dot x, \dot\theta)$. To design a Kalman filter for the system, we must include a description of the process disturbances and the sensor noise. We thus augment the system to have the form
$$\frac{dz}{dt} = Az + Bu + Fv, \qquad y = Cz + w,$$
where $F$ represents the structure of the disturbances (including the effects of nonlinearities that we have ignored in the linearization), $v$ represents the disturbance source, modeled as zero mean, Gaussian white noise, and $w$ represents the measurement noise, also zero mean, Gaussian and white.

For this example we choose $F$ as the identity matrix and choose the disturbances $v_i$, $i = 1, \dots, n$, to be independent, with covariance given by $R_{ii} = 0.1$, $R_{ij} = 0$, $i \neq j$.

Figure 7.9: Kalman filter design for a vectored thrust aircraft. In the first design (a), only the lateral position of the aircraft is measured. Adding a direct measurement of the roll angle produces a much better observer (b). The initial condition for both simulations is $(0.1, 0.0175, 0.01, 0)$.

The sensor noise is a single random variable, which we model as having covariance $R_w = 10^{-4}$.
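The stationary gain in Theorem 7.5 comes from the same algebraic Riccati machinery as LQR, applied to the dual system; a sketch (Python/SciPy, assumed tooling; the system matrices here are hypothetical, not the aircraft model):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical 2-state system with full process disturbance and a scalar measurement.
A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
C = np.array([[1.0, 0.0]])
F = np.eye(2)
Rv = 0.1 * np.eye(2)       # process disturbance covariance
Rw = np.array([[1e-4]])    # measurement noise covariance

# Filter ARE:  A P + P A^T - P C^T Rw^-1 C P + F Rv F^T = 0,
# solved as the dual of the control Riccati equation.
P = solve_continuous_are(A.T, C.T, F @ Rv @ F.T, Rw)
L = P @ C.T @ np.linalg.inv(Rw)

residual = A @ P + P @ A.T - P @ C.T @ np.linalg.inv(Rw) @ C @ P + F @ Rv @ F.T
print(np.max(np.abs(residual)))  # ~0: P solves the filter Riccati equation
```

The resulting $A - LC$ is guaranteed stable for an observable pair $(A, C)$, so the estimation error dynamics converge, mirroring the observer results of Section 7.2.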
aircraft is measured. Adding a direct measurement of the roll angle produces a much better observer (b). The initial condition for both simulations is $(0.1, 0.0175, 0.01, 0)$.]

having covariance $R_w = 10^{-4}$. Using the same parameters as before, the resulting Kalman gain is given by

$$L = \begin{pmatrix} 37.0 \\ 46.9 \\ 18.5 \\ 31.6 \end{pmatrix}.$$

The performance of the estimator is shown in Figure 7.9a. We see that while the estimator converges to the system state, it contains significant overshoot in the state estimate, which can lead to poor performance in a closed loop setting.

To improve the performance of the estimator, we explore the impact of adding a new output measurement. Suppose that instead of measuring just the output position $x$, we also measure the orientation of the aircraft $\theta$. The output becomes

$$y = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}z + \begin{pmatrix} w_1 \\ w_2 \end{pmatrix},$$

and if we assume that $w_1$ and $w_2$ are independent noise sources, each with covariance $R_{w_i} = 10^{-4}$, then the optimal estimator gain matrix becomes

$$L = \begin{pmatrix} 32.6 & 0.150 \\ 0.150 & 32.6 \\ 32.7 & 9.79 \\ 0.0033 & 31.6 \end{pmatrix}.$$

These gains provide good immunity to noise and high performance, as illustrated in Figure 7.9b.

[Figure 7.11: Trajectory generation for changing lanes; (a) overhead view, (b) position and steering. We wish to change from the left lane to the right lane over a distance of 30 m in 4 s. The planned trajectory in the $xy$ plane is shown in (a), and the lateral position $y$ and the steering angle $\delta$ over the maneuver time interval are shown in (b).]

There are many ways to generate the feedforward signal, and there are also many different ways to compute the feedback gain $K$ and the observer gain $L$. Note that once again the internal model principle applies: the controller contains a model of the system to be controlled, through the observer.

Example 7.6 (Vehicle steering). To illustrate how we can use a two degree-of-freedom design to improve the performance of the system, consider the problem of steering a car to change lanes on a road, as illustrated in Figure 7.11a. We use the
non-normalized form of the dynamics, which were derived in Example 2.8. Using the center of the rear wheels as the reference ($\alpha = 0$), the dynamics can be written as

$$\frac{dx}{dt} = v\cos\theta, \qquad \frac{dy}{dt} = v\sin\theta, \qquad \frac{d\theta}{dt} = \frac{v}{b}\tan\delta,$$

where $v$ is the forward velocity of the vehicle, $b$ is the wheelbase, and $\delta$ is the steering angle. To generate a trajectory for the system, we note that we can solve for the states and inputs of the system given $x$, $y$ by solving the following set of equations:

$$\dot{x} = v\cos\theta, \qquad \ddot{x} = \dot{v}\cos\theta - v\dot{\theta}\sin\theta,$$
$$\dot{y} = v\sin\theta, \qquad \ddot{y} = \dot{v}\sin\theta + v\dot{\theta}\cos\theta,$$
$$\dot{\theta} = \frac{v}{b}\tan\delta. \tag{7.24}$$

This set of five equations has five unknowns ($\theta$, $\dot{\theta}$, $v$, $\dot{v}$, and $\delta$) that can be solved using trigonometry and linear algebra. It follows that we can compute a feasible trajectory for the system given any path $x(t)$, $y(t)$. This special property of a system is known as differential flatness [73, 74].

To find a trajectory from an initial state $(x_0, y_0, \theta_0)$ to a final state $(x_f, y_f, \theta_f)$

independent variable $x$ has the solution

$$\psi(x) = Ae^{x\sqrt{s}} + Be^{-x\sqrt{s}}.$$

Matching the boundary conditions gives $A = 0$ and $B = 1$, so the solution is

$$y(t) = \theta(1, t) = \psi(1)e^{st} = e^{-\sqrt{s}}e^{st} = e^{-\sqrt{s}}u(t).$$

The system thus has the transfer function $G(s) = e^{-\sqrt{s}}$. As in the case of a time delay, the transfer function is not a rational function but is an analytic function.

Gains, Poles and Zeros

The transfer function has many useful interpretations, and the features of a transfer function are often associated with important system properties. Three of the most important features are the gain and the locations of the poles and zeros.

The zero frequency gain of a system is given by the magnitude of the transfer function at $s = 0$. It represents the ratio of the steady-state value of the output with respect to a step input (which can be represented as $u = e^{st}$ with $s = 0$). For a state space system, we computed the zero frequency gain in equation (5.22):

$$G(0) = D - CA^{-1}B.$$

For a system written as a linear differential equation

$$\frac{d^n y}{dt^n} + a_1\frac{d^{n-1}y}{dt^{n-1}} + \cdots + a_n y = b_0\frac{d^m u}{dt^m} + b_1\frac{d^{m-1}u}{dt^{m-1}} + \cdots + b_m u,$$

if we assume that the input and output of the system are constants $y_0$ and $u_0$, then we find that
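The flatness equations (7.24) above can be inverted pointwise: given the first and second derivatives of a path $(x(t), y(t))$, the state and input follow by trigonometry. A minimal sketch; the wheelbase value `b` is an assumed number for illustration:

```python
import numpy as np

b = 3.0  # wheelbase [m]; assumed value for illustration

def path_to_states(xd, yd, xdd, ydd):
    """Recover (theta, v, delta) from path derivatives using eq. (7.24)."""
    theta = np.arctan2(yd, xd)                 # heading from xdot, ydot
    v = np.hypot(xd, yd)                       # forward velocity
    thetadot = (ydd * xd - xdd * yd) / v**2    # from the second-derivative pair
    delta = np.arctan(b * thetadot / v)        # from thetadot = (v/b) tan(delta)
    return theta, v, delta

# Sanity check on a circular path x = R cos(wt), y = R sin(wt),
# which should give v = R*w and a constant steering angle arctan(b/R):
R, w, t = 10.0, 0.5, 0.3
theta, v, delta = path_to_states(-R*w*np.sin(w*t), R*w*np.cos(w*t),
                                 -R*w*w*np.cos(w*t), -R*w*w*np.sin(w*t))
```

Sampling these formulas along the planned lane-change path gives the feedforward steering signal directly.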
$a_n y_0 = b_m u_0$. Hence the zero frequency gain is

$$G(0) = \frac{y_0}{u_0} = \frac{b_m}{a_n}. \tag{8.16}$$

Next consider a linear system with the rational transfer function $G(s) = b(s)/a(s)$. The roots of the polynomial $a(s)$ are called the poles of the system, and the roots of $b(s)$ are called the zeros of the system. If $p$ is a pole, it follows that $y(t) = e^{pt}$ is a solution of equation (8.8) with $u = 0$ (the homogeneous solution). A pole $p$ corresponds to a mode of the system, with corresponding modal solution $e^{pt}$. The unforced motion of the system after an arbitrary excitation is a weighted sum of modes.

Zeros have a different interpretation. Since the pure exponential output corresponding to the input $u(t) = e^{st}$ with $a(s) \ne 0$ is $G(s)e^{st}$, it follows that the pure exponential output is zero if $b(s) = 0$. Zeros of the transfer function thus block transmission of the corresponding exponential signals.

For a state space system with transfer function $G(s) = C(sI - A)^{-1}B + D$, the poles of the transfer function are the eigenvalues of the matrix $A$ in the state space model.

[Figure 8.4: A pole zero diagram for a transfer function with zeros at $-5$ and $-1$ and poles at $-3$ and $-2 \pm 2j$. The circles represent the locations of the zeros and the crosses the locations of the poles. A complete characterization requires that we also specify the gain of the system.]

One easy way to see this is to notice that the value of $G(s)$ is unbounded when $s$ is an eigenvalue of the system, since this is precisely the set of points where the characteristic polynomial $\lambda(s) = \det(sI - A) = 0$ (and hence $sI - A$ is noninvertible). It follows that the poles of a state space system depend only on the matrix $A$, which represents the intrinsic dynamics of the system. We say that a transfer function is stable if all of its poles have negative real part.

To find the zeros of a state space system, we observe that the zeros are complex numbers $s$ such that the input $u(t) = u_0 e^{st}$ gives zero output. Inserting the pure exponential response $x(t) = x_0 e^{st}$ and $y(t) = 0$ in equation (8.2) gives

$$se^{st}x_0 = Ax_0e^{st} + Bu_0e^{st}, \qquad 0 = Ce^{st}x_0 + De^{st}u_0,$$

which
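The two identities above, the zero frequency gain $G(0) = D - CA^{-1}B$ and the poles as eigenvalues of $A$, can be checked numerically. A sketch for a hypothetical first-order lag $G(s) = a/(s + a)$, not a system from the text:

```python
import numpy as np

# First-order lag G(s) = a/(s + a) in state space form
a = 2.0
A, B = np.array([[-a]]), np.array([[1.0]])
C, D = np.array([[a]]), np.array([[0.0]])

# Zero frequency gain G(0) = D - C A^{-1} B  (equation 5.22)
G0 = (D - C @ np.linalg.inv(A) @ B).item()

# The poles of the transfer function are the eigenvalues of A
poles = np.linalg.eigvals(A)
print(G0, poles)
```

In differential-equation form this system is $\dot{y} + ay = au$, so equation (8.16) gives the same answer, $b_m/a_n = a/a = 1$.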
can be written as

$$\begin{pmatrix} sI - A & -B \\ C & D \end{pmatrix}\begin{pmatrix} x_0 \\ u_0 \end{pmatrix} = 0.$$

This equation has a solution with nonzero $x_0$, $u_0$ only if the matrix on the left does not have full rank. The zeros are thus the values $s$ such that the matrix

$$\begin{pmatrix} sI - A & -B \\ C & D \end{pmatrix} \tag{8.17}$$

loses rank. Since the zeros depend on $A$, $B$, $C$, and $D$, they therefore depend on how the inputs and outputs are coupled to the states. Notice in particular that if the matrix $B$ has full rank, then the matrix in equation (8.17) has $n$ linearly independent rows for all values of $s$. Similarly, there are $n$ linearly independent columns if the matrix $C$ has full rank. This implies that systems where the matrix $B$ or $C$ is full rank do not have zeros. In particular, it means that a system has no zeros if it is fully actuated (each state can be controlled independently) or if the full state is measured.

A convenient way to view the poles and zeros of a transfer function is through a pole zero diagram, as shown in Figure 8.4. In this diagram, each pole is marked with a cross and each zero with a circle. If there are multiple poles or zeros at a fixed location, these are often indicated with overlapping crosses or circles (or other notation).

and

$$G_{ur}(s) = k_r\bigl(1 - KG_{\hat{x}u}(s)\bigr) = \frac{k_1(s^2 + l_1 s + l_2)}{s^2 + s(\gamma k_1 + k_2 + l_1) + k_1 + l_2 + k_2 l_1 - \gamma k_2 l_2},$$

where $k_1$ and $k_2$ are the controller gains. Finally, we compute the full closed loop dynamics. We begin by deriving the transfer function for the process, $P(s)$. We can compute this directly from the state space description of the dynamics, which was given in Example 5.12. Using that description, we have

$$P(s) = G_{yu}(s) = C(sI - A)^{-1}B + D = \begin{pmatrix} 1 & 0 \end{pmatrix}\begin{pmatrix} s & -1 \\ 0 & s \end{pmatrix}^{-1}\begin{pmatrix} \gamma \\ 1 \end{pmatrix} = \frac{\gamma s + 1}{s^2}.$$

The transfer function for the full closed loop system between the input $r$ and the output $y$ is then given by

$$G_{yr}(s) = \frac{k_r P(s)}{1 + P(s)G_{uy}(s)} = \frac{k_1(\gamma s + 1)}{s^2 + (k_1\gamma + k_2)s + k_1}.$$

Note that the observer gains $l_1$ and $l_2$ do not appear in this equation. This is because we are considering steady-state analysis and, in steady state, the estimated state exactly tracks the state of the system (assuming perfect models). We will return to this example in Chapter 12 to study
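The rank condition in equation (8.17) can be tested numerically. A sketch for a hypothetical system with transfer function $G(s) = (s + 1)/\bigl((s + 2)(s + 3)\bigr)$, whose single zero is at $s = -1$:

```python
import numpy as np

# Controllable canonical realization of G(s) = (s + 1)/((s + 2)(s + 3))
A = np.array([[0.0, 1.0], [-6.0, -5.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 1.0]])
D = np.array([[0.0]])

def pencil(s):
    """The matrix of equation (8.17); zeros are where it loses rank."""
    top = np.hstack([s * np.eye(2) - A, -B])
    bot = np.hstack([C, D])
    return np.vstack([top, bot])

rank_at_zero = np.linalg.matrix_rank(pencil(-1.0))  # rank drops at the zero
rank_generic = np.linalg.matrix_rank(pencil(0.0))   # full rank elsewhere
print(rank_at_zero, rank_generic)
```

Scanning the pencil's rank over candidate values of $s$ is exactly the state space version of finding the roots of $b(s)$.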
the robustness of this particular approach.

Pole-Zero Cancellations

Because transfer functions are often polynomials in $s$, it can sometimes happen that the numerator and denominator have a common factor, which can be canceled. Sometimes these cancellations are simply algebraic simplifications, but in other situations they can mask potential fragilities in the model. In particular, if a pole-zero cancellation occurs because terms in separate blocks just happen to coincide, the cancellation may not occur if one of the systems is slightly perturbed. In some situations this can result in severe differences between the expected behavior and the actual behavior.

To illustrate when we can have pole-zero cancellations, consider the block diagram in Figure 8.7 with $F = 1$ (no feedforward compensation) and $C$ and $P$ given by

$$C(s) = \frac{n_c(s)}{d_c(s)}, \qquad P(s) = \frac{n_p(s)}{d_p(s)}.$$

The transfer function from $r$ to $e$ is then given by

$$G_{er}(s) = \frac{1}{1 + PC} = \frac{d_c(s)d_p(s)}{d_c(s)d_p(s) + n_c(s)n_p(s)}.$$

If there are common factors in the numerator and denominator polynomials, then these terms can be factored out and eliminated from both the numerator and denominator. For example, if the controller has a zero at $s = a$ and the process has a pole at $s = a$, then we will have

$$G_{er}(s) = \frac{(s - a)d_c(s)d_p'(s)}{(s - a)d_c(s)d_p'(s) + (s - a)n_c'(s)n_p(s)} = \frac{d_c(s)d_p'(s)}{d_c(s)d_p'(s) + n_c'(s)n_p(s)},$$

where $n_c'(s)$ and $d_p'(s)$ represent the relevant polynomials with the term $s - a$ factored out. In the case when $a > 0$, so that the zero or pole is in the right half-plane, we see that there is no impact on the transfer function $G_{er}$.

Suppose instead that we compute the transfer function from $d$ to $e$, which represents the effect of a disturbance on the error between the reference and the output. This transfer function is given by

$$G_{ed}(s) = \frac{d_c(s)n_p(s)}{(s - a)d_c(s)d_p'(s) + (s - a)n_c'(s)n_p(s)}.$$

Notice that if $a > 0$, then the pole is in the right half-plane and the transfer function $G_{ed}$ is unstable. Hence even though the transfer function from $r$ to $e$ appears to be okay (assuming a perfect pole-zero cancellation), the transfer function from $d$ to $e$ can exhibit unbounded
behavior. This unwanted behavior is typical of an unstable pole-zero cancellation.

It turns out that the cancellation of a pole with a zero can also be understood in terms of the state space representation of the systems. Reachability or observability is lost when there are cancellations of poles and zeros (Exercise 8.11). A consequence is that the transfer function represents the dynamics only in the reachable and observable subspace of a system (see Section 7.5).

Example 8.7 (Cruise control). The input/output response from throttle to velocity for the linearized model for a car has the transfer function $G(s) = b/(s + a)$, $a > 0$. A simple (but not necessarily good) way to design a PI controller is to choose the parameters of the PI controller so that the controller zero at $s = -k_i/k_p$ cancels the process pole at $s = -a$. The transfer function from reference to velocity is $G_{vr}(s) = bk_p/(s + bk_p)$, and control design is simply a matter of choosing the gain $k_p$. The closed loop system dynamics are of first order with the time constant $1/(bk_p)$.

Figure 8.10 shows the velocity error when the car encounters an increase in the road slope. A comparison with the controller used in Figure 3.3b (reproduced in dashed curves) shows that the controller based on pole-zero cancellation has very poor performance. The velocity error is larger, and it takes a long time to settle. Notice that the control signal remains practically constant after $t = 15$, even if the error is large after that time. To understand what happens, we will analyze the system. The parameters of the system are $a = 0.0101$ and $b = 1.32$, and the controller parameters are $k_p = 0.5$ and $k_i = 0.0051$. The closed loop time constant is $1/(bk_p) = 2.5$ s, and we would expect that the error would settle in about 10 s (4 time constants). The transfer functions from road slope to velocity and control

[Figure 8.10: Car with PI cruise control encountering a sloping road. The velocity
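The mechanism can be made concrete by comparing the poles of the two disturbance transfer functions of the example; a sketch using SciPy, with the parameter values quoted above (the slope-disturbance gain `bg` is an assumed value, and the velocity response is written without its sign):

```python
import numpy as np
from scipy import signal

a, b, bg = 0.0101, 1.32, 9.8   # process parameters; bg assumed
kp, ki = 0.5, 0.0051           # PI gains with ki/kp ~ a (pole-zero cancellation)

# Slope disturbance to velocity: the canceled slow mode s = -a survives here
Gvd = signal.TransferFunction([bg, 0.0], np.polymul([1.0, a], [1.0, b * kp]))
# Slope disturbance to throttle: the controller zero removes the slow mode
Gud = signal.TransferFunction([bg * kp], [1.0, b * kp])

print(Gvd.poles)   # contains -0.0101: slow decay in the velocity error
print(Gud.poles)   # only -0.66: the control signal settles quickly
```

The asymmetry in the two pole sets is exactly why the throttle looks settled while the velocity error lingers.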
error is shown on the left and the throttle is shown on the right. Results with a PI controller with $k_p = 0.5$ and $k_i = 0.0051$, where the process pole $s = -0.0101$ is canceled, are shown by solid lines, and a controller with $k_p = 0.5$ and $k_i = 0.5$ is shown by dashed lines. Compare with Figure 3.3b.]

The transfer functions from road slope to velocity and control signals are

$$G_{v\theta}(s) = \frac{b_g s}{(s + a)(s + bk_p)}, \qquad G_{u\theta}(s) = \frac{b_g k_p}{s + bk_p}.$$

Notice that the canceled mode $s = -a = -0.0101$ appears in $G_{v\theta}$ but not in $G_{u\theta}$. The reason why the control signal remains constant is that the controller has a zero at $s = -0.0101$, which cancels the slowly decaying process mode. Notice that the error would diverge if the canceled pole were unstable.

The lesson we can learn from this example is that it is a bad idea to try to cancel unstable or slow process poles. A more detailed discussion of pole-zero cancellations is given in Section 12.4.

Algebraic Loops

When analyzing or simulating a system described by a block diagram, it is necessary to form the differential equations that describe the complete system. In many cases the equations can be obtained by combining the differential equations that describe each subsystem and substituting variables. This simple procedure cannot be used when there are closed loops of subsystems that all have a direct connection between inputs and outputs, known as an algebraic loop.

To see what can happen, consider a system with two blocks, a first-order nonlinear system

$$\frac{dx}{dt} = f(x, u), \qquad y = h(x), \tag{8.21}$$

and a proportional controller described by $u = -ky$. There is no direct term since the function $h$ does not depend on $u$. In that case we can obtain the equation for the closed loop system simply by replacing $u$ by $-ky$ in (8.21) to give

$$\frac{dx}{dt} = f(x, -ky), \qquad y = h(x).$$

Such a procedure can easily be automated using simple formula manipulation.

The situation is more complicated if there is a direct term. If $y = h(x, u)$, then replacing $u$ by $-ky$ gives

$$\frac{dx}{dt} = f(x, -ky), \qquad y = h(x, -ky).$$

To obtain a differential equation for $x$, the algebraic equation $y = h(x, -ky)$ must be solved to give $y = \alpha(x)$, which in general is a complicated task. When algebraic loops
are present, it is necessary to solve algebraic equations to obtain the differential equations for the complete system. Resolving algebraic loops is a nontrivial problem because it requires the symbolic solution of algebraic equations. Most block diagram-oriented modeling languages cannot handle algebraic loops, and they simply give a diagnosis that such loops are present. In the era of analog computing, algebraic loops were eliminated by introducing fast dynamics between the loops. This created differential equations with fast and slow modes that are difficult to solve numerically. Advanced modeling languages like Modelica use several sophisticated methods to resolve algebraic loops.

8.4 The Bode Plot

The frequency response of a linear system can be computed from its transfer function by setting $s = i\omega$, corresponding to a complex exponential

$$u(t) = e^{i\omega t} = \cos\omega t + i\sin\omega t.$$

The resulting output has the form

$$y(t) = G(i\omega)e^{i\omega t} = Me^{i(\omega t + \varphi)} = M\cos(\omega t + \varphi) + iM\sin(\omega t + \varphi),$$

where $M$ and $\varphi$ are the gain and phase of $G$:

$$M = |G(i\omega)|, \qquad \varphi = \arctan\frac{\operatorname{Im}G(i\omega)}{\operatorname{Re}G(i\omega)}.$$

The phase of $G$ is also called the argument of $G$, a term that comes from the theory of complex variables. It follows from linearity that the response to a single sinusoid ($\sin$ or $\cos$) is amplified by $M$ and phase-shifted by $\varphi$. Note that $-\pi < \varphi \le \pi$, so the arctangent must be taken respecting the signs of the numerator and denominator.

It will often be convenient to represent the phase in degrees rather than radians. We will use the notation $\angle G(i\omega)$ for the phase in degrees and $\arg G(i\omega)$ for the phase in radians. In addition, while we always take $\arg G(i\omega)$ to be in the range $(-\pi, \pi]$, we will take $\angle G(i\omega)$ to be continuous, so that it can take on values outside the range of $-180°$ to $180°$.

The frequency response $G(i\omega)$ can thus be represented by two curves: the gain curve and the phase curve. The gain curve gives $|G(i\omega)|$ as a function of frequency $\omega$, and the phase curve gives $\angle G(i\omega)$. One particularly useful way of drawing these
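These definitions translate directly into a few lines of code; a small sketch for rational transfer functions, using `atan2` so that the signs of the numerator and denominator are respected, as noted above:

```python
import numpy as np

def freq_response(num, den, w):
    """Gain M = |G(iw)| and phase arg G(iw) for G(s) = num(s)/den(s),
    with num and den given as polynomial coefficient lists."""
    s = 1j * w
    G = np.polyval(num, s) / np.polyval(den, s)
    return np.abs(G), np.angle(G)   # np.angle is quadrant-aware (atan2)

# Integrator G(s) = 1/s: gain 1/w and phase -90 degrees at every frequency
M, phi = freq_response([1.0], [1.0, 0.0], 2.0)
print(M, np.degrees(phi))
```

Evaluating this over a log-spaced grid of frequencies produces exactly the two curves plotted in a Bode diagram.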
[Figure 8.11: Bode plot of the transfer function $C(s) = 20 + 10/s + 10s$, corresponding to an ideal PID controller. The top plot is the gain curve and the bottom plot is the phase curve. The dashed lines show straight-line approximations of the gain curve and the corresponding phase curve.]

curves is to use a log-log scale for the gain plot and a log-linear scale for the phase plot. This type of plot is called a Bode plot and is shown in Figure 8.11.

Sketching and Interpreting Bode Plots

Part of the popularity of Bode plots is that they are easy to sketch and interpret. Since the frequency scale is logarithmic, they cover the behavior of a linear system over a wide frequency range.

Consider a transfer function that is a rational function of the form

$$G(s) = \frac{b_1(s)b_2(s)}{a_1(s)a_2(s)}.$$

We have

$$\log|G(s)| = \log|b_1(s)| + \log|b_2(s)| - \log|a_1(s)| - \log|a_2(s)|,$$

and hence we can compute the gain curve by simply adding and subtracting gains corresponding to terms in the numerator and denominator. Similarly,

$$\angle G(s) = \angle b_1(s) + \angle b_2(s) - \angle a_1(s) - \angle a_2(s),$$

and so the phase curve can be determined in an analogous fashion. Since a polynomial can be written as a product of terms of the type

$$k, \qquad s, \qquad s + a, \qquad s^2 + 2\zeta\omega_0 s + \omega_0^2,$$

it suffices to be able to sketch Bode diagrams for these terms. The Bode plot of a complex system is then obtained by adding the gains and phases of the terms.

[Figure 8.12: Bode plots of the transfer functions $G(s) = s^k$ for $k = -2, -1, 0, 1, 2$. On a log-log scale, the gain curve is a straight line with slope $k$. Using a log-linear scale, the phase curves for the transfer functions are constants, with phase equal to $90° \times k$.]

The simplest term in a transfer function is one of the form $s^k$, where $k > 0$ if the term appears in the numerator and $k < 0$ if the term is in the denominator. The gain and phase of the term are given by

$$\log|G(i\omega)| = k\log\omega, \qquad \angle G(i\omega) = 90k.$$

The gain curve is thus a straight line with slope
$k$, and the phase curve is a constant at $90° \times k$. The case when $k = 1$ corresponds to a differentiator and has slope 1 with phase $90°$. The case when $k = -1$ corresponds to an integrator and has slope $-1$ with phase $-90°$. Bode plots of the various powers of $k$ are shown in Figure 8.12.

Consider next the transfer function of a first-order system, given by

$$G(s) = \frac{a}{s + a}.$$

We have

$$|G(s)| = \frac{|a|}{|s + a|}, \qquad \angle G(s) = \angle a - \angle(s + a),$$

and hence

$$\log|G(i\omega)| = \log a - \frac{1}{2}\log(\omega^2 + a^2), \qquad \angle G(i\omega) = -\frac{180}{\pi}\arctan\frac{\omega}{a}.$$

The Bode plot is shown in Figure 8.13a, with the magnitude normalized by the zero frequency gain. Both the gain curve and the phase curve can be approximated by straight lines.

[Figure 8.14: Asymptotic approximation to a Bode plot. The thin line is the Bode plot for the transfer function $G(s) = k(s + b)/\bigl((s + a)(s^2 + 2\zeta\omega_0 s + \omega_0^2)\bigr)$, where $a \ll b \ll \omega_0$. Each segment in the gain and phase curves represents a separate portion of the approximation, where either a pole or a zero begins to have effect. Each segment of the approximation is a straight line between these points, at a slope given by the rules for computing the effects of poles and zeros.]

from the pole end, and we are left with a slope of $45°$/decade from the zero. At the location of the second-order pole, $s = i\omega_0$, we get a jump in phase of $-180°$. Finally, at $s = 10b$, the phase contributions of the zero end, and we are left with a phase of $-180$ degrees. We see that the straight-line approximation for the phase is not as accurate as it was for the gain curve, but it does capture the basic features of the phase changes as a function of frequency.

The Bode plot gives a quick overview of a system. Since any signal can be decomposed into a sum of sinusoids, it is possible to visualize the behavior of a system for different frequency ranges. The system can be viewed as a filter that can change the amplitude and phase of the input signals according to the frequency response. For example, if there are frequency ranges where the gain
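For the first-order factor $G(s) = a/(s + a)$, the accuracy of the straight-line approximation is easy to quantify numerically; the largest gain error, about 3 dB, occurs at the breakpoint $\omega = a$. A small sketch:

```python
import numpy as np

a = 1.0
w = np.array([0.01 * a, a, 100.0 * a])
G = a / (1j * w + a)

exact_db = 20 * np.log10(np.abs(G))
# Straight-line approximation: 0 dB below the breakpoint w = a,
# then -20 dB/decade (slope -1 on a log-log scale) above it
approx_db = np.where(w <= a, 0.0, -20 * np.log10(w / a))
print(exact_db - approx_db)   # error is ~0 far from a, about -3 dB at w = a
```

A decade away from the breakpoint, the approximation is already accurate to within a few hundredths of a decibel.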
curve has constant slope and the phase is close to zero, the action of the system for signals with these frequencies can be interpreted as a pure gain. Similarly, for frequencies where the slope is 1 and the phase close to $90°$, the action of the system can be interpreted as a differentiator, as shown in Figure 8.12.

Three common types of frequency responses are shown in Figure 8.15. The system in Figure 8.15a is called a low-pass filter because the gain is constant for low frequencies and drops for high frequencies. Notice that the phase is zero for low frequencies and $-180°$ for high frequencies. The systems in Figure 8.15b and c are called a band-pass filter and high-pass filter for similar reasons.

[Figure 8.15: Bode plots for low-pass, band-pass, and high-pass filters: (a) low-pass filter $G(s) = \dfrac{\omega_0^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2}$; (b) band-pass filter $G(s) = \dfrac{2\zeta\omega_0 s}{s^2 + 2\zeta\omega_0 s + \omega_0^2}$; (c) high-pass filter $G(s) = \dfrac{s^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2}$. The top plots are the gain curves and the bottom plots are the phase curves. Each system passes frequencies in a different range and attenuates frequencies outside of that range.]

To illustrate how different system behaviors can be read from the Bode plots, we consider the band-pass filter in Figure 8.15b. For frequencies around $\omega = \omega_0$, the signal is passed through with no change in gain. However, for frequencies well below or well above $\omega_0$, the signal is attenuated. The phase of the signal is also affected by the filter, as shown in the phase curve. For frequencies below $\omega_0/100$ there is a phase lead of $90°$, and for frequencies above $100\omega_0$ there is a phase lag of $90°$. These actions correspond to differentiation and integration of the signal in these frequency ranges.

Example 8.9 (Transcriptional regulation). Consider a genetic circuit consisting of a single gene. We wish to study the response of the protein concentration to
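The three filter transfer functions in Figure 8.15 share the same denominator, and their behavior at $\omega_0$ is easy to verify numerically. A small sketch with illustrative parameter values; note also that the three responses sum to one, since their numerators add up to the common denominator:

```python
import numpy as np

w0, zeta = 2.0, 0.3   # illustrative values

def lowpass(s):  return w0**2 / (s**2 + 2*zeta*w0*s + w0**2)
def bandpass(s): return 2*zeta*w0*s / (s**2 + 2*zeta*w0*s + w0**2)
def highpass(s): return s**2 / (s**2 + 2*zeta*w0*s + w0**2)

s0 = 1j * w0
# At w = w0 the band-pass filter passes the signal with unit gain and zero phase
print(abs(bandpass(s0)), np.angle(bandpass(s0)))
```

The unit sum is one way the low-, band-, and high-pass responses partition the frequency axis between them.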
fluctuations in the mRNA dynamics. We consider two cases: a constitutive promoter (no regulation) and self-repression (negative feedback), illustrated in Figure 8.16. The dynamics of the system are given by

$$\frac{dm}{dt} = \alpha(p) - \gamma m + v, \qquad \frac{dp}{dt} = \beta m - \delta p,$$

where $v$ is a disturbance term that affects mRNA transcription.

For the case of no feedback we have $\alpha(p) = \alpha_0$, and the system has an equilibrium point at $m_e = \alpha_0/\gamma$, $p_e = \beta\alpha_0/(\delta\gamma)$. The transfer function from $v$ to $p$ is given by

$$G^{\mathrm{ol}}_{pv}(s) = \frac{\beta}{(s + \gamma)(s + \delta)}.$$

For the case of negative regulation we have

$$\alpha(p) = \frac{\alpha_1}{1 + kp^n} + \alpha_0,$$

and the equilibrium points satisfy

$$m_e = \frac{\delta}{\beta}p_e, \qquad \frac{\alpha_1}{1 + kp_e^n} + \alpha_0 = \gamma m_e = \frac{\gamma\delta}{\beta}p_e.$$

[Figure 8.16: Noise attenuation in a genetic circuit: (a) open loop, (b) negative feedback. The open loop system (a) consists of a constitutive promoter, while the closed loop circuit (b) is self-regulated with negative feedback (repressor). The frequency response for each circuit is shown in (c).]

The resulting transfer function is given by

$$G^{\mathrm{cl}}_{pv}(s) = \frac{\beta}{(s + \gamma)(s + \delta) + \beta\sigma}, \qquad \sigma = \frac{2\beta\alpha k p_e}{(1 + kp_e^n)^2}.$$

Figure 8.16c shows the frequency response for the two circuits. We see that the feedback circuit attenuates the response of the system to disturbances with low-frequency content but slightly amplifies disturbances at high frequency (compared to the open loop system). Notice that these curves are very similar to the frequency response curves for the op amp shown in Figure 8.3b.

Transfer Functions from Experiments

The transfer function of a system provides a summary of the input/output response and is very useful for analysis and design. However, modeling from first principles can be difficult and time-consuming. Fortunately, we can often build an input/output model for a given application by directly measuring the frequency response and fitting a transfer function to it. To do so, we perturb the input to the system using a sinusoidal signal at a fixed frequency. When steady state is reached, the amplitude ratio and the phase lag give
the frequency response for the excitation frequency. The complete frequency response is obtained by sweeping over a range of frequencies.

By using correlation techniques it is possible to determine the frequency response very accurately, and an analytic transfer function can be obtained from the frequency response by curve fitting. The success of this approach has led to instruments and software that automate this process, called spectrum analyzers. We illustrate the basic concept through two examples.

Example 8.10 (Atomic force microscope). To illustrate the utility of spectrum analysis, we consider the dynamics of the atomic force microscope, introduced in Section 3.5. Experimental determination of the frequency response is particularly attractive for this system because its dynamics are very fast and hence experiments can be done quickly. A typical example is given in Figure 8.17, which shows an experimentally determined frequency response (solid line). In this case the frequency response was obtained in less than a second. The transfer function

$$G(s) = \frac{k\omega_2^2\omega_3^2\omega_5^2(s^2 + 2\zeta_1\omega_1 s + \omega_1^2)(s^2 + 2\zeta_4\omega_4 s + \omega_4^2)e^{-s\tau}}{\omega_1^2\omega_4^2(s^2 + 2\zeta_2\omega_2 s + \omega_2^2)(s^2 + 2\zeta_3\omega_3 s + \omega_3^2)(s^2 + 2\zeta_5\omega_5 s + \omega_5^2)},$$

with $\omega_k = 2\pi f_k$ and $f_1 = 2.42$ kHz, $\zeta_1 = 0.03$, $f_2 = 2.55$ kHz, $\zeta_2 = 0.03$, $f_3 = 6.45$ kHz, $\zeta_3 = 0.042$, $f_4 = 8.25$ kHz, $\zeta_4 = 0.025$, $f_5 = 9.3$ kHz, $\zeta_5 = 0.032$, $\tau = 10^{-4}$ s, and $k = 5$, was fit to the data (dashed line).

[Figure 8.17: Frequency response of a preloaded piezoelectric drive for an atomic force microscope. The Bode plot shows the response of the measured transfer function (solid) and the fitted transfer function (dashed).]

The frequencies associated with the zeros are located where the gain curve has minima, and the frequencies associated with the poles are located where the gain curve has local maxima. The relative damping ratios are adjusted to give a good fit to maxima and minima. When a good fit to the gain curve is obtained, the time delay is adjusted to give a good fit to the phase
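The correlation technique mentioned above amounts to multiplying the measured steady-state output by $\cos\omega t$ and $\sin\omega t$ and averaging over whole periods. A minimal sketch; the "measurement" below is synthetic, fabricated to match the known response of $G(s) = 1/(s + 1)$ at $\omega = 1$, not data from the examples:

```python
import numpy as np

def correlate(y, w, t):
    """Estimate gain and phase at frequency w from steady-state output y(t),
    sampled on a uniform grid covering a whole number of periods."""
    dt, T = t[1] - t[0], t[-1] - t[0]
    a = 2.0 / T * np.sum(y * np.cos(w * t)) * dt   # ~ M sin(phi)
    b = 2.0 / T * np.sum(y * np.sin(w * t)) * dt   # ~ M cos(phi)
    return np.hypot(a, b), np.arctan2(a, b)

# G(s) = 1/(s+1) at w = 1 has gain 1/sqrt(2) and phase -45 degrees;
# the steady-state output for the input u = sin(t) is therefore:
w = 1.0
t = np.linspace(0.0, 20 * np.pi, 200001)   # ten whole periods
y = (1 / np.sqrt(2)) * np.sin(w * t - np.pi / 4)
M, phi = correlate(y, w, t)
print(M, np.degrees(phi))
```

Repeating this at each frequency of a sweep gives one point of the Bode plot per excitation, which is what a spectrum analyzer automates.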
curve. The piezo drive is preloaded, and a simple model of its dynamics is derived in Exercise 3.7. The pole at 2.42 kHz corresponds to the trampoline mode derived in the exercise; the other resonances are higher modes.

Example 8.11 (Pupillary light reflex dynamics). The human eye is an organ that is easily accessible for experiments. It has a control system that adjusts the pupil opening to regulate the light intensity at the retina. This control system was explored extensively by Stark in the 1960s [184]. To determine the dynamics, light intensity on the eye was varied sinusoidally and the pupil opening was measured. A fundamental difficulty is that the closed loop system is insensitive to internal system parameters, so analysis of a closed loop system thus

Exercises

8.11 (Common poles). Consider a closed loop system of the form of Figure 8.7, with $F = 1$ and $P$ and $C$ having a common pole. Show that if each system is written in state space form, the resulting closed loop system is not reachable and not observable.

8.12 (Congestion control). Consider the congestion control model described in Section 3.4. Let $w$ represent the individual window size for a set of $N$ identical sources, $q$ represent the end-to-end probability of a dropped packet, $b$ represent the number of packets in the router's buffer, and $p$ represent the probability that a packet is dropped by the router. We write $\bar{w} = Nw$ to represent the total number of packets being received from all $N$ sources. Show that the linearized model can be described by the transfer functions

$$G_{b\bar{w}}(s) = \frac{e^{-\tau_f s}}{\tau_e s + e^{-\tau_f s}}, \qquad G_{\bar{w}q}(s) = \frac{N}{q_e(\tau_e s + q_e w_e)}, \qquad G_{pb}(s) = \rho,$$

where $(w_e, b_e)$ is the equilibrium point for the system, $\tau_e$ is the steady-state round-trip time, and $\tau_f$ is the forward propagation time.

8.13 (Inverted pendulum with PD control). Consider the normalized inverted pendulum system, whose transfer function is given by $P(s) = 1/(s^2 - 1)$ (Exercise 8.3). A proportional-derivative control law for this system has transfer function $C(s) = k_p + k_d s$ (see Table 8.1). Suppose that we choose $C(s) = \alpha(s - 1)$. Compute the closed loop
dynamics and show that the system has good tracking of reference signals but does not have good disturbance rejection properties.

8.14 (Vehicle suspension) [96]. Active and passive damping are used in cars to give a smooth ride on a bumpy road. A schematic diagram of a car with a damping system is shown in the figure below (photo: Porter Class I race car driven by Todd Cuffaro; the schematic shows the road, wheel, actuator, and body, with heights $x_r$, $x_w$, $x_b$ and actuator force $F$).

This model is called a quarter car model, and the car is approximated with two masses, one representing one fourth of the car body and the other a wheel. The actuator exerts a force $F$ between the wheel and the body, based on feedback from the distance between the body and the center of the wheel (the rattle space). Let $x_b$, $x_w$, and $x_r$ represent the heights of body, wheel, and road, measured from their equilibria. A simple model of the system is given by Newton's equations for the body and the wheel:

$$m_b\ddot{x}_b = F, \qquad m_w\ddot{x}_w = -F + k_t(x_r - x_w),$$

where $m_b$ is a quarter of the body mass, $m_w$ is the effective mass of the wheel including brakes and part of the suspension system (the unsprung mass), and $k_t$ is the tire stiffness. For a conventional damper consisting of a spring and a damper, we have $F = k(x_w - x_b) + c(\dot{x}_w - \dot{x}_b)$. For an active damper the force $F$ can be more general and can also depend on riding conditions. Rider comfort can be characterized by the transfer function $G_{ax_r}$ from road height $x_r$ to body acceleration $a = \ddot{x}_b$. Show that this transfer function has the property

$$|G_{ax_r}(i\omega_t)| = \frac{k_t}{m_b}, \qquad \omega_t = \sqrt{\frac{k_t}{m_w}},$$

where $\omega_t$ is the tire hop frequency. The equation implies that there are fundamental limitations to the comfort that can be achieved with any damper.

8.15 (Vibration absorber). Damping vibrations is a common engineering problem. A schematic diagram of a damper is shown below: the disturbing vibration is a sinusoidal force acting on mass $m_1$, and the damper consists of the mass $m_2$ and the spring $k_2$. Show that the transfer function from disturbance force to height $x_1$ of the mass $m_1$ is

$$G_{x_1F}(s) = \frac{m_2 s^2 + k_2}{m_1 m_2 s^4 + m_2 c_1 s^3 + \bigl(m_1 k_2 + m_2(k_1 + k_2)\bigr)s^2 + k_2 c_1 s + k_1 k_2}.$$

How should the mass $m_2$ and the stiffness $k_2$ be chosen to eliminate a sinusoidal oscillation with frequency $\omega_0$? More details on vibration absorbers are given in the classic text by Den Hartog [57, pp. 87-93].

Chapter Nine
Frequency Domain Analysis

Mr. Black proposed a negative feedback repeater and proved by tests that it possessed the advantages which he had predicted for it. In particular, its gain was constant to a high degree, and it was linear enough so that spurious signals caused by the interaction of the various channels could be kept within permissible limits. For best results the feedback factor $\mu\beta$ had to be numerically much larger than unity. The possibility of stability with a feedback factor larger than unity was puzzling. (Harry Nyquist, "The Regeneration Theory," 1956 [161])

In this chapter we study how the stability and robustness of closed loop systems can be determined by investigating how sinusoidal signals of different frequencies propagate around the feedback loop. This technique allows us to reason about the closed loop behavior of a system through the frequency domain properties of the open loop transfer function. The Nyquist stability theorem is a key result that provides a way to analyze stability and introduce measures of degrees of stability.

9.1 The Loop Transfer Function

Determining the stability of systems interconnected by feedback can be tricky because each system influences the other, leading to potentially circular reasoning. Indeed, as the quote from Nyquist above illustrates, the behavior of feedback systems can often be puzzling. However, using the mathematical framework of transfer functions provides an elegant way to reason about such systems, which we call loop analysis.

The basic idea of loop analysis is to trace how a sinusoidal signal propagates in the feedback loop and explore the resulting stability by investigating whether the propagated signal grows or decays. This is easy to do because the transmission of sinusoidal
signals through a linear dynamical system is characterized by the frequency response of the system. The key result is the Nyquist stability theorem, which provides a great deal of insight regarding the stability of a system. Unlike proving stability with Lyapunov functions, studied in Chapter 4, the Nyquist criterion allows us to determine more than just whether a system is stable or unstable. It provides a measure of the degree of stability through the definition of stability margins. The Nyquist theorem also indicates how an unstable system should be changed to make it stable, which we shall study in detail in Chapters 10-12.

Consider the system in Figure 9.1a. The traditional way to determine if the closed loop system is stable is to investigate if the closed loop characteristic polynomial has all its roots in the left half-plane. If the process and the controller have rational

[Figure 9.4: Nyquist plot for a third-order transfer function. The Nyquist plot consists of a trace of the loop transfer function $L(s) = 1/(s + a)^3$. The solid line represents the portion of the transfer function along the positive imaginary axis, and the dashed line the negative imaginary axis. The outer arc of the D contour maps to the origin.]

Nyquist D contour. This arc has the form $s = Re^{i\theta}$ for $R \to \infty$. This gives

$$L(Re^{i\theta}) = \frac{1}{(Re^{i\theta} + a)^3} \to 0 \quad \text{as} \quad R \to \infty.$$

Thus the outer arc of the D contour maps to the origin on the Nyquist plot.

An alternative to computing the Nyquist plot explicitly is to determine the plot from the frequency response (Bode plot), which gives the Nyquist curve for $s = i\omega$, $\omega > 0$. We start by plotting $G(i\omega)$ from $\omega = 0$ to $\omega = \infty$, which can be read off from the magnitude and phase of the transfer function. We then plot $G(Re^{i\theta})$ with $\theta \in [-\pi/2, \pi/2]$ and $R \to \infty$, which almost always maps to zero. The remaining parts of the plot can be determined by taking the mirror image of the curve thus far (normally plotted using a dashed line). The plot can then be labeled with arrows corresponding to a clockwise traversal around the D
contour, the same direction in which the first portion of the curve was plotted.

Example 9.3 (Third-order system with a pole at the origin). Consider the transfer function

L(s) = k/(s(s + 1)²),

where the gain has the nominal value k = 1. The Bode plot is shown in Figure 9.5a. The system has a single pole at s = 0 and a double pole at s = −1. The gain curve of the Bode plot thus has the slope −1 for low frequencies, and at the double pole the slope changes to −3. For small s we have L ≈ k/s, which means that the low-frequency asymptote intersects the unit gain line at ω = k. The phase curve starts at −90° for low frequencies, it is −180° at the breakpoint ω = 1, and it is −270° at high frequencies.

Having obtained the Bode plot, we can now sketch the Nyquist plot, shown in Figure 9.5b. It starts with a phase of −90° for low frequencies, intersects the negative real axis at the breakpoint ω = 1, where L(i) = −0.5, and goes to zero, approaching the origin with phase −270°, at high frequencies.

9.2 THE NYQUIST CRITERION

Figure 9.7: Nyquist curve for the loop transfer function L(s) = 3(s + 6)²/(s(s + 1)²). The plot on the right is an enlargement of the box around the origin of the plot on the left. The Nyquist curve intersects the negative real axis twice but has no net encirclements of −1.

greater than 1. In particular, for a fixed time delay, the system will become unstable as the link capacity c is increased. This indicates that the TCP protocol may not be scalable to high-capacity networks, as pointed out by Low et al. [137]. Exercise 9.7 provides some ideas of how this might be overcome.

Conditional Stability

Normally we find that unstable systems can be stabilized simply by reducing the loop gain. There are, however, situations where a system can be stabilized by increasing the gain. This was first encountered by electrical engineers in the design of feedback amplifiers, who coined the term conditional stability. The problem was actually a strong motivation for Nyquist to develop his theory. We will illustrate it by an example.

Example 9.5 (Third-order system). Consider a feedback system with
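The numbers quoted for Example 9.3 can be checked numerically. A minimal Python sketch (an illustrative aside, not part of the original text):

```python
from cmath import phase
from math import pi

def L(s, k=1.0):
    """Loop transfer function of Example 9.3: L(s) = k/(s (s + 1)^2)."""
    return k / (s * (s + 1) ** 2)

# Crossing of the negative real axis at the breakpoint omega = 1:
crossing = L(1j)                               # equals -0.5 exactly
# Low-frequency phase approaches -90 deg, since L ~ k/s for small s:
low_phase_deg = phase(L(0.001j)) * 180 / pi    # close to -90
```

Evaluating the loop transfer function on the imaginary axis like this is exactly how the Nyquist curve is traced from the frequency response.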
the loop transfer function

L(s) = 3(s + 6)²/(s(s + 1)²).    (9.4)

The Nyquist plot of the loop transfer function is shown in Figure 9.7. Notice that the Nyquist curve intersects the negative real axis twice. The first intersection occurs at L = −12 for ω = 2, and the second at L = −4.5 for ω = 3. The intuitive argument based on signal tracing around the loop in Figure 9.1b is strongly misleading in this case. Injection of a sinusoid with frequency 2 rad/s and amplitude 1 at A gives, in steady state, an oscillation at B that is in phase with the input and has amplitude 12. Intuitively it seems unlikely that closing the loop will result in a stable system. However, it follows from Nyquist's stability criterion that the system is stable, because there are no net encirclements of the critical point. Note, however, that if we decrease the gain, then we can get an encirclement, implying that the gain must be sufficiently large for stability.

9.3 STABILITY MARGINS

Figure 9.9: Stability margins. The gain margin gm and phase margin ϕm are shown on the Nyquist plot (a) and the Bode plot (b). The gain margin corresponds to the smallest increase in gain that creates an encirclement, and the phase margin is the smallest change in phase that creates an encirclement. The Nyquist plot also shows the stability margin sm, which is the shortest distance to the critical point −1.

it is easy to plot the loop transfer function L(s). An increase in controller gain simply expands the Nyquist plot radially. An increase in the phase of the controller twists the Nyquist plot. Hence from the Nyquist plot we can easily pick off the amount of gain or phase that can be added without causing the system to become unstable. Formally, the gain margin gm of a system is defined as the smallest amount that the open loop gain can be increased before the closed loop system goes unstable. For a system whose phase decreases
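The two real-axis crossings of Example 9.5 can be verified by direct evaluation; a short Python check (illustrative, not from the original text):

```python
def L(s):
    """Loop transfer function (9.4): L(s) = 3 (s + 6)^2 / (s (s + 1)^2)."""
    return 3 * (s + 6) ** 2 / (s * (s + 1) ** 2)

first = L(2j)    # crossing at omega = 2 rad/s: equals -12
second = L(3j)   # crossing at omega = 3 rad/s: equals -4.5
```

Both values are real and lie to the left of the critical point −1, which is why reducing the gain can create an encirclement.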
monotonically as a function of frequency starting at 0°, the gain margin can be computed based on the smallest frequency where the phase of the loop transfer function L(s) is −180°. Let ωpc represent this frequency, called the phase crossover frequency. Then the gain margin for the system is given by

gm = 1/|L(iωpc)|.    (9.5)

Similarly, the phase margin is the amount of phase lag required to reach the stability limit. Let ωgc be the gain crossover frequency, the smallest frequency where the loop transfer function L(s) has unit magnitude. Then for a system with monotonically decreasing gain, the phase margin is given by

ϕm = π + arg L(iωgc).    (9.6)

These margins have simple geometric interpretations on the Nyquist diagram of the loop transfer function, as shown in Figure 9.9a, where we have plotted the portion of the curve corresponding to ω > 0. The gain margin is given by the inverse of the distance to the nearest point between −1 and 0 where the loop transfer function crosses the negative real axis. The phase margin is given by the smallest angle on the unit circle between −1 and the loop transfer function. When the gain or phase is monotonic, this geometric interpretation agrees with the formulas above.

Figure 9.10: Stability margins for a third-order transfer function. The Nyquist plot on the left allows the gain, phase, and stability margins to be determined by measuring the distances of relevant features. The gain and phase margins can also be read off the Bode plot on the right.

A drawback with gain and phase margins is that it is necessary to give both of them in order to guarantee that the Nyquist curve is not close to the critical point. An alternative way to express margins is by a single number, the stability margin sm, which is the shortest distance from the Nyquist curve to the critical point. This number is related to disturbance attenuation, as will be discussed in Section 11.3. For many systems, the
gain and phase margins can be determined from the Bode plot of the loop transfer function. To find the gain margin, we first find the phase crossover frequency ωpc, where the phase is −180°. The gain margin is the inverse of the gain at that frequency. To determine the phase margin, we first determine the gain crossover frequency ωgc, i.e., the frequency where the gain of the loop transfer function is 1. The phase margin is the phase of the loop transfer function at that frequency plus 180°. Figure 9.9b illustrates how the margins are found in the Bode plot of the loop transfer function. Note that the Bode plot interpretation of the gain and phase margins can be incorrect if there are multiple frequencies at which the gain is equal to 1 or the phase is equal to −180°.

Example 9.7 (Third-order system). Consider a loop transfer function L(s) = 3/(s + 1)³. The Nyquist and Bode plots are shown in Figure 9.10. To compute the gain, phase, and stability margins, we can use the Nyquist plot shown in Figure 9.10. This yields the following values:

gm = 2.67,  ϕm = 41.7°,  sm = 0.464.

The gain and phase margins can also be determined from the Bode plot.

The gain and phase margins are classical robustness measures that have been used for a long time in control system design. The gain margin is well defined if the Nyquist curve intersects the negative real axis once. Analogously, the phase margin is well defined if the Nyquist curve intersects the unit circle at only one point. Other, more general robustness measures will be introduced in Chapter 12.

Figure 9.11: System with good gain and phase margins but a poor stability margin. Nyquist (a) and Bode (b) plots of the loop transfer function and step response (c) for a system with good gain and phase margins but with a poor stability margin. The Nyquist plot shows only the portion of the curve corresponding to ω > 0.

Even if both the gain and phase margins are reasonable,
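The margins of Example 9.7 can be reproduced from the definitions (9.5) and (9.6); a Python sketch (the stability margin is found here by a brute-force grid search, an illustrative shortcut):

```python
from math import atan, degrees, sqrt

# Example 9.7: L(s) = 3/(s + 1)^3, so |L(iw)| = 3/(1 + w^2)^(3/2)
# and arg L(iw) = -3 atan(w).

# Phase crossover: -3 atan(w) = -180 deg  =>  w_pc = tan(60 deg) = sqrt(3)
wpc = sqrt(3.0)
gm = (1 + wpc**2) ** 1.5 / 3                  # gain margin (9.5): 8/3 = 2.67

# Gain crossover: (1 + w^2)^(3/2) = 3  =>  w_gc = sqrt(3^(2/3) - 1)
wgc = sqrt(3.0 ** (2.0 / 3.0) - 1.0)
pm = 180.0 - 3.0 * degrees(atan(wgc))         # phase margin (9.6): about 41.7 deg

# Stability margin: shortest distance from the Nyquist curve to -1,
# found by scanning a frequency grid.
sm = min(abs(3 / (1j * w + 1) ** 3 + 1.0)
         for w in (0.001 * k for k in range(1, 5001)))   # about 0.46
```

The grid search illustrates why sm is a single-number summary: it measures closeness to the critical point over the whole curve, not just at the two crossover frequencies.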
the system may still not be robust, as is illustrated by the following example.

Example 9.8 (Good gain and phase margins but poor stability margins). Consider a system with the loop transfer function

L(s) = 0.38(s² + 0.1s + 0.55)/(s(s + 1)(s² + 0.06s + 0.5)).

A numerical calculation gives the gain margin as gm = 266, and the phase margin is 70°. These values indicate that the system is robust, but the Nyquist curve is still close to the critical point, as shown in Figure 9.11. The stability margin is sm = 0.27, which is very low. The closed loop system has two resonant modes, one with damping ratio ζ = 0.81 and the other with ζ = 0.014. The step response of the system is highly oscillatory, as shown in Figure 9.11c.

The stability margin cannot easily be found from the Bode plot of the loop transfer function. There are, however, other Bode plots that will give sm; these will be discussed in Chapter 12. In general, it is best to use the Nyquist plot to check stability, since this provides more complete information than the Bode plot.

When designing feedback systems, it will often be useful to define the robustness of the system using gain, phase, and stability margins. These numbers tell us how much the system can vary from our nominal model and still be stable. Reasonable values of the margins are phase margin ϕm = 30°-60°, gain margin gm = 2-5, and stability margin sm = 0.5-0.8.

There are also other stability measures, such as the delay margin, which is the smallest time delay required to make the system unstable. For loop transfer functions that decay quickly, the delay margin is closely related to the phase margin, but for systems where the gain curve of the loop transfer function has several peaks at high frequencies, the delay margin is a more relevant measure.

Figure 9.12: Nyquist and Bode plots of the loop transfer function for the AFM system (9.7) with an integral controller. The frequency in the Bode plot is normalized by ω0. The parameters are ζ
= 0.01 and ki = 0.008.

Example 9.9 (Nanopositioning system for an atomic force microscope). Consider the system for horizontal positioning of the sample in an atomic force microscope. The system has oscillatory dynamics, and a simple model is a spring-mass system with low damping. The normalized transfer function is given by

P(s) = ω0²/(s² + 2ζω0s + ω0²),    (9.7)

where the damping ratio typically is a very small number, e.g., ζ = 0.1.

We will start with a controller that has only integral action. The resulting loop transfer function is

L(s) = kiω0²/(s(s² + 2ζω0s + ω0²)),

where ki is the gain of the controller. Nyquist and Bode plots of the loop transfer function are shown in Figure 9.12. Notice that the part of the Nyquist curve that is close to the critical point −1 is approximately circular. From the Bode plot in Figure 9.12b, we see that the phase crossover frequency is ωpc = ω0, which is independent of the gain ki. Evaluating the loop transfer function at this frequency, we have L(iω0) = −ki/(2ζω0), which means that the gain margin is gm = 2ζω0/ki. To have a desired gain margin of gm, the integral gain should therefore be chosen as

ki = 2ζω0/gm.

Figure 9.12 shows Nyquist and Bode plots for the system with gain margin gm = 1.67 and stability margin sm = 0.597. The gain curve in the Bode plot is almost a straight line for low frequencies and has a resonant peak at ω = ω0. The gain crossover frequency is approximately equal to ki. The phase decreases monotonically from −90° to −270°; it is equal to −180° at ω = ω0. The curve can be shifted vertically by changing ki: increasing ki shifts the gain curve upward and increases the gain crossover frequency. Since the phase is −180° at the resonant peak, it is necessary that the peak not touch the line |L(iω)| = 1.

Figure 9.13: Bode plots of systems
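The relation between ki and the gain margin can be sanity-checked numerically; a brief Python sketch (the target margin and parameter values here are illustrative assumptions, not the figure's exact values):

```python
zeta, w0 = 0.01, 1.0              # lightly damped mode, normalized frequency (assumed)
gm_target = 2.0                   # desired gain margin (an assumed design choice)
ki = 2 * zeta * w0 / gm_target    # design rule ki = 2*zeta*w0/gm

def L(s):
    """Integral control of the AFM model (9.7)."""
    return ki * w0**2 / (s * (s**2 + 2 * zeta * w0 * s + w0**2))

val = L(1j * w0)      # crossing of the negative real axis: -ki/(2 zeta w0)
gm = 1 / abs(val)     # measured gain margin equals the target
```

At s = iω0 the resonant quadratic collapses to 2ζω0·iω0, which is why the crossing value and hence the gain margin come out in closed form.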
that are not minimum phase: (a) time delay, G(s) = e^{−sT}; (b) a system with a right half-plane (RHP) zero, G(s) = (a − s)/(a + s); and (c) a system with a right half-plane pole. The corresponding minimum phase system has the transfer function G(s) = 1 in all cases; the phase curves for that system are shown as dashed lines.

While the poles are intrinsic properties of the system and do not depend on sensors and actuators, the zeros depend on how the inputs and outputs of a system are coupled to the states. Zeros can thus be changed by moving sensors and actuators or by introducing new sensors and actuators. Nonminimum phase systems are unfortunately quite common in practice.

The following example gives a system-theoretic interpretation of the common experience that it is more difficult to drive in reverse gear, and it illustrates some of the properties of transfer functions in terms of their poles and zeros.

Example 9.10 (Vehicle steering). The non-normalized transfer function from steering angle to lateral velocity for the simple vehicle model is

G(s) = (av0s + v0²)/(bs),

where v0 is the velocity of the vehicle and a, b > 0 (see Example 5.12). The transfer function has a zero at s = −v0/a. In normal driving this zero is in the left half-plane, but it is in the right half-plane when driving in reverse, v0 < 0. The unit step response is

y(t) = av0/b + v0²t/b.

The lateral velocity thus responds immediately to a steering command. For reverse steering v0 is negative, and the initial response is in the wrong direction, a behavior that is representative of nonminimum phase systems (called an inverse response). Figure 9.14 shows the step response for forward and reverse driving. In this simulation we have added an extra pole with the time constant T to approximately account for the dynamics in the steering system. The parameters are a = b = 1, T = 0.1, v0 = 1 for forward driving, and v0 = −1 for reverse driving. Notice that for t > t0 = a/v0, where t0 is the time required to drive the distance a, the step response for reverse driving is that of forward driving with the time delay t0.

where the
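The inverse response is visible directly in the step response formula; a tiny Python sketch (the steering-lag pole with time constant T is omitted here for simplicity):

```python
a, b = 1.0, 1.0   # parameters from the example

def y(t, v0):
    """Unit step response y(t) = a v0/b + v0^2 t/b of G(s) = (a v0 s + v0^2)/(b s)."""
    return a * v0 / b + v0**2 * t / b

y_fwd0 = y(0.0, 1.0)          # forward driving: initial response +1 (correct direction)
y_rev0 = y(0.0, -1.0)         # reverse driving: initial response -1 (inverse response)
t0 = a / 1.0                  # time required to drive the distance a
zero_crossing = y(t0, -1.0)   # the reverse response crosses zero at t = t0
```

The sign flip of the initial value, with the same eventual slope, is exactly the nonminimum phase behavior described above.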
inverse is obtained after simple calculations. Figure 9.17b shows the response of the relay to a sinusoidal input, with the first harmonic of the output shown as a dashed line. Describing function analysis is illustrated in Figure 9.17c, which shows the Nyquist plot of the transfer function L(s) = 2/(s + 1)⁴ (dashed line) and the negative inverse describing function of a relay with b = 1 and c = 0.5. The curves intersect for a = 1 and ω = 0.77 rad/s, indicating the amplitude and frequency of a possible oscillation if the process and the relay are connected in a feedback loop.

9.6 Further Reading

Nyquist's original paper giving his now famous stability criterion was published in the Bell System Technical Journal in 1932 [160]. More accessible versions are found in the book [27], which also includes other interesting early papers on control. Nyquist's paper is also reprinted in an IEEE collection of seminal papers on control [23]. Nyquist used +1 as the critical point, but Bode changed it to −1, which is now the standard notation. Interesting perspectives on early developments are given by Black [36], Bode [41], and Bennett [29]. Nyquist did a direct calculation based on his insight into the propagation of sinusoidal signals through systems; he did not use results from the theory of complex functions. The idea that a short proof can be given by using the principle of variation of the argument is presented in the delightful book by MacColl [140]. Bode made extensive use of complex function theory in his book [40], which laid the foundation for frequency response analysis, where the notion of minimum phase was treated in detail. A good source for complex function theory is the classic by Ahlfors [6]. Frequency response analysis was a key element in the emergence of control theory, as described in the early texts by James et al. [110], Brown and Campbell [46], and Oldenburger [163], and it became one of the cornerstones of early control theory. Frequency response underwent a resurgence when robust control emerged in the 1980s, as will be discussed in Chapter 12.
Exercises

9.1 (Operational amplifier). Consider an op amp circuit with Z1 = Z2 that gives a closed loop system with nominally unit gain. Let the transfer function of the operational amplifier be

G(s) = ka1a2/((s + a)(s + a1)(s + a2)),

where a1, a2 ≫ a. Show that the condition for oscillation is k = a1 + a2, and compute the gain margin of the system. Hint: Assume a = 0.

9.2 (Atomic force microscope). The dynamics of the tapping mode of an atomic force microscope are dominated by the damping of the cantilever vibrations and the system that averages the vibrations. Modeling the cantilever as a spring-mass

10.1 BASIC CONTROL FUNCTIONS

Figure 10.2: Responses to step changes in the reference value for a system with a proportional controller (a), PI controller (b), and PID controller (c). The process has the transfer function P(s) = 1/(s + 1)³, the proportional controller has parameters kp = 1, 2, and 5, the PI controller has parameters kp = 1, ki = 0, 0.2, 0.5, and 1, and the PID controller has parameters kp = 2.5, ki = 1.5, and kd = 0, 1, 2, and 4.

value. If we choose uff = r/P(0) = krr, then the output will be exactly equal to the reference value, as it was in the state space case, provided that there are no disturbances. However, this requires exact knowledge of the process dynamics, which is usually not available. The parameter uff, called reset in the PID literature, must therefore be adjusted manually.

As we saw in Section 6.4, integral action guarantees that the process output agrees with the reference in steady state and provides an alternative to the feedforward term. Since this result is so important, we will provide a general proof. Consider the controller given by equation (10.1). Assume that there exists a steady state with u = u0 and e = e0. It then follows from equation (10.1) that

u0 = kpe0 + kie0t,

which is a contradiction unless e0 or ki is zero. We can thus
conclude that with integral action the error will be zero if it reaches a steady state. Notice that we have not made any assumptions about the linearity of the process or the disturbances. We have, however, assumed that an equilibrium exists. Using integral action to achieve zero steady-state error is much better than using feedforward, which requires precise knowledge of the process parameters.

The effect of integral action can also be understood from frequency domain analysis. The transfer function of the PID controller is

C(s) = kp + ki/s + kds.    (10.4)

The controller has infinite gain at zero frequency (C(0) = ∞), and it then follows from equation (10.2) that Gyr(0) = 1, which implies that there is no steady-state error.

10.2 SIMPLE CONTROLLERS FOR COMPLEX SYSTEMS

Figure 10.5: Integral control for AFM in tapping mode. An integral controller is designed based on the slope of the process transfer function at 0. The controller gives good robustness properties based on a very simple analysis.

system, we find that the integral gain is given by ki = 1/(TclP(0)). The analysis requires that Tcl be sufficiently large that the process transfer function can be approximated by a constant. For systems that are not well represented by a constant gain, we can obtain a better approximation by using the Taylor series expansion of the loop transfer function:

L(s) = kiP(s)/s ≈ ki(P(0) + sP′(0))/s = kiP′(0) + kiP(0)/s.

Choosing kiP′(0) = −0.5 gives a system with good robustness, as will be discussed in Section 12.5. The controller gain is then given by

ki = −1/(2P′(0)),    (10.6)

and the expected closed loop time constant is Tcl ≈ −2P′(0)/P(0).

Example 10.2 (Integral control of AFM in tapping mode). A simplified model of the dynamics of the vertical motion of an atomic force microscope in tapping mode was discussed in Exercise 9.2. The transfer function for the system dynamics is

P(s) = a(1 − e^{−sτ})/(sτ(s + a)),

where a = ζω0, τ = 2πn/ω0, and the gain has been normalized to 1. We have P(0) = 1 and P′(0) = −τ/2 − 1/a,
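The slope-based design rule (10.6) can be checked numerically for this model; a short Python sketch with illustrative parameter values (the values of a and τ are assumptions for the check):

```python
from math import exp

a, tau = 1.0, 0.5   # illustrative values for the normalized model

def P(s):
    """Tapping-mode AFM model: P(s) = a (1 - e^{-s tau}) / (s tau (s + a))."""
    return a * (1.0 - exp(-s * tau)) / (s * tau * (s + a))

h = 1e-5
dP0 = (P(h) - P(-h)) / (2 * h)        # numerical P'(0); the singularity at 0 is removable
ki_rule = -1.0 / (2.0 * dP0)          # design rule (10.6): ki = -1/(2 P'(0))
ki_closed_form = a / (2.0 + a * tau)  # closed-form result quoted in the example
```

The central difference evaluates P just off the origin, which sidesteps the removable singularity while approximating the slope to second order in h.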
and it follows from (10.6) that the integral gain can be chosen as ki = a/(2 + aτ). Nyquist and Bode plots for the resulting loop transfer function are shown in Figure 10.5.

CHAPTER 10. PID CONTROL

A first-order system has the transfer function

P(s) = b/(s + a).

With a PI controller, the closed loop system has the characteristic polynomial

s(s + a) + bkps + bki = s² + (a + bkp)s + bki.

The closed loop poles can thus be assigned arbitrary values by proper choice of the controller gains. Requiring that the closed loop system have the characteristic polynomial

p(s) = s² + a1s + a2,

we find that the controller parameters are

kp = (a1 − a)/b,  ki = a2/b.    (10.7)

If we require a response of the closed loop system that is slower than that of the open loop system, a reasonable choice is a1 = a + α and a2 = αa. If a response faster than that of the open loop system is required, it is reasonable to choose a1 = 2ζω0 and a2 = ω0², where ω0 and ζ are the undamped natural frequency and damping ratio of the dominant mode. These choices have significant impact on the robustness of the system and will be discussed in Section 12.4. An upper limit to ω0 is given by the validity of the model. Large values of ω0 will require fast control actions, and actuators may saturate if the value is too large. A first-order model is unlikely to represent the true dynamics at high frequencies. We illustrate the design by an example.

Example 10.3 (Cruise control using PI feedback). Consider the problem of maintaining the speed of a car as it goes up a hill. In Example 5.14 we found that there was little difference between the linear and nonlinear models when investigating PI control, provided that the throttle did not reach the saturation limits. A simple linear model of a car was given in Example 5.11:

d(v − ve)/dt = −a(v − ve) + b(u − ue) − gθ,    (10.8)

where v is the velocity of the car, u is the input from the engine, and θ is the slope of the hill. The parameters were a = 0.0101, b = 1.3203, g = 9.8, ve = 20, and ue = 0.1616. This model will be used to find suitable parameters of a vehicle speed controller. The transfer function from throttle to
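The pole placement formulas (10.7) can be exercised on the cruise control numbers; a minimal Python sketch (ζ = 1 and ω0 = 0.5 are taken as the desired closed loop specification):

```python
a, b = 0.0101, 1.3203        # cruise control model parameters (10.8)
zeta, w0 = 1.0, 0.5          # desired closed loop damping and natural frequency
a1, a2 = 2 * zeta * w0, w0**2

kp = (a1 - a) / b            # pole placement gains (10.7)
ki = a2 / b

# Closed loop characteristic polynomial s^2 + (a + b kp) s + b ki:
coeff_s = a + b * kp         # recovers a1 = 1.0
coeff_1 = b * ki             # recovers a2 = 0.25
```

With these numbers the gains come out near kp ≈ 0.75 and ki ≈ 0.19, i.e., modest control effort even though the open loop pole at −a is very slow.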
velocity is a first-order system. Since the open loop dynamics is so slow, it is natural to specify a faster closed loop system by requiring that the closed loop system be of second order with damping ratio ζ and undamped natural frequency ω0. The controller gains are given by (10.7).

Figure 10.6 shows the velocity and the throttle for a car that initially moves on a horizontal road and encounters a hill with a slope of 4° at time t = 6 s. To design a PI controller we choose ζ = 1 to obtain a response without overshoot, as shown in Figure 10.6a. The choice of ω0 is a compromise between response speed

Figure 10.6: Cruise control using PI feedback. The step responses for the error and input illustrate the effect of the parameters ζ and ω0 on the response of a car with cruise control. A change in road slope from 0° to 4° is applied between t = 5 and 6 s. (a) Responses for ω0 = 0.5 and ζ = 0.5, 1, and 2; choosing ζ = 1 gives no overshoot. (b) Responses for ζ = 1 and ω0 = 0.2, 0.5, and 1.

and control actions: a large value gives a fast response, but it requires fast control action. The trade-off is illustrated in Figure 10.6b. The largest velocity error decreases with increasing ω0, but the control signal also changes more rapidly. In the simple model (10.8) it was assumed that the force responds instantaneously to throttle commands. For rapid changes there may be additional dynamics that have to be accounted for. There are also physical limitations on the rate of change of the force, which also restricts the admissible value of ω0. A reasonable choice of ω0 is in the range 0.5-1.0. Notice in Figure 10.6 that even with ω0 = 0.2 the largest velocity error is only 1 m/s.

A PI controller can also be used for a process with second-order dynamics, but there will be restrictions on the possible locations of the closed loop
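The design can also be exercised in simulation. A rough forward-Euler sketch of the linearized model (10.8) under PI control, with a 4° slope applied at t = 6 s (the step size and horizon are arbitrary choices for this illustration):

```python
from math import pi

a, b, g = 0.0101, 1.3203, 9.8
zeta, w0 = 1.0, 0.5
kp, ki = (2 * zeta * w0 - a) / b, w0**2 / b   # gains from (10.7)

dt, t_end = 0.01, 60.0
x = 0.0    # velocity error v - ve
I = 0.0    # integral of the velocity error
t = 0.0
max_err = 0.0
while t < t_end:
    theta = 4 * pi / 180 if t >= 6.0 else 0.0   # road slope in radians
    u = -kp * x - ki * I                        # PI throttle deviation from ue
    x += dt * (-a * x + b * u - g * theta)
    I += dt * x
    t += dt
    max_err = max(max_err, abs(x))

final_err = abs(x)   # integral action drives the error back toward zero
```

For ω0 = 0.5 and ζ = 1 the peak velocity error stays around half a meter per second and decays back to zero, consistent with the trade-off discussed above.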
poles. Using a PID controller, it is possible to control a system of second order in such a way that the closed loop poles have arbitrary locations; see Exercise 10.2.

Instead of finding a low-order model and designing a controller for it, we can also use a high-order model and attempt to place only a few dominant poles. An integral controller has one parameter, and it is possible to position one pole. Consider a process with the transfer function P(s). The loop transfer function with an integral controller is L(s) = kiP(s)/s. The roots of the closed loop characteristic polynomial are the roots of s + kiP(s) = 0. Requiring that s = −a be a root, we find that the controller gain should be chosen as

ki = a/P(−a).    (10.9)

The pole s = −a will be dominant if a is small. A similar approach can be applied to PI and PID controllers.

Figure 10.7: Ziegler-Nichols step and frequency response experiments. The unit step response in (a) is characterized by the parameters a and τ. The frequency response method (b) characterizes the process dynamics by the point where the Nyquist curve of the process transfer function first intersects the negative real axis and the frequency ωc where this occurs.

10.3 PID Tuning

Users of control systems are frequently faced with the task of adjusting the controller parameters to obtain a desired behavior. There are many different ways to do this. One approach is to go through the conventional steps of modeling and control design as described in the previous section. Since the PID controller has so few parameters, a number of special empirical methods have also been developed for direct adjustment of the controller parameters. The first tuning rules were developed by Ziegler and Nichols [210]. Their idea was to perform a simple experiment, extract some features of process dynamics from the experiment, and determine the controller parameters from the features.

Ziegler-Nichols Tuning

In the 1940s, Ziegler and Nichols developed
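The dominant pole rule (10.9) is easy to verify numerically; a small Python sketch with an assumed process (P(s) = 1/(s + 1)³ and a = 0.2 are illustration choices, not from the text):

```python
def P(s):
    """Assumed example process: P(s) = 1/(s + 1)^3."""
    return 1.0 / (s + 1) ** 3

a = 0.2
ki = a / P(-a)              # rule (10.9): ki = a/P(-a) = 0.2 * 0.8^3 = 0.1024
residual = -a + ki * P(-a)  # s + ki P(s) evaluated at s = -a: zero by construction
```

By construction s = −a satisfies the characteristic equation; whether it is actually dominant has to be checked against the remaining closed loop poles.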
two methods for controller tuning based on simple characterizations of process dynamics in the time and frequency domains.

The time domain method is based on a measurement of part of the open loop unit step response of the process, as shown in Figure 10.7a. The step response is measured by applying a unit step input to the process and recording the response. The response is characterized by the parameters a and τ, which are the intercepts of the steepest tangent of the step response with the coordinate axes. The parameter τ is an approximation of the time delay of the system, and a/τ is the steepest slope of the step response. Notice that it is not necessary to wait until steady state is reached to find the parameters; it suffices to wait until the response has had an inflection point. The controller parameters are given in Table 10.1. The parameters were obtained by extensive simulation of a range of representative processes. A controller was

Table 10.1: Ziegler-Nichols tuning rules. (a) The step response method gives the parameters in terms of the intercept a and the apparent time delay τ. (b) The frequency response method gives the controller parameters in terms of the critical gain kc and critical period Tc.

(a) Step response method
Type   kp     Ti    Td
P      1/a
PI     0.9/a  3τ
PID    1.2/a  2τ    0.5τ

(b) Frequency response method
Type   kp     Ti     Td
P      0.5kc
PI     0.4kc  0.8Tc
PID    0.6kc  0.5Tc  0.125Tc

tuned manually for each process, and an attempt was then made to correlate the controller parameters with a and τ.

In the frequency domain method, a controller is connected to the process, the integral and derivative gains are set to zero, and the proportional gain is increased until the system starts to oscillate. The critical value of the proportional gain kc is observed together with the period of oscillation Tc. It follows from Nyquist's stability criterion that the loop transfer function L = kcP(s) intersects the critical point at the frequency ωc = 2π/Tc. The experiment thus gives the point on the Nyquist curve of the process transfer
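Table 10.1 amounts to two small lookup functions; a Python sketch returning (kp, Ti, Td), with None marking entries absent from the table:

```python
def zn_step(a, tau, kind="PID"):
    """Ziegler-Nichols step response method: intercept a, apparent time delay tau."""
    return {
        "P":   (1.0 / a, None, None),
        "PI":  (0.9 / a, 3 * tau, None),
        "PID": (1.2 / a, 2 * tau, 0.5 * tau),
    }[kind]

def zn_freq(kc, Tc, kind="PID"):
    """Ziegler-Nichols frequency response method: critical gain kc, critical period Tc."""
    return {
        "P":   (0.5 * kc, None, None),
        "PI":  (0.4 * kc, 0.8 * Tc, None),
        "PID": (0.6 * kc, 0.5 * Tc, 0.125 * Tc),
    }[kind]
```

For example, a process with a = 0.5 and τ = 1 gives the PID setting kp = 2.4, Ti = 2, Td = 0.5; as the text cautions, such settings are starting points for manual tuning rather than final designs.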
function where the phase lag is 180°, as shown in Figure 10.7b.

The Ziegler-Nichols methods had a huge impact when they were introduced in the 1940s. The rules were simple to use and gave initial conditions for manual tuning. The ideas were adopted by manufacturers of controllers for routine use. The Ziegler-Nichols tuning rules unfortunately have two severe drawbacks: too little process information is used, and the closed loop systems that are obtained lack robustness.

The step response method can be improved significantly by characterizing the unit step response by the parameters K, τ, and T in the model

P(s) = Ke^{−τs}/(1 + sT).    (10.10)

The parameters can be obtained by fitting the model to a measured step response. Notice that this experiment takes a longer time than the experiment in Figure 10.7a, because to determine K it is necessary to wait until the steady state has been reached. Also notice that the intercept a in the Ziegler-Nichols rule is given by a = Kτ/T.

The frequency response method can be improved by measuring more points on the Nyquist curve, e.g., the zero frequency gain K or the point where the process has a 90° phase lag. This latter point can be obtained by connecting an integral controller and increasing its gain until the system reaches the stability limit. The experiment can also be automated by using relay feedback, as will be discussed later in this section.

There are many versions of improved tuning rules. As an illustration, we give

Having obtained the critical gain kc and the critical period Tc, the controller parameters can then be determined using the Ziegler-Nichols rules. Improved tuning can be obtained by fitting a model to the data obtained from the relay experiment.

The relay experiment can be automated. Since the amplitude of the oscillation is proportional to the relay output, it is easy to control it by adjusting the relay output. Automatic tuning based on relay feedback is used in many commercial PID controllers. Tuning is accomplished simply by pushing a
button that activates relay feedback. The relay amplitude is automatically adjusted to keep the oscillations sufficiently small, and the relay feedback is switched to a PID controller as soon as the tuning is finished.

10.4 Integrator Windup

Many aspects of a control system can be understood from linear models. There are, however, some nonlinear phenomena that must be taken into account. These are typically limitations in the actuators: a motor has limited speed, a valve cannot be more than fully opened or fully closed, etc. For a system that operates over a wide range of conditions, it may happen that the control variable reaches the actuator limits. When this happens, the feedback loop is broken and the system runs in open loop, because the actuator remains at its limit independently of the process output as long as it remains saturated. The integral term will also build up, since the error is typically nonzero. The integral term and the controller output may then become very large. The control signal will then remain saturated even when the error changes, and it may take a long time before the integrator and the controller output come inside the saturation range. The consequence is that there are large transients. This situation is referred to as integrator windup, illustrated in the following example.

Example 10.5 (Cruise control). The windup effect is illustrated in Figure 10.10a, which shows what happens when a car encounters a hill that is so steep (6°) that the throttle saturates when the cruise controller attempts to maintain speed. When encountering the slope at time t = 5, the velocity decreases and the throttle increases to generate more torque. However, the torque required is so large that the throttle saturates. The error decreases slowly, because the torque generated by the engine is just a little larger than the torque required to compensate for gravity. The error is large, and the integral continues to build up until the error reaches zero at time 30, but the controller output is
still larger than the saturation limit, and the actuator remains saturated. The integral term then starts to decrease, and at time 45 the velocity settles quickly to the desired value. Notice that it takes considerable time before the controller output comes into the range where it does not saturate, resulting in a large overshoot.

There are many methods to avoid windup. One method is illustrated in Figure 10.11: the system has an extra feedback path that is generated by measuring the actual actuator output, or the output of a mathematical model of the saturating

Figure 10.10: Simulation of PI cruise control with windup (a) and anti-windup (b). The figure shows the speed v and the throttle u for a car that encounters a slope so steep that the throttle saturates. The controller output is a dashed line. The controller parameters are kp = 0.5 and ki = 0.1. The anti-windup compensator eliminates the overshoot by preventing the error from building up in the integral term of the controller.

actuator, and forming an error signal es as the difference between the output of the controller v and the actuator output u. The signal es is fed to the input of the integrator through the gain kt. The signal es is zero when there is no saturation, and the extra feedback loop then has no effect on the system. When the actuator saturates, the signal es is fed back to the integrator in such a way that es goes toward zero. This implies that the controller output is kept close to the saturation limit. The controller output will then change as soon as the error changes sign, and integral windup is avoided. The rate at which the controller output is reset is governed by the feedback gain kt; a large value of kt gives a short reset time. The parameter kt cannot be too large, because measurement noise can then cause an
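The tracking scheme can be demonstrated on a toy loop. A Python sketch (an integrator process with tight actuator limits, chosen deliberately to provoke windup; this is not the cruise control model itself):

```python
def simulate(kt, kp=1.0, ki=0.3, dt=0.01, t_end=60.0):
    """PI control of dx/dt = u with actuator limits [-0.1, 0.1].
    kt = 0 disables the anti-windup tracking feedback."""
    x = I = t = 0.0
    r, peak = 1.0, 0.0
    while t < t_end:
        e = r - x
        v = kp * e + ki * I              # raw controller output
        u = max(-0.1, min(0.1, v))       # saturating actuator
        I += dt * (e + kt * (u - v))     # integrator with tracking feedback
        x += dt * u
        peak = max(peak, x)
        t += dt
    return peak

peak_windup = simulate(kt=0.0)       # large overshoot: the integral winds up
peak_antiwindup = simulate(kt=2.0)   # overshoot largely removed
```

With kt = 0 the integral grows throughout the long saturated ramp and the output overshoots far past the reference; with the tracking term the integrator is held near the value that keeps v at the saturation limit, so the output turns around almost as soon as the error changes sign.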
undesirable reset. A reasonable choice is to choose the tracking gain kt as a fraction of 1/Ti.

We illustrate how integral windup can be avoided by investigating the cruise control system.

Example 10.6 (Cruise control with anti-windup). Figure 10.10b shows what happens when a controller with anti-windup is applied to the system simulated in Figure 10.10a. Because of the feedback from the actuator model, the output of the integrator is quickly reset to a value such that the controller output is at the saturation limit. The behavior is drastically different from that in Figure 10.10a, and the large overshoot is avoided. The tracking gain is kt = 2 in the simulation.

Figure 10.12. Time and frequency responses for PI cruise control with setpoint weighting. Step responses are shown in (a) and the gain curves of the frequency responses in (b). The controller gains are kp = 0.74 and ki = 0.19; the setpoint weights are β = 0, 0.5 and 1, and γ = 0.

For the op amp circuits in Figure 10.13, the relation between the input voltage e and the output voltage u follows from the circuit impedances, which are given by

  Z1(s) = R1/(1 + R1C1s),  Z2(s) = R2 + 1/(C2s),

and we find the following relation between the input voltage e and the output voltage u:

  u = -(Z2/Z1) e = -(R2/R1) (1 + R1C1s)(1 + R2C2s)/(R2C2s) e.

This is the input/output relation for a PID controller of the form (10.1) with parameters kp = R2/R1, Ti = R2C2, Td = R1C1.

Figure 10.13. Schematic diagrams for PI and PID controllers using op amps. The circuit in (a) uses a capacitor in the feedback path to store the integral of the error. The circuit in (b) adds a filter on the input to provide derivative action.

This can be rewritten as

  D(tk) = Tf/(Tf + h) D(tk-1) - kd/(Tf + h) (y(tk) - y(tk-1)),   (10.17)

The advantage of using a backward difference is that the parameter Tf/(Tf + h) is nonnegative and less than 1 for all h > 0, which guarantees that the
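The anti-windup scheme described above can be sketched numerically. The code below is a minimal illustration under assumed values, not the text's cruise control model: the first-order process, setpoint and gains are chosen for the example, and setting the tracking gain kt to zero recovers the plain PI controller with windup.

```python
# Minimal sketch of PI control with back-calculation anti-windup.
# The first-order process y' = -0.1*y + 0.5*u, the setpoint r and the
# gains are illustrative assumptions, not the text's cruise control model.

def simulate(kt, kp=0.5, ki=0.1, r=4.0, umax=1.0, h=0.01, tf=150.0):
    """Simulate the loop; kt = 0 disables the anti-windup feedback."""
    y, I = 0.0, 0.0
    ys = []
    for _ in range(int(tf / h)):
        e = r - y
        v = kp * e + I                    # unsaturated controller output
        u = max(-umax, min(umax, v))      # actuator saturation
        I += h * (ki * e + kt * (u - v))  # integrator with tracking term
        y += h * (-0.1 * y + 0.5 * u)     # process dynamics
        ys.append(y)
    return ys

def overshoot(ys, r=4.0):
    return max(ys) - r

windup = simulate(kt=0.0)      # plain PI: the integrator winds up
antiwindup = simulate(kt=2.0)  # extra feedback resets the integrator
print(overshoot(windup), overshoot(antiwindup))
```

With the actuator saturated during the step, the plain PI controller overshoots heavily while the tracking term keeps the anti-windup controller's output near the saturation limit, so it desaturates as soon as the error changes sign.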
difference equation is stable. Reorganizing equations (10.15)-(10.17), the PID controller can be described by the following pseudocode:

  % Precompute controller coefficients
  bi = ki*h
  ad = Tf/(Tf + h)
  bd = kd/(Tf + h)
  br = h/Tt

  % Control algorithm - main loop
  while running do
    r = adin(ch1)                    % read setpoint from ch1
    y = adin(ch2)                    % read process variable from ch2
    P = kp*(b*r - y)                 % compute proportional part
    D = ad*D - bd*(y - yold)         % update derivative part
    v = P + I + D                    % compute temporary output
    u = sat(v, ulow, uhigh)          % simulate actuator saturation
    daout(ch1, u)                    % set analog output ch1
    I = I + bi*(r - y) + br*(u - v)  % update integral part
    yold = y                         % update old process output
    sleep(h)                         % wait until next update interval
  end

Precomputation of the coefficients bi, ad, bd and br saves computer time in the main loop. These calculations have to be done only when controller parameters are changed. The main loop is executed once every sampling period. The program has three states: yold, I and D. One state variable can be eliminated at the cost of less readable code. The latency between reading the analog input and setting the analog output consists of four multiplications, four additions and an evaluation of the sat function. All computations can be done using fixed-point calculations if necessary. Notice that the code computes the filtered derivative of the process output and that it has setpoint weighting and anti-windup protection.

10.6 Further Reading

The history of PID control is very rich and stretches back to the beginning of the foundation of control theory. Very readable treatments are given by Bennett [28, 29] and Mindell [152]. The Ziegler-Nichols rules for tuning PID controllers, first presented in 1942 [210], were developed based on extensive experiments with pneumatic simulators and Vannevar Bush's differential analyzer at MIT. An interesting view of the development of the Ziegler-Nichols rules is given in an interview with Ziegler [39]. An industrial perspective on PID control is given in [33], [180] and [205], and in the paper [58] cited in the beginning of this chapter. A comprehensive presentation of PID control is given in [16]. Interactive
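The pseudocode translates directly into Python. In the sketch below the analog I/O calls (adin/daout) are replaced by a simulated first-order process 1/(s + 1), and all numerical values are illustrative assumptions rather than values from the text:

```python
# Python version of the discrete PID loop in the text: filtered derivative,
# setpoint weighting b and anti-windup tracking gain 1/Tt. The analog I/O
# (adin/daout) is replaced by a simulated process 1/(s+1); all gains and
# the process model are illustrative assumptions.

def sat(v, lo, hi):
    return max(lo, min(hi, v))

# Controller parameters (illustrative) and precomputed coefficients
kp, ki, kd, Tf, Tt, b, h = 2.0, 1.0, 0.5, 0.1, 1.0, 1.0, 0.01
bi, ad, bd, br = ki * h, Tf / (Tf + h), kd / (Tf + h), h / Tt
ulow, uhigh = -10.0, 10.0

# Controller states and simulated process state
I, D, yold, y = 0.0, 0.0, 0.0, 0.0
r = 1.0                                  # setpoint

for _ in range(int(10.0 / h)):           # main loop, one pass per period h
    P = kp * (b * r - y)                 # proportional part
    D = ad * D - bd * (y - yold)         # filtered derivative of output
    v = P + I + D                        # temporary output
    u = sat(v, ulow, uhigh)              # actuator saturation
    I = I + bi * (r - y) + br * (u - v)  # integral part + anti-windup
    yold = y
    y += h * (-y + u)                    # process y' = -y + u (stand-in for I/O)

print(y)   # settles near the setpoint r = 1
```

As in the pseudocode, the coefficients are computed once outside the loop, and the three states yold, I and D carry over between sampling periods.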
learning tools for PID control can be downloaded from http://www.calerga.com/contrib.

Exercises

10.1 (Ideal PID controllers). Consider the systems represented by the block diagrams in Figure 10.1. Assume that the process has the transfer function P(s) = b/(s + a) and show that the transfer functions from r to y are

  (a) Gyr(s) = (bkd s^2 + bkp s + bki)/((1 + bkd)s^2 + (a + bkp)s + bki),
  (b) Gyr(s) = bki/((1 + bkd)s^2 + (a + bkp)s + bki).

Pick some parameters and compare the step responses of the systems.

10.2 Consider a second-order process with the transfer function P(s) = b/(s^2 + a1 s + a2). The closed loop system with a PI controller is a third-order system. Show that it is possible to position the closed loop poles as long as the sum of the poles is -a1. Give equations for the parameters that give the closed loop characteristic polynomial (s + α0)(s^2 + 2ζ0ω0 s + ω0^2).

10.3 Consider a system with the transfer function P(s) = (s + 1)^-2. Find an integral controller that gives a closed loop pole at s = -a and determine the value of a that maximizes the integral gain. Determine the other poles of the system and judge if the pole can be considered dominant. Compare with the value of the integral gain given by equation (10.6).

10.4 (Ziegler-Nichols tuning). Consider a system with transfer function P(s) = e^-s/s. Determine the parameters of P, PI and PID controllers using the Ziegler-Nichols step and frequency response methods. Compare the parameter values obtained by the different rules and discuss the results.

10.5 (Vehicle steering). Design a proportional-integral controller for the vehicle steering system that gives the closed loop characteristic polynomial s^3 + 2ω0 s^2 + 2ω0^2 s + ω0^3.

10.6 (Congestion control). A simplified flow model for TCP transmission is derived in [101, 137]. The linearized dynamics are modeled by the transfer function

  Gqp(s) = b e^(-sτe)/((s + a1)(s + a2)),

Chapter Eleven

Frequency Domain Design

"Sensitivity improvements in one frequency range must be paid for with sensitivity deteriorations in another frequency range, and the price is higher if the plant is open-loop unstable. This applies to every controller, no matter how it was designed."
Gunter Stein, in the inaugural IEEE Bode Lecture, 1989 [185]

In this chapter we continue to explore the use of frequency domain techniques, with a focus on the design of feedback systems. We begin with a more thorough description of the performance specifications for control systems and then introduce the concept of loop shaping as a mechanism for designing controllers in the frequency domain. We also introduce some fundamental limitations to performance for systems with time delays and right half-plane poles and zeros.

11.1 Sensitivity Functions

In the previous chapter we considered the use of proportional-integral-derivative (PID) feedback as a mechanism for designing a feedback controller for a given process. In this chapter we will expand our approach to include a richer repertoire of tools for shaping the frequency response of the closed loop system.

One of the key ideas in this chapter is that we can design the behavior of the closed loop system by focusing on the open loop transfer function. This same approach was used in studying stability using the Nyquist criterion: we plotted the Nyquist plot for the open loop transfer function to determine the stability of the closed loop system. From a design perspective, the use of loop analysis tools is very powerful: since the loop transfer function is L = PC, if we can specify the desired performance in terms of properties of L, we can directly see the impact of changes in the controller C. This is much easier, for example, than trying to reason directly about the tracking response of the closed loop system, whose transfer function is given by Gyr = PC/(1 + PC).

We will start by investigating some key properties of the feedback loop. A block diagram of a basic feedback loop is shown in Figure 11.1. The system loop is composed of two components, the process and the controller. The controller itself has two blocks, the feedback block C and the feedforward block F. There are two disturbances acting on the
process, the load disturbance d and the measurement noise n. The load disturbance represents disturbances that drive the process away from its desired behavior, while the measurement noise represents disturbances that corrupt information about the process given by the sensors. In the figure the load disturbance is assumed to enter at the process input.

Figure 11.2. A more general representation of a feedback system. The process input u represents the control signal, which can be manipulated, and the process input w represents other signals that influence the process. The process output y is the vector of measured variables and z are other signals of interest.

The feedforward part F of the controller influences only the response to command signals. In Chapter 9 we focused on the loop transfer function, and we found that its properties gave useful insight into the properties of a system. To make a proper assessment of a feedback system it is necessary to consider the properties of all the transfer functions (11.2) in the Gang of Six or the Gang of Four, as illustrated in the following example.

Example 11.1 (The loop transfer function gives only limited insight). Consider a process with the transfer function P(s) = 1/(s - a), controlled by a PI controller with error feedback having the transfer function C(s) = k(s - a)/s. The loop transfer function is L = k/s, and the sensitivity functions are

  T = PC/(1 + PC) = k/(s + k),
  PS = P/(1 + PC) = s/((s - a)(s + k)),
  CS = C/(1 + PC) = k(s - a)/(s + k),
  S = 1/(1 + PC) = s/(s + k).

Notice that the factor s - a is canceled when computing the loop transfer function and that this factor also does not appear in the sensitivity function or the complementary sensitivity function. However, cancellation of the factor is very serious if a > 0, since the transfer function PS, relating load disturbances to process output, is then unstable. In particular, a small disturbance d can lead to an unbounded output, which is clearly not desirable.

The system in Figure 11.1 represents a special case, because it is assumed that the load disturbance enters
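Example 11.1's point can be checked numerically. The sketch below realizes the controller in state-space form and applies a small constant load disturbance at the process input; the values of a, k, the disturbance size and the step size are assumptions chosen for the illustration.

```python
# Example 11.1 numerically: the factor s - a cancels in L = PC = k/s,
# yet the unstable mode survives in PS = P/(1 + PC). The controller
# C(s) = k(s - a)/s = k - k*a/s is realized as u = k*e - k*a*z, z' = e.
# The values of a, k, the disturbance d and the step h are assumptions.

a, k, d, h = 0.1, 1.0, 0.01, 0.001
x, z = 0.0, 0.0          # process state and controller integrator
trace = []
for _ in range(int(100.0 / h)):
    e = -x               # error feedback with reference r = 0
    u = k * e - k * a * z
    z += h * e
    x += h * (a * x + u + d)   # process P(s) = 1/(s - a) plus load disturbance
    trace.append(x)

# T = k/(s + k) predicts a benign loop, but the output diverges because
# the disturbance excites the hidden unstable mode at s = a > 0.
print(trace[len(trace) // 2], trace[-1])
```

The closed loop characteristic polynomial of this realization is (s + k)(s - a), so the output grows like e^(at) even though none of the transfer functions computed from L alone reveal the unstable mode.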
at the process input and that the measured output is the sum of the process variable and the measurement noise. Disturbances can enter in many different ways, and the sensors may have dynamics. A more abstract way to capture the general case is shown in Figure 11.2, which has only two blocks representing the process (P) and the controller (C). The process has two inputs, the control signal u and a vector of disturbances w, and two outputs, the measured signal y and a vector of signals z that is used to specify performance. The system in Figure 11.1 can be captured by choosing w = (d, n) and z = (η, ν, e, ϵ). The process transfer function P is then a 4 x 3 matrix and the controller transfer function C is a 1 x 2 matrix; compare with Exercise 11.3.

11.2 Feedforward Design

...the error signal is zero and there will be no feedback action. If there are disturbances or modeling errors, the signals ym and y will differ. The feedback then attempts to bring the error to zero. To make a formal analysis we compute the transfer function from reference input to process output:

  Gyr(s) = (PCFm + PFu)/(1 + PC) = Fm + (PFu - Fm)/(1 + PC),   (11.4)

where P = P2P1. The first term represents the desired transfer function. The second term can be made small in two ways: feedforward compensation can be used to make PFu - Fm small, or feedback compensation can be used to make 1 + PC large. Perfect feedforward compensation is obtained by choosing

  Fu = Fm/P.   (11.5)

Design of feedforward using transfer functions is thus a very simple task. Notice that the feedforward compensator Fu contains an inverse model of the process dynamics.

Feedback and feedforward have different properties. Feedforward action is obtained by matching two transfer functions, requiring precise knowledge of the process dynamics, while feedback attempts to make the error small by dividing it by a large quantity. For a controller having integral action, the loop gain is large for low frequencies, and it is thus sufficient to make sure that the condition for ideal feedforward holds at higher frequencies. This
is easier than trying to satisfy the condition (11.5) for all frequencies.

We will now consider reduction of the effects of the load disturbance d in Figure 11.3 by feedforward control. We assume that the disturbance signal is measured and that the disturbance enters the process dynamics in a known way, captured by P1 and P2. The effect of the disturbance can be reduced by feeding the measured signal through a dynamical system with the transfer function Fd. Assuming that the reference r is zero, we can use block diagram algebra to find the transfer function from the disturbance to the process output:

  Gyd = P2(1 + Fd P1)/(1 + PC),   (11.6)

where P = P1P2. The effect of the disturbance can be reduced by making 1 + Fd P1 small (feedforward) or by making 1 + PC large (feedback). Perfect compensation is obtained by choosing

  Fd = -P1^-1,   (11.7)

requiring inversion of the transfer function P1.

As in the case of reference tracking, disturbance attenuation can be accomplished by combining feedback and feedforward control. Since low-frequency disturbances can be eliminated by feedback, we require the use of feedforward only for high-frequency disturbances, and the transfer function Fd in equation (11.7) can then be computed using an approximation of P1 for high frequencies.

Figure 11.4. Feedforward control for vehicle steering. The overhead view (a) shows the trajectory generated by the controller for changing lanes. The plots in (b) show the lateral deviation y (top) and the steering angle δ (bottom) for a smooth lane change using feedforward based on the linearized model.

Equations (11.5) and (11.7) give analytic expressions for the feedforward compensator. To obtain a transfer function that can be implemented without difficulties, we require that the feedforward compensator be stable and that it does not require differentiation. Therefore there may be constraints on possible choices of the desired
response Fm, and approximations are needed if the process has zeros in the right half-plane or time delays.

Example 11.2 (Vehicle steering). A linearized model for vehicle steering was given in Example 6.4. The normalized transfer function from steering angle δ to lateral deviation y is P(s) = (γs + 1)/s^2. For a lane transfer system we would like to have a nice response without overshoot, and we therefore choose the desired response as Fm(s) = a^2/(s + a)^2, where the response speed or aggressiveness of the steering is governed by the parameter a. Equation (11.5) gives

  Fu = Fm/P = a^2 s^2/((γs + 1)(s + a)^2),

which is a stable transfer function as long as γ > 0. Figure 11.4 shows the responses of the system for a = 0.5. The figure shows that a lane change is accomplished in about 10 vehicle lengths, with smooth steering angles. The largest steering angle is slightly larger than 0.1 rad (6°). Using the scaled variables, the curve showing lateral deviations (y as a function of t) can also be interpreted as the vehicle path (y as a function of x), with the vehicle length as the length unit.

A major advantage of controllers with two degrees of freedom that combine feedback and feedforward is that the control design problem can be split in two parts. The feedback controller C can be designed to give good robustness and effective disturbance attenuation, and the feedforward part can be designed independently to give the desired response to command signals.

11.3 Performance Specifications

A key element of the control design process is how we specify the desired performance of the system. It is also important for users to understand performance specifications so that they know what to ask for and how to test a system. Specifications are often given in terms of robustness to process variations and responses to reference signals and disturbances. They can be given in terms of both time and frequency responses. Specifications for the step response to reference signals were given in Figure 5.9 in Section 5.3 and in
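The compensator in Example 11.2 can be verified numerically: by construction, P multiplied by Fu must reproduce the desired response Fm at every frequency. Here a = 0.5 as in the example, while γ = 0.5 is an assumed value chosen for the illustration.

```python
# Checking equation (11.5) for Example 11.2: with P(s) = (γs + 1)/s^2 and
# Fm(s) = a^2/(s + a)^2, the feedforward compensator
# Fu(s) = Fm/P = a^2 s^2 / ((γs + 1)(s + a)^2)
# satisfies P*Fu = Fm at every frequency. Here a = 0.5 as in the example;
# gamma = 0.5 is an assumed value for the illustration.

gamma, a = 0.5, 0.5

def P(s):  return (gamma * s + 1) / s**2
def Fm(s): return a**2 / (s + a)**2
def Fu(s): return a**2 * s**2 / ((gamma * s + 1) * (s + a)**2)

for w in (0.01, 0.1, 1.0, 10.0, 100.0):
    s = 1j * w
    assert abs(P(s) * Fu(s) - Fm(s)) < 1e-9

print("P*Fu = Fm on the imaginary axis; Fu is stable and proper for gamma > 0")
```

Note that Fu has all its poles in the left half-plane only because γ > 0; for γ < 0 (a right half-plane zero in P) the inversion would be unstable, which is exactly the constraint discussed in the text.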
Section 6.3. Robustness specifications based on frequency domain concepts were provided in Section 9.3 and will be considered further in Chapter 12. The specifications discussed previously were based on the loop transfer function. Since we found in Section 11.1 that a single transfer function does not always characterize the properties of the closed loop completely, we give a more complete discussion of specifications in this section, based on the full Gang of Six.

The transfer function gives a good characterization of the linear behavior of a system. To provide specifications it is desirable to capture the characteristic properties of a system with a few parameters. Common features for time responses are overshoot, rise time and settling time, as shown in Figure 5.9. Common features of frequency responses are resonant peak, peak frequency, gain crossover frequency and bandwidth. A resonant peak is a maximum of the gain, and the peak frequency is the corresponding frequency. The gain crossover frequency is the frequency where the open loop gain is equal to one. The bandwidth is defined as the frequency range where the closed loop gain is 1/√2 of the low-frequency gain (low-pass), mid-frequency gain (band-pass) or high-frequency gain (high-pass). There are interesting relations between specifications in the time and frequency domains. Roughly speaking, the behavior of time responses for short times is related to the behavior of frequency responses at high frequencies, and vice versa; the precise relations are not trivial to derive.

Response to Reference Signals

Consider the basic feedback loop in Figure 11.1. The response to reference signals is described by the transfer functions Gyr = PCF/(1 + PC) and Gur = CF/(1 + PC) (with F = 1 for systems with error feedback). Notice that it is useful to consider both the response of the output and that of the control signal. In particular, the control signal response allows us to judge the magnitude and rate of the control signal required to obtain the output response.

Example 11.3
(Third-order system). Consider a process with the transfer function P(s) = (s + 1)^-3 and a PI controller with error feedback having the gains kp = 0.6 and ki = 0.5. The responses are illustrated in Figure 11.5. The solid lines show results for a proportional-integral (PI) controller with error feedback. The dashed lines show results for a controller with feedforward designed to give the transfer function Gyr = (0.5s + 1)^-3. Looking at the time responses, we find that the controller with feedforward gives a faster response with no overshoot. However, much larger control signals are required to obtain the fast response. The largest value of the control signal is 8, compared to 1.2 for the regular PI controller. The controller with feedforward has a larger bandwidth (marked with a circle) and no resonant peak. The transfer function Gur also has higher gain at high frequencies.

Figure 11.5. Reference signal responses. The responses in process output y and control signal u to a unit step in the reference signal r are shown in (a), and the gain curves of Gyr and Gur are shown in (b). Results with PI control with error feedback are shown by solid lines; the dashed lines show results for a controller with a feedforward compensator.

Response to Load Disturbances and Measurement Noise

A simple criterion for disturbance attenuation is to compare the output of the closed loop system in Figure 11.1 with the output of the corresponding open loop system obtained by setting C = 0. If we let the disturbances for the open and closed loop systems be identical, the output of the closed loop system is obtained simply by passing the open loop output through a system with the transfer function S. The sensitivity function tells how the variations in the output are influenced by feedback (Exercise 11.7).
Disturbances with frequencies such that |S(iω)| < 1 are attenuated, but disturbances with frequencies such that |S(iω)| > 1 are amplified by feedback. The maximum sensitivity Ms, which occurs at the frequency ωms, is thus a measure of the largest amplification of the disturbances. The maximum magnitude of 1/(1 + L) corresponds to the minimum of |1 + L|, which is precisely the stability margin sm defined in Section 9.3, so that Ms = 1/sm. The maximum sensitivity is therefore also a robustness measure.

If the sensitivity function is known, the potential improvements by feedback can be evaluated simply by recording a typical output and filtering it through the sensitivity function. A plot of the gain curve of the sensitivity function is a good way to make an assessment of the disturbance attenuation. Since the sensitivity function depends only on the loop transfer function, its properties can also be visualized graphically using the Nyquist plot of the loop transfer function. This is illustrated in Figure 11.6. The complex number 1 + L(iω) can be represented as the vector from the point -1 to the point L(iω) on the Nyquist curve. The sensitivity is thus less than 1 for all points outside a circle with radius 1 and center at -1; disturbances with frequencies in this range are attenuated by the feedback.

Figure 11.6. Graphical interpretation of the sensitivity function. Gain curves of the loop transfer function and the sensitivity function (a) can be used to calculate the properties of the sensitivity function through the relation S = 1/(1 + L). The sensitivity crossover frequency ωsc and the frequency ωms where the sensitivity has its largest value are indicated. The Nyquist plot (b) shows the same information in a different form; all points inside the dashed circle have sensitivities greater than 1.

The transfer function Gyd from load disturbance d to process output y
for the system in Figure 11.1 is

  Gyd = P/(1 + PC) = PS = T/C.   (11.8)

Since load disturbances typically have low frequencies, it is natural to focus on the behavior of the transfer function at low frequencies. For a system with P(0) ≠ 0 and a controller with integral action, the controller gain goes to infinity for small frequencies, and we have the following approximation for small s:

  Gyd = T/C ≈ 1/C ≈ s/ki,   (11.9)

where ki is the integral gain. Since the sensitivity function S goes to 1 for large s, we have the approximation Gyd ≈ P for high frequencies.

Measurement noise, which typically has high frequencies, generates rapid variations in the control variable that are detrimental because they cause wear in many actuators and can even saturate an actuator. It is thus important to keep variations in the control signal due to measurement noise at reasonable levels; a typical requirement is that the variations are only a fraction of the span of the control signal. The variations can be influenced by filtering and by proper design of the high-frequency properties of the controller. The effects of measurement noise are captured by the transfer function from the measurement noise to the control signal,

  Gun = -C/(1 + PC) = -CS = -T/P.   (11.10)

The complementary sensitivity function is close to 1 for low frequencies (ω < ωgc), and there Gun can be approximated by -1/P. The sensitivity function is close to 1 for high frequencies (ω > ωgc), and there Gun can be approximated by -C.

Figure 11.7. Disturbance responses. The time and frequency responses of process output y to load disturbance d are shown in (a), and the responses of control signal u to measurement noise n are shown in (b).

Example 11.4 (Third-order system). Consider a process with the transfer function P(s) = (s + 1)^-3 and a proportional-
integral-derivative (PID) controller with gains kp = 0.6, ki = 0.5 and kd = 2.0. We augment the controller using a second-order noise filter with Tf = 0.1, so that its transfer function is

  C(s) = (kd s^2 + kp s + ki)/(s(s^2 Tf^2/2 + s Tf + 1)).

The system responses are illustrated in Figure 11.7. The response of the output to a step in the load disturbance (top of Figure 11.7a) has a peak of 0.28 at time t = 2.73. The frequency response in Figure 11.7a shows that the gain has a maximum of 0.58 at ω = 0.7.

The response of the control signal to a step in measurement noise is shown in Figure 11.7b. The high-frequency roll-off of the transfer function Gun(iω) is due to filtering; without it, the gain curve in Figure 11.7b would continue to rise after 20 rad/s. The step response has a peak of 13 at t = 0.08. The frequency response has its peak, 20, at ω = 14. Notice that the peak occurs far above the peak of the response to load disturbances and far above the gain crossover frequency ωgc = 0.78. An approximation derived in Exercise 11.9 gives max |CS(iω)| ≈ kd/Tf = 20, which occurs at ω = √2/Tf ≈ 14.1.

11.4 Feedback Design via Loop Shaping

One advantage of the Nyquist stability theorem is that it is based on the loop transfer function, which is related to the controller transfer function through L = PC. It is thus easy to see how the controller influences the loop transfer function. To make an unstable system stable we simply have to bend the Nyquist curve away from the critical point.

This simple idea is the basis of several different design methods collectively called loop shaping. These methods are based on choosing a compensator that gives a loop transfer function with a desired shape. One possibility is to determine a loop transfer function that gives a closed loop system with the desired properties and to compute the controller as C = L/P. Another is to start with the process transfer function, change its gain and then add poles and zeros until the desired shape is obtained. In this section we will explore different loop-shaping methods for
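The frequency-domain claims above can be spot-checked numerically for Example 11.4's process and controller; the evaluation frequencies and grid are implementation choices.

```python
# Spot-checking Example 11.4: Gyd = P/(1 + PC) ≈ s/ki at low frequency and
# ≈ P at high frequency (equation (11.9)), and max |CS(iω)| ≈ kd/Tf = 20.
# P and the controller are from the example; the grids are our choices.

kp, ki, kd, Tf = 0.6, 0.5, 2.0, 0.1

def P(s): return 1.0 / (s + 1)**3
def C(s): return (kd * s**2 + kp * s + ki) / (s * (s**2 * Tf**2 / 2 + s * Tf + 1))

def Gyd(s): return P(s) / (1 + P(s) * C(s))
def CS(s):  return C(s) / (1 + P(s) * C(s))

# Low- and high-frequency approximations of Gyd
s_lo, s_hi = 1e-4j, 1e4j
rel_lo = abs(Gyd(s_lo) - s_lo / ki) / abs(s_lo / ki)
rel_hi = abs(Gyd(s_hi) - P(s_hi)) / abs(P(s_hi))

# Peak of |CS| over a logarithmic grid, 0.01 .. 1000 rad/s
ws = [10**(k / 200.0) for k in range(-400, 601)]
peak = max(abs(CS(1j * w)) for w in ws)

print(rel_lo, rel_hi, peak)   # peak should land close to kd/Tf = 20
```

The peak of |CS| occurs near ω = √2/Tf ≈ 14, where the loop gain |PC| is already tiny, so CS ≈ C there; this is why filtering the derivative term is what limits the noise gain.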
control law design.

Design Considerations

We will first discuss a suitable shape for the loop transfer function that gives good performance and good stability margins. Figure 11.8 shows a typical loop transfer function. Good robustness requires good stability margins (or good gain and phase margins), which imposes requirements on the loop transfer function around the crossover frequencies ωpc and ωgc. The gain of L at low frequencies must be large in order to have good tracking of command signals and good attenuation of low-frequency disturbances. Since S = 1/(1 + L), it follows that for frequencies where |L| > 101, disturbances will be attenuated by a factor of 100 and the tracking error is less than 1%. It is therefore desirable to have a large crossover frequency and a steep (negative) slope of the gain curve. The gain at low frequencies can be increased by a controller with integral action, which is also called lag compensation. To avoid injecting too much measurement noise into the system, the loop transfer function should have low gain at high frequencies, a property called high-frequency roll-off. The choice of gain crossover frequency is a compromise among attenuation of load disturbances, injection of measurement noise and robustness.

Bode's relations (see Section 9.4) impose restrictions on the shape of the loop transfer function. Equation (9.8) implies that the slope of the gain curve at gain crossover cannot be too steep. If the gain curve has a constant slope, we have the following relation between the slope ngc and the phase margin ϕm:

  ngc = -2 + 2ϕm/π  (ϕm in rad).   (11.11)

Figure 11.8. Gain curve and sensitivity functions for a typical loop transfer function. The plot on the left shows the gain curve, and the plots on the right show the sensitivity function and complementary sensitivity function. The gain crossover frequency ωgc and the slope ngc of the gain curve at crossover
are important parameters that determine the robustness of closed loop systems. At low frequency a large magnitude for L provides good load disturbance rejection and reference tracking, while at high frequency a small loop gain is used to avoid amplifying measurement noise.

This formula is a reasonable approximation when the gain curve does not deviate too much from a straight line. It follows from equation (11.11) that the phase margins 30°, 45° and 60° correspond to the slopes -5/3, -3/2 and -4/3.

Loop shaping is a trial-and-error procedure. We typically start with a Bode plot of the process transfer function. We then attempt to shape the loop transfer function by changing the controller gain and adding poles and zeros to the controller transfer function. Different performance specifications are evaluated for each controller as we attempt to balance many different requirements by adjusting controller parameters and complexity. Loop shaping is straightforward to apply to single-input, single-output systems. It can also be applied to systems with one input and many outputs by closing the loops one at a time, starting with the innermost loop. The only limitation for minimum phase systems is that large phase leads and high controller gains may be required to obtain closed loop systems with a fast response. Many specific procedures are available; they all require experience, but they also give good insight into the conflicting requirements. There are fundamental limitations to what can be achieved for systems that are not minimum phase; they will be discussed in the next section.

Lead and Lag Compensation

A simple way to do loop shaping is to start with the transfer function of the process and add simple compensators with the transfer function

  C(s) = k(s + a)/(s + b).   (11.12)

The compensator is called a lead compensator if a < b, and a lag compensator if a > b. The PI controller is a special case of a lag compensator with b = 0, and
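Relation (11.11) is exact for a pure power-law loop transfer function L(s) = s^ngc, whose phase is ngc(π/2) at every frequency; a quick check of the three slopes quoted above:

```python
# Equation (11.11) is exact for L(s) = s**ngc: the phase is ngc*(π/2)
# everywhere, so at the gain crossover |L| = 1 the phase margin is
# ϕm = π + ngc*(π/2), i.e. ngc = -2 + 2*ϕm/π.

import cmath, math

for ngc in (-5 / 3, -3 / 2, -4 / 3):
    L = (1j * 1.0) ** ngc            # |L| = 1 at ω = 1: the crossover point
    pm = math.pi + cmath.phase(L)    # phase margin in radians
    assert abs(ngc - (-2 + 2 * pm / math.pi)) < 1e-9
    print(round(math.degrees(pm)))   # 30, 45, 60
```

For real loop transfer functions the gain curve is only approximately a straight line near crossover, so the formula is a guideline rather than an identity, as the text notes.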
the ideal PD controller is a special case of a lead compensator with a = 0. Bode plots of lead and lag compensators are shown in Figure 11.9. Lag compensation, which increases the gain at low frequencies, is typically used to improve tracking performance and disturbance attenuation at low frequencies. Compensators that are tailored to specific disturbances can also be designed, as shown in Exercise 11.10. Lead compensation is typically used to improve the phase margin. The following examples give illustrations.

Figure 11.9. Frequency response for lead and lag compensators C(s) = k(s + a)/(s + b). Lead compensation (a) occurs when a < b and provides phase lead between ω = a and ω = b. Lag compensation (b) corresponds to a > b and provides low-frequency gain. PI control is a special case of lag compensation and PD control is a special case of lead compensation; their frequency responses are shown by dashed curves.

Example 11.5 (Atomic force microscope in tapping mode). A simple model of the dynamics of the vertical motion of an atomic force microscope in tapping mode was given in Exercise 9.2. The transfer function for the system dynamics is

  P(s) = a(1 - e^(-sτ))/(sτ(s + a)),

where a = ζω0, τ = 2πn/ω0, and the gain has been normalized to 1. A Bode plot of this transfer function for the parameters a = 1 and τ = 0.25 is shown by the dashed curves in Figure 11.10a. To improve the attenuation of load disturbances, we increase the low-frequency gain by introducing an integral controller. The loop transfer function then becomes L = ki P(s)/s, and we adjust the gain so that the phase margin is zero, giving ki = 8.3. Notice the increase of the gain at low frequencies. The Bode plot is shown by the dotted line in Figure 11.10a, where the critical point is indicated by a circle.

To improve the phase margin, we introduce proportional action and increase the proportional gain kp gradually until reasonable values of the sensitivities are obtained. The value kp = 3.5 gives maximum sensitivity Ms = 1.6 and
maximum complementary sensitivity Mt = 1.3. The loop transfer function is shown by the solid lines in Figure 11.10a. Notice the significant increase of the phase margin compared with the purely integral controller (dotted line).

Figure 11.10. Loop-shaping design of a controller for an atomic force microscope in tapping mode. (a) Bode plots of the process (dashed), the loop transfer function for an integral controller with critical gain (dotted) and a PI controller adjusted to give reasonable robustness (solid). (b) Gain curves for the Gang of Four for the system.

To evaluate the design, we also compute the gain curves of the transfer functions in the Gang of Four. They are shown in Figure 11.10b. The peaks of the sensitivity curves are reasonable, and the plot of PS shows that the largest value of PS is 0.3, which implies that the load disturbances are well attenuated. The plot of CS shows that the largest controller gain is 6. The controller has a gain of 3.5 at high frequencies, and hence we may consider adding high-frequency roll-off.

A common problem in the design of feedback systems is that the phase margin is too small, and phase lead must then be added to the system. If we set a < b in equation (11.12), we add phase lead in the frequency range between the pole/zero pair, extending approximately a factor of 10 in frequency in each direction. By appropriately choosing the location of this phase lead, we can provide additional phase margin at the gain crossover frequency. Because the phase of a transfer function is related to the slope of the magnitude, increasing the phase requires increasing the gain of the loop transfer function over the frequency range in which the lead compensation is applied. In Exercise
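The sensitivity peaks quoted in Example 11.5 can be checked by evaluating the loop transfer function on a frequency grid. The grid below is an implementation choice, and the reconstruction of the process model follows the expression given in the example.

```python
# Checking Example 11.5 numerically: PI control of the tapping-mode AFM
# model P(s) = a(1 - e^(-sτ))/(sτ(s + a)) with a = 1, τ = 0.25, kp = 3.5,
# ki = 8.3. The text quotes Ms ≈ 1.6 and Mt ≈ 1.3; the grid is our choice.

import cmath

a, tau, kp, ki = 1.0, 0.25, 3.5, 8.3

def P(s): return a * (1 - cmath.exp(-s * tau)) / (s * tau * (s + a))
def L(s): return (kp + ki / s) * P(s)

ws = [10**(k / 200.0) for k in range(-600, 601)]   # 0.001 .. 1000 rad/s
Ms = max(abs(1 / (1 + L(1j * w))) for w in ws)
Mt = max(abs(L(1j * w) / (1 + L(1j * w))) for w in ws)
print(Ms, Mt)
```

Gridding the sensitivity and complementary sensitivity magnitudes like this is the numerical counterpart of reading the peaks off the Gang of Four gain curves in Figure 11.10b.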
We can also think of the lead compensator as changing the slope of the transfer function and thus shaping the loop transfer function in the crossover region, although it can be applied elsewhere as well.

Example 11.6 (Roll control for a vectored thrust aircraft). Consider the control of the roll of a vectored thrust aircraft, such as the one illustrated in Figure 11.11. Following Exercise 8.10, we model the system with a second-order transfer function of the form

    P(s) = r/(Js² + cs).

Figure 11.11 (roll control of a vectored thrust aircraft): (a) The roll angle θ is controlled by applying maneuvering thrusters, resulting in a moment generated by the thruster forces F₁ and F₂. (b) Parameter values for a laboratory version of the system: vehicle mass m = 4.0 kg, vehicle inertia (φ₃ axis) J = 0.0475 kg m², force moment arm r = 25.0 cm, damping coefficient c = 0.05 kg m/s, gravitational constant g = 9.8 m/s².

We take as our performance specification that we would like less than 1% error in steady state and less than 10% tracking error up to 10 rad/s. The open loop transfer function is shown in Figure 11.12a. To achieve our performance specification, we would like to have a gain of at least 10 at a frequency of 10 rad/s, requiring the gain crossover frequency to be at a higher frequency. We see from the loop shape that we cannot simply increase the gain, since this would give a very low phase margin. Instead we must increase the phase at the desired crossover frequency. To accomplish this we use a lead compensator (11.12) with a = 2 and b = 50. We then set the gain of the system to provide a large loop gain up to the desired bandwidth, as shown in Figure 11.12b. We see that this system has a gain of greater than 10 at all frequencies up to 10 rad/s and that it has more than 60° of phase margin. The action of a lead compensator is essentially the same as that of the derivative portion of a PID controller.
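A first-order lead compensator C(s) = k(s + a)/(s + b) contributes its maximum phase lead at the geometric mean ω = √(ab), where the lead equals arcsin((b − a)/(b + a)). For the values a = 2 and b = 50 used above, this maximum lands exactly at the desired crossover of 10 rad/s and is about 67°, comfortably above the 60° target. A quick check (our own script, not from the text):

```python
import numpy as np

a, b = 2.0, 50.0                      # lead compensator zero and pole, Example 11.6
w_max = np.sqrt(a * b)                # frequency of maximum phase lead = 10 rad/s
phase = np.degrees(np.angle((1j * w_max + a) / (1j * w_max + b)))
phase_formula = np.degrees(np.arcsin((b - a) / (b + a)))
print(w_max, phase, phase_formula)    # 10.0, then about 67.4 degrees twice
```

Both expressions agree, confirming that the pole/zero placement centers the phase lead at the intended crossover frequency.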
As described in Section 10.5, we often use a filter for the derivative action of a PID controller to limit the high-frequency gain; the same effect is present in a lead compensator through the pole at s = −b. Equation (11.12) is a first-order compensator and can provide up to 90° of phase lead. Larger phase lead can be obtained by using a higher-order lead compensator (Exercise 11.11):

    C(s) = k (s + a)ⁿ/(s + b)ⁿ,   a < b.

Assuming that the slope n_gc is negative, it has to be larger than −2 for the system to be stable. It follows from Bode's relations (equation (9.8)) that

    arg P_mp(iω) + arg C(iω) ≈ n_gc π/2.

Combining this with equation (11.14) gives the following inequality for the allowable phase lag of the all-pass part at the gain crossover frequency:

    −arg P_ap(iω_gc) ≤ π − φ_m + n_gc π/2 =: φ_l.   (11.15)

This condition, which we call the gain crossover frequency inequality, shows that the gain crossover frequency must be chosen so that the phase lag of the nonminimum phase component is not too large. For systems with high robustness requirements, we may choose a phase margin of 60° (φ_m = π/3) and a slope n_gc = −1, which gives an admissible phase lag φ_l = π/6 ≈ 0.52 rad (30°). For systems where we can accept lower robustness, we may choose a phase margin of 45° (φ_m = π/4) and the slope n_gc = −1/2, which gives an admissible phase lag φ_l = π/2 ≈ 1.57 rad (90°).

The crossover frequency inequality shows that nonminimum phase components impose severe restrictions on possible crossover frequencies. It also means that there are systems that cannot be controlled with sufficient stability margins. We illustrate the limitations in a number of commonly encountered situations.

Example 11.7 (Zero in the right half-plane). The nonminimum phase part of the process transfer function for a system with a right half-plane zero is

    P_ap(s) = (z − s)/(z + s),

where z > 0. The phase lag of the nonminimum phase part is

    −arg P_ap(iω) = 2 arctan(ω/z).

Since the phase lag of P_ap increases with frequency, inequality (11.15) gives the following bound on the crossover frequency:

    ω_gc < z tan(φ_l/2).   (11.16)
With φ_l = π/3 we get ω_gc < 0.6 z. Slow right half-plane zeros (z small) therefore give tighter restrictions on possible gain crossover frequencies than fast right half-plane zeros.

Time delays also impose limitations similar to those given by zeros in the right half-plane. We can understand this intuitively from the Padé approximation

    e^{−sτ} ≈ (1 − 0.5sτ)/(1 + 0.5sτ) = (2/τ − s)/(2/τ + s).

A long time delay is thus equivalent to a slow right half-plane zero z = 2/τ.

Example 11.8 (Pole in the right half-plane). The nonminimum phase part of the transfer function for a system with a pole in the right half-plane is

    P_ap(s) = (s + p)/(s − p),

where p > 0. The phase lag of the nonminimum phase part is

    −arg P_ap(iω) = 2 arctan(p/ω),

and the crossover frequency inequality becomes

    ω_gc > p/tan(φ_l/2).   (11.17)

Right half-plane poles thus require that the closed loop system have a sufficiently high bandwidth. With φ_l = π/3 we get ω_gc > 1.7p. Fast right half-plane poles (p large) therefore give tighter restrictions on possible gain crossover frequencies than slow right half-plane poles. The control of unstable systems imposes minimum bandwidth requirements for process actuators and sensors.

We will now consider systems with a right half-plane zero z and a right half-plane pole p. If p = z there will be an unstable subsystem that is neither reachable nor observable, and the system cannot be stabilized (see Section 7.5). We can therefore expect that the system is difficult to control if the right half-plane pole and zero are close.

A straightforward way to use the crossover frequency inequality is to plot the phase of the nonminimum phase factor P_ap of the process transfer function. Such a plot, which can be incorporated in an ordinary Bode plot, will immediately show the permissible gain crossover frequencies. An illustration is given in Figure 11.13, which shows the phase of P_ap for systems with a right half-plane pole/zero pair and for systems with a right half-plane pole and a time delay.
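The bounds (11.16) and (11.17) are easy to evaluate. The sketch below checks, for the robust choice φ_l = π/3, that a right half-plane zero at z forces ω_gc below about 0.58 z, that a right half-plane pole at p forces ω_gc above about 1.73 p, and that at low frequency the phase lag of a time delay τ matches that of its Padé-equivalent slow zero z = 2/τ (the illustrative values of z, p and τ are our own):

```python
import numpy as np

phi_l = np.pi / 3                        # admissible phase lag (robust design)

z, p, tau = 2.0, 1.0, 0.5                # example RHP zero, RHP pole, delay
wgc_max = z * np.tan(phi_l / 2)          # upper bound from a RHP zero, eq. (11.16)
wgc_min = p / np.tan(phi_l / 2)          # lower bound from a RHP pole, eq. (11.17)
print(wgc_max / z, wgc_min / p)          # approximately 0.58 and 1.73

# Phase lag of a delay vs. its Pade-equivalent RHP zero z_eq = 2/tau
w = 0.3                                  # a frequency well below 2/tau
lag_delay = w * tau                      # -arg e^{-i w tau}
lag_zero = 2 * np.arctan(w * tau / 2)    # -arg (z_eq - i w)/(z_eq + i w)
print(lag_delay, lag_zero)
```

The two phase lags agree closely at low frequency, which is why a delay behaves like a slow right half-plane zero for crossover-frequency purposes.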
If we require that the phase lag φ_l of the nonminimum phase factor be less than 90°, we must require that the ratio z/p be larger than 6 or smaller than 1/6 for systems with right half-plane poles and zeros, and that the product pτ be less than 0.3 for systems with a time delay and a right half-plane pole. Notice the symmetry in the problem for z > p and z < p: in either case the zeros and the poles must be sufficiently far apart (Exercise 11.12). Also notice that the possible values of the gain crossover frequency ω_gc are quite restricted.

Using the theory of functions of complex variables, it can be shown that for systems with a right half-plane pole p and a right half-plane zero z, or a time delay τ, any stabilizing controller gives sensitivity functions with the property

    sup_ω |S(iω)| ≥ |p + z|/|p − z|,   sup_ω |T(iω)| ≥ e^{pτ}.   (11.18)

This result is proven in Exercise 11.13.

As the examples above show, right half-plane poles and zeros significantly limit the achievable performance of a system; hence one would like to avoid these whenever possible. The poles of a system depend on the intrinsic dynamics of the system.

Figure 11.14 (interpretation of the waterbed effect): The function log|S(iω)| is plotted versus ω in linear scales in (a). According to Bode's integral formula (11.19), the area of log|S(iω)| above zero must equal the area below zero. Gunter Stein's interpretation of design as a trade-off of sensitivities at different frequencies is shown in (b) (from [185]).

Example 11.11 (X-29 aircraft). As an example of the application of Bode's integral formula, we present an analysis of the control system for the X-29 aircraft (see Figure 11.15a), which has an unusual configuration of aerodynamic surfaces designed to enhance its maneuverability. This analysis was originally carried out by Gunter Stein in his article "Respect the Unstable" [185], which is also the source of the quote at the beginning of this chapter.
To analyze this system, we make use of a small set of parameters that describe the key properties of the system. The X-29 has longitudinal dynamics that are very similar to inverted pendulum dynamics (Exercise 8.3) and in particular have a pair of poles at approximately p = 6 and a zero at z = 26. The actuators that stabilize the pitch have a bandwidth of ω_a = 40 rad/s, and the desired bandwidth of the pitch control loop is ω₁ = 3 rad/s. Since the ratio of the zero to the pole is only 4.3, we may expect that it may be difficult to achieve the specifications.

Figure 11.15 (X-29 flight control system): The aircraft makes use of forward-swept wings and a set of canards on the fuselage to achieve high maneuverability (a). The desired sensitivity for the closed loop system is shown in (b): we seek to use our control authority to shape the sensitivity curve so that we have low sensitivity (good performance) up to frequency ω₁ by creating higher sensitivity up to our actuator bandwidth ω_a.

Figure 11.19 (inner/outer loop controller for a vectored thrust aircraft): The Bode plot (a) and Nyquist plot (b) for the combined inner and outer loop transfer function. The system has a phase margin of 68° and a gain margin of 6.2.

Indeed, for the aircraft dynamics studied in this example it is very challenging to directly design a controller from the lateral position x to the input u₁. The use of the additional measurement of θ greatly simplifies the design, because it can be broken up into simpler pieces.

11.7 Further Reading

Design by loop shaping was a key element in the early development of control, and systematic design methods were developed; see James, Nichols and Phillips [110], Chestnut and Mayer [51], Truxal [194], and Thaler [191].
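The difficulty anticipated for the X-29 can be quantified with the bound (11.18): for the right half-plane pole p = 6 and zero z = 26, every stabilizing controller has a sensitivity peak of at least (z + p)/(z − p), regardless of how the control authority is used. A one-line check (our own arithmetic):

```python
# Lower bound on the sensitivity peak from eq. (11.18) for a RHP pole/zero pair
p, z = 6.0, 26.0                 # approximate X-29 pole and zero, Example 11.11
Ms_min = abs((p + z) / (p - z))  # = 32/20 = 1.6
print(Ms_min)
```

This floor of 1.6 on the sensitivity peak holds for any controller, which is why the modest zero-to-pole ratio of 4.3 makes the specifications hard to meet.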
Loop shaping is also treated in standard textbooks such as Franklin, Powell and Emami-Naeini [79], Dorf and Bishop [61], Kuo and Golnaraghi [133], and Ogata [162]. Systems with two degrees of freedom were developed by Horowitz [102], who also discussed the limitations of poles and zeros in the right half-plane. Fundamental results on limitations are given in Bode [40]; more recent presentations are found in Goodwin, Graebe and Salgado [88]. The treatment in Section 11.5 is based on [14]. Much of the early work was based on the loop transfer function; the importance of the sensitivity functions appeared in connection with the development in the 1980s that resulted in H∞ design methods. A compact presentation is given in the texts by Doyle, Francis and Tannenbaum [64] and Zhou, Doyle and Glover [209]. Loop shaping was integrated with robust control theory in McFarlane and Glover [150] and Vinnicombe [196]. Comprehensive treatments of control system design are given in Maciejowski [141] and Goodwin, Graebe and Salgado [88].

Figure 11.20 shows the Gang of Four for the vectored thrust aircraft system.

Exercises

11.1 Consider the system in Figure 11.1. Give all signal pairs that are related by the transfer functions 1/(1 + PC), P/(1 + PC), C/(1 + PC), and PC/(1 + PC).

11.2 Consider the system in Example 11.1. Choose the parameter a = 1 and compute the time and frequency responses for all the transfer functions in the Gang of Four for controllers with k = 0.2 and k = 5.

11.3 (Equivalence of Figures 11.1 and 11.2) Consider the system in Figure 11.1 and let the outputs of interest be z = (η, ν) and the major disturbances be w = (n, d). Show that the system can be represented by Figure 11.2 and give the matrix transfer functions P and C. Verify that the closed loop transfer function H_zw gives the Gang of Four.

11.4 Consider the spring-mass system given by (2.14), which has the transfer function

    P(s) = 1/(ms² + cs + k).
Design a feedforward compensator that gives a response with critical damping (ζ = 1).

11.5 (Sensitivity of feedback and feedforward) Consider the system in Figure 11.1 and let G_yr be the transfer function relating the measured signal y to the reference r. Show that the sensitivities of G_yr with respect to the feedforward and feedback transfer functions F and C are given by

    dG_yr/dF = CP/(1 + PC),   dG_yr/dC = FP/(1 + PC)².

11.6 (Equivalence of controllers with two degrees of freedom) Show that the systems in Figures 11.1 and 11.3 give the same responses to command signals if F_m C + F_u = CF.

Chapter Twelve

Robust Performance

"However, by building an amplifier whose gain is deliberately made, say 40 decibels higher than necessary (10,000-fold excess on energy basis), and then feeding the output back on the input in such a way as to throw away that excess gain, it has been found possible to effect extraordinary improvement in constancy of amplification and freedom from non-linearity." (Harold S. Black, "Stabilized Feedback Amplifiers," 1934 [35])

This chapter focuses on the analysis of robustness of feedback systems, a vast topic for which we provide only an introduction to some of the key concepts. We consider the stability and performance of systems whose process dynamics are uncertain, and derive fundamental limits for robust stability and performance. To do this we develop ways to describe uncertainty, both in the form of parameter variations and in the form of neglected dynamics. We also briefly mention some methods for designing controllers to achieve robust performance.

12.1 Modeling Uncertainty

Harold Black's quote above illustrates that one of the key uses of feedback is to provide robustness to uncertainty ("constancy of amplification"). It is one of the most useful properties of feedback and is what makes it possible to design feedback systems based on strongly simplified models.

One form of uncertainty in dynamical systems is parametric uncertainty, in which the parameters describing the system are unknown.
A typical example is the variation of the mass of a car, which changes with the number of passengers and the weight of the baggage. When linearizing a nonlinear system, the parameters of the linearized model also depend on the operating conditions. It is straightforward to investigate the effects of parametric uncertainty simply by evaluating the performance criteria for a range of parameters. Such a calculation reveals the consequences of parameter variations. We illustrate by a simple example.

Example 12.1 (Cruise control). The cruise control problem was described in Section 3.1, and a PI controller was designed in Example 10.3. To investigate the effect of parameter variations, we will choose a controller designed for a nominal operating condition corresponding to mass m = 1600 kg, fourth gear (α = 12) and speed v_e = 25 m/s; the controller gains are k_p = 0.72 and k_i = 0.18. Figure 12.1a shows the velocity v and the throttle u when encountering a hill with a 3° slope, with masses in the range 1600 ≤ m ≤ 2000 kg, gears 3 to 5 (α = 10, 12 and 16), and velocities 10 ≤ v ≤ 40 m/s. The simulations were done using models that were linearized around the different operating conditions.

Figure 12.1 (responses of the cruise control system to a 3° slope increase (a), and the eigenvalues of the closed loop system (b)): Model parameters are swept over a wide range.

The figure shows that there are variations in the response, but that they are quite reasonable. The largest velocity error is in the range of 0.2 to 0.6 m/s, and the settling time is about 15 s. The control signal is marginally larger than 1 in some cases, which implies that the throttle is fully open. A full nonlinear simulation using a controller with windup protection is required if we want to explore these cases in more detail. Figure 12.1b shows the eigenvalues of the closed loop system for the different operating conditions.
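A parameter sweep of this kind is easy to script. The sketch below mimics the idea on a crude linearized cruise model m·dv/dt = −d·v + b·u with the PI gains from the example; the linearization coefficients b and d are hypothetical placeholders, not the values used in the text:

```python
import numpy as np

kp, ki = 0.72, 0.18          # controller gains from Example 12.1
b, d = 1500.0, 60.0          # HYPOTHETICAL linearization coefficients

# Closed loop: dv/dt = (-(d + b*kp)*v + b*ki*z)/m, dz/dt = -v
for m in np.linspace(1600, 2000, 5):          # mass sweep (kg)
    A = np.array([[-(d + b * kp) / m, b * ki / m],
                  [-1.0, 0.0]])               # closed loop dynamics matrix
    eig = np.linalg.eigvals(A)
    assert np.all(eig.real < 0)               # stable for every mass in range
print("stable for all masses")
```

For this simple second-order closed loop, all coefficients of the characteristic polynomial are positive, so stability over the sweep is expected; with a real model one would also examine damping and response quality, as the book does.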
The figure shows that the closed loop system is well damped in all cases. This example indicates that, at least as far as parametric variations are concerned, a design based on a simple nominal model will give satisfactory control. The example also indicates that a controller with fixed parameters can be used in all cases. Notice that we have not considered operating conditions in low gear and at low speed, but cruise controllers are not typically used in these cases.

Unmodeled Dynamics

It is generally easy to investigate the effects of parametric variations. However, there are other uncertainties that are also important, as discussed at the end of Section 2.3. The simple model of the cruise control system captures only the dynamics of the forward motion of the vehicle and the torque characteristics of the engine and transmission. It does not, for example, include a detailed model of the engine dynamics (whose combustion processes are extremely complex) or the slight delays that can occur in modern electronically controlled engines (as a result of the processing time of the embedded computers). These neglected mechanisms are called unmodeled dynamics.

Unmodeled dynamics can be accounted for by developing a more complex model. Such models are commonly used for controller development, but substantial effort is required to develop them. An alternative is to investigate whether the closed loop system is sensitive to generic forms of unmodeled dynamics. The metric can also be computed for systems with many inputs and many outputs. We illustrate its use by computing the metric for the systems in the previous examples.

Example 12.4 (Vinnicombe metric for Examples 12.2 and 12.3). For the systems in Example 12.2 we have

    f₁(s) = 1 + P₁(s)P₁(−s) = 1 + k²/(1 − s²),
    f₂(s) = 1 + P₂(s)P₁(−s) = 1 + k²/((1 − s²)(1 + sT)²).

The function f₁ has one zero in the right half-plane. A numerical calculation for k = 100 and T = 0.025 shows that the function f₂ has the roots 46.3, −86.3 and −20.0 ± 60.0i. Both functions have one zero in the right half-plane, allowing us to compute the norm (12.4).
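For scalar systems, the quantity maximized in (12.4) is the pointwise chordal distance κ(ω) = |P₁ − P₂|/(√(1 + |P₁|²)·√(1 + |P₂|²)); when the winding-number condition holds, δ_ν is its supremum over frequency. For the pair of Example 12.3, P₁ = k/(s + 1) and P₂ = k/(s − 1), this supremum can be checked against the closed-form value 2k/(1 + k²) quoted in the text (the script and frequency grid are our own):

```python
import numpy as np

k = 100.0
w = np.logspace(-4, 4, 200001)
s = 1j * w
P1 = k / (s + 1)
P2 = k / (s - 1)

# Pointwise chordal distance between the two frequency responses
kappa = np.abs(P1 - P2) / (np.sqrt(1 + np.abs(P1)**2) * np.sqrt(1 + np.abs(P2)**2))
delta = kappa.max()                 # sup over the grid (winding condition assumed)
print(delta, 2 * k / (1 + k**2))    # both approximately 0.02
```

The supremum occurs at ω = 0, where the two responses are k and −k; despite the very different open loop behavior, the metric is tiny for large k, matching the text's point that the closed loop systems are then similar.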
For T = 0.025 this gives δ_ν(P₁, P₂) = 0.98, which is a quite large value. (To have reasonable robustness, Vinnicombe recommended values less than 1/3.) For the systems in Example 12.3 we have

    1 + P₁(s)P₁(−s) = 1 + k²/(1 − s²),
    1 + P₂(s)P₁(−s) = (s² − 2s + 1 − k²)/(s − 1)².

These functions have the same number of zeros in the right half-plane if k > 1. In this particular case the Vinnicombe metric is δ_ν(P₁, P₂) = 2k/(1 + k²) (Exercise 12.4), and with k = 100 we get δ_ν(P₁, P₂) = 0.02. Figure 12.4 shows the Nyquist curves and their projections for k = 2. Notice that δ_ν(P₁, P₂) is very small for small k, even though the closed loop systems are very different. It is therefore essential to consider the condition relating P₁, P₂ and C, as discussed in Exercise 12.4.

12.2 Stability in the Presence of Uncertainty

Having discussed how to describe uncertainty and the similarity between two systems, we now consider the problem of robust stability: when can we show that the stability of a system is robust with respect to process variations? This is an important question, since the potential for instability is one of the main drawbacks of feedback. Hence we want to ensure that even if we have small inaccuracies in our model, we can still guarantee stability and performance.

Robust Stability Using Nyquist's Criterion

The Nyquist criterion provides a powerful and elegant way to study the effects of uncertainty for linear systems. A simple criterion is that the Nyquist curve be sufficiently far from the critical point −1. Recall that the shortest distance from the Nyquist curve to the critical point is s_m = 1/M_s, where M_s is the maximum of the sensitivity function and s_m is the stability margin introduced in Section 9.3.

For the operational amplifier circuit considered in Section 12.3, the closed loop transfer function for the overall circuit is given by

    G_{v2v1} = −(R₂/R₁) · G(s)/(G(s) + R₂/R₁ + 1).

We see that if G(s) is large over the desired frequency range, then the closed loop response is very close to the ideal value, of magnitude α = R₂/R₁. Assuming G(s) = b/(s + a), where b is the gain-bandwidth product of the amplifier as discussed in Example 8.3, the sensitivity function and the complementary sensitivity function become

    S(s) = (s + a)/(s + a + αb),   T(s) = αb/(s + a + αb).
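The role of S can be made concrete numerically: perturbing the gain-bandwidth product b by 1% changes the closed loop amplifier response by approximately |S(iω)| · 1% at each frequency. In the sketch below, the element values are our own illustrative assumptions, and we take the α appearing in S and T to be the feedback fraction R₁/(R₁ + R₂), an assumption on the definition lost in the extraction:

```python
import numpy as np

R1, R2 = 1e3, 1e4            # hypothetical resistor values
a, b = 100.0, 1e7            # hypothetical bandwidth and gain-bandwidth product
rho = R2 / R1
alpha = R1 / (R1 + R2)       # assumed feedback fraction

w = np.logspace(0, 6, 2000)
s = 1j * w

def closed_loop(bb):
    G = bb / (s + a)
    return -rho * G / (G + rho + 1)   # G_v2v1 from the text

G0 = closed_loop(b)
G1 = closed_loop(1.01 * b)           # perturb gain-bandwidth product by 1%
rel_change = np.abs((G1 - G0) / G0)

S = (s + a) / (s + a + alpha * b)    # sensitivity function from the text
print(np.max(np.abs(rel_change - 0.01 * np.abs(S))))   # small residual
```

The residual is second order in the perturbation, confirming that S governs the relative sensitivity of the closed loop gain to process variations.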
The sensitivity function around the nominal values tells us how the tracking response varies as a function of process perturbations:

    dG_yr/G_yr = S dP/P.

We see that for low frequencies, where S is small, variations in the bandwidth a or the gain-bandwidth product b will have relatively little effect on the performance of the amplifier (under the assumption that b is sufficiently large).

To model the effects of an unknown load, we consider the addition of a disturbance at the output of the system, as shown in Figure 12.10b. This disturbance represents changes in the output voltage due to loading effects. The transfer function G_yd = S gives the response of the output to the load disturbance, and we see that if S is small, then we are able to reject such disturbances. The sensitivity of G_yd to perturbations in the process dynamics can be computed by taking the derivative of G_yd with respect to P:

    dG_yd/dP = −C/(1 + PC)² = −(T/P) G_yd,

so that

    dG_yd/G_yd = −T dP/P.

Thus we see that the relative changes in the disturbance rejection are roughly the same as the process perturbations at low frequency (where T is approximately 1) and drop off at higher frequencies. However, it is important to remember that G_yd itself is small at low frequency, and so these variations in relative performance may not be an issue in many applications.

12.4 Robust Pole Placement

In Chapters 6 and 7 we saw how to design controllers by setting the locations of the eigenvalues of the closed loop system. If we analyze the resulting system in the frequency domain, the closed loop eigenvalues correspond to the poles of the closed loop transfer function, and hence these methods are often referred to as design by pole placement. State space design methods, like many methods developed for control system design, do not explicitly take robustness into account. In such cases it is essential to always investigate the robustness, because there are seemingly reasonable designs that give controllers with poor robustness.
Figure 12.11 (observer-based control of steering): Nyquist plot (left) and Bode plot (right) of the loop transfer function for vehicle steering with a controller based on state feedback and an observer. The controller provides stable operation, but with very low gain and phase margin.

We illustrate this by analyzing controllers designed by state feedback and observers. The closed loop poles can be assigned to arbitrary locations if the system is observable and reachable. However, if we want to have a robust closed loop system, the poles and zeros of the process impose severe restrictions on the location of the closed loop poles. Some examples are first given; based on the analysis of these examples, we then present design rules for robust pole (eigenvalue) placement.

Slow Stable Process Zeros

We will first explore the effects of slow stable zeros, and we begin with a simple example.

Example 12.8 (Vehicle steering). Consider the linearized model for vehicle steering in Example 8.6, which has the transfer function

    P(s) = (0.5s + 1)/s².

A controller based on state feedback was designed in Example 6.4, and state feedback was combined with an observer in Example 7.4. The system simulated in Figure 7.8 has closed loop poles specified by ω_c = 0.3, ζ_c = 0.707, ω_o = 7 and ζ_o = 9. Assume that we want a faster closed loop system and choose ω_c = 10, ζ_c = 0.707, ω_o = 20 and ζ_o = 0.707. Using the state representation in Example 7.3, a pole placement design gives state feedback gains k₁ = 100 and k₂ = −35.86 and observer gains l₁ = 28.28 and l₂ = 400. The controller transfer function is

    C(s) = (−11516s + 40000)/(s² + 42.4s + 6657.9).

Figure 12.11 shows Nyquist and Bode plots of the loop transfer function. The Nyquist plot indicates that the robustness is poor, since the loop transfer function is very close to the critical point −1. The phase margin is 7° and the stability margin is s_m = 0.077.
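The numbers in Example 12.8 can be reproduced from the state-space model of Example 7.3, with A = [[0, 1], [0, 0]], B = [0.5, 1]ᵀ and C = [1, 0]. The script below (our own reconstruction) matches characteristic-polynomial coefficients to place the state-feedback and observer poles, forms the controller C(s) = K(sI − A + BK + LC)⁻¹L, and evaluates the resulting peak sensitivity:

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.5], [1.0]])
C = np.array([[1.0, 0.0]])

wc, zc, wo, zo = 10.0, 0.707, 20.0, 0.707

# For this model the characteristic polynomials are
# det(sI - A + B K) = s^2 + (0.5*k1 + k2)*s + k1 and
# det(sI - A + L C) = s^2 + l1*s + l2, so matching coefficients gives:
k1 = wc**2                      # = 100
k2 = 2 * zc * wc - 0.5 * k1     # = -35.86
l1, l2 = 2 * zo * wo, wo**2     # = 28.28, 400
K = np.array([[k1, k2]])
L = np.array([[l1], [l2]])

Ms = 0.0
for wi in np.logspace(-2, 4, 40000):
    s = 1j * wi
    M = s * np.eye(2) - A + B @ K + L @ C
    Cs = (K @ np.linalg.solve(M, L))[0, 0]      # controller C(s)
    Ps = (0.5 * s + 1) / s**2                   # process P(s)
    Ms = max(Ms, abs(1 / (1 + Ps * Cs)))
print(k1, k2, l1, l2, Ms)   # Ms comes out large (around 13 in the text)
```

The large sensitivity peak reproduces the poor robustness that the Nyquist plot reveals, even though the nominal closed loop poles are well damped.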
The poor robustness shows up in the Bode plot, where the gain curve hovers around the value 1 and the phase curve is close to −180° for a wide frequency range. More insight is obtained by analyzing the sensitivity functions, shown by solid lines in Figure 12.12. The maximum sensitivities are M_s = 13 and M_t = 12, indicating that the system has very poor robustness.

At first sight it is surprising that a controller whose nominal closed loop system has well damped poles and zeros is so sensitive to process variations. We have an indication that something is unusual because the controller has a zero at s = 3.5 in the right half-plane. To understand what happens, we will investigate the reason for the peaks of the sensitivity functions. Let the transfer functions of the process and the controller be

    P(s) = n_p(s)/d_p(s),   C(s) = n_c(s)/d_c(s),

where n_p(s), n_c(s), d_p(s) and d_c(s) are the numerator and denominator polynomials. The complementary sensitivity function is

    T(s) = PC/(1 + PC) = n_p(s)n_c(s)/(d_p(s)d_c(s) + n_p(s)n_c(s)).

The poles of T(s) are the poles of the closed loop system, and the zeros are given by the zeros of the process and controller. Sketching the gain curve of the complementary sensitivity function, we find that T(s) ≈ 1 for low frequencies and that |T(iω)| starts to increase at its first zero, which is the process zero at s = −2. It increases further at the controller zero at s = 3.5, and it does not start to decrease until the closed loop poles appear, at ω_c = 10 and ω_o = 20. We can thus conclude that there will be a peak in the complementary sensitivity function. The magnitude of the peak depends on the ratio of the zeros and the poles of the transfer function.

The peak of the complementary sensitivity function can be avoided by assigning a closed loop zero close to the slow process zero. We can achieve this by choosing ω_c = 10 and ζ_c = 2.6, which gives closed loop poles at s = −2 and s = −50. The controller transfer function then becomes

    C(s) = (3628s + 40000)/(s² + 80.28s + 156.56) = 3628 (s + 11.02)/((s + 2)(s + 78.28)).

The sensitivity functions are shown by dashed lines in Figure 12.12.
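Repeating the computation with the modified damping ζ_c = 2.6 confirms the improved design: the state feedback gain becomes k₂ = 2ζ_cω_c − 0.5k₁ = 2, the controller denominator factors as (s + 2)(s + 78.28), so the controller pole at s = −2 cancels the slow process zero, and the sensitivity peak drops dramatically (sketch is our own reconstruction):

```python
import numpy as np

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.5], [1.0]])
Cm = np.array([[1.0, 0.0]])

wc, zc, wo, zo = 10.0, 2.6, 20.0, 0.707
k1, k2 = wc**2, 2 * zc * wc - 0.5 * wc**2        # 100 and 2
K = np.array([[k1, k2]])
L = np.array([[2 * zo * wo], [wo**2]])

# Controller poles: eigenvalues of A - B K - L Cm
F = A - B @ K - L @ Cm
poles = np.linalg.eigvals(F)
print(sorted(p.real for p in poles))             # approximately [-78.28, -2.0]

Ms = 0.0
for wi in np.logspace(-2, 4, 40000):
    s = 1j * wi
    M = s * np.eye(2) - F
    Cs = (K @ np.linalg.solve(M, L))[0, 0]       # improved controller C(s)
    Ps = (0.5 * s + 1) / s**2
    Ms = max(Ms, abs(1 / (1 + Ps * Cs)))
print(Ms)                                        # about 1.34 in the text
```

Comparing with the previous script shows how a single design parameter, the closed loop damping, moves the sensitivity peak from roughly 13 down to about 1.3.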
This controller gives the maximum sensitivities M_s = 1.34 and M_t = 1.41, which means much better robustness. Notice that the new controller has a pole at s = −2 that cancels the slow process zero. The design can also be done simply by canceling the slow stable process zero and designing the controller for the simplified system.

One lesson from the example is that it is necessary to choose closed loop poles that are equal to, or close to, slow stable process zeros. Another lesson is that slow unstable process zeros impose limitations on the achievable bandwidth, as already noted in Section 11.5.

Figure 12.12 (sensitivity functions for observer-based control of vehicle steering): The complementary sensitivity function (left) and the sensitivity function (right) for the original controller with ω_c = 10, ζ_c = 0.707, ω_o = 20, ζ_o = 0.707 (solid) and the improved controller with ω_c = 10, ζ_c = 2.6 (dashed).

Fast Stable Process Poles

The next example shows the effect of fast stable poles.

Example 12.9 (Fast system poles). Consider a PI controller for a first-order system, where the process and the controller have the transfer functions P(s) = b/(s + a) and C(s) = k_p + k_i/s. The loop transfer function is

    L(s) = b(k_p s + k_i)/(s(s + a)),

and the closed loop characteristic polynomial is

    s(s + a) + b(k_p s + k_i) = s² + (a + b k_p)s + k_i b.

If we specify that the desired closed loop poles should be −p₁ and −p₂, we find that the controller parameters are given by

    k_p = (p₁ + p₂ − a)/b,   k_i = p₁ p₂/b.

The sensitivity functions are then

    S(s) = s(s + a)/((s + p₁)(s + p₂)),   T(s) = ((p₁ + p₂ − a)s + p₁ p₂)/((s + p₁)(s + p₂)).

Assume that the process pole a is much larger than the closed loop poles, say p₁ < p₂ ≪ a. Notice that the proportional gain is then negative, and the controller has a zero in the right half-plane if a > p₁ + p₂, an indication that the system has bad properties. Next consider the sensitivity function, which is 1 for high frequencies. Moving from high to low frequencies, we find that the sensitivity increases at the process pole s = a.
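For the numerical values used in Figure 12.13 (a = b = 1, p₁ = 0.05, p₂ = 0.2), the formulas above give a negative proportional gain and a sensitivity peak close to the a/p₂ = 5 estimate; the evaluation script is our own:

```python
import numpy as np

a, b = 1.0, 1.0
p1, p2 = 0.05, 0.2                  # desired (slow) closed loop poles

kp = (p1 + p2 - a) / b              # = -0.75, a negative proportional gain
ki = p1 * p2 / b                    # = 0.01

w = np.logspace(-4, 3, 100000)
s = 1j * w
S = s * (s + a) / ((s + p1) * (s + p2))   # sensitivity function, Example 12.9
peak = np.max(np.abs(S))
print(kp, ki, peak)                 # peak is roughly a/p2 = 5
```

The peak occurs between the slow closed loop poles and the fast process pole, the frequency range over which the loop gain is forced below its natural value.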
The sensitivity does not decrease until the closed loop poles are reached, resulting in a large sensitivity peak that is approximately a/p₂. The magnitude of the sensitivity function is shown in Figure 12.13 for a = b = 1, p₁ = 0.05 and p₂ = 0.2. Notice the high sensitivity peak.

Figure 12.14 (nanopositioning system control via cancellation of the fast process pole): Gain plots for the Gang of Four for PID control with second-order filtering (12.17) are shown by solid lines; the dashed lines show results for an ideal PID controller without filtering (12.16).

A large value of T_f reduces the effects of sensor noise significantly, but it also reduces the stability margin. Since the gain crossover frequency without filtering is k_i, a reasonable choice for the filter time constant is T_f = 0.2/k_i, as shown by the solid curves in Figure 12.14. The plots of |CS(iω)| and |S(iω)| show that the sensitivity to high-frequency measurement noise is reduced dramatically, at the cost of a marginal increase of sensitivity. Notice that the poor attenuation of disturbances with frequencies close to the resonance is not visible in the sensitivity function, because of the exact cancellation of poles and zeros.

The designs thus far have the drawback that load disturbances with frequencies close to the resonance are not attenuated. We will now consider a design that actively attenuates the poorly damped modes. We start with an ideal PID controller, where the design can be done analytically, and add high-frequency roll-off afterwards. The loop transfer function obtained with this controller is

    L(s) = a²(k_d s² + k_p s + k_i)/(s(s² + 2ζas + a²)).   (12.18)

The closed loop system is of third order, and its characteristic polynomial is

    s³ + (k_d a² + 2ζa)s² + (k_p + 1)a² s + k_i a².   (12.19)

A general third-order polynomial can be parameterized as

    s³ + (α₀ + 2ζ₀)ω₀ s² + (1 + 2α₀ζ₀)ω₀² s + α₀ω₀³.   (12.20)
The parameters α₀ and ζ₀ give the relative configuration of the poles, and the parameter ω₀ gives their magnitudes and therefore also the bandwidth of the system. Identifying coefficients of equal powers of s in (12.20) with equation (12.19) gives a linear equation for the controller parameters, which has the solution

    k_p = (1 + 2α₀ζ₀)ω₀²/a² − 1,   k_i = α₀ω₀³/a²,   k_d = ((α₀ + 2ζ₀)ω₀ − 2ζa)/a².   (12.21)

To obtain a design with active damping, it is necessary that the closed loop bandwidth be at least as fast as the oscillatory modes. Adding high-frequency roll-off, the controller becomes

    C(s) = (k_d s² + k_p s + k_i)/(s(1 + sT_f + (sT_f)²/2)).   (12.22)

The value T_f = T_d/10 = 0.1 k_d/k_p is a good value for the filtering time constant.

Figure 12.15 (nanopositioner control using active damping): Gain curves for the Gang of Four for PID control of the nanopositioner, designed for ω₀ = a (dash-dotted), 2a (dashed) and 4a (solid). The controller has high-frequency roll-off and has been designed to give active damping of the oscillatory mode. The different curves correspond to different choices of the magnitudes of the poles, parameterized by ω₀ in equation (12.19).

Figure 12.15 shows the gain curves of the Gang of Four for designs with ζ₀ = 0.707, α₀ = 1 and ω₀ = a, 2a and 4a. The figure shows that the largest values of the sensitivity function and the complementary sensitivity function are small. The gain curve for PS shows that the load disturbances are now well attenuated over the whole frequency range, and the attenuation increases with increasing ω₀. The gain curve for CS shows that large control signals are required to provide active damping. The high gain of CS at high frequencies also shows that low-noise sensors and actuators with a wide range are required. The largest gains for CS are 19, 103 and 434 for ω₀ = a, 2a and 4a, respectively. There is clearly a trade-off between disturbance attenuation and controller gain.
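The gain formulas (12.21) can be verified by substituting them back into the closed loop characteristic polynomial (12.19) and comparing with the target parameterization (12.20). The spot-check below uses the design values ζ₀ = 0.707 and α₀ = 1 from the text, with a = 1, ω₀ = 2a, and a hypothetical small process damping ζ:

```python
import numpy as np

a = 1.0
zeta = 0.01                            # process damping (hypothetical small value)
zeta0, alpha0, w0 = 0.707, 1.0, 2.0    # design parameters from the text

# Controller gains from eq. (12.21)
kp = (1 + 2 * alpha0 * zeta0) * w0**2 / a**2 - 1
ki = alpha0 * w0**3 / a**2
kd = ((alpha0 + 2 * zeta0) * w0 - 2 * zeta * a) / a**2

# Coefficients of the closed loop characteristic polynomial (12.19) ...
achieved = [1.0, kd * a**2 + 2 * zeta * a, (kp + 1) * a**2, ki * a**2]
# ... must match the target parameterization (12.20)
target = [1.0, (alpha0 + 2 * zeta0) * w0,
          (1 + 2 * alpha0 * zeta0) * w0**2, alpha0 * w0**3]
print(achieved, target)
```

The match is exact for any process damping ζ, since (12.21) is derived by coefficient matching.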
A comparison of Figures 12.14 and 12.15 illustrates the trade-offs between control action and disturbance attenuation for the designs with cancellation of the fast process pole and with active damping.

Figure 12.16 (Hall and Nichols charts): The Hall chart is a Nyquist plot with curves for constant gain and phase of the complementary sensitivity function T. The Nichols chart is the conformal map of the Hall chart under the transformation N = log L (with the scale flipped). The dashed curve is the line where |T(iω)| = 1; the shaded region corresponds to loop transfer functions whose complementary sensitivity changes by no more than 10%.

The criterion balances control actions against deviations in the output. If all state variables are measured, the controller is a state feedback u = −Kx, and it has the same form as the controller obtained by eigenvalue assignment (pole placement) in Section 6.2. However, the controller gain is obtained by solving an optimization problem. It has been shown that this controller is very robust: it has a phase margin of at least 60° and an infinite gain margin. The controller is called a linear quadratic control, or LQ control, because the process model is linear and the criterion is quadratic.

When all state variables are not measured, the state can be reconstructed using an observer, as discussed in Section 7.3. It is also possible to introduce process disturbances and measurement noise explicitly in the model and to reconstruct the states using a Kalman filter, as discussed briefly in Section 7.4. The Kalman filter has the same structure as the observer designed by eigenvalue assignment in Section 7.3, but the observer gains L are now obtained by solving an optimization problem. The control law obtained by combining linear quadratic control with a Kalman filter is called linear quadratic Gaussian control, or LQG control.
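The robustness claim for LQ control can be illustrated for a double integrator: with Q = I and R = 1, the algebraic Riccati equation has the closed-form solution used below, the gain is K = [1, √3], and the loop transfer function K(sI − A)⁻¹B = (√3 s + 1)/s² has a phase margin of about 72°, consistent with the guaranteed 60°. The example system and script are our own (in practice one would call a Riccati solver such as scipy.linalg.solve_continuous_are):

```python
import numpy as np

# LQ state feedback for a double integrator, Q = I, R = 1
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

# Closed-form Riccati solution for this system
P = np.array([[np.sqrt(3), 1.0], [1.0, np.sqrt(3)]])
residual = A.T @ P + P @ A - P @ B @ np.linalg.inv(R) @ B.T @ P + Q
K = np.linalg.inv(R) @ B.T @ P            # K = [1, sqrt(3)]

# Loop transfer function L(s) = K (sI - A)^{-1} B = (k2*s + k1)/s^2
w = np.logspace(-2, 2, 200000)
s = 1j * w
L = (K[0, 0] + K[0, 1] * s) / s**2
i = np.argmin(np.abs(np.abs(L) - 1))      # gain crossover index
pm = 180 + np.degrees(np.angle(L[i]))
print(np.max(np.abs(residual)), K, pm)    # residual ~ 0, phase margin ~ 72 deg
```

The 60° guarantee holds for any choice of Q ≥ 0 and R > 0 when the full state is measured; as the text goes on to note, it is lost once an observer is introduced.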
is called linear quadratic Gaussian control, or LQG control. The Kalman filter is optimal when the models for load disturbances and measurement noise are Gaussian.

It is interesting that the solution to the optimization problem leads to a controller having the structure of a state feedback and an observer. The state feedback gains depend on the parameter ρ, and the filter gains depend on the parameters in the model that characterize process noise and measurement noise (see Section 7.4). There are efficient programs to compute these feedback and observer gains.

The nice robustness properties of state feedback are unfortunately lost when the observer is added. It is possible to choose parameters that give closed loop systems with poor robustness, similar to Example 12.8. We can thus conclude that there is a ...

... Automatic tuning requires that parameters remain constant, and it has been widely applied for PID control. It is a reasonable guess that in the future many controllers will have features for automatic tuning. If parameters are changing, it is possible to use adaptive methods where process dynamics are measured online.

12.6 Further Reading

The topic of robust control is a large one, with many articles and textbooks devoted to the subject. Robustness was a central issue in classical control, as described in Bode's classical book [40]. Robustness was deemphasized in the euphoria of the development of design methods based on optimization. The strong robustness of controllers based on state feedback, shown by Anderson and Moore [7], contributed to the optimism. The poor robustness of output feedback was pointed out by Rosenbrock [169], Horowitz [103] and Doyle [63], and resulted in a renewed interest in robustness. A major step forward was the development of design methods where robustness was explicitly taken into account, such as the seminal work of Zames [208]. Robust control was originally developed using powerful results from the theory of complex variables, which
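A minimal sketch of the LQG structure just described: the state feedback gain and the Kalman filter gain each come from a Riccati equation, and by the separation principle the closed loop eigenvalues are those of A − BK together with those of A − LC. The system matrices and noise intensities below are assumptions for illustration, not from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Hypothetical second-order system with position measurement.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
Rv = np.eye(2)  # process noise intensity (assumed)
Rw = np.eye(1)  # measurement noise intensity (assumed)

# State feedback gain K from the control Riccati equation.
P = solve_continuous_are(A, B, np.eye(2), np.eye(1))
K = B.T @ P

# Kalman filter gain L from the dual (filtering) Riccati equation.
S = solve_continuous_are(A.T, C.T, Rv, Rw)
Lgain = S @ C.T

# Observer-based controller:
#   dxhat/dt = (A - B K - L C) xhat + L y,   u = -K xhat.
# Separation principle: closed loop eigenvalues are those of
# A - B K together with those of A - L C.
eigs = np.concatenate([np.linalg.eigvals(A - B @ K),
                       np.linalg.eigvals(A - Lgain @ C)])
print(np.all(eigs.real < 0))  # True: both sets are stable
```

Stability of the combined system is guaranteed here; what the text cautions is that the margins of the resulting LQG loop can still be arbitrarily poor for some noise and weight choices.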
gave controllers of high order. A major breakthrough was made by Doyle, Glover, Khargonekar and Francis [65], who showed that the solution to the problem could be obtained using Riccati equations and that a controller of low order could be found. This paper led to an extensive treatment of H∞ control, including books by Francis [78], McFarlane and Glover [150], Doyle, Francis and Tannenbaum [64], Green and Limebeer [90], Zhou, Doyle and Glover [209], Skogestad and Postlethwaite [181], and Vinnicombe [196]. A major advantage of the theory is that it combines much of the intuition from servomechanism theory with sound numerical algorithms based on numerical linear algebra and optimization. The results have been extended to nonlinear systems by treating the design problem as a game where the disturbances are generated by an adversary, as described in the book by Basar and Bernhard [24]. Gain scheduling and adaptation are discussed in the book by Åström and Wittenmark [19].

Exercises

12.1 Consider systems with the transfer functions P₁ = 1/(s + 1) and P₂ = 1/(s + a). Show that P₁ can be changed continuously to P₂ with bounded additive and multiplicative uncertainty if a > 0, but not if a < 0. Also show that no restriction on a is required for feedback uncertainty.

12.2 Consider systems with the transfer functions P₁(s) = (s + 1)/(s + 1)² and P₂(s) = (s + a)/(s + 1)². Show that P₁ can be changed continuously to P₂ with bounded feedback uncertainty if a > 0, but not if a < 0. Also show that no restriction on a is required for additive and multiplicative uncertainties.

Bibliography

[1] M. A. Abkowitz. Stability and Motion Control of Ocean Vehicles. MIT Press, Cambridge, MA, 1969.
[2] R. H. Abraham and C. D. Shaw. Dynamics: The Geometry of Behavior, Part 1: Periodic Behavior. Aerial Press, Santa Cruz, CA, 1982.
[3] J. Ackermann. Der Entwurf linearer Regelungssysteme im Zustandsraum. Regelungstechnik und Prozessdatenverarbeitung, 7:297–300, 1972.
[4] J. Ackermann. Sampled-Data Control Systems. Springer, Berlin, 1985.
[5] C. E. Agnew. Dynamic modeling and control of congestion-prone systems. Operations Research, 24(3):400–419,
1976.
[6] L. V. Ahlfors. Complex Analysis. McGraw-Hill, New York, 1966.
[7] B. D. O. Anderson and J. B. Moore. Optimal Control: Linear Quadratic Methods. Prentice Hall, Englewood Cliffs, NJ, 1990. Republished by Dover Publications, 2007.
[8] A. A. Andronov, A. A. Vitt and S. E. Khaikin. Theory of Oscillators. Dover, New York, 1987.
[9] T. M. Apostol. Calculus, Vol. II: Multi-Variable Calculus and Linear Algebra with Applications. Wiley, New York, 1967.
[10] T. M. Apostol. Calculus, Vol. I: One-Variable Calculus with an Introduction to Linear Algebra. Wiley, New York, 1969.
[11] R. Aris. Mathematical Modeling Techniques. Dover, New York, 1994. Originally published by Pitman, 1978.
[12] V. I. Arnold. Mathematical Methods in Classical Mechanics. Springer, New York, 1978.
[13] V. I. Arnold. Ordinary Differential Equations. MIT Press, Cambridge, MA, 1987. 10th printing 1998.
[14] K. J. Åström. Limitations on control system performance. European Journal on Control, 6(1):2–20, 2000.
[15] K. J. Åström. Introduction to Stochastic Control Theory. Dover, New York, 2006. Originally published by Academic Press, New York, 1970.
[16] K. J. Åström and T. Hägglund. Advanced PID Control. ISA (The Instrumentation, Systems, and Automation Society), Research Triangle Park, NC, 2005.
[17] K. J. Åström, R. E. Klein and A. Lennartsson. Bicycle dynamics and control. IEEE Control Systems Magazine, 25(4):26–47, 2005.
[18] K. J. Åström and B. Wittenmark. Computer-Controlled Systems: Theory and Design, 3rd ed. Prentice Hall, Englewood Cliffs, NJ, 1997.
[19] K. J. Åström and B. Wittenmark. Adaptive Control, 2nd ed. Dover, New York, 2008. Originally published by Addison Wesley, 1995.
[20] D. P. Atherton. Nonlinear Control Engineering. Van Nostrand, New York, 1975.
[21] M. Atkinson, M. Savageau, J. Myers and A. Ninfa. Development of genetic circuitry exhibiting toggle switch or oscillatory behavior in Escherichia coli. Cell, 113(5):597–607, 2003.
[22] M. B. Barron and W. F. Powers. The role of electronic controls for future automotive mechatronic systems. IEEE Transactions on Mechatronics, 1(1):80–89, 1996.
[23] T. Basar, editor. Control Theory: Twenty-five Seminal Papers (1932–1981). IEEE Press, New York, 2001.
[24] T. Basar and P. Bernhard. H∞ Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Birkhäuser, Boston, 1991.
[25] J. Bechhoefer. Feedback for physicists: A tutorial essay on control. Reviews of Modern Physics, 77:783–836, 2005.
[26] R. Bellman and K. J. Åström. On structural identifiability. Mathematical Biosciences, 7:329–339, 1970.
[27] R. E. Bellman and R. Kalaba. Selected Papers on Mathematical Trends in Control Theory. Dover, New York, 1964.
[28] S. Bennett. A History of Control Engineering: 1800–1930. Peter Peregrinus, Stevenage, 1986.
[29] S. Bennett. A History of Control Engineering: 1930–1955. Peter Peregrinus, Stevenage, 1986.
[30] L. L. Beranek. Acoustics. McGraw-Hill, New York, 1954.
[31] R. N. Bergman. Toward physiological understanding of glucose tolerance: Minimal-model approach. Diabetes, 38:1512–1527, 1989.
[32] D. Bertsekas and R. Gallager. Data Networks. Prentice Hall, Englewood Cliffs, 1987.
[33] B. Bialkowski. Process control sample problems. In N. J. Sell, editor, Process Control: Fundamentals for the Pulp and Paper Industry. Tappi Press, Norcross, GA, 1995.
[34] G. Binnig and H. Rohrer. Scanning tunneling microscopy. IBM Journal of Research and Development, 30(4):355–369, 1986.
[35] H. S. Black. Stabilized feedback amplifiers. Bell System Technical Journal, 13:1–18, 1934.
[36] H. S. Black. Inventing the negative feedback amplifier. IEEE Spectrum, pp. 55–60, 1977.
[37] J. F. Blackburn, G. Reethof and J. L. Shearer. Fluid Power Control. MIT Press, Cambridge, MA, 1960.
[38] J. H. Blakelock. Automatic Control of Aircraft and Missiles, 2nd ed. Addison-Wesley, Cambridge, MA, 1991.
[39] G. Blickley. Modern control started with Ziegler-Nichols tuning. Control Engineering, 37:72–75, 1990.
[40] H. W. Bode. Network Analysis and Feedback Amplifier Design. Van Nostrand, New York, 1945.
[41] H. W. Bode. Feedback: The history of an idea. In Symposium on Active Networks and Feedback Systems, Polytechnic Institute of Brooklyn, New York, 1960. Reprinted in [27].
[42] W. E. Boyce and R. C. DiPrima. Elementary Differential Equations. Wiley, New York, 2004.
[43] B. Brawn and F. Gustavson. Program behavior in a paging environment. Proceedings of the AFIPS
Fall Joint Computer Conference, pp. 1019–1032, 1968.
[44] R. W. Brockett. Finite Dimensional Linear Systems. Wiley, New York, 1970.
[45] R. W. Brockett. New issues in the mathematics of control. In B. Engquist and W. Schmid, editors, Mathematics Unlimited: 2001 and Beyond, pp. 189–220. Springer-Verlag, Berlin, 2000.
[46] G. S. Brown and D. P. Campbell. Principles of Servomechanisms. Wiley, New York, 1948.
[47] A. E. Bryson, Jr. and Y.-C. Ho. Applied Optimal Control: Optimization, Estimation, and Control. Wiley, New York, 1975.
[48] F. M. Callier and C. A. Desoer. Linear System Theory. Springer-Verlag, London, 1991.
[49] R. H. Cannon. Dynamics of Physical Systems. Dover, New York, 2003. Originally published by McGraw-Hill, 1967.
[50] H. S. Carslaw and J. C. Jaeger. Conduction of Heat in Solids, 2nd ed. Clarendon Press, Oxford, UK, 1959.
[51] H. Chestnut and R. W. Mayer. Servomechanisms and Regulating System Design, Vol. 1. Wiley, New York, 1951.
[52] C. Cobelli and G. Toffolo. Model of glucose kinetics and their control by insulin: compartmental and noncompartmental approaches. Mathematical Biosciences, 72(2):291–316, 1984.
[53] R. F. Coughlin and F. F. Driscoll. Operational Amplifiers and Linear Integrated Circuits, 6th ed. Prentice Hall, Englewood Cliffs, NJ, 1975.
[54] L. B. Cremean, T. B. Foote, J. H. Gillula, G. H. Hines, D. Kogan, K. L. Kriechbaum, J. C. Lamb, J. Leibs, L. Lindzey, C. E. Rasmussen, A. D. Stewart, J. W. Burdick and R. M. Murray. Alice: An information-rich autonomous vehicle for high-speed desert navigation. Journal of Field Robotics, 23(9):777–810, 2006.
[55] Crocus. Systèmes d'Exploitation des Ordinateurs. Dunod, Paris, 1975.
[56] H. de Jong. Modeling and simulation of genetic regulatory systems: A literature review. Journal of Computational Biology, 9:67–103, 2002.
[57] J. P. Den Hartog. Mechanical Vibrations. Dover, New York, 1985. Reprint of 4th ed. from 1956; 1st ed. published in 1934.
[58] L. Desbourough and R. Miller. Increasing customer value of industrial control performance monitoring: Honeywell's experience. In Sixth International Conference on Chemical Process Control, AIChE Symposium Series Number 326, Vol. 98, 2002.
[59] Y. Diao, N. Gandhi, J. L. Hellerstein, S.
Parekh and D. M. Tilbury. Using MIMO feedback control to enforce policies for interrelated metrics with application to the Apache web server. In Proceedings of the IEEE/IFIP Network Operations and Management Symposium, pp. 219–234, 2002.
[60] E. D. Dickmanns. Dynamic Vision for Perception and Control of Motion. Springer, Berlin, 2007.
[61] R. C. Dorf and R. H. Bishop. Modern Control Systems, 10th ed. Prentice Hall, Upper Saddle River, NJ, 2004.
[62] F. H. Dost. Grundlagen der Pharmakokinetik. Thieme Verlag, Stuttgart, 1968.
[63] J. C. Doyle. Guaranteed margins for LQG regulators. IEEE Transactions on Automatic Control, 23(4):756–757, 1978.
[64] J. C. Doyle, B. A. Francis and A. R. Tannenbaum. Feedback Control Theory. Macmillan, New York, 1992.
[65] J. C. Doyle, K. Glover, P. P. Khargonekar and B. A. Francis. State-space solutions to standard H2 and H∞ control problems. IEEE Transactions on Automatic Control, 34(8):831–847, 1989.
[66] L. E. Dubins. On curves of minimal length with a constraint on average curvature and with prescribed initial and terminal positions and tangents. American Journal of Mathematics, 79:497–516, 1957.
[67] F. Dyson. A meeting with Enrico Fermi. Nature, 427(6972):297, 2004.
[68] H. El-Samad, J. P. Goff and M. Khammash. Calcium homeostasis and parturient hypocalcemia: An integral feedback perspective. Journal of Theoretical Biology, 214:17–29, 2002.
[69] J. R. Ellis. Vehicle Handling Dynamics. Mechanical Engineering Publications, London, 1994.
[70] S. P. Ellner and J. Guckenheimer. Dynamic Models in Biology. Princeton University Press, Princeton, NJ, 2005.
[71] M. B. Elowitz and S. Leibler. A synthetic oscillatory network of transcriptional regulators. Nature, 403(6767):335–338, 2000.
[72] P. G. Fabietti, V. Canonico, M. O. Federici, M. Benedetti and E. Sarti. Control oriented model of insulin and glucose dynamics in type 1 diabetes. Medical and Biological Engineering and Computing, 44:66–78, 2006.
[73] M. Fliess, J. Levine, P. Martin and P. Rouchon. On differentially flat nonlinear systems. Comptes Rendus des Séances de l'Académie des Sciences, Serie I, 315:619–624, 1992.
[74] M. Fliess, J. Levine, P. Martin and P. Rouchon. Flatness and
defect of nonlinear systems: Introductory theory and examples. International Journal of Control, 61(6):1327–1361, 1995.
[75] J. W. Forrester. Industrial Dynamics. MIT Press, Cambridge, MA, 1961.
[76] J. B. J. Fourier. On the propagation of heat in solid bodies. Memoir read before the Class of the Institut de France, 1807.
[77] A. Fradkov. Cybernetical Physics: From Control of Chaos to Quantum Control. Springer, Berlin, 2007.
[78] B. A. Francis. A Course in H∞ Control. Springer-Verlag, Berlin, 1987.
[79] G. F. Franklin, J. D. Powell and A. Emami-Naeini. Feedback Control of Dynamic Systems, 5th ed. Prentice Hall, Upper Saddle River, NJ, 2005.
[80] B. Friedland. Control System Design: An Introduction to State Space Methods. Dover, New York, 2004.
[81] M. A. Gardner and J. L. Barnes. Transients in Linear Systems. Wiley, New York, 1942.
[82] E. Gilbert. Controllability and observability in multivariable control systems. SIAM Journal of Control, 1(1):128–151, 1963.
[83] J. C. Gille, M. J. Pelegrin and P. Decaulne. Feedback Control Systems: Analysis, Synthesis, and Design. McGraw-Hill, New York, 1959.
[84] M. Giobaldi and D. Perrier. Pharmacokinetics, 2nd ed. Marcel Dekker, New York, 1982.
[85] K. Godfrey. Compartment Models and Their Application. Academic Press, New York, 1983.
[86] H. Goldstein. Classical Mechanics. Addison-Wesley, Cambridge, MA, 1953.
[87] S. W. Golomb. Mathematical models: Uses and limitations. Simulation, 14(4):197–198, 1970.
[88] G. C. Goodwin, S. F. Graebe and M. E. Salgado. Control System Design. Prentice Hall, Upper Saddle River, NJ, 2001.
[89] D. Graham and D. McRuer. Analysis of Nonlinear Control Systems. Wiley, New York, 1961.
[90] M. Green and D. J. N. Limebeer. Linear Robust Control. Prentice Hall, Englewood Cliffs, NJ, 1995.
[91] J. Guckenheimer and P. Holmes. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer-Verlag, Berlin, 1983.
[92] E. A. Guillemin. Theory of Linear Physical Systems. MIT Press, Cambridge, MA, 1963.
[93] L. Gunkel and G. F. Franklin. A general solution for linear sampled data systems. IEEE Transactions on Automatic Control, AC-16:767–775, 1971.
[94] W. Hahn. Stability of Motion. Springer, Berlin,
1967.
[95] D. Hanahan and R. A. Weinberg. The hallmarks of cancer. Cell, 100:57–70, 2000.
[96] J. K. Hedrick and T. Batsuen. Invariant properties of automobile suspensions. In Proceedings of the Institution of Mechanical Engineers, Vol. 204, pp. 21–27, London, 1990.
[97] J. L. Hellerstein, Y. Diao, S. Parekh and D. M. Tilbury. Feedback Control of Computing Systems. Wiley, New York, 2004.
[98] D. V. Herlihy. Bicycle: The History. Yale University Press, New Haven, CT, 2004.
[99] M. B. Hoagland and B. Dodson. The Way Life Works. Times Books, New York, 1995.
[100] A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117:500–544, 1952.
[101] C. V. Hollot, V. Misra, D. Towsley and W.-B. Gong. A control theoretic analysis of RED. In Proceedings of IEEE Infocom, pp. 1510–1519, 2000.
[102] I. M. Horowitz. Synthesis of Feedback Systems. Academic Press, New York, 1963.
[103] I. M. Horowitz. Superiority of transfer function over state-variable methods in linear time-invariant feedback system design. IEEE Transactions on Automatic Control, AC-20(1):84–97, 1975.
[104] I. M. Horowitz. Survey of quantitative feedback theory. International Journal of Control, 53:255–291, 1991.
[105] T. P. Hughes. Elmer Sperry: Inventor and Engineer. Johns Hopkins University Press, Baltimore, MD, 1993.
[106] A. Isidori. Nonlinear Control Systems, 3rd ed. Springer-Verlag, Berlin, 1995.
[107] M. Ito. Neurophysiological aspects of the cerebellar motor system. International Journal of Neurology, 7:162–178, 1970.
[108] V. Jacobson. Congestion avoidance and control. ACM SIGCOMM Computer Communication Review, 25(1):157–173, 1995.
[109] J. A. Jacquez. Compartment Analysis in Biology and Medicine. Elsevier, Amsterdam, 1972.
[110] H. James, N. Nichols and R. Phillips. Theory of Servomechanisms. McGraw-Hill, New York, 1947.
[111] P. D. Joseph and J. T. Tou. On linear control theory. Transactions of the AIEE, 80:18, 1961.
[112] W. G. Jung, editor. Op Amp Applications. Analog Devices, Norwood, MA, 2002.
[113] R. E. Kalman. Contributions to the theory of optimal control. Boletín de la Sociedad Matemática Mexicana, 5:102–119, 1960.
[114] R. E.
Kalman. New methods and results in linear prediction and filtering theory. Technical Report 61-1, Research Institute for Advanced Studies (RIAS), Baltimore, MD, February 1961.
[115] R. E. Kalman. On the general theory of control systems. In Proceedings of the First IFAC Congress on Automatic Control, Moscow, 1960, Vol. 1, pp. 481–492. Butterworths, London, 1961.
[116] R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction theory. Transactions of the ASME, Journal of Basic Engineering, 83(D):95–108, 1961.
[117] R. E. Kalman, P. L. Falb and M. A. Arbib. Topics in Mathematical System Theory. McGraw-Hill, New York, 1969.
[118] R. E. Kalman, Y. Ho and K. S. Narendra. Controllability of Linear Dynamical Systems, Vol. 1 of Contributions to Differential Equations. Wiley, New York, 1963.
[119] J. Keener and J. Sneyd. Mathematical Physiology. Springer, New York, 2001.
[120] F. P. Kelly. Stochastic models of computer communication. Journal of the Royal Statistical Society, B47(3):379–395, 1985.
[121] K. Kelly. Out of Control. Addison-Wesley, Reading, MA, 1994. Available at http://www.kk.org/outofcontrol.
[122] J. M. Keynes. The General Theory of Employment, Interest and Money. Cambridge University Press, Cambridge, UK, 1936.
[123] H. K. Khalil. Nonlinear Systems, 3rd ed. Macmillan, New York, 2001.
[124] U. Kiencke and L. Nielsen. Automotive Control Systems: For Engine, Driveline, and Vehicle. Springer, Berlin, 2000.
[125] C. Kittel. Introduction to Solid State Physics. Wiley, New York, 1995.
[126] L. R. Klein and A. S. Goldberger. An Econometric Model of the United States 1929–1952. North Holland, Amsterdam, 1955.
[127] L. Kleinrock. Queuing Systems, Vols. I and II, 2nd ed. Wiley-Interscience, New York, 1975.
[128] N. N. Krasovski. Stability of Motion. Stanford University Press, Stanford, CA, 1963.
[129] M. Krstic, I. Kanellakopoulos and P. Kokotovic. Nonlinear and Adaptive Control Design. Wiley, 1995.
[130] P. R. Kumar. New technological vistas for systems and control: The example of wireless networks. Control Systems Magazine, 21(1):24–37, 2001.
[131] P. R. Kumar and P. Varaiya. Stochastic Systems: Estimation, Identification, and Adaptive Control.
Prentice Hall, Englewood Cliffs, NJ, 1986.
[132] P. Kundur. Power System Stability and Control. McGraw-Hill, New York, 1993.
[133] B. C. Kuo and F. Golnaraghi. Automatic Control Systems, 8th ed. Wiley, New York, 2002.
[134] M. Kurth and E. Welfonder. Oscillation behavior of the enlarged European power system under deregulated energy market conditions. Control Engineering Practice, 13:1525–1536, 2005.
[135] J. P. LaSalle. Some extensions of Lyapunov's second method. IRE Transactions on Circuit Theory, CT-7(4):520–527, 1960.
[136] A. D. Lewis. A mathematical approach to classical control. Technical report, Queen's University, Kingston, Ontario, 2003.
[137] S. H. Low, F. Paganini and J. C. Doyle. Internet congestion control. IEEE Control Systems Magazine, pp. 28–43, February 2002.
[138] S. H. Low, F. Paganini, J. Wang, S. Adlakha and J. C. Doyle. Dynamics of TCP/RED and a scalable control. In Proceedings of IEEE Infocom, pp. 239–248, 2002.
[139] K. H. Lundberg. History of analog computing. IEEE Control Systems Magazine, pp. 22–28, March 2005.
[140] L. A. MacColl. Fundamental Theory of Servomechanisms. Van Nostrand, Princeton, NJ, 1945. Dover reprint 1968.
[141] J. M. Maciejowski. Multivariable Feedback Design. Addison Wesley, Reading, MA, 1989.
[142] D. A. MacLulich. Fluctuations in the Numbers of the Varying Hare (Lepus americanus). University of Toronto Press, 1937.
[143] A. Makroglou, J. Li and Y. Kuang. Mathematical models and software tools for the glucose-insulin regulatory system and diabetes: An overview. Applied Numerical Mathematics, 56:559–573, 2006.
[144] J. G. Malkin. Theorie der Stabilität einer Bewegung. Oldenbourg, München, 1959.
[145] R. Mancini. Op Amps for Everyone. Texas Instruments, Houston, TX, 2002.
[146] J. E. Marsden and M. J. Hoffmann. Basic Complex Analysis. W. H. Freeman, New York, 1998.
[147] J. E. Marsden and T. S. Ratiu. Introduction to Mechanics and Symmetry. Springer-Verlag, New York, 1994.
[148] O. Mayr. The Origins of Feedback Control. MIT Press, Cambridge, MA, 1970.
[149] M. W. McFarland, editor. The Papers of Wilbur and Orville Wright. McGraw-Hill, New York, 1953.
[150] D. C. McFarlane and K. Glover. Robust Controller Design Using Normalized
Coprime Factor Plant Descriptions. Springer, New York, 1990.
[151] H. T. Milhorn. The Application of Control Theory to Physiological Systems. Saunders, Philadelphia, 1966.
[152] D. A. Mindel. Between Human and Machine: Feedback, Control, and Computing Before Cybernetics. Johns Hopkins University Press, Baltimore, MD, 2002.
[153] D. Möhl, G. Petrucci, L. Thorndahl and S. van der Meer. Physics and technique of stochastic cooling. Physics Reports, 58(2):73–102, 1980.
[154] J. D. Murray. Mathematical Biology, Vols. I and II, 3rd ed. Springer-Verlag, New York, 2004.
[155] R. M. Murray, editor. Control in an Information Rich World: Report of the Panel on Future Directions in Control, Dynamics and Systems. SIAM, Philadelphia, 2003.
[156] R. M. Murray, Z. Li and S. S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.
[157] P. J. Nahin. Oliver Heaviside: Sage in Solitude. The Life, Work and Times of an Electrical Genius of the Victorian Age. IEEE Press, New York, 1988.
[158] A. O. Nier. Evidence for the existence of an isotope of potassium of mass 40. Physical Review, 48:283–284, 1935.
[159] H. Nijmeijer and J. M. Schumacher. Four decades of mathematical system theory. In J. W. Polderman and H. L. Trentelman, editors, The Mathematics of Systems and Control: From Intelligent Control to Behavioral Systems, pp. 73–83. University of Groningen, 1999.
[160] H. Nyquist. Regeneration theory. Bell System Technical Journal, 11:126–147, 1932.
[161] H. Nyquist. The regeneration theory. In R. Oldenburger, editor, Frequency Response, p. 3. MacMillan, New York, 1956.
[162] K. Ogata. Modern Control Engineering, 4th ed. Prentice Hall, Upper Saddle River, NJ, 2001.
[163] R. Oldenburger, editor. Frequency Response. MacMillan, New York, 1956.
[164] G. Pacini and R. N. Bergman. A computer program to calculate insulin sensitivity and pancreatic responsivity from the frequently sampled intravenous glucose tolerance test. Computer Methods and Programs in Biomedicine, 23:113–122, 1986.
[165] G. A. Philbrick. Designing industrial controllers by analog. Electronics, 21(6):108–111, 1948.
[166] W. F. Powers and P. R. Nicastri. Automotive vehicle control challenges in
the 21st century. Control Engineering Practice, 8:605–618, 2000.
[167] S. Prajna, A. Papachristodoulou and P. A. Parrilo. SOSTOOLS: Sum of squares optimization toolbox for MATLAB, 2002. Available from http://www.cds.caltech.edu/sostools.
[168] D. S. Riggs. The Mathematical Approach to Physiological Problems. MIT Press, Cambridge, MA, 1963.
[169] H. H. Rosenbrock and P. D. Moran. Good, bad or optimal? IEEE Transactions on Automatic Control, AC-16(6):552–554, 1971.
[170] F. Rowsone, Jr. What it's like to drive an auto-pilot car. Popular Science Monthly, April 1958. Available at http://www.imperialclub.com/ImFormativeArticles/1958AutoPilot.
[171] W. J. Rugh. Linear System Theory, 2nd ed. Prentice Hall, Englewood Cliffs, NJ, 1995.
[172] E. B. Saff and A. D. Snider. Fundamentals of Complex Analysis with Applications to Engineering, Science and Mathematics. Prentice Hall, Englewood Cliffs, NJ, 2002.
[173] D. Sarid. Atomic Force Microscopy. Oxford University Press, Oxford, UK, 1991.
[174] S. Sastry. Nonlinear Systems. Springer, New York, 1999.
[175] G. Schitter. High performance feedback for fast scanning atomic force microscopes. Review of Scientific Instruments, 72(8):3320–3327, 2001.
[176] G. Schitter, K. J. Åström, B. DeMartini, P. J. Thurner, K. L. Turner and P. K. Hansma. Design and modeling of a high-speed AFM scanner. IEEE Transactions on Control System Technology, 15(5):906–915, 2007.
[177] M. Schwartz. Telecommunication Networks. Addison Wesley, Reading, MA, 1987.
[178] D. E. Seborg, T. F. Edgar and D. A. Mellichamp. Process Dynamics and Control, 2nd ed. Wiley, Hoboken, NJ, 2004.
[179] S. D. Senturia. Microsystem Design. Kluwer, Boston, MA, 2001.
[180] F. G. Shinskey. Process-Control Systems: Application, Design, and Tuning, 4th ed. McGraw-Hill, New York, 1996.
[181] S. Skogestad and I. Postlethwaite. Multivariable Feedback Control, 2nd ed. Wiley, Hoboken, NJ, 2005.
[182] E. D. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd ed. Springer, New York, 1998.
[183] M. W. Spong and M. Vidyasagar. Dynamics and Control of Robot Manipulators. John Wiley, 1989.
[184] L. Stark. Neurological Control Systems: Studies in Bioengineering. Plenum Press, New York, 1968.
[185] G. Stein. Respect the unstable. Control Systems Magazine, 23(4):12–25, 2003.
[186] J. Stewart. Calculus: Early Transcendentals. Brooks Cole, Pacific Grove, CA, 2002.
[187] G. Strang. Linear Algebra and Its Applications, 3rd ed. Harcourt Brace Jovanovich, San Diego, 1988.
[188] S. H. Strogatz. Nonlinear Dynamics and Chaos, with Applications to Physics, Biology, Chemistry, and Engineering. Addison-Wesley, Reading, MA, 1994.
[189] A. S. Tannenbaum. Computer Networks, 3rd ed. Prentice Hall, Upper Saddle River, NJ, 1996.
[190] T. Teorell. Kinetics of distribution of substances administered to the body, I and II. Archives Internationales de Pharmacodynamie et de Therapie, 57:205–240, 1937.
[191] G. T. Thaler. Automatic Control Systems. West Publishing, St. Paul, MN, 1989.
[192] M. Tiller. Introduction to Physical Modeling with Modelica. Springer, Berlin, 2001.
[193] D. Tipper and M. K. Sundareshan. Numerical methods for modeling computer networks under nonstationary conditions. IEEE Journal of Selected Areas in Communications, 8(9):1682–1695, 1990.
[194] J. G. Truxal. Automatic Feedback Control System Synthesis. McGraw-Hill, New York, 1955.
[195] H. S. Tsien. Engineering Cybernetics. McGraw-Hill, New York, 1954.
[196] G. Vinnicombe. Uncertainty and Feedback: H∞ Loop-Shaping and the ν-Gap Metric. Imperial College Press, London, 2001.
[197] F. J. W. Whipple. The stability of the motion of a bicycle. Quarterly Journal of Pure and Applied Mathematics, 30:312–348, 1899.
[198] D. V. Widder. Laplace Transforms. Princeton University Press, Princeton, NJ, 1941.
[199] E. P. M. Widmark and J. Tandberg. Über die Bedingungen für die Akkumulation indifferenter Narkotika. Biochemische Zeitung, 148:358–389, 1924.
[200] N. Wiener. Cybernetics: Or Control and Communication in the Animal and the Machine. Wiley, 1948.
[201] S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer-Verlag, Berlin, 1990.
[202] D. G. Wilson. Bicycling Science, 3rd ed. MIT Press, Cambridge, MA, 2004. With contributions by Jim Papadopoulos.
[203] H. R. Wilson. Spikes, Decisions and Actions: The Dynamical Foundations of Neuroscience. Oxford University Press,
Oxford, UK, 1999.
[204] K. A. Wise. Guidance and control for military systems: Future challenges. In AIAA Conference on Guidance, Navigation, and Control, 2007. AIAA Paper 2007-6867.
[205] S. Yamamoto and I. Hashimoto. Present status and future needs: The view from Japanese industry. In Y. Arkun and W. H. Ray, editors, Chemical Process Control (CPC IV), 1991.
[206] T.-M. Yi, Y. Huang, M. I. Simon and J. Doyle. Robust perfect adaptation in bacterial chemotaxis through integral feedback control. PNAS, 97:4649–4653, 2000.
[207] L. A. Zadeh and C. A. Desoer. Linear System Theory: The State Space Approach. McGraw-Hill, New York, 1963.
[208] G. Zames. Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximative inverses. IEEE Transactions on Automatic Control, AC-26(2):301–320, 1981.
[209] K. Zhou, J. C. Doyle and K. Glover. Robust and Optimal Control. Prentice Hall, Englewood Cliffs, NJ, 1996.
[210] J. G. Ziegler and N. B. Nichols. Optimum settings for automatic controllers. Transactions of the ASME, 64:759–768, 1942.
130 gain 24 43 72 153 154 186 230 234 239 250 279 285288 347 H 286 287 371 observer see observer gain of a system 285 reference 195 state feedback 176 177 180 195 197 INDEX 391 zero frequency see zero frequency gain see also integral gain gain crossover frequency 279 280 322 327 332 351 365 gain crossover frequency inequality 332 334 gain curve Bode plot 250254 283 327 gain margin 279281 from Bode plot 280 reasonable values 281 gain scheduling 220 373 gainbandwidth product 74 237 361 Gang of Four 317 344 358 Gang of Six 317 322 gene regulation 16 58 59 166 256 genetic switch 64 114 115 global behavior 103 120124 Glover K 343 374 glucose regulation see insulinglucose dynamics Golomb S 65 governor see centrifugal governor H control 371374 376 Harrier AV8B aircraft 53 heat propagation 238 Heaviside O 163 Heaviside step function 150 163 Hellerstein J L 13 25 80 highfrequency rolloff 326 359 366 highpass filter 255 256 Hill function 58 Hoagland M B 1 HodgkinHuxley equations 60 homeostasis 3 58 homogeneous solution 133 136 137 239 Honeywell thermostat 6 Horowitz I M 226 343 369 374 humanmachine interface 65 69 hysteresis 23 289 identification see system identification impedance 236 309 implementation controllers see analog implementation computer implementation impulse function 146 164 169 impulse response 135 146 147 261 inductor transfer function for 236 inertia matrix 36 163 infinity norm 286 372 information systems 12 5458 see also congestion control web server control initial condition 96 99 102 132 137 144 215 initial condition response 133 136139 142 144 147 231 initial value problem 96 inner loop control 340 342 input sensitivity function see load sensitivity function inputoutput models 5 29 31 132 145158 229 286 see also frequency response steadystate response step response and transfer functions 261 and uncertainty 51 349 from experiments 257 relationship to state space models 32 95 146 steadystate response 149 transfer function for 235 inputs 29 32 insect 
flight control 4647 instrumentation 1011 71 insulinglucose dynamics 2 8789 integral action 2426 195198 293 295296 298 324 for bias compensation 226 setpoint weighting 309 312 time constant 294 integral gain 24 294 296 299 integrator windup 225 306307 314 conditional integration 314 intelligent machines see robotics internal model principle 214 221 Internet 12 13 75 77 80 92 see also congestion control Internet Protocol IP 77 invariant set 118 121 inverse model 162 219 320 inverse response 284 292 inverted pendulum 37 69 100 107 118 121 128 130 276 337 see also balance systems Jacobian linearization 159161 Jordan form 139142 164 188 Kalman R E 167 197 201 223 226 Kalman decomposition 222224 235 262 264 Kalman filter 215218 226 370 extended 220 KalmanBucy filter 217 Kelly F P 80 Kepler J 28 Keynes J M 14 Keynesian economic model 62 165 KrasovskiLasalle principle 118 LabVIEW 123 164 lag see phase lag lag compensation 326328 Laplace transforms xi 259262 Laplacian matrix 58 Lasalles invariance principle see KrasovskiLasalle principle lead see phase lead lead compensation 327330 341 345 392 INDEX limit cycle 91 101 109 111 122 288 289 linear quadratic control 190194 216 226 369371 linear systems 30 34 74 104 131164 222 231 235 262 286 linear timeinvariant systems 30 34 134 261 linearity 133 250 linearization 109 117 132 158163 220 347 Lipschitz continuity 98 load disturbances 315 359 see also disturbances load sensitivity function 317 local behavior 103 109 117 120 159 locally asymptotically stable 103 logistic growth model 89 90 94 loop analysis 267 315 loop shaping 270 326330 343 369 design rules 327 fundamental limitations 331340 see also Bodes loop transfer function loop transfer function 267270 279 280 287 315 318 326 329 336 343 see also Bodes loop transfer function Lotus Notes server see email server loworder models 298 lowpass filter 255 256 308 LQ control see linear quadratic control LTI systems see linear timeinvariant systems Lyapunov equation 114 128 Lyapunov 
functions 111114 120 127 164 design of controllers using 118 124 existence of 113 Lyapunov stability analysis 43 110120 126 discrete time 128 manifold 120 margins see stability margins Mars Exploratory Rovers 11 mass spectrometer 11 materials science 9 Mathematica 41 123 164 MATLAB 26 41 123 164 200 acker 181 211 dlqe 216 dlqr 194 hinfsyn 372 jordan 139 linmod 160 lqr 191 place 181 189 211 trim 160 matrix exponential 136139 143 145 163 164 coordinate transformations 148 Jordan form 140 secondorder systems 138 164 maximum complementary sensitivity 354 365 maximum sensitivity 323 352 366 measured signals 31 32 34 95 201 213 225 316 318 371 measurement noise 4 21 201 203 215 217 244 308 315317 326 359 response to 324326 359 mechanical systems 31 35 42 51 61 163 mechanics 2829 31 126 131 minimal model insulinglucose 88 89 see also insulinglucose dynamics minimum phase 283 290 331 modal form 130 145 149 Modelica 33 modeling 5 2733 61 65 control perspective 31 discrete control 56 discretetime 3738 157158 frequency domain 229231 from experiments 4748 model reduction 5 normalization and scaling 48 of uncertainty 5051 simplified models use of 32 298 348 354 355 software for 33 160 163 state space 3443 uncertainty see uncertainty modes 142144 239 relationship to poles 241 motion control systems 5154 226 motors electric 64 199 227 228 multiinput multioutput systems 286 318 327 see also inputoutput models multiplicative uncertainty 349 356 nanopositioner AFM 282 366 natural frequency 184 300 negative definite function 111 negative feedback 18 22 73 176 267 297 Nernsts law 60 networking 12 45 80 see also congestion control neural systems 11 47 60 297 neutral stability 102104 Newton I 28 Nichols N B 163 302 343 Nichols chart 369 370 Nobel Prize 11 14 61 81 noise see disturbances measurement noise noise attenuation 257 324326 noise cancellation 124 noise sensitivity function 317 nonlinear systems 31 95 98 101 108 110 114 120125 202 220 286288 INDEX 393 linear approximation 109 
117 159 165 347 system identification 62 nonminimum phase 283 284 292 331333 see also inverse response nonunique solutions ODEs 97 normalized coordinates 4850 63 161 norms 285286 Nyquist H 267 290 Nyquist criterion 271 273 276 278 287 288 303 for robust stability 352 376 Nyquist D contour 270 276 Nyquist plot 270271 279 303 324 370 observability 32 201202 222 226 rank condition 203 tests for 202203 unobservable systems 204 222223 265 observability matrix 203 205 226 observable canonical form 204 205 226 observer gain 207 209211 213 215217 observers 201 206209 217 220 block diagram 202 210 see also Kalman filter ODEs see differential equations Ohms law 60 73 236 onoff control 23 open loop 1 2 72 168 245 267 306 315 323 349 open loop gain 237 279 322 operational amplifiers 7175 237 309 356 circuits 92 154 268 360 dynamic model 74 237 inputoutput characteristics 72 oscillator using 92 128 static model 72 237 optimal control 190 215 217 370 order of a system 34 235 ordinary differential equations see differential equations oscillator dynamics 92 96 97 138 184 233 236 normal form 63 see also nanopositioner AFM springmass system outer loop control 340342 output feedback 211 212 226 see also control using estimated state loop shaping PID control output sensitivity function see noise sensitivity function outputs see measured signals overdamped oscillator 184 overshoot 151 176 185 322 Padé approximation 292 332 paging control computing 56 parallel connection 243 parametric stability diagram 122 123 parametric uncertainty 50 347 particle accelerator 11 particular solution 133 152 see also forced response passive systems 288 336 passivity theorem 288 patch clamp 11 PD control 296 328 peak frequency 156 322 pendulum dynamics 113 see also inverted pendulum perfect adaptation 297 performance 76 performance limitations 331 336 365 373 due to right halfplane poles and zeros 283 see also control fundamental limitations performance specifications 151 175 315 322327 358 see also 
overshoot maximum sensitivity resonant peak rise time settling time periodic solutions see differential equations limit cycles persistence of a web connection 76 77 Petri net 45 pharmacokinetics 85 89 see also drug administration phase 43 153 154 186 230 234 250 288 see also minimum phase nonminimum phase minimum vs nonminimum 283 phase crossover frequency 279 280 phase curve Bode plot 250252 254 relationship to gain curve 283 326 phase lag 153 154 256 283 332 333 phase lead 153 256 330 345 phase margin 279 280 326 329 332 346 375 from Bode plot 280 reasonable values 281 phase portrait 28 29 98100 120 Philbrick G A 75 photoreceptors 297 physics relationship to control 5 PI control 17 24 65 68 296 301 327 328 firstorder system 300 364 PID control 2324 235 293313 330 block diagram 294 296 308 computer implementation 311 ideal form 293 313 implementation 296 308312 in biological systems 297 op amp implementation 309311 tuning 302306 see also derivative action integral action pitchfork bifurcation 130 planar dynamical systems 99 104 see also secondorder 394 INDEX systems pole placement 176 361 365366 see also eigenvalue assignment robust 361 pole zero diagram 240 polezero cancellations 247249 265 365 366 poles 239 241 dominant 301 see also dominant eigenvalues poles fast stable 364 366 pure imaginary 270 276 relationship to eigenvalues 239 right halfplane 241 276 283 331 333334 336 345 366 population dynamics 8991 94 see also predatorprey system positive definite function 111 112 114 118 positive definite matrix 114 191 positive feedback 16 2122 129 296 positive real transfer function 336 power of a matrix 136 power systems electric 67 63 101 127 predatorprey system 38 9091 121 181 prediction in controllers 24 220 296 375 see also derivative action prediction time 297 principle of the argument see variation of the argument principle of process control 9 10 13 45 proportional control 23 24 293 see also PID control proportional integral derivative control see PID control 
protocol see congestion control consensus pulse signal 146 147 187 see also impulse function pupil response 258 297 pure exponential response 232 Qvalue 63 186 254 quantitative feedback theory QFT 369 quarter car model 265 queuing systems 5456 63 random process 54 215 228 reachability 32 167175 197 222 rank condition 170 tests for 169 unreachable systems 171 199 222223 265 reachability matrix 169 173 reachable canonical form 35 172175 178 180 198 reachable set 167 realtime systems 5 reference signal 23 175 176 229 244 293 309 317 319 see also command signals setpoint effect on observer error 212 219 224 response to 322 344 tracking 175 219 220 326 360 reference weighting see setpoint weighting region of attraction see equilibrium points regions of attraction regulator see control law relay feedback 289 305 Reno protocol see Internet congestion control repressilator 5960 repressor 16 59 64 114 166 257 reset in PID control 295 296 resonant frequency 186 286 resonant peak 156 186 322 355 resource usage in computing systems 13 55 57 75 76 response see inputoutput models retina 297 see also pupil response Riccati equation 191 217 372 374 Riemann sphere 351 right halfplane poles and zeros see poles right halfplane zeros right halfplane rise time 151 165 176 185 322 robotics 8 1112 163 robustness 1618 322 349 374 performance 358361 369374 stability 352358 using gain and phase margin 281 326 using maximum sensitivity 323 326 353 375 376 using pole placement 361368 via gain and phase margin 280 see also uncertainty rolloff see highfrequency rolloff root locus diagram 123 RouthHurwitz criterion 130 rushhour effect 56 64 saddle equilibrium point 104 sampling 157 224 225 311 saturation function 45 72 311 see also actuators saturation scaling see normalized coordinates scanning tunneling microscope 11 81 schematic diagrams 44 45 71 Schitter G 83 84 secondorder systems 28 164 183187 200 253 301 Segway Personal Transporter 35 170 selfactivation 129 selfrepression 166 256 
semidefinite function 111 sensitivity crossover frequency 324 sensitivity function 317 324 325 327 336 352 360 INDEX 395 366 and disturbance attenuation 323 336 345 sensor matrix 34 38 sensor networks 57 sensors 3 4 9 202 224 284 311 315 318 333 334 371 effect on zeros 284 334 in computing systems 75 see also measured signals separation principle 201 213 series connection 242 243 service rate queuing systems 55 setpoint 293 setpoint weighting 309 312 settling time 151 165 176 185 322 similarity of two systems 349352 simulation 4042 51 SIMULINK 160 singleinput singleoutput SISO systems 95 132 133 159 204 286 singular values 286 287 376 sink equilibrium point 104 small gain theorem 287288 355 Smith predictor 375 software tools for control x solution ODE see differential equations solutions Sony AIBO 11 12 source equilibrium point 104 spectrum analyzer 257 Sperry autopilot 19 springmass system 28 40 42 43 82 127 coupled 144 148 generalized 35 71 identification 47 normalization 49 63 see also oscillator dynamics stability 3 5 18 19 42 98 102120 asymptotic stability 102 106 conditional 275 in the sense of Lyapunov 102 local versus global 103 110 120 121 Lyapunov analysis see Lyapunov stability analysis neutrally stable 102 104 of a system 105 of equilibrium points 42 102 104 111 117 of feedback loop see Nyquist criterion of limit cycles 109 of linear systems 104107 113 140 of solutions 102 110 of transfer functions 240 robust see robust stability unstable solutions 103 using eigenvalues 117 140 141 using linear approximation 107 117 160 using RouthHurwitz criterion 130 using state feedback 175194 see also bifurcations equilibrium points stability diagram see parametric stability diagram stability margin quantity 280 281 323 346 353 372 reasonable values 281 stability margins concept 278282 291 326 stable pole 241 stable zero 241 Stark L 258 state of a dynamical system 28 31 34 state estimators see observers state feedback 167197 207 212 219221 224226 362 370 see also 
eigenvalue assignment linear quadratic control state space 28 3443 175 state vector 34 steadystate gain see zero frequency gain steadystate response 26 42 149157 165 176 185 230 231 233 257 262 steam engines 2 17 steering see vehicle steering Stein G xii 1 315 337 step input 30 135 150 239 302 step response 30 31 47 48 135 147 150 151 165 176 184 185 302 stochastic cooling 11 stochastic systems 215 217 summing junction 45 superposition 30 133 147 164 230 supervisory control see decision making higher levels of supply chains 14 15 supremum sup 286 switching behavior 22 64 117 373 system identification 47 62 257 tapping mode see atomic force microscope TCPIP see Internet congestion control Teorell T 85 89 thermostat 5 6 threeterm controllers 293 see also PID control thrust vectored aircraft see vectored thrust aircraft time constant firstorder system 165 time delay 5 13 235 236 281 283 302 311 332334 compensation for 375 Padé approximation 292 332 time plot 28 timeinvariant systems 30 34 126 134135 tracking see reference signal tracking trail bicycle dynamics 70 396 INDEX transcriptional regulation see gene regulation transfer functions 229262 by inspection 235 derivation using exponential signals 231 derivation using Laplace transforms 261 for control systems 244 264 for electrical circuits 236 for time delay 235 frequency response 230 250 from experiments 257 irrational 236 239 linear inputoutput systems 231 235 264 transient response 42 149 151 153 168 188 231 232 Transmission Control Protocol TCP 77 transportation systems 8 Tsien H S 11 tuning rules 314 see ZieglerNichols tuning two degreeoffreedom control 219 294 319 321 343 344 uncertainty 4 1718 32 5051 195 347352 component or parameter variation 4 50 347 disturbances and noise 4 32 175 244 315 unmodeled dynamics 4 50 348 353 see also additive uncertainty feedback uncertainty multiplicative uncertainty uncertainty band 50 uncertainty lemon 50 51 68 74 84 underdamped oscillator 97 184 185 unit step 150 
unmodeled dynamics see uncertainty unmodeled dynamics unstable pole see poles right halfplane unstable polezero cancellation 248 unstable solution for a dynamical system 103 104 106 141 241 unstable zero see zeros right halfplane variation of the argument principle of 277 290 vector field 29 99 vectored thrust aircraft 5354 141 191 217 264 329 340 vehicle steering 5153 160 177 209 214 221 245 284 291 321 362 ship dynamics 51 vehicle suspension 265 see also coupled springmass system vertical takeoff and landing see vectored thrust aircraft vibration absorber 266 Vinnicombe G 343 351 374 Vinnicombe metric 349352 372 voltage clamp 10 61 waterbed effect 336 337 Watt governor see centrifugal governor Watt steam engine 3 17 web server control 7577 192 web site companion x Whipple F J W 71 Wiener N 11 12 winding number 277 window size TCP 78 80 104 windup see integrator windup Wright W 18 Wright Flyer 8 19 X29 aircraft 336 X45 aircraft 8 Youla parameterization 356358 zero frequency gain 155 177 180 186 239 zeros 239 Bode plot for 264 effect of sensors and actuators on 284 285 334 for a state space system 240 right halfplane 241 283 331334 336 345 365 signalblocking property 239 slow stable 362 363 365 Ziegler J G 302 312 ZieglerNichols tuning 302305 312 frequency response 303 improved method 303 step response 302

State-Space Modeling and Analysis of Bicycle Dynamics
Vedant Chopra
November 24, 2020
ECE 5115 Controls System Lab II

Overview: Bicycle Model; State-Space Model; SS Model Analysis; Controller Design and Analysis

Bicycle Model: input — steering angle; output — angular acceleration.

Bicycle Model Analysis to SS Model Conversion: we can now substitute the force into the equations.

Add Values to Coefficients: researched values to substitute into the state-space model; they represent an average bike with an average biker.

SS Model Analysis (Simulink): by-hand analysis would be arduous and time-consuming.

Setup:

syms u x1 x2 x3 x1p x2p x3p
% cornering stiffnesses, mass, longitudinal speed, geometry, yaw inertia
% (values as given in the slides)
car = 150; caf = 150; m = 81.6; vlon = 4.4; lr = 0.35; lf =
0.625; iz = 30105;

Preliminary MATLAB Code:

eq1 = x1p + (car + caf)/(m*vlon)*x1 - (car*lr - caf*lf)/(m*vlon)*x3 + vlon*x3 - caf/m*u == 0;
eq2 = x3 == x2p;
eq3 = x3p - (lr*car - lf*caf)/(iz*vlon)*x1 + (lf^2*caf + lr^2*car)/(iz*vlon)*x3 - caf/iz*lf*u == 0;

A = [-(car + caf)/(m*vlon), 0, (car*lr - caf*lf)/(m*vlon) - vlon;
      0,                    0, 1;
     (lr*car - lf*caf)/(iz*vlon), 0, -(lf^2*caf + lr^2*car)/(iz*vlon)];
B = [caf/m; 0; caf/iz*lf];
C = [0 0 1];
D = 0;
sys = ss(A, B, C, D);

Checking Requirements:

% Check for stability (eigenvalues)
e = eig(A)   % slides report approximately 0, -0.83556, 0

% Check for observability and controllability
Mo = obsv(A, C);
Mc = ctrb(A, B);

% Check the number of unobservable and uncontrollable states
uobs = length(A) - rank(Mo)   % = 1, so unobservable
uctr = length(A) - rank(Mc)   % = 0, so controllable

% Convert to diagonal (modal) form and evaluate detectability
[csys, T] = canon(sys, 'modal');
csys.C   % evaluates to [0 0 1]: two modes are unobservable

Our model is also linear and time-invariant.

Minimal Realization: removes the unobservable x2 state (the yaw-angle integrator), which results in a controllable and observable system.

Use Minimal Realization and Re-evaluation:

nsys = minreal(sys);   % the unobservable state was removed from A, B, C, D
ne = eig(nsys);
nMo = obsv(nsys.A, nsys.C);
nMc = ctrb(nsys.A, nsys.B);
nuobs = length(nsys.A) - rank(nMo)   % = 0, so observable
nuctr = length(nsys.A) - rank(nMc)   % = 0, so controllable

Controller Design:

% Define Q and R
Q = [21 0; 0 1];   % started with the original Q = [1 0; 0 1], adjusted to meet most requirements
R = 1;             % made 1 since we were given no machine limits; good for simple math

% Calculate gain K, ARE solution S, and closed-loop poles P
[K, S, P] = lqr(nsys, Q, R);

% Calculate gain N for error (reference) tracking
N = -inv(nsys.C * inv(nsys.A - nsys.B*K) * nsys.B);   % sign restored per the standard formula

Slides: Open Loop Model; Closed Loop Model; Reference Tracking Model; Optimized Model.

Summary: Bicycle Model; State-Space Model; Requirement Check; Create Optimized Controller.

Acknowledgements

Work Cited
[1] B. Zheng, "Active steering control with front wheel steering," Jan. 2004. [Online]. Available:
httpswwwresearchgatenetfigureBicyclemodelforsteeringdynamicsThecorrespondinglinearizeddynamicequationisfig14119228 [Accessed: 22-Nov-2020].

This video was produced as part of the requirements for ECE 5115 Control Lab II at the Cullen College of Engineering, University of Houston, Houston, Texas.

Images used under United States Public Domain, from ResearchGate user Bing Zheng, "Active Steering Control with Front Wheel Steering."
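The MATLAB workflow above can be cross-checked outside MATLAB. The sketch below is an illustrative Python/SciPy re-implementation (not part of the original lab): it rebuilds A, B, C from the slide values, repeats the controllability/observability rank tests, forms the minimal realization by hand (dropping the yaw-angle integrator state, which never reaches the output, instead of calling minreal), and solves the LQR problem via the continuous-time algebraic Riccati equation.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Parameter values as recovered from the slides
car, caf = 150.0, 150.0            # rear/front cornering stiffness
m, vlon = 81.6, 4.4                # mass, longitudinal speed
lr, lf, iz = 0.35, 0.625, 30105.0  # geometry and yaw inertia (as given)

A = np.array([
    [-(car + caf)/(m*vlon), 0.0, (car*lr - caf*lf)/(m*vlon) - vlon],
    [0.0, 0.0, 1.0],
    [(lr*car - lf*caf)/(iz*vlon), 0.0, -(lf**2*caf + lr**2*car)/(iz*vlon)],
])
B = np.array([[caf/m], [0.0], [caf*lf/iz]])
C = np.array([[0.0, 0.0, 1.0]])

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

n_unctr = A.shape[0] - np.linalg.matrix_rank(ctrb(A, B))  # 0: controllable
n_unobs = A.shape[0] - np.linalg.matrix_rank(obsv(A, C))  # 1: one unobservable state

# Minimal realization by hand: drop state 2 (the yaw-angle integrator),
# which does not appear in the output and feeds nothing back.
keep = [0, 2]
Am, Bm, Cm = A[np.ix_(keep, keep)], B[keep, :], C[:, keep]

# LQR on the reduced system, matching the slides' weights
Q = np.diag([21.0, 1.0])
R = np.array([[1.0]])
P = solve_continuous_are(Am, Bm, Q, R)
K = np.linalg.solve(R, Bm.T @ P)                       # feedback gain
N = -1.0 / (Cm @ np.linalg.inv(Am - Bm @ K) @ Bm)      # reference-tracking gain
```

The rank counts reproduce the slides' conclusion (controllable, one unobservable state), and the LQR gain stabilizes the reduced model by construction.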
V CONGRESO INTERNACIONAL DE INGENIERÍA MECATRÓNICA Y AUTOMATIZACIÓN (CIIMA 2016)

Modelling and Response of a Car Suspension
(Modelado y Respuesta de una Suspensión de Carro)

Jorge Guillermo Díaz Rodríguez, Departamento de Engenharia Mecânica, PUC-Rio, CEP 22451-900, RJ, Brasil
Andrés José Rodríguez Torres, Departamento de Engenharia Mecânica, PUC-Rio, CEP 22451-900, RJ, Brasil

Abstract — A model of a car suspension was derived for one tire set, simplifying it to a mass-spring-damper system. It was implemented in Simulink, as a transfer function, and in state space. The open-loop response of the two methods was the same, and the system was found to be stable by the Nyquist criterion. A PID controller was designed, and the closed-loop response was compared with the open loop using Nyquist, Bode, and step plots, finding that the system remained stable and that the overshoot was lowered considerably.

Keywords — Active suspension, control, state
space modelling.

1. INTRODUCTION

This paper shows the modelling and control of one quarter of a car suspension. If roads were smooth and even, suspensions would not need to exist. That is not the case: roads have bumps and holes that make tires lose contact with the ground, thereby losing friction, stability, and steering capability. Car manufacturers have been building suspensions with elastic springs since the 1920s and added a viscous damper later on [1]. When a wheel passes over an imperfection, it experiences a vertical acceleration. The suspension absorbs that energy, minimizing vibration and creating a sensation of comfort for passengers. Lately, demands for improved ride comfort and controllability of vehicles, together with the high availability of electronic systems, have motivated the development of active and semi-active suspension systems. They are typically electronically controlled and improve comfort as well as road handling [1], [2]. An active suspension system has the capability to adjust itself continuously to changing road conditions [3]. This paper focuses on obtaining a physical model, detailing the process using the laws of mechanics, and explores the model through frequency-domain and state-space transformations, along with evaluation of the models. Application of more advanced techniques is left for future work.

2. MODELLING

For the simplification used, the car's weight was partitioned into four equal masses, each one attached to one suspension system, as shown in Figure 1, where ms represents the mass of the car (herein ms represents 1/4 of the car mass), mt the tire's mass, ks and bs the suspension's spring and damper, and kt the tire's elasticity. The controlled variable is the position x and the actuating variable is the hydraulic cylinder force Fa.

Figure 1. Suspension Schematics

Although Lagrangian mechanics would be the first choice, the presence of non-conservative forces makes Lagrange more difficult to use in this case. Following a modelling example of [9] or [10], where a similar system is modelled in
an analogous way that, however, introduced nonlinear terms when including vibration angles, this paper uses Newtonian mechanics. Remembering that a spring produces a force proportional to displacement, a viscous damper produces a force proportional to the position's first derivative, and a mass produces a force proportional to the position's second derivative, the system is modelled as particles. Figure 2 shows, in a free-body diagram (FBD), the forces acting on ms.

Figure 2. FBD for ms Showing Inertial Forces

The balance of forces for ms is described in (1):

ms·xs'' = -ks·(xs - xt) - bs·(xs' - xt') + Fa    (1)

For mt the FBD is shown in Figure 3.

Figure 3. FBD for mt Showing Inertial Forces

The balance of forces for mt, which summarizes its movement, is shown in (2), where Fa is the hydraulic actuator force placed between ms and mt, as shown in Figure 1:

mt·xt'' = ks·(xs - xt) + bs·(xs' - xt') + kt·(xr - xt) - Fa    (2)

The values for (1) and (2) are ms = 250 kg, ks = 18600 N/m, bs = 1000 N·s/m, mt = 50 kg, and kt = 196000 N/m.

A physical model was drafted in SolidWorks in order to import it into MathWorks SimMechanics. However, the imported model exhibited erratic behaviour, and no further work was done using that approach. Equations (1) and (2) were modelled with Simulink [5]. To test the model, an arbitrary constant force of 1500 N was applied to simulate Fa, and an external disturbance (uneven road) was modelled as a sine wave. The response of the Simulink model to a step is shown in Figure 4. A passenger feels the movement of xs. It can be seen that both masses experience a deviation from the road profile; such a situation would give a passenger an uneasy feeling. The system reaches 63% of the final value at approximately 0.11 s.

Figure 4. Model Response to a Step External Disturbance

Analysis in the Frequency Domain. The system's transfer function (TF) was calculated from Newton's second law, as shown in (1) and (2). They can be rewritten as (3) and (4). Applying the Laplace transform to (3) and (4), (5) and (6) were obtained. For the suspension, the output is the car position xs and the input is the road profile xr, and remembering that the system's transfer
function (TF) is the quotient of output over input [6], an already simplified TF is shown in (7). Replacing numerical values and simplifying, (7) becomes (8). Equation (8) is a TF of 4th order with 4 complex roots in the denominator. Figure 5 shows the response via the TF to a 10 cm bump. The system has an overshoot of 17.6 cm and reaches 63% of the final value (6.3 cm) in 0.3 s.

Figure 5. Model Response to a 10 cm bump

Bode, Nyquist, and root locus plots for the open loop were obtained using MATLAB. The Bode plot is shown in Figure 6. It shows a first natural frequency of 2.223 Hz, corresponding to the natural resonant frequency of the suspension, and a second natural resonant frequency of 9.42 Hz, corresponding to the tire; the system attenuates the input signal, the bandwidth frequency is 56 rad/s, the DC gain is 41 dB, the phase margin is 32°, and the roll-off is 0.5786 dB. Figure 7 shows the Nyquist diagram, which depicts a real-axis crossing at 0.0003, leaving good room for modifying the system. Using trial and error, a gain of K = 166 was found that made G(s)H(s) = 1, where G(s) is the system and H(s) is the controller. The actual point on the Nyquist plot was (-0.994, 0), as shown in Figure 8(a). Checking the response in Bode (Figure 8(b)), the gain at -180° is almost 0 dB. Therefore K = 166 fulfills the magnitude and angle conditions.

Figure 6. Bode Diagram for the system
Figure 7. Nyquist Diagram for the system

The zoom of the root locus diagram shown in Figure 9 displays the four roots of the TF denominator in open loop, which are -1.166 ± 15.076i and -0.34 ± 3.56i. Because all the poles have negative real parts, it can be said that the system is stable [6].

Figure 8. (a) Nyquist Diagram with K; (b) Bode with K
Figure 9. Zoom of the root locus Diagram for the system

State-space Analysis. Because there are four energy accumulators in the suspension (ms, mt, ks, and kt), the system has four state variables. These are the variables that describe
These are the variables that describe the system's energy [7]; the first two represent potential energy and the last two kinetic energy:

X1 = Xs, vertical displacement of the car (ms)
X2 = Xt, vertical displacement of the tire (mt)
X3 = Ẋs, vertical speed of the car
X4 = Ẋt, vertical speed of the tire

It is worth saying that Xr cannot be a state variable, because it does not directly affect the output xs. Recalling the standard state-space equations, and because the output Xs is already known, Y must equal C·x, making D = 0 and C = [1 0 0 0]. Differentiating the proposed state variables, equations (10) are obtained; (10) can then be rewritten in the matrix form of (9), as shown in (11) and (12). Replacing numeric values, (12) becomes (13).

The eigenvalues of matrix A indicate the type of system response. If all have negative real part, the system is stable. If any has positive real part, the system is unstable and the response grows without limit as time goes on. If all eigenvalues are purely real, the response is exponential; if at least two eigenvalues form a complex conjugate pair, the response oscillates. The eigenvalues found for matrix A are -10.3139 ± 64.1988i and -1.6861 ± 8.1326i. They all have negative real component; therefore the system is stable. The response of the system in open loop modelled in state space is shown in Figure 10; it is almost identical to the response obtained when the system was modelled via the TF (Figure 5).

Figure 10. State-space response to a step disturbance.

Observability and Controllability. According to the Popov-Belevitch-Hautus test [7], a system is controllable if the matrix V = [B AB A²B A³B … Aⁿ⁻¹B] has rank n. For this case, matrix V has rank 4, equal to the order of A; therefore the system is controllable. Likewise, a system is observable if S = [C; CA; CA²; CA³; … CAⁿ⁻¹] has rank n. Here matrix S has rank 4, equal to the order of A; therefore the system is observable.

Gain Matrix K. Recalling that the system poles are the eigenvalues of A, a matrix K is sought that modifies the input.
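The rank tests can be reproduced with a few lines of NumPy. The matrices below follow the conventional quarter-car state ordering (xs, xt, ẋs, ẋt) with the actuator force acting between the two masses; this is a sketch consistent with the parameters above, not the paper's exact listing:

```python
import numpy as np

ms, Ks, bs, mt, Kt = 250.0, 18600.0, 1000.0, 50.0, 196000.0

A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [-Ks/ms,          Ks/ms,  -bs/ms,  bs/ms],
              [ Ks/mt, -(Ks + Kt)/mt,   bs/mt, -bs/mt]])
B = np.array([[0.0], [0.0], [1/ms], [-1/mt]])   # actuator force Fa
C = np.array([[1.0, 0.0, 0.0, 0.0]])            # output xs

n = A.shape[0]
V = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])   # [B AB A^2B A^3B]
S = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])   # [C; CA; CA^2; CA^3]
print("rank V:", np.linalg.matrix_rank(V), "rank S:", np.linalg.matrix_rank(S))
```

Both ranks should equal the system order n = 4, and the eigenvalues of A should have negative real parts, as the paper reports.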
Modifying the input shifts the eigenvalues, which in turn changes the system's behaviour. The response of a system with gain is given by (14). A new matrix Acl can be defined as in (15), leaving the response of the system as in (16). With K = [k1 k2 k3 k4], the determinant of the new Acl defined in (15) is compared term by term against the desired characteristic polynomial; simplifying coefficients and writing them in extended matrix form yields a linear system whose solution gives the values of K. The response of the system with gain matrix K is shown in Figure 11 for a 10 cm step disturbance. One can see the peak has lowered to an acceptable value of 12 cm, as opposed to Figure 10, where the peak reaches 16 cm.

Figure 11. System response with gain matrix.

PID. The Ziegler-Nichols method was used to find the PID constants. From the step response the following values were obtained: ks = 0.063, Tu = 0.001 and Tr = 0.0631. Table 1 shows the resulting constants for P, PI and PID control.

Table 1. Values for P, PI and PID controllers.

However, when modelling the step response, none of these sets of values gave an acceptable response, making the system unstable or amplifying the response. Changes based on the criteria of Table 2, which shows recommended changes to PID values to enhance the system response [8], were tried as well, also with negative results.

Table 2. Recommended actions to modify PID values.

The PID tuner in Simulink (Figure 12) was then used, obtaining Kp = 2000, Ki = 14009577 and Kd = 0.05. The response with these values is shown in Figure 13, where one can see the overshoot decreased to about 9 mm despite the noisy step input.

Figure 12. Suspension in closed loop modelled in Simulink.
Figure 13. Response of the closed loop with PID via Simulink.

With Simulink's linearization tool, Bode, Nyquist and root locus diagrams were obtained, as shown in Figures 14, 15 and 16, respectively. In the Bode diagram one can see that the natural frequencies did not change. In the Nyquist diagram the system runs clockwise and therefore remains stable.
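The coefficient-matching step can be automated: for a given set of target poles, `scipy.signal.place_poles` returns the gain row vector K directly. The sketch below (added here as an illustration; the paper's own K values were not recovered from the source) uses the conventional quarter-car matrices built from the parameters above and illustrative target poles:

```python
import numpy as np
from scipy.signal import place_poles

ms, Ks, bs, mt, Kt = 250.0, 18600.0, 1000.0, 50.0, 196000.0
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [-Ks/ms,          Ks/ms,  -bs/ms,  bs/ms],
              [ Ks/mt, -(Ks + Kt)/mt,   bs/mt, -bs/mt]])
B = np.array([[0.0], [0.0], [1/ms], [-1/mt]])

# Illustrative targets: faster, better-damped versions of the open-loop poles
target = np.array([-5 + 8j, -5 - 8j, -30 + 60j, -30 - 60j])
K = place_poles(A, B, target).gain_matrix

closed = np.linalg.eigvals(A - B @ K)
print(np.sort_complex(closed))
```

The closed-loop eigenvalues of A - BK should match the requested pole set, which is exactly the condition the paper enforces by hand through the determinant comparison.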
In the root locus diagram the poles did not change when compared with the values in Figure 9.

Figure 14. Bode diagram for the system with PID.
Figure 15. Nyquist diagram for the system with PID.
Figure 16. Root locus diagram for the system with PID.

Modelling the PID in Simulink. With the suspension modelled in blocks, the selected type of controller was a PID. From Figures 4 and 5 it can be seen that the measured outputs are xs (position of ms) and xt (position of mt); the measured input is xr. The PID block is fed by the error. A PID controller has three independent parameters, which can be interpreted in terms of time: Kp depends on the present error, Ki on the accumulation of past errors, and Kd is a forecast of the error based on its current rate of change [4]. A PID controller drives the output variable to a desired target using the control law built as the block shown in (17):

C(s) = P + I·(1/s) + D·N/(1 + N·(1/s))    (17)

Constants for the PID controller were taken from [1]: Kp = 7955, Ki = 500 and Kd = 0.001. Four events were tested on the block model built in Simulink: a sinusoid, a step, a saw wave and a ramp. The system's responses are shown in Figures 17 to 19 for the sinusoidal, step and saw disturbances, respectively. The sinusoidal signal represents an uneven road. The step signal aims to reproduce a speed bump. The saw signal had a period of 2 s and an amplitude of 0.05 m, and simulates an unpaved road. All of the disturbances started at time zero except the ramp; this was done to check the system's response to a disturbance after it has stabilized.

Figure 17. Response with a sinusoidal disturbance.

The sinusoidal excitation reaches 63% of its stable value at approximately 0.27 s, whereas the step does so at 0.14 s and the ramp at 0.24 s. Because the saw disturbance is continuous over time, the system does not have time to reach a stable value. Even so, in all cases the system hovers around a stable value or range.
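The behaviour of such a loop can be sketched with a plain discrete-time PID outside Simulink. The gains below are illustrative placeholders chosen for this sketch, not the tuned values quoted above, and the plant is the same two-mass model with the parameters from the time-domain section:

```python
import numpy as np

ms, Ks, bs, mt, Kt = 250.0, 18600.0, 1000.0, 50.0, 196000.0
Kp, Ki, Kd = 2000.0, 1000.0, 50.0      # illustrative gains (assumption of this sketch)

dt, T = 1e-4, 5.0
xs = xt = vs = vt = 0.0
integ, prev_err = 0.0, 0.0
xr = 0.10                               # 10 cm step road disturbance
peak = 0.0

for _ in range(int(T / dt)):
    err = 0.0 - xs                      # regulate the body position toward zero
    integ += err * dt
    Fa = Kp*err + Ki*integ + Kd*(err - prev_err)/dt
    prev_err = err
    acc_s = (-Ks*(xs - xt) - bs*(vs - vt) + Fa) / ms
    acc_t = (Ks*(xs - xt) + bs*(vs - vt) + Kt*(xr - xt) - Fa) / mt
    # explicit Euler update of the four states
    xs += vs*dt; xt += vt*dt; vs += acc_s*dt; vt += acc_t*dt
    peak = max(peak, abs(xs))

print("peak |xs|:", peak, "xs at t = 5 s:", xs)
```

With these modest gains the loop stays stable and bounded; the integral action slowly pulls the body position back toward zero after the bump, mirroring the qualitative behaviour reported for the Simulink model.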
Figure 18. Response with a step disturbance.
Figure 19. Response with a saw disturbance.
Figure 20. Response with a ramp disturbance.

CONCLUDING REMARKS

A partition of the car was made in order to simplify the modelling. A disadvantage of doing so is that rotation between axes is ignored, which would introduce two extra degrees of freedom in the suspension: one rotation between connected wheels and another between the front and rear axles. Further work ought to introduce such rotations in order to reproduce a car's suspension more accurately.

The proposed model reproduces the behaviour of the selected problem modelled in the time, state-space and frequency domains. Tests were performed to check the model's stability — gain, eigenvalues, roots, observability and controllability — all of them favourable. The system was stable before and after the control action, showing acceptable agreement with results from [2]. Following the example of evaluating the response to several external disturbances, as presented in [11], a couple of them were modelled as signals representing bumps or gravel. The system's response showed consistent behaviour in form, although with different values, for all of them.

The selected control method creates a fast and prompt response to the system's disturbances, lowering the amplitude of oscillation and therefore creating a more comfortable ride for the passenger. For the block model, the PID parameters were adjusted by fine-tuning. In the end the error response hovered around 20%; however, ms (1/4 of the mass of the car) barely moved. This would be acceptable, since that is the movement that a passenger and the car's structure would actually feel. The ramp disturbance showed that, after stabilization, the controller does not allow the system to deviate from the established error. Without the use of advanced control techniques such as fuzzy control [12], a reduction of vibration was accomplished.
Nevertheless, this paper produced a model that was tested, compared and validated under different methods for future use in implementing new control methods. Some of the mentioned methods, such as fuzzy control, are already implemented in MATLAB's control toolbox.

REFERENCES
[1] M. Senthil Kumar, "Development of Active Suspension System for Automobiles using PID Controller," Proceedings of the World Congress on Engineering 2008 (WCE 2008), London.
[2] M. J. Crosby and D. C. Karnopp, "The Active Damper: A New Concept for Shock and Vibration Control," Shock and Vibration Bulletin, 43(4), 1973, pp. 119-133.
[3] R. Rajamani and J. K. Hedrick, "Performance of Active Automotive Suspensions with Hydraulic Actuators: Theory and Experiment," Proceedings of ACC, June 1994.
[4] R. Dorf and R. Bishop, Sistemas de Control Moderno, Ch. 12, 10th ed., Pearson, Madrid, 2005.
[5] MathWorks, MATLAB Simulink User's Guide, R2013b.
[6] K. Ogata, Modern Control Engineering, 5th ed., Pearson, 2009.
[7] B. Friedland, Control System Design: An Introduction to State-Space Methods, Dover.
[8] MIT OpenCourseWare, Feedback Control Systems (16.30), Fall 2010. Last accessed August 20, 2015. http://ocw.mit.edu/courses/aeronautics-and-astronautics/16-30-feedback-control-systems-fall-2010/index.htm
[9] Flores E., Laguado I., "Frequencies and Natural Modes of Free Vibration without Damping of a Cantilever Beam," RCTA, Vol. 2, No. 10, 2007.
[10] Camargo J., Camacho F., Fernandez A., "Controller Design for an Inverted Pendulum Using Nonlinear Model," RCTA, Vol. 2, No. 18, 2011.
[11] Higuera O., Salamanca J., "Continuous and Discrete Control Design Based on LMI," RCTA, Vol. 2, No. 18, 2011.
[12] C. H. Valencia, M. Vellasco, R. Tanscheit, K. Figueiredo, "Magnetorheological Damper Control in a Leg Prosthesis," Robot Intelligence Technology and Applications, ISBN 978-3-319-05581-7, Springer, USA, pp. 805-818, 2014.

Physical Model of a Quarter-Car Active Suspension System

Radu Gheorghe Chetan, Roxana Both-Rusu, Eva-H. Dulf, Clement Festila
Department of Automation, Technical University of Cluj-Napoca, Cluj-Napoca, Romania
clement.festila@aut.utcluj.ro

Abstract — The
potential advantages of a modern active suspension system are recognized for race cars, road cars and even mass-produced cars. Besides passenger ride comfort, the control of the vehicle's vertical acceleration directly affects road holding. In passive suspension models, the car structure is optimized at the initial design stage, taking into consideration the spring stiffness, damper coefficient, sprung and unsprung masses, various tire performances, different velocities, etc. The performance of the passive suspension system is limited in the first place by the inherent variation of these parameters. The active suspension system has the advantages of a closed-loop control system, because its actuators are controlled by devices which receive and process direct information from appropriate sensors. To evaluate active suspension performance, several possibilities are known: mathematical (analog or digital) models, the design and implementation of test-setup versions, or physical test models. Physical test models are able to act like the real car structure, but such manufactured equipment is not available on the market. For this reason the authors conceived, designed and built a test-setup version — a physical model — as a robust, simplified solution with only one acceleration (speed) transducer in the control loop. The actuator is an electromagnet operating against an internal spring. The structure comprises a cam driven by a DC motor, which simulates the road conditions, an articulated wheel with tire, a spring and a damper; the car body is simulated by an articulated plate. To test the active suspension performance, two position sensors are attached: one for the wheel and one for the car body. Based on this structure, various modern control strategies can be designed, implemented and tested.

Keywords — active suspension system; quarter-car model; physical model; advanced control strategies

I. INTRODUCTION

The car suspension is a mechanism that sustains the
vehicle weight on the road, diminishes the influence of road irregularities, maintains tire-ground contact, provides sustainable ride comfort for passengers and improves the handling capabilities of the vehicle. In fact, the suspension system physically separates the car body from the wheel assembly. From the practical point of view, the suspension system minimizes the vertical acceleration transmitted to the passengers, which directly provides ride comfort. The stability of the suspension system and the improvement of its performance remain an important challenge today.

Numerous suspension systems are already in production and can be divided into three groups: passive, semi-active and active systems. A fully active system can supply power to the system by means of active force generation. Suspension systems are implemented very differently by the various car manufacturers; the main versions are hydraulic and electric technical solutions, each with important particular details. In a passive suspension system, the car structure design takes into consideration the rated values of the parameters of the suspension components, but these values vary over time and with car velocity and load, so the passive suspension cannot adapt its performance to all road conditions. The semi-active suspension system can adapt the values of the component parameters to different ride conditions.

The most frequent method for the study of suspension performance is the quarter-car model [1], [3], in which the car is divided into four sections, one attached to each wheel. In this case — the quarter car — the quarter of the car chassis is the sprung mass, while the tire with its auxiliary components is the unsprung mass. The suspension system consists of an energy-dissipating element, the damper or shock absorber, and an energy-storing element, the spring. The main versions of the active suspension system are shown in Fig.
1. Main versions of active suspension: (a) parallel active suspension; (b) series active suspension. The coefficients k are the spring constants (stiffness coefficients), c are the damper coefficients, and F is the active force which adapts the behaviour of the suspension system to various conditions.

m₂ẍ₂ − k₁(x₁ − x₂) + k₂(x₂ − w) + m₂g − b₁(ẋ₁ − ẋ₂) + b₂(ẋ₂ − ẇ) = −F    (6)

The mathematical state of a dynamic system is described by its state variables. State variables often relate to quantities that store energy in engineering systems. The state variables define a state space, in which the state vector x(t) is specified; the movement of the system is the displacement of its end point [7]. A state-space representation is a mathematical model of a physical system used in control engineering: a set of input, output and state variables related by first-order differential equations. Four state variables are defined for the system, starting with ẋ₁ = x₃ (7); the remaining state derivatives are given in (8)-(10). In the case of a linear system, the general form [8] of the state-variable equations is ẋ = Ax + Bu (11).

IV. SIMULATION OF THE ACTIVE SUSPENSION SYSTEM

Using the MATLAB/Simulink [5] environment, the authors analyzed the behaviour of a typical active suspension system using equations (6)-(9). The simulation example is based on the following set of values in a per-unit system: ms = 2.5, mu = 0.32, ks = 80, ku = 500, cs = 1. The values were taken from [4] with a scale factor of 1/100. The step responses of Z(t), ZS(t) and ZU(t) for Fa = 10·1(t) and Zr = 2·1(t), where 1(t) denotes the unit step, are given in Fig. 4; the oscillatory character is expected [4].

Fig. 4. Step responses for inputs Zr and Fa.

Fig. 5 depicts the variation of the distance Z = ZS − ZU for periodic road shocks.

Fig. 5. Evolution of the distance Z = ZS − ZU for periodic road irregularities.

If the coordinate ZU is chosen as
reference, the same evolution of Z = ZS − ZU is given in Fig. 6.

Fig. 6. Evolution of the distance Z = ZS − ZU for periodic road shocks.

This mode of presentation of the simulation results, in negative form, will be useful in the analysis of the physical model (laboratory test bench). The control principle given in Fig. 3 is used in the simulation scheme, the results of the simulation being given in Fig. 7.

Fig. 7. Open-loop and closed-loop suspension behaviour.

V. PHYSICAL MODEL OF THE ACTIVE SUSPENSION CAR SYSTEM

The scheme of the physical model is given in Fig. 8. A cam driven by a DC motor M simulates the road shocks. The auxiliary electric position transducers, PT1 for the wheel and PT2 for the car body, are used only to estimate the efficiency of the control algorithms. They are simple transformers with variable air gap; the AC output voltage is rectified and filtered. The main transducer is the speed transducer, based on a permanent magnet in an undamped mechanical arrangement and a coil. The signal given by this transducer activates the electromagnetic actuator.

Fig. 8. Simplified scheme of the active suspension physical model.

In order to compare the simulation results with the signals given by the transducers, Fig. 9 presents the evolution of the distance Z(t) from the transducer and from Simulink. The signals have practically the same evolution.

Fig. 9. Comparison for the distance Z(t): (a) from the physical model; (b) from Simulink.

Some differences between the results obtained from the physical model (Fig. 9a) and the simulated values (Fig. 9b) are caused by the noise level of the acceleration transducer and, additionally, by the input converter of the oscilloscope. Fig. 10 shows the performance of the closed-loop active suspension system compared with the same evolution Z(t) in the open-loop structure. The motion of the chassis is diminished but not rejected, calling for a more powerful control algorithm.

Fig. 10. Active suspension performance: (a) in open loop; (b) in closed loop.

VI. CONCLUSIONS

The control of suspensions is a difficult control problem
due to the complicated relationships between its components and parameters. The most appropriate way to test such a control system is the use of a physical model. Unfortunately, no equipment able to act like the real car structure is available on the market. For this reason the authors conceived, designed and built a test-setup version — a physical model — as a robust, simplified solution with only one acceleration (speed) transducer in the control loop. To prove the efficiency of the proposed equipment, the simulation results of the active car suspension models are compared with the signals given by the transducers. Based on this structure, various modern control strategies can be implemented and tested.

REFERENCES
[1] Quanser, "Innovate, educate: Active Suspension," www.quanser.com, 2016.
[2] N. M. Ghazaly, A. O. Moaaz, "The Future Development and Analysis of Vehicle Active Suspension System," IOSR Journal of Mechanical and Civil Engineering (IOSR-JMCE), vol. 11, 2014, pp. 19-25.
[3] T. P. Phalke, A. C. Mitra, "Design and Analysis of Vehicle Suspension System," International Engineering Research Journal, 2011, pp. 165-172.
[4] T. P. J. van der Sande, "Control of an automotive electromagnetic suspension system," Master's Thesis, Eindhoven University of Technology, Department of Mechanical Engineering, 2011.
[5] MathWorks, "Active Suspension Control Design," www.mathworks.com, 2016.
[6] Abd El-Nasser S. Ahmed, Ahmed S. Ali, Nouby M. Ghazaly, G. T. Abd el-Jaber, "PID Controller of active suspension system for a quarter car model," International Journal of Advances in Engineering & Technology, Dec. 2015.
[7] R. Rosli, M. Mailah, G. Priyandoko, "Active Suspension System for Passenger Vehicle using Active Force Control with Iterative Learning Algorithm," WSEAS Transactions on Systems and Control, 2014.
[8] J. Fang, "Active Suspension System of Quarter Car," Master's Thesis, University of Florida, 2014.
[9] R. Rusu-Both, E. H. Dulf, "Auto-tuning Fractional Order Control of a Laboratory Scale Equipment," 2016 International Conference on Mechatronics, Control and Automation Engineering, 2016, DOI:
10.2991/mcae-16.2016.12.

Acta Technica Jaurinensis, Vol. 12, No. 3, pp. 178-190, 2019. DOI: 10.14513/actatechjaur.v12.n3.502. Available online at acta.sze.hu.

State Feedback Controller Design of an Active Suspension System for Vehicles Using Pole Placement Technique

A. Weber, M. Kuczmann
Szechenyi Istvan University, Department of Automation, Egyetem ter 1, 9026 Gyor
Email: mandycandyart@gmail.com

Abstract: The paper presents a method for designing a state feedback controller for the active suspension system of a quarter-car model. This is a survey based on a specific example. The designed controller of the active suspension system improves driving control, safety and stability, because the periodic swinging motion generated on the wheels by road irregularities during the ride can be decreased. This periodic motion damages driving comfort and may cause traffic accidents. The state feedback controller is designed to withstand road-induced displacements. Computer simulations of the designed controller have been performed in Scilab and Xcos.

Keywords: state feedback; pole placement; active suspension system

1. Introduction

Much research on active suspension systems has been presented in recent years, leading to more sophisticated regulatory approaches such as linear fuzzy control [1], PID controllers [2] and nonlinear control systems — artificial neural network controllers [3]. The active suspension system is a mechatronic suspension [4] and is important for improving ride comfort. Because of the adverse impacts caused by road imbalances, the wheel can lose contact with the road and can no longer deliver force, and therefore the driving of the vehicle becomes uncertain. The periodic swinging motion can damage driving comfort, the car parts and the cargo, and this motion can cause health damage too. The primary purpose of the active suspension system is to minimize the vertical displacement of the vehicle and guarantee road
holding. For modeling and simulation, a quarter-car model has been chosen (see Fig. 1).

Figure 1. Quarter-car model.

2. Mathematical modeling

2.1 The quarter-car model

Dynamic systems appear in several scientific and engineering branches and are modeled by state equations. Using differential equations, the operation of complicated dynamic systems can be modeled with relatively high precision. For defining the state variables of a quarter-car model, the Euler-Lagrange equation is used [5]:

d/dt(∂K/∂ẋ) − ∂K/∂x + ∂P/∂x + ∂R/∂ẋ = F    (1)

The equation is described by the kinetic energy K, the potential energy P and the Rayleigh dissipation function R as follows [6]:

K = ½m₁ẋ₁² + ½m₂ẋ₂²    (2)
P = ½k₁(x₁ − x₂)² + m₁gx₁ + ½k₂(x₂ − w)² + m₂gx₂    (3)
R = ½b₁(ẋ₁ − ẋ₂)² + ½b₂(ẋ₂ − ẇ)²    (4)

where m₁ is the sprung mass, m₂ the unsprung mass, k₁ the suspension stiffness, k₂ the tire stiffness, b₁ and b₂ the damping coefficients, F the action (control) force, x₁ the car body displacement, x₂ the wheel displacement, and w the road-induced displacement.

2.2 State-space representation of a quarter-car model

After obtaining the partial derivatives and substituting them into the Euler-Lagrange equation, the following equations are obtained:

m₁ẍ₁ + k₁(x₁ − x₂) + m₁g + b₁(ẋ₁ − ẋ₂) = F    (5)

II. ACTIVE SUSPENSION MATHEMATICAL MODEL

The simplified quarter-car model is given in Fig. 2, where ms is the sprung mass, mu is the unsprung mass, ks is the spring constant, kt is the tire elastic coefficient and Fa is the active force.

Fig. 2. Quarter-car parallel active suspension model.

Here Zr is the road surface unevenness relative to the horizontal ground, ZS is the chassis displacement relative to the plain ground, and ZU is the displacement of the wheel relative to the plain ground. The relative position chassis-wheel is Z = ZS − ZU. Relative to the sprung mass, the forces are the inertia msZ̈s, the viscous friction force of the shock absorber cs(Żs − Żu), the elastic spring force ks(Zs − Zu), the road action through the tire kt(Zu − Zr), and the active force Fa — in our case an electromagnetic
force — so that

msZ̈s + cs(Żs − Żu) + ks(Zs − Zu) − Fa = 0    (1)

For the unsprung mass the following equation is valid:

muZ̈u + cs(Żu − Żs) + ks(Zu − Zs) + kt(Zu − Zr) + Fa = 0    (2)

The quarter-car and wheel accelerations are

Z̈s = [−cs(Żs − Żu) − ks(Zs − Zu) + Fa]/ms    (3)
Z̈u = [−cs(Żu − Żs) − ks(Zu − Zs) − kt(Zu − Zr) − Fa]/mu    (4)

Applying the Laplace transformation with zero initial conditions results in

(ms s² + cs s + ks)ZS − (cs s + ks)ZU − Fa = 0
(mu s² + cs s + ks + kt)ZU − (cs s + ks)ZS − kt ZR + Fa = 0    (5)

The equivalent matrix equation is

[ ms s² + cs s + ks      −(cs s + ks)          ] [ZS]   [ Fa        ]
[ −(cs s + ks)           mu s² + cs s + ks + kt ] [ZU] = [ kt ZR − Fa ]    (6)

It is useful to solve the previous system for ZS(s) and ZU(s). In our application the distance Z = ZS − ZU is important, so that

Z(s)/Fa(s) = [(ms + mu)s² + kt] / Δ(s)    (7)

and

Z(s)/Zr(s) = −ms kt s² / Δ(s)    (8)

where

Δ(s) = (ms s² + cs s + ks)(mu s² + cs s + ks + kt) − (cs s + ks)²    (9)

The behaviour of the open-loop quarter-car active suspension may be described by equations (7)-(9).

III. PRINCIPLE OF THE ACTIVE SUSPENSION CONTROL

Numerous methods from control system theory [6]-[9] are known in the literature to have been applied to active suspension control. Conventional PID control is applied in [4], where linear quadratic and robust control methods are used for comparison; fuzzy and sliding-mode control are used in [2]; genetic algorithm optimization techniques for designing an active suspension are analyzed in [2]; and methods based on the Kalman filter are applied in [7] and [8]. The authors chose another solution, based on a simple discontinuous algorithm that is easy to implement. The system structure is depicted in Fig. 3. An inductive permanent-magnet transducer directly measures the speed of the chassis-wheel movement,

Ż = Żs − Żu    (10)

Fig. 3. Active suspension control principle (didactic equipment for the teaching process).

If the uneven road exhibits a periodic disturbance, the trigger T gives an output signal when Ż exceeds a threshold chosen by mechanism tuning. The voltage signal ε is amplified and drives the winding of the electromagnet used as actuator; the width of the signal ε determines the mean value of the active force Fa. Note that this system may be applied only in a discontinuous manner, because the scheme presented in Fig. 3, as implemented in the actual test model, operates with positive feedback.
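Equations (6)-(9) can be verified symbolically. The SymPy sketch below (added here as a check, not part of the paper) solves the matrix equation (6) and confirms the two transfer functions (7) and (8):

```python
import sympy as sp

s, ms, mu, cs, ks, kt, Zr, Fa = sp.symbols('s m_s m_u c_s k_s k_t Z_r F_a')

# Coefficient matrix and right-hand side of Eq. (6)
M = sp.Matrix([[ms*s**2 + cs*s + ks, -(cs*s + ks)],
               [-(cs*s + ks), mu*s**2 + cs*s + ks + kt]])
rhs = sp.Matrix([Fa, kt*Zr - Fa])

ZS, ZU = M.solve(rhs)
Delta = M.det()                               # Eq. (9)
Z = ZS - ZU

Z_over_Fa = sp.cancel((Z / Fa).subs(Zr, 0))   # should reproduce Eq. (7)
Z_over_Zr = sp.cancel((Z / Zr).subs(Fa, 0))   # should reproduce Eq. (8)
print(Z_over_Fa)
print(Z_over_Zr)
```

Both ratios reduce exactly to the closed forms quoted in (7) and (8) over the determinant Δ(s) of (9).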
For real-car applications, intricate actuators are used, such as hydraulic and electro-hydraulic [2], mechanical [4] and electric solutions based on linear induction motors [4]. The solution given by the authors must be cheap and simple.

y = Cx + Du

Here x is the state vector, u and y are the column vectors containing the excitations and responses, A is the system matrix, and B, C and D are matrices containing the appropriate coefficients [9]. The state-space representation of the quarter-car model is written as

ẋ = Ax + B₁F + B₂w + B₃    (21)

where A collects the mass, stiffness and damping ratios, B₁ and B₂ are the input columns for the control force F and the road displacement w, and B₃ is a constant column carrying the gravitational terms. The performance parameters of the vehicle are given in Table 1 [6]. After substituting the values, the state-space representation becomes

     [ 0            0           1           0         ]     [ 0          0          ]     [ 0     ]
ẋ =  [ 12.5        −40.5        0           1         ] x + [ 0          28         ] [F] + [ 0     ]    (22)
     [ −59.482759   112.06897   1.7241379   1.7241379 ]     [ 0.0034483  48.275862 ] [w]   [ −9.81 ]
     [ 587.5       −5337.5      0           0         ]     [ 0.025      4750      ]      [ −9.81 ]

The output variable of the quarter-car model is

y = [1 0 0 0] x    (23)

Table 1. Parameters
m₁ = 290 kg; m₂ = 40 kg; k₁ = 23500 N/m; k₂ = 190000 N/m; b₁ = 500 N·s/m; b₂ = 1220 N·s/m; g = 9.81 m/s².

3. Simulations of the quarter-car model

3.1 Full state feedback

Controllability is an important property of a controlled plant. The system can be controlled when the rank of the controllability matrix Mc is maximal, i.e. the matrix is invertible (the determinant of the matrix is not zero) [10]. Kalman's controllability matrix, for n = 4, is Mc = [b Ab A²b A³b]:

     [ 0          0.0034483  0.0490488  0.4007186 ]
Mc = [ 0          0.025      1.0556034  9.2098313 ]
     [ 0.0034483  0.0490488  0.4007186  2.4899601 ]
     [ 0.025      0          1.3546336  5.6630995 ]
placement is used for the state feedback Ackermanns formula is a control design method for solving the pole allocation problem Figure 2 State feedback in continuous time The task is to move the systems egienvalues to new places in the closed loop system This is the pole placement which is why the state feedback k is to be determined see Fig 2 10 The polynomial of a closed loop system in general case is λn p1λn1 p2λn2 pn 0 18 When using the pole placement method the eigenvalues are changed as it can be written as φclλ λE A BkT 0 19 The eigenvalues of the original system as follows λ 2040805 70147933i 2040805 70147933i 07040194 84630446i 07040194 84630446i 20 184 The new poles are selected as p 200 30 30 30 and the gain vector has been designed by Ackermans formula k 23776773 22116983 3217344 54733186 Using the pole placement method the new eigeinvalues of the system are as follows λ 200 30001573 29999214 00013619i 29999214 00013619i 32 Simulating the system To realize simulations Scilab program with XCos interface has been used In the simulation two cases have been examined the first when the displacement induced by the road is zero the second when this displacement is 50 mm There simulations are analysed with and without the designed control 321 Modeling without controller If w 0 then the gravitational force is pressed for the car body see Fig 3 and this showed that the system left alone is set to a stationarity state after some swing In case of w 50 mm jump car body displacement is affected by road induced displacement see Fig 4 the system initially leaving it goes out of the steady state for 10 seconds when it reaches a pothole causing mass m1 to swing movement 10 seconds after the transient section becomes steady state It can be seen that this value is 50 mm higher 322 Modeling with controller In case when there is no road induced displacement but there is a controller see Fig 5 it can be seen the swings are eliminated the stationary state is smoother By the A 
Figure 3. w = 0, without controller.
Figure 4. w = 0.05, without controller.

By reason of the design of the controller, the damping force is more effective in the transient phase. This situation is more favourable for car drivers, travellers and cargoes.

Figure 5. w = 0, with controller.
Figure 6. w = 0.05, with controller.

The effect of the 50 mm road-induced displacement is visible in Fig. 6: the swinging motions disappear 10 seconds after reaching the pothole, and after the jump the stationary state is reached without swinging. The state-space equation (22), without road-induced displacement and with state feedback (see Fig. 7), becomes

ẋ = Ax + B₁u + B₂w + B₃·1    (24)

Figure 7. State-space equation model in Xcos.
Figure 8. State feedback model in Xcos.

The state feedback model, where k is the gain factor, is shown in Fig. 8. The road-induced displacement and the gravitational acceleration act on the system; there is no reference signal.

4. Conclusion

The design of the active suspension system of a quarter-car model produced different results as the road-induced displacement was changed. According to the simulation results, the model has much better characteristics with the designed controller: the swinging motions are gone and the stationary state is entered quickly, which favours the driver and the passengers and avoids cargo damage.

References

[1] A. Hofmann, M. Hanss, "Fuzzy arithmetical controller design for active road vehicle suspension in the presence of uncertainties," 2017 22nd International Conference on Methods and Models in Automation and Robotics (MMAR), 2017, pp. 582-587. doi:10.1109/MMAR.2017.8046893
[2] L. Bao, S. Chen, S. Yu, "Research on active fault-tolerant control on active suspension of vehicle based on fuzzy PID
control," Chinese Automation Congress (CAC), 2017, pp. 5911-5916. doi:10.1109/CAC.2017.8243840
[3] V. Vidya, M. Dharmana, "Model reference based intelligent control of an active suspension system for vehicles," International Conference on Circuits, Power and Computing Technologies (ICCPCT), 2017, pp. 1-5. doi:10.1109/ICCPCT.2017.8074362
[4] L. R. Miller, "Tuning passive, semi-active and fully active suspension systems," Proceedings of the 27th IEEE Conference on Decision and Control, 1988, pp. 2047-2053. doi:10.1109/CDC.1988.194694
[5] B. Lantos, Control Systems Theory and Design II, 1st edition, Akademiai Kiado, Budapest, 2003.
[6] J. Bokor, P. Gaspar, "State-space representation," in L. Nadai (Ed.), Control Technology with Vehicle Dynamics Applications, 1st edition, Typotex Elektronikus Kiado Kft., Budapest, 2008, p. 125.
[7] L. Keviczky, R. Bars, H. J. A. Barta, C. Banyasz, Control Engineering, 1st edition, Universitas-Gyor Kht., Gyor, 2006.
[8] J. Bokor, P. Gaspar, A. Soumelidis, Control Engineering II, 1st edition, Typotex Elektronikus Kiado Kft., Budapest, 2011.
[9] M. Kuczmann, Signals and Systems, 1st edition, Universitas-Gyor Kht., Gyor, 2005.
[10] B. Lantos, Control Systems Theory and Design I, 1st edition, Akademiai Kiado, Budapest, 2000.

Suspension: System Modeling

Key MATLAB commands used in this tutorial are: ss, step.

Contents: Physical setup; System parameters; Equations of motion; Transfer function models; Entering equations in MATLAB.

Physical setup

Designing an automotive suspension system is an interesting and challenging control problem. When the suspension system is designed, a 1/4 model (one of the four wheels) is used to simplify the problem to a 1-D multiple spring-damper system. A diagram of this system is shown below. This model is for an active suspension system where an actuator is included that is able to generate the control force U to control the motion of the bus body.

Model of
Bus Suspension System 14 Bus System parameters M1 14 bus body mass 2500 kg M2 suspension mass 320 kg K1 spring constant of suspension system 80000 Nm K2 spring constant of wheel and tire 500000 Nm b1 damping constant of suspension system 350 Nsm b2 damping constant of wheel and tire 15000 Nsm U control force Equations of motion From the picture above and Newtons law we can obtain the dynamic equations as the following Transfer function models Assume that all of the initial conditions are zero so that these equations represent the situation where the vehicle wheel goes up a bump The dynamic equations above can be expressed in the form of transfer functions by taking the Laplace Transform The specific derivation from the above equations to the transfer functions G1s and G2s is shown below where each transfer function has an output of X1X2 and inputs of U and W respectively or Find the inverse of matrix A and then multiply with inputs Usand Ws on the righthand side as follows When we want to consider the control input Us only we set Ws 0 Thus we get the transfer function G1s as in the following When we want to consider the disturbance input Ws only we set Us 0 Thus we get the transfer function G2s as in the following Entering equations in MATLAB We can generate the above transfer function models in MATLAB by entering the following commands in the MATLAB command window M1 2500 M2 320 K1 80000 K2 500000 b1 350 b2 15020 s tfs G1 M1M2s2b2sK2M1s2b1sK1M2s2b1b2sK1K2b1sK1b1sK1 G2 M1b2s3M1K2s2M1s2b1sK1M2s2b1b2sK1K2b1sK1b1sK1 Published with MATLAB 92 All contents licensed under a Creative Commons AttributionShareAlike 40 International License 30 Feedback Systems An Introduction for Scientists and Engineers Karl Johan Åström Richard M Murray 31 Feedback Systems This page intentionally left blank Feedback Systems An Introduction for Scientists and Engineers Karl Johan Aström Richard M Murray PRINCETON UNIVERSITY PRESS PRINCETON AND OXFORD Copyright 2008 by Princeton University 
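As a cross-check of the MATLAB model above, the same denominator polynomial can be assembled in Python with NumPy (our choice of tool, not the tutorial's) to confirm that the open-loop quarter-car model is stable, i.e., all of its poles lie in the left half-plane:

```python
import numpy as np

# Quarter-car suspension parameters from the tutorial above.
M1, M2 = 2500.0, 320.0        # body mass, suspension mass (kg)
K1, K2 = 80000.0, 500000.0    # suspension and tire spring constants (N/m)
b1, b2 = 350.0, 15020.0       # suspension and tire damping (N*s/m)

# Denominator of G1 and G2:
# (M1 s^2 + b1 s + K1)(M2 s^2 + (b1+b2) s + (K1+K2)) - (b1 s + K1)^2
den = np.polysub(
    np.polymul([M1, b1, K1], [M2, b1 + b2, K1 + K2]),
    np.polymul([b1, K1], [b1, K1]),
)
# Numerator of G1(s) = (X1 - X2)/U: (M1 + M2) s^2 + b2 s + K2
num = np.array([M1 + M2, b2, K2])

poles = np.roots(den)
print(max(poles.real))  # negative, so the open-loop model is stable
```

The poles turn out to be lightly damped, which is why the tutorial's later sections design a controller to suppress the oscillation after a road disturbance.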
Press. Published by Princeton University Press, 41 William Street, Princeton, New Jersey 08540. In the United Kingdom: Princeton University Press, 6 Oxford Street, Woodstock, Oxfordshire OX20 1TW. All Rights Reserved.

Library of Congress Cataloging-in-Publication Data: Åström, Karl J. (Karl Johan), 1934-. Feedback systems: an introduction for scientists and engineers / Karl Johan Åström and Richard M. Murray. p. cm. Includes bibliographical references and index. ISBN-13: 978-0-691-13576-2 (alk. paper); ISBN-10: 0-691-13576-2 (alk. paper). 1. Feedback control systems. I. Murray, Richard M., 1963-. II. Title. TJ216.A78 2008 629.8'3-dc22 2007061033. British Library Cataloging-in-Publication Data is available.

This book has been composed in LaTeX. The publisher would like to acknowledge the authors of this volume for providing the camera-ready copy from which this book was printed. Printed on acid-free paper. press.princeton.edu. Printed in the United States of America.

Contents

Preface
Chapter 1. Introduction: 1.1 What Is Feedback?; 1.2 What Is Control?; 1.3 Feedback Examples; 1.4 Feedback Properties; 1.5 Simple Forms of Feedback; 1.6 Further Reading; Exercises
Chapter 2. System Modeling: 2.1 Modeling Concepts; 2.2 State Space Models; 2.3 Modeling Methodology; 2.4 Modeling Examples; 2.5 Further Reading; Exercises
Chapter 3. Examples: 3.1 Cruise Control; 3.2 Bicycle Dynamics; 3.3 Operational Amplifier Circuits; 3.4 Computing Systems and Networks; 3.5 Atomic Force Microscopy; 3.6 Drug Administration; 3.7 Population Dynamics; Exercises
Chapter 4. Dynamic Behavior: 4.1 Solving Differential Equations; 4.2 Qualitative Analysis; 4.3 Stability; 4.4 Lyapunov Stability Analysis; 4.5 Parametric and Nonlocal Behavior; 4.6 Further Reading; Exercises
Chapter 5. Linear Systems: 5.1 Basic Definitions; 5.2 The Matrix Exponential; 5.3 Input/Output Response; 5.4 Linearization; 5.5 Further Reading; Exercises
Chapter 6. State Feedback: 6.1 Reachability; 6.2 Stabilization by State Feedback; 6.3 State Feedback Design; 6.4 Integral Action; 6.5 Further Reading; Exercises
Chapter 7. Output Feedback: 7.1 Observability; 7.2 State Estimation; 7.3 Control Using Estimated State; 7.4 Kalman Filtering; 7.5 A General Controller Structure; 7.6 Further Reading; Exercises
Chapter 8. Transfer Functions: 8.1 Frequency Domain Modeling; 8.2 Derivation of the Transfer Function; 8.3 Block Diagrams and Transfer Functions; 8.4 The Bode Plot; 8.5 Laplace Transforms; 8.6 Further Reading; Exercises
Chapter 9. Frequency Domain Analysis: 9.1 The Loop Transfer Function; 9.2 The Nyquist Criterion; 9.3 Stability Margins; 9.4 Bode's Relations and Minimum Phase Systems; 9.5 Generalized Notions of Gain and Phase; 9.6 Further Reading; Exercises
Chapter 10. PID Control: 10.1 Basic Control Functions; 10.2 Simple Controllers for Complex Systems; 10.3 PID Tuning; 10.4 Integrator Windup; 10.5 Implementation; 10.6 Further Reading; Exercises
Chapter 11. Frequency Domain Design: 11.1 Sensitivity Functions; 11.2 Feedforward Design; 11.3 Performance Specifications; 11.4 Feedback Design via Loop Shaping; 11.5 Fundamental Limitations; 11.6 Design Example; 11.7 Further Reading; Exercises
Chapter 12. Robust Performance: 12.1 Modeling Uncertainty; 12.2 Stability in the Presence of Uncertainty; 12.3 Performance in the Presence of Uncertainty; 12.4 Robust Pole Placement; 12.5 Design for Robust Performance; 12.6 Further Reading; Exercises
Bibliography
Index

Preface

This book provides an introduction to the basic principles and tools for the design and analysis of feedback systems. It is intended to serve a diverse audience of scientists and engineers who are interested in understanding and utilizing feedback in physical, biological, information and social systems. We have attempted to keep the mathematical
prerequisites to a minimum while being careful not to sacrifice rigor in the process. We have also attempted to make use of examples from a variety of disciplines, illustrating the generality of many of the tools while at the same time showing how they can be applied in specific application domains.

A major goal of this book is to present a concise and insightful view of the current knowledge in feedback and control systems. The field of control started by teaching everything that was known at the time and, as new knowledge was acquired, additional courses were developed to cover new techniques. A consequence of this evolution is that introductory courses have remained the same for many years, and it is often necessary to take many individual courses in order to obtain a good perspective on the field. In developing this book, we have attempted to condense the current knowledge by emphasizing fundamental concepts. We believe that it is important to understand why feedback is useful, to know the language and basic mathematics of control and to grasp the key paradigms that have been developed over the past half century. It is also important to be able to solve simple feedback problems using back-of-the-envelope techniques, to recognize fundamental limitations and difficult control problems and to have a feel for available design methods.

This book was originally developed for use in an experimental course at Caltech involving students from a wide set of backgrounds. The course was offered to undergraduates at the junior and senior levels in traditional engineering disciplines, as well as first- and second-year graduate students in engineering and science. This latter group included graduate students in biology, computer science and physics. Over the course of several years, the text has been classroom tested at Caltech and at Lund University, and the feedback from many students and colleagues has been incorporated to help improve the readability and accessibility of the material.

Because of
its intended audience, this book is organized in a slightly unusual fashion compared to many other books on feedback and control. In particular, we introduce a number of concepts in the text that are normally reserved for second-year courses on control and hence often not available to students who are not control systems majors. This has been done at the expense of certain traditional topics, which we felt that the astute student could learn independently and are often explored through the exercises. Examples of topics that we have included are nonlinear dynamics, Lyapunov stability analysis, the matrix exponential, reachability and observability, and fundamental limits of performance and robustness. Topics that we have deemphasized include root locus techniques, lead/lag compensation and detailed rules for generating Bode and Nyquist plots by hand.

Several features of the book are designed to facilitate its dual function as a basic engineering text and as an introduction for researchers in natural, information and social sciences. The bulk of the material is intended to be used regardless of the audience and covers the core principles and tools in the analysis and design of feedback systems. Advanced sections, marked by the "dangerous bend" symbol shown here, contain material that requires a slightly more technical background, of the sort that would be expected of senior undergraduates in engineering. A few sections are marked by two dangerous bend symbols and are intended for readers with more specialized backgrounds, identified at the beginning of the section. To limit the length of the text, several standard results and extensions are given in the exercises, with appropriate hints toward their solutions.

To further augment the printed material contained here, a companion web site has been developed and is available from the publisher's web page: http://press.princeton.edu/titles/8701.html. The web site contains a database of frequently asked questions, supplemental examples and exercises, and lecture
material for courses based on this text. The material is organized by chapter and includes a summary of the major points in the text as well as links to external resources. The web site also contains the source code for many examples in the book, as well as utilities to implement the techniques described in the text. Most of the code was originally written using MATLAB M-files but was also tested with LabView MathScript to ensure compatibility with both packages. Many files can also be run using other scripting languages such as Octave, SciLab, SysQuake and Xmath.

The first half of the book focuses almost exclusively on state space control systems. We begin in Chapter 2 with a description of modeling of physical, biological and information systems using ordinary differential equations and difference equations. Chapter 3 presents a number of examples in some detail, primarily as a reference for problems that will be used throughout the text. Following this, Chapter 4 looks at the dynamic behavior of models, including definitions of stability and more complicated nonlinear behavior. We provide advanced sections in this chapter on Lyapunov stability analysis because we find that it is useful in a broad array of applications and is frequently a topic that is not introduced until later in one's studies.

The remaining three chapters of the first half of the book focus on linear systems, beginning with a description of input/output behavior in Chapter 5. In Chapter 6, we formally introduce feedback systems by demonstrating how state space control laws can be designed. This is followed in Chapter 7 by material on output feedback and estimators. Chapters 6 and 7 introduce the key concepts of reachability and observability, which give tremendous insight into the choice of actuators and sensors, whether for engineered or natural systems.

The second half of the book presents material that is often considered to be from the field of classical control. This includes the transfer function,
introduced in Chapter 8, which is a fundamental tool for understanding feedback systems. Using transfer functions, one can begin to analyze the stability of feedback systems using frequency domain analysis, including the ability to reason about the closed loop behavior of a system from its open loop characteristics. This is the subject of Chapter 9, which revolves around the Nyquist stability criterion. In Chapters 10 and 11, we again look at the design problem, focusing first on proportional-integral-derivative (PID) controllers and then on the more general process of loop shaping. PID control is by far the most common design technique in control systems and a useful tool for any student. The chapter on frequency domain design introduces many of the ideas of modern control theory, including the sensitivity function. In Chapter 12, we combine the results from the second half of the book to analyze some of the fundamental tradeoffs between robustness and performance. This is also a key chapter, illustrating the power of the techniques that have been developed and serving as an introduction for more advanced studies.

The book is designed for use in a 10- to 15-week course in feedback systems that provides many of the key concepts needed in a variety of disciplines. For a 10-week course, Chapters 1-2, 4-6 and 8-11 can each be covered in a week's time, with the omission of some topics from the final chapters. A more leisurely course, spread out over 14-15 weeks, could cover the entire book, with 2 weeks on modeling (Chapters 2 and 3), particularly for students without much background in ordinary differential equations, and 2 weeks on robust performance (Chapter 12).

The mathematical prerequisites for the book are modest and in keeping with our goal of providing an introduction that serves a broad audience. We assume familiarity with the basic tools of linear algebra, including matrices, vectors and eigenvalues. These are typically covered in a sophomore-level course on the subject, and the textbooks by Apostol [10],
Arnold [13] and Strang [187] can serve as good references. Similarly, we assume basic knowledge of differential equations, including the concepts of homogeneous and particular solutions for linear ordinary differential equations in one variable; Apostol [10] and Boyce and DiPrima [42] cover this material well. Finally, we also make use of complex numbers and functions and, in some of the advanced sections, more detailed concepts in complex variables that are typically covered in a junior-level engineering or physics course in mathematical methods. Apostol [9] or Stewart [186] can be used for the basic material, with Ahlfors [6], Marsden and Hoffman [146] or Saff and Snider [172] being good references for the more advanced material. We have chosen not to include appendices summarizing these various topics since there are a number of good books available.

One additional choice that we felt was important was the decision not to rely on a knowledge of Laplace transforms in the book. While their use is by far the most common approach to teaching feedback systems in engineering, many students in the natural and information sciences may lack the necessary mathematical background. Since Laplace transforms are not required in any essential way, we have included them only in an advanced section intended to tie things together for students with that background. Of course, we make tremendous use of transfer functions, which we introduce through the notion of response to exponential inputs, an approach we feel is more accessible to a broad array of scientists and engineers. For classes in which students have already had Laplace transforms, it should be quite natural to build on this background in the appropriate sections of the text.

Acknowledgments

The authors would like to thank the many people who helped during the preparation of this book. The idea for writing this book came in part from a report on future directions in control [155], to which Stephen Boyd, Roger Brockett, John Doyle and Gunter Stein were
major contributors. Kristi Morgansen and Hideo Mabuchi helped teach early versions of the course at Caltech on which much of the text is based, and Steve Waydo served as the head TA for the course taught at Caltech in 2003-2004 and provided numerous comments and corrections. Charlotta Johnsson and Anton Cervin taught from early versions of the manuscript in Lund in 2003-2007 and gave very useful feedback. Other colleagues and students who provided feedback and advice include Leif Andersson, John Carson, K. Mani Chandy, Michel Charpentier, Domitilla Del Vecchio, Kate Galloway, Per Hagander, Toivo Henningsson Perby, Joseph Hellerstein, George Hines, Tore Hägglund, Cole Lepine, Anders Rantzer, Anders Robertsson, Dawn Tilbury and Francisco Zabala. The reviewers for Princeton University Press and Tom Robbins at NI Press also provided valuable comments that significantly improved the organization, layout and focus of the book. Our editor, Vickie Kearn, was a great source of encouragement and help throughout the publishing process. Finally, we would like to thank Caltech, Lund University and the University of California at Santa Barbara for providing many resources, stimulating colleagues and students, and pleasant working environments that greatly aided in the writing of this book.

Karl Johan Åström (Lund, Sweden; Santa Barbara, California)
Richard M. Murray (Pasadena, California)

Chapter One: Introduction

Feedback is a central feature of life. The process of feedback governs how we grow, respond to stress and challenge, and regulate factors such as body temperature, blood pressure and cholesterol level. The mechanisms operate at every level, from the interaction of proteins in cells to the interaction of organisms in complex ecologies.
— M. B. Hoagland and B. Dodson, The Way Life Works, 1995 [99]

In this chapter we provide an introduction to the basic concept of feedback and the related engineering discipline of control. We focus on both historical and current examples, with the intention of providing the context for current
tools in feedback and control. Much of the material in this chapter is adapted from [155], and the authors gratefully acknowledge the contributions of Roger Brockett and Gunter Stein to portions of this chapter.

1.1 What Is Feedback?

A dynamical system is a system whose behavior changes over time, often in response to external stimulation or forcing. The term feedback refers to a situation in which two (or more) dynamical systems are connected together such that each system influences the other and their dynamics are thus strongly coupled. Simple causal reasoning about a feedback system is difficult because the first system influences the second and the second system influences the first, leading to a circular argument. This makes reasoning based on cause and effect tricky, and it is necessary to analyze the system as a whole. A consequence of this is that the behavior of feedback systems is often counterintuitive, and it is therefore necessary to resort to formal methods to understand them.

Figure 1.1 illustrates in block diagram form the idea of feedback.

[Figure 1.1: Open and closed loop systems. (a) Closed loop: the output of system 1 is used as the input of system 2, and the output of system 2 becomes the input of system 1, creating a closed loop system. (b) Open loop: the interconnection between system 2 and system 1 is removed, and the system is said to be open loop.]

[Figure 1.2: The centrifugal governor and the steam engine. The centrifugal governor, on the left, consists of a set of "flyballs" that spread apart as the speed of the engine increases. The steam engine, on the right, uses a centrifugal governor (above and to the left of the flywheel) to regulate its speed. Credit: Machine à Vapeur Horizontale de Philip Taylor, 1828.]

We often use the terms open loop and closed loop when referring to such systems. A system is said to be a closed loop system if the systems are interconnected in a cycle, as shown in Figure 1.1a. If we break the interconnection, we refer to
the configuration as an open loop system, as shown in Figure 1.1b.

As the quote at the beginning of this chapter illustrates, a major source of examples of feedback systems is biology. Biological systems make use of feedback in an extraordinary number of ways, on scales ranging from molecules to cells to organisms to ecosystems. One example is the regulation of glucose in the bloodstream through the production of insulin and glucagon by the pancreas. The body attempts to maintain a constant concentration of glucose, which is used by the body's cells to produce energy. When glucose levels rise (after eating a meal, for example), the hormone insulin is released and causes the body to store excess glucose in the liver. When glucose levels are low, the pancreas secretes the hormone glucagon, which has the opposite effect. Referring to Figure 1.1, we can view the liver as system 1 and the pancreas as system 2. The output from the liver is the glucose concentration in the blood, and the output from the pancreas is the amount of insulin or glucagon produced. The interplay between insulin and glucagon secretions throughout the day helps to keep the blood-glucose concentration constant, at about 90 mg per 100 mL of blood.

An early engineering example of a feedback system is a centrifugal governor, in which the shaft of a steam engine is connected to a flyball mechanism that is itself connected to the throttle of the steam engine, as illustrated in Figure 1.2. The system is designed so that as the speed of the engine increases (perhaps because of a lessening of the load on the engine), the flyballs spread apart and a linkage causes the throttle on the steam engine to be closed. This in turn slows down the engine, which causes the flyballs to come back together. We can model this system as a closed loop system by taking system 1 as the steam engine and system 2 as the governor. When properly designed, the flyball governor maintains a constant speed of the engine, roughly independent of the
loading conditions. The centrifugal governor was an enabler of the successful Watt steam engine, which fueled the industrial revolution.

Feedback has many interesting properties that can be exploited in designing systems. As in the case of glucose regulation or the flyball governor, feedback can make a system resilient toward external influences. It can also be used to create linear behavior out of nonlinear components, a common approach in electronics. More generally, feedback allows a system to be insensitive both to external disturbances and to variations in its individual elements. Feedback has potential disadvantages as well. It can create dynamic instabilities in a system, causing oscillations or even runaway behavior. Another drawback, especially in engineering systems, is that feedback can introduce unwanted sensor noise into the system, requiring careful filtering of signals. It is for these reasons that a substantial portion of the study of feedback systems is devoted to developing an understanding of dynamics and a mastery of techniques in dynamical systems.

Feedback systems are ubiquitous in both natural and engineered systems. Control systems maintain the environment, lighting and power in our buildings and factories; they regulate the operation of our cars, consumer electronics and manufacturing processes; they enable our transportation and communications systems; and they are critical elements in our military and space systems. For the most part they are hidden from view, buried within the code of embedded microprocessors, executing their functions accurately and reliably. Feedback has also made it possible to increase dramatically the precision of instruments such as atomic force microscopes (AFMs) and telescopes.

In nature, homeostasis in biological systems maintains thermal, chemical and biological conditions through feedback. At the other end of the size scale, global climate dynamics depend on the feedback interactions between the atmosphere, the oceans, the land and the sun. Ecosystems are
filled with examples of feedback due to the complex interactions between animal and plant life. Even the dynamics of economies are based on the feedback between individuals and corporations through markets and the exchange of goods and services.

1.2 What Is Control?

The term control has many meanings and often varies between communities. In this book, we define control to be the use of algorithms and feedback in engineered systems. Thus, control includes such examples as feedback loops in electronic amplifiers, setpoint controllers in chemical and materials processing, "fly-by-wire" systems on aircraft and even router protocols that control traffic flow on the Internet. Emerging applications include high-confidence software systems, autonomous vehicles and robots, real-time resource management systems and biologically engineered systems. At its core, control is an information science and includes the use of information in both analog and digital representations.

[Figure 1.3: Components of a computer-controlled system. The upper dashed box represents the process dynamics, which include the sensors and actuators in addition to the dynamical system being controlled. Noise and external disturbances can perturb the dynamics of the process. The controller is shown in the lower dashed box. It consists of a filter and analog-to-digital (A/D) and digital-to-analog (D/A) converters, as well as a computer that implements the control algorithm. A system clock controls the operation of the controller, synchronizing the A/D, D/A and computing processes. The operator input is also fed to the computer as an external input.]

A modern controller senses the operation of a system, compares it against the desired behavior, computes corrective actions based on a model of the system's response to external inputs and actuates the system to effect the desired change. This basic feedback loop of sensing, computation and actuation is the central concept in control. The
key issues in designing control logic are ensuring that the dynamics of the closed loop system are stable (bounded disturbances give bounded errors) and that they have additional desired behavior (good disturbance attenuation, fast responsiveness to changes in operating point, etc.). These properties are established using a variety of modeling and analysis techniques that capture the essential dynamics of the system and permit the exploration of possible behaviors in the presence of uncertainty, noise and component failure.

A typical example of a control system is shown in Figure 1.3. The basic elements of sensing, computation and actuation are clearly seen. In modern control systems, computation is typically implemented on a digital computer, requiring the use of analog-to-digital (A/D) and digital-to-analog (D/A) converters. Uncertainty enters the system through noise in sensing and actuation subsystems, external disturbances that affect the underlying system operation and uncertain dynamics in the system (parameter errors, unmodeled effects, etc.). The algorithm that computes the control action as a function of the sensor values is often called a control law. The system can be influenced externally by an operator who introduces command signals to the system.

Control engineering relies on and shares tools from physics (dynamics and modeling), computer science (information and software) and operations research (optimization, probability theory and game theory), but it is also different from these subjects in both insights and approach.

Perhaps the strongest area of overlap between control and other disciplines is in the modeling of physical systems, which is common across all areas of engineering and science. One of the fundamental differences between control-oriented modeling and modeling in other disciplines is the way in which interactions between subsystems are represented. Control relies on a type of input/output modeling that allows many new insights into the behavior of systems, such
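The sensing-computation-actuation loop just described can be sketched in a few lines of Python; the first-order plant, the proportional control law and all numbers below are illustrative assumptions of ours, not from the text:

```python
# Minimal sketch of the sense-compare-compute-actuate loop described above.
# The discrete first-order plant, the gain and the setpoint are all
# illustrative assumptions, not a model from the text.

def run_loop(setpoint=1.0, kp=2.0, a=0.9, b=0.1, steps=200):
    """Proportional control of a discrete plant x+ = a*x + b*u."""
    x = 0.0                        # sensed process output
    for _ in range(steps):
        error = setpoint - x       # compare against the desired behavior
        u = kp * error             # compute the corrective action (control law)
        x = a * x + b * u          # actuate: the plant responds to the input
    return x

print(run_loop())  # settles at kp*b/(1 - a + kp*b) * setpoint = 2/3 here
```

With only proportional action the loop settles short of the setpoint (at 2/3 of it with these numbers), illustrating the steady-state error that integral action, mentioned later in connection with cruise control, removes.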
as disturbance attenuation and stable interconnection. Model reduction, where a simpler (lower-fidelity) description of the dynamics is derived from a high-fidelity model, is also naturally described in an input/output framework. Perhaps most importantly, modeling in a control context allows the design of robust interconnections between subsystems, a feature that is crucial in the operation of all large engineered systems.

Control is also closely associated with computer science, since virtually all modern control algorithms for engineering systems are implemented in software. However, control algorithms and software can be very different from traditional computer software because of the central role of the dynamics of the system and the real-time nature of the implementation.

1.3 Feedback Examples

Feedback has many interesting and useful properties. It makes it possible to design precise systems from imprecise components and to make relevant quantities in a system change in a prescribed fashion. An unstable system can be stabilized using feedback, and the effects of external disturbances can be reduced. Feedback also offers new degrees of freedom to a designer by exploiting sensing, actuation and computation. In this section we survey some of the important applications and trends for feedback in the world around us.

Early Technological Examples

The proliferation of control in engineered systems occurred primarily in the latter half of the 20th century. There are some important exceptions, such as the centrifugal governor described earlier and the thermostat (Figure 1.4a), designed at the turn of the century to regulate the temperature of buildings.

The thermostat, in particular, is a simple example of feedback control that everyone is familiar with. The device measures the temperature in a building, compares that temperature to a desired setpoint and uses the feedback error between the two to operate the heating plant, e.g., to turn heat on when the temperature is too low and to turn it off when the temperature is too high. This explanation captures the essence of feedback, but it is a bit too simple even for a basic device such as the thermostat. Because lags and delays exist in the heating plant and sensor, a good
is too high This explanation captures the essence of feedback but it is a bit too simple even for a basic device such as the thermostat Because lags and delays exist in the heating plant and sensor a good 6 CHAPTER 1 INTRODUCTION a Honeywell thermostat 1953 Movement opens throttle Electromagnet Reversible Motor Latch Governor Contacts Speed Adjustment Knob Latching Button Speed ometer Flyball Governor Adjustment Spring Load Spring Accelerator Pedal b Chrysler cruise control 1958 Figure 14 Early control devices a Honeywell T87 thermostat originally introduced in 1953 The thermostat controls whether a heater is turned on by comparing the current tem perature in a room to a desired value that is set using a dial b Chrysler cruise control system introduced in the 1958 Chrysler Imperial 170 A centrifugal governor is used to detect the speed of the vehicle and actuate the throttle The reference speed is specified through an adjustment spring Left figure courtesy of Honeywell International Inc thermostat does a bit of anticipation turning the heater off before the error actually changes sign This avoids excessive temperature swings and cycling of the heating plant This interplay between the dynamics of the process and the operation of the controller is a key element in modern control systems design There are many other control system examples that have developed over the years with progressively increasing levels of sophistication An early system with broad public exposure was the cruise control option introduced on automobiles in 1958 see Figure 14b Cruise control illustrates the dynamic behavior of closed loop feedback systems in actionthe slowdown error as the system climbs a grade the gradual reduction of that error due to integral action in the controller the small overshoot at the top of the climb etc Later control systems on automobiles such as emission controls and fuelmetering systems have achieved major reductions of pollutants and increases in fuel economy 
Power Generation and Transmission

Access to electrical power has been one of the major drivers of technological progress in modern society. Much of the early development of control was driven by the generation and distribution of electrical power. Control is mission critical for power systems, and there are many control loops in individual power stations. Control is also important for the operation of the whole power network, since it is difficult to store energy and it is thus necessary to match production to consumption. Power management is a straightforward regulation problem for a system with one generator and one power consumer, but it is more difficult in a highly distributed system with many generators and long distances between consumption and generation. Power demand can change rapidly in an unpredictable manner, and combining generators and consumers into large networks makes it possible to share loads among many suppliers and to average consumption among many customers. Large transcontinental and transnational power systems have therefore been built, such as the one shown in Figure 1.5.

[Figure 1.5: A small portion of the European power network. By 2008, European power suppliers will operate a single interconnected network covering a region from the Arctic to the Mediterranean and from the Atlantic to the Urals. In 2004 the installed power was more than 700 GW (7 x 10^11 W). Source: UCTE, www.ucte.org.]

Most electricity is distributed by alternating current (AC) because the transmission voltage can be changed with small power losses using transformers. Alternating current generators can deliver power only if the generators are synchronized to the voltage variations in the network. This means that the rotors of all generators in a network must be synchronized. To achieve this with local decentralized controllers and a small amount of interaction is a challenging problem. Sporadic low-frequency oscillations between distant regions have been observed when regional power grids have
been interconnected [134].

Safety and reliability are major concerns in power systems. There may be disturbances due to trees falling down on power lines, lightning, or equipment failures. There are sophisticated control systems that attempt to keep the system operating even when there are large disturbances. The control actions can be to reduce voltage, to break up the net into subnets, or to switch off lines and power users. These safety systems are an essential element of power distribution systems, but in spite of all precautions there are occasionally failures in large power systems. The power system is thus a nice example of a complicated distributed system where control is executed on many levels and in many different ways.

Figure 1.6. Military aerospace systems: (a) The F/A-18 aircraft is one of the first production military fighters to use fly-by-wire technology. (b) The X-45 (UCAV) unmanned aerial vehicle is capable of autonomous flight, using inertial measurement sensors and the global positioning system (GPS) to monitor its position relative to a desired trajectory. (Photographs courtesy of NASA Dryden Flight Research Center)

Aerospace and Transportation

In aerospace, control has been a key technological capability tracing back to the beginning of the 20th century. Indeed, the Wright brothers are correctly famous not for demonstrating simply powered flight but controlled powered flight. Their early Wright Flyer incorporated moving control surfaces: vertical fins and canards, and warpable wings that allowed the pilot to regulate the aircraft's flight. In fact, the aircraft itself was not stable, so continuous pilot corrections were mandatory. This early example of controlled flight was followed by a fascinating success story of continuous improvements in flight control technology, culminating in the high-performance, highly reliable automatic flight control systems we see in modern commercial and military aircraft today (Figure 1.6). Similar success
stories for control technology have occurred in many other application areas. Early World War II bombsights and fire control servo systems have evolved into today's highly accurate radar-guided guns and precision-guided weapons. Early failure-prone space missions have evolved into routine launch operations, manned landings on the moon, permanently manned space stations, robotic vehicles roving Mars, orbiting vehicles at the outer planets, and a host of commercial and military satellites serving various surveillance, communication, navigation, and earth observation needs. Cars have advanced from manually tuned mechanical/pneumatic technology to computer-controlled operation of all major functions, including fuel injection, emission control, cruise control, braking, and cabin comfort.

Current research in aerospace and transportation systems is investigating the application of feedback to higher levels of decision making, including logical regulation of operating modes, vehicle configurations, payload configurations, and health status. These have historically been performed by human operators, but today that boundary is moving, and control systems are increasingly taking on these functions. Another dramatic trend on the horizon is the use of large collections of distributed entities with local computation, global communication connections, little regularity imposed by the laws of physics, and no possibility of imposing centralized control actions.

Figure 1.7. Materials processing. Modern materials are processed under carefully controlled conditions, using reactors such as the metal organic chemical vapor deposition (MOCVD) reactor shown on the left, which was used for manufacturing superconducting thin films. Using lithography, chemical etching, vapor deposition, and other techniques, complex devices can be built, such as the IBM cell processor shown on the right. (MOCVD image courtesy of Bob Kee. IBM cell processor photograph courtesy Tom Way, IBM Corporation; unauthorized use not permitted)
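The flavor of such decentralized coordination can be illustrated with a toy sketch (not from the text; the ring topology, gain, and iteration count are illustrative assumptions): each agent repeatedly nudges its own value toward those of its immediate neighbors, using only local information, and the whole network settles on the global average with no central coordinator.

```python
# Toy decentralized averaging ("consensus") sketch: each agent updates using
# only its neighbors' values; no node ever sees the whole network state.
# The ring topology, gain eps, and iteration count are illustrative choices.

def consensus_step(values, neighbors, eps=0.1):
    """One synchronous update: x_i <- x_i + eps * sum_j (x_j - x_i)."""
    return [
        x + eps * sum(values[j] - x for j in neighbors[i])
        for i, x in enumerate(values)
    ]

# Five agents on a ring, each starting from a different local measurement.
neighbors = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
values = [10.0, 0.0, 4.0, 6.0, 5.0]

for _ in range(200):
    values = consensus_step(values, neighbors)

print(values)  # every entry approaches the initial average, 5.0
```

Because each update is symmetric across a link, the network average is conserved while disagreements between neighbors decay, so all agents converge to the mean of the initial values; this kind of local-interaction rule is one simple instance of control without centralized action.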
Examples of this trend include the national airspace management problem, automated highway and traffic management, and command and control for future battlefields.

Materials and Processing

The chemical industry is responsible for the remarkable progress in developing new materials that are key to our modern society. In addition to the continuing need to improve product quality, several other factors in the process control industry are drivers for the use of control. Environmental statutes continue to place stricter limitations on the production of pollutants, forcing the use of sophisticated pollution control devices. Environmental safety considerations have led to the design of smaller storage capacities to diminish the risk of major chemical leakage, requiring tighter control on upstream processes and, in some cases, supply chains. And large increases in energy costs have encouraged engineers to design plants that are highly integrated, coupling many processes that used to operate independently. All of these trends increase the complexity of these processes and the performance requirements for the control systems, making control system design increasingly challenging. Some examples of materials-processing technology are shown in Figure 1.7.

As in many other application areas, new sensor technology is creating new opportunities for control. Online sensors (including laser backscattering, video microscopy, and ultraviolet, infrared, and Raman spectroscopy) are becoming more robust and less expensive and are appearing in more manufacturing processes.

Figure 1.8. The voltage clamp method for measuring ion currents in cells using feedback. A pipette is used to place an electrode in a cell (left and middle) and maintain the potential of the cell at a fixed level. The internal voltage in the cell is vi and the voltage of the external fluid is ve. The feedback system (right) controls the current I into the cell so that the voltage drop across the cell membrane, Δv = vi − ve, is equal to its reference value Δvr. The current I is then equal to the ion current.

Many of these sensors are already being used by current process control systems, but more sophisticated signal-processing and control techniques are needed to use more effectively the real-time information provided by these sensors. Control engineers also contribute to the design of even better sensors, which are still needed, for example, in the microelectronics industry. As elsewhere, the challenge is making use of the large amounts of data provided by these new sensors in an effective manner. In addition, a control-oriented approach to modeling the essential physics of the underlying processes is required to understand the fundamental limits on observability of the internal state through sensor data.

Instrumentation

The measurement of physical variables is of prime interest in science and engineering. Consider, for example, an accelerometer, where early instruments consisted of a mass suspended on a spring with a deflection sensor. The precision of such an instrument depends critically on accurate calibration of the spring and the sensor. There is also a design compromise, because a weak spring gives high sensitivity but low bandwidth. A different way of measuring acceleration is to use force feedback. The spring is replaced by a voice coil that is controlled so that the mass remains at a constant position. The acceleration is then proportional to the current through the voice coil. In such an instrument the precision depends entirely on the calibration of the voice coil and does not depend on the sensor, which is used only as the feedback signal. The sensitivity/bandwidth compromise is also avoided. This way of using feedback has been applied to many different engineering fields and has resulted in instruments with dramatically improved performance. Force feedback is also used in haptic devices for manual control.

Another important application of feedback is in instrumentation for biological
systems. Feedback is widely used to measure ion currents in cells using a device called a voltage clamp, which is illustrated in Figure 1.8. Hodgkin and Huxley used the voltage clamp to investigate propagation of action potentials in the axon of the giant squid. In 1963 they shared the Nobel Prize in Medicine with Eccles for "their discoveries concerning the ionic mechanisms involved in excitation and inhibition in the peripheral and central portions of the nerve cell membrane." A refinement of the voltage clamp, called a patch clamp, made it possible to measure exactly when a single ion channel is opened or closed. This was developed by Neher and Sakmann, who received the 1991 Nobel Prize in Medicine "for their discoveries concerning the function of single ion channels in cells."

There are many other interesting and useful applications of feedback in scientific instruments. The development of the mass spectrometer is an early example. In a 1935 paper, Nier observed that the deflection of ions depends on both the magnetic and the electric fields [158]. Instead of keeping both fields constant, Nier let the magnetic field fluctuate, and the electric field was controlled to keep the ratio between the fields constant. Feedback was implemented using vacuum tube amplifiers. This scheme was crucial for the development of mass spectroscopy.

The Dutch engineer van der Meer invented a clever way to use feedback to maintain a good-quality, high-density beam in a particle accelerator [153]. The idea is to sense particle displacement at one point in the accelerator and apply a correcting signal at another point. This scheme, called stochastic cooling, was awarded the Nobel Prize in Physics in 1984. The method was essential for the successful experiments at CERN where the existence of the particles W and Z associated with the weak force was first demonstrated.

The 1986 Nobel Prize in Physics, awarded to Binnig and Rohrer for their design of the scanning tunneling microscope, is another example
of an innovative use of feedback. The key idea is to move a narrow tip on a cantilever beam across a surface and to register the forces on the tip [34]. The deflection of the tip is measured using tunneling. The tunneling current is used by a feedback system to control the position of the cantilever base so that the tunneling current is constant, an example of force feedback. The accuracy is so high that individual atoms can be registered. A map of the atoms is obtained by moving the base of the cantilever horizontally. The performance of the control system is directly reflected in the image quality and scanning speed. This example is described in additional detail in Chapter 3.

Robotics and Intelligent Machines

The goal of cybernetic engineering, already articulated in the 1940s and even before, has been to implement systems capable of exhibiting highly flexible or "intelligent" responses to changing circumstances. In 1948 the MIT mathematician Norbert Wiener gave a widely read account of cybernetics [200]. A more mathematical treatment of the elements of engineering cybernetics was presented by H. S. Tsien in 1954, driven by problems related to the control of missiles [195]. Together, these works and others of that time form much of the intellectual basis for modern work in robotics and control.

Two accomplishments that demonstrate the successes of the field are the Mars Exploratory Rovers and entertainment robots such as the Sony AIBO, shown in Figure 1.9.

Figure 1.9. Robotic systems: (a) Spirit, one of the two Mars Exploratory Rovers that landed on Mars in January 2004; (b) the Sony AIBO Entertainment Robot, one of the first entertainment robots to be mass-marketed. Both robots make use of feedback between sensors, actuators, and computation to function in unknown environments. (Photographs courtesy of Jet Propulsion Laboratory and Sony Electronics, Inc.)

The two Mars Exploratory Rovers, launched by the Jet Propulsion Laboratory (JPL), maneuvered on the surface of Mars for more than 4
years, starting in January 2004, and sent back pictures and measurements of their environment. The Sony AIBO robot debuted in June 1999 and was the first entertainment robot to be mass-marketed by a major international corporation. It was particularly noteworthy because of its use of artificial intelligence (AI) technologies that allowed it to act in response to external stimulation and its own judgment. This higher level of feedback is a key element in robotics, where issues such as obstacle avoidance, goal seeking, learning, and autonomy are prevalent.

Despite the enormous progress in robotics over the last half-century, in many ways the field is still in its infancy. Today's robots still exhibit simple behaviors compared with humans, and their ability to locomote, interpret complex sensory inputs, perform higher-level reasoning, and cooperate together in teams is limited. Indeed, much of Wiener's vision for robotics and intelligent machines remains unrealized. While advances are needed in many fields to achieve this vision, including advances in sensing, actuation, and energy storage, the opportunity to combine the advances of the AI community in planning, adaptation, and learning with the techniques in the control community for modeling, analysis, and design of feedback systems presents a renewed path for progress.

Networks and Computing Systems

Control of networks is a large research area spanning many topics, including congestion control, routing, data caching, and power management. Several features of these control problems make them very challenging. The dominant feature is the extremely large scale of the system; the Internet is probably the largest feedback control system humans have ever built. Another is the decentralized nature of the control problem: decisions must be made quickly and based only on local information.

Figure 1.10. A multitier system for services on the Internet. In the complete system, shown schematically in (a), users request information from a set of computers (tier 1), which in turn collect information from other computers (tiers 2 and 3). The individual server, shown in (b), has a set of reference parameters set by a human system operator, with feedback used to maintain the operation of the system in the presence of uncertainty. (Based on Hellerstein et al. [97])

Stability is complicated by the presence of varying time lags, as information about the network state can be observed or relayed to controllers only after a delay, and the effect of a local control action can be felt throughout the network only after substantial delay. Uncertainty and variation in the network, through network topology, transmission channel characteristics, traffic demand, and available resources, may change constantly and unpredictably. Other complicating issues are the diverse traffic characteristics, in terms of arrival statistics at both the packet and flow time scales, and the different requirements for quality of service that the network must support.

Related to the control of networks is control of the servers that sit on these networks. Computers are key components of the systems of routers, web servers, and database servers used for communication, electronic commerce, advertising, and information storage. While hardware costs for computing have decreased dramatically, the cost of operating these systems has increased because of the difficulty in managing and maintaining these complex interconnected systems. The situation is similar to the early phases of process control, when feedback was first introduced to control industrial processes. As in process control, there are interesting possibilities for increasing performance and decreasing costs by applying feedback. Several promising uses of feedback in the operation of computer systems are described in the book by Hellerstein et al. [97].

A typical example of a multilayer system for e-commerce
is shown in Figure 1.10a. The system has several tiers of servers. The edge server accepts incoming requests and routes them to the HTTP server tier, where they are parsed and distributed to the application servers. The processing for different requests can vary widely, and the application servers may also access external servers managed by other organizations.

Control of an individual server in a layer is illustrated in Figure 1.10b. A quantity representing the quality of service or cost of operation, such as response time, throughput, service rate, or memory usage, is measured in the computer. The control variables might represent incoming messages accepted, priorities in the operating system, or memory allocation. The feedback loop then attempts to maintain quality-of-service variables within a target range of values.

Economics

The economy is a large dynamical system with many actors: governments, organizations, companies, and individuals. Governments control the economy through laws and taxes, the central banks by setting interest rates, and companies by setting prices and making investments. Individuals control the economy through purchases, savings, and investments. Many efforts have been made to model the system both at the macro level and at the micro level, but this modeling is difficult because the system is strongly influenced by the behaviors of the different actors in the system.

Keynes [122] developed a simple model to understand relations among gross national product, investment, consumption, and government spending. One of Keynes' observations was that under certain conditions, e.g., during the 1930s depression, an increase in government spending could lead to a larger increase in the gross national product. This idea was used by several governments to try to alleviate the depression. Keynes' ideas can be captured by a simple model that is discussed in Exercise 2.4.

A perspective on the modeling and control of economic systems can be obtained from the
work of some economists who have received the Sveriges Riksbank Prize in Economics in Memory of Alfred Nobel, popularly called the Nobel Prize in Economics. Paul A. Samuelson received the prize in 1970 for "the scientific work through which he has developed static and dynamic economic theory and actively contributed to raising the level of analysis in economic science." Lawrence Klein received the prize in 1980 for the development of large dynamical models with many parameters that were fitted to historical data [126], e.g., a model of the US economy in the period 1929–1952. Other researchers have modeled other countries and other periods. In 1997 Myron Scholes shared the prize with Robert Merton for a new method to determine the value of derivatives. A key ingredient was a dynamic model of the variation of stock prices that is widely used by banks and investment companies. In 2004 Finn E. Kydland and Edward C. Prescott shared the economics prize "for their contributions to dynamic macroeconomics: the time consistency of economic policy and the driving forces behind business cycles," a topic that is clearly related to dynamics and control.

One of the reasons why it is difficult to model economic systems is that there are no conservation laws. A typical example is that the value of a company, as expressed by its stock, can change rapidly and erratically. There are, however, some areas with conservation laws that permit accurate modeling. One example is the flow of products from a manufacturer to a retailer, as illustrated in Figure 1.11. The products are physical quantities that obey a conservation law, and the system can be modeled by accounting for the number of products in the different inventories. There are considerable economic benefits in controlling supply chains so that products are available to customers while minimizing products that are in storage. The real problems are more complicated than indicated in the figure because there may be many different products, there may be different factories that are geographically distributed, and the factories may require raw material or subassemblies.

Figure 1.11. Supply chain dynamics (after Forrester [75]). Products flow from the producer to the customer through distributors and retailers, as indicated by the solid lines. There are typically many factories and warehouses and even more distributors and retailers. The dashed lines show the upward flow of orders. The numbers in the circles represent the delays in the flow of information or materials. Multiple feedback loops are present as each agent tries to maintain the proper inventory level.

Control of supply chains was proposed by Forrester in 1961 [75] and is now growing in importance. Considerable economic benefits can be obtained by using models to minimize inventories. Their use accelerated dramatically when information technology was applied to predict sales, keep track of products, and enable just-in-time manufacturing. Supply chain management has contributed significantly to the growing success of global distributors.

Advertising on the Internet is an emerging application of control. With network-based advertising it is easy to measure the effect of different marketing strategies quickly. The response of customers can then be modeled, and feedback strategies can be developed.

Feedback in Nature

Many problems in the natural sciences involve understanding aggregate behavior in complex large-scale systems. This behavior emerges from the interaction of a multitude of simpler systems with intricate patterns of information flow. Representative examples can be found in fields ranging from embryology to seismology. Researchers who specialize in the study of specific complex systems often develop an intuitive emphasis on analyzing the role of feedback (or interconnection) in facilitating and stabilizing aggregate behavior.

While sophisticated theories have been developed by domain experts for the analysis of various
complex systems, the development of a rigorous methodology that can discover and exploit common features and essential mathematical structure is just beginning to emerge. Advances in science and technology are creating a new understanding of the underlying dynamics and the importance of feedback in a wide variety of natural and technological systems. We briefly highlight three application areas here.

Biological Systems. A major theme currently of interest to the biology community is the science of reverse (and eventually forward) engineering of biological control networks such as the one shown in Figure 1.12.

Figure 1.12. The wiring diagram of the growth-signaling circuitry of the mammalian cell [95]. The major pathways that are thought to play a role in cancer are indicated in the diagram. Lines represent interactions between genes and proteins in the cell. Lines ending in arrowheads indicate activation of the given gene or pathway; lines ending in a T-shaped head indicate repression. (Used with permission of Elsevier Ltd. and the authors)

There are a wide variety of biological phenomena that provide a rich source of examples of control, including gene regulation and signal transduction; hormonal, immunological, and cardiovascular feedback mechanisms; muscular control and locomotion; active sensing, vision, and proprioception; attention and consciousness; and population dynamics and epidemics. Each of these (and many more) provide opportunities to figure out what works, how it works, and what we can do to affect it.

One interesting feature of biological systems is the frequent use of positive feedback to shape the dynamics of the system. Positive feedback can be used to create switchlike behavior through autoregulation of a gene, and to create oscillations such as those present in the cell cycle, central pattern generators, or circadian rhythm.

Ecosystems. In contrast to individual cells and organisms, emergent properties of aggregations and ecosystems inherently reflect selection mechanisms that act on
multiple levels, and primarily on scales well below that of the system as a whole. Because ecosystems are complex multiscale dynamical systems, they provide a broad range of new challenges for the modeling and analysis of feedback systems. Recent experience in applying tools from control and dynamical systems to bacterial networks suggests that much of the complexity of these networks is due to the presence of multiple layers of feedback loops that provide robust functionality to the individual cell. Yet in other instances, events at the cell level benefit the colony at the expense of the individual. Systems-level analysis can be applied to ecosystems with the goal of understanding the robustness of such systems and the extent to which decisions and events affecting individual species contribute to the robustness and/or fragility of the ecosystem as a whole.

Environmental Science. It is now indisputable that human activities have altered the environment on a global scale. Problems of enormous complexity challenge researchers in this area, and first among these is to understand the feedback systems that operate on the global scale. One of the challenges in developing such an understanding is the multiscale nature of the problem, with detailed understanding of the dynamics of microscale phenomena such as microbiological organisms being a necessary component of understanding global phenomena such as the carbon cycle.

1.4 Feedback Properties

Feedback is a powerful idea which, as we have seen, is used extensively in natural and technological systems. The principle of feedback is simple: base correcting actions on the difference between desired and actual performance. In engineering, feedback has been rediscovered and patented many times in many different contexts. The use of feedback has often resulted in vast improvements in system capability, and these improvements have sometimes been revolutionary, as discussed above. The reason for this is that feedback has some truly remarkable
properties. In this section we will discuss some of the properties of feedback that can be understood intuitively. This intuition will be formalized in subsequent chapters.

Robustness to Uncertainty

One of the key uses of feedback is to provide robustness to uncertainty. By measuring the difference between the sensed value of a regulated signal and its desired value, we can supply a corrective action. If the system undergoes some change that affects the regulated signal, then we sense this change and try to force the system back to the desired operating point. This is precisely the effect that Watt exploited in his use of the centrifugal governor on steam engines.

As an example of this principle, consider the simple feedback system shown in Figure 1.13. In this system the speed of a vehicle is controlled by adjusting the amount of gas flowing to the engine. Simple proportional-integral (PI) feedback is used to make the amount of gas depend on both the error between the current and the desired speed and the integral of that error. The plot on the right shows the results of this feedback for a step change in the desired speed and a variety of different masses for the car, which might result from having a different number of passengers or towing a trailer. Notice that independent of the mass (which varies by a factor of 3), the steady-state speed of the vehicle always approaches the desired speed and achieves that speed within approximately 5 s. Thus the performance of the system is robust with respect to this uncertainty.

Figure 1.13. A feedback system for controlling the speed of a vehicle. In the block diagram on the left, the speed of the vehicle is measured and compared to the desired speed within the Compute block. Based on the difference in the actual and desired speeds, the throttle (or brake) is used to modify the force applied to the vehicle by the engine, drivetrain, and wheels. The figure on the right shows the response of the control system to a commanded change in speed from 25 m/s to 30 m/s. The three different curves correspond to differing masses of the vehicle, between 1000 and 3000 kg, demonstrating the robustness of the closed loop system to a very large change in the vehicle characteristics.

Another early example of the use of feedback to provide robustness is the negative feedback amplifier. When telephone communications were developed, amplifiers were used to compensate for signal attenuation in long lines. A vacuum tube was a component that could be used to build amplifiers. Distortion caused by the nonlinear characteristics of the tube amplifier together with amplifier drift were obstacles that prevented the development of line amplifiers for a long time. A major breakthrough was the invention of the feedback amplifier in 1927 by Harold S. Black, an electrical engineer at Bell Telephone Laboratories. Black used negative feedback, which reduces the gain but makes the amplifier insensitive to variations in tube characteristics. This invention made it possible to build stable amplifiers with linear characteristics despite the nonlinearities of the vacuum tube amplifier.

Design of Dynamics

Another use of feedback is to change the dynamics of a system. Through feedback, we can alter the behavior of a system to meet the needs of an application: systems that are unstable can be stabilized, systems that are sluggish can be made responsive, and systems that have drifting operating points can be held constant. Control theory provides a rich collection of techniques to analyze the stability and dynamic response of complex systems and to place bounds on the behavior of such systems by analyzing the gains of linear and nonlinear operators that describe their components.

An example of the use of control in the design of dynamics comes from the area of flight control. The following quote, from a lecture presented by Wilbur Wright to the Western Society of Engineers in 1901 [149],
illustrates the role of control in the development of the airplane:

Men already know how to construct wings or airplanes, which when driven through the air at sufficient speed, will not only sustain the weight of the wings themselves, but also that of the engine, and of the engineer as well. Men also know how to build engines and screws of sufficient lightness and power to drive these planes at sustaining speed. Inability to balance and steer still confronts students of the flying problem. When this one feature has been worked out, the age of flying will have arrived, for all other difficulties are of minor importance.

Figure 1.14. Aircraft autopilot system. The Sperry autopilot (left) contained a set of four gyros coupled to a set of air valves that controlled the wing surfaces. The 1912 Curtiss used an autopilot to stabilize the roll, pitch, and yaw of the aircraft and was able to maintain level flight as a mechanic walked on the wing (right) [105].

The Wright brothers thus realized that control was a key issue to enable flight. They resolved the compromise between stability and maneuverability by building an airplane, the Wright Flyer, that was unstable but maneuverable. The Flyer had a rudder in the front of the airplane, which made the plane very maneuverable. A disadvantage was the necessity for the pilot to keep adjusting the rudder to fly the plane: if the pilot let go of the stick, the plane would crash. Other early aviators tried to build stable airplanes. These would have been easier to fly, but because of their poor maneuverability they could not be brought up into the air. By using their insight and skillful experiments, the Wright brothers made the first successful flight at Kitty Hawk in 1903.

Since it was quite tiresome to fly an unstable aircraft, there was strong motivation to find a mechanism that would stabilize an aircraft. Such a device, invented by Sperry, was based on the concept of feedback. Sperry used a gyro-stabilized pendulum to provide an indication
of the vertical. He then arranged a feedback mechanism that would pull the stick to make the plane go up if it was pointing down, and vice versa. The Sperry autopilot was the first use of feedback in aeronautical engineering, and Sperry won a prize in a competition for the safest airplane in Paris in 1914. Figure 1.14 shows the Curtiss seaplane and the Sperry autopilot. The autopilot is a good example of how feedback can be used to stabilize an unstable system and hence design the dynamics of the aircraft.

One of the other advantages of designing the dynamics of a device is that it allows for increased modularity in the overall system design. By using feedback to create a system whose response matches a desired profile, we can hide the complexity and variability that may be present inside a subsystem. This allows us to create more complex systems by not having to simultaneously tune the responses of a large number of interacting components. This was one of the advantages of Black's use of negative feedback in vacuum tube amplifiers: the resulting device had a well-defined linear input/output response that did not depend on the individual characteristics of the vacuum tubes being used.

Higher Levels of Automation

A major trend in the use of feedback is its application to higher levels of situational awareness and decision making. This includes not only traditional logical branching based on system conditions but also optimization, adaptation, learning, and even higher levels of abstract reasoning. These problems are in the domain of the artificial intelligence community, with an increasing role of dynamics, robustness, and interconnection in many applications.

One of the interesting areas of research in higher levels of decision is autonomous control of cars. Early experiments with autonomous driving were performed by Ernst Dickmanns, who in the 1980s equipped cars with cameras and other sensors [60]. In 1994 his group demonstrated autonomous driving with human supervision on a
highway near Paris, and in 1995 one of his cars drove autonomously (with human supervision) from Munich to Copenhagen at speeds of up to 175 km/hour. The car was able to overtake other vehicles and change lanes automatically.

This application area has been recently explored through the DARPA Grand Challenge, a series of competitions sponsored by the US government to build vehicles that can autonomously drive themselves in desert and urban environments. Caltech competed in the 2005 and 2007 Grand Challenges using a modified Ford E-350 offroad van nicknamed Alice. It was fully automated, including electronically controlled steering, throttle, brakes, transmission and ignition. Its sensing systems included multiple video cameras scanning at 10–30 Hz, several laser ranging units scanning at 10 Hz and an inertial navigation package capable of providing position and orientation estimates at 5 ms temporal resolution. Computational resources included 12 high-speed servers connected together through a 1-Gb/s Ethernet switch. The vehicle is shown in Figure 1.15, along with a block diagram of its control architecture.

The software and hardware infrastructure that was developed enabled the vehicle to traverse long distances at substantial speeds. In testing, Alice drove itself more than 500 km in the Mojave Desert of California, with the ability to follow dirt roads and trails (if present) and avoid obstacles along the path. Speeds of more than 50 km/h were obtained in the fully autonomous mode. Substantial tuning of the algorithms was done during desert testing, in part because of the lack of systems-level design tools for systems of this level of complexity. Other competitors in the race (including Stanford, which won the 2005 competition) used algorithms for adaptive

Figure 1.15: DARPA Grand Challenge. Alice, Team Caltech's entry in the 2005 and
2007 competitions, and its networked control architecture [54]. The feedback system fuses data from terrain sensors (cameras and laser range finders) to determine a digital elevation map. This map is used to compute the vehicle's potential speed over the terrain, and an optimization-based path planner then commands a trajectory for the vehicle to follow. A supervisory control module performs higher-level tasks such as handling sensor and actuator failures.

control and learning, increasing the capabilities of their systems in unknown environments. Together, the competitors in the Grand Challenge demonstrated some of the capabilities of the next generation of control systems and highlighted many research directions in control at higher levels of decision making.

Drawbacks of Feedback

While feedback has many advantages, it also has some drawbacks. Chief among these is the possibility of instability if the system is not designed properly. We are all familiar with the effects of positive feedback when the amplification on a microphone is turned up too high in a room. This is an example of feedback instability, something that we obviously want to avoid. This is tricky because we must design the system not only to be stable under nominal conditions but also to remain stable under all possible perturbations of the dynamics.

In addition to the potential for instability, feedback inherently couples different parts of a system. One common problem is that feedback often injects measurement noise into the system. Measurements must be carefully filtered so that the actuation and process dynamics do not respond to them, while at the same time ensuring that the measurement signal from the sensor is properly coupled into the closed loop dynamics so that the proper levels of performance are achieved.

Another potential drawback of control is the complexity of embedding a control system in a product. While the cost of sensing, computation and actuation has decreased dramatically in the past few decades, the fact
remains that control systems are often complicated, and hence one must carefully balance the costs and benefits. An early engineering example of this is the use of microprocessor-based feedback systems in automobiles. The use of microprocessors in automotive applications began in the early 1970s and was driven by increasingly strict emissions standards, which could be met only through electronic controls. Early systems were expensive and failed more often than desired, leading to frequent customer dissatisfaction. It was only through aggressive improvements in technology that the performance, reliability and cost of these systems allowed them to be used in a transparent fashion. Even today, the complexity of these systems is such that it is difficult for an individual car owner to fix problems.

Feedforward

Feedback is reactive: there must be an error before corrective actions are taken. However, in some circumstances it is possible to measure a disturbance before it enters the system, and this information can then be used to take corrective action before the disturbance has influenced the system. The effect of the disturbance is thus reduced by measuring it and generating a control signal that counteracts it. This way of controlling a system is called feedforward. Feedforward is particularly useful in shaping the response to command signals because command signals are always available. Since feedforward attempts to match two signals, it requires good process models; otherwise the corrections may have the wrong size or may be badly timed.

The ideas of feedback and feedforward are very general and appear in many different fields. In economics, feedback and feedforward are analogous to a market-based economy versus a planned economy. In business, a feedforward strategy corresponds to running a company based on extensive strategic planning, while a feedback strategy corresponds to a reactive approach. In biology, feedforward has been suggested as an essential element for
motion control in humans that is tuned during training. Experience indicates that it is often advantageous to combine feedback and feedforward, and the correct balance requires insight and understanding of their respective properties.

Positive Feedback

In most of this text we will consider the role of negative feedback, in which we attempt to regulate the system by reacting to disturbances in a way that decreases the effect of those disturbances. In some systems, particularly biological systems, positive feedback can play an important role. In a system with positive feedback, the increase in some variable or signal leads to a situation in which that quantity is further increased through its dynamics. This has a destabilizing effect and is usually accompanied by a saturation that limits the growth of the quantity. Although often considered undesirable, this behavior is used in biological and engineering systems to obtain a very fast response to a condition or signal.

One example of the use of positive feedback is to create switching behavior, in which a system maintains a given state until some input crosses a threshold. Hysteresis is often present so that noisy inputs near the threshold do not cause the system to jitter. This type of behavior is called bistability and is often associated with memory devices.

Figure 1.16: Input/output characteristics of on-off controllers. Each plot shows the input on the horizontal axis and the corresponding output on the vertical axis. Ideal on-off control is shown in (a), with modifications for a dead zone (b) or hysteresis (c). Note that for on-off control with hysteresis, the output depends on the value of past inputs.

1.5 Simple Forms of Feedback

The idea of feedback to make corrective actions based on the difference between the desired and the actual values of a quantity can be implemented in many different ways. The benefits of feedback can be obtained by very simple feedback laws such as on-off control,
proportional control and proportional-integral-derivative control. In this section we provide a brief preview of some of the topics that will be studied more formally in the remainder of the text.

On-Off Control

A simple feedback mechanism can be described as follows:

$$u = \begin{cases} u_{\max} & \text{if } e > 0 \\ u_{\min} & \text{if } e < 0 \end{cases} \qquad (1.1)$$

where the control error $e = r - y$ is the difference between the reference signal (or command signal) $r$ and the output of the system $y$, and $u$ is the actuation command. Figure 1.16a shows the relation between error and control. This control law implies that maximum corrective action is always used.

The feedback in equation (1.1) is called on-off control. One of its chief advantages is that it is simple and there are no parameters to choose. On-off control often succeeds in keeping the process variable close to the reference, such as the use of a simple thermostat to maintain the temperature of a room. It typically results in a system where the controlled variables oscillate, which is often acceptable if the oscillation is sufficiently small. Notice that in equation (1.1) the control variable is not defined when the error is zero. It is common to make modifications by introducing either a dead zone or hysteresis (see Figures 1.16b and 1.16c).

PID Control

The reason why on-off control often gives rise to oscillations is that the system overreacts, since a small change in the error makes the actuated variable change over the full range. This effect is avoided in proportional control, where the characteristic of the controller is proportional to the control error for small errors. This can be achieved with the control law

$$u = \begin{cases} u_{\max} & \text{if } e \ge e_{\max} \\ k_p e & \text{if } e_{\min} < e < e_{\max} \\ u_{\min} & \text{if } e \le e_{\min} \end{cases}$$

where $k_p$ is the controller gain, $e_{\min} = u_{\min}/k_p$ and $e_{\max} = u_{\max}/k_p$. The interval $(e_{\min}, e_{\max})$ is called the proportional band because the behavior of the controller is linear when the error is in this interval:

$$u = k_p (r - y) = k_p e \quad \text{if } e_{\min} \le e \le e_{\max}.$$

While a vast improvement over on-off control, proportional control has the drawback that the process variable often deviates
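The on-off and proportional control laws above are simple enough to express directly in code. A minimal sketch (the gains, limits and hysteresis width are illustrative assumptions, not values from the text):

```python
def on_off(e, u_max=1.0, u_min=-1.0):
    """On-off control, equation (1.1): maximum corrective action is always used."""
    return u_max if e > 0 else u_min

def on_off_hysteresis(e, u_prev, width=0.1, u_max=1.0, u_min=-1.0):
    """On-off control with hysteresis: switch only when the error leaves the
    band [-width, width], so noisy inputs near zero do not cause jitter.
    The output depends on the previous output, as noted for Figure 1.16c."""
    if e > width:
        return u_max
    if e < -width:
        return u_min
    return u_prev  # inside the band: keep the previous actuation

def proportional(e, kp=2.0, u_max=1.0, u_min=-1.0):
    """Proportional control: linear inside the proportional band
    (e_min, e_max) = (u_min/kp, u_max/kp), saturated outside it."""
    return max(u_min, min(u_max, kp * e))
```

For example, with `kp = 2` the proportional band is $(-0.5, 0.5)$: `proportional(0.1)` returns the linear value `0.2`, while `proportional(10.0)` saturates at `1.0`, behaving like on-off control for large errors.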
from its reference value. In particular, if some level of control signal is required for the system to maintain a desired value, then we must have $e \neq 0$ in order to generate the requisite input.

This can be avoided by making the control action proportional to the integral of the error:

$$u(t) = k_i \int_0^t e(\tau)\, d\tau. \qquad (1.4)$$

This control form is called integral control, and $k_i$ is the integral gain. It can be shown through simple arguments that a controller with integral action has zero steady-state error (Exercise 1.5). The catch is that there may not always be a steady state because the system may be oscillating.

An additional refinement is to provide the controller with an anticipative ability by using a prediction of the error. A simple prediction is given by the linear extrapolation

$$e(t + T_d) \approx e(t) + T_d \frac{de(t)}{dt},$$

which predicts the error $T_d$ time units ahead. Combining proportional, integral and derivative control, we obtain a controller that can be expressed mathematically as

$$u(t) = k_p e(t) + k_i \int_0^t e(\tau)\, d\tau + k_d \frac{de(t)}{dt}. \qquad (1.5)$$

The control action is thus a sum of three terms: the past as represented by the integral of the error, the present as represented by the proportional term and the future as represented by a linear extrapolation of the error (the derivative term). This form of feedback is called a proportional-integral-derivative (PID) controller, and its action is illustrated in Figure 1.17.

A PID controller is very useful and is capable of solving a wide range of control problems. More than 95% of all industrial control problems are solved by PID control, although many of these controllers are actually proportional-integral (PI) controllers because derivative action is often not included [58]. There are also more advanced controllers, which differ from PID controllers by using more sophisticated methods for prediction.

Figure 1.17: Action of a PID controller. At time t, the proportional term depends on the instantaneous value of the error. The integral portion of the feedback is based on the integral
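The PID law (1.5) is typically implemented in discrete time, approximating the integral by a running sum and the derivative by a backward difference. A minimal sketch (the sample period and gains are illustrative assumptions; a practical implementation would also add integrator anti-windup and derivative filtering):

```python
class PID:
    """Discrete-time PID controller implementing equation (1.5):
    u = kp*e + ki*(integral of e) + kd*(de/dt), with the integral
    accumulated as a running sum and the derivative taken as a
    backward difference over the sample period h."""

    def __init__(self, kp, ki, kd, h):
        self.kp, self.ki, self.kd, self.h = kp, ki, kd, h
        self.integral = 0.0   # past: accumulated error
        self.e_prev = 0.0

    def update(self, r, y):
        e = r - y                               # control error e = r - y
        self.integral += e * self.h             # integral term state
        deriv = (e - self.e_prev) / self.h      # future: error trend
        self.e_prev = e
        return self.kp * e + self.ki * self.integral + self.kd * deriv

# Example: regulate a first-order process dy/dt = -y + u toward r = 1
# using forward-Euler steps of the process dynamics.
pid = PID(kp=2.0, ki=1.0, kd=0.1, h=0.01)
y = 0.0
for _ in range(2000):                           # simulate 20 seconds
    u = pid.update(1.0, y)
    y += 0.01 * (-y + u)
```

Because of the integral term, the output settles at the reference with zero steady-state error, consistent with the argument sketched in Exercise 1.5: any persistent nonzero error would keep growing the integral, and hence the control signal, until the error is driven to zero.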
of the error up to time t (shaded portion). The derivative term provides an estimate of the growth or decay of the error over time by looking at the rate of change of the error. Td represents the approximate amount of time in which the error is projected forward (see text).

1.6 Further Reading

The material in this section draws heavily from the report of the Panel on Future Directions on Control, Dynamics and Systems [155]. Several additional papers and reports have highlighted the successes of control [159] and new vistas in control [45, 130, 204]. The early development of control is described by Mayr [148] and in the books by Bennett [28, 29], which cover the period 1800–1955. A fascinating examination of some of the early history of control in the United States has been written by Mindell [152]. A popular book that describes many control concepts across a wide range of disciplines is Out of Control by Kelly [121]. There are many textbooks available that describe control systems in the context of specific disciplines. For engineers, the textbooks by Franklin, Powell and Emami-Naeini [79], Dorf and Bishop [61], Kuo and Golnaraghi [133], and Seborg, Edgar and Mellichamp [178] are widely used. More mathematically oriented treatments of control theory include Sontag [182] and Lewis [136]. The book by Hellerstein et al. [97] provides a description of the use of feedback control in computing systems. A number of books look at the role of dynamics and feedback in biological systems, including Milhorn [151] (now out of print), J. D. Murray [154] and Ellner and Guckenheimer [70]. The book by Fradkov [77] and the tutorial article by Bechhoefer [25] cover many specific topics of interest to the physics community.

Exercises

1.1 (Eye motion) Perform the following experiment and explain your results: Holding your head still, move one of your hands left and right in front of your face, following it with your eyes. Record how quickly you can move your hand before you begin to lose track of it. Now hold your hand still and shake your head left to right,
once again recording how quickly you can move before losing track.

1.2 Identify five feedback systems that you encounter in your everyday environment. For each system, identify the sensing mechanism, actuation mechanism and control law. Describe the uncertainty with respect to which the feedback system provides robustness and/or the dynamics that are changed through the use of feedback.

1.3 (Balance systems) Balance yourself on one foot with your eyes closed for 15 s. Using Figure 1.3 as a guide, describe the control system responsible for keeping you from falling down. Note that the controller will differ from that in the diagram (unless you are an android reading this in the far future).

1.4 (Cruise control) Download the MATLAB code used to produce simulations for the cruise control system in Figure 1.13 from the companion web site. Using trial and error, change the parameters of the control law so that the overshoot in speed is not more than 1 m/s for a vehicle with mass m = 1000 kg.

1.5 (Integral action) We say that a system with a constant input reaches steady state if the output of the system approaches a constant value as time increases. Show that a controller with integral action, such as those given in equations (1.4) and (1.5), gives zero error if the closed loop system reaches steady state.

1.6 Search the web and pick an article in the popular press about a feedback and control system. Describe the feedback system using the terminology given in the article. In particular, identify the control system and describe (a) the underlying process or system being controlled, along with the (b) sensor, (c) actuator and (d) computational element. If some of the information is not available in the article, indicate this and take a guess at what might have been used.

Chapter Two
System Modeling

I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, "How many arbitrary parameters did you use for your calculations?" I thought
for a moment about our cutoff procedures and said, "Four." He said, "I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk."

Freeman Dyson, on describing the predictions of his model for meson-proton scattering to Enrico Fermi in 1953 [67].

A model is a precise representation of a system's dynamics used to answer questions via analysis and simulation. The model we choose depends on the questions we wish to answer, and so there may be multiple models for a single dynamical system, with different levels of fidelity depending on the phenomena of interest. In this chapter we provide an introduction to the concept of modeling and present some basic material on two specific methods commonly used in feedback and control systems: differential equations and difference equations.

2.1 Modeling Concepts

A model is a mathematical representation of a physical, biological or information system. Models allow us to reason about a system and make predictions about how a system will behave. In this text we will mainly be interested in models of dynamical systems describing the input/output behavior of systems, and we will often work in state space form.

Roughly speaking, a dynamical system is one in which the effects of actions do not occur immediately. For example, the velocity of a car does not change immediately when the gas pedal is pushed, nor does the temperature in a room rise instantaneously when a heater is switched on. Similarly, a headache does not vanish right after an aspirin is taken, requiring time for it to take effect. In business systems, increased funding for a development project does not increase revenues in the short term, although it may do so in the long term if it was a good investment. All of these are examples of dynamical systems, in which the behavior of the system evolves with time.

In the remainder of this section we provide an overview of some of the key concepts in modeling. The mathematical details
introduced here are explored more fully in the remainder of the chapter.

Figure 2.1: Spring-mass system with nonlinear damping. The position of the mass is denoted by q, with q = 0 corresponding to the rest position of the spring. The forces on the mass are generated by a linear spring with spring constant k and a damper with force dependent on the velocity $\dot q$.

The Heritage of Mechanics

The study of dynamics originated in attempts to describe planetary motion. The basis was detailed observations of the planets by Tycho Brahe and the results of Kepler, who found empirically that the orbits of the planets could be well described by ellipses. Newton embarked on an ambitious program to try to explain why the planets move in ellipses, and he found that the motion could be explained by his law of gravitation and the formula stating that force equals mass times acceleration. In the process he also invented calculus and differential equations.

One of the triumphs of Newton's mechanics was the observation that the motion of the planets could be predicted based on the current positions and velocities of all planets. It was not necessary to know the past motion. The state of a dynamical system is a collection of variables that completely characterizes the motion of a system for the purpose of predicting future motion. For a system of planets, the state is simply the positions and the velocities of the planets. We call the set of all possible states the state space.

A common class of mathematical models for dynamical systems is ordinary differential equations (ODEs). In mechanics, one of the simplest such differential equations is that of a spring-mass system with damping:

$$m\ddot q + c(\dot q) + kq = 0. \qquad (2.1)$$

This system is illustrated in Figure 2.1. The variable $q \in \mathbb{R}$ represents the position of the mass m with respect to its rest position. We use the notation $\dot q$ to denote the derivative of q with respect to time (i.e., the velocity of the mass) and $\ddot q$ to represent the second derivative (acceleration).
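Equation (2.1) can be simulated numerically by rewriting it in terms of the state $(q, \dot q)$. A minimal forward-Euler sketch (the parameter values m = 1, k = 1 and the linear damping choice $c(\dot q) = 0.2\,\dot q$ are illustrative assumptions; any damping function could be substituted):

```python
def simulate(q0, v0, m=1.0, k=1.0, c=lambda v: 0.2 * v, h=0.001, steps=10000):
    """Forward-Euler simulation of the spring-mass system (2.1),
    m*q'' + c(q') + k*q = 0, written in state form with v = dq/dt:
        dq/dt = v,   dv/dt = -(c(v) + k*q) / m.
    Returns a list of (t, q, v) samples; plotting q and v against t
    gives the time plot, and plotting (q, v) pairs gives the phase
    portrait discussed around Figure 2.2."""
    q, v = q0, v0
    traj = [(0.0, q, v)]
    for i in range(1, steps + 1):
        a = -(c(v) + k * q) / m      # acceleration from eq. (2.1)
        q, v = q + h * v, v + h * a  # Euler update of the state
        traj.append((i * h, q, v))
    return traj
```

Starting from q = 1, v = 0, the damping term dissipates energy, so the oscillation amplitude decays with time: the trajectory spirals toward the origin in the $(q, \dot q)$ phase plane.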
The spring is assumed to satisfy Hooke's law, which says that the force is proportional to the displacement. The friction element (damper) is taken as a nonlinear function $c(\dot q)$, which can model effects such as stiction and viscous drag. The position q and velocity $\dot q$ represent the instantaneous state of the system. We say that this system is a second-order system since the dynamics depend on the first two derivatives of q.

The evolution of the position and velocity can be described using either a time plot or a phase portrait, both of which are shown in Figure 2.2.

Figure 2.2: Illustration of a state model. A state model gives the rate of change of the state as a function of the state. The plot on the left shows the evolution of the state (position and velocity) as a function of time; the plot on the right shows the evolution of the states relative to each other, with the velocity of the state denoted by arrows.

The time plot, on the left, shows the values of the individual states as a function of time. The phase portrait, on the right, shows the vector field for the system, which gives the state velocity (represented as an arrow) at every point in the state space. In addition, we have superimposed the traces of some of the states from different conditions. The phase portrait gives a strong intuitive representation of the equation as a vector field or a flow. While systems of second order (two states) can be represented in this way, unfortunately it is difficult to visualize equations of higher order using this approach.

The differential equation (2.1) is called an autonomous system because there are no external influences. Such a model is natural for use in celestial mechanics because it is difficult to influence the motion of the planets. In many examples, it is useful to model the effects of external disturbances or controlled forces on the system. One way to capture this is to
replace equation (2.1) by

$$m\ddot q + c(\dot q) + kq = u, \qquad (2.2)$$

where u represents the effect of external inputs. The model (2.2) is called a forced or controlled differential equation. It implies that the rate of change of the state can be influenced by the input u(t). Adding the input makes the model richer and allows new questions to be posed. For example, we can examine what influence external disturbances have on the trajectories of a system. Or, in the case where the input variable is something that can be modulated in a controlled way, we can analyze whether it is possible to steer the system from one point in the state space to another through proper choice of the input.

The Heritage of Electrical Engineering

A different view of dynamics emerged from electrical engineering, where the design of electronic amplifiers led to a focus on input/output behavior. A system was considered a device that transforms inputs to outputs, as illustrated in Figure 2.3. Conceptually, an input/output model can be viewed as a giant table of inputs and outputs.

Figure 2.3: Illustration of the input/output view of a dynamical system. The figure on the left shows a detailed circuit diagram for an electronic amplifier; the one on the right is its representation as a block diagram.

Given an input signal u(t) over some interval of time, the model should produce the resulting output y(t). The input/output framework is used in many engineering disciplines, since it allows us to decompose a system into individual components connected through their inputs and outputs. Thus, we can take a complicated system such as a radio or a television and break it down into manageable pieces such as the receiver, demodulator, amplifier and speakers. Each of these pieces has a set of inputs and outputs and, through proper design, these components can be interconnected to form the entire system. The
input/output view is particularly useful for the special class of linear time-invariant systems. This term will be defined more carefully later in this chapter, but roughly speaking a system is linear if the superposition (addition) of two inputs yields an output that is the sum of the outputs that would correspond to individual inputs being applied separately. A system is time-invariant if the output response for a given input does not depend on when that input is applied.

Many electrical engineering systems can be modeled by linear time-invariant systems, and hence a large number of tools have been developed to analyze them. One such tool is the step response, which describes the relationship between an input that changes from zero to a constant value abruptly (a step input) and the corresponding output. As we shall see later in the text, the step response is very useful in characterizing the performance of a dynamical system, and it is often used to specify the desired dynamics. A sample step response is shown in Figure 2.4a.

Another way to describe a linear time-invariant system is to represent it by its response to sinusoidal input signals. This is called the frequency response, and a rich, powerful theory with many concepts and strong, useful results has emerged. The results are based on the theory of complex variables and Laplace transforms. The basic idea behind frequency response is that we can completely characterize the behavior of a system by its steady-state response to sinusoidal inputs.

Figure 2.4: Input/output response of a linear system. The step response (a) shows the output of the system due to an input that changes from 0 to 1 at time t = 5 s. The frequency response (b) shows the amplitude gain and phase change due to a sinusoidal input at different frequencies.

Roughly
decomposing any arbitrary signal into a linear combi nation of sinusoids eg by using the Fourier transform and then using linearity to compute the output by combining the response to the individual frequencies A sample frequency response is shown in Figure 24b The inputoutput view lends itself naturally to experimental determination of system dynamics where a system is characterized by recording its response to particular inputs eg a step or a set of sinusoids over a range of frequencies The Control View When control theory emerged as a discipline in the 1940s the approach to dy namics was strongly influenced by the electrical engineering inputoutput view A second wave of developments in control starting in the late 1950s was inspired by mechanics where the state space perspective was used The emergence of space flight is a typical example where precise control of the orbit of a spacecraft is essential These two points of view gradually merged into what is today the state space representation of inputoutput systems The development of state space models involved modifying the models from mechanics to include external actuators and sensors and utilizing more general forms of equations In control the model given by equation 22 was replaced by dx dt f x u y hx u 23 where x is a vector of state variables u is a vector of control signals and y is a vector of measurements The term dxdt represents the derivative of x with respect to time now considered a vector and f and h are possibly nonlinear mappings of their arguments to vectors of the appropriate dimension For mechanical systems the state consists of the position and velocity of the system so that x q q in the caseofadampedspringmasssystemNotethatinthecontrolformulationwemodel dynamics as firstorder differential equations but we will see that this can capture the dynamics of higherorder differential equations by appropriate definition of the state and the maps f and h Adding inputs and outputs has increased the 
richness of the classical problems and led to many new concepts. For example, it is natural to ask if possible states x can be reached with the proper choice of u (reachability) and if the measurement y contains enough information to reconstruct the state (observability). These topics will be addressed in greater detail in Chapters 6 and 7.

A final development in building the control point of view was the emergence of disturbances and model uncertainty as critical elements in the theory. The simple way of modeling disturbances as deterministic signals like steps and sinusoids has the drawback that such signals can be predicted precisely. A more realistic approach is to model disturbances as random signals. This viewpoint gives a natural connection between prediction and control. The dual views of input/output representations and state space representations are particularly useful when modeling uncertainty, since state models are convenient to describe a nominal model but uncertainties are easier to describe using input/output models (often via a frequency response description). Uncertainty will be a constant theme throughout the text and will be studied in particular detail in Chapter 12.

An interesting observation in the design of control systems is that feedback systems can often be analyzed and designed based on comparatively simple models. The reason for this is the inherent robustness of feedback systems. However, other uses of models may require more complexity and more accuracy. One example is feedforward control strategies, where one uses a model to precompute the inputs that cause the system to respond in a certain way. Another area is system validation, where one wishes to verify that the detailed response of the system performs as it was designed. Because of these different uses of models, it is common to use a hierarchy of models having different complexity and fidelity.

Multidomain Modeling

Modeling is an essential element of many disciplines, but traditions and methods from
individual disciplines can differ from each other, as illustrated by the previous discussion of mechanical and electrical engineering. A difficulty in systems engineering is that it is frequently necessary to deal with heterogeneous systems from many different domains, including chemical, electrical, mechanical and information systems.

To model such multidomain systems, we start by partitioning a system into smaller subsystems. Each subsystem is represented by balance equations for mass, energy and momentum, or by appropriate descriptions of information processing in the subsystem. The behavior at the interfaces is captured by describing how the variables of the subsystem behave when the subsystems are interconnected. These interfaces act by constraining variables within the individual subsystems to be equal (such as mass, energy or momentum fluxes). The complete model is then obtained by combining the descriptions of the subsystems and the interfaces.

Using this methodology it is possible to build up libraries of subsystems that correspond to physical, chemical and informational components. The procedure mimics the engineering approach where systems are built from subsystems that are themselves built from smaller components. As experience is gained, the components and their interfaces can be standardized and collected in model libraries. In practice, it takes several iterations to obtain a good library that can be reused for many applications.

State models or ordinary differential equations are not suitable for component-based modeling of this form because states may disappear when components are connected. This implies that the internal description of a component may change when it is connected to other components. As an illustration, we consider two capacitors in an electrical circuit. Each capacitor has a state corresponding to the voltage across the capacitors, but one of the states will disappear if the capacitors are connected in parallel. A similar situation happens with
two rotating inertias, each of which is individually modeled using the angle of rotation and the angular velocity. Two states will disappear when the inertias are joined by a rigid shaft.

This difficulty can be avoided by replacing differential equations by differential algebraic equations, which have the form

$$F(\dot z, z) = 0,$$

where $z \in \mathbb{R}^n$. A simple special case is

$$\dot x = f(x, y), \qquad g(x, y) = 0, \qquad (2.4)$$

where $z = (x, y)$ and $F = (\dot x - f(x, y),\, g(x, y))$. The key property is that the derivative $\dot z$ is not given explicitly, and there may be pure algebraic relations between the components of the vector z. The model (2.4) captures the examples of the parallel capacitors and the linked rotating inertias. For example, when two capacitors are connected, we simply add the algebraic equation expressing that the voltages across the capacitors are the same.

Modelica is a language that has been developed to support component-based modeling. Differential algebraic equations are used as the basic description, and object-oriented programming is used to structure the models. Modelica is used to model the dynamics of technical systems in domains such as mechanical, electrical, thermal, hydraulic, thermofluid and control subsystems. Modelica is intended to serve as a standard format so that models arising in different domains can be exchanged between tools and users. A large set of free and commercial Modelica component libraries are available and are used by a growing number of people in industry, research and academia. For further information about Modelica, see http://www.modelica.org or Tiller [192].

2.2 State Space Models

In this section we introduce the two primary forms of models that we use in this text: differential equations and difference equations. Both make use of the notions of state, inputs, outputs and dynamics to describe the behavior of a system.

Ordinary Differential Equations

The state of a system is a collection of variables that summarize the past of a system for the purpose of predicting the future. For a physical system the
state is composed of the variables required to account for storage of mass, momentum and energy. A key issue in modeling is to decide how accurately this storage has to be represented. The state variables are gathered in a vector x ∈ R^n called the state vector. The control variables are represented by another vector u ∈ R^p, and the measured signal by the vector y ∈ R^q. A system can then be represented by the differential equation

dx/dt = f(x, u), y = h(x, u), (2.5)

where f : R^n × R^p → R^n and h : R^n × R^p → R^q are smooth mappings. We call a model of this form a state space model. The dimension of the state vector is called the order of the system. The system (2.5) is called time-invariant because the functions f and h do not depend explicitly on time t; there are more general time-varying systems where the functions do depend on time. The model consists of two functions: the function f gives the rate of change of the state vector as a function of state x and control u, and the function h gives the measured values as functions of state x and control u. A system is called a linear state space system if the functions f and h are linear in x and u. A linear state space system can thus be represented by

dx/dt = Ax + Bu, y = Cx + Du, (2.6)

where A, B, C and D are constant matrices. Such a system is said to be linear and time-invariant, or LTI for short. The matrix A is called the dynamics matrix, the matrix B is called the control matrix, the matrix C is called the sensor matrix and the matrix D is called the direct term. Frequently systems will not have a direct term, indicating that the control signal does not influence the output directly. A different form of linear differential equations, generalizing the second-order dynamics from mechanics, is an equation of the form

d^n y/dt^n + a1 d^(n-1)y/dt^(n-1) + ... + an y = u, (2.7)

where t is the independent (time) variable, y(t) is the dependent (output) variable and u(t) is the input. The notation d^k y/dt^k is used to denote the kth derivative of y with respect to t, sometimes also written as y^(k). The controlled differential equation (2.7) is
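A linear time-invariant state space model dx/dt = Ax + Bu, y = Cx + Du can be simulated exactly for piecewise-constant inputs using the matrix exponential. The sketch below uses illustrative parameter values (a damped spring-mass system with k/m = 1, c/m = 0.5) and the standard zero-order-hold discretization; none of the numbers come from the text.

```python
import numpy as np
from scipy.linalg import expm

# LTI state space model dx/dt = A x + B u, y = C x + D u.
# For constant u over a step h, the exact update is
#   x(t+h) = Ad x(t) + Bd u,  Ad = expm(A h),  Bd = A^-1 (Ad - I) B.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

h = 0.01
Ad = expm(A * h)
Bd = np.linalg.solve(A, Ad - np.eye(2)) @ B

x = np.zeros((2, 1))
u = np.array([[1.0]])          # unit step input
for _ in range(2000):          # simulate 20 seconds
    x = Ad @ x + Bd @ u
y = C @ x + D @ u
print(y.item())                # approaches the steady-state gain -C A^-1 B = 1
```

The steady-state value follows from setting dx/dt = 0 in (2.6): x = −A⁻¹Bu, so y = −CA⁻¹Bu.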
said to be an nthorder system This system can be converted into 22 STATE SPACE MODELS 35 state space form by defining x x1 x2 xn1 xn dn1ydtn1 dn2ydtn2 dydt y and the state space equations become d dt x1 x2 xn1 xn a1x1 anxn x1 xn2 xn1 u 0 0 0 y xn With the appropriate definitions of A B C and D this equation is in linear state space form An even more general system is obtained by letting the output be a linear com bination of the states of the system ie y b1x1 b2x2 bnxn du This system can be modeled in state space as d dt x1 x2 x3 xn a1 a2 an1 an 1 0 0 0 0 1 0 0 0 0 1 0 x 1 0 0 0 u y b1 b2 bn x du 28 This particular form of a linear state space system is called reachable canonical form and will be studied in more detail in later chapters Example 21 Balance systems An example of a type of system that can be modeled using ordinary differential equations is the class of balance systems A balance system is a mechanical system in which the center of mass is balanced above a pivot point Some common examples of balance systems are shown in Figure 25 The Segway Personal Transporter Figure 25a uses a motorized platform to stabilize a person standing on top of it When the rider leans forward the transportation device propels itself along the ground but maintains its upright position Another example is a rocket Figure 25b in which a gimbaled nozzle at the bottom of the rocket is used to stabilize the body of the rocket above it Other examples of balance systems include humans or other animals standing upright or a person balancing a stick on their hand 36 CHAPTER 2 SYSTEM MODELING a Segway b Saturn rocket M F p θ m l c Cartpendulum system Figure 25 Balance systems a Segway Personal Transporter b Saturn rocket and c inverted pendulum on a cart Each of these examples uses forces at the bottom of the system to keep it upright Balance systems are a generalization of the springmass system we saw earlier We can write the dynamics for a mechanical system in the general form Mqq Cq q 
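The reachable canonical form (2.8) can be assembled mechanically from the coefficients a1, ..., an and b1, ..., bn. A small sketch, with a hypothetical helper name and an illustrative second-order example y'' + 3y' + 2y = u (not from the text):

```python
import numpy as np

def reachable_canonical_form(a, b, d=0.0):
    """Build (A, B, C, D) in reachable canonical form for
    y^(n) + a1 y^(n-1) + ... + an y = u,  y = b1 x1 + ... + bn xn + d u."""
    n = len(a)
    A = np.zeros((n, n))
    A[0, :] = -np.asarray(a, dtype=float)   # first row: -a1 ... -an
    A[1:, :-1] = np.eye(n - 1)              # shifted identity below it
    Bm = np.zeros((n, 1))
    Bm[0, 0] = 1.0
    Cm = np.asarray(b, dtype=float).reshape(1, n)
    Dm = np.array([[d]])
    return A, Bm, Cm, Dm

A, B, C, D = reachable_canonical_form([3.0, 2.0], [0.0, 1.0])
print(A)   # companion matrix with first row [-3, -2]
print(sorted(np.linalg.eigvals(A).real))   # roots of s^2 + 3s + 2: -2, -1
```

The eigenvalues of the companion matrix A are exactly the roots of the characteristic polynomial s^n + a1 s^(n-1) + ... + an, which is what makes this form convenient.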
Kq Bqu where Mq is the inertia matrix for the system Cq q represents the Coriolis forces as well as the damping Kq gives the forces due to potential energy and Bq describes how the external applied forces couple into the dynamics The specific form of the equations can be derived using Newtonian mechanics Note that each of the terms depends on the configuration of the system q and that these terms are often nonlinear in the configuration variables Figure 25c shows a simplified diagram for a balance system consisting of an inverted pendulum on a cart To model this system we choose state variables that represent the position and velocity of the base of the system p and p and the angle and angular rate of the structure above the base θ and θ We let F represent the force applied at the base of the system assumed to be in the horizontal direction aligned with p and choose the position and angle of the system as outputs With this set of definitions the dynamics of the system can be computed using Newtonian mechanics and have the form M m ml cos θ ml cos θ J ml2 p θ c p ml sin θ θ2 γ θ mgl sin θ F 0 29 where M is the mass of the base m and J are the mass and moment of inertia of the system to be balanced l is the distance from the base to the center of mass of the balanced body c and γ are coefficients of viscous friction and g is the acceleration due to gravity We can rewrite the dynamics of the system in state space form by defining the state as x p θ p θ the input as u F and the output as y p θ If we 22 STATE SPACE MODELS 37 define the total mass and total inertia as Mt M m Jt J ml2 the equations of motion then become d dt p θ p θ p θ mlsθ θ2 mgml2Jtsθcθ c p γlmcθ θ u Mt mml2Jtc2 θ ml2sθcθ θ2 Mtglsθ clcθ p γ Mtm θ lcθu JtMtm mlcθ2 y p θ where we have used the shorthand cθ cos θ and sθ sin θ In many cases the angle θ will be very close to 0 and hence we can use the approximations sin θ θ and cos θ 1 Furthermore if θ is small we can ignore quadratic and higher terms in θ 
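The cart-pendulum dynamics (2.9) are naturally simulated in the mass-matrix form: at each time step, assemble the 2x2 inertia matrix M(q) and solve a linear system for the accelerations. The parameter values below are illustrative, not from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Cart-pendulum (balance system), equation (2.9). theta = 0 is upright.
m, M, J, l, c, gamma, g = 0.2, 1.0, 0.006, 0.3, 0.1, 0.01, 9.8

def dynamics(t, state, F=0.0):
    p, th, pd, thd = state
    # Inertia matrix and right-hand side of (2.9), solved for (pdd, thdd)
    Mi = np.array([[M + m,              m * l * np.cos(th)],
                   [m * l * np.cos(th), J + m * l**2      ]])
    rhs = np.array([F + m * l * np.sin(th) * thd**2 - c * pd,
                    m * g * l * np.sin(th) - gamma * thd])
    pdd, thdd = np.linalg.solve(Mi, rhs)
    return [pd, thd, pdd, thdd]

# Released slightly off upright with no applied force: the angle grows,
# confirming that the upright equilibrium is unstable without feedback.
sol = solve_ivp(dynamics, [0, 1.0], [0.0, 0.01, 0.0, 0.0], rtol=1e-8)
print(sol.y[1, -1] > 0.011)    # True
```

Solving M(q) q̈ = rhs at each step avoids symbolically inverting the configuration-dependent inertia matrix.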
Substituting these approximations into our equations, we see that we are left with a linear state space equation

d/dt [p; θ; ṗ; θ̇] = [0 0 1 0; 0 0 0 1; 0 m²l²g/μ −cJt/μ −γlm/μ; 0 Mt mgl/μ −clm/μ −γMt/μ] [p; θ; ṗ; θ̇] + [0; 0; Jt/μ; lm/μ] u, y = [1 0 0 0; 0 1 0 0] x,

where μ = Mt Jt − m²l².

Example 2.2 Inverted pendulum. A variation of the previous example is one in which the location of the base p does not need to be controlled. This happens, for example, if we are interested only in stabilizing a rocket's upright orientation, without worrying about the location of the base of the rocket. The dynamics of this simplified system are given by

d/dt [θ; θ̇] = [θ̇; (mgl/Jt) sin θ − (γ/Jt) θ̇ + (l/Jt) cos θ · u], y = θ, (2.10)

where γ is the coefficient of rotational friction, Jt = J + ml² and u is the force applied at the base. This system is referred to as an inverted pendulum.

Difference Equations

In some circumstances it is more natural to describe the evolution of a system at discrete instants of time rather than continuously in time. If we refer to each of these times by an integer k = 0, 1, 2, ..., then we can ask how the state of the system changes for each k. Just as in the case of differential equations, we define the state to be those sets of variables that summarize the past of the system for the purpose of predicting its future. Systems described in this manner are referred to as discrete-time systems. The evolution of a discrete-time system can be written in the form

x[k+1] = f(x[k], u[k]), y[k] = h(x[k], u[k]), (2.11)

where x[k] ∈ R^n is the state of the system at time k (an integer), u[k] ∈ R^p is the input and y[k] ∈ R^q is the output. As before, f and h are smooth mappings of the appropriate dimension. We call equation (2.11) a difference equation since it tells us how x[k+1] differs from x[k]. The state x[k] can be either a scalar- or a vector-valued quantity; in the case of the latter we write xj[k] for the value of the jth state at time k. Just as in the case of differential equations, it is often the case that the equations are linear in the state and input, in which case we can describe the system by

x[k+1] = A x[k] + B u[k], y[k] = C x[k] + D u[k].
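The linearized balance-system matrices can be assembled numerically and checked for instability of the upright equilibrium. The parameter values below are illustrative (not from the text), and the matrix entries follow the linearization about θ = 0 with μ = Mt Jt − m²l².

```python
import numpy as np

# Linearized cart-pendulum about the upright equilibrium.
# Illustrative parameters; m, M, J, l, c, gamma, g as defined in the text.
m, M, J, l = 0.2, 1.0, 0.006, 0.3
c, gamma, g = 0.1, 0.01, 9.8
Mt, Jt = M + m, J + m * l**2
mu = Mt * Jt - (m * l)**2

A = np.array([
    [0.0, 0.0,                  1.0,            0.0],
    [0.0, 0.0,                  0.0,            1.0],
    [0.0, m**2 * l**2 * g / mu, -c * Jt / mu,   -gamma * l * m / mu],
    [0.0, Mt * m * g * l / mu,  -c * l * m / mu, -gamma * Mt / mu],
])
B = np.array([[0.0], [0.0], [Jt / mu], [l * m / mu]])

# The upright equilibrium is unstable: A has an eigenvalue with
# positive real part, so feedback is needed to balance the system.
print(max(np.linalg.eigvals(A).real) > 0)   # True
```

This is the standard starting point for the controller designs treated in later chapters: stabilization amounts to moving that right-half-plane eigenvalue into the left half plane.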
As before we refer to the matrices A B C and D as the dynamics matrix the control matrix the sensor matrix and the direct term The solution of a linear difference equation with initial condition x0 and input u0 uT is given by xk Ak x0 j0k1 Akj1 B uj k 0 yk C Ak x0 j0k1 C Akj1 B uj D uk Difference equations are also useful as an approximation of differential equations as we will show later Example 23 Predatorprey As an example of a discretetime system consider a simple model for a predatorprey system The predatorprey problem refers to an ecological system in which we have two species one of which feeds on the other This type of system has been studied for decades and is known to exhibit interesting dynamics Figure 26 shows a historical record taken over 90 years for a population of lynxes versus a population of hares 142 As can been seen from the graph the annual records of the populations of each species are oscillatory in nature A simple model for this situation can be constructed using a discretetime model by keeping track of the rate of births and deaths of each species Letting H represent the population of hares and L represent the population of lynxes we can describe the state in terms of the populations at discrete periods of time Letting k be the 22 STATE SPACE MODELS 39 1845 160 140 120 100 80 60 40 20 1855 1865 1875 1885 1895 Hare Lynx 1905 1915 1925 1935 Figure 26 Predator versus prey The photograph on the left shows a Canadian lynx and a snowshoe hare the lynxs primary prey The graph on the right shows the populations of hares and lynxes between 1845 and 1935 in a section of the Canadian Rockies 142 The data were collected on an annual basis over a period of 90 years Photograph copyright Tom and Pat Leeson discretetime index eg the month number we can write Hk 1 Hk bruHk aLkHk Lk 1 Lk cLkHk d f Lk 213 where bru is the hare birth rate per unit period and as a function of the food supply u d f is the lynx mortality rate and a and c are the interaction 
coefficients The interaction term aLkHk models the rate of predation which is assumed to be proportional to the rate at which predators and prey meet and is hence given by the product of the population sizes The interaction term cLkHk in the lynx dynamics has a similar form and represents the rate of growth of the lynx population This model makes many simplifying assumptionssuch as the fact that hares decrease in number only through predation by lynxesbut it often is sufficient to answer basic questions about the system To illustrate the use of this system we can compute the number of lynxes and hares at each time point from some initial population This is done by starting with x0 H0 L0 and then using equation 213 to compute the populations in the following period By iterating this procedure we can generate the population over time The output of this process for a specific choice of parameters and initial conditions is shown in Figure 27 While the details of the simulation are different from the experimental data to be expected given the simplicity of our assumptions we see qualitatively similar trends and hence we can use the model to help explore the dynamics of the system Example 24 Email server The IBM Lotus server is an collaborative software system that administers users email documents and notes Client machines interact with end users to provide access to data and applications The server also handles other administrative tasks In the early development of the system it was observed that the performance was poor when the central processing unit CPU was overloaded because of too many service requests and mechanisms to control the load were therefore introduced The interaction between the client and the server is in the form of remote pro 40 CHAPTER 2 SYSTEM MODELING 1850 1860 1870 1880 1890 1900 1910 1920 0 50 100 150 200 250 Year Population Hares Lynxes Figure 27 Discretetime simulation of the predatorprey model 213 Using the parameters a c 0014 bru 06 and d 
07 in equation 213 the period and magnitude of the lynx and hare population cycles approximately match the data in Figure 26 cedure calls RPCs The server maintains a log of statistics of completed requests The total number of requests being served called RIS RPCs in server is also measured The load on the server is controlled by a parameter called MaxUsers which sets the total number of client connections to the server This parameter is controlled by the system administrator The server can be regarded as a dynami cal system with MaxUsers as the input and RIS as the output The relationship between input and output was first investigated by exploring the steadystate per formance and was found to be linear In 97 a dynamic model in the form of a firstorder difference equation is used to capture the dynamic behavior of this system Using system identification techniques they construct a model of the form yk 1 ayk buk where u MaxUsers MaxUsers and y RIS RIS The parameters a 043 and b 047 are parameters that describe the dynamics of the system around the operating point and MaxUsers 165 and RIS 135 represent the nominal operating point of the system The number of requests was averaged over a sampling period of 60 s Simulation and Analysis State space models can be used to answer many questions One of the most common as we have seen in the previous examples involves predicting the evolution of the system state from a given initial condition While for simple models this can be done in closed form more often it is accomplished through computer simulation One can also use state space models to analyze the overall behavior of the system without making direct use of simulation Consider again the damped springmass system from Section 21 but this time with an external force applied as shown in Figure 28 We wish to predict the motion of the system for a periodic forcing function with a given initial condition and determine the amplitude frequency and decay rate of the resulting 
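The discrete-time predator-prey update (2.13) is simple to iterate directly. The sketch below uses the parameter values quoted for the Figure 2.7 simulation (a = c = 0.014, br = 0.6, df = 0.7); the initial populations are illustrative, since the text does not specify them.

```python
# Discrete-time predator-prey model, equation (2.13):
#   H[k+1] = H[k] + br * H[k] - a * L[k] * H[k]
#   L[k+1] = L[k] + c * L[k] * H[k] - df * L[k]
a, c, br, df = 0.014, 0.014, 0.6, 0.7

def step(H, L):
    return H + br * H - a * L * H, L + c * L * H - df * L

H, L = 20.0, 20.0              # illustrative initial populations
H, L = step(H, L)
print(round(H, 3), round(L, 3))   # 26.4 11.6 (one hand-checkable update)

# Iterating the map generates the oscillatory population cycles
# discussed in the text.
hist = [(H, L)]
for _ in range(90):
    H, L = step(H, L)
    hist.append((H, L))
```

The first update is easy to verify by hand: H1 = 20 + 0.6·20 − 0.014·20·20 = 26.4 and L1 = 20 + 0.014·20·20 − 0.7·20 = 11.6.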
motion.

Figure 2.8: A driven spring-mass system with damping. Here we use a linear damping element with coefficient of viscous friction c. The mass is driven with a sinusoidal force of amplitude A.

We choose to model the system with a linear ordinary differential equation. Using Hooke's law to model the spring and assuming that the damper exerts a force that is proportional to the velocity of the system, we have

m q̈ + c q̇ + k q = u, (2.14)

where m is the mass, q is the displacement of the mass, c is the coefficient of viscous friction, k is the spring constant and u is the applied force. In state space form, using x = (q, q̇) as the state and choosing y = q as the output, we have

dx/dt = (x2, −(c/m) x2 − (k/m) x1 + u/m), y = x1.

We see that this is a linear second-order differential equation with one input u and one output y. We now wish to compute the response of the system to an input of the form u = A sin ωt. Although it is possible to solve for the response analytically, we instead make use of a computational approach that does not rely on the specific form of this system. Consider the general state space system

dx/dt = f(x, u).

Given the state x at time t, we can approximate the value of the state at a short time h > 0 later by assuming that the rate of change f(x, u) is constant over the interval t to t + h. This gives

x(t + h) = x(t) + h f(x(t), u(t)). (2.15)

Iterating this equation, we can thus solve for x as a function of time. This approximation is known as Euler integration, and is in fact a difference equation if we let h represent the time increment and write x[k] = x(kh). Although modern simulation tools such as MATLAB and Mathematica use more accurate methods than Euler integration, they still have some of the same basic trade-offs. Returning to our specific example, Figure 2.9 shows the results of computing x(t) using equation (2.15), along with the analytical computation.

Figure 2.9: Simulation of the forced spring-mass system with different simulation time constants. The dashed line represents the analytical solution; the solid lines represent the approximate solution via the method of Euler integration, using decreasing step sizes (h = 1, h = 0.5, h = 0.1).

We see that as h gets smaller, the computed solution converges to the exact solution. The form of the solution is also worth noticing: after an initial transient, the system settles into a periodic motion. The portion of the response after the transient is called the steady-state response to the input. In addition to generating simulations, models can also be used to answer other types of questions. Two that are central to the methods described in this text concern the stability of an equilibrium point and the input/output frequency response. We illustrate these two computations through the examples below and return to the general computations in later chapters. Returning to the damped spring-mass system, the equations of motion with no input forcing are given by

dx/dt = (x2, −(c/m) x2 − (k/m) x1), (2.16)

where x1 is the position of the mass (relative to the rest position) and x2 is its velocity. We wish to show that if the initial state of the system is away from the rest position, the system will return to the rest position eventually (we will later define this situation to mean that the rest position is asymptotically stable). While we could heuristically show this by simulating many, many initial conditions, we seek instead to prove that this is true for any initial condition. To do so, we construct a function V : R^n → R that maps the system state to a positive real number. For mechanical systems a convenient choice is the energy of the system,

V(x) = (1/2) k x1² + (1/2) m x2². (2.17)

If we look at the time derivative of the energy function, we see that

dV/dt = k x1 ẋ1 + m x2 ẋ2 = k x1 x2 + m x2 (−(c/m) x2 − (k/m) x1) = −c x2²,

which is always either negative or zero. Hence V(x(t)) is never increasing and, using a bit of analysis that we will see formally later, the individual states must remain
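The energy argument for the damped spring-mass system can be checked numerically: along any simulated trajectory of (2.16), the energy function V from (2.17) should be nonincreasing and should decay toward zero. The parameter values and tolerances below are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Damped spring-mass system (2.16) and energy function V (2.17).
m, c, k = 1.0, 0.5, 2.0

def f(t, x):
    return [x[1], -(c / m) * x[1] - (k / m) * x[0]]

sol = solve_ivp(f, [0, 20], [1.0, 0.0], max_step=0.01,
                rtol=1e-8, atol=1e-10)
V = 0.5 * k * sol.y[0]**2 + 0.5 * m * sol.y[1]**2

# dV/dt = -c x2^2 <= 0, so V never increases (up to solver tolerance)
print(np.all(np.diff(V) <= 1e-6))   # True
print(V[-1] < 1e-3 * V[0])          # True: the state decays toward the origin
```

This is of course only a spot check for one initial condition; the Lyapunov argument in the text is what establishes the property for every initial condition.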
bounded If we wish to show that the states eventually return to the origin we must use a slightly more detailed analysis Intuitively we can reason as follows suppose that for some period of time V xt stops decreasing Then it must be true that V xt 0 which in turn implies that x2t 0 for that same period In that case x2t 0 and we can substitute into the second line of equation 216 to obtain 0 x2 c m x2 k m x1 k m x1 Thus we must have that x1 also equals zero and so the only time that V xt can stop decreasing is if the state is at the origin and hence this system is at its rest position Since we know that V xt is never increasing because V 0 we therefore conclude that the origin is stable for any initial condition This type of analysis called Lyapunov stability analysis is considered in detail in Chapter 4 It shows some of the power of using models for the analysis of system properties Another type of analysis that we can perform with models is to compute the output of a system to a sinusoidal input We again consider the springmass system but this time keeping the input and leaving the system in its original form m q c q kq u 218 We wish to understand how the system responds to a sinusoidal input of the form ut A sin ωt We will see how to do this analytically in Chapter 6 but for now we make use of simulations to compute the answer We first begin with the observation that if qt is the solution to equation 218 with input ut then applying an input 2ut will give a solution 2qt this is easily verified by substitution Hence it suffices to look at an input with unit magnitude A 1 A second observation which we will prove in Chapter 5 is that the long term response of the system to a sinusoidal input is itself a sinusoid at the same frequency and so the output has the form qt gω sinωt ϕω where gω is called the gain of the system and ϕω is called the phase or phase offset To compute the frequency response numerically we can simulate the system at a set of frequencies ω1 ωN and 
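The sinusoid-sweep procedure just described can be sketched directly: drive the system at each frequency, discard the transient, and measure the steady-state output amplitude. Parameter values, simulation horizon, and the set of test frequencies below are illustrative choices, not from the text.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Estimate the gain g(w) of the spring-mass system m q'' + c q' + k q = u
# by simulating its response to unit-magnitude sinusoids u = sin(w t).
m, c, k = 1.0, 0.5, 2.0

def gain(w):
    f = lambda t, x: [x[1], (np.sin(w * t) - c * x[1] - k * x[0]) / m]
    sol = solve_ivp(f, [0, 100], [0.0, 0.0], max_step=0.01, rtol=1e-6)
    tail = sol.t > 80.0            # discard the transient, keep steady state
    return sol.y[0, tail].max()    # amplitude of the steady-state sinusoid

w0 = np.sqrt(k / m)                # undamped natural frequency
gains = {w: gain(w) for w in (0.5, w0, 4.0)}
for w, g in gains.items():
    print(round(w, 2), round(g, 3))
# Analytically g(w) = 1 / sqrt((k - m w^2)^2 + (c w)^2), peaking near w0.
```

Plotting the measured gains against frequency reproduces the resonant shape shown in the Figure 2.10 frequency response.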
plot the gain and phase at each of these frequencies An example of this type of computation is shown in Figure 210 44 CHAPTER 2 SYSTEM MODELING 0 10 20 30 40 50 4 2 0 2 4 Output y Time s 10 1 10 0 10 1 10 2 10 1 10 0 10 1 Gain log scale Frequency radsec log scale Figure 210 A frequency response gain only computed by measuring the response of individual sinusoids The figure on the left shows the response of the system as a function of time to a number of different unit magnitude inputs at different frequencies The figure on the right shows this same data in a different way with the magnitude of the response plotted as a function of the input frequency The filled circles correspond to the particular frequencies shown in the time responses 23 Modeling Methodology To deal with large complex systems it is useful to have different representations of the system that capture the essential features and hide irrelevant details In all branches of science and engineering it is common practice to use some graphical description of systems called schematic diagrams They can range from stylistic pictures to drastically simplified standard symbols These pictures make it possible to get an overall view of the system and to identify the individual components ExamplesofsuchdiagramsareshowninFigure211Schematicdiagramsareuseful because they give an overall picture of a system showing different subprocesses and their interconnection and indicating variables that can be manipulated and signals that can be measured Block Diagrams A special graphical representation called a block diagram has been developed in control engineering The purpose of a block diagram is to emphasize the information flow and to hide details of the system In a block diagram different process elements are shown as boxes and each box has inputs denoted by lines with arrows pointing toward the box and outputs denoted by lines with arrows going out of the box The inputs denote the variables that influence a process and 
the outputs denote the signals that we are interested in or signals that influence other subsystems Block diagrams can also be organized in hierarchies where individual blocks may themselves contain more detailed block diagrams Figure 212 shows some of the notation that we use for block diagrams Signals are represented as lines with arrows to indicate inputs and outputs The first diagram is the representation for a summation of two signals An inputoutput response is represented as a rectangle with the system name or mathematical description in 23 MODELING METHODOLOGY 45 Generator symbol Transformer symbol Bus coding Bus symbol 1 2 Tie line connecting with neighbor system 3 4 Line symbol 5 6 Load symbol a Power electronics b Cell biology c Process control d Networking Figure 211 Schematic diagrams for different disciplines Each diagram is used to illustrate the dynamics of a feedback system a electrical schematics for a power system 132 b a biological circuit diagram for a synthetic clock circuit 21 c a process diagram for a distillation column 178 and d a Petri net description of a communication protocol a Summing junction b Gain block c Saturation d Nonlinear map e Integrator f Inputoutput system Figure 212 Standard block diagram elements The arrows indicate the the inputs and outputs of each element with the mathematical operation corresponding to the blocked labeled at the output The system block f represents the full inputoutput response of a dynamical system 46 CHAPTER 2 SYSTEM MODELING Figure 213 A block diagram representation of the flight control system for an insect flying against the wind The mechanical portion of the model consists of the rigidbody dynamics of the fly the drag due to flying through the air and the forces generated by the wings The motion of the body causes the visual environment of the fly to change and this information is then used to control the motion of the wings through the sensory motor system closing the loop the block Two special 
cases are a proportional gain which scales the input by a multiplicative factor and an integrator which outputs the integral of the input signal Figure 213 illustrates the use of a block diagram in this case for modeling the flight response of a fly The flight dynamics of an insect are incredibly intricate involving careful coordination of the muscles within the fly to maintain stable flight in response to external stimuli One known characteristic of flies is their ability to fly upwind by making use of the optical flow in their compound eyes as a feedback mechanism Roughly speaking the fly controls its orientation so that the point of contraction of the visual field is centered in its visual field To understand this complex behavior we can decompose the overall dynamics of the system into a series of interconnected subsystems or blocks Referring to Figure 213 we can model the insect navigation system through an interconnection of five blocks The sensory motor system a takes the information from the visual system e and generates muscle commands that attempt to steer the fly so that the point of contraction is centered These muscle commands are converted into forces through the flapping of the wings b and the resulting aerodynamic forces that are produced The forces from the wings are combined with the drag on the fly d to produce a net force on the body of the fly The wind velocity enters through the drag aerodynamics Finally the body dynamics c describe how the fly translates and rotates as a function of the net forces that are applied to it The insect position speed and orientation are fed back to the drag aerodynamics and vision system blocks as inputs Each of the blocks in the diagram can itself be a complicated subsystem For example the visual system of a fruit fly consists of two complicated compound eyes with about 700 elements per eye and the sensory motor system has about 200000 23 MODELING METHODOLOGY 47 neurons that are used to process information A more 
detailed block diagram of the insect flight control system would show the interconnections between these elements but here we have used one block to represent how the motion of the fly affects the output of the visual system and a second block to represent how the visual field is processed by the flys brain to generate muscle commands The choice of the level of detail of the blocks and what elements to separate into different blocks often depends on experience and the questions that one wants to answer using the model One of the powerful features of block diagrams is their ability to hide information about the details of a system that may not be needed to gain an understanding of the essential dynamics of the system Modeling from Experiments Since control systems are provided with sensors and actuators it is also possible to obtain models of system dynamics from experiments on the process The models are restricted to inputoutput models since only these signals are accessible to experiments but modeling from experiments can also be combined with modeling from physics through the use of feedback and interconnection A simple way to determine a systems dynamics is to observe the response to a step change in the control signal Such an experiment begins by setting the control signal to a constant value then when steady state is established the control signal is changed quickly to a new level and the output is observed The experiment gives the step response of the system and the shape of the response gives useful information about the dynamics It immediately gives an indication of the response time and it tells if the system is oscillatory or if the response is monotone Example 25 Springmass system Consider the springmass system from Section 21 whose dynamics are given by mq c q kq u 219 We wish to determine the constants m c and k by measuring the response of the system to a step input of magnitude F0 We will show in Chapter 6 that when c2 4km the step response for this 
system from the rest configuration is given by

q(t) = (F0/k) (1 − e^(−ct/(2m)) sin(ωd t + φ)), ωd = sqrt(4km − c²)/(2m), φ = tan⁻¹(sqrt(4km − c²)/c).

From the form of the solution, we see that the form of the response is determined by the parameters of the system. Hence, by measuring certain features of the step response, we can determine the parameter values. Figure 2.14 shows the response of the system to a step of magnitude F0 = 20 N, along with some measurements. We start by noting the steady-state position.

Figure 2.14: Step response for a spring-mass system. The magnitude of the step input is F0 = 20 N. The period of oscillation T is determined by looking at the time between two subsequent local maxima in the response. The period, combined with the steady-state value and the relative decrease between local maxima, can be used to estimate the parameters in a model of the system.

Scaling can also improve the numerical conditioning of the model to allow faster and more accurate simulations. The procedure of scaling is straightforward: choose units for each independent variable and introduce new variables by dividing the variables by the chosen normalization unit. We illustrate the procedure with two examples.

Example 2.6 Spring-mass system. Consider again the spring-mass system introduced earlier. Neglecting the damping, the system is described by

m q̈ + k q = u.

The model has two parameters, m and k. To normalize the model we introduce dimension-free variables x = q/l and τ = ω0 t, where ω0 = sqrt(k/m) and l is the chosen length scale. We scale force by m l ω0² and introduce v = u/(m l ω0²). The scaled equation then becomes

d²x/dτ² = d²(q/l)/d(ω0 t)² = (1/(m l ω0²)) (−kq + u) = −x + v,

which is the normalized undamped spring-mass system. Notice that the normalized model has no parameters, while the original model had two parameters, m and k. Introducing the scaled, dimension-free state variables z1 = x = q/l and z2 = dx/dτ = q̇/(l ω0), the model can be written as

d/dτ [z1; z2] = [0 1; −1 0] [z1; z2] + [0; v].

This simple linear equation describes the dynamics of any spring-mass system, independent of the particular
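The claim that normalization removes the parameters can be verified numerically: simulate two undamped spring-mass systems with different m and k, rescale each trajectory to the dimension-free variables x = q/l and τ = ω0 t, and check that the curves coincide with x(τ) = cos τ. The parameter pairs below are invented for the check.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Normalization check for the undamped spring-mass system m q'' + k q = 0:
# in scaled variables, every such system traces the same curve x(tau) = cos(tau).
def normalized_trajectory(m, k, l=1.0):
    w0 = np.sqrt(k / m)
    f = lambda t, x: [x[1], -(k / m) * x[0]]
    t_eval = np.linspace(0.0, 10.0 / w0, 200)      # tau = w0 t in [0, 10]
    sol = solve_ivp(f, [0.0, 10.0 / w0], [l, 0.0],
                    t_eval=t_eval, rtol=1e-9, atol=1e-12)
    return sol.y[0] / l                            # x = q/l versus tau

xa = normalized_trajectory(m=1.0, k=2.0)
xb = normalized_trajectory(m=5.0, k=0.4)
print(np.allclose(xa, xb, atol=1e-5))                                  # True
print(np.allclose(xa, np.cos(np.linspace(0.0, 10.0, 200)), atol=1e-5)) # True
```

Two systems with very different physical parameters produce identical dimension-free trajectories, which is exactly the point of the scaling.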
parameters and hence gives us insight into the fundamental dynamics of this oscillatory system To recover the physical frequency of oscillation or its magnitude we must invert the scaling we have applied Example 27 Balance system Consider the balance system described in Section 21 Neglecting damping by putting c0 and γ0 in equation 29 the model can be written as Mmd2qdt2 ml cosθ d2θdt2 ml sinθ dqdt2 F ml cosθ d2qdt2 Jml2d2θdt2 mgl sinθ 0 Let ω0 sqrtmglJml2 choose the length scale as l let the time scale be 1ω0 choose the force scale as Mmlω02 and introduce the scaled variables τ ω0t xql and uFMmlω02 The equations then become d2xdτ2 α cosθ d2θdτ2 α sinθ dθdτ2 u β cosθ d2xdτ2 d2θdτ2 sinθ 0 where α mMm and β ml2Jml2 Notice that the original model has five parameters m M J l and g but the normalized model has only two parameters a Static uncertainty b Uncertainty lemon c Model uncertainty Figure 215 Characterization of model uncertainty Uncertainty of a static system is illustrated in a where the solid line indicates the nominal inputoutput relationship and the dashed lines indicate the range of possible uncertainty The uncertainty lemon 83 in b is one way to capture uncertainty in dynamical systems emphasizing that a model is valid only in some amplitude and frequency ranges In c a model is represented by a nominal model M and another model Δ representing the uncertainty analogous to the representation of parameter uncertainty 24 MODELING EXAMPLES 51 aging that can cause changes or drift in the systems There are also highfrequency effects a resistor will no longer be a pure resistance at very high frequencies and a beam has stiffness and will exhibit additional dynamics when subject to high frequency excitation The uncertainty lemon 83 shown in Figure 215b is one way to conceptualize the uncertainty of a system It illustrates that a model is valid only in certain amplitude and frequency ranges We will introduce some formal tools for representing uncertainty in Chapter 
12 using figures such as Figure 215c These tools make use of the concept of a transfer function which describes the frequency response of an inputoutput system For now we simply note that one should always be careful to recognize the limits of a model and not to make use of models outside their range of applicability For example one can describe the uncertainty lemon and then check to make sure that signals remain in this region In early analog computing a system was simulated using operational amplifiers and it was customary to give alarms when certain signal levels were exceeded Similar features can be included in digital simulation 24 Modeling Examples In this section we introduce additional examples that illustrate some of the different types of systems for which one can develop differential equation and difference equation models These examples are specifically chosen from a range of different fields to highlight the broad variety of systems to which feedback and control concepts can be applied A more detailed set of applications that serve as running examples throughout the text are given in the next chapter Motion Control Systems Motion control systems involve the use of computation and feedback to control the movement of a mechanical system Motion control systems range from nanopo sitioning systems atomic force microscopes adaptive optics to control systems for the readwrite heads in a disk drive of a CD player to manufacturing systems transfer machines and industrial robots to automotive control systems antilock brakes suspension control traction control to air and space flight control systems airplanes satellites rockets and planetary rovers Example 28 Vehicle steeringthe bicycle model A common problem in motion control is to control the trajectory of a vehicle through an actuator that causes a change in the orientation A steering wheel on an automobile and the front wheel of a bicycle are two examples but similar dynamics occur in the steering of ships 
or control of the pitch dynamics of an aircraft. In many cases we can understand the basic behavior of these systems through the use of a simple model that captures the basic kinematics of the system.

Consider a vehicle with two wheels as shown in Figure 2.16. For the purpose of steering we are interested in a model that describes how the velocity of the vehicle

Figure 2.16: Vehicle steering dynamics. The left figure shows an overhead view of a vehicle with four wheels. The wheel base is b, and the center of mass is at a distance a forward of the rear wheels. By approximating the motion of the front and rear pairs of wheels by a single front wheel and a single rear wheel, we obtain an abstraction called the bicycle model, shown on the right. The steering angle is δ, and the velocity at the center of mass has the angle α relative to the length axis of the vehicle. The position of the vehicle is given by (x, y) and the orientation (heading) by θ.

depends on the steering angle δ. To be specific, consider the velocity v at the center of mass, a distance a from the rear wheel, and let b be the wheel base, as shown in Figure 2.16. Let x and y be the coordinates of the center of mass, θ the heading angle and α the angle between the velocity vector v and the centerline of the vehicle. Since b = ra tan δ and a = ra tan α, it follows that tan α = (a/b) tan δ, and we get the following relation between α and the steering angle δ:

  α(δ) = arctan((a tan δ)/b).    (2.23)

Assume that the wheels are rolling without slip and that the velocity of the rear wheel is v0. The vehicle speed at its center of mass is v = v0/cos α, and we find that the motion of this point is given by

  dx/dt = v cos(α + θ) = v0 cos(α + θ)/cos α,
  dy/dt = v sin(α + θ) = v0 sin(α + θ)/cos α.    (2.24)

To see how the angle θ is influenced by the steering angle, we observe from Figure 2.16 that the vehicle rotates with the angular velocity v0/ra around the point O. Hence

  dθ/dt = v0/ra = (v0/b) tan δ.    (2.25)

Equations (2.23)–(2.25) can be used to model an automobile under the assumptions that there is no slip between the wheels
and the road and that the two front wheels can be approximated by a single wheel at the center of the car. The assumption of no slip can be relaxed by adding an extra state variable, giving a more realistic model. Such a model also describes the steering dynamics of ships as well as the pitch dynamics of aircraft and missiles. It is also possible to choose coordinates so that the reference point is at the rear wheels (corresponding to setting α = 0), a model often referred to as the Dubins car [66]. Figure 2.16 represents the situation when the vehicle moves forward and has front-wheel steering. The case when the vehicle reverses is obtained by changing the sign of the velocity, which is equivalent to a vehicle with rear-wheel steering.

Figure 2.17: Vectored thrust aircraft: (a) Harrier "jump jet", (b) simplified model. The Harrier AV-8B military aircraft (a) redirects its engine thrust downward so that it can "hover" above the ground. Some air from the engine is diverted to the wing tips to be used for maneuvering. As shown in (b), the net thrust on the aircraft can be decomposed into a horizontal force F1 and a vertical force F2 acting at a distance r from the center of mass.

Example 2.9 (Vectored thrust aircraft). Consider the motion of a vectored thrust aircraft, such as the Harrier "jump jet" shown in Figure 2.17a. The Harrier is capable of vertical takeoff by redirecting its thrust downward and through the use of smaller maneuvering thrusters located on its wings. A simplified model of the Harrier is shown in Figure 2.17b, where we focus on the motion of the vehicle in a vertical plane through the wings of the aircraft. We resolve the forces generated by the main downward thruster and the maneuvering thrusters as a pair of forces F1 and F2 acting at a distance r below the aircraft (determined by the geometry of the thrusters). Let (x, y, θ) denote the position and orientation of the center of mass of the aircraft. Let m be the mass of the vehicle, J the moment of inertia, g the
gravitational constant and c the damping coefficient. Then the equations of motion for the vehicle are given by

  m ẍ = F1 cos θ − F2 sin θ − c ẋ,
  m ÿ = F1 sin θ + F2 cos θ − mg − c ẏ,
  J θ̈ = r F1.    (2.26)

It is convenient to redefine the inputs so that the origin is an equilibrium point of the system with zero input. Letting u1 = F1 and u2 = F2 − mg, the equations become

  m ẍ = −mg sin θ − c ẋ + u1 cos θ − u2 sin θ,
  m ÿ = mg (cos θ − 1) − c ẏ + u1 sin θ + u2 cos θ,
  J θ̈ = r u1.    (2.27)

These equations describe the motion of the vehicle as a set of three coupled second-order differential equations.

Figure 2.18: Schematic diagram of a queuing system. Messages arrive at rate λ and are stored in a queue. Messages are processed and removed from the queue at rate μ. The average size of the queue is given by x ∈ ℝ.

Information Systems

Information systems range from communication systems like the Internet to software systems that manipulate data or manage enterprise-wide resources. Feedback is present in all these systems, and designing strategies for routing, flow control and buffer management is a typical problem. Many results in queuing theory emerged from design of telecommunication systems and later from development of the Internet and computer communication systems [32, 127, 177]. Management of queues to avoid congestion is a central problem, and we will therefore start by discussing the modeling of queuing systems.

Example 2.10 (Queuing systems). A schematic picture of a simple queue is shown in Figure 2.18. Requests arrive and are then queued and processed. There can be large variations in arrival rates and service rates, and the queue length builds up when the arrival rate is larger than the service rate. When the queue becomes too large, service is denied using an admission control policy. The system can be modeled in many different ways. One way is to model each incoming request, which leads to an event-based model where the state is an integer that represents the queue length.
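The event-based view just described can be sketched with a short discrete-event simulation. The code below is an illustration of ours, not from the text: it assumes Poisson arrivals and services (exponential inter-event times), and the function name and parameters are our own.

```python
import random

def simulate_queue(lam, mu, t_end, seed=1):
    """Event-based simulation of a single-server queue.

    The state x is an integer queue length; arrivals (rate lam) and
    service completions (rate mu) are modeled as random events with
    exponentially distributed inter-event times.
    """
    rng = random.Random(seed)
    t, x = 0.0, 0          # current time and queue length
    area = 0.0             # time integral of the queue length
    while t < t_end:
        t_arrive = rng.expovariate(lam)
        # No service event is possible when the queue is empty.
        t_serve = rng.expovariate(mu) if x > 0 else float("inf")
        dt = min(t_arrive, t_serve)
        area += x * dt
        t += dt
        x = x + 1 if t_arrive < t_serve else x - 1
    return area / t        # time-averaged queue length

# With lam < mu the queue stays bounded; classical M/M/1 theory
# predicts an average length of rho/(1 - rho) with rho = lam/mu.
avg = simulate_queue(lam=0.5, mu=1.0, t_end=20000)
```

For λ = 0.5 and μ = 1 the time-averaged length settles near the theoretical value ρ/(1 − ρ) = 1; raising λ toward μ makes the average length grow rapidly, which is the behavior the flow model introduced next captures with a single differential equation.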
The queue changes when a request arrives or a request is serviced. The statistics of arrival and servicing are typically modeled as random processes. In many cases it is possible to determine statistics of quantities like queue length and service time, but the computations can be quite complicated.

A significant simplification can be obtained by using a flow model. Instead of keeping track of each request, we instead view service and requests as flows, similar to what is done when replacing molecules by a continuum when analyzing fluids.

Figure 2.19: Queuing dynamics: (a) steady-state queue size, (b) overload condition. Panel (a) shows the steady-state queue length xe as a function of the service rate excess λ/μmax; panel (b) shows the behavior of the queue length when there is a temporary overload in the system. The solid line shows a realization of an event-based simulation, and the dashed line shows the behavior of the flow model (2.29).

Assuming that the average queue length x is a continuous variable and that arrivals and services are flows with rates λ and μ, the system can be modeled by the first-order differential equation

  dx/dt = λ − μ = λ − μmax f(x),  x ≥ 0,    (2.28)

where μmax is the maximum service rate and f(x) is a number between 0 and 1 that describes the effective service rate as a function of the queue length. It is natural to assume that the effective service rate depends on the queue length because larger queues require more resources. In steady state we have f(x) = λ/μmax, and we assume that the queue length goes to zero when λ/μmax goes to zero and that it goes to infinity when λ/μmax goes to 1. This implies that f(0) = 0 and that f(∞) = 1. In addition, if we assume that the effective service rate deteriorates monotonically with queue length, then the function f(x) is monotone and concave. A simple function that satisfies the basic requirements is f(x) = x/(1 + x), which gives the model

  dx/dt = λ − μmax x/(1 + x).    (2.29)

This model was proposed by Agnew [5]. It can be
shown that if arrival and service processes are Poisson processes, the average queue length is given by equation (2.29) and that equation (2.29) is a good approximation even for short queue lengths; see Tipper [193].

To explore the properties of the model (2.29), we will first investigate the equilibrium value of the queue length when the arrival rate λ is constant. Setting the derivative dx/dt to zero in equation (2.29) and solving for x, we find that the queue length x approaches the steady-state value

  xe = λ/(μmax − λ).    (2.30)

Figure 2.19a shows the steady-state queue length as a function of λ/μmax, the effective service rate excess. Notice that the queue length increases rapidly as λ approaches μmax. To have a queue length less than 20 requires λ/μmax < 0.95. The average time to service a request is Ts = (x + 1)/μmax, and it increases dramatically as λ approaches μmax.

Figure 2.20: Illustration of feedback in the virtual memory system of the IBM 370: (a) system performance, (b) system state. Panel (a) shows the effect of feedback on execution times in a simulation following [43]: results with no feedback are shown with o, and results with feedback with x. Notice the dramatic decrease in execution time for the system with feedback. Panel (b) shows how the three states are obtained based on process measurements.

Figure 2.19b illustrates the behavior of the server in a typical overload situation. The maximum service rate is μmax = 1, and the arrival rate starts at λ = 0.5. The arrival rate is increased to λ = 4 at time 20, and it returns to λ = 0.5 at time 25. The figure shows that the queue builds up quickly and clears very slowly. Since the response time is proportional to queue length, it means that the quality of service is poor for a long period after an overload. This behavior is called the rush-hour effect and has been observed in web servers and many other queuing systems, such as automobile traffic. The dashed
line in Figure 2.19b shows the behavior of the flow model, which describes the average queue length. The simple model captures behavior qualitatively, but there are variations from sample to sample when the queue length is short.

Many complex systems use discrete control actions. Such systems can be modeled by characterizing the situations that correspond to each control action, as illustrated in the following example.

Figure 2.21: Consensus protocols for sensor networks: (a) a simple sensor network with five nodes; in this network, node 1 communicates with node 2, and node 2 communicates with nodes 1, 3, 4 and 5, etc.; (b) a simulation demonstrating the convergence of the consensus protocol (2.31) to the average value of the initial conditions.

Example 2.11 (Virtual memory paging control). An early example of the use of feedback in computer systems was applied in the operating system OS/VS for the IBM 370 [43, 55]. The system used virtual memory, which allows programs to address more memory than is physically available as fast memory. Data in current fast memory (random access memory, RAM) is accessed directly, but data that resides in slower memory (disk) is automatically loaded into fast memory. The system is implemented in such a way that it appears to the programmer as a single large section of memory. The system performed very well in many situations, but very long execution times were encountered in overload situations, as shown by the open circles in Figure 2.20a. The difficulty was resolved with a simple discrete feedback system. The load of the central processing unit (CPU) was measured together with the number of page swaps between fast memory and slow memory. The operating region was classified as being in one of three states: normal, underload or overload. The normal state is characterized by high CPU activity; the underload state is characterized by low CPU activity and few page
replacements; the overload state has moderate to low CPU load but many page replacements (see Figure 2.20b). The boundaries between the regions and the time for measuring the load were determined from simulations using typical loads. The control strategy was to do nothing in the normal load condition, to exclude a process from memory in the overload condition and to allow a new process or a previously excluded process in the underload condition. The crosses in Figure 2.20a show the effectiveness of the simple feedback system in simulated loads. Similar principles are used in many other situations, e.g., in fast, on-chip cache memory.

Example 2.12 (Consensus protocols in sensor networks). Sensor networks are used in a variety of applications where we want to collect and aggregate information over a region of space using multiple sensors that are connected together via a communications network. Examples include monitoring environmental conditions in a geographical area (or inside a building), monitoring the movement of animals or vehicles and monitoring the resource loading across a group of computers. In many sensor networks the computational resources are distributed along with the sensors, and it can be important for the set of distributed agents to reach a consensus about a certain property, such as the average temperature in a region or the average computational load among a set of computers.

We model the connectivity of the sensor network using a graph, with nodes corresponding to the sensors and edges corresponding to the existence of a direct communications link between two nodes. We use the notation N(i) to represent the set of neighbors of a node i. For example, in the network shown in Figure 2.21a, N(2) = {1, 3, 4, 5} and N(3) = {2, 4}.

To solve the consensus problem, let xi be the state of the ith sensor, corresponding to that sensor's estimate of the average value that we are trying to compute. We initialize the state to the value of the quantity measured by the individual sensor. The consensus protocol
(algorithm) can now be realized as a local update law

  xi[k+1] = xi[k] + γ Σ_{j ∈ N(i)} (xj[k] − xi[k]).    (2.31)

This protocol attempts to compute the average by updating the local state of each agent based on the value of its neighbors. The combined dynamics of all agents can be written in the form

  x[k+1] = x[k] − γ (D − A) x[k],    (2.32)

where A is the adjacency matrix and D is a diagonal matrix with entries corresponding to the number of neighbors of each node. The constant γ describes the rate at which the estimate of the average is updated based on information from neighboring nodes. The matrix L = D − A is called the Laplacian of the graph.

The equilibrium points of equation (2.32) are the set of states such that xe[k+1] = xe[k]. It can be shown that xe = (α, α, …, α) is an equilibrium state for the system, corresponding to each sensor having an identical estimate α for the average. Furthermore, we can show that α is indeed the average value of the initial states. Since there can be cycles in the graph, it is possible that the state of the system could enter into an "infinite loop" and never converge to the desired consensus state. A formal analysis requires tools that will be introduced later in the text, but it can be shown that for any connected graph we can always find a γ such that the states of the individual agents converge to the average. A simulation demonstrating this property is shown in Figure 2.21b.

Biological Systems

Biological systems provide perhaps the richest source of feedback and control examples. The basic problem of homeostasis, in which a quantity such as temperature or blood sugar level is regulated to a fixed value, is but one of the many types of complex feedback interactions that can occur in molecular machines, cells, organisms and ecosystems.

Example 2.13 (Transcriptional regulation). Transcription is the process by which messenger RNA (mRNA) is generated from a segment of DNA. The promoter region of a gene allows transcription to be controlled by the presence of other proteins, which bind to the promoter region and either repress
or activate RNA polymerase, the enzyme that produces an mRNA transcript from DNA. The mRNA is then translated into a protein according to its nucleotide sequence. This process is illustrated in Figure 2.22.

Figure 2.22: Biological circuitry. The cell on the left is a bovine pulmonary cell, stained so that the nucleus, actin and chromatin are visible. The figure on the right gives an overview of the process by which proteins in the cell are made: RNA is transcribed from DNA by an RNA polymerase enzyme; the RNA is then translated into a protein by an organelle called a ribosome.

A simple model of the transcriptional regulation process is through the use of a Hill function [56, 154]. Consider the regulation of a protein A with a concentration given by pa and a corresponding mRNA concentration ma. Let B be a second protein with concentration pb that represses the production of protein A through transcriptional regulation. The resulting dynamics of pa and ma can be written as

  dma/dt = αab/(1 + kab pb^nab) + αa0 − γa ma,
  dpa/dt = βa ma − δa pa,    (2.33)

where αab + αa0 is the unregulated transcription rate, γa represents the rate of degradation of mRNA, αab, kab and nab are parameters that describe how B represses A, βa represents the rate of production of the protein from its corresponding mRNA and δa represents the rate of degradation of the protein A. The parameter αa0 describes the "leakiness" of the promoter, and nab is called the Hill coefficient and relates to the cooperativity of the promoter.

A similar model can be used when a protein activates the production of another protein rather than repressing it. In this case the equations have the form

  dma/dt = αab kab pb^nab/(1 + kab pb^nab) + αa0 − γa ma,
  dpa/dt = βa ma − δa pa,    (2.34)

where the variables are the same as described previously. Note that in the case of the activator, if pb is zero, then the production rate is αa0 (versus αab + αa0 for the repressor). As pb gets large, the first term in the
expression for ma approaches 1, and the transcription rate becomes αab + αa0 (versus αa0 for the repressor). Thus we see that the activator and repressor act in opposite fashion from each other.

As an example of how these models can be used, we consider the model of a "repressilator," originally due to Elowitz and Leibler [71]. The repressilator is a synthetic circuit in which three proteins each repress another in a cycle. This is shown schematically in Figure 2.23a, where the three proteins are TetR, λ cI and LacI. The basic idea of the repressilator is that if TetR is present, then it represses the production of λ cI. If λ cI is absent, then LacI is produced (at the unregulated transcription rate), which in turn represses TetR. Once TetR is repressed, then λ cI is no longer repressed, and so on. If the dynamics of the circuit are designed properly, the resulting protein concentrations will oscillate.

Figure 2.23: The repressilator genetic regulatory network: (a) repressilator plasmid, showing the layout of the genes in the plasmid that holds the circuit (ampR, SC101 origin, PLtetO1, cI-lite, PR, lacI-lite, PLlacO1, tetR-lite) as well as the circuit diagram (center); (b) a simulation of a simple model for the repressilator, showing the oscillation of the individual protein concentrations. Figure courtesy M. Elowitz.

We can model this system using three copies of equation (2.33), with A and B replaced by the appropriate combination of TetR, cI and LacI. The state of the system is then given by x = (mTetR, pTetR, mcI, pcI, mLacI, pLacI). Figure 2.23b shows the traces of the three protein concentrations for parameters n = 2, α = 0.5, k = 6.25 × 10⁻⁴, α0 = 5 × 10⁻⁴, γ = 5.8 × 10⁻³, β = 0.12 and δ = 1.2 × 10⁻³, with initial conditions x(0) = (1, 0, 0, 200, 0, 0), following [71].

Example 2.14 (Wave propagation in neuronal networks). The dynamics of the membrane potential in a cell are a fundamental mechanism
in understanding signaling in cells, particularly in neurons and muscle cells. The Hodgkin–Huxley equations give a simple model for studying propagation waves in networks of neurons. The model for a single neuron has the form

  C dV/dt = −INa − IK − Ileak + Iinput,

where V is the membrane potential, C is the capacitance, INa and IK are the currents caused by the transport of sodium and potassium across the cell membrane, Ileak is a leakage current and Iinput is the external stimulation of the cell. Each current obeys Ohm's law, i.e.,

  I = g (V − E),

where g is the conductance and E is the equilibrium voltage. The equilibrium voltage is given by Nernst's law,

  E = (RT/nF) log(ce/ci),

where R is the gas constant, T is the absolute temperature, F is Faraday's constant, n is the charge (or valence) of the ion, and ci and ce are the ion concentrations inside the cell and in the external fluid. At 20 °C we have RT/F ≈ 25 mV.

The Hodgkin–Huxley model was originally developed as a means to predict the quantitative behavior of the squid giant axon [100]. Hodgkin and Huxley shared the 1963 Nobel Prize in Physiology or Medicine (along with J. C. Eccles) for "analysis of the electrical and chemical events in nerve cell discharges." The voltage clamp described in Section 1.3 was a key element in Hodgkin and Huxley's experiments.

2.5 Further Reading

Modeling is ubiquitous in engineering and science and has a long history in applied mathematics. For example, the Fourier series was introduced by Fourier when he modeled heat conduction in solids [76]. Models of dynamics have been developed in many different fields, including mechanics [12, 86], heat conduction [50], fluids [37], vehicles [1, 38, 69], robotics [156, 183], circuits [92], power systems [132], acoustics [30] and micromechanical systems [179]. Control theory requires modeling from many different domains, and most control theory texts contain several chapters on modeling using ordinary differential equations and difference equations (see, for example, [79]). A classic book on the modeling of physical systems, especially mechanical, electrical and thermofluid systems, is Cannon [49]. The book by Aris [11] is highly original and has a detailed discussion of the use of dimension-free variables. Two of the authors' favorite books on modeling of biological systems are J. D. Murray [154] and Wilson [203].

Exercises

2.1 (Chain of integrators form) Consider the linear ordinary differential equation (2.7). Show that by choosing a state space representation with x1 = y, the dynamics can be written as

  A = [ 0 1 0 … 0; 0 0 1 … 0; …; 0 0 … 0 1; −an −an−1 … … −a1 ],  B = [0; 0; …; 1],  C = [1 0 … 0].

This canonical form is called the chain of integrators form.

2.2 (Inverted pendulum) Use the equations of motion for a balance system to derive a dynamic model for the inverted pendulum described in Example 2.2 and verify that for small θ the dynamics are approximated by equation (2.10).

2.3 (Discrete-time dynamics) Consider the following discrete-time system:

  x[k+1] = A x[k] + B u[k],  y[k] = C x[k],

where

  x = (x1, x2),  A = [ a11 a12; 0 a22 ],  B = [0; 1],  C = [1 0].

In this problem we will explore some of the properties of this discrete-time system as a function of the parameters, the initial conditions and the inputs.
(a) For the case when a12 = 0 and u = 0, give a closed form expression for the output of the system.
(b) A discrete system is in equilibrium when x[k+1] = x[k] for all k. Let u = r be a constant input and compute the resulting equilibrium point for the system. Show that if |aii| < 1 for all i, all initial conditions give solutions that converge to the equilibrium point.
(c) Write a computer program to plot the output of the system in response to a unit step input, u[k] = 1 for k ≥ 0. Plot the response of your system with x[0] = 0 and A given by a11 = 0.5, a12 = 1 and a22 = 0.25.

2.4 (Keynesian economics) Keynes' simple model for an economy is given by

  Y[k] = C[k] + I[k] + G[k],

where Y, C, I and G are gross national product (GNP), consumption, investment and government expenditure for year k. Consumption and investment are modeled by difference equations of the form

  C[k+1] = a Y[k],  I[k+1] = b (C[k+1] − C[k]),

where a and b are parameters. The first equation implies that consumption increases with GNP but that the effect is
delayed. The second equation implies that investment is proportional to the rate of change of consumption. Show that the equilibrium value of the GNP is given by

  Ye = (1/(1 − a)) (Ie + Ge),

where the parameter 1/(1 − a) is the Keynes multiplier (the gain from I or G to Y). With a = 0.75 an increase of government expenditure will result in a fourfold increase of GNP. Also show that the model can be written as the following discrete-time state model:

  [ C[k+1]; I[k+1] ] = [ a a; ab − b ab ] [ C[k]; I[k] ] + [ a; ab ] G[k],
  Y[k] = C[k] + I[k] + G[k].

2.5 (Least squares system identification) Consider a nonlinear differential equation that can be written in the form

  dx/dt = Σ_{i=1}^{M} αi fi(x),

where fi(x) are known nonlinear functions and αi are unknown but constant parameters. Suppose that we have measurements (or estimates) of the full state x at time instants t1, t2, …, tN, with N > M. Show that the parameters αi can be determined by finding the least squares solution to a linear equation of the form

  H α = b,

where α ∈ ℝ^M is the vector of all parameters and H ∈ ℝ^(N×M) and b ∈ ℝ^N are appropriately defined.

2.6 (Normalized oscillator dynamics) Consider a damped spring–mass system with dynamics

  m q̈ + c q̇ + k q = F.

Let ω0 = √(k/m) be the natural frequency and ζ = c/(2√(km)) be the damping ratio.
(a) Show that by rescaling the equations, we can write the dynamics in the form

  q̈ + 2ζω0 q̇ + ω0² q = ω0² u,    (2.35)

where u = F/k. This form of the dynamics is that of a linear oscillator with natural frequency ω0 and damping ratio ζ.
(b) Show that the system can be further normalized and written in the form

  dz1/dτ = z2,  dz2/dτ = −z1 − 2ζ z2 + v.    (2.36)

The essential dynamics of the system are governed by a single damping parameter ζ. The Q-value, defined as Q = 1/(2ζ), is sometimes used instead of ζ.

2.7 (Electric generator) An electric generator connected to a strong power grid can be modeled by a momentum balance for the rotor of the generator:

  J d²φ/dt² = Pm − Pe = Pm − (EV/X) sin φ,

where J is the effective moment of inertia of the generator, φ the angle of rotation, Pm the mechanical power that drives the generator, Pe the active electrical power, E
the generator voltage, V the grid voltage and X the reactance of the line. Assuming that the line dynamics are much faster than the rotor dynamics,

  Pe = V I = (EV/X) sin φ,

where I is the current component in phase with the voltage E and φ is the phase angle between the voltages E and V. Show that the dynamics of the electric generator have a normalized form that is similar to the inverted pendulum in Example 2.2 (with no damping).

2.8 (Admission control for a queue) The long delays created by temporary overloads can be reduced by rejecting requests when the queue gets large. This allows requests that are accepted to be serviced quickly and requests that cannot be accommodated to receive a rejection quickly so that they can try another server. Consider a simple proportional control with saturation, described by

  u = sat(0,1)(k (r − x)),    (2.37)

where sat(a,b) is defined in equation (3.9) and r is the desired (reference) queue length. Use a simulation to show that this controller reduces the rush-hour effect, and explain how the choice of r affects the system dynamics.

2.9 (Biological switch) A genetic switch can be formed by connecting two repressors together in a cycle, as shown below. Using the models from Example 2.13 (assuming that the parameters are the same for both genes and that the mRNA concentrations reach steady state quickly), show that the dynamics can be written in normalized coordinates as

  dz1/dτ = μ/(1 + z2ⁿ) − z1 + v1,  dz2/dτ = μ/(1 + z1ⁿ) − z2 + v2,    (2.38)

where z1 and z2 are scaled versions of the protein concentrations and the time scale has also been changed. Show that μ ≈ 200 using the parameters in Example 2.13, and use simulations to demonstrate the switch-like behavior of the system.

2.10 (Motor drive) Consider a system consisting of a motor driving two masses that are connected by a torsional spring, as shown in the diagram below. This system can represent a motor with a flexible shaft that drives a load. Assuming that the motor delivers a torque that is proportional to the current, the dynamics of the system can be described by the equations

  J1 d²φ1/dt² + c (dφ1/dt − dφ2/dt) + k (φ1 − φ2) = kI I,
  J2 d²φ2/dt² + c (dφ2/dt − dφ1/dt) + k (φ2 − φ1) = Td.    (2.39)

Similar equations are obtained for a robot with flexible arms and for the arms of DVD and optical disk drives. Derive a state space model for the system by introducing the (normalized) state variables x1 = φ1, x2 = φ2, x3 = ω1/ω0 and x4 = ω2/ω0, where ω0 = √(k(J1 + J2)/(J1 J2)) is the undamped natural frequency of the system when the control signal is zero.

Chapter Three
Examples

"Don't apply any model until you understand the simplifying assumptions on which it is based, and you can test their validity. Catch phrase: use only as directed. Don't limit yourself to a single model: more than one model may be useful for understanding different aspects of the same phenomenon. Catch phrase: legalize polygamy."
Saul Golomb, "Mathematical Models: Uses and Limitations," 1970 [87]

In this chapter we present a collection of examples spanning many different fields of science and engineering. These examples will be used throughout the text and in exercises to illustrate different concepts. First-time readers may wish to focus on only a few examples with which they have had the most prior experience or insight to understand the concepts of state, input, output and dynamics in a familiar setting.

3.1 Cruise Control

The cruise control system of a car is a common feedback system encountered in everyday life. The system attempts to maintain a constant velocity in the presence of disturbances primarily caused by changes in the slope of a road. The controller compensates for these unknowns by measuring the speed of the car and adjusting the throttle appropriately.

To model the system we start with the block diagram in Figure 3.1. Let v be the speed of the car and vr the desired (reference) speed. The controller, which typically is of the proportional-integral (PI) type described briefly in Chapter 1, receives the signals v and vr and generates a control signal u that is sent to an actuator that controls the throttle position. The throttle in turn controls the torque T delivered by the engine,
which is transmitted through the gears and the wheels generating a force F that moves the car There are disturbance forces Fd due to variations in the slope of the road the rolling resistance and aerodynamic forces The cruise controller also has a humanmachine interface that allows the driver to set and modify the desired speed There are also functions that disconnect the cruise control when the brake is touched The system has many individual componentsactuator engine transmission wheels and car bodyand a detailed model can be very complicated In spite of this the model required to design the cruise controller can be quite simple To develop a mathematical model we start with a force balance for the car body Let v be the speed of the car m the total mass including passengers F the force generated by the contact of the wheels with the road and Fd the disturbance force Figure 31 Block diagram of a cruise control system for an automobile The throttlecontrolled engine generates a torque T that is transmitted to the ground through the gearbox and wheels Combined with the external forces from the environment such as aerodynamic drag and gravitational forces on hills the net force causes the car to move The velocity of the car υ is measured by a control system that adjusts the throttle through an actuation mechanism A driver interface allows the system to be turned on and off and the reference speed υr to be established due to gravity friction and aerodynamic drag The equation of motion of the car is simply m dυdt F Fd 31 The force F is generated by the engine whose torque is proportional to the rate of fuel injection which is itself proportional to a control signal 0 u 1 that controls the throttle position The torque also depends on engine speed ω A simple representation of the torque at full throttle is given by the torque curve Tω Tm 1 β ωωm 1² 32 where the maximum torque Tm is obtained at engine speed ωm Typical parameters are Tm 190 Nm ωm 420 rads about 4000 RPM and β 
04 Let n be the gear ratio and r the wheel radius The engine speed is related to the velocity through the expression ω nr υ αn υ and the driving force can be written as F nur Tω αn u Tαn υ Typical values of αn for gears 1 through 5 are α1 40 α2 25 α3 16 α4 12 and α5 10 The inverse of αn has a physical interpretation as the effective wheel radius Figure 32 shows the torque as a function of engine speed and vehicle speed The figure shows that the effect of the gear is to flatten the torque curve so that an almost full torque can be obtained almost over the whole speed range The disturbance force Fd has three major components Fg the forces due to 31 CRUISE CONTROL 67 0 200 400 600 100 120 140 160 180 200 Angular velocity ω rads Torque T Nm 0 20 40 60 100 120 140 160 180 200 n1 n2 n3 n4 n5 Velocity v ms Torque T Nm Figure 32 Torque curves for typical car engine The graph on the left shows the torque generated by the engine as a function of the angular velocity of the engine while the curve on the right shows torque as a function of car speed for different gears gravity Fr the forces due to rolling friction and Fa the aerodynamic drag Letting the slope of the road be θ gravity gives the force Fg mg sin θ as illustrated in Figure 33a where g 98 ms2 is the gravitational constant A simple model of rolling friction is Fr mgCr sgnv where Cr is the coefficient of rolling friction and sgnv is the sign of v 1 or zero if v 0 A typical value for the coefficient of rolling friction is Cr 001 Finally the aerodynamic drag is proportional to the square of the speed Fa 1 2ρCd Av2 whereρ isthedensityofairCd istheshapedependentaerodynamicdragcoefficient and A isthefrontalareaofthecarTypicalparametersareρ 13 kgm3Cd 032 and A 24 m2 Summarizing we find that the car can be modeled by m dv dt αnuT αnv mgCr sgnv 1 2ρCd Av2 mg sin θ 33 where the function T is given by equation 32 The model 33 is a dynamical system of first order The state is the car velocity v which is also the output The 
The input is the signal u that controls the throttle position, and the disturbance is the force Fd, which depends on the slope of the road. The system is nonlinear because of the torque curve, the gravity term and the nonlinear character of rolling friction and aerodynamic drag. There can also be variations in the parameters; e.g., the mass of the car depends on the number of passengers and the load being carried in the car.

We add to this model a feedback controller that attempts to regulate the speed of the car in the presence of disturbances. We shall use a proportional-integral (PI) controller, which has the form

    u(t) = kp e(t) + ki ∫₀ᵗ e(τ) dτ.

Figure 3.3: Car with cruise control encountering a sloping road. A schematic diagram is shown in (a), and (b) shows the response in speed and throttle when a slope of 4° is encountered. The hill is modeled as a net change of 4° in hill angle θ, with a linear change in the angle between t = 5 and t = 6. The PI controller has proportional gain kp = 0.5 and integral gain ki = 0.1.

This controller can itself be realized as an input/output dynamical system by defining a controller state z and implementing the differential equation

    dz/dt = vr − v,    u = kp (vr − v) + ki z,    (3.4)

where vr is the desired (reference) speed. As discussed briefly in Section 1.5, the integrator (represented by the state z) ensures that in steady state the error will be driven to zero, even when there are disturbances or modeling errors. (The design of PI controllers is the subject of Chapter 10.) Figure 3.3b shows the response of the closed loop system, consisting of equations (3.3) and (3.4), when it encounters a hill. The figure shows that even if the hill is so steep that the throttle changes from 0.17 to almost full throttle, the largest speed error is less than 1 m/s, and the desired velocity is recovered after 20 s.

Many approximations were made when deriving the model (3.3). It may seem surprising that such a seemingly complicated system can be described by the simple model (3.3). It is important to make sure that we restrict our use of the model
to the uncertainty lemon conceptualized in Figure 2.15b. The model is not valid for very rapid changes of the throttle because we have ignored the details of the engine dynamics, neither is it valid for very slow changes because the properties of the engine will change over the years. Nevertheless, the model is very useful for the design of a cruise control system. As we shall see in later chapters, the reason for this is the inherent robustness of feedback systems: even if the model is not perfectly accurate, we can use it to design a controller and make use of the feedback in the controller to manage the uncertainty in the system.

Figure 3.4: Finite state machine for a cruise control system. The figure on the left shows some typical buttons used to control the system. The controller can be in one of four modes, corresponding to the nodes in the diagram on the right. Transition between the modes is controlled by pressing one of the five buttons on the cruise control interface: on, off, set, resume or cancel.

The cruise control system also has a human-machine interface that allows the driver to communicate with the system. There are many different ways to implement this system; one version is illustrated in Figure 3.4. The system has four buttons: on-off, set-decelerate, resume-accelerate and cancel. The operation of the system is governed by a finite state machine that controls the modes of the PI controller and the reference generator. Implementation of controllers and reference generators will be discussed more fully in Chapter 10.

The use of control in automotive systems goes well beyond the simple cruise control system described here. Applications include emissions control, traction control, power control (especially in hybrid vehicles) and adaptive cruise control. Many automotive applications are discussed in detail in the book by Kiencke and Nielsen [124] and in the survey papers by Powers et al. [22, 166].

3.2 Bicycle Dynamics

The bicycle is an interesting dynamical system with the feature that one of its key properties is due to a feedback mechanism that is created by the design of the front fork. A detailed model of a bicycle is complex because the system has many degrees of freedom and the geometry is complicated. However, a great deal of insight can be obtained from simple models.

To derive the equations of motion we assume that the bicycle rolls on the horizontal xy plane. Introduce a coordinate system that is fixed to the bicycle, with the ξ-axis through the contact points of the wheels with the ground, the η-axis horizontal and the ζ-axis vertical, as shown in Figure 3.5. Let v0 be the velocity of the bicycle at the rear wheel, b the wheel base, ϕ the tilt angle and δ the steering angle. The coordinate system rotates around the point O with the angular velocity ω = v0 δ/b, and an observer fixed to the bicycle experiences forces due to the motion of the coordinate system.

Figure 3.5: Schematic views of a bicycle. The steering angle is δ and the roll angle is ϕ. The center of mass has height h and distance a from a vertical through the contact point P1 of the rear wheel. The wheel base is b and the trail is c.

The tilting motion of the bicycle is similar to an inverted pendulum, as shown in the rear view in Figure 3.5b. To model the tilt, consider the rigid body obtained when the wheels, the rider and the front fork assembly are fixed to the bicycle frame. Let m be the total mass of the system, J the moment of inertia of this body with respect to the ξ-axis and D the product of inertia with respect to the ξζ axes. Furthermore, let the ξ and ζ coordinates of the center of mass with respect to the rear wheel contact point P1 be a and h, respectively. We have J ≈ mh² and D = mah. The torques acting on the system are due to gravity and centripetal action. Assuming that the steering angle δ is small, the equation of motion
becomes

    J d²ϕ/dt² − (D v0/b) dδ/dt = mgh sin ϕ + (m v0² h/b) δ.    (3.5)

The term mgh sin ϕ is the torque generated by gravity. The terms containing δ and its derivative are the torques generated by steering, with the term (D v0/b) dδ/dt due to inertial forces and the term (m v0² h/b) δ due to centripetal forces.

The steering angle is influenced by the torque the rider applies to the handlebar. Because of the tilt of the steering axis and the shape of the front fork, the contact point of the front wheel with the road, P2, is behind the axis of rotation of the front wheel assembly, as shown in Figure 3.5c. The distance c between the contact point of the front wheel P2 and the projection of the axis of rotation of the front fork assembly P3 is called the trail. The steering properties of a bicycle depend critically on the trail: a large trail increases stability but makes the steering less agile.

A consequence of the design of the front fork is that the steering angle δ is influenced both by steering torque T and by the tilt of the frame ϕ. This means that a bicycle with a front fork is a feedback system, as illustrated by the block diagram in Figure 3.6. The steering angle δ influences the tilt angle ϕ, and the tilt angle influences the steering angle, giving rise to the circular causality that is characteristic of reasoning about feedback.

Figure 3.6: Block diagram of a bicycle with a front fork. The steering torque applied to the handlebars is T, the roll angle is ϕ and the steering angle is δ. Notice that the front fork creates a feedback from the roll angle ϕ to the steering angle δ that under certain conditions can stabilize the system.

For a front fork with a positive trail, the bicycle will steer into the lean, creating a centrifugal force that attempts to diminish the lean. Under certain conditions the feedback can actually stabilize the bicycle. A crude empirical model is obtained by assuming that the block B can be modeled as the static system

    δ = k1 T − k2 ϕ.    (3.6)
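The stabilizing effect of the front fork can be checked numerically. Substituting δ = k1 T − k2 ϕ with T = 0 into the small-angle version of (3.5) gives a second-order characteristic polynomial whose roots can be examined as a function of forward speed. The parameter values below are illustrative assumptions, not values from the text:

```python
import cmath

# Tilt dynamics (3.5) with the static front-fork feedback (3.6), T = 0 and
# sin(phi) ~ phi:
#   J x'' + (D v0 k2 / b) x' + (m h k2 v0**2 / b - m g h) x = 0.
# All parameter values below are illustrative assumptions, not from the text.
m, h, a, b, g, k2 = 80.0, 1.0, 0.3, 1.0, 9.8, 2.0
J, D = m * h**2, m * a * h          # the approximations J = m h^2, D = m a h

def tilt_poles(v0):
    """Roots of J s^2 + c s + k = 0 for forward speed v0."""
    c = D * v0 * k2 / b
    k = m * h * k2 * v0**2 / b - m * g * h
    disc = cmath.sqrt(c * c - 4 * J * k)
    return ((-c + disc) / (2 * J), (-c - disc) / (2 * J))

for v0 in (1.0, 5.0):
    stable = all(p.real < 0 for p in tilt_poles(v0))
    print(v0, stable)               # prints: 1.0 False, then 5.0 True
```

With these (assumed) numbers the tilt is unstable at 1 m/s but self-stabilizes at 5 m/s, illustrating the claim that the front-fork feedback can stabilize the bicycle under certain conditions (here, at sufficiently high speed).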
This model neglects the dynamics of the front fork, the tire-road interaction and the fact that the parameters depend on the velocity. A more accurate model, called the Whipple model, is obtained using the rigid-body dynamics of the front fork and the frame. Assuming small angles, this model becomes

    M [ϕ̈; δ̈] + C v0 [ϕ̇; δ̇] + (K0 + K2 v0²) [ϕ; δ] = [0; T],    (3.7)

where the elements of the 2×2 matrices M, C, K0 and K2 depend on the geometry and the mass distribution of the bicycle. Note that this has a form somewhat similar to that of the spring-mass system introduced in Chapter 2 and the balance system in Example 2.1. Even this more complex model is inaccurate because the interaction between the tire and the road is neglected; taking this into account requires two additional state variables. Again, the uncertainty lemon in Figure 2.15b provides a framework for understanding the validity of the model under these assumptions.

Interesting presentations on the development of the bicycle are given in the books by D. Wilson [202] and Herlihy [98]. The model (3.7) was presented in a paper by Whipple in 1899 [197]. More details on bicycle modeling are given in the paper [17], which has many references.

3.3 Operational Amplifier Circuits

An operational amplifier (op amp) is a modern implementation of Black's feedback amplifier. It is a universal component that is widely used for instrumentation, control and communication. It is also a key element in analog computing. Schematic diagrams of the operational amplifier are shown in Figure 3.7. The amplifier has one inverting input (v−), one noninverting input (v+) and one output (vout). There are also connections for the supply voltages, e− and e+, and a zero adjustment (offset null). A simple model is obtained by assuming that the input currents i− and i+ are

Figure 3.7: An operational amplifier and two schematic diagrams. (a) The amplifier pin connections on an integrated circuit chip. (b) A schematic with all
connections. (c) Only the signal connections.

zero and that the output is given by the static relation

    vout = sat(vmin, vmax)(k (v+ − v−)),    (3.8)

where sat denotes the saturation function

    sat(a, b)(x) = a if x < a;  x if a ≤ x ≤ b;  b if x > b.    (3.9)

We assume that the gain k is large, in the range of 10⁶ to 10⁸, and the voltages vmin and vmax satisfy e− ≤ vmin < vmax ≤ e+ and hence are in the range of the supply voltages. More accurate models are obtained by replacing the saturation function with a smooth function, as shown in Figure 3.8. For small input signals the amplifier characteristic (3.8) is linear:

    vout = k (v+ − v−) = kv.    (3.10)

Since the open loop gain k is very large, the range of input signals where the system is linear is very small.

Figure 3.8: Input/output characteristics of an operational amplifier. The differential input is given by v+ − v−. The output voltage is a linear function of the input in a small range around 0, with saturation at vmin and vmax. In the linear regime the op amp has high gain.

Figure 3.9: Stable amplifier using an op amp. The circuit (a) uses negative feedback around an operational amplifier and has a corresponding block diagram (b). The resistors R1 and R2 determine the gain of the amplifier.

A simple amplifier is obtained by arranging feedback around the basic operational amplifier, as shown in Figure 3.9a. To model the feedback amplifier in the linear range, we assume that the current i0 is zero and that the gain of the amplifier is so large that the voltage v = v+ − v− is also zero. It follows from Ohm's law that the currents through resistors R1 and R2 are given by v1/R1 = −v2/R2, and hence the closed loop gain of the amplifier is

    v2/v1 = −kcl,  where  kcl = R2/R1.    (3.11)

A more accurate model is obtained by continuing to neglect the current i0 but assuming that the voltage v is small but not negligible. The current balance is then

    (v1 − v)/R1 = (v − v2)/R2.    (3.12)

Assuming that the amplifier operates in the linear range and using equation (3.10), the gain of the closed loop system becomes kcl
= −v2/v1 = (R2/R1) · kR1 / ((R1 + R2) + kR1).    (3.13)

If the open loop gain k of the operational amplifier is large, the closed loop gain kcl is the same as in the simple model given by equation (3.11). Notice that the closed loop gain depends only on the passive components, and that variations in k have only a marginal effect on the closed loop gain. For example, if k = 10⁶ and R2/R1 = 100, a variation of k by 100% gives only a variation of 0.01% in the closed loop gain. The drastic reduction in sensitivity is a nice illustration of how feedback can be used to make precise systems from uncertain components. In this particular case, feedback is used to trade high gain and low robustness for low gain and high robustness. Equation (3.13) was the formula that inspired Black when he invented the feedback amplifier [35] (see the quote at the beginning of Chapter 12).

It is instructive to develop a block diagram for the feedback amplifier in Figure 3.9a. To do this we will represent the pure amplifier with input v and output v2 as one block. To complete the block diagram, we must describe how v depends on v1 and v2. Solving equation (3.12) for v gives

    v = (R2/(R1 + R2)) v1 + (R1/(R1 + R2)) v2 = (R1/(R1 + R2)) ((R2/R1) v1 + v2),

and we obtain the block diagram shown in Figure 3.9b. The diagram clearly shows that the system has feedback and that the gain from v2 to v is R1/(R1 + R2), which can also be read from the circuit diagram in Figure 3.9a. If the loop is stable and the gain of the amplifier is large, it follows that the error e is small, and we find that v2 = −(R2/R1) v1. Notice that the resistor R1 appears in two blocks in the block diagram. This situation is typical in electrical circuits, and it is one reason why block diagrams are not always well suited for some types of physical modeling.

Figure 3.10: Circuit diagram of a PI controller obtained by feedback around an operational amplifier. The capacitor C is used to store charge and represents the integral of the input.

The simple model of the amplifier given by equation (3.10) provides qualitative insight.
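The insensitivity of the closed loop gain (3.13) to the open loop gain k is easy to verify numerically (a sketch; the resistor values are illustrative, chosen to give R2/R1 = 100 as in the example):

```python
# Closed-loop gain (3.13) of the feedback amplifier, illustrating its
# insensitivity to the open-loop gain k.  R1 = 1 kOhm and R2 = 100 kOhm
# are illustrative values giving R2/R1 = 100 as in the text.
R1, R2 = 1e3, 100e3

def kcl(k):
    """Closed-loop gain kcl = (R2/R1) * k*R1 / ((R1 + R2) + k*R1)."""
    return (R2 / R1) * k * R1 / ((R1 + R2) + k * R1)

g1, g2 = kcl(1e6), kcl(2e6)         # a 100% variation in the open-loop gain
rel_change = abs(g2 - g1) / g1
print(g1, rel_change)               # gain near 100; relative change ~1e-4 or less
```

The closed-loop gain stays within a fraction of a percent of R2/R1 = 100 even when k doubles, which is the sensitivity reduction that inspired Black's feedback amplifier.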
It neglects, however, the fact that the amplifier is a dynamical system. A more realistic model is

    dvout/dt = −a vout − b v,    (3.14)

where the parameter b, which has dimensions of frequency, is called the gain-bandwidth product of the amplifier. Whether a more complicated model is used depends on the questions to be answered and the required size of the uncertainty lemon. The model (3.14) is still not valid for very high or very low frequencies, since drift causes deviations at low frequencies and there are additional dynamics that appear at frequencies close to b. The model is also not valid for large signals (an upper limit is given by the voltage of the power supply, typically in the range of 5–10 V), neither is it valid for very low signals because of electrical noise. These effects can be added if needed, but they increase the complexity of the analysis.

The operational amplifier is very versatile, and many different systems can be built by combining it with resistors and capacitors. In fact, any linear system can be implemented by combining operational amplifiers with resistors and capacitors. Exercise 3.5 shows how a second-order oscillator is implemented, and Figure 3.10 shows the circuit diagram for an analog proportional-integral controller. To develop a simple model for the circuit, we assume that the current i0 is zero and that the open loop gain k is so large that the input voltage v is negligible. The current i through the capacitor is i = C dvc/dt, where vc is the voltage across the capacitor. Since the same current goes through the resistor R1, we get

    i = v1/R1 = C dvc/dt,

which implies that

    vc(t) = (1/C) ∫ i(t) dt = (1/(R1 C)) ∫₀ᵗ v1(τ) dτ.

The output voltage is thus given by

    v2(t) = −(R2 i + vc) = −(R2/R1) v1(t) − (1/(R1 C)) ∫₀ᵗ v1(τ) dτ,

which is the input/output relation for a PI controller.

The development of operational amplifiers was pioneered by Philbrick [139, 165], and their usage is described in many textbooks (e.g., [53]). Good information is also available from suppliers [112, 145].

3.4 Computing Systems and Networks

The application of feedback to computing
systems follows the same principles as the control of physical systems, but the types of measurements and control inputs that can be used are somewhat different. Measurements (sensors) are typically related to resource utilization in the computing system or network and can include quantities such as the processor load, memory usage or network bandwidth. Control variables (actuators) typically involve setting limits on the resources available to a process. This might be done by controlling the amount of memory, disk space or time that a process can consume, turning on or off processing, delaying availability of a resource or rejecting incoming requests to a server process. Process modeling for networked computing systems is also challenging, and empirical models based on measurements are often used when a first-principles model is not available.

Web Server Control

Web servers respond to requests from the Internet and provide information in the form of web pages. Modern web servers start multiple processes to respond to requests, with each process assigned to a single source until no further requests are received from that source for a predefined period of time. Processes that are idle become part of a pool that can be used to respond to new requests. To provide a fast response to web requests, it is important that the web server processes do not overload the server's computational capabilities or exhaust its memory. Since other processes may be running on the server, the amount of available processing power and memory is uncertain, and feedback can be used to provide good performance in the presence of this uncertainty.

Figure 3.11: Feedback control of a web server. Connection requests arrive on an input queue, where they are sent to a server process. A finite state machine keeps track of the state of the individual server processes and
responds to requests. A control algorithm can modify the server's operation by controlling parameters that affect its behavior, such as the maximum number of requests that can be serviced at a single time (MaxClients) or the amount of time that a connection can remain idle before it is dropped (KeepAlive).

Figure 3.11 illustrates the use of feedback to modulate the operation of an Apache web server. The web server operates by placing incoming connection requests on a queue and then starting a subprocess to handle requests for each accepted connection. This subprocess responds to requests from a given connection as they come in, alternating between a Busy state and a Wait state. Keeping the subprocess active between requests is known as the persistence of the connection and provides a substantial reduction in latency to requests for multiple pieces of information from a single site. If no requests are received for a sufficiently long period of time (controlled by the KeepAlive parameter), then the connection is dropped and the subprocess enters an Idle state, where it can be assigned another connection. A maximum of MaxClients simultaneous requests will be served, with the remainder remaining on the incoming request queue.

The parameters that control the server represent a tradeoff between performance (how quickly requests receive a response) and resource usage (the amount of processing power and memory used by the server). Increasing the MaxClients parameter allows connection requests to be pulled off of the queue more quickly, but increases the amount of processing power and memory usage that is required. Increasing the KeepAlive timeout means that individual connections can remain idle for a longer period of time, which decreases the processing load on the machine but increases the size of the queue (and hence the amount of time required for a user to initiate a connection). Successful operation of a busy server requires a proper choice of these parameters, often based on trial and error. To
model the dynamics of this system in more detail, we create a discrete-time model with states given by the average processor load xcpu and the percentage memory usage xmem. The inputs to the system are taken as the maximum number of clients umc and the keep-alive time uka. If we assume a linear model around the equilibrium point, the dynamics can be written as

    [xcpu(k+1); xmem(k+1)] = [A11 A12; A21 A22] [xcpu(k); xmem(k)] + [B11 B12; B21 B22] [uka(k); umc(k)],    (3.15)

where the coefficients of the A and B matrices can be determined based on empirical measurements or detailed modeling of the web server's processing and memory usage. Using system identification, Diao et al. [59, 97] identified the linearized dynamics as

    A = [0.54  −0.11; −0.026  0.63],    B = [−85  4.4; −2.5  2.8] × 10⁻⁴,

where the system was linearized about the equilibrium point

    xcpu = 0.58,  uka = 11 s,  xmem = 0.55,  umc = 600.

This model shows the basic characteristics that were described above. Looking first at the B matrix, we see that increasing the KeepAlive timeout (first column of the B matrix) decreases both the processor usage and the memory usage, since there is more persistence in connections and hence the server spends a longer time waiting for a connection to close rather than taking on a new active connection. The MaxClients parameter (second column) increases both the processing and memory requirements. Note that the largest effect on the processor load is the KeepAlive timeout.

The A matrix tells us how the processor and memory usage evolve in a region of the state space near the equilibrium point. The diagonal terms describe how the individual resources return to equilibrium after a transient increase or decrease. The off-diagonal terms show that there is coupling between the two resources, so that a change in one could cause a later change in the other.

Although this model is very simple, we will see in later examples that it can be used to modify the parameters controlling the server in real time and provide robustness with respect to uncertainties in the load on the machine.
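The identified model (3.15) can be exercised directly. The sketch below iterates the linear difference equation for a sustained unit increase in the KeepAlive timeout; consistent with the sign pattern of the B matrix, the immediate effect is a decrease in both processor and memory usage:

```python
# Linearized web-server model (3.15) with the identified matrices.
# States and inputs are deviations from the equilibrium point
# (xcpu, xmem) = (0.58, 0.55), (uka, umc) = (11 s, 600).
A = [[0.54, -0.11],
     [-0.026, 0.63]]
B = [[-85e-4, 4.4e-4],
     [-2.5e-4, 2.8e-4]]

def step(x, u):
    """One step of x(k+1) = A x(k) + B u(k)."""
    return [A[0][0]*x[0] + A[0][1]*x[1] + B[0][0]*u[0] + B[0][1]*u[1],
            A[1][0]*x[0] + A[1][1]*x[1] + B[1][0]*u[0] + B[1][1]*u[1]]

u = [1.0, 0.0]            # sustained +1 s increase in the KeepAlive timeout
x = step([0.0, 0.0], u)   # immediate effect: both deviations go negative
first = list(x)
for _ in range(49):       # iterate until the response has settled
    x = step(x, u)
print(first, x)
```

Both eigenvalues of A lie inside the unit circle, so the deviations settle to a steady state; the one-step response shows the direct (negative) effect of the KeepAlive input described in the text, while the settled values also reflect the coupling through the off-diagonal terms of A.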
Similar types of mechanisms have been used for other types of servers. It is important to remember the assumptions on the model and their role in determining when the model is valid. In particular, since we have chosen to use average quantities over a given sample time, the model will not provide an accurate representation for high-frequency phenomena.

Congestion Control

The Internet was created to obtain a large, highly decentralized, efficient and expandable communication system. The system consists of a large number of interconnected gateways. A message is split into several packets, which are transmitted over different paths in the network, and the packets are rejoined to recover the message at the receiver. An acknowledgment ("ack") message is sent back to the sender when a packet is received. The operation of the system is governed by a simple but powerful decentralized control structure that has evolved over time.

Figure 3.12: Internet congestion control. (a) Source computers send information to routers, which forward the information to other routers that eventually connect to the receiving computer. When a packet is received, an acknowledgment packet is sent back through the routers (not shown). The routers buffer information received from the sources and send the data across the outgoing link. (b) The equilibrium buffer size be for a set of N identical computers sending packets through a single router with drop probability ρ.

The system has two control mechanisms, called protocols: the Transmission Control Protocol (TCP) for end-to-end network communication and the Internet Protocol (IP) for routing packets and for host-to-gateway or gateway-to-gateway communication. The current protocols evolved after some spectacular congestion collapses occurred in the mid 1980s, when throughput unexpectedly could drop by a factor of 1000 [108].
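The source control described in the following paragraphs follows an additive-increase/multiplicative-decrease (AIMD) rule: grow the window steadily while packets get through, and halve it when packets are lost. A toy sketch of that sawtooth behavior (the link capacity value is an illustrative assumption, not part of the model developed below):

```python
# Toy additive-increase / multiplicative-decrease (AIMD) window control,
# the qualitative mechanism used by TCP as described in the text.
# A "capacity" of 50 packets per round trip is an illustrative assumption.
capacity = 50
w = 1.0
trace = []
for rtt in range(200):
    if w > capacity:      # packet loss once the link is overloaded
        w = w / 2         # multiplicative decrease
    else:
        w = w + 1         # additive increase, one packet per round trip
    trace.append(w)

print(max(trace[50:]), min(trace[50:]))   # sawtooth between ~capacity/2 and capacity
```

After an initial ramp-up, the window oscillates between roughly half the capacity and the capacity, which is the characteristic sawtooth of AIMD congestion control.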
The control mechanism in TCP is based on conserving the number of packets in the loop from the sender to the receiver and back to the sender. The sending rate is increased exponentially when there is no congestion, and it is dropped to a low level when there is congestion.

To derive an overall model for congestion control, we model three separate elements of the system: the rate at which packets are sent by individual sources (computers), the dynamics of the queues in the links (routers) and the admission control mechanism for the queues. Figure 3.12a is a block diagram of the system.

The current source control mechanism on the Internet is a protocol known as TCP/Reno [137]. This protocol operates by sending packets to a receiver and waiting to receive an acknowledgment from the receiver that the packet has arrived. If no acknowledgment is sent within a certain timeout period, the packet is retransmitted. To avoid waiting for the acknowledgment before sending the next packet, Reno transmits multiple packets up to a fixed window around the latest packet that has been acknowledged. If the window length is chosen properly, packets at the beginning of the window will be acknowledged before the source transmits packets at the end of the window, allowing the computer to continuously stream packets at a high rate.

To determine the size of the window to use, TCP/Reno uses a feedback mechanism in which (roughly speaking) the window size is increased by 1 every time a packet is acknowledged, and the window size is cut in half when packets are lost. This mechanism allows a dynamic adjustment of the window size in which each computer acts in a greedy fashion as long as packets are being delivered but backs off quickly when congestion occurs.

A model for the behavior of the source can be developed by describing the dynamics of the window size. Suppose we have N computers and let wi be the current window size (measured in number of packets) for the ith computer. Let qi represent the end-to-end probability that a packet will be dropped
someplace between the source and the receiver. We can model the dynamics of the window size by the differential equation

    dwi/dt = (1 − qi) ri(t − τi)/wi − qi (wi/2) ri(t − τi),    ri = wi/τi,    (3.16)

where τi is the end-to-end transmission time for a packet to reach its destination and the acknowledgment to be sent back, and ri is the resulting rate at which packets are cleared from the list of packets that have been received. The first term in the dynamics represents the increase in window size when a packet is received, and the second term represents the decrease in window size when a packet is lost. Notice that ri is evaluated at time t − τi, representing the time required to receive additional acknowledgments.

The link dynamics are controlled by the dynamics of the router queue and the admission control mechanism for the queue. Assume that we have L links in the network and use l to index the individual links. We model the queue in terms of the current number of packets in the router's buffer bl and assume that the router can contain a maximum of bl,max packets and transmits packets at a rate cl, equal to the capacity of the link. The buffer dynamics can then be written as

    dbl/dt = sl − cl,    sl = Σ_{i: l ∈ Li} ri(t − τli^f),    (3.17)

where Li is the set of links that are being used by source i, τli^f is the time it takes a packet from source i to reach link l and sl is the total rate at which packets arrive at link l.

The admission control mechanism determines whether a given packet is accepted by a router. Since our model is based on the average quantities in the network and not the individual packets, one simple model is to assume that the probability that a packet is dropped depends on how full the buffer is: pl = ml(bl, bmax). For simplicity, we will assume for now that pl = ρl bl (see Exercise 3.6 for a more detailed model). The probability that a packet is dropped at a given link can be used to determine the end-to-end probability that a packet is lost in transmission:

    qi = 1 − Π_{l ∈ Li} (1 − pl) ≈ Σ_{l ∈ Li} pl(t − τli^b),    (3.18)

where τli^b is the backward delay from link l to
source i, and the approximation is valid as long as the individual drop probabilities are small. We use the backward delay since this represents the time required for the acknowledgment packet to be received by the source. Together, equations (3.16), (3.17) and (3.18) represent a model of congestion control dynamics.

We can obtain substantial insight by considering a special case in which we have N identical sources and 1 link. In addition, we assume for the moment that the forward and backward time delays can be ignored, in which case the dynamics can be reduced to the form

    dwi/dt = 1/τ − (ρb/τ)(1 + wi²/2),    db/dt = Σᵢ wi/τ − c,    τ = b/c,    (3.19)

where wi ∈ ℝ, i = 1, …, N, are the window sizes for the sources of data, b ∈ ℝ is the current buffer size of the router, ρ controls the rate at which packets are dropped and c is the capacity of the link connecting the router to the computers. The variable τ represents the amount of time required for a packet to be processed by a router, based on the size of the buffer and the capacity of the link. Substituting τ = b/c into the equations, we write the state space dynamics as

    dwi/dt = c/b − ρc(1 + wi²/2),    db/dt = Σᵢ c wi/b − c.    (3.20)

More sophisticated models can be found in [101, 137].

The nominal operating point for the system can be found by setting ẇi = ḃ = 0:

    0 = c/b − ρc(1 + wi²/2),    0 = Σᵢ c wi/b − c.

Exploiting the fact that all of the source dynamics are identical, it follows that all of the wi should be the same, and it can be shown that there is a unique equilibrium satisfying the equations

    we = be/N = c τe/N,    (1/(2ρ²N²)) (ρbe)³ + ρbe − 1 = 0.    (3.21)

The solution of the second equation is a bit messy but can easily be determined numerically. A plot of its solution as a function of 1/(2ρ²N²) is shown in Figure 3.12b. We also note that at equilibrium we have the following additional equalities:

    τe = be/c = N we/c,    qe = pe = ρbe,    re = we/τe.    (3.22)

Figure 3.13 shows a simulation of 60 sources communicating across a single link, with 20 sources dropping out at t = 500 ms and the remaining sources increasing their rates (window sizes) to compensate.
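As the text suggests, the equilibrium equation in (3.21) is easy to solve numerically. A sketch using bisection; the values of N, ρ and c below are illustrative assumptions, not values from the text:

```python
# Equilibrium of the congestion-control model: x = rho * b_e solves
#   x**3 / (2 * rho**2 * N**2) + x - 1 = 0,       (cf. (3.21))
# after which b_e = x / rho, w_e = b_e / N and tau_e = b_e / c.
# The values of N, rho and c are illustrative assumptions.
N, rho, c = 60, 1e-2, 100.0     # sources, drop-rate constant, link capacity

def f(x):
    return x**3 / (2 * rho**2 * N**2) + x - 1

lo, hi = 0.0, 1.0               # f(0) = -1 < 0 and f(1) > 0: root is bracketed
for _ in range(60):             # bisection; interval shrinks by 2**-60
    mid = 0.5 * (lo + hi)
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

x = 0.5 * (lo + hi)             # x = rho * b_e
b_e = x / rho
w_e, tau_e = b_e / N, b_e / c
print(b_e, w_e, tau_e)
```

Since the left-hand side of the cubic is −1 at x = 0 and positive at x = 1, the root ρbe always lies in (0, 1), which is consistent with the shape of the curve in Figure 3.12b.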
Note that the buffer size and window sizes automatically adjust to match the capacity of the link.

A comprehensive treatment of computer networks is given in the textbook by Tannenbaum [189]. A good presentation of the ideas behind the control principles for the Internet is given by one of its designers, Van Jacobson, in [108]. F. Kelly [120] presents an early effort on the analysis of the system. The book by Hellerstein et al. [97] gives many examples of the use of feedback in computer systems.

Figure 3.13: Internet congestion control for N identical sources across a single link. As shown on the left, multiple sources attempt to communicate through a router across a single link. An ack packet sent by the receiver acknowledges that the message was received; otherwise the message packet is resent and the sending rate is slowed down at the source. The simulation on the right is for 60 sources starting at random rates, with 20 sources dropping out at t = 500 ms. The buffer size is shown at the top, and the individual source rates for 6 of the sources are shown at the bottom.

3.5 Atomic Force Microscopy

The 1986 Nobel Prize in Physics was shared by Gerd Binnig and Heinrich Rohrer for their design of the scanning tunneling microscope. The idea of the instrument is to bring an atomically sharp tip so close to a conducting surface that tunneling occurs. An image is obtained by traversing the tip across the sample and measuring the tunneling current as a function of tip position. This invention has stimulated the development of a family of instruments that permit visualization of surface structure at the nanometer scale, including the atomic force microscope (AFM), where a sample is probed by a tip on a cantilever. An AFM can operate in two modes. In tapping mode the cantilever is vibrated, and the amplitude of vibration is controlled by feedback. In contact mode the
cantilever is in contact with the sample, and its bending is controlled by feedback. In both cases control is actuated by a piezo element that controls the vertical position of the cantilever base (or the sample). The control system has a direct influence on picture quality and scanning rate.

A schematic picture of an atomic force microscope is shown in Figure 3.14a. A microcantilever with a tip having a radius of the order of 10 nm is placed close to the sample. The tip can be moved vertically and horizontally using a piezoelectric scanner. It is clamped to the sample surface by attractive van der Waals forces and repulsive Pauli forces. The cantilever tilt depends on the topography of the surface and the position of the cantilever base, which is controlled by the piezo element. The tilt is measured by sensing the deflection of the laser beam using a photodiode. The signal from the photodiode is amplified and sent to a controller that drives the amplifier for the vertical position of the cantilever.

Figure 3.14: Atomic force microscope. (a) A schematic diagram of an atomic force microscope, consisting of a piezo drive that scans the sample under the AFM tip. A laser reflects off of the cantilever and is used to measure the deflection of the tip through a feedback controller. (b) An AFM image of strands of DNA. (Image courtesy Veeco Instruments.)

By controlling the piezo element so that the deflection of the cantilever is constant, the signal driving the vertical deflection of the piezo element is a measure of the atomic forces between the cantilever tip and the atoms of the sample. An image of the surface is obtained by scanning the cantilever along the sample. The resolution makes it possible to see the structure of the sample on the atomic scale, as illustrated in Figure 3.14b, which shows an AFM image of DNA. The
horizontal motion of an AFM is typically modeled as a springmass system with low damping The vertical motion is more complicated To model the system we start with the block diagram shown in Figure 315 Signals that are easily acces sible are the input voltage u to the power amplifier that drives the piezo element the voltage v applied to the piezo element and the output voltage y of the signal amplifier for the photodiode The controller is a PI controller implemented by a computer which is connected to the system by analogtodigital AD and digital toanalog DA converters The deflection of the cantilever ϕ is also shown in the figure The desired reference value for the deflection is an input to the computer v Cantilever D Computer A u A D ϕ y z Deflection reference amplifier Signal photodiode Laser Sample topography amplifier Power element Piezo Figure 315 Block diagram of the system for vertical positioning of the cantilever for an atomic force microscope in contact mode The control system attempts to keep the can tilever deflection equal to its reference value Cantilever deflection is measured amplified and converted to a digital signal then compared with its reference value A correcting signal is generated by the computer converted to analog form amplified and sent to the piezo element 35 ATOMIC FORCE MICROSCOPY 83 u y Vp a Step response Piezo crystal z1 z2 m1 m2 b Mechanical model Figure 316 Modeling of an atomic force microscope a A measured step response The top curve shows the voltage u applied to the drive amplifier 50 mVdiv the middle curve is the output Vp of the power amplifier 500 mVdiv and the bottom curve is the output y of the signal amplifier 500 mVdiv The time scale is 25 μsdiv Data have been supplied by Georg Schitter b A simple mechanical model for the vertical positioner and the piezo crystal There are several different configurations that have different dynamics Here we will discuss a highperformance system from 176 where the cantilever base is 
positioned vertically using a piezo stack. We begin the modeling with a simple experiment on the system. Figure 3.16a shows a step response of a scanner from the input voltage u to the power amplifier to the output voltage y of the signal amplifier for the photodiode. This experiment captures the dynamics of the chain of blocks from u to y in the block diagram in Figure 3.15. Figure 3.16a shows that the system responds quickly but that there is a poorly damped oscillatory mode with a period of about 35 μs. A primary task of the modeling is to understand the origin of the oscillatory behavior. To do so we will explore the system in more detail.

The natural frequency of the clamped cantilever is typically several hundred kilohertz, which is much higher than the observed oscillation of about 30 kHz. As a first approximation we will model it as a static system. Since the deflections are small, we can assume that the bending ϕ of the cantilever is proportional to the difference in height between the cantilever tip at the probe and the piezo scanner. A more accurate model can be obtained by modeling the cantilever as a spring-mass system of the type discussed in Chapter 2.

Figure 3.16a also shows that the response of the power amplifier is fast. The photodiode and the signal amplifier also have fast responses and can thus be modeled as static systems. The remaining block is a piezo system with suspension. A schematic mechanical representation of the vertical motion of the scanner is shown in Figure 3.16b. We will model the system as two masses separated by an ideal piezo element. The mass m1 is half of the piezo system, and the mass m2 is the other half of the piezo system plus the mass of the support.

A simple model is obtained by assuming that the piezo crystal generates a force F between the masses and that there is a damping c2 in the spring. Let the positions of the centers of the masses be z1 and z2. A momentum balance gives the following model for the system:

    m1 d²z1/dt² = F,    m2 d²z2/dt² + c2 dz2/dt + k2 z2 = −F.

Let the elongation of the piezo element l = z1 − z2 be the control variable and the height z1 of the cantilever base be the output. Eliminating the variable F in the equations above and substituting z1 − l for z2 gives the model

    (m1 + m2) d²z1/dt² + c2 dz1/dt + k2 z1 = m2 d²l/dt² + c2 dl/dt + k2 l.    (3.23)

Summarizing, we find that a simple model of the system is obtained by modeling the piezo by (3.23) and all the other blocks by static models. Introducing the linear equations l = k3 u and y = k4 z1, we now have a complete model relating the output y to the control signal u. A more accurate model can be obtained by introducing the dynamics of the cantilever and the power amplifier. As in the previous examples, the concept of the uncertainty lemon in Figure 2.15b provides a framework for describing the uncertainty: the model will be accurate up to the frequencies of the fastest modeled modes and over a range of motion in which linearized stiffness models can be used.

The experimental results in Figure 3.16a can be explained qualitatively as follows. When a voltage is applied to the piezo, it expands by l0; the mass m1 moves up and the mass m2 moves down instantaneously. The system settles after a poorly damped oscillation.

It is highly desirable to design a control system for the vertical motion so that it responds quickly with little oscillation. The instrument designer has several choices: to accept the oscillation and have a slow response time, to design a control system that can damp the oscillations, or to redesign the mechanics to give resonances of higher frequency. The last two alternatives give a faster response and faster imaging.

Since the dynamic behavior of the system changes with the properties of the sample, it is necessary to tune the feedback loop. In simple systems this is currently done manually by adjusting parameters of a PI controller. There are interesting possibilities for making AFM systems easier to use by introducing automatic tuning and adaptation. The book by Sarid [173]
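The piezo model can be checked numerically. The sketch below uses illustrative (not measured) parameter values to build the transfer function implied by the model, from the elongation l to the cantilever-base height z1, and computes its step response: the static gain is k2/k2 = 1, and the lightly damped poles produce the kind of poorly damped oscillation seen in the measured step response of Figure 3.16a.

```python
import numpy as np
from scipy import signal

# Illustrative parameters (kg, N*s/m, N/m); not values from the actual instrument
m1, m2 = 0.05, 0.05
c2, k2 = 30.0, 1e6

# From (m1+m2) z1'' + c2 z1' + k2 z1 = m2 l'' + c2 l' + k2 l,
# the transfer function from l to z1 is
#   G(s) = (m2 s^2 + c2 s + k2) / ((m1+m2) s^2 + c2 s + k2)
G = signal.TransferFunction([m2, c2, k2], [m1 + m2, c2, k2])

# Step response: jumps immediately to m2/(m1+m2), then rings toward the
# static gain of 1 with little damping
t, z1 = signal.step(G, T=np.linspace(0, 0.05, 5000))

dc_gain = z1[-1]           # should settle near 1
overshoot = z1.max() - 1.0  # positive for this poorly damped system
```

With these numbers the resonance is at sqrt(k2/(m1+m2)) ≈ 3200 rad/s with damping ratio around 0.05, so the response overshoots substantially before settling, qualitatively matching the experiment.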
gives a broad coverage of atomic force microscopes. The interaction of atoms close to surfaces is fundamental to solid-state physics; see Kittel [125]. The model discussed in this section is based on Schitter [175].

3.6 Drug Administration

The phrase "Take two pills three times a day" is a recommendation with which we are all familiar. Behind this recommendation is a solution of an open loop control problem. The key issue is to make sure that the concentration of a medicine in a part of the body is sufficiently high to be effective but not so high that it will cause undesirable side effects. The control action is quantized, take two pills, and sampled, every 8 hours. The prescriptions are based on simple models captured in empirical tables, and the dose is based on the age and weight of the patient.

Figure 3.17: Abstraction used to compartmentalize the body for the purpose of describing drug distribution, based on Teorell [190]. The body is abstracted by a number of compartments with perfect mixing, and the complex transport processes are approximated by assuming that the flow is proportional to the concentration differences in the compartments. The constants ki parameterize the rates of flow between different compartments.

Drug administration is a control problem. To solve it we must understand how a drug spreads in the body after it is administered. This topic, called pharmacokinetics, is now a discipline of its own, and the models used are called compartment models. They go back to the 1920s, when Widmark modeled the propagation of alcohol in the body [199]. Compartment models are now important for the screening of all drugs used by humans. The schematic diagram in Figure 3.17 illustrates the idea of a compartment model. The body is viewed as a number of compartments, like blood plasma, kidney, liver, and tissues, that are separated by membranes. It is assumed that there is
perfect mixing so that the drug concentration is constant in each compartment. The complex transport processes are approximated by assuming that the flow rates between the compartments are proportional to the concentration differences in the compartments.

To describe the effect of a drug it is necessary to know both its concentration and how it influences the body. The relation between concentration c and its effect e is typically nonlinear. A simple model is

    e = c/(c0 + c) · emax.    (3.24)

The effect is linear for low concentrations, and it saturates at high concentrations. The relation can also be dynamic, and it is then called pharmacodynamics.

Compartment Models

The simplest dynamic model for drug administration is obtained by assuming that the drug is evenly distributed in a single compartment after it has been administered and that the drug is removed at a rate proportional to the concentration. The compartments behave like stirred tanks with perfect mixing. Let c be the concentration, V the volume, and q the outflow rate. Converting the description of the system into differential equations gives the model

    V dc/dt = −q c,    c(0) = c0.    (3.25)

This equation has the solution c(t) = c0 e^(−qt/V) = c0 e^(−kt), which shows that the concentration decays exponentially with the time constant T = V/q after an injection. The input is introduced implicitly as an initial condition in the model (3.25). More generally, the way the input enters the model depends on how the drug is administered. For example, the input can be represented as a mass flow into the compartment where the drug is injected. A pill that is dissolved can also be interpreted as an input in terms of a mass flow rate.

The model (3.25) is called a one-compartment model or a single-pool model. The parameter q/V is called the elimination rate constant. This simple model is often used to model the concentration in the blood plasma. By measuring the concentration at a few times, the initial concentration can be obtained by extrapolation. If the total amount of
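The one-compartment model can be exercised directly with the aspirin-like parameters quoted below in the text (V = 0.2 L/kg, q = 0.01 L/(h·kg)), which give a time constant of T = V/q = 20 h. The short sketch below evaluates the exponential solution of (3.25):

```python
import math

V = 0.2    # apparent volume of distribution, L per kg body weight (aspirin value from the text)
q = 0.01   # outflow (clearance), L per hour per kg (aspirin value from the text)
k = q / V  # elimination rate constant, 1/h
T = V / q  # time constant, hours

def concentration(c0, t):
    """Concentration after an impulsive dose: c(t) = c0 * exp(-q t / V)."""
    return c0 * math.exp(-k * t)

c0 = 1.0
half_life = math.log(2) / k  # time for the concentration to halve
```

After one time constant the concentration has fallen to about 37% (1/e) of its initial value, and after one half-life to exactly 50%.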
injected substance is known, the volume V can then be determined as V = m/c0; this volume is called the apparent volume of distribution. This volume is larger than the real volume if the concentration in the plasma is lower than in other parts of the body. The model (3.25) is very simple, and there are large individual variations in the parameters. The parameters V and q are often normalized by dividing by the weight of the person. Typical parameters for aspirin are V = 0.2 L/kg and q = 0.01 L/(h·kg). These numbers can be compared with a blood volume of 0.07 L/kg, a plasma volume of 0.05 L/kg, an intracellular fluid volume of 0.4 L/kg, and an outflow of 0.0015 L/(min·kg).

The simple one-compartment model captures the gross behavior of drug distribution, but it is based on many simplifications. Improved models can be obtained by considering the body as composed of several compartments. Examples of such systems are shown in Figure 3.18, where the compartments are represented as circles and the flows by arrows.

Modeling will be illustrated using the two-compartment model in Figure 3.18a. We assume that there is perfect mixing in each compartment and that the transport between the compartments is driven by concentration differences. We further assume that a drug with concentration c0 is injected in compartment 1 at a volume flow rate of u and that the concentration in compartment 2 is the output. Let c1 and c2 be the concentrations of the drug in the compartments, and let V1 and V2 be the volumes of the compartments. The mass balances for the compartments are

    V1 dc1/dt = q (c2 − c1) − q0 c1 + c0 u,    c1(0) = 0,
    V2 dc2/dt = q (c1 − c2),    c2(0) = 0,    (3.26)
    y = c2.

Figure 3.18: Schematic diagrams of compartment models. (a) A simple two-compartment model. Each compartment is labeled by its volume, and arrows indicate the flow of chemical into, out of, and between compartments. (b) A system with six compartments used to study the metabolism of thyroid hormone [85]. The notation kij denotes the transport from compartment j to compartment i.

Introducing the variables k0 = q0/V1, k1 = q/V1, k2 = q/V2, and b0 = c0/V1 and using matrix notation, the model can be written as

    dc/dt = [ −k0 − k1    k1 ] c + [ b0 ] u,    y = ( 0  1 ) c.    (3.27)
            [     k2     −k2 ]     [ 0  ]

Comparing this model with its graphical representation in Figure 3.18a, we find that the mathematical representation (3.27) can be written by inspection.

It should also be emphasized that simple compartment models such as the one in equation (3.27) have a limited range of validity. Low-frequency limits exist because the human body changes with time, and since the compartment model uses average concentrations, they will not accurately represent rapid changes. There are also nonlinear effects that influence transportation between the compartments.

Compartment models are widely used in medicine, engineering, and environmental science. An interesting property of these systems is that variables like concentration and mass are always positive. An essential difficulty in compartment modeling is deciding how to divide a complex system into compartments. Compartment models can also be nonlinear, as illustrated in the next section.

Insulin-Glucose Dynamics

It is essential that the blood glucose concentration in the body is kept within a narrow range (0.7-1.1 g/L). Glucose concentration is influenced by many factors like food intake, digestion, and exercise. A schematic picture of the relevant parts of the body is shown in Figures 3.19a and b.

There is a sophisticated mechanism that regulates glucose concentration. Glucose concentration is maintained by the pancreas, which secretes the hormones insulin and glucagon. Glucagon is released into the bloodstream when the glucose
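The two-compartment model (3.27) is easy to simulate. The sketch below uses hypothetical rate constants (the k's and b0 are chosen for illustration, not taken from any drug) and integrates the equations with forward Euler under a constant infusion u = 1; the output approaches the steady state c = −A⁻¹ B u, whose second component equals b0/k0 here.

```python
import numpy as np

# Hypothetical rate constants (1/min) and input gain; chosen only for illustration
k0, k1, k2, b0 = 0.1, 0.05, 0.05, 1.0

A = np.array([[-(k0 + k1), k1],
              [k2,        -k2]])
B = np.array([b0, 0.0])
C = np.array([0.0, 1.0])

# Forward-Euler simulation of dc/dt = A c + B u with constant infusion u = 1
dt, n = 0.05, 40_000          # 2000 minutes of simulated time
c = np.zeros(2)
for _ in range(n):
    c = c + dt * (A @ c + B * 1.0)

y = C @ c                     # concentration in compartment 2
```

At steady state the coupling term forces c1 = c2, and the first balance then gives c1 = b0/k0, so y settles at 10.0 for these numbers.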
Figure 3.19: Insulin-glucose dynamics. (a) Sketch of body parts involved in the control of glucose. (b) Schematic diagram of the system. (c) Responses of insulin (μU/ml) and glucose (mg/dl) versus time t (min) when glucose is injected intravenously. From [164].

level is low. It acts on cells in the liver that release glucose. Insulin is secreted when the glucose level is high, and the glucose level is lowered by causing the liver and other cells to take up more glucose.

In diseases like juvenile diabetes the pancreas is unable to produce insulin, and the patient must inject insulin into the body to maintain a proper glucose level. The mechanisms that regulate glucose and insulin are complicated; dynamics with time scales that range from seconds to hours have been observed. Models of different complexity have been developed. The models are typically tested with data from experiments where glucose is injected intravenously and insulin and glucose concentrations are measured at regular time intervals.

A relatively simple model called the minimal model was developed by Bergman and coworkers [31]. This model uses two compartments, one representing the concentration of glucose in the bloodstream and the other representing the concentration of insulin in the interstitial fluid. Insulin in the bloodstream is considered an input. The reaction of glucose to insulin can be modeled by the equations

    dx1/dt = −(p1 + x2) x1 + p1 ge,    dx2/dt = −p2 x2 + p3 (u − ie),    (3.28)

where ge and ie represent the equilibrium values of glucose and insulin, x1 is the concentration of glucose, and x2 is proportional to the concentration of interstitial insulin. Notice the presence of the term x2 x1 in the first equation. Also notice that the model does not capture the complete feedback loop because it does not describe how the pancreas reacts to the glucose. Figure 3.19c shows a fit of the model to a test on a normal person where glucose was injected intravenously at time t = 0. The glucose concentration rises rapidly, and the pancreas responds with a rapid
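A minimal-model simulation can be sketched in a few lines. The parameter values below are hypothetical placeholders (the text itself notes that p1 varies by an order of magnitude between individuals); the run starts from an elevated glucose level with the insulin input held at its equilibrium value ie, so x2 stays at zero and x1 relaxes back to ge.

```python
# Hypothetical parameter values for eq. (3.28); not fitted to any patient
p1, p2, p3 = 0.03, 0.02, 1e-5   # rates in 1/min and an insulin-sensitivity gain
ge, ie = 0.9, 10.0              # equilibrium glucose (g/L) and insulin levels

def minimal_model_step(x, u, dt=0.1):
    """One forward-Euler step of eq. (3.28):
       dx1/dt = -(p1 + x2) x1 + p1*ge,  dx2/dt = -p2 x2 + p3 (u - ie)."""
    x1, x2 = x
    dx1 = -(p1 + x2) * x1 + p1 * ge
    dx2 = -p2 * x2 + p3 * (u - ie)
    return (x1 + dt * dx1, x2 + dt * dx2)

# Glucose bolus at t = 0: x1 starts well above ge, insulin input at equilibrium
x = (3.0, 0.0)
for _ in range(60_000):         # 6000 minutes of simulated time
    x = minimal_model_step(x, u=ie)
```

With u held at ie the insulin state never moves, and the glucose state decays toward ge with time constant 1/p1, roughly half an hour for these numbers.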
spike-like injection of insulin. The glucose and insulin levels then gradually approach the equilibrium values.

Models of the type in equation (3.28), and more complicated models having many compartments, have been developed and fitted to experimental data. A difficulty in modeling is that there are significant variations in model parameters over time and for different patients. For example, the parameter p1 in equation (3.28) has been reported to vary with an order of magnitude for healthy individuals. The models have been used for diagnosis and to develop schemes for the treatment of persons with diseases. Attempts to develop a fully automatic artificial pancreas have been hampered by the lack of reliable sensors.

The papers by Widmark and Tandberg [199] and Teorell [190] are classics in pharmacokinetics, which is now an established discipline with many textbooks [62, 109, 84]. Because of its medical importance, pharmacokinetics is now an essential component of drug development. The book by Riggs [168] is a good source for the modeling of physiological systems, and a more mathematical treatment is given in [119]. Compartment models are discussed in [85]. The problem of determining rate coefficients from experimental data is discussed in [26] and [85]. There are many publications on the insulin-glucose model. The minimal model is discussed in [52, 31], and more recent references are [143, 72].

3.7 Population Dynamics

Population growth is a complex dynamic process that involves the interaction of one or more species with their environment and the larger ecosystem. The dynamics of population groups are interesting and important in many different areas of social and environmental policy. There are examples where new species have been introduced into new habitats, sometimes with disastrous results. There have also been attempts to control population growth, both through incentives and through legislation. In this section we describe some of the models that can be used to understand how populations evolve with
time and as a function of their environments.

Logistic Growth Model

Let x be the population of a species at time t. A simple model is to assume that the birth rates and mortality rates are proportional to the total population. This gives the linear model

    dx/dt = bx − dx = (b − d)x = rx,    x ≥ 0,    (3.29)

where birth rate b and mortality rate d are parameters. The model gives an exponential increase if b > d or an exponential decrease if b < d. A more realistic model is to assume that the birth rate decreases when the population is large. The following modification of the model (3.29) has this property:

    dx/dt = rx (1 − x/k),    x ≥ 0,    (3.30)

where k is the carrying capacity of the environment. The model (3.30) is called the logistic growth model.

Predator-Prey Models

A more sophisticated model of population dynamics includes the effects of competing populations, where one species may feed on another. This situation, referred to as the predator-prey problem, was introduced in Example 2.3, where we developed a discrete-time model that captured some of the features of historical records of lynx and hare populations. In this section we replace the difference equation model used there with a more sophisticated differential equation model.

Let H(t) represent the number of hares (prey) and let L(t) represent the number of lynxes (predator). The dynamics of the system are modeled as

    dH/dt = rH (1 − H/k) − aHL/(c + H),    H ≥ 0,
    dL/dt = b·aHL/(c + H) − dL,    L ≥ 0.

In the first equation, r represents the growth rate of the hares, k represents the maximum population of the hares (in the absence of lynxes), a represents the interaction term that describes how the hares are diminished as a function of the lynx population, and c controls the prey consumption rate for low hare population. In the second equation, b represents the growth coefficient of the lynxes and d represents the mortality rate of the lynxes. Note that the hare dynamics include a term that resembles the logistic growth model (3.30).

Of particular interest are the values at which the population values remain constant, called equilibrium points. The equilibrium points for this system can be determined by setting the right-hand side of the above equations to zero. Letting He and Le represent the equilibrium state, from the second equation we have

    Le = 0    or    He = cd/(ab − d).    (3.32)

Substituting this into the first equation, we have that for Le = 0 either He = 0 or He = k. For Le ≠ 0 we obtain

    Le = (r (c + He)/a)(1 − He/k) = bcr (abk − cd − dk) / ((ab − d)² k).    (3.33)

Thus we have three possible equilibrium points xe = (He, Le):

    xe = (0, 0),    xe = (k, 0),    xe = (He*, Le*),

where He* and Le* are given in equations (3.32) and (3.33). Note that the equilibrium populations may be negative for some parameter values, corresponding to a nonachievable equilibrium point.

Figure 3.20: Simulation of the predator-prey system. The figure on the left shows a simulation of the two populations as a function of time. The figure on the right shows the populations plotted against each other, starting from different values of the population. The oscillation seen in both figures is an example of a limit cycle. The parameter values used for the simulations are a = 3.2, b = 0.6, c = 50, d = 0.56, k = 125, and r = 1.6.

Figure 3.20 shows a simulation of the dynamics starting from a set of population values near the nonzero equilibrium values. We see that for this choice of parameters the simulation predicts an oscillatory population count for each species, reminiscent of the data shown in Figure 2.6.

Volume I of the two-volume set by J. D. Murray [154] gives a broad coverage of population dynamics.

Exercises

3.1 (Cruise control) Consider the cruise control example described in Section 3.1. Build a simulation that recreates the response to a hill shown in Figure 3.3b, and show the effects of increasing and decreasing the mass of the car by 25%. Redesign the controller (using trial and error is fine) so that it returns to within 10% of the desired speed within 3 s of encountering the beginning of the hill.

3.2 (Bicycle dynamics) Show that the dynamics of a
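The equilibrium formulas (3.32)-(3.33) can be verified numerically with the parameter values used for Figure 3.20. The sketch below evaluates the right-hand side of the predator-prey equations at the computed nonzero equilibrium and also checks that the two forms of Le in (3.33) agree.

```python
# Parameter values from the Figure 3.20 simulation
a, b, c, d, k, r = 3.2, 0.6, 50.0, 0.56, 125.0, 1.6

def field(H, L):
    """Right-hand side of the predator-prey equations."""
    dH = r * H * (1 - H / k) - a * H * L / (c + H)
    dL = b * a * H * L / (c + H) - d * L
    return dH, dL

# Nonzero equilibrium from eqs. (3.32)-(3.33)
He = c * d / (a * b - d)
Le = (r * (c + He) / a) * (1 - He / k)

# Expanded form of Le from eq. (3.33)
Le_expanded = b * c * r * (a * b * k - c * d - d * k) / ((a * b - d) ** 2 * k)

dH, dL = field(He, Le)  # both should vanish at the equilibrium
```

For these parameters the equilibrium sits near (He, Le) ≈ (20.6, 29.5), with both populations positive, consistent with the oscillation around it seen in the simulation.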
bicycle frame given by equa tion 35 can be written in state space form as d dt x1 x2 0 mghJ 1 0 x1 x2 1 0 u y Dv0 bJ mv2 0h bJ x 334 where the input u is the torque applied to the handle bars and the output y is the title angle ϕ What do the states x1 and x2 represent 33 Bicycle steering Combine the bicycle model given by equation 35 and the model for steering kinematics in Example 28 to obtain a model that describes the path of the center of mass of the bicycle 34 Operational amplifier circuit Consider the op amp circuit shown below Show that the dynamics can be written in state space form as dxdt 1R1 C1 1Ra C1 0 RbRa 1R2 C2 1R2 C2 x 1R1 C1 0 u y 0 1 x where u v1 and y v3 Hint Use v2 and v3 as your state variables 35 Operational amplifier oscillator The op amp circuit shown below is an implementation of an oscillator Show that the dynamics can be written in state space form as dxdt 0 R4R1 R3 C1 1 0 1R1 C1 x where the state variables represent the voltages across the capacitors x1 v1 and x2 v2 36 Congestion control using RED 138 A number of improvements can be made to the model for Internet congestion control presented in Section 34 To ensure that the routers buffer size remains positive we can modify the buffer dynamics to satisfy dbldt sl cl bl 0 sat0 sl cl bl 0 In addition we can model the drop probability of a packet based on how close we EXERCISES 93 are to the buffer limits a mechanism known as random early detection RED pl mlal 0 alt blower l ρlrit ρlblower l blower l alt bupper l ηlrit 1 2bupper l bupper l alt 2bupper l 1 alt 2bupper l dal dt αlclal bl where αl bupper l blower l and pupper l are parameters for the RED protocol Using the model above write a simulation for the system and find a set of parameter values for which there is a stable equilibrium point and a set for which the system exhibits oscillatory solutions The following sets of parameters should be explored N 20 30 60 blower l 40 pkts ρl 01 c 8 9 15 pktsms bupper l 540 pkts αl 104 τ 55 60 
100 ms.

3.7 (Atomic force microscope with piezo tube) A schematic diagram of an AFM where the vertical scanner is a piezo tube with preloading is shown below. Show that the dynamics can be written as

    (m1 + m2) d²z1/dt² + (c1 + c2) dz1/dt + (k1 + k2) z1 = m2 d²l/dt² + c2 dl/dt + k2 l.

Are there parameter values that make the dynamics particularly simple?

3.8 (Drug administration) The metabolism of alcohol in the body can be modeled by the nonlinear compartment model

    Vb dcb/dt = q (cl − cb) + qiv,
    Vl dcl/dt = q (cb − cl) − qmax cl/(c0 + cl) + qgi,

where Vb = 48 L and Vl = 0.6 L are the apparent volumes of distribution of body water and liver water, cb and cl are the concentrations of alcohol in the compartments, and qiv and qgi are the injection rates for intravenous and gastrointestinal intake. The parameter q = 1.5 L/min is the total hepatic blood flow, qmax = 2.75 mmol/min, and c0 = 0.1 mmol/L. Simulate the system and compute the concentration in the blood for oral and intravenous doses of 12 g and 40 g of alcohol.

3.9 (Population dynamics) Consider the model for logistic growth given by equation (3.30). Show that the maximum growth rate occurs when the size of the population is half of the steady-state value.

3.10 (Fisheries management) The dynamics of a commercial fishery can be described by the following simple model:

    dx/dt = f(x) − h(x, u),    y = b h(x, u) − cu,

where x is the total biomass, f(x) = rx(1 − x/k) is the growth rate, and h(x, u) = axu is the harvesting rate. The output y is the rate of revenue, and the parameters a, b, and c are constants representing the price of fish and the cost of fishing. Show that there is an equilibrium where the steady-state biomass is xe = c/(ab). Compare with the situation when the biomass is regulated to a constant value, and find the maximum sustainable return in that case.

Chapter Four
Dynamic Behavior

"It Don't Mean a Thing If It Ain't Got That Swing." Duke Ellington (1899-1974)

In this chapter we present a broad discussion of the behavior of dynamical systems, focused on systems modeled by nonlinear differential equations. This allows us to
consider equilibrium points, stability, limit cycles, and other key concepts in understanding dynamic behavior. We also introduce some methods for analyzing the global behavior of solutions.

4.1 Solving Differential Equations

In the last two chapters we saw that one of the methods of modeling dynamical systems is through the use of ordinary differential equations (ODEs). A state space, input/output system has the form

    dx/dt = f(x, u),    y = h(x, u),    (4.1)

where x = (x1, ..., xn) ∈ Rn is the state, u ∈ Rp is the input, and y ∈ Rq is the output. The smooth maps f: Rn × Rp → Rn and h: Rn × Rp → Rq represent the dynamics and measurements for the system. In general, they can be nonlinear functions of their arguments. We will sometimes focus on single-input, single-output (SISO) systems, for which p = q = 1.

We begin by investigating systems in which the input has been set to a function of the state, u = α(x). This is one of the simplest types of feedback, in which the system regulates its own behavior. The differential equations in this case become

    dx/dt = f(x, α(x)) = F(x).    (4.2)

To understand the dynamic behavior of this system, we need to analyze the features of the solutions of equation (4.2). While in some simple situations we can write down the solutions in analytical form, often we must rely on computational approaches. We begin by describing the class of solutions for this problem.

We say that x(t) is a solution of the differential equation (4.2) on the time interval t0 ∈ R to tf ∈ R if

    dx(t)/dt = F(x(t))    for all t0 ≤ t ≤ tf.

A given differential equation may have many solutions. We will most often be interested in the initial value problem, where x(t) is prescribed at a given time t0 ∈ R and we wish to find a solution valid for all future time t > t0. We say that x(t) is a solution of the differential equation (4.2) with initial value x0 ∈ Rn at t0 ∈ R if

    x(t0) = x0    and    dx(t)/dt = F(x(t))    for all t0 ≤ t ≤ tf.

For most differential equations we will encounter, there is a unique solution that is defined for t0 ≤ t ≤ tf. The solution may be defined for all time t > t0, in which case we take tf = ∞. Because we will primarily be
interested in solutions of the initial value problem for ODEs, we will usually refer to this simply as the solution of an ODE. We will typically assume that t0 is equal to 0. In the case when F is independent of time, as in equation (4.2), we can do so without loss of generality by choosing a new independent (time) variable τ = t − t0 (Exercise 4.1).

Example 4.1 (Damped oscillator). Consider a damped linear oscillator with dynamics of the form

    q̈ + 2ζω0 q̇ + ω0² q = 0,

where q is the displacement of the oscillator from its rest position. These dynamics are equivalent to those of a spring-mass system, as shown in Exercise 2.6. We assume that ζ < 1, corresponding to a lightly damped system (the reason for this particular choice will become clear later). We can rewrite this in state space form by setting x1 = q and x2 = q̇/ω0, giving

    dx1/dt = ω0 x2,    dx2/dt = −ω0 x1 − 2ζω0 x2.

In vector form, the right-hand side can be written as

    F(x) = ( ω0 x2, −ω0 x1 − 2ζω0 x2 ).

The solution to the initial value problem can be written in a number of different ways and will be explored in more detail in Chapter 5. Here we simply assert that the solution can be written as

    x1(t) = e^(−ζω0 t) ( x10 cos ωd t + (ω0/ωd)(ζ x10 + x20) sin ωd t ),
    x2(t) = e^(−ζω0 t) ( x20 cos ωd t − (ω0/ωd)(x10 + ζ x20) sin ωd t ),

where x0 = (x10, x20) is the initial condition and ωd = ω0 √(1 − ζ²). This solution can be verified by substituting it into the differential equation. We see that the solution is explicitly dependent on the initial condition, and it can be shown that this solution is unique. A plot of the initial condition response is shown in Figure 4.1.

Figure 4.1: Response of the damped oscillator to the initial condition x0 = (1, 0). The solution is unique for the given initial conditions and consists of an oscillatory solution for each state, with an exponentially decaying magnitude.

We note that this form of the solution holds only for 0 < ζ < 1, corresponding to an underdamped oscillator.

Without imposing some mathematical conditions on the function F, the differential equation (4.2) may not have a solution for all t, and there is no
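The claimed solution of Example 4.1 can be cross-checked by numerical integration. The sketch below compares the closed-form expression for x1(t) against a fixed-step Runge-Kutta (RK4) integration of the state equations; the values chosen for ω0, ζ, and the initial condition are arbitrary illustrations.

```python
import math

# Arbitrary illustrative values with 0 < zeta < 1 (underdamped)
w0, zeta = 2.0, 0.3
wd = w0 * math.sqrt(1 - zeta ** 2)
x10, x20 = 1.0, 0.0

def x1_closed_form(t):
    """x1(t) = e^(-zeta w0 t) ( x10 cos(wd t) + (w0/wd)(zeta x10 + x20) sin(wd t) )."""
    return math.exp(-zeta * w0 * t) * (
        x10 * math.cos(wd * t)
        + (w0 / wd) * (zeta * x10 + x20) * math.sin(wd * t))

def rk4_x1(t_end, n=20_000):
    """Integrate dx1/dt = w0 x2, dx2/dt = -w0 x1 - 2 zeta w0 x2 with RK4."""
    h = t_end / n
    x1, x2 = x10, x20
    f = lambda a, b: (w0 * b, -w0 * a - 2 * zeta * w0 * b)
    for _ in range(n):
        k1 = f(x1, x2)
        k2 = f(x1 + h / 2 * k1[0], x2 + h / 2 * k1[1])
        k3 = f(x1 + h / 2 * k2[0], x2 + h / 2 * k2[1])
        k4 = f(x1 + h * k3[0], x2 + h * k3[1])
        x1 += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        x2 += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return x1

err = abs(rk4_x1(5.0) - x1_closed_form(5.0))
```

With 20,000 RK4 steps over five seconds, the numerical and analytical values of x1 agree to well below 1e-6, which is strong evidence that the stated formula solves the state equations.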
guarantee that the solution is unique. We illustrate these possibilities with two examples.

Example 4.2 (Finite escape time). Let x ∈ R and consider the differential equation

    dx/dt = x²    (4.3)

with the initial condition x(0) = 1. By differentiation we can verify that the function

    x(t) = 1/(1 − t)

satisfies the differential equation and that it also satisfies the initial condition. A graph of the solution is given in Figure 4.2a; notice that the solution goes to infinity as t goes to 1. We say that this system has finite escape time. Thus the solution exists only in the time interval 0 ≤ t < 1.

Example 4.3 (Nonunique solution). Let x ∈ R and consider the differential equation

    dx/dt = 2√x    (4.4)

with initial condition x(0) = 0. We can show that the function

    x(t) = 0 if 0 ≤ t ≤ a,    x(t) = (t − a)² if t > a

satisfies the differential equation for all values of the parameter a > 0. To see this,

Figure 4.2: Existence and uniqueness of solutions. Equation (4.3) has a solution only for time t < 1, at which point the solution goes to ∞, as shown in (a). Equation (4.4) is an example of a system with many solutions, as shown in (b). For each value of a, we get a different solution starting from the same initial condition.

we differentiate x(t) to obtain

    dx/dt = 0 if 0 ≤ t ≤ a,    dx/dt = 2(t − a) if t > a,

and hence ẋ = 2√x for all t ≥ 0 with x(0) = 0. A graph of some of the possible solutions is given in Figure 4.2b. Notice that in this case there are many solutions to the differential equation.

These simple examples show that there may be difficulties even with simple differential equations. Existence and uniqueness can be guaranteed by requiring that the function F have the property that for some fixed c ∈ R,

    ‖F(x) − F(y)‖ < c ‖x − y‖    for all x, y,

which is called Lipschitz continuity. A sufficient condition for a function to be Lipschitz is that the Jacobian ∂F/∂x is uniformly bounded for all x. The difficulty in Example 4.2 is that the derivative ∂F/∂x becomes large for large x, and the difficulty in Example 4.3 is that the derivative ∂F/∂x is infinite at the origin.

4.2 Qualitative Analysis

The qualitative behavior of
nonlinear systems is important in understanding some of the key concepts of stability in nonlinear dynamics We will focus on an important class of systems known as planar dynamical systems These systems have two state variables x R2 allowing their solutions to be plotted in the x1 x2 plane The basic concepts that we describe hold more generally and can be used to understand dynamical behavior in higher dimensions Phase Portraits A convenient way to understand the behavior of dynamical systems with state x R2 is to plot the phase portrait of the system briefly introduced in Chapter 2 42 QUALITATIVE ANALYSIS 99 1 05 0 05 1 1 05 0 05 1 x1 x2 a Vector field 1 05 0 05 1 1 05 0 05 1 x1 x2 b Phase portrait Figure 43 Phase portraits a This plot shows the vector field for a planar dynamical system Each arrow shows the velocity at that point in the state space b This plot includes the solutions sometimes called streamlines from different initial conditions with the vector field superimposed We start by introducing the concept of a vector field For a system of ordinary differential equations dx dt Fx the righthand side of the differential equation defines at every x Rn a velocity Fx Rn This velocity tells us how x changes and can be represented as a vector Fx Rn For planar dynamical systems each state corresponds to a point in the plane and Fx is a vector representing the velocity of that state We can plot these vectors on a grid of points in the plane and obtain a visual image of the dynamics of the system as shown in Figure 43a The points where the velocities are zero are of particular interest since they define stationary points of the flow if we start at such a state we stay at that state A phase portrait is constructed by plotting the flow of the vector field corre sponding to the planar dynamical system That is for a set of initial conditions we plot the solution of the differential equation in the plane R2 This corresponds to following the arrows at each point in the 
phase plane and drawing the resulting trajectory. By plotting the solutions for several different initial conditions, we obtain a phase portrait, as shown in Figure 4.3b. Phase portraits are also sometimes called phase plane diagrams.

Phase portraits give insight into the dynamics of the system by showing the solutions plotted in the two-dimensional state space of the system. For example, we can see whether all trajectories tend to a single point as time increases or whether there are more complicated behaviors. In the example in Figure 4.3, corresponding to a damped oscillator, the solutions approach the origin for all initial conditions. This is consistent with our simulation in Figure 4.1, but it allows us to infer the behavior for all initial conditions rather than a single initial condition. However, the phase portrait does not readily tell us the rate of change of the states, although this can be inferred from the lengths of the arrows in the vector field plot.

Figure 4.4: Equilibrium points for an inverted pendulum. An inverted pendulum is a model for a class of balance systems in which we wish to keep a system upright, such as a rocket. (a) Using a simplified model of an inverted pendulum (b), we can develop a phase portrait that shows the dynamics of the system (c). The system has multiple equilibrium points, marked by the solid dots along the x₂ = 0 line.

Equilibrium Points and Limit Cycles

An equilibrium point of a dynamical system represents a stationary condition for the dynamics. We say that a state xₑ is an equilibrium point for a dynamical system

    dx/dt = F(x)

if F(xₑ) = 0. If a dynamical system has an initial condition x(0) = xₑ, then it will stay at the equilibrium point: x(t) = xₑ for all t ≥ 0, where we have taken t₀ = 0.

Equilibrium points are one of the most important features of a dynamical system since they define the states corresponding to constant operating conditions.
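To make stationary points concrete, the sketch below tabulates a planar vector field on a grid, as one would for a quiver plot, and scans for points where F(x) = 0. The damped-oscillator dynamics dx₁/dt = x₂, dx₂/dt = −x₁ − x₂ are an assumed example for illustration; the text does not specify the system of Figure 4.3 numerically:

```python
# Sketch: tabulate a planar vector field F(x) on a grid (as for a quiver
# plot) and scan for stationary points where F(x) = 0. The damped
# oscillator dx1/dt = x2, dx2/dt = -x1 - x2 is an assumed example.
import numpy as np

def F(x1, x2):
    return np.array([x2, -x1 - x2])

xs = np.linspace(-1.0, 1.0, 21)
stationary = [(a, b) for a in xs for b in xs
              if np.linalg.norm(F(a, b)) < 1e-12]
print(stationary)         # only the origin is stationary
```

For this system the origin is the unique equilibrium, consistent with all trajectories in the phase portrait approaching a single point.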
A dynamical system can have zero, one or more equilibrium points.

Example 4.4 (Inverted pendulum). Consider the inverted pendulum in Figure 4.4, which is a part of the balance system we considered in Chapter 2. The inverted pendulum is a simplified version of the problem of stabilizing a rocket: by applying forces at the base of the rocket, we seek to keep the rocket stabilized in the upright position. The state variables are the angle θ = x₁ and the angular velocity dθ/dt = x₂, the control variable is the acceleration u of the pivot and the output is the angle θ.

For simplicity we assume that mgl/Jₜ = 1 and ml/Jₜ = 1, so that the dynamics (equation (2.10)) become

    dx/dt = ( x₂, sin x₁ − c x₂ + u cos x₁ ).        (4.5)

This is a nonlinear time-invariant system of second order. This same set of equations can also be obtained by appropriate normalization of the system dynamics, as illustrated in Example 2.7.

Figure 4.5: Phase portrait and time domain simulation for a system with a limit cycle. The phase portrait (a) shows the states of the solution plotted for different initial conditions. The limit cycle corresponds to a closed loop trajectory. The simulation (b) shows a single solution plotted as a function of time, with the limit cycle corresponding to a steady oscillation of fixed amplitude.

We consider the open loop dynamics by setting u = 0. The equilibrium points for the system are given by

    xₑ = ( ±nπ, 0 ),

where n = 0, 1, 2, .... The equilibrium points for n even correspond to the pendulum pointing up and those for n odd correspond to the pendulum hanging down. A phase portrait for this system (without corrective inputs) is shown in Figure 4.4c. The phase portrait shows −2π ≤ x₁ ≤ 2π, so five of the equilibrium points are shown.

Nonlinear systems can exhibit rich behavior. Apart from equilibria, they can also exhibit stationary periodic solutions. This is of great practical value in generating sinusoidally varying voltages in power systems or in generating periodic signals for animal locomotion.
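The equilibrium points claimed in Example 4.4 can be checked directly: with u = 0, the right-hand side of equation (4.5) vanishes at xₑ = (nπ, 0). The damping value c = 0.1 below is an arbitrary assumption, since the example leaves c unspecified:

```python
# Sketch: check that x_e = (n*pi, 0) satisfies F(x_e) = 0 for the
# pendulum dynamics (4.5) with u = 0. The damping c = 0.1 is an
# arbitrary assumption.
import math

c = 0.1

def F(x1, x2, u=0.0):
    return (x2, math.sin(x1) - c * x2 + u * math.cos(x1))

residuals = [F(n * math.pi, 0.0) for n in range(-2, 3)]
print(residuals)          # all components are zero to machine precision
```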
A simple example is given in Exercise 4.12, which shows the circuit diagram for an electronic oscillator. A normalized model of the oscillator is given by the equations

    dx₁/dt = x₂ + x₁(1 − x₁² − x₂²),
    dx₂/dt = −x₁ + x₂(1 − x₁² − x₂²).        (4.6)

The phase portrait and time domain solutions are given in Figure 4.5. The figure shows that the solutions in the phase plane converge to a circular trajectory. In the time domain this corresponds to an oscillatory solution. Mathematically the circle is called a limit cycle. More formally, we call an isolated solution x(t) a limit cycle of period T > 0 if x(t + T) = x(t) for all t ∈ ℝ.

There are methods for determining limit cycles for second-order systems, but for general higher-order systems we have to resort to computational analysis. Computer algorithms find limit cycles by searching for periodic trajectories in state space that satisfy the dynamics of the system. In many situations, stable limit cycles can be found by simulating the system with different initial conditions.

Figure 4.6: Illustration of Lyapunov's concept of a stable solution. The solution represented by the solid line is stable if we can guarantee that all solutions remain within a tube of diameter ϵ by choosing initial conditions sufficiently close to the solution.

4.3 Stability

The stability of a solution determines whether or not solutions nearby the solution remain close, get closer or move further away. We now give a formal definition of stability and describe tests for determining whether a solution is stable.

Definitions

Let x(t; a) be a solution to the differential equation with initial condition a. A solution is stable if other solutions that start near a stay close to x(t; a). Formally, we say that the solution x(t; a) is stable if for all ϵ > 0 there exists a δ > 0 such that

    ‖b − a‖ < δ  ⟹  ‖x(t; b) − x(t; a)‖ < ϵ  for all t > 0.

Note that this definition does not imply that x(t; b) approaches x(t; a) as time increases, but just that it stays nearby.
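Returning to the oscillator model (4.6), a short simulation confirms that trajectories approach the unit circle. The sketch uses a plain forward-Euler scheme; the step size and duration are arbitrary choices:

```python
# Sketch: simulate the oscillator (4.6) with forward Euler and check
# that the radius sqrt(x1^2 + x2^2) approaches the limit cycle r = 1.
# Step size and duration are arbitrary choices.
import math

def F(x1, x2):
    s = 1.0 - x1 * x1 - x2 * x2
    return (x2 + x1 * s, -x1 + x2 * s)

x1, x2, dt = 0.1, 0.0, 1e-4        # start inside the unit circle
for _ in range(200000):             # simulate to t = 20
    f1, f2 = F(x1, x2)
    x1, x2 = x1 + dt * f1, x2 + dt * f2

r = math.hypot(x1, x2)
print(r)                            # close to 1
```

Starting from other nonzero initial conditions gives the same limiting radius, which is what the phase portrait in Figure 4.5 shows graphically.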
Furthermore, the value of δ may depend on ϵ, so that if we wish to stay very close to the solution, we may have to start very, very close (δ ≪ ϵ). This type of stability, which is illustrated in Figure 4.6, is also called stability in the sense of Lyapunov. If a solution is stable in this sense and the trajectories do not converge, we say that the solution is neutrally stable.

An important special case is when the solution x(t; a) = xₑ is an equilibrium solution. Instead of saying that the solution is stable, we simply say that the equilibrium point is stable. An example of a neutrally stable equilibrium point is shown in Figure 4.7. From the phase portrait, we see that if we start near the equilibrium point, then we stay near the equilibrium point. Indeed, for this example, given any ϵ that defines the range of possible initial conditions, we can simply choose δ = ϵ to satisfy the definition of stability, since the trajectories are perfect circles.

A solution x(t; a) is asymptotically stable if it is stable in the sense of Lyapunov and also x(t; b) → x(t; a) as t → ∞ for b sufficiently close to a. This corresponds to the case where all nearby trajectories converge to the stable solution for large time. Figure 4.8 shows an example of an asymptotically stable equilibrium point.

Figure 4.7: Phase portrait and time domain simulation for a system with a single stable equilibrium point. The equilibrium point xₑ at the origin is stable, since all trajectories that start near xₑ stay near xₑ.

Note from the phase portraits that not only do all trajectories stay near the equilibrium point at the origin but they also all approach the origin as t gets large (the directions of the arrows on the phase portrait show the direction in which the trajectories move).

A solution x(t; a) is unstable if it is not stable. More specifically, we say that a solution x(t; a) is unstable if given some ϵ > 0, there does not exist a δ > 0 such that if ‖b − a‖ < δ, then ‖x(t; b) − x(t; a)‖ < ϵ for all t. An example of an unstable equilibrium point is shown in
Figure 4.9.

The definitions above are given without careful description of their domain of applicability. More formally, we define a solution to be locally stable (or locally asymptotically stable) if it is stable for all initial conditions x ∈ Bᵣ(a), where

    Bᵣ(a) = {x : ‖x − a‖ < r}

is a ball of radius r around a and r > 0. A system is globally stable if it is stable for all r > 0. Systems whose equilibrium points are only locally stable can have interesting behavior away from equilibrium points, as we explore in the next section.

Figure 4.8: Phase portrait and time domain simulation for a system with a single asymptotically stable equilibrium point. The equilibrium point xₑ at the origin is asymptotically stable, since the trajectories converge to this point as t → ∞.

Figure 4.9: Phase portrait and time domain simulation for a system with a single unstable equilibrium point. The equilibrium point xₑ at the origin is unstable, since not all trajectories that start near xₑ stay near xₑ. The sample trajectory on the right shows that the trajectories very quickly depart from zero.

For planar dynamical systems, equilibrium points have been assigned names based on their stability type. An asymptotically stable equilibrium point is called a sink or sometimes an attractor. An unstable equilibrium point can be either a source, if all trajectories lead away from the equilibrium point, or a saddle, if some trajectories lead to the equilibrium point and others move away (this is the situation pictured in Figure 4.9). Finally, an equilibrium point that is stable but not asymptotically stable (i.e., neutrally stable, such as the one in Figure 4.7) is called a center.

Example 4.5 (Congestion control). The model for congestion control in a network consisting of N identical computers connected to a single router, introduced in Section 3.4, is given by

    dw/dt = c/b − ρc(1 + w²/2),
    db/dt = Nwc/b − c,

where w is the window size and b is the buffer size of the router. Phase portraits are shown
in Figure 4.10 for two different sets of parameter values. In each case we see that the system converges to an equilibrium point in which the buffer is below its full capacity of 500 packets. The equilibrium size of the buffer represents a balance between the transmission rates for the sources and the capacity of the link. We see from the phase portraits that the equilibrium points are asymptotically stable, since all initial conditions result in trajectories that converge to these points.

Figure 4.10: Phase portraits for a congestion control protocol running with N = 60 identical source computers, for (a) ρ = 2 × 10⁻⁴, c = 10 pkts/ms and (b) ρ = 4 × 10⁻⁴, c = 20 pkts/ms. The equilibrium values correspond to a fixed window at the source, which results in a steady-state buffer size and corresponding transmission rate. A faster link (b) uses a smaller buffer size since it can handle packets at a higher rate.

Stability of Linear Systems

A linear dynamical system has the form

    dx/dt = Ax,    x(0) = x₀,

where A ∈ ℝⁿˣⁿ is a square matrix, corresponding to the dynamics matrix of a linear control system (equation (2.6)). For a linear system, the stability of the equilibrium at the origin can be determined from the eigenvalues of the matrix A:

    λ(A) = {s ∈ ℂ : det(sI − A) = 0}.

The polynomial det(sI − A) is the characteristic polynomial, and the eigenvalues are its roots. We use the notation λⱼ for the jth eigenvalue of A, so that λⱼ ∈ λ(A). In general λ can be complex-valued, although if A is real-valued, then for any eigenvalue λ, its complex conjugate λ* will also be an eigenvalue. The origin is always an equilibrium for a linear system. Since the stability of a linear system depends only on the matrix A, we find that stability is a property of the system. For a linear system, we can therefore talk about the stability of the system rather than the stability of a particular solution or equilibrium point.
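In practice, the eigenvalue test is carried out numerically. A minimal sketch, using an arbitrary example matrix (a damped oscillator):

```python
# Sketch: stability of dx/dt = A x from the eigenvalues of A. The
# matrix (a damped oscillator) is an arbitrary illustration.
import numpy as np

A = np.array([[0.0, 1.0],
              [-1.0, -0.5]])
eigs = np.linalg.eigvals(A)
print(eigs)                           # real parts are all -0.25
print(bool(np.all(eigs.real < 0)))    # True: asymptotically stable
```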
The easiest class of linear systems to analyze are those whose system matrices are in diagonal form. In this case, the dynamics have the form

    dx/dt = [ λ₁  0  ···  0 ;
              0  λ₂  ···  0 ;
              ⋮        ⋱   ⋮ ;
              0   0  ···  λₙ ] x.        (4.8)

It is easy to see that the state trajectories for this system are independent of each other, so that we can write the solution in terms of n individual systems ẋⱼ = λⱼxⱼ. Each of these scalar solutions is of the form

    xⱼ(t) = e^(λⱼt) xⱼ(0).

We see that the equilibrium point xₑ = 0 is stable if λⱼ ≤ 0 and asymptotically stable if λⱼ < 0.

Another simple case is when the dynamics are in the block diagonal form

    dx/dt = [ σ₁   ω₁  ···   0    0 ;
              −ω₁  σ₁  ···   0    0 ;
               ⋮          ⋱       ⋮ ;
               0    0  ···   σₘ   ωₘ ;
               0    0  ···  −ωₘ   σₘ ] x.

In this case, the eigenvalues can be shown to be λⱼ = σⱼ ± iωⱼ. We once again can separate the state trajectories into independent solutions for each pair of states, and the solutions are of the form

    x₂ⱼ₋₁(t) = e^(σⱼt) ( x₂ⱼ₋₁(0) cos ωⱼt + x₂ⱼ(0) sin ωⱼt ),
    x₂ⱼ(t)   = e^(σⱼt) ( −x₂ⱼ₋₁(0) sin ωⱼt + x₂ⱼ(0) cos ωⱼt ),

where j = 1, 2, ..., m. We see that this system is asymptotically stable if and only if σⱼ = Re λⱼ < 0. It is also possible to combine real and complex eigenvalues in (block) diagonal form, resulting in a mixture of solutions of the two types.

Very few systems are in one of the diagonal forms above, but some systems can be transformed into these forms via coordinate transformations. One such class of systems is those for which the dynamics matrix has distinct (nonrepeating) eigenvalues. In this case there is a matrix T ∈ ℝⁿˣⁿ such that the matrix TAT⁻¹ is in (block) diagonal form, with the block diagonal elements corresponding to the eigenvalues of the original matrix A (see Exercise 4.14). If we choose new coordinates z = Tx, then

    dz/dt = Tẋ = TAx = TAT⁻¹z,

and the linear system has a (block) diagonal dynamics matrix. Furthermore, the eigenvalues of the transformed system are the same as the original system, since if v is an eigenvector of A, then w = Tv can be shown to be an eigenvector of TAT⁻¹.
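The closed-form solution for a single 2 × 2 block can be checked against the matrix exponential e^(At) x(0). The values of σ, ω, t and x(0) below are arbitrary illustrations; `scipy.linalg.expm` computes the matrix exponential:

```python
# Sketch: check the closed-form block solution against the matrix
# exponential e^{At} x(0). sigma, omega, t and x0 are arbitrary values;
# scipy.linalg.expm computes the matrix exponential.
import math
import numpy as np
from scipy.linalg import expm

sigma, omega, t = -0.5, 2.0, 1.3
A = np.array([[sigma, omega],
              [-omega, sigma]])
x0 = np.array([1.0, 0.5])

x_expm = expm(A * t) @ x0
e = math.exp(sigma * t)
x_formula = np.array([
    e * (x0[0] * math.cos(omega * t) + x0[1] * math.sin(omega * t)),
    e * (-x0[0] * math.sin(omega * t) + x0[1] * math.cos(omega * t)),
])
print(x_expm, x_formula)            # the two agree
```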
We can reason about the stability of the original system by noting that x(t) = T⁻¹z(t), and so if the transformed system is stable (or asymptotically stable), then the original system has the same type of stability. This analysis shows that for linear systems with distinct eigenvalues, the stability of the system can be completely determined by examining the real part of the eigenvalues of the dynamics matrix. For more general systems, we make use of the following theorem, proved in the next chapter.

Theorem 4.1 (Stability of a linear system). The system dx/dt = Ax is asymptotically stable if and only if all eigenvalues of A have a strictly negative real part, and it is unstable if any eigenvalue of A has a strictly positive real part.

Example 4.6 (Compartment model). Consider the two-compartment model for drug delivery introduced in Section 3.6. Using concentrations as state variables and denoting the state vector by x, the system dynamics are given by

    dx/dt = [ −k₀−k₁   k₁ ;  k₂   −k₂ ] x + [ b₀ ; 0 ] u,    y = [ 0  1 ] x,

where the input u is the rate of injection of a drug into compartment 1 and the concentration of the drug in compartment 2 is the measured output y. We wish to design a feedback control law that maintains a constant output given by y = y_d. We choose an output feedback control law of the form

    u = −k(y − y_d) + u_d,

where u_d is the rate of injection required to maintain the desired concentration and k is a feedback gain that should be chosen such that the closed loop system is stable. Substituting the control law into the system, we obtain

    dx/dt = [ −k₀−k₁   k₁−b₀k ;  k₂   −k₂ ] x + [ b₀ ; 0 ] u_d =: Ax + Bu_d,
    y = [ 0  1 ] x =: Cx.

The equilibrium concentration xₑ ∈ ℝ² is given by xₑ = −A⁻¹Bu_d and

    yₑ = −CA⁻¹Bu_d = b₀k₂ / (k₀k₂ + b₀k₂k) · u_d.

Choosing u_d such that yₑ = y_d provides the constant rate of injection required to maintain the desired output. We can now shift coordinates to place the equilibrium point at the origin, which yields

    dz/dt = [ −k₀−k₁   k₁−b₀k ;  k₂   −k₂ ] z,

where z = x − xₑ. We can now apply the results of Theorem 4.1 to determine the stability of the system. The eigenvalues of the system are given by the roots of the characteristic polynomial

    λ(s) = s² + (k₀ + k₁ + k₂)s + (k₀k₂ + b₀k₂k).
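With illustrative parameter values (the constants k₀, k₁, k₂, b₀ below are arbitrary assumptions, since the example keeps them symbolic), the closed-loop eigenvalues can be computed directly for several gains:

```python
# Sketch: closed-loop eigenvalues of the compartment model for several
# gains. The parameter values k0, k1, k2, b0 are arbitrary assumptions.
import numpy as np

k0, k1, k2, b0 = 0.1, 0.5, 0.3, 1.0

def closed_loop_eigs(k):
    A = np.array([[-k0 - k1, k1 - b0 * k],
                  [k2, -k2]])
    return np.linalg.eigvals(A)

for k in (0.0, 1.0, 10.0):
    eigs = closed_loop_eigs(k)
    print(k, eigs, bool(np.all(eigs.real < 0)))   # stable in every case
```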
While the specific form of the roots is messy, it can be shown that the roots have negative real parts as long as the linear term and the constant term are both positive (Exercise 4.16). Hence the system is stable for any k ≥ 0.

Stability Analysis via Linear Approximation

An important feature of differential equations is that it is often possible to determine the local stability of an equilibrium point by approximating the system by a linear system. The following example illustrates the basic idea.

Example 4.7 (Inverted pendulum). Consider again an inverted pendulum whose open loop dynamics are given by

    dx/dt = ( x₂, sin x₁ − γx₂ ),

where we have defined the state as x = (θ, θ̇). We first consider the equilibrium point at x = (0, 0), corresponding to the straight-up position. If we assume that the angle θ = x₁ remains small, then we can replace sin x₁ with x₁ and cos x₁ with 1, which gives the approximate system

    dx/dt = ( x₂, x₁ − γx₂ ) = [ 0  1 ;  1  −γ ] x.        (4.9)

Intuitively, this system should behave similarly to the more complicated model as long as x₁ is small. In particular, it can be verified that the equilibrium point (0, 0) is unstable by plotting the phase portrait or computing the eigenvalues of the dynamics matrix in equation (4.9).

We can also approximate the system around the stable equilibrium point at x = (π, 0). In this case we have to expand sin x₁ and cos x₁ around x₁ = π, according to the expansions

    sin(π + θ) = −sin θ ≈ −θ,    cos(π + θ) = −cos θ ≈ −1.

If we define z₁ = x₁ − π and z₂ = x₂, the resulting approximate dynamics are given by

    dz/dt = ( z₂, −z₁ − γz₂ ) = [ 0  1 ;  −1  −γ ] z.        (4.10)

Note that z = (0, 0) is the equilibrium point for this system and that it has the same basic form as the dynamics shown in Figure 4.8. Figure 4.11 shows the phase portraits for the original system and the approximate system around the corresponding equilibrium points. Note that they are very similar, although not exactly the same. It can be shown that if a linear approximation has either asymptotically stable or unstable equilibrium points, then the local stability of the original system must be the same (Theorem 4.3).
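The two pendulum linearizations (4.9) and (4.10) can be compared numerically; the damping γ = 0.1 is an arbitrary illustration value:

```python
# Sketch: eigenvalues of the pendulum linearizations (4.9) and (4.10).
# The damping gamma = 0.1 is an arbitrary illustration value.
import numpy as np

gamma = 0.1
A_up = np.array([[0.0, 1.0], [1.0, -gamma]])      # upright equilibrium
A_down = np.array([[0.0, 1.0], [-1.0, -gamma]])   # hanging equilibrium

eig_up = np.linalg.eigvals(A_up)
eig_down = np.linalg.eigvals(A_down)
print(eig_up)     # one eigenvalue with positive real part: unstable
print(eig_down)   # both real parts negative: asymptotically stable
```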
More generally, suppose that we have a nonlinear system

    dx/dt = F(x)

that has an equilibrium point at xₑ. Computing the Taylor series expansion of the vector field, we can write

    dx/dt = F(xₑ) + (∂F/∂x)|ₓₑ (x − xₑ) + higher-order terms in (x − xₑ).

Since F(xₑ) = 0, we can approximate the system by choosing a new state variable z = x − xₑ and writing

    dz/dt = Az,    where A = (∂F/∂x)|ₓₑ.        (4.11)

We call the system (4.11) the linear approximation of the original nonlinear system, or the linearization at xₑ.

The fact that a linear model can be used to study the behavior of a nonlinear system near an equilibrium point is a powerful one. Indeed, we can take this even further and use a local linear approximation of a nonlinear system to design a feedback law that keeps the system near its equilibrium point (design of dynamics). Thus, feedback can be used to make sure that solutions remain close to the equilibrium point, which in turn ensures that the linear approximation used to stabilize it is valid.

Linear approximations can also be used to understand the stability of nonequilibrium solutions, as illustrated by the following example.

Example 4.8 (Stable limit cycle). Consider the system given by equation (4.6),

    dx₁/dt = x₂ + x₁(1 − x₁² − x₂²),
    dx₂/dt = −x₁ + x₂(1 − x₁² − x₂²),

whose phase portrait is shown in Figure 4.5. The differential equation has a periodic solution

    x₁(t) = x₁(0) cos t + x₂(0) sin t,        (4.12)

with x₁²(0) + x₂²(0) = 1. To explore the stability of this solution, we introduce polar coordinates r and φ, which are related to the state variables x₁ and x₂ by

    x₁ = r cos φ,    x₂ = r sin φ.

Differentiation gives the following linear equations for ṙ and φ̇:

    ẋ₁ = ṙ cos φ − rφ̇ sin φ,    ẋ₂ = ṙ sin φ + rφ̇ cos φ.

Solving this linear system for ṙ and φ̇ gives, after some calculation,

    dr/dt = r(1 − r²),    dφ/dt = −1.

Notice that the equations are decoupled; hence we can analyze the stability of each state separately. The equation for r has three equilibria: r = 0, r = 1 and r = −1 (not realizable since r must be positive). We can analyze the stability of these equilibria by linearizing the radial dynamics with F(r) = r(1 − r²).
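The sign analysis of F(r) = r(1 − r²) at the two realizable equilibria can be spot-checked with a centered finite difference (the step size h is an arbitrary small value):

```python
# Sketch: spot-check the radial stability analysis for F(r) = r(1 - r^2)
# with a centered finite difference; h is an arbitrary small step.

def F(r):
    return r * (1.0 - r * r)

def dF(r, h=1e-6):
    return (F(r + h) - F(r - h)) / (2.0 * h)

print(dF(0.0))    # about +1: r = 0 is unstable
print(dF(1.0))    # about -2: r = 1 is asymptotically stable
```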
The corresponding linear dynamics are given by

    dr/dt = (∂F/∂r)|ᵣₑ r = (1 − 3rₑ²) r,    rₑ = 0, 1,

where we have abused notation and used r to represent the deviation from the equilibrium point. It follows from the sign of (1 − 3rₑ²) that the equilibrium rₑ = 0 is unstable and the equilibrium rₑ = 1 is asymptotically stable. Thus for any initial condition r > 0, the solution goes to r = 1 as time goes to infinity, but if the system starts with r = 0, it will remain at the equilibrium for all times. This implies that all solutions to the original system that do not start at x₁ = x₂ = 0 will approach the circle x₁² + x₂² = 1 as time increases.

To show the stability of the full solution (4.12), we must investigate the behavior of neighboring solutions with different initial conditions. We have already shown that the radius r will approach that of the solution (4.12) as long as r(0) > 0. The equation for the angle φ can be integrated analytically to give φ(t) = φ(0) − t, which shows that solutions starting at different angles φ will neither converge nor diverge. Thus the unit circle is attracting, but the solution (4.12) is only stable, not asymptotically stable. The behavior of the system is illustrated by the simulation in Figure 4.12. Notice that the solutions approach the circle rapidly, but that there is a constant phase shift between the solutions.

4.4 Lyapunov Stability Analysis

We now return to the study of the full nonlinear system

    dx/dt = F(x),    x ∈ ℝⁿ.        (4.13)

Having defined when a solution for a nonlinear dynamical system is stable, we can now ask how to prove that a given solution is stable, asymptotically stable or unstable. For physical systems, one can often argue about stability based on dissipation of energy. The generalization of that technique to arbitrary dynamical systems is based on the use of Lyapunov functions in place of energy.

Figure 4.12: Solution curves for a stable limit cycle. The phase portrait on the left shows that the trajectory for the system rapidly converges to the stable limit cycle. The starting points for the trajectories are marked by
circles in the phase portrait. The time domain plots on the right show that the states do not converge to the solution but instead maintain a constant phase error.

In this section we will describe techniques for determining the stability of solutions for a nonlinear system (4.13). We will generally be interested in stability of equilibrium points, and it will be convenient to assume that xₑ = 0 is the equilibrium point of interest. (If not, rewrite the equations in a new set of coordinates z = x − xₑ.)

Lyapunov Functions

A Lyapunov function V: ℝⁿ → ℝ is an energy-like function that can be used to determine the stability of a system. Roughly speaking, if we can find a nonnegative function that always decreases along trajectories of the system, we can conclude that the minimum of the function is a (locally) stable equilibrium point.

To describe this more formally, we start with a few definitions. We say that a continuous function V is positive definite if V(x) > 0 for all x ≠ 0 and V(0) = 0. Similarly, a function is negative definite if V(x) < 0 for all x ≠ 0 and V(0) = 0. We say that a function V is positive semidefinite if V(x) ≥ 0 for all x, but V(x) can be zero at points other than just x = 0.

To illustrate the difference between a positive definite function and a positive semidefinite function, suppose that x ∈ ℝ² and let

    V₁(x) = x₁²,    V₂(x) = x₁² + x₂².

Both V₁ and V₂ are always nonnegative. However, it is possible for V₁ to be zero even if x ≠ 0. Specifically, if we set x = (0, c), where c ∈ ℝ is any nonzero number, then V₁(x) = 0. On the other hand, V₂(x) = 0 if and only if x = (0, 0). Thus V₁ is positive semidefinite and V₂ is positive definite.

We can now characterize the stability of an equilibrium point xₑ = 0 for the system (4.13).

Theorem 4.2 (Lyapunov stability theorem). Let V be a nonnegative function on

Figure 4.13: Geometric illustration of Lyapunov's stability theorem. The closed contours represent the level sets of the Lyapunov function V(x) = c. If dx/dt points inward to these sets at all points along the
contour, then the trajectories of the system will always cause V(x) to decrease along the trajectory.

ℝⁿ, and let V̇ represent the time derivative of V along trajectories of the system dynamics (4.13):

    V̇ = (∂V/∂x)(dx/dt) = (∂V/∂x) F(x).

Let Bᵣ = Bᵣ(0) be a ball of radius r around the origin. If there exists r > 0 such that V is positive definite and V̇ is negative semidefinite for all x ∈ Bᵣ, then x = 0 is locally stable in the sense of Lyapunov. If V is positive definite and V̇ is negative definite in Bᵣ, then x = 0 is locally asymptotically stable.

If V satisfies one of the conditions above, we say that V is a (local) Lyapunov function for the system. These results have a nice geometric interpretation. The level curves for a positive definite function are the curves defined by V(x) = c, c > 0, and for each c this gives a closed contour, as shown in Figure 4.13. The condition that V̇(x) is negative simply means that the vector field points toward lower-level contours. This means that the trajectories move to smaller and smaller values of V, and if V̇ is negative definite, then x must approach 0.

Example 4.9 (Scalar nonlinear system). Consider the scalar nonlinear system

    dx/dt = 2/(1 + x) − x.

This system has equilibrium points at x = 1 and x = −2. We consider the equilibrium point at x = 1 and rewrite the dynamics using z = x − 1:

    dz/dt = 2/(2 + z) − z − 1,

which has an equilibrium point at z = 0. Now consider the candidate Lyapunov function

    V(z) = z²/2,

which is globally positive definite. The derivative of V along trajectories of the system is given by

    V̇(z) = z ż = 2z/(2 + z) − z² − z.

If we restrict our analysis to an interval Bᵣ, where r < 2, then 2 + z > 0 and we can multiply through by 2 + z to obtain

    2z − (z² + z)(2 + z) = −z³ − 3z² = −z²(z + 3) < 0,    z ∈ Bᵣ, r < 2, z ≠ 0.

It follows that V̇(z) < 0 for all z ∈ Bᵣ, z ≠ 0, and hence the equilibrium point xₑ = 1 is locally asymptotically stable.
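The inequality claimed in Example 4.9 can be spot-checked by sampling V̇(z) on a grid inside the interval:

```python
# Sketch: sample Vdot(z) = 2z/(2+z) - z^2 - z on a grid inside the
# interval 0 < |z| < 2 and confirm it is negative (Example 4.9).

def vdot(z):
    return 2.0 * z / (2.0 + z) - z * z - z

zs = [i / 100.0 for i in range(-199, 200) if i != 0]
print(all(vdot(z) < 0.0 for z in zs))   # True
```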
A slightly more complicated situation occurs if V̇ is negative semidefinite. In this case it is possible that V̇(x) = 0 when x ≠ 0, and hence x could stop decreasing in value. The following example illustrates this case.

Example 4.10 (Hanging pendulum). A normalized model for a hanging pendulum is

    dx₁/dt = x₂,    dx₂/dt = −sin x₁,

where x₁ is the angle between the pendulum and the vertical, with positive x₁ corresponding to counterclockwise rotation. The equation has an equilibrium x₁ = x₂ = 0, which corresponds to the pendulum hanging straight down. To explore the stability of this equilibrium, we choose the total energy as a Lyapunov function:

    V(x) = 1 − cos x₁ + ½x₂² ≈ ½x₁² + ½x₂².

The Taylor series approximation shows that the function is positive definite for small x. The time derivative of V(x) is

    V̇ = ẋ₁ sin x₁ + ẋ₂x₂ = x₂ sin x₁ − x₂ sin x₁ = 0.

Since V̇ is identically zero, it is negative semidefinite but not negative definite, and it follows from Lyapunov's theorem that the equilibrium is stable but not necessarily asymptotically stable. When perturbed, the pendulum actually moves in a trajectory that corresponds to constant energy.

Lyapunov functions are not always easy to find, and they are not unique. In many cases energy functions can be used as a starting point, as was done in Example 4.10. It turns out that Lyapunov functions can always be found for any stable system (under certain conditions), and hence one knows that if a system is stable, a Lyapunov function exists (and vice versa). Recent results using sum-of-squares methods have provided systematic approaches for finding Lyapunov functions [167]. Sum-of-squares techniques can be applied to a broad variety of systems, including systems whose dynamics are described by polynomial equations, as well as hybrid systems, which can have different models for different regions of state space.

For a linear dynamical system of the form

    dx/dt = Ax,

it is possible to construct Lyapunov functions in a systematic manner. To do so, we consider quadratic functions of the form

    V(x) = xᵀPx,

where P ∈ ℝⁿˣⁿ is a symmetric matrix (P = Pᵀ). The condition that V be positive definite is equivalent to the condition that P be a positive definite matrix:

    xᵀPx > 0  for all x ≠ 0,

which we write as P > 0. It can be shown that if P is symmetric, then P is
positive definite if and only if all of its eigenvalues are real and positive.

Given a candidate Lyapunov function V(x) = xᵀPx, we can now compute its derivative along flows of the system:

    V̇ = (∂V/∂x)(dx/dt) = xᵀ(AᵀP + PA)x =: −xᵀQx.

The requirement that V̇ be negative definite (for asymptotic stability) becomes a condition that the matrix Q be positive definite. Thus, to find a Lyapunov function for a linear system it is sufficient to choose a Q > 0 and solve the Lyapunov equation

    AᵀP + PA = −Q.        (4.14)

This is a linear equation in the entries of P, and hence it can be solved using linear algebra. It can be shown that the equation always has a solution if all of the eigenvalues of the matrix A are in the left half-plane. Moreover, the solution P is positive definite if Q is positive definite. It is thus always possible to find a quadratic Lyapunov function for a stable linear system. We will defer a proof of this until Chapter 5, where more tools for the analysis of linear systems will be developed.

Knowing that we have a direct method to find Lyapunov functions for linear systems, we can now investigate the stability of nonlinear systems. Consider the system

    dx/dt = F(x) =: Ax + F̃(x),        (4.15)

where F(0) = 0 and F̃(x) contains terms that are second order and higher in the elements of x. The function Ax is an approximation of F(x) near the origin, and we can determine the Lyapunov function for the linear approximation and investigate if it is also a Lyapunov function for the full nonlinear system. The following example illustrates the approach.
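Before turning to the example, the linear construction can be sketched numerically: solve the Lyapunov equation (4.14) for a stable matrix A and confirm that P is positive definite. SciPy's `solve_continuous_lyapunov(a, q)` solves aX + Xaᴴ = q, so we pass a = Aᵀ and q = −Q; the matrix A below is an arbitrary stable example:

```python
# Sketch: solve the Lyapunov equation A^T P + P A = -Q for a stable A
# and confirm that P is positive definite. SciPy's
# solve_continuous_lyapunov(a, q) solves a X + X a^H = q, so we pass
# a = A^T and q = -Q. The matrix A is an arbitrary stable example.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])        # eigenvalues -1 and -2: stable
Q = np.eye(2)
P = solve_continuous_lyapunov(A.T, -Q)

print(P)
print(np.linalg.eigvalsh(P))        # both eigenvalues positive: P > 0
print(A.T @ P + P @ A + Q)          # residual is numerically zero
```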
Example 4.11 (Genetic switch). Consider the dynamics of a set of repressors connected together in a cycle, as shown in Figure 4.14a. The normalized dynamics for this system were given in Exercise 2.9:

    dz₁/dτ = μ/(1 + z₂ⁿ) − z₁,    dz₂/dτ = μ/(1 + z₁ⁿ) − z₂,        (4.16)

where z₁ and z₂ are scaled versions of the protein concentrations, and n and μ are parameters that describe the interconnection between the genes; we have set the external inputs u₁ and u₂ to zero.

Figure 4.14: Stability of a genetic switch. The circuit diagram in (a) represents two proteins that are each repressing the production of the other. The inputs u₁ and u₂ interfere with this repression, allowing the circuit dynamics to be modified. The equilibrium points for this circuit can be determined by the intersection of the two curves shown in (b).

The equilibrium points for the system are found by equating the time derivatives to zero. We define

    f(u) = μ/(1 + uⁿ),    f′(u) = df/du = −μnu^(n−1)/(1 + uⁿ)²,

and the equilibrium points are defined as the solutions of the equations

    z₁ = f(z₂),    z₂ = f(z₁).

If we plot the curves z₁ = f(z₂) and z₂ = f(z₁) on a graph, then these equations will have a solution when the curves intersect, as shown in Figure 4.14b. Because of the shape of the curves, it can be shown that there will always be three solutions: one at z₁ₑ = z₂ₑ, one with z₁ₑ < z₂ₑ and one with z₁ₑ > z₂ₑ. If μ ≫ 1, then we can show that the solutions are given approximately by

    z₁ₑ ≈ μ, z₂ₑ ≈ 1/μ^(n−1);    z₁ₑ = z₂ₑ;    z₁ₑ ≈ 1/μ^(n−1), z₂ₑ ≈ μ.        (4.17)

To check the stability of the system, we write f(u) in terms of its Taylor series expansion about uₑ:

    f(u) = f(uₑ) + f′(uₑ)(u − uₑ) + ½f″(uₑ)(u − uₑ)² + higher-order terms,

where f′ represents the first derivative of the function and f″ the second. Using these approximations, the dynamics can then be written as

    dw/dt = [ −1   f′(z₂ₑ) ;  f′(z₁ₑ)   −1 ] w + F̃(w),

where w = z − zₑ is the shifted state and F̃(w) represents quadratic and higher-order terms.

We now use equation (4.14) to search for a Lyapunov function. Choosing Q = I and letting P ∈ ℝ²ˣ² have elements p_ij, we search for a solution of the equation

    AᵀP + PA = −I,    A = [ −1   f′₂ ;  f′₁   −1 ],

where f′₁ = f′(z₁ₑ) and f′₂ = f′(z₂ₑ). Note that we have set p₂₁ = p₁₂ to force P to be symmetric. Multiplying out the matrices, we obtain

    2(f′₁p₁₂ − p₁₁) = −1,    f′₂p₁₁ − 2p₁₂ + f′₁p₂₂ = 0,    2(f′₂p₁₂ − p₂₂) = −1,

which is a set of linear equations for the unknowns p_ij. We can solve these linear equations to obtain
    p₁₁ = (f′₁² − f′₁f′₂ + 2) / (4(1 − f′₁f′₂)),
    p₁₂ = (f′₁ + f′₂) / (4(1 − f′₁f′₂)),
    p₂₂ = (f′₂² − f′₁f′₂ + 2) / (4(1 − f′₁f′₂)).

To check that V(w) = wᵀPw is a Lyapunov function, we must verify that V(w) is a positive definite function, or equivalently that P > 0. Since P is a 2 × 2 symmetric matrix, it has two real eigenvalues λ₁ and λ₂ that satisfy

    λ₁ + λ₂ = trace(P),    λ₁ · λ₂ = det(P).

In order for P to be positive definite we must have that λ₁ and λ₂ are positive, and we thus require that

    trace(P) = ((f′₁ − f′₂)² + 4) / (4 − 4f′₁f′₂) > 0,    det(P) = ((f′₁ − f′₂)² + 4) / (16 − 16f′₁f′₂) > 0.

We see that trace(P) = 4 det(P) and the numerator of the expressions is just (f′₁ − f′₂)² + 4 > 0, so it suffices to check the sign of 1 − f′₁f′₂. In particular, for P to be positive definite we require that

    f′(z₁ₑ) f′(z₂ₑ) < 1.

We can now make use of the expressions for f′ defined earlier and evaluate at the approximate locations of the equilibrium points derived in equation (4.17). For the equilibrium points where z₁ₑ ≠ z₂ₑ, we can show that

    f′(z₁ₑ) f′(z₂ₑ) = f′(μ) f′(1/μ^(n−1))
                    = ( −μnμ^(n−1)/(1 + μⁿ)² ) · ( −μnμ^(−(n−1)²)/(1 + μ^(−n(n−1)))² )
                    ≈ n²μ^(n−n²).

Using n = 2 and μ ≈ 200 from Exercise 2.9, we see that f′(z₁ₑ) f′(z₂ₑ) ≪ 1, and hence P is positive definite. This implies that V is a positive definite function and hence a potential Lyapunov function for the system.
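The expressions for p₁₁, p₁₂ and p₂₂ above can be verified numerically: for sample values of f′(z₁ₑ) and f′(z₂ₑ) satisfying f′₁f′₂ < 1 (the values below are arbitrary assumptions), the resulting P solves the Lyapunov equation and is positive definite:

```python
# Sketch: verify the solution of the Lyapunov equation for the genetic
# switch linearization. f1p and f2p stand for f'(z1e) and f'(z2e); the
# sample values are arbitrary, chosen so that f1p * f2p < 1.
import numpy as np

f1p, f2p = -0.2, -0.3
A = np.array([[-1.0, f2p],
              [f1p, -1.0]])

d = 4.0 * (1.0 - f1p * f2p)
P = np.array([[(f1p**2 - f1p * f2p + 2.0) / d, (f1p + f2p) / d],
              [(f1p + f2p) / d, (f2p**2 - f1p * f2p + 2.0) / d]])

print(A.T @ P + P @ A)              # equals -I up to rounding
print(np.linalg.eigvalsh(P))        # both positive: P is positive definite
```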
concentrations are perturbed by +2 in z1 and −2 in z2, moving the state into the region of the state space whose solutions converge to the equilibrium point where z2e < z1e.

point. By construction,

V̇ = wᵀ(PA + AᵀP)w + F̃ᵀ(w)Pw + wᵀP F̃(w) = −wᵀw + F̃ᵀ(w)Pw + wᵀP F̃(w).

Since all terms in F̃ are quadratic or higher order in w, it follows that F̃ᵀ(w)Pw and wᵀP F̃(w) consist of terms that are at least third order in w. Therefore, if w is sufficiently close to zero, then the cubic and higher-order terms will be smaller than the quadratic terms. Hence, sufficiently close to w = 0, V̇ is negative definite, allowing us to conclude that these equilibrium points are both stable.

Figure 4.15 shows the phase portrait and time traces for a system with μ = 4, illustrating the bistable nature of the system. When the initial condition starts with a concentration of protein B greater than that of A, the solution converges to the equilibrium point at approximately (1/μ^(n−1), μ). If A is greater than B, then it goes to (μ, 1/μ^(n−1)). The equilibrium point with z1e = z2e is unstable.

More generally, we can investigate what the linear approximation tells about the stability of a solution to a nonlinear equation. The following theorem gives a partial answer for the case of stability of an equilibrium point.

Theorem 4.3. Consider the dynamical system (4.15) with F(0) = 0 and F such that lim‖x‖→0 ‖F(x)‖/‖x‖ = 0. If the real parts of all eigenvalues of A are strictly less than zero, then xe = 0 is a locally asymptotically stable equilibrium point of equation (4.15).

This theorem implies that asymptotic stability of the linear approximation implies local asymptotic stability of the original nonlinear system. The theorem is very important for control because it implies that stabilization of a linear approximation of a nonlinear system results in a stable equilibrium for the nonlinear system. The proof of this theorem follows the technique used in Example 4.11. A formal proof can be found in [123].

Krasovski–Lasalle Invariance Principle

For general nonlinear systems, especially those in symbolic form, it
can be difficult to find a positive definite function V whose derivative is strictly negative definite. The Krasovski–Lasalle theorem enables us to conclude the asymptotic stability of an equilibrium point under less restrictive conditions, namely in the case where V̇ is negative semidefinite, which is often easier to construct. However, it applies only to time-invariant or periodic systems. This section makes use of some additional concepts from dynamical systems; see Hahn [94] or Khalil [123] for a more detailed description.

We will deal with the time-invariant case and begin by introducing a few more definitions. We denote the solution trajectories of the time-invariant system

dx/dt = F(x)     (4.18)

as x(t; a), which is the solution of equation (4.18) at time t starting from a at t0 = 0. The ω-limit set of a trajectory x(t; a) is the set of all points z ∈ ℝⁿ such that there exists a strictly increasing sequence of times tn with x(tn; a) → z as n → ∞. A set M ⊂ ℝⁿ is said to be an invariant set if for all b ∈ M we have x(t; b) ∈ M for all t ≥ 0. It can be proved that the ω-limit set of every trajectory is closed and invariant. We may now state the Krasovski–Lasalle principle.

Theorem 4.4 (Krasovski–Lasalle principle). Let V : ℝⁿ → ℝ be a locally positive definite function such that on the compact set Ωr = {x ∈ ℝⁿ : V(x) ≤ r} we have V̇(x) ≤ 0. Define

S = {x ∈ Ωr : V̇(x) = 0}.

As t → ∞, the trajectory tends to the largest invariant set inside S; i.e., its ω-limit set is contained inside the largest invariant set in S. In particular, if S contains no invariant sets other than x = 0, then 0 is asymptotically stable.

Proofs are given in [128] and [135].

Lyapunov functions can often be used to design stabilizing controllers, as is illustrated by the following example, which also illustrates how the Krasovski–Lasalle principle can be applied.

Example 4.12 (Inverted pendulum). Following the analysis in Example 2.7, an inverted pendulum can be described by the following normalized model:

dx1/dt = x2, dx2/dt = sin x1 + u cos x1,     (4.19)

(a) Physical system. (b) Phase portrait. (c) Manifold view. Figure 4.16: Stabilized
inverted pendulum. A control law applies a force u at the bottom of the pendulum to stabilize the inverted position (a). The phase portrait (b) shows that the equilibrium point corresponding to the vertical position is stabilized. The shaded region indicates the set of initial conditions that converge to the origin. The ellipse corresponds to a level set of a Lyapunov function V(x) for which V(x) > 0 and V̇(x) < 0 for all points inside the ellipse. This can be used as an estimate of the region of attraction of the equilibrium point. The actual dynamics of the system evolve on a manifold (c).

where x1 is the angular deviation from the upright position and u is the scaled acceleration of the pivot, as shown in Figure 4.16a. The system has an equilibrium at x1 = x2 = 0, which corresponds to the pendulum standing upright. This equilibrium is unstable.

To find a stabilizing controller, we consider the following candidate for a Lyapunov function:

V(x) = (cos x1 − 1) + a(1 − cos² x1) + ½x2² ≈ (a − ½)x1² + ½x2².

The Taylor series expansion shows that the function is positive definite near the origin if a > 0.5. The time derivative of V(x) is

V̇ = (−sin x1 + 2a sin x1 cos x1)ẋ1 + x2 ẋ2 = x2 cos x1 (u + 2a sin x1).

Choosing the feedback law

u = −2a sin x1 − x2 cos x1

gives

V̇ = −x2² cos² x1.

It follows from Lyapunov's theorem that the equilibrium is locally stable. However, since the function is only negative semidefinite, we cannot conclude asymptotic stability using Theorem 4.2. Note, however, that V̇ = 0 implies x2 = 0 or x1 = π/2 + nπ.

If we restrict our analysis to a small neighborhood of the origin Ωr, r ≪ π/2, then we can define

S = {(x1, x2) ∈ Ωr : x2 = 0}

and we can compute the largest invariant set inside S. For a trajectory to remain in this set we must have x2 = 0 for all t, and hence ẋ2(t) = 0 as well. Using the dynamics of the system (4.19), we see that x2(t) = 0 and ẋ2(t) = 0 implies x1(t) = 0 as well. Hence the largest invariant set inside S is (x1, x2) = 0, and we can use the Krasovski–Lasalle principle to conclude that the origin is locally asymptotically stable. A phase portrait of the closed loop
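The stabilizing feedback derived above is easy to check in simulation. A minimal sketch; the gain a = 2, the initial condition, and the fixed-step RK4 integrator are arbitrary choices, not from the text:

```python
import math

a = 2.0  # Lyapunov parameter; must satisfy a > 0.5

def dynamics(x1, x2):
    # Closed-loop pendulum with u = -2a sin(x1) - x2 cos(x1).
    u = -2 * a * math.sin(x1) - x2 * math.cos(x1)
    return x2, math.sin(x1) + u * math.cos(x1)

def rk4_step(x1, x2, dt):
    k1 = dynamics(x1, x2)
    k2 = dynamics(x1 + dt / 2 * k1[0], x2 + dt / 2 * k1[1])
    k3 = dynamics(x1 + dt / 2 * k2[0], x2 + dt / 2 * k2[1])
    k4 = dynamics(x1 + dt * k3[0], x2 + dt * k3[1])
    return (x1 + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            x2 + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Start inside the estimated region of attraction and integrate for 40 s.
x1, x2 = 0.5, 0.0
dt = 0.01
for _ in range(int(40 / dt)):
    x1, x2 = rk4_step(x1, x2, dt)
```

Starting from (0.5, 0), the state converges to the origin, consistent with the Krasovski–Lasalle argument.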
system is shown in Figure 4.16b.

In the analysis and the phase portrait we have treated the angle of the pendulum θ = x1 as a real number. In fact, θ is an angle, with θ = 2π equivalent to θ = 0. Hence the dynamics of the system actually evolve on a manifold (smooth surface), as shown in Figure 4.16c. Analysis of nonlinear dynamical systems on manifolds is more complicated, but uses many of the same basic ideas presented here.

4.5 Parametric and Nonlocal Behavior

Most of the tools that we have explored are focused on the local behavior of a fixed system near an equilibrium point. In this section we briefly introduce some concepts regarding the global behavior of nonlinear systems and the dependence of a system's behavior on parameters in the system model.

Regions of Attraction

To get some insight into the behavior of a nonlinear system, we can start by finding the equilibrium points. We can then proceed to analyze the local behavior around the equilibria. The behavior of a system near an equilibrium point is called the local behavior of the system.

The solutions of the system can be very different far away from an equilibrium point. This is seen, for example, in the stabilized pendulum in Example 4.12. The inverted equilibrium point is stable, with small oscillations that eventually converge to the origin. But far away from this equilibrium point there are trajectories that converge to other equilibrium points, or even cases in which the pendulum swings around the top multiple times, giving very long oscillations that are topologically different from those near the origin.

To better understand the dynamics of the system, we can examine the set of all initial conditions that converge to a given asymptotically stable equilibrium point. This set is called the region of attraction for the equilibrium point. An example is shown by the shaded region of the phase portrait in Figure 4.16b. In general, computing regions of attraction is difficult. However, even if we cannot determine the region of attraction, we can
often obtain patches around the stable equilibria that are attracting. This gives partial information about the behavior of the system.

One method for approximating the region of attraction is through the use of Lyapunov functions. Suppose that V is a local Lyapunov function for a system around an equilibrium point x0. Let Ωr be a set on which V(x) has a value less than r,

Ωr = {x ∈ ℝⁿ : V(x) ≤ r},

and suppose that V̇(x) < 0 for all x ∈ Ωr, with equality only at the equilibrium point x0. Then Ωr is inside the region of attraction of the equilibrium point. Since this approximation depends on the Lyapunov function, and the choice of Lyapunov function is not unique, it can sometimes be a very conservative estimate.

It is sometimes the case that we can find a Lyapunov function V such that V is positive definite and V̇ is negative (semi-)definite for all x ∈ ℝⁿ. In this case it can be shown that the region of attraction for the equilibrium point is the entire state space, and the equilibrium point is said to be globally stable.

Example 4.13 (Stabilized inverted pendulum). Consider again the stabilized inverted pendulum from Example 4.12. The Lyapunov function for the system was

V(x) = (cos x1 − 1) + a(1 − cos² x1) + ½x2²,

and V̇ was negative semidefinite for all x and nonzero when x1 ≠ ±π/2. Hence any x such that |x1| < π/2 and V(x) > 0 will be inside the invariant set defined by the level curves of V(x). One of these level sets is shown in Figure 4.16b.

Bifurcations

Another important property of nonlinear systems is how their behavior changes as the parameters governing the dynamics change. We can study this in the context of models by exploring how the location of equilibrium points, their stability, their regions of attraction, and other dynamic phenomena, such as limit cycles, vary based on the values of the parameters in the model. Consider a differential equation of the form

dx/dt = F(x, μ), x ∈ ℝⁿ, μ ∈ ℝᵏ,     (4.20)

where x is the state and μ is a set of parameters that describe the family of equations. The equilibrium solutions satisfy F(x, μ) = 0, and as μ is varied, the
corresponding solutions xe(μ) can also vary. We say that the system (4.20) has a bifurcation at μ = μ* if the behavior of the system changes qualitatively at μ*. This can occur either because of a change in stability type or a change in the number of solutions at a given value of μ.

Example 4.14 (Predator–prey). Consider the predator–prey system described in Section 3.7. The dynamics of the system are given by

dH/dt = rH(1 − H/k) − aHL/(c + H), dL/dt = b·aHL/(c + H) − dL,     (4.21)

Figure 4.17: Bifurcation analysis of the predator–prey system. (a) Parametric stability diagram showing the regions in parameter space for which the system is stable. (b) Bifurcation diagram showing the location and stability of the equilibrium point as a function of a. The solid line represents a stable equilibrium point, and the dashed line represents an unstable equilibrium point. The dash-dotted lines indicate the upper and lower bounds for the limit cycle at that parameter value (computed via simulation). The nominal values of the parameters in the model are a = 3.2, b = 0.6, c = 50, d = 0.56, k = 125, and r = 1.6.

where H and L are the numbers of hares (prey) and lynxes (predators), and a, b, c, d, k, and r are parameters that model a given predator–prey system (described in more detail in Section 3.7). The system has an equilibrium point at He ≠ 0 and Le ≠ 0 that can be found numerically.

To explore how the parameters of the model affect the behavior of the system, we choose to focus on two specific parameters of interest: a, the interaction coefficient between the populations, and c, a parameter affecting the prey consumption rate. Figure 4.17a is a numerically computed parametric stability diagram showing the regions in the chosen parameter space for which the equilibrium point is stable (leaving the other parameters at their nominal values). We see from this figure that for certain combinations of a and c we get a stable equilibrium
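The nonzero equilibrium and its linearization can be computed directly from (4.21). A minimal sketch using the nominal parameter values above; the central-difference Jacobian is an arbitrary implementation choice:

```python
import numpy as np

# Nominal parameters from the text.
a, b, c, d, k, r = 3.2, 0.6, 50.0, 0.56, 125.0, 1.6

def F(H, L):
    # Right-hand side of the predator-prey dynamics (4.21).
    return np.array([r * H * (1 - H / k) - a * H * L / (c + H),
                     b * a * H * L / (c + H) - d * L])

# Nonzero equilibrium: solve dL/dt = 0 for H, then dH/dt = 0 for L.
He = c * d / (a * b - d)
Le = r * (1 - He / k) * (c + He) / a

# Linearize with a central-difference Jacobian; its eigenvalues determine
# local stability of the equilibrium at these parameter values.
eps = 1e-6
J = np.column_stack([(F(He + eps, Le) - F(He - eps, Le)) / (2 * eps),
                     (F(He, Le + eps) - F(He, Le - eps)) / (2 * eps)])
eigenvalues = np.linalg.eigvals(J)
```

Sweeping a over a range and recording the sign of the largest real part reproduces the stability boundary sketched in Figure 4.17b.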
point, while at other values this equilibrium point is unstable. Figure 4.17b is a numerically computed bifurcation diagram for the system. In this plot we choose one parameter to vary (a) and then plot the equilibrium value of one of the states (H) on the vertical axis. The remaining parameters are set to their nominal values. A solid line indicates that the equilibrium point is stable; a dashed line indicates that the equilibrium point is unstable. Note that the stability in the bifurcation diagram matches that in the parametric stability diagram for c = 50 (the nominal value) and a varying from 1.35 to 4. For the predator–prey system, when the equilibrium point is unstable, the solution converges to a stable limit cycle. The amplitude of this limit cycle is shown by the dash-dotted line in Figure 4.17b.

A particular form of bifurcation that is very common when controlling linear systems is that the equilibrium remains fixed but the stability of the equilibrium

Figure 4.18: Stability plots for a bicycle moving at constant velocity. The plot in (a) shows the real part of the system eigenvalues as a function of the bicycle velocity v. The system is stable when all eigenvalues have negative real part (shaded region). The plot in (b) shows the locus of eigenvalues on the complex plane as the velocity v is varied, and gives a different view of the stability of the system. This type of plot is called a root locus diagram.

changes as the parameters are varied. In such a case it is revealing to plot the eigenvalues of the system as a function of the parameters. Such plots are called root locus diagrams because they give the locus of the eigenvalues when parameters change. Bifurcations occur when parameter values are such that there are eigenvalues with zero real part. Computing environments such as LabVIEW, MATLAB, and
Mathematica have tools for plotting root loci.

Example 4.15 (Root locus diagram for a bicycle model). Consider the linear bicycle model given by equation (3.7) in Section 3.2. Introducing the state variables x1 = ϕ, x2 = δ, x3 = ϕ̇, and x4 = δ̇ and setting the steering torque T = 0, the equations can be written as

dx/dt = [ 0  I ; −M⁻¹(K0 + K2v0²)  −M⁻¹Cv0 ] x = Ax,

where I is a 2 × 2 identity matrix and v0 is the velocity of the bicycle. Figure 4.18a shows the real parts of the eigenvalues as a function of velocity. Figure 4.18b shows the dependence of the eigenvalues of A on the velocity v0. The figures show that the bicycle is unstable for low velocities because two eigenvalues are in the right half-plane. As the velocity increases, these eigenvalues move into the left half-plane, indicating that the bicycle becomes self-stabilizing. As the velocity is increased further, there is an eigenvalue close to the origin that moves into the right half-plane, making the bicycle unstable again. However, this eigenvalue is small and so it can easily be stabilized by a rider. Figure 4.18a shows that the bicycle is self-stabilizing for velocities between 6 and 10 m/s.

Parametric stability diagrams and bifurcation diagrams can provide valuable insights into the dynamics of a nonlinear system. It is usually necessary to carefully choose the parameters that one plots, including combining the natural parameters

Figure 4.19: Headphones with noise cancellation. Noise is sensed by the exterior microphone (a) and sent to a filter in such a way that it cancels the noise that penetrates the headphone (b). The filter parameters a and b are adjusted by the controller. S represents the input signal to the headphones.

of the system, to eliminate extra parameters when possible. Computer programs such as AUTO, LOCBIF, and XPPAUT provide numerical algorithms for producing stability and bifurcation diagrams.

Design of Nonlinear Dynamics Using Feedback

In
most of the text we will rely on linear approximations to design feedback laws that stabilize an equilibrium point and provide a desired level of performance. However, for some classes of problems the feedback controller must be nonlinear to accomplish its function. By making use of Lyapunov functions we can often design a nonlinear control law that provides stable behavior, as we saw in Example 4.12.

One way to systematically design a nonlinear controller is to begin with a candidate Lyapunov function V(x) and a control system ẋ = f(x, u). We say that V(x) is a control Lyapunov function if for every x there exists a u such that

V̇(x) = (∂V/∂x) f(x, u) < 0.

In this case, it may be possible to find a function α(x) such that u = α(x) stabilizes the system. The following example illustrates the approach.

Example 4.16 (Noise cancellation). Noise cancellation is used in consumer electronics and in industrial systems to reduce the effects of noise and vibrations. The idea is to locally reduce the effect of noise by generating opposing signals. A pair of headphones with noise cancellation, such as those shown in Figure 4.19a, is a typical example. A schematic diagram of the system is shown in Figure 4.19b. The system has two microphones: one outside the headphones that picks up exterior noise n, and another inside the headphones that picks up the signal e, which is a combination of the desired signal and the external noise that penetrates the headphone. The signal from the exterior microphone is filtered and sent to the headphones in such a way that it cancels the external noise that penetrates into the headphones. The parameters of the filter are adjusted by a feedback mechanism to make the noise signal in the internal microphone as small as possible. The feedback is inherently nonlinear because it acts by changing the parameters of the filter.

To analyze the system, we assume for simplicity that the propagation of external noise into the headphones is modeled by a first-order dynamical system described by

dz/dt = a0 z + b0 n,     (4.22)

where z
is the sound level and the parameters a0 < 0 and b0 are not known. Assume that the filter is a dynamical system of the same type,

dw/dt = aw + bn.

We wish to find a controller that updates a and b so that they converge to the unknown parameters a0 and b0. Introduce x1 = e = w − z, x2 = a − a0, and x3 = b − b0; then

dx1/dt = a0(w − z) + (a − a0)w + (b − b0)n = a0 x1 + x2 w + x3 n.     (4.23)

We will achieve noise cancellation if we can find a feedback law for changing the parameters a and b so that the error e goes to zero. To do this we choose

V(x1, x2, x3) = ½(αx1² + x2² + x3²)

as a candidate Lyapunov function for (4.23). The derivative of V is

V̇ = αx1 ẋ1 + x2 ẋ2 + x3 ẋ3 = αa0 x1² + x2(ẋ2 + αw x1) + x3(ẋ3 + αn x1).

Choosing

ẋ2 = −αw x1 = −αwe, ẋ3 = −αn x1 = −αne,     (4.24)

we find that V̇ = αa0 x1² < 0, and it follows that the quadratic function will decrease as long as e = x1 = w − z ≠ 0. The nonlinear feedback (4.24) thus attempts to change the parameters so that the error between the signal and the noise is small. Notice that the feedback law (4.24) does not use the model (4.22) explicitly.

A simulation of the system is shown in Figure 4.20. In the simulation, we have represented the signal as a pure sinusoid and the noise as broad-band noise. The figure shows the dramatic improvement with noise cancellation. The sinusoidal signal is not visible without noise cancellation. The filter parameters change quickly from their initial values a = b = 0. Filters of higher order with more coefficients are used in practice.

Figure 4.20: Simulation of noise cancellation. The top left figure shows the headphone signal without noise cancellation, and the bottom left figure shows the signal with noise cancellation. The right figures show the parameters a and b of the filter.

4.6 Further Reading

The field of dynamical systems has a rich literature that characterizes the possible features of dynamical systems and describes how parametric changes in the dynamics can lead to
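The update law (4.24) can be simulated directly. A minimal Euler-integration sketch; the "true" parameters a0 and b0, the gain α, and the deterministic two-sinusoid excitation are arbitrary choices, not from the text:

```python
import numpy as np

# Assumed "true" propagation parameters (arbitrary) and adaptation gain.
a0, b0, alpha = -1.0, 2.0, 1.0

dt, T = 1e-3, 200.0
steps = int(T / dt)

z = w = a = b = 0.0     # headphone noise, filter state, filter parameters
mse = np.zeros(steps)
for i in range(steps):
    t = i * dt
    n = np.sin(t) + 0.5 * np.sin(3.1 * t)   # persistently exciting external "noise"
    e = w - z                               # interior-microphone error
    # Explicit Euler steps of (4.22), the filter, and the update law (4.24).
    z += dt * (a0 * z + b0 * n)
    w += dt * (a * w + b * n)
    a += dt * (-alpha * w * e)
    b += dt * (-alpha * n * e)
    mse[i] = e * e

# Lyapunov function value at the end; it should have decreased from V(0) = 2.5.
V_end = 0.5 * (alpha * e**2 + (a - a0)**2 + (b - b0)**2)
early, late = mse[: steps // 10].mean(), mse[-steps // 10:].mean()
```

Because V̇ = αa0 e² ≤ 0, the total integral of e² is bounded by V(0), so the cancellation error collapses quickly and the parameters drift toward (a0, b0).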
topological changes in behavior. Readable introductions to dynamical systems are given by Strogatz [188] and the highly illustrated text by Abraham and Shaw [2]. More technical treatments include Andronov, Vitt, and Khaikin [8], Guckenheimer and Holmes [91], and Wiggins [201]. For students with a strong interest in mechanics, the texts by Arnold [13] and Marsden and Ratiu [147] provide an elegant approach using tools from differential geometry. Finally, good treatments of dynamical systems methods in biology are given by Wilson [203] and Ellner and Guckenheimer [70]. There is a large literature on Lyapunov stability theory, including the classic texts by Malkin [144], Hahn [94], and Krasovski [128]. We highly recommend the comprehensive treatment by Khalil [123].

Exercises

4.1 (Time-invariant systems). Show that if we have a solution of the differential equation (4.1) given by x(t) with initial condition x(t0) = x0, then x̃(τ) = x(τ + t0) is a solution of the differential equation

dx̃/dτ = F(x̃)

with initial condition x̃(0) = x0.

4.2 (Flow in a tank). A cylindrical tank has cross section A m², effective outlet area a m², and inflow qin m³/s. An energy balance shows that the outlet velocity is v = √(2gh) m/s, where g m/s² is the acceleration of gravity and h m is the distance between the outlet and the water level in the tank. Show that the system can be modeled by

dh/dt = −(a/A)√(2gh) + (1/A)qin, qout = a√(2gh).

Use the parameters A = 0.2, a = 0.01. Simulate the system when the inflow is zero and the initial level is h = 0.2. Do you expect any difficulties in the simulation?

4.3 (Cruise control). Consider the cruise control system described in Section 3.1. Generate a phase portrait for the closed loop system on flat ground (θ = 0), in third gear, using a PI controller with kp = 0.5 and ki = 0.1, m = 1000 kg, and desired speed 20 m/s. Your system model should include the effects of saturating the input between 0 and 1.

4.4 (Lyapunov functions). Consider the second-order system

dx1/dt = −ax1, dx2/dt = −bx1 − cx2,

where a, b, c > 0. Investigate whether the functions

V1(x) = ½x1² + ½x2², V2(x) = ½x1² + ½(x2 + (b/(c − a))x1)²

are Lyapunov functions
for the system, and give any conditions that must hold.

4.5 (Damped spring–mass system). Consider a damped spring–mass system with dynamics

m q̈ + c q̇ + k q = 0.

A natural candidate for a Lyapunov function is the total energy of the system, given by

V = ½m q̇² + ½k q².

Use the Krasovski–Lasalle theorem to show that the system is asymptotically stable.

4.6 (Electric generator). The following simple model for an electric generator connected to a strong power grid was given in Exercise 2.7:

J d²ϕ/dt² = Pm − Pe = Pm − (EV/X) sin ϕ.

The parameter

a = Pmax/Pm = EV/(X Pm)     (4.25)

is the ratio between the maximum deliverable power Pmax = EV/X and the mechanical power Pm.

(a) Consider a as a bifurcation parameter and discuss how the equilibria depend on a.

(b) For a > 1, show that there is a center at ϕ0 = arcsin(1/a) and a saddle at ϕ = π − ϕ0.

(c) Show that there is a solution through the saddle that satisfies

½(dϕ/dt)² − (ϕ − π + ϕ0) − a cos ϕ − √(a² − 1) = 0.     (4.26)

Use simulation to show that the stability region is the interior of the area enclosed by this solution. Investigate what happens if the system is in equilibrium with a value of a that is slightly larger than 1 and a suddenly decreases, corresponding to the reactance of the line suddenly increasing.

4.7 (Lyapunov equation). Show that the Lyapunov equation (4.14) always has a solution if all of the eigenvalues of A are in the left half-plane. (Hint: Use the fact that the Lyapunov equation is linear in P and start with the case where A has distinct eigenvalues.)

4.8 (Congestion control). Consider the congestion control problem described in Section 3.4. Confirm that the equilibrium point for the system is given by equation (3.21), and compute the stability of this equilibrium point using a linear approximation.

4.9 (Swinging up a pendulum). Consider the inverted pendulum, discussed in Example 4.4, that is described by

θ̈ = sin θ + u cos θ,

where θ is the angle between the pendulum and the vertical and the control signal u is the acceleration of the pivot. Using the energy function

V(θ, θ̇) = cos θ − 1 + ½θ̇²,

show that the state feedback u = k(V0 − V(θ, θ̇))θ̇ cos θ causes the pendulum
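A simulation sketch of this energy-based swing-up law; the gain k, the initial state near the hanging position, and the RK4 integrator are arbitrary choices. With V0 = 0 (the upright energy), the law gives V̇ = k(V0 − V)θ̇² cos²θ ≥ 0 below the upright energy, so V climbs toward V0:

```python
import math

k = 2.0    # feedback gain (arbitrary choice)
V0 = 0.0   # energy of the upright equilibrium (theta = 0, thetadot = 0)

def energy(th, thd):
    return math.cos(th) - 1 + 0.5 * thd**2

def dynamics(th, thd):
    # Swing-up feedback u = k (V0 - V) thetadot cos(theta).
    u = k * (V0 - energy(th, thd)) * thd * math.cos(th)
    return thd, math.sin(th) + u * math.cos(th)

def rk4_step(th, thd, dt):
    k1 = dynamics(th, thd)
    k2 = dynamics(th + dt / 2 * k1[0], thd + dt / 2 * k1[1])
    k3 = dynamics(th + dt / 2 * k2[0], thd + dt / 2 * k2[1])
    k4 = dynamics(th + dt * k3[0], thd + dt * k3[1])
    return (th + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            thd + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

# Start near the downward (hanging) position and pump energy for 100 s.
th, thd = math.pi - 0.1, 0.0
V_start = energy(th, thd)
dt = 0.01
for _ in range(int(100 / dt)):
    th, thd = rk4_step(th, thd, dt)
V_end = energy(th, thd)
```

The energy rises monotonically from roughly −2 toward 0, so the pendulum approaches the upright-energy orbit; a separate local controller would be needed to capture it there.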
to swing up to the upright position.

4.10 (Root locus diagram). Consider the linear system

dx/dt = [ 0 1 ; 0 −3 ] x + [ −1 ; 4 ] u, y = [ 1 0 ] x,

with the feedback u = −ky. Plot the location of the eigenvalues as a function of the parameter k.

4.11 (Discrete-time Lyapunov function). Consider a nonlinear discrete-time system with dynamics x[k+1] = f(x[k]) and equilibrium point xe = 0. Suppose there exists a positive definite function V : ℝⁿ → ℝ such that V(x[k+1]) − V(x[k]) < 0 for x[k] ≠ 0. Show that xe = 0 is asymptotically stable.

4.12 (Operational amplifier oscillator). An op amp circuit for an oscillator was shown in Exercise 3.5. The oscillatory solution for that linear circuit was stable but not asymptotically stable. A schematic of a modified circuit that has nonlinear elements is shown in the figure below.

Chapter Five

Linear Systems

Few physical elements display truly linear characteristics. For example, the relation between force on a spring and displacement of the spring is always nonlinear to some degree. The relation between current through a resistor and voltage drop across it also deviates from a straight-line relation. However, if in each case the relation is reasonably linear, then it will be found that the system behavior will be very close to that obtained by assuming an ideal, linear physical element, and the analytical simplification is so enormous that we make linear assumptions wherever we can possibly do so in good conscience.

Robert H. Cannon, Dynamics of Physical Systems, 1967 [49].

In Chapters 2–4 we considered the construction and analysis of differential equation models for dynamical systems. In this chapter we specialize our results to the case of linear, time-invariant input/output systems. Two central concepts are the matrix exponential and the convolution equation, through which we can completely characterize the behavior of a linear system. We also describe some properties of the input/output response and show how to approximate a nonlinear system by a linear one.

5.1 Basic Definitions

We have seen several instances of linear differential equations in
the examples in the previous chapters, including the spring–mass system (damped oscillator) and the operational amplifier in the presence of small (nonsaturating) input signals. More generally, many dynamical systems can be modeled accurately by linear differential equations. Electrical circuits are one example of a broad class of systems for which linear models can be used effectively. Linear models are also broadly applicable in mechanical engineering, for example as models of small deviations from equilibria in solid and fluid mechanics. Signal-processing systems, including digital filters of the sort used in CD and MP3 players, are another source of good examples, although these are often best modeled in discrete time (as described in more detail in the exercises).

In many cases, we create systems with a linear input/output response through the use of feedback. Indeed, it was the desire for linear behavior that led Harold S. Black to the invention of the negative feedback amplifier. Almost all modern signal processing systems, whether analog or digital, use feedback to produce linear or near-linear input/output characteristics. For these systems, it is often useful to represent the input/output characteristics as linear, ignoring the internal details required to get that linear response.

For other systems, nonlinearities cannot be ignored, especially if one cares about the global behavior of the system. The predator–prey problem is one example of this: to capture the oscillatory behavior of the interdependent populations, we must include the nonlinear coupling terms. Other examples include switching behavior and generating periodic motion for locomotion. However, if we care about what happens near an equilibrium point, it often suffices to approximate the nonlinear dynamics by their local linearization, as we already explored briefly in Section 4.3. The linearization is essentially an approximation of the nonlinear dynamics around the desired operating point.

Linearity

We
now proceed to define linearity of input/output systems more formally. Consider a state space system of the form

dx/dt = f(x, u), y = h(x, u),     (5.1)

where x ∈ ℝⁿ, u ∈ ℝᵖ, and y ∈ ℝ^q. As in the previous chapters, we will usually restrict ourselves to the single-input, single-output case by taking p = q = 1. We also assume that all functions are smooth and that for a reasonable class of inputs (e.g., piecewise continuous functions of time) the solutions of equation (5.1) exist for all time.

It will be convenient to assume that the origin x = 0, u = 0 is an equilibrium point for this system (f(0, 0) = 0) and that h(0, 0) = 0. Indeed, we can do so without loss of generality. To see this, suppose that (xe, ue) ≠ (0, 0) is an equilibrium point of the system with output ye = h(xe, ue). Then we can define a new set of states, inputs, and outputs,

x̃ = x − xe, ũ = u − ue, ỹ = y − ye,

and rewrite the equations of motion in terms of these variables:

dx̃/dt = f(x̃ + xe, ũ + ue) =: f̃(x̃, ũ), ỹ = h(x̃ + xe, ũ + ue) − ye =: h̃(x̃, ũ).

In the new set of variables, the origin is an equilibrium point with output 0, and hence we can carry out our analysis in this set of variables. Once we have obtained our answers in this new set of variables, we simply translate them back to the original coordinates using x = x̃ + xe, u = ũ + ue, and y = ỹ + ye.

Returning to the original equations (5.1), now assuming without loss of generality that the origin is the equilibrium point of interest, we write the output y(t) corresponding to the initial condition x(0) = x0 and input u(t) as y(t; x0, u). Using this notation, a system is said to be a linear input/output system if the following

Figure 5.1: Superposition of homogeneous and particular solutions. The first row shows the input, state, and output corresponding to the initial condition response. The second row shows the same variables corresponding to zero
initial condition but nonzero input. The third row is the complete solution, which is the sum of the two individual solutions.

conditions are satisfied:

(i) y(t; αx1 + βx2, 0) = αy(t; x1, 0) + βy(t; x2, 0),
(ii) y(t; αx0, δu) = αy(t; x0, 0) + δy(t; 0, u),     (5.2)
(iii) y(t; 0, δu1 + γu2) = δy(t; 0, u1) + γy(t; 0, u2).

Thus, we define a system to be linear if the outputs are jointly linear in the initial condition response (u = 0) and the forced response (x0 = 0). Property (iii) is a statement of the principle of superposition: the response of a linear system to the sum of two inputs u1 and u2 is the sum of the outputs y1 and y2 corresponding to the individual inputs.

The general form of a linear state space system is

dx/dt = Ax + Bu, y = Cx + Du,     (5.3)

where A ∈ ℝⁿˣⁿ, B ∈ ℝⁿˣᵖ, C ∈ ℝ^(q×n), and D ∈ ℝ^(q×p). In the special case of a single-input, single-output system, B is a column vector, C is a row vector, and D is a scalar. Equation (5.3) is a system of linear first-order differential equations with input u, state x, and output y. It is easy to show that, given solutions x1(t) and x2(t) for this set of equations, they satisfy the linearity conditions.

We define xh(t) to be the solution with zero input (the homogeneous solution) and the solution xp(t) to be the solution with zero initial condition (a particular solution). Figure 5.1 illustrates how these two individual solutions can be superimposed to form the complete solution.

Since any solution x(t) can be written in terms of a solution z(t) with z(0) = T x(0), it follows that it is sufficient to prove the theorem in the transformed coordinates. The solution z(t) can be written in terms of the elements of the matrix exponential. From equation (5.11), these elements all decay to zero for arbitrary z(0) if and only if Re λi < 0. Furthermore, if any λi has positive real part, then there exists an initial condition z(0) such that the corresponding solution increases without bound. Since we can scale this initial condition to be arbitrarily small, it follows that the equilibrium point is unstable if any eigenvalue has positive real part.

The existence of a
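The homogeneous/particular decomposition and the superposition property (5.2iii) can be verified numerically for a specific linear system. A sketch; the example matrices, inputs, and Euler integrator are arbitrary choices (D = 0 here):

```python
import numpy as np

# A stable two-state SISO system (arbitrary example values).
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
B = np.array([1.0, 0.5])
C = np.array([1.0, 0.0])

def simulate(x0, u, dt=1e-4, T=5.0):
    """Euler-integrate dx/dt = Ax + Bu(t); return the output y(T) = Cx(T)."""
    x = np.array(x0, dtype=float)
    for i in range(int(T / dt)):
        x = x + dt * (A @ x + B * u(i * dt))
    return C @ x

u1 = lambda t: np.sin(t)
u2 = lambda t: 1.0
x0 = [1.0, -1.0]

# Complete solution = homogeneous (zero input) + particular (zero initial state).
y_complete = simulate(x0, lambda t: u1(t) + u2(t))
y_homog = simulate(x0, lambda t: 0.0)
y_partic = simulate([0.0, 0.0], lambda t: u1(t) + u2(t))

# Superposition of two inputs (zero initial state).
y_u1 = simulate([0.0, 0.0], u1)
y_u2 = simulate([0.0, 0.0], u2)
```

Because the Euler update is itself linear in (x0, u), these identities hold to floating-point precision, mirroring Figure 5.1.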
canonical form allows us to prove many properties of linear systems by changing to a set of coordinates in which the A matrix is in Jordan form. We illustrate this in the following proposition, which follows along the same lines as the proof of Theorem 4.1.

Proposition 5.3. Suppose that the system dx/dt = Ax has no eigenvalues with strictly positive real part and one or more eigenvalues with zero real part. Then the system is stable if and only if the Jordan blocks corresponding to each eigenvalue with zero real part are scalar (1 × 1) blocks.

Proof. See Exercise 5.6b.

The following example illustrates the use of the Jordan form.

Example 5.4 (Linear model of a vectored thrust aircraft). Consider the dynamics of a vectored thrust aircraft such as that described in Example 2.9. Suppose that we choose u1 = u2 = 0, so that the dynamics of the system become

dz/dt = ( z4, z5, z6, −g sin z3 − (c/m)z4, g(cos z3 − 1) − (c/m)z5, 0 ),     (5.12)

where z = (x, y, θ, ẋ, ẏ, θ̇). The equilibrium points for the system are given by setting the velocities ẋ, ẏ, and θ̇ to zero and choosing the remaining variables to satisfy

−g sin z3e = 0, g(cos z3e − 1) = 0 ⟹ z3e = θe = 0.

This corresponds to the upright orientation for the aircraft. Note that xe and ye are not specified. This is because we can translate the system to a new (upright) position and still obtain an equilibrium point.

Figure 5.12: Active band-pass filter. The circuit diagram (a) shows an op amp with two RC filters arranged to provide a band-pass filter. The plot in (b) shows the gain and phase of the filter as a function of frequency. Note that the phase starts at −90° due to the negative gain of the operational amplifier.

frequencies at about 10 rad/s, but attenuates frequencies below 5 rad/s and above 50 rad/s. At 0.1 rad/s the input signal is attenuated by a factor of 20 (0.05). This type of circuit is called a band-pass filter, since it passes through signals in the band of
frequencies between 5 and 50 rad/s.

As in the case of the step response, a number of standard properties are defined for frequency responses. The gain of a system at $\omega=0$ is called the zero frequency gain and corresponds to the ratio between a constant input and the steady output:

$$M_0=D-CA^{-1}B.$$

The zero frequency gain is well defined only if $A$ is invertible (and, in particular, if it does not have eigenvalues at 0). It is also important to note that the zero frequency gain is a relevant quantity only when a system is stable about the corresponding equilibrium point. So, if we apply a constant input $u=r$, then the corresponding equilibrium point $x_e=-A^{-1}Br$ must be stable in order to talk about the zero frequency gain. In electrical engineering, the zero frequency gain is often called the DC gain. DC stands for direct current and reflects the common separation of signals in electrical engineering into a direct current (zero frequency) term and an alternating current (AC) term.

The bandwidth $\omega_b$ of a system is the frequency range over which the gain has decreased by no more than a factor of $1/\sqrt{2}$ from its reference value. For systems with nonzero, finite zero frequency gain, the bandwidth is the frequency where the gain has decreased by $1/\sqrt{2}$ from the zero frequency gain. For systems that attenuate low frequencies but pass through high frequencies, the reference gain is taken as the high-frequency gain. For a system such as the band-pass filter in Example 5.8, bandwidth is defined as the range of frequencies where the gain is larger than $1/\sqrt{2}$ of the gain at the center of the band. For Example 5.8 this would give a bandwidth of approximately 50 rad/s.
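The zero frequency gain formula can be cross-checked against the frequency response $G(i\omega)=C(i\omega I-A)^{-1}B+D$ evaluated near $\omega=0$. A sketch with illustrative matrices:

```python
import numpy as np

# Illustrative stable system; M0 = D - C A^{-1} B is the zero frequency (DC) gain.
A = np.array([[0.0, 1.0], [-4.0, -2.0]])
B = np.array([[0.0], [4.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

M0 = (D - C @ np.linalg.solve(A, B)).item()

def G(w):
    """Frequency response G(iw) = C (iwI - A)^{-1} B + D."""
    n = A.shape[0]
    return (C @ np.linalg.solve(1j * w * np.eye(n) - A, B) + D).item()

print(M0)                        # 1.0 for this choice of matrices
print(abs(G(1e-9) - M0) < 1e-6)  # True: G(iw) -> M0 as w -> 0
```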
Figure 5.13: AFM frequency response. (a) A block diagram for the vertical dynamics of an atomic force microscope in contact mode. The plot in (b) shows the gain and phase for the piezo stack. The response contains two frequency peaks at resonances of the system, along with an antiresonance at $\omega=268$ krad/s. The combination of a resonant peak followed by an antiresonance is common for systems with multiple lightly damped modes.

Another important property of the frequency response is the resonant peak $M_r$, the largest value of the frequency response, and the peak frequency $\omega_{mr}$, the frequency where the maximum occurs. These two properties describe the frequency of the sinusoidal input that produces the largest possible output and the gain at that frequency.

Example 5.9 (Atomic force microscope in contact mode). Consider the model for the vertical dynamics of the atomic force microscope in contact mode, discussed in Section 3.5. The basic dynamics are given by equation (3.23). The piezo stack can be modeled by a second-order system with undamped natural frequency $\omega_3$ and damping ratio $\zeta_3$. The dynamics are then described by the linear system

$$\frac{dx}{dt}=\begin{pmatrix}0 & 1 & 0 & 0\\ -\dfrac{k}{m_1+m_2} & -\dfrac{c}{m_1+m_2} & \dfrac{1}{m_2} & 0\\ 0 & 0 & 0 & \omega_3\\ 0 & 0 & -\omega_3 & -2\zeta_3\omega_3\end{pmatrix}x+\begin{pmatrix}0\\0\\0\\\omega_3\end{pmatrix}u,\qquad y=\frac{m_2}{m_1+m_2}\begin{pmatrix}\dfrac{m_1k}{m_1+m_2} & \dfrac{m_1c}{m_1+m_2} & 1 & 0\end{pmatrix}x,$$

where the input signal is the drive signal to the amplifier and the output is the elongation of the piezo. The frequency response of the system is shown in Figure 5.13b. The zero frequency gain of the system is $M_0=1$. There are two resonant poles with peaks $M_{r1}=2.12$ at $\omega_{mr1}=238$ krad/s and $M_{r2}=4.29$ at $\omega_{mr2}=746$ krad/s. The bandwidth of the system, defined as the lowest frequency where the gain is $\sqrt{2}$ less than the zero frequency gain, is $\omega_b=292$ krad/s. There is also a dip in the gain, $M_d=0.556$, for $\omega_{md}=268$ krad/s. This dip, called an antiresonance, is associated with a dip in the phase and limits the performance when the system is controlled by simple controllers, as we will see in Chapter 10.

... the ordinary differential equation

$$\frac{dx}{dt}=-0.0141x+0.0116u.$$

5.4 Linearization

As described at the beginning of the chapter, a common source of linear system
models is through the approximation of a nonlinear system by a linear one. These approximations are aimed at studying the local behavior of a system, where the nonlinear effects are expected to be small. In this section we discuss how to locally approximate a system by its linearization and what can be said about the approximation in terms of stability. We begin with an illustration of the basic concept using the cruise control example from Chapter 3.

Example 5.11 (Cruise control). The dynamics for the cruise control system were derived in Section 3.1 and have the form

$$m\frac{dv}{dt}=\alpha_nuT(\alpha_nv)-mgC_r\operatorname{sgn}(v)-\tfrac12\rho C_vAv^2-mg\sin\theta,\qquad(5.29)$$

where the first term on the right-hand side of the equation is the force generated by the engine and the remaining three terms are the rolling friction, aerodynamic drag and gravitational disturbance force. There is an equilibrium $(v_e,u_e)$ when the force applied by the engine balances the disturbance forces.

To explore the behavior of the system near the equilibrium we will linearize the system. A Taylor series expansion of equation (5.29) around the equilibrium gives

$$\frac{d(v-v_e)}{dt}=a(v-v_e)-b_g(\theta-\theta_e)+b(u-u_e)+\text{higher order terms},\qquad(5.30)$$

where

$$a=\frac{u_e\alpha_n^2T'(\alpha_nv_e)-\rho C_vAv_e}{m},\qquad b_g=g\cos\theta_e,\qquad b=\frac{\alpha_nT(\alpha_nv_e)}{m}.\qquad(5.31)$$

Notice that the term corresponding to rolling friction disappears if $v\neq0$. For a car in fourth gear with $v_e=25$ m/s, $\theta_e=0$ and the numerical values for the car from Section 3.1, the equilibrium value for the throttle is $u_e=0.1687$ and the parameters are $a=-0.0101$, $b=1.32$ and $b_g=9.8$. This linear model describes how small perturbations in the velocity about the nominal speed evolve in time. Figure 5.14 shows a simulation of a cruise controller with linear and nonlinear models; the differences between the linear and nonlinear models are small, and hence the linearized model provides a reasonable approximation.
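A linearization like the one in this example can also be computed numerically by finite differences when analytic derivatives are inconvenient. The sketch below uses an illustrative pendulum-like vector field, not the cruise control model:

```python
import numpy as np

# Numerical Jacobian linearization: given f(x, u), approximate A = df/dx and
# B = df/du at an equilibrium (xe, ue) by central differences.
def f(x, u):
    # Illustrative nonlinear dynamics (damped pendulum with torque input).
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u])

def jacobians(f, xe, ue, eps=1e-6):
    n = len(xe)
    A = np.zeros((n, n))
    for j in range(n):
        d = np.zeros(n); d[j] = eps
        A[:, j] = (f(xe + d, ue) - f(xe - d, ue)) / (2 * eps)
    B = (f(xe, ue + eps) - f(xe, ue - eps)) / (2 * eps)
    return A, B.reshape(n, 1)

xe, ue = np.array([0.0, 0.0]), 0.0
A, B = jacobians(f, xe, ue)
print(np.round(A, 3))  # [[ 0.  1.], [-1. -0.1]] since d(-sin x1)/dx1 = -cos(0) = -1
```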
Figure 5.14: Simulated response of a vehicle with PI cruise control as it climbs a hill with a slope of 4°. The solid line is the simulation based on a nonlinear model, and the dashed line shows the corresponding simulation using a linear model. The controller gains are $k_p=0.5$ and $k_i=0.1$.

Jacobian Linearization Around an Equilibrium Point

To proceed more formally, consider a single-input, single-output nonlinear system

$$\frac{dx}{dt}=f(x,u),\quad x\in\mathbb{R}^n,\ u\in\mathbb{R},\qquad y=h(x,u),\quad y\in\mathbb{R},\qquad(5.32)$$

with an equilibrium point at $x=x_e$, $u=u_e$. Without loss of generality we can assume that $x_e=0$ and $u_e=0$, although initially we will consider the general case to make the shift of coordinates explicit.

To study the local behavior of the system around the equilibrium point $(x_e,u_e)$, we suppose that $x-x_e$ and $u-u_e$ are both small, so that nonlinear perturbations around this equilibrium point can be ignored compared with the (lower-order) linear terms. This is roughly the same type of argument that is used when we do small-angle approximations, replacing $\sin\theta$ with $\theta$ and $\cos\theta$ with 1 for $\theta$ near zero.

As we did in Chapter 4, we define a new set of state variables $z$, as well as inputs $v$ and outputs $w$:

$$z=x-x_e,\qquad v=u-u_e,\qquad w=y-h(x_e,u_e).$$

These variables are all close to zero when we are near the equilibrium point, and so in these variables the nonlinear terms can be thought of as the higher-order terms in a Taylor series expansion of the relevant vector fields (assuming for now that these exist).

Formally, the Jacobian linearization of the nonlinear system (5.32) is

$$\frac{dz}{dt}=Az+Bv,\qquad w=Cz+Dv,\qquad(5.33)$$

Exercise 5.9 (Keynesian economics). Consider the following simple Keynesian macroeconomic model in the form of a linear discrete-time system, discussed in Exercise 5.8:

$$\begin{pmatrix}C[t+1]\\ I[t+1]\end{pmatrix}=\begin{pmatrix}a & a\\ ab & ab\end{pmatrix}\begin{pmatrix}C[t]\\ I[t]\end{pmatrix}+\begin{pmatrix}a\\ ab\end{pmatrix}G[t],\qquad Y[t]=C[t]+I[t]+G[t].$$

Determine the eigenvalues of the dynamics matrix. When are the magnitudes of the eigenvalues less than 1? Assume that the system is in equilibrium with constant values of capital spending $C$, investment $I$ and government expenditure $G$. Explore what happens when government expenditure increases by 10%. Use the values $a=0.25$ and $b=0.5$.
Exercise 5.10. Consider a scalar system

$$\frac{dx}{dt}=1-x^3+u.$$

Compute the equilibrium points for the unforced system ($u=0$) and use a Taylor series expansion around the equilibrium point to compute the linearization. Verify that this agrees with the linearization in equation (5.33).

Exercise 5.11 (Transcriptional regulation). Consider the dynamics of a genetic circuit that implements self-repression: the protein produced by a gene is a repressor for that gene, thus restricting its own production. Using the models presented in Example 2.13, the dynamics for the system can be written as

$$\frac{dm}{dt}=\frac{\alpha}{1+kp^2}+\alpha_0-\gamma m+u,\qquad \frac{dp}{dt}=\beta m-\delta p,\qquad(5.40)$$

where $u$ is a disturbance term that affects RNA transcription and $m,p\ge0$. Find the equilibrium points for the system and use the linearized dynamics around each equilibrium point to determine the local stability of the equilibrium point and the step response of the system to a disturbance.

Figure 6.1: The reachable set for a control system. The set $R(x_0,\le T)$, shown in (a), is the set of points reachable from $x_0$ in time less than $T$. The phase portrait in (b) shows the dynamics for a double integrator, with the natural dynamics drawn as horizontal arrows and the control inputs drawn as vertical arrows. The set of achievable equilibrium points is the $x$ axis. By setting the control inputs as a function of the state, it is possible to steer the system to the origin, as shown on the sample path.

The definition of reachability addresses whether it is possible to reach all points in the state space in a transient fashion. In many applications, the set of points that we are most interested in reaching is the set of equilibrium points of the system (since we can remain at those points once we get there). The set of all possible equilibria for constant controls is given by

$$E=\{x_e:Ax_e+Bu_e=0\text{ for some }u_e\in\mathbb{R}\}.$$

This means that possible equilibria lie in a one- (or possibly higher) dimensional subspace. If the matrix $A$ is invertible, this subspace is spanned by $A^{-1}B$. The following example
provides some insight into the possibilities.

Example 6.1 (Double integrator). Consider a linear system consisting of a double integrator, whose dynamics are given by

$$\frac{dx_1}{dt}=x_2,\qquad \frac{dx_2}{dt}=u.$$

Figure 6.1b shows a phase portrait of the system. The open loop dynamics ($u=0$) are shown as horizontal arrows, pointed to the right for $x_2>0$ and to the left for $x_2<0$. The control input is represented by a double-headed arrow in the vertical direction, corresponding to our ability to set the value of $\dot x_2$. The set of equilibrium points $E$ corresponds to the $x_1$ axis, with $u_e=0$.

Suppose first that we wish to reach the origin from an initial condition $(a,0)$. We can directly move the state up and down in the phase plane, but we must rely on the natural dynamics to control the motion to the left and right. If $a>0$, we can move toward the origin by first setting $u<0$, which will cause $x_2$ to become negative. Once $x_2<0$, the value of $x_1$ will begin to decrease and we will move to the left. After a while, we can set $u$ to be positive, moving $x_2$ back toward zero and slowing the motion in the $x_1$ direction. If we bring $x_2>0$, we can move the system state in the opposite direction.

Figure 6.2: Balance system. The Segway Personal Transporter, shown in (a), is an example of a balance system that uses torque applied to the wheels to keep the rider upright. A simplified diagram for a balance system is shown in (b). The system consists of a mass $m$ on a rod of length $l$, connected by a pivot to a cart with mass $M$.

... where $\mu=M_tJ_t-m^2l^2$, $M_t=M+m$ and $J_t=J+ml^2$. The reachability matrix is

$$W_r=\begin{pmatrix}0 & J_t/\mu & 0 & gl^3m^3/\mu^2\\ 0 & lm/\mu & 0 & gl^2m^2(m+M)/\mu^2\\ J_t/\mu & 0 & gl^3m^3/\mu^2 & 0\\ lm/\mu & 0 & gl^2m^2(m+M)/\mu^2 & 0\end{pmatrix}.\qquad(6.5)$$

The determinant of this matrix is

$$\det W_r=\frac{g^2l^4m^4}{\mu^4}\neq0,$$

and we can conclude that the system is reachable. This implies that we can move the system from any initial state to any final state and, in particular, that we can always find an input to bring the system from an initial state to an equilibrium point.
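The reachability test used here can be sketched in a few lines, shown for the double integrator of Example 6.1:

```python
import numpy as np

# Form the reachability matrix Wr = [B, AB, ..., A^{n-1}B] and check its rank.
def reachability_matrix(A, B):
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.hstack(blocks)

# Double integrator: dx1/dt = x2, dx2/dt = u.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Wr = reachability_matrix(A, B)
print(Wr)                                       # [[0. 1.], [1. 0.]]
print(np.linalg.matrix_rank(Wr) == A.shape[0])  # True: the system is reachable
```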
It is useful to have an intuitive understanding of the mechanisms that make a system unreachable. An example of such a system is given in Figure 6.3. The system consists of two identical systems with the same input. Clearly, we cannot separately cause the first and the second systems to do something different, since they have the same input. Hence we cannot reach arbitrary states, and so the system is not reachable (Exercise 6.3).

More subtle mechanisms for nonreachability can also occur. For example, if there is a linear combination of states that always remains constant, then the system is not reachable. To see this, suppose that there exists a row vector $H$ such that

$$0=\frac{d}{dt}Hx=H(Ax+Bu),\quad\text{for all }u.$$

Figure 6.3: An unreachable system. The cart–pendulum system shown on the left has a single input that affects two pendula of equal length and mass. Since the forces affecting the two pendula are the same and their dynamics are identical, it is not possible to arbitrarily control the state of the system. The figure on the right is a block diagram representation of this situation.

Then $H$ is in the left null space of both $A$ and $B$, and it follows that

$$HW_r=H\begin{pmatrix}B & AB & \cdots & A^{n-1}B\end{pmatrix}=0.$$

Hence the reachability matrix is not full rank. In this case, if we have an initial condition $x_0$ and we wish to reach a state $x_f$ for which $Hx_0\neq Hx_f$, then since $Hx(t)$ is constant, no input $u$ can move from $x_0$ to $x_f$.

Reachable Canonical Form

As we have already seen in previous chapters, it is often convenient to change coordinates and write the dynamics of the system in the transformed coordinates $z=Tx$. One application of a change of coordinates is to convert a system into a canonical form in which it is easy to perform certain types of analysis. A linear state space system is in reachable canonical form if its dynamics are given by

$$\frac{dz}{dt}=\begin{pmatrix}-a_1 & -a_2 & -a_3 & \cdots & -a_n\\ 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ & & \ddots & & \\ 0 & & \cdots & 1 & 0\end{pmatrix}z+\begin{pmatrix}1\\0\\0\\\vdots\\0\end{pmatrix}u,\qquad y=\begin{pmatrix}b_1 & b_2 & b_3 & \cdots & b_n\end{pmatrix}z+du.\qquad(6.6)$$

A block diagram for a system in reachable canonical form is shown in Figure 6.4. We see that the coefficients that appear in the $A$ and $B$
matrices show up directly in the block diagram. Furthermore, the output of the system is a simple linear combination of the outputs of the integration blocks. The characteristic polynomial for a system in reachable canonical form is given by equation (6.7).

Transforming each element individually, we have

$$\tilde A\tilde B=TAT^{-1}TB=TAB,\qquad \tilde A^2\tilde B=(TAT^{-1})^2TB=TAT^{-1}TAT^{-1}TB=TA^2B,\qquad\ldots,\qquad \tilde A^n\tilde B=TA^nB,$$

and hence the reachability matrix for the transformed system is

$$\tilde W_r=\begin{pmatrix}\tilde B & \tilde A\tilde B & \cdots & \tilde A^{n-1}\tilde B\end{pmatrix}=TW_r.\qquad(6.8)$$

Since $W_r$ is invertible, we can thus solve for the transformation $T$ that takes the system into reachable canonical form:

$$T=\tilde W_rW_r^{-1}.$$

The following example illustrates the approach.

Example 6.3 (Transformation to reachable form). Consider a simple two-dimensional system of the form

$$\frac{dx}{dt}=\begin{pmatrix}\alpha & \omega\\ -\omega & \alpha\end{pmatrix}x+\begin{pmatrix}0\\1\end{pmatrix}u.$$

We wish to find the transformation that converts the system into reachable canonical form:

$$\tilde A=\begin{pmatrix}-a_1 & -a_2\\ 1 & 0\end{pmatrix},\qquad \tilde B=\begin{pmatrix}1\\0\end{pmatrix}.$$

The coefficients $a_1$ and $a_2$ can be determined from the characteristic polynomial for the original system:

$$\lambda(s)=\det(sI-A)=s^2-2\alpha s+\alpha^2+\omega^2\quad\Longrightarrow\quad a_1=-2\alpha,\quad a_2=\alpha^2+\omega^2.$$

The reachability matrix for each system is

$$W_r=\begin{pmatrix}0 & \omega\\ 1 & \alpha\end{pmatrix},\qquad \tilde W_r=\begin{pmatrix}1 & -a_1\\ 0 & 1\end{pmatrix}.$$

The transformation $T$ becomes

$$T=\tilde W_rW_r^{-1}=\begin{pmatrix}(-a_1-\alpha)/\omega & 1\\ 1/\omega & 0\end{pmatrix}=\begin{pmatrix}\alpha/\omega & 1\\ 1/\omega & 0\end{pmatrix},$$

and hence the coordinates $z_1=\alpha x_1/\omega+x_2$, $z_2=x_1/\omega$ put the system in reachable canonical form.

We summarize the results of this section in the following theorem.

Notice that $k_r$ is exactly the inverse of the zero frequency gain of the closed loop system. The solution for $D\neq0$ is left as an exercise.

Using the gains $K$ and $k_r$ we are thus able to design the dynamics of the closed loop system to satisfy our goal. To illustrate how to construct such a state feedback control law, we begin with a few examples that provide some basic intuition and insights.

Example 6.4 (Vehicle steering). In Example 5.12 we derived a normalized linear model for vehicle steering. The dynamics describing the lateral deviation were given by

$$A=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix},\qquad B=\begin{pmatrix}\gamma\\1\end{pmatrix},\qquad C=\begin{pmatrix}1 & 0\end{pmatrix},\qquad D=0.$$

The reachability matrix for the system is thus

$$W_r=\begin{pmatrix}B & AB\end{pmatrix}=\begin{pmatrix}\gamma & 1\\ 1 & 0\end{pmatrix}.$$

The system is reachable since $\det W_r=-1\neq0$.
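Returning to Example 6.3, the transformation $T=\tilde W_rW_r^{-1}$ is easy to verify numerically; $\alpha=1$, $\omega=2$ below are illustrative sample values:

```python
import numpy as np

# Transformation to reachable canonical form for Example 6.3.
alpha, omega = 1.0, 2.0
A = np.array([[alpha, omega], [-omega, alpha]])
B = np.array([[0.0], [1.0]])

a1 = -2 * alpha                    # from char. poly s^2 - 2*alpha*s + alpha^2 + omega^2
Wr = np.hstack([B, A @ B])         # [[0, omega], [1, alpha]]
Wr_tilde = np.array([[1.0, -a1], [0.0, 1.0]])
T = Wr_tilde @ np.linalg.inv(Wr)
print(T)                           # [[alpha/omega, 1], [1/omega, 0]] = [[0.5, 1], [0.5, 0]]

# Check: in z = Tx coordinates, T A T^{-1} is in reachable canonical form.
print(np.round(T @ A @ np.linalg.inv(T), 6))   # [[2, -5], [1, 0]] = [[-a1, -a2], [1, 0]]
```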
We now want to design a controller that stabilizes the dynamics and tracks a given reference value $r$ of the lateral position of the vehicle. To do this we introduce the feedback

$$u=-Kx+k_rr=-k_1x_1-k_2x_2+k_rr,$$

and the closed loop system becomes

$$\frac{dx}{dt}=(A-BK)x+Bk_rr=\begin{pmatrix}-\gamma k_1 & 1-\gamma k_2\\ -k_1 & -k_2\end{pmatrix}x+\begin{pmatrix}\gamma k_r\\ k_r\end{pmatrix}r,\qquad y=Cx=\begin{pmatrix}1 & 0\end{pmatrix}x.\qquad(6.14)$$

The closed loop system has the characteristic polynomial

$$\det(sI-A+BK)=\det\begin{pmatrix}s+\gamma k_1 & \gamma k_2-1\\ k_1 & s+k_2\end{pmatrix}=s^2+(\gamma k_1+k_2)s+k_1.$$

Suppose that we would like to use feedback to design the dynamics of the system to have the characteristic polynomial

$$p(s)=s^2+2\zeta_c\omega_cs+\omega_c^2.$$

Comparing this polynomial with the characteristic polynomial of the closed loop system, we see that the feedback gains should be chosen as

$$k_1=\omega_c^2,\qquad k_2=2\zeta_c\omega_c-\gamma\omega_c^2.$$

Equation (6.13) gives $k_r=k_1=\omega_c^2$, and the control law can be written as

$$u=k_1(r-x_1)-k_2x_2=\omega_c^2(r-x_1)-(2\zeta_c\omega_c-\gamma\omega_c^2)x_2.$$

Figure 6.6: State feedback control of a steering system. Step responses obtained with controllers designed with $\zeta_c=0.7$ and $\omega_c=0.5$, 1 and 2 rad/s are shown in (a). Notice that response speed increases with increasing $\omega_c$, but that large $\omega_c$ also give large initial control actions. Step responses obtained with a controller designed with $\omega_c=1$ and $\zeta_c=0.5$, 0.7 and 1 are shown in (b).

The step responses for the closed loop system for different values of the design parameters are shown in Figure 6.6. The effect of $\omega_c$ is shown in Figure 6.6a, which shows that the response speed increases with increasing $\omega_c$. The responses for $\omega_c=0.5$ and 1 have reasonable overshoot. The settling time is about 15 car lengths for $\omega_c=0.5$ (beyond the end of the plot) and decreases to about 6 car lengths for $\omega_c=1$. The control signal $\delta$ is large initially and goes to zero as time increases because the closed loop dynamics have an integrator.
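The gain formulas of Example 6.4 are easy to verify numerically; $\gamma$, $\omega_c$ and $\zeta_c$ below are illustrative sample values:

```python
import numpy as np

# Pole placement for the steering model: k1 = wc^2 and k2 = 2*zc*wc - gamma*wc^2
# place the closed loop poles at the roots of s^2 + 2*zc*wc*s + wc^2.
gamma, wc, zc = 0.5, 1.0, 0.7
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[gamma], [1.0]])

k1 = wc**2
k2 = 2 * zc * wc - gamma * wc**2
K = np.array([[k1, k2]])

eigs = np.linalg.eigvals(A - B @ K)
target = np.roots([1, 2 * zc * wc, wc**2])
print(np.allclose(np.sort_complex(eigs), np.sort_complex(target)))  # True
```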
The initial value of the control signal is $k_r=\omega_c^2r$, and thus the achievable response time is limited by the available actuator signal. Notice in particular the dramatic increase in control signal when $\omega_c$ changes from 1 to 2. The effect of $\zeta_c$ is shown in Figure 6.6b. The response speed and the overshoot increase with decreasing damping. Using these plots, we conclude that reasonable values of the design parameters are to have $\omega_c$ in the range of 0.5 to 1 and $\zeta_c\approx0.7$.

The example of the vehicle steering system illustrates how state feedback can be used to set the eigenvalues of a closed loop system to arbitrary values.

State Feedback for Systems in Reachable Canonical Form

The reachable canonical form has the property that the parameters of the system are the coefficients of the characteristic polynomial. It is therefore natural to consider systems in this form when solving the eigenvalue assignment problem. Consider a system in reachable canonical form, i.e.,

$$\frac{dz}{dt}=\tilde Az+\tilde Bu=\begin{pmatrix}-a_1 & -a_2 & -a_3 & \cdots & -a_n\\ 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ & & \ddots & & \\ 0 & & \cdots & 1 & 0\end{pmatrix}z+\begin{pmatrix}1\\0\\\vdots\\0\end{pmatrix}u,\qquad y=\tilde Cz=\begin{pmatrix}b_1 & b_2 & \cdots & b_n\end{pmatrix}z.\qquad(6.15)$$

It follows from (6.7) that the open loop system has the characteristic polynomial

$$\det(sI-A)=s^n+a_1s^{n-1}+\cdots+a_{n-1}s+a_n.$$

Before making a formal analysis we can gain some insight by investigating the block diagram of the system shown in Figure 6.4. The characteristic polynomial is given by the parameters $a_k$ in the figure. Notice that the parameter $a_k$ can be changed by feedback from state $z_k$ to the input $u$. It is thus straightforward to change the coefficients of the characteristic polynomial by state feedback.

Returning to the equations, introducing the control law

$$u=-\tilde Kz+k_rr=-\tilde k_1z_1-\tilde k_2z_2-\cdots-\tilde k_nz_n+k_rr,\qquad(6.16)$$

the closed loop system becomes

$$\frac{dz}{dt}=\begin{pmatrix}-a_1-\tilde k_1 & -a_2-\tilde k_2 & -a_3-\tilde k_3 & \cdots & -a_n-\tilde k_n\\ 1 & 0 & 0 & \cdots & 0\\ 0 & 1 & 0 & \cdots & 0\\ & & \ddots & & \\ 0 & & \cdots & 1 & 0\end{pmatrix}z+\begin{pmatrix}k_r\\0\\\vdots\\0\end{pmatrix}r,\qquad y=\begin{pmatrix}b_1 & b_2 & \cdots & b_n\end{pmatrix}z.\qquad(6.17)$$

The feedback changes the elements of the first row of the $A$ matrix, which corresponds to the parameters of the characteristic polynomial. The closed loop system thus has the characteristic polynomial
$$s^n+(a_1+\tilde k_1)s^{n-1}+(a_2+\tilde k_2)s^{n-2}+\cdots+(a_{n-1}+\tilde k_{n-1})s+a_n+\tilde k_n.$$

Requiring this polynomial to be equal to the desired closed loop polynomial

$$p(s)=s^n+p_1s^{n-1}+\cdots+p_{n-1}s+p_n,$$

we find that the controller gains should be chosen as

$$\tilde k_1=p_1-a_1,\qquad \tilde k_2=p_2-a_2,\qquad\ldots,\qquad \tilde k_n=p_n-a_n.$$

This feedback simply replaces the parameters $a_i$ in the system (6.17) by $p_i$. The feedback gain for a system in reachable canonical form is thus

$$\tilde K=\begin{pmatrix}p_1-a_1 & p_2-a_2 & \cdots & p_n-a_n\end{pmatrix}.\qquad(6.18)$$

To have zero frequency gain equal to unity, the parameter $k_r$ should be chosen as

$$k_r=\frac{a_n+\tilde k_n}{b_n}=\frac{p_n}{b_n}.\qquad(6.19)$$

Notice that it is essential to know the precise values of parameters $a_n$ and $b_n$ in order to obtain the correct zero frequency gain. The zero frequency gain is thus obtained by precise calibration. This is very different from obtaining the correct steady-state value by integral action, which we shall see in later sections.

Eigenvalue Assignment

We have seen through the examples how feedback can be used to design the dynamics of a system through assignment of its eigenvalues. To solve the problem in the general case, we simply change coordinates so that the system is in reachable canonical form. Consider the system

$$\frac{dx}{dt}=Ax+Bu,\qquad y=Cx+Du.\qquad(6.20)$$

We can change the coordinates by a linear transformation $z=Tx$ so that the transformed system is in reachable canonical form (6.15). For such a system the feedback is given by equation (6.16), where the coefficients are given by equation (6.18). Transforming back to the original coordinates gives the feedback

$$u=-\tilde Kz+k_rr=-\tilde KTx+k_rr.$$

The results obtained can be summarized as follows.

Theorem 6.3 (Eigenvalue assignment by state feedback). Consider the system given by equation (6.20), with one input and one output. Let $\lambda(s)=s^n+a_1s^{n-1}+\cdots+a_{n-1}s+a_n$ be the characteristic polynomial of $A$. If the system is reachable, then there exists a feedback

$$u=-Kx+k_rr$$

that gives a closed loop system with the characteristic polynomial

$$p(s)=s^n+p_1s^{n-1}+\cdots+p_{n-1}s+p_n$$

and unity zero frequency gain between $r$ and $y$. The feedback gain is given by

$$K=\tilde KT=\begin{pmatrix}p_1-a_1 & p_2-a_2 & \cdots & p_n-a_n\end{pmatrix}\tilde W_rW_r^{-1},\qquad k_r=\frac{p_n}{a_n},\qquad(6.21)$$

where $a_i$ are the coefficients of the characteristic polynomial of the matrix $A$.
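The gain formula of Theorem 6.3 can be implemented directly; the second-order system below is illustrative:

```python
import numpy as np

# Eigenvalue assignment via K = (p1-a1, ..., pn-an) Wr~ Wr^{-1} (Theorem 6.3).
def place(A, B, poles):
    n = A.shape[0]
    a = np.poly(A)[1:]                 # coefficients a1..an of det(sI - A)
    p = np.poly(poles)[1:]             # desired coefficients p1..pn
    Wr = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    # Reachable-canonical-form pair (A~, B~) and its reachability matrix Wr~.
    At = np.vstack([-a, np.eye(n - 1, n)])
    Bt = np.eye(n, 1)
    Wrt = np.hstack([np.linalg.matrix_power(At, k) @ Bt for k in range(n)])
    return (p - a).reshape(1, n) @ Wrt @ np.linalg.inv(Wr)

A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
K = place(A, B, [-1.0, -2.0])
print(np.round(sorted(np.linalg.eigvals(A - B @ K)), 6))  # [-2. -1.]
```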
... $(H_e,L_e)=(20.6,29.5)$. This yields a linear dynamical system

$$\frac{d}{dt}\begin{pmatrix}z_1\\z_2\end{pmatrix}=\begin{pmatrix}0.13 & 0.93\\ -0.57 & 0\end{pmatrix}\begin{pmatrix}z_1\\z_2\end{pmatrix}+\begin{pmatrix}17.2\\0\end{pmatrix}v,\qquad w=\begin{pmatrix}0 & 1\end{pmatrix}\begin{pmatrix}z_1\\z_2\end{pmatrix},$$

where $z_1=L-L_e$, $z_2=H-H_e$ and $v=u$. It is easy to check that the system is reachable around the equilibrium $(z,v)=(0,0)$, and hence we can assign the eigenvalues of the system using state feedback.

Determining the eigenvalues of the closed loop system requires balancing the ability to modulate the input against the natural dynamics of the system. This can be done by the process of trial and error or by using some of the more systematic techniques discussed in the remainder of the text. For now, we simply choose the desired closed loop eigenvalues to be at $\lambda=\{-0.1,-0.2\}$. We can then solve for the feedback gains using the techniques described earlier, which results in

$$K=\begin{pmatrix}0.025 & -0.052\end{pmatrix}.$$

Finally, we solve for the reference gain $k_r$ using equation (6.13) to obtain $k_r=0.002$.

Putting these steps together, our control law becomes $v=-Kz+k_rr$. In order to implement the control law, we must rewrite it using the original coordinates for the system, yielding

$$u=u_e-K(x-x_e)+k_r(r-y_e)=-\begin{pmatrix}0.025 & -0.052\end{pmatrix}\begin{pmatrix}H-20.6\\ L-29.5\end{pmatrix}+0.002\,(r-29.5).$$

This rule tells us how much we should modulate $r_h$ as a function of the current number of lynxes and hares in the ecosystem. Figure 6.7a shows a simulation of the resulting closed loop system using the parameters defined above and starting with an initial population of 15 hares and 20 lynxes. Note that the system quickly stabilizes the population of lynxes at the reference value ($L=30$). A phase portrait of the system is given in Figure 6.7b, showing how other initial conditions converge to the stabilized equilibrium population. Notice that the dynamics are very different from the natural dynamics (shown in Figure 3.20).

The results of this section show that we can use state feedback to design the dynamics of a system, under the strong assumption that we can measure all of the states. We shall address the availability of the states in the next chapter, when
we consider output feedback and state estimation. In addition, Theorem 6.3, which states that the eigenvalues can be assigned to arbitrary locations, is also highly idealized and assumes that the dynamics of the process are known to high precision. The robustness of state feedback combined with state estimators is considered in Chapter 12, after we have developed the requisite tools.

Figure 6.9: Frequency response of a second-order system (6.23). (a) Eigenvalues as a function of $\zeta$. (b) Frequency response as a function of $\zeta$. The upper curve shows the gain ratio $M$, and the lower curve shows the phase shift $\theta$. For small $\zeta$ there is a large peak in the magnitude of the frequency response and a rapid change in phase centered at $\omega=\omega_0$. As $\zeta$ is increased, the magnitude of the peak drops and the phase changes more smoothly between 0° and −180°.

... explicitly, and is given by

$$Me^{j\theta}=\frac{k\omega_0^2}{(i\omega)^2+2\zeta\omega_0(i\omega)+\omega_0^2}=\frac{k\omega_0^2}{\omega_0^2-\omega^2+2i\zeta\omega_0\omega}.$$

A graphical illustration of the frequency response is given in Figure 6.9. Notice the resonant peak that increases with decreasing $\zeta$. The peak is often characterized by its Q-value, defined as $Q=1/(2\zeta)$. The properties of the frequency response for a second-order system are summarized in Table 6.2.

Example 6.6 (Drug administration). To illustrate the use of these formulas, consider the two-compartment model for drug administration described in Section 3.6. The dynamics of the system are

$$\frac{dc}{dt}=\begin{pmatrix}-k_0-k_1 & k_1\\ k_2 & -k_2\end{pmatrix}c+\begin{pmatrix}b_0\\0\end{pmatrix}u,\qquad y=\begin{pmatrix}0 & 1\end{pmatrix}c,$$

where $c_1$ and $c_2$ are the concentrations of the drug in each compartment, $k_i$, $i=0,\ldots,2$, and $b_0$ are parameters of the system, $u$ is the flow rate of the drug into compartment 1 and $y$ is the concentration of the drug in compartment 2.

Table 6.2: Properties of the frequency response for a second-order system with $0<\zeta<1$.

  Property              Value   ζ = 0.1    ζ = 0.5    ζ = 1/√2
  Zero frequency gain   M0      k          k          k
  Bandwidth             ωb      1.54 ω0    1.27 ω0    ω0
  Resonant peak gain    Mr      5.03 k     1.15 k     k
  Resonant frequency    ωmr     ω0         0.707 ω0   0
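The resonant peak entries in Table 6.2 follow from $M_r=k/(2\zeta\sqrt{1-\zeta^2})$, attained at $\omega_{mr}=\omega_0\sqrt{1-2\zeta^2}$ for $\zeta<1/\sqrt{2}$. A numerical cross-check against a dense sweep of $|G(i\omega)|$ (with $k=1$, $\omega_0=1$ chosen for illustration):

```python
import numpy as np

# Second-order system G(s) = k*w0^2 / (s^2 + 2*z*w0*s + w0^2).
k, w0, z = 1.0, 1.0, 0.1

w = np.linspace(0.01, 3.0, 200000)
gain = np.abs(k * w0**2 / ((1j * w)**2 + 2 * z * w0 * (1j * w) + w0**2))

Mr_formula = k / (2 * z * np.sqrt(1 - z**2))
print(round(Mr_formula, 3))                 # 5.025, i.e. Q = 1/(2*z) ~ 5
print(abs(gain.max() - Mr_formula) < 1e-3)  # True: sweep peak matches the formula
```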
Figure 6.10: Open loop versus closed loop drug administration. Comparison between drug administration using a sequence of doses versus continuously monitoring the concentrations and adjusting the dosage continuously. In each case the concentration is (approximately) maintained at the desired level, but the closed loop system has substantially less variability in drug concentration.

We assume that we can measure the concentrations of the drug in each compartment, and we would like to design a feedback law to maintain the output at a given reference value $r$. We choose $\zeta=0.9$ to minimize the overshoot and choose the rise time to be $T_r=10$ min. Using the formulas in Table 6.1, this gives a value for $\omega_0=0.22$. We can now compute the gain to place the eigenvalues at this location. Setting $u=-Kx+k_rr$, the closed loop eigenvalues for the system satisfy

$$\lambda(s)=-0.198\pm0.0959i.$$

Choosing $k_1=0.2027$ and $k_2=0.2005$ gives the desired closed loop behavior. Equation (6.13) gives the reference gain $k_r=0.0645$. The response of the controller is shown in Figure 6.10 and compared with an open loop strategy involving administering periodic doses of the drug.

Higher-Order Systems

Our emphasis so far has considered only second-order systems. For higher-order systems, eigenvalue assignment is considerably more difficult, especially when trying to account for the many trade-offs that are present in a feedback design. One of the other reasons why second-order systems play such an important role in feedback systems is that even for more complicated systems the response is often characterized by the dominant eigenvalues. To define these more precisely, consider a system with eigenvalues $\lambda_j$, $j=1,\ldots,n$. We define the damping ratio for a complex eigenvalue $\lambda$ to be

$$\zeta=\frac{-\operatorname{Re}\lambda}{|\lambda|}.$$

We say that a complex
conjugate pair of eigenvalues $\lambda$, $\lambda^*$ is a dominant pair if it has the lowest damping ratio compared with all other eigenvalues of the system.

Assuming that a system is stable, the dominant pair of eigenvalues tends to be the most important element of the response. To see this, assume that we have a system in Jordan form with a simple Jordan block corresponding to the dominant pair of eigenvalues:

$$\frac{dz}{dt}=\begin{pmatrix}\lambda & & & & \\ & \lambda^* & & & \\ & & J_2 & & \\ & & & \ddots & \\ & & & & J_k\end{pmatrix}z+Bu,\qquad y=Cz.$$

(Note that the state $z$ may be complex because of the Jordan transformation.) The response of the system will be a linear combination of the responses from each of the individual Jordan subsystems. As we see from Figure 6.8, for $\zeta<1$ the subsystem with the slowest response is precisely the one with the smallest damping ratio. Hence, when we add the responses from each of the individual subsystems, it is the dominant pair of eigenvalues that will be the primary factor after the initial transients due to the other terms in the solution die out. While this simple analysis does not always hold (e.g., if some nondominant terms have larger coefficients because of the particular form of the system), it is often the case that the dominant eigenvalues determine the (step) response of the system.

The only formal requirement for eigenvalue assignment is that the system be reachable. In practice there are many other constraints because the selection of eigenvalues has a strong effect on the magnitude and rate of change of the control signal. Large eigenvalues will in general require large control signals as well as fast changes of the signals. The capability of the actuators will therefore impose constraints on the possible location of closed loop eigenvalues. These issues will be discussed in depth in Chapters 11 and 12.

We illustrate some of the main ideas using the balance system as an example.

Example 6.7 (Balance system). Consider the problem of stabilizing a balance system, whose dynamics were given in Example 6.2. The dynamics are given by

$$A=\begin{pmatrix}0 & 0 & 1 & 0\\ 0 & 0 & 0 & 1\\ 0 & m^2l^2g/\mu & -cJ_t/\mu & -\gamma lm/\mu\\ 0 & M_tmgl/\mu & -clm/\mu & -\gamma M_t/\mu\end{pmatrix},
\qquad B=\begin{pmatrix}0\\0\\ J_t/\mu\\ lm/\mu\end{pmatrix},$$

where $M_t=M+m$, $J_t=J+ml^2$ and $\mu=M_tJ_t-m^2l^2$, and we have left $c$ and $\gamma$ nonzero.

... the matrices $Q_x$ and $Q_u$, we can balance the rate of convergence of the solutions with the cost of the control. The solution to the LQR problem is given by a linear control law of the form

$$u=-Q_u^{-1}B^TPx,$$

where $P\in\mathbb{R}^{n\times n}$ is a positive definite, symmetric matrix that satisfies the equation

$$PA+A^TP-PBQ_u^{-1}B^TP+Q_x=0.\qquad(6.27)$$

Equation (6.27) is called the algebraic Riccati equation and can be solved numerically (e.g., using the lqr command in MATLAB).

One of the key questions in LQR design is how to choose the weights $Q_x$ and $Q_u$. To guarantee that a solution exists, we must have $Q_x\ge0$ and $Q_u>0$. In addition, there are certain "observability" conditions on $Q_x$ that limit its choice. Here we assume $Q_x>0$ to ensure that solutions to the algebraic Riccati equation always exist.

To choose specific values for the cost function weights $Q_x$ and $Q_u$, we must use our knowledge of the system we are trying to control. A particularly simple choice is to use diagonal weights

$$Q_x=\begin{pmatrix}q_1 & & 0\\ & \ddots & \\ 0 & & q_n\end{pmatrix},\qquad Q_u=\begin{pmatrix}\rho_1 & & 0\\ & \ddots & \\ 0 & & \rho_n\end{pmatrix}.$$

For this choice of $Q_x$ and $Q_u$, the individual diagonal elements describe how much each state and input (squared) should contribute to the overall cost. Hence we can take states that should remain small and attach higher weight values to them. Similarly, we can penalize an input versus the states and other inputs through choice of the corresponding input weight $\rho$.

Example 6.8 (Vectored thrust aircraft). Consider the original dynamics of the system (2.26), written in state space form as

$$\frac{dz}{dt}=\begin{pmatrix}z_4\\ z_5\\ z_6\\ -g\sin z_3-\frac{c}{m}z_4\\ -g\cos z_3-\frac{c}{m}z_5\\ 0\end{pmatrix}+\begin{pmatrix}0\\0\\0\\ \frac{1}{m}\cos\theta\,F_1-\frac{1}{m}\sin\theta\,F_2\\ \frac{1}{m}\sin\theta\,F_1+\frac{1}{m}\cos\theta\,F_2\\ \frac{r}{J}F_1\end{pmatrix}$$

(see Example 5.4). The system parameters are

$$m=4\text{ kg},\quad J=0.0475\text{ kg m}^2,\quad r=0.25\text{ m},\quad g=9.8\text{ m/s}^2,\quad c=0.05\text{ N s/m},$$

which corresponds to a scaled model of the system. The equilibrium point for the system is given by $F_1=0$, $F_2=mg$ and $z_e=(x_e,y_e,0,0,0,0)$. To derive the linearized model near an equilibrium
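The algebraic Riccati equation (6.27) can also be solved outside MATLAB; a sketch using SciPy's continuous-time Riccati solver on an illustrative double integrator:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Solve P A + A^T P - P B Qu^{-1} B^T P + Qx = 0 and form u = -Qu^{-1} B^T P x.
A = np.array([[0.0, 1.0], [0.0, 0.0]])   # double integrator, illustrative
B = np.array([[0.0], [1.0]])
Qx = np.eye(2)
Qu = np.array([[1.0]])

P = solve_continuous_are(A, B, Qx, Qu)
K = np.linalg.solve(Qu, B.T @ P)

# The Riccati residual should be ~0, and the closed loop A - BK stable.
residual = P @ A + A.T @ P - P @ B @ np.linalg.solve(Qu, B.T @ P) + Qx
print(np.allclose(residual, 0, atol=1e-8))            # True
print(np.all(np.linalg.eigvals(A - B @ K).real < 0))  # True
```

For this system the solution is $P=\begin{pmatrix}\sqrt3 & 1\\ 1 & \sqrt3\end{pmatrix}$ and $K=(1,\ \sqrt3)$, which can be checked by hand.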
Figure 6.14: Web server with LQR control. The plot in (a) shows the state of the system under a change in external load applied at $k=10$ ms. The corresponding web server parameters (system inputs) are shown in (b). The controller is able to reduce the effect of the disturbance by approximately 40%.

6.4 Integral Action

Controllers based on state feedback achieve the correct steady-state response to command signals by careful calibration of the gain $k_r$. However, one of the primary uses of feedback is to allow good performance in the presence of uncertainty, and hence requiring that we have an exact model of the process is undesirable. An alternative to calibration is to make use of integral feedback, in which the controller uses an integrator to provide zero steady-state error. The basic concept of integral feedback was given in Section 1.5 and in Section 3.1; here we provide a more complete description and analysis.

The basic approach in integral feedback is to create a state within the controller that computes the integral of the error signal, which is then used as a feedback term. We do this by augmenting the description of the system with a new state $z$:

$$\frac{d}{dt}\begin{pmatrix}x\\z\end{pmatrix}=\begin{pmatrix}Ax+Bu\\ y-r\end{pmatrix}=\begin{pmatrix}Ax+Bu\\ Cx-r\end{pmatrix}.\qquad(6.28)$$

The state $z$ is seen to be the integral of the difference between the actual output $y$ and the desired output $r$. Note that if we find a compensator that stabilizes the system, then we will necessarily have $\dot z=0$ in steady state and hence $y=r$ in steady state.

Given the augmented system, we design a state space controller in the usual fashion, with a control law of the form

$$u=-Kx-k_iz+k_rr,\qquad(6.29)$$

where $K$ is the usual state feedback term, $k_i$ is the integral term and $k_r$ is used to set the nominal input for the desired steady state. The resulting equilibrium point for the system is given as

$$x_e=-(A-BK)^{-1}B(k_rr-k_iz_e).$$

Note that the value of $z_e$ is not specified but rather will automatically settle to the value that makes $\dot z=y-r=0$, which implies that at equilibrium the output will equal the reference value.
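The augmentation in equation (6.28) can be formed explicitly; the feedback gains below are illustrative, not designed:

```python
import numpy as np

# Integral action: augment the state with z = integral of (y - r) and apply
# u = -K x - ki z + kr r. Shown for the double integrator with C = (1, 0).
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])

# Augmented dynamics: d/dt [x; z] = [[A, 0], [C, 0]] [x; z] + [B; 0] u + [0; -1] r
Aaug = np.block([[A, np.zeros((2, 1))], [C, np.zeros((1, 1))]])
Baug = np.vstack([B, [[0.0]]])

K_aug = np.array([[2.0, 3.0, 1.0]])   # (K, ki), illustrative stabilizing values
eigs = np.linalg.eigvals(Aaug - Baug @ K_aug)
print(np.all(eigs.real < 0))          # True: the augmented closed loop is stable
```

With the augmented loop stable, $z$ converges and $\dot z=y-r\to0$, giving zero steady-state error regardless of $k_r$.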
that makes ż = y - r = 0, which implies that at equilibrium the output will equal the reference value. This holds independently of the specific values of A.

Figure 6.15: Velocity and throttle for a car with cruise control based on proportional (dashed) and PI control (solid). The PI controller is able to adjust the throttle to compensate for the effect of the hill and maintain the speed at the reference value of vr = 25 m/s.

The resulting controller stabilizes the system and hence brings ż = y - vr to zero, resulting in perfect tracking. Notice that even if we have a small error in the values of the parameters defining the system, as long as the closed loop eigenvalues are still stable, then the tracking error will approach zero. Thus the exact calibration required in our previous approach (using kr) is not needed here. Indeed, we can even choose kr = 0 and let the feedback controller do all of the work.

Integral feedback can also be used to compensate for constant disturbances. Figure 6.15 shows the results of a simulation in which the car encounters a hill with angle θ = 4° at t = 8 s. The stability of the system is not affected by this external disturbance, and so we once again see that the car's velocity converges to the reference speed. This ability to handle constant disturbances is a general property of controllers with integral feedback (see Exercise 6.4).

6.5 Further Reading

The importance of state models and state feedback was discussed in the seminal paper by Kalman [113], where the state feedback gain was obtained by solving an optimization problem that minimized a quadratic loss function. The notions of reachability and observability (Chapter 7) are also due to Kalman [115]; see also [82, 118]. Kalman defines controllability and reachability as the ability to reach the origin and an arbitrary state, respectively [117]. We note that in most textbooks the term controllability is used instead of
reachability, but we prefer the latter term because it is more descriptive of the fundamental property of being able to reach arbitrary states. Most undergraduate textbooks on control contain material on state space systems, including, for example, Franklin, Powell and Emami-Naeini [79] and Ogata [162]. Friedland's textbook [80] covers the material in the previous, current and next chapter in considerable detail, including the topic of optimal control.

Exercises

6.1 (Double integrator) Consider the double integrator. Find a piecewise constant control strategy that drives the system from the origin to the state x = (1, 1).

6.2 (Reachability from nonzero initial state) Extend the argument in Section 6.1 to show that if a system is reachable from an initial state of zero, it is reachable from a nonzero initial state.

6.3 (Unreachable systems) Consider the system shown in Figure 6.3. Write the dynamics of the two systems as

  dx/dt = Ax + Bu,  dz/dt = Az + Bu.

If x and z have the same initial condition, they will always have the same state regardless of the input that is applied. Show that this violates the definition of reachability, and further show that the reachability matrix Wr is not full rank.

6.4 (Integral feedback for rejecting constant disturbances) Consider a linear system of the form

  dx/dt = Ax + Bu + Fd,

where d is a disturbance that enters the system through a disturbance vector F ∈ R^n. Show that integral feedback can be used to compensate for a constant disturbance by giving zero steady-state error, even when d ≠ 0.

6.5 (Rear-steered bicycle) A simple model for a bicycle was given by equation (3.5) in Section 3.2. A model for a bicycle with rear-wheel steering is obtained by reversing the sign of the velocity in the model. Determine the conditions under which this system is reachable and explain any situations in which the system is not reachable.

6.6 (Characteristic polynomial for reachable canonical form) Show that the characteristic polynomial for a system in reachable canonical form is given by equation (6.7), and that

  d^n z_k/dt^n + a_1 d^{n-1} z_k/dt^{n-1} + ⋯ + a_{n-1} dz_k/dt + a_n z_k = d^{n-k} u/dt^{n-k},

where z_k is the kth state.

6.7 (Reachability matrix for reachable canonical form) Consider a system in reachable canonical form. Show that the inverse of the reachability matrix is given by

  Wr^{-1} = [1 a_1 a_2 ⋯ a_{n-1}; 0 1 a_1 ⋯ a_{n-2}; ⋮; 0 0 ⋯ 1 a_1; 0 0 ⋯ 0 1].

6.8 (Nonmaintainable equilibria) Consider the normalized model of a pendulum on a cart

  d^2 x/dt^2 = u,  d^2 θ/dt^2 = θ + u,

where x is the cart position and θ is the pendulum angle. Can the equilibrium θ = θ0 for θ0 ≠ 0 be maintained?

6.9 (Eigenvalue assignment for unreachable system) Consider the system

  dx/dt = [0 1; 0 0] x + [1; 0] u,  y = [1 0] x,

with the control law u = -k_1 x_1 - k_2 x_2 + k_r r. Show that the eigenvalues of the system cannot be assigned to arbitrary values.

6.10 (Cayley–Hamilton theorem) Let A ∈ R^{n×n} be a matrix with characteristic polynomial λ(s) = det(sI - A) = s^n + a_1 s^{n-1} + ⋯ + a_{n-1} s + a_n. Show that the matrix satisfies

  λ(A) = A^n + a_1 A^{n-1} + ⋯ + a_{n-1} A + a_n I = 0,

and use this to show that A^k, k ≥ n, can be rewritten in terms of powers of A of order less than n.

6.11 (Motor drive) Consider the normalized model of the motor drive in Exercise 2.10. Using the following normalized parameters, J_1 = 10/9, J_2 = 10, c = 0.1, k = 1, kI = 1, verify that the eigenvalues of the open loop system are 0, 0 and -0.05 ± i. Design a state feedback that gives a closed loop system with eigenvalues -2, -1 and -1 ± i. This choice implies that the oscillatory eigenvalues will be well damped and that the eigenvalues at the origin are replaced by eigenvalues on the negative real axis. Simulate the responses of the closed loop system to step changes in the command signal and a step change in a disturbance torque on the second rotor.

6.12 (Whipple bicycle model) Consider the Whipple bicycle model given by equation (3.7) in Section 3.2. The model is unstable at the velocity v = 5 m/s, and the open loop eigenvalues are -1.84, -14.29 and 1.30 ± 4.60i. Find the gains of a controller that stabilizes the bicycle and gives closed loop eigenvalues at -2, -10 and -1 ± i. Simulate the response of the system to a step change in the steering reference of 0.002 rad.

6.13 (Atomic force microscope)
Consider the model of an AFM in contact mode given in Example 59 dx dt 0 1 0 0 km1 m2 cm1 m2 1m2 0 0 0 0 ω3 0 0 ω3 2ζ3ω3 x 0 0 0 ω2 3 u y m2 m1 m2 m1k m1 m2 m1c m1 m2 1 0 x 200 CHAPTER 6 STATE FEEDBACK Use the MATLAB script afmdatam from the companion web site to generate the system matrices a Compute the reachability matrix of the system and determine its rank Scale the model by using milliseconds instead of seconds as time units Repeat the calculation of the reachability matrix and its rank b Find a state feedback controller that gives a closed loop system with complex poles having damping ratio 0707 Use the scaled model for the computations c Compute state feedback gains using linear quadratic control Experiment by using different weights Compute the gains for q1 q2 0 q3 q4 1 R 1 and ρ 01 and explain the result Choose q1 q2 q3 q4 r1 1 and explore what happens to the feedback gains and closed loop eigenvalues when you change ρ Use the scaled system for this computation 614 Consider the secondorder system d2y dt2 05dy dt y a du dt u Let the initial conditions be zero a Show that the initial slope of the unit step response is a Discuss what it means when a 0 b Show that there are points on the unit step response that are invariant with a Discuss qualitatively the effect of the parameter a on the solution c Simulate the system and explore the effect of a on the rise time and overshoot 615 Brysons rule Bryson and Ho 47 have suggested the following method for choosing the matrices Qx and Qu in equation 626 Start by choosing Qx and Qu as diagonal matrices whose elements are the inverses of the squares of the maxima of the corresponding variables Then modify the elements to obtain a compromise among response time damping and control effort Apply this method to the motor drive in Exercise 611 Assume that the largest values of the ϕ1 and ϕ2 are 1 the largest values of ϕ1 and ϕ2 are 2 and the largest control signal is 10 Simulate the closed loop system for ϕ20 1 and all 
other states are initialized to 0 Explore the effects of different values of the diagonal elements for Qx and Qu Chapter Seven Output Feedback One may separate the problem of physical realization into two stages computation of the best approximation ˆxt1 of the state from knowledge of yt for t t1 and computation of ut1 given ˆxt1 R E Kalman Contributions to the Theory of Optimal Control 1960 113 In this chapter we show how to use output feedback to modify the dynamics of the system through the use of observers We introduce the concept of observ ability and show that if a system is observable it is possible to recover the state from measurements of the inputs and outputs to the system We then show how to design a controller with feedback from the observer state An important concept is the separation principle quoted above which is also proved The structure of the controllers derived in this chapter is quite general and is obtained by many other design methods 71 Observability In Section 62 of the previous chapter it was shown that it is possible to find a state feedback law that gives desired closed loop eigenvalues provided that the system is reachable and that all the states are measured For many situations it is highly unrealistic to assume that all the states are measured In this section we investigate how the state can be estimated by using a mathematical model and a few measurements It will be shown that computation of the states can be carried out by a dynamical system called an observer Definition of Observability Consider a system described by a set of differential equations dx dt Ax Bu y Cx Du 71 where x Rn is the state u Rp the input and y Rq the measured output We wish to estimate the state of the system from its inputs and outputs as illustrated in Figure 71 In some situations we will assume that there is only one measured signal ie that the signal y is a scalar and that C is a row vector This signal may be corrupted by noise n although we shall start 
by considering the noisefree case We write ˆx for the state estimate given by the observer 206 CHAPTER 7 OUTPUT FEEDBACK a system in observable canonical form which is given by Wo 1 0 0 0 a1 1 0 0 a2 1 a1a2 a1 1 0 1 where represents an entry whose exact value is not important The rows of this matrix are linearly independent since it is lower triangular and hence Wo is full rank A straightforward but tedious calculation shows that the inverse of the observability matrix has a simple form given by W 1 o 1 0 0 0 a1 1 0 0 a2 a1 1 0 an1 an2 an3 1 As in the case of reachability it turns out that if a system is observable then there always exists a transformation T that converts the system into observable canonical form This is useful for proofs since it lets us assume that a system is in reachable canonical form without any loss of generality The reachable canonical form may be poorly conditioned numerically 72 State Estimation Having defined the concept of observability we now return to the question of how to construct an observer for a system We will look for observers that can be represented as a linear dynamical system that takes the inputs and outputs of the system we are observing and produces an estimate of the systems state That is we wish to construct a dynamical system of the form d ˆx dt F ˆx Gu Hy where u and y are the input and output of the original system and ˆx Rn is an estimate of the state with the property that ˆxt xt as t The Observer We consider the system in equation 71 with D set to zero to simplify the expo sition dx dt Ax Bu y Cx 76 72 STATE ESTIMATION 207 We can attempt to determine the state simply by simulating the equations with the correct input An estimate of the state is then given by d ˆx dt A ˆx Bu 77 To find the properties of this estimate introduce the estimation error x x ˆx It follows from equations 76 and 77 that d x dt A x If matrix A has all its eigenvalues in the left halfplane the error x will go to zero and hence equation 77 is 
a dynamical system whose output converges to the state of the system (7.6). The observer given by equation (7.7) uses only the process input u; the measured signal does not appear in the equation. We must also require that the system be stable, and essentially our estimator converges because the states of both the observer and the estimator are going to zero. This is not very useful in a control design context, since we want to have our estimate converge quickly to a nonzero state so that we can make use of it in our controller. We will therefore attempt to modify the observer so that the output is used and its convergence properties can be designed to be fast relative to the system's dynamics. This version will also work for unstable systems.

Consider the observer

  dx̂/dt = Ax̂ + Bu + L(y - Cx̂).   (7.8)

This can be considered as a generalization of equation (7.7). Feedback from the measured output is provided by adding the term L(y - Cx̂), which is proportional to the difference between the observed output and the output predicted by the observer. It follows from equations (7.6) and (7.8) that

  dx̃/dt = (A - LC)x̃.

If the matrix L can be chosen in such a way that the matrix A - LC has eigenvalues with negative real parts, the error x̃ will go to zero. The convergence rate is determined by an appropriate selection of the eigenvalues.

Notice the similarity between the problems of finding a state feedback and finding the observer. State feedback design by eigenvalue assignment is equivalent to finding a matrix K so that A - BK has given eigenvalues. Designing an observer with prescribed eigenvalues is equivalent to finding a matrix L so that A - LC has given eigenvalues. Since the eigenvalues of a matrix and its transpose are the same, we can establish the following equivalences:

  A ↔ A^T,  B ↔ C^T,  K ↔ L^T,  Wr ↔ Wo^T.

The observer design problem is the dual of the state feedback design problem. Using the results of Theorem 6.3, we get the following theorem on observer design.
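The duality just described can be exercised numerically: choosing L so that A - LC has prescribed eigenvalues is the same as designing a state feedback for the transposed pair (A^T, C^T) and transposing the result. A minimal Python sketch with SciPy's place_poles, using a hypothetical double-integrator observer problem with arbitrarily chosen observer eigenvalues (all values here are illustrative assumptions, not from the text):

```python
import numpy as np
from scipy.signal import place_poles

# Hypothetical process dx/dt = A x + B u with measured output y = C x
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
C = np.array([[1.0, 0.0]])

# Desired observer eigenvalues, chosen fast relative to the process dynamics
observer_poles = [-4.0, -5.0]

# Duality: a "state feedback" gain for (A^T, C^T) transposes into L,
# since the eigenvalues of A - L C equal those of A^T - C^T L^T
K_dual = place_poles(A.T, C.T, observer_poles).gain_matrix
L = K_dual.T

obs_eigs = np.linalg.eigvals(A - L @ C)
```

Here A - LC has characteristic polynomial s^2 + 9s + 20 = (s + 4)(s + 5), so the gain comes out as L = (9, 20)^T.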
Concentration c1 c2 gL Time t min c1 c2 Figure 74 Observer for a two compartment system A two compartment model is shown on the left The observer measures the input concentration u and output concentration y c1 to determine the compartment concentrations shown on the right The true concentrations are shown by solid lines and the estimates generated by the observer by dashed lines Let the desired characteristic polynomial of the observer be s2 p1s p2 and equation 71 gives the observer gain L 1 0 k0 k1 k1 1 1 0 k0 k1 k2 1 1 p1 k0 k1 k2 p2 k0k2 p1 k0 k1 k2 p2 p1k2 k1k2 k2 2k1 Notice that the observability condition k1 0 is essential The behavior of the observer is illustrated by the simulation in Figure 74b Notice how the observed concentrations approach the true concentrations The observer is a dynamical system whose inputs are the process input u and the process output y The rate of change of the estimate is composed of two terms One term A ˆx Bu is the rate of change computed from the model with ˆx substituted for x The other term Ly ˆy is proportional to the difference e y ˆy between measured output y and its estimate ˆy C ˆx The observer gain L is a matrix that tells how the error e is weighted and distributed among the states The observer thus combines measurements with a dynamical model of the system A block diagram of the observer is shown in Figure 75 Computing the Observer Gain For simple loworder problems it is convenient to introduce the elements of the observer gain L as unknown parameters and solve for the values required to give the desired characteristic polynomial as illustrated in the following example Example 73 Vehicle steering The normalized linear model for vehicle steering derived in Examples 512 and 64 gives the following state space model dynamics relating lateral path deviation y to 73 CONTROL USING ESTIMATED STATE 211 0 10 20 30 0 5 10 15 20 25 30 x m y m 0 2 4 6 0 2 4 6 Act Est 0 2 4 6 0 02 04 0 2 4 6 1 0 1 2 0 2 4 6 0 05 1 x1 ˆx1 x2 ˆx2 x1 
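The remark that the observability condition k1 ≠ 0 is essential can be checked directly by forming the observability matrix Wo and computing its rank. A small Python sketch for a two-compartment structure of this kind follows; the dynamics matrix and the rate-constant values are hypothetical stand-ins chosen only to illustrate the rank test:

```python
import numpy as np

def observability_matrix(A, C):
    """Stack C, CA, ..., C A^(n-1) into the observability matrix Wo."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])

def compartment_A(k0, k1, k2):
    # Assumed two-compartment dynamics matrix; k1 couples compartment 2
    # into the measured compartment 1
    return np.array([[-(k0 + k1), k1],
                     [k2, -k2]])

C = np.array([[1.0, 0.0]])   # only the concentration c1 is measured

# With k1 != 0 the observability matrix has full rank ...
rank_ok = np.linalg.matrix_rank(observability_matrix(compartment_A(0.1, 0.5, 0.3), C))

# ... but with k1 = 0 the second compartment is invisible from y = c1
rank_bad = np.linalg.matrix_rank(observability_matrix(compartment_A(0.1, 0.0, 0.3), C))
```

rank_ok is 2 (observable) while rank_bad drops to 1, matching the requirement that k1 ≠ 0: when the coupling vanishes, no observer gain can recover the second state from the measurement.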
ˆx1 x2 ˆx2 Normalized time t Normalized time t Figure 76 Simulation of an observer for a vehicle driving on a curvy road left The observer has an initial velocity error The plots on the middle show the lateral deviation x1 the lateral velocity x2 by solid lines and their estimates ˆx1 and ˆx2 by dashed lines The plots on the right show the estimation errors A simulation of the observer for a vehicle driving on a curvy road is simulated in Figure 76 The vehicle length is the time unit in the normalized model The figure shows that the observer error settles in about 3 vehicle lengths For systems of high order we have to use numerical calculations The duality between the design of a state feedback and the design of an observer means that the computer algorithms for state feedback can also be used for the observer design we simply use the transpose of the dynamics matrix and the output matrix The MATLAB command acker which essentially is a direct implementation of the calculations given in Theorem 72 can be used for systems with one output The MATLAB command place can be used for systems with many outputs It is also better conditioned numerically 73 Control Using Estimated State In this section we will consider a state space system of the form dx dt Ax Bu y Cx 713 Notice that we have assumed that there is no direct term in the system D 0 This is often a realistic assumption The presence of a direct term in combination with a controller having proportional action creates an algebraic loop which will be discussed in Section 83 The problem can be solved even if there is a direct term but the calculations are more complicated We wish to design a feedback controller for the system where only the output is measured As before we will assume that u and y are scalars We also assume that the system is reachable and observable In Chapter 6 we found a feedback of the form u K x krr 212 CHAPTER 7 OUTPUT FEEDBACK for the case that all states could be measured and in Section 72 we 
developed an observer that can generate estimates of the state ˆx based on inputs and outputs In this section we will combine the ideas of these sections to find a feedback that gives desired closed loop eigenvalues for systems where only outputs are available for feedback If all states are not measurable it seems reasonable to try the feedback u K ˆx krr 714 where ˆx is the output of an observer of the state ie d ˆx dt A ˆx Bu Ly C ˆx 715 Since the system 713 and the observer 715 are both of state dimension n the closed loop system has state dimension 2n with state x ˆx The evolution of the states is described by equations 713715 To analyze the closed loop system the state variable ˆx is replaced by x x ˆx 716 Subtraction of equation 715 from equation 713 gives d x dt Ax A ˆx LCx C ˆx A x LC x A LCx Returning to the process dynamics introducing u from equation 714 into equation 713 and using equation 716 to eliminate ˆx gives dx dt Ax Bu Ax BK ˆx Bkrr Ax BKx x Bkrr A BKx BK x Bkrr The closed loop system is thus governed by d dt x x A BK BK 0 A LC x x Bkr 0 r 717 Notice that the state x representing the observer error is not affected by the refer ence signal r This is desirable since we do not want the reference signal to generate observer errors Since the dynamics matrix is block diagonal we find that the characteristic polynomial of the closed loop system is λs det sI A BK det sI A LC This polynomial is a product of two terms the characteristic polynomial of the closed loop system obtained with state feedback and the characteristic polyno mial of the observer error The feedback 714 that was motivated heuristically thus provides a neat solution to the eigenvalue assignment problem The result is summarized as follows 214 CHAPTER 7 OUTPUT FEEDBACK 0 5 10 15 2 0 2 4 6 8 State feedback Output feedback Reference x1 ˆx1 Normalized time t 0 5 10 15 1 0 1 0 5 10 15 1 0 1 x2 ˆx2 u usfb Normalized time t Figure 78 Simulation of a vehicle driving on a curvy road with a 
controller based on state feedback and an observer The left plot shows the lane boundaries dotted the vehicle position solid and its estimate dashed the upper right plot shows the velocity solid and its estimate dashed and the lower right plot shows the control signal using state feedback solid and the control signal using the estimated state dashed troller contains a dynamical model of the plant This is called the internal model principle the controller contains a model of the process being controlled Example 74 Vehicle steering Consider again the normalized linear model for vehicle steering in Example 64 The dynamics relating the steering angle u to the lateral path deviation y is given by the state space model 712 Combining the state feedback derived in Example 64 with the observer determined in Example 73 we find that the controller is given by d ˆx dt A ˆx Bu Ly C ˆx 0 1 0 0 ˆx γ 1 u l1 l2 y ˆx1 u K ˆx krr k1r x1 k2x2 Elimination of the variable u gives d ˆx dt A BK LCˆx Ly Bkrr l1 γ k1 1 γ k2 k1 l2 k2 ˆx l1 l2 y γ 1 k1r The controller is a dynamical system of second order with two inputs y and r and one output u Figure 78 shows a simulation of the system when the vehicle is driven along a curvy road Since we are using a normalized model the length unit is the vehicle length and the time unit is the time it takes to travel one vehicle length The estimator is initialized with all states equal to zero but the real system has an initial velocity of 05 The figures show that the estimates converge quickly to their true values The vehicle tracks the desired path which is in the middle of the road but there are errors because the road is irregular The tracking error can be improved by introducing feedforward Section 75 74 KALMAN FILTERING 217 The Kalman filter can also be applied to continuoustime stochastic processes The mathematical derivation of this result requires more sophisticated tools but the final form of the estimator is relatively straightforward Consider 
a continuous stochastic system dx dt Ax Bu Fv EvsvTt Rvtδt s y Cx w EwswTt Rwtδt s where δτ is the unit impulse function Assume that the disturbance v and noise w are zero mean and Gaussian but not necessarily stationary pdfv 1 n 2πdet Rv e 1 2 vT R1 v v pdfw 1 n 2πdet Rw e 1 2 wT R1 w w We wish to find the estimate ˆxt that minimizes the mean square error Ext ˆxtxt ˆxtT given yτ 0 τ t Theorem 75 KalmanBucy 1961 The optimal estimator has the form of a linear observer d ˆx dt A ˆx Bu Ly C ˆx where Lt PtCT R1 w and Pt Ext ˆxtxt ˆxtT and satisfies d P dt AP P AT PCT R1 w tC P F RvtF T P0 Ex0x T 0 As in the discrete case when the system is stationary and if Pt converges the observer gain is constant L PCT R1 w where AP P AT PCT R1 w C P F Rv F T 0 The second equation is the algebraic Riccati equation Example 75 Vectored thrust aircraft We consider the lateral dynamics of the system consisting of the subsystems whose states are given by z x θ x θ To design a Kalman filter for the system we must include a description of the process disturbances and the sensor noise We thus augment the system to have the form dz dt Az Bu Fv y Cz w where F represents the structure of the disturbances including the effects of non linearities that we have ignored in the linearization w represents the disturbance source modeled as zero mean Gaussian white noise and v represents that mea surement noise also zero mean Gaussian and white For this example we choose F as the identity matrix and choose disturbances vi i 1 n to be independent disturbances with covariance given by Rii 01 Ri j 0 i j The sensor noise is a single random variable which we model as 218 CHAPTER 7 OUTPUT FEEDBACK 0 05 1 15 2 04 03 02 01 0 01 Time t s States zi mixed units x θ xd θd a Position measurement only 0 05 1 15 2 04 03 02 01 0 01 Time t s States zi mixed units x θ xd θd b Position and orientation Figure 79 Kalman filter design for a vectored thrust aircraft In the first design a only the lateral position of the 
aircraft is measured Adding a direct measurement of the roll angle produces a much better observer b The initial condition for both simulations is 01 00175 001 0 having covariance Rw 104 Using the same parameters as before the resulting Kalman gain is given by L 370 469 185 316 The performance of the estimator is shown in Figure 79a We see that while the estimator converges to the system state it contains significant overshoot in the state estimate which can lead to poor performance in a closed loop setting To improve the performance of the estimator we explore the impact of adding a new output measurement Suppose that instead of measuring just the output position x we also measure the orientation of the aircraft θ The output becomes y 1 0 0 0 0 1 0 0 z w1 w2 and if we assume that w1 and w2 are independent noise sources each with covariance Rwi 104 then the optimal estimator gain matrix becomes L 326 0150 0150 326 327 979 00033 316 These gains provide good immunity to noise and high performance as illustrated in Figure 79b 75 A GENERAL CONTROLLER STRUCTURE 221 x0 y0 x f y f a Overhead view 0 1 2 3 4 5 0 5 0 1 2 3 4 05 0 05 y m δ rad Time t s b Position and steering Figure 711 Trajectory generation for changing lanes We wish to change from the left lane to the right lane over a distance of 30 m in 4 s The planned trajectory in the xy plane is shown in a and the lateral position y and the steering angle δ over the maneuver time interval are shown in b There are many ways to generate the feedforward signal and there are also many different ways to compute the feedback gain K and the observer gain L Note that once again the internal model principle applies the controller contains a model of the system to be controlled through the observer Example 76 Vehicle steering To illustrate how we can use a two degreeoffreedom design to improve the per formance of the system consider the problem of steering a car to change lanes on a road as illustrated in Figure 711a We use the 
nonnormalized form of the dynamics where were derived in Exam ple 28 Using the center of the rear wheels as the reference α 0 the dynamics can be written as dx dt cos θv dy dt sin θv dθ dt 1 b tan δ where v is the forward velocity of the vehicle and δ is the steering angle To generate a trajectory for the system we note that we can solve for the states and inputs of the system given x y by solving the following sets of equations x v cos θ x v cos θ v θ sin θ y v sin θ y v sin θ v θ cos θ θ vl tan δ 724 This set of five equations has five unknowns θ θ v v and δ that can be solved using trigonometry and linear algebra It follows that we can compute a feasible trajectory for the system given any path xt yt This special property of a system is known as differential flatness 73 74 To find a trajectory from an initial state x0 y0 θ0 to a final state x f y f θ f 82 DERIVATION OF THE TRANSFER FUNCTION 239 independent variable x has the solution ψx Aexs Bexs Matching the boundary conditions gives A 0 and B est so the solution is yt θ1 t ψ1est esest esut The system thus has the transfer function Gs es As in the case of a time delay the transfer function is not a rational function but is an analytic function Gains Poles and Zeros The transfer function has many useful interpretations and the features of a transfer function are often associated with important system properties Three of the most important features are the gain and the locations of the poles and zeros The zero frequency gain of a system is given by the magnitude of the transfer function at s 0 It represents the ratio of the steadystate value of the output with respect to a step input which can be represented as u est with s 0 For a state space system we computed the zero frequency gain in equation 522 G0 D C A1B For a system written as a linear differential equation dny dtn a1 dn1y dtn1 any b0 dmu dtm b1 dm1u dtm1 bmu if we assume that the input and output of the system are constants y0 and u0 then we find that 
a_n y_0 = b_m u_0. Hence the zero frequency gain is

  G(0) = y_0/u_0 = b_m/a_n.   (8.16)

Next consider a linear system with the rational transfer function G(s) = b(s)/a(s). The roots of the polynomial a(s) are called the poles of the system, and the roots of b(s) are called the zeros of the system. If p is a pole, it follows that y(t) = e^{pt} is a solution of equation (8.8) with u = 0 (the homogeneous solution). A pole p corresponds to a mode of the system, with corresponding modal solution e^{pt}. The unforced motion of the system after an arbitrary excitation is a weighted sum of modes.

Zeros have a different interpretation. Since the pure exponential output corresponding to the input u(t) = e^{st} with a(s) ≠ 0 is G(s)e^{st}, it follows that the pure exponential output is zero if b(s) = 0. Zeros of the transfer function thus block transmission of the corresponding exponential signals.

Figure 8.4: A pole zero diagram for a transfer function with zeros at -5 and -1 and poles at -3 and -2 ± 2j. The circles represent the locations of the zeros and the crosses the locations of the poles. A complete characterization requires that we also specify the gain of the system.

For a state space system with transfer function G(s) = C(sI - A)^{-1}B + D, the poles of the transfer function are the eigenvalues of the matrix A in the state space model. One easy way to see this is to notice that the value of G(s) is unbounded when s is an eigenvalue of the system, since this is precisely the set of points where the characteristic polynomial λ(s) = det(sI - A) = 0, and hence sI - A is noninvertible. It follows that the poles of a state space system depend only on the matrix A, which represents the intrinsic dynamics of the system. We say that a transfer function is stable if all of its poles have negative real part.

To find the zeros of a state space system, we observe that the zeros are complex numbers s such that the input u(t) = u_0 e^{st} gives zero output. Inserting the pure exponential response x(t) = x_0 e^{st} and y(t) = 0 into equation (8.2) gives

  s e^{st} x_0 = A x_0 e^{st} + B u_0 e^{st},  0 = C e^{st} x_0 + D e^{st} u_0,

which
can be written as sI A B C D x0 u0 0 This equation has a solution with nonzero x0 u0 only if the matrix on the left does not have full rank The zeros are thus the values s such that the matrix sI A B C D 817 looses rank Since the zeros depend on A B C and D they therefore depend on how the inputs and outputs are coupled to the states Notice in particular that if the matrix B has full rank then the matrix in equation 817 has n linearly independent rows for all values of s Similarly there are n linearly independent columns if the matrix C has full rank This implies that systems where the matrix B or C is full rank do not have zeros In particular it means that a system has no zeros if it is fully actuated each state can be controlled independently or if the full state is measured A convenient way to view the poles and zeros of a transfer function is through a pole zero diagram as shown in Figure 84 In this diagram each pole is marked with a cross and each zero with a circle If there are multiple poles or zeros at a fixed location these are often indicated with overlapping crosses or circles or other 83 BLOCK DIAGRAMS AND TRANSFER FUNCTIONS 247 and Gurs kr 1 K G ˆxus k1s2 l1s l2 s2 sγ k1 k2 l1 k1 l2 k2l1 γ k2l2 where k1 and k2 are the controller gains Finally we compute the full closed loop dynamics We begin by deriving the transfer function for the process Ps We can compute this directly from the state space description of the dynamics which was given in Example 512 Using that description we have Ps G yus CsI A1B D 1 0 s 1 0 s 1 γ 1 γ s 1 s2 The transfer function for the full closed loop system between the input r and the output y is then given by G yr kr Ps 1 PsGuys k1γ s 1 s2 k1γ k2s k1 Note that the observer gains l1 and l2 do not appear in this equation This is because we are considering steadystate analysis and in steady state the estimated state exactly tracks the state of the system assuming perfect models We will return to this example in Chapter 12 to study 
the robustness of this particular approach.

Pole-Zero Cancellations

Because transfer functions are often ratios of polynomials in s, it can sometimes happen that the numerator and denominator have a common factor that can be canceled. Sometimes these cancellations are simply algebraic simplifications, but in other situations they can mask potential fragilities in the model. In particular, if a pole-zero cancellation occurs because terms in separate blocks just happen to coincide, the cancellation may not occur if one of the systems is slightly perturbed. In some situations this can result in severe differences between the expected behavior and the actual behavior.

To illustrate when we can have pole-zero cancellations, consider the block diagram in Figure 8.7 with F = 1 (no feedforward compensation) and C and P given by

  C(s) = n_c(s)/d_c(s),  P(s) = n_p(s)/d_p(s).

The transfer function from r to e is then given by

  G_er(s) = 1/(1 + PC) = d_c(s)d_p(s)/(d_c(s)d_p(s) + n_c(s)n_p(s)).

If there are common factors in the numerator and denominator polynomials, then these terms can be factored out and eliminated from both the numerator and denominator. For example, if the controller has a zero at s = -a and the process has a pole at s = -a, then we will have

  G_er(s) = (s + a)d_c(s)d'_p(s)/((s + a)d_c(s)d'_p(s) + (s + a)n'_c(s)n_p(s)) = d_c(s)d'_p(s)/(d_c(s)d'_p(s) + n'_c(s)n_p(s)),

where n'_c(s) and d'_p(s) represent the relevant polynomials with the term s + a factored out. In the case when a < 0, so that the zero or pole is in the right half-plane, we see that there is no impact on the transfer function G_er.

Suppose instead that we compute the transfer function from d to e, which represents the effect of a disturbance on the error between the reference and the output. This transfer function is given by

  G_ed(s) = d_c(s)n_p(s)/((s + a)d_c(s)d'_p(s) + (s + a)n'_c(s)n_p(s)).

Notice that if a < 0, then the pole is in the right half-plane and the transfer function G_ed is unstable. Hence even though the transfer function from r to e appears to be okay (assuming a perfect pole-zero cancellation), the transfer function from d to e can exhibit unbounded
behavior. This unwanted behavior is typical of an unstable pole/zero cancellation.

It turns out that the cancellation of a pole with a zero can also be understood in terms of the state space representation of the systems: reachability or observability is lost when there are cancellations of poles and zeros (Exercise 8.11). A consequence is that the transfer function represents the dynamics only in the reachable and observable subspace of a system (see Section 7.5).

Example 8.7 Cruise control. The input/output response from throttle to velocity for the linearized model for a car has the transfer function G(s) = b/(s + a), a > 0. A simple (but not necessarily good) way to design a PI controller is to choose the parameters of the PI controller so that the controller zero at s = −k_i/k_p cancels the process pole at s = −a. The transfer function from reference to velocity is G_vr(s) = b k_p/(s + b k_p), and control design is simply a matter of choosing the gain k_p. The closed loop system dynamics are of first order with the time constant 1/(b k_p).

Figure 8.10 shows the velocity error when the car encounters an increase in the road slope. A comparison with the controller used in Figure 3.3b (reproduced in dashed curves) shows that the controller based on pole/zero cancellation has very poor performance: the velocity error is larger, and it takes a long time to settle. Notice that the control signal remains practically constant after t = 15 s, even if the error is large after that time. To understand what happens, we will analyze the system. The parameters of the system are a = 0.0101 and b = 1.32, and the controller parameters are k_p = 0.5 and k_i = 0.0051. The closed loop time constant is 1/(b k_p) ≈ 1.5 s, and we would expect that the error would settle in about 6 s (four time constants). The transfer functions from road slope to velocity and control signals are

  G_vθ(s) = b_g s / ((s + a)(s + b k_p)),   G_uθ(s) = b_g k_p / (s + b k_p).

[Figure 8.10: Car with PI cruise control encountering a sloping road. The velocity error is shown on the left and the throttle is shown on the right. Results with a PI controller with k_p = 0.5 and k_i = 0.0051, where the process pole s = −0.0101 is canceled, are shown by solid lines, and a controller with k_p = 0.5 and k_i = 0.5 is shown by dashed lines. Compare with Figure 3.3b.]

Notice that the canceled mode s = −a = −0.0101 appears in G_vθ but not in G_uθ. The reason why the control signal remains constant is that the controller has a zero at s = −0.0101, which cancels the slowly decaying process mode. Notice that the error would diverge if the canceled pole was unstable. The lesson we can learn from this example is that it is a bad idea to try to cancel unstable or slow process poles. A more detailed discussion of pole/zero cancellations is given in Section 12.4.

Algebraic Loops

When analyzing or simulating a system described by a block diagram, it is necessary to form the differential equations that describe the complete system. In many cases the equations can be obtained by combining the differential equations that describe each subsystem and substituting variables. This simple procedure cannot be used when there are closed loops of subsystems that all have a direct connection between inputs and outputs, known as an algebraic loop.

To see what can happen, consider a system with two blocks: a first-order nonlinear system

  dx/dt = f(x, u),   y = h(x),        (8.21)

and a proportional controller described by u = −ky. There is no direct term since the function h does not depend on u. In that case we can obtain the equation for the closed loop system simply by replacing u by −ky in (8.21) to give

  dx/dt = f(x, −ky),   y = h(x).

Such a procedure can easily be automated using simple formula manipulation.

The situation is more complicated if there is a direct term. If y = h(x, u), then replacing u by −ky gives

  dx/dt = f(x, −ky),   y = h(x, −ky).

To obtain a differential equation for x, the algebraic equation y = h(x, −ky) must be solved to give y = α(x), which in general is a complicated task. When algebraic loops are present, it is necessary to solve algebraic equations to obtain the differential equations for the complete system.
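When no closed-form solution y = α(x) exists, the algebraic equation can be resolved numerically inside the right-hand side of the simulation. A minimal sketch with a hypothetical direct term h(x, u) = x + tanh(u) (all functions and values below are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import brentq
from scipy.integrate import solve_ivp

k = 2.0  # proportional gain, u = -k y

def h(x, u):
    # Hypothetical output map with a direct term (illustration only)
    return x + np.tanh(u)

def solve_loop(x):
    # Resolve the algebraic loop: find y with y = h(x, -k y).
    # g(y) = y - h(x, -k y) is monotone increasing here, so the root
    # is unique and bracketed by [x - 1, x + 1] since |tanh| < 1.
    g = lambda y: y - h(x, -k * y)
    return brentq(g, x - 1.0, x + 1.0)

def rhs(t, x):
    y = solve_loop(x[0])       # algebraic equation solved at every step
    u = -k * y
    return [-x[0] + u]         # f(x, u) = -x + u, a simple stable choice

sol = solve_ivp(rhs, [0.0, 5.0], [1.0], rtol=1e-8)
print(sol.y[0, -1])            # state decays toward zero
```

This is essentially what equation-based modeling tools do automatically; solving a scalar root-finding problem at every integrator step is the price of the direct term.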
Resolving algebraic loops is a nontrivial problem because it requires the symbolic solution of algebraic equations. Most block-diagram-oriented modeling languages cannot handle algebraic loops, and they simply give a diagnosis that such loops are present. In the era of analog computing, algebraic loops were eliminated by introducing fast dynamics between the loops. This created differential equations with fast and slow modes that are difficult to solve numerically. Advanced modeling languages like Modelica use several sophisticated methods to resolve algebraic loops.

8.4 The Bode Plot

The frequency response of a linear system can be computed from its transfer function by setting s = iω, corresponding to a complex exponential

  u(t) = e^{iωt} = cos ωt + i sin ωt.

The resulting output has the form

  y(t) = G(iω)e^{iωt} = M e^{i(ωt + φ)} = M cos(ωt + φ) + iM sin(ωt + φ),

where M and φ are the gain and phase of G:

  M = |G(iω)|,   φ = arctan(Im G(iω) / Re G(iω)).

The phase of G is also called the argument of G, a term that comes from the theory of complex variables. It follows from linearity that the response to a single sinusoid (sin or cos) is amplified by M and phase-shifted by φ. Note that −π < φ ≤ π, so the arctangent must be taken respecting the signs of the numerator and denominator. It will often be convenient to represent the phase in degrees rather than radians. We will use the notation ∠G(iω) for the phase in degrees and arg G(iω) for the phase in radians. In addition, while we always take arg G(iω) to be in the range (−π, π], we will take ∠G(iω) to be continuous, so that it can take on values outside the range of −180° to 180°.

The frequency response G(iω) can thus be represented by two curves: the gain curve and the phase curve. The gain curve gives |G(iω)| as a function of frequency ω, and the phase curve gives ∠G(iω). One particularly useful way of drawing these curves is to use a log-log scale for the gain plot and a log-linear scale for the phase plot.
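The gain and continuous phase curves can be computed as described above: a four-quadrant arctangent respecting the signs of Im G and Re G, followed by unwrapping so the phase may leave the −180° to 180° range. A sketch, using an ideal PID controller C(s) = 20 + 10/s + 10s as the illustrative transfer function:

```python
import numpy as np

def freq_response(G, w):
    """Gain M = |G(iw)| and continuous phase (in degrees) of G."""
    Gw = G(1j * w)
    gain = np.abs(Gw)
    # arctan2 respects the signs of Im G and Re G; unwrap makes the
    # phase continuous so it can leave the -180..180 degree range.
    phase_deg = np.degrees(np.unwrap(np.arctan2(Gw.imag, Gw.real)))
    return gain, phase_deg

C = lambda s: 20 + 10 / s + 10 * s       # ideal PID controller
w = np.logspace(-2, 2, 500)
gain, phase = freq_response(C, w)
print(gain.min())   # minimum gain 20, attained near w = 1
```

For this controller the phase runs from near −90° at low frequency (integral action dominates) to near +90° at high frequency (derivative action dominates), with the minimum gain at the frequency where the two contributions cancel.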
This type of plot is called a Bode plot and is shown in Figure 8.11.

[Figure 8.11: Bode plot of the transfer function C(s) = 20 + 10/s + 10s, corresponding to an ideal PID controller. The top plot is the gain curve and the bottom plot is the phase curve. The dashed lines show straight-line approximations of the gain curve and the corresponding phase curve.]

Sketching and Interpreting Bode Plots

Part of the popularity of Bode plots is that they are easy to sketch and interpret. Since the frequency scale is logarithmic, they cover the behavior of a linear system over a wide frequency range.

Consider a transfer function that is a rational function of the form

  G(s) = b1(s)b2(s) / (a1(s)a2(s)).

We have

  log |G(s)| = log |b1(s)| + log |b2(s)| − log |a1(s)| − log |a2(s)|,

and hence we can compute the gain curve by simply adding and subtracting gains corresponding to terms in the numerator and denominator. Similarly,

  ∠G(s) = ∠b1(s) + ∠b2(s) − ∠a1(s) − ∠a2(s),

and so the phase curve can be determined in an analogous fashion. Since a polynomial can be written as a product of terms of the type

  k,   s,   s + a,   s^2 + 2ζω0 s + ω0^2,

it suffices to be able to sketch Bode diagrams for these terms. The Bode plot of a complex system is then obtained by adding the gains and phases of the terms.

[Figure 8.12: Bode plots of the transfer functions G(s) = s^k for k = −2, −1, 0, 1, 2. On a log-log scale the gain curve is a straight line with slope k. Using a log-linear scale, the phase curves for the transfer functions are constants, with phase equal to 90°·k.]

The simplest term in a transfer function is one of the form s^k, where k > 0 if the term appears in the numerator and k < 0 if the term is in the denominator. The gain and phase of the term are given by

  log |G(iω)| = k log ω,   ∠G(iω) = 90°·k.

The gain curve is thus a straight line with slope k, and the phase curve is a constant at 90°·k. The case when k = 1 corresponds to a differentiator and has slope 1 with phase 90°. The case when k = −1 corresponds to an integrator and has slope −1 with phase −90°. Bode plots of the various powers of k are shown in Figure 8.12.

Consider next the transfer function of a first-order system, given by

  G(s) = a/(s + a).

We have

  |G(s)| = |a| / |s + a|,   ∠G(s) = ∠a − ∠(s + a),

and hence

  log |G(iω)| = log a − (1/2) log(ω^2 + a^2),   ∠G(iω) = −(180/π) arctan(ω/a).

The Bode plot is shown in Figure 8.13a, with the magnitude normalized by the zero frequency gain. Both the gain curve and the phase curve can be approximated by straight lines.

[Figure 8.14: Asymptotic approximation to a Bode plot. The thin line is the Bode plot for the transfer function G(s) = k(s + b)/((s + a)(s^2 + 2ζω0 s + ω0^2)), where a < b < ω0. Each segment in the gain and phase curves represents a separate portion of the approximation, where either a pole or a zero begins to have effect. Each segment of the approximation is a straight line between these points, at a slope given by the rules for computing the effects of poles and zeros.]

...from the pole end, and we are left with a slope of 45°/decade from the zero. At the location of the second-order pole, s = iω0, we get a jump in phase of −180°. Finally, at s = 10b, the phase contributions of the zero end, and we are left with a phase of −180°. We see that the straight-line approximation for the phase is not as accurate as it was for the gain curve, but it does capture the basic features of the phase changes as a function of frequency.

The Bode plot gives a quick overview of a system. Since any signal can be decomposed into a sum of sinusoids, it is possible to visualize the behavior of a system for different frequency ranges. The system can be viewed as a filter that can change the amplitude and phase of the input signals according to the frequency response. For example, if there are frequency ranges where the gain curve has constant slope and the phase is close to zero, the action of the system for signals with these frequencies can be interpreted as a pure gain.
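The quality of the straight-line gain approximation for the first-order term can be checked numerically; the worst error occurs at the breakpoint ω = a, where the exact gain is 1/√2, i.e. about 3 dB below the asymptote. A sketch:

```python
import numpy as np

# Straight-line (asymptotic) gain approximation for G(s) = a/(s + a):
# gain 1 below the breakpoint w = a, slope -1 (gain a/w) above it.
a = 1.0
w = np.logspace(-2, 2, 400)
exact = a / np.sqrt(w**2 + a**2)          # |G(iw)|
approx = np.where(w < a, 1.0, a / w)      # two straight-line segments

# Error of the approximation in dB; maximal at the breakpoint,
# where |G(ia)| = 1/sqrt(2), i.e. about -3 dB.
err_db = 20 * np.log10(approx / exact)
print(err_db.max())   # close to 3 dB
```

Away from the breakpoint the asymptotes converge quickly to the exact curve, which is why Bode sketches built from these segments are so useful.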
Similarly, for frequencies where the slope is 1 and the phase is close to 90°, the action of the system can be interpreted as a differentiator, as shown in Figure 8.12.

Three common types of frequency responses are shown in Figure 8.15. The system in Figure 8.15a is called a low-pass filter because the gain is constant for low frequencies and drops for high frequencies. Notice that the phase is zero for low frequencies and −180° for high frequencies. The systems in Figure 8.15b and c are called a band-pass filter and a high-pass filter for similar reasons.

[Figure 8.15: Bode plots for low-pass, band-pass and high-pass filters:
  (a) low-pass: G(s) = ω0^2 / (s^2 + 2ζω0 s + ω0^2),
  (b) band-pass: G(s) = 2ζω0 s / (s^2 + 2ζω0 s + ω0^2),
  (c) high-pass: G(s) = s^2 / (s^2 + 2ζω0 s + ω0^2).
The top plots are the gain curves and the bottom plots are the phase curves. Each system passes frequencies in a different range and attenuates frequencies outside of that range.]

To illustrate how different system behaviors can be read from the Bode plots, we consider the band-pass filter in Figure 8.15b. For frequencies around ω = ω0, the signal is passed through with no change in gain. However, for frequencies well below or well above ω0, the signal is attenuated. The phase of the signal is also affected by the filter, as shown in the phase curve. For frequencies below a/100 there is a phase lead of 90°, and for frequencies above 100a there is a phase lag of 90°. These actions correspond to differentiation and integration of the signal in these frequency ranges.

Example 8.9 Transcriptional regulation. Consider a genetic circuit consisting of a single gene. We wish to study the response of the protein concentration to fluctuations in the mRNA dynamics. We consider two cases: a constitutive promoter (no regulation) and self-repression (negative feedback), illustrated in Figure 8.16. The dynamics of the system are given by

  dm/dt = α(p) − γm + v,   dp/dt = βm − δp,

where v is a disturbance term that affects mRNA transcription.

For the case of no feedback we have α(p) = α0, and the system has an equilibrium point at m_e = α0/γ, p_e = βα0/(δγ). The transfer function from v to p is given by

  G_pv^ol(s) = β / ((s + γ)(s + δ)).

For the case of negative regulation we have

  α(p) = α1/(1 + k p^n) + α0,

and the equilibrium points satisfy

  m_e = (δ/β) p_e,   α1/(1 + k p_e^n) + α0 = γ m_e = (γδ/β) p_e.

The resulting transfer function is given by

  G_pv^cl(s) = β / ((s + γ)(s + δ) + βσ),   σ = n α1 k p_e^{n−1} / (1 + k p_e^n)^2.

[Figure 8.16: Noise attenuation in a genetic circuit. The open loop system (a) consists of a constitutive promoter, while the closed loop circuit (b) is self-regulated with negative feedback (repressor). The frequency response for each circuit is shown in (c).]

Figure 8.16c shows the frequency response for the two circuits. We see that the feedback circuit attenuates the response of the system to disturbances with low-frequency content but slightly amplifies disturbances at high frequency compared to the open loop system. Notice that these curves are very similar to the frequency response curves for the op amp shown in Figure 8.3b.

Transfer Functions from Experiments

The transfer function of a system provides a summary of the input/output response and is very useful for analysis and design. However, modeling from first principles can be difficult and time-consuming. Fortunately, we can often build an input/output model for a given application by directly measuring the frequency response and fitting a transfer function to it. To do so, we perturb the input to the system using a sinusoidal signal at a fixed frequency. When steady state is reached, the amplitude ratio and the phase lag give the frequency response for the excitation frequency.
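The measurement at a single excitation frequency can be sketched as a correlation (quadrature demodulation) of the steady-state output with sine and cosine at that frequency. In this illustrative sketch the "measured" output is synthesized from a known first-order system so that the estimate can be checked:

```python
import numpy as np

# Estimate G(i w0) from input/output data at one excitation frequency
# by correlating the steady-state output with sin and cos.
w0 = 2.0
t = np.linspace(0, 200, 200000)

# Synthetic steady-state measurement from G(s) = 1/(s + 1) (known here
# only so the estimate can be verified).
G = 1.0 / (1j * w0 + 1.0)
y = np.abs(G) * np.sin(w0 * t + np.angle(G))

# Average over many periods; drop the first part of the record, as one
# would for a real measurement that contains transients.
mask = t > 50
ys, ts = y[mask], t[mask]
I = 2 * np.mean(ys * np.sin(w0 * ts))    # in-phase component, Re G
Q = 2 * np.mean(ys * np.cos(w0 * ts))    # quadrature component, Im G
G_est = I + 1j * Q
print(abs(G_est), np.angle(G_est))       # compare with |G| and arg G
```

Averaging over many periods rejects noise and harmonics, which is the essence of the correlation techniques mentioned below; sweeping w0 yields the complete frequency response, one point at a time.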
The complete frequency response is obtained by sweeping over a range of frequencies.

By using correlation techniques, it is possible to determine the frequency response very accurately, and an analytic transfer function can be obtained from the frequency response by curve fitting. The success of this approach has led to instruments and software that automate this process, called spectrum analyzers. We illustrate the basic concept through two examples.

Example 8.10 Atomic force microscope. To illustrate the utility of spectrum analysis, we consider the dynamics of the atomic force microscope, introduced in Section 3.5. Experimental determination of the frequency response is particularly attractive for this system because its dynamics are very fast, and hence experiments can be done quickly. A typical example is given in Figure 8.17, which shows an experimentally determined frequency response (solid line). In this case the frequency response was obtained in less than a second.

[Figure 8.17: Frequency response of a preloaded piezoelectric drive for an atomic force microscope. The Bode plot shows the response of the measured transfer function (solid) and the fitted transfer function (dashed).]

The transfer function

  G(s) = k ω2^2 ω3^2 ω5^2 (s^2 + 2ζ1ω1 s + ω1^2)(s^2 + 2ζ4ω4 s + ω4^2) e^{−sτ} / (ω1^2 ω4^2 (s^2 + 2ζ2ω2 s + ω2^2)(s^2 + 2ζ3ω3 s + ω3^2)(s^2 + 2ζ5ω5 s + ω5^2)),

with ωk = 2πfk and

  f1 = 2.42 kHz, ζ1 = 0.03,   f2 = 2.55 kHz, ζ2 = 0.03,   f3 = 6.45 kHz, ζ3 = 0.042,
  f4 = 8.25 kHz, ζ4 = 0.025,   f5 = 9.3 kHz, ζ5 = 0.032,   τ = 10^−4 s, k = 5,

was fit to the data (dashed line). The frequencies associated with the zeros are located where the gain curve has minima, and the frequencies associated with the poles are located where the gain curve has local maxima. The relative damping ratios are adjusted to give a good fit to maxima and minima. When a good fit to the gain curve is obtained, the time delay is adjusted to give a good fit to the phase
curve. The piezo drive is preloaded, and a simple model of its dynamics is derived in Exercise 3.7. The pole at 2.42 kHz corresponds to the trampoline mode derived in the exercise; the other resonances are higher modes.

Example 8.11 Pupillary light reflex dynamics. The human eye is an organ that is easily accessible for experiments. It has a control system that adjusts the pupil opening to regulate the light intensity at the retina. This control system was explored extensively by Stark in the 1960s [184]. To determine the dynamics, light intensity on the eye was varied sinusoidally and the pupil opening was measured. A fundamental difficulty is that the closed loop system is insensitive to internal system parameters, so analysis of a closed loop system thus...

Exercises

8.11 (Common poles). Consider a closed loop system of the form of Figure 8.7, with F = 1 and P and C having a common pole. Show that if each system is written in state space form, the resulting closed loop system is not reachable and not observable.

8.12 (Congestion control). Consider the congestion control model described in Section 3.4. Let w represent the individual window size for a set of N identical sources, q represent the end-to-end probability of a dropped packet, b represent the number of packets in the router's buffer and p represent the probability that a packet is dropped by the router. We write w̄ = Nw to represent the total number of packets being received from all N sources. Show that the linearized model can be described by the transfer functions

  G_bw̄(s) = e^{−τf s} / (τe s + e^{−τf s}),   G_w̄q(s) = −N / (qe τe s + qe we),   G_pb(s) = ρ,

where (we, be) is the equilibrium point for the system, τe is the steady-state round-trip time and τf is the forward propagation time.

8.13 (Inverted pendulum with PD control). Consider the normalized inverted pendulum system, whose transfer function is given by P(s) = 1/(s^2 − 1) (Exercise 8.3). A proportional-derivative control law for this system has transfer function C(s) = kp + kd s (see Table 8.1). Suppose that we choose C(s) = α(s − 1). Compute the closed loop
dynamics and show that the system has good tracking of reference signals but does not have good disturbance rejection properties.

8.14 (Vehicle suspension) [96]. Active and passive damping are used in cars to give a smooth ride on a bumpy road. A schematic diagram of a car with a damping system is shown in the figure below. (Photograph: Porter Class I race car driven by Todd Cuffaro.)

[Diagram: quarter car model with road height xr, wheel height xw and body height xb; the actuator exerts a force F between the wheel and the body.]

This model is called a quarter car model, and the car is approximated with two masses, one representing one fourth of the car body and the other a wheel. The actuator exerts a force F between the wheel and the body based on feedback from the distance between the body and the center of the wheel (the rattle space). Let xb, xw and xr represent the heights of body, wheel and road, measured from their equilibria. A simple model of the system is given by Newton's equations for the body and the wheel:

  mb ẍb = F,   mw ẍw = −F + kt(xr − xw),

where mb is a quarter of the body mass, mw is the effective mass of the wheel including brakes and part of the suspension system (the unsprung mass) and kt is the tire stiffness. For a conventional damper consisting of a spring and a damper we have

  F = k(xw − xb) + c(ẋw − ẋb).

For an active damper the force F can be more general and can also depend on riding conditions. Rider comfort can be characterized by the transfer function G_axr from road height xr to body acceleration a = ẍb. Show that this transfer function has the property

  G_axr(iωt) = kt/mb,   where ωt = √(kt/mw)

(the tire hop frequency). The equation implies that there are fundamental limitations to the comfort that can be achieved with any damper.

8.15 (Vibration absorber). Damping vibrations is a common engineering problem. A schematic diagram of a damper is shown below: the disturbing vibration is a sinusoidal force F acting on mass m1 (suspended by the spring k1 and damper c1), and the damper consists of the mass m2 and the spring k2. Show that the transfer function from disturbance force to height x1 of the mass m1 is

  G_x1F(s) = (m2 s^2 + k2) / (m1 m2 s^4 + m2 c1 s^3 + (m1 k2 + m2(k1 + k2))s^2 + k2 c1 s + k1 k2).

How should the mass m2 and the stiffness k2 be chosen to eliminate a sinusoidal oscillation with frequency ω0? More details on vibration absorbers are given in the classic text by Den Hartog [57, pp. 87–93].

Chapter Nine: Frequency Domain Analysis

  Mr. Black proposed a negative feedback repeater and proved by tests that it possessed the advantages which he had predicted for it. In particular, its gain was constant to a high degree, and it was linear enough so that spurious signals caused by the interaction of the various channels could be kept within permissible limits. For best results the feedback factor μβ had to be numerically much larger than unity. The possibility of stability with a feedback factor larger than unity was puzzling.
  — Harry Nyquist, "The Regeneration Theory", 1956 [161]

In this chapter we study how the stability and robustness of closed loop systems can be determined by investigating how sinusoidal signals of different frequencies propagate around the feedback loop. This technique allows us to reason about the closed loop behavior of a system through the frequency domain properties of the open loop transfer function. The Nyquist stability theorem is a key result that provides a way to analyze stability and introduce measures of degrees of stability.

9.1 The Loop Transfer Function

Determining the stability of systems interconnected by feedback can be tricky because each system influences the other, leading to potentially circular reasoning. Indeed, as the quote from Nyquist above illustrates, the behavior of feedback systems can often be puzzling. However, using the mathematical framework of transfer functions provides an elegant way to reason about such systems, which we call loop analysis.

The basic idea of loop analysis is to trace how a sinusoidal signal propagates in the feedback loop and explore the resulting stability by investigating if the propagated signal grows or decays. This is easy to do because the transmission of sinusoidal
signals through a linear dynamical system is characterized by the frequency response of the system. The key result is the Nyquist stability theorem, which provides a great deal of insight regarding the stability of a system. Unlike proving stability with Lyapunov functions, studied in Chapter 4, the Nyquist criterion allows us to determine more than just whether a system is stable or unstable: it provides a measure of the degree of stability through the definition of stability margins. The Nyquist theorem also indicates how an unstable system should be changed to make it stable, which we shall study in detail in Chapters 10–12.

Consider the system in Figure 9.1a. The traditional way to determine if the closed loop system is stable is to investigate if the closed loop characteristic polynomial has all its roots in the left half-plane. If the process and the controller have rational...

[Figure 9.4: Nyquist plot for a third-order transfer function. The Nyquist plot consists of a trace of the loop transfer function L(s) = 1/(s + a)^3. The solid line represents the portion of the transfer function along the positive imaginary axis, and the dashed line the negative imaginary axis. The outer arc of the D contour maps to the origin.]

...Nyquist D contour. This arc has the form s = R e^{iθ} for R → ∞. This gives

  L(R e^{iθ}) = 1/(R e^{iθ} + a)^3 → 0   as R → ∞.

Thus the outer arc of the D contour maps to the origin on the Nyquist plot.

An alternative to computing the Nyquist plot explicitly is to determine the plot from the frequency response (Bode plot), which gives the Nyquist curve for s = iω, ω > 0. We start by plotting G(iω) from ω = 0 to ω = ∞, which can be read off from the magnitude and phase of the transfer function. We then plot G(R e^{iθ}) with |θ| ≤ π/2 and R → ∞, which almost always maps to zero. The remaining parts of the plot can be determined by taking the mirror image of the curve thus far (normally plotted using a dashed line). The plot can then be labeled with arrows corresponding to a clockwise traversal around the D
contour, the same direction in which the first portion of the curve was plotted.

Example 9.3 Third-order system with a pole at the origin. Consider the transfer function

  L(s) = k / (s(s + 1)^2),

where the gain has the nominal value k = 1. The Bode plot is shown in Figure 9.5a. The system has a single pole at s = 0 and a double pole at s = −1. The gain curve of the Bode plot thus has the slope −1 for low frequencies, and at the double pole s = −1 the slope changes to −3. For small s we have L ≈ k/s, which means that the low-frequency asymptote intersects the unit gain line at ω = k. The phase curve starts at −90° for low frequencies, it is −180° at the breakpoint ω = 1 and it is −270° at high frequencies.

Having obtained the Bode plot, we can now sketch the Nyquist plot, shown in Figure 9.5b. It starts with a phase of −90° for low frequencies, intersects the negative real axis at the breakpoint ω = 1, where L(i) = −0.5, and goes to zero along...

[Figure 9.7: Nyquist curve for the loop transfer function L(s) = 3(s + 6)^2/(s(s + 1)^2). The plot on the right is an enlargement of the box around the origin of the plot on the left. The Nyquist curve intersects the negative real axis twice but has no net encirclements of −1.]

...greater than 1. In particular, for a fixed time delay, the system will become unstable as the link capacity c is increased. This indicates that the TCP protocol may not be scalable to high-capacity networks, as pointed out by Low et al. [137]. Exercise 9.7 provides some ideas of how this might be overcome.

Conditional Stability

Normally we find that unstable systems can be stabilized simply by reducing the loop gain. There are, however, situations where a system can be stabilized by increasing the gain. This was first encountered by electrical engineers in the design of feedback amplifiers, who coined the term conditional stability. The problem was actually a strong motivation for Nyquist to develop his theory. We will illustrate by an example.

Example 9.5 Third-order system. Consider a feedback system with the loop transfer function

  L(s) = 3(s + 6)^2 / (s(s + 1)^2).        (9.4)
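The points where this loop transfer function crosses the negative real axis can be located numerically by finding the frequencies where Im L(iω) changes sign; a minimal sketch (the bracketing intervals are illustrative choices):

```python
import numpy as np
from scipy.optimize import brentq

# Negative-real-axis crossings of L(iw) for the conditionally stable
# loop transfer function L(s) = 3(s + 6)^2 / (s(s + 1)^2).
L = lambda s: 3 * (s + 6)**2 / (s * (s + 1)**2)

def im_L(w):
    return L(1j * w).imag

# Frequencies where Im L(iw) changes sign (brackets chosen by inspection
# of the sign of Im L on a coarse grid).
crossings = [brentq(im_L, lo, hi) for lo, hi in [(1.0, 2.5), (2.5, 4.0)]]
print([round(L(1j * w).real, 3) for w in crossings])   # [-12.0, -4.5]
```

The two crossings land at ω = 2 and ω = 3, with L = −12 and L = −4.5 respectively, both to the left of the critical point −1.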
The Nyquist plot of the loop transfer function is shown in Figure 9.7. Notice that the Nyquist curve intersects the negative real axis twice. The first intersection occurs at L = −12 for ω = 2, and the second at L = −4.5 for ω = 3. The intuitive argument based on signal tracing around the loop in Figure 9.1b is strongly misleading in this case: injection of a sinusoid with frequency 2 rad/s and amplitude 1 at A gives, in steady state, an oscillation at B that is in phase with the input and has amplitude 12. Intuitively it seems unlikely that closing the loop will result in a stable system. However, it follows from Nyquist's stability criterion that the system is stable, because there are no net encirclements of the critical point. Note, however, that if we decrease the gain, then we can get an encirclement, implying that the gain must be sufficiently large for stability.

9.3 Stability Margins

[Figure 9.9: Stability margins. The gain margin g_m and phase margin φ_m are shown on the Nyquist plot (a) and the Bode plot (b). The gain margin corresponds to the smallest increase in gain that creates an encirclement, and the phase margin is the smallest change in phase that creates an encirclement. The Nyquist plot also shows the stability margin s_m, which is the shortest distance to the critical point −1.]

...is easy to plot the loop transfer function L(s). An increase in controller gain simply expands the Nyquist plot radially. An increase in the phase of the controller twists the Nyquist plot. Hence from the Nyquist plot we can easily pick off the amount of gain or phase that can be added without causing the system to become unstable.

Formally, the gain margin g_m of a system is defined as the smallest amount that the open loop gain can be increased before the closed loop system goes unstable. For a system whose phase decreases
monotonically as a function of frequency, starting at 0°, the gain margin can be computed based on the smallest frequency where the phase of the loop transfer function L(s) is −180°. Let ω_pc represent this frequency, called the phase crossover frequency. Then the gain margin for the system is given by

  g_m = 1 / |L(iω_pc)|.        (9.5)

Similarly, the phase margin is the amount of phase lag required to reach the stability limit. Let ω_gc be the gain crossover frequency, the smallest frequency where the loop transfer function L(s) has unit magnitude. Then, for a system with monotonically decreasing gain, the phase margin is given by

  φ_m = π + arg L(iω_gc).        (9.6)

These margins have simple geometric interpretations on the Nyquist diagram of the loop transfer function, as shown in Figure 9.9a, where we have plotted the portion of the curve corresponding to ω > 0. The gain margin is given by the inverse of the distance to the nearest point between −1 and 0 where the loop transfer function crosses the negative real axis. The phase margin is given by the smallest angle on the unit circle between −1 and the loop transfer function. When the gain or phase is monotonic, this geometric interpretation agrees with the formulas above.

[Figure 9.10: Stability margins for a third-order transfer function. The Nyquist plot on the left allows the gain, phase and stability margins to be determined by measuring the distances of relevant features. The gain and phase margins can also be read off of the Bode plot on the right.]

A drawback with gain and phase margins is that it is necessary to give both of them in order to guarantee that the Nyquist curve is not close to the critical point. An alternative way to express margins is by a single number, the stability margin s_m, which is the shortest distance from the Nyquist curve to the critical point. This number is related to disturbance attenuation, as will be discussed in Section 11.3. For many systems the
gain and phase margins can be determined from the Bode plot of the loop transfer function. To find the gain margin, we first find the phase crossover frequency ω_pc, where the phase is −180°. The gain margin is the inverse of the gain at that frequency. To determine the phase margin, we first determine the gain crossover frequency ω_gc, i.e. the frequency where the gain of the loop transfer function is 1. The phase margin is the phase of the loop transfer function at that frequency plus 180°. Figure 9.9b illustrates how the margins are found in the Bode plot of the loop transfer function. Note that the Bode plot interpretation of the gain and phase margins can be incorrect if there are multiple frequencies at which the gain is equal to 1 or the phase is equal to −180°.

Example 9.7 Third-order system. Consider a loop transfer function L(s) = 3/(s + 1)^3. The Nyquist and Bode plots are shown in Figure 9.10. To compute the gain, phase and stability margins, we can use the Nyquist plot shown in Figure 9.10. This yields the following values:

  g_m = 2.67,   φ_m = 41.7°,   s_m = 0.464.

The gain and phase margins can also be determined from the Bode plot.

The gain and phase margins are classical robustness measures that have been used for a long time in control system design. The gain margin is well defined if the Nyquist curve intersects the negative real axis once. Analogously, the phase margin is well defined if the Nyquist curve intersects the unit circle at only one point. Other, more general robustness measures will be introduced in Chapter 12.

[Figure 9.11: System with good gain and phase margins but a poor stability margin. Nyquist (a) and Bode (b) plots of the loop transfer function and step response (c) for a system with good gain and phase margins but with a poor stability margin. The Nyquist plot shows only the portion of the curve corresponding to ω > 0.]

Even if both the gain and phase margins are reasonable, the system may still not be robust, as is illustrated by the following example.
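The gain and phase margins quoted in Example 9.7 can be reproduced numerically from the Bode-plot definitions, and the stability margin from the distance to the critical point; a sketch using a dense frequency grid (interpolation details are illustrative):

```python
import numpy as np

# Margins for L(s) = 3/(s + 1)^3, the loop transfer function of Example 9.7.
L = lambda s: 3.0 / (s + 1)**3

w = np.logspace(-2, 2, 100000)
Lw = L(1j * w)
phase = np.unwrap(np.angle(Lw))            # continuous phase, in radians

# Phase crossover: phase = -pi (the exact answer is w_pc = sqrt(3)),
# giving g_m = (1 + 3)^(3/2)/3 = 8/3.
w_pc = np.interp(np.pi, -phase, w)         # -phase is increasing here
gm = 1.0 / abs(L(1j * w_pc))

# Gain crossover: |L| = 1 (the gain decreases monotonically).
w_gc = np.interp(-1.0, -np.abs(Lw), w)
pm_deg = 180.0 + np.degrees(np.angle(L(1j * w_gc)))

# Stability margin: shortest distance from the Nyquist curve to -1.
sm = np.min(np.abs(1.0 + Lw))
print(round(gm, 2), round(pm_deg, 1), round(sm, 2))
```

The computed gain and phase margins match the quoted 2.67 and 41.7°, and the minimum distance to −1 comes out near the quoted stability margin.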
Example 9.8 Good gain and phase margins but poor stability margins. Consider a system with the loop transfer function

  L(s) = 0.38(s^2 + 0.1s + 0.55) / (s(s + 1)(s^2 + 0.06s + 0.5)).

A numerical calculation gives the gain margin as g_m = 2.66, and the phase margin is 70°. These values indicate that the system is robust, but the Nyquist curve is still close to the critical point, as shown in Figure 9.11. The stability margin is s_m = 0.27, which is very low. The closed loop system has two resonant modes, one with damping ratio ζ = 0.81 and the other with ζ = 0.014. The step response of the system is highly oscillatory, as shown in Figure 9.11c.

The stability margin cannot easily be found from the Bode plot of the loop transfer function. There are, however, other Bode plots that will give s_m; these will be discussed in Chapter 12. In general, it is best to use the Nyquist plot to check stability, since this provides more complete information than the Bode plot.

When designing feedback systems, it will often be useful to define the robustness of the system using gain, phase and stability margins. These numbers tell us how much the system can vary from our nominal model and still be stable. Reasonable values of the margins are phase margin φ_m = 30°–60°, gain margin g_m = 2–5, and stability margin s_m = 0.5–0.8.

There are also other stability measures, such as the delay margin, which is the smallest time delay required to make the system unstable. For loop transfer functions that decay quickly, the delay margin is closely related to the phase margin, but for systems where the gain curve of the loop transfer function has several peaks at high frequencies, the delay margin is a more relevant measure.

[Figure 9.12: Nyquist and Bode plots of the loop transfer function for the AFM system (9.7) with an integral controller. The frequency in the Bode plot is normalized by a. The parameters are ζ = 0.01 and k_i = 0.008.]
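For a loop transfer function whose gain decays monotonically, the delay margin can be estimated as the phase margin (in radians) divided by the gain crossover frequency, since a delay τ subtracts phase ωτ. A minimal sketch for the third-order system of Example 9.7:

```python
import numpy as np

# Delay margin sketch for L(s) = 3/(s + 1)^3: the smallest delay tau
# such that e^{-i w tau} L(iw) reaches -1 is pm / w_gc (pm in radians),
# because the delay subtracts phase w*tau without changing the gain.
L = lambda s: 3.0 / (s + 1)**3

w = np.logspace(-2, 2, 100000)
mag = np.abs(L(1j * w))
w_gc = np.interp(-1.0, -mag, w)            # gain crossover, |L| = 1
pm = np.pi + np.angle(L(1j * w_gc))        # phase margin in radians
tau_d = pm / w_gc
print(tau_d)   # about 0.70: a delay of this size destroys stability
```

For loop transfer functions with gain peaks at high frequencies this single-crossover estimate is no longer sufficient, which is why the delay margin is then the more relevant measure, as noted above.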
Example 9.9 (Nanopositioning system for an atomic force microscope). Consider the system for horizontal positioning of the sample in an atomic force microscope. The system has oscillatory dynamics, and a simple model is a spring–mass system with low damping. The normalized transfer function is given by

P(s) = ω0^2 / (s^2 + 2ζω0 s + ω0^2),   (9.7)

where the damping ratio typically is a very small number, e.g., ζ = 0.1.

We will start with a controller that has only integral action. The resulting loop transfer function is

L(s) = ki ω0^2 / (s(s^2 + 2ζω0 s + ω0^2)),

where ki is the gain of the controller. Nyquist and Bode plots of the loop transfer function are shown in Figure 9.12. Notice that the part of the Nyquist curve that is close to the critical point −1 is approximately circular. From the Bode plot in Figure 9.12b we see that the phase crossover frequency is ωpc = ω0, which will be independent of the gain ki. Evaluating the loop transfer function at this frequency, we have L(iω0) = −ki/(2ζω0), which means that the gain margin is gm = 2ζω0/ki. To have a desired gain margin gm, the integral gain should be chosen as

ki = 2ζω0/gm.

Figure 9.12 shows Nyquist and Bode plots for the system with gain margin gm = 2.5 and stability margin sm = 0.597. The gain curve in the Bode plot is almost a straight line for low frequencies and has a resonant peak at ω = ω0. The gain crossover frequency is approximately equal to ki. The phase decreases monotonically from −90° to −270°; it is equal to −180° at ω = ω0. The curve can be shifted vertically by changing ki: increasing ki shifts the gain curve upward and increases the gain crossover frequency. Since the phase is −180° at the resonant peak, it is necessary that the peak not touch the line |L(iω)| = 1.

Figure 9.13: Bode plots of systems that are not minimum phase: (a) a time delay G(s) = e^(−sT), (b) a system with a right half-plane (RHP) zero G(s) = (a − s)/(a + s), and (c) a system with a right half-plane pole. The corresponding minimum phase system has the transfer function G(s) = 1 in all cases; the phase curves for that system are shown as dashed lines.
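The gain margin formula for the integral controller can be verified by evaluating L at the phase crossover ω = ω0; a sketch with the figure's parameters:

```python
# AFM model (9.7) under integral control: L(s) = ki*w0^2/(s*(s^2 + 2*z*w0*s + w0^2)).
# Parameters from the figure caption: z = 0.01, ki = 0.008, w0 normalized to 1.
z, w0, ki = 0.01, 1.0, 0.008

def L(s: complex) -> complex:
    return ki * w0**2 / (s * (s**2 + 2*z*w0*s + w0**2))

# At w = w0 the curve crosses the negative real axis: L(i*w0) = -ki/(2*z*w0).
Lw0 = L(1j * w0)
gm = 1 / abs(Lw0)

print(Lw0, gm)   # L(i*w0) = -0.4, so gm = 2*z*w0/ki = 2.5
```

The imaginary part of L(iω0) is exactly zero, confirming that the phase crossover sits at the resonance independently of ki.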
…system, and they do not depend on sensors and actuators; the zeros depend on how inputs and outputs of a system are coupled to the states. Zeros can thus be changed by moving sensors and actuators or by introducing new sensors and actuators. Nonminimum phase systems are unfortunately quite common in practice.

The following example gives a system-theoretic interpretation of the common experience that it is more difficult to drive in reverse gear, and illustrates some of the properties of transfer functions in terms of their poles and zeros.

Example 9.10 (Vehicle steering). The non-normalized transfer function from steering angle to lateral velocity for the simple vehicle model is

G(s) = (a v0 s + v0^2) / (b s),

where v0 is the velocity of the vehicle and a, b > 0; see Example 5.12. The transfer function has a zero at s = −v0/a. In normal driving this zero is in the left half-plane, but it is in the right half-plane when driving in reverse (v0 < 0). The unit step response is

y(t) = a v0/b + v0^2 t/b.

The lateral velocity thus responds immediately to a steering command. For reverse steering v0 is negative, and the initial response is in the wrong direction, a behavior that is representative for nonminimum phase systems, called an inverse response. Figure 9.14 shows the step response for forward and reverse driving. In this simulation we have added an extra pole with the time constant T to approximately account for the dynamics in the steering system. The parameters are a = b = 1, T = 0.1, with v0 = 1 for forward driving and v0 = −1 for reverse driving. Notice that for t > t0 = a/v0, where t0 is the time required to drive the distance a, the step response for reverse driving is that of forward driving with the time delay t0. …
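The inverse response in Example 9.10 follows directly from the step response formula y(t) = a v0/b + v0^2 t/b; a small numeric sketch:

```python
# Step response of G(s) = (a*v0*s + v0^2)/(b*s): an initial jump a*v0/b plus a
# ramp v0^2*t/b.  For reverse driving (v0 < 0) the jump has the wrong sign.
a, b = 1.0, 1.0

def y(t: float, v0: float) -> float:
    return a * v0 / b + v0**2 * t / b

forward = [y(0.1 * k, v0=1.0) for k in range(31)]    # v0 = 1: forward driving
reverse = [y(0.1 * k, v0=-1.0) for k in range(31)]   # v0 = -1: reverse driving

print(forward[0], reverse[0])   # 1.0 -1.0: reverse starts in the wrong direction
```

The reverse response crosses zero at t = a/|v0| = 1, consistent with the time-delay interpretation for t > t0 given in the text.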
…where the inverse is obtained after simple calculations. Figure 9.17b shows the response of the relay to a sinusoidal input, with the first harmonic of the output shown as a dashed line. Describing function analysis is illustrated in Figure 9.17c, which shows the Nyquist plot of the transfer function L(s) = 2/(s + 1)^4 (dashed line) and the negative inverse describing function of a relay with b = 1 and c = 0.5. The curves intersect for a = 1 and ω = 0.77 rad/s, indicating the amplitude and frequency for a possible oscillation if the process and the relay are connected in a feedback loop.

9.6 Further Reading

Nyquist's original paper giving his now famous stability criterion was published in the Bell Systems Technical Journal in 1932 [160]. More accessible versions are found in the book [27], which also includes other interesting early papers on control. Nyquist's paper is also reprinted in an IEEE collection of seminal papers on control [23]. Nyquist used +1 as the critical point, but Bode changed it to −1, which is now the standard notation. Interesting perspectives on early developments are given by Black [36], Bode [41] and Bennett [29]. Nyquist did a direct calculation based on his insight into the propagation of sinusoidal signals through systems; he did not use results from the theory of complex functions. The idea that a short proof can be given by using the principle of variation of the argument is presented in the delightful book by MacColl [140]. Bode made extensive use of complex function theory in his book [40], which laid the foundation for frequency response analysis, and where the notion of minimum phase was treated in detail. A good source for complex function theory is the classic by Ahlfors [6]. Frequency response analysis was a key element in the emergence of control theory, as described in the early texts by James et al. [110], Brown and Campbell [46] and Oldenburger [163], and it became one of the cornerstones of early control theory. Frequency response underwent a resurgence when robust control emerged in the 1980s, as will be discussed in Chapter 12.
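The intersection reported for the relay example (a ≈ 1, ω ≈ 0.77 rad/s) can be reproduced numerically; a sketch assuming the standard describing function of a relay with amplitude b and hysteresis c, for which −1/N(a) = −(π/(4b))(sqrt(a^2 − c^2) + ic):

```python
import math

b, c = 1.0, 0.5

def L(w: float) -> complex:
    return 2 / (1j * w + 1)**4

# Im(-1/N(a)) = -pi*c/(4b) is independent of a, so first locate the frequency
# where Im L(i*w) equals it, by bisection ...
target = -math.pi * c / (4 * b)
lo, hi = 0.3, 1.0                    # Im L - target changes sign on this interval
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if L(mid).imag < target:
        lo = mid
    else:
        hi = mid
w_osc = 0.5 * (lo + hi)

# ... then recover the oscillation amplitude from the real part.
a_osc = math.sqrt((-L(w_osc).real * 4 * b / math.pi)**2 + c**2)
print(round(w_osc, 2), round(a_osc, 2))
```

The computed pair should match the intersection read off the Nyquist plot in the text.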
Exercises

9.1 (Operational amplifier). Consider an op amp circuit with Z1 = Z2 that gives a closed loop system with nominally unit gain. Let the transfer function of the operational amplifier be

G(s) = k a1 a2 / ((s + a)(s + a1)(s + a2)),

where a1, a2 ≫ a. Show that the condition for oscillation is k = a1 + a2, and compute the gain margin of the system. (Hint: assume a = 0.)

9.2 (Atomic force microscope). The dynamics of the tapping mode of an atomic force microscope are dominated by the damping of the cantilever vibrations and the system that averages the vibrations. Modeling the cantilever as a spring–mass …

10.1 Basic Control Functions

Figure 10.2: Responses to step changes in the reference value for a system with a proportional controller (a), PI controller (b) and PID controller (c). The process has the transfer function P(s) = 1/(s + 1)^3, the proportional controller has parameters kp = 1, 2 and 5, the PI controller has parameters kp = 1 and ki = 0, 0.2, 0.5 and 1, and the PID controller has parameters kp = 2.5, ki = 1.5 and kd = 0, 1, 2 and 4.

…value. If we choose uff = r/P(0) = kr r, then the output will be exactly equal to the reference value, as it was in the state space case, provided that there are no disturbances. However, this requires exact knowledge of the process dynamics, which is usually not available. The parameter uff, called reset in the PID literature, must therefore be adjusted manually.

As we saw in Section 6.4, integral action guarantees that the process output agrees with the reference in steady state and provides an alternative to the feedforward term. Since this result is so important, we will provide a general proof. Consider the controller given by equation (10.1). Assume that there exists a steady state with u = u0 and e = e0. It then follows from equation (10.1) that u0 = kp e0 + ki e0 t, which is a contradiction unless e0 or ki is zero. We can thus conclude that with integral action the error will be zero if it reaches a steady state.
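The steady-state claim is easy to confirm in simulation for the process P(s) = 1/(s + 1)^3 of Figure 10.2: proportional control leaves the offset e = 1/(1 + kp P(0)), while integral action removes it. A forward-Euler sketch (gains illustrative):

```python
def final_error(kp: float, ki: float, t_end: float = 80.0, h: float = 0.002) -> float:
    """Simulate P(s) = 1/(s+1)^3 under PI control and return the final error."""
    x1 = x2 = x3 = integral = 0.0    # three first-order lags + controller integrator
    r = 1.0                          # unit reference step
    for _ in range(int(t_end / h)):
        e = r - x3
        integral += ki * e * h
        u = kp * e + integral
        x1 += h * (-x1 + u)
        x2 += h * (-x2 + x1)
        x3 += h * (-x3 + x2)
    return r - x3

e_p = final_error(kp=2.0, ki=0.0)    # pure proportional: offset 1/(1 + 2)
e_pi = final_error(kp=1.0, ki=0.5)   # PI: error goes to zero
print(round(e_p, 3), round(e_pi, 3))
```

The proportional loop settles at an error of 1/3, while the PI loop settles at zero, in line with the general proof above.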
Notice that we have not made any assumptions about the linearity of the process or the disturbances. We have, however, assumed that an equilibrium exists. Using integral action to achieve zero steady-state error is much better than using feedforward, which requires a precise knowledge of process parameters.

The effect of integral action can also be understood from frequency domain analysis. The transfer function of the PID controller is

C(s) = kp + ki/s + kd s.   (10.4)

The controller has infinite gain at zero frequency (C(0) = ∞), and it then follows from equation (10.2) that Gyr(0) = 1, which implies that there is no steady-state error. …

10.2 Simple Controllers for Complex Systems

Figure 10.5: Integral control for AFM in tapping mode. An integral controller is designed based on the slope of the process transfer function at 0. The controller gives good robustness properties based on a very simple analysis.

…system, we find that the integral gain is given by ki = 1/(Tcl P(0)). The analysis requires that Tcl be sufficiently large that the process transfer function can be approximated by a constant.

For systems that are not well represented by a constant gain, we can obtain a better approximation by using the Taylor series expansion of the loop transfer function:

L(s) = ki P(s)/s ≈ ki (P(0) + s P'(0))/s = ki P'(0) + ki P(0)/s.

Choosing ki |P'(0)| = 0.5 gives a system with good robustness, as will be discussed in Section 12.5. The controller gain is then given by

ki = −1/(2 P'(0)),   (10.6)

and the expected closed loop time constant is Tcl ≈ −2 P'(0)/P(0).

Example 10.2 (Integral control of AFM in tapping mode). A simplified model of the dynamics of the vertical motion of an atomic force microscope in tapping mode was discussed in Exercise 9.2. The transfer function for the system dynamics is

P(s) = a(1 − e^(−sτ)) / (sτ(s + a)),

where a = ζω0, τ = 2πn/ω0, and the gain has been normalized to 1. We have P(0) = 1 and P'(0) = −(τ/2 + 1/a), and it follows from (10.6) that the integral gain can be chosen as ki = a/(2 + aτ).
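The design rule (10.6) can be sanity-checked numerically for the tapping-mode model; a sketch with illustrative parameter values:

```python
import math

# P(s) = a*(1 - exp(-s*tau))/(s*tau*(s + a)) with a = zeta*w0, tau = 2*pi*n/w0.
zeta, w0, n = 0.01, 1.0, 20.0        # illustrative values, not from the book
a, tau = zeta * w0, 2 * math.pi * n / w0

def P(s: float) -> float:
    return a * (1 - math.exp(-s * tau)) / (s * tau * (s + a))

eps = 1e-6
P0 = P(eps)                          # P(0) = 1 in the limit s -> 0
dP0 = (P(2 * eps) - P(eps)) / eps    # finite-difference P'(0) = -(tau/2 + 1/a)
ki_numeric = -1 / (2 * dP0)          # equation (10.6)
ki_formula = a / (2 + a * tau)       # the closed form quoted in the text
print(P0, ki_numeric, ki_formula)
```

The numerically differentiated gain agrees with the closed-form expression ki = a/(2 + aτ).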
Nyquist and Bode plots for the resulting loop transfer function are shown in Figure 10.5.

CHAPTER 10: PID CONTROL

A first-order system has the transfer function P(s) = b/(s + a). With a PI controller, the closed loop system has the characteristic polynomial

s(s + a) + b kp s + b ki = s^2 + (a + b kp)s + b ki.

The closed loop poles can thus be assigned arbitrary values by proper choice of the controller gains. Requiring that the closed loop system have the characteristic polynomial p(s) = s^2 + a1 s + a2, we find that the controller parameters are

kp = (a1 − a)/b,   ki = a2/b.   (10.7)

If we require a response of the closed loop system that is slower than that of the open loop system, a reasonable choice is a1 = a + α and a2 = α a. If a response faster than that of the open loop system is required, it is reasonable to choose a1 = 2ζω0 and a2 = ω0^2, where ω0 and ζ are the undamped natural frequency and damping ratio of the dominant mode. These choices have significant impact on the robustness of the system and will be discussed in Section 12.4. An upper limit to ω0 is given by the validity of the model. Large values of ω0 will require fast control actions, and actuators may saturate if the value is too large. A first-order model is unlikely to represent the true dynamics for high frequencies. We illustrate the design by an example.

Example 10.3 (Cruise control using PI feedback). Consider the problem of maintaining the speed of a car as it goes up a hill. In Example 5.14 we found that there was little difference between the linear and nonlinear models when investigating PI control, provided that the throttle did not reach the saturation limits. A simple linear model of a car was given in Example 5.11:

d(v − ve)/dt = −a(v − ve) + b(u − ue) − g θ,   (10.8)

where v is the velocity of the car, u is the input from the engine and θ is the slope of the hill. The parameters were a = 0.0101, b = 1.3203, g = 9.8, ve = 20 and ue = 0.1616. This model will be used to find suitable parameters of a vehicle speed controller.
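The pole placement formulas (10.7) are easy to verify numerically; a sketch using the cruise control parameters:

```python
a, b = 0.0101, 1.3203            # cruise control model parameters from the text
zeta, w0 = 1.0, 0.5              # desired damping ratio and natural frequency
a1, a2 = 2 * zeta * w0, w0**2    # desired polynomial s^2 + a1*s + a2

# Equation (10.7): kp = (a1 - a)/b, ki = a2/b.
kp = (a1 - a) / b
ki = a2 / b

# The closed loop characteristic polynomial is s^2 + (a + b*kp)*s + b*ki,
# which should reproduce the desired coefficients a1 and a2.
print(a + b * kp, b * ki)
```

The recovered coefficients match a1 = 1.0 and a2 = 0.25 exactly, confirming that the two controller gains place both closed loop poles.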
The transfer function from throttle to velocity is a first-order system. Since the open loop dynamics is so slow, it is natural to specify a faster closed loop system by requiring that the closed loop system be of second order with damping ratio ζ and undamped natural frequency ω0. The controller gains are given by (10.7).

Figure 10.6 shows the velocity and the throttle for a car that initially moves on a horizontal road and encounters a hill with a slope of 4° at time t = 6 s. To design a PI controller, we choose ζ = 1 to obtain a response without overshoot, as shown in Figure 10.6a. The choice of ω0 is a compromise between response speed and control actions: a large value gives a fast response, but it requires fast control action. The tradeoff is illustrated in Figure 10.6b. The largest velocity error decreases with increasing ω0, but the control signal also changes more rapidly. In the simple model (10.8) it was assumed that the force responds instantaneously to throttle commands. For rapid changes there may be additional dynamics that have to be accounted for. There are also physical limitations to the rate of change of the force, which also restricts the admissible value of ω0. A reasonable choice of ω0 is in the range 0.5–1.0. Notice in Figure 10.6 that even with ω0 = 0.2 the largest velocity error is only 1 m/s.

Figure 10.6: Cruise control using PI feedback. The step responses for the error and input illustrate the effect of the parameters ζ and ω0 on the response of a car with cruise control. A change in road slope from 0° to 4° is applied between t = 5 and 6 s. (a) Responses for ω0 = 0.5 and ζ = 0.5, 1 and 2; choosing ζ = 1 gives no overshoot. (b) Responses for ζ = 1 and ω0 = 0.2, 0.5 and 1.0.
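A forward-Euler simulation of the linearized model (10.8) with the gains from (10.7) reproduces the qualitative behavior of Figure 10.6: a transient velocity error when the slope changes, which integral action then removes (model parameters and the 4° slope as in the text; the rest is a sketch):

```python
import math

a, b, g = 0.0101, 1.3203, 9.8
zeta, w0 = 1.0, 0.5
kp, ki = (2 * zeta * w0 - a) / b, w0**2 / b   # gains from (10.7)

h = 0.001
x = z = 0.0              # x = v - ve (velocity error), z = integral of -x
peak = 0.0
for k in range(int(40.0 / h)):
    t = k * h
    theta = math.radians(4.0) if t >= 6.0 else 0.0   # hill starts at t = 6 s
    u = -kp * x + ki * z                             # PI feedback on the error
    x += h * (-a * x + b * u - g * theta)
    z += h * (-x)
    peak = max(peak, -x)                             # largest velocity drop

print(round(peak, 2), round(abs(x), 4))
```

With ζ = 1 the peak drop is approximately g θ/(ω0 e), about half a meter per second here, and the final error returns to zero.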
A PI controller can also be used for a process with second-order dynamics, but there will be restrictions on the possible locations of the closed loop poles. Using a PID controller, it is possible to control a system of second order in such a way that the closed loop poles have arbitrary locations; see Exercise 10.2.

Instead of finding a low-order model and designing controllers for it, we can also use a high-order model and attempt to place only a few dominant poles. An integral controller has one parameter, and it is possible to position one pole. Consider a process with the transfer function P(s). The loop transfer function with an integral controller is L(s) = ki P(s)/s. The roots of the closed loop characteristic polynomial are the roots of s + ki P(s) = 0. Requiring that s = −a be a root, we find that the controller gain should be chosen as

ki = a/P(−a).   (10.9)

The pole s = −a will be dominant if a is small. A similar approach can be applied to PI and PID controllers.

Figure 10.7: Ziegler–Nichols step and frequency response experiments. The unit step response in (a) is characterized by the parameters a and τ. The frequency response method (b) characterizes process dynamics by the point where the Nyquist curve of the process transfer function first intersects the negative real axis and the frequency ωc where this occurs.

10.3 PID Tuning

Users of control systems are frequently faced with the task of adjusting the controller parameters to obtain a desired behavior. There are many different ways to do this. One approach is to go through the conventional steps of modeling and control design as described in the previous section. Since the PID controller has so few parameters, a number of special empirical methods have also been developed for direct adjustment of the controller parameters. The first tuning rules were developed by Ziegler and Nichols [210]. Their idea was to perform a simple experiment, extract some features of process dynamics from the experiment, and determine the controller parameters from the features.

Ziegler–Nichols Tuning
In the 1940s, Ziegler and Nichols developed two methods for controller tuning based on simple characterization of process dynamics in the time and frequency domains.

The time domain method is based on a measurement of part of the open loop unit step response of the process, as shown in Figure 10.7a. The step response is measured by applying a unit step input to the process and recording the response. The response is characterized by parameters a and τ, which are the intercepts of the steepest tangent of the step response with the coordinate axes. The parameter τ is an approximation of the time delay of the system, and a/τ is the steepest slope of the step response. Notice that it is not necessary to wait until steady state is reached to find the parameters; it suffices to wait until the response has had an inflection point. The controller parameters are given in Table 10.1. The parameters were obtained by extensive simulation of a range of representative processes. A controller was tuned manually for each process, and an attempt was then made to correlate the controller parameters with a and τ.

Table 10.1: Ziegler–Nichols tuning rules. (a) The step response method gives the parameters in terms of the intercept a and the apparent time delay τ. (b) The frequency response method gives the controller parameters in terms of the critical gain kc and the critical period Tc.

(a) Step response method
Type   kp      Ti    Td
P      1/a
PI     0.9/a   3τ
PID    1.2/a   2τ    0.5τ

(b) Frequency response method
Type   kp       Ti       Td
P      0.5 kc
PI     0.4 kc   0.8 Tc
PID    0.6 kc   0.5 Tc   0.125 Tc

In the frequency domain method, a controller is connected to the process, the integral and derivative gains are set to zero, and the proportional gain is increased until the system starts to oscillate. The critical value of the proportional gain kc is observed together with the period of oscillation Tc. It follows from Nyquist's stability criterion that the loop transfer function L = kc P(s) intersects the critical point at the frequency ωc = 2π/Tc. The experiment thus gives the point on the Nyquist curve of the process transfer function where the phase lag is 180°, as shown in Figure 10.7b.
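Table 10.1 translates directly into a lookup function; a sketch (tuple entries are kp, Ti, Td, with None where the rule leaves the term out):

```python
def zn_step(kind: str, a: float, tau: float):
    """Ziegler-Nichols step response rules: intercept a, apparent delay tau."""
    return {"P":   (1.0 / a, None, None),
            "PI":  (0.9 / a, 3 * tau, None),
            "PID": (1.2 / a, 2 * tau, 0.5 * tau)}[kind]

def zn_freq(kind: str, kc: float, Tc: float):
    """Ziegler-Nichols frequency response rules: critical gain kc, period Tc."""
    return {"P":   (0.5 * kc, None, None),
            "PI":  (0.4 * kc, 0.8 * Tc, None),
            "PID": (0.6 * kc, 0.5 * Tc, 0.125 * Tc)}[kind]

print(zn_step("PID", a=0.5, tau=1.0))   # (2.4, 2.0, 0.5)
print(zn_freq("PID", kc=4.0, Tc=2.0))   # (2.4, 1.0, 0.25)
```

As the text cautions, these values are best treated as initial conditions for manual tuning rather than final settings.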
The Ziegler–Nichols methods had a huge impact when they were introduced in the 1940s. The rules were simple to use and gave initial conditions for manual tuning. The ideas were adopted by manufacturers of controllers for routine use. The Ziegler–Nichols tuning rules unfortunately have two severe drawbacks: too little process information is used, and the closed loop systems that are obtained lack robustness.

The step response method can be improved significantly by characterizing the unit step response by the parameters K, τ and T in the model

P(s) = K e^(−τs) / (1 + sT).   (10.10)

The parameters can be obtained by fitting the model to a measured step response. Notice that the experiment takes a longer time than the experiment in Figure 10.7a, because to determine K it is necessary to wait until the steady state has been reached. Also notice that the intercept a in the Ziegler–Nichols rule is given by a = Kτ/T.

The frequency response method can be improved by measuring more points on the Nyquist curve, e.g., the zero frequency gain K or the point where the process has a 90° phase lag. This latter point can be obtained by connecting an integral controller and increasing its gain until the system reaches the stability limit. The experiment can also be automated by using relay feedback, as will be discussed later in this section.

There are many versions of improved tuning rules. As an illustration, we give …

… Having obtained the critical gain Kc and the critical period Tc, the controller parameters can then be determined using the Ziegler–Nichols rules. Improved tuning can be obtained by fitting a model to the data obtained from the relay experiment.

The relay experiment can be automated. Since the amplitude of the oscillation is proportional to the relay output, it is easy to control it by adjusting the relay output. Automatic tuning based on relay feedback is used in many commercial PID controllers. Tuning is accomplished simply by pushing a button that activates relay feedback.
The relay amplitude is automatically adjusted to keep the oscillations sufficiently small, and the relay feedback is switched to a PID controller as soon as the tuning is finished.

10.4 Integrator Windup

Many aspects of a control system can be understood from linear models. There are, however, some nonlinear phenomena that must be taken into account. These are typically limitations in the actuators: a motor has limited speed, a valve cannot be more than fully opened or fully closed, etc. For a system that operates over a wide range of conditions, it may happen that the control variable reaches the actuator limits. When this happens, the feedback loop is broken and the system runs in open loop, because the actuator remains at its limit independently of the process output as long as the actuator remains saturated. The integral term will also build up, since the error is typically nonzero. The integral term and the controller output may then become very large. The control signal will then remain saturated even when the error changes, and it may take a long time before the integrator and the controller output come inside the saturation range. The consequence is that there are large transients. This situation is referred to as integrator windup, illustrated in the following example.

Example 10.5 (Cruise control). The windup effect is illustrated in Figure 10.10a, which shows what happens when a car encounters a hill that is so steep (6°) that the throttle saturates when the cruise controller attempts to maintain speed. When encountering the slope at time t = 5, the velocity decreases and the throttle increases to generate more torque. However, the torque required is so large that the throttle saturates. The error decreases slowly, because the torque generated by the engine is just a little larger than the torque required to compensate for gravity. The error is large, and the integral continues to build up until the error reaches zero at time t = 30, but the controller output is still larger than the saturation limit and the actuator remains saturated.
The integral term then starts to decrease, and at time t = 45 the velocity settles quickly to the desired value. Notice that it takes considerable time before the controller output comes into the range where it does not saturate, resulting in a large overshoot.

There are many methods to avoid windup. One method is illustrated in Figure 10.11: the system has an extra feedback path that is generated by measuring the actual actuator output, or the output of a mathematical model of the saturating actuator, and forming an error signal es as the difference between the output of the controller v and the actuator output u. The signal es is fed to the input of the integrator through the gain kt. The signal es is zero when there is no saturation, and the extra feedback loop then has no effect on the system. When the actuator saturates, the signal es is fed back to the integrator in such a way that es goes toward zero. This implies that the controller output is kept close to the saturation limit. The controller output will then change as soon as the error changes sign, and integral windup is avoided. The rate at which the controller output is reset is governed by the feedback gain kt; a large value of kt gives a short reset time. The parameter kt cannot be too large, because measurement noise can then cause an undesirable reset.

Figure 10.10: Simulation of PI cruise control with windup (a) and anti-windup (b). The figure shows the speed v and the throttle u for a car that encounters a slope that is so steep that the throttle saturates. The controller output is a dashed line. The controller parameters are kp = 0.5 and ki = 0.1. The anti-windup compensator eliminates the overshoot by preventing the error from building up in the integral term of the controller.
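The windup mechanism and the tracking fix are easy to reproduce on a toy loop; a sketch with an illustrative first-order process and limits (not the book's cruise control model):

```python
def run(kt: float, kp: float = 2.0, ki: float = 2.0, r: float = 0.8,
        h: float = 0.001, t_end: float = 20.0):
    """PI control of dy/dt = -y + sat(u) with limits +-1; kt = 0 disables
    the anti-windup tracking path of Figure 10.11."""
    y = integral = 0.0
    peak = 0.0
    for _ in range(int(t_end / h)):
        e = r - y
        v = kp * e + integral                     # unsaturated controller output
        u = max(-1.0, min(1.0, v))                # actuator saturation
        integral += h * (ki * e + kt * (u - v))   # es = u - v feeds the integrator
        y += h * (-y + u)
        peak = max(peak, y)
    return peak, y

peak_windup, y_w = run(kt=0.0)
peak_aw, y_a = run(kt=2.0)
print(peak_windup, peak_aw)   # the tracking path gives the smaller overshoot
```

Both loops settle at the setpoint, but without the tracking term the integral keeps growing while the actuator is pinned at its limit, producing the larger overshoot.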
A reasonable choice is to choose kt as a fraction of 1/Ti. We illustrate how integral windup can be avoided by investigating the cruise control system.

Example 10.6 (Cruise control with anti-windup). Figure 10.10b shows what happens when a controller with anti-windup is applied to the system simulated in Figure 10.10a. Because of the feedback from the actuator model, the output of the integrator is quickly reset to a value such that the controller output is at the saturation limit. The behavior is drastically different from that in Figure 10.10a, and the large overshoot is avoided. The tracking gain is kt = 2 in the simulation.

Figure 10.12: Time and frequency responses for PI cruise control with setpoint weighting. Step responses are shown in (a) and the gain curves of the frequency responses in (b). The controller gains are kp = 0.74 and ki = 0.19. The setpoint weights are β = 0, 0.5 and 1, and γ = 0.

…and the output voltage u. The impedances are given by

Z1(s) = R1/(1 + R1C1 s),   Z2(s) = R2 + 1/(C2 s),

and we find the following relation between the input voltage e and the output voltage u:

u = (Z2/Z1) e = (R2/R1) (1 + R1C1 s)(1 + R2C2 s)/(R2C2 s) e.

This is the input–output relation for a PID controller of the form (10.1) with parameters kp = R2/R1, Ti = R2C2 and Td = R1C1.

Figure 10.13: Schematic diagrams for PI and PID controllers using op amps. The circuit in (a) uses a capacitor in the feedback path to store the integral of the error. The circuit in (b) adds a filter on the input to provide derivative action.

…which can be rewritten as

D(tk) = (Tf/(Tf + h)) D(tk−1) − (kd/(Tf + h)) (y(tk) − y(tk−1)).   (10.17)

The advantage of using a backward difference is that the parameter Tf/(Tf + h) is nonnegative and less than 1 for all h > 0, which guarantees that the difference equation is stable.
Reorganizing equations (10.15)–(10.17), the PID controller can be described by the following pseudocode:

  % Precompute controller coefficients
  bi = ki*h
  ad = Tf/(Tf + h)
  bd = kd/(Tf + h)
  br = h/Tt

  % Control algorithm - main loop
  while (running)
    r = adin(ch1)                    % read setpoint from ch1
    y = adin(ch2)                    % read process variable from ch2
    P = kp*(b*r - y)                 % compute proportional part
    D = ad*D - bd*(y - yold)         % update derivative part
    v = P + I + D                    % compute temporary output
    u = sat(v, ulow, uhigh)          % simulate actuator saturation
    daout(ch1, u)                    % set analog output ch1
    I = I + bi*(r - y) + br*(u - v)  % update integral
    yold = y                         % update old process output
    sleep(h)                         % wait until next update interval

Precomputation of the coefficients bi, ad, bd and br saves computer time in the main loop. These calculations have to be done only when controller parameters are changed. The main loop is executed once every sampling period. The program has three states: yold, I and D. One state variable can be eliminated at the cost of less readable code. The latency between reading the analog input and setting the analog output consists of four multiplications, four additions and an evaluation of the sat function. All computations can be done using fixed-point calculations if necessary. Notice that the code computes the filtered derivative of the process output and that it has setpoint weighting and anti-windup protection.

10.6 Further Reading

The history of PID control is very rich and stretches back to the beginning of the foundation of control theory. Very readable treatments are given by Bennett [28, 29] and Mindell [152]. The Ziegler–Nichols rules for tuning PID controllers, first presented in 1942 [210], were developed based on extensive experiments with pneumatic simulators and Vannevar Bush's differential analyzer at MIT. An interesting view of the development of the Ziegler–Nichols rules is given in an interview with Ziegler [39]. An industrial perspective on PID control is given in [33], [180] and [205], and in the paper [58] cited in the beginning of this chapter. A comprehensive presentation of PID control is given in [16]. Interactive learning tools for PID control can be downloaded from http://www.calerga.com/contrib.
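The PID pseudocode of the previous section transcribes almost line for line into Python; here is a sketch closed around a simulated first-order process in place of the adin/daout calls (tuning and process are illustrative):

```python
kp, ki, kd = 2.0, 1.0, 0.1         # controller gains
Tf, Tt, beta = 0.1, 1.0, 1.0       # derivative filter, tracking time, setpoint weight
h, ulow, uhigh = 0.01, -10.0, 10.0

# Precompute controller coefficients, as in the pseudocode.
bi = ki * h
ad = Tf / (Tf + h)
bd = kd / (Tf + h)
br = h / Tt

y = yold = I = D = 0.0
r = 1.0                            # setpoint
for _ in range(2000):              # 20 s of closed loop operation
    P = kp * (beta * r - y)        # proportional part with setpoint weighting
    D = ad * D - bd * (y - yold)   # filtered derivative of the process output
    v = P + I + D                  # temporary output
    u = max(ulow, min(uhigh, v))   # actuator saturation
    I = I + bi * (r - y) + br * (u - v)   # integral update with tracking
    yold = y
    y += h * (-y + u)              # simulated process dy/dt = -y + u

print(round(y, 3))                 # settles near the setpoint
```

The three controller states (yold, I, D) and the precomputed coefficients mirror the pseudocode exactly; only the I/O calls are replaced by the process model.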
Exercises

10.1 (Ideal PID controllers). Consider the systems represented by the block diagrams in Figure 10.1. Assume that the process has the transfer function P(s) = b/(s + a), and show that the transfer functions from r to y are

(a) Gyr(s) = (b kd s^2 + b kp s + b ki) / ((1 + b kd)s^2 + (a + b kp)s + b ki),

(b) Gyr(s) = b ki / ((1 + b kd)s^2 + (a + b kp)s + b ki).

Pick some parameters and compare the step responses of the systems.

10.2 Consider a second-order process with the transfer function

P(s) = b / (s^2 + a1 s + a2).

The closed loop system with a PI controller is a third-order system. Show that it is possible to position the closed loop poles as long as the sum of the poles is −a1. Give equations for the parameters that give the closed loop characteristic polynomial

(s + α0)(s^2 + 2ζ0ω0 s + ω0^2).

10.3 Consider a system with the transfer function P(s) = (s + 1)^(−2). Find an integral controller that gives a closed loop pole at s = −a, and determine the value of a that maximizes the integral gain. Determine the other poles of the system and judge if the pole can be considered dominant. Compare with the value of the integral gain given by equation (10.6).

10.4 (Ziegler–Nichols tuning). Consider a system with the transfer function P(s) = e^(−s)/s. Determine the parameters of P, PI and PID controllers using the Ziegler–Nichols step and frequency response methods. Compare the parameter values obtained by the different rules and discuss the results.

10.5 (Vehicle steering). Design a proportional-integral controller for the vehicle steering system that gives the closed loop characteristic polynomial

s^3 + 2ω0 s^2 + 2ω0^2 s + ω0^3.

10.6 (Congestion control). A simplified flow model for TCP transmission is derived in [101, 137]. The linearized dynamics are modeled by the transfer function

Gqp(s) = b e^(−sτe) / ((s + a1)(s + a2)), …

Chapter Eleven: Frequency Domain Design

"Sensitivity improvements in one frequency range must be paid for with sensitivity deteriorations in another frequency range, and the price is higher if the plant is open-loop unstable. This applies to every controller, no matter how it was designed."
Gunter Stein, in the inaugural IEEE Bode Lecture, 1989 [185]
In this chapter we continue to explore the use of frequency domain techniques, with a focus on the design of feedback systems. We begin with a more thorough description of the performance specifications for control systems and then introduce the concept of loop shaping as a mechanism for designing controllers in the frequency domain. We also introduce some fundamental limitations to performance for systems with time delays and right half-plane poles and zeros.

11.1 Sensitivity Functions

In the previous chapter we considered the use of proportional-integral-derivative (PID) feedback as a mechanism for designing a feedback controller for a given process. In this chapter we will expand our approach to include a richer repertoire of tools for shaping the frequency response of the closed loop system. One of the key ideas in this chapter is that we can design the behavior of the closed loop system by focusing on the open loop transfer function. This same approach was used in studying stability using the Nyquist criterion: we plotted the Nyquist plot for the open loop transfer function to determine the stability of the closed loop system. From a design perspective, the use of loop analysis tools is very powerful: since the loop transfer function is L = PC, if we can specify the desired performance in terms of properties of L, we can directly see the impact of changes in the controller C. This is much easier, for example, than trying to reason directly about the tracking response of the closed loop system, whose transfer function is given by Gyr = PC/(1 + PC).

We will start by investigating some key properties of the feedback loop. A block diagram of a basic feedback loop is shown in Figure 11.1. The system loop is composed of two components: the process and the controller. The controller itself has two blocks: the feedback block C and the feedforward block F. There are two disturbances acting on the process: the load disturbance d and the measurement noise n. The load disturbance represents disturbances that drive the process away from its desired behavior, while the measurement noise represents disturbances that corrupt information about the process given by the sensors. In the figure, the load disturbance …
process the load disturbance d and the measurement noise n The load disturbance represents disturbances that drive the process away from its desired behavior while the measurement noise represents disturbances that corrupt infor mation about the process given by the sensors In the figure the load disturbance 318 CHAPTER 11 FREQUENCY DOMAIN DESIGN P z w C y u Figure 112 A more general representation of a feedback system The process input u represents the control signal which can be manipulated and the process input w represents other signals that influence the process The process output y is the vector of measured variables and z are other signals of interest The feedforward part F of the controller influences only the response to command signals In Chapter 9 we focused on the loop transfer function and we found that its properties gave a useful insight into the properties of a system To make a proper assessment of a feedback system it is necessary to consider the properties of all the transfer functions 112 in the Gang of Six or the Gang of Four as illustrated in the following example Example 111 The loop transfer function gives only limited insight Consider a process with the transfer function Ps 1s a controlled by a PI controller with error feedback having the transfer function Cs ks as The loop transfer function is L ks and the sensitivity functions are T PC 1 PC k s k PS P 1 PC s s as k CS C 1 PC ks a s k S 1 1 PC s s k Notice that the factor s a is canceled when computing the loop transfer function and that this factor also does not appear in the sensitivity function or the comple mentary sensitivity function However cancellation of the factor is very serious if a 0 since the transfer function PS relating load disturbances to process output is then unstable In particular a small disturbance d can lead to an unbounded output which is clearly not desirable The system in Figure 111 represents a special case because it is assumed that the load disturbance enters 
at the process input and that the measured output is the sum of the process variable and the measurement noise. Disturbances can enter in many different ways, and the sensors may have dynamics. A more abstract way to capture the general case is shown in Figure 11.2, which has only two blocks representing the process (P) and the controller (C). The process has two inputs, the control signal u and a vector of disturbances w, and two outputs, the measured signal y and a vector of signals z that is used to specify performance. The system in Figure 11.1 can be captured by choosing w = (d, n) and z = (η, ν, e, ϵ). The process transfer function P is a 4 × 3 matrix, and the controller transfer function C is a 1 × 2 matrix; compare with Exercise 11.3.

[…] error signal is zero, and there will be no feedback action. If there are disturbances or modeling errors, the signals ym and y will differ. The feedback then attempts to bring the error to zero. To make a formal analysis, we compute the transfer function from reference input to process output:

G_yr(s) = P(C F_m + F_u)/(1 + PC) = F_m + (P F_u − F_m)/(1 + PC),   (11.4)

where P = P2 P1. The first term represents the desired transfer function. The second term can be made small in two ways: feedforward compensation can be used to make P F_u − F_m small, or feedback compensation can be used to make 1 + PC large. Perfect feedforward compensation is obtained by choosing

F_u = F_m/P.   (11.5)

Design of feedforward using transfer functions is thus a very simple task. Notice that the feedforward compensator F_u contains an inverse model of the process dynamics.

Feedback and feedforward have different properties. Feedforward action is obtained by matching two transfer functions, requiring precise knowledge of the process dynamics, while feedback attempts to make the error small by dividing it by a large quantity. For a controller having integral action, the loop gain is large for low frequencies, and it is thus sufficient to make sure that the condition for ideal feedforward holds at higher frequencies. This
is easier than trying to satisfy the condition (11.5) for all frequencies.

We will now consider reduction of the effects of the load disturbance d in Figure 11.3 by feedforward control. We assume that the disturbance signal is measured and that the disturbance enters the process dynamics in a known way, captured by P1 and P2. The effect of the disturbance can be reduced by feeding the measured signal through a dynamical system with the transfer function F_d. Assuming that the reference r is zero, we can use block diagram algebra to find that the transfer function from the disturbance to the process output is

G_yd = P2 (1 − F_d P1)/(1 + PC),   (11.6)

where P = P1 P2. The effect of the disturbance can be reduced by making 1 − F_d P1 small (feedforward) or by making 1 + PC large (feedback). Perfect compensation is obtained by choosing

F_d = P1^(−1),   (11.7)

requiring inversion of the transfer function P1.

As in the case of reference tracking, disturbance attenuation can be accomplished by combining feedback and feedforward control. Since low-frequency disturbances can be eliminated by feedback, we require the use of feedforward only for high-frequency disturbances, and the transfer function F_d in equation (11.7) can then be computed using an approximation of P1 for high frequencies.

11.2 FEEDFORWARD DESIGN

Figure 11.4 (Feedforward control for vehicle steering): the overhead view (a) shows the trajectory generated by the controller for changing lanes. The plots of position and steering (b) show the lateral deviation y (top) and the steering angle δ (bottom) for a smooth lane change control using feedforward based on the linearized model.

Equations (11.5) and (11.7) give analytic expressions for the feedforward compensator. To obtain a transfer function that can be implemented without difficulties, we require that the feedforward compensator be stable and that it does not require differentiation. Therefore there may be constraints on possible choices of the desired
response F_m, and approximations are needed if the process has zeros in the right half-plane or time delays.

Example 11.2 (Vehicle steering). A linearized model for vehicle steering was given in Example 6.4. The normalized transfer function from steering angle δ to lateral deviation y is P(s) = (γs + 1)/s². For a lane transfer system we would like to have a nice response without overshoot, and we therefore choose the desired response as F_m(s) = a²/(s + a)², where the response speed or aggressiveness of the steering is governed by the parameter a. Equation (11.5) gives

F_u = F_m/P = a² s²/((γs + 1)(s + a)²),

which is a stable transfer function as long as γ > 0. Figure 11.4 shows the responses of the system for a = 0.5. The figure shows that a lane change is accomplished in about 10 vehicle lengths, with smooth steering angles. The largest steering angle is slightly larger than 0.1 rad (6°). Using the scaled variables, the curve showing lateral deviations (y as a function of t) can also be interpreted as the vehicle path (y as a function of x), with the vehicle length as the length unit.

A major advantage of controllers with two degrees of freedom that combine feedback and feedforward is that the control design problem can be split in two parts. The feedback controller C can be designed to give good robustness and effective disturbance attenuation, and the feedforward part can be designed independently to give the desired response to command signals.

11.3 Performance Specifications

A key element of the control design process is how we specify the desired performance of the system. It is also important for users to understand performance specifications so that they know what to ask for and how to test a system. Specifications are often given in terms of robustness to process variations and responses to reference signals and disturbances. They can be given in terms of both time and frequency responses. Specifications for the step response to reference signals were given in Figure 5.9 in Section 5.3 and in
Section 6.3. Robustness specifications based on frequency domain concepts were provided in Section 9.3 and will be considered further in Chapter 12. The specifications discussed previously were based on the loop transfer function. Since we found in Section 11.1 that a single transfer function did not always characterize the properties of the closed loop completely, we will give a more complete discussion of specifications in this section, based on the full Gang of Six.

The transfer function gives a good characterization of the linear behavior of a system. To provide specifications, it is desirable to capture the characteristic properties of a system with a few parameters. Common features for time responses are overshoot, rise time, and settling time, as shown in Figure 5.9. Common features of frequency responses are resonant peak, peak frequency, gain crossover frequency, and bandwidth. A resonant peak is a maximum of the gain, and the peak frequency is the corresponding frequency. The gain crossover frequency is the frequency where the open loop gain is equal to one. The bandwidth is defined as the frequency range where the closed loop gain is 1/√2 of the low-frequency gain (low-pass), mid-frequency gain (bandpass), or high-frequency gain (high-pass). There are interesting relations between specifications in the time and frequency domains. Roughly speaking, the behavior of time responses for short times is related to the behavior of frequency responses at high frequencies, and vice versa. The precise relations are not trivial to derive.

Response to Reference Signals

Consider the basic feedback loop in Figure 11.1. The response to reference signals is described by the transfer functions G_yr = PCF/(1 + PC) and G_ur = CF/(1 + PC), with F = 1 for systems with error feedback. Notice that it is useful to consider both the response of the output and that of the control signal. In particular, the control signal response allows us to judge the magnitude and rate of the control signal required to obtain the output response.

Example 11.3
(Third-order system). Consider a process with the transfer function P(s) = (s + 1)^(−3) and a PI controller with error feedback having the gains kp = 0.6 and ki = 0.5. The responses are illustrated in Figure 11.5. The solid lines show results for a proportional-integral (PI) controller with error feedback. The dashed lines show results for a controller with feedforward designed to give the transfer function G_yr = 1/(0.5s + 1)³. Looking at the time responses, we find that the controller with feedforward gives a faster response with no overshoot. However, much larger control signals are required to obtain the fast response: the largest value of the control signal is 8, compared to 1.2 for the regular PI controller. The controller with feedforward has a larger bandwidth (marked in Figure 11.5) and no resonant peak. The transfer function G_ur also has higher gain at high frequencies.

Figure 11.5 (Reference signal responses): the responses in process output y and control signal u to a unit step in the reference signal r are shown in (a), and the gain curves of G_yr and G_ur are shown in (b). Results with PI control with error feedback are shown by solid lines; the dashed lines show results for a controller with a feedforward compensator.

Response to Load Disturbances and Measurement Noise

A simple criterion for disturbance attenuation is to compare the output of the closed loop system in Figure 11.1 with the output of the corresponding open loop system, obtained by setting C = 0. If we let the disturbances for the open and closed loop systems be identical, the output of the closed loop system is then obtained simply by passing the open loop output through a system with the transfer function S. The sensitivity function tells how the variations in the output are influenced by feedback (Exercise 11.7).
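This criterion can be made concrete with a short numerical sketch (plain Python; the process and PI gains are those of Example 11.3, while the frequency grid and helper names are our own scaffolding). It evaluates S = 1/(1 + PC), confirming strong attenuation at low frequencies and the band where feedback amplifies disturbances:

```python
# Sensitivity function S = 1/(1 + PC) for the third-order process of
# Example 11.3 with the PI controller gains kp = 0.6, ki = 0.5.
kp, ki = 0.6, 0.5

def P(s):
    return 1 / (s + 1) ** 3

def C(s):
    return kp + ki / s

def S(w):
    s = 1j * w
    return 1 / (1 + P(s) * C(s))

# Integral action gives strong attenuation of low-frequency disturbances...
assert abs(S(0.01)) < 0.1

# ...but some band of frequencies is necessarily amplified (|S| > 1).
ws = [10 ** (k / 1000) for k in range(-2000, 2001)]   # 0.01 ... 100 rad/s
Ms, wms = max((abs(S(w)), w) for w in ws)
assert Ms > 1
print(f"max sensitivity {Ms:.2f} at {wms:.2f} rad/s")
```

Filtering a recorded open loop output through S in the same way predicts the benefit of closing the loop at each disturbance frequency.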
Disturbances with frequencies such that |S(iω)| < 1 are attenuated, but disturbances with frequencies such that |S(iω)| > 1 are amplified by feedback. The maximum sensitivity Ms, which occurs at the frequency ωms, is thus a measure of the largest amplification of the disturbances. The maximum magnitude of 1/(1 + L) corresponds to the minimum of |1 + L|, which is precisely the stability margin sm defined in Section 9.3, so that Ms = 1/sm. The maximum sensitivity is therefore also a robustness measure.

If the sensitivity function is known, the potential improvements by feedback can be evaluated simply by recording a typical output and filtering it through the sensitivity function. A plot of the gain curve of the sensitivity function is a good way to make an assessment of the disturbance attenuation. Since the sensitivity function depends only on the loop transfer function, its properties can also be visualized graphically using the Nyquist plot of the loop transfer function. This is illustrated in Figure 11.6.

Figure 11.6 (Graphical interpretation of the sensitivity function): gain curves of the loop transfer function and the sensitivity function (a) can be used to calculate the properties of the sensitivity function through the relation S = 1/(1 + L). The sensitivity crossover frequency ωsc and the frequency ωms where the sensitivity has its largest value are indicated in the sensitivity plot. The Nyquist plot (b) shows the same information in a different form; all points inside the dashed circle have sensitivities greater than 1.

The complex number 1 + L(iω) can be represented as the vector from the point −1 to the point L(iω) on the Nyquist curve. The sensitivity is thus less than 1 for all points outside a circle with radius 1 and center at −1. Disturbances with frequencies in this range are attenuated by the feedback.

The transfer function G_yd from load disturbance d to process output y
for the system in Figure 11.1 is

G_yd = P/(1 + PC) = PS = T/C.   (11.8)

Since load disturbances typically have low frequencies, it is natural to focus on the behavior of the transfer function at low frequencies. For a system with P(0) ≠ 0 and a controller with integral action, the controller gain goes to infinity for small frequencies, and we have the following approximation for small s:

G_yd = T/C ≈ 1/C ≈ s/ki,   (11.9)

where ki is the integral gain. Since the sensitivity function S goes to 1 for large s, we have the approximation G_yd ≈ P for high frequencies.

Measurement noise, which typically has high frequencies, generates rapid variations in the control variable that are detrimental because they cause wear in many actuators and can even saturate an actuator. It is thus important to keep variations in the control signal due to measurement noise at reasonable levels; a typical requirement is that the variations are only a fraction of the span of the control signal. The variations can be influenced by filtering and by proper design of the high-frequency properties of the controller.

Figure 11.7 (Disturbance responses): the time and frequency responses of process output y to load disturbance d are shown in (a), and the responses of control signal u to measurement noise n are shown in (b).

The effects of measurement noise are captured by the transfer function from the measurement noise to the control signal,

G_un = C/(1 + PC) = CS = T/P.   (11.10)

The complementary sensitivity function is close to 1 for low frequencies (ω < ωgc), and G_un can be approximated by 1/P. The sensitivity function is close to 1 for high frequencies (ω > ωgc), and G_un can be approximated by C.

Example 11.4 (Third-order system). Consider a process with the transfer function P(s) = (s + 1)^(−3) and a proportional-integral-derivative (PID) controller with gains kp = 0.6, ki = 0.5, and kd = 2.0. We augment the controller using a second-order noise filter with Tf = 0.1, so that its transfer function is

C(s) = (kd s² + kp s + ki)/(s (s² Tf²/2 + s Tf + 1)).

The system responses are illustrated in Figure 11.7. The response of the output to a step in the load disturbance, in the top part of Figure 11.7a, has a peak of 0.28 at time t = 2.73. The frequency response in Figure 11.7a shows that the gain has a maximum of 0.58 at ω = 0.7.

The response of the control signal to a step in measurement noise is shown in Figure 11.7b. The high-frequency roll-off of the transfer function G_un(iω) is due to filtering; without it, the gain curve in Figure 11.7b would continue to rise after 20 rad/s. The step response has a peak of 13 at t = 0.08. The frequency response has its peak, 20, at ω = 14. Notice that the peak occurs far above the peak of the response to load disturbances and far above the gain crossover frequency ωgc = 0.78. An approximation derived in Exercise 11.9 gives max |CS(iω)| ≈ kd/Tf = 20, which occurs at ω = √2/Tf ≈ 14.1.

11.4 Feedback Design via Loop Shaping

One advantage of the Nyquist stability theorem is that it is based on the loop transfer function, which is related to the controller transfer function through L = PC. It is thus easy to see how the controller influences the loop transfer function. To make an unstable system stable, we simply have to bend the Nyquist curve away from the critical point.

This simple idea is the basis of several different design methods, collectively called loop shaping. These methods are based on choosing a compensator that gives a loop transfer function with a desired shape. One possibility is to determine a loop transfer function that gives a closed loop system with the desired properties and to compute the controller as C = L/P. Another is to start with the process transfer function, change its gain, and then add poles and zeros until the desired shape is obtained. In this section we will explore different loop-shaping methods for
control law design.

Design Considerations

We will first discuss a suitable shape for the loop transfer function that gives good performance and good stability margins. Figure 11.8 shows a typical loop transfer function. Good robustness requires good stability margins (or good gain and phase margins), which imposes requirements on the loop transfer function around the crossover frequencies ωpc and ωgc. The gain of L at low frequencies must be large in order to have good tracking of command signals and good attenuation of low-frequency disturbances. Since S = 1/(1 + L), it follows that for frequencies where |L| > 100, disturbances will be attenuated by a factor of 100 and the tracking error is less than 1%. It is therefore desirable to have a large crossover frequency and a steep negative slope of the gain curve. The gain at low frequencies can be increased by a controller with integral action, which is also called lag compensation. To avoid injecting too much measurement noise into the system, the loop transfer function should have low gain at high frequencies, which is called high-frequency roll-off. The choice of gain crossover frequency is a compromise among attenuation of load disturbances, injection of measurement noise, and robustness.

Bode's relations (see Section 9.4) impose restrictions on the shape of the loop transfer function. Equation (9.8) implies that the slope of the gain curve at gain crossover cannot be too steep. If the gain curve has a constant slope, we have the following relation between the slope ngc and the phase margin φm:

ngc = −2 + 2φm/π rad.   (11.11)

Figure 11.8 (Gain curve and sensitivity functions for a typical loop transfer function): the plot on the left shows the gain curve, and the plots on the right show the sensitivity function and complementary sensitivity function. The gain crossover frequency ωgc and the slope ngc of the gain curve at crossover
are important parameters that determine the robustness of closed loop systems. At low frequency, a large magnitude for L provides good load disturbance rejection and reference tracking, while at high frequency a small loop gain is used to avoid amplifying measurement noise.

This formula is a reasonable approximation when the gain curve does not deviate too much from a straight line. It follows from equation (11.11) that the phase margins 30°, 45°, and 60° correspond to the slopes −5/3, −3/2, and −4/3.

Loop shaping is a trial-and-error procedure. We typically start with a Bode plot of the process transfer function. We then attempt to shape the loop transfer function by changing the controller gain and adding poles and zeros to the controller transfer function. Different performance specifications are evaluated for each controller as we attempt to balance many different requirements by adjusting controller parameters and complexity. Loop shaping is straightforward to apply to single-input, single-output systems. It can also be applied to systems with one input and many outputs by closing the loops one at a time, starting with the innermost loop. The only limitation for minimum phase systems is that large phase leads and high controller gains may be required to obtain closed loop systems with a fast response. Many specific procedures are available; they all require experience, but they also give good insight into the conflicting requirements. There are fundamental limitations to what can be achieved for systems that are not minimum phase; they will be discussed in the next section.

Lead and Lag Compensation

A simple way to do loop shaping is to start with the transfer function of the process and add simple compensators with the transfer function

C(s) = k (s + a)/(s + b).   (11.12)

The compensator is called a lead compensator if a < b and a lag compensator if a > b. The PI controller is a special case of a lag compensator with b = 0, and the ideal PD controller is a special case of a lead compensator with a = 0. Bode plots of lead and lag compensators are shown in Figure 11.9.

Figure 11.9 (Frequency response for lead and lag compensators C(s) = k(s + a)/(s + b)): lead compensation (a) occurs when a < b and provides phase lead between ω = a and ω = b; lag compensation (b) corresponds to a > b and provides low-frequency gain. PI control is a special case of lag compensation, and PD control is a special case of lead compensation; the PI/PD frequency responses are shown by dashed curves.

Lag compensation, which increases the gain at low frequencies, is typically used to improve tracking performance and disturbance attenuation at low frequencies. Compensators that are tailored to specific disturbances can also be designed, as shown in Exercise 11.10. Lead compensation is typically used to improve phase margin. The following examples give illustrations.

Example 11.5 (Atomic force microscope in tapping mode). A simple model of the dynamics of the vertical motion of an atomic force microscope in tapping mode was given in Exercise 9.2. The transfer function for the system dynamics is

P(s) = a(1 − e^(−sτ))/(sτ(s + a)),

where a = ζω0, τ = 2πn/ω0, and the gain has been normalized to 1. A Bode plot of this transfer function for the parameters a = 1 and τ = 0.25 is shown in dashed curves in Figure 11.10a. To improve the attenuation of load disturbances, we increase the low-frequency gain by introducing an integral controller. The loop transfer function then becomes L = ki P(s)/s, and we adjust the gain so that the phase margin is zero, giving ki = 8.3. Notice the increase of the gain at low frequencies. The Bode plot is shown by the dotted line in Figure 11.10a, where the critical point is indicated. To improve the phase margin, we introduce proportional action and increase the proportional gain kp gradually until reasonable values of the sensitivities are obtained. The value kp = 3.5 gives maximum sensitivity Ms = 1.6 and
maximum complementary sensitivity Mt = 1.3. The loop transfer function is shown in solid lines in Figure 11.10a. Notice the significant increase of the phase margin compared with the purely integral controller (dotted line).

Figure 11.10 (Loop-shaping design of a controller for an atomic force microscope in tapping mode): (a) Bode plots of the process (dashed), the loop transfer function for an integral controller with critical gain (dotted), and a PI controller (solid) adjusted to give reasonable robustness; (b) gain curves for the Gang of Four for the system.

To evaluate the design we also compute the gain curves of the transfer functions in the Gang of Four. They are shown in Figure 11.10b. The peaks of the sensitivity curves are reasonable, and the plot of PS shows that the largest value of PS is 0.3, which implies that the load disturbances are well attenuated. The plot of CS shows that the largest controller gain is 6. The controller has a gain of 3.5 at high frequencies, and hence we may consider adding high-frequency roll-off.

A common problem in the design of feedback systems is that the phase margin is too small, and phase lead must then be added to the system. If we set a < b in equation (11.12), we add phase lead in the frequency range between the pole/zero pair, extending approximately a factor of 10 in frequency in each direction. By appropriately choosing the location of this phase lead, we can provide additional phase margin at the gain crossover frequency. Because the phase of a transfer function is related to the slope of the magnitude, increasing the phase requires increasing the gain of the loop transfer function over the frequency range in which the lead compensation is applied. In Exercise
11.11 it is shown that the gain increases exponentially with the amount of phase lead. We can also think of the lead compensator as changing the slope of the transfer function and thus shaping the loop transfer function in the crossover region (although it can be applied elsewhere as well).

Example 11.6 (Roll control for a vectored thrust aircraft). Consider the control of the roll of a vectored thrust aircraft, such as the one illustrated in Figure 11.11. Following Exercise 8.10, we model the system with a second-order transfer function of the form

P(s) = r/(J s² + c s),

with the parameters given in Figure 11.11b: m, vehicle mass, 4.0 kg; J, vehicle inertia (φ3 axis), 0.0475 kg m²; r, force moment arm, 25.0 cm; c, damping coefficient, 0.05 kg m/s; g, gravitational constant, 9.8 m/s².

Figure 11.11 (Roll control of a vectored thrust aircraft): (a) the roll angle θ is controlled by applying maneuvering thrusters, resulting in a moment generated by the thruster forces F1 and F2; (b) the table lists the parameter values for a laboratory version of the system.

We take as our performance specification that we would like less than 1% error in steady state and less than 10% tracking error up to 10 rad/s. The open loop transfer function is shown in Figure 11.12a. To achieve our performance specification, we would like to have a gain of at least 10 at a frequency of 10 rad/s, requiring the gain crossover frequency to be at a higher frequency. We see from the loop shape that in order to achieve the desired performance we cannot simply increase the gain, since this would give a very low phase margin. Instead we must increase the phase at the desired crossover frequency.

To accomplish this we use a lead compensator (11.12) with a = 2 and b = 50. We then set the gain of the system to provide a large loop gain up to the desired bandwidth, as shown in Figure 11.12b. We see that this system has a gain of greater than 10 at all frequencies up to 10 rad/s and that it has more than 60° of phase margin.

The action of a
lead compensator is essentially the same as that of the derivative portion of a PID controller. As described in Section 10.5, we often use a filter for the derivative action of a PID controller to limit the high-frequency gain. This same effect is present in a lead compensator through the pole at s = −b.

Equation (11.12) is a first-order compensator and can provide up to 90° of phase lead. Larger phase lead can be obtained by using a higher-order lead compensator (Exercise 11.11):

C(s) = k (s + a)ⁿ/(s + b)ⁿ,   a < b.

11.5 Fundamental Limitations

[…] Assuming that the slope ngc is negative, it has to be larger than −2 for the system to be stable. It follows from Bode's relations, equation (9.8), that

arg Pmp(iω) + arg C(iω) ≈ ngc π/2.

Combining this with equation (11.14) gives the following inequality for the allowable phase lag of the all-pass part at the gain crossover frequency:

−arg Pap(iωgc) ≤ π − φm + ngc π/2 = φl.   (11.15)

This condition, which we call the gain crossover frequency inequality, shows that the gain crossover frequency must be chosen so that the phase lag of the nonminimum phase component is not too large. For systems with high robustness requirements we may choose a phase margin of 60° (φm = π/3) and a slope ngc = −1, which gives an admissible phase lag φl = π/6 ≈ 0.52 rad (30°). For systems where we can accept a lower robustness we may choose a phase margin of 45° (φm = π/4) and the slope ngc = −1/2, which gives an admissible phase lag φl = π/2 ≈ 1.57 rad (90°).

The crossover frequency inequality shows that nonminimum phase components impose severe restrictions on possible crossover frequencies. It also means that there are systems that cannot be controlled with sufficient stability margins. We illustrate the limitations in a number of commonly encountered situations.

Example 11.7 (Zero in the right half-plane). The nonminimum phase part of the process transfer function for a system with a right half-plane zero is

Pap(s) = (z − s)/(z + s),

where z > 0. The phase lag of the nonminimum phase part is

−arg Pap(iω) = 2 arctan(ω/z).

Since the phase lag of Pap increases with frequency, the inequality (11.15)
gives the following bound on the crossover frequency:

ωgc < z tan(φl/2).   (11.16)

With φl = π/3 we get ωgc < 0.6 z. Slow right half-plane zeros (z small) therefore give tighter restrictions on possible gain crossover frequencies than fast right half-plane zeros.

Time delays also impose limitations similar to those given by zeros in the right half-plane. We can understand this intuitively from the Padé approximation

e^(−sτ) ≈ (1 − 0.5sτ)/(1 + 0.5sτ) = (2/τ − s)/(2/τ + s).

A long time delay is thus equivalent to a slow right half-plane zero z = 2/τ.

Example 11.8 (Pole in the right half-plane). The nonminimum phase part of the transfer function for a system with a pole in the right half-plane is

Pap(s) = (s + p)/(s − p),

where p > 0. The phase lag of the nonminimum phase part is

−arg Pap(iω) = 2 arctan(p/ω),

and the crossover frequency inequality becomes

ωgc > p/tan(φl/2).   (11.17)

Right half-plane poles thus require that the closed loop system have a sufficiently high bandwidth. With φl = π/3 we get ωgc > 1.7 p. Fast right half-plane poles (p large) therefore give tighter restrictions on possible gain crossover frequencies than slow right half-plane poles. The control of unstable systems imposes minimum bandwidth requirements for process actuators and sensors.

We will now consider systems with a right half-plane zero z and a right half-plane pole p. If p = z, there will be an unstable subsystem that is neither reachable nor observable, and the system cannot be stabilized (see Section 7.5). We can therefore expect that the system is difficult to control if the right half-plane pole and zero are close. A straightforward way to use the crossover frequency inequality is to plot the phase of the nonminimum phase factor Pap of the process transfer function. Such a plot, which can be incorporated in an ordinary Bode plot, will immediately show the permissible gain crossover frequencies. An illustration is given in Figure 11.13, which shows the phase of Pap for systems with a right half-plane pole/zero pair and for systems with a right half-plane pole and a time delay. If we require that
the phase lag φl of the nonminimum phase factor be less than 90°, we must require that the ratio z/p be larger than 6 or smaller than 1/6 for systems with right half-plane poles and zeros, and that the product pτ be less than 0.3 for systems with a time delay and a right half-plane pole. Notice the symmetry in the problem for z > p and z < p: in either case the zeros and the poles must be sufficiently far apart (Exercise 11.12). Also notice that possible values of the gain crossover frequency ωgc are quite restricted.

Using the theory of functions of complex variables, it can be shown that for systems with a right half-plane pole p and a right half-plane zero z (or a time delay τ), any stabilizing controller gives sensitivity functions with the property

sup_ω |S(iω)| ≥ |p + z|/|p − z|,   sup_ω |T(iω)| ≥ e^(pτ).   (11.18)

This result is proven in Exercise 11.13.

As the examples above show, right half-plane poles and zeros significantly limit the achievable performance of a system; hence one would like to avoid these whenever possible. The poles of a system depend on the intrinsic dynamics of the […]

Figure 11.14 (Interpretation of the waterbed effect): the function log |S(iω)| is plotted versus ω in linear scales in (a). According to Bode's integral formula (11.19), the area of log |S(iω)| above zero must be equal to the area below zero. Gunter Stein's interpretation of design as a trade-off of sensitivities at different frequencies is shown in (b) (from [185]).

Example 11.11 (X-29 aircraft). As an example of the application of Bode's integral formula, we present an analysis of the control system for the X-29 aircraft (see Figure 11.15a), which has an unusual configuration of aerodynamic surfaces that are designed to enhance its maneuverability. This analysis was originally carried out by Gunter Stein in his article "Respect the Unstable" [185], which is also the source of
the quote at the beginning of this chapter.

To analyze this system, we make use of a small set of parameters that describe the key properties of the system. The X-29 has longitudinal dynamics that are very similar to inverted pendulum dynamics (Exercise 8.3) and in particular has a pair of poles at approximately p = 6 and a zero at z = 26. The actuators that stabilize the pitch have a bandwidth of ωa = 40 rad/s, and the desired bandwidth of the pitch control loop is ω1 = 3 rad/s. Since the ratio of the zero to the pole is only 4.3, we may expect that it may be difficult to achieve the specifications.

Figure 11.15 (X-29 flight control system): the aircraft makes use of forward-swept wings and a set of canards on the fuselage to achieve high maneuverability (a). The desired sensitivity for the closed loop system is shown in (b): we seek to use our control authority to shape the sensitivity curve so that we have low sensitivity (good performance) up to frequency ω1 by creating higher sensitivity up to our actuator bandwidth ωa.

[…]

Figure 11.19 (Inner/outer loop controller for a vectored thrust aircraft): the Bode plot (a) and Nyquist plot (b) for the combined inner and outer loop transfer functions are shown. The system has a phase margin of 68° and a gain margin of 6.2.

Indeed, for the aircraft dynamics studied in this example, it is very challenging to directly design a controller from the lateral position x to the input u1. The use of the additional measurement of θ greatly simplifies the design because it can be broken up into simpler pieces.

11.7 Further Reading

Design by loop shaping was a key element in the early development of control, and systematic design methods were developed; see James, Nichols, and Phillips [110], Chestnut and Mayer [51], Truxal [194], and Thaler [191]. Loop shaping is also
Loop shaping is also treated in standard textbooks such as Franklin, Powell, and Emami-Naeini [79], Dorf and Bishop [61], Kuo and Golnaraghi [133], and Ogata [162]. Systems with two degrees of freedom were developed by Horowitz [102], who also discussed the limitations of poles and zeros in the right half-plane. Fundamental results on limitations are given in Bode [40]; more recent presentations are found in Goodwin, Graebe, and Salgado [88]. The treatment in Section 11.5 is based on [14]. Much of the early work was based on the loop transfer function; the importance of the sensitivity functions appeared in connection with the development in the 1980s that resulted in H∞ design methods. A compact presentation is given in the texts by Doyle, Francis, and Tannenbaum [64] and Zhou, Doyle, and Glover [209]. Loop shaping was integrated with robust control theory in McFarlane and Glover [150] and Vinnicombe [196]. Comprehensive treatments of control system design are given in Maciejowski [141] and Goodwin, Graebe, and Salgado [88].

Figure 11.20: Gang of Four for the vectored thrust aircraft system.

Exercises

11.1 Consider the system in Figure 11.1. Give all signal pairs that are related by the transfer functions 1/(1 + PC), P/(1 + PC), C/(1 + PC), and PC/(1 + PC).

11.2 Consider the system in Example 11.1. Choose the parameter a = 1 and compute the time and frequency responses for all the transfer functions in the Gang of Four for controllers with k = 0.2 and k = 5.

11.3 (Equivalence of Figures 11.1 and 11.2) Consider the system in Figure 11.1 and let the outputs of interest be z = (η, ν) and the major disturbances be w = (n, d). Show that the system can be represented by Figure 11.2 and give the matrix transfer functions P and C. Verify that the closed loop transfer function Hzw gives the Gang of Four.
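The four transfer functions in Exercise 11.1 can be computed directly from P and C; a minimal numeric sketch, using an illustrative first-order plant and proportional controller (these particular P and C are assumptions for illustration, not from the text):

```python
import numpy as np

# Gang of Four for an illustrative plant P(s) = 1/(s + 1) and
# controller C(s) = 5 (hypothetical choices, for demonstration only).
def gang_of_four(P, C, w):
    s = 1j * w
    L = P(s) * C(s)
    S = 1 / (1 + L)        # sensitivity: 1/(1 + PC)
    T = L / (1 + L)        # complementary sensitivity: PC/(1 + PC)
    PS = P(s) * S          # load disturbance response: P/(1 + PC)
    CS = C(s) * S          # noise-to-control response: C/(1 + PC)
    return S, T, PS, CS

P = lambda s: 1 / (s + 1)
C = lambda s: 5.0
for w in (0.1, 1.0, 10.0):
    S, T, PS, CS = gang_of_four(P, C, w)
    assert np.isclose(S + T, 1.0)  # the functions are linked: S + T = 1
```

The identity S + T = 1 is why the four responses cannot be shaped independently, which is the point of examining all of them at once.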
11.4 Consider the spring-mass system given by (2.14), which has the transfer function P(s) = 1/(ms² + cs + k). Design a feedforward compensator that gives a response with critical damping (ζ = 1).

11.5 (Sensitivity of feedback and feedforward) Consider the system in Figure 11.1 and let Gyr be the transfer function relating the measured signal y to the reference r. Show that the sensitivities of Gyr with respect to the feedforward and feedback transfer functions F and C are given by

dGyr/dF = CP/(1 + PC),   dGyr/dC = FP/(1 + PC)².

11.6 (Equivalence of controllers with two degrees of freedom) Show that the systems in Figures 11.1 and 11.3 give the same responses to command signals if Fm C + Fu = C F.

Chapter Twelve
Robust Performance

"However, by building an amplifier whose gain is deliberately made, say, 40 decibels higher than necessary (10000 fold excess on energy basis), and then feeding the output back on the input in such a way as to throw away that excess gain, it has been found possible to effect extraordinary improvement in constancy of amplification and freedom from nonlinearity." Harold S. Black, "Stabilized Feedback Amplifiers," 1934 [35].

This chapter focuses on the analysis of robustness of feedback systems, a vast topic for which we provide only an introduction to some of the key concepts. We consider the stability and performance of systems whose process dynamics are uncertain, and derive fundamental limits for robust stability and performance. To do this we develop ways to describe uncertainty, both in the form of parameter variations and in the form of neglected dynamics. We also briefly mention some methods for designing controllers to achieve robust performance.

12.1 Modeling Uncertainty

Harold Black's quote above illustrates that one of the key uses of feedback is to provide robustness to uncertainty ("constancy of amplification"). It is one of the most useful properties of feedback and is what makes it possible to design feedback systems based on strongly simplified models. One form of uncertainty in dynamical systems is parametric uncertainty, in which the parameters describing the system
are unknown. A typical example is the variation of the mass of a car, which changes with the number of passengers and the weight of the baggage. When linearizing a nonlinear system, the parameters of the linearized model also depend on the operating conditions. It is straightforward to investigate the effects of parametric uncertainty simply by evaluating the performance criteria for a range of parameters. Such a calculation reveals the consequences of parameter variations. We illustrate by a simple example.

Example 12.1 (Cruise control). The cruise control problem was described in Section 3.1, and a PI controller was designed in Example 10.3. To investigate the effect of parameter variations, we will choose a controller designed for a nominal operating condition corresponding to mass m = 1600, fourth gear (α = 12), and speed ve = 25 m/s; the controller gains are kp = 0.72 and ki = 0.18. Figure 12.1a shows the velocity v and the throttle u when encountering a hill with a 3° slope, with masses in the range 1600 ≤ m ≤ 2000, gear ratios 3 to 5 (α = 10, 12, and 16), and velocity 10 ≤ v ≤ 40 m/s.

Figure 12.1: Responses of the cruise control system to a slope increase of 3° (a), and the eigenvalues of the closed loop system (b). Model parameters are swept over a wide range.

The simulations were done using models that were linearized around the different operating conditions. The figure shows that there are variations in the response, but that they are quite reasonable. The largest velocity error is in the range of 0.2-0.6 m/s, and the settling time is about 15 s. The control signal is marginally larger than 1 in some cases, which implies that the throttle is fully open. A full nonlinear simulation using a controller with windup protection is required if we want to explore these cases in more detail.
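The parameter sweep in Example 12.1 can be sketched in a few lines. The PI gains below are those quoted in the text; the first-order plant P(s) = b/(s + a), with a and b standing in for the mass- and gear-dependent coefficients, is a simplified assumption and not the book's linearized cruise model:

```python
import numpy as np

# Sweep hypothetical operating conditions and check that the closed
# loop eigenvalues stay in the left half-plane. PI gains from the text.
kp, ki = 0.72, 0.18

for a in np.linspace(0.01, 0.2, 5):      # hypothetical damping range
    for b in np.linspace(0.5, 2.0, 5):   # hypothetical input gain range
        # Closed loop characteristic polynomial for P = b/(s + a) with
        # C = kp + ki/s:  s(s + a) + b(kp s + ki) = s^2 + (a + b kp)s + b ki
        eigs = np.roots([1.0, a + b * kp, b * ki])
        assert np.all(eigs.real < 0)     # stable for every combination
```

Since all polynomial coefficients remain positive over the sweep, stability holds for the whole range, mirroring the figure's conclusion that the eigenvalues stay well damped.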
Figure 12.1b shows the eigenvalues of the closed loop system for the different operating conditions. The figure shows that the closed loop system is well damped in all cases.

This example indicates that, at least as far as parametric variations are concerned, the design based on a simple nominal model will give satisfactory control. The example also indicates that a controller with fixed parameters can be used in all cases. Notice that we have not considered operating conditions in low gear and at low speed, but cruise controllers are not typically used in these cases.

Unmodeled Dynamics

It is generally easy to investigate the effects of parametric variations. However, there are other uncertainties that also are important, as discussed at the end of Section 2.3. The simple model of the cruise control system captures only the dynamics of the forward motion of the vehicle and the torque characteristics of the engine and transmission. It does not, for example, include a detailed model of the engine dynamics (whose combustion processes are extremely complex) or the slight delays that can occur in modern electronically controlled engines (as a result of the processing time of the embedded computers). These neglected mechanisms are called unmodeled dynamics. Unmodeled dynamics can be accounted for by developing a more complex model. Such models are commonly used for controller development, but substantial effort is required to develop them. An alternative is to investigate if the closed loop system is sensitive to generic forms of unmodeled dynamics. The basic idea is to ...

... inputs and many outputs. We illustrate its use by computing the metric for the systems in the previous examples.

Example 12.4 (Vinnicombe metric for Examples 12.2 and 12.3). For the systems in Example 12.2 we have

f1(s) = 1 + P1(s)P1(−s) = (1 + k² − s²)/(1 − s²),
f2(s) = 1 + P2(s)P1(−s) = (1 + k² + 2sT + (T² − 1)s² − 2Ts³ − T²s⁴)/((1 − s²)(1 + sT)²).

The function f1 has one zero in the right half-plane. A numerical calculation for k = 100 and T = 0.025 shows that the function f2 has the roots 46.3, −86.3, and −20.0 ± 60.0i.
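The roots quoted above can be reproduced directly from the numerator polynomial of f2 derived in the example:

```python
import numpy as np

# Check the roots quoted in Example 12.4. The numerator of
# f2(s) = 1 + P2(s)P1(-s), in descending powers of s, is
# -T^2 s^4 - 2T s^3 + (T^2 - 1) s^2 + 2T s + (1 + k^2).
k, T = 100.0, 0.025
roots = np.roots([-T**2, -2*T, T**2 - 1, 2*T, 1 + k**2])

real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-6)
assert 46 < max(real_roots) < 47       # the single right half-plane zero
assert -87 < min(real_roots) < -86     # matches the quoted -86.3
```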
Both functions have one zero in the right half-plane, allowing us to compute the norm (12.4). For T = 0.025 this gives δν(P1, P2) = 0.98, which is a quite large value. To have reasonable robustness, Vinnicombe recommended values less than 1/3.

For the system in Example 12.3 we have

1 + P1(s)P1(−s) = (1 + k² − s²)/(1 − s²),
1 + P2(s)P1(−s) = (1 − k² − 2s + s²)/(s − 1)².

These functions have the same number of zeros in the right half-plane if k > 1. In this particular case the Vinnicombe metric is δν(P1, P2) = 2k/(1 + k²) (Exercise 12.4), and with k = 100 we get δν(P1, P2) = 0.02. Figure 12.4 shows the Nyquist curves and their projections for k = 2. Notice that δν(P1, P2) is very small for small k, even though the closed loop systems are very different. It is therefore essential to consider the condition on P1, P2, and C, as discussed in Exercise 12.4.

12.2 Stability in the Presence of Uncertainty

Having discussed how to describe uncertainty and the similarity between two systems, we now consider the problem of robust stability: when can we show that the stability of a system is robust with respect to process variations? This is an important question, since the potential for instability is one of the main drawbacks of feedback. Hence we want to ensure that even if we have small inaccuracies in our model, we can still guarantee stability and performance.

Robust Stability Using Nyquist's Criterion

The Nyquist criterion provides a powerful and elegant way to study the effects of uncertainty for linear systems. A simple criterion is that the Nyquist curve be sufficiently far from the critical point −1. Recall that the shortest distance from the Nyquist curve to the critical point is sm = 1/Ms, where Ms is the maximum of the sensitivity function and sm is the stability margin introduced in Section 9.3.

... for the overall circuit is given by

Gv2v1 = (R2/R1) · G(s)/(G(s) + R2/R1 + 1).

We see that if G(s) is large over the desired frequency range, then the closed loop system is very close to the ideal response α = R2/R1. Assuming G(s) = b/(s + a), where b is the gain-bandwidth product of the amplifier as discussed in Example 8.3, the
sensitivity function and the complementary sensitivity function become

S(s) = (s + a)/(s + a + αb),   T(s) = αb/(s + a + αb).

The sensitivity function around the nominal values tells us how the tracking response varies as a function of process perturbations:

dGyr/Gyr = S · dP/P.

We see that for low frequencies, where S is small, variations in the bandwidth a or the gain-bandwidth product b will have relatively little effect on the performance of the amplifier (under the assumption that b is sufficiently large).

To model the effects of an unknown load, we consider the addition of a disturbance at the output of the system, as shown in Figure 12.10b. This disturbance represents changes in the output voltage due to loading effects. The transfer function Gyd = S gives the response of the output to the load disturbance, and we see that if S is small, then we are able to reject such disturbances. The sensitivity of Gyd to perturbations in the process dynamics can be computed by taking the derivative of Gyd with respect to P:

dGyd/dP = −C/(1 + PC)² = −(T/P) Gyd,   so that   dGyd/Gyd = −T · dP/P.

Thus we see that the relative changes in the disturbance rejection are roughly the same as the process perturbations at low frequency (where T is approximately 1) and drop off at higher frequencies. However, it is important to remember that Gyd itself is small at low frequency, and so these variations in relative performance may not be an issue in many applications.

12.4 Robust Pole Placement

In Chapters 6 and 7 we saw how to design controllers by setting the locations of the eigenvalues of the closed loop system. If we analyze the resulting system in the frequency domain, the closed loop eigenvalues correspond to the poles of the closed loop transfer function, and hence these methods are often referred to as design by pole placement. State space design methods, like many methods developed for control system design, do not explicitly take robustness into account. In such cases it is essential to
Figure 12.11: Observer-based control of steering. The Nyquist plot (left) and Bode plot (right) of the loop transfer function for vehicle steering with a controller based on state feedback and an observer. The controller provides stable operation, but with very low gain and phase margins.

always investigate the robustness, because there are seemingly reasonable designs that give controllers with poor robustness. We illustrate this by analyzing controllers designed by state feedback and observers. The closed loop poles can be assigned to arbitrary locations if the system is observable and reachable. However, if we want to have a robust closed loop system, the poles and zeros of the process impose severe restrictions on the location of the closed loop poles. Some examples are first given; based on the analysis of these examples, we then present design rules for robust pole (eigenvalue) placement.

Slow Stable Process Zeros

We will first explore the effects of slow stable zeros, and we begin with a simple example.

Example 12.8 (Vehicle steering). Consider the linearized model for vehicle steering in Example 8.6, which has the transfer function

P(s) = (0.5s + 1)/s².

A controller based on state feedback was designed in Example 6.4, and state feedback was combined with an observer in Example 7.4. The system simulated in Figure 7.8 has closed loop poles specified by ωc = 0.3, ζc = 0.707, ωo = 7, and ζo = 9. Assume that we want a faster closed loop system and choose ωc = 10, ζc = 0.707, ωo = 20, and ζo = 0.707. Using the state representation in Example 7.3, a pole placement design gives the state feedback gains k1 = 100 and k2 = 35.86 and the observer gains l1 = 28.28 and l2 = 400. The controller transfer function is

C(s) = (−11516s + 40000)/(s² + 42.4s + 6657.9).

Figure 12.11 shows Nyquist and Bode plots of the loop transfer function. The Nyquist plot indicates that the robustness is poor, since the loop transfer function is very close to the critical point −1. The phase margin is 7° and the stability
margin is sm = 0.077. The poor robustness shows up in the Bode plot, where the gain curve hovers around the value 1 and the phase curve is close to −180° for a wide frequency range. More insight is obtained by analyzing the sensitivity functions, shown by solid lines in Figure 12.12. The maximum sensitivities are Ms = 13 and Mt = 12, indicating that the system has poor robustness.

At first sight it is surprising that a controller whose nominal closed loop system has well damped poles and zeros is so sensitive to process variations. We have an indication that something is unusual because the controller has a zero at s = 3.5 in the right half-plane. To understand what happens, we will investigate the reason for the peaks of the sensitivity functions. Let the transfer functions of the process and the controller be

P(s) = np(s)/dp(s),   C(s) = nc(s)/dc(s),

where np(s), nc(s), dp(s), and dc(s) are the numerator and denominator polynomials. The complementary sensitivity function is

T(s) = PC/(1 + PC) = np(s)nc(s)/(dp(s)dc(s) + np(s)nc(s)).

The poles of T(s) are the poles of the closed loop system, and the zeros are given by the zeros of the process and controller. Sketching the gain curve of the complementary sensitivity function, we find that |T(iω)| ≈ 1 for low frequencies and that |T(iω)| starts to increase at its first zero, which is the process zero at s = −2. It increases further at the controller zero at s = 3.5, and it does not start to decrease until the closed loop poles appear at ωc = 10 and ωo = 20. We can thus conclude that there will be a peak in the complementary sensitivity function. The magnitude of the peak depends on the ratio of the zeros and the poles of the transfer function.

The peak of the complementary sensitivity function can be avoided by assigning a closed loop zero close to the slow process zero. We can achieve this by choosing ωc = 10 and ζc = 2.6, which gives closed loop poles at s = −2 and s = −50. The controller transfer function then becomes

C(s) = (3628s + 40000)/(s² + 80.28s + 156.56) = 3628 (s + 11.02)/((s + 2)(s + 78.28)).

The sensitivity functions are shown by dashed lines in Figure
12.12. The controller gives the maximum sensitivities Ms = 1.34 and Mt = 1.41, which give much better robustness. Notice that the controller has a pole at s = −2 that cancels the slow process zero. The design can also be done simply by canceling the slow stable process zero and designing the controller for the simplified system.

One lesson from the example is that it is necessary to choose closed loop poles that are equal to or close to slow stable process zeros. Another lesson is that slow unstable process zeros impose limitations on the achievable bandwidth, as already noted in Section 11.5.

Figure 12.12: Sensitivity functions for observer-based control of vehicle steering. The complementary sensitivity function (left) and the sensitivity function (right) for the original controller with ωc = 10, ζc = 0.707, ωo = 20, ζo = 0.707 (solid) and the improved controller with ωc = 10, ζc = 2.6 (dashed).

Fast Stable Process Poles

The next example shows the effect of fast stable poles.

Example 12.9 (Fast system poles). Consider a PI controller for a first-order system, where the process and the controller have the transfer functions P(s) = b/(s + a) and C(s) = kp + ki/s. The loop transfer function is

L(s) = b(kp s + ki)/(s(s + a)),

and the closed loop characteristic polynomial is

s(s + a) + b(kp s + ki) = s² + (a + b kp)s + ki b.

If we specify that the desired closed loop poles should be −p1 and −p2, we find that the controller parameters are given by

kp = (p1 + p2 − a)/b,   ki = p1 p2/b.

The sensitivity functions are then

S(s) = s(s + a)/((s + p1)(s + p2)),   T(s) = ((p1 + p2 − a)s + p1 p2)/((s + p1)(s + p2)).

Assume that the process pole a is much larger than the closed loop poles p1 and p2, say p1 < p2 ≪ a. Notice that the proportional gain is negative, and that the controller has a zero in the right half-plane if a > p1 + p2, an indication that the system has bad properties. Next consider the sensitivity function, which is 1 for high frequencies. Moving from high to low frequencies, we find that the sensitivity increases at the process pole s = a.
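The pole placement formulas above can be checked numerically, using the example's parameter values (a = b = 1, p1 = 0.05, p2 = 0.2):

```python
import numpy as np

# Sketch of Example 12.9: place the closed loop poles of P(s) = b/(s + a)
# under PI control, then evaluate the peak of the sensitivity function.
a, b = 1.0, 1.0
p1, p2 = 0.05, 0.2
kp = (p1 + p2 - a) / b          # negative, since a > p1 + p2: a warning sign
ki = p1 * p2 / b

# Closed loop characteristic polynomial should factor as (s + p1)(s + p2).
r = np.sort(np.roots([1.0, a + b * kp, b * ki]))
assert np.allclose(r, [-p2, -p1])
assert kp < 0

# Peak of |S(iw)| = |iw (iw + a)| / |(iw + p1)(iw + p2)| over a grid;
# it is large, on the order of a/p2.
w = np.logspace(-3, 2, 2000)
S = (1j * w) * (1j * w + a) / ((1j * w + p1) * (1j * w + p2))
Ms = np.abs(S).max()
assert Ms > 3.5                 # a pronounced sensitivity peak
```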
The sensitivity does not decrease until the closed loop poles are reached, resulting in a large sensitivity peak that is approximately a/p2. The magnitude of the sensitivity function is shown in Figure 12.13 for a = b = 1, p1 = 0.05, and p2 = 0.2. Notice the high sensitivity peak. For comparison we also show the gain ...

Figure 12.14: Nanopositioning system control via cancellation of the fast process pole. Gain plots for the Gang of Four for PID control with second-order filtering (12.17) are shown by solid lines; the dashed lines show results for an ideal PID controller without filtering (12.16).

... robustness. A large value of Tf reduces the effects of sensor noise significantly, but it also reduces the stability margin. Since the gain crossover frequency without filtering is ki, a reasonable choice is Tf = 0.2/ki, as shown by the solid curves in Figure 12.14. The plots of |CS(iω)| and |S(iω)| show that the sensitivity to high-frequency measurement noise is reduced dramatically, at the cost of a marginal increase of sensitivity. Notice that the poor attenuation of disturbances with frequencies close to the resonance is not visible in the sensitivity function, because of the exact cancellation of poles and zeros.

The designs thus far have the drawback that load disturbances with frequencies close to the resonance are not attenuated. We will now consider a design that actively attenuates the poorly damped modes. We start with an ideal PID controller, where the design can be done analytically, and we add high-frequency roll-off. The loop transfer function obtained with this controller is

L(s) = (kd s² + kp s + ki) a²/(s(s² + 2ζas + a²)).   (12.18)

The closed loop system is of third order, and its characteristic polynomial is

s³ + (kd a² + 2ζa)s² + (kp + 1)a² s + ki a².   (12.19)

A general third-order polynomial can be parameterized as

s³ + (α0 + 2ζ)ω0 s² + (1 + 2α0ζ)ω0² s + α0 ω0³.   (12.20)

The parameters α0 and ζ give the relative configuration of the poles, and the parameter ω0 gives their magnitudes and therefore also the bandwidth of the system. Identification of coefficients of equal powers of s in equation (12.20) with equation (12.19) gives a linear equation for the controller parameters, which has the solution

kp = (1 + 2α0ζ)ω0²/a² − 1,   ki = α0 ω0³/a²,   kd = ((α0 + 2ζ)ω0 − 2ζa)/a².   (12.21)

To obtain a design with active damping, it is necessary that the closed loop bandwidth be at least as fast as the oscillatory modes. Adding high-frequency roll-off, the controller becomes

C(s) = (kd s² + kp s + ki)/(s(1 + sTf + (sTf)²/2)).   (12.22)

The value Tf = Td/10 = 0.1 kd/kp is a good value for the filtering time constant.

Figure 12.15: Nanopositioner control using active damping. Gain curves for the Gang of Four for PID control of the nanopositioner designed for ω0 = a (dash-dotted), 2a (dashed), and 4a (solid). The controller has high-frequency roll-off and has been designed to give active damping of the oscillatory mode. The different curves correspond to different choices of magnitudes of the poles, parameterized by ω0 in equation (12.19).

Figure 12.15 shows the gain curves of the Gang of Four for designs with ζ = 0.707, α0 = 1, and ω0 = a, 2a, and 4a. The figure shows that the largest values of the sensitivity function and the complementary sensitivity function are small. The gain curve for |PS| shows that the load disturbances are now well attenuated over the whole frequency range, and attenuation increases with increasing ω0. The gain curve for |CS| shows that large control signals are required to provide active damping. The high gain of |CS| at high frequencies also shows that low-noise sensors and actuators with a wide range are required. The largest gains for |CS| are 19, 103, and 434 for ω0 = a, 2a, and 4a, respectively.
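The controller formulas (12.21) can be verified by substituting them back into the characteristic polynomial (12.19) and comparing with the target polynomial (12.20); the numerical values below are arbitrary test choices:

```python
import numpy as np

# Verify the active-damping design formulas (12.21): the closed loop
# characteristic polynomial (12.19) must match the target (12.20).
a, zeta = 1.0, 0.707             # plant resonance and damping (test values)
alpha0, w0 = 1.0, 2.0            # desired pole configuration (test values)

kp = (1 + 2 * alpha0 * zeta) * w0**2 / a**2 - 1
ki = alpha0 * w0**3 / a**2
kd = ((alpha0 + 2 * zeta) * w0 - 2 * zeta * a) / a**2

closed_loop = [1.0, kd * a**2 + 2 * zeta * a,
               (kp + 1) * a**2, ki * a**2]                    # (12.19)
target = [1.0, (alpha0 + 2 * zeta) * w0,
          (1 + 2 * alpha0 * zeta) * w0**2, alpha0 * w0**3]    # (12.20)
assert np.allclose(closed_loop, target)
```

The match is exact by construction, since (12.21) comes from equating coefficients of equal powers of s.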
There is clearly a tradeoff between disturbance attenuation and controller gain. A comparison of Figures 12.14 and 12.15 illustrates the tradeoffs between control action and disturbance attenuation for the designs with cancellation of the fast process pole and with active damping.

Figure 12.16: Hall and Nichols charts. The Hall chart is a Nyquist plot with curves for constant gain and phase of the complementary sensitivity function T. The Nichols chart is the conformal map of the Hall chart under the transformation N = log L (with the scale flipped). The dashed curve is the line where |T(iω)| = 1, and the shaded region corresponds to loop transfer functions whose complementary sensitivity changes by no more than 10%.

... and disturbance injection, because it balances control actions against deviations in the output. If all state variables are measured, the controller is a state feedback u = −Kx, and it has the same form as the controller obtained by eigenvalue assignment (pole placement) in Section 6.2. However, the controller gain is obtained by solving an optimization problem. It has been shown that this controller is very robust: it has a phase margin of at least 60° and an infinite gain margin. The controller is called a linear quadratic control, or LQ control, because the process model is linear and the criterion is quadratic.

When all state variables are not measured, the state can be reconstructed using an observer, as discussed in Section 7.3. It is also possible to introduce process disturbances and measurement noise explicitly in the model and to reconstruct the states using a Kalman filter, as discussed briefly in Section 7.4. The Kalman filter has the same structure as the observer designed by eigenvalue assignment in Section 7.3, but the observer gains L are now obtained by solving an optimization problem. The control law obtained by combining linear quadratic control with a Kalman filter is called linear quadratic Gaussian control, or LQG control.
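An LQ gain can be computed by solving an algebraic Riccati equation; a minimal sketch for a double integrator with identity state weight and unit control weight (an illustrative choice of system and weights, not an example from the text):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Minimal LQ sketch: double integrator, Q = I, R = 1 (illustrative
# weights). The gain comes from the algebraic Riccati equation; for
# this problem the known solution is K = [1, sqrt(3)].
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)          # K = R^{-1} B^T P
assert np.allclose(K, [[1.0, np.sqrt(3)]])

eigs = np.linalg.eigvals(A - B @ K)
assert np.all(eigs.real < 0)             # stable closed loop
```

One can also confirm numerically that the loop gain from this state feedback has a large phase margin, consistent with the guaranteed 60° margin mentioned above.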
The Kalman filter is optimal when the models for load disturbances and measurement noise are Gaussian. It is interesting that the solution to the optimization problem leads to a controller having the structure of a state feedback and an observer. The state feedback gains depend on the parameter ρ, and the filter gains depend on the parameters in the model that characterize process noise and measurement noise (see Section 7.4). There are efficient programs to compute these feedback and observer gains.

The nice robustness properties of state feedback are unfortunately lost when the observer is added. It is possible to choose parameters that give closed loop systems with poor robustness, similar to Example 12.8. We can thus conclude that there is a ...

... automatically. Automatic tuning requires that parameters remain constant, and it has been widely applied for PID control. It is a reasonable guess that in the future many controllers will have features for automatic tuning. If parameters are changing, it is possible to use adaptive methods where process dynamics are measured online.

12.6 Further Reading

The topic of robust control is a large one, with many articles and textbooks devoted to the subject. Robustness was a central issue in classical control, as described in Bode's classical book [40]. Robustness was deemphasized in the euphoria of the development of design methods based on optimization. The strong robustness of controllers based on state feedback, shown by Anderson and Moore [7], contributed to the optimism. The poor robustness of output feedback was pointed out by Rosenbrock [169], Horowitz [103], and Doyle [63], and resulted in a renewed interest in robustness. A major step forward was the development of design methods where robustness was explicitly taken into account, such as the seminal work of Zames [208]. Robust control was originally developed using powerful results from the theory of complex variables, which
gave controllers of high order. A major breakthrough was made by Doyle, Glover, Khargonekar, and Francis [65], who showed that the solution to the problem could be obtained using Riccati equations and that a controller of low order could be found. This paper led to an extensive treatment of H∞ control, including books by Francis [78], McFarlane and Glover [150], Doyle, Francis, and Tannenbaum [64], Green and Limebeer [90], Zhou, Doyle, and Glover [209], Skogestad and Postlethwaite [181], and Vinnicombe [196]. A major advantage of the theory is that it combines much of the intuition from servomechanism theory with sound numerical algorithms based on numerical linear algebra and optimization. The results have been extended to nonlinear systems by treating the design problem as a game where the disturbances are generated by an adversary, as described in the book by Basar and Bernhard [24]. Gain scheduling and adaptation are discussed in the book by Åström and Wittenmark [19].

Exercises

12.1 Consider systems with the transfer functions P1 = 1/(s + 1) and P2 = 1/(s + a). Show that P1 can be changed continuously to P2 with bounded additive and multiplicative uncertainty if a > 0 but not if a < 0. Also show that no restriction on a is required for feedback uncertainty.

12.2 Consider systems with the transfer functions P1 = (s + 1)/(s + 1)² and P2 = (s + a)/(s + 1)². Show that P1 can be changed continuously to P2 with bounded feedback uncertainty if a > 0 but not if a < 0. Also show that no restriction on a is required for additive and multiplicative uncertainties.

Bibliography

[1] M. A. Abkowitz. Stability and Motion Control of Ocean Vehicles. MIT Press, Cambridge, MA, 1969.
[2] R. H. Abraham and C. D. Shaw. Dynamics: The Geometry of Behavior, Part 1: Periodic Behavior. Aerial Press, Santa Cruz, CA, 1982.
[3] J. Ackermann. Der Entwurf linearer Regelungssysteme im Zustandsraum. Regelungstechnik und Prozessdatenverarbeitung, 7:297-300, 1972.
[4] J. Ackermann. Sampled-Data Control Systems. Springer, Berlin, 1985.
[5] C. E. Agnew. Dynamic modeling and control of congestion-prone systems. Operations Research, 24(3):400-419,
1976.
[6] L. V. Ahlfors. Complex Analysis. McGraw-Hill, New York, 1966.
[7] B. D. O. Anderson and J. B. Moore. Optimal Control: Linear Quadratic Methods. Prentice Hall, Englewood Cliffs, NJ, 1990. Republished by Dover Publications, 2007.
[8] A. A. Andronov, A. A. Vitt, and S. E. Khaikin. Theory of Oscillators. Dover, New York, 1987.
[9] T. M. Apostol. Calculus, Vol. II: Multi-Variable Calculus and Linear Algebra with Applications. Wiley, New York, 1967.
[10] T. M. Apostol. Calculus, Vol. I: One-Variable Calculus with an Introduction to Linear Algebra. Wiley, New York, 1969.
[11] R. Aris. Mathematical Modeling Techniques. Dover, New York, 1994. Originally published by Pitman, 1978.
[12] V. I. Arnold. Mathematical Methods in Classical Mechanics. Springer, New York, 1978.
[13] V. I. Arnold. Ordinary Differential Equations. MIT Press, Cambridge, MA, 1987. 10th printing, 1998.
[14] K. J. Åström. Limitations on control system performance. European Journal of Control, 6(1):2-20, 2000.
[15] K. J. Åström. Introduction to Stochastic Control Theory. Dover, New York, 2006. Originally published by Academic Press, New York, 1970.
[16] K. J. Åström and T. Hägglund. Advanced PID Control. ISA (The Instrumentation, Systems, and Automation Society), Research Triangle Park, NC, 2005.
[17] K. J. Åström, R. E. Klein, and A. Lennartsson. Bicycle dynamics and control. IEEE Control Systems Magazine, 25(4):26-47, 2005.
[18] K. J. Åström and B. Wittenmark. Computer-Controlled Systems: Theory and Design. 3rd ed. Prentice Hall, Englewood Cliffs, NJ, 1997.
[19] K. J. Åström and B. Wittenmark. Adaptive Control. 2nd ed. Dover, New York, 2008. Originally published by Addison-Wesley, 1995.
[20] D. P. Atherton. Nonlinear Control Engineering. Van Nostrand, New York, 1975.
[21] M. Atkinson, M. Savageau, J. Myers, and A. Ninfa. Development of genetic circuitry exhibiting toggle switch or oscillatory behavior in Escherichia coli. Cell, 113(5):597-607, 2003.
[22] M. B. Barron and W. F. Powers. The role of electronic controls for future automotive mechatronic systems. IEEE Transactions on Mechatronics, 1(1):80-89, 1996.
[23] T. Basar, editor. Control Theory: Twenty-five Seminal Papers (1932-1981). IEEE Press, New York, 2001.
[24] T. Basar and P. Bernhard. H∞-Optimal Control and Related Minimax Design Problems: A Dynamic Game Approach. Birkhäuser, Boston, 1991.
[25] J. Bechhoefer. Feedback for physicists: A tutorial essay on control. Reviews of Modern Physics, 77:783-836, 2005.
[26] R. Bellman and K. J. Åström. On structural identifiability. Mathematical Biosciences, 7:329-339, 1970.
[27] R. E. Bellman and R. Kalaba. Selected Papers on Mathematical Trends in Control Theory. Dover, New York, 1964.
[28] S. Bennett. A History of Control Engineering 1800-1930. Peter Peregrinus, Stevenage, 1986.
[29] S. Bennett. A History of Control Engineering 1930-1955. Peter Peregrinus, Stevenage, 1986.
[30] L. L. Beranek. Acoustics. McGraw-Hill, New York, 1954.
[31] R. N. Bergman. Toward physiological understanding of glucose tolerance: Minimal-model approach. Diabetes, 38:1512-1527, 1989.
[32] D. Bertsekas and R. Gallager. Data Networks. Prentice Hall, Englewood Cliffs, 1987.
[33] B. Bialkowski. Process control sample problems. In N. J. Sell, editor, Process Control Fundamentals for the Pulp & Paper Industry. Tappi Press, Norcross, GA, 1995.
[34] G. Binnig and H. Rohrer. Scanning tunneling microscopy. IBM Journal of Research and Development, 30(4):355-369, 1986.
[35] H. S. Black. Stabilized feedback amplifiers. Bell System Technical Journal, 13(1), 1934.
[36] H. S. Black. Inventing the negative feedback amplifier. IEEE Spectrum, pp. 55-60, 1977.
[37] J. F. Blackburn, G. Reethof, and J. L. Shearer. Fluid Power Control. MIT Press, Cambridge, MA, 1960.
[38] J. H. Blakelock. Automatic Control of Aircraft and Missiles. 2nd ed. Addison-Wesley, Cambridge, MA, 1991.
[39] G. Blickley. Modern control started with Ziegler-Nichols tuning. Control Engineering, 37:72-75, 1990.
[40] H. W. Bode. Network Analysis and Feedback Amplifier Design. Van Nostrand, New York, 1945.
[41] H. W. Bode. Feedback: The history of an idea. In Symposium on Active Networks and Feedback Systems. Polytechnic Institute of Brooklyn, New York, 1960. Reprinted in [27].
[42] W. E. Boyce and R. C. DiPrima. Elementary Differential Equations. Wiley, New York, 2004.
[43] B. Brawn and F. Gustavson. Program behavior in a paging environment. Proceedings of the AFIPS
Fall Joint Computer Conference, pp. 1019-1032, 1968.
[44] R. W. Brockett. Finite Dimensional Linear Systems. Wiley, New York, 1970.
[45] R. W. Brockett. New issues in the mathematics of control. In B. Engquist and W. Schmid, editors, Mathematics Unlimited: 2001 and Beyond, pp. 189-220. Springer-Verlag, Berlin, 2000.
[46] G. S. Brown and D. P. Campbell. Principles of Servomechanisms. Wiley, New York, 1948.
[47] A. E. Bryson, Jr., and Y.-C. Ho. Applied Optimal Control: Optimization, Estimation, and Control. Wiley, New York, 1975.
[48] F. M. Callier and C. A. Desoer. Linear System Theory. Springer-Verlag, London, 1991.
[49] R. H. Cannon. Dynamics of Physical Systems. Dover, New York, 2003. Originally published by McGraw-Hill, 1967.
[50] H. S. Carslaw and J. C. Jaeger. Conduction of Heat in Solids. 2nd ed. Clarendon Press, Oxford, UK, 1959.
[51] H. Chestnut and R. W. Mayer. Servomechanisms and Regulating System Design, Vol. 1. Wiley, New York, 1951.
[52] C. Cobelli and G. Toffolo. Model of glucose kinetics and their control by insulin: Compartmental and noncompartmental approaches. Mathematical Biosciences, 72(2):291-316, 1984.
[53] R. F. Coughlin and F. F. Driscoll. Operational Amplifiers and Linear Integrated Circuits. 6th ed. Prentice Hall, Englewood Cliffs, NJ, 1975.
[54] L. B. Cremean, T. B. Foote, J. H. Gillula, G. H. Hines, D. Kogan, K. L. Kriechbaum, J. C. Lamb, J. Leibs, L. Lindzey, C. E. Rasmussen, A. D. Stewart, J. W. Burdick, and R. M. Murray. Alice: An information-rich autonomous vehicle for high-speed desert navigation. Journal of Field Robotics, 23(9):777-810, 2006.
[55] Crocus. Systèmes d'Exploitation des Ordinateurs. Dunod, Paris, 1975.
[56] H. de Jong. Modeling and simulation of genetic regulatory systems: A literature review. Journal of Computational Biology, 9:67-103, 2002.
[57] J. P. Den Hartog. Mechanical Vibrations. Dover, New York, 1985. Reprint of 4th ed. from 1956; 1st ed. published in 1934.
[58] L. Desbourough and R. Miller. Increasing customer value of industrial control performance monitoring: Honeywell's experience. Sixth International Conference on Chemical Process Control, AIChE Symposium Series Number 326, Vol. 98, 2002.
[59] Y. Diao, N. Gandhi, J. L. Hellerstein, S.
Parekh, and D. M. Tilbury. Using MIMO feedback control to enforce policies for interrelated metrics with application to the Apache web server. Proceedings of the IEEE/IFIP Network Operations and Management Symposium, pp. 219–234, 2002.
60. E. D. Dickmanns. Dynamic Vision for Perception and Control of Motion. Springer, Berlin, 2007.
61. R. C. Dorf and R. H. Bishop. Modern Control Systems, 10th ed. Prentice Hall, Upper Saddle River, NJ, 2004.
62. F. H. Dost. Grundlagen der Pharmakokinetik. Thieme Verlag, Stuttgart, 1968.
63. J. C. Doyle. Guaranteed margins for LQG regulators. IEEE Transactions on Automatic Control, 23(4):756–757, 1978.
64. J. C. Doyle, B. A. Francis, and A. R. Tannenbaum. Feedback Control Theory. Macmillan, New York, 1992.
65. J. C. Doyle, K. Glover, P. P. Khargonekar, and B. A. Francis. State-space solutions to standard H2 and H∞ control problems. IEEE Transactions on Automatic Control, 34(8):831–847, 1989.
66. L. E. Dubins. On curves of minimal length with a constraint on average curvature, and with prescribed initial and terminal positions and tangents. American Journal of Mathematics, 79:497–516, 1957.
67. F. Dyson. A meeting with Enrico Fermi. Nature, 427(6972):297, 2004.
68. H. El-Samad, J. P. Goff, and M. Khammash. Calcium homeostasis and parturient hypocalcemia: An integral feedback perspective. Journal of Theoretical Biology, 214:17–29, 2002.
69. J. R. Ellis. Vehicle Handling Dynamics. Mechanical Engineering Publications, London, 1994.
70. S. P. Ellner and J. Guckenheimer. Dynamic Models in Biology. Princeton University Press, Princeton, NJ, 2005.
71. M. B. Elowitz and S. Leibler. A synthetic oscillatory network of transcriptional regulators. Nature, 403(6767):335–338, 2000.
72. P. G. Fabietti, V. Canonico, M. O. Federici, M. Benedetti, and E. Sarti. Control oriented model of insulin and glucose dynamics in type 1 diabetes. Medical and Biological Engineering and Computing, 44:66–78, 2006.
73. M. Fliess, J. Levine, P. Martin, and P. Rouchon. On differentially flat nonlinear systems. Comptes Rendus des Séances de l'Académie des Sciences, Serie I, 315:619–624, 1992.
74. M. Fliess, J. Levine, P. Martin, and P. Rouchon. Flatness and
defect of nonlinear systems: Introductory theory and examples. International Journal of Control, 61(6):1327–1361, 1995.
75. J. W. Forrester. Industrial Dynamics. MIT Press, Cambridge, MA, 1961.
76. J. B. J. Fourier. On the propagation of heat in solid bodies. Memoir read before the Class of the Institut de France, 1807.
77. A. Fradkov. Cybernetical Physics: From Control of Chaos to Quantum Control. Springer, Berlin, 2007.
78. B. A. Francis. A Course in H∞ Control. Springer-Verlag, Berlin, 1987.
79. G. F. Franklin, J. D. Powell, and A. Emami-Naeini. Feedback Control of Dynamic Systems, 5th ed. Prentice Hall, Upper Saddle River, NJ, 2005.
80. B. Friedland. Control System Design: An Introduction to State Space Methods. Dover, New York, 2004.
81. M. A. Gardner and J. L. Barnes. Transients in Linear Systems. Wiley, New York, 1942.
82. E. Gilbert. Controllability and observability in multivariable control systems. SIAM Journal of Control, 1(1):128–151, 1963.
83. J. C. Gille, M. J. Pelegrin, and P. Decaulne. Feedback Control Systems: Analysis, Synthesis, and Design. McGraw-Hill, New York, 1959.
84. M. Giobaldi and D. Perrier. Pharmacokinetics, 2nd ed. Marcel Dekker, New York, 1982.
85. K. Godfrey. Compartment Models and Their Application. Academic Press, New York, 1983.
86. H. Goldstein. Classical Mechanics. Addison-Wesley, Cambridge, MA, 1953.
87. S. W. Golomb. Mathematical models: Uses and limitations. Simulation, 41(4):197–198, 1970.
88. G. C. Goodwin, S. F. Graebe, and M. E. Salgado. Control System Design. Prentice Hall, Upper Saddle River, NJ, 2001.
89. D. Graham and D. McRuer. Analysis of Nonlinear Control Systems. Wiley, New York, 1961.
90. M. Green and D. J. N. Limebeer. Linear Robust Control. Prentice Hall, Englewood Cliffs, NJ, 1995.
91. J. Guckenheimer and P. Holmes. Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields. Springer-Verlag, Berlin, 1983.
92. E. A. Guillemin. Theory of Linear Physical Systems. MIT Press, Cambridge, MA, 1963.
93. L. Gunkel and G. F. Franklin. A general solution for linear sampled data systems. IEEE Transactions on Automatic Control, AC-16:767–775, 1971.
94. W. Hahn. Stability of Motion. Springer, Berlin,
1967.
95. D. Hanahan and R. A. Weinberg. The hallmarks of cancer. Cell, 100:57–70, 2000.
96. J. K. Hedrick and T. Batsuen. Invariant properties of automobile suspensions. Proceedings of the Institution of Mechanical Engineers, Vol. 204, pp. 21–27, London, 1990.
97. J. L. Hellerstein, Y. Diao, S. Parekh, and D. M. Tilbury. Feedback Control of Computing Systems. Wiley, New York, 2004.
98. D. V. Herlihy. Bicycle: The History. Yale University Press, New Haven, CT, 2004.
99. M. B. Hoagland and B. Dodson. The Way Life Works. Times Books, New York, 1995.
100. A. L. Hodgkin and A. F. Huxley. A quantitative description of membrane current and its application to conduction and excitation in nerve. Journal of Physiology, 117:500–544, 1952.
101. C. V. Hollot, V. Misra, D. Towsley, and W.-B. Gong. A control theoretic analysis of RED. Proceedings of IEEE Infocom, pp. 1510–1519, 2000.
102. I. M. Horowitz. Synthesis of Feedback Systems. Academic Press, New York, 1963.
103. I. M. Horowitz. Superiority of transfer function over state-variable methods in linear time-invariant feedback system design. IEEE Transactions on Automatic Control, AC-20(1):84–97, 1975.
104. I. M. Horowitz. Survey of quantitative feedback theory. International Journal of Control, 53:255–291, 1991.
105. T. P. Hughes. Elmer Sperry: Inventor and Engineer. Johns Hopkins University Press, Baltimore, MD, 1993.
106. A. Isidori. Nonlinear Control Systems, 3rd ed. Springer-Verlag, Berlin, 1995.
107. M. Ito. Neurophysiological aspects of the cerebellar motor system. International Journal of Neurology, 7:162–178, 1970.
108. V. Jacobson. Congestion avoidance and control. ACM SIGCOMM Computer Communication Review, 25:157–173, 1995.
109. J. A. Jacquez. Compartment Analysis in Biology and Medicine. Elsevier, Amsterdam, 1972.
110. H. James, N. Nichols, and R. Phillips. Theory of Servomechanisms. McGraw-Hill, New York, 1947.
111. P. D. Joseph and J. T. Tou. On linear control theory. Transactions of the AIEE, 80:18, 1961.
112. W. G. Jung, editor. Op Amp Applications. Analog Devices, Norwood, MA, 2002.
113. R. E. Kalman. Contributions to the theory of optimal control. Boletín de la Sociedad Matemática Mexicana, 5:102–119, 1960.
114. R. E.
Kalman. New methods and results in linear prediction and filtering theory. Technical Report 61-1, Research Institute for Advanced Studies (RIAS), Baltimore, MD, February 1961.
115. R. E. Kalman. On the general theory of control systems. Proceedings of the First IFAC Congress on Automatic Control, Moscow, 1960, Vol. 1, pp. 481–492. Butterworths, London, 1961.
116. R. E. Kalman and R. S. Bucy. New results in linear filtering and prediction theory. Transactions of the ASME, Journal of Basic Engineering, 83 D:95–108, 1961.
117. R. E. Kalman, P. L. Falb, and M. A. Arbib. Topics in Mathematical System Theory. McGraw-Hill, New York, 1969.
118. R. E. Kalman, Y. Ho, and K. S. Narendra. Controllability of Linear Dynamical Systems, Vol. 1 of Contributions to Differential Equations. Wiley, New York, 1963.
119. J. Keener and J. Sneyd. Mathematical Physiology. Springer, New York, 2001.
120. F. P. Kelly. Stochastic models of computer communication. Journal of the Royal Statistical Society, B47(3):379–395, 1985.
121. K. Kelly. Out of Control. Addison-Wesley, Reading, MA, 1994. Available at http://www.kk.org/outofcontrol.
122. J. M. Keynes. The General Theory of Employment, Interest and Money. Cambridge University Press, Cambridge, UK, 1936.
123. H. K. Khalil. Nonlinear Systems, 3rd ed. Macmillan, New York, 2001.
124. U. Kiencke and L. Nielsen. Automotive Control Systems: For Engine, Driveline, and Vehicle. Springer, Berlin, 2000.
125. C. Kittel. Introduction to Solid State Physics. Wiley, New York, 1995.
126. L. R. Klein and A. S. Goldberger. An Econometric Model of the United States 1929–1952. North Holland, Amsterdam, 1955.
127. L. Kleinrock. Queuing Systems, Vols. I and II, 2nd ed. Wiley-Interscience, New York, 1975.
128. N. N. Krasovski. Stability of Motion. Stanford University Press, Stanford, CA, 1963.
129. M. Krstic, I. Kanellakopoulos, and P. Kokotovic. Nonlinear and Adaptive Control Design. Wiley, 1995.
130. P. R. Kumar. New technological vistas for systems and control: The example of wireless networks. Control Systems Magazine, 21(1):24–37, 2001.
131. P. R. Kumar and P. Varaiya. Stochastic Systems: Estimation, Identification, and Adaptive Control.
Prentice Hall, Englewood Cliffs, NJ, 1986.
132. P. Kundur. Power System Stability and Control. McGraw-Hill, New York, 1993.
133. B. C. Kuo and F. Golnaraghi. Automatic Control Systems, 8th ed. Wiley, New York, 2002.
134. M. Kurth and E. Welfonder. Oscillation behavior of the enlarged European power system under deregulated energy market conditions. Control Engineering Practice, 13:1525–1536, 2005.
135. J. P. LaSalle. Some extensions of Lyapunov's second method. IRE Transactions on Circuit Theory, CT-7(4):520–527, 1960.
136. A. D. Lewis. A mathematical approach to classical control. Technical report, Queen's University, Kingston, Ontario, 2003.
137. S. H. Low, F. Paganini, and J. C. Doyle. Internet congestion control. IEEE Control Systems Magazine, pp. 28–43, February 2002.
138. S. H. Low, F. Paganini, J. Wang, S. Adlakha, and J. C. Doyle. Dynamics of TCP/RED and a scalable control. Proceedings of IEEE Infocom, pp. 239–248, 2002.
139. K. H. Lundberg. History of analog computing. IEEE Control Systems Magazine, pp. 22–28, March 2005.
140. L. A. MacColl. Fundamental Theory of Servomechanisms. Van Nostrand, Princeton, NJ, 1945. Dover reprint 1968.
141. J. M. Maciejowski. Multivariable Feedback Design. Addison Wesley, Reading, MA, 1989.
142. D. A. MacLulich. Fluctuations in the Numbers of the Varying Hare (Lepus americanus). University of Toronto Press, 1937.
143. A. Makroglou, J. Li, and Y. Kuang. Mathematical models and software tools for the glucose-insulin regulatory system and diabetes: An overview. Applied Numerical Mathematics, 56:559–573, 2006.
144. J. G. Malkin. Theorie der Stabilität einer Bewegung. Oldenbourg, München, 1959.
145. R. Mancini. Op Amps for Everyone. Texas Instruments, Houston, TX, 2002.
146. J. E. Marsden and M. J. Hoffmann. Basic Complex Analysis. W. H. Freeman, New York, 1998.
147. J. E. Marsden and T. S. Ratiu. Introduction to Mechanics and Symmetry. Springer-Verlag, New York, 1994.
148. O. Mayr. The Origins of Feedback Control. MIT Press, Cambridge, MA, 1970.
149. M. W. McFarland, editor. The Papers of Wilbur and Orville Wright. McGraw-Hill, New York, 1953.
150. D. C. McFarlane and K. Glover. Robust Controller Design Using Normalized
Coprime Factor Plant Descriptions. Springer, New York, 1990.
151. H. T. Milhorn. The Application of Control Theory to Physiological Systems. Saunders, Philadelphia, 1966.
152. D. A. Mindel. Between Human and Machine: Feedback, Control, and Computing Before Cybernetics. Johns Hopkins University Press, Baltimore, MD, 2002.
153. D. Möhl, G. Petrucci, L. Thorndahl, and S. van der Meer. Physics and technique of stochastic cooling. Physics Reports, 58(2):73–102, 1980.
154. J. D. Murray. Mathematical Biology, Vols. I and II, 3rd ed. Springer-Verlag, New York, 2004.
155. R. M. Murray, editor. Control in an Information Rich World: Report of the Panel on Future Directions in Control, Dynamics, and Systems. SIAM, Philadelphia, 2003.
156. R. M. Murray, Z. Li, and S. S. Sastry. A Mathematical Introduction to Robotic Manipulation. CRC Press, 1994.
157. P. J. Nahin. Oliver Heaviside: Sage in Solitude. The Life, Work, and Times of an Electrical Genius of the Victorian Age. IEEE Press, New York, 1988.
158. A. O. Nier. Evidence for the existence of an isotope of potassium of mass 40. Physical Review, 48:283–284, 1935.
159. H. Nijmeijer and J. M. Schumacher. Four decades of mathematical system theory. In J. W. Polderman and H. L. Trentelman, editors, The Mathematics of Systems and Control: From Intelligent Control to Behavioral Systems, pp. 73–83. University of Groningen, 1999.
160. H. Nyquist. Regeneration theory. Bell System Technical Journal, 11:126–147, 1932.
161. H. Nyquist. The regeneration theory. In R. Oldenburger, editor, Frequency Response, p. 3. MacMillan, New York, 1956.
162. K. Ogata. Modern Control Engineering, 4th ed. Prentice Hall, Upper Saddle River, NJ, 2001.
163. R. Oldenburger, editor. Frequency Response. MacMillan, New York, 1956.
164. G. Pacini and R. N. Bergman. A computer program to calculate insulin sensitivity and pancreatic responsivity from the frequently sampled intravenous glucose tolerance test. Computer Methods and Programs in Biomedicine, 23:113–122, 1986.
165. G. A. Philbrick. Designing industrial controllers by analog. Electronics, 21(6):108–111, 1948.
166. W. F. Powers and P. R. Nicastri. Automotive vehicle control challenges in
the 21st century. Control Engineering Practice, 8:605–618, 2000.
167. S. Prajna, A. Papachristodoulou, and P. A. Parrilo. SOSTOOLS: Sum of squares optimization toolbox for MATLAB, 2002. Available from http://www.cds.caltech.edu/sostools.
168. D. S. Riggs. The Mathematical Approach to Physiological Problems. MIT Press, Cambridge, MA, 1963.
169. H. H. Rosenbrock and P. D. Moran. Good, bad, or optimal? IEEE Transactions on Automatic Control, AC-16(6):552–554, 1971.
170. F. Rowsone, Jr. What it's like to drive an auto-pilot car. Popular Science Monthly, April 1958. Available at http://www.imperialclub.com/ImFormativeArticles/1958AutoPilot.
171. W. J. Rugh. Linear System Theory, 2nd ed. Prentice Hall, Englewood Cliffs, NJ, 1995.
172. E. B. Saff and A. D. Snider. Fundamentals of Complex Analysis with Applications to Engineering, Science, and Mathematics. Prentice Hall, Englewood Cliffs, NJ, 2002.
173. D. Sarid. Atomic Force Microscopy. Oxford University Press, Oxford, UK, 1991.
174. S. Sastry. Nonlinear Systems. Springer, New York, 1999.
175. G. Schitter. High performance feedback for fast scanning atomic force microscopes. Review of Scientific Instruments, 72(8):3320–3327, 2001.
176. G. Schitter, K. J. Åström, B. DeMartini, P. J. Thurner, K. L. Turner, and P. K. Hansma. Design and modeling of a high-speed AFM scanner. IEEE Transactions on Control System Technology, 15(5):906–915, 2007.
177. M. Schwartz. Telecommunication Networks. Addison Wesley, Reading, MA, 1987.
178. D. E. Seborg, T. F. Edgar, and D. A. Mellichamp. Process Dynamics and Control, 2nd ed. Wiley, Hoboken, NJ, 2004.
179. S. D. Senturia. Microsystem Design. Kluwer, Boston, MA, 2001.
180. F. G. Shinskey. Process-Control Systems: Application, Design, and Tuning, 4th ed. McGraw-Hill, New York, 1996.
181. S. Skogestad and I. Postlethwaite. Multivariable Feedback Control, 2nd ed. Wiley, Hoboken, NJ, 2005.
182. E. P. Sontag. Mathematical Control Theory: Deterministic Finite Dimensional Systems, 2nd ed. Springer, New York, 1998.
183. M. W. Spong and M. Vidyasagar. Dynamics and Control of Robot Manipulators. John Wiley, 1989.
184. L. Stark. Neurological Control Systems: Studies in Bioengineering. Plenum Press, New York, 1968.
185. G. Stein. Respect the unstable. Control Systems Magazine, 23(4):12–25, 2003.
186. J. Stewart. Calculus: Early Transcendentals. Brooks Cole, Pacific Grove, CA, 2002.
187. G. Strang. Linear Algebra and Its Applications, 3rd ed. Harcourt Brace Jovanovich, San Diego, 1988.
188. S. H. Strogatz. Nonlinear Dynamics and Chaos, with Applications to Physics, Biology, Chemistry, and Engineering. Addison-Wesley, Reading, MA, 1994.
189. A. S. Tannenbaum. Computer Networks, 3rd ed. Prentice Hall, Upper Saddle River, NJ, 1996.
190. T. Teorell. Kinetics of distribution of substances administered to the body, I and II. Archives Internationales de Pharmacodynamie et de Therapie, 57:205–240, 1937.
191. G. T. Thaler. Automatic Control Systems. West Publishing, St. Paul, MN, 1989.
192. M. Tiller. Introduction to Physical Modeling with Modelica. Springer, Berlin, 2001.
193. D. Tipper and M. K. Sundareshan. Numerical methods for modeling computer networks under nonstationary conditions. IEEE Journal of Selected Areas in Communications, 8(9):1682–1695, 1990.
194. J. G. Truxal. Automatic Feedback Control System Synthesis. McGraw-Hill, New York, 1955.
195. H. S. Tsien. Engineering Cybernetics. McGraw-Hill, New York, 1954.
196. G. Vinnicombe. Uncertainty and Feedback: H∞ Loop-Shaping and the ν-Gap Metric. Imperial College Press, London, 2001.
197. F. J. W. Whipple. The stability of the motion of a bicycle. Quarterly Journal of Pure and Applied Mathematics, 30:312–348, 1899.
198. D. V. Widder. Laplace Transforms. Princeton University Press, Princeton, NJ, 1941.
199. E. P. M. Widmark and J. Tandberg. Über die Bedingungen für die Akkumulation indifferenter Narkotika. Biochemische Zeitung, 148:358–389, 1924.
200. N. Wiener. Cybernetics: Or Control and Communication in the Animal and the Machine. Wiley, 1948.
201. S. Wiggins. Introduction to Applied Nonlinear Dynamical Systems and Chaos. Springer-Verlag, Berlin, 1990.
202. D. G. Wilson. Bicycling Science, 3rd ed. MIT Press, Cambridge, MA, 2004. With contributions by Jim Papadopoulos.
203. H. R. Wilson. Spikes, Decisions, and Actions: The Dynamical Foundations of Neuroscience. Oxford University Press,
Oxford, UK, 1999.
204. K. A. Wise. Guidance and control for military systems: Future challenges. AIAA Conference on Guidance, Navigation, and Control, 2007. AIAA Paper 2007-6867.
205. S. Yamamoto and I. Hashimoto. Present status and future needs: The view from Japanese industry. In Y. Arkun and W. H. Ray, editors, Chemical Process Control: CPC IV, 1991.
206. T.-M. Yi, Y. Huang, M. I. Simon, and J. Doyle. Robust perfect adaptation in bacterial chemotaxis through integral feedback control. PNAS, 97:4649–4653, 2000.
207. L. A. Zadeh and C. A. Desoer. Linear System Theory: The State Space Approach. McGraw-Hill, New York, 1963.
208. G. Zames. Feedback and optimal sensitivity: Model reference transformations, multiplicative seminorms, and approximative inverse. IEEE Transactions on Automatic Control, AC-26(2):301–320, 1981.
209. K. Zhou, J. C. Doyle, and K. Glover. Robust and Optimal Control. Prentice Hall, Englewood Cliffs, NJ, 1996.
210. J. G. Ziegler and N. B. Nichols. Optimum settings for automatic controllers. Transactions of the ASME, 64:759–768, 1942.

Index

access control, see admission control
acknowledgment (ack) packet, 77–79
activator, 16, 59, 129
active filter, 154, see also operational amplifier
actuators, 4, 31, 51, 65, 81, 178, 224, 265, 284, 311, 324, 333–335, 337
  effect on zeros, 284, 334
  in computing systems, 75
  saturation, 50, 225, 300, 306–307, 311, 324
A/D converters, see analog-to-digital converters
adaptation, 297
adaptive control, 20, 373, 374
additive uncertainty, 349, 353, 356, 376
admission control, 54, 63, 78, 79, 274
advertising, 15
aerospace systems, 8–9, 18, 338, see also vectored thrust aircraft; X-29 aircraft
aircraft, see flight control
alcohol, metabolism of, 93
algebraic loops, 211, 249–250
aliasing, 225
all-pass transfer function, 331
alternating current (AC), 7, 155
amplifier, see operational amplifier
amplitude ratio, see gain
analog computing, 51, 71, 250, 309
analog implementation, controllers, 74, 263, 309–311
analog-to-digital converters, 4, 82, 224, 225, 311
analytic function, 236
anticipation, in controllers, 6, 24, 296, see also derivative action
antiresonance, 156
anti-windup compensation, 306–307, 311, 312, 314
Apache
  web server, 76, see also web server control
apparent volume of distribution, 86, 93
Arbib, M. A., 167
argument, of a complex number, 250
arrival rate, queuing systems, 55
artificial intelligence (AI), 1–2, 20
asymptotes, in Bode plot, 253, 254
asymptotic stability, 42, 102–106, 112, 114, 117, 118, 120, 140
  discrete-time systems, 165
atmospheric dynamics, see environmental science
atomic force microscopes, 3, 51, 81–84
  contact mode, 81, 156
  horizontal positioning, 282, 366
  system identification, 257
  tapping mode, 81, 290, 299, 304, 328
  with preloading, 93
attractor (equilibrium point), 104
automatic reset, in PID control, 296
automatic tuning, 306, 373
automotive control systems, 6, 21, 51, 69, see also cruise control; vehicle steering
autonomous differential equation, 29, see also time-invariant systems
autonomous vehicles, 8, 20–21
autopilot, 6, 19
balance systems, 35–37, 49, 170, 188, 241, 334, see also cart-pendulum system; inverted pendulum
band-pass filter, 154, 155, 255, 256
bandwidth, 155, 186, 322, 333
Bell Labs, 18, 290
Bennett, S., 25, 290, 312
bicycle dynamics, 69–71, 91, 123, 226
  Whipple model, 71
bicycle model, for vehicle steering, 51–53
bifurcations, 121–124, 130, see also root locus plots
biological circuits, 16, 45, 58–60, 129, 166, 256
  genetic switch, 64, 114
  repressilator, 59–60
biological systems, 1–3, 10, 15–16, 22, 25, 58–61, 126, 293, 297, see also biological circuits; drug administration; neural systems; population dynamics
bistability, 22, 117
Black, H. S., 18, 20, 71, 73, 131, 267, 290, 347
block diagonal systems, 106, 129, 139, 145, 149, 212
block diagram algebra, 242, 245, 356
block diagrams, 1, 44–47, 238, 242–247, 249
  control system, 4, 229, 244, 315
  Kalman decomposition, 223
  observable canonical form, 205
  observer, 202, 210
  observer-based control system, 213
  PID controllers, 293, 296, 311
  reachable canonical form, 172
  two degree-of-freedom controller, 219, 316, 358
  Youla parameterization, 357
Bode, H., 229, 290, 343, 374
Bode plots, 250–257, 283
  asymptotic approximation, 253, 254, 264
  low-, band-, and high-pass filters, 256
  nonminimum phase systems, 284
  of rational function, 251
  sketching, 254
Bode's ideal loop transfer function,
  356, 375
Bode's integral formula, 335–340
Bode's relations, 283, 326
Brahe, T., 28
breakpoint, 253, 272
Brockett, R. W., xii, 1, 163
Bryson, A. E., 200
bumpless transfer, 373
Bush, V., 312
calibration, versus feedback, 10, 180, 195, 197
Cannon, R. H., 61, 131
capacitor, transfer function for, 236
car, see automotive control systems
carrying capacity, in population models, 90
cart-pendulum system, 36, 172, see also balance systems
causal reasoning, 1, 70
Cayley-Hamilton theorem, 170, 199, 203
center (equilibrium point), 104
centrifugal governor, 2, 3, 6, 17
chain of integrators (normal form), 61, 173
characteristic polynomial, 105, 199, 235, 240, 263, 264
  for closed loop transfer function, 268
  observable canonical form, 205
  output feedback controller, 212, 213
  reachable canonical form, 173, 175, 179, 198
chemical systems, 9, 293, see also process control; compartment models
chordal distance, 351
Chrysler autopilot, 6
circuits, see biological circuits; electrical circuits
classical control, xi, 374
closed loop, 1, 2, 4, 6, 162, 176, 183, 267, 268, 287, 315
  versus open loop, 2, 269, 288, 315
command signals, 4, 22, 220, 293, see also reference signal; setpoint
compartment models, 85–89, 106, 151, 186, 203, 208, 227
  exercises, 164
compensator, see control law
complementary sensitivity function, 317, 325, 336, 350, 354, 356, 360, 365, 369, 374
complexity, of control systems, 9, 21, 298
computed torque, 163
computer implementation, controllers, 224–226, 311–312
computer science, relationship to control, 5
computer systems, control of, 12–14, 25, 39, 56, 57, 75–80, 157, see also queuing systems
conditional integration, 314
conditional stability, 275
congestion control, 12, 77–80, 104, 273, 292, 313, see also queuing systems
  router dynamics, 92
consensus, 57
control
  definition of, 3–5
  early examples, 2, 5, 6, 8, 11, 18, 21, 25, 296
  fundamental limitations, 283, 331–340, 343, 363, 366, 373–374
  history of, 25, 312
  modeling for, 5, 31–32, 61, 347
  successes of, 8, 25
  system, 3, 175, 213, 219, 224, 229, 316, 318, 358
  using estimated state, 211–214, 370
control error, 23, 244, 294
control law, 4, 23, 24, 162, 176, 179, 244
control Lyapunov function, 124
control matrix, 34, 38
control
  signal, 31, 157, 293
controllability, 197, see also reachability
controlled differential equation, 29, 34, 235
convolution equation, 145–147, 149, 150, 170, 261
  discrete-time, 165
coordinate transformations, 106, 147–149, 173, 226, 234–235
  to Jordan form, 139
  to observable canonical form, 206
  to reachable canonical form, 174, 175
Coriolis forces, 36, 163
corner frequency, 253
correlation matrix, 215, 216
cost function, 190
coupled spring-mass system, 142, 144, 148
covariance matrix, 215
critical gain, 303, 305
critical period, 303, 305
critical point, 271, 273, 279, 280, 289, 290, 303, 352, 353, 372
critically damped oscillator, 184
crossover frequency, see gain crossover frequency; phase crossover frequency
crossover frequency inequality, see gain crossover frequency inequality
cruise control, 6, 17–18, 65–69
  Chrysler autopilot, 6
  control design, 196, 300, 309
  feedback linearization, 161
  integrator windup, 306, 307
  linearization, 158
  pole-zero cancellation, 248
  robustness, 17, 347, 348, 354
Curtiss seaplane, 19
cybernetics, 11, see also robotics
D/A converters, see digital-to-analog converters
damped frequency, 184
damping, 28, 36, 41, 96, 265, 266
damping ratio, 184, 185, 187, 188, 300
DARPA Grand Challenge, 20, 21
DC gain, 155, see also zero frequency gain
dead zone, 23
decision making, higher levels of, 8, 12, 20
delay, see time delay
delay compensation, 292, 375
delay margin, 281
delta function, see impulse function
derivative action, 24, 25, 293, 296–298, 310, 330
  filtering, 297, 308, 311, 312
  setpoint weighting, 309, 312
  time constant, 294
  versus lead compensator, 330
describing functions, 288–290
design of dynamics, 18–20, 109, 124–125, 131, 167, 177, 182
diabetes, see insulin-glucose dynamics
diagonal systems, 105, 139
  Kalman decomposition for, 222
  transforming to, 106, 129, 138
Dickmanns, E., 20
difference equations, 34, 37–41, 61, 157, 224, 312
differential algebraic equations, 33, see also algebraic loops
differential equations, 28, 34–37, 95–98
  controlled, 29, 133, 235
  equilibrium points, 100–101
  existence and uniqueness of solutions, 96–98
  first-order, 32, 298
  isolated solution, 101
  periodic solutions, 101–102,
  109
  qualitative analysis, 98–102
  second-order, 99, 183, 298
  solutions, 95, 96, 133, 137, 145, 263
  stability, see stability
  transfer functions for, 236
differential flatness, 221
digital control systems, see computer implementation, controllers
digital-to-analog converters, 4, 82, 224, 225, 311
dimension-free variables, 48, 61
direct term, 34, 38, 147, 211, 250
discrete control, 56
discrete-time systems, 38, 61, 128, 157, 165, 311
  Kalman filter for, 215
  linear quadratic regulator for, 192
disk drives, 64
disturbance attenuation, 4, 176, 323–324, 358–359
  design of controllers for, 319, 320, 326, 336, 345, 369
  fundamental limits, 336
  in biological systems, 257, 297
  integral gain as a measure of, 296, 324, 359
  relationship to sensitivity function, 323, 335, 345, 358
disturbance weighting, 372
disturbances, 4, 29, 32, 244, 248, 315, 318, 319
  generalized, 371
  random, 215
Dodson, B., 1
dominant eigenvalues (poles), 187, 300, 301
double integrator, 137, 168, 236
Doyle, J. C., xii, 343, 374
drug administration, 84–89, 93, 151, 186, see also compartment models
duality, 207, 211
Dubins car, 53
dynamic compensator, 196, 213
dynamic inversion, 163
dynamical systems, 1, 27, 95, 98, 126
  linear, 104, 131
  observer as a, 201
  state of, 175
  stochastic, 215
  uncertainty in, 347–349
  see also differential equations
dynamics matrix, 34, 38, 105, 142
Dyson, F., 27
e-commerce, 13
email server, control of, 39, 157
economic systems, 14–15, 22, 62
ecosystems, 16–17, 89, 181, see also predator-prey system
eigenvalue assignment, 176, 178, 180–182, 188, 212, 300, 313
  by output feedback, 213
  for observer design, 208
eigenvalues, 105, 114, 123, 142, 232
  and Jordan form, 139–141, 165
  distinct, 128, 129, 138, 144, 222
  dominant, 187
  effect on dynamic behavior, 183, 185–187, 233
  for discrete-time systems, 165
  invariance under coordinate transformation, 106
  relationship to modes, 142–145
  relationship to poles, 239
  relationship to stability, 117, 140, 141
eigenvectors, 106, 129, 142, 143
  relationship to mode shape, 143
electric power, see power systems, electric
electrical circuits, 33, 45, 74, 131, 236, see also operational amplifier
electrical engineering, 6–7, 29–31, 155, 275
elephant, modeling of an, 27
Elowitz, M. B., 59
encirclement, 271, see also Nyquist criterion
entertainment robots, 11, 12
environmental science, 3, 9, 17
equilibrium points, 90, 100, 105, 132, 159, 168
  bifurcations of, 121
  discrete time, 62
  for closed loop system, 176, 195
  for planar systems, 104
  region of attraction, 119–121, 128
  stability, 102
error feedback, 5, 293, 294, 309, 317
estimators, see observers
Euler integration, 41, 42
exponential signals, 230–235, 239, 250
extended Kalman filter, 220
F/A-18 aircraft, 8
Falb, P. L., 167
feedback, 1–3
  as technology enabler, 3, 19
  drawbacks of, 3, 21, 308, 352, 359
  in biological systems, 1–3, 15–16, 25, 297, see also biological circuits
  in engineered systems, see control
  in financial systems, 3
  in nature, 3, 15–17, 89
  positive, see positive feedback
  properties, 3, 5, 17–22, 315, 320, 347
  robustness through, 17
  versus feedforward, 22, 296, 320
feedback connection, 243, 287, 288
feedback controller, 244, 315
feedback linearization, 161–163
feedback loop, 4, 267, 315, 358
feedback uncertainty, 349, 356
feedforward, 22, 219–222, 244, 315, 319, 321
Fermi, E., 27
filters
  active, 154
  for disturbance weighting, 373
  for measurement signals, 21, 225, 359
  see also band-pass filters; high-pass filters; low-pass filters
financial systems, see economic systems
finite escape time, 97
finite state machine, 69, 76
first-order systems, 134, 165, 236, 252, 253
fisheries management, 94
flatness, see differential flatness
flight control, 8, 18, 19, 52, 163
  airspace management, 9
  F/A-18 aircraft, 8
  X-29 aircraft, 336
  X-45 aircraft, 8
  see also vectored thrust aircraft
flow, of a vector field, 29, 99
flow in a tank, 126
flow model, queuing systems, 54, 292, 313
flyball governor, see centrifugal governor
force feedback, 10, 11
forced response, 133, 231
Forrester, J. W., 15
Fourier, J. B. J., 61, 262
frequency domain, 229–231, 267, 285, 315
frequency response, 30, 43, 44, 152–157, 230, 290, 303, 322
  relationship to Bode plot, 250
  relationship to Nyquist plot, 270, 272
  second-order systems, 185, 256
  system identification using, 257
fully actuated systems, 240
fundamental limits, see control, fundamental limitations
Furuta pendulum,
  130
gain, 24, 43, 72, 153, 154, 186, 230, 234, 239, 250, 279, 285–288, 347
  H∞, 286, 287, 371
  observer, see observer gain
  of a system, 285
  reference, 195
  state feedback, 176, 177, 180, 195, 197
gain crossover frequency, 279, 280, 322, 327, 332, 351, 365
gain crossover frequency inequality, 332, 334
gain curve (Bode plot), 250–254, 283, 327
gain margin, 279–281
  from Bode plot, 280
  reasonable values, 281
gain scheduling, 220, 373
gain-bandwidth product, 74, 237, 361
Gang of Four, 317, 344, 358
Gang of Six, 317, 322
gene regulation, 16, 58, 59, 166, 256
genetic switch, 64, 114, 115
global behavior, 103, 120–124
Glover, K., 343, 374
glucose regulation, see insulin-glucose dynamics
Golomb, S., 65
governor, see centrifugal governor
H∞ control, 371–374, 376
Harrier AV-8B aircraft, 53
heat propagation, 238
Heaviside, O., 163
Heaviside step function, 150, 163
Hellerstein, J. L., 13, 25, 80
high-frequency roll-off, 326, 359, 366
high-pass filter, 255, 256
Hill function, 58
Hoagland, M. B., 1
Hodgkin-Huxley equations, 60
homeostasis, 3, 58
homogeneous solution, 133, 136, 137, 239
Honeywell thermostat, 6
Horowitz, I. M., 226, 343, 369, 374
human-machine interface, 65, 69
hysteresis, 23, 289
identification, see system identification
impedance, 236, 309
implementation, controllers, see analog implementation; computer implementation
impulse function, 146, 164, 169
impulse response, 135, 146, 147, 261
inductor, transfer function for, 236
inertia matrix, 36, 163
infinity norm, 286, 372
information systems, 12, 54–58, see also congestion control; web server control
initial condition, 96, 99, 102, 132, 137, 144, 215
initial condition response, 133, 136–139, 142, 144, 147, 231
initial value problem, 96
inner loop control, 340, 342
input sensitivity function, see load sensitivity function
input/output models, 5, 29, 31, 132, 145–158, 229, 286, see also frequency response; steady-state response; step response
  and transfer functions, 261
  and uncertainty, 51, 349
  from experiments, 257
  relationship to state space models, 32, 95, 146
  steady-state response, 149
  transfer function for, 235
inputs, 29, 32
insect
  flight control, 46–47
instrumentation, 10–11, 71
insulin-glucose dynamics, 2, 87–89
integral action, 24–26, 195–198, 293, 295–296, 298, 324
  for bias compensation, 226
  setpoint weighting, 309, 312
  time constant, 294
integral gain, 24, 294, 296, 299
integrator windup, 225, 306–307, 314
  conditional integration, 314
intelligent machines, see robotics
internal model principle, 214, 221
Internet, 12, 13, 75, 77, 80, 92, see also congestion control
Internet Protocol (IP), 77
invariant set, 118, 121
inverse model, 162, 219, 320
inverse response, 284, 292
inverted pendulum, 37, 69, 100, 107, 118, 121, 128, 130, 276, 337, see also balance systems
Jacobian linearization, 159–161
Jordan form, 139–142, 164, 188
Kalman, R. E., 167, 197, 201, 223, 226
Kalman decomposition, 222–224, 235, 262, 264
Kalman filter, 215–218, 226, 370
  extended, 220
Kalman-Bucy filter, 217
Kelly, F. P., 80
Kepler, J., 28
Keynes, J. M., 14
Keynesian economic model, 62, 165
Krasovski-Lasalle principle, 118
LabVIEW, 123, 164
lag, see phase lag
lag compensation, 326–328
Laplace transforms, xi, 259–262
Laplacian matrix, 58
Lasalle's invariance principle, see Krasovski-Lasalle principle
lead, see phase lead
lead compensation, 327–330, 341, 345
functions 111114 120 127 164 design of controllers using 118 124 existence of 113 Lyapunov stability analysis 43 110120 126 discrete time 128 manifold 120 margins see stability margins Mars Exploratory Rovers 11 mass spectrometer 11 materials science 9 Mathematica 41 123 164 MATLAB 26 41 123 164 200 acker 181 211 dlqe 216 dlqr 194 hinfsyn 372 jordan 139 linmod 160 lqr 191 place 181 189 211 trim 160 matrix exponential 136139 143 145 163 164 coordinate transformations 148 Jordan form 140 secondorder systems 138 164 maximum complementary sensitivity 354 365 maximum sensitivity 323 352 366 measured signals 31 32 34 95 201 213 225 316 318 371 measurement noise 4 21 201 203 215 217 244 308 315317 326 359 response to 324326 359 mechanical systems 31 35 42 51 61 163 mechanics 2829 31 126 131 minimal model insulinglucose 88 89 see also insulinglucose dynamics minimum phase 283 290 331 modal form 130 145 149 Modelica 33 modeling 5 2733 61 65 control perspective 31 discrete control 56 discretetime 3738 157158 frequency domain 229231 from experiments 4748 model reduction 5 normalization and scaling 48 of uncertainty 5051 simplified models use of 32 298 348 354 355 software for 33 160 163 state space 3443 uncertainty see uncertainty modes 142144 239 relationship to poles 241 motion control systems 5154 226 motors electric 64 199 227 228 multiinput multioutput systems 286 318 327 see also inputoutput models multiplicative uncertainty 349 356 nanopositioner AFM 282 366 natural frequency 184 300 negative definite function 111 negative feedback 18 22 73 176 267 297 Nernsts law 60 networking 12 45 80 see also congestion control neural systems 11 47 60 297 neutral stability 102104 Newton I 28 Nichols N B 163 302 343 Nichols chart 369 370 Nobel Prize 11 14 61 81 noise see disturbances measurement noise noise attenuation 257 324326 noise cancellation 124 noise sensitivity function 317 nonlinear systems 31 95 98 101 108 110 114 120125 202 220 286288 INDEX 393 linear approximation 109 
117 159 165 347 system identification 62 nonminimum phase 283 284 292 331333 see also inverse response nonunique solutions ODEs 97 normalized coordinates 4850 63 161 norms 285286 Nyquist H 267 290 Nyquist criterion 271 273 276 278 287 288 303 for robust stability 352 376 Nyquist D contour 270 276 Nyquist plot 270271 279 303 324 370 observability 32 201202 222 226 rank condition 203 tests for 202203 unobservable systems 204 222223 265 observability matrix 203 205 226 observable canonical form 204 205 226 observer gain 207 209211 213 215217 observers 201 206209 217 220 block diagram 202 210 see also Kalman filter ODEs see differential equations Ohms law 60 73 236 onoff control 23 open loop 1 2 72 168 245 267 306 315 323 349 open loop gain 237 279 322 operational amplifiers 7175 237 309 356 circuits 92 154 268 360 dynamic model 74 237 inputoutput characteristics 72 oscillator using 92 128 static model 72 237 optimal control 190 215 217 370 order of a system 34 235 ordinary differential equations see differential equations oscillator dynamics 92 96 97 138 184 233 236 normal form 63 see also nanopositioner AFM springmass system outer loop control 340342 output feedback 211 212 226 see also control using estimated state loop shaping PID control output sensitivity function see noise sensitivity function outputs see measured signals overdamped oscillator 184 overshoot 151 176 185 322 Padé approximation 292 332 paging control computing 56 parallel connection 243 parametric stability diagram 122 123 parametric uncertainty 50 347 particle accelerator 11 particular solution 133 152 see also forced response passive systems 288 336 passivity theorem 288 patch clamp 11 PD control 296 328 peak frequency 156 322 pendulum dynamics 113 see also inverted pendulum perfect adaptation 297 performance 76 performance limitations 331 336 365 373 due to right halfplane poles and zeros 283 see also control fundamental limitations performance specifications 151 175 315 322327 358 see also 
overshoot maximum sensitivity resonant peak rise time settling time periodic solutions see differential equations limit cycles persistence of a web connection 76 77 Petri net 45 pharmacokinetics 85 89 see also drug administration phase 43 153 154 186 230 234 250 288 see also minimum phase nonminimum phase minimum vs nonminimum 283 phase crossover frequency 279 280 phase curve Bode plot 250252 254 relationship to gain curve 283 326 phase lag 153 154 256 283 332 333 phase lead 153 256 330 345 phase margin 279 280 326 329 332 346 375 from Bode plot 280 reasonable values 281 phase portrait 28 29 98100 120 Philbrick G A 75 photoreceptors 297 physics relationship to control 5 PI control 17 24 65 68 296 301 327 328 firstorder system 300 364 PID control 2324 235 293313 330 block diagram 294 296 308 computer implementation 311 ideal form 293 313 implementation 296 308312 in biological systems 297 op amp implementation 309311 tuning 302306 see also derivative action integral action pitchfork bifurcation 130 planar dynamical systems 99 104 see also secondorder 394 INDEX systems pole placement 176 361 365366 see also eigenvalue assignment robust 361 pole zero diagram 240 polezero cancellations 247249 265 365 366 poles 239 241 dominant 301 see also dominant eigenvalues poles fast stable 364 366 pure imaginary 270 276 relationship to eigenvalues 239 right halfplane 241 276 283 331 333334 336 345 366 population dynamics 8991 94 see also predatorprey system positive definite function 111 112 114 118 positive definite matrix 114 191 positive feedback 16 2122 129 296 positive real transfer function 336 power of a matrix 136 power systems electric 67 63 101 127 predatorprey system 38 9091 121 181 prediction in controllers 24 220 296 375 see also derivative action prediction time 297 principle of the argument see variation of the argument principle of process control 9 10 13 45 proportional control 23 24 293 see also PID control proportional integral derivative control see PID control 
protocol see congestion control consensus pulse signal 146 147 187 see also impulse function pupil response 258 297 pure exponential response 232 Qvalue 63 186 254 quantitative feedback theory QFT 369 quarter car model 265 queuing systems 5456 63 random process 54 215 228 reachability 32 167175 197 222 rank condition 170 tests for 169 unreachable systems 171 199 222223 265 reachability matrix 169 173 reachable canonical form 35 172175 178 180 198 reachable set 167 realtime systems 5 reference signal 23 175 176 229 244 293 309 317 319 see also command signals setpoint effect on observer error 212 219 224 response to 322 344 tracking 175 219 220 326 360 reference weighting see setpoint weighting region of attraction see equilibrium points regions of attraction regulator see control law relay feedback 289 305 Reno protocol see Internet congestion control repressilator 5960 repressor 16 59 64 114 166 257 reset in PID control 295 296 resonant frequency 186 286 resonant peak 156 186 322 355 resource usage in computing systems 13 55 57 75 76 response see inputoutput models retina 297 see also pupil response Riccati equation 191 217 372 374 Riemann sphere 351 right halfplane poles and zeros see poles right halfplane zeros right halfplane rise time 151 165 176 185 322 robotics 8 1112 163 robustness 1618 322 349 374 performance 358361 369374 stability 352358 using gain and phase margin 281 326 using maximum sensitivity 323 326 353 375 376 using pole placement 361368 via gain and phase margin 280 see also uncertainty rolloff see highfrequency rolloff root locus diagram 123 RouthHurwitz criterion 130 rushhour effect 56 64 saddle equilibrium point 104 sampling 157 224 225 311 saturation function 45 72 311 see also actuators saturation scaling see normalized coordinates scanning tunneling microscope 11 81 schematic diagrams 44 45 71 Schitter G 83 84 secondorder systems 28 164 183187 200 253 301 Segway Personal Transporter 35 170 selfactivation 129 selfrepression 166 256 
semidefinite function 111 sensitivity crossover frequency 324 sensitivity function 317 324 325 327 336 352 360 INDEX 395 366 and disturbance attenuation 323 336 345 sensor matrix 34 38 sensor networks 57 sensors 3 4 9 202 224 284 311 315 318 333 334 371 effect on zeros 284 334 in computing systems 75 see also measured signals separation principle 201 213 series connection 242 243 service rate queuing systems 55 setpoint 293 setpoint weighting 309 312 settling time 151 165 176 185 322 similarity of two systems 349352 simulation 4042 51 SIMULINK 160 singleinput singleoutput SISO systems 95 132 133 159 204 286 singular values 286 287 376 sink equilibrium point 104 small gain theorem 287288 355 Smith predictor 375 software tools for control x solution ODE see differential equations solutions Sony AIBO 11 12 source equilibrium point 104 spectrum analyzer 257 Sperry autopilot 19 springmass system 28 40 42 43 82 127 coupled 144 148 generalized 35 71 identification 47 normalization 49 63 see also oscillator dynamics stability 3 5 18 19 42 98 102120 asymptotic stability 102 106 conditional 275 in the sense of Lyapunov 102 local versus global 103 110 120 121 Lyapunov analysis see Lyapunov stability analysis neutrally stable 102 104 of a system 105 of equilibrium points 42 102 104 111 117 of feedback loop see Nyquist criterion of limit cycles 109 of linear systems 104107 113 140 of solutions 102 110 of transfer functions 240 robust see robust stability unstable solutions 103 using eigenvalues 117 140 141 using linear approximation 107 117 160 using RouthHurwitz criterion 130 using state feedback 175194 see also bifurcations equilibrium points stability diagram see parametric stability diagram stability margin quantity 280 281 323 346 353 372 reasonable values 281 stability margins concept 278282 291 326 stable pole 241 stable zero 241 Stark L 258 state of a dynamical system 28 31 34 state estimators see observers state feedback 167197 207 212 219221 224226 362 370 see also 
eigenvalue assignment linear quadratic control state space 28 3443 175 state vector 34 steadystate gain see zero frequency gain steadystate response 26 42 149157 165 176 185 230 231 233 257 262 steam engines 2 17 steering see vehicle steering Stein G xii 1 315 337 step input 30 135 150 239 302 step response 30 31 47 48 135 147 150 151 165 176 184 185 302 stochastic cooling 11 stochastic systems 215 217 summing junction 45 superposition 30 133 147 164 230 supervisory control see decision making higher levels of supply chains 14 15 supremum sup 286 switching behavior 22 64 117 373 system identification 47 62 257 tapping mode see atomic force microscope TCPIP see Internet congestion control Teorell T 85 89 thermostat 5 6 threeterm controllers 293 see also PID control thrust vectored aircraft see vectored thrust aircraft time constant firstorder system 165 time delay 5 13 235 236 281 283 302 311 332334 compensation for 375 Padé approximation 292 332 time plot 28 timeinvariant systems 30 34 126 134135 tracking see reference signal tracking trail bicycle dynamics 70 396 INDEX transcriptional regulation see gene regulation transfer functions 229262 by inspection 235 derivation using exponential signals 231 derivation using Laplace transforms 261 for control systems 244 264 for electrical circuits 236 for time delay 235 frequency response 230 250 from experiments 257 irrational 236 239 linear inputoutput systems 231 235 264 transient response 42 149 151 153 168 188 231 232 Transmission Control Protocol TCP 77 transportation systems 8 Tsien H S 11 tuning rules 314 see ZieglerNichols tuning two degreeoffreedom control 219 294 319 321 343 344 uncertainty 4 1718 32 5051 195 347352 component or parameter variation 4 50 347 disturbances and noise 4 32 175 244 315 unmodeled dynamics 4 50 348 353 see also additive uncertainty feedback uncertainty multiplicative uncertainty uncertainty band 50 uncertainty lemon 50 51 68 74 84 underdamped oscillator 97 184 185 unit step 150 
unmodeled dynamics see uncertainty unmodeled dynamics unstable pole see poles right halfplane unstable polezero cancellation 248 unstable solution for a dynamical system 103 104 106 141 241 unstable zero see zeros right halfplane variation of the argument principle of 277 290 vector field 29 99 vectored thrust aircraft 5354 141 191 217 264 329 340 vehicle steering 5153 160 177 209 214 221 245 284 291 321 362 ship dynamics 51 vehicle suspension 265 see also coupled springmass system vertical takeoff and landing see vectored thrust aircraft vibration absorber 266 Vinnicombe G 343 351 374 Vinnicombe metric 349352 372 voltage clamp 10 61 waterbed effect 336 337 Watt governor see centrifugal governor Watt steam engine 3 17 web server control 7577 192 web site companion x Whipple F J W 71 Wiener N 11 12 winding number 277 window size TCP 78 80 104 windup see integrator windup Wright W 18 Wright Flyer 8 19 X29 aircraft 336 X45 aircraft 8 Youla parameterization 356358 zero frequency gain 155 177 180 186 239 zeros 239 Bode plot for 264 effect of sensors and actuators on 284 285 334 for a state space system 240 right halfplane 241 283 331334 336 345 365 signalblocking property 239 slow stable 362 363 365 Ziegler J G 302 312 ZieglerNichols tuning 302305 312 frequency response 303 improved method 303 step response 302 StateSpace Modeling and Analysis of Bicycle Dynamics Vedant Chopra November 24 2020 ECE 5115 Controls System Lab II Overview 2 Bicycle Model StateSpace Model SS Model Analysis Controller Design and Analysis 3 Bicycle Model Steering Angle Input Steering Angle Output Angular Acceleration Bicycle Model Analysis to SS Model Conversion 4 We can now substitute force into equations Add Values to Coefficients Researched values to substitute into statespace model Will represent an average bike with a average biker 6 7 SS Model Analysis Simulink By hand analysis would be arduous and timeconsuming Setup syms u x1 x2 x3 x1p x2p x3p car 150 caf 150 m 816 vlon 44 lr 035 lf 
0625 iz 30105 eq1 x1pcar cafmvlonx1 carlr caflfmvlonx3 vlonx3cafmu0 eq2 x3x2p Preliminary MATLAB Code eq3 x3plrcar lfcafizvlonx1lf2caf lr2carizvlonx3cafizlfu0 A car cafmvlon 0 carlr caflfmvlonvlon 0 0 1lrcar lfcafizvlon 0 lf2caf lr2carizvlon B cafm 0 cafizlf C 0 0 1 D 0 sysssABCD 8 Check for stability eigenvalues e eigA 0 83556 0 Check for observability and controllability Mo obsvAC Mc ctrbAB Check for number of unobservable and uncontrollable states uobs lengthA rankMo 1 so unobservable uctr lengthA rankMc 0 so controllable Checking Requirements Convert to Diagonal Modal Form csysT canonsysmodal Evaluate Detectability detectcsysCT Evaluates to 0 0 1 two modes are unobservable 9 Our model is also linear and timeinvariant Minimal Realization Removes the x3 variable which results in a controllable and observable system 10 Use Minimal Realization and Revaluation nsys minrealsys x3 state was removed from ABCD ne eignsys nMo obsvnsysAnsysC nMc ctrbnsysAnsysB nuobs lengthnsysA ranknMo 0 so observable nuctr lengthnsysA ranknMc 0 so controllable Minimal Realization and Repetition 11 Define Q and R Q 21 0 0 1 Started with Original Q1 0 0 1 adjusted to meet most R 1 Made 1 since we were given no machine limits good for simple math Calculate Gain K ARE Solution S and Closedloop Poles P KSP lqrnsysQR Controller Design Calculate Gain N for error tracking N nsysCnsysAnsysBK1nsysB1 12 For Feedback Gain K For Reference Gain N Transition Headline Lets start with the first set of slides Open Loop Model Transition Headline Lets start with the first set of slides Close Loop Model Transition Headline Lets start with the first set of slides Reference Tracking Model Transition Headline Lets start with the first set of slides Optimized Model Summary Bicycle Model StateSpace Model Requirement Check Create Optimized Controller 17 Acknowledgements 18 Work Cited 1 B Zheng Active steering control with front wheel steering Jan2004 Online Available 
https://www.researchgate.net/figure/Bicycle-model-for-steering-dynamics-The-corresponding-linearized-dynamic-equation-is_fig1_4119228 [Accessed: 22-Nov-2020].

This video was produced as part of the requirements for ECE 5115 Control Lab II at the Cullen College of Engineering, University of Houston, Houston, Texas.

Images used under United States public domain and from ResearchGate user Bing Zheng, "Active Steering Control with Front Wheel Steering."
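The slides' rank tests can be reproduced outside MATLAB. The sketch below redoes the controllability/observability check in Python/NumPy using the A, B, C structure shown on the slides. The parameter magnitudes are taken as printed (the decimal points of m, vlon, and iz may have been lost in extraction, but the rank results depend only on the model's structure, not on these magnitudes), and the `ctrb`/`obsv` helpers are hand-rolled stand-ins for the MATLAB built-ins.

```python
import numpy as np

# Parameter values as printed on the slides; decimal points restored for
# lr (0.35) and lf (0.625), the rest taken verbatim.
car, caf, m, vlon = 150.0, 150.0, 816.0, 44.0
lr, lf, iz = 0.35, 0.625, 30105.0

# State vector [x1, x2, x3], matching the A, B, C on the slides.
A = np.array([
    [-(car + caf) / (m * vlon), 0.0, (car * lr - caf * lf) / (m * vlon) - vlon],
    [0.0, 0.0, 1.0],
    [(lr * car - lf * caf) / (iz * vlon), 0.0, -(lf**2 * caf + lr**2 * car) / (iz * vlon)],
])
B = np.array([[caf / m], [0.0], [caf * lf / iz]])
C = np.array([[0.0, 0.0, 1.0]])

def ctrb(A, B):
    """Controllability matrix [B, AB, A^2 B, ...] (like MATLAB's ctrb)."""
    return np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(A.shape[0])])

def obsv(A, C):
    """Observability matrix [C; CA; CA^2; ...] (like MATLAB's obsv)."""
    return np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(A.shape[0])])

n_unctr = A.shape[0] - np.linalg.matrix_rank(ctrb(A, B))   # 0: fully controllable
n_unobs = A.shape[0] - np.linalg.matrix_rank(obsv(A, C))   # 1: one unobservable state
```

The zero middle column of A (that state influences nothing else, and C does not measure it) is what makes one state unobservable; that is exactly why the slides pass the model through `minreal` before designing the controller.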
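The LQR design on the slides can likewise be sketched without the Control System Toolbox. This is a Python/NumPy stand-in, not the slides' exact MATLAB: it builds the reduced model directly by deleting the decoupled angle state (the effect of `minreal` here), solves the continuous-time algebraic Riccati equation via the stable invariant subspace of the Hamiltonian matrix, and computes the reference gain N from the usual DC-gain formula. Parameter magnitudes are again taken as printed on the slides.

```python
import numpy as np

# Parameters as printed on the slides (decimal points restored for lr, lf).
car, caf, m, vlon = 150.0, 150.0, 816.0, 44.0
lr, lf, iz = 0.35, 0.625, 30105.0

# Reduced model: keep lateral velocity and yaw rate; the decoupled angle
# state is unobservable and is what minreal removes.
A = np.array([
    [-(car + caf) / (m * vlon), (car * lr - caf * lf) / (m * vlon) - vlon],
    [(lr * car - lf * caf) / (iz * vlon), -(lf**2 * caf + lr**2 * car) / (iz * vlon)],
])
B = np.array([[caf / m], [caf * lf / iz]])
C = np.array([[0.0, 1.0]])          # output: the remaining measured state

Q = np.diag([21.0, 1.0])            # the slides' tuned weights (started from identity)
R = np.array([[1.0]])               # no actuator limits given

def lqr(A, B, Q, R):
    """Continuous-time LQR via the stable invariant subspace of the
    Hamiltonian matrix (a numpy-only stand-in for MATLAB's lqr)."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    Vs = V[:, w.real < 0]                          # n stable eigenvectors
    S = np.real(Vs[n:, :] @ np.linalg.inv(Vs[:n, :]))
    K = Rinv @ B.T @ S
    return K, S

K, S = lqr(A, B, Q, R)
Acl = A - B @ K                                    # closed-loop dynamics

# Reference gain: choose N so the closed-loop DC gain from r to y is 1.
N = -1.0 / (C @ np.linalg.inv(Acl) @ B).item()
dc_gain = (-C @ np.linalg.inv(Acl) @ B).item() * N   # = 1 by construction
```

LQR guarantees `Acl` is Hurwitz for a controllable pair with positive-definite Q and R, so the closed loop is stable and, with N applied to the reference, tracks a constant reference with zero steady-state error, which mirrors the slides' reference-tracking model.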