Contents

Preface

1. Introduction
   1.1 Control Systems
   1.2 What Is Feedback and What Are Its Effects?
   1.3 Types of Feedback Control Systems

2. Mathematical Foundation
   2.1 Introduction
   2.2 Complex-Variable Concept
   2.3 Laplace Transform
   2.4 Inverse Laplace Transform by Partial-Fraction Expansion
   2.5 Application of Laplace Transform to the Solution of Linear Ordinary Differential Equations
   2.6 Elementary Matrix Theory
   2.7 Matrix Algebra
   2.8 z-Transform

3. Transfer Function and Signal Flow Graphs
   3.1 Introduction
   3.2 Transfer Functions of Linear Systems
   3.3 Impulse Response of Linear Systems
   3.4 Block Diagrams
   3.5 Signal Flow Graphs
   3.6 Summary of Basic Properties of Signal Flow Graphs
   3.7 Definitions for Signal Flow Graphs
   3.8 Signal-Flow-Graph Algebra
   3.9 Examples of the Construction of Signal Flow Graphs
   3.10 General Gain Formula for Signal Flow Graphs
   3.11 Application of the General Gain Formula to Block Diagrams
   3.12 Transfer Functions of Discrete-Data Systems

4. State-Variable Characterization of Dynamic Systems
   4.1 Introduction to the State Concept
   4.2 State Equations and the Dynamic Equations
   4.3 Matrix Representation of State Equations
   4.4 State Transition Matrix
   4.5 State Transition Equation
   4.6 Relationship Between State Equations and High-Order Differential Equations
   4.7 Transformation to Phase-Variable Canonical Form
   4.8 Relationship Between State Equations and Transfer Functions
   4.9 Characteristic Equation, Eigenvalues, and Eigenvectors
   4.10 Diagonalization of the A Matrix (Similarity Transformation)
   4.11 Jordan Canonical Form
   4.12 State Diagram
   4.13 Decomposition of Transfer Functions
   4.14 Transformation into Modal Form
   4.15 Controllability of Linear Systems
   4.16 Observability of Linear Systems
   4.17 Relationship Among Controllability, Observability, and Transfer Functions
   4.18 Nonlinear State Equations and Their Linearization
   4.19 State Equations of Linear Discrete-Data Systems
   4.20 z-Transform Solution of Discrete State Equations
   4.21 State Diagram for Discrete-Data Systems
   4.22 State Diagrams for Sampled-Data Systems
   4.23 State Equations of Linear Time-Varying Systems

5. Mathematical Modeling of Physical Systems
   5.1 Introduction
   5.2 Equations of Electrical Networks
   5.3 Modeling of Mechanical System Elements
   5.4 Equations of Mechanical Systems
   5.5 Error-Sensing Devices in Control Systems
   5.6 Tachometers
   5.7 DC Motors in Control Systems
   5.8 Two-Phase Induction Motor
   5.9 Step Motors
   5.10 Tension-Control System
   5.11 Edge-Guide Control System
   5.12 Systems with Transportation Lags
   5.13 Sun-Seeker System

6. Time-Domain Analysis of Control Systems
   6.1 Introduction
   6.2 Typical Test Signals for Time Response of Control Systems
   6.3 Time-Domain Performance of Control Systems—Steady-State Response
   6.4 Time-Domain Performance of Control Systems—Transient Response
   6.5 Transient Response of a Second-Order System
   6.6 Time Response of a Positional Control System
   6.7 Effects of Derivative Control on the Time Response of Feedback Control Systems
   6.8 Effects of Integral Control on the Time Response of Feedback Control Systems
   6.9 Rate Feedback or Tachometer Feedback Control
   6.10 Control by State-Variable Feedback

7. Stability of Control Systems
   7.1 Introduction
   7.2 Stability, Characteristic Equation, and the State Transition Matrix
   7.3 Stability of Linear Time-Invariant Systems with Inputs
   7.4 Methods of Determining Stability of Linear Control Systems
   7.5 Routh-Hurwitz Criterion
   7.6 Nyquist Criterion
   7.7 Application of the Nyquist Criterion
   7.8 Effects of Additional Poles and Zeros of G(s)H(s) on the Shape of the Nyquist Locus
   7.9 Stability of Multiloop Systems
   7.10 Stability of Linear Control Systems with Time Delays
   7.11 Stability of Nonlinear Systems—Popov's Criterion

8. Root Locus Techniques
   8.1 Introduction
   8.2 Basic Conditions of the Root Loci
   8.3 Construction of the Complete Root Loci
   8.4 Application of the Root Locus Technique to the Solution of Roots of a Polynomial
   8.5 Some Important Aspects of the Construction of the Root Loci
   8.6 Root Contour—Multiple-Parameter Variation
   8.7 Root Loci of Systems with Pure Time Delay
   8.8 Relationship Between Root Loci and the Polar Plot
   8.9 Root Loci of Discrete-Data Control Systems

9. Frequency-Domain Analysis of Control Systems
   9.1 Introduction
   9.2 Frequency-Domain Characteristics
   9.3 Mp, ωp, and the Bandwidth of a Second-Order System
   9.4 Effects of Adding a Zero to the Open-Loop Transfer Function
   9.5 Effects of Adding a Pole to the Open-Loop Transfer Function
   9.6 Relative Stability—Gain Margin, Phase Margin, and Mp
   9.7 Relative Stability As Related to the Slope of the Magnitude Curve of the Bode Plot
   9.8 Constant M Loci in the G(jω)-Plane
   9.9 Constant Phase Loci in the G(jω)-Plane
   9.10 Constant M and N Loci in the Magnitude-Versus-Phase Plane—The Nichols Chart
   9.11 Closed-Loop Frequency Response Analysis of Nonunity Feedback Systems

   11.4 Design of System with Specific Eigenvalues—An Application of Controllability
   11.5 Design of State Observers
   11.6 Optimal Linear Regulator Design
   11.7 Design with Partial State Feedback

APPENDIX A Frequency-Domain Plots
   A.1 Polar Plots of Transfer Functions
   A.2 Bode Plot (Corner Plot) of a Transfer Function
   A.3 Magnitude-Versus-Phase Plot

APPENDIX B Laplace Transform Table

APPENDIX C Lagrange's Multiplier Method

Index
Preface
The first edition of this book, published in 1962, was characterized by having chapters on sampled-data and nonlinear control systems. The treatment of the analysis and design of control systems was all classical.

The two major changes in the second edition, published in 1967, were the inclusion of the state variable technique and the integration of the discrete-data systems with the continuous-data systems. The chapter on nonlinear systems was eliminated in the second edition to the disappointment of some users of that text. At the time of the revision the author felt that a comprehensive treatment on the subject of nonlinear systems could not be made effectively with the available space.
The third edition is still written as an introductory text for a senior course on control systems. Although a great deal has happened in the area of modern control theory in the past ten years, preparing suitable material for a course on introductory control systems remains a difficult task.
The problem is a complicated one because it is difficult to teach the topics concerned with new developments in modern control theory at the undergraduate level. The unique situation in control systems has been that many of the practical problems are still being solved in the industry by the classical methods. While some of the techniques in modern control theory are much more powerful and can solve more complex problems, there are often more restrictions when it comes to practical applications of the solutions. However, it should be recognized that a modern control engineer should have an understanding of the classical as well as the modern control methods. The latter will enhance and broaden one's perspective in solving a practical problem. It is the author's opinion that one should strike a balance in the teaching of control systems theory at the beginning and intermediate levels. Therefore, in this current edition, equal emphasis is placed on the classical methods and the modern control theory.

A number of introductory books with titles involving modern control theory have been published in recent years. Some authors have attempted to unify and integrate the classical control with the modern control, but according to the critics and the reviews, most have failed. Although such a goal is highly desirable, if only from the standpoint of presentation, there does not seem to be a good solution. It is possible that the objective may not be achieved until new theories and new techniques are developed for this purpose. The fact remains that control systems, in some way, may be regarded as a science of learning how to solve one problem—control, in many different ways. These different ways of solution may be compared and weighed against each other, but it may not be possible to unify all the approaches. The approach used in this text is to present the classical method and the modern approach independently, and whenever possible, the two approaches are considered as alternatives, and the advantages and disadvantages of each are weighed. Many illustrative examples are carried out by both methods.

Many existing textbooks on control systems have been criticized for not including adequate practical problems. One reason for this is, perhaps, that many textbook writers are theorists who lack the practical background and experience necessary to provide real-life examples. Another reason is the fact that the difficulty in the control systems area is compounded by the fact that most real-life problems are highly complex and are rarely suitable as illustrative examples at the introductory level. Usually, much of the realism is lost by simplifying the problem to fit the nice theorems and design techniques developed in the text material. Nevertheless, the majority of the students taking a control systems course at the senior level do not pursue a graduate career, and they must put their knowledge to immediate use in their new employment. It is extremely important for these students, as well as those who will continue, to gain an actual feel of what a real control system is like. Therefore, the author has introduced a number of practical examples in various fields in this text. The homework problems also reflect the attempt of this text to provide more real-life problems.
The following features of this new edition are emphasized by comparison with the first two editions:

1. Equal emphasis on classical and modern control theory.
2. Inclusion of sampled-data and nonlinear systems.
3. Practical system examples and homework problems.
The material assembled in this book is an outgrowth of a senior-level control system course taught by the author at the University of Illinois at Urbana-Champaign for many years. Moreover, this book is written in a style adaptable for self-study and reference.

Chapter 1 presents the basic concept of control systems. The definition of feedback and its effects are covered. Chapter 2 presents mathematical foundation and preliminaries. The subjects included are Laplace transform, z-transform, matrix algebra, and the applications of the transform methods. Transfer function and signal flow graphs are discussed in Chapter 3. Chapter 4 introduces the state variable approach to dynamical systems. The concepts and definitions of controllability and observability are introduced at the early stage. These subjects are later used for the analysis and design of linear control systems. Chapter 5 discusses the mathematical modeling of physical systems. Here, the emphasis is on electromechanical systems. Typical transducers and control systems used in practice are illustrated. The treatment cannot be exhaustive as there are numerous types of devices and control systems. Chapter 6 gives the time response considerations of control systems. Both the classical and the modern approach are used. Some simple design considerations in the time domain are pointed out. Chapters 7, 8, and 9 deal with topics on stability, root locus, and frequency response of control systems. In Chapter 10, the design of control systems is discussed, and the approach is basically classical. Chapter 11 contains some of the optimal control subjects which, in the author's opinion, can be taught at the undergraduate level if time permits. The text does contain more material than can be covered in one semester.
One of the difficulties in preparing this book was the weighing of what subjects to cover. To keep the book to a reasonable length, some subjects, which were in the original draft, had to be left out of the final manuscript. These included the treatment of signal flow graphs and time-domain analysis of discrete-data systems, the second method of Liapunov's stability method, describing function analysis, state plane analysis, and a few selected topics on implementing optimal control. The author feels that the inclusion of these subjects would add materially to the spirit of the text, but at the cost of a higher price.
The author wishes to express his sincere appreciation to Dean W. L. Everitt (emeritus), Professors E. C. Jordan, O. L. Gaddy, and E. W. Ernst, of the University of Illinois, for their encouragement and interest in the project. The author is grateful to Dr. Andrew Sage of the University of Virginia and Dr. G. Singh of the University of Illinois for their valuable suggestions. Special thanks also go to Mrs. Jane Carlton, who typed a good portion of the manuscript and gave her invaluable assistance in proofreading.
Benjamin C. Kuo
Urbana, Illinois
1 Introduction
1.1 Control Systems
In recent years, automatic control systems have assumed an increasingly important role in the development and advancement of modern civilization and technology. Domestically, automatic controls in heating and air conditioning systems regulate the temperature and the humidity of modern homes for comfortable living. Industrially, automatic control systems are found in numerous applications, such as quality control of manufactured products, automation, machine tool control, modern space technology and weapon systems, computer systems, transportation systems, and robotics. Even such problems as inventory control, social and economic systems control, and environmental and hydrological systems control may be approached from the theory of automatic control.
The basic control system concept may be described by the simple block diagram shown in Fig. 1-1. The objective of the system is to control the variable c in a prescribed manner by the actuating signal e through the elements of the control system.
In more common terms, the controlled variable is the output of the system, and the actuating signal is the input. As a simple example, in the steering control of an automobile, the direction of the two front wheels may be regarded as the controlled variable c, the output. The position of the steering wheel is the input, the actuating signal e. The controlled process or system in this case is composed of the steering mechanisms, including the dynamics of the entire automobile. However, if the objective is to control the speed of the automobile, then the amount of pressure exerted on the accelerator is the actuating signal, with the speed regarded as the controlled variable.
Fig. 1-1. Basic control system.
There are many situations where several variables are to be controlled simultaneously by a number of inputs. Such systems are referred to as multivariable systems.
Open-Loop Control Systems (Nonfeedback Systems)
The word automatic implies that there is a certain amount of sophistication in the control system. By automatic, it generally means that the system is capable of adapting to a variety of operating conditions and is able to respond to a class of inputs satisfactorily. However, not any type of control system has the automatic feature. Usually, the automatic feature is achieved by feeding the output variable back and comparing it with the command signal. When a system does not have the feedback structure, it is called an open-loop system, which is the simplest and most economical type of control system. Unfortunately, open-loop control systems lack accuracy and versatility and can be used in none but the simplest types of applications.

Consider, for example, the control of the furnace for home heating. Let us
assume that the furnace is equipped only with a timing device, which controls the on and off periods of the furnace. To regulate the temperature to the proper level, the human operator must estimate the amount of time required for the furnace to stay on and then set the timer accordingly. When the preset time is up, the furnace is turned off. However, it is quite likely that the house temperature is either above or below the desired value, owing to inaccuracy in the estimate. Without further deliberation, it is quite apparent that this type of control is inaccurate and unreliable. One reason for the inaccuracy lies in the fact that one may not know the exact characteristics of the furnace.
The other factor is that one has no control over the outdoor temperature, which has a definite bearing on the indoor temperature. This also points to an important disadvantage of the performance of an open-loop control system, in that the system is not capable of adapting to variations in environmental conditions or to external disturbances. In the case of the furnace control, perhaps an experienced person can provide control for a certain desired temperature in the house; but if the doors or windows are opened or closed intermittently during the operating period, the final temperature inside the house will not be accurately regulated by the open-loop control.
An electric washing machine is another typical example of an open-loop system, because the amount of wash time is entirely determined by the judgment and estimation of the human operator. A true automatic electric washing machine should have the means of checking the cleanliness of the clothes continuously and turn itself off when the desired degree of cleanliness is reached. Although open-loop control systems are of limited use, they form the basic
elements of the closed-loop control systems. In general, the elements of an open-loop control system are represented by the block diagram of Fig. 1-2. An input signal or command r is applied to the controller, whose output acts as the actuating signal e; the actuating signal then actuates the controlled process and hopefully will drive the controlled variable c to the desired value.
Fig. 1-2. Block diagram of an open-loop control system.
Closed-Loop Control Systems (Feedback Control Systems)
What is missing in the open-loop control system for more accurate and more adaptable control is a link or feedback from the output to the input of the system. In order to obtain more accurate control, the controlled signal c(t) must be fed back and compared with the reference input, and an actuating signal proportional to the difference of the output and the input must be sent through the system to correct the error. A system with one or more feedback paths like that just described is called a closed-loop system. Human beings are probably the most complex and sophisticated feedback control system in existence. A human being may be considered to be a control system with many inputs and outputs, capable of carrying out highly complex operations.

To illustrate the human being as a feedback control system, let us consider that the objective is to reach for an object on a desk. As one is reaching for the object, the brain sends out a signal to the arm to perform the task. The eyes serve as a sensing device which feeds back continuously the position of the hand. The distance between the hand and the object is the error, which is eventually brought to zero as the hand reaches the object. This is a typical example of closed-loop control. However, if one is told to reach for the object and then is blindfolded, one can only reach toward the object by estimating its exact position. It is quite possible that the object may be missed by a wide margin. With the eyes blindfolded, the feedback path is broken, and the human is operating as an open-loop system. The example of the reaching of an object by a human being is described by the block diagram shown in Fig. 1-3.
Fig. 1-3. Block diagram of a human being as a closed-loop control system.

As another illustrative example of a closed-loop control system, Fig. 1-4
Fig. 1-4. Rudder control system.
shows the block diagram of the rudder control system of a ship. In this case the objective of control is the position of the rudder, and the reference input is applied through the steering wheel. The error between the relative positions of the steering wheel and the rudder is the signal, which actuates the controller and the motor. When the rudder is finally aligned with the desired reference direction, the output of the error sensor is zero. Let us assume that the steering wheel position is given a sudden rotation of R units, as shown by the time signal in Fig. 1-5(a). The position of the rudder as a function of time, depending upon the characteristics of the system, may typically be one of the responses shown
in Fig. 1-5(b). Because all physical systems have electrical and mechanical inertia, the position of the rudder cannot respond instantaneously to a step input, but will, rather, move gradually toward the final desired position. Often, the response will oscillate about the final position before settling. It is apparent that for the rudder control it is desirable to have a nonoscillatory response.
Fig. 1-5. (a) Step displacement input of rudder control system. (b) Typical output responses.
Fig. 1-6. Basic elements of a feedback control system.
The basic elements and the block diagram of a closed-loop control system are shown in Fig. 1-6. In general, the configuration of a feedback control system may not be constrained to that of Fig. 1-6. In complex systems there may be a multitude of feedback loops and element blocks.

Figure 1-7(a) illustrates the elements of a tension control system of a windup process. The unwind reel may contain a roll of material such as paper or cable that is to be sent into a processing unit, such as a cutter or a printer; the windup reel then collects it by winding it onto another roll. The control system in this case is to maintain the tension of the material or web at a certain prescribed value to avoid such problems as tearing, stretching, or creasing.
To regulate the tension, the web is formed into a half-loop by passing it down and around a weighted roller. The roller is attached to a pivot arm, which allows free up-and-down motion of the roller. The combination of the roller and the pivot arm is called the dancer. When the system is in operation, the web normally travels at a constant speed. The ideal position of the dancer is horizontal, producing a web tension equal to one-half of the total weight W of the dancer roll. The electric brake on the unwind reel is to generate a restraining torque to keep the dancer in the horizontal position at all times.
During actual operation, because of external disturbances, uncertainties and irregularities of the web material, and the decrease of the effective diameter of the unwind reel, the dancer arm will not remain horizontal unless some scheme is employed to properly sense the dancer-arm position and control the restraining braking torque.
To obtain the correction of the dancer-arm-position error, an angular sensor is used to measure the angular deviation, and a signal in proportion to the error is used to control the braking torque through a controller. Figure 1-7(b) shows a block diagram that illustrates the interconnections between the elements of the system.
Fig. 1-7. (a) Tension control system. (b) Block diagram depicting the basic elements and interconnections of a tension control system.
1.2 What Is Feedback and What Are Its Effects?
The concept of feedback plays an important role in control systems. We demonstrated in Section 1.1 that feedback is a major requirement of a closed-loop control system. Without feedback, a control system would not be able to achieve the accuracy and reliability that are required in most practical applications. However, from a more rigorous standpoint, the definition and the significance of feedback are much deeper and more difficult to demonstrate than the few examples given in Section 1.1. In reality, the reasons for using feedback carry far more meaning than the simple one of comparing the input with the output in order to reduce the error. The reduction of system error is merely one of the many effects that feedback may bring upon a system. We shall now show that feedback also has effects on such system performance characteristics as stability, bandwidth, overall gain, impedance, and sensitivity.
To understand the effects of feedback on a control system, it is essential that we examine this phenomenon with a broad mind. When feedback is deliberately introduced for the purpose of control, its existence is easily identified. However, there are numerous situations wherein a physical system that we normally recognize as an inherently nonfeedback system may turn out to have feedback when it is observed in a certain manner. In general, we can state that whenever a closed sequence of cause-and-effect relations exists among the variables of a system, feedback is said to exist. This viewpoint will inevitably admit feedback in a large number of systems that ordinarily would be identified as nonfeedback systems. However, with the availability of the feedback and control system theory, this general definition of feedback enables numerous systems, with or without physical feedback, to be studied in a systematic way once the existence of feedback in the above-mentioned sense is established.
We shall now investigate the effects of feedback on the various aspects of system performance. Without the necessary background and mathematical foundation of linear system theory, at this point we can rely only on simple static system notation for our discussion. Let us consider the simple feedback system configuration shown in Fig. 1-8, where r is the input signal, c the output signal, e the error, and b the feedback signal. The parameters G and H may be considered as constant gains. By simple algebraic manipulations it is easy to show that the input-output relation of the system is

    M = c/r = G/(1 + GH)                                    (1-1)

Using this basic relationship of the feedback system structure, we can uncover some of the significant effects of feedback.
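As a quick sanity check (not part of the original text; the gain values are arbitrary illustrative choices), the static relations of Fig. 1-8 can be propagated around the loop numerically and compared with Eq. (1-1):

```python
# Static signal relations of Fig. 1-8: e = r - b, b = H*c, c = G*e.
# Repeatedly propagating them around the loop converges (when |GH| < 1)
# to the closed-loop output predicted by Eq. (1-1): M = G/(1 + GH).
# The gains below are arbitrary illustrative values, not from the text.
G, H = 2.0, 0.25          # GH = 0.5, so the iteration converges
r = 1.0

c = 0.0
for _ in range(100):
    b = H * c             # feedback signal
    e = r - b             # error (negative feedback)
    c = G * e             # forward path

M = G / (1 + G * H)       # Eq. (1-1)
print(round(c, 6), round(M, 6))   # both 1.333333
```

The settled loop output and the closed-form gain agree, which is all Eq. (1-1) asserts in the static case.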
Fig. 1-8. Feedback system.
Effect of Feedback on Overall Gain
As seen from Eq. (1-1), feedback affects the gain G of a nonfeedback system by a factor of 1 + GH. The reference of the feedback in the system of Fig. 1-8 is negative, since a minus sign is assigned to the feedback signal. The quantity GH may itself include a minus sign, so the general effect of feedback is that it may increase or decrease the gain. In a practical control system, G and H are functions of frequency, so the magnitude of 1 + GH may be greater than 1 in one frequency range but less than 1 in another. Therefore, feedback could increase the gain of the system in one frequency range but decrease it in another.
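To make the frequency dependence concrete, here is a small sketch; the transfer function and gains are assumed purely for illustration, not taken from the text:

```python
# Assumed example: G(jw) = 10/(jw + 1)^2 with constant H = 0.1, so that
# 1 + G(jw)H = 1 + 1/(jw + 1)^2.  Feedback reduces the closed-loop gain
# where |1 + GH| > 1 and increases it where |1 + GH| < 1.
def loop_factor(w):
    G = 10 / (1j * w + 1) ** 2   # open-loop gain at frequency w
    H = 0.1
    return abs(1 + G * H)

print(loop_factor(0.0))   # 2.0 -> gain reduced at w = 0
print(loop_factor(2.0))   # ~0.894 -> gain increased at w = 2
```

The same H thus attenuates the system at low frequency while boosting it near w = 2, exactly the frequency-dependent behavior described above.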
Effect of Feedback on Stability
Stability is a notion that describes whether the system will be able to follow the input command. In a nonrigorous manner, a system is said to be unstable if its output is out of control or increases without bound. To investigate the effect of feedback on stability, we can again refer to the expression in Eq. (1-1). If GH = -1, the output of the system is infinite for any finite input. Therefore, we may state that feedback can cause a system that is originally stable to become unstable. Certainly, feedback is a two-edged sword; when it is improperly used, it can be harmful. It should be pointed out, however, that we are dealing only with the static case here, and, in general, GH = -1 is not the only condition for instability.

It can be demonstrated that one of the advantages of incorporating feedback is that it can stabilize an unstable system. Let us assume that the feedback system in Fig. 1-8 is unstable because GH = -1. If we introduce another feedback loop through a negative feedback F, as shown in Fig. 1-9, the input-output relation of the overall system is

    c/r = G/(1 + GH + GF)                                   (1-2)

It is apparent that although the properties of G and H are such that the inner-loop feedback system is unstable, because GH = -1, the overall system can be stable by properly selecting the outer-loop feedback gain F.
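The static version of this argument is easy to check numerically; the gains below are assumed for illustration only:

```python
# With GH = -1 the single-loop denominator of Eq. (1-1) vanishes and the
# output is unbounded; adding the outer loop F restores a finite gain per
# Eq. (1-2).  G, H, and F are arbitrary illustrative values.
G, H = 10.0, -0.1          # GH = -1: the unstable inner-loop condition
F = 0.2                    # outer-loop feedback gain

inner_denominator = 1 + G * H            # 0.0 -> Eq. (1-1) blows up
overall_gain = G / (1 + G * H + G * F)   # Eq. (1-2): 10/(0 + 2) = 5.0

print(inner_denominator, overall_gain)   # 0.0 5.0
```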
Fig. 1-9. Feedback system with two feedback loops.
Effect of Feedback on Sensitivity

Sensitivity considerations often play an important role in the design of control systems. Since all physical elements have properties that change with environment and age, we cannot always consider the parameters of a control system to be completely stationary over the entire operating life of the system. For instance, the winding resistance of an electric motor changes as the temperature of the motor rises during operation. In general, a good control system should be very insensitive to these parameter variations while still being able to follow the command responsively. We shall investigate what effect feedback has on the sensitivity to parameter variations.

Referring to the system in Fig. 1-8, we consider G as a parameter that may vary. The sensitivity of the gain of the overall system M to the variation in G is defined as

    S_G^M = (dM/M)/(dG/G)                                   (1-3)

where dM denotes the incremental change in M due to the incremental change in G; dM/M and dG/G denote the percentage change in M and G, respectively. The expression of the sensitivity function S_G^M can be derived by using Eq. (1-1). We have

    S_G^M = (dM/dG)(G/M) = 1/(1 + GH)                       (1-4)
This relation shows that the sensitivity function can be made arbitrarily small by increasing GH, provided that the system remains stable. It is apparent that in an open-loop system the gain of the system will respond in a one-to-one fashion to the variation in G. In general, the sensitivity of the system gain of a feedback system to parameter variations depends on where the parameter is located. The reader may derive the sensitivity of the system in Fig. 1-8 due to the variation of H.
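Equation (1-4) can be checked with a finite-difference sketch; the gain values below are assumed for illustration. Perturb G by 1% and compare the measured ratio of relative changes with 1/(1 + GH):

```python
# Finite-difference check of Eq. (1-4): S = (dM/M)/(dG/G) ~ 1/(1 + GH).
# G and H are arbitrary illustrative values.
G, H = 100.0, 0.1

def M(g):                 # closed-loop gain, Eq. (1-1)
    return g / (1 + g * H)

dG = 0.01 * G             # a 1% perturbation of G
sensitivity_measured = ((M(G + dG) - M(G)) / M(G)) / (dG / G)
sensitivity_formula = 1 / (1 + G * H)   # Eq. (1-4)

# With GH = 10, a 1% change in G moves M by only about 0.09%.
print(round(sensitivity_measured, 4), round(sensitivity_formula, 4))
```

The two numbers agree to within the finite-difference error, illustrating how strongly feedback desensitizes the overall gain to drift in G.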
Effect of Feedback on External Disturbance or Noise

All physical control systems are subject to some types of extraneous signals or noise during operation. Examples of these signals are thermal noise voltage in electronic amplifiers and brush or commutator noise in electric motors. The effect of feedback on noise depends greatly on where the noise is introduced into the system; no general conclusions can be made. However, in many situations, feedback can reduce the effect of noise on system performance.
Let us refer to the system shown in Fig. 1-10, in which r denotes the command signal and n is the noise signal. In the absence of feedback, H = 0, the output c is

    c = G1 G2 e + G2 n        (1-5)

where e = r. The signal-to-noise ratio of the output is defined as

    output due to signal / output due to noise = (G1 G2 e) / (G2 n) = G1 (e/n)        (1-6)

To increase the signal-to-noise ratio, evidently we should either increase the magnitude of G1 or increase e relative to n. Varying the magnitude of G2 would have no effect whatsoever on the ratio.
Fig. 1-10. Feedback system with a noise signal.

With the presence of feedback, the system output due to r and n acting simultaneously is

    c = [G1 G2 / (1 + G1 G2 H)] r + [G2 / (1 + G1 G2 H)] n        (1-7)
Simply comparing Eq. (1-7) with Eq. (1-5) shows that the noise component in the output of Eq. (1-7) is reduced by the factor 1 + G1 G2 H, but the signal component is also reduced by the same amount. The signal-to-noise ratio is

    output due to signal / output due to noise = [G1 G2 r / (1 + G1 G2 H)] / [G2 n / (1 + G1 G2 H)] = G1 (r/n)        (1-8)
and is the same as that without feedback. In this case feedback is shown to have no direct effect on the output signal-to-noise ratio of the system in Fig. 1-10. However, the application of feedback suggests a possibility of improving the signal-to-noise ratio under certain conditions. Let us assume that in the system of Fig. 1-10 the magnitude of G1 is increased to G1' and that of the input r to r', with all other parameters unchanged, so that the output due to the input signal acting alone is at the same level as that when feedback is absent. In other words, we let

    G1' G2 r' / (1 + G1' G2 H) = G1 G2 r        (1-9)

With the increased G1', the output due to noise acting alone becomes

    G2 n / (1 + G1' G2 H)        (1-10)

which is smaller than the output due to n when G1 is not increased. The signal-to-noise ratio is now

    [G1 G2 r] / [G2 n / (1 + G1' G2 H)] = G1 (r/n)(1 + G1' G2 H)        (1-11)

which is greater than that of the system without feedback by a factor of (1 + G1' G2 H).

In general, feedback also has effects on such performance characteristics as bandwidth, impedance, transient response, and frequency response. These effects will become known as one progresses into the ensuing material of this text.

1.3 Types of Feedback Control Systems
Feedback control systems may be classified in a number of ways, depending upon the purpose of the classification. For instance, according to the method of analysis and design, feedback control systems are classified as linear or nonlinear, time-varying or time-invariant. According to the types of signals found in the system, reference is often made to continuous-data and discrete-data systems, or modulated and unmodulated systems. Also, with reference to the type of system components, we often come across descriptions such as electromechanical control systems, hydraulic control systems, pneumatic systems, and biological control systems. Control systems are often classified according to the main purpose of the system. A positional control system and a velocity control system control the output variables in the way the names imply. In general, there are many other ways of identifying control systems according to some special features of the system. It is important that some of these more common ways of classifying control systems are known, so that proper perspective is gained before embarking on the analysis and design of these systems.

Linear Versus Nonlinear Control Systems
This classification is made according to the methods of analysis and design. Strictly speaking, linear systems do not exist in practice, since all physical systems are nonlinear to some extent. Linear feedback control systems are idealized models fabricated by the analyst purely for the simplicity of analysis and design. When the magnitudes of the signals in a control system are limited to a range in which system components exhibit linear characteristics (i.e., the principle of superposition applies), the system is essentially linear. But when the magnitudes of the signals are extended outside the range of linear operation, depending upon the severity of the nonlinearity, the system should no longer be considered linear. For instance, amplifiers used in control systems often exhibit a saturation effect when their input signals become large; the magnetic field of a motor usually has saturation properties. Other common nonlinear effects
found in control systems are the backlash or dead play between coupled gear members, nonlinear characteristics in springs, nonlinear frictional force or torque between moving members, and so on. Quite often, nonlinear characteristics are intentionally introduced in a control system to improve its performance or provide more effective control. For instance, to achieve minimum-time control, an on-off (bang-bang or relay) type of controller is used. This type of control is
found in many missile or spacecraft control systems. For instance, in the attitude control of missiles and spacecraft, jets are mounted on the sides of the vehicle to provide reaction torque for attitude control. These jets are often controlled in a full-on or full-off fashion, so a fixed amount of air is applied from a given jet for a certain
time duration to control the attitude of the space vehicle.
For linear systems, there exists a wealth of analytical and graphical techniques for design and analysis purposes. Nonlinear systems, however, are very difficult to treat mathematically, and there are no general methods that may be used to solve a wide class of nonlinear systems.

Time-Invariant Versus Time-Varying Systems
When the parameters of a control system are stationary with respect to time during the operation of the system, we have a time-invariant system. Most physical systems contain elements that drift or vary with time to some extent. If the variation of a parameter is significant during the period of operation, the system is termed a time-varying system. For instance, the radius of the unwind reel of the tension control system in Fig. 1-7 decreases with time as the material is being transferred to the windup reel. Although a time-varying system without nonlinearity is still a linear system, its analysis is usually much more complex than that of linear time-invariant systems.
Continuous-Data Control Systems

A continuous-data system is one in which the signals at various parts of the system are all functions of the continuous time variable t. Among all continuous-data control systems, the signals may be further classified as ac or dc. Unlike the general definitions of ac and dc signals used in electrical engineering, ac and dc control systems carry special significances. When one refers to an ac control system, it usually means that the signals in the system are modulated by some kind of modulation scheme. On the other hand, when a dc control system is referred to, it does not mean that all the signals in the system are of the direct-current type; then there would be no control movement. A dc control system simply implies that the signals are unmodulated, but they are still ac by the common definition. The schematic diagram of a closed-loop dc control system is shown in Fig. 1-11. Typical waveforms of the system in response to a step-function input are shown in the figure. Typical components of a dc control system are potentiometers, dc amplifiers, dc motors, and dc tachometers.

Fig. 1-11. Schematic diagram of a typical dc closed-loop control system.

The schematic diagram of a typical ac control system is shown in Fig. 1-12.
In this case the signals in the system are modulated; that is, the information is transmitted by an ac carrier signal. Notice that the output controlled variable still behaves similarly to that of the dc system if the two systems have the same control objective. In this case the modulated signals are demodulated by the low-pass characteristics of the control motor. Typical components of an ac control system are synchros, ac amplifiers, ac motors, gyroscopes, and accelerometers. In practice, not all control systems are strictly of the ac or the dc type. A system may incorporate a mixture of ac and dc components, using modulators and demodulators to match the signals at various points of the system.

Fig. 1-12. Schematic diagram of a typical ac closed-loop control system.
Sampled-Data and Digital Control Systems

Sampled-data and digital control systems differ from the continuous-data systems in that the signals at one or more points of the system are in the form of either a pulse train or a digital code. Usually, sampled-data systems refer to a more general class of systems whose signals are in the form of pulse data, whereas a digital control system refers to the use of a digital computer or controller in the system. In this text the term "discrete-data control system" is used to describe both types of systems. In general, a sampled-data system receives data or information only intermittently at specific instants of time. For instance, the error signal in a control system may be supplied only intermittently in the form of pulses, in which case the control system receives no information about the error signal during the periods between two consecutive pulses. Figure 1-13 illustrates how a typical sampled-data system operates. A continuous input signal r(t) is applied to the system.
Fig. 1-13. Block diagram of a sampled-data control system.

The error signal e(t) is sampled by a sampling device, the sampler, and the output of the sampler is a sequence of pulses. The sampling rate of the sampler may or may not be uniform. There are many advantages of incorporating sampling in a control system, one of the most easily understood of these being that sampling provides time sharing of an expensive piece of equipment among several control channels.

Because digital computers provide many advantages in size and flexibility, computer control has become increasingly popular in recent years. Many airborne systems contain digital controllers that can pack several thousand discrete elements in a space no larger than the size of this book. Figure 1-14 shows the
basic elements of a digital autopilot for a guided missile.

Fig. 1-14. Digital autopilot system for a guided missile.
2 Mathematical Foundation

2.1 Introduction
The study of control systems relies to a great extent on the use of applied mathematics.
For the study of classical control theory, the prerequisites include such subjects as complex-variable theory, differential equations, Laplace transform, and z-transform. Modern control theory, on the other hand, requires a considerably more intensive mathematical background. In addition to the above-mentioned subjects, modern control theory is based on the foundations of matrix theory, set theory, linear algebra, variational calculus, various types of mathematical programming, and so on.
2.2 Complex-Variable Concept

Complex-variable theory plays an important role in the analysis and design of control systems. When studying linear continuous-data systems, it is essential that one understand the concept of a complex variable and functions of a complex variable when the transfer-function method is used.

Complex Variable

A complex variable s is considered to have two components: a real component σ and an imaginary component ω. Graphically, the real component is represented by an axis in the horizontal direction, and the imaginary component is measured along a vertical axis, in the complex s-plane. In other words, a complex variable is always defined by a point in a complex plane that has a σ axis and a jω axis. Figure 2-1 illustrates the complex s-plane, in which any
arbitrary point s1 is defined by the coordinates σ = σ1 and ω = ω1, or simply s1 = σ1 + jω1.

Fig. 2-1. Complex s-plane.
Functions of a Complex Variable

The function G(s) is said to be a function of the complex variable s if for every value of s there is a corresponding value (or values) of G(s). Since s is defined to have real and imaginary parts, the function G(s) is also represented by its real and imaginary parts; that is,

    G(s) = Re G + j Im G        (2-1)

where Re G denotes the real part of G(s) and Im G represents the imaginary part of G(s). Thus, the function G(s) can also be represented by the complex G(s)-plane, whose horizontal axis represents Re G and whose vertical axis measures the imaginary component of G(s). If for every value of s (every point in the s-plane) there is only one corresponding value for G(s) [one corresponding point in the G(s)-plane], G(s) is said to be a single-valued function, and the mapping (correspondence) from points in the s-plane onto points in the G(s)-plane is described as single valued (Fig. 2-2).

Fig. 2-2. Single-valued mapping from the s-plane to the G(s)-plane.

However, there are many functions for which the mapping from the function plane to the complex-variable plane is
not single valued. For instance, given the function

    G(s) = 1 / [s(s + 1)]        (2-2)

it is apparent that for each value of s there is only one unique corresponding value for G(s). However, the reverse is not true; for instance, the point G(s) = ∞ is mapped onto two points, s = 0 and s = -1, in the s-plane.
Analytic Function

A function G(s) of the complex variable s is called an analytic function in a region of the s-plane if the function and all its derivatives exist in the region. For instance, the function given in Eq. (2-2) is analytic at every point in the s-plane except at the points s = 0 and s = -1. At these two points the value of the function is infinite. The function G(s) = s + 2 is analytic at every point in the finite s-plane.
Singularities and Poles of a Function

The singularities of a function are the points in the s-plane at which the function or its derivatives do not exist. A pole is the most common type of singularity and plays a very important role in the studies of classical control theory.

The definition of a pole can be stated as: If a function G(s) is analytic and single valued in the neighborhood of s_i, it is said to have a pole of order r at s = s_i if the limit

    lim (s→s_i) [(s - s_i)^r G(s)]

has a finite, nonzero value. In other words, the denominator of G(s) must include the factor (s - s_i)^r, so when s = s_i the function becomes infinite. If r = 1, the pole at s = s_i is called a simple pole. As an example, the function

    G(s) = 10(s + 2) / [s(s + 1)(s + 3)^2]        (2-3)

has a pole of order 2 at s = -3 and simple poles at s = 0 and s = -1. It can also be said that the function is analytic in the s-plane except at these poles.
Zeros of a Function

The definition of a zero of a function can be stated as: If the function G(s) is analytic at s = s_i, it is said to have a zero of order r at s = s_i if the limit

    lim (s→s_i) [(s - s_i)^(-r) G(s)]        (2-4)

has a finite, nonzero value. Or, simply, G(s) has a zero of order r at s = s_i if 1/G(s) has an rth-order pole at s = s_i. For example, the function in Eq. (2-3) has a simple zero at s = -2.

If the function under consideration is a rational function of s, that is, a quotient of two polynomials of s, the total number of poles equals the total number of zeros, counting the multiple-order poles and zeros, if the poles and zeros at infinity and at zero are taken into account. The function in Eq. (2-3) has four finite poles, at s = 0, -1, -3, and -3; there is one finite zero, at s = -2, but there are three zeros at infinity, since

    lim (s→∞) G(s) = lim (s→∞) 10/s^3 = 0        (2-5)

Therefore, the function has a total of four poles and four zeros in the entire s-plane.
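The pole and zero locations for the function of Eq. (2-3) can be reproduced numerically. This sketch uses NumPy's polynomial root finder on the numerator and denominator (the gain factor 10 does not affect the root locations):

```python
# Finite zeros and poles of G(s) = 10(s + 2) / [s(s + 1)(s + 3)^2].
import numpy as np

num = np.poly([-2.0])                    # numerator roots -> zeros
den = np.poly([0.0, -1.0, -3.0, -3.0])   # denominator roots -> poles

zeros = np.roots(num)
poles = np.roots(den)

print(sorted(zeros.real))   # one finite zero, at s = -2
print(sorted(poles.real))   # poles at s = 0, -1 and a double pole at s = -3
```

The three zeros at infinity do not appear in the numerical output; they are inferred from the degree difference between denominator and numerator.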
2.3 Laplace Transform

The Laplace transform is one of the mathematical tools used for the solution of ordinary linear differential equations. In comparison with the classical method of solving linear differential equations, the Laplace transform method has the following two attractive features:

1. The homogeneous equation and the particular integral are solved in one operation.
2. The Laplace transform converts the differential equation into an algebraic equation in s. It is then possible to manipulate the algebraic equation by simple algebraic rules to obtain the solution in the s domain. The final solution is obtained by taking the inverse Laplace transform.

Definition of the Laplace Transform
Given the function f(t), which satisfies the condition

    ∫₀^∞ |f(t) e^(-σt)| dt < ∞        (2-6)

for some finite real σ, the Laplace transform of f(t) is defined as

    F(s) = ∫₀^∞ f(t) e^(-st) dt        (2-7)

or

    F(s) = ℒ[f(t)]        (2-8)

The variable s is referred to as the Laplace operator, which is a complex variable; that is, s = σ + jω. The defining equation of Eq. (2-7) is also known as the one-sided Laplace transform, as the integration is evaluated from t = 0 to ∞. This simply means that all information contained in f(t) prior to t = 0 is ignored or considered to be zero. This assumption does not place any serious limitation on the applications of the Laplace transform to linear system problems, since in the usual time-domain studies the time reference is often chosen at the instant t = 0. Furthermore, for a physical system, when an input is applied at t = 0, the response of the system does not start sooner than t = 0; that is, response does not precede excitation. The following examples serve as illustrations of how Eq. (2-7) may be used for the evaluation of the Laplace transform of a function f(t).
Example 2-1

Let f(t) be a unit step function that is defined to have a constant value of unity for t > 0 and a zero value for t < 0. Or,

    f(t) = u_s(t)        (2-9)

The Laplace transform of f(t) is

    F(s) = ℒ[u_s(t)] = ∫₀^∞ u_s(t) e^(-st) dt = -(1/s) e^(-st) |₀^∞ = 1/s        (2-10)

Of course, the Laplace transform given by Eq. (2-10) is valid only if

    ∫₀^∞ |u_s(t) e^(-σt)| dt = ∫₀^∞ |e^(-σt)| dt < ∞

which means that the real part of s, σ, must be greater than zero. However, in practice, we simply refer to the Laplace transform of the unit step function as 1/s, and rarely do we have to be concerned about the region in which the transform integral converges absolutely.
Example 2-2

Consider the exponential function

    f(t) = e^(-at),   t ≥ 0

where a is a constant. The Laplace transform of f(t) is written

    F(s) = ∫₀^∞ e^(-at) e^(-st) dt = -[1/(s + a)] e^(-(s+a)t) |₀^∞ = 1/(s + a)        (2-11)
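Equation (2-11) can be spot-checked for a real value of s by approximating the defining integral of Eq. (2-7) on a truncated interval; the values a = 2 and s = 3 below are arbitrary illustrative choices:

```python
# Trapezoidal approximation of F(s) = integral of e^{-at} e^{-st} dt
# over t >= 0, compared with the closed form 1/(s + a) of Eq. (2-11).
import numpy as np

a, s = 2.0, 3.0
t = np.linspace(0.0, 20.0, 200001)     # [0, 20] is effectively infinite here
y = np.exp(-(a + s) * t)
dt = t[1] - t[0]
F_numeric = dt * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])
F_exact = 1.0 / (s + a)

print(F_numeric, F_exact)              # both ≈ 0.2
```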
Inverse Laplace Transformation

The operation of obtaining f(t) from the Laplace transform F(s) is termed the inverse Laplace transformation. The inverse Laplace transform of F(s) is denoted by

    f(t) = ℒ⁻¹[F(s)]        (2-12)

and is given by the inverse Laplace transform integral

    f(t) = (1/2πj) ∫_(c-j∞)^(c+j∞) F(s) e^(st) ds        (2-13)

where c is a real constant that is greater than the real parts of all the singularities of F(s). Equation (2-13) represents a line integral that is to be evaluated in the s-plane. However, for most engineering purposes the inverse Laplace transform operation can be accomplished simply by referring to a Laplace transform table, such as the one given in Appendix B.
Important Theorems of the Laplace Transform

The applications of the Laplace transform are in many instances simplified by utilization of the properties of the transform. These properties are presented in the following in the form of theorems; no proofs are given.
1. Multiplication by a Constant

The Laplace transform of the product of a constant k and a time function f(t) is the constant k multiplied by the Laplace transform of f(t); that is,

    ℒ[k f(t)] = k F(s)        (2-14)

where F(s) is the Laplace transform of f(t).

2. Sum and Difference

The Laplace transform of the sum (or difference) of two time functions is the sum (or difference) of the Laplace transforms of the time functions; that is,

    ℒ[f1(t) ± f2(t)] = F1(s) ± F2(s)        (2-15)

where F1(s) and F2(s) are the Laplace transforms of f1(t) and f2(t), respectively.
3. Differentiation

The Laplace transform of the first derivative of a time function f(t) is s times the Laplace transform of f(t) minus the limit of f(t) as t approaches 0+; that is,

    ℒ[df(t)/dt] = sF(s) - lim (t→0+) f(t) = sF(s) - f(0+)        (2-16)

In general, for higher-order derivatives,

    ℒ[d^n f(t)/dt^n] = s^n F(s) - lim (t→0+) [s^(n-1) f(t) + s^(n-2) df(t)/dt + ⋯ + d^(n-1) f(t)/dt^(n-1)]
                     = s^n F(s) - s^(n-1) f(0+) - s^(n-2) f^(1)(0+) - ⋯ - f^(n-1)(0+)        (2-17)

4. Integration
The Laplace transform of the first integral of a function f(t) with respect to time is the Laplace transform of f(t) divided by s; that is,

    ℒ[∫₀^t f(τ) dτ] = F(s)/s        (2-18)

In general, for nth-order integration,

    ℒ[∫₀^t ∫₀^(t1) ⋯ ∫₀^(t_(n-1)) f(τ) dτ dt1 ⋯ dt_(n-1)] = F(s)/s^n        (2-19)

5. Shift in Time
The Laplace transform of f(t) delayed by time T is equal to the Laplace transform of f(t) multiplied by e^(-Ts); that is,

    ℒ[f(t - T) u_s(t - T)] = e^(-Ts) F(s)        (2-20)

where u_s(t - T) denotes the unit step function, which is shifted in time to the right by T.
6. Initial-Value Theorem

If the Laplace transform of f(t) is F(s), then

    lim (t→0) f(t) = lim (s→∞) sF(s)        (2-21)

if the time limit exists.

7. Final-Value Theorem

If the Laplace transform of f(t) is F(s), and if sF(s) is analytic on the imaginary axis and in the right half of the s-plane, then

    lim (t→∞) f(t) = lim (s→0) sF(s)        (2-22)

The final-value theorem is a very useful relation in the analysis and design of feedback control systems, since it gives the final value of a time function by determining the behavior of its Laplace transform as s tends to zero. However, the final-value theorem is not valid if sF(s) contains any poles whose real part is zero or positive, which is equivalent to the analytic requirement of sF(s) stated in the theorem. The following examples illustrate the care that one must take in applying the final-value theorem.
Example 2-3

Consider the function

    F(s) = 5 / [s(s² + s + 2)]        (2-23)

Since sF(s) is analytic on the imaginary axis and in the right half of the s-plane, the final-value theorem may be applied. Therefore, using Eq. (2-22),

    lim (t→∞) f(t) = lim (s→0) sF(s) = lim (s→0) 5/(s² + s + 2) = 5/2

Example 2-4

Consider the function

    F(s) = ω / (s² + ω²)        (2-24)

which is known to be the Laplace transform of f(t) = sin ωt. Since the function sF(s) has two poles on the imaginary axis, the final-value theorem cannot be applied in this case. In other words, although the final-value theorem would yield a value of zero as the final value of f(t), the result is erroneous.
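The conclusion of Example 2-3 can also be checked in the time domain: F(s) = 5/[s(s² + s + 2)] is the Laplace transform of the unit-step response of G(s) = 5/(s² + s + 2), which should settle at the value 5/2 predicted by Eq. (2-22). A sketch using SciPy:

```python
# Step response of G(s) = 5/(s^2 + s + 2); its final value should be 5/2.
import numpy as np
from scipy import signal

t = np.linspace(0.0, 30.0, 3001)
_, y = signal.step(([5.0], [1.0, 1.0, 2.0]), T=t)

print(y[-1])        # ≈ 2.5
```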
2.4 Inverse Laplace Transform by Partial-Fraction Expansion

In a great majority of the problems in control systems, the evaluation of the inverse Laplace transform does not necessitate the use of the inversion integral of Eq. (2-13). The inverse Laplace transform operation involving rational functions can be carried out using a Laplace transform table and partial-fraction expansion.

When the Laplace transform solution of a differential equation is a rational function in s, it can be written

    X(s) = P(s)/Q(s)        (2-25)
where P(s) and Q(s) are polynomials of s. It is assumed that the order of Q(s) in s is greater than that of P(s). The polynomial Q(s) may be written

    Q(s) = s^n + a1 s^(n-1) + ⋯ + a_(n-1) s + a_n        (2-26)

where a1, ..., a_n are real coefficients. The zeros of Q(s) are either real or in complex-conjugate pairs, in simple or multiple order. The methods of partial-fraction expansion will now be given for the cases of simple poles, multiple-order poles, and complex poles of X(s).
Partial-Fraction Expansion When All the Poles of X(s) Are Simple and Real

If all the poles of X(s) are real and simple, Eq. (2-25) can be written

    X(s) = P(s)/Q(s) = P(s) / [(s + s1)(s + s2) ⋯ (s + sn)]        (2-27)

where the poles -s1, -s2, ..., -sn are considered to be real numbers in the present case. Applying the partial-fraction expansion technique, Eq. (2-27) is written

    X(s) = K_s1/(s + s1) + K_s2/(s + s2) + ⋯ + K_sn/(s + sn)        (2-28)

The coefficient K_si (i = 1, 2, ..., n) is determined by multiplying both sides of Eq. (2-28) or (2-27) by the factor (s + s_i) and then setting s equal to -s_i. To find the coefficient K_s1, for instance, we multiply both sides of Eq. (2-27) by (s + s1) and let s = -s1; that is,

    K_s1 = [(s + s1) P(s)/Q(s)] evaluated at s = -s1
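This coefficient formula is what numerical residue routines implement. The sketch below applies scipy.signal.residue to an illustrative function, X(s) = (s + 3)/[(s + 1)(s + 2)], which is not one of the text's examples:

```python
# Partial-fraction expansion of X(s) = (s + 3)/[(s + 1)(s + 2)]
# = 2/(s + 1) - 1/(s + 2), computed with scipy.signal.residue.
from scipy import signal

r, p, k = signal.residue([1.0, 3.0], [1.0, 3.0, 2.0])

for K, pole in zip(r, p):
    print(round(pole.real, 6), round(K.real, 6))
# pole -1 carries residue 2, pole -2 carries residue -1
```

By Eq. (2-28) the result corresponds to x(t) = 2e^(-t) - e^(-2t) after table lookup.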
The coefficients of the expansion in Eq. (2-53) are determined as

    K0 = sX(s) |_(s=0) = 1        (2-56)

    K_(-α+jω) = (s + α - jω) X(s) |_(s=-α+jω) = ωn² / [2jω(-α + jω)] = [ωn/(2ω)] e^(-j(θ+π/2))        (2-57)

where

    θ = tan⁻¹[ω/(-α)]        (2-58)

Also,

    K_(-α-jω) = (s + α + jω) X(s) |_(s=-α-jω) = [ωn/(2ω)] e^(j(θ+π/2))        (2-59)

The complete expansion is

    X(s) = 1/s + [ωn/(2ω)] { e^(-j(θ+π/2))/(s + α - jω) + e^(j(θ+π/2))/(s + α + jω) }        (2-60)

Taking the inverse Laplace transform on both sides of the last equation gives

    x(t) = 1 + [ωn/(2ω)] [ e^(-j(θ+π/2)) e^((-α+jω)t) + e^(j(θ+π/2)) e^((-α-jω)t) ]
         = 1 + (ωn/ω) e^(-αt) sin(ωt - θ)        (2-61)

or

    x(t) = 1 + [1/√(1 - ζ²)] e^(-ζωn t) sin(ωn √(1 - ζ²) t - θ)        (2-62)

where θ is given by Eq. (2-58).

2.5 Application of Laplace Transform to the Solution of Linear Ordinary Differential Equations
With the aid of the theorems concerning the Laplace transform given in Section 2.3 and a table of transforms, linear ordinary differential equations can be solved by the Laplace transform method. The advantages of the Laplace transform method are that, with the aid of a transform table, the steps involved are all algebraic, and that the homogeneous solution and the particular integral solution are obtained simultaneously. Let us illustrate the method with several examples.
Example 2-7

Consider the differential equation

    d²x(t)/dt² + 3 dx(t)/dt + 2x(t) = 5u_s(t)        (2-63)

where u_s(t) is the unit step function. The initial conditions are x(0+) = -1 and x^(1)(0+) = dx(t)/dt |_(t=0+) = 2. To solve the differential equation we first take the Laplace transform on both sides of Eq. (2-63); we have

    s²X(s) - sx(0+) - x^(1)(0+) + 3sX(s) - 3x(0+) + 2X(s) = 5/s        (2-65)

Substituting the values of x(0+) and x^(1)(0+) into Eq. (2-65) and solving for X(s), we get

    X(s) = (-s² - s + 5) / [s(s² + 3s + 2)] = (-s² - s + 5) / [s(s + 1)(s + 2)]        (2-66)

Equation (2-66) is expanded by partial-fraction expansion to give

    X(s) = 5/(2s) - 5/(s + 1) + 3/[2(s + 2)]        (2-67)

Now taking the inverse Laplace transform of the last equation, we get the complete solution as

    x(t) = 5/2 - 5e^(-t) + (3/2) e^(-2t),   t ≥ 0        (2-68)

The first term in the last equation is the steady-state solution, and the last two terms are the transient solution. Unlike the classical method, which requires separate steps to give the transient and the steady-state solutions, the Laplace transform method gives the entire solution of the differential equation in one operation.

If only the magnitude of the steady-state solution is of interest, the final-value theorem may be applied. Thus

    lim (t→∞) x(t) = lim (s→0) sX(s) = lim (s→0) (-s² - s + 5)/(s² + 3s + 2) = 5/2        (2-69)

where we have first checked and found that the function sX(s) has poles only in the left half of the s-plane.
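The solution of Eq. (2-68) can be verified by substituting it back into the differential equation and the initial conditions of Example 2-7:

```python
# Check that x(t) = 5/2 - 5e^{-t} + (3/2)e^{-2t} satisfies
# x'' + 3x' + 2x = 5 with x(0+) = -1 and x'(0+) = 2.
import numpy as np

def x(t):   return 2.5 - 5.0 * np.exp(-t) + 1.5 * np.exp(-2.0 * t)
def xd(t):  return 5.0 * np.exp(-t) - 3.0 * np.exp(-2.0 * t)
def xdd(t): return -5.0 * np.exp(-t) + 6.0 * np.exp(-2.0 * t)

t = np.linspace(0.0, 10.0, 101)
residual = xdd(t) + 3.0 * xd(t) + 2.0 * x(t) - 5.0

print(x(0.0), xd(0.0))            # -1.0 2.0, the initial conditions
print(np.max(np.abs(residual)))   # ~0: the equation is satisfied
```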
Example 2-8

Consider the linear differential equation

    d²x(t)/dt² + 34.5 dx(t)/dt + 1000x(t) = 1000u_s(t)        (2-70)

where u_s(t) is the unit step function. The initial values of x(t) and dx(t)/dt are assumed to be zero. Taking the Laplace transform on both sides of Eq. (2-70) and applying the zero initial conditions, we have

    s²X(s) + 34.5sX(s) + 1000X(s) = 1000/s        (2-71)

Solving for X(s) from the last equation, we obtain

    X(s) = 1000 / [s(s² + 34.5s + 1000)]        (2-72)

The poles of X(s) are at s = 0, s = -17.25 + j26.5, and s = -17.25 - j26.5. Therefore, Eq. (2-72) can be written as

    X(s) = 1000 / [s(s + 17.25 - j26.5)(s + 17.25 + j26.5)]        (2-73)

One way of solving for x(t) is to perform the partial-fraction expansion of Eq. (2-73), giving

    X(s) = 1/s + [31.6/(2 × 26.5)] { e^(-j(θ+π/2))/(s + 17.25 - j26.5) + e^(j(θ+π/2))/(s + 17.25 + j26.5) }        (2-74)

where

    θ = tan⁻¹[26.5/(-17.25)] = -56.9°        (2-75)

Then, using Eq. (2-61),

    x(t) = 1 + 1.193 e^(-17.25t) sin(26.5t - θ)        (2-76)

Another approach is to compare Eq. (2-72) with Eq. (2-52), so that

    ωn = √1000 = 31.6        (2-77)

and

    ζ = 34.5/(2ωn) = 0.546        (2-78)

and the solution to x(t) is given directly by Eq. (2-62).
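The numbers in Example 2-8 are easy to confirm numerically:

```python
# Poles of s^2 + 34.5 s + 1000 and the corresponding natural
# frequency and damping ratio of Eqs. (2-77) and (2-78).
import numpy as np

wn = np.sqrt(1000.0)          # Eq. (2-77): about 31.6
zeta = 34.5 / (2.0 * wn)      # Eq. (2-78): about 0.546
poles = np.roots([1.0, 34.5, 1000.0])

print(wn, zeta)               # ≈ 31.62  0.5455
print(poles)                  # ≈ -17.25 ± j26.5
```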
2.6 Elementary Matrix Theory
In the study of modern control theory it is often desirable to use matrix notation to simplify complex mathematical expressions. Matrix notation may not reduce the amount of work required to solve the mathematical equations, but it usually makes the equations much easier to handle and manipulate.
As a motivation for the use of matrix notation, let us consider the following set of n simultaneous algebraic equations:

    a11 x1 + a12 x2 + ⋯ + a1n xn = y1
    a21 x1 + a22 x2 + ⋯ + a2n xn = y2
      ⋮
    an1 x1 + an2 x2 + ⋯ + ann xn = yn        (2-79)

We may use the matrix equation

    Ax = y        (2-80)

as a simplified representation for Eq. (2-79). The symbols A, x, and y are defined as matrices, which contain the coefficients and variables of the original equations as their elements. In terms of matrix algebra, which will be discussed later, Eq. (2-80) can be stated as: The product of the matrices A and x is equal to the matrix y. The three matrices involved here are defined to be

    A = [ a11  a12  ⋯  a1n ]
        [ a21  a22  ⋯  a2n ]        (2-81)
        [  ⋮    ⋮        ⋮  ]
        [ an1  an2  ⋯  ann ]

    x = [ x1 ]
        [ x2 ]        (2-82)
        [  ⋮ ]
        [ xn ]

    y = [ y1 ]
        [ y2 ]        (2-83)
        [  ⋮ ]
        [ yn ]

which are simply bracketed arrays of coefficients and variables. Thus, we can define a matrix as follows.
Definition of a Matrix

A matrix is a collection of elements arranged in a rectangular or square array. Several ways of representing a matrix are as follows:

    A = [ 0   3  10 ]        A = ( 0   3  10 )        A = [a_ij]_(2,3)
        [ 1  -2   0 ]            ( 1  -2   0 )

In this text we shall use square brackets to represent the matrix.
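The bracketed arrays above map directly onto arrays in numerical software. This sketch forms the product Ax = y of Eq. (2-80) for an arbitrary illustrative 3 × 3 system (the numbers are not from the text):

```python
# A 3 x 3 coefficient matrix A, an unknown vector x, and y = Ax.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # plays the role of Eq. (2-81)
x = np.array([1.0, 2.0, 3.0])     # plays the role of Eq. (2-82)

y = A @ x                         # the matrix product of Eq. (2-80)
print(y)                          # [ 4. 10.  8.]
```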
28
/
Chap. 2
Mathematical Foundation
It is
important to distinguish between a matrix and a determinant: Determinant
Matrix
array of numbers or elements with n rows and n columns {always
An
array of numbers or elements columns. and
An
m
with n rows
Does not have a
value, although
square matrix (n
=
m) has a
square),
a
Has a
deter-
value,
minant. definitions of matrices are given in the following.
Some important
When
Matrix elements.
au
is
As a
a matrix
is
an
a, 2
a 13
#21
fl
22
a 23
a 31
a 32
a 31
identified as the element in the /th rule,
written
we always refer to the row
(2-84)
row and thejth column of the matrix. and the column last.
first
of Order of a matrix. The order of a matrix refers to the total number three has in Eq. (2-84) matrix the example, rows and columns of the matrix. For matrix. rows and three columns and, therefore, is called a 3 X 3 (three by three) "« by m." or "n m" x termed is columns and m In general, a matrix with n rows
Square matrix.
A
square matrix
is
one that has the same number of rows
as columns.
Column matrix. A column matrix than one row, that
is,
an
m X
1 matrix,
one that has one column and more
is
m>
1.
simply Quite often, a column matrix is referred to as a column vector or column typical is a (2-82) in Eq. matrix The rows. m an m-vector if there are matrix that is n X 1, or an n-vector.
Row
column, that
is,
A
row matrix
is
a 1 X n matrix.
A
matrix.
one that has one row and more than one row matrix can also be referred to as a row
vector.
Diagonal matrix. all
i
A
diagonal matrix
is
a square matrix with a tj
=
for
^j. Examples of a diagonal matrix are Tfln
"
"5
0"
(2-85)
a 22
3
a 33 . matrix with Unity matrix (Identity matrix). A unity matrix is a diagonal is often matrix unity to 1. equal diagonal (i =j) main all the elements on the matrix is unity of a example An designated by I or U.
A
0"
Tl 1 =
(2-86)
1 1
Sec. 2.6
Elementary Matrix Theory / 29
Null matrix. A null matrix is one whose elements are all equal to zero; for example,

    O = [ 0  0 ]
        [ 0  0 ]                                           (2-87)

Symmetric matrix. A symmetric matrix is a square matrix that satisfies the condition

    a_ij = a_ji                                            (2-88)

for all i and j. A symmetric matrix has the property that if its rows are interchanged with its columns, the same matrix is preserved. Two examples of symmetric matrices are

    [ 6   5   1 ]          [  1  -4 ]
    [ 5   0  10 ]          [ -4   1 ]                      (2-89)
    [ 1  10  -1 ]
Determinant of a matrix. With each square matrix a determinant having the same elements and order as the matrix may be defined. The determinant of a square matrix A is designated by

    det A = Δ_A = |A|                                      (2-90)

As an illustrative example, consider the matrix

    A = [  1  -1   0 ]
        [  3   2   1 ]                                     (2-91)
        [ -1   1  -1 ]

The determinant of A is

    |A| = |  1  -1   0 |
          |  3   2   1 | = -5                              (2-92)
          | -1   1  -1 |
Singular matrix. A square matrix is said to be singular if the value of its determinant is zero. On the other hand, if a square matrix has a nonzero determinant, it is called a nonsingular matrix.

When a matrix is singular, it usually means that not all the rows or not all the columns of the matrix are independent of each other. When the matrix is used to represent a set of algebraic equations, singularity of the matrix means that these equations are not independent of each other. As an illustrative example, let us consider the following set of equations:

    2x1 - 3x2 +  x3 = 0
    -x1 +  x2 +  x3 = 0                                    (2-93)
     x1 - 2x2 + 2x3 = 0

Note that in Eq. (2-93) the third equation is equal to the sum of the first two equations. Therefore, these three equations are not completely independent. In matrix form, these equations may be represented by

    AX = O
where

    A = [  2  -3   1 ]
        [ -1   1   1 ]                                     (2-94)
        [  1  -2   2 ]

    X = [ x1 ]
        [ x2 ]                                             (2-95)
        [ x3 ]

and O is a 3 x 1 null matrix. The determinant of A is

    |A| = |  2  -3   1 |
          | -1   1   1 | = 8 - 9 + 1 = 0                   (2-96)
          |  1  -2   2 |

Therefore, the matrix A of Eq. (2-94) is singular. In this case the rows of A are dependent.

Transpose of a matrix. The transpose of a matrix A is defined as the matrix that is obtained by interchanging the corresponding rows and columns in A. Let A be an n x m matrix which is represented by

    A = [a_ij]_{n,m}                                       (2-97)

Then the transpose of A, denoted by A', is given by

    A' = transpose of A = [a_ji]_{m,n}                     (2-98)

Notice that the order of A is n x m; the transpose of A has an order m x n.

Example 2-9. As an example of the transpose of a matrix, consider the matrix

    A = [ 3   2   1 ]
        [ 0  -1   5 ]

The transpose of A is given by

    A' = [ 3   0 ]
         [ 2  -1 ]
         [ 1   5 ]
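The singularity test can be sketched numerically. The snippet below is an illustrative check (not from the text) that the matrix A of Eq. (2-94) has zero determinant and a dependent third row; `det3` is a hypothetical helper implementing cofactor expansion.

```python
# Sketch: verify that A of Eq. (2-94) is singular because its rows are
# linearly dependent (row 3 = row 1 + row 2).

def det3(m):
    """Determinant of a 3 x 3 matrix by cofactor expansion along row 1."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[2, -3, 1],
     [-1, 1, 1],
     [1, -2, 2]]

print(det3(A))                       # 0 -> A is singular
row_sum = [A[0][j] + A[1][j] for j in range(3)]
print(row_sum == A[2])               # True: row 3 = row 1 + row 2
```

Because the determinant vanishes, the three equations of Eq. (2-93) carry only two independent constraints.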
Skew-symmetric matrix. A skew-symmetric matrix is a square matrix that equals its negative transpose; that is,

    A = -A'                                                (2-99)

Some Operations of a Matrix Transpose

    1. (A')' = A                                           (2-100)
    2. (kA)' = kA', where k is a scalar                    (2-101)
    3. (A + B)' = A' + B'                                  (2-102)
    4. (AB)' = B'A'                                        (2-103)
Adjoint of a matrix. Let A be a square matrix of order n. The adjoint matrix of A, denoted by adj A, is the n x n matrix whose element in the ith row and jth column is the cofactor of a_ji; that is, adj A is the transpose of the matrix of cofactors of A.

Conjugate matrix. Given a matrix A whose elements are represented by a_ij, the conjugate of A, denoted by Ā, is obtained by replacing the elements of A by their complex conjugates; that is,

    Ā = conjugate matrix of A = [ā_ij]                     (2-108)

where ā_ij = complex conjugate of a_ij.

2.7 Matrix Algebra

When carrying out matrix operations it is necessary to define matrix algebra in the form of addition, subtraction, multiplication, division, and other necessary operations. It is important to point out at this stage that matrix operations are defined independently of the algebraic operations for scalar quantities.
Equality of Matrices

Two matrices A and B are said to be equal to each other if they satisfy the following conditions:

    1. They are of the same order.
    2. The corresponding elements are equal; that is,

           a_ij = b_ij     for every i and j

For example,

    A = [ a11  a12 ]  =  B = [ b11  b12 ]                  (2-109)
        [ a21  a22 ]         [ b21  b22 ]

implies that a11 = b11, a12 = b12, a21 = b21, and a22 = b22.
Addition of Matrices

Two matrices A and B can be added to form A + B if they are of the same order. Then

    A + B = [a_ij]_{n,m} + [b_ij]_{n,m} = C = [c_ij]_{n,m}          (2-110)

where

    c_ij = a_ij + b_ij                                     (2-111)

for all i and j. The order of the matrices is preserved after addition.

Example 2-12. As an illustrative example, consider the two matrices

    A = [  3   2 ]        B = [  0   3 ]
        [ -1   4 ]            [ -1   2 ]
        [  0  -1 ]            [  1   0 ]

which are of the same order. Then the sum of A and B is given by

    C = A + B = [  3+0   2+3 ]   [  3   5 ]
                [ -1-1   4+2 ] = [ -2   6 ]                (2-112)
                [  0+1  -1+0 ]   [  1  -1 ]
Matrix Subtraction

The rules governing the subtraction of matrices are similar to those of matrix addition. In other words, Eqs. (2-110) and (2-111) are true if all the plus signs are replaced by minus signs. Or,

    C = A - B = [a_ij]_{n,m} - [b_ij]_{n,m} = [a_ij]_{n,m} + [-b_ij]_{n,m} = [c_ij]_{n,m}      (2-113)

where

    c_ij = a_ij - b_ij                                     (2-114)

for all i and j.

Associative Law of Matrix (Addition and Subtraction)

The associative law of scalar algebra still holds for matrix addition and subtraction. Therefore,

    (A + B) + C = A + (B + C)                              (2-115)

Commutative Law of Matrix (Addition and Subtraction)

The commutative law for matrix addition and subtraction states that the following matrix relationship is true:

    A + B + C = B + C + A = A + C + B                      (2-116)
=A+C+B Matrix Multiplication
The matrices A and B may be multiplied together to form the product AB they are conformable. This means that the number of columns of A must equal the number of rows of B. In other words, let if
B= Then
A
and
B
[*>„],,„,
are conformable to form the product
C = AB = [a„]„ P [b,j\q m = [c J„, The matrix C will have the same number of rows ,
and only ifp = q. the same number of columns if
(2-H7) as
A and
as B.
important to note that A and B may be conformable for AB but they not be conformable for the product BA, unless in Eq. (2-117) n also equals m. This points out an important fact that the commutative law is not generally valid for matrix multiplication. It is also noteworthy that even though A and B are conformable for both AB and BA, usually AB BA. In general, It is
may
^
ing references are exist:
made with
the followrespect to matrix multiplication whenever thev J
AB = A postmultiplied
AB = B premultiplied
by
by
B
A
Having established the condition for matrix multiplication, let us now turn to the rule of matrix multiplication. When the matrices A and B are conformable to form the matrix C = AB as in Eq. (2-117), the ijth element of C, c_ij, is given by

    c_ij = Σ_{k=1}^{p} a_ik b_kj

Although the commutative law does not hold in general for matrix multiplication, the associative and the distributive laws are valid. For the distributive law, we state that

    A(B + C) = AB + AC                                     (2-122)

if the products are conformable. For the associative law,

    (AB)C = A(BC)                                          (2-123)

if the product is conformable.
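The multiplication rule and the failure of the commutative law can be illustrated with a short sketch. The `matmul` helper and the matrices below are illustrative choices, not taken from the text.

```python
# Sketch: the rule c_ij = sum_k a_ik * b_kj, plus a check that AB != BA
# even when both products are conformable (here both are 2 x 2).

def matmul(A, B):
    n, p, m = len(A), len(B), len(B[0])
    assert len(A[0]) == p, "A and B are not conformable"
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]

print(matmul(A, B))   # [[2, 1], [4, 3]]
print(matmul(B, A))   # [[3, 4], [1, 2]]  -> AB != BA
```

Here B merely swaps columns (when postmultiplying) or rows (when premultiplying), which makes the non-commutativity easy to see.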
Multiplication by a Scalar k

Multiplying a matrix A by any scalar k is equivalent to multiplying each element of A by k. Therefore, if

    A = [a_ij]_{n,m}

then

    kA = [k a_ij]_{n,m}                                    (2-124)
Inverse of a Matrix (Matrix Division)

In the algebra of scalar quantities, when we write

    ax = y                                                 (2-125)

it leads to

    x = (1/a) y                                            (2-126)

or

    x = a^{-1} y                                           (2-127)

Equations (2-126) and (2-127) are notationally equivalent. In matrix algebra, if

    Ax = y                                                 (2-128)

then it may be possible to write

    x = A^{-1} y                                           (2-129)

where A^{-1} denotes the "inverse of A." The conditions for A^{-1} to exist are:

    1. A is a square matrix.
    2. A must be nonsingular.
If A^{-1} exists, it is given by

    A^{-1} = adj A / |A|                                   (2-130)

Example 2-15. Given the matrix

    A = [ a11  a12 ]                                       (2-131)
        [ a21  a22 ]

the inverse of A is given by

    A^{-1} = adj A / |A| = [  a22  -a12 ]  /  (a11 a22 - a12 a21)        (2-132)
                           [ -a21   a11 ]

where, for A to be nonsingular, |A| ≠ 0, or

    a11 a22 - a12 a21 ≠ 0                                  (2-133)

If we pay attention to the adjoint matrix of A, which is the numerator of A^{-1}, we see that, for a 2 x 2 matrix, adj A is obtained by interchanging the two elements on the main diagonal and changing the signs of the elements on the off diagonal of A.
Example 2-16. Given the matrix

    A = [ 1  1  0 ]
        [ 0  1  1 ]                                        (2-134)
        [ 1  0  1 ]

the determinant of A is

    |A| = 2                                                (2-135)

Therefore, A has an inverse matrix, and A^{-1} is given by

    A^{-1} = (1/2) [  1  -1   1 ]
                   [  1   1  -1 ]                          (2-136)
                   [ -1   1   1 ]
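The 2 x 2 rule of Eq. (2-132) can be sketched directly. The matrix below is an arbitrary nonsingular example, not from the text, and `inv2` is a hypothetical helper.

```python
# Sketch of Eq. (2-132): inverse of a nonsingular 2 x 2 matrix as
# adj A / |A|: swap the main-diagonal elements, negate the off-diagonal.

def inv2(A):
    (a11, a12), (a21, a22) = A
    det = a11 * a22 - a12 * a21
    assert det != 0, "A is singular"
    return [[a22 / det, -a12 / det],
            [-a21 / det, a11 / det]]

A = [[4, 7], [2, 6]]
Ainv = inv2(A)                       # [[0.6, -0.7], [-0.2, 0.4]]

# Check property (2-137): A A^{-1} = I (up to rounding)
I = [[sum(A[i][k] * Ainv[k][j] for k in range(2)) for j in range(2)]
     for i in range(2)]
print(I)                             # approximately the identity matrix
```

The same adj A / |A| construction extends to higher orders, but the cofactors are then no longer a simple swap-and-negate pattern.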
Some Properties of the Matrix Inverse

    1. AA^{-1} = A^{-1}A = I                               (2-137)
    2. (A^{-1})^{-1} = A                                   (2-138)
    3. In matrix algebra, in general,

           AB = AC                                         (2-139)

       does not necessarily imply B = C. The reader can easily construct an example to illustrate this property. However, if A is a square matrix and is nonsingular, we can premultiply both sides of Eq. (2-139) by A^{-1}. Then

           A^{-1}AB = A^{-1}AC                             (2-140)

       or

           IB = IC                                         (2-141)

       which leads to B = C.
    4. If A and B are square matrices and are nonsingular, then

           (AB)^{-1} = B^{-1}A^{-1}                        (2-142)
Rank of a Matrix

The rank of a matrix A is the maximum number of linearly independent columns of A; or, it is the order of the largest nonsingular matrix contained in A. Several examples on the rank of a matrix are as follows:

    rank [ 0  1 ] = 1        rank [ 0  5  1  4 ] = 2
         [ 0  0 ]                 [ 3  1  3  2 ]

    rank [ 3  9  2 ] = 2     rank [ 3  0  0 ] = 3
         [ 1  3  0 ]              [ 1  2  0 ]
         [ 2  6  1 ]              [ 0  0  1 ]

The following properties on rank are useful in the determination of the rank of a matrix. Given an n x m matrix A,

    1. Rank of A = rank of A'.
    2. Rank of A = rank of A'A.
    3. Rank of A = rank of AA'.

Properties 2 and 3 are useful in the determination of rank; since A'A and AA' are always square, the rank condition can be checked by evaluating the determinants of these matrices.
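A rank computation can be sketched by Gaussian elimination. The `rank` helper below is an illustrative implementation (not from the text); it is used to check two of the examples above and property 2.

```python
# Sketch: rank by row reduction, plus a check of rank A = rank A'A.

def rank(M, tol=1e-9):
    M = [row[:] for row in M]        # work on a copy
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if abs(M[i][c]) > tol), None)
        if pivot is None:
            continue                  # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(rows):
            if i != r:
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

A = [[0, 1], [0, 0]]
print(rank(A))           # 1
B = [[3, 9, 2], [1, 3, 0], [2, 6, 1]]
print(rank(B))           # 2
# Property 2: rank B = rank B'B
Bt = [list(col) for col in zip(*B)]
BtB = [[sum(Bt[i][k] * B[k][j] for k in range(3)) for j in range(3)]
       for i in range(3)]
print(rank(BtB))         # 2
```

Since B'B is square, its singularity (zero determinant) alone already signals that B has less than full rank.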
Quadratic Forms

Consider the scalar function

    f(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j            (2-143)

which is called the quadratic form. We can write this equation as

    f(x) = Σ_{i=1}^{n} x_i Σ_{j=1}^{n} a_ij x_j            (2-144)

Let

    y_i = Σ_{j=1}^{n} a_ij x_j                             (2-145)

Then Eq. (2-144) becomes

    f(x) = Σ_{i=1}^{n} x_i y_i                             (2-146)

Now if we define

    x = [ x1 ]        y = [ y1 ]
        [ x2 ]            [ y2 ]
        [ ... ]           [ ... ]
        [ xn ]            [ yn ]

Eq. (2-146) can be written

    f(x) = x'y                                             (2-147)

and, from Eq. (2-145),

    y = Ax                                                 (2-148)

where

    A = [a_ij]_{n,n}                                       (2-149)

Finally, f(x) becomes

    f(x) = x'Ax                                            (2-150)

Since the coefficient of x_i x_j in Eq. (2-143) is a_ij + a_ji for i ≠ j, given any matrix A we can always define a symmetric matrix B such that

    b_ij = b_ji = (a_ij + a_ji)/2,     i ≠ j               (2-151)

In other words, given any quadratic form as in Eq. (2-150), we can always replace A with a symmetric matrix. The quadratic forms are often used as performance indices in control systems design, since they usually lead to mathematical conveniences in the design algorithms.
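The symmetrization of Eq. (2-151) can be checked with a small sketch. The matrix and vector below are arbitrary illustrative choices.

```python
# Sketch of Eqs. (2-150)-(2-151): x'Ax is unchanged when A is replaced
# by the symmetric matrix B with b_ij = b_ji = (a_ij + a_ji)/2.

def quad(A, x):
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

A = [[1, 4], [2, 3]]
B = [[(A[i][j] + A[j][i]) / 2 for j in range(2)] for i in range(2)]

x = [2.0, -1.0]
print(quad(A, x), quad(B, x))   # -5.0 -5.0  -> identical quadratic forms
```

This is why definiteness tests may always assume a symmetric matrix without loss of generality.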
Definiteness

Positive definite. An n x n matrix A is said to be positive definite if all the roots of the equation

    |λI - A| = 0                                           (2-152)

are positive. Equation (2-152) is called the characteristic equation of A, and its roots are referred to as the eigenvalues of A.

Positive semidefinite. The matrix A (n x n) is positive semidefinite if all its eigenvalues are nonnegative and at least one of the eigenvalues is zero.

Negative definite. The matrix A (n x n) is negative definite if all its eigenvalues are negative.

Negative semidefinite. The matrix A (n x n) is negative semidefinite if all its eigenvalues are nonpositive and at least one of the eigenvalues is zero.

Indefinite. The matrix A (n x n) is indefinite if some of the eigenvalues are negative and some are positive.

An alternative way of testing the definiteness of a square matrix is to check the signs of all the leading principal minors of the matrix. The leading principal minors of an n x n matrix A are defined as follows. Given the square matrix

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [  .    .          . ]
        [ an1  an2  ...  ann ]

the n leading principal minors are the determinants

    a11,    | a11  a12 |,    | a11  a12  a13 |,    ...,    |A|
            | a21  a22 |     | a21  a22  a23 |
                             | a31  a32  a33 |

Then the definiteness of A is determined as follows:

A is positive definite if all the leading principal minors of A are positive. A is negative definite if the leading principal minors alternate in sign, with a11 < 0; equivalently, if all the leading principal minors of -A are positive.

A is positive semidefinite if |A| = 0 and all the leading principal minors of A are nonnegative.

A is negative semidefinite if |A| = 0 and all the leading principal minors of -A are nonnegative.

We may also refer to the definiteness of the quadratic form x'Ax. The quadratic form x'Ax (A symmetric) is positive definite (positive semidefinite, negative definite, negative semidefinite) if the matrix A is positive definite (positive semidefinite, negative definite, negative semidefinite).
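The leading-principal-minor test can be sketched for a concrete matrix. The tridiagonal matrix below is an arbitrary illustrative choice, and both helpers are hypothetical.

```python
# Sketch: classify a symmetric matrix by its leading principal minors
# (positive definite when every leading minor is positive).

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def leading_minors(A):
    return [det([row[:k] for row in A[:k]]) for k in range(1, len(A) + 1)]

A = [[2, -1, 0],
     [-1, 2, -1],
     [0, -1, 2]]

minors = leading_minors(A)
print(minors)                        # [2, 3, 4] -> all positive
print(all(m > 0 for m in minors))    # True -> A is positive definite
```

For negative definiteness one would instead expect the minors to alternate in sign starting with a negative first minor.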
2.8
z-Transform
The Laplace transform is a powerful tool for the analysis and design of linear time-invariant control systems with continuous data. However, for linear systems with sampled or discrete data, we may find that the z-transform is more appropriate.

Let us first consider the analysis of a discrete-data system which is represented by the block diagram of Fig. 2-3. One way of describing the discrete nature of the signals is to consider that the input and the output of the system are sequences of numbers. These numbers are spaced T seconds apart. Thus, the input sequence and the output sequence may be represented by r(kT) and c(kT), respectively, k = 0, 1, 2, .... To represent these input and output sequences by time-domain expressions, the numbers are represented by impulse functions in such a way that the strengths of the impulses correspond to the values of these numbers at the corresponding time instants. In this way, the input sequence is expressed as a train of impulses,

    r*(t) = Σ_{k=0}^{∞} r(kT) δ(t - kT)                    (2-153)

A similar expression can be written for the output sequence.
[Fig. 2-3. Block diagram of a discrete-data system.]

[Fig. 2-4. Block diagram of a finite-pulsewidth sampler.]
Another type of system that has discontinuous signals is the sampled-data system. A sampled-data system is characterized by having samplers in the system. A sampler is a device that converts continuous data into some form of sampled data. For example, Fig. 2-4 shows the block diagram of a typical sampler that closes for a very short duration of p seconds once every T seconds. This is referred to as a sampler with a uniform sampling period T and a finite sampling duration p. Figure 2-5 illustrates a set of typical input and output signals of the sampler. With the notation of Figs. 2-4 and 2-5, the output of the finite-pulse-duration sampler is written

    r_p(t) = r(t) Σ_{k=0}^{∞} [u_s(t - kT) - u_s(t - kT - p)]       (2-154)

where u_s(t) is the unit step function.
[Fig. 2-5. Input and output signals of a finite-pulsewidth sampler.]
For small p, that is, p << T, the narrow-width pulses of Fig. 2-5 may be approximated by flat-topped pulses. In other words, Eq. (2-154) can be written

    r_p(t) ≅ Σ_{k=0}^{∞} r(kT)[u_s(t - kT) - u_s(t - kT - p)]       (2-155)

Multiplying both sides of Eq. (2-155) by 1/p and taking the limit as p approaches zero, we have

    lim_{p→0} (1/p) r_p(t) = lim_{p→0} Σ_{k=0}^{∞} (1/p) r(kT)[u_s(t - kT) - u_s(t - kT - p)]
                           = Σ_{k=0}^{∞} r(kT) δ(t - kT)

or

    lim_{p→0} (1/p) r_p(t) = r*(t)                         (2-156)

In arriving at this result we have made use of the fact that

    δ(t) = lim_{p→0} (1/p)[u_s(t) - u_s(t - p)]            (2-157)

The significance of Eq. (2-156) is that the output of the finite-pulsewidth sampler can be approximated by a train of impulses if the pulsewidth approaches zero in the limit. A sampler whose output is a train of impulses, with the strength of each impulse equal to the magnitude of the input at the corresponding sampling instant, is called an ideal sampler. Figure 2-6 shows the block diagram of an ideal sampler connected in cascade with a constant factor p, so that the combination is an approximation to the finite-pulsewidth sampler of Fig. 2-4 if p is very small. Figure 2-7 illustrates the typical input and output signals of an ideal sampler; the arrows are used to represent impulses, with the heights representing the strengths (or areas) of the latter.
[Fig. 2-6. Approximation of a finite-pulsewidth sampler by an ideal sampler and a cascade constant factor.]

[Fig. 2-7. Input and output signals of an ideal sampler.]
In view of these considerations we may now use the ideal sampler to represent the discrete data r(kT). This points to the fact that the signals of the system in Fig. 2-3 can essentially be treated as outputs of ideal samplers.

Now we are ready to investigate the application of transform methods to discrete and sampled-data systems. Taking the Laplace transform on both sides of Eq. (2-153), we have

    R*(s) = Σ_{k=0}^{∞} r(kT) e^{-kTs}                     (2-158)

The fact that Eq. (2-158) contains the exponential term e^{-kTs} reveals the difficulty of using the Laplace transform for the general treatment of discrete-data systems, since the transfer function relations will no longer be algebraic as in the continuous-data case. Although it is conceptually simple to perform the inverse Laplace transform on algebraic transfer relations, it is not a simple matter to perform the inverse Laplace transform on transcendental functions. One simple fact is that the commonly used Laplace transform tables do not have entries with transcendental functions in s. This necessitates the use of the z-transform. Our motivation here for the generation of the z-transform is simply to convert transcendental functions in s into algebraic ones in z. The definition of the z-transform is given with this objective in mind.
Definition of the z-Transform

The z-transform is defined through the change of variable

    z = e^{Ts}                                             (2-159)

where s is the Laplace transform variable and T is the sampling period. Equation (2-159) also leads to

    s = (1/T) ln z                                         (2-160)

Using Eq. (2-159), the expression in Eq. (2-158) is written

    R*(s = (1/T) ln z) = R(z) = Σ_{k=0}^{∞} r(kT) z^{-k}   (2-161)

or

    R(z) = z-transform of r*(t)
         = [Laplace transform of r*(t)]_{s = (1/T) ln z}   (2-162)

Therefore, we have treated the z-transform as simply a change in variable, z = e^{Ts}.
The following examples illustrate some of the simple z-transform operations.

Example 2-17. Consider the sequence

    r(kT) = e^{-akT},     k = 0, 1, 2, ...                 (2-163)

where a is a constant. From Eq. (2-153),

    r*(t) = Σ_{k=0}^{∞} e^{-akT} δ(t - kT)                 (2-164)

Then

    R*(s) = Σ_{k=0}^{∞} e^{-akT} e^{-kTs}                  (2-165)

Multiply both sides of Eq. (2-165) by e^{-(s+a)T} and subtract the resulting equation from Eq. (2-165); we can then easily show that R*(s) can be written in the closed form

    R*(s) = 1 / (1 - e^{-(s+a)T})                          (2-166)

for |e^{-(s+a)T}| < 1, where σ is the real part of s. The z-transform of r*(t) is

    R(z) = 1 / (1 - e^{-aT} z^{-1})     for |e^{-aT} z^{-1}| < 1       (2-167)

or

    R(z) = z / (z - e^{-aT})                               (2-168)
Example 2-18. In Example 2-17, if a = 0, we have

    r(kT) = 1,     k = 0, 1, 2, ...                        (2-169)

which represents a sequence of numbers all equal to unity. Then

    R*(s) = Σ_{k=0}^{∞} e^{-kTs}                           (2-170)

and

    R(z) = Σ_{k=0}^{∞} z^{-k} = 1 + z^{-1} + z^{-2} + z^{-3} + ...     (2-171)

This expression is written in closed form as

    R(z) = 1 / (1 - z^{-1}),     |z^{-1}| < 1              (2-172)

or

    R(z) = z / (z - 1),     |z^{-1}| < 1                   (2-173)
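The closed forms of Examples 2-17 and 2-18 can be checked by summing the defining series of Eq. (2-161) numerically. The values of a, T, and z below are arbitrary choices inside the region of convergence.

```python
# Sketch: a truncated sum of Eq. (2-161) approaches the closed forms
# R(z) = z/(z - e^{-aT}) and R(z) = z/(z - 1).
import math

def z_transform_partial(r, z, terms=200):
    """Truncated z-transform: sum_{k=0}^{terms-1} r(k) z^{-k}."""
    return sum(r(k) * z ** (-k) for k in range(terms))

a, T, z = 0.5, 1.0, 2.0

# Example 2-17: r(kT) = e^{-akT}
approx = z_transform_partial(lambda k: math.exp(-a * k * T), z)
exact = z / (z - math.exp(-a * T))
print(abs(approx - exact) < 1e-9)          # True

# Example 2-18: r(kT) = 1
approx1 = z_transform_partial(lambda k: 1.0, z)
print(abs(approx1 - z / (z - 1)) < 1e-9)   # True
```

Two hundred terms suffice here because each geometric ratio has magnitude well below 1 at z = 2.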
In general, the z-transforms of more complex functions are obtained by use of the same procedure as described in the preceding two examples. If a time function r(t) is given as the starting point, the procedure of finding its z-transform is to first form the sequence r(kT) and then use Eq. (2-161) to get R(z). An equivalent interpretation of this step is to send the signal r(t) through an ideal sampler whose output is r*(t). We then take the Laplace transform of r*(t) to give R*(s) as in Eq. (2-158), and R(z) is obtained by substituting z for e^{Ts}.
Table 2-1  Table of z-Transforms

    Laplace Transform         Time Function                z-Transform
    -----------------         -------------                -----------
    1                         unit impulse δ(t)            1
    1/(1 - e^{-Ts})           δ_T(t) = Σ_{n=0}^{∞} δ(t-nT) z/(z - 1)
    1/s                       unit step u_s(t)             z/(z - 1)
    1/s^2                     t                            Tz/(z - 1)^2
    1/s^3                     t^2/2                        T^2 z(z + 1)/[2(z - 1)^3]
    1/s^{n+1}                 t^n/n!                       lim_{a→0} [(-1)^n/n!] d^n/da^n [z/(z - e^{-aT})]
    1/(s + a)                 e^{-at}                      z/(z - e^{-aT})
    1/(s + a)^2               t e^{-at}                    Tz e^{-aT}/(z - e^{-aT})^2
    a/[s(s + a)]              1 - e^{-at}                  (1 - e^{-aT})z/[(z - 1)(z - e^{-aT})]
    ω/(s^2 + ω^2)             sin ωt                       z sin ωT/(z^2 - 2z cos ωT + 1)
    s/(s^2 + ω^2)             cos ωt                       z(z - cos ωT)/(z^2 - 2z cos ωT + 1)
    ω/[(s + a)^2 + ω^2]       e^{-at} sin ωt               z e^{-aT} sin ωT/(z^2 - 2z e^{-aT} cos ωT + e^{-2aT})
    (s + a)/[(s + a)^2 + ω^2] e^{-at} cos ωt               (z^2 - z e^{-aT} cos ωT)/(z^2 - 2z e^{-aT} cos ωT + e^{-2aT})

Table 2-1 gives the z-transforms of some of the time functions commonly used in systems analysis. A more extensive table may be found in the literature.[12,13]
Inverse z-Transformation

Just as in the Laplace transformation, one of the major objectives of the z-transformation is that algebraic manipulations can be made first in the z-domain, and the final time response is then determined by the inverse z-transformation. In general, the inverse z-transformation of R(z) can yield information only on r(kT), not on r(t). In other words, the z-transform carries information only in a discrete fashion. When the time signal r(t) is sampled by the ideal sampler, only information on the signal at the sampling instants, t = kT, is retained. With this in mind, the inverse z-transformation can be effected by one of the following three methods:

    1. The partial-fraction expansion method.
    2. The power-series method.
    3. The inversion formula.
Partial-fraction expansion method. The z-transform function R(z) is expanded by partial-fraction expansion into a sum of simple recognizable terms, and the z-transform table is used to determine the corresponding r(kT). In carrying out the partial-fraction expansion, there is a slight difference between the z-transform and the Laplace transform procedures. With reference to the z-transform table, we note that practically all the transform functions have the term z in the numerator. Therefore, we should expand R(z) into the form

    R(z) = K1 z/(z - z1) + K2 z/(z - z2) + ...             (2-174)

For this, we should first expand R(z)/z into fractions and then multiply by z across to obtain the final desired expression. The following example will illustrate this recommended procedure.

Example 2-19. Given the z-transform function

    R(z) = (1 - e^{-aT}) z / [(z - 1)(z - e^{-aT})]        (2-175)

it is desired to find the inverse z-transform. Expanding R(z)/z by partial-fraction expansion, we have

    R(z)/z = 1/(z - 1) - 1/(z - e^{-aT})                   (2-176)

Thus,

    R(z) = z/(z - 1) - z/(z - e^{-aT})                     (2-177)

From the z-transform table of Table 2-1, the corresponding inverse z-transform of R(z) is found to be

    r(kT) = 1 - e^{-akT}                                   (2-178)
Power-series method. The z-transform R(z) is expanded into a power series in powers of z^{-1}. In view of Eq. (2-161), the coefficient of z^{-k} is the value of r(t) at t = kT, or simply r(kT). For example, for the R(z) in Eq. (2-175), we expand it into a power series in powers of z^{-1} by long division; then we have

    R(z) = (1 - e^{-aT}) z^{-1} + (1 - e^{-2aT}) z^{-2} + (1 - e^{-3aT}) z^{-3} + ...
         + (1 - e^{-akT}) z^{-k} + ...                     (2-179)

or

    R(z) = Σ_{k=0}^{∞} (1 - e^{-akT}) z^{-k}               (2-180)

Thus,

    r(kT) = 1 - e^{-akT}                                   (2-181)

which is the same result as in Eq. (2-178).
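The long division can be automated with synthetic division on the polynomial coefficients in w = z^{-1}. The sketch below (an illustration, not from the text) regenerates the coefficients of Eq. (2-179) and compares them with 1 - e^{-akT}; a and T are arbitrary choices.

```python
# Sketch: long division of R(z) = (1 - e^{-aT}) z / ((z - 1)(z - e^{-aT}))
# written in powers of w = z^{-1}:
#     R = (1 - b) w / (1 - (1 + b) w + b w^2),   b = e^{-aT}
import math

a, T, n = 0.7, 1.0, 8
b = math.exp(-a * T)
num = [0.0, 1 - b]                    # coefficients of w^0, w^1, ...
den = [1.0, -(1 + b), b]

coeffs = []
rem = num + [0.0] * n
for k in range(n):                    # synthetic long division
    c = rem[k] / den[0]
    coeffs.append(c)
    for i, d in enumerate(den):
        rem[k + i] -= c * d

expected = [1 - math.exp(-a * k * T) for k in range(n)]
print(max(abs(c - e) for c, e in zip(coeffs, expected)) < 1e-12)   # True
```

The coefficient of w^k is exactly the sample r(kT), which is the content of Eq. (2-180).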
Inversion formula. The time sequence r(kT) may be determined from R(z) by use of the inversion formula,

    r(kT) = (1/2πj) ∮_Γ R(z) z^{k-1} dz                    (2-182)

which is a contour integration along the path Γ, where Γ is a circle of radius |z| = e^{cT} centered at the origin in the z-plane, and c is of such a value that all the poles of R(z) are inside the circle.

One way of evaluating the contour integration of Eq. (2-182) is by use of the residue theorem of complex-variable theory. Equation (2-182) may be written

    r(kT) = (1/2πj) ∮_Γ R(z) z^{k-1} dz
          = Σ residues of R(z) z^{k-1} at the poles of R(z) z^{k-1}     (2-183)

For simple poles, the residue of R(z) z^{k-1} at the pole z = z_j is obtained as

    residue of R(z) z^{k-1} at z = z_j = (z - z_j) R(z) z^{k-1} |_{z=z_j}       (2-184)

Now let us consider the same function used in Example 2-19. The function R(z) of Eq. (2-175) has two poles: z = 1 and z = e^{-aT}. Using Eq. (2-183), we have

    r(kT) = [residue of R(z) z^{k-1} at z = 1] + [residue of R(z) z^{k-1} at z = e^{-aT}]

          = (1 - e^{-aT}) z^k / (z - e^{-aT}) |_{z=1} + (1 - e^{-aT}) z^k / (z - 1) |_{z=e^{-aT}}       (2-185)

          = 1 - e^{-akT}

which again agrees with the result obtained earlier.
Some Important Theorems of the z-Transformation

Some of the commonly used theorems of the z-transform are stated in the following without proof. Just as in the case of the Laplace transform, these theorems are useful in many aspects of z-transform analysis.

1. Addition and Subtraction

   If r1(kT) and r2(kT) have z-transforms R1(z) and R2(z), respectively, then

       Z[r1(kT) ± r2(kT)] = R1(z) ± R2(z)                  (2-186)

2. Multiplication by a Constant

       Z[a r(kT)] = a Z[r(kT)] = a R(z)                    (2-187)

   where a is a constant.

3. Real Translation

       Z[r(kT - nT)] = z^{-n} R(z)                         (2-188)

   and

       Z[r(kT + nT)] = z^n [R(z) - Σ_{k=0}^{n-1} r(kT) z^{-k}]       (2-189)

   where n is a positive integer. Equation (2-188) represents the z-transform of a time sequence that is shifted to the right by nT, and Eq. (2-189) denotes that of a time sequence shifted to the left by nT. The reason the right-hand side of Eq. (2-189) is not z^n R(z) is that the z-transform, similar to the Laplace transform, is defined only for k ≥ 0. Thus, the second term on the right-hand side of Eq. (2-189) simply represents the part of the sequence that is lost after it is shifted to the left by nT.
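The right-shift form of the real translation theorem can be checked numerically with truncated sums of Eq. (2-161). The parameter values below are arbitrary illustrative choices.

```python
# Sketch: numerical check of Eq. (2-188) for r(kT) = e^{-akT} delayed
# by n = 2 samples (the delayed sequence is zero for k < n).
import math

a, T, z, n, N = 0.4, 1.0, 1.5, 2, 400

def Z(seq):                          # truncated z-transform, Eq. (2-161)
    return sum(seq(k) * z ** (-k) for k in range(N))

r = lambda k: math.exp(-a * k * T)
r_delayed = lambda k: r(k - n) if k >= n else 0.0

lhs = Z(r_delayed)
rhs = z ** (-n) * Z(r)
print(abs(lhs - rhs) < 1e-9)         # True
```

The agreement is exact up to the (negligible) truncation tail, because delaying by n samples only multiplies each term of the series by z^{-n}.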
4. Complex Translation

       Z[e^{-akT} r(kT)] = R(z e^{aT})                     (2-190)

5. Initial-Value Theorem

       lim_{k→0} r(kT) = lim_{z→∞} R(z)                    (2-191)

   if the limit exists.

6. Final-Value Theorem

       lim_{k→∞} r(kT) = lim_{z→1} (1 - z^{-1}) R(z)       (2-192)

   if the function (1 - z^{-1}) R(z) has no poles on or outside the unit circle centered at the origin in the z-plane, |z| = 1.

The following examples illustrate the usefulness of these theorems.
Example 2-20. Apply the complex translation theorem to find the z-transform of f(t) = t e^{-at}, t ≥ 0.

Let r(t) = t u_s(t); then

    R(z) = Z[t u_s(t)] = Tz/(z - 1)^2                      (2-193)

Using the complex translation theorem,

    F(z) = Z[t e^{-at} u_s(t)] = R(z e^{aT}) = Tz e^{-aT}/(z - e^{-aT})^2      (2-194)

Example 2-21. Given the function

    R(z) = 0.792 z^2 / [(z - 1)(z^2 - 0.416z + 0.208)]     (2-195)

determine the value of r(kT) as k approaches infinity.

Since the function

    (1 - z^{-1}) R(z) = 0.792 z / (z^2 - 0.416z + 0.208)

does not have any pole on or outside the unit circle |z| = 1 in the z-plane, the final-value theorem of the z-transform can be applied. Hence,

    lim_{k→∞} r(kT) = lim_{z→1} 0.792 z / (z^2 - 0.416z + 0.208) = 1           (2-196)

This result is easily checked by expanding R(z) in powers of z^{-1}:

    R(z) = 0.792 z^{-1} + 1.121 z^{-2} + 1.094 z^{-3} + 1.014 z^{-4}
         + 0.986 z^{-5} + 0.991 z^{-6} + 0.999 z^{-7} + ...

It is apparent that the coefficients of this power series converge rapidly to the final value of unity.
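The expansion in Example 2-21 can be generated by the same long-division sketch used earlier; written in w = z^{-1}, R(z) of Eq. (2-195) becomes 0.792 w divided by (1 - w)(1 - 0.416 w + 0.208 w^2).

```python
# Sketch: power-series coefficients of Eq. (2-195) by long division;
# they should approach the final value 1 predicted by Eq. (2-196).

num = [0.0, 0.792]                    # R(z) in powers of w = z^{-1}
den = [1.0, -1.416, 0.624, -0.208]   # (1 - w)(1 - 0.416 w + 0.208 w^2)

n = 40
rem = num + [0.0] * (n + len(den))
coeffs = []
for k in range(n):
    c = rem[k] / den[0]
    coeffs.append(c)
    for i, d in enumerate(den):
        rem[k + i] -= c * d

print(round(coeffs[1], 3), round(coeffs[2], 3))   # 0.792 1.121
print(abs(coeffs[-1] - 1.0) < 1e-6)               # True -> final value 1
```

The transient dies out quickly because the complex poles of R(z) have magnitude sqrt(0.208), well inside the unit circle.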
REFERENCES

Complex Variables, Laplace Transforms, and Matrix Algebra

1. F. B. Hildebrand, Methods of Applied Mathematics, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1952.
2. R. Bellman, Introduction to Matrix Analysis, McGraw-Hill Book Company, New York, 1960.
3. B. C. Kuo, Linear Networks and Systems, McGraw-Hill Book Company, New York, 1967.
4. R. Legros and A. V. J. Martin, Transform Calculus for Electrical Engineers, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1961.
5. C. R. Wylie, Jr., Advanced Engineering Mathematics, 2nd ed., McGraw-Hill Book Company, New York, 1960.
6. "Matrices, Polynomials, and Linear Time-Invariant Systems," IEEE Trans. Automatic Control, Vol. AC-18, pp. 1-10, Feb. 1973.

Partial Fraction Expansion

7. D. Hazony and J. Riley, "Evaluating Residues and Coefficients of High Order Poles," IRE Trans. Automatic Control, Vol. AC-4.
8. "Partial Fraction Expansion by Digital Computer," pp. 161-162, Mar. 1964.
9. "Partial Fraction Expansion of a Rational Function with Multiple High-Order Poles," IEEE Trans. Circuit Theory.
10. "Partial Fraction Expansion of Rational Functions with One High-Order Pole," IEEE Trans. Automatic Control, Vol. AC-13.
PROBLEMS

2.12. Determine the ranks of the following matrices:

    (a) [ 3  4  2 ]        (b) [ 2  8 ]
        [ 0  1  7 ]            [ 6  3 ]
        [ 2  2  3 ]

2.13. Determine the definiteness of the following matrices:

    (a) [  2  3 ]          (b) [  5   1 ]
        [ -1  2 ]              [ -1  -2 ]

2.14. The following signals are sampled by an ideal sampler with a sampling period of T seconds. Determine the output of the sampler, f*(t), and find the Laplace transform of f*(t), F*(s). Express F*(s) in closed form.

    (a) f(t) = t e^{-at}
    (b) f(t) = e^{-at} sin ωt

2.15. Determine the z-transform of the following functions:

    (a) G(s) = 1/(s + a)^n
    (b) G(s) = 1/[s(s + 5)^2]
    (c) G(s) = 1/[s^3(s + 2)]
    (d) g(t) = t^2 e^{-2t}
    (e) g(t) = sin ωt

2.16. Find the inverse z-transform of

    G(z) = 10z / [(z - 1)(z^2 + z + 1)]

by means of the following methods:

    (a) the inversion formula
    (b) partial-fraction expansion
3

Transfer Function and Signal Flow Graphs

3.1 Introduction
One of the most important steps in the analysis of a physical system is the mathematical description and modeling of the system. A mathematical model of a system is essential because it allows one to gain a clear understanding of the system in terms of cause-and-effect relationships among the system components.

In general, a physical system can be represented by a schematic diagram that portrays the relationships and interconnections among the system components. From the mathematical standpoint, algebraic and differential or difference equations can be used to describe the dynamic behavior of a system. In systems theory, the block diagram is often used to portray systems of all types. For linear systems, transfer functions and signal flow graphs are valuable tools for analysis as well as for design. In this chapter we give the definition of the transfer function of a linear system and demonstrate the power of the signal-flow-graph technique in the analysis of linear systems.
3.2
Transfer Functions of Linear Systems
The transfer function plays an important role in the characterization of linear time-invariant systems. Together with the block diagram and the signal flow graph, the transfer function forms the basis of representing the input-output relationships of a linear time-invariant system in classical control theory. The starting point of defining the transfer function is the differential equation of a dynamic system. Consider a linear time-invariant system that is described by the following nth-order differential equation:

    a0 d^n c(t)/dt^n + a1 d^{n-1} c(t)/dt^{n-1} + ... + a_{n-1} dc(t)/dt + a_n c(t)
        = b0 d^m r(t)/dt^m + b1 d^{m-1} r(t)/dt^{m-1} + ... + b_{m-1} dr(t)/dt + b_m r(t)       (3-1)

where c(t) is the output variable and r(t) is the input variable. The coefficients a0, a1, ..., an and b0, b1, ..., bm are constants, and n ≥ m.

The differential equation in Eq. (3-1) represents a complete description of the system between the input r(t) and the output c(t). Once the input and the initial conditions of the system are specified, the output response may be obtained by solving Eq. (3-1). However, it is apparent that the differential-equation method of describing a system, although essential, is a rather cumbersome one, and the higher-order differential equation of Eq. (3-1) is of little practical use in design. More important is the fact that, although efficient subroutines are available on digital computers for the solution of high-order differential equations, the important developments in linear control theory rely on analysis and design techniques that do not require actual solutions of the system differential equations.
A convenient way of describing linear systems is made possible by the use of the transfer function and the impulse response. To obtain the transfer function of the linear system that is represented by Eq. (3-1), we take the Laplace transform on both sides of the equation and, assuming zero initial conditions, we have

    (a0 s^n + a1 s^{n-1} + ... + a_{n-1} s + a_n) C(s)
        = (b0 s^m + b1 s^{m-1} + ... + b_{m-1} s + b_m) R(s)        (3-2)

The transfer function of the system is defined as the ratio of C(s) to R(s); therefore,

    G(s) = C(s)/R(s)
         = (b0 s^m + b1 s^{m-1} + ... + b_{m-1} s + b_m) / (a0 s^n + a1 s^{n-1} + ... + a_{n-1} s + a_n)        (3-3)
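The polynomial ratio of Eq. (3-3) is straightforward to evaluate numerically. The sketch below is an illustration with arbitrary coefficient lists (not taken from the text); `transfer_function` is a hypothetical helper.

```python
# Sketch: build G(s) of Eq. (3-3) from coefficient lists and evaluate it
# at a point s. Coefficients are ordered highest power first.

def transfer_function(b, a):
    """Return G(s) for numerator coeffs b (b0..bm), denominator a (a0..an)."""
    def G(s):
        num = sum(bk * s ** (len(b) - 1 - k) for k, bk in enumerate(b))
        den = sum(ak * s ** (len(a) - 1 - k) for k, ak in enumerate(a))
        return num / den
    return G

# Illustrative system: G(s) = (s + 2)/(s^2 + 3s + 2) = 1/(s + 1)
G = transfer_function([1, 2], [1, 3, 2])
print(G(1.0))      # 0.5  (= 1/(1 + 1))
```

Because the common factor (s + 2) cancels, G(s) behaves like the first-order system 1/(s + 1) at every test point.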
Summarizing the properties of a transfer function, we state:

    1. A transfer function is defined only for a linear system and, strictly, only for a time-invariant system.
    2. A transfer function between an input variable and an output variable of a system is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input.
    3. All initial conditions of the system are assumed to be zero.
    4. A transfer function is independent of the input excitation.

The following example is given to illustrate how transfer functions for a linear system are derived.
Example 3-1. A series RLC network is shown in Fig. 3-1. The input voltage is designated by e_i(t). The output variable in this case can be defined as the voltage across any one of the three network elements, or the current i(t). The loop equation of the network is written

$$e_i(t) = R\,i(t) + L\frac{di(t)}{dt} + \frac{1}{C}\int i(t)\,dt \qquad (3\text{-}4)$$

Taking the Laplace transform on both sides of Eq. (3-4) and assuming zero initial conditions, we have

$$E_i(s) = \left(R + Ls + \frac{1}{Cs}\right)I(s) \qquad (3\text{-}5)$$

Fig. 3-1. RLC network.

If we regard the current i(t) as an output variable, the transfer function between e_i(t) and i(t) is simply

$$\frac{I(s)}{E_i(s)} = \frac{1}{R + Ls + 1/(Cs)} = \frac{Cs}{1 + RCs + LCs^2} \qquad (3\text{-}6)$$

If the voltage across the capacitor, e_c(t), is considered as the output, the transfer function between e_i(t) and e_c(t) is obtained by substituting

$$E_c(s) = \frac{1}{Cs}I(s) \qquad (3\text{-}7)$$

into Eq. (3-5). Therefore,

$$\frac{E_c(s)}{E_i(s)} = \frac{1}{1 + RCs + LCs^2} \qquad (3\text{-}8)$$
The definition of the transfer function is easily extended to a system with a multiple number of inputs and outputs. A system of this type is often referred to as a multivariable system. In a multivariable system, a differential equation of the form of Eq. (3-1) may be used to describe the relationship between a pair of input and output variables. When dealing with the relationship between one input and one output, it is assumed that all other inputs are set to zero. Since the principle of superposition is valid for linear systems, the total effect on any output variable due to all the inputs acting simultaneously can be obtained by adding the individual effects.
As a simple illustrative example of the transfer functions of a multivariable system, let us consider the control of a turbopropeller engine. In this case the input variables are the fuel rate and the propeller blade angle. The output variables are the speed of rotation of the engine and the turbine-inlet temperature. In general, either one of the outputs is affected by changes in both inputs. For instance, when the blade angle of the propeller is increased, the speed of rotation of the engine will decrease and the temperature usually increases. The following transfer relations may be written from steady-state tests performed on the system:

$$C_1(s) = G_{11}(s)R_1(s) + G_{12}(s)R_2(s) \qquad (3\text{-}9)$$

$$C_2(s) = G_{21}(s)R_1(s) + G_{22}(s)R_2(s) \qquad (3\text{-}10)$$

where

C₁(s) = transformed variable of the speed of rotation
C₂(s) = transformed variable of the turbine-inlet temperature
R₁(s) = transformed variable of the fuel rate
R₂(s) = transformed variable of the propeller blade angle

All these variables are assumed to be measured from some reference levels.
Since Eqs. (3-9) and (3-10) are written with the assumption that the system is linear, the principle of superposition holds. Therefore, G₁₁(s) represents the transfer function between the fuel rate and the speed of rotation of the engine with the propeller blade angle held at the reference value; that is, R₂(s) = 0. Similar statements can be made for the other transfer functions.

In general, if a linear system has p inputs and q outputs, the transfer function between the jth input and the ith output is defined as

$$G_{ij}(s) = \frac{C_i(s)}{R_j(s)} \qquad (3\text{-}11)$$

with R_k(s) = 0, k = 1, 2, ..., p, k ≠ j. Note that Eq. (3-11) is defined with only the jth input in effect, while the other inputs are set to zero. The ith output transform of the system is related to all the input transforms by

$$C_i(s) = G_{i1}(s)R_1(s) + G_{i2}(s)R_2(s) + \cdots + G_{ip}(s)R_p(s) = \sum_{j=1}^{p} G_{ij}(s)R_j(s) \qquad (i = 1, 2, \ldots, q) \qquad (3\text{-}12)$$

where G_ij(s) is defined in Eq. (3-11).
It is convenient to represent Eq. (3-12) by the matrix equation

$$\mathbf{C}(s) = \mathbf{G}(s)\mathbf{R}(s) \qquad (3\text{-}13)$$

where

$$\mathbf{C}(s) = \begin{bmatrix} C_1(s) \\ C_2(s) \\ \vdots \\ C_q(s) \end{bmatrix} \qquad (3\text{-}14)$$

is a q × 1 matrix, called the transformed output vector;

$$\mathbf{R}(s) = \begin{bmatrix} R_1(s) \\ R_2(s) \\ \vdots \\ R_p(s) \end{bmatrix} \qquad (3\text{-}15)$$

is a p × 1 matrix, called the transformed input vector; and

$$\mathbf{G}(s) = \begin{bmatrix} G_{11}(s) & G_{12}(s) & \cdots & G_{1p}(s) \\ G_{21}(s) & G_{22}(s) & \cdots & G_{2p}(s) \\ \vdots & & & \vdots \\ G_{q1}(s) & G_{q2}(s) & \cdots & G_{qp}(s) \end{bmatrix} \qquad (3\text{-}16)$$

is a q × p matrix, called the transfer function matrix.
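Equation (3-13) is simply a matrix-vector product at each value of s, and the superposition property behind Eq. (3-12) can be demonstrated numerically. The entries of G below are illustrative stable first-order terms chosen for this sketch; they are not taken from the text.

```python
import numpy as np

# Hypothetical 2x2 transfer-function matrix in the form of Eq. (3-16),
# evaluated at a single complex frequency s = jw.
def G(s: complex) -> np.ndarray:
    return np.array([[1/(s + 1), 2/(s + 3)],
                     [0.5/(s + 2), 1/(s + 4)]])

w = 2.0
s = 1j*w
R = np.array([1.0, 0.5])      # transformed input vector, Eq. (3-15)

# Eq. (3-13): C(s) = G(s) R(s)
C_total = G(s) @ R

# Superposition, Eq. (3-12): sum of responses to each input acting alone
C_sum = G(s) @ np.array([1.0, 0.0]) + G(s) @ np.array([0.0, 0.5])
print(np.allclose(C_total, C_sum))   # True
```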
3.3 Impulse Response of Linear Systems
The impulse response of a linear system is defined as the output response of the system when the input is a unit impulse function. Therefore, for a system with a single input and a single output, if r(t) = δ(t), the Laplace transform of the system output is simply the transfer function of the system; that is,

$$C(s) = G(s) \qquad (3\text{-}17)$$

since the Laplace transform of the unit impulse function is unity. Taking the inverse Laplace transform on both sides of Eq. (3-17) yields

$$c(t) = g(t) \qquad (3\text{-}18)$$

where g(t) is the inverse Laplace transform of G(s) and is the impulse response (sometimes also called the weighting function) of a linear system. Therefore, we can state that the Laplace transform of the impulse response is the transfer function.
Since the transfer function is a powerful way of characterizing linear systems, this means that if a linear system has zero initial conditions, theoretically the system can be described or identified by exciting it with a unit impulse and measuring the output. In practice, although a true impulse cannot be generated physically, a pulse with a very narrow pulsewidth usually provides a suitable approximation.

For a multivariable system, an impulse response matrix must be defined and is given by

$$\mathbf{g}(t) = \mathcal{L}^{-1}[\mathbf{G}(s)] \qquad (3\text{-}19)$$

where the inverse Laplace transform of G(s) implies the transform operation on each term of the matrix.
The derivation of G(s) in Eq. (3-3) is based on the knowledge of the system differential equation, and the solution of C(s) from Eq. (3-3) also assumes that R(s) and G(s) are available in analytical forms. This is not always possible, for quite often the input signal r(t) is not Laplace transformable or is available only in the form of experimental data. Under such conditions, to analyze the system we would have to work with the time functions r(t) and g(t).

Let us consider that the input signal r(τ) shown in Fig. 3-2(a) is applied to a linear system whose impulse response is g(t). The output response c(t) is to be determined. In this case we have denoted the input signal as a function of τ, which is the time variable; this is necessary since t is reserved as a fixed time quantity in the analysis. For all practical purposes, r(τ) is assumed to extend from minus infinity to plus infinity in time.

Now consider that the input r(τ) is approximated by a sequence of pulses of pulsewidth Δτ, as shown in Fig. 3-2(b). In the limit, as Δτ approaches zero, these pulses become impulses, and the impulse at time kΔτ has a strength, or area, equal to Δτ·r(kΔτ), which is the area of the pulse at kΔτ. Also, when Δτ decreases, k has to be increased proportionally, so that the value of kΔτ remains constant and equals t, which is a particular point on the time axis. We now compute the output response of the linear system, using the impulse-approximated signal.

Fig. 3-2. (a) Input signal of a linear system. (b) Input signal represented by a sum of rectangular pulses.

When only the impulse at time τ = kΔτ is considered, the system response is given by

$$\Delta\tau\, r(k\,\Delta\tau)\, g(t - k\,\Delta\tau) \qquad (3\text{-}20)$$

which is the system impulse response delayed by kΔτ, multiplied by the impulse strength Δτ·r(kΔτ).
By use of the superposition principle, the total response due to r(τ) is obtained by adding up the responses due to each of the impulses from −∞ to +∞. Therefore,

$$c(t) = \lim_{\Delta\tau \to 0} \sum_{k=-\infty}^{\infty} r(k\,\Delta\tau)\, g(t - k\,\Delta\tau)\, \Delta\tau \qquad (3\text{-}21)$$

or

$$c(t) = \int_{-\infty}^{\infty} r(\tau)\, g(t - \tau)\, d\tau \qquad (3\text{-}22)$$
For all physical systems, the output response does not precede the excitation. Thus

$$g(t) = 0 \qquad t < 0 \qquad (3\text{-}23)$$

since the impulse function is applied at t = 0; or

$$g(t - \tau) = 0 \qquad t < \tau \qquad (3\text{-}24)$$

The output response of the system is now written

$$c(t) = \int_{-\infty}^{t} r(\tau)\, g(t - \tau)\, d\tau \qquad (3\text{-}25)$$

Further, if r(τ) = 0 for τ < 0, Eq. (3-25) becomes

$$c(t) = \int_{0}^{t} r(\tau)\, g(t - \tau)\, d\tau \qquad (3\text{-}26)$$
The expressions of Eqs. (3-25) and (3-26) are called the convolution integral. The convolution operation is denoted by the symbol *, so that

$$c(t) = r(t) * g(t) \qquad (3\text{-}27)$$

is interpreted as

$$c(t) = \int_{-\infty}^{t} r(\tau)\, g(t - \tau)\, d\tau \qquad (3\text{-}28)$$

Further, the positions of r(t) and g(t) in the convolution operation may be interchanged, since basically there is no difference between the two functions. Therefore, the convolution integral can also be written as

$$c(t) = \int_{-\infty}^{t} g(\tau)\, r(t - \tau)\, d\tau = g(t) * r(t) \qquad (3\text{-}29)$$
The evaluation of the impulse response of a linear system is sometimes an important step in the analysis and design of the class of systems known as adaptive control systems. In real life the dynamic characteristics of most systems vary to some extent over an extended period of time. This may be caused by simple deterioration of components due to wear and tear, drift in operating environments, and the like. Some systems simply have parameters that vary with time in a predictable or unpredictable fashion. For instance, the transfer characteristic of a guided missile in flight will vary in time because of the change of mass of the missile and the change of atmospheric conditions. On the other hand, for a simple mechanical system with mass and friction, the latter may be subject to unpredictable variation either due to "aging" or surface conditions; thus the control system designed under the assumption of known and fixed parameters may fail to yield satisfactory response should the system parameters vary. In order that the system may have the ability of self-correction or self-adjustment in accordance with varying parameters and environment, it is necessary that the system's transfer characteristics be identified continuously or at appropriate intervals during the operation of the system. One of the methods of identification is to measure the impulse response of the system so that design parameters may be adjusted accordingly to attain optimal control at all times.

In the two preceding sections, definitions of the transfer function and impulse response of a linear system have been presented. The two functions are directly related through the Laplace transformation, and they represent essentially the same information about the system. However, it must be reiterated that the transfer function and impulse response are defined only for linear systems and that the initial conditions are assumed to be zero.

3.4 Block Diagrams
Because of its simplicity and versatility, the block diagram is often used by control engineers to portray systems of all types. A block diagram can be used simply to represent the composition and interconnection of a system. Or, it can be used, together with transfer functions, to represent the cause-and-effect relationships throughout the system. For instance, the block diagram of Fig. 3-3 represents a turbine-driven hydraulic power system for an aircraft. The main components of the system include a pressure-compensated hydraulic pump, an air-driven pump, an electronic speed controller, and a control valve. The block diagram in the figure depicts how these components are interconnected.
Fig. 3-3. Block diagram of a turbine-driven hydraulic power system.
If the mathematical and functional relationships of all the system elements are known, the block diagram can be used as a reference for the analytical or the computer solution of the system. Furthermore, if all the system elements are assumed to be linear, the transfer function for the overall system can be obtained by means of block-diagram algebra. The essential point is that a block diagram can be used to portray nonlinear as well as linear systems. For example, Fig. 3-4(a) shows the block diagram of a simple control system which includes an amplifier and a motor. In the figure the nonlinear characteristic of the amplifier is depicted by its nonlinear input-output relation. The motor is assumed to be linear and its dynamics are represented by a transfer function between the input voltage and the output displacement. Figure 3-4(b) illustrates the same system but with the amplifier characteristic approximated by a constant gain. In this case the overall system is linear, and it is now possible to write the transfer function for the overall system as the product of the amplifier gain and the motor transfer function,

$$\frac{C(s)}{E(s)} = \frac{E_m(s)}{E(s)} \cdot \frac{C(s)}{E_m(s)} = \frac{K\,K_m}{s(s + a)} \qquad (3\text{-}30)$$
Block Diagrams of Control Systems

We shall now define some block-diagram elements used frequently in control systems, and the block-diagram algebra. One of the important components of a feedback control system is the sensing device that acts as a junction point for signal comparisons. The physical components involved are the potentiometer, synchros, resolvers, differential amplifiers, multipliers, and so on. In general, the operations of the sensing devices are addition, subtraction, multiplication, and sometimes combinations of these. The block-diagram elements of these operations are illustrated in Fig. 3-5. It should be pointed out that the signals shown in the diagram of Fig. 3-5 can be functions of time t or functions of the Laplace transform variable s.

Fig. 3-5. Block-diagram elements of typical sensing devices of control systems: (a) subtraction, e = r − c; (b) addition, e = r₁ + r₂; (c) addition and subtraction, e = r₁ + r₂ − c; (d) multiplication, e = rc.

In Fig. 3-4 we have already used block-diagram elements to represent input-output relationships of linear and nonlinear elements. This simply shows that the block-diagram notation can be used to represent practically any input-output relation as long as the relation is defined. For instance, the block diagram of Fig. 3-6(a) represents a system that is described by the linear differential equation

$$\dot{x}(t) = ax(t) + bu(t) \qquad (3\text{-}31)$$

Figure 3-6(b) illustrates the input-output relation of a system described by the vector-matrix differential equation

$$\dot{\mathbf{x}}(t) = \mathbf{f}[\mathbf{x}(t), \mathbf{u}(t)] \qquad (3\text{-}32)$$

where x(t) is an n × 1 vector and u(t) is an r × 1 vector. As another example, Fig. 3-6(c) shows a block diagram which represents the transfer function of a linear system; that is,

$$C(s) = G(s)R(s) \qquad (3\text{-}33)$$

where G(s) is the transfer function.

Fig. 3-6. Block-diagram representations of input-output relationships of systems.
Figure 3-7 shows the block diagram of a linear feedback control system. The following terminology often used in control systems is defined with reference to the block diagram:

r(t), R(s) = reference input
c(t), C(s) = output signal (controlled variable)
b(t), B(s) = feedback signal
ε(t), 𝓔(s) = actuating signal
e(t), E(s) = R(s) − C(s) = error signal
G(s) = C(s)/𝓔(s) = open-loop transfer function or forward-path transfer function
M(s) = C(s)/R(s) = closed-loop transfer function
H(s) = feedback-path transfer function
G(s)H(s) = loop transfer function

The closed-loop transfer function, M(s) = C(s)/R(s), can be expressed as a function of G(s) and H(s). From Fig. 3-7 we write

$$C(s) = G(s)\mathcal{E}(s) \qquad (3\text{-}34)$$

and

$$B(s) = H(s)C(s) \qquad (3\text{-}35)$$

The actuating signal is written

$$\mathcal{E}(s) = R(s) - B(s) \qquad (3\text{-}36)$$

Substituting Eq. (3-36) into Eq. (3-34) yields

$$C(s) = G(s)R(s) - G(s)B(s) \qquad (3\text{-}37)$$

Substituting Eq. (3-35) into Eq. (3-37) gives

$$C(s) = G(s)R(s) - G(s)H(s)C(s) \qquad (3\text{-}38)$$

Fig. 3-7. Basic block diagram of a feedback control system.

Solving for C(s) from the last equation, the closed-loop transfer function of the system is given by

$$M(s) = \frac{C(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)} \qquad (3\text{-}39)$$
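The elimination steps of Eqs. (3-34) through (3-39) can be reproduced symbolically. In the sketch below, G(s) = K/[s(s + a)] with unity feedback H(s) = 1 is an assumed example, not a system taken from the text, and sympy is used to solve for C(s)/R(s).

```python
import sympy as sp

# Assumed loop: G(s) = K/(s(s+a)), H(s) = 1 (unity feedback).
s, K, a = sp.symbols('s K a', positive=True)
G = K/(s*(s + a))
H = 1

# Eliminate the internal signals exactly as in Eqs. (3-34)-(3-38):
C, R = sp.symbols('C R')
E = R - H*C                        # actuating signal, Eq. (3-36) with B = H*C
sol = sp.solve(sp.Eq(C, G*E), C)[0]
M = sp.simplify(sol/R)             # closed-loop transfer function C(s)/R(s)

print(M)                               # K/(s^2 + a*s + K), up to term ordering
print(sp.simplify(M - G/(1 + G*H)))    # 0, confirming Eq. (3-39)
```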
In general, a practical control system may contain many feedback loops, and the evaluation of the transfer function from the block diagram by means of the method described above may be tedious. In principle at least, the block diagram of a system with one input and one output can always be reduced to the basic single-loop form of Fig. 3-7. However, the steps involved in the algebraic reduction process may again be quite involved. We shall show later that the transfer function of any linear system can be obtained directly from its block diagram by use of the signal-flow-graph gain formula.

Block Diagram and Transfer Function of Multivariable Systems

A multivariable system is defined as one that has a multiple number of inputs and outputs. Two block-diagram representations of a multivariable system with p inputs and q outputs are shown in Fig. 3-8(a) and (b). In Fig. 3-8(a) the individual input and output signals are designated, whereas in the block diagram of Fig. 3-8(b), the multiplicity of the inputs and outputs is denoted by vectors. The case of Fig. 3-8(b) is preferable in practice because of its simplicity.
Fig. 3-8. Block-diagram representations of a multivariable system.
Figure 3-9 shows the block diagram of a multivariable feedback control system. The transfer function relationship between the input and the output of the system is obtained by using matrix algebra:

$$\mathbf{C}(s) = \mathbf{G}(s)\boldsymbol{\mathcal{E}}(s) \qquad (3\text{-}40)$$

$$\boldsymbol{\mathcal{E}}(s) = \mathbf{R}(s) - \mathbf{B}(s) \qquad (3\text{-}41)$$

$$\mathbf{B}(s) = \mathbf{H}(s)\mathbf{C}(s) \qquad (3\text{-}42)$$

Fig. 3-9. Block diagram of a multivariable feedback control system.

Substituting Eq. (3-42) into Eq. (3-41) and then Eq. (3-41) into Eq. (3-40) yields

$$\mathbf{C}(s) = \mathbf{G}(s)\mathbf{R}(s) - \mathbf{G}(s)\mathbf{H}(s)\mathbf{C}(s) \qquad (3\text{-}43)$$

Solving for C(s) from Eq. (3-43) gives

$$\mathbf{C}(s) = [\mathbf{I} + \mathbf{G}(s)\mathbf{H}(s)]^{-1}\mathbf{G}(s)\mathbf{R}(s) \qquad (3\text{-}44)$$

provided that I + G(s)H(s) is nonsingular.

It should be mentioned that although the development of the input-output relationship here is similar to that of the single input-output case, in the present situation it is improper to speak of the ratio C(s)/R(s), since C(s) and R(s) are matrices. However, it is still possible to define the closed-loop transfer matrix as

$$\mathbf{M}(s) = [\mathbf{I} + \mathbf{G}(s)\mathbf{H}(s)]^{-1}\mathbf{G}(s) \qquad (3\text{-}45)$$

Then Eq. (3-44) is written

$$\mathbf{C}(s) = \mathbf{M}(s)\mathbf{R}(s) \qquad (3\text{-}46)$$
Example 3-2. Consider that the forward-path transfer function matrix and the feedback-path transfer function matrix of the system shown in Fig. 3-9 are

$$\mathbf{G}(s) = \begin{bmatrix} \dfrac{1}{s+1} & -\dfrac{1}{s} \\[2pt] 2 & \dfrac{1}{s+2} \end{bmatrix} \qquad (3\text{-}47)$$

and

$$\mathbf{H}(s) = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$$

respectively. The closed-loop transfer matrix of the system is given by Eq. (3-45) and is evaluated as follows:

$$\mathbf{I} + \mathbf{G}(s)\mathbf{H}(s) = \begin{bmatrix} 1 + \dfrac{1}{s+1} & -\dfrac{1}{s} \\[2pt] 2 & 1 + \dfrac{1}{s+2} \end{bmatrix} = \begin{bmatrix} \dfrac{s+2}{s+1} & -\dfrac{1}{s} \\[2pt] 2 & \dfrac{s+3}{s+2} \end{bmatrix} \qquad (3\text{-}48)$$

The closed-loop transfer matrix is

$$\mathbf{M}(s) = [\mathbf{I} + \mathbf{G}(s)\mathbf{H}(s)]^{-1}\mathbf{G}(s) = \frac{1}{\Delta}\begin{bmatrix} \dfrac{s+3}{s+2} & \dfrac{1}{s} \\[2pt] -2 & \dfrac{s+2}{s+1} \end{bmatrix}\begin{bmatrix} \dfrac{1}{s+1} & -\dfrac{1}{s} \\[2pt] 2 & \dfrac{1}{s+2} \end{bmatrix} \qquad (3\text{-}49)$$

where

$$\Delta = \frac{s+3}{s+1} + \frac{2}{s} = \frac{s^2 + 5s + 2}{s(s+1)} \qquad (3\text{-}50)$$

Thus

$$\mathbf{M}(s) = \frac{s(s+1)}{s^2 + 5s + 2}\begin{bmatrix} \dfrac{3s^2 + 9s + 4}{s(s+1)(s+2)} & -\dfrac{1}{s} \\[2pt] 2 & \dfrac{3s+2}{s(s+1)} \end{bmatrix} \qquad (3\text{-}51)$$
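The matrix inversion in Example 3-2 is mechanical and is a good candidate for a symbolic check. The sketch below repeats Eqs. (3-45) through (3-51) with Python's sympy (an illustration, not part of the text); the `expected` entries are Eq. (3-51) with the scalar factor multiplied through.

```python
import sympy as sp

s = sp.symbols('s')
G = sp.Matrix([[1/(s + 1), -1/s],
               [2,          1/(s + 2)]])       # Eq. (3-47)
H = sp.eye(2)                                  # identity feedback matrix

# Eq. (3-45): M(s) = [I + G(s)H(s)]^(-1) G(s)
M = sp.simplify((sp.eye(2) + G*H).inv() * G)

# Eq. (3-51) with the common factor s(s+1)/(s^2+5s+2) distributed;
# d is the numerator of Delta in Eq. (3-50)
d = s**2 + 5*s + 2
expected = sp.Matrix([[(3*s**2 + 9*s + 4)/((s + 2)*d), -(s + 1)/d],
                      [2*s*(s + 1)/d,                   (3*s + 2)/d]])
print(sp.simplify(M - expected))               # zero matrix
```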
3.5 Signal Flow Graphs
A signal flow graph may be regarded as a simplified notation for a block diagram, although it was originally introduced by S. J. Mason as a cause-and-effect representation of linear systems. In general, besides the difference in the physical appearances of the signal flow graph and the block diagram, we may regard the signal flow graph as constrained by more rigid mathematical relationships, whereas the rules of using the block-diagram notation are far more flexible and less stringent.

A signal flow graph may be defined as a graphical means of portraying the input-output relationships between the variables of a set of linear algebraic equations. Consider that a linear system is described by the set of N algebraic equations

$$y_j = \sum_{k=1}^{N} a_{kj}\, y_k \qquad j = 1, 2, \ldots, N \qquad (3\text{-}52)$$

It should be pointed out that these N equations are written in the form of cause-and-effect relations:

$$j\text{th effect} = \sum_{k=1}^{N} (\text{gain from } k \text{ to } j)(k\text{th cause}) \qquad (3\text{-}53)$$

or simply

$$\text{output} = \sum (\text{gain})(\text{input}) \qquad (3\text{-}54)$$

This is the single most important axiom in the construction of the set of algebraic equations from which a signal flow graph is drawn. In the case when a system is represented by a set of integrodifferential equations, we must first transform them into Laplace transform equations and then rearrange the latter into the form of Eq. (3-52), or

$$Y_j(s) = \sum_{k=1}^{N} G_{kj}(s)\, Y_k(s) \qquad j = 1, 2, \ldots, N \qquad (3\text{-}55)$$

When constructing a signal flow graph, junction points, or nodes, are used to represent the variables y_j and y_k. The nodes are connected together by line segments called branches, according to the cause-and-effect equations.
The branches have associated branch gains and directions. A signal can transmit through a branch only in the direction of the arrow. In general, given a set of equations such as those of Eq. (3-52) or Eq. (3-55), the construction of the signal flow graph is basically a matter of following through the cause-and-effect relations relating each variable in terms of itself and the other variables. For instance, consider that a linear system is represented by the simple equation

$$y_2 = a_{12}\, y_1 \qquad (3\text{-}56)$$

where y₁ is the input variable, y₂ the output variable, and a₁₂ the gain, or transmittance, between the two variables. The signal-flow-graph representation of Eq. (3-56) is shown in Fig. 3-10. Notice that the branch directing from node y₁ to node y₂ expresses the dependence of y₂ upon y₁. It should be reiterated that Eq. (3-56) and Fig. 3-10 represent only the dependence of the output variable upon the input variable, not the reverse.

Fig. 3-10. Signal flow graph of y₂ = a₁₂y₁.

An important consideration in the application of signal flow graphs is that the branch between the two nodes y₁ and y₂ should be interpreted as a unilateral amplifier with gain a₁₂, so that when a signal of one unit is applied at the input y₁, the signal is multiplied by a₁₂ and a signal of strength a₁₂ is delivered at node y₂. Although algebraically Eq. (3-56) can be rewritten as

$$y_1 = \frac{1}{a_{12}}\, y_2 \qquad (3\text{-}57)$$

the signal flow graph of Fig. 3-10 does not imply this relationship. If Eq. (3-57) is valid as a cause-and-effect equation in the physical sense, a new signal flow graph must be drawn.

As another illustrative example, consider the following set of algebraic equations:

$$y_2 = a_{12} y_1 + a_{32} y_3$$
$$y_3 = a_{23} y_2 + a_{43} y_4$$
$$y_4 = a_{24} y_2 + a_{34} y_3 + a_{44} y_4$$
$$y_5 = a_{25} y_2 + a_{45} y_4 \qquad (3\text{-}58)$$

The signal flow graph for these equations is constructed step by step as shown in Fig. 3-11, although the indicated sequence of steps is not unique. The nodes representing the variables y₁, y₂, y₃, y₄, and y₅ are located in order from left to right. The first equation states that y₂ depends upon two signals, a₁₂y₁ and a₃₂y₃; the signal flow graph representing this equation is drawn as shown in Fig. 3-11(a). The second equation states that y₃ depends upon a₂₃y₂ and a₄₃y₄; therefore, on the signal flow graph of Fig. 3-11(a), a branch of gain a₂₃ is drawn from node y₂ to y₃, and a branch of gain a₄₃ is drawn from y₄ to y₃, with the directions of the branches indicated by the arrows, as shown in Fig. 3-11(b). Similarly, with the consideration of the third equation, Fig. 3-11(c) is obtained. Finally, when the last equation of Eq. (3-58) is portrayed, the complete signal flow graph is shown in Fig. 3-11(d). The branch that begins from the node y₄
and ends at y₄ is called a loop, and with a gain a₄₄ it represents the dependence of y₄ upon itself.

Fig. 3-11. Step-by-step construction of the signal flow graph for Eq. (3-58): (a) y₂ = a₁₂y₁ + a₃₂y₃; (b) y₃ = a₂₃y₂ + a₄₃y₄; (c) y₄ = a₂₄y₂ + a₃₄y₃ + a₄₄y₄; (d) complete signal flow graph.

3.6 Summary of Basic Properties of Signal Flow Graphs

At this point it is best to summarize some of the important properties of the signal flow graph.
1. A signal flow graph applies only to linear systems.
2. The equations based on which a signal flow graph is drawn must be algebraic equations in the form of effects as functions of causes.
3. Nodes are used to represent variables. Normally, the nodes are arranged from left to right, following a succession of causes and effects through the system.
4. Signals travel along branches only in the direction described by the arrows of the branches.
5. The branch directing from node y_k to y_j represents the dependence of the variable y_j upon y_k, but not the reverse.
6. A signal y_k traveling along a branch between nodes y_k and y_j is multiplied by the gain of the branch, a_kj, so that a signal a_kj·y_k is delivered at node y_j.

3.7 Definitions for Signal Flow Graphs
In addition to the branches and nodes defined earlier for the signal flow graph, the following terms are useful for the purposes of identification and reference.

Input node (source). An input node is a node that has only outgoing branches. (Example: node y₁ in Fig. 3-11.)

Output node (sink). An output node is a node that has only incoming branches. (Example: node y₅ in Fig. 3-11.) However, this condition is not always readily met by an output node. For instance, the signal flow graph shown in Fig. 3-12(a) does not have any node that satisfies the condition of an output node. However, it may be necessary to regard nodes y₂ and/or y₃ as output nodes. In order to meet the definition requirement, we may simply introduce branches with unity gains and additional variables y₂ and y₃, as shown in Fig. 3-12(b). Notice that in the modified signal flow graph it is equivalent that the equations y₂ = y₂ and y₃ = y₃ are added.

Fig. 3-12. Modification of a signal flow graph so that y₂ and y₃ satisfy the requirement as output nodes. (a) Original signal flow graph. (b) Modified signal flow graph.

In general, we can state that any noninput node of a signal flow graph can always be made an output node by the aforementioned operation. However, we cannot convert a noninput node into an input node by a similar operation. For instance, node y₂ of the signal flow graph of Fig. 3-12(a) does not satisfy the definition of an input node. If we attempt to convert it into an input node by adding an incoming branch of unity gain from another identical node y₂, the signal flow graph of Fig. 3-13 would result. However, the equation that portrays the relationship at node y₂ now reads

$$y_2 = y_2 + a_{12} y_1 + a_{32} y_3 \qquad (3\text{-}59)$$

which is different from the original equation, as written from Fig. 3-12(a),

$$y_2 = a_{12} y_1 + a_{32} y_3 \qquad (3\text{-}60)$$

Fig. 3-13. Erroneous way to make the node y₂ an input node.

Since the only proper way that a signal flow graph can be drawn is from a set of cause-and-effect equations, that is, with the causes on the right side of the equation and the effects on the left side, we must transfer y₂ to the right side of Eq. (3-60) if it were to be an input. Rearranging Eq. (3-60), the two equations originally written for the signal flow graph of Fig. 3-12 now become

$$y_1 = \frac{1}{a_{12}} y_2 - \frac{a_{32}}{a_{12}} y_3 \qquad (3\text{-}61)$$

$$y_3 = a_{23} y_2 \qquad (3\text{-}62)$$

The signal flow graph for these two equations is shown in Fig. 3-14, with y₂ as an input node.

Fig. 3-14. Signal flow graph with y₂ as an input node.
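Since the node equations (3-58) behind a signal flow graph are ordinary linear algebraic equations, the node values can also be found by direct solution, which is a useful check on any gain later read off the graph. The numeric branch gains below are illustrative choices for this sketch, not values from the text.

```python
import numpy as np

# Illustrative branch gains for the graph of Eq. (3-58) / Fig. 3-11.
a12, a32, a23, a43 = 1.0, 0.2, 0.5, 0.3
a24, a34, a44, a25, a45 = 0.4, 0.6, 0.1, 2.0, 0.7
y1 = 1.0                                   # input node (source)

# Unknowns x = [y2, y3, y4, y5]; rewrite Eq. (3-58) as A x = b:
A = np.array([[1.0,  -a32,  0.0,       0.0],   # y2 - a32*y3 = a12*y1
              [-a23,  1.0, -a43,       0.0],   # y3 - a23*y2 - a43*y4 = 0
              [-a24, -a34,  1.0 - a44, 0.0],   # y4 - a24*y2 - a34*y3 - a44*y4 = 0
              [-a25,  0.0, -a45,       1.0]])  # y5 - a25*y2 - a45*y4 = 0
b = np.array([a12*y1, 0.0, 0.0, 0.0])
y2, y3, y4, y5 = np.linalg.solve(A, b)

# Each node value equals the sum of the signals entering it:
print(np.isclose(y2, a12*y1 + a32*y3))     # True
print(np.isclose(y5, a25*y2 + a45*y4))     # True
```

The ratio y5/y1 obtained this way is the overall graph gain that the general gain formula of Section 3.10 produces by inspection.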
Path. A path is any collection of a continuous succession of branches traversed in the same direction. The definition of a path is entirely general, since it does not prevent any node from being traversed more than once. Therefore, as simple as the signal flow graph of Fig. 3-12(a) is, it may have numerous paths.

Forward path. A forward path is a path that starts at an input node and ends at an output node, and along which no node is traversed more than once. For example, in the signal flow graph of Fig. 3-11(d), y₁ is the input node, and there are four possible output nodes in y₂, y₃, y₄, and y₅. The forward path between y₁ and y₂ is simply the branch connected between y₁ and y₂. There are two forward paths between y₁ and y₃: one contains the branches from y₁ to y₂ to y₃, and the other contains the branches from y₁ to y₂ to y₄ (through the branch with gain a₂₄) and then back to y₃ (through the branch with gain a₄₃). The reader may determine the two forward paths between y₁ and y₄. Similarly, there are also two forward paths between y₁ and y₅.
Loop. A loop is a path that originates and terminates on the same node, and along which no other node is encountered more than once. For example, there are four loops in the signal flow graph of Fig. 3-11(d). These are shown in Fig. 3-15.

Fig. 3-15. Four loops in the signal flow graph of Fig. 3-11(d).

Path gain. The product of the branch gains encountered in traversing a path is called the path gain. For example, the path gain for the path y₁ – y₂ – y₃ – y₄ in Fig. 3-11(d) is a₁₂a₂₃a₃₄.

Forward-path gain. The forward-path gain is defined as the path gain of a forward path.

Loop gain. The loop gain is defined as the path gain of a loop. For example, the loop gain of the loop y₂ – y₄ – y₃ – y₂ in Fig. 3-15 is a₂₄a₄₃a₃₂.

3.8 Signal-Flow-Graph Algebra
Based on the properties of the signal flow graph, we can state the following rules for the manipulation and algebra of the signal flow graph:

1. The value of the variable represented by a node is equal to the sum of all the signals entering the node. Therefore, for the signal flow graph of Fig. 3-16, the value of y₁ is equal to the sum of the signals transmitted through all the incoming branches; that is,

$$y_1 = a_{21} y_2 + a_{31} y_3 + a_{41} y_4 + a_{51} y_5 \qquad (3\text{-}63)$$

2. The value of the variable represented by a node is transmitted through all branches leaving the node. In the signal flow graph of Fig. 3-16, we have

$$y_6 = a_{16} y_1, \qquad y_7 = a_{17} y_1, \qquad y_8 = a_{18} y_1 \qquad (3\text{-}64)$$

Fig. 3-16. Node as a summing point and as a transmitting point.

3. Parallel branches in the same direction connected between two nodes can be replaced by a single branch with gain equal to the sum of the gains of the parallel branches. An example of this case is illustrated in Fig. 3-17.

Fig. 3-17. Signal flow graph with parallel paths replaced by one with a single branch.
4. A series connection of unidirectional branches, as shown in Fig. 3-18, can be replaced by a single branch with gain equal to the product of the branch gains.

Fig. 3-18. Signal flow graph with cascaded unidirectional branches replaced by a single branch.

5. Signal flow graph of a feedback control system. Figure 3-19 shows the signal flow graph of the feedback control system whose block diagram is given in Fig. 3-7; therefore, the signal flow graph may be regarded as a simplified notation for the block diagram. Writing the equations for the signals at the nodes 𝓔(s) and C(s), we have

$$\mathcal{E}(s) = R(s) - H(s)C(s) \qquad (3\text{-}65)$$

and

$$C(s) = G(s)\mathcal{E}(s) \qquad (3\text{-}66)$$

The closed-loop transfer function is obtained from these two equations:

$$\frac{C(s)}{R(s)} = \frac{G(s)}{1 + G(s)H(s)} \qquad (3\text{-}67)$$

Fig. 3-19. Signal flow graph of a feedback control system.

For complex signal flow graphs we do not need to rely on algebraic manipulation to determine the input-output relation. In Section 3.10 a general gain formula will be introduced which allows the determination of the gain between an input node and an output node by mere inspection.
3.9 Examples of the Construction of Signal Flow Graphs

It was emphasized earlier that the construction of a signal flow graph of a physical system depends upon first writing the equations of the system in the cause-and-effect form. In this section we shall give two simple illustrative examples. Owing to the lack of background on systems at this early stage, we are using two electric networks as examples. More elaborate cases will be discussed in Chapter 5, where the modeling of systems is formally covered.
Example 3-3. The passive network shown in Fig. 3-20(a) is considered to consist of R, L, and C elements, so that the network elements can be represented by impedance functions, Z(s), and admittance functions, Y(s). The Laplace transform of the input voltage is denoted by E_in(s) and that of the output voltage is E_o(s). In this case it is more convenient to use the branch currents and node voltages designated as shown in Fig. 3-20(a). Then one set of independent equations representing the cause-and-effect relations is

$$I_1(s) = [E_{in}(s) - E_2(s)]\,Y_1(s) \qquad (3\text{-}68)$$

$$E_2(s) = [I_1(s) - I_3(s)]\,Z_2(s) \qquad (3\text{-}69)$$

$$I_3(s) = [E_2(s) - E_o(s)]\,Y_3(s) \qquad (3\text{-}70)$$

$$E_o(s) = Z_4(s)\,I_3(s) \qquad (3\text{-}71)$$

Fig. 3-20. (a) Passive ladder network. (b) A signal flow graph for the network.

With the variables E_in(s), I₁(s), E₂(s), I₃(s), and E_o(s) arranged from left to right in order, the signal flow graph of the network is constructed as shown in Fig. 3-20(b).

It is noteworthy that in the case of network analysis, the cause-and-effect equations that are most convenient for the construction of a signal flow graph are neither the loop equations nor the node equations. Of course, this does not mean that we cannot construct a signal flow graph using the loop or the node equations. For instance, in Fig. 3-20(a), if we let I₁(s) and I₃(s) be the loop currents of the two loops, the loop equations are
Ein(s) = [Z1(s) + Z2(s)]I1(s) - Z2(s)I3(s)   (3-72)
0 = -Z2(s)I1(s) + [Z2(s) + Z3(s) + Z4(s)]I3(s)   (3-73)
Eo(s) = Z4(s)I3(s)   (3-74)

However, Eqs. (3-72) and (3-73) should be rearranged, since only effect variables can appear on the left-hand sides of the equations. Therefore, solving for I1(s) from Eq. (3-72) and I3(s) from Eq. (3-73), we get

I1(s) = Ein(s)/[Z1(s) + Z2(s)] + {Z2(s)/[Z1(s) + Z2(s)]}I3(s)   (3-75)
I3(s) = {Z2(s)/[Z2(s) + Z3(s) + Z4(s)]}I1(s)   (3-76)

Now, Eqs. (3-74), (3-75), and (3-76) are in the form of cause-and-effect equations. The signal flow graph portraying these equations is drawn as shown in Fig. 3-21. This exercise also illustrates that the signal flow graph of a system is not unique.

Fig. 3-21. Signal flow graph of the network in Fig. 3-20(a) using the loop equations as a starting point.
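Since both formulations describe the same network, they must yield the same ratio Eo/Ein. The sketch below checks this numerically at one frequency; the element values (Z1 = R1, Z2 = 1/Cs, Z3 = R3, Z4 = Ls) are illustrative assumptions, not taken from the figure:

```python
# Compare the node formulation, Eqs. (3-68)-(3-71), with the loop
# formulation, Eqs. (3-72)-(3-74), at s = 1j, using assumed elements.
s = 1j
Z1, Z2, Z3, Z4 = 2.0, 1.0 / (0.5 * s), 1.0, 0.8 * s
Y1, Y3 = 1.0 / Z1, 1.0 / Z3
Ein = 1.0

# Node equations eliminated by back-substitution:
#   Eo = Z4*I3, I3 = (E2 - Eo)*Y3, E2 = (I1 - I3)*Z2, I1 = (Ein - E2)*Y1
k3 = Y3 / (1 + Z4 * Y3)          # I3 = k3 * E2
k2 = Z2 / (1 + Z2 * k3)          # E2 = k2 * I1
I1 = Ein * Y1 / (1 + Y1 * k2)
Eo_node = Z4 * k3 * k2 * I1

# Loop equations (3-72), (3-73) solved by Cramer's rule:
det = (Z1 + Z2) * (Z2 + Z3 + Z4) - Z2 * Z2
Eo_loop = Z4 * Z2 * Ein / det    # Eq. (3-74) applied to I3

print(abs(Eo_node - Eo_loop) < 1e-12)
```

The two signal flow graphs differ, but the input-output relation they portray does not, which is the sense in which the graph of a system is not unique.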
Example 3-4

Let us consider the RLC network shown in Fig. 3-22(a). We shall define the current i(t) and the voltage ec(t) as the dependent variables of the network. Writing the voltage across the inductance and the current in the capacitor, we have the following differential equations:

L di(t)/dt = ei(t) - Ri(t) - ec(t)   (3-77)
C dec(t)/dt = i(t)   (3-78)

However, we cannot construct a signal flow graph using these two equations since they are differential equations. In order to arrive at algebraic equations, we divide Eqs. (3-77) and (3-78) by L and C, respectively. When we take the Laplace transform, we have

sI(s) = i(0+) + (1/L)Ei(s) - (R/L)I(s) - (1/L)Ec(s)   (3-79)
sEc(s) = ec(0+) + (1/C)I(s)   (3-80)

where i(0+) is the initial current and ec(0+) is the initial capacitor voltage at t = 0+. In these last two equations, ec(0+), i(0+), and Ei(s) are the input variables. There are several possible ways of constructing the signal flow graph for these equations. One way is to solve for I(s) from Eq. (3-79) and Ec(s) from Eq. (3-80); we get
I(s) = [1/(s + R/L)]i(0+) + {1/[L(s + R/L)]}Ei(s) - {1/[L(s + R/L)]}Ec(s)   (3-81)
Ec(s) = (1/s)ec(0+) + (1/Cs)I(s)   (3-82)

Fig. 3-22. (a) RLC network. (b) Signal flow graph. (c) Alternative signal flow graph.

The signal flow graph using the last two equations is drawn as shown in Fig. 3-22(b). The graph in Fig. 3-22(b) is of analytical value only. In other words, we can solve for I(s) and Ec(s) from the signal flow graph in terms of the inputs, ec(0+), i(0+), and Ei(s), but the value of the signal flow graph would probably end here. As an alternative, we can use Eqs. (3-79) and (3-80) directly, and define I(s), Ec(s), sI(s), and sEc(s) as the noninput variables. These four variables are related by the equations

I(s) = s^-1[sI(s)]   (3-83)
Ec(s) = s^-1[sEc(s)]   (3-84)

The significance of using s^-1 is that it represents pure integration in the time domain. Now, a signal flow graph using Eqs. (3-79), (3-80), (3-83), and (3-84) is constructed as shown in Fig. 3-22(c). Notice that in this signal flow graph the Laplace transform variable appears only in the form of s^-1. Therefore, this signal flow graph may be used as a basis for analog or digital computer solution of the problem. Signal flow graphs in this form are defined in Chapter 4 as the state diagrams.
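Because the graph of Fig. 3-22(c) contains only s^-1 (integrator) branches, it maps directly onto a numerical integration routine. The sketch below is a minimal forward-Euler loop for Eqs. (3-77) and (3-78); the element values and the unit-step input are illustrative assumptions, not from the text:

```python
# Forward-Euler integration of the RLC state equations (3-77), (3-78):
#   L di/dt = ei - R*i - ec,   C dec/dt = i
# Element values and the step input are assumed for illustration.
R, L, C = 1.0, 0.5, 0.2
dt, t_end = 1e-4, 5.0

i, ec = 0.0, 0.0            # zero initial conditions i(0+), ec(0+)
t = 0.0
while t < t_end:
    ei = 1.0                # unit-step input ei(t)
    di = (ei - R * i - ec) / L
    dec = i / C
    i += dt * di
    ec += dt * dec
    t += dt

# For a step input the capacitor voltage settles toward ei
print(abs(ec - 1.0) < 0.02)
```

Each s^-1 branch of the graph corresponds to one of the two accumulation statements in the loop, which is why graphs in this form are the natural starting point for computer solution.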
3.10 General Gain Formula for Signal Flow Graphs

Given a signal flow graph or a block diagram, it is usually a tedious task to solve for its input-output relationships by analytical means. Fortunately, there is a general gain formula available which allows the determination of the input-output relationship of a signal flow graph by mere inspection. The general gain formula is

M = yout/yin = (1/Δ) Σ_{k=1}^{N} Mk Δk   (3-85)

where

M = gain between yin and yout
yout = output node variable
yin = input node variable
N = total number of forward paths
Mk = gain of the kth forward path

Δ = 1 - Σm Pm1 + Σm Pm2 - Σm Pm3 + ...   (3-86)

Pmr = gain product of the mth possible combination of r nontouching* loops, or

Δ = 1 - (sum of all individual loop gains) + (sum of the gain products of all possible combinations of two nontouching loops) - (sum of the gain products of all possible combinations of three nontouching loops) + ...   (3-87)

Δk = the Δ for that part of the signal flow graph which is nontouching with the kth forward path.

This general gain formula may seem formidable to use at first glance. However, the only complicated term in the gain formula is Δ; but in practice, systems having a large number of nontouching loops are rare. An error that is frequently made with regard to the gain formula concerns the condition under which it is valid. It must be emphasized that the gain formula can be applied only between an input node and an output node.

*Two parts of a signal flow graph are said to be nontouching if they do not share a common node.
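The bookkeeping in Eqs. (3-85) through (3-87) is mechanical once the forward paths and loops are listed, so it can be sketched in code. In the hypothetical helper below (not from the text), each path or loop is supplied as the set of nodes it touches plus its gain; Δ is built by summing over every combination of mutually nontouching loops with alternating signs:

```python
from itertools import combinations

def delta(loops, exclude=frozenset()):
    """Evaluate Eq. (3-86) using only loops not touching `exclude`.

    `loops` is a list of (node_set, gain) pairs; two loops are
    nontouching when their node sets are disjoint.
    """
    usable = [(n, g) for n, g in loops if not (n & exclude)]
    d, sign = 1.0, -1.0
    for r in range(1, len(usable) + 1):
        total = 0.0
        for combo in combinations(usable, r):
            sets = [n for n, _ in combo]
            # keep only mutually nontouching combinations
            if all(a.isdisjoint(b) for a, b in combinations(sets, 2)):
                prod = 1.0
                for _, g in combo:
                    prod *= g
                total += prod
        d += sign * total
        sign = -sign
    return d

def mason(paths, loops):
    """Gain formula (3-85): (sum of Mk * Delta_k) / Delta."""
    num = sum(g * delta(loops, exclude=n) for n, g in paths)
    return num / delta(loops)

# Single-loop feedback system of Fig. 3-19 with G = 4, H = 0.5:
G, H = 4.0, 0.5
paths = [({"E", "C"}, G)]         # one forward path through E and C
loops = [({"E", "C"}, -G * H)]    # one loop, gain -G*H
print(mason(paths, loops))        # agrees with G/(1 + G*H)
```

For the single-loop case the helper reproduces Eq. (3-67); its value is that the same two calls handle graphs with several loops and nontouching combinations without any re-derivation.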
Example 3-5

Consider the signal flow graph of Fig. 3-19. We wish to find the transfer function C(s)/R(s) by use of the gain formula, Eq. (3-85). The following conclusions are obtained by inspection from the signal flow graph:

1. There is only one forward path between R(s) and C(s), and the forward-path gain is

M1 = G(s)   (3-88)

2. There is only one loop; the loop gain is

P11 = -G(s)H(s)   (3-89)

3. There are no nontouching loops since there is only one loop. Furthermore, the forward path is in touch with the only loop. Thus Δ1 = 1, and

Δ = 1 - P11 = 1 + G(s)H(s)

By use of Eq. (3-85), the transfer function of the system is obtained as

C(s)/R(s) = M1Δ1/Δ = G(s)/[1 + G(s)H(s)]   (3-90)

which agrees with the result obtained in Eq. (3-67).
Example 3-6

Consider, in Fig. 3-20(b), that the functional relation between Ein and Eo is to be determined by use of the general gain formula. The signal flow graph is redrawn in Fig. 3-23(a). The following conclusions are obtained by inspection from the signal flow graph:

1. There is only one forward path between Ein and Eo, as shown in Fig. 3-23(b). The forward-path gain is

M1 = Y1Z2Y3Z4   (3-91)

2. There are three individual loops, as shown in Fig. 3-23(c); the loop gains are

P11 = -Z2Y1   (3-92)
P21 = -Z2Y3   (3-93)
P31 = -Z4Y3   (3-94)

3. There is one pair of nontouching loops, as shown in Fig. 3-23(d); the loop gains of these two loops are -Z2Y1 and -Z4Y3. Thus

P12 = product of gains of the first (and only) possible combination of two nontouching loops = Z2Z4Y1Y3   (3-95)

4. There are no combinations of three nontouching loops, four nontouching loops, and so on; thus Pm3 = 0, Pm4 = 0, .... From Eq. (3-86),

Δ = 1 - (P11 + P21 + P31) + P12
  = 1 + Z2Y1 + Z2Y3 + Z4Y3 + Z2Z4Y1Y3   (3-96)

Fig. 3-23. (a) Signal flow graph of the passive network in Fig. 3-20(a). (b) Forward path between Ein and Eo. (c) Three individual loops. (d) Two nontouching loops.

5. All three feedback loops are in touch with the forward path; thus

Δ1 = 1   (3-97)

Substituting the quantities in Eqs. (3-91) through (3-97) into Eq. (3-85), we obtain

Eo(s)/Ein(s) = M1Δ1/Δ = Y1Z2Y3Z4/(1 + Z2Y1 + Z2Y3 + Z4Y3 + Z2Z4Y1Y3)   (3-98)

Example 3-7

Consider the signal flow graph of Fig. 3-22(c). It is desired to find the relationships between I and the three inputs, Ei, i(0+), and ec(0+). A similar relationship is desired for Ec. Since the system is linear, the principle of superposition applies. The gain between one input and one output is determined by applying the gain formula to the two variables while setting the rest of the inputs to zero.

The signal flow graph is redrawn as shown in Fig. 3-24(a). Let us first consider I as the output variable. The forward paths between each input and I are shown in Fig. 3-24(b), (c), and (d), respectively.

Fig. 3-24. (a) Signal flow graph of the RLC network in Fig. 3-22(a). (b) Forward path between Ei and I. (c) Forward path between i(0+) and I. (d) Forward path between ec(0+) and I.

The signal flow graph has two loops; the Δ is given by

Δ = 1 + (R/L)s^-1 + (1/LC)s^-2   (3-99)

All the forward paths are in touch with the two loops; thus Δk = 1 for all cases. Considering each input separately, we have

I = [(1/L)s^-1/Δ]Ei,       with i(0+) = 0, ec(0+) = 0   (3-100)
I = (s^-1/Δ)i(0+),         with Ei = 0, ec(0+) = 0   (3-101)
I = [-(1/L)s^-2/Δ]ec(0+),  with Ei = 0, i(0+) = 0   (3-102)

When all three inputs are applied simultaneously, we write

I = (1/Δ)[(1/L)s^-1 Ei + s^-1 i(0+) - (1/L)s^-2 ec(0+)]   (3-103)

In a similar fashion, the reader should verify that when Ec is considered as the output variable, we have

Ec = (1/Δ)[(1/LC)s^-2 Ei + (1/C)s^-2 i(0+) + s^-1(1 + (R/L)s^-1)ec(0+)]   (3-104)

Notice that the loop between the nodes sI and I is not in touch with the forward path between ec(0+) and Ec.
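Equation (3-103) can be checked against Eqs. (3-79) and (3-80), which are two linear equations in I(s) and Ec(s) at any fixed s. The numbers below (R, L, C, the inputs, and the test frequency) are illustrative assumptions:

```python
# Check Eq. (3-103) against a direct solve of Eqs. (3-79), (3-80)
# at one complex frequency; all numerical values are assumed.
R, L, C = 1.0, 0.5, 0.2
Ei, i0, ec0 = 2.0, 0.3, -1.0     # Ei(s) held constant for the test
s = 0.7 + 1.3j

# Direct elimination: sEc = ec0 + I/C  ->  Ec = ec0/s + I/(C*s);
# substitute into sI = i0 + Ei/L - (R/L)I - (1/L)Ec and solve for I.
I_direct = (i0 + Ei / L - ec0 / (L * s)) / (s + R / L + 1 / (L * C * s))

# Gain-formula result, Eqs. (3-99) and (3-103)
delta = 1 + (R / L) / s + 1 / (L * C * s**2)
I_mason = ((1 / (L * s)) * Ei + (1 / s) * i0 - (1 / (L * s**2)) * ec0) / delta

print(abs(I_direct - I_mason) < 1e-12)
```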
Example 3-8

Consider the signal flow graph of Fig. 3-25. The following input-output relations are obtained by use of the general gain formula:

y7/y1 = ag(1 + d)/Δ   (3-105)

y3/y1 = abc/Δ   (3-106)

where

Δ = 1 + eg + d + bcg + deg   (3-107)

Fig. 3-25. Signal flow graph for Example 3-8.
3.11 Application of the General Gain Formula to Block Diagrams

Because of the similarity between the block diagram and the signal flow graph, the general gain formula in Eq. (3-85) can be used to determine the input-output relationships of either. In general, given a block diagram of a linear system, we can apply the gain formula directly to it. However, in order to be able to identify all the loops and nontouching parts clearly, it is sometimes helpful to draw an equivalent signal flow graph for the block diagram before applying the gain formula.

To illustrate how the signal flow graph and the block diagram are related, the equivalent models of a control system are shown in Fig. 3-26. Note that since a node on the signal flow graph is interpreted as a summing point of all incoming signals to the node, the negative feedback paths in this case are represented by assigning negative gains to the feedback paths.

Fig. 3-26. (a) Block diagram of a control system. (b) Equivalent signal flow graph.

The closed-loop transfer function of the system is obtained by applying Eq. (3-85) to either the block diagram or the signal flow graph:

C(s)/R(s) = (G1G2G3 + G1G4)/(1 + G1G2H1 + G2G3H2 + G1G2G3 + G4H2 + G1G4)   (3-108)
Similarly,

E(s)/R(s) = (1 + G1G2H1 + G2G3H2 + G4H2)/Δ   (3-109)

Y3(s)/R(s) = (1 + G2G3H2 + G4H2)/Δ   (3-110)
where

Δ = 1 + G1G2H1 + G2G3H2 + G1G2G3 + G4H2 + G1G4   (3-111)

3.12 Transfer Functions of Discrete-Data Systems
It is shown in Chapter 2 that the signals in a discrete-data or sampled-data system are in the form of pulse trains. Therefore, the Laplace transform and the transfer functions defined for continuous-data systems, in the s-domain, cannot be used adequately to describe these systems. Figure 3-27(a) illustrates a linear system with transfer function G(s) whose input is the output of a finite-pulsewidth sampler. As described in Section 2.8, the finite-pulsewidth sampler closes for a short duration of p seconds once every T seconds, and a typical set of input and output signals of the sampler is shown in Fig. 2-5. Since, for a very small pulse duration p as compared with the sampling period T, the finite-pulsewidth sampler can be approximated by an ideal sampler connected in cascade with a constant attenuation p, the system of Fig. 3-27(a) may be approximated by the system shown in Fig. 3-27(b).
Fig. 3-27. (a) Discrete-data system with a finite-pulsewidth sampler. (b) Discrete-data system with an ideal sampler that approximates the system in (a).
Fig. 3-28. Discrete-data system with an ideal sampler.

Normally, for convenience, it is assumed that the attenuation factor p is included in the transfer function of the process, G(s). Therefore, the block diagram of Fig. 3-28 is considered as that of a typical open-loop discrete-data or sampled-data system.

There are several ways of deriving the transfer function representation of the system of Fig. 3-28. In the following we shall show two different representations of the transfer function of the system.

Let us assume that r*(t), the output of the ideal sampler S1, is a unit impulse function. This may be obtained by sampling a unit step function us(t) once at t = 0, or if r(t) is a unit impulse function.* Unless stated otherwise, the samplers considered in the remaining portion of this text are ideal samplers. The output of G(s) is then the impulse response, g(t). If a fictitious ideal sampler S2, which is synchronized with S1 and has the same sampling period, is placed at the output of the system as shown in Fig. 3-28, the output of the sampler S2 may be written as

c*(t) = g*(t) = Σ_{k=0}^{∞} g(kT)δ(t - kT)   (3-112)

where c(kT) = g(kT) is defined as the weighting sequence of the linear process G(s). In other words, the sampled version of the impulse response or weighting function is the weighting sequence.

Taking the Laplace transform on both sides of Eq. (3-112) yields

C*(s) = G*(s) = L[g*(t)] = Σ_{k=0}^{∞} g(kT)e^(-kTs)   (3-113)

which is defined as the pulse transfer function of the linear process. At this point we can summarize our findings about the description of the discrete-data system of Fig. 3-28 as follows: When a unit impulse function is applied to the linear process, the output is simply the impulse response of the process; the impulse response is sampled by a fictitious ideal sampler S2, and the output of the sampler is the weighting sequence of the process. The Laplace transform of the weighting-sequence impulse train gives the pulse transfer function G*(s).

*Although from a mathematical standpoint the meaning of sampling an impulse function is questionable and difficult to define, physically we may argue that sending a pulse through a finite-pulsewidth sampler will retain the identity of the pulse.
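Equation (3-113) can be exercised numerically. For an assumed first-order process with g(t) = e^(-at), the truncated series below approximates G*(s), matches the closed form of the geometric series, and exhibits the periodicity of G*(s) along the jω axis with period ω_s:

```python
import cmath, math

# Truncated pulse transfer function G*(s) = sum g(kT) e^{-kTs},
# Eq. (3-113), for an assumed process with g(t) = exp(-a*t).
a, T = 1.0, 0.1
ws = 2 * math.pi / T              # sampling frequency, rad/s

def G_star(s, terms=2000):
    return sum(math.exp(-a * k * T) * cmath.exp(-k * T * s)
               for k in range(terms))

s = 0.5 + 0.3j
# Closed form of the geometric series: 1 / (1 - e^{-aT} e^{-Ts})
closed = 1 / (1 - math.exp(-a * T) * cmath.exp(-T * s))

print(abs(G_star(s) - closed) < 1e-9)
print(abs(G_star(s + 1j * ws) - G_star(s)) < 1e-9)
```

The second check is the property used later in Eq. (3-129): shifting s by jnω_s leaves a starred transform unchanged.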
Once the weighting sequence of a linear system is defined, the output of the system, c(t), and the sampled output, c*(t), which are due to any arbitrary input, can be obtained by means of the principle of superposition.

Consider that an arbitrary input r(t) is applied to the system of Fig. 3-28 at t = 0. The output of the ideal sampler is the impulse train

r*(t) = Σ_{k=0}^{∞} r(kT)δ(t - kT)   (3-114)

By means of superposition, the output of the process, which is due to r*(t), is

c(t) = r(0)g(t) + r(T)g(t - T) + ... + r(kT)g(t - kT) + ...   (3-115)

At t = kT, the last equation becomes

c(kT) = r(0)g(kT) + r(T)g[(k - 1)T] + ... + r[(k - 1)T]g(T) + r(kT)g(0)   (3-116)

where it is assumed that g(t) is zero for all t < 0, since the process is a physical system so that its output does not precede the input.

Multiplying both sides of Eq. (3-116) by e^(-kTs) and taking the summation from k = 0 to k = ∞, we have

Σ_{k=0}^{∞} c(kT)e^(-kTs) = Σ_{k=0}^{∞} r(0)g(kT)e^(-kTs) + Σ_{k=0}^{∞} r(T)g[(k - 1)T]e^(-kTs) + ...   (3-117)

Again, using the fact that g(t) is zero for negative time, Eq. (3-117) is simplified to

Σ_{k=0}^{∞} c(kT)e^(-kTs) = [r(0) + r(T)e^(-Ts) + r(2T)e^(-2Ts) + ...] Σ_{k=0}^{∞} g(kT)e^(-kTs)   (3-118)

or

Σ_{k=0}^{∞} c(kT)e^(-kTs) = Σ_{k=0}^{∞} r(kT)e^(-kTs) Σ_{k=0}^{∞} g(kT)e^(-kTs)   (3-119)

Therefore, using the definition of the pulse transfer function, the last equation is written

C*(s) = R*(s)G*(s)   (3-120)

which is the input-output transfer relationship of the discrete-data system shown in Fig. 3-28.
The z-transform relationship is obtained directly from the definition of the z-transform. Since z = e^(Ts), Eq. (3-119) is also written

Σ_{k=0}^{∞} c(kT)z^(-k) = Σ_{k=0}^{∞} r(kT)z^(-k) Σ_{k=0}^{∞} g(kT)z^(-k)   (3-121)

Therefore, defining the z-transfer function of the process as

G(z) = Σ_{k=0}^{∞} g(kT)z^(-k)   (3-122)

which implies that the z-transfer function of a linear system, G(z), is the z-transform of the weighting sequence, g(kT), of the system, Eq. (3-121) is written

C(z) = R(z)G(z)   (3-123)
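Equation (3-123) is the z-domain statement that the output samples are the discrete convolution of the input samples with the weighting sequence, Eq. (3-116). A short numerical check, with assumed sample sequences and an assumed test point z:

```python
# Verify C(z) = R(z)G(z), Eq. (3-123), at a test point z, using an
# assumed sampled unit step and an assumed weighting sequence.
N = 400
r = [1.0] * N                          # sampled unit step r(kT)
g = [0.9 ** k for k in range(N)]       # assumed weighting sequence g(kT)

# Output samples by the discrete convolution of Eq. (3-116)
c = [sum(r[j] * g[k - j] for j in range(k + 1)) for k in range(N)]

def ztrans(seq, z):
    return sum(x * z ** (-k) for k, x in enumerate(seq))

z = 1.5 + 0.5j
lhs = ztrans(c, z)
rhs = ztrans(r, z) * ztrans(g, z)
print(abs(lhs - rhs) / abs(rhs) < 1e-6)
```

The truncation is harmless here because |z| > 1, so the neglected tail terms decay geometrically.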
It is important to point out that the output of the discrete-data system is continuous with respect to time. However, the pulse transform of the output, C*(s), and the z-transform of the output, C(z), specify the values of c(t) only at the sampling instants. If c(t) is a well-behaved function between sampling instants, c*(t) or C(z) may give an accurate description of the true output c(t). However, if c(t) has wide fluctuations between the sampling instants, the z-transform method, which gives information only at the sampling instants, will yield misleading or inaccurate results.

The pulse transfer relation of Eq. (3-120) can also be obtained by use of the following relation between C*(s) and C(s), which is given in the literature:
C*(s) = (1/T) Σ_{n=-∞}^{∞} C(s + jnω_s)   (3-124)

where ω_s is the sampling frequency in radians per second and ω_s = 2π/T.

From Fig. 3-28, the Laplace transform of the continuous-data output c(t) is

C(s) = G(s)R*(s)   (3-125)

Substituting Eq. (3-125) into Eq. (3-124) gives

C*(s) = (1/T) Σ_{n=-∞}^{∞} G(s + jnω_s)R*(s + jnω_s)   (3-126)

We can write

R*(s + jnω_s) = Σ_{k=0}^{∞} r(kT)e^[-kT(s + jnω_s)]   (3-127)

and, since for integral k and n,

e^(-jnkTω_s) = e^(-j2πnk) = 1   (3-128)

Eq. (3-127) becomes

R*(s + jnω_s) = Σ_{k=0}^{∞} r(kT)e^(-kTs) = R*(s)   (3-129)

Using this identity, Eq. (3-126) is simplified to

C*(s) = R*(s)(1/T) Σ_{n=-∞}^{∞} G(s + jnω_s)   (3-130)

or

C*(s) = R*(s)G*(s)   (3-131)

where

G*(s) = (1/T) Σ_{n=-∞}^{∞} G(s + jnω_s)   (3-132)

The transfer function in z of Eq. (3-123) can again be obtained directly from Eq. (3-131) by use of z = e^(Ts).

In conclusion, we note that when the input to a linear system is sampled but the output is unsampled, the Laplace transform of the continuous output is given by

C(s) = G(s)R*(s)   (3-133)

If the continuous-data output is sampled by a sampler that is synchronized with and has the same sampling period as the input, the Laplace transform of the discrete-data output is given by

C*(s) = G*(s)R*(s)   (3-134)

The result in Eq. (3-133) is natural, since it is in line with the well-established transfer relation for linear time-invariant systems. The expression in Eq. (3-134) is obtained by use of Eqs. (3-124) and (3-132). However, it can be interpreted as being obtained directly from Eq. (3-133) by taking the pulse transform on both sides of the equation. In other words, in view of Eq. (3-129), we can write, from Eq. (3-133),

C*(s) = [G(s)R*(s)]* = G*(s)[R*(s)]*   (3-135)

which leads to Eq. (3-134), since

[R*(s)]* = R*(s)   (3-136)
Transfer Functions of Discrete-Data Systems with Cascaded Elements

The transfer function representation of discrete-data systems with cascaded elements is slightly more involved than that for continuous-data systems, because of the variation of having or not having samplers between the elements. Figure 3-29 illustrates two different situations of a discrete-data system which contains two cascaded elements. In the system of Fig. 3-29(a), the two elements are separated by a sampler S2 which is synchronized to, and has the same period as, the sampler S1. The two elements with transfer functions G1(s) and G2(s) of the system in Fig. 3-29(b) are connected directly together. In discrete-data systems it is important to distinguish these two cases when deriving the pulse transfer functions.

Fig. 3-29. (a) Discrete-data system with cascaded elements and a sampler separating the two elements. (b) Discrete-data system with cascaded elements and no sampler in between.
Let us consider first the system of Fig. 3-29(a). The output of G1(s) is written

D(s) = G1(s)R*(s)   (3-137)

and the system output is

C(s) = G2(s)D*(s)   (3-138)

Taking the pulse transform on both sides of Eq. (3-137) and substituting the result into Eq. (3-138) yields

C(s) = G2(s)G1*(s)R*(s)   (3-139)

Then, taking the pulse transform on both sides of the last equation gives

C*(s) = G2*(s)G1*(s)R*(s)   (3-140)

where we have made use of the relation in Eq. (3-136). The corresponding z-transform expression of the last equation is

C(z) = G2(z)G1(z)R(z)   (3-141)

We conclude that the z-transform of two linear elements separated by a sampler is equal to the product of the z-transforms of the two individual transfer functions.
The Laplace transform of the output of the system in Fig. 3-29(b) is

C(s) = G1(s)G2(s)R*(s)   (3-142)

The pulse transform of the last equation is

C*(s) = [G1(s)G2(s)]*R*(s)   (3-143)

where

[G1(s)G2(s)]* = (1/T) Σ_{n=-∞}^{∞} G1(s + jnω_s)G2(s + jnω_s)   (3-144)

Notice that since G1(s) and G2(s) are not separated by a sampler, they have to be treated as one element when taking the pulse transform. For simplicity, we define the following notation:

[G1(s)G2(s)]* = G1G2*(s) = G2G1*(s)   (3-145)

Then Eq. (3-143) becomes

C*(s) = G1G2*(s)R*(s)   (3-146)

Taking the z-transform on both sides of Eq. (3-146) gives

C(z) = G1G2(z)R(z)   (3-147)

where G1G2(z) is defined as the z-transform of the product of G1(s) and G2(s), and it should be treated as a single function.
It is important to note that, in general,

G1G2*(s) ≠ G1*(s)G2*(s)   (3-148)

and

G1G2(z) ≠ G1(z)G2(z)   (3-149)

Therefore, we conclude that the z-transform of two cascaded elements with no sampler in between is equal to the z-transform of the product of the transfer functions of the two elements, which is not the product of their individual z-transforms.
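The inequality in Eq. (3-149) is easy to exhibit numerically. Assume G1(s) = 1/(s + 1) and G2(s) = 1/(s + 2), so the weighting sequences are samples of e^(-t), e^(-2t), and, for the product G1(s)G2(s), of g12(t) = e^(-t) - e^(-2t):

```python
import math

# Demonstrate G1G2(z) != G1(z)G2(z), Eq. (3-149), with
# G1 = 1/(s+1), G2 = 1/(s+2), so that
#   g1(t) = e^-t,  g2(t) = e^-2t,  g12(t) = e^-t - e^-2t.
T = 0.5
z = 2.0          # real test point outside the unit circle

def ztrans(g, terms=200):
    return sum(g(k * T) * z ** (-k) for k in range(terms))

G1z = ztrans(lambda t: math.exp(-t))
G2z = ztrans(lambda t: math.exp(-2 * t))
G1G2z = ztrans(lambda t: math.exp(-t) - math.exp(-2 * t))

print(abs(G1G2z - G1z * G2z) > 0.1)   # the two quantities differ
```

With these values G1(z)G2(z) is roughly 1.76 while G1G2(z) is roughly 0.21, so the presence or absence of the intermediate sampler changes the transfer function substantially, not just by a small correction.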
Transfer Functions of Closed-Loop Discrete-Data Systems

In this section the transfer functions of simple closed-loop discrete-data systems are derived by algebraic means. Consider the closed-loop system shown in Fig. 3-30. The output transform is

C(s) = G(s)E*(s)   (3-150)

Fig. 3-30. Closed-loop discrete-data system.

The Laplace transform of the continuous error function is

E(s) = R(s) - H(s)C(s)   (3-151)

Substituting Eq. (3-150) into Eq. (3-151) yields

E(s) = R(s) - G(s)H(s)E*(s)   (3-152)

Taking the pulse transform on both sides of the last equation and solving for E*(s) gives

E*(s) = R*(s)/[1 + GH*(s)]   (3-153)

The output transform C(s) is obtained by substituting E*(s) from Eq. (3-153) into Eq. (3-150); we have

C(s) = {G(s)/[1 + GH*(s)]}R*(s)   (3-154)

Now taking the pulse transform on both sides of Eq. (3-154) gives

C*(s) = {G*(s)/[1 + GH*(s)]}R*(s)   (3-155)

In this case it is possible to define the pulse transfer function between the input and the output of the closed-loop system as

C*(s)/R*(s) = G*(s)/[1 + GH*(s)]   (3-156)

The z-transfer function of the system is

C(z)/R(z) = G(z)/[1 + GH(z)]   (3-157)
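Equation (3-157) leads directly to a difference equation for the output samples. As an illustration, take an assumed first-order z-transfer function G(z) = K/(z - a) with unity feedback (H = 1), so that C(z)/R(z) = K/(z - a + K):

```python
import math

# Step response implied by Eq. (3-157) with unity feedback and an
# assumed first-order z-transfer function G(z) = K/(z - a):
#   C(z)/R(z) = G(z)/(1 + G(z)) = K/(z - a + K)
# i.e. the difference equation  c[k+1] = (a - K)*c[k] + K*r[k].
K, a = 0.5, math.exp(-0.5)        # illustrative values

c = 0.0
for k in range(200):
    r = 1.0                       # unit-step input samples
    c = (a - K) * c + K * r

# The steady state should match C/R evaluated at z = 1.
print(abs(c - K / (1 - a + K)) < 1e-9)
```

This is the practical content of a closed-loop pulse transfer function: once it exists in the form of Eq. (3-157), the sampled output can be generated recursively from the sampled input.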
We shall show in the following that although it is possible to define a transfer function for the closed-loop system of Fig. 3-30, in general this may not be possible for all discrete-data systems. Let us consider the system shown in Fig. 3-31. The output transforms, C(s) and C(z), are derived as follows:

C(s) = G(s)E(s)   (3-158)

E(s) = R(s) - H(s)C*(s)   (3-159)

Fig. 3-31. Closed-loop discrete-data system.
Substituting Eq. (3-159) into Eq. (3-158) yields

C(s) = G(s)R(s) - G(s)H(s)C*(s)   (3-160)

Taking the pulse transform on both sides of the last equation and solving for C*(s), we have

C*(s) = GR*(s)/[1 + GH*(s)]   (3-161)

Note that the input and the transfer function G(s) are now combined as one function, GR*(s), and the two cannot be separated. In this case we cannot define a transfer function in the form of C*(s)/R*(s). The z-transform of the output is determined directly from Eq. (3-161) to be

C(z) = GR(z)/[1 + GH(z)]   (3-162)

where it is important to note that

GR(z) = Z[G(s)R(s)]   (3-163)

and

GH(z) = Z[G(s)H(s)]   (3-164)
To determine the transform of the continuous output, C(s), we substitute C*(s) from Eq. (3-161) into Eq. (3-160). We have

C(s) = G(s)R(s) - {G(s)H(s)/[1 + GH*(s)]}GR*(s)   (3-165)

Although we have been able to arrive at the input-output transfer functions and transfer relations of the systems of Figs. 3-30 and 3-31 by algebraic means without difficulty, for more complex system configurations the algebraic method may become tedious. The signal-flow-graph method may be extended to the analysis of discrete-data systems; the reader may refer to the literature [7, 8].
REFERENCES

Block Diagrams and Signal Flow Graphs

1. T. D. Graybeal, "Block Diagram Network Transformation," Elec. Eng., Vol. 70, pp. 985-990, 1951.
2. S. J. Mason, "Feedback Theory - Some Properties of Signal Flow Graphs," Proc. IRE, Vol. 41, No. 9, pp. 1144-1156, Sept. 1953.
3. S. J. Mason, "Feedback Theory - Further Properties of Signal Flow Graphs," Proc. IRE, Vol. 44, No. 7, pp. 920-926, July 1956.
4. L. P. A. Robichaud, M. Boisvert, and J. Robert, Signal Flow Graphs and Applications, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1962.
5. B. C. Kuo, Linear Networks and Systems, McGraw-Hill Book Company, New York, 1967.
6. N. Ahmed, "On Obtaining Transfer Functions from Gain-Function Derivatives," IEEE Trans. Automatic Control, Vol. AC-12, p. 229, Apr. 1967.

Signal Flow Graphs of Sampled-Data Systems

7. B. C. Kuo, Analysis and Synthesis of Sampled-Data Control Systems, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1963.
8. B. C. Kuo, Discrete Data Control Systems, Science-Tech, Box 2277, Station A, Champaign, Illinois, 1970.
PROBLEMS

3.1. The following differential equations represent linear time-invariant systems, where r(t) denotes the input and c(t) denotes the output. Find the transfer function of each of the systems.

(a) d3c(t)/dt3 + 3 d2c(t)/dt2 + 4 dc(t)/dt + c(t) = 2 dr(t)/dt + r(t)
(b) d2c(t)/dt2 + 10 dc(t)/dt + 2c(t) = r(t - 2)

3.2. The block diagram of a multivariable feedback control system is shown in Fig. P3-2. The transfer function matrices of the system are

G(s) = [ 1/(s + 1)    0  ]          H(s) = [ 1  0 ]
       [     2       1/s ]                 [ 0  1 ]

Find the closed-loop transfer function matrix for the system.

Figure P3-2.

3.3. A multivariable system with two inputs and two outputs is shown in Fig. P3-3. Determine the transfer function relationships C1(s)/R1(s), C2(s)/R1(s), C1(s)/R2(s), and C2(s)/R2(s). Write the transfer function relation of the system in the form C(s) = G(s)R(s).

Figure P3-3.

3.4. Draw a signal flow graph for the following set of algebraic equations:

3x1 + x2 + 5x3 = 0
x1 + 2x2 - 4x3 = 2
-x2 - x3 = 3
3.5. Draw an equivalent signal flow graph for the block diagram in Fig. P3-5. Find the transfer function C(s)/R(s).

Figure P3-5.

3.6. Find the gains y6/y1, y3/y1, and y3/y2 for the signal flow graph shown in Fig. P3-6.

Figure P3-6.

3.7. Find the gains y5/y1 and y2/y1 for the signal flow graph shown in Fig. P3-7.

Figure P3-7.
3.8. In the circuit of Fig. P3-8, es(t), ed(t), and is(t) are ideal sources. Find the value of a so that the voltage eo(t) is not affected by the source ed(t).

Figure P3-8.

3.9. Are the two systems shown in Fig. P3-9(a) and (b) equivalent? Explain.

Figure P3-9.

3.10. Given the signal flow graph of Fig. P3-10(a) and the transfer functions G1, G2, G3, G4, and G5, find the transfer functions GA, GB, and GC so that the three systems shown in Fig. P3-10 are all equivalent.

Figure P3-10.
3.11. Construct an equivalent signal flow graph for the block diagram of Fig. P3-11.
(a) Evaluate the transfer function C/R when N = 0.
(b) Determine the relation among the transfer functions G1, G2, G3, G4, H1, and H2 so that the output C is not affected by the disturbance signal N.

Figure P3-11.

3.12. A multivariable system is described by the following matrix transfer function relations:

C(s) = G(s)S(s)
S(s) = R(s) - H(s)C(s)

where

C(s) = [ C1(s) ]        R(s) = [ R1(s) ]
       [ C2(s) ]               [ R2(s) ]

G(s) = [ 1/s   1/(s + 1) ]        H(s) = [ 1  0 ]
       [  0       1/s    ]               [ 0  0 ]
(a) Derive the closed-loop transfer function relationship C(s) = M(s)R(s) by using

M(s) = [I + G(s)H(s)]^-1 G(s)

(b) Draw a signal flow graph for the system and find M(s) from the signal flow graph using Mason's gain formula.

3.13. Find the transfer function relations C(s)/R(s) and C(s)/E(s) for the system shown in Fig. P3-13.

Figure P3-13.

3.14. Find the transfer function C(z)/R(z) of the discrete-data system shown in Fig. P3-14. The sampling period is 1 sec.

Figure P3-14. (The input r(t) is applied through an ideal sampler to the process 1/[s(s + 2)].)

3.15. Find the z-transfer functions C(z)/R(z) of the discrete-data systems shown in Fig. P3-15.

Figure P3-15. (a) Two cascaded elements with no sampler in between. (b) Two cascaded elements separated by a sampler.
4 State-Variable Characterization of Dynamic Systems

4.1 Introduction to the State Concept

In Chapter 3 the classical methods of describing a linear system by transfer function, impulse response, block diagram, and signal flow graph have been presented. An important feature of this type of representation is that the system dynamics are described by input-output relations. For instance, the transfer function describes the input-output relation in the Laplace transform domain. However, the transform method suffers from the disadvantage that all the initial conditions of the system are neglected. Therefore, when one is interested in a time-domain solution, which depends to a great deal on the past history of the system, the transfer function does not carry all the necessary information. The transfer function is valuable for frequency-domain analysis and design, as well as for stability studies. The greatest advantage of the transfer function is its compactness and the ease with which we can obtain qualitative information on the system from the poles and zeros of the transfer function.

An alternative to the transfer function method of describing a linear system is the state-variable method. The state-variable representation is not limited to linear and time-invariant systems; it can be applied to nonlinear as well as time-varying systems. The state-variable method is often referred to as a modern approach. However, in reality, the state equations are simply first-order differential equations, which have been used for the characterization of dynamic systems for many years by physicists and mathematicians.

To begin with the state-variable approach, we should first define the state of a system. As the word implies, the state of a system refers to
/
State-Variable Characterization of
Cna P- 4
Dynamic Systems
the past, present, and future conditions of the system. It is interesting to note that an easily understood example is the "State of the Union" speech given by the President of the United States every year. In this case, the entire system encompasses all elements of the government, society, economy, and so on. In
of numbers, a curve, an equation, or something that is more abstract in nature. From a mathematical sense it is convenient to define a set of state variables and state equations to portray systems. There are some basic ground rules regarding the definition of a state
general, the state can be described
variable
and what constitutes a
by a
set
state equation.
Consider that the
set
of variables,
characteristics of a ., x„(t) is chosen to describe the dynamic Xl (t), x 2 (t), of the system. Then variables state the system. Let us define these variables as conditions following the these state variables must satisfy .
1. At any time t = t0, the state variables x1(t0), x2(t0), ..., xn(t0) define the initial states of the system at the selected initial time.
2. Once the inputs of the system for t ≥ t0 and the initial states defined above are specified, the state variables should completely define the future behavior of the system.

Therefore, we may define the state variables as follows:

Definition of state variables. The state variables of a system are defined as a minimal set of variables, x1(t), x2(t), ..., xn(t), such that knowledge of these variables at any time t0, plus information on the input excitation subsequently applied, is sufficient to determine the state of the system at any time t > t0.

One should not confuse the state variables with the outputs of a system. An output of a system is a variable that can be measured, but a state variable does not always, and often does not, satisfy this requirement. However, an output variable is usually defined as a function of the state variables.
Example 4-1

As a simple illustrative example of state variables, let us consider the RL network shown in Fig. 4-1. The history of the network is completely specified by the initial current of the inductance, i(0+), at t = 0. Assume that at t = 0 a constant input voltage of amplitude E1 is applied to the network. The loop equation of the network for t > 0 is

    e(t) = Ri(t) + L di(t)/dt   (4-1)

Fig. 4-1. RL network.

Taking the Laplace transform on both sides of the last equation, we get

    E(s) = E1/s = (R + Ls)I(s) - Li(0+)   (4-2)

Solving for I(s) from the last equation yields

    I(s) = E1/[s(R + Ls)] + Li(0+)/(R + Ls)   (4-3)

The current i(t) for t ≥ 0 is obtained by taking the inverse Laplace transform of both sides of Eq. (4-3). We have

    i(t) = (E1/R)(1 - e^{-Rt/L}) + i(0+)e^{-Rt/L}   t ≥ 0   (4-4)

Once the current i(t) is determined for t ≥ 0, the behavior of the entire network is defined for the same time interval. Therefore, it is apparent that the current i(t) in this case satisfies the basic requirements as a state variable. This is not surprising, since an inductor is an electric element that stores kinetic energy, and it is the energy storage capability that holds the information on the history of the system. Similarly, it is easy to see that the voltage across a capacitor also qualifies as a state variable.
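The closed-form current of Eq. (4-4) can be checked against a direct numerical integration of the loop equation. The component values below are illustrative only; the text keeps R, L, E1, and i(0+) symbolic:

```python
import numpy as np

# Illustrative values only (not from the text).
R, L, E1, i0 = 2.0, 0.5, 10.0, 1.0

def i_closed_form(t):
    """Eq. (4-4): i(t) = (E1/R)(1 - e^{-Rt/L}) + i(0+) e^{-Rt/L}."""
    return (E1 / R) * (1.0 - np.exp(-R * t / L)) + i0 * np.exp(-R * t / L)

# Forward-Euler integration of the loop equation (4-1), rearranged as
# di/dt = -(R/L) i + (1/L) e(t), with e(t) = E1 applied at t = 0.
dt, T = 1e-5, 2.0
i = i0
for _ in range(int(T / dt)):
    i += dt * (-(R / L) * i + E1 / L)

print(abs(i - i_closed_form(T)))   # small discretization error
```

The agreement between the integrated and closed-form currents shows that i(t) alone carries all the history the network needs.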
4.2 State Equations and the Dynamic Equations

The first-order differential equation of Eq. (4-1), which gives the relationship between the state variable i(t) and the input e(t), can be rearranged to give

    di(t)/dt = -(R/L) i(t) + (1/L) e(t)   (4-5)

This first-order differential equation is referred to as a state equation. For a system with p inputs and q outputs, where the system may be linear or nonlinear, time varying or time invariant, the state equations of the system are written as

    dxi(t)/dt = fi[x1(t), x2(t), ..., xn(t), r1(t), r2(t), ..., rp(t)]   i = 1, 2, ..., n   (4-6)

where x1(t), x2(t), ..., xn(t) are the state variables; r1(t), r2(t), ..., rp(t) are the input variables; and fi denotes the ith functional relationship.
The outputs of the system, ck(t), k = 1, 2, ..., q, are related to the state variables and the inputs through the output equation

    ck(t) = gk[x1(t), x2(t), ..., xn(t), r1(t), r2(t), ..., rp(t)]   k = 1, 2, ..., q   (4-7)

where gk denotes the kth functional relationship. The state equations and the output equations together form the set of equations which are often called the dynamic equations of the system. Notice that for the state equations, the left side of the equation should contain only the first derivatives of the state variables, while the right side should have only the state variables and the inputs.
Example 4-2

Consider the RLC network shown in Fig. 4-2. Using the conventional network approach, the loop equation of the network is written

    e(t) = Ri(t) + L di(t)/dt + (1/C) ∫ i(t) dt   (4-8)

We notice that this equation is not in the form of a state equation, since the last term is a time integral. One method of writing the state equations of the network, starting with Eq. (4-8), is to let the state variables be defined as

    x1(t) = i(t)   (4-9)
    x2(t) = ∫ i(t) dt   (4-10)

Fig. 4-2. RLC network.

Substituting the last two equations into Eq. (4-8), we have

    e(t) = R x1(t) + L dx1(t)/dt + (1/C) x2(t)   (4-11)

Rearranging the terms in Eq. (4-11) and taking the time derivative on both sides of Eq. (4-10), we have the two state equations of the network,

    dx1(t)/dt = -(R/L) x1(t) - (1/LC) x2(t) + (1/L) e(t)   (4-12)
    dx2(t)/dt = x1(t)   (4-13)

which are linear first-order differential equations.
We have demonstrated how the state equations of the RLC network may be written from the loop equations by defining the state variables in a specific way. The objective, of course, is to replace Eq. (4-8) by two first-order differential equations. An alternative approach is to start with the network and define the state variables according to the energy-storage elements of the network. As stated in Section 4.1, we may assign the current through an inductor and the voltage across a capacitor as state variables. Therefore, with reference to Fig. 4-2, the state variables are defined as

    x1(t) = i(t)   (4-14)
    x2(t) = ec(t)   (4-15)

where ec(t) is the voltage across the capacitor. Then, knowing that the state equations would have to contain the first derivatives of x1(t) and x2(t) on the left side of the equations, we can write the equations directly by inspection from Fig. 4-2:

    Voltage across L:   L di(t)/dt = -Ri(t) - ec(t) + e(t)   (4-16)
    Current in C:       C dec(t)/dt = i(t)   (4-17)

Using Eqs. (4-14) and (4-15), and dividing both sides of the last two equations by L and C, respectively, we have

    dx1(t)/dt = -(R/L) x1(t) - (1/L) x2(t) + (1/L) e(t)   (4-18)
    dx2(t)/dt = (1/C) x1(t)   (4-19)

which are the state equations of the RLC network. We notice that using the two independent methods, the only difference in the results is in the definition of the second state variable x2(t). Equation (4-19) differs from Eq. (4-13) by the factor of the capacitance C.
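That the two state-variable choices describe the same network can be checked numerically: integrating Eqs. (4-12)-(4-13) and Eqs. (4-18)-(4-19) side by side should give the same current, with the second states differing by the factor C. A minimal sketch with illustrative component values (not from the text):

```python
import numpy as np

# Illustrative values; step input e(t) = 1 V.
R, L, C, E = 1.0, 0.5, 0.2, 1.0
dt, N = 1e-4, 20_000

xa = np.zeros(2)   # Eqs. (4-12)-(4-13): x1 = i, x2 = integral of i
xb = np.zeros(2)   # Eqs. (4-18)-(4-19): x1 = i, x2 = ec (capacitor voltage)

for _ in range(N):
    xa = xa + dt * np.array([-(R/L)*xa[0] - xa[1]/(L*C) + E/L, xa[0]])
    xb = xb + dt * np.array([-(R/L)*xb[0] - xb[1]/L + E/L, xb[0]/C])

# Same current from both definitions; the second states differ by the factor C.
print(xa[0], xb[0], xa[1] / C, xb[1])
```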
From these two simple examples we see that for linear time-invariant systems, the state equations can generally be written as

    dxi(t)/dt = Σ_{j=1}^{n} aij xj(t) + Σ_{k=1}^{p} bik rk(t)   i = 1, 2, ..., n   (4-20)

where the aij and bik are constant coefficients. The output equations are written

    ck(t) = Σ_{j=1}^{n} dkj xj(t) + Σ_{m=1}^{p} ekm rm(t)   k = 1, 2, ..., q   (4-21)

where the dkj and ekm are constant coefficients. For a linear system with time-varying parameters, the coefficients of Eqs. (4-20) and (4-21) become time dependent.

4.3 Matrix Representation of State Equations
The dynamic equations are more conveniently expressed in matrix form. Let us define the following column matrices:

    x(t) = [x1(t)  x2(t)  ...  xn(t)]'   (n × 1)   (4-22)

where x(t) is defined as the state vector;

    r(t) = [r1(t)  r2(t)  ...  rp(t)]'   (p × 1)   (4-23)

where r(t) is defined as the input vector; and

    c(t) = [c1(t)  c2(t)  ...  cq(t)]'   (q × 1)   (4-24)

where c(t) is defined as the output vector. Then the state equations of Eq. (4-6) can be written

    dx(t)/dt = f[x(t), r(t)]   (4-25)

where f denotes an n × 1 column matrix that contains the functions f1, f2, ..., fn as elements, and the output equations of Eq. (4-7) become

    c(t) = g[x(t), r(t)]   (4-26)

where g denotes a q × 1 column matrix that contains the functions g1, g2, ..., gq as elements.
For a linear time-invariant system, the dynamic equations are written

    State equation:    dx(t)/dt = A x(t) + B r(t)   (4-27)
    Output equation:   c(t) = D x(t) + E r(t)   (4-28)

where A is an n × n coefficient matrix given by

    A = [ a11  a12  ...  a1n ]
        [ a21  a22  ...  a2n ]
        [ ...                ]
        [ an1  an2  ...  ann ]   (4-29)

B is an n × p matrix given by

    B = [ b11  b12  ...  b1p ]
        [ b21  b22  ...  b2p ]
        [ ...                ]
        [ bn1  bn2  ...  bnp ]   (4-30)

D is a q × n matrix given by

    D = [ d11  d12  ...  d1n ]
        [ d21  d22  ...  d2n ]
        [ ...                ]
        [ dq1  dq2  ...  dqn ]   (4-31)

and E is a q × p matrix,

    E = [ e11  e12  ...  e1p ]
        [ e21  e22  ...  e2p ]
        [ ...                ]
        [ eq1  eq2  ...  eqp ]   (4-32)
Example 4-3

The state equations of Eqs. (4-18) and (4-19) are expressed in matrix-vector form as

    [ dx1(t)/dt ]   [ -R/L  -1/L ] [ x1(t) ]   [ 1/L ]
    [ dx2(t)/dt ] = [  1/C    0  ] [ x2(t) ] + [  0  ] e(t)   (4-33)

Thus the coefficient matrices A and B are identified to be

    A = [ -R/L  -1/L ]
        [  1/C    0  ]   (4-34)

    B = [ 1/L ]
        [  0  ]   (4-35)
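As a quick sanity check of Eqs. (4-33)-(4-35), the matrices can be formed numerically and one evaluation of dx/dt = Ax + Be compared against Eqs. (4-18)-(4-19) term by term. The values below are illustrative; the text keeps R, L, and C symbolic:

```python
import numpy as np

# Illustrative values: R = 3 ohms, L = 1 H, C = 0.5 F.
R, L, C = 3.0, 1.0, 0.5

A = np.array([[-R / L, -1.0 / L],
              [1.0 / C, 0.0]])       # Eq. (4-34)
B = np.array([[1.0 / L],
              [0.0]])                # Eq. (4-35)

# One evaluation of dx/dt = A x + B e(t) at an arbitrary state reproduces
# Eqs. (4-18)-(4-19): here i = 2 A, ec = 1 V, e = 4 V.
x = np.array([[2.0], [1.0]])
e = 4.0
dxdt = A @ x + B * e
print(dxdt.ravel())   # [-(R/L)*2 - (1/L)*1 + (1/L)*4, 2/C] = [-3, 4]
```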
4.4 State Transition Matrix
The state transition matrix is defined as a matrix that satisfies the linear homogeneous state equation

    dx(t)/dt = A x(t)   (4-36)

Let φ(t) be an n × n matrix that represents the state transition matrix; then it must satisfy the equation

    dφ(t)/dt = A φ(t)   (4-37)

Furthermore, let x(0+) denote the initial state at t = 0; then φ(t) is also defined by the matrix equation

    x(t) = φ(t) x(0+)   (4-38)

which is the solution of the homogeneous state equation for t ≥ 0.

One way of determining φ(t) is by taking the Laplace transform on both sides of Eq. (4-36); we have

    sX(s) - x(0+) = A X(s)   (4-39)

Solving for X(s) from the last equation, we get

    X(s) = (sI - A)^{-1} x(0+)   (4-40)

where it is assumed that the matrix (sI - A) is nonsingular. Taking the inverse Laplace transform on both sides of the last equation yields

    x(t) = L^{-1}[(sI - A)^{-1}] x(0+)   t ≥ 0   (4-41)

Comparing Eq. (4-41) with Eq. (4-38), the state transition matrix is identified to be

    φ(t) = L^{-1}[(sI - A)^{-1}]   (4-42)

An alternative way of solving the homogeneous state equation is to assume a solution, as in the classical method of solving differential equations. We let the solution to Eq. (4-36) be

    x(t) = e^{At} x(0+)   (4-43)

for t ≥ 0, where e^{At} represents a power series of the matrix At,

    e^{At} = I + At + (1/2!) A²t² + (1/3!) A³t³ + ...   (4-44)*

It is easy to show that Eq. (4-43) is a solution of the homogeneous state equation, since, from Eq. (4-44),

    d e^{At}/dt = A e^{At}   (4-45)

Therefore, in addition to Eq. (4-42), we have obtained another expression for the state transition matrix:

    φ(t) = e^{At} = I + At + (1/2!) A²t² + ...   (4-46)

Equation (4-46) can also be obtained directly from Eq. (4-42). This is left as an exercise for the reader (Problem 4-3).

*It can be proved that this power series is uniformly convergent.
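The two expressions for the state transition matrix are easy to compare numerically: the truncated power series of Eq. (4-46) should converge to a library matrix exponential. A sketch, using the matrix A that appears later in Example 4-5:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
t = 0.7

# Truncated power series of Eq. (4-46).
phi_series = np.zeros((2, 2))
term = np.eye(2)                 # current series term (At)^k / k!
for k in range(1, 30):
    phi_series += term
    term = term @ (A * t) / k

phi = expm(A * t)                # scipy's matrix exponential
err = np.max(np.abs(phi - phi_series))
print(err)
```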
Example 4-4

Consider the RL network of Fig. 4-1 with the input short-circuited; that is, e(t) = 0. The homogeneous state equation is written

    di(t)/dt = -(R/L) i(t)   (4-47)

The solution of the last equation for t ≥ 0 is obtained from Eq. (4-4) by setting E1 = 0. Thus

    i(t) = e^{-Rt/L} i(0+)   (4-48)

The state transition matrix in this case is a scalar and is given by

    φ(t) = e^{-Rt/L}   t ≥ 0   (4-49)

which is a simple exponential decay function.
Significance of the State Transition Matrix

Since the state transition matrix satisfies the homogeneous state equation, it represents the free response of the system. In other words, it governs the response that is excited by the initial conditions only. In view of Eqs. (4-42) and (4-46), the state transition matrix is dependent only upon the matrix A. As the name implies, the state transition matrix φ(t) completely defines the transition of the states from the initial time t = 0 to any time t.
Properties of the State Transition Matrix

The state transition matrix φ(t) possesses the following properties:

1. φ(0) = I   (the identity matrix)   (4-50)

Proof: Equation (4-50) follows directly from Eq. (4-46) by setting t = 0.

2. φ^{-1}(t) = φ(-t)   (4-51)

Proof: Postmultiplying both sides of Eq. (4-46) by e^{-At}, we get

    φ(t) e^{-At} = e^{At} e^{-At} = I   (4-52)

Then premultiplying both sides of Eq. (4-52) by φ^{-1}(t), we get

    e^{-At} = φ^{-1}(t)   (4-53)

Thus

    φ^{-1}(t) = φ(-t) = e^{-At}   (4-54)

An interesting result from this property of φ(t) is that Eq. (4-43) can be rearranged to read

    x(0+) = φ(-t) x(t)   (4-55)

which means that the state transition process can be considered as bilateral in time. That is, the transition in time can take place in either direction.
3. φ(t2 - t0) = φ(t2 - t1) φ(t1 - t0)   for any t0, t1, t2   (4-56)

Proof:

    φ(t2 - t1) φ(t1 - t0) = e^{A(t2 - t1)} e^{A(t1 - t0)} = e^{A(t2 - t0)} = φ(t2 - t0)   (4-57)

This property of the state transition matrix is important, since it implies that a state transition process can be divided into a number of sequential transitions. Figure 4-3 illustrates that the transition from t = t0 to t = t2 is equal to the transition from t0 to t1 and then from t1 to t2. In general, of course, the transition process can be broken up into any number of parts.

Fig. 4-3. Property of the state transition matrix.

Another way of proving Eq. (4-56) is to write

    x(t1) = φ(t1 - t0) x(t0)   (4-58)
    x(t2) = φ(t2 - t1) x(t1)   (4-59)
    x(t2) = φ(t2 - t0) x(t0)   (4-60)

The proper result is obtained by substituting Eq. (4-58) into Eq. (4-59) and comparing the result with Eq. (4-60).
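Properties 1 through 3 can be verified numerically for any square A by computing φ(t) = e^{At}; a small sketch with an arbitrary 2 × 2 matrix:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # any square matrix works
phi = lambda t: expm(A * t)

t0, t1, t2 = 0.2, 0.9, 1.5
p1 = np.allclose(phi(0.0), np.eye(2))                          # Eq. (4-50)
p2 = np.allclose(np.linalg.inv(phi(t1)), phi(-t1))             # Eq. (4-51)
p3 = np.allclose(phi(t2 - t0), phi(t2 - t1) @ phi(t1 - t0))    # Eq. (4-56)
print(p1, p2, p3)
```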
4. [φ(t)]^k = φ(kt)   for k = integer   (4-61)

Proof:

    [φ(t)]^k = e^{At} e^{At} ... e^{At}   (k terms)
             = e^{kAt} = φ(kt)   (4-62)

4.5 State Transition Equation
State Transition Equation
The
state transition equation is defined as the solution of the linear nonhomostate equation. For example, Eq. (4-5) is a state equation of the network of Fig. 4-1. Then Eq. (4-4) is the state transition
geneous
input voltage
RL
is
constant of amplitude £, for
t
>
equation when the
0.
104
/
State-Variable Characterization of
Dynamic Systems
Chap. 4
In general, the linear time-invariant state equation

    dx(t)/dt = A x(t) + B r(t)   (4-63)

can be solved by using either the classical method of solving differential equations or the Laplace transform method. The Laplace transform method is presented in the following.

Taking the Laplace transform on both sides of Eq. (4-63), we have

    sX(s) - x(0+) = A X(s) + B R(s)   (4-64)

where x(0+) denotes the initial state vector evaluated at t = 0+. Solving for X(s) in Eq. (4-64) yields

    X(s) = (sI - A)^{-1} x(0+) + (sI - A)^{-1} B R(s)   (4-65)

The state transition equation of Eq. (4-63) is obtained by taking the inverse Laplace transform on both sides of Eq. (4-65):

    x(t) = L^{-1}[(sI - A)^{-1}] x(0+) + L^{-1}[(sI - A)^{-1} B R(s)]   (4-66)

Using the definition of the state transition matrix of Eq. (4-42), and the convolution integral, Eq. (3-26), Eq. (4-66) is written

    x(t) = φ(t) x(0+) + ∫_0^t φ(t - τ) B r(τ) dτ   t ≥ 0   (4-67)

The state transition equation in Eq. (4-67) is useful only when the initial time is defined to be at t = 0. In the study of control systems, especially discrete-data control systems, it is often desirable to break up a state transition process into a sequence of transitions, so that a more flexible initial time must be chosen. Let the initial time be represented by t0 and the corresponding initial state by x(t0), and assume that an input r(t) is applied for t ≥ t0.

We start with Eq. (4-67) by setting t = t0, and solving for x(0+), we get

    x(0+) = φ(-t0) x(t0) - φ(-t0) ∫_0^{t0} φ(t0 - τ) B r(τ) dτ   (4-68)

where the property of Eq. (4-51) has been used. Substituting Eq. (4-68) into Eq. (4-67) yields

    x(t) = φ(t) φ(-t0) x(t0) - φ(t) φ(-t0) ∫_0^{t0} φ(t0 - τ) B r(τ) dτ + ∫_0^t φ(t - τ) B r(τ) dτ   (4-69)

Now using the property of Eq. (4-56), and combining the last two integrals, Eq. (4-69) becomes

    x(t) = φ(t - t0) x(t0) + ∫_{t0}^{t} φ(t - τ) B r(τ) dτ   (4-70)

It is apparent that Eq. (4-70) reverts to Eq. (4-67) when t0 = 0.

Once the state transition equation is determined, the output vector can be expressed as a function of the initial state and the input vector simply by substituting x(t) from Eq. (4-70) into Eq. (4-28). Thus the output vector is written

    c(t) = D φ(t - t0) x(t0) + ∫_{t0}^{t} D φ(t - τ) B r(τ) dτ + E r(t)   (4-71)

The following example illustrates the application of the state transition equation.
Example 4-5

Consider the state equation

    [ dx1(t)/dt ]   [  0   1 ] [ x1(t) ]   [ 0 ]
    [ dx2(t)/dt ] = [ -2  -3 ] [ x2(t) ] + [ 1 ] r(t)   (4-72)

The problem is to determine the state vector x(t) for t ≥ 0 when the input is a unit step function, r(t) = us(t). The coefficient matrices A and B are identified to be

    A = [  0   1 ]     B = [ 0 ]
        [ -2  -3 ]         [ 1 ]   (4-73)

Therefore,

    sI - A = [ s   -1  ]
             [ 2   s+3 ]

The inverse of (sI - A) is

    (sI - A)^{-1} = (1/(s² + 3s + 2)) [ s+3   1 ]
                                      [ -2    s ]   (4-74)

The state transition matrix of A is found by taking the inverse Laplace transform of the last equation. Thus

    φ(t) = L^{-1}[(sI - A)^{-1}] = [ 2e^{-t} - e^{-2t}       e^{-t} - e^{-2t}  ]
                                   [ -2e^{-t} + 2e^{-2t}    -e^{-t} + 2e^{-2t} ]   (4-75)

The state transition equation for t ≥ 0 is obtained by substituting Eq. (4-75), B, and r(t) into Eq. (4-67). We have

    x(t) = φ(t) x(0+) + ∫_0^t [  e^{-(t-τ)} - e^{-2(t-τ)}  ]
                              [ -e^{-(t-τ)} + 2e^{-2(t-τ)} ] dτ   (4-76)

or

    x(t) = [ 2e^{-t} - e^{-2t}       e^{-t} - e^{-2t}  ] x(0+) + [ 1/2 - e^{-t} + (1/2)e^{-2t} ]
           [ -2e^{-t} + 2e^{-2t}    -e^{-t} + 2e^{-2t} ]         [ e^{-t} - e^{-2t}            ]   t ≥ 0   (4-77)

As an alternative, the second term of the state transition equation can be obtained by taking the inverse Laplace transform of (sI - A)^{-1}BR(s). Therefore, with R(s) = 1/s,

    L^{-1}[(sI - A)^{-1}BR(s)] = L^{-1}{ (1/(s² + 3s + 2)) [ s+3   1 ] [ 0 ] (1/s) }
                                                           [ -2    s ] [ 1 ]

        = L^{-1} [ 1/(s(s+1)(s+2)) ]
                 [ 1/((s+1)(s+2))  ]

        = [ 1/2 - e^{-t} + (1/2)e^{-2t} ]
          [ e^{-t} - e^{-2t}            ]   t ≥ 0   (4-78)
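The closed-form result of Eq. (4-77) can be spot-checked by evaluating the two terms of Eq. (4-67) numerically, using a matrix exponential for φ(t) and numerical quadrature for the convolution integral. The initial state below is arbitrary, since Eq. (4-77) leaves x(0+) general:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([0.0, 1.0])
x0 = np.array([1.0, -1.0])   # arbitrary initial state
t = 0.8

# Eq. (4-67) with r(t) = unit step: zero-input term plus convolution integral.
zi = expm(A * t) @ x0
zs = np.array([quad(lambda tau, i=i: (expm(A * (t - tau)) @ B)[i], 0.0, t)[0]
               for i in range(2)])
x_num = zi + zs

# Closed form, Eq. (4-77).
e1, e2 = np.exp(-t), np.exp(-2.0 * t)
phi = np.array([[2*e1 - e2, e1 - e2],
                [-2*e1 + 2*e2, -e1 + 2*e2]])
forced = np.array([0.5 - e1 + 0.5*e2, e1 - e2])
x_cf = phi @ x0 + forced

err = np.max(np.abs(x_num - x_cf))
print(err)
```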
Example 4-6

In this example we shall illustrate the use of the state transition method for a system with an input discontinuity. Let us consider that the input voltage to the RL network of Fig. 4-1 is as shown in Fig. 4-4: e(t) steps to E at t = 0 and to 2E at t = t1.

Fig. 4-4. Input voltage waveform for the network in Fig. 4-1.

The state equation of the network is

    di(t)/dt = -(R/L) i(t) + (1/L) e(t)   (4-79)

Thus

    A = -R/L     B = 1/L   (4-80)

The state transition matrix is

    φ(t) = e^{-Rt/L}   (4-81)

One approach to the problem of solving for i(t) for t ≥ 0 is to express the input voltage as

    e(t) = E us(t) + E us(t - t1)   (4-82)

where us(t) is the unit step function. The Laplace transform of e(t) is

    E(s) = (E/s)(1 + e^{-t1 s})   (4-83)

Then

    (sI - A)^{-1} B R(s) = (1/(s + R/L)) (1/L) (E/s)(1 + e^{-t1 s})
                         = (E / (Rs[1 + (L/R)s])) (1 + e^{-t1 s})   (4-84)

Substituting Eq. (4-84) into Eq. (4-66), the current for t ≥ 0 is obtained:

    i(t) = e^{-Rt/L} i(0+) us(t) + (E/R)(1 - e^{-Rt/L}) us(t)
           + (E/R)[1 - e^{-R(t - t1)/L}] us(t - t1)   (4-85)

Using the state transition approach, we can divide the transition period into two parts: t = 0 to t = t1, and t = t1 to t = ∞. First, for the time interval 0 ≤ t ≤ t1, the input is

    e(t) = E us(t)   0 ≤ t ≤ t1   (4-86)

Then

    (sI - A)^{-1} B R(s) = E / (Rs[1 + (L/R)s])   (4-87)

Thus the state transition equation for the time interval 0 ≤ t ≤ t1 is

    i(t) = [e^{-Rt/L} i(0+) + (E/R)(1 - e^{-Rt/L})] us(t)   (4-88)

Substituting t = t1 into this equation, we get

    i(t1) = e^{-Rt1/L} i(0+) + (E/R)(1 - e^{-Rt1/L})   (4-89)

The value of i(t) at t = t1 is now used as the initial state for the next transition period of t1 ≤ t < ∞. The magnitude of the input for this interval is 2E. Therefore, the state transition equation for the second transition period is

    i(t) = e^{-R(t - t1)/L} i(t1) + (2E/R)[1 - e^{-R(t - t1)/L}]   t ≥ t1   (4-90)

where i(t1) is given by Eq. (4-89).
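The two approaches of this example should agree for all t; a numerical sketch comparing Eq. (4-85) with Eqs. (4-88)-(4-90), using illustrative values for R, L, E, and t1 (the text keeps them symbolic):

```python
import numpy as np

# Illustrative values; i(0+) = 0.
R, L, E, t1, i0 = 1.0, 0.5, 2.0, 1.0, 0.0

def i_one_shot(t):
    """Eq. (4-85): single-formula solution for the stepped input."""
    out = np.exp(-R*t/L)*i0 + (E/R)*(1.0 - np.exp(-R*t/L))
    if t >= t1:
        out += (E/R)*(1.0 - np.exp(-R*(t - t1)/L))
    return out

def i_two_step(t):
    """Eqs. (4-88)-(4-90): transition split at t = t1."""
    if t <= t1:
        return np.exp(-R*t/L)*i0 + (E/R)*(1.0 - np.exp(-R*t/L))
    it1 = np.exp(-R*t1/L)*i0 + (E/R)*(1.0 - np.exp(-R*t1/L))   # Eq. (4-89)
    return np.exp(-R*(t - t1)/L)*it1 + (2.0*E/R)*(1.0 - np.exp(-R*(t - t1)/L))

err = max(abs(i_one_shot(t) - i_two_step(t)) for t in np.linspace(0.0, 4.0, 81))
print(err)
```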
This example illustrates two possible ways of solving a state transition problem. In the first approach, the transition is treated as one continuous process, whereas in the second, the transition period is divided into parts over which the input can be more easily represented. Although the first approach requires only one operation, the second method yields relatively simple results for the state transition equation, and it often presents computational advantages. Notice that in the second method the state at t = t1 is used as the initial state for the next transition period, which begins at t1.

4.6 Relationship Between State Equations and High-Order Differential Equations
In preceding sections we defined the state equations and their solutions for linear time-invariant systems. In general, although it is always possible to write the state equations from the schematic diagram of a system, in practice the system may have been described by a high-order differential equation or a transfer function. Therefore, it is necessary to investigate how state equations can be written directly from the differential equation or the transfer function. The relationship between a high-order differential equation and the state equations is discussed in this section.

Let us consider that a single-variable, linear time-invariant system is described by the following nth-order differential equation:

    d^n c(t)/dt^n + a1 d^{n-1}c(t)/dt^{n-1} + a2 d^{n-2}c(t)/dt^{n-2} + ... + a_{n-1} dc(t)/dt + an c(t) = r(t)   (4-91)

where c(t) is the output variable and r(t) is the input. The problem is to represent Eq. (4-91) by n state equations and an output equation. This simply involves defining the n state variables in terms of the output c(t) and its derivatives. We have shown earlier that the state variables of a given system are not unique. Therefore, in general, we seek the most convenient way of assigning the state variables, as long as the definition of state variables stated in Section 4.1 is met.
For the present case it is convenient to define the state variables as

    x1(t) = c(t)
    x2(t) = dc(t)/dt
    ...
    xn(t) = d^{n-1}c(t)/dt^{n-1}   (4-92)

Then the state equations are

    dx1(t)/dt = x2(t)
    dx2(t)/dt = x3(t)
    ...
    dx_{n-1}(t)/dt = xn(t)   (4-93)
    dxn(t)/dt = -an x1(t) - a_{n-1} x2(t) - ... - a2 x_{n-1}(t) - a1 xn(t) + r(t)

where the last state equation is obtained by equating the highest-ordered derivative term to the rest of Eq. (4-91). The output equation is simply

    c(t) = x1(t)   (4-94)

In vector-matrix form, Eq. (4-93) is written

    dx(t)/dt = A x(t) + B r(t)   (4-95)

where x(t) is the n × 1 state vector and r(t) is the scalar input. The coefficient matrices are

    A = [  0     1     0    ...   0  ]
        [  0     0     1    ...   0  ]
        [  .     .     .          .  ]
        [  0     0     0    ...   1  ]
        [ -an  -a_{n-1}  -a_{n-2}  ...  -a1 ]   (n × n)   (4-96)

    B = [ 0  0  ...  0  1 ]'   (n × 1)   (4-97)

The output equation in vector-matrix form is

    c(t) = D x(t)   (4-98)

where

    D = [ 1  0  0  ...  0 ]   (1 × n)   (4-99)

The state equation of Eq. (4-95), with the matrices A and B defined as in Eqs. (4-96) and (4-97), is called the phase-variable canonical form in the next section.
Example 4-7

Consider the differential equation

    d³c(t)/dt³ + 5 d²c(t)/dt² + dc(t)/dt + 2c(t) = r(t)   (4-100)

Rearranging the last equation so that the highest-order derivative term is equated to the rest of the terms, we have

    d³c(t)/dt³ = -5 d²c(t)/dt² - dc(t)/dt - 2c(t) + r(t)   (4-101)

The state variables are defined as

    x1(t) = c(t)
    x2(t) = dc(t)/dt   (4-102)
    x3(t) = d²c(t)/dt²

Then the state equations are represented by the vector-matrix equation of Eq. (4-95) with

    A = [  0   1   0 ]
        [  0   0   1 ]
        [ -2  -1  -5 ]   (4-103)

and

    B = [ 0  0  1 ]'   (4-104)

The output equation is

    c(t) = x1(t)   (4-105)
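The construction of Eqs. (4-96)-(4-97) is mechanical and is easily coded; as a check, the eigenvalues of the resulting A must equal the roots of the characteristic polynomial (this anticipates Eq. (4-174) later in the chapter). A sketch using the coefficients of Example 4-7:

```python
import numpy as np

def phase_variable_form(a):
    """Build A and B of Eqs. (4-96)-(4-97) from the coefficients
    [a1, ..., an] of s^n + a1 s^(n-1) + ... + an."""
    n = len(a)
    A = np.zeros((n, n))
    A[:-1, 1:] = np.eye(n - 1)                     # superdiagonal of ones
    A[-1, :] = -np.asarray(a, dtype=float)[::-1]   # last row: -an ... -a1
    B = np.zeros((n, 1))
    B[-1, 0] = 1.0
    return A, B

# Example 4-7: a1 = 5, a2 = 1, a3 = 2.
A, B = phase_variable_form([5.0, 1.0, 2.0])

ev = np.sort_complex(np.linalg.eigvals(A))
rt = np.sort_complex(np.roots([1.0, 5.0, 1.0, 2.0]))
err = np.max(np.abs(ev - rt))
print(A[-1], err)
```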
4.7 Transformation to Phase-Variable Canonical Form

In general, when the coefficient matrices A and B are given by Eqs. (4-96) and (4-97), respectively, the state equation of Eq. (4-95) is said to be in the phase-variable canonical form. It is shown in the following that any linear time-invariant system with a single input and satisfying a certain condition of controllability (see Section 4.15) can always be represented in the phase-variable canonical form.

Theorem 4-1. Let the state equation of a linear time-invariant system be given by

    dx(t)/dt = A x(t) + B r(t)   (4-106)

where x(t) is an n × 1 state vector, A is an n × n coefficient matrix, B is an n × 1 matrix, and r(t) is a scalar input. If the matrix

    S = [B  AB  A²B  ...  A^{n-1}B]   (4-107)

is nonsingular, then there exists a nonsingular transformation

    y(t) = P x(t)   (4-108)

or

    x(t) = P^{-1} y(t)   (4-109)

which transforms Eq. (4-106) into the phase-variable canonical form

    dy(t)/dt = A1 y(t) + B1 r(t)   (4-110)

where

    A1 = [  0     1     0    ...   0  ]
         [  0     0     1    ...   0  ]
         [  .     .     .          .  ]
         [  0     0     0    ...   1  ]
         [ -an  -a_{n-1}   ...   -a1  ]   (4-111)

and

    B1 = [ 0  0  ...  0  1 ]'   (4-112)

The transforming matrix P is given by

    P = [ P1         ]
        [ P1 A       ]
        [ ...        ]
        [ P1 A^{n-1} ]   (4-113)

where

    P1 = [0  0  ...  0  1][B  AB  A²B  ...  A^{n-1}B]^{-1}   (4-114)
Proof: Let

    x(t) = [x1(t)  x2(t)  ...  xn(t)]'   (4-115)

    y(t) = [y1(t)  y2(t)  ...  yn(t)]'   (4-116)

and

    P = [ p11  p12  ...  p1n ]   [ P1 ]
        [ p21  p22  ...  p2n ] = [ P2 ]
        [ ...                ]   [ ...]
        [ pn1  pn2  ...  pnn ]   [ Pn ]   (4-117)

where

    Pi = [pi1  pi2  ...  pin]   i = 1, 2, ..., n   (4-118)

Then, from Eq. (4-108),

    y1(t) = p11 x1(t) + p12 x2(t) + ... + p1n xn(t) = P1 x(t)   (4-119)

Taking the time derivative on both sides of the last equation, and using Eqs. (4-106), (4-110), and (4-111), we have

    dy1(t)/dt = P1 A x(t) + P1 B r(t) = y2(t)   (4-120)

Since y2(t) is a function of x(t) only, it is necessary that P1 B = 0 in Eq. (4-120); thus

    y2(t) = P1 A x(t)

Taking the time derivative of the last equation once again leads to

    dy2(t)/dt = P1 A² x(t) + P1 A B r(t) = y3(t)   (4-121)

with P1 A B = 0, in view of Eq. (4-111); thus

    y3(t) = P1 A² x(t)   (4-122)

Repeating the procedure leads to

    dy_{n-1}(t)/dt = yn(t) = P1 A^{n-1} x(t)   (4-123)

with P1 A^{n-2} B = 0. Therefore, using Eq. (4-108), we have

    y(t) = P x(t) = [ P1  P1 A  ...  P1 A^{n-1} ]' x(t)   (4-124)

or

    P = [ P1         ]
        [ P1 A       ]
        [ ...        ]
        [ P1 A^{n-1} ]   (4-125)
and P1 should satisfy the condition

    P1 B = P1 A B = ... = P1 A^{n-2} B = 0   (4-126)

Now taking the derivative of Eq. (4-108) with respect to time,

    dy(t)/dt = P dx(t)/dt = P A x(t) + P B r(t)   (4-127)

Comparing Eq. (4-127) with Eq. (4-110), we obtain

    A1 = P A P^{-1}   (4-128)

and

    B1 = P B   (4-129)

Then, from Eq. (4-125),

    P B = [ P1 B         ]   [ 0 ]
          [ P1 A B       ]   [ 0 ]
          [ ...          ] = [...]
          [ P1 A^{n-1} B ]   [ 1 ]   (4-130)

Since P1 is a 1 × n row matrix, Eq. (4-130) can be written

    P1 [B  AB  A²B  ...  A^{n-1}B] = [0  0  ...  0  1]   (4-131)

Thus P1 is obtained as

    P1 = [0  0  ...  0  1][B  AB  A²B  ...  A^{n-1}B]^{-1} = [0  0  ...  0  1] S^{-1}   (4-132)

if the matrix S = [B  AB  A²B  ...  A^{n-1}B] is nonsingular. This is the condition of complete state controllability. Once P1 is determined from Eq. (4-132), the transformation matrix P is given by Eq. (4-125).
Example 4-8

Let a linear time-invariant system be described by Eq. (4-95) with

    A = [  1  -1 ]     B = [ 1 ]
        [ -1   0 ]         [ 1 ]   (4-133)

It is desired to transform the state equation into the phase-variable canonical form. Since the matrix

    S = [B  AB] = [ 1   0 ]
                  [ 1  -1 ]   (4-134)

is nonsingular, the system may be expressed in the phase-variable canonical form. Therefore, P1 is obtained as a row matrix which contains the elements of the last row of S^{-1}; that is,

    P1 = [1  -1]   (4-135)

Using Eq. (4-125),

    P = [ P1   ] = [ 1  -1 ]
        [ P1 A ]   [ 2  -1 ]   (4-136)

Thus

    A1 = P A P^{-1} = [ 0  1 ]
                      [ 1  1 ]   (4-137)

    B1 = P B = [ 0 ]
               [ 1 ]   (4-138)
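Theorem 4-1 translates directly into a short routine: form S of Eq. (4-107), take P1 from Eq. (4-132), stack P per Eq. (4-125), and compute A1 = PAP^{-1} and B1 = PB. The matrices below are the ones given for Example 4-8 as reconstructed from a damaged scan, so treat them as illustrative:

```python
import numpy as np

def to_phase_variable(A, B):
    """Theorem 4-1: form S (Eq. 4-107), P1 (Eq. 4-132), P (Eq. 4-125),
    then A1 = P A P^-1 (Eq. 4-128) and B1 = P B (Eq. 4-129)."""
    n = A.shape[0]
    S = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    P1 = np.linalg.solve(S.T, np.eye(n)[:, -1])   # row [0 ... 0 1] S^-1
    P = np.vstack([P1 @ np.linalg.matrix_power(A, k) for k in range(n)])
    return P, P @ A @ np.linalg.inv(P), P @ B

# Example 4-8 matrices (reconstructed; illustrative).
A = np.array([[1.0, -1.0], [-1.0, 0.0]])
B = np.array([[1.0], [1.0]])
P, A1, B1 = to_phase_variable(A, B)
print(P)
print(np.round(A1, 12))
print(B1.ravel())
```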
The method of defining the state variables by inspection, as described earlier with reference to Eq. (4-91), is inadequate when the right-hand side of the differential equation also includes the derivatives of r(t). To illustrate the point we consider the following example.

Example 4-9

Given the differential equation

    d³c(t)/dt³ + 5 d²c(t)/dt² + dc(t)/dt + 2c(t) = dr(t)/dt + 2r(t)   (4-139)

it is desired to represent the equation by three state equations. Since the right side of the state equations cannot include any derivatives of the input r(t), it is necessary to include r(t) when defining the state variables. Let us rewrite Eq. (4-139) as

    d³c(t)/dt³ = -5 d²c(t)/dt² - dc(t)/dt - 2c(t) + dr(t)/dt + 2r(t)   (4-140)

The state variables are now defined as

    x1(t) = c(t)   (4-141)
    x2(t) = dc(t)/dt   (4-142)
    x3(t) = d²c(t)/dt² - r(t)   (4-143)

Using these last three equations and Eq. (4-140), the state equations are written

    dx1(t)/dt = x2(t)
    dx2(t)/dt = x3(t) + r(t)   (4-144)
    dx3(t)/dt = -2x1(t) - x2(t) - 5x3(t) - 3r(t)

In general, it can be shown that for the nth-order differential equation

    d^n c(t)/dt^n + a1 d^{n-1}c(t)/dt^{n-1} + ... + a_{n-1} dc(t)/dt + an c(t)
        = b0 d^n r(t)/dt^n + b1 d^{n-1}r(t)/dt^{n-1} + ... + b_{n-1} dr(t)/dt + bn r(t)   (4-145)

the state variables should be defined as

    x1(t) = c(t) - h0 r(t)
    x2(t) = dx1(t)/dt - h1 r(t)
    x3(t) = dx2(t)/dt - h2 r(t)   (4-146)
    ...
    xn(t) = dx_{n-1}(t)/dt - h_{n-1} r(t)

where

    h0 = b0
    h1 = b1 - a1 h0
    h2 = b2 - a1 h1 - a2 h0
    h3 = b3 - a1 h2 - a2 h1 - a3 h0   (4-147)
    ...
    hn = bn - a1 h_{n-1} - a2 h_{n-2} - ... - a_{n-1} h1 - an h0
Using Eqs. (4-146) and (4-147), we resolve the nth-order differential equation in Eq. (4-145) into the following n state equations:

    dx1(t)/dt = x2(t) + h1 r(t)
    dx2(t)/dt = x3(t) + h2 r(t)
    ...
    dx_{n-1}(t)/dt = xn(t) + h_{n-1} r(t)   (4-148)
    dxn(t)/dt = -an x1(t) - a_{n-1} x2(t) - ... - a2 x_{n-1}(t) - a1 xn(t) + hn r(t)

The output equation is obtained by rearranging the first equation of Eq. (4-146):

    c(t) = x1(t) + h0 r(t) = x1(t) + b0 r(t)   (4-149)
Now if we apply these equations to the case of Example 4-9, we have

    a1 = 5   a2 = 1   a3 = 2
    b0 = 0   b1 = 0   b2 = 1   b3 = 2

Thus

    h0 = b0 = 0
    h1 = b1 - a1 h0 = 0
    h2 = b2 - a1 h1 - a2 h0 = 1
    h3 = b3 - a1 h2 - a2 h1 - a3 h0 = -3

When we substitute these parameters into Eqs. (4-146) and (4-147), we have the same results for the state variables and the state equations as obtained in Example 4-9.

The disadvantage of the method of Eqs. (4-146), (4-147), and (4-148) is that these equations are difficult and impractical to memorize. It is not expected that one will always have these equations available for reference. However, we shall later describe a more convenient method using the transfer function.
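Although Eq. (4-147) is awkward to memorize, it is a one-line recursion in code; a sketch that reproduces the h values of Example 4-9:

```python
def h_coefficients(a, b):
    """Eq. (4-147): h0 = b0 and hk = bk - a1*h(k-1) - ... - ak*h0.
    a = [a1, ..., an], b = [b0, b1, ..., bn]."""
    h = [b[0]]
    for k in range(1, len(b)):
        h.append(b[k] - sum(a[j - 1] * h[k - j] for j in range(1, k + 1)))
    return h

# Example 4-9: a1 = 5, a2 = 1, a3 = 2 and b0 = 0, b1 = 0, b2 = 1, b3 = 2.
h = h_coefficients([5, 1, 2], [0, 0, 1, 2])
print(h)   # [0, 0, 1, -3]
```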
4.8 Relationship Between State Equations and Transfer Functions

We have presented the methods of describing a linear time-invariant system by transfer functions and by dynamic equations. It is interesting to investigate the relationship between these two representations.

In Eq. (3-3), the transfer function of a linear single-variable system is defined in terms of the coefficients of the system's differential equation. Similarly, Eq. (3-16) gives the matrix transfer function relation for a multivariable system that has p inputs and q outputs. Now we shall investigate the transfer function matrix relation using the dynamic equation notation.

Consider that a linear time-invariant system is described by the dynamic equations

    dx(t)/dt = A x(t) + B r(t)   (4-150)
    c(t) = D x(t) + E r(t)   (4-151)

where

    x(t) = n × 1 state vector
    r(t) = p × 1 input vector
    c(t) = q × 1 output vector

and A, B, D, and E are matrices of appropriate dimensions. Taking the Laplace transform on both sides of Eq. (4-150) and solving for X(s), we have

    X(s) = (sI - A)^{-1} x(0+) + (sI - A)^{-1} B R(s)   (4-152)

The Laplace transform of Eq. (4-151) is

    C(s) = D X(s) + E R(s)   (4-153)

Substituting Eq. (4-152) into Eq. (4-153) gives

    C(s) = D(sI - A)^{-1} x(0+) + D(sI - A)^{-1} B R(s) + E R(s)   (4-154)

Since the definition of the transfer function requires that the initial conditions be set to zero, x(0+) = 0; thus Eq. (4-154) becomes

    C(s) = [D(sI - A)^{-1} B + E] R(s)   (4-155)

Therefore, the transfer function matrix is defined as

    G(s) = D(sI - A)^{-1} B + E   (4-156)

which is a q × p matrix. Of course, the existence of G(s) requires that the matrix (sI - A) be nonsingular.
Example 4-10

Consider that a multivariable system is described by the differential equations

    d²c1(t)/dt² + 4 dc1(t)/dt - 3c2(t) = r1(t)   (4-157)
    dc2(t)/dt + dc1(t)/dt + c1(t) + 2c2(t) = r2(t)   (4-158)

The state variables of the system are assigned as follows:

    x1 = c1   (4-159)
    x2 = dc1/dt   (4-160)
    x3 = c2   (4-161)

These state variables have been defined by mere inspection of the two differential equations; no particular reasons for the definitions are given other than that these are the most convenient.

Now equating the first term of each of the equations of Eqs. (4-157) and (4-158) to the rest of the terms, and using the state-variable relations of Eqs. (4-159) through (4-161), we arrive at the following state equation and output equation in matrix form:

    [ dx1/dt ]   [  0   1   0 ] [ x1 ]   [ 0  0 ]
    [ dx2/dt ] = [  0  -4   3 ] [ x2 ] + [ 1  0 ] [ r1 ]   (4-162)
    [ dx3/dt ]   [ -1  -1  -2 ] [ x3 ]   [ 0  1 ] [ r2 ]

    [ c1 ]   [ 1  0  0 ] [ x1 ]
    [ c2 ] = [ 0  0  1 ] [ x2 ]   (4-163)
                         [ x3 ]
To determine the transfer function matrix of the system using the state-variable formulation, we substitute the A, B, D, and E matrices into Eq. (4-156). First, we form the matrix (sI - A):

    (sI - A) = [ s   -1    0  ]
               [ 0   s+4  -3  ]
               [ 1    1   s+2 ]   (4-164)

The determinant of (sI - A) is

    |sI - A| = s³ + 6s² + 11s + 3   (4-165)

Thus

    (sI - A)^{-1} = (1/|sI - A|) [ s² + 6s + 11    s+2       3     ]
                                 [ -3              s(s+2)    3s    ]
                                 [ -(s+4)         -(s+1)    s(s+4) ]   (4-166)

The transfer function matrix is, in this case,

    G(s) = D(sI - A)^{-1}B = (1/(s³ + 6s² + 11s + 3)) [  s+2     3      ]
                                                      [ -(s+1)   s(s+4) ]   (4-167)

Using the conventional approach, we take the Laplace transform on both sides of Eqs. (4-157) and (4-158) and assume zero initial conditions. The resulting transformed equations are written in matrix form as

    [ s(s+4)  -3  ] [ C1(s) ]   [ R1(s) ]
    [ s+1     s+2 ] [ C2(s) ] = [ R2(s) ]   (4-168)

Solving for C(s) from Eq. (4-168), we obtain

    C(s) = G(s)R(s)   (4-169)

where

    G(s) = [ s(s+4)  -3  ]^{-1}
           [ s+1     s+2 ]   (4-170)

and the same result as in Eq. (4-167) is obtained when the matrix inverse is carried out.
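Equation (4-167) can be spot-checked numerically by evaluating G(s) = D(sI - A)^{-1}B at a test value of s and comparing with the closed-form entries:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, -4.0, 3.0],
              [-1.0, -1.0, -2.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
D = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])

s = 1.0 + 2.0j   # arbitrary test point away from the poles
G_state = D @ np.linalg.inv(s * np.eye(3) - A) @ B

# Closed-form entries of Eq. (4-167).
den = s**3 + 6*s**2 + 11*s + 3
G_formula = np.array([[s + 2.0, 3.0],
                      [-(s + 1.0), s * (s + 4.0)]]) / den

err = np.max(np.abs(G_state - G_formula))
print(err)
```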
Characteristic Equation, Eigenvalues, and Eigenvectors
The characteristic equation plays an important part in the study of linear systems. It can be defined from the basis of the differential equation, the transfer function, or the state equations.

Consider that a linear time-invariant system is described by the differential equation

    d^n c/dt^n + a1 d^(n-1)c/dt^(n-1) + a2 d^(n-2)c/dt^(n-2) + ... + a_(n-1) dc/dt + a_n c
        = b0 d^m r/dt^m + b1 d^(m-1)r/dt^(m-1) + ... + b_(m-1) dr/dt + b_m r      (4-171)
By defining the operator p as

    p^k = d^k/dt^k      k = 1, 2, ..., n      (4-172)

Eq. (4-171) is written

    (p^n + a1 p^(n-1) + a2 p^(n-2) + ... + a_(n-1) p + a_n) c = (b0 p^m + b1 p^(m-1) + ... + b_(m-1) p + b_m) r      (4-173)

Then the characteristic equation of the system is defined as

    s^n + a1 s^(n-1) + a2 s^(n-2) + ... + a_(n-1) s + a_n = 0      (4-174)

which is obtained by setting the homogeneous part of Eq. (4-173) to zero and replacing the operator p by the Laplace transform variable s.

The transfer function of the system is

    G(s) = C(s)/R(s) = (b0 s^m + b1 s^(m-1) + ... + b_(m-1) s + b_m)/(s^n + a1 s^(n-1) + ... + a_(n-1) s + a_n)      (4-175)

Therefore, the characteristic equation is also obtained by equating the denominator of the transfer function to zero.
From the state-variable approach, we can write Eq. (4-156) as

    G(s) = D[adj(sI - A)]B/|sI - A| + E = (D[adj(sI - A)]B + |sI - A| E)/|sI - A|      (4-176)
Setting the denominator of the transfer function matrix G(s) to zero, we get the characteristic equation expressed as

    |sI - A| = 0      (4-177)

which is an alternative form of Eq. (4-174).
Eigenvalues

The roots of the characteristic equation are often referred to as the eigenvalues of the matrix A. It is interesting to note that if the state equations are represented in the phase-variable canonical form, the coefficients of the characteristic equation are readily given by the elements in the last row of the A matrix.
That is, if A is given by Eq. (4-96), the characteristic equation is readily given by Eq. (4-174). Another important property of the characteristic equation and the eigenvalues is that they are invariant under a nonsingular transformation. In other words, when the A matrix is transformed by a nonsingular transformation x = Py, so that

    Â = P^-1 A P      (4-178)

then the characteristic equation and the eigenvalues of Â are identical to those of A. This is proved by writing

    sI - Â = sI - P^-1 A P      (4-179)

or

    sI - Â = s P^-1 P - P^-1 A P      (4-180)

The characteristic equation of Â is

    |sI - Â| = |s P^-1 P - P^-1 A P| = |P^-1 (sI - A) P|      (4-181)

Since the determinant of a product is equal to the product of the determinants, Eq. (4-181) becomes

    |sI - Â| = |P^-1| |sI - A| |P| = |sI - A|      (4-182)
Eigenvectors

The n x 1 vector p_i which satisfies the matrix equation

    (λ_i I - A) p_i = 0      (4-183)

where λ_i is the ith eigenvalue of A, is called the eigenvector of A associated with the eigenvalue λ_i. Illustrative examples of how the eigenvectors of a matrix are determined are given in the following section.
4.10 Diagonalization of the A Matrix (Similarity Transformation)
One of the motivations for diagonalizing the A matrix is that if A is a diagonal matrix, with the eigenvalues of A, λ1, λ2, ..., λn, all assumed to be distinct, located on the main diagonal, then the state transition matrix e^(At) will also be diagonal, with its nonzero elements given by e^(λ1 t), e^(λ2 t), ..., e^(λn t). There are other reasons for wanting to diagonalize the A matrix, such as the study of the controllability of a system (Section 4.15). We have to assume that all the eigenvalues of A are distinct since, unless it is real and symmetric, A cannot always be diagonalized if it has multiple-order eigenvalues.
The problem can be stated as: given the linear system

    dx(t)/dt = A x(t) + B u(t)      (4-184)

where x(t) is an n-vector, u(t) an r-vector, and A has distinct eigenvalues λ1, λ2, ..., λn, it is desired to find a nonsingular matrix P such that the transformation

    x(t) = P y(t)      (4-185)

transforms Eq. (4-184) into

    dy(t)/dt = Λ y(t) + Γ u(t)      (4-186)

with Λ given by the diagonal matrix

    Λ = [λ1 0 ... 0; 0 λ2 ... 0; ...; 0 0 ... λn]      (n x n)      (4-187)

This transformation is known as the similarity transformation. The state equation of Eq. (4-186) is also known as the canonical form.
Substituting Eq. (4-185) into Eq. (4-184), it is easy to see that

    Λ = P^-1 A P      (4-188)

and

    Γ = P^-1 B      (n x r)      (4-189)
In general, there are several methods of determining the matrix P. We show in the following that P can be formed by use of the eigenvectors of A; that is,

    P = [p1  p2  p3 ... pn]      (4-190)

where p_i (i = 1, 2, ..., n) denotes the eigenvector associated with the eigenvalue λ_i. This is proved by use of Eq. (4-183), which is written

    A p_i = λ_i p_i      i = 1, 2, ..., n      (4-191)
Now, forming the n x n matrix,

    [Ap1  Ap2 ... Apn] = [λ1 p1  λ2 p2 ... λn pn]      (4-192)

or

    A [p1  p2 ... pn] = [p1  p2 ... pn] Λ      (4-193)

Therefore, if we let

    P = [p1  p2 ... pn]      (4-194)

Eq. (4-193) gives

    P Λ = A P      (4-195)

or

    Λ = P^-1 A P      (4-196)

which is the desired transformation.
If the matrix A is of the phase-variable canonical form, it can be shown that the P matrix which diagonalizes A may be the Vandermonde matrix,

    P = [1 1 ... 1; λ1 λ2 ... λn; λ1^2 λ2^2 ... λn^2; ...; λ1^(n-1) λ2^(n-1) ... λn^(n-1)]      (4-197)

where λ1, λ2, ..., λn are the eigenvalues of A. Since it has been proved that P contains as its columns the eigenvectors of A, we shall show that the ith column of the matrix in Eq. (4-197) is the eigenvector of A associated with λ_i, i = 1, 2, ..., n.
Let

    p_i = [p_1i; p_2i; ...; p_ni]      (4-198)

be the ith eigenvector of A. Then

    (λ_i I - A) p_i = 0      (4-199)

or, with A in the phase-variable canonical form,

    [λ_i -1 0 ... 0; 0 λ_i -1 ... 0; ...; 0 0 0 ... -1; a_n a_(n-1) a_(n-2) ... λ_i + a1] [p_1i; p_2i; ...; p_ni] = 0      (4-200)

This equation implies that

    λ_i p_1i - p_2i = 0
    λ_i p_2i - p_3i = 0
    ...      (4-201)
    λ_i p_(n-1)i - p_ni = 0
    a_n p_1i + a_(n-1) p_2i + ... + (λ_i + a1) p_ni = 0

Now we arbitrarily let p_1i = 1. Then Eq. (4-201) gives

    p_2i = λ_i
    p_3i = λ_i^2
    ...      (4-202)
    p_ni = λ_i^(n-1)

which represent the elements of the ith column of the matrix in Eq. (4-197). Substitution of these elements of p_i into the last equation of Eq. (4-201) simply verifies that the characteristic equation is satisfied.

Example 4-11
Given the matrix

    A = [0 1 0; 0 0 1; -6 -11 -6]      (4-203)

which is of the phase-variable canonical form, the eigenvalues of A are λ1 = -1, λ2 = -2, λ3 = -3. The similarity transformation may be carried out by use of the Vandermonde matrix of Eq. (4-197). Therefore,

    P = [1 1 1; λ1 λ2 λ3; λ1^2 λ2^2 λ3^2] = [1 1 1; -1 -2 -3; 1 4 9]      (4-204)

The canonical-form state equation is given by Eq. (4-186) with

    Λ = P^-1 A P = [λ1 0 0; 0 λ2 0; 0 0 λ3] = [-1 0 0; 0 -2 0; 0 0 -3]      (4-205)
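A quick numerical check of this example (our own sketch, not from the text): since Λ = P^-1 A P is equivalent to A P = P Λ when P is nonsingular, it suffices to verify that each column of the Vandermonde matrix is an eigenvector of A.

```python
# Verify Example 4-11: each column of the Vandermonde matrix P is an
# eigenvector of the phase-variable matrix A, so that A P = P Λ.
A = [[0, 1, 0], [0, 0, 1], [-6, -11, -6]]     # Eq. (4-203)
P = [[1, 1, 1], [-1, -2, -3], [1, 4, 9]]      # Eq. (4-204)
eigs = [-1, -2, -3]

for i, lam in enumerate(eigs):
    p = [P[r][i] for r in range(3)]           # i-th column of P
    Ap = [sum(A[r][k] * p[k] for k in range(3)) for r in range(3)]
    assert Ap == [lam * x for x in p]         # A p_i = lambda_i p_i
```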
Example 4-12

Given the matrix

    A = [0 1 -1; -6 -11 6; -6 -11 5]      (4-206)

it can be shown that the eigenvalues of A are λ1 = -1, λ2 = -2, and λ3 = -3. It is desired to find a nonsingular matrix P that will transform A into a diagonal matrix Λ, such that Λ = P^-1 A P.
We shall follow the guideline that P contains the eigenvectors of A. Since A is not of the phase-variable canonical form, we cannot use the Vandermonde matrix. Let the eigenvector associated with λ1 = -1 be represented by

    p1 = [p11; p21; p31]      (4-207)

Then p1 must satisfy

    (λ1 I - A) p1 = 0      (4-208)

or

    [-1 -1 1; 6 10 -6; 6 11 -6] [p11; p21; p31] = 0      (4-209)

The last matrix equation leads to

    -p11 - p21 + p31 = 0
    6p11 + 10p21 - 6p31 = 0      (4-210)
    6p11 + 11p21 - 6p31 = 0

from which we get p21 = 0 and p11 = p31. Therefore, we can let p11 = p31 = 1 and get

    p1 = [1; 0; 1]      (4-211)
For the eigenvector associated with λ2 = -2, the following matrix equation must be satisfied:

    [-2 -1 1; 6 9 -6; 6 11 -7] [p12; p22; p32] = 0      (4-212)

or

    -2p12 - p22 + p32 = 0
    6p12 + 9p22 - 6p32 = 0      (4-213)
    6p12 + 11p22 - 7p32 = 0

In these three equations we let p12 = 1; then p22 = 2 and p32 = 4. Thus

    p2 = [1; 2; 4]      (4-214)
Finally, for the eigenvector p3, we have

    [-3 -1 1; 6 8 -6; 6 11 -8] [p13; p23; p33] = 0      (4-215)

or

    -3p13 - p23 + p33 = 0
    6p13 + 8p23 - 6p33 = 0      (4-216)
    6p13 + 11p23 - 8p33 = 0

Now if we arbitrarily let p13 = 1, the last three equations give p23 = 6 and p33 = 9. Therefore,

    p3 = [1; 6; 9]      (4-217)

The matrix P is now given by

    P = [p1  p2  p3] = [1 1 1; 0 2 6; 1 4 9]      (4-218)

It is easy to show that

    Λ = P^-1 A P = [λ1 0 0; 0 λ2 0; 0 0 λ3] = [-1 0 0; 0 -2 0; 0 0 -3]      (4-219)
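The eigenvectors found above can be checked directly (our own sketch, not from the text): each column p_i of P must satisfy A p_i = λ_i p_i, which is equivalent to Eq. (4-219).

```python
# Verify Example 4-12: A p_i = lambda_i p_i for the three eigenvectors,
# i.e. A P = P Λ with Λ = diag(-1, -2, -3).
A = [[0, 1, -1], [-6, -11, 6], [-6, -11, 5]]   # Eq. (4-206)
P = [[1, 1, 1], [0, 2, 6], [1, 4, 9]]          # Eq. (4-218)
eigs = [-1, -2, -3]

for i, lam in enumerate(eigs):
    p = [P[r][i] for r in range(3)]            # i-th column of P
    Ap = [sum(A[r][k] * p[k] for k in range(3)) for r in range(3)]
    assert Ap == [lam * x for x in p]
```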
4.11 Jordan Canonical Form
In general, when the A matrix has multiple-order eigenvalues, unless the matrix is symmetric and has real elements, it cannot be diagonalized. However, there exists a similarity transformation

    Λ = P^-1 A P      (n x n)      (4-220)

such that the matrix Λ is almost a diagonal matrix. The matrix Λ is then called the Jordan canonical form. Typical Jordan canonical forms are shown in the following examples:

    Λ = [λ1 1 0 0 0; 0 λ1 1 0 0; 0 0 λ1 0 0; 0 0 0 λ2 0; 0 0 0 0 λ3]      (4-221)

    Λ = [λ1 1 0 0 0; 0 λ1 0 0 0; 0 0 λ2 0 0; 0 0 0 λ3 0; 0 0 0 0 λ4]      (4-222)
The Jordan canonical form generally has the following properties:

1. The elements on the main diagonal of Λ are the eigenvalues of the matrix.
2. All the elements below the main diagonal of Λ are zero.
3. Some of the elements immediately above the multiple-order eigenvalues on the main diagonal are 1s, such as in the cases illustrated by Eqs. (4-221) and (4-222).
4. The 1s, together with the eigenvalues, form typical blocks which are called Jordan blocks. In Eqs. (4-221) and (4-222) the Jordan blocks are enclosed by dotted lines.
5. When the nonsymmetrical A matrix has multiple-order eigenvalues, its eigenvectors are not linearly independent. For an n x n A, there are only r (r < n) linearly independent eigenvectors.
6. The number of Jordan blocks is equal to the number of independent eigenvectors, r. There is one and only one linearly independent eigenvector associated with each Jordan block.
7. The number of 1s above the main diagonal is equal to n - r.
The matrix P is determined with the following considerations. Let us assume that A has q distinct eigenvalues among the n eigenvalues. In the first place, the eigenvectors that correspond to the first-order eigenvalues are determined in the usual manner from

    (λ_i I - A) p_i = 0      (4-223)

where λ_i denotes the ith distinct eigenvalue, i = 1, 2, ..., q.

The eigenvectors associated with an mth-order Jordan block are determined by referring to the Jordan block being written as

    [λ_j 1 0 ... 0; 0 λ_j 1 ... 0; ...; 0 0 0 ... 1; 0 0 0 ... λ_j]      (m x m)      (4-224)

where λ_j denotes the jth eigenvalue.
Then the following transformation must hold:

    A [p1  p2 ... pm] = [p1  p2 ... pm] [λ_j 1 ... 0; 0 λ_j ... 0; ...; 0 0 ... λ_j]
                      = [λ_j p1   p1 + λ_j p2   p2 + λ_j p3  ...  p_(m-1) + λ_j pm]      (4-225)

or

    λ_j p1 = A p1
    p1 + λ_j p2 = A p2
    p2 + λ_j p3 = A p3      (4-226)
    ...
    p_(m-1) + λ_j pm = A pm

The vectors p1, p2, ..., pm are determined from these equations, which can also be written

    (λ_j I - A) p1 = 0
    (λ_j I - A) p2 = -p1
    (λ_j I - A) p3 = -p2      (4-227)
    ...
    (λ_j I - A) pm = -p_(m-1)

Example 4-13
Given the matrix

    A = [0 6 -5; 1 0 2; 3 2 4]      (4-228)

the determinant of λI - A is

    |λI - A| = |λ -6 5; -1 λ -2; -3 -2 λ-4| = λ^3 - 4λ^2 + 5λ - 2 = (λ - 2)(λ - 1)^2      (4-229)
A has a simple eigenvalue at A! =2 and a double eigenvalue at A 2 = 1. Jordan canonical form of A involves the determination of the matrix _1 that A = P AP. The eigenvector that is associated with X = 2 is deter-
Therefore,
To
P
such
find the
t
mined from (A,I
-
=
A)p,
(4-230)
Thus
-6
2
-1 -3 Setting
pn
=
2
5"
2
-2
arbitrarily, the last
/>n
-2
=
^21
-2_,
(4-231)
_/>31_
equation gives
p zl
-
1
and pi
i
There-
-2.
fore,
2" (4-232)
Pi
For the eigenvector associated with the second-order eigenvalue, we turn to Eq. (4-227). We have (the two remaining eigenvectors are p2 and p3)

    (λ2 I - A) p2 = 0      (4-233)

and

    (λ2 I - A) p3 = -p2      (4-234)

Equation (4-233) leads to

    [1 -6 5; -1 1 -2; -3 -2 -3] [p12; p22; p32] = 0      (4-235)

Setting p12 = 1 arbitrarily, we have p22 = -3/7 and p32 = -5/7. Thus

    p2 = [1; -3/7; -5/7]      (4-236)
Equation (4-234), when expanded, gives

    [1 -6 5; -1 1 -2; -3 -2 -3] [p13; p23; p33] = [-1; 3/7; 5/7]      (4-237)

from which, setting p13 = 1, we have

    p3 = [1; -22/49; -46/49]      (4-238)

Thus

    P = [p1  p2  p3] = [2 1 1; -1 -3/7 -22/49; -2 -5/7 -46/49]      (4-239)
The Jordan canonical form is now obtained as

    Λ = P^-1 A P = [2 0 0; 0 1 1; 0 0 1]      (4-240)

Note that in this case there are two Jordan blocks and there is one element of unity above the main diagonal of Λ.
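The generalized-eigenvector chain of this example can be verified with exact arithmetic (our own sketch, not from the text): A p1 = 2 p1, A p2 = p2, and, from Eq. (4-234), A p3 = p3 + p2, which together are equivalent to A P = P Λ with the Jordan form of Eq. (4-240).

```python
from fractions import Fraction as F

# Verify the Jordan chain of Example 4-13.
A  = [[0, 6, -5], [1, 0, 2], [3, 2, 4]]                 # Eq. (4-228)
p1 = [F(2), F(-1), F(-2)]                               # Eq. (4-232)
p2 = [F(1), F(-3, 7), F(-5, 7)]                         # Eq. (4-236)
p3 = [F(1), F(-22, 49), F(-46, 49)]                     # Eq. (4-238)

def apply(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

assert apply(A, p1) == [2 * x for x in p1]              # simple eigenvalue 2
assert apply(A, p2) == p2                               # eigenvalue 1
assert apply(A, p3) == [x + y for x, y in zip(p3, p2)]  # generalized eigenvector
```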
4.12 State Diagram

The signal flow graph discussed in Section 3.5 applies only to algebraic equations. In this section we introduce the method of the state diagram, which represents an extension of the signal flow graph to portray state equations and differential equations. The important significance of the state diagram is that it forms a close relationship among the state equations, state transition equation, computer simulation, and transfer functions. A state diagram is constructed following all the rules of the signal flow graph. Therefore, the state diagram may be used for solving linear systems either analytically or by computers.

Basic Analog Computer Elements

Before taking up the subject of state diagrams, it is useful to discuss the basic elements of an analog computer. The fundamental linear operations that can be performed on an analog computer are multiplication by a constant, addition, and integration. These are discussed separately in the following.

Multiplication by a constant. Multiplication of a machine variable by a constant is done by potentiometers and amplifiers. Let us consider the operation

    x2(t) = a x1(t)      (4-241)

where a is a constant. If a lies between zero and unity, a potentiometer is used to realize the operation of Eq. (4-241). An operational amplifier is used to simulate Eq. (4-241) if a is a negative integer less than -1. The negative value is due to the fact that there is always a 180° phase shift between the output and the input of an operational amplifier. The computer block diagram symbols of the potentiometer and the operational amplifier are shown in Figs. 4-5 and 4-6, respectively.
Equation (4-325) represents n equations with nr unknowns, and the controllability problem may be interpreted as: given any initial state x(t0), find the control vector u(t) so that the final state is x(tf) = 0 for finite tf - t0. This implies that, given x(t0) and the matrix S, we solve for U from Eq. (4-326). Therefore, the system is completely state controllable if and only if there exists a set of n linearly independent column vectors in S. For a system with a scalar input, r = 1, the matrix S is square; then the condition of state controllability is that S must be nonsingular.

Although the criterion of state controllability given by Theorem 4-2 is quite straightforward, it is not very easy to implement for multiple-input systems. Even with r = 2, there are 2n columns in S, and there would be a large number of possible combinations of n x n matrices. A practical way may be to use one column of B at a time, each time giving an n x n matrix for S. However, failure to find an S with rank n this way does not mean that the system is uncontrollable, until all the columns of B are used. An easier way would be to form the matrix SS', which is n x n; then if SS' is nonsingular, S has rank n.
Example 4-21

Consider the system shown in Fig. 4-25, which was reasoned earlier to be uncontrollable. Let us investigate the same problem using the condition of Eq. (4-316). The state equations of the system are written, from Fig. 4-25,

    [dx1(t)/dt; dx2(t)/dt] = [-2 1; 0 -1] [x1(t); x2(t)] + [1; 0] u(t)      (4-328)

Therefore, from Eq. (4-316),

    S = [B  AB] = [1 -2; 0 0]      (4-329)

which is singular, and the system is not state controllable.

Example 4-22

Determine the state controllability of the system described by the state equation
    [dx1(t)/dt; dx2(t)/dt] = [0 1; 0 -1] [x1(t); x2(t)] + [0; 1] u(t)      (4-330)

From Eq. (4-316),

    S = [B  AB] = [0 1; 1 -1]      (4-331)

which is nonsingular. Therefore, the system is completely state controllable.
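The rank test of Eq. (4-316) for a second-order single-input system reduces to a 2 x 2 determinant. The following sketch (ours, not from the text) applies it to the system of Example 4-21:

```python
# Controllability test S = [B AB] for Example 4-21.
A = [[-2, 1], [0, -1]]
B = [1, 0]                                   # column vector of Eq. (4-328)

AB = [sum(A[i][k] * B[k] for k in range(2)) for i in range(2)]
S = [[B[0], AB[0]], [B[1], AB[1]]]           # Eq. (4-329): [[1, -2], [0, 0]]
det_S = S[0][0] * S[1][1] - S[0][1] * S[1][0]

assert S == [[1, -2], [0, 0]]
assert det_S == 0                            # singular: not state controllable
```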
Alternative Definition of Controllability

Consider that a linear time-invariant system is described by the state equation

    dx(t)/dt = A x(t) + B u(t)      (4-332)

If the eigenvalues of A are distinct and are denoted by λ_i, i = 1, 2, ..., n, then there exists an nth-order nonsingular matrix P which transforms A into a diagonal matrix Λ, such that

    Λ = P^-1 A P = [λ1 0 ... 0; 0 λ2 ... 0; ...; 0 0 ... λn]      (4-333)

Let the new state variable be

    y = P^-1 x      (4-334)

Then the state equation transformed through P is

    dy/dt = Λ y + Γ u      (4-335)

where

    Γ = P^-1 B      (4-336)

The motivation for the use of the similarity transformation is that the states of the system of Eq. (4-335) are decoupled from each other, and the only way the states are controllable is through the inputs directly. Thus, for state controllability, each state should be controlled by at least one input. Therefore, an alternative definition of state controllability for a system with distinct eigenvalues is: The system is completely state controllable if Γ has no rows that are all zeros.

It should be noted that the prerequisite of distinct eigenvalues precedes the condition of diagonalization of A. In other words, all square matrices with distinct eigenvalues can be diagonalized. However, certain matrices with multiple-order eigenvalues can also be diagonalized.
The natural question is: does the alternative definition apply to a system with multiple-order eigenvalues but whose A matrix can be diagonalized? The answer is no. We must not lose sight of the original definition of state controllability, that any state x(t0) can be brought to any state x(tf) in finite time. Thus the question of independent control must enter the picture. In other words, consider that we have two states which are uncoupled and are related by the following state equations:

    dx1(t)/dt = a x1(t) + b1 u(t)      (4-337)
    dx2(t)/dt = a x2(t) + b2 u(t)      (4-338)
This system is apparently uncontrollable, since

    S = [B  AB] = [b1  a b1; b2  a b2]      (4-339)

is singular. Therefore, just because A is diagonal and B has no rows which are all zeros does not mean that the system is controllable. The reason in this case is that A has multiple-order eigenvalues.
When A has multiple-order eigenvalues and cannot be diagonalized, there is a nonsingular matrix P which transforms A into a Jordan canonical form Λ = P^-1 A P. The condition of state controllability is that all the elements of Γ = P^-1 B that correspond to the last row of each Jordan block are nonzero. The reason behind this is that the last row of each Jordan block corresponds to a state equation that is completely uncoupled from the other state equations. The elements in the other rows of Γ need not all be nonzero, since the corresponding states are coupled. For instance, if the matrix A has four eigenvalues, λ1, λ1, λ1, λ2, three of which are equal, then there is a nonsingular P which transforms A into the Jordan canonical form

    Λ = P^-1 A P = [λ1 1 0 0; 0 λ1 1 0; 0 0 λ1 0; 0 0 0 λ2]      (4-340)

Then the condition given above becomes self-explanatory.
Example 4-23

Consider the system of Example 4-21. The A and B matrices are, respectively,

    A = [-2 1; 0 -1]      B = [1; 0]

Let us check the controllability of the system by checking the rows of the matrix Γ. It can be shown that A is diagonalized by the matrix

    P = [1 1; 0 1]

Therefore,

    Γ = P^-1 B = [1 -1; 0 1] [1; 0] = [1; 0]      (4-341)

The transformed state equation is

    dy(t)/dt = [-2 0; 0 -1] y(t) + [1; 0] u(t)      (4-342)

Since the second row of Γ is zero, the state variable y2(t), or x2(t), is uncontrollable, and the system is uncontrollable.

Example 4-24
Consider that a third-order system has the coefficient matrices

    A = [1 2 -1; 0 1 0; 1 -4 3]      B = [0; 0; 1]

Then

    S = [B  AB  A^2 B] = [0 -1 -4; 0 0 0; 1 3 8]      (4-343)

Since S is singular, the system is not state controllable. Using the alternative method, the eigenvalues of A are found to be λ1 = 2, λ2 = 2, and λ3 = 1. The Jordan canonical form of A is obtained with

    P = [1 0 0; 0 0 1; -1 -1 2]      (4-344)

Then

    Λ = P^-1 A P = [2 1 0; 0 2 0; 0 0 1]      (4-345)

    Γ = P^-1 B = [0; -1; 0]      (4-346)

Since the last row of Γ is zero, the state variable y3 is uncontrollable. Since x2 = y3, this corresponds to x2 being uncontrollable.
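Both conclusions of this example can be checked without forming P^-1 explicitly (our own sketch, not from the text): A P = P Λ verifies the Jordan form, and P Γ = B verifies Γ = P^-1 B.

```python
# Checks for Example 4-24.
A = [[1, 2, -1], [0, 1, 0], [1, -4, 3]]
B = [0, 0, 1]
P = [[1, 0, 0], [0, 0, 1], [-1, -1, 2]]      # Eq. (4-344)
J = [[2, 1, 0], [0, 2, 0], [0, 0, 1]]        # Jordan form Λ, Eq. (4-345)
G = [0, -1, 0]                               # Γ, Eq. (4-346)

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(M, v):
    return [sum(M[i][k] * v[k] for k in range(3)) for i in range(3)]

assert mat_mul(A, P) == mat_mul(P, J)        # Λ = P^-1 A P
assert mat_vec(P, G) == B                    # Γ = P^-1 B; last row of Γ is zero
```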
Example 4-25

Determine the controllability of the system described by the state equation

    dx(t)/dt = [0 1; -1 0] x(t) + [0; 1] u(t)      (4-347)

We form the matrix

    S = [B  AB] = [0 1; 1 0]      (4-348)

which is nonsingular. The system is completely controllable.

Let us now check the controllability of the system from the rows of Γ. The eigenvalues of A are complex and are λ1 = j and λ2 = -j. With the similarity transformation,

    P = [1 1; j -j]      Λ = P^-1 A P = [j 0; 0 -j]

and

    Γ = P^-1 B = [1/(2j); -1/(2j)]

Since all the rows of Γ are nonzero, the system is controllable.

In general, when the eigenvalues are complex, which occurs quite frequently in control systems, it is more difficult to work with complex numbers. However, we may use the modal form so that only real matrices are dealt with. In the present problem A may be transformed to the modal form

    Λ = [σ ω; -ω σ] = [0 1; -1 0]      (4-349)

by the transform matrix

    P = [1 -1; 1 1]

Then

    Γ = P^-1 B = (1/2) [1; 1]

Since the modal form implies that the states are coupled, the condition of controllability is that not all the rows of Γ are zeros.
Output Controllability

The condition of controllability defined in the preceding sections refers only to the states of a system. Essentially, a system is controllable if every desired transition of the states can be effected in finite time by an unconstrained control. However, controllability defined in terms of the states is neither necessary nor sufficient for the existence of a solution of the problem of controlling the outputs of the system.

Definition of output controllability. A system is said to be completely output controllable if there exists a piecewise continuous function u(t) that will drive the output y(t0) at t = t0 to any final output y(tf) for a finite time (tf - t0) >= 0.

Theorem 4-3. Consider that an nth-order linear time-invariant system is described by the dynamic equations of Eqs. (4-314) and (4-315). The system is completely output controllable if and only if the p x (n + 1)r matrix

    T = [DB  DAB  DA^2 B ... DA^(n-1) B  E]      (4-350)

is of rank p; that is, T has a set of p linearly independent columns.

The proof of this theorem is similar to that of Theorem 4-2.

Example 4-26

Consider a linear system whose input-output relationship is described by the differential equation

    d^2 c(t)/dt^2 + 2 dc(t)/dt + c(t) = du(t)/dt + u(t)      (4-351)

The state controllability and the output controllability of the system will be investigated. We shall show that the state controllability of the system depends upon how the state variables are defined. Let the state variables be defined as

    x1 = c
    x2 = dc/dt - u
The state equations of the system are expressed in matrix form as

    [dx1/dt; dx2/dt] = [0 1; -1 -2] [x1; x2] + [1; -1] u      (4-352)

The output equation is

    c = x1      (4-353)

The state controllability matrix is

    S = [B  AB] = [1 -1; -1 1]      (4-354)

which is singular. The system is not state controllable. From the output equation, D = [1  0] and E = 0. The output controllability matrix is

    T = [DB  DAB  E] = [1  -1  0]      (4-355)

which is of rank 1, the same as the number of outputs. Thus the system is output controllable.
Now let us define the state variables of the system in a different way. By the method of direct decomposition, the state equations are written in matrix form as

    [dx1/dt; dx2/dt] = [0 1; -1 -2] [x1; x2] + [0; 1] u      (4-356)

The output equation is now

    c = x1 + x2      (4-357)

The system is completely state controllable since

    S = [B  AB] = [0 1; 1 -2]      (4-358)

is nonsingular. The system is still output controllable since

    T = [DB  DAB  E] = [1  -1  0]      (4-359)

is of rank 1.

We have demonstrated through this example that, given a linear system, state controllability depends on how the state variables are defined. Of course, the output controllability is directly dependent upon the assignment of the output variable. The two types of controllability are not at all related to each other.
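The contrast between the two realizations of Eq. (4-351) can be reproduced numerically (our own sketch, not from the text), using 2 x 2 determinants for the rank tests:

```python
# Rank checks for the two realizations of Example 4-26.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[0, 1], [-1, -2]]

# First choice of state variables, Eqs. (4-352)-(4-353)
B1, D1 = [1, -1], [1, 0]
AB1 = [sum(A[i][k] * B1[k] for k in range(2)) for i in range(2)]
S1 = [[B1[0], AB1[0]], [B1[1], AB1[1]]]
assert det2(S1) == 0                     # Eq. (4-354): not state controllable

# Second choice (direct decomposition), Eqs. (4-356)-(4-357)
B2, D2 = [0, 1], [1, 1]
AB2 = [sum(A[i][k] * B2[k] for k in range(2)) for i in range(2)]
S2 = [[B2[0], AB2[0]], [B2[1], AB2[1]]]
assert det2(S2) != 0                     # Eq. (4-358): state controllable

# Both realizations give T = [DB DAB E] = [1 -1 0], rank 1: output controllable
for B, D in ((B1, D1), (B2, D2)):
    AB = [sum(A[i][k] * B[k] for k in range(2)) for i in range(2)]
    T = [sum(D[k] * B[k] for k in range(2)),
         sum(D[k] * AB[k] for k in range(2)), 0]
    assert T == [1, -1, 0]
```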
4.16 Observability of Linear Systems

The concept of observability is quite similar to that of controllability. Essentially, a system is completely observable if every state variable of the system affects some of the outputs. In other words, it is often desirable to obtain information on the state variables from measurements of the outputs and the inputs. If any one of the states cannot be observed from the measurements of the outputs, the state is said to be unobservable, and the system is not completely observable, or is simply unobservable. Figure 4-26 shows the state diagram of a linear system in which the state x2 is not connected to the output c in any way. Once we have measured c, we can observe the state x1, since x1 = c. However, the state x2 cannot be observed from the information on c. Thus the system is described as not completely observable, or simply unobservable.

Fig. 4-26. State diagram of a system that is not observable.
Definition of observability. Given a linear time-invariant system that is described by the dynamic equations of Eqs. (4-314) and (4-315), the state x(t0) is said to be observable if, given any input u(t), there exists a finite time tf >= t0 such that the knowledge of u(t) for t0 <= t <= tf; the matrices A, B, D, and E; and the output c(t) for t0 <= t <= tf are sufficient to determine x(t0). If every state of the system is observable for a finite tf, we say that the system is completely observable, or simply observable.

The following theorem shows that the condition of observability depends on the coefficient matrices A and D of the system. The theorem also gives one method of testing observability.

Theorem 4-4. For the system described by the dynamic equations of Eqs. (4-314) and (4-315) to be completely observable, it is necessary and sufficient that the following n x np matrix has a rank of n:

    V = [D'  A'D'  (A')^2 D' ... (A')^(n-1) D']      (4-360)

The condition is also referred to as the pair [A, D] being observable. In particular, if the system has only one output, D is a 1 x n matrix; V of Eq. (4-360) is an n x n square matrix. Then the system is completely observable if V is nonsingular.
Proof: Substituting Eq. (4-317) into Eq. (4-315), we have

    c(t) = D φ(t - t0) x(t0) + D ∫[t0 to t] φ(t - τ) B u(τ) dτ + E u(t)      (4-361)

Based on the definition of observability, it is apparent that the observability of x(t0) depends essentially on the first term on the right side of Eq. (4-361). With u(t) = 0, Eq. (4-361) becomes

    c(t) = D φ(t - t0) x(t0)      (4-362)

Making use of Eq. (4-322), Eq. (4-362) becomes

    c(t) = Σ[m = 0 to n-1] α_m(t) D A^m x(t0)      (4-363)

or

    c(t) = [α0 I  α1 I ... α_(n-1) I] [D; DA; DA^2; ...; DA^(n-1)] x(t0)      (4-364)

Therefore, knowing the output c(t) over the time interval t0 <= t <= tf, x(t0) is uniquely determined from Eq. (4-364) if and only if the matrix

    [D; DA; DA^2; ...; DA^(n-1)]      (np x n)

has rank n; or, the matrix

    V = [D'  A'D'  (A')^2 D' ... (A')^(n-1) D']      (4-365)

has a rank of n.

Comparing Eq. (4-360) with Eq. (4-316) and the rank condition, the following observations may be made:

1. Controllability of the pair [A, B] implies observability of the pair [A', B'].
2. Observability of the pair [A, B] implies controllability of the pair [A', B'].
Example 4-27

Consider the system shown in Fig. 4-26, which was earlier found to be unobservable. The dynamic equations of the system are written directly from the state diagram:

    [dx1/dt; dx2/dt] = [-2 0; 0 -1] [x1; x2] + [3; 1] u      (4-366)

    c = [1  0] [x1; x2]      (4-367)

Therefore,

    D' = [1; 0]      A'D' = [-2; 0]

and, from Eq. (4-360),

    V = [D'  A'D'] = [1 -2; 0 0]      (4-368)

Since V is singular, the system is unobservable.
Example 4-28

Consider the linear system described by the following dynamic equations:

    [dx1/dt; dx2/dt] = [1 -1; 1 1] [x1; x2]      (4-369)

    c1 = [1  -1] [x1; x2]      (4-370)

For the test of observability, we evaluate

    A'D' = [1 1; -1 1] [1; -1] = [0; -2]      (4-371)

The observability matrix becomes

    V = [D'  A'D'] = [1 0; -1 -2]      (4-372)

Since V has a rank of 2, the number of state variables, the system is completely observable.
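For second-order single-output systems, the test of Theorem 4-4 is again a 2 x 2 determinant. The following sketch (ours, not from the text) applies it to Examples 4-27 and 4-28:

```python
# Observability test V = [D' A'D'] for 2-state, single-output systems.
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def obs_matrix(A, D):
    ADt = [sum(A[k][i] * D[k] for k in range(2)) for i in range(2)]  # A'D'
    return [[D[0], ADt[0]], [D[1], ADt[1]]]

V27 = obs_matrix([[-2, 0], [0, -1]], [1, 0])   # Example 4-27
assert V27 == [[1, -2], [0, 0]]
assert det2(V27) == 0                          # singular: unobservable

V28 = obs_matrix([[1, -1], [1, 1]], [1, -1])   # Example 4-28
assert V28 == [[1, 0], [-1, -2]]
assert det2(V28) != 0                          # nonsingular: observable
```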
Example 4-29

Let us consider the system described by the differential equation of Eq. (4-351), Example 4-26. In Example 4-26 we showed that the state controllability of a system depends on how the state variables are defined. We shall now show that the observability also depends on the definition of the state variables.

Let the dynamic equations of the system be defined as in Eqs. (4-352) and (4-353),

    A = [0 1; -1 -2]      D = [1  0]

Then

    V = [D'  A'D'] = [1 0; 0 1]      (4-373)

and thus the system is completely observable.

Let the dynamic equations of the system be given by Eqs. (4-356) and (4-357). Then D = [1  1] and

    V = [D'  A'D'] = [1 -1; 1 -1]

which is singular. Thus the system is unobservable, and we have shown that, given the input-output relation of a linear system, the observability of the system depends on how the state variables are defined. It should be noted that for the system of Eq. (4-351), one method of state-variable assignment, Eqs. (4-352) and (4-353), yields a system that is observable but not state controllable. On the other hand, if the dynamic equations of Eqs. (4-356) and (4-357) are used, the system is completely state controllable but not observable. There are definite reasons behind these results, and we shall investigate these phenomena further in the following discussions.
Alternative definition of observability. If the matrix A has distinct eigenvalues, it can be diagonalized as in Eq. (4-333). The new state variable is

    y = P^-1 x      (4-374)

The new dynamic equations are

    dy/dt = Λ y + Γ u      (4-375)

    c = F y + E u      (4-376)

where

    F = D P      (4-377)

Then the system is completely observable if F has no zero columns.

The reason behind the above condition is that if the jth (j = 1, 2, ..., n) column of F contains all zeros, the state variable y_j will not appear in Eq. (4-376) and is not related to the output c(t). Therefore, y_j will be unobservable. In general, the states that correspond to zero columns of F are said to be unobservable, and the rest of the state variables are observable.
Example 4-30

Consider the system of Example 4-27, which was found to be unobservable. Since the A matrix, as shown in Eq. (4-366), is already a diagonal matrix, the alternative condition of observability stated above requires that the matrix D = [1  0] must not contain any zero columns. Since the second column of D is indeed zero, the state x2 is unobservable, and the system is unobservable.

4.17 Relationship Among Controllability, Observability, and Transfer Functions
In the classical analysis of control systems, transfer functions are often used for the modeling of linear time-invariant systems. Although controllability and observability are concepts of modern control theory, they are closely related to the properties of the transfer function.

Let us focus our attention on the system considered in Examples 4-26 and 4-29. It was demonstrated in these two examples that the system is either not state controllable or not observable, depending on the ways the state variables are defined. These phenomena can be explained by referring to the transfer function of the system, which is obtained from Eq. (4-351):

    C(s)/U(s) = (s + 1)/(s^2 + 2s + 1) = (s + 1)/(s + 1)^2 = 1/(s + 1)      (4-378)

which has an identical pole and zero at s = -1. The following theorem gives the relationship between controllability and observability and the pole-zero cancellation of a transfer function.
Theorem 4-5. If the input-output transfer function of a linear system has pole-zero cancellation, the system will be either not state controllable or unobservable, depending on how the state variables are defined. If the input-output transfer function of a linear system does not have pole-zero cancellation, the system can always be represented by dynamic equations as a completely controllable and observable system.

Proof: Consider that an nth-order system with a single input, a single output, and distinct eigenvalues is represented by the dynamic equations

    dx(t)/dt = A x(t) + B u(t)      (4-379)
    c(t) = D x(t)      (4-380)
Let the A matrix be diagonalized by the n x n Vandermonde matrix

    P = [1 1 ... 1; λ1 λ2 ... λn; λ1^2 λ2^2 ... λn^2; ...; λ1^(n-1) λ2^(n-1) ... λn^(n-1)]      (4-381)

The new state equation in canonical form is

    dy(t)/dt = Λ y(t) + Γ u(t)      (4-382)

where Λ = P^-1 A P. The output equation is

    c(t) = F y(t)      (4-383)

where F = D P. The state vectors x(t) and y(t) are related by

    x(t) = P y(t)      (4-384)

Since Λ is a diagonal matrix, the ith equation of Eq. (4-382) is

    dy_i(t)/dt = λ_i y_i(t) + γ_i u(t)      (4-385)

where λ_i is the ith eigenvalue of A and γ_i is the ith element of Γ, which is an n x 1 matrix in the present case. Taking the Laplace transform on both sides of Eq. (4-385) and assuming zero initial conditions, we obtain the transfer function relation between Y_i(s) and U(s) as
Jl. U(s)
YJis) S
The Laplace transform of Eq.
(4-383)
=
C(s)
Now
if it is
—
(4-386)
A,-
is
FY(5)
=
DPY(i)
(4-387)
assumed that
D=
d2
[rf 1
...
(4-388)
d„]
then
F
= DP =
[/
f2
+
...
I
/J
(4-389)
djrr
(4-390)
where /,
for
i
=
=
,
+
d 2 X,
1,2, ... ,n. Equation (4-387)
Gto
=
[/i
h
[/.
h
—
written as
/JY(j)
U(s)
f,7,
s
is
+
X,
U(s)
(4-391)
For the nth-order system with distinct eigenvalues, let us assume that the input-output transfer function is of the form

    C(s)/U(s) = p(s)/[(s - λ_1)(s - λ_2) ... (s - λ_n)]             (4-392)

where p(s) denotes the numerator polynomial, which is expanded by partial fraction into

    C(s)/U(s) = Σ_{i=1}^{n} σ_i/(s - λ_i)                           (4-393)

where σ_i denotes the residue of C(s)/U(s) at s = λ_i. It was established earlier that for the system described by Eq. (4-382) to be state controllable, all the rows of Γ must be nonzero; that is, γ_i ≠ 0 for i = 1, 2, ..., n. If C(s)/U(s) has one or more pairs of identical pole and zero, for instance a zero at s = λ_1, then in Eq. (4-393), σ_1 = 0. Comparing Eq. (4-391) with Eq. (4-393), we see that in general

    σ_i = f_i γ_i                                                   (4-394)

Therefore, when σ_i = 0, γ_i will be zero if f_i ≠ 0, and the state y_i is uncontrollable.

For observability, it was established earlier that F must not have columns containing zeros; or, in the present case, f_i ≠ 0 for i = 1, 2, ..., n. However, from Eq. (4-394),

    f_i = σ_i/γ_i                                                   (4-395)

When σ_i = 0, the transfer function has an identical pair of pole and zero at s = λ_i. Thus, from Eq. (4-395), f_i = 0 if γ_i ≠ 0, and the state y_i is unobservable.
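Theorem 4-5 can be illustrated numerically. The sketch below is an assumed realization (not from the text) of the cancelled transfer function 1/(s + 2) with eigenvalues -1 and -2: depending on whether the vanishing product σ_1 = f_1 γ_1 of Eq. (4-394) is assigned to a zero row of Γ or to a zero column of F, rank tests flag loss of controllability or of observability.

```python
import numpy as np

# Diagonalized system ydot = L y + G u, c = F y (cf. Eqs. (4-382), (4-383))
L = np.diag([-1.0, -2.0])

# Both realizations below give C(s)/U(s) = 1/(s + 2): the residue at the
# cancelled pole s = -1 is zero, so sigma_1 = f_1 * gamma_1 = 0 (Eq. (4-394)).
G1 = np.array([[0.0], [1.0]])   # gamma_1 = 0: uncontrollable realization
F1 = np.array([[1.0, 1.0]])
G2 = np.array([[1.0], [1.0]])   # f_1 = 0: unobservable realization
F2 = np.array([[0.0, 1.0]])

def ctrb_rank(A, B):
    """Rank of the controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    S = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(n)])
    return np.linalg.matrix_rank(S)

def obsv_rank(A, C):
    """Rank of the observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    V = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return np.linalg.matrix_rank(V)

print(ctrb_rank(L, G1), obsv_rank(L, F1))   # rank-deficient controllability
print(ctrb_rank(L, G2), obsv_rank(L, F2))   # rank-deficient observability
```
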
4.18 Nonlinear State Equations and Their Linearization
When a dynamic system has nonlinear characteristics, the state equations of the system can be represented by the vector-matrix form

    dx(t)/dt = f[x(t), r(t)]                                        (4-396)

where x(t) represents the n x 1 state vector, r(t) the p x 1 input vector, and f[x(t), r(t)] denotes an n x 1 function vector. In general, f is a function of the state vector and the input vector.

Being able to represent a nonlinear and/or time-varying system by state equations is a distinct advantage of the state-variable approach over the transfer function method, since the latter is defined strictly only for linear time-invariant systems.

As a simple illustrative example, the following state equations are nonlinear:

    dx_1(t)/dt = x_1(t) + x_2^2(t)
    dx_2(t)/dt = x_1(t) + r(t)                                      (4-397)

Since nonlinear systems are usually difficult to analyze and design, it would be desirable to perform a linearization whenever the situation justifies it. A linearization process that depends on expanding the nonlinear state equation into a Taylor series about a nominal operating point or trajectory is now described. All the terms of the Taylor series of order higher than 1 are discarded, and a linear approximation of the nonlinear state equation at the nominal point results.

Let the nominal operating trajectory be denoted by x_0(t), which corresponds to the nominal input r_0(t) and some fixed initial states. Expanding the nonlinear state equation of Eq. (4-396) into a Taylor series about x(t) = x_0(t) and neglecting all the higher-order terms yields

    dx_i(t)/dt = f_i(x_0, r_0) + Σ_{j=1}^{n} [∂f_i(x, r)/∂x_j]_0 (x_j - x_0j)
                 + Σ_{j=1}^{p} [∂f_i(x, r)/∂r_j]_0 (r_j - r_0j)     (4-398)

for i = 1, 2, ..., n, where the partial derivatives are evaluated at the nominal point. Let

    Δx_i = x_i - x_0i                                               (4-399)

and

    Δr_i = r_i - r_0i                                               (4-400)

Then

    dΔx_i/dt = dx_i/dt - dx_0i/dt                                   (4-401)

Since

    dx_0i/dt = f_i(x_0, r_0)                                        (4-402)

Equation (4-398) is written

    dΔx_i/dt = Σ_{j=1}^{n} [∂f_i(x, r)/∂x_j]_0 Δx_j + Σ_{j=1}^{p} [∂f_i(x, r)/∂r_j]_0 Δr_j   (4-403)

The last equation may be written in the vector-matrix form

    dΔx/dt = A*Δx + B*Δr                                            (4-404)

where

         | ∂f_1/∂x_1  ∂f_1/∂x_2  ...  ∂f_1/∂x_n |
    A* = | ∂f_2/∂x_1  ∂f_2/∂x_2  ...  ∂f_2/∂x_n |                   (4-405)
         | ...                                  |
         | ∂f_n/∂x_1  ∂f_n/∂x_2  ...  ∂f_n/∂x_n |

         | ∂f_1/∂r_1  ∂f_1/∂r_2  ...  ∂f_1/∂r_p |
    B* = | ∂f_2/∂r_1  ∂f_2/∂r_2  ...  ∂f_2/∂r_p |                   (4-406)
         | ...                                  |
         | ∂f_n/∂r_1  ∂f_n/∂r_2  ...  ∂f_n/∂r_p |

It should be reiterated that A* and B* are evaluated at the nominal point. Thus we have linearized the nonlinear system of Eq. (4-396) at a nominal operating point. However, in general, although Eq. (4-404) is linear, the elements of A* and B* may be time varying. The following examples serve to illustrate the linearization procedure just described.
Example 4-31. Figure 4-27 shows the block diagram of a control system with a saturation nonlinearity. The state equations of the system are

    dx_1/dt = f_1 = x_2                                             (4-407)
    dx_2/dt = f_2 = u                                               (4-408)

Fig. 4-27. Nonlinear control system.

where the input-output relation of the saturation nonlinearity is represented by

    u = (1 - e^(-K|x_1|)) SGN x_1                                   (4-409)

where

    SGN x_1 = +1    x_1 > 0
            = -1    x_1 < 0                                         (4-410)

Substituting Eq. (4-409) into Eq. (4-408) and using Eq. (4-403), we have the linearized state equations

    dΔx_1/dt = (∂f_1/∂x_2) Δx_2 = Δx_2                              (4-411)
    dΔx_2/dt = (∂f_2/∂x_1) Δx_1 = K e^(-K|x_01|) Δx_1               (4-412)

where x_01 denotes a nominal value of x_1. Notice that the last two equations are linear and are valid only for small signals. In vector-matrix form, these linearized state equations are written as

    | dΔx_1/dt |   | 0  1 | | Δx_1 |
    | dΔx_2/dt | = | a  0 | | Δx_2 |                                (4-413)

where

    a = K e^(-K|x_01|) = constant                                   (4-414)

It is of interest to check the significance of the linearization. If x_01 is chosen to be at the origin of the nonlinearity, x_01 = 0, then a = K; Eq. (4-412) becomes

    dΔx_2/dt = K Δx_1                                               (4-415)

Thus the linearized model is equivalent to having a linear amplifier with a constant gain K. On the other hand, if x_01 is a large number, the nominal operating point will lie on the saturated portion of the nonlinearity, and a ≅ 0. This means that any small variation in x_1 (small Δx_1) will give rise to practically no change in Δx_2.
Example 4-32. In the last example the linearized system turns out to be time-invariant. In general, linearization of a nonlinear system often leads to a linear time-varying system. Consider the following nonlinear system:

    dx_1/dt = -1/x_2^2                                              (4-416)
    dx_2/dt = u x_1                                                 (4-417)

We would like to linearize these equations about the nominal trajectory [x_01(t), x_02(t)], which is the solution of the equations with the initial conditions x_1(0) = x_2(0) = 1 and the input u(t) = 0. Integrating both sides of Eq. (4-417), we have

    x_2 = x_2(0) = 1                                                (4-418)

Then Eq. (4-416) gives

    x_1 = -t + 1                                                    (4-419)

Therefore, the nominal trajectory about which Eqs. (4-416) and (4-417) are to be linearized is described by

    x_01(t) = -t + 1                                                (4-420)
    x_02(t) = 1                                                     (4-421)

Now evaluating the coefficients of Eq. (4-403), we get

    ∂f_1/∂x_1 = 0,   ∂f_1/∂x_2 = 2/x_2^3,   ∂f_2/∂x_1 = u,   ∂f_2/∂u = x_1

Equation (4-403) gives

    dΔx_1/dt = (2/x_02^3) Δx_2                                      (4-422)
    dΔx_2/dt = u_0 Δx_1 + x_01 Δu                                   (4-423)

Substituting Eqs. (4-420) and (4-421) into Eqs. (4-422) and (4-423), the linearized equations are written as

    | dΔx_1/dt |   | 0  2 | | Δx_1 |   |   0   |
    | dΔx_2/dt | = | 0  0 | | Δx_2 | + | 1 - t | Δu                 (4-424)

which is a set of linear state equations with time-varying coefficients.
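The linearization of Example 4-32 can be reproduced numerically with finite-difference Jacobians. A minimal sketch (the `jacobians` helper is an illustration, not from the text):

```python
import numpy as np

# Nonlinear system of Eqs. (4-416), (4-417): x1' = -1/x2^2, x2' = u*x1
def f(x, u):
    return np.array([-1.0 / x[1]**2, u * x[0]])

def jacobians(x0, u0, eps=1e-6):
    """Central-difference Jacobians of f about the nominal point (x0, u0)."""
    n = len(x0)
    A = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * eps)
    B = (f(x0, u0 + eps) - f(x0, u0 - eps)) / (2 * eps)
    return A, B.reshape(-1, 1)

t = 0.5                              # any point on the nominal trajectory
x0 = np.array([1.0 - t, 1.0])        # Eqs. (4-420), (4-421)
A, B = jacobians(x0, 0.0)

# Analytical result of Eq. (4-424): A* = [[0, 2], [0, 0]], B* = [0, 1 - t]
assert np.allclose(A, [[0, 2], [0, 0]], atol=1e-4)
assert np.allclose(B, [[0], [1 - t]], atol=1e-4)
```
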
4.19 State Equations of Linear Discrete-Data Systems

Similar to the continuous-data case, the modern way of modeling a discrete-data system is by means of discrete state equations. As described earlier, when dealing with discrete-data systems we often encounter two different situations. In the first, the components of the system are continuous-data elements, but the signals at certain points of the system are discrete or discontinuous with respect to time because of the sample-and-hold operations. In this case the components of the system are still described by differential equations, but because of the discrete data, a set of difference equations may be generated from the original differential equations. The second situation involves systems that are completely discrete with respect to time, in the sense that they receive and send out discrete data only, such as in the case of a digital controller or digital computer. Under this condition, the system dynamics should be described by difference equations.
Let us consider the open-loop discrete-data control system with a sample-and-hold device shown in Fig. 4-28. Typical signals that appear at various points in the system are also shown in the figure.

Fig. 4-28. Discrete-data system with sample-and-hold.

The output signal, c(t), is a continuous-data signal. The output of the sample-and-hold, h(t), ordinarily is a train of steps. Therefore, we can write

    h(kT) = r(kT),    k = 0, 1, 2, ...                              (4-425)
Now we let the linear process G be described by the state equation and output equation

    dx(t)/dt = Ax(t) + Bh(t)                                        (4-426)
        c(t) = Dx(t) + Eh(t)                                        (4-427)

where x(t) is the state vector and h(t) and c(t) are the scalar input and output signals, respectively. The matrices A, B, D, and E are coefficient matrices which have been defined earlier. Using Eq. (4-70), the state transition equation of the system is written, for t ≥ t_0,

    x(t) = φ(t - t_0)x(t_0) + ∫_{t_0}^{t} φ(t - τ)Bh(τ) dτ          (4-428)
If we are interested only in the responses at the sampling instants, just as in the case of the z-transform solution, we let t = (k + 1)T and t_0 = kT. Then Eq. (4-428) becomes

    x[(k+1)T] = φ(T)x(kT) + ∫_{kT}^{(k+1)T} φ[(k+1)T - τ]Bh(τ) dτ   (4-429)

where φ(T) is the state transition matrix as defined in Section 4.4. Since h(t) is piecewise constant, that is, h(t) = h(kT) = r(kT) for kT ≤ t < (k+1)T, the input function h(τ) in Eq. (4-429) can be taken outside the integral sign. Equation (4-429) is written

    x[(k+1)T] = φ(T)x(kT) + ∫_{kT}^{(k+1)T} φ[(k+1)T - τ]B dτ r(kT)   (4-430)

or

    x[(k+1)T] = φ(T)x(kT) + θ(T)r(kT)                               (4-431)

where

    θ(T) = ∫_{kT}^{(k+1)T} φ[(k+1)T - τ]B dτ                        (4-432)

Equation (4-431) is of the form of a linear difference equation in vector-matrix form. Since it represents a set of first-order difference equations, it is referred to as the vector-matrix discrete state equation.
The discrete state equation in Eq. (4-431) can be solved by means of a simple recursion procedure. Setting k = 0, 1, 2, ... in Eq. (4-431), the following equations result:

    x(T) = φ(T)x(0) + θ(T)r(0)                                      (4-433)
    x(2T) = φ(T)x(T) + θ(T)r(T)                                     (4-434)
    x(3T) = φ(T)x(2T) + θ(T)r(2T)                                   (4-435)
    ...
    x(kT) = φ(T)x[(k-1)T] + θ(T)r[(k-1)T]                           (4-436)

Substituting Eq. (4-433) into Eq. (4-434), then Eq. (4-434) into Eq. (4-435), and so on, we obtain the following solution for Eq. (4-431):

    x(kT) = φ^k(T)x(0) + Σ_{i=0}^{k-1} φ^(k-i-1)(T)θ(T)r(iT)        (4-437)

Equation (4-437) is defined as the discrete state transition equation of the discrete-data system. It is interesting to note that Eq. (4-437) is analogous to its continuous counterpart in Eq. (4-67). In fact, the state transition equation of Eq. (4-67) describes the state of the system of Fig. 4-28 with or without sampling. The discrete state transition equation of Eq. (4-437) is more restricted in that it describes the state only at t = kT (k = 0, 1, 2, ...), and only if the system has a sample-and-hold device such as in Fig. 4-28.

With kT considered as the initial time, a discrete state transition equation similar to that of Eq. (4-70) can be obtained as

    x[(k+N)T] = φ^N(T)x(kT) + Σ_{i=0}^{N-1} φ^(N-i-1)(T)θ(T)r[(k+i)T]   (4-438)

where N is a positive integer. The derivation of Eq. (4-438) is left as an exercise for the reader.

The output of the system of Fig. 4-28 at the sampling instants is obtained by substituting t = kT and Eq. (4-437) into Eq. (4-427), yielding

    c(kT) = Dx(kT) + Eh(kT)
          = Dφ^k(T)x(0) + D Σ_{i=0}^{k-1} φ^(k-i-1)(T)θ(T)r(iT) + Eh(kT)   (4-439)
An important advantage of the state-variable method over the z-transform method is that it can be modified easily to describe the states and the output between sampling instants. In Eq. (4-428), if we let t = (k + Δ)T, where 0 < Δ ≤ 1, and t_0 = kT, we get

    x[(k+Δ)T] = φ(ΔT)x(kT) + ∫_{kT}^{(k+Δ)T} φ[(k+Δ)T - τ]B dτ r(kT)
              = φ(ΔT)x(kT) + θ(ΔT)r(kT)                             (4-440)

By varying the value of Δ between 0 and 1, the information on the state x(t) between the sampling instants is completely described by Eq. (4-440).

One of the interesting properties of the state transition matrix φ(t) is that

    φ^k(T) = φ(kT)                                                  (4-441)

which is proved as follows. Using the homogeneous solution of the state equation of Eq. (4-426), we have

    x(t) = φ(t - t_0)x(t_0)                                         (4-442)

Let t = kT and t_0 = 0; the last equation becomes

    x(kT) = φ(kT)x(0)                                               (4-443)

Also, by the recursive procedure with t = (k+1)T and t_0 = kT, k = 0, 1, 2, ..., Eq. (4-442) leads to

    x(kT) = φ^k(T)x(0)                                              (4-444)

Comparison of Eqs. (4-443) and (4-444) gives the identity in Eq. (4-441).

In view of the relation of Eq. (4-441), the discrete state transition equations of Eqs. (4-437) and (4-438) are written, respectively,

    x(kT) = φ(kT)x(0) + Σ_{i=0}^{k-1} φ[(k-i-1)T]θ(T)r(iT)          (4-445)
    x[(k+N)T] = φ(NT)x(kT) + Σ_{i=0}^{N-1} φ[(N-i-1)T]θ(T)r[(k+i)T]   (4-446)

These two equations can be modified to represent systems with multiple inputs simply by changing the input r into a vector r.

When a linear system has only discrete data throughout the system, its dynamics can be described by a set of discrete state equations

    x[(k+1)T] = Ax(kT) + Br(kT)                                     (4-447)

and output equations

    c(kT) = Dx(kT) + Er(kT)                                         (4-448)

where A, B, D, and E are coefficient matrices of the appropriate dimensions. Notice that Eq. (4-447) is basically of the same form as Eq. (4-431). The only difference in the two situations is the starting point of the system representation. In the case of Eq. (4-431), the starting point is the continuous-data state equations of Eq. (4-426); φ(T) and θ(T) are determined from the A and B matrices of Eq. (4-426). In the case of Eq. (4-447), the equation itself represents an outright description of the discrete-data system, which has only discrete signals.

The solution of Eq. (4-447) follows directly from that of Eq. (4-431). Therefore, the discrete state transition equation of Eq. (4-447) is written

    x(kT) = A^k x(0) + Σ_{i=0}^{k-1} A^(k-i-1) B r(iT)              (4-449)

where

    A^k = A·A·A ... A    (k times)                                  (4-450)
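The matrices φ(T) and θ(T) of Eqs. (4-431) and (4-432) can be computed together from a single matrix exponential of the augmented matrix [[A, B], [0, 0]]T, after which the discrete recursion runs directly. The sketch below is illustrative: the double-integrator process and the series-based `expm` helper are assumptions, not from the text.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential by scaled-and-squared Taylor series (small M only)."""
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(M, 1))))) + 1)
    X = M / 2**s
    E = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ X / k
        E = E + term
    for _ in range(s):
        E = E @ E
    return E

# Assumed process: xdot = A x + B h (Eq. (4-426)), a double integrator
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
T = 0.1

# exp([[A, B], [0, 0]] T) = [[phi(T), theta(T)], [0, 1]]
M = np.zeros((3, 3))
M[:2, :2] = A * T
M[:2, 2:] = B * T
E = expm(M)
phi, theta = E[:2, :2], E[:2, 2:]

# For this process: phi(T) = [[1, T], [0, 1]], theta(T) = [T^2/2, T]
assert np.allclose(phi, [[1, T], [0, 1]])
assert np.allclose(theta, [[T**2 / 2], [T]])

# Recursion of Eq. (4-431): x[(k+1)T] = phi(T) x(kT) + theta(T) r(kT)
x = np.zeros((2, 1))
for k in range(10):
    x = phi @ x + theta * 1.0    # unit-step input r(kT) = 1
print(x.ravel())                 # state after 10 sampling periods
```

Because the hold output is constant over each period, this discrete model is exact for step inputs, matching the continuous response at the sampling instants.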
4.20 z-Transform Solution of Discrete State Equations

The discrete state equation in vector-matrix form,

    x[(k+1)T] = Ax(kT) + Br(kT)                                     (4-451)

can be solved by means of the z-transform method. Taking the z-transform on both sides of Eq. (4-451) yields

    zX(z) - zx(0+) = AX(z) + BR(z)                                  (4-452)

Solving for X(z) from the last equation gives

    X(z) = (zI - A)^-1 z x(0+) + (zI - A)^-1 BR(z)                  (4-453)

The inverse z-transform of the last equation is

    x(kT) = Z^-1[(zI - A)^-1 z] x(0) + Z^-1[(zI - A)^-1 BR(z)]      (4-454)

In order to carry out the inverse z-transform operation of the last equation, we write the z-transform of A^k as

    Z(A^k) = Σ_{k=0}^{∞} A^k z^-k = I + Az^-1 + A^2 z^-2 + ...      (4-455)

Premultiplying both sides of the last equation by Az^-1 and subtracting the result from the last equation, we get

    (I - Az^-1) Z(A^k) = I                                          (4-456)

Therefore, solving for Z(A^k) from the last equation yields

    Z(A^k) = (I - Az^-1)^-1 = (zI - A)^-1 z                         (4-457)

or

    A^k = Z^-1[(zI - A)^-1 z]                                       (4-458)

Equation (4-458) also represents a way of finding A^k by using the z-transform method. Similarly, we can prove that

    Z^-1[(zI - A)^-1 BR(z)] = Σ_{i=0}^{k-1} A^(k-i-1) B r(iT)       (4-459)

Now we substitute Eqs. (4-458) and (4-459) into Eq. (4-454), and we have the solution for x(kT) as

    x(kT) = A^k x(0) + Σ_{i=0}^{k-1} A^(k-i-1) B r(iT)              (4-460)

which is identical to the expression in Eq. (4-449).

Once a discrete-data system is represented by the dynamic equations of Eqs. (4-447) and (4-448), the transfer function relation of the system can be expressed in terms of the coefficient matrices. Setting the initial state x(0+) to zero, Eq. (4-453) gives

    X(z) = (zI - A)^-1 BR(z)                                        (4-461)

When this equation is substituted into the z-transformed version of Eq. (4-448), we have

    C(z) = [D(zI - A)^-1 B + E] R(z)                                (4-462)

Thus the transfer function matrix of the system is

    G(z) = D(zI - A)^-1 B + E                                       (4-463)

This equation can be written

    G(z) = {D[adj(zI - A)]B + |zI - A| E} / |zI - A|                (4-464)

The characteristic equation of the system is defined as

    |zI - A| = 0                                                    (4-465)
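Equations (4-455) and (4-463) can be cross-checked numerically: since (zI - A)^-1 z = I + Az^-1 + A^2 z^-2 + ..., the transfer function also equals E + Σ_{k≥1} D A^(k-1) B z^-k for |z| larger than the spectral radius of A. The matrices below are assumed for illustration only.

```python
import numpy as np

# Assumed example matrices for a discrete-data system (not from the text)
A = np.array([[0.5, 1.0], [0.0, -0.3]])
B = np.array([[0.0], [1.0]])
D = np.array([[1.0, 0.0]])
E = np.array([[0.5]])

def G(z):
    """Transfer function of Eq. (4-463): D(zI - A)^-1 B + E."""
    return (D @ np.linalg.inv(z * np.eye(2) - A) @ B + E).item()

# Power-series form implied by Eq. (4-455); converges since |z| = 2 exceeds
# the spectral radius of A (0.5 here)
z = 2.0
series = E.item() + sum((D @ np.linalg.matrix_power(A, k - 1) @ B).item() / z**k
                        for k in range(1, 60))
assert abs(G(z) - series) < 1e-12
```
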
In general, a linear time-invariant discrete-data system with one input and one output can be described by the following linear difference equation with constant coefficients:

    c[(k+n)T] + a_1 c[(k+n-1)T] + a_2 c[(k+n-2)T] + ... + a_(n-1) c[(k+1)T] + a_n c(kT)
        = b_0 r[(k+m)T] + b_1 r[(k+m-1)T] + ... + b_(m-1) r[(k+1)T] + b_m r(kT),   n ≥ m   (4-466)

Taking the z-transform on both sides of this equation and rearranging terms, the transfer function of the system is written

    C(z)/R(z) = (b_0 z^m + b_1 z^(m-1) + ... + b_(m-1) z + b_m) / (z^n + a_1 z^(n-1) + ... + a_(n-1) z + a_n)   (4-467)

The characteristic equation is defined as

    z^n + a_1 z^(n-1) + ... + a_(n-1) z + a_n = 0                   (4-468)

Example 4-33. Consider that a discrete-data system is described by the difference equation
equation c(k
+
+
2)
5c(k
Taking the z-transform on both
+
+
1)
3c(/c)
=
r(k
+
1)
sides of the last equation
+
(4-469)
2r(/t)
and assuming zero
initial
conditions yields
z^C{z)
From
+
5zC(z)
+
= zR(z) +
the last equation the transfer function of the system
z2
R(z)
The
3C(z)
characteristic equation
is
+
+
5z
2R(z) is
(4-470)
easily written
^ *' l)
3
obtained by setting the denominator polynomial of the
transfer function to zero,
z1
The
state variables of the
XiQc)
(4-469) gives the
two
two
5z
= c(k) = x,{k +
x 2 (k from which we have the
A
3
=
(4-472)
(4-473) 1)
- r(k)
(4-474)
relations into the original difference equation of Eq.
state equations of the
Xl (k
+
system are arbitrarily defined as
Xl (k)
Substitution of the last
+
system as
+ 1)= x 2 (k) + r{k) + 1) = -3^!^) - 5x 2 (k) -
(4-475)
3r{k)
(4-476)
matrix of the system, 1"
(4-477)
-3
The same
4.21
-5_
characteristic equation as in Eq. (4-472)
is
obtained by using zl |
— A| = 0.
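The realization of Example 4-33 can be verified by simulation: the state equations (4-475) and (4-476) must reproduce the difference equation (4-469) for an arbitrary input, and |zI - A| must recover Eq. (4-472). A minimal sketch:

```python
import numpy as np

# State model of Eqs. (4-475)-(4-477) for c(k+2) + 5c(k+1) + 3c(k) = r(k+1) + 2r(k)
A = np.array([[0.0, 1.0], [-3.0, -5.0]])
B = np.array([1.0, -3.0])
r = np.cos(np.arange(12.0))          # arbitrary test input

# Simulate the state equations; the output is c(k) = x1(k)
x = np.zeros(2)
c_state = []
for k in range(12):
    c_state.append(x[0])
    x = A @ x + B * r[k]

# Simulate the difference equation directly; for the zero-state response,
# c(0) = 0 and c(1) = r(0) (the first Markov parameters of Eq. (4-471))
c_diff = [0.0, r[0]]
for k in range(10):
    c_diff.append(-5 * c_diff[k + 1] - 3 * c_diff[k] + r[k + 1] + 2 * r[k])

assert np.allclose(c_state, c_diff)

# Characteristic equation (4-472) recovered from |zI - A| = 0
assert np.allclose(np.poly(A), [1.0, 5.0, 3.0])
```
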
4.21 State Diagrams for Discrete-Data Systems

When a discrete-data system is described by difference equations or discrete state equations, a discrete state diagram may be constructed for the system. Similar to the relations between the analog-computer diagram and the state diagram for a continuous-data system, the elements of a discrete state diagram resemble the computing elements of a digital computer. Some of the operations of a digital computer are multiplication by a constant, addition of several machine variables, and time delay or shifting. The mathematical descriptions of these basic digital computations and their corresponding z-transform expressions are as follows:

1. Multiplication by a constant:

    x_2(kT) = a x_1(kT)                                             (4-478)
    X_2(z) = a X_1(z)                                               (4-479)

2. Summing:

    x_2(kT) = x_0(kT) + x_1(kT)                                     (4-480)
    X_2(z) = X_0(z) + X_1(z)                                        (4-481)

3. Shifting or time delay:

    x_2(kT) = x_1[(k+1)T]                                           (4-482)
    X_2(z) = z X_1(z) - z x_1(0+)                                   (4-483)

or

    X_1(z) = z^-1 X_2(z) + x_1(0+)                                  (4-484)

The state diagram representations of these operations are illustrated in Fig. 4-29. The initial time t = 0+ in Eq. (4-484) can be generalized to t = t_1. Then Eq. (4-484) is written

    X_1(z) = z^-1 X_2(z) + x_1(t_1)                                 (4-485)

which represents the discrete-time state transition for time greater than or equal to t_1.

Fig. 4-29. Basic elements of a discrete state diagram.

Example 4-34. Consider again the difference equation in Eq. (4-469), which is

    c(k+2) + 5c(k+1) + 3c(k) = r(k+1) + 2r(k)                       (4-486)

One way of constructing the discrete state diagram for the system is to use the state equations. In this case the state equations are available in Eqs. (4-475) and (4-476), and they are repeated here:

    x_1(k+1) = x_2(k) + r(k)                                        (4-487)
    x_2(k+1) = -3x_1(k) - 5x_2(k) - 3r(k)                           (4-488)

Using essentially the same principle as for the state diagrams for continuous-data systems, the state diagram for Eqs. (4-487) and (4-488) is constructed in Fig. 4-30. The time delay unit z^-1 is used to relate x_1(k+1) to x_1(k). The state variables always appear as outputs of the delay units on the state diagram.

Fig. 4-30. Discrete state diagram of the system described by the difference equation of Eq. (4-486) or by the state equations of Eqs. (4-487) and (4-488).
As an alternative, the state diagram can also be drawn directly from the difference equation by means of the decomposition schemes. The decomposition of a discrete transfer function will be discussed in the following section, after we have demonstrated some of the practical applications of the discrete state diagram.

The state transition equation of the system can be obtained directly from the state diagram using the gain formula. Referring to X_1(z) and X_2(z) as the output nodes and to x_1(0+), x_2(0+), and R(z) as input nodes in Fig. 4-30, the state transition equations are written in the following vector-matrix form:

    | X_1(z) |    1 | 1 + 5z^-1   z^-1 | | x_1(0+) |    1 | z^-1 + 2z^-2   |
    | X_2(z) | = --- | -3z^-1      1    | | x_2(0+) | + --- | -3z^-1 - 3z^-2 | R(z)   (4-489)
                  Δ                                     Δ

where

    Δ = 1 + 5z^-1 + 3z^-2                                           (4-490)
The same transfer function between R(z) and C(z) as in Eq. (4-471) can be obtained directly from the state diagram by applying the gain formula between these two nodes.

Decomposition of Discrete Transfer Functions

The three schemes of decomposition discussed earlier for continuous-data systems can be applied to transfer functions of discrete-data systems without the need of modification. As an illustrative example, the following transfer function is decomposed by the three methods, and the corresponding state diagrams are shown in Fig. 4-31:

    C(z)/R(z) = (z + 2)/(z^2 + 5z + 3)                              (4-491)

Fig. 4-31. State diagrams of the transfer function C(z)/R(z) = (z + 2)/(z^2 + 5z + 3) by the three methods of decomposition. (a) Direct decomposition. (b) Cascade decomposition. (c) Parallel decomposition.

Equation (4-491) is used for direct decomposition after the numerator and the denominator are both multiplied by z^-2. For cascade decomposition, the transfer function is first written in factored form:

    C(z)/R(z) = (z + 2)/[(z + 4.3)(z + 0.7)]                        (4-492)

For the parallel decomposition, the transfer function is first expanded by partial fraction into the following form:

    C(z)/R(z) = 0.64/(z + 4.3) + 0.36/(z + 0.7)                     (4-493)
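The factored and parallel forms above can be checked quickly (the pole locations and residues are rounded to two decimals in the text):

```python
import numpy as np

# Poles of Eq. (4-491): roots of z^2 + 5z + 3, about -4.30 and -0.70
p1, p2 = sorted(np.roots([1.0, 5.0, 3.0]))

# Residues of (z + 2)/((z - p1)(z - p2)) for the parallel form of Eq. (4-493)
r1 = (p1 + 2) / (p1 - p2)
r2 = (p2 + 2) / (p2 - p1)

assert abs(p1 + 4.30) < 0.01 and abs(p2 + 0.70) < 0.01   # factored form (4-492)
assert abs(r1 - 0.64) < 0.005 and abs(r2 - 0.36) < 0.005  # residues in (4-493)
```
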
4.22 State Diagrams for Sampled-Data Systems

When a discrete-data system has continuous-data as well as discrete-data elements, with the two types of elements separated by sample-and-hold devices, a special treatment of the state diagram is necessary if a description of the continuous-data states is desired for all times.

Let us first establish the state diagram of the zero-order hold. Consider that the input of the zero-order hold is denoted by e*(t), which is a train of impulses, and the output by h(t). Since the zero-order hold simply holds the magnitude of the input impulse at the sampling instant until the next input comes along, the signal h(t) is a sequence of steps. The input-output relation in the Laplace domain is written

    H(s) = [(1 - e^(-Ts))/s] E*(s)                                  (4-494)

In the time domain, the relation is simply

    h(t) = e(kT+)    for kT ≤ t < (k+1)T                            (4-495)

In the state diagram notation, we need the relation between H(s) and e(kT+). For this purpose we take the Laplace transform on both sides of Eq. (4-495) to give

    H(s) = e(kT+)/s    for kT ≤ t < (k+1)T                          (4-496)

The state diagram representation of the zero-order hold is shown in Fig. 4-32.

Fig. 4-32. State diagram representation of the zero-order hold.

As an illustrative example of how the state diagram of a sampled-data system is constructed, let us consider the system shown in Fig. 4-33. We shall demonstrate the various available ways of modeling the input-output relations of the system.

Fig. 4-33. Sampled-data system.

First, the Laplace transform of the output of the system is written

    C(s) = [1/(s + 1)] H(s)                                         (4-497)
Taking the z-transform on both sides of the last equation yields

    C(z) = [(1 - e^(-T))/(z - e^(-T))] E(z)                         (4-498)

Given information on the input e(t) or e*(t), Eq. (4-498) gives the output response at the sampling instants.

A state diagram can be drawn from Eq. (4-498) using the decomposition technique. Figure 4-34 illustrates the discrete state diagram of the system obtained through decomposition.

Fig. 4-34. Discrete state diagram of the system in Fig. 4-33.

The discrete dynamic equations of the system are written directly from this state diagram:

    x_1[(k+1)T] = e^(-T) x_1(kT) + (1 - e^(-T)) e(kT)               (4-499)
    c(kT) = x_1(kT)                                                 (4-500)

Therefore, the output response of the system can also be obtained by solving the difference equation of Eq. (4-499).

If the response of the output c(t) is desired for all t, we may construct the state diagram shown in Fig. 4-35. This state diagram is obtained by cascading the state diagram representations of the zero-order hold and the process G(s).

Fig. 4-35. State diagram for the system of Fig. 4-33 for the time interval kT ≤ t < (k+1)T.

To determine c(t), which is also x_1(t), we must first obtain X_1(s) by applying the gain formula to the state diagram of Fig. 4-35. We have

    X_1(s) = [1/(s(s + 1))] e(kT+) + [1/(s + 1)] x_1(kT)            (4-501)

Taking the inverse Laplace transform of the last equation gives

    x_1(t) = [1 - e^(-(t-kT))] e(kT+) + e^(-(t-kT)) x_1(kT)         (4-502)

for kT ≤ t < (k+1)T. It is interesting to note that the result in Eq. (4-502) is valid for one sampling period, whereas the result in Eq. (4-499) gives information on x_1(t) only at the sampling instants. It is easy to see that if we let t = (k+1)T in Eq. (4-502), the latter becomes Eq. (4-499).
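Equations (4-499) and (4-502) can be exercised together; a minimal sketch for the system of Fig. 4-33 with G(s) = 1/(s + 1), a sampling period of T = 1, and an assumed unit-step input at the sampler:

```python
import numpy as np

T = 1.0
e = lambda k: 1.0                    # unit-step input at the sampler (assumed)

# Eq. (4-499): response at the sampling instants
x1 = [0.0]
for k in range(5):
    x1.append(np.exp(-T) * x1[k] + (1 - np.exp(-T)) * e(k))

# Eq. (4-502): response between sampling instants, kT <= t < (k+1)T
def x1_between(t):
    k = int(t // T)
    dt = t - k * T
    return (1 - np.exp(-dt)) * e(k) + np.exp(-dt) * x1[k]

# Letting t -> (k+1)T in Eq. (4-502) recovers Eq. (4-499)
for k in range(4):
    assert np.isclose(x1_between((k + 1) * T - 1e-9), x1[k + 1])

# For a step input, the hold output is constant, so the sampled-data system
# reproduces the continuous first-order response 1 - exp(-t) exactly
assert np.isclose(x1_between(2.5), 1 - np.exp(-2.5))
```
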
4.23 State Equations of Linear Time-Varying Systems
When a linear system has time-varying elements, it can be represented by the following dynamic equations:

    dx(t)/dt = A(t)x(t) + B(t)r(t)                                  (4-503)
        c(t) = D(t)x(t) + E(t)r(t)                                  (4-504)

where

    x(t) = n x 1 state vector
    r(t) = p x 1 input vector
    c(t) = q x 1 output vector

and A(t), B(t), D(t), and E(t) are coefficient matrices of appropriate dimensions. The elements of these coefficient matrices are functions of t.

Unlike the time-invariant case, time-varying differential equations generally do not have closed-form solutions. Let us investigate the properties of a time-varying system by considering a scalar homogeneous state equation,

    dx(t)/dt = a(t)x(t)                                             (4-505)

This equation can be solved by first separating the variables,

    dx(t)/x(t) = a(t) dt                                            (4-506)

and then integrating both sides to get

    ln x(t) - ln x(t_0) = ∫_{t_0}^{t} a(τ) dτ                       (4-507)

Therefore,

    x(t) = exp[∫_{t_0}^{t} a(τ) dτ] x(t_0)                          (4-508)

where t_0 denotes the initial time.
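Equation (4-508) can be checked against direct numerical integration for an assumed a(t); here a(t) = -2t, for which the exponent is -(t^2 - t_0^2):

```python
import numpy as np

# Scalar time-varying equation xdot = a(t) x with assumed a(t) = -2t:
# Eq. (4-508) gives x(t) = exp(-(t^2 - t0^2)) x(t0)
a = lambda t: -2.0 * t
t0, x0 = 0.0, 1.0

# Crude forward-Euler integration as an independent check
t, x, h = t0, x0, 1e-4
while t < 1.0:
    x += h * a(t) * x
    t += h

closed_form = np.exp(-(1.0**2 - t0**2)) * x0
assert abs(x - closed_form) < 1e-3
print(x, closed_form)      # both near exp(-1), about 0.368
```
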
Just as in the time-invariant situation, we can define a state transition matrix for the time-varying state equation. For the scalar case under consideration, the state transition matrix is

    φ(t, t_0) = exp[∫_{t_0}^{t} a(τ) dτ]                            (4-509)

Notice that for the time-varying case, the state transition matrix depends upon t and t_0, not simply t - t_0. For the vector-matrix state equation

    dx(t)/dt = A(t)x(t)                                             (4-510)

it is simple to show that the solution can be written

    x(t) = φ(t, t_0)x(t_0)                                          (4-511)

where φ(t, t_0) is the state transition matrix that satisfies Eq. (4-510). However, the problem is how to find φ(t, t_0) in general. The question is: Is φ(t, t_0) related to the A(t) matrix through the following relationship?

    φ(t, t_0) = exp[∫_{t_0}^{t} A(τ) dτ]                            (4-512)

To answer the posed question, let us expand the right side of Eq. (4-512) into a power series,

    exp[∫_{t_0}^{t} A(τ) dτ] = I + ∫_{t_0}^{t} A(τ) dτ + (1/2) ∫_{t_0}^{t} A(τ) dτ ∫_{t_0}^{t} A(τ) dτ + ...   (4-513)

Taking the derivative on both sides of the last equation with respect to time, we have

    d/dt exp[∫_{t_0}^{t} A(τ) dτ] = A(t) + (1/2) A(t) ∫_{t_0}^{t} A(τ) dτ + (1/2) ∫_{t_0}^{t} A(τ) dτ A(t) + ...   (4-514)

Multiplying both sides of Eq. (4-513) by A(t), we have

    A(t) exp[∫_{t_0}^{t} A(τ) dτ] = A(t) + A(t) ∫_{t_0}^{t} A(τ) dτ + ...   (4-515)

By comparison of Eqs. (4-514) and (4-515), we see that

    d/dt exp[∫_{t_0}^{t} A(τ) dτ] = A(t) exp[∫_{t_0}^{t} A(τ) dτ]   (4-516)

or

    dφ(t, t_0)/dt = A(t)φ(t, t_0)                                   (4-517)

if and only if

    A(t) ∫_{t_0}^{t} A(τ) dτ = [∫_{t_0}^{t} A(τ) dτ] A(t)           (4-518)

that is, if A(t) and ∫_{t_0}^{t} A(τ) dτ commute. The requirement that A(t) and its integral commute is evidently a very stringent condition. Therefore, in general, Eq. (4-512) will not be valid.
Most of the properties of the time-invariant state transition matrix φ(t - t_0) can be extended to the time-varying case. These are listed as follows:

1. φ(t, t) = I for any t.
2. φ^-1(t, t_0) = φ(t_0, t).
3. φ(t_2, t_0) = φ(t_2, t_1)φ(t_1, t_0) for any t_0, t_1, t_2.

Solution of the Nonhomogeneous Time-Varying State Equation

Disregarding the problem of finding the state transition matrix φ(t, t_0) for the moment, we shall solve for the solution of the nonhomogeneous state equation of Eq. (4-503). Let the solution be

    x(t) = φ(t, t_0)η(t)                                            (4-519)

where η(t) is an n x 1 vector, and φ(t, t_0) is the state transition matrix that satisfies Eq. (4-517). Equation (4-519) must satisfy Eq. (4-503). Substitution of Eq. (4-519) into Eq. (4-503) yields

    [dφ(t, t_0)/dt] η(t) + φ(t, t_0) dη(t)/dt = A(t)φ(t, t_0)η(t) + B(t)r(t)   (4-520)

Substituting Eq. (4-517) into Eq. (4-520) and simplifying, we get

    φ(t, t_0) dη(t)/dt = B(t)r(t)                                   (4-521)

Thus

    dη(t)/dt = φ^-1(t, t_0)B(t)r(t)                                 (4-522)

and

    η(t) = η(t_0) + ∫_{t_0}^{t} φ^-1(τ, t_0)B(τ)r(τ) dτ             (4-523)

The vector η(t_0) is obtained from Eq. (4-519) by setting t = t_0, which gives η(t_0) = x(t_0). Thus, substituting Eq. (4-523) into Eq. (4-519), we have

    x(t) = φ(t, t_0)x(t_0) + φ(t, t_0) ∫_{t_0}^{t} φ^-1(τ, t_0)B(τ)r(τ) dτ   (4-524)

Since

    φ(t, t_0)φ^-1(τ, t_0) = φ(t, t_0)φ(t_0, τ) = φ(t, τ)            (4-525)

Eq. (4-524) becomes

    x(t) = φ(t, t_0)x(t_0) + ∫_{t_0}^{t} φ(t, τ)B(τ)r(τ) dτ         (4-526)

which is the state transition equation of Eq. (4-503).
Discrete Approximation of the Linear Time-Varying System

In practice, not many time-varying systems can be solved by using Eq. (4-526), since φ(t, t_0) is not readily available. It is possible to discretize the system with a time increment during which the time-varying parameters do not vary appreciably. Then the problem becomes that of solving a set of linear time-varying discrete state equations. One method of discretizing the system is to approximate the derivative of x(t) by

    dx(t)/dt ≅ (1/T){x[(k+1)T] - x(kT)},    kT ≤ t < (k+1)T         (4-527)

where T is a small time interval. The state equation of Eq. (4-503) is then approximated by the time-varying difference equation

    x[(k+1)T] = A*(kT)x(kT) + B*(kT)r(kT)                           (4-528)

over the time interval kT ≤ t < (k+1)T, where

    A*(kT) = T A(kT) + I
    B*(kT) = T B(kT)

Equation (4-528) can be solved recursively in much the same way as in the time-invariant case, Eqs. (4-433) through (4-437).
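The discretization of Eqs. (4-527) and (4-528) can be sketched on the time-varying linearized system of Eq. (4-424); the constant input and initial perturbation below are assumptions for illustration:

```python
import numpy as np

# Time-varying system (cf. Eq. (4-424)): Ax' = [[0,2],[0,0]] Ax + [0, 1-t]' Au
A = lambda t: np.array([[0.0, 2.0], [0.0, 0.0]])
B = lambda t: np.array([[0.0], [1.0 - t]])
r = lambda t: 1.0                      # assumed constant input

T = 1e-3                               # small discretization interval
x = np.array([[0.1], [0.0]])           # assumed initial perturbation
for k in range(1000):                  # march from t = 0 to t = 1
    t = k * T
    Astar = T * A(t) + np.eye(2)       # A*(kT) = T A(kT) + I
    Bstar = T * B(t)                   # B*(kT) = T B(kT)
    x = Astar @ x + Bstar * r(t)       # Eq. (4-528)

# Closed form for this A(t), B(t): x2(t) = t - t^2/2 and
# x1(t) = x1(0) + 2 * integral of x2, so x(1) = [0.1 + 2/3, 0.5]
assert np.allclose(x.ravel(), [0.1 + 2 / 3, 0.5], atol=5e-3)
```

The Euler scheme is only first-order accurate, hence the loose tolerance; shrinking T tightens the agreement proportionally.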
REFERENCES

State Variables and State Equations

1. L. A. Zadeh, "An Introduction to State Space Techniques," Workshop on Techniques for Control Systems, Proceedings, Joint Automatic Control Conference, Boulder, Colo., 1962.
2. B. C. Kuo, Linear Networks and Systems, McGraw-Hill Book Company, New York, 1967.
3. D. W. Wiberg, Theory and Problems of State Space and Linear Systems (Schaum's Outline Series), McGraw-Hill Book Company, New York, 1971.

State Transition Matrix

4. R. B. Kirchner, "An Explicit Formula for e^At," Amer. Math. Monthly, Vol. 74, pp. 1200-1204, 1967.
5. W. Everling, "On the Evaluation of e^At by Power Series," Proc. IEEE, Vol. 55, p. 413, Mar. 1967.
6. T. A. Bickart, "Matrix Exponential: Approximation by Truncated Power Series," Proc. IEEE, Vol. 56, pp. 872-873, May 1968.
7. T. M. Apostol, "Some Explicit Formulas for the Exponential Matrix e^At," Amer. Math. Monthly, Vol. 76, pp. 289-292, 1969.
8. M. Vidyasagar, "A Novel Method of Evaluating e^At in Closed Form," IEEE Trans. Automatic Control, Vol. AC-15, pp. 600-601, Oct. 1970.
9. C. G. Cullen, "Remarks on Computing e^At," IEEE Trans. Automatic Control, Vol. AC-16, pp. 94-95, Feb. 1971.
10. C. Johnson and C. L. Phillips, "An Algorithm for the Computation of the Integral of the State Transition Matrix," IEEE Trans. Automatic Control, Vol. AC-16, pp. 204-205, Apr. 1971.
11. M. Healey, "Study of Methods of Computing Transition Matrices," Proc. IEE, Vol. 120, No. 8, pp. 905-912, Aug. 1973.

Transformations

12. C. D. Johnson and W. M. Wonham, "A Note on the Transformation to Canonical (Phase-Variable) Form," IEEE Trans. Automatic Control, Vol. AC-9, pp. 312-313, July 1964.
13. I. H. Mufti, "On the Reduction of a System to Canonical (Phase-Variable) Form," IEEE Trans. Automatic Control, Vol. AC-10, pp. 206-207, Apr. 1965.
14. M. R. Chidambara, "The Transformation to (Phase-Variable) Canonical Form," IEEE Trans. Automatic Control, Vol. AC-10, pp. 492-495, Oct. 1965.
15. L. M. Silverman, "Transformation of Time-Variable Systems to Canonical (Phase-Variable) Form," IEEE Trans. Automatic Control, Vol. AC-11, pp. 300-303, Apr. 1966.
16. W. G. Tuel, Jr., "On the Transformation to (Phase-Variable) Canonical Form," IEEE Trans. Automatic Control, Vol. AC-11, p. 607, July 1966.
17. D. G. Luenberger, "Canonical Forms for Linear Multivariable Systems," IEEE Trans. Automatic Control, Vol. AC-12, pp. 290-293, June 1967.
18. S. J. Asseo, "Phase-Variable Canonical Transformation of Multicontroller Systems," IEEE Trans. Automatic Control, Vol. AC-13, pp. 129-131, Feb. 1968.
19. B. Ramaswami and K. Ramar, "Transformation to the Phase-Variable Canonical Form," IEEE Trans. Automatic Control, Vol. AC-13, pp. 746-747, Dec. 1968.
20. W. B. Rubin, "A Simple Method for Finding the Jordan Form of a Matrix," IEEE Trans. Automatic Control, Vol. AC-17, pp. 145-146, Feb. 1972.
21. K. Ogata, State Space Analysis of Control Systems, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1967.

State Diagram

22. B. C. Kuo, "State Transition Flow Graphs of Continuous and Sampled Dynamic Systems," WESCON Convention Records, 18.1, Aug. 1962.

Controllability and Observability

23. Y. C. Ho, "What Constitutes a Controllable System?" IRE Trans. Automatic Control, Vol. AC-7, p. 76, Apr. 1962.
24. R. E. Kalman, Y. C. Ho, and K. S. Narendra, "Controllability of Linear Dynamical Systems," Contributions to Differential Equations, Vol. 1, No. 2, pp. 189-213, 1962.
25. E. G. Gilbert, "Controllability and Observability in Multivariable Control Systems," J. SIAM Control, Vol. 1, pp. 128-151, 1963.
26. L. A. Zadeh and C. A. Desoer, Linear System Theory, McGraw-Hill Book Company, New York, 1963.
27. R. E. Kalman, "Mathematical Description of Linear Dynamical Systems," J. Soc. Ind. Appl. Math. Control, Ser. A, Vol. 1, No. 2, pp. 152-192, 1963.
28. E. Kreindler and P. E. Sarachik, "On the Concepts of Controllability and Observability of Linear Systems," IEEE Trans. Automatic Control, Vol. AC-9, pp. 129-136, Apr. 1964.
29. A. R. Stubberud, "A Controllability Criterion for a Class of Linear Systems," IEEE Trans. Application and Industry, Vol. 83, pp. 411-413, Nov. 1964.
30. R. W. Brockett, "Poles, Zeros, and Feedback: State Space Interpretation," IEEE Trans. Automatic Control, Vol. AC-10, pp. 129-135, Apr. 1965.
31. R. D. Bonnell, "An Observability Criterion for a Class of Linear Systems," IEEE Trans. Automatic Control, Vol. AC-11, p. 135, Jan. 1966.

CSMP (Continuous System Modeling Program)

32. System/360 Continuous System Modeling Program (360A-CX-16X) User's Manual, Technical Publications Dept., International Business Machines Corporation, White Plains, N.Y.
PROBLEMS

4.1. Write state equations for the electric networks shown in Fig. P4-1.

4.2. The following differential equations represent linear time-invariant systems. Write the dynamic equations (state equations and output equation) in vector-matrix form.

(a) d²c(t)/dt² + 4 dc(t)/dt + c(t) = r(t)
(b) 2 d³c(t)/dt³ + 3 d²c(t)/dt² + 5 dc(t)/dt + 2c(t) = r(t)
(c) d²c(t)/dt² + 6 dc(t)/dt + 5c(t) = dr(t)/dt + r(t)
(d) d²c(t)/dt² + 2 dc(t)/dt + c(t) + ∫₀ᵗ c(τ) dτ = r(t)
(e) d³c(t)/dt³ + 5 d²c(t)/dt² + 3 dc(t)/dt - c(t) = 2r(t)

4.3. Using Eq. (4-42), show that

    φ(t) = I + At + (1/2!)A²t² + (1/3!)A³t³ + ⋯

4.4. The state equations of a linear time-invariant system are represented by

    x'(t) = Ax(t) + Bu(t)

Find the state transition matrix φ(t) for the following cases:

(a) A = | 0   1 |    B = | 0 |
        | -1 -2 |        | 1 |

(b) A = | 0   1 |    B = | 0 |
        | -2 -3 |        | 1 |

(c) A = | -2  0 |    B = | 1 |
        |  0 -1 |        | 1 |

(d) A = | 0   1 |    B = | 1 |
        | -1 -1 |        | 0 |

4.5. Find the state transition equations for the systems described in Problem 4.4 for t > 0. It is assumed that x(0+) is given and u(t) is a unit step function.

4.6. Given the state equation

    x'(t) = Ax(t) + Bu(t)

where

    A = | 0   1 |    B = | 1 |
        | -1 -2 |        | 0 |

find the transformation x(t) = Py(t) such that the state equation becomes

    y'(t) = A₁y(t) + B₁u(t)

where A₁ and B₁ are in the phase-variable canonical form.

4.7. For the state equation given in Problem 4.6, if

    B = |  1 |
        | -1 |

can the state equation be transformed into the phase-variable form? Explain.

4.8. Given the state equations of a linear time-invariant system as

    x'(t) = Ax(t) + Bu(t)

where

    A = | 0   1 |    B = | 0 |
        | -2 -3 |        | 1 |

determine the transfer function relation between X(s) and U(s). Find the eigenvalues of A.

4.9.
For a linear time-invariant system whose state equations have coefficient matrices given by Eqs. (4-111) and (4-112) (phase-variable canonical form), show that

    adj(sI - A)B = [1  s  s² ⋯ s^(n-1)]ᵀ

and that the characteristic equation is

    s^n + a₁s^(n-1) + a₂s^(n-2) + ⋯ + a_(n-1)s + a_n = 0

4.10. A closed-loop control system is described by

    x'(t) = Ax(t) + Bu(t)
    u(t) = -Gx(t)

where x(t) = n-vector, u(t) = r-vector, A is n × n, B is n × r, and G is the r × n feedback matrix. Show that the roots of the characteristic equation are the eigenvalues of A - BG. Let

    A = | 0   1   0    0 |    B = | 0 |    G = [g₁  g₂  g₃  g₄]
        | 0   0   1    0 |        | 0 |
        | 0   0   0    1 |        | 0 |
        | 0  -2  -5  -10 |        | 1 |

Find the characteristic equation of the closed-loop system. Determine the elements of G so that the eigenvalues of A - BG are at -1, -2, -1 - j1, and -1 + j1. Can all the eigenvalues of A - BG be arbitrarily assigned for this problem?

4.11. A linear time-invariant system is described by the following differential equation:

    d²c(t)/dt² + 3 dc(t)/dt + 2c(t) = r(t)

(a) Find the state transition matrix φ(t).
(b) Let c(0) = 1, c'(0) = 0, and r(t) = u_s(t), the unit step function; find the state transition equations for the system.
(c) Determine the characteristic equation of the system and the eigenvalues.

4.12. A linear multivariable system is described by the following set of differential equations:

    d²c₁(t)/dt² + dc₁(t)/dt + 2c₁(t) - 2c₂(t) = r₁(t)
    d²c₂(t)/dt² + dc₂(t)/dt + c₂(t) + c₁(t) = r₂(t)

(a) Write the state equations of the system in vector-matrix form. Write the output equation in vector-matrix form.
(b) Find the transfer function between the outputs and the inputs of the system.

4.13. Given the state equation x'(t) = Ax(t), where

    A = |  σ  ω |
        | -ω  σ |

and σ and ω are real numbers.
(a) Find the state transition matrix.
(b) Find the eigenvalues of A.
(c) Find the eigenvectors of A.

4.14. Given the state equations of a linear system as x'(t) = Ax(t) + Bu(t), where

    A = |  0    1   0 |    B = | 0 |
        |  0    0   1 |        | 0 |
        | -6  -11  -6 |        | 1 |

The eigenvalues of A are λ₁ = -1, λ₂ = -2, λ₃ = -3. Find a transformation x(t) = Py(t) that will transform A into a diagonal matrix Λ = diag[λ₁  λ₂  λ₃].

4.15. Given a linear system with the state equations described by x'(t) = Ax(t) + Bu(t), where

    A = |   0    1    0 |    B = | 0 |
        |   0    0    1 |        | 0 |
        | -25  -35  -11 |        | 1 |

The eigenvalues are λ₁ = -1, λ₂ = -5, λ₃ = -5. Find the transformation x(t) = Py(t) so that A is transformed into the Jordan canonical form. The transformed state equations are

    y'(t) = Λy(t) + Γu(t)

Find Λ and Γ.

4.16. Draw state diagrams for the following systems:

(a) x'(t) = Ax(t) + Bu(t), with

    A = | -3   2   0 |    B = | 0 |
        | -1  -1   1 |        | 0 |
        | -5  -2  -1 |        | 1 |

(b) x'(t) = Ax(t) + Bu(t). Same A as in part (a), but with

    B = | 0 |
        | 1 |
        | 0 |

4.17. The block diagram of a feedback control system is shown in Fig. P4-17.

Figure P4-17.

(a) Write the dynamic equations of the system in vector-matrix form.
(b) Draw a state diagram for the system.
(c) Find the state transition equations for the system. Express the equations in matrix form. The initial states are represented by x(t₀), and the input r(t) is a unit step function, u_s(t - t₀), which is applied at t = t₀.

4.18. Draw state diagrams for the following transfer functions by means of direct decomposition:

(a) G(s) = 10/(s³ + 5s² + 4s + 10)
(b) G(s) = 6(s + 1)/[s(s + 1)(s + 3)]

Write the state equations from the state diagrams and express them in the phase-variable canonical form.

4.19. Draw state diagrams for the following systems by means of parallel decomposition:

(a) G(s) = 6(s + 1)/[s(s + 2)(s + 3)]
(b) d²c(t)/dt² + 6 dc(t)/dt + 5c(t) = 2r(t)

Write the state equations from the state diagrams and show that the states are decoupled from each other.

4.20. Draw state diagrams for the systems in Problem 4.19 by means of cascade decomposition.

4.21. Given the transfer function of a linear system,

    G(s) = 10(s + 1)/[(s + 2)²(s + 5)]

Draw state diagrams for the system using three different methods of decomposition. The state diagrams should contain a minimum number of integrators.

4.22. The state diagram of a linear system is shown in Fig. P4-22.

Figure P4-22.

(a) Assign the state variables and write the dynamic equations of the system.
(b) Determine the transfer function C(s)/R(s).

4.23. Draw state diagrams for the electric networks shown in Fig. P4-1.

4.24. The state diagram of a linear system is shown in Fig. P4-24.

Figure P4-24.

(a) Assign state variables on the state diagram; create additional nodes if necessary, as long as the system is not altered.
(b) Write the dynamic equations for the system.

4.25. Given the state equation x'(t) = Ax(t), where

    A = | -2   1   0 |
        |  0  -2   1 |
        |  0   0  -2 |

(a) Find the eigenvalues of A.
(b) Determine the state transition matrix.

4.26. Given the state equation x'(t) = Ax(t) + Bu(t), where

    A = |  0   1   0 |    B = | 0 |
        |  0   0   1 |        | 0 |
        | -2  -4  -3 |        | 1 |

The eigenvalues of A are λ₁ = -1, λ₂ = -1 - j1, λ₃ = -1 + j1. Find the transformation x(t) = Py(t) which transforms A into the modal form

    Λ = | -1   0   0 |
        |  0  -1   1 |
        |  0  -1  -1 |

4.27. Given the linear system

    x'(t) = Ax(t) + Bu(t)

where u(t) is generated by state feedback, u(t) = -Gx(t). The state transition matrix for the closed-loop system is

    φ(t) = e^((A-BG)t) = L⁻¹[(sI - A + BG)⁻¹]

Is the following relation valid?

    e^((A-BG)t) = e^(At) e^(-BGt)

where e^(At) = L⁻¹[(sI - A)⁻¹] and e^(-BGt) = L⁻¹[(sI + BG)⁻¹]. Explain your conclusions.

4.28. Determine the state controllability of the system shown in Fig. P4-28.

Figure P4-28.

(a) a = 1, b = 2, c = 2, and d = 1.
(b) Are there any nonzero values for a, b, c, and d such that the system is not completely state controllable?

4.29. Figure P4-29 shows the block diagram of a feedback control system. Determine the state controllability and observability of the system by the following methods, whenever applicable:

Figure P4-29.

(a) Conditions on the A, B, D, and E matrices.
(b) Transfer function.
(c) Coupling of states.

4.30. The transfer function of a linear system is given by

    C(s)/R(s) = (s + a)/(s³ + 6s² + 11s + 6)

(a) Determine the value of a so that the system is either uncontrollable or unobservable.
(b) Define the state variables so that one of them is uncontrollable.
(c) Define the state variables so that one of the states is unobservable.

4.31. Consider the system described by the state equation x'(t) = Ax(t) + Bu(t), where

    A = |  0  1 |    B = | 1 |
        | -1  a |        | b |

Find the region in the a-versus-b plane such that the system is completely controllable.

4.32. Draw the state diagram of a second-order system that is neither controllable nor observable.

4.33. Determine the conditions on b₁, b₂, d₁, and d₂ so that the following system is completely state controllable, output controllable, and observable:

    x'(t) = Ax(t) + Bu(t)        c(t) = Dx(t)

    A = | 1  1 |    B = | b₁ |    D = [d₁  d₂]
        | 0  1 |        | b₂ |

4.34. The block diagram of a simplified control system for the Large Space Telescope (LST) is shown in Fig. P4-34. For simulation and control purposes, it would be desirable to represent the system by state equations and a state diagram.

Figure P4-34. Block diagram of the LST control system (gimbal controller, control-moment-gyro dynamics, and vehicle dynamics).

(a) Draw a state diagram for the system and write the state equations in vector-matrix form.
(b) Find the characteristic equation of the system.
(c) A modern control design scheme, called state feedback, utilizes the concept of feeding back every state variable through a constant gain. In this case the control law is described by

    e = r - g₁x₁ - g₂x₂ - g₃x₃ - g₄x₄

Find the values of g₁, g₂, g₃, and g₄ such that the eigenvalues of the overall system are at s = -100, -200, -1 + j1, -1 - j1. The system parameters are given as H = 600, K_I = 9700, J_G = 2, J_v = 10⁵, K_P = 216, and K_N = 300. All units are consistent.

4.35. The difference equation of a linear discrete-data system is given by

    c[(k + 2)T] + 0.5c[(k + 1)T] + 0.1c(kT) = 1

(a) Write the state equations for the system.
(b) The initial conditions are given as c(0) = 1 and c(T) = 0. Find c(kT) for k = 2, 3, …, 10 by means of recursion. Can you project the final value of c(kT) from the recursive results?

4.36. Given the discrete-data state equations

    x₁(k + 1) = 0.1x₂(k)
    x₂(k + 1) = -x₁(k) + 0.7x₂(k) + r(k)

find the state transition matrix φ(k).

4.37. A discrete-data system is characterized by the transfer function

    C(z)/R(z) = Kz/[(z - 1)(z² - z + 3)]

(a) Draw a state diagram for the system.
(b) Write the dynamic equation for the system in vector-matrix form.

4.38. The block diagram of a discrete-data control system is shown in Fig. P4-38.

Figure P4-38. Discrete-data system with sampler, zero-order hold, and process 20/[s(s + 0.5)(s + 0.2)].

(a) Draw a state diagram for the system.
(b) Write the state equations in vector-matrix form,

    x[(k + 1)T] = φ(T)x(kT) + B(T)r(kT)

(c) Find φ(T) when T = 0.1 sec.
5  Mathematical Modeling of Physical Systems

5.1 Introduction
One of the most important tasks in the analysis and design of control systems is the mathematical modeling of the systems. In the preceding chapters we have introduced a number of well-known methods of modeling linear systems. The two most common methods are the transfer function approach and the state-variable approach. However, in reality most physical systems have nonlinear characteristics to some extent. A physical system may be portrayed by a linear mathematical model only if the true characteristics and the range of operation of the system justify the assumption of linearity.

Although the analysis and design of linear control systems have been well developed, their counterparts for nonlinear systems are usually quite complex. Therefore, the control systems engineer often has the task of determining not only how to accurately describe a system mathematically, but, more important, how to make proper assumptions and approximations, whenever necessary, so that the system may be adequately characterized by a linear mathematical model.

It is important to point out that the modern control engineer should place special emphasis on the mathematical modeling of the system so that the analysis and design problems can be adapted for computer solutions. Therefore, the main objectives of this chapter are:

1. To demonstrate the mathematical modeling of control systems and components.
2. To demonstrate how the modeling will lead to computer solutions.
The modeling of many system components and control systems will be illustrated in this chapter. However, the emphasis is placed on the approach to the problem, and no attempt is made to cover all possible types of systems encountered in practice.
5.2 Equations of Electrical Networks

The classical ways of writing network equations of an electrical network are the loop method and the node method, which are formulated from the two laws of Kirchhoff. However, although the loop and node equations are easy to write, they are not natural for computer solutions. A more modern method of writing network equations is the state-variable method. We shall treat the subject briefly in this section. More detailed discussions on the state equations of electrical networks may be found in texts on network theory.1,2

Let us use the RLC network of Fig. 5-1 to illustrate the basic principle of writing state equations for electric networks.

Fig. 5-1. RLC network.
It is relatively simple to write the loop equation of this network:

    e(t) = L d²q(t)/dt² + R dq(t)/dt + (1/C)q(t)                       (5-1)

where q(t) is the electric charge and is related to the current i(t) by

    q(t) = ∫₀ᵗ i(τ) dτ                                                 (5-2)

It is shown in Chapter 4 that the second-order differential equation in Eq. (5-1) can be replaced by two first-order differential equations, called the state equations. In this case it is convenient to define the state variables as

    x₁(t) = q(t)/C = e_c(t)        x₂(t) = dq(t)/dt = i(t)             (5-3)

where e_c(t) is the voltage across the capacitor and

    q(t) = Ce_c(t)                                                     (5-4)

Substituting Eqs. (5-3) and (5-4) into Eq. (5-1) yields

    e(t) = L dx₂(t)/dt + Rx₂(t) + x₁(t)                                (5-5)

Thus, from Eqs. (5-4) and (5-5), the state equations of the network are

    dx₁(t)/dt = (1/C)x₂(t)                                             (5-6)
    dx₂(t)/dt = -(1/L)x₁(t) - (R/L)x₂(t) + (1/L)e(t)                   (5-7)
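In matrix form, Eqs. (5-6) and (5-7) correspond to A = [[0, 1/C], [-1/L, -R/L]] and B = [0, 1/L]ᵀ. A quick numerical check — an illustrative sketch assuming NumPy, with arbitrary component values — is to verify that for a constant source e(t) = E the state settles at the expected DC solution, e_c = E and i = 0:

```python
import numpy as np

# State vector x = [e_c, i]; R, L, C are arbitrary illustrative values.
R, L, C, E = 1.0, 1.0, 1.0, 1.0
A = np.array([[0.0, 1.0 / C],
              [-1.0 / L, -R / L]])
B = np.array([0.0, 1.0 / L])

# Euler-integrate dx/dt = A x + B e(t) with a constant source.
x = np.zeros(2)
dt = 1e-3
for _ in range(30000):          # 30 seconds, long past settling
    x = x + dt * (A @ x + B * E)
# At DC the inductor current is zero and the capacitor holds the source voltage.
```

The fixed point of the iteration satisfies Ax + B·E = 0, which is exactly the DC operating point of the network.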
A more direct way of arriving at the state equations is to assign the current in the inductor L, i(t), and the voltage across the capacitor C, e_c(t), as the state variables, and then to write the state equations by equating the current in C and the voltage across L in terms of the state variables and the input source. In this way the state equations are written by inspection from the network:

    Current in C:        C de_c(t)/dt = i(t)                           (5-8)
    Voltage across L:    L di(t)/dt = -e_c(t) - Ri(t) + e(t)           (5-9)

Since x₁(t) = e_c(t) and x₂(t) = i(t), it is apparent that these state equations are identical to those of Eqs. (5-6) and (5-7).

In general, it is appropriate to assign the voltages across the capacitors and the currents in the inductors as state variables in an electric network, although there are exceptions.1,2 One must recognize that the basic laws used in writing state equations for electric networks are still Kirchhoff's laws. Although the state equations in Eqs. (5-8) and (5-9) are arrived at by inspection, in general the inspection method does not always work, especially for complicated networks. However, a general method using the theory of linear graphs of network analysis is available.1

Example 5-1. As another example of writing the state equations of an electric network, consider the network shown in Fig. 5-2. According to the foregoing discussion, the voltage across the capacitor, e_c, and the currents in the inductors, i₁ and i₂, are assigned as state variables, as shown in Fig. 5-2.

Fig. 5-2. Network in Example 5-1.
^p- = -RMi) ~ e {t) + c
L2
dh(t) dt
-Rihit)
+ e (t) c
e(.t)
(5-10)
(5-11)
190
/
Mathematical Modeling of Physical Systems
Chap. 5
c^ = m-m Rearranging the constant
coefficients, the state
(5-12)
equations are written in the following
canonical form: 1
_*i
diAtY]
o
-+-
u
dt
-£
dhU) dt
hit)
5.3
+
h
e(t)
(5-13)
edt)
c
C
1
1
de c (t) dt
hit)
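The coefficient matrices in Eq. (5-13) can be assembled directly from the element values. The sketch below is illustrative (it assumes NumPy, with arbitrarily chosen component values); it builds the matrices and checks the DC steady state of the network, where both inductor currents equal e/(R₁ + R₂) and the capacitor voltage equals R₂e/(R₁ + R₂):

```python
import numpy as np

R1, R2, L1, L2, C = 1.0, 1.0, 1.0, 1.0, 1.0

# State vector x = [i1, i2, e_c], per Eq. (5-13).
A = np.array([[-R1 / L1, 0.0, -1.0 / L1],
              [0.0, -R2 / L2, 1.0 / L2],
              [1.0 / C, -1.0 / C, 0.0]])
B = np.array([1.0 / L1, 0.0, 0.0])

# DC steady state: set dx/dt = 0 and solve A x = -B e for a unit source e = 1.
x_ss = np.linalg.solve(A, -B * 1.0)
```

With R₁ = R₂ = 1 ohm and e = 1 V, the expected steady state is i₁ = i₂ = 0.5 A and e_c = 0.5 V.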
Modeling of Mechanical System Elements 3
Most feedback
From
ponents.
control systems contain mechanical as well as electrical
comand
a mathematical viewpoint, the descriptions of electrical
mechanical elements are analogous. In fact, we can show that given an electrical device, there is usually an analogous mechanical counterpart, and vice versa. The analogy, of course, is a mathematical one that is, two systems are analo;
they are described mathematically by similar equations. The motion of mechanical elements can be described in various dimensions as translational, rotational, or a combination of both. The equations governing the motions of mechanical systems are often directly or indirectly formulated
gous to each other
if
from Newton's law of motion. Translational
Motion
The motion of translation is defined as a motion that takes place along a The variables that are used to describe translational motion are acceleration, velocity, and displacement. Newton's law of motion states that the algebraic sum offorces acting on a rigid body in a given direction is equal to the product of the mass of the body and its acceleration in the same direction. The law can be expressed as straight line.
2
forces
= Ma
(5-14)
M
denotes the mass and a is the acceleration in the direction considered. where For translational motion, the following elements are usually involved: 1.
Mass: Mass is considered as an indication of the property of an element which stores the kinetic energy of translational motion. It is denotes the analogous to inductance of electrical networks. If
W
weight of a body, then
M
is
given by
M where g
is
the acceleration of the
tion of free (5-14)
and
fall.
Three consistent
(5-15) are as follows:
W
(5-15)
g
body due sets
to gravity of the acceleraof units for the elements in Eqs.
:
Modeling of Mechanical System Elements
Sec. 5.3
Mass
Units
M
Weight
Acceleration
Force
m/sec 2
MKS
newtons/m/sec 2
CGS
dynes/cm/sec 2
newton dyne
cm/sec 2
newton dyne
British
lb/ft/sec 2 (slug)
lb
ft/sec 2
lb
Figure 5-3 illustrates the situation where a force
y(t)
on a body with mass M. The force equation
acting
M
fit)
Ma(t)
= M
M-dv(t) r
is
where
y(t) represents displacement, v(t) the velocity,
the acceleration,
is
(5-16)
dt
Force-mass system.
a{t)
is
written fit)
Fig. 5-3.
W
191
/
all
and
referenced in the direction of the
applied force. 2.
Linear spring
:
A linear spring in practice may be an actual spring or
the compliance of a cable or a belt. In general, a spring to be
an element that
stores potential energy. It
capacitor in electric networks. In practice,
some
springs are nonlinear
all
However, if the deformation of a spring behavior may be approximated by a linear relationship, to
extent.
f{t)
where
K
is
=
considered
is
analogous to a
is
small,
is
Ky{t)
its
(5-17)
the spring constant, or simply stiffness.
The
three unit
systems for the spring constant are Units
K
MKS CGS
newtons/m dynes/cm
British
lb/ft
Equation (5-17) implies that the force acting on the spring
is
directly
proportional to the displacement (deformation) of the
The model representing a shown in Fig. 5-4.
spring. y{t)
K -
is
AAAAA/v-
fit)
If the spring
is
linear spring element
preloaded with a preload tension of
T, then Eq. (5-17) should be modified to
f{t)-T=Ry{t)
Fig. 5-4. Force-spring system
Friction for translational motion.
Whenever
there
is
of motion between two elements, frictional forces exist.
(5-18)
motion or tendency
The
frictional forces
encountered in physical systems are usually of a nonlinear nature. The characteristics of the frictional forces between two contacting surfaces often depend
on such factors as the composition of the surfaces, the pressure between the surfaces, their relative velocity, and others, so that an exact mathematical description of the frictional force is difficult. However, for practical purposes, frictional forces
can be divided into three basic catagories: viscous
friction,
:
:
192
/
Chap. 5
Mathematical Modeling of Physical Systems
static friction,
and Coulomb
friction.
These are discussed separately in detail
in
the following.
1.
Viscous friction. Viscous friction represents a retarding force that
schematic diagram element for friction sented
y(t)
Dashpot
fit)
f(f)
where
for viscous friction.
B
sions of
=
friction
often repre-
The
is
B®&
(5-19)
the viscous frictional coefficient.
is
B
is
as that shov/n in Fig. 5-5.
mathematical expression of viscous
H Fig. 5-5.
by a dashpot such
is
The
a linear relationship between the applied force and velocity.
The dimen-
in the three unit systems are as follows
B
Units
MKS
newton/m/sec
CGS
dyne/cm/sec
British
lb/ft/sec
Figure 5-6(a) shows the functional relation between the viscous
and
frictional force 2.
velocity.
Static friction. Static friction represents a retarding force that tends
to prevent
motion from beginning. The
static frictional force
can be
represented by the following expression f{t)
where (^)^,
when
the
is
body
of the friction
=
±(F,),_
(5-20)
defined as the static frictional force that exists only
is
stationary but has a tendency of moving.
The
depends on the direction of motion or the
direction of velocity.
The
force-velocity relation of static friction
illustrated in Fig. 5-6(b). Notice that once motion begins, the
and other frictions take over. Coulomb friction. Coulomb friction is a retarding force that has a f
f + F. Slope =
B
(c)
(b)
Fig. 5-6. Functional relationships oflinear (a)
is
static
frictional force vanishes, 3.
sign
initial
Viscous
friction, (b) Static friction, (c)
and nonlinear frictional
Coulomb
friction.
forces,
Sec. 5.3
Modeling of Mechanical System Elements
/
193
constant amplitude with respect to the change in velocity, but the sign of the frictional force changes with the reversal of the direction of velocity. The mathematical relation for the Coulomb friction is given by <5 - 2,)
/w-'-GF/ISD where
F
c
is
Coulomb
the
The functional descripshown in Fig. 5-6(c).
friction coefficient.
tion of the friction to velocity relation
is
Rotational Motion
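In simulation, the three friction categories of Eqs. (5-19) through (5-21) are often combined into a single velocity-dependent force. A minimal sketch follows; the coefficients are assumed illustrative values, and the static term is handled with the usual zero-velocity test:

```python
def friction_force(v, B=0.1, Fs=0.5, Fc=0.3, applied=0.0):
    """Total retarding friction force for velocity v.

    B  : viscous coefficient, Eq. (5-19)
    Fs : static friction level, Eq. (5-20), active only at v = 0
    Fc : Coulomb friction level, Eq. (5-21)
    """
    if v == 0.0:
        # Static friction opposes the applied force, saturating at +/-Fs.
        sign = 1.0 if applied > 0 else (-1.0 if applied < 0 else 0.0)
        return -min(abs(applied), Fs) * sign
    sign = 1.0 if v > 0 else -1.0
    return -(B * v + Fc * sign)   # viscous + Coulomb, opposing motion
```

The sign conventions follow Fig. 5-6: the force always opposes motion, and at rest it cancels the applied force until the static level is exceeded.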
Rotational Motion

The rotational motion of a body may be defined as motion about a fixed axis. The variables generally used to describe the motion of rotation are torque, T; angular acceleration, α; angular velocity, ω; and angular displacement, θ. The following elements are usually involved with the rotational motion.

Inertia. Inertia, J, is considered as an indication of the property of an element which stores the kinetic energy of rotational motion. The inertia of a given element depends on its geometric composition about the axis of rotation and its density. For instance, the inertia of a circular disk or a circular shaft about its geometric axis is

    J = (1/2)Mr² = Wr²/2g                                              (5-22)

where M is the mass of the disk or shaft and r is its radius.

Example 5-2. Given a disk that is 2 in. in diameter, 0.25 in. thick, and weighing 5 oz, its inertia is

    J = Wr²/2g = (5 oz)(1 in)²/(2 × 386 in/sec²) = 0.00647 oz-in-sec²  (5-23)

Usually the density of the material is given in weight per unit volume. Then, for a circular disk or shaft, it can be shown that the inertia is proportional to the fourth power of the radius and the first power of the thickness or length. Therefore, if the weight is expressed as

    W = ρ(πr²h)                                                        (5-24)

where ρ is the density in weight per unit volume, r the radius, and h the thickness or length, then Eq. (5-22) is written

    J = ρπhr⁴/2g = 0.00406ρhr⁴                                         (5-25)

where h and r are in inches. For steel, ρ is 4.53 oz/in³, and Eq. (5-25) becomes

    J = 0.0184hr⁴                                                      (5-26)

For aluminum, ρ is 1.56 oz/in³, and Eq. (5-25) becomes

    J = 0.00636hr⁴                                                     (5-27)

When a torque is applied to a body with inertia J, as shown in Fig. 5-7, the torque equation is written

    T(t) = Jα(t) = J dω(t)/dt = J d²θ(t)/dt²                           (5-28)

Fig. 5-7. Torque-inertia system.

The three generally used sets of units for the quantities in Eq. (5-28) are tabulated as follows:

    Units     Inertia                                 Torque            Angular Displacement
    MKS       kg-m²                                   newton-m          radian
    CGS       g-cm²                                   dyne-cm           radian
    English   slug-ft², lb-ft-sec², or oz-in-sec²     lb-ft or oz-in    radian

The following conversion factors are often found useful:

Angular displacement
    1 rad = 180°/π = 57.3°

Angular velocity
    1 rpm = 2π/60 = 0.1047 rad/sec
    1 rpm = 6 deg/sec

Torque
    1 g-cm = 0.0139 oz-in
    1 lb-ft = 192 oz-in
    1 oz-in = 0.00521 lb-ft

Inertia
    1 g-cm² = 1.417 × 10⁻⁵ oz-in-sec²
    1 lb-ft-sec² = 192 oz-in-sec²
    1 oz-in-sec² = 386 oz-in²
    1 g-cm-sec² = 980 g-cm²
    1 lb-ft-sec² = 32.2 lb-ft²

Torsional spring. As with the linear spring for translational motion, a torsional spring constant K, in torque per unit angular displacement, can be devised to represent the compliance of a rod or a shaft when it is subject to an applied torque. Figure 5-8 illustrates a simple torque-spring system that can be represented by the equation

    T(t) = Kθ(t)                                                       (5-29)

Fig. 5-8. Torque-torsional spring system.

The dimension for K is given in the following units:

    Units     K
    MKS       newton-m/rad
    CGS       dyne-cm/rad
    British   oz-in/rad

If the torsional spring is preloaded by a preload torque of TP, Eq. (5-29) is modified to

    T(t) - TP = Kθ(t)                                                  (5-30)

Friction for rotational motion. The three types of friction described for translational motion can be carried over to the motion of rotation. Therefore, Eqs. (5-19), (5-20), and (5-21) are replaced, respectively, by their counterparts:

    T(t) = B dθ(t)/dt                                                  (5-31)
    T(t) = ±F_s        (dθ/dt = 0)                                     (5-32)
    T(t) = F_c (dθ(t)/dt)/|dθ(t)/dt|                                   (5-33)

where B is the viscous frictional coefficient in torque per unit angular velocity, F_s is the static friction, and F_c is the Coulomb friction coefficient.
Relation Between Translational and Rotational Motions

In motion control problems it is often necessary to convert rotational motion into a translational one. For instance, a load may be controlled to move along a straight line through a rotary motor and screw assembly, such as that shown in Fig. 5-9. Figure 5-10 shows a similar situation in which a rack and pinion is used as the mechanical linkage. Another common system in motion control is the control of a mass through a pulley by a rotary prime mover, such as that shown in Fig. 5-11.

The systems shown in Figs. 5-9, 5-10, and 5-11 can all be represented by a simple system with an equivalent inertia connected directly to the drive motor. For instance, the mass in Fig. 5-11 can be regarded as a point mass which moves about the pulley, which has a radius r. Disregarding the inertia of the pulley, the equivalent inertia that the motor sees is

    J = Mr² = (W/g)r²                                                  (5-34)

If the radius of the pinion in Fig. 5-10 is r, the equivalent inertia which the motor sees is also given by Eq. (5-34).

Now consider the system of Fig. 5-9. The lead of the screw, L, is defined as the linear distance which the mass travels per revolution of the screw. In principle, the two systems in Figs. 5-9 and 5-10 are equivalent; in Fig. 5-10, the distance traveled by the mass per revolution of the pinion is 2πr. Therefore, using Eq. (5-34), the equivalent inertia for the system of Fig. 5-9 is

    J = (W/g)(L/2π)²                                                   (5-35)

where, in the British system,

    J = inertia (oz-in-sec²)
    W = weight (oz)
    L = screw lead (in)
    g = gravitational acceleration (386.4 in/sec²)
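Equations (5-34) and (5-35) are easy to evaluate numerically. The small sketch below is illustrative only, with assumed numerical values in the British units listed above:

```python
import math

def screw_equivalent_inertia(W_oz, lead_in, g=386.4):
    """Equivalent inertia seen by the motor for a mass driven through
    a lead screw, Eq. (5-35): J = (W/g) * (L / (2*pi))**2."""
    return (W_oz / g) * (lead_in / (2.0 * math.pi)) ** 2

def pulley_equivalent_inertia(W_oz, r_in, g=386.4):
    """Equivalent inertia for a mass driven through a pulley of
    radius r, Eq. (5-34): J = (W/g) * r**2."""
    return (W_oz / g) * r_in ** 2

# A 386.4-oz weight on a screw with a 2*pi-in lead sees J = 1 oz-in-sec^2.
J = screw_equivalent_inertia(386.4, 2.0 * math.pi)
```

The two helpers make the equivalence between the screw and pulley arrangements explicit: a lead of L corresponds to an effective radius of L/2π.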
Mechanical Energy and Power
Energy and power play an important role
form of
in the design of electromechanical
and potential energy controls the dynamics of the system, whereas dissipative energy usually is spent in the form of heat, which must be closely controlled. The mass or inertia of a body indicates its ability to store kinetic energy. The kinetic energy of a moving mass with a velocity v is systems. Stored energy in the
kinetic
Wk = \Mv The following tion:
r
(5-36)
consistent sets of units are given for the kinetic energy rela-
Sec. 5.3
Modeling of Mechanical System Elements
Energy
Units
Mass
/
197
Velocity
MKS
joule or
newton/m/sec 2
m/sec
CGS
newton-m dyne-cm
dyne-cm-sec 2
cm/sec
British
ft-lb
lb/ft/sec 2
ft/sec
(slug)
For a rotational system, the
kinetic energy relation
W = ya>
is
written
2
(5-37)
k
where /
the
is
moment
of intertia and
co the
angular velocity. The following
units are given for the rotational kinetic energy:
Energy
Units
Angular Velocity
Inertia
MKS
joule or
CGS
newton-m dyne-cm
gm-cm 2
rad/sec
British
oz-in
oz-in-sec 2
rad/sec
kg-m 2
rad/sec
Potential energy stored in a mechanical element represents the amount of to change the configuration. For a linear spring that is deformed
work required by y
in length, the potential energy stored in the spring
is
W = \Ky*
(5-38)
p
where
K
stored
is
the spring constant. For a torsional spring, the potential energy given by is
Wp = \KQ*
(5-39)
When dealing with a frictional element, the form of energy differs from the previous two cases in that the energy represents a loss or dissipation by the system in overcoming the frictional force. Power is the time rate of doing work. Therefore, the power dissipated in a frictional element is the product of force and velocity; that is,

    P = fv    (5-40)

Since f = Bv, where B is the frictional coefficient, Eq. (5-40) becomes

    P = Bv²    (5-41)

The MKS unit for power is newton-m/sec or watt; for the CGS system it is dyne-cm/sec. In the British unit system, power is represented in ft-lb/sec or horsepower (hp). Furthermore,

    1 hp = 746 watt = 550 ft-lb/sec    (5-42)

Since power is the rate at which energy is being dissipated, the energy dissipated in a frictional element is

    Wd = B ∫ v² dt    (5-43)
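The energy and power relations of Eqs. (5-36) through (5-43) are simple enough to check numerically. The Python sketch below is illustrative; the function names are our own, and the trapezoidal integration of Eq. (5-43) is one possible discretization:

```python
def kinetic_energy_trans(M, v):        # Eq. (5-36): Wk = 1/2 M v^2
    return 0.5 * M * v**2

def kinetic_energy_rot(J, omega):      # Eq. (5-37): Wk = 1/2 J w^2
    return 0.5 * J * omega**2

def spring_energy(K, y):               # Eq. (5-38): Wp = 1/2 K y^2
    return 0.5 * K * y**2

def friction_power(B, v):              # Eq. (5-41): P = B v^2
    return B * v**2

def dissipated_energy(B, velocities, dt):
    """Energy dissipated in friction, Eq. (5-43), by trapezoidal
    integration of B*v(t)^2 over equally spaced velocity samples."""
    total = 0.0
    for v0, v1 in zip(velocities, velocities[1:]):
        total += 0.5 * (B * v0**2 + B * v1**2) * dt
    return total
```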
Gear Trains, Levers, and Timing Belts

A gear train, a lever, or a timing belt over pulleys is a mechanical device that transmits energy from one part of a system to another in such a way that force, torque, speed, and displacement are altered. These devices may also be regarded as matching devices used to attain maximum power transfer. Two gears are shown coupled together in Fig. 5-12. The inertia and friction of the gears are neglected in the ideal case considered.
The relationships between the torques T₁ and T₂, the angular displacements θ₁ and θ₂, and the teeth numbers N₁ and N₂ of the gear train are derived from the following facts:

1. The number of teeth on the surface of the gears is proportional to the radii r₁ and r₂ of the gears; that is,

    r₁N₂ = r₂N₁    (5-44)

2. The distance traveled along the surface of each gear is the same. Therefore,

    θ₁r₁ = θ₂r₂    (5-45)

3. The work done by one gear is equal to that of the other since there is assumed to be no loss. Thus

    T₁θ₁ = T₂θ₂    (5-46)
Fig. 5-12. Gear train.    Fig. 5-13. Gear train with friction and inertia.

If the angular velocities of the two gears, ω₁ and ω₂, are brought into the picture, Eqs. (5-44) through (5-46) lead to

    T₁/T₂ = θ₂/θ₁ = ω₂/ω₁ = N₁/N₂ = r₁/r₂    (5-47)
In practice, real gears do have inertia, and friction between the coupled gear teeth often cannot be neglected. An equivalent representation of a gear train with viscous friction, Coulomb friction, and inertia considered as lumped elements is shown in Fig. 5-13. The following variables and parameters are defined for the gear train:

    T = applied torque
    θ₁, θ₂ = angular displacements
    T₁, T₂ = torque transmitted to gears
    J₁, J₂ = inertia of gears
    N₁, N₂ = number of teeth
    Fc1, Fc2 = Coulomb friction coefficients
    B₁, B₂ = viscous frictional coefficients
The torque equation for gear 2 is

    T₂(t) = J₂ d²θ₂/dt² + B₂ dθ₂/dt + Fc2 (dθ₂/dt)/|dθ₂/dt|    (5-48)

The torque equation on the side of gear 1 is

    T(t) = J₁ d²θ₁/dt² + B₁ dθ₁/dt + Fc1 (dθ₁/dt)/|dθ₁/dt| + T₁(t)    (5-49)

By the use of Eq. (5-47), Eq. (5-48) is converted to

    T₁(t) = (N₁/N₂)T₂(t) = (N₁/N₂)² J₂ d²θ₁/dt² + (N₁/N₂)² B₂ dθ₁/dt + (N₁/N₂) Fc2 (dθ₂/dt)/|dθ₂/dt|    (5-50)
Equation (5-50) indicates that it is possible to reflect inertia, friction (and compliance), torque, speed, and displacement from one side of a gear train to the other. Therefore, the following quantities are obtained when reflecting from gear 2 to gear 1:

    Inertia:                         (N₁/N₂)² J₂
    Viscous frictional coefficient:  (N₁/N₂)² B₂
    Torque:                          (N₁/N₂) T₂
    Angular displacement:            (N₂/N₁) θ₂
    Angular velocity:                (N₂/N₁) ω₂
    Coulomb frictional torque:       (N₁/N₂) Fc2 (ω₂/|ω₂|)

If a torsional spring effect were present, the spring constant is also multiplied by (N₁/N₂)² in reflecting from gear 2 to gear 1. Now, substituting Eq. (5-50) into Eq. (5-49), we get

    T(t) = J₁e d²θ₁/dt² + B₁e dθ₁/dt + T_F    (5-51)

where

    J₁e = J₁ + (N₁/N₂)² J₂    (5-52)

    B₁e = B₁ + (N₁/N₂)² B₂    (5-53)

    T_F = Fc1 (dθ₁/dt)/|dθ₁/dt| + (N₁/N₂) Fc2 (dθ₂/dt)/|dθ₂/dt|    (5-54)
Example 5-3

Given a load that has inertia of 0.05 oz-in-sec² and a Coulomb friction torque of 2 oz-in, find the inertia and frictional torque reflected through a 1:5 gear train (N₁/N₂ = 1/5, with N₂ on the load side). The reflected inertia on the side of N₁ is (1/5)² × 0.05 = 0.002 oz-in-sec². The reflected Coulomb friction torque is (1/5) × 2 = 0.4 oz-in.
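The reflection rules above, and the numbers of Example 5-3, can be verified with a short Python sketch (function and parameter names are illustrative):

```python
def reflect_to_gear1(n_ratio, J2=0.0, B2=0.0, T2=0.0, Fc2=0.0):
    """Reflect load-side quantities to the drive side of an ideal gear train.
    n_ratio = N1/N2. Inertia and viscous friction scale by (N1/N2)**2;
    torque and Coulomb friction torque scale by N1/N2, per Eq. (5-50)."""
    return {
        "inertia": n_ratio**2 * J2,
        "viscous": n_ratio**2 * B2,
        "torque": n_ratio * T2,
        "coulomb": n_ratio * Fc2,
    }

# Example 5-3: N1/N2 = 1/5, load inertia 0.05 oz-in-sec^2,
# Coulomb friction torque 2 oz-in
r = reflect_to_gear1(1 / 5, J2=0.05, Fc2=2.0)
```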
Timing belts and chain drives serve the same purposes as the gear train except that they allow the transfer of energy over a longer distance without using an excessive number of gears. Figure 5-14 shows the diagram of a belt or chain drive between two pulleys. Assuming that there is no slippage between the belt and the pulleys, it is easy to see that Eq. (5-47) still applies to this case. In fact, the reflection or transmittance of torque, inertia, friction, etc., is similar to that of a gear train.

Fig. 5-14. Belt or chain drive.    Fig. 5-15. Lever system.
The lever system shown in Fig. 5-15 transmits translational motion and force in the same way that gear trains transmit rotational motion. The relation between the forces and distances is

    f₁/f₂ = l₂/l₁ = x₂/x₁    (5-55)
Backlash and Dead Zone

Backlash and dead zone usually play an important role in gear trains and similar mechanical linkages. In a great majority of situations, backlash may give rise to undesirable oscillations and instability in control systems. In addition, it has a tendency to wear down the mechanical elements. Regardless of the actual mechanical elements, a physical model of backlash or dead zone between an input and an output member is shown in Fig. 5-16. The model can be used for a rotational system as well as for a translational system. The amount of backlash is b/2 on either side of the reference position.

Fig. 5-16. Physical model of backlash between two mechanical elements.
In general, the dynamics of the mechanical linkage with backlash depend upon the relative inertia-to-friction ratio of the output member. If the inertia of the output member is very small compared with that of the input member, the motion is controlled predominantly by friction. This means that the output member will not coast whenever there is no contact between the two members. When the output is driven by the input, the two members will travel together until the input member reverses its direction; then the output member will stand still until the backlash is taken up on the other side, at which time it is assumed that the output member instantaneously takes on the velocity of the input member. The transfer characteristic between the input and the output displacements of a backlash element with negligible output inertia is shown in Fig. 5-17.

Fig. 5-17. Input-output characteristic of backlash with negligible output inertia.

To illustrate the relative motion between the input and the output
members,
let
us assume that the input displacement
is
driven sinusoidally with
respect to time. The displacements and velocities of the input and output members are illustrated in Fig. 5-18. Note that the reference position of the two members is taken to be that of Fig. 5-16, that is, with the input member starting at the center of the total backlash. For Fig. 5-18, it is assumed that when motion begins, the input member is in contact with the output member on the right, so that x(0) = 0 and y(0) = −b/2.

Fig. 5-18. Displacement and velocity waveforms of input and output members of a backlash element with a sinusoidal input displacement.

At the other extreme, if the friction on the output member is so small that it may be neglected, the output member, because of its inertia, remains in contact with the input member as long as the acceleration is in the direction to keep the two members together. When the acceleration of the input member becomes zero, the output member does not stop immediately but leaves the input member and coasts at a constant velocity that is equal to the maximum velocity attained by the input member. When the output member has traversed a distance, relative to the input member, equal to the full width of the backlash, it will be restrained by the opposite side of the input member. At that time the output member will again assume the velocity of the input member. The transfer characteristic between the input and the output displacement of a backlash element with negligible output friction is shown in Fig. 5-19. The displacement, velocity, and acceleration waveforms of the input and output members, when the input displacement is driven sinusoidally, are shown in Fig. 5-20.

Fig. 5-19. Input-output displacement characteristic of a backlash element without friction.
In practice, of course, the output member of a mechanical linkage with backlash usually has friction as well as inertia. Then the output waveforms in response to a sinusoidally driven input displacement should lie between those of Figs. 5-18 and 5-20.
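The friction-dominated backlash behavior described above (the output stands still until the gap is taken up, then moves with the input) can be sketched in Python; the sampling scheme and names are our own assumptions:

```python
import math

def backlash_friction_controlled(x_samples, b, y0):
    """Output of a backlash element of total width b whose output member is
    friction-dominated (no coasting): the output stays put until the input
    member contacts one side of the gap, then moves with it (Fig. 5-17)."""
    y = y0
    out = []
    for x in x_samples:
        if x - y > b / 2:       # input pushing on the right side of the gap
            y = x - b / 2
        elif y - x > b / 2:     # input pushing on the left side of the gap
            y = x + b / 2
        out.append(y)
    return out

# Sinusoidal input starting in contact on the right: x(0) = 0, y(0) = -b/2
b = 0.2
xs = [math.sin(2 * math.pi * t / 100) for t in range(101)]
ys = backlash_friction_controlled(xs, b, -b / 2)
```

The output peaks are clipped by b/2 relative to the input peaks, reproducing the flat-topped waveform of Fig. 5-18.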
Fig. 5-20. Displacement, velocity, and acceleration waveforms of input and output members of a backlash element when the input displacement is driven sinusoidally.

5.4  Equations of Mechanical Systems
The equations of a linear mechanical system are written by first constructing a model of the system containing interconnected linear elements, and then the system equations are written by applying Newton's law of motion to the free-body diagram.

Example 5-4

Let us consider the mechanical system shown in Fig. 5-21(a). The free-body diagram of the system is shown in Fig. 5-21(b).

Fig. 5-21. (a) Mass-spring-friction system. (b) Free-body diagram.

The force equation of the system is written

    f(t) = M d²y(t)/dt² + B dy(t)/dt + Ky(t)    (5-56)
This second-order differential equation can be decomposed into two first-order state equations, using the method discussed in Chapter 4. Let us assign x₁ = y and x₂ = dy/dt as the state variables. Then Eq. (5-56) is written

    dx₁(t)/dt = x₂(t)    (5-57)

    dx₂(t)/dt = −(K/M)x₁(t) − (B/M)x₂(t) + (1/M)f(t)    (5-58)

It is not difficult to see that this mechanical system is analogous to a series RLC electric network. With this analogy it is simple to formulate the state equations directly from the mechanical system using a different set of state variables. If we consider that mass is analogous to inductance, and the spring constant K is analogous to the inverse of capacitance, 1/C, it is logical to assign v(t), the velocity, and fk(t), the force acting on the spring, as state variables, since the former is analogous to the current in an inductor and the latter is analogous to the voltage across a capacitor. Then the state equations of the system are

    Force on mass:       M dv(t)/dt = −Bv(t) − fk(t) + f(t)    (5-59)

    Velocity of spring:  (1/K) dfk(t)/dt = v(t)    (5-60)

Notice that the first state equation is similar to writing the equation for the voltage across an inductor; the second is like that of the current through a capacitor. This simple example further illustrates the points made in Chapter 4 regarding the fact that the state equations and state variables of a dynamic system are not unique.

Example 5-5
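A minimal simulation of the state equations (5-57) and (5-58) by forward-Euler integration is sketched below; the integration method, step size, and parameter values are illustrative choices, not from the text:

```python
def simulate_msf(M, B, K, f, x0, dt, steps):
    """Forward-Euler integration of Eqs. (5-57)-(5-58):
    dx1/dt = x2,  dx2/dt = -(K/M) x1 - (B/M) x2 + (1/M) f(t)."""
    x1, x2 = x0
    traj = [(x1, x2)]
    for k in range(steps):
        t = k * dt
        dx1 = x2
        dx2 = -(K / M) * x1 - (B / M) * x2 + f(t) / M
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
        traj.append((x1, x2))
    return traj

# Unit step force: the displacement settles near the static value f/K = 0.5
traj = simulate_msf(M=1.0, B=2.0, K=2.0, f=lambda t: 1.0,
                    x0=(0.0, 0.0), dt=0.001, steps=20000)
```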
As a second example of writing equations for mechanical systems, consider the system shown in Fig. 5-22(a). Since the spring is deformed when it is subjected to the force f(t), two displacements, y₁ and y₂, must be assigned to the end points of the spring. The free-body diagrams of the system are given in Fig. 5-22(b).

Fig. 5-22. Mechanical system for Example 5-5. (a) Mass-spring-friction system. (b) Free-body diagrams.

From these free-body diagrams the force equations of the system are written

    f(t) = K[y₁(t) − y₂(t)]    (5-61)

    K[y₁(t) − y₂(t)] = M d²y₂(t)/dt² + B dy₂(t)/dt    (5-62)

Now let
us write the state equations of the system. Since the differential equation of
the system
is
already available in Eq. (5-62), the most direct
way
is
to
decompose
this
equation into two first-order differential equations. Therefore, letting
x₁(t) = y₂(t) and x₂(t) = dy₂(t)/dt, Eqs. (5-61) and (5-62) give

    dx₁(t)/dt = x₂(t)    (5-63)

    dx₂(t)/dt = −(B/M)x₂(t) + (1/M)f(t)    (5-64)

As an alternative, we can assign the velocity v(t) of the body with mass M as one state variable, and the force fk(t) on the spring as the other state variable, so we have

    M dv(t)/dt = −Bv(t) + fk(t)    (5-65)

and

    fk(t) = f(t)    (5-66)

One may wonder at this point if the two equations in Eqs. (5-65) and (5-66) are correct as state equations, since it seems that only Eq. (5-65) is a state equation, but we do have two state variables in v(t) and fk(t). Why do we need only one state equation here, whereas Eqs. (5-63) and (5-64) clearly are two independent state equations?
The situation is better explained (at least for electrical engineers) by referring to the analogous electric network of the system, shown in Fig. 5-23. It is clear that although the network has two reactive elements in L and C, and thus there should be two state variables, the capacitance in this case is a "redundant" element, since ec(t) is equal to the applied voltage e(t).

Fig. 5-23. Electric network analogous to the mechanical system of Fig. 5-22.

However, the equations in Eqs. (5-65) and (5-66) can provide only the solution to the velocity of M once f(t) is specified. If we need to find the displacement y₁(t) at the point where f(t) is applied, we have to use the relation

    y₂(t) = ∫ v(t) dt + y₂(0+)    (5-67)

where y₂(0+) is the initial displacement of the body with mass M. On the other hand, from the two state equations of Eqs. (5-63) and (5-64), we can solve for y₂(t); y₁(t) is then determined from Eq. (5-61) as y₁(t) = f(t)/K + y₂(t).
Example 5-6

In this example the equations for the mechanical system in Fig. 5-24(a) are to be written. Then we are to draw state diagrams and derive transfer functions for the system. The free-body diagrams for the two masses are shown in Fig. 5-24(b), with the reference directions of the displacements y₁ and y₂ as indicated.

Fig. 5-24. Mechanical system for Example 5-6.

The Newton's force equations for the system are written directly from the free-body diagrams:

    f(t) = M₁ d²y₁/dt² + B₁(dy₁/dt − dy₂/dt) + K₁(y₁ − y₂)    (5-68)

    K₁(y₁ − y₂) + B₁(dy₁/dt − dy₂/dt) = M₂ d²y₂/dt² + B₂ dy₂/dt + K₂y₂    (5-69)

We may now
decompose these two second-order simultaneous
differential
equations
into four state equations by defining the following state variables
    x₁ = y₁    (5-70)
    x₂ = dy₁/dt = dx₁/dt    (5-71)
    x₃ = y₂    (5-72)
    x₄ = dy₂/dt = dx₃/dt    (5-73)

Equations (5-71) and (5-73) form two state equations naturally; the other two are obtained by substituting Eqs. (5-70) through (5-73) into Eqs. (5-68) and (5-69) and rearranging; we have

    dx₁/dt = x₂    (5-74)
    dx₂/dt = −(K₁/M₁)(x₁ − x₃) − (B₁/M₁)(x₂ − x₄) + (1/M₁)f(t)    (5-75)

    dx₃/dt = x₄    (5-76)

    dx₄/dt = (K₁/M₂)x₁ + (B₁/M₂)x₂ − [(K₁ + K₂)/M₂]x₃ − [(B₁ + B₂)/M₂]x₄    (5-77)

If we are interested in the displacements y₁ and
y₂, the output equations are written

    y₁(t) = x₁(t)    (5-78)

    y₂(t) = x₃(t)    (5-79)
The state diagram of the system, according to the equations written above, is drawn as shown in Fig. 5-25. The transfer functions Y₁(s)/F(s) and Y₂(s)/F(s) are obtained from the state diagram by applying Mason's gain formula. The reader should verify the following results (make sure that all the loops and nontouching loops are taken into account):

Fig. 5-25. State diagram for the mechanical system of Fig. 5-24.

    Y₁(s)/F(s) = [M₂s² + (B₁ + B₂)s + (K₁ + K₂)]/Δ    (5-80)

    Y₂(s)/F(s) = (B₁s + K₁)/Δ    (5-81)
where

    Δ = M₁M₂s⁴ + [M₁(B₁ + B₂) + B₁M₂]s³ + [M₁(K₁ + K₂) + K₁M₂ + B₁B₂]s² + (B₁K₂ + B₂K₁)s + K₁K₂    (5-82)

The state equations can also be written directly from the diagram of the mechanical system. The state variables are assigned as v₁ = dy₁/dt, v₂ = dy₂/dt, and the forces on the two springs, fK1 and fK2. Then, writing the forces acting on the masses and the velocities of the springs as functions of the four state variables and the external force,
the state equations are
    Force on M₁:    M₁ dv₁/dt = −B₁v₁ + B₁v₂ − fK1 + f    (5-83)

    Force on M₂:    M₂ dv₂/dt = B₁v₁ − (B₁ + B₂)v₂ + fK1 − fK2    (5-84)

    Velocity on K₁:  dfK1/dt = K₁(v₁ − v₂)    (5-85)

    Velocity on K₂:  dfK2/dt = K₂v₂    (5-86)
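Equation (5-82) can be cross-checked by forming Δ directly from the simultaneous equations (5-68) and (5-69): Δ = (M₁s² + B₁s + K₁)(M₂s² + (B₁ + B₂)s + K₁ + K₂) − (B₁s + K₁)². A Python sketch (helper names and the test values are illustrative):

```python
def polymul(a, b):
    """Multiply polynomials given as coefficient lists, highest power first."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polysub(a, b):
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return [x - y for x, y in zip(a, b)]

def delta_coeffs(M1, M2, B1, B2, K1, K2):
    """Characteristic polynomial of the two-mass system of Example 5-6."""
    p1 = [M1, B1, K1]                    # M1 s^2 + B1 s + K1
    p2 = [M2, B1 + B2, K1 + K2]          # M2 s^2 + (B1+B2) s + (K1+K2)
    coupling = polymul([B1, K1], [B1, K1])
    return polysub(polymul(p1, p2), coupling)

# Compare against the expanded coefficients of Eq. (5-82)
M1, M2, B1, B2, K1, K2 = 1.0, 2.0, 0.5, 0.3, 4.0, 6.0
expanded = [M1 * M2,
            M1 * (B1 + B2) + B1 * M2,
            M1 * (K1 + K2) + K1 * M2 + B1 * B2,
            B1 * K2 + B2 * K1,
            K1 * K2]
```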
Example 5-7

The rotational system shown in Fig. 5-26(a) consists of a disk mounted on a shaft that is fixed at one end. The moment of inertia of the disk about its axis is J. The edge of the disk is riding on a surface, and the viscous friction coefficient between the two surfaces is B. The inertia of the shaft is negligible, but the stiffness is K.

Fig. 5-26. Rotational system for Example 5-7.

Assume that a torque is applied to the disk as shown; then the torque or moment equation about the axis of the shaft is written from the free-body diagram of Fig. 5-26(b):

    T(t) = J d²θ(t)/dt² + B dθ(t)/dt + Kθ(t)    (5-87)

Notice that this system is analogous to the translational system of Fig. 5-21. The state equations may be written by defining the state variables as x₁(t) = θ(t) and dx₁(t)/dt = x₂(t). The reader may carry out the next step of writing the state equations as an exercise.
5.5  Error-Sensing Devices in Control Systems

In feedback control systems it is often necessary to compare several signals at a certain point of a system. For instance, it is common practice to compare the reference input with the controlled variable; the difference between the two signals is called the error. The error signal is then used to actuate the system. The block-diagram notation for the algebraic sum of several signals is defined in Fig. 3-5. In terms of physical components, an error-sensing device can be a simple potentiometer or combination of potentiometers, a differential gear, a transformer, a differential amplifier, a synchro, or a similar element. The mathematical modeling of some of these devices is discussed in the following.
Potentiometers. Since the output voltage of a potentiometer is proportional to the shaft displacement when a voltage is applied across its fixed terminals, the device can be used to compare two shaft positions. In this case one shaft may be fastened to the potentiometer case and the other to the shaft of the potentiometer. When a constant voltage is applied to the fixed terminals of the potentiometer, the voltage across the variable and the reference terminals will be proportional to the difference between the two shaft positions. The arrangement shown in Fig. 5-27(b) is a one-potentiometer realization of the error-sensing device shown in Fig. 5-27(a). A more versatile arrangement may be obtained by using two potentiometers connected in parallel, as shown in Fig. 5-27(c). This

Fig. 5-27. (a) Block-diagram and signal-flow-graph symbols for an error sensor. (b) Position error sensor using one potentiometer. (c) Position error sensor using two potentiometers.
Fig. 5-28. (a) Direct current control system with potentiometers as error detectors. (b) Typical waveforms of signals in the control system of (a).
allows comparison of two remotely located shaft positions. The applied voltage v(t) can be ac or dc, depending upon the types of transducers that follow the error sensor. If v(t) is a dc voltage, the polarity of the output voltage e(t) determines the relative position of the two shafts. In the case of an ac applied voltage, the phase of e(t) acts as the indicator of the relative shaft directions. In either case the transfer relation of the two error-sensor configurations can be written

    e(t) = Ks[θr(t) − θc(t)]    (5-88)

where

    e(t) = error voltage, volts
    Ks = sensitivity of the error sensor, volts per radian

The value of Ks depends upon the applied voltage and the total displacement capacity of the potentiometers. For instance, if the magnitude of v(t) is V volts and each of the potentiometers is capable of rotating 10 turns, Ks = V/20π V/rad.

A simple example that illustrates the use of a pair of potentiometers as an error detector is shown in Fig. 5-28(a). In this case the voltage supplied to the error detector, v(t), is a dc voltage. An unmodulated or dc electric signal, e(t), proportional to the misalignment between the reference input shaft and the controlled shaft, appears as the output of the potentiometer error detector. In control system terminology, a dc signal usually refers to an unmodulated signal.
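The error-sensor relation of Eq. (5-88), with the sensitivity Ks = V/20π for 10-turn potentiometers, can be sketched in Python (function names and sample values are illustrative):

```python
import math

def pot_error_voltage(V, turns, theta_r, theta_c):
    """Error voltage of a potentiometer error sensor, Eq. (5-88):
    e = Ks (theta_r - theta_c), with sensitivity Ks = V / (2*pi*turns)
    for potentiometers capable of `turns` full rotations."""
    Ks = V / (2 * math.pi * turns)
    return Ks * (theta_r - theta_c)

# 10-turn pots with V = 50 V give Ks = 50/(20*pi) V/rad (illustrative)
e = pot_error_voltage(50.0, 10, theta_r=0.1, theta_c=0.05)
```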
On the other hand, an ac signal in control systems is modulated by a modulation process. These definitions are different from those commonly used in electrical engineering, where dc simply refers to unidirectional and ac indicates alternating.

As shown in Fig. 5-28(a), the error signal is amplified by a dc amplifier whose output drives the armature of a permanent-magnet dc motor. If the system works properly, whenever there is a misalignment between the input and the output shafts, the motor will rotate in such a direction as to reduce the error to a minimum. Typical waveforms of the signals in the system are shown in Fig. 5-28(b). Note that the electric signals are all unmodulated and the output displacements of the motor and the load are essentially of the same form as the error signal.

Figure 5-29(a) illustrates a control system which could serve essentially the same purpose as that of the system of Fig. 5-28(a) except that ac signals prevail. In this case the voltage applied to the error sensor is sinusoidal. The frequency of this signal is usually much higher than the frequency of the true signal being transmitted through the system. Typical signals of the ac control system are shown in Fig. 5-29(b). The signal v(t), whose frequency is ωc, is referred to as the carrier signal:

    v(t) = V sin ωct    (5-89)

Analytically, the output of the error sensor is given by

    e(t) = Ks θe(t) v(t)    (5-90)

where θe(t) is the difference between the input displacement and the load displacement, θe(t) = θr(t) − θc(t). For the θe(t) shown in Fig. 5-29(b), e(t) becomes a suppressed-carrier modulated signal.

Fig. 5-29. (a) AC control system with potentiometers as error detector. (b) Typical waveforms of signals in the control system of (a).

A reversal in phase of e(t) occurs whenever
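The product in Eq. (5-90) can be checked numerically: multiplying a low-frequency signal by the carrier yields a waveform identical to a sum of two sideband cosines at the frequencies ωc ± ωs, with neither the carrier nor the signal frequency present. A Python sketch with illustrative parameter values:

```python
import math

def modulated(Ks, V, wc, ws, t):
    """Suppressed-carrier signal e(t) = Ks*sin(ws t) * V*sin(wc t),
    i.e. Eq. (5-90) with a unit-amplitude sinusoidal error signal."""
    return Ks * math.sin(ws * t) * V * math.sin(wc * t)

def sidebands(Ks, V, wc, ws, t):
    """Equivalent two-sideband form: only wc - ws and wc + ws appear."""
    return 0.5 * Ks * V * (math.cos((wc - ws) * t)
                           - math.cos((wc + ws) * t))

# The two forms agree at every sampled instant
vals = [(modulated(1.0, 10.0, 100.0, 3.0, t / 50),
         sidebands(1.0, 10.0, 100.0, 3.0, t / 50)) for t in range(200)]
```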
the signal crosses the zero-magnitude axis. This reversal in phase causes the ac motor to reverse in direction according to the desired sense of correction of the error θe(t). The name "suppressed-carrier modulation" stems from the fact that when a signal θe(t) is modulated by a carrier signal v(t) according to Eq. (5-90), the resultant signal e(t) no longer contains the original carrier frequency. To illustrate this, let us assume that θe(t) is also a sinusoid given by

    θe(t) = sin ωst    (5-91)

where, normally, ωs ≪ ωc. Substituting Eq. (5-91) into Eq. (5-90) and using familiar trigonometric relations, we have

    e(t) = ½KsV[cos(ωc − ωs)t − cos(ωc + ωs)t]    (5-92)

Therefore, e(t) no longer contains the carrier frequency ωc or the signal frequency ωs, but it does have the two sidebands ωc + ωs and ωc − ωs. Interestingly enough, when the modulated signal is transmitted through the system, the motor acts as a demodulator, so that the displacement of the load will be of the same form as the dc signal before modulation. This is clearly seen from the waveforms of Fig. 5-29(b).

It should be pointed out that a control system need not contain all-dc or all-ac components. It is quite common to couple a dc component to an ac component through a modulator, or an ac device to a dc device through a demodulator. For instance, the dc amplifier of the system in Fig. 5-28(a) may be replaced by an ac amplifier that is preceded by a modulator and followed by a demodulator.

Synchros. Among the various types of sensing devices for mechanical shaft errors, the most widely used is a pair of synchros. Basically, a synchro is a rotary device that operates on the same principle as a transformer and produces a correlation between an angular position and a voltage or a set of voltages. Depending upon the manufacturers, synchros are known by such trade names as Selsyn, Autosyn, Diehlsyn, and Telesyn. There are many types and different applications of synchros, but in this section only the synchro transmitter and the synchro control transformer will be discussed.
Synchro Transmitter

A synchro transmitter has a Y-connected stator winding which resembles the stator of a three-phase induction motor. The rotor is a salient-pole, dumbbell-shaped magnet with a single winding. The schematic diagram of a synchro transmitter is shown in Fig. 5-30. A single-phase ac voltage is applied to the rotor through two slip rings. The symbol G is often used to designate a synchro transmitter, which is sometimes also known as a synchro generator.

Fig. 5-30. Schematic diagrams of a synchro transmitter.

Let the ac voltage applied to the rotor of a synchro transmitter be

    er(t) = Er sin ωct    (5-93)
When the rotor is in the position shown in Fig. 5-30, which is defined as the electric zero, the voltage induced across the stator winding between S2 and the neutral n is maximum and is written

    eS2n(t) = KEr sin ωct    (5-94)

where K is a proportional constant. The voltages across the terminals S1n and S3n are

    eS1n(t) = KEr cos 240° sin ωct = −0.5KEr sin ωct    (5-95)

    eS3n(t) = KEr cos 120° sin ωct = −0.5KEr sin ωct    (5-96)

The terminal voltages of the stator are

    eS1S2 = eS1n − eS2n = −1.5KEr sin ωct    (5-97)

    eS2S3 = eS2n − eS3n = 1.5KEr sin ωct    (5-98)

    eS3S1 = eS3n − eS1n = 0

The above equations show that, despite the similarity between the construction of the stator of a synchro and that of a three-phase machine, there are only single-phase voltages induced in the stator.
that the rotor of the synchro trans-
allowed to rotate in a counterclockwise direc-
is
tion, as
shown
winding
will
in Fig. 5-31.
displacement 6; that
is,
r
Es „ = KE ESin = KE
Rotor position
of a synchro transmitter.
The magnitudes of
the stator terminal voltages
cos (6
r
cos 9
r
cos (0
.
*J~TKE ,s<
^fJKE
r
sin (9
T
sin
A plot of these terminal voltages as a function shown
in Fig. 5-32. Notice that
-
240°)
(5-99)
(5-100)
-
120°)
(5-101)
become
ESlS = ESin - ESm = ,/TKE, sin (9 +
Es
voltages in each stator
the voltage magnitudes are
ESl „ = KE Fig. 5-31.
The
vary as a function of the cosine of the rotor
9
240°)
(5-102)
120°)
(5-103)
(5-104)
of the rotor shaft position
is
each rotor position corresponds to one unique set of stator voltages. This leads to the use of the synchro transmitter to identify angular positions by measuring and identifying the set of voltages at the three stator terminals.
Fig. 5-32. Variation of the terminal voltages of a synchro transmitter as a function of the rotor position. θ is measured counterclockwise from the electric zero.
Synchro Control Transformer

Since the function of an error detector is to convert the difference of two shaft positions into an electrical signal, a single synchro transmitter is apparently inadequate. A typical arrangement of a synchro error detector involves the use of two synchros: a transmitter and a control transformer, as shown in Fig. 5-33. For small angular deviations between the two rotor positions, a proportional voltage is generated at the rotor terminals of the control transformer.

Fig. 5-33. Synchro error detector.

Basically, the principle of operation of a synchro control transformer is identical to that of the synchro transmitter, except that the rotor is cylindrically shaped so that the air-gap flux is uniformly distributed around the rotor. This feature is essential for a control transformer, since its rotor terminals are usually connected to an amplifier or similar electrical device, in order that the latter sees a constant impedance. The change in the rotor impedance with rotations of the shaft position should be minimized. The symbol CT is often used to designate a synchro control transformer.
Referring to the arrangement shown in Fig. 5-33, the voltages given by Eqs. (5-102), (5-103), and (5-104) are now impressed across the corresponding stator terminals of the control transformer. Because of the similarity in the magnetic construction, the flux patterns produced in the two synchros will be the same if all losses are neglected. For example, if the rotor of the transmitter is in its electric zero position, the fluxes in the transmitter and in the control transformer are as shown in Fig. 5-34(a) and (b). When the rotor of the control transformer is in the position shown in Fig. 5-34(b), the induced voltage at its rotor terminals is zero, and the shafts of the two synchros are considered to be in alignment. When the rotor of the control transformer is rotated 180° from the position shown, its terminal voltage is again zero. These are known as the two null positions of the error detector. If the control transformer rotor is at an angle α from either of the null positions,

Fig. 5-34. Relations among flux patterns, rotor positions, and the rotor voltage of synchro error detector.
in
Control Systems /
217
such as that shown in Fig. 5-34(c) and proportional to sin a. Similarly, is
it
(d), the magnitude of the rotor voltage is can be shown that when the transmitter shaft
any position other than that shown in Fig. 5-34(a), the flux patterns will and the rotor voltage of the control transformer will be pro-
in
shift accordingly,
portional to the sine of the difference of the rotor positions, a.
The rotor voltage
of the control transformer versus the difference in positions of the rotors of the transmitter and the control transformer is shown in Fig. 5-35.
Rotor voltage
Vr
360°
Fig. 5-35.
a=(dr -dc
Rotor voltage of control transformer as a function of the
)
dif-
ference of rotor positions.
From
Fig. 5-35
it is
apparent that the synchro error detector
is
a nonlinear
However, for small angular deviations of up to 15 degrees in the vicinity of the two null positions, the rotor voltage of the control transformer is approximately proportional to the difference between the positions of the rotors of the transmitter and the control transformer. Therefore, for small deviations, device.
the transfer function of the synchro error detector can be approximated by a
constant
Ks
:
K
^e~rri
" (5 105)
where
E=
= = = Ks = Br 9C 9e
error voltage shaft position of synchro transmitter, degrees shaft position of synchro control transformer, degrees
error in shaft positions sensitivity
of the error detector, volts per degree
The schematic diagram of a positional control system employing a synchro error detector is shown in Fig. 5-36(a). The purpose of the control system is to make the controlled shaft follow the angular displacement of the reference input shaft as closely as possible. The rotor of the control transformer is mechanically
mitter
connected to the controlled shaft, and the rotor of the synchro transis connected to the reference input shaft. When the controlled shaft is
aligned with the reference shaft, the error voltage turn.
When an
angular misalignment
exists,
is zero and the motor does not an error voltage of relative polarity
218
/
Mathematical Modeling of Physical Systems
Chap. 5
Fig. 5-36. (a) Alternating-current control system employing synchro error detector. (b) Block diagram of the system in (a).
appears at the amplifier input, and the output of the amplifier will drive the motor in such a direction as to reduce the error. For small deviations between the controlled and the reference shafts, the synchro error detector can be represented by the constant K_s given by Eq. (5-105). Then the linear operation of the positional control system can be represented by the block diagram of Fig. 5-36(b). From the characteristic of the error detector shown in Fig. 5-35, it is clear that K_s has opposite signs at the two null positions. However, in closed-loop systems, only one of the two null positions is a true null; the other corresponds to an unstable operating point.

Suppose that, in the system shown in Fig. 5-36(a), the synchro positions are close to the true null and the controlled shaft lags behind the reference shaft; a positive error voltage will cause the motor to turn in the proper direction to correct this lag. But if the synchros are operating close to the false null, for the same lag between θ_r and θ_c the error voltage is negative and the motor is driven in the direction that will increase the lag. A larger lag in the controlled shaft position will increase the magnitude of the error voltage still further and cause the motor to rotate in the same direction, until the true null position is reached.

In reality, the error signal at the rotor terminals of the synchro control transformer may be represented as a function of time. If the ac signal applied to the rotor terminals of the transmitter is denoted by sin ω_c t, where ω_c is known as the carrier frequency, the error signal e(t) is given by

    e(t) = K_s θ_e(t) sin ω_c t    (5-106)

Therefore, as explained earlier, e(t) is again a suppressed-carrier modulated signal.
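As a numerical illustration of Eq. (5-106), the following sketch evaluates the suppressed-carrier signal. The sensitivity and carrier frequency are assumed values chosen for illustration, not taken from the text:

```python
import math

Ks = 1.0                  # assumed error-detector sensitivity, volts per degree
wc = 2 * math.pi * 400.0  # assumed 400-Hz carrier, rad/s

def error_signal(theta_e, t):
    """Eq. (5-106): e(t) = Ks * theta_e(t) * sin(wc * t)."""
    return Ks * theta_e * math.sin(wc * t)

# At a carrier peak the envelope equals Ks*theta_e; reversing the sign of
# theta_e reverses the phase of the carrier, not the shape of the envelope.
t_peak = 0.000625         # wc * t_peak = pi/2
print(error_signal(10.0, t_peak))    # +10.0 V
print(error_signal(-10.0, t_peak))   # -10.0 V
```

The sign flip at the carrier output is exactly the suppressed-carrier behavior described in the text: the error magnitude lives in the envelope and the error sign lives in the carrier phase.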
5.6 Tachometers
Tachometers are electromechanical devices that convert mechanical energy into electrical energy. The device works essentially as a generator, with the output voltage proportional to the magnitude of the angular velocity. In control systems, tachometers are used for the sensing of shaft velocity and for the improvement of system performance. For instance, in a control system with the output displacement designated as the state variable x1 and the output velocity as the state variable x2, the first state variable may be monitored by means of a potentiometer while x2 is monitored by a tachometer.

In general, tachometers may be classified into two types: ac and dc. The simplified schematic diagrams of these two versions are shown in Fig. 5-37. For the ac tachometer, a sinusoidal voltage of rated value is applied to the primary winding. A secondary winding is placed at a 90° angle mechanically with respect to the primary winding. When the rotor of the tachometer is stationary, the output voltage at the secondary winding is zero. When the rotor shaft is rotated, the output voltage of the tachometer is closely proportional to the rotor velocity. The polarity of the voltage is dependent upon the direction of rotation.

Fig. 5-37. Schematic diagram of a tachometer.

The input-output relation of an ac tachometer can be represented by a first-order differential equation

    e_t(t) = K_t dθ(t)/dt    (5-107)

where e_t(t) is the output voltage, θ(t) the rotor displacement, and K_t the tachometer constant, usually defined in units of volts per rpm or volts per 1000 rpm. The transfer function of an ac tachometer is obtained by taking the Laplace transform of Eq. (5-107); thus

    E_t(s)/Θ(s) = K_t s    (5-108)
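Since K_t is commonly quoted in volts per 1000 rpm, a small unit conversion is needed before Eq. (5-107) can be used with shaft speed in rad/sec. A minimal sketch (the 3 V/1000 rpm rating is an assumed value):

```python
import math

def kt_in_si(volts_per_1000_rpm):
    """Convert a tachometer constant from V per 1000 rpm to V per rad/s."""
    rad_per_s = 1000.0 * 2.0 * math.pi / 60.0   # 1000 rpm expressed in rad/s
    return volts_per_1000_rpm / rad_per_s

Kt = kt_in_si(3.0)            # hypothetical 3 V / 1000 rpm tachometer
print(round(Kt, 5))           # ~0.02865 V/(rad/s)
print(round(Kt * 200.0, 2))   # output voltage at 200 rad/s, per Eq. (5-107)
```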
A dc tachometer serves exactly the same purpose as the ac tachometer described above. One advantage of the dc tachometer is that the magnetic field of the device may be set up by a permanent magnet, and therefore no separate excitation voltage is required. In principle, the equations of Eqs. (5-107) and (5-108) are also true for a dc tachometer. A dc tachometer can also replace an ac tachometer in a control system if a modulator is used to convert its dc output signal into ac. Similarly, a dc tachometer can be replaced by an ac one if a phase-sensitive demodulator is used to convert the ac output to dc.

5.7 DC Motors in Control Systems
Direct current motors are one of the most widely used prime movers in industry. The advantages of dc motors are that they are available in a great variety of types and sizes and that their control is relatively simple. The primary disadvantage of a dc motor relates to its brushes and commutator. For general purposes, dc motors are classified as series-excited, shunt-excited, and separately excited, all of which refer to the way in which the field is excited. However, the characteristics of the first two types of motor are highly nonlinear, so for control systems applications the separately excited dc motor is the most popular.

The separately excited dc motor is divided into two groups, depending on whether the control action is applied to the field terminals or to the armature terminals of the motor. These are referred to as field controlled and armature controlled, where usually the rotor of the motor is referred to as the armature (although there are exceptions). In recent years, advanced design and manufacturing techniques have produced dc motors with permanent-magnet fields of high field intensity and rotors of very low inertia; in other words, motors with very high torque-to-inertia ratios. It is possible to have a 4-hp motor with a mechanical time constant as low as 2 milliseconds. The high torque-to-inertia ratio of dc motors has opened new applications for motors in computer peripheral equipment such as tape drives, printers, and disk packs, as well as in the machine tool industry. Of course, when a dc motor has a permanent-magnet field it is necessarily armature controlled.

The mathematical modeling of the armature-controlled and field-controlled dc motors, including the permanent-magnet dc motor, is discussed in the following.
Field-Controlled DC Motor

The schematic diagram of a field-controlled dc motor is shown in Fig. 5-38. The following motor variables and parameters are defined:

    e_a(t) = armature voltage
    R_a = armature resistance
    φ(t) = air-gap flux
    e_b(t) = back emf (electromotive force) voltage
    K_b = back emf constant
    K_i = torque constant
    i_a(t) = armature current
    i_f(t) = field current
    e_f(t) = field voltage
    T_m(t) = torque developed by motor
    J_m = rotor inertia of motor
    B_m = viscous frictional coefficient
    θ_m(t) = angular rotor displacement

Fig. 5-38. Schematic diagram of a field-controlled dc motor.

To carry out a linear analysis, the following assumptions are made:

1. The armature current is held constant, i_a = I_a.
2. The air-gap flux is proportional to the field current; that is,

       φ(t) = K_f i_f(t)    (5-109)

3. The torque developed by the motor is proportional to the air-gap flux and the armature current. Thus

       T_m(t) = K_m φ(t) i_a(t)    (5-110)

where K_m is a constant. Substituting Eq. (5-109) into Eq. (5-110) gives

    T_m(t) = K_m K_f I_a i_f(t)    (5-111)

If we let

    K_i = K_m K_f I_a    (5-112)

be the torque constant, Eq. (5-111) becomes

    T_m(t) = K_i i_f(t)    (5-113)
Referring to the motor circuit diagram of Fig. 5-38, the state variables are assigned as i_f(t), ω_m(t), and θ_m(t). The first-order differential equations relating these state variables and the other variables are written

    L_f di_f(t)/dt = −R_f i_f(t) + e_f(t)    (5-114)
    J_m dω_m(t)/dt = −B_m ω_m(t) + T_m(t)    (5-115)
    dθ_m(t)/dt = ω_m(t)    (5-116)

By proper substitution, the last three equations are written in the form of state equations:

         [ i_f(t) ]   [ −R_f/L_f      0       0 ] [ i_f(t) ]   [ 1/L_f ]
    d/dt [ ω_m(t) ] = [  K_i/J_m   −B_m/J_m   0 ] [ ω_m(t) ] + [   0   ] e_f(t)    (5-117)
         [ θ_m(t) ]   [     0          1      0 ] [ θ_m(t) ]   [   0   ]

The state diagram of the system is drawn as shown in Fig. 5-39.
Fig. 5-39. State diagram of a field-controlled dc motor.

The transfer function between the motor displacement and the input voltage is obtained from the state diagram as

    Θ_m(s)/E_f(s) = K_i / [L_f J_m s³ + (R_f J_m + B_m L_f)s² + R_f B_m s]    (5-118)

or

    Θ_m(s)/E_f(s) = K_i / [R_f B_m s(1 + τ_m s)(1 + τ_f s)]    (5-119)

where

    τ_f = L_f/R_f = field electrical time constant of motor
    τ_m = J_m/B_m = mechanical time constant of motor
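The factored form of Eq. (5-119) can be checked against the polynomial form of Eq. (5-118) numerically. The sketch below expands R_f B_m s (1 + τ_m s)(1 + τ_f s) and compares coefficients; all parameter values are invented for illustration:

```python
# Hypothetical field-controlled motor parameters (illustration only):
Rf, Lf = 2.0, 0.5     # field resistance (ohm) and inductance (H)
Jm, Bm = 0.01, 0.1    # rotor inertia and viscous friction

# Denominator of Eq. (5-118): Lf*Jm*s^3 + (Rf*Jm + Bm*Lf)*s^2 + Rf*Bm*s
poly_118 = [Lf * Jm, Rf * Jm + Bm * Lf, Rf * Bm, 0.0]

# Denominator of Eq. (5-119): Rf*Bm * s * (1 + tau_m*s) * (1 + tau_f*s)
tau_f = Lf / Rf       # field electrical time constant
tau_m = Jm / Bm       # mechanical time constant
c = Rf * Bm
poly_119 = [c * tau_m * tau_f, c * (tau_m + tau_f), c, 0.0]

# Coefficients agree term by term (to floating-point rounding):
print([round(a, 12) == round(b, 12) for a, b in zip(poly_118, poly_119)])
```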
Armature-Controlled DC Motor

The schematic diagram of an armature-controlled dc motor is shown in Fig. 5-40. In this case, for linear operation of the motor, it is necessary to hold the field current constant. The torque constant K_i relates the motor torque and the armature current i_a(t); thus

    T_m(t) = K_i i_a(t)    (5-120)

where K_i is a function of the air-gap flux φ, which is constant in this case. Of course, in the case of a permanent-magnet motor, φ is constant also. The back emf voltage is proportional to the motor speed,

    e_b(t) = K_b dθ_m(t)/dt = K_b ω_m(t)    (5-121)

Fig. 5-40. Schematic diagram of an armature-controlled dc motor.

With reference to Fig. 5-40, the state equations of the armature-controlled dc motor are written

         [ i_a(t) ]   [ −R_a/L_a   −K_b/L_a   0 ] [ i_a(t) ]   [ 1/L_a ]
    d/dt [ ω_m(t) ] = [  K_i/J_m   −B_m/J_m   0 ] [ ω_m(t) ] + [   0   ] e_a(t)    (5-122)
         [ θ_m(t) ]   [     0          1      0 ] [ θ_m(t) ]   [   0   ]

The state diagram of the system is drawn as shown in Fig. 5-41. The transfer function between the motor displacement and the input voltage is obtained from the state diagram as

    Θ_m(s)/E_a(s) = K_i / [L_a J_m s³ + (R_a J_m + B_m L_a)s² + (K_b K_i + R_a B_m)s]    (5-123)
Although a dc motor is basically an open-loop system, as in the case of the field-controlled motor, the state diagram of Fig. 5-41 shows that the armature-controlled motor has a "built-in" feedback loop caused by the back emf. Physically, the back emf voltage represents the feedback of a signal that is proportional to the negative of the speed of the motor. As seen from Eq. (5-123), the back emf constant K_b represents an added term to the resistance and frictional coefficient. Therefore, the back emf effect is equivalent to an "electrical friction," which tends to improve the stability of the motor.

It should be noted that in the English unit system K_i is in lb-ft/amp or oz-in/amp and the back emf constant K_b is in V/rad/sec. With these units, K_i and K_b are related by a constant factor.
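The stabilizing effect of the back emf can be seen with a quick calculation. If L_a is neglected, the speed dynamics implied by Eq. (5-123) collapse to first order, with K_b K_i/R_a appearing as an additional friction term alongside B_m. A sketch with invented parameter values:

```python
# With La ~ 0, the speed dynamics implied by Eq. (5-123) become
#   Ra*Jm*dw/dt + (Ra*Bm + Kb*Ki)*w = Ki*ea
# so the back emf contributes Kb*Ki/Ra of "electrical friction."
Ra = 1.0              # armature resistance, ohm (assumed)
Jm, Bm = 0.005, 0.01  # inertia and mechanical friction (assumed)
Ki = 0.5              # torque constant (assumed)

def speed_time_constant(Kb):
    return Jm / (Bm + Kb * Ki / Ra)

print(speed_time_constant(0.0))   # 0.5 s with no back emf
print(speed_time_constant(0.5))   # much faster, better-damped response
```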
Fig. 5-41. State diagram of an armature-controlled dc motor.

We can write the mechanical power developed in the motor armature as

    P = e_b(t) i_a(t)  watts
      = e_b(t) i_a(t)/746  hp    (5-124)

Substituting Eqs. (5-120) and (5-121) into Eq. (5-124), we have

    P = [K_b T_m/(746 K_i)] dθ_m(t)/dt  hp    (5-125)

This power is equal to the mechanical power developed at the motor shaft,

    P = (T_m/550) dθ_m(t)/dt  hp    (5-126)

Therefore, equating Eqs. (5-125) and (5-126), we get

    K_i = 0.737 K_b    (5-127)

We can also determine the constants of the motor using torque-speed curves. A typical set of torque-speed curves for various applied voltages of a dc motor is shown in Fig. 5-42. The rated voltage is denoted by E_r. At no load, T_m = 0, the speed is given by the intercept on the abscissa for a given applied voltage. Then the back emf constant K_b is given by

    K_b = E_r/ω_0    (5-128)

where in this case the rated values are used for voltage and angular velocity, ω_0 being the no-load speed at the rated voltage.
Fig. 5-42. Typical torque-speed curves of a dc motor.

When the motor is stalled, ω_m = 0, and the blocked-rotor torque at the rated voltage is designated by T_0. Let

    k = blocked-rotor torque at rated voltage / rated voltage = T_0/E_r    (5-129)

Also, since the back emf is zero at stall,

    T_0 = K_i i_a = (K_i/R_a)E_r    (5-130)

Therefore, from the last two equations, the torque constant K_i is determined:

    K_i = k R_a    (5-131)
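Equations (5-128), (5-129), and (5-131) let the motor constants be read off a measured torque-speed curve. A sketch with invented test-stand readings:

```python
# Hypothetical readings taken from a torque-speed curve at rated voltage:
Er = 24.0    # rated voltage, V
w0 = 250.0   # no-load speed at rated voltage, rad/s
T0 = 1.2     # blocked-rotor (stall) torque at rated voltage, N-m
Ra = 2.0     # armature resistance, ohm

Kb = Er / w0     # Eq. (5-128): back emf constant, V/(rad/s)
k = T0 / Er      # Eq. (5-129): stall torque per volt of applied voltage
Ki = k * Ra      # Eq. (5-131): torque constant
print(Kb, Ki)    # 0.096 0.1
```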
5.8 Two-Phase Induction Motors

For low-power applications in control systems, ac motors are preferred over dc motors because they are more rugged and have no brushes to maintain. Most ac motors used in control systems are of the two-phase induction type, which generally are rated from a fraction of a watt up to a few hundred watts. The frequency of the motor is normally 60, 400, or 1000 Hz.

A schematic diagram of a two-phase induction motor is shown in Fig. 5-43. The motor consists of a stator with two distributed windings displaced 90 electrical degrees apart. Under normal operating conditions in control applications, a fixed voltage from a constant-voltage source is applied to one phase, the fixed or reference phase. The other phase, the control phase, is energized by a voltage that is 90° out of phase with respect to the voltage of the fixed phase. The control-phase voltage is usually supplied from a servo amplifier, and the voltage has a variable amplitude and polarity. The direction of rotation of the motor reverses when the control-phase signal changes its sign.

Fig. 5-43. Schematic diagram of a two-phase induction motor.

Unlike that of a dc motor, the torque-speed curve of a two-phase induction motor is quite nonlinear. However, for linear analysis, it is generally considered an acceptable practice to approximate the torque-speed curves by straight lines, such as those shown in Fig. 5-44. These curves are assumed to be straight lines parallel to the torque-speed curve at rated control voltage (E2 = E1 = rated value), and they are equally spaced for equal increments of the control voltage.

Fig. 5-44. Typical linearized torque-speed curves of a two-phase induction motor.
The state equations of the motor are determined as follows. Let k be the blocked-rotor torque at rated voltage per unit control voltage; that is,

    k = blocked-rotor torque at E2 = E1 / rated control voltage E1    (5-132)

Let m be a negative number which represents the slope of the linearized torque-speed curve shown in Fig. 5-44. Then

    m = −(blocked-rotor torque)/(no-load speed)    (5-133)

For any torque T_m, the family of straight lines in Fig. 5-44 is represented by the equation

    T_m(t) = m ω_m(t) + k e2(t)    (5-134)

where ω_m(t) is the speed of the motor and e2(t) the control voltage.
Now, if we designate ω_m(t) as a state variable, one of the state equations may be obtained from

    J_m dω_m(t)/dt = −B_m ω_m(t) + T_m(t)    (5-135)

Substituting Eq. (5-134) into Eq. (5-135), and recognizing that θ_m(t) is the other state variable, we have the two state equations

    dω_m(t)/dt = (1/J_m)(m − B_m)ω_m(t) + (k/J_m)e2(t)    (5-136)
    dθ_m(t)/dt = ω_m(t)    (5-137)

The state diagram of the two-phase induction motor is shown in Fig. 5-45.

Fig. 5-45. State diagram of the two-phase induction motor.

The transfer function of the motor between the control voltage and the motor displacement is obtained as

    Θ_m(s)/E2(s) = k / {(B_m − m)s[1 + J_m s/(B_m − m)]}    (5-138)

or

    Θ_m(s)/E2(s) = K_m / [s(1 + τ_m s)]    (5-139)

where

    K_m = k/(B_m − m) = motor gain constant    (5-140)
    τ_m = J_m/(B_m − m) = motor time constant    (5-141)
Since m is a negative number, the equations above show that the effect of the slope of the torque-speed curve is to add more friction to the motor, thus improving the damping or stability of the motor. Therefore, the slope of the torque-speed curve of a two-phase induction motor is analogous to the back emf effect of a dc motor. However, if m is a positive number, negative damping occurs for m > B_m, and it can be shown that the motor becomes unstable.
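Equations (5-140) and (5-141) can be exercised numerically to see how the negative slope m stiffens the damping; all values below are invented for illustration:

```python
# Hypothetical linearized torque-speed data for a small servo motor:
T0 = 0.05      # blocked-rotor torque at rated control voltage, N-m
E1 = 115.0     # rated control voltage, V
w_nl = 400.0   # no-load speed, rad/s
Jm, Bm = 1e-5, 5e-5

k = T0 / E1            # Eq. (5-132)
m = -T0 / w_nl         # Eq. (5-133): slope of the linearized curve (negative)
Km = k / (Bm - m)      # Eq. (5-140): motor gain constant
tau_m = Jm / (Bm - m)  # Eq. (5-141): motor time constant
print(Km, tau_m)
# The negative slope acts like extra friction, so tau_m < Jm/Bm.
```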
5.9 Step Motors

A step motor is an electromagnetic incremental actuator that converts digital pulse inputs to analog output shaft motion. In a rotary step motor, the output shaft of the motor rotates in equal increments in response to a train of input pulses. When properly controlled, the output steps of a step motor are equal in number to the number of input pulses. Because modern control systems often have incremental motion of one type or another, step motors have become an important actuator in recent years. For instance, incremental motion control is found in all types of computer peripheral equipment, such as printers, tape drives, capstan drives, and memory-access mechanisms, as well as in a great variety of machine tool and process control systems. Figure 5-46 illustrates the application of a step motor in the paper-drive mechanism of a printer. In this case the motor is coupled directly to the platen so that the paper is driven a certain increment at a time. Typical resolution of step motors available commercially ranges from several steps per revolution to as much as 800 steps per revolution, and even higher.

Fig. 5-46. Paper-drive mechanism using a step motor.
Step motors come in a variety of types, depending upon the principle of operation. The two most common types are the variable-reluctance type and the permanent-magnet type. The complete mathematical analysis of these motors is highly complex, since the motor characteristics are very nonlinear. Unlike dc and induction motors, linearized models of a step motor are usually unrealistic. In this section we shall describe the principle of operation and a simplified mathematical model of the variable-reluctance motor.

The variable-reluctance step motor has a wound stator and an unexcited rotor. The motor can be of the single- or the multiple-stack type. In the multiple-stack version, the stator and the rotor consist of three or more separate sets of teeth. The separate sets of teeth on the rotor, usually laminated, are mounted on the same shaft. The teeth on all portions of the rotor are perfectly aligned. Figure 5-47 shows a typical rotor-stator model of a motor that has three separate sections on the rotor, or a three-phase motor. A variable-reluctance step motor must have at least three phases in order to have directional control. The three sets of rotor teeth are magnetically independent and are assembled to one shaft, which is supported by bearings. Arranged around each rotor section is a stator core with windings; the windings are not shown in Fig. 5-47. Figure 5-48 is a schematic diagram of the windings on the stator. The end view of the stator of one phase, and the rotor, of a practical motor is shown in Fig. 5-49.

Fig. 5-47. Schematic diagram of the arrangement of rotor and stator teeth in a multiple-stack, three-phase variable-reluctance step motor. The motor is shown to have 12 teeth on each stack or phase.

Fig. 5-48. Schematic diagram of a multiple-stack three-phase variable-reluctance step motor.

Fig. 5-49. End view of the stator of one phase of a multiple-stack variable-reluctance step motor.
In this case the rotor is shown at a position where its teeth are in alignment with those of the particular phase of the stator.

The rotor and stator have the same number of teeth, which means that the tooth pitch on the stator and the rotor are the same. To make the motor rotate, the stator sections of the three-phase motor are indexed one-third of a tooth pitch, in the same direction. Figure 5-50 shows this arrangement for a 10-tooth rotor. Therefore, the teeth on one stator phase are displaced 12° with respect to those on the adjacent stator phase. Here the teeth of phase C of the stator are shown to be aligned with the corresponding rotor teeth. The teeth of phase A of the stator are displaced clockwise by 12° with respect to the teeth of phase C. The teeth of phase B of the stator are displaced 12° clockwise with respect to those of phase A, or 12° counterclockwise with respect to those of phase C.

Fig. 5-50. Rotor and stator teeth arrangements of a multiple-stack three-phase variable-reluctance step motor. The rotor has 10 teeth.

It is easy to see that a minimum of three phases is necessary to give directional control. In general, four- and five-phase motors are also common, and motors with as many as eight phases are available commercially. For an n-phase motor, the stator teeth are displaced by 1/n of a tooth pitch from section to section.
The operating principle of the variable-reluctance stepping motor is straightforward. Let any one phase of the windings be energized with a dc signal. The magnetomotive force set up will position the rotor such that the teeth of the rotor section under the excited phase are aligned opposite the teeth on the excited phase of the stator. This is the position of minimum reluctance, and the motor is in a stable equilibrium.
If phase C is energized in Fig. 5-50, the rotor would be (in steady state) positioned as shown. It can also be visualized from the same figure that if the dc signal is switched to phase A, the rotor will rotate by 12°, clockwise, and the rotor teeth will be aligned opposite the teeth of phase A of the stator. Continuing in the same way, the input sequence CABCAB... will rotate the motor clockwise in steps of 12°. Reversing the input sequence will reverse the direction of rotation. That is, the input sequence CBACBA... will rotate the motor in the counterclockwise direction in steps of 12°.

The steady-state torque curve of each phase is approximately as shown in Fig. 5-51. The 0° line represents the axis of any tooth of the energized stator phase. The nearest rotor tooth axis will always lie within 18° on either side of this line. The corresponding starting torque exerted when this phase is energized can be seen in Fig. 5-51. The arrows mark the direction of motion of the rotor.

Fig. 5-51. Torque curve for one phase of a step motor.
Let positive angular displacements represent clockwise motion. Suppose also that phase C has been excited for a long time. This means that the initial condition of the rotor will be as shown in Fig. 5-50. If phase A is now energized and Fig. 5-51 represents the torque variation of phase A, the initial position of the rotor teeth will be at −12°. As soon as phase A is energized, the rotor will finally settle at 0° after some oscillations, assuming that the inertia and friction are such that there is no overshoot beyond the 18° point.

It may be noticed that the positions ±18° also represent equilibrium points, because in those positions the deflecting torque is zero. They are, however, positions of unstable equilibrium, since the slightest shift from such a position will send the motor straight to 0°. If on energizing one phase the rotor happens to lie exactly at the ±18° point, theoretically it will stay there. In practice, however, there will always be some mechanical imperfections in construction, and the resulting asymmetry will prevent any locking at the unstable point.
We now look upon the stepping motor from a single-step point of view and try to develop the equations that govern its performance. Several assumptions will be made initially to simplify the development. Subsequent modifications may be made if any of these assumptions are found to be invalidated. We start by writing the equation for the electrical circuit of the stator winding. Let

    e(t) = applied voltage per phase
    R = winding resistance per phase
    L(θ) = winding inductance per phase
    i(t) = current per phase
    θ(t) = angular displacement

The voltage-current equation of one stator phase is written

    e(t) = R i(t) + d[i L(θ)]/dt    (5-142)

or

    e(t) = R i(t) + L(θ) di/dt + i [dL(θ)/dθ](dθ/dt)    (5-143)
The term L(θ)(di/dt) represents the transformer electromotive force, or self-induced electromotive force, and the term i[dL(θ)/dθ](dθ/dt) represents the back emf due to the rotation of the rotor. We have assumed above that the inductance is a function of θ(t), the angular displacement, only; no dependence on the current has been assumed. This will be reflected in the torque developed by the motor. From the energy stored in the air gap, we know that the torque in a singly excited rotational system is given by

    T = ∂[W(i, θ)]/∂θ    (5-145)

where W is the energy of the system expressed explicitly in terms of i(t) and θ(t). Therefore,

    T = ½ i²(t) d[L(θ)]/dθ    (5-146)

This torque is then applied to the rotor, and the equation of motion is obtained as

    T = J_m d²θ/dt² + B_m dθ/dt    (5-147)

where J_m is the rotor inertia and B_m the viscous frictional coefficient; J_m and B_m may also include the effects of any load. To complete the torque expression of Eq. (5-146), we need to know the form of the inductance L(θ). In practice, the motor inductance as a function of displacement may be approximated by a cosine wave; that is,

    L(θ) = L1 + L2 cos rθ    (5-148)
where L1 and L2 are constants and r is the number of teeth on each rotor section. Substituting Eq. (5-148) into Eq. (5-146), we get

    T = −½ L2 r i²(t) sin rθ = −K i²(t) sin rθ    (5-149)

which is the sinusoidal approximation of the static torque curve of Fig. 5-51.

Now let us apply these equations to a three-phase motor. Let the equilibrium position be the situation when phase A is energized. Then the inductance and torque for phase A are given by

    L_A = L1 + L2 cos rθ    (5-150)
    T_A = −K i_A²(t) sin rθ    (5-151)

Fig. 5-52. Block diagram of the variable-reluctance step motor.
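To see how Eqs. (5-143), (5-147), and (5-151) interact during a single step, the sketch below integrates them with a forward-Euler loop for phase A alone, starting from the previous detent at −12°. All parameter values are invented, and this is only a qualitative illustration of the oscillatory settling described in the text:

```python
import math

R, L1, L2 = 10.0, 0.05, 0.02   # phase resistance and inductance terms (assumed)
r = 10                         # rotor teeth, as in the 10-tooth example
K = 0.5 * L2 * r               # torque coefficient of Eq. (5-149)
Jm, Bm = 1e-5, 1e-4            # rotor inertia and friction (assumed)
E = 5.0                        # dc voltage applied to phase A (assumed)

i, w = 0.0, 0.0
theta = math.radians(-12.0)    # rotor starts one step behind phase A's detent
dt = 1e-5
for _ in range(50000):         # simulate 0.5 s
    L = L1 + L2 * math.cos(r * theta)          # Eq. (5-150)
    dLdth = -L2 * r * math.sin(r * theta)
    didt = (E - R * i - i * dLdth * w) / L     # Eq. (5-143) solved for di/dt
    T = -K * i * i * math.sin(r * theta)       # Eq. (5-151)
    dwdt = (T - Bm * w) / Jm                   # Eq. (5-147)
    i += didt * dt
    w += dwdt * dt
    theta += w * dt

print(round(math.degrees(theta), 1))  # rotor oscillates in, settling near 0 deg
```

Because the torque of Eq. (5-151) is sinusoidal in rθ, the rotor behaves like a lightly damped pendulum about the detent, which is why the text warns that linearized step-motor models are usually unrealistic.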
For the 10-tooth rotor considered earlier, r = 10. Assuming that the sequence ABC represents forward motion, phase B has its equilibrium position 12° behind the reference point, and the equilibrium position of phase C is 12° ahead of the reference point. Therefore, the inductances and torques of phases B and C are written as follows:

    L_B = L1 + L2 cos (10θ − 120°)    (5-152)
    L_C = L1 + L2 cos (10θ + 120°)    (5-153)
    T_B = −K i_B²(t) sin (10θ − 120°)    (5-154)
    T_C = −K i_C²(t) sin (10θ + 120°)    (5-155)

The electrical circuits of the three phases are isolated, so that each phase has its differential equation of the form of Eq. (5-143). The total torque developed on the rotor is the algebraic sum of the torques of the three phases. Thus

    T = T_A + T_B + T_C    (5-156)

The nonlinearity of the torque equations precludes the use of linearized models for the portrayal of a step motor. Therefore, realistic studies of a step motor using the equations presented above can be made only through computer simulation. A block-diagram representation of the motor, which may be used for analog or digital computer simulation, is shown in Fig. 5-52.
5.10 Tension-Control System

The problem of proper tension control exists in a great variety of winding and unwinding industrial processes. Such industries as paper, plastic, and wire all have processes that involve unwinding and rewinding. For example, in the paper industry, the paper is first wound on a roll in a form that is nonsaleable, owing to nonuniform width and breaks. This roll is rewound to trim edges, splice breaks, and slit the paper to the required widths. Proper tension during this rewinding is mandatory for several reasons: slick paper will telescope if not wound tightly enough, and the width will vary inversely with the tension. Conversely, a roll wound at varying tensions develops internal stresses during storage that will cause it to explode. Similar examples could be cited, but the need for proper tension control is relatively simple to understand.

Most rewind systems contain an unwind roll, a windup roll driven by a motor, and some type of dancer and/or pinch-roller assemblies between the two. Some systems employ spring-with-damper idlers with feedback to motor drives to control tension. Some use tension-measuring devices and feedback to a motor-generator or brake on the unwind reel to hold tension at a constant value.

In this section a specific type of tension-control system for unwind processes is investigated. As shown in Fig. 5-53, the system has a dc-motor-driven windup reel. The tension of the web is controlled by control of the armature voltage e_a(t) of the motor.
Fig. 5-53. Tension-control system for a winding process.
The mathematical modeling of the system is conducted by writing the equations of the dc motor.

Armature:

    e_a(t) = R_a i_a(t) + L_a di_a(t)/dt + K_b ω_m(t)    (5-157)

where

    K_b = back emf constant of motor
    ω_m(t) = angular velocity of motor

Torque equation:

    T_m(t) = B_me ω_m(t) + d[J_me ω_m(t)]/dt + n r T(t)    (5-158)

where

    r = effective radius of windup reel
    T_m(t) = motor torque = K_i i_a(t)
    n = gear-train ratio
    T(t) = web tension
    J_me = J_m + n² J_L = equivalent inertia at motor shaft
    J_L = inertia of windup reel
    B_me = B_m + n² B_L = equivalent viscous friction coefficient at motor shaft
    B_L = viscous friction coefficient of windup reel

Since the web material is taken up by the windup reel as the process proceeds, the effective inertia J_L and the radius r of the windup reel increase as functions of time. This explains the reason the derivative of J_me ω_m is taken in Eq. (5-158). Furthermore, if h denotes the thickness of the web,

    dr/dt = (h/2π) n ω_m(t)    (5-159)

since the reel turns at the speed n ω_m(t). Thus

    dJ_L/dt = 2π ρ r³ dr/dt    (5-160)

where

    ρ = mass density of the web material per unit width    (5-161)
Assume now that Hooke's law is obeyed and that the web material has a coefficient of elasticity C. Then

    dT(t)/dt = C[v_w(t) − v_s(t)]    (5-162)

where v_s(t) is the web velocity at the pinch rolls. Assuming that the pinch rolls are driven at constant speed, v_s(t) = constant = V_s. Also,

    v_w(t) = r ω(t) = n r ω_m(t)    (5-163)

It is apparent now that because r and J_L are functions of time, Eq. (5-158) is a time-varying nonlinear differential equation. However, if the web is very thin, h ≈ 0, we may consider that over a certain time period r and J_L are constant. Then the linearized state equations of the system are written

    di_a(t)/dt = −(R_a/L_a)i_a(t) − (K_b/L_a)ω_m(t) + (1/L_a)e_a(t)    (5-164)
    dω_m(t)/dt = (K_i/J_me)i_a(t) − (B_me/J_me)ω_m(t) − (nr/J_me)T(t)    (5-165)
    dT(t)/dt = C n r ω_m(t) − C V_s    (5-166)
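A forward-Euler run of the linearized state equations (5-164) to (5-166) shows the web being pulled up to the pinch-roll speed; every parameter value below is invented for illustration:

```python
# Hypothetical parameters for the linearized tension model:
Ra, La = 1.0, 0.02        # armature resistance and inductance
Kb, Ki = 0.5, 0.5         # back emf and torque constants
Jme, Bme = 0.02, 0.01     # equivalent inertia and friction at motor shaft
n, r = 0.5, 0.1           # gear ratio and (frozen) reel radius
C, Vs = 1000.0, 1.0       # web elasticity and pinch-roll speed
ea = 25.0                 # constant armature voltage

ia = wm = T = 0.0
dt = 1e-4
for _ in range(200000):   # simulate 20 s
    dia = (-Ra * ia - Kb * wm + ea) / La
    dwm = (Ki * ia - Bme * wm - n * r * T) / Jme
    dT = C * (n * r * wm - Vs)
    ia += dia * dt
    wm += dwm * dt
    T += dT * dt

# In steady state the web speed n*r*wm must match the pinch-roll speed Vs,
# and the tension settles at a constant value.
print(round(n * r * wm, 3))   # 1.0
print(round(T, 1))            # steady-state tension
```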
5.11 Edge-Guide Control System

Whenever there is a tension-control problem there is a desire for edge-guide control. Therefore, the problem of edge guiding has become very important in modern paper manufacturing, steel strip processing lines, flexographic and textile industries, and similar fields. To maintain optimum sheet quality and maximum process line speed, the moving web must be maintained at a constant lateral-edge position. In general, there are many different ways of measuring and tracking the edge of a moving web. However, to achieve stable and accurate edge guiding, a feedback control system should be used.

The schematic diagram of an edge-guide system using the pivot-roll method is shown in Fig. 5-54. The pivot roll is controlled to rotate about the pivot point in guiding the direction of the web. The source of controlling the motion of the pivot roll may be a dc motor coupled to a lead screw or rack and pinion, or a linear actuator. Figure 5-55 shows the side view of the edge-guide system. The axes of rollers 1 and 2 are assumed to be fixed or uncontrolled. S1 and S2 represent sensors that are placed at the indicated points to sense the centering of the web at the respective points. Let

    v(t) = linear velocity of web
    z_R(t) = initial error of web position in the z direction at roll 1
    z1(t) = error of web position in the z direction at the leading side of the pivot roll (see Figs. 5-54 and 5-55)

Fig. 5-54. Schematic diagram of an edge-guide control system.

Fig. 5-55. Side view of an edge-guide system.
Assuming that there is no slippage when the web moves over the pivot roll, the following equation is written for v(t) and z1(t):

    dz1(t)/dt = v(t) tan α    (5-167)

If the angle α is small, from Fig. 5-54 it is seen that

    tan α ≈ [z_R(t) − z1(t)]/m1    (5-168)

Thus Eq. (5-167) can be written approximately as

    dz1(t)/dt = [v(t)/m1][z_R(t) − z1(t)]    (5-169)

If the linear velocity of the web is constant, v(t) = v, Eq. (5-169) is written

    dz1(t)/dt + (v/m1)z1(t) = (v/m1)z_R(t)    (5-170)

Taking the Laplace transform on both sides of Eq. (5-170), the transfer relation between z1(t) and z_R(t) is

    Z1(s)/Z_R(s) = 1/(1 + τ1 s)    (5-171)

where τ1 = m1/v. Assuming that there is no stretching in the web, from Fig. 5-55,

    [z2(t) − z1(t)]/m2 = [z_R(t) − z1(t)]/m1    (5-172)
or

    z_2(t) = z_1(t) + (m_2/m_1)[z_R(t) - z_1(t)]          (5-173)

Taking the Laplace transform on both sides of Eq. (5-173) and solving for Z_R(s), we have

    Z_R(s) = Z_1(s) + (m_1/m_2)[Z_2(s) - Z_1(s)]          (5-174)

Substitution of Z_R(s) from Eq. (5-174) into Eq. (5-171) gives the transfer relation between Z_1(s) and Z_2(s),

    Z_1(s) = Z_2(s)/[1 + (m_2/m_1)τ_1 s]          (5-175)
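The first-order lag of Eq. (5-171) can be checked numerically. The sketch below is ours, not the text's; the values of v and m_1 are arbitrary. It integrates Eq. (5-170) by Euler's method for a unit step in z_R and compares the result with the analytic step response of 1/(1 + τ_1 s):

```python
# Sketch: Euler integration of Eq. (5-170), dz1/dt = (v/m1)*(zR - z1),
# for a unit step in zR, compared with the analytic step response of the
# first-order lag 1/(1 + tau1*s).  v and m1 are arbitrary illustration values.
import math

v, m1 = 10.0, 5.0
tau1 = m1 / v              # time constant of Eq. (5-171)
zR = 1.0                   # unit step in the upstream edge error

dt, z1, t = 1e-4, 0.0, 0.0
while t < 5 * tau1:
    z1 += dt * (v / m1) * (zR - z1)   # Euler step of Eq. (5-170)
    t += dt

analytic = zR * (1.0 - math.exp(-t / tau1))
print(abs(z1 - analytic) < 1e-3)
```

After five time constants the numerical solution is within a fraction of a percent of the analytic first-order response.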
When the pivot roll is rotated by an angle θ_L from its reference position, the error z_3(t) will be affected by approximately D sin θ_L(t). With no slippage of the web over the pivot roll, the error z_3(t) due to the error z_1(t) is written

    z_3(t) = z_1(t - T) - D sin θ_L(t)          (5-176)

where T = πD/2v, and for small θ_L(t), sin θ_L(t) is approximated by θ_L(t). Taking the Laplace transform on both sides of Eq. (5-176) yields

    Z_3(s) = e^{-Ts} Z_1(s) - D θ_L(s)          (5-177)
Similar to the relationship between z_R and z_1, the transfer function between z_3(t) and z_5(t) is

    Z_5(s)/Z_3(s) = 1/(1 + τ_3 s)          (5-178)

where τ_3 = m_3/v. Also, in analogy to Eq. (5-174), the transfer relation between Z_4(s) and Z_3(s) is

    Z_4(s) = Z_3(s)[1 + (m_4/m_3)τ_3 s]/(1 + τ_3 s)          (5-179)
Now consider that the drive motor of the system is an armature-controlled dc motor with negligible inductance. The equation of the motor armature is

    I_a(s) = [E_a(s) - sK_b Θ_m(s)]/R_a          (5-180)

where I_a(s) is the armature current, K_b the back emf constant, R_a the armature resistance, and Θ_m(s) the motor displacement. The torque is given by

    T_m(s) = K_i I_a(s)          (5-181)

where K_i is the torque constant. The torque equation of the motor and load is

    T_m(s) = (J_m s^2 + B_m s)Θ_m(s) + (L/2π)F(s)          (5-182)
where

    J_m  = inertia of motor
    B_m  = viscous frictional coefficient of motor
    L    = lead of screw
    F(s) = transform of force in the x direction

and

    F(s) = (1/r)(J_L s^2 + B_L s + K_L)Θ_L(s)          (5-183)

where

    J_L = inertia of pivot roll about pivot point
    B_L = viscous frictional coefficient at pivot point
    K_L = spring constant at pivot roll due to tension of web

Combining Eqs. (5-182) and (5-183), we have

    Θ_m(s)/T_m(s) = 1/(J_me s^2 + B_me s + K_me)          (5-184)

where

    J_me = J_m + (L/2πr)^2 J_L          (5-185)
    B_me = B_m + (L/2πr)^2 B_L          (5-186)
    K_me = (L/2πr)^2 K_L          (5-187)

Also,

    X(s) = r Θ_L(s)          (5-188)
    Θ_L(s) = (L/2πr)Θ_m(s)          (5-189)
A block diagram of the overall system is drawn as shown in Fig. 5-56, using Eqs. (5-175), (5-177), (5-179), (5-180), (5-181), (5-184), and (5-189). The blocks with transfer functions H_p(s) and H_c(s) represent possible locations of controllers of the edge-guide system. The design problem may involve the determination of the transfer functions H_p(s) and H_c(s) so that for a given error z_2 the error z_3 is minimized.
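As a numeric illustration of the reflected parameters of Eqs. (5-185)-(5-187) — ours, not the text's, with all component values invented — the following sketch computes J_me, B_me, and K_me and hence the denominator of Eq. (5-184):

```python
# Sketch: effective parameters of Eqs. (5-185)-(5-187) reflected to the motor
# shaft through the lead screw and pivot arm.  All numbers are invented.
import math

Jm, Bm = 0.02, 0.01          # motor inertia and friction
JL, BL, KL = 0.5, 0.2, 3.0   # pivot-roll inertia, friction, spring constant
L, r = 0.01, 0.1             # lead of screw, pivot arm radius

n = L / (2 * math.pi * r)    # reflection ratio L/(2*pi*r)
Jme = Jm + n**2 * JL         # Eq. (5-185)
Bme = Bm + n**2 * BL         # Eq. (5-186)
Kme = n**2 * KL              # Eq. (5-187)

# Denominator coefficients of theta_m(s)/T_m(s) in Eq. (5-184):
print(Jme, Bme, Kme)
```

Because the reflection ratio enters squared, a fine lead screw makes the load inertia almost invisible at the motor shaft.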
5.12  Systems with Transportation Lags
Thus far a great majority of the systems considered have transfer functions that are quotients of polynomials. However, in the edge-guide control system of Section 5.11, the relation between the variables z_1(t) and z_3(t) is that of a pure time delay; Z_1(s) and Z_3(s) are related through an exponential transfer function e^{-Ts}. In general, pure time delays may be encountered in various types of systems, especially systems with hydraulic, pneumatic, or mechanical transmissions. In these systems the output will not begin to respond to an input until after a given time interval. Figure 5-57 illustrates examples in which transportation lags are observed.

[Figure: (a) two solutions, A and B, mixed at a valve, with a metering point a distance d downstream; (b) a steel plate between rollers, with a thickness-measuring gauge a distance d from the rollers.]

Fig. 5-57. Physical systems with transportation lags.

Figure 5-57(a) outlines an arrangement in which two different fluids are to be mixed in appropriate proportions. To assure that a homogeneous solution is measured, the monitoring point is located some distance from the mixing point. A transportation lag therefore exists between the mixing point and the place where the change in concentration is detected. If the rate of flow of the mixed solution is v inches per second and d is the distance between the mixing and the metering points, the time lag T is given by

    T = d/v  sec          (5-190)

If it is assumed that the concentration at the mixing point is c(t) and that it is reproduced without change T seconds later at the monitoring point, the
measured quantity is

    b(t) = c(t - T)          (5-191)

The Laplace transform of the last equation is

    B(s) = e^{-Ts} C(s)          (5-192)

Thus the transfer function between b(t) and c(t) is

    B(s)/C(s) = e^{-Ts}          (5-193)

The arrangement shown in Fig. 5-57(b) may be thought of as a thickness control of the rolling of steel plates. As in the case above, the transfer function between the thickness at the rollers and the measuring point is given by Eq. (5-193).
Other examples of transportation lags are found in human beings as control systems, where action and reaction are always accompanied by pure time delays. The operation of the sample-and-hold device of a sampled-data system closely resembles a pure time delay; it is sometimes approximated by a simple time-lag term, e^{-Ts}.
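A pure transportation lag like Eq. (5-193) can be mimicked in discrete time with a shift register of N = T/Δt samples. The following sketch (ours, with arbitrary values of Δt and T) verifies that the output is the input delayed by exactly N samples:

```python
# Sketch: a pure transportation lag b(t) = c(t - T) modeled as a shift
# register of N = T/dt samples.  Values of dt and T are arbitrary.
from collections import deque

dt, T = 0.01, 0.25
N = round(T / dt)                 # delay in samples (here 25)

buf = deque([0.0] * N, maxlen=N)  # holds the N most recent input samples
c_hist, b_hist = [], []
for k in range(200):
    c = 1.0 if k >= 10 else 0.0   # step applied at the 10th sample
    b = buf[0]                    # oldest stored sample leaves the line
    buf.append(c)                 # newest sample enters (oldest is dropped)
    c_hist.append(c)
    b_hist.append(b)

# The output equals the input shifted by exactly N samples
print(all(b_hist[k] == c_hist[k - N] for k in range(N, 200)))
```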
In terms of state variables, a system with pure time delay can no longer be described by the matrix state equation

    dx(t)/dt = Ax(t) + Bu(t)          (5-194)

A general state description of a system containing time lags is given by the following matrix differential-difference equation:

    dx(t)/dt = Σ_{i=1}^{p} A_i x(t - T_i) + Σ_{j=1}^{q} B_j u(t - T_j)          (5-195)

where T_i and T_j are fixed time delays. In this case Eq. (5-195) represents a general situation where time delays may exist in the inputs as well as in the states.
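Differential-difference equations of the form of Eq. (5-195) can be integrated numerically by storing past samples of x and u. The scalar sketch below (ours; one delayed state term, one delayed input, and invented coefficients) illustrates the idea:

```python
# Sketch: Euler integration of a scalar version of Eq. (5-195),
#     dx/dt = a0*x(t) + a1*x(t - T1) + b0*u(t - T2),
# keeping past samples of x and u so the delayed terms can be looked up.
# All coefficients and delays are invented illustration values.
a0, a1, b0 = -1.0, -0.5, 1.0
T1, T2 = 0.2, 0.1
dt = 0.001
n1, n2 = round(T1 / dt), round(T2 / dt)

steps = int(8.0 / dt)
x = [0.0] * (steps + 1)   # zero initial history for x
u = [1.0] * (steps + 1)   # unit step input

for k in range(steps):
    x_del = x[k - n1] if k >= n1 else 0.0
    u_del = u[k - n2] if k >= n2 else 0.0
    x[k + 1] = x[k] + dt * (a0 * x[k] + a1 * x_del + b0 * u_del)

# With a0 + a1 < 0 the response settles; setting dx/dt = 0 gives
# x_ss = -b0/(a0 + a1) = 2/3.
print(round(x[-1], 3))
```

Note that the zero "history" before t = 0 is itself part of the problem statement for a delayed system: the initial condition is a function segment, not a single point.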
5.13  Sun-Seeker System

In this section we shall model a sun-seeker control system whose purpose is to control the attitude of a space vehicle so that it will track the sun with high accuracy. In the system described, tracking of the sun in one plane is accomplished. A schematic diagram of the system is shown in Fig. 5-58. The principal elements of the error discriminator are two small rectangular silicon photovoltaic cells mounted behind a rectangular slit in an enclosure. The cells are mounted in such a way that when the sensor is pointed at the sun, the beam of light from the slit overlaps both cells. The silicon cells are used as current sources and connected in opposite polarity to the input of an operational amplifier. Any difference in the short-circuit current of the two cells is sensed and amplified by the operational amplifier. Since the current of each cell is proportional to the illumination on the cell, an error signal will be present at the output of the amplifier when the light from the slit is not precisely centered on the cells. This error
voltage, when fed to the servoamplifier, will cause the servo to drive the system back into alignment.

Fig. 5-58. Schematic diagram of a sun-seeker system.

A description of each part of the system is given as follows.
Coordinate System

The center of the coordinate system is considered to be at the output gear of the system. The reference axis is taken to be the fixed frame of the dc motor, and all rotations are measured with respect to this axis. The solar axis, or the line from the output gear to the sun, makes an angle θ_r with respect to the reference axis, and θ_o denotes the vehicle axis with respect to the reference axis. The objective of the control system is to maintain the error between θ_r and θ_o, α, near zero:

    α = θ_r - θ_o          (5-196)
Figure 5-59 shows the coordinate system described.

Fig. 5-59. Coordinate system of the sun-seeker system.

Error Discriminator

When the vehicle is aligned perfectly with the sun, α = 0 and I_a = I_b = I, or I_a - I_b = 0. From the geometry of the sun's rays and the photovoltaic cells shown in Fig. 5-58, we have

    oa = W/2 + L tan α          (5-197)
    ob = W/2 - L tan α          (5-198)
where oa denotes the width of the sun's ray that shines on cell A, and ob the same on cell B, for a given α. Since the current I_a is proportional to oa, and I_b to ob, we have

    I_a = I[1 + (2L/W) tan α]          (5-199)

and

    I_b = I[1 - (2L/W) tan α]          (5-200)

for 0 ≤ tan α ≤ W/2L. For W/2L ≤ tan α ≤ (C - W/2)/L, the sun's ray falls completely on cell A, and I_a = 2I, I_b = 0. For (C - W/2)/L ≤ tan α ≤ (C + W/2)/L, I_a decreases linearly from 2I to zero. I_a = I_b = 0 for tan α ≥ (C + W/2)/L. Therefore, the error discriminator may be represented by the nonlinear characteristic of Fig. 5-60, where for small angle α, tan α has been approximated by α on the abscissa.

Fig. 5-60. Nonlinear characteristic of the error discriminator. The abscissa is tan α but is approximated by α for small values of α.
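The piecewise characteristic just described is easy to encode. The following sketch is ours, with arbitrary values of I, W, L, and C (chosen with C > W so the regions occur in the order given in the text):

```python
# Sketch: piecewise cell currents implied by Eqs. (5-199) and (5-200) and
# the region boundaries described in the text.  I, W, L, C are arbitrary.
I, W, L, C = 1.0, 2.0, 4.0, 3.0

def ia(tan_a):
    """Current of cell A for tan(alpha) >= 0."""
    if tan_a <= W / (2 * L):          # ray partly on both cells
        return I * (1 + (2 * L / W) * tan_a)
    if tan_a <= (C - W / 2) / L:      # ray completely on cell A
        return 2 * I
    if tan_a <= (C + W / 2) / L:      # ray sliding off cell A
        top = (C + W / 2) / L
        return 2 * I * (top - tan_a) / (W / L)
    return 0.0                        # ray entirely off the cells

def ib(tan_a):
    if tan_a <= W / (2 * L):
        return I * (1 - (2 * L / W) * tan_a)
    return 0.0

print(ia(0.0) - ib(0.0), ia(0.25), ib(0.25))
```

The difference ia − ib is the error-discriminator output: zero at perfect alignment, linear for small α, saturated at 2I, and collapsing back to zero for large misalignment, as in Fig. 5-60.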
Operational Amplifier

The schematic diagram of the operational amplifier is shown in Fig. 5-61. Summing the currents at point G and assuming that the amplifier does not draw any current, we have

    I_a - I_b - e_g/R - e_g/R + (e_o - e_g)/R_F = 0          (5-201)

Fig. 5-61. Operational amplifier.

Since e_o = -Ae_g, e_g = -e_o/A, Eq. (5-201) becomes

    I_a - I_b + e_o/R_F + (1/R + 1/R + 1/R_F)(e_o/A) = 0          (5-202)

If A approaches infinity, as in operational amplifiers, A may reach 10^6; then Eq. (5-202) is written

    e_o = -R_F(I_a - I_b)          (5-203)

This equation gives the transfer relationship between I_a - I_b and e_o.
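The limiting step from Eq. (5-202) to Eq. (5-203) can be checked numerically. In this sketch (ours; all component and current values invented) the finite-gain solution for e_o approaches -R_F(I_a - I_b) as A grows:

```python
# Sketch: the ideal relation e_o = -RF*(Ia - Ib) of Eq. (5-203) obtained as
# the large-A limit of the balance in Eq. (5-202).  Values are arbitrary.
RF, R = 1.0e4, 1.0e3       # feedback and input resistances, ohms
Ia, Ib = 2.0e-3, 1.5e-3    # cell currents, amperes

def e_out(A):
    # Solve (Ia - Ib) + e_o/RF + (1/R + 1/R + 1/RF)*(e_o/A) = 0 for e_o
    return -(Ia - Ib) / (1.0 / RF + (2.0 / R + 1.0 / RF) / A)

ideal = -RF * (Ia - Ib)
print(e_out(1e6), ideal)
```

With A = 10^6 the finite-gain output differs from the ideal value by well under a millivolt, which is why the ideal relation is used in the rest of the analysis.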
Servoamplifier

The gain of the servoamplifier is -K_s. With reference to Fig. 5-58, the output of the servoamplifier is expressed as

    e_a = -K_s e_o          (5-204)
Tachometer

The output voltage of the tachometer, e_T, is related to the angular velocity of the motor through the tachometer constant K_T; that is,

    e_T = K_T ω_m          (5-205)

The angular position of the output gear is related to the motor position through the gear ratio 1/n. Thus

    θ_o = (1/n)θ_m          (5-206)
Armature-Controlled DC Motor

The armature-controlled dc motor has been modeled earlier. The equations are

    e_a = R_a i_a + e_b          (5-207)
    e_b = K_b ω_m          (5-208)
    T_m = K_i i_a          (5-209)
    T_m = J dω_m/dt + B ω_m          (5-210)

where J and B are the inertia and viscous friction coefficient seen at the motor shaft. The inductance in the motor armature is neglected in Eq. (5-207). A block diagram that characterizes all the functional relations of the system is shown in Fig. 5-62.
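Eliminating i_a, e_b, and T_m from Eqs. (5-207)-(5-210) leaves a single first-order equation for ω_m. The sketch below (ours, with invented parameter values) integrates it and checks the steady-state speed obtained by setting dω_m/dt = 0:

```python
# Sketch: Eqs. (5-207)-(5-210) with inductance neglected reduce to
#     J*dw/dt = Ki*(ea - Kb*w)/Ra - B*w,
# integrated here by Euler's method.  All parameter values are invented.
Ra, Kb, Ki = 1.0, 0.05, 0.05
J, B = 0.01, 0.001
ea = 10.0                  # constant applied armature voltage

dt, w, t = 1e-4, 0.0, 0.0
while t < 20.0:
    ia = (ea - Kb * w) / Ra            # Eqs. (5-207), (5-208)
    w += dt * (Ki * ia - B * w) / J    # Eqs. (5-209), (5-210)
    t += dt

w_ss = Ki * ea / (Ra * B + Ki * Kb)    # steady-state speed, dw/dt = 0
print(round(w, 1), round(w_ss, 1))
```

The back emf term K_i K_b / R_a acts like extra viscous friction, which is why even a lightly damped motor settles to a finite no-load speed.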
PROBLEMS

5.1. Write the force or torque equations for the mechanical systems shown in Fig. P5-1. Write the state equations from the force or torque equations.
Figure P5-1.

5.2. Write a set of state equations for the mechanical system shown in Fig. P5-2. On the first try, one will probably end up with four state equations.
(a) Write the state equations in vector-matrix form with the state variables defined as above.
(b) Redefine the state variables so that there are only three state equations.
(c) Draw state diagrams for both cases.
(d) Derive the transfer function Ω_2(s)/T(s) for each case, and compare the results.
(e) Determine the controllability of the system. Does the fact that the system can be modeled by three state equations mean that the four-state model is uncontrollable? Explain.
5.3. For the system shown in Fig. P5-3, determine the transfer function E_o(s)/T_m(s). The potentiometer rotates through 10 turns, and the voltage applied across the potentiometer terminals is E volts.

[Figure: motor torque T_m(t) applied to a potentiometer; B_2 = viscous friction coefficient of potentiometer contact.]

Figure P5-3.
5.4. Write the torque equations of the gear-train system shown in Fig. P5-4. The moments of inertia of the gears and shafts are J_1, J_2, and J_3. T(t) is the applied torque. N denotes the number of gear teeth. Assume rigid shafts.

5.5. Figure P5-5 shows the diagram of an epicyclic gear train.
Figure P5-5.

(a) Using the reference directions of the angular velocity variables as indicated, write algebraic equations that relate these variables.
5.6. The block diagram of the automatic braking control of a high-speed train is shown in Fig. P5-6(a), where

    V_r = voltage representing desired speed
    v   = velocity of train
    K   = amplifier gain = 100
    M   = mass of train = 5 × 10^4 lb/ft/sec^2
    K_t = tachometer constant = 0.15 volt/ft/sec
    e_t = K_t v

[Fig. P5-6(a): amplifier (gain K) driving a brake; train of mass M with velocity v(t); tachometer feedback.]

Figure P5-6.

The force characteristics of the brake are shown in Fig. P5-6(b) when e_b = 1 volt. (Neglect all frictional forces.)
(a) Draw a block diagram of the system and include the transfer function of each block.
(b) Determine the closed-loop transfer function between V_r and the velocity v of the train.
(c) If the steady-state velocity of the train is to be maintained at 20 ft/sec, what should be the value of V_r?
5.7. Figure P5-7 illustrates a winding process of newsprint. The system parameters and variables are defined as follows:

Figure P5-7.

    e_a = applied voltage
    R_a = armature resistance of dc motor
    L_a = armature inductance of dc motor
    i_a = armature current
    e_b = back emf of dc motor
    T_m = motor torque = K_m i_a
    J_m = motor inertia
    B_m = motor friction coefficient
    J_L = inertia of windup reel
    ω_m = angular velocity of dc motor
    ω   = angular velocity of windup reel
    T_L = torque at the windup reel
    r   = effective radius of windup reel
    V_o = linear velocity of web at windup reel
    T   = tension
    V_s = linear velocity of web at input pinch rolls

Assume that the linear velocity at the input pinch rolls, V_s, is constant. The elasticity of the web material is assumed to satisfy Hooke's law; that is, the distortion of the material is directly proportional to the force applied, and the proportional constant is K (force/displacement).
(a) Write the nonlinear state equations for the system using i_a, ω_m, and T as state variables.
(b) Assuming that r is constant, draw a state diagram of the system with e_a and V_s as inputs.
5.8. Write the state equations and output equation for the edge-guide control system whose block diagram is shown in Fig. 5-56.

5.9. The schematic diagram of a steel rolling process is shown in Fig. P5-9.

[Figure: a two-phase induction motor (θ_m(t), T_m(t)) drives a gear box and linear actuator positioning rollers on a steel plate; signals r(t), a(t), b(t), c(t) and gain K_s are indicated.]

Figure P5-9.

(a) Describe the system by a set of differential-difference equations of the form of Eq. (5-195).
(b) Derive the transfer function between c(t) and r(t).
5.10. Figure P5-10(a) shows an industrial process in which a dc motor drives a capstan and tape assembly. The objective is to drive the tape at a certain constant speed. Another tape, driven by a separate source, is made to be in contact with the primary tape by the action of a pinch roll over certain periods of time. When the two tapes are in contact, we may consider that a constant frictional torque of T_F is seen at the load. The following system parameters are defined:

    e_t = applied motor voltage, volts
    i_a = armature current, amperes
    e_b = back emf voltage = K_b ω_m, volts
    K_b = back emf constant = 0.052 volt/rad/sec
    K_m = torque constant = 10 oz-in./amp
    T_m = torque, oz-in.
    θ_m = motor displacement, rad
    ω_m = motor speed, rad/sec
    R_a = motor resistance = 1 Ω
    J_m = motor inertia = 0.1 oz-in./rad/sec^2 (includes capstan inertia)
    B_m = motor viscous friction = 0.1 oz-in./rad/sec
    K_L = spring constant of tape = 100 oz-in./rad (converted to rotational)
    J_L = load inertia = 0.1 oz-in./rad/sec^2

[Fig. P5-10(b): blocks G_1(s) and G_2(s) with integral control; feedback transducer gain 1 volt/rad/sec.]

Figure P5-10.

(a) Write the equations of the system in vector-matrix state equation form.
(b) Draw a state diagram for the system.
(c) Derive the transfer function for Ω_L(s) with E_t(s) and T_F(s) as inputs.
(d) If a constant voltage e_t(t) = 10 V is applied to the motor, find the steady-state speed of the motor in rpm when the pinch roll is not activated. What is the steady-state speed of the load?
(e) When the pinch roll is activated, making the two tapes in contact, the constant friction torque is T_F = 1 oz-in. Find the change in the steady-state speed ω_L when e_t = 10 V.
(f) To overcome the effect of the frictional torque T_F, it is suggested that a closed-loop system be formed as shown by the block diagram in Fig. P5-10(b). In this case the motor speed is fed back and compared with the reference input. The closed-loop system should give accurate speed control, and the integral control should give better regulation to the frictional torque. Draw a state diagram for the closed-loop system.
(g) Determine the steady-state speed of the load when the input is 1 V. First consider that the pinch roll is not activated, and then that it is activated.
5.11. This problem deals with the attitude control of a guided missile. When traveling through the atmosphere, a missile encounters aerodynamic forces that usually tend to cause instability in the attitude of the missile. The basic concern from the flight control standpoint is the lateral force of the air, which tends to rotate the missile about its center of gravity. If the missile centerline is not aligned with the direction in which the center of gravity C is traveling, as shown in Fig. P5-11, with the angle θ (θ is also called the angle of attack), a side force is produced by the resistance of the air through which the missile is traveling. The total force F_α may be considered to be centered at the center of pressure P. As shown in Fig. P5-11, this side force has a tendency to cause the missile to tumble, especially if the point P is in front of the center of gravity C. Let the angular acceleration of the missile about the point C, due to the side force, be denoted by α_F. Normally, α_F is directly proportional to the angle of attack θ and is given by

    α_F = aθ

where a is a constant described by

    a = K_F d_1/J

K_F is a constant that depends on such parameters as dynamic pressure, velocity of the missile, air density, and so on, and

    J   = moment of inertia of the missile about C
    d_1 = distance between C and P

Figure P5-11.

The main object of the flight control system is to provide the stabilizing action to counter the effect of the side force. One of the standard control means is to use gas injection at the tail of the missile to deflect the direction of the rocket engine thrust T, as shown in Fig. P5-11.
(a) Write a torque differential equation to relate among T, δ, θ, and the system parameters. Assume that δ is very small.
(b) Assume that T is constant and find the transfer function θ(s)/δ(s) for small δ.
(c) Repeat (a) and (b) with the points C and P interchanged.

5.12. (a) Draw a state diagram for the tension-control system of Fig. 5-53, using the state equations of Eqs. (5-164), (5-165), and (5-166).
(b) Write the relation among E_a(s), V_s(s), and T(s), with E_a(s) and V_s(s) as inputs and T(s) as the output.
5.13. The following equations describe the motion of an electric train in a traction system:

    dx(t)/dt = v(t)
    dv(t)/dt = -k(v) - g(x) + T(t)

where

    x(t) = linear displacement of train
    v(t) = linear velocity of train
    k(v) = train resistance force [odd function of v, with the properties k(0) = 0 and dk(v)/dv ≥ 0]
    g(x) = force due to gravity for a nonlevel track or due to curvature of track
    T(t) = tractive force

The electric motor that provides the traction force is described by the following relations:

    e(t) = K_b φ(t)v(t) + R i_a(t)
    T(t) = K_m φ(t) i_a(t)

where

    R    = armature resistance
    φ(t) = magnetic flux = K_f i_f(t)
    e(t) = applied voltage
    K_m, K_b = proportional constants
    i_a(t) = armature current
    i_f(t) = field current

(a) Consider that the motor is a dc series motor, so that i_a(t) = i_f(t); g(x) = 0, k(v) = Bv(t), and R = 0. The voltage e(t) is the input. Show that the system is described by a set of nonlinear state equations in x(t) and v(t).
(b) Consider that i_a(t) is the input, g(x) = 0, k(v) = Bv(t), and derive the state equations of the system.
(c) Consider that i_f(t) is the input, g(x) = 0, k(v) = Bv(t), and derive the state equations of the system.
Figure P5-14 shows a gear-coupled mechanical system. (a) Find the optimum gear ratio n such that the load acceleration, (b)
maximized. Repeat part
(a)
when
the load drag torque
TL
is
zero.
is
t
t
Chap. 5
Problems
/
257
N,
= \
1
Jl
J
1
V «1
=
/ •4
V N, EE
Figure P5-14.

5.15. (a) Write the torque equations of the system in Fig. P5-15 in the form

    d^2θ/dt^2 + J^{-1}Kθ = 0

where θ is a 3 × 1 vector that contains the displacement variables θ_1, θ_2, and θ_3; J is the inertia matrix, and K contains all the spring constants. Determine J and K.

Figure P5-15.
(b) Show that the torque equations can be expressed as a set of state equations of the form

    dx/dt = Ax

where

    A = [    0      I ]
        [ -J^{-1}K  0 ]

(c) Consider the following set of parameters with consistent units: K_1 = 1000, K_2 = 3000, K_3 = 1000, J_1 = 1, J_2 = 5, and J_3 = 2. Find the matrix A.
5.16. Figure P5-16 shows the layout of the control of the unwind process of a cable reel, with the object of maintaining constant linear cable velocity. Control is established by measuring the cable velocity, comparing it with a reference signal, and using the error to generate a control signal. A tachometer is used to sense the cable velocity. To maintain a constant linear cable velocity, the angular reel velocity must increase as the cable unwinds; that is, as the effective radius of the reel decreases. Let

    D   = cable diameter = 0.1 ft
    W   = width of reel = 2 ft
    R   = effective radius of reel; 2 ft for an empty reel
    R_f = effective radius of reel (full reel) = 4 ft
    J_R = moment of inertia of reel = 18R^4 - 200 ft-lb-sec^2
    v_R = linear speed of cable, ft/sec
    e_t = output voltage of tachometer, volts
    T_m(t) = motor torque, ft-lb
    e_m(t) = motor input voltage, volts
    K   = amplifier gain

Figure P5-16.

Motor inertia and friction are negligible. The tachometer transfer function is

    E_t(s)/V_R(s) = 1/(1 + 0.5s)

and the motor transfer function is

    T_m(s)/E_m(s) = 50/(s + 1)

(a) Write an expression to describe the change of the radius of the reel R as a function of θ_R.
(b) Between layers of the cable, R and J_R are assumed to be constant, and the system is considered linear. Draw a block diagram for the system and indicate all the transfer functions. The input is e_r and the output is v_R.
(c) Derive the closed-loop transfer function V_R(s)/E_r(s).
6  Time-Domain Analysis of Control Systems

6.1  Introduction

Since time is used as an independent variable in most control systems, it is usually of interest to evaluate the time response of the systems. In the analysis problem, a reference input signal is applied to a system, and the performance of the system is evaluated by studying the response in the time domain. For instance, if the objective of the control system is to have the output variable follow the input signal as closely as possible, it is necessary to compare the input and the output as functions of time.
The time response of a control system is usually divided into two parts: the transient response and the steady-state response. If c(t) denotes a time response, then, in general, it may be written

    c(t) = c_t(t) + c_ss(t)          (6-1)

where

    c_t(t)  = transient response
    c_ss(t) = steady-state response
The definition of the steady state has not been entirely standardized. In circuit analysis it is sometimes useful to define a steady-state variable as being constant with respect to time. In control systems applications, however, when a response has reached its steady state it can still vary with time. In control systems the steady-state response is simply the fixed response when time reaches infinity. Therefore, a sine wave is considered as a steady-state response because its behavior is fixed for any time interval, as when time approaches infinity. Similarly, if a response is described by c(t) = t, it may be defined as a steady-state response.
Transient response is defined as the part of the response that goes to zero as time becomes large. Therefore, c_t(t) has the property

    lim_{t→∞} c_t(t) = 0          (6-2)

It can also be stated that the steady-state response is that part of the response which remains after the transient has died out. All control systems exhibit transient phenomena to some extent before a steady state is reached. Since inertia, mass, and inductance cannot be avoided in physical systems, the responses cannot follow sudden changes in the input instantaneously, and transients are usually observed.
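As a concrete illustration of Eqs. (6-1) and (6-2) — our example, not the text's — the unit-step response c(t) = 1 - e^{-t} splits into c_ss(t) = 1 and c_t(t) = -e^{-t}:

```python
# Sketch: decomposition c(t) = c_t(t) + c_ss(t), Eq. (6-1), for the
# response c(t) = 1 - exp(-t)  (our example, not the text's).
import math

def c(t):    return 1.0 - math.exp(-t)
def c_ss(t): return 1.0               # steady-state part (constant here)
def c_tr(t): return -math.exp(-t)     # transient part; by Eq. (6-2) -> 0

for t in (0.0, 1.0, 10.0):
    assert abs(c(t) - (c_tr(t) + c_ss(t))) < 1e-12

print(abs(c_tr(20.0)) < 1e-8)   # the transient has essentially died out
```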
The transient response of a control system is of importance, since it is part of the dynamic behavior of the system; and the deviation between the response and the input, or the desired response, before the steady state is reached must be closely watched. The steady-state response, when compared with the input, gives an indication of the final accuracy of the system. If the steady-state response of the output does not agree exactly with the steady state of the input, the system is said to have a steady-state error.

6.2  Typical Test Signals for Time Response of Control Systems
Unlike many electrical circuits and communication systems, the input excitations to many practical control systems are not known ahead of time. In many cases, the actual inputs of a control system may vary in random fashion with respect to time. For instance, in a radar tracking system, the position and speed of the target to be tracked may vary in so unpredictable a manner that they cannot be expressed deterministically by a mathematical expression. This poses a problem for the designer, since it is difficult to design the control system so that it will perform satisfactorily for any input signal. For the purposes of analysis and design, it is necessary to assume some basic types of input functions so that the performance of a system can be evaluated with respect to these test signals. By selecting these basic test signals properly, not only is the mathematical treatment of the problem systematized, but the responses due to these inputs allow the prediction of the system's performance for other, more complex inputs. In a design problem, performance criteria may be specified with respect to these test signals so that a system may be designed to meet the criteria.

When the response of a linear time-invariant system is analyzed in the frequency domain, a sinusoidal input with variable frequency is used. When the input frequency is swept from zero to beyond the significant range of the system characteristics, curves in terms of the amplitude ratio and phase between input and output are drawn as functions of frequency. It is possible to predict the time-domain behavior of the system from its frequency-domain characteristics. To facilitate the time-domain analysis, the following deterministic test signals are often used.
Step input function. The step input function represents an instantaneous change in the reference input variable. For example, if the input is the angular position of a mechanical shaft, the step input represents the sudden rotation of the shaft. The mathematical representation of a step function is

    r(t) = R,   t > 0
    r(t) = 0,   t < 0          (6-3)

where R is a constant. Or

    r(t) = R u_s(t)          (6-4)

where u_s(t) is the unit step function. The step function is not defined at t = 0. The step function as a function of time is shown in Fig. 6-1(a).
Fig. 6-1. Basic time-domain test signals for control systems. (a) Step function input, r(t) = Ru_s(t). (b) Ramp function input, r(t) = Rtu_s(t). (c) Parabolic function input, r(t) = Rt^2 u_s(t).
Ramp input function. In the case of the ramp function, the signal is considered to have a constant change in value with respect to time. Mathematically, a ramp function is represented by

    r(t) = Rt,  t ≥ 0
    r(t) = 0,   t < 0          (6-5)

or simply

    r(t) = Rt u_s(t)          (6-6)

The ramp function is shown in Fig. 6-1(b). If the input variable is of the form of the angular displacement of a shaft, the ramp input represents the constant-speed rotation of the shaft.

Parabolic input function.
The mathematical representation of a parabolic input function is

    r(t) = Rt^2,  t ≥ 0
    r(t) = 0,     t < 0          (6-7)

or simply

    r(t) = Rt^2 u_s(t)          (6-8)

The graphical representation of the parabolic function is shown in Fig. 6-1(c).

These test signals all have the common feature that they are simple to describe mathematically, and from the step function to the parabolic function they become progressively faster with respect to time. The step function is very useful as a test signal, since its initial instantaneous jump in amplitude reveals a great deal about a system's quickness to respond. Also, since the step function has, in principle, a wide band of frequencies in its spectrum, as a result of the jump discontinuity, as a test signal it is equivalent to the application of numerous sinusoidal signals with a wide range of frequencies. The ramp function has the ability to test how the system would respond to a signal that changes linearly with time. A parabolic function is one degree faster than a ramp function. In practice, we seldom find it necessary to use a test signal faster than a parabolic function. This is because, as we shall show later, to track or follow a high-order input, the system is necessarily of high order, which may mean that stability problems will be encountered.
we
shall discuss the typical criteria
the time-domain performance of a control system.
system
used for the measurement of
The time response of a control
characterized by the transient response and the steady-state
may be
alternative, by a performance index that gives a qualitative measure on the time response as a whole. These criteria will be discussed in the
response or,
as an
following.
Steady-State Error It
was mentioned
accuracy
when a
earlier that the steady-state error is a
specific type
of input
is
measure of system
applied to a control system. In a phy-
sical system, because of friction and the nature of the particular system, the steady state of the output response seldom agrees exactly with the reference input. Therefore, steady-state errors in control systems are almost unavoidable,
and in a design problem one of the objectives is to keep the error to a minimum or below a certain tolerable value. For instance, in a positional control system, it is desirable to have the final position of the output be in exact correspondence with the reference position. In a velocity-control system, the objective is to have the output velocity be as close as possible to the reference value. If the reference input r{t) and the controlled output c(t) are dimensionally the same, for example, a voltage controlling a voltage, a position controlling a position,
and so on, and
the error signal
is
are at the
same
level
or of the same order of magnitude,
simply e(t)
= r(t) -
c{t)
(6-9)
However, sometimes it may be impossible or inconvenient to provide a reference input that is at the same level or even of the same dimension as the controlled variable. For instance, it may be necessary to use a low-voltage source for the control of the output of a high-voltage power source; for a velocity-control system it is more practical to use a voltage source or a position input to control the velocity of the output shaft. Under these conditions, the error signal cannot be defined simply as the difference between the reference input and the controlled output, and Eq. (6-9) becomes meaningless. The input and output signals must be of the same dimension and at the same level before subtraction. Therefore, a nonunity element, H(s), is usually incorporated in the feedback path, as shown in Fig. 6-2.

    [Fig. 6-2. Nonunity feedback control system: forward path G(s) from E(s) to C(s), feedback path H(s) producing B(s), with E(s) = R(s) - B(s).]
The error of this nonunity-feedback control system is defined as

    e(t) = r(t) - b(t)                                          (6-10)

or

    E(s) = R(s) - B(s) = R(s) - H(s)C(s)                        (6-11)

For example, if a constant 10-volt reference is used to regulate a 100-volt voltage supply, H is a constant and equal to 0.1. When the output voltage is exactly 100 volts, the error signal is

    e(t) = 10 - 0.1 x 100 = 0                                   (6-12)

As another example, let us consider that the system shown in Fig. 6-2 is a velocity-control system, in that the input r(t) is used as a reference to control the output velocity of the system. Let c(t) denote the output displacement. Then we need a device such as a tachometer in the feedback path, so that H(s) = K_t s. Thus the error in velocity is defined as

    e(t) = r(t) - b(t)
         = r(t) - K_t dc(t)/dt                                  (6-13)

The error becomes zero when the output velocity dc(t)/dt is equal to r(t)/K_t.

The steady-state error of a feedback control system is defined as the error when time approaches infinity; that is,

    steady-state error = e_ss = lim (t→∞) e(t)                  (6-14)
With reference to Fig. 6-2, the Laplace-transformed error function is

    E(s) = R(s) / [1 + G(s)H(s)]                                (6-15)

By use of the final-value theorem, the steady-state error of the system is

    e_ss = lim (t→∞) e(t) = lim (s→0) sE(s)                     (6-16)

where sE(s) is to have no poles that lie on the imaginary axis or in the right half of the s-plane. Substituting Eq. (6-15) into Eq. (6-16), we have

    e_ss = lim (s→0) sR(s) / [1 + G(s)H(s)]                     (6-17)

which shows that the steady-state error depends on the reference input R(s) and the loop transfer function G(s)H(s). Let us first establish the type of control system by referring to the form of G(s)H(s). In general, G(s)H(s) may be written
    G(s)H(s) = K(1 + T_1 s)(1 + T_2 s) ... (1 + T_m s) / [s^j (1 + T_a s)(1 + T_b s) ... (1 + T_n s)]    (6-18)

where K and all the T's are constants. The type of feedback control system refers to the order of the pole of G(s)H(s) at s = 0. Therefore, the system that is described by the G(s)H(s) of Eq. (6-18) is of type j, where j = 0, 1, 2, .... The values of m, n, and the T's are not important to the system type and do not affect the value of the steady-state error. For instance, a feedback control system whose loop transfer function contains exactly one pure integration, such as

    G(s)H(s) = K(1 + T_1 s) / [s(1 + T_a s)(1 + T_b s)]         (6-19)

is of type 1, since j = 1.
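The system type defined above can be read off mechanically from the denominator polynomial. The following is a minimal sketch under our own convention that polynomials are given as coefficient lists in ascending powers of s; the function name is ours.

```python
# Hedged sketch: the type j of a loop transfer function
# G(s)H(s) = num(s)/den(s) is the multiplicity of its pole at s = 0,
# i.e., the number of leading (ascending-power) zero coefficients of
# the denominator, assuming num(0) != 0 as in Eq. (6-18).

def system_type(den_coeffs, tol=1e-12):
    """Count the pure integrations in the loop transfer function."""
    j = 0
    for c in den_coeffs:
        if abs(c) > tol:
            break
        j += 1
    return j

# Example: den(s) = s^2 (1 + 0.5 s) = s^2 + 0.5 s^3 has ascending
# coefficients [0, 0, 1, 0.5], so the system is of type 2.
```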
Now let us consider the effects of the types of inputs on the steady-state error. We shall consider only the step, ramp, and parabolic inputs.

Steady-state error due to a step input. If the reference input to the control system of Fig. 6-2 is a step input of magnitude R, the Laplace transform of the input is R/s. Equation (6-17) becomes

    e_ss = lim (s→0) R / [1 + G(s)H(s)] = R / [1 + lim (s→0) G(s)H(s)]    (6-20)

For convenience we define

    K_p = lim (s→0) G(s)H(s)                                    (6-21)

where K_p is the positional error constant. Then Eq. (6-20) is written

    e_ss = R / (1 + K_p)                                        (6-22)

We see that for e_ss to be zero when the input is a step function, K_p must be infinite. If G(s)H(s) is described by Eq. (6-18), we see that for K_p to be infinite, j must be at least equal to unity; that is, G(s)H(s) must have at least one pure integration. Therefore, we can summarize the steady-state error due to a step input as follows:

    type 0 system:              e_ss = R/(1 + K_p) = constant
    type 1 (or higher) system:  e_ss = 0
Steady-state error due to a ramp input. If the input to the control system of Fig. 6-2 is

    r(t) = Rt u_s(t)                                            (6-23)

where R is a constant, the Laplace transform of r(t) is

    R(s) = R/s²                                                 (6-24)

Substituting Eq. (6-24) into Eq. (6-17), we have

    e_ss = lim (s→0) R / [s + sG(s)H(s)] = R / lim (s→0) sG(s)H(s)    (6-25)

If we define

    K_v = lim (s→0) sG(s)H(s) = velocity error constant         (6-26)

Eq. (6-25) reads

    e_ss = R/K_v                                                (6-27)

which is the steady-state error when the input is a ramp function. A typical e_ss due to a ramp input is shown in Fig. 6-3.

    [Fig. 6-3. Typical steady-state error due to a ramp input: the output c(t) follows the reference input r(t) = Rt u_s(t) with a constant lag e_ss = R/K_v.]

Equation (6-27) shows that for e_ss to be zero when the input is a ramp function, K_v must be infinite. Using Eqs. (6-18) and (6-26),

    K_v = lim (s→0) sG(s)H(s) = lim (s→0) K/s^(j-1),   j = 0, 1, 2, ...    (6-28)

Therefore, in order for K_v to be infinite, j must be at least equal to two, or the system must be of type 2 or higher. The following conclusions may be stated with regard to the steady-state error of a system with a ramp input:

    type 0 system:              e_ss = ∞
    type 1 system:              e_ss = R/K_v = constant
    type 2 (or higher) system:  e_ss = 0
Steady-state error due to a parabolic input. If the input r(t) is described by

    r(t) = (R t²/2) u_s(t)                                      (6-29)

the Laplace transform of r(t) is

    R(s) = R/s³                                                 (6-30)

The steady-state error of the system of Fig. 6-2 is

    e_ss = R / lim (s→0) s² G(s)H(s)                            (6-31)

Defining the acceleration error constant as

    K_a = lim (s→0) s² G(s)H(s)                                 (6-32)

the steady-state error is

    e_ss = R/K_a                                                (6-33)

The following conclusions can be made with regard to the steady-state error of a system with a parabolic input:

    type 0 system:              e_ss = ∞
    type 1 system:              e_ss = ∞
    type 2 system:              e_ss = R/K_a = constant
    type 3 (or higher) system:  e_ss = 0
As a summary of the error analysis, the relations among the error constants, the types of the system, and the input types are tabulated in Table 6-1. The transfer function of Eq. (6-18) is used as a reference.

Table 6-1.  Summary of the Steady-State Errors Due to Step, Ramp, and Parabolic Inputs

    Type of    Step input,           Ramp input,        Parabolic input,
    system     r(t) = R u_s(t)       r(t) = Rt u_s(t)   r(t) = (Rt²/2) u_s(t)
    -------------------------------------------------------------------------
      0        e_ss = R/(1 + K_p)    e_ss = ∞           e_ss = ∞
      1        e_ss = 0              e_ss = R/K_v       e_ss = ∞
      2        e_ss = 0              e_ss = 0           e_ss = R/K_a
      3        e_ss = 0              e_ss = 0           e_ss = 0
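The three limits of Eqs. (6-21), (6-26), and (6-32) can be evaluated directly from the loop polynomials. The sketch below assumes our earlier convention of ascending-power coefficient lists with num(0) nonzero; names and the worked example are our own.

```python
# Hedged sketch: error constants Kp, Kv, Ka as the limit of
# s^k * num(s)/den(s) as s -> 0, for k = 0, 1, 2.
import math

def error_constant(k, num, den, tol=1e-12):
    """lim s->0 of s^k G(s)H(s): inf if the pole at s = 0 dominates,
    0 if the zero from s^k dominates, finite when k equals the type."""
    j = 0                          # system type (pole multiplicity at s = 0)
    while j < len(den) and abs(den[j]) <= tol:
        j += 1
    if k > j:
        return 0.0
    if k < j:
        return math.inf
    return num[0] / den[j]

# Illustrative type-1 loop G(s)H(s) = 10/[s(1 + 0.2s)]:
# num = [10], den = [0, 1, 0.2]
Kp = error_constant(0, [10], [0, 1, 0.2])   # infinite
Kv = error_constant(1, [10], [0, 1, 0.2])   # 10
Ka = error_constant(2, [10], [0, 1, 0.2])   # 0
```

These values reproduce the type-1 row of Table 6-1: zero step error, ramp error R/K_v = R/10, and infinite parabolic error.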
It should be noted that the position, velocity, and acceleration error constants are significant in the error analysis only when the input signal is a step function, a ramp function, and a parabolic function, respectively.

It should be noted further that the steady-state error analysis in this section is conducted by applying the final-value theorem to the error function, which is defined as the difference between the actual output and the desired output signal. In certain cases the error signal may be defined as the difference between the output and the reference input, whether or not the feedback element is unity. For instance, one may define the error signal for the system of Fig. 6-2 as

    e(t) = r(t) - c(t)                                          (6-34)

Then

    E(s) = {1 + G(s)[H(s) - 1]} R(s) / [1 + G(s)H(s)]           (6-35)

and

    e_ss = lim (s→0) s {1 + G(s)[H(s) - 1]} R(s) / [1 + G(s)H(s)]    (6-36)

It should be kept in mind that, since the steady-state error analysis discussed here relies on the use of the final-value theorem, it is important to first check to see if sE(s) has any poles on the jω axis or in the right half of the s-plane.

One of the disadvantages of the error constants is, of course, that they do not give information on the steady-state error when the inputs are other than the three basic types mentioned. Another difficulty is that when the steady-state error is a function of time, the error constants give only an answer of infinity and do not provide any information on how the error varies with time. We shall present the error series in the following section, which gives a more general representation of the steady-state error.
Error Series

In this section the error-constant concept is generalized to include inputs of almost any arbitrary function of time. We start with the transformed error function of Eq. (6-15),

    E(s) = R(s) / [1 + G(s)H(s)]                                (6-37)

or of Eq. (6-35), as the case may be. Using the principle of the convolution integral as discussed in Section 3.3, the error signal e(t) may be written

    e(t) = ∫ (from -∞ to ∞) w_e(τ) r(t - τ) dτ                  (6-38)

where w_e(τ) is the inverse Laplace transform of

    W_e(s) = 1 / [1 + G(s)H(s)]                                 (6-39)

which is known as the error transfer function. If the first n derivatives of r(t) exist for all values of t, the function r(t - τ)
can be expanded into a Taylor series; that is,

    r(t - τ) = r(t) - τ r'(t) + (τ²/2!) r''(t) - (τ³/3!) r'''(t) + ...    (6-40)

where r'(t) represents the first derivative of r(t) with respect to time. Since r(t) is considered to be zero for negative time, the limits of the convolution integral of Eq. (6-38) may be taken from 0 to t. Substituting Eq. (6-40) into Eq. (6-38), we have

    e(t) = ∫_0^t w_e(τ) [r(t) - τ r'(t) + (τ²/2!) r''(t) - ...] dτ
         = r(t) ∫_0^t w_e(τ) dτ - r'(t) ∫_0^t τ w_e(τ) dτ + r''(t) ∫_0^t (τ²/2!) w_e(τ) dτ - ...    (6-41)

As before, the steady-state error is obtained by taking the limit of e(t) as t approaches infinity; thus

    e_ss = lim (t→∞) e(t) = e_s(t)                              (6-42)

where e_s(t) denotes the steady-state part of e(t) and is given by

    e_s(t) = r_s(t) ∫_0^∞ w_e(τ) dτ - r_s'(t) ∫_0^∞ τ w_e(τ) dτ + r_s''(t) ∫_0^∞ (τ²/2!) w_e(τ) dτ - ...    (6-43)

and r_s(t) denotes the steady-state part of r(t). Let us define

    C_0 = ∫_0^∞ w_e(τ) dτ
    C_1 = -∫_0^∞ τ w_e(τ) dτ
    C_2 = ∫_0^∞ τ² w_e(τ) dτ
    ...
    C_n = (-1)^n ∫_0^∞ τ^n w_e(τ) dτ                            (6-44)

Equation (6-42) is then written

    e_s(t) = C_0 r_s(t) + C_1 r_s'(t) + (C_2/2!) r_s''(t) + ... + (C_n/n!) r_s^(n)(t) + ...    (6-45)

which is called the error series, and the coefficients C_0, C_1, C_2, ..., C_n are defined as the generalized error coefficients, or simply the error coefficients.

The error coefficients may be readily evaluated directly from the error transfer function W_e(s). Since W_e(s) and w_e(τ) are related through the Laplace transform, we have

    W_e(s) = ∫_0^∞ w_e(τ) e^(-sτ) dτ                            (6-46)
Taking the limit on both sides of Eq. (6-46) as s approaches zero, we have

    lim (s→0) W_e(s) = lim (s→0) ∫_0^∞ w_e(τ) e^(-sτ) dτ = ∫_0^∞ w_e(τ) dτ = C_0    (6-47)

The derivative of W_e(s) of Eq. (6-46) with respect to s gives

    dW_e(s)/ds = -∫_0^∞ τ w_e(τ) e^(-sτ) dτ                     (6-48)

from which we get

    C_1 = lim (s→0) dW_e(s)/ds                                  (6-49)

The rest of the error coefficients are obtained in a similar fashion by successive differentiation of Eq. (6-46) with respect to s. Therefore,

    C_2 = lim (s→0) d²W_e(s)/ds²                                (6-50)
    C_3 = lim (s→0) d³W_e(s)/ds³                                (6-51)
    ...
    C_n = lim (s→0) d^n W_e(s)/ds^n                             (6-52)

The following examples illustrate the general application of the error series and its advantages over the error constants.
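Because the coefficients are limits of derivatives of W_e(s) at s = 0, they can be approximated numerically with central differences. The following is a sketch only; the particular W_e, the step size h, and the function names are our own illustrative choices.

```python
# Hedged numerical evaluation of C0, C1, C2 from Eqs. (6-47)-(6-50)
# using central differences at s = 0.

def error_coefficients(We, h=1e-3):
    """Return (C0, C1, C2) for an error transfer function We(s)."""
    C0 = We(0.0)
    C1 = (We(h) - We(-h)) / (2 * h)              # dWe/ds at s = 0
    C2 = (We(h) - 2 * We(0.0) + We(-h)) / h**2   # d2We/ds2 at s = 0
    return C0, C1, C2

# Illustration: We(s) = (1 + s)/(11 + s), which arises from
# G(s) = 10/(1 + s) with unity feedback; analytically
# C0 = 1/11, C1 = 10/121, C2 = -20/1331.
C0, C1, C2 = error_coefficients(lambda s: (1 + s) / (11 + s))
```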
Example 6-1. In this illustrative example the steady-state error of a feedback control system will be evaluated by use of the error series and the error coefficients. Consider a unity feedback control system with the open-loop transfer function given as

    G(s) = K / (1 + s)                                          (6-53)

Since the system is of type 0, the error constants are K_p = K, K_v = 0, and K_a = 0. Thus the steady-state errors of the system due to the three basic types of inputs are as follows:

    unit step input, u_s(t):               e_ss = 1/(1 + K)
    unit ramp input, t u_s(t):             e_ss = ∞
    unit parabolic input, (t²/2) u_s(t):   e_ss = ∞

Notice that when the input is either a ramp or a parabolic function, the steady-state error is infinite in magnitude, since it apparently increases with time. It is apparent that the error constants fail to indicate the exact manner in which the steady-state error increases with time. Therefore, ordinarily, if the steady-state response of this system due to a ramp or parabolic input is desired, the differential equation of the system must be solved. We now show that the steady-state response of the system can actually be determined from the error series.
Using Eq. (6-39), we have for this system

    W_e(s) = 1 / [1 + G(s)] = (1 + s) / (1 + s + K)             (6-54)

The error coefficients are evaluated as

    C_0 = lim (s→0) W_e(s) = 1/(1 + K)                          (6-55)
    C_1 = lim (s→0) dW_e(s)/ds = K/(1 + K)²                     (6-56)
    C_2 = lim (s→0) d²W_e(s)/ds² = -2K/(1 + K)³                 (6-57)

Although higher-order coefficients can be obtained, they will become less significant, as their values will be increasingly smaller. The error series is written

    e_s(t) = [1/(1 + K)] r_s(t) + [K/(1 + K)²] r_s'(t) - [K/(1 + K)³] r_s''(t) + ...    (6-58)

Now let us consider the three basic types of inputs.

1. When the input signal is a unit step function, r_s(t) = u_s(t), and all derivatives of r_s(t) are zero. The error series gives

    e_s(t) = [1/(1 + K)] u_s(t)                                 (6-59)

which agrees with the result given by the error-constant method.

2. When the input signal is a unit ramp function, r_s(t) = t u_s(t), r_s'(t) = u_s(t), and all higher-order derivatives of r_s(t) are zero. Therefore, the error series is

    e_s(t) = [t/(1 + K) + K/(1 + K)²] u_s(t)                    (6-60)

which indicates that the steady-state error increases linearly with time. The error-constant method simply yields the result that the steady-state error is infinite, but fails to give details of the time dependence.

3. For a parabolic input, r_s(t) = (t²/2) u_s(t), r_s'(t) = t u_s(t), r_s''(t) = u_s(t), and all higher derivatives are zero. The error series becomes

    e_s(t) = [t²/(2(1 + K)) + Kt/(1 + K)² - K/(1 + K)³] u_s(t)    (6-61)

In this case the error increases in magnitude as the second power of t.

4. Consider that the input signal is represented by a polynomial of t and an exponential term,

    r(t) = [a_0 + a_1 t + (a_2/2) t² + a_3 e^(-αt)] u_s(t)      (6-62)

where a_0, a_1, a_2, and a_3 are constants. Since the exponential term decays, the steady-state part of r(t) and its derivatives are

    r_s(t) = [a_0 + a_1 t + (a_2/2) t²] u_s(t)                  (6-63)
    r_s'(t) = (a_1 + a_2 t) u_s(t)                              (6-64)
    r_s''(t) = a_2 u_s(t)                                       (6-65)

In this case the error series becomes

    e_s(t) = [1/(1 + K)] r_s(t) + [K/(1 + K)²] r_s'(t) - [K/(1 + K)³] r_s''(t)    (6-66)
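The ramp-input result of Eq. (6-60) can be checked by direct simulation. With unity feedback and G(s) = K/(1 + s), the closed loop satisfies dc/dt = -(1 + K)c + K r; the Euler step size and gain below are our own choices for a quick numerical experiment, not values from the text.

```python
# Hedged numerical check of Eq. (6-60) for a unit-ramp input.

def ramp_error(K, t_end, dt=1e-4):
    """Integrate dc/dt = -(1+K)c + K*t and return e = r - c at t_end."""
    c, t = 0.0, 0.0
    while t < t_end:
        c += dt * (-(1 + K) * c + K * t)   # r(t) = t
        t += dt
    return t - c

K, t_end = 10.0, 10.0
simulated = ramp_error(K, t_end)
series = t_end / (1 + K) + K / (1 + K) ** 2   # Eq. (6-60)
```

After the transient has died out, the simulated error should match the linearly growing error predicted by the series.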
Example 6-2. In this example we shall consider a situation in which the error constants are totally inadequate in providing a solution for the steady-state error. Let us consider that the input to the system of Example 6-1 is

    r(t) = sin ωt                                               (6-67)

with ω = 2. Then

    r_s'(t) = ω cos ωt
    r_s''(t) = -ω² sin ωt
    r_s'''(t) = -ω³ cos ωt,  ...                                (6-68)

Because of the sinusoidal input, the error series is now an infinite series. Substituting these derivatives into Eq. (6-45) gives

    e_s(t) = (C_0 - C_2 ω²/2! + ...) sin ωt + (C_1 ω - C_3 ω³/3! + ...) cos ωt    (6-69)

The convergence of the series is important in arriving at a meaningful answer to the steady-state error. It is clear that the convergence of the error series depends on the values of ω and K. Let us assign the value of K to be 100. Then

    C_0 = 1/(1 + K) = 0.0099
    C_1 = K/(1 + K)² = 0.0098
    C_2 = -2K/(1 + K)³ = -0.000194
    C_3 = 6K/(1 + K)⁴ ≈ 5.8 x 10⁻⁶

Thus, using only the first four error coefficients, Eq. (6-69) becomes

    e_s(t) = [0.0099 + 0.000194(4/2)] sin 2t + (0.0098)(2) cos 2t
           ≈ 0.01029 sin 2t + 0.0196 cos 2t                     (6-70)

or

    e_s(t) ≈ 0.02215 sin (2t + 62.3°)                           (6-71)

Therefore, the steady-state error in this case is also a sinusoid, as given by Eq. (6-71).
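The truncated-series result can be cross-checked against the exact frequency-domain answer: for a sinusoidal input the steady-state error is the input filtered by W_e(s), so its amplitude is |W_e(j2)| and its phase lead is arg W_e(j2). The short sketch below uses our own variable names.

```python
# Hedged cross-check of Eq. (6-71) via the frequency response of
# We(s) = (1 + s)/(101 + s), i.e., K = 100 in Example 6-1.
import cmath

We = lambda s: (1 + s) / (101 + s)
H = We(2j)                               # evaluate at s = j*omega, omega = 2
amplitude = abs(H)                       # ~0.0221
phase_deg = cmath.phase(H) * 180 / cmath.pi   # ~62.3 degrees
```

The agreement with the four-term error series shows how quickly the series converges for this value of K.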
6.4 Time-Domain Performance of Control Systems: Transient Response

The transient portion of the time response is that part which goes to zero as time becomes large. Of course, the transient response has significance only when a stable system is referred to, since for an unstable system the response does not diminish and is out of control.
The transient performance of a control system is usually characterized by the use of a unit step input. Typical performance criteria that are used to characterize the transient response to a unit step input include maximum overshoot, delay time, rise time, and settling time. Figure 6-4 illustrates a typical unit step response of a linear control system.

    [Fig. 6-4. Typical unit step response of a control system, showing the maximum overshoot.]

The above-mentioned criteria are defined with respect to the step response:

1. Maximum overshoot. The maximum overshoot is defined as the largest deviation of the output over the step input during the transient state. The amount of maximum overshoot is also used as a measure of the relative stability of the system. The maximum overshoot is often represented as a percentage of the final value of the step response; that is,

    per cent maximum overshoot = (maximum overshoot / final value) x 100%    (6-72)

2. Delay time. The delay time T_d is defined as the time required for the step response to reach 50 per cent of its final value.

3. Rise time. The rise time T_r is defined as the time required for the step response to rise from 10 per cent to 90 per cent of its final value. Sometimes an alternative measure is to represent the rise time as the reciprocal of the slope of the step response at the instant that the response is equal to 50 per cent of its final value.

4. Settling time. The settling time T_s is defined as the time required for the step response to decrease to and stay within a specified percentage of its final value. A frequently used figure is 5 per cent.

The four quantities defined above give a direct measure of the transient characteristics of the step response. These quantities are relatively easy to measure when a step response is already plotted. However, analytically these quantities are difficult to determine except for the simple cases.
Performance Index

Since the general design objective of a control system is to have a small overshoot, fast rise time, short delay time, short settling time, and low steady-state error, it is advantageous to use a performance index that gives a measure of the overall quality of the response. Let us define the input signal of a system as r(t) and the output as c(t). The difference between the input and the output is defined as the error signal, as in Eq. (6-9). Sometimes r(t) is referred to as the desired output.

In trying to minimize the error signal, time integrals of functions of the error signal may be used as performance indices. For example, the simplest integral function of the error is

    I = ∫_0^∞ e(t) dt                                           (6-73)

where I is used to designate the performance index. It is easy to see that Eq. (6-73) is not a practical performance index, since minimizing it is equivalent to minimizing the area under e(t), and an oscillatory signal could yield a zero area and thus a zero I.

Some of the practical integral performance indices are

    ∫_0^∞ |e(t)| dt        ∫_0^∞ t|e(t)| dt        ∫_0^∞ e²(t) dt

and there are many others. The subject of the design of control systems using performance indices is covered in Chapter 11.
6.5 Transient Response of a Second-Order System

Although true second-order control systems are rare in practice, their analysis generally helps to form a basis for the understanding of design and analysis techniques. Consider that a second-order feedback control system is represented by the state diagram of Fig. 6-5. The state equations are written

    dx_1(t)/dt = x_2(t)
    dx_2(t)/dt = -ω_n² x_1(t) - 2ζω_n x_2(t) + r(t)             (6-74)

where ζ and ω_n are constants.
    [Fig. 6-5. State diagram of a second-order feedback control system, with initial-state branches x_1(0+)/s and x_2(0+)/s.]

The output equation is

    c(t) = ω_n² x_1(t)                                          (6-75)

Applying the gain formula to the state diagram of Fig. 6-5, the state transition equations are written

    [X_1(s)]   1  [s + 2ζω_n   1] ( [x_1(0+)]   [0]      )
    [X_2(s)] = -- [-ω_n²       s] ( [x_2(0+)] + [1] R(s) )      (6-76)
               Δ

where

    Δ = s² + 2ζω_n s + ω_n²                                     (6-77)

The inverse Laplace transform of Eq. (6-76) is carried out with the help of the Laplace transform table. For a unit step function input, and with ω = ω_n √(1 - ζ²), we have for t ≥ 0

    x_1(t) = (1/ω_n²)[1 - (e^(-ζω_n t)/√(1 - ζ²)) sin (ωt + ψ)]
             + (e^(-ζω_n t)/√(1 - ζ²)) [x_1(0+) sin (ωt + ψ) + (x_2(0+)/ω_n) sin ωt]

    x_2(t) = (e^(-ζω_n t)/(ω_n √(1 - ζ²))) sin ωt
             - (e^(-ζω_n t)/√(1 - ζ²)) [ω_n x_1(0+) sin ωt + x_2(0+) sin (ωt - ψ)]    (6-78)

where

    ψ = tan⁻¹ [√(1 - ζ²)/ζ]                                     (6-79)
      = cos⁻¹ ζ                                                 (6-80)

Although Eq. (6-78) gives the complete solution of the state variables in terms of the initial states and the unit step input, it is a rather formidable-looking expression, especially in view of the fact that the system is only of the second order. However, the analysis of control systems does not rely completely on the evaluation of the complete state and output responses. The development of linear control theory allows the study of control system performance by use of the transfer function and the characteristic equation. We shall show that a great deal can be learned about the system's behavior by studying the location of the roots of the characteristic equation.
The closed-loop transfer function of the system is determined from Fig. 6-5:

    C(s)/R(s) = ω_n² / (s² + 2ζω_n s + ω_n²)                    (6-81)

The characteristic equation of the system is obtained by setting Eq. (6-77) to zero; that is,

    Δ = s² + 2ζω_n s + ω_n² = 0                                 (6-82)

For a unit step function input, R(s) = 1/s, and the output response of the system is determined by taking the inverse Laplace transform of

    C(s) = ω_n² / [s(s² + 2ζω_n s + ω_n²)]                      (6-83)

Or, c(t) is determined by use of Eqs. (6-75) and (6-78) with zero initial states:

    c(t) = 1 - (e^(-ζω_n t)/√(1 - ζ²)) sin (ω_n √(1 - ζ²) t + cos⁻¹ ζ),   t ≥ 0    (6-84)
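As a sanity check, the closed form of Eq. (6-84) can be verified to satisfy the differential equation c'' + 2ζω_n c' + ω_n² c = ω_n² with c(0) = c'(0) = 0. The finite-difference check below uses our own arbitrary choices of ζ, ω_n, and step size.

```python
# Hedged numerical verification of the step response of Eq. (6-84).
import math

zeta, wn, h = 0.5, 2.0, 1e-4

def c(t):
    w = wn * math.sqrt(1 - zeta**2)
    return 1 - math.exp(-zeta * wn * t) / math.sqrt(1 - zeta**2) \
             * math.sin(w * t + math.acos(zeta))

# Central differences approximate c'(t) and c''(t); the residual of
# c'' + 2*zeta*wn*c' + wn^2*c - wn^2 should be ~0 at any t > 0.
t0 = 1.0
c1 = (c(t0 + h) - c(t0 - h)) / (2 * h)
c2 = (c(t0 + h) - 2 * c(t0) + c(t0 - h)) / h**2
residual = c2 + 2 * zeta * wn * c1 + wn**2 * c(t0) - wn**2
```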
It is interesting to study the relationship between the roots of the characteristic equation and the behavior of the step response c(t). The two roots of Eq. (6-82) are

    s_1, s_2 = -ζω_n ± jω_n √(1 - ζ²)
             = -α ± jω                                          (6-85)

The physical significance of the constants ζ, ω_n, α, and ω is now described as follows. As seen from Eq. (6-85), α = ζω_n, and α appears as the constant that multiplies t in the exponential term of Eq. (6-84). Therefore, α controls the rate of rise and decay of the time response. In other words, α controls the "damping" of the system and is called the damping constant or the damping factor. The inverse of α, 1/α, is proportional to the time constant of the system. When the two roots of the characteristic equation are real and identical, we call the system critically damped. From Eq. (6-85) we see that critical damping occurs when ζ = 1; under this condition the damping factor is simply α = ω_n. Therefore, we can regard ζ as the damping ratio: the ratio between the actual damping factor and the damping factor when the damping is critical.

ω_n is defined as the natural undamped frequency. As seen from Eq. (6-85), when the damping is zero, ζ = 0, the roots of the characteristic equation are imaginary, and Eq. (6-84) shows that the step response is purely sinusoidal. Therefore, ω_n corresponds to the frequency of the undamped sinusoid. Equation (6-85) also shows that

    ω = ω_n √(1 - ζ²)                                           (6-86)

However, since the response of Eq. (6-84) is not a periodic function unless ζ = 0, ω is strictly not a frequency. For the purpose of reference, ω is sometimes defined as the conditional frequency.
    [Fig. 6-6. Relationship between the characteristic-equation roots of a second-order system and α, ζ, ω_n, and ω: the roots lie at -α ± jω in the s-plane, at radial distance ω_n from the origin.]

Figure 6-6 illustrates the relationship between the location of the characteristic-equation roots and α, ζ, ω_n, and ω. For the complex-conjugate roots shown, ω_n is the radial distance from the roots to the origin of the s-plane. The damping factor α is the real part of the roots, the conditional frequency ω is the imaginary part of the roots, and the damping ratio ζ is equal to the cosine of the angle θ between the radial line to the roots and the negative real axis; that is,

    ζ = cos θ                                                   (6-87)

Figure 6-7 shows the constant-ω_n loci, the constant-ζ loci, the constant-α loci, and the constant-ω loci. Note that the left half of the s-plane corresponds to positive damping (i.e., the damping factor or ratio is positive), and the right half of the s-plane corresponds to negative damping. The imaginary axis corresponds to zero damping (α = 0, ζ = 0). As shown by Eq. (6-84), when the damping is positive the step response will settle to its constant final value because of the negative exponent of e^(-ζω_n t). Negative damping corresponds to a response that grows without bound, and zero damping gives rise to a sustained sinusoidal oscillation; these last two cases are defined as unstable for linear systems. Therefore, we have demonstrated that the location of the characteristic-equation roots plays a great part in the dynamic behavior of the transient response of the system.
The effect of the characteristic-equation roots on the damping of the second-order system is further illustrated by Figs. 6-8 and 6-9. In Fig. 6-8, ω_n is held constant while the damping ratio ζ is varied from -∞ to +∞. The following classification of the system dynamics with respect to the value of ζ is given:

    0 < ζ < 1:   s_1, s_2 = -ζω_n ± jω_n √(1 - ζ²)    underdamped case
    ζ = 1:       s_1, s_2 = -ω_n                       critically damped case
    ζ > 1:       s_1, s_2 = -ζω_n ± ω_n √(ζ² - 1)     overdamped case
    ζ = 0:       s_1, s_2 = ±jω_n                      undamped case
    ζ < 0:                                             negatively damped case

    [Fig. 6-8. Locus of the roots of Eq. (6-82) when ω_n is held constant while the damping ratio ζ is varied from -∞ to +∞.]

    [Fig. 6-9. Response comparison for various root locations in the s-plane (ζ > 1, ζ = 1, 0 < ζ < 1, ζ = 0, 0 > ζ > -1, ζ < -1).]

Figure 6-9 illustrates typical step responses that correspond to the various root locations.
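The classification above can be sketched directly from the characteristic equation; the function name and labels below are our own.

```python
# Hedged sketch: roots of s^2 + 2*zeta*wn*s + wn^2 = 0 and the
# damping case they imply.
import cmath

def roots_and_case(zeta, wn):
    s1 = -zeta * wn + wn * cmath.sqrt(complex(zeta**2 - 1))
    s2 = -zeta * wn - wn * cmath.sqrt(complex(zeta**2 - 1))
    if zeta < 0:
        case = "negatively damped"
    elif zeta == 0:
        case = "undamped"
    elif zeta < 1:
        case = "underdamped"
    elif zeta == 1:
        case = "critically damped"
    else:
        case = "overdamped"
    return s1, s2, case
```

Note that the root sum is always -2ζω_n and the root product is always ω_n², which is a quick consistency check on any computed pair.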
In practical applications only stable systems are of interest; therefore, the cases in which ζ is positive are of particular interest. In Fig. 6-10 the variation of the unit step response described by Eq. (6-84) is plotted as a function of the normalized time ω_n t for various values of the damping ratio ζ. It is seen that the response becomes more oscillatory as ζ decreases in value. When ζ ≥ 1 there is no overshoot in the step response; that is, the output never exceeds the value of the reference input.

    [Fig. 6-10. Transient response of a second-order system to a unit step function input.]

The exact relation between the damping ratio and the amount of overshoot can be obtained by taking the derivative of Eq. (6-84) and setting the result to zero. Thus, with ω = ω_n √(1 - ζ²) and t ≥ 0,

    dc(t)/dt = (ζω_n/√(1 - ζ²)) e^(-ζω_n t) sin (ωt + θ) - ω_n e^(-ζω_n t) cos (ωt + θ)    (6-88)

where

    θ = tan⁻¹ [√(1 - ζ²)/ζ]                                     (6-89)

Equation (6-88) is simplified to

    dc(t)/dt = (ω_n/√(1 - ζ²)) e^(-ζω_n t) sin (ω_n √(1 - ζ²) t),   t ≥ 0    (6-90)

Therefore, setting Eq. (6-90) to zero, we have either t = ∞ or

    ω_n √(1 - ζ²) t = nπ,   n = 0, 1, 2, ...                    (6-91)
or

    t = nπ / (ω_n √(1 - ζ²)),   n = 0, 1, 2, ...                (6-92)

The first maximum value of the step response c(t) occurs at n = 1. Therefore, the time at which the maximum overshoot occurs is given by

    t_max = π / (ω_n √(1 - ζ²))                                 (6-93)

In general, for all odd values of n, that is, n = 1, 3, 5, ..., Eq. (6-92) gives the times at which the overshoots occur. For all even values of n, Eq. (6-92) gives the times at which the undershoots occur, as shown in Fig. 6-11. It is interesting to note that, although the maxima and the minima of the response occur at periodic intervals, the response is a damped sinusoid and is not a periodic function.

    [Fig. 6-11. Step response illustrating that the maxima and minima occur at the periodic intervals t = nπ/(ω_n √(1 - ζ²)).]

The magnitudes of the overshoots and the undershoots can be obtained by substituting Eq. (6-92) into Eq. (6-84). Thus

    c(t)|max or min = 1 - (e^(-nπζ/√(1 - ζ²))/√(1 - ζ²)) sin (nπ + θ),   n = 1, 2, 3, ...    (6-94)

or

    c(t)|max or min = 1 + (-1)^(n-1) e^(-nπζ/√(1 - ζ²))        (6-95)

The maximum overshoot is obtained by letting n = 1 in Eq. (6-95). Therefore,

    maximum overshoot = c_max - 1 = e^(-πζ/√(1 - ζ²))          (6-96)

and

    per cent maximum overshoot = 100 e^(-πζ/√(1 - ζ²))        (6-97)
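Equations (6-93) and (6-96) can be checked against each other by evaluating the step response of Eq. (6-84) at the predicted peak time; ω_n is fixed at 1.0 below purely for illustration.

```python
# Hedged check: the peak of Eq. (6-84) at t_max from Eq. (6-93)
# should equal 1 + exp(-pi*zeta/sqrt(1 - zeta^2)) from Eq. (6-96).
import math

def overshoot_from_response(zeta, wn=1.0):
    w = wn * math.sqrt(1 - zeta**2)
    t_max = math.pi / w                                   # Eq. (6-93)
    c_max = 1 - math.exp(-zeta * wn * t_max) / math.sqrt(1 - zeta**2) \
              * math.sin(w * t_max + math.acos(zeta))     # Eq. (6-84)
    return c_max - 1

def overshoot_formula(zeta):
    return math.exp(-math.pi * zeta / math.sqrt(1 - zeta**2))   # Eq. (6-96)
```

For ζ = 0.5 both give an overshoot of about 16.3 per cent, consistent with the curve of Fig. 6-12.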
    [Fig. 6-12. Per cent overshoot as a function of damping ratio for the step response of a second-order system.]

Note that for the second-order system the per cent maximum overshoot of the step response is a function of the damping ratio only. The relationship between the per cent maximum overshoot and the damping ratio for the second-order system is shown in Fig. 6-12.

From Eqs. (6-93) and (6-94) it is seen that for the second-order system under consideration, the maximum overshoot and the time at which it occurs are exactly expressed in terms of ζ and ω_n. For the delay time, rise time, and settling time, however, the relationships are not so simple, and it would be difficult to determine the exact expressions for these quantities. For instance, for the delay time we would have to set c(t) = 0.5 in Eq. (6-84) and solve for t. An easier way would be to plot ω_n t_d versus ζ, as shown in Fig. 6-13, and then, over the range of 0 < ζ < 1.0, approximate the curve by a straight line,
    ω_n t_d ≈ 1 + 0.7ζ                                          (6-98)

Thus the delay time is

    t_d ≈ (1 + 0.7ζ)/ω_n                                        (6-99)

For a wider range of ζ, a second-order equation should be used; then

    t_d ≈ (1 + 0.6ζ + 0.15ζ²)/ω_n                               (6-100)

    [Fig. 6-13. Normalized delay time ω_n t_d versus ζ for the second-order control system C(s)/R(s) = ω_n²/(s² + 2ζω_n s + ω_n²), with the straight-line approximation ω_n t_d = 1 + 0.7ζ.]

For the rise time t_r, which is the time for the step response to rise from 10 per cent to 90 per cent of its final value, the exact values can again be obtained directly from the responses of Fig. 6-10. The plot of ω_n t_r versus ζ is shown in Fig. 6-14. In this case the rise-time-versus-ζ relation can again be approximated by a straight line over a limited range of ζ:

    t_r ≈ (0.8 + 2.5ζ)/ω_n,   0 < ζ < 1                         (6-101)

    [Fig. 6-14. Normalized rise time ω_n t_r versus ζ for a second-order system.]

A better approximation may be obtained by using a second-order equation;
then

    t_r ≈ (1 + 1.1ζ + 1.4ζ²)/ω_n                                (6-102)

From the definition of settling time, it is clear that the expression for the settling time is the most difficult to determine. However, we can obtain an approximation for the case of 0 < ζ < 1 by using the envelope of the damped sinusoid, as shown in Fig. 6-15.

    [Fig. 6-15. Approximation of settling time using the envelope of the decaying step response of a second-order system (0 < ζ < 1).]

From the figure it is clear that the same result is obtained with the approximation whether the upper envelope or the lower envelope is used. Therefore, setting the envelope of c(t) equal to the 5 per cent bound,

    1 + (e^(-ζω_n t_s)/√(1 - ζ²)) = 1.05                        (6-103)

Solving for ω_n t_s from the last equation, we have

    ω_n t_s = -(1/ζ) ln [0.05 √(1 - ζ²)]                        (6-104)

which for small values of ζ is simplified to

    ζω_n t_s ≈ 3                                                (6-105)

or

    t_s ≈ 3/(ζω_n)                                              (6-106)

Now, reviewing the relationships for the delay time, rise time, and settling time, it is seen that small values of ζ yield a short rise time and a short delay time. However, a fast settling time requires a large value of ζ. Therefore, a compromise in the value of ζ should be made when all these criteria are to be satisfactorily met in a design problem. Together with the consideration of maximum overshoot, a generally accepted range of damping ratio for satisfactory all-around performance is between 0.5 and 0.8.
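The delay- and settling-time approximations can be compared with direct measurement on the response of Eq. (6-84). The grid, the choice ζ = 0.5 and ω_n = 1.0, and the variable names below are our own; the envelope-based t_s of Eq. (6-106) should be conservative (an overestimate).

```python
# Hedged numerical look at Eqs. (6-99) and (6-106).
import math

zeta, wn = 0.5, 1.0
w, th = wn * math.sqrt(1 - zeta**2), math.acos(zeta)
ts_grid = [i * 1e-3 for i in range(20001)]                 # 0..20 s
c = [1 - math.exp(-zeta * wn * t) / math.sqrt(1 - zeta**2)
       * math.sin(w * t + th) for t in ts_grid]

t_d = next(t for t, ci in zip(ts_grid, c) if ci >= 0.5)    # measured delay
t_s = max(t for t, ci in zip(ts_grid, c) if abs(ci - 1) > 0.05)

t_d_approx = (1 + 0.7 * zeta) / wn                         # Eq. (6-99)
t_s_approx = 3 / (zeta * wn)                               # Eq. (6-106)
```

For this case the measured delay time is about 1.29 s against the approximation's 1.35 s, and the measured 5 per cent settling time is about 5.3 s against the conservative envelope estimate of 6 s.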
6.6  Time Response of a Positional Control System

In this section we shall study the time-domain performance of a control system whose objective is to control the position of a load that has viscous friction and inertia. The schematic diagram of the system is shown in Fig. 6-16.

Sec. 6.6    Time Response of a Positional Control System / 285

Fig. 6-16. Direct-current positional control system (error detector, dc amplifier, and dc motor with field current i_f = constant).

A set of potentiometers forms the error detector with sensitivity K_s. The error detector sends a signal to the dc amplifier that is proportional to the difference between the angular positions of the reference input shaft and the output shaft. The output of the dc amplifier is used to control the armature of a dc motor. The current in the field of the dc motor is held constant. The parameters of the system are given as follows:
Sensitivity of error detector        K_s = 1/57.3 volt/deg = 1 volt/rad
Gain of dc amplifier                 A = variable
Resistance of armature of motor      R_a = 5 Ω
Inductance of armature of motor      L_a = negligible
Inertia of rotor of motor            J_m = 10^{-3} lb-ft-sec²
Friction of motor shaft              B_m = negligible
Friction of load shaft               B_L = 0.1 lb-ft-sec
Inertia of load                      J_L = 0.1 lb-ft-sec²
Gear ratio                           n = N_1/N_2 = 1/10
Torque constant of motor             K_i = 0.5 lb-ft/amp
The first step in the analysis is to write the equations for the system in cause-and-effect form:

1. Error detector:

    \theta_e(t) = \theta_r(t) - \theta_c(t)    (6-107)

    e(t) = K_s\theta_e(t)    (6-108)

2. DC amplifier:

    e_a(t) = Ae(t)    (6-109)
286 / Time-Domain Analysis of Control Systems    Chap. 6

3. Armature-controlled dc motor:

    L_a\frac{di_a(t)}{dt} = -R_a i_a(t) - e_b(t) + e_a(t)    (6-110)

    e_b(t) = K_b\omega_m(t)    (6-111)

where K_b is the back emf constant of the motor,

    T_m(t) = K_i i_a(t)    (6-112)

    J_{me}\frac{d\omega_m(t)}{dt} = -B_{me}\omega_m(t) + T_m(t)    (6-113)

where J_{me} and B_{me} are, respectively, the equivalent inertia and viscous frictional coefficients seen by the motor:

    J_{me} = J_m + n^2 J_L = 10^{-3} + 0.01(0.1) = 2 \times 10^{-3} \text{ lb-ft-sec}^2    (6-114)

    B_{me} = B_m + n^2 B_L = 10^{-3} \text{ lb-ft-sec}    (6-115)

4. Output:

    \frac{d\theta_m(t)}{dt} = \omega_m(t)    (6-116)

    \theta_c(t) = n\theta_m(t)    (6-117)
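As a numerical cross-check of Eqs. (6-114) and (6-115): the load-side inertia and friction are reflected to the motor shaft through the square of the gear ratio. A short sketch in Python (the variable names are mine, not the book's):

```python
# Equivalent inertia and friction seen by the motor shaft,
# per Eqs. (6-114) and (6-115): load-side quantities are
# reflected through the gear ratio n = N1/N2 as n**2.
J_m = 1e-3       # rotor inertia, lb-ft-sec^2
B_m = 0.0        # motor-shaft friction (negligible)
J_L = 0.1        # load inertia, lb-ft-sec^2
B_L = 0.1        # load-shaft friction, lb-ft-sec
n = 1.0 / 10.0   # gear ratio N1/N2

J_me = J_m + n**2 * J_L   # equivalent inertia, 2e-3 lb-ft-sec^2
B_me = B_m + n**2 * B_L   # equivalent friction, 1e-3 lb-ft-sec
print(J_me, B_me)         # -> 0.002 0.001
```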
The value of the back emf constant, K_b, is not given originally, but a definite relationship exists between K_b and K_i. In the British unit system, K_i is given in lb-ft/amp and the units of the back emf constant are volts/rad/sec. With these units, K_b is related to K_i through a constant ratio. The mechanical power developed in the motor armature is (see Sec. 5.7)

    P(t) = e_b(t)i_a(t) \text{ watts} = \frac{1}{746}e_b(t)i_a(t) \text{ hp}    (6-118)

Substituting Eqs. (6-111) and (6-112) into Eq. (6-118), we have

    P(t) = \frac{K_b}{746K_i}T_m(t)\omega_m(t) \text{ hp}    (6-119)

Also, it is known that

    P(t) = \frac{1}{550}T_m(t)\omega_m(t) \text{ hp}    (6-120)

Therefore, equating Eq. (6-119) to Eq. (6-120) gives

    K_i = \frac{550}{746}K_b = 0.737K_b    (6-121)

or

    K_b = 1.36K_i    (6-122)
Thus, given K_i to be 0.5 lb-ft/amp, K_b is found to be 0.68 volt/rad/sec. Using Eqs. (6-107) through (6-116), the state equations of the system are written in matrix form as follows:
    \frac{d}{dt}\begin{bmatrix} i_a(t) \\ \omega_m(t) \\ \theta_m(t) \end{bmatrix} =
    \begin{bmatrix} -\dfrac{R_a}{L_a} & -\dfrac{K_b}{L_a} & -\dfrac{nAK_s}{L_a} \\[4pt]
                    \dfrac{K_i}{J_{me}} & -\dfrac{B_{me}}{J_{me}} & 0 \\[4pt]
                    0 & 1 & 0 \end{bmatrix}
    \begin{bmatrix} i_a(t) \\ \omega_m(t) \\ \theta_m(t) \end{bmatrix} +
    \begin{bmatrix} \dfrac{AK_s}{L_a} \\[4pt] 0 \\ 0 \end{bmatrix}\theta_r(t)    (6-123)

The output equation is given by Eq. (6-117). The state diagram of the system is drawn as shown in