State Variable Technique - Feedback Control Design | EE 571, Study notes of Electrical and Electronics Engineering

Material Type: Notes; Class: FEEDBACK CONTROL DESIGN; Subject: Electrical Engineering; University: University of Kentucky; Term: Unknown 1989;


State-Variable Technique

Motivation for state-variable techniques: how do we simulate an implementation?

The transfer function (in both the s- and z-domains) and time-domain convolution with the impulse response allow us to compute the output for any input signal. However, neither approach specifies a particular implementation of the system.

For example, in chapter 8 we described a number of implementations, such as direct form I, direct form II, parallel form, and cascade form, for a discrete-time system such as the following:

H(z) = 1 / (1 + (1/3)z^-1)

Example of implementations:

There are in fact infinitely many implementations of this transfer function, because we can arbitrarily introduce cancelling pole-zero factors:

H(z) = 1 / (1 + (1/3)z^-1)
     = (1 - cz^-1) / [(1 + (1/3)z^-1)(1 - cz^-1)]
     = (1 - cz^-1) / [1 + (1/3 - c)z^-1 - (c/3)z^-2]

for any constant c; each choice of c leads to a different (second-order) implementation of the same input-output behavior.

[Block diagrams: two realizations of H(z), each built from adders Σ, a unit delay z^-1, and a gain of 1/3, mapping the input x(nT) to the output y(nT).]
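The first-order realization above can be simulated directly from its difference equation, y(nT) = x(nT) - (1/3)·y(nT - T). The sketch below is ours, not from the notes (the function name and test input are illustrative); note that the single value carried between iterations is exactly the contents of the delay register, i.e. the internal state discussed next.

```python
def direct_form_filter(x):
    """Simulate H(z) = 1 / (1 + (1/3) z^-1) through its difference
    equation y[n] = x[n] - (1/3) * y[n-1]."""
    y_prev = 0.0          # the delay register: this is the internal state
    y = []
    for xn in x:
        yn = xn - (1.0 / 3.0) * y_prev
        y.append(yn)
        y_prev = yn       # store y[n] for use at the next time step
    return y

# The impulse response of this system is (-1/3)^n:
print(direct_form_filter([1.0, 0.0, 0.0, 0.0]))  # 1.0, -0.333..., 0.111..., -0.037...
```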

Another implementation:

To characterize each implementation, it is essential to keep track of the "internal state" of the system, that is, the current values stored in its registers (remember the state diagrams from your digital logic design class?).

It is important to have an implementation-based representation that allows us to

1. examine any signal within the implementation (to detect possible instability inside the implementation);

2. identify whether any part of the implementation is redundant; and

3. determine how "observable" (whether we can infer the internal state of the system from its outputs) and "controllable" (whether we can steer the internal state to a desirable configuration using the input) the system is.

These topics will be the focus of EE 571 (Feedback Control Design) and EE 572

(Discrete Control Design). In this course, we will introduce the main tool used in

the design: the state-space representation. In addition, we will focus on the

continuous-time version of the state-space representation.

“Big picture”

One can convert between three different representations which serve different

purposes:

[Block diagram: another implementation of H(z) using two unit delays z^-1, with branch gains 2/3, -1/9, 2/3, and -1/3, mapping x(nT) to y(nT).]

[Diagram: Transfer Function <-> Differential Equation <-> State-Variable. The transfer function, related to the differential equation via the Laplace transform, captures the I/O characteristics; the state-variable form is the fundamental description used to analyze implementations.]

Brief Review of Matrix Algebra

The most fundamental concept in linear algebra is the matrix, a rectangular array of numbers, e.g.

A = [1 2; 3 4],   B = [1 0; 2 1; 0 3]

(here rows are separated by semicolons).

Matrices are described by their numbers of rows and columns. When describing a matrix, we give the number of rows first, then the number of columns. So A above is a "2 by 2 matrix" and B is a "3 by 2 matrix".

We can add and subtract matrices of the same dimensions only.

Example: [1 2; 3 4] + [0 -1; 2 1] = [1 1; 5 5]

Matrix multiplication can only be applied when the number of columns in the first matrix matches the number of rows in the second.

Example:

[1 2; 3 4; -1 0] · [1 0 2; 0 1 1]
= [1·1+2·0  1·0+2·1  1·2+2·1;  3·1+4·0  3·0+4·1  3·2+4·1;  -1·1+0·0  -1·0+0·1  -1·2+0·1]
= [1 2 4; 3 4 10; -1 0 -2]

In other words, the entry at the i-th row and j-th column of the resulting matrix is the inner product of the i-th row vector of the first matrix and the j-th column vector of the second matrix.
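This row-by-column rule translates directly into code. Here is a minimal sketch of our own (the name `matmul` and the test matrices are illustrative), with matrices stored as lists of rows:

```python
def matmul(A, B):
    """Multiply matrices stored as lists of rows; entry (i, j) of the
    result is the inner product of row i of A and column j of B."""
    rows_a, cols_a = len(A), len(A[0])
    rows_b, cols_b = len(B), len(B[0])
    if cols_a != rows_b:   # dimension rule from the text
        raise ValueError("columns of A must equal rows of B")
    return [[sum(A[i][k] * B[k][j] for k in range(cols_a))
             for j in range(cols_b)]
            for i in range(rows_a)]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul(A, B))  # [[2, 1], [4, 3]]
print(matmul(B, A))  # [[3, 4], [1, 2]] -- a different result
```

The second print already hints that the order of the factors matters.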

Important: Matrix multiplication is not commutative.

⋅



^ ≠



− 1 1

The right side is not even a matrix product!

It does not work even for square matrices:

[a b; c d] · [e f; g h] = [ae+bg af+bh; ce+dg cf+dh]

but

[e f; g h] · [a b; c d] = [ea+fc eb+fd; ga+hc gb+hd]

Division in matrix algebra is represented via matrix inverse.

A ⋅ B = C ⇒ B = A −^1 ⋅ C

We need to be careful when handling matrix inverses because

a. Only square matrices have inverses. This is because an inverse matrix represents an inverse linear transform, which is only feasible if the input and output dimensions are the same.

b. Not all square matrices have inverses. A simple test:

A^-1 exists  ⇔  det(A) ≠ 0

det(A) denotes the determinant of a matrix, and it is a real number. If A is 2 by 2, det(A) measures the (signed) area of the parallelogram formed by the two column vectors. If A is 3 by 3, det(A) measures the volume of the parallelepiped formed by the three column vectors. For a 2 by 2 matrix:

det([a b; c d]) = ad - bc

Higher-dimensional determinants can be computed recursively, or more compactly,

det(A) = Σ_{j=1}^{n} a_{1j} C_{1j},  where C_{ij} = (-1)^{i+j} det(M_{ij})

is called the i-j-th co-factor, and M_{ij} is called a minor, defined as the matrix formed by deleting the i-th row and j-th column from A.

Example:

det([1 2 0; 3 -1 2; 0 1 1]) = 1·det([-1 2; 1 1]) - 2·det([3 2; 0 1]) + 0·det([3 -1; 0 1])
                            = 1·(-3) - 2·3 + 0 = -9

Using the knowledge of determinants, we can numerically compute the matrix inverse.

For a 2 by 2 matrix A = [a b; c d], its inverse can be computed as

A^-1 = 1/(ad - bc) · [d -b; -c a]

For a general matrix,

A^-1 = 1/det(A) · [C_11 ... C_1n; ... ; C_n1 ... C_nn]^T

where C_ij is the i-j-th co-factor.
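The cofactor-transpose construction can be checked numerically. A self-contained sketch of ours (the names `det` and `inverse` are illustrative), using exact rational arithmetic from the standard `fractions` module to avoid round-off:

```python
from fractions import Fraction

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    return sum(A[0][j] * (-1) ** j
               * det([row[:j] + row[j + 1:] for row in A[1:]])
               for j in range(len(A)))

def inverse(A):
    """A^-1 = (1/det(A)) * [C_ij]^T, with C_ij the i-j-th co-factor."""
    d = det(A)
    if d == 0:
        raise ValueError("matrix is singular: det(A) = 0, no inverse exists")
    n = len(A)
    # co-factor matrix: C_ij = (-1)^(i+j) * det(minor M_ij)
    cof = [[(-1) ** (i + j)
            * det([row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i])
            for j in range(n)]
           for i in range(n)]
    # transpose the co-factor matrix and divide by det(A)
    return [[Fraction(cof[j][i], d) for j in range(n)] for i in range(n)]

print(inverse([[1, 2], [3, 4]]))  # entries -2, 1, 3/2, -1/2
```

For the 2 by 2 case this reproduces the formula above: det = ad - bc = -2, and the cofactor transpose is [d -b; -c a] = [4 -2; -3 1].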