Lecture 13  Phys 3750  D M Riffe  2/28/2013

Vector Spaces / Real Space

Overview and Motivation: We review the properties of a vector space. As we shall see in the next lecture, the mathematics of normal modes and Fourier series is intimately related to the mathematics of a vector space.

Key Mathematics: The concept and properties of a vector space, including addition, scalar multiplication, linear independence and basis, inner product, and orthogonality.

I. Basic Properties of a Vector Space

You are already familiar with several different vector spaces. For example, the set of all real numbers forms a vector space, as does the set of all complex numbers. The set of all position vectors (defined from some origin) is also a vector space. You may not be familiar with the concept of functions as vectors in a vector space. We will talk about that in the next lecture. Here we review the concept of a vector space and discuss the properties of a vector space that make it useful.

A. Vector Addition

A vector space is a set (of some kind of quantity) that has the operation of addition (+) defined on it, whereby two elements $\mathbf{u}$ and $\mathbf{v}$ of the set can be added to give another element $\mathbf{w}$ of the set,[1]

    $\mathbf{w} = \mathbf{u} + \mathbf{v}$.    (1)

There is also an additive identity included in the set; this additive identity is known as the zero vector $\mathbf{0}$, such that for any vector $\mathbf{v}$ in the space

    $\mathbf{v} + \mathbf{0} = \mathbf{v}$.    (2)

The addition rule has both commutative,

    $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$,    (3)

and associative,

    $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$,    (4)

properties.

[1] We denote vector quantities by boldface type and scalars in standard italic type. This is standard practice in most physics journals.

B. Scalar Multiplication

The vector spaces that we are interested in also have another operation defined on them known as scalar multiplication, in which a vector $\mathbf{u}$ in the space can be multiplied by either a real or complex number $a$, producing another vector $\mathbf{v} = a\mathbf{u}$ in the space. If we are interested in multiplying the elements of the space by only real numbers, the space is known as a real vector space; if we wish to multiply the elements of the space by complex numbers, then the space is known as a complex vector space. Scalar multiplication must satisfy the following properties for scalars $a$ and $b$ and vectors $\mathbf{u}$ and $\mathbf{v}$:

    $(a + b)\mathbf{u} = a\mathbf{u} + b\mathbf{u}$,    (5a)
    $a(b\mathbf{u}) = (ab)\mathbf{u}$,    (5b)
    $a(\mathbf{u} + \mathbf{v}) = a\mathbf{u} + a\mathbf{v}$,    (5c)
    $1\,\mathbf{u} = \mathbf{u}$,    (5d)
    $0\,\mathbf{u} = \mathbf{0}$.    (5e)

None of these properties should be much of a surprise (I hope!).

C. Linear Independence and Basis

The span of a subset of $m$ vectors is the set of all vectors that can be written as a linear combination of the $m$ vectors,

    $a_1\mathbf{u}_1 + a_2\mathbf{u}_2 + \cdots + a_m\mathbf{u}_m$.    (6)

The subset of $m$ vectors is linearly independent if none of the subset can be written as a linear combination of the other members of the subset. If the subset is linearly dependent, then we can write at least one of the members as a linear combination of the others, for example

    $\mathbf{u}_m = a_1\mathbf{u}_1 + a_2\mathbf{u}_2 + \cdots + a_{m-1}\mathbf{u}_{m-1}$.    (7)

[...]

    $(\mathbf{u}_m, \mathbf{v}) = \Bigl(\mathbf{u}_m, \sum_{n=1}^{N} v_n \mathbf{u}_n\Bigr) = \sum_{n=1}^{N} v_n (\mathbf{u}_m, \mathbf{u}_n)$.    (12)

[This last equality follows from Eq. (10b).] So what happens? Well, there will only be one nonzero inner product on the rhs, $(\mathbf{u}_m, \mathbf{u}_m)$, and so Eq. (12) becomes

    $(\mathbf{u}_m, \mathbf{v}) = v_m (\mathbf{u}_m, \mathbf{u}_m)$,    (13)

and we can now solve for $v_m$ as

    $v_m = \frac{(\mathbf{u}_m, \mathbf{v})}{(\mathbf{u}_m, \mathbf{u}_m)}$.    (14)

All of this should now look even more strangely familiar. We will get to why that is in the next lecture, but right now we will review a vector space with which you should have some familiarity.
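(An aside before that, not part of the original notes: the expansion-coefficient formula, Eq. (14), is easy to check numerically. The Python/NumPy sketch below uses a made-up orthogonal, but not normalized, basis of three-component real vectors together with the real-space inner product of Eq. (27) below; it recovers the coefficients of an arbitrary vector and then rebuilds that vector from them.)

```python
import numpy as np

# A made-up orthogonal (but not normalized) basis of three-component real vectors.
u1 = np.array([1.0, 1.0, 0.0])
u2 = np.array([1.0, -1.0, 0.0])
u3 = np.array([0.0, 0.0, 2.0])
basis = [u1, u2, u3]

# An arbitrary vector to expand in this basis.
v = np.array([3.0, -2.0, 5.0])

# Eq. (14): v_m = (u_m, v) / (u_m, u_m), with the component inner product of Eq. (27).
coeffs = [float(np.dot(um, v) / np.dot(um, um)) for um in basis]

# Rebuild v from its expansion coefficients and compare with the original vector.
v_rebuilt = sum(c * um for c, um in zip(coeffs, basis))
print(coeffs)                     # [0.5, 2.5, 2.5]
print(np.allclose(v, v_rebuilt))  # True
```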
II. 1D Displacement Space

Let's look at a simple example to start. Assume that we have a line drawn somewhere, and on that line we have identified an origin $O$, as illustrated in the picture below. The vector space that we are interested in consists of all the arrows that start at $O$ and end someplace on the line. The picture also illustrates two of these vectors, one denoted $\mathbf{u}$ and one denoted $\mathbf{v}$.[4]

[Figure: a line with the origin $O$ marked on it and two arrows, $\mathbf{u}$ and $\mathbf{v}$, drawn from $O$ along the line.]

[4] Note, this vector space is not a vector field. A vector field is the assignment of a vector to each point in space.

So let's talk about some of the math introduced above with respect to this vector space. We first have to define vector addition, which must satisfy Eqs. (1) – (4). Let's go with the standard physics definition of vector addition, whereby we add vectors by the tip-to-tail method: one of the arrows is translated (without any rotation) and its tail is placed at the tip of the other arrow, as illustrated in the picture below. Clearly this produces another arrow whose tail is at the origin and whose head is on the line (and which is thus a vector in the space). Eq. (1) is thus satisfied. It should also be clear that we could have translated $\mathbf{v}$ rather than $\mathbf{u}$ in this example, and so this definition satisfies Eq. (3), the commutative property of vector addition. We will not illustrate it here, but you should convince yourself that Eq. (4), the associative property, is satisfied by the sum of three arrows. What about the zero vector? Well, if Eq. (2) is to be satisfied, it must have no length, and so it must be the arrow that begins and ends at the origin.

[Figure: tip-to-tail addition of the arrows $\mathbf{u}$ and $\mathbf{v}$ drawn from the origin $O$.]

What about scalar multiplication? Again, we go with the standard definition, whereby scalar multiplication by a positive number $a$ results in an arrow that points in the same direction and is $a$ times as long as the original arrow. Multiplication by a negative scalar $b$ results in an arrow that points in the opposite direction and is $|b|$ times as long as the original arrow. It should be clear that this definition satisfies all parts of Eq. (5).

What about linear independence and dimension? Pick an arrow, any arrow. Now ask yourself the following question: can I find another arrow that is not a multiple of my first arrow? If the answer is no (which it is), then the vector space has one dimension, and you can use any arrow as the basis for the space. For example, let's say you pick the arrow $\mathbf{u}$ in the above drawing as your basis. Then the space is one dimensional because you can write any other arrow $\mathbf{v}$ as

    $\mathbf{v} = a\mathbf{u}$,    (15)

where $a$ is some scalar. Although we have not yet defined what the inner product is, notice that if we take the inner product of Eq. (15) with $\mathbf{v}$ we get

    $(\mathbf{v}, \mathbf{v}) = (a\mathbf{u}, a\mathbf{u}) = a^2 (\mathbf{u}, \mathbf{u})$,    (16)

so that

    $a = \pm\sqrt{\frac{(\mathbf{v},\mathbf{v})}{(\mathbf{u},\mathbf{u})}} = \pm\frac{|\mathbf{v}|}{|\mathbf{u}|}$,    (17)

with the sign depending upon the sign of $a$. Now scalar multiplication was defined as multiplying an arrow's length by the multiplying scalar. Thus $a$ is also the + or − ratio of the two vectors' lengths. Therefore, for this space the norm must be proportional to the length of the arrow.

So what about the inner product? Also notice the following. Because this is a one-dimensional space, this basis $\{\mathbf{u}\}$ is trivially orthogonal, and we can use Eq. (14) (where here $a$ takes the place of $v_m$) to express the coefficient $a$ in Eq. (15) as

    $a = \frac{(\mathbf{u},\mathbf{v})}{(\mathbf{u},\mathbf{u})}$.    (18)

Together, Eqs. (17) and (18) imply

    $(\mathbf{u},\mathbf{v}) = \pm\sqrt{(\mathbf{u},\mathbf{u})(\mathbf{v},\mathbf{v})} = \pm|\mathbf{u}|\,|\mathbf{v}|$.    (19)

So which sign do we use? As we now show, it depends upon the relative directions of the two arrows. Let's first consider the case where $\mathbf{u}$ and $\mathbf{v}$ are in the same direction. Then we can write $\mathbf{v} = a\mathbf{u}$, where $a > 0$. Then we have

    $(\mathbf{u},\mathbf{v}) = (\mathbf{u}, a\mathbf{u}) = a(\mathbf{u},\mathbf{u})$.    (20)

Because $(\mathbf{u},\mathbf{u}) > 0$, we also have $(\mathbf{u},\mathbf{v}) > 0$, and so we must use the positive sign if $\mathbf{u}$ and $\mathbf{v}$ are in the same direction. Similarly, if $\mathbf{u}$ and $\mathbf{v}$ are in opposite directions then $a < 0$, and we must use the negative sign.

One last comment: notice that nothing we have done here makes us choose the norm to be exactly equal to the length of the arrows; it must only be proportional to the length of the arrows. For this space, however, the standard definition of the vector norm is simply the arrow length.
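(An aside, not part of the original notes: since an arrow in this space is completely specified by its signed length, the space can be modeled numerically by the real numbers themselves, with the inner product taken to be the ordinary product of signed lengths (an assumption consistent with the component formula, Eq. (27), below). The short Python sketch below checks that Eq. (18) recovers the scalar $a$ of Eq. (15), sign included, and that the sign of $(\mathbf{u},\mathbf{v})$ follows the rule just derived from Eqs. (19) and (20).)

```python
# 1D displacement space modeled numerically: an arrow from O is just its signed
# length, and the inner product is assumed to be the product of signed lengths.
def inner(u, v):
    return u * v

u = 2.0                       # an arrow of length 2 in the positive direction
for a in (1.5, -1.5):         # v = a*u: first parallel, then antiparallel, to u
    v = a * u
    a_recovered = inner(u, v) / inner(u, u)    # Eq. (18)
    sign_of_uv = 1 if inner(u, v) > 0 else -1  # sign rule of Eqs. (19) and (20)
    print(a, a_recovered, sign_of_uv)          # prints 1.5 1.5 1, then -1.5 -1.5 -1
```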
[...]

Now it can be shown that the definition of the inner product [Eq. (22)] satisfies Eq. (10), and so Eq. (26) simplifies to

    $(\mathbf{r},\mathbf{s}) = r_x s_x + r_y s_y + r_z s_z$.    (27)

That is, the inner product of two vectors can be simply expressed as the sum of the products of corresponding components of the two vectors.

Lastly, we remark that when working with vectors in real space we often use a more notationally compact form than that in Eq. (20): we often simply express the vector $\mathbf{r}$ as its triplet of components,

    $\mathbf{r} = (r_x, r_y, r_z)$,    (28)

leaving the basis vectors $\hat{\mathbf{x}}$, $\hat{\mathbf{y}}$, and $\hat{\mathbf{z}}$ as implied. But when using this notation one must keep in mind that lurking in the background is an implied set of basis vectors.
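(Another aside, not in the original notes: Eq. (27) is exactly what numerical libraries compute as the dot product of two component triplets. A minimal NumPy sketch, with made-up components:)

```python
import numpy as np

# Two real-space vectors written as component triplets, as in Eq. (28);
# the numerical values are arbitrary.
r = np.array([1.0, 2.0, 3.0])
s = np.array([4.0, -1.0, 2.0])

# Eq. (27): the inner product is the sum of products of corresponding components.
inner_rs = np.dot(r, s)         # 1*4 + 2*(-1) + 3*2 = 8.0

# The norm of r, i.e. the square root of (r, r).
r_norm = np.sqrt(np.dot(r, r))  # sqrt(14)

print(inner_rs, r_norm)
```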
Exercises

*13.1 The inner product.
(a) Show that Eq. (10a) implies that the inner product of a vector $\mathbf{u}$ with itself is a real number.
(b) Using Eqs. (10a) and (10b), show that Eq. (10c) follows.
(c) Using Eqs. (10a) and (10b), show that Eq. (10d) follows.

*13.2 Projection. The projection of a vector $\mathbf{v}$ onto the direction of another vector $\mathbf{u}$ is defined as

    $\mathbf{p}(\mathbf{v},\mathbf{u}) = \frac{(\mathbf{u},\mathbf{v})}{(\mathbf{u},\mathbf{u})}\,\mathbf{u}$.

Consider an orthogonal (but not necessarily normal) basis $\mathbf{u}_1$, $\mathbf{u}_2$, $\mathbf{u}_3$. Using this basis, any vector $\mathbf{v}$ can be written as

    $\mathbf{v} = v_1\mathbf{u}_1 + v_2\mathbf{u}_2 + v_3\mathbf{u}_3$.

Determine expressions for $v_1$, $v_2$, and $v_3$ and thus show that $\mathbf{v}$ can be written as

    $\mathbf{v} = \mathbf{p}(\mathbf{v},\mathbf{u}_1) + \mathbf{p}(\mathbf{v},\mathbf{u}_2) + \mathbf{p}(\mathbf{v},\mathbf{u}_3)$.

That is, the vector $\mathbf{v}$ is simply the sum of its projections onto the orthogonal basis set. In physics we often call these projections the vector components of $\mathbf{v}$ in the $\mathbf{u}_1$, $\mathbf{u}_2$, $\mathbf{u}_3$ basis.

*13.3 Consider two linearly independent vectors $\mathbf{u}$ and $\mathbf{v}$ and the vector

    $\mathbf{w} = \mathbf{v} - \frac{(\mathbf{u},\mathbf{v})}{(\mathbf{u},\mathbf{u})}\,\mathbf{u}$

made from these two vectors. Assume that the vector space is complex. In this problem you are going to do two separate calculations, both of which show that $\mathbf{w}$ is orthogonal to $\mathbf{u}$. You may find Eqs. (10a) – (10d) useful here.
(a) Easy way: Calculate the inner product $(\mathbf{u},\mathbf{w})$ to show that $\mathbf{w}$ is orthogonal to $\mathbf{u}$.
(b) Slightly harder way: Calculate the inner product $(\mathbf{w},\mathbf{u})$ to show that $\mathbf{w}$ is orthogonal to $\mathbf{u}$.
(This important result can be used to create an orthogonal basis out of any basis.)

*13.4 Show that two linearly independent vectors need not be orthogonal. (Hint: you may find the result of Exercise 13.3 to be helpful here.)

*13.5 Assuming that Eq. (10) applies, show that Eq. (27) follows from Eq. (26).

*13.6 Use Eq. (27) to find the norm of the vector $\mathbf{r} = r_x\hat{\mathbf{x}} + r_y\hat{\mathbf{y}} + r_z\hat{\mathbf{z}}$. Does your result look familiar?

**13.7 Real space. A vector $\mathbf{r}$ in real space has components $(4, -1, 10)$ in one orthonormal basis. In this same basis a set of vectors is given by

    $\hat{\mathbf{u}}_1 = \bigl(\tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}}, 0\bigr)$,   $\hat{\mathbf{u}}_2 = \bigl(\tfrac{1}{\sqrt{2}}, -\tfrac{1}{\sqrt{2}}, 0\bigr)$,   $\hat{\mathbf{u}}_3 = (0, 0, -1)$.

(a) Show that this set of vectors is orthonormal (and is thus another orthonormal basis).
(b) Find the components of $\mathbf{r}$ in this new basis.
(c) From the components given in the statement of the problem, find $r = |\mathbf{r}|$.
(d) From the components determined in part (b), find $r$. Is $r$ the same as calculated in part (c)?
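(A closing aside, not part of the original notes: the parenthetical remark in Exercise 13.3 is the idea behind the Gram-Schmidt procedure. The Python sketch below applies that idea numerically to a made-up, non-orthogonal set of three-component real vectors, using the real inner product of Eq. (27); it only illustrates the remark and is not a solution to the exercise, which concerns a general complex vector space.)

```python
import numpy as np

def orthogonalize(vectors):
    """Repeatedly apply the construction of Exercise 13.3: from each vector,
    subtract its projections onto the previously accepted (orthogonal) vectors."""
    basis = []
    for v in vectors:
        w = np.array(v, dtype=float)
        for u in basis:
            w = w - (np.dot(u, w) / np.dot(u, u)) * u   # w = w - [(u,w)/(u,u)] u
        basis.append(w)
    return basis

# A made-up linearly independent, but non-orthogonal, set of vectors.
vecs = [np.array([1.0, 1.0, 0.0]),
        np.array([1.0, 0.0, 1.0]),
        np.array([0.0, 1.0, 1.0])]

ortho = orthogonalize(vecs)

# Every pair of distinct vectors in the new set has (numerically) zero inner product.
for i in range(len(ortho)):
    for j in range(i + 1, len(ortho)):
        print(i, j, np.dot(ortho[i], ortho[j]))
```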