PREFACE

Linear algebra is one of the most widely used subjects in applied mathematics because of its many applications in science and engineering. It is used extensively in physics, statistics, biology, computer graphics, computer-generated imagery, economics and numerous branches of engineering. This textbook on Linear Algebra has been written according to the latest UGC unified syllabus under the Choice Based Credit System for B.A. and B.Sc. students.

Chapter Organization

This book is an introduction to linear algebra for undergraduate and graduate students. The basic ideas of linear algebra are developed through many examples. The book provides a thorough treatment of matrices, vector spaces, linear transformations, inner product spaces, dual spaces, and eigenvalues and eigenvectors, and it helps the reader solve many kinds of problems in linear algebra. The book has eight chapters. Chapter 1 introduces matrices, special matrices and operations on matrices; in linear algebra the concept of a matrix is used to solve numerous problems. Chapter 2 is devoted to the rank of a matrix and its application to systems of linear equations; the emphasis is on how the rank of a matrix determines whether a system of linear equations has a solution and how the solutions behave. Chapter 3 presents the basic concept of a vector space and its properties, including subspaces, quotient spaces, and the basis and dimension of a vector space. Chapter 4 is devoted to linear transformations and their properties; it presents the rank-nullity theorem, one of the most powerful theorems about linear transformations, and also discusses the inverse of a linear transformation. Chapter 5 discusses the matrix associated with a linear transformation and vice versa, with emphasis on how this matrix changes when the bases of the vector spaces are changed, and on how coordinates change under a change of basis. Chapter 6 treats real and complex inner product spaces, including how orthogonal and orthonormal sets of vectors are obtained by the Gram-Schmidt orthogonalization process. Chapter 7 discusses the dual space. Chapter 8 is devoted to the eigenvalues and eigenvectors of a matrix, including those of some special matrices.

Objective type questions are given at the end of every chapter.

—Authors
ACKNOWLEDGEMENT

The authors are thankful to one and all who have directly or indirectly helped us during the completion of this book. We are also thankful to all our family members for their encouragement, patience and cooperation during the completion of this book. We wish to express our appreciation for the support provided by the staff at Scientific International Publishers during the publication of this book. We are also very thankful to all readers for their comments and valuable suggestions for the improvement of this book.

—Authors
ABOUT THE AUTHORS

Dr. Mukesh Kumar is presently working as an Associate Professor in the Department of Mathematics at Graphic Era University, Dehradun, India. He received his M.Phil. degree in Mathematics from the Indian Institute of Technology, Roorkee, India, and his Ph.D. degree from HNBG Central University, Srinagar, and has more than 16 years of experience. His fields of specialization are inventory control, supply chain management and operations research. He has published 22 research papers in reputed national and international journals and has presented many papers at national and international conferences. He has guided two Ph.D. students in the field of operations research. He is on the editorial board of many national and international journals and is a reviewer for many national and international journals. He is an author/co-author of the books Mathematical Foundation of Computer Science, Business Mathematics, Integral Calculus, Differential Calculus, Engineering Mathematics for Semesters I and II, and Engineering Mathematics for Semesters III and IV.

Dr. Anubhav Pratap Singh is working as an Assistant Professor (Senior Scale) in the Department of Mathematics at SGRR PG College, affiliated with HNB Garhwal Central University, Srinagar. He obtained his M.Phil. in Mathematics from IIT Roorkee and his Ph.D. in Mathematics from CCS University, Meerut, UP. He has 14 years of postgraduate teaching experience and has guided six Ph.D. students in the field of operations research. He has published many research papers in reputed national and international journals and has presented many papers at national and international conferences. He is an author/co-author of three books.

Dr. Ashok Kumar is working as an Assistant Professor in the Department of Mathematics at H.N.B. Garhwal University (A Central University), Srinagar, Garhwal, Uttarakhand. He completed his M.Sc. (Applied Mathematics) and Ph.D. (Mathematics) at the Department of Mathematics, Indian Institute of Technology Roorkee (IIT Roorkee). He worked as an assistant professor at Government W.P.G. College, Kandhla, Muzaffarnagar, UP, and afterwards as an assistant professor in the Department of Mathematics, NIT Jalandhar, Punjab. His research areas are computational fluid dynamics, hydrodynamic stability, and applications of spectral and spectral element methods. He has completed one UGC Startup Research Project, and one DST-SERB project on hydrodynamic stability in a vertical pipe filled with a porous medium is in progress. He has published 12 research papers in international peer-reviewed journals and 06 articles in national and international conferences organized in India and abroad. He is an author/co-author of three books.

Dr. Anand Chauhan is presently working as an Assistant Professor in the Department of Mathematics, Graphic Era University, Dehradun, India. He obtained his Master of Science in Mathematics from HNB Garhwal Central University, Srinagar, and his Ph.D. (Operations Research) from C.C.S. University, Meerut. He has 12 years of teaching experience and has published 30 research papers in national/international peer-reviewed journals. His research interests include operations research, optimization and numerical analysis. He is an author/co-author of the books Business Mathematics, Mathematics-1, Calculus of Variations and Its Applications, Linear 3D Thermal Stability of a Viscous Fluid in Cubical Cavity, and Optimal Replenishment Policies for Deteriorating Items under Inflation.
CONTENTS

Preface (v)
Acknowledgement (vi)
About the Authors (vii)

Chapter 1 Matrices 1–57
1.1 Matrices 1
1.2 Special Forms of Matrices 2
1.2.1 Column Matrix (Column Vector) 2
1.2.2 Row Matrix (Row Vector) 2
1.2.3 Zero or Null Matrix 2
1.2.4 Diagonal Matrix 2
1.2.5 Scalar Matrix 2
1.2.6 Unit Matrix or Identity Matrix 2
1.2.7 Upper Triangular Matrix 3
1.2.8 Lower Triangular Matrix 3
1.3 Operations on Matrices 3
1.3.1 Equality of Matrices 3
1.3.2 Negative of a Matrix 3
1.3.3 Addition and Subtraction 3
1.3.4 Scalar Multiple of a Matrix 4
1.3.5 Multiplication of Matrices 5
1.4 Trace of a Matrix 10
1.5 Transpose of a Matrix 11
1.6 Positive Integral Powers of a Square Matrix 12
1.7 Special Matrices 13
1.7.1 Symmetric Matrix 13
1.7.2 Skew-Symmetric Matrices 13
1.7.3 Hermitian and Skew-Hermitian Matrix 15
1.7.4 Orthogonal Matrix 17
1.7.5 Idempotent Matrix 18
1.7.6 Involutory Matrix 19
1.7.7 Periodic Matrix 19
1.7.8 Nilpotent Matrix 19
1.8 Elementary Row Operations 20
1.9 Equivalent Matrices 20
1.10 Row Reduction and Echelon Forms 21
1.11 Pivot Positions 22
1.12 The Row Reduction Algorithm 24
1.13 The Inverse of a Matrix 26
1.14 Elementary Matrices 28
1.15 Method to Find A⁻¹ (Elementary Row Operations) 28
1.16 Determinants 30
1.16.1 Determinant by Cofactor Expansion 31
1.16.2 Cofactor Expansions 32
1.17 Adjoint of a Matrix 34
1.18 Determinants by Row Reduction 38
1.19 Evaluating Determinants by Row Reduction 39
1.20 Properties of the Determinant Function 41
Exercise 53
Objective Type Questions 54
Answers 56

Chapter 2 Rank of Matrix and System of Linear Equations 58–93
2.1 Definition 58
2.2 Normal Form 58
2.3 System of Linear Equations 66
2.3.1 Homogeneous and Non-homogeneous System 66
2.3.2 Solution of the System of Linear Equations 66
2.4 Degenerate Linear Equations 67
2.5 System of Linear Equations in Two Unknowns 68
2.6 Solution of System of Linear Equations by Elimination Method 70
2.7 Consistency of a System of Linear Equations 71
2.7.1 Homogeneous System of Linear Equations 79
2.8 Solution of System of Linear Equations by Matrix Method 82
2.9 Cramer's Rule 83
Exercise 89
True or False 90
Objective Type Questions 91
Answers 93

Chapter 3 Vector Space 94–165
3.1 Introduction 94
3.2 Definition of Vector Space 94
3.3 Subspace of a Vector Space 102
3.4 Sum of Subspaces 108
3.5 Direct Sum 109
Exercise 3.1 113
3.6 Linear Span 116
3.6.1 Linear Combination 116
3.6.2 Linear Span 117
Exercise 3.2 124
3.7 Linear Dependence and Independence 125
3.7.1 Linearly Dependent (L.D.) 125
3.7.2 Linearly Independent (L.I.) 127
3.7.3 Definition: Collinear 129
3.7.4 Definition: Coplanar 129
Exercise 3.3 136
3.8 Basis and Dimension of Vector Space 138
3.8.1 Definition (Basis) 138
3.9 Coordinate of a Vector Relative to the Ordered Basis 145
Objective Type Questions 157
Answers 164

Chapter 4 Linear Transformation 166–217
4.1 Introduction 166
4.2 Definition and Examples 166
Exercise 4.1 175
4.3 Null Space (Kernel Space) and Range Space 176
4.4 Rank-Nullity Theorem 181
Exercise 4.2 186
4.5 L[U, V] as a Space: Space of Linear Transformations 188
Exercise 4.3 191
4.6 Isomorphism 192
Exercise 4.4 196
4.7 Inverse of Linear Transformation 197
Exercise 4.5 203
4.8 Composition of Linear Transformations 204
Exercise 4.6 205
Objective Type Questions 206
Answers 215

Chapter 5 Linear Transformation and Matrix 218–238
5.1 Matrix Associated with a Linear Transformation 218
Exercise 5.1 229
5.2 Linear Transformation Associated with a Matrix 230
Exercise 5.2 232
Objective Type Questions 233
Answers 237
Chapter 6 Inner Product Spaces 239–280
Introduction 239
6.1 Inner Products 239
6.1.1 Dot Product 239
6.1.2 Inner Product 240
6.1.3 Inner Product Space 241
Exercise 6.1 246
6.2 Norm ||v|| 247
6.3 Inner Products Generated by Matrices 250
6.3.1 Inner Product Generated by the Identity Matrix 250
6.3.2 Inner Product on M22 250
6.3.3 Inner Product on P2 251
6.4 Properties of Distance 251
6.5 Angle Between Vectors 252
6.6 Orthogonal Vectors 253
6.6.1 Orthogonality and the Zero Vector 253
6.6.2 Orthogonal Complements 255
6.6.3 Properties of Orthogonal Complements 256
Exercise 6.2 257
6.7 Orthogonal Set 258
6.7.1 Coordinates Relative to Orthogonal Bases 259
6.8 An Orthogonal Projection 262
Exercise 6.3 273
Exercise 6.4 277
True and False 277
Objective Type Questions 278
Answers 279

Chapter 7 Dual Space 281–293
Introduction 281
7.1 Definition (Linear Functional) 281
7.2 Definition (Dual of Vector Space V) 281
7.3 Definition (Second Dual Space) 287
7.4 Annihilators 288
Exercise 292
Answers 293

Chapter 8 Eigen Values and Eigen Vectors 294–335
8.1 Introduction 294
8.2 Definition and Examples 294
Definition 8.2.1 (Eigen Value and Eigen Vector) 294
Definition 8.2.2 (Trace of a Matrix) 296
Definition 8.2.3 (Algebraic Multiplicity (AM) of λ) 297
Definition 8.2.4 (Geometric Multiplicity (GM) of λ) 297
8.3 Cayley-Hamilton Theorem 304
8.4 Applications of Cayley-Hamilton Theorem 306
8.4.1 Reduce the Degree of a Polynomial in A 306
8.4.2 Determine Regular Functions of a Matrix Using Cayley-Hamilton Theorem 307
8.5 Eigen Values and Eigen Vectors of Some Special Matrices 312
Definition 8.5.1 (Similar Matrices) 317
Definition 8.5.2 (Diagonalizable) 318
Exercise 8.1 327
Objective Type Questions 329
True or False 332
Answers 333
Chapter 1

MATRICES

1.1 MATRICES

The English mathematician A. Cayley developed the theory of matrices in the year 1858. The meaning of the word "matrix" is "that within which something originates".

Definition

An array of mn numbers arranged in m rows (horizontal) and n columns (vertical), such as

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix}

is called a matrix A of order m × n, or simply a rectangular matrix. When m = n, the matrix is square and is called a matrix of order n, or an n-square matrix.

A matrix is generally denoted by a capital letter such as A, or alternatively by [a_ij]_{m×n}, where the subscript m × n specifies its order. A square matrix A of order n is denoted by A = [a_ij]_{n×n}. The number which appears at the intersection of the ith row and the jth column is referred to as the (i, j)th entry of the matrix A and is denoted by a_ij.

The entries a_ij of a matrix A = [a_ij] may be real numbers or complex numbers. If all the entries are real, the matrix A is called a real matrix; otherwise it is called a complex matrix.

If A = [a_ij] is a square matrix of order n, then the entries (a_11, a_22, ..., a_nn) are called the diagonal entries of A.
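As a computational aside (the book itself contains no code), the definitions above can be illustrated with NumPy, where a matrix is a two-dimensional array. Note that NumPy indexes rows and columns from 0, so the (i, j)th entry a_ij of the text corresponds to A[i-1, j-1]:

```python
import numpy as np

# A 2 x 3 (rectangular) matrix: m = 2 rows, n = 3 columns
A = np.array([[1, 2, 3],
              [4, 5, 6]])

m, n = A.shape        # order of the matrix
a12 = A[0, 1]         # the (1, 2)th entry a_12, using 0-based indexing

# Diagonal entries of a square matrix of order 3
B = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])
diag = np.diagonal(B)  # (a_11, a_22, a_33)
```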
1.2 SPECIAL FORMS OF MATRICES

1.2.1 Column Matrix (Column Vector)
A matrix with only one column (m × 1) is called a column matrix or a column vector.

1.2.2 Row Matrix (Row Vector)
A matrix with only one row (1 × n) is called a row matrix or a row vector.

1.2.3 Zero or Null Matrix
A matrix A = [a_ij] is said to be a zero matrix if all its elements are zero, i.e., a_ij = 0 ∀ i, j. We denote a zero matrix by O.

1.2.4 Diagonal Matrix
A square matrix A = [a_ij] is said to be a diagonal matrix if a_ij = 0 for i ≠ j. It is of the form

A = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ 0 & a_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}

If A = [a_ij] is a square matrix of order n, then the elements a_11, a_22, ..., a_nn are called the diagonal elements of A. Thus a square matrix is a diagonal matrix if all elements except those on the main diagonal are zero. A diagonal matrix A = [a_ij] of order n is sometimes written as A = diag [a_11, a_22, ..., a_nn].

1.2.5 Scalar Matrix
A diagonal matrix in which all the diagonal elements are equal is called a scalar matrix. For example,

\begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix}, \quad \begin{bmatrix} 5 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 5 \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, etc.

1.2.6 Unit Matrix or Identity Matrix
A scalar matrix in which all the diagonal elements are 1 is called a unit matrix. Thus a square matrix A = [a_ij] of order n is a unit matrix if

a_{ij} = \begin{cases} 1, & \text{if } i = j \\ 0, & \text{if } i \neq j \end{cases}

The unit matrix of order n is denoted by I_n. For example,

I_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}
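Each of these special forms has a one-line NumPy constructor; the following sketch (an illustrative aside, not part of the original text) builds the zero, diagonal, scalar and identity matrices described above:

```python
import numpy as np

zero = np.zeros((2, 3))    # zero (null) matrix O of order 2 x 3
D = np.diag([1, 2, 3])     # diagonal matrix diag[1, 2, 3]
S = 5 * np.eye(3)          # scalar matrix: every diagonal entry equal to 5
I3 = np.eye(3)             # unit (identity) matrix I_3
```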
1.2.7 Upper Triangular Matrix
A square matrix A is called an upper triangular matrix if all the entries below the diagonal are zero.
Note: This does not exclude the possibility of zeros occurring among the other entries.

For example,

U = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ 0 & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & a_{nn} \end{bmatrix}

1.2.8 Lower Triangular Matrix
A square matrix A is called a lower triangular matrix if all the entries above the diagonal are zero.

L = \begin{bmatrix} a_{11} & 0 & \cdots & 0 \\ a_{21} & a_{22} & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix}

1.3 OPERATIONS ON MATRICES

1.3.1 Equality of Matrices
Two matrices A = [a_ij] and B = [b_ij] are said to be equal if
(i) they are of the same order, and
(ii) the corresponding elements of A and B are equal, i.e., a_ij = b_ij ∀ i, j.

For example,

A = \begin{bmatrix} 2 & 3 \\ 1 & 1 \end{bmatrix} and B = \begin{bmatrix} 2 & 3 \\ 4 & 4 \end{bmatrix}

are not equal because a_21 = 1 ≠ 4 = b_21. Similarly,

\begin{bmatrix} a & b \\ c & d \end{bmatrix} = \begin{bmatrix} 1 & 2 \\ 4 & 3 \end{bmatrix}

if and only if a = 1, b = 2, c = 4, d = 3.

1.3.2 Negative of a Matrix
The negative of a matrix A = [a_ij]_{m×n} is defined by −A = [−a_ij]_{m×n} and is obtained by changing the sign of each element a_ij of A.

1.3.3 Addition and Subtraction
If A = [a_ij] and B = [b_ij] are two matrices of the same size, then the sum A + B is the matrix obtained by adding the corresponding elements of the two matrices:

A + B = [a_{ij} + b_{ij}]
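Entrywise addition, subtraction and negation carry over directly to NumPy arrays; the following sketch (an illustrative aside with made-up example matrices) mirrors the definitions above:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])

S = A + B   # sum: the matrix [a_ij + b_ij]
D = A - B   # difference: [a_ij - b_ij]
N = -A      # negative of A: sign of every entry changed
```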
2A = \begin{bmatrix} 4 & 6 & 8 \\ 2 & 6 & 2 \end{bmatrix}, \qquad \frac{1}{3}B = \begin{bmatrix} 1/3 & 2/3 & 1 \\ 1 & 0 & 2/3 \end{bmatrix}

P.1: Properties of Matrix Addition and Scalar Multiplication

Suppose A, B and C are matrices of the same size and α, β are scalars. Then:
(i) A + B = B + A (matrix addition is commutative).
(ii) (A + B) + C = A + (B + C) (matrix addition is associative).
(iii) A + O = O + A = A (existence of additive identity).
(iv) A + (−A) = (−A) + A = O (existence of additive inverse).
(v) α(A + B) = αA + αB.
(vi) (α + β)A = αA + βA.
(vii) (αβ)A = α(βA).

1.3.5 Multiplication of Matrices

Definition

If A is an m × n matrix and B is an n × p matrix, then the product AB is the m × p matrix whose (i, j) entry [AB]_ij is obtained by multiplying the elements of the ith row of A with the corresponding elements of the jth column of B and summing the products so obtained:

AB = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & & \vdots \\ a_{i1} & a_{i2} & \cdots & a_{in} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \begin{bmatrix} b_{11} & \cdots & b_{1j} & \cdots & b_{1p} \\ b_{21} & \cdots & b_{2j} & \cdots & b_{2p} \\ \vdots & & \vdots & & \vdots \\ b_{n1} & \cdots & b_{nj} & \cdots & b_{np} \end{bmatrix}

The entry (AB)_ij in row i and column j of AB is given by

(AB)_{ij} = a_{i1}b_{1j} + a_{i2}b_{2j} + \cdots + a_{in}b_{nj},

that is,

[AB]_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}, \qquad i = 1, \ldots, m; \; j = 1, \ldots, p.

The number of rows of AB = the number of rows of A.
The number of columns of AB = the number of columns of B.
A_{m×n} · B_{n×p} = (AB)_{m×p}

Note: If the number of columns in A ≠ the number of rows in B, then AB is not defined.

Example: Consider the matrices

A = \begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & 4 \end{bmatrix}, \qquad B = \begin{bmatrix} 3 & -1 & 3 & 1 \\ 1 & 1 & 4 & 3 \\ 0 & 2 & 5 & 4 \end{bmatrix}

Since A is a 2 × 3 matrix and B is a 3 × 4 matrix, the product AB is a 2 × 4 matrix.

[AB]_11 = (1·3) + (2·1) + (1·0) = 5          [AB]_21 = (0·3) + (2·1) + (4·0) = 2
[AB]_12 = (1·(−1)) + (2·1) + (1·2) = 3       [AB]_22 = (0·(−1)) + (2·1) + (4·2) = 10
[AB]_13 = (1·3) + (2·4) + (1·5) = 16         [AB]_23 = (0·3) + (2·4) + (4·5) = 28
[AB]_14 = (1·1) + (2·3) + (1·4) = 11         [AB]_24 = (0·1) + (2·3) + (4·4) = 22

AB = \begin{bmatrix} 5 & 3 & 16 & 11 \\ 2 & 10 & 28 & 22 \end{bmatrix}

Matrix Multiplication by Columns and by Rows

Sometimes it may be desirable to find a particular row or column of a matrix product AB without computing the entire product:

jth column matrix of AB = A [jth column matrix of B],
ith row matrix of AB = [ith row matrix of A] B.

With A and B as above, the second column matrix of AB can be obtained by

\begin{bmatrix} 1 & 2 & 1 \\ 0 & 2 & 4 \end{bmatrix} \begin{bmatrix} -1 \\ 1 \\ 2 \end{bmatrix} = \begin{bmatrix} 3 \\ 10 \end{bmatrix}

(second column of B on the left, second column of AB on the right), and the first row matrix of AB can be obtained by

\begin{bmatrix} 1 & 2 & 1 \end{bmatrix} \begin{bmatrix} 3 & -1 & 3 & 1 \\ 1 & 1 & 4 & 3 \\ 0 & 2 & 5 & 4 \end{bmatrix} = \begin{bmatrix} 5 & 3 & 16 & 11 \end{bmatrix}

(first row of A on the left, first row of AB on the right).
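The worked example above can be checked mechanically; this NumPy sketch (an illustrative aside) computes the full product with the `@` operator and then extracts a single column and row of AB without forming the whole product, exactly as described:

```python
import numpy as np

A = np.array([[1, 2, 1],
              [0, 2, 4]])
B = np.array([[3, -1, 3, 1],
              [1,  1, 4, 3],
              [0,  2, 5, 4]])

AB = A @ B            # (2 x 3)(3 x 4) gives the 2 x 4 product

col2 = A @ B[:, 1]    # second column of AB: A times the second column of B
row1 = A[0, :] @ B    # first row of AB: the first row of A times B
```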
Matrix Product as Linear Combinations

Row and column matrices provide another way of thinking about matrix multiplication. Consider

A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{bmatrix} \quad \text{and} \quad X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}

Then

AX = \begin{bmatrix} a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n \\ a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n \\ \vdots \\ a_{m1}x_1 + a_{m2}x_2 + \cdots + a_{mn}x_n \end{bmatrix} = x_1 \begin{bmatrix} a_{11} \\ a_{21} \\ \vdots \\ a_{m1} \end{bmatrix} + x_2 \begin{bmatrix} a_{12} \\ a_{22} \\ \vdots \\ a_{m2} \end{bmatrix} + \cdots + x_n \begin{bmatrix} a_{1n} \\ a_{2n} \\ \vdots \\ a_{mn} \end{bmatrix}

The product AX of a matrix A with a column matrix X is a linear combination of the column matrices of A, with the coefficients coming from the matrix X.

Example: The matrix product

\begin{bmatrix} 2 & 1 & 1 \\ 1 & 3 & 4 \\ 0 & 2 & 1 \end{bmatrix} \begin{bmatrix} 1 \\ -1 \\ 3 \end{bmatrix} = \begin{bmatrix} 4 \\ 10 \\ 1 \end{bmatrix}

can be written as the linear combination of column matrices

1 \begin{bmatrix} 2 \\ 1 \\ 0 \end{bmatrix} - 1 \begin{bmatrix} 1 \\ 3 \\ 2 \end{bmatrix} + 3 \begin{bmatrix} 1 \\ 4 \\ 1 \end{bmatrix} = \begin{bmatrix} 4 \\ 10 \\ 1 \end{bmatrix}

The matrix product

\begin{bmatrix} 1 & -1 & 3 \end{bmatrix} \begin{bmatrix} 1 & 0 & 2 \\ 2 & 1 & 3 \\ 3 & 1 & -1 \end{bmatrix} = \begin{bmatrix} 8 & 2 & -4 \end{bmatrix}

can be written as the linear combination of row matrices

1 [1 0 2] − 1 [2 1 3] + 3 [3 1 −1] = [8 2 −4]
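This "combination of columns" view is easy to verify numerically; the sketch below (an illustrative aside) checks that the ordinary product AX equals the linear combination of the columns of A weighted by the entries of X, using the example just given:

```python
import numpy as np

A = np.array([[2, 1, 1],
              [1, 3, 4],
              [0, 2, 1]])
x = np.array([1, -1, 3])

Ax = A @ x                                    # ordinary matrix-vector product
combo = 1*A[:, 0] - 1*A[:, 1] + 3*A[:, 2]     # the same vector, built column by column
```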
Example: Columns of a product AB as linear combinations

AB = \begin{bmatrix} 1 & 2 & 4 \\ 2 & 6 & 0 \end{bmatrix} \begin{bmatrix} 4 & 1 & 4 & 3 \\ 0 & -1 & 3 & 1 \\ 2 & 7 & 5 & 2 \end{bmatrix} = \begin{bmatrix} 12 & 27 & 30 & 13 \\ 8 & -4 & 26 & 12 \end{bmatrix}

The column matrices of AB can be expressed as linear combinations of the column matrices of A as follows:

\begin{bmatrix} 12 \\ 8 \end{bmatrix} = 4\begin{bmatrix} 1 \\ 2 \end{bmatrix} + 0\begin{bmatrix} 2 \\ 6 \end{bmatrix} + 2\begin{bmatrix} 4 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} 27 \\ -4 \end{bmatrix} = 1\begin{bmatrix} 1 \\ 2 \end{bmatrix} - \begin{bmatrix} 2 \\ 6 \end{bmatrix} + 7\begin{bmatrix} 4 \\ 0 \end{bmatrix},

\begin{bmatrix} 30 \\ 26 \end{bmatrix} = 4\begin{bmatrix} 1 \\ 2 \end{bmatrix} + 3\begin{bmatrix} 2 \\ 6 \end{bmatrix} + 5\begin{bmatrix} 4 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} 13 \\ 12 \end{bmatrix} = 3\begin{bmatrix} 1 \\ 2 \end{bmatrix} + \begin{bmatrix} 2 \\ 6 \end{bmatrix} + 2\begin{bmatrix} 4 \\ 0 \end{bmatrix}.

P.2: Properties of Matrix Multiplication

I: Matrix multiplication is not commutative in general (AB and BA need not be equal).

Let A = \begin{bmatrix} 1 & 2 \\ 3 & 0 \end{bmatrix} and B = \begin{bmatrix} -1 & 0 \\ 2 & 3 \end{bmatrix}. Then

AB = \begin{bmatrix} 3 & 6 \\ -3 & 0 \end{bmatrix}, \qquad BA = \begin{bmatrix} -1 & -2 \\ 11 & 4 \end{bmatrix}

⇒ AB ≠ BA.

On the other hand, let A = \begin{bmatrix} 2 & 1 \\ -1 & 2 \end{bmatrix} and B = \begin{bmatrix} 4 & 1 \\ -1 & 4 \end{bmatrix}. Then

AB = \begin{bmatrix} 7 & 6 \\ -6 & 7 \end{bmatrix} = BA

⇒ AB = BA.

II: If AB = 0, it does not follow that either A or B is a null matrix. Let

A = \begin{bmatrix} 1 & 1 & 2 \\ 2 & 2 & 4 \\ 1 & 1 & 2 \end{bmatrix},
B = \begin{bmatrix} -1 & 1 & 1 \\ 1 & -1 & -7 \\ 0 & 0 & 3 \end{bmatrix}, \qquad AB = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}.

III: The cancellation law does not hold in general. The relation AB = AC or BA = CA does not imply that B = C.

IV: Matrix multiplication is associative, that is, (AB)C = A(BC).

V: The multiplication of matrices is distributive with respect to addition, that is, A(B + C) = AB + AC and (B + C)A = BA + CA.

EXAMPLE: The product of two upper (lower) triangular matrices is an upper (lower) triangular matrix.

SOLUTION: Let A = [a_ij]_{n×n} and B = [b_jk]_{n×n} be two upper triangular matrices, so that a_ij = 0 when i > j and b_jk = 0 when j > k. Let C = AB = [c_ik]_{n×n}, where

c_{ik} = \sum_{j=1}^{n} a_{ij} b_{jk} = a_{i1}b_{1k} + a_{i2}b_{2k} + \cdots + a_{in}b_{nk}.

For i > k, every term of this sum vanishes: if j < i then a_ij = 0, and if j ≥ i > k then b_jk = 0. Hence c_ik = 0 for i > k, and AB is an upper triangular matrix.

1.4 TRACE OF A MATRIX

Let A be a square matrix of order n. The sum of the diagonal elements of A is called the trace of A and is denoted by tr A:

\operatorname{tr} A = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \cdots + a_{nn}.

(i) If A and B are of the same order, then tr(A + B) = tr(A) + tr(B).
(ii) If A is of order m × n and B is of order n × m, then tr(AB) = tr(BA).
(iii) The trace of a matrix is undefined if the matrix is not square.

Example: The trace of A = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 1 & 4 \\ 1 & 3 & 2 \end{bmatrix} is tr A = 1 + 1 + 2 = 4.
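Properties II and the trace identities above are easy to confirm numerically; this sketch (an illustrative aside) uses the book's AB = 0 example and checks tr(A + B) = tr A + tr B and tr(AB) = tr(BA):

```python
import numpy as np

# Two non-zero matrices whose product is the zero matrix
A = np.array([[1, 1, 2],
              [2, 2, 4],
              [1, 1, 2]])
B = np.array([[-1,  1,  1],
              [ 1, -1, -7],
              [ 0,  0,  3]])
AB = A @ B                  # every entry is 0, although A != 0 and B != 0

# Trace identities
t_sum = np.trace(A + B)
t_ab, t_ba = np.trace(A @ B), np.trace(B @ A)
```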
A^3 = \begin{bmatrix} 1 & 0 \\ 15 & 16 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 63 & 64 \end{bmatrix}, \qquad A^4 = \begin{bmatrix} 1 & 0 \\ 63 & 64 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 3 & 4 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 255 & 256 \end{bmatrix}.

1.7 SPECIAL MATRICES

1.7.1 Symmetric Matrix
A square matrix A is called symmetric if A = A^T, i.e., a_ij = a_ji. In a symmetric matrix, all the elements placed symmetrically about the main diagonal are equal.

Example: (i) \begin{bmatrix} a & h & g \\ h & b & f \\ g & f & c \end{bmatrix} \qquad (ii) \begin{bmatrix} 1 & 4 & 5 \\ 4 & -3 & 0 \\ 5 & 0 & 7 \end{bmatrix}

Theorem 1.1: If A and B are symmetric matrices of the same size and k is any scalar, then:
(a) A^T is symmetric;
(b) A + B and A − B are symmetric;
(c) kA is symmetric.

Remark: The product of two symmetric matrices is not symmetric in general. Let A and B be symmetric matrices of the same size. Then

(AB)^T = B^T A^T = BA \qquad \{since A^T = A and B^T = B\}

Since in general AB ≠ BA, the product AB will not usually be symmetric. However, in the special case where AB = BA, the product AB is symmetric: the product of two symmetric matrices is symmetric if and only if the matrices commute.

1.7.2 Skew-Symmetric Matrices
A square matrix A = [a_ij] is said to be skew-symmetric if A = −A^T, i.e., a_ij = −a_ji ∀ i, j.

Example: \begin{bmatrix} 0 & h & g \\ -h & 0 & f \\ -g & -f & 0 \end{bmatrix}

Remark 1: All the diagonal elements of a skew-symmetric matrix must be zero, i.e., a_ii = 0. Let A = [a_ij]_{n×n} be a skew-symmetric matrix; then a_ij = −a_ji. Putting i = j, we get a_ii = −a_ii
⇒ 2a_ii = 0 ⇒ a_ii = 0.

Remark 2: In a skew-symmetric matrix, the elements placed symmetrically about the main diagonal differ by a factor of −1.

P.4: Some Properties of Symmetric and Skew-Symmetric Matrices

(i) If A is a symmetric matrix, then all positive integral powers of A are symmetric, for (A^n)^T = (A^T)^n = A^n. If A is a skew-symmetric matrix, then all positive odd integral powers of A are skew-symmetric, for (A^n)^T = (A^T)^n = (−A)^n = −A^n (since n is odd).

(ii) If A and B are both symmetric matrices of the same order, then AB is symmetric iff AB = BA. If A and B are both skew-symmetric matrices of the same order, then AB is skew-symmetric iff AB + BA = 0, for

(AB)^T = B^T A^T = (−B)(−A) = BA, which equals −AB exactly when AB + BA = 0.

(iii) If A and B are symmetric matrices of the same order, then AB + BA must be a symmetric matrix, for

(AB + BA)^T = (AB)^T + (BA)^T = B^T A^T + A^T B^T = BA + AB = AB + BA.

If A and B are skew-symmetric matrices of the same order, then AB − BA must be a skew-symmetric matrix, for

(AB − BA)^T = (AB)^T − (BA)^T = B^T A^T − A^T B^T = (−B)(−A) − (−A)(−B) = BA − AB = −(AB − BA).

(iv) If A is any square matrix, then A + A^T is always symmetric, for

(A + A^T)^T = A^T + (A^T)^T = A^T + A = A + A^T.

If A is any square matrix, then A − A^T is always skew-symmetric, for

(A − A^T)^T = A^T − (A^T)^T = A^T − A = −(A − A^T).

(v) Any square matrix A is uniquely expressed as the sum of a symmetric matrix and a skew-symmetric matrix:

A = \underbrace{\tfrac{1}{2}(A + A^T)}_{\text{symmetric}} + \underbrace{\tfrac{1}{2}(A - A^T)}_{\text{skew-symmetric}}
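The decomposition in (v) can be computed directly; the sketch below (an illustrative aside, with a made-up matrix A) splits A into its symmetric and skew-symmetric parts and checks the properties just proved, including the zero diagonal of the skew-symmetric part:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])

S = (A + A.T) / 2   # symmetric part:       S^T = S
K = (A - A.T) / 2   # skew-symmetric part:  K^T = -K, zero diagonal
```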
A = \begin{bmatrix} 1 & 1-4i \\ 1+4i & 2 \end{bmatrix} is a Hermitian matrix, and

B = \begin{bmatrix} 0 & 2-i \\ -2-i & 0 \end{bmatrix} is a skew-Hermitian matrix.

P.5: Properties of Hermitian and Skew-Hermitian Matrices

(i) In a Hermitian matrix, the elements on the principal diagonal are all real numbers, i.e., \bar{a}_{ii} = a_{ii}: since \bar{a}_{ij} = a_{ji}, putting j = i gives \bar{a}_{ii} = a_{ii}. In a skew-Hermitian matrix, the elements on the principal diagonal must be purely imaginary or zero: since \bar{a}_{ij} = -a_{ji}, putting j = i gives \bar{a}_{ii} = -a_{ii} ⇒ \bar{a}_{ii} + a_{ii} = 0 ⇒ the real part of a_ii is 0.

(ii) If A is Hermitian {skew-Hermitian}, then kA is also Hermitian {skew-Hermitian} for k ∈ R.

(iii) If A and B are Hermitian matrices of the same order, then AB is Hermitian iff AB = BA, for

(AB)^θ = B^θ A^θ = BA = AB.

(iv) If A and B are Hermitian matrices of the same order, then AB + BA is also a Hermitian matrix, for

(AB + BA)^θ = (AB)^θ + (BA)^θ = B^θ A^θ + A^θ B^θ = BA + AB = AB + BA.

If A and B are skew-Hermitian matrices of the same order, then AB − BA is also a skew-Hermitian matrix, for

(AB − BA)^θ = (AB)^θ − (BA)^θ = B^θ A^θ − A^θ B^θ = (−B)(−A) − (−A)(−B) = BA − AB = −(AB − BA).

(v) If A is Hermitian (skew-Hermitian), then \bar{A} is also a Hermitian (skew-Hermitian) matrix.

(vi) If A is any Hermitian matrix, then all positive integral powers of A are Hermitian matrices.

(vii) If A is Hermitian, then iA is a skew-Hermitian matrix: (iA)^θ = \bar{i}\,A^θ = −iA = −(iA). If A is skew-Hermitian, then iA is a Hermitian matrix: (iA)^θ = \bar{i}\,A^θ = −i(−A) = iA.
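These Hermitian properties can be checked with complex NumPy arrays; in the sketch below (an illustrative aside), the helper `ctrans` stands for the conjugate transpose written A^θ in the text, and the two example matrices above are verified, along with property (vii) that iA is skew-Hermitian when A is Hermitian:

```python
import numpy as np

def ctrans(M):
    """Conjugate transpose of M (written A^theta in the text)."""
    return M.conj().T

A = np.array([[1,      1 - 4j],
              [1 + 4j, 2     ]])   # Hermitian: ctrans(A) == A
B = np.array([[0,       2 - 1j],
              [-2 - 1j, 0     ]])  # skew-Hermitian: ctrans(B) == -B
```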
AA^T = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} = I.

Hence A is an orthogonal matrix.

P.6: Properties of Orthogonal Matrices

(i) The transpose of an orthogonal matrix is orthogonal: if AA^T = A^TA = I, then A^T(A^T)^T = A^TA = I and (A^T)^T A^T = AA^T = I, so A^T is orthogonal.

(ii) The product of two orthogonal matrices is an orthogonal matrix, i.e., if A and B are orthogonal matrices, then AB and BA are orthogonal matrices, for

(AB)(AB)^T = (AB)(B^TA^T) = A(BB^T)A^T = AIA^T = AA^T = I.

Similarly,

(BA)(BA)^T = (BA)(A^TB^T) = B(AA^T)B^T = BIB^T = BB^T = I.

1.7.5 Idempotent Matrix
A square matrix A is said to be an idempotent matrix if A² = A.

Example: Let A = \begin{bmatrix} 2 & -2 & -4 \\ -1 & 3 & 4 \\ 1 & -2 & -3 \end{bmatrix}. Then

A^2 = \begin{bmatrix} 2 & -2 & -4 \\ -1 & 3 & 4 \\ 1 & -2 & -3 \end{bmatrix} \begin{bmatrix} 2 & -2 & -4 \\ -1 & 3 & 4 \\ 1 & -2 & -3 \end{bmatrix} = \begin{bmatrix} 2 & -2 & -4 \\ -1 & 3 & 4 \\ 1 & -2 & -3 \end{bmatrix} = A.

P.7: Properties of Idempotent Matrices

(i) If A and B are idempotent matrices of the same order, then AB is idempotent iff AB = BA, for

(AB)² = (AB)(AB) = A(BA)B = A(AB)B = A²B² = AB.

(ii) If A and B are idempotent matrices of the same order, then A + B is idempotent iff AB = BA = 0, for

(A + B)² = A² + AB + BA + B² = A² + B² = A + B.
(iii) If A is an idempotent matrix and A + B = I, then B is idempotent and AB = BA = 0, for

B = I − A ⇒ B² = (I − A)² = I − A − A + A² = I − A = B,

and

AB = A(I − A) = A − A² = A − A = 0, \qquad BA = (I − A)A = A − A² = A − A = 0.

(iv) If AB = A and BA = B, then A and B are idempotent matrices, for A² = (AB)A = A(BA) = AB = A, and similarly B² = (BA)B = B(AB) = BA = B.

1.7.6 Involutory Matrix
A matrix A such that A² = I is called an involutory matrix.

Example: Let A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}. Then

A^2 = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I.

(i) The identity matrix is always an involutory matrix.
(ii) A is involutory iff (A − I)(A + I) = 0, since (A − I)(A + I) = A² − I.

1.7.7 Periodic Matrix
A matrix A is called a periodic matrix if A^{n+1} = A, where n is a positive integer. If n is the least positive integer for which A^{n+1} = A, then n is called the period of A.

1.7.8 Nilpotent Matrix
A square matrix A is called a nilpotent matrix if A^n = 0, where n is a positive integer. If n is the least positive integer for which A^n = 0, then n is called the index of the nilpotent matrix.

Example:

A = \begin{bmatrix} 1 & 1 & 3 \\ 5 & 2 & 6 \\ -2 & -1 & -3 \end{bmatrix}, \qquad A^2 = \begin{bmatrix} 0 & 0 & 0 \\ 3 & 3 & 9 \\ -1 & -1 & -3 \end{bmatrix}
    A^3 = A^2 · A = [0 0 0; 0 0 0; 0 0 0]
⇒  A^3 = 0, i.e., A is a nilpotent matrix of index 3.

1.8 ELEMENTARY ROW OPERATIONS
An elementary row operation is an operation of any one of the following three types:
Type I: The interchange of two rows.
Type II: The multiplication of a row by a non-zero number.
Type III: The addition of a multiple of one row to another row.
Consider the following matrices:
    A = [1 2 3; 2 1 1]      B = [2 1 1; 1 2 3]
    C = [1 2 3; 4 2 2]      D = [7 5 6; 2 1 1]
Matrix B can be obtained from matrix A by interchanging the first and the second rows. This is a type-I row operation and is denoted by R1 ↔ R2.
Matrix C is obtained from matrix A by multiplying the 2nd row by 2. This is a type-II row operation and is denoted by R2 → 2R2.
Matrix D is obtained from A by adding thrice the 2nd row to the 1st row of A. This is a type-III row operation and is denoted by R1 → R1 + 3R2.

1.9 EQUIVALENT MATRICES
If a matrix B is obtained from a matrix A by a finite chain of elementary row operations, we say that A is equivalent to B and write A ∼ B.
(i) Every matrix is equivalent to itself, i.e., A ∼ A.
(ii) If A ∼ B, then B ∼ A.
(iii) If A ∼ B and B ∼ C, then A ∼ C.
Example: The matrices A = [1 4 7; 2 5 8; 3 6 9] and B = [1 4 7; 0 −3 −6; 0 0 0] are equivalent.
    A = [1 4 7; 2 5 8; 3 6 9]
R2 → R2 − 2R1
    ∼ [1 4 7; 0 −3 −6; 3 6 9]
R3 → R3 − 3R1
    ∼ [1 4 7; 0 −3 −6; 0 −6 −12]
R3 → R3 − 2R2
    ∼ [1 4 7; 0 −3 −6; 0 0 0] = B
Hence, A is equivalent to B.

1.10 ROW REDUCTION AND ECHELON FORMS
A rectangular matrix is in Echelon form (or Row Echelon form) if it has the following properties:
(i) All non-zero rows are above any rows of all zeroes.
(ii) Each leading entry of a row is in a column to the right of the leading entry of the row above it.
(iii) All entries in a column below a leading entry are zeroes.
If a matrix in Echelon form satisfies the following additional conditions, then it is in Reduced Echelon form (or Reduced Row Echelon form):
(iv) The leading entry in each non-zero row is 1.
(v) Each leading 1 is the only non-zero entry in its column.
    [2 −3 2 1; 0 1 −4 8; 0 0 0 5/2]   → Echelon form
    [1 0 0 29; 0 1 0 16; 0 0 1 3]     → Reduced Echelon form
Note: Any non-zero matrix may be row-reduced into more than one matrix in Echelon form, using different sequences of row operations. However, the reduced Echelon form one obtains from a matrix is unique.
Theorem 1.2: (Uniqueness of the reduced Echelon form) Each matrix is row equivalent to one and only one reduced Echelon matrix.
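The examples of Sections 1.7.5 and 1.7.8 above can be verified numerically; a minimal sketch using a hand-rolled `matmul` helper (the helper and the names `IDEM`, `NIL` are ours, not part of the text):

```python
def matmul(A, B):
    # multiply two matrices stored as lists of rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# idempotent example from Section 1.7.5: squaring gives the matrix back
IDEM = [[2, -2, -4],
        [-1, 3, 4],
        [1, -2, -3]]
print(matmul(IDEM, IDEM) == IDEM)  # True

# nilpotent example from Section 1.7.8: A^2 != 0 but A^3 = 0, so index 3
NIL = [[1, 1, 3],
       [5, 2, 6],
       [-2, -1, -3]]
zero = [[0] * 3 for _ in range(3)]
NIL2 = matmul(NIL, NIL)
NIL3 = matmul(NIL2, NIL)
print(NIL2 != zero, NIL3 == zero)  # True True
```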
1.11 PIVOT POSITIONS
Row operations do not change the positions of the leading entries. Since the reduced Echelon form of a matrix is unique, the leading entries are always in the same positions in any Echelon form obtained from a given matrix. These leading entries correspond to the leading 1's in the reduced Echelon form.

Definition
A pivot position in a matrix A is a location in A that corresponds to a leading 1 in the reduced Echelon form of A. A pivot column is a column of A that contains a pivot position.

EXAMPLE 1: A = [1 2 3 4; 4 5 6 7; 6 7 8 9].
SOLUTION:
    A = [1 2 3 4; 4 5 6 7; 6 7 8 9]   (the entry 1 in row 1 is the pivot; column 1 is the pivot column)
Create zeroes below the pivot 1.
R2 → R2 − 4R1
R3 → R3 − 6R1
    [1 2 3 4; 0 −3 −6 −9; 0 −5 −10 −15]   (the next pivot is −3 in row 2; column 2 is the next pivot column)
R3 → R3 − (5/3)R2
    [1 2 3 4; 0 −3 −6 −9; 0 0 0 0]
The pivot positions are the (1, 1) and (2, 2) positions:
    A = [1 2 3 4; 4 5 6 7; 6 7 8 9]   (columns 1 and 2 are the pivot columns)
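The reductions above, and all others in this chapter, are built from the three operation types of Section 1.8. These map directly onto small helper functions; a sketch (the function names are ours, and rows are 0-indexed in the code):

```python
def swap(M, i, j):
    # Type I: interchange rows i and j
    M = [row[:] for row in M]
    M[i], M[j] = M[j], M[i]
    return M

def scale(M, i, k):
    # Type II: multiply row i by the non-zero number k
    return [[k * x for x in row] if r == i else row[:]
            for r, row in enumerate(M)]

def add_multiple(M, i, j, k):
    # Type III: add k times row j to row i
    return [[x + k * y for x, y in zip(row, M[j])] if r == i else row[:]
            for r, row in enumerate(M)]

A = [[1, 2, 3], [2, 1, 1]]
print(swap(A, 0, 1))             # the matrix B of Section 1.8
print(scale(A, 1, 2))            # the matrix C
print(add_multiple(A, 0, 1, 3))  # the matrix D
```

Each helper returns a fresh matrix, so the original A is never modified; this mirrors the fact that row operations are reversible.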
EXAMPLE 2:
    A = [0 −3 −6 4 9; −1 −2 −1 3 1; −2 −3 0 3 −1; 1 4 5 −9 −7]
SOLUTION: The top of the leftmost non-zero column is the first pivot position. A non-zero entry, or pivot, must be placed in this position.
So, interchange R1 and R4 (R1 ↔ R4):
    ∼ [1 4 5 −9 −7; −1 −2 −1 3 1; −2 −3 0 3 −1; 0 −3 −6 4 9]   (pivot 1; column 1 is the pivot column)
Create zeroes below the pivot:
R2 → R2 + R1
R3 → R3 + 2R1
    ∼ [1 4 5 −9 −7; 0 2 4 −6 −6; 0 5 10 −15 −15; 0 −3 −6 4 9]   (next pivot 2; column 2 is the next pivot column)
R3 → R3 − (5/2)R2
R4 → R4 + (3/2)R2
    ∼ [1 4 5 −9 −7; 0 2 4 −6 −6; 0 0 0 0 0; 0 0 0 −5 0]
There is no leading entry in row 3, as all its entries are zero. So apply R3 ↔ R4:
    ∼ [1 4 5 −9 −7; 0 2 4 −6 −6; 0 0 0 −5 0; 0 0 0 0 0]   (pivot −5; column 4 is a pivot column)
Hence columns 1, 2 and 4 of matrix A are pivot columns:
    A = [0 −3 −6 4 9; −1 −2 −1 3 1; −2 −3 0 3 −1; 1 4 5 −9 −7]
with the pivot positions at (1, 1), (2, 2) and (3, 4), and pivot columns 1, 2 and 4.

1.12 THE ROW REDUCTION ALGORITHM
The row reduction algorithm consists of four steps, and it produces a matrix in Echelon form. A fifth step produces a matrix in reduced Echelon form.
EXAMPLE 1: Transform the following matrix first into Echelon form and then into reduced Echelon form:
    [0 3 5 7; 5 5 7 9; 5 7 9 1]
SOLUTION: Step 1: Begin with the leftmost non-zero column. This is a pivot column:
    [0 3 5 7; 5 5 7 9; 5 7 9 1]   (column 1 is the pivot column)
Step 2: Interchange rows to make the pivot position non-zero.
R3 ↔ R1
    [5 7 9 1; 5 5 7 9; 0 3 5 7]   (pivot 5)
Step 3: Create zeroes below the pivot.
R2 → R2 − R1
    [5 7 9 1; 0 −2 −2 8; 0 3 5 7]
Step 4: Ignore the row containing the pivot position and all rows, if any, above it. Apply steps 1–3 to the submatrix that remains. Repeat the process until there are no more non-zero rows to modify.
    [5 7 9 1; 0 −2 −2 8; 0 3 5 7]   (the submatrix consists of the last two rows; pivot −2 in column 2)
R3 → R3 + (3/2)R2
    [5 7 9 1; 0 −2 −2 8; 0 0 2 19]   (pivot 2)
Hence, we have reached an Echelon form of the full matrix.
If we want the reduced Echelon form, we perform one more step.
Step 5: Create zeroes above each pivot. If a pivot is not 1, make it 1 by a scaling operation. Begin with the rightmost pivot and work upward and to the left.
    [5 7 9 1; 0 −2 −2 8; 0 0 2 19]
The rightmost pivot is in row 3; scale this row, dividing by the pivot:
R3 → R3/2
    [5 7 9 1; 0 −2 −2 8; 0 0 1 19/2]
Create zeroes in column 3 above the pivot:
R1 → R1 − 9R3
R2 → R2 + 2R3
    [5 7 0 −169/2; 0 −2 0 27; 0 0 1 19/2]
The next pivot is in row 2; scale this row, dividing by the pivot (R2 → R2/(−2)):
    [5 7 0 −169/2; 0 1 0 −27/2; 0 0 1 19/2]
Create zeroes in column 2 above the pivot.
R1 → R1 − 7R2
    [5 0 0 10; 0 1 0 −27/2; 0 0 1 19/2]
The next pivot is in row 1; scale this row, dividing by the pivot (R1 → R1/5):
    [1 0 0 2; 0 1 0 −27/2; 0 0 1 19/2]
This is the reduced Echelon form.

1.13 THE INVERSE OF A MATRIX
Definition
Let A be an m × n matrix. An n × m matrix B is called a left inverse of A if BA = In, and an n × m matrix C is called a right inverse of A if AC = Im.
Example:
    A = [1 2 −1; 2 0 1]  (2 × 3)    and    B = [1 −3; −1 5; −2 7]  (3 × 2)
Then
    AB = I2
but
    BA = [−5 2 −4; 9 −2 6; 12 −4 9] ≠ I3
Thus the matrix B is a right inverse but not a left inverse of A, while A is a left inverse but not a right inverse of B.

Lemma
If a square matrix [A]n×n has a left inverse B and a right inverse C, then B = C. {The inverse of a square matrix is unique.}
Thus, for a square matrix, the left inverse and the right inverse are the same. A non-square matrix (m ≠ n) cannot have both a left and a right inverse. That is, a non-square matrix may have only a left inverse or only a right inverse. In that case, the matrix may have many left inverses or many right inverses.
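The five-step procedure of Section 1.12 can be automated; a sketch using exact `Fraction` arithmetic (the function `rref` and its name are ours), applied to the matrix of the worked example:

```python
from fractions import Fraction

def rref(M):
    # reduce M (a list of rows) to reduced row Echelon form, steps 1-5
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # steps 1-2: find a non-zero pivot in column c at or below row r
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        # step 5 (scaling): make the pivot 1
        M[r] = [x / M[r][c] for x in M[r]]
        # steps 3 and 5: create zeroes below and above the pivot
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [x - M[i][c] * y for x, y in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return M

result = rref([[0, 3, 5, 7], [5, 5, 7, 9], [5, 7, 9, 1]])
# agrees with the worked example: [1 0 0 2; 0 1 0 -27/2; 0 0 1 19/2]
```

Using `Fraction` keeps entries such as −27/2 and 19/2 exact; floating-point reduction would only approximate them.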
Example: A = [1 0; 0 1; 0 0] can have more than one left inverse:
    B = [1 0 α; 0 1 β],  α, β ∈ R, is a left inverse of A.

Definition
A square matrix [A]n×n is said to be invertible (or non-singular) if there exists a square matrix B of the same size such that
    AB = I = BA
B is called the inverse of A and is denoted by A−1, i.e., A−1 = B.
→ A matrix is said to be singular if it is not invertible.
Theorem 1.3: Let A = [a b; c d]. If ad − bc ≠ 0, then A is invertible and
    A−1 = (1/(ad − bc)) [d −b; −c a]
If ad − bc = 0, then A is not invertible.
EXAMPLE: Find the inverse of A = [1 2; 3 4].
SOLUTION:
    ad − bc = 4 − 6 = −2 ≠ 0
Hence A is invertible:
    A−1 = −(1/2) [4 −2; −3 1]
    A−1 = [−2 1; 3/2 −1/2]
Theorem 1.4: If A is an invertible matrix, then AT is invertible and the inverse of AT is the transpose of A−1, i.e.,
    (AT)−1 = (A−1)T.
Theorem 1.5: If A and B are invertible matrices of the same size, then AB is invertible and
    (AB)−1 = B−1A−1.
Proof: If we can show that (AB)(B−1A−1) = (B−1A−1)(AB) = I, then we will have shown that the matrix AB is invertible and (AB)−1 = B−1A−1.
But  (AB)(B−1A−1) = A(BB−1)A−1 = AIA−1 = AA−1 = I.
Similarly, (B−1A−1)(AB) = I.
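Theorem 1.3 and the worked example can be checked with exact arithmetic; a sketch (the function name `inv2` is ours):

```python
from fractions import Fraction

def inv2(A):
    # inverse of a 2x2 matrix [[a, b], [c, d]] via Theorem 1.3
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("ad - bc = 0: A is not invertible")
    f = Fraction(1, det)
    return [[f * d, -f * b], [-f * c, f * a]]

Ainv = inv2([[1, 2], [3, 4]])
# agrees with the worked example: [-2 1; 3/2 -1/2]
```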
Note: A product of any number of invertible matrices is invertible, and the inverse of the product is the product of the inverses in the reverse order.
Consider the matrices
    A = [1 2; 1 3]        B = [3 2; 2 2]
    A−1 = [3 −2; −1 1]    B−1 = [1 −1; −1 3/2]
    AB = [7 6; 9 8]       (AB)−1 = [4 −3; −9/2 7/2]
Also
    B−1A−1 = [1 −1; −1 3/2][3 −2; −1 1] = [4 −3; −9/2 7/2]
∴  (AB)−1 = B−1A−1.

1.14 ELEMENTARY MATRICES
A matrix obtained from an identity matrix by a single elementary operation is called an elementary matrix.
Example: [0 1; 1 0] is an elementary matrix because it can be obtained from I2 by interchanging R1 and R2.
[2 0; 0 1] is an elementary matrix obtained from I2 by multiplying R1 by 2.
Since row operations are reversible, elementary matrices are invertible, for if E is produced by a row operation on I, then there is another row operation of the same type that changes E back into I. Hence there is an elementary matrix F s.t. FE = I.
Theorem 1.6: Every elementary matrix is invertible, and the inverse is also an elementary matrix.
Theorem 1.7: (Equivalent statements) Let A be an n × n matrix. The following are equivalent:
(i) A is invertible.
(ii) The reduced row Echelon form of A is In.
(iii) A is expressible as a product of elementary matrices.

1.15 METHOD TO FIND A−1 (ELEMENTARY ROW OPERATIONS)
Let A be the square matrix of order n whose inverse is to be found. Consider the identity
    A = IA,    I = identity matrix of order n
Reduce the matrix A on the L.H.S. to the identity matrix I by applying elementary row operations only. Apply all these operations (in the same order) to the pre-factor I on the R.H.S. of the identity. In this manner, the matrix I reduces to some matrix B s.t. BA = I.
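The reversal rule (AB)−1 = B−1A−1 can be confirmed on the matrices above; a sketch reusing our own `matmul` and `inv2` helpers:

```python
from fractions import Fraction

def matmul(A, B):
    # multiply two matrices stored as lists of rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def inv2(A):
    # 2x2 inverse via Theorem 1.3
    (a, b), (c, d) = A
    det = Fraction(a * d - b * c)
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[1, 2], [1, 3]]
B = [[3, 2], [2, 2]]

AB = matmul(A, B)                  # [[7, 6], [9, 8]], as in the text
lhs = inv2(AB)                     # (AB)^-1
rhs = matmul(inv2(B), inv2(A))     # B^-1 A^-1
print(lhs == rhs)  # True
```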
Definition 2
A real-valued function f : Mn×n(R) → R on n × n square matrices is called a determinant if it satisfies the following three rules:
(i) The value of f at the identity matrix is 1, i.e., f(In) = 1.
(ii) If any two rows are interchanged, then the value of f changes its sign.
(iii) f is linear in each row, i.e.,
    f(αr1 + βR1, r2, ..., rn) = α f(r1, r2, ..., rn) + β f(R1, r2, ..., rn).

1.16.1 Determinant by Cofactor Expansion
Definition 3
If A is a square matrix, then the minor of entry aij is denoted by Mij and is defined to be the determinant of the submatrix that remains after the ith row and jth column are deleted from A. The number (−1)^(i+j) Mij is denoted by Cij and is called the cofactor of entry aij.
EXAMPLE 1: Find the minors and cofactors of
    A = [3 2 −4; 2 5 6; 2 4 8].
SOLUTION: The minor of entry a11 is
    M11 = det[5 6; 4 8] = 16
The cofactor of a11 is
    C11 = (−1)^(1+1) M11 = M11 = 16
Similarly, the minor of entry a32 is
    M32 = det[3 −4; 2 6] = 26
The cofactor of a32 is
    C32 = (−1)^(3+2) M32 = −M32 = −26.
Note: The minor and the cofactor of an entry aij differ only in sign, i.e., Cij = ±Mij.
          = a31C31 + a32C32 + a33C33
          = a13C13 + a23C23 + a33C33
In each equation, the entries and cofactors all come from the same row or column. These equations are called cofactor expansions of det A.
EXAMPLE 3: Evaluate det (A) by cofactor expansion along the first column of A:
    A = [3 1 −4; 2 5 6; 1 4 8]
SOLUTION:
    det (A) = 3 det[5 6; 4 8] − 2 det[1 −4; 4 8] + 1 det[1 −4; 5 6]
            = 3(16) − 2(24) + 1(26)
            = 48 − 48 + 26
    det (A) = 26.
EXAMPLE 4: Smart choice of row or column.
SOLUTION: If A is the 4 × 4 matrix
    A = [1 0 0 −1; 3 1 2 2; 1 0 −2 1; 2 0 0 1]
then to find det (A) it will be easiest to use cofactor expansion along the second column, since it has the most zeroes:
    det A = 1 · det[1 0 −1; 1 −2 1; 2 0 1]
For the 3 × 3 determinant, it will again be easiest to use cofactor expansion along its second column, since it has the most zeroes:
    det (A) = 1 · (−2) det[1 −1; 2 1] = −2(1 + 2) = −6.
Note: In a cofactor expansion, we compute det (A) by multiplying the entries in a row or column by their cofactors and adding the resulting products. It turns out that if one multiplies the entries in any row by the corresponding cofactors from a different row, the sum of these products is always zero. (This result also holds for columns.)
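Cofactor expansion translates directly into a recursive function; a sketch (our own `det`, expanding along the first row), checked against the matrix of Example 3 and the 4 × 4 matrix of Example 4, for which careful expansion gives −6:

```python
def det(M):
    # determinant by cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, entry in enumerate(M[0]):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * entry * det(minor)
    return total

print(det([[3, 1, -4], [2, 5, 6], [1, 4, 8]]))  # 26
print(det([[1, 0, 0, -1], [3, 1, 2, 2],
           [1, 0, -2, 1], [2, 0, 0, 1]]))       # -6
```

Note that this recursion performs n! multiplications for an n × n matrix, which is exactly why the row-reduction method of Section 1.19 is preferred for large matrices.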
[Here A = [3 −3 4; 2 −3 4; 0 −1 1] from the preceding example.]
    |A| = 3(−3 + 4) − 2(−3 + 4) + 0
    |A| = 3 − 2
    |A| = 1
The cofactors of the various entries of |A| are
    [(−3 + 4)    −(2 − 0)    (−2 − 0);
     −(−3 + 4)    (3 − 0)    −(−3 − 0);
     (−12 + 12)  −(12 − 8)   (−9 + 6)]
  = [1 −2 −2; −1 3 3; 0 −4 −3]
∴  adj (A) = [1 −2 −2; −1 3 3; 0 −4 −3]^T = [1 −1 0; −2 3 −4; −2 3 −3]
∴  A−1 = (1/|A|) adj (A) = [1 −1 0; −2 3 −4; −2 3 −3]

EXAMPLE 7: If a matrix A satisfies the relation A^2 + A − I = 0, prove that A−1 exists and that A−1 = I + A, I being an identity matrix.
SOLUTION:
    A^2 + A − I = 0
⇒  A^2 + A = I
⇒  A(A + I) = I
∴  |A| |A + I| = |I| = 1
∴  |A| ≠ 0, and so A−1 exists.
Again,
    A^2 + A = I
Multiplying by A−1, we get
    A−1(A^2 + A) = A−1 · I
⇒  A + I = A−1
⇒  A−1 = A + I.
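Assuming the example matrix is A = [3 −3 4; 2 −3 4; 0 −1 1] (recovered from the cofactor values above), the inverse just computed can be verified by multiplication; `matmul` is our own helper:

```python
def matmul(A, B):
    # multiply two matrices stored as lists of rows
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

# assumed example matrix and the inverse obtained via the adjoint
A = [[3, -3, 4], [2, -3, 4], [0, -1, 1]]
Ainv = [[1, -1, 0], [-2, 3, -4], [-2, 3, -3]]
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

print(matmul(A, Ainv) == I and matmul(Ainv, A) == I)  # True
```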
Since
    A−1 = (1/det (A)) adj (A),
we will prove it by showing that adj (A) is upper triangular, i.e., that the matrix of cofactors is lower triangular:
every cofactor Cij with i < j is zero, since Cij = (−1)^(i+j) Mij and each minor Mij with i < j is zero.

1.18 DETERMINANTS BY ROW REDUCTION
Theorem 1.12: {Fundamental theorem}
Let A be a square matrix. If A has a row of zeroes or a column of zeroes, then det (A) = 0.
Proof: The determinant of A found by cofactor expansion along the row or column of all zeroes is
    det (A) = 0 · C1 + 0 · C2 + ... + 0 · Cn
where C1, C2, ..., Cn are the cofactors for that row or column.
Hence, det (A) = 0.
Theorem 1.13: Let A be a square matrix. Then det (A) = det (AT).
Proof: The determinant of A found by cofactor expansion along its first row is the same as the determinant of AT found by cofactor expansion along its first column.
Theorem 1.14: Let A be an n × n matrix.
(a) If B is the matrix obtained by multiplying a single row or column of A by a scalar k, then det (B) = k det (A).
(b) If two rows or columns of a matrix A are interchanged, then the sign of the determinant changes. [Let B be the matrix obtained by interchanging two rows/columns of A; then det (B) = −det (A).]
(c) The elementary row operation that adds a constant multiple of one row to another row leaves the determinant unchanged.
(d) If A is invertible, then det (A−1) = 1/det (A).
Theorem 1.15: Let E be an n × n elementary matrix.
(a) If E is obtained by multiplying a row of In by k, then det (E) = k.
(b) If E is obtained by interchanging two rows of In, then det (E) = −1.
(c) If E is obtained by adding a multiple of one row of In to another, then det (E) = 1.
Theorem 1.16: If A is a square matrix with two proportional rows or two proportional columns, then det (A) = 0.
Example: (a)
    A = [−1 4; −2 8]   ← the second row is 2 times the first row
⇒  det (A) = 0
(b)
    B = [3 −1 4 −5; 6 −2 5 2; 5 8 1 4; −9 3 −12 15]   ← the 4th row is −3 times the first row
⇒  det (B) = 0.

1.19 EVALUATING DETERMINANTS BY ROW REDUCTION
This method involves substantially less computation than the cofactor expansion method. In this method we reduce the given matrix to upper triangular form by elementary row operations, then compute the determinant of the upper triangular matrix, and then relate that determinant to that of the original matrix.
EXAMPLE 8: Evaluate det (A) using row reduction, where
    A = [0 1 5; 3 −6 9; 2 6 1]
SOLUTION: We will reduce A to row Echelon form:
    det (A) = det[0 1 5; 3 −6 9; 2 6 1]
            = −det[3 −6 9; 0 1 5; 2 6 1]     ← rows interchanged
            = −3 det[1 −2 3; 0 1 5; 2 6 1]   ← common factor 3 from the first row
R3 → R3 − 2R1
            = −3 det[1 −2 3; 0 1 5; 0 10 −5]
R3 → R3 − 10R2
            = −3 det[1 −2 3; 0 1 5; 0 0 −55]
            = (−3)(−55) det[1 −2 3; 0 1 5; 0 0 1]   ← common factor −55 from row 3
            = (−3)(−55)(1)
    det (A) = 165.
EXAMPLE 9: Evaluate det (A) using row operations and cofactor expansion, where
    A = [3 5 −2 8; 1 −2 1 0; 2 4 3 6; −3 4 1 2]
SOLUTION:
    det (A) = det[3 5 −2 8; 1 −2 1 0; 2 4 3 6; −3 4 1 2]
R1 → R1 − 3R2
R3 → R3 − 2R2
R4 → R4 + 3R2
    det (A) = det[0 11 −5 8; 1 −2 1 0; 0 8 1 6; 0 −2 4 2]
Using cofactor expansion along the first column:
    det (A) = −det[11 −5 8; 8 1 6; −2 4 2]
    det (A) = −2 det[11 −5 8; 8 1 6; −1 2 1]   ← common factor 2 from row 3
R1 → R1 + 11R3
R2 → R2 + 8R3
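The strategy of this section can be coded directly: swap rows (flipping the sign), eliminate below each pivot with type-III operations (which change nothing), and multiply the diagonal at the end. A sketch with exact arithmetic (the name `det_by_reduction` is ours), checked against Example 8; for the matrix of Example 9, our computation gives −170:

```python
from fractions import Fraction

def det_by_reduction(M):
    # reduce to upper triangular form, tracking sign changes from swaps
    M = [[Fraction(x) for x in row] for row in M]
    n, sign = len(M), 1
    for c in range(n):
        pivot = next((i for i in range(c, n) if M[i][c] != 0), None)
        if pivot is None:
            return Fraction(0)          # a zero column: determinant is 0
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            sign = -sign                # a type-I operation flips the sign
        for i in range(c + 1, n):
            k = M[i][c] / M[c][c]
            M[i] = [x - k * y for x, y in zip(M[i], M[c])]  # type III
    result = Fraction(sign)
    for i in range(n):
        result *= M[i][i]               # determinant of a triangular matrix
    return result

print(det_by_reduction([[0, 1, 5], [3, -6, 9], [2, 6, 1]]))   # 165
print(det_by_reduction([[3, 5, -2, 8], [1, -2, 1, 0],
                        [2, 4, 3, 6], [-3, 4, 1, 2]]))        # -170
```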
4. If B is an n × n matrix and E is an n × n elementary matrix, then
    det (EB) = det (E) det (B)
Proof: Case 1: If E is obtained by multiplying a row of In by k, then EB results from B by multiplying that row by k.
∴  det (EB) = k det (B)  and  det (E) = k
∴  det (EB) = det (E) det (B).
Cases 2 and 3: If E is obtained by interchanging two rows or by adding a multiple of one row to another row, the proof is the same as in Case 1.
Note: If B is an n × n matrix and E1, E2, ..., Er are n × n elementary matrices, then
    det (E1E2 ... ErB) = det (E1) det (E2) ... det (Er) det (B)
5. A square matrix A is invertible iff det (A) ≠ 0.
Proof: Let R be the reduced row Echelon form of A; we will show that det (A) and det (R) are both zero or both non-zero.
Let E1, E2, ..., Er be the elementary matrices that correspond to the elementary row operations that produce R from A. Thus
    R = Er ... E2E1A
∴  det (R) = det (Er) ... det (E2) det (E1) det (A)
Since the determinants of the elementary matrices are all non-zero,
⇒  det (A) and det (R) are both zero or both non-zero. If A is invertible, then R = I.
[Since if A is an invertible n × n matrix, then the reduced row Echelon form of A is In.]
So,  det (R) = 1 ≠ 0  ⇒  det (A) ≠ 0.
Conversely, if det (A) ≠ 0, then det (R) ≠ 0, so R cannot have a row of zeroes.
[Since if R is the reduced row Echelon form of an n × n matrix A, then R either has a row of zeroes or R is the identity matrix In.]
So, A is invertible.
[Since if the reduced row Echelon form of A is In, then A is invertible.]
6. If A and B are square matrices of the same size, then
    det (AB) = det (A) det (B).
Proof: Case 1: If either A or B is not invertible, then AB is not invertible.
Then we have det (AB) = 0, and either det (A) = 0 or det (B) = 0.
⇒  det (AB) = det (A) · det (B)
⇒  |A| |adj A| = det[|A| 0 0 ... 0; 0 |A| 0 ... 0; 0 0 |A| ... 0; ...; 0 0 0 ... |A|]
⇒  |A| |adj (A)| = |A|^n
⇒  |adj (A)| = |A|^(n−1).
Theorem 1.18: If A and B are two square matrices of order n × n each, then
    adj (AB) = (adj B) · (adj A)
Proof: Since A · adj (A) = |A| · I,
    AB · adj (AB) = |AB| · I                                ...(1)
Now
    (AB) · (adj B) · (adj A) = A · (B · adj B) · adj A
                             = A · |B| · I · adj A
                             = |B| · A · adj A
                             = |B| · |A| · I
                             = |A| · |B| · I
    (AB) · (adj B) · (adj A) = |AB| · I                     ...(2)
From equations (1) and (2),
    (AB) · (adj B) · (adj A) = (AB) · adj (AB)
or  adj (AB) = (adj B) · (adj A)
EXAMPLE 13: Prove that adj (adj A) = |A|^(n−2) · A, where A is any square matrix of order n × n.
SOLUTION: Since A · (adj A) = |A| · I, taking the adjoint of both sides,
    adj [A · adj (A)] = adj [|A| · I]
or  adj (adj A) · adj A = |A|^(n−1) · I
    adj (adj A) · adj A · A = |A|^(n−1) · I · A
    adj (adj A) · |A| · I = |A|^(n−1) · A
⇒  adj (adj A) · |A| = |A|^(n−1) · A
⇒  adj (adj A) = |A|^(n−2) · A.
Theorem 1.19: If A and B are two non-singular matrices of the same order, then AB is also non-singular and (AB)−1 = B−1 · A−1.
Proof: Let A and B be two square matrices of order n × n.
Since both the matrices are non-singular, A−1 and B−1 exist. Hence B−1A−1 will also be an n × n matrix.
∴  (AB)(B−1A−1) = A(BB−1)A−1 = A · I · A−1 = A · A−1
    (AB)(B−1A−1) = I
and
    (B−1A−1)(AB) = B−1(A−1A)B = B−1IB = B−1B = I
∴  (B−1A−1)(AB) = (AB)(B−1A−1) = I
i.e., B−1A−1 is the inverse of AB.
∴  (AB)−1 = B−1A−1.
Theorem 1.20: If A is a non-singular matrix, then (A−1)−1 = A.
Proof: Let A−1 be the inverse of matrix A:
    A−1 · A = A · A−1 = I                      ...(1)
Let B be the inverse of A−1; then
    B(A−1) = (A−1)B = I                        ...(2)
From (1) and (2), we have
    A−1 · B = A−1 · A
or  B = A
i.e., the inverse of A−1 is A, or (A−1)−1 = A.
Theorem 1.21: Prove that (adj A)−1 = adj (A−1), where A is any non-singular matrix of order n × n.
Proof: We have
    adj (adj A) = |A|^(n−2) · A                ...(1)
    |adj (A)| = |A|^(n−1)                      ...(2)
    A−1 = adj (A)/|A|                          ...(3)
Hence
    adj (A−1) = adj (adj (A)/|A|)
              = adj ((1/|A|) · adj A)
              = (1/|A|)^(n−1) · adj (adj A)
              = (1/|A|^(n−1)) · |A|^(n−2) · A
    adj (A−1) = A/|A|                          ...(4)
Also
    (adj A)−1 = adj (adj A)/|adj A| = |A|^(n−2) · A / |A|^(n−1)
    (adj A)−1 = A/|A|                          ...(5)
From (4) and (5),
    (adj A)−1 = adj (A−1).
Examples on Symmetric, Skew-Symmetric, Hermitian and Skew-Hermitian Matrices
EXAMPLE 1: A and B are symmetric; show that AB + BA is symmetric and AB − BA is skew-symmetric.
SOLUTION: Since A and B are symmetric,
∴  A′ = A,  B′ = B                             ...(1)
Now
    (AB + BA)′ = (AB)′ + (BA)′ = B′A′ + A′B′ = BA + AB     [from (1)]
    (AB + BA)′ = AB + BA
⇒  AB + BA is symmetric.
Also
    (AB − BA)′ = (AB)′ − (BA)′ = B′A′ − A′B′ = BA − AB     [from (1)]
    (AB − BA)′ = −(AB − BA)
⇒  AB − BA is skew-symmetric.
EXAMPLE 2: Show that every square matrix can be expressed in one and only one way as the sum of a symmetric and a skew-symmetric matrix.
SOLUTION: We have
    A = (1/2)(A + A′) + (1/2)(A − A′)
where A is any square matrix.
∴  A = P + Q,
where
    P = (1/2)(A + A′),   Q = (1/2)(A − A′)
Now
    P′ = (1/2)(A + A′)′ = (1/2)[A′ + (A′)′] = (1/2)[A′ + A] = (1/2)[A + A′]     [since (A′)′ = A]
∴  P′ = P,  i.e., P is symmetric.
Again,
    Q′ = (1/2)(A − A′)′
       = (1/2)[A′ − (A′)′]
       = (1/2)[A′ − A]
    Q′ = −(1/2)[A − A′]
∴  Q′ = −Q,  i.e., Q is skew-symmetric.
∴  A = P + Q                                   ...(1)
where P and Q are a symmetric and a skew-symmetric matrix respectively.
Equation (1) shows that every square matrix can be expressed in at least one way as the sum of a symmetric and a skew-symmetric matrix.
Now we will prove that the representation (1) is unique. For this, if possible, let
    A = R + S                                  ...(2)
be another representation, where R is symmetric and S is skew-symmetric.
Now
    A′ = (R + S)′ = R′ + S′
∴  A′ = R − S                                  ...(3)   [since R′ = R and S′ = −S]
From (2) and (3), we get
    R = (A + A′)/2   and   S = (A − A′)/2
∴  R = P  and  S = Q
∴  The representation (2) is the same as representation (1).
Hence the result.
EXAMPLE 3: Every square matrix can be expressed in one and only one way as P + iQ, where P and Q are hermitian.
SOLUTION: We have
    A = (1/2)(A + A^θ) + i · (1/2i)(A − A^θ)
where A is any square matrix and A^θ denotes the conjugate transpose of A.
∴  A = P + iQ,
where
    P = (1/2)(A + A^θ),   Q = (1/2i)(A − A^θ)
Now
    P^θ = (1/2)(A + A^θ)^θ
        = (1/2)[A^θ + (A^θ)^θ]
        = (1/2)[A^θ + A]                       [since (A^θ)^θ = A]
        = (1/2)[A + A^θ]
    P^θ = P
∴  P is hermitian.
Again,
    Q^θ = [(1/2i)(A − A^θ)]^θ
        = −(1/2i)(A − A^θ)^θ
        = −(1/2i)[A^θ − (A^θ)^θ]
        = −(1/2i)[A^θ − A]
        = (1/2i)[A − A^θ]
    Q^θ = Q
⇒  Q is hermitian.
∴  We have
    A = P + iQ                                 ...(1)
where P and Q are hermitian.
Now we want to prove that representation (1) is unique. For this, if possible, let
    A = R + iS                                 ...(2)
where R and S are hermitian.
∴  A^θ = (R + iS)^θ = R^θ − iS^θ
    A^θ = R − iS                               ...(3)   [since R^θ = R and S^θ = S]
From (2) and (3), we have
    R = (A + A^θ)/2   and   S = (A − A^θ)/2i
∴  R = P  and  S = Q
∴  Representation (2) is the same as (1).
∴  Representation (1) is unique.
EXAMPLE 4: Express A = [1 0 5 3; −2 1 6 1; 3 2 7 1; 4 −4 −2 0] as the sum of a symmetric and a skew-symmetric matrix.
SOLUTION: Let
    A = [1 0 5 3; −2 1 6 1; 3 2 7 1; 4 −4 −2 0]
∴  A′ = [1 −2 3 4; 0 1 2 −4; 5 6 7 −2; 3 1 1 0]
∴  A + A′ = [2 −2 8 7; −2 2 8 −3; 8 8 14 −1; 7 −3 −1 0]
∴  (1/2)(A + A′) = (1/2)[2 −2 8 7; −2 2 8 −3; 8 8 14 −1; 7 −3 −1 0]
which is a symmetric matrix.
Again,
    (1/2)(A − A′) = (1/2)[0 2 2 −1; −2 0 4 5; −2 −4 0 3; 1 −5 −3 0]
which is a skew-symmetric matrix.
Now
    A = (1/2)(A + A′) + (1/2)(A − A′)
∴  A has been expressed as the sum of a symmetric and a skew-symmetric matrix.
EXAMPLE 5: (i) If A is a symmetric (skew-symmetric) matrix, show that kA is also a symmetric (skew-symmetric) matrix.
(ii) If A and B are symmetric (skew-symmetric), then so is A + B.
(iii) If A is any matrix, then prove that AA′ and A′A are both symmetric matrices.
SOLUTION: (i) Assume that A is a symmetric matrix.
∴  A′ = A                                      ...(1)
Now
    (kA)′ = kA′
⇒  (kA)′ = kA                                  [from (1)]
⇒  kA is symmetric.
Now assume that A is a skew-symmetric matrix.
∴  A′ = −A                                     ...(2)
Now
    (kA)′ = kA′
∴  (kA)′ = −kA                                 [from (2)]
⇒  kA is a skew-symmetric matrix.
(ii) Let A and B be symmetric.
∴  A′ = A,  B′ = B                             ...(1)
Now
    (A + B)′ = A′ + B′
∴  (A + B)′ = A + B                            [from (1)]
⇒  A + B is symmetric.
Now assume that A and B are skew-symmetric.
∴  A′ = −A  and  B′ = −B                       ...(2)
Then
    (A + B)′ = A′ + B′ = −A − B                [from (2)]
    (A + B)′ = −(A + B)
∴  A + B is skew-symmetric.
(iii) Let A be any matrix.
    (AA′)′ = (A′)′A′                            [since (AB)′ = B′A′]
∴  (AA′)′ = AA′                                [since (A′)′ = A]
⇒  AA′ is symmetric.
Also,
    (A′A)′ = A′(A′)′
∴  (A′A)′ = A′A
⇒  A′A is symmetric.
EXAMPLE 6: Prove that B′AB is symmetric or skew-symmetric according as A is symmetric or skew-symmetric.
SOLUTION: Assume that A is symmetric.
∴  A′ = A                                      ...(1)
Now
    (B′AB)′ = B′A′(B′)′                         [since (AB)′ = B′A′]
∴  (B′AB)′ = B′AB                              [since (B′)′ = B and A′ = A]
⇒  B′AB is symmetric.
Again, let A be skew-symmetric.
∴  A′ = −A                                     ...(2)
Now
    (B′AB)′ = B′A′(B′)′
∴  (B′AB)′ = −B′AB                             [from (2)]
⇒  B′AB is skew-symmetric.
EXAMPLE 7: If A and B are n-rowed symmetric matrices, prove that AB is symmetric iff A and B commute.
SOLUTION: Since A and B are symmetric,
∴  A′ = A  and  B′ = B                         ...(1)
Let
    AB = BA                                    ...(2)
Now
    (AB)′ = B′A′ = BA = AB                     [from (1) and (2)]
⇒  AB is symmetric.
Again, let AB be symmetric.
∴  (AB)′ = AB                                  ...(3)
Also (AB)′ = B′A′ = BA                         [from (1)]
∴  AB = BA
⇒  A and B commute.
This proves the required result.
EXAMPLE 8: (a) (i) If A is a hermitian matrix, show that iA is skew-hermitian.
(ii) If A is a skew-hermitian matrix, show that iA is hermitian.
(b) If A and B are hermitian or skew-hermitian, then so is A + B.
(c) If A is any square matrix, prove that A + A^θ, AA^θ, A^θA are all hermitian and A − A^θ is skew-hermitian.
SOLUTION: (a) (i) Since A is a hermitian matrix,
∴  A^θ = A                                     ...(1)
Now
    (iA)^θ = −iA^θ                              [since (kA)^θ = (conj k)A^θ and conj(i) = −i]
∴  (iA)^θ = −iA                                [from (1)]
⇒  iA is skew-hermitian.
(ii) Since A is skew-hermitian,
∴  A^θ = −A                                    ...(1)
Now
    (iA)^θ = (conj i)A^θ = −iA^θ                [since conj(i) = −i]
∴  (iA)^θ = (−i)(−A) = iA                      [since A^θ = −A]
⇒  iA is hermitian.
(b) Let A and B be hermitian.
∴  A^θ = A  and  B^θ = B                       ...(1)
Now
    (A + B)^θ = A^θ + B^θ = A + B              [from (1)]
⇒  A + B is hermitian.
Now assume that A and B are skew-hermitian.
∴  A^θ = −A  and  B^θ = −B                     ...(2)
Now
    (A + B)^θ = A^θ + B^θ = −A − B             [from (2)]
    (A + B)^θ = −(A + B)
⇒  A + B is skew-hermitian.
(c) Let A be any square matrix.
Now
    (A + A^θ)^θ = A^θ + (A^θ)^θ = A^θ + A      [since (A^θ)^θ = A]
∴  (A + A^θ)^θ = A + A^θ
⇒  A + A^θ is hermitian.
Again,
    (AA^θ)^θ = (A^θ)^θ A^θ = AA^θ
⇒  AA^θ is hermitian.
Now
    (A^θA)^θ = A^θ(A^θ)^θ = A^θA
⇒  A^θA is hermitian.
Again,
    (A − A^θ)^θ = A^θ − (A^θ)^θ = A^θ − A
    (A − A^θ)^θ = −(A − A^θ)
⇒  A − A^θ is skew-hermitian.
EXAMPLE 9: A and B are hermitian; show that AB + BA is hermitian and AB − BA is skew-hermitian.
SOLUTION: Since A and B are hermitian,
∴  A^θ = A  and  B^θ = B                       ...(1)
Now
    (AB + BA)^θ = (AB)^θ + (BA)^θ = B^θA^θ + A^θB^θ = BA + AB     [from (1)]
∴  (AB + BA)^θ = AB + BA
⇒  AB + BA is hermitian.
Also
    (AB − BA)^θ = (AB)^θ − (BA)^θ = B^θA^θ − A^θB^θ = BA − AB
    (AB − BA)^θ = −(AB − BA)
⇒  AB − BA is skew-hermitian.
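Example 9 can be sanity-checked on concrete hermitian matrices with Python's complex numbers; a sketch (the helpers `ctrans`, `matmul` and `madd` are our own):

```python
def ctrans(M):
    # conjugate transpose (the 'theta' operation)
    return [[x.conjugate() for x in col] for col in zip(*M)]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def madd(A, B, s=1):
    # entrywise A + s*B
    return [[x + s * y for x, y in zip(r1, r2)] for r1, r2 in zip(A, B)]

A = [[1, 2 + 1j], [2 - 1j, 3]]    # hermitian
B = [[2, 1j], [-1j, 1]]           # hermitian

S = madd(matmul(A, B), matmul(B, A))       # AB + BA
K = madd(matmul(A, B), matmul(B, A), -1)   # AB - BA

print(ctrans(S) == S)                                  # hermitian: True
print(ctrans(K) == [[-x for x in row] for row in K])   # skew-hermitian: True
```

The entries here are Gaussian integers, so every comparison is exact.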
EXAMPLE 10: Show that the matrix B^θAB is hermitian or skew-hermitian according as A is hermitian or skew-hermitian.
SOLUTION: Let A be hermitian.
∴  A^θ = A                                     ...(1)
Now
    (B^θAB)^θ = B^θA^θ(B^θ)^θ = B^θAB          [from (1)]
⇒  B^θAB is hermitian.
Again, let A be skew-hermitian.
∴  A^θ = −A                                    ...(2)
Now
    (B^θAB)^θ = B^θA^θ(B^θ)^θ
∴  (B^θAB)^θ = −B^θAB                          [from (2)]
⇒  B^θAB is skew-hermitian.
EXAMPLE 11: A and B are hermitian. Show that AB is hermitian if and only if AB = BA.
SOLUTION: Since A and B are hermitian,
∴  A^θ = A  and  B^θ = B                       ...(1)
Let
    AB = BA                                    ...(2)
Now
    (AB)^θ = B^θA^θ = BA = AB                  [from (1) and (2)]
⇒  AB is hermitian.
Again, let AB be hermitian.
∴  (AB)^θ = AB
⇒  B^θA^θ = AB
⇒  BA = AB                                     [from (1)]
⇒  AB = BA.

EXERCISE
1. If A = [3 5; 6 2] and B = [−1 3; 1 0], find A + B and A − B.
2. Find the product of the matrices A and B, where
    A = [1 3 0; −1 2 1; 0 0 2],   B = [2 5 1; −1 0 2; 2 1 3]
3. If A = [1+2i 2−4i 2+5i; 4−5i 7+2i 7+3i; 8 5+6i 7], then find (Ā)^T.
4. If A = [ 0       2+3i ]
          [ −2−3i   0    ]

   Is the matrix A skew-Hermitian or not?

5. Using elementary row operations, find the inverse of the given matrix A, where

   A = [ 0   2   1   3 ]
       [ 1   1  −1  −2 ]
       [ 1   2   0   1 ]
       [−1   1   2   6 ]  (4×4)

6. Find the inverse of the following matrices by elementary transformations:

   (i)  [ 1  3  3 ]    (ii)  [ 2  1  2 ]    (iii)  [ 1 −1  1 ]    (iv)  [ 1  2  3 ]
        [ 1  4  3 ]          [ 2  2  1 ]           [ 4  1  0 ]          [ 2  4  5 ]
        [ 1  3  4 ]          [ 1  2  2 ]           [ 8  1  1 ]          [ 3  5  6 ]

OBJECTIVE TYPE QUESTIONS

1. A matrix having m rows and n columns with m ≠ n is said to be a
   (a) scalar matrix          (b) identity matrix
   (c) square matrix          (d) rectangular matrix

2. Which one of the following is FALSE?
   (a) A matrix is a square matrix if the number of rows is equal to the number of columns.
   (b) Every diagonal matrix is a square matrix.
   (c) Every diagonal matrix is a scalar matrix.
   (d) Every unit matrix is a scalar matrix.

3. Real matrices [A]3×1, [B]3×3, [C]3×5, [D]5×3, [E]5×5 and [F]5×1 are given. Matrices [B] and [E] are symmetric.
   The following statements are made with respect to these matrices:
   1. The matrix product [F]^T [C]^T [B] [C] [F] is a scalar.
   2. The matrix product [D]^T [F] [D] is always symmetric.
   With reference to the above statements, which of the following applies?
   (a) Statement 1 is true but 2 is false
   (b) Statement 1 is false but 2 is true
   (c) Both the statements are true
   (d) Both the statements are false

4. If U = [2  −3  4], X = [0  2  3], V = [3  2  1]^T and Y = [2  2  4]^T, then UV + XY = .......... .
   (a) 20        (b) [−20]
   (c) −20       (d) [20]
5. If  [ x−3   2y−x ] + [ 6    2x ] = [ 0   7 ]  then
       [ 8     5    ]   [ −5   0  ]   [ 3   5 ]

   (a) x = 5, y = 3        (b) x = 3, y = 5
   (c) x = −3, y = 5       (d) x = 8, y = 3

6. If M = [ 1  1 ]
          [ 1  2 ]
          [ 1  3 ]

   the determinant of (M^T M) = .............. .
   (a) 2        (b) 4
   (c) 8        (d) 6

7. The inverse of the matrix [ 3+2i   −i   ]  is
                             [ i      3−2i ]

   (a) (1/12) [ 3−2i   i    ]       (b) (1/14) [ 3−2i   i    ]
              [ −i     3+2i ]                  [ −i     3+2i ]

   (c) (1/12) [ 3+2i   −i   ]       (d) (1/14) [ 3+2i   −i   ]
              [ i      3−2i ]                  [ i      3−2i ]

8. If the determinant of a 3 × 3 matrix A is 10, then det(3A) is equal to .............. .
   (a) 27        (b) 30
   (c) 270       (d) −270

9. For a matrix [M] = [ 3/5   4/5 ]  the transpose of the matrix is equal to the inverse
                      [ x     3/5 ]

   of the matrix, [M]^T = [M]^−1. The value of x is given by
   (a) −4/5       (b) −3/5
   (c) 3/5        (d) 4/5

10. The matrix A = [ 0       −3−2i ]  is a .............. .
                   [ 3−2i    0     ]

    (a) Hermitian matrix           (b) skew-Hermitian matrix
    (c) symmetric matrix           (d) skew-symmetric matrix

11. Consider the matrices X(4×3), Y(4×3) and P(2×3). The order of [P (X^T Y)^−1 P^T]^T will be ............. .
    (a) (2 × 2)       (b) (3 × 3)
    (c) (4 × 3)       (d) (3 × 4)
12. Consider the following statements relating to any two matrices P and Q such that PQ is defined:
    1. (PQ)^−1 = Q^−1 P^−1
    2. (PQ)^T = Q^T P^T
    3. ρ(PQ) ≤ min [ρ(P), ρ(Q)], where ρ denotes the rank of a matrix.
    [Here P^T is the transpose of P and P^−1 denotes the inverse of P.]
    Which of the following statements is CORRECT?
    (a) 1, 2 and 3         (b) 1 and 2
    (c) 2 and 3            (d) None of the above

13. Which one of the following matrices is singular?

    (a) [ 2  5 ]    (b) [ 3  2 ]    (c) [ 2  4 ]    (d) [ 4  3 ]
        [ 1  3 ]        [ 2  3 ]        [ 3  6 ]        [ 6  2 ]

14. Let A = [ 2   −0.1 ]  and  A^−1 = [ 1/2   a ],  then the value of (a + b) =
            [ 0    3   ]              [ 0     b ]

    (a) 7/20       (b) 3/20
    (c) 19/60      (d) 11/20

15. If the determinant of a 4 × 4 matrix A is 10, then the determinant of −2A is equal to ............ .
    (a) −20        (b) 80
    (c) 160        (d) −200

ANSWERS

1. A + B = [ 5   8 ],   A − B = [ 1   2 ]
           [ 7  −1 ]            [ 5  −1 ]

2. AB = [ −1   5   7 ]
        [ −2  −4   6 ]
        [  4   2   6 ]

3. A^θ = (Ā)^T = [ 1−2i   4+5i   8    ]
                 [ 2+4i   7−2i   5−6i ]
                 [ 2−5i   7−3i   7    ]
4. No

5. A^−1 = [ −1  −3   3  −1 ]
          [  1   1  −1   0 ]
          [  2  −5   2  −3 ]
          [ −1   1   0   1 ]

6. (i)  [  7  −3  −3 ]    (ii)  (1/5) [  2   2  −3 ]
        [ −1   1   0 ]                [ −3   2   2 ]
        [ −1   0   1 ]                [  2  −3   2 ]

   (iii) [  1   2  −1 ]   (iv)  [  1  −3   2 ]
         [ −4  −7   4 ]         [ −3   3  −1 ]
         [ −4  −9   5 ]         [  2  −1   0 ]

OBJECTIVE TYPE QUESTIONS
1. (d)    2. (c)    3. (a)    4. (d)    5. (c)
6. (d)    7. (a)    8. (c)    9. (a)   10. (b)
11. (a)  12. (a)   13. (c)   14. (a)   15. (c)
Chapter 2

RANK OF MATRIX AND SYSTEM OF LINEAR EQUATIONS

2.1 DEFINITION
The rank of an m × n matrix A is said to be 'r' if A has at least one non-singular submatrix of order r, while every submatrix of order greater than r is singular. It is denoted by ρ(A) or rank A.
• The rank r of an m × n matrix A can at most be equal to the minimum of m and n, but it may be less.
• An n × n matrix A has rank r < n iff det A = 0 and rank r = n iff det A ≠ 0; that is, if A is singular then ρ(A) < n and if A is non-singular then ρ(A) = n.

2.2 NORMAL FORM
By performing elementary transformations, any non-zero matrix A can be reduced to one of the following four forms, called the normal forms of A:

(i) [I_r]    (ii) [I_r  0]    (iii) [ I_r ]    (iv) [ I_r  0 ]
                                    [ 0   ]         [ 0    0 ]

The number r = ρ(A).

EXAMPLE 1: Find the rank of the matrix by reducing it to normal form:

(a) A = [ 1   2  −1   3 ]    (b) A = [ 2   3   4   5 ]    (c) A = [  1   2  −1   4 ]
        [ 4   1   2   1 ]            [ 3   4   5   6 ]            [  2   4   3   4 ]
        [ 3  −1   1   2 ]            [ 4   5   6   7 ]            [  1   2   3   4 ]
        [ 1   2   0   1 ]            [ 9  10  11  12 ]            [ −1  −2   6  −7 ]

SOLUTION: (a) The given matrix is

A = [ 1   2  −1   3 ]
    [ 4   1   2   1 ]
    [ 3  −1   1   2 ]
    [ 1   2   0   1 ]
Theorem 2.1: The rank of the product matrix AB of two matrices A and B is less than or equal to the rank of either of the matrices A and B.
Proof: Let r1 and r2 be the ranks of the matrices A and B respectively.
∴ ρ(A) = r1
∴ A ~ [ I_r1   0 ]
      [ 0      0 ]
where I_r1 is the unit matrix of order r1.

Now  AB ~ [ I_r1   0 ] B
          [ 0      0 ]

But [ I_r1  0 ] B can have r1 non-zero rows at the most.
    [ 0     0 ]

∴ Rank of AB = Rank of [ I_r1  0 ] B ≤ r1
                       [ 0     0 ]

∴ Rank of AB ≤ Rank of A.
Similarly, Rank of AB ≤ Rank of B.

EXAMPLE 7: Show that Rank (AA′) = Rank (A).
SOLUTION: Let B = AA′. Then
Rank (B) = Rank (AA′) ≤ Rank A                               ...(1)
[∵ the rank of a product of two matrices cannot exceed the rank of either matrix]
Now B = AA′ ⇒ A^−1 B = A′
∴ Rank (A) = Rank (A′)    [∵ a matrix and its transpose have the same rank]
           = Rank (A^−1 B) ≤ Rank B
⇒ Rank (A) ≤ Rank (B)                                        ...(2)
From (1) and (2), Rank A = Rank B = Rank (AA′).

EXAMPLE 8: Show that Rank (A) = Rank (AA*).
SOLUTION: Let C = AA*. Then
Rank (C) = Rank (AA*) ≤ Rank (A)                             ...(1)
Again C = AA* ⇒ A^−1 C = A*
Now Rank (A) = Rank (A*) = Rank (A^−1 C) ≤ Rank C
∴ Rank (A) ≤ Rank C                                          ...(2)
From (1) and (2), Rank A = Rank C = Rank (AA*).
2.3 SYSTEM OF LINEAR EQUATIONS
A system of m linear equations in n unknowns can be put in the standard form
Σ_{j=1}^{n} a_ij x_j = b_i,   i = 1, 2, ..., m,
that is,
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
................................................
am1x1 + am2x2 + ... + amnxn = bm                             ...(1)
where the a_ij and b_i are constants. The number a_ij is the coefficient of the unknown x_j and the number b_i is the constant term of the i-th equation.
The system (1) is called an m × n system. It is called a square system if m = n, i.e., if the number of equations is equal to the number of unknowns.

2.3.1 Homogeneous and Non-homogeneous Systems
The system (1) is said to be homogeneous if all the constant terms are zero, i.e., if b1 = 0, b2 = 0, ..., bm = 0. Otherwise the system is non-homogeneous.

2.3.2 Solution of the System of Linear Equations
Any set of values of the unknowns x1, x2, ..., xn which satisfies every equation of system (1) is called a solution of the system.
A homogeneous system always has at least one solution, x1 = x2 = ... = xn = 0, called the trivial solution.
• The system of linear equations is said to be consistent if it has one or more solutions, and inconsistent if it has no solution:
  Inconsistent: no solution.
  Consistent: a unique solution, or an infinite number of solutions.
• Consider the general system of m equations in n unknowns:
a11x1 + a12x2 + ... + a1nxn = b1
a21x1 + a22x2 + ... + a2nxn = b2
..........................................
am1x1 + am2x2 + ... + amnxn = bm
The solution of such an equation depends on b:
(i) If b ≠ 0, there is no solution.
(ii) If b = 0, then every vector u is a solution.
Two linear systems are called equivalent if they have the same solution set; that is, each solution of the first system is a solution of the second system, and each solution of the second system is a solution of the first.

Definition
Two augmented matrices (or systems of linear equations) are said to be row-equivalent if one can be transformed into the other by a finite sequence of elementary row operations.
Elementary row operations do not alter the solution set of the system.

Theorem 2.2: If two systems of linear equations are row-equivalent, then they have the same set of solutions.

Linear Equation in One Unknown
Theorem 2.3: Consider the linear equation ax = b.
(i) If a ≠ 0, then x = b/a is the unique solution.
(ii) If a = 0 but b ≠ 0, there is no solution.
(iii) If a = 0 and b = 0, then every scalar α is a solution of ax = b (infinitely many solutions).

EXAMPLE 3: Solve:
(i) 4x − 2 = 2x + 8    (ii) 2x − 6 = 2x + 4    (iii) 3x + 1 = 2x + 1 + x
SOLUTION:
(i) 4x − 2 = 2x + 8 ⇒ 2x = 10 ⇒ x = 5. Unique solution.
(ii) 2x − 6 = 2x + 4 ⇒ 0·x = 10. No solution.
(iii) 3x + 1 = 2x + 1 + x ⇒ 3x + 1 = 3x + 1 ⇒ 0·x = 0. Infinitely many solutions (every scalar α is a solution).

2.5 SYSTEM OF LINEAR EQUATIONS IN TWO UNKNOWNS
Consider a system of two non-degenerate linear equations in two unknowns x and y:
A1x + B1y = C1
A2x + B2y = C2
Since the equations are non-degenerate, A1, B1, A2 and B2 are non-zero.
The general solution of the above system belongs to one of the following three types.
(i) The System has Exactly One Solution
Here the two lines intersect in one point. This occurs when the lines have distinct slopes or, equivalently, when the coefficients of x and y are not proportional:
A1/A2 ≠ B1/B2
EXAMPLE 4:  x − y = −1,  3x + 2y = 12.
SOLUTION:
[Fig.: the lines x − y = −1 and 3x + 2y = 12 intersect in a single point.]

(ii) The System has No Solution
Here the two lines are parallel. This occurs when the lines have the same slope but different y-intercepts, or when
A1/A2 = B1/B2 ≠ C1/C2
EXAMPLE 5:  x + 3y = 3,  2x + 6y = −8.
SOLUTION:
[Fig.: the lines x + 3y = 3 and 2x + 6y = −8 are parallel.]
(iii) The System has an Infinite Number of Solutions
Here the two lines coincide. This occurs when the lines have the same slope and the same y-intercept, or when
A1/A2 = B1/B2 = C1/C2
EXAMPLE 6:  x + 2y = 4,  2x + 4y = 8.
SOLUTION:
[Fig.: the lines l1: x + 2y = 4 and l2: 2x + 4y = 8 coincide.]

2.6 SOLUTION OF SYSTEM OF LINEAR EQUATIONS BY ELIMINATION METHOD
The matrix of the coefficients of x, y, z is reduced to echelon form by elementary row transformations. At the end of the row transformations the value of z is calculated from the last equation, and the values of y and x are calculated by backward substitution.

EXAMPLE 7: Solve the following equations:
x − y + 2z = 3
x + 2y + 3z = 5
3x − 4y − 5z = −13
SOLUTION: In matrix form the equations are written as

[ 1  −1   2 ] [x]   [  3 ]
[ 1   2   3 ] [y] = [  5 ]
[ 3  −4  −5 ] [z]   [−13 ]

R2 → R2 − R1, R3 → R3 − 3R1:

[ 1  −1    2 ] [x]   [  3 ]
[ 0   3    1 ] [y] = [  2 ]
[ 0  −1  −11 ] [z]   [−22 ]

R3 → R3 + (1/3)R2:

[ 1  −1   2     ] [x]   [  3    ]
[ 0   3   1     ] [y] = [  2    ]
[ 0   0  −32/3  ] [z]   [ −64/3 ]
R3 → R3 + (1/11)R2:

~ [ 1   3/5     7/5  :  4/5  ]
  [ 0   121/5  −11/5 :  33/5 ]
  [ 0   0       0    :  0    ]

∴ Rank of A = 2 = Rank of C.
Hence the equations are consistent. But the rank < 3 ⇒ an infinite number of solutions.

[ 1   3/5     7/5  ] [x]   [ 4/5  ]
[ 0   121/5  −11/5 ] [y] = [ 33/5 ]
[ 0   0       0    ] [z]   [ 0    ]

x + (3/5)y + (7/5)z = 4/5
(121/5)y − (11/5)z = 33/5, i.e., 11y − z = 3
Let z = k; then 11y − k = 3
∴ y = 3/11 + k/11
Now x + (3/5)(3/11 + k/11) + (7/5)k = 4/5
∴ x = 7/11 − (16/11)k.

EXAMPLE 10: Solve:
2x − y + 3z = 8
−x + 2y + z = 4
3x + y − 4z = 0
SOLUTION:
[ 2  −1   3 ] [x]   [8]
[−1   2   1 ] [y] = [4]
[ 3   1  −4 ] [z]   [0]
i.e., AX = B.
The augmented matrix is

C = [ 2  −1   3  :  8 ]
    [−1   2   1  :  4 ]
    [ 3   1  −4  :  0 ]
[ 2  −3   6  −5 ] [x]
[ 0   1  −4   1 ] [y] = [ 3 ]
                  [z]   [ 1 ]
                  [t]

i.e.,  2x − 3y + 6z − 5t = 3                                 ...(1)
       y − 4z + t = 1                                        ...(2)
Let t = k1 and z = k2.
From (2),  y − 4k2 + k1 = 1  ⇒  y = 1 + 4k2 − k1
From (1),  2x − 3(1 + 4k2 − k1) + 6k2 − 5k1 = 3
           2x − 3 − 12k2 + 3k1 + 6k2 − 5k1 = 3
           2x = 6 + 6k2 + 2k1
∴ x = 3 + 3k2 + k1,  y = 1 + 4k2 − k1,  z = k2,  t = k1.

2.7.1 Homogeneous System of Linear Equations
For a system of homogeneous linear equations AX = 0:
(i) X = 0 is always a solution. This solution, in which each unknown has the value zero, is called the null solution or the trivial solution. Thus a homogeneous system has either the trivial solution alone or an infinite number of solutions.
(ii) If ρ(A) = number of unknowns, the system has only the trivial solution.
(iii) If ρ(A) < number of unknowns, the system has an infinite number of non-trivial solutions.
Note: A system AX = 0, where A is an n × n matrix, may not have a non-zero solution.

EXAMPLE 15: Determine 'b' such that the system of homogeneous equations
2x + y + 2z = 0
x + y + 3z = 0
4x + 3y + bz = 0
has: (i) the trivial solution, (ii) non-trivial solutions. Find the non-trivial solutions using the matrix method.
SOLUTION: The system is
2x + y + 2z = 0
x + y + 3z = 0
4x + 3y + bz = 0
2.8 SOLUTION OF SYSTEM OF LINEAR EQUATIONS BY MATRIX METHOD
(i) Write the given system of equations in matrix form AX = B.
(ii) Find |A|.
(iii) If |A| ≠ 0, then compute A^−1 by using A^−1 = (1/|A|)(adj A) and use X = A^−1 B.
(iv) If |A| = 0, then find (adj A)·B.
(v) If (adj A)B = 0, then the system has infinitely many solutions, and if (adj A)B ≠ 0, then the system has no solution.

EXAMPLE 17: Solve by matrix method:
x − 2y + 3z = 2
2x − 3z = 0
x + y + z = 0
SOLUTION:
[ 1  −2   3 ] [x]   [2]
[ 2   0  −3 ] [y] = [0]
[ 1   1   1 ] [z]   [0]

|A| = | 1  −2   3 |
      | 2   0  −3 | = 19 ≠ 0
      | 1   1   1 |

∴ X = A^−1 B

adj A = [  3   5   6 ]
        [ −5  −2   9 ]
        [  2  −3   4 ]

A^−1 = (adj A)/|A| = (1/19) [  3   5   6 ]
                            [ −5  −2   9 ]
                            [  2  −3   4 ]

Now X = A^−1 B = (1/19) [  3   5   6 ] [2]          [  6 ]
                        [ −5  −2   9 ] [0] = (1/19) [ −10]
                        [  2  −3   4 ] [0]          [  4 ]

∴ x = 6/19,  y = −10/19,  z = 4/19.

EXAMPLE 18: Solve by matrix method:
x + y + z = 3
x + 2y + 3z = 4
x + 4y + 9z = 6
A1 = [ 3  1  1 ]   (column 1 replaced by B)   ∴ x = det(A1)/det A = 4/2,  x = 2
     [ 4  2  3 ]
     [ 6  4  9 ]

A2 = [ 1  3  1 ]   (column 2 replaced by B)   y = det(A2)/det A = 2/2,  y = 1
     [ 1  4  3 ]
     [ 1  6  9 ]

A3 = [ 1  1  3 ]   (column 3 replaced by B)   z = det(A3)/det A = 0/2,  z = 0
     [ 1  2  4 ]
     [ 1  4  6 ]

∴ x = 2,  y = 1,  z = 0.        Ans.

Theorem 2.4: Let v0 be a particular solution of AX = B, and let W be the general solution (the solution set) of AX = 0. Then
v0 + W = {v0 + w : w ∈ W}
is the general solution of AX = B.
Proof: Let w be a solution of AX = 0. Then
A(v0 + w) = Av0 + Aw = B + 0 = B;
thus the sum v0 + w is a solution of AX = B.
Conversely, let v be a solution of AX = B. Then
A(v − v0) = Av − Av0 = B − B = 0
∴ v − v0 ∈ W
∴ v = v0 + (v − v0).
Hence any solution of AX = B can be obtained by adding a solution of AX = 0 to a particular solution of AX = B.
2. Find which of the following systems of linear equations are consistent and, if consistent, find their solutions:

(i)   [ 2   3  −1 ] [x]   [ 4 ]
      [ 1  −2   3 ] [y] = [ 2 ]
      [ 3   1   0 ] [z]   [ 4 ]

(ii)  [ 1   1  −1 ] [x]   [ 3 ]
      [ 1  −1   1 ] [y] = [ 1 ]
      [ 0   1  −1 ] [z]   [ 0 ]

(iii) [ 1   1   2 ] [x]   [  2 ]
      [ 3  −1   1 ] [y] = [ −3 ]
      [ 1   3  −1 ] [z]   [  4 ]

3. Determine whether the following system of linear equations is consistent; if yes, then obtain its solution using Cramer's rule:

   [ 0  a  b ] [x]   [ 0 ]
   [ a  b  a ] [y] = [ a ]
   [ b  a  0 ] [z]   [ b ]

   where a and b are non-zero real numbers.

4. Find the solution of the following system of equations using Cramer's rule:

   [ a  1  0 ] [x]   [ 1 ]
   [ 1  a  1 ] [y] = [ a ]
   [ 0  1  a ] [z]   [ 1 ]

5. Find the values of the pair (a, b) such that the following system of equations has a solution:

   [  2  3  −1 ] [x]   [ 1 ]
   [  1  2   b ] [y] = [ 2 ]
   [ −2  1   a ] [z]   [ b ]

True or False
1. Let A be an m × n matrix. If m < n, then the system of linear equations AX = 0 has infinite solutions.
2. If AX = 0, where A is an m × n matrix, has a non-zero solution, then m ≤ n.
3. If AX = 0 has a non-zero solution, where A is an m × n matrix, then m ≥ n.
4. If AX = b and AY = b, then (Y − X) is in the null space of A.
5. Every system AX = 0 is consistent.
6. Every system AX = 0 has many solutions.
7. Let A be a 5 × 5 matrix such that the system AX = b is consistent for every b in R⁵. Then the system AX = 0 has a non-trivial solution.
8. A system AX = b is inconsistent if and only if b is in the span of the row vectors of A.
9. A system AX = b is consistent if and only if b is in the linear span of the column vectors of A.
10. Let A be an m × n matrix and n < m. If the system AX = b is consistent then it has infinite solutions.
11. The rank of the zero matrix is zero.
12. Let A = [aij], i, j = 1, 2, ..., n. If a11 > 0 then the rank of A is at least one.
13. If A is an m × n matrix and m < n then rank (A) can be n.
14. If A is a 5 × 3 matrix then the maximum rank of A can be 5.
15. If A is an n × n matrix such that |A| ≠ 0, then rank (A) = n.

OBJECTIVE TYPE QUESTIONS

1. Let A be a 4 × 3 matrix. Then which of the following is correct?
   (a) The set of row vectors is linearly dependent.
   (b) The set of column vectors is linearly dependent.
   (c) The set of row vectors is linearly independent.
   (d) The maximum rank of A is 4.

2. The general solution of 5x1 + 2x2 − x3 = 0 is the span of which of the following pairs?
   (a) (−2, 5, 0), (1, 0, 5)
   (b) (1, 2, 1), (3, 1, 2)
   (c) (−2/5, 1, 0), (1/5, 0, 1)
   (d) (0, 0, 50), (−20, 50, 0)

3. Let A = [ 1  −2   2   1 ]
           [ 3   4  −2   5 ]
           [ 2  −1   0   1 ]

   The rank of A is:
   (a) 4        (b) 3
   (c) 2        (d) 1

4. The rank of the matrix A = [ 1  1  1  1 ]  is:
                              [ 2  2  2  2 ]
                              [ 3  3  3  3 ]
                              [ 4  4  4  4 ]

   (a) 1        (b) 2
   (c) 3        (d) 4

5. The rank of the matrix A = [ −2  −1  −3  −1 ]  is:
                              [  1   2   3  −1 ]
                              [  1   0   1   1 ]

   (a) 2        (b) 3
   (c) 4        (d) 1
6. If A is an m × n matrix and r = rank (A), then AX = 0 has only the trivial solution if:
   (a) r = m        (b) r = n
   (c) m + n        (d) m − n

7. The rank of the matrix A = [ 0  1  1  1 ]  is:
                              [ 1  0  1  1 ]
                              [ 1  1  0  1 ]
                              [ 1  1  1  0 ]

   (a) 1        (b) 2
   (c) 3        (d) 4

8. The rank of the matrix A = [ 0  0  1 ]  is:
                              [ 0  1  0 ]
                              [ 1  0  0 ]

   (a) 1        (b) 2
   (c) 3        (d) 4

9. The value of k for which the following system of equations admits no solution:
   x + 2y + 3z = 1
   3x + 7y + kz = 2
   2x + ky + 12z = 3
   (a) k ≠ 3       (b) k = 3
   (c) k = 10      (d) none of these

10. The value of k for which the following system of linear equations:
    kx + y + z = 1
    x + ky + z = 1
    x + y + kz = 1
    has a unique solution:
    (a) k = 1                   (b) k = −1
    (c) k ≠ 1 and k ≠ −2        (d) every value of k ∈ ℝ

11. Consider the system AX = λX, where X ≠ 0, A is an n × n matrix, and λ is a scalar. Then r = rank of (A − λI) is:
    (a) r < n       (b) r ≤ n
    (c) r > n       (d) r = n

12. The rank of the real matrix [ 1    1    1  ]  is 3 if:
                                [ x    y    z  ]
                                [ x³   y³   z³ ]

    (a) x, y, z are all distinct and x + y + z ≠ 0
    (b) x = y = z
    (c) x, y, z are all distinct and x + y + z = 0
    (d) any two of x, y, z are equal.
ANSWERS
1. x = −0.0769; y = −0.2308; z = 0.9231
2. (i) x = 1, y = 1, z = 1    (ii) Inconsistent    (iii) x = −0.5909; y = 1.6818; z = 0.4545

TRUE OR FALSE
1. T    2. T    3. F    4. T    5. T
6. F    7. T    8. F    9. T   10. F
11. F  12. T   13. F   14. F   15. T

OBJECTIVE TYPE QUESTIONS
1. (a)    2. (a, c)    3. (b)    4. (a)    5. (a)
6. (b)    7. (d)       8. (c)    9. (c)   10. (c)
11. (b)  12. (a)
Chapter 3

VECTOR SPACE

3.1 INTRODUCTION
A vector space consists of a non-empty set V together with a field 𝔽 and two algebraic operations. These operations are called vector addition and scalar multiplication. Vector addition is a function from V × V into V defined as (u, v) → u + v ∈ V, and scalar multiplication is a function from 𝔽 × V into V defined as (α, u) → αu ∈ V, where 𝔽 is called the field; it is either ℝ or ℂ unless otherwise mentioned. Elements of V are called vectors and elements of 𝔽 are called scalars. Sometimes it is appropriate to state a theorem or give a definition without mentioning the field; in such a case the theorem or definition is true for both scalar fields ℝ and ℂ.

3.2 DEFINITION OF VECTOR SPACE
A non-empty set V is said to be a vector space over the field 𝔽 if it satisfies the following axioms:
Axiom 1: Closure under vector addition: u + v ∈ V ∀ u, v ∈ V.
Axiom 2: Closure under scalar multiplication: α·u ∈ V ∀ α ∈ 𝔽 and ∀ u ∈ V.
Axiom 3: Associative law: u + (v + w) = (u + v) + w ∀ u, v, w ∈ V.
Axiom 4: Existence of additive identity: There exists 0 ∈ V such that v + 0 = v ∀ v ∈ V.
Axiom 5: Existence of additive inverse: For each v ∈ V there exists −v ∈ V such that v + (−v) = 0.
Axiom 6: Commutative law: u + v = v + u ∀ u, v ∈ V.
Axiom 7: Distributive law for vector addition in V: α·(u + v) = α·u + α·v ∀ α ∈ 𝔽 and ∀ u, v ∈ V.
Axiom 8: Distributive law for scalar addition: (α + β)·u = α·u + β·u ∀ α, β ∈ 𝔽 and ∀ u ∈ V.
Axiom 9: Associative law for scalar multiplication: α·(β·v) = (αβ)·v ∀ v ∈ V and ∀ α, β ∈ 𝔽.
Axiom 10: Existence of multiplicative identity: 1·u = u ∀ u ∈ V.
Vector spaces as defined above are sometimes called linear spaces. Axioms 1, 3, 4, 5 and 6 indicate that (V, +) is an abelian (commutative) group.
Examples of Vector Spaces
EXAMPLE 1: Let ℂ, ℝ and ℚ be the sets of complex, real and rational numbers respectively. Then each of the following is a vector space:
(i) ℂ over ℂ, (ii) ℂ over ℝ, (iii) ℂ over ℚ, (iv) ℝ over ℝ, (v) ℝ over ℚ.
These are concrete examples of vector spaces.
SOLUTION: We discuss only (i), ℂ over ℂ; the remaining cases are similar. To prove that ℂ over ℂ is a vector space we need to show that ℂ satisfies all ten axioms of a vector space.
1. For every u, v ∈ ℂ, u + v ∈ ℂ.
2. For every a ∈ ℂ and u ∈ ℂ, au ∈ ℂ.
3. For every u, v, w ∈ ℂ, (u + v) + w = u + (v + w).
4. For every u ∈ ℂ, u + 0 = u + (0 + i0) = u, i.e., 0 is the additive identity.
5. For every u ∈ ℂ, u + (−u) = 0, i.e., (−u) is the additive inverse of u.
6. For every u, v ∈ ℂ, u + v = v + u.
7. For every α ∈ ℂ and for every u, v ∈ ℂ, α·(u + v) = α·u + α·v.
8. For every α, β ∈ ℂ and for every u ∈ ℂ, (α + β)·u = α·u + β·u.
9. For every α, β ∈ ℂ and for every u ∈ ℂ, α·(βu) = (αβ)·u.
10. For every u ∈ ℂ, 1·u = u.
The other cases (ii)-(v) can be verified by the reader similarly.

EXAMPLE 2: The Cartesian plane, i.e., ℝ², is a vector space over the set of real numbers ℝ.
SOLUTION: The Cartesian plane ℝ² is the plane which helps the reader to visualize and grasp numerous concepts of vector spaces.
In ℝ², vector addition is defined by the parallelogram law: for any two vectors u, v in ℝ², the sum u + v is the vector in ℝ² given by the diagonal of the parallelogram whose adjacent sides are u and v. The scalar multiplication αu, for any α ∈ ℝ and u = (x1, x2) ∈ ℝ², is defined as
αu = α·(x1, x2) = (αx1, αx2).
The direction of αu depends on the sign of α: if α > 0 then αu points in the direction of u, otherwise it points in the direction opposite to u. The magnitude of αu depends on |α|: if |α| > 1 then αu is longer than u, and if |α| < 1 then αu is shorter than u. The vector addition and scalar multiplication in ℝ² are depicted in Figure 1.
[Fig. 1: Vector addition (parallelogram law) and scalar multiplication in ℝ².]

The remaining axioms of a vector space can be easily verified by the reader.

EXAMPLE 3: Let 𝔽 be a field. Then 𝔽ⁿ is a vector space over 𝔽.
SOLUTION: Let 𝔽 be a field. The vector addition and scalar multiplication on 𝔽ⁿ over 𝔽 are defined as follows. For any
u = (x1, x2, ..., xn) ∈ 𝔽ⁿ and v = (y1, y2, ..., yn) ∈ 𝔽ⁿ,
u + v = (x1 + y1, x2 + y2, ..., xn + yn) ∈ 𝔽ⁿ,
and for any α ∈ 𝔽,
αu = α·(x1, x2, ..., xn) = (αx1, αx2, ..., αxn) ∈ 𝔽ⁿ.
Thus 𝔽ⁿ satisfies the closure axioms of vector addition and scalar multiplication; the other axioms of a vector space can be verified easily.
Hence 𝔽ⁿ is a vector space over 𝔽.
Note: ℝⁿ and ℂⁿ are vector spaces over ℝ and ℂ respectively. These are particular cases of Example 3.

EXAMPLE 4: Let M_{m×n}(𝔽) be the set of m × n matrices whose entries are from 𝔽. Then M_{m×n}(𝔽) is a vector space over 𝔽.
SOLUTION: Let
M_{m×n}(𝔽) = {A = [aij] : i = 1, 2, ..., m; j = 1, 2, ..., n and aij ∈ 𝔽}
be the set of matrices of size m × n over the field 𝔽. The vector addition and scalar multiplication on M_{m×n}(𝔽) over 𝔽 are defined as follows. For any
A = [aij]_{m×n} ∈ M_{m×n}(𝔽) and B = [bij]_{m×n} ∈ M_{m×n}(𝔽),
A + B = [aij + bij]_{m×n}.
Since aij ∈ 𝔽 and bij ∈ 𝔽, we have aij + bij = cij ∈ 𝔽 for i = 1, 2, ..., m, j = 1, 2, ..., n. This implies
A + B = [cij]_{m×n} ∈ M_{m×n}(𝔽),
and for α ∈ 𝔽,
αA = [αaij]_{m×n} ∈ M_{m×n}(𝔽).
Hence M_{m×n}(𝔽) satisfies the closure axioms of vector addition and scalar multiplication. The other axioms of a vector space are discussed as follows:
• M_{m×n}(𝔽) satisfies the associative law under matrix addition, i.e., A + (B + C) = (A + B) + C ∀ A, B, C ∈ M_{m×n}(𝔽).
• 0 = [0]_{m×n}, i.e., the zero matrix of order m × n, is the additive identity of M_{m×n}(𝔽).
• For any A = [aij]_{m×n} ∈ M_{m×n}(𝔽), −A = [−aij]_{m×n} ∈ M_{m×n}(𝔽) is the additive inverse of A.
• For any A = [aij]_{m×n} and B = [bij]_{m×n} in M_{m×n}(𝔽),
A + B = [aij + bij]_{m×n} = [bij + aij]_{m×n} = B + A.
• For any α ∈ 𝔽 and A = [aij]_{m×n}, B = [bij]_{m×n} ∈ M_{m×n}(𝔽),
α·(A + B) = [α(aij + bij)]_{m×n} = [αaij + αbij]_{m×n} = α·A + α·B.
• For the multiplicative identity 1 of 𝔽 and any A ∈ M_{m×n}(𝔽), 1·A = A.
Hence M_{m×n}(𝔽) is a vector space over 𝔽. In particular, if 𝔽 = ℝ or ℂ, M_{m×n}(ℝ) and M_{m×n}(ℂ) are vector spaces over ℝ and ℂ respectively. Further, if m = n, M_n(ℝ) and M_n(ℂ) are also vector spaces.

EXAMPLE 5: Let P_n[x] be the set of polynomials of degree at most n, where n is a fixed non-negative integer. Then P_n[x] is a vector space over a field 𝔽.
SOLUTION: Let P_n[x] be the set of polynomials with coefficients from 𝔽 and degree at most n, where n is fixed. The vector addition and scalar multiplication are defined for p, q ∈ P_n[x] and a scalar α ∈ 𝔽 by
(p + q)(x) = p(x) + q(x),
(α·p)(x) = αp(x).
Let
p(x) = a0 + a1x + a2x² + ... + amx^m, m ≤ n,
q(x) = b0 + b1x + b2x² + ... + bmx^m,
where all coefficients a0, a1, ..., am, b0, b1, ..., bm ∈ 𝔽. Then
p(x) + q(x) = (a0 + b0) + (a1 + b1)x + ... + (am + bm)x^m.
Suppose ci = ai + bi for i = 0, 1, 2, ..., m; then ci ∈ 𝔽, and
p(x) + q(x) = c0 + c1x + ... + cmx^m ∈ P_n[x].
Also
αp(x) = α·[a0 + a1x + a2x² + ... + amx^m] = αa0 + αa1x + ... + αamx^m.
Since α and a0, a1, ..., am are elements of 𝔽, suppose di = αai for i = 0, 1, 2, ..., m; then d0, d1, ..., dm ∈ 𝔽 and αp(x) = d0 + d1x + ... + dmx^m ∈ P_n[x].
Whenever we consider P_n[x], it is understood that the zero polynomial is also included. So p(x) + 0 = p(x); this implies that the zero polynomial is the additive identity of P_n[x]. For α = −1 ∈ 𝔽, α·p(x) = −p(x) ∈ P_n[x], which is the additive inverse of p(x). Further, p(x) + q(x) is equal to q(x) + p(x). Therefore P_n[x] satisfies the first six axioms of a vector space, and the other axioms, those of scalar multiplication, are very easy to prove. Hence P_n[x] is a vector space over the field 𝔽.
Note: The set of polynomials of degree exactly n (or greater) is not a vector space, because the closure axioms are not satisfied. For example, p(x) = 1 + x + xⁿ is a polynomial of degree n and q(x) = 1 − x − xⁿ is also a polynomial of degree n, but p(x) + q(x) = 2, whose degree is zero, not n. As we know, the sum of two polynomials of the same degree need not have the same degree.

EXAMPLE 6: Let C[a, b] be the set of all real-valued continuous functions on a given interval [a, b]. Then C[a, b] is a vector space over the field ℝ under usual addition and scalar multiplication.
SOLUTION: Let C[a, b] be the set of real-valued continuous functions defined on an interval [a, b]. The addition and scalar multiplication on C[a, b] are defined as usual by
(f + g)(x) = f(x) + g(x) for all f(x), g(x) ∈ C[a, b],
(α·f)(x) = αf(x) ∀ α ∈ ℝ and ∀ f(x) ∈ C[a, b].
Since the sum of real-valued continuous functions is again a real-valued continuous function, and any real scalar multiple of a real-valued continuous function is a real-valued continuous function, C[a, b] satisfies the closure axioms of a vector space.
The other axioms are very easy to verify:
1. Associative law: For any f(x), g(x), h(x) ∈ C[a, b],
(f(x) + g(x)) + h(x) = f(x) + (g(x) + h(x)).
2. Existence of identity: For any function f(x) ∈ C[a, b],
f(x) + 0 = f(x)
⇒ the zero function is the additive identity, and it belongs to C[a, b].
3. Existence of inverse: For any f(x) ∈ C[a, b] there exists −f(x) ∈ C[a, b] such that
f(x) + (−f(x)) = 0
⇒ −f(x) is the additive inverse of f(x).
4. Commutative law: For any f(x), g(x) ∈ C[a, b],
f(x) + g(x) = g(x) + f(x) ∀ x ∈ [a, b].
Page 107 :
99, , VECTOR SPACE, , Similarly, other axioms of scalar multiplication can be easily verified by readers., Hence C [a, b] is a vector space one ., EXAMPLE 7: Let R+ be a set of all positive real numbers. Then show that R+ is a vector, space over under the vector addition u + v = uv, ≤ u, v ∈ + and scalar multiplication, α · u = uα, ≤ u ∈ + and ≤ α ∈ ., SOLUTION: Let + be a set of positive real numbers. Then show that + is a vector space over, under the following vector addition and scalar multiplication:, (i) u + v = uv ≤ u, v ∈ +, (ii) α · u = uα ≤ α ∈ , ≤ u ∈ +., To prove + is a vector space over under the above operation its required to satisfy and, axioms of a vector space., 1. For any u, v ∈ +,, u + v = uv ∈ +, 2. For any α ∈ , and u ∈ +,, α · u = uα · ∈ +,, 3. For any u, v, w ∈ +,, (u + v) + w = (uv) + w = uvw, = u + (vw) = u + (v + w), +, 4. For any u ∈ , 1 + u = 1u = u, ⇒ ‘1’ is the identity element of +., 5. For any u ∈ +,, ⇒, , u+, , 1, u, , = u⋅, , 1, 1, = 1, ≤, ∈ +, u, u, , 1, is the inverse of u ∈ +., u, , 6. For any u, v ∈ +, u + v = uv = vu = v + u., 7. For all u, v ∈ +, and α ∈ ,, α · (u + v) = α · (uv), = (uv)α = uαvα, = uα + v α, = α · u + α · v, 8. For all α, β ∈ , and for each u ∈ +,, (α + β) · u = uα+β = uα · uβ = uα + uβ, = α· u+β · u, 9. For all α, β ∈ , and for each u ∈ +,, α (β · u) = α · (uβ) = (uβ)α, = uαβ = (αβ) · u, +, 10. For each u ∈ , and 1 ∈
1 · u = u¹ = u.
Hence ℝ⁺ is a vector space over ℝ.

EXAMPLE 8: Let H be the collection of all 2 × 2 matrices with complex entries of the form
[[a, b], [−b̄, ā]], a, b ∈ ℂ.
Show that H is a vector space over ℝ under the usual matrix addition and scalar multiplication. Is H also a vector space over ℂ?
SOLUTION: Let H be the set of all 2 × 2 complex matrices of the above form. Let
A = [[a1, b1], [−b̄1, ā1]], B = [[a2, b2], [−b̄2, ā2]] ∈ H,
and let α be a scalar, α ∈ ℝ. Under the usual matrix addition,
A + B = [[a1 + a2, b1 + b2], [−(b̄1 + b̄2), ā1 + ā2]].
Since b̄1 + b̄2 is the conjugate of b1 + b2 and ā1 + ā2 is the conjugate of a1 + a2, the sum again has the required form
⇒ A + B ∈ H.
Scalar multiplication is defined as
α · A = [[αa1, αb1], [−αb̄1, αā1]].
Since α is real, αb̄1 is the conjugate of αb1 and αā1 is the conjugate of αa1
⇒ α · A ∈ H.
Hence H is closed under the usual matrix addition and real scalar multiplication; the remaining axioms of a vector space can easily be verified by the reader.
But H is not a vector space over ℂ, because H is not closed under scalar multiplication when the scalar is a complex number. If α = x + iy ∈ ℂ,
then
α · A = [[(x + iy)a, (x + iy)b], [−(x + iy)b̄, (x + iy)ā]],
whereas membership in H requires the (2, 1) entry to be the negative conjugate of the (1, 2) entry, namely −(x − iy)b̄, and the (2, 2) entry to be the conjugate of the (1, 1) entry, namely (x − iy)ā. These differ whenever y ≠ 0 and a, b ≠ 0. Hence H is not a vector space over ℂ.

EXAMPLE 9: Show that ℝ is a vector space over ℝ under the vector addition u ⊕ v = u + v − a and scalar multiplication α ⊙ u = αu + a(1 − α), where a is a fixed real number.
SOLUTION:
Axiom 1 (closure under vector addition): Let u, v ∈ ℝ; then for the fixed number a ∈ ℝ, u + v − a ∈ ℝ
⇒ u ⊕ v = u + v − a ∈ ℝ.
Axiom 2 (closure under scalar multiplication): Let u ∈ ℝ and α ∈ ℝ; then for the fixed real number a ∈ ℝ,
αu + a(1 − α) ∈ ℝ ⇒ α ⊙ u ∈ ℝ.
Axiom 3 (associative law): Let u, v, w ∈ ℝ; then
(u ⊕ v) ⊕ w = (u + v − a) ⊕ w = (u + v − a) + w − a = u + (v + w − a) − a = u ⊕ (v ⊕ w).
Axiom 4 (existence of additive identity): Let u ∈ ℝ and let a be the fixed real number. Then e is called the identity if
u ⊕ e = u, i.e. u + e − a = u ⇒ e = a.
Axiom 5 (existence of inverse): Let u ∈ ℝ; then v is said to be the inverse of u if
u ⊕ v = e ⇒ u + v − a = a ⇒ u + v = 2a ⇒ v = 2a − u ∈ ℝ.
Axiom 6 (commutative law): Let u, v ∈ ℝ; then for the fixed real number a ∈ ℝ,
u ⊕ v = u + v − a = v + u − a = v ⊕ u.
Axiom 7 (distributive law over vector addition): Let u, v ∈ ℝ and α ∈ ℝ.
α ⊙ (u ⊕ v) = α(u + v − a) + a(1 − α) = αu + αv − αa + a − αa = αu + αv + a − 2αa,
and
(α ⊙ u) ⊕ (α ⊙ v) = [αu + a(1 − α)] + [αv + a(1 − α)] − a = αu + αv + a − 2αa
= α ⊙ (u ⊕ v).
Axiom 8 (distributive law over scalar addition): Let α, β ∈ ℝ and u ∈ ℝ.
(α + β) ⊙ u = (α + β)u + a[1 − (α + β)]
= [αu + a(1 − α)] + [βu + a(1 − β)] − a
= (α ⊙ u) ⊕ (β ⊙ u).
Axiom 9 (associative law for scalar multiplication): Let α, β ∈ ℝ and u ∈ ℝ. Then
α ⊙ (β ⊙ u) = α[βu + a(1 − β)] + a(1 − α) = αβu + αa − αβa + a − αa = (αβ)u + a(1 − αβ) = (αβ) ⊙ u.
Axiom 10 (existence of multiplicative identity): Let u ∈ ℝ. Then
1 ⊙ u = 1·u + a(1 − 1) = u + 0 = u.
Hence ℝ is a vector space over ℝ under the given vector addition and scalar multiplication.

3.3 SUBSPACE OF A VECTOR SPACE
Let V be a vector space over a field F and let W be a non-empty subset of V. Then W is called a subspace of the vector space V if and only if W is itself a vector space under the same vector addition and scalar multiplication as V.
The next theorem provides a simple criterion for determining subspaces.
Theorem 3.1: Let W be a non-empty subset of a vector space V. Then W is a subspace of V if and only if W satisfies the closure axioms.
Proof: If W is a subspace of the vector space V, then it satisfies all the axioms of a vector space; hence W satisfies the closure axioms. Conversely, if W satisfies the closure axioms, we show that W is a subspace of V, i.e., that W satisfies all the remaining axioms of a vector space.
The associative and commutative laws of vector addition and all the axioms of scalar multiplication hold automatically, because these axioms hold for every element of V. It remains to verify the axioms of existence of identity and inverse. By the closure axioms.
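Theorem 3.1 suggests a quick numerical closure test. The sketch below (with the hypothetical helper `looks_like_subspace`) draws random combinations a·u + b·v of points of W and checks they stay in W; random testing can refute the subspace property but never prove it.

```python
import numpy as np

rng = np.random.default_rng(1)

def looks_like_subspace(points, in_W, trials=50):
    # Closure check of Theorem 3.1: a*u + b*v must stay in W.
    for _ in range(trials):
        i, j = rng.integers(len(points), size=2)
        a, b = rng.normal(size=2)
        if not in_W(a * points[i] + b * points[j]):
            return False
    return True

# The line {(x, y, z) : x = 2y, y = 3z} passes the closure test ...
W = [t * np.array([6.0, 3.0, 1.0]) for t in (-1.0, 0.5, 2.0)]
assert looks_like_subspace(W, lambda p: np.isclose(p[0], 2 * p[1])
                           and np.isclose(p[1], 3 * p[2]))

# ... while the parabola {(x, y) : y = x^2} fails it.
P = [np.array([t, t * t]) for t in (1.0, 2.0, -1.5)]
assert not looks_like_subspace(P, lambda p: np.isclose(p[1], p[0] ** 2))
```

A passing test is only evidence; the proof still has to verify the closure axioms symbolically, as in the theorem.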
M2(ℝ) = W1 + W3, because
[[a, b], [c, d]] = [[a, b], [c, 0]] + [[0, 0], [0, d]],
and W1 ∩ W3 = {0}, because if [[a, b], [c, d]] ∈ W1 ∩ W3, then [[a, b], [c, d]] ∈ W1 gives d = 0, and [[a, b], [c, d]] ∈ W3 gives a = 0, b = 0, c = 0.
⇒ M2(ℝ) = W1 ⊕ W3.

EXERCISE 3.1
1. Determine which of the following subsets W of a vector space V over ℝ are subspaces of V.
(i) W = {(x, y) ∈ ℝ² : y = αx, where α ∈ ℝ}, V = ℝ²
(ii) W = {(x, y) ∈ ℝ² : x² + y² = 0}, V = ℝ²
(iii) W = {(x, y, z) ∈ ℝ³ : αx + βy + γz = 0, α, β, γ ∈ ℝ}, V = ℝ³
(iv) W = {(x, y, z) ∈ ℝ³ : y = αx, where α ∈ ℝ}, V = ℝ³
(v) W = {(x, y, z) ∈ ℝ³ : x/y = 2}, V = ℝ³
(vi) W = {(x, y, z) ∈ ℝ³ : x = 2y and y = 3z}, V = ℝ³
(vii) W = {(x, y, z) ∈ ℝ³ : x, y are integers}, V = ℝ³
(viii) W = {(x, y) ∈ ℝ² : y = x²}, V = ℝ²
(ix) W = {(x, y) ∈ ℝ² : y = sin x}, V = ℝ²
(x) W = {(x, y) ∈ ℝ² : y = log x}, V = ℝ²
(xi) W = {(x, y) ∈ ℝ² : y = |x|}, V = ℝ²
(xii) W = {(x, y, z) ∈ ℝ³ : y = 2x and z = 3y}, V = ℝ³
(xiii) W = {(x, y, z) ∈ ℝ³ : y = 2x or z = 3y}, V = ℝ³
(xiv) W = {(x, y) ∈ ℝ² : y = x³}, V = ℝ²
(xv) W = {(x, y) ∈ ℝ² : xy = 0}, V = ℝ²
2. Determine which of the following subsets of the vector space Mn(F), where F can be ℝ or ℂ, form a subspace of Mn(F).
(i) W = {A ∈ Mn(F) : Aᵀ = A, i.e., A = [aij] with aij = aji}
(ii) W = {A ∈ Mn(F) : Aᵀ = −A, i.e., A = [aij] with aij = −aji}, F = ℝ
(iii) W = {A ∈ Mn(ℝ) : A is lower triangular, i.e., A = [aij] with aij = 0 ∀ i < j}
(iv) W = {A ∈ Mn(ℝ) : A is upper triangular, i.e., A = [aij] with aij = 0 ∀ i > j}
(v) W = {A ∈ Mn(ℝ) : A is a scalar matrix, i.e., A = [aij] with aij = 0 for i ≠ j and aii = α, α ∈ ℝ}
(vi) W = {A = [aij] ∈ Mn(ℝ) : A is a diagonal matrix, i.e., aij = 0 ∀ i ≠ j}
(vii) W = {A = [aij] ∈ Mn(ℝ) : Trace(A) = 0, i.e., Σ_{i=1}^{n} aii = 0}
(viii) W = {A = [aij] ∈ Mn(ℝ) : Trace(A) = 1, i.e., Σ_{i=1}^{n} aii = 1}
(ix) W = {A = [aij] ∈ Mn(ℝ) : a11 = 0}
(x) W = {A = [aij] ∈ Mn(ℝ) : a11 = 1}
(xi) W = {A = [aij] ∈ Mn(ℝ) : AB = BA for a fixed matrix B ∈ Mn(ℝ)}
(xii) W = {A = [aij] ∈ Mn(ℝ) : Σ_{i=1}^{n} aij = 0, j = 1, 2, …, n, i.e., the sum of each column is zero}
(xiii) W = {A = [aij] ∈ Mn(ℝ) : Σ_{j=1}^{n} aij = 0, i = 1, 2, …, n, i.e., the sum of each row is zero}
3. Consider Pn(x) and P(x) as vector spaces over ℝ. Determine which of the following subsets W of Pn(x) or P(x) form a subspace.
(i) W = {p(x) ∈ Pn(x) : p(x) = αx + β, α, β ∈ ℝ}
(ii) W = {p(x) ∈ Pn(x) : p(x) = αx², α ∈ ℝ}
(iii) W = {p(x) ∈ Pn(x) : degree[p(x)] = n}
(iv) W = {p(x) ∈ P(x) : degree[p(x)] ≤ 5}
(v) W = {p(x) ∈ P(x) : p(0) = p(1)}
(vi) W = {p(x) ∈ P(x) : p(0) = 2p(1)}
(vii) W = {p(x) ∈ P(x) : degree[p(x)] ≥ 3}
(viii) W = {p(x) ∈ P(x) : p(x) has integer coefficients}
(ix) W = {p(x) ∈ P(x) : p(−x) = p(x)}
(x) W = {p(x) ∈ P(x) : p(−x) = −p(x)}
(xi) W = {p(x) ∈ P(x) : p(0) = 1}
(xii) W = {p(x) ∈ P(x) : p(x) = α + x², α ∈ ℝ}
(xiii) W = {p(x) ∈ P5(x) : p(0) = 0 and p′(0) = 0}, where p′(x) = dp(x)/dx
(xiv) W = {p(x) ∈ Pn(x) : the constant term of p(x) is 1}
(xv) W = {p(x) ∈ Pn(x) : p(x) = a0, a0 ∈ ℝ}
4. Determine which of the following are subspaces of the vector space fⁿ[a, b] = {real-valued functions on [a, b] differentiable up to nth order} over ℝ.
(i) W = {f ∈ f([a, b]) : f(x0) = 0 for some x0 ∈ [a, b]}
(ii) W = {f ∈ f([a, b]) : f(x) = 0 for finitely many values of x in [a, b]}
(iii) W = {f ∈ f([a, b]) : f(0) = 1}
(iv) W = {f ∈ f([a, b]) : f(x) attains a relative minimum at x0 ∈ [a, b]}
(v) W = {f ∈ f([a, b]) : f(x) attains a relative extremum at x0 ∈ [a, b]}
(vi) W = {f ∈ f([a, b]) : f(0) = f(1)}
(vii) W = {f ∈ f⁴([a, b]) : d⁴f/dx⁴ − d²f/dx² + f(x) = 0}
(viii) W = {f ∈ f¹([a, b]) : df/dx ≥ 0}
(ix) W = {f ∈ f¹([a, b]) : df/dx ≤ 0}
(x) W = {f ∈ f([a, b]) : f(x) is discontinuous at x = x0 ∈ [a, b]}
5. Let V be the vector space of all functions from ℝ into ℝ. Let W1 and W2 be the subsets of V consisting of the even and the odd functions respectively. Show that V = W1 ⊕ W2.
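The decomposition behind Exercise 5 is explicit: f(x) = ½(f(x) + f(−x)) + ½(f(x) − f(−x)), an even part plus an odd part. A minimal numerical sketch (`even_part` and `odd_part` are illustrative helper names):

```python
import math

def even_part(f):
    # Even component: (f(x) + f(-x)) / 2
    return lambda x: 0.5 * (f(x) + f(-x))

def odd_part(f):
    # Odd component: (f(x) - f(-x)) / 2
    return lambda x: 0.5 * (f(x) - f(-x))

f = math.exp                      # classic case: exp = cosh + sinh
fe, fo = even_part(f), odd_part(f)

for x in (-1.3, 0.0, 0.7):
    assert math.isclose(fe(x) + fo(x), f(x))   # f = even + odd
    assert math.isclose(fe(-x), fe(x))         # the even part is even
    assert math.isclose(fo(-x), -fo(x))        # the odd part is odd

assert math.isclose(fe(1.0), math.cosh(1.0))
assert math.isclose(fo(1.0), math.sinh(1.0))
```

Uniqueness of the decomposition is what makes the sum in Exercise 5 direct: the only function that is both even and odd is the zero function.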
6. Let Mn(ℝ) be a vector space and let
W1 = {A ∈ Mn(ℝ) : A is a symmetric matrix},
W2 = {A ∈ Mn(ℝ) : A is a skew-symmetric matrix}
be two subsets of Mn(ℝ). Verify that W1 and W2 are subspaces of Mn(ℝ) such that Mn(ℝ) = W1 ⊕ W2.
7. True or False:
(i) ℂ over ℝ is a vector space.
(ii) ℝ over ℂ is a vector space.
(iii) ℝ over ℚ is a vector space.
(iv) S = {set of increasing functions from [0, 1] to ℝ} over ℝ is a vector space.
(v) S = {⟨an⟩ : an → a0, a0, an ∈ ℝ, ∀ n ∈ ℕ} is a vector space over ℝ.
(vi) A non-zero vector space V over an infinite field has infinitely many distinct subspaces.
(vii) S = {A ∈ Mn(ℝ) : |A| = 1} is a subspace of Mn(ℝ).
(viii) S = {A ∈ Mn(ℝ) : |A| = 0} is a subspace of Mn(ℝ).
(ix) ℝ² is a subspace of ℝ³.
(x) ℝ² over ℝ has infinitely many subspaces.
(xi) ℝ over ℚ has infinitely many subspaces.
(xii) S = {a + b√2 : a, b are integers} is a subspace of ℝ over ℝ.
(xiii) S = {a + ib : a, b ∈ ℤ} is a subspace of the vector space ℂ over ℝ.
(xiv) The sum of the subspaces W1 = {(0, y) ∈ ℝ²} and W2 = {(x, y) ∈ ℝ² : y = x} of ℝ² over ℝ is a subspace of ℝ².
(xv) The union of the two subspaces W1 = {(x, y) ∈ ℝ² : y = x} and W2 = {(x, y) ∈ ℝ² : y = 2x} of ℝ² over ℝ is not a subspace of ℝ².
(xvi) ℚ² over ℤ is a vector space.
(xvii) The union of the x-axis and the y-axis is ℝ².

3.6 LINEAR SPAN
3.6.1 Linear Combination
Let V be a vector space over a field F and let v1, v2, …, vk be k vectors of V. Then for scalars α1, α2, …, αk, the sum Σ_{i=1}^{k} αi vi, i.e., α1v1 + α2v2 + ⋯ + αkvk, is called a linear combination of the vectors v1, v2, …, vk.
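A linear combination is simply a weighted sum of vectors; a minimal numerical illustration (the vectors and weights are arbitrary choices):

```python
import numpy as np

v1, v2, v3 = np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
alphas = [2.0, -1.0, 3.0]

# alpha1*v1 + alpha2*v2 + alpha3*v3 = 2(1,0) - 1(0,1) + 3(1,1)
combo = sum(a * v for a, v in zip(alphas, [v1, v2, v3]))
assert np.allclose(combo, [5.0, 2.0])
```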
3.6.2 Linear Span
Let V be a vector space over a field F and let S = {v1, v2, …, vk} be a set of k vectors of V. Then the linear span of S, denoted L(S) or [S], is defined as
[S] = {set of all finite linear combinations of v1, v2, …, vk}
= {Σ_{i=1}^{k} αi vi : αi ∈ F, ∀ i = 1, 2, …, k}.
Since vi ∈ V and αi ∈ F, we have Σ_{i=1}^{k} αi vi ∈ V
⇒ [S] ⊆ V.
Theorem 3.5: Let V be a vector space over a field F and let S be a subset of V containing k vectors of V. Then [S] is a subspace of V.
Proof: Let u and v belong to [S]; then
u = Σ_{i=1}^{k} αi vi, αi ∈ F, ∀ i = 1, 2, …, k,
v = Σ_{i=1}^{k} βi vi, βi ∈ F, ∀ i = 1, 2, …, k,
and let α, β ∈ F. To prove [S] is a subspace of V, it suffices to show that αu + βv ∈ [S]:
αu + βv = α(Σ_{i=1}^{k} αi vi) + β(Σ_{i=1}^{k} βi vi) = Σ_{i=1}^{k} (ααi + ββi) vi = Σ_{i=1}^{k} γi vi,
where γi = ααi + ββi ∈ F.
⇒ αu + βv is also a linear combination of v1, …, vk
⇒ αu + βv ∈ [S]
⇒ [S] is a subspace of V.
EXAMPLE 1: Let V = ℝ² be a vector space over ℝ and let S = {(1, 1)} be a set of vectors of V. Find [S] and explain it geometrically.
SOLUTION: [S] = {α · v : α ∈ ℝ} = {(α, α) : α ∈ ℝ}.
Geometrically, this is the straight line y = x passing through (0, 0). See Fig. 2.
Fig. 2. The span [S] = {α(1, 1) : α ∈ ℝ}: the multiples …, −3v, −2v, −v, 0v = (0, 0), v = (1, 1), 2v, 3v, … all lie on the line y = x. For convenience the figure plots α = −3, −2, −1, 0, 1, 2, 3, but α may be any scalar from ℝ.

EXAMPLE 2: Let V = ℝ² over ℝ and let S = {(1, 0), (0, 1)} be a set of vectors of V. Find [S] and explain it geometrically.
SOLUTION:
[S] = {α(1, 0) + β(0, 1) : α, β ∈ ℝ}
= {(α, 0) + (0, β) : α, β ∈ ℝ}
= {(α, β) : α, β ∈ ℝ} = ℝ².
Geometrically, α(1, 0) + β(0, 1) = (α, 0) + (0, β) is simply vector addition: first take all multiples of (1, 0) and of (0, 1), then add these scalar multiples (see Fig. 3); every such sum is a vector in ℝ², so [S] = ℝ².
This implies that every vector in ℝ² can be expressed as a linear combination of (1, 0) and (0, 1).
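Examples 1 and 2 can be mirrored numerically; a small sketch:

```python
import numpy as np

# Example 1: [S] for S = {(1, 1)} is the line y = x.
v = np.array([1.0, 1.0])
for alpha in (-3, -1, 0, 2):
    p = alpha * v
    assert p[0] == p[1]          # every multiple of (1, 1) satisfies y = x

# Example 2: [S] for S = {(1, 0), (0, 1)} is all of R^2.
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
x, y = 3.7, -2.2
assert np.allclose(x * e1 + y * e2, [x, y])   # (x, y) = x(1,0) + y(0,1)
```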
[S1] = {α(1, 1) + β(1, −1) : α, β ∈ ℝ}
= {(α + β, α − β) : α, β ∈ ℝ} = ℝ².
Geometrically, see Fig. 4. In the figure we form the sum of the two vectors α(1, 1) and β(1, −1) for some values of α, β ∈ ℝ: first we draw the scalar multiples of (1, 1) and of (1, −1), then we add two such vectors by the parallelogram rule, taking lines parallel to each vector; their intersection point gives the resultant vector from the origin. This shows that [S1] covers the whole space V = ℝ² as α, β range over ℝ.

EXAMPLE 4: In V = ℝ² over ℝ, show that (2, 3) ∈ [(1, 1), (1, −2)] but (2, 3) ∉ [(1, 1), (2, 2)].
SOLUTION: To show that (2, 3) belongs to [(1, 1), (1, −2)], we express (2, 3) as a linear combination of (1, 1) and (1, −2):
(2, 3) = α(1, 1) + β(1, −2)
⇒ α + β = 2, α − 2β = 3
⇒ 3β = −1, β = −1/3, α = 7/3
⇒ (2, 3) = (7/3)(1, 1) + (−1/3)(1, −2)
⇒ (2, 3) ∈ [(1, 1), (1, −2)].
But (2, 3) = α(1, 1) + β(2, 2) gives
α + 2β = 2, α + 2β = 3,
an absurd pair, so this system of equations has no solution
⇒ (2, 3) cannot be expressed as a linear combination of (1, 1) and (2, 2)
⇒ (2, 3) ∉ [(1, 1), (2, 2)].

EXAMPLE 5: Let V be a vector space over a field F and let S be a set of vectors of V. For any two vectors u and v of V, if u ∈ [S ∪ {v}] but u ∉ [S], then v ∈ [S ∪ {u}].
SOLUTION: Let S = {v1, v2, …, vk} be a set of vectors of a vector space V over a field F, and let u, v ∈ V be such that u ∈ [S ∪ {v}] but u ∉ [S]. We show that v ∈ [S ∪ {u}].
Since u ∈ [S ∪ {v}] = [v1, v2, …, vk, v], there are scalars α1, α2, …, αk, α such that
u = α1v1 + α2v2 + ⋯ + αkvk + αv.   …(1)
Since u ∉ [S], u cannot be expressed as a linear combination of the vectors of S, i.e.,
u ≠ α1v1 + α2v2 + ⋯ + αkvk.
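The two systems of Example 4 can be solved by least squares: the vector belongs to the span exactly when the residual vanishes. A sketch (`in_span` is a hypothetical helper; the test is numerical rather than symbolic):

```python
import numpy as np

def in_span(v, *gens):
    # Least-squares test: v lies in span(gens) iff the residual vanishes.
    A = np.column_stack(gens)
    coeffs, *_ = np.linalg.lstsq(A, v, rcond=None)
    return np.allclose(A @ coeffs, v), coeffs

ok, c = in_span(np.array([2.0, 3.0]),
                np.array([1.0, 1.0]), np.array([1.0, -2.0]))
assert ok and np.allclose(c, [7/3, -1/3])   # (2,3) = (7/3)(1,1) - (1/3)(1,-2)

ok, _ = in_span(np.array([2.0, 3.0]),
                np.array([1.0, 1.0]), np.array([2.0, 2.0]))
assert not ok                               # (2,3) not in [(1,1), (2,2)]
```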
2. Let S = {(1, 2, −1), (1, −1, 1), (−4, 5, 3)} be a subset of ℝ³. Determine which of the following vectors are in [S].
(i) (2, −1, 1) (ii) (0, 0, 0) (iii) (1, 1, 1) (iv) (4, 5, −2) (v) (1, 0, 1) (vi) (5, 7, 4)
3. If S is a non-empty subset of a vector space V over a field F, show that [[S]] = [S].
4. Is ℝ² the span of the vectors (1, 1) and (0, 1)?
5. True or False?
(i) The span of {1, x, x², x³} is P3.
(ii) The span of {x, x², x³} is P3.
(iii) The span of the x-axis and the yz-plane in V3 is V3.
(iv) The span of the xy-plane and the yz-plane in V3 is V3.
(v) The span of the xy-plane and the xz-plane in V3 is V3.
(vi) The span of {1, i} is ℂ over ℝ.
(vii) The span of {1} is ℝ over ℝ.

3.7 LINEARLY DEPENDENT AND INDEPENDENT VECTORS
3.7.1 Linearly Dependent (L.D.)
Let V be a vector space over a field F. A set S = {v1, v2, …, vn} of vectors of V is said to be linearly dependent if there exist scalars α1, α2, …, αn, at least one of them non-zero, such that
α1v1 + α2v2 + ⋯ + αnvn = 0.
Such a linear combination is called a non-trivial linear combination. In other words, a set S = {v1, v2, …, vn} is said to be L.D. if there exists a non-trivial linear combination of the vectors v1, v2, …, vn equal to zero.
EXAMPLE 1: Prove that the vectors (1, 1, 0), (0, 1, −1) and (1, 2, −1) are L.D.
SOLUTION: Let α, β, γ be scalars such that
α(1, 1, 0) + β(0, 1, −1) + γ(1, 2, −1) = (0, 0, 0)
⇒ α + 0β + γ = 0, α + β + 2γ = 0, 0α − β − γ = 0   …(1)
i.e., [[1, 0, 1], [1, 1, 2], [0, −1, −1]] (α, β, γ)ᵀ = (0, 0, 0)ᵀ.
If this system has a non-trivial solution then the vectors are L.D. The system AX = 0 has a non-trivial solution if and only if ρ(A) < number of unknowns, or, when A is a square matrix, if and only if |A| = 0.
Here |A| = det[[1, 0, 1], [1, 1, 2], [0, −1, −1]] = 0
⇒ system (1) has a non-trivial solution.
This implies (1, 1, 0), (0, 1, −1) and (1, 2, −1) are L.D.
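The determinant criterion used in Example 1 is easy to verify numerically:

```python
import numpy as np

# The vectors as the columns of A; they are L.D. iff |A| = 0.
A = np.column_stack([[1, 1, 0], [0, 1, -1], [1, 2, -1]])
assert abs(np.linalg.det(A)) < 1e-12       # singular => linearly dependent

# An explicit non-trivial relation: (1,1,0) + (0,1,-1) - (1,2,-1) = 0.
assert np.array_equal(np.array([1, 1, 0]) + np.array([0, 1, -1]),
                      np.array([1, 2, -1]))
```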
b = −β/γ ∈ ℚ.
Squaring both sides of √3 = a + b√2,
3 = (a + b√2)² = a² + 2√2ab + 2b²
⇒ 2√2ab ∈ ℚ ⇒ ab = 0 ⇒ a = 0 or b = 0.
If a = 0, then 3 = 2b² ⇒ b = √(3/2) ∉ ℚ; if b = 0, then 3 = a² ⇒ a = √3 ∉ ℚ.
This is a contradiction. Hence {1, √2, √3} is L.I. in ℝ over ℚ.

3.7.3 Definition: Collinear
Two vectors v1 and v2 are collinear if one of them is a scalar multiple of the other. Geometrically, v1 and v2 are collinear if one of them lies on the line passing through the other vector.

3.7.4 Definition: Coplanar
Three vectors v1, v2 and v3 of a vector space V over a field F are said to be coplanar if one of them can be expressed as a linear combination of the other two vectors. Geometrically, v1, v2 and v3 are coplanar if one of them lies in the plane passing through the other two vectors.

EXAMPLE 3: Let V = ℝ² be a vector space over ℝ. Then
(i) v1 = (1, 1) and v2 = (−1, −1) are collinear vectors in V;
(ii) v1 = (1, 1) and v2 = (−1, 1) are not collinear in V.
SOLUTION: Since v2 = (−1, −1) = −1 · (1, 1) = −1 · v1, for −1 ∈ ℝ,
⇒ (−1, −1) and (1, 1) are collinear vectors in V.
Fig. 5: (a) Collinear vectors: (−1, −1) lies on the line passing through (1, 1). (b) Non-collinear vectors: (−1, 1) does not lie on the line passing through (1, 1), so (1, 1) and (−1, 1) are not collinear.
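Collinearity in the plane reduces to a 2 × 2 determinant test; `collinear` below is an illustrative helper (note that the test also returns true when one vector is zero):

```python
import numpy as np

def collinear(v1, v2):
    # Two plane vectors are collinear iff their 2x2 determinant vanishes.
    return np.isclose(np.linalg.det(np.column_stack([v1, v2])), 0.0)

assert collinear((1.0, 1.0), (-1.0, -1.0))      # (-1,-1) = -1 * (1,1)
assert not collinear((1.0, 1.0), (-1.0, 1.0))   # not a scalar multiple
```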
EXAMPLE 4: The vectors (1, 1, −1), (1, 0, 1) and (2, 1, 0) are coplanar vectors.
SOLUTION: Since (2, 1, 0) = (1, 1, −1) + (1, 0, 1),
⇒ (2, 1, 0) can be expressed as a linear combination of (1, 1, −1) and (1, 0, 1)
⇒ (2, 1, 0) lies in the plane passing through (1, 1, −1) and (1, 0, 1)
⇒ (1, 1, −1), (1, 0, 1) and (2, 1, 0) are coplanar vectors.
EXAMPLE 5: The vectors (1, −1), (1, 2) and (2, 1) are coplanar.
SOLUTION: Since (2, 1) = (1, −1) + (1, 2) = 1 · (1, −1) + 1 · (1, 2),
⇒ (2, 1) lies in the plane passing through (1, −1) and (1, 2).
Fig. 6: v3 = (2, 1) lies in the plane passing through v1 = (1, −1) and v2 = (1, 2).

Theorem 3.10: Let V be a vector space over a field F. Then:
(i) A set {v} of a vector of V is L.D. if and only if v = 0.
(ii) A set {v1, v2} of vectors of V is L.D. if and only if v1 and v2 are collinear.
(iii) A set {v1, v2, v3} of vectors of V is L.D. if and only if v1, v2 and v3 are coplanar.
(iv) A set {v1, v2, …, vn} of vectors of V is L.D. if and only if one of them can be expressed as a linear combination of the other (n − 1) vectors.
Proof: (i) Let {v} be L.D.; then for some scalar α ≠ 0,
α · v = 0 ⇒ v = 0.
Conversely, let v = 0; then for any α ≠ 0 in F, α · v = 0
⇒ this is a non-trivial linear combination of v
⇒ {v} is a L.D. set of vectors.
(ii) Let {v1, v2} be a L.D. set of vectors of V; we show that v1 and v2 are collinear. For this, let α, β ∈ F. Since {v1, v2} is a L.D. set,
EXAMPLE 10: If u, v and w are L.I. vectors of a vector space V over a field F, then show that u + v, v + w and w + u are L.I. vectors.
SOLUTION: Let α, β, γ ∈ F with
α(u + v) + β(v + w) + γ(w + u) = 0
⇒ (α + β)v + (β + γ)w + (α + γ)u = 0.
Since u, v and w are L.I., this implies that
α + β = 0,   …(i)
β + γ = 0,   …(ii)
α + γ = 0.   …(iii)
(ii) − (iii) ⇒ β − α = 0; then from (i), α + β = 0 implies that α = 0, β = 0 and γ = 0.
Hence {u + v, v + w, w + u} is a L.I. set of vectors of V.

EXERCISE 3.3
1. Which of the following subsets S of V3 are linearly independent (L.I.)?
(i) S = {(1, 2, 1), (1, −1, 0), (4, 3, 2)}
(ii) S = {(2, −3, 5), (1, 2, 3), (3, −1, 0)}
(iii) S = {(2, 4, 3), (1, 2, 0), (−1, 0, 1)}
(iv) S = {(−2, 0, 1), (1, 1, 0), (1, 1, 1), (2, 5, 7)}
(v) S = {(0, 0, 0), (1, 1, 0), (1, 0, 0)}
(vi) S = {(1, 1, 2), (−1, 0, 2)}
2. Determine which of the following subsets S of V4 are L.I.
(i) S = {(1, 0, 0, 0), (0, 1, 1, 0), (1, 1, 1, 0), (0, 0, 1, 0)}
(ii) S = {(1, −1, 2, 3), (2, 0, 1, −1), (1, 0, 1, 0), (0, 0, 1, 1)}
(iii) S = {(1, 1, 2, 0), (2, −1, 2, 1), (1, 0, 1, 0), (0, 1, 0, 1)}
(iv) S = {(1, 1, −1, 0), (−1, 0, 1, 2), (1, 2, 0, 3)}
(v) S = {(1, 2, 1, 2), (3, 0, −1, 2), (0, 1, 2, 1), (−1, 1, 1, −1), (2, 0, 3, 5)}
3. Determine which of the following subsets of the vector space f(0, ∞) over ℝ are linearly independent.
(i) S = {x, sin x, eˣ}
(ii) S = {cos 2x, cos²x, sin²x}
(iii) S = {cos x, sin x, cos (x + 1)}
(iv) S = {log x, log x², log x³, log x⁴}
(v) S = {log x, x log x}
(vi) S = {eˣ, xeˣ, x²eˣ, (x² + x + 1)eˣ}
(vii) S = {eˣ, e⁻ˣ, cosh x}
(viii) S = {e²ˣ, e⁻²ˣ, sinh x}
(ix) S = {eˣ, e⁻ˣ, sinh 2x}
(x) S = {eˣ, e²ˣ, e³ˣ}
4. Determine which of the following subsets S of the vector space ℝ[x] are L.D.
(i) S = {1, (x + 1), (x + 1)²}
(ii) S = {1, x + 1, x² + x + 1, x² − 1}
(iii) S = {x + x², x + x³, x + 1}
(iv) S = {1, x, x² + x + 1}
(v) S = {1 + x, 1 + x², 1 + x + x²}
5. If v1, v2, v3 and v4 are linearly independent vectors of a vector space V over a field F, then show that:
(i) v1 − v2, v2 − v3, v3 − v4, v4 − v1 are linearly dependent vectors of V;
(ii) v1 + v2, v2 + v3, v3 − v4, v4 − v1 are linearly dependent vectors of V.
6. If v1, v2, v3 are linearly dependent vectors and v2, v3, v4 are linearly independent vectors in a vector space V over a field F, then show that:
(i) v1 is a linear combination of v2 and v3, and
(ii) v4 is not a linear combination of v1, v2 and v3.
7. If v1, v2 and v3 are linearly independent in a vector space V over a field F, then determine which of the following subsets of V are linearly independent:
(i) v1, v1 + v2, v1 + v2 + v3
(ii) v1 − v2, v2 − v3, v3 − v1
8. Consider ℝ as a vector space over ℚ. Then show that:
(i) S = {1, √2, √5} is a linearly independent set of vectors of ℝ over ℚ;
(ii) S = {1, √3, √12} is a linearly dependent set of vectors of ℝ over ℚ.
9.
True or False:
(i) Every set of vectors containing a linearly independent set of vectors is linearly independent.
(ii) Every non-empty subset of a set of linearly dependent vectors is linearly dependent.
(iii) Every set containing the zero vector is linearly dependent.
(iv) The set {1, √2, √18} is linearly dependent in the vector space ℝ over ℚ.
(v) The set {1, π, π², π³, …} is a linearly independent set of vectors in ℝ over ℚ.
(vi) S = {1, √2} is a set of linearly dependent vectors in ℝ over ℝ.
(vii) S = {1, π, π²} is a set of linearly independent vectors in the vector space ℝ over ℚ.
(viii) S = {sin t, eᵗ, e⁻ᵗ} is a set of linearly dependent vectors in the vector space f[0, 1].
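For function sets like those in Question 3, linear dependence can be probed by sampling: stack the sampled values as columns and compute the matrix rank. A sketch (`numerically_independent` is an illustrative name; full sample rank certifies independence up to numerics, while a rank deficit only suggests dependence):

```python
import numpy as np

def numerically_independent(funcs, a=0.0, b=1.0, samples=50, tol=1e-8):
    # Sample each function on [a, b] and stack the values as columns.
    x = np.linspace(a, b, samples)
    M = np.column_stack([f(x) for f in funcs])
    # Full column rank => the functions are linearly independent.
    return np.linalg.matrix_rank(M, tol=tol) == len(funcs)

# {e^x, x e^x, x^2 e^x} is linearly independent ...
assert numerically_independent(
    [np.exp, lambda x: x * np.exp(x), lambda x: x**2 * np.exp(x)])

# ... but adding (x^2 + x + 1) e^x makes the set dependent,
# since (x^2 + x + 1) e^x = x^2 e^x + x e^x + e^x.
assert not numerically_independent(
    [np.exp, lambda x: x * np.exp(x), lambda x: x**2 * np.exp(x),
     lambda x: (x**2 + x + 1) * np.exp(x)])
```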
3.8 BASIS AND DIMENSION OF A VECTOR SPACE
In the previous sections we discussed the concepts of linear span and of linearly independent and dependent vectors in a vector space. In this section we discuss the basis of a vector space V and its dimension. A set S = {v1, v2, …, vk} spans a subspace of a vector space V if every vector in that subspace can be built from the vectors of S by scalar multiplication and vector addition. But if S spans a proper subspace of the vector space V, it does not span the whole space V. So the question arises: how many vectors are required to span the whole vector space? To answer this question, we introduce the concepts of basis and dimension.
We start this section with the definition of a 'basis' of a vector space.

3.8.1 Definition (Basis)
A set S of vectors of a vector space V over a field F is called a basis for V if it satisfies the following properties:
(i) S is a set of linearly independent vectors;
(ii) [S] = V, i.e., S is a generator of V.
In other words, a subset S of a vector space V is a basis of V if it is linearly independent and every vector of V can be expressed as a unique linear combination of vectors of S. It is clear that a basis of a vector space is a maximal set of L.I. vectors of V.

EXAMPLE 1: The set {(1, 0, 0), (0, 1, 0), (0, 0, 1)} forms a basis of V = ℝ³ over ℝ.
SOLUTION: Let α, β, γ ∈ ℝ with
α(1, 0, 0) + β(0, 1, 0) + γ(0, 0, 1) = (0, 0, 0)
⇒ (α, β, γ) = (0, 0, 0)
⇒ α = β = γ = 0
⇒ (1, 0, 0), (0, 1, 0), (0, 0, 1) are L.I. vectors.
Now to show [(1, 0, 0), (0, 1, 0), (0, 0, 1)] = ℝ³, let (x, y, z) ∈ ℝ³; then
(x, y, z) = x(1, 0, 0) + y(0, 1, 0) + z(0, 0, 1)
⇒ [(1, 0, 0), (0, 1, 0), (0, 0, 1)] = ℝ³.
These vectors are also called the standard basis of ℝ³.

EXAMPLE 2: The set S = {(1, 1, 0), (0, 1, 1), (1, 1, 1)} forms a basis of ℝ³ over ℝ.
SOLUTION: First we will show that S is a set of L.I.
vectors. For this, suppose α, β, γ ∈ ℝ with
α(1, 1, 0) + β(0, 1, 1) + γ(1, 1, 1) = 0
⇒ [[1, 0, 1], [1, 1, 1], [0, 1, 1]] (α, β, γ)ᵀ = (0, 0, 0)ᵀ,
i.e., AX = 0.
Since A is a square matrix: if |A| ≠ 0 then X = 0 is the only solution of this system, and if |A| = 0 then the system also has a solution X ≠ 0.
|A| = 1(1 − 1) − 0(1 − 0) + 1(1 − 0) = 1 ≠ 0
⇒ X = 0, i.e., (α, β, γ)ᵀ = (0, 0, 0)ᵀ
⇒ α = β = γ = 0
⇒ S is a set of L.I. vectors.
Now to show [S] = ℝ³, let (x, y, z) ∈ ℝ³ and let a, b, c be scalars with
(x, y, z) = a(1, 1, 0) + b(0, 1, 1) + c(1, 1, 1)
⇒ a + c = x,   …(i)
a + b + c = y,   …(ii)
b + c = z.   …(iii)
(ii) − (iii) ⇒ a = y − z. Substituting in (i), c = x − a = x − y + z. From (iii), b = z − c = y − x.
⇒ (x, y, z) = (y − z)(1, 1, 0) + (y − x)(0, 1, 1) + (x − y + z)(1, 1, 1)
⇒ every vector of ℝ³ can be uniquely expressed as a linear combination of vectors of S
⇒ [S] = ℝ³.
Hence S = {(1, 1, 0), (0, 1, 1), (1, 1, 1)} forms a basis for ℝ³.

Definition
The number of elements in any basis of a vector space is called the dimension of the vector space. If the number of elements in a basis of a vector space is finite, it is called a finite-dimensional vector space. If the number of elements in a basis of a vector space is not finite, it is called an infinite-dimensional vector space.

EXAMPLE 3: ℝᵐ is a vector space over ℝ of dimension m.
SOLUTION: The set S = {e1, e2, …, em} of vectors of ℝᵐ spans ℝᵐ, where
ej = (0, 0, …, 0, 1, 0, …, 0)
is the vector with 1 at the jth position and zeros elsewhere. For any vector u ∈ ℝᵐ, u = (x1, x2, …, xm), xi ∈ ℝ,
u = x1e1 + x2e2 + ⋯ + xmem = Σ_{i=1}^{m} xi ei
⇒ [S] = ℝᵐ.
Moreover, S is a set of linearly independent vectors over ℝ.
Thus S = {e1, e2, …, em} is a basis of ℝᵐ. This basis is referred to as the standard basis of ℝᵐ.
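Example 2 can be double-checked numerically: a determinant test for independence and a linear solve for the coordinates.

```python
import numpy as np

# The vectors of S = {(1,1,0), (0,1,1), (1,1,1)} as the columns of A.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0]])

# |A| != 0  =>  the columns are L.I., hence S is a basis of R^3.
assert abs(np.linalg.det(A)) > 1e-12

# Coordinates of (x, y, z) with respect to S: solve A (a, b, c)^T = (x, y, z)^T.
x, y, z = 2.0, 5.0, 4.0
a, b, c = np.linalg.solve(A, np.array([x, y, z]))

# They match the closed-form answer a = y - z, b = y - x, c = x - y + z.
assert np.allclose([a, b, c], [y - z, y - x, x - y + z])
```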
= a2v2 + a3v3 + ⋯ + anvn.
If v ∈ V, then
v = β1v1 + β2v2 + ⋯ + βnvn, for scalars β1, β2, …, βn
⇒ v = β1(a2v2 + a3v3 + ⋯ + anvn) + β2v2 + ⋯ + βnvn
⇒ v = (β1a2 + β2)v2 + (β1a3 + β3)v3 + ⋯ + (β1an + βn)vn
⇒ {v2, v3, …, vn} generates V, i.e., [v2, v3, …, vn] = V,
which contradicts the minimality of S1 as a subset of S such that [S1] = V.
Hence αi = 0 ∀ i = 1, 2, …, n
⇒ S1 is a set of linearly independent vectors
⇒ S1 is a basis for V.
Theorem 3.15: Let V be a finite-dimensional vector space. If dim(V) = n, then any set containing (n + 1) vectors of V is linearly dependent.
Proof: Let dim(V) = n, and let S be a set containing (n + 1) vectors of V. We show that S is a set of linearly dependent vectors of V. Suppose S were a set of linearly independent vectors of V, and let B be a basis of V. Since dim(V) = n, O(B) = n. By an earlier theorem, O(S) ≤ O(B) ⇒ n + 1 ≤ n, which is a contradiction. Hence S is a set of linearly dependent vectors of V.
Theorem 3.16: Let B be a set of vectors of a vector space V. Then B is a basis of V if and only if B is a maximal linearly independent set.
Proof: Let B = {v1, v2, …, vn} be a basis of a vector space V. We show that B is a maximal linearly independent set of vectors of V.
Suppose B is not a maximal linearly independent set of vectors of V. Then there is a linearly independent set T in V with B ⊂ T, and hence there exists a vector v ∈ T such that v ∉ B.
Since v ∈ T ⇒ v ∈ V, v can be expressed uniquely as a linear combination of vectors of B:
v = α1v1 + α2v2 + ⋯ + αnvn, for scalars α1, α2, …, αn
⇒ α1v1 + α2v2 + ⋯ + αnvn − v = 0
⇒ α1v1 + α2v2 + ⋯ + αnvn + (−1)v = 0.
Since −1 ≠ 0, this is a non-trivial linear combination of the vectors v1, v2, …, vn, v
⇒ {v1, v2, …, vn, v} ⊆ T is a set of linearly dependent vectors. Since every superset of a L.D. set is L.D., T is a linearly dependent set in V, which is a contradiction.
Hence B is a maximal set of linearly independent vectors.
Conversely, let B = {v1, v2, …, vn} ⊂ V be a maximal linearly independent set. To show that B is a basis of V, let v ∈ V with v ∉ [B]; then v ∉ B, and by the maximality of B the set B ∪ {v} is a linearly dependent set in V: for scalars α, α1, α2, …, αn,
αv + α1v1 + α2v2 + ⋯ + αnvn = 0,
where at least one of the scalars α, α1, …, αn is non-zero.
If α = 0, then
α1v1 + α2v2 + ⋯ + αnvn = 0
⇒ α1 = α2 = ⋯ = αn = 0, since B is linearly independent, contradicting the non-triviality of the combination. Hence α ≠ 0
⇒ v = (−α1/α)v1 + (−α2/α)v2 + ⋯ + (−αn/α)vn
⇒ v ∈ [B], which is a contradiction. Thus V = [B].
Hence B is a basis of V.
Theorem 3.17: Let V be an n-dimensional vector space over a field F. Then every linearly independent set of n vectors is a basis of V.
Proof: Let S = {v1, v2, …, vn} be a linearly independent subset of the n-dimensional vector space V. To show that S forms a basis of V, it suffices to show that S generates the space V, i.e., [S] = V. Let v ∈ V. Then S ∪ {v} = {v1, v2, …, vn, v} is a linearly dependent set, because V is an n-dimensional vector space and every set containing more than n vectors is L.D. This implies that at least one of the vectors v1, v2, …, vn, v is a linear combination of its predecessors; it cannot be any of v1, v2, …, vn, because v1, v2, …, vn are linearly independent, so it is the vector v that can be expressed as a linear combination of the vectors of S. This implies v ∈ [S] for every vector v in V. Hence [S] = V.
Theorem 3.18: Let S = {v1, v2, …, vm} be a linearly independent set of vectors of an n-dimensional vector space V. Then S can be extended to a basis of V.
Proof: Let S = {v1, v2, …, vm} be a linearly independent set of vectors of an n-dimensional vector space V. By Theorem 3.15, m ≤ n. If m = n, then by Theorem 3.17 every set of n linearly independent vectors forms a basis of an n-dimensional vector space, so S forms a basis for V. If m < n, then S is not a basis of V and [S] ≠ V, i.e., [v1, v2, …, vm] ≠ V. This implies that [S] is a proper subspace of V, so there exists a vector v_{m+1} in V such that v_{m+1} ∉ [S] = [v1, v2, …, vm]. Hence the set S1 = {v1, v2, …, vm, v_{m+1}} is L.I. If m + 1 = n then S1 is a basis of V; otherwise we repeat the same procedure until we get n linearly independent vectors {v1, v2, …, vm, v_{m+1}, …, vn}.
This will form a basis of V.

Note: Any number of bases can be produced by taking a non-zero vector and extending it to a basis of the vector space.

EXAMPLE 4: Let S = {(1, 1, 0, 1), (1, −1, 1, 2)} be a set of linearly independent vectors of V = R⁴ over R. Find a basis of V which contains S.

SOLUTION: Let S = {(1, 1, 0, 1), (1, −1, 1, 2)} be a set of L.I. vectors of V = R⁴ over R. Then we can extend S to a basis of V = R⁴. For this,
[S] = [(1, 1, 0, 1), (1, −1, 1, 2)]
= {α (1, 1, 0, 1) + β (1, −1, 1, 2) : α, β ∈ R}
= {(α + β, α − β, β, α + 2β) : α, β ∈ R}.
Now we choose a vector in R⁴ which does not belong to the span [S]. Take α = 1, β = 1, which gives (2, 0, 1, 3) ∈ [S], and change its fourth coordinate to zero: the vector (2, 0, 1, 0) does not belong to the span of S. Hence the set
S1 = {(1, 1, 0, 1), (1, −1, 1, 2), (2, 0, 1, 0)}
is an L.I. set. We repeat the same procedure.
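The membership test used in Example 4 can be checked numerically: a vector v lies outside [S] exactly when appending v to the rows of S increases the matrix rank. A small sketch in Python/NumPy (our own illustration, not part of the text):

```python
import numpy as np

# Rows of S are the given linearly independent vectors.
S = np.array([[1, 1, 0, 1],
              [1, -1, 1, 2]], dtype=float)
v = np.array([2, 0, 1, 0], dtype=float)  # candidate extension vector

rank_S = np.linalg.matrix_rank(S)
rank_Sv = np.linalg.matrix_rank(np.vstack([S, v]))
print(rank_S, rank_Sv)  # 2 3 -> v is outside [S], so S ∪ {v} is L.I.
```

Repeating this test with further vectors until the rank reaches 4 completes the extension to a basis of R⁴.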
3.9 COORDINATE OF A VECTOR RELATIVE TO AN ORDERED BASIS

Let S = {v1, v2, ..., vn} be an ordered basis of a vector space V over the field F. Then any vector v ∈ V can be expressed as
v = α1v1 + α2v2 + ... + αnvn,
where α1, α2, ..., αn are scalars. The vector (α1, α2, ..., αn) is called the coordinate vector of v relative to the ordered basis S, and it is denoted by [v]S.

EXAMPLE 1: Determine the coordinate vector of the vector (3, 4, −1, 2) ∈ R⁴ relative to the ordered basis S = {(1, 1, 0, 1), (−1, 0, 1, 1), (2, 1, −1, 0), (0, 1, 1, 1)}.

SOLUTION: Let S = {(1, 1, 0, 1), (−1, 0, 1, 1), (2, 1, −1, 0), (0, 1, 1, 1)} be an ordered basis of R⁴ and v = (3, 4, −1, 2) ∈ R⁴. To find the coordinate vector of (3, 4, −1, 2) relative to S, we express (3, 4, −1, 2) as
(3, 4, −1, 2) = α1(1, 1, 0, 1) + α2(−1, 0, 1, 1) + α3(2, 1, −1, 0) + α4(0, 1, 1, 1)
for α1, α2, α3, α4 ∈ R
⇒ α1 − α2 + 2α3 = 3
α1 + α3 + α4 = 4
α2 − α3 + α4 = −1
α1 + α2 + α4 = 2

⇒ [1 −1 2 0; 1 0 1 1; 0 1 −1 1; 1 1 0 1] [α1, α2, α3, α4]ᵀ = [3, 4, −1, 2]ᵀ ...(i)

To find the solution of this system, we use row operations to convert the matrix into upper triangular form. Applying R2 − R1 and R4 − R1, then R3 − R2 and R4 − R2, and finally R4 − R3, the augmented matrix reduces to

[1 −1  2  0 |  3]
[0  1 −1  1 |  1]
[0  0  1  0 |  0]
[0  0  0 −1 | −3]

The solution of the system given below will be the solution of system (i).
[1 −1 2 0; 0 1 −1 1; 0 0 1 0; 0 0 0 −1] [α1, α2, α3, α4]ᵀ = [3, 1, 0, −3]ᵀ ...(ii)

To solve the system (ii) we use back substitution:
−α4 = −3 ⇒ α4 = 3
α3 = 0
α2 − α3 + α4 = 1 ⇒ α2 − 0 + 3 = 1 ⇒ α2 = −2
α1 − α2 + 2α3 = 3 ⇒ α1 = 3 + (−2) − 2(0) ⇒ α1 = 1.
Hence (1, −2, 0, 3) is the coordinate vector of the vector (3, 4, −1, 2) in R⁴ relative to the basis S. This can be written as
[v]S = [(3, 4, −1, 2)]S = (1, −2, 0, 3).

EXAMPLE 2: Determine the coordinate vector of the vector 4 + 3x − x² in the vector space P2(x) over R relative to the ordered basis S = {2, 1 − x, 1 + x²} of P2(x).

SOLUTION: To find the coordinate vector of 4 + 3x − x² relative to the basis S = {2, 1 − x, 1 + x²} of P2(x), we express 4 + 3x − x² as
4 + 3x − x² = α1(2) + α2(1 − x) + α3(1 + x²) for α1, α2, α3 ∈ R
⇒ 2α1 + α2 + α3 = 4, −α2 = 3 ⇒ α2 = −3, α3 = −1
⇒ 2α1 − 3 − 1 = 4 ⇒ α1 = 4
⇒ (4, −3, −1) is the coordinate vector of 4 + 3x − x² in P2(x). This can be written as
[4 + 3x − x²]S = (4, −3, −1).

EXAMPLE 3: Determine the dimension of the following vector spaces:
(i) R over R, (ii) C over R, (iii) C over C, (iv) R over Q,
(v) C² over R, (vi) Cⁿ over R, n ∈ N, (vii) Cⁿ over C, where n ∈ N.
SOLUTION: (i) The dimension of the vector space R over R is one, because S = {1} is a basis of R over R. Since 1 ≠ 0, S is a linearly independent set, and
[S] = {α · 1 : α ∈ R} = {α : α ∈ R} = R.
Hence R over R is a one-dimensional vector space.
(ii) C over R is a two-dimensional vector space, because S = {1, i} forms a basis for C over R: S = {1, i} is a linearly independent set in C over R, because neither of 1 and i can be expressed as a constant multiple of the other if the scalars are real numbers. Now,
[S] = {α(1) + β(i) : α, β ∈ R} = {α + iβ : α, β ∈ R} = C.
Hence C over R is a 2-dimensional vector space.
(iii) C over C is a one-dimensional vector space, because S = {1} forms a basis for C over C.
(iv) R over Q is an infinite-dimensional vector space. To show this, we prove that the set S = {1, π, π², π³, ..., πⁿ, ...} is linearly independent in R over Q.
By the definition that an infinite set of vectors is L.I. if every finite subset of it is L.I., it suffices to prove that the set S1 = {1, π, π², ..., πⁿ} is an L.I. set for every n ∈ N. Suppose S1 is an L.D. set; then there exists a non-trivial linear combination of vectors of S1, i.e., there exist αi ∈ Q, where at least one αi is non-zero, such that
α0 · 1 + α1π + α2π² + ... + αnπⁿ = 0
⇒ x = π is a root of the polynomial
p(x) = α0 + α1x + ... + αnxⁿ,
where at least one αi is non-zero, i.e., p(x) ≠ 0.
But this is not possible, because π is a transcendental number. Therefore S1 = {1, π, π², ..., πⁿ} is an L.I. set for all n ∈ N. As a consequence, S = {1, π, π², ..., πⁿ, ...} is an L.I. set.
Hence R over Q is an infinite-dimensional vector space.
(v) C² over R is a 4-dimensional vector space over R:
v ∈ C² ⇒ v = (z1, z2) = (x1 + iy1, x2 + iy2)
= x1(1, 0) + y1(i, 0) + x2(0, 1) + y2(0, i)
⇒ S = {(1, 0), (0, 1), (i, 0), (0, i)}
is a basis of C² over R, because S is an L.I. set in C² over R.
(vi) Cⁿ over R is a 2n-dimensional vector space; this can be easily proved by the reader as in (v).
(vii) Cⁿ over C is a vector space of dimension n. For this, let
v = (z1, z2, ..., zn) ∈ Cⁿ;
(z1, z2, ..., zn) = z1(1, 0, 0, ..., 0) + z2(0, 1, 0, ..., 0) + ... + zn(0, 0, ..., 0, 1)
for z1, z2, ..., zn ∈ C.
⇒ S = {(1, 0, 0, ..., 0), (0, 1, 0, ..., 0), ..., (0, 0, ..., 1)}
forms a basis for Cⁿ over C; this basis is called the standard basis of Cⁿ over C.
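Coordinate computations such as Example 2 reduce to a small linear solve: put the basis vectors' coefficients (in the monomial basis {1, x, x²}) into the columns of a matrix B and solve B·α = v. A quick numerical check in Python/NumPy (our own sketch, not from the text):

```python
import numpy as np

# Columns: 2 -> (2,0,0), 1 - x -> (1,-1,0), 1 + x^2 -> (1,0,1)
B = np.array([[2, 1, 1],
              [0, -1, 0],
              [0, 0, 1]], dtype=float)
v = np.array([4, 3, -1], dtype=float)  # 4 + 3x - x^2 in the monomial basis

alpha = np.linalg.solve(B, v)          # coordinate vector [v]_S
print(alpha)  # [ 4. -3. -1.]
```

This reproduces [4 + 3x − x²]S = (4, −3, −1) from Example 2.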
EXAMPLE 6: Find the dimension of the following subspaces of the vector space R⁵ over R.
(i) W = {(x1, x2, x3, x4, x5) ∈ R⁵ : x1 + x2 + x3 = 0, x2 + x4 = 0}
(ii) W = {(x1, x2, x3, x4, x5) ∈ R⁵ : x1 − x3 + x4 − x5 = 0, x2 + x4 + x5 = 0, x1 + x2 − x3 + 2x4 = 0}

SOLUTION: (i) W = {(x1, x2, x3, x4, x5) ∈ R⁵ : x1 + x2 + x3 = 0, x2 + x4 = 0}
⇒ x1 = −x2 − x3 and x2 = −x4
⇒ W = {(x4 − x3, −x4, x3, x4, x5) ∈ R⁵}
= {x3(−1, 0, 1, 0, 0) + x4(1, −1, 0, 1, 0) + x5(0, 0, 0, 0, 1)}.
A basis of W is S = {(−1, 0, 1, 0, 0), (1, −1, 0, 1, 0), (0, 0, 0, 0, 1)}.
Hence the dimension of W is 3.
Alternate solution: W = {(x1, x2, x3, x4, x5) ∈ R⁵ : x1 + x2 + x3 = 0, x2 + x4 = 0}. Using the note ...,
dim (W) = dim (R⁵) − rank (A) = 5 − rank (A),
where
A = [1 1 1 0 0]
    [0 1 0 1 0]
⇒ rank (A) = 2.
Hence dim (W) = 5 − 2 = 3.
(ii) W = {(x1, x2, x3, x4, x5) ∈ R⁵ : x1 − x3 + x4 − x5 = 0, x2 + x4 + x5 = 0, x1 + x2 − x3 + 2x4 = 0}
⇒ x1 = x3 − x4 + x5, x2 = −x4 − x5.
Putting x1, x2 in the third condition: x3 − 2x4 − x3 + 2x4 = 0, which holds identically.
⇒ W = {(x3 − x4 + x5, −x4 − x5, x3, x4, x5) ∈ R⁵}
= {x3(1, 0, 1, 0, 0) + x4(−1, −1, 0, 1, 0) + x5(1, −1, 0, 0, 1)}
⇒ S = {(1, 0, 1, 0, 0), (−1, −1, 0, 1, 0), (1, −1, 0, 0, 1)}
forms a basis for W. Hence dim (W) is 3.
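The rank formula dim(W) = 5 − rank(A) used in Example 6 is easy to verify numerically for both parts; a short Python/NumPy check (our own illustration):

```python
import numpy as np

# Part (i): x1 + x2 + x3 = 0 and x2 + x4 = 0
A1 = np.array([[1, 1, 1, 0, 0],
               [0, 1, 0, 1, 0]], dtype=float)
# Part (ii): the three conditions (the third is dependent on the first two)
A2 = np.array([[1, 0, -1, 1, -1],
               [0, 1, 0, 1, 1],
               [1, 1, -1, 2, 0]], dtype=float)

dim_W1 = 5 - np.linalg.matrix_rank(A1)
dim_W2 = 5 - np.linalg.matrix_rank(A2)
print(dim_W1, dim_W2)  # 3 3
```

Note that rank(A2) = 2, not 3, precisely because the third equation is the sum of the first two.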
Alternate solution: W = {(x1, x2, x3, x4, x5) ∈ R⁵ : AX = 0},
where
A = [1 0 −1 1 −1]
    [0 1  0 1  1]
    [1 1 −1 2  0]
and X = (x1, x2, x3, x4, x5)ᵀ. Then
dim (W) = dim (R⁵) − rank (A).
To find the rank of A, use the echelon form:
A ~ [1 0 −1 1 −1]              ~ [1 0 −1 1 −1]
    [0 1  0 1  1]                [0 1  0 1  1]
    [0 1  0 1  1]  (R3 − R1)     [0 0  0 0  0]  (R3 − R2)
⇒ rank (A) = number of non-zero rows = 2.
Hence dim (W) = 5 − 2 = 3.

EXAMPLE 7: Determine the dimension of the following subspaces of the vector space P5(x) over R.
(i) W = {p(x) ∈ P5(x) : p‴(x) = 0}
(ii) W = {p(x) ∈ P5(x) : p(0) = 0 and p″(0) = 0}
(iii) W = {p(x) ∈ P5(x) : p‴(x0) = 0, for x0 ∈ R}

SOLUTION: (i) W = {p(x) ∈ P5(x) : p‴(x) = 0}.
Let p(x) = a0 + a1x + a2x² + a3x³ + a4x⁴ + a5x⁵ ∈ P5(x). Then
p′(x) = a1 + 2a2x + 3a3x² + 4a4x³ + 5a5x⁴
p″(x) = 2a2 + 6a3x + 12a4x² + 20a5x³
p‴(x) = 6a3 + 24a4x + 60a5x²
Since p‴(x) = 0 ⇒ 6a3 + 24a4x + 60a5x² = 0
⇒ a3 = 0, a4 = 0 and a5 = 0
⇒ p(x) = a0 + a1x + a2x² ∈ W
⇒ S = {1, x, x²} is a basis of W.
Hence dim (W) = 3.
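Example 7(i) can also be phrased in matrix terms: write the map p ↦ p‴ as a 6 × 6 matrix in the basis {1, x, ..., x⁵} and read dim(W) off as its nullity. A short Python/NumPy sketch (our own, not from the text):

```python
import numpy as np

# Third-derivative operator on P5 in the basis {1, x, ..., x^5}:
# x^k maps to k(k-1)(k-2) x^(k-3), which is non-zero only for k >= 3.
D3 = np.zeros((6, 6))
for k in range(3, 6):
    D3[k - 3, k] = k * (k - 1) * (k - 2)

nullity = 6 - np.linalg.matrix_rank(D3)
print(nullity)  # 3, matching dim(W) = 3
```

The three columns for x³, x⁴, x⁵ are independent, so the rank is 3 and the nullity is 6 − 3 = 3.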
5. Let V be the vector space of all real polynomials. Consider the subspace W spanned by t² + t + 2, t² + 2t + 5, 5t² + 3t + 4 and 2t² + 2t + 4. Then the dimension of W is: (GATE 2006)
(a) 4 (b) 3 (c) 2 (d) 1
6. A basis of V = {(x, y, z, w) ∈ R⁴ : x + y − z = 0, y + z + w = 0, 2x + y − 3z − w = 0} is: (GATE 2007)
(a) {(1, 1, −1, 0), (0, 1, 1, 1), (2, 1, −3, 1)}
(b) {(1, −1, 0, 1)}
(c) {(1, 0, 1, −1)}
(d) {(1, −1, 0, 1), (1, 0, 1, −1)}
7. Let M = [1 0 0; 0 cos θ −sin θ; 0 sin θ cos θ], where 0 < θ < π/2. Let V = {u ∈ R³ : Muᵀ = uᵀ}. Then the dimension of V is:
(a) 0 (b) 1 (c) 2 (d) 3
8. Consider the basis {u1, u2, u3} of R³, where u1 = (1, 0, 0), u2 = (1, 1, 0), u3 = (1, 1, 1). Let {f1, f2, f3} be the dual basis of {u1, u2, u3} and f be a linear functional defined by f(a, b, c) = a + b + c, (a, b, c) ∈ R³. If f = α1f1 + α2f2 + α3f3, then (α1, α2, α3) is:
(a) (1, 2, 3) (b) (1, 3, 2) (c) (2, 3, 1) (d) (3, 2, 1)
9. The dimension of the vector space of all 3 × 3 real symmetric matrices is:
(a) 3 (b) 9 (c) 6 (d) 4
10. Let S and T be two subspaces of R²⁴ such that dim (S) = 19 and dim (T) = 17. Then the: (GATE 2004)
(a) Smallest possible value of dim (S ∩ T) is 2
(b) Largest possible value of dim (S ∩ T) is 18
(c) Smallest possible value of dim (S + T) is 19
(d) Largest possible value of dim (S + T) is 22
11. Let v1 = (1, 2, 0, 3, 0), v2 = (1, 2, −1, −1, 0), v3 = (0, 0, 1, 4, 0), v4 = (2, 4, 1, 10, 1) and v5 = (0, 0, 0, 0, 1). The dimension of the linear span of {v1, v2, v3, v4, v5} is: (GATE 2004)
(a) 2 (b) 3 (c) 4 (d) 5
12. The set V = {(x, y) ∈ R² : xy ≥ 0} is: (GATE 2004)
(a) A vector space of R²
(b) Not a vector space of R², since every element does not have an inverse in V
(c) Not a vector space of R², since it is not closed under scalar multiplication
(d) Not a vector space of R², since it is not closed under vector addition
13. The set of all x ∈ R for which the vectors (1, x, 0), (0, x², 1) and (0, 1, x) are linearly independent in R³ is:
(a) {x ∈ R : x = 0} (b) {x ∈ R : x ≠ 1} (c) {x ∈ R : x ≠ 0} (d) {x ∈ R : x ≠ −1}
14. The dimension of the subspace {(x1, x2, x3, x4, x5) : 3x1 − x2 + x3 = 0} of R⁵ is:
(a) 1 (b) 2 (c) 3 (d) 4
15. Let A = [1 1 1; 2 2 3; x y z] and let V = {(x, y, z) ∈ R³ : det (A) = 0}. Then the dimension of V equals: (GATE 2008)
(a) 0 (b) 1 (c) 2 (d) 3
16. Consider the subspace W = {[aij] : aij = 0 if i is even} of all 10 × 10 real matrices. Then the dimension of W is: (GATE 2008)
(a) 25 (b) 50 (c) 75 (d) 100
17. The dimension of the vector space V = {A = (aij)n×n : aij ∈ C, aij = −aji} over the field R is: (GATE 2008)
(a) n² (b) n² − 1 (c) n² − n (d) n²/2
18. Let V be the real vector space of all polynomials in one variable with real coefficients and having degree at most 20. Define the subspaces:
W1 = {p ∈ V : p(1) = 0, p(1/2) = 0, p(5) = 0, p(7) = 0}
W2 = {p ∈ V : p(1/2) = 0, p(3) = 0, p(4) = 0, p(7) = 0}
Then the dimension of W1 ∩ W2 is:
(a) 16 (b) 17 (c) 15 (d) 19
19. The row space of a 20 × 50 matrix A has dimension 13. What is the dimension of the space of solutions of AX = 0?
(a) 7 (b) 13 (c) 33 (d) 37
25. Which of the following sets of functions from R to R is a vector space over R?
S1 = {f : lim x→3 f(x) = 0}, S2 = {g : lim x→3 g(x) = 1}, S3 = {h : lim x→3 h(x) exists}
(a) Only S1 (b) Only S2 (c) S1 and S3 but not S2 (d) All the three are vector spaces
26. Let A be a 4 × 4 matrix. Suppose that the null space N(A) of A is {(x, y, z, w) ∈ R⁴ : x + y + z = 0, x + y + w = 0}. Then:
(a) dim {Column space (A)} = 1
(b) dim {Column space (A)} = 2
(c) Rank A = 1
(d) S = {(1, 1, 1, 0), (1, 1, 0, 1)} is a basis of N (A)
27. The dimension of the vector space of all symmetric matrices A = (aij) of order n × n (n ≥ 2) with real entries, a11 = 0 and trace zero is: (CSIR 2012)
(a) (n² + n − 4)/2 (b) (n² − n + 4)/2 (c) (n² + n − 3)/2 (d) (n² − n + 3)/2
28. For a positive integer n, let Pn denote the space of all polynomials p(x) with coefficients in R such that deg p(x) ≤ n, and let Bn denote the standard basis of Pn given by Bn = {1, x, x², ..., xⁿ}. If T : P3 → P4 is the linear transformation defined by
T{p(x)} = x² p′(x) + ∫₀ˣ p(t) dt
and A = (aij) is the 5 × 4 matrix of T w.r.t. the standard bases B3 and B4, then:
(a) a32 = 3/2, a33 = 7/3 (b) a32 = 3/2, a33 = 0 (c) a32 = 0, a33 = 7/3 (d) a32 = 0, a33 = 0
29. The dimension of the vector space of all symmetric matrices of order n × n (n ≥ 2) with real entries and trace equal to zero is: (CSIR 2011)
(a) (n² − n)/2 − 1 (b) (n² + n)/2 − 1 (c) (n² − 2n)/2 − 1 (d) (n² + 2n)/2 − 1
30. Consider the set V = {[x, y, z]ᵀ ∈ R³ : αx + βy + z = γ, α, β, γ ∈ R}. For which of the following choices does the set V become a two-dimensional subspace of R³ over R?
(a) α = 0, β = 1, γ = 0 (b) α = 0, β = 1, γ = 1 (c) α = 1, β = 0, γ = 0 (d) α = 1, β = 1, γ = 0
31. Let X = [X1, X2, X3]ᵀ ∈ R³ be a non-zero vector and A = (XXᵀ)/(XᵀX). Then the dimension of the vector space {y ∈ R³ : Ay = 0} over R is __________ ?
32. Let V be the set of 2 × 2 matrices [a11 a12; a21 a22] with complex entries such that a11 + a22 = 0. Let W be the set of matrices in V with a21 + ā21 = 0. Then, under usual matrix addition and scalar multiplication, which of the following are true?
(a) V is a vector space over C
(b) W is a vector space over C
(c) V is a vector space over R
(d) W is a vector space over R
33. If the set {[x 0; −1 −x], [0 −1; x 1], [x −1; 1 0]} is linearly independent in the vector space of all 2 × 2 matrices with real entries, then x is equal to _________ ?
34. Let M2(R) be the vector space of 2 × 2 real matrices. Let V be a subspace of M2(R) defined by
V = {A ∈ M2(R) : A [0 2; 3 1] = [0 2; 3 1] A}.
Then the dimension of V is ________ ?
35. Let A = [1 1 1; 3 −1 1; 1 5 3] and V be the vector space of all x ∈ R³ such that AX = 0. Then dim (V) is:
(a) 0 (b) 1 (c) 2 (d) 3
36. Let V be the vector space of all 2 × 2 matrices over R. Consider the subspaces:
W1 = {[a −a; c d] : a, c, d ∈ R}
W2 = {[a b; −a d] : a, b, d ∈ R}
If m = dim (W1 ∩ W2) and n = dim (W1 + W2), then the pair (m, n) is:
(a) (2, 3) (b) (2, 4) (c) (3, 4) (d) (1, 3)
37. Let A be an n × n diagonal matrix with characteristic polynomial (x − a)ᵖ (x − b)ᵠ, where a and b are distinct real numbers. Let V be the real vector space of all n × n matrices B such that AB = BA. Determine the dimension of V.
38. Consider the following subspace of R³:
W = {(x, y, z) ∈ R³ : 2x + 2y + z = 0, 3x + 3y − 2z = 0, x + y − 3z = 0}.
The dimension of W is:
(a) 0 (b) 1 (c) 2 (d) 3
39. Which of the following sets is a basis for the subspace
W = {[x y; 0 t] : x + 2y + t = 0, y + t = 0}
of the vector space of all real 2 × 2 matrices?
(a) {[1 0; 0 0], [0 1; 0 0], [0 0; 0 1]}
(b) {[2 1; 0 −1], [1 −1; 0 1]}
(c) {[−1 1; 2 −1]}
(d) {[1 −1; 0 1]}
40. Let V denote a vector space over a field F with a basis B = {e1, e2, ..., en}. Let x1, x2, ..., xn ∈ F. Let C = {x1e1, x1e1 + x2e2, ..., x1e1 + x2e2 + ... + xnen}. Then:
(a) C is a linearly independent set implies that xi ≠ 0 for every i = 1, 2, ..., n.
(b) xi ≠ 0 for every i = 1, 2, ..., n implies that C is a linearly independent set.
(c) The linear span of C is V implies that xi ≠ 0 for every i = 1, 2, ..., n.
(d) xi ≠ 0 for every i = 1, 2, ..., n implies that the linear span of C is V.
41. Let Pn(x) = xⁿ for x ∈ R and let P = span {P0, P1, P2, ...}. Then:
(a) P is the vector space of all real valued continuous functions on R.
(b) P is a subspace of the space of all real valued continuous functions on R.
(c) {P0, P1, P2, ...} is a linearly independent set in the vector space of all continuous functions on R.
(d) Trigonometric functions belong to P.
42. Which of the following are subspaces of the vector space R³?
(a) {(x, y, z) : x + y = 0} (b) {(x, y, z) : x − y = 0}
(c) {(x, y, z) : x + y = 1} (d) {(x, y, z) : x − y = 1}
43. For arbitrary subspaces U, V and W of a finite dimensional vector space, which of the following hold:
(a) U ∩ (V + W) ⊂ U ∩ V + U ∩ W
(b) U ∩ (V + W) ⊃ U ∩ V + U ∩ W
(c) (U ∩ V) + W ⊂ (U + W) ∩ (V + W)
(d) (U ∩ V) + W ⊃ (U + W) ∩ (V + W)
44. Let W1, W2, W3 be three distinct subspaces of R¹⁰ such that each Wi has dimension 9. Let W = W1 ∩ W2 ∩ W3. Then we can conclude that:
(a) W may not be a subspace of R¹⁰ (b) dim W ≤ 8
(c) dim W ≥ 7 (d) dim W ≤ 3
45. Let V be the vector space of polynomials of degree at most 3 in a variable x with coefficients in R. Let T = d/dx be the linear transformation of V to itself given by differentiation. Which of the following are correct?
(a) T is invertible.
(b) 0 is an eigenvalue of T.
(c) There exists a basis w.r.t. which the matrix of T is nilpotent.
(d) The matrix of T w.r.t. the basis {1, 1 + x, 1 + x + x², 1 + x + x² + x³} is diagonal.

ANSWERS
EXERCISE 3.1
1. (i) Yes (ii) Yes (iii) Yes (iv) Yes (v) No (vi) Yes (vii) No (viii) No (ix) No (x) No (xi) No (xii) Yes (xiii) No (xiv) No (xv) No
2. (i) Yes (ii) Yes (iii) Yes (iv) Yes (v) Yes (vi) Yes (vii) Yes (viii) No (ix) Yes (x) No (xi) Yes (xii) Yes (xiii) Yes (xiv)
3. (i) Yes (ii) Yes (iii) No (iv) Yes (v) Yes (vi) Yes (vii) No (viii) No (ix) Yes (x) Yes (xi) No (xii) No (xiii) Yes (xiv) No (xv) Yes
4. (i) Yes (ii) No (iii) No (iv) No (v) Yes (vi) Yes (vii) Yes (viii) No (ix) No (x) No (xi) (xii) (xiii)
7. (i) F (ii) T (iii) F (iv) F (v) T (vi) F (vii) F (viii) F (ix) F (x) F (xi) F (xii) F (xiii) F (xiv) T (xv) T (xvi) F (xvii) F
EXERCISE 3.2
1. (i), (ii) and (iii)
2. (i), (ii), (iii), (iv), (v) and (vii)
4. Yes
5. (i) F (ii) T (iii) T (iv) T (v) T (vi) T (vii) T
EXERCISE 3.3
1. (i) L.I. (ii) L.I. (iii) L.I. (iv) L.D. (v) L.D. (vi) L.I.
2. (i) L.D. (ii) L.D. (iii) L.I. (iv) (v) L.D.
3. (i) L.I. (ii) L.D. (iii) L.D. (iv) L.D. (v) L.I. (vi) L.D. (vii) L.D. (viii) L.I. (ix) L.I. (x) L.I.
4. (i) L.I. (ii) L.D. (iii) L.I. (iv) L.I. (v) L.I.
5. (i) L.I. (ii) L.D.
9. (i) F (ii) F (iii) T (iv) T (v) T (vi) T (vii) F (viii) F
OBJECTIVE TYPE QUESTIONS
1. (b) 2. (a) 3. (d) 4. (d) 5. (c)
6. (d) 7. (c) 8. (a) 9. (c) 10. (c)
11. (b) 12. (d) 13. (b) 14. (d) 15. (a)
16. (b) 17. (c) 18. (c) 19. (d) 20. (c)
21. (c) 22. (a, b, c) 23. (a) 24. (a) 25. (c)
26. (b) 27. (a) 28. (b) 29. (b) 30. (c)
31. (2) 32. (b, c) 33. (1) 34. (2) 35. (b)
36. (b) 37. ( ) 38. (b) 39. (d) 40. (a, b, c, d)
41. ( ) 42. (a, b) 43. (a, b) 44. (b, c) 45. (a)
Chapter 4
LINEAR TRANSFORMATION

4.1 INTRODUCTION
In this chapter we discuss the mappings that preserve the vector space operations of the respective vector spaces. We call these mappings linear transformations or linear maps. They have numerous applications in a variety of engineering problems, especially in control theory.

4.2 DEFINITION AND EXAMPLES
Linear Transformation: Let U and V be two vector spaces over the same field F. Then a map T : U → V is said to be a linear transformation if it satisfies the following properties:
(i) T (u1 + u2) = T (u1) + T (u2) for all u1, u2 ∈ U
(ii) T (αu) = α · T (u) for all α ∈ F and for all u ∈ U.
These properties preserve the vector space operations. It is clear from (i) that the image of the sum of u1 and u2 is the sum of T (u1) and T (u2); thus T carries vector addition in U to vector addition in V. Similarly, from (ii), the image of a scalar multiple of a vector u of U is equal to the same scalar multiple of its image T (u) in V.
A linear transformation or linear map from a vector space U to a vector space V over the same field is also called a homomorphism.
Further, a linear transformation T : V → V is called a linear operator on the vector space V, and a linear transformation T : V (F) → F is called a linear functional on a vector space V over a field F.
Furthermore, the two conditions defined for a linear transformation can be replaced by one equivalent condition:
T (α1u1 + α2u2) = α1T (u1) + α2T (u2) for all α1, α2 ∈ F and for all u1, u2 ∈ U ...(1)
The condition (1) can be generalised as:
T (α1u1 + α2u2 + ... + αnun) = α1T (u1) + α2T (u2) + ... + αnT (un),
i.e., T (Σⁿᵢ₌₁ αᵢuᵢ) = Σⁿᵢ₌₁ αᵢT (uᵢ)
for all α1, α2, ..., αn ∈ F and u1, u2, ..., un ∈ U.
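The two defining properties can be checked numerically for any concrete map between coordinate spaces. A small Python/NumPy sketch with a sample map of our own choosing (not an example from the text):

```python
import numpy as np

# A sample linear map T : R^2 -> R^3 (our own illustrative choice).
def T(u):
    x, y = u
    return np.array([x + y, x - y, 2 * x])

rng = np.random.default_rng(0)
u1, u2 = rng.normal(size=2), rng.normal(size=2)
a = 1.7

# Property (i): additivity; property (ii): homogeneity.
assert np.allclose(T(u1 + u2), T(u1) + T(u2))
assert np.allclose(T(a * u1), a * T(u1))
print("T satisfies both linearity conditions")
```

A random spot-check like this does not prove linearity, but it catches non-linear maps quickly, e.g. replacing T with u ↦ u + 1 fails property (i).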
4. Find the linear transformation T : V3 → V3 such that T (e1) = e1 − e2 + e3, T (e2) = e2 + e3, T (e3) = e1 − e3.
5. Let T : V2 → V2 be a linear transformation such that T (1, 2) = (2, −1), T (3, −1) = (1, 1). Then find T (2, 3).
6. True or False:
(i) T : R[x] → R[x], defined as T (p(x)) = p(0), is a linear transformation.
(ii) There exists a linear transformation from a vector space over ... to one over ....
(iii) There does not exist a linear transformation from a real vector space to a complex vector space.
(iv) If T : V2 → V2, such that T (1, 1) = (1, 0), T (0, 1) = (2, 1), is a linear transformation, then T (2, 2) = (2, 0).
(v) There exists a linear transformation from V2 to V3 such that T (0, 0) = (0, 1, 0).
(vi) If T : V2 → V3 is a linear transformation such that T (1, 1) = (1, −1, 0), T (0, 1) = (−1, 1, 1), then T (1, 2) = (0, 0, 0).

4.3 NULL SPACE (KERNEL SPACE) AND RANGE SPACE
Definition: Null space
Let T : U → V be a linear transformation. Then the null space or kernel of T is denoted by N (T) or ker (T) and defined as:
N (T) = {u ∈ U : T (u) = 0} ⇒ N (T) ⊆ U.
Now we show that N (T) is a subspace of U. For this, let u1, u2 ∈ N (T) and let α1, α2 be two scalars. Then
T (α1u1 + α2u2) = α1T (u1) + α2T (u2)
= α1 · 0 + α2 · 0 (∵ u1, u2 ∈ N (T) ⇒ T (u1) = 0, T (u2) = 0)
= 0
⇒ α1u1 + α2u2 ∈ N (T) ⇒ N (T) is a subspace of U.
Further, the dimension of N (T) is called the nullity of T, and it is denoted by n (T) or nullity (T).

Definition: Range space
Let T : U → V be a linear transformation. Then the range space of T is denoted by R (T) and defined as
R (T) = {v ∈ V : v = T (u), u ∈ U} ⇒ R (T) ⊆ V.
Now we shall show that R (T) is a subspace of V. For this, let v1, v2 ∈ R (T)
⇒ v1 = T (u1), v2 = T (u2),
where u1, u2 ∈ U, and let α1, α2 be two scalars. Then
α1v1 + α2v2 = α1T (u1) + α2T (u2) = T (α1u1 + α2u2).
Since u1, u2 ∈ U and U is a vector space, α1u1 + α2u2 ∈ U
⇒ α1v1 + α2v2 = T (α1u1 + α2u2), where α1u1 + α2u2 ∈ U.
Hence α1v1 + α2v2 ∈ R (T), so R (T) is a subspace of V.
Further, the dimension of R (T) is called the rank of T, and it is denoted by r (T) or rank (T).
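For a map given by a matrix, T(u) = Au, bases for both subspaces can be extracted numerically, e.g. from the singular value decomposition. A sketch in Python/NumPy (our own illustration; the matrix A is an arbitrary example):

```python
import numpy as np

# T : R^3 -> R^2 given by T(u) = A u.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])

U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))

range_basis = U[:, :rank]   # columns span R(T), a subspace of R^2
null_basis = Vt[rank:].T    # columns span N(T), a subspace of R^3
print(rank, null_basis.shape[1])  # 2 1
```

Here rank(T) = 2 and nullity(T) = 1, consistent with dim R(T) ≤ dim V and n(T) ≤ dim U.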
177, , LINEAR TRANSFORMATION, , EXAMPLE 1: Find the nullity and rank of a linear transformation T : 2 → 2 defined as:, T (x, y) = (x + y, x − y)., SOLUTION: By the defination of null space of T, N (T) = {u ∈ 2 : T (u) = 0v}, = {(x, y) ∈ 2 : T (x, y) = (0, 0)}, = {(x, y) ∈ 2 : (x + y, x − y) = (0, 0)}, ⇒, x+ y = 0, ⇒, x = 0, y = 0, x− y = 0, = {(0, 0) ∈ 2 : T (0, 0) = (0, 0)}, ⇒, N (T) = {0}, ⇒, n (T) = 0., Now by the defination of R (T)., R (T) = {v ∈ V : v = T (u), u ∈ U}, = {(a, b) ∈ V : (a, b) = T (x, y)}, = {(a, b) ∈ 2 : (a, b) = (x + y, x − y)}, = {(x + y, x − y)}, = {x (1, 1) + y (1, −1)}, Basis of (T) = {(1, 1), (1, −1)}, Hence, r (T) = 02., Alter Solution: T (x, y) = (x + y, x − y) is a linear transformation from 2 + 2. To determine, it’s rank and nullity. We first find the matrix by taking the image of standard basis of 2 under T., T (1, 0) = (1, 1), T (0, 1) = (1, −1), , UV, W, , A =, , LM1, N1, , OP, Q, , 1, −1, , The rank of A is also rank of T and nullity of A is nullity of T. We find the rank of A by using, Echleon form, 1, 1, 1, 1, ~, R → R2 − R1, A =, 1 −1, 0 −2 2, , LM, N, , ⇒, , OP LM, Q N, , OP, Q, , rank (A) = 02, and nullity (A) = 0, rank (T) = 02, and nullity (T) = 0., , EXAMPLE 2: Find the null space, range space of a linear transformation T : 3 → 2, such, that T (x, y, z) = (x + y, y + z). Further find the r (T) and r (T)., SOLUTION: By the definition of null space, N (T) = {(x, y, z) ∈ 3 : T (x, y, z) = (0, 0)}, (x + y, y + z) = (0, 0), ⇒, x+ y = 0, x = −y = z, ⇒, y+z = 0, y = −z, , UV, W
⇒ N (T) = {(z, −z, z) ∈ R³ : z ∈ R} is the null space.
A basis of N (T) is {(1, −1, 1)} ⇒ n (T) = 1.
Now, by the definition of R (T),
R (T) = {(a, b) ∈ R² : (a, b) = T (x, y, z)}
= {(a, b) ∈ R² : (a, b) = (x + y, y + z)}
= {x (1, 0) + y (1, 1) + z (0, 1) : x, y, z ∈ R}
⇒ R (T) = [(1, 0), (1, 1), (0, 1)].
Since R (T) is a subspace of R², dim (R (T)) ≤ 2
⇒ a basis of R (T) is a set of L.I. vectors from (1, 0), (1, 1) and (0, 1):
{(1, 0), (0, 1)} or {(1, 0), (1, 1)} or {(1, 1), (0, 1)}.
Hence dim (R (T)) = 2 ⇒ rank (T) = 2.

Note: If T : U → V is a linear transformation, then dim (N (T)) ≤ dim (U) and dim (R (T)) ≤ dim (V), i.e., n (T) ≤ dim (U) and r (T) ≤ dim (V).

EXAMPLE 3: Find the null space, range space, nullity and rank of the linear transformation T : P3[x] → P3[x] such that T (p)(x) = p″(x).

SOLUTION: By the definition of N (T),
N (T) = {p(x) ∈ P3[x] : T (p(x)) = 0}
⇒ N (T) = {p = a0 + a1x + a2x² + a3x³ ∈ P3[x] : p″(x) = 0}
p″(x) = 0 ⇒ 2a2 + 6a3x = 0 ⇒ a2 = 0, a3 = 0
⇒ N (T) = {p = a0 + a1x : a0, a1 ∈ R} is the null space; a basis of N (T) is {1, x}
⇒ n (T) = 2.
Now, by the definition of R (T),
R (T) = {p(x) ∈ P3[x] : p(x) = T (q(x)), q(x) ∈ P3[x]}
= {p(x) ∈ P3[x] : p(x) = q″(x), q(x) ∈ P3[x]}
Let q(x) = a0 + a1x + a2x² + a3x³
⇒ q″(x) = 2a2 + 6a3x
⇒ R (T) = {p = 2a2 + 6a3x : a2, a3 ∈ R}.
A basis of R (T) is {1, x} ⇒ dim (R (T)) = 2 ⇒ rank (T) = 2.

Theorem 4.2: Let T : U → V be a linear transformation. Then T is one-one if and only if N (T) = {0}.

Proof: Let T : U → V be a one-one linear transformation. We show that N (T) = {0u}. Since T is one-one, T (u1) = T (u2) ⇒ u1 = u2 for u1, u2 ∈ U. Let u ∈ N (T)
⇒ T (u) = 0 = T (0u) ⇒ u = 0u ⇒ N (T) = {0u}.
Conversely, let N (T) = {0}; we show that T is one-one. For this, let
T (u1) = T (u2) for u1, u2 ∈ U
⇒ T (u1) − T (u2) = 0v ⇒ T (u1 − u2) = 0v ⇒ u1 − u2 ∈ N (T)
⇒ u1 − u2 = 0u ⇒ u1 = u2.
Hence T is one-one.

Theorem 4.3: Let T : U → V be a linear transformation. If [u1, u2, ..., uk] = U, then R (T) = [T (u1), T (u2), ..., T (uk)].

Proof: Let T : U → V be a linear transformation with U = [u1, u2, ..., uk]. We show that R (T) = [T (u1), T (u2), ..., T (uk)]. Let v ∈ R (T); then v = T (u) for some u ∈ U, and for scalars α1, α2, ..., αk,
u = α1u1 + α2u2 + ... + αkuk ∈ U
⇒ v = T (u) = T (α1u1 + α2u2 + ... + αkuk) = α1T (u1) + α2T (u2) + ... + αkT (uk)
⇒ v ∈ [T (u1), T (u2), ..., T (uk)]
⇒ R (T) = [T (u1), T (u2), ..., T (uk)].

Theorem 4.4: Let T : U → V be a linear transformation. If U is a finite dimensional vector space, then dim (R (T)) ≤ dim (U).

Proof: Let T : U → V be a linear transformation and U be a finite dimensional vector space. We show that dim (R (T)) ≤ dim (U). For this, suppose dim (U) = m, and let B = {u1, u2, ..., um} be a basis of U. Then B generates U, i.e.,
U = [u1, u2, ..., um]
By Theorem 4.3,
R (T) = [T (u1), T (u2), ..., T (um)]
⇒ for any linearly independent set S of vectors in R (T), |S| (the cardinality of S) is at most m
⇒ dim (R (T)) ≤ m = dim (U).

Definition
A linear transformation T : U → V is said to be onto if and only if R (T) = V, i.e., dim (R (T)) = dim (V).
Note: Let T : U → V be a linear transformation, where U and V are both finite dimensional vector spaces. Then dim (R (T)) ≤ min {dim (U), dim (V)}.

EXAMPLE 4: Find the null space and range space of the linear transformation T : R[x] → R[x] such that T (p)(x) = p″(x) − 4p(x), where R[x] is the space of real polynomials.

SOLUTION: By the definition of the null space,
N (T) = {p(x) ∈ R[x] : T (p)(x) = 0}
T (p)(x) = 0 ⇒ p″(x) − 4p(x) = 0 ...(1)
⇒ p(x) = c1e²ˣ + c2e⁻²ˣ.
Both p(x) = e²ˣ and p(x) = e⁻²ˣ satisfy equation (1), but these are not polynomials. Hence p(x) = 0 is the only polynomial satisfying p″(x) − 4p(x) = 0
⇒ N (T) = {0}.
Now for the range space:
R (T) = {q(x) ∈ R[x] : q(x) = T (p)(x), p(x) ∈ R[x]}
= {p″(x) − 4p(x) : p(x) ∈ R[x]} = R[x].

EXAMPLE 5: Let T : F(0, 1) → F(0, 1), such that T (f(x)) = f′(x) · eˣ, be a linear transformation on the space F(0, 1) of differentiable real functions on (0, 1). Then find the null space and nullity of T.

SOLUTION: By the definition of the null space,
N (T) = {f(x) ∈ F(0, 1) : T (f(x)) = 0}
= {f(x) ∈ F(0, 1) : f′(x) · eˣ = 0}.
Since eˣ ≠ 0 for all x ∈ (0, 1), f′(x) = 0 ⇒ f(x) = constant
⇒ N (T) = {constant functions}
⇒ nullity n (T) = 1.
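The matrix method used in Examples 1–3 of Section 4.3 extends to operators on polynomial spaces: Example 3's map T(p) = p″ on P3[x] becomes a 4 × 4 matrix in the basis {1, x, x², x³}, and its rank and nullity match r(T) and n(T). A Python/NumPy check (our own sketch, not from the text):

```python
import numpy as np

# Second-derivative operator on P3 in the basis {1, x, x^2, x^3}:
# x^k maps to k(k-1) x^(k-2), non-zero only for k >= 2.
D2 = np.zeros((4, 4))
for k in range(2, 4):
    D2[k - 2, k] = k * (k - 1)

rank = np.linalg.matrix_rank(D2)   # should equal r(T) = 2
nullity = 4 - rank                 # should equal n(T) = 2
print(rank, nullity)  # 2 2
```

The nullity 2 corresponds to the basis {1, x} of N(T) found in Example 3, and the rank 2 to the basis {1, x} of R(T).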
Theorem 4.5: Let T : U → V be a one-one linear transformation and u1, u2, ..., uk be linearly independent vectors in U. Then T (u1), T (u2), ..., T (uk) are linearly independent in V.

Proof: Let T : U → V be a one-one linear transformation and u1, u2, ..., uk be linearly independent vectors in U. We show that T (u1), T (u2), ..., T (uk) are linearly independent vectors in V. For this, let α1, α2, ..., αk be scalars such that
α1T (u1) + α2T (u2) + ... + αkT (uk) = 0
⇒ T [α1u1 + α2u2 + ... + αkuk] = 0
⇒ α1u1 + α2u2 + ... + αkuk ∈ N (T).
Since T is a one-one linear transformation, N (T) = {0u}. Therefore
α1u1 + α2u2 + ... + αkuk = 0.
Since u1, u2, ..., uk are linearly independent vectors in U, this implies that
α1 = α2 = ... = αk = 0.
Hence T (u1), T (u2), ..., T (uk) are linearly independent vectors in V.

Corollary 1: Let U and V be two finite dimensional vector spaces of the same dimension. If T : U → V is a one-one linear transformation, then T transforms a basis of U into a basis of V.

Proof: Let U and V be two finite dimensional vector spaces such that dim (U) = dim (V) = n, and let T : U → V be a one-one linear transformation. We show that T transforms a basis of U into a basis of V.
Let B = {u1, u2, ..., un} be a basis of U. Then u1, u2, ..., un are linearly independent. Since T is a one-one linear transformation, by Theorem 4.5, T (u1), T (u2), ..., T (un) are linearly independent vectors in V. Since dim (V) = n,
⇒ {T (u1), T (u2), ..., T (un)} is a basis of V.

4.4 RANK NULLITY THEOREM
Theorem 4.6 [Rank-Nullity Theorem]: Let U be a finite dimensional vector space and T : U → V be a linear transformation.
Then
dim(R(T)) + dim(N(T)) = dim(U), i.e., rank(T) + nullity(T) = dim(U).
Proof: Let U be a finite-dimensional vector space and T : U → V be a linear transformation. Then N(T) is a finite-dimensional subspace of U.
Suppose dim(U) = n and dim(N(T)) = k (k ≤ n).
Let B = {u1, u2, ..., uk} be a basis of N(T); then for every ui ∈ N(T),
T(ui) = 0, i = 1, 2, ..., k.
We extend the set B to a basis of U. Let
B1 = {u1, u2, ..., uk, uk+1, uk+2, ..., un} be a basis of U.
Since T(ui) = 0 for each i = 1, 2, ..., k, we consider the set
B2 = {T(uk+1), ..., T(un)}.
To prove the theorem it is enough to prove that B2 is a basis of R(T), because then
dim(R(T)) = n − k = dim(U) − dim(N(T)),
i.e., dim(U) = dim(R(T)) + dim(N(T)).
Now to show B2 is a basis of R(T), it is sufficient to prove that:
(i) B2 is a linearly independent set,
(ii) [B2] = R(T).
To prove (i), let ak+1, ak+2, ..., an be scalars such that
ak+1T(uk+1) + ak+2T(uk+2) + ... + anT(un) = 0
⇒ T(ak+1uk+1 + ak+2uk+2 + ... + anun) = 0   (∵ T is linear)
⇒ ak+1uk+1 + ak+2uk+2 + ... + anun ∈ N(T)
⇒ ak+1uk+1 + ak+2uk+2 + ... + anun can be uniquely expressed as a linear combination of the basis B of N(T). Let b1, b2, ..., bk be scalars; then
ak+1uk+1 + ak+2uk+2 + ... + anun = b1u1 + b2u2 + ... + bkuk
⇒ (−b1)u1 + (−b2)u2 + ... + (−bk)uk + ak+1uk+1 + ... + anun = 0.
Since B1 is a basis of U, the vectors u1, u2, ..., uk, uk+1, uk+2, ..., un are linearly independent. This implies that
b1 = 0, b2 = 0, ..., bk = 0, ak+1 = 0, ..., an = 0
⇒ ak+1 = 0, ak+2 = 0, ..., an = 0
⇒ {T(uk+1), T(uk+2), ..., T(un)} is a set of linearly independent vectors.
To prove (ii): since B1 is a basis of U, [B1] = U, so by Theorem 2,
R(T) = [T(u1), T(u2), ..., T(uk), T(uk+1), ..., T(un)],
but T(ui) = 0 for i = 1, 2, ..., k. This implies
R(T) = [T(uk+1), T(uk+2), ..., T(un)].
Hence B2 is a basis of R(T), and therefore
dim(R(T)) + dim(N(T)) = dim(U), i.e., rank(T) + nullity(T) = dim(U).
EXAMPLE 1: Let T : V3 → V4 be a linear transformation defined as
T(x, y, z) = (x + y + z, x − z, y + z, z);
verify the rank-nullity theorem.
SOLUTION: By the definition of null space,
N(T) = {(x, y, z) ∈ V3 : T(x, y, z) = 0}
T(x, y, z) = (x + y + z, x − z, y + z, z) = (0, 0, 0, 0)
⇒ z = 0, y + z = 0, x − z = 0, x + y + z = 0
⇒ x = 0, y = 0, z = 0
⇒ N(T) = {0}
⇒ nullity(T) = 0.
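The verification in Example 1 can also be checked numerically. The sketch below (an illustration using numpy; writing T as a matrix with respect to the standard bases anticipates the matrix representation taken up in Chapter 5) computes the rank and nullity:

```python
import numpy as np

# Matrix of T(x, y, z) = (x + y + z, x - z, y + z, z)
# with respect to the standard bases of V3 and V4.
A = np.array([
    [1, 1, 1],   # x + y + z
    [1, 0, -1],  # x - z
    [0, 1, 1],   # y + z
    [0, 0, 1],   # z
])

rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank  # rank-nullity: nullity = dim(domain) - rank

print(rank, nullity)  # 3 0, and 3 + 0 = dim(V3)
```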
By the definition of R(T),
R(T) = {(a, b, c, d) ∈ V4 : (a, b, c, d) = T(x, y, z) for (x, y, z) ∈ V3}
= {(x + y + z, x − z, y + z, z) : x, y, z ∈ ℝ}
= {x(1, 1, 0, 0) + y(1, 0, 1, 0) + z(1, −1, 1, 1) : x, y, z ∈ ℝ}
⇒ B = {(1, 1, 0, 0), (1, 0, 1, 0), (1, −1, 1, 1)} is a basis of R(T)
⇒ dim(R(T)) = 3 ⇒ rank(T) = 3
rank(T) + nullity(T) = 3 + 0 = 3 = dim(V3).
Hence the rank-nullity theorem is verified.
EXAMPLE 2: Let T : V4 → V3 be a linear transformation such that
T(x, y, z, t) = (x − y + z, y − z, z + t).
Verify the rank-nullity theorem.
SOLUTION: By the definition of null space,
N(T) = {(x, y, z, t) ∈ V4 : T(x, y, z, t) = (0, 0, 0)}
T(x, y, z, t) = 0
⇒ (x − y + z, y − z, z + t) = (0, 0, 0)
⇒ x − y + z = 0, y − z = 0, z + t = 0
⇒ z = −t, y = z = −t, x = y − z = −t + t = 0
⇒ T(0, −t, −t, t) = (0, 0, 0)
⇒ N(T) = {(0, −t, −t, t) : t ∈ ℝ}
⇒ nullity(T) = 1.
By the definition of range space,
R(T) = {(a, b, c) ∈ V3 : (a, b, c) = T(x, y, z, t)}
= {(x − y + z, y − z, z + t) : x, y, z, t ∈ ℝ}
= {x(1, 0, 0) + y(−1, 1, 0) + z(1, −1, 1) + t(0, 0, 1) : x, y, z, t ∈ ℝ}
⇒ R(T) = [(1, 0, 0), (−1, 1, 0), (1, −1, 1), (0, 0, 1)],
but S = {(1, 0, 0), (−1, 1, 0), (1, −1, 1), (0, 0, 1)} is a linearly dependent set, because S contains four vectors in a three-dimensional space. We select a set of linearly independent vectors from S. Writing the vectors as the columns of a matrix,
A =
| 1 −1  1  0 |
| 0  1 −1  0 |
| 0  0  1  1 |
and rank(A) = 3.
⇒ {(1, 0, 0), (−1, 1, 0), (0, 0, 1)} or {(1, 0, 0), (−1, 1, 0), (1, −1, 1)} is a set of LI vectors.
This implies that dim(R(T)) = 3, and
dim(R(T)) + dim(N(T)) = 3 + 1 = 4 = dim(V4).
Theorem 4.7: Let U be an n-dimensional vector space and T : U → V be an onto linear transformation. Then T is one-one iff dim(V) = n.
Proof: Let U be an n-dimensional vector space and T : U → V be an onto linear transformation. Suppose T is one-one; we show that dim(V) = n.
Since T is onto ⇒ rank(T) = dim(V), and T is one-one ⇒ nullity(T) = 0.
Then by the rank-nullity theorem,
rank(T) + nullity(T) = dim(U)
⇒ dim(V) + 0 = n
⇒ dim(V) = n.
Hence proved.
Conversely, let dim(V) = n. Since T is onto,
⇒ rank(T) = n.
By the rank-nullity theorem,
rank(T) + nullity(T) = dim(U)
⇒ n + nullity(T) = n
⇒ nullity(T) = 0
⇒ T is a one-one linear transformation.
Note: If T : ℝⁿ → ℝᵐ is a linear transformation, then
(i) if n > m, T cannot be a one-one linear transformation;
(ii) if n < m, T cannot be an onto linear transformation.
EXAMPLE 3: Let T : M2(ℝ) → M2(ℝ) be a linear transformation defined by
T([a b; c d]) = [a − c, b; a − b, b − d].
Verify the rank-nullity theorem.
SOLUTION: By the definition of null space,
N(T) = {[a b; c d] ∈ M2(ℝ) : T([a b; c d]) = [0 0; 0 0]}
T([a b; c d]) = [a − c, b; a − b, b − d] = [0 0; 0 0]
⇒ a − c = 0, b = 0, a − b = 0, b − d = 0
⇒ b = 0, a = 0, c = 0, d = 0
⇒ N(T) = {[0 0; 0 0]} = {0}
⇒ nullity(T) = 0.
By the definition of range space,
R(T) = {[x y; z t] ∈ M2(ℝ) : [x y; z t] = T([a b; c d])}
= {[a − c, b; a − b, b − d] : a, b, c, d ∈ ℝ}
= {a[1 0; 1 0] + b[0 1; −1 1] + c[−1 0; 0 0] + d[0 0; 0 −1] : a, b, c, d ∈ ℝ}
⇒ R(T) = [[1 0; 1 0], [0 1; −1 1], [−1 0; 0 0], [0 0; 0 −1]]
⇒ dim(R(T)) = 4, because [1 0; 1 0], [0 1; −1 1], [−1 0; 0 0] and [0 0; 0 −1] are LI vectors.
Hence rank(T) + nullity(T) = 4 + 0 = 4 = dim(M2(ℝ)).
EXAMPLE 4: Let T : Mn(ℝ) → Mn(ℝ) be a linear transformation given by
T(A) = A − Aᵀ.
Verify the rank-nullity theorem.
SOLUTION: By the definition of null space,
N(T) = {A ∈ Mn(ℝ) : T(A) = 0}
= {A ∈ Mn(ℝ) : A − Aᵀ = 0}
= {A ∈ Mn(ℝ) : Aᵀ = A}
= {space of symmetric matrices}
⇒ dim(N(T)) = n(n + 1)/2
⇒ nullity(T) = n(n + 1)/2.
Similarly, by the definition of range space,
R(T) = {B ∈ Mn(ℝ) : B = T(A) = A − Aᵀ, A ∈ Mn(ℝ)}
= {A − Aᵀ : A ∈ Mn(ℝ)}.
Since for any matrix A ∈ Mn(ℝ), (A − Aᵀ) is a skew-symmetric matrix, therefore
R(T) = {space of skew-symmetric matrices}
⇒ dim(R(T)) = n(n − 1)/2
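The dimensions in Example 4 can be checked numerically for a small n. The sketch below (a numpy illustration, not part of the original text) builds the n² × n² matrix of T(A) = A − Aᵀ acting on the standard basis matrices E_ij and reads off its rank and nullity:

```python
import numpy as np
from itertools import product

n = 3
# Column j of M is the flattened image (E - E^T) of the j-th
# standard basis matrix E of M_n(R).
cols = []
for i, j in product(range(n), repeat=2):
    E = np.zeros((n, n))
    E[i, j] = 1.0
    cols.append((E - E.T).flatten())
M = np.column_stack(cols)

rank = np.linalg.matrix_rank(M)    # dim of skew-symmetric matrices
nullity = n * n - rank             # dim of symmetric matrices

print(rank, nullity)  # 3 6, i.e. n(n-1)/2 and n(n+1)/2 for n = 3
```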
⇒ rank(T) = n(n − 1)/2
rank(T) + nullity(T) = n(n − 1)/2 + n(n + 1)/2 = n² = dim(Mn(ℝ)).
Hence the rank-nullity theorem is verified.
Theorem 4.8: Let U and V be two n-dimensional vector spaces and T : U → V be a linear transformation. Then T is one-one if and only if it is onto.
Proof: Let U, V be two n-dimensional vector spaces and T : U → V be a linear transformation. Then
T is one-one ⇔ N(T) = {0}
⇔ nullity(T) = 0
⇔ rank(T) = n = dim(U)
⇔ rank(T) = dim(V)
⇔ R(T) = V
⇔ T is onto.
Theorem 4.9: Let U and V be n-dimensional vector spaces and T : U → V be a linear transformation. Then the following statements are equivalent:
(i) T is one-one.
(ii) T transforms a set of linearly independent vectors of U into a set of linearly independent vectors of V.
(iii) T transforms every basis of U into a basis of V.
(iv) T is onto.
(v) R(T) = V, i.e., rank(T) = n.
(vi) N(T) = {0}, i.e., nullity(T) = 0.
Proof: The reader can prove that all the statements given above are equivalent by using the previous theorems.

EXERCISE 4.2
1. Determine the null space and range space of the following linear transformations between the given vector spaces. Further, find the rank and nullity of T.
(i) T : ℝ³ → ℝ³, T(x, y, z) = (x, y + z, x − z)
(ii) T : ℝ³ → ℝ², T(x, y, z) = (x + y, x + y − z)
(iii) T : ℝ³ → ℝ⁴, T(x, y, z) = (0, x − y, y − z, z)
(iv) T : ℝ² → M2(ℝ), T(x, y) = [−x, y; y − x, 0]
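Theorem 4.8 says that for a square matrix representation (dim U = dim V), injectivity and surjectivity coincide. A small numerical sketch (a numpy illustration under the usual matrix-representation convention) makes the equivalence concrete:

```python
import numpy as np

def is_one_one(A):
    # T is one-one iff N(T) = {0}, i.e. the columns of A are independent.
    return np.linalg.matrix_rank(A) == A.shape[1]

def is_onto(A):
    # T is onto iff R(T) is the whole codomain, i.e. rank equals its dim.
    return np.linalg.matrix_rank(A) == A.shape[0]

# For a square matrix the two conditions stand or fall together.
A = np.array([[1.0, 2.0], [3.0, 4.0]])   # invertible: both hold
B = np.array([[1.0, 2.0], [2.0, 4.0]])   # singular: neither holds

print(is_one_one(A), is_onto(A))  # True True
print(is_one_one(B), is_onto(B))  # False False
```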
(v) T : ℝ² → M2(ℝ), T(x, y) = [0, x + y; y − x, x]
(vi) T : ℝ³ → ℝ², T(x, y, z) = (x + y − z, x − 2y + 3z)
(vii) T : M2(ℝ) → ℝ, T(A) = Trace(A)
(viii) T : Mn(ℝ) → Mn(ℝ), T(A) = (A + Aᵀ)
(ix) T : Pn[x] → P[x], T(p)(x) = p″(x) + 4p(x)
(x) T : P3[x] → P3[x], T(p)(x) = p‴(x)
(xi) T : f(0, 1) → f(0, 1), T(f(x)) = f″(x) e^(2x)
(xii) T : Mn(ℝ) → ℝ, T(A) = a11, where A = [aij] ∈ Mn(ℝ)
(xiii) T : Pn[x] → ℝ, T(p)(x) = p(0)
(xiv) T : P2[x] → P2[x], T(p)(x) = p(x + 1)
(xv) T : ℝ² → ℝ, T(x, y) = x − y.
2. Prove that Tθ : ℝ² → ℝ², given by
Tθ(x, y) = (x cos θ − y sin θ, x sin θ + y cos θ),
is a linear operator. Further, also prove that Tθ is one-one and onto.
3. Let T : ℝ³ → ℝ³, defined by
T(x, y, z) = (a1x + b1y + c1z, a2x + b2y + c2z, a3x + b3y + c3z),
be a linear operator for fixed values of the scalars ai, bi and ci, i = 1, 2, 3. Further, show that T is one-one if and only if
| a1 b1 c1 |
| a2 b2 c2 |
| a3 b3 c3 | ≠ 0.
4. Determine a linear transformation from ℝ² to ℝ³ whose image is spanned by (1, 2, 1).
5. Determine a linear transformation T : ℝ³ → ℝ² whose null space is spanned by (1, 1, 1).
6. True or false:
(i) If T : ℝ → ℝ is such that T(x) = |x|, then T is a linear map.
(ii) Let T : U → V be a linear transformation. Then T is one-one if T(u) = 0_U for all u ∈ U.
(iii) Let T : U → V be a linear transformation. Then T is one-one if T(u) = 0_V for all u ∈ U.
(iv) There is an onto linear transformation from ℝ² to ℝ.
(v) Every linear transformation from ℝ² to ℝ² is one-one.
(vi) There is no linear transformation from V3 to V2 which is onto.
(vii) There is a one-one linear transformation from V3 to V2.
(viii) Every linear transformation from V2 to V2 is onto.
(ix) There exists a linear transformation from V2 to V2 such that its rank is zero.
⇒ x1 = −7x3, x2 = −4x3
N(T) = {(−7x3, −4x3, x3) : x3 ∈ ℝ}
= {x3(−7, −4, 1) : x3 ∈ ℝ}
⇒ N(T) = [(−7, −4, 1)]
⇒ dim(N(T)) = 1
⇒ nullity(T) = 1.
Range space:
R(T) = {(a, b) : (a, b) = T(x, y, z)}
= {(x − 2y − z, −x + y − 3z) : x, y, z ∈ ℝ}
= {x(1, −1) + y(−2, 1) + z(−1, −3) : x, y, z ∈ ℝ}
⇒ R(T) = [(1, −1), (−2, 1), (−1, −3)]
rank(T) = dim(R(T)) ≤ 2, since R(T) ⊆ V2.
Since (1, −1), (−2, 1) and (−1, −3) are LD vectors in V2, any two of them, e.g. {(1, −1), (−2, 1)}, {(−2, 1), (−1, −3)} or {(1, −1), (−1, −3)}, form a set of LI vectors.
Therefore, rank(T) = 2.
Theorem 4.10: Let U and V be finite-dimensional vector spaces over a field 𝔽 with dim(U) = n and dim(V) = m. Then L(U, V) is also a finite-dimensional vector space over 𝔽, with dim(L(U, V)) = mn.
Proof: Let U and V be finite-dimensional vector spaces with dim(U) = n and dim(V) = m. Then L(U, V) is the vector space of linear transformations from U to V. We show that dim(L(U, V)) = mn.
To prove this result we use here a theorem which is given in Chapter 5:
L(U, V) ≅ M(m×n)(𝔽)
⇒ dim(L(U, V)) = dim(M(m×n)(𝔽))
⇒ dim(L(U, V)) = mn.
Hence proved.
Alternate proof: Let dim(U) = n and dim(V) = m. Then, with respect to fixed bases, a linear transformation from U to V has the following general form:
T(x1, x2, ..., xn) = (Σᵢ a1i xi, Σᵢ a2i xi, ..., Σᵢ ami xi)   (sums over i = 1, ..., n)
=
| a11 a12 ... a1n | | x1 |
| a21 a22 ... a2n | | x2 |
|  ⋮   ⋮       ⋮  | |  ⋮ |
| am1 am2 ... amn | | xn |,   aij ∈ 𝔽 for i = 1, ..., m, j = 1, ..., n   ...(1)
L(U, V) = {set of all linear transformations from U to V}.
4.6 ISOMORPHISM
Definition
Let U and V be two vector spaces over the same field 𝔽 and T : U → V be a linear transformation. Then T is called an isomorphism of U onto V if T is both one-one and onto. In this case U and V are said to be isomorphic, denoted by U ≅ V.
EXAMPLE 1: Let T : V3 → V3 be a linear transformation such that
T(e1) = e1 − e2 + e3, T(e2) = e2 + e3, T(e3) = e1 − e3.
Check whether T is an isomorphism or not.
SOLUTION: Let T : V3 → V3 be a linear transformation such that
T(e1) = e1 − e2 + e3, T(e2) = e2 + e3, and T(e3) = e1 − e3.
Let (x, y, z) ∈ V3; then
(x, y, z) = x e1 + y e2 + z e3
⇒ T(x, y, z) = x T(e1) + y T(e2) + z T(e3)
= x(e1 − e2 + e3) + y(e2 + e3) + z(e1 − e3)
= x(1, −1, 1) + y(0, 1, 1) + z(1, 0, −1)
⇒ T(x, y, z) = (x + z, −x + y, x + y − z).
First we find N(T). By the definition,
N(T) = {(x, y, z) ∈ V3 : T(x, y, z) = (0, 0, 0)}
T(x, y, z) = (0, 0, 0)
⇒ (x + z, −x + y, x + y − z) = (0, 0, 0)
⇒ x + z = 0, −x + y = 0, x + y − z = 0
⇒ x = y, z = −x, and substituting, 3x = 0
⇒ x = 0, y = 0, z = 0
N(T) = {(0, 0, 0)} = {0}
⇒ T is one-one.
Since dim(U) = dim(V), T is also onto (Theorem 4.8).
Hence T is one-one and onto, i.e., an isomorphism.
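For a map between spaces of the same finite dimension, the isomorphism check of Example 1 reduces to a nonzero determinant of the matrix of T. A quick numpy sketch (an added numerical check, using the matrix of T in the standard basis):

```python
import numpy as np

# Matrix of T from Example 1: T(x, y, z) = (x + z, -x + y, x + y - z),
# rows are the coordinate functions of T.
A = np.array([
    [1, 0, 1],
    [-1, 1, 0],
    [1, 1, -1],
])

# T is an isomorphism iff its matrix is invertible (nonzero determinant).
d = np.linalg.det(A)
print(round(d))  # -3: nonzero, so N(T) = {0} and T is one-one and onto
```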
EXAMPLE 2: Show that ℝ²(ℝ) is isomorphic to ℂ(ℝ).
SOLUTION: ℝ²(ℝ) ≅ ℂ(ℝ) because there exists an isomorphism between them. The linear transformation given below,
T(x, y) = x + iy, x, y ∈ ℝ,
is an isomorphism between ℝ²(ℝ) and ℂ(ℝ), because
T(x, y) = 0
⇒ x + iy = 0 + i0
⇒ x = 0, y = 0
⇒ N(T) = {0}
⇒ T is one-one.
Since dim(ℝ²(ℝ)) = dim(ℂ(ℝ)) = 2, and T is one-one ⇔ T is onto,
hence ℝ²(ℝ) ≅ ℂ(ℝ).
EXAMPLE 3: Let T : ℝ² → ℝ² be a linear transformation defined as
T(x, y) = (ax + by, cx + dy) for a, b, c, d ∈ ℝ.
Then T is an isomorphism if and only if
| a b |
| c d | ≠ 0, i.e., ad − bc ≠ 0.
SOLUTION: Let T(x, y) = (ax + by, cx + dy) be a linear transformation from ℝ² to itself. T is an isomorphism iff T is one-one and onto, and
T is one-one ⇔ N(T) = {0}
⇔ T(x, y) = (0, 0) implies (x, y) = (0, 0)
⇔ (ax + by, cx + dy) = (0, 0), i.e., ax + by = 0, cx + dy = 0, has only the trivial solution
⇔ [a b; c d][x; y] = [0; 0] has the unique solution (x, y) = (0, 0)
⇔ | a b; c d | ≠ 0
⇔ ad − bc ≠ 0.
EXAMPLE 4: Show that every vector space is isomorphic to itself.
SOLUTION: Let V be a vector space over a field 𝔽. Then T(u) = u is the identity map. This map is one-one and onto. Hence it is an isomorphism from V to itself.
EXAMPLE 5: Prove that V4 is isomorphic to M2(ℝ).
SOLUTION: V4 ≅ M2(ℝ) if there exists an isomorphism between them. Consider the linear transformation T : V4 → M2(ℝ) defined as
T(x, y, z, t) = [x y; z t].
We claim that T is an isomorphism between V4 and M2(ℝ).
Since T(x, y, z, t) = [0 0; 0 0]
⇒ [x y; z t] = [0 0; 0 0]
⇒ x = y = z = t = 0
⇒ N(T) = {0}
⇒ T is one-one.
Since dim(V4) = dim(M2(ℝ)) = 4 and T is one-one, T is also onto.
Hence T is an isomorphism between V4 and M2(ℝ), i.e., V4 ≅ M2(ℝ).
Theorem 4.13: Let U be an n-dimensional vector space, let B = {u1, u2, ..., un} be any basis of U, and let V be another vector space over the same field 𝔽. Then T : U → V is an isomorphism if and only if {Tu1, Tu2, ..., Tun} forms a basis of V.
Proof: Let U be an n-dimensional vector space and V be another vector space over the same field 𝔽. Suppose B = {u1, u2, ..., un} is a basis of U and T : U → V is an isomorphism between U and V. We show that {Tu1, Tu2, ..., Tun} forms a basis of V.
Since T is an isomorphism ⇔ T is one-one and onto, T maps every basis of U onto a basis of V (Corollary 1 of Theorem 4.5). Hence {Tu1, Tu2, ..., Tun} is a basis of V.
Conversely, let {Tu1, Tu2, ..., Tun} form a basis of V. This implies that
rank(T) = n = dim(U),
and this implies that nullity(T) = 0 ⇔ T is one-one.
Now rank(T) = n = dim(V) ⇒ T is onto.
⇒ T is one-one and onto. Hence T is an isomorphism.
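The isomorphism of Example 5 is, in computational terms, just a reshape of coordinates. A short Python sketch (an added illustration; `T` and `T_inv` are names chosen here, not the book's):

```python
import numpy as np

def T(v):
    # The isomorphism of Example 5: (x, y, z, t) -> [[x, y], [z, t]].
    return np.asarray(v).reshape(2, 2)

def T_inv(M):
    # Inverse map: read the entries back row by row.
    return np.asarray(M).flatten()

v = np.array([1, 2, 3, 4])
M = T(v)
print(M.tolist())         # [[1, 2], [3, 4]]
print(T_inv(M).tolist())  # [1, 2, 3, 4]: T is one-one and onto
```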
Corollary 1: Let U and V be two finite-dimensional vector spaces over the same field 𝔽. Then U ≅ V if and only if dim(U) = dim(V).
PROOF: Let U and V be two finite-dimensional vector spaces over the same field 𝔽. Suppose U ≅ V. This implies that there exists a linear transformation T : U → V which is one-one and onto. For such a T, by the consequence of the rank-nullity theorem, dim(U) = dim(V).
Conversely, let dim(U) = dim(V) = n (say). Choose bases {u1, ..., un} of U and {v1, ..., vn} of V, and define T : U → V by T(ui) = vi. This transformation is an isomorphism between U and V.
Hence U ≅ V.
Theorem 4.14: Let U and V be two vector spaces over the same field 𝔽 and T : U → V be a linear transformation. Then
U/N(T) ≅ R(T).
Proof: Let U and V be two vector spaces over the same field 𝔽 and T : U → V be a linear transformation. We show that
U/N(T) ≅ R(T).
For this, define
S : U/N(T) → V such that S(u + N(T)) = T(u).
Writing N = N(T), this reads
S(u + N) = T(u)
(∵ U/N(T) = {u + N : u ∈ U} is a quotient space).
To prove the theorem, it is sufficient to prove that S is linear, one-one and onto R(T).
First, we show that S is a linear map. For this let (u1 + N), (u2 + N) ∈ U/N, where u1, u2 ∈ U, and let α1, α2 be scalars. Then
S[α1(u1 + N) + α2(u2 + N)] = S[α1u1 + α2u2 + N]
= T(α1u1 + α2u2)
= α1T(u1) + α2T(u2)
= α1S(u1 + N) + α2S(u2 + N)
⇒ S is a linear map.
Now to show that S is one-one, let u1 + N, u2 + N ∈ U/N.
For u1, u2 ∈ U such that
S(u1 + N) = S(u2 + N)
⇒ T(u1) = T(u2)
⇒ T(u1) − T(u2) = 0
⇒ T(u1 − u2) = 0_V
⇒ u1 − u2 ∈ N
⇒ u1 + N = u2 + N.
Hence S is one-one. Or, more simply,
S(u + N) = 0 ⇒ T(u) = 0 ⇒ u ∈ N ⇒ u + N = N,
so the null space of S is {N}, the zero coset, which is the additive identity of U/N. Hence S is one-one.
Since S(u + N) = T(u) for u ∈ U,
⇒ range space of S = range space of T
⇒ dim(R(S)) = dim(R(T))
⇒ S is onto R(T).
It is clear that S is an isomorphism between U/N(T) and R(T). Hence
U/N(T) ≅ R(T).
Corollary 2: Let U and V be two vector spaces over the same field 𝔽 and T : U → V be an onto linear transformation. Then
U/N(T) ≅ V.
PROOF: Consider the map
S(u + N(T)) = T(u)
from U/N(T) to V. It is clear that S is a linear map and is one-one onto R(T). Since T is onto, this implies R(T) = V. Hence by Theorem 4.14,
U/N(T) ≅ R(T) = V, i.e., U/N(T) ≅ V.

EXERCISE 4.4
1. Let W1 and W2 be two subspaces of a vector space V. Then show that
(W1 + W2)/W1 ≅ W2/(W1 ∩ W2).
2. For any field 𝔽, show that 𝔽^(mn) ≅ M(m×n)(𝔽).
3. Show that T : Mn(ℝ) → Mn(ℝ), defined as T(A) = Aᵀ, is an isomorphism.
4. Let T : P2 → ℝ³ be a linear transformation defined as T(a0 + a1x + a2x²) = (a0, a1, a2). Then show that T is an isomorphism.
5. Give an example of a linear transformation T from ℝ³ to ℝ³ such that
(i) T is one-one but not onto.
(ii) T is onto but not one-one.
(iii) T is both one-one and onto.
(iv) T is neither one-one nor onto.
6. True or False:
(i) 𝔽³ ≅ M3(ℝ)
(ii) ℂ²(ℝ) ≅ ℝ⁴(ℝ)
(iii) ℂ(ℂ) ≅ ℝ(ℝ)
(iv) Every vector space is isomorphic to itself.
(v) Any quotient space of V is isomorphic to a subspace of V.

4.7 INVERSE OF LINEAR TRANSFORMATION
The inverse of a linear transformation is analogous to the inverse of a matrix belonging to Mn(𝔽). In finding the inverse of a linear transformation, its one-one and onto properties play the major role.

Definition
1. A linear transformation T : V → V is said to be non-singular if T is one-one and onto, i.e., if T is an isomorphism.
2. Let T : V → V be a linear operator. Then T is invertible iff T is non-singular.
3. Let T : V → V be a linear operator. Then T is invertible if there exists an operator S : V → V such that
TS = ST = I_V,
where I_V is the identity operator on V.
Further, S is called the inverse of T; it is usually denoted by T⁻¹, where T⁻¹ : V → V is also a linear operator.
EXAMPLE 1: Let T : V2 → V2 such that T(x, y) = (x + y, x − y) be a linear transformation. Then show that T is invertible and T⁻¹(x, y) = ((x + y)/2, (x − y)/2).
SOLUTION: The linear map T : V2 → V2 defined as
T(x, y) = (x + y, x − y)
is one-one and onto because
T(x, y) = (0, 0)
⇒ (x + y, x − y) = (0, 0)
⇒ x + y = 0, x − y = 0
⇒ x = 0 = y
⇒ N(T) = {0}
⇒ T⁻¹ exists (by Theorem 4.8).
TT⁻¹(x, y) = T[T⁻¹(x, y)] = T((x + y)/2, (x − y)/2)
= ((x + y)/2 + (x − y)/2, (x + y)/2 − (x − y)/2)
= (x, y) = I(x, y)
T⁻¹T(x, y) = T⁻¹[T(x, y)] = T⁻¹(x + y, x − y)
= ((x + y + x − y)/2, (x + y − x + y)/2)
= (x, y) = I(x, y).
⇒ T⁻¹(x, y) = ((x + y)/2, (x − y)/2) is the inverse of T.
EXAMPLE 2: Let T : V3 → V3 be a linear map defined as T(x, y, z) = (x + y + z, x + y, x). Is T invertible? If yes, then find T⁻¹.
SOLUTION: T is a linear operator from V3 to V3. T⁻¹ exists iff T is one-one. So first we check whether T is one-one; for this we find N(T):
T(x, y, z) = (0, 0, 0)
⇒ (x + y + z, x + y, x) = (0, 0, 0)
⇒ x = 0 = y = z
⇒ N(T) = {0}
⇒ T is one-one
⇒ T⁻¹ exists.
Suppose T⁻¹(a, b, c) = (x, y, z)
⇒ (a, b, c) = T(x, y, z) = (x + y + z, x + y, x)
⇒ x = c, y = b − c, z = a − b
⇒ T⁻¹(a, b, c) = (c, b − c, a − b) is the inverse of T.
EXAMPLE 3: Let T : V3 → V3 be a linear operator such that T(e1) = e1 + e2 − e3, T(e2) = e2 + e3, T(e3) = e1 − e3. Is T invertible? If yes, determine it.
SOLUTION: Let (x, y, z) ∈ V3; then
(x, y, z) = x·e1 + y·e2 + z·e3
⇒ T(x, y, z) = x T(e1) + y T(e2) + z T(e3)
⇒ T(x, y, z) = x(e1 + e2 − e3) + y(e2 + e3) + z(e1 − e3)
= (x + z, x + y, −x + y − z).
T is invertible iff T is one-one, i.e., iff N(T) = {0}. For this we find N(T); by the definition of N(T),
N(T) = {(x, y, z) ∈ V3 : T(x, y, z) = (0, 0, 0)}
= {(x, y, z) ∈ V3 : (x + z, x + y, −x + y − z) = (0, 0, 0)}
⇒ x + z = 0,
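The inverse found in Example 2 can be cross-checked by inverting the matrix of T. A numpy sketch (an added numerical check, using the standard-basis matrix of T):

```python
import numpy as np

# Matrix of T(x, y, z) = (x + y + z, x + y, x) in the standard basis.
A = np.array([
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
])

A_inv = np.linalg.inv(A)

# T^{-1}(a, b, c) = (c, b - c, a - b) corresponds to this matrix:
expected = np.array([
    [0, 0, 1],
    [0, 1, -1],
    [1, -1, 0],
])
print(np.allclose(A_inv, expected))  # True
```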
this implies
dp/dx = 0 ⇒ p(x) is constant, and a constant polynomial in U must vanish (since q(x) ∈ U means q(0) = 0), so p(x) = 0.
Hence N(D) = {0}
⇒ D is one-one.
Now to show D is onto: for every p(x) ∈ P[x] there exists q(x) ∈ U such that
dq(x)/dx = p(x).
So D is invertible. Suppose D⁻¹ : P[x] → U is defined as
D⁻¹(p(x)) = q(x), where q(x) ∈ U.
Operating D on both sides,
p(x) = D(q(x)) = dq/dx.
Integrating dq/dx = p(x) from 0 to x,
∫₀ˣ (dq/dx) dx = ∫₀ˣ p(x) dx
⇒ q(x)|₀ˣ = ∫₀ˣ p(x) dx
⇒ q(x) − q(0) = ∫₀ˣ p(x) dx.
Since q(x) ∈ U ⇒ q(0) = 0, this implies that
q(x) = ∫₀ˣ p(t) dt
⇒ D⁻¹(p(x)) = q(x) = ∫₀ˣ p(t) dt = I(p(x)).
Hence I(p(x)) = ∫₀ˣ p(t) dt is the inverse of D.
EXAMPLE 5: Let T : P2[x] → P2[x] be a linear transformation defined as
T(a0 + a1x + a2x²) = (a0 + a1) + (a1 − 2a2)x + (a0 − a1 + 2a2)x².
Is T invertible? If yes, find it.
SOLUTION: T is invertible iff T is one-one, because T is an operator on a finite-dimensional space. For this,
N(T) = {p(x) ∈ P2[x] : T(p(x)) = 0}
T(p(x)) = 0
⇒ T(a0 + a1x + a2x²) = 0 + 0x + 0x²
⇒ (a0 + a1) + (a1 − 2a2)x + (a0 − a1 + 2a2)x² = 0 + 0x + 0x²
⇒ a0 + a1 = 0 ...(1)
a1 − 2a2 = 0 ...(2)
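The fact that integration from 0 inverts differentiation on U can be illustrated with numpy's polynomial class (an added sketch; `integ(lbnd=0)` chooses the integration constant so the antiderivative vanishes at 0, i.e. lands in U):

```python
import numpy as np
from numpy.polynomial import Polynomial

# p(x) = 1 + 2x + 3x^2, an arbitrary element of P[x].
p = Polynomial([1, 2, 3])

# D^{-1}: integrate from 0 to x, so the result q satisfies q(0) = 0.
q = p.integ(lbnd=0)          # q(x) = x + x^2 + x^3

print(np.allclose(q.deriv().coef, p.coef))  # True: D(D^{-1} p) = p
print(q(0.0))                               # 0.0: q lies in U
```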
4. Let T : V → W be a linear transformation of vector spaces. Prove the following:
(i) If {v1, v2, ..., vk} spans V and T is onto, then {Tv1, Tv2, ..., Tvk} spans W.
(ii) If {v1, v2, ..., vk} is linearly independent in V and T is one-one, then {Tv1, Tv2, ..., Tvk} is linearly independent in W.
(iii) If {v1, v2, ..., vk} is a basis of V and T is bijective, then {Tv1, Tv2, ..., Tvk} is a basis of W.
5. Let {v1, v2, v3} be a basis of a vector space V over ℝ. Let T : V → V be the linear transformation determined by Tv1 = v1, Tv2 = v2 − v3 and Tv3 = v2 + 2v3. Find the matrix of the transformation T with {v1 + v2, v1 − v2, v3} as a basis of both the domain and the co-domain of T.
6. Let W be a three-dimensional vector space over ℝ and let S : W → W be a linear transformation; further assume that every non-zero vector of W is an eigenvector of S. Prove that there exists α ∈ ℝ s.t. S = αI, where I : W → W is the identity transformation.
7. Let v1 and v2 be non-zero vectors in ℝⁿ, n ≥ 3, s.t. v2 is not a scalar multiple of v1. Prove that there exists a linear transformation T : ℝⁿ → ℝⁿ s.t. T³ = T, Tv1 = v2 and T has at least three distinct eigenvalues.
8. Give an example of a linear transformation T : ℝ² → ℝ² s.t. T²(v) = −v for all v ∈ ℝ².
9. Let T : ℝ³ → ℝ² be the linear transformation defined by T(x, y, z) = (x + 2y, x − z). Let N(T) be the null space of T and
W = {v ∈ ℝ³ : v · u = 0 for all u ∈ N(T)}.
Find a linear transformation S : ℝ² → W s.t. TS = I, where I is the identity transformation on ℝ².
10. Let V be the subspace of ℝ⁴ spanned by the vectors (1, 0, 1, 2), (2, 1, 3, 4) and (3, 1, 4, 6). Let T : V → ℝ² be a linear transformation given by
T(x, y, z, t) = (x − y, z − t) for all (x, y, z, t) ∈ V.
Find a basis for the null space of T and also a basis for the range space of T.

OBJECTIVE TYPE QUESTIONS
1.
Given a 4 × 4 real matrix A, let T : R4 → R4 be the linear transformation defined by Tv = Av,, where we think of R4 as the set of real 4 × 1 matrices for which choices of A given below,, do image (T) and Image (T2) have respective dimensions 2 and 1? {* denotes a non-zero entry)., , LM0, 0, (a) A = M, MM0, N0, LM0, 0, (c) A = M, MM0, N0, , 0, , *, , 0, , *, , 0, , 0, , 0, , 0, , 0, , 0, , 0, , 0, , 0, , 0, , 0, , *, , OP, P, *P, P, 0Q, 0O, 0PP, *P, P, *Q, *, *, , (b), , (d), , LM0, 0, A= M, MM0, N0, LM0, 0, A= M, MM0, N0, , 0, , *, , 0, , *, , 0, , 0, , 0, , 0, , 0, , 0, , 0, 0, , 0, *, , 0, , *, , OP, 0P, *P, P, *Q, 0O, 0PP, *P, P, *Q, 0
2. Which of the following is a linear transformation from ℝ³ to ℝ²?
(a) f(x, y, z) = (xy, x + y)
(b) g(x, y, z) = (x⁴, x + y)
(c) h(x, y, z) = (z − x, x + y)
3. For a positive integer n, let Pn denote the vector space of polynomials in one variable x with real coefficients and with degree ≤ n. Consider the map T : P2 → P4 defined by T(p(x)) = p(x²). Then:
(a) T is a linear transformation and dim Range(T) = 5.
(b) T is a linear transformation and dim Range(T) = 3.
(c) T is a linear transformation and dim Range(T) = 2.
(d) T is not a linear transformation.
4. Let the linear transformation T : ℝ² → ℝ³ be defined by T(x1, x2) = (x1, x1 + x2, x2). Then the nullity of T is:
(a) 0 (b) 1 (c) 2 (d) 3
5. Consider the basis S = {v1, v2, v3} for ℝ³, where v1 = (1, 1, 1), v2 = (1, 1, 0) and v3 = (1, 0, 0), and let T : ℝ³ → ℝ² be a linear transformation s.t. Tv1 = (1, 0), Tv2 = (2, −1), Tv3 = (4, 3). Then T(−2, 3, 5) is:
(a) (−1, 5) (b) (3, 4) (c) (0, 0) (d) (9, 23)
6. Let ℝ^(2×2) be the real vector space of all 2 × 2 real matrices. For
Q = | 1 −2 |
    | −2 4 |
define a linear transformation T on ℝ^(2×2) as T(P) = QP. Then the rank of T is:
(a) 1 (b) 2 (c) 3 (d) 4
7. Let V be the vector space of real polynomials of degree at most 2. Define T : V → V by
T(xⁱ) = Σ_{j=0}^{i} xʲ, i = 0, 1, 2.
Then the matrix of T⁻¹ w.r.t. the basis {1, X, X²} is:
(a) | 1 1 1 |      (b) | 1 −1 0 |
    | 1 1 0 |          | 0 1 −1 |
    | 1 0 0 |          | 0 0 1 |
(c) | 1 1 1 |      (d) | 1 0 0 |
    | 0 1 1 |          | 1 1 0 |
    | 0 0 1 |          | 0 −1 1 |
15. Let T : P3 → P3 be the map given by (Tp)(x) = ∫₁ˣ p′(t) dt. If the matrix of T relative to the standard bases B1 = B2 = {1, x, x², x³} is M and M′ denotes the transpose of the matrix M, then M + M′ is:
(a), (c), LM 0, MM−1, MN−−11, LM 2, MM 0, MN−01, −1, −1, 2, 0, 0, 2, 0, 1, 0, 0, 2, 1, 1, 2, 0, −1, OP, 0P, 1P, P, 1Q, −1O, 0 PP, −1P, P, 0Q, (b), (d), LM−1, MM 0, MN 20, LM0, MM2, MN22, 0, 0, −1, 1, 1, −1, 0, 2, 2, 2, −1, 0, 0, −1, 0, 0, OP, 0P, 0P, P, −1Q, 2O, 0 PP, 0P, P, −1Q
16. Let T be an arbitrary linear transformation from ℝⁿ → ℝⁿ which is not one-one. Then:
(a) Rank T > 0 (b) Rank T = n (c) Rank T < n (d) Rank T < n − 1
17. Consider the vector space ℝ³ and the maps f, g : ℝ³ → ℝ³ defined by f(x, y, z) = (x, |y|, z) and g(x, y, z) = (x + 1, y − 1, z). Then:
(a) both f and g are linear (b) neither f nor g is linear
(c) g is linear but not f (d) f is linear but not g
18. Let the linear transformations S, T : ℝ³ → ℝ³ be defined by
S(x, y, z) = (2x, 4x − y, 2x − 3y − z),
T(x, y, z) = (x cos θ + y sin θ, −x sin θ + y cos θ, z),
where 0 < θ < π/2. Then:
(a) S is one-one but not T (b) T is one-one but not S
(c) both S and T are one-one (d) neither S nor T is one-one
19. If the nullity of the matrix
| k  1  2 |
| 1 −1 −2 |
| 1  1  4 |
is 1, then the value of k is:
(a) −1 (b) 0 (c) 1 (d) 2
20. Let the linear transformation T : ℝ² → ℝ³ be defined by T(x1, x2) = (x1, x1 + x2, x2). Then the nullity of T is:
(a) 0 (b) 1 (c) 2 (d) 3
21. Let M be the real vector space of 2 × 3 matrices with real entries. Let T : M → M be defined by
T ( | −x1 x2 x3 | ) = | −x6 x4 x1 |
    |  x4 x5 x6 |     |  x3 x5 x2 |
The determinant of T is:
(a) −1 (b) −2 (c) −2 (d) −4
22. Let V be a vector space of dimension m ≥ 2. Let T : V → V be a linear transformation s.t. T^(n+1) = 0 and Tⁿ ≠ 0 for some n ≥ 1. Then which of the following is necessarily true?
(a) Rank(Tⁿ) ≤ nullity(Tⁿ) (b) Tr(T) ≠ 0
(c) n = m (d) T is diagonalisable
23. Let f : ℝ² → ℝ² be given by
f(x, y) = (x², y² + sin x).
Then the derivative of f at (x, y) is the linear transformation given by:
(a) | 2x     0  |    (b) | 2x  0     |
    | cos x  2y |        | 2y  cos x |
(c) | 2y  cos x |    (d) | 2x  2y    |
    | 2x  0     |        | 0   cos x |
24. Let T : ℝⁿ → ℝⁿ be a linear transformation. Which of the following statements implies that T is bijective?
(a) Nullity(T) = n (b) Rank(T) = Nullity(T) = n
(c) Rank(T) + Nullity(T) = n (d) Rank(T) − Nullity(T) = n
25. Let n be a positive integer and let Mn(ℝ) denote the space of all n × n real matrices. If T : Mn(ℝ) → Mn(ℝ) is a linear transformation s.t. T(A) = 0 whenever A ∈ Mn(ℝ) is symmetric or skew-symmetric, then the rank of T is:
(a) n(n + 1)/2 (b) n(n − 1)/2 (c) n (d) 0
26. Let S : ℝ³ → ℝ⁴ and T : ℝ⁴ → ℝ³ be linear transformations s.t. T∘S is the identity map of ℝ³. Then:
(a) S∘T is the identity map of ℝ⁴ (b) S∘T is one-one, but not onto
(c) S∘T is onto, but not one-one (d) S∘T is neither one-one nor onto
27. Let W be the vector space of all real polynomials of degree at most 3. Define T : W → W by (Tp)(x) = p′(x), where p′ is the derivative of p. The matrix of T in the basis {1, x, x², x³}, considered as column vectors, is given by:
(a), LM0, MM0, N1, −1, 0, 2, OP, −1P, 3 PQ, (b), LM0, MM0, N1, 1, 0, 2, OP, 1P, 3PQ, (c), LM0, MM1, N1, 1, 0, 2, 1O, 1PP, 3PQ, (d), LM 0, MM−1, N3, 1, 0, 2, 2, 1, OP, PP, Q
31. Let Pn be the real vector space of all polynomials of degree at most n. Let D : Pn → Pn−1 and T : Pn → Pn+1 be the linear transformations defined by
D(a0 + a1x + a2x² + ... + anxⁿ) = a1 + 2a2x + ... + nanx^(n−1),
T(a0 + a1x + a2x² + ... + anxⁿ) = a0x + a1x² + a2x³ + ... + anx^(n+1),
respectively. If A is the matrix representation of the transformation DT − TD : Pn → Pn w.r.t. the standard basis of Pn, then the trace of A is:
(a) −n (b) n (c) n + 1 (d) −(n + 1)
32. Let W be a vector space over ℝ and let T : ℝ⁶ → W be a linear transformation s.t. S = {Te2, Te4, Te6} spans W. Which one of the following must be true?
(a) S is a basis of W (b) T(ℝ⁶) ≠ W
(c) {Te1, Te3, Te5} spans W (d) Ker(T) contains more than one element
33. Let T : ℝⁿ → ℝⁿ be a linear transformation, where n ≥ 2. For k ≤ n, let
E = {v1, v2, ..., vk} ⊆ ℝⁿ and F = {Tv1, Tv2, ..., Tvk}. Then:
(a) if E is linearly independent, then F is linearly independent
(b) if F is linearly independent, then E is linearly independent
(c) if E is linearly independent, then F is linearly dependent
(d) if F is linearly independent, then E is linearly dependent.
34. For n ≠ m, let T1 : ℝⁿ → ℝᵐ and T2 : ℝᵐ → ℝⁿ be linear transformations s.t. T1T2 is bijective. Then:
(a) rank(T1) = n and rank(T2) = m (b) rank(T1) = m and rank(T2) = n
(c) rank(T1) = n and rank(T2) = n (d) rank(T1) = m and rank(T2) = m
35. Let T : ℝ² → ℝ² be a linear transformation s.t. T(1, 2) = (2, 3) and T(0, 1) = (1, 4). Then T(5, 6) is:
(a) (6, −1) (b) (−6, 1) (c) (−1, 6) (d) (1, −6)
36. Let T : R³ → R³ be the linear transformation whose matrix w.r.t. the standard basis of R³ is

[[0, a, b], [−a, 0, c], [−b, −c, 0]],

where a, b, c ∈ R and not all zero. Then T:
(a) is one-to-one
(b) is onto
(c) does not map any line through the origin onto itself
(d) has rank 1

37. Let T : R⁴ → R⁴ be a linear transformation satisfying T³ + 3T² = 4I, where I is the identity transformation. Then the linear transformation S = T⁴ + 3T³ − 4I is:
(a) one-one but not onto    (b) onto but not one-one
(c) invertible    (d) non-invertible

38. Let T : R³ → R³ be a linear transformation such that T (1, 2, 3) = (1, 2, 3), T (1, 5, 0) = (2, 10, 0) and T (−1, 2, −1) = (−3, 6, −3). The dimension of the vector space spanned by all the eigenvectors of T is:
(a) 0    (b) 1    (c) 2    (d) 3

39. Let T : R³ → R³ be defined by T (x1, x2, x3) = (x1 − x2, x1 − x2, 0). If N (T) and R (T) denote the null space and the range space of T respectively, then:
(a) dim N (T) = 2    (b) dim R (T) = 2
(c) R (T) = N (T)    (d) N (T) ⊂ R (T)

40. Let V be the span of (1, 1, 1) and (0, 1, 1) in R³. Let u1 = (0, 0, 1), u2 = (1, 1, 0) and u3 = (1, 0, 1). Which of the following are correct?
(a) (R³ \ V) ∪ {(0, 0, 0)} is not connected
(b) (R³ \ V) ∪ {tu1 + (1 − t) u3 : 0 ≤ t ≤ 1} is connected
(c) (R³ \ V) ∪ {tu1 + (1 − t) u2 : 0 ≤ t ≤ 1} is connected
(d) (R³ \ V) ∪ {(t, 2t, 2t) : t ∈ R} is connected

41. Let V be the vector space of all complex polynomials p with deg p ≤ n. Let T : V → V be the map (Tp)(x) = p′(1), x ∈ C. Which of the following are correct?
(a) dim ker T = n    (b) dim range T = 1
(c) dim ker T = 1    (d) dim range T = n + 1
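For question 38, the three given relations say that (1, 2, 3), (1, 5, 0) and (−1, 2, −1) are eigenvectors of T with eigenvalues 1, 2 and 3. A quick numerical check (our own sketch) confirms they are linearly independent, so the eigenvectors of T span all of R³:

```python
import numpy as np

# The three eigenvectors as columns; a non-zero determinant means independence.
V = np.column_stack([(1, 2, 3), (1, 5, 0), (-1, 2, -1)]).astype(float)
print(np.linalg.det(V))   # non-zero, so the span has dimension 3
```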
42. Let φ : R² → C be the map φ (x, y) = z, where z = x + iy. Let f : C → C be the function f (z) = z², and let F = φ⁻¹ ∘ f ∘ φ. Which of the following are correct?
(a) The linear transformation T (x, y) = 2 [[x, −y], [y, x]] represents the derivative of F at (x, y).
(b) The linear transformation T (x, y) = 2 [[x, y], [y, x]] represents the derivative of F at (x, y).
(c) The linear transformation T (z) = 2z represents the derivative of f at z ∈ C.
(d) The linear transformation T (z) = 2z represents the derivative of f only at 0.

43. Let V be the vector space of polynomials over R of degree less than or equal to n. For p (x) = a0 + a1x + ... + anx^n in V, define a linear transformation T : V → V by
(Tp)(x) = a0 − a1x + a2x² − ... + (−1)^n anx^n.
Then which of the following are correct?
(a) T is one-one    (b) T is onto
(c) T is invertible    (d) det T = 0

44. Consider non-zero vector spaces V1, V2, V3, V4 and linear transformations φ1 : V1 → V2, φ2 : V2 → V3, φ3 : V3 → V4 such that ker (φ1) = {0}, Range (φ1) = ker (φ2), Range (φ2) = ker (φ3) and Range (φ3) = V4. Then:
(a) Σ_{i=1..4} (−1)^i dim Vi = 0    (b) Σ_{i=2..4} (−1)^i dim Vi > 0
(c) Σ_{i=1..4} (−1)^i dim Vi < 0    (d) Σ_{i=1..4} (−1)^i dim Vi ≠ 0

45. Let S : R^n → R^n be given by S (v) = αv for a fixed α ∈ R, α ≠ 0. Let T : R^n → R^n be a linear transformation such that B = {v1, ..., vn} is a set of linearly independent eigenvectors of T. Then:
(a) the matrix of T w.r.t. B is diagonal
(b) the matrix of (T − S) w.r.t. B is diagonal
(c) the matrix of T w.r.t. B is not necessarily diagonal, but is upper triangular
(d) the matrix of T w.r.t. B is diagonal but the matrix of (T − S) w.r.t. B is not diagonal

46. Let Mn(k) denote the space of all n × n matrices with entries in a field k. Fix a non-singular matrix A = (aij) ∈ Mn(k) and consider the linear map T : Mn(k) → Mn(k) given by T (X) = AX. Then:
(a) Trace (T) = n Σ_{i=1..n} aii    (b) Trace (T) = Σ_{i=1..n} Σ_{j=1..n} aij
(c) rank of T is n²    (d) T is non-singular
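Question 46's trace can be made concrete for n = 2. When 2 × 2 matrices are flattened row by row, left multiplication X ↦ AX is represented by the Kronecker product A ⊗ I; the following sketch (our own illustration) shows Trace (T) = n · Trace (A):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])   # a sample non-singular 2x2 matrix
T = np.kron(A, np.eye(2))                # matrix of X -> A X under row-major flattening
print(np.trace(T), 2 * np.trace(A))      # both equal n * trace(A)
```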
47. Let V be a finite dimensional vector space over R. Let T : V → V be a linear transformation such that rank (T²) = rank (T). Then:
(a) kernel (T²) = kernel (T)    (b) Range (T²) = Range (T)
(c) kernel (T) ∩ Range (T) = {0}    (d) kernel (T²) ∩ Range (T²) = {0}

48. Let V be the vector space of polynomials over R of degree less than or equal to n. For p (x) = a0 + a1x + ... + anx^n in V, define a linear transformation T : V → V by
(Tp)(x) = an + an−1x + ... + a0x^n.
Then:
(a) T is one-one    (b) T is onto
(c) T is invertible    (d) det T = 0

ANSWERS

EXERCISE 4.1
2. (i) T (O) ≠ O    (ii) Linear property: T (u1 + u2) = T (u1) + T (u2)
   (iii) Linear property    (iv) Linear property    (v) T (O) ≠ O
3. (i) LT  (ii) Not LT  (iii) Not LT  (iv) LT  (v) LT  (vi) LT  (vii) LT  (viii) LT  (ix) LT  (x) LT
4. T (x, y, z) = (x + z, −x + y, x + y − z)
5. T (2, 3) = (23/7, −10/7)
6. (i) T  (ii) T  (iii) T  (iv) T  (v) F  (vi) F

EXERCISE 4.2
1. (i) N (T) = {O}; R (T) = subspace of R³ generated by {(1, 0, 1), (0, 1, 0), (0, 1, −1)}; r (T) = 3, n (T) = 0
   (ii) N (T) = {(−y, y, 0) : y ∈ R}; R (T) = subspace of R² generated by {(1, 1), (0, −1)}; r (T) = 2, n (T) = 1
   (iii) N (T) = {O}; R (T) = subspace of R⁴ generated by {(0, 1, 0, 0), (0, −1, 1, 0), (0, 0, −1, 1)}; r (T) = 3, n (T) = 0
4. T (x, y) = (x, 2x, x)
5. T (x, y, z) = (x − y, y − z)
6. (i) F  (ii) T  (iii) F  (iv) T  (v) F  (vi) F  (vii) F  (viii) F  (ix) T  (x)

EXERCISE 4.3
2. (i) T1 (x, y, z, t) = (x, y, z, t), r (T1) = 4; T2 (x, y, z, t) = (x, y, z, −t), r (T2) = 4; r (T1 + T2) = 3
   (ii) T1 (x, y, z, t) = (x, y, z, t), r (T1) = 4; T2 (x, y, z, t) = (−x, −y, z, t), r (T2) = 4; r (T1 − T2) = 2
   (iii) T1 (x, y, z, t) = (x, y, z, t), r (T1) = 4; T2 (x, y, z, t) = (x, −y/2, −z/2, −t/2), r (T2) = 4; r (T1 + 2T2) = 1
   (iv) T1 (x, y, z, t) = T2 (x, y, z, t) = (x, y, z, t)
3. T1 (x, y, z) = (x, y, 0), r (T1) = 2; T2 (x, y, z) = (0, 0, z), r (T2) = 1; r (T1 + T2) = 3
4. (i) T  (ii) F  (iii) F  (iv) F  (v) F  (vi) T

OBJECTIVE TYPE QUESTIONS
1. (a, b)   2. (c)   3. (b)   4. (a)   5. (d)
6. (b)   7. (b)   8. (c)   9. (a)   10. (a)
11. (d)   12. (c)   13. (b)   14. (b)   15. (a)
16. (c)   17. (b)   18. (c)   19. (a)   20. (a)
21. (a)   22. (a)   23. (a)   24. (d)   25. (d)
26. (d)   27. (c)   28. (a, c, d)   29. (c)   30. (b)
31. (c)   32. (d)   33. (a)   34. (d)   35. (a)
36. (c)   37. (d)   38. (d)   39. (b, d)   40. (a, b, c)
41. (a, b)   42. (a, c)   43. (a, b, d)   44. (a, b)   45. (a, b)
46. (a, b, d)   47. (a, b, c, d)   48. (a, b, d)
Chapter 5

LINEAR TRANSFORMATION AND MATRIX

5.1 MATRIX ASSOCIATED WITH A LINEAR TRANSFORMATION
Let U and V be two vector spaces of dimension n and m respectively, over the same field F. Let B1 = {u1, u2, ..., un} and B2 = {v1, v2, ..., vm} be ordered bases of U and V respectively. Let T : U → V be a linear transformation. Since T (ui) ∈ V for i = 1, 2, ..., n and V = [v1, v2, ..., vm], each T (ui) can be expressed as a linear combination of the vectors of B2. Let
T (u1) = a11v1 + a21v2 + ... + am1vm
T (u2) = a12v1 + a22v2 + ... + am2vm
..........................................
T (un) = a1nv1 + a2nv2 + ... + amnvm
where aij ∈ F for i = 1, 2, ..., m and j = 1, 2, ..., n. Then the m × n matrix

[[a11, a12, ..., a1n], [a21, a22, ..., a2n], ..., [am1, am2, ..., amn]]

is called the matrix associated with the linear transformation T relative to the ordered bases B1 and B2 of U and V respectively. It is denoted by [T]B1,B2.
This matrix is uniquely determined by the linear transformation T because each aij is uniquely determined. Since the ordered basis of a vector space is not unique, the matrix changes when the ordered bases of the vector spaces are changed.
When U = V, the matrix corresponding to a linear transformation relative to the ordered basis B of U is denoted by [T]B.
Theorem 5.1: Prove that L (U, V) ≅ Mm×n (F), where U and V are vector spaces over a field F of dimension n and m respectively.
Proof: By the definition of isomorphic spaces, the vector spaces L (U, V) and Mm×n (F) are isomorphic if there exists a one-one and onto (bijective) linear map between L (U, V) and Mm×n (F). Let φ : L (U, V) → Mm×n (F) be such that
φ (T) = [T]B1,B2
where B1 = {u1, u2, ..., un} and B2 = {v1, v2, ..., vm} are ordered bases of U and V respectively. It is clear that φ is a linear transformation.
Hence

[T]B1,B2 = [[a11, a12], [a21, a22]] = [[−1, −1], [2, 0]]

Alternate solution: First we find the images of all the vectors of B1. Then we write all vectors of B2, and of T (B1), as column vectors in a matrix [B2 | T (B1)]. We transform the block corresponding to the vectors of B2 into the identity by row transformations; the block corresponding to the vectors of T (B1) then becomes the required matrix with respect to the bases B1 and B2.

[B2 | T (B1)] = [[1, 1 | 1, −1], [−1, 0 | 1, 1]]
R2 + R1 ⇒ [[1, 1 | 1, −1], [0, 1 | 2, 0]]
R1 − R2 ⇒ [[1, 0 | −1, −1], [0, 1 | 2, 0]]
⇒ [T]B1,B2 = [[−1, −1], [2, 0]]

This method makes it very easy to find the matrix corresponding to any transformation T ∈ L (U, V) with respect to any bases of U and V.
Note 1: In this chapter, we will use this alternate method to find the matrix [T]B1,B2.
(iii) B1 = {(1, 2), (2, −1)}, B2 = {(1, 0), (0, 1)}
T (1, 2) = (−1, 3), T (2, −1) = (3, 1)
Since B2 is the standard basis of V2,
[T]B1,B2 = [[−1, 3], [3, 1]]
Note 2: If B2 is the standard basis of the vector space V, then the matrix corresponding to the linear transformation T ∈ L (U, V) is the matrix whose columns are the images of the vectors of B1, i.e., [T]B1,B2 = [T (B1)].
(iv) B1 = {(1, 1), (1, −1)}, B2 = {(2, 1), (1, −2)}
T (1, 1) = (0, 2), T (1, −1) = (2, 0)
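The same column-by-column computation can be automated. A minimal numpy sketch (the helper `matrix_of` and the sample T below are our own, chosen so that T (1, 2) = (−1, 3) and T (2, −1) = (3, 1) as in part (iii)):

```python
import numpy as np

def matrix_of(T, B1, B2):
    # Column i of [T]_{B1,B2} solves  [B2 as columns] @ c = T(u_i).
    P = np.column_stack([np.asarray(v, float) for v in B2])
    return np.column_stack([np.linalg.solve(P, np.asarray(T(u), float)) for u in B1])

T = lambda u: (u[0] - u[1], u[0] + u[1])   # hypothetical T(x, y) = (x - y, x + y)
M = matrix_of(T, [(1, 2), (2, -1)], [(1, 0), (0, 1)])
print(M)    # columns are the coordinates of T(1,2) and T(2,-1) in B2
```

Since B2 here is the standard basis, the columns are simply the images themselves, as Note 2 states.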
EXAMPLE 6: Determine the matrix corresponding to the linear transformation T : P3 → P3 defined as T (p)(x) = p (x + 1), with respect to the following bases.
(i) B1 = B2 = {1, x, x², x³}
(ii) B1 = {1, x, x², x³}, B2 = {1, 1 − x, x², 1 − x + x³}
SOLUTION: (i) T (1) = 1, T (x) = x + 1,
T (x²) = (x + 1)² = 1 + 2x + x²
T (x³) = (x + 1)³ = 1 + 3x + 3x² + x³
Since B2 is the standard basis of P3[x], the matrix of T relative to the standard basis is:

[[1, 1, 1, 1], [0, 1, 2, 3], [0, 0, 1, 3], [0, 0, 0, 1]]

(ii) The images of the vectors of B1 are the same as in (i). Now the matrix of T can be determined as:

[B2 | T (B1)] = [[1, 1, 0, 1 | 1, 1, 1, 1], [0, −1, 0, −1 | 0, 1, 2, 3], [0, 0, 1, 0 | 0, 0, 1, 3], [0, 0, 0, 1 | 0, 0, 0, 1]]

Transform the block of vectors of B2 into the identity by row transformations:
−R2 ⇒ [[1, 1, 0, 1 | 1, 1, 1, 1], [0, 1, 0, 1 | 0, −1, −2, −3], [0, 0, 1, 0 | 0, 0, 1, 3], [0, 0, 0, 1 | 0, 0, 0, 1]]
R2 − R4, R1 − R4 ⇒ [[1, 1, 0, 0 | 1, 1, 1, 0], [0, 1, 0, 0 | 0, −1, −2, −4], [0, 0, 1, 0 | 0, 0, 1, 3], [0, 0, 0, 1 | 0, 0, 0, 1]]
R1 − R2 ⇒ [[1, 0, 0, 0 | 1, 2, 3, 4], [0, 1, 0, 0 | 0, −1, −2, −4], [0, 0, 1, 0 | 0, 0, 1, 3], [0, 0, 0, 1 | 0, 0, 0, 1]]

The matrix of T is

[[1, 2, 3, 4], [0, −1, −2, −4], [0, 0, 1, 3], [0, 0, 0, 1]]
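Example 6 (i) has a closed form: T (x^j) = (x + 1)^j expands by the binomial theorem, so the (i, j) entry of the matrix is the binomial coefficient C(j, i). A short sketch of this observation (our own):

```python
from math import comb
import numpy as np

# Matrix of p(x) -> p(x + 1) on P3 in the standard basis {1, x, x^2, x^3}:
# column j lists the coefficients of (x + 1)^j, i.e. C(j, 0), C(j, 1), ...
M = np.array([[comb(j, i) for j in range(4)] for i in range(4)])
print(M)
```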
Applying, in order, the row operations R4 − R1, R3 − R2, R4 + R3, −(1/2)R3, R3 + (1/2)R4, R2 + R3 and R1 − 2R3 transforms the block of vectors of B2 into the identity; the block of vectors of T (B1) then gives the required 4 × 4 matrix [T]B1,B2.

EXAMPLE 8: Determine the matrix of the linear transformation T : P3[x] → P2[x], defined as T (p)(x) = p (0) + p′(x), relative to the standard bases of P3[x] and P2[x].
SOLUTION: The standard bases of P3[x] and P2[x] are B1 = {1, x, x², x³} and B2 = {1, x, x²} respectively. Then
T (1) = 1 + 0 = 1 + 0 · x + 0 · x²
T (x) = 0 + 1 = 1 + 0 · x + 0 · x²
T (x²) = 0 + 2x = 0 · 1 + 2 · x + 0 · x²
T (x³) = 0 + 3x² = 0 + 0 · x + 3 · x²
The matrix of T is

[[1, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3]]  (order 3 × 4)

EXERCISE 5.1
1. Let T : R² → R³ be the linear transformation defined by T (x, y) = (x + y, x − y, x). What will be the matrix corresponding to T with respect to B1 and B2?
(i) B1 = {(1, 0), (0, 1)}, B2 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
(ii) B1 = {(1, 0), (0, 1)}, B2 = {(1, 1, −1), (1, 2, 1), (0, 0, 1)}
(iii) B1 = {(1, 1), (2, 1)}, B2 = {(1, 0, 1), (0, 1, 1), (1, 2, 1)}
(iv) B1 = {(−1, 2), (2, 2)}, B2 = {(2, −1, 2), (4, 5, 6), (0, 1, 1)}
2. Let T : P2[x] → P2[x] be the linear transformation defined by T (p)(x) = (d/dx) p (x + 1). Determine the matrix of T with respect to the following B1 and B2.
(i) B1 = {1, x, x²}, B2 = {1, 1 + x, 2 − x + x²}
(ii) B1 = B2 = {1, x, x²}
3. Let T : P3[x] → P3[x] be the linear transformation defined as T (p)(x) = (x + 1) p (x). Find the matrix of T with respect to the basis B = {1, x, x² + 1, x² + x + 1} of P3[x].
4. True or False:
(i) The matrix corresponding to a linear transformation is unique.
(ii) The matrix corresponding to the identity transformation I : V → V, with respect to any basis of V, is the identity matrix.
(iii) The matrix of a linear transformation T : R³ → R⁴ is of order 3 × 4, relative to any ordered basis of R³ and R⁴.
(iv) The matrix of a linear transformation T : R³ → R³ such that T (e1) = e3, T (e2) = 0, T (e3) = e2, with respect to the standard basis, is
[[0, 0, 0], [0, 0, 1], [1, 0, 0]].
(v) The matrix of a linear transformation T : R² → R² such that T (x, y) = (x cos θ + y sin θ, −x sin θ + y cos θ), with respect to the standard basis, is
[[cos θ, sin θ], [−sin θ, cos θ]].
(vi) The order of the matrix of a linear transformation T : R² → M2(R) is 4 × 2.
(vii) The order of the matrix corresponding to a linear transformation T : C²(C) → C²(C) is 4 × 2.

5.2 LINEAR TRANSFORMATION ASSOCIATED WITH A MATRIX
In section 5.1 we discussed a linear transformation and the matrix associated to it with respect to bases of the vector spaces. In this section we discuss the linear transformation corresponding to a matrix with respect to given bases. The basic theory of matrices is given in Chapter 1. If a matrix of order m × n is given, then by the definition of a matrix it is also a linear transformation from R^n to R^m (or Vn to Vm). If B1 and B2 are bases of R^n and R^m respectively, then we find a linear transformation T : R^n → R^m such that [T]B1,B2 = A.
EXAMPLE 1: Consider a matrix A of order 3 × 3,

A = [[2, −1, 0], [1, 2, 1], [−1, 3, 2]]

Then find a linear transformation T : R³ → R³ with respect to the following bases.
(i) B1 = B2 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
(ii) B1 = {(1, 1, 0), (1, 1, 1), (1, −1, 1)}, B2 = {(1, −1, 1), (1, 1, 0), (1, 0, 0)}
SOLUTION: (i) T (1, 0, 0) = 2 (1, 0, 0) + 1 (0, 1, 0) − 1 (0, 0, 1) = (2, 1, −1)
T (0, 1, 0) = −1 (1, 0, 0) + 2 (0, 1, 0) + 3 (0, 0, 1) = (−1, 2, 3)
T (0, 0, 1) = 0 (1, 0, 0) + 1 (0, 1, 0) + 2 (0, 0, 1) = (0, 1, 2)
Let (x, y, z) ∈ R³. Then
(x, y, z) = x (1, 0, 0) + y (0, 1, 0) + z (0, 0, 1)
T (x, y, z) = xT (1, 0, 0) + yT (0, 1, 0) + zT (0, 0, 1)
⇒ T (x, y, z) = x (2, 1, −1) + y (−1, 2, 3) + z (0, 1, 2)
⇒ T (x, y, z) = (2x − y, x + 2y + z, −x + 3y + 2z)
(ii) T (1, 1, 0) = 2 (1, −1, 1) + 1 (1, 1, 0) − 1 (1, 0, 0) = (2, −1, 2)
T (1, 1, 1) = −1 (1, −1, 1) + 2 (1, 1, 0) + 3 (1, 0, 0) = (4, 3, −1)
T (1, −1, 1) = 0 (1, −1, 1) + 1 (1, 1, 0) + 2 (1, 0, 0) = (3, 1, 0)
Let (x, y, z) = a (1, 1, 0) + b (1, 1, 1) + c (1, −1, 1). Then
x = a + b + c,  y = a + b − c,  z = b + c
⇒ a = x − z,  b = (−x + y + 2z)/2,  c = (x − y)/2
⇒ (x, y, z) = (x − z) (1, 1, 0) + ((−x + y + 2z)/2) (1, 1, 1) + ((x − y)/2) (1, −1, 1)
Applying T on both sides,
T (x, y, z) = (x − z) T (1, 1, 0) + ((−x + y + 2z)/2) T (1, 1, 1) + ((x − y)/2) T (1, −1, 1)
= (x − z) (2, −1, 2) + ((−x + y + 2z)/2) (4, 3, −1) + ((x − y)/2) (3, 1, 0)
⇒ T (x, y, z) = ((3x + y + 4z)/2, (−4x + 2y + 8z)/2, (5x − y − 6z)/2)
is a linear transformation associated with the matrix A relative to B1 and B2.
EXAMPLE 2: Let

A = [T]B1,B2 = [[1, −1, 3], [3, 1, 0]]

be a matrix associated to a linear transformation T relative to bases B1, B2 of R³ and R² respectively. Then find T : R³ → R² for the following pairs of B1 and B2.
(i) B1 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, B2 = {(1, 0), (0, 1)}
(ii) B1 = {(1, 1, 2), (0, 2, 3), (5, 0, 1)}, B2 = {(1, 2), (−2, 1)}
SOLUTION: (i) T (1, 0, 0) = 1 (1, 0) + 3 (0, 1) = (1, 3)
T (0, 1, 0) = −1 (1, 0) + 1 (0, 1) = (−1, 1)
T (0, 0, 1) = 3 (1, 0) + 0 (0, 1) = (3, 0)
Let (x, y, z) = x (1, 0, 0) + y (0, 1, 0) + z (0, 0, 1)
⇒ T (x, y, z) = xT (1, 0, 0) + yT (0, 1, 0) + zT (0, 0, 1)
⇒ T (x, y, z) = x (1, 3) + y (−1, 1) + z (3, 0)
⇒ T (x, y, z) = (x − y + 3z, 3x + y)
It is a linear transformation associated with the matrix A.
(ii) T (1, 1, 2) = 1 (1, 2) + 3 (−2, 1) = (−5, 5)
T (0, 2, 3) = −1 (1, 2) + 1 (−2, 1) = (−3, −1)
T (5, 0, 1) = 3 (1, 2) + 0 (−2, 1) = (3, 6)
Let (x, y, z) = a (1, 1, 2) + b (0, 2, 3) + c (5, 0, 1). Then
a + 5c = x,  a + 2b = y,  2a + 3b + c = z
⇒ a = (−2x − 15y + 10z)/3,  b = (x + 9y − 5z)/3,  c = (x + 3y − 2z)/3
(x, y, z) = ((−2x − 15y + 10z)/3) (1, 1, 2) + ((x + 9y − 5z)/3) (0, 2, 3) + ((x + 3y − 2z)/3) (5, 0, 1)
T (x, y, z) = ((−2x − 15y + 10z)/3) T (1, 1, 2) + ((x + 9y − 5z)/3) T (0, 2, 3) + ((x + 3y − 2z)/3) T (5, 0, 1)
= ((−2x − 15y + 10z)/3) (−5, 5) + ((x + 9y − 5z)/3) (−3, −1) + ((x + 3y − 2z)/3) (3, 6)
⇒ T (x, y, z) = ((10x + 57y − 41z)/3, (−5x − 66y + 43z)/3)
It is the required linear transformation associated to A.

EXERCISE 5.2
In problems 1 to 4, determine a linear transformation associated to the given m × n matrix A and bases B1 and B2.
1. A = [[1, 2, −1], [3, 2, 1], [0, 1, 2]]  (order 3 × 3)
(i) B1 = B2 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
(ii) B1 = {(1, 2, −1), (3, 2, 0), (0, 5, 7)}, B2 = {(1, 0, 1), (0, 1, 1), (1, 1, 0)}
(iii) B1 = {(1, 1, 1), (−1, 1, 1), (−1, −1, 1)}, B2 = {(1, 0, 2), (2, −1, 3), (3, 2, 7)}
2. A = [[1, 2, 3, 0], [0, 2, 1, −1], [1, 3, 2, 0], [1, 2, −2, 1]]  (order 4 × 4)
(i) B1 = {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)},
    B2 = {(1, 1, 1, 1), (1, 1, 1, 0), (1, 1, 0, 0), (1, 0, 0, 0)}
(ii) B1 = {(1, 1, 1, 2), (1, −1, 0, 0), (0, 0, 1, 1), (0, 1, 0, 0)},
    B2 = {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (1, 1, 1, 1)}
3. A = [[1, 3, 1, 2], [2, 2, 0, −1], [−1, 0, 1, 3]]  (order 3 × 4)
(i) B1 = {(1, 0, 0, 0), (0, 0, 1, 1), (0, 1, 1, 1), (1, 1, 1, 0)}, B2 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}
(ii) B1 = {(1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)}, B2 = {(1, 2, −1), (2, 1, 0), (1, 1, 1)}
4. A = [[−1, 1, 2], [0, 3, 1]]  (order 2 × 3)
(i) B1 = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}, B2 = {(1, 0), (0, 1)}
(ii) B1 = {(1, 1, 1), (1, 1, 0), (1, 0, 0)}, B2 = {(1, 2), (−1, 1)}
(iii) B1 = {(1, −1, −1), (−1, −2, −3), (1, 0, −2)}, B2 = {(1, 0), (1, 2)}
5. True or False:
(i) If T : R^n → R^m is a linear transformation, then the matrix of T is square iff m = n.
(ii) The matrix [[1, 0], [0, 1]] is associated to the linear transformation T : R² → R², defined as T (x, y) = (x, y), with respect to any ordered basis of R².
(iii) The linear transformation associated to the identity matrix [[1, 0, 0], [0, 1, 0], [0, 0, 1]] is T (x, y, z) = (x, y, z), for any basis of R³.
(iv) The nullity of a linear transformation associated to a matrix of order 4 × 3 can be 4.
(v) The rank of a linear transformation associated to a matrix of order 3 × 4 can be 4.
(vi) A linear transformation associated with a matrix of order 4 × 3 is always one-one.
(vii) A linear transformation corresponding to a matrix of order 3 × 5 cannot be onto.

OBJECTIVE TYPE QUESTIONS
1. Let T : R³ → R³ be a linear transformation defined as:
T (x, y, z) = (x + y − z, x + y + z, y − z)
Then the matrix of T with respect to the ordered basis B = {(0, 1, 0), (0, 0, 1), (1, 0, 0)} is: (GATE 2007)
6. Let T : R² → R² be a linear transformation, and let the matrix of T with respect to the basis B = {(1, 0), (0, 1)} of R² be [T]B = [[1, 1], [1, 1]]. Then the matrix of T with respect to the basis B′ = {(1, 1), (1, −1)} is:
(a) [[1, 0], [0, −2]]    (b) [[−1, 2], [0, 1]]
(c) [[2, 0], [0, 0]]    (d) [[2, 2], [0, 0]]
7. Let Pn[x] be the vector space of polynomials of degree at most n over a field F, and let T : P3[x] → P2[x] be the linear transformation defined as T (p)(x) = (d/dx) p (x). Then the matrix of T with respect to the standard bases of P3[x] and P2[x] is:
(a) [[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 1]]    (b) [[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3]]
(c) [[0, 1, 0, 0], [0, 0, 2, 0], [0, 0, 0, 3], [0, 0, 0, 0]]    (d) [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0]]
8. Let T : P3[x] → P4[x] be the linear transformation defined by
T (p (x)) = ∫₀ˣ p (t) dt
Then the matrix of T with respect to the standard bases is:
(a) [[0, 1, 0, 0], [0, 0, 1/2, 0], [0, 0, 0, 1/3], [0, 0, 0, 0]]
(b) [[0, 0, 0, 0], [2, 0, 0, 0], [0, 3, 0, 0], [0, 0, 4, 0], [0, 0, 0, 0]]
(c) [[0, 0, 0, 0], [1, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1/3, 0], [0, 0, 0, 1/4]]
(d) [[1, 0, 0, 0], [0, 1/2, 0, 0], [0, 0, 1/3, 0], [0, 0, 0, 1/4], [0, 0, 0, 0]]
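The matrix in question 8 can be built directly: T (x^j) = x^(j+1)/(j + 1), so column j has the single entry 1/(j + 1) in row j + 1. A small sketch (our own check, using exact fractions):

```python
from fractions import Fraction

# 5 x 4 matrix of T(p)(x) = integral from 0 to x of p(t) dt, P3[x] -> P4[x],
# with respect to the standard bases {1, x, x^2, x^3} and {1, ..., x^4}.
M = [[Fraction(0)] * 4 for _ in range(5)]
for j in range(4):
    M[j + 1][j] = Fraction(1, j + 1)
for row in M:
    print(row)
```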
9. Let [T]B = [[1, 1], [1, −1]] be the matrix of a linear transformation T : R² → R² with respect to the basis B = {(1, 1), (−1, 2)} of R². Then T is:
(a) T (x, y) = (x + y, x − y)
(b) T (x, y) = (x − y, x + y)
(c) T (x, y) = ((2y − 2x)/3, (7x + 2y)/3)
(d) T (x, y) = (2x + y, y − x)
10. Let P3[x] be the vector space of polynomials of degree at most 3 and let T : P3[x] → P3[x] be the linear transformation defined by T (p (x)) = p (x + 1). Then the matrix of T with respect to the standard basis is:
(a) [[1, 0, 0, 0], [0, 2, 0, 0], [0, 0, 3, 0], [0, 0, 0, 4]]
(b) [[1, 1, 1, 1], [0, 1, 2, 3], [0, 0, 1, 3], [0, 0, 0, 1]]
(c) [[1, 1, 1, 2], [1, 2, 2, 3], [2, 2, 3, 3], [2, 3, 3, 3]]
(d) [[1, 0, 0, 0], [1, 1, 0, 0], [1, 2, 1, 0], [1, 3, 3, 1]]
11. Let Pn[x] be the vector space of polynomials p with deg p ≤ n. Let T : P3[x] → P4[x] be the linear transformation such that
T (p (x)) = x² p′(x) + ∫₀ˣ p (t) dt
If [T]B1,B2 = (bij)5×4 is the matrix of T with respect to the bases B1 = {1, x, x², x³} and B2 = {1, x, x², x³, x⁴} of P3[x] and P4[x], then:
(a) b32 = 0, b33 = 7/3    (b) b32 = 3/2, b33 = 0
(c) b32 = 0, b23 = 0    (d) b32 = 0, b33 = 3/2
12. If A = [[2, 1], [−1, 2]] is the matrix of T : R² → R² with respect to the standard basis of R², then:
(a) T is singular
(b) T is invertible
(c) T⁻¹ (x, y) = (−x + 2y, 2x + y)
(d) T is one-one but not onto
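Question 12 reduces to a determinant. A quick numerical sketch (our own) confirms det A = 5 ≠ 0, so T is invertible, and recovers the matrix of T⁻¹ as A⁻¹:

```python
import numpy as np

A = np.array([[2.0, 1.0], [-1.0, 2.0]])
print(np.linalg.det(A))        # 5.0, so T is invertible
Ainv = np.linalg.inv(A)        # (1/5) * [[2, -1], [1, 2]]
print(Ainv @ A)                # the identity matrix
```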
13. Let T : R² → R² be the linear transformation defined by
Tx = [[1, 2], [3, 4]] (x1, x2)ᵀ, for x = (x1, x2) ∈ R²
Then the matrix of T with respect to the basis B = {(1, 1), (1, 2)} of R² is:
(a) [[−1, −1], [4, 6]]    (b) [[3, 5], [7, 11]]
(c) [[3, 5], [4, 6]]    (d) [[−1, 0], [2, 2]]
14. Let T : P2[x] → P2[x] be the linear transformation defined by T (p (x)) = p″ − 3p′. Then the nullity of the matrix of T with respect to the standard basis of P2[x] is:
(a) 0    (b) 1    (c) 2    (d) 4
15. Let V = {f (x) : f (x) = a cos x + b sin x} be a vector space over R. Let T : V → V be the linear transformation defined as T (f (x)) = f ″ + 2f ′ + 3f. What is the matrix of T with respect to the basis B = {cos x, sin x} of V?
(a) [[2, 2], [−2, 2]]    (b) [[2, 2], [2, −2]]
(c) [[6, 6], [6, −6]]    (d) [[2, −2], [2, 2]]

ANSWERS
EXERCISE 5.1
1. (i) [[1, 1], [1, −1], [1, 0]]
   (ii) [[1, 3], [0, −2], [2, 5]]
   (iii) [[3/2, 2], [−1, −1], [1/2, 1]]
   (iv) [[7/10, 2/5], [−1/10, 4/5], [−9/5, −18/5]]
2. (i) [[0, 1, 0], [0, 0, 2], [0, 0, 0]]
   (ii) [[0, 1, 2], [0, 0, 2], [0, 0, 0]]
4. (i) T  (ii) F  (iii) F  (iv) T  (v) T  (vi) T  (vii) F
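The answers above can be machine-checked with the column-by-column method of section 5.1. A sketch (our own) verifying answer 1 (iv):

```python
import numpy as np

T = lambda u: np.array([u[0] + u[1], u[0] - u[1], u[0]], float)        # T from problem 1
P = np.column_stack([(2, -1, 2), (4, 5, 6), (0, 1, 1)]).astype(float)  # B2 of part (iv)
M = np.column_stack([np.linalg.solve(P, T(u)) for u in [(-1, 2), (2, 2)]])
print(M)   # columns (7/10, -1/10, -9/5) and (2/5, 4/5, -18/5)
```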
Chapter 6

INNER PRODUCT SPACES

INTRODUCTION
In vector spaces we generalized the linear structure (i.e., addition and scalar multiplication) of R² and R³ and ignored other features such as the notions of length and angle. Here our main objective is to study vector spaces in which it makes sense to speak of the length of a vector and of the angle between two vectors.

6.1 INNER PRODUCTS
Think of vectors in R² and R³ as arrows with initial point at the origin. The length of a vector x in R² or R³ is called the norm of x, denoted ||x||. Thus for x = (x1, x2) ∈ R², we have ||x|| = √(x1² + x2²).
Similarly, if x = (x1, x2, x3) ∈ R³, then ||x|| = √(x1² + x2² + x3²).
Similarly we define the norm of x = (x1, ..., xn) ∈ R^n by ||x|| = √(x1² + ... + xn²).

Fig. 6.1. The vector x from the origin to the point (x1, x2); ||x|| is the distance from the origin to (x1, x2).

The norm is not linear on R^n. To inject linearity into the discussion, we introduce the dot product.

6.1.1 Dot Product
The dot product of two vectors x, y ∈ R^n, where x = (x1, x2, ..., xn) and y = (y1, y2, ..., yn), is denoted by x · y and is defined by
x · y = x1y1 + x2y2 + ... + xnyn
The dot product of two vectors in R^n is a number, not a vector. Obviously x · x = ||x||² for all x ∈ R^n.
The dot product on R^n has the following properties:
(i) x · x ≥ 0 for all x ∈ R^n.
(ii) x · x = 0 ⇔ x = 0.
(iii) x · y = y · x for all x, y ∈ R^n.
(iv) x · (αy + βz) = α (x · y) + β (x · z) for all x, y, z ∈ R^n and α, β ∈ R.
Now we shall generalize the concept of dot product to complex vector spaces.
If z = a + ib, where a, b ∈ R, then the absolute value of z, denoted |z|, is defined by |z| = √(a² + b²).
The conjugate of a complex number z, denoted z̄, is defined by z̄ = a − ib, and |z|² = z z̄.
For z = (z1, ..., zn) ∈ C^n, the norm of z is defined by
||z|| = √(|z1|² + |z2|² + ... + |zn|²)
The absolute values are needed because we want ||z|| to be a non-negative number. Note that
||z||² = z1z̄1 + z2z̄2 + ... + znz̄n.

6.1.2 Inner Product
An inner product on a vector space V is a function that associates a scalar <u, v> ∈ F with each pair of vectors (u, v) in V, in such a way that the following axioms are satisfied for all vectors u, v and w in V and all scalars in F.
(i) Positivity: <v, v> ≥ 0 for all v ∈ V
(ii) Definiteness: <v, v> = 0 iff v = 0
(iii) Additivity in the first slot: <u + v, w> = <u, w> + <v, w> for all u, v, w ∈ V
(iv) Homogeneity in the first slot: <λu, v> = λ <u, v> for all λ ∈ F and u, v ∈ V
(v) Conjugate symmetry: <u, v> equals the complex conjugate of <v, u>, for all u, v ∈ V

Examples of Inner Products
1. Euclidean inner product on R^n: If u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) are vectors in R^n, then
<u, v> = u · v = u1v1 + u2v2 + ... + unvn.
2. Weighted Euclidean inner product: If w1, w2, ..., wn are positive real numbers, which we shall call weights, and if u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) are vectors in R^n, then
<u, v> = w1u1v1 + w2u2v2 + ... + wnunvn
defines an inner product on R^n; it is called the weighted Euclidean inner product with weights w1, w2, ..., wn.
Remark: It will always be assumed that R^n has the Euclidean inner product unless some other inner product is explicitly specified.
3. An inner product can be defined on the vector space of continuous real-valued functions on the interval [−1, 1] by
<f, g> = ∫₋₁¹ f (x) g (x) dx.
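The weighted Euclidean inner product is easy to experiment with. A minimal sketch (the helper name `ip` is our own) checking symmetry and positivity on sample vectors:

```python
def ip(u, v, w):
    # weighted Euclidean inner product <u, v> = sum of w_i * u_i * v_i, weights w_i > 0
    return sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))

u, v, w = (1, 2), (3, -1), (3.0, 2.0)
print(ip(u, v, w))                   # 3*1*3 + 2*2*(-1)
print(ip(u, v, w) == ip(v, u, w))    # symmetry
print(ip(u, u, w) > 0)               # positivity for u != 0
```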
(iv) We are given that <u, v> = 0 for all v ∈ V. In particular,
<u, u> = 0  {put v = u}
Hence u = 0  {by the definition of an inner product space}.
Similarly we can prove that <u, v> = 0 for all u ∈ V ⇒ v = 0.
(v) Let <u, w> = <v, w> for all w ∈ V. Then
<u − v, w> = <u, w> − <v, w> = 0
∴ <u − v, w> = 0 for all w ∈ V
In particular, <u − v, u − v> = 0
⇒ u − v = 0
Hence u = v.
Conversely, let u = v. Then
<u, w> − <v, w> = <u − v, w> = <0, w> = 0
Hence <u, w> = <v, w> for all w ∈ V.

EXERCISE 6.1
1. If u = (α1, α2), v = (β1, β2) ∈ R², show that the following:
(i) <u, v> = α1β1 + 3 α2β2
(ii) <u, v> = α1β1 − α2β1 − α1β2 + 4 α2β2
(iii) <u, v> = α1β1 + 2 α1β2 + 2 α2β1 + 5 α2β2
(iv) <u, v> = 3 α1β1 + 5 α2β2
(v) <u, v> = 4 α1β1 + 6 α2β2
(vi) <u, v> = 4 α1β1 + α2β1 + α1β2 + 4 α2β2
are inner products on R².
2. Let u = (α1, α2, α3), v = (β1, β2, β3). Determine which of the following are inner products on R³. For those that are not, list the axioms that do not hold.
(i) <u, v> = α1β1 + α3β3    (ii) <u, v> = α1²β1² + α2²β2² + α3²β3²
(iii) <u, v> = 2 α1β1 + α2β2 + 4 α3β3    (iv) <u, v> = α1β1 − α2β2 + α3β3
3. Let V be the vector space of all real polynomials of degree ≤ 2. Show that
(i) <f (x), g (x)> = ∫₋₁¹ f (x) g (x) dx, for f (x), g (x) ∈ V,
is an inner product on V.
(ii) If f (x) = x² + x − 4 and g (x) = x − 1, then find <f (x), g (x)> and <g (x), f (x)>.
4. Let α = (1, 2), β = (−1, 1) ∈ R². If γ is a vector such that <α, γ> = −1 and <β, γ> = 3, find γ.
5. If α1 = (1, −3, 1), α2 = (2, 1, −4), α3 = (6, −7, 8) ∈ R³, find an α ∈ R³ such that <α, α1> = −1, <α, α2> = −1, <α, α3> = 7.
6.2 NORM, ||v||
Let V be an inner product space and v ∈ V. The norm of v is denoted by ||v|| and is defined by
||v|| = √<v, v>,  i.e.,  ||v||² = <v, v>
By the properties of an inner product space, <v, v> ≥ 0 and so ||v|| ≥ 0 for all v ∈ V.

Distance
The distance between two vectors (points) u and v is denoted by d (u, v) and is defined by
d (u, v) = ||u − v||
If a vector has norm 1, then we say that it is a unit vector.
Norm and distance in R^n: Let u = (u1, u2, ..., un) and v = (v1, v2, ..., vn) be vectors in R^n. Then
||u|| = <u, u>^(1/2) = √(u1² + u2² + ... + un²)
and
d (u, v) = ||u − v|| = <u − v, u − v>^(1/2) = √((u1 − v1)² + (u2 − v2)² + ... + (un − vn)²).
Note: The norm and distance depend on the inner product being used. If the inner product is changed, then the norms of and distances between vectors also change.
EXAMPLE 1: Let u = (1, 0) and v = (0, 1) ∈ R². Find ||u|| and d (u, v).
SOLUTION: With the Euclidean inner product {standard inner product}, we have
||u|| = √(1² + 0²) = 1
and
d (u, v) = ||u − v|| = ||(1, −1)|| = √(1² + (−1)²) = √2
Now, consider the weighted Euclidean inner product <u, v> = 3u1v1 + 2u2v2. Then
||u|| = <u, u>^(1/2) = [3 (1)(1) + 2 (0)(0)]^(1/2) = √3
and
d (u, v) = ||u − v|| = <(1, −1), (1, −1)>^(1/2) = [3 (1)(1) + 2 (−1)(−1)]^(1/2) = √5.
EXAMPLE 2: Prove that ||αv|| = |α| ||v|| for all α ∈ F, v ∈ V.
SOLUTION: We have
||αv||² = <αv, αv> = α <v, αv> = α ᾱ <v, v> = |α|² ||v||²
⇒ ||αv||² = (|α| ||v||)²
⇒ ||αv|| = |α| ||v||.
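The dependence of norm and distance on the inner product (Example 1) can be reproduced directly. A small sketch (helper names ours):

```python
import math

def ip(u, v, w):                     # weighted Euclidean inner product
    return sum(wi * ui * vi for wi, ui, vi in zip(w, u, v))

def norm(u, w):                      # ||u|| = sqrt(<u, u>)
    return math.sqrt(ip(u, u, w))

u, v = (1, 0), (0, 1)
diff = (u[0] - v[0], u[1] - v[1])
print(norm(u, (1, 1)), norm(diff, (1, 1)))   # 1 and sqrt(2) (Euclidean)
print(norm(u, (3, 2)), norm(diff, (3, 2)))   # sqrt(3) and sqrt(5) (weighted)
```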
EXAMPLE 3: Let V be an inner product space and x, y ∈ V. Prove that
||x + y||² + ||x − y||² = 2(||x||² + ||y||²).  (Parallelogram law)
SOLUTION:
||x + y||² = <x + y, x + y>
= <x, x + y> + <y, x + y>
= <x, x> + <x, y> + <y, x> + <y, y>  ...(1)
and ||x − y||² = <x − y, x − y>
= <x, x − y> − <y, x − y>
= <x, x> − <x, y> − <y, x> + <y, y>  ...(2)
Adding equations (1) and (2), we get
||x + y||² + ||x − y||² = 2<x, x> + 2<y, y>
||x + y||² + ||x − y||² = 2(||x||² + ||y||²).

Note: Suppose v ∈ V; then ||v|| = 0 iff v = 0, because <v, v> = 0 iff v = 0.

EXAMPLE 4: Let V(R) be a vector space of polynomials with inner product defined by
<f, g> = ∫₀¹ f(x) g(x) dx
If f(x) = x² + 1 and g(x) = x − 1, then find <f, g> and ||g||.
SOLUTION:
<f(x), g(x)> = ∫₀¹ (x² + 1)(x − 1) dx = ∫₀¹ (x³ − x² + x − 1) dx = −7/12
||g||² = <g(x), g(x)> = ∫₀¹ (x − 1)² dx = [(x − 1)³/3]₀¹ = 1/3
⇒ ||g|| = 1/√3.

Theorem 6.2 (Cauchy-Schwarz Inequality): Let V be an inner product space and u, v ∈ V. Then
|<u, v>| ≤ ||u|| ||v||.
Proof: If u = 0, then <u, v> = <0, v> = 0, and so |<u, v>| = 0.
Again, u = 0 ⇒ <u, u> = 0 ⇒ ||u||² = 0 ⇒ ||u|| = 0 ⇒ ||u|| ||v|| = 0.
Thus |<u, v>| = ||u|| ||v||.
Let u ≠ 0; then <u, u> > 0, i.e., ||u||² > 0. Let w = v − λu,
where λ = <v, u>/||u||² (which equals <u, v>/||u||² in a real space).
We have <w, w> ≥ 0, i.e.,
0 ≤ <w, w> = <v − λu, v − λu>
= <v, v − λu> − λ<u, v − λu>
= <v, v> − λ̄<v, u> − λ<u, v> + λλ̄<u, u>
= ||v||² − λ̄<v, u> − λ<u, v> + λλ̄||u||²
= ||v||² − λ<u, v>   {since λλ̄||u||² = |<u, v>|²/||u||² = λ̄<v, u>}
= ||v||² − |<u, v>|²/||u||²
⇒ 0 ≤ ||u||² ||v||² − |<u, v>|²
⇒ |<u, v>|² ≤ ||u||² ||v||²
⇒ |<u, v>| ≤ ||u|| ||v||.

Corollary 1: Show that |<u, v>| = ||u|| ||v|| iff u and v are linearly dependent.
Proof: Let |<u, v>| = ||u|| ||v||.
Consider w = v − λu, where λ = <v, u>/||u||².
Then <w, w> = ||v||² − |<u, v>|²/||u||² = 0  {by hypothesis}
⇒ w = 0
⇒ v − λu = 0
⇒ v = λu
⇒ u and v are linearly dependent.
Conversely, let u and v be linearly dependent, so that u = αv for some α ∈ F.
∴ ||u|| ||v|| = ||αv|| ||v|| = |α| ||v|| ||v|| = |α| ||v||²
Also |<u, v>| = |<αv, v>| = |α <v, v>| = |α| ||v||²
Hence |<u, v>| = ||u|| ||v||.

Corollary 2 {Triangle Inequality}: Show that
||u + v|| ≤ ||u|| + ||v|| ∀ u, v ∈ V
Proof: ||u + v||² = <u + v, u + v>
= <u, u + v> + <v, u + v>
= <u, u> + <u, v> + <v, u> + <v, v>
= ||u||² + 2 Re<u, v> + ||v||²
≤ ||u||² + 2|<u, v>| + ||v||²
≤ ||u||² + 2||u|| ||v|| + ||v||² = (||u|| + ||v||)²
⇒ ||u + v||² ≤ (||u|| + ||v||)²
⇒ ||u + v|| ≤ ||u|| + ||v||.
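The Cauchy-Schwarz and triangle inequalities, and the equality case of Corollary 1, can be spot-checked numerically. A minimal sketch assuming NumPy; the random test vectors and the dependent pair are illustrative, not from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    u = rng.standard_normal(4)
    v = rng.standard_normal(4)
    # Cauchy-Schwarz: |<u, v>| <= ||u|| ||v||
    assert abs(u @ v) <= np.linalg.norm(u) * np.linalg.norm(v) + 1e-12
    # Triangle inequality: ||u + v|| <= ||u|| + ||v||
    assert np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v) + 1e-12

# Equality case (Corollary 1): u and v linearly dependent
u = np.array([1.0, 2.0, 3.0])
v = 2.5 * u
lhs = abs(u @ v)
rhs = np.linalg.norm(u) * np.linalg.norm(v)
```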
6.3 INNER PRODUCTS GENERATED BY MATRICES

Let u = (u1, u2, ..., un)ᵀ and v = (v1, v2, ..., vn)ᵀ be column vectors in Rⁿ, and let A be an invertible n × n matrix. If u · v is the Euclidean inner product on Rⁿ, then
<u, v> = Au · Av  ...(1)
defines an inner product. It is called the inner product on Rⁿ generated by A.
Now <u, v> = Au · Av = (Av)ᵀ Au,
or equivalently, <u, v> = vᵀAᵀAu.

6.3.1 Inner Product Generated by the Identity Matrix
Substituting A = I in equation (1),
<u, v> = Iu · Iv = u · v
In general, the weighted Euclidean inner product
<u, v> = w1u1v1 + w2u2v2 + ... + wnunvn
is the inner product on Rⁿ generated by the diagonal matrix
A = diag(√w1, √w2, ..., √wn),
since then AᵀA = diag(w1, w2, ..., wn).

6.3.2 Inner Product on M22
If
U = [u1 u2; u3 u4] and V = [v1 v2; v3 v4],
then the following formula defines an inner product on M22:
<U, V> = tr(UᵀV) = tr(VᵀU) = u1v1 + u2v2 + u3v3 + u4v4
The norm of a matrix U relative to this inner product is
||U|| = <U, U>^(1/2) = √(u1² + u2² + u3² + u4²)
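The identity <u, v> = Au · Av = vᵀAᵀAu can be verified directly. A minimal sketch assuming NumPy; the matrix A here is the one from Exercise 6.2, question 1(c).

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-1.0, 3.0]])  # invertible, so (1) defines an inner product

def inner_A(u, v):
    """Inner product on R^2 generated by A: <u, v> = (Au) . (Av)."""
    return float((A @ u) @ (A @ v))

u = np.array([1.0, 0.0])
v = np.array([0.0, 1.0])

# The same value via the equivalent formula v^T (A^T A) u
G = A.T @ A
check = float(v @ G @ u)
val = inner_A(u, v)
```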
6.3.3 Inner Product on P2
If p = a0 + a1x + a2x² and q = b0 + b1x + b2x² are any two vectors in P2, then an inner product on P2 is
<p, q> = a0b0 + a1b1 + a2b2
The norm of the polynomial p is
||p|| = <p, p>^(1/2) = √(a0² + a1² + a2²).

EXAMPLE 5: Let M22 have the inner product <U, V> = u1v1 + u2v2 + u3v3 + u4v4. Find d(A, B) for
(a) A = [2 6; 9 4], B = [−4 7; 1 6]
(b) A = [−2 4; 1 0], B = [−5 1; 6 2]
SOLUTION: (a)
d(A, B) = ||A − B|| = ||[2 6; 9 4] − [−4 7; 1 6]|| = ||[6 −1; 8 −2]||
Since ||U|| = √(u1² + u2² + u3² + u4²),
d(A, B) = √(6² + (−1)² + 8² + (−2)²) = √(36 + 1 + 64 + 4) = √105
(b) A − B = [3 3; −5 −2], so
d(A, B) = √(9 + 9 + 25 + 4) = √47.

6.4 PROPERTIES OF DISTANCE
If u, v and w are vectors in an inner product space V and k is any scalar, then
(a) d(u, v) ≥ 0, and d(u, v) = 0 iff u = v.
Proof: Since ||x|| ≥ 0 ∀ x ∈ V,
d(u, v) = ||u − v|| ≥ 0
Further, d(u, v) = 0 ⇔ ||u − v|| = 0 ⇔ u − v = 0 ⇔ u = v.
(b) d(u, v) = d(v, u)
Proof: d(u, v) = ||u − v|| = ||(−1)(v − u)|| = |−1| ||v − u|| = ||v − u|| = d(v, u).
(c) d(u, v) ≤ d(u, w) + d(w, v)
Proof: d(u, v) = ||(u − w) + (w − v)|| ≤ ||u − w|| + ||w − v|| = d(u, w) + d(w, v)
Hence d(u, v) ≤ d(u, w) + d(w, v).

6.5 ANGLE BETWEEN VECTORS
Let u and v be non-zero vectors in an inner product space V. Then by the Cauchy-Schwarz inequality,
[<u, v>]² ≤ ||u||² ||v||²
⇒ [<u, v>/(||u|| ||v||)]² ≤ 1
If θ is an angle whose radian measure varies from 0 to π, then cos θ assumes every value between −1 and 1 inclusive exactly once. Thus there is a unique angle θ such that
cos θ = <u, v>/(||u|| ||v||), 0 ≤ θ ≤ π
θ = the angle between u and v.

EXAMPLE 6: Prove that the cosine of the angle between two vectors has absolute value at most 1.
SOLUTION: Let V = R³(R) = {(α1, α2, α3) : αi ∈ R}, and let u = (α1, α2, α3), v = (β1, β2, β3) ∈ V.
According to vector algebra, if θ is the angle between two vectors u, v, then
cos θ = u · v/(|u| |v|)
= (α1β1 + α2β2 + α3β3)/(√(α1² + α2² + α3²) √(β1² + β2² + β3²))
or cos θ = <u, v>/(||u|| ||v||)
By the Cauchy-Schwarz inequality, |<u, v>| ≤ ||u|| ||v||
⇒ |cos θ| = |<u, v>|/(||u|| ||v||) ≤ 1.
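The angle formula above translates directly into code. A minimal sketch assuming NumPy; the pair of vectors is an illustrative example.

```python
import numpy as np

def angle(u, v):
    """Angle theta in [0, pi] with cos(theta) = <u, v> / (||u|| ||v||)."""
    c = (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
    # Cauchy-Schwarz guarantees |c| <= 1; clip only guards against round-off
    return float(np.arccos(np.clip(c, -1.0, 1.0)))

theta = angle(np.array([1.0, 0.0]), np.array([1.0, 1.0]))  # pi/4
```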
253, , INNER PRODUCT SPACES, , 6.6 ORTHOGONAL VECTORS, Consider two lines through origin by vectors ‘u’ and ‘v’., ||u – v||, v, , u, , ||u – (–v)|| = ||u + v||, O, , –v, , Fig. 6.2., , Geometrically these two lines u and v are ⊥ iff the distance from u to v is same as the distance, from u to −v., (d (u, −v))2 = ||u − (−v)||2 = ||u + v||2, , Now, , = (u + v) · (u + v), = u · (u + v) + v · (u + v), = u · u + u · v + v · u + v · v, (d (u,, , −v))2, , = ||u||2 + ||v||2 + 2 u · v, , (d (u, −v))2 = ||u||2 + ||v||2 − 2 u · v, , Similarly, Hence the distance, , d (u, −v) = d (u, v) iff 2 u · v, = −2 u · v ⇔ u · v = 0., , Hence the lines u and v through the origin are ⊥ iff u · v = 0. This following definition, generalize to Rn this notion of perpendicularity., Two vectors u and v in an inner product space are called orthogonal If <u, v> = 0., , 6.6.1 Orthogonality and Zero Vector, (i) ‘O’ is orthogonal to every vector in V., (ii) ‘O’ is the only vector in V that is orthogonal to itself., Proof: (i) ä, , <0, u> = 0 ≤ u ∈ V, , (ii) If v ∈ V and <v, v> = 0 then v = 0 {by definition of inner product}., EXAMPLE 1: U =, , LM1 0OP and V = LM0 2OP ., N1 1Q, N0 0 Q, , SOLUTION: <u, v> = 1 (0) + 0 (2) + 1 (0) + (0) = 0., Hence, u and v are orthogonal.
254, , LINEAR ALGEBRA, , EXAMPLE 2: Let P2 have the Inner product, <p, q> =, , z, , 1, , −1, , P ( x ) q ( x ) dx, , Let p (x) = x and q (x) = x2, then., SOLUTION:, ||p|| = <p · p>1/2, =, , LMz, N, , 1, , −1, , x ⋅ x dx, , OP, Q, , 1/ 2, , 2, 3, , =, , ||q|| = <q · q>1/2, =, <p, q> =, , LMz, N, , z, , 1, , −1, , 1, , −1, , x 2 ⋅ x 2 dx, , OP, Q, , x ⋅ x 2 dx =, , 1/ 2, , z, , 1, , −1, , =, , 2, 5, , x 3 dx = 0, , Hence P and q are orthogonal relative to the given inner product., EXAMPLE 3: (Pythagoras theorem), Prove that two vectors x and y in a real inner product space V are orthogonal if and, only if, ||x + y||2 = ||x||2 + ||y||2., SOLUTION: we have, ||x + y||2 = <x + y, x + y>, = <x, x> + <x, y> + <y, x> + <y, y>, = ||x||2 + 2 <x, y> + ||y||2, ||x + y||2 = ||x||2 + ||y||2, ⇔, <x, y> = 0, ⇔ x and y are orthogonal., EXAMPLE 4: Show that in a complex inner product {or unitary space} space V. If x is, orthogonal to y, then ||x + y||2 = ||x||2 + ||y||2. However, the converse need not be true., SOLUTION: We have, <x, y> = 0, ⇒, ∴, , —, , <y, x> = < x , y > = 0 = 0, , ||x + y||2 = <x + y, x + y>, = <x, x> + <x, y> + <y, x> + <y, y>, = ||x||2 + 0 + 0 + ||y||2, ⇒, ||x + y||2 = ||x||2 + ||y||2, However the converse need not be true., Consider V = c2 with standard inner product., Let, x = (0, i), y = (0, 1) ∈ V, then, <x, y> = 0 · 0 + i · 1 = i ≠ 0, ⇒ x is not orthogonal to y, —, Now, ||x||2 = 0 · 0 + i · i = i (−i) = 1
255, , INNER PRODUCT SPACES, , ||y||2 = 0 · 0 + 1 · 1 = 1, x + y = (0, 1 + i), and so, ∴, , ||x + y||2 = 0 · 0 + (1 + i) (1+ i ), ||x + y||2 = (1 + i) (1 − i) = 2, ||x + y||2 = ||x||2 + ||y||2, but x is not orthogonal to y., , αx + βy||2, EXAMPLE 5: Show that if V is a unitary space, then x, y ∈ V are orthogonal iff ||α, 2, 2, αx|| + ||β, βy|| ≤ α, β ∈ C., = ||α, ||αx + βy||2 = <αx + βy, αx + βy>, SOLUTION:, = <αx, αx + βy> + <βy, αx + βy>, = <αx, αx> + <αx, βy> + <βy, αx> + <βy, βy>, —, —, = ||αx||2 + αβ <x, y> + βα <y, x> + ||βy||2, ...(1), —, , Let x and y be orthogonal then <x, y> = 0 and so <y, x> = ( x , y ) = 0 = 0, Using in (1), we get, ||αx + βy||2 = ||αx||2 + ||βy||2, Conversely let (2) be true ≤ α1β ∈ C from (1) and (2) we get, —, —, αβ <x, y> + βα <y, x> = 0 ≤ α, β ∈ C, taking α = β = 1 in (3), we get, <x, y> + <y, x> = 0, or, <x, y> + < x , y >, ∴, 2 Re <x, y>, or, Re <x, y>, taking α = i, β = 1 in (3), i <x, y> − i <y, x>, , = 0, = 0, = 0, , <x, y> − < x , y >, ∴, 2 i Im <x, y>, or, Im <x, y>, ∴ <x, y> is a complex number., ∴, <x, y>, Hence x and y are orthogonal vectors., , = 0, = 0, = 0, , ...(2), ...(3), , ...(4), , = 0, , ...(5), , = Re <x, y> + i Im <x, y> = 0, , 6.6.2 Orthogonal Complements, Let W be a subspace of an inner product space V. A vector u ∈ V is said to be orthogonal to W. If, it is orthogonal to every vector in W, and the set of all vectors in V that are orthogonal to W is called, orthogonal complement of W, and is denoted by W⊥ {read as “W perpendicular” or “W perp”}, W⊥ = {v ∈ V : <v, w> = 0 ≤ w ∈ W}
256, , LINEAR ALGEBRA, , ⇒ W⊥ is a subspace of V., Proof: Let v1, v2 ∈ w⊥ and α, β, <v1, w>, and, <v2, w>, Consider, <αv1 + αv2, w>, , ∈, =, =, =, =, , F, then, 0, 0 ≤ w ∈ W., α <v1, w> + β <v2, w>, α·0+ β·0=0 ≤w∈W, , Hence αv1 + αv2 ∈ w⊥, ⇒ w⊥ is a subspace of V., , 6.6.3 Properties of Orthogonal Complements, If W is a, (a), (b), (c), , subspace of a finite-dimensional inner product space V, then, W⊥ is a subspace of V., The only vector common to W and W⊥ is ‘O’., The orthogonal complement of W⊥ is W, i.e., (W⊥)⊥ = W., , Theorem 6.3: Let A be an m × n matrix. The orthogonal complement of the row space, of A is the null space of A and the orthogonal complement of the column space of A is the null, space of AT., (Row A)⊥ = Nul A, and, (Col A)⊥ = Nul AT., Proof: The row-column rule for computing AX shows that if X is in Nul A, then X is orthogonal, to each row of A. Since the rows of A span the row space, X is orthogonal to Row A. Conversely, if X is orthogonal to Row A, then X is certainly orthogonal to each row of A and hence AX = 0., This proves the first statement of the theorem. Since this statement is true for any matrix, it, is true for AT, that is the orthogonal complement of the row space of AT is the null space of AT. This, proves that 2nd statement, because Row AT = Col A., , Theorem: Equivalent Statements, If A is an n × n matrix and if TA : Rn → Rn is multiplication by A, then the following are equivalent:, (a) A is invertible., (b) AX = 0 has only the trivial solution., (c) The reduced row Echelon form of A is In., (d) A is expressible as a product of elementary matrices., (e) AX = b is consistent for every n × 1 matrix b., (f) AX = b has exactly one solution for every n × 1 matrix b., (g) det (A) ≠ 0., (h) The range of TA is Rn., (i) TA is one-one., (j) The column vectors of A are linearly independent., (k) The row vectors of A are linearly independent., (l) The column vector of A span Rn.
257, , INNER PRODUCT SPACES, , (m), (n), (o), (p), (q), (r), (s), , The row vector of A span Rn., The column vector of A form a basis of Rn., The row vector of A form a basis for Rn., A has rank n., A has nullity zero., The orthogonal complement of the null space of A is Rn., The orthogonal complement of the row-space of A is {0}., , EXERCISE 6.2, 1. In each part, use the given inner product on R2 to find ||w||, where w = (−1, 3)., (a) The Euclidean inner product., (b) The weighted Euclidean inner product., <u, v> = 3α1β1 + 2α2β2, where u (α1, β1) and v = (α2, β2)., (c) The inner product generated by the matrix, A=, , LM 1 2OP, N−1 3Q, , 2. Use the inner products in question 1 to find d (u, v) for u = (−1, 2) and v = (2, 5)., 3. In each part, find ||P||., (a) P = −2 + 3x + 2x2., (b) P = 4 − 3x2., 4. Let M22 have the inner product defined by, <u, v> = 4u1v1 + 6u2v2, In each part find ||A||., (a) A =, , LM−2 4OP, N 3 6Q, , (b) B =, , LM−5 1OP, N 6 2Q, , 5. Let the vector space P2 have the inner product, <P, q> =, , z, , 1, , −1, , P ( x ) q ( x ) dx, , (a) Find ||P|| for p = 1, p = x and p = x2., (b) Find d (P, q) if p = 1 and q = x., 6. Use the inner product, <f, g> =, , z, , 1, , 0, , f ( x ) ⋅ g ( x ) dx, , to compute <f, g> for the vectors f = f (x) and g = g (x), (a) f = (cos 2πx, g = sin 2πx), (b) f = x, g = ex, π, (c) f = tan x, g = 1., 4
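Theorem 6.3, (Row A)⊥ = Nul A, is easy to check for a concrete matrix. A minimal sketch assuming NumPy; the matrix and its two null-space vectors were found by hand for this illustration.

```python
import numpy as np

# Rows of A span Row A; Theorem 6.3 says (Row A)^perp = Nul A.
A = np.array([[1.0, 0.0, -1.0, 1.0],
              [2.0, 3.0, -1.0, 2.0]])

# Two linearly independent null-space vectors of A (solutions of AX = 0)
n1 = np.array([0.0, 1.0, -3.0, -3.0])
n2 = np.array([-3.0, 1.0, -3.0, 0.0])

# A @ n is the vector of inner products of n with each row of A,
# so a zero residual means n is orthogonal to the whole row space.
residual1 = A @ n1
residual2 = A @ n2
```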
6.7 ORTHOGONAL SET

A set of vectors {u1, u2, ..., up} in Rⁿ is said to be an orthogonal set if all pairs of distinct vectors in the set are orthogonal. An orthogonal set in which each vector has norm 1 is called orthonormal.
i.e., the set {vi} is called an orthogonal set if
<vi, vj> = 0 for i ≠ j.
The set {vi} is called an orthonormal set if
<vi, vj> = 0 for i ≠ j, and <vi, vi> = 1.
A basis consisting of orthonormal vectors is called an orthonormal basis, and a basis consisting of orthogonal vectors is called an orthogonal basis.
The set S = {(1, 0, 0), (0, 1, 0), (0, 0, 1)} is an orthonormal basis of R³:
<(1, 0, 0), (0, 1, 0)> = 1·0 + 0·1 + 0·0 = 0
<(1, 0, 0), (0, 0, 1)> = 1·0 + 0·0 + 0·1 = 0
<(0, 1, 0), (0, 0, 1)> = 0·0 + 1·0 + 0·1 = 0
||(1, 0, 0)||² = <(1, 0, 0), (1, 0, 0)> = 1, ||(0, 1, 0)||² = 1 and ||(0, 0, 1)||² = 1
Hence S is an orthonormal subset of R³, and since S is a basis of R³, S is an orthonormal basis of R³.

EXAMPLE 1: Show that the set S = {u1, u2, u3} is an orthogonal set, where
u1 = (3, 1, 1), u2 = (−1, 2, 1), u3 = (−1/2, −2, 7/2).
SOLUTION:
u1 · u2 = 3(−1) + 1(2) + 1(1) = 0
u1 · u3 = 3(−1/2) + 1(−2) + 1(7/2) = 0
u2 · u3 = (−1)(−1/2) + 2(−2) + 1(7/2) = 0
Hence each pair of distinct vectors is orthogonal.
Let v ∈ V be a non-zero vector in an inner product space. Then the vector v/||v|| has norm 1:
||v/||v|||| = (1/||v||) ||v|| = 1
The process of multiplying a non-zero vector v by the reciprocal of its length to obtain a unit vector is called normalizing v.
An orthogonal set of non-zero vectors can always be converted to an orthonormal set by normalizing each of its vectors.
260, , LINEAR ALGEBRA, , Theorem 6.5: If S = {v1, v2, ..., vn} is an orthogonal set of non-zero vectors in an inner, product space v1 then S is linearly independent., Proof: Assume that, k1v1 + k2v2 + k3v3 + ... + knvn = 0, Now to show that S = {v1, v2, ..., vn} is linearly independent, we must prove that, k1 = k2 = ... = kn = 0, now for each vi ∈ S, k1 <v1, vi> + k2 <v2, vi> + ... + kn < vn, vi> = 0, from the orthogonality of S it follows that, <vj, vi> = 0, when j ≠ i, so above equation reduces to, ki < v i , v i > = 0, ∴ the vectors in S are non-zero,, ⇒, <vi, vi> ≠ 0, ⇒, ki = 0, ∴ i is arbitrary., Hence, k1 = k2 = ... = kn = 0, ⇒ S is linearly independent., Corollary: An orthonormal set S in an inner product space V is linearly independent., EXAMPLE 3: If {v1, v2, ... vn} is an orthonormal set in V and if w ∈ V, then show that, n, , u = w−, , i =1, , < w, vi > vi, , is orthogonal to each of v1, v2, ... vn., SOLUTION: For any i, 1 ≤ i ≤ n, we have, n, , <u, vi> = w −, , i =1, , < w, vi > vi where αi = <w, vi>, , = <w, vi> − <α1v1 + ... + αivi + ... + αnvn, vi>, = <w, vi> − {α1 <v1, vi> + ... + αi <vi, vi> + ... + αn <vn, vi>}, = <w, vi> − {0 + ... + αi · 1 + ... + 0}, {ä {v1, v2, ..., vn} is an orthonormal set}, = <w, vi> − αi, = <w, vi> − <w, vi>, <u, vi> = 0, Hence‘u’ is orthogonal to vi for i = 1, 2, ... n., EXAMPLE 4: Find a vector of unit length which is orthogonal to the vector (3, −2, 2) of, R3 (R) relative to the standard inner product., SOLUTION: Let, x = (3, −2, 2), and, y = (a, b, c) ∈ R3 be set.
261, , INNER PRODUCT SPACES, , <x, y> = 0, i.e.,, 3a − 2b + 2c = 0, A solution of the above equation a = 2, b = −3, c = −6., Hence (2, −3, −6) is orthogonal to x = (3, −2, 2)., Now, ||y||2 = <y, y>, = 4 + 9 + 36 = 49, ⇒, ||y|| = 7, , FG, H, , IJ, K, , 2, 3, 6, y, ,− ,−, =, 7, 7, 7, || y||, is a unit vector of unit length which is orthogonal to the vector (3, −2, 2)., , Hence, , EXAMPLE 5: Find two mutually orthogonal vectors each of which is orthogonal to the, vector (2, −1, 3) of R3 (R) with respect to standard inner product., SOLUTION: Let x = (x1, x2, x3) be orthogonal to (2, −1, 3), then 2x1 − x2 + 3 × 3 = 0, Solution of the above equation x1 = −1, x2 = 1, x3 = 1, and so x = (−1, 1, 1)., Let y = (y1, y2, y3) be a vector in R3 which is orthogonal to x = (−1, 1, 1) and (2, −1, 3), then, −y1 + y2 + y3 = 0,, 2y1 − y2 + 3y3 = 0, on solving these equations, we get, , y1, , =, , 3+1, Hence, y =, and, x =, are two mutually orthogonal vectors each, , y2, , =, , y3, , 2+3, 1− 2, (4, 5, −1), (−1, 1, 1), of which is orthogonal to (2, −1, 3)., , EXAMPLE 6: Consider R4 with the standard inner product. Let w be the subspace of R4, consisting of all vectors which are orthogonal to both α = (1, 0, −1, 1) and β = (2, 3, −1, 2)., Find a basis for w., γ = (x, y, z, t) ∈ w be arbitrary, SOLUTION: Let, then, <γ, α> = <γ, β> = 0, 1 · x + 0 · y − z + 1 · t = 0, and, 2x + 3y − 1z + 2t = 0, or, , where, , LM1, N2, , 0, 3, , −1, −1, , OP, Q, , 1, X, 2, , = 0,, , X =, , LM xOP, MM yPP ,, MN zt PQ, , O =, , LM0OP, N0Q
or [1 0 −1 1; 0 3 1 0] X = 0
or x − z + t = 0  ...(1)
3y + z = 0  ...(2)
Two linearly independent solutions of (1) and (2) are (0, 1, −3, −3) and (−3, 1, −3, 0).
Hence these two 4-tuples constitute a basis for W.

6.8 AN ORTHOGONAL PROJECTION

Let 0 ≠ u ∈ Rⁿ. Suppose we wish to decompose a vector y ∈ Rⁿ into the sum of two vectors, one a multiple of u and the other orthogonal to u,
i.e., y = ŷ + z,  {where ŷ = αu}
Let z = y − αu. Then y − ŷ is orthogonal to u iff
(y − ŷ) · u = 0
⇒ (y − αu) · u = 0
⇒ y · u − α u · u = 0
⇒ α = (y · u)/(u · u)
and ŷ = ((y · u)/(u · u)) u
[Fig. 6.3: y decomposed as ŷ along u, plus z = y − ŷ orthogonal to u.]
ŷ is called the orthogonal projection of y onto u, and the vector z is called the component of y orthogonal to u.
If c is any non-zero scalar, then the orthogonal projection of y onto cu is exactly the same as the orthogonal projection of y onto u. Hence this projection is determined by the subspace L spanned by u. Sometimes ŷ is denoted by Proj_L y and is called the orthogonal projection of y onto L:
ŷ = Proj_L y = ((y · u)/(u · u)) u.

EXAMPLE 1: Let y = (7, 6) and u = (4, 2). Find the orthogonal projection of y onto u.
Or: Write y as the sum of two orthogonal vectors, one in span {u} and one orthogonal to u.
SOLUTION:
y · u = 7(4) + 6(2) = 40
u · u = 4(4) + 2(2) = 20
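The projection formula above can be sketched in code, using the data of Example 1 (assuming NumPy).

```python
import numpy as np

def proj(y, u):
    """Orthogonal projection of y onto the line spanned by u: ((y.u)/(u.u)) u."""
    return (y @ u) / (u @ u) * u

y = np.array([7.0, 6.0])
u = np.array([4.0, 2.0])

y_hat = proj(y, u)   # (40/20) u = (8, 4)
z = y - y_hat        # component of y orthogonal to u
```

By construction z · u = 0, so y = ŷ + z is the required orthogonal decomposition.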
273, , INNER PRODUCT SPACES, , EXERCISE 6.3, 1. Which of the following set of vectors are orthogonal with respect to the Euclidean Inner, product on R2?, (a) (0, 1), (2, 0), , (b), , (c) (0, 0) (0, 1), , (d), , FG −, H, FG −, H, , IJ , FG 1 , 1 IJ, 2, 2K H 2, 2K, 1, 1 I F 1, 1 I, ,−, ,G, ,, J, J, 2, 2K H 2, 2K, 1, , ,, , 1, , 2. Which of the following sets of vectors are orthogonal w.r.t the Euclidean inner product on, R3 ?, (a), (b), , FG 1 , 0, 1 IJ , FG 1 , 1 , − 1 IJ , FG −, H 2, 2K H 3, 3, 3K H, F 1 , 1 IJ , (0, 0, 1), (1, 0, 0), G 0,, H 2 2K, , 1, 2, , , 0,, , IJ, 2K, , 1, , 3. Verify that the vectors, , FG 1 , − 2 , − 2 IJ , FG 2 , − 1 , 2 IJ , FG 2 , 2 , − 1IJ, H 3 3 3 K H 3 3 3 K H 3 3 3K, , form an orthonormal set in R3 (R) relative to the standard inner product., 4. Apply the Gram-schmidt process to obtain an orthonormal basis for R3(R) with the standard, inner product for each of the following bases:, (a) {(1, 0, 0), (1, 1, 0), (1, 1, 1)}, (b) {(2, 0, 1), (3, −1, 5), (0, 4, 2)}., 5. Obtain an orthonormal basis relative to the standard inner product for the subspace of R3, generated by (1, 0, 3) and (2, 1, 1)., 6. Let V be the set of real functions y = f (x), satisfying, , d2y, dy, − 5 + 6y = 0, dx 2, dx, (a) Prove that V is a 2-D real vector space., (b) In V, define <u, v> =, , z, , 0, , −∞, , uv dx, , Show that this defines an inner product on V and find an orthonormal basis for V., 7. If {x, y} is an orthonormal set, then prove that ||x − y|| =, 8. Let P2 have the inner product <P, q> =, transform the standard basis S = {1, x,, , z, , 1, , 2., , P ( x ) q ( x ) dx . Apply the Gram-schmidt process to, , 0, x 2}, , into an orthonormal basis.
275, , INNER PRODUCT SPACES, , We have, , <x, u> = <x, β1w1 + ... + βmwm>, —, , —, , = β <x1w1> + ... + β m <x, wm> = 0, ∴, , <x, u> = 0 ≤ u ∈ w ⇒ x ∈ W⊥, , ∴, , V = W + W⊥, , Now we show that W ∩, Let y ∈ W ∩, , W⊥, , W⊥, , ...(5), , = {0}, , be arbitrary, , So that y ∈ W and y ∈ W⊥, Now y ∈ W⊥ ⇒, , <y, u> = 0 ≤ u ∈ W, , In particular, , <y, y> = 0, , ⇒, , y = 0 and on W ∩, , From (5) and (6) V = W ⊕, , W⊥ ., , {ä y ∈ W}, W⊥, , = {0}, , ...(6), , Corollary 1: Prove that (W⊥)⊥ = W, where W is a subspace of afinite dimensional inner, product space., ...(1), Proof: By the above theorem V = W ∩ W⊥, ⊥, ⊥, ∴ W is a subspace of V, on replacing W by W in equation (1) We get, V = W⊥ ⊕ (W⊥)⊥, ...(2), ∴ V is finite-dimensional, so from (1) and (2), We get, dim V = dim W + dim W⊥, ...(3), ⊥, ⊥, ⊥, dim V = dim W + dim (W ), Consequently,, dim W = dim (W⊥)⊥, ...(4), ...(5), Next We prove that, W ⊆ W⊥⊥, ⊥, Let x ∈ W be arbitrary, then <x, W> = 0 ≤ W ∈ W, ⇒, x ∈ (W⊥)⊥ = (W⊥)⊥, ⇒, W ⊆ W⊥⊥, ∴ from equation (4) and (5), W = (W⊥)⊥., Corollary 2: If W is a subspace of a finite-dimensional inner product space, then, dim W⊥ = dim V − dim W., EXAMPLE 1: Let V = R2 = {(x, y) : x, y ∈ R} be the inner product space relative to the inner, product defined as follows:, β1β2, ...(1), <u, v> = α1α2 − β1α2 − α1β2 + 4β, α1, β1),, where, u = (α, v = (a2, β2) ∈ V, If w = {(a, a) : a ∈ R} is a subspace of V, find its orthogonal complement W⊥., SOLUTION: Let (x, y) ∈ W⊥ be arbitrary., By definition, <(x, y), (a, a)> = 0 ≤ (a, a) ∈ W, ...(2)
276, , LINEAR ALGEBRA, , Let a ≠ 0, from (1) and (2), We, xa − ya − xa + 4ya, ⇒, 3ya, ⇒, y, ∴, (x, y), For any x ∈ R, <(x, 0), (a, a)>, Hence, , get, = 0, = 0, = 0, = (x, 0), = xa − 0 · a − x · a + 4 · 0 · a, = 0 ≤ (aA) ∈ W, W⊥ = {(x, 0) : x ∈ R}., , {ä a ≠ 0}, , EXAMPLE 2: If S1 and S2 and subsets of an inner product space V, then show that, S 1 ⊆ S2, ⇒, S2⊥ ⊆ S1⊥., SOLUTION: Let x ∈ S2⊥ be arbitrary,, then, <x, y> = 0 ≤ y ∈ S2, In particular, <x, z> = 0 ≤ z ∈ S1, {ä S1 ⊆ S2}, ⊥, ∴ x ∈ S1, Hence, S2⊥ ⊆ S1⊥., EXAMPLE 3: If M and N are subspaces of a finite-dimensional inner product space V,, prove that, (i) (M + N)⊥ = M⊥ ∩ N⊥., (ii) (M ∩ N)⊥ = M⊥ + N⊥., SOLUTION: (i) We have, M ⊆ M+ N, and, N ⊆ M+ N, ∴, (M + N)⊥ ⊆ M⊥, and, (M + N)⊥ ⊆ N⊥, thus, (M + N)⊥ ⊆ M⊥ ∩ N⊥, ...(1), ⊥, ⊥, Conversely let z ∈ M ∩ N be arbitrary., ⇒ z ∈ M⊥ and z ∈ N⊥, ⇒, <z, x> = 0 ≤ x ∈ M, ...(2), and, <z, y> = 0 ≤ y ∈ N, Now any t ∈ M + N is expressible as, t = x + y, for some x ∈ M, y ∈ N, ∴, <z, t> = <z, x + y>, = <z, x> + <z, y> = 0, ⊥, ⇒ z ∈ (M + N), and so, M⊥ ∩ N⊥ ⊆ (M + N)⊥, ...(3), ⊥, ⊥, ⊥, from (1) and (3), (M + N) = M ∩ N, ...(4)
277, , INNER PRODUCT SPACES, , (ii) ä M⊥ and N⊥ are subspaces of V, ∴ Taking M⊥ in place of M and N⊥ in place of N in (4), we get, (M⊥ + N⊥)⊥ = (M⊥)⊥ ∩ (N⊥)⊥, = M⊥⊥ ∩ N⊥⊥ = M ∩ N, ⊥, ⊥, ⊥⊥, ⇒, (M + N ), = (M ∩ N)⊥, Hence, , M⊥ + N⊥ = (M ∩ N)⊥., , EXERCISE 6.4, 1. Let V be an inner product space prove that, (i) {0}⊥ = V, (ii) V⊥ = {0}, (iii) S⊥ = {L (S)}⊥, where S is a subset of V, (iv) S Í S⊥⊥, where S is a subset of V., 2. Let W be a subspace of an inner product space V. Show that, (i) W ⊆ W⊥⊥, (ii) W = W⊥⊥, if the dimension of V is finite., 3. If S is a subset of an inner product space V, prove that S⊥ = S⊥⊥⊥., 4. Let W be a subspace of an inner product space V. If {w1, w2, ..., wn} is a basis for W. Show, that w ∈ W⊥ Iff <w, wi> = 0 for i = 1, 2, ..., n., , TRUE AND FALSE, 1. Let W be a subspace of a vector space V = 3, Spanned by u1 = (1, 0, −1) and u2 = (0, 1,, 0). Then vector us = (1, 0, 1) is an orthogonal complement of W., 2. The orthogonal projection of a vector u onto a non zero vector v is f, , < u, v >, u., || v||2, , 3. The orthogonal projection of vector v onto a non zero vector u is <u, u>·, , FG u IJ ., H || u|| K, 2, , 4. In a real inner product space V. Then for u, v, w ∈ V, <u, αv> = α <u, v>, ||u + v||2 = ||u||2, + ||v||2 and <u, v + w> = <w, u> + <v, u> is correct., 5. The orthogonal projection of (3, 1) onto (2, −2) is (2, 1)., 6. If u and v are unit vector in an inner product space then ||u + v||2 = 2 (1 + <u, v>)., 7. If ||u + v||2 = ||u||2 + ||v||2 then u and v are mutually orthogonal., 8. If <u, v> = <w, v> then u = w., 9. If <u, v> = 0 then either u = 0 or v = 0., 10. If either u = 0 or v = 0, then <u, v> = 0., 11. If u ≠ v then <u, v> ≠ ||u||2 ≠ ||v||2., 12. An orthogonal set of vectors is necessarily linearly independent.
278, , LINEAR ALGEBRA, , OBJECTIVE TYPE QUESTIONS, 1. Let u = (−3, −2) and v = (−1, 2). Then ||<u, v>|| is:, (a) 5, , (b), , 5, , (c) 10, (d) 10, 2. Which of the following is not true in an inner product space V = n:, (a) <u, v> = <v, u>, (b) <u + v, w> = <u, w> + <v, w>, (c) <u, u> = ||u||, (d) <cu, v> = <u, cv>, 3. Which of the following functions are orthogonal in an inner product space V if:, <f, g> =, , 4., , 5., , 6., , 7., , 8., , 9., , z, , 1, , f ( t ) g ( t ) dt, , 0, , (a) f (t) = t, g (t) = 2 − 3t, (b) f (t) = t − 1 g (t) = 1, (c) f (t) = sin t, g (t) = cos t, (d) f (t) = et, g (t) = 1, Let A be an m × n matrix whose columns are orthonormal which is necessarily true, (b) <Au, Av> = <u, v> ≤ u, v ∈ n, (a) ||Au|| = ||u|| ≤ u ∈ n, (c) AAT = I, (d) Rank (A) = n, Let W be a subspace of a vector space V = n, which is spanned by {u, v} and u, v are linearly, independent vectors then orthogonal basis of W is:, (a) {u − v, u + v}, (b) {u + <u, v> u, v − <u, v> u}, (c) {u, u − v}, (d) {u, <u, u> v − <u, v> u}, In an inner product space, if, ||u + v||2 = ||u||2 + ||v||2. Then, (a) ||u|| = ||v||, (b) u and v are mutually orthogonal, (c) u = v, (d) <u, v> < 0., Let A ∈ m×n (R) an orthonormal set. The which of the following is correct:, (a) ATA = I, (b) A is invertable, T, (c) AA = I, (d) rank (A) = n, If {u, v} is an orthonormal set in an inner product space n. Then which is correct?, (a) ||u − v|| = 2, (b) ||au + bv|| = a + b, (c) for any w ∈ n, <w, u> u + <w, v> v is orthogonal to u − v, (d) ||au + bu||2 = a2 + b2, If {u, v} is an orthogonal set of non zero vectors in n. Then for what value of c,, <u − v, cu + v> = 0, (a), , || u||, || v||, , (c) ||v||, , (b), , || v||, || u||, , (d), , || v||2, || u||2
279, , INNER PRODUCT SPACES, , 10. Which set is orthogonal:, (a) {(1, −1, 1) (1, 0, −1) (1, 2, −1)}, (c) {(1, 0, 1) (0, 1, 0) (−1, 0, −1)}, , (b) {(1, 2, 3) (−1, −2, 3) (0, 0, 0)}, (d) none of these, , ANSWERS, EXERCISE 6.1, 4. γ =, , FG − 7 , 2 IJ, H 3 3K, , 5. α = (1, 1, 1), , EXERCISE 6.2, 1., , (i), , 10, , (ii), , 17, , (b) 5, , (iii), , 21, , 15, , 2. 3 2, 3. (a), 4. 8, 5. (a), , FG 2, 2 , 2 IJ, H 3 5K, , (b) |1 − x2|1/2, , 6. (a) 0, , 1, log 2, 2, , (b) 1, , (c) log 2 =, , (b) Orthogonal, , (c) Orthogonal, , EXERCISE 6.3, 1. (a) Orthogonal, (d) Not orthogonal, 2. (a) Not orthogonal, , UV l1, 0, 0q l0, 1, 0q, W, R|S 1 (1, 0, 3), 7 FG 3 , 1, −1IJ U|V, 2 H2, 2 K W|, T| 10, RS6, 2 3FG x − 1 IJ , 6 FG x − x + 1 IJ UV, 6K W, T H 2K 5 H, , 4. (a), , 5., , 8., , RS0,, T, , (b) Not orthogonal, , 2, 1, ,, 5, 5, , (b), , RS, T, , −7, ,, 270, , −5, ,, 270, , 2, , 11. (a) A = QR, , LM 1, 5, Q = M, MM 2, N5, , OP, P;, 1 P, P, 5Q, , −2, 5, , R=, , LM 5, MN 0, , 5, 5, , OP, PQ, , 14, 270, , UV RS 2 , 0, 1 UV, W T 5 5W
280, , LINEAR ALGEBRA, , (b) Q =, , (c) Q =, , LM 1, MM 2, MM 0, MM 1, N2, LM 1, MM 2, MM 12, MM, MN 0, , −1, 3, 1, 3, 1, 3, , 2, 2 19, − 2, 2 19, 3 2, 19, , OP, P, 2 P, P;, 6P, 1 P, P, 6Q, , 1, 6, , OP, P, 3 P, P;, 19 P, 1 P, P, 19 PQ, , −3, 19, , LM 2, MM, R= M0, MM, MN 0, LM 2, M, R= M0, MM, N0, , 2, 3, 0, , 2, 3, 0, , OP, P, −1 P, P, 3P, 4 P, P, 6Q, 2O, P, −1 P, P, 3P, 4 6 PQ, 2, , (d) Does not exist., , EXERCISE 6.4, True or False, 1. False, , 2. False, , 6. False for complex, 9. False → v = (1, 1), v = (1, 1), , 3. False, , 4. False, , 7. True, , 8. False, , 10. True, , 5. False, , 11. False → u = (1, 0), v = (0, 1), , 12. True., , OBJECTIVE TYPE QUESTIONS, 1. (b), 6. (b), , 2. (c), 7. (b), , 3. (a), 8. (d), , 4. (d), 9. (d), , 5. (d), 10. (d),
Chapter 7

DUAL SPACE

INTRODUCTION
Earlier, we learned that L(V, W), the set of all linear transformations from a vector space V(F) into a vector space W(F), is also a vector space over the field F. Further, if dim(V) = m and dim(W) = n, then dim(L(V, W)) = dim(V) dim(W) = mn. In this chapter we study linear transformations from a vector space V(F) into F(F). In particular, if W(F) is replaced by F(F) in L(V, W), then the resulting space is denoted by V* and is called the dual of V. Further, dim V* = dim(L(V, F)) = dim V × dim F = dim V.

7.1 DEFINITION (LINEAR FUNCTIONAL)
Let V be a vector space over a field F. Then a linear transformation T : V → F is called a linear functional; mathematically,
T(αv1 + βv2) = αT(v1) + βT(v2) ∈ F ∀ α, β ∈ F and v1, v2 ∈ V

7.2 DEFINITION (DUAL OF A VECTOR SPACE V)
Let V be a vector space over a field F. Then the dual of V is denoted by V* and defined as
V* = {T : V → F : T is a linear transformation}
⇒ V* = {set of all linear functionals on V}
Example 1: Let R[x] be the space of polynomials over R. Then T : R[x] → R, defined as
T(p(x)) = ∫₋₁¹ p(x) dx,
is a linear functional on R[x].
Example 2: If V is the vector space of real polynomials of degree at most n, then T : V → R, defined as T(P(x)) = P(0), is a linear functional.
Example 3: If V = Mn(R) is the vector space of all n × n matrices whose entries are real, then T : Mn(R) → R, defined as
T(A) = Tr(A) = Σᵢ aii  {Tr(A) = trace of A}
is a linear functional on V.
Example 4: If V = Rⁿ, then Ti : Rⁿ → R, defined as
Ti(α1, α2, ..., αn) = αi, i = 1, 2, ..., n,
are n distinct linear functionals.
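Example 3's claim, that the trace is a linear functional, can be spot-checked numerically. A minimal sketch assuming NumPy; the matrices and scalars are arbitrary test data.

```python
import numpy as np

# T(A) = Tr(A) is a linear functional on n x n real matrices:
# T(a*A + b*B) = a*T(A) + b*T(B).
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
a, b = 2.0, -1.5

lhs = np.trace(a * A + b * B)
rhs = a * np.trace(A) + b * np.trace(B)
```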
Chapter 8

EIGEN VALUES AND EIGEN VECTORS

8.1 INTRODUCTION
The eigen values of a linear transformation arise in many problems in dynamical systems and control systems (stability analysis).
The system of first-order ordinary differential equations y′ = Ay, where y = [y1(t), y2(t), ..., yn(t)]ᵀ and A is an n × n matrix, with y(t0) = y0, is a dynamical problem. Its solution can grow, decay or oscillate with respect to time t. The stability of this system depends on the eigen values of A: if all eigen values are negative the system decays, if they are positive it grows, and if they are complex with negative real part it oscillates with decay. The eigen values and eigen vectors are also used to compute Aⁿ, e^A, sin A and cos A, etc. In this chapter we explain how to compute these matrices.

8.2 DEFINITION AND EXAMPLES
Definition 8.2.1 (Eigen Value and Eigen Vector): Let A be an n × n matrix. A scalar λ is said to be an eigen value or characteristic value of A if there exists a non-zero vector X ∈ Rⁿ such that
AX = λX ⇒ (A − λI)X = 0  ...(1)
This is a homogeneous system of linear equations with a non-zero solution X. This implies that
r(A − λI) < n ⇒ |A − λI| = 0  ...(2)
(2) is called the characteristic equation; it is a polynomial equation of degree n in λ.
If A = [aij] is the n × n matrix with entries a11, a12, ..., ann, then |A − λI| = 0 gives
| a11 − λ   a12      ...  a1n     |
| a21       a22 − λ  ...  a2n     | = 0
| ...       ...      ...  ...     |
| an1       an2      ...  ann − λ |
Q(λ)/p(λ) = (λ² + 8λ + 38) + (152λ − 189)/(λ² − 5λ + 5)
⇒ Q(λ) = (λ² + 8λ + 38) p(λ) + 152λ − 189.
Put λ = A:
Q(A) = (A² + 8A + 38I) p(A) + 152A − 189I.
Since p(λ) is the characteristic polynomial, p(A) = 0 by the Cayley–Hamilton theorem. This implies that
Q(A) = 152A − 189I.

8.4.2 Determining Regular Functions of a Matrix Using the Cayley–Hamilton Theorem
Let f(x) be a regular (analytic) function in a region D ⊆ R. In this region f(x) can be expressed as
f(x) = Σ (k = 0 to ∞) a_k x^k, where a_k = f^(k)(0)/k!.
Let A be an n × n matrix with characteristic polynomial p(λ), and suppose λ1, λ2, ..., λn are the eigen values of A. Dividing f by p,
f(x) = Q(x) p(x) + R(x),
where R(x) is the remainder, of degree at most n − 1. In particular, at x = λi,
f(λi) = Q(λi) p(λi) + R(λi).
Since each λi is a root of the characteristic polynomial, p(λi) = 0, and this implies that
f(λi) = R(λi) = Σ (k = 0 to n − 1) b_k λi^k.   ...(12)
In equation (12) the λi are known, so (12) is a set of simultaneous linear equations whose solution gives the coefficients b_k, k = 0, 1, 2, ..., n − 1.
The matrix function is f(A) = Q(A) p(A) + R(A), and p(A) = 0 by the Cayley–Hamilton theorem, so
f(A) = R(A) = Σ (k = 0 to n − 1) b_k A^k.

EXAMPLE 2: Compute e^(Ax) for A =
[−2   1]
[ 0  −3]
using the Cayley–Hamilton theorem.
SOLUTION: |A − λI| = 0
⇒ | −2 − λ   1 ; 0   −3 − λ | = 0
⇒ (λ + 2)(λ + 3) = 0
⇒ λ = −2, −3 are the eigen values of A.
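The procedure of Section 8.4.2 can be sketched in code for f(x) = e^x and the 2 × 2 matrix of Example 2: fix b0, b1 from f(λi) = R(λi) as in equation (12), then form f(A) = b0 I + b1 A. The cross-check through the eigendecomposition is an addition for verification, not part of the book's method.

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
lams = np.array([-2.0, -3.0])           # eigen values from |A - lambda I| = 0

# Equation (12): the Vandermonde system [1, lambda_i] @ [b0, b1]^T = e^{lambda_i}.
V = np.vander(lams, N=2, increasing=True)
b = np.linalg.solve(V, np.exp(lams))

# f(A) = R(A) = b0*I + b1*A by Cayley-Hamilton.
expA_ch = b[0] * np.eye(2) + b[1] * A

# Independent check: e^A = P diag(e^{lambda_i}) P^{-1}.
w, P = np.linalg.eig(A)
expA_eig = P @ np.diag(np.exp(w)) @ np.linalg.inv(P)
```

For this triangular A the exact answer is known in closed form, which makes the agreement easy to verify by hand as well.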
8.5 EIGEN VALUES AND EIGEN VECTORS OF SOME SPECIAL MATRICES
In this section we discuss the eigen values and eigen vectors of some special matrices: Hermitian, skew-Hermitian (anti-Hermitian), unitary, symmetric, skew-symmetric, orthogonal, nilpotent, idempotent and involutory matrices. We also discuss the eigen values of upper and lower triangular matrices. The eigen value chart for these special matrices in the complex plane is given in Fig. 8.1.

[Fig. 8.1: location of eigen values in the complex plane — Hermitian and symmetric matrices lie on the real axis; skew-Hermitian and skew-symmetric on the imaginary axis; unitary and orthogonal on the unit circle |λ| = 1; idempotent at 0 and 1; involutory at −1 and +1; nilpotent at 0.]

Note 1: Hermitian, skew-Hermitian and unitary are complex matrices.
Note 2: Symmetric, skew-symmetric and orthogonal matrices are real.

Theorem 8.6: If A is a Hermitian matrix then all of its eigen values are real.
Proof: Let A be a Hermitian matrix and λ an eigen value of A. Then there exists a non-zero vector X such that
AX = λX   ...(13)
⇒ (AX)* = (λX)*, where T* = (T̄)^T
⇒ X*A* = λ̄X*.   ...(14)
Since A is a Hermitian matrix, A* = A; from (14),
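Theorem 8.6 is easy to probe numerically. The sketch below builds a Hermitian matrix from a random complex matrix (an illustrative construction, not from the text): H = M + M* always satisfies H* = H, so all its eigen values should be real up to round-off.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
H = M + M.conj().T                  # H* = H, Hermitian by construction

vals = np.linalg.eigvals(H)
all_real = np.allclose(vals.imag, 0.0, atol=1e-10)
```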
X*A = λ̄X*
⇒ X*AX = λ̄X*X
⇒ X*λX = λ̄X*X
⇒ λX*X = λ̄X*X
⇒ (λ − λ̄)X*X = 0.   ...(15)
Since X ≠ 0, we have X* ≠ 0 and X*X > 0,
⇒ λ − λ̄ = 0 ⇒ λ = λ̄ ⇒ λ is a real number.
Hence all eigen values of a Hermitian matrix are real.

Corollary 1: If A is a symmetric matrix then all its eigen values are real.
Proof: Every real symmetric matrix is a Hermitian matrix, because
A^T = A and Ā = A ⇔ A* = (Ā)^T = A^T = A.
Therefore, by Theorem 8.6, all eigen values of A are real.

Theorem 8.7: All eigen values of a skew-Hermitian matrix are purely imaginary or zero.
Proof: Let A be a skew-Hermitian matrix and λ an eigen value of A. Then there exists a non-zero vector X ≠ 0 such that
AX = λX   ...(16)
⇒ (AX)* = (λX)*
⇒ X*A* = λ̄X*.   ...(17)
Since A is a skew-Hermitian matrix, A* = −A; from equation (17),
X*(−A) = λ̄X*
⇒ −X*A = λ̄X*
⇒ −X*AX = λ̄X*X
⇒ −X*λX = λ̄X*X, from equation (16)
⇒ −λX*X = λ̄X*X
⇒ (λ + λ̄)X*X = 0.
Since X ≠ 0, X* ≠ 0 and X*X > 0, therefore λ̄ = −λ ⇒ λ is purely imaginary or zero.

Corollary 2: All eigen values of a skew-symmetric matrix are purely imaginary or zero.
Proof: Every real skew-symmetric matrix is a skew-Hermitian matrix, because
A^T = −A and Ā = A ⇔ A* = (Ā)^T = A^T = −A.
Therefore, every eigen value of A is purely imaginary or zero.
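Theorem 8.7 can be checked the same way as Theorem 8.6, again with an illustrative random construction (an assumption, not a matrix from the text): S = M − M* always satisfies S* = −S, so its eigen values should have zero real part.

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = M - M.conj().T                  # S* = -S, skew-Hermitian by construction

vals = np.linalg.eigvals(S)
purely_imaginary = np.allclose(vals.real, 0.0, atol=1e-10)
```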
Theorem 8.8: If A is a unitary matrix, then all of its eigen values have unit modulus.
Proof: Let A be a unitary matrix and λ an eigen value of A; then there exists a non-zero vector X such that
AX = λX   ...(18)
⇒ (AX)* = (λX)*
⇒ X*A* = λ̄X*.   ...(19)
From equations (18) and (19),
X*A*AX = λ̄X*λX
⇒ X*X = λλ̄X*X   (A is unitary ⇒ A*A = AA* = I)
⇒ (|λ|² − 1)X*X = 0.
Since X ≠ 0 ⇒ X* ≠ 0 and X*X > 0,
⇒ |λ|² = 1 ⇒ |λ| = 1.
Hence all eigen values of a unitary matrix have unit modulus.

Corollary 3: If A is an orthogonal matrix, then all of its eigen values have unit modulus.
Proof: Every real orthogonal matrix is unitary, because
A^T A = AA^T = I ⇔ (Ā)^T A = A(Ā)^T = I ⇔ A*A = AA* = I.
Hence, every eigen value of an orthogonal matrix has unit modulus.

Theorem 8.9: If A is an idempotent matrix then every eigen value of A is either zero or one.
Proof: Let A be an idempotent matrix and λ an eigen value of A; then there exists a non-zero vector X such that
AX = λX.
Pre-multiplying by A, we get
A²X = λAX = λ²X.   ...(20)
Since A is idempotent, A² = A. From equation (20),
AX = λ²X
⇔ λX = λ²X
⇔ (λ − λ²)X = 0
⇒ λ = 0, 1.
⇒ Every eigen value of an idempotent matrix is either 0 or 1.

Theorem 8.10: If A is an involutory matrix, then every eigen value of A is either −1 or +1.
Proof: Let A be an involutory matrix; therefore A² = I. Suppose λ is an eigen value of A; then there exists a non-zero vector X such that
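Theorems 8.8–8.10 can be illustrated together. The sketch below uses standard constructions that are assumptions for the check, not examples from the text: a QR factor as an orthogonal (real unitary) matrix, a rank-one projection as an idempotent matrix, and the corresponding Householder reflection as an involutory matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

# Unitary/orthogonal: Q from a QR factorization satisfies Q^T Q = I,
# so every eigen value should have |lambda| = 1 (Theorem 8.8).
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
unit_modulus = np.allclose(np.abs(np.linalg.eigvals(Q)), 1.0)

# Idempotent: the projection P = v v^T / (v^T v) satisfies P @ P = P,
# so its eigen values should be 0 and 1 only (Theorem 8.9).
v = np.array([[1.0], [2.0], [2.0]])
P = v @ v.T / float(v.T @ v)
idem_vals = np.sort(np.linalg.eigvals(P).real.round(10))     # 0, 0, 1

# Involutory: the reflection H = I - 2P satisfies H @ H = I,
# so its eigen values should be -1 and +1 only (Theorem 8.10).
H = np.eye(3) - 2 * P
invol_vals = np.sort(np.linalg.eigvals(H).real.round(10))    # -1, 1, 1
```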
This implies that there are two eigen vectors corresponding to the single eigen value λ, and hence one of them is a constant multiple of the other, say Y = CX. Then
AX = λX ⇒ CAX = λCX ⇒ AY = λY,
so Y is again an eigen vector corresponding to λ. Hence all eigen values and eigen vectors of a real symmetric matrix are real.

Theorem 8.12: If all eigen values of a real symmetric matrix A are distinct, then the eigen vectors of A are mutually orthogonal.
Proof: Let λi and λj be distinct eigen values of a real symmetric matrix A. Then there exist non-zero vectors Xi and Xj corresponding to λi and λj respectively, such that
AXi = λiXi   ...(27)
AXj = λjXj.   ...(28)
Pre-multiplying equation (27) by Xj^T and equation (28) by Xi^T, we get
Xj^T A Xi = λi Xj^T Xi   ...(29)
Xi^T A Xj = λj Xi^T Xj.   ...(30)
Each side of (29) and (30) is a scalar, since [Xj^T](1×n) [A](n×n) [Xi](n×1) is a 1 × 1 matrix, and a scalar equals its own transpose. Since A is symmetric (A^T = A),
(Xi^T A Xj)^T = Xj^T A^T Xi = Xj^T A Xi, and (Xi^T Xj)^T = Xj^T Xi.   ...(31)
Subtracting equation (30) from equation (29) and using (31),
0 = (λi − λj) Xi^T Xj.
Since λi ≠ λj ⇒ (λi − λj) ≠ 0 ⇒ Xi^T Xj = 0
⇒ ⟨Xi, Xj⟩ = 0 ⇒ Xi and Xj are orthogonal vectors.

EXAMPLE 1: If A is a real skew-symmetric matrix then
(i) (I − A) is non-singular;
(ii) (I + A)(I − A)^(−1) is an orthogonal matrix.
SOLUTION: (i) Let A be a real skew-symmetric matrix, so A^T = −A. To show that (I − A) is a non-singular matrix:
Suppose (I − A) is a singular matrix; then (I − A) has at least one zero eigen value. This implies that A has at least one eigen value equal to one, which is a contradiction, because A is a real skew-symmetric matrix and all its eigen values are purely imaginary or zero.
Hence (I − A) is non-singular.
(ii) To show that B = (I + A)(I − A)^(−1) is an orthogonal matrix:
B^T = [(I + A)(I − A)^(−1)]^T
= ((I − A)^(−1))^T (I + A)^T
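Theorem 8.12 can be sketched numerically using the symmetric matrix that appears later in the exercises of this chapter (its eigen values are 2, 3 and 6): the eigen vectors returned for distinct eigen values should be mutually orthogonal.

```python
import numpy as np

A = np.array([[3.0, -1.0, 1.0],
              [-1.0, 5.0, -1.0],
              [1.0, -1.0, 3.0]])      # real symmetric, distinct eigen values

# eigh is the symmetric eigen solver; columns of vecs are eigen vectors.
vals, vecs = np.linalg.eigh(A)

# Gram matrix of the eigen vectors: identity iff they are orthonormal,
# which in particular means X_i^T X_j = 0 for i != j (Theorem 8.12).
G = vecs.T @ vecs
```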
= ((I − A)^T)^(−1) (I + A)^T
= (I − A^T)^(−1) (I + A^T).
Since A^T = −A,
B^T = (I + A)^(−1)(I − A).
Then
BB^T = [(I + A)(I − A)^(−1)] · [(I + A)^(−1)(I − A)]
= (I + A)(I − A²)^(−1)(I − A)
= (I + A)(I + A)^(−1)(I − A)^(−1)(I − A)
= I · I = I.
Similarly,
B^T B = (I + A)^(−1)(I − A)(I + A)(I − A)^(−1)
= (I + A)^(−1)(I − A²)(I − A)^(−1)
= (I + A)^(−1)(I + A)(I − A)(I − A)^(−1)
= I · I = I
⇒ B^T B = BB^T = I.
Hence B = (I + A)(I − A)^(−1) is an orthogonal matrix.

Definition 8.5.1 (Similar Matrices)
Two matrices A and B are said to be similar if there exists an invertible matrix P such that
B = P^(−1)AP, or PBP^(−1) = A.

Theorem 8.13: If A and B are similar matrices then the eigen values of A and B are the same.
Proof: Let A and B be two similar matrices; then there exists an invertible matrix P such that B = P^(−1)AP. We show that the eigen values of A and B are the same. Consider
|B − λI| = |P^(−1)AP − λP^(−1)P|
= |P^(−1)(A − λI)P|
= |P^(−1)| · |A − λI| · |P|
= (1/|P|) · |A − λI| · |P|
= |A − λI|
⇒ The characteristic polynomials of A and B are the same. Hence the eigen values of A and B are the same.

Theorem 8.14: If λ1, λ2, ..., λn are distinct eigen values of a real symmetric matrix A of order n × n, then there exists an orthogonal matrix P such that D = P^T AP, where
D = diag(λ1, λ2, ..., λn).
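Example 1(ii) and Theorem 8.13 can both be probed numerically. The particular skew-symmetric matrix and the similarity transform P below are illustrative assumptions (P is shifted by 3I only to keep it well conditioned, hence invertible).

```python
import numpy as np

# Example 1(ii): for real skew-symmetric A, B = (I + A)(I - A)^{-1}
# should be orthogonal.
A = np.array([[0.0, 2.0, -1.0],
              [-2.0, 0.0, 3.0],
              [1.0, -3.0, 0.0]])      # A^T = -A
I = np.eye(3)
B = (I + A) @ np.linalg.inv(I - A)
orthogonal = np.allclose(B.T @ B, I)

# Theorem 8.13: C = P^{-1} A P has the same characteristic polynomial as A.
rng = np.random.default_rng(4)
P = rng.standard_normal((3, 3)) + 3 * np.eye(3)
C = np.linalg.inv(P) @ A @ P
same_char_poly = np.allclose(np.poly(A), np.poly(C), atol=1e-8)
```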
12. If A =
[2  2  1]
[1  3  1]
[1  2  2]
then find e^(2A).

13. Find the eigen values and eigen vectors of the matrix A =
[0  i]
[i  0].

14. Find a matrix P such that A = P^(−1)BP, if A =
[ 5  5]
[−2  0]
and B =
[ 1  2]
[−3  4].

15. If A =
[ 3  −1   1]
[−1   5  −1]
[ 1  −1   3]
then find A^50.

16. Find the eigen values of the matrix A =
[−1   3  5]
[−3  −1  6]
[ 0   0  3].

OBJECTIVE TYPE QUESTIONS

1. Let A =
[ 0  1]
[−1  0].
Then the eigen vectors corresponding to i and −i are respectively
(a) [1, i]^T and [i, 1]^T
(b) [1, i]^T and [1, −i]^T
(c) [−1, i]^T and [−i, 1]^T
(d) [i, 1]^T and [−1, i]^T

2. If A is an n × n idempotent matrix, then which of the following is not correct?
(a) A^T is idempotent
(b) eigen values of A are 0 or 1 only
(c) non-diagonal entries of A can be zero
(d) there are many non-singular matrices that are idempotent

3. If A =
[1  0  5]
[1  2  5]
[1  3  1]
then 8A^(−1) is
(a)
[ 13  −4  −1]
[−15   4   3]
[ 10   0  −2]
(b)
[ 13  −4  −1]
[ 10   0  −2]
[−15   4   3]
(c)
[ 13  −15  10]
[ −4    4   0]
[ −1    3  −2]
(d)
[ 13  10  −15]
[ −4   0    4]
[ −1  −2    3]
10. Let A be a 3 × 3 matrix with eigen values −1, 1 and 3. Then
(a) A² + A is non-singular
(b) A² + 3A is non-singular
(c) A² − 3A is singular
(d) A² + 3A is singular

11. For what values of α does the matrix A =
[3  α]
[7  2]
have real eigen values?
(a) α > 1/28
(b) α > 1/25
(c) α > −1/28
(d) α < −1/28

12. Let A =
[0  0  0]
[0  0  b]
[0  0  0].
For what value of b is A diagonalizable?
(a) b = 0
(b) b = 1
(c) b = −1
(d) b = 3

13. Let A =
[1   w   w²]
[w   w²   1]
[w²  w    1]
where w³ = 1 and w ≠ 1. Then
(a) A is invertible
(b) there exist two LI vectors X and Y such that AX = 0 and AY = 0
(c) A is a singular matrix
(d) Trace(A) = 0

14. Let M2(R) be the set of 2 × 2 real matrices and let A ∈ M2(R) be such that Trace(A) = 2 and |A| = −3. Consider the linear transformation T : M2(R) → M2(R) defined by T(B) = AB. Then which of the following statements is true?
(a) T is not invertible
(b) 2 is an eigen value of T
(c) T is diagonalizable
(d) T(B) = B for some 0 ≠ B ∈ M2(R)

15. Let α, β be two distinct eigen values of a 2 × 2 matrix A. Then which of the following statements is true?
(a) A² has distinct eigen values
(b) Trace(A^n) = α^n + β^n
(c) A² is diagonalizable
(d) Trace(A^n) = (α + β)^n

16. If α, β are two distinct eigen values of a 2 × 2 matrix A, then which of the following statements is not correct?
(a) A³ has distinct eigen values
(b) Trace(A³) = α³ + β³
(c) Trace(A²) = (α − β)² + 2αβ
(d) A² has distinct eigen values
12. The eigen values of an upper triangular or lower triangular matrix are the diagonal entries of the matrix.

13. If X = [1, −i]^T is an eigen vector of the matrix A =
[a  −b]
[b   a],
then the eigen value λ corresponding to X is a + ib.

14. If A ∈ Mn(R) and λ = a + ib, where a, b ∈ R, is an eigen value of A, then λ̄ = a − ib is also an eigen value of A.

15. Let A, B ∈ Mn(R). Then AB and BA are similar matrices.

16. The eigen values of A and A^T are not the same.

17. If λ is an eigen value of A, then the kernel of the transformation (A − λI) is non-zero.

18. If A ∈ Mn(R) then A has more than n eigen values.

19. Every non-singular matrix is diagonalizable.

20. If p(x) is a polynomial of degree n such that p(A) = 0, for A ∈ Mn(R), and p(0) = 0, then A is non-invertible.

21. If λ is an eigen value of A then e^λ is an eigen value of e^A.

22. If A ∈ M5(R), then A has at least one real eigen value.

23. The eigen values of
[1  1  0]
[0  2  1]
[0  0  3]
are 1, 2 and 3.

24. The vector X = [3, −1, 0]^T is an eigen vector of A =
[3  3   4]
[0  2  −1]
[0  0   5].

25. If λ is an eigen value of A then iλ is an eigen value of (iA).

26. If λ is an eigen value of A then λ̄ is an eigen value of Ā.

27. An eigen vector of the matrix A =
[ 0  1]
[−1  0]
is [1, −i]^T.

28. X = [−i, 1]^T is an eigen vector of the matrix A =
[1   i]
[i  −1].

29. If A^k ≠ 0 and A^(k+1) = 0, then λ = 0 is the only eigen value of A.

ANSWERS

EXERCISE 8.1
1. (i) Eigen values = −7, 5; eigen vectors =
[−0.94   −0.70]
[ 0.316  −0.70]
(ii) Eigen values = 2, −3, 0
(iii) Eigen values = 2, 0, 6
(iv) Eigen values = 10.15, −0.97, 2.82
(v) Eigen values = 3.83, −1.39, 0.55
(vi) Eigen value = x (repeated); eigen vector = [1, 0, 0, 0]^T
(vii) Eigen values = a, b, c
(viii) Eigen values = 0, 0, 0, 4
(numerical eigen vector tables omitted)

4. (answer table omitted)

6. A^(−1) =
[ 0.8000  −0.4000  −0.2000]
[−0.2000   0.6000  −0.2000]
[−0.2000  −0.4000   0.8000]
9. A =
[0  1]
[0  0]
is not diagonalizable.

10. A is diagonalizable.

11. (i) 1.0 × 10^11 ×
[1.44  1.44  0.72]
[0.72  1.44  0.72]
[0.72  1.44  1.44]

(ii)
[(sin 5 − 2 sin 1)/3   2(sin 5 + sin 1)/3]
[(sin 5 + sin 1)/3     (2 sin 5 − sin 1)/3]

(iii)
[(2 cos 1 + cos 5)/3   2(cos 5 − cos 1)/3]
[(cos 5 − cos 1)/3     (cos 1 + 2 cos 5)/3]

(iv)
[1269  2532]
[1266  2535]

13. Eigen values of A = i, −i; eigen vectors =
[0.7071 + 0.0i    0.7071 + 0.0i]
[0.7071 + 0.0i   −0.7071 − 0.0i]

14. P =
[ 0.8452 + 0.0000i    0.8452 + 0.0000i]
[−0.4226 + 0.3273i   −0.4226 − 0.3273i]

15. A^50 = 10^38 ×
[ 1.3471  −2.6943   1.3471]
[−2.6943   5.3885  −2.6943]
[ 1.3471  −2.6943   1.3471]

16. Eigen values of A are −1 + 3i, −1 − 3i, 3.

OBJECTIVE TYPE QUESTIONS
1. (b)   2. (d)   3. (c)   4. (b)   5. (d)
6. (a)   7. (a)   8. (a)   9. (c)   10. (b)
11. (c)  12. (a)  13. (c)  14. (c)  15. (b)
16. (d)  17. (a)  18. (a)  19. (c)  20. (c)

TRUE OR FALSE
1. True   2. False  3. True   4. False  5. True
6. False  7. True   8. True   9. False  10. True
11. True  12. False 13. True  14. False 15. False
16. True  17. False 18. False 19. True  20. True
21. True  22. True  23. True  24. False 25. True
26. True  27. True  28. True  29. True