Updated: Apr 2026 · Algebra · Linear Algebra

Matrices

Comprehensive study notes on Matrices for CMI Data Science preparation. This chapter covers key concepts, formulas, and examples needed for your exam.


Overview

Matrices are the bedrock of modern data science, providing a concise framework for representing data, networks, and transformations. For your CMI Data Science entrance, deep proficiency in matrix algebra, from basic operations to advanced concepts like PCA and SVD, is a critical prerequisite. This chapter equips you with the computational tools to tackle complex datasets and problem-solving scenarios efficiently.

Chapter Contents

| # | Topic | What You'll Learn |
|---|---------------------------|---------------------------------------------------|
| 1 | Introduction to Matrices | Define matrices and their basic properties. |
| 2 | Basic Matrix Operations | Perform addition, subtraction, scalar operations. |
| 3 | Matrix Multiplication | Compute products and understand non-commutativity. |
| 4 | Transpose of a Matrix | Find transposes and apply their properties. |
| 5 | Inverse of a Matrix | Calculate inverses and solve linear systems. |
โ— By the End of This Chapter

After studying this chapter, you will be able to:

  • Define matrices, dimensions, and classify core types.

  • Perform fundamental matrix arithmetic.

  • Compute matrix products and understand their rigorous conditions.

  • Determine transposes, inverses, and apply them to complex CMI-level problems.

Part 1: Introduction to Matrices

Matrices represent linear transformations and systems of equations, forming the essential bedrock for advanced topics like eigenvalues, eigenvectors, and SVD.
📖 Matrix

A matrix is a rectangular array of numbers arranged in rows and columns. An $m \times n$ matrix $A$ has $m$ rows and $n$ columns.

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}$$

1. Basic Matrix Types and Notations

📖 Square Matrix

A matrix $A$ with as many rows as columns ($m = n$). The elements $a_{ii}$ form the main diagonal.

📖 Diagonal Matrix

A square matrix $D$ in which every off-diagonal element is zero ($d_{ij} = 0$ for $i \ne j$).

$$D = \begin{pmatrix} d_{11} & 0 & \cdots & 0 \\ 0 & d_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & d_{nn} \end{pmatrix}$$

📖 Identity Matrix

A diagonal matrix $I_n$ whose diagonal elements are all $1$. It acts as the multiplicative identity.

$$I_n = \begin{pmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{pmatrix}$$

📖 Zero Matrix

Denoted by $0$, a matrix in which every element is zero. It acts as the additive identity.

📖 Upper Triangular Matrix

A square matrix $U$ in which all elements strictly below the main diagonal are zero ($u_{ij} = 0$ for $i > j$).

$$U = \begin{pmatrix} u_{11} & u_{12} & \cdots & u_{1n} \\ 0 & u_{22} & \cdots & u_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & u_{nn} \end{pmatrix}$$

📖 Lower Triangular Matrix

A square matrix $L$ in which all elements strictly above the main diagonal are zero ($l_{ij} = 0$ for $i < j$).

$$L = \begin{pmatrix} l_{11} & 0 & \cdots & 0 \\ l_{21} & l_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ l_{n1} & l_{n2} & \cdots & l_{nn} \end{pmatrix}$$
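These structural types are easy to build and check in NumPy; a minimal sketch (the helper function name is my own):

```python
import numpy as np

# Diagonal, identity, and zero matrices
D = np.diag([2.0, 5.0, 7.0])   # diagonal matrix from its diagonal entries
I = np.eye(3)                  # 3x3 identity
Z = np.zeros((3, 3))           # zero matrix

# Upper/lower triangular parts of a generic square matrix
A = np.arange(1.0, 10.0).reshape(3, 3)
U = np.triu(A)                 # zeros everything strictly below the diagonal
L = np.tril(A)                 # zeros everything strictly above the diagonal

def is_upper_triangular(M):
    """True iff every element strictly below the main diagonal is zero."""
    return np.allclose(M, np.triu(M))

print(is_upper_triangular(U))   # True
print(is_upper_triangular(A))   # False
```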

:::question type="MSQ" question="Let $U$ be an $n \times n$ upper triangular matrix and $D$ be an $n \times n$ diagonal matrix. Which of the following statements are ALWAYS true?" options=["$U^T$ is a lower triangular matrix.","$U + D$ is an upper triangular matrix.","$U$ must have non-zero elements on its main diagonal.","If $U$ has all zeros on the main diagonal, it is a strictly upper triangular matrix."] answer="A,B,D" hint="Apply the definitions of upper triangular ($u_{ij}=0$ for $i>j$) and diagonal ($d_{ij}=0$ for $i \neq j$) matrices directly." solution="Option A: $U^T$ swaps the indices $i$ and $j$. Since $u_{ij} = 0$ for $i>j$ in $U$, the transpose has zeros wherever $j>i$ (above the diagonal), making it lower triangular. (True) Option B: $D$ has non-zero elements only where $i=j$, so adding $D$ to $U$ changes only the main diagonal; the elements with $i>j$ remain zero, and $U+D$ is upper triangular. (True) Option C: An upper triangular matrix may have zeros anywhere, including on the main diagonal, as long as all elements below the diagonal are zero. (False) Option D: By definition, if all elements below the diagonal and on the diagonal are zero, the matrix is strictly upper triangular. (True) Answer: A, B, D" :::

Part 2: Basic Matrix Operations

Understanding how to manipulate matrices through basic arithmetic is fundamental before progressing to complex transformations.

1. Matrix Addition and Subtraction

Two matrices $A$ and $B$ can be added or subtracted if and only if they have exactly the same dimensions ($m \times n$).
📝 Matrix Addition & Subtraction

If $A = [a_{ij}]$ and $B = [b_{ij}]$ are $m \times n$ matrices, then $C = A \pm B$ is an $m \times n$ matrix where:
$$c_{ij} = a_{ij} \pm b_{ij}$$

Core Properties:

  • Commutativity: $A + B = B + A$

  • Associativity: $(A + B) + C = A + (B + C)$

  • Additive Identity: $A + 0 = A$ (where $0$ is the zero matrix of the same dimensions)

  • Closure: The sum of two upper (or lower) triangular matrices is also upper (or lower) triangular.
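A quick NumPy check of these element-wise rules (example matrices chosen arbitrarily):

```python
import numpy as np

A = np.array([[2.0, -1.0], [3.0, 4.0]])
B = np.array([[1.0, 0.0], [-2.0, 5.0]])
Z = np.zeros_like(A)

# Commutativity and additive identity hold element-wise
print(np.array_equal(A + B, B + A))   # True
print(np.array_equal(A + Z, A))       # True

# Closure: the sum of two upper triangular matrices stays upper triangular
U1, U2 = np.triu(A), np.triu(B)
print(np.array_equal(np.triu(U1 + U2), U1 + U2))  # True
```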

2. Scalar Multiplication

Multiplying a matrix $A$ by a scalar $k$ multiplies every single element of the matrix by $k$.
📝 Scalar Multiplication

If $A = [a_{ij}]$ is an $m \times n$ matrix and $k$ is a scalar, then $C = kA$ is an $m \times n$ matrix where:
$$c_{ij} = k \cdot a_{ij}$$

Core Properties:

  • Distributivity over Matrix Addition: $k(A + B) = kA + kB$

  • Distributivity over Scalar Addition: $(k + l)A = kA + lA$

  • Associativity: $k(lA) = (kl)A$

:::question type="NAT" question="Let $A$ and $B$ be $2 \times 2$ matrices where $A = \begin{pmatrix} 2 & -1 \\ 3 & 4 \end{pmatrix}$ and $2A - 3B = \begin{pmatrix} -2 & 4 \\ 0 & 5 \end{pmatrix}$. What is the value of the element $b_{22}$ in matrix $B$?" answer="1" hint="Set up the algebraic equation for the specific element you need; you do not need to compute the entire matrix $B$." solution="Step 1: Isolate the equation for the element $b_{22}$. The equation is $2A - 3B = C$, so for the entry in the 2nd row, 2nd column: $2a_{22} - 3b_{22} = c_{22}$. Step 2: Substitute the known values $a_{22} = 4$ and $c_{22} = 5$: $2(4) - 3b_{22} = 5$. Step 3: Solve: $8 - 3b_{22} = 5 \implies -3b_{22} = -3 \implies b_{22} = 1$. Answer: 1" :::

Part 3: Matrix Multiplication

Matrix multiplication is the cornerstone of linear algebra operations. It represents the composition of linear transformations and is structurally essential for algorithms ranging from neural network forward passes to solving complex systems of equations, graph theory analysis, and data projections.
📖 Matrix Multiplication & Conformability

Given two matrices $A$ ($m \times n$) and $B$ ($p \times q$), their product $C = AB$ is defined if and only if $n = p$ (the number of columns of $A$ exactly equals the number of rows of $B$). The resulting matrix $C$ has dimensions $m \times q$.

The $(i,j)$-th entry of the product is the dot product of the $i$-th row of $A$ and the $j$-th column of $B$:
$$C_{ij} = \sum_{k=1}^{n} A_{ik} B_{kj}$$

1. Core Properties of Multiplication

โ— Algebraic Properties

* Associativity: (AB)C=A(BC)(AB)C = A(BC). You can group multiplications as needed.
* Distributivity: A(B+C)=AB+ACA(B+C) = AB + AC and (A+B)C=AC+BC(A+B)C = AC + BC.
* Scalar Commutativity: k(AB)=(kA)B=A(kB)k(AB) = (kA)B = A(kB).
* Multiplicative Identity: AI=IA=AAI = IA = A (where II is the identity matrix of proper dimensions).
* Non-Commutativity: In general, ABโ‰ BAAB \neq BA. Never assume commutativity unless explicitly proven for a specific pair of matrices.
* Zero Product Property: AB=0AB = 0 does not imply A=0A=0 or B=0B=0. Divisors of zero exist in matrix rings (e.g., multiplying two non-zero singular matrices can yield a zero matrix).
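Both pitfalls above are easy to demonstrate numerically (matrices chosen for illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

# Non-commutativity: AB and BA generally differ
print(np.array_equal(A @ B, B @ A))   # False

# Zero divisors: two non-zero singular matrices whose product is the zero matrix
P = np.array([[1.0, 0.0], [0.0, 0.0]])
Q = np.array([[0.0, 0.0], [0.0, 1.0]])
print(P @ Q)   # the 2x2 zero matrix
```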

2. Special Vector Products

Vectors are specialized matrices. A column vector $x$ in $\mathbb{R}^n$ is an $n \times 1$ matrix, and its transpose $x^T$ is a $1 \times n$ row vector.

* Inner Product (Dot Product): $x^T y = \sum_{i=1}^{n} x_i y_i$. This is a $1 \times 1$ scalar; geometrically it measures projection and alignment.
* Outer Product: $x y^T$. This is an $n \times n$ rank-1 matrix whose $(i,j)$-th element is $x_i y_j$. It is heavily used in covariance matrix calculations and SVD.
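The two products side by side in NumPy (vectors chosen arbitrarily):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 0.0, -1.0])

inner = x @ y              # scalar: 1*4 + 2*0 + 3*(-1) = 1
outer = np.outer(x, y)     # 3x3 matrix with entries x_i * y_j

print(inner)                          # 1.0
print(np.linalg.matrix_rank(outer))   # 1  (outer products of non-zero vectors have rank 1)
```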

3. Matrix Exponentiation ($A^n$)

Multiplying a square matrix by itself $n$ times ($A^n$) is a frequent operation in analyzing Markov chains, system dynamics, and recursion. To compute high powers, look for these structural shortcuts:

* Diagonal Matrices: If $D = \text{diag}(d_1, d_2, \dots, d_k)$, exponentiation applies element-wise: $D^n = \text{diag}(d_1^n, d_2^n, \dots, d_k^n)$.
* Rotation Matrices: The 2D counter-clockwise rotation by angle $\theta$ is $R(\theta) = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}$. Applying it $n$ times rotates by $n\theta$: $R(\theta)^n = R(n\theta) = \begin{pmatrix} \cos(n\theta) & -\sin(n\theta) \\ \sin(n\theta) & \cos(n\theta) \end{pmatrix}$.
* Nilpotent Matrices: A matrix $N$ is nilpotent if $N^k = 0$ for some integer $k$. For a matrix $A = I + N$, the binomial expansion applies (since $I$ and $N$ commute): $(I + N)^n = I + nN + \frac{n(n-1)}{2}N^2 + \dots$. The series naturally terminates at the $N^{k-1}$ term.
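The rotation-matrix shortcut can be verified directly (the helper `R` is my own):

```python
import numpy as np

def R(theta):
    """2D counter-clockwise rotation matrix by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta, n = np.pi / 6, 5

# R(theta)^n computed by repeated multiplication...
power = np.linalg.matrix_power(R(theta), n)

# ...equals a single rotation by n*theta
print(np.allclose(power, R(n * theta)))   # True
```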

4. Computational Complexity (FLOPs)

Understanding the hardware cost of matrix operations is a staple in Data Science. A floating-point operation (FLOP) includes additions and multiplications.
๐Ÿ“ FLOPs for Matrix Multiplication

To compute the product of two Nร—NN \times N matrices, finding a single element CijC_{ij} requires NN multiplications and Nโˆ’1N-1 additions.
For all N2N^2 elements in the matrix, the standard algorithmic complexity is:
Totalย FLOPs=N2(2Nโˆ’1)=2N3โˆ’N2\text{Total FLOPs} = N^2(2N-1) = 2N^3 - N^2
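A direct triple-loop implementation makes the count explicit (a teaching sketch, not how production libraries multiply; the function name is my own):

```python
import numpy as np

def matmul_with_flops(A, B):
    """Naive N x N matrix product that counts multiplications and additions."""
    N = A.shape[0]
    C = np.zeros((N, N))
    flops = 0
    for i in range(N):
        for j in range(N):
            acc = A[i, 0] * B[0, j]       # 1 multiplication
            flops += 1
            for k in range(1, N):
                acc += A[i, k] * B[k, j]  # 1 multiplication + 1 addition
                flops += 2
            C[i, j] = acc
    return C, flops

N = 4
A, B = np.ones((N, N)), np.ones((N, N))
C, flops = matmul_with_flops(A, B)
print(flops == 2 * N**3 - N**2)   # True
```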

5. Graph Theory & Adjacency Matrices

Matrix multiplication is the mathematical engine behind graph theory and network analysis. For a graph with $m$ vertices, an adjacency matrix $A$ is an $m \times m$ matrix where $A_{ij} = 1$ if there is a directed edge from vertex $i$ to vertex $j$, and $0$ otherwise.

Key Analytical Property: The element $(i,j)$ of the matrix $A^k$, denoted $A^k(i,j)$, is the exact number of distinct paths of length $k$ from vertex $i$ to vertex $j$.
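Counting paths with matrix powers, on a small directed graph (edges chosen arbitrarily for illustration):

```python
import numpy as np

# Directed graph on 3 vertices with edges 0->1, 0->2, 1->2, 2->0
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])

A2 = np.linalg.matrix_power(A, 2)

# Paths of length 2 from vertex 0 to vertex 2: only 0 -> 1 -> 2
print(A2[0, 2])   # 1
```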

6. The Trace Property

The trace of a square matrix is the sum of its main diagonal elements: $\text{tr}(A) = \sum a_{ii}$. The trace interacts with multiplication in a distinctive way.

Cyclic Permutation: For any matrices $A$ (size $m \times n$) and $B$ (size $n \times m$), both $AB$ and $BA$ are square matrices (though of different sizes), and:
$$\text{tr}(AB) = \text{tr}(BA)$$
Note: This extends strictly to cyclic permutations of several matrices. For example, $\text{tr}(ABC) = \text{tr}(BCA) = \text{tr}(CAB)$. However, it does NOT generally equal non-cyclic permutations like $\text{tr}(ACB)$.
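The cyclic property holds even when $AB$ and $BA$ have different sizes; a quick check with random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

# AB is 2x2, BA is 3x3, yet their traces agree
print(np.isclose(np.trace(A @ B), np.trace(B @ A)))   # True
```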

7. Quadratic Forms

A quadratic form maps a vector to a scalar using a symmetric matrix; it is foundational in optimization and in defining cost functions. For a symmetric matrix $A$ and a vector $x$, the quadratic form is:
$$Q(x) = x^T A x$$
If $A$ is positive definite, then $x^T A x > 0$ for all non-zero vectors $x$.

---

:::question type="MSQ" question="Let $A$ and $B$ be two $n \times n$ non-zero matrices. Which of the following statements are ALWAYS true?" options=["If $AB=0$, then both $A$ and $B$ must be singular (non-invertible) matrices.","$\text{tr}(A^T B) = \text{tr}(AB^T)$","$(A+B)^2 = A^2 + 2AB + B^2$","If $A$ is an adjacency matrix of an unweighted graph, $A^3(i,i)$ gives the exact number of distinct triangles passing through vertex $i$ (ignoring direction)."] answer="A,B" hint="Recall the zero product property of matrices and the cyclic property of the trace. Be careful expanding matrix products: commutativity cannot be assumed." solution="Option A: Suppose $A$ is invertible. Multiplying $AB=0$ on the left by $A^{-1}$ gives $B=0$, contradicting that $B$ is non-zero; so $A$ must be singular, and by symmetric reasoning so must $B$. (True) Option B: Using $\text{tr}(X) = \text{tr}(X^T)$ and the transpose product rule $(XY)^T = Y^T X^T$: $\text{tr}(A^T B) = \text{tr}((A^T B)^T) = \text{tr}(B^T A)$, and by the cyclic property of trace, $\text{tr}(B^T A) = \text{tr}(AB^T)$. (True) Option C: Expansion gives $(A+B)(A+B) = A^2 + AB + BA + B^2$; since $AB \neq BA$ in general, this does not simplify to $A^2 + 2AB + B^2$. (False) Option D: $A^3(i,i)$ counts all closed walks of length 3 from vertex $i$ back to itself. Each triangle through $i$ is such a walk, but the count also includes degenerate walks (if self-loops exist) and, for undirected graphs, traverses each triangle in both directions, so further division (usually by 2 or 6) is required to obtain the exact number of distinct geometric triangles. (False) Answer: A, B" :::

Part 4: Transpose of a Matrix

The transpose operation reorients a matrix by interchanging its rows and columns. It is the foundational operation for defining symmetry, analyzing quadratic forms, and simplifying complex matrix equations.
📖 Transpose of a Matrix

Let $A$ be an $m \times n$ matrix. The transpose of $A$, denoted $A^T$, is the $n \times m$ matrix whose entries are given by $(A^T)_{ij} = a_{ji}$ for all $1 \le i \le n$ and $1 \le j \le m$.

1. Core Properties of the Transpose

Mastering these properties is non-negotiable, as they are frequently used to manipulate and simplify abstract matrix expressions in CMI questions.
๐Ÿ“ Properties of Transpose

For matrices AA and BB of compatible dimensions, and a scalar kk:

  • Double Transpose: (AT)T=A(A^T)^T = A

  • Transpose of a Sum: (A+B)T=AT+BT(A+B)^T = A^T + B^T

  • Transpose of a Scalar: (kA)T=kAT(kA)^T = kA^T

  • Transpose of a Product (CRITICAL): (AB)T=BTAT(AB)^T = B^T A^T (Note the order reversal)

  • Determinant of a Transpose: โˆฃATโˆฃ=โˆฃAโˆฃ|A^T| = |A|

  • Inverse of a Transpose: (Aโˆ’1)T=(AT)โˆ’1(A^{-1})^T = (A^T)^{-1}
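The critical order-reversal rule is easy to see with non-square matrices, where the naive order is not even defined:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))

# The order reverses under transposition: (AB)^T = B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))   # True

# Note A^T B^T is a (3x2) times (4x3) product, which is not even
# conformable -- one reason (AB)^T = A^T B^T cannot hold in general.
```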

2. Symmetric Matrices

A square matrix $A$ is symmetric if $A^T = A$ (meaning $a_{ij} = a_{ji}$).

Key Properties:
* If $A$ and $B$ are symmetric, then $A+B$ and $kA$ are symmetric.
* The product $AB$ of two symmetric matrices is symmetric if and only if they commute ($AB = BA$).
* For any matrix $X$ (not necessarily square), the matrices $X X^T$ and $X^T X$ are ALWAYS symmetric.
* For any square matrix $X$, the matrix $X + X^T$ is ALWAYS symmetric.
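The last two guarantees hold for arbitrary (even random) matrices, which makes them easy to spot-check:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 5))   # deliberately not square

G = X @ X.T   # 3x3 matrix
print(np.allclose(G, G.T))   # True: X X^T is always symmetric

S = rng.standard_normal((4, 4))
print(np.allclose(S + S.T, (S + S.T).T))   # True: X + X^T is symmetric
```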

3. Antisymmetric (Skew-Symmetric) Matrices

A square matrix $A$ is antisymmetric (or skew-symmetric) if $A^T = -A$ (meaning $a_{ij} = -a_{ji}$).

Key Properties:
* Zero Diagonal: All main diagonal entries of an antisymmetric matrix must be exactly zero (since $a_{ii} = -a_{ii} \implies 2a_{ii} = 0$).
* Squares: If $A$ is antisymmetric, its square $A^2$ is symmetric.
* Sum/Difference: For any square matrix $X$, the matrix $X - X^T$ is ALWAYS antisymmetric.
* Determinant (CRITICAL): If $A$ is an $n \times n$ antisymmetric matrix and $n$ is odd, then $|A| = 0$ (it is strictly non-invertible). If $n$ is even, its determinant is a perfect square.
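The odd-dimension determinant rule can be verified numerically, building a random skew-symmetric matrix via $X - X^T$:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((5, 5))
A = X - X.T            # a 5x5 skew-symmetric matrix

print(np.allclose(A.T, -A))              # True
print(np.isclose(np.linalg.det(A), 0))   # True: odd dimension forces det = 0
```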

4. Transpose of Block Matrices

When transposing a block matrix, you must transpose each individual block AND swap the positions of the off-diagonal blocks. If $M = \begin{pmatrix} A & B \\ C & D \end{pmatrix}$, then $M^T = \begin{pmatrix} A^T & C^T \\ B^T & D^T \end{pmatrix}$.

---

:::question type="MCQ" question="Let $A$ be an $m \times n$ matrix and $B$ be an $n \times p$ matrix. Which of the following statements is ALWAYS true?" options=["$(A^T B^T)^T = AB$","$(A^T B)^T = B^T A$","$(BA)^T = A^T B^T$","$(AB)^T = B^T A^T$"] answer="$(AB)^T = B^T A^T$" hint="Recall the transpose property for matrix products. Pay attention to the order of multiplication." solution="The property $(AB)^T = B^T A^T$ is the fundamental reversal rule for the transpose of a product. Option A would expand to $BA$, which $\neq AB$ in general (and $A^T B^T$ is not even conformable for arbitrary $m, n, p$). Option B involves $A^T B$, which is not defined for general dimensions and does not follow the standard rule. Option C has the wrong order on the right-hand side. Answer: $(AB)^T = B^T A^T$." :::

:::question type="MSQ" question="Let $A$ and $B$ be $n \times n$ matrices. Which of the following statements are ALWAYS true?" options=["If $A$ and $B$ are symmetric, then $AB$ is always symmetric.","If $A$ is antisymmetric, then $A^2$ is symmetric.","If $A$ is an antisymmetric $5 \times 5$ matrix, then $A$ is invertible.","For any square matrix $X$, the matrix $X+X^T$ is symmetric."] answer="B,D" hint="Apply transpose properties to each expression. Remember the odd-dimension determinant rule for skew-symmetric matrices." solution="Option A: $(AB)^T = B^T A^T = BA$. For $AB$ to be symmetric, $AB$ must equal $BA$, which is not generally true. (False) Option B: $(A^2)^T = (AA)^T = A^T A^T = (-A)(-A) = A^2$. Thus $A^2$ is symmetric. (True) Option C: For an antisymmetric matrix $A$ of odd dimension $n=5$: $|A| = |A^T| = |-A| = (-1)^5|A| = -|A| \implies 2|A| = 0 \implies |A| = 0$, so $A$ is non-invertible. (False) Option D: Let $S = X+X^T$. Then $S^T = (X+X^T)^T = X^T + (X^T)^T = X^T + X = S$. (True) Answer: B, D" :::

:::question type="MSQ" question="Let $P$ and $Q$ be two $n \times n$ skew-symmetric matrices. Which of the following matrices are GUARANTEED to be skew-symmetric?" options=["$P + Q$","$PQ$","$PQ - QP$","$P^3$"] answer="A,C,D" hint="Use the definitions $P^T = -P$ and $Q^T = -Q$. Apply the product transpose rule carefully." solution="Option A: $(P+Q)^T = P^T + Q^T = -P - Q = -(P+Q)$. Skew-symmetric. (True) Option B: $(PQ)^T = Q^T P^T = (-Q)(-P) = QP$, which equals $-PQ$ only in special cases, so $PQ$ is not guaranteed skew-symmetric. (False) Option C: $(PQ - QP)^T = (PQ)^T - (QP)^T = Q^T P^T - P^T Q^T = QP - PQ = -(PQ - QP)$. Skew-symmetric. (True) Option D: $(P^3)^T = (PPP)^T = P^T P^T P^T = (-P)(-P)(-P) = -P^3$. Skew-symmetric. (True) Answer: A, C, D" :::

Part 5: Inverse of a Matrix

Introduction

The inverse of a matrix is a fundamental concept in linear algebra, analogous to the reciprocal of a non-zero scalar. It plays a crucial role in solving systems of linear equations, understanding linear transformations, and various matrix decompositions. For a square matrix, its inverse, if it exists, allows us to "undo" the transformation represented by the original matrix. In the context of CMI, understanding matrix inverses is essential for analyzing the solvability of linear systems, properties of matrix transformations, and for advanced topics such as eigenvalues and eigenvectors. This section will thoroughly cover the definition, existence conditions, methods of computation, and key properties of matrix inverses, preparing you to tackle complex problems efficiently.
📖 Inverse Matrix

A square matrix $A$ of order $n$ is said to be invertible (or non-singular) if there exists another square matrix $B$ of the same order $n$ such that:

$$AB = BA = I_n$$

where $I_n$ is the $n \times n$ identity matrix. The matrix $B$ is then called the inverse of $A$, denoted $A^{-1}$. If no such matrix $B$ exists, $A$ is said to be singular.

---

Key Concepts

1. Existence and Uniqueness of the Inverse

For a matrix inverse to exist, the matrix must first be square, and not all square matrices have an inverse. The critical condition for the existence of $A^{-1}$ is that the determinant of $A$ must be non-zero.
❗ Invertibility Condition

A square matrix $A$ is invertible if and only if its determinant is non-zero, i.e., $|A| \neq 0$.

If an inverse exists, it is unique. This can be proven as follows:

Proof of Uniqueness: Assume that a square matrix $A$ has two inverses, say $B$ and $C$.
Step 1: By the definition of an inverse, $AB = BA = I$ and $AC = CA = I$.
Step 2: Consider the product $BAC$ and group it in two ways: $(BA)C = IC = C$ and $B(AC) = BI = B$.
Step 3: Equating the two results gives $C = B$.
This proves that if an inverse exists, it must be unique.

---

2. Inverse of a $2 \times 2$ Matrix

For a $2 \times 2$ matrix, there is a straightforward formula for the inverse. It is frequently tested, especially in questions involving properties of matrices.
📝 Inverse of a 2×2 Matrix

For a $2 \times 2$ matrix $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$, its inverse $A^{-1}$ is given by:

$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}$$

Variables:

    • $a, b, c, d$ are the elements of the matrix $A$.

    • $ad - bc$ is the determinant of $A$, denoted $|A|$.


When to use: Directly calculate the inverse of any $2 \times 2$ matrix, provided $|A| \neq 0$.

Worked Example:
Problem: Find the inverse of the matrix $A = \begin{bmatrix} 3 & 1 \\ 5 & 2 \end{bmatrix}$.
Solution:
Step 1: Calculate the determinant: $|A| = (3)(2) - (1)(5) = 6 - 5 = 1$. Since $|A| = 1 \neq 0$, the inverse exists.
Step 2: Apply the $2 \times 2$ inverse formula: $A^{-1} = \frac{1}{1} \begin{bmatrix} 2 & -1 \\ -5 & 3 \end{bmatrix}$.
Answer: $A^{-1} = \begin{bmatrix} 2 & -1 \\ -5 & 3 \end{bmatrix}$

---
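The worked example above can be checked with NumPy, applying the formula element by element:

```python
import numpy as np

A = np.array([[3.0, 1.0], [5.0, 2.0]])

det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # ad - bc = 1
A_inv = (1.0 / det) * np.array([[A[1, 1], -A[0, 1]],
                                [-A[1, 0], A[0, 0]]])

print(A_inv)                               # inverse is [[2, -1], [-5, 3]]
print(np.allclose(A @ A_inv, np.eye(2)))   # True
```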

3. General Method for Inverse: Adjugate Formula

For matrices of order $n > 2$, the adjugate method (also known as the adjoint method) gives a general formula for the inverse. It involves calculating the determinant, minors, cofactors, and the adjugate matrix.
📖 Minor and Cofactor

For an $n \times n$ matrix $A = [a_{ij}]$:

    • The minor $M_{ij}$ of the element $a_{ij}$ is the determinant of the $(n-1) \times (n-1)$ matrix obtained by deleting the $i$-th row and $j$-th column of $A$.

    • The cofactor $C_{ij}$ of the element $a_{ij}$ is given by $C_{ij} = (-1)^{i+j} M_{ij}$.

📖 Adjugate Matrix

The adjugate (or classical adjoint) of a square matrix $A$, denoted $\text{adj}(A)$, is the transpose of the matrix of its cofactors:
$$\text{adj}(A) = [C_{ij}]^T$$

๐Ÿ“ Adjugate Formula for Inverse

For an invertible square matrix AA, its inverse Aโˆ’1A^{-1} is given by:
Aโˆ’1=1โˆฃAโˆฃadj(A)A^{-1} = \frac{1}{|A|} \text{adj}(A)

Variables:

    • โˆฃAโˆฃ|A| = determinant of AA.

    • adj(A)\text{adj}(A) = adjugate of AA.


When to use: For finding the inverse of nร—nn \times n matrices, particularly useful for 3ร—33 \times 3 matrices.

Worked Example:
Problem: Find the inverse of the matrix $A = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \end{pmatrix}$.
Solution:
Step 1: Calculate the determinant. For an upper triangular matrix, the determinant is the product of the diagonal elements: $|A| = (1)(1)(1) = 1 \neq 0$, so the inverse exists.
Step 2: Calculate the cofactors $C_{ij}$:
$C_{11} = +\det \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = 1$, $C_{12} = -\det \begin{pmatrix} 0 & 1 \\ 0 & 1 \end{pmatrix} = 0$, $C_{13} = +\det \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = 0$
$C_{21} = -\det \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix} = -2$, $C_{22} = +\det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1$, $C_{23} = -\det \begin{pmatrix} 1 & 2 \\ 0 & 0 \end{pmatrix} = 0$
$C_{31} = +\det \begin{pmatrix} 2 & 0 \\ 1 & 1 \end{pmatrix} = 2$, $C_{32} = -\det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = -1$, $C_{33} = +\det \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} = 1$
Step 3: Form the cofactor matrix $C = \begin{pmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 2 & -1 & 1 \end{pmatrix}$.
Step 4: Find the adjugate by transposing the cofactor matrix: $\text{adj}(A) = C^T = \begin{pmatrix} 1 & -2 & 2 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}$.
Step 5: Apply the adjugate formula: $A^{-1} = \frac{1}{|A|} \text{adj}(A) = \frac{1}{1} \begin{pmatrix} 1 & -2 & 2 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}$.
Answer: $A^{-1} = \begin{pmatrix} 1 & -2 & 2 \\ 0 & 1 & -1 \\ 0 & 0 & 1 \end{pmatrix}$

---
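A NumPy cross-check of the adjugate computation, with the cofactor construction written out explicitly (the function name is my own):

```python
import numpy as np

def adjugate(A):
    """Adjugate via cofactors: adj(A) is the transpose of [(-1)^(i+j) * M_ij]."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T   # adjugate is the transpose of the cofactor matrix

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

A_inv = adjugate(A) / np.linalg.det(A)
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
```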

4. Properties of Inverse Matrices

Understanding the properties of inverse matrices is crucial for simplifying expressions and solving matrix equations efficiently.
๐Ÿ“ Properties of Inverse Matrices

Let AA and BB be invertible matrices of the same order nn, and kk be a non-zero scalar.

  • Inverse of an Inverse: (Aโˆ’1)โˆ’1=A(A^{-1})^{-1} = A

  • Inverse of a Product: (AB)โˆ’1=Bโˆ’1Aโˆ’1(AB)^{-1} = B^{-1}A^{-1}

  • Inverse of a Scalar Multiple: (kA)โˆ’1=1kAโˆ’1(kA)^{-1} = \frac{1}{k}A^{-1}

  • Inverse of a Transpose: (AT)โˆ’1=(Aโˆ’1)T(A^T)^{-1} = (A^{-1})^T

  • Determinant of an Inverse: โˆฃAโˆ’1โˆฃ=1โˆฃAโˆฃ|A^{-1}| = \frac{1}{|A|}

  • Inverse and Identity Matrix: Iโˆ’1=II^{-1} = I

  • Inverse and Diagonal Matrices: If D=diag(d1,d2,โ€ฆ,dn)D = \text{diag}(d_1, d_2, \dots, d_n) with diโ‰ 0d_i \neq 0, then Dโˆ’1=diag(d1โˆ’1,d2โˆ’1,โ€ฆ,dnโˆ’1)D^{-1} = \text{diag}(d_1^{-1}, d_2^{-1}, \dots, d_n^{-1}).

Explanation of Key Properties: * Inverse of a Product: This property is particularly important. The order of multiplication is reversed when taking the inverse of a product. * Proof: We need to show that (Bโˆ’1Aโˆ’1)(AB)=I(B^{-1}A^{-1})(AB) = I and (AB)(Bโˆ’1Aโˆ’1)=I(AB)(B^{-1}A^{-1}) = I. * (Bโˆ’1Aโˆ’1)(AB)=Bโˆ’1(Aโˆ’1A)B=Bโˆ’1IB=Bโˆ’1B=I(B^{-1}A^{-1})(AB) = B^{-1}(A^{-1}A)B = B^{-1}IB = B^{-1}B = I * (AB)(Bโˆ’1Aโˆ’1)=A(BBโˆ’1)Aโˆ’1=AIAโˆ’1=AAโˆ’1=I(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AIA^{-1} = AA^{-1} = I * Inverse of a Transpose: This property links the operations of transposing and inverting. * Proof: We know AAโˆ’1=IAA^{-1} = I. Transposing both sides: (AAโˆ’1)T=ITโ€…โ€ŠโŸนโ€…โ€Š(Aโˆ’1)TAT=I(AA^{-1})^T = I^T \implies (A^{-1})^T A^T = I. This shows that (Aโˆ’1)T(A^{-1})^T is the inverse of ATA^T. * Determinant of an Inverse: This is a direct consequence of the determinant product rule. * Proof: We have AAโˆ’1=IAA^{-1} = I. Taking the determinant of both sides: โˆฃAAโˆ’1โˆฃ=โˆฃIโˆฃ|AA^{-1}| = |I|. * Using the property โˆฃABโˆฃ=โˆฃAโˆฃโˆฃBโˆฃ|AB| = |A||B|, we get โˆฃAโˆฃโˆฃAโˆ’1โˆฃ=1|A||A^{-1}| = 1. * Therefore, โˆฃAโˆ’1โˆฃ=1โˆฃAโˆฃ|A^{-1}| = \frac{1}{|A|}. ---

5. Invertibility and Systems of Linear Equations

The existence of a matrix inverse is directly related to the solvability and uniqueness of solutions for systems of linear equations. Consider a system of $n$ linear equations in $n$ variables, represented in matrix form as $Ax = b$, where $A$ is an $n \times n$ matrix, $x$ is an $n \times 1$ column vector of variables, and $b$ is an $n \times 1$ column vector of constants.
❗ Invertibility and System Solutions

If $A$ is an invertible matrix (i.e., $A^{-1}$ exists), then the system $Ax = b$ has a unique solution given by:
$$x = A^{-1}b$$
If $A$ is singular (i.e., $A^{-1}$ does not exist), the system $Ax = b$ has either no solution or infinitely many solutions.

Explanation: If $A^{-1}$ exists, multiply both sides of $Ax = b$ by $A^{-1}$ from the left:
$A^{-1}(Ax) = A^{-1}b \implies (A^{-1}A)x = A^{-1}b \implies Ix = A^{-1}b \implies x = A^{-1}b$
This demonstrates that if $A^{-1}$ exists, the solution $x$ is uniquely determined. This connection is fundamental in numerical methods for solving linear systems.

---
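In numerical practice one solves $Ax = b$ directly rather than forming $A^{-1}$ explicitly; a minimal sketch (example system chosen arbitrarily):

```python
import numpy as np

A = np.array([[3.0, 1.0], [5.0, 2.0]])
b = np.array([5.0, 9.0])

# Preferred: solve the system directly (more stable and cheaper than inverting)
x = np.linalg.solve(A, b)

# Same answer via the explicit inverse, x = A^{-1} b
x_via_inverse = np.linalg.inv(A) @ b

print(np.allclose(x, x_via_inverse))   # True
print(np.allclose(A @ x, b))           # True
```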

6. Similarity Transformations

A similarity transformation is a transformation of a matrix $B$ into a matrix $A$ such that $A = PBP^{-1}$ for some invertible matrix $P$. This concept is crucial for understanding matrix diagonalization and canonical forms.
📖 Similarity Transformation

Two square matrices $A$ and $B$ are said to be similar if there exists an invertible matrix $P$ such that:
$$A = PBP^{-1}$$

Properties under Similarity Transformation: Similar matrices share many important properties.

* Determinant: Similar matrices have the same determinant.
  Proof: $|A| = |PBP^{-1}| = |P||B||P^{-1}| = |P||B|\frac{1}{|P|} = |B|$.
* Invertibility: If $A$ is similar to $B$, then $A$ is invertible if and only if $B$ is invertible.
  Proof: If $B$ is invertible, then $A^{-1} = (PBP^{-1})^{-1} = (P^{-1})^{-1}B^{-1}P^{-1} = PB^{-1}P^{-1}$, so $A$ is invertible. Conversely, if $A$ is invertible, then $B = P^{-1}AP$ and $B^{-1} = (P^{-1}AP)^{-1} = P^{-1}A^{-1}(P^{-1})^{-1} = P^{-1}A^{-1}P$, so $B$ is invertible.
* Rank: Similar matrices have the same rank.
* Trace: Similar matrices have the same trace.
* Eigenvalues: Similar matrices have the same eigenvalues.

---
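A quick numerical check that determinant and trace survive a similarity transformation (a random $P$ is invertible with probability 1, so no explicit check is done here):

```python
import numpy as np

rng = np.random.default_rng(4)
B = rng.standard_normal((3, 3))
P = rng.standard_normal((3, 3))   # assumed invertible (true almost surely)

A = P @ B @ np.linalg.inv(P)      # A is similar to B

print(np.isclose(np.linalg.det(A), np.linalg.det(B)))   # True
print(np.isclose(np.trace(A), np.trace(B)))             # True
```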

Problem-Solving Strategies

๐Ÿ’ก CMI Strategy: Leveraging Properties for Efficiency

Instead of always calculating the full inverse using the adjugate method, especially for larger matrices or in theoretical questions, leverage the properties of inverses:

  • For A^{-1} = A type problems: Multiply by A or A^{-1} to simplify. If A^{-1} = A, then AA = I (i.e., A^2 = I). This often simplifies algebraic manipulation significantly.

  • For (AB)^{-1} or (A^T)^{-1}: Use (AB)^{-1} = B^{-1}A^{-1} and (A^T)^{-1} = (A^{-1})^T to avoid computing products or transposes before inverting.

  • For system solvability (Ax = b): The existence of A^{-1} is equivalent to |A| \neq 0. If |A| = 0, then A^{-1} does not exist, and the system has either no solution or infinitely many.

  • For similarity transformations (A = PBP^{-1}): Determinants, traces, ranks, and invertibility are preserved under similarity. This can save extensive calculations.

  • Gaussian Elimination (Row Operations): To compute A^{-1} numerically for larger matrices, augment A with I and row-reduce [A|I] to [I|A^{-1}]; this is generally more efficient than the adjugate method.
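The [A|I] → [I|A^{-1}] procedure can be sketched in a few lines of NumPy. This is an illustrative implementation (with partial pivoting for numerical stability), not production code — in practice you would call `np.linalg.inv` or, better, avoid forming the inverse at all:

```python
import numpy as np

def inverse_gauss_jordan(A):
    """Invert A by row-reducing the augmented matrix [A | I] to [I | A^{-1}]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])            # the augmented matrix [A | I]
    for col in range(n):
        # Partial pivoting: bring the row with the largest entry into place.
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular (|A| = 0)")
        aug[[col, pivot]] = aug[[pivot, col]]  # swap rows
        aug[col] /= aug[col, col]              # scale the pivot row so the pivot is 1
        for row in range(n):                   # eliminate the column everywhere else
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                          # right half is now A^{-1}

A = np.array([[2.0, 5.0], [1.0, 3.0]])
print(inverse_gauss_jordan(A))                 # [[ 3. -5.] [-1.  2.]]
```

For this 2×2 example the result matches the adjugate-formula inverse computed earlier in the chapter.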

---

Common Mistakes

โš ๏ธ Avoid These Errors
    • ❌ Assuming all square matrices are invertible: Only matrices with a non-zero determinant are invertible. Always check |A| \neq 0 first.
    • ❌ Incorrect order in (AB)^{-1}: Students often incorrectly write (AB)^{-1} = A^{-1}B^{-1}.
✅ Correct Approach: Remember the "socks and shoes" rule: (AB)^{-1} = B^{-1}A^{-1}.
    • ❌ Misreading the shorthand A^{-T}: Treating A^{-T} and (A^T)^{-1} as different objects.
✅ Correct Approach: A^{-T} denotes (A^T)^{-1} = (A^{-1})^T; transpose and inverse can be applied in either order.
    • ❌ Calculation errors in minors/cofactors: A single sign error or incorrect submatrix determinant will produce a wrong inverse.
    • ❌ Dividing by a zero determinant: If |A| = 0, the formula A^{-1} = \frac{1}{|A|} \text{adj}(A) is undefined.
    • ❌ Applying the inverse to non-square matrices: The (two-sided) inverse discussed here is defined only for square matrices.
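Several of these pitfalls can be checked numerically. A NumPy sketch with hypothetical 2×2 matrices:

```python
import numpy as np

A = np.array([[1.0, 2.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
inv = np.linalg.inv

# "Socks and shoes": (AB)^{-1} = B^{-1} A^{-1}, NOT A^{-1} B^{-1}.
print(np.allclose(inv(A @ B), inv(B) @ inv(A)))   # True
print(np.allclose(inv(A @ B), inv(A) @ inv(B)))   # False for these matrices

# Transpose and inverse commute: (A^T)^{-1} = (A^{-1})^T.
print(np.allclose(inv(A.T), inv(A).T))            # True

# Singular matrix: |S| = 0, so S^{-1} does not exist.
S = np.array([[1.0, 2.0], [2.0, 4.0]])            # second row = 2 x first row
print(np.isclose(np.linalg.det(S), 0.0))          # True
```

Attempting `inv(S)` would raise `numpy.linalg.LinAlgError`, mirroring the "check |A| ≠ 0 first" rule.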
---

Practice Questions

:::question type="MCQ" question="Let A = \begin{pmatrix} 2 & 5 \\ 1 & 3 \end{pmatrix}. Which of the following statements about A^{-1} is true?" options=["A^{-1} = \begin{pmatrix} 3 & -5 \\ -1 & 2 \end{pmatrix}","A^{-1} = \begin{pmatrix} -2 & -5 \\ -1 & -3 \end{pmatrix}","A^{-1} = \begin{pmatrix} 2 & -1 \\ -5 & 3 \end{pmatrix}","The inverse does not exist."] answer="A^{-1} = \begin{pmatrix} 3 & -5 \\ -1 & 2 \end{pmatrix}" hint="Use the formula for the inverse of a 2 \times 2 matrix." solution="Step 1: Calculate the determinant of A: |A| = (2)(3) - (5)(1) = 6 - 5 = 1. Step 2: Apply the 2 \times 2 inverse formula A^{-1} = \frac{1}{|A|} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}: A^{-1} = \frac{1}{1} \begin{pmatrix} 3 & -5 \\ -1 & 2 \end{pmatrix} = \begin{pmatrix} 3 & -5 \\ -1 & 2 \end{pmatrix}. The correct option is A^{-1} = \begin{pmatrix} 3 & -5 \\ -1 & 2 \end{pmatrix}."
:::

:::question type="NAT" question="Let A be a 3 \times 3 invertible matrix with |A| = 4. What is the value of |(2A^T)^{-1}|?" answer="0.03125" hint="Use properties of determinants and inverses: |kA| = k^n|A| and |A^{-1}| = 1/|A|." solution="Step 1: Use the property |kA| = k^n|A| for a scalar k and an n \times n matrix; here n = 3, so |2A^T| = 2^3 |A^T|. Step 2: Use the property |A^T| = |A|, so |2A^T| = 8|A|. Step 3: Substitute |A| = 4 to get |2A^T| = 32. Step 4: Use the property |B^{-1}| = \frac{1}{|B|}: |(2A^T)^{-1}| = \frac{1}{|2A^T|} = \frac{1}{32} = 0.03125."
:::

:::question type="MSQ" question="Let A and B be n \times n invertible matrices. Which of the following statements is/are necessarily true?" options=["(A^{-1}B)^T = B^T (A^T)^{-1}","(A+B)^{-1} = A^{-1} + B^{-1}","A(BA)^{-1}B = I_n","If A^2 = I, then A^{-1} = A"] answer="(A^{-1}B)^T = B^T (A^T)^{-1},A(BA)^{-1}B = I_n,If A^2 = I, then A^{-1} = A" hint="Carefully apply the properties of inverse and transpose, paying attention to the order of operations. Consider counterexamples for false statements." solution="1. (A^{-1}B)^T = B^T (A^T)^{-1}: Using (XY)^T = Y^T X^T, we get (A^{-1}B)^T = B^T (A^{-1})^T, and since (A^{-1})^T = (A^T)^{-1}, this equals B^T (A^T)^{-1}. (True)
2. (A+B)^{-1} = A^{-1} + B^{-1}: Matrix inversion is not distributive over addition. Counterexample: A = I, B = I. Then (A+B)^{-1} = (2I)^{-1} = 0.5I, but A^{-1} + B^{-1} = I + I = 2I. (False)
3. A(BA)^{-1}B = I_n: Using (XY)^{-1} = Y^{-1}X^{-1}, we get A(A^{-1}B^{-1})B = (AA^{-1})(B^{-1}B) = I_n I_n = I_n. (True)
4. If A^2 = I, then A^{-1} = A: If A \cdot A = I, then by the definition of the inverse, A^{-1} = A. (True)
The correct options are (A^{-1}B)^T = B^T (A^T)^{-1}, A(BA)^{-1}B = I_n, and If A^2 = I, then A^{-1} = A."
:::

:::question type="SUB" question="Given an invertible matrix A, prove that (A^k)^{-1} = (A^{-1})^k for any positive integer k." answer="Proof by induction" hint="Use mathematical induction. Establish the base case for k = 1. Assume it holds for k = m, then prove it for k = m+1 using the property (AB)^{-1} = B^{-1}A^{-1}." solution="We prove this by mathematical induction on k. Base Case (k = 1): The statement (A^1)^{-1} = (A^{-1})^1 simplifies to A^{-1} = A^{-1}, which is trivially true. Inductive Hypothesis: Assume the statement holds for some positive integer m, i.e., (A^m)^{-1} = (A^{-1})^m. Inductive Step (k = m+1): Step 1: Express A^{m+1} as a product: A^{m+1} = A^m \cdot A. Step 2: Apply (XY)^{-1} = Y^{-1}X^{-1}: (A^{m+1})^{-1} = (A^m \cdot A)^{-1} = A^{-1} (A^m)^{-1}. Step 3: Apply the inductive hypothesis: A^{-1} (A^m)^{-1} = A^{-1} (A^{-1})^m. Step 4: Combine using exponent rules: A^{-1} (A^{-1})^m = (A^{-1})^{m+1}. By the principle of mathematical induction, (A^k)^{-1} = (A^{-1})^k holds for all positive integers k."
:::

:::question type="MCQ" question="Let A be a 3 \times 3 matrix such that A^3 = I, where I is the 3 \times 3 identity matrix. Which of the following is equal to A^{-1}?" options=["A","A^2","I","A^4"] answer="A^2" hint="Multiply A^3 = I by A^{-1} from the left or right, or recognize the definition of inverse." solution="Given A^3 = I, rewrite A^3 as A^2 \cdot A = I. By the definition of an inverse, if BA = I for square matrices, then B = A^{-1}; here B = A^2, so A^{-1} = A^2. Alternatively, multiply both sides by A^{-1} from the right: A^3 A^{-1} = I A^{-1} \implies A^2 = A^{-1}. The correct option is A^2."
:::

---

    Summary

    โ— Key Takeaways for CMI

    • Definition and Existence: An inverse A^{-1} exists for a square matrix A if and only if |A| \neq 0. The inverse is unique.

    • 2 \times 2 Inverse Formula: For A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}, A^{-1} = \frac{1}{ad-bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.

    • Adjugate Method: For general n \times n matrices, A^{-1} = \frac{1}{|A|} \text{adj}(A), where \text{adj}(A) is the transpose of the cofactor matrix.

    • Properties of Inverses: Remember key properties like (AB)^{-1} = B^{-1}A^{-1}, (A^T)^{-1} = (A^{-1})^T, and |A^{-1}| = \frac{1}{|A|}. These are critical for simplifying expressions.

    • Linear Systems: If A^{-1} exists, Ax = b has a unique solution x = A^{-1}b. If A^{-1} does not exist, the system has either no solution or infinitely many.

    • Similarity Transformations: Matrices A and B are similar if A = PBP^{-1}. Similar matrices share determinants, ranks, traces, and invertibility status.

    ---

    What's Next?

    ๐Ÿ’ก Continue Learning

    This topic connects to:

      • Rank of a Matrix: Invertibility is directly related to a matrix having full rank. An n \times n matrix is invertible if and only if its rank equals n.

      • Eigenvalues and Eigenvectors: Similarity transformations (A = PBP^{-1}) are fundamental to diagonalization, where P consists of eigenvectors and B is a diagonal matrix of eigenvalues. An invertible matrix cannot have a zero eigenvalue.

      • Linear Transformations: An invertible matrix corresponds to an invertible linear transformation, meaning the transformation is both injective (one-to-one) and surjective (onto).

      • Matrix Decompositions: Many decompositions (e.g., LU, QR) involve invertible matrices as components, which are crucial for numerical stability and efficiency in computations.

    ---

    Chapter Summary

    ๐Ÿ“– Matrices - Key Takeaways

    To excel in CMI, a thorough understanding of matrices is indispensable. Here are the most critical points to remember:

    • Fundamental Definitions & Operations: Master the definitions of various matrix types (square, diagonal, identity, zero, row, column matrices), their orders, and the conditions for basic operations like addition, subtraction, and scalar multiplication. These operations are element-wise and follow standard algebraic properties.

    • Matrix Multiplication Mastery: This is arguably the most crucial operation. For AB to be defined, the number of columns in A must equal the number of rows in B; the product AB has the number of rows of A and the number of columns of B. Crucially, matrix multiplication is generally not commutative (AB \neq BA) but is associative (A(BC) = (AB)C) and distributive (A(B+C) = AB + AC).

    • Transpose and its Properties: The transpose A^T is formed by interchanging rows and columns. Key properties include (A^T)^T = A, (A+B)^T = A^T + B^T, (kA)^T = kA^T, and most importantly, (AB)^T = B^T A^T. Be familiar with symmetric (A = A^T) and skew-symmetric (A = -A^T) matrices.

    • Inverse of a Matrix: A square matrix A is invertible (or non-singular) if there exists a matrix A^{-1} such that AA^{-1} = A^{-1}A = I (the identity matrix). For a 2 \times 2 matrix A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, its inverse is A^{-1} = \frac{1}{ad-bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}, provided ad - bc \neq 0. Key properties are (A^{-1})^{-1} = A, (AB)^{-1} = B^{-1}A^{-1}, and (A^T)^{-1} = (A^{-1})^T.

    • Solving Matrix Equations: Matrices provide a powerful framework for solving systems of linear equations. Equations of the form AX = B can be solved by pre-multiplying by A^{-1} (if A is invertible) to get X = A^{-1}B. This highlights the practical utility of matrix inverses.
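The 2×2 inverse formula and the X = A^{-1}B recipe can be combined in a short NumPy check. Here `inv2x2` is a hypothetical helper written for this sketch, and the matrices are illustrative values:

```python
import numpy as np

def inv2x2(M):
    """Inverse of a 2x2 matrix via the ad - bc formula (illustrative helper)."""
    a, b = M[0]
    c, d = M[1]
    det = a * d - b * c
    if np.isclose(det, 0.0):
        raise ValueError("singular: ad - bc = 0")
    # Swap the diagonal, negate the off-diagonal, divide by the determinant.
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[3.0, 2.0], [1.0, 1.0]])
B = np.array([[5.0], [2.0]])

# X = A^{-1} B solves AX = B when A is invertible.
X = inv2x2(A) @ B
print(X)                      # [[1.] [1.]]
print(np.allclose(A @ X, B))  # True
```

Substituting X back into AX confirms it reproduces B, which is the defining property of the solution.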

    ---

    Chapter Review Questions

    :::question type="MCQ" question="Let A = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} and B = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}. Which of the following is equal to (AB^T)^{-1}?" options=["A) \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}","B) \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix}","C) \begin{pmatrix} 1 & -2 \\ 0 & 1 \end{pmatrix}","D) \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}"] answer="A) \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}" hint="First, find B^T. Then calculate the product AB^T. Finally, find the inverse of the resulting matrix." solution="First, find the transpose of B: B^T = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}. Next, calculate the product: AB^T = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 0 & 1 \end{pmatrix}. Let C = AB^T. With a = 1, b = 3, c = 0, d = 1, the determinant is ad - bc = 1 - 0 = 1, so C^{-1} = \frac{1}{1} \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}. Thus, (AB^T)^{-1} = \begin{pmatrix} 1 & -3 \\ 0 & 1 \end{pmatrix}."
    :::

    :::question type="NAT" question="Let A = \begin{pmatrix} 2 & x \\ 3 & 4 \end{pmatrix} and B = \begin{pmatrix} 1 & 5 \\ 2 & 1 \end{pmatrix}. If the matrix A + B is symmetric, find the value of x." answer="0" hint="A matrix M is symmetric if M = M^T. First, find the sum A + B, then apply the condition for symmetry." solution="First, calculate the sum: A + B = \begin{pmatrix} 3 & x+5 \\ 5 & 5 \end{pmatrix}. For M = A + B to be symmetric, M = M^T, i.e., \begin{pmatrix} 3 & x+5 \\ 5 & 5 \end{pmatrix} = \begin{pmatrix} 3 & 5 \\ x+5 & 5 \end{pmatrix}. Equating corresponding elements: x + 5 = 5 \implies x = 0."
    :::

    :::question type="MCQ" question="Given A = \begin{pmatrix} 3 & 2 \\ 1 & 1 \end{pmatrix} and B = \begin{pmatrix} 5 \\ 2 \end{pmatrix}. If AX = B, what is X?" options=["A) \begin{pmatrix} 1 \\ 1 \end{pmatrix}","B) \begin{pmatrix} 1 \\ 2 \end{pmatrix}","C) \begin{pmatrix} 2 \\ 1 \end{pmatrix}","D) \begin{pmatrix} -1 \\ 2 \end{pmatrix}"] answer="A) \begin{pmatrix} 1 \\ 1 \end{pmatrix}" hint="To solve for X in AX = B, pre-multiply both sides by A^{-1}." solution="We solve AX = B via X = A^{-1}B. The determinant of A is \det(A) = (3)(1) - (2)(1) = 1, so A^{-1} = \frac{1}{1} \begin{pmatrix} 1 & -2 \\ -1 & 3 \end{pmatrix}. Then X = A^{-1}B = \begin{pmatrix} 1 & -2 \\ -1 & 3 \end{pmatrix} \begin{pmatrix} 5 \\ 2 \end{pmatrix} = \begin{pmatrix} 5 - 4 \\ -5 + 6 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \end{pmatrix}. The correct answer is A."
    :::

    :::question type="NAT" question="Let P = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} and Q = \begin{pmatrix} 0 & 1 \\ 2 & 3 \end{pmatrix}. If 2P - Q^T = R, find the value of R_{12} (the element in the first row, second column of R)." answer="2" hint="First, calculate 2P. Then find the transpose Q^T. Finally, compute R = 2P - Q^T and read off R_{12}." solution="First, calculate 2P = \begin{pmatrix} 2 & 4 \\ 6 & 8 \end{pmatrix}. Next, find the transpose Q^T = \begin{pmatrix} 0 & 2 \\ 1 & 3 \end{pmatrix}. Then R = 2P - Q^T = \begin{pmatrix} 2 & 2 \\ 5 & 5 \end{pmatrix}, so R_{12} = 2."
    :::

    ---

    What's Next?

    ๐Ÿ’ก Continue Your CMI Journey

    Congratulations! You've successfully navigated the foundational concepts of Matrices. This chapter is a cornerstone of linear algebra and its applications, equipping you with essential tools for various mathematical and scientific problems.

    Key connections:

    * Building on Basic Algebra: Matrices provide a powerful, structured way to handle systems of linear equations, extending your understanding from basic algebraic methods of substitution and elimination.
    * Foundation for Advanced Topics: The concepts learned here are absolutely vital for upcoming chapters. You'll find that:
    * Determinants (the next logical step) are directly related to matrix inverses and are crucial for solving systems of linear equations using Cramer's Rule.
    * Systems of Linear Equations will be revisited with more sophisticated matrix methods, including Gaussian elimination and matrix inversion.
    * Vector Spaces and Linear Transformations heavily rely on matrices to represent transformations and understand geometric operations in higher dimensions.
    * Eigenvalues and Eigenvectors (advanced topics) are fundamental to understanding the behavior of linear transformations and have wide applications in physics, engineering, and data science.

    Keep practicing and integrating these concepts, as they form the bedrock for much of your further mathematical studies for CMI and beyond!

    ๐ŸŽฏ Key Points to Remember

    • โœ“ Master the core concepts in Matrices before moving to advanced topics
    • โœ“ Practice with previous year questions to understand exam patterns
    • โœ“ Review short notes regularly for quick revision before exams
