
Vector Spaces

Comprehensive study notes on Vector Spaces for CUET PG Mathematics preparation. This chapter covers key concepts, formulas, and examples needed for your exam.

Vector Spaces

This chapter introduces the fundamental concepts of vector spaces, subspaces, linear dependence, independence, basis, and dimension. These core principles form the bedrock of linear algebra, providing essential tools for abstract mathematical reasoning. Mastery of these topics is critical for the CUET PG MA examination, where they are frequently assessed and serve as prerequisites for advanced subjects.

---

Chapter Contents

| # | Topic |
|---|-------|
| 1 | Vector Spaces and Subspaces |
| 2 | Linear Dependence and Independence |
| 3 | Basis and Dimension |

---

We begin with Vector Spaces and Subspaces.

Part 1: Vector Spaces and Subspaces

Vector spaces form the fundamental algebraic structure in linear algebra, providing a framework for operations on objects such as vectors, polynomials, and functions. We define a vector space as a set equipped with two operations—vector addition and scalar multiplication—that satisfy a specific set of axioms. Understanding vector spaces and their subspaces is crucial for solving problems involving linear transformations, eigenvalues, and basis constructions, which are frequently tested in competitive examinations.

---

Core Concepts

1. Definition of a Vector Space

A real vector space $V$ is a set of objects, called vectors, together with two operations: vector addition and scalar multiplication. These operations must satisfy the following ten axioms for all vectors $\mathbf{u}, \mathbf{v}, \mathbf{w}$ in $V$ and all scalars $c, d$ in $\mathbb{R}$:

  • Closure under Addition: $\mathbf{u} + \mathbf{v}$ is in $V$.

  • Commutativity of Addition: $\mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u}$.

  • Associativity of Addition: $(\mathbf{u} + \mathbf{v}) + \mathbf{w} = \mathbf{u} + (\mathbf{v} + \mathbf{w})$.

  • Existence of Zero Vector: There exists a zero vector $\mathbf{0}$ in $V$ such that $\mathbf{u} + \mathbf{0} = \mathbf{u}$.

  • Existence of Negative Vector: For each $\mathbf{u}$ in $V$, there exists a vector $-\mathbf{u}$ in $V$ such that $\mathbf{u} + (-\mathbf{u}) = \mathbf{0}$.

  • Closure under Scalar Multiplication: $c\mathbf{u}$ is in $V$.

  • Distributivity (Scalar over Vector Addition): $c(\mathbf{u} + \mathbf{v}) = c\mathbf{u} + c\mathbf{v}$.

  • Distributivity (Vector over Scalar Addition): $(c + d)\mathbf{u} = c\mathbf{u} + d\mathbf{u}$.

  • Associativity of Scalar Multiplication: $c(d\mathbf{u}) = (cd)\mathbf{u}$.

  • Identity for Scalar Multiplication: $1\mathbf{u} = \mathbf{u}$.

    Quick Example:
    Consider the set $V = \mathbb{R}^2$ with standard vector addition and scalar multiplication. We demonstrate closure under addition.

    Step 1: Let $\mathbf{u} = (u_1, u_2)$ and $\mathbf{v} = (v_1, v_2)$ be arbitrary vectors in $\mathbb{R}^2$.

    $$\mathbf{u} + \mathbf{v} = (u_1, u_2) + (v_1, v_2)$$

    Step 2: Perform vector addition.

    $$\mathbf{u} + \mathbf{v} = (u_1 + v_1, u_2 + v_2)$$

    Answer: Since $u_1+v_1$ and $u_2+v_2$ are real numbers, $(u_1+v_1, u_2+v_2)$ is an element of $\mathbb{R}^2$. Thus, $\mathbb{R}^2$ is closed under vector addition. The other nine axioms can be verified similarly.
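The axioms above can be spot-checked numerically on sample vectors. A minimal sketch (NumPy is our tool choice here, not part of the source material; equality checks use `np.allclose` because of floating-point arithmetic):

```python
import numpy as np

u = np.array([1.5, -2.0])   # two arbitrary vectors in R^2
v = np.array([0.5,  3.0])
c, d = 2.0, -1.0            # two arbitrary scalars

# Closure: the sum is again a length-2 real array, i.e. an element of R^2
s = u + v
assert s.shape == (2,)

# Spot-check a few axioms on these particular sample vectors
assert np.allclose(u + v, v + u)                  # commutativity
assert np.allclose(c * (u + v), c * u + c * v)    # distributivity (scalar over vector addition)
assert np.allclose((c + d) * u, c * u + d * u)    # distributivity (vector over scalar addition)
assert np.allclose(1.0 * u, u)                    # identity for scalar multiplication
```

Such checks only confirm the axioms for the sampled vectors; the algebraic proof in the example is what establishes them for all of $\mathbb{R}^2$.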

    :::question type="MCQ" question="Which of the following sets, with standard addition and scalar multiplication, forms a vector space over $\mathbb{R}$?" options=["The set of all $2 \times 2$ matrices with integer entries.","The set of all polynomials of degree exactly 3.","The set of all continuous functions $f: [0,1] \to \mathbb{R}$.","The set of all $2 \times 2$ invertible matrices."] answer="The set of all continuous functions $f: [0,1] \to \mathbb{R}$." hint="Check the closure axioms and existence of the zero vector for each option." solution="Step 1: Analyze 'The set of all $2 \times 2$ matrices with integer entries'.
    This set is not closed under scalar multiplication by all real numbers. For example, the identity matrix

    $$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$$

    has integer entries, but the scalar multiple

    $$\frac{1}{2}A = \begin{pmatrix} 1/2 & 0 \\ 0 & 1/2 \end{pmatrix}$$

    does not. Thus, it is not a vector space over $\mathbb{R}$.

    Step 2: Analyze 'The set of all polynomials of degree exactly 3'.
    This set does not contain the zero vector (the zero polynomial has undefined degree, or degree $-\infty$). It is also not closed under addition: $(x^3+x) + (-x^3+1) = x+1$, which has degree 1, not 3. Thus, it is not a vector space.

    Step 3: Analyze 'The set of all continuous functions $f: [0,1] \to \mathbb{R}$'.
    The sum of two continuous functions is continuous, a scalar multiple of a continuous function is continuous, the zero function $f(x)=0$ is continuous, and the negative of a continuous function is continuous. All other vector space axioms hold for functions. Thus, this set forms a vector space.

    Step 4: Analyze 'The set of all $2 \times 2$ invertible matrices'.
    This set does not contain the zero matrix, which is required for a vector space. It is also not closed under addition: the identity matrix $I$ and its negative $-I$ are both invertible, but their sum

    $$I + (-I) = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$

    is the zero matrix, which is not invertible. Thus, it is not a vector space.

    Answer: \boxed{\text{The set of all continuous functions } f: [0,1] \to \mathbb{R}}
    "
    :::

    ---

    2. Definition of a Subspace

    A subspace $W$ of a vector space $V$ is a subset of $V$ that is itself a vector space under the same operations of vector addition and scalar multiplication defined on $V$. To verify that a non-empty subset $W$ is a subspace, we only need to check three conditions, often condensed into two.

    📖 Subspace Test

    A non-empty subset $W$ of a vector space $V$ is a subspace if and only if:

    • The zero vector $\mathbf{0}$ of $V$ is in $W$.

    • For every $\mathbf{u}, \mathbf{v} \in W$, $\mathbf{u} + \mathbf{v} \in W$ (closure under addition).

    • For every $\mathbf{u} \in W$ and scalar $c \in \mathbb{R}$, $c\mathbf{u} \in W$ (closure under scalar multiplication).

    Alternatively, conditions 2 and 3 can be combined:
    For every $\mathbf{u}, \mathbf{v} \in W$ and scalar $c \in \mathbb{R}$, $c\mathbf{u} + \mathbf{v} \in W$. This implies both closure conditions, and since $W$ is non-empty it also yields the zero vector (choose $c = -1$ and $\mathbf{v} = \mathbf{u}$, giving $-\mathbf{u} + \mathbf{u} = \mathbf{0} \in W$).

    Quick Example:
    Determine if $W = \{(x, y) \in \mathbb{R}^2 : x - 2y = 0\}$ is a subspace of $\mathbb{R}^2$.

    Step 1: Check if the zero vector is in $W$.

    $$\mathbf{0} = (0, 0), \qquad 0 - 2(0) = 0$$

    Since $0 = 0$, the zero vector is in $W$, so $W$ is non-empty.

    Step 2: Check closure under addition. Let $\mathbf{u} = (x_1, y_1)$ and $\mathbf{v} = (x_2, y_2)$ be in $W$.
    This implies $x_1 - 2y_1 = 0$ and $x_2 - 2y_2 = 0$.

    $$\mathbf{u} + \mathbf{v} = (x_1 + x_2, y_1 + y_2)$$

    We check whether $(x_1 + x_2) - 2(y_1 + y_2) = 0$:
    $$\begin{aligned}(x_1 + x_2) - 2(y_1 + y_2) & = (x_1 - 2y_1) + (x_2 - 2y_2) \\ & = 0 + 0 = 0\end{aligned}$$

    Thus, $\mathbf{u} + \mathbf{v}$ is in $W$.

    Step 3: Check closure under scalar multiplication. Let $\mathbf{u} = (x, y)$ be in $W$ and $c \in \mathbb{R}$.
    This implies $x - 2y = 0$.

    $$c\mathbf{u} = (cx, cy)$$

    We check whether $cx - 2(cy) = 0$:
    $$\begin{aligned}cx - 2cy & = c(x - 2y) \\ & = c(0) = 0\end{aligned}$$

    Thus, $c\mathbf{u}$ is in $W$.

    Answer: Since all three conditions are satisfied, $W$ is a subspace of $\mathbb{R}^2$.
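The three subspace conditions can be exercised numerically on sample members of $W$. A short sketch (NumPy and the helper `in_W` are our own illustrative choices, not from the source):

```python
import numpy as np

def in_W(p):
    """Membership test for W = {(x, y) : x - 2y = 0}."""
    x, y = p
    return bool(np.isclose(x - 2 * y, 0.0))

u = np.array([4.0, 2.0])     # 4 - 2*2 = 0, so u is in W
v = np.array([-6.0, -3.0])   # -6 - 2*(-3) = 0, so v is in W

assert in_W(np.zeros(2))     # condition 1: zero vector
assert in_W(u + v)           # condition 2: closure under addition
assert in_W(3.5 * u)         # condition 3: closure under scalar multiplication
```

As with any numerical check, this only samples the conditions; the algebraic argument above proves them for every member of $W$.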

    ⚠️ Common Mistake: Subspace Identification

    ❌ Assuming any subset satisfying a linear equation is a subspace, especially if the equation is non-homogeneous (e.g., $x+y=1$).
    ✅ A set defined by a linear equation $ax+by+cz=k$ is a subspace if and only if $k=0$ (i.e., it passes through the origin).

    :::question type="MCQ" question="Which of the following is a subspace of $\mathbb{R}^3$?" options=["$W = \\{(x, y, z) \in \mathbb{R}^3 : x + 4y - 10z = -2\\}$","$W = \\{(x, y, z) \in \mathbb{R}^3 : xy = 0\\}$","$W = \\{(x, y, z) \in \mathbb{R}^3 : 2x + 3y - 4z = 0\\}$","$W = \\{(x, y, z) \in \mathbb{R}^3 : x \in \mathbb{Q}\\}$"] answer="$W = \\{(x, y, z) \in \mathbb{R}^3 : 2x + 3y - 4z = 0\\}$" hint="Apply the 3-condition subspace test. Pay attention to the zero vector and closure properties." solution="Step 1: Analyze $W = \\{(x, y, z) \in \mathbb{R}^3 : x + 4y - 10z = -2\\}$.
    The zero vector $(0,0,0)$ does not satisfy the condition, since $0 + 4(0) - 10(0) = 0 \neq -2$. Therefore, $W$ is not a subspace.

    Step 2: Analyze $W = \\{(x, y, z) \in \mathbb{R}^3 : xy = 0\\}$.
    The zero vector $(0,0,0)$ satisfies $0 \cdot 0 = 0$.
    Consider closure under addition. Let $\mathbf{u} = (1, 0, 0)$ and $\mathbf{v} = (0, 1, 0)$. Both are in $W$ since $1 \cdot 0 = 0$ and $0 \cdot 1 = 0$.
    However, $\mathbf{u} + \mathbf{v} = (1, 1, 0)$, for which $xy = 1 \cdot 1 = 1 \neq 0$. So $\mathbf{u} + \mathbf{v}$ is not in $W$. Thus, $W$ is not closed under addition and is not a subspace.

    Step 3: Analyze $W = \\{(x, y, z) \in \mathbb{R}^3 : 2x + 3y - 4z = 0\\}$.

  • Zero vector: $2(0) + 3(0) - 4(0) = 0$, so the zero vector is in $W$.

  • Closure under addition: Let $\mathbf{u} = (x_1, y_1, z_1)$ and $\mathbf{v} = (x_2, y_2, z_2)$ be in $W$, so $2x_1 + 3y_1 - 4z_1 = 0$ and $2x_2 + 3y_2 - 4z_2 = 0$.
    Their sum is $\mathbf{u} + \mathbf{v} = (x_1+x_2, y_1+y_2, z_1+z_2)$, and
    $2(x_1+x_2) + 3(y_1+y_2) - 4(z_1+z_2) = (2x_1+3y_1-4z_1) + (2x_2+3y_2-4z_2) = 0 + 0 = 0$.
    So $\mathbf{u} + \mathbf{v}$ is in $W$.

  • Closure under scalar multiplication: Let $\mathbf{u} = (x, y, z)$ be in $W$ and $c \in \mathbb{R}$, so $2x + 3y - 4z = 0$.
    The scalar multiple is $c\mathbf{u} = (cx, cy, cz)$, and
    $2(cx) + 3(cy) - 4(cz) = c(2x+3y-4z) = c(0) = 0$.
    So $c\mathbf{u}$ is in $W$.

    Since all conditions are met, $W$ is a subspace.

    Step 4: Analyze $W = \\{(x, y, z) \in \mathbb{R}^3 : x \in \mathbb{Q}\\}$.
    The zero vector $(0,0,0)$ is in $W$ since $0 \in \mathbb{Q}$.
    Consider closure under scalar multiplication. Let $\mathbf{u} = (1, 0, 0)$, which is in $W$ since $1 \in \mathbb{Q}$.
    Let $c = \sqrt{2}$. Then $c\mathbf{u} = (\sqrt{2}, 0, 0)$, and $\sqrt{2} \notin \mathbb{Q}$. So $c\mathbf{u}$ is not in $W$. Thus, $W$ is not closed under scalar multiplication and is not a subspace."
    :::
    :::

    ---

    3. Intersection and Union of Subspaces

    We examine how set operations interact with the subspace property.

    3.1. Intersection of Subspaces

    The intersection of any two subspaces of a vector space is always a subspace itself. This is a fundamental property.

    Intersection of Subspaces

    If $W_1$ and $W_2$ are subspaces of a vector space $V$, then their intersection $W_1 \cap W_2$ is also a subspace of $V$.

    Quick Example:
    Let $W_1 = \{(x, y, z) \in \mathbb{R}^3 : x=0\}$ (the $yz$-plane) and $W_2 = \{(x, y, z) \in \mathbb{R}^3 : y=0\}$ (the $xz$-plane). Both are subspaces of $\mathbb{R}^3$.
    We find their intersection $W_1 \cap W_2$.

    Step 1: Define the intersection.
    A vector $(x,y,z)$ is in $W_1 \cap W_2$ if it satisfies both conditions: $x=0$ and $y=0$.

    $$W_1 \cap W_2 = \{(x, y, z) \in \mathbb{R}^3 : x=0 \text{ and } y=0\}$$

    Step 2: Check the subspace conditions for $W_1 \cap W_2$.

  • Zero vector: For $(0,0,0)$, $x=0$ and $y=0$ are satisfied. So $\mathbf{0} \in W_1 \cap W_2$.

  • Closure under addition: Let $\mathbf{u} = (0, 0, z_1)$ and $\mathbf{v} = (0, 0, z_2)$ be in $W_1 \cap W_2$. Then
    $$\mathbf{u} + \mathbf{v} = (0+0, 0+0, z_1+z_2) = (0, 0, z_1+z_2)$$
    This vector satisfies $x=0$ and $y=0$, so it is in $W_1 \cap W_2$.

  • Closure under scalar multiplication: Let $\mathbf{u} = (0, 0, z)$ be in $W_1 \cap W_2$ and $c \in \mathbb{R}$. Then
    $$c\mathbf{u} = (c \cdot 0, c \cdot 0, cz) = (0, 0, cz)$$
    This vector satisfies $x=0$ and $y=0$, so it is in $W_1 \cap W_2$.

    Answer: $W_1 \cap W_2$ is the $z$-axis, which is a subspace of $\mathbb{R}^3$.
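The intersection check, and the contrast with the union, can both be sketched numerically. A minimal illustration (the membership helpers are our own, assumed for this sketch):

```python
import numpy as np

in_W1 = lambda p: bool(np.isclose(p[0], 0.0))   # yz-plane: x = 0
in_W2 = lambda p: bool(np.isclose(p[1], 0.0))   # xz-plane: y = 0
in_cap = lambda p: in_W1(p) and in_W2(p)        # intersection = z-axis

u = np.array([0.0, 0.0, 2.0])    # on the z-axis
v = np.array([0.0, 0.0, -5.0])

assert in_cap(u) and in_cap(v)
assert in_cap(u + v)             # intersection is closed under addition
assert in_cap(-3.0 * u)          # and under scalar multiplication

# By contrast, the UNION W1 ∪ W2 fails closure: (1,0,0) and (0,1,0)
# each lie in the union, but their sum (1,1,0) lies in neither plane.
w = np.array([1.0, 0.0, 0.0]) + np.array([0.0, 1.0, 0.0])
assert not (in_W1(w) or in_W2(w))
```

The last two lines preview the union counterexample discussed below.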

    :::question type="MCQ" question="Let $W_1$ and $W_2$ be any two subspaces of a vector space $V$. Which of the following statements is true regarding their intersection and union?" options=["$W_1 \cup W_2$ is always a subspace, but $W_1 \cap W_2$ is not always a subspace.","$W_1 \cap W_2$ is always a subspace, and $W_1 \cup W_2$ is always a subspace.","$W_1 \cap W_2$ is always a subspace, but $W_1 \cup W_2$ is not always a subspace.","Neither $W_1 \cap W_2$ nor $W_1 \cup W_2$ is always a subspace."] answer="$W_1 \cap W_2$ is always a subspace, but $W_1 \cup W_2$ is not always a subspace." hint="Recall the properties of subspace intersection and union. For the union, consider a counterexample." solution="Step 1: Analyze the intersection $W_1 \cap W_2$.
    If $W_1$ and $W_2$ are subspaces of $V$, then $W_1 \cap W_2$ is always a subspace of $V$. This is a standard theorem in linear algebra.

    Step 2: Analyze the union $W_1 \cup W_2$.
    The union of two subspaces is not generally a subspace.
    Consider a counterexample in $V = \mathbb{R}^2$.
    Let $W_1 = \{(x, 0) : x \in \mathbb{R}\}$ (the $x$-axis) and $W_2 = \{(0, y) : y \in \mathbb{R}\}$ (the $y$-axis).
    Both $W_1$ and $W_2$ are subspaces of $\mathbb{R}^2$.
    Their union is $W_1 \cup W_2 = \{(x, y) : x=0 \text{ or } y=0\}$.
    Consider $\mathbf{u} = (1, 0) \in W_1 \cup W_2$ and $\mathbf{v} = (0, 1) \in W_1 \cup W_2$.
    Their sum is $\mathbf{u} + \mathbf{v} = (1, 1)$.
    For $(1,1)$, neither $x=0$ nor $y=0$ holds, so $(1,1) \notin W_1 \cup W_2$.
    Therefore, $W_1 \cup W_2$ is not closed under addition, and hence is not a subspace.

    Step 3: Conclude.
    $W_1 \cap W_2$ is always a subspace, but $W_1 \cup W_2$ is not always a subspace. This matches the third option."
    :::
    :::

    3.2. Union of Subspaces

    The union of two subspaces $W_1$ and $W_2$ is generally not a subspace. It is a subspace if and only if one subspace is contained within the other (i.e., $W_1 \subseteq W_2$ or $W_2 \subseteq W_1$).

    4. Linear Combinations and Span

    A vector $\mathbf{v}$ is a linear combination of vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k$ if it can be expressed as a sum of scalar multiples of these vectors. The set of all possible linear combinations of a set of vectors is called their span.

    📖 Linear Combination and Span

    A vector $\mathbf{v} \in V$ is a linear combination of vectors $\mathbf{v}_1, \ldots, \mathbf{v}_k \in V$ if there exist scalars $c_1, \ldots, c_k \in \mathbb{R}$ such that:

    $$\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_k\mathbf{v}_k$$

    The span of a set of vectors $S = \{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$, denoted $\operatorname{span}(S)$ or $\operatorname{span}\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$, is the set of all possible linear combinations of these vectors. The span of any non-empty set of vectors in $V$ is always a subspace of $V$.

    Quick Example:
    Determine if $\mathbf{v} = (7, 4, -3)$ is in the span of $S = \{\mathbf{v}_1, \mathbf{v}_2\}$, where $\mathbf{v}_1 = (1, 2, -1)$ and $\mathbf{v}_2 = (2, -1, 3)$.

    Step 1: Set up the linear combination equation.
    We need to find scalars $c_1, c_2$ such that $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2$.

    $$(7, 4, -3) = c_1(1, 2, -1) + c_2(2, -1, 3)$$

    This leads to a system of linear equations:
    $$\begin{aligned}c_1 + 2c_2 & = 7 \\ 2c_1 - c_2 & = 4 \\ -c_1 + 3c_2 & = -3\end{aligned}$$

    Step 2: Solve the system of equations.
    From the first equation, $c_1 = 7 - 2c_2$. Substitute into the second equation:

    $$\begin{aligned}2(7 - 2c_2) - c_2 & = 4 \\ 14 - 4c_2 - c_2 & = 4 \\ 14 - 5c_2 & = 4 \\ -5c_2 & = -10 \\ c_2 & = 2\end{aligned}$$

    Substitute $c_2 = 2$ back into $c_1 = 7 - 2c_2$:
    $$c_1 = 7 - 2(2) = 7 - 4 = 3$$

    Check with the third equation:
    $$-c_1 + 3c_2 = -(3) + 3(2) = -3 + 6 = 3$$

    The third equation requires $-c_1 + 3c_2 = -3$, but we found it equals $3$. Since $3 \neq -3$, the system is inconsistent.

    Answer: The vector $\mathbf{v}$ is not in the span of $S$.
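The same conclusion can be reached by a rank comparison (the Rouché–Capelli criterion): $\mathbf{v} \in \operatorname{span}\{\mathbf{v}_1, \mathbf{v}_2\}$ exactly when appending $\mathbf{v}$ as a column does not increase the rank. A brief sketch using NumPy (our tool choice, not the source's method):

```python
import numpy as np

v1 = np.array([1.0, 2.0, -1.0])
v2 = np.array([2.0, -1.0, 3.0])
v  = np.array([7.0, 4.0, -3.0])

A  = np.column_stack([v1, v2])      # coefficient matrix of the system
Ab = np.column_stack([v1, v2, v])   # augmented matrix [A | v]

# Rank jumps from 2 to 3 when v is appended, so the system is
# inconsistent and v does not lie in span{v1, v2}.
assert np.linalg.matrix_rank(A) == 2
assert np.linalg.matrix_rank(Ab) == 3
```

A rank of 2 for the augmented matrix would instead have confirmed membership in the span.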

    :::question type="NAT" question="Let $\mathbf{v}_1 = (1, 0, 1)$, $\mathbf{v}_2 = (0, 1, 1)$, and $\mathbf{v}_3 = (1, 1, 0)$. If $\mathbf{w} = (2, 3, 5)$ is written as a linear combination $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3$, what is the value of $c_1 + c_2 + c_3$?" answer="5" hint="Set up a system of linear equations and solve for $c_1, c_2, c_3$. Then sum them." solution="Step 1: Set up the system of linear equations from $\mathbf{w} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3$.

    $$(2, 3, 5) = c_1(1, 0, 1) + c_2(0, 1, 1) + c_3(1, 1, 0)$$

    This yields:
    $$\begin{aligned}c_1 + c_3 & = 2 \\ c_2 + c_3 & = 3 \\ c_1 + c_2 & = 5\end{aligned}$$

    Step 2: Solve the system.
    From the third equation, $c_1 = 5 - c_2$. Substitute into the first equation:

    $$(5 - c_2) + c_3 = 2 \implies -c_2 + c_3 = -3$$

    Now we have a system of two equations in $c_2$ and $c_3$:
    $$\begin{aligned}c_2 + c_3 & = 3 \\ -c_2 + c_3 & = -3\end{aligned}$$

    Add these two equations:
    $$2c_3 = 0 \implies c_3 = 0$$

    Substitute $c_3 = 0$ into $c_2 + c_3 = 3$ to get $c_2 = 3$, and then $c_2 = 3$ into $c_1 + c_2 = 5$ to get $c_1 = 2$.

    So $c_1=2$, $c_2=3$, $c_3=0$.

    Step 3: Calculate $c_1 + c_2 + c_3$.

    $$c_1 + c_2 + c_3 = 2 + 3 + 0 = 5$$
    "
    :::
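Because $\mathbf{v}_1, \mathbf{v}_2, \mathbf{v}_3$ form a basis of $\mathbb{R}^3$ here, the coefficient system has a unique solution and can be solved directly. A sketch of the computation in NumPy (our own verification, not part of the source):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 1.0, 0.0])
w  = np.array([2.0, 3.0, 5.0])

# Columns of A are the v_i, so A @ c = w encodes c1*v1 + c2*v2 + c3*v3 = w
A = np.column_stack([v1, v2, v3])
c = np.linalg.solve(A, w)   # unique because det(A) != 0

assert np.allclose(c, [2.0, 3.0, 0.0])
assert np.isclose(c.sum(), 5.0)
```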

    5. Linear Independence and Dependence

    The concept of linear independence is central to defining a basis for a vector space.

    📖 Linear Independence/Dependence

    A set of vectors $\{\mathbf{v}_1, \ldots, \mathbf{v}_k\}$ in a vector space $V$ is linearly independent if the only solution to the vector equation

    $$c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_k\mathbf{v}_k = \mathbf{0}$$

    is the trivial solution $c_1 = c_2 = \ldots = c_k = 0$.
    If there exists at least one non-trivial solution (i.e., not all $c_i$ are zero), the set of vectors is linearly dependent.

    Quick Example:
    Determine if the vectors $\mathbf{v}_1 = (1, 2, 3)$, $\mathbf{v}_2 = (0, 1, 2)$, $\mathbf{v}_3 = (-1, 0, 1)$ are linearly independent in $\mathbb{R}^3$.

    Step 1: Set up the equation $c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + c_3\mathbf{v}_3 = \mathbf{0}$.

    $$c_1(1, 2, 3) + c_2(0, 1, 2) + c_3(-1, 0, 1) = (0, 0, 0)$$

    This forms a homogeneous system of linear equations:
    $$\begin{aligned}c_1 - c_3 & = 0 \\ 2c_1 + c_2 & = 0 \\ 3c_1 + 2c_2 + c_3 & = 0\end{aligned}$$

    Step 2: Form the augmented matrix and reduce it to row echelon form.

    $$\left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 2 & 1 & 0 & 0 \\ 3 & 2 & 1 & 0 \end{array}\right]$$

    Perform row operations $R_2 \to R_2 - 2R_1$ and $R_3 \to R_3 - 3R_1$:
    $$\left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 2 & 4 & 0 \end{array}\right]$$

    Perform row operation $R_3 \to R_3 - 2R_2$:
    $$\left[\begin{array}{ccc|c} 1 & 0 & -1 & 0 \\ 0 & 1 & 2 & 0 \\ 0 & 0 & 0 & 0 \end{array}\right]$$

    Step 3: Interpret the result.
    The system has infinitely many solutions because there is a free variable ($c_3$).
    If we let $c_3 = t$, then $c_2 = -2t$ and $c_1 = t$.
    A non-trivial solution exists (e.g., $t=1 \implies c_1=1, c_2=-2, c_3=1$):

    $$1\mathbf{v}_1 - 2\mathbf{v}_2 + 1\mathbf{v}_3 = (1, 2, 3) - (0, 2, 4) + (-1, 0, 1) = (0, 0, 0)$$

    Answer: The vectors are linearly dependent.

    💡 Determinant Test for Linear Independence

    For $n$ vectors in $\mathbb{R}^n$, they are linearly independent if and only if the determinant of the matrix formed by these vectors as columns (or rows) is non-zero. If the determinant is zero, they are linearly dependent. This is a quick test for square matrices.
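The determinant test applies directly to the worked example above. A short numerical sketch (NumPy is our choice of tool; `matrix_rank` is included as a cross-check):

```python
import numpy as np

# The three vectors from the worked example, as matrix columns
v1 = np.array([1.0, 2.0, 3.0])
v2 = np.array([0.0, 1.0, 2.0])
v3 = np.array([-1.0, 0.0, 1.0])
A = np.column_stack([v1, v2, v3])

assert np.isclose(np.linalg.det(A), 0.0)   # det = 0  =>  linearly dependent
assert np.linalg.matrix_rank(A) == 2       # rank < 3 confirms dependence

# The dependence relation found by row reduction: v1 - 2*v2 + v3 = 0
assert np.allclose(v1 - 2 * v2 + v3, np.zeros(3))
```

For floating-point data, the rank (or singular values) is the more robust test, since a determinant can be tiny-but-nonzero through rounding alone.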

    :::question type="MCQ" question="Consider the vectors $\mathbf{u} = (1, 2, 3)$, $\mathbf{v} = (0, 1, 1)$, and $\mathbf{w} = (2, 5, 7)$ in $\mathbb{R}^3$. Which of the following statements is true?" options=["The vectors are linearly independent.","The vectors are linearly dependent, and $\mathbf{w}$ is a linear combination of $\mathbf{u}$ and $\mathbf{v}$.","The vectors are linearly dependent, but $\mathbf{w}$ is not a linear combination of $\mathbf{u}$ and $\mathbf{v}$.","The vectors span $\mathbb{R}^3$ but are not linearly independent."] answer="The vectors are linearly dependent, and $\mathbf{w}$ is a linear combination of $\mathbf{u}$ and $\mathbf{v}$." hint="Form a matrix with the vectors as columns and compute its determinant or perform row reduction. If dependent, find the linear combination." solution="Step 1: Check for linear independence using the determinant.
    Form a matrix $A$ with the vectors as columns:

    $$A = \begin{bmatrix} 1 & 0 & 2 \\ 2 & 1 & 5 \\ 3 & 1 & 7 \end{bmatrix}$$

    Calculate the determinant of $A$:
    $$\begin{aligned}\det(A) & = 1(1 \cdot 7 - 5 \cdot 1) - 0(2 \cdot 7 - 5 \cdot 3) + 2(2 \cdot 1 - 1 \cdot 3) \\ & = 1(7 - 5) - 0 + 2(2 - 3) \\ & = 1(2) + 2(-1) = 2 - 2 = 0\end{aligned}$$

    Since $\det(A) = 0$, the vectors are linearly dependent. This eliminates the first option.

    Step 2: Check if $\mathbf{w}$ is a linear combination of $\mathbf{u}$ and $\mathbf{v}$.
    We need to find scalars $c_1, c_2$ such that $\mathbf{w} = c_1\mathbf{u} + c_2\mathbf{v}$.

    $$(2, 5, 7) = c_1(1, 2, 3) + c_2(0, 1, 1)$$

    This gives the system:
    $$\begin{aligned}c_1 & = 2 \\ 2c_1 + c_2 & = 5 \\ 3c_1 + c_2 & = 7\end{aligned}$$

    From the first equation, $c_1 = 2$.
    Substitute into the second equation:
    $$2(2) + c_2 = 5 \implies c_2 = 1$$

    Check with the third equation:
    $$3(2) + 1 = 6 + 1 = 7$$

    The values $c_1=2$ and $c_2=1$ satisfy all equations, so $\mathbf{w} = 2\mathbf{u} + \mathbf{v}$.

    Step 3: Conclude.
    The vectors are linearly dependent, and $\mathbf{w}$ is a linear combination of $\mathbf{u}$ and $\mathbf{v}$. This matches the second option."
    :::
    :::

    ---

    6. Basis and Dimension

    A basis provides a minimal set of vectors that can represent every vector in a vector space uniquely. The number of vectors in a basis defines the dimension of the space.

    📖 Basis and Dimension

    A set of vectors $B = \{\mathbf{v}_1, \ldots, \mathbf{v}_n\}$ in a vector space $V$ is a basis for $V$ if:

    • $B$ is linearly independent.

    • $B$ spans $V$ (i.e., $\operatorname{span}(B) = V$).

    If a vector space $V$ has a basis consisting of $n$ vectors, then $n$ is called the dimension of $V$, denoted $\operatorname{dim}(V)$. If $V = \{\mathbf{0}\}$, its dimension is 0.

    Quick Example:
    Find a basis for the subspace $W = \{(x, y, z) \in \mathbb{R}^3 : x + y - z = 0\}$.

    Step 1: Express the vectors in $W$ in parametric form.
    From the condition $x + y - z = 0$, we can express $z$ in terms of $x$ and $y$: $z = x + y$.
    Any vector in $W$ can be written as:

    $$(x, y, x+y)$$

    We can decompose this vector:
    $$\begin{aligned}(x, y, x+y) & = (x, 0, x) + (0, y, y) \\ & = x(1, 0, 1) + y(0, 1, 1)\end{aligned}$$

    Step 2: Identify the spanning vectors.
    The vectors $(1, 0, 1)$ and $(0, 1, 1)$ span $W$. Let $B = \{(1, 0, 1), (0, 1, 1)\}$.

    Step 3: Check for linear independence of the spanning vectors.
    Assume $c_1(1, 0, 1) + c_2(0, 1, 1) = (0, 0, 0)$.

    $$(c_1, c_2, c_1+c_2) = (0, 0, 0)$$

    This implies $c_1 = 0$ and $c_2 = 0$.
    Thus, the vectors are linearly independent.

    Answer: A basis for $W$ is $\{(1, 0, 1), (0, 1, 1)\}$, and $\operatorname{dim}(W) = 2$.
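Since $W$ is the null space of the $1 \times 3$ matrix $[1 \; 1 \; {-1}]$, a basis can also be extracted numerically from the SVD: the right singular vectors beyond the rank span the null space. A sketch of this approach (NumPy and the SVD route are our own choices; the resulting basis is orthonormal, so it differs from, but spans the same plane as, the hand-derived one):

```python
import numpy as np

# W = {(x, y, z) : x + y - z = 0} is the null space of this matrix
A = np.array([[1.0, 1.0, -1.0]])

# Full SVD: rows of Vt beyond rank(A) = 1 span the null space
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[1:]                        # shape (2, 3): dim W = 3 - 1 = 2

assert null_basis.shape == (2, 3)
assert np.allclose(A @ null_basis.T, 0.0)  # each basis vector satisfies x + y - z = 0

# The hand-derived basis vectors satisfy the same defining equation
for b in ([1.0, 0.0, 1.0], [0.0, 1.0, 1.0]):
    assert abs((A @ np.array(b))[0]) < 1e-12
```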

    📐 Standard Bases
      • For $\mathbb{R}^n$: The standard basis is $\{\mathbf{e}_1, \ldots, \mathbf{e}_n\}$, where $\mathbf{e}_i$ is the vector with 1 in the $i$-th position and 0 elsewhere. $\operatorname{dim}(\mathbb{R}^n) = n$.
      • For $P_n$ (polynomials of degree at most $n$): The standard basis is $\{1, x, x^2, \ldots, x^n\}$. $\operatorname{dim}(P_n) = n+1$.
      • For $M_{m \times n}(\mathbb{R})$ (the set of $m \times n$ matrices): The basis consists of the $mn$ matrices, each having a single 1 in one position and 0 elsewhere. $\operatorname{dim}(M_{m \times n}(\mathbb{R})) = mn$.

    :::question type="MCQ" question="Consider the vector space $P_2$ of all polynomials of degree at most 2. Which of the following sets is a basis for $P_2$?" options=["$\{1, x, x^2, x^3\}$","$\{x^2, x+1, x^2-1\}$","$\{1, x, x^2\}$","$\{x^2, 2x^2, 3x^2\}$"] answer="$\{1, x, x^2\}$" hint="A basis must be linearly independent and span the space. The dimension of $P_2$ is 3." solution="Step 1: Determine the dimension of $P_2$.
    The standard basis for $P_2$ is $\{1, x, x^2\}$, which has 3 elements, so $\operatorname{dim}(P_2) = 3$. Any basis for $P_2$ must contain exactly 3 linearly independent vectors that span $P_2$.

    Step 2: Evaluate option 1: $\{1, x, x^2, x^3\}$.
    This set has 4 vectors, and $x^3$ is not even in $P_2$. Since $\operatorname{dim}(P_2)=3$, a set of 4 vectors in $P_2$ would in any case be linearly dependent. Thus, this is not a basis.

    Step 3: Evaluate option 2: $\{x^2, x+1, x^2-1\}$.
    This set has 3 vectors. We check for linear independence.
    Assume $c_1x^2 + c_2(x+1) + c_3(x^2-1) = 0$ for all $x$.

    $$(c_1+c_3)x^2 + c_2x + (c_2-c_3) = 0$$

    For this to hold for all $x$, the coefficients must be zero:
    $$\begin{aligned}c_1 + c_3 & = 0 \\ c_2 & = 0 \\ c_2 - c_3 & = 0\end{aligned}$$

    From $c_2=0$ and $c_2-c_3=0$, we get $c_3=0$; then $c_1+c_3=0$ gives $c_1=0$.
    Since $c_1=c_2=c_3=0$ is the only solution, the set is linearly independent.
    A set of $\operatorname{dim}(V)$ linearly independent vectors in $V$ is always a basis, so this set is a basis as well.

    Step 4: Evaluate option 3: $\{1, x, x^2\}$.
    This is the standard basis for $P_2$. It is linearly independent and spans $P_2$, so it is a basis.

    Step 5: Evaluate option 4: $\{x^2, 2x^2, 3x^2\}$.
    This set is linearly dependent because each vector is a scalar multiple of $x^2$. For example, $2(x^2) - (2x^2) + 0(3x^2) = 0$ is a non-trivial linear combination. Thus, it is not a basis."
    :::
    :::

    ---

    💡 Next Up

    Proceeding to Linear Dependence and Independence.

    ---

    Part 2: Linear Dependence and Independence

    We examine the fundamental concepts of linear dependence and independence, which are crucial for understanding vector spaces and their bases. These concepts enable us to determine whether a set of vectors contains redundant information or forms a minimal set sufficient to describe a subspace. Mastery of these ideas is essential for various problems in linear algebra, including those encountered in competitive examinations.

    ---

    Core Concepts

    1. Linear Combination

    A vector $\mathbf{v}$ is a linear combination of vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_k$ if it can be expressed as a sum of scalar multiples of these vectors. That is, $\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_k\mathbf{v}_k$ for some scalars $c_1, c_2, \ldots, c_k$.

    📐 Linear Combination
    $$\mathbf{v} = c_1\mathbf{v}_1 + c_2\mathbf{v}_2 + \ldots + c_k\mathbf{v}_k$$
    Where: $\mathbf{v}, \mathbf{v}_i$ are vectors, and $c_i$ are scalars.

    Quick Example: Determine if $\mathbf{v}=(7, 1, -8)$ is a linear combination of $\mathbf{v}_1=(1, 2, -1)$ and $\mathbf{v}_2=(2, -3, 5)$.

    Step 1: Set up the vector equation.

    $$c_1(1, 2, -1) + c_2(2, -3, 5) = (7, 1, -8)$$

    Step 2: Form the system of linear equations.

    $$\begin{aligned} c_1 + 2c_2 & = 7 \\ 2c_1 - 3c_2 & = 1 \\ -c_1 + 5c_2 & = -8 \end{aligned}$$

    Step 3: Solve the system using an augmented matrix.

    $$\left[\begin{array}{cc|c} 1 & 2 & 7 \\ 2 & -3 & 1 \\ -1 & 5 & -8 \end{array}\right] \xrightarrow{R_2 - 2R_1,\; R_3 + R_1} \left[\begin{array}{cc|c} 1 & 2 & 7 \\ 0 & -7 & -13 \\ 0 & 7 & -1 \end{array}\right] \xrightarrow{R_3 + R_2} \left[\begin{array}{cc|c} 1 & 2 & 7 \\ 0 & -7 & -13 \\ 0 & 0 & -14 \end{array}\right]$$

    Step 4: Interpret the result.

    $$0c_1 + 0c_2 = -14$$

    The last row indicates $0 = -14$, which is a contradiction. Therefore, no such scalars $c_1, c_2$ exist.

    Answer: $\mathbf{v}$ is not a linear combination of $\mathbf{v}_1$ and $\mathbf{v}_2$.
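The inconsistency found by row reduction shows up numerically as a nonzero least-squares residual: the best-fitting coefficients still leave $\mathbf{v}$ some distance from the span. A brief sketch (NumPy's `lstsq` is our verification tool, not the source's method):

```python
import numpy as np

v1 = np.array([1.0, 2.0, -1.0])
v2 = np.array([2.0, -3.0, 5.0])
v  = np.array([7.0, 1.0, -8.0])

A = np.column_stack([v1, v2])   # columns are v1 and v2

# Best least-squares coefficients; a strictly positive residual means
# no exact solution exists, i.e. v is NOT a linear combination of v1, v2.
c, residual, rank, _ = np.linalg.lstsq(A, v, rcond=None)
assert rank == 2
assert residual[0] > 1e-9
```

An exact linear combination would instead give a residual of (numerically) zero.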

    :::question type="MCQ" question="Given vectors u=(1,1,0)\mathbf{u}=(1, 1, 0), v=(0,1,1)\mathbf{v}=(0, 1, 1), and w=(1,0,1)\mathbf{w}=(1, 0, 1), determine the coefficients a,b,ca, b, c such that au+bv+cw=(2,3,1)a\mathbf{u} + b\mathbf{v} + c\mathbf{w} = (2, 3, 1). Which of the following is correct?" options=["a=2,b=1,c=0a=2, b=1, c=0","a=1,b=2,c=1a=1, b=2, c=-1","a=0,b=1,c=2a=0, b=1, c=2","a=1,b=1,c=1a=1, b=1, c=1"] answer="a=2,b=1,c=0a=2, b=1, c=0" hint="Set up a system of linear equations and solve for a,b,ca, b, c." solution="Step 1: Set up the vector equation and corresponding system of equations.

    a(1,1,0)+b(0,1,1)+c(1,0,1)=(2,3,1)a(1, 1, 0) + b(0, 1, 1) + c(1, 0, 1) = (2, 3, 1)

    a+c=2a+b=3b+c=1\begin{aligned} a + c & = 2 \\ a + b & = 3 \\ b + c & = 1 \end{aligned}

    Step 2: Solve the system. From the first equation, c=2ac = 2-a. From the second, b=3ab = 3-a.
    Substitute these into the third equation:

    (3a)+(2a)=1(3-a) + (2-a) = 1

    52a=15 - 2a = 1

    2a=42a = 4

    a=2a = 2

    Step 3: Find bb and cc.

    b=3a=32=1b = 3 - a = 3 - 2 = 1

    c=2a=22=0c = 2 - a = 2 - 2 = 0

    The coefficients are a=2,b=1,c=0a=2, b=1, c=0.

    Answer: \boxed{a=2, b=1, c=0}"
    :::

    ---

    💡 Next Up

    Proceeding to Basis and Dimension.

    ---

    Part 3: Basis and Dimension

    In linear algebra, the concepts of basis and dimension are fundamental for understanding the structure of vector spaces. We define these concepts to precisely characterize the "size" and "orientation" of a vector space, providing a framework for vector representation and analysis. Mastery of these topics is critical for solving problems involving subspaces, linear transformations, and system analysis in various mathematical and applied contexts.

    ---

    Core Concepts

    1. Linear Independence

    A set of vectors is linearly independent if no vector in the set can be expressed as a linear combination of the others. We formally define this property below.

    📖 Linear Independence

    A set of vectors {v1,v2,,vk}\{v_1, v_2, \dots, v_k\} in a vector space VV is linearly independent if the only solution to the vector equation

    c1v1+c2v2++ckvk=0c_1v_1 + c_2v_2 + \dots + c_kv_k = \mathbf{0}

    is c1=c2==ck=0c_1 = c_2 = \dots = c_k = 0.

    If there exists at least one non-zero cic_i satisfying the equation, the set is linearly dependent. For vectors in Rn\mathbb{R}^n, we can check linear independence by forming a matrix with the vectors as columns and evaluating its rank or determinant.

    Quick Example: Determine if the set {(1,2),(3,4)}\{(1, 2), (3, 4)\} is linearly independent in R2\mathbb{R}^2.

    Step 1: Form the linear combination and set it to the zero vector.

    c1(1,2)+c2(3,4)=(0,0)c_1(1, 2) + c_2(3, 4) = (0, 0)

    Step 2: This yields a system of linear equations.

    c1+3c2=02c1+4c2=0\begin{aligned} c_1 + 3c_2 & = 0 \\ 2c_1 + 4c_2 & = 0 \end{aligned}

    Step 3: Form an augmented matrix and reduce it.

    [130240]R22R1[130020]\left[\begin{array}{cc|c} 1 & 3 & 0 \\ 2 & 4 & 0 \end{array}\right] \xrightarrow{R_2 - 2R_1} \left[\begin{array}{cc|c} 1 & 3 & 0 \\ 0 & -2 & 0 \end{array}\right]

    Step 4: Solve the system.

    2c2=0    c2=0-2c_2 = 0 \implies c_2 = 0

    c1+3(0)=0    c1=0c_1 + 3(0) = 0 \implies c_1 = 0

    Answer: Since the only solution is c1=0,c2=0c_1=0, c_2=0, the set is linearly independent.
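
For a pair of vectors in R2\mathbb{R}^2 the determinant test is one line of arithmetic; here is a small sketch (plain Python, no external libraries; the function name `det2` is ours):

```python
def det2(u, v):
    """Determinant of the 2x2 matrix whose columns are u and v."""
    return u[0] * v[1] - u[1] * v[0]

u, v = (1, 2), (3, 4)
d = det2(u, v)
print(d)       # -2
print(d != 0)  # True: {(1,2), (3,4)} is linearly independent
```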

    :::question type="MCQ" question="Consider the set of vectors S={(1,1,0),(2,1,3),(0,1,1)}S = \{(1, -1, 0), (2, 1, 3), (0, 1, 1)\} in R3\mathbb{R}^3. Is SS linearly independent?" options=["Yes, it is linearly independent.","No, it is linearly dependent.","It is linearly independent only if the scalar field is C\mathbb{C}.","The concept of linear independence does not apply to this set."] answer="No, it is linearly dependent." hint="Form a matrix with the vectors as columns and calculate its determinant. If the determinant is zero, the vectors are linearly dependent." solution="Step 1: Form a matrix AA with the given vectors as columns.

    A=[120111031]A = \begin{bmatrix} 1 & 2 & 0 \\ -1 & 1 & 1 \\ 0 & 3 & 1 \end{bmatrix}

    Step 2: Calculate the determinant of AA.

    det(A)=1(1113)2(1110)+0(1310)\det(A) = 1(1 \cdot 1 - 1 \cdot 3) - 2(-1 \cdot 1 - 1 \cdot 0) + 0(-1 \cdot 3 - 1 \cdot 0)

    det(A)=1(13)2(10)+0\det(A) = 1(1 - 3) - 2(-1 - 0) + 0

    det(A)=1(2)2(1)\det(A) = 1(-2) - 2(-1)

    det(A)=2+2\det(A) = -2 + 2

    det(A)=0\det(A) = 0

    Step 3: Interpret the result.
    Since det(A)=0\det(A) = 0, the columns (and thus the vectors in SS) are linearly dependent.
    The correct option is 'No, it is linearly dependent'."
    :::
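
The rank criterion mentioned earlier gives the same verdict as the determinant. A minimal sketch in plain Python (the helper `rank` is ours; `Fraction` keeps the elimination exact):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

S = [(1, -1, 0), (2, 1, 3), (0, 1, 1)]
print(rank(S))  # 2: rank < 3, so S is linearly dependent
```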

    ---

    2. Spanning Sets

    A set of vectors is a spanning set for a vector space if every vector in that space can be written as a linear combination of the vectors in the set. This signifies that the set "generates" the entire space.

    📖 Spanning Set

    A set of vectors {v1,v2,,vk}\{v_1, v_2, \dots, v_k\} in a vector space VV is a spanning set for VV if for every vector vVv \in V, there exist scalars c1,c2,,ckc_1, c_2, \dots, c_k such that

    v=c1v1+c2v2++ckvkv = c_1v_1 + c_2v_2 + \dots + c_kv_k

    We denote the span of the set as span{v1,,vk}\operatorname{span}\{v_1, \dots, v_k\}.

    To check if a set spans Rn\mathbb{R}^n, we can determine if the system Ax=bA\mathbf{x} = \mathbf{b} has a solution for all bRn\mathbf{b} \in \mathbb{R}^n, where AA is the matrix with the vectors as columns. This is true if and only if the rank\operatorname{rank} of AA is nn.

    Quick Example: Determine if the set S={(1,0),(0,1),(1,1)}S = \{(1, 0), (0, 1), (1, 1)\} spans R2\mathbb{R}^2.

    Step 1: Consider an arbitrary vector (x,y)R2(x, y) \in \mathbb{R}^2. We need to find scalars c1,c2,c3c_1, c_2, c_3 such that:

    c1(1,0)+c2(0,1)+c3(1,1)=(x,y)c_1(1, 0) + c_2(0, 1) + c_3(1, 1) = (x, y)

    Step 2: This leads to the system of equations:

    c1+c3=xc2+c3=y\begin{aligned} c_1 + c_3 & = x \\ c_2 + c_3 & = y \end{aligned}

    Step 3: We can express c1=xc3c_1 = x - c_3 and c2=yc3c_2 = y - c_3. Since we can choose any value for c3c_3 (e.g., c3=0c_3=0), we can always find c1c_1 and c2c_2 to represent any (x,y)(x, y).

    Answer: Yes, the set SS spans R2\mathbb{R}^2. For instance, (x,y)=x(1,0)+y(0,1)(x,y) = x(1,0) + y(0,1). The inclusion of (1,1)(1,1) is redundant for spanning R2\mathbb{R}^2, but it does not prevent the set from spanning.
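
The explicit choice c1=x,c2=y,c3=0c_1 = x, c_2 = y, c_3 = 0 from Step 3 can be exercised over a grid of test points; a quick sketch in plain Python (the function name `represent` is ours):

```python
# S = {(1,0), (0,1), (1,1)}; pick c1 = x, c2 = y, c3 = 0 as in Step 3
def represent(x, y):
    """Check that c1*(1,0) + c2*(0,1) + c3*(1,1) reproduces (x, y)."""
    c1, c2, c3 = x, y, 0
    v = (c1 * 1 + c2 * 0 + c3 * 1,
         c1 * 0 + c2 * 1 + c3 * 1)
    return v == (x, y)

print(all(represent(x, y) for x in range(-3, 4) for y in range(-3, 4)))  # True
```

Passing a finite grid is of course not a proof, but it mirrors the algebraic argument: the first two vectors of SS alone already reach every (x,y)(x, y).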

    :::question type="MCQ" question="Which of the following sets of vectors spans R3\mathbb{R}^3?" options=["S1={(1,0,0),(0,1,0)}S_1 = \{(1, 0, 0), (0, 1, 0)\}","S2={(1,1,1),(2,2,2)}S_2 = \{(1, 1, 1), (2, 2, 2)\}","S3={(1,0,0),(0,1,0),(0,0,1)}S_3 = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}","S4={(1,0,0),(0,1,0),(1,1,0)}S_4 = \{(1, 0, 0), (0, 1, 0), (1, 1, 0)\}"] answer="S3={(1,0,0),(0,1,0),(0,0,1)}S_3 = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}" hint="A set of vectors spans Rn\mathbb{R}^n if it contains at least nn linearly independent vectors. For R3\mathbb{R}^3, we need at least 3 linearly independent vectors." solution="Step 1: Analyze S1={(1,0,0),(0,1,0)}S_1 = \{(1, 0, 0), (0, 1, 0)\}. This set has only two vectors. It can only span a 2-dimensional subspace of R3\mathbb{R}^3 (the xyxy-plane). Thus, it does not span R3\mathbb{R}^3.

    Step 2: Analyze S2={(1,1,1),(2,2,2)}S_2 = \{(1, 1, 1), (2, 2, 2)\}. These two vectors are linearly dependent (2(1,1,1)=(2,2,2)2(1,1,1) = (2,2,2)). They span only a 1-dimensional subspace (a line through the origin). Thus, it does not span R3\mathbb{R}^3.

    Step 3: Analyze S3={(1,0,0),(0,1,0),(0,0,1)}S_3 = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}. This is the standard basis for R3\mathbb{R}^3. These three vectors are linearly independent, and any vector (x,y,z)(x, y, z) in R3\mathbb{R}^3 can be written as x(1,0,0)+y(0,1,0)+z(0,0,1)x(1,0,0) + y(0,1,0) + z(0,0,1). This set spans R3\mathbb{R}^3.

    Step 4: Analyze S4={(1,0,0),(0,1,0),(1,1,0)}S_4 = \{(1, 0, 0), (0, 1, 0), (1, 1, 0)\}. The third vector (1,1,0)(1,1,0) is a linear combination of the first two: (1,1,0)=1(1,0,0)+1(0,1,0)(1,1,0) = 1(1,0,0) + 1(0,1,0). This set is linearly dependent. It spans the xyxy-plane, which is a 2-dimensional subspace of R3\mathbb{R}^3. Thus, it does not span R3\mathbb{R}^3.

    The correct option is 'S3={(1,0,0),(0,1,0),(0,0,1)}S_3 = \{(1, 0, 0), (0, 1, 0), (0, 0, 1)\}'."
    :::

    ---

    3. Basis of a Vector Space

    A basis is a fundamental concept that combines both linear independence and spanning properties. It provides a minimal set of vectors that can generate the entire vector space.

    📖 Basis of a Vector Space

    A set of vectors B={v1,v2,,vn}\mathcal{B} = \{v_1, v_2, \dots, v_n\} in a vector space VV is a basis for VV if:

    • B\mathcal{B} is linearly independent.

    • B\mathcal{B} spans VV.

    Every vector space has at least one basis. The standard basis for Rn\mathbb{R}^n consists of the vectors e1=(1,0,,0)e_1 = (1, 0, \dots, 0), e2=(0,1,,0)e_2 = (0, 1, \dots, 0), ..., en=(0,0,,1)e_n = (0, 0, \dots, 1).

    Quick Example: Show that the set B={(1,1),(1,1)}B = \{(1, 1), (1, -1)\} is a basis for R2\mathbb{R}^2.

    Step 1: Check for linear independence. Form the matrix with vectors as columns and find its determinant.

    A=[1111]A = \begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}

    det(A)=(1)(1)(1)(1)=11=2\det(A) = (1)(-1) - (1)(1) = -1 - 1 = -2

    Step 2: Since det(A)0\det(A) \ne 0, the vectors are linearly independent.

    Step 3: Since we have 2 linearly independent vectors in R2\mathbb{R}^2, they automatically span R2\mathbb{R}^2. (A set of nn linearly independent vectors in an nn-dimensional space forms a basis).

    Answer: The set B={(1,1),(1,1)}B = \{(1, 1), (1, -1)\} is a basis for R2\mathbb{R}^2.

    :::question type="MCQ" question="Which of the following sets of vectors forms a basis for R3\mathbb{R}^3?" options=["S1={(1,0,0),(0,1,0)}S_1 = \{(1, 0, 0), (0, 1, 0)\}","S2={(1,1,1),(1,2,3),(2,1,1)}S_2 = \{(1, 1, 1), (1, 2, 3), (2, -1, 1)\}","S3={(1,2,3),(1,3,5),(1,0,1),(2,3,0)}S_3 = \{(1, 2, 3), (1, 3, 5), (1, 0, 1), (2, 3, 0)\}","S4={(1,1,2),(1,2,5),(5,3,4)}S_4 = \{(1, 1, 2), (1, 2, 5), (5, 3, 4)\}"] answer="S2={(1,1,1),(1,2,3),(2,1,1)}S_2 = \{(1, 1, 1), (1, 2, 3), (2, -1, 1)\}" hint="A basis for R3\mathbb{R}^3 must contain exactly three linearly independent vectors. Sets with fewer than three vectors cannot span R3\mathbb{R}^3, and sets with more than three vectors must be linearly dependent." solution="Step 1: Analyze each option based on the number of vectors and potential linear independence.
    * S1={(1,0,0),(0,1,0)}S_1 = \{(1, 0, 0), (0, 1, 0)\}: Contains only two vectors. Cannot span R3\mathbb{R}^3. Not a basis.
    * S3={(1,2,3),(1,3,5),(1,0,1),(2,3,0)}S_3 = \{(1, 2, 3), (1, 3, 5), (1, 0, 1), (2, 3, 0)\}: Contains four vectors. In R3\mathbb{R}^3, any set of more than three vectors must be linearly dependent. Not a basis.
    * S4={(1,1,2),(1,2,5),(5,3,4)}S_4 = \{(1, 1, 2), (1, 2, 5), (5, 3, 4)\}: Contains three vectors. We must check for linear independence.

    Step 2: Check linear independence for S2={(1,1,1),(1,2,3),(2,1,1)}S_2 = \{(1, 1, 1), (1, 2, 3), (2, -1, 1)\}. Form a matrix AA and compute its determinant.

    A=[112121131]A = \begin{bmatrix} 1 & 1 & 2 \\ 1 & 2 & -1 \\ 1 & 3 & 1 \end{bmatrix}

    det(A)=1(21(1)3)1(11(1)1)+2(1321)\det(A) = 1(2 \cdot 1 - (-1) \cdot 3) - 1(1 \cdot 1 - (-1) \cdot 1) + 2(1 \cdot 3 - 2 \cdot 1)

    det(A)=1(2+3)1(1+1)+2(32)\det(A) = 1(2 + 3) - 1(1 + 1) + 2(3 - 2)

    det(A)=1(5)1(2)+2(1)\det(A) = 1(5) - 1(2) + 2(1)

    det(A)=52+2=5\det(A) = 5 - 2 + 2 = 5

    Step 3: Since det(A)=50\det(A) = 5 \ne 0, the vectors in S2S_2 are linearly independent. As there are three linearly independent vectors in R3\mathbb{R}^3, S2S_2 forms a basis for R3\mathbb{R}^3.

    Step 4: Check linear independence for S4={(1,1,2),(1,2,5),(5,3,4)}S_4 = \{(1, 1, 2), (1, 2, 5), (5, 3, 4)\}. Form a matrix BB and compute its determinant.

    B=[115123254]B = \begin{bmatrix} 1 & 1 & 5 \\ 1 & 2 & 3 \\ 2 & 5 & 4 \end{bmatrix}

    det(B)=1(2435)1(1432)+5(1522)\det(B) = 1(2 \cdot 4 - 3 \cdot 5) - 1(1 \cdot 4 - 3 \cdot 2) + 5(1 \cdot 5 - 2 \cdot 2)

    det(B)=1(815)1(46)+5(54)\det(B) = 1(8 - 15) - 1(4 - 6) + 5(5 - 4)

    det(B)=1(7)1(2)+5(1)\det(B) = 1(-7) - 1(-2) + 5(1)

    det(B)=7+2+5=0\det(B) = -7 + 2 + 5 = 0

    Since det(B)=0\det(B) = 0, the vectors in S4S_4 are linearly dependent. Not a basis.

    The correct option is 'S2={(1,1,1),(1,2,3),(2,1,1)}S_2 = \{(1, 1, 1), (1, 2, 3), (2, -1, 1)\}'."
    :::
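
The two determinant computations in the solution above are easy to cross-check; a small sketch in plain Python (the function name `det3` is ours, using cofactor expansion along the first row):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# Vectors as columns: S2 = {(1,1,1), (1,2,3), (2,-1,1)}
A = [[1, 1, 2],
     [1, 2, -1],
     [1, 3, 1]]
# Vectors as columns: S4 = {(1,1,2), (1,2,5), (5,3,4)}
B = [[1, 1, 5],
     [1, 2, 3],
     [2, 5, 4]]
print(det3(A))  # 5  -> S2 is linearly independent, hence a basis of R^3
print(det3(B))  # 0  -> S4 is linearly dependent, hence not a basis
```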

    ---

    4. Dimension of a Vector Space

    The dimension of a vector space is a unique scalar value that quantifies its "size." It is defined as the number of vectors in any basis for that space.

    📖 Dimension of a Vector Space

    The dimension of a vector space VV, denoted dim(V)\operatorname{dim}(V), is the number of vectors in any basis for VV. If V={0}V = \{\mathbf{0}\}, then dim(V)=0\operatorname{dim}(V) = 0. If VV cannot be spanned by a finite set of vectors, it is called infinite-dimensional.

    The dimension depends critically on the field of scalars. For instance, C\mathbb{C} is a 1-dimensional vector space over C\mathbb{C} (basis {1}\{1\}), but a 2-dimensional vector space over R\mathbb{R} (basis {1,i}\{1, i\}).

    Quick Example: What is the dimension of the vector space P2P_2, the space of all polynomials of degree at most 2, over R\mathbb{R}?

    Step 1: Identify a basis for P2P_2. Any polynomial p(x)=ax2+bx+cp(x) = ax^2 + bx + c can be written as a linear combination of 1,x,x21, x, x^2.

    p(x)=c1+bx+ax2p(x) = c \cdot 1 + b \cdot x + a \cdot x^2

    Step 2: Verify linear independence of {1,x,x2}\{1, x, x^2\}. If c11+c2x+c3x2=0c_1 \cdot 1 + c_2 \cdot x + c_3 \cdot x^2 = 0 for all xx, then c1=c2=c3=0c_1=c_2=c_3=0.

    Step 3: Count the number of vectors in the basis. The basis {1,x,x2}\{1, x, x^2\} contains 3 vectors.

    Answer: dim(P2)=3\operatorname{dim}(P_2) = 3.

    :::question type="MCQ" question="Match List I with List II for the dimensions of the given vector spaces over the specified fields.

    List IVector SpacesList IIDimensionsA.R over RI.4B.C over RII.3C.R3 over RIII.2D.C2 over RIV.1\begin{array}{|c|l|c|l|} \hline \textbf{List I} & \textbf{Vector Spaces} & \textbf{List II} & \textbf{Dimensions}\\ \hline A. & \mathbb{R} \text{ over } \mathbb{R} & I. & 4\\ \hline B. & \mathbb{C} \text{ over } \mathbb{R} & II. & 3\\ \hline C. & \mathbb{R}^3 \text{ over } \mathbb{R} & III. & 2\\ \hline D. & \mathbb{C}^2 \text{ over } \mathbb{R} & IV. & 1\\ \hline\end{array}
    " options=["A-IV, B-III, C-II, D-I","A-I, B-IV, C-II, D-III","A-II, B-I, C-III, D-IV","A-I, B-II, C-III, D-IV"] answer="A-IV, B-III, C-II, D-I" hint="The dimension of a vector space depends on the field of scalars. A basis for C\mathbb{C} over R\mathbb{R} is {1,i}\{1, i\}. A basis for C2\mathbb{C}^2 over R\mathbb{R} would require vectors that are combinations of complex numbers with real coefficients." solution="Step 1: Determine the dimension of R\mathbb{R} over R\mathbb{R}. A basis for R\mathbb{R} over R\mathbb{R} is {1}\{1\}. There is 1 vector. So, dim(R over R)=1\operatorname{dim}(\mathbb{R} \text{ over } \mathbb{R}) = 1. (A-IV)

    Step 2: Determine the dimension of C\mathbb{C} over R\mathbb{R}.
    A basis for C\mathbb{C} over R\mathbb{R} is {1,i}\{1, i\}. Any z=a+biCz = a + bi \in \mathbb{C} can be written as a1+bia \cdot 1 + b \cdot i where a,bRa, b \in \mathbb{R}. There are 2 vectors. So, dim(C over R)=2\operatorname{dim}(\mathbb{C} \text{ over } \mathbb{R}) = 2. (B-III)

    Step 3: Determine the dimension of R3\mathbb{R}^3 over R\mathbb{R}.
    A standard basis for R3\mathbb{R}^3 over R\mathbb{R} is {(1,0,0),(0,1,0),(0,0,1)}\{(1,0,0), (0,1,0), (0,0,1)\}. There are 3 vectors. So, dim(R3 over R)=3\operatorname{dim}(\mathbb{R}^3 \text{ over } \mathbb{R}) = 3. (C-II)

    Step 4: Determine the dimension of C2\mathbb{C}^2 over R\mathbb{R}.
    Any vector in C2\mathbb{C}^2 is of the form (z1,z2)(z_1, z_2) where z1,z2Cz_1, z_2 \in \mathbb{C}. We can write z1=a1+b1iz_1 = a_1 + b_1i and z2=a2+b2iz_2 = a_2 + b_2i for a1,b1,a2,b2Ra_1, b_1, a_2, b_2 \in \mathbb{R}.
    So,

    (z1,z2)=(a1+b1i,a2+b2i)=a1(1,0)+b1(i,0)+a2(0,1)+b2(0,i)(z_1, z_2) = (a_1+b_1i, a_2+b_2i) = a_1(1,0) + b_1(i,0) + a_2(0,1) + b_2(0,i)

    A basis for C2\mathbb{C}^2 over R\mathbb{R} is {(1,0),(i,0),(0,1),(0,i)}\{(1,0), (i,0), (0,1), (0,i)\}. There are 4 vectors. So, dim(C2 over R)=4\operatorname{dim}(\mathbb{C}^2 \text{ over } \mathbb{R}) = 4. (D-I)

    Step 5: Match the lists.
    A-IV, B-III, C-II, D-I.
    The correct option is 'A-IV, B-III, C-II, D-I'."
    :::
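
The basis of C2\mathbb{C}^2 over R\mathbb{R} found in Step 4 can be made concrete by "realifying" complex vectors into R4\mathbb{R}^4; a sketch in plain Python (the helper name `realify` is ours):

```python
def realify(z1, z2):
    """View (z1, z2) in C^2 as (Re z1, Im z1, Re z2, Im z2) in R^4."""
    return [z1.real, z1.imag, z2.real, z2.imag]

# The basis {(1,0), (i,0), (0,1), (0,i)} from Step 4
basis = [(1 + 0j, 0j), (1j, 0j), (0j, 1 + 0j), (0j, 1j)]
images = [realify(z1, z2) for z1, z2 in basis]
print(images)  # the four standard basis vectors of R^4, so dim over R is 4
```

Under this identification the four basis vectors map exactly onto the standard basis of R4\mathbb{R}^4, confirming the dimension count.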

    ---

    5. Dimension of Subspaces

    A subspace WW of a vector space VV is itself a vector space. Its dimension is found by determining a basis for WW.

    📖 Dimension of a Subspace

    The dimension of a subspace WW of a vector space VV, denoted dim(W)\operatorname{dim}(W), is the number of vectors in any basis for WW. We always have dim(W)dim(V)\operatorname{dim}(W) \le \operatorname{dim}(V).

    To find the dimension of a subspace defined by linear equations, we typically solve the system of equations to express some variables in terms of others, thereby identifying free variables that correspond to basis vectors.

    Quick Example: Let W={(x,y,z)R3:xy+2z=0}W = \{(x, y, z) \in \mathbb{R}^3 : x - y + 2z = 0\}. Find dim(W)\operatorname{dim}(W).

    Step 1: Express one variable in terms of the others using the given condition.

    xy+2z=0    x=y2zx - y + 2z = 0 \implies x = y - 2z

    Step 2: Write an arbitrary vector in WW using the expression from Step 1.

    (x,y,z)=(y2z,y,z)(x, y, z) = (y - 2z, y, z)

    Step 3: Decompose the vector into a linear combination involving the free variables (yy and zz).

    (y2z,y,z)=(y,y,0)+(2z,0,z)(y - 2z, y, z) = (y, y, 0) + (-2z, 0, z)

    (y2z,y,z)=y(1,1,0)+z(2,0,1)(y - 2z, y, z) = y(1, 1, 0) + z(-2, 0, 1)

    Step 4: The vectors {(1,1,0),(2,0,1)}\{(1, 1, 0), (-2, 0, 1)\} form a basis for WW. These vectors are linearly independent and span WW.

    Step 5: Count the number of vectors in the basis. There are 2 vectors.

    Answer: dim(W)=2\operatorname{dim}(W) = 2.
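
The claimed basis of WW can be checked directly: both vectors must satisfy the defining equation, and neither may be a multiple of the other. A minimal sketch in plain Python (the function name `in_W` is ours):

```python
def in_W(v):
    """Membership test for W = {(x, y, z) : x - y + 2z = 0}."""
    x, y, z = v
    return x - y + 2 * z == 0

b1, b2 = (1, 1, 0), (-2, 0, 1)
print(in_W(b1), in_W(b2))  # True True: both basis vectors lie in W
# Independence: the 2x2 minor from the last two coordinates is nonzero,
# so neither vector is a scalar multiple of the other
print(b1[1] * b2[2] - b1[2] * b2[1])  # 1, nonzero -> independent, so dim(W) = 2
```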

    :::question type="MCQ" question="If WW is a subspace of R3\mathbb{R}^3, where W={(a,b,c):a+b+c=0}W = \{(a, b, c): a + b + c = 0\}, then dim(W)\operatorname{dim}(W) is equal to:" options=["22","33","11","00"] answer="22" hint="The equation a+b+c=0a+b+c=0 imposes one linear constraint on the vectors in R3\mathbb{R}^3. This reduces the number of independent variables by one." solution="Step 1: The subspace WW is defined by the equation a+b+c=0a+b+c=0. We can express one variable in terms of the others, for example, a=bca = -b - c.

    Step 2: An arbitrary vector (a,b,c)(a, b, c) in WW can be written as:

    (a,b,c)=(bc,b,c)(a, b, c) = (-b - c, b, c)

    Step 3: Decompose this vector based on the free variables bb and cc:

    (bc,b,c)=(b,b,0)+(c,0,c)(-b - c, b, c) = (-b, b, 0) + (-c, 0, c)

    (bc,b,c)=b(1,1,0)+c(1,0,1)(-b - c, b, c) = b(-1, 1, 0) + c(-1, 0, 1)

    Step 4: The set of vectors {(1,1,0),(1,0,1)}\{(-1, 1, 0), (-1, 0, 1)\} forms a basis for WW. These two vectors are linearly independent (one is not a scalar multiple of the other) and span WW.

    Step 5: The number of vectors in the basis is 2. Therefore, dim(W)=2\operatorname{dim}(W) = 2.
    The correct option is '22'."
    :::

    ---

    6. Coordinate Vectors

    Once a basis is established for a vector space, any vector in that space can be uniquely expressed as a linear combination of the basis vectors. The coefficients in this linear combination form the coordinate vector of the vector relative to that basis.

    📖 Coordinate Vector

    Let B={v1,v2,,vn}\mathcal{B} = \{v_1, v_2, \dots, v_n\} be an ordered basis for a vector space VV. For any vector vVv \in V, there exist unique scalars c1,c2,,cnc_1, c_2, \dots, c_n such that v=c1v1+c2v2++cnvnv = c_1v_1 + c_2v_2 + \dots + c_nv_n. The coordinate vector of vv relative to B\mathcal{B} is denoted by [v]B[v]_{\mathcal{B}} and is given by:

    [v]B=[c1c2cn][v]_{\mathcal{B}} = \begin{bmatrix} c_1 \\ c_2 \\ \vdots \\ c_n \end{bmatrix}

    Quick Example: Let B={(1,1),(1,1)}\mathcal{B} = \{(1, 1), (1, -1)\} be an ordered basis for R2\mathbb{R}^2. Find the coordinate vector of v=(5,1)v = (5, 1) relative to B\mathcal{B}.

    Step 1: Set up the linear combination.

    c1(1,1)+c2(1,1)=(5,1)c_1(1, 1) + c_2(1, -1) = (5, 1)

    Step 2: Form the system of equations.

    c1+c2=5c1c2=1\begin{aligned} c_1 + c_2 & = 5 \\ c_1 - c_2 & = 1 \end{aligned}

    Step 3: Solve the system. Adding the two equations yields 2c1=6    c1=32c_1 = 6 \implies c_1 = 3. Substituting c1=3c_1=3 into the first equation gives 3+c2=5    c2=23 + c_2 = 5 \implies c_2 = 2.

    Answer: The coordinate vector is [v]B=[32][v]_{\mathcal{B}} = \begin{bmatrix} 3 \\ 2 \end{bmatrix}.
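
For a 2x2 system like the one in Step 2, Cramer's rule gives the coordinates in closed form; a sketch in plain Python (the helper name `coords_2d` is ours, with `Fraction` for exact division):

```python
from fractions import Fraction

def coords_2d(b1, b2, v):
    """Solve c1*b1 + c2*b2 = v by Cramer's rule (b1, b2 assumed to be a basis)."""
    det = Fraction(b1[0] * b2[1] - b1[1] * b2[0])
    c1 = Fraction(v[0] * b2[1] - v[1] * b2[0]) / det
    c2 = Fraction(b1[0] * v[1] - b1[1] * v[0]) / det
    return c1, c2

print(coords_2d((1, 1), (1, -1), (5, 1)))  # c1 = 3, c2 = 2, as computed above
```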

    :::question type="MCQ" question="Let B={(1,2,0),(0,1,1),(1,0,1)}\mathcal{B} = \{(1, 2, 0), (0, 1, 1), (1, 0, -1)\} be an ordered basis for R3\mathbb{R}^3. Find the coordinate vector of v=(2,4,1)v = (2, 4, 1) relative to B\mathcal{B}." options=["[121]\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}","[211]\begin{bmatrix} 2 \\ 1 \\ 1 \end{bmatrix}","[112]\begin{bmatrix} 1 \\ 1 \\ 2 \end{bmatrix}","[221]\begin{bmatrix} 2 \\ 2 \\ 1 \end{bmatrix}"] answer="[121]\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}" hint="Set up a system of linear equations c1v1+c2v2+c3v3=vc_1 v_1 + c_2 v_2 + c_3 v_3 = v and solve for c1,c2,c3c_1, c_2, c_3. This can be done using an augmented matrix." solution="Step 1: We need to find scalars c1,c2,c3c_1, c_2, c_3 such that:

    c1(1,2,0)+c2(0,1,1)+c3(1,0,1)=(2,4,1)c_1(1, 2, 0) + c_2(0, 1, 1) + c_3(1, 0, -1) = (2, 4, 1)

    Step 2: This forms the following system of linear equations:

    c1+c3=22c1+c2=4c2c3=1\begin{aligned} c_1 + c_3 & = 2 \\ 2c_1 + c_2 & = 4 \\ c_2 - c_3 & = 1 \end{aligned}

    Step 3: Form the augmented matrix and reduce it.

    [101221040111]\left[\begin{array}{ccc|c} 1 & 0 & 1 & 2 \\ 2 & 1 & 0 & 4 \\ 0 & 1 & -1 & 1 \end{array}\right]

    R2R22R1R_2 \gets R_2 - 2R_1

    [101201200111]\left[\begin{array}{ccc|c} 1 & 0 & 1 & 2 \\ 0 & 1 & -2 & 0 \\ 0 & 1 & -1 & 1 \end{array}\right]

    R3R3R2R_3 \gets R_3 - R_2

    [101201200011]\left[\begin{array}{ccc|c} 1 & 0 & 1 & 2 \\ 0 & 1 & -2 & 0 \\ 0 & 0 & 1 & 1 \end{array}\right]

    Step 4: Solve the system using back substitution.

    c3=1c_3 = 1

    c22c3=0    c22(1)=0    c2=2c_2 - 2c_3 = 0 \implies c_2 - 2(1) = 0 \implies c_2 = 2

    c1+c3=2    c1+1=2    c1=1c_1 + c_3 = 2 \implies c_1 + 1 = 2 \implies c_1 = 1

    Answer: The coordinate vector is [v]B=[121][v]_{\mathcal{B}} = \begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}.
    The correct option is '[121]\begin{bmatrix} 1 \\ 2 \\ 1 \end{bmatrix}'."
    :::

    ---

    Advanced Applications

    1. Grassmann's Formula (Dimension of Sum and Intersection)

    Grassmann's Formula provides a critical relationship between the dimensions of the sum and intersection of two subspaces. This formula is frequently tested in competitive examinations.

    📐 Grassmann's Formula

    Let UU and WW be finite-dimensional subspaces of a vector space VV. Then U+WU+W and UWU \cap W are also finite-dimensional subspaces, and their dimensions are related by:

    dim(U+W)=dim(U)+dim(W)dim(UW)\operatorname{dim}(U+W) = \operatorname{dim}(U) + \operatorname{dim}(W) - \operatorname{dim}(U \cap W)

    Where:
    U+W={u+w:uU,wW}U+W = \{u+w : u \in U, w \in W\} is the sum of subspaces.
    UWU \cap W is the intersection of subspaces.
    When to use: To find the dimension of the sum or intersection of subspaces when other dimensions are known.

    We also observe that dim(UW)min(dim(U),dim(W))\operatorname{dim}(U \cap W) \le \operatorname{min}(\operatorname{dim}(U), \operatorname{dim}(W)) and max(dim(U),dim(W))dim(U+W)dim(V)\operatorname{max}(\operatorname{dim}(U), \operatorname{dim}(W)) \le \operatorname{dim}(U+W) \le \operatorname{dim}(V).

    Quick Example: Let UU and WW be distinct 4-dimensional subspaces of a vector space VV of dimension 6. Find the possible dimensions of UWU \cap W.

    Step 1: We are given dim(V)=6\operatorname{dim}(V) = 6, dim(U)=4\operatorname{dim}(U) = 4, dim(W)=4\operatorname{dim}(W) = 4.
    Apply Grassmann's Formula: dim(U+W)=dim(U)+dim(W)dim(UW)\operatorname{dim}(U+W) = \operatorname{dim}(U) + \operatorname{dim}(W) - \operatorname{dim}(U \cap W).

    dim(U+W)=4+4dim(UW)\operatorname{dim}(U+W) = 4 + 4 - \operatorname{dim}(U \cap W)

    dim(U+W)=8dim(UW)\operatorname{dim}(U+W) = 8 - \operatorname{dim}(U \cap W)

    Step 2: Consider the bounds for dim(U+W)\operatorname{dim}(U+W).
    Since U+WU+W is a subspace of VV, we have dim(U+W)dim(V)=6\operatorname{dim}(U+W) \le \operatorname{dim}(V) = 6.
    Also, UU+WU \subseteq U+W, so dim(U+W)dim(U)=4\operatorname{dim}(U+W) \ge \operatorname{dim}(U) = 4. Similarly, dim(U+W)dim(W)=4\operatorname{dim}(U+W) \ge \operatorname{dim}(W) = 4.
    Thus, 4dim(U+W)64 \le \operatorname{dim}(U+W) \le 6.

    Step 3: Use the bounds of dim(U+W)\operatorname{dim}(U+W) to find the bounds for dim(UW)\operatorname{dim}(U \cap W).
    From dim(U+W)=8dim(UW)\operatorname{dim}(U+W) = 8 - \operatorname{dim}(U \cap W), we have dim(UW)=8dim(U+W)\operatorname{dim}(U \cap W) = 8 - \operatorname{dim}(U+W).
    Maximum value for dim(UW)\operatorname{dim}(U \cap W): Occurs when dim(U+W)\operatorname{dim}(U+W) is minimum.
    If dim(U+W)=4\operatorname{dim}(U+W) = 4, then dim(UW)=84=4\operatorname{dim}(U \cap W) = 8 - 4 = 4.
    Minimum value for dim(UW)\operatorname{dim}(U \cap W): Occurs when dim(U+W)\operatorname{dim}(U+W) is maximum.
    If dim(U+W)=6\operatorname{dim}(U+W) = 6, then dim(UW)=86=2\operatorname{dim}(U \cap W) = 8 - 6 = 2.

    Step 4: The bounds so far allow dim(UW)\operatorname{dim}(U \cap W) to be 2,3,42, 3, 4.
    However, the question specifies UU and WW are distinct. If dim(UW)=4\operatorname{dim}(U \cap W) = 4, then since UWUU \cap W \subseteq U and dim(UW)=dim(U)\operatorname{dim}(U \cap W) = \operatorname{dim}(U), it follows that UW=UU \cap W = U; similarly UW=WU \cap W = W, forcing U=WU=W and contradicting the "distinct" condition. Hence dim(UW)4\operatorname{dim}(U \cap W) \ne 4, which by Grassmann's formula also rules out dim(U+W)=4\operatorname{dim}(U+W) = 4.
    Therefore, 5dim(U+W)65 \le \operatorname{dim}(U+W) \le 6.

    Step 5: Recalculate dim(UW)\operatorname{dim}(U \cap W) based on 5dim(U+W)65 \le \operatorname{dim}(U+W) \le 6.
    If dim(U+W)=5\operatorname{dim}(U+W) = 5, then dim(UW)=85=3\operatorname{dim}(U \cap W) = 8 - 5 = 3.
    If dim(U+W)=6\operatorname{dim}(U+W) = 6, then dim(UW)=86=2\operatorname{dim}(U \cap W) = 8 - 6 = 2.

    Answer: The possible dimensions of UWU \cap W are 22 or 33.
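
Both cases can be realized with concrete coordinate subspaces of R6\mathbb{R}^6; a sketch in plain Python (the helpers `rank` and `e` are ours, with `Fraction` for exact elimination):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def e(i, n=6):
    """Standard basis vector e_i of R^n."""
    return [1 if j == i else 0 for j in range(n)]

U = [e(0), e(1), e(2), e(3)]    # a 4-dimensional subspace of R^6
W1 = [e(0), e(1), e(2), e(4)]   # distinct from U, large overlap
W2 = [e(0), e(1), e(4), e(5)]   # distinct from U, small overlap

for W in (W1, W2):
    dim_sum = rank(U + W)                  # dim(U+W): rank of the stacked spanning sets
    dim_cap = rank(U) + rank(W) - dim_sum  # Grassmann's formula
    print(dim_sum, dim_cap)                # 5 3, then 6 2
```

The two cases produce exactly the intersection dimensions 33 and 22 derived above.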

    :::question type="MCQ" question="Let UU and WW be distinct 4-dimensional subspaces of a vector space VV of dimension 6. Then the possible dimensions of UWU \cap W are:" options=["11 or 22","exactly 44","33 or 44","22 or 33"] answer="22 or 33" hint="Use Grassmann's formula: dim(U+W)=dim(U)+dim(W)dim(UW)\operatorname{dim}(U+W) = \operatorname{dim}(U) + \operatorname{dim}(W) - \operatorname{dim}(U \cap W). Also, consider the bounds for dim(U+W)\operatorname{dim}(U+W) and dim(UW)\operatorname{dim}(U \cap W), specifically that UWU \ne W implies dim(U+W)>max(dim(U),dim(W))\operatorname{dim}(U+W) > \operatorname{max}(\operatorname{dim}(U), \operatorname{dim}(W))." solution="Step 1: We are given:
    dim(V)=6\operatorname{dim}(V) = 6
    dim(U)=4\operatorname{dim}(U) = 4
    dim(W)=4\operatorname{dim}(W) = 4
    UU and WW are distinct subspaces.

    Step 2: Apply Grassmann's Formula:

    dim(U+W)=dim(U)+dim(W)dim(UW)\operatorname{dim}(U+W) = \operatorname{dim}(U) + \operatorname{dim}(W) - \operatorname{dim}(U \cap W)

    dim(U+W)=4+4dim(UW)\operatorname{dim}(U+W) = 4 + 4 - \operatorname{dim}(U \cap W)

    dim(U+W)=8dim(UW)\operatorname{dim}(U+W) = 8 - \operatorname{dim}(U \cap W)

    Step 3: Determine the bounds for dim(U+W)\operatorname{dim}(U+W).
    Since U+WU+W is a subspace of VV, its dimension cannot exceed dim(V)\operatorname{dim}(V):

    dim(U+W)dim(V)=6\operatorname{dim}(U+W) \le \operatorname{dim}(V) = 6

    Also, UU and WW are subspaces of U+WU+W, so:
    dim(U+W)dim(U)=4\operatorname{dim}(U+W) \ge \operatorname{dim}(U) = 4

    dim(U+W)dim(W)=4\operatorname{dim}(U+W) \ge \operatorname{dim}(W) = 4

    Combining these, we have 4dim(U+W)64 \le \operatorname{dim}(U+W) \le 6.

    Step 4: Account for the condition that UU and WW are distinct.
    If dim(U+W)=4\operatorname{dim}(U+W) = 4, then from Grassmann's formula, dim(UW)=84=4\operatorname{dim}(U \cap W) = 8 - 4 = 4.
    If dim(UW)=4\operatorname{dim}(U \cap W) = 4, then since UWUU \cap W \subseteq U and dim(UW)=dim(U)\operatorname{dim}(U \cap W) = \operatorname{dim}(U), it implies UW=UU \cap W = U. Similarly, UW=WU \cap W = W. This means U=WU=W.
    However, the problem states UU and WW are distinct. Therefore, dim(UW)\operatorname{dim}(U \cap W) cannot be 4. This implies dim(U+W)\operatorname{dim}(U+W) cannot be 4.
    Thus, dim(U+W)\operatorname{dim}(U+W) must be strictly greater than max(dim(U),dim(W))\operatorname{max}(\operatorname{dim}(U), \operatorname{dim}(W)).

    4<dim(U+W)64 < \operatorname{dim}(U+W) \le 6

    This means dim(U+W)\operatorname{dim}(U+W) can be 55 or 66.

    Step 5: Calculate the possible dimensions of UWU \cap W using these values.
    Case 1: If dim(U+W)=5\operatorname{dim}(U+W) = 5:

    dim(UW)=85=3\operatorname{dim}(U \cap W) = 8 - 5 = 3

    Case 2: If dim(U+W)=6\operatorname{dim}(U+W) = 6:
    dim(UW)=86=2\operatorname{dim}(U \cap W) = 8 - 6 = 2

    The possible dimensions of UWU \cap W are 22 or 33.
    The correct option is '22 or 33'."
    :::

    ---

    2. Change of Basis (Transition Matrix)

    When we change from one ordered basis to another, the coordinate vector of a given vector changes. A transition matrix facilitates this transformation.

    📖 Transition Matrix

    Let B={b1,,bn}\mathcal{B} = \{b_1, \dots, b_n\} and C={c1,,cn}\mathcal{C} = \{c_1, \dots, c_n\} be two ordered bases for a vector space VV. The transition matrix from B\mathcal{B} to C\mathcal{C}, denoted PCBP_{\mathcal{C} \leftarrow \mathcal{B}}, is an n×nn \times n matrix such that for any vector vVv \in V:

    [v]C=PCB[v]B[v]_{\mathcal{C}} = P_{\mathcal{C} \leftarrow \mathcal{B}} [v]_{\mathcal{B}}

    The columns of PCBP_{\mathcal{C} \leftarrow \mathcal{B}} are the coordinate vectors of the basis vectors in B\mathcal{B} relative to the basis C\mathcal{C}:
    PCB=[[b1]C[b2]C[bn]C]P_{\mathcal{C} \leftarrow \mathcal{B}} = \left[ [b_1]_{\mathcal{C}} \quad [b_2]_{\mathcal{C}} \quad \dots \quad [b_n]_{\mathcal{C}} \right]

    To find PCBP_{\mathcal{C} \leftarrow \mathcal{B}}, we form the augmented matrix [CB][\mathcal{C} | \mathcal{B}] and row reduce it to [IPCB][I | P_{\mathcal{C} \leftarrow \mathcal{B}}].

    Quick Example: Let B={(1,0),(0,1)}\mathcal{B} = \{(1, 0), (0, 1)\} be the standard basis and C={(1,1),(1,1)}\mathcal{C} = \{(1, 1), (1, -1)\} be another basis for R2\mathbb{R}^2. Find the transition matrix PCBP_{\mathcal{C} \leftarrow \mathcal{B}}.

    Step 1: Form the augmented matrix [CB][\mathcal{C} | \mathcal{B}].

    [11101101]\left[\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 1 & -1 & 0 & 1 \end{array}\right]

    Step 2: Row reduce the left side (matrix C\mathcal{C}) to the identity matrix.

    R2R1R2R_2 - R_1 \to R_2

    [11100211]\left[\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 0 & -2 & -1 & 1 \end{array}\right]

    >
    12R2R2-\frac{1}{2}R_2 \to R_2

    >
    [1110011212]\left[\begin{array}{cc|cc} 1 & 1 & 1 & 0 \\ 0 & 1 & \frac{1}{2} & -\frac{1}{2} \end{array}\right]

    >
    R1R2R1R_1 - R_2 \to R_1

    >
    [101212011212]\left[\begin{array}{cc|cc} 1 & 0 & \frac{1}{2} & \frac{1}{2} \\ 0 & 1 & \frac{1}{2} & -\frac{1}{2} \end{array}\right]

    Answer: The transition matrix is PCB=[12121212]P_{\mathcal{C} \leftarrow \mathcal{B}} = \begin{bmatrix} \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{bmatrix}.
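    The row reduction above can be cross-checked numerically: solving $CX = I$ for the matrix $X$ performs exactly the reduction of $[\mathcal{C} \mid \mathcal{B}]$ to $[I \mid P]$. A minimal sketch (the use of NumPy is an assumption; any linear-algebra tool works):

```python
import numpy as np

# Columns of C are the basis vectors (1, 1) and (1, -1); B is the standard basis.
C = np.array([[1.0, 1.0],
              [1.0, -1.0]])
B = np.eye(2)

# Solving C X = B is equivalent to row reducing [C | B] to [I | X].
P = np.linalg.solve(C, B)
print(P)  # entries: [[0.5, 0.5], [0.5, -0.5]]
```

    This matches the hand computation, confirming $P_{\mathcal{C} \leftarrow \mathcal{B}}$.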

    :::question type="MCQ" question="Let $\mathcal{B} = \{b_1, b_2\} = \{(1, 2), (2, 1)\}$ and $\mathcal{C} = \{c_1, c_2\} = \{(1, 0), (0, 1)\}$ be two ordered bases for $\mathbb{R}^2$. Find the transition matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$." options=["$\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$","$\begin{bmatrix} -1/3 & 2/3 \\ 2/3 & -1/3 \end{bmatrix}$","$\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$","$\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$"] answer="$\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$" hint="The transition matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$ has columns that are the coordinate vectors of $b_1$ and $b_2$ with respect to $\mathcal{C}$. Since $\mathcal{C}$ is the standard basis, these coordinate vectors are simply the vectors $b_1$ and $b_2$ themselves." solution="Step 1: We need to find $P_{\mathcal{C} \leftarrow \mathcal{B}}$, which means expressing $b_1$ and $b_2$ as linear combinations of $c_1$ and $c_2$.
    The basis $\mathcal{C} = \{(1, 0), (0, 1)\}$ is the standard basis for $\mathbb{R}^2$.
    For any vector $v = (x, y)$, its coordinate vector relative to the standard basis is simply $\begin{bmatrix} x \\ y \end{bmatrix}$.

    Step 2: Find $[b_1]_{\mathcal{C}}$:
    $b_1 = (1, 2) = 1 \cdot (1, 0) + 2 \cdot (0, 1)$.
    So, $[b_1]_{\mathcal{C}} = \begin{bmatrix} 1 \\ 2 \end{bmatrix}$.

    Step 3: Find $[b_2]_{\mathcal{C}}$:
    $b_2 = (2, 1) = 2 \cdot (1, 0) + 1 \cdot (0, 1)$.
    So, $[b_2]_{\mathcal{C}} = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$.

    Step 4: Construct the transition matrix $P_{\mathcal{C} \leftarrow \mathcal{B}}$ using these coordinate vectors as columns.

    $P_{\mathcal{C} \leftarrow \mathcal{B}} = \left[ [b_1]_{\mathcal{C}} \quad [b_2]_{\mathcal{C}} \right] = \begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}$

    Answer: $\boxed{\begin{bmatrix} 1 & 2 \\ 2 & 1 \end{bmatrix}}$"
    :::

    ---

    3. Rank-Nullity Theorem

    Although stated for linear transformations, the Rank-Nullity Theorem directly relates the dimension of the image (rank) and the dimension of the kernel (nullity) of a linear transformation, both of which are subspaces. It is a fundamental result of dimension theory.

    📐 Rank-Nullity Theorem

    Let $T: V \to W$ be a linear transformation from an $n$-dimensional vector space $V$ to a vector space $W$. Then:

    $\operatorname{dim}(\operatorname{ker}(T)) + \operatorname{dim}(\operatorname{Im}(T)) = \operatorname{dim}(V)$

    Where:
    $\operatorname{ker}(T) = \{v \in V : T(v) = \mathbf{0}_W\}$ is the kernel (null space) of $T$; $\operatorname{dim}(\operatorname{ker}(T))$ is the nullity of $T$.
    $\operatorname{Im}(T) = \{T(v) : v \in V\}$ is the image (range) of $T$; $\operatorname{dim}(\operatorname{Im}(T))$ is the rank of $T$.
    When to use: To find the dimension of the kernel or image when the other dimension and the domain's dimension are known. It connects the dimensions of the subspaces generated by a linear map.

    Quick Example: Let $T: \mathbb{R}^3 \to \mathbb{R}^2$ be the linear transformation defined by $T(x,y,z) = (x+y, y-z)$. Find the nullity of $T$ if its rank is 2.

    Step 1: Identify the dimension of the domain space.
    Here, $V = \mathbb{R}^3$, so $\operatorname{dim}(V) = 3$.

    Step 2: We are given that the rank of $T$, $\operatorname{dim}(\operatorname{Im}(T))$, is 2.

    Step 3: Apply the Rank-Nullity Theorem.

    $\operatorname{dim}(\operatorname{ker}(T)) + \operatorname{dim}(\operatorname{Im}(T)) = \operatorname{dim}(V)$

    $\operatorname{dim}(\operatorname{ker}(T)) + 2 = 3$

    Step 4: Solve for $\operatorname{dim}(\operatorname{ker}(T))$.

    $\operatorname{dim}(\operatorname{ker}(T)) = 3 - 2 = 1$

    Answer: The nullity of $T$ is 1.
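    The rank in this example can also be computed directly from the standard matrix of $T$, whose rows come from the two component formulas. A sketch (NumPy is an assumption, not part of the original notes):

```python
import numpy as np

# Standard matrix of T(x, y, z) = (x + y, y - z): one row per component.
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, -1.0]])

rank = np.linalg.matrix_rank(A)   # dim(Im(T))
nullity = A.shape[1] - rank       # rank-nullity: dim(ker(T)) = dim(V) - rank
print(rank, nullity)  # 2 1
```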

    :::question type="NAT" question="A linear transformation $T: P_3 \to \mathbb{R}^4$ is defined such that its image has dimension 3. What is the dimension of the kernel of $T$?" answer="1" hint="Recall the Rank-Nullity Theorem. The domain space is $P_3$, the space of polynomials of degree at most 3. Determine its dimension first." solution="Step 1: Determine the dimension of the domain space $P_3$.
    A basis for $P_3$ is $\{1, x, x^2, x^3\}$. This basis contains 4 vectors.
    Therefore, $\operatorname{dim}(P_3) = 4$.

    Step 2: We are given that the dimension of the image of $T$ is 3.
    So, $\operatorname{dim}(\operatorname{Im}(T)) = 3$.

    Step 3: Apply the Rank-Nullity Theorem:

    $\operatorname{dim}(\operatorname{ker}(T)) + \operatorname{dim}(\operatorname{Im}(T)) = \operatorname{dim}(P_3)$

    $\operatorname{dim}(\operatorname{ker}(T)) + 3 = 4$

    Step 4: Solve for $\operatorname{dim}(\operatorname{ker}(T))$.

    $\operatorname{dim}(\operatorname{ker}(T)) = 4 - 3 = 1$

    Answer: $\boxed{1}$"
    :::

    ---

    Problem-Solving Strategies

    💡 CUET PG Strategy: Subspace Dimensions

    For subspaces of $\mathbb{R}^n$ defined by linear equations, the dimension of the subspace is $n - k$, where $k$ is the number of linearly independent equations defining the subspace. Equivalently, count the free variables in the system.

    💡 CUET PG Strategy: Basis Verification

    To check whether a set of $n$ vectors forms a basis for $\mathbb{R}^n$:

    • Form a matrix $A$ with the vectors as columns (or rows).

    • Calculate $\det(A)$. If $\det(A) \ne 0$, the vectors are linearly independent and form a basis. If $\det(A) = 0$, they are linearly dependent and do not form a basis.

    • If the number of vectors is not $n$, the set cannot be a basis for $\mathbb{R}^n$.
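    The determinant test is mechanical enough to script for quick self-checking. A sketch with NumPy, using the sample vectors $(1,0,1)$, $(0,1,1)$, $(1,1,0)$ (these vectors are illustrative, not from the notes):

```python
import numpy as np

vectors = [(1, 0, 1), (0, 1, 1), (1, 1, 0)]
A = np.column_stack(vectors).astype(float)  # vectors become the columns of A

d = np.linalg.det(A)
is_basis = abs(d) > 1e-9  # nonzero determinant -> linearly independent -> basis
print(round(d), is_basis)  # -2 True
```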

    💡 CUET PG Strategy: Grassmann's Formula Bounds

    When dealing with $\operatorname{dim}(U \cap W)$ and $\operatorname{dim}(U+W)$:
    $\operatorname{dim}(U \cap W)$ ranges from $\max(0, \operatorname{dim}(U) + \operatorname{dim}(W) - \operatorname{dim}(V))$ to $\min(\operatorname{dim}(U), \operatorname{dim}(W))$.
    $\operatorname{dim}(U+W)$ ranges from $\max(\operatorname{dim}(U), \operatorname{dim}(W))$ to $\min(\operatorname{dim}(V), \operatorname{dim}(U) + \operatorname{dim}(W))$.
    Watch the "distinct subspaces" condition: the upper bound $\operatorname{dim}(U \cap W) = \min(\operatorname{dim}(U), \operatorname{dim}(W))$ is attained only when one subspace contains the other. In particular, if $U \ne W$ and $\operatorname{dim}(U) = \operatorname{dim}(W)$, then $\operatorname{dim}(U \cap W) < \min(\operatorname{dim}(U), \operatorname{dim}(W))$.

    ---

    Common Mistakes

    ⚠️ Common Mistake: Scalar Field

    ❌ Assuming the dimension of a vector space is fixed regardless of the scalar field (for example, that $\operatorname{dim}(\mathbb{C})$ is always 1).
    ✅ The dimension depends on the scalar field: $\operatorname{dim}(\mathbb{C} \text{ over } \mathbb{C}) = 1$, but $\operatorname{dim}(\mathbb{C} \text{ over } \mathbb{R}) = 2$. Always specify or infer the scalar field.

    ⚠️ Common Mistake: Basis Size

    ❌ Assuming any set of $n$ vectors in an $n$-dimensional space is a basis.
    ✅ A set of $n$ vectors in an $n$-dimensional space must also be linearly independent (equivalently, span the space) to be a basis. For example, $\{(1,0), (2,0)\}$ in $\mathbb{R}^2$ has two vectors but is not a basis, as they are linearly dependent.

    ⚠️ Common Mistake: Intersection of Subspaces

    ❌ Assuming the upper bound $\operatorname{dim}(U \cap W) = \min(\operatorname{dim}(U), \operatorname{dim}(W))$ is always attainable for distinct subspaces.
    ✅ That bound is attained exactly when one subspace contains the other. So if $U \ne W$ and $\operatorname{dim}(U) = \operatorname{dim}(W)$, neither can contain the other, and $\operatorname{dim}(U \cap W) < \min(\operatorname{dim}(U), \operatorname{dim}(W))$. This restricts the possible values.

    ---

    Practice Questions

    :::question type="MCQ" question="For what value(s) of $k$ does the set of vectors $\{(1, k, 5), (1, -3, 2), (2, -1, 1)\}$ form a basis of $\mathbb{R}^3$?" options=["$k = 8$","$k = -8$","$k \ne -8$","$k \ne 0$"] answer="$k \ne -8$" hint="For three vectors to form a basis of $\mathbb{R}^3$, they must be linearly independent. This means the determinant of the matrix formed by these vectors must be non-zero." solution="Step 1: Form a matrix $A$ with the given vectors as columns.

    $A = \begin{bmatrix} 1 & 1 & 2 \\ k & -3 & -1 \\ 5 & 2 & 1 \end{bmatrix}$

    Step 2: For the vectors to form a basis, they must be linearly independent, which requires $\det(A) \ne 0$. Calculate the determinant of $A$.

    $\det(A) = 1((-3)(1) - (-1)(2)) - 1((k)(1) - (-1)(5)) + 2((k)(2) - (-3)(5))$

    $\det(A) = 1(-3 + 2) - 1(k + 5) + 2(2k + 15)$

    $\det(A) = -1 - k - 5 + 4k + 30$

    $\det(A) = 3k + 24$

    Step 3: Require the determinant to be non-zero to find the condition on $k$.

    $3k + 24 \ne 0 \implies 3k \ne -24 \implies k \ne -8$

    Answer: $\boxed{k \ne -8}$"
    :::
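    The result $\det(A) = 3k + 24$ can be sanity-checked by evaluating the determinant at particular values of $k$ (a NumPy sketch that spot-checks values rather than deriving the formula symbolically):

```python
import numpy as np

def det_for_k(k):
    # Same matrix as in the solution, with the given vectors as columns.
    A = np.array([[1.0, 1.0, 2.0],
                  [k,  -3.0, -1.0],
                  [5.0, 2.0, 1.0]])
    return np.linalg.det(A)

# det(A) = 3k + 24 vanishes only at k = -8.
print(abs(round(det_for_k(-8.0), 6)))  # 0.0 -> not a basis at k = -8
print(round(det_for_k(0.0), 6))        # 24.0
```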

    :::question type="MCQ" question="Let $V$ and $W$ be the subspaces of $\mathbb{R}^4$ defined as $V = \{(a, b, c, d): b - 5c + 2d = 0\}$, $W = \{(a, b, c, d): a - b = 0\}$. Then the dimension of $V \cap W$ is:" options=["1","2","3","4"] answer="2" hint="To find the dimension of $V \cap W$, find a basis for the vectors that satisfy the conditions for both $V$ and $W$. This means solving the combined system of linear equations." solution="Step 1: For a vector $(a, b, c, d)$ to be in $V \cap W$, it must satisfy both conditions:

  • $b - 5c + 2d = 0$

  • $a - b = 0 \implies a = b$

    Step 2: The condition $a = b$ does not alter the first equation, since $a$ does not appear in it.
    We have the system:
    $a - b = 0$
    $b - 5c + 2d = 0$
    This is a system of 2 linearly independent equations in 4 variables.

    Step 3: The number of free variables is $4 - (\text{number of linearly independent equations}) = 4 - 2 = 2$.
    Each free variable corresponds to a basis vector.

    Step 4: To find a basis, choose $c$ and $d$ as free variables.
    From $b - 5c + 2d = 0$, we have $b = 5c - 2d$.
    From $a = b$, it follows that $a = 5c - 2d$.
    An arbitrary vector in $V \cap W$ is $(a, b, c, d) = (5c - 2d, 5c - 2d, c, d)$.

    $(5c - 2d, 5c - 2d, c, d) = c(5, 5, 1, 0) + d(-2, -2, 0, 1)$

    The basis is $\{(5, 5, 1, 0), (-2, -2, 0, 1)\}$. These vectors are linearly independent.

    Step 5: The number of vectors in the basis is 2. Therefore, $\operatorname{dim}(V \cap W) = 2$.
    Answer: $\boxed{2}$"
    :::
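    The free-variable count in such problems can be automated: the dimension of the solution space in $\mathbb{R}^4$ is $4$ minus the rank of the coefficient matrix. A sketch (NumPy is an assumption):

```python
import numpy as np

# Coefficient rows for b - 5c + 2d = 0 and a - b = 0, in variables (a, b, c, d).
M = np.array([[0.0, 1.0, -5.0, 2.0],
              [1.0, -1.0, 0.0, 0.0]])

dim_intersection = M.shape[1] - np.linalg.matrix_rank(M)
print(dim_intersection)  # 2
```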

    :::question type="MCQ" question="Let $U = \{(x, y, z) \in \mathbb{R}^3 : x - y = 0\}$ and $W = \{(x, y, z) \in \mathbb{R}^3 : y + z = 0\}$. Which of the following statements about $U$ and $W$ is true?" options=["$\operatorname{dim}(U) = 1$ and $\operatorname{dim}(W) = 1$","$\operatorname{dim}(U) = 2$ and $\operatorname{dim}(W) = 1$","$\operatorname{dim}(U \cap W) = 1$ and $\operatorname{dim}(U+W) = 3$","$\operatorname{dim}(U \cap W) = 2$ and $\operatorname{dim}(U+W) = 2$"] answer="$\operatorname{dim}(U \cap W) = 1$ and $\operatorname{dim}(U+W) = 3$" hint="First, find the dimensions of $U$ and $W$ individually. Then, find the conditions for $U \cap W$ and determine its dimension. Finally, use Grassmann's formula to find $\operatorname{dim}(U+W)$." solution="Step 1: Find $\operatorname{dim}(U)$.
    $U = \{(x, y, z) \in \mathbb{R}^3 : x - y = 0\}$. This implies $x = y$.
    An arbitrary vector in $U$ is $(x, x, z) = x(1, 1, 0) + z(0, 0, 1)$.
    A basis for $U$ is $\{(1, 1, 0), (0, 0, 1)\}$. These are linearly independent.
    Thus, $\operatorname{dim}(U) = 2$.

    Step 2: Find $\operatorname{dim}(W)$.
    $W = \{(x, y, z) \in \mathbb{R}^3 : y + z = 0\}$. This implies $z = -y$.
    An arbitrary vector in $W$ is $(x, y, -y) = x(1, 0, 0) + y(0, 1, -1)$.
    A basis for $W$ is $\{(1, 0, 0), (0, 1, -1)\}$. These are linearly independent.
    Thus, $\operatorname{dim}(W) = 2$.

    Step 3: Find $\operatorname{dim}(U \cap W)$.
    A vector $(x, y, z)$ is in $U \cap W$ if it satisfies both $x - y = 0$ and $y + z = 0$.
    So, $x = y$ and $z = -y$.
    An arbitrary vector in $U \cap W$ is $(y, y, -y) = y(1, 1, -1)$.
    A basis for $U \cap W$ is $\{(1, 1, -1)\}$. This is a single non-zero vector.
    Thus, $\operatorname{dim}(U \cap W) = 1$.

    Step 4: Find $\operatorname{dim}(U+W)$ using Grassmann's Formula.

    $\operatorname{dim}(U+W) = \operatorname{dim}(U) + \operatorname{dim}(W) - \operatorname{dim}(U \cap W) = 2 + 2 - 1 = 3$

    Step 5: Combine the results.
    $\operatorname{dim}(U) = 2$, $\operatorname{dim}(W) = 2$, $\operatorname{dim}(U \cap W) = 1$, $\operatorname{dim}(U+W) = 3$.
    The statement '$\operatorname{dim}(U \cap W) = 1$ and $\operatorname{dim}(U+W) = 3$' is true.
    Answer: $\boxed{\operatorname{dim}(U \cap W) = 1 \text{ and } \operatorname{dim}(U+W) = 3}$"
    :::
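    Grassmann's formula can be verified numerically by stacking the basis vectors found in the solution and computing a rank (a NumPy sketch; the library choice is an assumption):

```python
import numpy as np

U_basis = [(1, 1, 0), (0, 0, 1)]
W_basis = [(1, 0, 0), (0, 1, -1)]

# dim(U + W) is the rank of all basis vectors stacked together as rows.
dim_sum = np.linalg.matrix_rank(np.array(U_basis + W_basis, dtype=float))
# Grassmann's formula rearranged: dim(U ∩ W) = dim(U) + dim(W) - dim(U + W).
dim_int = len(U_basis) + len(W_basis) - dim_sum
print(dim_sum, dim_int)  # 3 1
```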

    :::question type="NAT" question="Let $V = M_{2 \times 2}(\mathbb{R})$ be the vector space of $2 \times 2$ matrices with real entries. Let $S = \{A \in V : A^T = A\}$ be the subspace of symmetric matrices. What is $\operatorname{dim}(S)$?" answer="3" hint="A symmetric matrix is one where $A = A^T$. Write out the general form of a $2 \times 2$ symmetric matrix and find a basis for it." solution="Step 1: Write the general form of a $2 \times 2$ matrix $A$.

    $A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$

    Step 2: Apply the condition for a symmetric matrix, $A^T = A$.

    $A^T = \begin{bmatrix} a & c \\ b & d \end{bmatrix}$

    So,
    $A^T = A \implies \begin{bmatrix} a & c \\ b & d \end{bmatrix} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}$

    This implies $c = b$.

    Step 3: Write the general form of a symmetric matrix in $S$.

    $A = \begin{bmatrix} a & b \\ b & d \end{bmatrix}$

    Step 4: Decompose this matrix into a linear combination of basis matrices.

    $\begin{bmatrix} a & b \\ b & d \end{bmatrix} = a\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix} + b\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} + d\begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$

    The set of matrices
    $\left\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix} \right\}$

    forms a basis for $S$: the matrices are linearly independent and span $S$.

    Step 5: Count the number of matrices in the basis.
    There are 3 matrices in the basis.
    Therefore, $\operatorname{dim}(S) = 3$.
    Answer: $\boxed{3}$"
    :::
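    The linear independence of the three basis matrices can be double-checked by flattening each matrix into a vector in $\mathbb{R}^4$ and computing a rank (a NumPy sketch):

```python
import numpy as np

# The three basis matrices of 2x2 symmetric matrices, from free entries a, b, d.
basis = [np.array([[1, 0], [0, 0]]),
         np.array([[0, 1], [1, 0]]),
         np.array([[0, 0], [0, 1]])]

# Flatten each matrix to a vector in R^4; independence means full row rank.
M = np.stack([b.flatten() for b in basis]).astype(float)
rank = np.linalg.matrix_rank(M)
print(rank)  # 3 -> dim(S) = 3
```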

    :::question type="MCQ" question="Let $P_2(\mathbb{R})$ be the vector space of polynomials of degree at most 2 with real coefficients. Consider the set $S = \{x^2+1, x-1, x+2\}$. Which of the following is true about $S$?" options=["$S$ is linearly dependent and spans $P_2(\mathbb{R})$.","$S$ is linearly independent but does not span $P_2(\mathbb{R})$.","$S$ is a basis for $P_2(\mathbb{R})$.","$S$ is linearly dependent and does not span $P_2(\mathbb{R})$."] answer="$S$ is a basis for $P_2(\mathbb{R})$." hint="The dimension of $P_2(\mathbb{R})$ is 3. A set of 3 polynomials will form a basis if they are linearly independent. To check linear independence, set a linear combination to the zero polynomial and solve for the coefficients." solution="Step 1: Determine the dimension of the vector space $P_2(\mathbb{R})$.
    The standard basis for $P_2(\mathbb{R})$ is $\{1, x, x^2\}$. Thus, $\operatorname{dim}(P_2(\mathbb{R})) = 3$.
    The set $S$ contains 3 vectors (polynomials), so it could be a basis if it is linearly independent.

    Step 2: Check the linear independence of $S = \{x^2+1, x-1, x+2\}$.
    Set a linear combination to the zero polynomial:

    $c_1(x^2+1) + c_2(x-1) + c_3(x+2) = 0$

    Rearrange by powers of $x$:
    $c_1x^2 + (c_2+c_3)x + (c_1-c_2+2c_3) = 0$

    For this polynomial to be the zero polynomial, all coefficients must be zero:

  • (1) $c_1 = 0$ (coefficient of $x^2$)

  • (2) $c_2 + c_3 = 0$ (coefficient of $x$)

  • (3) $c_1 - c_2 + 2c_3 = 0$ (constant term)

    Step 3: Solve the system of equations.
    From (1), $c_1 = 0$.
    Substitute $c_1 = 0$ into (3):

    $-c_2 + 2c_3 = 0$

    Now we have a system for $c_2, c_3$:
    $\begin{aligned} c_2 + c_3 & = 0 \\ -c_2 + 2c_3 & = 0 \end{aligned}$

    Adding the two equations: $(c_2 + c_3) + (-c_2 + 2c_3) = 0 + 0 \implies 3c_3 = 0 \implies c_3 = 0$.
    Substitute $c_3 = 0$ into $c_2 + c_3 = 0 \implies c_2 = 0$.
    So, the only solution is $c_1 = 0, c_2 = 0, c_3 = 0$.

    Step 4: Conclude linear independence and basis status.
    Since the only solution is the trivial one, the set $S$ is linearly independent.
    As $S$ contains 3 linearly independent vectors in the 3-dimensional space $P_2(\mathbb{R})$, $S$ forms a basis for $P_2(\mathbb{R})$.
    Answer: $\boxed{S \text{ is a basis for } P_2(\mathbb{R})}$"
    :::

    ---

    Summary

    Key Formulas & Takeaways

    | # | Formula/Concept | Expression |
    |---|----------------|------------|
    | 1 | Linear Independence | $c_1v_1 + \dots + c_kv_k = \mathbf{0} \implies c_i = 0$ |
    | 2 | Basis Definition | Linearly Independent + Spanning Set |
    | 3 | Dimension | Number of vectors in any basis |
    | 4 | Subspace Dimension | $\operatorname{dim}(W) = n - k$ (for $k$ independent equations in $\mathbb{R}^n$) |
    | 5 | Grassmann's Formula | $\operatorname{dim}(U+W) = \operatorname{dim}(U) + \operatorname{dim}(W) - \operatorname{dim}(U \cap W)$ |
    | 6 | Rank-Nullity Theorem | $\operatorname{dim}(\operatorname{ker}(T)) + \operatorname{dim}(\operatorname{Im}(T)) = \operatorname{dim}(V)$ |
    | 7 | Dimension over Field | $\operatorname{dim}(\mathbb{C}^n \text{ over } \mathbb{R}) = 2n$ |

    ---

    What's Next?

    💡 Continue Learning

    This topic connects to:

      • Linear Transformations: Basis and dimension are essential for understanding the domain, codomain, kernel, and image of linear transformations. The Rank-Nullity Theorem applies directly.

      • Eigenvalues and Eigenvectors: Eigenvectors form special bases that simplify the representation of linear transformations, particularly in diagonalization.

      • Inner Product Spaces: Orthogonal and orthonormal bases provide a powerful framework for analysis in inner product spaces, extending the concepts of length and angle.

    Chapter Summary

    Vector Spaces — Key Points

    A vector space $V$ over a field $F$ is a set equipped with vector addition and scalar multiplication satisfying specific axioms. A subspace $W$ of $V$ is a subset that is itself a vector space under the same operations; it must be closed under addition and scalar multiplication and contain the zero vector.
    A linear combination of vectors $v_1, \ldots, v_k$ is an expression $c_1v_1 + \ldots + c_kv_k$. The span of a set of vectors $S$, denoted $\operatorname{span}(S)$, is the set of all possible linear combinations of vectors in $S$, and it always forms a subspace.
    A set of vectors $\{v_1, \ldots, v_k\}$ is linearly independent if the only solution to $c_1v_1 + \ldots + c_kv_k = 0$ is $c_1 = \ldots = c_k = 0$. Otherwise, the set is linearly dependent.
    A basis for a vector space $V$ is a set of vectors that is both linearly independent and spans $V$. Every vector in $V$ can be uniquely expressed as a linear combination of basis vectors.
    The dimension of a vector space $V$, denoted $\dim(V)$, is the number of vectors in any basis for $V$. This number is the same for every basis of a given space.
    An ordered basis $B = \{v_1, \ldots, v_n\}$ allows any vector $v \in V$ to be written uniquely as $v = c_1v_1 + \ldots + c_nv_n$. The coordinate vector of $v$ with respect to $B$ is $[v]_B = (c_1, \ldots, c_n)^T$.

    ---

    Chapter Review Questions

    :::question type="MCQ" question="Which of the following sets is NOT a subspace of $\mathbb{R}^3$?" options=["$\{(x,y,z) \in \mathbb{R}^3 : x+2y-z=0\}$","$\{(x,y,z) \in \mathbb{R}^3 : x=y=z\}$","$\{(x,y,z) \in \mathbb{R}^3 : x=1\}$","$\{(x,y,z) \in \mathbb{R}^3 : z=0\}$"] answer="$\{(x,y,z) \in \mathbb{R}^3 : x=1\}$" hint="A subspace must contain the zero vector and be closed under scalar multiplication and vector addition. Consider whether the zero vector $(0,0,0)$ is in each set." solution="For a set to be a subspace, it must contain the zero vector $(0,0,0)$. The set $\{(x,y,z) \in \mathbb{R}^3 : x=1\}$ requires the first component to be 1, so it does not contain $(0,0,0)$. Therefore, it cannot be a subspace.
    Answer: $\boxed{\{(x,y,z) \in \mathbb{R}^3 : x=1\}}$"
    :::

    :::question type="NAT" question="Consider the vectors $v_1=(1,0,1)$, $v_2=(0,1,1)$, and $v_3=(2,-1,1)$ in $\mathbb{R}^3$. If $v_3$ is a linear combination of $v_1$ and $v_2$ such that $v_3 = av_1 + bv_2$, what is the value of $a+b$?" answer="1" hint="Set up a system of linear equations by equating components and solve for $a$ and $b$." solution="We need to find $a$ and $b$ such that $(2,-1,1) = a(1,0,1) + b(0,1,1)$.
    This gives the system of equations:
    $a = 2$ (from the first component)
    $b = -1$ (from the second component)
    $a+b = 1$ (from the third component)
    Substituting $a=2$ and $b=-1$ into the third equation: $2 + (-1) = 1$, which is consistent.
    Thus, $a=2$ and $b=-1$.
    The value of $a+b$ is $2 + (-1) = 1$.
    Answer: $\boxed{1}$"
    :::
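    Solving for the coefficients is a small linear system that can be checked numerically; a sketch using a least-squares solver on the overdetermined $3 \times 2$ system (NumPy is an assumption):

```python
import numpy as np

v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([2.0, -1.0, 1.0])

# Overdetermined system [v1 v2] [a, b]^T = v3; since it is consistent,
# least squares recovers the exact coefficients.
coeffs, *_ = np.linalg.lstsq(np.column_stack([v1, v2]), v3, rcond=None)
a, b = coeffs
print(int(round(a)), int(round(b)), int(round(a + b)))  # 2 -1 1
```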

    :::question type="MCQ" question="Let $P_2(x)$ be the vector space of polynomials of degree at most 2. Which of the following sets is a basis for $P_2(x)$?" options=["$\{1, x, x^2, x^3\}$","$\{1, x, x^2\}$","$\{1+x, x+x^2, 1-x^2\}$","$\{x, x^2\}$"] answer="$\{1, x, x^2\}$" hint="A basis must be linearly independent and span the space. The dimension of $P_2(x)$ is 3." solution="The standard basis for $P_2(x)$ is $\{1, x, x^2\}$, which consists of 3 linearly independent polynomials that span $P_2(x)$.
    Option A has 4 elements, making it linearly dependent in $P_2(x)$.
    Option C: $1+x, x+x^2, 1-x^2$. Check for linear dependence:
    $c_1(1+x) + c_2(x+x^2) + c_3(1-x^2) = 0$.
    Rearranging by powers of $x$: $(c_1+c_3) + (c_1+c_2)x + (c_2-c_3)x^2 = 0$.
    This gives the system:
    $c_1+c_3=0$
    $c_1+c_2=0$
    $c_2-c_3=0$
    From the first equation, $c_3 = -c_1$.
    From the second equation, $c_2 = -c_1$.
    Substitute these into the third equation: $(-c_1) - (-c_1) = 0 \implies 0=0$.
    This means the system has non-trivial solutions (e.g., if $c_1=1$, then $c_2=-1, c_3=-1$).
    So, $\{1+x, x+x^2, 1-x^2\}$ is linearly dependent and thus NOT a basis.
    Option D has only 2 elements, so it cannot span $P_2(x)$ (the dimension is 3).
    Therefore, the only correct option is $\{1, x, x^2\}$.
    Answer: $\boxed{\{1, x, x^2\}}$"
    :::
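    Polynomials in $P_2$ can be identified with their coefficient vectors $(c_0, c_1, c_2)$, so the dependence check for option C reduces to a rank computation (a NumPy sketch):

```python
import numpy as np

# Columns are coefficient vectors (constant, x, x^2) of 1+x, x+x^2, 1-x^2.
A = np.array([[1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 1.0, -1.0]])

rank = np.linalg.matrix_rank(A)
print(rank, rank == 3)  # 2 False -> linearly dependent, not a basis
```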

    :::question type="NAT" question="What is the dimension of the subspace of $\mathbb{R}^4$ defined by the equations $x_1 - x_2 + x_3 = 0$ and $x_2 - x_4 = 0$?" answer="2" hint="Find a basis for the subspace by expressing some variables in terms of others using the given equations. The number of free variables will be the dimension." solution="Let the subspace be $W = \{(x_1, x_2, x_3, x_4) \in \mathbb{R}^4 : x_1 - x_2 + x_3 = 0 \text{ and } x_2 - x_4 = 0\}$.
    From $x_2 - x_4 = 0$, we have $x_4 = x_2$.
    From $x_1 - x_2 + x_3 = 0$, we have $x_1 = x_2 - x_3$.
    We can choose $x_2$ and $x_3$ as free variables. Let $x_2 = a$ and $x_3 = b$.
    Then $x_1 = a - b$ and $x_4 = a$.
    So, any vector in $W$ can be written as $(a-b, a, b, a)$.
    This can be decomposed as:
    $(a-b, a, b, a) = a(1, 1, 0, 1) + b(-1, 0, 1, 0)$.
    The vectors $v_1 = (1, 1, 0, 1)$ and $v_2 = (-1, 0, 1, 0)$ span $W$.
    These two vectors are linearly independent (neither is a scalar multiple of the other).
    Therefore, $\{v_1, v_2\}$ forms a basis for $W$.
    The dimension of $W$ is the number of vectors in its basis, which is 2.
    Answer: $\boxed{2}$"
    :::

    ---

    What's Next?

    💡 Continue Your CUET PG Journey

    Having established a strong foundation in vector spaces, you are now prepared to move on to Linear Transformations. That topic builds directly on vector spaces, subspaces, basis, and dimension, exploring mappings between vector spaces. From there you will study Matrices and Linear Operators, seeing how linear transformations are represented by matrices, and then Eigenvalues and Eigenvectors, which are fundamental for analyzing the behavior of linear transformations and for diagonalizing matrices.

    🎯 Key Points to Remember

    • Master the core concepts in Vector Spaces before moving to advanced topics
    • Practice with previous year questions to understand exam patterns
    • Review short notes regularly for quick revision before exams
