Linear Algebra: Vector Spaces and Systems of Equations

Vector Spaces

Comprehensive study notes on Vector Spaces for GATE DA preparation. This chapter covers key concepts, formulas, and examples needed for your exam.

Chapter: Vector Spaces - Part 1: Introduction

Chapter Overview

Vector spaces are fundamental structures in linear algebra, providing a framework for understanding and manipulating vectors. They generalize the familiar Euclidean spaces ($\mathbb{R}^2$, $\mathbb{R}^3$) and are crucial for applications in mathematics, physics, engineering, and especially data analytics (e.g., principal component analysis and many machine learning algorithms operate on vector spaces). This introductory part defines what a vector space is, introduces its core properties (axioms), and explains the concept of a subspace, which is a vector space contained within a larger one. Understanding these basics is essential for building a strong foundation in linear algebra for the GATE DA exam.

Key Concepts

#### 1. Definition of a Vector Space
A vector space (or linear space) consists of:

* A non-empty set $V$ whose elements are called vectors.
* A field $F$ whose elements are called scalars (e.g., $\mathbb{R}$ for real vector spaces, $\mathbb{C}$ for complex vector spaces).
* Two operations:
  * Vector Addition: a rule that associates with each pair of vectors $u, v \in V$ a vector $u+v \in V$.
  * Scalar Multiplication: a rule that associates with each scalar $c \in F$ and vector $u \in V$ a vector $c \cdot u \in V$.

These operations must satisfy the following ten axioms for all $u, v, w \in V$ and $c, d \in F$:

Axioms of Vector Addition:

1. Closure under Addition: $u+v \in V$
2. Commutativity: $u+v = v+u$
3. Associativity: $(u+v)+w = u+(v+w)$
4. Additive Identity (Zero Vector): There exists a unique vector $0 \in V$ such that $u+0 = u$ for all $u \in V$.
5. Additive Inverse: For every $u \in V$, there exists a unique vector $-u \in V$ such that $u+(-u) = 0$.

Axioms of Scalar Multiplication:

6. Closure under Scalar Multiplication: $c \cdot u \in V$
7. Distributivity (scalar over vector addition): $c \cdot (u+v) = c \cdot u + c \cdot v$
8. Distributivity (vector over scalar addition): $(c+d) \cdot u = c \cdot u + d \cdot u$
9. Associativity of Scalar Multiplication: $c \cdot (d \cdot u) = (cd) \cdot u$
10. Multiplicative Identity: $1 \cdot u = u$, where $1$ is the multiplicative identity in $F$.
#### 2. Examples of Vector Spaces
* The set of all $n$-tuples of real numbers, $\mathbb{R}^n$, over the field $\mathbb{R}$.
* The set of all $m \times n$ matrices with real entries, $M_{m \times n}(\mathbb{R})$, over the field $\mathbb{R}$.
* The set of all polynomials of degree less than or equal to $n$, $P_n(\mathbb{R})$, over the field $\mathbb{R}$.
* The set of all continuous real-valued functions on an interval $[a,b]$, $C[a,b]$, over the field $\mathbb{R}$.

#### 3. Definition of a Vector Subspace
A subspace $W$ of a vector space $V$ over a field $F$ is a non-empty subset of $V$ that is itself a vector space under the same operations of vector addition and scalar multiplication defined on $V$.

To check whether a non-empty subset $W \subseteq V$ is a subspace, we only need to verify the following three conditions (the other axioms are inherited from $V$):

Subspace Test (Three Conditions):

1. Contains the Zero Vector: The zero vector of $V$, $0_V$, must be in $W$; that is, $0_V \in W$. (This also ensures $W$ is non-empty.)
2. Closure under Addition: For any $u, v \in W$, the sum $u+v$ must also be in $W$.
3. Closure under Scalar Multiplication: For any $u \in W$ and any scalar $c \in F$, the scalar multiple $c \cdot u$ must also be in $W$.

Alternative Subspace Test (One Condition):
A non-empty subset $W$ of a vector space $V$ is a subspace if and only if for any $u, v \in W$ and any scalars $c, d \in F$, the linear combination $c \cdot u + d \cdot v$ is in $W$. This single condition implies both closure under addition (set $c = d = 1$) and closure under scalar multiplication (set $d = 0$).
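The one-condition test also lends itself to a quick numerical spot-check. The sketch below (assuming NumPy; `closed_under_combinations` and the membership predicate `in_plane` are illustrative names, not part of any library) samples a few vectors and scalars from $W = \{(x,y,z) \mid x+y+z=0\}$ and checks that every combination $c u + d v$ stays in the set:

```python
import numpy as np

def closed_under_combinations(in_set, vectors, scalars):
    """Spot-check the one-condition subspace test: for sampled u, v in W
    and scalars c, d, verify that c*u + d*v still satisfies the membership
    predicate. A passing check is evidence, not a proof."""
    for u in vectors:
        for v in vectors:
            for c in scalars:
                for d in scalars:
                    if not in_set(c * u + d * v):
                        return False
    return True

# W = {(x, y, z) : x + y + z = 0}, membership up to floating-point tolerance.
in_plane = lambda p: abs(p.sum()) < 1e-9

samples = [np.array([1.0, -1.0, 0.0]), np.array([2.0, 3.0, -5.0])]
scalars = [-2.0, 0.0, 1.0, 3.5]
print(closed_under_combinations(in_plane, samples, scalars))  # True
```

Running the same check on a set such as $\{(x,y,z) \mid x \ge 0\}$ with a negative scalar returns `False`, matching the hand argument in Example 3 below.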

#### 4. Examples of Subspaces
* The set $\{(x,y,z) \in \mathbb{R}^3 \mid x=0\}$ (the $yz$-plane) is a subspace of $\mathbb{R}^3$.
* The set $\{(x,y,z) \in \mathbb{R}^3 \mid x+y+z=0\}$ (a plane through the origin) is a subspace of $\mathbb{R}^3$.
* The set of all $n \times n$ symmetric matrices is a subspace of $M_{n \times n}(\mathbb{R})$.
* The set of all polynomials of degree $\le k$ is a subspace of $P_n(\mathbb{R})$ for $k \le n$.

Important Formulas (Axioms/Conditions)

#### Vector Space Axioms:
For $u, v, w \in V$ and $c, d \in F$:

1. $u+v \in V$
2. $u+v = v+u$
3. $(u+v)+w = u+(v+w)$
4. $\exists!\, 0 \in V \text{ s.t. } u+0 = u$
5. $\forall u \in V,\ \exists!\, {-u} \in V \text{ s.t. } u+(-u) = 0$
6. $c \cdot u \in V$
7. $c \cdot (u+v) = c \cdot u + c \cdot v$
8. $(c+d) \cdot u = c \cdot u + d \cdot u$
9. $c \cdot (d \cdot u) = (cd) \cdot u$
10. $1 \cdot u = u$

#### Subspace Test Conditions:
A non-empty subset $W \subseteq V$ is a subspace if:

1. $0_V \in W$
2. $u, v \in W \implies u+v \in W$
3. $u \in W,\ c \in F \implies c \cdot u \in W$

Or, equivalently:

1. $W$ is non-empty.
2. $u, v \in W,\ c, d \in F \implies c \cdot u + d \cdot v \in W$
Examples

Example 1: Is $W = \{(x,y,z) \in \mathbb{R}^3 \mid x+y+z=0\}$ a subspace of $\mathbb{R}^3$?
Let $F = \mathbb{R}$.

1. Zero Vector: The zero vector $(0,0,0)$ satisfies $0+0+0=0$. So $(0,0,0) \in W$.
2. Closure under Addition: Let $u=(x_1,y_1,z_1) \in W$ and $v=(x_2,y_2,z_2) \in W$, so $x_1+y_1+z_1=0$ and $x_2+y_2+z_2=0$. Then $u+v = (x_1+x_2,\ y_1+y_2,\ z_1+z_2)$. To check whether $u+v \in W$, sum its components:
   $$(x_1+x_2) + (y_1+y_2) + (z_1+z_2) = (x_1+y_1+z_1) + (x_2+y_2+z_2) = 0+0 = 0$$
   Thus $u+v \in W$.
3. Closure under Scalar Multiplication: Let $u=(x,y,z) \in W$ and $c \in \mathbb{R}$, so $x+y+z=0$. Then $c \cdot u = (cx, cy, cz)$. Summing its components:
   $$cx+cy+cz = c(x+y+z) = c(0) = 0$$
   Thus $c \cdot u \in W$.

Since all three conditions are met, $W$ is a subspace of $\mathbb{R}^3$.

Example 2: Is $W = \{(x,y,z) \in \mathbb{R}^3 \mid x+y+z=1\}$ a subspace of $\mathbb{R}^3$?

1. Zero Vector: The zero vector $(0,0,0)$ does not satisfy $0+0+0=1$.

Since the zero vector is not in $W$, $W$ is not a subspace of $\mathbb{R}^3$. (There is no need to check the other conditions.)

Example 3: Is $W = \{(x,y,z) \in \mathbb{R}^3 \mid x \ge 0\}$ a subspace of $\mathbb{R}^3$?

1. Zero Vector: The zero vector $(0,0,0)$ satisfies $0 \ge 0$. So $(0,0,0) \in W$.
2. Closure under Addition: Let $u=(1,0,0) \in W$ and $v=(2,0,0) \in W$. Then $u+v = (3,0,0)$, and since $3 \ge 0$, $u+v \in W$. (The condition holds for these specific vectors, and indeed for any two vectors with non-negative first components.)
3. Closure under Scalar Multiplication: Let $u=(1,0,0) \in W$ and $c=-1 \in \mathbb{R}$. Then $c \cdot u = (-1) \cdot (1,0,0) = (-1,0,0)$. However, $-1 \not\ge 0$, so $c \cdot u \notin W$.

Since closure under scalar multiplication fails, $W$ is not a subspace of $\mathbb{R}^3$.

Important Points/Tips for Exam Preparation

* Zero Vector is Key: The quickest way to rule out a set as a subspace is to check whether it contains the zero vector. If $0_V \notin W$, then $W$ is not a subspace.
* Closure is Essential: Subspaces must be "closed" under both vector addition and scalar multiplication: performing these operations on elements of the subspace must always produce another element of the same subspace.
* Geometric Interpretation: In $\mathbb{R}^2$ and $\mathbb{R}^3$, subspaces are geometrically restricted:
  * In $\mathbb{R}^2$: the origin $\{(0,0)\}$, any line through the origin, and $\mathbb{R}^2$ itself.
  * In $\mathbb{R}^3$: the origin $\{(0,0,0)\}$, any line through the origin, any plane through the origin, and $\mathbb{R}^3$ itself.
  Any set that does not pass through the origin (e.g., the line $y=x+1$) or is not "flat" (e.g., a sphere) cannot be a subspace.
* Linear Combinations: Remember the single, powerful test: a non-empty subset $W$ is a subspace if and only if it is closed under linear combinations ($c \cdot u + d \cdot v \in W$). This is often more efficient for proofs.
* PYQ Context: GATE questions often involve identifying subspaces from a list of options (Multiple Select Questions, MSQ). Practice applying the three subspace conditions rigorously to various types of sets (e.g., sets defined by equations, inequalities, or specific properties of vectors/matrices).
* Field Matters: While most GATE questions use $\mathbb{R}$ as the field, be aware that the field $F$ is part of the vector space definition.
* Trivial Subspaces: Every vector space $V$ has at least two subspaces: the zero subspace $\{0_V\}$ and $V$ itself. These are called the trivial subspaces.

    ---

    Vector Spaces: Part 2 - Core Concepts

    Chapter Overview

    This section delves into the fundamental building blocks of linear algebra: vector spaces and their essential properties. We will define what constitutes a vector space, explore the concept of subspaces, and understand how vectors combine through linear combinations. Key topics include linear independence, basis, dimension, and the crucial fundamental subspaces associated with matrices, culminating in the Rank-Nullity Theorem. These concepts are vital for understanding the structure and properties of linear transformations and solving systems of linear equations.

    Key Concepts

#### 1. Vector Space Definition

A vector space $V$ over a field $F$ (typically $\mathbb{R}$ or $\mathbb{C}$) is a set of objects called vectors, together with two operations:
* Vector Addition: For any $u, v \in V$, $u+v \in V$.
* Scalar Multiplication: For any $c \in F$ and $v \in V$, $cv \in V$.

These operations must satisfy the following 10 axioms for all $u, v, w \in V$ and $c, d \in F$:
1. Closure under addition: $u+v \in V$
2. Commutativity of addition: $u+v = v+u$
3. Associativity of addition: $(u+v)+w = u+(v+w)$
4. Existence of zero vector: There exists a zero vector $0 \in V$ such that $v+0 = v$ for all $v \in V$.
5. Existence of additive inverse: For every $v \in V$, there exists an additive inverse $-v \in V$ such that $v+(-v) = 0$.
6. Closure under scalar multiplication: $cv \in V$
7. Distributivity over vector addition: $c(u+v) = cu+cv$
8. Distributivity over scalar addition: $(c+d)v = cv+dv$
9. Associativity of scalar multiplication: $c(dv) = (cd)v$
10. Identity for scalar multiplication: $1v = v$ (where $1$ is the multiplicative identity in $F$).

Examples:
* $\mathbb{R}^n$ (the set of all $n$-tuples of real numbers)
* $P_n(x)$ (the set of all polynomials of degree at most $n$)
* $M_{m \times n}(\mathbb{R})$ (the set of all $m \times n$ matrices with real entries)

#### 2. Subspaces

A subset $W$ of a vector space $V$ is called a subspace of $V$ if $W$ itself is a vector space under the same operations of vector addition and scalar multiplication defined on $V$.
To check whether a non-empty subset $W \subseteq V$ is a subspace, we only need to verify three conditions:
1. Zero vector: The zero vector of $V$ is in $W$ ($0 \in W$). (This implies $W$ is non-empty.)
2. Closure under addition: For any $u, v \in W$, $u+v \in W$.
3. Closure under scalar multiplication: For any $c \in F$ and $v \in W$, $cv \in W$.

Examples:
* The set of all vectors in $\mathbb{R}^3$ of the form $(a, b, 0)$ is a subspace of $\mathbb{R}^3$.
* The set of all symmetric $n \times n$ matrices is a subspace of $M_{n \times n}(\mathbb{R})$.
* The set of all solutions to a homogeneous system of linear equations $Ax=0$ is a subspace (the null space).

#### 3. Linear Combination

A vector $v \in V$ is a linear combination of vectors $v_1, v_2, \dots, v_k \in V$ if there exist scalars $c_1, c_2, \dots, c_k \in F$ such that:
$$v = c_1 v_1 + c_2 v_2 + \dots + c_k v_k$$

Example: In $\mathbb{R}^3$, the vector $(5, 1, 3)$ is a linear combination of $(1, 0, 0)$, $(0, 1, 0)$, and $(0, 0, 1)$ because $(5, 1, 3) = 5(1, 0, 0) + 1(0, 1, 0) + 3(0, 0, 1)$.
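Finding the coefficients of a linear combination amounts to solving a linear system whose columns are the given vectors. A minimal sketch, assuming NumPy and an invertible coefficient matrix (the vectors $v_1, v_2, v_3$ here are chosen for illustration):

```python
import numpy as np

# Illustrative basis vectors; solving B c = v recovers the coefficients
# expressing v as a linear combination of v1, v2, v3.
v1 = np.array([1.0, 1.0, 0.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = np.array([1.0, 0.0, 1.0])
B = np.column_stack([v1, v2, v3])   # vectors as columns

v = np.array([5.0, 1.0, 3.0])
c = np.linalg.solve(B, v)           # c = [1.5, -0.5, 3.5]
```

Here $v = 1.5\,v_1 - 0.5\,v_2 + 3.5\,v_3$; when the matrix is rectangular or singular, `np.linalg.lstsq` can be used instead to test whether $v$ lies in the span at all.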

#### 4. Span of a Set of Vectors

The span of a set of vectors $S = \{v_1, v_2, \dots, v_k\}$ in a vector space $V$, denoted $\operatorname{span}(S)$ or $\operatorname{span}\{v_1, \dots, v_k\}$, is the set of all possible linear combinations of these vectors:
$$\operatorname{span}(S) = \{c_1 v_1 + c_2 v_2 + \dots + c_k v_k \mid c_i \in F \text{ for } i=1, \dots, k\}$$

Property: $\operatorname{span}(S)$ is always a subspace of $V$. It is the smallest subspace of $V$ that contains all vectors in $S$.

Example: In $\mathbb{R}^3$, $\operatorname{span}\{(1, 0, 0), (0, 1, 0)\}$ is the $xy$-plane, which is a subspace of $\mathbb{R}^3$.

#### 5. Linear Independence and Dependence

A set of vectors $S = \{v_1, v_2, \dots, v_k\}$ in a vector space $V$ is said to be:
* Linearly Independent: if the only solution to the vector equation $c_1 v_1 + c_2 v_2 + \dots + c_k v_k = 0$ is $c_1 = c_2 = \dots = c_k = 0$.
* Linearly Dependent: if there exist scalars $c_1, c_2, \dots, c_k$, not all zero, such that $c_1 v_1 + c_2 v_2 + \dots + c_k v_k = 0$. This means at least one vector in the set can be expressed as a linear combination of the others.

Properties:
* Any set containing the zero vector is linearly dependent.
* If a set of vectors is linearly independent, any subset of these vectors is also linearly independent.
* If a set of vectors is linearly dependent, any superset containing these vectors is also linearly dependent.

Examples:
* In $\mathbb{R}^2$, $\{(1, 0), (0, 1)\}$ is linearly independent.
* In $\mathbb{R}^2$, $\{(1, 0), (0, 1), (2, 3)\}$ is linearly dependent because $(2, 3) = 2(1, 0) + 3(0, 1)$.
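In $\mathbb{R}^n$, a finite set of vectors is linearly independent exactly when the matrix with those vectors as columns has rank equal to the number of vectors. A minimal sketch, assuming NumPy (`is_linearly_independent` is an illustrative helper, not a library function):

```python
import numpy as np

def is_linearly_independent(vectors):
    """Vectors in R^n are independent iff the matrix having them
    as columns has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

# The two examples above:
print(is_linearly_independent([np.array([1.0, 0.0]),
                               np.array([0.0, 1.0])]))              # True
print(is_linearly_independent([np.array([1.0, 0.0]),
                               np.array([0.0, 1.0]),
                               np.array([2.0, 3.0])]))              # False
```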

#### 6. Basis of a Vector Space

A set of vectors $B = \{v_1, v_2, \dots, v_n\}$ is a basis for a vector space $V$ if it satisfies two conditions:
1. $B$ is linearly independent.
2. $\operatorname{span}(B) = V$.

Every vector in $V$ can then be expressed uniquely as a linear combination of the basis vectors.

Examples:
* Standard basis for $\mathbb{R}^n$: $E = \{e_1, e_2, \dots, e_n\}$, where $e_i$ has $1$ in the $i$-th position and $0$ elsewhere. For $\mathbb{R}^3$: $\{(1,0,0), (0,1,0), (0,0,1)\}$.
* Standard basis for $P_n(x)$: $\{1, x, x^2, \dots, x^n\}$.
* Standard basis for $M_{m \times n}(\mathbb{R})$: the set of $m \times n$ matrices with a single $1$ in one position and $0$ elsewhere.

#### 7. Dimension of a Vector Space

The dimension of a vector space $V$, denoted $\dim(V)$, is the number of vectors in any basis for $V$. If $V = \{0\}$, then $\dim(V) = 0$.

Properties:
* If $\dim(V) = n$, then any set of $n$ linearly independent vectors in $V$ forms a basis for $V$.
* If $\dim(V) = n$, then any set of more than $n$ vectors in $V$ is linearly dependent.
* If $\dim(V) = n$, then any set of fewer than $n$ vectors cannot span $V$.

Examples:
* $\dim(\mathbb{R}^n) = n$
* $\dim(P_n(x)) = n+1$
* $\dim(M_{m \times n}(\mathbb{R})) = mn$

#### 8. Fundamental Subspaces Associated with a Matrix

For an $m \times n$ matrix $A$:
* Column Space (Image): $\operatorname{Col}(A)$ or $\operatorname{Im}(A)$, the span of the column vectors of $A$; a subspace of $\mathbb{R}^m$.
  $$\operatorname{Col}(A) = \{Ax \mid x \in \mathbb{R}^n\}$$
* Row Space: $\operatorname{Row}(A)$, the span of the row vectors of $A$; a subspace of $\mathbb{R}^n$.
  $$\operatorname{Row}(A) = \{A^T y \mid y \in \mathbb{R}^m\}$$
* Null Space (Kernel): $\operatorname{Null}(A)$ or $\operatorname{Ker}(A)$, the set of all solutions to the homogeneous equation $Ax = 0$; a subspace of $\mathbb{R}^n$.
  $$\operatorname{Null}(A) = \{x \in \mathbb{R}^n \mid Ax = 0\}$$
* Left Null Space: $\operatorname{Null}(A^T)$, the null space of the transpose of $A$, i.e., the set of all solutions to $A^T y = 0$; a subspace of $\mathbb{R}^m$.
  $$\operatorname{Null}(A^T) = \{y \in \mathbb{R}^m \mid A^T y = 0\}$$

#### 9. Rank and Nullity

* Rank of $A$: $\operatorname{rank}(A)$ is the dimension of the column space of $A$, which equals the dimension of the row space of $A$. It is also the number of pivot positions in the row echelon form of $A$.
  $$\operatorname{rank}(A) = \dim(\operatorname{Col}(A)) = \dim(\operatorname{Row}(A))$$
* Nullity of $A$: $\operatorname{nullity}(A)$ is the dimension of the null space of $A$. It equals the number of free variables in the solution of $Ax=0$.
  $$\operatorname{nullity}(A) = \dim(\operatorname{Null}(A))$$

#### 10. Rank-Nullity Theorem

For an $m \times n$ matrix $A$, the sum of its rank and nullity equals the number of columns ($n$):
$$\operatorname{rank}(A) + \operatorname{nullity}(A) = n$$

Important relations:
* $\operatorname{rank}(A) = \operatorname{rank}(A^T)$
* $\dim(\operatorname{Col}(A)) + \dim(\operatorname{Null}(A^T)) = m$
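The theorem can be checked numerically: compute the rank and obtain the nullity as $n - \operatorname{rank}(A)$. A minimal sketch, assuming NumPy (the matrix is an illustrative example whose third row is the sum of the first two):

```python
import numpy as np

# A 3x4 matrix with rank 2: the third row equals the sum of the first two.
A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 2.0],
              [1.0, 3.0, 1.0, 3.0]])

n = A.shape[1]
rank = np.linalg.matrix_rank(A)
nullity = n - rank            # Rank-Nullity: rank(A) + nullity(A) = n

print(rank, nullity)          # 2 2
```

The same `matrix_rank` call on `A.T` confirms $\operatorname{rank}(A) = \operatorname{rank}(A^T)$.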

    Important Points/Tips for Exam Preparation

    * Subspace Conditions: Thoroughly understand and be able to apply the three conditions for a subset to be a subspace. Many PYQs test this directly.
    * Linear Independence: Practice determining if a set of vectors is linearly independent. This often involves solving a homogeneous system of equations.
    * Basis and Dimension: Be proficient in finding a basis for a given vector space or subspace and determining its dimension. This might involve row reducing a matrix.
    * Fundamental Subspaces: Clearly distinguish between the column space, row space, and null space. Understand their definitions, how to find a basis for each, and their respective dimensions.
    * Rank-Nullity Theorem: This theorem is extremely powerful. Use it to relate the dimensions of the fundamental subspaces and to quickly find one dimension if others are known.
    * Matrix Operations: A strong grasp of matrix operations (row reduction, matrix multiplication) is essential for solving problems related to vector spaces.
    * Conceptual Understanding: Don't just memorize definitions; understand the underlying concepts. For instance, why is the zero vector crucial for a subspace? Why does linear independence matter for a basis?

    ---

    Part 3: Advanced Topics in Vector Spaces

    Chapter Overview

    This part delves into advanced concepts building upon the foundational understanding of vector spaces and subspaces. We will explore linear transformations, their properties, and associated spaces like the kernel and image. Furthermore, we will introduce eigenvalues and eigenvectors, crucial for understanding the intrinsic properties of linear operators. Finally, we will cover inner product spaces, which equip vector spaces with geometric notions of length, angle, and orthogonality.

    Key Concepts

#### 1. Linear Transformations

A function $T: V \to W$ between two vector spaces $V$ and $W$ (over the same field $F$) is a linear transformation if for all $u, v \in V$ and $c \in F$:

1. $T(u+v) = T(u) + T(v)$ (Additivity)
2. $T(cu) = cT(u)$ (Homogeneity)

* Properties:
  * $T(0_V) = 0_W$
  * $T(u-v) = T(u) - T(v)$
  * $T\left(\sum_{i=1}^k c_i v_i\right) = \sum_{i=1}^k c_i T(v_i)$

* Kernel (Null Space) of $T$: the set of all vectors in $V$ that are mapped to the zero vector in $W$.
  $$\operatorname{Ker}(T) = \{v \in V \mid T(v) = 0_W\}$$
  $\operatorname{Ker}(T)$ is a subspace of $V$.

* Image (Range Space) of $T$: the set of all vectors in $W$ that are images of some vector in $V$.
  $$\operatorname{Im}(T) = \{w \in W \mid w = T(v) \text{ for some } v \in V\}$$
  $\operatorname{Im}(T)$ is a subspace of $W$.

* Rank of $T$: the dimension of the image space.
  $$\operatorname{rank}(T) = \dim(\operatorname{Im}(T))$$

* Nullity of $T$: the dimension of the kernel space.
  $$\operatorname{nullity}(T) = \dim(\operatorname{Ker}(T))$$

* Rank-Nullity Theorem: for a linear transformation $T: V \to W$, where $V$ is a finite-dimensional vector space:
  $$\dim(V) = \operatorname{rank}(T) + \operatorname{nullity}(T)$$

* Matrix Representation of Linear Transformations: if $V$ and $W$ are finite-dimensional vector spaces with ordered bases $B_V = \{v_1, \dots, v_n\}$ and $B_W = \{w_1, \dots, w_m\}$ respectively, then $T: V \to W$ can be represented by an $m \times n$ matrix $A$ whose $j$-th column is the coordinate vector of $T(v_j)$ with respect to $B_W$:
  $$[T(v)]_{B_W} = A\, [v]_{B_V}$$

#### 2. Eigenvalues and Eigenvectors

* Definition: For a linear operator $T: V \to V$ (or an $n \times n$ matrix $A$), a non-zero vector $v \in V$ is an eigenvector of $T$ (or $A$) if $T(v) = \lambda v$ (or $Av = \lambda v$) for some scalar $\lambda$. The scalar $\lambda$ is called an eigenvalue corresponding to the eigenvector $v$.

* Characteristic Equation: For a matrix $A$, the eigenvalues are the roots of the characteristic equation
  $$\det(A - \lambda I) = 0$$
  where $I$ is the identity matrix.

* Eigenspace: For an eigenvalue $\lambda$, the set $E_\lambda = \{v \in V \mid Av = \lambda v\}$ is a subspace of $V$, called the eigenspace corresponding to $\lambda$. It consists of all eigenvectors corresponding to $\lambda$ together with the zero vector.

* Algebraic Multiplicity (AM): the multiplicity of $\lambda$ as a root of the characteristic polynomial.

* Geometric Multiplicity (GM): the dimension of the eigenspace $E_\lambda$, i.e., $\dim(E_\lambda) = \operatorname{nullity}(A - \lambda I)$.

* Properties:
  * For any eigenvalue $\lambda$: $1 \le \operatorname{GM}(\lambda) \le \operatorname{AM}(\lambda)$.
  * Eigenvectors corresponding to distinct eigenvalues are linearly independent.
  * A matrix $A$ is diagonalizable if and only if for every eigenvalue $\lambda$, $\operatorname{AM}(\lambda) = \operatorname{GM}(\lambda)$.
  * The sum of the eigenvalues (counting algebraic multiplicities) is the trace of the matrix:
    $$\sum \lambda_i = \operatorname{tr}(A)$$
  * The product of the eigenvalues (counting algebraic multiplicities) is the determinant of the matrix:
    $$\prod \lambda_i = \det(A)$$

#### 3. Inner Product Spaces

* Definition: An inner product on a vector space $V$ over $\mathbb{R}$ (or $\mathbb{C}$) is a function $\langle \cdot, \cdot \rangle: V \times V \to \mathbb{R}$ (or $\mathbb{C}$) satisfying, for all $u, v, w \in V$ and $c \in F$:
  1. $\langle u+v, w \rangle = \langle u, w \rangle + \langle v, w \rangle$ (Additivity in the first argument)
  2. $\langle cu, v \rangle = c \langle u, v \rangle$ (Homogeneity in the first argument)
  3. $\langle u, v \rangle = \overline{\langle v, u \rangle}$ (Conjugate symmetry; for real spaces, $\langle u, v \rangle = \langle v, u \rangle$)
  4. $\langle u, u \rangle \ge 0$, and $\langle u, u \rangle = 0 \iff u = 0$ (Positive-definiteness)

  A vector space equipped with an inner product is called an inner product space.

* Standard Inner Product (Dot Product) in $\mathbb{R}^n$: for $u = (u_1, \dots, u_n)$ and $v = (v_1, \dots, v_n)$:
  $$\langle u, v \rangle = u \cdot v = \sum_{i=1}^n u_i v_i$$

* Norm (Length) of a Vector, induced by the inner product:
  $$\|v\| = \sqrt{\langle v, v \rangle}$$

* Distance between Vectors:
  $$d(u, v) = \|u - v\|$$

* Orthogonality: two vectors $u, v$ are orthogonal if $\langle u, v \rangle = 0$.

* Orthonormal Set: a set of vectors $\{v_1, \dots, v_k\}$ is orthonormal if $\langle v_i, v_j \rangle = \delta_{ij}$ (the Kronecker delta, which is $1$ if $i=j$ and $0$ if $i \ne j$).

* Gram-Schmidt Orthonormalization Process: a method to construct an orthonormal basis $\{e_1, \dots, e_k\}$ from any given basis $\{v_1, \dots, v_k\}$ of an inner product space.
  1. Set $u_1 = v_1$.
  2. For $j = 2, \dots, k$, compute:
     $$u_j = v_j - \sum_{i=1}^{j-1} \frac{\langle v_j, u_i \rangle}{\langle u_i, u_i \rangle} u_i$$
  3. Normalize each $u_j$ to get $e_j = \frac{u_j}{\|u_j\|}$. The set $\{e_1, \dots, e_k\}$ is an orthonormal basis.

Examples

* Linear Transformation: Let $T: \mathbb{R}^2 \to \mathbb{R}^3$ be defined by $T(x, y) = (x+y,\ x-y,\ 2x)$.
  * To find $\operatorname{Ker}(T)$, set $T(x,y) = (0,0,0)$:
    $$x+y=0, \qquad x-y=0, \qquad 2x=0$$
    Solving these equations yields $x=0,\ y=0$. Thus $\operatorname{Ker}(T) = \{(0,0)\}$ and $\operatorname{nullity}(T) = 0$.
  * To find $\operatorname{Im}(T)$, consider the images of the standard basis vectors: $T(1,0) = (1,1,2)$ and $T(0,1) = (1,-1,0)$. These vectors span $\operatorname{Im}(T)$. Since they are linearly independent, $\operatorname{rank}(T) = 2$.
  * Verify the Rank-Nullity Theorem: $\dim(\mathbb{R}^2) = 2$ and $\operatorname{rank}(T) + \operatorname{nullity}(T) = 2 + 0 = 2$. The theorem holds.
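The same computation can be carried out with the standard matrix of $T$, whose columns are $T(1,0)$ and $T(0,1)$. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Standard matrix of T(x, y) = (x+y, x-y, 2x):
# columns are T(1,0) = (1,1,2) and T(0,1) = (1,-1,0).
A = np.array([[1.0,  1.0],
              [1.0, -1.0],
              [2.0,  0.0]])

rank = np.linalg.matrix_rank(A)      # dim Im(T)
nullity = A.shape[1] - rank          # dim Ker(T), by Rank-Nullity

print(rank, nullity)                 # 2 0
```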

* Eigenvalues/Eigenvectors: Consider the matrix
  $$A = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$$
  * Characteristic equation:
    $$\det(A - \lambda I) = \det \begin{bmatrix} 2-\lambda & 1 \\ 1 & 2-\lambda \end{bmatrix} = (2-\lambda)^2 - 1 = 0$$
    $$(2-\lambda)^2 = 1 \quad \Rightarrow \quad 2-\lambda = \pm 1$$
    This gives eigenvalues $\lambda_1 = 1$ and $\lambda_2 = 3$.
  * For $\lambda_1 = 1$: solve $(A-I)v = 0$:
    $$\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
    This implies $x+y=0$. An eigenvector is $v_1 = \begin{bmatrix} 1 \\ -1 \end{bmatrix}$.
  * For $\lambda_2 = 3$: solve $(A-3I)v = 0$:
    $$\begin{bmatrix} -1 & 1 \\ 1 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}$$
    This implies $-x+y=0$. An eigenvector is $v_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.
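These hand computations, together with the trace and determinant identities, can be cross-checked numerically. A minimal sketch assuming NumPy (`np.linalg.eigh` is used because $A$ is symmetric, so the eigenvalues come back real and in ascending order):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eigh(A)   # symmetric matrix: real eigenvalues, ascending
print(eigvals)                          # [1. 3.]

# Trace and determinant identities: sum and product of eigenvalues.
assert np.isclose(eigvals.sum(), np.trace(A))
assert np.isclose(eigvals.prod(), np.linalg.det(A))

# Each column of eigvecs satisfies A v = lambda v.
for lam, v in zip(eigvals, eigvecs.T):
    assert np.allclose(A @ v, lam * v)
```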

* Gram-Schmidt Process: Given the basis $\{(1,1,0), (1,0,1), (0,1,1)\}$ for $\mathbb{R}^3$, let $v_1=(1,1,0)$, $v_2=(1,0,1)$, $v_3=(0,1,1)$.
  1. Set $u_1 = v_1 = (1,1,0)$.
  2. Compute $u_2$:
     $$u_2 = v_2 - \frac{\langle v_2, u_1 \rangle}{\langle u_1, u_1 \rangle} u_1 = (1,0,1) - \frac{(1)(1)+(0)(1)+(1)(0)}{1^2+1^2+0^2}\,(1,1,0) = (1,0,1) - \frac{1}{2}(1,1,0) = \left(\frac{1}{2}, -\frac{1}{2}, 1\right)$$
  3. Compute $u_3$:
     $$u_3 = v_3 - \frac{\langle v_3, u_1 \rangle}{\langle u_1, u_1 \rangle} u_1 - \frac{\langle v_3, u_2 \rangle}{\langle u_2, u_2 \rangle} u_2$$
     First, calculate the inner products:
     $$\langle v_3, u_1 \rangle = (0)(1)+(1)(1)+(1)(0) = 1, \qquad \langle u_1, u_1 \rangle = 1^2+1^2+0^2 = 2$$
     $$\langle v_3, u_2 \rangle = (0)\tfrac{1}{2}+(1)\left(-\tfrac{1}{2}\right)+(1)(1) = \tfrac{1}{2}, \qquad \langle u_2, u_2 \rangle = \tfrac{1}{4}+\tfrac{1}{4}+1 = \tfrac{3}{2}$$
     Substituting these values:
     $$u_3 = (0,1,1) - \frac{1}{2}(1,1,0) - \frac{1/2}{3/2}\left(\frac{1}{2}, -\frac{1}{2}, 1\right) = \left(-\frac{1}{2}, \frac{1}{2}, 1\right) - \left(\frac{1}{6}, -\frac{1}{6}, \frac{1}{3}\right) = \left(-\frac{2}{3}, \frac{2}{3}, \frac{2}{3}\right)$$
  4. Normalize $u_1, u_2, u_3$:
     $$e_1 = \frac{u_1}{\|u_1\|} = \frac{(1,1,0)}{\sqrt{1^2+1^2+0^2}} = \frac{1}{\sqrt{2}}(1,1,0)$$
     $$e_2 = \frac{u_2}{\|u_2\|} = \frac{(1/2, -1/2, 1)}{\sqrt{3/2}} = \frac{1}{\sqrt{6}}(1,-1,2)$$
     $$e_3 = \frac{u_3}{\|u_3\|} = \frac{(-2/3, 2/3, 2/3)}{\sqrt{12/9}} = \frac{1}{\sqrt{3}}(-1,1,1)$$

  The orthonormal basis is
  $$\left\{ \frac{1}{\sqrt{2}}(1,1,0),\ \frac{1}{\sqrt{6}}(1,-1,2),\ \frac{1}{\sqrt{3}}(-1,1,1) \right\}$$
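The process translates directly into code. A minimal sketch assuming NumPy and the standard dot product (`gram_schmidt` is an illustrative helper; in practice `np.linalg.qr` computes an equivalent orthonormal basis):

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize linearly independent vectors in R^n using the
    Gram-Schmidt process with the standard dot product."""
    basis = []
    for v in vectors:
        u = v.astype(float).copy()
        for e in basis:
            u -= np.dot(v, e) * e   # subtract the projection onto each e
        basis.append(u / np.linalg.norm(u))
    return basis

e1, e2, e3 = gram_schmidt([np.array([1.0, 1.0, 0.0]),
                           np.array([1.0, 0.0, 1.0]),
                           np.array([0.0, 1.0, 1.0])])
# e3 is (-1, 1, 1)/sqrt(3), matching the hand computation.
```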

    Important Points/Tips for Exam Preparation

    * Understand Definitions: Be precise with the definitions of linear transformation, kernel, image, eigenvalues, eigenvectors, and inner product. Many questions test conceptual understanding.
    * Subspace Verification: Remember that $\operatorname{Ker}(T)$ and $\operatorname{Im}(T)$ are always subspaces. This is a common type of question (as seen in PYQs).
    * Rank-Nullity Theorem: This is a fundamental theorem. Know how to apply it to find dimensions of kernel or image, or to relate properties of the domain and codomain.
    * Eigenvalue Properties: The relations between eigenvalues and the trace/determinant of a matrix are very useful for quick calculations and verification.
    * Diagonalization: Understand the conditions for a matrix to be diagonalizable, especially the relationship between algebraic and geometric multiplicities.
    * Orthogonality: This is a crucial concept in inner product spaces. Be familiar with the Gram-Schmidt orthonormalization process and its application.
    * Practice Problems: Work through various examples involving finding kernels, images, eigenvalues, and applying Gram-Schmidt.
    * PYQ Analysis: Previous Year Questions often test the understanding of definitions and basic properties, for instance identifying subspaces (which $\operatorname{Ker}(T)$ and $\operatorname{Im}(T)$ are) or properties of linear transformations. Pay attention to wording such as "select all choices that are subspaces" or "which of the following statements is/are correct".

    ---

    Part 4: Applications (Examples)

    Chapter Overview

    This section delves into the practical application of vector space theory, focusing on how to identify and construct vector spaces and, more commonly, subspaces within larger vector spaces like Rn\mathbb{R}^n. Understanding these applications is crucial for solving problems related to linear systems, transformations, and data analysis. The primary focus will be on providing concrete examples and methods to verify if a given set is indeed a subspace, a frequently tested concept in exams.

    Key Concepts

  • Subspace Definition: A non-empty subset WW of a vector space VV is a subspace of VV if it satisfies the following three conditions:

  • * Zero Vector: The zero vector of VV is in WW.
    * Closure under Addition: For any u,vWu, v \in W, their sum u+vu+v is also in WW.
    * Closure under Scalar Multiplication: For any uWu \in W and any scalar cRc \in \mathbb{R}, the scalar multiple cucu is also in WW.

  • Span of a Set of Vectors: The set of all possible linear combinations of a given set of vectors {v1,v2,,vk}\{v_1, v_2, \dots, v_k\} in a vector space VV forms a subspace of VV. This is denoted as span{v1,v2,,vk}\operatorname{span}\{v_1, v_2, \dots, v_k\}.
  • Null Space (Kernel) of a Matrix: For an m×nm \times n matrix AA, the set of all solutions to the homogeneous linear system Ax=0Ax = 0 forms a subspace of Rn\mathbb{R}^n. This is called the null space of AA, denoted as Null(A)\operatorname{Null}(A) or Ker(A)\operatorname{Ker}(A).
  • Column Space (Image) of a Matrix: For an m×nm \times n matrix AA, the set of all linear combinations of the column vectors of AA forms a subspace of Rm\mathbb{R}^m. This is called the column space of AA, denoted as Col(A)\operatorname{Col}(A) or Im(A)\operatorname{Im}(A).

    Important Formulas

    * Subspace Conditions:
    Let WVW \subseteq V be a non-empty subset. WW is a subspace if:
    1. 0VW0_V \in W
    2. u,vW,u+vW\forall u, v \in W, u+v \in W
    3. uW,cR,cuW\forall u \in W, \forall c \in \mathbb{R}, cu \in W

    * Linear Combination:
    A vector vv is a linear combination of vectors v1,,vkv_1, \dots, v_k if there exist scalars c1,,ckc_1, \dots, c_k such that:

    v=c1v1+c2v2++ckvkv = c_1 v_1 + c_2 v_2 + \dots + c_k v_k

    * Span of a Set:
    The span of a set of vectors S={v1,,vk}S = \{v_1, \dots, v_k\} is defined as:

    span(S)={c1v1++ckvkc1,,ckR}\operatorname{span}(S) = \{ c_1 v_1 + \dots + c_k v_k \mid c_1, \dots, c_k \in \mathbb{R} \}

    * Null Space:
    For an m×nm \times n matrix AA:

    Null(A)={xRnAx=0}\operatorname{Null}(A) = \{ x \in \mathbb{R}^n \mid Ax = 0 \}
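Membership in Null(A) is just the condition Ax = 0, so the closure properties can be checked directly in code. The sketch below (not from the notes; the matrix is a hypothetical example, one homogeneous equation x - 2y + 3z = 0) tests closure under addition and scaling:

```python
# Null(A) = { x : Ax = 0 }. If u and v satisfy Au = 0 and Av = 0, then
# A(u + v) = 0 and A(cu) = 0 by linearity; we confirm this on samples.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def in_null(A, x):
    return all(v == 0 for v in matvec(A, x))

A = [[1, -2, 3]]          # one homogeneous equation: x - 2y + 3z = 0
u = [2, 1, 0]             # 2 - 2(1) + 3(0) = 0, so u is in Null(A)
v = [-3, 0, 1]            # -3 - 2(0) + 3(1) = 0, so v is in Null(A)

assert in_null(A, u) and in_null(A, v)
assert in_null(A, [a + b for a, b in zip(u, v)])   # closed under addition
assert in_null(A, [5 * a for a in u])              # closed under scaling
```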

    Examples

    Here are various examples illustrating how to determine if a given set is a subspace of Rn\mathbb{R}^n.

    Example 1: Line through the origin in R2\mathbb{R}^2
    Let W={(x,y)R2y=2x}W = \{(x, y) \in \mathbb{R}^2 \mid y = 2x\}.

  • Zero Vector: (0,0)(0, 0) satisfies 0=2(0)0 = 2(0), so (0,0)W(0, 0) \in W.

  • Closure under Addition: Let u=(x1,y1)u = (x_1, y_1) and v=(x2,y2)v = (x_2, y_2) be in WW. Then y1=2x1y_1 = 2x_1 and y2=2x2y_2 = 2x_2.

  • u+v=(x1+x2,y1+y2)u+v = (x_1+x_2, y_1+y_2). We check if y1+y2=2(x1+x2)y_1+y_2 = 2(x_1+x_2).
    y1+y2=2x1+2x2=2(x1+x2)y_1+y_2 = 2x_1 + 2x_2 = 2(x_1+x_2). So, u+vWu+v \in W.
  • Closure under Scalar Multiplication: Let u=(x,y)Wu = (x, y) \in W and cRc \in \mathbb{R}. Then y=2xy = 2x.

  • cu=(cx,cy)cu = (cx, cy). We check if cy=2(cx)cy = 2(cx).
    cy=c(2x)=2(cx)cy = c(2x) = 2(cx). So, cuWcu \in W.
    Since all three conditions are met, WW is a subspace of R2\mathbb{R}^2.
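The three checks above can be automated as a spot test. The helper below is hypothetical (not from the notes): it samples a few vectors from W = {(x, y) : y = 2x} and tests all three subspace conditions. Passing such checks does not prove W is a subspace, but a single failure would disprove it:

```python
# Spot-check the subspace conditions for W = {(x, y) : y = 2x}.
def in_W(p):
    x, y = p
    return y == 2 * x

samples = [(1, 2), (-3, -6), (0.5, 1.0)]   # all satisfy y = 2x

assert in_W((0, 0))                        # 1. zero vector
for u in samples:
    for v in samples:
        assert in_W((u[0] + v[0], u[1] + v[1]))   # 2. closure under addition
    for c in (-2, 0, 7):
        assert in_W((c * u[0], c * u[1]))         # 3. closure under scaling
```

The same predicate-plus-samples pattern applies to any candidate subset of R^n.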

    Example 2: Plane through the origin in R3\mathbb{R}^3
    Let W={(x,y,z)R3x2y+3z=0}W = \{(x, y, z) \in \mathbb{R}^3 \mid x - 2y + 3z = 0\}.
    This is the solution set of a homogeneous linear equation.

  • Zero Vector: (0,0,0)(0, 0, 0) satisfies 02(0)+3(0)=00 - 2(0) + 3(0) = 0, so (0,0,0)W(0, 0, 0) \in W.

  • Closure under Addition: Let u=(x1,y1,z1)u = (x_1, y_1, z_1) and v=(x2,y2,z2)v = (x_2, y_2, z_2) be in WW.

  • Then x12y1+3z1=0x_1 - 2y_1 + 3z_1 = 0 and x22y2+3z2=0x_2 - 2y_2 + 3z_2 = 0.
    u+v=(x1+x2,y1+y2,z1+z2)u+v = (x_1+x_2, y_1+y_2, z_1+z_2).
    (x1+x2)2(y1+y2)+3(z1+z2)=(x12y1+3z1)+(x22y2+3z2)=0+0=0(x_1+x_2) - 2(y_1+y_2) + 3(z_1+z_2) = (x_1 - 2y_1 + 3z_1) + (x_2 - 2y_2 + 3z_2) = 0 + 0 = 0.
    So, u+vWu+v \in W.
  • Closure under Scalar Multiplication: Let u=(x,y,z)Wu = (x, y, z) \in W and cRc \in \mathbb{R}.

  • Then x2y+3z=0x - 2y + 3z = 0.
    cu=(cx,cy,cz)cu = (cx, cy, cz).
    (cx)2(cy)+3(cz)=c(x2y+3z)=c(0)=0(cx) - 2(cy) + 3(cz) = c(x - 2y + 3z) = c(0) = 0.
    So, cuWcu \in W.
    Thus, WW is a subspace of R3\mathbb{R}^3.

    Example 3: Span of vectors
    Let W=span{[101],[011]}W = \operatorname{span}\left\{ \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} \right\} in R3\mathbb{R}^3.
    By definition, the span of any set of vectors is always a subspace.

  • Zero Vector:

  • 0[101]+0[011]=[000]W0 \cdot \begin{bmatrix} 1 \\ 0 \\ 1 \end{bmatrix} + 0 \cdot \begin{bmatrix} 0 \\ 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix} \in W

  • Closure under Addition: Let u,vWu, v \in W. Then u=c1v1+c2v2u = c_1 v_1 + c_2 v_2 and v=d1v1+d2v2v = d_1 v_1 + d_2 v_2 for some scalars c1,c2,d1,d2c_1, c_2, d_1, d_2.

  • u+v=(c1v1+c2v2)+(d1v1+d2v2)=(c1+d1)v1+(c2+d2)v2u+v = (c_1 v_1 + c_2 v_2) + (d_1 v_1 + d_2 v_2) = (c_1+d_1)v_1 + (c_2+d_2)v_2

    This is a linear combination of v1,v2v_1, v_2, so u+vWu+v \in W.
  • Closure under Scalar Multiplication: Let uWu \in W and kRk \in \mathbb{R}. Then u=c1v1+c2v2u = c_1 v_1 + c_2 v_2.

  • ku=k(c1v1+c2v2)=(kc1)v1+(kc2)v2ku = k(c_1 v_1 + c_2 v_2) = (kc_1)v_1 + (kc_2)v_2

    This is a linear combination of v1,v2v_1, v_2, so kuWku \in W.
    Thus, WW is a subspace of R3\mathbb{R}^3.

    Example 4: Non-subspace (missing zero vector)
    Let W={(x,y)R2y=2x+1}W = \{(x, y) \in \mathbb{R}^2 \mid y = 2x + 1\}.

  • Zero Vector: For (0,0)(0, 0), 0=2(0)+10 = 2(0) + 1 is false (010 \neq 1). So, (0,0)W(0, 0) \notin W.

  • Since the zero vector is not in WW, WW is not a subspace of R2\mathbb{R}^2. (No need to check other conditions).

    Example 5: Non-subspace (not closed under scalar multiplication)
    Let W={(x,y)R2x0,y0}W = \{(x, y) \in \mathbb{R}^2 \mid x \ge 0, y \ge 0\}. (First quadrant)

  • Zero Vector: (0,0)(0, 0) satisfies 00,000 \ge 0, 0 \ge 0, so (0,0)W(0, 0) \in W.

  • Closure under Addition: Let u=(x1,y1)u = (x_1, y_1) and v=(x2,y2)v = (x_2, y_2) be in WW, so x1,y1,x2,y20x_1, y_1, x_2, y_2 \ge 0.

  • u+v=(x1+x2,y1+y2)u+v = (x_1+x_2, y_1+y_2). Sums of non-negative numbers are non-negative, so u+vWu+v \in W. Closure under addition actually holds in general here; the failure comes from scalar multiplication.
  • Closure under Scalar Multiplication: Let u=(1,2)Wu = (1, 2) \in W and c=1Rc = -1 \in \mathbb{R}.

  • cu=(1)(1,2)=(1,2)cu = (-1)(1, 2) = (-1, -2).
    Since 1<0-1 < 0 and 2<0-2 < 0, cuWcu \notin W.
    Since WW is not closed under scalar multiplication, WW is not a subspace of R2\mathbb{R}^2.

    Example 6: Non-subspace (not closed under addition and scalar multiplication)
    Let W={(x,y)R2y=x2}W = \{(x, y) \in \mathbb{R}^2 \mid y = x^2\}.

  • Zero Vector: (0,0)(0, 0) satisfies 0=020 = 0^2, so (0,0)W(0, 0) \in W.

  • Closure under Addition: Let u=(1,1)u = (1, 1) and v=(2,4)v = (2, 4). Both are in WW (1=12,4=221=1^2, 4=2^2).

  • u+v=(1+2,1+4)=(3,5)u+v = (1+2, 1+4) = (3, 5).
    For (3,5)(3, 5) to be in WW, 55 must equal 32=93^2=9. Since 595 \neq 9, u+vWu+v \notin W.
    Scalar multiplication fails as well: for u=(1,1)u = (1, 1) and c=2c = 2, cu=(2,2)cu = (2, 2), but 222=42 \neq 2^2 = 4, so cuWcu \notin W.
    Since WW is closed under neither addition nor scalar multiplication, WW is not a subspace of R2\mathbb{R}^2.
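Counterexamples like this are trivial to confirm in code. The sketch below (not part of the notes) checks that (1, 1) and (2, 4) lie on the parabola y = x^2 while their sum and a scalar multiple do not:

```python
# The parabola y = x^2 is not closed under addition or scaling.
def on_parabola(p):
    x, y = p
    return y == x ** 2

u, v = (1, 1), (2, 4)
assert on_parabola(u) and on_parabola(v)

s = (u[0] + v[0], u[1] + v[1])                 # (3, 5)
assert not on_parabola(s)                      # 5 != 3^2 = 9
assert not on_parabola((2 * u[0], 2 * u[1]))   # (2, 2): 2 != 2^2 = 4
```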

    Important Points/Tips for Exam Preparation

    * Always Check All Three Conditions: For a set to be a subspace, it *must* satisfy all three conditions (zero vector, closure under addition, closure under scalar multiplication). If any one fails, it's not a subspace.
    * Zero Vector is Key: The easiest condition to check first is often the zero vector. If 0VW0_V \notin W, then WW is immediately not a subspace.
    * Homogeneous vs. Non-homogeneous:
    * The solution set of a homogeneous linear system (Ax=0Ax=0) is *always* a subspace (the null space).
    * The solution set of a non-homogeneous linear system (Ax=bAx=b with b0b \neq 0) is *never* a subspace (because it does not contain the zero vector).
    * Geometric Intuition:
    * In R2\mathbb{R}^2, subspaces are the origin, lines through the origin, and R2\mathbb{R}^2 itself.
    * In R3\mathbb{R}^3, subspaces are the origin, lines through the origin, planes through the origin, and R3\mathbb{R}^3 itself.
    * Any set that does not pass through the origin (e.g., y=mx+cy=mx+c with c0c \neq 0) cannot be a subspace.
    * Span is a Subspace: The span of any set of vectors is inherently a subspace. You don't need to prove it from scratch every time; just state it.
    * Common Non-Subspaces: Be familiar with common examples that fail the conditions:
    * Sets not containing the origin (e.g., x=1x=1).
    * Sets defined by inequalities (e.g., x0x \ge 0).
    * Sets defined by non-linear equations (e.g., y=x2y=x^2, xy=0xy=0).
    * Practice with R3\mathbb{R}^3: Many PYQs involve identifying subspaces of R3\mathbb{R}^3. Practice applying the conditions to lines, planes, and other subsets of R3\mathbb{R}^3.
    * Vector Notation: Be comfortable with both coordinate notation (x,y,z)(x,y,z) and column vector notation [xyz]\begin{bmatrix} x \\ y \\ z \end{bmatrix}.

    ---

    Key Points

    * Vector Space Definition: A set VV is a vector space over a field FF if it is closed under vector addition and scalar multiplication, satisfying 10 axioms (associativity, commutativity, existence of zero vector, additive inverse, distributive properties, etc.).
    * Subspace Definition: A non-empty subset WW of a vector space VV is a subspace if it is itself a vector space under the same operations.
    * Subspace Test (Two-Step): A non-empty subset WVW \subseteq V is a subspace if:
    1. For any u,vWu, v \in W, u+vWu+v \in W (closure under addition).
    2. For any uWu \in W and scalar cFc \in F, cuWc \cdot u \in W (closure under scalar multiplication).
    * Subspace Test (One-Step): A non-empty subset WVW \subseteq V is a subspace if for any u,vWu, v \in W and scalars c1,c2Fc_1, c_2 \in F, c1u+c2vWc_1 u + c_2 v \in W (closure under linear combinations).
    * Important Note: The zero vector 0V0_V must always be in any subspace WW. If 0VW0_V \notin W, then WW is not a subspace.
    * Span of a Set: The span of a set of vectors S={v1,v2,,vk}S = \{v_1, v_2, \dots, v_k\} in VV, denoted span(S)\operatorname{span}(S), is the set of all possible linear combinations of vectors in SS. span(S)\operatorname{span}(S) is always a subspace of VV.

    span(S)={c1v1+c2v2++ckvkciF}\operatorname{span}(S) = \{c_1 v_1 + c_2 v_2 + \dots + c_k v_k \mid c_i \in F\}

    * Linear Independence and Dependence:
    * A set of vectors {v1,,vk}\{v_1, \dots, v_k\} is linearly independent if the only solution to c1v1++ckvk=0c_1 v_1 + \dots + c_k v_k = 0 is c1==ck=0c_1 = \dots = c_k = 0.
    * A set of vectors is linearly dependent if there exist scalars c1,,ckc_1, \dots, c_k, not all zero, such that c1v1++ckvk=0c_1 v_1 + \dots + c_k v_k = 0.
    * Basis of a Vector Space: A set of vectors B={b1,b2,,bn}B = \{b_1, b_2, \dots, b_n\} is a basis for a vector space VV if:
    1. BB is linearly independent.
    2. BB spans VV (i.e., span(B)=V\operatorname{span}(B) = V).
    * Dimension of a Vector Space: The number of vectors in any basis for VV is called the dimension of VV, denoted dim(V)\dim(V).
    * Fundamental Subspaces of a Matrix: For an m×nm \times n matrix AA:
    * Column Space (Image): Col(A)={AxxRn}\operatorname{Col}(A) = \{Ax \mid x \in \mathbb{R}^n\}, which is a subspace of Rm\mathbb{R}^m. Its dimension is the rank of AA.
    * Null Space (Kernel): Null(A)={xRnAx=0}\operatorname{Null}(A) = \{x \in \mathbb{R}^n \mid Ax = 0\}, which is a subspace of Rn\mathbb{R}^n. Its dimension is the nullity of AA.
    * Row Space: Row(A)=Col(AT)\operatorname{Row}(A) = \operatorname{Col}(A^T), which is a subspace of Rn\mathbb{R}^n. Its dimension is also the rank of AA.
    * Rank-Nullity Theorem: For an m×nm \times n matrix AA, the sum of the dimension of its column space (rank) and the dimension of its null space (nullity) equals the number of columns (nn).
    rank(A)+nullity(A)=n\operatorname{rank}(A) + \operatorname{nullity}(A) = n
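The theorem can be illustrated on a small concrete matrix. The sketch below (a hypothetical example, not from the notes) computes the rank by Gaussian elimination over exact rationals; the chosen matrix has rank 2 and a one-dimensional null space spanned by (1, -2, 1), so rank + nullity = 3 = n:

```python
# Verify rank(A) + nullity(A) = n on a 3x3 example with exact arithmetic.
from fractions import Fraction

def rank(A):
    # Gaussian elimination over Fractions; returns the number of pivots.
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for col in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if piv is None:
            continue                      # no pivot in this column
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3],
     [2, 4, 6],    # = 2 * (row 1), so it adds nothing to the rank
     [1, 1, 1]]
n = 3
x = [1, -2, 1]     # spans the null space: check Ax = 0 row by row
assert all(sum(a * xi for a, xi in zip(row, x)) == 0 for row in A)
assert rank(A) + 1 == n   # rank 2 + nullity 1 = number of columns
```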

    * Direct Sum: If W1W_1 and W2W_2 are subspaces of VV, their sum is W1+W2={w1+w2w1W1,w2W2}W_1 + W_2 = \{w_1 + w_2 \mid w_1 \in W_1, w_2 \in W_2\}. VV is the direct sum of W1W_1 and W2W_2, denoted V=W1W2V = W_1 \oplus W_2, if V=W1+W2V = W_1 + W_2 and W1W2={0}W_1 \cap W_2 = \{0\}.
    * In this case, dim(V)=dim(W1)+dim(W2)\dim(V) = \dim(W_1) + \dim(W_2).
    * Quotient Space: If WW is a subspace of VV, the quotient space V/WV/W is the set of all cosets v+W={v+wwW}v+W = \{v+w \mid w \in W\} for vVv \in V. V/WV/W is a vector space with operations (v1+W)+(v2+W)=(v1+v2)+W(v_1+W) + (v_2+W) = (v_1+v_2)+W and c(v+W)=(cv)+Wc(v+W) = (cv)+W.
    * dim(V/W)=dim(V)dim(W)\dim(V/W) = \dim(V) - \dim(W) (for finite-dimensional VV).
    * Isomorphism: Two vector spaces VV and WW are isomorphic if there exists a bijective linear transformation (an isomorphism) between them. Isomorphic vector spaces have the same dimension. Any nn-dimensional vector space over a field FF is isomorphic to FnF^n.

