Algorithms › Graph Algorithms — Updated: Mar 2026

Shortest Paths

Comprehensive study notes on Shortest Paths for GATE CS preparation. This chapter covers key concepts, formulas, and examples needed for your exam.


Overview

The problem of determining the shortest path between vertices in a weighted graph is a fundamental and pervasive challenge in the field of algorithms. In its essence, this problem seeks to find a path between two vertices such that the sum of the weights of its constituent edges is minimized. This abstract model finds direct application in a vast array of domains, from network routing and logistics to resource allocation and bioinformatics. For the GATE examination, a mastery of shortest path algorithms is not merely beneficial but essential, as questions in this area test a candidate's core understanding of graph theory, algorithm design paradigms such as greedy methods and dynamic programming, and the efficient use of data structures.

In this chapter, we shall explore the two principal formulations of this problem. We begin by examining the single-source shortest paths (SSSP) problem, where the objective is to find the shortest paths from a designated source vertex to all other vertices in the graph. Subsequently, we will address the more general all-pairs shortest paths (APSP) problem, which requires computing the shortest path between every pair of vertices. Our focus will be on the logic, implementation, and rigorous analysis of the canonical algorithms for each case, paying close attention to their performance characteristics and the specific graph properties, such as the presence of negative-weight edges, under which they are guaranteed to operate correctly. A thorough command of this material is indispensable for solving a significant class of algorithmic problems encountered in the examination.

---

Chapter Contents

| # | Topic | What You'll Learn |
|---|-------|-------------------|
| 1 | Single-Source Shortest Paths | Finding optimal paths from a single source. |
| 2 | All-Pairs Shortest Paths | Computing paths between every pair of vertices. |

---

Learning Objectives

By the End of This Chapter

After completing this chapter, you will be able to:

  • Differentiate, apply, and analyze the Dijkstra and Bellman-Ford algorithms for the single-source shortest paths problem.

  • Formulate and analyze the dynamic programming solution for the all-pairs shortest paths problem, namely the Floyd-Warshall algorithm.

  • Analyze the time and space complexity of each shortest path algorithm and understand the role of data structures like the priority queue.

  • Identify problem constraints, such as negative edge weights, to select the appropriate shortest path algorithm for a given scenario.

---

We now turn our attention to Single-Source Shortest Paths.

## Part 1: Single-Source Shortest Paths

Introduction

The problem of finding the shortest path between vertices in a graph is a cornerstone of algorithm design, with profound implications in fields ranging from network routing and logistics to computational biology and artificial intelligence. The Single-Source Shortest Paths (SSSP) problem, in particular, addresses the task of finding the most efficient route from a designated starting vertex, the source, to every other vertex in the graph. The "length" or "cost" of a path is typically the sum of the weights of its constituent edges, representing metrics such as distance, time, or cost.

Our study will focus on the principal algorithms designed to solve the SSSP problem, each tailored to specific characteristics of the input graph. We will begin by examining the case of unweighted graphs, where Breadth-First Search provides an elegant and efficient solution. Subsequently, we will turn our attention to weighted graphs, first considering the common case of non-negative edge weights with Dijkstra's algorithm, and then extending our analysis to the more general case that permits negative edge weights, addressed by the Bellman-Ford algorithm. Finally, we shall investigate the special, yet important, case of Directed Acyclic Graphs (DAGs), for which a highly efficient linear-time algorithm exists. A thorough understanding of these algorithms and their underlying assumptions is critical for success in competitive examinations like GATE.

📖 Single-Source Shortest Path Problem

Given a weighted, directed graph $G = (V, E)$ with a weight function $w: E \to \mathbb{R}$ mapping edges to real-valued weights, and a distinguished source vertex $s \in V$, the single-source shortest paths problem is to find a path of minimum total weight from $s$ to each vertex $v \in V$. The weight of a path $p = \langle v_0, v_1, \dots, v_k \rangle$ is the sum of the weights of its constituent edges:

$$w(p) = \sum_{i=1}^{k} w(v_{i-1}, v_i)$$

The shortest-path weight from $s$ to $v$, denoted $\delta(s, v)$, is defined as:

$$\delta(s, v) = \min \{ w(p) \mid s \xrightarrow{p} v \}$$

If there is no path from $s$ to $v$, then $\delta(s, v) = \infty$.

---

Key Concepts

At the heart of most shortest-path algorithms is the concept of relaxation, an iterative process of improving an estimate of the shortest path to a vertex. We maintain for each vertex $v$ an attribute $d[v]$, which is an upper bound on the weight of a shortest path from source $s$ to $v$. Initially, $d[s] = 0$ and $d[v] = \infty$ for all other vertices. The process of relaxation may decrease the value of $d[v]$.

### 1. The Relaxation Property

Consider an edge $(u, v)$ with weight $w(u, v)$. If we have found a path to $u$, we can potentially improve our estimate for the shortest path to $v$ by going through $u$. The relaxation of edge $(u, v)$ consists of testing whether the shortest path to $v$ found so far can be improved by going through $u$. If so, we update $d[v]$ and the predecessor of $v$, denoted $\pi[v]$.

The core operation is as follows:

```
RELAX(u, v, w)
    IF d[v] > d[u] + w(u, v) THEN
        d[v] = d[u] + w(u, v)
        π[v] = u
    END IF
```




*Example:* Consider an edge $(u, v)$ with $w(u, v) = 2$, where $d[u] = 5$ and $d[v] = 10$.

Condition: $d[u] + w(u,v) < d[v] \implies 5 + 2 < 10$. Relax the edge, updating $d[v] = 7$.

Different algorithms simply apply this relaxation step in different orders and frequencies.
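The relaxation step maps directly to a few lines of code. Below is a minimal sketch in Python; the names `relax`, `d`, and `pred` are illustrative, not from any specific library.

```python
import math

def relax(u, v, w, d, pred):
    """Relax edge (u, v) with weight w: improve d[v] via u if possible."""
    if d[u] != math.inf and d[u] + w < d[v]:
        d[v] = d[u] + w
        pred[v] = u       # record the predecessor on the improved path
        return True       # the estimate improved
    return False

# The example above: d[u] = 5, d[v] = 10, w(u, v) = 2
d = {'u': 5, 'v': 10}
pred = {'u': None, 'v': None}
relax('u', 'v', 2, d, pred)
print(d['v'], pred['v'])   # 7 u
```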

---

### 2. Breadth-First Search (BFS) for Unweighted Graphs

For the special case where all edge weights are unity (an unweighted graph), the SSSP problem can be solved most efficiently by a Breadth-First Search. BFS systematically explores the graph layer by layer from the source vertex $s$. This layered exploration naturally discovers the shortest path in terms of the number of edges.

The algorithm uses a queue to manage the vertices to be visited. When a vertex $u$ is dequeued, all its unvisited neighbors $v$ are enqueued. The distance to such a neighbor $v$ is simply the distance to $u$ plus one.

📐 BFS Distance Update

For each neighbor $v$ of a vertex $u$:

$$d[v] = d[u] + 1$$

Variables:

  • $d[u]$ = the shortest distance (number of edges) from the source $s$ to vertex $u$.
  • $d[v]$ = the shortest distance from $s$ to vertex $v$.

When to use: for any SSSP problem on an unweighted graph (or a graph where all edges have the same positive weight). Its time complexity is $O(V+E)$.

An important application of BFS on unweighted graphs, particularly trees, is finding the graph's diameter. The diameter of a tree is the longest path between any two nodes. It can be found by a two-step BFS process:

  • Start BFS from an arbitrary node $x$ to find the farthest node $y$.
  • Start a second BFS from $y$ to find the farthest node $z$. The path from $y$ to $z$ is a diameter of the tree. This was the concept tested in PYQ 1.

Worked Example:

Problem: Find the shortest path distances from source vertex A in the unweighted graph below.



(Graph: vertices A–F with undirected edges A–B, A–C, B–D, B–E, C–E, D–F, E–F, as used in the trace below.)

Solution:

We initialize $d[A] = 0$ and $d[v] = \infty$ for all other vertices $v$. We use a queue $Q$.

Step 1: Initialization
$d = \{A:0, B:\infty, C:\infty, D:\infty, E:\infty, F:\infty\}$. $Q = [A]$.

Step 2: Dequeue A. Its neighbors are B and C.
Enqueue B, C. Update distances: $d[B] = d[A] + 1 = 1$, $d[C] = d[A] + 1 = 1$.
$d = \{A:0, B:1, C:1, D:\infty, E:\infty, F:\infty\}$. $Q = [B, C]$.

Step 3: Dequeue B. Its neighbors are D and E.
Enqueue D, E. Update distances: $d[D] = d[B] + 1 = 2$, $d[E] = d[B] + 1 = 2$.
$d = \{A:0, B:1, C:1, D:2, E:2, F:\infty\}$. $Q = [C, D, E]$.

Step 4: Dequeue C. Its neighbor is E.
E is already visited and its distance $d[E] = 2$ is not greater than $d[C] + 1 = 2$. No change.
$d = \{A:0, B:1, C:1, D:2, E:2, F:\infty\}$. $Q = [D, E]$.

Step 5: Dequeue D. Its neighbor is F.
Enqueue F. Update distance: $d[F] = d[D] + 1 = 3$.
$d = \{A:0, B:1, C:1, D:2, E:2, F:3\}$. $Q = [E, F]$.

Step 6: Dequeue E. Its neighbor is F.
F is already visited. $d[F] = 3$ is not greater than $d[E] + 1 = 3$. No change.
$d = \{A:0, B:1, C:1, D:2, E:2, F:3\}$. $Q = [F]$.

Step 7: Dequeue F. It has no unvisited neighbors. $Q$ is now empty. The algorithm terminates.

Answer: The final shortest path distances from A are:
$d[A]=0, d[B]=1, d[C]=1, d[D]=2, d[E]=2, d[F]=3$.
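The trace above can be checked with a short BFS implementation. This is a sketch; the function name `bfs_sssp` is illustrative, and the adjacency lists are those inferred from the worked example.

```python
from collections import deque
import math

def bfs_sssp(adj, source):
    """Shortest distances (in edge count) from source in an unweighted graph."""
    dist = {v: math.inf for v in adj}
    dist[source] = 0
    q = deque([source])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == math.inf:       # first visit gives the shortest distance
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Undirected edges A-B, A-C, B-D, B-E, C-E, D-F, E-F from the example
adj = {
    'A': ['B', 'C'], 'B': ['A', 'D', 'E'], 'C': ['A', 'E'],
    'D': ['B', 'F'], 'E': ['B', 'C', 'F'], 'F': ['D', 'E'],
}
print(bfs_sssp(adj, 'A'))   # {'A': 0, 'B': 1, 'C': 1, 'D': 2, 'E': 2, 'F': 3}
```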

---

### 3. Dijkstra's Algorithm for Non-Negative Weights

When edges have varying non-negative weights, BFS is no longer sufficient. Dijkstra's algorithm provides a solution using a greedy approach. It maintains a set of vertices whose final shortest-path weights have already been determined. In each step, it selects the vertex $u$ not yet in this set that has the smallest distance estimate $d[u]$. It then relaxes all edges leaving $u$.

The efficiency of Dijkstra's algorithm critically depends on the data structure used to store the distance estimates and extract the minimum. A min-priority queue is typically employed.

📐 Dijkstra's Algorithm

Initialize:
$d[s] = 0$, $d[v] = \infty$ for all $v \neq s$.
$S = \emptyset$ (set of visited vertices).
$Q = V$ (priority queue of vertices, keyed by $d$ values).

Algorithm:
While $Q$ is not empty:

  • $u = \text{EXTRACT-MIN}(Q)$
  • $S = S \cup \{u\}$
  • For each neighbor $v$ of $u$: $\text{RELAX}(u, v, w)$

When to use: SSSP on weighted graphs with no negative edge weights. The complexity is $O(E \log V)$ with a binary-heap implementation of the priority queue.

⚠️ Common Mistake

❌ Applying Dijkstra's algorithm to a graph that contains even a single negative-weight edge. The greedy choice of selecting the vertex with the minimum $d$ value is no longer guaranteed to be optimal: a shorter path might exist via a higher-cost vertex that eventually traverses a sufficiently negative edge.

✅ For graphs with negative edges, use Bellman-Ford or, if the graph is a DAG, the SSSP-on-DAGs algorithm.
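The priority-queue formulation above can be sketched with Python's `heapq`. Stale heap entries are skipped rather than decreased in place (the common "lazy deletion" variant); the small graph at the end is purely illustrative.

```python
import heapq
import math

def dijkstra(adj, source):
    """adj: {u: [(v, w), ...]} with non-negative weights w."""
    dist = {v: math.inf for v in adj}
    dist[source] = 0
    pq = [(0, source)]                    # min-heap of (estimate, vertex)
    while pq:
        du, u = heapq.heappop(pq)
        if du > dist[u]:                  # stale entry: u was already finalized
            continue
        for v, w in adj[u]:
            if dist[u] + w < dist[v]:     # RELAX(u, v, w)
                dist[v] = dist[u] + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# A small illustrative graph (hypothetical)
adj = {'A': [('B', 2), ('C', 4)], 'B': [('D', 7)],
       'C': [('D', 1), ('E', 8)], 'D': [], 'E': []}
print(dijkstra(adj, 'A'))   # {'A': 0, 'B': 2, 'C': 4, 'D': 5, 'E': 12}
```

Note that vertex D is first reached at cost 9 via B, then improved to 5 via C before it is extracted, which is exactly why the stale-entry check is needed.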

---

### 4. Bellman-Ford Algorithm for General Weights

The Bellman-Ford algorithm solves the SSSP problem in the more general case where edge weights may be negative. It is more robust, though less efficient, than Dijkstra's algorithm. It works by systematically relaxing every edge in the graph over $|V| - 1$ passes.

The fundamental insight is that a shortest path can have at most $|V| - 1$ edges (assuming no negative cycles). After one pass of relaxing all edges, the algorithm has found all shortest paths of at most one edge; after two passes, all shortest paths of at most two edges, and so on. Therefore, after $|V| - 1$ passes, it has found all shortest paths.

A crucial feature of Bellman-Ford is its ability to detect negative-weight cycles reachable from the source. If, after $|V| - 1$ passes, a further pass still yields a relaxation (i.e., a distance is shortened), it indicates the presence of a negative-weight cycle.

📐 Bellman-Ford Algorithm

Initialize:
$d[s] = 0$, $d[v] = \infty$ for all $v \neq s$.

Algorithm:
For $i = 1$ to $|V| - 1$:
    For each edge $(u, v) \in E$:
        $\text{RELAX}(u, v, w)$

Negative Cycle Detection:
For each edge $(u, v) \in E$:
    If $d[v] > d[u] + w(u, v)$:
        Report "Negative-weight cycle exists"

When to use: SSSP on weighted graphs that may contain negative-weight edges. Its time complexity is $O(VE)$.
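The passes and the extra detection pass can be sketched directly. This is an illustrative implementation; the example graph is the negative-edge counterexample discussed later in the practice questions ($s \to a$ weight 3, $s \to b$ weight 5, $b \to a$ weight $-3$).

```python
import math

def bellman_ford(vertices, edges, source):
    """edges: list of (u, v, w). Returns (dist, has_negative_cycle)."""
    dist = {v: math.inf for v in vertices}
    dist[source] = 0
    for _ in range(len(vertices) - 1):        # |V| - 1 passes
        for u, v, w in edges:
            if dist[u] != math.inf and dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # One extra pass: any further improvement signals a negative cycle
    for u, v, w in edges:
        if dist[u] != math.inf and dist[u] + w < dist[v]:
            return dist, True
    return dist, False

edges = [('s', 'a', 3), ('s', 'b', 5), ('b', 'a', -3)]
dist, neg = bellman_ford(['s', 'a', 'b'], edges, 's')
print(dist['a'], neg)   # 2 False
```

Dijkstra would have finalized $d[a] = 3$ here; Bellman-Ford correctly finds the cheaper path through $b$.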

---

### 5. SSSP on Directed Acyclic Graphs (DAGs)

For Directed Acyclic Graphs, we can find shortest paths in linear time, faster than both Dijkstra's and Bellman-Ford. This algorithm works even with negative edge weights. The key is to process vertices in a topologically sorted order.

When we process a vertex $u$, we relax all its outgoing edges. Because of the topological sort, by the time we consider $u$, the shortest path distance $d[u]$ has already been finalized: any path to $u$ must pass through vertices that appear before it in the topological ordering, and those have already been processed.

This algorithm's structure is highly adaptable. The standard relaxation for minimum sum can be replaced to solve other path problems, such as finding the path with the maximum product of weights, as seen in PYQ 2.

Worked Example (Adapted for Maximum Product):

Problem: In the given DAG, find the maximum product of edge weights on any path from source S to every other vertex. The "quality score" of S is 1.




(DAG: edges S→A weight 2, S→B weight 0.5, A→C weight 3, B→C weight 8, as used in the trace below.)

Solution:

Step 1: Topologically sort the vertices.
A valid topological order is S, A, B, C.

Step 2: Initialize quality scores.
Let $q[v]$ be the maximum quality score for vertex $v$.
Initialize $q[S] = 1$ and $q[v] = 0$ for all other $v$ (or $-\infty$; 0 is fine here since all weights are positive).
$q = \{S:1, A:0, B:0, C:0\}$.

Step 3: Process vertices in topological order.

  • Process S:
    - Relax edge (S, A): $q[A] = \max(q[A], q[S] \times w(S,A)) = \max(0, 1 \times 2) = 2$.
    - Relax edge (S, B): $q[B] = \max(q[B], q[S] \times w(S,B)) = \max(0, 1 \times 0.5) = 0.5$.
    Current scores: $q = \{S:1, A:2, B:0.5, C:0\}$.
  • Process A:
    - Relax edge (A, C): $q[C] = \max(q[C], q[A] \times w(A,C)) = \max(0, 2 \times 3) = 6$.
    Current scores: $q = \{S:1, A:2, B:0.5, C:6\}$.
  • Process B:
    - Relax edge (B, C): $q[C] = \max(q[C], q[B] \times w(B,C)) = \max(6, 0.5 \times 8) = \max(6, 4) = 6$. No change.
  • Process C:
    - No outgoing edges.

Result:
The maximum quality scores are: $q[S]=1, q[A]=2, q[B]=0.5, q[C]=6$.
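The trace above is the DAG algorithm with the relaxation swapped for a max-product rule. A minimal sketch (the function name `dag_max_product` is illustrative, and the topological order is supplied directly rather than computed):

```python
def dag_max_product(adj, topo_order, source):
    """adj: {u: [(v, w), ...]}. Maximize the product of edge weights."""
    q = {v: 0.0 for v in topo_order}    # 0 is safe here: all weights are positive
    q[source] = 1.0
    for u in topo_order:                # process vertices in topological order
        for v, w in adj[u]:
            q[v] = max(q[v], q[u] * w)  # modified relaxation: product + max
    return q

# The DAG from the worked example
adj = {'S': [('A', 2), ('B', 0.5)], 'A': [('C', 3)], 'B': [('C', 8)], 'C': []}
print(dag_max_product(adj, ['S', 'A', 'B', 'C'], 'S'))
# {'S': 1.0, 'A': 2.0, 'B': 0.5, 'C': 6.0}
```

Replacing the update line with `q[v] = min(q[v], q[u] + w)` (and initializing to $\infty$) recovers the standard shortest-path version.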

---

Problem-Solving Strategies

💡 GATE Strategy: Algorithm Selection

When faced with an SSSP problem, quickly classify the graph using the following decision process to select the most efficient algorithm:

  • Is the graph unweighted?
    - Yes $\implies$ Use BFS. Time: $O(V+E)$.
  • Is the graph weighted? If so, does it contain negative-weight edges?
    - No $\implies$ Use Dijkstra's algorithm. Time: $O(E \log V)$. This is the most common case in GATE.
    - Yes. Is the graph a DAG?
      - Yes $\implies$ Use SSSP on DAGs (topological sort + relaxation). Time: $O(V+E)$.
      - No $\implies$ Use the Bellman-Ford algorithm. Time: $O(VE)$. Also use this if negative cycle detection is required.

---

Common Mistakes

⚠️ Avoid These Errors

  • Using Dijkstra's on graphs with negative edges. This is the most frequent conceptual error. Dijkstra's greedy strategy is not guaranteed to be optimal in the presence of negative weights.
    Correct Approach: Always check for negative weights. If present, use Bellman-Ford or, if applicable, the DAG algorithm.
  • Incorrectly modifying the relaxation step for non-standard problems. Problems may define path "cost" as a product, maximum edge weight, etc. Simply summing weights will be incorrect.
    Correct Approach: Carefully read the problem's definition of path cost and modify the initialization and relaxation step accordingly. For maximum product, use $q[v] = \max(q[v], q[u] \times w(u,v))$. For the maximum-bottleneck path, use $d[v] = \max(d[v], \min(d[u], w(u,v)))$.
  • Terminating Bellman-Ford too early. The algorithm requires exactly $|V| - 1$ full iterations to guarantee finding shortest paths of up to $|V| - 1$ edges.
    Correct Approach: Ensure the outer loop runs from $1$ to $|V| - 1$. The subsequent $|V|$-th pass is exclusively for negative cycle detection.
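The three cost definitions mentioned above differ only in the relaxation rule (and the initialization: source $0$ for sums, $1$ for products, $\infty$ for bottlenecks). A side-by-side sketch, with illustrative function names:

```python
import math

# Three relaxation rules for the same edge (u, v) with weight w;
# d holds the current best estimates

def relax_sum(d, u, v, w):          # standard shortest path: minimize the sum
    d[v] = min(d[v], d[u] + w)

def relax_product(d, u, v, w):      # maximize the product of edge weights
    d[v] = max(d[v], d[u] * w)

def relax_bottleneck(d, u, v, w):   # widest path: maximize the minimum edge
    d[v] = max(d[v], min(d[u], w))

d = {'u': 5, 'v': 10}
relax_sum(d, 'u', 'v', 2)
print(d['v'])   # 7
```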

---

Practice Questions

:::question type="MCQ" question="Consider the execution of Dijkstra's algorithm on the given graph, starting from source vertex A. What is the distance to vertex E (d[E]) immediately after vertex C is extracted from the priority queue?" options=["10", "11", "12", "14"] answer="12" hint="Trace the algorithm step-by-step. Update distances for neighbors of each extracted vertex. Remember which vertices have been finalized." solution="
Step 1: Initialization
$d = \{A:0, B:\infty, C:\infty, D:\infty, E:\infty\}$. Priority queue $Q = \{A:0, B:\infty, C:\infty, D:\infty, E:\infty\}$.



(Graph, reconstructed from the trace below: A→B weight 2, A→C weight 4, B→D weight 7, C→D weight 1, C→E weight 8; the diagram also contains an edge of weight 3 not used in this trace.)

Step 2: Extract A (cost 0).
Neighbors of A are B and C.
Relax (A,B): $d[B] = \min(\infty, 0+2) = 2$.
Relax (A,C): $d[C] = \min(\infty, 0+4) = 4$.
$d = \{A:0, B:2, C:4, D:\infty, E:\infty\}$. $Q = \{B:2, C:4, D:\infty, E:\infty\}$.

Step 3: Extract B (cost 2).
Neighbor of B is D.
Relax (B,D): $d[D] = \min(\infty, 2+7) = 9$.
$d = \{A:0, B:2, C:4, D:9, E:\infty\}$. $Q = \{C:4, D:9, E:\infty\}$.

Step 4: Extract C (cost 4).
Neighbors of C are D and E.
Relax (C,D): $d[D] = \min(9, 4+1) = 5$.
Relax (C,E): $d[E] = \min(\infty, 4+8) = 12$.
At this exact moment, after C is extracted and its neighbors are relaxed, $d[E]$ becomes 12.
$d = \{A:0, B:2, C:4, D:5, E:12\}$. $Q = \{D:5, E:12\}$.

Result:
The distance to E is 12.
"
:::

:::question type="NAT" question="Consider a graph with 5 vertices {V1, V2, V3, V4, V5}. The Bellman-Ford algorithm is run with V1 as the source. The distance array d after the first pass of relaxation (i.e., after all edges have been relaxed once) is [0, 8, ∞, 7, 5]. After the second pass, the array is [0, 8, 12, 6, 5]. What will be the value of d[V3] in the final, stable distance array?" answer="11" hint="Analyze the changes between passes. A change in d[Vk] implies it was relaxed via some vertex whose own distance was updated in the previous pass. Identify the edge that caused the change to d[V4] and then use that information to find the path to V3." solution="
Step 1: Analyze the changes from Pass 1 to Pass 2.
Initial: $d_0 = [0, \infty, \infty, \infty, \infty]$.
After Pass 1: $d_1 = [0, 8, \infty, 7, 5]$. These distances must come from direct edges out of the source: $(V_1, V_2)$ with weight 8, $(V_1, V_4)$ with weight 7, and $(V_1, V_5)$ with weight 5.
After Pass 2: $d_2 = [0, 8, 12, 6, 5]$.

Step 2: Identify the relaxations in Pass 2.

  • $d[V_2]$ and $d[V_5]$ did not change.
  • $d[V_3]$ changed from $\infty$ to 12. Some edge $(u, V_3)$ was relaxed with $d_1[u] + w(u, V_3) = 12$; with $d_1[V_2] = 8$, this is the edge $(V_2, V_3)$ of weight 4, giving the path $V_1 \to V_2 \to V_3$ of weight $8 + 4 = 12$.
  • $d[V_4]$ changed from 7 to 6. The only finite-distance vertex that can cause this is $V_5$: $d_1[V_5] + w(V_5, V_4) = 6 \implies 5 + w(V_5, V_4) = 6 \implies w(V_5, V_4) = 1$, giving the path $V_1 \to V_5 \to V_4$ of weight 6.

Step 3: Consider Pass 3.
The improvement $d[V_4] = 6$ made in Pass 2 can propagate in Pass 3 along edges leaving $V_4$. The graph in question contains an edge $(V_4, V_3)$ of weight 5, so Pass 3 relaxes it:
$d_3[V_3] = \min(d_2[V_3], d_2[V_4] + w(V_4, V_3)) = \min(12, 6 + 5) = 11$.
No further relaxations are possible after this, so the array is stable.

Final stable distance: $d[V_3] = 11$.
"
:::

    :::question type="MSQ" question="Which of the following statements about Single-Source Shortest Path algorithms are correct?" options=["BFS can be used to find SSSP on a weighted DAG if all weights are positive integers.", "Dijkstra's algorithm can correctly find shortest paths in a graph with negative-weight edges, provided there are no negative-weight cycles.", "The Bellman-Ford algorithm's time complexity is independent of the graph's structure and depends only on the number of vertices and edges.", "In a DAG, the SSSP problem can be solved in linear time by processing vertices in a topologically sorted order."] answer="C,D" hint="Evaluate each statement against the fundamental properties of the algorithms. Consider edge cases and the core assumptions of each algorithm." solution="

  • A: Incorrect. BFS assumes each edge traversal adds a uniform cost (1). On a weighted graph, even with positive integer weights, an edge of weight 2 should count as 'longer' than an edge of weight 1. BFS cannot distinguish this and simply finds the path with the fewest edges, not the minimum weight.
  • B: Incorrect. This is a common misconception. Dijkstra's greedy choice can fail even when there are no negative cycles. A simple counterexample: edges $s \to a$ with weight 3, $s \to b$ with weight 5, and $b \to a$ with weight $-3$. Dijkstra finalizes the path to $a$ with cost 3, but the true shortest path is $s \to b \to a$ with cost $5 - 3 = 2$.
  • C: Correct. The time complexity of Bellman-Ford is $O(VE)$. It always performs $|V| - 1$ iterations, each of which relaxes all $E$ edges. This does not improve for sparse graphs or special structures (unlike Dijkstra's, which is faster on sparse graphs).
  • D: Correct. This is the standard linear-time algorithm for SSSP on DAGs. Processing vertices in topological order ensures that when we relax the edges leaving a vertex $u$, its own shortest path distance $d[u]$ has already been finalized. This works even with negative edge weights and runs in $O(V+E)$.

    "
    :::

---

Summary

Key Takeaways for GATE

  • Algorithm choice is paramount. The properties of the graph (unweighted, non-negative weights, negative weights, or DAG structure) dictate the correct and most efficient SSSP algorithm to use.
  • Dijkstra's algorithm is the default for non-negative weights. Its greedy approach implemented with a priority queue ($O(E \log V)$) is highly efficient and frequently tested. Remember its limitation: it fails with negative edges.
  • Bellman-Ford is the robust solution for general graphs. It handles negative weights and is the primary tool for detecting negative-weight cycles, at the cost of a higher runtime ($O(VE)$).
  • DAGs have a special, linear-time solution. The combination of topological sort and relaxation ($O(V+E)$) is extremely powerful and can be adapted to non-standard path problems (e.g., max-product paths).

---

What's Next?

💡 Continue Learning

A solid grasp of Single-Source Shortest Paths is a gateway to more advanced graph algorithm topics.

  • All-Pairs Shortest Paths (APSP): The SSSP problem finds paths from one source. APSP generalizes this to find the shortest path between every pair of vertices in the graph. The Floyd-Warshall algorithm is a key technique here.
  • Minimum Spanning Trees (MST): It is crucial to distinguish between shortest paths and MSTs. An MST (found via Prim's or Kruskal's algorithm) is a minimum-weight subgraph that connects all vertices, which is not necessarily composed of the shortest paths between them.
  • Topological Sorting: As seen, this is a direct prerequisite for the most efficient SSSP algorithm on DAGs. Mastering its implementation and properties is essential.

---

💡 Moving Forward

Now that you understand Single-Source Shortest Paths, let's explore All-Pairs Shortest Paths, which builds on these concepts.

---

## Part 2: All-Pairs Shortest Paths

Introduction

In our study of graph algorithms, we have thus far considered single-source shortest path problems, where the objective is to find the shortest paths from a designated source vertex to all other vertices in the graph. We now turn our attention to a more general problem: the All-Pairs Shortest Paths (APSP) problem. Here, the goal is to determine the shortest path distance between every pair of vertices $(u, v)$ in a given weighted, directed graph $G = (V, E)$.

The APSP problem is fundamental in applications where a global understanding of the network's connectivity is required, such as in routing protocols, transit network analysis, and bioinformatics. The primary algorithm we shall examine for this purpose is the Floyd-Warshall algorithm, an elegant and powerful method based on the principle of dynamic programming. This algorithm is particularly noteworthy for its ability to handle graphs with negative edge weights, provided they do not contain negative-weight cycles.

📖 All-Pairs Shortest Paths Problem

Given a weighted, directed graph $G = (V, E)$ with a weight function $w: E \to \mathbb{R}$, the All-Pairs Shortest Paths problem is to find a path of minimum total weight between every pair of vertices $u, v \in V$. The output is typically represented as a matrix of shortest path distances.

---

The Floyd-Warshall Algorithm

The Floyd-Warshall algorithm solves the all-pairs shortest paths problem by iteratively considering all possible intermediate vertices for each pair of endpoints. It employs a dynamic programming approach, building up the solution by allowing an increasingly larger set of vertices to be used as intermediaries in the paths.

Let the vertices be numbered from $1$ to $n$, where $n = |V|$. Let $d_{ij}^{(k)}$ be the weight of the shortest path from vertex $i$ to vertex $j$ using only intermediate vertices from the set $\{1, 2, \dots, k\}$.

The core idea is that for a path from $i$ to $j$ with intermediate vertices from $\{1, 2, \dots, k\}$, there are two possibilities:

  • The path does not use vertex $k$. In this case, the shortest path is the same as the shortest path using only intermediate vertices from $\{1, 2, \dots, k-1\}$. Its length is $d_{ij}^{(k-1)}$.
  • The path does use vertex $k$. This path can be decomposed into a path from $i$ to $k$ and a path from $k$ to $j$, each of which uses only intermediate vertices from $\{1, 2, \dots, k-1\}$. Its length is $d_{ik}^{(k-1)} + d_{kj}^{(k-1)}$.

This observation gives rise to the fundamental recurrence relation of the algorithm.

📐 Floyd-Warshall Recurrence

$$d_{ij}^{(k)} = \min\left(d_{ij}^{(k-1)},\; d_{ik}^{(k-1)} + d_{kj}^{(k-1)}\right)$$

Variables:

  • $d_{ij}^{(k)}$: the shortest path distance from vertex $i$ to $j$ with intermediate vertices from $\{1, \dots, k\}$.
  • $d_{ij}^{(k-1)}$: the shortest path distance from $i$ to $j$ with intermediate vertices from $\{1, \dots, k-1\}$.

When to use: this recurrence is the core computational step of the Floyd-Warshall algorithm, applied for each $k$ from $1$ to $n$.

The base case, $d_{ij}^{(0)}$, is the initial weight matrix of the graph: $d_{ij}^{(0)} = w(i, j)$ if there is an edge from $i$ to $j$, $d_{ii}^{(0)} = 0$, and $d_{ij}^{(0)} = \infty$ if there is no edge from $i$ to $j$ (for $i \neq j$).

The algorithm proceeds by computing a sequence of $n \times n$ matrices $D^{(0)}, D^{(1)}, \dots, D^{(n)}$, where $D^{(k)} = [d_{ij}^{(k)}]$. The final matrix, $D^{(n)}$, contains the shortest path distances between all pairs of vertices.

### Algorithm Implementation

The algorithm can be implemented using three nested loops.

```python
import math

def floyd_warshall(dist, n):
    """dist: n x n matrix with dist[i][j] = w(i, j) if edge (i, j) exists,
    math.inf otherwise, and dist[i][i] = 0 for all i. Modified in place."""
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # If vertex k lies on a shorter path from i to j,
                # update the value of dist[i][j]
                if dist[i][k] != math.inf and dist[k][j] != math.inf:
                    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])
    return dist
```

    The time complexity of this algorithm is determined by the three nested loops, making it $O(n^3)$, where $n = |V|$. Its space complexity is $O(n^2)$ to store the distance matrix.

    ## Detecting Negative-Weight Cycles

    A significant advantage of the Floyd-Warshall algorithm is its ability to detect the presence of negative-weight cycles. A negative-weight cycle exists if, after the algorithm completes, any diagonal element of the distance matrix, $d_{ii}^{(n)}$, is negative. If $d_{ii}^{(n)} < 0$ for some vertex $i$, it implies that there is a path from $i$ back to itself with a negative total weight.
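    In code, this detection is a simple scan of the diagonal after the main loops finish. A minimal sketch (the helper name `has_negative_cycle` is illustrative, not part of the algorithm's standard presentation):

    ```python
    INF = float('inf')

    def has_negative_cycle(dist, V):
        # A negative diagonal entry means some vertex can reach
        # itself along a walk of negative total weight.
        return any(dist[i][i] < 0 for i in range(V))

    # Example: a 2-vertex graph with a cycle 0 -> 1 -> 0 of total weight -1.
    dist = [[0, 2],
            [-3, 0]]
    for k in range(2):
        for i in range(2):
            for j in range(2):
                if dist[i][k] != INF and dist[k][j] != INF:
                    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

    print(has_negative_cycle(dist, 2))  # True
    ```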

    Worked Example:

    Problem: Consider a weighted, directed graph on vertices $\{1, 2, 3, 4\}$ with the following directed edges: $1 \to 2$ (weight 3), $1 \to 4$ (weight 8), $2 \to 3$ (weight 7), $2 \to 4$ (weight 1), $3 \to 4$ (weight 2), and $4 \to 1$ (weight $-4$). Compute the all-pairs shortest paths matrix $D^{(4)}$ using the Floyd-Warshall algorithm.

    Solution:

    Step 1: Initialize the distance matrix $D^{(0)}$ based on the graph's direct edge weights.

    $D^{(0)} = \begin{pmatrix} 0 & 3 & \infty & 8 \\ \infty & 0 & 7 & 1 \\ \infty & \infty & 0 & 2 \\ -4 & \infty & \infty & 0 \end{pmatrix}$

    Step 2: Compute $D^{(1)}$ by considering vertex 1 as an intermediate vertex. We check for updates of the form $d_{ij} = \min(d_{ij}, d_{i1} + d_{1j})$.
    The only path that can be improved via vertex 1 is from 4 to 2: $d_{42} = d_{41} + d_{12} = -4 + 3 = -1$.

    $D^{(1)} = \begin{pmatrix} 0 & 3 & \infty & 8 \\ \infty & 0 & 7 & 1 \\ \infty & \infty & 0 & 2 \\ -4 & \mathbf{-1} & \infty & 0 \end{pmatrix}$

    Step 3: Compute $D^{(2)}$ using vertex 2 as an intermediate.

    • Path 1 to 3: $d_{13} = d_{12} + d_{23} = 3 + 7 = 10$.

    • Path 1 to 4: $d_{14} = d_{12} + d_{24} = 3 + 1 = 4$.

    • Path 4 to 3: $d_{43} = d_{42} + d_{23} = -1 + 7 = 6$.


    $D^{(2)} = \begin{pmatrix} 0 & 3 & \mathbf{10} & \mathbf{4} \\ \infty & 0 & 7 & 1 \\ \infty & \infty & 0 & 2 \\ -4 & -1 & \mathbf{6} & 0 \end{pmatrix}$

    Step 4: Continue this process for $k=3$ and $k=4$.
    For $k=3$: $d_{14}$ could be improved via 3: $d_{13} + d_{34} = 10 + 2 = 12$, but $4 < 12$, so no change. $d_{24}$ could be improved via 3: $d_{23} + d_{34} = 7 + 2 = 9$, but $1 < 9$, so no change. The matrix $D^{(3)}$ remains the same as $D^{(2)}$.

    For $k=4$:

    • Path 2 to 1: $d_{21} = d_{24} + d_{41} = 1 + (-4) = -3$.

    • Path 3 to 1: $d_{31} = d_{34} + d_{41} = 2 + (-4) = -2$.

    • Path 3 to 2: $d_{32} = d_{34} + d_{42} = 2 + (-1) = 1$.


    Result: The final matrix $D^{(4)}$ is:

    $D^{(4)} = \begin{pmatrix} 0 & 3 & 10 & 4 \\ \mathbf{-3} & 0 & 7 & 1 \\ \mathbf{-2} & \mathbf{1} & 0 & 2 \\ -4 & -1 & 6 & 0 \end{pmatrix}$
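    As a cross-check, the hand computation can be reproduced in code. This sketch builds $D^{(0)}$ from the edge list of the worked example and runs the triple loop (the names `edges` and `expected` are illustrative):

    ```python
    INF = float('inf')
    n = 4
    # Edges of the worked example: (u, v, weight), vertices numbered 1..4.
    edges = [(1, 2, 3), (1, 4, 8), (2, 3, 7), (2, 4, 1), (3, 4, 2), (4, 1, -4)]

    # D^(0): direct edge weights, 0 on the diagonal, infinity elsewhere.
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u - 1][v - 1] = w

    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] != INF and dist[k][j] != INF:
                    dist[i][j] = min(dist[i][j], dist[i][k] + dist[k][j])

    expected = [[0, 3, 10, 4],
                [-3, 0, 7, 1],
                [-2, 1, 0, 2],
                [-4, -1, 6, 0]]
    print(dist == expected)  # True
    ```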

    ---

    Problem-Solving Strategies

    💡 GATE Strategy: Matrix Iteration

    In GATE questions asking for the distance matrix after a specific iteration, say $D^{(k)}$, you do not need to compute the entire final matrix. Simply apply the recurrence relation for the given value of $k$ to the matrix $D^{(k-1)}$. Remember that $D^{(k)}_{ij} = \min(D^{(k-1)}_{ij}, D^{(k-1)}_{ik} + D^{(k-1)}_{kj})$, so you only need to consider paths that pass through vertex $k$.

    ---

    Common Mistakes

    ⚠️ Avoid These Errors
      • Incorrect Initialization: Setting the weight of non-existent edges to 0 instead of $\infty$. This would incorrectly imply a free path exists.
    Correct Approach: Initialize $d_{ij} = \infty$ for all $(i, j)$ where no direct edge exists, and $d_{ii} = 0$.
      • Worrying About In-Place Updates: Assuming that modifying the distance matrix during iteration $k$, and using those updated values within the same iteration's loops over $i$ and $j$, breaks the dynamic programming recurrence.
    Correct Approach: Conceptually, the computation of $D^{(k)}$ uses only the values from $D^{(k-1)}$. In the standard three-loop implementation, updating in place is nevertheless correct, because row $k$ and column $k$ do not change during iteration $k$ ($d_{ik}^{(k)} = d_{ik}^{(k-1)}$ and $d_{kj}^{(k)} = d_{kj}^{(k-1)}$), so two separate matrices are not needed. This is a crucial conceptual point.
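    The first pitfall can be avoided mechanically at initialization time. A minimal sketch (the helper name `init_dist` is illustrative), assuming the graph arrives as a 0-indexed edge list:

    ```python
    INF = float('inf')

    def init_dist(n, edges):
        # Non-existent edges get infinity (NOT 0); the diagonal gets 0.
        dist = [[INF] * n for _ in range(n)]
        for i in range(n):
            dist[i][i] = 0
        for u, v, w in edges:  # vertices numbered 0..n-1
            dist[u][v] = min(dist[u][v], w)  # keep the cheapest parallel edge
        return dist

    d = init_dist(3, [(0, 1, 5), (1, 2, -2)])
    print(d[0][2])  # inf: no direct edge from 0 to 2 before the algorithm runs
    ```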

    ---

    Practice Questions

    :::question type="MCQ" question="For a weighted directed graph with 4 vertices $\{1, 2, 3, 4\}$, the initial distance matrix $D^{(0)}$ is given. What is the value of $d_{42}^{(2)}$ after the second iteration ($k=2$) of the Floyd-Warshall algorithm?

    $D^{(0)} = \begin{pmatrix} 0 & 1 & 5 & \infty \\ \infty & 0 & -2 & 3 \\ \infty & \infty & 0 & 1 \\ 4 & \infty & \infty & 0 \end{pmatrix}$
    " options=["-1", "3", "5", "infinity"] answer="5" hint="First compute $D^{(1)}$ by considering paths through vertex 1. Then compute $D^{(2)}$ using the values from $D^{(1)}$ and considering paths through vertex 2." solution="
    Step 1: Compute $D^{(1)}$ using vertex 1 as an intermediate.
    The only entry that improves is from 4 to 2, via 1: $d_{42}^{(1)} = d_{41}^{(0)} + d_{12}^{(0)} = 4 + 1 = 5$.
    All other entries remain the same.
    $D^{(1)} = \begin{pmatrix} 0 & 1 & 5 & \infty \\ \infty & 0 & -2 & 3 \\ \infty & \infty & 0 & 1 \\ 4 & 5 & \infty & 0 \end{pmatrix}$

    Step 2: Compute $D^{(2)}$ using vertex 2 as an intermediate, based on the values from $D^{(1)}$.
    Applying the recurrence $d_{ij}^{(k)} = \min(d_{ij}^{(k-1)}, d_{ik}^{(k-1)} + d_{kj}^{(k-1)})$ with $i=4$, $j=2$, $k=2$ gives $d_{42}^{(2)} = \min(d_{42}^{(1)}, d_{42}^{(1)} + d_{22}^{(1)}) = \min(5, 5 + 0) = 5$, so this entry does not change.

    Other entries do change in this iteration, for example:
    $d_{13}^{(2)} = \min(d_{13}^{(1)}, d_{12}^{(1)} + d_{23}^{(1)}) = \min(5, 1 + (-2)) = -1$.
    $d_{14}^{(2)} = \min(d_{14}^{(1)}, d_{12}^{(1)} + d_{24}^{(1)}) = \min(\infty, 1 + 3) = 4$.
    But $d_{42}$ is not affected by any path through vertex 2; its value remains 5.

    Result: The value of $d_{42}^{(2)}$ is 5.
    "
    :::

    :::question type="NAT" question="A directed graph has 4 vertices. After running the Floyd-Warshall algorithm completely, the final distance matrix $D^{(4)}$ is obtained. The diagonal entry $d_{33}^{(4)}$ is found to be -2. What is the minimum number of edges in the negative-weight cycle detected?" answer="1" hint="A negative value on the diagonal, $d_{ii} < 0$, indicates a negative-weight cycle involving vertex $i$. What is the simplest possible cycle?" solution="
    The Floyd-Warshall algorithm detects a negative-weight cycle if any diagonal element $d_{ii}$ in the final distance matrix is negative. The value $d_{ii}$ represents the shortest path from vertex $i$ back to itself, so $d_{ii} < 0$ implies the existence of a cycle through vertex $i$ with negative total weight. The simplest possible cycle is a self-loop, a single edge from a vertex to itself. If vertex 3 has a self-loop with weight -2, then $d_{33}$ would be -2, and this cycle involves only one edge. A cycle through more vertices, for example $3 \to j \to 3$, would involve at least two edges. The question asks for the minimum number of edges.

    Result: A self-loop is a cycle of length 1. Therefore, the minimum number of edges is 1.
    "
    :::

    :::question type="MSQ" question="Which of the following statements about the Floyd-Warshall algorithm are correct for a graph with $n$ vertices and $m$ edges?" options=["Its time complexity is $O(n^3)$", "It cannot handle graphs with negative edge weights", "It uses a dynamic programming approach", "Its space complexity is dependent on the number of edges, $O(m)$"] answer="Its time complexity is $O(n^3)$,It uses a dynamic programming approach" hint="Analyze the algorithm's structure, capabilities, and memory requirements." solution="

    • Statement A: The algorithm consists of three nested loops, each running up to $n$ (the number of vertices). Therefore, its time complexity is $O(n^3)$. This is correct.

    • Statement B: The Floyd-Warshall algorithm can handle negative edge weights. It fails only if there is a negative-weight cycle. Thus, this statement is incorrect.

    • Statement C: The algorithm is a classic example of dynamic programming. It builds the solution for paths with intermediate vertices from $\{1, \dots, k\}$ using the solution for paths with intermediate vertices from $\{1, \dots, k-1\}$. This is correct.

    • Statement D: The algorithm requires an $n \times n$ matrix to store the distances between all pairs of vertices. Therefore, its space complexity is $O(n^2)$, which depends on the number of vertices, not edges. This statement is incorrect.

    "
    :::

    ---

    Summary

    Key Takeaways for GATE

    • Core Problem: The All-Pairs Shortest Paths problem seeks the shortest path distance between every pair of vertices in a graph.

    • Floyd-Warshall Algorithm: This is the primary method for solving APSP. It is a dynamic programming algorithm with a time complexity of $O(|V|^3)$ and space complexity of $O(|V|^2)$.

    • Recurrence Relation: The algorithm is defined by the recurrence $d_{ij}^{(k)} = \min(d_{ij}^{(k-1)}, d_{ik}^{(k-1)} + d_{kj}^{(k-1)})$, where $k$ is the intermediate vertex being considered.

    • Negative Weights & Cycles: The algorithm works correctly with negative edge weights but fails if a negative-weight cycle is present. A negative-weight cycle is detected if any diagonal element $d_{ii}$ is negative after the algorithm terminates.

    ---

    What's Next?

    💡 Continue Learning

    This topic connects to:

      • Single-Source Shortest Paths (Dijkstra's, Bellman-Ford): Understand the distinction. APSP can be solved by running Bellman-Ford from each vertex (complexity $O(V^2 E)$), but Floyd-Warshall is often simpler and more efficient for dense graphs. It is important to know when to apply each algorithm.

      • Dynamic Programming: The Floyd-Warshall algorithm is a prime example of this paradigm. Recognizing its structure helps in solving other dynamic programming problems on graphs or matrices.


    Master these connections for a comprehensive understanding of graph algorithms in your GATE preparation!

    ---

    Chapter Summary

    📖 Shortest Paths - Key Takeaways

    In our study of shortest path algorithms, we have established several fundamental methods for finding the paths of minimum weight in a graph. Mastery of these concepts is essential for success. The following points summarize the most critical takeaways from this chapter.

    • Algorithm Selection is Key: The choice of algorithm is dictated by the graph's properties. For unweighted graphs, Breadth-First Search (BFS) finds the shortest path in $O(V+E)$ time. For weighted graphs with non-negative edges, Dijkstra's algorithm is the preferred choice. For graphs that include negative-weight edges, the Bellman-Ford algorithm is required.

    • Dijkstra's Algorithm (Greedy Approach): For the single-source shortest path problem on graphs with non-negative edge weights, Dijkstra's algorithm provides an efficient greedy solution. Its time complexity depends on the priority queue implementation, typically $O(E \log V)$ with a binary heap.

    • Bellman-Ford Algorithm (Dynamic Programming): The Bellman-Ford algorithm addresses the single-source problem in graphs that may contain negative-weight edges. Its $O(VE)$ complexity is higher than Dijkstra's, but it offers the critical capability of detecting negative-weight cycles reachable from the source.

    • Shortest Paths in DAGs: For Directed Acyclic Graphs (DAGs), we can solve the single-source shortest path problem in linear time, $O(V+E)$. This is achieved by first finding a topological sort of the vertices and then relaxing the edges in that order.

    • All-Pairs Shortest Paths (APSP): To find the shortest path between all pairs of vertices, the Floyd-Warshall algorithm is a canonical dynamic programming solution with a time complexity of $O(V^3)$. It is suitable for dense graphs and can handle negative edge weights, provided no negative-weight cycles exist.

    • Johnson's Algorithm for Sparse Graphs: For computing all-pairs shortest paths in sparse graphs with negative edge weights, Johnson's algorithm is asymptotically more efficient than Floyd-Warshall. It cleverly uses Bellman-Ford to re-weight the graph's edges to be non-negative, allowing Dijkstra's algorithm to be run from each vertex.
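    The re-weighting step in Johnson's algorithm is worth remembering explicitly. With $h(v)$ defined as the Bellman-Ford distance to $v$ from an auxiliary source connected to every vertex by a zero-weight edge, each edge is re-weighted as:

    $\hat{w}(u, v) = w(u, v) + h(u) - h(v) \ge 0$

    Non-negativity follows from the triangle inequality $h(v) \le h(u) + w(u, v)$ guaranteed by Bellman-Ford. Because the $h$ terms telescope along any path, every $u$-to-$v$ path shifts by the same constant $h(u) - h(v)$, so shortest paths are preserved, and the original distances are recovered as $\delta(u, v) = \hat{\delta}(u, v) - h(u) + h(v)$.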

    ---

    Chapter Review Questions

    :::question type="MCQ" question="A weighted, directed graph $G=(V, E)$ contains edges with negative weights but no negative-weight cycles. We need to compute the shortest path distances from a source vertex $s$ to all other vertices. Which of the following algorithms is the most asymptotically efficient and correct choice for this specific task?" options=["Run Bellman-Ford from source $s$.","Run Floyd-Warshall to compute all-pairs shortest paths, then extract the required distances.","Run Dijkstra's algorithm from source $s$ directly, as the absence of negative cycles makes it safe.","Run Dijkstra's algorithm from source $s$ after adding a large constant to every edge weight to make them all non-negative."] answer="A" hint="Consider the specific constraints on edge weights and whether the problem is single-source or all-pairs. Recall which algorithms are invalidated by the presence of negative edge weights." solution="
    The problem is to find single-source shortest paths in a graph with negative edge weights but no negative-weight cycles.

    • Option D: Adding a constant to every edge weight is incorrect because it penalizes paths with more edges more heavily, which can change which path is shortest.
    • Option C: This is a common misconception. The absence of negative-weight cycles is a requirement for shortest paths to be well-defined, but the presence of negative-weight edges alone invalidates the greedy strategy of Dijkstra's algorithm, which permanently selects the vertex with the smallest distance estimate.
    • Option B: The Floyd-Warshall algorithm would correctly compute the distances, but it solves the all-pairs shortest path problem with a complexity of $O(|V|^3)$. This is not the most efficient approach for a single-source problem.
    • Option A: The Bellman-Ford algorithm is specifically designed for the single-source shortest path problem in graphs that may contain negative-weight edges. Its time complexity is $O(|V||E|)$. For the single-source problem under these constraints, it is the most efficient and correct choice among the options.
    Therefore, running Bellman-Ford from the source $s$ is the correct approach.
    "
    :::

    :::question type="NAT" question="Consider a weighted, directed graph with vertices $\{S, A, B, C, D\}$ and source vertex $S$. The edges and their weights are: $(S,A,6)$, $(S,B,7)$, $(A,C,5)$, $(A,D,-4)$, $(B,C,-2)$, $(C,D,1)$, $(D,A,2)$. Using the Bellman-Ford algorithm, what is the shortest path distance estimate for vertex $D$ (i.e., $d[D]$) after exactly two full passes of edge relaxations?" answer="0" hint="Initialize distances from $S$ to $\infty$ (except for $d[S]=0$). Perform the relaxation step for every edge in the graph, twice. The order of relaxation within a pass matters, so follow the order given in the question." solution="
    We initialize the distance array $d$ as: $d[S]=0$, and $d[v]=\infty$ for all other vertices $v$.

    Initial State: $d = [S:0, A:\infty, B:\infty, C:\infty, D:\infty]$

    Pass 1: We relax all edges in the given order.

  • Relax $(S,A,6)$: $d[A] = \min(\infty, d[S]+6) = 6$.

  • Relax $(S,B,7)$: $d[B] = \min(\infty, d[S]+7) = 7$.

  • Relax $(A,C,5)$: $d[C] = \min(\infty, d[A]+5) = \min(\infty, 6+5) = 11$.

  • Relax $(A,D,-4)$: $d[D] = \min(\infty, d[A]-4) = \min(\infty, 6-4) = 2$.

  • Relax $(B,C,-2)$: $d[C] = \min(11, d[B]-2) = \min(11, 7-2) = 5$.

  • Relax $(C,D,1)$: $d[D] = \min(2, d[C]+1) = \min(2, 5+1) = 2$.

  • Relax $(D,A,2)$: $d[A] = \min(6, d[D]+2) = \min(6, 2+2) = 4$.
  • End of Pass 1: $d = [S:0, A:4, B:7, C:5, D:2]$

    Pass 2: We relax all edges again using the updated distance values.

  • Relax $(S,A,6)$: $d[A] = \min(4, d[S]+6) = \min(4, 0+6) = 4$. (No change)

  • Relax $(S,B,7)$: $d[B] = \min(7, d[S]+7) = \min(7, 0+7) = 7$. (No change)

  • Relax $(A,C,5)$: $d[C] = \min(5, d[A]+5) = \min(5, 4+5) = 5$. (No change)

  • Relax $(A,D,-4)$: $d[D] = \min(2, d[A]-4) = \min(2, 4-4) = 0$. (Updated)

  • Relax $(B,C,-2)$: $d[C] = \min(5, d[B]-2) = \min(5, 7-2) = 5$. (No change)

  • Relax $(C,D,1)$: $d[D] = \min(0, d[C]+1) = \min(0, 5+1) = 0$. (No change)

  • Relax $(D,A,2)$: $d[A] = \min(4, d[D]+2) = \min(4, 0+2) = 2$. (Updated)
  • End of Pass 2: $d = [S:0, A:2, B:7, C:5, D:0]$

    The distance estimate for vertex $D$ after the second pass is 0.
    "
    :::
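    The pass-by-pass trace in the solution above can be reproduced with a few lines of code, relaxing the edges in exactly the order stated in the question:

    ```python
    INF = float('inf')
    edges = [('S', 'A', 6), ('S', 'B', 7), ('A', 'C', 5), ('A', 'D', -4),
             ('B', 'C', -2), ('C', 'D', 1), ('D', 'A', 2)]

    d = {v: INF for v in 'SABCD'}
    d['S'] = 0

    # Two full Bellman-Ford passes over the edge list.
    for _ in range(2):
        for u, v, w in edges:
            if d[u] != INF and d[u] + w < d[v]:
                d[v] = d[u] + w

    print(d['D'])  # 0, matching the hand trace
    ```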

    :::question type="MCQ" question="The Floyd-Warshall algorithm is executed on a weighted adjacency matrix $W$ of a directed graph $G=(V, E)$ where $|V|=n$. Let $D^{(k)}[i][j]$ denote the value in the distance matrix after the $k$-th iteration of the outermost loop (where $k$ ranges from $1$ to $n$). Which statement correctly describes the property of $D^{(k)}[i][j]$?" options=["It is the length of the shortest path from vertex $i$ to vertex $j$ that uses at most $k$ edges.","It is the length of the shortest path from vertex $i$ to vertex $j$ where all intermediate vertices in the path belong to the set $\{1, 2, \dots, k\}$.","It is the length of the shortest path from vertex $i$ to vertex $j$ that is guaranteed to pass through vertex $k$.","It is the shortest path from vertex $i$ to vertex $j$ found after considering all paths of length exactly $k$."] answer="B" hint="Recall the fundamental dynamic programming recurrence of the Floyd-Warshall algorithm and what the index $k$ represents." solution="
    The Floyd-Warshall algorithm is based on the following dynamic programming recurrence relation:

    $D^{(k)}[i][j] = \min(D^{(k-1)}[i][j], \; D^{(k-1)}[i][k] + D^{(k-1)}[k][j])$

    Here, $D^{(0)}$ is the initial weight matrix.

    The recurrence considers two possibilities for the shortest path from $i$ to $j$:

  • The path uses only intermediate vertices from the set $\{1, 2, \dots, k-1\}$. Its length is $D^{(k-1)}[i][j]$.

  • The path does use vertex $k$ as an intermediate vertex. It is formed by concatenating the shortest path from $i$ to $k$ and the shortest path from $k$ to $j$, where both of these sub-paths use only intermediate vertices from $\{1, 2, \dots, k-1\}$. Its length is $D^{(k-1)}[i][k] + D^{(k-1)}[k][j]$.
  • Taking the minimum of these two cases, $D^{(k)}[i][j]$ represents the length of the shortest path from $i$ to $j$ where any intermediate vertices on the path must be in the set $\{1, 2, \dots, k\}$.

    • Option A describes a property related to the Bellman-Ford algorithm or matrix-exponentiation methods, not Floyd-Warshall.
    • Option C is incorrect; the path does not have to pass through $k$. The algorithm simply considers $k$ as a potential intermediate vertex.
    • Option D is incorrect; the algorithm does not enumerate paths by their length in this manner.
    Thus, option B is the correct definition.
    "
    :::

    :::question type="NAT" question="The shortest path in a weighted, undirected graph from vertex $s$ to vertex $t$ has a total weight of 40 and consists of 8 edges. The weight of every edge in the entire graph is increased by 5, and the original path remains a shortest path in the modified graph. What is the weight of the shortest path from $s$ to $t$ in the modified graph?" answer="80" hint="Consider how the total weight of a path with a fixed number of edges changes when a constant value is added to each edge's weight." solution="
    Let the original shortest path from $s$ to $t$ be $P$. The path consists of $k=8$ edges. The total weight of this path is given as:

    $W(P) = \sum_{e \in P} w(e) = 40$

    In the modified graph, the weight of every edge $e$, denoted $w'(e)$, is the original weight plus 5:
    $w'(e) = w(e) + 5$

    The weight of the path $P$ in the new graph, $W'(P)$, will be:
    $W'(P) = \sum_{e \in P} w'(e) = \sum_{e \in P} (w(e) + 5)$

    We can separate the summation:
    $W'(P) = \left(\sum_{e \in P} w(e)\right) + \left(\sum_{e \in P} 5\right)$

    Since there are $k=8$ edges in the path $P$, the second term is $8 \times 5 = 40$:
    $W'(P) = W(P) + (8 \times 5) = 40 + 40 = 80$

    A crucial caveat: adding the same positive constant to every edge does not, in general, preserve shortest paths. A path with more edges has its total weight increased more than a path with fewer edges, so a path with fewer edges can overtake the original shortest path. The question stipulates that the original path remains shortest, so the answer is simply the weight of $P$ in the modified graph.

    The new shortest path weight is 80.
    "
    :::

    ---

    What's Next?

    💡 Continue Your GATE Journey

    Having completed this chapter on Shortest Path algorithms, you have established a firm foundation for several advanced and related topics in the GATE syllabus for Algorithms. The concepts of greedy selection, dynamic programming, and graph relaxation are central to many other problems.

    Key connections to your learning so far:

      • Graph Traversals: We have seen that Breadth-First Search (BFS) is not merely a traversal technique but is, in fact, the single-source shortest path algorithm for unweighted graphs.

      • Algorithm Design Paradigms: This chapter provided concrete, high-impact examples of major design paradigms. Dijkstra's algorithm is a classic greedy algorithm, while Bellman-Ford and Floyd-Warshall are quintessential applications of dynamic programming.

      • Data Structures: The performance of Dijkstra's algorithm is a direct illustration of the importance of efficient data structures, with its complexity varying based on the choice of a binary heap versus a Fibonacci heap for the priority queue.


      How these concepts build into future topics:
      • Minimum Spanning Trees (MST): The next logical step is often the study of MSTs (Prim's and Kruskal's algorithms). While both MST and shortest path algorithms operate on weighted graphs, it is critical to understand the distinct problems they solve. MSTs find a minimum weight subgraph that connects all vertices, whereas shortest path algorithms find a minimum weight path between specific vertices.

      • Network Flow: Algorithms for solving the maximum flow problem, such as the Edmonds-Karp algorithm, rely on repeatedly finding "augmenting paths" in a residual graph. This subproblem is a shortest path problem, typically solved with BFS to find the path with the minimum number of edges.

      • NP-Completeness: Understanding why shortest path problems are solvable in polynomial time provides an essential contrast to related problems, such as the Longest Path problem, which is NP-Hard. This distinction deepens your understanding of computational complexity.

    🎯 Key Points to Remember

    • Master the core concepts in Shortest Paths before moving to advanced topics
    • Practice with previous year questions to understand exam patterns
    • Review short notes regularly for quick revision before exams
