
2. Give and analyse an algorithm for computing the square of a directed graph G given in (a) adjacency-list representation and (b) adjacency-matrix representation.

To compute G² from the adjacency-list representation Adj of G, we perform the following for each list Adj[u]:

for each vertex v in Adj[u]:
    for each vertex w in Adj[v]:
        (u, w) ∈ E², so insert w into Adj²[u]

where Adj² is the adjacency-list representation of G². For every edge in Adj we scan at most |V| vertices, thus we compute Adj² in time O(VE).

After we have computed Adj², we have to remove any duplicate edges from the lists (there may be more than one two-edge path in G between a given pair of vertices). Removing duplicate edges takes O(V + E′) time, where E′ = O(VE) is the number of edges in Adj² (see for instance problem CLRS 22.1-4). Thus the total running time is O(VE) + O(V + E′) = O(VE).
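As a concrete illustration, here is a short Python sketch of the adjacency-list construction above. The function name and the vertex labelling (integers 0 to |V| − 1) are assumptions of this sketch; it also removes duplicates on the fly with a per-vertex set rather than in a separate pass, which keeps the same O(VE) bound.

```python
def square_adjacency_list(adj):
    """Return Adj², the adjacency lists of G², given Adj as a list of lists.

    Vertices are assumed to be the integers 0 .. len(adj) - 1.
    An edge (u, w) is added to G² whenever G has a two-edge path u -> v -> w.
    """
    n = len(adj)
    adj2 = [[] for _ in range(n)]
    for u in range(n):
        seen = set()                    # avoids duplicate edges in Adj²[u]
        for v in adj[u]:
            for w in adj[v]:
                if w not in seen:
                    seen.add(w)
                    adj2[u].append(w)
    return adj2


# Example: a directed 3-cycle 0 -> 1 -> 2 -> 0; its square is the reverse cycle.
print(square_adjacency_list([[1], [2], [0]]))   # [[2], [0], [1]]
```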

Let A denote the adjacency-matrix representation of G. The adjacency-matrix representation of G² is the square of A. Computing A² can be done in time O(V³) (and even faster, theoretically; Strassen's algorithm, for example, will compute A² in O(V^(lg 7)) time).
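For the adjacency-matrix case, the O(V³) computation is just a Boolean matrix product of A with itself. The sketch below is a plain triple loop in Python; the function name is an assumption of this illustration.

```python
def square_adjacency_matrix(A):
    """Return the adjacency matrix of G², where A is a 0/1 matrix (list of lists)."""
    n = len(A)
    A2 = [[0] * n for _ in range(n)]
    for u in range(n):
        for w in range(n):
            # (u, w) ∈ E² iff some intermediate vertex v has both A[u][v] and A[v][w] set
            A2[u][w] = int(any(A[u][v] and A[v][w] for v in range(n)))
    return A2
```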

3. Given an undirected graph G = (V, E), determine in O(V) time whether it has a cycle.

There are two cases:

(a) E < V: In this case the graph may or may not have a cycle. To check, do a graph traversal (BFS or DFS). If during the traversal you meet an edge (u, v) leading to an already visited vertex (GRAY or BLACK) that is not the parent of u in the traversal tree, then you have found a cycle; otherwise there is no cycle. This takes O(V + E) = O(V) time, since E < V. (A sketch of the complete algorithm appears after case (b) below.)

(b) E ≥ V: In this case we will prove that the graph must have a cycle.

Claim 1: A tree of n nodes has n − 1 edges.

Proof of Claim 1: By induction. Base case: a tree of 1 vertex has 0 edges. Inductive step: assume that every tree of n vertices has n − 1 edges. A tree T of n + 1 vertices consists of a tree T′ of n vertices plus a leaf connected to T′ through a single edge (every tree with at least two vertices has a leaf, and removing it leaves a tree). Thus the number of edges in T is the number of edges in T′ plus one. By the induction hypothesis T′ has n − 1 edges, so T has n edges. QED.

Coming back to the problem: assume first that the graph G is connected. Perform a DFS traversal of G starting at an arbitrary vertex. Since the graph is connected, the resulting DFS tree contains all the vertices of the graph, and by Claim 1 it has V − 1 edges. Therefore, since E ≥ V, there is at least one edge of G that is not in the DFS tree; this edge, together with the tree path between its endpoints, gives a cycle in G.

If the graph G is not connected: suppose G has two connected components G1 = (V1, E1) and G2 = (V2, E2). Then it is easy to prove, by contradiction, that E ≥ V implies that either E1 ≥ V1 or E2 ≥ V2 (or both). In either case, G1 or G2 (or both) has a cycle by the argument above. If G is not connected and has k connected components, the same argument works, except that formally we need induction on k.
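Putting the two cases together, a Python sketch of the whole O(V) test might look as follows. The edge-list input format and the iterative DFS are choices of this sketch, not part of the original solution; the key point is that the traversal in case (a) touches at most V − 1 edges.

```python
def has_cycle(n, edges):
    """Return True iff the undirected graph with vertices 0..n-1 and the given
    edge list contains a cycle, in O(V) time."""
    if len(edges) >= n:                   # case (b): E >= V forces a cycle
        return True

    # Case (a): E < V, so a full traversal costs only O(V + E) = O(V).
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    visited = [False] * n
    for start in range(n):                # the graph may be disconnected
        if visited[start]:
            continue
        visited[start] = True
        stack = [(start, -1)]             # (vertex, parent in the DFS tree)
        while stack:
            u, parent = stack.pop()
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    stack.append((v, u))
                elif v != parent:         # edge to a visited non-parent vertex
                    return True
    return False


# Example: a path 0-1-2 has no cycle; adding the edge (2, 0) creates one.
print(has_cycle(3, [(0, 1), (1, 2)]))          # False
print(has_cycle(3, [(0, 1), (1, 2), (2, 0)]))  # True
```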

5. (i) Is Radix Sort preferable to comparison-based sorting algorithms like Quicksort?

(ii) Write and explain the running time of Radix Sort.

(i) If we have lg n bits for every digit, the running time of radix sort appears to be better than that of Quicksort for a wide range of input numbers. However, the constant factors hidden in the asymptotic notation are higher for radix sort, and Quicksort often uses hardware caches more effectively. Also, radix sort uses counting sort as a subroutine, and counting sort takes extra space to sort the numbers, so radix sort does not sort in place. Neither algorithm is preferable in all situations.

(ii) Radix sort can sort n integers in base k, each with at most d digits, in O(d(n + k)) time. It does this by using counting sort to sort the n integers digit by digit, starting from the least significant digit (the ones digit) and ending with the most significant digit. Each counting sort takes O(n + k) time, since there are n elements and each digit is an integer in the range 0 to k − 1 (we are in base k). Since the maximum number of digits in these n integers is d, we execute counting sort d times, which gives the O(d(n + k)) running time for radix sort.

The running time of radix sort depends on the base k in which the integers are represented. Large bases result in slower counting sorts but fewer of them, since the number of digits per element decreases. Small bases result in faster counting sorts but more digits and consequently more counting sorts. Let us find the optimal base k for radix sort. Say we are sorting n integers in the range 0 to u − 1. The maximum number of digits of an element in base k is log_k u, so to minimize the running time we want to minimize O((n + k) log_k u). It turns out that the best choice is k = n, in which case the running time of radix sort is O(n log_n u). Note that if u = n^O(1), the running time becomes O(n), giving us a linear-time sorting algorithm whenever the range of the integers is polynomial in the number of integers being sorted.
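To make the analysis concrete, here is a small Python sketch of radix sort in an arbitrary base k, with counting sort as the per-digit subroutine. The function name and the restriction to non-negative integers are assumptions of this sketch.

```python
def radix_sort(a, k):
    """Sort a list of non-negative integers, treating them as base-k numbers.

    Performs one stable counting sort per digit, least significant first,
    for a total of O(d(n + k)) time, where d is the number of base-k digits
    of the largest element.
    """
    if not a:
        return []
    out = list(a)
    place = 1                              # weight of the current digit: k^0, k^1, ...
    while place <= max(a):
        count = [0] * k                    # counting sort on digit (x // place) % k
        for x in out:
            count[(x // place) % k] += 1
        for d in range(1, k):              # prefix sums give final positions
            count[d] += count[d - 1]
        result = [0] * len(out)
        for x in reversed(out):            # reverse scan keeps the sort stable
            digit = (x // place) % k
            count[digit] -= 1
            result[count[digit]] = x
        out = result
        place *= k
    return out


# Example usage; choosing k close to n, as the analysis above suggests.
print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66], k=8))
# [2, 24, 45, 66, 75, 90, 170, 802]
```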
