I collected and compiled this pseudocode from the book:
Introduction to the Design and Analysis of Algorithms, Second Edition, by Anany Levitin.
Note that throughout these notes, we assume that inputs to algorithms fall within their specified ranges and hence require no verification. When implementing the algorithms as programs to be used in actual applications, you should provide such verifications.
About the pseudocode: for the sake of simplicity, we omit declarations of variables and use indentation to show the scope of statements such as for, if, and while. As you will see, we use an arrow <- for the assignment operation and two slashes // for comments.
Algorithm InsertionSort(A[0..n-1])
// Sorts a given array by insertion sort
// Input: An array A[0..n-1] of n orderable elements
// Output: Array A[0..n-1] sorted in nondecreasing order
for i <- 1 to n-1 do
    v <- A[i]
    j <- i-1
    while j ≥ 0 and A[j] > v do
        A[j+1] <- A[j]
        j <- j-1
    A[j+1] <- v
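A minimal Python sketch of this pseudocode (the function name insertion_sort and the sample data are my own):

def insertion_sort(a):
    """Sort list a in place in nondecreasing order (the InsertionSort pseudocode above)."""
    for i in range(1, len(a)):
        v = a[i]                 # element to insert into the sorted prefix a[0..i-1]
        j = i - 1
        while j >= 0 and a[j] > v:
            a[j + 1] = a[j]      # shift larger elements one position to the right
            j -= 1
        a[j + 1] = v             # drop v into its correct slot


data = [89, 45, 68, 90, 29, 34, 17]
insertion_sort(data)
print(data)                      # [17, 29, 34, 45, 68, 89, 90]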
Consider the following version of insertion sort
Algorithm InsertionSort2(A[0..n-1])
for i <- 1 to n-1 do
    j <- i-1
    while j ≥ 0 and A[j] > A[j+1] do
        swap(A[j], A[j+1])
        j <- j-1
What is its time efficiency? How does it compare to that of the version given above?
The efficiency classes of both versions are the same. The inner loop of InsertionSort consists of one key assignment and one index decrement; the inner loop of InsertionSort2 consists of one key swap (i.e., three key assignments) and one index decrement. If we disregard the time spent on the index decrements, the ratio of the running times can be estimated as 3Ca/Ca = 3; if we take into account the time spent on the index decrements, the ratio's estimate becomes (3Ca+Cd)/(Ca+Cd), where Ca and Cd are the times of one key assignment and one index decrement, respectively.
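For comparison, a minimal Python sketch of the swap-based version (the function name insertion_sort2 is mine):

def insertion_sort2(a):
    """Swap-based insertion sort (InsertionSort2 above): same efficiency class,
    but each inner-loop step performs a swap (three assignments) instead of one."""
    for i in range(1, len(a)):
        j = i - 1
        while j >= 0 and a[j] > a[j + 1]:
            a[j], a[j + 1] = a[j + 1], a[j]   # swap the adjacent out-of-order pair
            j -= 1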
Algorithm DFS(G)
// Implements a depth-first search traversal of a given graph
// Input: Graph G = <V, E>
// Output: Graph G with its vertices marked with consecutive integers
// in the order they've been first encountered by the DFS traversal
mark each vertex in V with 0 as a mark of being "unvisited"
count <- 0
for each vertex v in V do
    if v is marked with 0
        dfs(v)
dfs(v)
// visits recursively all the unvisited vertices connected to vertex v by a path
// and numbers them in the order they are encountered via global variable count
count <- count+1; mark v with count
for each vertex w in V adjacent to v do
    if w is marked with 0
        dfs(w)
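A Python sketch of this traversal, assuming the graph is given as an adjacency-list dictionary (the names dfs_numbering and order are mine):

def dfs_numbering(adj):
    """Number the vertices of a graph in the order DFS first reaches them.
    adj maps each vertex to a list of its neighbours; returns vertex -> number."""
    order = {v: 0 for v in adj}      # 0 plays the role of the "unvisited" mark
    count = 0

    def dfs(v):
        nonlocal count
        count += 1
        order[v] = count             # number v when it is first encountered
        for w in adj[v]:
            if order[w] == 0:
                dfs(w)

    for v in adj:
        if order[v] == 0:            # start a new tree for every unvisited vertex
            dfs(v)
    return order


g = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
print(dfs_numbering(g))              # {'a': 1, 'b': 2, 'c': 4, 'd': 3}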
Algorithm BFS(G)
// Implements a breadth-first search traversal of a given graph
// Input: Graph G = <V, E>
// Output: Graph G with its vertices marked with consecutive integers
// in the order they have been visited by the BFS traversal
mark each vertex in V with 0 as a mark of being "unvisited"
count <- 0
for each vertex v in V do
    if v is marked with 0
        bfs(v)
bfs(v)
// visits all the unvisited vertices connected to vertex v by a path and assigns them
// the numbers in the order they are visited via global variable count
count <- count+1; mark v with count and initialize a queue with v
while the queue is not empty do
    for each vertex w in V adjacent to the front vertex do
        if w is marked with 0
            count <- count+1; mark w with count
            add w to the queue
    remove the front vertex from the queue
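A Python sketch of the BFS numbering under the same adjacency-list assumption (the name bfs_numbering is mine):

from collections import deque

def bfs_numbering(adj):
    """Number the vertices of a graph in the order BFS visits them.
    adj maps each vertex to a list of its neighbours; returns vertex -> number."""
    order = {v: 0 for v in adj}      # 0 plays the role of the "unvisited" mark
    count = 0

    def bfs(start):
        nonlocal count
        count += 1
        order[start] = count
        queue = deque([start])
        while queue:
            front = queue[0]
            for w in adj[front]:
                if order[w] == 0:
                    count += 1
                    order[w] = count
                    queue.append(w)
            queue.popleft()          # the front vertex is fully processed

    for v in adj:
        if order[v] == 0:
            bfs(v)
    return order


g = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
print(bfs_numbering(g))              # {'a': 1, 'b': 2, 'c': 3, 'd': 4}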
Algorithm JohnsonTrotter(n)
// Implements Johnson-Trotter algorithm for generating permutations
// Input: A positive integer n
// Output: A list of all permutations of {1, ..., n}
initialize the first permutation with ←1 ←2 ... ←n
while the last permutation has a mobile element do
    find its largest mobile element k
    swap k and the adjacent integer k's arrow points to
    reverse the direction of all the elements that are larger than k
    add the new permutation to the list
Here is an application of this algorithm for n = 3 (in the book, the largest mobile integer is shown underlined):
←1←2←3 ←1←3←2 ←3←1←2 →3←2←1 ←2→3←1 ←2←1→3
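A Python sketch of the Johnson-Trotter algorithm, with each arrow represented as a direction value of -1 (pointing left) or +1 (pointing right); the name johnson_trotter is mine:

def johnson_trotter(n):
    """Generate all permutations of 1..n in Johnson-Trotter order."""
    perm = list(range(1, n + 1))
    direction = [-1] * n                   # every element initially points left
    result = [perm[:]]
    while True:
        # find the largest mobile element (its arrow points at a smaller neighbour)
        mobile = -1
        for i, k in enumerate(perm):
            j = i + direction[i]
            if 0 <= j < n and perm[j] < k and (mobile == -1 or k > perm[mobile]):
                mobile = i
        if mobile == -1:                   # no mobile element: all permutations generated
            break
        k = perm[mobile]
        j = mobile + direction[mobile]
        # swap k with the adjacent element its arrow points to (directions move with them)
        perm[mobile], perm[j] = perm[j], perm[mobile]
        direction[mobile], direction[j] = direction[j], direction[mobile]
        # reverse the direction of every element larger than k
        for i, x in enumerate(perm):
            if x > k:
                direction[i] = -direction[i]
        result.append(perm[:])
    return result


print(johnson_trotter(3))
# [[1,2,3], [1,3,2], [3,1,2], [3,2,1], [2,3,1], [2,1,3]]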
Consider the following implementation of the algorithm for generating permutations discovered by B. Heap.
Algorithm HeapPermute(n)
// Implements Heap's algorithm for generating permutations
// Input: A positive integer n and a global array A[1..n]
// Output: All permutations of elements of A
if n = 1
    write A
else
    for i <- 1 to n do
        HeapPermute(n-1)
        if n is odd
            swap A[1] and A[n]
        else
            swap A[i] and A[n]
Trace the algorithm by hand for n = 2, 3, and 4.
For n = 2:
12 21
For n = 3 (read along the rows):
123 213
312 132
231 321
For n = 4 (read along the rows):
1234 2134 3124 1324 2314 3214
4231 2431 3421 4321 2341 3241
4132 1432 3412 4312 1342 3142
4123 1423 2413 4213 1243 2143
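A Python sketch of HeapPermute (the name heap_permute and the mapping of the 1-based pseudocode indices onto Python's 0-based list indices are mine); it reproduces the traces above:

def heap_permute(a, n=None, out=None):
    """Heap's algorithm: collect all permutations of list a into out.
    n is the prefix length still being permuted (defaults to len(a))."""
    if n is None:
        n = len(a)
    if out is None:
        out = []
    if n == 1:
        out.append(a[:])
    else:
        for i in range(1, n + 1):                # i = 1..n as in the pseudocode
            heap_permute(a, n - 1, out)
            if n % 2 == 1:
                a[0], a[n - 1] = a[n - 1], a[0]  # n odd: swap A[1] and A[n]
            else:
                a[i - 1], a[n - 1] = a[n - 1], a[i - 1]  # n even: swap A[i] and A[n]
    return out


print(heap_permute([1, 2, 3]))
# [[1,2,3], [2,1,3], [3,1,2], [1,3,2], [2,3,1], [3,2,1]]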
Write a pseudocode for a recursive algorithm for generating all 2^n bit strings of length n.
Algorithm BitstringsRec(n)
// Generates recursively all the bit strings of a given length
// Input: A positive integer n
// Output: All bit strings of length n as contents of global array B[0..n-1]
if n = 0
    print(B)
else
    B[n-1] <- 0; BitstringsRec(n-1)
    B[n-1] <- 1; BitstringsRec(n-1)
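A Python sketch of BitstringsRec, with a local list b standing in for the global array B[0..n-1] (the names bitstrings_rec and rec are mine):

def bitstrings_rec(n):
    """Print all 2^n bit strings of length n in the order produced by BitstringsRec."""
    b = [0] * n                     # plays the role of the global array B[0..n-1]

    def rec(k):
        if k == 0:
            print("".join(map(str, b)))
        else:
            b[k - 1] = 0
            rec(k - 1)
            b[k - 1] = 1
            rec(k - 1)

    rec(n)


bitstrings_rec(2)                   # prints 00, 10, 01, 11 (one string per line)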
Write a nonrecursive algorithm for generating all 2^n bit strings of length n that implements bit strings as arrays and does not use binary additions.
Algorithm BitstringsNonrec(n)
// Generates nonrecursively all the bit strings of a given length
// Input: A positive integer n
// Output: All bit strings of length n as contents of global array B[0..n-1]
for i <- 0 to n-1 do
    B[i] <- 0
repeat
    print(B)
    k <- n-1
    while k ≥ 0 and B[k] = 1
        k <- k-1
    if k ≥ 0
        B[k] <- 1
        for i <- k+1 to n-1 do
            B[i] <- 0
until k = -1
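A Python sketch of BitstringsNonrec (the name bitstrings_nonrec is mine):

def bitstrings_nonrec(n):
    """Print all 2^n bit strings of length n without recursion or binary addition."""
    b = [0] * n                     # plays the role of the global array B[0..n-1]
    while True:
        print("".join(map(str, b)))
        k = n - 1
        while k >= 0 and b[k] == 1: # skip the trailing block of 1s
            k -= 1
        if k < 0:                   # all positions are 1: every string has been printed
            break
        b[k] = 1                    # set the rightmost 0 to 1 ...
        for i in range(k + 1, n):
            b[i] = 0                # ... and reset everything to its right


bitstrings_nonrec(3)                # prints 000, 001, 010, ..., 111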
Design a decrease-and-conquer algorithm for generating all combinations of k items chosen from n, i.e., all k-element subsets of a given n-element set.
There are several decrease-and-conquer algorithms for this problem. They are more subtle than one might expect. Generating combinations in a predefined order (increasing, decreasing, lexicographic) helps with both the design and a correctness proof. The following simple property is very helpful. Assuming with no loss of generality that the underlying set is {1, 2, ..., n}, there are C(n-i, k-1) k-subsets whose smallest element is i, i = 1, 2, ..., n-k+1.
Here is a recursive algorithm from "Problems on Algorithms" by Ian Parberry: call Choose(1, k), where
Algorithm Choose(i, k)
// Generates all k-subsets of {i, i+1, ..., n} stored in global array A[1..k]
// in descending order of their components
if k = 0
    print(A)
else
    for j <- i to n-k+1 do
        A[k] <- j
        Choose(j+1, k-1)
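A Python sketch of Choose, wrapped so it can be called with n and k directly (the names choose and rec are mine); it stores each subset with its components in descending order, as the pseudocode does:

def choose(n, k):
    """Return all k-subsets of {1, ..., n}, each listed in descending order."""
    a = [0] * k                         # plays the role of the global array A[1..k]
    result = []

    def rec(i, m):                      # pick m more elements from {i, ..., n}
        if m == 0:
            result.append(a[:])
        else:
            for j in range(i, n - m + 2):   # j = i..n-m+1, as in the pseudocode
                a[m - 1] = j
                rec(j + 1, m - 1)

    rec(1, k)
    return result


print(choose(4, 2))
# [[2, 1], [3, 1], [4, 1], [3, 2], [4, 2], [4, 3]]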
Write a pseudocode for the divide-into-three algorithm for the fake-coin problem. (Make sure that your algorithm handles properly all values of n, not only those that are multiples of 3; we assume that the fake coin is lighter.)
If n is a multiple of 3 (i.e., n mod 3 = 0), we can divide the coins into three piles of n/3 coins each and weigh two of the piles. If n = 3k+1 (i.e., n mod 3 = 1), we can divide the coins into piles of sizes k, k, and k+1, or k+1, k+1, and k-1. (We will use the second option.) Finally, if n = 3k+2 (i.e., n mod 3 = 2), we divide the coins into piles of sizes k+1, k+1, and k. The following pseudocode assumes that there is exactly one fake coin among the coins given and that the fake coin is lighter than the other coins.
if n = 1 the coin is fake
else divide the coins into three piles of the sizes described above
    weigh the first two piles
    if they weigh the same, continue with the coins of the third pile
    else continue with the lighter of the first two piles
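A Python sketch of this strategy (the name find_fake is mine); instead of a physical scale it compares sums of the given coin weights. In every case the two weighed piles have ⌈n/3⌉ coins each, which matches the pile sizes chosen above.

def find_fake(weights):
    """Return the index of the single lighter (fake) coin in the list weights,
    using the divide-into-three approach sketched above."""
    candidates = list(range(len(weights)))
    while len(candidates) > 1:
        n = len(candidates)
        size = (n + 2) // 3                          # ceil(n/3) coins in each weighed pile
        pile1 = candidates[:size]
        pile2 = candidates[size:2 * size]
        rest = candidates[2 * size:]
        w1 = sum(weights[i] for i in pile1)          # one weighing: pile1 vs pile2
        w2 = sum(weights[i] for i in pile2)
        if w1 == w2:
            candidates = rest                        # fake coin is in the third pile
        elif w1 < w2:
            candidates = pile1                       # lighter pile contains the fake
        else:
            candidates = pile2
    return candidates[0]


coins = [10] * 7
coins[4] = 9                                         # coin 4 is the lighter fake
print(find_fake(coins))                              # 4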
A very natural question arises: for large values of n, about how many times faster is this algorithm than the one based on dividing the coins into two piles?
The ratio of the number of weighings in the worst case can be approximated for large values of n by
log_2 n / log_3 n = log_2 n / (log_3 2 · log_2 n) = 1 / log_3 2 = log_2 3 ≈ 1.6.
Write a pseudocode for the multiplication à la russe algorithm.
Algorithm Russe(n, m)
// Implements multiplication à la russe nonrecursively
// Input: Two positive integers n and m
// Output: The product of n and m
p <- 0
while n ≠ 1 do
    if n mod 2 = 1 p <- p+m
    n <- ⌊n/2⌋
    m <- 2*m
return p+m
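A Python sketch of the nonrecursive version (the name russe is mine):

def russe(n, m):
    """Multiply positive integers n and m by the Russian-peasant method (nonrecursive)."""
    p = 0                       # accumulates the m-values added at odd n
    while n != 1:
        if n % 2 == 1:
            p += m
        n //= 2                 # n <- floor(n/2)
        m *= 2
    return p + m


print(russe(50, 65))            # 3250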
Algorithm RusseRec(n, m)
// Implements multiplication à la russe recursively
// Input: Two positive integers n and m
// Output: The product of n and m
if n mod 2 = 0 return RusseRec(n/2, 2m)
else if n = 1 return m
else return RusseRec((n-1)/2, 2m) + m
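And a Python sketch of the recursive version (the name russe_rec is mine; it tests n = 1 first, which is equivalent because 1 is odd):

def russe_rec(n, m):
    """Multiply positive integers n and m by the Russian-peasant method (recursive)."""
    if n == 1:
        return m
    if n % 2 == 0:
        return russe_rec(n // 2, 2 * m)
    return russe_rec((n - 1) // 2, 2 * m) + m


print(russe_rec(50, 65))        # 3250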
Write a pseudocode for a nonrecursive implementation of the partition-based algorithm for the selection problem.
Algorithm Selection(A[0..n-1], k)
// Solves the selection problem by the partition-based algorithm
// Input: An array A[0..n-1] of orderable elements and integer k (1 ≤ k ≤ n)
// Output: The value of the k-th smallest element in A[0..n-1]
l <- 0; r <- n-1
A[n] <- ∞ // append a sentinel
while l ≤ r do
    p <- A[l] // the pivot
    i <- l; j <- r+1
    repeat
        repeat i <- i+1 until A[i] ≥ p
        repeat j <- j-1 until A[j] ≤ p
        swap(A[i], A[j])
    until i ≥ j
    swap(A[i], A[j]) // undo the last swap
    swap(A[l], A[j]) // partition around the pivot
    if j > k-1 r <- j-1
    else if j < k-1 l <- j+1
    else return A[k-1]
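A Python sketch of this iterative partition-based selection (the name selection is mine; the input list is copied so the caller's list is left untouched):

def selection(a, k):
    """Return the k-th smallest element of a (1 <= k <= len(a)) using the
    iterative partition-based algorithm above."""
    a = a + [float("inf")]            # work on a copy with an infinity sentinel appended
    l, r = 0, len(a) - 2
    while l <= r:
        p = a[l]                      # pivot: first element of the current subarray
        i, j = l, r + 1
        while True:
            i += 1
            while a[i] < p:           # repeat i <- i+1 until A[i] >= p
                i += 1
            j -= 1
            while a[j] > p:           # repeat j <- j-1 until A[j] <= p
                j -= 1
            if i >= j:
                break
            a[i], a[j] = a[j], a[i]
        a[l], a[j] = a[j], a[l]       # place the pivot into its final position j
        if j > k - 1:
            r = j - 1                 # k-th smallest is to the left of the pivot
        elif j < k - 1:
            l = j + 1                 # k-th smallest is to the right of the pivot
        else:
            return a[k - 1]


print(selection([4, 1, 10, 8, 7, 12, 9, 2, 15], 5))   # 8 (the median)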
Write a pseudocode for a recursive implementation of the algorithm.
call SelectionRec(A[0..n-1], k) where
Algorithm SelectionRec(A[l..r], k)
// Solves the selection problem by recursive partition-based algorithm
// Input: A subarray A[l..r] of orderable elements and integer k(1 ≤ k ≤ r-l+1)
// Output: The value of the k-th smallest element in A[l..r]
s <- Partition(A[l..r])
if s > l+k-1 return SelectionRec(A[l..s-1], k)
else if s < l+k-1 return SelectionRec(A[s+1..r], k-(s-l+1))
else return A[s]
The following algorithm computes the partition position:
Algorithm Partition(A[l..r])
// Partitions a subarray by using its first element as a pivot
// Input: A subarray A[l..r] of A[0..n-1], defined by its left and right indices l and r(l < r)
// Output: A partition of A[l..r], with the split position returned as this function's value
p <- A[l]
i <- l; j <- r+1
repeat
    repeat i <- i+1 until A[i] ≥ p
    repeat j <- j-1 until A[j] ≤ p
    swap(A[i], A[j])
until i ≥ j
swap(A[i], A[j]) // undo last swap when i ≥ j
swap(A[l], A[j])
return j
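A Python sketch of Partition and SelectionRec together (the names partition and selection_rec are mine; a bound check on index i stands in for the sentinel that keeps the scan inside the array):

def partition(a, l, r):
    """Partition a[l..r] around the pivot a[l]; return the pivot's final position."""
    p = a[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and a[i] < p:    # repeat i <- i+1 until A[i] >= p (bounded)
            i += 1
        j -= 1
        while a[j] > p:               # repeat j <- j-1 until A[j] <= p
            j -= 1
        if i >= j:
            break
        a[i], a[j] = a[j], a[i]
    a[l], a[j] = a[j], a[l]           # put the pivot into position j
    return j


def selection_rec(a, l, r, k):
    """Return the k-th smallest element of a[l..r] (1 <= k <= r-l+1)."""
    s = partition(a, l, r)
    if s > l + k - 1:
        return selection_rec(a, l, s - 1, k)
    if s < l + k - 1:
        return selection_rec(a, s + 1, r, k - (s - l + 1))
    return a[s]


a = [4, 1, 10, 8, 7, 12, 9, 2, 15]
print(selection_rec(a, 0, len(a) - 1, 5))   # 8 (the median)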
new words:
disregard: to ignore    estimate: to judge the approximate value    adjacent: neighboring
permutation: an ordered arrangement    lexicographic: in dictionary order    fake: counterfeit
multiple: a number divisible by another without remainder    product: (math) the result of a multiplication
(END_XPJIANG.)