Authors: Kenneth A. Berman, Jerome L. Paul
Algorithms: Sequential, Parallel and Distributed, Second Edition 7-1
Chapter 7
The Greedy Method
At a Glance
Table of Contents
Overview
Objectives
Instructor Notes
Solutions to Selected Exercises
Lecture Notes
Overview
In Chapter 7 we discuss the greedy method, the first of the major design strategies
discussed in Part II. We present a general paradigm for the greedy method, and remark that
solutions produced by the greedy method might not be globally optimal, since they are based on
making a sequence of choices that are only known to be locally optimal. Using the greedy
method, we design algorithms for solving the coin changing problem, determining optimal
storage order, the knapsack problem, and solving the optimal text compression problem by
constructing Huffman codes.
Chapter Objectives
After completing the chapter the student will know:
That the greedy method is based on making locally optimal choices when building a
solution to an optimization problem
That the greedy method usually yields efficient algorithms
That a solution obtained by the greedy method must always be proved to give an optimal
global solution to the given problem
Instructor Notes
Some of the more famous greedy algorithms are, of course, algorithms for finding the
minimum spanning tree in a weighted graph, and the shortest path spanning tree for a
positively-weighted digraph. We have chosen to place these algorithms in Chapter 11 of Part
Solutions to Selected Exercises
Section 7.1 General Description
Exercises 7.1 and 7.2 can be easily solved by applying the following Proposition. The proof
of this proposition is a good additional exercise.
Proposition 1. Consider the coin changing problem with denominations d1, d2, …, dn listed
in decreasing order, that is, d1 > d2 > … > dn, and let t = (t1, t2, …, tn) be an optimal solution.
If dj > ∑_{i=j+1}^{n} ti di for j = 1, 2, …, n, then the greedy algorithm yields an optimal
solution.
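The greedy coin-changing strategy analyzed by Proposition 1 can be sketched as follows (a minimal sketch; the function name is illustrative, and denominations are assumed to be listed in decreasing order, as in the Proposition):

```python
def greedy_change(amount, denominations):
    # Greedily take as many coins of each denomination as fit, largest first.
    # denominations is assumed sorted in decreasing order: d1 > d2 > ... > dn.
    counts = []
    for d in denominations:
        counts.append(amount // d)   # t_i: number of coins of denomination d
        amount %= d                  # remaining amount still to be changed
    return counts
```

For the U.S. denominations 25 > 10 > 5 > 1, greedy_change(67, [25, 10, 5, 1]) returns [2, 1, 1, 2]. As the chapter stresses, this greedy solution is optimal for some denomination systems but not for all.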
7.2 Let di = b^(n−i), i = 1, 2, …, n. Let t = (t1, …, tn) be an optimal solution. By Proposition 1
above it is sufficient to show that dj > ∑_{i=j+1}^{n} ti di, j = 1, 2, …, n. For each i, 2 ≤ i ≤ n, we have
7.3 With each tape Ti we associate the value v(Ti), where v(Ti) is the number of objects
currently stored on tape Ti, i = 1, …, m. Initially, v(Ti) = 0, i = 1, …, m. We begin by
sorting the objects b1, …, bn in nonincreasing order of their costs, so that c1 ≥ c2 ≥ … ≥ cn.
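The bookkeeping in 7.3 can be sketched as follows (an illustrative sketch: objects are sorted by nonincreasing cost and each is placed on a tape currently holding the fewest objects, mirroring the v(Ti) counters; the function name is not from the text):

```python
def assign_to_tapes(costs, m):
    # Sort object indices by nonincreasing cost (c1 >= c2 >= ... >= cn).
    order = sorted(range(len(costs)), key=lambda i: -costs[i])
    tapes = [[] for _ in range(m)]   # tapes[k] lists the objects stored on tape k+1
    for i in order:
        # v(T) is simply len(tape); place the object on a least-loaded tape.
        min(tapes, key=len).append(i)
    return tapes
```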
7.4 a) Suppose n = 2, c0 = 1, c1 = 2, p0= 1/9, and p1 = 8/9. Then processing the objects b0, b1 in
b) Suppose n = 2, c0 = 9, c1 = 3, p0= 2/3, and p1 = 1/3. Then processing the objects b0, b1 in
c) We show that arranging the objects in decreasing order of pi/ci gives an optimal solution by
showing that any arrangement π in which two successive objects i and j have pi/ci < pj/cj cannot
be optimal. Indeed, given such an arrangement, consider the arrangement π* that interchanges
the elements i and j, and keeps all other objects in the same place. Suppose i occurs in position
7.5 After ordering the objects in decreasing ratio vi/wi, we see that the greedy solution for the
knapsack problem consists in placing objects 6, 2, 3, 4 and (1/30)th of object 0 into the knapsack
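The greedy strategy for the fractional knapsack problem used in 7.5 can be sketched as follows (the exercise's object data is not reproduced above, so the test values here are illustrative):

```python
def fractional_knapsack(values, weights, capacity):
    # Take objects in decreasing order of value/weight ratio,
    # splitting the last object to fill any remaining capacity.
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    total, taken = 0.0, []
    for i in order:
        frac = min(1.0, capacity / weights[i]) if capacity > 0 else 0.0
        if frac == 0.0:
            break
        taken.append((i, frac))          # (object index, fraction taken)
        total += frac * values[i]
        capacity -= frac * weights[i]
    return total, taken
```

With values [60, 100, 120], weights [10, 20, 30], and capacity 50, the greedy solution takes the first two objects whole and 2/3 of the third, for a total value of 240 (up to floating-point rounding).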
Section 7.3 The Knapsack Problem
7.6 For a knapsack of capacity C = 1, consider the three objects whose value, weight pairs (v,w)
are (5,5), (4,2) and (1,1). If you place the objects in the knapsack in decreasing order of
7.7 a) For a knapsack of capacity C = 6, consider the three objects whose value, weight pairs
(v,w) are (8,4), (5,3) and (5,3). The greedy solution for the 0/1 knapsack problem would
place the first object into the knapsack, yielding the value of 8. On the other hand, the
optimal solution places the 2nd and 3rd objects into the knapsack, yielding a value of 10.
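The gap in part (a) can be checked directly (a small script over the example's (value, weight) pairs; the exhaustive search is included only to confirm the optimum):

```python
from itertools import combinations

items = [(8, 4), (5, 3), (5, 3)]   # (value, weight) pairs from part (a)
C = 6                              # knapsack capacity

# Greedy 0/1 choice by decreasing value/weight ratio (no splitting):
greedy_val, room = 0, C
for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
    if w <= room:
        greedy_val += v
        room -= w

# Exhaustive optimum over all subsets fitting in the knapsack:
best = max(sum(v for v, w in s)
           for r in range(len(items) + 1)
           for s in combinations(items, r)
           if sum(w for v, w in s) <= C)
```

Greedy takes only the first object (value 8), while the optimum packs the 2nd and 3rd (value 10), exhibiting the claimed gap.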
b) Given any ε > 0, consider a knapsack of capacity C = 1, and two objects with profits p1 =
Section 7.4 Huffman Codes
7.8 We show that the Huffman tree is optimal by induction on the number of symbols (leaf
nodes in the tree).
sn-1. Assume without loss of generality that the symbols s0, s1, … , sn-1 are ordered by their
sn-1. Now suppose T is any optimal binary tree for the symbols s0, s1, … , sn-1. Consider two
leaf nodes at the deepest level of T. By interchanging (if necessary) these two leaf nodes
with the leaf nodes containing s0 and s1, respectively, we see that the value of WLPL does not
increase. Hence, in particular, we can assume that there are optimal binary trees T having L
7.9 Procedure HuffmanCode first generates the Huffman tree having 8 leaf nodes A, B, C, D,
Stage 1. Combine leaf nodes F and H by setting the left and right children of node 1 to be F
and H, respectively. Assign root 1 the frequency 2 + 2 = 4.
Stage 3. Combine the tree rooted at 1 with leaf node D by setting the left and right children
of node 3 to be 1 and D, respectively. Assign root 3 the frequency 4 + 5 = 9.
Stage 5. Combine the tree rooted at 3 with leaf node E by setting the left and right children
of node 5 to be 3 and E, respectively. Assign root 5 the frequency 9 + 9 = 18.
Stage 7. Combine the trees rooted at 5 and 6 by setting the left and right
children of node 7 to be 5 and 6, respectively. Assign root 7 the frequency 18 + 23 = 41.
Procedure HuffmanCode then generates the following Huffman code from the Huffman tree
constructed above.

Symbol   Encoding
A        10
B        111
C        1100
D        001
E        01
F        0000
G        1101
H        0001
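The tree-building loop that the stages above trace by hand can be sketched generically (a sketch using a binary heap; the exercise's full frequency table is not shown above, so the test data below is illustrative):

```python
import heapq

def huffman_code(freqs):
    # Repeatedly merge the two lowest-frequency trees, then read codes
    # off the root-to-leaf paths (left = 0, right = 1).  A running counter
    # breaks frequency ties so symbols and subtrees are never compared.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:
            codes[node] = prefix or "0"   # single-symbol alphabet edge case
    walk(heap[0][2], "")
    return codes
```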
7.10 For the tree in Figure 7.5, the string decodes to beabfbaac. For the tree in Figure 7.6, the
string decodes to defaced.
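Decoding, as in 7.10, amounts to walking the tree from the root on each bit. A sketch, assuming the tree is represented as nested (left, right) pairs with symbols at the leaves (an assumed representation, not the book's node structure):

```python
def huffman_decode(bits, tree):
    # Walk the Huffman tree from the root: 0 goes left, 1 goes right;
    # at a leaf, emit the symbol and restart from the root.
    out, node = [], tree
    for b in bits:
        node = node[0] if b == "0" else node[1]
        if not isinstance(node, tuple):
            out.append(node)
            node = tree
    if node is not tree:        # string ended inside a code: flag it
        out.append("*")
    return "".join(out)
```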
7.11 The following procedure accepts a 0/1 string and a pointer to a Huffman tree whose nodes
have the structure given in the Exercise, and returns the message representing the decoding
of the 0/1 string given by the Huffman tree. For simplicity of code, we assume that the
0/1 string is stored as '0' or '1' characters in an array C[0:n − 1]. If the string ends
in an invalid code for a symbol, we append the special character '*' to the end of the
7.12 We prove the result by induction on the number of leaf nodes in the tree T.
Basis step: n = 1. Trivial, since T consists of a single leaf node containing a symbol whose
frequency can be taken as 1.
Induction step: Assume the result is true for all binary trees having less than n leaf nodes, and
7.14 a) This problem is essentially the same as the Huffman code problem. To see this, note
that a merging sequence can be modeled using a binary tree, with leaf nodes corresponding
to the files and containing the lengths of the files; when two files are merged, a subtree is
formed with the parent node of the two files containing the sum of the lengths of the files
b) We show the final result of an optimal binary merging tree based on the greedy strategy
described in part (a). Note that a postorder traversal of the tree will give a merging
sequence that achieves the optimal number of 113 operations, even though it will differ
from the merging sequence used to build the tree.
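The greedy merging strategy of part (a) can be sketched as follows; only the total cost is computed here (merging the two currently shortest files at each step, Huffman-style; the cost of merging files of lengths a and b is taken to be a + b operations):

```python
import heapq

def optimal_merge_cost(lengths):
    # Always merge the two shortest remaining files; the merged file of
    # length a + b goes back into the pool.  Total cost is the sum of the
    # merge costs, exactly the weighted path length of the merging tree.
    heap = list(lengths)
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b
        heapq.heappush(heap, a + b)
    return total
```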
Additional Exercises
7.15 a) After sorting the jobs in increasing order of their start times and reindexing, we can
assume that the tasks T1, …, Tn have start times s1 ≤ s2 ≤ … ≤ sn. We inductively schedule the
jobs on machines as follows. Place T1 on the first machine M1, and place the pair (f1, 1) on
an initially empty min-heap, where the priorities for the heap are determined solely by the
first entry in the pair. Set Ct = 1 for the number of machines used. Inductively, suppose
b) We can rearrange the tasks in increasing order of their start times using mergesort or
heapsort in time O(n log n). If we maintain the min-heap as discussed in Chapter 4, the
c) That the greedy solution described in part a) gives the optimal solution follows
immediately from the following Proposition.
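The heap-driven machine assignment described in part (a) can be sketched as follows (tasks are assumed given as (start, finish) pairs, and only the machine count is returned to keep the sketch short; names are illustrative):

```python
import heapq

def min_machines(tasks):
    # Process tasks in increasing start time.  Reuse a machine whose
    # earliest finish time is <= the next start; otherwise open a new one.
    heap = []                       # (finish_time, machine_index) pairs
    count = 0
    for s, f in sorted(tasks):
        if heap and heap[0][0] <= s:
            _, m = heapq.heappop(heap)   # machine m is free by time s
        else:
            m = count                    # open a new machine
            count += 1
        heapq.heappush(heap, (f, m))
    return count
```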
7.16 Consider the case n = 2, with w0,0 = w1,1 = 2, w0,1 = 1, w1,0 = 4. Then all three greedy
strategies described in parts (a), (b), and (c) assign worker A0 to job B1, and worker A1 to job
7.17 All singleton sets of jobs, while feasible, are clearly not optimal, since it is always possible
to add a second job to a singleton set and still have a feasible set of jobs. Note that no set of three
7.19 Suppose we have a set of jobs J0, J1, …, Jn-1, having deadlines d0, d1, …, dn-1. Suppose
also that job Ji takes ti processing time, i = 0, 1, …, n − 1. If S = {Ji1, …, Jim} is a feasible
7.20 If we look at the example given in Exercise 7.17, following the indicated simple greedy
7.21 a) It is important to note that we are assuming that each job takes unit time, since
otherwise the indicated greedy strategy can fail. Suppose to the contrary that the greedy
strategy generates a sequence of jobs SG that is not optimal. Then amongst the optimal
sets of jobs, there is one, say S, which has the maximum number of jobs in common with
scheduled in [tJ, tJ + 1], then adding job J to S in that time slot will increase the profit of S,
contradicting its optimality. This completes the proof that the greedy strategy generates
an optimal set of jobs.
b) In the following pseudocode, we assume that the jobs J0, J1, …, Jn-1 are ordered in
decreasing order of profit, so that p0 ≥ p1 ≥ … ≥ pn-1. Since this can be done in
linked list pointed to by Start maintains the jobs in increasing order of their deadlines. In
particular, the ith node in the linked list contains the index of the job done in the time interval [i
− 1, i]. Since we assume that all deadlines are at least one, and each job takes unit time, we
know that at least the highest profit job J0 will be part of the solution.
procedure GreedyJobSequencing(P[0:n − 1], D[0:n − 1], Start)
    for i ← 1 to n − 1 do
        //check the feasibility of adding job Ji to solution set
        while W ≠ null .and. D[W→Index] < D[i] do
                Prev ← W
            else
                W ← W→Next
            endif
        endwhile
        if Feasible then //add job Ji to the solution
        endif
        //now update times for all nodes after new node pointed to by P
        W ← P→Next
        while W ≠ null do
            W→Time ← W→Time + 1
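The same greedy strategy can be sketched more compactly with a slot array in place of the linked-list bookkeeping (an assumed simplification, not the book's GreedyJobSequencing; jobs take unit time, as assumed in 7.21):

```python
def greedy_job_sequencing(profits, deadlines):
    # Consider jobs in decreasing profit order and place each one in the
    # latest free unit slot [t, t+1] with t+1 <= its deadline, if any.
    D = max(deadlines)
    slot = [None] * D                       # slot[t] holds the job done in [t, t+1]
    order = sorted(range(len(profits)), key=lambda i: -profits[i])
    for i in order:
        for t in range(min(deadlines[i], D) - 1, -1, -1):
            if slot[t] is None:
                slot[t] = i
                break
    return [j for j in slot if j is not None]
```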
7.22 a) Suppose J is a feasible set of jobs added according to the rule of putting off the next job
as long as possible, and suppose for the next job Ji there is no time slot available. We
show that J ∪ {Ji} is not a feasible set of jobs. If di is the deadline for job Ji, then the
interval [0, di] must be completely filled with jobs from J. Extend this interval to the
largest interval [0, m] that is completely filled with jobs, and let J′ ⊆ J denote the m jobs
corresponding to this interval. We claim that J′ ∪ {Ji} is not feasible. To see this, note
b) We use the union and find algorithms described in Chapter 4 to maintain connected time
intervals, beginning with the unit intervals [0,1], [1,2], …, [D − 1, D], where D = max{di : 0 ≤ i
≤ n − 1} is the maximum deadline of the n jobs J0, …, Jn-1. Initially none of the unit time
intervals [0,1], [1,2], …, [D − 1, D] is occupied by a job. We use the array Parent[0:D − 1] to
maintain a forest of time intervals, where i corresponds to the unit interval [i, i+1]. As our
algorithm progresses, a tree consists of a collection of time intervals whose union is a
158-159 in Chapter 4). To test whether job Ji with deadline di can be added to the current set of
feasible jobs, we first test that Available[Find2(di)] ≥ 0 (otherwise, as has just been
pointed out, it can't be added and maintain a feasible set of jobs). Suppose this job can be