First, understand the meaning of the problem. What precisely is the problem asking for? If you are unsure, think about it some more or ask. Even if you think you understand what the problem is asking for, you can ask a TA or the instructor for confirmation that you've got it.
You may turn in another solution to Homework 1 on Monday 4/26 at the start of class. If you do, that solution will be the one used to determine your grade, and after your homework is scored by your TA, a 15% penalty will be deducted as the cost of redoing it. In redoing it, the following hints may be useful. Lab this week will also be devoted to discussion of this homework.
If you don't turn in another solution Monday, then the solution you turned in last Wednesday will be used to determine your score (with no deduction).
Some basic definitions:
Common mistakes:
General guidelines:
Figure 1.19(a) in the book (page 41) may help you with this exercise.
Here is an attempt at a solution to 1A that does not work.
We need to show that, for any positive integer N, there is a sequence of N stack operations, each of which is a push() or a pop(), that makes the implementation described there take total time Ω(N^2).
The sequence we propose is the following sequence:
push(), push(), push(), ..., push(), pop(), pop(), pop(), ..., pop()
where the number of push() operations is N/2, as is the number of pop() operations. That is, the sequence consists of N/2 pushes followed by N/2 pops.
For this particular sequence of N operations, the time taken by the implementation is O(N). We can prove this just as we did for our analysis of the growable array in the second lecture: the time spent growing is proportional to S + 2S + 4S + 8S + ... + 2^i S, where S is the initial table size and 2^i S is the table size after the last push() (so 2^i S ≤ N). This sum is geometric, and so is proportional to its largest term, which is O(N). So the total time spent growing is O(N).
Likewise, the time spent shrinking is proportional to T + T/2 + T/4 + ... + T/2^i, where T is the table size after the last push, so T is O(N). This sum is geometric, so it is proportional to its largest term, which is T, so the sum is O(N). Thus, the total time spent shrinking is O(N).
Thus, the total time spent in grow() or shrink() is O(N). Other than time spent in grow() or shrink(), each push() or pop() operation takes constant time, so the time spent outside grow() or shrink() is also O(N).
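The claim that a geometric sum is dominated by its largest term can be checked numerically. A minimal sketch (the initial size S = 4 is an arbitrary choice for illustration):

```python
# Check that the grow and shrink sums from the argument above are each
# at most twice their largest term, for a range of doubling counts i.
S = 4  # arbitrary initial table size, chosen for illustration

for i in range(1, 20):
    grow_terms = [S * 2**k for k in range(i + 1)]     # S + 2S + 4S + ... + 2^i S
    assert sum(grow_terms) <= 2 * grow_terms[-1]      # sum <= 2 * (largest term)

    T = S * 2**i                                      # table size after the last push
    shrink_terms = [T // 2**k for k in range(i + 1)]  # T + T/2 + ... + T/2^i
    assert sum(shrink_terms) <= 2 * shrink_terms[0]   # sum <= 2 * (largest term)

print("geometric-sum bounds hold")
```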
To answer 1A, you need to come up with a different sequence of N push() and pop() operations, and show that the total time spent for the entire sequence is Ω(N^2). You will need to find a sequence that makes grow() and shrink() happen much more often, probably by intermixing the two operations.
Say a push() or pop() operation is constant time if it does not cause the array to grow or shrink. Since there are at most N constant time operations, and each one takes O(1) time, the total time spent for these is O(N).
Next we argue that the time spent in the non-constant-time push or pop operations is also O(N).
Consider a non-constant-time push() operation. Let's say it causes the array to grow to some size 2T. The time spent for the operation is O(T). Preceding this push(), there must have been at least T/2 constant-time push() operations since the last time the table was resized. (This is because the previous resizing must have been a grow, and it must have grown the table to size T, leaving T/2 free cells in the table.) Thus, the time taken for any non-constant-time push() operation is proportional to the number of constant-time push() operations that immediately preceded it.
The latter observation implies that the total time taken for all non-constant-time push() operations is proportional to the total number of constant-time push() operations. Since there are at most N such operations, the total time for non-constant-time push() operations is O(N).
Next consider a non-constant-time pop() operation, other than the first pop() operation. Let's say the pop() causes the array to shrink to some size T. The time spent for the operation is O(T). Preceding this pop(), there must have been at least T constant-time pop() operations since the last time the table was resized. (This is because the previous resizing must have been a shrink(), and it must have shrunk the table to size 2T, leaving no empty cells.) Thus, the time spent for this pop() operation is proportional to the number of constant-time pop() operations preceding it (since the last non-constant-time pop()).
The latter observation implies that the total time taken for non-constant-time pop() operations is proportional to the number of constant-time pop() operations. Since there are at most N such operations, the total time spent for non-constant-time pop operations is O(N).
In summary, the total time spent for all operations is the sum of four terms: the time for constant-time push() operations, the time for constant-time pop() operations, the time for non-constant-time push() operations, and the time for non-constant-time pop() operations. Since each of these four terms is O(N), the total time is O(N).
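As a sanity check on the push() half of this argument, here is a small simulation. It assumes only the doubling grow policy described above (it ignores shrinking), counts the elements copied by grow() over N pushes, and confirms the total stays at most 2N:

```python
def total_copy_work(n_pushes, initial_size=1):
    """Simulate n_pushes into an array that doubles when full;
    return the number of element copies performed by grow()."""
    size, count, copied = initial_size, 0, 0
    for _ in range(n_pushes):
        if count == size:   # table is full: grow() doubles it, copying every element
            copied += count
            size *= 2
        count += 1
    return copied

for n in (10, 1000, 100000):
    print(n, total_copy_work(n))  # the copy work never exceeds 2n
```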
First, read section 4.2.3 of the text. This question is about a different implementation than the one we discussed in class. Here is pseudo-code for the implementation we are asking about:
    MakeSet(i) {
        Parent[i] = i;
        Size[i] = 1;
    }

    Find(i) {
        if Parent[i] == i then return i
        else return Find(Parent[i]);
    }

    Union(i, j) {
        i = Find(i);
        j = Find(j);
        if Size[i] <= Size[j] then {
            Parent[i] = j;
            Size[j] = Size[j] + Size[i];
        } else {
            Parent[j] = i;
            Size[i] = Size[j] + Size[i];
        }
    }
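If you want to experiment with this structure, here is a direct Python transcription of the pseudo-code above (union by size, no path compression). The dict-based storage and the guard against unioning a set with itself are small additions of mine, not part of the pseudo-code:

```python
# Union-find with union by size and no path compression,
# transcribed from the pseudo-code above.
Parent = {}
Size = {}

def make_set(i):
    Parent[i] = i
    Size[i] = 1

def find(i):
    # Follow parent pointers to the root; no path compression.
    if Parent[i] == i:
        return i
    return find(Parent[i])

def union(i, j):
    i, j = find(i), find(j)
    if i == j:          # guard added: unioning a set with itself is a no-op
        return
    # Attach the root of the smaller tree under the root of the larger one.
    if Size[i] <= Size[j]:
        Parent[i] = j
        Size[j] += Size[i]
    else:
        Parent[j] = i
        Size[i] += Size[j]

# Small usage example:
for x in range(8):
    make_set(x)
union(0, 1)
union(2, 3)
union(0, 2)
print(find(1) == find(3))  # True: 1 and 3 are now in the same set
print(find(4) == find(5))  # False: 4 and 5 were never unioned
```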
2A. Hint: Do N/3 MakeSet operations, then N/3 Union operations, then N/3 Find operations. Prove that your particular choice of Union operations produces a tree of depth log_2(N/3). Perform the Find operations on the deepest node in the tree.
2B. Hint: To start, prove by induction that any tree of depth D produced by Unions has size at least 2^D. From this, what can you conclude about the maximum depth of any tree produced by at most N Union, Find, and MakeSet operations?
Note: the depth of a tree is the maximum distance from the root to any leaf.
Recall from class the following arguments:
Since ∑i=1..n i^2 = O(n^3) and ∑i=1..n i^2 = Ω(n^3), it follows that ∑i=1..n i^2 = Θ(n^3).
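These bounds can be sanity-checked numerically. The constants below come from the standard form of the argument (each of the n terms is at most n^2, and each of the last n/2 terms is at least (n/2)^2, which gives the n^3/8 lower bound):

```python
# Check n^3 / 8 <= sum_{i=1..n} i^2 <= n^3 for many values of n.
for n in range(1, 2000):
    s = sum(i * i for i in range(1, n + 1))
    assert s <= n**3        # each of the n terms is at most n^2
    assert 8 * s >= n**3    # the last n/2 terms are each at least (n/2)^2
print("bounds hold for n = 1 .. 1999")
```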
Use these kinds of arguments to solve 3A-3C.
Review the lectures on recurrence relations and the on-line notes.
When I ask you to describe the recurrence tree, what I am interested in is: the depth of the tree, the number of children of each node, the size of the subproblems associated with the nodes at each level, and the work done for the subproblems at each level.
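As a concrete illustration, take the example recurrence T(n) = 2T(n/2) + n (my choice, not necessarily one from the homework): the tree has depth log_2(n), every internal node has 2 children, the subproblems at level k have size n/2^k, and the total work at each level is n. This sketch tabulates exactly those quantities:

```python
# Tabulate the recurrence tree for the example T(n) = 2 T(n/2) + n,
# with n a power of 2. Each row is one level of the tree.
def recurrence_tree_levels(n):
    rows = []
    level, size = 0, n
    while size >= 1:
        nodes = 2**level                # each node has 2 children, so 2^k nodes at level k
        rows.append((level, nodes, size, nodes * size))  # work per node ~ subproblem size
        level += 1
        size //= 2                      # subproblem size halves at each level
    return rows

for level, nodes, size, work in recurrence_tree_levels(64):
    print(f"level {level}: {nodes} nodes, subproblem size {size}, total work {work}")
# Every level does n work, and there are log_2(n) + 1 levels,
# so this recurrence solves to Theta(n log n).
```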