Chapter 16

Michelle Bodnar, Andrew Lohr

May 5, 2017

Exercise 16.1-1

The given algorithm either computes the minimum of O(n) numbers or returns zero, depending on the size of S_ij. There are O(n^2) possible subproblems, since we are selecting i and j so that 1 ≤ i ≤ j ≤ n. So, the runtime would be O(n^3).

Exercise 16.1-2

This becomes exactly the same as the original problem if we imagine time running in reverse, so it produces an optimal solution for essentially the same reasons. It is greedy because we make the best-looking choice at each step.
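To illustrate, here is a minimal Python sketch of this time-reversed greedy (the function name and the half-open interval convention, where activity (s, f) occupies [s, f), are our own): repeatedly select the compatible activity that starts last.

def select_last_to_start(activities):
    # activities: list of (start, finish) pairs, each treated as [start, finish)
    # Greedy rule: among activities compatible with everything already chosen,
    # pick the one with the latest start time.
    chosen = []
    earliest_start = float('inf')   # start of the earliest-starting chosen activity
    for s, f in sorted(activities, key=lambda a: a[0], reverse=True):
        if f <= earliest_start:     # compatible with all chosen activities
            chosen.append((s, f))
            earliest_start = s
    return list(reversed(chosen))   # report in chronological order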

Exercise 16.1-3

As a counterexample to the optimality of greedily selecting the shortest activity, suppose our activity times are {(1, 9), (8, 11), (10, 20)}. Picking the shortest first, we have to eliminate the other two, whereas if we picked the other two instead, we would have two tasks, not one.

As a counterexample to the optimality of greedily selecting the task that conflicts with the fewest remaining activities, suppose the activity times are {(−1, 1), (2, 5), (0, 3), (0, 3), (0, 3), (4, 7), (6, 9), (8, 11), (8, 11), (8, 11), (10, 12)}. Then, by this greedy strategy, we would first pick (4, 7) since it has only two conflicts. However, doing so would mean that we would not be able to pick the only optimal solution, (−1, 1), (2, 5), (6, 9), (10, 12).

As a counterexample to the optimality of greedily selecting the earliest start times, suppose our activity times are {(1, 10), (2, 3), (4, 5)}. If we pick the earliest start time, we will only have a single activity, (1, 10), whereas the optimal solution would be to pick the other two activities.

Exercise 16.1-4

Maintain a set of free (but already used) lecture halls F and currently busy lecture halls B. Sort the classes by start time. For each new start time that you encounter, remove a lecture hall from F, schedule the class in that room, and add the lecture hall to B. If F is empty, first add a new, unused lecture hall to F. When a class finishes, remove its lecture hall from B and add it to F.

Why this is optimal: suppose we have just started using the mth lecture hall for the first time. This only happens when every classroom ever used before is in B. But this means that there are m classes occurring simultaneously, so it is necessary to have m distinct lecture halls in use.
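A minimal Python sketch of this scheme (the names, the min-heap keyed on finish times used to detect when halls free up, and the half-open interval convention are our own):

import heapq

def min_lecture_halls(classes):
    # classes: list of (start, finish) pairs, each treated as [start, finish)
    free = []            # halls used before and currently free (the set F)
    busy = []            # min-heap of (finish_time, hall) for halls in use (B)
    halls_used = 0
    assignment = {}
    for s, f in sorted(classes):            # process classes by start time
        while busy and busy[0][0] <= s:     # finished classes free their halls
            _, hall = heapq.heappop(busy)
            free.append(hall)
        if free:
            hall = free.pop()
        else:
            halls_used += 1                 # open a brand-new hall
            hall = halls_used
        assignment[(s, f)] = hall
        heapq.heappush(busy, (f, hall))
    return halls_used, assignment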

Exercise 16.1-5

Run a dynamic programming solution based on equation (16.2), where the second case has "1" replaced with "v_k". The subproblems are still indexed by a pair of activities, and each calculation requires taking the minimum over some set of size ≤ |S_ij| ∈ O(n), so the total runtime is bounded by O(n^3).

Exercise 16.2-1

An optimal solution to the fractional knapsack problem is one that has the highest total value density. Since we are always adding as much of the highest value density item as we can, we are going to end up with the highest total value density. Suppose that we had some other solution that used some amount of a lower value density object; we could substitute in some of the higher value density object, meaning our original solution could not have been optimal.
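A small Python sketch of this greedy (names are our own; items are (value, weight) pairs):

def fractional_knapsack(items, W):
    # Take items in decreasing order of value density v/w, splitting the
    # last item taken if it does not fully fit.
    total = 0.0
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if W <= 0:
            break
        take = min(w, W)           # weight of this item that still fits
        total += v * (take / w)    # proportional value of the taken fraction
        W -= take
    return total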

Exercise 16.2-2

Suppose we know that a particular item of weight w is in the solution. Then we must solve the subproblem on n − 1 items with maximum weight W − w. Thus, to take a bottom-up approach we must solve the 0-1 knapsack problem for all items and possible weights smaller than W. We'll build an (n + 1) by (W + 1) table of values where the rows are indexed by item and the columns are indexed by total weight. (The first row and column of the table are dummies.) For row i, column j, we decide whether or not it would be advantageous to include item i in the knapsack by comparing the total value of a knapsack including items 1 through i − 1 with max weight j, and the total value of including items 1 through i − 1 with max weight j − i.weight and also item i. To solve the problem, we simply examine the (n, W) entry of the table to determine the maximum value we can achieve. To read off the items we include, start with entry (n, W). In general, proceed as follows: if entry (i, j) equals entry (i − 1, j), don't include item i, and examine entry (i − 1, j) next. If entry (i, j) doesn't equal entry (i − 1, j), include item i and examine entry (i − 1, j − i.weight) next. See Algorithm 1 below for construction of the table.

Algorithm 1 0-1 Knapsack(n, W)

Initialize an (n + 1) by (W + 1) table K
for j = 1 to W do
    K[0, j] = 0
end for
for i = 0 to n do
    K[i, 0] = 0
end for
for i = 1 to n do
    for j = 1 to W do
        if j < i.weight then
            K[i, j] = K[i − 1, j]
        else
            K[i, j] = max(K[i − 1, j], K[i − 1, j − i.weight] + i.value)
        end if
    end for
end for

Exercise 16.2-3

At each step just pick the lightest (and most valuable) item that you can pick. To see that this solution is optimal, suppose that there were some item j that we included but some lighter, more valuable item i that we didn't. Then, we could replace the item j in our knapsack with the item i. It will definitely fit because i is lighter, and it will also increase the total value because i is more valuable.

Exercise 16.2-4

The greedy solution solves this problem optimally: we maximize the distance we can cover from a particular point such that there still exists a place to get water before we run out. The first stop is at the furthest point from the starting position which is less than or equal to m miles away. The problem exhibits optimal substructure, since once we have chosen a first stopping point p, we solve the subproblem assuming we are starting at p. Combining these two plans yields an optimal solution for the usual cut-and-paste reasons. Now we must show that this greedy approach in fact yields a first stopping point which is contained in some optimal solution. Let O be any optimal solution which has the professor stop at positions o_1, o_2, . . . , o_k. Let g_1 denote the furthest stopping point we can reach from the starting point. Then we may replace o_1 by g_1 to create a modified solution G, since o_2 − g_1 ≤ o_2 − o_1 ≤ m. In other words, we can actually make it to the positions in G without running out of water. Since G has the same number of stops, we conclude that g_1 is contained in some optimal solution. Therefore the greedy strategy works.
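A Python sketch of the greedy rule (the function name and input conventions are our own: stops is a sorted list of positions in miles from the start, dest is the total trip length, and m is the range on a full supply of water):

def water_stops(stops, dest, m):
    # Repeatedly walk to the furthest stop within m miles.
    chosen = []
    cur, i = 0, 0
    while cur + m < dest:          # cannot finish on the current supply
        best = None
        while i < len(stops) and stops[i] <= cur + m:
            best = stops[i]        # furthest stop reachable from cur so far
            i += 1
        if best is None:
            raise ValueError("trip impossible: no stop within range")
        chosen.append(best)
        cur = best
    return chosen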

Exercise 16.2-5

Consider the leftmost interval. It will do no good if it extends any further left than the leftmost point; however, we know that it must contain the leftmost point. So, we know that its left endpoint is exactly the leftmost point. Thus, we just remove any point that is within a unit distance of the leftmost point, since those points are contained in this single interval. Then, we just repeat until all points are covered. Since at each step there is a clearly optimal choice for where to put the leftmost interval, this final solution is optimal.
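A short Python sketch (naming is our own): sort the points and open a new unit interval at each point not yet covered.

def cover_with_unit_intervals(points):
    # Returns the left endpoints of the chosen unit intervals [p, p + 1].
    lefts = []
    for p in sorted(points):
        if not lefts or p > lefts[-1] + 1:   # p is not covered by the last interval
            lefts.append(p)
    return lefts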

Exercise 16.2-6

First compute the value of each item, defined to be its worth divided by its weight. We use a recursive approach as follows: find the item of median value, which can be done in linear time as shown in Chapter 9. Then sum the weights of all items whose value exceeds the median and call it M. If M exceeds W then we know that the solution to the fractional knapsack problem lies in taking items from among this collection. In other words, we're now solving the fractional knapsack problem on input of size n/2. On the other hand, if the weight doesn't exceed W, then we take all of the high-value items and must solve the fractional knapsack problem on the input of n/2 low-value items, with maximum weight W − M. Let T(n) denote the runtime of the algorithm. Since we can solve the problem when there is only one item in constant time, the recursion for the runtime is T(n) = T(n/2) + cn and T(1) = d, which gives runtime of O(n).
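A Python sketch of this recursion (names are our own; for brevity it uses statistics.median_low, which sorts internally and is therefore O(n lg n) rather than the true linear-time selection of Chapter 9, but the recursive structure is the one described above):

import statistics

def frac_knapsack_select(items, W):
    # items: list of (worth, weight); W: remaining capacity.
    if not items or W <= 0:
        return 0.0
    if len(items) == 1:
        worth, weight = items[0]
        return worth * min(1.0, W / weight)
    m = statistics.median_low([v / w for v, w in items])
    high = [it for it in items if it[0] / it[1] > m]    # value above the median
    mid = [it for it in items if it[0] / it[1] == m]
    low = [it for it in items if it[0] / it[1] < m]
    M = sum(w for _, w in high)
    if M > W:                        # solution uses only high-value items
        return frac_knapsack_select(high, W)
    total = sum(v for v, _ in high)  # take every high-value item whole
    W -= M
    for v, w in mid:                 # then median-value items, fractionally
        take = min(w, W)
        total += v * (take / w)
        W -= take
    if W > 0:                        # finally recurse on the low-value items
        total += frac_knapsack_select(low, W)
    return total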

Exercise 16.2-7

Since an identical permutation of both sets doesn't affect this product, suppose that A is sorted in ascending order. Then, we will prove that the product is maximized when B is also sorted in ascending order. To see this, suppose not; that is, there is some i < j so that a_i < a_j and b_i > b_j. Consider only the contribution to the product from the indices i and j, that is, a_i^{b_i} a_j^{b_j}. If we were to swap b_i and b_j, that contribution would become a_i^{b_j} a_j^{b_i}. We can see that this is larger than the previous expression because it differs from it by a factor of (a_j / a_i)^{b_i − b_j}, which is bigger than one. So, we couldn't have maximized the product with this ordering on B.
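The resulting algorithm is just two sorts; a tiny Python sketch (our own naming):

def max_payoff(a, b):
    # Sort both sequences ascending and pair them index by index.
    total = 1.0
    for x, y in zip(sorted(a), sorted(b)):
        total *= x ** y
    return total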

Exercise 16.3-1

If we have that x.freq = b.freq, then we know that b is tied for lowest frequency. In particular, it means that there are at least two things with lowest frequency, so y.freq = x.freq. Also, since x.freq ≤ a.freq ≤ b.freq = x.freq, we must have a.freq = x.freq.

Exercise 16.3-2

Let T be a binary tree corresponding to an optimal prefix code and suppose that T is not full. Let node n have a single child x. Let T ′ be the tree obtained by removing n and replacing it by x. Let m be a leaf node which is a descendant of x. Then we have

cost(T′) ≤ Σ_{c ∈ C\{m}} c.freq · d_T(c) + m.freq · (d_T(m) − 1) < Σ_{c ∈ C} c.freq · d_T(c) = cost(T)

which contradicts the fact that T was optimal. Therefore every binary tree corresponding to an optimal prefix code is full.

Exercise 16.3-3

An optimal Huffman code would be

0000000 → a
0000001 → b
000001 → c
00001 → d
0001 → e
001 → f
01 → g
1 → h

This generalizes to having the first n Fibonacci numbers as the frequencies: for k < n, the kth most frequent letter has codeword 0^{k−1}1, and the nth most frequent (that is, least frequent) letter has codeword 0^{n−1}. To see this holds, we will prove the recurrence

Σ_{i=0}^{n−1} F(i) = F(n + 1) − 1

This will show that we should join together the letter with frequency F(n) with the result of joining together the letters with smaller frequencies. We will prove it by induction. For n = 1 it is trivial to check. Now, suppose that we have n − 1 ≥ 1; then,

F(n + 1) − 1 = F(n) + F(n − 1) − 1 = F(n − 1) + Σ_{i=0}^{n−2} F(i) = Σ_{i=0}^{n−1} F(i)

See also Lemma 19.2. To use this fact to show the desired Huffman code is optimal, we claim that as we are greedily combining nodes, at each stage we can maintain one node which contains all of the least frequent letters. Initially, this consists of just the least frequent letter. Then, at stage k, inductively, we assume that it contains the k least frequent letters. This means that it has weight Σ_{i=0}^{k−1} F(i) = F(k + 1) − 1. All of the other nodes at this stage have weights {F(k), F(k + 1), . . . , F(n − 1)}. So, the two lowest weight nodes are this node containing the k least frequent letters and the node of weight F(k) containing the (k + 1)st least frequent letter. Therefore, the greedy algorithm tells us that we should combine those nodes, leaving us with a node containing the k + 1 least frequent letters for stage k + 1, completing the induction. Of course, the code that we have given is not uniquely optimal: since left and right children (represented by 0 and 1 respectively) are an artificial choice, we can assume that at each stage we add in the singleton as the 1 child of the new parent, which yields the code above.
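This can be checked with any off-the-shelf Huffman construction. Here is a small Python sketch (our own code) that merges nodes with a min-heap and reports the codeword lengths for the first eight Fibonacci frequencies:

import heapq, itertools

def huffman_code_lengths(freqs):
    # freqs: dict mapping symbol -> frequency; returns symbol -> codeword length.
    tiebreak = itertools.count()    # keeps heap comparisons away from the dicts
    heap = [(f, next(tiebreak), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)    # the two lowest-frequency nodes
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

print(huffman_code_lengths(dict(zip("abcdefgh", [1, 1, 2, 3, 5, 8, 13, 21]))))
# {'a': 7, 'b': 7, 'c': 6, 'd': 5, 'e': 4, 'f': 3, 'g': 2, 'h': 1}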

Exercise 16.3-4

Let x be a leaf node. Then x.freq is added to the cost of each internal node which is an ancestor of x exactly once, so its total contribution to the new way of computing cost is x.freq · dT (x), which is the same as its old contribution. Therefore the two ways of computing cost are equivalent.

Exercise 16.3-5

We construct a code with monotonically increasing codeword lengths by always resolving ties, when choosing which two nodes to join, in favor of the nodes whose earliest-added elements occur latest. We will show that in the resulting code the least frequent characters have the longest codewords. Suppose to the contrary that there were two characters w_1 and w_2 such that w_1 appears more frequently but has a longer codeword. Then w_1 was involved in more merge operations than w_2 was. However, since we always merge together the two sets of characters with the lowest combined frequency, this would contradict the fact that w_1 has a higher frequency than w_2.

Exercise 16.3-6

First observe that any full binary tree on n leaves has exactly 2n − 1 nodes. We can encode the structure of our full binary tree by performing a preorder traversal of T. For each node that we record in the traversal, write a 0 if it is an internal node and a 1 if it is a leaf node. Since we know the tree to be full, this uniquely determines its structure. Next, note that we can encode any character of C in ⌈lg n⌉ bits. Since there are n characters, we can encode them in order of appearance in our preorder traversal using n⌈lg n⌉ bits, for a total of 2n − 1 + n⌈lg n⌉ bits.
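A compact Python sketch of the structural part (the tree representation is our own: a node is either ('leaf', ch) or ('node', left, right)):

def encode_structure(t):
    # Preorder bits: '0' for an internal node, '1' for a leaf.
    if t[0] == 'leaf':
        return '1'
    return '0' + encode_structure(t[1]) + encode_structure(t[2])

def leaves_in_preorder(t):
    # The characters, to be appended afterwards in ceil(lg n)-bit fields.
    if t[0] == 'leaf':
        return [t[1]]
    return leaves_in_preorder(t[1]) + leaves_in_preorder(t[2])

For a full binary tree on n leaves, encode_structure produces exactly 2n − 1 bits, and the appended characters contribute the remaining n⌈lg n⌉ bits.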

Exercise 16.3-7

Instead of grouping together the two with lowest frequency into pairs that have the smallest total frequency, we will group together the three with lowest frequency in order to have a final result that is a ternary tree. The analysis of optimality is almost identical to the binary case: we are placing the symbols of lowest frequency lower down in the final tree, and so they will have longer codewords than the more frequently occurring symbols.

Exercise 16.3-8


For any 2 characters, the sum of their frequencies exceeds the frequency of any other character, so initially Huffman coding makes 128 small trees with 2 leaves each. At the next stage, no internal node has a label which is more than twice that of any other, so we are in the same setup as before. Continuing in this fashion, Huffman coding builds a complete binary tree of height lg(256) = 8, which is no more efficient than ordinary 8-bit length codes.

Exercise 16.3-9

If every possible character is equally likely, then, when constructing the Huffman code, we will end up with a complete binary tree of depth lg 256 = 8. This means that every character, regardless of what it is, will be represented using 8 bits. This is exactly as many bits as were originally used to represent those characters, so the total length of the file will not decrease at all.

Exercise 16.4-1

The first condition, that S is a finite set, is given. The second condition holds because k ≥ 0, which gives us that I_k is nonempty (it contains the empty set). To prove the hereditary property, suppose A ∈ I_k; this means that |A| ≤ k. Then, if B ⊆ A, we have |B| ≤ |A| ≤ k, so B ∈ I_k. Lastly, we prove the exchange property by letting A, B ∈ I_k be such that |A| < |B|. Then, we can pick any element x ∈ B \ A, and since |A ∪ {x}| = |A| + 1 ≤ |B| ≤ k, we can extend A to A ∪ {x} ∈ I_k.

Exercise 16.4-2

Let c_1, . . . , c_m be the columns of T. Suppose C = {c_{i_1}, . . . , c_{i_k}} is dependent. Then there exist scalars d_1, . . . , d_k, not all zero, such that Σ_{j=1}^{k} d_j c_{i_j} = 0. By adding columns to C and assigning them coefficient 0 in the sum, we see that any superset of C is also dependent. By contrapositive, any subset of an independent set must be independent. Now suppose that A and B are two independent sets of columns with |A| > |B|. If we couldn't add any column of A to B whilst preserving independence, then it must be the case that every element of A is a linear combination of elements of B. But this implies that B spans an |A|-dimensional space, which is impossible since |B| < |A|. Therefore our independence system must satisfy the exchange property, so it is in fact a matroid.

Exercise 16.4-3

Condition one of being a matroid is still satisfied because the base set hasn't changed. Next we show that I′ is nonempty. Let A be any maximal element of I; then S − A ∈ I′ because S − (S − A) = A contains the maximal independent set A. Next we show the hereditary property: suppose that B ⊆ A ∈ I′. Then there exists some maximal A′ ∈ I so that A′ ⊆ S − A. However, S − B ⊇ S − A ⊇ A′, so B ∈ I′.

Lastly we prove the exchange property. That is, if we have B,A ∈ I ′ and |B| < |A| we can find an element x in A − B to add to B so that it stays independent. We will split into two cases.

Our first case is that |A| = |B| + 1. We clearly need to select x to be the single element in A − B; then B ∪ {x} = A, which is in I′ by assumption, so the exchange succeeds.

Our second case is when the first case does not hold. Let C be a maximal independent set of I contained in S − A. Pick an arbitrary set of size |C| − 1 from some maximal independent set contained in S − B, and call it D. Since D is a subset of a maximal independent set, it is also independent, and so, by the exchange property, there is some y ∈ C − D so that D ∪ {y} is a maximal independent set in I. Then, we select x to be any element of A − B other than y; such an x exists because |A − B| ≥ 2 in this case. Then S − (B ∪ {x}) will still contain D ∪ {y}. This means that B ∪ {x} is independent in I′.

Exercise 16.4-4

Suppose X ⊆ Y and Y ∈ I. Then (X ∩ S_i) ⊆ (Y ∩ S_i) for all i, so |X ∩ S_i| ≤ |Y ∩ S_i| ≤ 1 for all 1 ≤ i ≤ k. Therefore M is closed under inclusion.

Now let A, B ∈ I with |A| = |B| + 1. Then there must exist some j such that |A ∩ S_j| = 1 but |B ∩ S_j| = 0. Let a be the element of A ∩ S_j. Then a ∉ B and |(B ∪ {a}) ∩ S_j| = 1. Since |(B ∪ {a}) ∩ S_i| = |B ∩ S_i| for all i ≠ j, we must have B ∪ {a} ∈ I. Therefore M is a matroid.

Exercise 16.4-5

Suppose that W is the largest weight that any one element takes. Then, define the new weight function w_2(x) = 1 + W − w(x). This assigns a strictly positive weight, and we will show that any independent set that has maximum weight with respect to w_2 will have minimum weight with respect to w. Recall Theorem 16.6, since we will be using it, and suppose that for our matroid all maximal independent sets have size S. Suppose M_1 and M_2 are maximal independent sets so that M_1 is maximal with respect to w_2 and M_2 is minimal with respect to w. Then, we need to show that w(M_1) = w(M_2). Suppose not, to obtain a contradiction; then, by minimality of M_2, w(M_1) > w(M_2). Rewriting both sides in terms of w_2, we have w_2(M_2) − (1 + W)S > w_2(M_1) − (1 + W)S, so w_2(M_2) > w_2(M_1). This however contradicts maximality of M_1 with respect to w_2. So, we must have that w(M_1) = w(M_2). So, a maximal independent set that has the largest weight with respect to w_2 also has the smallest weight with respect to w.

Exercise 16.5-1

With the requested substitution, the instance of the problem becomes

a_i | 1  | 2  | 3  | 4  | 5  | 6  | 7
d_i | 4  | 2  | 4  | 3  | 1  | 4  | 6
w_i | 10 | 20 | 30 | 40 | 50 | 60 | 70

We begin by greedily constructing an independent set of tasks, adding the tasks that are most costly to leave incomplete (that is, of largest penalty) first. So, we add tasks 7, 6, 5, 4, 3. Then, in order to schedule task 1 or 2 we would need to leave incomplete a more important task, so these two are scheduled late at the end. Our final schedule is ⟨5, 3, 4, 6, 7, 1, 2⟩, with a total penalty of only w_1 + w_2 = 30.
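A small Python sketch of this greedy (our own code): process tasks in decreasing order of penalty, placing each in the latest free slot no later than its deadline; tasks that don't fit are late and run at the end, in any order.

def schedule_tasks(tasks):
    # tasks: list of (name, deadline, penalty)
    n = len(tasks)
    slot = [None] * (n + 1)             # time slots 1..n
    late = []
    for name, d, w in sorted(tasks, key=lambda t: -t[2]):
        t = min(d, n)
        while t > 0 and slot[t] is not None:
            t -= 1                      # find the latest free slot <= deadline
        if t > 0:
            slot[t] = name
        else:
            late.append((name, w))
    order = [s for s in slot if s is not None] + [name for name, _ in late]
    return order, sum(w for _, w in late)

print(schedule_tasks([(1, 4, 10), (2, 2, 20), (3, 4, 30), (4, 3, 40),
                      (5, 1, 50), (6, 4, 60), (7, 6, 70)]))
# ([5, 3, 4, 6, 7, 2, 1], 30) -- the late tasks 1 and 2 may run in either order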

Exercise 16.5-2

Create an array B of length n containing zeros in each entry, and for each element a ∈ A add 1 to B[a.deadline]. The set A is independent if and only if, for every time t, the number of tasks with deadline at most t is at most t. So after filling B, scan it once, keeping a running total B[1] + · · · + B[t]; if at any point the running total exceeds t, return that the set is not independent. If the scan completes, return that the set is independent.
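Assuming deadlines are integers in 1..n, the check looks like this in Python (our own code):

def is_independent(deadlines, n):
    # Independent iff for every t: |{a in A : a.deadline <= t}| <= t.
    B = [0] * (n + 1)
    for d in deadlines:
        B[min(d, n)] += 1
    scheduled = 0
    for t in range(1, n + 1):
        scheduled += B[t]               # tasks that must finish by time t
        if scheduled > t:
            return False
    return True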

Problem 16-1

a. Always give the highest denomination coin that you can without going over. Then, repeat this process until the amount of remaining change drops to 0.

b. Given an optimal solution (x_0, x_1, . . . , x_k), where x_i indicates the number of coins of denomination c^i, we will first show that we must have x_i < c for every i < k. Suppose that we had some x_i ≥ c; then, we could decrease x_i by c and increase x_{i+1} by 1. This collection of coins has the same value and has c − 1 fewer coins, so the original solution must have been non-optimal. This configuration of coins is exactly the same as you would get if you kept greedily picking the largest coin possible. This is because to get a total value of V, you would pick x_k = ⌊V/c^k⌋ and, for i < k, x_i = ⌊(V mod c^{i+1})/c^i⌋. This is the only solution with at most c − 1 coins of every denomination but the largest, because those coin counts form the base-c representation of V mod c^k.

c. Let the coin denominations be {1, 3, 4}, and the value to make change for be 6. The greedy solution would result in the collection of coins {1, 1, 4} but the optimal solution would be {3, 3}.

d. See Algorithm 2, MAKE-CHANGE(S, v), below, which gives a dynamic programming solution. Since the outer for loop runs v times, the inner for loop runs |S| = k times, and the final while loop runs at most v times, the total running time is O(vk).
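As a quick illustration of parts (a) and (c) before the pseudocode, a Python sketch of the greedy rule (our own code), which fails on the denomination set {1, 3, 4}:

def greedy_change(denoms, v):
    # Part (a): repeatedly give the largest coin that does not overshoot.
    coins = []
    for c in sorted(denoms, reverse=True):
        k, v = divmod(v, c)
        coins += [c] * k
    return coins

print(greedy_change([1, 3, 4], 6))   # [4, 1, 1]: three coins, but the
                                     # DP below finds the optimal {3, 3}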

Algorithm 2 MAKE-CHANGE(S, v)

Let numcoins and coin be arrays indexed 0 to v, with numcoins[0] = 0; any attempt to access numcoins at a negative index returns ∞
for i from 1 to v do
    bestcoin = nil
    bestnum = ∞
    for c in S do
        if numcoins[i − c] + 1 < bestnum then
            bestnum = numcoins[i − c] + 1
            bestcoin = c
        end if
    end for
    numcoins[i] = bestnum
    coin[i] = bestcoin
end for
let change be an empty list
iter = v
while iter > 0 do
    add coin[iter] to change
    iter = iter − coin[iter]
end while
return change

Problem 16-2

a. Order the tasks by processing time from smallest to largest and run them in that order (a Python sketch follows this problem). To see that this greedy solution is optimal, first observe that the problem exhibits optimal substructure: if we run the first task in an optimal solution, then we obtain an optimal solution by running the remaining tasks in a way which minimizes the average completion time. Let O be an optimal solution. Let a be the task which has the smallest processing time and let b be the first task run in O. Let G be the solution obtained by switching the order in which we run a and b in O. This amounts to reducing the completion time of a, and the completion times of all tasks in G between a and b, by the difference in the processing times of a and b. Since all other completion times remain the same, the average completion time of G is less than or equal to the average completion time of O, proving that the greedy solution gives an optimal solution. This has runtime O(n lg n) because we must first sort the elements.

b. Without loss of generality we may assume that every task is a unit-time task. Apply the same strategy as in part (a), except this time, if the task which we would like to add next to the schedule isn't allowed to run yet (its release time hasn't arrived), we must skip over it. Since there could be many tasks of short processing time which have late release times, the runtime becomes O(n^2), since we might have to spend O(n) time deciding which task to add next at each step.
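The promised sketch of part (a) in Python (our own code): sort by processing time and accumulate completion times.

def min_avg_completion(proc_times):
    # Shortest-processing-time-first order minimizes average completion time.
    t = 0
    total = 0
    for p in sorted(proc_times):
        t += p             # completion time of this task
        total += t
    return total / len(proc_times)

print(min_avg_completion([3, 1, 2]))   # runs 1, 2, 3 -> completions 1, 3, 6 -> 10/3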

Problem 16-3

a. First, suppose that a set of columns is not linearly independent over F_2. Then there is some subset of those columns, say S, such that a linear combination of S is 0. However, over F_2, since the only two elements are 1 and 0, a linear combination is a sum over some subset. Suppose that this subset is S′; note that it has to be nonempty because of the linear dependence. Now, consider the set of edges that these columns correspond to. Since the sum of these columns is zero, every vertex is incident to an even number of edges of S′; that is, every vertex of the subgraph on these edges has even degree. Also, since S′ was nonempty, some component has an edge. Restrict attention to any such component. Since this component is connected and has all even vertex degrees, it contains an Euler circuit, and hence a cycle.

Now, suppose that our graph had some subset of edges which formed a cycle. Then, the degree of any vertex with respect to this set of edges is even, so, when we add the corresponding columns, we get the zero column over F_2. Since sets of linearly independent columns form a matroid, by Exercise 16.4-2, the acyclic sets of edges form a matroid as well.

b. One simple approach is to repeatedly take the highest-weight edge that doesn't complete a cycle. Another way to phrase this is as running Kruskal's algorithm (see Chapter 23) on the graph with negated edge weights.

c. Consider the digraph on vertex set {1, 2, 3} with the edges (1, 2), (2, 1), (2, 3), (3, 2), (3, 1), where (u, v) indicates there is an edge from u to v. Then, consider the two acyclic subsets of edges B = {(3, 1), (3, 2), (2, 1)} and A = {(1, 2), (2, 3)}. Adding any edge in B − A to A will create a cycle, so the exchange property is violated.

d. Suppose that the graph contained a directed cycle consisting of edges corresponding to columns S. Since each vertex involved in this cycle has exactly as many edges going out of it as going into it, the entries in each row sum to zero: the outgoing edges count negative and the incoming edges count positive. This means that the sum of the columns in S is zero, so the columns are not linearly independent.

e. There is not a perfect correspondence, because we didn't show that not containing a directed cycle means that the columns are linearly independent. Indeed, there cannot be a perfect correspondence between these sets of independent columns (which we know to form a matroid) and the acyclic sets of edges (which we know not to form a matroid).

Problem 16-4

a. Let O be an optimal solution. If a_j is scheduled before its deadline, we can always swap it with whichever activity is scheduled at its deadline without changing the penalty. If it is scheduled after its deadline but a_j.deadline ≤ j, then there must exist a task from among the first j with penalty less than that of a_j. We can then swap a_j with this task to reduce the overall penalty incurred. Since O is optimal, this can't happen. Finally, if a_j is scheduled after its deadline and a_j.deadline > j, we can swap a_j with any other late task without increasing the penalty incurred. Since the problem exhibits the greedy-choice property as well, this greedy strategy always yields an optimal solution.

b. Assume that MAKE-SET(x) returns a pointer to the element x, which is now in its own set. Our disjoint sets will be collections of elements which have been scheduled at contiguous times. We'll use this structure to quickly find the next available time to schedule a task. Store attributes x.low and x.high at the representative x of each disjoint set. These give the earliest and latest times of scheduled tasks in the block. Assume that UNION(x, y) maintains this attribute; this can be done in constant time, so it won't affect the asymptotics. Note that the attribute is well-defined under the union operation because we only union two blocks if they are contiguous. Without loss of generality we may assume that task a_1 has the greatest penalty, task a_2 has the second greatest penalty, and so on, and that they are given to us in the form of an array A where A[i] = a_i. We will maintain an array D such that D[i] contains a pointer to the task with deadline i. We may assume that the size of D is at most n, since a task with deadline later than n can't possibly be scheduled on time. The procedure is given as Algorithm 3 below. There are at most 3n total MAKE-SET, UNION, and FIND-SET operations, so by Theorem 21.14 the runtime is O(n α(n)).

Algorithm 3 SCHEDULING-VARIATIONS(A)

Initialize an array D of size n
for i = 1 to n do
    a_i.time = a_i.deadline
    if D[a_i.deadline] ≠ NIL then
        y = FIND-SET(D[a_i.deadline])
        a_i.time = y.low − 1
    end if
    x = MAKE-SET(a_i)
    D[a_i.time] = x
    x.low = x.high = a_i.time
    if D[a_i.time − 1] ≠ NIL then
        UNION(D[a_i.time − 1], D[a_i.time])
    end if
    if D[a_i.time + 1] ≠ NIL then
        UNION(D[a_i.time], D[a_i.time + 1])
    end if
end for

Problem 16-5

a. Suppose there are m distinct elements that could be requested. There may be some room for improvement in terms of keeping track of the furthest-in-future element at each position. Maintain a doubly linked circular list with a node for each possible cached element, and an array indexed by element whose entry i points to the node in the linked list corresponding to possible cache request i. Then, starting with the elements in an arbitrary order, process the sequence ⟨r_1, . . . , r_n⟩ from right to left. Upon processing a request, move the node corresponding to that request to the beginning of the linked list, and record, in another array of length n, the element at the end of the linked list; this element is tied for furthest-in-future. Then, just scan left to right through the sequence, each time checking whether the requested element is currently in the cache; this can be done in constant time with a direct-address table. If an element needs to be evicted, evict the furthest-in-future one noted earlier. This algorithm takes time O(n + m) and uses additional space O(m + n). In the degenerate case that m > n, we can restrict our attention to the possible cache requests that actually happen, giving a solution that is O(n) both in time and in additional space required. (A runnable sketch of the furthest-in-future policy follows this problem.)

b. Index the subproblems c[i, S] by a number i ∈ [n] and a subset S ⊆ [m] with |S| = k, where c[i, S] is the lowest number of misses that can be achieved on the remaining sequence after index i with an initial cache of S. Then,

c[i, S] = min_{x ∈ S} ( c[i + 1, {r_i} ∪ (S − {x})] + (1 − χ_{{r_i}}(x)) ),

where χ_{{r_i}} is the indicator function of {r_i}. That is, x is the element removed from the cache, unless x is the current element being accessed, in which case there is no cost of eviction.

c. At each time we need to add something new, we can pick which entry to evict from the cache. We need to show that there is an exchange property. That is, if we are at round i and need to evict someone, suppose we evict x. Then, if we were to instead evict the furthest-in-future element y, we would have no more evictions than before. To see this: since we evicted x, we will have to evict someone else once we get to x, whereas, if we had used the other strategy, we wouldn't have had to evict anyone until we got to y. This is a point later in time than when we had to evict someone to put x back into the cache, so we could, at the reloading of y, just evict the element we would have evicted when we evicted someone to reload x. This causes the same number of misses, unless there was an access to that element at some point in between when x and y were needed, in which case furthest-in-future does strictly better.
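For completeness, here is a runnable Python sketch of the furthest-in-future policy discussed in parts (a) and (c) (our own code; it uses a lazy-deletion heap, so it runs in O(n lg k) time rather than implementing the O(n + m) linked-list scheme of part (a)):

import heapq

def belady_misses(requests, k):
    # Simulate a k-slot cache with furthest-in-future eviction; count misses.
    n = len(requests)
    INF = float('inf')
    nxt = [INF] * n                  # nxt[i]: next index at which requests[i] recurs
    last = {}
    for i in range(n - 1, -1, -1):
        nxt[i] = last.get(requests[i], INF)
        last[requests[i]] = i
    cache = {}                       # element -> its current next-use index
    heap = []                        # (-next_use, element); stale entries skipped
    misses = 0
    for i, r in enumerate(requests):
        if r not in cache:
            misses += 1
            if len(cache) == k:      # evict the element needed furthest in future
                while True:
                    neg, x = heapq.heappop(heap)
                    if x in cache and cache[x] == -neg:
                        del cache[x]
                        break
        cache[r] = nxt[i]
        heapq.heappush(heap, (-nxt[i], r))
    return misses

print(belady_misses(list("abacbcab"), 2))   # 4 misses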
