
Chapter 10

Amortized Analysis

10 -1

An example push and pop




A sequence of operations: OP1, OP2, ..., OPm.

- OPi: several pops (from the stack) and one push (into the stack)
- ti: time spent by OPi

The average time per operation:

$t_{\text{ave}} = \frac{1}{m} \sum_{i=1}^{m} t_i$
10 -2

Example: a sequence of pushes and pops (p: pop, u: push)

i   :  1    2    3      4    5    6    7      8
OPi :  1u   1u   2p,1u  1u   1u   1u   2p,1u  1p,1u
ti  :  1    1    3      1    1    1    3      2

t_ave = (1+1+3+1+1+1+3+2)/8 = 13/8 = 1.625
10 -3

Another example: a sequence of pushes and pops (p: pop, u: push)

i   :  1    2      3    4    5    6    7      8
OPi :  1u   1p,1u  1u   1u   1u   1u   5p,1u  1u
ti  :  1    2      1    1    1    1    6      1

t_ave = (1+2+1+1+1+1+6+1)/8 = 14/8 = 1.75
10 -4

Amortized time and potential function


$a_i = t_i + \Phi_i - \Phi_{i-1}$

- $a_i$: amortized time of OPi
- $\Phi_i$: potential function of the stack after OPi
- $\Phi_i - \Phi_{i-1}$: change of the potential

Summing over the whole sequence, the potential differences telescope:

$\sum_{i=1}^{m} a_i = \sum_{i=1}^{m} t_i + \sum_{i=1}^{m} (\Phi_i - \Phi_{i-1}) = \sum_{i=1}^{m} t_i + \Phi_m - \Phi_0$

If $\Phi_m - \Phi_0 \ge 0$, then $\sum_{i=1}^{m} a_i$ represents an upper bound of $\sum_{i=1}^{m} t_i$.
10 -5

Amortized analysis of the push-and-pop sequence




- $\Phi_i$: the number of elements in the stack after OPi. Since $\Phi_0 = 0$ and $\Phi_m \ge 0$, we have $\Phi_m - \Phi_0 \ge 0$.
- Suppose that before we execute OPi there are k elements in the stack, and OPi consists of n pops and 1 push. Then:

$\Phi_{i-1} = k$, $\quad t_i = n + 1$, $\quad \Phi_i = k - n + 1$

$a_i = t_i + \Phi_i - \Phi_{i-1} = (n+1) + (k-n+1) - k = 2$

10 -6

We have $\left(\sum_{i=1}^{m} a_i\right)/m = 2$, and therefore $t_{\text{ave}} \le 2$.

By observation: at most m pops and m pushes are executed over m operations, so the total time is at most 2m and $t_{\text{ave}} \le 2$ also follows directly.
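The accounting above can be checked with a short sketch. The encoding of each operation as its pop count and the function name below are my own illustration: with potential Φ = stack size, every operation of n pops and one push has amortized cost exactly 2.

```python
# Sketch: with potential phi = number of elements on the stack, every
# operation consisting of n pops followed by one push has amortized cost 2.

def amortized_costs(ops):
    """ops[i] = n means operation i performs n pops and then 1 push."""
    size = 0                                    # current stack size = phi
    costs = []
    for n in ops:
        assert n <= size, "cannot pop more elements than the stack holds"
        actual = n + 1                          # t_i = n pops + 1 push
        new_size = size - n + 1                 # phi_i = k - n + 1
        costs.append(actual + new_size - size)  # a_i = t_i + phi_i - phi_{i-1}
        size = new_size
    return costs

# First example from the slides: t = (1,1,3,1,1,1,3,2), t_ave = 13/8
print(amortized_costs([0, 0, 2, 0, 0, 0, 2, 1]))  # -> [2, 2, 2, 2, 2, 2, 2, 2]
```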

10 -7

Skew heaps


meld: merge + swapping


[Figure: two skew heaps to be melded, containing the keys 1, 5, 10, 12, 13, 14, 16, 19, 20, 25, 30, 40, 50]

Step 1: Merge the right paths of the two heaps. (The 5 right heavy nodes are highlighted in yellow in the figure.)


10 -8

Step 2: Swap the children along the right path.


[Figure: the melded skew heap after swapping, rooted at 1, with the remaining keys 5, 10, 12, 13, 14, 16, 19, 20, 25, 30, 40, 50 rearranged]

No right heavy node remains.
10 -9

Amortized analysis of skew heaps


 

meld: merge + swapping

Operations on a skew heap:

- find-min(h): find the min of a skew heap h.
- insert(x, h): insert x into a skew heap h.
- delete-min(h): delete the min from a skew heap h.
- meld(h1, h2): meld two skew heaps h1 and h2.

The first three operations can be implemented by melding.
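The meld operation (merge the right paths, swapping children along the way) can be sketched recursively; the class and function names below are my own illustration, not code from the slides. Insert and delete-min reduce to meld exactly as stated above.

```python
# Sketch of a skew heap (min-heap): meld merges the right paths of the two
# heaps and swaps the children of every node visited.

class SkewNode:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def meld(h1, h2):
    if h1 is None: return h2
    if h2 is None: return h1
    if h2.key < h1.key:
        h1, h2 = h2, h1                  # keep the smaller root on top
    # merge h2 into the old right subtree, then swap the children of h1
    h1.left, h1.right = meld(h1.right, h2), h1.left
    return h1

def insert(x, h):                        # insert = meld with a 1-node heap
    return meld(SkewNode(x), h)

def delete_min(h):                       # delete-min = meld the two children
    return h.key, meld(h.left, h.right)

h = None
for x in [13, 5, 20, 1, 40, 30]:
    h = insert(x, h)
print(delete_min(h)[0])                  # -> 1
```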

10 -10

Potential function of skew heaps




 

- wt(x): the number of descendants of node x, including x itself.
- heavy node x: wt(x) > wt(p(x))/2, where p(x) is the parent node of x.
- light node: a node that is not heavy.
- potential function $\Phi_i$: the number of right heavy nodes of the skew heap.

10 -11

Any path in an n-node tree contains at most $\log_2 n$ light nodes: following a light child at most halves the subtree weight, so a path can pass through no more than $\log_2 n$ light nodes.

[Figure: a path in an n-node tree with at most $\log_2 n$ light nodes; the other nodes on the path may all be heavy]

The number of right heavy nodes attached to the left path is at most $\log_2 n$.
10 -12

Amortized time
[Figure: heap h1 with n1 nodes, whose right path has at most $\log_2 n_1$ light nodes and $k_1$ heavy nodes; heap h2 with n2 nodes, whose right path has at most $\log_2 n_2$ light nodes and $k_2$ heavy nodes]
10 -13

$a_i = t_i + \Phi_i - \Phi_{i-1}$, where $t_i$ is the time spent by OPi.

$t_i \le 2 + \log_2 n_1 + k_1 + \log_2 n_2 + k_2$ (the 2 counts the roots of h1 and h2)
$\quad \le 2 + 2\log_2 n + k_1 + k_2$, where $n = n_1 + n_2$

$\Phi_i - \Phi_{i-1} = k_3 - (k_1 + k_2) \le \log_2 n - k_1 - k_2$

$a_i = t_i + \Phi_i - \Phi_{i-1} \le 2 + 2\log_2 n + k_1 + k_2 + \log_2 n - k_1 - k_2 = 2 + 3\log_2 n$

$a_i = O(\log_2 n)$
10 -14

AVL-trees
Height balance of node v: hb(v) = (height of right subtree) − (height of left subtree). In an AVL-tree, the two subtree heights of every node differ by at most 1, i.e. hb(v) ∈ {−1, 0, +1}.
[Figure: an AVL-tree on the keys C, E, K, N, O, Q with the height balance (−1, 0, or +1) labeled at every node]
Fig. An AVL-Tree with Height Balance Labeled


10 -15

Add a new node A.

Before the insertion, hb(B) = hb(C) = hb(E) = 0 and hb(I) ≠ 0; I is the first node with a nonzero height balance on the path from the leaves.


10 -16

Amortized analysis of AVL-trees




Consider a sequence of m insertions into an initially empty AVL-tree.

- T0: the empty AVL-tree.
- Ti: the tree after the i-th insertion.
- Li: the length of the critical path involved in the i-th insertion.
- X1: the total number of balance factors changing from 0 to +1 or −1 during these m insertions (the total cost of rebalancing).

$X_1 = \sum_{i=1}^{m} L_i$; we want to bound $X_1$.

Val(T): the number of unbalanced nodes in T (nodes with height balance ≠ 0).

10 -17

Case 1 : Absorption


The tree height is not increased, so no rebalancing is needed.

Val(Ti) = Val(Ti−1) + (Li − 1)
10 -18

Case 2.1 single rotation

10 -19

Case 2 : Rebalancing the tree


D B F

F D B A C E

left rotation
G A

right rotation
B D C E F G
10 -20

Case 2.1 single rotation




After a right rotation on the subtree rooted at D:

Val(Ti)=Val(Ti-1)+(Li-2)

10 -21

Case 2.2 double rotation

10 -22

Case 2.2 double rotation




After a left rotation on the subtree rooted at B and a right rotation on the subtree rooted at F:

Val(Ti)=Val(Ti-1)+(Li-2)

10 -23

Case 3 : Height increase


[Figure: after the insertion the tree height increases; −1 balance factors appear along the critical path from the root]

Li is the height of the root.

Val(Ti) = Val(Ti−1) + Li
10 -24

Amortized analysis of X1
- X2: number of absorptions in Case 1
- X3: number of single rotations in Case 2
- X4: number of double rotations in Case 2
- X5: number of height increases in Case 3

$\mathrm{Val}(T_m) = \mathrm{Val}(T_0) + \sum_{i=1}^{m} L_i - X_2 - 2(X_3 + X_4) = 0 + X_1 - X_2 - 2(X_3 + X_4)$

Val(Tm) ≤ 0.618m (proved by Knuth). Since each insertion falls into exactly one of the cases, $X_2 + X_3 + X_4 \le m$, so

$X_1 = \mathrm{Val}(T_m) + 2(X_2 + X_3 + X_4) - X_2 \le 0.618m + 2m = 2.618m$
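The 2.618m bound can be checked empirically. The sketch below is my own code and counting scheme (it snapshots every node's balance factor around each insertion, which can only undercount the true X1, so the bound must still hold for the measured value):

```python
# Sketch: empirically check X1 <= 2.618*m for m insertions into an AVL tree,
# where X1 counts balance factors changing from 0 to +1/-1.
import random

class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1

def h(n): return n.height if n else 0
def upd(n): n.height = 1 + max(h(n.left), h(n.right))
def bf(n): return h(n.right) - h(n.left)          # height balance hb(v)

def rot_l(x):
    y = x.right; x.right = y.left; y.left = x; upd(x); upd(y); return y

def rot_r(x):
    y = x.left; x.left = y.right; y.right = x; upd(x); upd(y); return y

def insert(r, key):
    if r is None: return Node(key)
    if key < r.key: r.left = insert(r.left, key)
    else: r.right = insert(r.right, key)
    upd(r)
    b = bf(r)
    if b < -1:                                     # left subtree too tall
        if bf(r.left) > 0: r.left = rot_l(r.left)  # LR case: double rotation
        return rot_r(r)
    if b > 1:                                      # right subtree too tall
        if bf(r.right) < 0: r.right = rot_r(r.right)
        return rot_l(r)
    return r

def balances(r, out):
    if r:
        out[id(r)] = bf(r); balances(r.left, out); balances(r.right, out)
    return out

random.seed(1)
root, x1, m = None, 0, 500
for key in random.sample(range(10**6), m):
    before = balances(root, {})
    root = insert(root, key)
    after = balances(root, {})
    x1 += sum(1 for n, b in after.items() if b != 0 and before.get(n, 0) == 0)
print(x1 <= 2.618 * m)                             # -> True
```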


10 -25

Self-organizing sequential search heuristics




Three methods for enhancing the performance of sequential search.

(1) Transpose Heuristics:

Query   :  B   D    A    D    D    C     A
Sequence:  B   DB   DAB  DAB  DAB  DACB  ADCB
10 -26

(2) Move-to-the-front Heuristics (on the same query sequence):

Query   :  B   D    A    D    D    C     A
Sequence:  B   DB   ADB  DAB  DAB  CDAB  ACDB
10 -27

(3) Count Heuristics (decreasing order by the count, on the same query sequence):

Query   :  B   D    A    D    D    C     A
Sequence:  B   BD   BDA  DBA  DBA  DBAC  DABC
10 -28

Analysis of the move-to-the-front heuristics

- interword comparison: unsuccessful comparison
- intraword comparison: successful comparison
- pairwise independent property: for any sequence S and all pairs P and Q, the number of interword comparisons between P and Q is exactly the number of interword comparisons made for the subsequence of S consisting of only P's and Q's.

(See the example on the next page.)
10 -29

Pairwise independent property in move-to-the-front


e.g.

Query   :  C   A    C    B    C    A
Sequence:  C   AC   CA   BCA  CBA  ACB

(A, B) comparisons occur at the 4th query (B) and the 6th query (A).
Number of comparisons made between A and B: 2


10 -30

Consider the subsequence consisting of A and B:

Query   :  A   B    A
Sequence:  A   BA   AB

(A, B) comparisons occur at the 2nd and 3rd queries.
Number of comparisons made between A and B: 2

10 -31

Query   :  C   A   C   B   C   A  | total
(A, B)  :  0   0   0   1   0   1  |   2
(A, C)  :  0   1   1   0   0   1  |   3
(B, C)  :  0   0   0   1   1   0  |   2

There are 3 distinct interword comparisons: (A, B), (A, C) and (B, C). We can consider them separately and then add them up. The total number of interword comparisons is 0+1+1+2+1+2 = 7.
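Both counts can be reproduced with a short simulation; the function below is my own sketch of the heuristic, not code from the slides. The full sequence C A C B C A yields 7 interword comparisons, and summing over the three pairwise subsequences yields the same total.

```python
# Sketch: count interword (unsuccessful) comparisons under move-to-the-front
# and check the pairwise independent property on the slides' example.

def interword_mtf(queries):
    lst, total = [], 0
    for q in queries:
        for i, x in enumerate(lst):
            if x == q:
                lst.insert(0, lst.pop(i))   # found: move to the front
                break
            total += 1                      # interword comparison with x
        else:
            lst.insert(0, q)                # not found: new item to the front
    return total

seq = list("CACBCA")
print(interword_mtf(seq))                   # -> 7
pairs = [("A", "B"), ("A", "C"), ("B", "C")]
per_pair = [interword_mtf([q for q in seq if q in p]) for p in pairs]
print(per_pair, sum(per_pair))              # -> [2, 3, 2] 7
```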

10 -32

Theorem for the move-to-the-front heuristics


- C_M(S): number of comparisons made by the move-to-the-front heuristics
- C_O(S): number of comparisons made by the optimal static ordering

Theorem: C_M(S) ≤ 2·C_O(S)

10 -33

Proof


- Inter_M(S): number of interword comparisons of the move-to-the-front heuristics
- Inter_O(S): number of interword comparisons of the optimal static ordering

First let S consist of only two items: a A's and b B's, with a ≤ b. The optimal static ordering is BA, so Inter_O(S) = a. One can show Inter_M(S) ≤ 2a, hence Inter_M(S) ≤ 2·Inter_O(S).

10 -34

Proof


(cont.)

Now consider any sequence consisting of more than two items. Because of the pairwise independent property, we still have Inter_M(S) ≤ 2·Inter_O(S).

- Intra_M(S): number of intraword comparisons of the move-to-the-front heuristics
- Intra_O(S): number of intraword comparisons of the optimal static ordering
- Intra_M(S) = Intra_O(S)

Therefore Inter_M(S) + Intra_M(S) ≤ 2·Inter_O(S) + Intra_O(S) ≤ 2·C_O(S), i.e. C_M(S) ≤ 2·C_O(S).
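The theorem can also be checked empirically. In the sketch below the helper names are my own; the optimal static ordering is taken as the items sorted by decreasing frequency, and an unsuccessful move-to-the-front search over k items is charged k interword comparisons.

```python
# Sketch: empirically check C_M(S) <= 2*C_O(S) for move-to-the-front
# against the best fixed (frequency-sorted) ordering.
import random
from collections import Counter

def cost_mtf(queries):
    lst, total = [], 0
    for q in queries:
        if q in lst:
            i = lst.index(q)
            total += i + 1                # i interword + 1 intraword
            lst.insert(0, lst.pop(i))     # move the found item to the front
        else:
            total += len(lst)             # all interword, then insert in front
            lst.insert(0, q)
    return total

def cost_optimal_static(queries):
    freq = Counter(queries)
    order = sorted(freq, key=freq.get, reverse=True)
    pos = {x: i + 1 for i, x in enumerate(order)}
    return sum(pos[q] for q in queries)

random.seed(7)
s = [random.choice("ABCDE") for _ in range(200)]
print(cost_mtf(s) <= 2 * cost_optimal_static(s))  # -> True
```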
10 -35

The count heuristics




The count heuristics has a similar result: C_C(S) ≤ 2·C_O(S), where C_C(S) is the cost of the count heuristics.

10 -36

The transposition heuristics


 

The transposition heuristics does not possess the pairwise independent property, so we cannot obtain a similar upper bound on its cost this way.

e.g. Consider pairs of distinct items independently:

Query   :  C   A   C   B   C   A  | total
(A, B)  :  0   0   0   1   0   1  |   2
(A, C)  :  0   1   1   0   0   1  |   3
(B, C)  :  0   0   0   1   1   0  |   2

Number of interword comparisons obtained this way: 7 (not correct)

10 -37

The correct interword comparisons:

Query                          :  C   A    C    B    C    A
Data ordering                  :  C   AC   CA   CBA  CBA  CAB
Number of interword comparisons:  0   1    1    2    0    2

Total: 6

10 -38
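The 6-versus-7 discrepancy can be reproduced directly; the function below is my own sketch of the transposition heuristic (a newly inserted item is also transposed once, matching the orderings in the example above).

```python
# Sketch: interword comparisons under the transposition heuristic.  On the
# full sequence C A C B C A the true count is 6, but summing over the three
# pairwise subsequences gives 7, so the pairwise independent property fails.

def interword_transpose(queries):
    lst, total = [], 0
    for q in queries:
        if q in lst:
            i = lst.index(q)
            total += i                       # i interword comparisons
            if i > 0:                        # swap with the predecessor
                lst[i - 1], lst[i] = lst[i], lst[i - 1]
        else:
            total += len(lst)                # unsuccessful: all interword
            lst.append(q)
            if len(lst) > 1:                 # new item also transposes once
                lst[-2], lst[-1] = lst[-1], lst[-2]
    return total

seq = list("CACBCA")
print(interword_transpose(seq))              # -> 6
pairs = [("A", "B"), ("A", "C"), ("B", "C")]
print(sum(interword_transpose([q for q in seq if q in p]) for p in pairs))  # -> 7
```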