
Binary search tree

A binary search tree is organised as a
binary tree. Such a tree can be
represented by a linked data
structure in which each node is an
object. In addition to a key field, each
node contains the fields left, right and
parent (p), which point to the nodes
corresponding to its left child, right
child and its parent.
If a child or the parent is missing, the
appropriate field contains the value
nil. The root node is the only node
in the tree whose parent field is nil.

Properties of BST

Let x be a node in a BST:

-> If y is a node in the left subtree of
x, then key[y] <= key[x].
-> If y is a node in the right subtree of
x, then key[y] >= key[x].

Tree traversal

The BST property allows us to print

out all the keys of a BST in sorted order
by a simple recursive algorithm.
These algorithms are:
1. Inorder traversal
2. Preorder traversal
3. Postorder traversal

INORDER-TREE-WALK(x)
1. if x != nil
2. then INORDER-TREE-WALK(left[x])
3. print key[x]
4. INORDER-TREE-WALK(right[x])

PREORDER-TREE-WALK(x)
1. if x != nil
2. then print key[x]
3. PREORDER-TREE-WALK(left[x])
4. PREORDER-TREE-WALK(right[x])

POSTORDER-TREE-WALK(x)
1. if x != nil
2. then POSTORDER-TREE-WALK(left[x])
3. POSTORDER-TREE-WALK(right[x])
4. print key[x]
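The three walks can be sketched in Python; the `Node` class here is a hypothetical minimal helper whose fields mirror the key/left/right fields of the pseudocode.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def inorder(x, out):
    if x is not None:
        inorder(x.left, out)      # left subtree first
        out.append(x.key)         # then the node itself
        inorder(x.right, out)     # then the right subtree

def preorder(x, out):
    if x is not None:
        out.append(x.key)         # node before either subtree
        preorder(x.left, out)
        preorder(x.right, out)

def postorder(x, out):
    if x is not None:
        postorder(x.left, out)
        postorder(x.right, out)
        out.append(x.key)         # node after both subtrees
```

On a BST, the inorder walk yields the keys in sorted order, which is the point made above.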

Searching in BST

The TREE-SEARCH(x, k) algorithm searches the

subtree rooted at node x for a node whose key
equals k. It returns a pointer to the
node if it exists, otherwise nil.

TREE-SEARCH(x, k)
1. if x = nil or k = key[x]
2. then return x
3. if k < key[x]
4. then return TREE-SEARCH(left[x], k)
5. else return TREE-SEARCH(right[x], k)
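A minimal Python sketch of the search, assuming a hypothetical `Node` class with the same key/left/right fields as the pseudocode:

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def tree_search(x, k):
    """Return the node with key k in the subtree rooted at x, or None."""
    if x is None or k == x.key:
        return x
    if k < x.key:
        return tree_search(x.left, k)   # key must be in the left subtree
    return tree_search(x.right, k)      # otherwise in the right subtree
```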

Minimum & maximum node in BST

TREE-MINIMUM(x)
1 while left[x] != nil
2 do x <- left[x]
3 return x

TREE-MAXIMUM(x)
1 while right[x] != nil
2 do x <- right[x]
3 return x
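The two walks in Python; `Node` is the same hypothetical helper class as before.

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def tree_minimum(x):
    while x.left is not None:   # keep walking left
        x = x.left
    return x

def tree_maximum(x):
    while x.right is not None:  # keep walking right
        x = x.right
    return x
```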

Insertion in BST

TREE-INSERT(T, z)
1. y <- nil
2. x <- root[T]
3. while x != nil
4. do y <- x
5. if key[z] < key[x]
6. then x <- left[x]
7. else x <- right[x]
8. p[z] <- y
9. if y = nil
10. then root[T] <- z
11. else if key[z] < key[y]
12. then left[y] <- z
13. else right[y] <- z
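The iterative insert translated to Python; `Tree` and `Node` are hypothetical minimal classes (root / key, left, right, parent), not part of the notes.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

class Tree:
    def __init__(self):
        self.root = None

def tree_insert(T, z):
    y = None
    x = T.root
    while x is not None:        # descend, remembering the parent in y
        y = x
        if z.key < x.key:
            x = x.left
        else:
            x = x.right
    z.parent = y
    if y is None:
        T.root = z              # tree was empty
    elif z.key < y.key:
        y.left = z
    else:
        y.right = z
```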

Successor in BST

TREE-SUCCESSOR(x)
1. if right[x] != nil
2. then return TREE-MINIMUM(right[x])
3. y <- p[x]
4. while y != nil and x = right[y]
5. do x <- y
6. y <- p[y]
7. return y
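A Python sketch of the successor walk over nodes with parent pointers; `Node` and the small `link` helper are illustrative assumptions.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

def link(parent, child, side):
    setattr(parent, side, child)   # side is 'left' or 'right'
    child.parent = parent

def tree_minimum(x):
    while x.left is not None:
        x = x.left
    return x

def tree_successor(x):
    if x.right is not None:            # successor = leftmost node of right subtree
        return tree_minimum(x.right)
    y = x.parent                       # else climb until we leave a left subtree
    while y is not None and x is y.right:
        x = y
        y = y.parent
    return y                           # None means x held the maximum key
```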
Deletion in a Binary search tree

When deleting a node from a tree, it is

important that any relationships
implicit in the tree be maintained. Three
different cases can be identified:
1. Node with no children -> simply set
the parent's pointer to the node being
deleted to nil.
2. Node with one child -> the single child
of the node being deleted becomes the
child of its grandparent node.
3. Node with two children -> first find
the successor of the node to be deleted,
then replace the deleted node with its
successor.

TREE-DELETE(T, z)
1 if left[z] = nil or right[z] = nil
2 then y <- z
3 else y <- TREE-SUCCESSOR(z)
4 if left[y] != nil
5 then x <- left[y]
6 else x <- right[y]
7 if x != nil
8 then p[x] <- p[y]
9 if p[y] = nil
10 then root[T] <- x
11 else if y = left[p[y]]
12 then left[p[y]] <- x
13 else right[p[y]] <- x
14 if y != z
15 then key[z] <- key[y]
16 copy y's other fields into z
17 return y
Red Black Tree

A red-black (RB) tree is a BST with one extra bit

of storage per node: its color, which
can be either red or black.
Each node of the tree contains the
following fields:
parent, key, color, left, right.

Properties of RB tree

A BST is a red-black tree if it satisfies the

following properties:
1. Every node is either red or black.
2. The root node is black.
3. Every leaf (nil) is black.
4. If a node is red, then both of its
children are black.
5. For each node, all paths from the node
to descendant leaves contain the same no. of black
nodes.
Height of RB tree

The black-height of a node x,

denoted bh(x), is the no. of black
nodes on the path from x down to a leaf,
not counting x itself.
# An RB tree with n internal nodes has
height at most 2 log2(n+1).
Proof sketch: the subtree rooted at any node x
contains at least 2^bh(x) - 1 internal nodes,
so n >= 2^bh(root) - 1. By property 4, at least
half the nodes on any root-to-leaf path are black,
so bh(root) >= h/2. Hence n >= 2^(h/2) - 1, i.e.
n+1 >= 2^(h/2), and h <= 2 log2(n+1).

Rotation of RB tree

Algo: LEFT-ROTATE(T, x)

1. y <- right[x]
2. right[x] <- left[y]
3. if left[y] != nil
4. then p[left[y]] <- x
5. p[y] <- p[x]
6. if p[x] = nil
7. then root[T] <- y
8. else if x = left[p[x]]
9. then left[p[x]] <- y
10. else right[p[x]] <- y
11. left[y] <- x
12. p[x] <- y
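The left rotation sketched in Python; `Tree` and `Node` are hypothetical minimal classes, and the nil sentinel of the pseudocode is represented by `None`.

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = self.parent = None

class Tree:
    def __init__(self, root=None):
        self.root = root

def left_rotate(T, x):
    y = x.right                  # y becomes the new subtree root
    x.right = y.left             # y's left subtree becomes x's right subtree
    if y.left is not None:
        y.left.parent = x
    y.parent = x.parent          # link y to x's old parent
    if x.parent is None:
        T.root = y
    elif x is x.parent.left:
        x.parent.left = y
    else:
        x.parent.right = y
    y.left = x                   # x becomes y's left child
    x.parent = y
```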

Insertion in a RB tree
We insert the node as in an ordinary BST and
color it red. We then fix up any
violated RB tree properties (x below is the
newly inserted node; the cases assume x's parent
is a left child, the mirror cases swap left and right):

Case 1: if x's uncle is red
a) change x's grandparent to red.
b) change x's uncle & parent to black.
c) change x to x's grandparent.

Case 2: if x's uncle is black & x is the
right child of its parent
a) change x to x's parent.
b) left-rotate about x.
c) case 2 is now case 3.

Case 3: if x's uncle is black & x is the left

child of its parent
a) set x's parent to black.
b) set x's grandparent to red.
c) right-rotate x's grandparent.

RB-insert algorithm

RB-INSERT(T, x)
1 TREE-INSERT(T, x)
2 color[x] <- RED
3 while x != root[T] and color[p[x]] = RED
4 do if p[x] = left[p[p[x]]]
5 then uncle <- right[p[p[x]]]
6 if color[uncle] = RED
7 then color[p[x]] <- BLACK
8 color[uncle] <- BLACK
9 color[p[p[x]]] <- RED
10 x <- p[p[x]]
11 else if x = right[p[x]]
12 then x <- p[x]
13 LEFT-ROTATE(T, x)
14 color[p[x]] <- BLACK
15 color[p[p[x]]] <- RED
16 RIGHT-ROTATE(T, p[p[x]])
17 else
(same as lines 5-16, with "right" & "left" exchanged)
18 color[root[T]] <- BLACK
Deletion in RBT

Case 1: if x’s sibling is red

a) switch color of sibling (s)
b) rotate parent of x
c) reset sibling(s)
d) case1 is now case2 or case3 or

case2: if sibling (s ) is black & sibling,s

children are both black.
a) change s to red
b) now x= parent (x)

case3: if x’s sibling is black, sibling’s

left child is red & right child is black.
a) switch color’s of sibling is red
& left(sibling(s))
b) rotate sibling(s) right
c) reset sibling (s)
case4:if x’s sibling is black,sibling,s
right child is red
a) change color of sibling to color
of parent of x.
b) change color of parent to
c) Change color of sibling’s right
child to black.
d) Rotate parent of x.

RB-DELETE(T, z)
1. if left[z] = nil[T] or right[z] = nil[T]
2. then y <- z
3. else y <- TREE-SUCCESSOR(z)
4. if left[y] != nil[T]
5. then x <- left[y]
6. else x <- right[y]
7. p[x] <- p[y]
8. if p[y] = nil[T]
9. then root[T] <- x
10. else if y = left[p[y]]
11. then left[p[y]] <- x
12. else right[p[y]] <- x
13. if y != z
14. then key[z] <- key[y]
15. copy y's satellite data into z
16. if color[y] = BLACK
17. then RB-DELETE-FIXUP(T, x)
18. return y

Bucket sort

Bucket sort assumes that the input

values are uniformly distributed over
the range [0, 1).
BUCKET-SORT(A)
1) n <- length[A]
2) for i = 1 to n
3) do insert A[i] into list B[floor(n·A[i])]
4) for i = 0 to n-1
5) do INSERTION-SORT(B[i])
6) concatenate the lists B[0], B[1], ...,
B[n-1] together in order

In bucket sort, each element A[i] satisfies

the condition 0 <= A[i] < 1.
These elements are distributed into
another array B[0…n-1], which is an
array of linked lists called buckets, which
hold the sorted values.
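A sketch of the bucket sort above in Python; Python's built-in `sorted()` stands in for the per-bucket insertion sort.

```python
import math

def bucket_sort(A):
    """Sort values uniformly distributed in [0, 1)."""
    n = len(A)
    B = [[] for _ in range(n)]            # n empty buckets
    for x in A:
        B[math.floor(n * x)].append(x)    # bucket index = floor(n * A[i])
    out = []
    for bucket in B:
        out.extend(sorted(bucket))        # sort each bucket, then concatenate
    return out
```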

Radix sort

Radix sort works by sorting a

collection of d-digit numbers on the
least significant digit (LSD) first, then the
next digit & so on until it reaches the
most significant digit (MSD).

RADIX-SORT(A, d)
1 for i = 1 to d
2 do use a stable sort to sort array A on digit i
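An LSD radix sort sketched in Python for base-10 integers; Python's `sorted()` is guaranteed stable, which is exactly what the per-digit pass requires.

```python
def radix_sort(A, d):
    """Sort non-negative integers of at most d decimal digits."""
    for i in range(d):  # least significant digit first
        # stable sort on digit i (10**i extracts that decimal place)
        A = sorted(A, key=lambda x: (x // 10**i) % 10)
    return A
```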

Quick sort

Like merge sort, it is based on the

divide, conquer & combine paradigm.
1) Divide: the array A[p…r] is partitioned
into two non-empty subarrays A[p…q]
& A[q+1…r].

2) Conquer: the two subarrays are

sorted by recursive calls to quick sort.
3) Combine: since the subarrays are sorted
in place, no work is needed to
combine them; the entire array A[p…r]
is now sorted.

QUICKSORT(A, p, r)
1) if p < r
2) then q <- HOARE-PARTITION(A, p, r)
3) QUICKSORT(A, p, q)
4) QUICKSORT(A, q+1, r)

HOARE-PARTITION(A, p, r)
1) x <- A[p]
2) i <- p-1
3) j <- r+1
4) while TRUE
5) do repeat j <- j-1
6) until A[j] <= x
7) repeat i <- i+1
8) until A[i] >= x
9) if i < j
10) then exchange A[i] <-> A[j]
11) else return j
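The Hoare partition scheme above translated to Python; note that with this partition the recursive calls use (p, q) and (q+1, r), matching the value of j it returns.

```python
def hoare_partition(A, p, r):
    x = A[p]                 # pivot = first element of the range
    i, j = p - 1, r + 1
    while True:
        j -= 1               # "repeat ... until A[j] <= x"
        while A[j] > x:
            j -= 1
        i += 1               # "repeat ... until A[i] >= x"
        while A[i] < x:
            i += 1
        if i < j:
            A[i], A[j] = A[j], A[i]   # exchange out-of-place elements
        else:
            return j

def quicksort(A, p, r):
    if p < r:
        q = hoare_partition(A, p, r)
        quicksort(A, p, q)       # left part includes position q
        quicksort(A, q + 1, r)
```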

Performance of quick sort

The running time of quick sort
depends on whether the partitioning
is balanced. If it is balanced, the
algorithm runs asymptotically as
fast as merge sort; if the partitioning
is unbalanced, it can run
asymptotically as slowly as insertion sort.

Worst-case partitioning of quick sort

The worst-case behaviour of quick

sort occurs when the partitioning
routine produces one region with
n-1 elements & one region with
a single element. Assume that
this unbalanced partitioning
arises at every step of the recursion.
The recurrence for the running time:
T(n) = T(n-1) + Θ(n)
Since T(1) = Θ(1),
T(n) = Σ(k=1 to n) Θ(k)
= Θ(Σ k) = Θ(n(n+1)/2)
= Θ((n² + n)/2)
T(n) = Θ(n²)

Best-case partitioning of quick sort

If the partitioning procedure
produces two regions of size
n/2, then the recurrence eqn. is
T(n) = 2T(n/2) + Θ(n)
With a = 2, b = 2, the master theorem gives
T(n) = Θ(n log n)

Order of growth:
We have used some simplifying
abstractions to ease our analysis of
insertion sort: we first ignored the
actual cost of each statement &
later we also ignored the
abstract costs.
We make one more simplifying
abstraction, which is the rate
of growth, or order of growth, of
the running time.
a) we consider only the leading
term of a formula, since the
lower-order terms are
relatively insignificant for
large n.
b) we also ignore the leading
term's constant coefficient, since
it is less significant than the rate of growth.
Hence the order of growth of a
running-time function is its
fastest-growing term, discarding
constant factors.

Merge sort

MERGE(A, p, q, r)
1. n1 <- q-p+1
2. n2 <- r-q
3. create arrays L[1…n1+1] and R[1…n2+1]
4. for i <- 1 to n1
5. do L[i] <- A[p+i-1]
6. for j <- 1 to n2
7. do R[j] <- A[q+j]
8. L[n1+1] <- ∞
9. R[n2+1] <- ∞
10. i <- 1
11. j <- 1
12. for k <- p to r
13. do if L[i] <= R[j]
14. then A[k] <- L[i]
15. i <- i+1
16. else A[k] <- R[j]
17. j <- j+1

MERGE-SORT(A, p, r)
1. if p < r
2. then q <- floor((p+r)/2)
3. MERGE-SORT(A, p, q)
4. MERGE-SORT(A, q+1, r)
5. MERGE(A, p, q, r)

Insertion sort
1 for j2 to length [A]
2 key A[j]
3 //insert A[j] into the sorted seqn.A[1,

4 i=j-1
5 while i>0 & A[i]>key
6 do A[i+1]A[i]
7 i=i-1
8 A[i+1]=key
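Insertion sort in Python (0-based, so j starts at the second element rather than index 2):

```python
def insertion_sort(A):
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:   # shift larger elements one slot right
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key                 # drop key into its sorted position
    return A
```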