
C++ Files and Streams

The following data types are used for file handling:

1. ofstream — This data type represents the output file stream and is used to create files and to write information to files.

2. ifstream — This data type represents the input file stream and is used to read information from files.

3. fstream — This data type represents the file stream generally, and has the capabilities of both ofstream and ifstream, which means it can create files, write information to files, and read information from files.

To perform file processing in C++, the header files <iostream> and <fstream> must be included in your C++ source file.

Opening a File
A file must be opened before you can read from it or write to it. Either an ofstream or fstream object may be used to open a file for writing, while an ifstream object is used to open a file for reading only. Following is the standard syntax for the open() function, which is a member of fstream, ifstream, and ofstream objects:

void open(const char *filename, ios::openmode mode);

Here, the first argument specifies the name and location of the file to be opened, and the second argument defines the mode in which the file should be opened. You can combine two or more mode values by ORing them together. For example, if you want to open a file in write mode and truncate it in case it already exists, the syntax is:

ofstream outfile;
outfile.open("file.dat", ios::out | ios::trunc);

Similarly, you can open a file for both reading and writing as follows:

fstream afile;
afile.open("file.dat", ios::out | ios::in);

Closing a File
When a C++ program terminates, it automatically flushes all the streams, releases all the allocated memory, and closes all the opened files. But it is always good practice to close all opened files before program termination. Following is the standard syntax for the close() function, which is a member of fstream, ifstream, and ofstream objects:

void close();

Writing to a File
While doing C++ programming, you write information to a file from your program using the stream insertion operator (<<), just as you use that operator to output information to the screen. The only difference is that you use an ofstream or fstream object instead of the cout object.

Reading from a File
You read information from a file into your program using the stream extraction operator (>>), just as you use that operator to input information from the keyboard. The only difference is that you use an ifstream or fstream object instead of the cin object.
#include <fstream>
#include <iostream>
using namespace std;

int main () {
   char data[100];

   // open a file in write mode.
   ofstream outfile;
   outfile.open("afile.dat");

   cout << "Writing to the file" << endl;
   cout << "Enter your name: ";
   cin.getline(data, 100);

   // write inputted data into the file.
   outfile << data << endl;

   cout << "Enter your age: ";
   cin >> data;
   cin.ignore();

   // again write inputted data into the file.
   outfile << data << endl;

   // close the opened file.
   outfile.close();

   // open a file in read mode.
   ifstream infile;
   infile.open("afile.dat");

   cout << "Reading from the file" << endl;
   infile >> data;

   // write the data at the screen.
   cout << data << endl;

   // again read the data from the file and display it.
   infile >> data;
   cout << data << endl;

   // close the opened file.
   infile.close();

   return 0;
}
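The program above truncates and rewrites afile.dat each run; the ios::app mode listed among the open-mode flags preserves existing contents instead. Below is a minimal sketch of appending and reading back. The helper name appendAndRead and the file name are illustrative, not from the original text.

```cpp
#include <fstream>
#include <string>
#include <vector>
using namespace std;

// Append one line to the file, then read every line back.
// ios::app positions each write at the end of the file, so
// earlier contents are preserved across calls.
vector<string> appendAndRead(const string& path, const string& entry) {
    ofstream outfile(path, ios::out | ios::app);
    outfile << entry << '\n';
    outfile.close();

    vector<string> lines;
    ifstream infile(path);
    string line;
    while (getline(infile, line))
        lines.push_back(line);
    return lines;
}
```

Each call adds one more line, and reading back returns everything written so far.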
Big-O Notation I

Asymptotic notation is a set of languages which allow us to express the performance of our algorithms in relation to their input. Big O notation is used in computer science to describe the performance or complexity of an algorithm. Big O specifically describes the worst-case scenario, and can be used to describe the execution time required or the space used (e.g. in memory or on disk) by an algorithm.

1. O(1)

void printFirstElementOfArray(int arr[])
{
    printf("First element of array = %d", arr[0]);
}

This function runs in O(1) time (or "constant time") relative to its input. The input array could be 1 item or 1,000 items, but this function would still just require one step.

2. O(n)

void printAllElementOfArray(int arr[], int size)
{
    for (int i = 0; i < size; i++)
    {
        printf("%d\n", arr[i]);
    }
}

This function runs in O(n) time (or "linear time"), where n is the number of items in the array. If the array has 10 items, we have to print 10 times. If it has 1000 items, we have to print 1000 times.

3. O(n²)

void printAllPossibleOrderedPairs(int arr[], int size)
{
    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < size; j++)
        {
            printf("%d = %d\n", arr[i], arr[j]);
        }
    }
}

Here we're nesting two loops. If our array has n items, our outer loop runs n times and our inner loop runs n times for each iteration of the outer loop, giving us n² total prints. Thus this function runs in O(n²) time (or "quadratic time"). If the array has 10 items, we have to print 100 times. If it has 1000 items, we have to print 1000000 times.

4. Drop the constants
When you're calculating the big O complexity of something, you just throw out the constants. Like:

void printAllItemsTwice(int arr[], int size)
{
    for (int i = 0; i < size; i++)
    {
        printf("%d\n", arr[i]);
    }

    for (int i = 0; i < size; i++)
    {
        printf("%d\n", arr[i]);
    }
}

This is O(2n), which we just call O(n).

void printFirstItemThenFirstHalfThenSayHi100Times(int arr[], int size)
{
    printf("First element of array = %d\n", arr[0]);

    for (int i = 0; i < size/2; i++)
    {
        printf("%d\n", arr[i]);
    }

    for (int i = 0; i < 100; i++)
    {
        printf("Hi\n");
    }
}

This is O(1 + n/2 + 100), which we just call O(n).

Why can we get away with this? Remember, for big O notation we're looking at what happens as n gets arbitrarily large. As n gets really big, adding 100 or dividing by 2 has a decreasingly significant effect.

5. Drop the less significant terms

void printAllNumbersThenAllPairSums(int arr[], int size)
{
    for (int i = 0; i < size; i++)
    {
        printf("%d\n", arr[i]);
    }

    for (int i = 0; i < size; i++)
    {
        for (int j = 0; j < size; j++)
        {
            printf("%d\n", arr[i] + arr[j]);
        }
    }
}

Here our runtime is O(n + n²), which we just call O(n²).

Similarly:

O(n³ + 50n² + 10000) is O(n³)
O((n + 30) * (n + 5)) is O(n²)

Again, we can get away with this because the less significant terms quickly become, well, less significant as n gets big.

7. With Big-O, we're usually talking about the "worst case"

bool arrayContainsElement(int arr[], int size, int element)
{
    for (int i = 0; i < size; i++)
    {
        if (arr[i] == element) return true;
    }
    return false;
}

Here we might have 100 items in our array, but the first item might be that element; in this case we would return in just 1 iteration of our loop.

In general we'd say this is O(n) runtime, and the "worst case" part would be implied. But to be more specific we could say this is worst case O(n) and best case O(1) runtime. For some algorithms we can also make rigorous statements about the "average case" runtime.
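To make the best-case/worst-case distinction concrete, here is a lightly instrumented variant of arrayContainsElement. The steps out-parameter is an addition for illustration only, not part of the original function.

```cpp
// Same linear scan as above, but also counts loop iterations so the
// best case (1 step) and worst case (n steps) can be observed.
bool arrayContainsElementCounted(const int arr[], int size,
                                 int element, int* steps) {
    *steps = 0;
    for (int i = 0; i < size; i++) {
        (*steps)++;
        if (arr[i] == element) return true; // found: may exit early
    }
    return false; // scanned all n slots
}
```

Searching for the first element returns after one iteration; searching for a missing element always costs n iterations, which is the case Big-O describes.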
Big-O Notation II

The Big O notation defines an upper bound of an algorithm; it bounds a function only from above. For example, consider the case of insertion sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of insertion sort is O(n²). Note that O(n²) also covers linear time.

The Big-O asymptotic notation gives us the upper-bound idea, mathematically described below:

f(n) = O(g(n)) if there exists a positive integer n0 and a positive constant c, such that f(n) ≤ c·g(n) ∀ n ≥ n0

The general step-wise procedure for Big-O runtime analysis is as follows:

- Figure out what the input is and what n represents.
- Express the maximum number of operations the algorithm performs in terms of n.
- Eliminate all but the highest-order terms.
- Remove all the constant factors.

Vector

Vectors are the same as dynamic arrays, with the ability to resize themselves automatically when an element is inserted or deleted, their storage being handled automatically by the container. Vector elements are placed in contiguous storage so that they can be accessed and traversed using iterators. In vectors, data is inserted at the end. Inserting at the end takes differential time, as sometimes there may be a need of extending the array. Removing the last element takes only constant time because no resizing happens. Inserting and erasing at the beginning or in the middle is linear in time.

Certain functions associated with the vector are:

Iterators
begin() – Returns an iterator pointing to the first element in the vector
end() – Returns an iterator pointing to the theoretical element that follows the last element in the vector
rbegin() – Returns a reverse iterator pointing to the last element in the vector (reverse beginning). It moves from last to first element
rend() – Returns a reverse iterator pointing to the theoretical element preceding the first element in the vector (considered as reverse end)
cbegin() – Returns a constant iterator pointing to the first element in the vector
cend() – Returns a constant iterator pointing to the theoretical element that follows the last element in the vector
crbegin() – Returns a constant reverse iterator pointing to the last element in the vector (reverse beginning). It moves from last to first element
crend() – Returns a constant reverse iterator pointing to the theoretical element preceding the first element in the vector (considered as reverse end)
#include <iostream>
#include <vector>

using namespace std;

int main()
{
    vector<int> g1;

    for (int i = 1; i <= 5; i++)
        g1.push_back(i);

    cout << "Output of begin and end: ";
    for (auto i = g1.begin(); i != g1.end(); ++i)
        cout << *i << " ";

    cout << "\nOutput of cbegin and cend: ";
    for (auto i = g1.cbegin(); i != g1.cend(); ++i)
        cout << *i << " ";

    cout << "\nOutput of rbegin and rend: ";
    for (auto ir = g1.rbegin(); ir != g1.rend(); ++ir)
        cout << *ir << " ";

    cout << "\nOutput of crbegin and crend: ";
    for (auto ir = g1.crbegin(); ir != g1.crend(); ++ir)
        cout << *ir << " ";

    return 0;
}

Output:
Output of begin and end: 1 2 3 4 5
Output of cbegin and cend: 1 2 3 4 5
Output of rbegin and rend: 5 4 3 2 1
Output of crbegin and crend: 5 4 3 2 1

Capacity
size() – Returns the number of elements in the vector
max_size() – Returns the maximum number of elements that the vector can hold
capacity() – Returns the size of the storage space currently allocated to the vector, expressed as a number of elements
resize(g) – Resizes the container so that it contains ‘g’ elements
empty() – Returns whether the container is empty
shrink_to_fit() – Reduces the capacity of the container to fit its size and destroys all elements beyond the capacity
reserve(n) – Requests that the vector capacity be at least enough to contain n elements

Element access
operator [g] – Returns a reference to the element at position ‘g’ in the vector
at(g) – Returns a reference to the element at position ‘g’ in the vector
front() – Returns a reference to the first element in the vector
back() – Returns a reference to the last element in the vector
data() – Returns a direct pointer to the memory array used internally by the vector to store its owned elements

Modifiers
assign() – Assigns new values to the vector elements, replacing old ones
push_back() – Pushes an element into the vector from the back
pop_back() – Removes an element from the back of the vector
insert() – Inserts new elements before the element at the specified position
erase() – Removes a single element or a range of elements from the vector
swap() – Swaps the contents of one vector with another vector of the same type
clear() – Removes all the elements of the vector container
emplace() – Extends the container by constructing a new element in place at the given position
emplace_back() – Constructs a new element at the end of the vector

Deque

Double-ended queues are sequence containers with the feature of expansion and contraction on both ends. They are similar to vectors, but are more efficient in the case of insertion and deletion of elements at the end, and also at the beginning. Unlike vectors, contiguous storage allocation may not be guaranteed.

The functions for deque are the same as for vector, with the addition of push and pop operations for both front and back.

List

Lists are sequence containers that allow non-contiguous memory allocation. Compared to a vector, a list has slow traversal, but once a position has been found, insertion and deletion are quick. Normally, when we say a list, we mean a doubly linked list. For implementing a singly linked list, we use a forward list.

Functions used with list:
front() – Returns the value of the first element in the list
back() – Returns the value of the last element in the list
push_front(g) – Adds a new element ‘g’ at the beginning of the list
push_back(g) – Adds a new element ‘g’ at the end of the list
pop_front() – Removes the first element of the list, and reduces the size of the list by 1
pop_back() – Removes the last element of the list, and reduces the size of the list by 1
begin() – Returns an iterator pointing to the first element of the list
end() – Returns an iterator pointing to the theoretical element which follows the last element
empty() – Returns whether the list is empty (1) or not (0)
insert() – Inserts new elements in the list before the element at a specified position
erase() – Removes a single element or a range of elements from the list
remove() – Removes all the elements from the list which are equal to a given element
reverse() – Reverses the list
size() – Returns the number of elements in the list
sort() – Sorts the list in increasing order

Linked List Data Structure

A linked list is a linear data structure in which the elements are not stored at contiguous memory locations. The elements in a linked list are linked using pointers. In simple words, a linked list consists of nodes where each node contains a data field and a reference (link) to the next node in the list.
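Before turning to the linked-list implementation that follows, here is a short sketch exercising a few of the deque and list operations listed above. The function names are just for the demonstration.

```cpp
#include <deque>
#include <list>

// Deques allow cheap push/pop at both ends.
int dequeFrontAfterOps() {
    std::deque<int> d;
    d.push_back(2);   // d: 2
    d.push_front(1);  // d: 1 2
    d.push_back(3);   // d: 1 2 3
    d.pop_front();    // d: 2 3
    return d.front();
}

// Lists provide member sort() and reverse(), because std::sort
// needs random-access iterators, which list does not have.
int listFrontAfterSortReverse() {
    std::list<int> l = {3, 1, 2};
    l.sort();     // l: 1 2 3
    l.reverse();  // l: 3 2 1
    return l.front();
}
```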
Why Linked List?
Arrays can be used to store linear data of similar types, but arrays have the following limitations.

1) The size of arrays is fixed: we must know the upper limit on the number of elements in advance. Also, generally, the allocated memory is equal to the upper limit irrespective of the usage.
2) Inserting a new element in an array of elements is expensive, because room has to be created for the new element, and to create room the existing elements have to be shifted.

For example, suppose in a system we maintain a sorted list of IDs in an array id[]:

id[] = [1000, 1010, 1050, 2000, 2040]

If we want to insert a new ID 1005, then to maintain the sorted order we have to move all the elements after 1000 (excluding 1000).

Deletion is also expensive with arrays unless some special techniques are used. For example, to delete 1010 in id[], everything after 1010 has to be moved.

Advantages over arrays
1) Dynamic size
2) Ease of insertion/deletion

Drawbacks:
1) Random access is not allowed. We have to access elements sequentially starting from the first node, so we cannot do binary search with linked lists efficiently with the default implementation.
2) Extra memory space for a pointer is required with each element of the list.
3) Not cache friendly. Since array elements are contiguous locations, there is locality of reference, which is not there in the case of linked lists.

Representation:
A linked list is represented by a pointer to the first node of the linked list. The first node is called the head. If the linked list is empty, then the value of head is NULL.

Each node in a list consists of at least two parts:
1) data
2) a pointer (or reference) to the next node

// A simple CPP program to introduce
// a linked list
#include <bits/stdc++.h>
using namespace std;

class Node
{
public:
    int data;
    Node *next;
};

// Program to create a simple linked
// list with 3 nodes
int main()
{
    Node* head = NULL;
    Node* second = NULL;
    Node* third = NULL;

    // allocate 3 nodes in the heap
    head = new Node();
    second = new Node();
    third = new Node();

    /* Three blocks have been allocated dynamically.
       We have pointers to these three blocks as head,
       second and third.

         head        second       third
          |            |            |
      +---+----+   +----+----+  +----+----+
      | # | #  |   | #  | #  |  | #  | #  |
      +---+----+   +----+----+  +----+----+

       # represents any random value.
       Data is random because we haven't assigned
       anything yet. */
    head->data = 1;      // assign data in first node
    head->next = second; // link first node with the second node

    /* Data has been assigned to the data part of the first
       block (block pointed by head), and the next pointer
       of the first block points to second, so they
       both are linked.

         head        second       third
          |            |            |
      +---+---+    +----+----+  +----+----+
      | 1 | o------>| #  | #  |  | #  | #  |
      +---+---+    +----+----+  +----+----+
    */

    // assign data to second node
    second->data = 2;

    // link second node with the third node
    second->next = third;

    /* Data has been assigned to the data part of the second
       block (block pointed by second), and the next pointer
       of the second block points to the third block, so all
       three blocks are linked.

         head        second       third
          |            |            |
      +---+---+    +---+---+    +----+----+
      | 1 | o------>| 2 | o------>| #  | #  |
      +---+---+    +---+---+    +----+----+
    */

    third->data = 3; // assign data to third node
    third->next = NULL;

    /* Data has been assigned to the data part of the third
       block (block pointed by third), and the next pointer
       of the third block is made NULL to indicate that the
       linked list is terminated here.

       We have the linked list ready.

         head
          |
          |
      +---+---+    +---+---+    +---+------+
      | 1 | o------>| 2 | o------>| 3 | NULL |
      +---+---+    +---+---+    +---+------+

       Note that only head is sufficient to represent
       the whole list. We can traverse the complete
       list by following the next pointers. */

    return 0;
}

// A doubly linked list built from the same idea, with
// head and tail pointers:

#include <iostream>
#include <cassert>

class LLNode{
    friend class LinkedList;

private:
    int data;
    LLNode *next, *prev;

    LLNode(int data) : data(data), next(nullptr), prev(nullptr){}

    void print();
    void insert_next(int val);
    void insert_prev(int val);
};

class LinkedList{
private:
    LLNode *head, *tail;

public:
    LinkedList() : head(nullptr), tail(nullptr){}

    void push_back(int);
    void push_front(int);

    void pop_back();
    void pop_front();

    void print();
};

void LLNode::print(){
    std::cout << data << std::endl;
    if (next != nullptr){
        next->print();
    }
}

void LLNode::insert_next(int data){
    this->next = new LLNode(data);
    this->next->prev = this;
}

void LLNode::insert_prev(int data){
    this->prev = new LLNode(data);
    this->prev->next = this;
}

void LinkedList::push_back(int val){
    if (head == nullptr){
        head = new LLNode(val);
        tail = head;
    }
    else{
        tail->insert_next(val);
        tail = tail->next;
    }
}

void LinkedList::push_front(int val){
    if (head == nullptr){
        head = new LLNode(val);
        tail = head;
    }
    else{
        head->insert_prev(val);
        head = head->prev;
    }
}

void LinkedList::pop_back(){
    assert(head != nullptr);

    LLNode* tmp = tail;

    tail = tail->prev;
    if (tail != nullptr)
        tail->next = nullptr; // unlink the old tail
    else
        head = nullptr;       // the list had a single node
    delete tmp;
}

void LinkedList::pop_front(){
    assert(head != nullptr);

    LLNode* tmp = head;

    head = head->next;
    if (head != nullptr)
        head->prev = nullptr; // unlink the old head
    else
        tail = nullptr;       // the list had a single node
    delete tmp;
}

void LinkedList::print(){
    if (head == nullptr) return;
    head->print();
}
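The Node program above builds a list but never traverses it. A compact, self-contained sketch of front insertion and traversal in the same spirit (the helper names pushFront and toVector are illustrative):

```cpp
#include <vector>

// The same Node shape as in the first program above.
struct Node {
    int data;
    Node* next;
};

// O(1) insertion at the front: the new node points at the old head,
// and becomes the new head.
Node* pushFront(Node* head, int data) {
    return new Node{data, head};
}

// Traverse by following next pointers until NULL, collecting the data.
std::vector<int> toVector(const Node* head) {
    std::vector<int> out;
    for (const Node* cur = head; cur != nullptr; cur = cur->next)
        out.push_back(cur->data);
    return out;
}
```

Pushing 3, 2, 1 in that order yields the list 1 -> 2 -> 3, since each push places its value in front of the previous head.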
Insertion Sort

Insertion sort is a simple sorting algorithm that works the way we sort playing cards in our hands.

Algorithm
// Sort an arr[] of size n
insertionSort(arr, n)
Loop from i = 1 to n-1.
   a) Pick element arr[i] and insert it into the sorted sequence arr[0…i-1]

Example:
12, 11, 13, 5, 6

Let us loop for i = 1 (second element of the array) to 4 (last element of the array).

i = 1. Since 11 is smaller than 12, move 12 and insert 11 before 12
11, 12, 13, 5, 6

i = 2. 13 will remain at its position, as all elements in arr[0..i-1] are smaller than 13
11, 12, 13, 5, 6

i = 3. 5 will move to the beginning, and all other elements from 11 to 13 will move one position ahead of their current position
5, 11, 12, 13, 6

i = 4. 6 will move to the position after 5, and elements from 11 to 13 will move one position ahead of their current position
5, 6, 11, 12, 13

void insertionSort(int arr[], int n)
{
    int i, key, j;
    for (i = 1; i < n; i++)
    {
        key = arr[i];
        j = i-1;

        /* Move elements of arr[0..i-1], that are
           greater than key, to one position ahead
           of their current position */
        while (j >= 0 && arr[j] > key)
        {
            arr[j+1] = arr[j];
            j = j-1;
        }
        arr[j+1] = key;
    }
}
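Running the walkthrough array {12, 11, 13, 5, 6} through the function confirms the trace above. Here is a self-contained copy for checking:

```cpp
// Self-contained copy of insertionSort from above.
void insertionSort(int arr[], int n) {
    for (int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        // Shift elements of arr[0..i-1] that are greater than key
        // one position to the right, then drop key into the gap.
        while (j >= 0 && arr[j] > key) {
            arr[j + 1] = arr[j];
            j--;
        }
        arr[j + 1] = key;
    }
}
```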
Selection Sort

The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) from the unsorted part and putting it at the beginning. The algorithm maintains two subarrays in a given array:

1) The subarray which is already sorted.
2) The remaining subarray which is unsorted.

In every iteration of selection sort, the minimum element (considering ascending order) from the unsorted subarray is picked and moved to the sorted subarray.

arr[] = 64 25 12 22 11

// Find the minimum element in arr[0...4]
// and place it at beginning
11 25 12 22 64

// Find the minimum element in arr[1...4]
// and place it at beginning of arr[1...4]
11 12 25 22 64

// Find the minimum element in arr[2...4]
// and place it at beginning of arr[2...4]
11 12 22 25 64

// Find the minimum element in arr[3...4]
// and place it at beginning of arr[3...4]
11 12 22 25 64

void swap(int *xp, int *yp)
{
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

void selectionSort(int arr[], int n)
{
    int i, j, min_idx;

    // One by one move boundary of unsorted subarray
    for (i = 0; i < n-1; i++)
    {
        // Find the minimum element in unsorted array
        min_idx = i;
        for (j = i+1; j < n; j++)
            if (arr[j] < arr[min_idx])
                min_idx = j;

        // Swap the found minimum element with the first element
        swap(&arr[min_idx], &arr[i]);
    }
}

Bubble Sort

Bubble sort is the simplest sorting algorithm; it works by repeatedly swapping adjacent elements if they are in the wrong order.

Example:
First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, the algorithm compares the first two elements, and swaps since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements are already in order (8 > 5), the algorithm does not swap them.

Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

Now, the array is already sorted, but our algorithm does not know if it is completed. The algorithm needs one whole pass without any swap to know it is sorted.

Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

void swap(int *xp, int *yp)
{
    int temp = *xp;
    *xp = *yp;
    *yp = temp;
}

// A function to implement bubble sort
void bubbleSort(int arr[], int n)
{
    int i, j;
    for (i = 0; i < n-1; i++)

        // Last i elements are already in place
        for (j = 0; j < n-i-1; j++)
            if (arr[j] > arr[j+1])
                swap(&arr[j], &arr[j+1]);
}

Merge Sort

Merge sort is a divide-and-conquer algorithm. It divides the input array into two halves, calls itself for the two halves, and then merges the two sorted halves. The merge() function is used for merging the two halves: the call merge(arr, l, m, r) is the key process, which assumes that arr[l..m] and arr[m+1..r] are sorted and merges the two sorted sub-arrays into one. See the following C implementation for details.

MergeSort(arr[], l, r)
If r > l
    1. Find the middle point to divide the array into two halves:
           middle m = (l+r)/2
    2. Call mergeSort for first half:
           Call mergeSort(arr, l, m)
    3. Call mergeSort for second half:
           Call mergeSort(arr, m+1, r)
    4. Merge the two halves sorted in steps 2 and 3:
           Call merge(arr, l, m, r)

If we follow the complete merge sort process for an example array {38, 27, 43, 3, 9, 82, 10}, we can see that the array is recursively divided into two halves till the size becomes 1. Once the size becomes 1, the merge process comes into action and starts merging arrays back till the complete array is merged.
// Merges two subarrays of arr[].
// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r)
{
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;

    /* create temp arrays */
    int L[n1], R[n2];

    /* Copy data to temp arrays L[] and R[] */
    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    /* Merge the temp arrays back into arr[l..r] */
    i = 0; // Initial index of first subarray
    j = 0; // Initial index of second subarray
    k = l; // Initial index of merged subarray
    while (i < n1 && j < n2)
    {
        if (L[i] <= R[j])
        {
            arr[k] = L[i];
            i++;
        }
        else
        {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    /* Copy the remaining elements of L[], if there
       are any */
    while (i < n1)
    {
        arr[k] = L[i];
        i++;
        k++;
    }

    /* Copy the remaining elements of R[], if there
       are any */
    while (j < n2)
    {
        arr[k] = R[j];
        j++;
        k++;
    }
}

/* l is for left index and r is right index of the
   sub-array of arr to be sorted */
void mergeSort(int arr[], int l, int r)
{
    if (l < r)
    {
        // Same as (l+r)/2, but avoids overflow for
        // large l and r
        int m = l + (r - l) / 2;

        // Sort first and second halves
        mergeSort(arr, l, m);
        mergeSort(arr, m+1, r);

        merge(arr, l, m, r);
    }
}
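The merge step can also be sketched with std::vector, which makes the two-run merge that merge() performs easy to test in isolation. The function name mergeRuns is illustrative.

```cpp
#include <vector>

// Merge two already-sorted runs into one sorted sequence: the same job
// merge() performs on arr[l..m] and arr[m+1..r] above.
std::vector<int> mergeRuns(const std::vector<int>& L,
                           const std::vector<int>& R) {
    std::vector<int> out;
    std::size_t i = 0, j = 0;
    while (i < L.size() && j < R.size()) {
        if (L[i] <= R[j]) out.push_back(L[i++]); // take the smaller head
        else              out.push_back(R[j++]);
    }
    while (i < L.size()) out.push_back(L[i++]);  // leftovers of L
    while (j < R.size()) out.push_back(R[j++]);  // leftovers of R
    return out;
}
```

Merging the runs {3, 27, 38, 43} and {9, 10, 82} from the example array reproduces the fully sorted result.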
Quick Sort

QuickSort is a divide-and-conquer algorithm. It picks an element as the pivot and partitions the given array around the picked pivot. There are many different versions of quickSort that pick the pivot in different ways:

Always pick the first element as pivot.
Always pick the last element as pivot (implemented below).
Pick a random element as pivot.
Pick the median as pivot.

The key process in quickSort is partition(). The target of partition is, given an array and an element x of the array as pivot, to put x at its correct position in the sorted array, put all smaller elements (smaller than x) before x, and put all greater elements (greater than x) after x. All this should be done in linear time.

Pseudo code for the recursive quickSort function:

/* low --> Starting index, high --> Ending index */
quickSort(arr[], low, high)
{
    if (low < high)
    {
        /* pi is partitioning index, arr[pi] is now
           at right place */
        pi = partition(arr, low, high);

        quickSort(arr, low, pi - 1);  // Before pi
        quickSort(arr, pi + 1, high); // After pi
    }
}

Partition Algorithm
There can be many ways to do partition; the following pseudo code adopts the method given in the CLRS book. The logic is simple: we start from the leftmost element and keep track of the index of smaller (or equal) elements as i. While traversing, if we find a smaller element, we swap the current element with arr[i]; otherwise we ignore the current element.

Pseudo code for partition():

/* This function takes the last element as pivot, places
   the pivot element at its correct position in the sorted
   array, and places all smaller elements (smaller than the
   pivot) to the left of the pivot and all greater elements
   to the right of the pivot */
partition (arr[], low, high)
{
    // pivot (Element to be placed at right position)
    pivot = arr[high];

    i = (low - 1)  // Index of smaller element

    for (j = low; j <= high - 1; j++)
    {
        // If current element is smaller than or
        // equal to pivot
        if (arr[j] <= pivot)
        {
            i++; // increment index of smaller element
            swap arr[i] and arr[j]
        }
    }
    swap arr[i + 1] and arr[high]
    return (i + 1)
}
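The pseudocode above translates almost line-for-line into compilable code. A sketch (the swapInts helper is added so the snippet is self-contained):

```cpp
static void swapInts(int* a, int* b) {
    int t = *a;
    *a = *b;
    *b = t;
}

// Lomuto partition: last element as pivot, as in the pseudocode above.
int partition(int arr[], int low, int high) {
    int pivot = arr[high]; // element to be placed at its final position
    int i = low - 1;       // index of smaller element
    for (int j = low; j <= high - 1; j++) {
        if (arr[j] <= pivot) {
            i++;
            swapInts(&arr[i], &arr[j]);
        }
    }
    swapInts(&arr[i + 1], &arr[high]);
    return i + 1;
}

void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high); // arr[pi] is now in place
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}
```

Sorting the array {10, 80, 30, 90, 40, 50, 70} used in the illustration that follows yields {10, 30, 40, 50, 70, 80, 90}.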
Illustration of partition():

arr[] = {10, 80, 30, 90, 40, 50, 70}
Indexes:  0   1   2   3   4   5   6

low = 0, high = 6, pivot = arr[high] = 70
Initialize index of smaller element, i = -1

Traverse elements from j = low to high-1
j = 0: Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
i = 0
arr[] = {10, 80, 30, 90, 40, 50, 70} // No change as i and j are same

j = 1: Since arr[j] > pivot, do nothing
// No change in i and arr[]

j = 2: Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
i = 1
arr[] = {10, 30, 80, 90, 40, 50, 70} // We swap 80 and 30

j = 3: Since arr[j] > pivot, do nothing
// No change in i and arr[]

j = 4: Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
i = 2
arr[] = {10, 30, 40, 90, 80, 50, 70} // 80 and 40 swapped

j = 5: Since arr[j] <= pivot, do i++ and swap(arr[i], arr[j])
i = 3
arr[] = {10, 30, 40, 50, 80, 90, 70} // 90 and 50 swapped

We come out of the loop because j is now equal to high-1.
Finally we place the pivot at its correct position by swapping arr[i+1] and arr[high] (the pivot):
arr[] = {10, 30, 40, 50, 70, 90, 80} // 80 and 70 swapped

Now 70 is at its correct place. All elements smaller than 70 are before it and all elements greater than 70 are after it.

Hashing

Hashing is an important data structure which is designed to use a special function, called the hash function, to map a given value to a particular key for faster access of elements. The efficiency of the mapping depends on the efficiency of the hash function used.

Let a hash function H(x) map the value x to the index x%10 in an array. For example, if the list of values is [11, 12, 13, 14, 15], they will be stored at positions {1, 2, 3, 4, 5} in the array or hash table, respectively.

Hashing | Set 1 (Introduction)
Suppose we want to design a system for storing employee records keyed using phone numbers. And we want the following queries to be performed efficiently:

Insert a phone number and corresponding information.
Search a phone number and fetch the information.
Delete a phone number and related information.

We can think of using the following data structures to maintain information about different phone numbers:

Array of phone numbers and records.
Linked list of phone numbers and records.
Balanced binary search tree with phone numbers as keys.
Direct access table.

For arrays and linked lists, we need to search in a linear fashion, which can be costly in practice. If we use arrays and keep the data sorted, then a phone number can be searched in O(log n) time using binary search, but insert and delete operations become costly as we have to maintain sorted order.
With a balanced binary search tree, we get moderate search, insert and delete times: all of these operations can be guaranteed to run in O(Log n) time.

Another solution that one can think of is to use a direct access table, where we make a big array and use phone numbers as indexes in the array. An entry in the array is NIL if the phone number is not present; otherwise the entry stores a pointer to the records corresponding to the phone number. Time-complexity-wise this solution is the best of all: we can do all operations in O(1) time. For example, to insert a phone number, we create a record with the details of the given phone number, use the phone number as index, and store the pointer to the created record in the table.

This solution has many practical limitations. The first problem is that the extra space required is huge: if phone numbers are n digits long, we need O(m * 10^n) space for the table, where m is the size of a pointer to a record. Another problem is that an integer in a programming language may not be able to store n digits.

Due to the above limitations, a Direct Access Table cannot always be used. Hashing is the solution that can be used in almost all such situations, and in practice it performs extremely well compared to the above data structures (Array, Linked List, Balanced BST). With hashing we get O(1) search time on average (under reasonable assumptions) and O(n) in the worst case.

Hashing is an improvement over the Direct Access Table. The idea is to use a hash function that converts a given phone number (or any other key) to a smaller number, and to use that small number as an index in a table called a hash table.

Hash Function: A function that converts a given big phone number to a small practical integer value. The mapped integer value is used as an index in the hash table. In simple terms, a hash function maps a big number or string to a small integer that can be used as an index in a hash table.
A good hash function should have the following properties:
1) Efficiently computable.
2) Should uniformly distribute the keys (each table position equally likely for each key).
For example, for phone numbers a bad hash function is to take the first three digits; a better function is to consider the last three digits. Please note that this may not be the best hash function: there may be better ways.

Hash Table: An array that stores pointers to the records corresponding to a given phone number. An entry in the hash table is NIL if no existing phone number has a hash function value equal to the index for the entry.

Collision Handling: Since a hash function gets us a small number for a big key, there is a possibility that two keys result in the same value. The situation where a newly inserted key maps to an already occupied slot in the hash table is called a collision, and it must be handled using some collision handling technique. Following are the ways to handle collisions:
Chaining: The idea is to make each cell of the hash table point to a linked list of records that have the same hash function value. Chaining is simple, but requires additional memory outside the table.
Open Addressing: In open addressing, all elements are stored in the hash table itself. Each table entry contains either a record or NIL. When searching for an element, we examine table slots one by one until the desired element is found or it is clear that the element is not in the table.

Index Mapping (or Trivial Hashing) with negatives allowed

Given a limited-range array containing both positive and non-positive numbers, i.e., elements in the range from -MAX to +MAX, our task is to search whether some number is present in the array or not in O(1) time.
Since the range is limited, we can use index mapping (or trivial hashing): we use the values themselves as indexes in a big array, so we can search and insert elements in O(1) time.
How to handle negative numbers?
The idea is to use a 2D array of size hash[MAX+1][2].

Algorithm:
Assign all the values of the hash matrix as 0.
Traverse the given array:
If the element ele is non-negative, assign hash[ele][0] as 1.
Else take the absolute value of ele and assign hash[ele][1] as 1.
To search any element X in the array:
If X is non-negative, check if hash[X][0] is 1 or not. If hash[X][0] is one then the number is present, else it is not present.
If X is negative, take the absolute value of X and then check if hash[X][1] is 1 or not. If hash[X][1] is one then the number is present, else it is not present.

Below is the implementation of the above idea.

// CPP program to implement direct index mapping
// with negative values allowed.
#include <bits/stdc++.h>
using namespace std;
#define MAX 1000

// Since array is global, it is initialized as 0.
bool has[MAX + 1][2];

// searching if X is present in the given array or not.
bool search(int X)
{
    if (X >= 0) {
        if (has[X][0] == 1)
            return true;
        else
            return false;
    }

    // if X is negative take the absolute value of X.
    X = abs(X);
    if (has[X][1] == 1)
        return true;

    return false;
}

void insert(int a[], int n)
{
    for (int i = 0; i < n; i++) {
        if (a[i] >= 0)
            has[a[i]][0] = 1;
        else
            has[abs(a[i])][1] = 1;
    }
}

// Driver code
int main()
{
    int a[] = { -1, 9, -5, -8, -5, -2 };
    int n = sizeof(a) / sizeof(a[0]);
    insert(a, n);
    int X = -5;
    if (search(X) == true)
        cout << "Present";
    else
        cout << "Not Present";
    return 0;
}

Hashing | Set 2 (Separate Chaining)

What is Collision?
Since a hash function gets us a small number for a key which is a big integer or string, there is a possibility that two keys result in the same value. The situation where a newly inserted key maps to an already occupied slot in the hash table is called a collision, and it must be handled using some collision handling technique.

What are the chances of collisions with a large table?
Collisions are very likely even if we have a big table to store keys. An important observation is the Birthday Paradox: with only 23 persons, the probability that two people have the same birthday is 50%.

How to handle Collisions?
There are mainly two methods to handle collisions:
1) Separate Chaining
2) Open Addressing
Separate Chaining:
The idea is to make each cell of the hash table point to a linked list of records that have the same hash function value. Time complexity of search, insert and delete is O(1) if α is O(1).

Let us consider a simple hash function such as “key mod 7” and the sequence of keys 50, 700, 76, 85, 92, 73, 101.

Advantages:
1) Simple to implement.
2) The hash table never fills up; we can always add more elements to the chain.
3) Less sensitive to the hash function or load factors.
4) It is mostly used when it is unknown how many and how frequently keys may be inserted or deleted.

Disadvantages:
1) Cache performance of chaining is not good, as keys are stored using a linked list. Open addressing provides better cache performance, as everything is stored in the same table.
2) Wastage of space (some parts of the hash table are never used).
3) If the chain becomes long, then search time can become O(n) in the worst case.
4) Uses extra space for links.

C++ program for hashing with chaining

In hashing there is a hash function that maps keys to some values. But these hash functions may lead to a collision, that is, two or more keys mapped to the same value. Chain hashing avoids collisions: the idea is to make each cell of the hash table point to a linked list of records that have the same hash function value.
Let’s create a hash function such that our hash table has ‘N’ buckets.
To insert a node into the hash table, we need to find the hash index for the given key, which can be calculated using the hash function.
Example: hashIndex = key % noOfBuckets
Insert: Move to the bucket corresponding to the above-calculated hash index and insert the new node at the end of the list.
Delete: To delete a node from the hash table, calculate the hash index for the key, move to the bucket corresponding to the calculated hash index, and search the list in the current bucket to find and remove the node with the given key (if found).

Performance of Chaining:
Performance of hashing can be evaluated under the assumption that
each key is equally likely to be hashed to any slot of table (simple
uniform hashing).

m = Number of slots in hash table


n = Number of keys to be inserted in hash table

Load factor α = n/m

Expected time to search = O(1 + α)

Expected time to insert/delete = O(1 + α)


// CPP program to implement hashing with chaining
#include <iostream>
#include <list>
using namespace std;

class Hash
{
    int BUCKET; // No. of buckets

    // Pointer to an array containing buckets
    list<int> *table;

public:
    Hash(int V); // Constructor

    // inserts a key into hash table
    void insertItem(int x);

    // deletes a key from hash table
    void deleteItem(int key);

    // hash function to map values to key
    int hashFunction(int x) {
        return (x % BUCKET);
    }

    void displayHash();
};

Hash::Hash(int b)
{
    this->BUCKET = b;
    table = new list<int>[BUCKET];
}

void Hash::insertItem(int key)
{
    int index = hashFunction(key);
    table[index].push_back(key);
}

void Hash::deleteItem(int key)
{
    // get the hash index of key
    int index = hashFunction(key);

    // find the key in (index)th list
    list<int>::iterator i;
    for (i = table[index].begin();
         i != table[index].end(); i++) {
        if (*i == key)
            break;
    }

    // if key is found in hash table, remove it
    if (i != table[index].end())
        table[index].erase(i);
}

// function to display hash table
void Hash::displayHash() {
    for (int i = 0; i < BUCKET; i++) {
        cout << i;
        for (auto x : table[i])
            cout << " --> " << x;
        cout << endl;
    }
}

// Driver program
int main()
{
    // array that contains keys to be mapped
    int a[] = {15, 11, 27, 8, 12};
    int n = sizeof(a) / sizeof(a[0]);

    // insert the keys into the hash table
    Hash h(7); // 7 is count of buckets in hash table
    for (int i = 0; i < n; i++)
        h.insertItem(a[i]);

    // delete 12 from hash table
    h.deleteItem(12);

    // display the Hash table
    h.displayHash();

    return 0;
}
Hashing | Set 3 (Open Addressing)

Open Addressing
Like separate chaining, open addressing is a method for handling
collisions. In Open Addressing, all elements are stored in the hash
table itself. So at any point, size of the table must be greater than or
equal to the total number of keys (Note that we can increase table size
by copying old data if needed).
Insert(k): Keep probing until an empty slot is found. Once an empty
slot is found, insert k.
Search(k): Keep probing until the slot’s key becomes equal to k or an empty slot is reached.
Delete(k): Delete operation is interesting. If we simply delete a key,
then search may fail. So slots of deleted keys are marked specially as
“deleted”.
Insert can insert an item in a deleted slot, but the search doesn’t stop at
a deleted slot.

Open Addressing is done in the following ways:

a) Linear Probing: In linear probing, we linearly probe for the next slot. The typical gap between two probes is 1, as in the example below.
Let hash(x) be the slot index computed using the hash function and S be the table size:
If slot hash(x) % S is full, then we try (hash(x) + 1) % S
If (hash(x) + 1) % S is also full, then we try (hash(x) + 2) % S
If (hash(x) + 2) % S is also full, then we try (hash(x) + 3) % S
..................................................
Let us consider a simple hash function as “key mod 7” and the sequence of keys 50, 700, 76, 85, 92, 73, 101.
Clustering: The main problem with linear probing is clustering: many consecutive elements form groups, and it starts taking time to find a free slot or to search for an element.

b) Quadratic Probing: We look for the i^2-th slot in the i-th iteration.
Let hash(x) be the slot index computed using the hash function:
If slot hash(x) % S is full, then we try (hash(x) + 1*1) % S
If (hash(x) + 1*1) % S is also full, then we try (hash(x) + 2*2) % S
If (hash(x) + 2*2) % S is also full, then we try (hash(x) + 3*3) % S
..................................................

c) Double Hashing: We use another hash function hash2(x) and look for the i*hash2(x) slot in the i-th iteration.
Let hash(x) be the slot index computed using the hash function:
If slot hash(x) % S is full, then we try (hash(x) + 1*hash2(x)) % S
If (hash(x) + 1*hash2(x)) % S is also full, then we try (hash(x) + 2*hash2(x)) % S
If (hash(x) + 2*hash2(x)) % S is also full, then we try (hash(x) + 3*hash2(x)) % S
..................................................

Comparison of the above three:
Linear probing has the best cache performance but suffers from clustering. One more advantage of linear probing is that it is easy to compute.
Quadratic probing lies between the two in terms of cache performance and clustering.
Double hashing has poor cache performance but no clustering. Double hashing requires more computation time, as two hash functions need to be computed.

S.No. | Separate Chaining | Open Addressing
1. | Chaining is simpler to implement. | Open addressing requires more computation.
2. | The hash table never fills up; we can always add more elements to the chain. | In open addressing, the table may become full.
3. | Chaining is less sensitive to the hash function or load factors. | Open addressing requires extra care to avoid clustering and load factor.
4. | Chaining is mostly used when it is unknown how many and how frequently keys may be inserted or deleted. | Open addressing is used when the frequency and number of keys is known.
5. | Cache performance of chaining is not good, as keys are stored using a linked list. | Open addressing provides better cache performance, as everything is stored in the same table.
6. | Wastage of space (some parts of the hash table in chaining are never used). | In open addressing, a slot can be used even if an input doesn’t map to it.
7. | Chaining uses extra space for links. | No links in open addressing.

Performance of Open Addressing:
Like chaining, the performance of hashing can be evaluated under the assumption that each key is equally likely to be hashed to any slot of the table (simple uniform hashing).

m = Number of slots in the hash table
n = Number of keys to be inserted in the hash table

Load factor α = n/m ( < 1 )

Expected time to search/insert/delete < 1/(1 - α)

So Search, Insert and Delete take (1/(1 - α)) time.

Load Factor and Rehashing

How hashing works:
For insertion of a key(K) – value(V) pair into a hash map, 2 steps are required:
K is converted into a small integer (called its hash code) using a hash function.
The hash code is used to find an index (hashCode % arrSize), and the entire linked list at that index (separate chaining) is first searched for the presence of K already.
If found, its value is updated; if not, the K-V pair is stored as a new node in the list.

Complexity and Load Factor
For the first step, the time taken depends on K and the hash function. For example, if the key is the string “abcd”, then its hash function may depend on the length of the string. But for very large values of n, the number of entries in the map, the length of the keys is almost negligible in comparison to n, so hash computation can be considered to take place in constant time, i.e., O(1).
For the second step, the list of K-V pairs present at that index needs to be traversed. In the worst case all n entries may be at the same index, so the time complexity would be O(n). But enough research has been done to make hash functions distribute the keys uniformly in the array, so this almost never happens.
So, on average, if there are n entries and b is the size of the array, there would be n/b entries at each index. This value n/b is called the load factor, and it represents the load on our map. This load factor needs to be kept low, so that the number of entries at one index is small and the complexity stays almost constant, i.e., O(1).

Rehashing:
As the name suggests, rehashing means hashing again. Basically, when the load factor increases beyond its pre-defined value (the default value of the load factor is 0.75), the complexity increases. To overcome this, the size of the array is increased (doubled) and all the values are hashed again and stored in the new double-sized array, to maintain a low load factor and low complexity.

Why rehashing?
Rehashing is done because whenever key-value pairs are inserted into the map, the load factor increases, which implies that the time complexity also increases, as explained above. This might not give the required time complexity of O(1).

Hence, rehashing must be done, increasing the size of the bucketArray so as to reduce the load factor and the time complexity.

How Rehashing is done?
Rehashing can be done as follows:

For each addition of a new entry to the map, check the load factor.
If it is greater than its pre-defined value (or the default value of 0.75 if not given), then Rehash.
For Rehash, make a new array of double the previous size and make it the new bucketArray.
Then traverse each element in the old bucketArray and call insert() for each, so as to insert it into the new larger bucket array.

Priority Queue in C++ Standard Template Library (STL)

Priority queues are a type of container adapter, specifically designed such that the first element of the queue is the greatest of all elements in the queue, and elements are in non-increasing order (hence we can see that each element of the queue has a priority {fixed order}).

Methods of priority queue are:

priority_queue::empty() in C++ STL – empty() returns whether the queue is empty.
priority_queue::size() in C++ STL – size() returns the size of the queue.
priority_queue::top() in C++ STL – returns a reference to the topmost element of the queue.
priority_queue::push() in C++ STL – push(g) adds the element ‘g’ to the queue.
priority_queue::pop() in C++ STL – pop() deletes the top element of the queue.
priority_queue::swap() in C++ STL – swaps the contents of one priority queue with another priority queue of the same type and size.
priority_queue::emplace() in C++ STL – constructs a new element in place in the priority queue container; the element is positioned according to its priority.
priority_queue value_type in C++ STL – represents the type of object stored as an element in a priority_queue. It acts as a synonym for the template parameter.

#include <iostream>
#include <queue>

using namespace std;

void showpq(priority_queue <int> gq)
{
    priority_queue <int> g = gq;
    while (!g.empty())
    {
        cout << '\t' << g.top();
        g.pop();
    }
    cout << '\n';
}

int main ()
{
    priority_queue <int> gquiz;
    gquiz.push(10);
    gquiz.push(30);
    gquiz.push(20);
    gquiz.push(5);
    gquiz.push(1);

    cout << "The priority queue gquiz is : ";
    showpq(gquiz);

    cout << "\ngquiz.size() : " << gquiz.size();
    cout << "\ngquiz.top() : " << gquiz.top();

    cout << "\ngquiz.pop() : ";
    gquiz.pop();
    showpq(gquiz);

    return 0;
}

Set in C++ Standard Template Library (STL)

Sets are a type of associative container in which each element has to be unique, because the value of the element identifies it. The value of the element cannot be modified once it is added to the set, though it is possible to remove and add the modified value of that element.

Some basic functions associated with Set:

begin() – Returns an iterator to the first element in the set.
end() – Returns an iterator to the theoretical element that follows the last element in the set.
size() – Returns the number of elements in the set.
max_size() – Returns the maximum number of elements that the set can hold.
empty() – Returns whether the set is empty.

#include <iostream>
#include <set>
#include <iterator>

using namespace std;

int main()
{
// empty set container
set <int, greater <int> > gquiz1;

    // insert elements in random order
    gquiz1.insert(40);
    gquiz1.insert(30);
    gquiz1.insert(60);
    gquiz1.insert(20);
    gquiz1.insert(50);
    gquiz1.insert(50); // only one 50 will be added to the set
    gquiz1.insert(10);

    // printing set gquiz1
    set <int, greater <int> > :: iterator itr;
    cout << "\nThe set gquiz1 is : ";
    for (itr = gquiz1.begin(); itr != gquiz1.end(); ++itr)
    {
        cout << '\t' << *itr;
    }
    cout << endl;

    // assigning the elements from gquiz1 to gquiz2
    set <int> gquiz2(gquiz1.begin(), gquiz1.end());

    // print all elements of the set gquiz2
    cout << "\nThe set gquiz2 after assign from gquiz1 is : ";
    for (itr = gquiz2.begin(); itr != gquiz2.end(); ++itr)
    {
        cout << '\t' << *itr;
    }
    cout << endl;

    // remove all elements up to 30 in gquiz2
    cout << "\ngquiz2 after removal of elements less than 30 : ";
    gquiz2.erase(gquiz2.begin(), gquiz2.find(30));
    for (itr = gquiz2.begin(); itr != gquiz2.end(); ++itr)
    {
        cout << '\t' << *itr;
    }

    // remove element with value 50 in gquiz2
    int num;
    num = gquiz2.erase (50);
    cout << "\ngquiz2.erase(50) : ";
    cout << num << " removed \t" ;
    for (itr = gquiz2.begin(); itr != gquiz2.end(); ++itr)
    {
        cout << '\t' << *itr;
    }

    cout << endl;

    // lower bound and upper bound for set gquiz1
    cout << "gquiz1.lower_bound(40) : "
         << *gquiz1.lower_bound(40) << endl;
    cout << "gquiz1.upper_bound(40) : "
         << *gquiz1.upper_bound(40) << endl;

    // lower bound and upper bound for set gquiz2
    cout << "gquiz2.lower_bound(40) : "
         << *gquiz2.lower_bound(40) << endl;
    cout << "gquiz2.upper_bound(40) : "
         << *gquiz2.upper_bound(40) << endl;

    return 0;
}

The output of the above program is:

The set gquiz1 is : 60 50 40 30 20 10

The set gquiz2 after assign from gquiz1 is : 10 20 30 40 50 60

gquiz2 after removal of elements less than 30 : 30 40 50 60
gquiz2.erase(50) : 1 removed 30 40 60
gquiz1.lower_bound(40) : 40
gquiz1.upper_bound(40) : 30
gquiz2.lower_bound(40) : 40
gquiz2.upper_bound(40) : 60
Tree Traversals (Inorder, Preorder and Postorder)

Unlike linear data structures (Array, Linked List, Queues, Stacks, etc.), which have only one logical way to traverse them, trees can be traversed in different ways. Following are the generally used ways for traversing trees.

Example Tree

Depth First Traversals:
(a) Inorder (Left, Root, Right): 4 2 5 1 3
(b) Preorder (Root, Left, Right): 1 2 4 5 3
(c) Postorder (Left, Right, Root): 4 5 2 3 1
Breadth First or Level Order Traversal: 1 2 3 4 5

Inorder Traversal:
Algorithm Inorder(tree)
1. Traverse the left subtree, i.e., call Inorder(left-subtree)
2. Visit the root.
3. Traverse the right subtree, i.e., call Inorder(right-subtree)
Uses of Inorder: In the case of binary search trees (BST), Inorder traversal gives the nodes in non-decreasing order. To get the nodes of a BST in non-increasing order, a variation of Inorder traversal in which the Inorder traversal is reversed can be used.
Example: Inorder traversal for the above-given figure is 4 2 5 1 3.

Preorder Traversal:
Algorithm Preorder(tree)
1. Visit the root.
2. Traverse the left subtree, i.e., call Preorder(left-subtree)
3. Traverse the right subtree, i.e., call Preorder(right-subtree)
Uses of Preorder: Preorder traversal is used to create a copy of the tree. Preorder traversal is also used to get the prefix expression of an expression tree.
Example: Preorder traversal for the above-given figure is 1 2 4 5 3.

Postorder Traversal:
Algorithm Postorder(tree)
1. Traverse the left subtree, i.e., call Postorder(left-subtree)
2. Traverse the right subtree, i.e., call Postorder(right-subtree)
3. Visit the root.
Example: Postorder traversal for the above-given figure is 4 5 2 3 1.
Introduction to Iterators in C++

An iterator is an object (like a pointer) that points to an element inside


the container. We can use iterators to move through the contents of the
container. They can be visualised as something similar to a pointer
pointing to some location and we can access content at that particular
location using them.

Iterators play a critical role in connecting algorithms with containers, along with the manipulation of the data stored inside the containers. The most obvious form of iterator is a pointer: a pointer can point to elements in an array and can iterate through them using the increment operator (++). But not all iterators have functionality similar to that of pointers.

Depending upon their functionality, iterators can be classified into five major categories, forming a hierarchy in which the outermost category is the most powerful and the innermost is consequently the least powerful in terms of functionality:

1. Input Iterators: They are the weakest of all the iterators and have very limited functionality. They can only be used in single-pass algorithms, i.e., algorithms which process the container sequentially such that no element is accessed more than once.
2. Output Iterators: Just like input iterators, they are also very limited in their functionality and can only be used in single-pass algorithms, not for accessing elements, but for being assigned elements.
3. Forward Iterators: They are higher in the hierarchy than input and output iterators, and contain all the features present in these two. But, as the name suggests, they can only move in the forward direction, and that too one step at a time.
4. Bidirectional Iterators: They have all the features of forward iterators and overcome their drawback, as they can move in both directions; that is why they are called bidirectional.
5. Random-Access Iterators: They are the most powerful iterators. They are not limited to moving sequentially; as their name suggests, they can randomly access any element inside the container. Their functionality is the same as that of pointers.

Not every iterator category is supported by every container in STL; different containers support different iterators. For example, vectors support random-access iterators, while lists support bidirectional iterators.
