
Complete Notes on Searching & Sorting

With C Code, Diagrams, Complexity, and Examples


1. SEARCHING

Searching is the process of finding a target element in a collection of data.


A. Sequential (Linear) Search

Search from start to end until target is found.

Algorithm

for i = 0 to n-1:
    if arr[i] == key:
        return i
return -1

C Code

int linearSearch(int arr[], int n, int key) {
    for(int i = 0; i < n; i++) {
        if(arr[i] == key) return i;
    }
    return -1;
}

Complexity

Case          | Time
Best          | O(1) (first element)
Average/Worst | O(n)

Best for small or unsorted arrays.
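A classic variant not shown in the notes is the sentinel search: planting the key in the last slot removes the bound check from the loop. This is an illustrative sketch (the name linearSearchSentinel is ours); it assumes n >= 1 and a writable array, since the last element is temporarily overwritten.

```c
// Sentinel linear search: place key at arr[n-1] so the scan cannot run
// off the end, letting the loop drop the i < n test.
int linearSearchSentinel(int arr[], int n, int key) {
    int last = arr[n - 1];
    arr[n - 1] = key;               // plant the sentinel
    int i = 0;
    while (arr[i] != key) i++;      // guaranteed to stop by index n-1
    arr[n - 1] = last;              // restore the original value
    if (i < n - 1) return i;        // found before the sentinel slot
    return (last == key) ? n - 1 : -1;
}
```

The asymptotic cost is still O(n); the win is one comparison fewer per iteration.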


B. Indexed Sequential Search

Combines sequential search on a sorted array with an index table for faster access.

Structure

Index Table:
+-------+-------+
| Value | Block |
+-------+-------+
|  10   |   0   |
|  40   |   5   |
|  70   |  10   |
+-------+-------+

Data Blocks:
[0-4]: 5,8,12,15,18
[5-9]: 42,45,48,52,55

Steps

  1. Search index table using binary search
  2. Go to corresponding block
  3. Do linear search in block

Faster than pure linear, slower than binary
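The three steps above can be sketched in C. This is illustrative (the name indexedSearch, the fixed block size, and the index layout are our assumptions): the array is sorted, and index[i] holds the first value of block i.

```c
// Indexed sequential search: binary-search the index table for the last
// block whose first value <= key, then linear-search only that block.
int indexedSearch(const int arr[], int n, const int index[], int nblocks,
                  int bsize, int key) {
    // Step 1: binary search on the index table
    int lo = 0, hi = nblocks - 1, block = 0;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (index[mid] <= key) { block = mid; lo = mid + 1; }
        else hi = mid - 1;
    }
    // Steps 2-3: linear search inside the chosen block only
    int start = block * bsize;
    int end = start + bsize;
    if (end > n) end = n;
    for (int i = start; i < end; i++)
        if (arr[i] == key) return i;
    return -1;
}
```

With block size ≈ √n this gives roughly O(√n) probes, between linear and binary search.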


C. Binary Search

Works only on sorted array.
Repeatedly divide search space in half.

Algorithm

l = 0, r = n-1
while l <= r:
    mid = (l + r) / 2
    if arr[mid] == key: return mid
    if arr[mid] < key: l = mid + 1
    else: r = mid - 1
return -1

C Code (Iterative)

int binarySearch(int arr[], int n, int key) {
    int l = 0, r = n - 1;
    while(l <= r) {
        int mid = l + (r - l) / 2;
        if(arr[mid] == key) return mid;
        if(arr[mid] < key) l = mid + 1;
        else r = mid - 1;
    }
    return -1;
}

Recursive Version

int binarySearchRec(int arr[], int l, int r, int key) {
    if(l > r) return -1;
    int mid = l + (r - l) / 2;
    if(arr[mid] == key) return mid;
    if(arr[mid] < key)
        return binarySearchRec(arr, mid + 1, r, key);
    return binarySearchRec(arr, l, mid - 1, key);
}

Complexity

Case          | Time
Best          | O(1)
Average/Worst | O(log n)

Space: O(1) iterative, O(log n) recursive
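A handy variant not covered above is the leftmost-position ("lower bound") search, which keeps halving instead of returning at the first match; it is the building block for several of the practice problems later. The name lowerBound is illustrative.

```c
// Returns the index of the first element >= key, or n if every element
// is smaller. Uses a half-open interval [l, r) instead of [l, r].
int lowerBound(const int arr[], int n, int key) {
    int l = 0, r = n;
    while (l < r) {
        int mid = l + (r - l) / 2;
        if (arr[mid] < key) l = mid + 1;  // answer is to the right of mid
        else r = mid;                     // mid itself may be the answer
    }
    return l;
}
```

Unlike plain binary search, this pins down *where* a key belongs even when it is absent or duplicated.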


2. HASHING

Hashing maps keys to indices of a fixed-size array using a hash function.

Key → Hash Function → Index → Array[Index]

Hash Function Examples

h(key) = key % TABLE_SIZE
h(key) = ((a*key + b) % p) % TABLE_SIZE  // universal family: a, b random, p prime > max key

Collision

A collision occurs when two different keys map to the same index.


Collision Resolution Techniques

Technique          | Description                | Time (Avg)
A. Chaining        | Linked list at each index  | O(1 + α)
B. Open Addressing | Probe for next empty slot  | O(1/(1-α))

A. Chaining

typedef struct Node {
    int key;
    struct Node* next;
} Node;

Node* hashTable[100];

void insert(int key) {
    int index = key % 100;
    Node* newNode = (Node*)malloc(sizeof(Node));
    newNode->key = key;
    newNode->next = hashTable[index];
    hashTable[index] = newNode;
}

Pros: Simple, no deletion issues
Cons: Memory overhead
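The insert above pairs naturally with a lookup that hashes once and walks the chain. This sketch restates the struct so it is self-contained; the names chainInsert and chainSearch are ours.

```c
#include <stdlib.h>

typedef struct Node {
    int key;
    struct Node* next;
} Node;

#define TABLE_SIZE 100
Node* table[TABLE_SIZE];   // buckets start NULL (globals are zero-initialized)

// Same head-insertion scheme as in the notes above.
void chainInsert(int key) {
    int index = key % TABLE_SIZE;
    Node* n = (Node*)malloc(sizeof(Node));
    n->key = key;
    n->next = table[index];
    table[index] = n;
}

// Returns 1 if key is present, 0 otherwise: walk the chain at its bucket.
int chainSearch(int key) {
    for (Node* p = table[key % TABLE_SIZE]; p != NULL; p = p->next)
        if (p->key == key) return 1;
    return 0;
}
```

The average chain length is the load factor α = n / TABLE_SIZE, giving the O(1 + α) lookup cost from the table.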


B. Open Addressing

1. Linear Probing
int hash(int key, int i) {
    return (h(key) + i) % TABLE_SIZE;
}

Drawback: primary clustering (runs of occupied slots grow)

2. Quadratic Probing
int hash(int key, int i) {
    return (h(key) + i*i) % TABLE_SIZE;
}

Better distribution

3. Double Hashing
int hash1(int key) { return key % 13; }
int hash2(int key) { return 7 - (key % 7); }   // step size, never 0
int hash(int key, int i) { return (hash1(key) + i * hash2(key)) % TABLE_SIZE; }

Best at avoiding clustering
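Putting the pieces together, a full probe loop with double hashing might look like this. Illustrative sketch only: table size 13 (prime, so every step size cycles through all slots), a parallel `used` array as the empty marker, and no deletion or resizing.

```c
#define SIZE 13

int slots[SIZE];
int used[SIZE];            // 0 = empty slot (globals start zeroed)

int h1(int key) { return key % SIZE; }
int h2(int key) { return 7 - (key % 7); }   // in 1..7, never 0

// Probe h1(k), h1(k)+h2(k), h1(k)+2*h2(k), ... until an empty slot.
// Returns the slot used, or -1 if the table is full.
int dhInsert(int key) {
    for (int i = 0; i < SIZE; i++) {
        int idx = (h1(key) + i * h2(key)) % SIZE;
        if (!used[idx]) { slots[idx] = key; used[idx] = 1; return idx; }
    }
    return -1;
}

// Same probe sequence; an empty slot means the key was never inserted.
int dhSearch(int key) {
    for (int i = 0; i < SIZE; i++) {
        int idx = (h1(key) + i * h2(key)) % SIZE;
        if (!used[idx]) return -1;
        if (slots[idx] == key) return idx;
    }
    return -1;
}
```

Because the step h2(key) depends on the key, two colliding keys follow different probe sequences, which is what breaks up clustering.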


3. SORTING ALGORITHMS


A. Insertion Sort

Insert element into sorted portion.

Diagram

[5,2,4,6,1,3]
 → [2,5,4,6,1,3]   (insert 2)
 → [2,4,5,6,1,3]   (insert 4)
 → [2,4,5,6,1,3]   (insert 6: already in place)
 → [1,2,4,5,6,3]   (insert 1)
 → [1,2,3,4,5,6]   (insert 3)

Code

void insertionSort(int arr[], int n) {
    for(int i = 1; i < n; i++) {
        int key = arr[i];
        int j = i - 1;
        while(j >= 0 && arr[j] > key) {
            arr[j+1] = arr[j];
            j--;
        }
        arr[j+1] = key;
    }
}

Complexity

Case      | Time  | Space
Best      | O(n)  | O(1)
Avg/Worst | O(n²) | O(1)

Best for small or nearly sorted arrays


B. Selection Sort

Repeatedly select minimum and swap.

Code

// swap helper, also used by bubble, quick, and heap sort below
void swap(int* a, int* b) {
    int t = *a; *a = *b; *b = t;
}

void selectionSort(int arr[], int n) {
    for(int i = 0; i < n-1; i++) {
        int min_idx = i;
        for(int j = i+1; j < n; j++) {
            if(arr[j] < arr[min_idx])
                min_idx = j;
        }
        swap(&arr[i], &arr[min_idx]);
    }
}

Complexity

Case | Time
All  | O(n²)

In-place, not stable


C. Bubble Sort

Repeatedly bubble up largest element.

Code

void bubbleSort(int arr[], int n) {
    for(int i = 0; i < n-1; i++) {
        int swapped = 0;
        for(int j = 0; j < n-i-1; j++) {
            if(arr[j] > arr[j+1]) {
                swap(&arr[j], &arr[j+1]);
                swapped = 1;
            }
        }
        if(!swapped) break;
    }
}

O(n) best case with flag


D. Quick Sort

Divide and Conquer
Pick pivot, partition, recurse.

Partition

int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for(int j = low; j < high; j++) {
        if(arr[j] <= pivot) {
            i++;
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i+1], &arr[high]);
    return i+1;
}

Code

void quickSort(int arr[], int low, int high) {
    if(low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi-1);
        quickSort(arr, pi+1, high);
    }
}

Complexity

Case     | Time
Best/Avg | O(n log n)
Worst    | O(n²) (already-sorted input, since the last element is always the pivot)

In-place, not stable
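The O(n²) case comes from the fixed last-element pivot; a common remedy, not shown in the code above, is to swap a random element into arr[high] before partitioning. Sketch under that assumption (the names quickSortRand, randPartition, swapInt are ours; rand() is unseeded here, which is fine for a demonstration).

```c
#include <stdlib.h>

static void swapInt(int* a, int* b) { int t = *a; *a = *b; *b = t; }

// Same Lomuto partition as in the notes above.
static int randPartition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;
    for (int j = low; j < high; j++)
        if (arr[j] <= pivot) swapInt(&arr[++i], &arr[j]);
    swapInt(&arr[i + 1], &arr[high]);
    return i + 1;
}

void quickSortRand(int arr[], int low, int high) {
    if (low < high) {
        int r = low + rand() % (high - low + 1);  // random index in [low, high]
        swapInt(&arr[r], &arr[high]);             // make it the pivot
        int pi = randPartition(arr, low, high);
        quickSortRand(arr, low, pi - 1);
        quickSortRand(arr, pi + 1, high);
    }
}
```

With a random pivot, no fixed input triggers the worst case; the expected running time is O(n log n) on every input.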


E. Merge Sort

Divide → Sort → Merge

Merge Function

void merge(int arr[], int l, int m, int r) {
    int n1 = m - l + 1, n2 = r - m;
    int L[n1], R[n2];
    for(int i = 0; i < n1; i++) L[i] = arr[l+i];
    for(int i = 0; i < n2; i++) R[i] = arr[m+1+i];

    int i=0, j=0, k=l;
    while(i < n1 && j < n2) {
        if(L[i] <= R[j]) arr[k++] = L[i++];
        else arr[k++] = R[j++];
    }
    while(i < n1) arr[k++] = L[i++];
    while(j < n2) arr[k++] = R[j++];
}

Code

void mergeSort(int arr[], int l, int r) {
    if(l < r) {
        int m = l + (r - l)/2;
        mergeSort(arr, l, m);
        mergeSort(arr, m+1, r);
        merge(arr, l, m, r);
    }
}

Complexity

Case | Time       | Space
All  | O(n log n) | O(n)

Stable, not in-place


F. Heap Sort

Use Max-Heap → Extract max repeatedly

Code

void heapify(int arr[], int n, int i) {
    int largest = i;
    int l = 2*i + 1, r = 2*i + 2;
    if(l < n && arr[l] > arr[largest]) largest = l;
    if(r < n && arr[r] > arr[largest]) largest = r;
    if(largest != i) {
        swap(&arr[i], &arr[largest]);
        heapify(arr, n, largest);
    }
}

void heapSort(int arr[], int n) {
    for(int i = n/2 - 1; i >= 0; i--)
        heapify(arr, n, i);
    for(int i = n-1; i > 0; i--) {
        swap(&arr[0], &arr[i]);
        heapify(arr, i, 0);
    }
}

O(n log n), in-place


G. Radix Sort

Sort by digits (LSD or MSD)

LSD Radix Sort

void countingSort(int arr[], int n, int exp) {
    int output[n], count[10] = {0};
    for(int i = 0; i < n; i++)
        count[(arr[i]/exp)%10]++;
    for(int i = 1; i < 10; i++)
        count[i] += count[i-1];
    for(int i = n-1; i >= 0; i--) {
        output[count[(arr[i]/exp)%10]-1] = arr[i];
        count[(arr[i]/exp)%10]--;
    }
    for(int i = 0; i < n; i++) arr[i] = output[i];
}

void radixSort(int arr[], int n) {
    int max = arr[0];                 // find max by hand (max_element is C++, not C)
    for(int i = 1; i < n; i++)
        if(arr[i] > max) max = arr[i];
    for(int exp = 1; max/exp > 0; exp *= 10)
        countingSort(arr, n, exp);
}

O(d(n+k)), non-comparison


4. Comparison Table

Algorithm | Best       | Avg        | Worst      | Space    | Stable | In-place
Insertion | O(n)       | O(n²)      | O(n²)      | O(1)     | Yes    | Yes
Selection | O(n²)      | O(n²)      | O(n²)      | O(1)     | No     | Yes
Bubble    | O(n)       | O(n²)      | O(n²)      | O(1)     | Yes    | Yes
Quick     | O(n log n) | O(n log n) | O(n²)      | O(log n) | No     | Yes
Merge     | O(n log n) | O(n log n) | O(n log n) | O(n)     | Yes    | No
Heap      | O(n log n) | O(n log n) | O(n log n) | O(1)     | No     | Yes
Radix     | O(nk)      | O(nk)      | O(nk)      | O(n+k)   | Yes    | No

5. When to Use Which?

Scenario         | Best Algorithm
Small array      | Insertion
Nearly sorted    | Insertion
Memory tight     | Heap / Quick
Stability needed | Merge
Integers only    | Radix
Fast in practice | Quick

6. Practice Problems

  1. Search in rotated sorted array
  2. Find peak element
  3. Implement hash map with chaining
  4. Sort array of strings by length
  5. Kth largest using QuickSelect
  6. Count sort for digits

Key Takeaways

Concept       | Insight
Binary Search | Only on sorted data
Hashing       | O(1) average lookup
Quick Sort    | Fastest in-place sort in practice
Merge Sort    | The only stable O(n log n) sort here
Radix         | Fastest for fixed-length integer keys

End of Notes