# Algorithm - Theory
- [Programiz](https://www.programiz.com/)
- [GeeksforGeeks](https://www.geeksforgeeks.org/)
- [CODESDOPE](https://www.codesdope.com/)
- [SCALER Topics](https://www.scaler.com/topics/)
- [W3schools](https://www.w3schools.com/)
---
- [How to Start Learning DSA? - GeeksforGeeks](https://www.geeksforgeeks.org/how-to-start-learning-dsa/)
- [Algorithm - Programiz](https://www.programiz.com/dsa/algorithm)
- [Stuck in Programming: Get The Solution From These 10 Best Websites - GeeksforGeeks](https://www.geeksforgeeks.org/stuck-in-programming-get-the-solution-from-these-10-best-websites/)
- [DSA Sheet by Love Babbar - GeeksforGeeks](https://www.geeksforgeeks.org/dsa-sheet-by-love-babbar/)
- [SDE SHEET – A Complete Guide for SDE Preparation - GeeksforGeeks](https://www.geeksforgeeks.org/sde-sheet-a-complete-guide-for-sde-preparation/)
- [100 Days of Code – A Complete Guide For Beginners and Experienced - GeeksforGeeks](https://www.geeksforgeeks.org/100-days-of-code-a-complete-guide-for-beginners-and-experienced/)
- [Top 15 Websites for Coding Challenges and Competitions - GeeksforGeeks](https://www.geeksforgeeks.org/top-15-websites-for-coding-challenges-and-competitions/)
## Points
- Introduction
- Implementation
- Time and space complexity
- Pros and cons
- Application
## Time complexity
Best case
| Data structure | Access | Search | Insertion | Deletion |
| -------------- | :----: | :----: | :--------: | :------: |
| Array | O(1) | O(1) | O(1) | O(1) |
| Stack | O(1) | O(1) | O(1) | O(1) |
| Queue | O(1) | O(1) | O(1) | O(1) |
| Singly linked list | O(1) | O(1) | O(1) | O(1) |
Worst case
| Data structure | Access | Search | Insertion | Deletion |
| -------------- | :----: | :----: | :--------: | :------: |
| Array | O(1) | O(n) | O(n) | O(n) |
| Stack | O(n) | O(n) | O(1) | O(1) |
| Queue | O(n) | O(n) | O(1) | O(1) |
| Singly linked list | O(n) | O(n) | O(n) | O(n) |
Average case
| Data structure | Access | Search | Insertion | Deletion |
| -------------- | :----: | :----: | :--------: | :------: |
| Array | O(1) | O(n) | O(n) | O(n) |
| Stack | O(n) | O(n) | O(1) | O(1) |
| Queue | O(n) | O(n) | O(1) | O(1) |
| Singly linked list | O(n) | O(n) | O(1) | O(1) |
ref:
- [Time complexities of different data structures - GeeksforGeeks](https://www.geeksforgeeks.org/time-complexities-of-different-data-structures/)
## Recursion and Iteration
A program is called recursive when an entity calls itself. A program is called iterative when there is a loop (or repetition).
```C=
#include <stdio.h>

// Recursion
int factorialUsingRecursion(int n) {
    if (n == 0) {
        return 1;
    }
    // recursive call
    return n * factorialUsingRecursion(n - 1);
}

// Iteration
int factorialUsingIteration(int n) {
    int res = 1;
    // using iteration
    for (int i = 2; i <= n; i++) {
        res *= i;
    }
    return res;
}

int main() {
    int num = 5;
    printf("The factorial of %d using recursion is: %d\n", num, factorialUsingRecursion(num));
    printf("The factorial of %d using iteration is: %d\n", num, factorialUsingIteration(num));
    return 0;
}
```
| Property | Recursion | Iteration |
| -------- | :-------- | :-------- |
| Definition | Function calls itself. | A set of instructions repeatedly executed. |
| Infinite | If the recursive function never reaches a termination condition, it leads to infinite recursion, which can crash the system through stack overflow. | Iteration becomes infinite if the control condition of the loop never becomes false; an infinite loop repeatedly consumes CPU cycles. |
| Application | For functions. | For loops. |
| Usage | Used when code size needs to be small, and time complexity is not an issue. | Used when time complexity needs to be balanced against an expanded code size. |
| Time Complexity | Can be very high (even exponential) when overlapping subproblems are recomputed. | Relatively lower; generally polynomial or logarithmic. |
| Stack | It has to update and maintain the call stack. | The call stack is not used. |
| Memory | It uses more memory as compared to iteration. | It uses less memory as compared to recursion. |
| Overhead | There is extensive overhead from updating and maintaining the call stack. | There is no comparable overhead in iteration. |
### Extended
- [Algorithms | Recursion | Question](https://www.geeksforgeeks.org/difference-between-recursion-and-iteration/)
- tail recursion
ref:
- [Difference between Recursion and Iteration - GeeksforGeeks](https://www.geeksforgeeks.org/difference-between-recursion-and-iteration/)
- [Difference between Recursion and Iteration - Javatpoint](https://www.javatpoint.com/recursion-vs-iteration)
## Hash
A hash table (or hashmap) is a data structure that maps keys to values. It uses a hash function to compute an index that in turn locates the element to be searched, inserted, removed, etc. This makes data access easy and fast. In general, hash tables store key-value pairs, and the index for a key is generated by the hash function.
There are a number of operations that can be performed on hash tables, for example in Python through dictionaries:
- Accessing Values
- Updating Values
- Deleting Element
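Since the document's examples are in C, here is a minimal sketch of the underlying idea: a hash function turns a key into an array index, and collisions are resolved by linear probing. The fixed table size, the toy hash function, and the names `ht_put`/`ht_get` are illustrative choices, not from any of the sources above (there is no resizing, so the table must not fill up).

```c
#include <string.h>

#define TABLE_SIZE 11

// A toy open-addressing hash table mapping string keys to int values.
typedef struct {
    const char *keys[TABLE_SIZE];
    int values[TABLE_SIZE];
} HashTable;

// djb2-style hash: turns a string key into a table index.
static unsigned hash(const char *key) {
    unsigned h = 5381;
    while (*key) h = h * 33 + (unsigned char)*key++;
    return h % TABLE_SIZE;
}

// Insert or update a key (linear probing on collision).
void ht_put(HashTable *t, const char *key, int value) {
    unsigned i = hash(key);
    while (t->keys[i] && strcmp(t->keys[i], key) != 0)
        i = (i + 1) % TABLE_SIZE;   // probe the next slot
    t->keys[i] = key;
    t->values[i] = value;
}

// Look up a key; returns 1 and writes *out on a hit, 0 if absent.
int ht_get(const HashTable *t, const char *key, int *out) {
    unsigned i = hash(key);
    while (t->keys[i]) {
        if (strcmp(t->keys[i], key) == 0) { *out = t->values[i]; return 1; }
        i = (i + 1) % TABLE_SIZE;
    }
    return 0;
}
```

Accessing, updating, and deleting map onto `ht_get`, `ht_put` (which overwrites an existing key), and, in a full implementation, a tombstone-based remove.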
ref:
- [Hash Tables and Hashmaps in Python: What are they and How to implement?](https://www.edureka.co/blog/hash-tables-and-hashmaps-in-python/#updating)
- [Software Engineer Chat: Easily Understand Data Structures: Hashing (from the IT Ironman series on data structures, algorithms, and OOP) - YouTube](https://www.youtube.com/watch?v=vPvxEDyxI2w&list=PLhxdaTcUMi3nRM5mtOdQgO4VEtAEsTiYd&index=20&t=304s)
## Stack and Queue
### Stack
Stack is a linear data structure that follows a particular order in which the operations are performed. The order may be LIFO (Last In First Out) or FILO (First In Last Out). A good example of a stack is a pile of plates.
Mainly the following various basic operations are performed in the stack:
- `push()`: Adds an item to the stack. If the stack is full, it is said to be an `overflow` condition.
- `pop()`: Removes an item from the stack. The items are popped in the reverse order in which they were pushed. If the stack is empty, it is said to be an `underflow` condition.
- `peek()` or `top()`: Returns the top element of the stack.
- `isEmpty()`: Returns true if the stack is empty, else false.
- `size()`: Returns the size of the stack.

Implementation of stack can use array or linked list.
The benefit of implementing a stack with a linked list rather than an array is that the stack can grow as required. An array-based stack has a fixed capacity that must be determined in advance, whereas a linked-list implementation can add nodes according to the user's requirements.
| Operations | Array | Linked list |
| ----------- |:----: | :---------: |
| `push()` | O(1)`*`^[1]^ | O(1) |
| `pop()` | O(1) | O(1) |
| `peek()` | O(1) | O(1) |
| `isEmpty()` | O(1) | O(1) |
| Auxiliary space | O(n)`*`^[2]^ | O(n), plus pointer overhead |
`*`^[1]^ If the array has to be expanded during `push()`, that call costs `O(n)`, but amortized analysis shows that the average cost per `push()` is still `O(1)`.
`*`^[2]^ O(n), where n is the size of the array used to store the elements.
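A minimal array-based sketch of the operations listed above; the fixed capacity `MAX` and the out-parameter style for `pop()`/`peek()` are illustrative choices.

```c
#define MAX 100

// A fixed-capacity stack backed by an array.
typedef struct {
    int data[MAX];
    int top;           // index of the top element; -1 when empty
} Stack;

void init(Stack *s) { s->top = -1; }
int isEmpty(const Stack *s) { return s->top == -1; }
int size(const Stack *s) { return s->top + 1; }

// Returns 0 on overflow, 1 on success.
int push(Stack *s, int x) {
    if (s->top == MAX - 1) return 0;      // overflow
    s->data[++s->top] = x;
    return 1;
}

// Returns 0 on underflow; otherwise writes the popped value to *out.
int pop(Stack *s, int *out) {
    if (isEmpty(s)) return 0;             // underflow
    *out = s->data[s->top--];
    return 1;
}

// Returns the top element without removing it.
int peek(const Stack *s, int *out) {
    if (isEmpty(s)) return 0;
    *out = s->data[s->top];
    return 1;
}
```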
#### Advantages of Array Implementation
- Easy to implement.
- Memory is saved as pointers are not involved.
#### Disadvantages of Array Implementation
- It is not dynamic i.e., it doesn’t grow and shrink depending on needs at runtime. (But in case of dynamic sized arrays like vector in C++, list in Python, ArrayList in Java, stacks can grow and shrink with array implementation as well)
- The total size of the stack must be defined beforehand.
#### Advantages of Linked List Implementation
- The linked list implementation of a stack can grow and shrink according to the needs at runtime.
#### Disadvantages of Linked List Implementation
- Requires extra memory due to the involvement of pointers.
- Random access is not possible.
#### Applications of the Stack
- Expression conversion
- Forward and backward features in web browsers
- Topological sort
- String reversal
- Redo or undo features
- Many virtual machines like JVM
- The history of a web browser or call logs
- Systematic Memory Management
### Queue
Like Stack, Queue is a linear structure which follows a particular order in which the operations are performed. The order is First In First Out (FIFO). A good example of queue is any queue of consumers for a resource where the consumer that came first is served first.
The difference between stacks and queues is in the removal order: in a stack we remove the most recently added item; in a queue, we remove the least recently added item.
Mainly the following four basic operations are performed on queue:
- `enqueue()`: Adds an item to the queue. If the queue is full, then it is said to be an `overflow` condition.
- `dequeue()`: Removes an item from the queue. The items are dequeued in the same order in which they were enqueued. If the queue is empty, it is said to be an `underflow` condition.
- `peekFront()`: Gets the front item from the queue.
- `peekRear()`: Gets the last item from the queue.
- `isEmpty()`: Returns true if the queue is empty, else false.

Implementation of queue can use array or linked list.
The benefit of implementing a queue with a linked list rather than an array is that the queue can grow as required. An array-based queue has a fixed capacity that must be determined in advance, whereas a linked-list implementation can add nodes according to the user's requirements.
| Operations | Array | Linked list |
| ----------- |:----: | :---------: |
| `enqueue()` | O(1)`*`^[1]^ | O(1) |
| `dequeue()` | O(1) | O(1) |
| `peekFront()` | O(1) | O(1) |
| `peekRear()` | O(1) | O(1) |
| `isEmpty()` | O(1) | O(1) |
| Auxiliary space | O(n)`*`^[2]^ | O(n), plus pointer overhead |
`*`^[1]^ If the array has to be expanded during `enqueue()`, that call costs `O(n)`, but amortized analysis shows that the average cost per `enqueue()` is still `O(1)`.
`*`^[2]^ O(n), where n is the size of the array used to store the elements.
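A minimal sketch of an array-based queue using a circular buffer, so that `dequeue()` stays O(1) without shifting elements. The capacity `CAP` and the `count`-based bookkeeping are illustrative choices.

```c
#define CAP 8

// Circular-buffer queue: front is the index of the first element,
// count tracks how many elements are currently stored.
typedef struct {
    int data[CAP];
    int front, count;
} Queue;

void qinit(Queue *q) { q->front = 0; q->count = 0; }
int qIsEmpty(const Queue *q) { return q->count == 0; }

// Returns 0 on overflow, 1 on success.
int enqueue(Queue *q, int x) {
    if (q->count == CAP) return 0;             // overflow
    q->data[(q->front + q->count) % CAP] = x;  // rear slot, wrapping around
    q->count++;
    return 1;
}

// Returns 0 on underflow; otherwise writes the front value to *out.
int dequeue(Queue *q, int *out) {
    if (qIsEmpty(q)) return 0;                 // underflow
    *out = q->data[q->front];
    q->front = (q->front + 1) % CAP;
    q->count--;
    return 1;
}
```

The modulo arithmetic is what makes this a circular queue, one of the variants listed under Extended below.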
#### Advantages of Array Implementation
- Easy to implement.
- It is faster compared to a linked-list-based queue.
- Elements can be accessed randomly.
- Memory is saved as pointers are not involved.
#### Disadvantages of Array Implementation
- The size of the queue should be known in advance.
- Static Data Structure, fixed size.
- Resizing array-based queues is complex.
#### Advantages of Linked List Implementation
- Insertion at both the beginning and the end is easy.
- It’s not necessary to know the size of the queue in advance.
- Resizing is simple.
#### Disadvantages of Linked List Implementation
- It requires more memory.
- It is slower as compared to array-based queues.
- Elements can be accessed sequentially only.
#### Applications of the Queue
- Spooling in printers
- Buffer for devices like keyboard
- CPU Scheduling
- Memory management
- Queues in routers/switches
- Mail queues
- Waiting lists for a single shared resource
### Difference between Stack and Queue Data Structures
| Stacks | Queues |
| ------ | ------ |
| LIFO | FIFO |
| Stacks are often used for tasks that require backtracking, such as parsing expressions or implementing undo functionality. | Queues are often used for tasks that involve processing elements in a specific order, such as handling requests or scheduling tasks. |
| Stack is used in solving problems that work on recursion. | Queue is used in solving problems that require sequential processing. |
| Stacks are often used for recursive algorithms or for maintaining a history of function calls. | Queues are often used in multithreaded applications, where tasks are added to a queue and executed by a pool of worker threads. |
### Extended
- expanding a stack or queue implemented by an array
- stack is used in expression conversion such as infix to postfix, infix to prefix, postfix to infix, and prefix to infix
- systematic memory management applying the stack
- implement queue using stacks
- implement stack using queues
- circular queue
- double-ended queue (dequeue, deque)
- priority queue
- double-ended priority queue
- queue in scala
- LRU cache
ref:
- [Introduction to Stack - GeeksforGeeks](https://www.geeksforgeeks.org/stack-data-structure-introduction-program/)
- [Stack Data Structure - GeeksforGeeks](https://www.geeksforgeeks.org/stack-data-structure/?ref=gcse)
- [Queue Data Structure - GeeksforGeeks](https://www.geeksforgeeks.org/queue-data-structure/?ref=gcse)
- [Array Implementation of Queue - GeeksforGeek](https://www.geeksforgeeks.org/queue-set-1introduction-and-array-implementation/)
- [Stack Using Array - Open Source Doc](https://opensourcedoc.com/data-structures-in-c/stack-in-array/)
- [Stack Using Linked List - Open Source Doc](https://opensourcedoc.com/data-structures-in-c/stack-in-list/)
- [Queue Using Array - Open Source Doc](https://opensourcedoc.com/data-structures-in-c/queue-in-array/)
- [Queue Using Linked List in C - Open Source Doc](https://opensourcedoc.com/data-structures-in-c/queue-in-list/)
- [Stack Using Array in C - Scaler Topics](https://www.scaler.com/topics/stack-implementation-using-array-in-c/)
- [Stack Using Linked List in C - Scaler Topics](https://www.scaler.com/topics/c/stack-using-linked-list-in-c/)
- [Queue Using Array in C - Scaler Topics](https://www.scaler.com/topics/implementation-of-queue-using-array/)
- [Queue Using Linked List in C - Scaler Topics](https://www.scaler.com/topics/c/implementation-of-queue-using-linked-list/)
- [Difference between Stack and Queue Data Structures - GeeksforGeeks](https://www.geeksforgeeks.org/difference-between-stack-and-queue-data-structures/)
## Linked List
Like arrays, Linked List is a linear data structure. Unlike arrays, linked list elements are not stored at a contiguous location; the elements are linked using pointers.

In simple words, a linked list consists of nodes where each node contains a data field and a reference(link) to the next node in the list.
### Operations of Linked List
- Traversal
- Insertion
A node can be added to either the beginning, middle or end of the linked list.
1. At the front of the linked list

2. After a given node

3. At the end of the linked list

- Deletion
A node can be deleted either from the beginning, end or from a particular position of the linked list.
1. At the front of the linked list
2. After a given node
- Delete the first node in a linked list where data == key
- Delete a Linked List node at a given position
3. At the end of the linked list
- Sort
1. Bubble sort
2. Insertion sort
3. Selection sort
4. Merge sort
5. Quick sort
- Reverse

| Operation | Time complexity | Space complexity |
| --------- | :-------------: | :-----------------------: |
| Adding at the beginning | O(1) | O(1) |
| Adding at the end | O(n) (O(1) with a tail pointer) | O(1) |
| Adding after a given node | O(1) | O(1) |
| Adding at a given position | O(n) | O(1) |
| Deleting a node | O(n) | O(1) |
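A minimal sketch of a singly linked list in C covering O(1) insertion at the beginning, O(n) traversal, and the iterative three-pointer reversal listed among the operations above (names are illustrative).

```c
#include <stdlib.h>

typedef struct Node {
    int data;
    struct Node *next;
} Node;

// O(1): insert a new node at the head.
void pushFront(Node **head, int data) {
    Node *n = malloc(sizeof *n);
    n->data = data;
    n->next = *head;
    *head = n;
}

// O(n): traverse the list to count the nodes.
int length(const Node *head) {
    int n = 0;
    for (; head; head = head->next) n++;
    return n;
}

// O(n): iterative in-place reversal using three pointers.
void reverse(Node **head) {
    Node *prev = NULL, *cur = *head;
    while (cur) {
        Node *next = cur->next;  // remember the rest of the list
        cur->next = prev;        // flip one link
        prev = cur;
        cur = next;
    }
    *head = prev;
}
```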
### Linked List vs Array

#### Advantages of Linked Lists
- Dynamic size
- Ease of insertion/deletion
- Memory efficient: a linked list can grow or shrink dynamically according to our requirements, so memory is utilized effectively with no wastage on unused capacity.
- The linked list can be expanded in constant time.
- Useful for implementing stacks and queues and for representing trees and graphs.
#### Disadvantages of Linked Lists
- Random access is not allowed. We have to access elements sequentially starting from the first node. So we cannot do a binary search with linked lists.
- Extra memory space for a pointer is required for each element of the list.
- Arrays have a better cache locality that can make a pretty big difference in performance.
- Traversing the list and updating pointers takes extra time.
- Working with pointers can be error-prone.
#### Advantages of Arrays
- Arrays store multiple data of similar types with the same name.
- It allows random access to elements.
- As the array is of fixed size and stored in contiguous memory locations, there is no per-element allocation overhead.
- It is helpful to store any type of data with a fixed size.
- Since the elements are stored at contiguous memory locations, it is easy to iterate over them, and an element can be accessed in constant time if its index is known.
#### Disadvantages of Arrays
- The array is static in nature. Once the size of the array is declared then we can’t modify it.
- Insertion and deletion operations are difficult in an array as elements are stored in contiguous memory locations and the shifting operations are costly.
- The number of elements that have to be stored in an array should be known in advance.
- Wastage of memory is the main problem with arrays: if the declared size is larger than what is actually needed, the extra allocated memory is wasted.
### Circular Linked List
Circular linked list is a linked list where all nodes are connected to form a circle. There is no NULL at the end. A circular linked list can be a singly circular linked list or doubly circular linked list.

### Doubly Linked List
A Doubly Linked List (DLL) contains an extra pointer, typically called previous pointer, together with next pointer and data which are there in singly linked list.

### Operations of Doubly Linked List
- Inserting a node
A node can be added in three ways
1. At the front of the DLL (5-step process)

Step 1: allocate the node.
Step 2: put in the data.
Step 3: make the next of the new node point to head, and its previous NULL.
Step 4: change the prev of the head node to the new node.
Step 5: move the head to point to the new node.
2. After a given node (7-step process)

Step 1: check whether the given prev_node is NULL.
Step 2: allocate the new node.
Step 3: put in the data.
Step 4: make the next of the new node point to the next of prev_node.
Step 5: make the next of prev_node point to the new node.
Step 6: make prev_node the previous of the new node.
Step 7: change the previous of the new node's next node.
3. At the end of the DLL (7-step process)

Step 1: allocate the new node.
Step 2: put in the data.
Step 3: this new node will be the last node, so make its next NULL.
Step 4: if the linked list is empty, make the new node the head;
Step 5: else traverse to the last node.
Step 6: change the next of the last node.
Step 7: make the last node the previous of the new node.
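The five steps for inserting at the front can be sketched in C as follows (the `DNode` layout and the function name are illustrative):

```c
#include <stdlib.h>

typedef struct DNode {
    int data;
    struct DNode *prev, *next;
} DNode;

// Insert at the front of a doubly linked list, following the five steps above.
void dllPushFront(DNode **head, int data) {
    DNode *n = malloc(sizeof *n);      // Step 1: allocate the node
    n->data = data;                    // Step 2: put in the data
    n->next = *head;                   // Step 3: next of new node is old head...
    n->prev = NULL;                    //         ...and its previous is NULL
    if (*head) (*head)->prev = n;      // Step 4: prev of old head is new node
    *head = n;                         // Step 5: move head to the new node
}
```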
### Applications of Linked List
- Linear data structures such as stacks and queues, and non-linear data structures such as hash maps and graphs, can be implemented using linked lists.
- Implementation of graphs: the adjacency-list representation of graphs is the most popular, and it uses linked lists to store adjacent vertices.
- In web browsers and editors, doubly linked lists can be used to build forward and backward navigation.
- A circular doubly linked list can also be used for implementing data structures like Fibonacci heaps.
- Applications with "previous" and "next" options, such as the list of songs in a music player, a web browser, or an image viewer.
- In mobile phones we save people's contacts; a newly entered contact is placed in the correct alphabetical order.
- The modifications we make in a document can be stored as nodes in a doubly linked list; this is what makes the undo option (Ctrl+Z) possible.
### Extended
- reverse
- doubly linked list
- circular linked list
ref:
- [Introduction to Linked List – Data Structure and Algorithm Tutorials - GeeksforGeeks](https://www.geeksforgeeks.org/introduction-to-linked-list-data-structure-and-algorithm-tutorial/)
- [Linked List Data Structure - GeeksforGeeks](https://www.geeksforgeeks.org/data-structures/linked-list/?ref=gcse)
- [Linked List vs Array - GeeksforGeeks](https://www.geeksforgeeks.org/linked-list-vs-array/)
- [C Program for Bubble Sort on Linked List - GeeksforGeeks](https://www.geeksforgeeks.org/c-program-bubble-sort-linked-list/)
- [Insertion Sort for Singly Linked List - GeeksforGeeks](https://www.geeksforgeeks.org/insertion-sort-for-singly-linked-list/)
- [Merge Sort for Linked Lists - GeeksforGeeks](https://www.geeksforgeeks.org/merge-sort-for-linked-list/)
- [QuickSort on Singly Linked List - GeeksforGeeks](https://www.geeksforgeeks.org/quicksort-on-singly-linked-list/)
## Tree
### Trie
Trie data structure is defined as a Tree based data structure that is used for storing some collection of strings and performing efficient search operations on them. The word Trie is derived from reTRIEval, which means finding something or obtaining it.
Trie follows the property that if two strings have a common prefix, then they will have the same ancestor in the trie. A trie can be used to sort a collection of strings alphabetically, as well as to check whether a string with a given prefix is present in the trie.

Basic Operations on Trie Data Structure: insertion, search, deletion
#### Insertion


#### Search
- Searching Prefix in Trie Data Structure:

- Searching Complete word in Trie Data Structure:

#### Deletion
- The deleted word is a prefix of other words in Trie.

- The deleted word shares a common prefix with other words in Trie.

- The deleted word does not share any common prefix with other words in Trie.

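A minimal sketch of trie insertion and the two kinds of search, assuming lowercase `'a'..'z'` keys (the node layout and function names are illustrative; deletion is omitted for brevity):

```c
#include <stdlib.h>

#define ALPHA 26   // assumes lowercase 'a'..'z' keys

typedef struct TrieNode {
    struct TrieNode *child[ALPHA];
    int isEnd;                       // marks the end of a complete word
} TrieNode;

TrieNode *trieNew(void) { return calloc(1, sizeof(TrieNode)); }

// Walk the key character by character, creating missing children.
void trieInsert(TrieNode *root, const char *key) {
    for (; *key; key++) {
        int i = *key - 'a';
        if (!root->child[i]) root->child[i] = trieNew();
        root = root->child[i];
    }
    root->isEnd = 1;
}

// Shared prefix walk; returns the node where the key ends, or NULL.
static TrieNode *walk(TrieNode *root, const char *key) {
    for (; root && *key; key++) root = root->child[*key - 'a'];
    return root;
}

// Complete-word search requires the end marker; prefix search does not.
int trieSearchWord(TrieNode *root, const char *key)   { TrieNode *n = walk(root, key); return n && n->isEnd; }
int trieSearchPrefix(TrieNode *root, const char *key) { return walk(root, key) != NULL; }
```

The difference between the two searches is exactly the distinction drawn above: a prefix only needs the walk to succeed, while a complete word also needs `isEnd` set.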
ref:
- [Trie Data Structure: Inserting, Removing, Autocomplete (Illustrated Explanation with Python Code) - The Critical Coder](https://www.thecriticalcoder.com/trie-data-structure-inserting-removing-autocomplete-an-illustrated-explanation-with-python-code/)
- [Trie | (Insert and Search) - GeeksforGeeks](https://www.geeksforgeeks.org/trie-insert-and-search/)
### Binary Tree
A tree whose elements have at most 2 children is called a binary tree. Since each element in a binary tree can have only 2 children, we typically name them the left and right child.

#### Traversals

- Depth First Traversals:
- Inorder (Left, Root, Right) : 4 2 5 1 3
- Preorder (Root, Left, Right) : 1 2 4 5 3
- Postorder (Left, Right, Root) : 4 5 2 3 1
- Breadth-First or Level Order Traversal: 1 2 3 4 5
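A small sketch that builds the five-node tree above and collects the inorder and preorder sequences into an array (the out-parameter style is an illustrative choice):

```c
#include <stdlib.h>

typedef struct TNode {
    int data;
    struct TNode *left, *right;
} TNode;

TNode *newNode(int data) {
    TNode *n = calloc(1, sizeof *n);   // children start as NULL
    n->data = data;
    return n;
}

// Inorder: Left, Root, Right. Appends visited keys to out.
void inorder(const TNode *root, int *out, int *k) {
    if (!root) return;
    inorder(root->left, out, k);
    out[(*k)++] = root->data;
    inorder(root->right, out, k);
}

// Preorder: Root, Left, Right.
void preorder(const TNode *root, int *out, int *k) {
    if (!root) return;
    out[(*k)++] = root->data;
    preorder(root->left, out, k);
    preorder(root->right, out, k);
}
```

Building the tree with root 1, children 2 and 3, and 4, 5 under 2 reproduces the sequences listed above.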
ref:
- [Tree Traversals (Inorder, Preorder and Postorder) - GeeksforGeeks](https://www.geeksforgeeks.org/tree-traversals-inorder-preorder-and-postorder/?ref=gcse)
- [Level Order Tree Traversal - GeeksforGeeks](https://www.geeksforgeeks.org/level-order-tree-traversal/)
- [Applications of tree data structure - GeeksforGeeks](https://www.geeksforgeeks.org/applications-of-tree-data-structure/)
- [Word Break Problem | (Trie solution) - GeeksforGeeks](https://www.geeksforgeeks.org/word-break-problem-trie-solution/)
### Binary Search
Binary Search is a searching algorithm used on a sorted array; it repeatedly divides the search interval in half. The idea is to exploit the fact that the array is sorted to reduce the time complexity to O(log n).

Binary search algorithm can be implemented in the following two ways.
- Iterative method
- Recursive method
| Method | Time complexity | Space complexity |
| ------ | :-------------: | :--------------: |
| Iteration | O(log n) | O(1) |
| Recursion | O(log n) | O(log n) |
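A sketch of the iterative method (the overflow-safe midpoint computation is a common idiom, not specific to any source above):

```c
// Iterative binary search over a sorted array; returns the index of target,
// or -1 if it is absent. O(log n) time, O(1) extra space.
int binarySearch(const int *arr, int n, int target) {
    int lo = 0, hi = n - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;   // avoids overflow of (lo + hi)
        if (arr[mid] == target) return mid;
        if (arr[mid] < target) lo = mid + 1;   // discard the left half
        else hi = mid - 1;                     // discard the right half
    }
    return -1;
}
```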
ref:
- [Binary Search - GeeksforGeeks](https://www.geeksforgeeks.org/binary-search/)
### Binary Search Tree
Binary Search Tree is a node-based binary tree data structure which has the following properties:
- The left subtree of a node contains only nodes with keys lesser than the node’s key.
- The right subtree of a node contains only nodes with keys greater than the node’s key.
- The left and right subtrees must each also be a binary search tree. There must be no duplicate nodes.

#### Deletion
1) Node to be deleted is the leaf: simply remove from the tree.

2) Node to be deleted has only one child: copy the child to the node and delete the child.

3) Node to be deleted has two children: find inorder successor of the node. Copy contents of the inorder successor to the node and delete the inorder successor. Note that inorder predecessor can also be used.

The important thing to note is that the inorder successor is needed only when the right child is not empty. In this particular case, the inorder successor can be obtained by finding the minimum value in the right child of the node.
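The three deletion cases can be sketched as a minimal recursive version using the inorder successor (names and the recursive style are illustrative choices):

```c
#include <stdlib.h>

typedef struct BstNode {
    int key;
    struct BstNode *left, *right;
} BstNode;

BstNode *bstInsert(BstNode *root, int key) {
    if (!root) {
        BstNode *n = calloc(1, sizeof *n);
        n->key = key;
        return n;
    }
    if (key < root->key) root->left = bstInsert(root->left, key);
    else if (key > root->key) root->right = bstInsert(root->right, key);
    return root;   // duplicates are ignored
}

// The inorder successor of a node with a right child is the minimum
// of its right subtree: keep going left.
static BstNode *minNode(BstNode *n) {
    while (n->left) n = n->left;
    return n;
}

BstNode *bstDelete(BstNode *root, int key) {
    if (!root) return NULL;
    if (key < root->key) root->left = bstDelete(root->left, key);
    else if (key > root->key) root->right = bstDelete(root->right, key);
    else {
        if (!root->left) {                 // case 1/2: leaf or only right child
            BstNode *r = root->right;
            free(root);
            return r;
        }
        if (!root->right) {                // case 2: only left child
            BstNode *l = root->left;
            free(root);
            return l;
        }
        BstNode *succ = minNode(root->right);   // case 3: two children
        root->key = succ->key;                  // copy the successor's key
        root->right = bstDelete(root->right, succ->key);
    }
    return root;
}
```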
ref:
- [Binary Search Tree | Set 1 (Search and Insertion) - GeeksforGeeks](https://www.geeksforgeeks.org/binary-search-tree-set-1-search-and-insertion/)
- [Deletion in Binary Search Tree - GeeksforGeeks](https://www.geeksforgeeks.org/binary-search-tree-set-2-delete/?ref=lbp)
- [How to handle duplicates in Binary Search Tree? - GeeksforGeeks](https://www.geeksforgeeks.org/how-to-handle-duplicates-in-binary-search-tree/)
### Binary Indexed Tree (BIT, Fenwick Tree)
Let us consider the following problem to understand Binary Indexed Tree.
We have an array arr[0..n-1], and we would like to
1. compute the sum of the first i elements, and
2. modify the value of a specified element of the array (arr[i] = x, where 0 <= i <= n-1).

A simple solution is to run a loop from 0 to i-1 and calculate the sum of the elements. To update a value, simply do arr[i] = x. The first operation takes O(n) time and the second takes O(1) time. Another simple solution is to create an extra array that stores the sum of the first i elements at index i. The sum of a given range can then be calculated in O(1) time, but the update operation takes O(n) time. This works well when there are a large number of queries but very few updates.
One efficient solution is to use Segment Tree that performs both operations in O(log n) time. An alternative solution is Binary Indexed Tree, which also achieves O(log n) time complexity for both operations. Compared with Segment Tree, Binary Indexed Tree requires less space and is easier to implement.
- `getSum(x)`: returns the sum of the sub-array arr[0..x].
- `update(x, val)`: updates the BIT to reflect arr[x] += val.
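These two operations can be sketched with the classic lowest-set-bit (`i & -i`) trick; the fixed size `N` and the choice to expose 0-based indices to the caller are illustrative.

```c
#define N 16   // number of elements supported (1-indexed internally)

int bit[N + 1];   // Fenwick tree storage; all zeros initially

// Add val at index x, i.e. arr[x] += val (x is 0-based).
void update(int x, int val) {
    for (int i = x + 1; i <= N; i += i & (-i))   // climb to parent ranges
        bit[i] += val;
}

// Prefix sum of arr[0..x] inclusive.
int getSum(int x) {
    int s = 0;
    for (int i = x + 1; i > 0; i -= i & (-i))    // strip the lowest set bit
        s += bit[i];
    return s;
}
```

Both loops touch O(log n) tree nodes, which is where the O(log n) bound for query and update comes from.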

ref:
- [Huahua’s Tech Road](http://zxi.mytechroad.com/blog/sp/fenwick-tree-binary-indexed-tree-sp3/)
- [花花酱 Fenwick Tree / Binary Indexed Tree - 刷题找工作 SP3 - YouTube](https://www.youtube.com/watch?v=WbafSgetDDk)
- [Binary Indexed Tree or Fenwick Tree - GeeksforGeeks](https://www.geeksforgeeks.org/binary-indexed-tree-or-fenwick-tree-2/)
### AVL Tree
AVL tree is a self-balancing Binary Search Tree (BST) where the difference between heights of left and right subtrees cannot be more than one for all nodes.
An example tree that is an AVL tree:

An example tree that is NOT an AVL tree:

#### Insertion
To make sure that the given tree remains AVL after every insertion, we must augment the standard BST insert operation to perform some re-balancing. Following are two basic operations that can be performed to re-balance a BST without violating the BST property (keys(left) < key(root) < keys(right)).

1) Left Rotation

2) Right Rotation

1. Left Left Case (LL)

2. Left Right Case (LR)

3. Right Right Case (RR)

4. Right Left Case (RL)

Insertion Examples:
- LL Case

- RR Case

- LR Case

- RL Case

#### Deletion
1) Perform the normal BST deletion.
2) The current node must be one of the ancestors of the deleted node. Update the height of the current node.
3) Get the balance factor (left subtree height – right subtree height) of the current node.
4) If balance factor is greater than 1, then the current node is unbalanced and we are either in Left Left case or Left Right case. To check whether it is Left Left case or Left Right case, get the balance factor of left subtree. If balance factor of the left subtree is greater than or equal to 0, then it is Left Left case, else Left Right case.
5) If balance factor is less than -1, then the current node is unbalanced and we are either in Right Right case or Right Left case. To check whether it is Right Right case or Right Left case, get the balance factor of right subtree. If the balance factor of the right subtree is smaller than or equal to 0, then it is Right Right case, else Right Left case.
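The two basic rotations used in all four cases above can be sketched as follows, with height bookkeeping included (the node layout and names are illustrative):

```c
#include <stdlib.h>

typedef struct AvlNode {
    int key, height;
    struct AvlNode *left, *right;
} AvlNode;

static int height(const AvlNode *n) { return n ? n->height : 0; }
static int max(int a, int b) { return a > b ? a : b; }

// Right rotation around y (used for the Left Left case):
//       y              x
//      / \            / \
//     x   C   -->    A   y
//    / \                / \
//   A   B              B   C
AvlNode *rightRotate(AvlNode *y) {
    AvlNode *x = y->left;
    AvlNode *B = x->right;
    x->right = y;
    y->left = B;
    // y is now below x, so update y's height first
    y->height = 1 + max(height(y->left), height(y->right));
    x->height = 1 + max(height(x->left), height(x->right));
    return x;   // new subtree root
}

// Left rotation around x (used for the Right Right case); mirror image.
AvlNode *leftRotate(AvlNode *x) {
    AvlNode *y = x->right;
    AvlNode *B = y->left;
    y->left = x;
    x->right = B;
    x->height = 1 + max(height(x->left), height(x->right));
    y->height = 1 + max(height(y->left), height(y->right));
    return y;
}
```

The LR and RL cases are handled by composing these two: rotate the child first, then the unbalanced node.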

### Red Black Tree
#### Rules
1. Every node has a colour either red or black.
2. The root of the tree is always black.
3. There are no two adjacent red nodes (A red node cannot have a red parent or red child).
4. Every path from a node (including the root) to any of its descendant NULL nodes has the same number of black nodes.
5. All leaf nodes are black nodes.
#### Insert

1) Insert the node as in a standard binary search tree and assign it the colour red.
2) If the node is a root node then change its color to black.
3) If it is not a root node then check the colour of the parent node.
3.1. If its colour is black then don’t change the colour.
3.2. If its colour is red then check the colour of the node’s uncle.
3.2.1. If the node's uncle is red, then change the colour of the node's parent and uncle to black and that of the grandparent to red, and repeat the same process for the grandparent.

3.2.2. If the node’s uncle has black colour then there are 4 possible cases:
Left Left Case (LL rotation):

Left Right Case (LR rotation):

Right Right Case (RR rotation):

Right Left Case (RL rotation):


#### Delete
##### Insertion Vs Deletion
In the insert operation, we check the color of the uncle to decide the appropriate case. In the delete operation, we check the color of the sibling to decide the appropriate case.
The main property violated after insertion is two consecutive red nodes. In deletion, the main violated property is the change of black height in subtrees, as deleting a black node may reduce the black height along one root-to-leaf path.
Deletion is a fairly complex process. To understand deletion, the notion of double black is used. When a black node is deleted and replaced by a black child, the child is marked as double black. The main task now becomes to convert this double black to single black.
##### Deletion Steps
1. current node setting
1.1. If the node we deleted has <font color="#f00">2 NIL children</font>, its replacement x is NIL.
1.2. If the node we deleted has <font color="#f00">1 NIL child and 1 non-NIL child</font>, its replacement x is the non-NIL child.
1.3. If the node we deleted has <font color="#f00">2 non-NIL children</font>, set x to the replacement's <font color=#0000FF>right child</font> before the replacement is spliced out.

2. BST deletion
2.1. replace current node (with current node's color) with replacement
2.2. replace replacement (with replacement's color) with deleted node
3. Fixup
If replacement is black, fixup current node.
Decision order: current node :arrow_right: sibling :arrow_right: far child :arrow_right: near child

3.1. <font color=#0000FF>current node is red</font>: color it black. <font color=#FF00FF>(case0)</font>
3.2. <font color=#0000FF>current node is black</font> and <font color=#00FF00>sibling is red</font> <font color=#FF00FF>(case1)</font>
a. color sibling black.
b. color current node's parent red.
c. rotate current node's parent toward current node.
d. update sibling.
e. decide on <font color=#763E14>case 2, 3, 4</font>.
3.3. <font color=#0000FF>current node is black</font> and <font color=#00FF00>sibling is black</font>
3.3.1. <font color=#FF5733>sibling's far child is red</font> <font color=#FF00FF>(case4)</font>
a. color sibling the same color as current's parent.
b. color current node's parent black.
c. color sibling's far child black.
d. rotate current node's parent toward current node.
3.3.2. <font color=#FF5733>sibling's far child is black</font>
3.3.2.1. <font color=#FFBC2F>sibling's near child is black</font> <font color=#FF00FF>(case2)</font>
a. color sibling red.
b. update current node with its parent
b.1. if <font color=#2F9DFF>new current is red</font>, color it black.
b.2. if <font color=#2F9DFF>new current is black and not root</font>, decide on <font color=#763E14>case 1, 2, 3, 4</font>.
b.3. if <font color=#2F9DFF>new current is black and root</font>, it's done.
3.3.2.2. <font color=#FFBC2F>sibling's near child is red</font> <font color=#FF00FF>(case3)</font>
a. color sibling's near child black.
b. color sibling red.
c. rotate sibling away from current node.
d. update sibling.
e. decide on <font color=#763E14>case 4</font>.


ref:
- [Insertion in an AVL Tree - GeeksforGeeks](https://www.geeksforgeeks.org/avl-tree-set-1-insertion/?ref=gcse)
- [資料結構與演算法:AVL Tree](https://josephjsf2.github.io/data/structure/and/algorithm/2019/06/22/avl-tree.html)
- [Deletion in an AVL Tree - GeeksforGeeks](https://www.geeksforgeeks.org/avl-tree-set-2-deletion/?ref=lbp)
- [Introduction to Red-Black Tree - GeeksforGeeks](https://www.geeksforgeeks.org/red-black-tree-set-1-introduction-2/?ref=lbp)
- [Insertion in Red-Black Tree - GeeksforGeeks](https://www.geeksforgeeks.org/red-black-tree-set-2-insert/?ref=lbp)
- [Deletion in Red-Black Tree - GeeksforGeeks](https://www.geeksforgeeks.org/red-black-tree-set-3-delete-2/)
- [https://codertw.com/%E7%A8%8B%E5%BC%8F%E8%AA%9E%E8%A8%80/743044/](https://codertw.com/%E7%A8%8B%E5%BC%8F%E8%AA%9E%E8%A8%80/743044/)
- [Deletion in Red-Black (RB) Tree - Medium](https://medium.com/analytics-vidhya/deletion-in-red-black-rb-tree-92301e1474ea)
- [【紅黑樹十講・參】紅黑樹・新增・四大規則介紹・完整圖解步驟 入門|介紹|教學|LeetCode|資料結構|紅黑樹|演算法|資料結構 - YouTube](https://www.youtube.com/watch?v=9ex88RpT5iQ&t=14s)
- [Red Black Tree: Insert(新增資料)與Fixup(修正)](http://alrightchiu.github.io/SecondRound/red-black-tree-insertxin-zeng-zi-liao-yu-fixupxiu-zheng.html)
- [Red Black Tree: Delete(刪除資料)與Fixup(修正)](http://alrightchiu.github.io/SecondRound/red-black-tree-deleteshan-chu-zi-liao-yu-fixupxiu-zheng.html)
- [Red-Black Trees | Deletion](https://www.codesdope.com/course/data-structures-red-black-trees-deletion/)
- [Binary Tree: Traversal(尋訪)](http://alrightchiu.github.io/SecondRound/binary-tree-traversalxun-fang.html#successor)
### Heap
A Binary Heap is a Binary Tree with the following properties.
1) It is a complete tree (all levels are completely filled except possibly the last, and the last level has all keys as far left as possible). This property makes binary heaps suitable for storage in an array.
2) A Binary Heap is either a Min Heap or a Max Heap. In a Min Binary Heap, the key at the root must be the minimum among all keys present in the heap, and the same property must hold recursively for all nodes. A Max Binary Heap is the mirror image, with the maximum at the root.

A binary heap is typically represented as an array.
The root element will be at Arr[0].
The table below shows the indexes of related nodes for the ith node, i.e., Arr[i] (0-based):
| Index | Node |
| ----------------- |:--------------------- |
| Arr[(i-1)\/2] | the parent node |
| Arr[(2\*i)+1] | the left child node |
| Arr[(2\*i)+2] | the right child node |

#### Applications
| Source | Case |
| ------------- |:--------------------- |
| Cc Theory | Heap Sort |
ref:
- [Heap Data Structure - GeeksforGeeks](https://www.geeksforgeeks.org/heap-data-structure/)
- [Binary Heap - GeeksforGeeks](https://www.geeksforgeeks.org/binary-heap/)
- [Heap Data Structure - Programiz](https://www.programiz.com/dsa/heap-data-structure)
- [C Online Compiler](https://ide.geeksforgeeks.org/rFO7Lm)
- [Heap Data Structures](https://www.codesdope.com/course/data-structures-heap/)
### Splay Tree
The main idea of a splay tree is to bring the most recently accessed item to the root of the tree, so that it can be accessed again in O(1) time.
The search operation in a splay tree does the standard BST search and, in addition, splays (moves a node to the root). If the search is successful, the node found is splayed and becomes the new root; otherwise, the last node accessed before reaching NULL is splayed and becomes the new root.
#### Search
1. <font color="#f00">Node is root</font>: simply return the root; nothing else needs to be done, as the accessed node is already the root.
2. <font color="#f00">Zig: Node is child of root</font> (the node has no grandparent)

3. <font color="#f00">Node has both parent and grandparent</font>
3.1. <font color=#0000FF>Zig-Zig and Zag-Zag</font>

3.2. <font color=#0000FF>Zig-Zag and Zag-Zig</font>

example:

#### Insert
#### Delete
1. If the root is NULL: we simply return the root.
2. Else splay the given key k. If k is present, it becomes the new root. If not, the last accessed leaf node becomes the new root.
3. If the new root's key is not the same as k, return the root, as k is not present.
4. Else the key k is present:
- Split the tree into two trees, Tree1 = root's left subtree and Tree2 = root's right subtree, and delete the root node.
- Let the roots of Tree1 and Tree2 be Root1 and Root2 respectively.
- If Root1 is NULL: return Root2.
- Else, splay the maximum node (the node having the maximum value) of Tree1.
- After this splay, Root1 has no right child; make Root2 the right child of Root1 and return Root1.
ref:
- [Searching in Splay Tree - GeeksforGeeks](https://www.geeksforgeeks.org/splay-tree-set-1-insert/?ref=gcse)
- [Insertion in Splay Tree - GeeksforGeeks](https://www.geeksforgeeks.org/splay-tree-set-2-insert-delete/?ref=lbp)
- [Deletion in Splay Tree - GeeksforGeeks](https://www.geeksforgeeks.org/splay-tree-set-3-delete/?ref=gcse)
- [5.19 Splay Tree Introduction | Data structure & Algorithm - YouTube](https://www.youtube.com/watch?v=qMmqOHr75b8&list=PLdo5W4Nhv31bbKJzrsKfMpo_grxuLl8LU&index=67)
- [5.20 Splay Tree Insertion | Data structure - YouTube](https://www.youtube.com/watch?v=1HeIZNP3w4A&list=PLdo5W4Nhv31bbKJzrsKfMpo_grxuLl8LU&index=68)
- [5.21 Splay Trees Deletion | Bottom-up Splaying | Data Structure & Algorithm - YouTube](https://www.youtube.com/watch?v=ewRSYHStdSA&list=PLdo5W4Nhv31bbKJzrsKfMpo_grxuLl8LU&index=70)
- [5.22 Splay Tree Deletion | Top Down Splaying | Data Structure & Algorithm - YouTube](https://www.youtube.com/watch?v=MumJoiP84J0&list=PLdo5W4Nhv31bbKJzrsKfMpo_grxuLl8LU&index=71)
- [Splay Trees](https://www.codesdope.com/course/data-structures-splay-trees/)
### B Tree
A B-tree of order m satisfies the following properties:
1. All leaves are at the same level.
2. The root has at least two children (unless it is a leaf).
3. Each node except the root can have a maximum of m children and a minimum of ⌈m/2⌉ children.
4. Each node except the root can contain a maximum of m - 1 keys and a minimum of ⌈m/2⌉ - 1 keys.
ref:
- [Introduction of B-Tree - GeeksforGeeks](https://www.geeksforgeeks.org/introduction-of-b-tree-2/)
- [B-tree - Programiz](https://www.programiz.com/dsa/b-tree)
- [Deletion from a B-tree - Programiz](https://www.programiz.com/dsa/deletion-from-a-b-tree)
- [B-Tree & B+Tree的差異](https://flyinsky76.pixnet.net/blog/post/23543462)
### B+ Tree
A B+ tree of order m satisfies the following properties:
1. All leaves are at the same level.
2. The root has at least two children (unless it is a leaf).
3. Each node except the root can have a maximum of m children and a minimum of ⌈m/2⌉ children.
4. Each node except the root can contain a maximum of m - 1 keys and a minimum of ⌈m/2⌉ - 1 keys.
5. Unlike a B-tree, all records are stored in the leaves and the leaves are linked together for sequential access; internal nodes hold only keys for routing.
ref:
- [B+ Tree - Programiz](https://www.programiz.com/dsa/b-plus-tree)
- [Deletion from a B+ Tree - Programiz](https://www.programiz.com/dsa/deletion-from-a-b-plus-tree)
## Graph
### Introduction
A Graph is a non-linear data structure consisting of vertices and edges. The vertices are sometimes also referred to as nodes, and the edges are lines or arcs that connect any two nodes in the graph. More formally, a Graph is composed of a set of vertices( V ) and a set of edges( E ).
#### Components of a Graph
- Vertices: Vertices are the fundamental units of the graph; they are also known as nodes. Every vertex can be labeled or unlabelled.
- Edges: Edges connect two nodes of the graph; in a directed graph an edge is an ordered pair of nodes. Edges can connect any two nodes in any possible way. Edges are also known as arcs, and every edge can be labeled or unlabelled.

#### Types Of Graph
1. Null Graph: a graph is known as a null graph if there are no edges in the graph.
2. Trivial Graph: graph having only a single vertex, it is also the smallest graph possible.

3. Undirected Graph: a graph in which edges do not have any direction. That is the nodes are unordered pairs in the definition of every edge.
4. Directed Graph: a graph in which edge has direction. That is the nodes are ordered pairs in the definition of every edge.

5. Connected Graph: the graph in which from one node we can visit any other node in the graph is known as a connected graph.
6. Disconnected Graph: the graph in which at least one node is not reachable from a node is known as a disconnected graph.

7. Regular Graph: a graph in which every vertex has the same degree. If the degree of each vertex is K, the graph is called a K-regular graph.
8. Complete Graph: the graph in which from each node there is an edge to each other node.

9. Cycle Graph: a graph that is itself a single cycle; the degree of each vertex is 2.
10. Cyclic Graph: a graph containing at least one cycle is known as a Cyclic graph.

11. Directed Acyclic Graph: a Directed Graph that does not contain any cycle.
12. Bipartite Graph: a graph whose vertices can be divided into two sets such that no edge connects two vertices in the same set.

13. Weighted Graph
- A graph in which the edges are already specified with suitable weight is known as a weighted graph.
- Weighted graphs can be further classified as directed weighted graphs and undirected weighted graphs.
14. Tree: trees are the restricted types of graphs, just with some more rules. Every tree will always be a graph but not all graphs will be trees. Linked List, Trees, and Heaps all are special cases of graphs.

### Representation
1. Adjacency Matrix: in this method, the graph is stored in the form of the 2D matrix where rows and columns denote vertices. Each entry in the matrix represents the weight of the edge between those vertices.

2. Adjacency List: the graph is represented as a collection of linked lists. There is an array of pointers, one per vertex, each pointing to a list of the vertices adjacent to it.

### Breadth First Search (BFS)
A boolean visited array is used to mark the visited vertices. For simplicity, it is assumed that all vertices are reachable from the starting vertex. BFS uses a queue data structure for traversal.


When we come to vertex 0, we look for all adjacent vertices of it.
- 2 is also an adjacent vertex of 0.
- If we don’t mark visited vertices, then 2 will be processed again and it will become a non-terminating process.
There can be multiple BFS traversals for a graph. Different BFS traversals for the above graph :
2, 3, 0, 1
2, 0, 3, 1
ref:
- [Breadth first search - Programiz](https://www.programiz.com/dsa/graph-bfs)
- [C語言系列 : Depth-First Search and Breadth-First Search](https://dotblogs.com.tw/ilikedogs/2020/03/23/151856)
- [【Day33】[演算法]-深度優先搜尋DFS與廣度優先搜尋BFS](https://ithelp.ithome.com.tw/articles/10281404)
### Depth First Search (DFS)

ref:
- [Depth First Search (DFS) - Programiz](https://www.programiz.com/dsa/graph-dfs)
### Topological Sorting
Topological sorting for a Directed Acyclic Graph (DAG) is a linear ordering of vertices such that for every directed edge (u, v), vertex u comes before v in the ordering.
Topological Sorting for a graph is not possible if the graph is not a DAG.
For example, a topological sorting of the following graph is “5 4 2 3 1 0”. There can be more than one topological sorting for a graph. Another topological sorting of the following graph is “4 5 2 3 1 0”. The first vertex in topological sorting is always a vertex with an in-degree of 0 (a vertex with no incoming edges).

#### Topological Sorting vs Depth First Traversal (DFS)
In DFS, we print a vertex and then recursively call DFS for its adjacent vertices. In topological sorting, we need to print a vertex before its adjacent vertices.
For example, in the given graph, the vertex ‘5’ should be printed before vertex ‘0’, but unlike DFS, the vertex ‘4’ should also be printed before vertex ‘0’. So Topological sorting is different from DFS. For example, a DFS of the shown graph is “5 2 3 1 0 4”, but it is not a topological sorting.
ref:
- [Topological Sorting - GeeksforGeeks](https://www.geeksforgeeks.org/topological-sorting/)
- [Directed Acyclic Graph - Algorithm Notes](https://web.ntnu.edu.tw/~algo/DirectedAcyclicGraph.html)
### Minimum Spanning Tree (MST)
#### Prim's Algorithm







ref:
- [Prim's Algorithm - Programiz](https://www.programiz.com/dsa/prim-algorithm)
- [Prim’s Algorithm - Medium](https://mycollegenotebook.medium.com/prims-algorithm-47b7686de488)
#### Kruskal's Algorithm











ref:
- [Kruskal's Algorithm - Programiz](https://www.programiz.com/dsa/kruskal-algorithm)
- [Kruskal’s Algorithm - Medium](https://mycollegenotebook.medium.com/kruskal-algorithm-deb0d64ce271)
## Sort
### Bubble sort
Bubble Sort is the simplest sorting algorithm that works by repeatedly swapping the adjacent elements if they are in the wrong order. This algorithm is not suitable for large data sets as its average and worst-case time complexity is quite high.
An optimized bubble sort, which stops as soon as a pass makes no swaps, reduces the execution time.

### Insertion sort
Insertion sort is a simple sorting algorithm that works similar to the way you sort playing cards in your hands. The array is virtually split into a sorted and an unsorted part. Values from the unsorted part are picked and placed at the correct position in the sorted part.

### Selection sort
The selection sort algorithm sorts an array by repeatedly finding the minimum element (considering ascending order) from the unsorted part and putting it at the beginning.

### Merge sort
The Merge Sort algorithm is a sorting algorithm based on the "Divide and Conquer" paradigm. The array is divided into two halves, each half is sorted recursively, and the sorted halves are then merged back together.

Drawbacks of merge sort:
- Slower than simpler sorting algorithms on small inputs.
- It requires O(n) additional memory for the temporary array. (Using a linked list avoids this extra storage.)
- It goes through the whole process even if the array is already sorted.
### Quick sort
Quick sort is a "Divide and Conquer" algorithm. It picks an element as a pivot and partitions the given array around the picked pivot. There are many different versions of quicksort that pick the pivot in different ways.

### Heap Sort
### Stability of sorting algorithm
A sorting algorithm is considered stable if two or more items with the same value maintain their relative order even after sorting.


### Complexity of sorting
| Sorting | Best | Worst | Time | Space | Stability | In place |
| ------- | :--: | :---: |:---: | :---: | :-------: | :-----: |
| Bubble sort | n | n^2^ | O(n^2^) | O(1) | Stable | Yes |
| Insertion Sort | n | n^2^ | O(n^2^) | O(1) | Stable | Yes |
| Selection Sort | n^2^ | n^2^ | O(n^2^) | O(1) | Unstable | Yes |
| Merge Sort | n log n | n log n | O(n log n) | O(n) | Stable | No |
| Quick Sort | n log n | n^2^ | O(n^2^) | O(log n) | Unstable | Yes |
### Applications of sorts
| Sorting | Applications |
| ------- | :----------- |
| Bubble sort | • complexity does not matter <br/> • short and simple code is preferred |
| Insertion Sort | • the array has a small number of elements <br/> • there are only a few elements left to be sorted |
| Selection Sort | • a small list is to be sorted <br/> • cost of swapping does not matter <br/> • checking of all the elements is compulsory <br/> • cost of writing to a memory matters like in flash memory (number of writes/swaps is O(N) as compared to O(N^2^) of bubble sort) |
| Merge Sort | • inversion count <br/> • external sorting |
| Quick Sort | • the programming language is good for recursion <br/> • time complexity matters <br/> • space complexity matters |
### Extended
- heap sort
- counting sort
- radix sort
- bucket sort
- shell sort
ref:
- [Sorting Algorithms - GeeksforGeeks](https://www.geeksforgeeks.org/sorting-algorithms/)
- [Sorting Algorithm - Programiz](https://www.programiz.com/dsa/sorting-algorithm)
- [Merge sort - 1](https://medium.com/appworks-school/%E5%88%9D%E5%AD%B8%E8%80%85%E5%AD%B8%E6%BC%94%E7%AE%97%E6%B3%95-%E6%8E%92%E5%BA%8F%E6%B3%95%E9%80%B2%E9%9A%8E-%E5%90%88%E4%BD%B5%E6%8E%92%E5%BA%8F%E6%B3%95-6252651c6f7e)
- [Merge sort - 2](https://kopu.chat/%E5%90%88%E4%BD%B5%E6%8E%92%E5%BA%8F-merge-sort/)
- [Merge sort - 3](https://medium.com/appworks-school/%E5%88%9D%E5%AD%B8%E8%80%85%E5%AD%B8%E6%BC%94%E7%AE%97%E6%B3%95-%E6%8E%92%E5%BA%8F%E6%B3%95%E9%80%B2%E9%9A%8E-%E5%90%88%E4%BD%B5%E6%8E%92%E5%BA%8F%E6%B3%95-6252651c6f7e)
## Bitwise
| Bitwise Operators | Meaning of operators |
| ----------------- |:--------------------- |
| & | Bitwise AND |
| \| | Bitwise OR |
| ^ | Bitwise XOR |
| ~ | Bitwise complement |
| << | Shift left |
| >> | Shift right |
### Bitwise AND operator &
The output of bitwise AND is 1 only if the corresponding bits of both operands are 1. If either bit of an operand is 0, the corresponding result bit is 0.
```
0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1
```
```
12 = 00001100 (In Binary)
25 = 00011001 (In Binary)
Bit Operation of 12 and 25
00001100
& 00011001
________
00001000 = 8 (In decimal)
```
### Bitwise OR operator \|
The output of bitwise OR is 1 if at least one corresponding bit of the two operands is 1. In C programming, the bitwise OR operator is denoted by |.
```
0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1
```
```
12 = 00001100 (In Binary)
25 = 00011001 (In Binary)
Bitwise OR Operation of 12 and 25
00001100
| 00011001
________
00011101 = 29 (In decimal)
```
Twist in bitwise complement operator in C Programming
The bitwise complement of 35 (~35) is -36 instead of 220, but why?
For any integer n, bitwise complement of n will be -(n+1). To understand this, you should have the knowledge of 2's complement.
#### 2's Complement
Two's complement is an operation on binary numbers. The 2's complement of a number is equal to the complement of that number plus 1.
```
Decimal Binary 2's complement
0 00000000 -(11111111+1) = -00000000 = -0(decimal)
1 00000001 -(11111110+1) = -11111111 = -255(decimal)
12 00001100 -(11110011+1) = -11110100 = -244(decimal)
220 11011100 -(00100011+1) = -00100100 = -36(decimal)
```
The bitwise complement of 35 is 220 (in decimal). The 2's complement of 220 is -36. Hence, the output is -36 instead of 220.
### Bitwise XOR operator ^
The result of bitwise XOR operator is 1 if the corresponding bits of two operands are opposite. It is denoted by ^.
```
0 ^ 0 = 0
0 ^ 1 = 1
1 ^ 0 = 1
1 ^ 1 = 0
```
```
12 = 00001100 (In Binary)
25 = 00011001 (In Binary)
Bitwise XOR Operation of 12 and 25
00001100
^ 00011001
________
00010101 = 21 (In decimal)
```
```
1) n ^ 0 = n
2) n ^ n = 0
3) n1 ^ n2 ^ n3 = n3 ^ n1 ^ n2
```
### Bitwise complement operator ~
The bitwise complement operator is a unary operator (it works on only one operand). It changes each 1 to 0 and each 0 to 1. It is denoted by ~.
```
~ 0 = 1
~ 1 = 0
```
```
35 = 00100011 (In Binary)
Bitwise complement Operation of 35
~ 00100011
________
11011100 = 220 (In decimal)
```
### Shift operators << and >>
#### Right Shift Operator
The right shift operator shifts all bits to the right by the specified number of positions. The vacated bit positions are filled with 0 (for unsigned or non-negative values). It is denoted by >>.
```
212 = 11010100 (In binary)
212 >> 2 = 00110101 (In binary) [Right shift by two bits]
212 >> 7 = 00000001 (In binary)
212 >> 8 = 00000000
212 >> 0 = 11010100 (No Shift)
```
#### Left Shift Operator
The left shift operator shifts all bits to the left by the specified number of positions. The vacated bit positions are filled with 0. The symbol of the left shift operator is <<.
```
212 = 11010100 (In binary)
212<<1 = 110101000 (In binary) [Left shift by one bit]
212<<0 = 11010100 (Shift by 0)
212<<4 = 110101000000 (In binary) =3392(In decimal)
```
### Common cases
- Implement Bitwise by Mask
```c
a = a | 7;    // set the last 3 bits to 1, leave the rest unchanged
a = a & ~7;   // clear the last 3 bits to 0, leave the rest unchanged
a = a ^ 7;    // toggle the last 3 bits, leave the rest unchanged
```
- bit counting
```c
a & (a - 1)   // clears the lowest set bit
```
### Bitwise Tricks
- Number of 1 Bits
- Bit Reversal
- Least Significant 1 Bit
- Power of 2 Test
- XOR Swap
- Missing Number Problem
- Nim
- Josephus Problem
- Letter Case Conversion
- 8 Queens Problem
### Applications
| Source | Case |
| -------------- |:-------------- |
| LeetCode 0136 | Single Number |
ref:
- [Bit](http://web.ntnu.edu.tw/~algo/Bit.html)
- [Bitwise Operators in C Programming - Programiz](https://www.programiz.com/c-programming/bitwise-operators)
- [CONVERTWORLD](https://www.convertworld.com/zh-hant/numerals/binary.html)
## Cycle finding: Floyd's Algorithm (Tortoise and Hare Algorithm)
The algorithm starts two pointers, slow and fast, from the head of the linked list. We move slow one node at a time and fast two nodes at a time. If there is a loop, they are guaranteed to meet: once both pointers are inside the loop, the fast pointer closes the gap to the slow pointer by one node per step.

### Applications
| Source | Case |
| -------------- |:-------------- |
| LeetCode 0202 | Happy Number |
ref:
- [How does Floyd's slow and fast pointers approach work? - GeeksforGeeks](https://www.geeksforgeeks.org/how-does-floyds-slow-and-fast-pointers-approach-work/?ref=gcse)
## Stable matching: Gale-Shapley algorithm
The Stable Marriage Problem states that given N men and N women, where each person has ranked all members of the opposite sex in order of preference, marry the men and women together such that there are no two people of opposite sex who would both rather have each other than their current partners. If there are no such people, all the marriages are “stable” (Source Wiki).
### Consider the following example.
Let there be two men <font color="#FFBD33">m1</font> and <font color="#FFBD33">m2</font> and two women <font color="#33FF57">w1</font> and <font color="#33FF57">w2</font>.
Let <font color="#FFBD33">m1</font>‘s list of preferences be {<font color="#33FF57">w1</font>, <font color="#33FF57">w2</font>}
Let <font color="#FFBD33">m2</font>‘s list of preferences be {<font color="#33FF57">w1</font>, <font color="#33FF57">w2</font>}
Let <font color="#33FF57">w1</font>‘s list of preferences be {<font color="#FFBD33">m1</font>, <font color="#FFBD33">m2</font>}
Let <font color="#33FF57">w2</font>‘s list of preferences be {<font color="#FFBD33">m1</font>, <font color="#FFBD33">m2</font>}
The matching { {<font color="#FFBD33">m1</font>, <font color="#33FF57">w2</font>}, {<font color="#33FF57">w1</font>, <font color="#FFBD33">m2</font>} } is not stable because m1 and w1 would prefer each other over their assigned partners. The matching {<font color="#FFBD33">m1</font>, <font color="#33FF57">w1</font>} and {<font color="#FFBD33">m2</font>, <font color="#33FF57">w2</font>} is stable because there are no two people of opposite sex that would prefer each other over their assigned partners.
It is always possible to form stable marriages from lists of preferences (See references for proof). Following is Gale–Shapley algorithm to find a stable matching:
The idea is to iterate through all free men while any free man is available. Every free man proposes to the women in his preference list, in order. For every woman he goes to, he checks whether she is free; if yes, they become engaged. If the woman is not free, she either says no to him or dumps her current engagement, according to her preference list. So an engagement, once made, can be broken if the woman gets a better option. The time complexity of the Gale-Shapley algorithm is O(n^2^).
```
Initialize all men and women to free
while there exists a free man m who still has a woman w to propose to {
    w = m's highest-ranked woman to whom he has not yet proposed
    if w is free
        (m, w) become engaged
    else some pair (m', w) already exists
        if w prefers m to m'
            (m, w) become engaged
            m' becomes free
        else
            (m', w) remain engaged
}
```
ref:
- [Stable Marriage Problem - GeeksforGeeks](https://www.geeksforgeeks.org/stable-marriage-problem/)
## Pattern Searching: Knuth–Morris–Pratt (KMP) algorithm
The KMP matching algorithm uses the degenerating property of the pattern (the same sub-patterns appearing more than once within it) and improves the worst-case complexity to O(n). The basic idea behind KMP is: whenever we detect a mismatch (after some matches), we already know some of the characters in the text of the next window, so we use that information to avoid re-matching characters that we know will match anyway.

ref:
- [KMP Algorithm for Pattern Searching - GeeksforGeeks](https://www.geeksforgeeks.org/kmp-algorithm-for-pattern-searching/?ref=gcse)
- [KMP字符串匹配算法1 - GeeksforGeeks](https://www.youtube.com/watch?v=dgPabAsTFa8&t=960s)
- [初學者學 KMP 演算法](https://yeefun.github.io/kmp-algorithm-for-beginners/)
- [What is the Knuth-Morris-Pratt algorithm?](https://www.educative.io/edpresso/what-is-the-knuth-morris-pratt-algorithm)