# CS3001301 演算法 Algorithms
Textbook: Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. *Introduction to Algorithms*. Cambridge, MA: The MIT Press.
Instructor: 鮑興國 Hsing-Kuo Pao
###### tags: `TaiwanTech`, `CS`
# Ⅰ Introduction
## 1 The Role of Algorithms in Computing
### 1.1 Algorithms
* any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output
* a tool for solving a well-specified computational problem
* **instance:** the input needed to compute a solution to the problem
* **correct:** for every input instance, the algorithm halts with the correct output; a correct algorithm **solves** the given problem
* incorrect algorithms
    * $f(x)\uparrow$ for some $x\in X$: the algorithm does not halt at all on some input instance (cf. the halting problem)
* $f(x)=y$ , but $y$ is not what we want
#### What kinds of problems are solved by algorithms
* practical applications
* The Human Genome Project
* The Internet applications
* Electronic commerce with public-key cryptography and digital signatures
* Manufacturing and other commercial settings
* how to solve
* align two strings *(DNA)*
* topological ordering for DAG, Directed Acyclic Graph *(Dressing in the morning)*
* lots of algorithms for Google Search Engine
#### What kinds of problems ++cannot++ be solved by algorithms
* The problem proved to be unsolvable
* halting problem
```
def g():
    if halts(g):        # suppose halts(g) decides whether g() halts
        loop_forever()  # halts(g) true -> g loops forever; false -> g halts: contradiction
```
* NP Hard
    * problems that can be solved algorithmically, but only inefficiently
## 3 Growth of Functions
### 3.1 Asymptotic notation
* $\Theta(g(n))=\{f(n)\mid\exists\textrm{ positive }c_1,c_2,n_0\textrm{ s.t. }0\leq c_1g(n)\leq f(n)\leq c_2g(n)\ \forall n\geq n_0\}$
* $O(g(n))=\{f(n)\mid\exists\textrm{ positive }c,n_0\textrm{ s.t. }0\leq f(n)\leq cg(n)\ \forall n\geq n_0\}$
* $\Omega(g(n))=\{f(n)\mid\exists\textrm{ positive }c,n_0\textrm{ s.t. }0\leq cg(n)\leq f(n)\ \forall n\geq n_0\}$
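* worked instance of the $\Theta$ definition: $\frac12n^2-3n\in\Theta(n^2)$; dividing $c_1n^2\leq\frac12n^2-3n\leq c_2n^2$ by $n^2$ gives $c_1\leq\frac12-\frac3n\leq c_2$, so one valid (not unique) choice is $c_1=\frac1{14}$, $c_2=\frac12$, $n_0=7$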
### 3.2 Standard notations and common functions
## 4 Divide-and-Conquer
# Ⅱ Data Structures
| | Search | Insertion | Deletion |
| ----------- | ------------------------ | --------- | -------- |
| Linked-list | $n$ | $1$ | $1$ (given a pointer) |
| Tree | $\log{n}$ | $\log{n}$ | $\log{n}$ (balanced) |
| Hashing | $1$~$T[h(x.key)].length$ | ~$1$ | ~$1$ |
## 11 Hash Tables
- Universe set $U$: the set of all possible key values
- Hash function $h:U\to\{0,1,...,m-1\}$
- $h(k)$ is the hash value of key $k$
- Hash table: $T[0,...,m-1]$
- Collision: two keys hashed to the same slot
- Trade-off: smaller hash table may introduce more collisions
### 11.1 Direct-address tables
- is an array
- slot: position of array
- the universe $U$ of keys is reasonably small
- assume that no two elements have the same key
- static data
### 11.2 Hash tables
- if the universe $U$ is large, storing a table $T$ of size $|U|$ may be impractical
- the set $K$ of keys actually stored may be small relative to $U$, so most of the space allocated for $T$ would be wasted
#### Collision resolution by chaining
- chained hash: an array of linked lists (see the Python sketch after this list)
```
CHAINED-HASH-INSERT(T, x)
insert x at the head of list T[h(x.key)]
```
- *head*: inserting at the head (stack-style) means no tail pointer needs to be maintained
- complexity
- INSERT: $O(1)$ (worst case), assume the element $x$ being inserted is not already present in the table
    - SEARCH: proportional to the length of the list (worst case)
    - DELETE: $O(1)$ if the lists are doubly linked and a pointer to the element is given; proportional to the length of the list if singly linked
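A minimal Python sketch of chaining, assuming the division-method hash $h(k)=k\bmod m$ (the class and method names here are illustrative, not from the text):

```python
class ChainedHashTable:
    """Hash table with collision resolution by chaining."""

    def __init__(self, m):
        self.m = m
        self.T = [[] for _ in range(m)]   # one chain (Python list) per slot

    def _h(self, key):
        return key % self.m               # division method (an assumption here)

    def insert(self, key, value):         # O(1): insert at the head
        self.T[self._h(key)].insert(0, (key, value))

    def search(self, key):                # O(length of the chain)
        for k, v in self.T[self._h(key)]:
            if k == key:
                return v
        return None

    def delete(self, key):                # O(chain length): must find the node first
        chain = self.T[self._h(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                del chain[i]
                return
```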
#### Analysis of hashing with chaining
- hash table $T$ with $m$ slots that stores $n$ elements
- load factor: $\alpha=\dfrac nm$
- the average number of elements stored in a chain
- assume simple uniform hashing
- any given element is equally likely to hash into any of the $m$ slots
- **Theorem 11.1** In a hash table in which collisions are resolved by chaining, an unsuccessful search takes average-case time $\Theta(1+\alpha)$, under the assumption of simple uniform hashing
- average length of the list: $\alpha=\frac nm$
- expected number of elements examined in an unsuccessful search: $\alpha$
    - total time required (including the time to compute $h(k)$): $\Theta(1+\alpha)$
- **Theorem 11.2** In a hash table in which collisions are resolved by chaining, a successful search takes average-case time $\Theta(1+\alpha)$, under the assumption of simple uniform hashing
    - define $X_{ij}=I\{h(k_i)=h(k_j)\}$ for keys $k_i\neq k_j$; under simple uniform hashing, $E[X_{ij}]=\Pr\{h(k_i)=h(k_j)\}=\dfrac1m$
    - $E[\dfrac1n\sum_{i=1}^n(1+\sum_{j=i+1}^nX_{ij})]$
- $=\dfrac1n\sum_{i=1}^n(1+\sum_{j=i+1}^nE[X_{ij}])$
- $=\dfrac1n\sum_{i=1}^n(1+\sum_{j=i+1}^n\dfrac1m)$
- $=1+\dfrac1{nm}\sum_{i=1}^n(n-i)$
- $=1+\dfrac1{nm}(\sum_{i=1}^nn-\sum_{i=1}^ni)$
- $=1+\dfrac1{nm}(n^2-\dfrac{n(n+1)}2)$
- $=1+\dfrac\alpha2-\dfrac\alpha{2n}$
- Total time required: $\Theta(2+\dfrac\alpha2-\dfrac\alpha{2n})=\Theta(1+\alpha)$
### 11.3 Hash functions
### 11.4 Open addressing
# Ⅲ Basic and Advanced Algorithms
| | Best | Average | Worst | In Place | Stable | Exchange |
| --------- | ------------- | ---------- | ---------- | -------- | ------ | --------------- |
| Insertion | $n$ | $n^2$ | $n^2$ | Y | Y | $n$~$n^2$ |
| Merge | $n\log{n}$ | $n\log{n}$ | $n\log{n}$ | N | Y | $n\log{n}$ |
| Selection | $n^2$ | $n^2$ | $n^2$ | Y | N | $n$ |
| Bubble | $n^{I}$ | $n^2$ | $n^2$ | Y | Y | $n$~$n^2$ |
| Heapsort | $n\log{n}$ | $n\log{n}$ | $n\log{n}$ | Y | N | $n\log{n}$ |
| Quicksort | $n\log{n}$ | $n\log{n}$ | $n^2$ | Y$^{II}$ | N | $n\log{n}$~$n^2$ |
| Counting | $n+k^{III}$ | $n+k$ | $n+k$ | N | Y | $n$ |
| Radix | $a(n+k)^{IV}$ | $a(n+k)$ | $a(n+k)$ | N$^V$ | Y | $n$ |
$^I$ requires the improved (early-exit) implementation
$^{II}$ aside from the recursion stack, which uses $\log{n}$~$n$ space
$^{III}$ $k$ is the range of input value
$^{IV}$ $a$ is the number of digits
$^V$ if using counting in the implementation
## 2 Getting Started
### 2.1 Insertion sort
* sorting problem
    * **Input:** a sequence of $n$ numbers $\langle a_1, a_2,\dots,a_n\rangle$
    * **Output:** a permutation (reordering) $\langle a'_1, a'_2,\dots,a'_n\rangle$ of the input sequence such that $a'_1\leq a'_2\leq\dots\leq a'_n$
* data $x=\langle k,x'\rangle$
* Keys $k$: the numbers that we wish to sort
* satellite data $x'$
* pseudocode
    * whatever expressive method is most clear and concise
    * not typically concerned with issues of software engineering
* conventions
* Indentation indicates block structure
* looping construct: `while`, `for`, `repeat-until`
* the loop counter retains its value after exiting the loop (in this book)
* `to`: a `for` loop increments its loop counter in each iteration
* `downto`: a `for` loop decrements its loop counter
* `by`: the loop counter change by an amount greater than 1
* conditional construct: `if-else`
* `//`: comment
    * multiple assignment: the value of the right-most expression is assigned to all variables to its left
* variables are local
* `A[i]`: access `i`th element of array `A`
* compound data into objects
    * `NULL`: a pointer that refers to no object
* pass parameters by value
    * boolean operators are short-circuiting
* `error`: an error occurred
* sorted in place
    * rearranged within the array, with at most a constant number of them stored outside the array at any time
* loop invariant
* At the start of each iteration of the **for** loop, the subarray $A[1..j-1]$ consists of the elements originally in $A[1..j-1]$, but in sorted order
* To prove the algorithm is correct
* **Initialization:** It is true prior to the first iteration of the loop
* **Maintenance:** If it is true before an iteration of the loop, it remains true before the next iteration
* **Termination:** When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct
* insertion sort
    * efficient for sorting a small number of elements (a Python version follows this list)
```
INSERTION-SORT(A)
for j = 2 to A.length
key = A[j]
i = j - 1
while i > 0 and A[i] > key
A[i + 1] = A[i]
i = i - 1
A[i + 1] = key
```
* in place
* loop invariant
    * Initialization: the subarray is just $A[1]$, which is trivially sorted
    * Maintenance: after iteration $j$, the sorted subarray is $A[1..j]$
    * Termination: $j=n+1$, so the whole array $A[1..n]$ is sorted
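A direct Python transcription of INSERTION-SORT (0-indexed, so the outer loop starts at index 1):

```python
def insertion_sort(A):
    """Sort the list A in place."""
    for j in range(1, len(A)):          # pseudocode: for j = 2 to A.length
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:    # shift larger elements one slot right
            A[i + 1] = A[i]
            i -= 1
        A[i + 1] = key                  # drop key into the sorted subarray

data = [5, 2, 4, 6, 1, 3]
insertion_sort(data)
assert data == [1, 2, 3, 4, 5, 6]
```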
### 2.2 Analyzing algorithms
* resources: memory, communication bandwidth, logic gates, and, above all, time (complexity)
* assume ***random-access machine (RAM)*** as implementation technology
    * a single processor: instructions are executed one after another, with no concurrency
* RAM model instructions
* arithmetic: add, subtract, multiply, divide, remainder, floor, ceiling
* data movement: load, store, copy
* control: conditional and unconditional branch, subroutine call and return
* RAM model data types
* integer, floating point
    * precision is not a concern here
* Analysis
* ***input size:*** best notion depands on problem
* (sorting, Fourier transforms..) number of items in the input
* (multiplying) total number of bits
* (graph) numbers of vertices and edges
* ***running time $T(n)$ (Complexity):*** number of primitive operations executed
* *machine-independent*
* in `for` or `while`, test is executed one more time than body
* often write as $T(n)=O(X)$, but technically it should be $T(n)\in O(X)$
* usually only focus on the worst-case running time
* give an upper bound
* for some algorithms, the worst case occurs fairly often
* average case is often roughly as bad as worst case
    * sometimes the analysis is easier
* order of growth (rate of growth)
* consider only the leading term of a formula
- analyzing insertion sort
| line | cost | times |
| ---------------------------- | ----- | --------------------- |
| `for j = 2 to A.length` | $c_1$ | $n$ |
| `key = A[j]` | $c_2$ | $n-1$ |
| `i = j - 1` | $c_4$ | $n-1$ |
| `while i > 0 and A[i] > key` | $c_5$ | $\sum_{j=2}^nt_j$ |
| `A[i + 1] = A[i]` | $c_6$ | $\sum_{j=2}^n(t_j-1)$ |
| `i = i - 1` | $c_7$ | $\sum_{j=2}^n(t_j-1)$ |
| `A[i + 1] = key` | $c_8$ | $n-1$ |
* $T(n)=c_1n+c_2(n-1)+c_4(n-1)+c_5\sum_{j=2}^nt_j+c_6\sum_{j=2}^n(t_j-1)+c_7\sum_{j=2}^n(t_j-1)+c_8(n-1)$
* **best case:** linear function
$T(n)=c_1n+c_2(n-1)+c_4(n-1)+c_5(n-1)+c_8(n-1)$
$=(c_1+c_2+c_4+c_5+c_8)n-(c_2+c_4+c_5+c_8)=\Theta(n)$
* **worst case:** quadratic function
$T(n)=c_1n+c_2(n-1)+c_4(n-1)+c_5(\dfrac{n(n+1)}2-1)+c_6(\dfrac{n(n-1)}2)+c_7(\dfrac{n(n-1)}2)+c_8(n-1)$
$=(\dfrac{c_5}2+\dfrac{c_6}2+\dfrac{c_7}2)n^2+(c_1+c_2+c_4+\dfrac{c_5}2-\dfrac{c_6}2-\dfrac{c_7}2+c_8)n-(c_2+c_4+c_5+c_8)=\Theta(n^2)$
### 2.3 Designing algorithms
The incremental approach
* example: insertion sort
#### 2.3.1 The divide-and-conquer approach
* Recursive
* **Divide** the problem into a number of subproblems
* **Conquer** the subproblems by solving them recursively
* **Combine** the solutions to the subproblems into the solution for the original problem
* example: merge sort
* Divide the n-element sequence to be sorted into two subsequences of $n/2$ elements each
* **Sort** the two subsequences recursively using merge sort
```
MERGE-SORT(A, p, r)
if p < r
        q = ⌊(p + r) / 2⌋
MERGE-SORT(A, p, q)
MERGE-SORT(A, q + 1, r)
MERGE(A, p, q, r)
```
    * **Merge** the two sorted subsequences to produce the sorted answer (a runnable Python version appears at the end of this subsection)
```
MERGE(A, p, q, r)
n1 = q - p + 1
n2 = r - q
let L[1...n1 + 1] and R[1...n2 + 1] be new arrays
for i = 1 to n1
L[i] = A[p + i - 1]
for j = 1 to n2
R[j] = A[q + j]
L[n1 + 1] = R[n2 + 1] = infinity
i = j = 1
for k = p to r
if L[i] <= R[j]
A[k] = L[i]
            i = i + 1
else
A[k] = R[j]
            j = j + 1
```
* why not use a bottom-up (iterative) method?
    * harder to get right
    * less flexible
* $T(n)=\Theta(n)$ for MERGE, where $n=r-p+1$
* set sentinel: $\infty$
* loop invariant
    * Initialization: $A[p..k-1]$ is empty; $L[i]$ and $R[j]$ are the smallest elements of their arrays not yet copied back into $A$
    * Maintenance: if $L[i]\leq R[j]$, it is copied into $A[k]$; incrementing $k$ and $i$ (or $j$ in the symmetric case) restores the invariant
    * Termination: $A[p..k-1]$ contains the $k-p=r-p+1$ smallest elements in sorted order; $L$ and $R$ hold $n_1+n_2+2=r-p+3$ elements, all but the two sentinels copied back into $A$
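A runnable Python version, keeping the sentinel trick (`float('inf')`):

```python
def merge(A, p, q, r):
    """Merge the sorted subarrays A[p..q] and A[q+1..r] (inclusive)."""
    L = A[p:q + 1] + [float('inf')]     # sentinels avoid end-of-list checks
    R = A[q + 1:r + 1] + [float('inf')]
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p=0, r=None):
    if r is None:
        r = len(A) - 1
    if p < r:
        q = (p + r) // 2                # divide
        merge_sort(A, p, q)             # conquer
        merge_sort(A, q + 1, r)
        merge(A, p, q, r)               # combine

data = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(data)
assert data == [1, 2, 2, 3, 4, 5, 6, 7]
```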
#### 2.3.2 Analyzing divide-and-conquer algorithms
* **recurrence equation**, or simply **recurrence**
* $T(n)=\begin{cases}\Theta(1)&\text{if }n\leq c\\aT(n/b)+D(n)+C(n)&\text{otherwise}\end{cases}$
#### Analysis of merge sort
* assume the original problem size is a power of 2
* Divide: $\Theta(1)$, computes the middle of the array
* Conquer: $2T(n/2)$, solve two subproblem with size $n/2$
* Combine: $\Theta(n)$
* $T(n)=\begin{cases}\Theta(1)&\text{if }n=1\\2T(n/2)+\Theta(n)&\text{if }n>1\end{cases}$
* $T(n)=\Theta(n\lg n)$
* solve by **recursion tree**
    * *the $i$th level below the top has $2^i$ nodes, each contributing a cost of $c(n/2^i)$, so that level has total cost $2^ic(n/2^i)=cn$*
    * total number of levels: $\lg n+1$, where $n$ is the number of leaves
* $cn(\lg n+1)=cn\lg n+cn=\Theta(n\lg n)$
* Merge sort outperforms insertion sort (in the average and worst cases)
* insensitive to the input
## Other Elementary Sorting Methods
### Selection sort
```
SELECTION-SORT(A)
    for i = 1 to A.length - 1
min = i
for j = i + 1 to A.length
if A[j] < A[min]
min = j
exchange A[i] with A[min]
```
* $T(n)=\Theta(n^2)$
* insensitive to the input
* needs only a linear number of exchanges (see the Python version below)
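The same algorithm in Python:

```python
def selection_sort(A):
    """Repeatedly move the minimum of A[i..] into position i."""
    for i in range(len(A) - 1):
        m = i
        for j in range(i + 1, len(A)):
            if A[j] < A[m]:
                m = j
        A[i], A[m] = A[m], A[i]   # at most n - 1 exchanges in total
```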
### Bubble sort
```
BUBBLE-SORT(A)
for i = A.length downto 1
for j = 2 to i
if A[j - 1] > A[j]
SWAP(A[j - 1], A[j])
```
* the worst case: $T(n)=\Theta(n^2)$
* complementary to selection sort in its exchange behavior
* the average case is about the same as the worst case
* the best case: linear time, but only with the early-exit improvement (footnote $^I$ in the summary table; Python version below)
* performs many more exchanges than selection sort
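A Python version including the early-exit flag that footnote $^I$ alludes to:

```python
def bubble_sort(A):
    """Swap adjacent out-of-order pairs until no swap occurs."""
    for i in range(len(A) - 1, 0, -1):
        swapped = False
        for j in range(1, i + 1):
            if A[j - 1] > A[j]:
                A[j - 1], A[j] = A[j], A[j - 1]
                swapped = True
        if not swapped:           # already sorted: the linear best case
            return
```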
## 6 Heapsort
### 6.1 Heaps
* an array object that can be viewed as a *nearly complete* binary tree
* an explicit pointer-based tree representation would be less efficient
```
PARENT(i)
    return ⌊i / 2⌋
LEFT(i)
return 2i
RIGHT(i)
return 2i + 1
```
* LEFT is a left shift, RIGHT is a left shift plus setting the low bit, and PARENT is a right shift (checked in the Python snippet after this list)
* ***heap property***
* ***Max-heap:*** $A[PARENT(i)]\geq A[i]$
* Min-heap: $A[PARENT(i)]\leq A[i]$
* ***height of node:*** the number of edges on the longest simple downward path from the node to a leaf
* height of tree: the height of the root
* **height of heap:** $O(\lg n)$
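The 1-indexed shift tricks, checked in Python (index 6 is just an arbitrary example):

```python
def parent(i): return i >> 1        # ⌊i / 2⌋
def left(i):   return i << 1        # 2i
def right(i):  return (i << 1) | 1  # 2i + 1

assert (parent(6), left(6), right(6)) == (3, 12, 13)
```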
### Basic procedures on heap
* structure types (rough comparison)
    * array: fast indexing, slow search and insertion
    * tree: fast search (when balanced)
    * hashing: efficient search
    * graph
    * linked list: slow search, easy insertion
* MAX-HEAPIFY $O(\lg n)$
* BUILD-MAX-HEAP $O(n)$
* HEAPSORT $O(n\lg n)$
* MAX-HEAP-INSERT $O(\lg n)$
* HEAP-EXTRACT-MAX $O(\lg n)$
* HEAP-INCREASE-KEY $O(\lg n)$
* HEAP-MAXIMUM $\Theta(1)$
### 6.2 Maintaining the heap property
* HEAPIFY: the key subroutine for manipulating heaps; it assumes the binary trees rooted at LEFT(i) and RIGHT(i) are max-heaps, but A[i] may be smaller than its children
```
MAX-HEAPIFY(A, i)
l = LEFT(i)
r = RIGHT(i)
if l <= A.heap-size and A[l] > A[i]
largest = l
else largest = i
if r <= A.heap-size and A[r] > A[largest]
largest = r
if largest != i
exchange A[i] with A[largest]
        MAX-HEAPIFY(A, largest)
```
* $T(n)\leq T(\frac{2n}3)+\Theta(1)$
    - worst case: the bottom level is exactly half full, so the left subtree is one level taller than the right
    - nodes: left + root + right $=(2^{h+1}-1)+1+(2^h-1)=3\times2^h-1$, where $h$ is the height of the complete right subtree
    - left fraction: $\dfrac{2^{h+1}-1}{3\times2^h-1}\leq\dfrac23$, approaching $\dfrac23$ as $h$ grows
* $T(n)=O(\lg n)=O(h)$
### 6.3 Building a heap
* BUILD-HEAP
```
BUILD-MAX-HEAP(A)
A.heap-size = A.length
    for i = ⌊A.length / 2⌋ downto 1
MAX-HEAPIFY(A, i)
```
* Initialization: each node $\lfloor n/2\rfloor + 1$, $\lfloor n/2\rfloor + 2$,..., $n$ is a leaf and is thus the root of a trivial max-heap
* Termination: each node $1$, $2$,..., $n$ is the root of a max-heap
* $\sum_{h=0}^{\lfloor\lg n\rfloor}\lceil\dfrac n{2^{h+1}}\rceil O(h)=O(n\sum_{h=0}^{\lfloor\lg n\rfloor}\dfrac h{2^h})$
* $\sum_{h=0}^\infty\dfrac h{2^h}=2(\because\sum_{k=0}^\infty kx^k=\dfrac x{(1-x)^2})$
* $O(n\sum_{h=0}^{\lfloor\lg n\rfloor}\dfrac h{2^h})=O(n\sum_{h=0}^\infty\dfrac h{2^h})=O(n)$
* the obvious bound is $O(n\lg n)$ ($\Theta(n)$ calls to MAX-HEAPIFY, each $O(\lg n)$); the sum above tightens it to $O(n)$
### 6.4 The heapsort algorithm
```
HEAPSORT(A)
BUILD-MAX-HEAP(A)
for i = A.length downto 2
exchange A[1] with A[i]
A.heap-size = A.heap-size - 1
MAX-HEAPIFY(A, 1)
```
* complexity: $O(n\lg n)$
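The three procedures in runnable Python (0-indexed, with an explicit `heap_size`):

```python
def max_heapify(A, i, heap_size):
    """Float A[i] down so the subtree rooted at i is a max-heap."""
    l, r = 2 * i + 1, 2 * i + 2        # 0-indexed children
    largest = i
    if l < heap_size and A[l] > A[largest]:
        largest = l
    if r < heap_size and A[r] > A[largest]:
        largest = r
    if largest != i:
        A[i], A[largest] = A[largest], A[i]
        max_heapify(A, largest, heap_size)

def build_max_heap(A):
    for i in range(len(A) // 2 - 1, -1, -1):   # internal nodes, bottom-up
        max_heapify(A, i, len(A))

def heapsort(A):
    build_max_heap(A)
    for i in range(len(A) - 1, 0, -1):
        A[0], A[i] = A[i], A[0]        # move current max to the end
        max_heapify(A, 0, i)           # heap shrinks by one

data = [16, 4, 10, 14, 7, 9, 3, 2, 8, 1]
heapsort(data)
assert data == sorted([16, 4, 10, 14, 7, 9, 3, 2, 8, 1])
```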
### 6.5 Priority queues
* a data structure that maintains a set $S$ of elements, each with an associated value called a ***key***
* each element must carry a *handle* so the application can locate it inside the heap
* operations
* INSERT(S, x) $O(\lg n)$
```
MAX-HEAP-INSERT(A, key)
A.heap-size = A.heap-size + 1
    A[A.heap-size] = -infinity
HEAP-INCREASE-KEY(A, A.heap-size, key)
```
* MAXIMUM(S) $O(1)$
* EXTRACT-MAX(S) $O(\lg n)$
```
HEAP-EXTRACT-MAX(A)
if A.heap-size < 1
error "heap underflow"
max = A[1]
A[1] = A[A.heap-size]
A.heap-size = A.heap-size - 1
MAX-HEAPIFY(A, 1)
return max
```
* remove and return largest key
* INCREASE-KEY(S, x, k) $O(\lg n)$
```
HEAP-INCREASE-KEY(A, i, key)
if key < A[i]
error "new key is smaller than current key"
A[i] = key
while i > 1 and A[PARENT(i)] < A[i]
exchange A[i] with A[PARENT(i)]
i = PARENT(i)
```
* increase element's key to new value, which is assumed to be at least as large as current key
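A self-contained Python sketch of the max-priority-queue operations, using a plain list as the heap (Python's standard `heapq` offers the min-heap analogue with the same asymptotics):

```python
import math

def heap_increase_key(A, i, key):
    """Raise A[i] to key and sift it up toward the root (0-indexed)."""
    if key < A[i]:
        raise ValueError("new key is smaller than current key")
    A[i] = key
    while i > 0 and A[(i - 1) // 2] < A[i]:
        A[i], A[(i - 1) // 2] = A[(i - 1) // 2], A[i]
        i = (i - 1) // 2

def max_heap_insert(A, key):
    A.append(-math.inf)                   # new leaf with key -infinity
    heap_increase_key(A, len(A) - 1, key)

def heap_extract_max(A):
    if not A:
        raise IndexError("heap underflow")
    A[0], A[-1] = A[-1], A[0]             # move the max to the end, pop it
    maximum = A.pop()
    i = 0                                  # sift the new root down
    while True:
        l, r = 2 * i + 1, 2 * i + 2
        largest = i
        if l < len(A) and A[l] > A[largest]:
            largest = l
        if r < len(A) and A[r] > A[largest]:
            largest = r
        if largest == i:
            return maximum
        A[i], A[largest] = A[largest], A[i]
        i = largest
```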
## 7 Quicksort
### 7.1 Description of quicksort
* typically the fastest sorting method in practice, and the most widely used
* due to C. A. R. Hoare, 1960 (the version here is a faster variant)
* divide-and-conquer
* Divide: Partition the array $A[p..r]$ into two subarrays $A[p..q-1]$ and $A[q+1..r]$
* Conquer: sort the two subarrays by recursive call
* Combine: no work is needed
* in place (using a small stack)
* a careful implementation is needed
```
QUICKSORT(A, p, r)
if p < r
q = PARTITION(A, p, r)
QUICKSORT(A, p, q - 1)
QUICKSORT(A, q + 1, r)
```
#### Partitioning the array
```
PARTITION(A, p, r)
x = A[r]
i = p - 1
for j = p to r - 1
if A[j] <= x
i = i + 1
exchange A[i] with A[j]
exchange A[i + 1] with A[r]
return i + 1
```
* `x`: pivot
* loop invariant (holds at the start of each iteration of the `for` loop)
1. if $p\leq k\leq i$, then $A[k]\leq x$
2. if $i+1\leq k\leq j-1$, then $A[k]>x$
3. if $k=r$, then $A[k]=x$
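Lomuto partitioning and quicksort in Python (0-indexed):

```python
def partition(A, p, r):
    """Lomuto partition: A[r] is the pivot; returns its final index."""
    x = A[r]
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1

def quicksort(A, p=0, r=None):
    if r is None:
        r = len(A) - 1
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)

data = [2, 8, 7, 1, 3, 5, 6, 4]
quicksort(data)
assert data == [1, 2, 3, 4, 5, 6, 7, 8]
```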
### 7.2 Performance of quicksort
#### Worst-case partitioning
$T(n)=T(n-1)+\Theta(n)=\sum_{k=1}^n\Theta(k)=\Theta(\sum_{k=1}^nk)=\Theta(n^2)$
* the partition produces one subproblem of size $n-1$ and one of size $0$
#### Best-case partitioning
$T(n)\leq2T(n/2)+\Theta(n)$
$\Rightarrow T(n)=O(n\lg n)$
* the partition produces two subproblems of (nearly) equal size
#### Balanced partitioning
* close to the best case
* example:
* $T(n)\leq T(9n/10)+T(n/10)+\Theta(n)$
* $\Rightarrow T(n)=O(n\lg n)$
#### Intuition for the average case
* suppose good and bad splits alternate
* $T(n)=O(n\lg n)$
    * the recursion tree is at most twice as tall
    * a different constant hidden in the big-O, but the same order
### 7.3 A randomized version of quicksort
* with ***random sampling*** we expect the partition to be close to the balanced case
```
RANDOMIZED-PARTITION(A, p, r)
i = RANDOM(p, r)
exchange A[r] with A[i]
return PARTITION(A, p, r)
```
### Median-of-3 partitioning method
```
MEDIAN3-PARTITION(A, p, r)
if r - p + 1 >= 3
        q = ⌊(p + r) / 2⌋
SORT-3(A[p], A[q], A[r])
exchange A[q] with A[r]
return PARTITION(A, p, r)
```
### Improvement on small subfiles
```
ISS-QUICKSORT(A, p, r, M)
if r - p + 1 >= M
q = MEDIAN3-PARTITION(A, p, r)
ISS-QUICKSORT(A, p, q - 1, M)
ISS-QUICKSORT(A, q + 1, r, M)
else if p < r
INSERTION-SORT(A, p, r)
```
* $5\leq M\leq25$
* 20% reduction in the running time
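A sketch combining the two improvements above, median-of-3 pivot selection plus an insertion-sort cutoff `M` (the default 10 is an arbitrary choice inside the suggested 5~25 range):

```python
def iss_quicksort(A, p=0, r=None, M=10):
    """Quicksort with median-of-3 pivots and a small-subarray cutoff."""
    if r is None:
        r = len(A) - 1
    if r - p + 1 >= M:
        q = (p + r) // 2                        # median-of-3: sort A[p], A[q], A[r]
        if A[q] < A[p]: A[p], A[q] = A[q], A[p]
        if A[r] < A[p]: A[p], A[r] = A[r], A[p]
        if A[r] < A[q]: A[q], A[r] = A[r], A[q]
        A[q], A[r] = A[r], A[q]                 # move the median into the pivot slot
        x, i = A[r], p - 1                      # Lomuto partition, as before
        for j in range(p, r):
            if A[j] <= x:
                i += 1
                A[i], A[j] = A[j], A[i]
        A[i + 1], A[r] = A[r], A[i + 1]
        q = i + 1
        iss_quicksort(A, p, q - 1, M)
        iss_quicksort(A, q + 1, r, M)
    elif p < r:                                 # small subarray: insertion sort
        for j in range(p + 1, r + 1):
            key, i = A[j], j - 1
            while i >= p and A[i] > key:
                A[i + 1] = A[i]
                i -= 1
            A[i + 1] = key
```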
## 8 Sorting in Linear Time
* **comparison sorts:** *the sorted order is based only on comparisons between the input elements*
### 8.1 Lower bounds for sorting
#### The decision tree model
* a full binary tree that represents the comparisons between elements that are performed by a particular sorting algorithm
#### A lower bound for the worst case
* **Theorem 8.1** Any comparison sort algorithm requires $\Omega(n\lg n)$ comparisons in the worst case
* Any decision tree that sorts $n$ elements has height $\Omega(n\lg n)$
        * $n!\leq l\leq2^h$, where $l$ is the number of reachable leaves
        * $h\geq\lg(n!)=\Omega(n\lg n)$, since $\lg(n!)=\Theta(n\lg n)$ by Stirling's approximation
* **Corollary 8.2** Heapsort and merge sort are asymptotically optimal comparison sorts
### 8.2 Counting sort
* each of the input elements is an integer
```
COUNTING-SORT(A, B, k)
let C[0..k] be a new array
for i = 0 to k
C[i] = 0
for j = 1 to A.length
C[A[j]] = C[A[j]] + 1
for i = 1 to k
C[i] = C[i] + C[i - 1]
for j = A.length downto 1
B[C[A[j]]] = A[j]
C[A[j]] = C[A[j]] - 1
```
* assumes all keys are in the range from $0$ to $k$; the result is stored in $B$
* Analysis of counting sort
* $T(n)=\Theta(k+n)=\Theta(n)$ (when $k=\Theta(n)$)
* Not a comparison sort
* stable (numbers with the same value appear in the output array in the same order as they do in the input array)
* useful when the keys can be quantized into a small integer range
* when $k$ is small relative to $n$
* works well with other sorting methods (e.g., as the per-digit sort in radix sort; see the Python sketch below)
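The same procedure in Python (keys are integers in $0..k$; returns a new list $B$):

```python
def counting_sort(A, k):
    """Stable sort of A, whose keys are integers in 0..k."""
    C = [0] * (k + 1)
    for a in A:                 # C[i] = number of elements equal to i
        C[a] += 1
    for i in range(1, k + 1):   # C[i] = number of elements <= i
        C[i] += C[i - 1]
    B = [0] * len(A)
    for a in reversed(A):       # right-to-left keeps the sort stable
        C[a] -= 1
        B[C[a]] = a
    return B

assert counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5) == [0, 0, 2, 2, 3, 3, 3, 5]
```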
### 8.3 Radix sort
* used by the card-sorting machines you can now find only in computer museums
* numbers represented in radix-$b$ notation
* high to low version
```
RADIX-SORTH2L(A, d)
RADIX-SORTH2L(A, d, 1, A.length)
RADIX-SORTH2L(A, d, p, q)
    sort A[p..q] on digit d
if d > 1
si = p
for i = p to q - 1
if A[i, d] != A[i + 1, d]
RADIX-SORTH2L(A, d - 1, si, i)
si = i + 1
RADIX-SORTH2L(A, d - 1, si, q)
```
* low to high version
```
RADIX-SORTL2H(A, d)
for i = 1 to d
use a stable sort (counting sort) to sort array A on digit i
```
* **Lemma 8.3** Given $n$ $d$-digit numbers in which each digit can take on up to $k$ possible values, RADIX-SORT correctly sorts these numbers in $\Theta(d(n+k))$ time
* not in-place
* **Lemma 8.4** Given $n$ $b$-bit numbers and any positive integer $r\leq b$, RADIX-SORT correctly sorts these numbers in $\Theta((b/r)(n+2^r))$ time
* *Proof* Choose $d=\lceil b/r\rceil$
* case 1: if $b<\lfloor\lg n\rfloor$, choose $r=b$, $\Theta((b/r)(n+2^r))=\Theta(n)$
* no radix sort, only counting sort
* case 2: if $b\geq\lfloor\lg n\rfloor$, choose $r=\lfloor\lg n\rfloor$, $\Theta((b/r)(n+2^r))=\Theta(bn/\lg n)$
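The low-to-high (LSD) version in Python, with counting sort as the stable per-digit sort (base 10 is an assumption here; any base works), returning a new list:

```python
def radix_sort_l2h(A, d, base=10):
    """Sort non-negative integers of at most d base-`base` digits."""
    for exp in (base ** i for i in range(d)):   # digit i, least significant first
        C = [0] * base
        for a in A:                             # stable counting sort on this digit
            C[(a // exp) % base] += 1
        for i in range(1, base):
            C[i] += C[i - 1]
        B = [0] * len(A)
        for a in reversed(A):                   # right-to-left preserves stability
            digit = (a // exp) % base
            C[digit] -= 1
            B[C[digit]] = a
        A = B
    return A

assert radix_sort_l2h([329, 457, 657, 839, 436, 720, 355], 3) == \
       [329, 355, 436, 457, 657, 720, 839]
```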
# Ⅳ Advanced Design and Analysis Techniques
## 15 Dynamic Programming
* solve problems by combining the solutions to subproblems
* like the divide-and-conquer
* Programming: a tabular method
* when the subproblems overlap (share subsubproblems)
* apply to ***optimization problems***
* look for minimum or maximum value
* developing DP algorithm
1. Characterize the structure of an optimal solution
2. Recursively define the value of an optimal solution
3. Compute the value of an optimal solution, typically in a bottom-up fashion
4. *(may be omitted if only the value of the optimal solution is needed)* Construct an optimal solution from computed information
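Steps 2 and 3 on a deliberately tiny, hypothetical example (the Fibonacci recurrence, used here only to show bottom-up tabulation; it is not an example from the text):

```python
def fib_bottom_up(n):
    """F(i) = F(i-1) + F(i-2) (step 2), tabulated bottom-up (step 3)."""
    if n < 2:
        return n
    table = [0] * (n + 1)       # table[i] holds the value of subproblem i
    table[1] = 1
    for i in range(2, n + 1):   # each overlapping subproblem is solved once
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

assert fib_bottom_up(10) == 55
```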
### 15.2 Matrix-chain multiplication
# Ⅴ Graph Algorithms