---
###### tags: `Algorithm`
---
Divide and Conquer - Complexity Analysis
===
## Binary Search
### Complexity Analysis
* Binary search has **no every-case** time complexity, so we perform a **worst-case** analysis
### Worst-Case Time Complexity (Binary Search, Recursive)
* In an algorithm that searches an array, the most costly operation is usually the comparison of the search item with an array item. Thus, we have the following:
* **Basic operation:** the comparison of x with S [mid].
* **Input size:** n, the number of items in the array.
* As discussed in Section 1.2, one way the worst case can occur is when x is larger than all array items. If n is a power of 2 and x is larger than all the array items, each recursive call reduces the instance to one exactly half as big. For example, if $n = 16$, then $mid = \lfloor(1 + 16) /2\rfloor = 8$. Because x is larger than all the array items, the top eight items are the input to the first recursive call. Similarly, the top four items are the input to the second recursive call, and so on. We have the following recurrence:
$$
W(n) = \underbrace{W\left(\frac{n}{2}\right)}_{\substack{\text{Comparisons in}\\ \text{recursive call}}} + \underbrace{1}_{\substack{\text{Comparison at}\\ \text{top level}}}
$$
If $n = 1$ and x is larger than the single array item, there is a comparison of x with that item followed by a recursive call with $low > high$. At this point the terminal condition is true, which means that there are no more comparisons. Therefore, $W(1)$ is 1. We have established the recurrence
:::success
$$
\begin{split}
W(n) &= W(\frac{n}{2}) + 1\ \ \ \ for\ n > 1,\ n\ a\ power\ of\ 2\\
W(1) &= 1
\end{split}
$$
:::
This recurrence is solved in Example B.1 in Appendix B. The solution is
$$
W(n) = \lg n + 1.
$$
If n is not restricted to being a power of 2, then
$$
W(n) = \lfloor \lg n \rfloor + 1 \in \Theta(\lg n)
$$
where $\lfloor y\rfloor$ means the greatest integer less than or equal to y. We show how to establish this result in the exercises.
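The recurrence can be checked empirically. Below is a minimal sketch of a recursive binary search that counts comparisons of x with S[mid] (the function name and tuple return are my own, not the book's pseudocode); searching for an item larger than everything in the array exercises the worst case:

```python
# Sketch: recursive binary search that counts comparisons of x with S[mid],
# to check W(n) = floor(lg n) + 1 on worst-case searches.
import math

def binsearch(S, x, low, high, count=0):
    """Return (index or -1, number of comparisons of x with S[mid])."""
    if low > high:                      # terminal condition: no comparison
        return -1, count
    mid = (low + high) // 2
    count += 1                          # one (three-way) comparison with S[mid]
    if x == S[mid]:
        return mid, count
    elif x < S[mid]:
        return binsearch(S, x, low, mid - 1, count)
    else:
        return binsearch(S, x, mid + 1, high, count)

for n in [1, 2, 16, 100, 1024]:
    S = list(range(n))
    _, w = binsearch(S, n, 0, n - 1)    # x larger than every item: worst case
    assert w == math.floor(math.log2(n)) + 1
```

Note the check also passes for n not a power of 2 (such as 100), consistent with $W(n) = \lfloor \lg n \rfloor + 1$.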
## Mergesort
### Worst-Case Time Complexity Analysis of Algorithm 2.3 (Merge)
* **Basic operation:** the comparison of U [i] with V [j].
* **Input size:** h and m, the number of items in each of the two input arrays.
* The worst case occurs when the loop is exited because one of the indices, say i, has reached its exit point h + 1, while the other index j has reached m, 1 less than its exit point. For example, this can occur when the first m − 1 items in V are placed first in S, followed by all h items in U, at which time the loop is exited because i equals h + 1. Therefore,
$$
W(h, m) = h + m - 1
$$
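A short sketch makes the worst case concrete (names are mine, not Algorithm 2.3 verbatim): interleave the inputs so that neither array is exhausted until the very end.

```python
# Sketch of merge: count comparisons of U[i] with V[j] and check that the
# worst case costs h + m - 1 comparisons.
def merge(U, V):
    S, i, j, count = [], 0, 0, 0
    while i < len(U) and j < len(V):
        count += 1                      # one comparison of U[i] with V[j]
        if U[i] <= V[j]:
            S.append(U[i]); i += 1
        else:
            S.append(V[j]); j += 1
    S.extend(U[i:]); S.extend(V[j:])    # copy the leftover tail, no comparisons
    return S, count

# Worst case: V's first m - 1 items go first, then all h items of U.
h, m = 4, 3
U, V = [10, 20, 30, 40], [1, 2, 100]
S, c = merge(U, V)
assert S == sorted(U + V) and c == h + m - 1
```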
### Worst-Case Time Complexity Analysis of Algorithm 2.2 (Mergesort)
* **Basic operation:** the comparison that takes place in merge.
* **Input size:** n, the number of items in the array S.
* The total number of comparisons is the sum of the number of comparisons in the recursive call to mergesort with U as the input, the number of comparisons in the recursive call to mergesort with V as the input, and the number of comparisons in the top-level call to merge. Therefore,
$$
W(n) = \underbrace{W(h)}_{\rm Time\ to\ sort\ U} + \underbrace{W(m)}_{\rm Time\ to\ sort\ V} +\ \ \underbrace{h + m - 1}_{\rm Time\ to\ merge}
$$
We first analyze the case where n is a power of 2. In this case,
$$
h = \lfloor n/2 \rfloor = \frac{n}{2} \\
m = n - h = n - \frac{n}{2} = \frac{n}{2} \\
h + m = \frac{n}{2} + \frac{n}{2} = n
$$
Our expression for W(n) becomes
$$
\begin{split}
W(n) =& \ W(\frac{n}{2}) + W(\frac{n}{2}) + n - 1 \\
=& \ 2W(\frac{n}{2}) + n - 1.
\end{split}
$$
When the input size is 1, the terminal condition is met and no merging is done. Therefore, W(1) is 0. We have established the recurrence
:::success
$$
\begin{split}
W(n) &= 2W(\frac{n}{2}) + n - 1 \ \ \ \ for\ n > 1,\ \ n\ a\ power\ of\ 2\\
W(1) &= 0
\end{split}
$$
:::
This recurrence is solved in Example B.19 in Appendix B. The solution is
$$
W(n) = n\lg n - (n - 1) \in\ \Theta(n\lg n)
$$
For n not a power of 2, we will establish in the exercises that
$$
W(n) = W(\lfloor\frac{n}{2}\rfloor) + W(\lceil\frac{n}{2}\rceil) + n - 1
$$
where $\lceil y\rceil$ and $\lfloor y\rfloor$ are the smallest integer $≥ y$ and the largest integer $≤ y$, respectively. It is hard to analyze this case exactly because of the floors ( $\lfloor\ \rfloor$ ) and ceilings ( $\lceil\ \rceil$ ). However, using an induction argument like the one in Example B.25 in Appendix B, it can be shown that $W(n)$ is nondecreasing. Therefore, Theorem B.4 in that appendix implies that
$$
W(n) \in\ \Theta(n\lg n)
$$
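For powers of 2 the solution can be verified with a sketch (function names are mine). The adversarial input below is a standard construction, assumed here: interleave the ranks between the two halves so that every merge costs the full h + m − 1 comparisons.

```python
import math

def mergesort_count(S):
    """Return (sorted list, number of comparisons done in all merges)."""
    if len(S) <= 1:
        return S, 0
    h = len(S) // 2                     # h = floor(n/2), as in the text
    U, cu = mergesort_count(S[:h])
    V, cv = mergesort_count(S[h:])
    merged, i, j, c = [], 0, 0, 0
    while i < len(U) and j < len(V):
        c += 1                          # one comparison of U[i] with V[j]
        if U[i] <= V[j]:
            merged.append(U[i]); i += 1
        else:
            merged.append(V[j]); j += 1
    merged += U[i:] + V[j:]
    return merged, cu + cv + c

def worst_case(n):
    """Permutation forcing h + m - 1 comparisons in every merge."""
    if n == 1:
        return [0]
    left = worst_case(n // 2)           # left half gets the even ranks
    right = worst_case(n - n // 2)      # right half gets the odd ranks
    return [2 * x for x in left] + [2 * x + 1 for x in right]

for n in [2, 4, 16, 64]:                # powers of 2 only
    _, w = mergesort_count(worst_case(n))
    assert w == n * int(math.log2(n)) - (n - 1)
```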
## Quicksort
### Every-Case Time Complexity (Partition)
* **Basic operation:** the comparison of S [i] with pivotitem.
* **Input size:** n = high − low + 1, the number of items in the subarray.
* Because every item except the first is compared,
$$
T(n) = n - 1.
$$
We are using n here to represent the size of the subarray, not the size of the array S. It represents the size of S only when partition is called at the top level.
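As a sketch, here is one common in-place partitioning scheme (the exact swapping details of the text's partition algorithm may differ; the names pivotitem and pivotpoint follow the text, the rest is my assumption). Every item after the first is compared with pivotitem exactly once:

```python
def partition(S, low, high):
    """Partition S[low..high] around S[low]; return (pivotpoint, comparisons)."""
    pivotitem = S[low]
    j = low                             # j marks the last item < pivotitem
    count = 0
    for i in range(low + 1, high + 1):
        count += 1                      # one comparison of S[i] with pivotitem
        if S[i] < pivotitem:
            j += 1
            S[i], S[j] = S[j], S[i]
    S[low], S[j] = S[j], S[low]         # move the pivot into its final place
    return j, count

S = [5, 3, 8, 1, 9, 2]
p, c = partition(S, 0, len(S) - 1)
assert c == len(S) - 1                  # T(n) = n - 1 comparisons
assert all(x < S[p] for x in S[:p]) and all(x >= S[p] for x in S[p+1:])
```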
### Worst-Case Time Complexity (Quicksort)
* **Basic operation:** the comparison of S [i] with pivotitem in partition.
* **Input size:** n, the number of items in the array S.
* Oddly enough, it turns out that the worst case occurs if the array is already sorted in nondecreasing order. The reason for this should become clear. If the array is already sorted in nondecreasing order, no items are less than the first item in the array, which is the pivot item. Therefore, when partition is called at the top level, no items are placed to the left of the pivot item, and the value of pivotpoint assigned by partition is 1. Similarly, in each recursive call, pivotpoint receives the value of low. Therefore, the array is repeatedly partitioned into an empty subarray on the left and a subarray with one less item on the right. For the class of instances that are already sorted in nondecreasing order, we have
$$
T(n) = \underbrace{T(0)}_{\substack{\text{Time to sort}\\ \text{left subarray}}} + \underbrace{T(n - 1)}_{\substack{\text{Time to sort}\\ \text{right subarray}}} + \underbrace{n - 1}_{\substack{\text{Time to}\\ \text{partition}}}
$$
We are using the notation $T(n)$ because we are presently determining the every-case complexity for the class of instances that are already sorted in nondecreasing order. Because $T(0) = 0$, we have the recurrence
:::success
$$
\begin{split}
T(n) &= T(n - 1) + n - 1\ \ \ \ \ for\ n > 0\\
T(0) &= 0.
\end{split}
$$
:::
This recurrence is solved in Example B.16 in Appendix B. The solution is
$$
T(n) = \frac{n(n - 1)}{2}
$$
We have established that the worst case is at least $n(n - 1)/2$. Although intuitively it may now seem that this is as bad as things can get, we still need to show this. We will accomplish this by using induction to show that, for all n,
$$
W(n) \leq \frac{n(n - 1)}{2}
$$
Induction base: For n = 0
$$
W(0) = 0 \leq \frac{0(0 - 1)}{2}
$$
Induction hypothesis: Assume that, for 0 ≤ k < n,
$$
W(k) \leq \frac{k(k - 1)}{2}
$$
Induction step: We need to show that
$$
W(n) \leq \frac{n(n - 1)}{2}
$$
For a given n, there is some instance with size n for which the processing time is $W(n)$. Let p be the value of pivotpoint returned by partition at the top level when this instance is processed. Because the time to process the instances of size $p − 1$ and $n − p$ can be no more than $W(p − 1)$ and $W(n − p)$, respectively, we have
$$
\begin{split}
W(n) &\leq W(p - 1) + W(n - p) + n - 1\\
&\leq \frac{(p - 1)(p - 2)}{2} + \frac{(n - p)(n - p - 1)}{2} + n - 1
\end{split}
$$
The last inequality is by the induction hypothesis. Algebraic manipulations can show that for $1 ≤ p ≤ n$ this last expression is
$$
\leq \frac{n(n - 1)}{2}
$$
This completes the induction proof.
We have shown that the worst-case time complexity is given by
$$
W(n) = \frac{n(n - 1)}{2} \in\ \Theta(n^2)
$$
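A sketch confirms the worst case on already-sorted input (the comparison-counting function is mine; list comprehensions stand in for the in-place partition, which does not change the count):

```python
# Quicksort with the first item as pivot, counting comparisons with the pivot.
def quicksort_count(S):
    if len(S) <= 1:
        return 0
    pivot = S[0]
    left = [x for x in S[1:] if x < pivot]
    right = [x for x in S[1:] if x >= pivot]
    # the partition step compares each of the other len(S) - 1 items with pivot
    return (len(S) - 1) + quicksort_count(left) + quicksort_count(right)

# On sorted input, left is always empty, so the cost telescopes to n(n-1)/2.
for n in [1, 2, 10, 50]:
    assert quicksort_count(list(range(n))) == n * (n - 1) // 2
```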
### Average-Case Time Complexity (Quicksort)
* **Basic operation:** the comparison of S [i] with pivotitem in partition
* **Input size:** n, the number of items in the array S.
* We will assume that we have no reason to believe that the numbers in the array are in any particular order, and therefore that the value of pivotpoint returned by partition is equally likely to be any of the numbers from 1 through n. If there were reason to believe a different distribution, this analysis would not be applicable. The average obtained is, therefore, the average sorting time when every possible ordering is sorted the same number of times. In this case, the average-case time complexity is given by the following recurrence:
$$
A(n) = \sum_{p=1}^n \overbrace{\frac{1}{n}}^{\substack{\text{Probability}\\ \text{pivotpoint is } p}} \underbrace{[A(p - 1) + A(n - p)]}_{\substack{\text{Average time to sort}\\ \text{subarrays when pivotpoint is } p}} + \underbrace{n - 1}_{\substack{\text{Time to}\\ \text{partition}}} \ \ \ \ \ \ \ \ \ \ \ \ \ (2.1)
$$
In the exercises we show that
$$
\sum_{p=1}^n [A(p - 1) + A(n - p)] = 2\sum_{p=1}^n A(p - 1).
$$
Plugging this equality into Equality 2.1 yields
$$
A(n) = \frac{2}{n}\sum_{p=1}^n A(p - 1) + n - 1.
$$
Multiplying both sides by n, we have
$$
nA(n) = 2\sum_{p=1}^n A(p - 1) + n(n - 1). \ \ \ \ \ \ \ \ \ \ \ \ \ (2.2)
$$
Applying Equality 2.2 to $n − 1$ gives
$$
(n - 1)A(n - 1) = 2\sum_{p=1}^{n - 1} A(p - 1) + (n - 1)(n - 2). \ \ \ \ \ \ \ \ \ \ \ \ \ (2.3)
$$
Subtracting Equality 2.3 from Equality 2.2 yields
$$
nA(n) - (n - 1)A(n - 1) = 2A(n - 1) + 2(n - 1),
$$
which, after rearranging to $nA(n) = (n + 1)A(n - 1) + 2(n - 1)$ and dividing both sides by $n(n + 1)$, simplifies to
$$
\frac{A(n)}{n + 1} = \frac{A(n - 1)}{n} + \frac{2(n - 1)}{n(n + 1)}.
$$
If we let
$$
a_n = \frac{A(n)}{n + 1},
$$
we have the recurrence
$$
\begin{split}
a_n &= a_{n-1} + \frac{2(n - 1)}{n(n + 1)}\ \ \ \ \ for\ n > 0\\
a_0 &= 0.
\end{split}
$$
This recurrence is like the one in Example B.22 in Appendix B; its approximate solution is
$$
a_n \approx 2 \ln n
$$
which implies that
$$
\begin{split}
A(n) &\approx (n + 1)2 \ln n = (n + 1)2(\ln 2)(\lg n)\\
&\approx 1.38(n + 1)\lg n \in \Theta(n \lg n).
\end{split}
$$
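Iterating the recurrence for $a_n$ numerically (a sketch, not a proof; the function name is mine) shows the ratio $a_n / (2 \ln n)$ creeping toward 1 as n grows, as the lower-order terms become negligible:

```python
import math

def a_ratios(checkpoints):
    """Iterate a_n = a_{n-1} + 2(n-1)/(n(n+1)) and record a_n / (2 ln n)."""
    an, out = 0.0, {}
    for n in range(1, max(checkpoints) + 1):
        an += 2 * (n - 1) / (n * (n + 1))
        if n in checkpoints:
            out[n] = an / (2 * math.log(n))
    return out

r = a_ratios({100, 10_000, 1_000_000})
assert r[100] < r[10_000] < r[1_000_000] < 1    # ratio climbs toward 1
```

The convergence is slow because $a_n = 2 \ln n$ only up to an additive constant, which is why the approximation is stated asymptotically.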
## Strassen’s Matrix Multiplication
### Every-Case Time Complexity Analysis of Number of Multiplications (Strassen)
* **Basic operation:** one elementary multiplication.
* **Input size:** n, the number of rows and columns in the matrices.
* For simplicity, we analyze the case in which we keep dividing until we have two 1 × 1 matrices, at which point we simply multiply the numbers in each matrix. The actual threshold value used does not affect the order. When n = 1, exactly one multiplication is done. When we have two n × n matrices with n > 1, the algorithm is called exactly seven times with an (n/2) × (n/2) matrix passed each time, and no multiplications are done at the top level. We have established the recurrence
:::success
$$
\begin{split}
T(n) &= 7T(\frac{n}{2})\ \ \ \ for\ n > 1,\ n\ a\ power\ of\ 2\\
T(1) &= 1
\end{split}
$$
:::
This recurrence is solved in Example B.2 in Appendix B. The solution is
$$
T(n) = n^{\lg7} \approx n^{2.81} \in \Theta(n^{2.81}).
$$
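A quick sketch (the function name is mine) unrolls the recurrence and checks that for $n = 2^k$ it equals $7^k = (2^k)^{\lg 7} = n^{\lg 7}$:

```python
import math

def mults(n):
    """T(n) = 7 T(n/2) for n > 1, T(1) = 1: elementary multiplications."""
    return 1 if n == 1 else 7 * mults(n // 2)

for k in range(8):
    n = 2 ** k
    assert mults(n) == 7 ** k                   # T(2^k) = 7^k
    assert math.isclose(mults(n), n ** math.log2(7))
```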
### Every-Case Time Complexity Analysis of Number of Additions/Subtractions (Strassen)
* **Basic operation:** one elementary addition or subtraction.
* **Input size:** n, the number of rows and columns in the matrices.
* Again we assume that we keep dividing until we have two 1 × 1 matrices. When n = 1, no additions/subtractions are done. When we have two n × n matrices with n > 1, the algorithm is called exactly seven times with an (n/2) × (n/2) matrix passed in each time, and 18 matrix additions/subtractions are done on (n/2) × (n/2) matrices. When two (n/2) × (n/2) matrices are added or subtracted,$(n/2)^2$ additions or subtractions are done on the items in the matrices. We have established the recurrence
:::success
$$
\begin{split}
T(n) &= 7T(\frac{n}{2}) + 18(\frac{n}{2})^2\ \ \ \ for\ n > 1,\ n\ a\ power\ of\ 2\\
T(1) &= 0
\end{split}
$$
:::
This recurrence is solved in Example B.20 in Appendix B. The solution is
$$
T(n) = 6n^{\lg7} - 6n^2 \approx 6n^{2.81} - 6n^2 \in \Theta(n^{2.81}).
$$
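The closed form can be checked the same way (a sketch, with a name of my choosing):

```python
def addsubs(n):
    """T(n) = 7 T(n/2) + 18 (n/2)^2 for n > 1, T(1) = 0."""
    return 0 if n == 1 else 7 * addsubs(n // 2) + 18 * (n // 2) ** 2

for k in range(10):
    n = 2 ** k
    assert addsubs(n) == 6 * 7 ** k - 6 * n ** 2   # 6 n^(lg 7) - 6 n^2
```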