---
title: Algorithm Homework IV
description: Algorithm Homework 4
---
# [Homework pdf](https://drive.google.com/file/d/1is3aCCrD-jp6gtqT2oHyFTN5kmeVsEcr/view?usp=sharing)
# Problem 1 (done)
- [x] Done [name=Joe]
- [x] Reviewed [name=Сука]
## Topic
Give asymptotic upper and lower bounds for T(n) in each of the following recurrences.
Assume that T(n) is constant for n ≦ 2. Make your bounds as tight as possible, and justify your answers.
a. T(n) = 2T($\dfrac{n}{2}$) + $n^3$
b. T(n) = T($\dfrac{9n}{10}$) + $n$
c. T(n) = 16T($\dfrac{n}{4}$) + $n^2$
d. T(n) = 7T($\dfrac{n}{3}$) + $n^2$
e. T(n) = 7T($\dfrac{n}{2}$) + $n^2$
f. T(n) = 2T($\dfrac{n}{4}$) + $\sqrt{n}$
g. T(n) = T(n − 1) + n
h. T(n) = T($\sqrt{n}$) + 1
## Solution
**Use the Master Theorem.**
a.
$a=2$, $b=2$: $\log_b a=1<3$, so Case 3 applies.
$T(n)=\Theta(f(n))=\Theta(n^3)$
b.
$a=1$, $b=10/9$: $\log_b a=0<1$, so Case 3 applies.
$T(n)=\Theta(f(n))=\Theta(n)$
c.
$a=16$, $b=4$: $\log_b a=2$, equal to the exponent of $f(n)=n^2$, so Case 2 applies.
$T(n)=\Theta(n^{\log_b a}\lg n)=\Theta(n^2\lg n)$
d.
$a=7$, $b=3$: $\log_b a=\log_3 7\approx1.77<2$, so Case 3 applies.
$T(n)=\Theta(f(n))=\Theta(n^2)$
e.
$a=7$, $b=2$: $\log_b a=\log_2 7\approx2.81>2$, so Case 1 applies.
$T(n)=\Theta(n^{\log_b a})=\Theta(n^{\lg 7})\approx\Theta(n^{2.81})$
f.
$a=2$, $b=4$: $\log_b a=1/2$, equal to the exponent of $f(n)=\sqrt{n}$, so Case 2 applies.
$T(n)=\Theta(n^{\log_b a}\lg n)=\Theta(\sqrt{n}\lg n)$
g.
$T(n)=T(n-1)+n$
$=T(n-2)+(n-1)+n$
$=\dots$
$=T(0)+1+2+\dots+n$
$=\Theta(n^2)$
h.
$T(n)=T(n^{1/2})+1$
$=T(n^{1/4})+1+1$
$=\dots$
$=T(2)+k$, where $k$ is the number of times the square root is taken before the argument drops to 2:
$n^{1/2^k}\le2 \Rightarrow n\le2^{2^k} \Rightarrow k=\lg(\lg n)$
$T(n)=\Theta(\lg(\lg n))$
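As an optional sanity check (not required for the justification), the short sketch below evaluates recurrence (e), $T(n)=7T(n/2)+n^2$, bottom-up on powers of two and prints $T(n)/n^{\lg 7}$; if the Case 1 bound is tight, this ratio should settle to a constant.
```c++
#include <cstdio>
#include <cmath>

int main() {
    double T = 1.0;                          // take T(2) = 1 (any constant works)
    for (int k = 2; k <= 40; ++k) {          // n = 4, 8, ..., 2^40
        double n = std::pow(2.0, k);
        T = 7.0 * T + n * n;                 // apply T(n) = 7 T(n/2) + n^2 once
        if (k % 10 == 0)
            std::printf("n = 2^%d  T(n)/n^(lg 7) = %.4f\n",
                        k, T / std::pow(n, std::log2(7.0)));
    }
    return 0;
}
```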
# Problem 2
## Topic
Give asymptotic upper and lower bounds for T(n) in each of the following recurrences.
Assume that T(n) is constant for sufficiently small n.
Make your bounds as tight as possible, and justify your answers.
a. T(n) = 4T($\dfrac{n}{3}$) + $n\lg n$
b. T(n) = 3T($\dfrac{n}{3}$) + $\dfrac{n}{\lg n}$
c. T(n) = 4T($\dfrac{n}{2}$) + $n^2\sqrt{n}$
d. T(n) = 3T($\dfrac{n}{3}-2$) + $\dfrac{n}{2}$
e. T(n) = 2T($\dfrac{n}{2}$) + $\dfrac{n}{\lg n}$
f. T(n) = T($\dfrac{n}{2}$) + T($\dfrac{n}{4}$) + T($\dfrac{n}{8}$) + n
g. T(n) = T(n - 1) + $\dfrac{1}{n}$
h. T(n) = T(n - 1) + $\lg n$
i. T(n) = T(n - 2) + $\dfrac{1}{\lg n}$
j. T(n) = $\sqrt{n}\,$T($\sqrt{n}$) + n
k. T(n) = $\sqrt{n}\,$T(n − 1) + n
## Solution
Master theorem (as a tool), with $a$ subproblems of size $n/b$ and combine cost $f(n)$:
1. If $f(n) = \Theta(n^c)$ where $c < \log_b a$, then $T(n) = \Theta(n^{\log_b a})$.
2. If $f(n) = \Theta(n^c)$ where $c = \log_b a$, then $T(n) = \Theta(n^c\log n)$.
3. If $f(n) = \Theta(n^c)$ where $c > \log_b a$, then $T(n) = \Theta(f(n))$.
### a. $T(n) = 4T(\dfrac{n}{3})+n\ lg\ n$
### b. $T(n) = 3T(\dfrac{n}{3})+\dfrac{n}{lg\ n}$
### c. $T(n) = 4T(\dfrac{n}{2})+n^2\cdot \sqrt{n}$
Since $\log_2 4 = 2 < 2.5$, **Case 3** of the Master theorem gives $T(n) = \Theta(n^{2.5})$.
### d. $T(n) = 3T(\dfrac{n}{3}-2)+\dfrac{n}{2}$
The $-2$ shift does not change the asymptotics. Since $\log_3 3 = 1$ and $f(n) = \Theta(n)$, **Case 2** of the Master theorem gives $T(n) = \Theta(n\lg n)$.
### e. $T(n) = 2T(\dfrac{n}{2})+\dfrac{n}{lg\ n}$
### f. $T(n) = T(\dfrac{n}{2})+T(\dfrac{n}{4})+T(\dfrac{n}{8})+n$
Guess $T(n) = O(n)$ and verify by substitution: assume $T(k) \le ck$ for all $k < n$. Then
$T(n) = T(n/2) + T(n/4) + T(n/8) + n \le \dfrac{7}{8}cn + n \le cn$ for every $c \ge 8$.
Since also $T(n) \ge n$, we get $T(n) = \Theta(n)$.
### g. $T(n) = T(n-1)+\dfrac{1}{n}$
### h. $T(n) = T(n-1)+lg\ n$
### i. $T(n) = T(n-2)+\dfrac{1}{lg\ n}$
### j. $T(n) = \sqrt{n}T(\sqrt{n})+n$
### k. $T(n) = \sqrt{n}T(n−1)+n$
# Problem 3 (done)
## Topic
Can the master method be applied to the recurrence $T(n)=4T(n/2)+n^2lgn$? Why or why not? Give an asymptotic upper bound for this recurrence.
## Solution
$a = 4$, $b = 2$, so $n^{\log_b a} = n^2$.
No. Although $n^2\lg n = \Omega(n^2)$, it is not *polynomially* larger than $n^2$: the ratio $(n^2\lg n)/n^2 = \lg n$ is asymptotically smaller than $n^\epsilon$ for every constant $\epsilon > 0$.
So none of the three cases applies, and we can't use the master method.
For an upper bound, guess T(n) ≦ $cn^2\lg^2 n$ for some constant c > 1/2:
T(n)
= 4T($\dfrac{n}{2}$) + $n^2\lg n$
≦ 4c$\dfrac{n^2}{4}\lg^2\dfrac{n}{2} + n^2\lg n$
= $cn^2(\lg^2 n - 2\lg n + 1) + n^2\lg n$
≦ $cn^2\lg^2 n$
where the last step holds as long as $\lg n$ ≧ $\dfrac{c}{2c - 1}$, i.e. for all sufficiently large $n$. Hence $T(n) = O(n^2\lg^2 n)$.
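An optional numerical check of this bound, in the same spirit as the one in Problem 1: iterate $T(n)=4T(n/2)+n^2\lg n$ bottom-up on $n=2^k$ (so $\lg n=k$) and watch $T(n)/(n^2\lg^2 n)$ level off at a constant.
```c++
#include <cstdio>
#include <cmath>

int main() {
    double T = 1.0;                          // take T(2) = 1
    for (int k = 2; k <= 40; ++k) {          // n = 2^k
        double n = std::pow(2.0, k);
        T = 4.0 * T + n * n * k;             // T(n) = 4 T(n/2) + n^2 lg n, with lg n = k
        if (k % 10 == 0)
            std::printf("n = 2^%d  T(n)/(n^2 lg^2 n) = %.4f\n",
                        k, T / (n * n * k * k));
    }
    return 0;
}
```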
# Problem 4 (done)
## Topic
Let A[1..n] be an array of n distinct numbers. If i < j and A[ i ] > A[ j ], then the
pair ( i, j ) is called an inversion of A.
a. List the five inversions of the array <2, 3, 8, 6, 1>
b. What array with elements from the set {1, 2, ..., n} has the most inversions? How many does it have?
c. What is the relationship between the running time of insertion sort and the number of inversions in the input array?
Justify your answer.
d. Give an algorithm that determines the number of inversions in any permutation on n elements in Θ(nlgn) worst-case time.
(Hint: Modify merge sort.)
## Solution
a. (1, 5), (2, 5), (3, 4), (3, 5), (4, 5)
b. The reverse-sorted array ⟨n, n - 1, ..., 1⟩; it has $\dfrac{n(n-1)}{2}$ inversions.
c.
> An inversion is a pair of indices (i, j) with i < j and A[i] > A[j]; that is, A[i] is not only larger than A[j] but also appears in front of it.
> Insertion sort moves each element leftward past every larger element that precedes it, and each such move removes exactly one inversion, so its running time is Θ(n + d), where d is the number of inversions in the input.
> As a result, the more inversions the array has, the longer insertion sort takes.
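To make this concrete, here is a small sketch (not part of the required answer): it counts the inversions of the array from part (a) by brute force and also counts how many element shifts insertion sort performs on the same array; the two counts agree.
```c++
#include <iostream>
#include <vector>
using namespace std;

int main() {
    vector<int> a = {2, 3, 8, 6, 1};          // the array from part (a)

    // Brute-force inversion count: pairs (i, j) with i < j and a[i] > a[j].
    int inversions = 0;
    for (size_t i = 0; i < a.size(); ++i)
        for (size_t j = i + 1; j < a.size(); ++j)
            if (a[i] > a[j]) ++inversions;

    // Insertion sort, counting how many elements are shifted right.
    int shifts = 0;
    for (size_t i = 1; i < a.size(); ++i) {
        int key = a[i];
        size_t j = i;
        while (j > 0 && a[j - 1] > key) {     // each shift removes exactly one inversion
            a[j] = a[j - 1];
            --j;
            ++shifts;
        }
        a[j] = key;
    }

    cout << "inversions: " << inversions << ", shifts: " << shifts << endl;  // both are 5
    return 0;
}
```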
d.
```C++
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Count inversions that cross the boundary between the (already sorted)
// halves list[front..mid] and list[mid+1..back], then merge the two halves.
int countInter(vector<int> &list, int front, int mid, int back){
    int inv = 0;
    for(int i = front, j = mid + 1; i <= mid; i++){
        while(j <= back && list[j] < list[i])
            j++;
        inv += j - (mid + 1);   // every right-half element already passed is < list[i]
    }
    // Merge the two sorted halves in linear time, keeping the whole range sorted.
    inplace_merge(list.begin() + front, list.begin() + mid + 1, list.begin() + back + 1);
    return inv;
}

// Modified merge sort: total inversions = inversions within each half
// plus inversions crossing the middle.  Runs in Theta(n lg n).
int countingInversions(vector<int> &list, int front, int back){
    if(front >= back) return 0;
    int mid = (front + back) / 2;
    return countingInversions(list, front, mid)
         + countingInversions(list, mid + 1, back)
         + countInter(list, front, mid, back);
}

int main(){
    vector<int> input;
    int n;
    while(cin >> n)
        input.push_back(n);
    if(input.empty()) return 0;
    cout << countingInversions(input, 0, input.size() - 1) << endl;
    return 0;
}
```
**Pseudocode:**
countInter(list, front, mid, back): list is an int vector passed by reference; front, mid, back are indices; both halves list[front..mid] and list[mid+1..back] are already sorted
let inv = 0 and j = mid + 1
for i from front to mid
while j ≤ back and list[j] < list[i] do j = j + 1 end while
inv = inv + (j - mid - 1), since every right-half element already passed is smaller than list[i]
end for
merge the two sorted halves list[front..mid] and list[mid+1..back]
return inv
end countInter
countingInversions(list, front, back): list is an int vector passed by reference; front and back are indices
if front ≥ back then return 0
let mid be ⌊(front + back)/2⌋
return countingInversions(list, front, mid) + countingInversions(list, mid + 1, back) + countInter(list, front, mid, back)
end countingInversions
# Problem 5 (done)
## Topic
How would you modify **Strassen's algorithm** to multiply n×n matrices in which n is not an exact power of 2?
Show that the resulting algorithm runs in time Θ($n^{lg7}$).
Original method to multiply two matrices:

\begin{equation*}
A_{n,n} =
\begin{bmatrix}
a_{1,1} & a_{1,2} \\
a_{2,1} & a_{2,2}
\end{bmatrix}
\\
B_{n,n} = \begin{bmatrix}
b_{1,1} & b_{1,2} \\
b_{2,1} & b_{2,2}
\end{bmatrix}
\\
C_{n,n} = \begin{bmatrix}
a_{1,1} \cdot b_{1,1} + a_{1,2} \cdot b_{2,1} &
a_{1,1} \cdot b_{1,2} + a_{1,2} \cdot b_{2,2} \\
a_{2,1} \cdot b_{1,1} + a_{2,2} \cdot b_{2,1} &
a_{2,1} \cdot b_{1,2} + a_{2,2} \cdot b_{2,2}
\end{bmatrix}
\end{equation*}
Strassen's method instead defines new matrices and uses only 7 multiplications:
$M_1 = (A_{1,1} + A_{2,2}) \cdot (B_{1,1}+B_{2,2})$
$M_2 = (A_{2,1}+A_{2,2}) \cdot B_{1,1}$
$M_3 = A_{1,1} \cdot (B_{1,2}-B_{2,2})$
$M_4 = A_{2,2} \cdot (B_{2,1}-B_{1,1})$
$M_5 = (A_{1,1}+A_{1,2}) \cdot B_{2,2}$
$M_6 = (A_{2,1}-A_{1,1}) \cdot (B_{1,1}+B_{1,2})$
$M_7 = (A_{1,2}-A_{2,2}) \cdot (B_{2,1}+B_{2,2})$
Then calculate $C_{1,1}, C_{1,2}, C_{2,1}, C_{2,2}$:
$C_{1,1} = M_1+M_4-M_5+M_7$
$C_{1,2} = M_3+M_5$
$C_{2,1} = M_2+M_4$
$C_{2,2} = M_1-M_2+M_3+M_6$
Pad the matrices with zeros up to size $2^k \times 2^k$, where $k$ is an integer:
given $2^{k-1} < n \le 2^k$,
let $m = 2^k = cn$ for some constant $1 \le c < 2$.
Time complexity: $\Theta(m^{\lg 7}) = \Theta((cn)^{\lg 7}) = \Theta(c^{\lg 7}\,n^{\lg 7}) = \Theta(n^{\lg 7})$
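A minimal C++ sketch of this padding step. The names `strassenAnyN` and `strassenPow2` are illustrative only; `strassenPow2` is shown as a naive placeholder so the sketch compiles, and in the real algorithm it would be the 7-multiplication recursion using $M_1,\dots,M_7$ above.
```c++
#include <iostream>
#include <vector>
using namespace std;
using Matrix = vector<vector<double>>;

// Placeholder for Strassen's recursion on m x m matrices, m a power of 2.
// (Naive product here only so the sketch compiles; the real routine splits
// into quadrants and uses the seven products M1..M7 above.)
Matrix strassenPow2(const Matrix &A, const Matrix &B){
    int m = A.size();
    Matrix C(m, vector<double>(m, 0.0));
    for(int i = 0; i < m; ++i)
        for(int k = 0; k < m; ++k)
            for(int j = 0; j < m; ++j)
                C[i][j] += A[i][k] * B[k][j];
    return C;
}

// Multiply two n x n matrices for arbitrary n by zero-padding both to the
// smallest power of two m >= n (note m < 2n), running the power-of-two
// routine, and reading back the top-left n x n block.
Matrix strassenAnyN(const Matrix &A, const Matrix &B){
    int n = A.size();
    int m = 1;
    while(m < n) m *= 2;

    Matrix Ap(m, vector<double>(m, 0.0)), Bp(m, vector<double>(m, 0.0));
    for(int i = 0; i < n; ++i)
        for(int j = 0; j < n; ++j){          // copy the originals, the rest stays 0
            Ap[i][j] = A[i][j];
            Bp[i][j] = B[i][j];
        }

    Matrix Cp = strassenPow2(Ap, Bp);        // Theta(m^{lg 7}) = Theta(n^{lg 7}) with Strassen
    Matrix C(n, vector<double>(n));
    for(int i = 0; i < n; ++i)               // the top-left n x n block is A*B
        for(int j = 0; j < n; ++j)
            C[i][j] = Cp[i][j];
    return C;
}

int main(){
    Matrix A = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    Matrix B = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
    Matrix C = strassenAnyN(A, B);           // n = 3 is padded to m = 4
    for(int i = 0; i < 3; ++i){
        for(int j = 0; j < 3; ++j) cout << C[i][j] << ' ';
        cout << '\n';
    }
    return 0;
}
```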
# Problem 6 (done)
- [x] Done [name=Joe]
- [x] Reviewed [name=Сука]
## Topic
V. Pan has discovered a way of multiplying 68×68 matrices using 132,464 multiplications, a way of multiplying 70×70 matrices using 143,640 multiplications, and a way of multiplying 72×72 matrices using 155,424 multiplications.
Which method yields the best asymptotic running time when used in a divide-and-conquer matrix-multiplication algorithm? How does it compare to Strassen's algorithm?
## Solution
In each case the submatrices are combined with a constant number of matrix additions, so the combine cost is $f(n)=\Theta(n^2)$, which is dominated by the recursion.
**Case 1:** 68×68 using k = 132464 multiplications
$T_1(n)=132464\,T_1(n/68)+\Theta(n^2)$
$=\Theta(n^{\log_{68}132464})$
$=\bf{\Theta(n^{2.795128})}$
**Case 2:** 70×70 using k = 143640 multiplications
$T_2(n)=143640\,T_2(n/70)+\Theta(n^2)$
$=\Theta(n^{\log_{70} 143640})$
$=\bf{\Theta(n^{2.795122})}$
**Case 3:** 72×72 using k = 155424 multiplications
$T_3(n)=155424\,T_3(n/72)+\Theta(n^2)$
$=\Theta(n^{\log_{72} 155424})$
$=\bf{\Theta(n^{2.795147})}$
**Case 4: Strassen's algorithm**
$T_4(n)=\Theta(n^{\log_2 7})$
$=\bf{\Theta(n^{2.807354})}$
So Case 2 (the 70×70 method) gives the best asymptotic running time, and it is also better than Strassen's algorithm.
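A quick numerical check of these exponents (using only standard `<cmath>`):
```c++
#include <cstdio>
#include <cmath>

int main() {
    // log_b(k) = log(k) / log(b)
    std::printf("log_68 132464 = %.6f\n", std::log(132464.0) / std::log(68.0));
    std::printf("log_70 143640 = %.6f\n", std::log(143640.0) / std::log(70.0));
    std::printf("log_72 155424 = %.6f\n", std::log(155424.0) / std::log(72.0));
    std::printf("log_2  7      = %.6f\n", std::log2(7.0));
    return 0;
}
```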
# Problem 7 (done)
## Topic
STOOGE-SORT(A, i, j)
1 if A[i]>A[j]
2 then exchange A[i] $\leftrightarrow$A[j]
3 if i+1 $\geq$ j
4 then return
5 k $\leftarrow$ $\lfloor$(j - i + 1)/3 $\rfloor$
6 STOOGE-SORT(A, i, j - k)
7 STOOGE-SORT(A, i + k, j)
8 STOOGE-SORT(A, i, j - k)
a. Argue that, if n = length[A], then STOOGE-SORT(A, 1, length[A]) correctly sorts the input array A[1..n].
b. Give a recurrence for the worst-case running time of STOOGE-SORT and a tight asymptotic ($\Theta$-notation) bound on the worst-case running time.
c. Compare the worst-case running time of STOOGE-SORT with that of insertion sort, merge sort, heapsort, and quicksort. Do the professors deserve tenure?
## Solution
a.
Proof by induction: assume STOOGE-SORT is correct on any range with fewer than n elements, and note that a range with one or two elements is handled correctly by lines 1–4. Let k = ⌊n/3⌋ and view the range as three pieces of roughly equal size. The first call STOOGE-SORT(A, i, j − k) sorts the first two thirds, so the larger elements among them move into the middle piece. The second call STOOGE-SORT(A, i + k, j) sorts the last two thirds, which places the overall largest third of the elements into the last piece, in sorted order. However, the middle piece may now contain elements that came from the last piece and are smaller than some elements in the first piece, so the final call STOOGE-SORT(A, i, j − k) sorts the first two thirds again. After it, the whole range is sorted, so the algorithm is correct.
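For reference, a direct 0-indexed C++ transcription of the STOOGE-SORT pseudocode above (an illustration, not part of the required argument):
```c++
#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Direct transcription of STOOGE-SORT(A, i, j), 0-indexed.
void stoogeSort(vector<int> &A, int i, int j){
    if(A[i] > A[j])
        swap(A[i], A[j]);
    if(i + 1 >= j)
        return;
    int k = (j - i + 1) / 3;          // floor of one third of the range length
    stoogeSort(A, i, j - k);          // sort the first two thirds
    stoogeSort(A, i + k, j);          // sort the last two thirds
    stoogeSort(A, i, j - k);          // sort the first two thirds again
}

int main(){
    vector<int> A = {5, 2, 4, 6, 1, 3};
    stoogeSort(A, 0, (int)A.size() - 1);
    for(int x : A) cout << x << ' ';  // prints: 1 2 3 4 5 6
    cout << endl;
    return 0;
}
```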
b.
$T(n) = 3T(\frac{2}{3}n) + \Theta(1)$
By the Master Theorem: if
T(n) = aT($\lceil\frac{n}{b}\rceil$) + O($n^d$)
(for constants a > 0, b > 1, d $\geq$ 0), then:
T(n) = O($n^d$) if d > $\log_ba$
= O($n^d$ log n) if d = $\log_ba$
= O($n^{\log_ba}$) if d < $\log_ba$
Here a = 3, b = $\frac32$, d = 0, and d < $\log_{3/2}3$, so
T(n) = $\Theta(n^{\log_{3/2} 3}) \approx \Theta(n^{2.71})$
c.
| | STOOGE-SORT | insertion sort | merge sort | heapsort | quicksort |
|:--|:--|:--|:--|:--|:--|
| worst case | $\Theta(n^{\log_{3/2} 3})\approx\Theta(n^{2.71})$ | $\Theta(n^2)$ | $\Theta(n\log n)$ | $\Theta(n\log n)$ | $\Theta(n^2)$ |

STOOGE-SORT is asymptotically slower than all of them, even insertion sort, so the professors do not deserve tenure.
# Problem 8 (done)
## Topic
Directly solve the original stock buying problem in Θ(n) time. (Code required.)
## Solution
```c++
#include <iostream>
#include <vector>
using namespace std;

// Single left-to-right scan: walk each rising run to its peak and each falling
// run to its valley.  `buy` is the best candidate buying day seen so far; it is
// moved forward only when the running profit drops to 0 or below, i.e. the new
// valley is at least as cheap as the current buying day.  Theta(n) overall,
// since i only moves forward.
vector<int> maxSubArr(const vector<int> &arr){
    // buy at result[0], sell at result[1], and the profit is result[2]
    vector<int> result(3, 0);
    int profit = 0, buy = 0;
    size_t i, j;
    for(i = 1; i < arr.size(); ){
        // climb to the top of the current rising run
        for(j = buy; i < arr.size() && arr[i] >= arr[i - 1]; i++){}
        profit = arr[i - 1] - arr[j];
        if(result[2] < profit){            // record the best (buy, sell, profit) so far
            result[2] = profit;
            result[1] = i - 1;
            result[0] = buy;
        }
        // descend to the bottom of the following falling run
        for(j = i - 1; i < arr.size() && arr[i] <= arr[i - 1]; i++){}
        profit += arr[i - 1] - arr[j];
        if(profit <= 0){                   // the new valley is a better buying day
            buy = i - 1;
            profit = 0;
        }
    }
    return result;
}

int main(){
    vector<int> input;
    int n;
    while(cin >> n)
        input.push_back(n);
    vector<int> output = maxSubArr(input);
    cout << "buy at:" << output[0] << ", sell at:" << output[1]
         << ", profit is:" << output[2] << endl;
    return 0;
}
```
**Pseudocode:**
maxSubArr(arr) // arr is an array of stock prices, indexed from 0 to arr.length() - 1
let best_buy = 0, best_sell = 0, best_profit = 0, buy = 0, profit = 0, i = 1, j = 0
while i < arr.length() do // i only moves forward, so the whole scan is Theta(n)
j = buy
while i < arr.length() and arr[i] >= arr[i - 1] do // climb to the top of the rising run
i = i + 1
end while
profit = arr[i - 1] - arr[j]
if best_profit < profit then
best_profit = profit
best_sell = i - 1
best_buy = buy
end if
j = i - 1
while i < arr.length() and arr[i] <= arr[i - 1] do // descend to the bottom of the falling run
i = i + 1
end while
profit = profit + arr[i - 1] - arr[j]
if profit <= 0 then // the new valley is a better buying day
buy = i - 1
profit = 0
end if
end while
return best_buy, best_sell, best_profit
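For comparison, a standard alternative Θ(n) scan (not the code above) keeps only the cheapest price seen so far and the best single buy/sell profit; a minimal sketch:
```c++
#include <iostream>
#include <vector>
using namespace std;

// Classic single-pass scan: for each day, the best sale ending that day buys at
// the cheapest price seen so far.  Theta(n) time, Theta(1) extra space.
int main(){
    vector<int> price;
    int x;
    while(cin >> x)
        price.push_back(x);
    if(price.size() < 2) return 0;

    size_t minDay = 0, bestBuy = 0, bestSell = 1;
    int bestProfit = price[1] - price[0];
    for(size_t i = 1; i < price.size(); ++i){
        if(price[i] - price[minDay] > bestProfit){   // selling today beats the best so far
            bestProfit = price[i] - price[minDay];
            bestBuy = minDay;
            bestSell = i;
        }
        if(price[i] < price[minDay])                 // today becomes the cheapest buying day
            minDay = i;
    }
    cout << "buy at:" << bestBuy << ", sell at:" << bestSell
         << ", profit is:" << bestProfit << endl;
    return 0;
}
```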