![Screenshot 2025-03-16 150131](https://hackmd.io/_uploads/BkIvwx4nkg.png)

# 3-Exercises

## 3.1 O-notation, $\Omega$-notation, and $\Theta$-notation

### 3.1-1

> Modify the lower-bound argument for insertion sort to handle input sizes that are not necessarily a multiple of 3.

At least $\lfloor \frac n 3 \rfloor \geq \frac n 3-1$ of the largest values have to pass through each of at least $\lfloor \frac n 3 \rfloor \geq \frac n 3-1$ middle positions, so the time taken by **INSERTION-SORT** in the worst case is at least proportional to $(\frac n 3-1)(\frac n 3-1)=\Omega(n^2)$.

### 3.1-2

> Using reasoning similar to what we used for insertion sort, analyze the running time of the selection sort algorithm from Exercise 2.2-2.

The inner loop of selection sort iterates $(n-i+1)$ times for $i = 1$ to $n-1$, regardless of the input, so the running time of selection sort is $\sum_{i=1}^{n-1} (n-i+1)=(n+2)(n-1)/2=\Theta(n^2)$.

### 3.1-3

> Suppose that $\alpha$ is a fraction in the range $0 < \alpha < 1$. Show how to generalize the lower-bound argument for insertion sort to consider an input in which the $\alpha n$ largest values start in the first $\alpha n$ positions. What additional restriction do you need to put on $\alpha$? What value of $\alpha$ maximizes the number of times that the $\alpha n$ largest values must pass through each of the middle $(1-2 \alpha)n$ array positions?

Each of the $\alpha n$ largest values has to pass through each of the middle $(1-2 \alpha)n$ positions, so the running time of insertion sort is at least proportional to $(\alpha n)(1-2 \alpha)n = \alpha(1-2\alpha)n^2$, which is $\Omega(n^2)$ provided $\alpha(1-2\alpha)=\Omega(1)$; the additional restriction is $0<\alpha<\frac 1 2$.

$\max(\alpha(1-2\alpha))=\frac 1 8$, attained at $\alpha = \frac 1 4$.

## 3.2 Asymptotic notation: formal definitions

### 3.2-1

> Let $f(n)$ and $g(n)$ be asymptotically nonnegative functions. Using the basic definition of $\Theta$-notation, prove that $\max\lbrace f(n),g(n)\rbrace = \Theta (f(n) + g(n))$.

$$
\begin{aligned}
\text{Let } n_0 \text{ be a value that makes both } f(n) \text{ and } g(n) \text{ nonnegative for } n > n_0.\cr
\text{Then for }n>n_0\cr
\frac 1 2 (f(n)+g(n)) \leq \max(f(n),g(n)) \leq 1\cdot(f(n)+g(n))\cr
\implies \max(f(n),g(n))=\Theta(f(n)+g(n))\cr
\end{aligned}
$$

Tips: from here on I will omit details like "let $n_0$ be a value that makes $f(n)$ nonnegative for $n > n_0$", "$\exists c,n_0$", and "for $n > n_0$" for convenience.

### 3.2-2

> Explain why the statement, "The running time of algorithm _A_ is at least $O(n^2)$," is meaningless.

$A = O(n^2)$ means $\exists c,n_0:A \leq cn^2$ for $n>n_0$.

"$A$ is at least $O(n^2)$" means "$A \geq$ some function that is $\leq cn^2$", which every algorithm satisfies (take the function $0$), so the statement conveys no information: it is meaningless.

### 3.2-3

> Is $2^{n+1}=O(2^n)$? Is $2^{2n}=O(2^n)$?

- $2^{n+1} \leq 2\cdot 2^n$. Yes.
- $2^{2n} \leq c \cdot 2^n$ would require $c \geq 2^n$, and no constant satisfies that. No.

### 3.2-4

> Prove Theorem 3.1.

$$
\begin{aligned}
& f(n) = O(g(n)) \text{ and } f(n)=\Omega(g(n))\cr
& \implies c_1 g(n)\leq f(n) \leq c_2g(n)\cr
& \implies f(n) = \Theta(g(n))\cr
& f(n) = \Theta(g(n))\cr
& \implies c_1 g(n)\leq f(n) \leq c_2g(n)\cr
& \implies f(n) = O(g(n)) \text{ and } f(n)=\Omega(g(n))\cr
\end{aligned}
$$

We have $f(n) = \Theta(g(n))$ if and only if $f(n) = \Omega(g(n))$ and $f(n) = O(g(n))$.

### 3.2-5

> Prove that the running time of an algorithm is $\Theta(g(n))$ if and only if its worst-case running time is $O(g(n))$ and its best-case running time is $\Omega (g(n))$.
$$
\begin{aligned}
A=\Theta(g(n)) & \implies c_1 g(n) \leq A \leq c_2 g(n) \text{ for all cases}\cr
& \implies \text{worst-case running time}= O(g(n))\cr
& \And \text{best-case running time}=\Omega(g(n))\cr
\text{worst-case running time}= O(g(n)) & \implies A \leq c_2 g(n) ...(1)\cr
\text{best-case running time}=\Omega(g(n)) & \implies c_1 g(n) \leq A ...(2)\cr
(1) \And (2) & \implies A=\Theta(g(n))\cr
\end{aligned}
$$

### 3.2-6

> Prove that $o(g(n)) \cap \omega (g(n))$ is the empty set.

For any $c>0$, $f(n)\in o(g(n))$ requires $f(n) < cg(n)$ for all sufficiently large $n$, while $f(n)\in\omega(g(n))$ requires $f(n) > cg(n)$ for all sufficiently large $n$. Both cannot hold at once, so no function lies in both sets.

### 3.2-7

> We can extend our notation to the case of two parameters n and m that can go to $\infty$ independently at different rates. For a given function $g(n,m)$, we denote by $O(g(n,m))$ the set of functions
>
> $$
> \begin{aligned}
> O(g(n, m)) =\lbrace f(n, m):
> & \text{ there exist positive constants } c, n_0, \text{ and } m_0 \cr
> & \text{ such that } 0 \leq f(n, m) \leq cg(n, m)\cr
> & \text{ for all } n \geq n_0 \text{ or } m \geq m_0.\rbrace\cr
> \end{aligned}
> $$
>
> Give corresponding definitions for $\Omega(g(n,m))$ and $\Theta(g(n,m))$.

$$
\begin{aligned}
\Theta(g(n, m)) =\lbrace f(n, m): & \text{ there exist positive constants } c_1,c_2, n_0, \text{ and } m_0 \cr
& \text{ such that } 0 \leq c_1g(n,m)\leq f(n, m) \leq c_2g(n, m)\cr
& \text{ for all } n \geq n_0 \text{ or } m \geq m_0.\rbrace\cr
\Omega(g(n, m)) =\lbrace f(n, m): & \text{ there exist positive constants } c, n_0, \text{ and } m_0 \cr
& \text{ such that } 0 \leq cg(n,m)\leq f(n, m)\cr
& \text{ for all } n \geq n_0 \text{ or } m \geq m_0.\rbrace\cr
\end{aligned}
$$

## 3.3 Standard notations and common functions

### 3.3-1

> Show that if $f(n)$ and $g(n)$ are monotonically increasing functions, then so are the functions $f(n)+g(n)$ and $f(g(n))$, and if $f(n)$ and $g(n)$ are in addition nonnegative, then $f(n)\cdot g(n)$ is monotonically increasing.

$$
\begin{aligned}
\forall n_1 \geq n_2: & f(n_1) \geq f(n_2) \And g(n_1) \geq g(n_2)\cr
\implies & f(n_1)+g(n_1) \geq f(n_2)+g(n_2)\cr
g(n_1) \geq g(n_2) \implies & f(g(n_1)) \geq f(g(n_2))\cr
\text{if } f(n) \geq 0 \And g(n) \geq 0:\cr
\implies & f(n_1)\cdot g(n_1) \geq f(n_2)\cdot g(n_1) \geq f(n_2)\cdot g(n_2)
\end{aligned}
$$

### 3.3-2

> Prove that $\lfloor \alpha n \rfloor +\lceil (1-\alpha )n \rceil = n$ for any integer _n_ and real number $\alpha$ in the range $0 \leq \alpha \leq 1$

$$
\begin{aligned}
\lfloor \alpha n \rfloor +\lceil (1-\alpha )n \rceil & = \lfloor \alpha n \rfloor +\lceil -\alpha n\rceil + n\cr
&=\lfloor \alpha n \rfloor - \lfloor \alpha n \rfloor +n\cr
&=n
\end{aligned}
$$

### 3.3-3

> Use equation (3.14) or other means to show that $(n+o(n))^k = \Theta(n^k)$ for any real constant k. Conclude that $\lceil n \rceil ^k$ and $\lfloor n \rfloor ^k = \Theta(n^k)$.

$$
\begin{aligned}
\text{For sufficiently large } n,\ |o(n)|\leq n/2, \text{ so}\cr
(\tfrac 1 2)^k n^k\leq(n+o(n))^k \leq (\tfrac 3 2)^k n^k\cr
\implies (n+o(n))^k = \Theta(n^k)\cr
\lceil n \rceil ^k= (n+o(n))^k=\Theta(n^k)\cr
\lfloor n \rfloor ^k = (n+o(n))^k=\Theta(n^k)\cr
\end{aligned}
$$

### 3.3-4

> Prove the following
>
> **a**. Equation (3.21). $a^{\lg _b c} = c^{\lg _b a}$
>
> **b**. Equations (3.26)-(3.28).
> $n!=o(n^n)$ ...(3.26)
> $n! = \omega(2^n)$ ...(3.27)
> $\lg(n!) = \Theta(n\lg n)$ ...(3.28)
>
> **c**. $\lg(\Theta(n))=\Theta(\lg n)$

- **a**. $a^{\lg _b c} = a^{\frac {\lg _a c}{\lg _a b}}=\big(a^{\lg_a c}\big)^{1/\lg_a b}=c^{1/\lg_a b}=c^{\lg _b a}$
- **b**.
$$
\begin{aligned}
n! & = \sqrt {2\pi n}\Big(\frac n e\Big)^n\Big(1+\Theta\Big(\frac{1}{n}\Big)\Big)\cr
\forall c>0:\lim_{n \to \infty} \frac {n!} {cn^n} & = \lim_{n \to \infty} \frac{\sqrt {2\pi n}} {ce^n}\Big(1+\Theta\Big(\frac 1 n\Big)\Big) = 0
\implies n!=o(n^n)\text{ (equation (3.26))}\cr
\lim_{n \to \infty} \frac {n!} {2^n} & = \lim_{n \to \infty }\sqrt{2\pi n}\Big(1+\Theta\Big(\frac 1 n\Big)\Big)\Big(\frac{n}{2e}\Big)^n = \infty
\implies n! =\omega(2^n) \text{ (equation (3.27))}\cr
\lg (n!) & =n\lg n - n\lg e + \frac{1}{2}\lg(2\pi n)+\lg\Big(1+\Theta\Big(\frac{1}{n}\Big)\Big) = \Theta(n\lg n)\text{ (equation (3.28))}\cr
\end{aligned}
$$

- **c**.

$$
\begin{aligned}
\forall f(n) \in \Theta(n):\cr
c_3\lg n\leq \lg (c_1 n) \leq \lg(f(n))\leq \lg (c_2 n) \leq c_4\lg n\cr
\implies \lg(\Theta(n))=\Theta(\lg n)
\end{aligned}
$$

### 3.3-5

> Is the function $\lceil \lg n \rceil !$ polynomially bounded? Is the function $\lceil \lg \lg n \rceil !$ polynomially bounded?

Polynomially bounded: $f(n) \leq cn^k \implies \lg(f(n))\leq \lg c+k\lg n\implies \lg(f(n))=O(\lg n)$, and conversely $\lg(f(n))=O(\lg n)$ implies $f(n)$ is polynomially bounded.

$$
\begin{aligned}
\lg(\lceil \lg n\rceil !)& =\Theta(\lceil \lg n\rceil \lg(\lceil \lg n\rceil))\cr
&=\Theta(\lg n \lg\lg n)\neq O(\lg n)\cr
&\implies \lceil \lg n\rceil !\text{ is not polynomially bounded}\cr
\lg (\lceil \lg \lg n \rceil !)&=\Theta(\lceil \lg \lg n\rceil \lg\lceil\lg\lg n\rceil)\cr
& =O(\lg^2\lg n)\cr
& =O(\lg n)\quad(\text{since }\lg^2\lg n=o(\lg n))\cr
& \implies\lceil\lg\lg n\rceil ! \text{ is polynomially bounded}
\end{aligned}
$$

### 3.3-6

> Which is asymptotically larger: $\lg(\lg^{\ast}n)$ or $\lg^{\ast}(\lg n)$?

$$
\begin{aligned}
\text{let } n=2^k:\cr
\lim_{n\to\infty}\frac{\lg(\lg^{\ast}n)}{\lg^{\ast}(\lg n)}& = \lim_{k\to\infty}\frac{\lg(\lg^{\ast} 2^k)}{\lg^{\ast}(\lg 2^k)}\cr
&= \lim_{k\to\infty} \frac{\lg(1+\lg^{\ast}k)}{\lg^{\ast}k}\cr
& =\lim_{t\to \infty}\frac{\lg(1+t)}{t}\cr
& = 0
\end{aligned}
$$

$\lg^{\ast}(\lg n)$ is larger.

### 3.3-7

> Show that the golden ratio $\phi$ and its conjugate $\hat{\phi}$ both satisfy the equation $x^2 = x + 1$

$$
\begin{aligned}
ax^2+bx+c=0\implies x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}\cr
x^2-x-1=0 \implies x=\frac{1 \pm\sqrt{5}}{2}=\phi \And \hat\phi
\end{aligned}
$$

### 3.3-8

> Prove by induction that the _i_-th Fibonacci number satisfies the equation
>
> $$
> F_i = (\phi^i - \hat{\phi}^i)/\sqrt{5}
> $$
>
> where $\phi$ is the golden ratio and $\hat{\phi}$ is its conjugate.

For the inductive case:

$$
\begin{aligned}
& \text{facts: }\phi\hat\phi=-1,\phi+\hat\phi=1\cr
\text{assume}:\cr
F_{i-1} &= (\phi^{i-1} - \hat{\phi}^{i-1})/\sqrt{5}\cr
F_{i} &= (\phi^{i} - \hat{\phi}^{i})/\sqrt{5}\cr
\text{then:}\cr
F_{i-1}+F_i & =(\phi^{i-1} - \hat{\phi}^{i-1})/\sqrt{5}+(\phi+\hat\phi)(\phi^i - \hat{\phi}^i)/\sqrt{5}\cr
& = (\phi^{i+1}-\hat\phi^{i+1})/\sqrt 5+((1+\phi\hat\phi)\phi^{i-1}-(1+\phi\hat\phi)\hat\phi^{i-1})/\sqrt 5\cr
& =F_{i+1}\cr
\end{aligned}
$$

For the base case:

$$
\begin{aligned}
(\phi^{1} - \hat{\phi}^{1})/\sqrt{5}=1=F_1\cr
(\phi^{2} - \hat{\phi}^{2})/\sqrt{5}=(\phi^{1}+1 - (\hat{\phi}^{1}+1))/\sqrt{5}=1=F_2\cr
\end{aligned}
$$

Tips: from now on I will omit details like the inductive assumptions on $F_i$ and $F_{i-1}$, and the proof of the base case, for convenience.

### 3.3-9

> Show that $k\lg k = \Theta(n)$ implies $k = \Theta(n/\lg n)$.
$$
\begin{aligned}
& k\lg k=\Theta(n)\cr
\implies & n=\Theta(k\lg k)\cr
\implies & \lg n = \Theta(\lg k + \lg\lg k)=\Theta(\lg k)\cr
\implies & n/\lg n = \Theta(k\lg k)/\Theta(\lg k)=\Theta (k)\cr
\implies & k = \Theta(n/\lg n)
\end{aligned}
$$

# 3-Problems

## 3-1 Asymptotic behavior of polynomials

> Let
>
> $$
> p(n) = \sum_{i=0}^d a_i n^i,
> $$
>
> where $a_d > 0$, be a degree-$d$ polynomial in $n$, and let $k$ be a constant. Use the definitions of the asymptotic notations to prove the following properties.
>
> **a**. if $k \geq d$, then $p(n)=O(n^k)$
>
> **b**. if $k \leq d$, then $p(n)=\Omega(n^k)$
>
> **c**. if $k =d$, then $p(n)=\Theta(n^k)$
>
> **d**. if $k > d$, then $p(n)=o(n^k)$
>
> **e**. if $k < d$, then $p(n)=\omega(n^k)$

(We assume for simplicity that every $a_i\geq 0$; otherwise replace $a_i$ by $|a_i|$ in the upper-bound choices below and enlarge $n_0$ until the $a_dn^d$ term dominates.)

### **a**

To prove $p(n)=O(n^k)$, we only need to prove:

$$
\begin{aligned}
\sum_{i=0}^{d}a_in^i \leq cn^k (\text{ for }n \geq n_0)\cr
\end{aligned}
$$

Which is implied by

$$
\begin{aligned}
c \geq \sum_{i=0}^da_in^{i-k} (\text{ for }n \geq n_0)\cr
\end{aligned}
$$

since $\sum_{i=0}^{d}a_i \geq \sum_{i=0}^da_in^{i-k} (\text{ for }n \geq 1)$, choosing $c = \sum_{i=0}^{d}a_i$ and $n_0 = 1$ satisfies it.

- Tips: the descriptions of $n>n_0$ and $c$ will be omitted from now on.

### **b**

We only need to prove:

$$
\sum_{i=0}^{d}a_in^i \geq cn^k
$$

which is implied by:

$$
\begin{aligned}
c \leq \sum_{i=0}^da_in^{i-k}\cr
c=a_d \leq \sum_{i=0}^da_in^{i-k}\cr
n_0=1
\end{aligned}
$$

### **c**

We only need to prove:

$$
c_1n^d\leq \sum_{i=0}^{d}a_in^i \leq c_2n^{d}
$$

which is implied by:

$$
\begin{aligned}
c_1 \leq \sum_{i=0}^da_in^{i-d} \leq c_2\cr
c_1=a_d \leq \sum_{i=0}^da_in^{i-d} \leq \sum_{i=0}^{d} a_i=c_2\cr
n_0=1
\end{aligned}
$$

### **d**

We only need to prove:

$$
\begin{aligned}
\forall c >0,\exists n_0 \text{ satisfying: }\cr
\sum_{i=0}^{d} a_i n^i < cn^k (\text{ for } n > n_0)\cr
\impliedby c > \sum_{i=0}^{d} a_in^{i-k}\cr
\text{since }\sum_{i=0}^{d} a_in^{d-k} \geq \sum_{i=0}^{d} a_in^{i-k} \text{ and } d-k<0:\cr
n_0=\sqrt[d-k]{\frac {c} {\sum_{i=0}^d a_i}}
\end{aligned}
$$

### **e**

We only need to prove:

$$
\begin{aligned}
\forall c >0,\exists n_0 \text{ satisfying: }\cr
\sum_{i=0}^{d} a_in^i > cn^k (\text{ for }n>n_0)\cr
\impliedby c < \sum_{i=0}^{d} a_in^{i-k}\cr
\text{since } a_d n^{d-k} \leq \sum_{i=0}^{d} a_in^{i-k}:\cr
n_0=\sqrt[d-k]{\frac {c} {a_d}}
\end{aligned}
$$

## 3-2 Relative asymptotic growths

> Indicate, for each pair of expressions $(A, B)$ in the table below whether A is $O, o,\Omega,\omega$, or $\Theta$ of B. Assume that $k \geq 1,\epsilon >0,$ and $c>1$ are constants. Write your answer in the form of the table with "yes" or "no" written in each box.

| A           | B              | O   | o   | $\Omega$ | $\omega$ | $\Theta$ |
| :---------- | :------------- | :-- | :-- | :------- | :------- | :------- |
| $\lg^k n$   | $n^{\epsilon}$ | yes | yes | no       | no       | no       |
| $n^k$       | $c^n$          | yes | yes | no       | no       | no       |
| $\sqrt{n}$  | $n^{\sin n}$   | no  | no  | no       | no       | no       |
| $2^n$       | $2^{n/2}$      | no  | no  | yes      | yes      | no       |
| $n^{\lg c}$ | $c^{\lg n}$    | yes | no  | yes      | no       | yes      |
| $\lg (n!)$  | $\lg (n^n)$    | yes | no  | yes      | no       | yes      |

## 3-3 Ordering by asymptotic growth rates

> **a**. Rank the following functions by order of growth. That is, find an arrangement $g_1,g_2,\dots,g_{30}$ of the functions satisfying $g_1=\Omega(g_2),g_2=\Omega(g_3),\dots ,g_{29}=\Omega(g_{30})$. Partition your list into equivalence classes such that functions $f(n)$ and $g(n)$ belong to the same class if and only if $f(n) = \Theta(g(n))$.
>
> $$
> \begin{array}{}
> & \lg(\lg^{\ast}n) & 2^{\lg^{\ast}n} & (\sqrt {2})^{\lg n} & n^2 & n! & (\lg n)!\cr
> & (3/2)^n & n^3 & \lg^2n & \lg(n!) & 2^{2^n} & n^{1/\lg n}\cr
> & \ln\ln n & \lg^{\ast}n & n\cdot 2^n & n^{\lg\lg n} & \ln n & 1\cr
> & 2^{\lg n} & (\lg n)^{\lg n} & e^n & 4^{\lg n} & (n+1)! & \sqrt{\lg n}\cr
> & \lg^{\ast}(\lg n) & 2^{\sqrt{2 \lg n}}& n & 2^n& n\lg n & 2^{2^{n+1}}\cr
> \end{array}
> $$
>
> **b**. Give an example of a single nonnegative function $f(n)$ such that for all functions $g_i(n)$ in part (a), $f(n)$ is neither $O(g_i(n))$ nor $\Omega (g_i(n))$.

### **a**

$$
\begin{aligned}
2^{2^{n+1}}\cr
2^{2^n}\cr
(n+1)!\cr
n!\cr
e^n\cr
n\cdot 2^n\cr
2^n\cr
(3/2)^n\cr
n^{\lg\lg n} \And (\lg n)^{\lg n}\cr
(\lg n)!\cr
n^3\cr
n^2 \And 4^{\lg n} \cr
\lg(n!) \And n\lg n\cr
2^{\lg n} \And n\cr
(\sqrt {2})^{\lg n}\cr
2^{\sqrt{2\lg n}}\cr
\lg^2 n\cr
\ln n\cr
\sqrt{\lg n}\cr
\ln\ln n\cr
2^{\lg^{\ast}n}\cr
\lg^{\ast}n \And \lg^{\ast}(\lg n)\cr
\lg(\lg^{\ast}n) \cr
n^{1/\lg n}\And 1\cr
\end{aligned}
$$

### **b**

$\left|\sin\frac{n\pi}{2}\right|\cdot 2^{2^{2^{n+1}}}$ — it is $0$ for even $n$ (so it is not $\Omega(g_i(n))$ for any positive $g_i$) and equals $2^{2^{2^{n+1}}}$ for odd $n$ (so it is not $O(g_i(n))$ for any $g_i$ in the list).

## 3-4 Asymptotic notation properties

> Let $f(n)$ and $g(n)$ be asymptotically positive functions. Prove or disprove each of the following conjectures.
>
> **a**. $f(n)=O(g(n))$ implies $g(n)=O(f(n))$.
>
> **b**. $f(n)+g(n)=\Theta(\min(f(n),g(n)))$.
>
> **c**. $f(n)=O(g(n))$ implies $\lg(f(n))=O(\lg(g(n)))$, where $\lg g(n)\geq 1$ and $f(n)\geq 1$ for all sufficiently large $n$.
>
> **d**. $f(n)=O(g(n))$ implies $2^{f(n)} = O(2^{g(n)})$.
>
> **e**. $f(n)=O((f(n))^2)$.
>
> **f**. $f(n)=O(g(n))$ implies $g(n)=\Omega(f(n))$.
>
> **g**. $f(n)=\Theta(f(n/2))$.
>
> **h**. $f(n)+o(f(n))=\Theta(f(n))$.

### a

Disprove: $n=O(n^2)$ but $n^2\neq O(n)$

### b

Disprove: $n+n^2\neq \Theta(\min(n,n^2))=\Theta(n)$

### c

Prove:

$$
\begin{aligned}
0\leq f(n)\leq cg(n) & \implies \lg(f(n))\leq \lg c+\lg(g(n))\cr
\text{to prove: }&\lg(f(n)) \leq d\lg(g(n))\cr
\text{only need to prove: }& \lg c + \lg(g(n)) \leq d\lg(g(n))\cr
\end{aligned}
$$

<span style="color:blue;">

$$
\begin{aligned}
\text{choose } & \quad d = 1+\lg c \text{ (taking } c\geq 1 \text{ WLOG) satisfies}
\end{aligned}
$$

</span>

### d

Disprove: $2n=O(n)$, but $2^{2n}=4^n\neq O(2^n)$, since $4^n/2^n=2^n$ is unbounded.

### e

Disprove: $\frac{1}{n}\neq O(\frac{1}{n^2})$

### f

Prove: $0\leq f(n)\leq cg(n) \implies g(n)\geq \frac 1 c f(n) \implies g(n)=\Omega (f(n))$

### g

Disprove: $2^n\neq\Theta(2^{\frac{n}{2}})$, since $2^n/2^{n/2}=2^{n/2}$ is unbounded.

### h

Prove: for sufficiently large $n$, $|o(f(n))|\leq \frac 1 2 f(n)$, so $\frac 1 2 f(n)\leq f(n)+o(f(n))\leq \frac 3 2 f(n)$.

## 3-5 Manipulating asymptotic notation

> Let $f(n)$ and $g(n)$ be asymptotically positive functions. Prove the following identities:
> **a**. $\Theta(\Theta(f(n)))=\Theta(f(n))$.
>
> **b**. $\Theta(f(n))+O(f(n))=\Theta(f(n))$.
>
> **c**. $\Theta(f(n))+\Theta(g(n)) =\Theta(f(n)+g(n))$.
>
> **d**. $\Theta(f(n))\cdot \Theta(g(n))=\Theta(f(n)\cdot g(n))$.
>
> **e**. Argue that for any real constants $a_1,a_2>0$ and integer constants $k_1,k_2$, the following asymptotic bound holds:
>
> $(a_1 n)^{k_1}\lg^{k_2}(a_2 n)=\Theta(n^{k_1}\lg^{k_2}n)$.
>
> $\star$ **f**. Prove that for $S\subseteq \mathbb{Z}$, we have
>
> $$
> \sum_{k\in S}\Theta(f(k))=\Theta(\sum_{k\in S}f(k)),
> $$
>
> assuming that both sums converge.
>
> $\star$ **g**. Show that for $S \subseteq \mathbb{Z}$, the following asymptotic bound does not necessarily hold, even assuming that both products converge, by giving a counterexample:
>
> $$
> \prod_{k\in S}\Theta(f(k))=\Theta(\prod_{k\in S}f(k)).
> $$

### **a**

$$
\begin{aligned}
g(n)=\Theta(f(n))\cr
\implies c_1 f(n)\leq g(n) \leq c_2 f(n)\cr
h(n)=\Theta(g(n))\cr
\implies c_1 d_1 f(n)\leq d_1 g(n)\leq h(n)\leq d_2g(n)\leq c_2 d_2 f(n)\cr
\implies h(n)=\Theta(f(n))\cr
\implies\Theta(\Theta(f(n)))=\Theta(f(n))\cr
\end{aligned}
$$

### **b**

$$
\begin{aligned}
\forall g(n)=\Theta(f(n)) \And h(n)=O(f(n)):\cr
\implies c_1 f(n) \leq g(n)\leq c_2 f(n) \And 0 \leq h(n)\leq df(n)\cr
\implies c_1f(n)\leq g(n)+h(n)\leq (c_2+d)f(n)\cr
\implies g(n)+h(n)=\Theta(f(n))\cr
\implies \Theta(f(n))+O(f(n))=\Theta(f(n))\cr
\end{aligned}
$$

### **c**

$$
\begin{aligned}
\forall A(n)=\Theta(f(n))\And B(n)=\Theta(g(n)):\cr
\implies c_1 f(n)\leq A(n)\leq c_2 f(n)\And d_1 g(n)\leq B(n) \leq d_2 g(n)\cr
\implies \min(c_1,d_1)(f(n)+g(n))\leq A(n)+B(n)\leq \max(c_2,d_2)(f(n)+g(n))\cr
\implies \Theta(f(n))+\Theta(g(n)) =\Theta(f(n)+g(n))
\end{aligned}
$$

### **d**

$$
\begin{aligned}
\forall A(n)=\Theta(f(n))\And B(n)=\Theta(g(n)):\cr
\implies c_1 f(n)\leq A(n)\leq c_2 f(n)\And d_1 g(n)\leq B(n) \leq d_2 g(n)\cr
\implies c_1 d_1f(n)\cdot g(n)\leq A(n)\cdot B(n)\leq c_2 d_2f(n)\cdot g(n)\cr
\implies \Theta(f(n))\cdot \Theta(g(n)) =\Theta(f(n)\cdot g(n))
\end{aligned}
$$

### **e**

$$
\begin{aligned}
\text{we only need to prove: }c_1 n^{k_1}\lg^{k_2}n\leq (a_1 n)^{k_1}\lg^{k_2}(a_2 n)\leq c_2 n^{k_1}\lg^{k_2}n\cr
\text{which is implied by (the factor } a_1^{k_1} \text{ is a constant): }d_1 \lg^{k_2}n\leq\lg^{k_2}(a_2 n)\leq d_2\lg^{k_2}n\cr
\text{which is implied by: }e_1\lg n\leq \lg a_2+\lg n \leq e_2 \lg n\cr
\text{which is implied by: }e_1\leq \lg a_2/\lg n +1 \leq e_2\cr
\text{choose }e_1=1/2 \quad e_2=2 \quad n_0=\max(2,a_2^2,a_2^{-2}) \text{ satisfies}\cr
\end{aligned}
$$

### $\star$ **f**

$$
\begin{aligned}
g(k)=\Theta(f(k))\implies \exists c,d>0: cf(k)\leq g(k)\leq df(k) \text{ for every } k\in S\cr
\implies c \sum_{k\in S} f(k)\leq \sum_{k\in S} g(k)\leq d\sum_{k\in S} f(k)\cr
\implies \sum_{k\in S}\Theta(f(k))=\Theta(\sum_{k\in S}f(k))\cr
\end{aligned}
$$

(If the $\Theta$ constants only hold for $k\geq k_0$, the finitely many remaining terms change each convergent sum by at most a constant.)

### $\star$ **g**

Let $S=\lbrace 1,\dots,n\rbrace$ and $f(k)=1$. Each factor $2$ is $\Theta(f(k))=\Theta(1)$, but $\prod_{k=1}^{n}2=2^n\neq \Theta(\prod_{k=1}^{n}1)=\Theta(1)$.

## 3-6 Variations on $O$ and $\Omega$

> Some authors define $\Omega$-notation in a slightly different way than this textbook does. We'll use the nomenclature $\stackrel{\infty}{\Omega}$ (read "omega infinity") for this alternative definition. We say that $f(n) = \stackrel{\infty}{\Omega}(g(n))$ if there exists a positive constant $c$ such that $f(n) \geq cg(n) \geq 0$ for infinitely many integers n.
>
> **a**. Show that for any two asymptotically nonnegative functions $f(n)$ and $g(n)$, we have $f(n)=O(g(n))$ or $f(n)=\stackrel{\infty}{\Omega}(g(n))$ (or both).
>
> **b**. Show that there exist two asymptotically nonnegative functions $f(n)$ and $g(n)$ for which neither $f(n) = O(g(n))$ nor $f(n) = \Omega(g(n))$ holds.
>
> **c**. Describe the potential advantages and disadvantages of using $\stackrel{\infty}{\Omega}$-notation instead of $\Omega$-notation to characterize the running times of programs.
>
> Some authors also define $O$ in a slightly different manner. We’ll use $O'$ for the alternative definition: $f(n)=O'(g(n))$ if and only if $|f(n)|=O(g(n))$.
>
> **d**. What happens to each direction of the "if and only if" in **Theorem 3.1** on page 56 if we substitute $O'$ for $O$ but still use $\Omega$?
>
> Some authors define $\tilde{O}$ (read "soft-oh") to mean O with logarithmic factors ignored:
>
> $$
> \begin{aligned}
> \tilde{O}(g(n))=\lbrace f(n) : & \text{ there exist positive constants }c, k, \text{ and } n_0 \text{ such that }\cr
> & 0\leq f(n)\leq cg(n)\lg^k(n) \text{ for all } n\geq n_0\rbrace .\cr
> \end{aligned}
> $$
>
> **e**. Define $\tilde{\Omega}$ and $\tilde{\Theta}$ in a similar manner. Prove the corresponding analog to **Theorem 3.1**.

### **a**

Suppose $f(n)\neq O(g(n))$. Then for every $c>0$ there are infinitely many integers $n$ with $f(n) > cg(n)\geq 0$: otherwise we could take $n_0$ to be the last $n$ with $f(n)> cg(n)$, and then $f(n)\leq cg(n)$ for all $n> n_0$ would give $f(n)=O(g(n))$. Hence $f(n)=\stackrel{\infty}{\Omega}(g(n))$.

### **b**

$f(n)=n\left|\sin\frac{n\pi}{2}\right|,\ g(n)=1$: $f(n)=0$ for even $n$, so $f(n)\neq\Omega(g(n))$, and $f(n)=n$ for odd $n$, so $f(n)\neq O(g(n))$.

### **c**

- Advantages: by part (a), $O$ and $\stackrel{\infty}{\Omega}$ together can characterize the relationship between any two functions.
- Disadvantages: $\stackrel{\infty}{\Omega}$ is imprecise — it only says the lower bound holds infinitely often, so it tells us little about the running time on most inputs.

### **d**

The direction $f(n)=\Theta(g(n)) \implies f(n)=\Omega(g(n)) \And f(n)=O'(g(n))$ still holds, since $O(g(n))\subseteq O'(g(n))$. The converse also still holds: $f(n)=\Omega(g(n))$ forces $0\leq cg(n)\leq f(n)$ for all large $n$, so $|f(n)|=f(n)$ eventually, $f(n)=O'(g(n))$ coincides with $f(n)=O(g(n))$, and Theorem 3.1 applies. So both directions survive the substitution.

### **e**

$$
\begin{aligned}
\tilde\Omega(g(n))=\lbrace f(n): & \text{ there exist positive constants }c,k,\text{ and } n_0 \text{ such that}\cr
& 0\leq cg(n)/\lg^{k}(n) \leq f(n) \text{ for all }n \geq n_0\rbrace .\cr
\tilde\Theta(g(n))=\lbrace f(n): & \text{ there exist positive constants }c_1,c_2,k,\text{ and } n_0 \text{ such that}\cr
& 0\leq c_1g(n)/\lg^{k}(n)\leq f(n) \leq c_2g(n)\lg^{k}(n) \text{ for all }n \geq n_0\rbrace .\cr
\end{aligned}
$$

With these definitions, $f(n)=\tilde\Theta(g(n))\iff f(n)=\tilde O(g(n)) \And f(n)=\tilde\Omega (g(n))$: each direction follows exactly as in Theorem 3.1, taking the $k$ for $\tilde\Theta$ to be the larger of the exponents from $\tilde O$ and $\tilde\Omega$.

## 3-7 Iterated functions

> We can apply the iteration operator \* used in the $\lg^{\ast}$ function to any monotonically increasing function $f(n)$ over the reals. For a given constant $c \in \mathbb{R}$, we define the iterated function $f_{c}^{\ast}$ by
>
> $f_{c}^{\ast}(n)=\min \lbrace i\geq0:f^{(i)}(n)\leq c\rbrace$,
>
> which need not be well defined in all cases. In other words, the quantity $f_{c}^{\ast}(n)$ is the minimum number of iterated applications of the function $f$ required to reduce its argument down to $c$ or less.
>
> For each of the functions $f(n)$ and constants $c$ in the table below, give as tight a bound as possible on $f_{c}^{\ast}(n)$. If there is no $i$ such that $f^{(i)}(n) \leq c$, write "undefined" as your answer.

| $f(n)$     | $c$ | $f_{c}^{\ast}(n)$     |
| :--------- | :-- | :-------------------- |
| $n-1$      | $0$ | $\Theta(n)$           |
| $\lg n$    | $1$ | $\Theta(\lg^{\ast}n)$ |
| $n/2$      | $1$ | $\Theta(\lg n)$       |
| $n/2$      | $2$ | $\Theta(\lg n)$       |
| $\sqrt{n}$ | $2$ | $\Theta(\lg\lg n)$    |
| $\sqrt{n}$ | $1$ | undefined             |
| $n^{1/3}$  | $2$ | $\Theta(\log_3\lg n)$ |

# 4-Exercises

## 4.1 Multiplying square matrices

> Note: You may wish to read Section 4.5 before attempting some of these exercises.

### 4.1-1

> Generalize **MATRIX-MULTIPLY-RECURSIVE** to multiply $n\times n$ matrices for which $n$ is not necessarily an exact power of 2. Give a recurrence describing its running time. Argue that it runs in $\Theta(n^3)$ time in the worst case.
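The pseudocode answer follows below; as a concrete companion, here is a minimal C++ sketch of the same padding idea. This is only a sketch: `multiply_pow2` stands in for **MATRIX-MULTIPLY-RECURSIVE** and is written as a plain cubic loop so the program is self-contained, and all names are hypothetical.

```cpp
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Stand-in for MATRIX-MULTIPLY-RECURSIVE: any multiplier that requires
// m to be an exact power of 2 could be plugged in here instead.
static void multiply_pow2(const Matrix& A, const Matrix& B, Matrix& C, int m) {
    for (int i = 0; i < m; i++)
        for (int k = 0; k < m; k++)
            for (int j = 0; j < m; j++)
                C[i][j] += A[i][k] * B[k][j];
}

// Pad to the next power of 2, multiply, and copy the n x n result back into C.
void multiply_general(const Matrix& A, const Matrix& B, Matrix& C, int n) {
    int m = 1;
    while (m < n) m *= 2;                        // next power of 2: O(lg n)
    Matrix Ap(m, std::vector<double>(m, 0.0));   // zero-padded copies: O(n^2)
    Matrix Bp(m, std::vector<double>(m, 0.0));
    Matrix Cp(m, std::vector<double>(m, 0.0));
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++) {
            Ap[i][j] = A[i][j];
            Bp[i][j] = B[i][j];
        }
    multiply_pow2(Ap, Bp, Cp, m);                // Theta(m^3) = Theta(n^3), since m < 2n
    for (int i = 0; i < n; i++)                  // copy back: O(n^2)
        for (int j = 0; j < n; j++)
            C[i][j] = Cp[i][j];
}
```

Since $n \leq m < 2n$, padding at most doubles the dimension, so the $\Theta(n^3)$ bound is unaffected.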
```cpp
MATRIX-MULTIPLY-RECURSIVE-GENERAL(A, B, C, n)
  m = the smallest power of 2 with m >= n      // O(lg n); m == n if n is a power of 2
  if m > n
      extend A, B, and C with zeros to m x m   // O(n^2)
  MATRIX-MULTIPLY-RECURSIVE(A, B, C, m)        // Theta(m^3) = Theta(n^3), since m < 2n
  take the n x n upper-left part of C as the result, by index calculation  // O(1)
  return
```

running time $= \Theta(m^3)+O(n^2)+O(\lg n)+O(1)=\Theta(n^3)$, since $n\leq m<2n$ implies $\Theta(m^3)=\Theta(n^3)$.

### 4.1-2

> How quickly can you multiply a $kn\times n$ matrix (kn rows and n columns) by an $n\times kn$ matrix, where $k\geq 1$ , using **MATRIX-MULTIPLY-RECURSIVE** as a subroutine? Answer the same question for multiplying an $n\times kn$ matrix by a $kn\times n$ matrix. Which is asymptotically faster, and by how much?

- Multiplying a $kn\times n$ matrix by an $n\times kn$ matrix produces $k^2$ blocks of size $n\times n$, requiring $k^2$ calls of **MATRIX-MULTIPLY-RECURSIVE(A,B,C,n)** and taking $\Theta(k^2n^3)$ in total.
- Multiplying an $n\times kn$ matrix by a $kn\times n$ matrix is the sum of $k$ products of $n\times n$ blocks, requiring $k$ calls of **MATRIX-MULTIPLY-RECURSIVE(A,B,C,n)** plus $k-1$ matrix additions, taking $\Theta(kn^3+kn^2)=\Theta(kn^3)$ in total.
- The latter is asymptotically faster, by a factor of $k$.

### 4.1-3

> Suppose that instead of partitioning matrices by index calculation in **MATRIX-MULTIPLY-RECURSIVE**, you copy the appropriate elements of A, B, and C into separate $n/2\times n/2$ submatrices $A_{11},A_{12},A_{21},A_{22};B_{11},B_{12},B_{21},B_{22}$; and $C_{11},C_{12},C_{21},C_{22}$, respectively. After the recursive calls, you copy the results from $C_{11},C_{12},C_{21}$, and $C_{22}$ back into the appropriate places in $C$. How does recurrence (4.9) change, and what is its solution?

Recurrence (4.9) becomes $T(n)=8T(n/2)+\Theta(n^2)$.

According to the master theorem (case 1), $T(n)=\Theta(n^3)$.

### 4.1-4

> Write pseudocode for a divide-and-conquer algorithm **MATRIX-ADD-RECURSIVE** that sums two $n\times n$ matrices $A$ and $B$ by partitioning each of them into four $n/2\times n/2$ submatrices and then recursively summing corresponding pairs of submatrices. Assume that matrix partitioning uses $\Theta(1)$-time index calculations. Write a recurrence for the worst-case running time of **MATRIX-ADD-RECURSIVE**, and solve your recurrence. What happens if you use $\Theta(n^2)$-time copying to implement the partitioning instead of index calculations?

```cpp
MATRIX-ADD-RECURSIVE(A, B, C, n)
  if n == 1
      c11 = a11 + b11
      return
  partition A, B, C into n/2 x n/2 submatrices
      A11, A12, A21, A22; B11, B12, B21, B22; and C11, C12, C21, C22
  MATRIX-ADD-RECURSIVE(A11, B11, C11, n/2)
  MATRIX-ADD-RECURSIVE(A12, B12, C12, n/2)
  MATRIX-ADD-RECURSIVE(A21, B21, C21, n/2)
  MATRIX-ADD-RECURSIVE(A22, B22, C22, n/2)
```

$T(n)=4T(n/2)+\Theta(1)$. According to the master theorem, $T(n)=\Theta(n^2)$.

If the partitioning uses copying, $T(n)=4T(n/2)+\Theta(n^2)\implies T(n)=\Theta(n^2\lg n)$ (case 2 of the master theorem).

## 4.2 Strassen’s algorithm for matrix multiplication

> Note: You may wish to read Section 4.5 before attempting some of these exercises.

### 4.2-1

> Use Strassen’s algorithm to compute the matrix product
>
> $$
> \begin{pmatrix}
> 1 & 3\cr
> 7 & 5
> \end{pmatrix}
> \begin{pmatrix}
> 6 & 8\cr
> 4 & 2
> \end{pmatrix}
> $$
>
> Show your work.
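The hand computation follows; as a quick mechanical cross-check, this little C++ program (a sketch specific to this $2\times 2$ instance) evaluates the seven Strassen products and prints the resulting matrix:

```cpp
#include <cstdio>

int main() {
    // A = [[1,3],[7,5]], B = [[6,8],[4,2]]
    int a11 = 1, a12 = 3, a21 = 7, a22 = 5;
    int b11 = 6, b12 = 8, b21 = 4, b22 = 2;
    int p1 = a11 * (b12 - b22);          // A11 * S1
    int p2 = (a11 + a12) * b22;          // S2 * B22
    int p3 = (a21 + a22) * b11;          // S3 * B11
    int p4 = a22 * (b21 - b11);          // A22 * S4
    int p5 = (a11 + a22) * (b11 + b22);  // S5 * S6
    int p6 = (a12 - a22) * (b21 + b22);  // S7 * S8
    int p7 = (a11 - a21) * (b11 + b12);  // S9 * S10
    printf("%d %d\n", p5 + p4 - p2 + p6, p1 + p2);  // prints: 18 14
    printf("%d %d\n", p3 + p4, p5 + p1 - p3 - p7);  // prints: 62 66
}
```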
$$
\begin{aligned}
S_1 & = B_{12} - B_{22} = 8 - 2 = 6\cr
S_2 & = A_{11} + A_{12} = 1 + 3 = 4\cr
S_3 & = A_{21} + A_{22} = 7 + 5 = 12\cr
S_4 & = B_{21} - B_{11} = 4 - 6 = -2\cr
S_5 & = A_{11} + A_{22} = 1 + 5 = 6\cr
S_6 & = B_{11} + B_{22} = 6 + 2 = 8\cr
S_7 & = A_{12} - A_{22} = 3 - 5 = -2\cr
S_8 & = B_{21} + B_{22} = 4 + 2 = 6\cr
S_9 & = A_{11} - A_{21} = 1 - 7 = -6\cr
S_{10} & = B_{11} + B_{12} = 6 + 8 = 14\cr
P_1 & = A_{11} \cdot S_1 = 1 * 6 = 6\cr
P_2 & = S_2 \cdot B_{22}=4 * 2 = 8\cr
P_3 & = S_3 \cdot B_{11} = 12 * 6 = 72\cr
P_4 & = A_{22} \cdot S_{4} = 5 * (-2) = -10\cr
P_5 & = S_5 \cdot S_6 = 6 * 8 = 48\cr
P_6 & = S_7 \cdot S_8 = -2 * 6 = -12\cr
P_7 & = S_9 \cdot S_{10} = -6 * 14 = -84\cr
C_{11} & = C_{11} + P_5 + P_4 - P_2 + P_6 = 48 + (-10) - 8 + (-12) = 18\cr
C_{12} & = C_{12} + P_1 + P_2 = 6 + 8 = 14\cr
C_{21} & = C_{21} + P_3 + P_4 = 72 + (-10) = 62\cr
C_{22} & =C_{22} + P_5 + P_1 - P_3 - P_7 = 48 + 6 - 72 - (-84) = 66\cr
C & =
\begin{pmatrix}
18 & 14\cr
62 & 66\cr
\end{pmatrix}
\end{aligned}
$$

### 4.2-2

> Write pseudocode for Strassen’s algorithm.

```cpp
STRASSEN(A, B, C, n)
  if n == 1
      c11 = c11 + a11 * b11
      return
  partition A, B, C into n/2 x n/2 submatrices
      A11, A12, A21, A22; B11, B12, B21, B22; C11, C12, C21, C22
  // compute S1..S10: O(n^2)
  S1 = B12 - B22
  S2 = A11 + A12
  S3 = A21 + A22
  S4 = B21 - B11
  S5 = A11 + A22
  S6 = B11 + B22
  S7 = A12 - A22
  S8 = B21 + B22
  S9 = A11 - A21
  S10 = B11 + B12
  // create P1..P7, initialized to zero: O(n^2)
  create P1..P7
  // recurse
  STRASSEN(A11, S1, P1, n/2)
  STRASSEN(S2, B22, P2, n/2)
  STRASSEN(S3, B11, P3, n/2)
  STRASSEN(A22, S4, P4, n/2)
  STRASSEN(S5, S6, P5, n/2)
  STRASSEN(S7, S8, P6, n/2)
  STRASSEN(S9, S10, P7, n/2)
  // combine: O(n^2)
  C11 = C11 + P5 + P4 - P2 + P6
  C12 = C12 + P1 + P2
  C21 = C21 + P3 + P4
  C22 = C22 + P5 + P1 - P3 - P7
```

### 4.2-3

> What is the largest $k$ such that if you can multiply $3\times 3$ matrices using $k$ multiplications (not assuming commutativity of multiplication), then you can multiply $n\times n$ matrices in $o(n^{\lg 7})$ time? What is the running time of this algorithm?

$$
\begin{aligned}
& T(n)=kT(n/3)+\Theta(n^2)\implies T(n)=\Theta(n^{\log_{3} k})\ (\text{for }\log_3 k>2)\cr
& n^{\log_3 k}=o(n^{\lg 7}) \iff \log_{3} k < \lg 7 \iff k < 3^{\lg 7}\approx 21.85\cr
\implies & \max(k) = 21\cr
& \text{running time } = \Theta(n^{\log_{3} 21})=O(n^{2.78})\cr
\end{aligned}
$$

### 4.2-4

> V. Pan discovered a way of multiplying $68\times 68$ matrices using 132,464 multiplications, a way of multiplying $70\times 70$ matrices using 143,640 multiplications, and a way of multiplying $72\times 72$ matrices using 155,424 multiplications. Which method yields the best asymptotic running time when used in a divide-and-conquer matrix-multiplication algorithm? How does it compare with Strassen’s algorithm?

$n^{\log_{68}132464}=n^{2.79513};n^{\log_{70} 143640}=n^{2.79512};n^{\log_{72} 155424}=n^{2.79515};n^{\lg 7}=n^{2.80735}$

The second method is the best; it is better than Strassen's algorithm.

### 4.2-5

> Show how to multiply the complex numbers $a+bi$ and $c+di$ using only three multiplications of real numbers. The algorithm should take $a,b,c$, and $d$ as input and produce the real component $ac-bd$ and the imaginary component $ad+bc$ separately.

$$
\begin{aligned}
S_1=a(c-d)=ac-ad\cr
S_2=b(c+d)=bc+bd\cr
S_3=d(a-b)=ad-bd\cr
S_1+S_3=ac-bd\cr
S_2+S_3=ad+bc\cr
\end{aligned}
$$

### 4.2-6

> Suppose that you have a $\Theta(n^{\alpha})$-time algorithm for squaring $n \times n$ matrices, where $\alpha \geq 2$. Show how to use that algorithm to multiply two different $n \times n$ matrices in $\Theta(n^{\alpha})$ time.
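Note first that matrix multiplication is not commutative, so the tempting identity $AB=((A+B)^2-(A-B)^2)/4$ is wrong: expanding gives $(A+B)^2-(A-B)^2=2(AB+BA)$, which is not $4AB$ in general. A tiny C++ check with hard-coded $2\times 2$ matrices (a sketch):

```cpp
#include <array>
#include <cstdio>

using M2 = std::array<std::array<int, 2>, 2>;

static M2 mul(const M2& X, const M2& Y) {
    M2 Z{};  // zero-initialized
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            for (int k = 0; k < 2; k++)
                Z[i][j] += X[i][k] * Y[k][j];
    return Z;
}

int main() {
    M2 A = {{{0, 1}, {0, 0}}};
    M2 B = {{{0, 0}, {1, 0}}};
    M2 S = {{{0, 1}, {1, 0}}};   // A + B
    M2 D = {{{0, 1}, {-1, 0}}};  // A - B
    M2 S2 = mul(S, S), D2 = mul(D, D), AB = mul(A, B);
    // (A+B)^2 - (A-B)^2 = [[2,0],[0,2]], but 4AB = [[4,0],[0,0]]: they differ.
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            printf("(%d vs %d)%c", S2[i][j] - D2[i][j], 4 * AB[i][j],
                   j == 1 ? '\n' : ' ');
}
```

The reduction below avoids this pitfall by embedding $A$ and $B$ in a block matrix whose square contains $AB$.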
$$
\begin{aligned}
D=
\begin{pmatrix}
0 & A\cr
B & 0
\end{pmatrix}
\implies D^2=
\begin{pmatrix}
AB & 0\cr
0 & BA
\end{pmatrix}
\end{aligned}
$$

Square the $2n\times 2n$ matrix $D$ in $\Theta((2n)^{\alpha})=\Theta(n^{\alpha})$ time and read $A\cdot B$ off the upper-left block.

## 4.3 The substitution method for solving recurrences

### 4.3-1

> Use the substitution method to show that each of the following recurrences defined on the reals has the asymptotic solution specified:
>
> **a**. $T(n)=T(n-1)+n$ has solution $T(n) = O(n^2)$
>
> **b**. $T(n)=T(n/2)+\Theta(1)$ has solution $T(n)=O(\lg n)$
>
> **c**. $T(n)=2T(n/2)+n$ has solution $T(n)=O(n\lg n)$.
>
> **d**. $T(n)=2T(n/2+17)+n$ has solution $T(n) = O(n\lg n)$.
>
> **e**. $T(n)=2T(n/3)+\Theta(n)$ has solution $T(n)=\Theta(n)$
>
> **f**. $T(n) = 4T(n/2)+\Theta(n)$ has solution $T(n)=\Theta(n^2)$.

**a**.

$$
\begin{aligned}
\text{inductive case: } &\cr
& \text{assume }T(n')\leq cn'^2 \text{ for all } n_0 \leq n'< n\cr
T(n)& =T(n-1)+n\cr
& \leq c(n-1)^2+n\cr
& = cn^2+(1-2c)n+c\cr
& \leq cn^2\cr
& \text{where the last step holds for }c > \frac{1}{2} \text{ and } n\geq c/(2c-1)\cr
\text{base case: } &\cr
T(n) & \leq cn^2 \text{ for all }n \leq n_0 \text{ holds if we choose } c \text{ large enough}\cr
\end{aligned}
$$

- Tips: the base case and details like $\text{for all } n_0 \leq n'< n$ will be omitted in this kind of problem for convenience.

**b**. assume $T(n) \leq c\lg n$

$$
\begin{aligned}
T(n)&=T(n/2)+c'\cr
&\leq c\lg n -c\lg 2 +c'\cr
&\leq c\lg n\cr
\end{aligned}
$$

the last step holds for $c>c'/\lg 2$.

**c**. assume $T(n)\leq cn\lg n$

$$
\begin{aligned}
T(n) & = 2T(n/2)+n\cr
& \leq 2c(n/2)\lg (n/2) +n\cr
& = cn\lg n + (1-c\lg 2)n\cr
& \leq cn\lg n
\end{aligned}
$$

the last step holds for $c\geq 1/\lg 2$.

**d**. assume $T(n)\leq c(n-34)\lg (n-34)$

$$
\begin{aligned}
T(n)&=2T(n/2+17)+n\cr
& \leq 2c(n/2-17)\lg(n/2-17)+n\cr
& = c(n-34)\lg\frac{n-34}{2}+n\cr
& = c(n-34)\lg (n-34)-c(n-34)+n\cr
& \leq c(n-34)\lg (n-34)\cr
\end{aligned}
$$

the last step holds for $c\geq 2$ and $n\geq 68$, since then $c(n-34)\geq n$; and $c(n-34)\lg(n-34)\leq cn\lg n$ gives $T(n)=O(n\lg n)$.

**e**. assume $c_{1}n\leq T(n)\leq c_{2}n$

$$
\begin{aligned}
T(n) & =2T(n/3)+\Theta(n)\cr
& \geq 2(c_{1}n/3)+c_{3}n\cr
& =(2c_{1}/3+c_{3})n\cr
& \geq c_{1}n\cr
\end{aligned}
$$

the last step holds for $c_{1}\leq 3c_{3}$, where $c_{3}$ is the lower-bound constant in $\Theta(n)$.

$$
\begin{aligned}
T(n) & =2T(n/3)+c_{4}n\cr
& \leq (2c_{2}/3+c_{4})n\cr
& \leq c_{2}n\cr
\end{aligned}
$$

the last step holds for $c_{2}\geq 3c_{4}$, where $c_{4}$ is the upper-bound constant in $\Theta(n)$.

**f**. assume $c_{1}n^2 \leq T(n)\leq c_{2}n^2-c_{0}n$

$$
\begin{aligned}
T(n) & =4T(n/2)+\Theta(n)\cr
& \geq c_{1}n^2+c_{3}n\cr
& \geq c_{1}n^2\cr
T(n) & =4T(n/2)+\Theta(n)\cr
& \leq c_{2}n^2-2c_{0}n+c_{4}n\cr
& \leq c_{2}n^2-c_{0}n\cr
\end{aligned}
$$

the last step holds for $c_{0}\geq c_{4}$, where $c_{3}$ is the lower-bound constant in $\Theta(n)$ and $c_{4}$ is the upper-bound constant in $\Theta(n)$.

### 4.3-2

> The solution to the recurrence $T(n) = 4T(n/2) + n$ turns out to be $T(n) = \Theta(n^2)$. Show that a substitution proof with the assumption $T(n) \leq cn^2$ fails. Then show how to subtract a lower-order term to make a substitution proof work.

assume $T(n)\leq cn^2$

$$
\begin{aligned}
T(n) & =4T(n/2)+n\cr
& \leq cn^2+n\cr
\end{aligned}
$$

We cannot conclude $T(n)\leq cn^2$ from $T(n)\leq cn^2+n$, so the proof fails.

assume $T(n)\leq c_{1}n^2-c_{2}n$

$$
\begin{aligned}
T(n) & =4T(n/2)+n\cr
& \leq c_{1}n^2-2c_{2}n+n\cr
& \leq c_{1}n^2-c_{2}n\cr
\end{aligned}
$$

the last step holds for $c_{2}\geq 1$

### 4.3-3

> The recurrence $T(n)=2T(n-1)+1$ has the solution $T(n) = O(2^n)$. Show that a substitution proof fails with the assumption $T(n)\leq c2^n$, where $c > 0$ is constant. Then show how to subtract a lower-order term to make a substitution proof work.
assume $T(n)\leq c2^{n}$

$$
\begin{aligned}
T(n) & =2T(n-1)+1\cr
& \leq 2\cdot c2^{n-1}+1\cr
& =c2^{n}+1\cr
\end{aligned}
$$

We cannot conclude $T(n)\leq c2^{n}$ from $T(n)\leq c2^{n}+1$, so the proof fails.

assume $T(n)\leq c2^{n}-d$

$$
\begin{aligned}
T(n) & =2T(n-1)+1\cr
& \leq 2\cdot c2^{n-1}-2d+1\cr
& \leq c2^{n}-d\cr
\end{aligned}
$$

the last step holds for $d\geq 1$

## 4.4 The recursion-tree method for solving recurrences

### 4.4-1

> For each of the following recurrences, sketch its recursion tree, and guess a good asymptotic upper bound on its solution. Then use the substitution method to verify your answer.
>
> **a**. $T(n)=T(n/2)+n^3$.
>
> **b**. $T(n) = 4T(n/3)+n$.
>
> **c**. $T(n) = 4T(n/2)+n$.
>
> **d**. $T(n) = 3T(n-1)+1$.

**a**.

![4.4-1.a](https://hackmd.io/_uploads/SkiHhJ43yl.png)

guess $T(n)=O(n^3)$, assume $T(n)\leq cn^3$

$$
\begin{aligned}
T(n) & = T(n/2)+n^3\cr
& \leq \frac{cn^3}{8}+n^3\cr
& = (1+\frac{c}{8})n^3\cr
& \leq cn^3\cr
\end{aligned}
$$

the last step holds for $c\geq 8/7$

**b**.

![4.4-1.b](https://hackmd.io/_uploads/HyqE21N31e.png)

guess $T(n)=O(n^{\log_{3} 4})$, assume $T(n)\leq cn^{\log_3 4}-dn$

$$
\begin{aligned}
T(n) & =4T(n/3)+n\cr
& \leq cn^{\log_{3} 4} -\frac{4dn}{3}+ n\cr
& \leq cn^{\log_{3} 4}-dn\cr
\end{aligned}
$$

the last step holds for $d\geq 3$

**c**.

![4.4-1.c](https://hackmd.io/_uploads/Sko7nyV31x.png)

guess $T(n)=O(n^2)$, assume $T(n)\leq cn^2-dn$

$$
\begin{aligned}
T(n) & =4T(n/2)+n\cr
& \leq cn^2-2dn+n\cr
& \leq cn^2-dn\cr
\end{aligned}
$$

the last step holds for $d\geq 1$

**d**.

![4.4-1.d](https://hackmd.io/_uploads/HJGf3yE21e.png)

guess $T(n)=O(3^n)$, assume $T(n)\leq c3^n-d$

$$
\begin{aligned}
T(n) & =3T(n-1)+1\cr
& \leq c3^{n}-3d +1\cr
& \leq c3^n-d\cr
\end{aligned}
$$

the last step holds for $d\geq 1/2$

### 4.4-2

> Use the substitution method to prove that recurrence (4.15) has the asymptotic lower bound $L(n)=\Omega(n)$. Conclude that $L(n)=\Theta(n)$.

assume $L(n)\geq cn$

$$
\begin{aligned}
L(n) & =L(n/3)+L(2n/3)\cr
& \geq c(n/3)+c(2n/3)\cr
& = cn\cr
\end{aligned}
$$

since the textbook has proved that $L(n)=O(n)$, and now $L(n)=\Omega(n)$ is proved, we can conclude that $L(n)=\Theta(n)$.

### 4.4-3

> Use the substitution method to prove that recurrence (4.14) has the solution $T(n) = \Omega(n\lg n)$. Conclude that $T(n)=\Theta(n\lg n)$.

assume $T(n)\geq cn\lg n$

$$
\begin{aligned}
T(n) & =T(n/3)+T(2n/3)+\Theta(n)\cr
& \geq c(n/3)\lg(n/3)+c(2n/3)\lg(2n/3)+dn\cr
& = cn\lg{n}-cn(\lg 3-2/3)+dn\cr
& \geq cn\lg{n}\cr
\end{aligned}
$$

the last step holds for $0<c\leq d/(\lg 3-2/3)$, where $d$ is the lower-bound constant of the $\Theta(n)$ term. Since the textbook proved $T(n)=O(n\lg n)$, we conclude $T(n)=\Theta(n\lg n)$.

### 4.4-4

> Use a recursion tree to justify a good guess for the solution to the recurrence $T(n)=T(\alpha n)+T((1-\alpha)n)+\Theta(n)$, where $\alpha$ is a constant in the range $0 < \alpha < 1$.

assume $\alpha\leq 1/2$; otherwise let $\beta=1-\alpha<1/2$ and argue with $\beta$ in place of $\alpha$.

let $L(n)$ denote the number of leaves; then $L(n)=L(\alpha n)+L((1-\alpha)n)\implies L(n)=\Theta(n)$

the smallest depth of a leaf is $\log_{1/\alpha}n$, while the largest is $\log_{1/(1-\alpha)}n$; the total cost over all nodes at depth $i$, for $i=0,1,\dots,\log_{1/\alpha}n-1$, is $\Theta(n)$

$$
\begin{aligned}
& \sum_{i=0}^{\log_{1/\alpha}n-1}c_{1}n+d_{1}n\leq T(n)\leq \sum_{i=0}^{\log_{1/(1-\alpha)}n-1}c_{2}n+d_{2}n\cr
\implies & c_{1}n\log_{1/\alpha}n+d_{1}n \leq T(n) \leq c_{2}n\log_{1/(1-\alpha)}n+d_{2}n\cr
\implies & T(n)=\Theta(n\lg n)\cr
\end{aligned}
$$

## 4.5 The master method for solving recurrences

### 4.5-1

> Use the master method to give tight asymptotic bounds for the following recurrences.
>
> **a**. $T(n)=2T(n/4)+1$.
>
> **b**. $T(n)=2T(n/4)+\sqrt{n}$.
>
> **c**. $T(n)=2T(n/4)+\sqrt{n}\lg^2 n$.
>
> **d**. $T(n)=2T(n/4)+n$.
>
> **e**. $T(n)=2T(n/4)+n^2$.

**a**.

$$
\begin{aligned}
a=2,b=4,n^{\log_{b}a}=n^{1/2},f(n)=1=O(n^{\log_{b}a-1/2})\cr
T(n)=\Theta(n^{1/2})\cr
\end{aligned}
$$

**b**.

$$
\begin{aligned}
a=2,b=4,n^{\log_{b}a}=n^{1/2},f(n)=n^{1/2}=\Theta(n^{\log_{b}a})\cr
T(n)=\Theta(n^{1/2}\lg n)\cr
\end{aligned}
$$

**c**.

$$
\begin{aligned}
a=2,b=4,n^{\log_{b}a}=n^{1/2},f(n)=n^{1/2}\lg^{2}n=\Theta(n^{\log_{b}a}\lg^{2}n)\cr
T(n)=\Theta(n^{1/2}\lg^{3} n)\cr
\end{aligned}
$$

**d**.

$$
\begin{aligned}
a=2,b=4,n^{\log_{b}a}=n^{1/2},f(n)=n=\Omega(n^{\log_{b}a+1/2})\cr
\text{regularity condition: }af(n/b)=n/2= (1/2)f(n)\cr
T(n)=\Theta(n)\cr
\end{aligned}
$$

**e**.

$$
\begin{aligned}
a=2,b=4,n^{\log_{b}a}=n^{1/2},f(n)=n^2=\Omega(n^{\log_{b}a+3/2})\cr
\text{regularity condition: }af(n/b)=n^2/8= (1/8)f(n)\cr
T(n)=\Theta(n^2)\cr
\end{aligned}
$$

### 4.5-2

> Professor Caesar wants to develop a matrix-multiplication algorithm that is asymptotically faster than Strassen’s algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into $n/4\times n/4$ submatrices, and the divide and combine steps together will take $\Theta(n^2)$ time. Suppose that the professor’s algorithm creates $a$ recursive subproblems of size $n/4$. What is the largest integer value of $a$ for which his algorithm could possibly run asymptotically faster than Strassen’s?

$$
\begin{aligned}
n^{\log_{b}a}=n^{\log_{4}a} < n^{\lg 7}\cr
\implies a < 4^{\lg 7}=7^2=49\cr
\end{aligned}
$$

so the largest integer value is $a=48$.

### 4.5-3

> Use the master method to show that the solution to the binary-search recurrence $T(n) = T(n/2) + \Theta(1)$ is $T(n) = \Theta(\lg n)$. (See Exercise 2.3-6 for a description of binary search.)

$$
\begin{aligned}
n^{\log_{b}a}=n^{\log_{2}1}=1\cr
f(n)=1=\Theta(n^{\log_{b}a}\lg^{0}n)\cr
T(n)=\Theta(\lg n)\cr
\end{aligned}
$$

### 4.5-4

> Consider the function $f(n) = \lg n$. Argue that although $f(n/2) < f(n)$, the regularity condition $af(n/b)\leq cf(n)$ with $a=1$ and $b=2$ does not hold for any constant $c<1$. Argue further that for any $\epsilon > 0$, the condition in case 3 that $f(n) = \Omega(n^{\log_{b}a+\epsilon})$ does not hold.

$$
\begin{aligned}
&\forall c<1:\cr
&f(n/2)\leq cf(n)\cr
\iff & \lg n-1\leq c\lg n\cr
\iff & \lg n\leq 1/(1-c)\cr
\iff & n\leq 2^{1/(1-c)}\cr
\end{aligned}
$$

so the regularity condition does not hold for $n> 2^{1/(1-c)}$

$$
\begin{aligned}
n^{\log_{b}a}=n^{\log_{2}1}=1\cr
\forall \epsilon>0:\cr
cn^{\epsilon}\leq \lg n \text{ does not hold as }n\to \infty\cr
\end{aligned}
$$

so $f(n) = \Omega(n^{\log_{b}a+\epsilon})$ does not hold.

### 4.5-5

> Show that for suitable constants $a,b$, and $\epsilon$, the function $f(n)=2^{\lceil\lg n\rceil}$ satisfies all the conditions in case 3 of the master theorem except the regularity condition.

$$
\begin{aligned}
a=1.1,b=1.2,n^{\log_{b}a}=n^{\log_{1.2}1.1}\cr
f(n)\geq 2^{\lg n}=n\geq n^{\log_{1.2}1.1+\log_{1.2}1.001}\cr
\implies \exists \epsilon>0 \text{ such that }f(n)=\Omega(n^{\log_{b}a+\epsilon})\cr
\end{aligned}
$$

$f(n)$ satisfies all the conditions in case 3 of the master theorem except the regularity condition: for $n=1.3\cdot 2^k$ we have $n/1.2\in(2^k,2^{k+1})$, so $f(n/1.2)=2^{k+1}=f(n)$, and

$1.1f(\frac{1.3\cdot 2^k}{1.2})=1.1\cdot 2^{k+1}\leq cf(1.3\cdot 2^k)=c\cdot 2^{k+1}$ does not hold for any $c<1$.

## $\star$ 4.6 Proof of the continuous master theorem

### 4.6-1

> Show that $\sum_{j=0}^{\lfloor \log_{b}n\rfloor}(\log_{b} n-j)^k=\Omega(\log_b^{k+1}n)$.
$$
\begin{aligned}
\sum_{j=0}^{\lfloor \log_{b}n\rfloor}(\log_{b} n-j)^k & \geq \sum_{j=0}^{\lfloor \log_{b}n\rfloor}(\lfloor \log_{b}n\rfloor -j)^k\cr
& = \sum_{j=0}^{\lfloor \log_{b}n\rfloor}j^k\cr
& = \Theta(\lfloor \log_{b}n\rfloor^{k+1})\qquad\left(\text{using }\sum_{j=0}^{m}j^k=\Theta(m^{k+1})\right)\cr
& = \Omega(\log_b^{k+1}n)\cr
\end{aligned}
$$

### $\star$ 4.6-2

> Show that case 3 of the master theorem is overstated (which is also why case 3 of Lemma 4.3 does not require that $f(n)=\Omega(n^{\log_b a+\epsilon})$) in the sense that the regularity condition $af(n/b)\leq cf(n)$ for some constant $c<1$ implies that there exists a constant $\epsilon > 0$ such that $f(n)=\Omega(n^{\log_b a+\epsilon})$.

$$
\begin{aligned}
& af(n/b) \leq cf(n)\cr
\implies & f(n)\geq \frac{a}{c}f(n/b)\cr
\implies & f(n) \geq \left(\frac{a}{c}\right)^i f(n/{b^i})\cr
& \text{let } n=b^i\cr
\implies & f(n)\geq \left(\frac{a}{c}\right)^{\log_{b}n}f(1)=f(1)\cdot n^{\log_{b}(a/c)}\cr
& \because c<1,\therefore \log_{b}(a/c)=\log_{b}a+\epsilon \text{ for some constant }\epsilon >0\cr
\implies & f(n)=\Omega(n^{\log_{b}a+\epsilon})\cr
\end{aligned}
$$

### $\star$ 4.6-3

> For $f(n)=\Theta(n^{\log_b a}/\lg n)$, prove that the summation in equation (4.19) has solution $g(n)=\Theta(n^{\log_b{a}}\lg\lg n)$. Conclude that a master recurrence $T(n)$ using $f(n)$ as its driving function has solution $T(n) = \Theta(n^{\log_b a}\lg\lg n)$.

$$
\begin{aligned}
& \text{let }n=b^i\cr
g(n) = & \sum_{j=0}^{i}a^jf(n/b^j)\cr
= & \sum_{j=0}^{i}a^jf(b^{i-j})\cr
= & \sum_{j=0}^{i}a^j\,\Theta\left(\frac{a^{i-j}}{i-j+1}\right)\cr
= & \ a^i\,\Theta\left(\sum_{m=1}^{i+1}\frac{1}{m}\right)\cr
= & \ a^i\,\Theta(\lg i)\cr
= & \ \Theta(a^{\log_{b}n} \lg\log_{b}n)\cr
= & \ \Theta(n^{\log_{b}a} \lg\lg n)\cr
\end{aligned}
$$

(Writing the $j$-th term with denominator $i-j+1$ absorbs the $j=i$ term, where $f(1)=\Theta(1)$.) Since $g(bn)=\Theta((bn)^{\log_{b}a}\lg\lg (bn))=\Theta(n^{\log_{b}a} \lg\lg n)$, the bound $g(n)=\Theta(n^{\log_{b}a} \lg\lg n)$ holds even when $n$ is not an exact power of $b$.

We conclude that $T(n)=\Theta(n^{\log_{b}a})+g(n)=\Theta(n^{\log_{b}a} \lg\lg n)$.

## $\star$ 4.7 Akra-Bazzi recurrences

### $\star$ 4.7-1

> Consider an Akra-Bazzi recurrence $T(n)$ on the reals as given in recurrence (4.22), and define $T'(n)$ as
>
> $$
> \begin{aligned}
> T'(n)=cf(n)+\sum_{i=1}^k a_iT'(n/b_i),
> \end{aligned}
> $$
>
> where $c > 0$ is constant. Prove that whatever the implicit initial conditions for T(n) might be, there exist initial conditions for $T'(n)$ such that $T'(n) = cT(n)$ for all $n>0$. Conclude that we can drop the asymptotics on a driving function in any Akra-Bazzi recurrence without affecting its asymptotic solution.

Set the initial conditions of $T'$ so that $T'(n)=cT(n)$ throughout the base range. Then, by induction, for all larger $n$:

$$
\begin{aligned}
T'(n) & =cf(n)+\sum_{i=1}^k a_iT'(n/b_i)\cr
& =cf(n)+\sum_{i=1}^k a_icT(n/b_i)\cr
& =c(f(n)+\sum_{i=1}^k a_iT(n/b_i))\cr
& =cT(n)\cr
\end{aligned}
$$

Since $c>0$ is constant, $T'(n)=\Theta(T(n))$: scaling the driving function by a constant factor does not change the asymptotic solution, so we can drop the asymptotics on a driving function.

### 4.7-2

> Show that $f(n)=n^2$ satisfies the polynomial-growth condition but that $f(n)=2^n$ does not.

$$
\begin{aligned}
& \forall \phi\geq 1,\ \forall \psi:1 \leq \psi \leq \phi:\cr
& n^2/d\leq (\psi n)^2\leq dn^2\cr
\iff & 1/\psi^{2}\leq d \wedge \psi^{2} \leq d\cr
\impliedby & \phi^{2}\leq d\qquad(1)\cr
& 2^n/d\leq 2^{\psi n}\leq d2^n\cr
\iff & 2^{n-\psi n}\leq d\wedge 2^{\psi n-n}\leq d\qquad (2)\cr
\end{aligned}
$$

$d=\phi^{2}$ satisfies $(1)$, so $f(n)=n^2$ satisfies the condition; since $2^{(\psi-1)n}\to\infty$ for $\psi>1$, no constant $d$ satisfies $(2)$.

### 4.7-3

> Let $f(n)$ be a function that satisfies the polynomial-growth condition. Prove that $f(n)$ is asymptotically nonnegative, that is, there exists a constant $n_0\geq 0$ such that $f(n)\geq 0$ for all $n\geq n_0$.
$$
\begin{aligned}
\exists \hat{n} \text{ such that }\forall n > \hat{n},\ 1\leq\psi\leq\phi:\cr
f(n)/d \leq f(\psi n) \leq df(n)\cr
\implies f(n)/d\leq df(n)\cr
\implies 0\leq (d^2-1) f(n)\cr
\because d>1,\ d^2-1>0\cr
\therefore f(n)\geq 0 \text{ for all } n>\hat n\cr
\end{aligned}
$$

### $\star$ 4.7-4

> Give an example of a function $f(n)$ that does not satisfy the polynomial-growth condition but for which $f(\Theta(n)) = \Theta(f(n))$.

- **TODO**

### 4.7-5

> Use the Akra-Bazzi method to solve the following recurrences.
>
> **a**. $T(n) = T(n/2)+T(n/3)+T(n/6)+n\lg n$.
>
> **b**. $T(n) = 3T(n/3)+8T(n/4)+n^2/\lg n$.
>
> **c**. $T(n) = (2/3)T(n/3)+(1/3)T(2n/3)+\lg n$.
>
> **d**. $T(n) = (1/3)T(n/3) + 1/n$.
>
> **e**. $T(n) = 3T(n/3) + 3T(2n/3)+n^2$.

**a**.

$$
\begin{aligned}
& (\frac{1}{2})^p+(\frac{1}{3})^p+(\frac{1}{6})^p=1\cr
\implies & p=1\cr
T(n) & =\Theta(n^p(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx))\cr
& = \Theta(n^1(1+\int_{1}^{n}\frac{x\lg x}{x^{2}}dx))\cr
& = \Theta(n^1(1+\int_{1}^{n}\frac{\ln x}{x\ln 2}dx))\cr
& = \Theta(n^1(1+\left[\frac{\ln^{2} x}{2\ln 2}\right]_{1}^{n}))\cr
& = \Theta(n(1+\frac{\ln^{2} n}{2\ln 2}))\cr
& =\Theta(n\lg^{2} n)
\end{aligned}
$$

**b**.

$$
\begin{aligned}
& \frac{3}{3^p}+\frac{8}{4^p}=1\cr
& \frac{3}{3^{1.5}}+\frac{8}{4^{1.5}}=3^{-1/2}+1>1\cr
& \frac{3}{3^2}+\frac{8}{4^2}=\frac{5}{6}<1\cr
\implies & 1.5<p<2\cr
T(n) & =\Theta(n^p(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx))\cr
& =\Theta(n^p(1+\int_{1}^{n}\frac{x^2/\lg x}{x^{p+1}}dx))\cr
& =\Theta(n^p(1+\int_{1}^{n}\frac{1}{x^{p-1}\lg x}dx))\cr
& =\Omega(n^p(1+\frac{1}{\lg n}\int_{1}^{n}\frac{1}{x^{p-1}}dx))\cr
& =\Omega(n^p(1+\Theta(n^{2-p}/\lg n)))\cr
& =\Omega(n^2/\lg n)\cr
T(n) & =O(n^p(1+\int_{2}^{\sqrt{n}}\frac{1}{x^{p-1}}dx+\frac{2}{\lg n}\int_{\sqrt{n}}^{n}\frac{1}{x^{p-1}}dx))\cr
& = O(n^p(1+\Theta(n^{(2-p)/2})+\Theta(n^{2-p}/\lg n)))\cr
& =O(n^2/\lg n)\cr
\end{aligned}
$$

(For the upper bound we use $1/\lg x\leq 1$ on $[2,\sqrt n]$ and $1/\lg x\leq 2/\lg n$ on $[\sqrt n,n]$; starting the integral at 2 only changes a constant, and the $n^{2-p}/\lg n$ term dominates $n^{(2-p)/2}$.) So $T(n)=\Theta(n^2/\lg n)$.

**c**.

$$
\begin{aligned}
& \frac{2}{3}(\frac{1}{3})^p+\frac{1}{3}(\frac{2}{3})^p=1\cr
\implies & \frac{2+2^p}{3^{p+1}}=1\cr
\implies & p=0\cr
T(n) & =\Theta(n^p(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx))\cr
& =\Theta(1+\int_{1}^{n}\frac{\lg x}{x}dx)\cr
& =\Theta(\lg^{2}n)
\end{aligned}
$$

**d**.

$$
\begin{aligned}
& \frac{1}{3}(\frac{1}{3})^p=1\cr
\implies & p=-1\cr
T(n) & =\Theta(n^p(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx))\cr
& =\Theta(n^{-1}(1+\int_{1}^{n}\frac{1}{x}dx))\cr
& =\Theta(n^{-1}(1+\left[\ln x\right]_{1}^{n}))\cr
& =\Theta(n^{-1}\lg n)\cr
\end{aligned}
$$

**e**.

$$
\begin{aligned}
& 3(\frac{1}{3})^p + 3(\frac{2}{3})^p=1\cr
\implies & p=3\quad\left(\frac{3}{27}+\frac{24}{27}=1\right)\cr
T(n) & =\Theta(n^p(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx))\cr
& =\Theta(n^{3}(1+\int_{1}^{n}\frac{x^2}{x^4}dx))\cr
& =\Theta(n^{3}(1+\left[-\frac{1}{x}\right]_{1}^{n}))\cr
& =\Theta(n^3)
\end{aligned}
$$

### $\star$ 4.7-6

> Use the Akra-Bazzi method to prove the continuous master theorem.
$$
\begin{aligned}
& T(n)=aT(n/b)+f(n),\quad \frac{a}{b^p}=1 \implies p=\log_{b}a\cr
& T(n)=\Theta\left(n^p\left(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx\right)\right),
\qquad G(n)=\int_{1}^{n}\frac{f(x)}{x^{\log_{b}a+1}}dx\cr
\end{aligned}
$$

Case 1: $f(n)=O(n^{\log_{b}a-\epsilon})$ for some $\epsilon>0$:

$$
\begin{aligned}
G(n) & = O\left(\int_{1}^{n}\frac{1}{x^{1+\epsilon}}dx\right) = O(1)\cr
T(n) & =\Theta(n^{\log_{b}a}(1+G(n))) = \Theta(n^{\log_{b}a})\cr
\end{aligned}
$$

Case 2: $f(n)=\Theta(n^{\log_{b}a}\lg^{k}n)$ for some $k\geq 0$:

$$
\begin{aligned}
G(n) & = \Theta\left(\int_{1}^{n}\frac{\lg^{k}x}{x}dx\right) = \Theta(\lg^{k+1}n)\cr
T(n) & =\Theta(n^{\log_{b}a}(1+G(n))) = \Theta(n^{\log_{b}a}\lg^{k+1}n)\cr
\end{aligned}
$$

Case 3: $f(n)=\Omega(n^{\log_{b}a+\epsilon})$ and $af(n/b)\leq cf(n)$ for some $c<1$. The regularity condition gives $f(n/b^j)\leq (c/a)^jf(n)$ (and the polynomial-growth condition lets us move within each interval $[n/b^{j+1},n/b^j]$ at a constant-factor cost), so:

$$
\begin{aligned}
G(n) & = \sum_{j}\int_{n/b^{j+1}}^{n/b^{j}}\frac{f(x)}{x^{p+1}}dx
\leq O(1)\sum_{j}\left(\frac{c}{a}\right)^{j}f(n)\cdot\frac{n}{b^{j}}\cdot\left(\frac{b^{j+1}}{n}\right)^{p+1}
= O\left(\frac{f(n)}{n^{p}}\right)\sum_{j}c^{j}
= O\left(\frac{f(n)}{n^{p}}\right)\cr
T(n) & =\Theta(n^{p}(1+G(n)))=O(n^{p}+f(n))=O(f(n)),\cr
\end{aligned}
$$

and $T(n)=\Omega(f(n))$ trivially (the driving function appears in $T(n)$), so $T(n)=\Theta(f(n))$.

# 4-Problems

## 4-1 Recurrence examples

> Give asymptotically tight upper and lower bounds for $T(n)$ in each of the following algorithmic recurrences. Justify your answers.
>
> **a**. $T(n) = 2T(n/2)+n^3$.
>
> **b**. $T(n) = T(8n/11)+n$.
>
> **c**. $T(n) = 16T(n/4)+n^2$.
>
> **d**. $T(n)=4T(n/2)+n^2\lg n$.
>
> **e**. $T(n) = 8T(n/3)+n^2$.
>
> **f**. $T(n)=7T(n/2)+n^2\lg n$.
>
> **g**. $T(n)=2T(n/4)+\sqrt{n}$.
>
> **h**. $T(n)=T(n-2)+n^2$.

### **a**

$$
\begin{aligned}
n^{\log_{b}a}=n^{\log_{2}2}=n\cr
f(n)=n^3=\Omega(n^{\log_{b}a+2})\cr
2f(n/2)=n^3/4= \frac{1}{4}f(n)\cr
T(n)=\Theta(f(n))=\Theta(n^3)\cr
\end{aligned}
$$

### **b**

$$
\begin{aligned}
n^{\log_{b}a}=n^{\log_{11/8}1}=1\cr
f(n)=n=\Omega(n^{\log_{b}a+1})\cr
f(8n/11)=8n/11= \frac{8}{11}f(n)\cr
T(n)=\Theta(f(n))=\Theta(n)\cr
\end{aligned}
$$

### **c**

$$
\begin{aligned}
n^{\log_{b}{a}}=n^{\log_{4}{16}}=n^{2}\cr
f(n)=n^2=\Theta(n^{\log_{b}a}\lg^{0}n)\cr
T(n)=\Theta(n^2\lg^{} n)\cr
\end{aligned}
$$

### **d**

$$
\begin{aligned}
n^{\log_{b}{a}}=n^{\log_{2}{4}}=n^{2}\cr
f(n)=n^2\lg^{} n=\Theta(n^{\log_{b}a}\lg^{}n)\cr
T(n)=\Theta(n^2\lg^{2} n)\cr
\end{aligned}
$$

### **e**

$$
\begin{aligned}
n^{\log_{b}{a}}=n^{\log_{3}{8}}\cr
f(n)=n^2=\Omega(n^{\log_{b}a+\epsilon}),\ \epsilon=2-\log_3 8>0\cr
8f(n/3)=\frac{8n^2}{9}= \frac{8}{9}f(n)\cr
T(n)=\Theta(f(n))=\Theta(n^2)\cr
\end{aligned}
$$

### **f**

$$
\begin{aligned}
n^{\log_{b}{a}}=n^{\log_{2}{7}}\cr
f(n)=n^2\lg^{} n=O(n^{5/2})=O(n^{\log_{2}7-\epsilon}),\ \epsilon=\lg 7-\tfrac 5 2>0\cr
T(n)=\Theta(n^{\log_{2}{7}})\cr
\end{aligned}
$$

### **g**

$$
\begin{aligned}
n^{\log_{b}{a}}=n^{\log_{4}{2}}=n^{1/2}\cr
f(n)=n^{1/2}=\Theta(n^{\log_{b}a}\lg^{0}n)\cr
T(n)=\Theta(n^{1/2}\lg^{} n)\cr
\end{aligned}
$$

### **h**

$$
\begin{aligned}
T(n)=\sum_{j\geq 0}(n-2j)^2\approx\frac{1}{2}\sum_{i=1}^{n}i^2=\Theta(n^3)\cr
\end{aligned}
$$

## 4-2 Parameter-passing costs

> Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an N-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:
>
> 1. Arrays are passed by pointer. Time $= \Theta(1)$.
>
> 2. Arrays are passed by copying. Time $=\Theta(N)$, where $N$ is the size of the array.
>
> 3. Arrays are passed by copying only the subrange that might be accessed by the called procedure. Time $=\Theta(n)$ if the subarray contains $n$ elements.
>
> Consider the following three algorithms:
>
> **a**.
The recursive binary-search algorithm for finding a number in a sorted array (see Exercise 2.3-6).
>
> **b**. The MERGE-SORT procedure from Section 2.3.1.
>
> **c**. The MATRIX-MULTIPLY-RECURSIVE procedure from Section 4.1.
>
> Give nine recurrences $T_{a1}(N,n),T_{a2}(N,n),\dots ,T_{c3}(N,n)$ for the worst-case running times of each of the three algorithms above when arrays and matrices are passed using each of the three parameter-passing strategies above. Solve your recurrences, giving tight asymptotic bounds.

### **a**

a1

$$
\begin{aligned}
T_{a1}(N,n)=T(N,\frac{n}{2})+\Theta(1)\cr
n^{\log_{b}a}=n^{\log_{2}1}=1\cr
f(n)=\Theta(1)=\Theta(n^{\log_{b}a})\cr
T_{a1}(N,n)=\Theta(\lg n)\cr
\end{aligned}
$$

a2

$$
\begin{aligned}
T_{a2}(N,n)=T(N,\frac{n}{2})+\Theta(N)\cr
\text{a single chain of }\Theta(\lg n)\text{ calls, each costing }\Theta(N)\cr
T_{a2}(N,n)=\Theta(N\lg n)\cr
\end{aligned}
$$

a3

$$
\begin{aligned}
T_{a3}(N,n)=T(N,\frac{n}{2})+\Theta(n)\cr
n^{\log_{b}a}=n^{\log_{2}1}=1\cr
f(n)=\Theta(n)=\Omega(n^{\log_{b}a+1})\cr
af(\frac{n}{b})=f(\frac{n}{2})= \frac{1}{2}f(n)\cr
T_{a3}(N,n)=\Theta(f(n))=\Theta(n)\cr
\end{aligned}
$$

### **b**

b1

$$
\begin{aligned}
T_{b1}(N,n)=2T(N,\frac{n}{2})+\Theta(n)\cr
n^{\log_{b}a}=n^{\log_{2}2}=n\cr
f(n)=\Theta(n)=\Theta(n^{\log_{b}a}\lg^{0}n)\cr
T_{b1}(N,n)=\Theta(n \lg n)\cr
\end{aligned}
$$

b2

$$
\begin{aligned}
T_{b2}(N,n)=2T(N,\frac{n}{2})+\Theta(N+n)=2T(N,\frac{n}{2})+\Theta(N)\cr
\text{the recursion tree has }\Theta(n)\text{ nodes, each costing }\Theta(N)\cr
T_{b2}(N,n)=\Theta(Nn)\cr
\end{aligned}
$$

b3

$$
\begin{aligned}
T_{b3}(N,n)=2T(N,\frac{n}{2})+\Theta(n)\cr
n^{\log_{b}a}=n^{\log_{2}2}=n\cr
f(n)=\Theta(n)=\Theta(n^{\log_{b}a}\lg^{0}n)\cr
T_{b3}(N,n)=\Theta(n \lg n)\cr
\end{aligned}
$$

### **c**

c1

$$
\begin{aligned}
T_{c1}(N,n)=8T(N,\frac{n}{2})+\Theta(1)\cr
n^{\log_{b}a}=n^{\log_{2}8}=n^3\cr
f(n)=\Theta(1)=O(n^{\log_{b}a-3})\cr
T_{c1}(N,n)=\Theta(n^3)\cr
\end{aligned}
$$

c2

$$
\begin{aligned}
T_{c2}(N,n)=8T(N,\frac{n}{2})+\Theta(N)\cr
\text{level } i \text{ has } 8^i \text{ nodes, each costing } \Theta(N): \sum_{i=0}^{\lg n}8^iN=\Theta(Nn^3)\cr
T_{c2}(N,n)=\Theta(Nn^3)=\Theta(n^5)\ \ (\text{for } N=\Theta(n^2))\cr
\end{aligned}
$$

c3

each submatrix contains $(n/2)^2$ elements, so copying costs $\Theta(n^2)$ per call:

$$
\begin{aligned}
T_{c3}(N,n)=8T(N,\frac{n}{2})+\Theta(n^2)\cr
n^{\log_{b}a}=n^{\log_{2}8}=n^3\cr
f(n)=\Theta(n^2)=O(n^{\log_{b}a-1})\cr
T_{c3}(N,n)=\Theta(n^3)\cr
\end{aligned}
$$

## 4-3 Solving recurrences with a change of variables

> Sometimes, a little algebraic manipulation can make an unknown recurrence similar to one you have seen before. Let’s solve the recurrence
>
> $T(n)=2T(\sqrt{n})+\Theta(\lg n)$ (4.25)
>
> by using the change-of-variables method.
>
> **a**. Define $m=\lg n$ and $S(m)=T(2^m)$. Rewrite recurrence (4.25) in terms of m and $S(m)$.
>
> **b**. Solve your recurrence for $S(m)$.
>
> **c**. Use your solution for $S(m)$ to conclude that $T(n)=\Theta(\lg n\lg\lg n)$.
>
> **d**. Sketch the recursion tree for recurrence (4.25), and use it to explain intuitively why the solution is $T(n)=\Theta(\lg n \lg\lg n)$.
>
> Solve the following recurrences by changing variables:
>
> **e**. $T(n)=2T(\sqrt{n})+\Theta(1)$.
>
> **f**. $T(n)=3T(\sqrt[3]{n})+\Theta(n)$.
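Before the derivations below, a quick numeric sanity check of part (c)'s claim that $T(n)=\Theta(\lg n\lg\lg n)$ — a sketch only: the base case $T(n)=1$ for $n\leq 2$ is an arbitrary choice and only shifts constants.

```cpp
#include <cmath>
#include <cstdio>

// Recurrence (4.25): T(n) = 2T(sqrt(n)) + lg n, with T(n) = 1 for n <= 2.
static double T(double n) {
    if (n <= 2.0) return 1.0;
    return 2.0 * T(std::sqrt(n)) + std::log2(n);
}

int main() {
    // For n = 2^(2^k): lg n = 2^k and lg lg n = k, so the ratio
    // T(n) / (lg n * lg lg n) should approach a constant.
    for (int k = 2; k <= 6; k++) {
        double n = std::pow(2.0, std::pow(2.0, k));  // n = 2^64 at k = 6 still fits a double
        printf("k=%d  ratio=%.4f\n", k, T(n) / (std::pow(2.0, k) * k));
    }
}
```

For these inputs the recurrence solves exactly to $T(2^{2^k})=(k+1)2^k$, so the printed ratio is $(k+1)/k\to 1$.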
### **a**

$S(m)=T(2^m)=2T(\sqrt{2^m})+\Theta(m)=2T(2^{m/2})+\Theta(m)=2S(m/2)+\Theta(m)$

### **b**

$$
\begin{aligned}
m^{\log_{b}a}=m\cr
f(m)=m=\Theta(m^{\log_{b}a})\cr
S(m)=\Theta(m\lg m)
\end{aligned}
$$

### **c**

$T(n)=S(\lg n)=\Theta(\lg n\lg\lg n)$

### **d**

![4-3.d](https://hackmd.io/_uploads/BktSikV3Je.png)

depth: $\lg\lg n +1$, leaves: $2^{\lg\lg n}$

The total cost over all nodes at depth $i$, for $i=0,1,\dots,\lg\lg n-1$, is $2^i\cdot \frac{1}{2^i}\Theta(\lg n)=\Theta(\lg n)$

$$
\begin{aligned}
T(n) & =2^{\lg\lg n}\Theta(1)+\sum_{i=0}^{\lg\lg n-1}\Theta(\lg n)\cr
& =\Theta(\lg n)+\lg\lg n\cdot\Theta(\lg n)\cr
& = \Theta(\lg n\lg\lg n)\cr
\end{aligned}
$$

### **e**

depth: $\lg\lg n +1$, leaves: $2^{\lg\lg n}$

The total cost over all nodes at depth $i$, for $i=0,1,\dots,\lg\lg n-1$, is $2^i\cdot \Theta(1)=\Theta(2^i)$

$$
\begin{aligned}
T(n) & =2^{\lg\lg n}\Theta(1)+\sum_{i=0}^{\lg\lg n-1}2^i\Theta(1)\cr
& =\Theta(\lg n)+(2^{\lg\lg n}-1)\Theta(1)\cr
& = \Theta(\lg n)\cr
\end{aligned}
$$

### **f**

$S(m)=T(3^m)=3T(3^{m/3})+\Theta(3^m)=3S(m/3)+\Theta(3^m)$.

$$
\begin{aligned}
m^{\log_{b}a}=m^{\log_{3}3}=m\cr
f(m)=3^m=\Omega(m^{\log_{b}a+2})\cr
3f(m/3)=3\cdot 3^{m/3}\leq (1/3)f(m)\ \ (\text{for } m\geq 3)\cr
S(m)=\Theta(f(m))=\Theta(3^m)\cr
T(n)=S(\log_{3}n)=\Theta(n)\cr
\end{aligned}
$$

## 4-4 More recurrence examples

> Give asymptotically tight upper and lower bounds for T(n) in each of the following recurrences. Justify your answers.
>
> **a**. $T(n)=5T(n/3)+n\lg n$.
>
> **b**. $T(n)=3T(n/3)+n/\lg n$.
>
> **c**. $T(n)= 8T(n/2)+n^3\sqrt{n}$.
>
> **d**. $T(n)=2T(n/2-2)+n/2$.
>
> **e**. $T(n)=2T(n/2)+n/\lg n$.
>
> **f**. $T(n)=T(n/2)+T(n/4)+T(n/8)+n$.
>
> **g**. $T(n)=T(n-2)+1/n$.
>
> **h**. $T(n)=T(n-1)+\lg n$.
>
> **i**. $T(n)=T(n-2)+1/\lg n$.
>
> **j**. $T(n)=\sqrt{n}T(\sqrt{n})+n$.
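Before the case-by-case answers, a quick numeric check of part (b)'s bound $T(n)=\Theta(n\lg\lg n)$ — a sketch only: the base case is an arbitrary choice, and using $n=3^k$ keeps the recursion exact.

```cpp
#include <cmath>
#include <cstdio>

// Part (b): T(n) = 3T(n/3) + n / lg n, with T(n) = 1 for n < 3.
static double T(double n) {
    if (n < 3.0) return 1.0;
    return 3.0 * T(n / 3.0) + n / std::log2(n);
}

int main() {
    // If T(n) = Theta(n lg lg n), this ratio flattens out as k grows.
    for (int k = 16; k <= 512; k *= 2) {
        double n = std::pow(3.0, k);  // 3^512 ~ 10^244 still fits a double
        printf("n=3^%-3d  ratio=%.4f\n", k, T(n) / (n * std::log2(std::log2(n))));
    }
}
```

The convergence is slow (the hidden constant involves a harmonic sum), but the ratio visibly levels off instead of drifting to $0$ or $\infty$.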
### **a**

$$
\begin{aligned}
n^{\log_{b}{a}} = n^{\log_{3}{5}}\cr
f(n) = n\lg{n} = O(n^{\log_{b}{a}-\log_{3}{(5/4)}})\cr
T(n)=\Theta({n^{\log_{b}{a}}})=\Theta(n^{\log_{3}{5}})\cr
\end{aligned}
$$

### **b**

$$
\begin{aligned}
& 3(\frac{1}{3})^p=1\cr
\implies & p = 1\cr
T(n) & =\Theta(n^p(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx))\cr
& = \Theta(n^1(1+\int_{2}^{n}\frac{x/\lg x}{x^{2}}dx))\cr
& = \Theta(n^1(1+\int_{2}^{n}\frac{1}{x^{}\lg x}dx))\cr
& = \Theta(n^1(1+\left[\lg\lg x\right]_{2}^{n}))\cr
& = \Theta (n\lg\lg n)\cr
\end{aligned}
$$

- Tips: when the integrand is singular at $x=1$ (as $1/\lg x$ is), we start the integral at 2 instead; this changes the result by only a constant.

### **c**

$$
\begin{aligned}
n^{\log_{b}{a}} = n^{\log_{2}{8}}=n^{3}\cr
f(n) = n^{7/2} = \Omega(n^{\log_{b}{a}+1/2})\cr
8f(n/2)=2^{-1/2}n^{7/2}= 2^{-1/2}f(n)\cr
T(n)=\Theta(f(n))=\Theta(n^{7/2})\cr
\end{aligned}
$$

### **d**

$$
\begin{aligned}
f(n)=n/2 \text{ satisfies the polynomial-growth condition, so the } -2 \text{ can be dropped:}\cr
T(n)=2T(n/2)+n/2\cr
n^{\log_{b}{a}} = n^{\log_{2}{2}}=n^{}\cr
f(n) = n^{}/2 = \Theta(n^{\log_{b}{a}}\lg^{0}n)\cr
T(n)=\Theta(n^{\log_{b}{a}}\lg n)=\Theta(n\lg n)\cr
\end{aligned}
$$

### **e**

$$
\begin{aligned}
& 2(\frac{1}{2})^p=1\cr
\implies & p = 1\cr
T(n) & =\Theta(n^p(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx))\cr
& = \Theta(n^1(1+\int_{2}^{n}\frac{x/\lg x}{x^{2}}dx))\cr
& = \Theta(n^1(1+\int_{2}^{n}\frac{1}{x^{}\lg x}dx))\cr
& = \Theta(n^1(1+\left[\lg\lg x\right]_{2}^{n}))\cr
& = \Theta(n\lg\lg n)\cr
\end{aligned}
$$

### **f**

$$
\begin{aligned}
& (\frac{1}{2})^{p}+(\frac{1}{4})^{p}+(\frac{1}{8})^{p}=1\cr
& 0.5<p<1\quad(\text{the left side is } 0.875<1 \text{ at } p=1 \text{ and } \approx 1.56>1 \text{ at } p=0.5)\cr
T(n) & =\Theta(n^p(1+\int_{1}^{n}\frac{f(x)}{x^{p+1}}dx))\cr
& = \Theta(n^p(1+\int_{1}^{n}\frac{x}{x^{p+1}}dx))\cr
& = \Theta(n^p(1+\int_{1}^{n}\frac{1}{x^{p}}dx))\cr
& = \Theta(n^p(1+\left[x^{1-p}\right]_{1}^{n}))\cr
& = \Theta(n)\cr
\end{aligned}
$$

### **g**

$$
\begin{aligned}
T(n) & \approx \frac{1}{2}\sum_{i=1}^{n}\frac{1}{i}\quad(\text{the sum skips every other integer, hence the factor }\tfrac 1 2)\cr
& = \Theta(\lg n)\quad(\text{harmonic series})\cr
\end{aligned}
$$

### **h**

$$
\begin{aligned}
T(n) & = \sum_{i=1}^{n}\lg i\cr
& = \lg(n!)\cr
& = \Theta(n\lg n)\cr
\end{aligned}
$$

### **i**

$$
\begin{aligned}
T(n) & =1+\sum_{i=2}^{n}\frac{1}{\lg i}\cr
& = \Omega\Big(\frac{n^2}{\sum_{i=2}^{n}\lg i}\Big)\quad(\text{AM--HM inequality})\cr
& = \Omega\Big(\frac{n}{\lg n}\Big)\cr
T(n) & =O\Big(\sum_{i=2}^{\sqrt{n}-1}\frac{1}{\lg 2}+\sum_{i=\sqrt{n}}^{n}\frac{1}{\lg \sqrt{n}}\Big)\cr
& = O\Big(\sqrt{n}+\frac{n}{\lg n}\Big)=O\Big(\frac{n}{\lg n}\Big)\cr
T(n) & = \Theta\Big(\frac{n}{\lg n}\Big)\cr
\end{aligned}
$$

### **j**

depth: $\lg\lg n+1$, leaves: $n$

the total cost over all nodes at depth $i$, for $i=0,1,...,\lg \lg n-1$, is $\Theta(n)$

$$
\begin{aligned}
T(n)=\Theta(n\lg\lg n)\cr
\end{aligned}
$$

## 4-5 Fibonacci numbers

> This problem develops properties of the Fibonacci numbers, which are defined by recurrence (3.31) on page 69. We’ll explore the technique of generating functions to solve the Fibonacci recurrence. Define the **generating function** (or **formal power series**) $\mathcal{F}$ as
>
> $$
> \begin{aligned}
> \mathcal{F}(z) & = \sum_{i=0}^{\infty}F_iz^i\cr
> & = 0+z+z^2+2z^3+3z^4+5z^5+8z^6+13z^7+21z^8+\cdots,\cr
> \end{aligned}
> $$
>
> where $F_i$ is the $i$th Fibonacci number.
>
> **a**. Show that $\mathcal{F}(z)=z+z\mathcal{F}(z)+z^2\mathcal{F}(z)$.
>
> **b**. Show that
>
> $$
> \begin{aligned}
> \mathcal{F}(z) & = \frac{z}{1-z-z^2}\cr
> & = \frac{z}{(1-\phi z)(1-\hat\phi z)}\cr
> & = \frac{1}{\sqrt{5}}(\frac{1}{1-\phi z}-\frac{1}{1-\hat\phi z})\cr
> \end{aligned}
> $$
>
> where $\phi$ is the golden ratio, and $\hat{\phi}$ is its conjugate (see page 69).
>
> **c**.
## 4-5 Fibonacci numbers

> This problem develops properties of the Fibonacci numbers, which are defined by recurrence (3.31) on page 69. We'll explore the technique of generating functions to solve the Fibonacci recurrence. Define the **generating function** (or **formal power series**) $\mathcal{F}$ as
>
> $$
> \begin{aligned}
> \mathcal{F}(z) & = \sum_{i=0}^{\infty}F_iz^i\cr
> & = 0+z+z^2+2z^3+3z^4+5z^5+8z^6+13z^7+21z^8+\cdots,\cr
> \end{aligned}
> $$
>
> where $F_i$ is the $i$th Fibonacci number.
>
> **a**. Show that $\mathcal{F}(z)=z+z\mathcal{F}(z)+z^2\mathcal{F}(z)$.
>
> **b**. Show that
>
> $$
> \begin{aligned}
> \mathcal{F}(z) & = \frac{z}{1-z-z^2}\cr
> & = \frac{z}{(1-\phi z)(1-\hat\phi z)}\cr
> & = \frac{1}{\sqrt{5}}(\frac{1}{1-\phi z}-\frac{1}{1-\hat\phi z})\cr
> \end{aligned}
> $$
>
> where $\phi$ is the golden ratio, and $\hat{\phi}$ is its conjugate (see page 69).
>
> **c**. Show that
>
> $$
> \mathcal{F}(z)=\sum_{i=0}^{\infty}\frac{1}{\sqrt{5}}(\phi^{i}-\hat{\phi}^i)z^i.
> $$
>
> You may use without proof the generating-function version of equation (A.7) on page 1142, $\sum_{k=0}^{\infty} x^k=1/(1-x)$. Because this equation involves a generating function, $x$ is a formal variable, not a real-valued variable, so you don't have to worry about convergence of the summation or about the requirement in equation (A.7) that $|x|<1$, which doesn't make sense here.
>
> **d**. Use part (c) to prove that $F_i=\phi^i/\sqrt{5}$ for $i>0$, rounded to the nearest integer.
>
> (Hint: Observe that $|\hat\phi|<1$.)
>
> **e**. Prove that $F_{i+2}\geq \phi^{i}$ for $i\geq 0$.

### **a**

$$
\begin{aligned}
z+z\mathcal{F}(z)+z^2\mathcal{F}(z) & =z+z\sum_{i=0}^{\infty}F_iz^i + z^{2}\sum_{i=0}^{\infty}F_iz^i\cr
& = z+\sum_{i=1}^{\infty}F_{i-1}z^i + \sum_{i=2}^{\infty}F_{i-2}z^i\cr
& = z+F_{0}z^1 + \sum_{i=2}^{\infty}(F_{i-2}+F_{i-1})z^i\cr
& = z+0 + \sum_{i=2}^{\infty}F_{i}z^i\cr
& =\sum_{i=0}^{\infty}F_{i}z^i\cr
& =\mathcal{F}(z)\cr
\end{aligned}
$$

### **b**

$$
\begin{aligned}
&\mathcal{F}(z)=z+z\mathcal{F}(z)+z^2\mathcal{F}(z)\cr
\implies & \mathcal{F}(z) = \frac{z}{1-z-z^2}\cr
& \phi\hat{\phi}=-1;\quad\phi + \hat{\phi}=1;\quad\phi - \hat{\phi}=\sqrt{5}\cr
\implies & \mathcal{F}(z) = \frac{z}{1-(\phi + \hat{\phi})z-(-\phi\hat{\phi})z^2}\cr
\implies & \mathcal{F}(z) = \frac{z}{(1-\phi z)(1-\hat\phi z)}\cr
\implies & \mathcal{F}(z) = \frac{1}{\sqrt{5}}\frac{(\phi - \hat{\phi}) z}{(1-\phi z)(1-\hat\phi z)}\cr
\implies & \mathcal{F}(z) = \frac{1}{\sqrt{5}}(\frac{1}{1-\phi z}-\frac{1}{1-\hat\phi z})\cr
\end{aligned}
$$

### **c**

$$
\begin{aligned}
\sum_{i=0}^{\infty}\frac{1}{\sqrt{5}}(\phi^{i}-\hat{\phi}^i)z^i & = \sum_{i=0}^{\infty}\frac{1}{\sqrt{5}}(\phi z)^i - \sum_{i=0}^{\infty}\frac{1}{\sqrt{5}}(\hat{\phi}z)^i\cr
& = \frac{1}{\sqrt{5}}(\frac{1}{1-\phi z}-\frac{1}{1-\hat{\phi} z})\cr
& = \mathcal{F}(z)\cr
\end{aligned}
$$

### **d**

$$
\begin{aligned}
F_{i} & =\frac{1}{\sqrt{5}}(\phi^{i}-\hat{\phi}^i)\cr
& = \frac{\phi^{i}}{\sqrt{5}}-\frac{\hat{\phi}^i}{\sqrt{5}}\cr
& \because |\hat{\phi}|<1 \therefore |\hat{\phi}^i/\sqrt{5}|<0.5 \text{ for } i>0\cr
\implies & F_{i}=\mathrm{round}(\frac{\phi^{i}}{\sqrt{5}})\cr
\end{aligned}
$$

### **e**

$$
\begin{aligned}
F_{2} & =1\geq\phi^{0}=1\cr
F_{3} & =2\geq\phi^{1}=\frac{1+\sqrt{5}}{2}\cr
F_{i+2} & = F_{i+1}+F_{i}\cr
& \geq \phi^{i-1}+\phi^{i-2}\cr
& =\phi^{i-2}(\phi+1)\cr
& =\phi^{i-2}(\phi^2)\cr
& = \phi^{i}\cr
\end{aligned}
$$
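Part (d) is easy to spot-check in code. A minimal C++ sketch (ours); the cutoff at $i=60$ is an assumption about `double` precision, since $\phi^i/\sqrt 5$ computed in floating point loses exactness for much larger $i$:

```cpp
#include <cmath>
#include <cstdio>

// Check part (d): F_i == round(phi^i / sqrt(5)) for i = 1..60.
int main() {
    const double phi = (1.0 + std::sqrt(5.0)) / 2.0;
    unsigned long long a = 0, b = 1;                   // F_0, F_1
    for (int i = 1; i <= 60; ++i) {
        unsigned long long next = a + b;               // F_{i+1}
        a = b; b = next;                               // now a == F_i
        long long r = std::llround(std::pow(phi, i) / std::sqrt(5.0));
        if ((long long)a != r)
            std::printf("mismatch at i=%d: F_i=%llu, rounded=%lld\n", i, a, r);
    }
    std::printf("checked i = 1..60\n");
}
```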
## 4-6 Chip testing

> Professor Diogenes has $n$ supposedly identical integrated-circuit chips that in principle are capable of testing each other. The professor's test jig accommodates two chips at a time. When the jig is loaded, each chip tests the other and reports whether it is good or bad. A good chip always reports accurately whether the other chip is good or bad, but the professor cannot trust the answer of a bad chip. Thus, the four possible outcomes of a test are as follows:
>
> | Chip A says | Chip B says | Conclusion |
> | :---------- | :---------- | :----------------------------- |
> | B is good | A is good | both are good, or both are bad |
> | B is good | A is bad | at least one is bad |
> | B is bad | A is good | at least one is bad |
> | B is bad | A is bad | at least one is bad |
>
> **a**. Show that if at least $n/2$ chips are bad, the professor cannot necessarily determine which chips are good using any strategy based on this kind of pairwise test. Assume that the bad chips can conspire to fool the professor.
>
> Now you will design an algorithm to identify which chips are good and which are bad, assuming that more than $n/2$ of the chips are good. First, you will determine how to identify one good chip.
>
> **b**. Show that $\lfloor n/2 \rfloor$ pairwise tests are sufficient to reduce the problem to one of nearly half the size. That is, show how to use $\lfloor n/2 \rfloor$ pairwise tests to obtain a set with at most $\lceil n/2 \rceil$ chips that still has the property that more than half of the chips are good.
>
> **c**. Show how to apply the solution to part (b) recursively to identify one good chip. Give and solve the recurrence that describes the number of tests needed to identify one good chip.
>
> You have now determined how to identify one good chip.
>
> **d**. Show how to identify all the good chips with an additional $\Theta(n)$ pairwise tests.

### **a**

Suppose at least $n/2$ chips are bad. The bad chips can conspire as follows: choose a set $B$ of bad chips with $|B|$ equal to the number of good chips, and let every chip in $B$ report members of $B$ as good and all other chips as bad. Every pairwise test then produces exactly the same outcomes whether $B$ or the set of good chips is the truly good one (any leftover bad chips behave identically in both scenarios). Since no sequence of tests can distinguish the two situations, the professor cannot necessarily determine which chips are good.

### **b**

Pair up the chips. From each pair in which both chips report the other good, keep exactly one chip; discard both chips of every other pair. Every discarded pair contains at least one bad chip, and every kept chip stands for a good–good or bad–bad pair, so among the kept chips the good ones are at least as numerous as the bad ones; a chip set aside when the count is odd resolves the case where they are exactly tied. This is one level of the following recursive procedure (a runnable sketch appears after part (d)):

```cpp
FIND-GOOD-CHIP(chips)
  if chips is empty                   // happens only if good = bad among them
      return NIL
  if chips contains one chip          // more than half are good, so it is good
      return that chip
  if the number of chips is odd       // preserve the majority-of-good invariant
      A = one chip set aside
  else A = NIL
  divide the remaining chips into pairs     // floor(n/2) tests
  for each pair
      if the report says at least one chip is bad
          drop both chips
      else
          drop one chip of the pair         // at most ceil(n/2) chips survive
  g = FIND-GOOD-CHIP(surviving chips)
  if g == NIL and A != NIL                  // survivors tied good vs. bad, so A is good
      return A
  return g                                  // a good chip, or NIL
```

### **c**

The procedure in part (b) is already recursive. Each level uses at most $\lfloor n/2 \rfloor$ tests and leaves at most $\lceil n/2 \rceil$ chips, so the number of tests satisfies

$$
\begin{aligned}
T(n)=T(n/2)+n/2\cr
T(n)=\Theta(n)\cr
\end{aligned}
$$

### **d**

Use the one good chip found in part (c) to test each of the other $n-1$ chips. Its reports are reliable, so this identifies all the good chips with $\Theta(n)$ additional pairwise tests.
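Here is the runnable C++ sketch of parts (b)–(d) referred to above (ours, not the textbook's). The adversary model is our simplifying assumption: bad chips always answer "good"; the algorithm's guarantee only requires a majority of good chips, however the bad chips answer.

```cpp
#include <cstdio>
#include <vector>

std::vector<bool> isGood;                 // hidden ground truth (simulation only)

// One jig test: true iff each chip reports the other as good.
// A good chip tells the truth; our assumed adversary always says "good".
bool jig(int a, int b) {
    bool aSays = !isGood[a] || isGood[b];
    bool bSays = !isGood[b] || isGood[a];
    return aSays && bSays;
}

int findGoodChip(std::vector<int> chips) {        // parts (b) and (c)
    if (chips.empty()) return -1;
    if (chips.size() == 1) return chips[0];
    int aside = -1;
    if (chips.size() % 2 == 1) { aside = chips.back(); chips.pop_back(); }
    std::vector<int> kept;
    for (size_t i = 0; i + 1 < chips.size(); i += 2)
        if (jig(chips[i], chips[i + 1]))
            kept.push_back(chips[i]);             // keep one of a "good-good" pair
    int g = findGoodChip(kept);
    return g != -1 ? g : aside;                   // tie among kept => aside is good
}

int main() {
    isGood = {true, false, true, true, false, true, true};   // majority good
    std::vector<int> all;
    for (int i = 0; i < (int)isGood.size(); ++i) all.push_back(i);
    int g = findGoodChip(all);
    if (g >= 0)        // part (d): now test every other chip against chip g
        std::printf("certified chip %d (truly %s)\n", g, isGood[g] ? "good" : "bad");
}
```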
## 4-7 Monge arrays

> An $m\times n$ array $A$ of real numbers is a **Monge array** if for all $i,j,k$, and $l$ such that $1\leq i <k \leq m$ and $1 \leq j < l \leq n$, we have
>
> $A[i,j]+A[k,l]\leq A[i,l]+A[k,j]$.
>
> In other words, whenever we pick two rows and two columns of a Monge array and consider the four elements at the intersections of the rows and the columns, the sum of the upper-left and lower-right elements is less than or equal to the sum of the lower-left and upper-right elements. For example, the following array is Monge:
>
> $$
> \begin{array}{}
> 10 & 17 & 13 & 28 & 23\cr
> 17 & 22 & 16 & 29 & 23\cr
> 24 & 28 & 22 & 34 & 24\cr
> 11 & 13 & 6 & 17 & 7\cr
> 45 & 44 & 32 & 37 & 23\cr
> 36 & 33 & 19 & 21 & 6\cr
> 75 & 66 & 51 & 53 & 34\cr
> \end{array}
> $$
>
> **a**. Prove that an array is Monge if and only if for all $i=1,2,\dots,m-1$ and $j=1,2,\dots,n-1$, we have
>
> $$
> A[i,j]+A[i+1,j+1]\leq A[i,j+1]+A[i+1,j].
> $$
>
> (Hint: For the "if" part, use induction separately on rows and columns.)
>
> **b**. The following array is not Monge. Change one element in order to make it Monge. (Hint: Use part (a).)
>
> $$
> \begin{array}{}
> 37 & 23 & 22 & 32\cr
> 21 & 6 & 7 & 10\cr
> 53 & 34 & 30 & 31\cr
> 32 & 13 & 9 & 6\cr
> 43 & 21 & 15 & 8\cr
> \end{array}
> $$
>
> **c**. Let $f(i)$ be the index of the column containing the leftmost minimum element of row $i$. Prove that $f(1)\leq f(2)\leq \cdots \leq f(m)$ for any $m\times n$ Monge array.
>
> **d**. Here is a description of a divide-and-conquer algorithm that computes the leftmost minimum element in each row of an $m\times n$ Monge array $A$:
>
> > Construct a submatrix $A'$ of $A$ consisting of the even-numbered rows of $A$. Recursively determine the leftmost minimum for each row of $A'$. Then compute the leftmost minimum in the odd-numbered rows of $A$.
>
> Explain how to compute the leftmost minimum in the odd-numbered rows of $A$ (given that the leftmost minimum of the even-numbered rows is known) in $O(m+n)$ time.
>
> **e**. Write the recurrence for the running time of the algorithm in part (d). Show that its solution is $O(m+n\log m)$.

### **a**

The "only if" direction is the special case $k=i+1$, $l=j+1$. For the "if" direction, induct on the row distance and then on the column distance; each step adds an adjacent instance of the inequality to the inductive hypothesis and cancels the common terms:

$$
\begin{aligned}
\text{assume }A[i,j]+A[i+k,j+l]\leq A[i,j+l]+A[i+k,j]\cr
A[i+k,j]+A[i+k+1,j+l] \leq A[i+k,j+l]+A[i+k+1,j]\cr
\implies A[i,j]+A[i+k+1,j+l]\leq A[i,j+l]+A[i+k+1,j]\cr
A[i,j+l]+A[i+k,j+l+1]\leq A[i,j+l+1]+A[i+k,j+l]\cr
\implies A[i,j]+A[i+k,j+l+1]\leq A[i,j+l+1]+A[i+k,j]\cr
\end{aligned}
$$

### **b**

By part (a) only adjacent $2\times 2$ submatrices need checking, and the only violated condition is $A[1,2]+A[2,3] \leq A[1,3]+A[2,2]$ ($23+7 \nleq 22+6$). Lowering the 7 to 5 repairs it without violating any other condition:

$$
\begin{array}{}
37 & 23 & 22 & 32\cr
21 & 6 & (5) & 10\cr
53 & 34 & 30 & 31\cr
32 & 13 & 9 & 6\cr
43 & 21 & 15 & 8\cr
\end{array}
$$

### **c**

Let $i \lt k$ and assume, for contradiction, that $f(i) \gt f(k)$.

$$
\begin{aligned}
\text{Monge with rows } i,k \text{ and columns } f(k) \lt f(i):\cr
A[i,f(k)] + A[k,f(i)] \leq A[i,f(i)] + A[k,f(k)]\cr
\text{minimality: } A[i,f(i)] \leq A[i,f(k)] \text{ and } A[k,f(k)] \leq A[k,f(i)]\cr
\implies A[i,f(i)] + A[k,f(k)] \leq A[i,f(k)] + A[k,f(i)]\cr
\implies A[i,f(i)] = A[i,f(k)]\cr
\end{aligned}
$$

So row $i$ attains its minimum value already in column $f(k)<f(i)$, contradicting the choice of $f(i)$ as the leftmost minimum column. Hence $f(i)\leq f(k)$.

### **d**

Since $f(2k) \leq f(2k+1)\leq f(2k+2)$ by part (c), the leftmost minimum of odd row $2k+1$ can be found by scanning only columns $f(2k)$ through $f(2k+2)$, i.e. $f(2k+2)-f(2k)+1$ elements. Summed over all odd rows this telescopes to $O(n)+O(m)=O(m+n)$.

### **e**

$$
\begin{aligned}
T(m,n) & = T(m/2,n)+O(m+n)\cr
T(m,n) & = \sum_{i=0}^{\lg m}O(m/2^i+n)\cr
& = O(n\lg m)+\sum_{i=1}^{\lg m}O(m/2^i)\cr
& = O(n\lg m)+O(m)\cr
& = O(m+n\lg m)\cr
\end{aligned}
$$
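The algorithm of parts (d)–(e) is short enough to implement directly. A minimal C++ sketch (ours; 0-based indices, and the function and variable names are our own), which can be checked against the Monge example from the problem statement:

```cpp
#include <cstdio>
#include <vector>
using Matrix = std::vector<std::vector<double>>;

// Leftmost column index of each row minimum of a Monge array, O(m + n lg m).
// rows holds the indices of A's rows still under consideration.
void rowMinima(const Matrix& A, const std::vector<int>& rows,
               std::vector<int>& f, int n) {
    if (rows.empty()) return;
    if (rows.size() == 1) {                        // base case: scan one row
        int r = rows[0], best = 0;
        for (int j = 1; j < n; ++j) if (A[r][j] < A[r][best]) best = j;
        f[r] = best;
        return;
    }
    std::vector<int> evens;                        // every second surviving row
    for (size_t i = 1; i < rows.size(); i += 2) evens.push_back(rows[i]);
    rowMinima(A, evens, f, n);
    for (size_t i = 0; i < rows.size(); i += 2) {  // fill in the other rows
        int lo = (i == 0) ? 0 : f[rows[i - 1]];    // window from neighbors' f
        int hi = (i + 1 < rows.size()) ? f[rows[i + 1]] : n - 1;
        int r = rows[i], best = lo;
        for (int j = lo + 1; j <= hi; ++j)
            if (A[r][j] < A[r][best]) best = j;    // strict < keeps leftmost
        f[r] = best;
    }
}

int main() {
    Matrix A = {{10,17,13,28,23},{17,22,16,29,23},{24,28,22,34,24},
                {11,13,6,17,7},{45,44,32,37,23},{36,33,19,21,6},{75,66,51,53,34}};
    int m = (int)A.size(), n = (int)A[0].size();
    std::vector<int> rows(m), f(m);
    for (int i = 0; i < m; ++i) rows[i] = i;
    rowMinima(A, rows, f, n);
    for (int i = 0; i < m; ++i) std::printf("f(%d) = %d\n", i + 1, f[i] + 1);
}
```

On the example it prints $f = 1,3,3,3,5,5,5$ (1-based), which is nondecreasing, as part (c) requires.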
# 7-Exercises

## 7.1 Description of quicksort

### 7.1-1

> Using Figure 7.1 as a model, illustrate the operation of **PARTITION** on the array $A = <13, 19, 9, 5, 12, 8, 7, 4, 21, 2, 6, 11>$.

![7.1-1](https://hackmd.io/_uploads/Sy0ZXxN2Jx.png)

### 7.1-2

> What value of $q$ does **PARTITION** return when all elements in the subarray $A[ p : r ]$ have the same value? Modify **PARTITION** so that $q = \lfloor (p+r)/2 \rfloor$ when all elements in the subarray $A[p:r]$ have the same value.

$q = r$. We can modify **PARTITION** so that elements equal to the pivot $A[r]$ go to the low side and the high side alternately:

```cpp
PARTITION-MODIFY(A,p,r)
  x = A[r]
  i = p-1
  flag = 0
  for j = p to r-1
      if A[j] < x
          i = i + 1
          exchange A[i] with A[j]
      else if A[j] == x
          if flag == 1              // every other equal element goes low
              i = i + 1
              exchange A[i] with A[j]
          flag = (flag + 1) mod 2
  exchange A[i+1] with A[r]
  return i+1
```

### 7.1-3

> Give a brief argument that the running time of **PARTITION** on a subarray of size $n$ is $\Theta(n)$.

The running time of **PARTITION** is dominated by the `for` loop, which performs constant work for each of the $r-p$ positions it visits. So it is $\Theta(r-p)=\Theta(n)$.

### 7.1-4

> Modify **QUICKSORT** to sort into monotonically decreasing order.

```cpp
PARTITION-MODIFY(A,p,r)
  x = A[r]
  i = p-1
  for j = p to r-1
      if A[j] >= x          // the only change: >= instead of <
          i = i + 1
          exchange A[i] with A[j]
  exchange A[i+1] with A[r]
  return i+1
```

## 7.2 Performance of quicksort

### 7.2-1

> Use the substitution method to prove that the recurrence $T(n) = T(n-1) + \Theta(n)$ has the solution $T(n) = \Theta(n^2)$, as claimed at the beginning of Section 7.2.

$$
\begin{aligned}
T(n) & = T(n-1) + \Theta(n)\cr
& \leq c(n-1)^2 + \Theta(n)\cr
& = cn^2 -2cn+c+\Theta(n)\cr
& = cn^2-cn +\Theta(n) + c(1-n)\cr
& \leq cn^2\cr
\end{aligned}
$$

The last step holds when $c$ is large enough relative to the constant hidden in $\Theta(n)$. Symmetrically, substituting $T(n)\geq c'n^2$ with $c'$ small enough gives the matching lower bound, so $T(n)=\Theta(n^2)$.

### 7.2-2

> What is the running time of QUICKSORT when all elements of array A have the same value?

Since every split puts $n-1$ elements on one side and $0$ on the other (Exercise 7.1-2), the running time is: $T(n) = T(n-1)+\Theta(n) \implies T(n) = \Theta(n^2)$

### 7.2-3

> Show that the running time of QUICKSORT is $\Theta(n^2)$ when the array $A$ contains distinct elements and is sorted in decreasing order.

Since $A$ has distinct elements in decreasing order, the pivot $A[r]$ is the minimum: the `for` loop moves nothing, and **PARTITION** produces a split of sizes $0$ and $n-1$. The final exchange swaps the maximum to position $r$, so the remaining subarray is decreasing except that its maximum is last; on it the pivot is the maximum, every exchange is a self-exchange, the split is $(n-2, 0)$, and the leftover subarray is again strictly decreasing. Either way each call costs $\Theta(\text{subarray size})$ with one side empty, so the running time is: $T(n) = T(n-1)+\Theta(n) \implies T(n) = \Theta(n^2)$

### 7.2-4

> Banks often record transactions on an account in order of the times of the transactions, but many people like to receive their bank statements with checks listed in order by check number. People usually write checks in order by check number, and merchants usually cash them with reasonable dispatch. The problem of converting time-of-transaction ordering to check-number ordering is therefore the problem of sorting almost-sorted input. Explain persuasively why the procedure **INSERTION-SORT** might tend to beat the procedure **QUICKSORT** on this problem.

For almost-sorted input, the running time of **QUICKSORT** is close to its $\Theta(n^2)$ worst case, because near-sorted subarrays keep producing unbalanced splits. The running time of **INSERTION-SORT** is $\Theta(n+d)$, where $d$ denotes the number of inversions in the input array. For almost-sorted input $d$ is small, so **INSERTION-SORT** might tend to beat **QUICKSORT** on this problem.

### 7.2-5

> Suppose that the splits at every level of quicksort are in the constant proportion $\alpha$ to $\beta$, where $\alpha + \beta = 1$ and $0 < \alpha \leq \beta < 1$. Show that the minimum depth of a leaf in the recursion tree is approximately $\log_{1/\alpha}n$ and that the maximum depth is approximately $\log_{1/\beta}n$. (Don't worry about integer round-off.)

The minimum depth of a leaf corresponds to repeatedly taking the smaller subproblem. Let $k$ be its depth; then

$$
\begin{aligned}
n\alpha^{k}=1\cr
\implies k = \log_{1/\alpha}n\cr
\end{aligned}
$$

For the maximum depth of a leaf, repeatedly take the larger subproblem:

$$
\begin{aligned}
n\beta^{k}=1\cr
\implies k = \log_{1/\beta}n\cr
\end{aligned}
$$

(A numeric check appears in the sketch after this exercise.)
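The sketch referred to above: build the recursion tree for a fixed $\alpha$ and record the extreme leaf depths. A minimal C++ sketch (ours); sizes are tracked as real numbers, matching the exercise's "ignore round-off" instruction:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Recurse on subproblem sizes alpha*n and (1-alpha)*n; record leaf depths.
void depths(double n, int d, double alpha, int& dmin, int& dmax) {
    if (n <= 1) {
        dmin = std::min(dmin, d);
        dmax = std::max(dmax, d);
        return;
    }
    depths(alpha * n, d + 1, alpha, dmin, dmax);
    depths((1 - alpha) * n, d + 1, alpha, dmin, dmax);
}

int main() {
    const double alpha = 0.25, n = 1e6;
    int dmin = 1 << 30, dmax = 0;
    depths(n, 0, alpha, dmin, dmax);
    std::printf("min depth=%d (log_{1/a} n=%.1f)  max depth=%d (log_{1/b} n=%.1f)\n",
                dmin, std::log(n) / std::log(1 / alpha),
                dmax, std::log(n) / std::log(1 / (1 - alpha)));
}
```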
### 7.2-6

> Consider an array with distinct elements and for which all permutations of the elements are equally likely. Argue that for any constant $0 < \alpha \leq 1/2$, the probability is approximately $1 - 2\alpha$ that **PARTITION** produces a split at least as balanced as $1-\alpha$ to $\alpha$.

Because all permutations of the elements are equally likely, each element is equally likely to be selected as the pivot. To produce a split at least as balanced as $1-\alpha$ to $\alpha$, **PARTITION** must select an element in $\lbrace x \mid x_{\alpha n}\leq x \leq x_{(1-\alpha) n}\rbrace$, in which $x_{i}<x_{j}\iff i<j$; this set contains about $(1-2\alpha)n$ of the $n$ elements. The probability is:

$$
\begin{aligned}
P = \frac{(1-2\alpha)n}{n}=1-2\alpha
\end{aligned}
$$

## 7.3 A randomized version of quicksort

### 7.3-1

> Why do we analyze the expected running time of a randomized algorithm and not its worst-case running time?

Because the worst case of a randomized algorithm depends on unlucky random choices rather than on any particular input, and those choices are extremely unlikely. For the randomized version of quicksort, the worst case requires **RANDOM** to pick the minimum or maximum element at every level, which happens with probability

$$
\begin{aligned}
\prod_{k=2}^{n}\frac{2}{k}=\frac{2^{n-1}}{n!}\cr
\end{aligned}
$$

The expected running time describes what we will actually observe on essentially every input.

### 7.3-2

> When **RANDOMIZED-QUICKSORT** runs, how many calls are made to the random-number generator **RANDOM** in the worst case? How about in the best case? Give your answer in terms of $\Theta$-notation.

**RANDOM** is called exactly once per call to **RANDOMIZED-PARTITION**, i.e. once per recursive call on a subarray of size at least 2. In the worst case the pivot is always the minimum or maximum, so the subarray sizes are $n, n-1, \dots, 2$ and there are $n-1$ calls: $\Theta(n)$. In the best case the splits are balanced, and the number of calls $G(n)$ satisfies

$$
\begin{aligned}
G(n) = 2G((n-1)/2)+1\cr
\implies G(n) = \Theta(n)\cr
\end{aligned}
$$

So the count is $\Theta(n)$ in both cases: roughly $n/2$ calls in the best case versus $n-1$ in the worst. The sketch below tallies the calls empirically.
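A minimal C++ sketch (ours) of **RANDOMIZED-QUICKSORT** with a counter on the random-number generator; on random inputs the per-element count lands between the best-case $\approx 1/2$ and worst-case $\approx 1$:

```cpp
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Count calls to the random-number generator: one per partition of size >= 2.
static long randomCalls = 0;

int randomPivot(int p, int r) { ++randomCalls; return p + std::rand() % (r - p + 1); }

void quicksort(std::vector<int>& A, int p, int r) {
    if (p >= r) return;
    std::swap(A[randomPivot(p, r)], A[r]);    // RANDOMIZED-PARTITION
    int x = A[r], i = p - 1;
    for (int j = p; j < r; ++j)
        if (A[j] <= x) std::swap(A[++i], A[j]);
    std::swap(A[i + 1], A[r]);
    quicksort(A, p, i);
    quicksort(A, i + 2, r);
}

int main() {
    for (int n : {1000, 10000, 100000}) {
        std::vector<int> A(n);
        for (int& v : A) v = std::rand();
        randomCalls = 0;
        quicksort(A, 0, n - 1);
        std::printf("n=%6d  RANDOM calls=%6ld  calls/n=%.2f\n",
                    n, randomCalls, (double)randomCalls / n);
    }
}
```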
## 7.4 Analysis of quicksort

### 7.4-1

> Show that the recurrence
>
> $$
> \begin{aligned}
> T(n) = \max \lbrace T(q) + T(n-q-1): 0\leq q \leq n-1\rbrace + \Theta(n)
> \end{aligned}
> $$
>
> has a lower bound of $T(n) = \Omega(n^2)$.

Assume $T(n) \geq cn^2$. Taking the $q=0$ term only weakens the $\max$ (the quadratic is maximized at the endpoints; see Exercise 7.4-3):

$$
\begin{aligned}
T(n) & = \max \lbrace T(q) + T(n-q-1): 0\leq q \leq n-1\rbrace + \Theta(n)\cr
& \geq \max \lbrace cq^2 + c(n-q-1)^2: 0\leq q \leq n-1\rbrace + \Theta(n)\cr
& \geq c\cdot 0^2 + c(n-1)^2 + \Theta(n)\cr
& = cn^2 -2cn + c +\Theta(n)\cr
& \geq cn^2\cr
\end{aligned}
$$

The last step holds for $c\leq d/2$, where $d$ is the minimum constant factor hidden in $\Theta(n)$.

### 7.4-2

> Show that quicksort's best-case running time is $\Omega(n\lg n)$.

$$
\begin{aligned}
T(n) & = \min \lbrace T(q) + T(n-q-1): 0\leq q \leq n-1\rbrace + \Theta(n)\cr
\end{aligned}
$$

Assume $T(n)\geq cn\lg n$:

$$
\begin{aligned}
T(n) & \geq \min \lbrace cq\lg q + c(n-q-1)\lg(n-q-1): 0\leq q \leq n-1\rbrace + \Theta(n)\cr
\frac{dT(n,q)}{dq} & = c\lg q - c\lg(n-q-1)\cr
T(n) & \geq T(n,\frac{n-1}{2})+\Theta(n)\cr
& = 2T(\frac{n-1}{2}) + \Theta(n)\cr
& \geq 2c(\frac{n-1}{2})\lg(\frac{n-1}{2}) + \Theta(n)\cr
& = c(n-1)\lg(n-1) -c(n-1)\lg 2 + \Theta(n)\cr
& = cn\lg n -c\lg(n-1) - cn\lg(\frac{2n}{n-1})+2c\lg 2 + \Theta(n)\cr
& \geq cn\lg n\cr
\end{aligned}
$$

The derivative shows the minimum is at $q=(n-1)/2$. The last step holds when $\Theta(n)\geq c\lg(n-1) + cn\lg(\frac{2n}{n-1})$; since $cn\lg\frac{n}{n-1}\leq c\lg e$, this is true for $c$ smaller than the constant in $\Theta(n)$ and $n$ large enough.

### 7.4-3

> Show that the expression $q^{2} + (n-q-1)^{2}$ achieves its maximum value over $q=0,1,...,n-1$ when $q=0$ or $q=n-1$.

$$
\begin{aligned}
f(q) & = q^{2} + (n-q-1)^{2}:q=0,1,...,n-1\cr
\frac{df(q)}{dq} & = 2q - 2(n-q-1)\cr
& = 4q - 2n + 2 \cr
\frac{df(q)}{dq} & < 0 \text{ for } q < \frac{n-1}{2}\cr
\frac{df(q)}{dq} & > 0 \text{ for } q > \frac{n-1}{2}\cr
\max(f(q)) & = f(0) \text{ or } f( n - 1 )\cr
\end{aligned}
$$

### 7.4-4

> Show that **RANDOMIZED-QUICKSORT**'s expected running time is $\Omega(n\lg n)$.

Let $R[1:n]$ be the sorted array. The probability that $R[i]$ is compared with $R[j]$ ($j>i$) during the run is $\frac{2}{j-i+1}$ (Section 7.4.2).

$X_{ij} = I \lbrace R[i] \text{ is compared with }R[j]\rbrace$

The expected number of comparisons, and hence the expected running time, is:

$$
\begin{aligned}
E[X] & = \sum_{0 < i < j \leq n} E[X_{ij}]\cr
& = \sum_{0 < i <n}\sum_{j=i+1}^{n}\frac{2}{j-i+1}\cr
& = \sum_{0 < i <n}\sum_{j=1}^{n-i}\frac{2}{j+1}\cr
& > \sum_{0 < i < n/2}\sum_{j=1}^{n/2}\frac{2}{j+1}\cr
& = \sum_{0 < i < n/2}\Omega(\lg n)\cr
& = \Omega(n\lg n)\cr
\end{aligned}
$$

### 7.4-5

> Coarsening the recursion, as we did in **Problem 2-1** for merge sort, is a common way to improve the running time of quicksort in practice. We modify the base case of the recursion so that if the array has fewer than $k$ elements, the subarray is sorted by insertion sort, rather than by continued recursive calls to quicksort. Argue that the randomized version of this sorting algorithm runs in $O(nk+n\lg(n/k))$ expected time. How should you pick $k$, both in theory and in practice?

In the quicksort part, the recursion stops once subarrays have fewer than $k$ elements, i.e. after about $\lg(n/k)$ levels of $O(n)$ expected work each: $O(n\lg(n/k))$. In the insertion-sort part, about $n/k$ subarrays of size at most $k$ are sorted at $O(k^2)$ each: $\frac{n}{k}O(k^2) = O(nk)$. So the overall expected running time is $O(nk+n\lg(n/k))$.

In theory, the total stays $O(n\lg n)$ exactly when $nk = O(n\lg n)$, i.e. $k = O(\lg n)$, so the largest safe choice is $k=\Theta(\lg n)$. In practice, pick the largest $k$ for which insertion sort actually beats quicksort on small subarrays, i.e. solve with measured constants:

$$
\begin{aligned}
c_1nk + c_2n\lg(n/k) < c_2n\lg(n)\cr
\implies c_1k + c_2\lg n-c_2\lg k < c_2 \lg n\cr
\implies c_1k -c_2\lg k <0\cr
\end{aligned}
$$

### $\star$ 7.4-6

> Consider modifying the PARTITION procedure by randomly picking three elements from subarray $A[p:r]$ and partitioning about their median (the middle value of the three elements). Approximate the probability of getting worse than an $\alpha$-to-$(1-\alpha)$ split, as a function of $\alpha$ in the range $0 < \alpha < 1/2$.

A split worse than $\alpha$-to-$(1-\alpha)$ occurs exactly when the median of the three picks falls among the smallest $\alpha n$ or the largest $\alpha n$ elements. The median falls among the smallest $\alpha n$ precisely when at least two of the three picks do, so (treating the picks as independent and ignoring round-off):

$$
\begin{aligned}
P(\text{median in smallest }\alpha n) & = 3\alpha^{2}(1-\alpha)+\alpha^{3}\cr
P(\text{worse than }\alpha\text{-to-}(1-\alpha)) & = 2(3\alpha^{2}(1-\alpha)+\alpha^{3})\cr
& = 6\alpha^{2}-4\alpha^{3}\cr
\end{aligned}
$$

The Monte Carlo sketch below agrees with this formula.
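A minimal C++ sketch (ours) that estimates the probability in 7.4-6 by sampling; the pivot's rank is modeled as a uniform fraction in $[0,1]$, which matches the "ignore round-off" approximation:

```cpp
#include <algorithm>
#include <cstdio>
#include <random>

// Estimate P(median of three uniform ranks lies outside [alpha, 1-alpha]).
int main() {
    std::mt19937 gen(1);
    std::uniform_real_distribution<double> U(0.0, 1.0);   // rank as a fraction
    for (double a : {0.1, 0.25, 0.4}) {
        long worse = 0, trials = 1000000;
        for (long t = 0; t < trials; ++t) {
            double x[3] = {U(gen), U(gen), U(gen)};
            std::sort(x, x + 3);
            if (x[1] < a || x[1] > 1 - a) ++worse;        // median outside middle
        }
        std::printf("alpha=%.2f  estimate=%.4f  6a^2-4a^3=%.4f\n",
                    a, (double)worse / trials, 6*a*a - 4*a*a*a);
    }
}
```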
# 7-Problems

## 7-1 Hoare partition correctness

> The version of **PARTITION** given in this chapter is not the original partitioning algorithm. Here is the original partitioning algorithm, which is due to C. A. R. Hoare.
>
> ```cpp
> HOARE-PARTITION(A,p,r)
>     x = A[p]
>     i = p-1
>     j = r+1
>     while TRUE
>         repeat
>             j = j - 1
>         until A[j] ≤ x
>         repeat
>             i = i + 1
>         until A[i] ≥ x
>         if i < j
>             exchange A[i] with A[j]
>         else return j
> ```
>
> **a**. Demonstrate the operation of **HOARE-PARTITION** on the array $A = <13,19,9,5,12,8,7,4,11,2,6,21>$, showing the values of the array and the indices $i$ and $j$ after each iteration of the while loop in lines 4–13.
>
> **b**. Describe how the PARTITION procedure in Section 7.1 differs from HOARE-PARTITION when all elements in $A[p:r]$ are equal. Describe a practical advantage of **HOARE-PARTITION** over PARTITION for use in quicksort.
>
> The next three questions ask you to give a careful argument that the procedure **HOARE-PARTITION** is correct. Assuming that the subarray $A[p:r]$ contains at least two elements, prove the following:
>
> **c**. The indices $i$ and $j$ are such that the procedure never accesses an element of $A$ outside the subarray $A[p:r]$
>
> **d**. When **HOARE-PARTITION** terminates, it returns a value $j$ such that $p \leq j < r$.
>
> **e**. Every element of $A[p:j]$ is less than or equal to every element of $A[j+1:r]$ when **HOARE-PARTITION** terminates.
>
> The **PARTITION** procedure in Section 7.1 separates the pivot value (originally in $A[r]$) from the two partitions it forms. The **HOARE-PARTITION** procedure, on the other hand, always places the pivot value (originally in $A[p]$) into one of the two partitions $A[p:j]$ and $A[j+1:r]$. Since $p\leq j < r$, neither partition is empty.
>
> **f**. Rewrite the **QUICKSORT** procedure to use **HOARE-PARTITION**.

### **a**

With $x = A[1] = 13$, the `while` loop runs three times:

- Iteration 1: $j$ stops at 11 ($A[11]=6$), $i$ stops at 1 ($A[1]=13$); $i<j$, so exchange $A[1]$ with $A[11]$: $A = \langle 6,19,9,5,12,8,7,4,11,2,13,21\rangle$.
- Iteration 2: $j$ stops at 10 ($A[10]=2$), $i$ stops at 2 ($A[2]=19$); exchange $A[2]$ with $A[10]$: $A = \langle 6,2,9,5,12,8,7,4,11,19,13,21\rangle$.
- Iteration 3: $j$ stops at 9 ($A[9]=11$), $i$ runs up to 10 ($A[10]=19$); $i \geq j$, so return $j = 9$.

The two partitions are $A[1:9]=\langle 6,2,9,5,12,8,7,4,11\rangle$ and $A[10:12]=\langle 19,13,21\rangle$.

## 7-2 Quicksort with equal element values

> The analysis of the expected running time of randomized quicksort in Section 7.4.2 assumes that all element values are distinct. This problem examines what happens when they are not.
>
> **a**. Suppose that all element values are equal. What is randomized quicksort's running time in this case?
>
> **b**. The **PARTITION** procedure returns an index $q$ such that each element of $A[p:q-1]$ is less than or equal to $A[q]$ and each element of $A[q+1:r]$ is greater than $A[q]$. Modify the **PARTITION** procedure to produce a procedure **PARTITION'(A,p,r)**, which permutes the elements of $A[p:r]$ and returns two indices $q$ and $t$, where $p\leq q \leq t \leq r$, such that
>
> - all elements of $A[q:t]$ are equal,
> - each element of $A[p:q-1]$ is less than $A[q]$, and
> - each element of $A[t+1:r]$ is greater than $A[q]$.
>
> Like **PARTITION**, your **PARTITION**' procedure should take $\Theta(r-p)$ time.
>
> **c**. Modify the **RANDOMIZED-PARTITION** procedure to call **PARTITION**', and name the new procedure **RANDOMIZED-PARTITION**'. Then modify the **QUICKSORT** procedure to produce a procedure **QUICKSORT'(A,p,r)** that calls **RANDOMIZED-PARTITION**' and recurses only on partitions where elements are not known to be equal to each other.
>
> **d**. Using **QUICKSORT**', adjust the analysis in Section 7.4.2 to avoid the assumption that all elements are distinct.

### **a**

Randomization does not help here: whichever element is swapped into position $r$, **PARTITION** returns $q=r$ (Exercise 7.1-2), so each call recurses on a subarray of size $n-1$ and the running time is $T(n)=T(n-1)+\Theta(n)=\Theta(n^2)$. The sketch below counts the comparisons.
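A minimal C++ sketch (ours) confirming the quadratic behavior on all-equal input; the comparison counter and the choice of `n` values are our own:

```cpp
#include <cstdio>
#include <utility>
#include <vector>

long comparisons = 0;

// Textbook PARTITION: with all keys equal, every A[j] <= x, so q = r.
int partition(std::vector<int>& A, int p, int r) {
    int x = A[r], i = p - 1;
    for (int j = p; j < r; ++j) {
        ++comparisons;
        if (A[j] <= x) std::swap(A[++i], A[j]);
    }
    std::swap(A[i + 1], A[r]);
    return i + 1;
}

void quicksort(std::vector<int>& A, int p, int r) {
    if (p >= r) return;
    int q = partition(A, p, r);
    quicksort(A, p, q - 1);
    quicksort(A, q + 1, r);
}

int main() {
    for (int n : {500, 1000, 2000}) {
        std::vector<int> A(n, 7);                  // all elements equal
        comparisons = 0;
        quicksort(A, 0, n - 1);
        std::printf("n=%5d  comparisons=%9ld  n(n-1)/2=%9ld\n",
                    n, comparisons, (long)n * (n - 1) / 2);
    }
}
```

The counts match $n(n-1)/2$ exactly, the unrolled form of $T(n)=T(n-1)+\Theta(n)$.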