# COMP 3711 – Design and Analysis of Algorithms I
**2022 Fall Semester – Written Assignment 1**
**Distributed: September 9, 2022**
**Due: September 26, 2022, 23:59**
Name: Kwong Cheuk Lam
SID: 20776617
Email: clkwongaa@connect.ust.hk
#### Implementations of the pseudocode for Q2, Q5, Q6:
https://colab.research.google.com/drive/1W08yF2t54zOdqihHDtiyp6YMT3XT7gCI?usp=sharing
#### Online version:
https://hackmd.io/@j1Jsew7VSb-GQ9oE3SBz8g/rkE_PcZZs
#### Collaborators:
1. Lam Ho Wai (hwlamah@connect.ust.hk)
### Question 1
##### a
```c
def sortDecreasingly(A, n):
    // main loop: iterates n-1 times
    for k <- 2 to n do:
        // visit elements from A[n] down to A[2]
        i <- n
        while i >= 2 do:
            // check if A[i] > A[i-1]
            if A[i] > A[i-1] then
                // swap A[i-1] and A[i]
                swap A[i] and A[i-1]
            i <- i - 1 // decrement i by 1 for the next visit
    return A
```
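For reference, the pseudocode can be transcribed into runnable Python (a direct transliteration with 0-based indexing; the name `sort_decreasingly` is mine):

```python
def sort_decreasingly(A):
    """Bubble-sort variant: each main-loop pass sweeps from the end of
    the array toward the front, bubbling the next-largest element into place."""
    n = len(A)
    for _ in range(n - 1):              # main loop: n-1 passes
        for i in range(n - 1, 0, -1):   # visit pairs from A[n-1] down to A[1]
            if A[i] > A[i - 1]:
                A[i], A[i - 1] = A[i - 1], A[i]
    return A
```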
##### b
In the first main-loop iteration, the largest element of the array is swapped to (or stays at) position 1, since it wins every comparison with the element before it.
In the second main-loop iteration, the second-largest element is swapped to (or stays at) position 2 for the same reason, stopping only behind the largest element already at position 1.
Similarly, after the i-th main-loop iteration, the i-th largest element is at position i, i.e. its proper place in descending order.
We can then prove the correctness of the algorithm rigorously by induction.
**Base Case:**
When n = 2, the algorithm would iterate once in the main loop, only comparing whether A[2] > A[1].
If so, the two elements will be swapped. If not, the array remains unchanged. Either way, A[1] > A[2] and the array is sorted in decreasing order.
***Base case is true.***
**Induction:**
Assume that the algorithm correctly sorts every array of size n-1 in decreasing order.
Now consider the algorithm on an array of size n, and suppose the largest element of A[1...n] is $x$, initially at position k.
In the first main-loop pass, $x$ stays at position k while $k < i \leq n$. When $2 \leq i \leq k$, $x$ is swapped one position forward at every inner-loop iteration, since $x$ is greater than every element of A[1...k-1]. Hence $x$ ends up at position 1 after the first main-loop pass.
In all subsequent passes, $x$ stays at position 1, as no other element is greater than $x$ and can swap with it. We can then reduce the problem to `sortDecreasingly(A, n) = [x] + sortDecreasingly(B, n-1)`,
where `+` means concatenation of two lists and `B` is A[2...n] after the first main-loop pass.
By the induction hypothesis, A[2...n] of size (n-1) is then sorted decreasingly, and since x is larger than all elements of B, A[1...n] is sorted decreasingly.
***The algorithm works correctly***
##### c
For any fixed i, the i-th main-loop iteration in the worst case performs
1. n-1 comparisons of `A[i] > A[i-1]`
2. n - i swaps
3. 2 variable assignments
Total run time in one iteration
$=(n-1) + (n - i) + 2$
$=(2n+1-i)$
$\leq 2n$ $\text{, for } 1 \leq i \leq n-1$
$= \mathcal{O}(n)$
Worst-case running time of the algorithm (input originally in ascending order)
$=\sum_{i=1}^{n-1}(2n+1-i)$
$=\frac{3}{2}n^2 - \frac{n}{2} - 1$
$=\mathcal{O}(n^2) = \Omega(n^2) = \Theta(n^2)$
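The count above can be sanity-checked empirically. The sketch below (an instrumented variant I added, not part of the submission) runs the part (a) sort on an ascending input and verifies the total against $\frac{3}{2}n^2 - \frac{n}{2} - 1$:

```python
def count_operations(n):
    """Run the part (a) sort on the ascending array [1..n] and count
    comparisons, swaps, and the two assignments per main-loop pass."""
    A = list(range(1, n + 1))
    comparisons = swaps = assignments = 0
    for _ in range(n - 1):
        assignments += 2                 # the 2 bookkeeping assignments per pass
        for i in range(n - 1, 0, -1):
            comparisons += 1
            if A[i] > A[i - 1]:
                A[i], A[i - 1] = A[i - 1], A[i]
                swaps += 1
    return comparisons + swaps + assignments

# total operations match (3n^2 - n - 2) / 2 on ascending input
for n in (2, 5, 10, 50):
    assert count_operations(n) == (3 * n * n - n - 2) // 2
```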
### Question 2
##### a
We design `allPermutations(A, n)` to achieve the task.
`A` is the input array of `n` distinct integers, where $n >= 1$.
The output would be a list of arrays with all possible permutations.
**Base case (n=1):**
The function returns `[[A[1]]]`.
**Recursive case (n > 1):**
Let `permutations` be the result of `allPermutations(A[2...n], n-1)`. We return the concatenated list of arrays obtained by inserting A[1] at position i into each array in `permutations`, for $1 \leq i \leq n$.
For example, if we originally have an array [4, 1, 2, 3]:
|allPermutation([1,2,3], 3)|[1,2,3]|[1,3,2]|[2,1,3]|[2,3,1]|[3,1,2]|[3,2,1]
|---|---|---|---|---|---|---|
|i=1|[4,1,2,3]|[4,1,3,2]|[4,2,1,3]|[4,2,3,1]|[4,3,1,2]|[4,3,2,1]
|i=2|[1,4,2,3]|[1,4,3,2]|[2,4,1,3]|[2,4,3,1]|[3,4,1,2]|[3,4,2,1]
|i=3|[1,2,4,3]|[1,3,4,2]|[2,1,4,3]|[2,3,4,1]|[3,1,4,2]|[3,2,4,1]
|i=4|[1,2,3,4]|[1,3,2,4]|[2,1,3,4]|[2,3,1,4]|[3,1,2,4]|[3,2,1,4]
A list of size n! containing arrays of all possible permutations will be returned.
```c
def allPermutations(A, n):
    // base case: return a list that contains a single
    // one-element array, i.e. the only permutation of one element:
    // [[A[1]]]
    if n = 1 then
        return [[A[1]]]
    // recursive call on A[2...n]
    permutations <- allPermutations(A[2...n], n-1)
    // initialize the list for storing the result
    res <- []
    // iterate through insertion positions k = 1 to k = n
    for k <- 1 to n do
        // iterate through all arrays in the permutations list,
        // which contains all permutations of A[2...n],
        // and insert A[1] at position k to create a new permutation
        for permutation in permutations do
            newPermutation <- insert A[1] at position k of permutation
            // add the new permutation to the result list
            add newPermutation to res
    return res
```
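A runnable Python version of the pseudocode (0-based indexing; the name `all_permutations` is mine):

```python
def all_permutations(A):
    """Return a list of all permutations of A, built by inserting A[0]
    into every position of every permutation of A[1:]."""
    if len(A) == 1:
        return [[A[0]]]
    permutations = all_permutations(A[1:])   # recursive call on the tail
    res = []
    for k in range(len(A)):                  # insertion position for A[0]
        for p in permutations:
            res.append(p[:k] + [A[0]] + p[k:])
    return res
```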
##### b
Suppose T(n) is the number of steps required by the algorithm on an input of size n.
The recursive call `permutations <- allPermutations(A[2...n], n-1)` takes $T(n-1)$ time.
The outer loop runs $n$ times.
The inner loop runs $(n-1)!$ times, since there are $(n-1)!$ permutations of $(n-1)$ distinct numbers.
Each array insertion takes $\mathcal{O}(n)$ time.
The remaining operations, including the shallow copy of the array, take only constant time $c$ and $c'$.
Then,
$T(n) = T(n-1) + c'(n-1)!(n)(n) +c = T(n-1) + \mathcal{O}(n*n!)$
$T(n) = T(n-2) + c'(n-1)(n-1)! + c'n(n)! + 2c$
...
and
$T(1) = c_1$
In general,
$T(n) = \sum_{i=2}^{n}(c'i \cdot i! + c)+c_1$, where $c, c', c_1$ are some constants.
$T(n) < n(c'n \cdot n!) + cn + c_1$
$T(n) = \mathcal{O}(n^2*n!)$
### Question 3
|First Term|Second Term|Relation|
|-|-|-|
|$n^2$|$n^2/\log n$|$n^2/\log n = \mathcal{O}(n^2) \\ n^2 = \Omega(n^2/\log n)$|
|$n^3$|$2^{\log(n)}$|$2^{\log(n)} = \mathcal{O}(n^3) \\ n^3 = \Omega(2^{\log(n)})$|
|$\log(n)$|$\log\log(n)$|$\log\log(n) = \mathcal{O}(\log(n)) \\ \log(n) = \Omega(\log\log(n))$|
|$n^2$|$4^{\log_2(n)}$|$n^2 = \Theta(4^{\log_2(n)})$|
|$(\log{n})^2$|$n^{1/3}$|$(\log{n})^2 = \mathcal{O}(n^{1/3}) \\ n^{1/3} = \Omega((\log{n})^2)$|
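Two of the rows can be sanity-checked numerically (an illustration, not a proof):

```python
import math

# Row 4: 4^(log2 n) equals n^2 exactly, hence the Theta bound.
for n in (2, 8, 1024):
    assert 4 ** math.log2(n) == n ** 2

# Row 5: (log n)^2 / n^(1/3) keeps shrinking as n grows,
# consistent with (log n)^2 = O(n^(1/3)).
ratios = [math.log2(n) ** 2 / n ** (1 / 3) for n in (2 ** 10, 2 ** 20, 2 ** 30)]
assert ratios[0] > ratios[1] > ratios[2]
```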
### Question 4
##### a
$T(n) = 4T(n/2) + n$
$T(n) = 4(4T(n/4) + n/2) + n = 4^2*T(n/4) + 2n + n$
...
$T(n) = 4^i*T(n/2^i)+ \sum_{j=0}^{i-1}(2^j*n)$, at the i th recurrence
...
$T(1) = 1$
Then, for the boundary condition when $n/2^i = 1$, $i = \log_2n$.
Thus,
$T(n) = 4^{\log_2n}(1) + \sum_{j=0}^{\log_2n-1}(2^j*n)$
$= n^2 + n\frac{2^{\log_2n}-1}{2-1}$
$=2n^2-n$
$=\mathcal{O}(n^2)$
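The closed form $2n^2 - n$ can be verified by evaluating the recurrence directly for powers of 2:

```python
def T(n):
    """Recurrence from part (a): T(n) = 4T(n/2) + n, T(1) = 1."""
    return 1 if n == 1 else 4 * T(n // 2) + n

# closed form derived above: T(n) = 2n^2 - n
for n in (2, 4, 64, 1024):
    assert T(n) == 2 * n * n - n
```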
##### b
$T(n) = 4T(n/2) + n^3$
$T(n) = 4(4T(n/4) + n^3/8) + n^3 = 4^2*T(n/4) + n^3/2 + n^3$
...
$T(n) = 4^i*T(n/2^i)+ \sum_{j=0}^{i-1}(2^{-j}*n^3)$, at the i th recurrence
...
$T(1) = 1$
Then, for the boundary condition when $n/2^i = 1$, $i = \log_2n$.
Thus,
$T(n) = 4^{\log_2n}(1) + \sum_{j=0}^{\log_2n-1}(2^{-j}*n^3)$
$= n^2 + n^3\frac{1-2^{-\log_2n}}{1-1/2}$
$= n^2 + 2(n^3 - n^2)$
$=\mathcal{O}(n^3)$
##### c
$T(n) = T(n/3) + \sqrt{n}$
$T(n) = T(n/9) + \sqrt{n/3} + \sqrt{n}$
...
$T(n) = T(n/3^i)+ \sum_{j=0}^{i-1}(\sqrt{n}/3^{j/2})$, at the i th recurrence
...
$T(1) = 1$
Then, for the boundary condition when $n/3^i = 1$, $i = \log_3n$.
Thus,
$T(n) = 1 + \sum_{j=0}^{\log_3n-1}(\sqrt{n}/3^{j/2})$
$= 1 + \sqrt{n}\frac{1-(1/\sqrt3)^{\log_3n}}{1-1/\sqrt3}$
$= 1 + \sqrt{n}\frac{1-n^{-1/2}}{1-1/\sqrt3}$
$= 1 + \frac{3+\sqrt3}{2}(\sqrt{n}-1)$
$=\mathcal{O}(n^{1/2})$
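Evaluating the recurrence for powers of 3 confirms the geometric-series sum $1 + \frac{3+\sqrt3}{2}(\sqrt{n}-1)$ (up to floating-point error):

```python
import math

def T(n):
    """Recurrence from part (c): T(n) = T(n/3) + sqrt(n), T(1) = 1."""
    return 1 if n == 1 else T(n // 3) + math.sqrt(n)

# closed form: 1 + (3 + sqrt(3))/2 * (sqrt(n) - 1), for n a power of 3
for n in (3, 27, 3 ** 8):
    closed = 1 + (3 + math.sqrt(3)) / 2 * (math.sqrt(n) - 1)
    assert math.isclose(T(n), closed)
```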
##### d
We apply the Master Theorem.
$T(n) = 5T(n/4) + n\log{n}$
Here $a = 5$, $b = 4$, so $\log_4 5 \approx 1.16 > 1$. Since $n\log n = \mathcal{O}(n^{\log_45-\epsilon})$ for some $\epsilon > 0$, case 1 of the Master Theorem applies:
$T(n) = \Theta(n^{\log_45}) = \mathcal{O}(n^{\log_45})$
<!-- $T(n) = 5T(n/4) + n\log{n}$
$T(n) = 5(5T(n/16)+(n/4)\log(n/4)) + n\log(n) = 5^2T(n/16) + (5/4)n\log(n/4) + n\log{n}$
...
$T(n) = 5^i*T(n/4^i)+ \sum_{j=0}^{i-1}((5/4)^j*n\log(n/4^{j}))$, at the i th recurrence
$T(n) = 5^i*T(n/4^i)+ n\log n\sum_{j=0}^{i-1}(5/4)^j -n\log 4\sum_{j=0}^{i-1}j(5/4)^j$
...
$T(1) = 1$
Then, for the boundary condition when $n/4^i = 1$, $i = \log_4n$.
Thus,
$T(n) = 5^{\log_4n} + \sum_{j=0}^{log_4n-1}((5/4)^j*n\log(n/4^{j}))$
$T(n) = 5^{\log_4n} + n\sum_{j=0}^{log_4n-1}((5/4)^j*(\log(n) - \log(4^{j}))$
$T(n) = 5^{\log_4n} + n\log{n}\sum_{j=0}^{log_4n-1}(5/4)^j - n\log4\sum_{j=0}^{log_4n-1}j(5/4)^j$
$T(n) = n^{\log_45} + n\log{n}(4n^{\log_45-1}-4) - n\log4 (\sum_{j=0}^{\log_4n-1}{(5/4)^j} + \sum_{j=1}^{\log_4n-1}{(5/4)^j} + \sum_{j=2}^{\log_4n-1}{(5/4)^j} + ... + \sum_{j=log_4n-1}^{\log_4n-1}{(5/4)^j})$
$T(n) < 5^{\log_4n} + n\frac{(5/4)^{log_4n}-1}{5/4 -1}$
$T(n) = \mathcal{O}(n^{log_45})$ -->
##### e
$T(n) = T(\lfloor \sqrt{n}\rfloor) + 1$
$T(n) = T(\lfloor n^{1/4}\rfloor) + 1 + 1$
...
$T(n) = T(\lfloor n^{1/2^i}\rfloor) + i$
...
$T(1) = 1$
Then, for boundary condition when $\lfloor n^{1/2^i} \rfloor =1$, $1 \leq n^{1/2^i} < 2$,
$\Rightarrow \frac{1}{2^i} \log{n} < \log 2$
$\Rightarrow \frac{1}{2^i} < \frac{\log 2}{\log n}$
$\Rightarrow 2^i > \log_2 n$
$\Rightarrow i \log 2 > \log (\log_2 n)$
$\Rightarrow i > \log_2 \log_2 n$
$\Rightarrow i = \lfloor \log_2 \log_2 n \rfloor + 1$
Thus,
$T(n) = 2 + \lfloor \log_2 \log_2 n \rfloor$
$T(n) = \mathcal{O}(\log \log n)$
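The bound can be checked by iterating the recurrence with integer square roots; on these sample sizes it matches $2 + \lfloor \log_2 \log_2 n \rfloor$ exactly:

```python
import math

def T(n):
    """Recurrence from part (e): T(n) = T(floor(sqrt(n))) + 1, T(1) = 1."""
    return 1 if n == 1 else T(math.isqrt(n)) + 1

# matches 2 + floor(log2 log2 n) on these sample sizes
for n in (2, 16, 100, 65536):
    assert T(n) == 2 + math.floor(math.log2(math.log2(n)))
```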
### Question 5
We design a recursive algorithm `findMinimum(A, n)` to achieve the task.
`A` is the input array of size `n`.
The output is the minimum of `A`.
**Base Case (n=1):**
A[1] is the only element and hence the minimum. The function returns A[1].
**Recursive case (n > 1):**
Given the interval $[1, n]$: if $A[\lfloor n/2 \rfloor] \leq A[\lfloor n/2 \rfloor + 1]$, we call the function recursively as `findMinimum(A[1...⌊n/2⌋], ⌊n/2⌋)`; otherwise we call `findMinimum(A[⌊n/2⌋+1...n], n-⌊n/2⌋)`.
The result of the recursive call is the minimum of `A`.
*Explanation:*
***Case 1***
Suppose A[⌊n/2⌋] < A[⌊n/2⌋ + 1]. We prove that the minimum of A must lie in $[1, \lfloor n/2 \rfloor]$.
Assume, for the sake of contradiction, that the minimum of A, say A[k], lies in A[⌊n/2⌋+1...n],
i.e. $A[k] < A[⌊n/2⌋] < A[⌊n/2⌋+1]$.
Then,
$f(k) < f(⌊n/2⌋) < f(⌊n/2⌋+1)$
Also, $\lambda f(⌊n/2⌋) + (1-\lambda)f(k) = f(k) + \lambda (f(⌊n/2⌋) - f(k))$
$\Rightarrow \lambda f(⌊n/2⌋) + (1-\lambda)f(k) < f(k) + f(⌊n/2⌋) - f(k)$ for any $0 < \lambda < 1$ and $f(⌊n/2⌋) - f(k) > 0$
$\Rightarrow \lambda f(⌊n/2⌋) + (1-\lambda)f(k) < f(⌊n/2⌋+1)$
Then, at point $x = \lfloor n/2 \rfloor + 1$, $\exists\lambda = 1 - \frac{1}{k - \lfloor n/2 \rfloor}$ s.t.
$f(\lambda \lfloor n/2 \rfloor+(1-\lambda) k)$
$= f(\lfloor n/2 \rfloor + 1) > \lambda f(\lfloor n/2 \rfloor) + (1-\lambda)f(k)$
Or equivalently, point at $(\lfloor n/2 \rfloor + 1, f(\lfloor n/2 \rfloor + 1))$ lies above the line segment with endpoints at $(\lfloor n/2 \rfloor, f(\lfloor n/2 \rfloor))$ and $(k, f(k))$.
Then f is not a strictly convex function, and hence a contradiction arises.
Therefore, the minimum of A[1...⌊n/2⌋] equals the minimum of A[1...n].
***Case 2***
Suppose A[⌊n/2⌋] = A[⌊n/2⌋ + 1]. Then, by strict convexity, f has a unique minimum at some point $p$ in the interval [⌊n/2⌋, ⌊n/2⌋+1] with $f(p) < f(⌊n/2⌋) = f(⌊n/2⌋ + 1)$, and p is not an integer.
We can then prove by contradiction that $f(p)$ is the unique minimum of $\{f(x): x\in [1, n]\}$: if $\exists k > ⌊n/2⌋ + 1$ s.t. $f(k)$ is the minimum of A, then the point $(⌊n/2⌋ + 1, f(⌊n/2⌋ + 1))$ lies above the line segment connecting $(p, f(p))$ and $(k, f(k))$, contradicting convexity.
The smallest integral value of A is then A[⌊n/2⌋], which lies in A[1...⌊n/2⌋].
***Case 3***
Suppose A[⌊n/2⌋] > A[⌊n/2⌋ + 1]. Similarly, we can prove by contradiction, as in Case 1, that the minimum of A must lie in $[\lfloor n/2 \rfloor +1, n]$.
```c
def findMinimum(A, n):
    if n = 1 then return A[1]
    half <- ⌊n/2⌋
    if A[half] <= A[half+1] then
        return findMinimum(A[1...half], half)
    else
        return findMinimum(A[(half+1)...n], n-half)
```
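An equivalent iterative rendering in Python (0-based indexing, maintaining the interval `[lo, hi]` instead of slicing the array; the name `find_minimum` is mine):

```python
def find_minimum(A):
    """Minimum of an array sampled from a strictly convex function,
    found by binary search on the sign of the local slope."""
    lo, hi = 0, len(A) - 1
    while lo < hi:
        half = (lo + hi) // 2
        if A[half] <= A[half + 1]:
            hi = half              # minimum lies in the left part, incl. half
        else:
            lo = half + 1          # minimum lies strictly to the right
    return A[lo]
```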
**Runtime:**
The recursive step, `findMinimum(A[1...half], half)` or `findMinimum(A[(half+1)...n], n-half)`, takes $T(n/2)$ time.
All other steps take only constant time $c$.
The boundary case also takes only constant time $c'$.
Thus,
$T(n) = T(n/2) + c$
$T(n) = T(n/4) + 2c$
...
$T(n) = T(n/2^i) + ic$
...
$T(1) = c'$
Then, at boundary condition when $n/2^i = 1$, $i = \log_2 n$,
$T(n) = c' + c\log_2 n$
$T(n) = \mathcal{O}(\log n)$
### Question 6
Suppose arrays P, Q, R are represented by three column vectors $\overrightarrow{P} = \left( \begin{array}{c} P[0] \\ P[1] \\ ... \\ P[n] \end{array} \right) \text{, } \overrightarrow{Q} = \left( \begin{array}{c} Q[0] \\ Q[1] \\ ... \\ Q[n] \end{array} \right)
\text{and } \overrightarrow{R} = \left( \begin{array}{c} R[0] \\ R[1] \\ ... \\ R[2n] \end{array} \right)$, where $A[i]$ denotes the coefficient of $x^i$ in the polynomial that the array represents, for A = P, Q, R.
The multiplication of the two vectors, $M= \overrightarrow{P}\overrightarrow{Q}^{T}$, then gives the matrix:
$\left[ \begin{array}{cccc}
P[0]Q[0] & P[0]Q[1] & ... & P[0]Q[n] \\
P[1]Q[0] & P[1]Q[1] & ... & P[1]Q[n] \\
... & ... & ... & ... \\
P[n]Q[0] & P[n]Q[1] & ... & P[n]Q[n] \\
\end{array} \right]$
Note that all entries on the same anti-diagonal correspond to the same power of x after multiplication, and $R[k] = \sum_{i+j=k}{M_{i,j}}$,
where $0 \leq i, j \leq n$ and $0 \leq k
\leq 2n$.
Then, we can use the divide-and-conquer method and divide the product into 4 multiplications.
$\left[ \begin{array}{c|c}
\overrightarrow{P}_{0,\lceil n/2 \rceil-1}\overrightarrow{Q}^{T}_{0,\lceil n/2 \rceil-1} & \overrightarrow{P}_{0,\lceil n/2 \rceil-1}\overrightarrow{Q}^{T}_{\lceil n/2 \rceil, n}
\\
\hline
\overrightarrow{P}_{\lceil n/2 \rceil, n}\overrightarrow{Q}^{T}_{0,\lceil n/2 \rceil-1}
&
\overrightarrow{P}_{\lceil n/2 \rceil, n}\overrightarrow{Q}^{T}_{\lceil n/2 \rceil, n}
\\
\end{array} \right]$
Finally, we adopt a strategy similar to Karatsuba multiplication. Note that we can save one recursive call as follows:
$\overrightarrow{P}_{0,\lceil n/2 \rceil-1}
\overrightarrow{Q}^{T}_{\lceil n/2 \rceil, n} + \overrightarrow{P}_{\lceil n/2 \rceil, n}
\overrightarrow{Q}^{T}_{0,\lceil n/2 \rceil-1}$
$=(\overrightarrow{P}_{0,\lceil n/2 \rceil-1} + \overrightarrow{P}_{\lceil n/2 \rceil, n})(\overrightarrow{Q}^{T}_{0,\lceil n/2 \rceil-1} +
\overrightarrow{Q}^{T}_{\lceil n/2 \rceil, n}) - \overrightarrow{P}_{0,\lceil n/2 \rceil-1} \overrightarrow{Q}^{T}_{0,\lceil n/2 \rceil-1} -
\overrightarrow{P}_{\lceil n/2 \rceil, n}
\overrightarrow{Q}^{T}_{\lceil n/2 \rceil, n}$
Note that in this question and in the following pseudocode, $+$ or $-$ means element-wise addition or subtraction of two lists. If the two lists are not of the same size, the element-wise addition or subtraction starts from the right, i.e. from the larger index.
Say if `A = [A[0], A[1], A[2]]` and `B = [B[0], B[1]]`, `A ± B = [A[0], A[1] ± B[0], A[2] ± B[1]]`.
For example,
say we have n = 3 and M:
$\left[ \begin{array}{cc|c}
P[0]Q[0] & P[0]Q[1] & P[0]Q[2] \\
P[1]Q[0] & P[1]Q[1] & P[1]Q[2] \\
\hline
P[2]Q[0] & P[2]Q[1] & P[2]Q[2] \\
\end{array} \right]$
Then the top-left call output should return an array **`[P[0]Q[0], P[0]Q[1]+P[1]Q[0], P[1]Q[1]]`**.
<!-- The top-right output should return `[P[0]Q[2], P[1]Q[2]]`.
The bottom-left output should return `[P[2]Q[0], P[2]Q[1]]`. -->
The bottom-right output should return **`[P[2]Q[2]]`**.
When we use the Karatsuba multiplication strategy, we first compute the product of `[P[0], P[1]+P[2]]` and `[Q[0], Q[1]+Q[2]]`,
i.e. `[P[0]Q[0], P[0]Q[1]+P[0]Q[2]+P[1]Q[0]+P[2]Q[0], P[1]Q[1]+P[2]Q[1]+P[1]Q[2]+P[2]Q[2]]`.
Then we subtract `[P[0]Q[0], P[0]Q[1]+P[1]Q[0], P[1]Q[1]]` and `[P[2]Q[2]]` from this result, and get
**`[0, P[0]Q[2]+P[2]Q[0], P[1]Q[2]+P[2]Q[1]]`**
Then we sum the 3 arrays diagonally (with shifting), i.e.
`[P[0]Q[0], P[0]Q[1]+P[1]Q[0], P[0]Q[2] + P[1]Q[1] + P[2]Q[0], P[2]Q[1] + P[1]Q[2], P[2]Q[2]]`.
The amount of shifting needed is demonstrated by the following pseudocode.
Pseudocode: `multiply(A, B, n)`, where A and B are two arrays of size n.
The function returns an array indicating the coefficients of the polynomial after multiplication, in increasing powers of x.
Notations:
`%` is the modulo operator
`length(A)` is the length of array `A`.
`+` and `-` refer to element-wise addition and subtraction respectively.
```c
def multiply(A, B, n):
    // base case: return a list that contains only one element
    if n = 1 then return [A[0] * B[0]]
    mid <- ⌈n/2⌉
    // calculate the top-left sub-matrix
    TL <- multiply(A[0...mid-1], B[0...mid-1], mid)
    // calculate the bottom-right sub-matrix
    BR <- multiply(A[mid...n-1], B[mid...n-1], n-mid)
    // calculate the sum of the top-right and bottom-left matrices
    a <- A[0...mid-1] + A[mid...n-1]
    b <- B[0...mid-1] + B[mid...n-1]
    ab <- multiply(a, b, mid)
    // Karatsuba multiplication: recover the cross terms
    TRBL <- ab - TL - BR
    // initialize the result array
    M[0...2n-1] <- 0
    // add each partial result at its shifted position
    M[0...2*mid-1] <- TL
    M[mid-n%2...mid+length(TRBL)-n%2-1] <- M[mid-n%2...mid+length(TRBL)-n%2-1] + TRBL
    M[2*n-1-length(BR)...2*n-1] <- M[2*n-1-length(BR)...2n-1] + BR
    return M
```
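The scheme can be sketched in runnable Python. This version aligns the split halves at the low-order end and shifts by `mid` and `2*mid` when recombining, which is an equivalent, simpler bookkeeping than the right-aligned convention with the `n%2` offset used in the pseudocode; the name `poly_multiply` is mine:

```python
def poly_multiply(A, B):
    """Karatsuba-style multiplication of two equal-length coefficient
    lists (lowest power first); returns the 2n-1 product coefficients."""
    n = len(A)
    if n == 1:
        return [A[0] * B[0]]
    mid = (n + 1) // 2                       # ceil(n/2)
    Alo, Ahi = A[:mid], A[mid:]
    Blo, Bhi = B[:mid], B[mid:]
    TL = poly_multiply(Alo, Blo)             # top-left: low * low
    BR = poly_multiply(Ahi, Bhi)             # bottom-right: high * high
    # pad the high halves to length mid, then one multiplication
    # replaces the two cross products (Karatsuba's trick)
    a = [x + y for x, y in zip(Alo, Ahi + [0] * (mid - len(Ahi)))]
    b = [x + y for x, y in zip(Blo, Bhi + [0] * (mid - len(Bhi)))]
    ab = poly_multiply(a, b)
    cross = ab[:]                            # cross = ab - TL - BR
    for i, v in enumerate(TL):
        cross[i] -= v
    for i, v in enumerate(BR):
        cross[i] -= v
    # recombine: TL + cross * x^mid + BR * x^(2*mid)
    M = [0] * (2 * n - 1)
    for i, v in enumerate(TL):
        M[i] += v
    for i, v in enumerate(cross):
        M[i + mid] += v
    for i, v in enumerate(BR):
        M[i + 2 * mid] += v
    return M
```

For instance, `poly_multiply([1, 2, 3], [4, 5, 6])` computes $(1+2x+3x^2)(4+5x+6x^2)$.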
***Runtime:***
To calculate the worst-case runtime T(n) of the algorithm, we analyse it step by step.
The three recursive calls take $3T(n/2)$ time.
The addition and subtraction steps take at most $\mathcal{O}(n)$ time, as each operates element-wise on arrays of at most n numbers.
All other steps take constant time $c$.
The boundary case takes constant time only.
Thus,
$T(n) = 3T(n/2) + n + c$
$T(n) = 3(3T(n/4) + n/2 + c) + n + c = 3^2T(n/4) + \frac{3}{2}n + n + 3c + c$
...
$T(n) = 3^iT(n/2^i)+\sum_{j=0}^{i-1}{(\frac{3}{2})^jn} + c\sum_{j=0}^{i-1}{3^j}$
...
$T(1) = c'$
Then, at the boundary condition $n/2^i = 1$, $i = \log_2n$,
$T(n) = c'3^{\log_2n} + \sum_{j=0}^{\log_2n-1}{(3/2)^jn} + c\frac{3^{\log_2n}-1}{2}$
$T(n) = c'n^{\log_2 3} + n\frac{(3/2)^{\log_2n}-1}{(3/2)-1} + c\frac{n^{\log_2 3}-1}{2}$
$\Rightarrow T(n) = \mathcal{O}(n^{\log_2 3}) = \Omega(n^{\log_2 3})$
$\Rightarrow T(n) = \Theta(n^{1.585...})$
which is asymptotically smaller than the $\Theta(n^2)$ cost of the straightforward method.