# Topic 1: Writing Exercises
#### Exercise 1.
From the online book, Chapter 0, Problem 0.2. Use the definitions of big-O and big-Ω to show the results asked for in the problem.
Show that, if c is a positive real number, then $g(n) = 1 + c + c^2 +···+ c^n$ is:
(a) $Θ(1)$ if $c < 1$
(b) $\Theta(n)$ if c = 1.
(c) $Θ(c^n)$ if $c > 1$.
(a) If $c < 1$: the closed form of the sum is $g(n) = \frac{1 - c^{n+1}}{1 - c}$.

Every term of the sum is positive, so $g(n) \geq 1$, which gives $g(n) = \Omega(1)$.

Since $c^{n+1} > 0$, we also have $g(n) = \frac{1 - c^{n+1}}{1 - c} < \frac{1}{1 - c}$, a constant, which gives $g(n) = O(1)$. Therefore $g(n) = \Theta(1)$.

For example, with $c = \frac{1}{2}$, the sum $1 + \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots$ never exceeds $\frac{1}{1 - 1/2} = 2$.
(b) If $c = 1$, then $g(n) = 1 + 1 + \cdots + 1$, with one term for each of $c^0, c^1, \ldots, c^n$, so $g(n) = n + 1$.

$\lim_{n \to \infty} \frac{g(n)}{n} = \lim_{n \to \infty} \frac{n + 1}{n} = 1$, so $g(n) = \Theta(n)$. Directly from the definitions: $n \leq n + 1 \leq 2n$ for all $n \geq 1$, so $g(n) = \Omega(n)$ and $g(n) = O(n)$.
(c) If $c > 1$: $g(n) = \frac{c^{n+1} - 1}{c - 1}$.

Lower bound: the last term of the sum alone gives $g(n) \geq c^n$, so $g(n) = \Omega(c^n)$.

Upper bound: dropping the $-1$ in the numerator gives

$g(n) = \frac{c^{n+1} - 1}{c - 1} < \frac{c^{n+1}}{c - 1} = \frac{c}{c - 1} \cdot c^n$,

a constant multiple of $c^n$, so $g(n) = O(c^n)$.

Written another way:

$c^n \leq g(n) < \frac{c}{c - 1} \cdot c^n$,

so $g(n) = \Theta(c^n)$.
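As a sanity check (not part of the proof), the three cases above can be verified numerically. The helper `g` below is a hypothetical direct implementation of the partial sum, and the assertions mirror the claimed bounds.

```python
def g(c, n):
    """Partial geometric sum 1 + c + c^2 + ... + c^n."""
    return sum(c**k for k in range(n + 1))

# (a) c < 1: g(n) is squeezed between 1 and the constant 1/(1-c)
c = 0.5
for n in (10, 20, 40):
    assert 1 <= g(c, n) < 1 / (1 - c)

# (b) c = 1: g(n) is exactly n + 1
for n in (10, 20, 40):
    assert g(1, n) == n + 1

# (c) c > 1: c^n <= g(n) < (c/(c-1)) * c^n
c = 2.0
for n in (10, 20, 40):
    assert c**n <= g(c, n) < (c / (c - 1)) * c**n
```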
#### Exercise 2.
You're given an array A consisting of n integers A[1], A[2], . . . , A[n]. You'd like to output a two-dimensional n-by-n array B in which B[i, j] (for i < j) contains the sum of array entries A[i] through A[j]—that is, the sum A[i] + A[i + 1] + . . . + A[j]. (The value of array entry B[i, j] is left unspecified whenever i ≥ j, so it doesn't matter what is output for these values.)
Here's a simple algorithm to solve this problem.
```
B = new int[n][n] array
For i = 1, 2, ..., n:
    For j = i, i+1, ..., n:
        Add up the array entries A[i] through A[j]
        Store the result in B[i, j]
Return B
```
a) Give a tight asymptotic running time bound for this algorithm. (Note: tight means that if the algorithm's running time is T(n), then there is some function f such that T(n) is O(f(n)) and T(n) is Ω(f(n)), i.e. T(n) is Θ(f(n)).)
$\Theta(n^3)$

The inner two loops fill one entry $B[i, j]$ for each pair $i \leq j$, and the number of such pairs is

$n + (n-1) + \cdots + 2 + 1 = 1 + 2 + \cdots + n = \frac{n(n+1)}{2} = \Theta(n^2)$

Explanation: For the upper bound, the outer loop runs $n$ times, the inner loop at most $n$ times, and adding up $A[i]$ through $A[j]$ takes at most $n$ additions, so $T(n) = O(n^3)$. For a tight bound we must match this with a lower bound. Consider only the pairs with $i \leq n/4$ and $j \geq 3n/4$: there are at least $(n/4)^2 = n^2/16$ of them, and each corresponding sum has at least $n/2$ terms, so this fraction of the table alone already costs $\frac{n^2}{16} \cdot \frac{n}{2} = \Omega(n^3)$ work. A constant fraction of the table is asymptotically as expensive as the whole, so $T(n) = \Omega(n^3)$. Since $T(n) = O(n^3)$ and $T(n) = \Omega(n^3)$, the tight asymptotic running time bound is $T(n) = \Theta(n^3)$.
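The pseudocode above can be transcribed directly into Python (a sketch, using 0-indexed arrays, so `B[i][j]` holds `A[i] + ... + A[j]`):

```python
def all_range_sums_naive(A):
    """Theta(n^3) algorithm: recompute each range sum from scratch."""
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i, n):
            # Add up A[i] through A[j] from scratch: up to n additions.
            B[i][j] = sum(A[i:j + 1])
    return B
```

The `sum(A[i:j + 1])` call performs up to $n$ additions for each of the $\Theta(n^2)$ entries, which is exactly where the third factor of $n$ comes from.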
b) Design an algorithm with an asymptotically better running time.
```
B = new int[n][n] array
For i = 1, 2, ..., n:
    B[i, i] = A[i]
    For j = i+1, ..., n:
        B[i, j] = B[i, j-1] + A[j]
Return B
```
Explanation: Now the upper and lower bounds are both $\Theta(n^2)$. The updated inner line no longer multiplies in a third factor of $n$, because we are no longer re-adding numbers that have already been computed (a dynamic-programming-style reuse of results): each $B[i, j]$ is obtained by looking up the previously stored $B[i, j-1]$ and performing a single addition, instead of summing the whole range over again. There are $\Theta(n^2)$ entries and each now costs $O(1)$, so the total running time is $\Theta(n^2)$.
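Here is a sketch of the improved algorithm in Python (0-indexed, with the base case `B[i][i] = A[i]` made explicit):

```python
def all_range_sums_fast(A):
    """Theta(n^2) algorithm: extend the previous range sum by one element."""
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for i in range(n):
        B[i][i] = A[i]                     # base case: one-element range
        for j in range(i + 1, n):
            B[i][j] = B[i][j - 1] + A[j]   # reuse the sum already computed
    return B
```

Each entry now costs a single addition, so filling all $\Theta(n^2)$ entries takes $\Theta(n^2)$ total time.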
#### Exercise 3.
You're doing some stress-testing on various models of glass jars to determine the height from which they can be dropped and still not break. The setup for this experiment, on a particular type of jar, is as follows. You have a ladder with n rungs, and you want to find the highest safe rung: the highest rung from which you can drop a copy of the jar and not have it break.
The basic operation you can use is: climb to some designated rung of the ladder, drop a jar, and see if it breaks.
a) Suppose you have an unlimited supply of jars. Design an algorithm that attempts to minimize the amount of time it takes to find the highest safe rung (minimizes the number of drop operations).
With an unlimited supply of jars, you can use binary search, which takes $O(\log n)$ drops. Start by dropping a jar from the rung halfway up the ladder, dividing the rungs into two halves. If the jar breaks, the highest safe rung is in the lower half, so go down and test the rung that splits the lower half in two; if it doesn't break, go up and test the rung that splits the upper half in two. Repeat this halving-and-testing process until only one candidate rung remains: that rung is the highest safe one.
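The strategy above can be sketched in Python; the `breaks` callback is a hypothetical oracle standing in for the physical drop operation, not part of the problem statement:

```python
def find_highest_safe_rung(n, breaks):
    """Binary search for the highest rung r in 1..n where breaks(r) is False.

    Returns 0 if the jar breaks even from rung 1. Uses O(log n) drops.
    """
    lo, hi = 1, n
    best = 0
    while lo <= hi:
        mid = (lo + hi) // 2
        if breaks(mid):
            hi = mid - 1      # the highest safe rung must be lower
        else:
            best = mid        # mid is safe; look higher
            lo = mid + 1
    return best

# usage: on a 20-rung ladder where the jar survives rungs 1..13
print(find_highest_safe_rung(20, lambda r: r > 13))   # -> 13
```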
b) Suppose you can break only one jar, and you need to determine the highest safe rung. Design an algorithm to do this and give its asymptotic running time.
When only one jar can break, we can't take any chances when trying to find the highest safe rung: if the jar broke after skipping over some rungs, we could no longer tell which of the skipped rungs was the highest safe one. Our only option is to start at the bottom rung and test every rung in order until the jar breaks; the rung just below is the highest safe rung. This is a linear search with a running time of O(n) drops.
c) Suppose you can break exactly two jars. Design an algorithm with the best asymptotic running time you can, in terms of n, the height of the ladder.
With two jars, we essentially can use the first jar to try to narrow down the highest safe rung to a number of rungs, and then the second jar to implement a linear search algorithm once the first jar breaks. A more technical way of describing this is as follows:
We split the ladder into sections of $s(n)$ rungs each, for some function $s(n)$ to be chosen. Drop the first jar at the top of each section in turn (rungs $s(n), 2s(n), \ldots$), so $\text{drops}_{1} \leq \frac{n}{s(n)}$. Once the first jar breaks, scan the rungs of that last section one by one with the second jar, so $\text{drops}_2 \leq s(n)$. Therefore, $\text{drops}_{total} \leq \frac{n}{s(n)} + s(n)$. This is sublinear whenever $s(n)$ grows faster than a constant but more slowly than $n$. Choosing $s(n) = \sqrt{n}$ balances (and minimizes) the two terms: $\text{drops}_{total} \leq \sqrt{n} + \sqrt{n} = 2\sqrt{n}$, so the algorithm makes $O(\sqrt{n})$ drops.
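A sketch of the two-jar strategy in Python, again with a hypothetical `breaks` oracle standing in for the drop operation:

```python
import math

def two_jar_highest_safe(n, breaks):
    """Find the highest safe rung in 1..n using at most two broken jars.

    Returns 0 if the jar breaks even from rung 1. Uses O(sqrt(n)) drops.
    """
    step = max(1, math.isqrt(n))
    # Phase 1: first jar, jump in steps of ~sqrt(n) until it breaks
    # (or we run past the top of the ladder).
    r = step
    while r <= n and not breaks(r):
        r += step
    # Phase 2: second jar, linear scan inside the last unresolved section.
    lo = r - step + 1            # first rung not yet known to be safe
    hi = min(r, n)
    best = r - step              # highest rung already known safe (0 if none)
    for rung in range(lo, hi + 1):
        if breaks(rung):
            break
        best = rung
    return best
```

Phase 1 makes at most $\lceil n / \sqrt{n} \rceil$ drops and phase 2 at most $\sqrt{n}$, matching the $2\sqrt{n}$ bound above.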