---
tags: fall 2018 cs61b
---
# Amortized Runtime Analysis
[TOC]
## Introduction
Amortized runtime is sometimes loosely called *average runtime*, but it is not the average over random inputs: it is the average cost *per operation* over a worst-case sequence of operations, in the long run. An occasional expensive call is "paid for" by the many cheap calls around it.
### Why is this important?
Although we are often concerned with worst case runtime when it comes to asymptotic analysis, we shouldn't outright refuse to use an algorithm or operation just because some alternative has a less expensive worst case runtime. Realistically, some operations are inexpensive the vast majority of the time and only occasionally hit their terrible worst case.
Suppose we have two operations: the first has both a best and worst case runtime of $\Theta(N \log N)$, while the other has a best case runtime of $\Theta(N)$ and a worst case runtime of $\Theta(N^2)$. However, the second operation only rarely hits the conditions (these conditions have to be independent of input size) that trigger its worst case. With amortized analysis, we can see how the second operation could be more useful and efficient than the first one.
### How to Calculate
We use the following formula to calculate amortized runtime:
$$\frac{\textrm{Sum of the runtimes of each operation}}{\textrm{Total number of operations}}$$
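The formula translates directly into code. Below is a minimal sketch of that computation; the class name `Amortized`, the method name, and the example cost sequence are all illustrative, not part of any real library:

```java
public class Amortized {
    /** Returns the amortized (average) cost over a sequence of
     *  per-operation costs, exactly as in the formula above. */
    static double amortizedCost(int[] costs) {
        long total = 0;
        for (int c : costs) {
            total += c;       // sum of the runtimes of each operation
        }
        return (double) total / costs.length;  // divided by the total number of operations
    }

    public static void main(String[] args) {
        // Nine cheap operations and one expensive one: the average stays small.
        int[] costs = {1, 1, 1, 1, 1, 1, 1, 1, 1, 10};
        System.out.println(amortizedCost(costs)); // prints 1.9
    }
}
```

The hard part in practice is not this arithmetic, but choosing which sequence of operations to sum over, as the `ArrayList` example below shows.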
Although the formula looks uncomplicated, actually framing it correctly for a given operation can be quite difficult.
## Amortized Analysis on an ArrayList
*From Week 5, [CSM Worksheet #7, Problem 3](https://piazza.com/class/jkxh6x3bt3075q?cid=10)*:
For context, we know that an `ArrayList` is basically an array that keeps track of the number of elements inside of it. Calling `add()` on the `ArrayList` will add elements to the empty buckets of the array. Eventually, the array will need to be resized to add more elements. Resizing the array takes *linear time* because an array with a new size needs to be created and the elements in the smaller array will need to be copied over to the new array.
Suppose we initialize an `ArrayList` with a capacity of $N$, meaning we start out with an empty array of size $N$. The first $N$ calls to `add()` each take constant time, because we are simply filling in the $N$ empty buckets of the array.
Calling `add()` $1$ to $N$ times:
$$\frac{\overbrace{1 + 1 + \cdots + 1}^{N}}{N} = \frac{N}{N} = 1$$
After the $N$th time that we call `add()`, the array is now filled. If we want to add more elements to the array, we will have to *resize* it. The amortized runtime of the `add()` will now depend on how big our resized array is.
### Increase size of the array by 1 element every resize
Then, calling `add()` from the $(N+1)$th time onwards yields:
$$\frac{N}{1} = N$$
We can use this equation to calculate the amortized runtime because it accurately portrays the long-term behavior of the operation: *every single* call to `add()` takes *linear time*, since each one must resize the array before inserting the new element.
Therefore, our amortized runtime from calling `add()` in this scenario will be $\Theta(N)$.
### Increase size of the array by 10,000 elements every resize
Then, calling `add()` from the $(N+1)$th time onwards yields:
$$\frac{N + 9{,}999(1)}{10{,}000} \approx \frac{N}{10{,}000} \in \Theta(N)$$
Again, we use this equation to model the behavior of calling the operation in the long-term.
Here, calling `add()` follows a repeating pattern: one call takes *linear time* to resize the array, creating $10{,}000$ new buckets (only $9{,}999$ of them remain empty, because one holds the element whose `add` triggered the resize). The next $9{,}999$ calls each add an element in *constant time*. That is $10{,}000$ operations in total: $1$ to resize the array and $9{,}999$ to add $9{,}999$ elements. When all $9{,}999$ buckets are filled, once again, we have to repeat the pattern of resizing the array and adding more elements.
The constant factor shrinks by $10{,}000$, but the dependence on $N$ remains. Therefore, our amortized runtime from calling `add()` in this scenario will still be $\Theta(N)$.
### Increase size of the array by $N$ every resize (i.e. double the size of the array)
Then, calling `add()` from the $(N+1)$th time onwards yields:
$$\frac{N + (N-1)(1)}{N} = \frac{2N - 1}{N} \approx 2 \in \Theta(1)$$
Calling `add()` follows the pattern of taking *linear time* to resize the array, creating $N$ new buckets, and then adding $N-1$ elements in *constant time*. That is $N$ operations in total: one resize costing $N$ work, plus $N-1$ adds costing $1$ work each. When all the new buckets are filled, once again, we have to repeat the pattern of resizing the array and adding more elements. Crucially, each resize doubles the number of cheap adds we get before the next resize, so the expensive resizes become ever rarer.
Our amortized runtime from calling `add()` in this scenario will be $\Theta(1)$.
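To make the doubling strategy concrete, here is a minimal sketch of a doubling array list. It is a hypothetical toy, not the real `java.util.ArrayList`; the `copies` counter exists only so we can check the amortized claim:

```java
public class DoublingList {
    int[] items = new int[1];  // start with capacity 1 for simplicity
    int size = 0;
    long copies = 0;           // total elements copied across all resizes

    void add(int x) {
        if (size == items.length) {
            resize(items.length * 2);  // double the capacity: a linear-time copy
        }
        items[size] = x;
        size += 1;
    }

    private void resize(int capacity) {
        int[] bigger = new int[capacity];
        System.arraycopy(items, 0, bigger, 0, size);
        copies += size;        // this resize copied `size` elements
        items = bigger;
    }

    public static void main(String[] args) {
        DoublingList list = new DoublingList();
        int m = 1_000_000;
        for (int i = 0; i < m; i++) {
            list.add(i);
        }
        // Total copies stay under 2 * m, so the work per add is Theta(1).
        System.out.println((double) list.copies / m);
    }
}
```

After a million adds, the total copy work is $1 + 2 + 4 + \cdots + 2^{19} = 2^{20} - 1$, just over one copy per add on average, consistent with the $\Theta(1)$ amortized bound.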
### Conclusion
One thing to realize is that the setup and calculation of these equations are rather handwavy. If you'd like a more rigorous approach that involves drawing out a table, check out the following links:
* [CSM Amortized Analysis Video](https://www.youtube.com/watch?v=Jp7hEXdTjSE)
* [Amortized Analysis (explained by Josh Hug)](https://joshhug.gitbooks.io/hug61b/content/chap8/chap84.html)