# 🌞 Week 7
## Notes on <u>Functions and Limits</u>
> **Prepared by:** _Roshan Naidu_
> **Instructor:** _Professor Leon Johnson_
> **Course:** _INFO H611_

> [!NOTE]
> Inline math must be wrapped within `$ ... $` and display math within `$$ ... $$` in HackMD.
>
> Switch to **Preview Mode** (👁️) to render everything here properly.
## Conventions and Definitions used throughout the writeup
- Vectors are bold lowercase: $\mathbf{u},\mathbf{v}\in\mathbb{R}^n$
- Scalars (real numbers) are plain: $a,b,c\in\mathbb{R}$
- Matrices are uppercase: $A,B\in\mathbb{R}^{m\times n}$
- Norm (magnitude) of a vector: $\|\mathbf{v}\|$
- Dot product: $\mathbf{u}\cdot\mathbf{v}$
- Cross product: $\mathbf{u}\times \mathbf{v}$
- Unit vector: $\hat{\mathbf{v}}=\frac{\mathbf{v}}{\|\mathbf{v}\|}$
---
## Q.1. As the number of dividend payments per year increases, the effective annual interest approaches infinity.
I answered: **False**.
Correct answer: **False**.
### Why I thought False
The statement suggests that as the number of payments per year increases, the effective annual interest rate (EAR) approaches infinity. This seemed intuitively incorrect to me: the EAR does depend on the compounding frequency, but it is bounded as the number of compounding periods grows.
The EAR is defined as the annually compounded rate that yields the same interest as a given nominal rate compounded a specific number of times per year. It does not blow up as compounding becomes more frequent; instead, it approaches a finite value as the number of periods tends to infinity.
### Why it was False (theoretically)
In the case of a nominal interest rate $r$ compounded $n$ times per year, the effective annual interest rate $EAR$ is given by the formula:
$$
EAR = \left(1 + \frac{r}{n}\right)^n - 1
$$
As $n \to \infty$, the effective annual rate approaches $e^r - 1$, where $e$ is Euler's number (approximately 2.718). This is the **continuous compounding** case.
Thus, as the number of compounding periods increases, the effective annual rate approaches $e^r - 1$; it never approaches infinity. For a fixed nominal rate $r$, the EAR converges to this finite value rather than growing without bound.
### Example
**Theoretical:**
Let’s consider a nominal interest rate of $r = 10\%$ per year, and calculate the effective annual interest rate for different compounding frequencies.
- For $n = 1$ (annual compounding):
$$
EAR = \left(1 + \frac{0.10}{1}\right)^1 - 1 = 0.10 = 10\%
$$
- For $n = 2$ (semi-annual compounding):
$$
EAR = \left(1 + \frac{0.10}{2}\right)^2 - 1 = 0.1025 = 10.25\%
$$
- For $n = 12$ (monthly compounding):
$$
EAR = \left(1 + \frac{0.10}{12}\right)^{12} - 1 \approx 0.1047 = 10.47\%
$$
- As $n \to \infty$ (continuous compounding):
$$
EAR = e^{0.10} - 1 \approx 0.1052 = 10.52\%
$$
As you can see, as $n$ increases, the EAR approaches a limit, but it does not tend toward infinity.
**Real-world:**
In practical finance, the EAR is often used to compare different investment products. For instance, a savings account with daily compounding will have a slightly higher EAR than one with monthly compounding, but both will converge to a limit as compounding becomes more frequent.
For a rate of 10%, the EAR for daily compounding (365 times per year) is very close to 10.52%, which is the limit for continuous compounding.
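To make the convergence concrete, here is a minimal Python sketch (the nominal rate `r = 0.10` is the one from the example above; the helper name `ear` is just for illustration):

```python
import math

r = 0.10  # nominal annual rate from the example above

def ear(r, n):
    """Effective annual rate for a nominal rate r compounded n times per year."""
    return (1 + r / n) ** n - 1

for n in [1, 2, 12, 365]:
    print(f"n = {n:>4}: EAR = {ear(r, n):.4%}")

# Continuous-compounding limit: math.expm1(r) computes e**r - 1 accurately.
print(f"limit   : EAR = {math.expm1(r):.4%}")
```

The printed values climb toward roughly 10.52% and stop there, matching the $e^{0.10} - 1$ limit.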
> [!Important]
> As the number of compounding periods increases, the effective annual interest rate approaches a finite value, not infinity.
---
## Q.2. The function $f( r ) = e^r$ has an inverse since $e^r$ is a one-to-one function defined for real numbers and its inverse is the natural logarithm function $g(x) = \ln(x)$, which returns the growth rate needed to attain a certain amount of growth $x$.
I answered: **False**.
Correct answer: **True**.
### Why I thought False
The statement claims that the function $f(r) = e^r$ has an inverse, which at first seemed counterintuitive, so I marked it **False**. On review, however, $e^r$ is one-to-one over the reals, so it does have an inverse: the natural logarithm $\ln(x)$, which is defined for all positive real numbers and returns the growth rate needed to attain a given amount of growth. The statement is therefore actually true.
### Why it was True (theoretically)
The function $f(r) = e^r$ is **one-to-one** for all real values of $r$, meaning that each value of $r$ corresponds to a unique value of $e^r$. Since it is a one-to-one function, it has an inverse function, which is the natural logarithm:
$$
g(x) = \ln(x)
$$
This inverse function $g(x) = \ln(x)$ returns the **growth rate** $r$ when the amount of growth $x$ is known. This is consistent with the theory that the natural logarithm is the inverse of the exponential function, and it can be used to find the growth rate $r$ from the amount of growth $x$ via the formula:
$$
r = \ln(x)
$$
Thus, the function $e^r$ has an inverse, and the statement is indeed **True**.
### Example
**Theoretical:**
If $f(r) = e^r$, the inverse would be $g(x) = \ln(x)$. For example, if $x = e^2$, then the inverse function would give:
$$
r = \ln(e^2) = 2
$$
This shows that $\ln(x)$ effectively returns the growth rate $r$ corresponding to a given amount of growth $x$.
**Real-world:**
In finance, the continuous growth of an investment can be modeled using the function $e^r$, where $r$ is the growth rate. If you know the final value of the investment $x$, you can use the natural logarithm to calculate the growth rate $r$:
$$
r = \ln(x)
$$
This demonstrates the practical use of the inverse function in real-world applications such as calculating interest rates or growth rates.
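A quick sanity check of the round trip, as a minimal Python sketch using the same numbers as the theoretical example:

```python
import math

x = math.exp(2)   # amount of growth x = e^2
r = math.log(x)   # the natural log recovers the growth rate
print(r)          # 2.0 (up to floating-point precision)

# The round trip also works the other way: exp(ln(x)) == x for any x > 0.
print(math.isclose(math.exp(math.log(10.0)), 10.0))  # True
```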
---
## Q.3. This description of $e$ is an example of *divergence*. So, we can safely say that “compounding growth within a time period diverges as $n$ approaches infinity.”
I said: **False**
Correct: **False**
### Why I thought False
At first glance, the phrase *“diverges as $n$ approaches infinity”* didn’t seem consistent with the mathematical definition of $e$.
From previous lessons, I knew that the limit
$$
\lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n
$$
**converges** to a finite number — specifically, the constant $e \approx 2.71828$.
So, describing it as an example of “divergence” felt incorrect, because divergence means the expression grows without bound or fails to settle at a finite limit.
That’s why I concluded the statement must be **False**.
### Why it was False (theoretically)
The concept of **divergence** in calculus refers to a sequence or function that does **not approach a finite limit** as its input increases.
Formally, a sequence $a_n$ diverges if
$$
\lim_{n \to \infty} a_n
$$
does not exist or equals $\infty$.
However, the sequence that defines $e$:
$$
a_n = \left( 1 + \frac{1}{n} \right)^n
$$
has a **finite limit**, namely
$$
\lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n = e.
$$
This is a classic example of **convergence**, not divergence.
As $n$ increases, $a_n$ stabilizes and approaches a specific real value ($\approx 2.718$), rather than increasing indefinitely.
Therefore, the description of $e$ as an instance of divergence is **theoretically false**.
> [!Note]
> The process of compounding growth within a finite time period **approaches** a limit as $n \to \infty$; it does **not diverge**.
> Continuous compounding converges to $e$, representing the **finite upper bound** of growth achievable in one unit of time.
### Example
**Theoretical:**
Let’s compute values of $\left( 1 + \frac{1}{n} \right)^n$ for increasing $n$:
| $n$ | $\left(1 + \frac{1}{n}\right)^n$ |
|:---:|:--------------------------------:|
| 1 | 2.0000 |
| 2 | 2.2500 |
| 5 | 2.4883 |
| 10 | 2.5937 |
| 100 | 2.7048 |
| 1000| 2.7169 |
We see that as $n$ grows, the value **approaches** a finite limit (≈ 2.718), showing clear convergence.
If it were divergent, the values would increase without bound.
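The table above is easy to reproduce; here is a minimal Python sketch:

```python
import math

# Compute (1 + 1/n)^n for increasing n; the values creep up toward e.
for n in [1, 2, 5, 10, 100, 1000, 10**6]:
    print(f"n = {n:>7}: {(1 + 1 / n) ** n:.5f}")

print(f"e         = {math.e:.5f}")  # the finite limit of the sequence
```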
**Real-world intuition:**
In finance, this limit models **continuous compounding**:
if you compound interest $n$ times per year at rate 100%, your total after one year is $(1 + 1/n)^n$.
As you compound more frequently (e.g., daily, hourly, every millisecond), the total amount **approaches** $e$, but never exceeds it.
Thus, the growth converges to a finite maximum — continuous compounding — confirming the process is **convergent**, not divergent.
> [!Important]
> The limit that defines $e$ is a **convergent** process, not a divergent one.
> The statement is **False** because as $n \to \infty$, $(1 + 1/n)^n$ stabilizes at a finite limit rather than diverging to infinity.
---
## Q.4. The two interactive plots "Subdivided Interest ..." and "Total Growth ..." illustrate two functions that are inverses of one another. So, if one is $f$, then the other is $f^{-1}$.
I said: **False**
Correct: **False**
### Why I thought False
When I saw that the plots were related — one representing **subdivided interest** (the process of compounding) and the other **total growth** (the accumulated amount) — I initially thought they might be **inverse functions**.
In mathematics, two functions are inverses if one *undoes* the effect of the other. Formally, if $f$ and $g$ are inverses, then:
$$
f(g(x)) = g(f(x)) = x
$$
So my first instinct was that if one function represents how interest accumulates, the other might reverse that accumulation, making them inverses. On closer inspection, though, the two plots show the same growth process from different perspectives rather than a function and its inverse, which is why I answered **False**.
### Why it was False (theoretically)
Let’s analyze this properly.
The **Subdivided Interest** plot typically shows how the compounding process behaves — for example, $(1 + \frac{r}{n})^{nt}$ — where $r$ is rate, $n$ is compounding frequency, and $t$ is time.
The **Total Growth** plot, on the other hand, shows how the overall amount changes with continuous compounding — for example, $e^{rt}$.
While both functions describe related growth processes, they are **not inverses** in the strict mathematical sense.
An inverse function would reverse the input-output relationship — e.g., if $f(x) = e^x$, then $f^{-1}(x) = \ln(x)$.
However, “Subdivided Interest” and “Total Growth” don’t satisfy the inverse condition
$$
f(g(x)) = g(f(x)) = x.
$$
They illustrate **different perspectives** of exponential growth — one discrete, one continuous — not inverse operations.
Thus, the correct answer is **False**.
### Example
Let’s take $f(x) = e^x$.
Then $f^{-1}(x) = \ln(x)$.
If we plug in:
$$
f(f^{-1}(x)) = e^{\ln(x)} = x
$$
which proves they are inverses.
But in our case:
- “Subdivided Interest”: $f(n) = \left(1 + \frac{1}{n}\right)^n$
- “Total Growth”: $g(t) = e^t$
Then:
$$
f(g(x)) \neq x \quad \text{and} \quad g(f(x)) \neq x
$$
Hence, these are **not** inverse functions.
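To see the failed compositions numerically, here is a minimal Python sketch built from the two example functions above; if they were inverses, each composition would return its input:

```python
import math

def f(n):
    """The 'Subdivided Interest' example: (1 + 1/n)^n."""
    return (1 + 1 / n) ** n

def g(t):
    """The 'Total Growth' example: e^t."""
    return math.exp(t)

x = 2.0
print(f(g(x)))  # ~2.55, not 2.0  -> f(g(x)) != x
print(g(f(x)))  # ~9.49, not 2.0  -> g(f(x)) != x

# Contrast with a genuine inverse pair:
print(math.exp(math.log(x)))  # 2.0
```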
> [!Note]
> The two plots describe **related exponential growth behaviors**, but they **do not invert each other**.
> Instead, one approximates the other as $n \to \infty$.
---
## Q.5. The functions $\sqrt{n}$ and $\sqrt{n} + \log_2(n)$ both share the same asymptotic end behavior. I.e., the impact of $\log_2(n)$ is negligible when $\sqrt{n}$ is large enough.
I said: **True**
Correct: **True**.
### Why I thought True
Intuitively, $\log_2(n)$ grows very slowly compared to algebraic powers of $n$. The square-root $\sqrt{n}$ already grows without bound (albeit sublinearly), and logarithms increase far more slowly than any positive fractional power of $n$. So it felt natural to expect that adding $\log_2(n)$ to $\sqrt{n}$ only changes the value by a relatively tiny amount for large $n$, and therefore both functions should have the same asymptotic behavior.
Put another way: as $n$ becomes huge, the term $\log_2(n)$ seems negligible next to $\sqrt{n}$, so $\sqrt{n} + \log_2(n)$ should behave like $\sqrt{n}$ asymptotically.
### Why it was True (theoretically)
We show that $\log_2(n)$ is $o(\sqrt{n})$ as $n\to\infty$, i.e.
$$
\lim_{n\to\infty}\frac{\log_2(n)}{\sqrt{n}} = 0.
$$
Working in natural logs (they differ only by a constant factor),
$$
\frac{\log_2(n)}{\sqrt{n}}
= \frac{\ln(n)}{\ln(2)\,\sqrt{n}}
\propto \frac{\ln(n)}{\sqrt{n}}.
$$
One way to see the limit is to apply L'Hôpital's rule, treating $n$ as a continuous variable $x$, and consider
$$
\lim_{x\to\infty}\frac{\ln x}{\sqrt{x}}.
$$
This is an $\frac{\infty}{\infty}$ form. Differentiate numerator and denominator with respect to $x$:
$$
\lim_{x\to\infty}\frac{(\ln x)'}{(\sqrt{x})'} = \lim_{x\to\infty}\frac{1/x}{\tfrac{1}{2}x^{-1/2}}
= \lim_{x\to\infty}\frac{2\sqrt{x}}{x} = \lim_{x\to\infty}\frac{2}{\sqrt{x}} = 0.
$$
Thus $\ln x = o(\sqrt{x})$, and therefore $\log_2(n) = o(\sqrt{n})$. Consequently
$$
\sqrt{n} + \log_2(n) = \sqrt{n}\left(1 + \frac{\log_2(n)}{\sqrt{n}}\right) = \sqrt{n}\bigl(1 + o(1)\bigr).
$$
In asymptotic notation this means $\sqrt{n} + \log_2(n) \sim \sqrt{n}$ and both functions have the same leading-order growth. Equivalently, for big-$\Theta$ notation,
$$
\sqrt{n} + \log_2(n) = \Theta(\sqrt{n}).
$$
Therefore the statement that they share the same asymptotic end behavior is **True**.
> [!Note]
> A logarithm grows slower than any positive power $n^\alpha$ for $\alpha>0$. Here $\alpha=\tfrac12$, so $\log_2(n)$ is negligible compared to $\sqrt{n}$ as $n\to\infty$.
### Example
**Numerical comparison:**
| $n$ | $\sqrt{n}$ | $\log_2(n)$ | $\sqrt{n} + \log_2(n)$ | Ratio $\dfrac{\sqrt{n}+\log_2(n)}{\sqrt{n}}$ |
|:-------:|:----------:|:-----------:|:----------------------:|:-------------------------------------------:|
| $10^2$ | 10.0000 | 6.6439 | 16.6439 | 1.6644 |
| $10^4$ | 100.0000 | 13.2877 | 113.2877 | 1.1329 |
| $10^6$ | 1000.0000 | 19.9316 | 1019.9316 | 1.0199 |
| $10^{10}$ | 100000.0000 | 33.2193 | 100033.2193 | 1.00033 |
As $n$ grows, the ratio $\dfrac{\sqrt{n}+\log_2(n)}{\sqrt{n}}$ approaches $1$, illustrating that the additive $\log_2(n)$ term becomes negligible.
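The ratio column can be regenerated with a minimal Python sketch:

```python
import math

# (sqrt(n) + log2(n)) / sqrt(n) tends to 1 as n grows.
for k in [2, 4, 6, 10]:
    n = 10 ** k
    ratio = (math.sqrt(n) + math.log2(n)) / math.sqrt(n)
    print(f"n = 10^{k:<2}: ratio = {ratio:.5f}")
```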
**Asymptotic statement:**
$$
\sqrt{n} + \log_2(n) = \sqrt{n}\,(1 + o(1)) \qquad\text{as } n\to\infty.
$$
---
## Q.6. The following functions of $n$ are listed in (descending) order of their "asymptotic growth", as $n \to \infty$:
> 1. $\sqrt{n}$
> 2. $\log_2(n)$
> 3. $10$
That is, "at infinity", (1) will always be larger than (2), and (2) will always be larger than (3).
I said: **True**.
Correct: **True**.
### Why I thought True
From basic asymptotic intuition, I knew that polynomial functions grow faster than logarithmic functions, and logarithmic functions grow faster than constants.
Here, $\sqrt{n}$ is a **polynomial** function ($n^{1/2}$), $\log_2(n)$ is **logarithmic**, and $10$ is a **constant**.
Thus, for large $n$, the relative growth order must be:
$$
\sqrt{n} \;>\; \log_2(n) \;>\; 10.
$$
That aligns with my initial reasoning, so I confidently marked it as **True**.
### Why it was True (theoretically)
Asymptotic growth compares how fast functions increase as $n \to \infty$.
We can confirm the hierarchy formally using **limits** between pairs of functions.
**Step 1:** Compare $\sqrt{n}$ and $\log_2(n)$
We examine:
$$
\lim_{n \to \infty} \frac{\log_2(n)}{\sqrt{n}}.
$$
As in the previous question, $\log_2(n)$ grows much slower than any power $n^\alpha$ with $\alpha > 0$.
Hence,
$$
\lim_{n \to \infty} \frac{\log_2(n)}{\sqrt{n}} = 0.
$$
So, $\sqrt{n}$ dominates $\log_2(n)$ asymptotically:
$$
\sqrt{n} = \omega(\log_2(n)).
$$
**Step 2:** Compare $\log_2(n)$ and $10$
Now look at:
$$
\lim_{n \to \infty} \frac{10}{\log_2(n)} = 0.
$$
That means $\log_2(n)$ eventually exceeds any constant value (no matter how large) as $n$ increases.
So, $\log_2(n)$ dominates $10$ asymptotically:
$$
\log_2(n) = \omega(1).
$$
Combining both comparisons:
$$
\sqrt{n} \gg \log_2(n) \gg 10.
$$
Thus, the ordering $\sqrt{n} > \log_2(n) > 10$ for sufficiently large $n$ is **theoretically correct**.
### Example
Let’s test this numerically:
| $n$ | $\sqrt{n}$ | $\log_2(n)$ | Constant ($10$) |
|:---:|:-----------:|:------------:|:----------------:|
| $10$ | 3.1623 | 3.3219 | 10 |
| $10^2$ | 10.0000 | 6.6439 | 10 |
| $10^6$ | 1000.0000 | 19.9316 | 10 |
| $10^{12}$ | $10^6$ | 39.8631 | 10 |
At small $n$, constants can still appear large, but as $n$ grows, $\sqrt{n}$ and $\log_2(n)$ clearly surpass $10$.
The trend continues indefinitely — $\sqrt{n}$ grows much faster than $\log_2(n)$, and both eventually dwarf any fixed constant.
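A minimal Python sketch reproducing these columns:

```python
import math

# sqrt(n) eventually dominates log2(n), and both dwarf the constant 10.
for n in [10, 100, 10**6, 10**12]:
    print(f"n = {n:>13}: sqrt(n) = {math.sqrt(n):>11.2f}, "
          f"log2(n) = {math.log2(n):>7.4f}, constant = 10")
```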
> [!Note]
> For any positive constant $C$, $\lim_{n\to\infty}\frac{C}{\sqrt{n}}=0$ and $\lim_{n\to\infty}\frac{C}{\log_2(n)}=0$.
> Thus, constants always have the smallest asymptotic growth.
---
## Q.7. Suppose we let $t_0 = \dfrac{\pi}{2}$. Then, the limit
>$$
>\lim_{\omega \to \infty} \sin(\omega t_0)
>$$
>exists.
I said: **False**.
Correct: **False**.
### Why I thought False
The expression $\sin(\omega t_0)$ involves a sine function whose argument increases indefinitely as $\omega \to \infty$.
Since $\sin(x)$ oscillates endlessly between $-1$ and $1$, it never settles to a single finite value.
So, the limit of $\sin(\omega t_0)$ cannot exist — it simply keeps oscillating.
That’s why I chose **False**.
### Why it was False (theoretically)
Let’s substitute $t_0 = \dfrac{\pi}{2}$ into the expression:
$$
\sin(\omega t_0) = \sin\left(\omega \cdot \frac{\pi}{2}\right) = \sin\left(\frac{\pi \omega}{2}\right).
$$
Now, as $\omega \to \infty$, the argument $\frac{\pi \omega}{2}$ grows without bound.
But $\sin(x)$ is **periodic** with period $2\pi$ — meaning $\sin(x + 2\pi k) = \sin(x)$ for any integer $k$.
Hence, $\sin\left(\frac{\pi \omega}{2}\right)$ keeps repeating its values in a cycle, never approaching one fixed number.
To see this concretely, look at the sine values for integer $\omega$:
| $\omega$ | $\sin(\frac{\pi \omega}{2})$ |
|:---------:|:---------------------------:|
| 1 | $1$ |
| 2 | $0$ |
| 3 | $-1$ |
| 4 | $0$ |
| 5 | $1$ |
The sequence $1, 0, -1, 0, 1, \dots$ oscillates forever — it does not converge.
Therefore, the limit $\lim_{\omega \to \infty} \sin(\omega t_0)$ **does not exist**.
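A minimal Python sketch of the same oscillating sequence:

```python
import math

t0 = math.pi / 2
# sin(omega * pi/2) cycles through 1, 0, -1, 0, ... and never settles.
for omega in range(1, 9):
    print(omega, round(math.sin(omega * t0), 10))
```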
> [!Note]
> The boundedness of $\sin(x)$ does **not** imply convergence.
> A function can stay within a finite range yet fail to have a limit — $\sin(x)$ is the classic example.
---
## Q.8. The function $\sin(\omega x)$ is only valid for *integer* (e.g., 1, 7, -3) values of $\omega$.
I said: **False**.
Correct: **False**.
### Why I thought False
The sine function, $\sin(\omega x)$, is perfectly valid for **any real number** value of $\omega$, not just integers.
The variable $\omega$ (often called *angular frequency*) can take **any real** — or even complex — value.
There’s no mathematical restriction that limits $\omega$ to integers.
### Why it was False (theoretically)
Let’s recall that the sine function is defined for **all real numbers**:
$$
\sin: \mathbb{R} \to [-1, 1]
$$
So, for any real $x$ and $\omega \in \mathbb{R}$,
$$
\sin(\omega x)
$$
is **well-defined**.
There’s **no requirement** that $\omega$ must be an integer.
Integer values may appear in specific applications (like Fourier series, where $\omega_n = n\omega_0$), but mathematically, sine accepts *any* real input.
### Example
| $\omega$ | $x$ | $\sin(\omega x)$ | Valid? |
|:---------:|:---:|:----------------:|:--------:|
| $1$ | $\pi/2$ | $1$ | ✅ |
| $0.5$ | $\pi/2$ | $\sin(\pi/4) = \frac{\sqrt{2}}{2}$ | ✅ |
| $-2.3$ | $1$ | $\sin(-2.3)$ | ✅ |
| $\pi$ | $1$ | $\sin(\pi) = 0$ | ✅ |
All of these are valid — so $\omega$ clearly doesn’t have to be an integer.
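A minimal Python sketch confirming that non-integer values of $\omega$ pose no problem:

```python
import math

x = math.pi / 2
# omega may be any real number; sine accepts every real input.
for omega in [1, 0.5, -2.3, math.pi]:
    print(f"omega = {omega:7.4f}: sin(omega * x) = {math.sin(omega * x):.4f}")
```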
---
## Optional LaTeX Practice
Some useful math symbols:
$\alpha,\beta,\gamma,\pi,\infty$
$\nabla, \int_0^1 f(x)\,dx, \sum_{i=1}^n i^2$
$\lim_{x\to 0}\frac{\sin x}{x} = 1$
More examples:
Fractions: $\tfrac{1}{2}$
Summation: $\sum_{i=1}^n i^2$
Integral: $\int_0^1 x^2\,dx = \tfrac{1}{3}$
Determinant: $\det(A)$
Inverse: $A^{-1}$
---
## 🌕 Final Reflection
This week’s module deepened my understanding of **functions, limits, and asymptotic behavior**, bridging abstract mathematical concepts with intuitive visual and computational examples.
One of the most valuable lessons was recognizing how **limits define the behavior of functions as variables approach infinity or specific values**. Through multiple exercises, I learned to distinguish between **convergent** and **divergent** functions. For instance, the description of $e$ as a limit of compounding interest clearly demonstrated how exponential growth emerges from continuous compounding — a fundamental concept connecting calculus to real-world finance.
The exploration of **inverse functions** (like $f$ and $f^{-1}$) helped me understand how certain operations “undo” each other. Visualizing these relationships through interactive plots reinforced the geometric intuition behind reflection across the line $y = x$.
The section on **asymptotic growth** was particularly insightful. Comparing functions like $\sqrt{n}$, $\log_2(n)$, and constants such as $10$ clarified how growth rates differ dramatically as $n \to \infty$. I realized that while $\log_2(n)$ grows without bound, it does so much slower than $\sqrt{n}$ — a concept central to algorithmic analysis and computational complexity.
Finally, the exercises involving **trigonometric limits** reminded me that not all functions settle to a limit. For example, $\lim_{\omega \to \infty} \sin(\omega t_0)$ does not exist due to perpetual oscillation — a perfect example of bounded but non-convergent behavior. Similarly, recognizing that $\sin(\omega x)$ is defined for all real $\omega$ (not just integers) emphasized the importance of distinguishing mathematical generality from contextual constraints.
Overall, this week tied together key mathematical ideas:
- How **limits define behavior** at infinity or near singular points,
- How **inverse and asymptotic relationships** describe long-term trends, and
- How **function behavior** can be both predictable and bounded, yet non-convergent.
These insights collectively built a stronger conceptual foundation for understanding not just calculus, but also how mathematical models behave across disciplines — from finance to physics to computer science.