# Differentiation - Week 8

## *Krish Shah*

## 27th Oct, 2025

---

## 1.1 Zero Derivative

### If the derivative of a function is zero at some point, then that point must be a local minimum.

**Answer :** False

**Reasoning :**

- If the derivative of a function is zero at a point, it means the slope of the tangent is horizontal — so the point is a **critical point**, not necessarily a minimum.
- That point could be: a local minimum, a local maximum, or a saddle point (neither).

**My reasoning in Class :**

- The statement is $\mathbf{False}$ because if the derivative of a function $f(x)$ is zero at a point $c$ ($f'(c)=0$), that point $c$ is a $\mathbf{critical\ point}$, but it is not necessarily a local minimum.
- A critical point where the first derivative is zero could be: a Local Minimum ($f''(c) > 0$), a Local Maximum ($f''(c) < 0$), or a Saddle Point (Inflection Point).
- A Local Minimum: ![minimum](https://hackmd.io/_uploads/H1rv2CC1Zg.png)
- A Local Maximum: ![maximum](https://hackmd.io/_uploads/ryoOhA0J-x.png)
- An Inflection Point (Saddle Point): ![saddle](https://hackmd.io/_uploads/BkYtnAA1Wg.png)
- No Slope at all: ![noslope](https://hackmd.io/_uploads/ryM520Ck-x.png)

---

## 1.2 Concave Up

### The function $f$ is concave up at the point $t=10$.

**Answer :** True

**Reasoning :**

- A positive second derivative means the function is curving upward — that is, the graph of $f(t)$ is concave up at that point.

**My reasoning in Class :**

- The statement is $\mathbf{True}$ based on the properties of the second derivative for the function in the given Situation.
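Before working through the algebra, the concavity claim can be sanity-checked numerically. The sketch below (an illustrative central finite-difference approximation, not part of the original solution) uses the Situation 1 sales function $f(t) = t^7/100000 - t^5/500 + t^3/10$:

```python
# Numerical concavity check at t = 10 for the Situation 1 sales function
# f(t) = t^7/100000 - t^5/500 + t^3/10.

def f(t):
    return t**7 / 100000 - t**5 / 500 + t**3 / 10

def second_derivative(func, t, h=1e-3):
    # Central finite-difference approximation of f''(t):
    # (f(t+h) - 2 f(t) + f(t-h)) / h^2
    return (func(t + h) - 2 * func(t) + func(t - h)) / h**2

fpp = second_derivative(f, 10.0)
print(fpp)       # approximately the analytic value f''(10) = 8
print(fpp > 0)   # positive second derivative -> concave up at t = 10
```

A positive result confirms the sign argument; the exact value $f''(10)=8$ is derived analytically in this section.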
The sales function is given by:

$$f(t) = \frac{t^7}{100000} - \frac{t^5}{500} + \frac{t^3}{10}$$

**Finding the Second Derivative ($f''(t)$):**

The first derivative is:

$$f'(t) = \frac{7t^6}{100000} - \frac{5t^4}{500} + \frac{3t^2}{10}$$

Simplifying the coefficients gives:

$$f'(t) = 0.00007t^6 - 0.01t^4 + 0.3t^2$$

The second derivative is:

$$f''(t) = \frac{d}{dt} \left(0.00007t^6 - 0.01t^4 + 0.3t^2\right)$$

$$f''(t) = 6(0.00007)t^5 - 4(0.01)t^3 + 2(0.3)t$$

$$f''(t) = 0.00042t^5 - 0.04t^3 + 0.6t$$

**Evaluate the Second Derivative at $t=10$:**

Concavity is determined by the sign of the second derivative. We evaluate $f''(10)$:

$$f''(10) = 0.00042(10)^5 - 0.04(10)^3 + 0.6(10)$$

$$f''(10) = 0.00042(100{,}000) - 0.04(1{,}000) + 6$$

$$f''(10) = 42 - 40 + 6$$

$$f''(10) = \mathbf{8}$$

Since $f''(10) = 8$ is $\mathbf{positive}$ ($f''(10) > 0$), the function $f(t)$ is $\mathbf{concave\ up}$ at the point $t=10$.

---

### Tolerance :

Tolerance is the allowable range of variation for a measurement or dimension, ensuring a part is still functional even if not made to an exact theoretical size.

---

## 1.3 Tolerance

### The tolerance level in Newton's Method represents the minimum (and final) difference between the estimated root and the actual root.

**Answer :** False

**My reasoning in Class :**

- The statement is $\mathbf{False}$ because the tolerance level in Newton's Method (and most iterative numerical methods) defines the stopping criterion, which is based on the change between successive estimates, not the difference between the final estimate and the actual root.
- **Tolerance ($\epsilon$) Definition (Stopping Criterion):**
  - The tolerance level typically refers to the maximum acceptable $\mathbf{absolute\ difference}$ between the current guess ($x_n$) and the previous guess ($x_{n-1}$): $$\text{Convergence Condition: } |x_n - x_{n-1}| < \epsilon$$
  - The algorithm stops when the change in the guess falls below this threshold, indicating it has converged to a solution. The notebook's output often shows this value labeled as `diff`.
- **Difference from Actual Root (Error):** The true difference between the estimated root ($x_n$) and the actual root ($r$) is the **absolute error** ($|x_n - r|$). This value is $\mathbf{unknown}$ because the actual root $r$ is what the algorithm is trying to find.

---

### Newton's Method :

In numerical analysis, the Newton–Raphson method, also known simply as Newton's method, named after Isaac Newton and Joseph Raphson, is a root-finding algorithm which produces successively better approximations to the roots (or zeroes) of a real-valued function. The most basic version starts with a real-valued function $f$, its derivative $f'$, and an initial guess $x_0$ for a root of $f$.

---

## 1.4 Newton's Method

### In this case, the root that Newton's Method was attempting to approximate could have been calculated by finding a local minimum of $f$.

**Answer :** True

**Reasoning :**

- $f'(t)=0$ means the slope of $f$ is zero — those points are critical points, which can be local minima, maxima, or saddle points.
- In this particular Situation 1, the second derivative $f''(t)>0$ at that root, meaning the root of $f'(t)$ corresponds to a local minimum of $f(t)$.

**My reasoning in Class :**

- While Newton's Method, applied to a function $f(t)$, finds a root $f(t)=0$, the same numerical technique can be used to find a local minimum of $f(t)$ by applying it to the derivative of $f(t)$.
- Finding a local minimum requires finding the roots of the first derivative: solve for $t$ such that $f'(t) = 0$.

---

### Taylor Series :

In mathematics, the Taylor series or Taylor expansion of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point.

- A Taylor series is also called a Maclaurin series when 0 is the point where the derivatives are considered.
- ![formula](https://hackmd.io/_uploads/Sk7j3CRkbl.png)

---

## 2.1 Taylor Series Accuracy

### The Taylor Series approximation provides an exact representation of the original function $f(x)$ for all values of $x$, ensuring perfect accuracy in estimating website traffic for the upcoming week.

**Answer :** False

**Reasoning :**

- A **Taylor Series** gives an $\mathbf{approximation}$ of a function $f(x)$ around a specific point.
- In practice, only a $\mathbf{finite\ number\ of\ terms}$ are used, so the series gives a $\mathbf{local\ approximation}$, not a perfect one.
- It becomes exact only if:
  - the function is infinitely differentiable, and
  - the infinite series converges to $f(x)$ for all $x$.

**My reasoning in Class :**

- The statement is $\mathbf{False}$ because the Taylor Polynomial $P_n(t)$ is a truncation of the full Taylor Series.
- This truncation necessarily introduces an $\mathbf{error}$ for almost all values of $t$.
- The polynomial $\mathbf{deteriorates\ rapidly}$ as $t$ moves further away from the center point, showing it is not perfectly accurate for all $t$.

---

## 2.2 Infinitely Differentiable

### Let $c$ and $r$ be some constant values (like 3, 1.4, 2/3, etc.). Then, the Taylor Series CANNOT be used to find the roots of the function $f(x)=ce^{rx}$ because it is not infinitely differentiable (i.e., every derivative has its own non-zero derivative).
**Answer :** False

**Reasoning :**

The statement is False because it makes two incorrect claims: one about the Taylor Series' ability to find roots, and a second about the differentiability of the exponential function $f(x)=ce^{rx}$.

1. Taylor Series Can Be Used for Root Finding (Indirectly):
   - While the Taylor Series' primary role is local approximation, a sufficiently accurate Taylor Polynomial $P_n(x)$ can be used as a proxy for the original function $f(x)$.
   - Since $P_n(x)$ is a polynomial, its roots can be found using standard methods (like Newton's Method, as seen in Situation 1). Therefore, the Taylor Series can be used to approximate the roots of $f(x)$.
2. The Function $f(x)=ce^{rx}$ IS Infinitely Differentiable:
   - The function $f(x) = ce^{rx}$ is an exponential function. Every time you take the derivative, you get another exponential function: $$f'(x) = rce^{rx}, \quad f''(x) = r^2ce^{rx}, \quad f^{(n)}(x) = r^n ce^{rx}$$
   - The derivative will never be zero (assuming $c \neq 0$ and $r \neq 0$). Since the function and all its derivatives exist for all $x$, $f(x)=ce^{rx}$ is one of the most well-behaved functions in calculus — it is infinitely differentiable (smooth), and its Taylor Series converges to the function for all $x$.
   - Since the function is infinitely differentiable and Taylor Series approximations can be used to find roots, the statement is incorrect.

**My reasoning in Class :**

- The statement is $\mathbf{False}$ because the function $f(x)=ce^{rx}$ IS infinitely differentiable ($f^{(n)}(x) = r^n ce^{rx}$).
- The Taylor Series $\mathbf{can\ be\ used\ to\ approximate}$ the roots of $f(x)$ by finding the roots of its Taylor Polynomial $P_n(x)$.

---

## 2.3 Expansion Point

### The Taylor Series expansion of a sinusoidal function involves computing only the function's first-order derivatives at the expansion point, neglecting higher-order derivatives, which may lead to inaccuracies in the polynomial approximation.
**Answer :** False

**Reasoning :**

- The statement is false; a Taylor Series expansion of any function, including a sinusoidal one, requires computing all higher-order derivatives at the expansion point to be accurate, and using only the first-order derivative would lead to a significant approximation error.
- The accuracy of a Taylor Series approximation depends on how many terms are included, and adding more terms (which means including more and more derivatives) generally improves accuracy, especially for values of $x$ close to the expansion point.
- Sinusoidal functions like $\sin(x)$ and $\cos(x)$ are periodic and have derivatives of all orders.

**My reasoning in Class :**

- The statement is $\mathbf{False}$; a Taylor Series expansion requires computing $\mathbf{all}$ higher-order derivatives at the expansion point to be accurate.
- Using only the first-order derivative gives a poor approximation because it fails to account for the $\mathbf{curving,\ oscillating\ nature}$ of the sine wave as you move away from the center point.

---

## 2.4 Polynomial Approximation

### The Taylor Series approximation of a sinusoidal function, such as $f(x)=50\sin(x)\cos(2x)+50$, provides a local polynomial approximation that effectively captures the periodic behavior of the function in the vicinity of the central point, $a$.

**Answer :** True

**My reasoning in Class :**

- The statement is $\mathbf{True}$ when interpreting "effectively captures" in the $\mathbf{strictly\ local\ sense}$.
- **Local Match:** The Taylor Polynomial is mathematically designed to match the original function ($f$) and its first, second, third, etc., derivatives **exactly** at the central point $a$.
- **Capturing Behavior:** For a sinusoidal function, its behavior (its oscillation, rate of change, and curvature) is defined by its derivatives. By matching the first few derivatives ($f'(a), f''(a), f'''(a)$), the polynomial perfectly models the $\mathbf{start}$ of the function's periodic cycle.
It captures the function's local "wiggle."
- **The Limitation:** However, this match is **only local**. While it effectively captures the *periodic behavior* in the small vicinity of $a$, it **fails completely** to capture the $\mathbf{global\ periodicity}$ (i.e., that the function will eventually come back down and repeat).
- Therefore, $P_n(x)$ *does* effectively capture the initial $\mathbf{local\ periodic\ tendencies}$, making the statement **TRUE** under this strict interpretation.

![image](https://hackmd.io/_uploads/r1Mt1aoy-g.png)

![minimum](https://hackmd.io/_uploads/rkvme-A1Zg.png)
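This local-vs-global behavior can be demonstrated with a short sketch (illustrative only; the center $a=0$ and the polynomial degree are arbitrary choices). It builds a Maclaurin polynomial for $f(x)=50\sin(x)\cos(2x)+50$ via the product-to-sum identity $\sin(x)\cos(2x) = \tfrac{1}{2}\left(\sin 3x - \sin x\right)$, so the polynomial comes directly from the series $\sin(u) = \sum_k (-1)^k u^{2k+1}/(2k+1)!$:

```python
import math

# f(x) = 50*sin(x)*cos(2x) + 50. Using the product-to-sum identity
# sin(x)cos(2x) = (sin(3x) - sin(x)) / 2, we get
# f(x) = 25*sin(3x) - 25*sin(x) + 50.

def f(x):
    return 50 * math.sin(x) * math.cos(2 * x) + 50

def taylor_f(x, terms=5):
    # Degree-(2*terms - 1) Taylor polynomial of f centered at a = 0,
    # built from the Maclaurin series of sin(3x) and sin(x).
    total = 50.0
    for k in range(terms):
        c = (-1) ** k / math.factorial(2 * k + 1)
        total += 25 * c * (3 * x) ** (2 * k + 1) - 25 * c * x ** (2 * k + 1)
    return total

# Near the center the polynomial tracks f closely; far away it diverges.
for x in (0.1, 0.5, 3.0):
    print(x, f(x), taylor_f(x), abs(f(x) - taylor_f(x)))
```

Running this shows the approximation error is negligible at $x=0.1$ and $x=0.5$ but enormous at $x=3$: the polynomial captures the local "wiggle" near $a=0$ and completely misses the global periodicity, exactly as argued above.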