---
tags: GeneralPhysics
---
# Damped, Driven Oscillation
Author: Y.C.Tai
## Goal of this note
:::info
Equation of motion of a damped, driven oscillation:
$$
m \ddot{x} + b \dot{x} + kx = F_0 \sin \omega t
$$
:::
There are lots of ways to deal with this kind of ordinary differential equation (ODE). The method I use here is one I am relatively used to and have been using for roughly the past six years; I learned it from a rather obscure textbook printed in the age of Bell Labs. Although most college-level courses do not cover this method, it is very powerful and widely used in real-world problems, particularly in circuit engineering, electrodynamics, and quantum mechanics.
:::warning
Don’t use this for your calculus or engineering math exams.
Don’t blame me for any lost points if you do it anyway. :smiling_face_with_smiling_eyes_and_hand_covering_mouth:
:::
## [Operational calculus](https://en.wikipedia.org/wiki/Operational_calculus)
### Definition
We define an operator called the Heaviside operator (named after [Oliver Heaviside](https://en.wikipedia.org/wiki/Oliver_Heaviside)):
$$
\partial_t \doteq \dfrac{d}{dt}
\, \Rightarrow \,
\partial_t f(t) = \dfrac{d}{dt} f(t)
$$
:::spoiler Here is one of many possible derivations, which you can try if you're interested.
The definition of the Laplace transform $f(p) \rightarrow F(q)$ reads
$$
F(q) \doteq \mathcal{L} (f) = \int_{0}^{\infty} f(p) e^{-pq} \, dp
$$
By taking $q = \partial_x$ and letting it operate on a function $g(x)$, we **define** an operator $F(\partial_x)$ as
$$
F(\partial_x) g(x) = \int_{0}^{\infty} f(p) e^{-p \partial_x} g(x) \, dp
$$
Since the Taylor expansion of $e^{-pq}$ is
$$
e^{-pq} = \sum_{n = 0}^{\infty} \dfrac{(-p)^{n}}{n!} \, q^{n}
$$
the Taylor expansion of $g(x - p)$ in powers of $p$ shows that
$$
g(x - p) = \sum_{n = 0}^{\infty} \dfrac{(-p)^{n}}{n!} \, \partial_x^{n} g(x) = e^{-p \partial_x} g(x)
$$
Then, we can link $f(p)$ to $F(\partial_x)$ as
$$
F(\partial_x) g(x) = \int_{0}^{\infty} f(p) g(x - p) \, dp
$$
and extend the lower limit of integration to negative infinity with a [Heaviside step function](https://en.wikipedia.org/wiki/Heaviside_step_function) $\theta(x)$, which gives a convolution form
$$
F(\partial_x) g(x) = \int_{-\infty}^{\infty} f(p) \theta(p) g(x - p) \, dp = \big[ f(x) \theta(x)\big] * g(x)
$$
By taking $g(x)$ as a sequence approaching [the delta function](https://en.wikipedia.org/wiki/Dirac_delta_function) $\delta(x - x_0)$, the convolution becomes
$$
F(\partial_x) \delta(x - x_0) = f(x - x_0) \theta(x - x_0)
$$
:::
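As a quick numerical sanity check of the shift identity $e^{-p \partial_x} g(x) = g(x - p)$ used in the derivation above, here is a short Python sketch; the choice $g = \sin$ (whose $n$-th derivative is $\sin(x + n\pi/2)$) and the sample values of $x$, $p$ are my own illustrative assumptions:

```python
import math

# Check the shift identity  e^{-p d/dx} g(x) = g(x - p)
# for g(x) = sin(x), whose n-th derivative is sin(x + n*pi/2).
def shifted_by_series(x, p, n_terms=40):
    """Truncated sum over n of (-p)^n / n! * g^{(n)}(x), for g = sin."""
    total = 0.0
    for n in range(n_terms):
        total += (-p) ** n / math.factorial(n) * math.sin(x + n * math.pi / 2)
    return total

x, p = 1.2, 0.7
print(shifted_by_series(x, p))   # series for e^{-p d/dx} sin(x)
print(math.sin(x - p))           # direct evaluation of g(x - p)
```

With 40 terms the factorial in the denominator makes the series converge to machine precision for these values.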
### Heaviside Shifting
An operation widely used in quantum mechanics:
$$
\partial_t + f(t) = \exp \left( -\int f(t) \, dt \right) \cdot \partial_t \exp \left( \int f(t) \, dt \right)
$$
:::spoiler [Derivation](https://doi.org/10.1017/9781108499996)
When we say $\hat{\alpha}$, $\hat{\beta}$ are operators, we mean that for functions $f$, $g$ the following equalities hold.
$$
\big[ \hat{\alpha} = \hat{\beta} \big] \equiv \big[ \hat{\alpha} f = \hat{\beta} f \big]
\, , \quad
\left( \hat{\alpha} + \hat{\beta} \right) f \equiv \hat{\alpha} f + \hat{\beta} f
\, , \quad
\left( \hat{\alpha} \hat{\beta} \right) f \equiv \hat{\alpha} \left( \hat{\beta} f \right)
$$
Note that, with multiplication defined by the 3^rd^ equality above, the **commutative law** <font color=firebrick>in general</font> **is not valid** in the operator system, that is
$$
\hat{\alpha} \hat{\beta} \ne \hat{\beta} \hat{\alpha}
$$
<font color=steelblue>and this is also why we need to peel everything off and do the integrations from the outside in</font> when working through the oscillation derivation. For this reason, we define the *commutator* to check whether two operators commute
$$
\big[ \hat{\alpha}, \hat{\beta} \big] = \hat{\alpha} \hat{\beta} - \hat{\beta} \hat{\alpha}
$$
**Linearity**
When we say an operator $\hat{\alpha}$ is **linear**, we mean that the following two conditions are satisfied.
$$
\hat{\alpha} \left( f + g \right) = \hat{\alpha} f + \hat{\alpha} g
\, , \quad
\hat{\alpha} \left( \lambda f \right) = \lambda \left( \hat{\alpha} f \right)
$$
Arithmetically, the $\lambda$ above need only be a *rational number*. However, most of the time we will assume it to be a real-valued scalar, since the operator systems we deal with will generally just contain observables.
**Multiplication**
We first define an identity operator: when it acts (does something) on a function, it changes nothing.
$$
\hat{1} \, f = f
$$
Then the multiplicative inverse and the power law of operator can be defined as
$$
\big[ \hat{\beta} \equiv \hat{\alpha}^{-1} \big] \equiv \big[ \hat{\alpha} \hat{\beta} \equiv \hat{1} \big]
\, , \quad
\hat{\alpha}^n = \underbrace{\hat{\alpha} \, \hat{\alpha} \cdots \hat{\alpha}}_{n \text{ times}}
$$
**Product rule of differentiation**
$$
\partial_x \left( f \cdot g \right) = \partial_x (f) \cdot g + f \cdot \partial_x (g)
$$
According to the three definitions of operator above, we can rewrite the product rule as
$$
\left( \partial_x \cdot f \cdot \right) g = \big[ \partial_x (f) \cdot + f \cdot \partial_x \big] g
$$
So, if we drop the multiplication operator ($\cdot$), we arrive at a new operator relation
$$
\partial_x \cdot f = \partial_x (f) + f \cdot \partial_x
$$
Multiplying the above relation by $f^{-1}$ **from the left**, and noting that $f^{-1} \, \partial_x (f) = \partial_x \left( \ln f \right)$, we have
$$
f^{-1} \cdot \partial_x \cdot f = \partial_x \left( \ln f \right) + \partial_x
$$
If we take $g = \partial_x \left( \ln f \right)$, then
$$
f = \exp \left( \int g \, dx \right)
$$
Thus, we have the relation:
$$
\partial_x + g = \exp \left( - \int g \, dx \right) \cdot \partial_x \cdot \exp \left( \int g \, dx \right)
$$
:::
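The shifting identity can also be spot-checked numerically. The sketch below uses my own illustrative choices, $f(t) = t$ (so $\int f \, dt = t^2/2$) and a test function $h(t) = \sin t$, and compares both sides at a sample point, approximating the outer derivative with a central difference:

```python
import math

# Check the Heaviside shifting identity
#   (d/dt + f(t)) h(t) = exp(-F(t)) * d/dt [ exp(F(t)) h(t) ],  F = ∫ f dt,
# with the illustrative choices f(t) = t (so F(t) = t^2/2) and h(t) = sin(t).
def lhs(t):
    # (d/dt + t) sin(t) = cos(t) + t*sin(t)
    return math.cos(t) + t * math.sin(t)

def rhs(t, eps=1e-6):
    # central difference for d/dt [exp(t^2/2) * sin(t)], then multiply by exp(-t^2/2)
    g = lambda s: math.exp(s * s / 2) * math.sin(s)
    return math.exp(-t * t / 2) * (g(t + eps) - g(t - eps)) / (2 * eps)

t = 0.9
print(lhs(t), rhs(t))  # the two sides agree to finite-difference accuracy
```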
## Warm up: Simple Harmonic Oscillation
$$
m \ddot{x} + kx = 0
$$
<font color=firebrick>You are welcome to contribute your derivation in terms of Heaviside operators.</font>
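If you just want to sanity-check the expected answer numerically (not the operator derivation itself), a few lines of Python suffice; the values of $m$, $k$ and the trial solution $x(t) = \cos(\omega_0 t)$ are illustrative assumptions:

```python
import math

# Sanity check: x(t) = cos(w0 t) should satisfy m x'' + k x = 0
# with w0 = sqrt(k/m). Second derivative via central difference.
m, k = 1.0, 2.0
w0 = math.sqrt(k / m)
x = lambda t: math.cos(w0 * t)

t, h = 0.8, 1e-4
xdd = (x(t + h) - 2 * x(t) + x(t - h)) / h ** 2
print(abs(m * xdd + k * x(t)))  # ~0 up to finite-difference error
```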
## Advance: Damped Oscillation
$$
m \ddot{x} + b \dot{x} + kx = 0
$$
<font color=firebrick>You are welcome to contribute your derivation in terms of Heaviside operators.</font>
## Damped, Driven Oscillation
We will first deal with a more general but easier case:
$$
m \ddot{x} + b \dot{x} + kx = F_0 e^{i \omega t}
$$
First, we can write it in terms of Heaviside operator as
$$
\dfrac{m}{F_0} \left( \partial_t^2 + \dfrac{b}{m} \partial_t + \dfrac{k}{m} \right) x = e^{i \omega t}
$$
Since we don't know how to deal with a quadratic expression in the Heaviside operator directly, we need to factorize it into first-order terms:
$$
\left( \partial_t^2 + \dfrac{b}{m} \partial_t + \dfrac{k}{m} \right) = \left( \partial_t - r_1 \right) \left( \partial_t - r_2 \right)
$$
where you can find $r_1$, $r_2$ via the <font color=steelblue>"characteristic equation"</font> below
$$
r^2 + \dfrac{b}{m} r + \dfrac{k}{m} = 0
$$
which gives us the two roots
$$
r_1 = \dfrac{1}{2} \left( - \dfrac{b}{m} + \sqrt{\left( \dfrac{b}{m} \right)^2 - \dfrac{4k}{m}} \right)
\, , \quad
r_2 = \dfrac{1}{2} \left( - \dfrac{b}{m} - \sqrt{\left( \dfrac{b}{m} \right)^2 - \dfrac{4k}{m}} \right)
$$
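A minimal numerical sketch of this factorization (the values of $m$, $b$, $k$ are arbitrary illustrative choices): compute $r_1$, $r_2$ from the quadratic formula and confirm Vieta's relations $r_1 + r_2 = -b/m$ and $r_1 r_2 = k/m$, which we will rely on later:

```python
import cmath

# Roots of r^2 + (b/m) r + (k/m) = 0 for illustrative values of m, b, k.
# cmath.sqrt handles the underdamped case, where the discriminant is negative.
m, b, k = 1.0, 0.4, 2.0
disc = cmath.sqrt((b / m) ** 2 - 4 * k / m)
r1 = 0.5 * (-b / m + disc)
r2 = 0.5 * (-b / m - disc)

print(r1, r2)
print(abs((r1 + r2) - (-b / m)))   # ~0: sum of roots
print(abs(r1 * r2 - k / m))        # ~0: product of roots
```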
Then, our problem becomes
$$
\left( \partial_t - r_1 \right) \left( \partial_t - r_2 \right) x' = e^{i \omega t}
\, , \quad x' \doteq \dfrac{m}{F_0} \, x
$$
and by applying the Heaviside shifting identity, it can be rewritten as
$$
\left( e^{r_1 t} \, \partial_t \, e^{- r_1 t} \right)
\left( e^{r_2 t} \, \partial_t \, e^{- r_2 t} \right)
x' = e^{i \omega t}
$$
$$
e^{r_1 t} \, \partial_t \, e^{- \left(r_1 - r_2\right) t} \, \partial_t \, e^{- r_2 t} x' = e^{i \omega t}
$$
Now we can carry out the integrations recursively, from the outside in.
$$
\left( \color{steelblue}{e^{r_1 t}} \, \partial_t \, e^{- \left(r_1 - r_2\right) t} \, \partial_t \, e^{- r_2 t} x' \right) = e^{i \omega t}
$$
The outer integration:
$$
\color{firebrick}{\dfrac{d}{dt}} \left( e^{- \left(r_1 - r_2\right) t} \, \partial_t \, e^{- r_2 t} x' \right) = \color{steelblue}{e^{- r_1 t}} e^{i \omega t}
$$
$$
\left( \color{steelblue}{e^{- \left(r_1 - r_2\right) t}} \, \partial_t \, e^{- r_2 t} x' \right)
= \color{firebrick}{\int} e^{(- r_1 + i \omega) t} \, dt
= \dfrac{1}{- r_1 + i \omega} e^{(- r_1 + i \omega) t} + C
$$
$$
\left( \partial_t \, e^{- r_2 t} x' \right)
= \dfrac{1}{- r_1 + i \omega} \color{steelblue}{e^{\left(r_1 - r_2\right) t}} e^{(- r_1 + i \omega) t} + C \color{steelblue}{e^{\left(r_1 - r_2\right) t}}
$$
The inner integration:
$$
\color{firebrick}{\dfrac{d}{dt}} \left( e^{- r_2 t} x' \right)
= \dfrac{1}{-r_1 + i \omega} e^{(- r_2 + i \omega) t} + C e^{\left(r_1 - r_2\right) t}
$$
$$
e^{- r_2 t} x' = \color{firebrick}{\int} \left[ \dfrac{1}{-r_1 + i \omega} e^{(- r_2 + i \omega) t} + C e^{\left(r_1 - r_2\right) t} \right] \, dt
$$
$$
e^{- r_2 t} x' = \dfrac{1}{-r_1 + i \omega} \cdot \dfrac{1}{-r_2 + i \omega} e^{(- r_2 + i \omega) t} + \dfrac{C}{r_1 - r_2} e^{\left(r_1 - r_2\right) t} + C_2
$$
So, after multiplying through by $e^{r_2 t}$ and absorbing constants (e.g. $C / \left( r_1 - r_2 \right) \rightarrow C_1$), we get our solution for $x'$ as
$$
x' = \dfrac{1}{-r_1 + i \omega} \cdot \dfrac{1}{-r_2 + i \omega} e^{i \omega t} + C_1 e^{r_1 t} + C_2 e^{r_2 t}
$$
where $C_1$, $C_2$ depend on the initial or boundary conditions, such as the values of $x'(0)$ and $\dot{x}'(0)$. Since they play no role in the steady-state response, we will leave these two constants alone from here on.
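Before moving on, we can plug the particular solution back into the original ODE and confirm the residual vanishes. A sketch in Python with `cmath` (all parameter values are illustrative assumptions):

```python
import cmath

# Plug the particular solution back into  m x'' + b x' + k x = F0 exp(i w t),
# using x_p(t) = (F0/m) * exp(i w t) / ((-r1 + i w)(-r2 + i w)).
m, b, k, F0, w = 1.0, 0.4, 2.0, 1.0, 1.3
disc = cmath.sqrt((b / m) ** 2 - 4 * k / m)
r1 = 0.5 * (-b / m + disc)
r2 = 0.5 * (-b / m - disc)

A = (F0 / m) / ((-r1 + 1j * w) * (-r2 + 1j * w))  # complex amplitude
t = 0.37
x = A * cmath.exp(1j * w * t)
xd = 1j * w * x           # first derivative (x is proportional to e^{iwt})
xdd = (1j * w) ** 2 * x   # second derivative

residual = m * xdd + b * xd + k * x - F0 * cmath.exp(1j * w * t)
print(abs(residual))  # ~0: the ODE is satisfied
```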
Then our solution (in math, the so-called particular solution of the ODE) can be rearranged a little:
$$
\begin{align}
x'
&= \dfrac{1}{-r_1 + i \omega} \cdot \dfrac{1}{-r_2 + i \omega} e^{i \omega t} \\[1ex]
&= \dfrac{-r_1 - i \omega}{r_1^2 + \omega^2} \cdot \dfrac{-r_2 - i \omega}{r_2^2 + \omega^2} e^{i \omega t} \\[1ex]
&= \dfrac{\left( r_1 r_2 - \omega^2 \right) + i \omega \left( r_1 + r_2 \right)}{\omega^4 + \left( r_1^2 + r_2^2 \right) \omega^2 + r_1^2 r_2^2} e^{i \omega t}
\end{align}
$$
Recalling where $\omega$, $r_1$, $r_2$ come from, you may notice that they are just numbers, independent of time. We can therefore recast the numerator to make the solution simpler.
The Euler (polar) form of the numerator is
$$
\left( r_1 r_2 - \omega^2 \right) + i \omega \left( r_1 + r_2 \right) \doteq R e^{i \phi}
$$
where the magnitude of this complex number $R$ is
$$
\begin{align}
R
&= \sqrt{\left( r_1 r_2 - \omega^2 \right)^2 + \omega^2 \left( r_1 + r_2 \right)^2} \\[1ex]
&= \sqrt{\omega^4 + \left( r_1^2 + r_2^2 \right) \omega^2 + r_1^2 r_2^2}
\end{align}
$$
Fortunately, but not surprisingly, the denominator of $x'$ is exactly $R^2$: the numerator and denominator are built from the same combination of terms.
Next, let's express the relevant combinations of $r_1$, $r_2$ in terms of the system parameters (by Vieta's formulas),
$$
r_1 + r_2 = - \dfrac{b}{m}
\, , \quad
r_1 r_2 = \dfrac{k}{m} \doteq \omega_0^2
$$
where $\omega_0$ is the natural frequency defined for simple harmonic oscillation. With one more step,
$$
r_1^2 + r_2^2 = \left( r_1 + r_2 \right)^2 - 2 r_1 r_2 = \left( \dfrac{b}{m} \right)^2 - 2 \omega_0^2 \, ,
$$
we can rewrite $R$ a step further as
$$
\begin{align}
R
&= \sqrt{\omega^4 + \left( r_1^2 + r_2^2 \right) \omega^2 + r_1^2 r_2^2} \\[1ex]
&= \sqrt{\left( \omega^4 - 2 \omega_0^2 \omega^2 + \omega_0^4 \right) + \left( \dfrac{b \omega}{m} \right)^2} \\[1ex]
&= \sqrt{ \left( \omega^2 - \omega_0^2 \right)^2 + \left( \dfrac{b \omega}{m} \right)^2}
\end{align}
$$
Also, you can verify for yourself that
$$
\phi = \tan^{-1} \dfrac{b \omega}{m \left( \omega^2 - \omega_0^2 \right)}
$$
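A quick check of the closed forms of $R$ and $\phi$ against the complex numerator (parameter values are illustrative assumptions; comparing $\tan \phi$ rather than $\phi$ itself sidesteps the branch ambiguity of $\tan^{-1}$):

```python
import cmath
import math

# Check the closed forms of R and phi against the complex numerator
#   N = (r1 r2 - w^2) + i w (r1 + r2),
# where Vieta gives r1 + r2 = -b/m and r1 r2 = k/m = w0^2 (both real).
m, b, k, w = 1.0, 0.4, 2.0, 1.3
w0sq = k / m
N = (w0sq - w ** 2) + 1j * w * (-b / m)

R = math.sqrt((w ** 2 - w0sq) ** 2 + (b * w / m) ** 2)
print(abs(abs(N) - R))  # ~0: the magnitude formula checks out

# tan(phi) comparison:
print(abs(math.tan(cmath.phase(N)) - (b * w / m) / (w ** 2 - w0sq)))  # ~0
```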
Thus, combining everything derived above, we can write our solution for the damped, driven oscillation as
$$
x'(t) = \dfrac{R e^{i \phi}}{R^2} e^{i \omega t}
$$
and, that is
$$
\begin{align}
x(t)
&= \dfrac{F_0 / m}{\sqrt{ \left( \omega^2 - \omega_0^2 \right)^2 + \left( b \omega / m \right)^2}} \, e^{i \left( \omega t + \phi \right)} \\[1ex]
&= \dfrac{F_0 / m}{\sqrt{ \left( \omega^2 - \omega_0^2 \right)^2 + \left( b \omega / m \right)^2}} \big[ \cos \left( \omega t + \phi \right) + i \sin \left( \omega t + \phi \right) \big]
\end{align}
$$
It may seem a little strange that the position as a function of time takes a complex value; in electrodynamics, though, the imaginary part does encode physically significant behavior.
For now, let's set that aside and consider two simpler cases:
1. $$m \ddot{x} + b \dot{x} + kx = F_0 \cos (\omega t)$$
The ODE above is equivalent to
$$
\Re{\left\lbrace m \ddot{x} + b \dot{x} + kx \right\rbrace} = \Re{\left\lbrace F_0 e^{i \omega t} \right\rbrace}
$$
and then the solution is
$$
x(t) = \dfrac{(F_0 / m) \cos \left( \omega t + \phi \right)}{\sqrt{ \left( \omega^2 - \omega_0^2 \right)^2 + \left( b \omega / m \right)^2}}
$$
2. $$m \ddot{x} + b \dot{x} + kx = F_0 \sin (\omega t)$$
The ODE above is equivalent to
$$
\Im{\left\lbrace m \ddot{x} + b \dot{x} + kx \right\rbrace} = \Im{\left\lbrace F_0 e^{i \omega t} \right\rbrace}
$$
and then the solution is
$$
x(t) = \dfrac{(F_0 / m) \sin \left( \omega t + \phi \right)}{\sqrt{ \left( \omega^2 - \omega_0^2 \right)^2 + \left( b \omega / m \right)^2}}
$$
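As a final cross-check, we can integrate the sine-driven equation numerically with a hand-rolled RK4 stepper and compare the steady-state peak amplitude against the closed form above (all parameter values are illustrative assumptions):

```python
import math

# Integrate  m x'' + b x' + k x = F0 sin(w t)  with RK4, discard the
# transient, and compare the steady-state peak amplitude against
# (F0/m) / sqrt((w^2 - w0^2)^2 + (b w / m)^2).
m, b, k, F0, w = 1.0, 0.4, 2.0, 1.0, 1.3
w0sq = k / m

def deriv(t, x, v):
    return v, (F0 * math.sin(w * t) - b * v - k * x) / m

def rk4_step(t, x, v, h):
    k1x, k1v = deriv(t, x, v)
    k2x, k2v = deriv(t + h / 2, x + h / 2 * k1x, v + h / 2 * k1v)
    k3x, k3v = deriv(t + h / 2, x + h / 2 * k2x, v + h / 2 * k2v)
    k4x, k4v = deriv(t + h, x + h * k3x, v + h * k3v)
    return (x + h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

t, x, v, h = 0.0, 0.0, 0.0, 0.001
peak = 0.0
while t < 200.0:
    x, v = rk4_step(t, x, v, h)
    t += h
    if t > 100.0:              # transients (decay rate b/2m) are long gone
        peak = max(peak, abs(x))

predicted = (F0 / m) / math.sqrt((w ** 2 - w0sq) ** 2 + (b * w / m) ** 2)
print(peak, predicted)         # should agree to a few parts in a thousand
```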