---
tags: Digital Image Processing
disqus: hackmd
---

# Part 12

## Image Restoration

Let the input image be $f(x,y)$. Suppose this image is degraded by some function $H$, and some noise $\eta(x,y)$ is added to the output of the degradation, giving the observed image $g(x,y)$. Restoration filters are then applied to obtain an estimate $\hat{f}(x,y)$ that is as close as possible to the original image.

The degradation operator and the noise addition constitute the degradation process, whereas the final filtering step, which reconstructs an approximation of the image, is the restoration process. Mathematically, this can be represented as,
\begin{equation} g(x,y) = f(x,y)*h(x,y) + \eta(x,y) \end{equation}
In the frequency domain, it is written as,
\begin{equation} G(u,v) = F(u,v)H(u,v) + N(u,v) \end{equation}

### Some Definitions

We have,
\begin{equation} g(x,y) = H[f(x,y)] + \eta(x,y) \end{equation}
If $\eta(x,y) = 0$, we have,
\begin{equation} g(x,y) = H[f(x,y)] \end{equation}
Here, $H$ is the degradation operator. Some of its properties are,
1. Linearity: Given two images $f_1(x,y)$ and $f_2(x,y)$ and constants $k_1$ and $k_2$, linearity of the operator $H$ can be written as,
\begin{equation} H[k_1f_1(x,y) + k_2f_2(x,y)] = k_1H[f_1(x,y)] + k_2H[f_2(x,y)] \end{equation}
This is also known as the superposition principle and holds for all linear systems.
2. Additivity: If $k_1 = k_2 = 1$, we have,
\begin{equation} H[f_1(x,y) + f_2(x,y)] = H[f_1(x,y)] + H[f_2(x,y)] \end{equation}
3. Homogeneity: If $f_2(x,y) = 0$, we have,
\begin{equation} H[k_1f_1(x,y)] = k_1H[f_1(x,y)] \end{equation}

A system is said to be position invariant if $H[f(x - \alpha, y - \beta)] = g(x - \alpha, y - \beta)$, given that $H[f(x,y)] = g(x,y)$.
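The degradation model $g = f*h + \eta$ is easy to simulate. Below is a minimal sketch with NumPy and SciPy; the synthetic image, the box-blur PSF, and the noise level are all arbitrary choices for illustration, not part of the model itself.

```python
import numpy as np
from scipy.signal import convolve2d

rng = np.random.default_rng(0)

# Original image f(x,y): a synthetic 64x64 gradient (stand-in for a real image)
f = np.linspace(0.0, 1.0, 64 * 64).reshape(64, 64)

# Degradation function h(x,y): a simple 5x5 box blur (an assumed example PSF)
h = np.ones((5, 5)) / 25.0

# Additive noise term eta(x,y): zero-mean Gaussian
eta = rng.normal(0.0, 0.01, f.shape)

# g(x,y) = f(x,y) * h(x,y) + eta(x,y)
g = convolve2d(f, h, mode="same", boundary="wrap") + eta
```

The `boundary="wrap"` option makes the convolution circular, matching the periodic (circulant) formulation developed later in this section.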
From the discussion earlier, we are familiar with the Dirac delta (impulse) function. In the continuous domain it is defined as,
\begin{equation} \delta(x,y) = \begin{cases} \infty, & x = 0 \text{ and } y = 0 \\ 0, & \text{otherwise} \end{cases} \end{equation}
together with the normalization $\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}\delta(x,y)dxdy = 1$. For the shifted case, we have,
\begin{equation} \delta(x - x_0,y - y_0) = \begin{cases} \infty, & x = x_0 \text{ and } y = y_0 \\ 0, & \text{otherwise} \end{cases} \end{equation}
To find the value of a function at a point, we can evaluate the following integral (the sifting property),
\begin{equation} \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(x,y)\delta(x - x_0, y - y_0)dxdy = f(x_0,y_0) \end{equation}
Conversely, the function $f(x,y)$ can be written as,
\begin{equation} f(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(\alpha,\beta)\delta(x - \alpha, y - \beta)d\alpha d\beta \end{equation}
Substituting this into the degradation expression, assuming no noise, we have,
\begin{equation} g(x,y) = H[f(x,y)] = H\bigg[\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(\alpha,\beta)\delta(x - \alpha, y - \beta)d\alpha d\beta\bigg] \end{equation}
Using the linearity and additivity properties, this can be written as,
\begin{equation} g(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}H\bigg[f(\alpha,\beta)\delta(x - \alpha, y - \beta)\bigg]d\alpha d\beta \end{equation}
Since $f(\alpha,\beta)$ is independent of $x$ and $y$, by homogeneity we have,
\begin{equation} g(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(\alpha,\beta)H\bigg[\delta(x - \alpha, y - \beta)\bigg]d\alpha d\beta \end{equation}
Now, $H[\delta(x - \alpha, y - \beta)] = h(x,\alpha,y,\beta)$ is known as the impulse response of $H$. In optics, it is also called the point spread function, or PSF. Therefore, we get,
\begin{equation} g(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(\alpha,\beta)h(x,\alpha,y,\beta)d\alpha d\beta \end{equation}
This is known as the superposition integral of the first kind. Suppose $H$ is position invariant. Then $H[\delta(x - \alpha, y - \beta)] = h(x - \alpha, y - \beta)$.
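The same idea holds for discrete images, where the unit impulse simply has value 1 at the origin: feeding an impulse through a linear shift-invariant system returns its impulse response, i.e. the PSF. A small sketch (the averaging kernel is an arbitrary illustrative PSF):

```python
import numpy as np
from scipy.signal import convolve2d

# Discrete unit impulse centered in a 9x9 grid
delta = np.zeros((9, 9))
delta[4, 4] = 1.0

# An assumed shift-invariant degradation: 3x3 averaging PSF
h = np.ones((3, 3)) / 9.0

# The system's response to the impulse is the (centered) PSF itself
response = convolve2d(delta, h, mode="same")
```

This is exactly why imaging a point of light, as described under Experimentation below, reveals the degradation function of a camera system.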
Therefore, we can write the expression as,
\begin{equation} g(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(\alpha,\beta)h(x - \alpha,y - \beta)d\alpha d\beta \end{equation}
This expression is nothing but the convolution of the two functions $f$ and $h$. With the noise added back, we get,
\begin{equation} g(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty}f(\alpha,\beta)h(x - \alpha,y - \beta)d\alpha d\beta + \eta(x,y) \end{equation}
If we have some knowledge of the degradation function and the noise, it is possible to reconstruct the image to some extent.

The degradation model above is in the continuous domain; we need its discrete formulation. Consider the one-dimensional case for simplicity, with no noise, i.e. we have $f(x)$ and $h(x)$. When discretized, we get two arrays of sizes, say, $A$ and $B$ respectively. To give both arrays a common length, and to avoid the overlap caused by periodic wraparound, both are zero-padded to some $M \geq A + B - 1$. The extended functions are denoted $f_e(x)$ and $h_e(x)$. The convolution is then,
\begin{equation} g_e(x) = \sum_{m = 0}^{M - 1}f_e(m)h_e(x - m), \quad x = 0, 1, ..., M - 1 \end{equation}
Being a discrete operation, it can be written in matrix form as $g = Hf$, where,
\begin{equation} f_{M \times 1} = \begin{bmatrix} f_e(0) \\ f_e(1) \\ . \\ . \\ f_e(M - 1) \end{bmatrix}, g_{M \times 1} = \begin{bmatrix} g_e(0) \\ g_e(1) \\ . \\ . \\ g_e(M - 1) \end{bmatrix}, H_{M \times M} = \begin{bmatrix} h_e(0) & h_e(-1) & . & . & h_e(-M + 1) \\ h_e(1) & h_e(0) & . & . & h_e(-M + 2) \\. & . & . & . & .\\. & . & . & . & . \\ h_e(M - 1) & h_e(M - 2) & . & . & h_e(0) \end{bmatrix} \end{equation}
$h_e(x)$ is periodic with period $M$, that is, $h_e(x + M) = h_e(x)$. Therefore,
\begin{equation} H_{M \times M} = \begin{bmatrix} h_e(0) & h_e(M - 1) & h_e(M - 2) & . & . & h_e(1) \\ h_e(1) & h_e(0) & h_e(M - 1) & . & . & h_e(2) \\ h_e(2) & h_e(1) & h_e(0) & . & . & h_e(3)\\. & . & . & . &. & .\\. & . & . & . & . & . \\ h_e(M - 1) & h_e(M - 2) & h_e(M - 3) & . & . & h_e(0) \end{bmatrix} \end{equation}
Here, each row of the matrix is a circular right shift of the row above it. Such a matrix is known as a circulant matrix, since the elements of a row circulate back to the beginning as we move down the rows.

Extending this idea to two dimensions, we have $f(x,y)$ and $h(x,y)$, whose arrays have dimensions $A \times B$ and $C \times D$ respectively. Both functions are zero-padded to dimension $M \times N$, and $h_e$ is treated as periodic with periods $M$ and $N$ in the $x$ and $y$ directions respectively. The convolution expression can be written as,
\begin{equation} g_e(x,y) = \sum_{n = 0}^{N - 1}\sum_{m = 0}^{M - 1}f_e(m,n)h_e(x - m,y - n) \end{equation}
Incorporating the noise and writing the expression in matrix form, with $f$, $g$ and $n$ stacked row by row into column vectors, we have $g_{MN \times 1} = H_{MN \times MN}f_{MN \times 1} + n_{MN \times 1}$. Interestingly, $H$ can be written as,
\begin{equation} H = \begin{bmatrix} H_0 & H_{M - 1} & . & . & H_1 \\ H_1 & H_0 & . & . & H_2 \\ . & . & . & . & . \\ . & . & . & . & . \\ H_{M - 1} & H_{M - 2} & . & . & H_0 \end{bmatrix} \end{equation}
where $H_j$ is a matrix of dimension $N \times N$, generated from the $j^{th}$ row of the extended degradation function as,
\begin{equation} H_j = \begin{bmatrix} h_e(j,0) & h_e(j,N - 1) & h_e(j,N - 2) & . & . & h_e(j,1) \\ h_e(j,1) & h_e(j,0) & h_e(j,N - 1) & . & . & h_e(j,2) \\ h_e(j,2) & h_e(j,1) & h_e(j,0) & . & . & h_e(j,3)\\. & . & . & . &. & .\\. & . & . & . & . & . \\ h_e(j,N - 1) & h_e(j,N - 2) & h_e(j,N - 3) & . & . & h_e(j,0) \end{bmatrix} \end{equation}
Each block $H_j$ is circulant, and the blocks themselves are arranged circulantly, so $H$ is known as a block circulant matrix.

### Restoration

The degradation function can be estimated in three ways,
1. Observation
2. Experimentation
3. Mathematical modelling

Restoring an image using a degradation function estimated in one of these ways is also known as blind deconvolution, since the true degradation function is never known exactly.

### Observation

In a degraded image $g(x,y)$, a small region $g_s(x,y)$ is analysed. Using the knowledge obtained from it, the degradation can be estimated, and from that the restored image $\hat{f}(x,y)$.
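The 1-D circulant construction can be checked numerically: multiplying by the circulant matrix built from $h_e$ must agree with circular convolution, and with enough zero padding it also agrees with ordinary linear convolution. A small sketch using `scipy.linalg.circulant` (the sample sequences are arbitrary):

```python
import numpy as np
from scipy.linalg import circulant

# f has A = 4 samples, h has B = 3; pad both to M >= A + B - 1 = 6
f_e = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0])
h_e = np.array([1.0, -1.0, 2.0, 0.0, 0.0, 0.0])

# circulant() builds exactly the matrix above: H[x, m] = h_e[(x - m) mod M]
H = circulant(h_e)

# g = H f should equal the circular convolution of f_e and h_e,
# computed here via the DFT (convolution theorem)
g = H @ f_e
g_ref = np.real(np.fft.ifft(np.fft.fft(f_e) * np.fft.fft(h_e)))
```

Because circulant matrices are diagonalized by the DFT, this matrix-vector product is in practice always computed with FFTs rather than by forming $H$ explicitly.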
For this small region and an estimate of the corresponding undegraded region, we have $g_s(x,y) \iff G_s(u,v)$ and $\hat{f}_s(x,y) \iff \hat{F}_s(u,v)$. Using these, we can estimate $H_s(u,v) = \frac{G_s(u,v)}{\hat{F}_s(u,v)}$.

### Experimentation

Suppose equipment similar to the original imaging setup is available. If a degradation applied through this setup produces results matching the degraded image we have, then it can be taken as the required degradation function. In other words, the impulse response of the imaging setup must be obtained. The system is made to image an impulse (a small, bright point of light), whose Fourier transform is a constant: the input is $F(u,v) = A$. After degradation, we have $G(u,v) = H(u,v)F(u,v) = AH(u,v)$, which gives $H(u,v) = \frac{G(u,v)}{A}$.

### Mathematical Modelling

This is quite a well-researched and widely used method, since a model explains the nature of the degradation and the noise. One such model accounts for atmospheric turbulence degrading the image,
\begin{equation} H(u,v) = e^{-k(u^2 + v^2)^{5/6}} \end{equation}
Here $k$ indicates the severity of the turbulence: the larger its value, the more severe the turbulence.

### Basic Principles

Consider the case of motion blurring. If the scene moves relative to the camera while the shutter is open, the recorded image is the aggregation of all the instantaneous exposures over the duration of the shot. Say $f(x,y)$ is imaged while in motion, with two time-varying displacement components $x_0(t)$ and $y_0(t)$.
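The turbulence model is straightforward to evaluate on a frequency grid. A minimal sketch follows; the grid size and the value $k = 0.0025$ (a value often quoted for severe turbulence) are illustrative choices:

```python
import numpy as np

def turbulence_otf(M, N, k):
    """H(u,v) = exp(-k (u^2 + v^2)^(5/6)), centered at the grid midpoint."""
    u = np.arange(M) - M // 2
    v = np.arange(N) - N // 2
    U, V = np.meshgrid(u, v, indexing="ij")
    return np.exp(-k * (U**2 + V**2) ** (5.0 / 6.0))

H = turbulence_otf(256, 256, k=0.0025)
```

Note that $H$ equals 1 at the center (zero frequency) and decays radially, so the model behaves like a low-pass filter: fine detail is attenuated most.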
Assuming the shutter stays open for time $T$, the observation at point $(x,y)$ is,
\begin{equation} g(x,y) = \int_{0}^{T} f(x - x_0(t),y - y_0(t))dt \end{equation}
From the Fourier transform, we have,
\begin{equation} G(u,v) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}g(x,y)e^{-j2\pi (ux + vy)}dxdy \end{equation}
On substitution, we have,
\begin{equation} G(u,v) = \int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty}\bigg[\int_{0}^{T} f(x - x_0(t),y - y_0(t))dt\bigg]e^{-j2\pi (ux + vy)}dxdy \end{equation}
On reversing the order of integration, we have,
\begin{equation} G(u,v) = \int_{0}^{T}\bigg[\int_{-\infty}^{+\infty} \int_{-\infty}^{+\infty} f(x - x_0(t),y - y_0(t))e^{-j2\pi (ux + vy)}dxdy\bigg]dt \end{equation}
The inner double integral is the Fourier transform of a shifted function, and a spatial shift only adds a phase term to the spectrum: $f(x - x_0(t), y - y_0(t)) \iff F(u,v)e^{-j2\pi[ux_0(t) + vy_0(t)]}$. Therefore,
\begin{equation} G(u,v) = \int_{0}^{T}F(u,v)e^{-j2\pi[ux_0(t) + vy_0(t)]}dt = F(u,v)\int_{0}^{T}e^{-j2\pi[ux_0(t) + vy_0(t)]}dt \end{equation}
Defining the degradation function as this integral, we have $G(u,v) = H(u,v)F(u,v)$. Assume uniform linear motion, $x_0(t) = \frac{at}{T}$ and $y_0(t) = \frac{bt}{T}$. Evaluating the integral then gives the degradation function as,
\begin{equation} H(u,v) = \frac{T}{\pi (ua + vb)}\sin{[\pi(ua + vb)]}e^{-j\pi (ua + vb)} \end{equation}

### Inverse Filtering

We have $G(u,v) = H(u,v)F(u,v)$, so an estimate of the image spectrum is $\hat{F}(u,v) = \frac{G(u,v)}{H(u,v)}$. With noise, $G(u,v) = H(u,v)F(u,v) + N(u,v)$, and the estimate becomes $\hat{F}(u,v) = F(u,v) + \frac{N(u,v)}{H(u,v)}$. Wherever $H(u,v)$ is small, the noise term dominates the reconstruction. Therefore, we have to restrict the reconstruction to a region of the spectrum, typically near the origin, where $H(u,v)$ is not close to zero.

### Wiener Filtering

Here, the reconstruction is done by minimizing an error function. Given the original image $f$ and the reconstructed image $\hat{f}$, the mean squared error can be written as $e = E[(f - \hat{f})^2]$.
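Truncated inverse filtering can be sketched as follows: divide by $H$ only where $|H|$ exceeds a threshold, and leave the remaining frequencies untouched. The Gaussian-shaped $H$, the noise level, and the cutoff 0.1 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scene and an assumed Gaussian-shaped degradation H(u,v)
f = rng.random((64, 64))
u = np.fft.fftfreq(64)
U, V = np.meshgrid(u, u, indexing="ij")
H = np.exp(-100.0 * (U**2 + V**2))

# Degrade in the frequency domain, then add a little noise in space
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H)) + rng.normal(0.0, 1e-4, f.shape)

# Inverse filter, restricted to frequencies where |H| is not too small
G_obs = np.fft.fft2(g)
mask = np.abs(H) > 0.1            # illustrative cutoff
F_hat = np.where(mask, G_obs / H, G_obs)
f_hat = np.real(np.fft.ifft2(F_hat))
```

Without the mask, division at frequencies where $H$ is nearly zero would amplify the noise term $N/H$ enormously, which is exactly the noise dominance described above.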
In the frequency domain (the mathematical proof is not covered here), the filter can be written as,
\begin{equation} \hat{F}(u,v) = \bigg[\frac{H^{*}(u,v)S_f(u,v)}{S_f(u,v)|H(u,v)|^2 + S_{\eta}(u,v)}\bigg]G(u,v) \end{equation}
$H^{*}$ is the complex conjugate of $H$, $G$ is the Fourier transform of the degraded image, and $\hat{F}$ is that of the reconstructed image. $S_f$ is the power spectrum of the original image and $S_\eta$ is the noise power spectrum. On simplification, we have,
\begin{equation} \hat{F}(u,v) = \bigg[\frac{1}{H(u,v)}\frac{|H(u,v)|^2}{|H(u,v)|^2 + \frac{S_{\eta}(u,v)}{S_f(u,v)}}\bigg]G(u,v) \end{equation}
It is evident that when there is no noise ($S_\eta = 0$), the Wiener filter reduces to the inverse filter; the Wiener filter is thus the more general case. Since $S_\eta$ and $S_f$ are rarely known in practice, the ratio is often replaced by a constant $K = \frac{S_{\eta}(u,v)}{S_f(u,v)}$, giving,
\begin{equation} \hat{F}(u,v) = \bigg[\frac{1}{H(u,v)}\frac{|H(u,v)|^2}{|H(u,v)|^2 + K}\bigg]G(u,v) \end{equation}
where $K$ can be adjusted manually.

### Constrained Least Squares Filtering

Here, no statistical assumption is made about the original image; only the noise mean $m_\eta$ and variance $\sigma^2_\eta$ need to be known. The optimality criterion is a smoothness measure,
\begin{equation} C = \sum_{x = 0}^{M - 1} \sum_{y = 0}^{N - 1} [\nabla^2f(x,y)]^2 \end{equation}
We have to minimize this criterion subject to the constraint $||g - H\hat{f}||^2 = ||n||^2$. The solution is,
\begin{equation} \hat{F}(u,v) = \bigg[\frac{H^{*}(u,v)}{|H(u,v)|^2 + \gamma |P(u,v)|^2}\bigg]G(u,v) \end{equation}
$\gamma$ is to be adjusted so that the constraint is satisfied. $P(u,v)$ is the Fourier transform of the mask,
\begin{equation} p(x,y) = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix} \end{equation}
which is the Laplacian mask. To adjust $\gamma$, we define a residual vector $r = g - H\hat{f}$. Since $\hat{F}$ (and hence $\hat{f}$) depends on $\gamma$, so does $r$. Define $\phi(\gamma) = r^Tr = ||r||^2$.
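The parametric Wiener filter with constant $K$ takes only a few lines; the sketch below reuses a simulated blur-plus-noise setup, where the Gaussian-shaped $H$, the noise level, and the value of $K$ are illustrative assumptions:

```python
import numpy as np

def wiener_restore(g, H, K):
    """F_hat = [H* / (|H|^2 + K)] G  -- the parametric Wiener filter."""
    G = np.fft.fft2(g)
    F_hat = (np.conj(H) / (np.abs(H) ** 2 + K)) * G
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(2)
f = rng.random((64, 64))
u = np.fft.fftfreq(64)
U, V = np.meshgrid(u, u, indexing="ij")
H = np.exp(-100.0 * (U**2 + V**2))          # assumed degradation OTF
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H)) + rng.normal(0.0, 1e-3, f.shape)

f_hat = wiener_restore(g, H, K=1e-3)
```

The form $H^{*}/(|H|^2 + K)$ is algebraically the same as $\frac{1}{H}\frac{|H|^2}{|H|^2 + K}$ but avoids dividing by $H$ where it is zero: the $K$ in the denominator keeps the gain bounded at every frequency.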
$\phi(\gamma)$ is a monotonically increasing function of $\gamma$: whenever $\gamma$ increases, $||r||^2$ increases, and vice versa. We want $||r||^2 = ||n||^2 \pm a$, where $a$ is an accuracy factor, and we reach this by an iterative approach. If $||r||^2 < ||n||^2 - a$, increase $\gamma$; if $||r||^2 > ||n||^2 + a$, decrease $\gamma$. Once $||r||^2$ lies within the tolerance bracket, use the corresponding $\gamma$ in the reconstruction formula.

To estimate the noise, we analyse a sub-image of the degraded image, from which the noise mean and variance are,
\begin{equation} m_{\eta} = \frac{1}{MN} \sum_{x = 0}^{M - 1} \sum_{y = 0}^{N - 1} \eta(x,y), \quad \sigma^2_{\eta} = \frac{1}{MN} \sum_{x = 0}^{M - 1} \sum_{y = 0}^{N - 1} [\eta(x,y) - m_\eta]^2 \end{equation}
From these, $||\eta||^2 = MN[\sigma^2_\eta + m^2_\eta]$. Once we have the noise for the given sub-image, we can take its Fourier transform. If the noise is periodic, it shows up as isolated bright dots in the spectrum, and it is possible to remove them using band reject filters (ideal or Butterworth).
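The relation between the noise energy, its mean, and its variance, $||\eta||^2 = MN[\sigma^2_\eta + m^2_\eta]$ with the $\frac{1}{MN}$ (population) normalization, is easy to verify numerically. The noise parameters below are arbitrary; a nonzero mean is used so that both terms of the identity matter:

```python
import numpy as np

rng = np.random.default_rng(3)
M, N = 32, 48

# Sample noise with a nonzero mean
eta = rng.normal(0.2, 0.5, (M, N))

m_eta = eta.mean()
var_eta = ((eta - m_eta) ** 2).mean()   # population variance, 1/(MN) normalization

lhs = (eta ** 2).sum()                  # ||eta||^2
rhs = M * N * (var_eta + m_eta ** 2)
```

This is why knowing only $m_\eta$ and $\sigma^2_\eta$ of the noise is enough to set the target $||n||^2$ in the constrained least squares iteration.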