---
tags: Digital Image Processing
disqus: hackmd
---
# Part 1: Introduction
Digital image processing broadly serves three purposes:
1. Improving the pictorial representation of information for human perception.
2. Conversion of images to other forms, as used in robotics and automation.
3. Compression of images to reduce their size, so that they can be transmitted over low-bandwidth communication channels.
## Applications
1. Image denoising
2. Content enhancement
3. Remote sensing
4. Motion detection
5. Product quality check
6. Motion tracking
7. Machine vision
8. Computer vision
9. Machine learning and deep learning techniques for preprocessing
10. Image compression, based on exploiting pixel redundancy, coding redundancy and psychovisual redundancy, using lossless and lossy compression techniques
## Image Representation
An image, classically, can be defined as a 2D light-intensity function $f(x,y)$. In a digital image, the function is discretized in terms of the brightness at each "location". Therefore, an image can also be seen as a matrix whose row and column indices specify the "location", and plugging these index values into the function yields the brightness there. In all future references, these "locations" shall be referred to as **pixels**.
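As a quick illustration (a minimal sketch assuming NumPy; the brightness values are arbitrary), a digital image is nothing more than a matrix of pixel brightnesses:

```python
import numpy as np

# A tiny 4x4 8-bit greyscale "image": each entry is the brightness
# f(x, y) at row x, column y (0 = black, 255 = white).
image = np.array([
    [  0,  64, 128, 255],
    [ 32,  96, 160, 224],
    [ 64, 128, 192, 255],
    [  0,   0, 128, 128],
], dtype=np.uint8)

x, y = 1, 2
print(f"Brightness at pixel ({x}, {y}):", image[x, y])  # 160
print("Image dimensions (rows, cols):", image.shape)    # (4, 4)
```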
One thing to mention: an image can be considered as a product of two properties, *reflectivity* ($r(x,y)$) and *intensity* ($i(x,y)$). As for their significance, reflectivity is what produces the illumination distribution that our eye interprets from the given scene. There may be a source of light that is not shown in the image, yet it helps the scene become visible enough for the camera, or even the eye, to register it. The intensity is a property that will become clearer as the lectures progress; for the time being, imagine it as the 2D spatial representation of the brightness. (It is easiest not to worry about colour images: assume that whenever there is a light source it is white, and it illuminates the room or scene as white. Places the light cannot reach appear as a darker shade, or even black if no light reaches them at all.)
Now let us address the question: why digital? If one has prior exposure to digital signal processing concepts, one is familiar with the concept of discretization; for the sake of completeness, it is explained here in brief. Consider an analogue function registered over the 2D plane for some scene. Instead of taking the whole of the spatial 2D information, one may divide the image into grids. A one-dimensional representation is shown in Figure 1.
<p align="center">
<img src="https://i.imgur.com/wjtuIiB.png">
</p>
As for discretizing the intensity, the intensity function can be *quantized*, meaning it is represented at a finite number of discrete levels rather than continuously (infinitely many representations). The concept of quantization shall be covered in more detail later.
To summarize: a digital image is obtained by discretizing both the spatial coordinates (sampling) and the brightness values (quantization).
The images that will be dealt with most of the time shall be 8-bit quantized; that is, there will be $2^8 = 256$ possible levels of intensity for a pixel, with $0$ for black and $255$ for white. (However the image plane is defined, the number of representable pixel intensities must be $256$.)
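A minimal sketch of such 8-bit quantization, assuming NumPy: a continuous intensity in $[0, 1]$ is mapped to one of the $256$ discrete levels.

```python
import numpy as np

def quantize_8bit(intensity):
    """Map continuous intensities in [0, 1] to 256 discrete levels (0..255)."""
    levels = 256
    # Scale to [0, levels - 1], round to the nearest level, clip for safety.
    return np.clip(np.round(intensity * (levels - 1)), 0, levels - 1).astype(np.uint8)

# A continuous (analogue) intensity ramp and its quantized version.
analogue = np.linspace(0.0, 1.0, 5)   # [0.0, 0.25, 0.5, 0.75, 1.0]
print(quantize_8bit(analogue))        # [  0  64 128 191 255]
```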
## Image Digitization
In the last section, the concept of digitization was shown in brief. Now, that concept will be generalized.
Consider an image as shown,
<p align="center">
<img src="https://i.imgur.com/lMGxyrx.png">
</p>
This image shall be frequently used to demonstrate different digital image processing concepts. The image, in this case, is $256 \times 256$ in size and 8-bit quantized. Before moving on, it is important to mention that in most of the references for this subject, the coordinate system followed is,
<p align="center">
<img src="https://i.imgur.com/1X9sChJ.png">
</p>
Now consider the image used before (popularly known as Lena or Lenna). The size of the image is $256 \times 256$. According to the coordinate convention, the row index $x$ runs from $0$ to $M = 255$, so the height is $H = 256$, and the column index $y$ runs from $0$ to $N = 255$, so the width is $L = 256$. Therefore,
\begin{equation}
0 \leq x \leq H - 1,
0 \leq y \leq L - 1
\end{equation}
We know that,
\begin{equation}
f(x,y) = r(x,y) \cdot i(x,y)
\end{equation}
The value of $r$ is considered to lie between $0$ and $1$: $0$ means that no light is reflected from that pixel, and $1$ means that all of the light is reflected. The value of $i$, if considered analogue, can take any non-negative real value at a pixel. To reduce the bandwidth consumed by the image, one can restrict the range within which the pixel intensity values must lie:
\begin{equation}
I_{min} \leq f(x,y) \leq I_{max}
\end{equation}
As stated before, such a representation is valid; however, there is one problem: the function is still in continuous form, and it has to be discretized. Two processes make the final image digital: **sampling** and **quantization**.
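Before moving on, here is a minimal numerical sketch of the $f(x,y) = r(x,y) \cdot i(x,y)$ model, assuming NumPy; the reflectivity and illumination values are arbitrary illustrative numbers.

```python
import numpy as np

# Reflectivity r(x, y) in [0, 1]: 0 = absorbs all light, 1 = reflects all of it.
r = np.array([[0.2, 0.9],
              [0.5, 0.0]])

# Illumination i(x, y): any non-negative real value (arbitrary units).
i = np.array([[100.0, 100.0],
              [ 40.0,  40.0]])

# The recorded image is the pointwise product of the two.
f = r * i
print(f)   # [[20. 90.] [20.  0.]]
```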
## Sampling and Quantization
Before moving on to the actual discussion, suppose an image has been digitized and some image processing technique applied. To display it on the screen, it has to be converted back to analogue form, to satisfy the working of the hardware device. So, the following pipeline can be inferred:
1. Acquiring the image
2. Sampling
3. Quantization
4. Image sent to the image processing unit
5. After the technique has been applied, the result goes to a D2A (Digital-to-Analogue) converter.
6. The analogue signal is sent to the display.
### Sampling
The first step is the sampling of the image. An image can be considered a two-dimensional signal, so consider first a one-dimensional signal $x(t)$ as shown below (the plots were made using Python; the source code is available [here](https://github.com/FlagArihant2000/Signal-Processing/blob/master/DSP/Lesson2.ipynb)). Consider the first graph; the other two will be discussed later. Given a sinusoidal wave (used just for the sake of example; it is possible to sample any continuous signal), one can take into account only a finite set of points from the analogue signal. So, instead of taking all of the uncountably many points, we take a finite set of them. In the first case, the samples are taken at a frequency of $400\,Hz$.
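A minimal sketch of such a sampling experiment, assuming NumPy and Matplotlib (the $10\,Hz$ test sinusoid here is an illustrative choice; the notebook linked above contains the original plots):

```python
import numpy as np
import matplotlib.pyplot as plt

f_signal = 10        # Hz, an illustrative low-frequency sinusoid
f_s = 400            # Hz, sampling frequency
T = 1.0 / f_s        # sampling interval

# "Analogue" signal: a dense grid stands in for continuous time.
t_dense = np.linspace(0, 0.5, 5000)
x_dense = np.sin(2 * np.pi * f_signal * t_dense)

# Sampled signal: keep only the values at multiples of T.
t_samples = np.arange(0, 0.5, T)
x_samples = np.sin(2 * np.pi * f_signal * t_samples)

plt.plot(t_dense, x_dense, label="analogue x(t)")
plt.stem(t_samples, x_samples, linefmt="C1-", markerfmt="C1o", label="samples")
plt.xlabel("t (s)")
plt.legend()
plt.show()
```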

Consider the third graph now. Notice that the frequency of the signal is now very high. Keeping the sampling frequency constant, it is evident that not all of the information is captured by the sampled signal: the samples miss many critical points such as maxima, minima and roots.
The concept of sampling can be looked at more formally. Before that, let us define the *Dirac delta function* $\delta(x)$, which in its discrete form (the unit impulse) equals $1$ when $x = 0$ and $0$ elsewhere, as shown below.

The sampling formula can be defined more formally as,
\begin{equation}
comb(t;T) = \sum_{m = -\infty}^{+\infty}\delta(t - mT)
\end{equation}
where $T = \Delta t$ is the sampling interval and $t$ is the instant under consideration. This impulse train is also known as the *Dirac comb*.
One could imagine the graph as a series of such impulses, one at each sampling interval, as shown.

Multiplying the two, we obtain the final sampled signal,
\begin{equation}
X_s(t) = X(t) \cdot comb(t;T)
\end{equation}
where $X_s$ is the sampled version of the analogue signal $X$, with sampling interval $T$.
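On a discrete grid, this masking view of $X_s(t) = X(t) \cdot comb(t;T)$ can be sketched as follows (assuming NumPy; the grid size and comb period are arbitrary illustrative choices):

```python
import numpy as np

N = 100       # points on the dense "continuous" time grid
step = 10     # comb period in grid points (the sampling interval T)

t = np.linspace(0, 1, N, endpoint=False)
x = np.sin(2 * np.pi * 3 * t)    # the "analogue" signal: a 3 Hz sinusoid

# Dirac comb on the grid: 1 at every multiple of the sampling interval, 0 elsewhere.
comb = np.zeros(N)
comb[::step] = 1.0

# Sampled signal: the pointwise product is zero except at the sample instants.
x_s = x * comb
print("sample instants:", t[comb == 1])   # 0.0, 0.1, 0.2, ...
```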
### Signal Reconstruction
To convert the sampled signal back to analogue form, first analyse it in the frequency domain. Consider a signal $x(t)$. If the signal is aperiodic, it can be analysed in the frequency domain using the Fourier transform.
\begin{equation}
X(\omega) = \int_{-\infty}^{+\infty} x(t)e^{-j\omega t}dt
\end{equation}
If the function is periodic, then it is also possible to analyse the signal using Fourier series analysis.
\begin{equation}
v(t) = \sum_{n = - \infty}^{+\infty}c(n)e^{jn\omega_0 t}
\end{equation}
where,
\begin{equation}
c(n) = \frac{1}{T_0} \int_{T_0} v(t)e^{ - jn\omega_0 t}\,dt
\end{equation}
Here, let $v(t)$ be the comb function and take $T_0 = T$, the sampling interval. Since the comb contributes a single unit impulse per period, the integral evaluates to $1$, so $c(n) = \frac{1}{T}$ for every $n$. So,
\begin{equation}
v(t) = \frac{1}{T}\sum_{n = -\infty}^{+\infty} e^{jn\omega_0 t}
\end{equation}
Taking the Fourier transform term by term, this becomes a comb in the frequency domain as well, with spikes spaced $\omega_0 = \frac{2\pi}{T}$ apart. Therefore, for the sampled signal, the *DFT (Discrete Fourier Transform)* is,
\begin{equation}
X(k) = \sum_{n = 0}^{N - 1}x(n)e^{-j\frac{2\pi}{N} nk}
\end{equation}
The IDFT (Inverse DFT) can be obtained as,
\begin{equation}
x(n) = \frac{1}{N}\sum_{k = 0}^{N - 1}X(k)e^{j\frac{2\pi}{N} nk}
\end{equation}
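A minimal sanity check of this DFT/IDFT pair, assuming NumPy (whose `np.fft` routines follow the same convention, with the $\frac{1}{N}$ factor on the inverse):

```python
import numpy as np

def dft(x):
    """Direct implementation of X(k) = sum_n x(n) e^{-j 2 pi n k / N}."""
    N = len(x)
    n = np.arange(N)
    k = n.reshape(-1, 1)
    return np.sum(x * np.exp(-2j * np.pi * n * k / N), axis=1)

def idft(X):
    """Inverse: x(n) = (1/N) sum_k X(k) e^{+j 2 pi n k / N}."""
    N = len(X)
    k = np.arange(N)
    n = k.reshape(-1, 1)
    return np.sum(X * np.exp(2j * np.pi * n * k / N), axis=1) / N

x = np.array([1.0, 2.0, 0.0, -1.0])
X = dft(x)
print(np.allclose(X, np.fft.fft(x)))   # True: matches NumPy's FFT
print(np.allclose(idft(X), x))         # True: the IDFT recovers the signal
```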
#### Convolution
Consider two signals $x(t)$ and $h(t)$. So, its convolution can be defined as,
\begin{equation}
x(t) * h(t) = \int_{-\infty}^{+\infty}x(\tau)h(t - \tau) d\tau
\end{equation}
The convolution operation integrates the product of the signal $x(\tau)$ with a time-reversed and shifted copy of $h$.
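For example, here is a minimal discrete sketch, assuming NumPy (`np.convolve` computes the discrete analogue $y[n] = \sum_m x[m]\,h[n-m]$ of this integral):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])   # input signal x
h = np.array([1.0, 1.0])        # impulse response h (a 2-tap moving sum)

# Discrete convolution: y[n] = sum_m x[m] * h[n - m]
y = np.convolve(x, h)
print(y)   # [1. 3. 5. 3.]
```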
What if the same operation is done in the frequency domain? The product takes the place of the convolution (this will be proved later on):
\begin{equation}
\mathcal{F}(x(t) * h(t)) = \mathcal{F}(x(t)) \cdot \mathcal{F}(h(t)) = X(\omega) \cdot H(\omega)
\end{equation}
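This identity can be checked numerically (a sketch assuming NumPy; the signals are zero-padded to the full convolution length so that the FFT product matches the linear convolution):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
h = np.array([1.0, 1.0])
N = len(x) + len(h) - 1   # length of the full linear convolution

direct = np.convolve(x, h)                                        # time domain
via_fft = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real  # frequency domain
print(np.allclose(direct, via_fft))                               # True
```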
Coming back to signal reconstruction,
\begin{equation}
X_s(t) = X(t) \cdot comb(t;T)
\end{equation}
Given our sampled signal, it is also possible to perform this operation in the frequency domain. This is the converse of what was discussed in [Convolution](https://hackmd.io/-VDnQUjQQXy77S39C8rkZw?both#Convolution): the product on the LHS becomes a convolution operator on the RHS. So, the expression in the frequency domain becomes,
\begin{equation}
X_s(\omega) = X(\omega) * \mathcal{F}(comb(t;T))
\end{equation}
So, when a bandlimited spectrum is convolved with the comb function in the frequency domain, a copy of the same bandlimited spectrum appears at each comb spike. Suppose the band limit is $\omega_0$ and the sampling frequency, expressed in the same units, is $f_s$. Intuitively, for each copy to appear undistorted, the copies must not overlap, which requires $f_s - \omega_0 > \omega_0$, i.e. $f_s > 2\omega_0$. This is known as the *sampling theorem*, or the *Nyquist-Shannon theorem*. If it is not satisfied, that is, if the sampling frequency is lower than $2\omega_0$, the copies reconstructed at adjacent comb spikes intersect and interfere with each other over part of the band, causing distortion. This is known as *aliasing*.
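A minimal sketch of aliasing, assuming NumPy: a $350\,Hz$ cosine sampled at $400\,Hz$ (below its Nyquist rate of $700\,Hz$) produces exactly the same samples as a $50\,Hz$ cosine.

```python
import numpy as np

f_s = 400                 # Hz, sampling frequency
n = np.arange(32)         # sample indices
t = n / f_s               # sample instants

high = np.cos(2 * np.pi * 350 * t)   # 350 Hz: violates f_s > 2 * 350
low  = np.cos(2 * np.pi * 50 * t)    # 50 Hz alias, since 350 = 400 - 50

# The two signals are indistinguishable from their samples alone.
print(np.allclose(high, low))        # True
```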
