Welcome to
## Digital Image Processing
| <span style="color:yellow"> M Pardha Saradhi </span> | <span style="color:yellow"> Department of ECE </span> | <span style="color:yellow"> VVIT </span> |
---
## Lecture 1.1: Introduction
---
## Contents
* Definition of an image
* Motivation & Defining the field of Digital Image Processing
* Mathematical representation of a Digital Image
* Image formation model
* References & Resources
---
## Definition of an image
> An image may be defined as a two-dimensional function, <span style="color:yellow"> $f(x, y)$ </span> , where <span style="color:yellow"> $x, y$ </span> are spatial (plane) coordinates, and the amplitude of <span style="color:yellow"> $f$ </span> at any pair of coordinates <span style="color:yellow">$(x, y)$ </span> is called the intensity or gray level of the image at that point.
---

<!-- .element: class="fragment" -->
<font size="6">
* When <span style="color:yellow"> $x, y$ </span> , and the amplitude values of <span style="color:yellow"> $f$ </span> are all finite, discrete quantities, we call the image a *digital image*.
<!-- .element: class="fragment" -->
* These elements are referred to as ***picture elements***, ***image elements***, ***pels***, and ***pixels***.
<!-- .element: class="fragment" -->
* A digital image is represented by a two- or three-dimensional array of numbers, as sketched below.
<!-- .element: class="fragment" -->
</font>
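To make the array view concrete, here is a minimal sketch (not from the original slides) that loads an image with Pillow and inspects it as a NumPy array; the filename `example.png` and the choice of libraries are illustrative assumptions.

```python
# Minimal sketch: a digital image is just an array of finite, discrete numbers.
# Assumes NumPy and Pillow are installed; "example.png" is a placeholder name.
import numpy as np
from PIL import Image

img = np.asarray(Image.open("example.png"))

print(img.shape)   # 2-D (rows, cols) for grayscale, 3-D (rows, cols, 3) for RGB
print(img.dtype)   # typically uint8: 256 discrete amplitude levels per channel
print(img[0, 0])   # the pixel value f(0, 0) at the top-left corner
```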
---
<span style="color:yellow"> Defining the field of Digital Image Processing & Motivation </span>
<!-- .element: class="fragment" -->
> The field of digital image processing refers to processing digital images by means of a digital computer.
<!-- .element: class="fragment" -->
* This field requires the use of several hardware and software elements.
<!-- .element: class="fragment" -->
* Note that a digital image is composed of a finite number of elements, each of which has a particular location and value.
<!-- .element: class="fragment" -->
----
Why do we need Image Processing?
It is motivated by three major application areas:
1. Improvement of pictorial information for human perception
* Noise filtering
* Contrast Enhancement
* Deblurring
2. Image Processing for autonomous machine applications
3. Efficient storage and transmission
----
Noise filtering

----
Contrast Enhancement

----
Enhancement of color images

----
Deblurring

----
Medical Imaging

----
Medical Imaging

----
Medical Imaging

----
Satellite Image Processing

----
Aerial / Satellite Image Processing

----
Tracking Borneo Fires

----
Weather Forecasting

----
Atmospheric Study

----
Astronomy

----
Astronomy

---
Automated bottle inspection

----
Boundary Information

----
Boundary Information in Automated Inspection

----
Boundary Information in Automated Inspection

----
Automated Inspection

----
Video sequence detection
Motion detection

---
<span style="color:yellow"> Simple Image Formation Model </span>
* Electromagnetic spectrum

* Recording the various types of interaction of radiation with matter

---
<span style="color:yellow"> Simple Image Formation Model </span>
* We have already seen that an image can be represented by a two-dimensional function of the form $f(x, y)$.
* The value or amplitude of $f$ at spatial coordinates $(x, y)$ is a positive scalar quantity.
* The physical meaning of these variables is determined by the source of the image.
* Most of the images of interest in this course are monochromatic images.
----
<span style="color:yellow"> Simple Image Formation Model </span>
* When an image is generated from a physical process, its values are proportional to the energy radiated by a physical source (e.g., electromagnetic waves). As a consequence, $f(x, y)$ must be nonzero and finite; that is,
$$ 0<f(x,y)<\infty$$
----
<span style="color:yellow"> Simple Image Formation Model </span>
$$ 0<f(x,y)<\infty$$
* The function $f(x, y)$ may be characterized by two components:
1. The amount of source illumination incident on the scene being viewed, and
2. the amount of illumination reflected by the objects in the scene.
----
<span style="color:yellow"> Simple Image Formation Model </span>
$$ 0<f(x,y)<\infty$$
* Appropriately, these are called the illumination and reflectance components and are denoted by $i(x, y)$ and $r(x, y)$, respectively.
* The two functions combine as a product to form $f(x, y)$:
$$f(x,y)=i(x,y)r(x,y)$$
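A minimal sketch of this product model, assuming NumPy; the synthetic illumination and reflectance patterns below are invented purely for illustration and are not from the lecture.

```python
# Sketch of the image formation model f(x, y) = i(x, y) * r(x, y).
# Both component functions below are made-up examples, not measured data.
import numpy as np

rows, cols = 256, 256
y, x = np.mgrid[0:rows, 0:cols]

# Illumination: a smooth spotlight-like pattern, 0 < i < infinity.
i = 1000.0 * np.exp(-((x - cols / 2) ** 2 + (y - rows / 2) ** 2) / (2 * 80.0 ** 2))

# Reflectance: a sinusoidal surface pattern kept strictly inside (0, 1).
r = 0.5 + 0.4 * np.sin(2 * np.pi * x / 64)

f = i * r                  # element-wise product forms the image
print(f.min(), f.max())    # all values are positive and finite
```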
----
$$f(x,y)=i(x,y)\,r(x,y)$$
$$0<i(x,y)<\infty$$
$$0<r(x,y)<1$$
* Reflectance is bounded by:
* 0 (total absorption), and
* 1 (total reflectance).
* The nature of $i(x, y)$ is determined by the illumination source, and $r(x, y)$ is determined by the characteristics of the imaged objects.
----
* Note that these expressions are also applicable to images formed via transmission of the illumination through a medium, such as a chest X-ray.
* In this case, we would deal with a *transmissivity* instead of a *reflectivity* function, but the limits would be the same.
---
More on image formation
* Some typical values of $i(x, y)$:
* On a clear day, the sun may produce in excess of $90{,}000~\text{lm/m}^2$ of illumination on the surface of the Earth.
* This figure decreases to less than $10{,}000~\text{lm/m}^2$ on a cloudy day.
* On a clear evening, a full moon yields about $0.1~\text{lm/m}^2$ of illumination.
* The typical illumination level in a commercial office is about $1000~\text{lm/m}^2$.
----
* Similarly, the following are some typical values of $r(x, y)$ (a worked example follows the list):
* 0.01 for black velvet,
* 0.65 for stainless steel,
* 0.80 for flat-white wall paint,
* 0.90 for silver-plated metal, and
* 0.93 for snow.
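As a quick worked example (not from the original slides), flat-white wall paint ($r \approx 0.80$) under typical office lighting ($i \approx 1000~\text{lm/m}^2$) gives
$$f(x,y)=i(x,y)\,r(x,y)\approx 1000 \times 0.80 = 800~\text{lm/m}^2.$$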
---
* The gray level $l$ of a monochromatic image at pixel $(x_0,y_0)$ is
* $l=f(x_0,y_0)=i(x_0,y_0)r(x_0,y_0)$
* $L_{min}\leq l \leq L_{max}$
* In practice,
* $L_{min}=i_{min}r_{min}$
* $L_{max}=i_{max}r_{max}$
* So the gray scale is the interval $[L_{min}, L_{max}]$; in practice it is shifted and scaled to a discrete range such as $[0, 255]$, as sketched below.
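A minimal sketch, assuming NumPy, of how such an interval is commonly rescaled to the discrete 8-bit range $[0, 255]$; the function name `to_uint8` is just an illustrative choice.

```python
# Sketch: linearly map gray levels l in [L_min, L_max] to discrete values 0..255.
# Assumes the input array has at least two distinct values (L_max > L_min).
import numpy as np

def to_uint8(f: np.ndarray) -> np.ndarray:
    l_min, l_max = f.min(), f.max()
    scaled = (f - l_min) / (l_max - l_min)          # now in [0.0, 1.0]
    return np.round(scaled * 255).astype(np.uint8)  # 256 discrete gray levels

# Example: discretize the image produced by the formation-model sketch above.
# gray = to_uint8(f)
```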
---
<span style="color:yellow"> References & Resources </span>
#### Textbooks
<font size="5">
1. R. C. Gonzalez and R. E. Woods, "Digital Image Processing", 2nd Edition, Pearson.
2. S. Jayaraman, S. Esakkirajan and T. Veerakumar, "Digital Image Processing", Tata McGraw Hill, 2009.

**Reference books:**

1. Joseph Howse and Joe Minichino, "Learning OpenCV 4 Computer Vision with Python 3: Get to Grips with Tools, Techniques, and Algorithms for Computer Vision and Machine Learning", Packt Publishing, 2020.
2. Anil K. Jain, "Fundamentals of Digital Image Processing", Prentice Hall of India, 9th Edition, Indian Reprint, 2002.
3. J. T. Tou and R. C. Gonzalez, "Pattern Recognition Principles", Addison-Wesley, 1974.
4. B. Chanda and D. Dutta Majumder, "Digital Image Processing and Analysis", PHI, 2009.
</font>
----
<span style="color:yellow"> References & Resources </span>
#### Resources
<font size="5">
1. NPTEL [Lecture Series](https://www.youtube.com/playlist?list=PLuv3GM6-gsE08DuaC6pFUvFaDZ7EnWGX8) on Digital Image Processing by Prof. P. K. Biswas, Department of Electronics & Electrical Communication Engineering, IIT Kharagpur.
2. OpenCV: https://opencv.org/
3. scikit-image: https://scikit-image.org/
4. SciPy: https://scipy.org/
5. PIL:
a. https://pillow.readthedocs.io/en/stable/
b. https://pillow.readthedocs.io/en/stable/handbook/tutorial.html
c. https://realpython.com/image-processing-with-the-python-pillow-library/#image-segmentation-and-superimposition-an-example
6. Mahotas: https://mahotas.readthedocs.io/en/latest/
7. SimpleITK:
a. http://insightsoftwareconsortium.github.io/SimpleITK-Notebooks/
b. https://notebooks.gesis.org/binder/jupyter/user/insightsoftware-leitk-notebooks-qj4qqkdc/lab/tree/Python
8. pgmagick (GraphicsMagick bindings for Python): http://www.graphicsmagick.org/
</font>
---
{"metaMigratedAt":"2023-06-17T04:05:38.580Z","metaMigratedFrom":"Content","title":"Untitled","breaks":true,"contributors":"[{\"id\":\"0824f1e7-ce6b-4a1f-86eb-31e44983f49d\",\"add\":955,\"del\":957}]"}