###### tags: `鍾`
# Paper 2 and 8
## Automatic Cataract Detection of Optical Image using Histogram of Gradient
**Reddy Pavan, Dr. A. Deepak, 2018, Automatic Cataract Detection of Optical Image using Histogram of Gradient, International Journal of Engineering Research & Technology (IJERT), Vol. 07, Issue 06 (June 2018)**
---
- classifier constructed by backpropagation
- trilateral filter
- decrease the noise in the image
- feature extraction
- GLCM (gray-level co-occurrence matrix)
- texture feature
- image-interval database
- linear Hough transform
- edge detect
- canny edge detection
- eyelids boundary detect
- support vector machine
    - finds the separating hyperplane between two classes that maximizes the *margin*
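The feature named in the paper's title, the histogram of gradients, can be sketched in NumPy as per-cell orientation histograms weighted by gradient magnitude. This is a minimal illustration (no block normalization or gamma correction), not the authors' implementation, and the random input image is only there to show the shapes:

```python
import numpy as np

def hog_features(img, cell=8, bins=9):
    """Minimal histogram-of-gradients descriptor (no block normalization)."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]         # horizontal gradient
    gy[1:-1, :] = img[2:, :] - img[:-2, :]         # vertical gradient
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):         # one histogram per cell
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i+cell, j:j+cell].ravel()
            a = ang[i:i+cell, j:j+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

img = np.random.default_rng(0).random((32, 32))
f = hog_features(img)
print(f.shape)  # 16 cells * 9 bins -> (144,)
```

In the paper's setting, a descriptor like this would feed the SVM or backpropagation-trained classifier listed above.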
---
- IMAGE ACQUISITION
    - Image acquisition: capturing a photograph of the iris is the initial stage of an iris-based recognition system.
- PRE-PROCESSING
    - The aim of pre-processing is to improve the image data by suppressing unwanted distortions or enhancing the image features needed for further processing.
- SEGMENTATION
    - In computer vision, segmentation is the process of partitioning a digital image into multiple segments, i.e., sets of pixels, also referred to as superpixels.
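As a concrete, simplified stand-in for the segmentation stage, Otsu's method partitions a grayscale image into two pixel classes by maximizing the between-class variance over all candidate thresholds. The paper does not name this specific method; the sketch below is an assumption for illustration:

```python
import numpy as np

def otsu_threshold(gray):
    """Pick the gray level that maximizes between-class variance."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                 # probability of class 0 up to each level
    mu = np.cumsum(p * np.arange(256))   # cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    sigma_b = np.nan_to_num(sigma_b)     # empty classes contribute zero
    return int(np.argmax(sigma_b))

rng = np.random.default_rng(1)
# synthetic image: dark background plus one bright square region
gray = np.clip(rng.normal(60, 10, (64, 64)), 0, 255)
gray[20:40, 20:40] = np.clip(rng.normal(180, 10, (20, 20)), 0, 255)
t = otsu_threshold(gray.astype(np.uint8))
mask = gray > t                          # foreground segment
print(t, mask.sum())
```

The resulting binary mask is one example of splitting an image into pixel sets, the simplest form of the segmentation described above.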
---
## A Hybrid Global-Local Representation CNN Model for Automatic Cataract Grading
**Xu X, Zhang L, Li J, Guan Y, Zhang L. A Hybrid Global-Local Representation CNN Model for Automatic Cataract Grading. IEEE J Biomed Health Inform. 2020;24(2):556-567. doi:10.1109/JBHI.2019.2914690**
---
We first preprocessed the images and selected a deep CNN technique, AlexNet, to learn a global feature representation of the fundus image. Then we applied a deconvolutional network (DN) at each CNN layer.
---
- Preprocessing
    - Uniform Image Resizing: As they were obtained from different fundus cameras, the experimental fundus images have different sizes. We resized all fundus images uniformly to 256×256 pixels, which makes them suitable for the CNN.
    - Eliminating Uneven Illumination: Due to locally uneven illumination and reflections from the eye, the quality of fundus images is degraded, which may hinder precise detection and grading of cataracts. Therefore, we converted the original fundus images from RGB color space to their green-component images to eliminate the uneven illumination.
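These two preprocessing steps can be sketched together in NumPy. The nearest-neighbour resize and the synthetic input size are illustrative assumptions; the source only fixes the 256×256 target and the green-component extraction:

```python
import numpy as np

def preprocess(fundus_rgb, size=256):
    """Nearest-neighbour resize to size x size, then keep the green channel."""
    h, w, _ = fundus_rgb.shape
    rows = np.arange(size) * h // size   # source row for each output row
    cols = np.arange(size) * w // size   # source column for each output column
    resized = fundus_rgb[rows][:, cols]
    return resized[:, :, 1]              # green component only

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (1024, 1536, 3), dtype=np.uint8)  # synthetic fundus image
g = preprocess(img)
print(g.shape)  # (256, 256)
```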
- Feature Extraction by Deep CNN
    - A Deep Convolutional Neural Network (DCNN) is a kind of artificial neural network used to learn features automatically from input images in the field of image recognition.
- we selected an extension of a classical deep CNN technique, i.e., AlexNet, to extract features from the retinal fundus images in our experiment.
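To illustrate what a single CNN layer computes, here is a valid 2-D cross-correlation (the "convolution" of deep learning) followed by ReLU. AlexNet learns its filters from data; the hand-picked Sobel-like edge kernel below is only an assumption for illustration:

```python
import numpy as np

def conv2d_relu(img, kernel):
    """Valid 2-D cross-correlation followed by ReLU, as in one CNN layer."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0)            # ReLU non-linearity

# Sobel-like vertical-edge kernel as one hand-picked "filter"
k = np.array([[-1., 0., 1.],
              [-2., 0., 2.],
              [-1., 0., 1.]])
img = np.zeros((8, 8))
img[:, 4:] = 1.0                          # step edge at column 4
fmap = conv2d_relu(img, k)
print(fmap.shape)  # (6, 6)
```

The feature map responds only where the window straddles the edge, which is the sense in which early CNN layers capture edge and texture structure.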
- Quantify the Prerequisite Features
- To quantify the prerequisite features for a fundus image, we attempted to:
- interpret the function computed by individual neurons/filters,
- examine the overall function computed in convolution layers composed of multiple neurons.
- Integrate the Global and Local Feature Representation
- with complete fundus image dataset (D1) and variant dataset (D2), the CNN extracts feature sets of different levels:
- global feature sets
- local feature sets.
- we found that CNN tends to capture global feature sets which describe the overall structure of the fundus image, such as shape, texture, edge, and position;
    - To combine these different levels of feature representation, we designed a new hybrid global-local feature representation model, in which the DCNN is the base classifier of the ensemble learning scheme.
- Constructing Variant Dataset (D2):
    - To build this global-local feature representation model, we first constructed a variant dataset (D2). In this dataset, we cut each fundus image in the D1 dataset into 8 local 950×950-pixel patches to highlight local regions of the fundus image.
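The D2 construction can be sketched as fixed-grid cropping. The 2×4 patch layout and the synthetic image size are assumptions; the source only states that 8 local 950×950-pixel patches are cut from each fundus image:

```python
import numpy as np

def crop_patches(img, patch=950, n_rows=2, n_cols=4):
    """Cut 8 (2 x 4) evenly spaced, possibly overlapping local patches."""
    h, w = img.shape[:2]
    ys = np.linspace(0, h - patch, n_rows).astype(int)  # top-left row offsets
    xs = np.linspace(0, w - patch, n_cols).astype(int)  # top-left column offsets
    return [img[y:y+patch, x:x+patch] for y in ys for x in xs]

img = np.zeros((1900, 2444, 3), dtype=np.uint8)  # synthetic full fundus image
patches = crop_patches(img)
print(len(patches), patches[0].shape)  # 8 (950, 950, 3)
```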
- Combination of the Global and Local Feature Representation
    - Studies have shown that ensemble learning makes the error significantly smaller than that of any single classifier. In the automatic cataract classification task, we combined the local and global feature-based CNN classifiers by a majority-voting approach to form an integrated model.
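The majority-voting combination can be sketched directly in NumPy; the three classifier outputs below are hypothetical grade predictions, not results from the paper:

```python
import numpy as np

def majority_vote(pred_lists):
    """Fuse class predictions from several classifiers by majority vote."""
    preds = np.asarray(pred_lists)          # shape: (n_classifiers, n_samples)
    n_classes = preds.max() + 1
    votes = np.apply_along_axis(            # per-sample vote counts per class
        lambda col: np.bincount(col, minlength=n_classes), 0, preds)
    return votes.argmax(axis=0)             # ties resolve to the lower label

# hypothetical grade predictions for 4 fundus images
global_cnn = [0, 2, 1, 3]   # CNN trained on full images (D1)
local_cnn1 = [0, 2, 2, 3]   # CNNs trained on local patches (D2)
local_cnn2 = [1, 2, 1, 2]
fused = majority_vote([global_cnn, local_cnn1, local_cnn2])
print(fused)  # [0 2 1 3]
```

Each sample's final grade is simply the class most of the base classifiers agree on, which is the integration step the notes describe.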