# ML in Medical Imaging
###### tags: `ML`
* Deep Learning in Medical Image Analysis: https://arxiv.org/abs/1702.05747
## Terms
#### CNN
* convolutional neural networks (CNNs)
* [How does CNN work?](https://brohrer.mcknote.com/zh-Hant/how_machine_learning_works/how_convolutional_neural_networks_work.html)
* widely used in (medical) image analysis
###### Multi-stream architectures
* the default CNN architecture can easily accommodate multiple sources of information or representations of the input
* challenge of applying deep learning techniques to the medical domain often lies in `adapting existing architectures` to, for instance, different input formats such as three-dimensional data
> `Solution`
> * multiple angled patches from the 3D-space
> * dividing the Volume of Interest (VOI) into slices which are fed as different streams to a network
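The multi-stream idea can be sketched as extracting three orthogonal 2D patches around a voxel (a common "2.5D" approach); the function name, array shapes, and sizes here are illustrative:

```python
import numpy as np

def orthogonal_patches(volume, center, size=32):
    """Extract axial, coronal and sagittal patches around a voxel.

    Each 2D patch can then be fed to a separate CNN stream.
    `volume` is a 3D numpy array; `center` is (z, y, x).
    """
    z, y, x = center
    h = size // 2
    axial    = volume[z, y - h:y + h, x - h:x + h]  # fixed z slice
    coronal  = volume[z - h:z + h, y, x - h:x + h]  # fixed y slice
    sagittal = volume[z - h:z + h, y - h:y + h, x]  # fixed x slice
    return axial, coronal, sagittal

vol = np.random.rand(64, 64, 64)
patches = orthogonal_patches(vol, center=(32, 32, 32), size=16)
print([p.shape for p in patches])  # three 16x16 patches
```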
###### Segmentation Architectures
* CNNs can simply be used to classify each pixel in the image individually, by presenting it with patches extracted around the particular pixel.
* By rewriting the fully connected layers as convolutions, the CNN can take input images larger than it was trained on
* [U-Net](https://hackmd.io/sKNOWYBxTpSarv5gehBNvQ): a convolutional neural network that was developed for biomedical image segmentation
#### RNN
* recurrent neural networks (RNNs)
* [How does RNN work?](https://brohrer.mcknote.com/zh-Hant/how_machine_learning_works/how_rnns_lstm_work.html)
* developed for discrete sequence analysis.
* both the input and output can be of `varying length`, making them suitable for tasks such as machine translation where a sentence of the source and target language are the input and output.
#### Neural network
* Neural networks are a type of learning algorithm
which forms the basis of most deep learning methods
$a = σ(w^Tx + b)$
> `x` is the input, `w` the weight vector, `b` the bias, and σ an element-wise nonlinearity (e.g. a sigmoid)
* multi-layer perceptrons (MLPs), the most well-known of the traditional neural networks, have several layers of these transformations:
$f(x; Θ) = σ(W^Tσ(W^T \dots σ(W^Tx + b) \dots + b) + b)$
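A minimal NumPy sketch of this stacked transformation; the layer sizes and names are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mlp_forward(x, layers):
    """Apply a = sigma(W^T x + b) repeatedly, one (W, b) pair per layer."""
    a = x
    for W, b in layers:
        a = sigmoid(W.T @ a + b)
    return a

rng = np.random.default_rng(0)
# two layers: 4 inputs -> 3 hidden units -> 2 outputs
layers = [(rng.standard_normal((4, 3)), rng.standard_normal(3)),
          (rng.standard_normal((3, 2)), rng.standard_normal(2))]
out = mlp_forward(rng.standard_normal(4), layers)
print(out.shape)  # (2,)
```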
#### Auto-encoders (AEs) and Stacked Auto-encoders (SAEs)
* simple networks that are trained to reconstruct the input `x` on the output layer `x'` through one hidden layer `h`
* SAEs (or deep AEs) are formed by placing auto-encoder layers on top of each other.
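A toy one-hidden-layer auto-encoder illustrating the `x` → `h` → `x'` mapping; the weights are random and untrained (real training would minimize the reconstruction error ||x − x'||²), and the class name is made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class AutoEncoder:
    """Minimal auto-encoder: input x -> hidden h -> reconstruction x'.

    Stacking several such hidden layers gives an SAE / deep AE.
    """
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_enc = rng.standard_normal((n_in, n_hidden)) * 0.1
        self.b_enc = np.zeros(n_hidden)
        self.W_dec = rng.standard_normal((n_hidden, n_in)) * 0.1
        self.b_dec = np.zeros(n_in)

    def encode(self, x):
        """Map input to the (lower-dimensional) hidden representation h."""
        return sigmoid(x @ self.W_enc + self.b_enc)

    def reconstruct(self, x):
        """Decode h back to an approximation x' of the input."""
        return self.encode(x) @ self.W_dec + self.b_dec

ae = AutoEncoder(n_in=8, n_hidden=3)
x = np.random.rand(8)
print(ae.encode(x).shape, ae.reconstruct(x).shape)  # (3,) (8,)
```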
#### Transfer learning
* the use of pre-trained networks (typically on natural images) to try to work around the (perceived) requirement of large data sets for deep network training.
1. using a pre-trained network as a feature extractor
2. fine-tuning a pre-trained network on medical data
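The two strategies can be sketched with a toy layer list; a real setup would use a framework such as PyTorch or Keras with an ImageNet-pre-trained network, so all names here are illustrative:

```python
# Toy "model": a list of named layers with trainable flags.
pretrained = [
    {"name": "conv1", "trainable": True},
    {"name": "conv2", "trainable": True},
    {"name": "fc",    "trainable": True},
]

def as_feature_extractor(model):
    """Strategy 1: freeze everything; extracted features feed a separate classifier."""
    for layer in model:
        layer["trainable"] = False
    return model

def fine_tune(model, unfrozen=("fc",)):
    """Strategy 2: freeze early layers, retrain the last one(s) on medical data."""
    for layer in model:
        layer["trainable"] = layer["name"] in unfrozen
    return model

print([l["trainable"] for l in fine_tune(pretrained)])  # [False, False, True]
```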
## Deep Learning Uses in Medical Imaging
#### Image/exam classification
* one or multiple images (an exam) as input with a single diagnostic variable as output (e.g., disease present or not)
* CNNs are the current standard techniques.
* `unsupervised learning`: DBNs and SAEs have been applied to classify patients as having Alzheimer's disease from brain Magnetic Resonance Imaging (MRI)
#### Object classification
* focuses on the classification of a small (previously identified) part of the medical image into two or more classes (e.g. nodule classification in chest CT)
* For many of these tasks both local information on lesion appearance and global contextual information on lesion location are required for accurate classification.
* object classification sees less use of pre-trained networks than exam classification, mostly because `contextual` or `three-dimensional information` must be incorporated
>* multi-stream CNN to classify skin lesions, where each stream works on a different resolution of the image
>* a combination of CNNs and RNNs for grading nuclear cataracts in slit-lamp images
> `3D`
> * multi-stream CNN to classify points of interest in chest CT as a nodule or non-nodule
> * exploited 3D nature of MRI by training a 3D CNN to assess survival in patients suffering from high-grade gliomas (i.e. tumours of the central nervous system)
#### Localization
* direct localization of landmarks and regions in the 3D image space
* decomposing 3D convolution into `three one-dimensional convolutions` for carotid artery bifurcation detection in CT data
* a sparse adaptive deep neural network powered by `marginal space learning` was proposed to deal with data complexity in the detection of the aortic valve in 3D transesophageal echocardiograms
* standard CNNs remain the most common choice
* RNNs have shown promise in localization in the temporal domain, and multi-dimensional RNNs could play a role in spatial localization as well.
#### Segmentation
* an important first step in computer-aided detection pipelines
* identifying the set of voxels which make up either the contour or the interior of the object.
* U-Net example
> V-Net, a 3D variant of the U-Net architecture, performs 3D image segmentation using 3D convolutional layers and an objective function based directly on the **Dice coefficient** (a statistic used for comparing the similarity of two samples).
* RNN example:
> a spatial clockwork RNN to segment the perimysium in H&E-stained histopathology images
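The Dice coefficient used in the V-Net objective above can be computed for binary masks as follows (a minimal NumPy sketch):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap.

    `eps` avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # 2*2/(3+3) = 0.667
```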
#### Registration
> Registration aims to `compare or fuse` images of the same object acquired under different conditions, e.g. images from different acquisition devices, taken at different times, or from different viewpoints.
> Concretely, for two images in a data set, registration finds a `spatial transform` that maps one image onto the other so that points corresponding to the same spatial location are brought into one-to-one correspondence, enabling information fusion.
> -- from [wiki](https://zh.wikipedia.org/wiki/%E5%9B%BE%E5%83%8F%E9%85%8D%E5%87%86)
* a common image analysis task in which a `coordinate transform` is calculated from one medical image to another
1. using deep-learning networks to estimate a `similarity measure` for two images to drive an iterative optimization strategy
2. to directly predict `transformation parameters` using deep regression networks.
> e.g. two types of stacked auto-encoders were used to assess the local similarity between CT and MRI images of the head
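For context on strategy 1, normalized cross-correlation is a classical hand-crafted similarity measure that a learned metric would replace; a minimal sketch:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two images of equal shape.

    Each image is standardized (zero mean, unit variance), then the mean
    product is taken; identical images score ~1, inverted images ~-1.
    """
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

img = np.random.rand(32, 32)
print(round(ncc(img, img), 3))  # identical images -> ~1.0
```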
## Application
#### Brain
* classification of Alzheimer's disease and segmentation of brain tissue and anatomical structures
* most methods work in 2D, analyzing the 3D volumes slice-by-slice
> The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze
#### Eye
* diabetic retinopathy detection: CNNs for the analysis of color fundus imaging (CFI)
#### Chest
* the detection of textural patterns indicative of interstitial lung diseases
* Chest radiography (chest X-ray)
* images with text reports to train systems that combine CNNs for image analysis and RNNs for text analysis
#### Breast
1. detection and classification of mass-like lesions
2. detection and classification of micro-calcifications
3. breast cancer risk scoring of images
* large public digital databases are unavailable and consequently older scanned screen-film data sets are still in use
> many papers used small data sets resulting in mixed performance
#### Cardiac
* segmentation, tracking, slice classification, image quality assessment, automated calcium scoring, coronary centerline tracking, and super-resolution
> Most papers used simple 2D CNNs and analyzed the 3D and often 4D data slice by slice
* `CNN + RNN`: introduced a recurrent connection within the U-net architecture to segment the left ventricle slice by slice and learn what information to remember from the previous slices when segmenting the next one
#### Abdomen
* localize and segment organs, mainly the liver, kidneys, bladder, and pancreas
> A CNN was used as a feature extractor and these features were used for classification.
## Caries detection
#### multiple labels
>This is probably not a matter of putting the same image into multiple folders.
It should work much like object detection: one directory holds the images, plus an image list,
and the labels then look like
image1 1 0 1 1 0
image2 1 1 1 1 0
>[name=victor]
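A minimal parser for the multi-label layout sketched above; the file names and the five classes are hypothetical:

```python
import numpy as np

# One row per image: the file name followed by a binary flag per class.
label_lines = [
    "image1 1 0 1 1 0",
    "image2 1 1 1 1 0",
]

def parse_multilabel(lines):
    """Split each line into an image name and its binary label vector."""
    names, labels = [], []
    for line in lines:
        parts = line.split()
        names.append(parts[0])
        labels.append([int(v) for v in parts[1:]])
    return names, np.array(labels)

names, y = parse_multilabel(label_lines)
print(names, y.shape)  # ['image1', 'image2'] (2, 5)
```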
* https://arxiv.org/pdf/2004.05543.pdf
* https://arxiv.org/pdf/2002.02143.pdf
* https://github.com/Arnold0210/TEETH-RECOGNITION-WITH-MACHINE-LEARNING
* https://github.com/atul-g/object_detection_on_cavities