# Survey: Medicine (all)
###### tags:`Survey`
## Labels of open-source datasets:
### 18F-FDG PET Radiomics Risk Stratifiers in Head and Neck Cancer: A MICCAI 2018 CPM Grand Challenge:
##### kaggle
## [Iteratively Refine the Segmentation of Head and Neck Tumor in FDG-PET and CT Images](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8211060/pdf/jitc-2020-002118.pdf)
The automatic segmentation of head and neck (H&N) tumors from FDG-PET and CT images is urgently needed for radiomics. In this paper, we propose a framework to segment H&N tumors automatically by fusing information from PET and CT. In this framework, multiple 3D U-Nets are trained one by one. The predictions and features of upstream models are captured as additional information for the next one to further refine the segmentation. Experiments show that iterative refinement improves performance. We evaluated our framework on the HECKTOR2020 dataset (the head and neck tumor segmentation challenge) and won 5th place, with average DSC, precision and recall of 0.7241, 0.8479 and 0.6701, respectively.
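The reported DSC, precision and recall follow directly from voxel counts on binary masks; a minimal sketch (the function name and the epsilon guard are mine):

```python
import numpy as np

def seg_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice similarity coefficient, precision and recall for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()            # true-positive voxels
    dice = 2.0 * tp / (pred.sum() + gt.sum() + eps)
    precision = tp / (pred.sum() + eps)
    recall = tp / (gt.sum() + eps)
    return dice, precision, recall
```

The challenge averages these per patient over the test set.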

## [Combining CNN and Hybrid Active Contours for Head and Neck Tumor Segmentation in CT and PET Images](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8211060/pdf/jitc-2020-002118.pdf)
Automatic segmentation of head and neck tumors plays an important role in radiomics analysis. In this short paper, we propose an automatic segmentation method for head and neck tumors from PET and CT images based on the combination of convolutional neural networks (CNNs) and hybrid active contours.

## [Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging](https://link.springer.com/article/10.1007/s00259-020-05167-1)
**purpose:**
The tendency is to moderate the injected activity and/or reduce the acquisition time in PET examinations to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of synthesizing regular full-dose (FD) whole-body (WB) PET images from fast/low-dose (LD) acquisitions using deep learning techniques.

## [A cross-scanner and cross-tracer deep learning method for the recovery of standard-dose imaging quality from low-dose PET](https://link.springer.com/content/pdf/10.1007/s00259-021-05644-1.pdf)
###### tags: European Journal of Nuclear Medicine and Molecular Imaging (2022)
### Abstract:
Purpose: a method to recover standard-dose imaging quality from low-dose PET across different imaging instrumentations and radiopharmaceuticals.
### Method:

Deep learning-assisted PET imaging achieves fast-scan/low-dose examination.
## [A personalized deep learning denoising strategy for low-count PET images](https://iopscience.iop.org/article/10.1088/1361-6560/ac783d/pdf)
###### tags: Physics in Medicine & Biology (2022)
### Abstract:
Purpose: a personalized deep learning denoising strategy for low-count PET images.
### Method:

## Question:
* Are the labels on the original images? Or ...
### [Automatic Lung Cancer Segmentation in [18F]FDG PET/CT Using a Two-Stage Deep Learning Approach](https://link.springer.com/content/pdf/10.1007/s13139-022-00745-7.pdf)
###### tags: Nuclear Medicine and Molecular Imaging
###### Accepted: 12 March 2022
#### Abstract:
Purpose
Since accurate lung cancer segmentation is required to determine the functional volume of a tumor in [18F]FDG PET/CT, they propose a two-stage U-Net architecture to enhance the performance of lung cancer segmentation using [18F]FDG PET/CT.




### [Heart and bladder detection and segmentation on FDG PET/CT by deep learning](https://link.springer.com/content/pdf/10.1186/s12880-022-00785-7.pdf)
###### tags: BMC Medical Imaging
###### Published: 30 March 2022
#### Abstract:
Positron emission tomography (PET)/computed tomography (CT) has been extensively used to quantify metabolically active tumors in various oncology indications. However, FDG-PET/CT often encounters false positives in tumor detection due to 18F-fluorodeoxyglucose (FDG) accumulation in the heart and bladder, which often exhibit similar FDG uptake to tumors. Thus, it is necessary to eliminate this source of physiological noise. Major challenges for this task include: (1) large inter-patient variability in the appearance of the heart and bladder; (2) the size and shape of the bladder or heart may appear different on PET and CT; (3) tumors can be very close or connected to the heart or bladder.



### [Deep learning-based automated segmentation of eight brain anatomical regions using head CT images in PET/CT](https://link.springer.com/content/pdf/10.1186/s12880-022-00785-7.pdf)
###### tags: BMC Medical Imaging volume
###### Accepted: 26 May 2022
#### Abstract:
We aim to propose a deep learning-based method of automated segmentation of eight brain anatomical regions in head computed tomography (CT) images obtained during positron emission tomography/computed tomography (PET/CT) scans. The brain regions include basal ganglia, cerebellum, hemisphere, and hippocampus, all split into left and right.


### [Low-dose PET image noise reduction using deep learning: application to cardiac viability FDG imaging in patients with ischemic heart disease](https://link.springer.com/content/pdf/10.1186/s12880-022-00785-7.pdf)
###### tags: Physics in Medicine & Biology
###### Published 25 February 2021
#### Abstract:
Cardiac [18F]FDG-PET is widely used for viability testing in patients with chronic ischemic heart disease. Guidelines recommend injection of 200–350 MBq [18F]FDG; however, a reduction of radiation exposure has become increasingly important, and might come at the cost of reduced diagnostic accuracy due to the increased noise in the images. We aimed to explore the use of a common deep learning (DL) network for noise reduction in low-dose PET images, and to validate its accuracy using the clinical quantitative metrics used to determine cardiac viability in patients with ischemic heart disease.


### [Fully Automated Delineation of Gross Tumor Volume for Head and Neck Cancer on PET-CT Using Deep Learning: A Dual-Center Study](https://downloads.hindawi.com/journals/cmmi/2018/8923028.pdf)
###### tags: Contrast Media & Molecular Imaging
###### Accepted: 24 October 2018
#### Abstract:
Proposed an automated deep learning (DL) method for head and neck cancer (HNC) gross tumor volume (GTV) contouring on positron emission tomography-computed tomography (PET-CT) images.



### [Deep learning-based auto-delineation of gross tumour volumes and involved nodes in PET/CT images of head and neck cancer patients](https://link.springer.com/content/pdf/10.1007/s00259-020-05125-x.pdf)
[2021]
**purpose:**
Identification and delineation of the gross tumour and malignant nodal volume (GTV) in medical images are vital in radiotherapy. We assessed the applicability of convolutional neural networks (CNNs) for fully automatic delineation of the GTV from FDG-PET/CT images of patients with head and neck cancer (HNC). CNN models were compared to manual GTV delineations made by experienced specialists. New structure-based performance metrics were introduced to enable in-depth assessment of auto-delineation of multiple malignant structures in individual patients.
**model**: U-Net (U-Net: Convolutional Networks for Biomedical Image Segmentation)

### [A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET/CT imaging](https://link.springer.com/content/pdf/10.1186/s12880-022-00785-7.pdf)
###### tags: Biomedical Physics & Engineering Express
###### Accepted: 18 February 2022
#### Abstract:
Over the past few years, positron emission tomography/computed tomography (PET/CT) imaging for computer-aided diagnosis has received increasing attention. Supervised deep learning architectures are usually employed for the detection of abnormalities, with anatomical localization, especially in the case of CT scans. However, the main limitations of the supervised learning paradigm include (i) large amounts of data required for model training, and (ii) the assumption of fixed network weights upon training completion, implying that the performance of the model cannot be further improved after training. In order to overcome these limitations, we apply a few-shot learning (FSL) scheme. Contrary to traditional deep learning practices, in FSL the model is provided with less data during training. The model then utilizes end-user feedback after training to constantly improve its performance. We integrate FSL in a U-Net architecture for lung cancer lesion segmentation on PET/CT scans, allowing for dynamic model weight fine-tuning and resulting in an online supervised learning scheme. Constant online readjustments of the model weights according to the users' feedback increase the detection and classification accuracy, especially in cases where low detection performance is encountered. Our proposed method is validated on the Lung-PET-CT-DX TCIA database. PET/CT scans from 87 patients were included in the dataset and were acquired 60 minutes after intravenous 18F-FDG injection. Experimental results indicate the superiority of our approach compared to other state-of-the-art methods.
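The online feedback loop is the interesting part of this paper. A toy sketch of the idea, with weights nudged by user-corrected labels after deployment, using a single logistic unit in place of the paper's U-Net (class name, learning rate, and feature setup are all mine):

```python
import numpy as np

class OnlineHead:
    """Stand-in for the online supervised learning scheme: the model keeps
    updating its weights from end-user feedback after initial training."""

    def __init__(self, n_features: int, lr: float = 0.1):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict_proba(self, x: np.ndarray) -> float:
        return 1.0 / (1.0 + np.exp(-(x @ self.w + self.b)))

    def feedback(self, x: np.ndarray, label: int) -> None:
        """One SGD step on a user-corrected example (the 'feedback' loop)."""
        err = self.predict_proba(x) - label
        self.w -= self.lr * err * x
        self.b -= self.lr * err
```

In the paper this mechanism is applied to the U-Net's weights on whole PET/CT scans rather than to a linear head.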


## [AI-based detection of lung lesions in 18F-FDG PET-CT from lung cancer patients](https://ejnmmiphys.springeropen.com/track/pdf/10.1186/s40658-021-00376-5.pdf)
###### tags: EJNMMI Physics (2021)
### Abstract:
Proposes a model to automatically detect abnormal lung lesions and compute total lesion glycolysis (TLG) on FDG PET-CT.
### Method:
In all PET-CT studies, a nuclear medicine physician manually segmented abnormal lung lesions with increased FDG uptake.
* ### Model: two CNNs (Detection CNN & Organ CNN)
* #### Detection CNN
* trained to detect lung lesions


* #### Organ CNN
* trained to segment organs

* ### Dataset:
* Input & Output:
* training set: 66 images, with 74 lesions in total
* validation set: 23 images, with 35 lesions in total
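TLG is conventionally SUVmean × MTV, which reduces to summing SUV over the lesion mask and multiplying by the voxel volume; a small sketch (the function name is mine, and the paper's exact voxel bookkeeping may differ):

```python
import numpy as np

def total_lesion_glycolysis(suv: np.ndarray, mask: np.ndarray,
                            voxel_volume_ml: float) -> float:
    """TLG = SUVmean * MTV, where MTV = lesion voxel count * voxel volume (mL)."""
    lesion_suv = suv[mask.astype(bool)]
    if lesion_suv.size == 0:
        return 0.0
    mtv_ml = lesion_suv.size * voxel_volume_ml     # metabolic tumor volume
    return float(lesion_suv.mean() * mtv_ml)
```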
### Result:

>Two patients (left and right), each with one lung lesion missed by the AI model (black arrows). Both lesions were smaller than 1 mL and were therefore removed in post-processing. The larger lesion in the left image (white arrow) was correctly detected. Segmentations are not shown.
>

> One patient had a large lung tumor occupying most of the right lower lung (Fig. 4), where the AI model clearly struggled to approach the ground truth. The lesion was the largest in the test set and had highly irregular FDG avidity due to complex necrotic regions. Detecting such lesions with the AI model is clearly not a problem, but they may cause difficulties for accurate volume and SUVmean measurements.
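The post-processing mentioned above (lesions under 1 mL are discarded) amounts to connected-component filtering; this is my own 6-connectivity implementation, not the paper's code:

```python
import numpy as np
from collections import deque

def remove_small_lesions(mask: np.ndarray, voxel_volume_ml: float,
                         min_ml: float = 1.0) -> np.ndarray:
    """Drop 6-connected components smaller than `min_ml` from a 3D mask."""
    mask = mask.astype(bool)
    out = np.zeros_like(mask)
    seen = np.zeros_like(mask)
    offsets = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
               (0, -1, 0), (0, 0, 1), (0, 0, -1)]
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        comp, queue = [start], deque([start])   # BFS over one component
        seen[start] = True
        while queue:
            z, y, x = queue.popleft()
            for dz, dy, dx in offsets:
                n = (z + dz, y + dy, x + dx)
                if all(0 <= c < s for c, s in zip(n, mask.shape)) \
                        and mask[n] and not seen[n]:
                    seen[n] = True
                    comp.append(n)
                    queue.append(n)
        if len(comp) * voxel_volume_ml >= min_ml:  # keep only large lesions
            for v in comp:
                out[v] = True
    return out
```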
## [A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET/CT imaging](https://iopscience.iop.org/article/10.1088/2057-1976/ac53bd/pdf?casa_token=T90q_k6kL6kAAAAA:hLINU7CcbhZWplQSzC4OT_MgVmkJuRoMFM4A-jg_FVL6DBcnRgyt5OsjW0kgDy9JbJPjFYJENMo)
###### tags: Biomedical Physics & Engineering Express
###### Accepted: 18 February 2022
### Abstract:
Supervised deep learning architectures are usually employed for the detection of abnormalities, with anatomical localization, especially in the case of CT scans. However, the main limitations of the supervised learning paradigm include:
* large amounts of data required for model training, and
* the assumption of fixed network weights upon training completion, implying that the performance of the model cannot be further improved after training.



## [Automatic Lung Cancer Segmentation in [18F]FDG PET/CT Using a Two-Stage Deep Learning Approach](https://link.springer.com/content/pdf/10.1007/s13139-022-00745-7.pdf)
###### tags: Nuclear Medicine and Molecular Imaging
###### Accepted: 12 March 2022
### Abstract:
Since accurate lung cancer segmentation is required to determine the functional volume of a tumor on [18F]FDG PET/CT, they propose a two-stage U-Net architecture to improve the performance of lung cancer segmentation using [18F]FDG PET/CT.
### Method:
In Stage 1, a global U-Net (based on 3D U-Net) receives the 3D PET/CT volume as input and extracts a preliminary tumor region, producing a 3D binary volume as output.

In Stage 2, a regional U-Net (based on a DenseNet-style U-Net [37]) receives eight consecutive PET/CT slices around the slice selected by the global U-Net in Stage 1 and produces a 2D binary image as output.
### Dataset:
Of 887 PET/CT and VOI datasets, 730 were used to train the proposed model, 81 served as the validation set, and the remaining 76 were used to evaluate the model.
### Result:
The proposed two-stage U-Net architecture outperformed the conventional one-stage 3D U-Net in primary lung cancer segmentation. The two-stage model successfully predicted the detailed margin of the tumors, which was determined by manually drawing spherical VOIs and applying adaptive thresholding. Quantitative analysis using the Dice similarity coefficient confirmed the advantage of the two-stage U-Net.
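The hand-off between the two stages can be sketched at the data level. How the slice is selected and centred is not fully specified in these notes, so the argmax-area choice below is my assumption:

```python
import numpy as np

def stage2_input(volume: np.ndarray, stage1_mask: np.ndarray,
                 n_slices: int = 8) -> np.ndarray:
    """Pick the axial slice where the Stage-1 mask has the largest tumor
    area, then return `n_slices` consecutive slices centred on it."""
    areas = stage1_mask.reshape(stage1_mask.shape[0], -1).sum(axis=1)
    centre = int(np.argmax(areas))
    # clamp the window so it stays inside the volume
    start = min(max(centre - n_slices // 2, 0), volume.shape[0] - n_slices)
    return volume[start:start + n_slices]
```

The regional U-Net then segments a 2D binary image from this 8-slice stack.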
## [Differentiation Between Malignant and Benign Pulmonary Nodules by Using Automated Three-Dimensional High-Resolution Representation Learning With Fluorodeoxyglucose Positron Emission Tomography-Computed Tomography.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8971840/pdf/fmed-09-773041.pdf)
###### tags: Frontiers in Medicine
###### 01 Jun 2022
### **Abstract**:
proposed a novel deep learning model that aided in the ***automatic*** differentiation between ***malignant and benign pulmonary nodules*** on FDG PET-CT.
### **Method**:
they designed a novel deep learning three-dimensional (3D) high-resolution representation learning (HRRL) model for the automated classification of pulmonary nodules based on FDG PET-CT images without manual annotation by experts.
#### Preprocessing for Automated Models:

* Values below -160 are rendered fully black and values above 240 fully white (Fig. 2A) -> this helps the program automatically determine which transaxial slice is the upper edge of the lung parenchyma.
* contour function of the OpenCV package (Fig. 2B)
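The mediastinal-window step (WL 40 / WW 400, so black below -160 HU and white above 240 HU) is a simple clip-and-rescale; a sketch (the paper then runs OpenCV contour finding, e.g. `cv2.findContours`, on the result):

```python
import numpy as np

def apply_window(hu: np.ndarray, level: float = 40, width: float = 400) -> np.ndarray:
    """Mediastinal window: HU below level - width/2 (= -160) map to black (0),
    above level + width/2 (= 240) map to white (255)."""
    lo, hi = level - width / 2, level + width / 2
    win = np.clip(hu, lo, hi)
    return ((win - lo) / (hi - lo) * 255).astype(np.uint8)
```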
#### dataset (CT: 256 × 256 × 96 & PET: 64 × 64 × 96):
112 cases (60 men and 52 women):
1) 33 benign nodules: 4 ground-glass nodules (GGN) & 29 solid nodules
2) 79 malignant nodules: 12 ground-glass nodules (GGN) & 67 solid nodules
#### Model:


Q1.

Q2.

Q3. Ask the doctor what is being labeled / what has already been labeled.
Q4. Some of the more common models:
* Resnet
* U-net
#### Paper
* U-net
* Resnet
* [Predicting survival for hepatic arterial infusion chemotherapy of unresectable colorectal liver metastases: Radiomics analysis of pretreatment computed tomography](https://sciendo.com/es/article/10.2478/jtim-2022-0004)

* t-test
* LASSO-Cox regression
* Nomograms predicting 1-, 2-, and 3-year survival were established
* slice thickness: 5 mm, slice interval: 5 mm
* Multiphase abdominal CT scan was performed in the hepatic arterial phase at 25–30 s, portal venous phase at 70–80 s, and equilibrium phase at 150 s.
* 63 patients (41:22)
* [Development and Validation of a Deep Learning Model for Non-Small Cell Lung Cancer Survival](https://jamanetwork.com/journals/jamanetworkopen/fullarticle/2766666)
* DeepSurv
* Data augmentation:
* https://onlinelibrary.wiley.com/doi/epdf/10.1111/1754-9485.13261
#### Dataset:
##### A Large-Scale CT and PET/CT Dataset for Lung Cancer Diagnosis (Lung-PET-CT-Dx)


Ask the doctor whether the exported images can be used.
How are they labeled? What is the accuracy, and in what range?
Data fed into training: height, weight, blood test results (check which papers do not include these, and which the doctors have not provided).
https://link.springer.com/content/pdf/10.1007/978-3-030-67194-5.pdf
* PET/CT applications (cancer)
* How to determine whether cancer is present without deep learning (no training)
* fuzzy c-means
* Deep learning on PET SPECT CT MRI
* [The promise of artificial intelligence and deep learning in PET and SPECT imaging](https://www.sciencedirect.com/science/article/pii/S1120179721001241)
* [Image Reconstruction Algorithms in PET](http://eknygos.lsmuni.lt/springer/370/63-91.pdf)
* [Sinogram and Imaging Formats](https://www.people.vcu.edu/~mhcrosthwait/PETW/Petandsingormasathome.html)
* [PET/CT Issues:CT-based attenuation correction (CTAC)](https://www.aapm.org/meetings/08ss/documents/Kinahan.pdf)
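Fuzzy c-means, listed above as a non-deep-learning option, is short enough to write out; a plain numpy version of the standard algorithm (the hyperparameters are illustrative):

```python
import numpy as np

def fuzzy_c_means(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Standard FCM: alternate fuzzy-membership and centroid updates."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)              # (n_samples, n_features)
    u = rng.random((len(x), n_clusters))        # random fuzzy memberships
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        # weighted centroids
        centers = (um.T @ x) / um.sum(axis=0)[:, None]
        # distances of every sample to every centroid
        d = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-10)
        # membership update: u_ik proportional to d_ik^(-2/(m-1))
        u = 1.0 / (d ** (2 / (m - 1)))
        u /= u.sum(axis=1, keepdims=True)
    return centers, u
```

For image segmentation the samples are voxel intensities (or PET/CT feature vectors) and the memberships are thresholded into a mask.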
### Non-lung:
### [Weakly supervised deep learning for determining the prognostic value of 18F-FDG PET/CT in extranodal natural killer/T-cell lymphoma, nasal type](https://link.springer.com/content/pdf/10.1007/s00259-021-05232-3.pdf)
###### tags:European Journal of Nuclear Medicine and Molecular Imaging
###### Published online: 20 February 2021
#### Abstract:
Choose a repo
## [VGG19 Network Assisted Joint Segmentation and Classification of Lung Nodules in CT Images](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8699868/pdf/diagnostics-11-02208.pdf)
### Abstract:
The proposed framework employs VGG-SegNet supported nodule mining and pre-trained DL-based classification to support automated lung nodule detection. The classification of lung CT images is implemented using the attained deep features, and then these features are serially concatenated with the handcrafted features, such as the Grey Level Co-Occurrence Matrix (GLCM), Local-Binary-Pattern (LBP) and Pyramid Histogram of Oriented Gradients (PHOG) to enhance the disease detection accuracy.
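Of the handcrafted descriptors listed (GLCM, LBP, PHOG), LBP is the easiest to sketch; a basic 8-neighbour variant in numpy (my implementation, not the paper's — the GLCM and PHOG vectors would be serially concatenated alongside, as the abstract describes):

```python
import numpy as np

def lbp_histogram(img: np.ndarray) -> np.ndarray:
    """Basic 8-neighbour Local Binary Pattern, returned as a normalised
    256-bin histogram usable as a texture feature vector."""
    img = img.astype(float)
    c = img[1:-1, 1:-1]                         # centre pixels
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)   # set bit if neighbour >= centre
    hist = np.bincount(code.ravel(), minlength=256)
    return hist / hist.sum()
```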




## [Differentiation Between Malignant and Benign Pulmonary Nodules by Using Automated Three-Dimensional High-Resolution Representation Learning With Fluorodeoxyglucose Positron Emission Tomography-Computed Tomography.](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8971840/pdf/fmed-09-773041.pdf)
###### tags: Frontiers in Medicine
###### 01 Jun 2022
### Abstract:
proposed a novel deep learning model that aided in the automatic differentiation between malignant and benign pulmonary nodules on FDG PET-CT.
### Method:
they designed a novel deep learning three-dimensional (3D) high-resolution representation learning (HRRL) model for the automated classification of pulmonary nodules based on FDG PET-CT images without manual annotation by experts.
Method (detail):
A total of 112 patients with pulmonary nodules who underwent FDG PET-CT before surgery were retrospectively recruited. We designed a novel deep learning three-dimensional (3D) high-resolution representation learning (HRRL) model for the automated classification of pulmonary nodules based on FDG PET-CT images, without manual annotation by experts. To localize the images more precisely, we defined the lung regions with a new AI-driven image-processing algorithm instead of conventional segmentation methods, without expert assistance; the algorithm is based on deep HRRL and performs high-resolution classification. In addition, the 2D model was converted into a 3D model.

Results: All lung lesions were confirmed by pathological examination (79 malignant, 33 benign). We evaluated the diagnostic performance in differentiating malignant from benign nodules. The area under the receiver operating characteristic curve (AUC) of the deep learning model was used to indicate classification performance in an evaluation with five-fold cross-validation. The nodule-based AUC, sensitivity, specificity and accuracy of the model were 78.1, 89.9, 54.5 and 79.4%, respectively.

Conclusion: Our results suggest that a deep learning algorithm using HRRL without manual annotation by experts may be helpful for classifying pulmonary nodules found on clinical FDG PET-CT images.
Preprocessing for the automated models

We defined each patient's lung region using the mediastinal window on the CT images, with a window level (WL) of 40 and a window width (WW) of 400. Consequently, values below -160 are rendered fully black and values above 240 fully white (Fig. 2A); under these settings the tracheal lumen and lung parenchyma appear almost black. This preprocessing helps the program automatically determine which transaxial slice is the upper edge of the lung parenchyma, since the lungs are indicated by the presence of air.

To accurately contour the bilateral lung regions, the body block must be found first. This study used the Python open-source computer vision library (OpenCV), a free, cross-platform package that provides functions such as contour finding. The contour function can threshold a grayscale image, or the threshold of the cut block can be customized to optimize contour finding. We used the OpenCV contour function to identify all contours on the CT images (Fig. 2B). Identifying the body contour block is essential: the body contour region can be determined from the center of gravity and the size of each contour region (Fig. 2C). Contour regions whose center of gravity is biased toward the edge, or whose size is too small, are usually not the body region.

After identifying the body contour region, we determined the upper edge of the lung parenchyma within the body region. When viewing consecutive transaxial CT slices from top to bottom, the lung parenchyma usually begins at the level where air (i.e., black regions) appears. When the air area of the uppermost lung slice exceeds a certain percentage of the body block area (e.g., 5%), that slice is adopted as the starting level of the lungs. However, some abnormal, poor-quality CT images contain many hollow black regions within the body contour block, which can lead to capture errors. Therefore, to accurately determine the body block area, we started from the image center and extended left and right until the framed region covered 33% of the body block area. The derived region (marked green in Fig. 2D) was defined as the region of interest for computation, and only the black regions (i.e., air) within it (marked red in Fig. 2D) were used for further calculation. When the proportion of air exceeded a specific ratio (e.g., 5% of the body block area), that transaxial slice was considered to depict the upper edge of the lung parenchyma, and the image was ready to be captured for training.

To obtain accurate three-dimensional (3D) CT images and improve training efficiency, the body contour region was obtained from the identified uppermost transaxial slice of the lung parenchyma, and its center of gravity was determined. We then retrieved 256-pixel-wide images extending outward from the center of gravity of the body contour block, and obtained the corresponding PET images. Starting from the uppermost transaxial slice of the lung parenchyma, 96 consecutive transaxial slices were captured downward from both the CT and PET images. The 96 images retrieved from CT and from PET have the same thickness and size and cover the entire bilateral lung region. Finally, CT and PET 3D images from 112 patients were obtained for subsequent input into the deep learning model. The 3D image data were unified to 256 × 256 × 96 for CT and 64 × 64 × 96 for PET (Fig. 3).
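The 5% air-fraction heuristic above can be sketched as follows; the body mask is assumed to come from the OpenCV contour step, and the function and parameter names are mine:

```python
import numpy as np

def find_lung_top(windowed_volume: np.ndarray, body_mask: np.ndarray,
                  air_fraction: float = 0.05) -> int:
    """Scan transaxial slices top-down and return the first slice where air
    (black pixels after windowing) inside the body region exceeds
    `air_fraction` of the body area."""
    for z in range(windowed_volume.shape[0]):
        body = body_mask[z].astype(bool)
        if not body.any():
            continue
        air = (windowed_volume[z] == 0) & body   # black == air after windowing
        if air.sum() / body.sum() > air_fraction:
            return z
    return -1  # no slice qualified
```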




## [Automatic classification of solitary pulmonary nodules in PET/CT imaging employing transfer learning techniques](https://link.springer.com/content/pdf/10.1007/s11517-021-02378-y.pdf)
###### tags:Medical & Biological Engineering & Computing (2021)
###### Published online: 18 May 2021
### Abstract:
Early and automatic diagnosis of Solitary Pulmonary Nodules (SPN) in Computed Tomography (CT) chest scans can provide early treatment for patients with lung cancer, as well as liberate doctors from time-consuming procedures. The purpose of this study is the automatic and reliable characterization of SPNs in CT scans extracted from a combined Positron Emission Tomography and Computed Tomography (PET/CT) system. To achieve the aforementioned task, Deep Learning with Convolutional Neural Networks (CNN) is applied. The strategy of training specific CNN architectures from scratch and the strategy of transfer learning, by utilizing state-of-the-art pre-trained CNNs, are compared and evaluated. To enhance the training sets, data augmentation is performed. The publicly available database of CT scans named the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) is also utilized to further expand the training set and is added to the PET/CT dataset. The results highlight the effectiveness of transfer learning and data augmentation for the classification task on small datasets. The best accuracy obtained on the PET/CT dataset reached 94%, utilizing a proposed modification of a state-of-the-art CNN called VGG16 and enhancing the training set with the LIDC-IDRI dataset. Moreover, the proposed modification outperforms, in terms of sensitivity, several similar studies that exploit the benefits of transfer learning.
> Solitary Pulmonary Nodule (SPN): a single, isolated lung nodule.
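The data augmentation the abstract relies on can be as simple as flips and 90° rotations; a sketch (the paper's exact transform set is not reproduced here):

```python
import numpy as np

def augment(img: np.ndarray):
    """Yield simple geometric variants of a 2D image: the original,
    horizontal/vertical flips, and three 90-degree rotations."""
    yield img
    yield np.fliplr(img)
    yield np.flipud(img)
    for k in (1, 2, 3):
        yield np.rot90(img, k)
```

Each training sample thus becomes six, which is what makes small PET/CT datasets workable.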
## [Imaging Characteristics and Prognostic Value of Isolated Pulmonary Metastasis from Colorectal Cancer Demonstrated with 18F-FDG PET/CT](https://downloads.hindawi.com/journals/bmri/2022/2230079.pdf)
## [Freely available convolutional neural network-based quantification of PET/CT lesions is associated with survival in patients with lung cancer](https://link.springer.com/content/pdf/10.1186/s40658-022-00437-3.pdf)