:::success
<font size=6>
IGX - Medical Device Demo Kit
</font>
:::
[TOC]
# Holoscan
Official website: https://www.nvidia.com/zh-tw/clara/medical-devices/
Accelerate the next generation of AI-enabled device development with NVIDIA Holoscan. A domain-specific AI computing platform, NVIDIA Holoscan delivers the full-stack infrastructure needed for scalable, software-defined, real-time processing of streaming data at the edge—so developers can build devices and deploy AI applications directly into clinical settings.
## Basic requirements
| Name | Description | Note |
| -------- | -------- | -------- |
| Platform | x86, Arm | |
| NVIDIA Card | A6000 | |
| OS | Ubuntu 20.04 | |
| Holoscan | ngc-v0.6.0-dgpu | 20GB |
## How To Install & Setup
HoloHub is based on Holoscan SDK. HoloHub has been tested and is known to run on **Ubuntu 20.04**. Other versions of Ubuntu or OS distributions may result in build and/or runtime issues.
Source: https://github.com/nvidia-holoscan/holohub
**Clone this repository**
```
git clone https://github.com/nvidia-holoscan/holohub.git
```
**Installing the NVIDIA Container Toolkit (x86 only)**
Configure the production repository:
```
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
Update the packages list from the repository:
```
sudo apt-get update
```
Install the NVIDIA Container Toolkit packages:
```
sudo apt-get install -y nvidia-container-toolkit
```
Prerequisites:
* A supported container engine (Docker, Containerd, CRI-O, Podman) is installed.
* The NVIDIA Container Toolkit is installed.

The `nvidia-ctk` command modifies the `/etc/docker/daemon.json` file on the host so that Docker can use the NVIDIA Container Runtime:
```
sudo nvidia-ctk runtime configure --runtime=docker
```
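After this step, `/etc/docker/daemon.json` should contain a runtime entry similar to the sketch below (the exact keys and formatting may vary by toolkit version):

```json
{
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "args": []
        }
    }
}
```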
Restart the Docker daemon:
```
sudo systemctl restart docker
```
**Container build**
Run the following command from the `holohub` directory to build the development container:
```
cd holohub
sudo ./dev_container build
```
*Note: By default, the dev_container script uses NGC's Holoscan SDK container matching the local GPU configuration, detecting whether the system uses an iGPU (integrated GPU) or a dGPU (discrete GPU).*
**Launching dev container**
Run the following command from the **holohub directory** to launch the default development container, built using Holoscan SDK's container from NGC for the local GPU.
```
sudo ./dev_container launch
```
**Building Sample applications**
Sample applications based on the Holoscan Platform may be found under the Applications directory. Sample applications are a subset of the HoloHub applications and are maintained by Holoscan SDK developers to provide a demonstration of the SDK capabilities.
To build the sample applications, make sure you have installed the prerequisites and set up your NGC credentials, then run:
```
./run build
```
**Running applications**
To list all available applications you can run the following command:
```
./run list
```

Then you can run an application using:
```
./run launch <application> <language>
```
For example, to run the endoscopy tool tracking application in C++:
```
./run launch endoscopy_tool_tracking cpp
```
and to run the same application in Python:
```
./run launch endoscopy_tool_tracking python
```
## Run Sample
### 1. Multi-AI ultrasound
```
cd holohub
sudo ./dev_container launch
./run launch multiai_ultrasound cpp
```
[live video](https://drive.google.com/file/d/1nEJO3muESntwjkekX-EDPXixk0Ijtuwi/view?usp=drive_link)

> This model identifies four crucial linear measurements of the heart. From top to bottom, the chamber model automatically generates an estimated caliper placement to measure the diameter of the Right Ventricle, the thickness of the Interventricular Septum, the diameter of the Left Ventricle, and the thickness of the Posterior Wall. These measurements are crucial in diagnosing the most common cardiac abnormalities and diseases. For instance, if the diameter of the Left Ventricle is determined to be bigger than expected for that patient (after accounting for gender and patient constitution), this can be a telling sign of diastolic dysfunction or various forms of heart failure. Data courtesy of iCardio.ai.
>
> The model output: the model interprets the most likely pixel location of each class, and the distance between those pixels can be taken as the length of the underlying anatomical component. By illustrating the distance between the points, we can see the model calculate the Left and Right Ventricular chamber sizes throughout the cardiac cycle.
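The pixel-to-length step described above can be sketched as follows. This is a minimal illustration, not the application's actual post-processing; the landmark coordinates and pixel spacing are hypothetical:

```python
import math

def landmark_distance_mm(p1, p2, pixel_spacing_mm):
    """Length of an anatomical component, taken as the Euclidean distance
    between two (row, col) landmark pixels scaled by the probe's pixel
    spacing in millimetres."""
    return math.hypot(p1[0] - p2[0], p1[1] - p2[1]) * pixel_spacing_mm

# Hypothetical Left Ventricle landmarks 120 px apart at 0.4 mm/px.
lv_top, lv_bottom = (100, 200), (220, 200)
print(landmark_distance_mm(lv_top, lv_bottom, 0.4))  # 48.0
```

In the real pipeline the two landmark pixels would come from the per-class probability maps, and the spacing from the ultrasound probe's calibration.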
### 2. Colonoscopy segmentation
```
./run launch colonoscopy_segmentation
```
[live video](https://drive.google.com/file/d/1w9Zv5Rv6QgUcjr8xWwi0g8BVQEW5aEVJ/view?usp=drive_link)

> This resource contains a segmentation model for the identification of polyps during colonoscopies trained on the Kvasir-SEG dataset [1], using the ColonSegNet model architecture [2], as well as a sample surgical video.
>
> [1] Jha, Debesh, Pia H. Smedsrud, Michael A. Riegler, Pål Halvorsen, Thomas de Lange, Dag Johansen, and Håvard D. Johansen, "Kvasir-seg: A segmented polyp dataset" Proceedings of the International Conference on Multimedia Modeling, pp. 451-462, 2020.
>
> [2] Jha D, Ali S, Tomar NK, Johansen HD, Johansen D, Rittscher J, Riegler MA, Halvorsen P. Real-Time Polyp Detection, Localization and Segmentation in Colonoscopy Using Deep Learning. IEEE Access. 2021 Mar 4;9:40496-40510. doi: 10.1109/ACCESS.2021.3063716. PMID: 33747684; PMCID: PMC7968127.
### 3. Multi-AI endoscopy
```
./run launch multiai_endoscopy cpp
```
[live video](https://drive.google.com/file/d/1QqxUFSNnvr3D5JE0GhldalAx1Uc1cUFf/view?usp=drive_link)

> This resource contains the convolutional LSTM model for tool tracking in laparoscopic videos by Nwoye et al. [1], and a sample surgical video.
>
> [1] Nwoye, C.I., Mutter, D., Marescaux, J. and Padoy, N., 2019. Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos. International Journal of Computer Assisted Radiology and Surgery, 14(6), pp. 1059-1067.
# MONAI

https://monai.io/

MONAI Label is an intelligent image labeling and learning tool that uses AI assistance to reduce the time and effort of annotating new datasets. By utilizing user interactions, MONAI Label trains an AI model for a specific task and continuously learns and updates that model as it receives additional annotated images.
MONAI Label provides multiple sample applications that include state-of-the-art interactive segmentation approaches like DeepGrow and DeepEdit. These sample applications are ready to use out of the box and let you quickly get started on annotating with minimal effort. Developers can also build their own MONAI Label applications with creative algorithms.
https://monai.io/label.html
`In addition to "MONAI Label", there are also "MONAI Core" for training and "MONAI Deploy" for model deployment. However, the official demos for training and deployment inference only present numerical results and do not include visually annotated images.`
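The annotate-train-repeat loop that MONAI Label automates can be sketched as a toy Python loop. The confidence scores, `annotate()`, and `retrain()` below are hypothetical stand-ins, not the MONAI Label API:

```python
import random

random.seed(0)
# name -> model confidence on the unlabeled pool (hypothetical scores)
unlabeled = {f"image_{i:03d}": random.random() for i in range(10)}
labeled = {}

def annotate(name):
    # Stand-in for the human-in-the-loop annotation step.
    return f"mask_for_{name}"

def retrain(labeled):
    # Stand-in for updating the model on all labels collected so far.
    return len(labeled)

for _ in range(3):
    # Query strategy: ask the user about the least-confident sample first.
    name = min(unlabeled, key=unlabeled.get)
    labeled[name] = annotate(name)
    del unlabeled[name]
    model_version = retrain(labeled)

print(model_version, len(unlabeled))  # 3 7
```

Each round labels one more image and refreshes the model, which is the loop MONAI Label runs behind its server and viewer plugins.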
## Basic requirements
| Name | Description | Note |
| -------- | -------- | -------- |
| Platform | x86 | |
| NVIDIA Card | A6000 | |
| OS | Ubuntu 20.04 | |
| NVIDIA CUDA | 11.7 | |
| MONAI Label | latest | 30GB |
## How To Install & Setup
Install the current stable version:
```
python -m pip install --upgrade pip setuptools wheel
# Install latest stable version for pytorch
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu113
# Check if cuda enabled
python -c "import torch; print(torch.cuda.is_available())"
pip install -U monailabel
```
### MONAI Label CLI
The `monailabel` command helps users download sample apps and datasets, and run the server.
```
monailabel --help
```
Download the endoscopy app:
```
monailabel apps --download --name endoscopy --output apps
```
Download sample datasets:
```
mkdir MONAI_Label
cd MONAI_Label
# Download MSD Datasets
monailabel datasets # List sample datasets
monailabel datasets --download --name Task01_BrainTumour --output datasets
wget "https://github.com/Project-MONAI/MONAILabel/releases/download/data/endoscopy_frames.zip" -O datasets/endoscopy_frames.zip
unzip datasets/endoscopy_frames.zip -d datasets/endoscopy_frames
```
## Visualization Tools
### 3D Slicer
3D Slicer is a free and open-source platform for analyzing, visualizing, and understanding medical image data. In MONAI Label, 3D Slicer is the most tested viewer for radiology studies, algorithms, development, and integration.
MONAI Label is currently tested and supported with the stable release of each 3D Slicer version; preview versions are not fully tested or supported.
To install the stable release of 3D Slicer, see the 3D Slicer installation guide.
Currently, Windows and Linux versions are supported.
Install 3D Slicer and the MONAI Label plugin
* Download and Install 3D Slicer version 5.0 or later.
```
sudo apt-get install libpulse-dev libnss3 libglu1-mesa
sudo apt-get install --reinstall libxcb-xinerama0
wget https://slicer-packages.kitware.com/api/v1/item/64e0b4a006a93d6cff3638ce/download
mv download Slicer-5.4.0-linux-amd64.tar.gz
chmod 777 Slicer-5.4.0-linux-amd64.tar.gz
tar zxvf Slicer-5.4.0-linux-amd64.tar.gz
```
* Start 3DSlicer
```
cd Slicer-5.4.0-linux-amd64
./Slicer
```
* On the menu bar navigate View -> Extension Manager -> Active Learning -> MONAI Label

* Install MONAI Label plugin (click “Install”)
* Restart 3D Slicer (click “Restart” in the same dialog box)
To add the MONAI Label icon shortcut to the 3D Slicer toolbar:
* Navigate Edit -> Application Settings
* Under the Modules panel drag MONAI Label into Favorite Modules

* Restart 3DSlicer
* Look for the MONAI Label module icon in the 3D Slicer toolbar

* Click on "Add Data" and then open "Choose Directory to Add"

* Before selecting, navigate to the "datasets" directory created earlier, select the "Task01_BrainTumour" directory, and choose either "imagesTr" or "imagesTs" as appropriate
* Once selected, click OK to proceed

* Now you can view images and annotate them
Refer to the 3D Slicer plugin page for other options to install and run the MONAI Label plugin in 3D Slicer:
https://github.com/Project-MONAI/MONAILabel/tree/main/plugins/slicer
### MONAI Label Extension for CVAT
CVAT is an interactive video and image annotation tool for computer vision. It provides a user-friendly interface for annotating images and videos, making it ideal for computer-assisted intervention applications. MONAI Label can deploy computer vision-related tasks with the CVAT viewer, such as endoscopy segmentation and tracking.
**Setup CVAT and Nuclio**
* Get CVAT from GitHub and check out version v2.1.0
Open a terminal on the local machine and execute:
```
git clone https://github.com/opencv/cvat
cd cvat
# Check out the required stable version, v2.1.0
git checkout v2.1.0
```
Set the host and CVAT version, and configure CVAT:
```
# use real-ip instead of localhost to make the CVAT projects sharable
export CVAT_HOST=127.0.0.1
export CVAT_VERSION=v2.1.0
# Start CVAT from docker-compose, make sure the IP and port are available.
sudo apt install docker-compose
docker-compose -f docker-compose.yml -f components/serverless/docker-compose.serverless.yml up -d
# Create CVAT username and password
docker exec -it cvat bash -ic 'python3 ~/manage.py createsuperuser'
```
The setup process uses ports 8070, 8080, and 8090; if alternative ports are preferred, refer to the CVAT guide.
CVAT should be running after this step. Open http://127.0.0.1:8080 in Chrome and log in with the superuser username and password:

**Setup the Nuclio container platform for function container management**
* On the local host machine, open a terminal to get a command-line interface.
```
# Get Nuclio dashboard
wget https://github.com/nuclio/nuclio/releases/download/1.5.16/nuctl-1.5.16-linux-amd64
chmod +x nuctl-1.5.16-linux-amd64
sudo ln -sf $(pwd)/nuctl-1.5.16-linux-amd64 /usr/local/bin/nuctl
```
**Endoscopy model deployment with MONAI Label**
* This step deploys the MONAI Label plugin with endoscopy models using the Nuclio tool.
```
# Make a working folder on the local machine, named workspace
mkdir workspace
# Option 1: if running the MONAI Toolkit, copy the CVAT plugin folder from the toolkit container
docker cp $(docker container ls | grep 'nvcr.io/nvidia/clara/monai-toolkit' | awk '{print $1}'):/usr/local/monailabel/plugins/cvat workspace
# Deploy all endoscopy models
./workspace/cvat/deploy.sh endoscopy
# Or to deploy specific function and model, e.g., tooltracking
./workspace/cvat/deploy.sh endoscopy tooltracking
# Option 2: if MONAI Label was installed locally from GitHub, get the latest CVAT plugin folder from GitHub
git clone https://github.com/Project-MONAI/MONAILabel.git
# Deploy all endoscopy models
./MONAILabel/plugins/cvat/deploy.sh endoscopy
# Or to deploy specific function and model, e.g., tooltracking
./MONAILabel/plugins/cvat/deploy.sh endoscopy tooltracking
```
* After model deployment, users can see the model names in the Models tab of CVAT

* To check or monitor the status of a deployed function container, users can open the Nuclio platform (at http://127.0.0.1:8070 by default) and see whether the deployed models are running.

**Inference with a CVAT + MONAI Label function**
* MONAI Label deployed models can be used directly in CVAT without a monailabel start_server command, as the deployed functions run inside their containers. Users can check the status of a deployed function container from its Docker logs, e.g., for the tooltracking container, run `docker container logs nuclio-nuclio-monailabel.endoscopy.tooltracking` in a JupyterLab terminal.
* In the following steps, users can download the same sample data:
```
mkdir datasets
wget "https://github.com/Project-MONAI/MONAILabel/releases/download/data/endoscopy_frames.zip" -O datasets/endoscopy_frames.zip
unzip datasets/endoscopy_frames.zip -d datasets/endoscopy_frames
```
* Select frames to upload to CVAT.
**Create a task and upload images to CVAT with the UI**
* Click the "+" icon in the Tasks tab, create a name and add a label, then click "Done". Select and upload an image, then click Submit to create a task.

* A "job" will be shown in CVAT; click the "job" to open the labeling task:

**Inference with a MONAI Label model**
* On the labeling task page, click AI Tools in the left panel, select Detectors, and choose tooltracking from the dropdown list. Pick a label and finally click the Annotate button.

* The predicted mask will be shown as a polygon in CVAT; users can edit the mask and click the Save icon at the top. Once annotation is finished, click "Menu", change the job state to "complete", and select "Finish the job".

* Users can export annotated masks using the CVAT UI.