:::success
<font size=6>
Holoscan Demo Kit
</font>
:::
# Basic requirements
| Name | Description | Note |
| -------- | -------- | -------- |
| Platform | x86 | |
| NVIDIA Card | 4500 or above | |
| OS | Ubuntu 22.04 IoT | |
| Holoscan | ngc-v0.6.0-dgpu | ~20 GB disk space |
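Before starting, you can quickly check that the machine meets these requirements. This is an optional sanity check, assuming `lspci` and `lsb_release` are available (both ship with standard Ubuntu):
```
# Confirm an NVIDIA GPU is present
lspci | grep -i nvidia
# Confirm the OS release
lsb_release -a
```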
# Basic Install
## For Ubuntu 22.04 IoT
`sudo apt-get update`
# 1: Install the NVIDIA driver
```
# List the driver versions available for this system
ubuntu-drivers list
# Install the chosen version (525 here; adjust to match the list output)
sudo apt install nvidia-driver-525 -y
```
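After installing, a reboot is typically required before the new driver loads. You can then confirm the driver and GPU are visible (this check is not part of the original steps, but is the standard way to verify a driver install):
```
sudo reboot
# After the reboot:
nvidia-smi
```
If `nvidia-smi` prints a table listing the GPU and driver version, the driver is working.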
# 2: Install Docker Engine
```
sudo apt-get update
sudo apt-get install ca-certificates curl -y

# Add Docker's official GPG key
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the Docker apt repository
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
$(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install Docker Engine, the CLI, and plugins
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
```
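To confirm Docker installed correctly, you can run the `hello-world` image (an optional sanity check, not part of the original steps):
```
sudo docker run --rm hello-world
```
A "Hello from Docker!" message confirms the daemon can pull and run containers.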
# Holoscan
## How to Install & Set Up
**Clone this repository**
```
git clone https://github.com/nvidia-holoscan/holohub.git
```
**Install the NVIDIA Container Toolkit (x86 only)**
Configure the production repository:
```
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
&& curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```
Update the packages list from the repository:
```
sudo apt-get update
```
Install the NVIDIA Container Toolkit packages:
```
sudo apt-get install -y nvidia-container-toolkit
```
```
sudo nvidia-ctk runtime configure --runtime=docker
```
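The `nvidia-ctk` command above registers the NVIDIA runtime in Docker's daemon configuration. You can optionally confirm the change before restarting:
```
# The file should now contain an "nvidia" entry under "runtimes"
cat /etc/docker/daemon.json
```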
Restart the Docker daemon:
```
sudo systemctl restart docker
```
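To verify that containers can now access the GPU, a common check (not in the original steps) is to run `nvidia-smi` inside a container using the NVIDIA runtime:
```
sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```
If this prints the same GPU table as `nvidia-smi` on the host, the Container Toolkit is configured correctly.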
**Container build**
```
cd holohub
sudo ./dev_container build
```
**Launching dev container**
```
sudo ./dev_container launch
```
**Building sample applications**
```
./run build
```
**Running applications**
List the available applications:
```
./run list
```

Then you can run the application using
```
./run launch <application> <language>
```
For example, to run the endoscopy tool tracking application in C++:
```
./run launch endoscopy_tool_tracking cpp
```
and to run the same application in Python:
```
./run launch endoscopy_tool_tracking python
```
## Run Samples
### 1. Multi-AI ultrasound
```
cd holohub
sudo ./dev_container launch
./run launch multiai_ultrasound cpp
```
[live video](https://drive.google.com/file/d/1nEJO3muESntwjkekX-EDPXixk0Ijtuwi/view?usp=drive_link)

> This model identifies four crucial linear measurements of the heart. From top to bottom, the chamber model automatically generates an estimated caliper placement to measure the diameter of the Right Ventricle, the thickness of the Interventricular Septum, the diameter of the Left Ventricle, and the thickness of the Posterior Wall. These measurements are crucial in diagnosing the most common cardiac abnormalities and diseases. For instance, if the diameter of the Left Ventricle is determined to be larger than expected for that patient (after accounting for gender and patient constitution), this can be a telling sign of diastolic dysfunction or even of various forms of heart failure. Data courtesy of iCardio.ai.
>
> The model output: This model interprets the most likely pixel location of each class. The distance between the pixels can be assumed to be the length of the underlying anatomical component. Illustrating the distance between the points we can see the model calculate the Left and Right Ventricular chamber sizes through the extent of the cardiac cycle.
### 2. Colonoscopy segmentation
```
./run launch colonoscopy_segmentation
```
[live video](https://drive.google.com/file/d/1w9Zv5Rv6QgUcjr8xWwi0g8BVQEW5aEVJ/view?usp=drive_link)

> This resource contains a segmentation model for the identification of polyps during colonoscopies trained on the Kvasir-SEG dataset [1], using the ColonSegNet model architecture [2], as well as a sample surgical video.
>
> [1] Jha, Debesh, Pia H. Smedsrud, Michael A. Riegler, Pål Halvorsen, Thomas de Lange, Dag Johansen, and Håvard D. Johansen, "Kvasir-SEG: A segmented polyp dataset," Proceedings of the International Conference on Multimedia Modeling, pp. 451-462, 2020.
>
> [2] Jha D, Ali S, Tomar NK, Johansen HD, Johansen D, Rittscher J, Riegler MA, Halvorsen P. Real-Time Polyp Detection, Localization and Segmentation in Colonoscopy Using Deep Learning. IEEE Access. 2021 Mar 4;9:40496-40510. doi: 10.1109/ACCESS.2021.3063716. PMID: 33747684; PMCID: PMC7968127.
### 3. Multi-AI endoscopy
```
./run launch multiai_endoscopy cpp
```
[live video](https://drive.google.com/file/d/1QqxUFSNnvr3D5JE0GhldalAx1Uc1cUFf/view?usp=drive_link)

> This resource contains the convolutional LSTM model for tool tracking in laparoscopic videos by Nwoye et al. [1], and a sample surgical video.
>
> [1] Nwoye, C.I., Mutter, D., Marescaux, J. and Padoy, N., 2019. Weakly supervised convolutional LSTM approach for tool tracking in laparoscopic videos. International Journal of Computer Assisted Radiology and Surgery, 14(6), pp. 1059-1067.