---
title: "Oakestra UH NetAISys 2024 Tutorial"
tags: wikidocs
teams:
- maintainers
participants:
- Giovanni (GB)
---
Table of contents:
- [Your First Cluster 🤖](#Your-First-Cluster-🤖)
- [Hello World 🛸](#Hello-World-🛸)
- [Your First Web Server 🖥️](#Your-First-Web-Server-🖥️)
- [Your First Deployment Descriptor 📋](#Your-First-Deployment-Descriptor-📋)
- [Your First AR Pipeline 👁️](#Your-First-AR-Pipeline-👁️)
# Your First Cluster 🤖 👷 🌳
**Legend:**
🌳: whenever you see this, we refer to the machine hosting the Cluster and the Root Orchestrator
👷: whenever you see this, we refer to the machine(s) hosting your worker node(s)
### (1) Let's install the dependencies
🌳: Install Docker and Docker Compose by following the instructions from [here](https://docs.docker.com/engine/install/debian/)
👷: Install iptables
```
sudo apt-get install iptables
```
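To quickly confirm that iptables is available, you can list the current rules:
```
# Should print the default INPUT/FORWARD/OUTPUT chains on a fresh install
sudo iptables -L
```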
### (2) 🌳: Run root and cluster orchestrator
Let's set the initial environment variables
```
## Choose a unique name for your cluster
export CLUSTER_NAME=<string>
## In our example we'll use cluster1

## Give a name or geo coordinates to the current location
export CLUSTER_LOCATION=<string or coordinates>
## In our example we'll use 48.26280440430051,11.66904127701312,2000

## IP address where this root component can be reached to access the APIs
export SYSTEM_MANAGER_URL=<IP address>
## Note: use a non-loopback interface IP (e.g. any of your real interfaces that has internet access).
## "0.0.0.0" leads to server issues
```
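For reference, here is the filled-in version we'll assume for the rest of this tutorial; the IP address is a hypothetical LAN address, so substitute your own machine's address:
```
export CLUSTER_NAME=cluster1
export CLUSTER_LOCATION=48.26280440430051,11.66904127701312,2000
export SYSTEM_MANAGER_URL=192.168.1.10   # hypothetical; use your machine's real interface IP
```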
Clone the repo
```
# Feel free to use https or ssh for cloning
git clone https://github.com/oakestra/oakestra.git && cd oakestra
```
Run the cluster and root orchestrator
```
sudo -E docker compose -f run-a-cluster/1-DOC.yaml -f run-a-cluster/override-alpha-versions.yaml up
```
⚠️ n.b. if you use Docker Compose v1, the command is `docker-compose`, with a `-` instead of the space.
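Before moving on, it's worth checking that the orchestrator containers actually came up. One way (assuming you're still in the repository root) is to list the compose services and confirm they are all `Up`:
```
# List the services started by the compose files above
sudo docker compose -f run-a-cluster/1-DOC.yaml -f run-a-cluster/override-alpha-versions.yaml ps
```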
### (3) 👷: Install worker node components
#### Install the NodeEngine
```
wget -c https://github.com/oakestra/oakestra/releases/download/alpha-v0.4.300/NodeEngine_$(dpkg --print-architecture).tar.gz \
  && tar -xzf NodeEngine_$(dpkg --print-architecture).tar.gz \
  && chmod +x install.sh \
  && mv NodeEngine NodeEngine_$(dpkg --print-architecture) \
  && ./install.sh $(dpkg --print-architecture)
```
#### Install the NetManager
```
wget -c https://github.com/oakestra/oakestra-net/releases/download/alpha-v0.4.300/NetManager_$(dpkg --print-architecture).tar.gz \
  && tar -xzf NetManager_$(dpkg --print-architecture).tar.gz \
  && chmod +x install.sh \
  && ./install.sh $(dpkg --print-architecture)
```
#### Configure the NetManager by editing `/etc/netmanager/netcfg.json`
```
{
    "NodePublicAddress": "<IP ADDRESS OF THIS DEVICE>",
    "NodePublicPort": 50103,
    "ClusterUrl": "<IP address of the cluster orchestrator, or 0.0.0.0 if deployed on the same machine>",
    "ClusterMqttPort": "10003"
}
```
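As a concrete example, assuming a hypothetical worker reachable at 192.168.1.20 and the cluster orchestrator at 192.168.1.10, you could write the file in one go:
```
# Hypothetical addresses; replace them with your own
sudo tee /etc/netmanager/netcfg.json > /dev/null <<'EOF'
{
    "NodePublicAddress": "192.168.1.20",
    "NodePublicPort": 50103,
    "ClusterUrl": "192.168.1.10",
    "ClusterMqttPort": "10003"
}
EOF
```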
#### Run the NetManager in background
```
sudo nohup NetManager -p 6000 </dev/null >/tmp/netmanager.log 2>&1 &
```
⚠️ n.b. to kill the NetManager you can use `sudo kill -9 $(ps -ax | grep NetManager | sed 's/|/ /' | awk '{print $1}')`
#### Run the NodeEngine in background
```
sudo nohup NodeEngine -n 6000 -p 10100 -a <Cluster Orchestrator IP Address> </dev/null >/tmp/nodeengine.log 2>&1 &
```
⚠️ n.b. to kill the NodeEngine use `sudo kill -9 $(ps -ax | grep NodeEngine | sed 's/|/ /' | awk '{print $1}')`
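If your distribution ships `pkill`, the same can be expressed more concisely for either component (it matches by process name, so make sure nothing else on the machine shares it):
```
sudo pkill -9 NetManager
sudo pkill -9 NodeEngine
```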
⚠️ (optional) configure GPU
```
wget https://raw.githubusercontent.com/oakestra/oakestra/develop/go_node_engine/build/configure-gpu.sh ; chmod +x configure-gpu.sh ; ./configure-gpu.sh
```
#### Check if the NodeEngine is running
```
tail -f /tmp/nodeengine.log
```
You should see the node engine sending updates to the Cluster orchestrator. If that's the case... 🏆 Success!!
#### Check cluster status from API browser
1) Access the API browser at `http://<root_orchestrator_ip>:10000/api/docs`.
2) Scroll down until you see the endpoint `/api/cluster/active`.
3) Expand it, click `Try it out`, and then `Execute`.
4) Check that the resulting JSON contains an object whose cluster name matches yours (in our example `cluster1`) and whose number of workers equals the number of nodes you attached (in our example 2).
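If you prefer the terminal, you can hit the same endpoint with curl (assuming your deployment exposes it without authentication; if you get an auth error, stick to the API browser above):
```
curl http://<root_orchestrator_ip>:10000/api/cluster/active
```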
# Hello World 🛸
Now we'll deploy our first application.
Nothing to be scared of, we have that covered for you!
Simply:
### (1) 
### (2) 
### (3) 
### (4) 
### (5) 
See? It was not complicated!
### Let's check if it works 🏆


# Your First Web Server 🖥️
Now you know the drill, but this time we'll manually describe the application using the form from the dashboard.
### (1) 
### (2) Describe your app
- Service name: nginx
- Namespace: test
- Virtualization: Container
- Memory: 100MB
- Vcpus: 1
- Vgpus: 0
- Vtpus: 0
- Bandwidth in/out: 0
- Storage: 0
- Port: 80
- Code: docker.io/library/nginx:latest

### (3) Deploy the application

### (4) Check the IP address of the worker node


### (5) Reach your application
Let's now point our browser at the IP address used by the application, 131.159.24.51 in our example.
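If you'd rather verify from the terminal, fetching the same address with curl (example IP from above) should return the nginx welcome page:
```
curl http://131.159.24.51
```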

# Your First Deployment Descriptor 📋
Let's write our own deployment descriptor, `client-server.json`:
```
{
  "microservices": [
    {
      "microserviceID": "",
      "microservice_name": "curl",
      "microservice_namespace": "test",
      "virtualization": "container",
      "cmd": ["sh", "-c", "curl 10.30.55.55 ; sleep 5"],
      "memory": 100,
      "vcpus": 1,
      "vgpus": 0,
      "vtpus": 0,
      "bandwidth_in": 0,
      "bandwidth_out": 0,
      "storage": 0,
      "code": "docker.io/curlimages/curl:7.82.0",
      "state": "",
      "port": "",
      "added_files": [],
      "constraints": [
        {
          "type": "direct",
          "node": "pi4-base",
          "cluster": "cluster1"
        }
      ]
    },
    {
      "microserviceID": "",
      "microservice_name": "nginx",
      "microservice_namespace": "test",
      "virtualization": "container",
      "cmd": [],
      "memory": 100,
      "vcpus": 1,
      "vgpus": 0,
      "vtpus": 0,
      "bandwidth_in": 0,
      "bandwidth_out": 0,
      "storage": 0,
      "code": "docker.io/library/nginx:latest",
      "state": "",
      "port": "",
      "addresses": {
        "rr_ip": "10.30.55.55"
      },
      "added_files": []
    }
  ]
}
```
### What did we learn?
1. Microservices
We defined two microservices: one client and one server.
2. Round-robin IP
We defined our first round-robin IP. These IPs have the form 10.30.x.y; we used 10.30.55.55 for nginx.
This means that every nginx instance is reachable at this address, and the traffic is automagically balanced across instances.
3. Direct mapping constraint
We learned how to pin applications to specific hardware using the `direct` constraint on the `curl` client.
All its instances will be deployed on the `pi4-base` machine of `cluster1`.
# Your First AR Pipeline 👁️


### Tutorial on [GitHub](https://github.com/oakestra/demo-ar-pipeline)