# Guide to HPC and AWS Resources for the 6G Center
This document is a guide for the 6G Center on accessing and using High-Performance Computing (HPC) resources and AWS cloud services.
Users are advised to prioritize HPC resources for computational tasks before moving to AWS services.
## Accessing HPC and Using Resources: A Comprehensive Guide
### 1. HPC User Creation Form
To access the HPC, apply by completing the [HPC User Creation form](https://portal.ku.ac.ae/SharedServices/_layouts/15/NintexForms/Modern/NewForm.aspx?List=4277dd50%2Da8b7%2D48e0%2Dbdbe%2D4e5025f3231d&RootFolder=&Web=30c3ca63%2Dce57%2D4636%2D823e%2D70e0705ae374).
- **Faculty/PI name:** Merouane Abdelkader Debbah
- **Project name:** 6GRC
Upon approval, users will have access to `/dpc/kuin0100` (2TB storage quota).
---
### 2. Access Almesbar HPC Cluster
Follow the [Research Computing Knowledge Base documentation](https://wiki-researchcomputing.ku.ac.ae/configure_access.html) for detailed instructions.
**Command to access Almesbar HPC cluster:**
```bash
ssh username@kunet.ae@login.almesbar.ku.ac.ae
cd /dpc/kuin0100 # Access the shared directory
```
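A common next step is to copy code or data into the shared project directory. A minimal `scp` sketch, assuming a local folder named `my_project` (the folder name is illustrative):
```bash
# Copy a local folder to the shared project space on Almesbar (run from your own machine)
scp -r ./my_project username@kunet.ae@login.almesbar.ku.ac.ae:/dpc/kuin0100/
```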
---
### 3. SLURM
#### 3.1 Overview
SLURM is a workload manager for distributed HPC environments. It is based on a queueing system and is used to run batch and interactive jobs on the cluster's networked compute nodes.
SLURM provides commands to submit jobs to the queues, monitor the status of jobs and queues, and cancel jobs.
**Common SLURM commands** are shown in the sketch below. For the full reference, see the [SLURM Documentation](https://slurm.schedmd.com/).
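These examples assume the batch script name `myscript.sh` used in Section 4.2; the job ID `12345` is a placeholder.
```bash
sbatch myscript.sh      # submit a batch job script to the queue
squeue -u $USER         # list your pending and running jobs
sinfo                   # show partitions and their node states
scancel 12345           # cancel a job by its job ID
```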
---
#### 3.2 Requesting Resources
Jobs can only be run on HPC clusters by requesting resources via the SLURM workload manager.
Factors to consider while requesting resources:
- Job runtime
- GPUs, cores, nodes required
- Memory needs
- Special hardware requirements
**Commands:**
- **Memory:** `--mem=<M>` (in MB)
- **GPUs:** `--partition=gpu --gres=gpu:<N>`
- **Nodes:** `--nodes=<N>`
- **Time:** `--time=<HH:MM:SS>`
**Example Command:**
```bash
srun --partition=gpu --gres=gpu:2 --mem=100 --time=24:00:00 --account kuin0100 --pty /bin/bash -l
```
To check GPUs:
```bash
nvidia-smi
```
Exit the session to release the allocated resources (including GPUs):
```bash
exit
```
**Note 1:** If the `--account` flag is missing, or the HPC project code is not correct, the job will not be submitted.
**Note 2:** By default, the `--partition` option is set to `prod`. You can use the `sinfo` command to check the available partitions.
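For example, the partition check mentioned in Note 2 can be done as follows (the `gpu` partition name is taken from the examples above; available partitions may differ):
```bash
sinfo                   # list all partitions and the state of their nodes
sinfo -p gpu            # show only the gpu partition used in the examples above
```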

---
### 4. Job Submission and Running Jobs
#### 4.1 Interactive Jobs
Use `srun` and `salloc` for real-time interaction.
**srun**
The `srun` command launches an interactive session on the compute nodes by requesting resources such as memory, time, and node count via the SLURM workload manager. When the resources become available, a shell prompt opens and the user can work interactively on the node for the requested amount of time. The session does not start until SLURM can allocate the requested resources for the job.
**Example (Interactive Bash session for 30 mins):**
```bash
srun --partition=gpu --gres=gpu:2 --mem=100 --time=00:30:00 --account kuin0100 --pty /bin/bash -l
```
Load required modules:
```bash
module load miniconda/3
```
See the [Modular environment — Research Computing Knowledge Base documentation](https://wiki-researchcomputing.ku.ac.ae/modular_environment.html) for a comprehensive guide on using environment modules.
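As a sketch of a typical workflow after loading the miniconda module (the environment name `myenv`, the package, and the script name are illustrative, not part of the cluster setup):
```bash
# Create and activate a project-specific conda environment
conda create -n myenv python=3.10 -y
conda activate myenv        # on some setups, `source activate myenv` is required instead
pip install numpy           # install whatever packages your project needs
python my_script.py         # run your code interactively on the allocated node
```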
**salloc**
The `salloc` command is similar to `srun`, except that it only creates a resource allocation when invoked; it does not launch a task itself. Typically, it is used to allocate resources on the compute nodes and then run an interactive session through a series of subsequent `srun` commands or scripts. The resources are released once the tasks are completed and the allocation is exited.
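A minimal sketch of this workflow, reusing the account and partition from the earlier examples (the GPU count and time limit are illustrative):
```bash
# Allocate resources first, then launch commands inside the allocation
salloc --partition=gpu --gres=gpu:1 --mem=2000 --time=00:30:00 --account=kuin0100
srun nvidia-smi             # runs on the allocated compute node
srun python test.py         # subsequent srun calls reuse the same allocation
exit                        # release the allocation when done
```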
#### 4.2 Batch Jobs
Batch jobs can be submitted to the SLURM workload manager, which uses a job submission file (an SBATCH script) to run the job on available cluster nodes. Unlike interactive jobs, their output is written to a log file instead of being displayed on the terminal, and the jobs continue to run even if the user disconnects from the cluster. Batch jobs are typically used to run one or more scripts without user interaction.
**Example SBATCH Script (`myscript.sh`):**
```bash
#!/bin/bash
#SBATCH --account=kuin0100 # Account name
#SBATCH --job-name=python # Job name
#SBATCH --partition=gpu # Specify GPU partition
#SBATCH --nodes=1 # Number of nodes requested
#SBATCH --ntasks-per-node=1 # Number of tasks per node
#SBATCH --mem=2000 # Memory request in MB
#SBATCH --time=1:00:00 # Maximum runtime (HH:MM:SS)
#SBATCH --error=slurm.err # Error log file
#SBATCH --output=slurm.out # Output log file
# Load required modules
module load Python/3.9.6
# Run Python script
python test.py
```
Submit job:
```bash
sbatch myscript.sh
```
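After submission, the job can be monitored and its logs inspected; a short sketch (the job ID `12345` is a placeholder, and the log file names match the SBATCH script above):
```bash
squeue -u $USER             # check the job's state (PD = pending, R = running)
tail -f slurm.out           # follow the output log defined in the SBATCH script
cat slurm.err               # inspect the error log, if any
scancel 12345               # cancel the job if needed (replace with your job ID)
```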
For more details: [Research Computing Knowledge Base](https://wiki-researchcomputing.ku.ac.ae/index.html)
---
## Accessing AWS and Using Resources: A Comprehensive Guide
## Table of Contents
1. [Introduction](#introduction)
2. [Prerequisites](#prerequisites)
3. [Step 1: Login to AWS Management Console](#step-1-login-to-aws-management-console)
4. [Step 2: Navigate to EC2 Service](#step-2-navigate-to-ec2-service)
5. [Step 3: Launch an Instance](#step-3-launch-an-instance)
6. [Step 4: Configure Instance Details](#step-4-configure-instance-details)
7. [Step 5: Add Storage](#step-5-add-storage)
8. [Step 6: Add Tags](#step-6-add-tags)
9. [Step 7: Configure Security Groups](#step-7-configure-security-groups)
10. [Step 8: Review and Launch](#step-8-review-and-launch)
11. [Conclusion](#conclusion)
---
## Introduction
Amazon Elastic Compute Cloud (EC2) allows you to run virtual servers in the cloud. You can use EC2 instances to deploy applications, host websites, or run software services.
---
## Prerequisites
- An active AWS account.
- Basic knowledge of cloud computing.
- Access to the AWS Management Console.
---
## Step 1: Login to AWS Management Console
1. Go to [KU AWS](https://ku-rc.awsapps.com/start/#/).
2. Enter your KU credentials and sign in.
---
## Step 2: Navigate to EC2 Service
1. In the AWS Management Console, choose **6G Center** in the search bar.
2. Select **KUResearchers** from the dropdown menu.
3. Select **EC2** from the dropdown menu.
---
## Step 3: Launch an Instance
1. Click the **Launch Instances** button on the EC2 dashboard.
---
## Step 4: Configure Instance Details
1. **Name and Tags**:
- Enter a name for your instance (e.g., `MyFirstInstance`).
2. **Application and OS Images**:
- Choose an Amazon Machine Image (AMI), such as Amazon Linux, Ubuntu, or Windows.
3. **Instance Type**:
- Select an instance type (e.g., a `t3` family instance). For detailed information on performance and pricing, refer to [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/).
---
## Step 5: Add Storage
1. Specify the storage volume size.
2. Leave the default settings (300 GiB for general-purpose usage) or increase if needed.
---
## Step 6: Add Tags
1. Add tags to categorize your instance.
- Example: Key = `Environment`, Value = `Production`.
---
## Step 7: Configure Security Groups
1. Create a new security group or use an existing one.
2. Add inbound rules:
- Example: Allow SSH (port 22) or HTTP (port 80) traffic based on your needs.
---
## Step 8: Review and Launch
1. Review all configurations.
2. Click **Launch**.
3. Select an existing key pair or create a new one for SSH access.
4. Acknowledge that you have the private key file, then click **Launch Instances**.
---
## Conclusion
Congratulations! You have successfully launched an EC2 instance on AWS. To connect to your instance, use the key pair to establish an SSH connection or access it through the AWS Management Console.
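A minimal SSH sketch for a Linux instance (the key file name, user name, and public DNS name below are placeholders from your own launch; the default user depends on the AMI, e.g. `ec2-user` for Amazon Linux or `ubuntu` for Ubuntu):
```bash
# Restrict key permissions and connect (all values below are placeholders)
chmod 400 my-key-pair.pem
ssh -i my-key-pair.pem ec2-user@ec2-12-34-56-78.compute-1.amazonaws.com
```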
For more information, visit [AWS EC2 Documentation](https://docs.aws.amazon.com/ec2/).
---
Happy cloud computing!