---
title: How to use the NSCC machine
tags: APAC HPC-AI competition, training
---
# How to use the NSCC machine
[TOC]
## Introduction


## HPC Basics
## Aspire
ASPIRE, NSCC's advanced supercomputing system
- Allinea Tools
  - debugging and profiling
  - to achieve high-level, low-latency performance we might need to build our own library :+1:
- DDN Storage
  - parallel file system which provides (tiered) storage
  - can be seen from all the login nodes
- GPU: 150,000

Use the parallel file system if we want to upload our own programs (see the sketch after this list).
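A minimal way to get our own program onto the shared (parallel) file system is to copy it to the login node over SSH; the username and target directory below are placeholders:
```shell=
# from your local machine: copy the source tree to your home directory on Aspire
scp -r my_project <username>@aspire.nscc.sg:~/
# then log in and build/submit from there
ssh <username>@aspire.nscc.sg
```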

---
[NSCC software list](https://help.nscc.sg/software-list/)
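Most of the listed software is provided through environment modules; a quick way to browse and load it on the login node (the `intel` module is the one used in the job scripts below):
```shell=
module avail       # list the software stacks installed on Aspire
module load intel  # load a toolchain, e.g. the Intel compilers + MPI
module list        # show what is currently loaded
```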
### When we want to run something on the supercomputer
We must submit the job to PBS (the scheduler) first, instead of sending the heavy computation through the login node directly.
Login node: `aspire.nscc.sg`
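A typical submit-and-check cycle looks like this (`job.pbs` is just a placeholder name for one of the scripts further down):
```shell=
qsub job.pbs       # submit the script to PBS; prints the job ID
qstat -u $USER     # check the state of your jobs (Q = queued, R = running)
qdel <job-id>      # cancel a job by its ID if needed
```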
---
## User Enrollment
Ref.: [help.nscc.sg](https://help.nscc.sg) → User Guides → User Enrollment Guide
---
### Q&A
Q: Is everyone at ASTAR (e.g. IHPC) routing their traffic through GIS? I see the GIS logo next to the ASTAR logo. Or does each ASTAR institute have its own FAT node?
A:
Q: How can we access the visualization nodes? Do we request an interactive node with a special flag?


> which was skipped
- OS: CentOS 6
- CUDA 10.1

[Google Drive folder](https://drive.google.com/drive/u/2/folders/1a0j3ibogoL0wEBWawJG7zTGZqnwERXPb)
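To double-check those versions from inside a job or container (assuming the CUDA toolkit is on the PATH there):
```shell=
cat /etc/centos-release   # OS release, should report CentOS 6.x
nvidia-smi                # GPU driver version and visible GPUs
nvcc --version            # CUDA toolkit version (10.1 here), if the toolkit is on PATH
```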
### Example submission script (Docker container on the dgx queue)
```shell=
#!/bin/sh
#PBS -l walltime=0:05:00
#PBS -q dgx
#PBS -P 90000001
#PBS -N container
image="nvcr.io/nvidia/cuda:latest"
# Everything between the EOF markers is executed inside the container.
nscc-docker run $image << EOF
echo container starts in the directory:
pwd
echo Change directory to where the job was submitted:
cd $PBS_O_WORKDIR
pwd
echo By default Docker starts in a private network with a hostname different to that of the host system:
hostname
echo
nvidia-smi
EOF
```
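Save it on the login node (e.g. as `container.pbs`; the file name is just a placeholder) and submit it with `qsub`. PBS writes the job's output to files named after the job:
```shell=
qsub container.pbs   # submit to the dgx queue
qstat -u $USER       # wait for the job to run and finish
cat container.o*     # stdout of the job (stderr goes to container.e*)
```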
### Run hello.sh (MPI job on the normal queue)
```shell=
#!/bin/bash
#PBS -N HelloMPI
#PBS -l select=1:mem=1G:ncpus=2:mpiprocs=2:ompthreads=1
#PBS -l walltime=00:05:00
#PBS -q normal
#PBS -o out-std.txt
#PBS -e out-err.txt
#PBS -P 50000045
# NOTE: Aspire has 96 GB of memory per node.
# Alternative to "module load intel": source the Intel toolchain scripts directly.
#module load intel-license
#source /app/intel/xe2019/compilers_and_libraries_2019.0.117/linux/bin/compilervars.sh intel64
#source /app/intel/xe2019/compilers_and_libraries_2019.0.117/linux/mpi/intel64/bin/mpivars.sh -ofi
module load intel
cd $PBS_O_WORKDIR          # run from the directory the job was submitted from
env >& out-env.txt         # record the job environment
ldd hello >& out-ldd.txt   # record which shared libraries the binary links against
mpirun ./hello >& out-run.txt
```
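The script expects a compiled `hello` binary in the submission directory. A possible way to build and submit it from the login node (the exact compiler wrapper depends on the loaded Intel module; `mpicc`/`mpiicc` are the usual names):
```shell=
module load intel          # same toolchain the job script loads
mpicc hello.c -o hello     # or mpiicc to compile with the Intel compiler
qsub hello.sh              # results end up in out-std.txt, out-err.txt and out-run.txt
```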
### hello.c
```c=
#include <stdio.h>

int main(void){
    printf("Hello World\n");
    return 0;
}
```
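Since the job requests `mpiprocs=2` and launches the binary with `mpirun`, an MPI-aware variant (a sketch, not part of the original notes) makes each rank identify itself:
```c=
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv){
    MPI_Init(&argc, &argv);               /* start the MPI runtime */
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks (2 here) */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```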