### Set up Frontera
1. Log into Frontera
`ssh <USERNAME>@frontera.tacc.utexas.edu`
First, type in your password.
If it succeeds, enter the autogenerated code from your two-factor authentication app.
2. Check available modules
`module av`
3. Load needed modules for GRChombo
`module load intel/18.0.5 impi/18.0.5 phdf5/1.10.4 `
> If you log in again in a different terminal window, you have to reload these modules
### Set up Chombo
4. Get Chombo in your home directory
`git clone https://github.com/GRChombo/Chombo.git`
5. Configure Chombo
```
cd Chombo/lib/mk
vi Make.defs.local
```
>Emacs sucks
6. Copy the following into Make.defs.local and modify it
> Increase the optimisation level of the compilation. I might have mentioned this in previous group meetings. Ask if unsure.
```
DIM = 3
DEBUG = FALSE
OPT = HIGH
PRECISION = DOUBLE
CXX = icpc
FC = ifort
MPI = TRUE
OPENMPCC = TRUE
MPICXX = mpiicpc
XTRACONFIG = .Skylake.Intel2018 # This just appends to all object files and executables
USE_64 = TRUE
USE_HDF = TRUE
HDFINCFLAGS = -I$(TACC_HDF5_INC)
HDFLIBFLAGS = -L$(TACC_HDF5_LIB) -lhdf5 -lz
HDFMPIINCFLAGS = -I$(TACC_HDF5_INC)
HDFMPILIBFLAGS = -L$(TACC_HDF5_LIB) -lhdf5 -lz
USE_MT = FALSE
cxxdbgflags = -g
cxxoptflags = -g -O0 -xCORE-AVX512 -qopt-zmm-usage=high
fdbgflags = -g
foptflags = -g -O0 -xCORE-AVX512 -qopt-zmm-usage=high
syslibflags = -mkl=sequential
RUN = ibrun -n 2 ./
```
7. Compile Chombo
```
cd .. #Go back to Chombo/lib
make lib -j 40
make test -j 40
```
8. Export the path as an environment variable
```
export CHOMBO_HOME=<Path>/Chombo/lib
```
> Optional: Check your environment variables with `env`
9. Go to scratch and clone GRChombo
```
cds # or cd $SCRATCH
git clone https://github.com/GRChombo/GRChombo.git
```
10. Compile the Binary Black Hole example
```
cd GRChombo/Examples/BinaryBH
make all -j
```
> Small break to discuss:
> * Login nodes
> * Compute nodes
> * Jobscripts
11. Create a jobscript and fill it out
> Clusters offer different queues for different job sizes and walltimes.
> This job will use 1 node for 20 min; which queue would be the right one? Ask if you have doubts.
> See https://frontera-portal.tacc.utexas.edu/user-guide/running/#frontera-production-queues or type `qlimits` for an overview of the queues.
> Enter your email for notifications about the job status.
> Protip: `sinfo -S+P -o "%18P %8a %20F"` gives an overview of resources ("Allocated", "Idle", "Other", and "Total" node counts per partition).
```
vi jobscript.slurm
```
```
#!/bin/bash
#! Job name
#SBATCH -J TEST_JOB_GRCHOMBO
#! Partition
#SBATCH -p <PUT RIGHT QUEUE HERE>
#! Number of nodes
#SBATCH -N 1
#! Number of MPI tasks
#SBATCH -n 28
#! Number of CPUs per task
#SBATCH -c 2
#! Wallclock time (d-hh:mm:ss)
#SBATCH -t 0:20:00
#! Email on job event
#SBATCH --mail-user=<PUT YOUR EMAIL HERE>
#! What events to email for
#SBATCH --mail-type=all
#! Dependency
##SBATCH --dependency=afterany
# Other commands must follow all #SBATCH directives...
# Load modules
module reset
module load intel/18.0.5 impi/18.0.5 phdf5/1.10.4
# Print information
module list
pwd
date
# Load virtual environment here
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
# Any code that should be executed in parallel goes in EXEC
EXEC="Main_BinaryBH3d.Linux.64.mpiicpc.ifort.OPTHIGH.MPI.OPENMPCC.Skylake.Intel2018.ex"
OPTIONS="params_very_cheap.txt"
# Launch MPI code...
ibrun $EXEC $OPTIONS
```
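The task layout in the jobscript can be sanity-checked with a few lines of plain Python. This sketch assumes Frontera's Cascade Lake compute nodes have 56 cores each (verify against the user guide): 28 MPI tasks with 2 OpenMP threads each then exactly fill one node.

```python
# Sanity check of the jobscript's MPI/OpenMP layout.
# Assumes 56 cores per Frontera CLX compute node (check the user guide).
CORES_PER_NODE = 56
nodes = 1              # #SBATCH -N 1
mpi_tasks = 28         # #SBATCH -n 28
threads_per_task = 2   # #SBATCH -c 2, exported as OMP_NUM_THREADS

total_threads = mpi_tasks * threads_per_task
assert total_threads == nodes * CORES_PER_NODE, "over- or under-subscribing the node"
print(f"{mpi_tasks} ranks x {threads_per_task} threads = {total_threads} cores")
```

If you change `-n` or `-c`, keep their product equal to the cores per node so the node is neither idle nor oversubscribed.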
12. After you have prepared the jobscript, submit it with
```
sbatch jobscript.slurm
```
13. Check the status of the job
```
squeue -u <YOUR USERNAME>
```
which will show the job ID. Further information can be extracted using
```
scontrol show job=<YOUR JOB ID>
```
> Come back to the main table.
### Visualising outputs using Python
14. Your job should have produced output files
```
cd hdf5 #GRChombo/Examples/BinaryBH/hdf5
```
15. We have Python scripts to render the output files of GRChombo. Clone the repository from https://github.com/ThomasHelfer/Postprocessing_tools.git.
> Ask if you don't know how.
16. Go to the scripts
```
cd Postprocessing_tools/YTAnalysisTools/
```
17. Create a new virtual environment
```
virtualenv dev
source dev/bin/activate
```
> Always use a virtual environment for Python
18. Install the required packages
```
pip install yt h5py mpi4py matplotlib==3.0.0
```
19. Make a new git branch
```
git branch <SOME BRANCHNAME>
git checkout <SOME BRANCHNAME>
```
20. Open parallel_pictures.py
```
vi parallel_pictures.py
```
21. Change parallel_pictures.py
> We want to plot chi from BinaryBH_*.3d.hdf5. Ask if you don't know how. Set
```
center = get_center(ts)
center[2] = 0
# Width of the plot (In simulation units)
width = 16
```
22. Run the code
```
python parallel_pictures.py
```
23. Open a new terminal and download the data
```
scp -r <YOUR USERNAME>@frontera.tacc.utexas.edu:<PATH TO CHI FOLDER>/hdf5/Postprocessing_tools/YTAnalysisTools/chi .
```
You should expect something like
![](https://i.imgur.com/tsYk7Mn.png)
24. Commit your changes
```
git add parallel_pictures.py
git commit -m "Added changes for BBH visualisation"
git push --set-upstream origin <SOME BRANCHNAME>
```
25. Modify the jobscript to run the visualisation on a compute node.
> You need to load the virtual environment.
> Set EXEC to "python parallel_pictures.py"
> and OPTIONS to "".
> If you struggle, the solution can be obtained with
```
git reset --hard
git clean -f
git checkout solution_exercise
```
### Changing GRChombo
In this example, we implement a topological domain wall on a black hole background. This
example is only here to show how to modify the code and should not be used as
a template for any science, since the Hamiltonian constraint is not satisfied. For reference,
we implement domain walls as described in equations (3.1.2) and (3.1.3) of "Cosmic Strings and Other Topological Defects"
by Vilenkin and Shellard. All needed equations can be found in this tutorial.
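For convenience, the potential and static wall profile used below, in the notation of the code changes that follow (λ is the quartic coupling, η the symmetry-breaking scale), are:

```latex
V(\phi) = \frac{\lambda}{4}\left(\phi^{2} - \eta^{2}\right)^{2},
\qquad
\phi(x) = \eta \,\tanh\!\left(\sqrt{\tfrac{\lambda}{2}}\;\eta\, x\right)
```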
* Go to the ScalarField example
```
cd GRChombo/Examples/ScalarField
```
NOTE: We use a specific version of the code to make sure this tutorial is up to date. However, newer versions of the code should not change fundamentally.
* Open InitialScalarData.hpp with your favourite text editor (which should be vim) and change line 47 from
```
data_t phi = m_params.amplitude *
    (1.0 + 0.01 * rr2 * exp(-pow(rr / m_params.width, 2.0)));
```
> Vim tip: Type ":set number" to see line numbers in vim
to the profile of a domain wall
```
data_t phi = eta * tanh(pow(lambda / 2.0, 0.5) * eta * x);
```
and add before it the recommended values
```
data_t lambda = 10000;
data_t eta = 0.01;
```
and also get the x coordinate from the coordinate class
```
data_t x = coords.x;
```
NOTE: We write data_t instead of double since that allows us to speed our code up using vector registers.
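As a quick illustration (plain Python, not GRChombo code), the profile above interpolates between the two vacua ±η and vanishes at the wall centre. A minimal sketch using the recommended values:

```python
import math

# Stand-alone sketch of the domain-wall profile from the C++ snippet above:
# phi(x) = eta * tanh(sqrt(lambda/2) * eta * x), with the recommended values.
lam = 10000.0   # "lambda" is a reserved word in Python
eta = 0.01

def phi(x):
    return eta * math.tanh(math.sqrt(lam / 2.0) * eta * x)

# The wall sits at x = 0 and interpolates between the vacua -eta and +eta.
print(phi(0.0))    # exactly 0 at the wall centre
print(phi(10.0))   # close to +eta
print(phi(-10.0))  # close to -eta
```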
* For domain walls, we also have to change the potential of the scalar field, which can be found in the file Potential.hpp.
Here we have to change line 33
```
V_of_phi = 0.5 * pow(m_params.scalar_mass * vars.phi, 2.0);
```
and line 37
```
dVdphi = pow(m_params.scalar_mass, 2.0) * vars.phi;
```
to reflect the new Mexican-hat potential:
```
V_of_phi = lambda / 4.0 * pow((vars.phi * vars.phi - eta * eta), 2);
dVdphi = lambda * vars.phi * (vars.phi * vars.phi - eta * eta);
```
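To convince yourself that the two lines are consistent, a small finite-difference check in plain Python (using the recommended values for lambda and eta) confirms that dVdphi is the derivative of V_of_phi:

```python
# Finite-difference check that dVdphi matches d(V_of_phi)/dphi.
lam = 10000.0
eta = 0.01

def V(phi):
    return lam / 4.0 * (phi * phi - eta * eta) ** 2

def dVdphi(phi):
    return lam * phi * (phi * phi - eta * eta)

phi0, h = 0.005, 1e-7
numeric = (V(phi0 + h) - V(phi0 - h)) / (2.0 * h)
print(abs(numeric - dVdphi(phi0)))  # difference should be tiny
```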
* We change some of the parameters in params.txt to be able to run it with few resources:
```
N_full = 64
L_full = 32
```
This reduces the physical size of the box.
```
max_level = 2
```
This allows the code to use at most two levels of refinement, which makes it fast to evaluate.
```
hi_boundary = 0 0 0
lo_boundary = 0 0 0
```
We change the boundaries to be static, simplifying our problem.
```
stop_time = 20
```
This makes sure our simulation only runs for a short amount of time.
```
kerr_spin = 0.7
```
This increases the spin of the black hole, making the dynamics of the problem more visually interesting. Lastly, we add
```
kerr_center = 16 16 16
```
which sets the position (x y z) of the black hole, in this case the center of the grid.
NOTE: We chose all of these parameters to make the example feasible to run in a short time frame. We recommend playing around with the variables in params.txt to see their effect on the simulation.
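These choices fix the resolution: the coarsest grid spacing is L_full / N_full, and each refinement level halves it (assuming a refinement ratio of 2 per level, which is GRChombo's usual choice). A quick check in plain Python:

```python
# Resolution implied by the parameters above.
# Assumes a refinement ratio of 2 per level (GRChombo's usual choice).
N_full, L_full, max_level = 64, 32, 2

dx_coarse = L_full / N_full           # coarsest grid spacing
dx_fine = dx_coarse / 2 ** max_level  # finest grid spacing
print(dx_coarse, dx_fine)
```

This also explains the bonus exercise: raising max_level to 3 halves the finest spacing again, at the cost of extra runtime.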
* Run the Python script
```
python parallel_pictures.py
```
* Visualise the output using the Postprocessing code. The code to do this is on another branch of https://github.com/ThomasHelfer/Postprocessing_tools.git. Use
```
git branch -r
```
to get an overview of the available branches, then
```
git checkout plot_domain_wall_exercise
```
and run the script parallel_pictures.py.
* We recommend checking out the outputs and comparing with https://youtu.be/Uf4gyWxhzlU
* Bonus: Make it higher resolution by increasing max_level to 3.