![](https://media.enccs.se/2025/10/Frame-7-1536x768.jpg)

<p style="text-align: center"><b><font size=5>GPU Programming: When, Why, and How - Day 2</font></b></p>

:::success
**Nov. 25-27, 09:00-12:30 (CET), 2025**
:::

:::success
**GPU Programming: When, Why, and How - Schedule**: https://hackmd.io/@ENCCS-Training/gpu-programming-2025-schedule
:::

## Schedule

| Time | Contents |
| :---------: | :----------: |
| 09:00-09:10 | Welcome and Recap |
| 09:10-10:30 | Portable kernel-based models <br>(Kokkos, alpaka, etc.) |
| 10:30-10:40 | Break |
| 10:40-12:00 | High-level language support |
| 12:00-12:30 | Q/A session |

---

## ==Kokkos resources==

- [About RAJA and El Capitan](https://www.hpcwire.com/2024/11/19/an-inside-look-at-el-capitan-facts-beyond-the-numbers/)
- [Kokkos.org](https://kokkos.org)
- [Kokkos tutorials](https://github.com/kokkos/kokkos-tutorials)
- [Kokkos Lecture Series](https://github.com/kokkos/kokkos-tutorials/wiki/Kokkos-Lecture-Series)

## ==Kokkos build and execution instructions on LUMI==

Set this in your environment:

```sh
export PROJECT_DIR=/projappl/project_465002387/${USER}
export SPACK_USER_PREFIX=${PROJECT_DIR}/spack

# load spack
module load spack

# list available kokkos versions, shows the hash and the build options
spack find -lv kokkos
```

Create a `CMakeLists.txt` file for building an application with Kokkos, in the same directory as the `hello.cpp` file:

```cmake
cmake_minimum_required(VERSION 3.16)
project(MyKokkosApp CXX)
find_package(Kokkos REQUIRED)
add_executable(hello.exe hello.cpp)
target_link_libraries(hello.exe Kokkos::kokkos)
```

To build the application:

```sh
cmake -DKokkos_DIR=$(spack location -i /xw7uigl)/lib64/cmake/Kokkos \
      -DCMAKE_CXX_COMPILER=hipcc \
      -S . -B exe/
cmake --build exe/
```

If you run `hello.exe` on the login node, you get:

```sh
~/projects/kokkos/hello> ./exe/hello.exe
terminate called after throwing an instance of 'std::runtime_error'
  what():  hipGetDeviceCount(&hipDevCount) error( hipErrorNoDevice): no ROCm-capable device is detected /tmp/peterlarsson/spack-stage/spack-stage-kokkos-4.1.00-xw7uiglpperxgsm55wgfr4aduha5k7nn/spack-src/core/src/HIP/Kokkos_HIP_Instance.cpp:420
Aborted
```

Submit a batch job to run the application on a compute node with a GPU.
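The `hello.cpp` source itself is not included in these notes. For reference, a minimal sketch that would produce the execution- and memory-space lines seen in the batch output below (it only initializes Kokkos and prints the compiler-mangled names of the default spaces) could look like this:

```cpp
#include <Kokkos_Core.hpp>
#include <iostream>
#include <typeinfo>

int main(int argc, char* argv[]) {
  Kokkos::initialize(argc, argv);
  {
    // With the HIP backend these print the mangled type names,
    // e.g. N6Kokkos3HIPE and N6Kokkos8HIPSpaceE
    std::cout << "Execution Space: "
              << typeid(Kokkos::DefaultExecutionSpace).name() << "\n";
    std::cout << "Memory Space: "
              << typeid(Kokkos::DefaultExecutionSpace::memory_space).name() << "\n";
  }
  Kokkos::finalize();
  return 0;
}
```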
Here is an example of a SLURM job script `hello_job.sh`, created in the same directory as the `CMakeLists.txt` and `hello.cpp` files:

```sh
#!/bin/bash
#SBATCH --job-name=hello_job
#SBATCH --account=project_465002387
#SBATCH --partition=dev-g
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --gpus-per-node=1
#SBATCH --time=00:05:00
#SBATCH --output=hello_%j.out
#SBATCH --error=hello_%j.err
#
## Load necessary modules (adjust as needed)
#module load LUMI/22.08
#module load rocm/5.2.3
#
## Print some job information
echo "Job started at: $(date)"
echo "Running on node: $(hostname)"
echo "Job ID: $SLURM_JOB_ID"
#
## Run the executable
srun ./exe/hello.exe
#
echo "Job finished at: $(date)"
```

Submit the job script with:

```sh
sbatch hello_job.sh
```

The batch output file should contain something similar to this:

```
Job started at: Wed Nov 26 00:06:57 EET 2025
Running on node: nid007972
Job ID: 14937057
Execution Space: N6Kokkos3HIPE
Memory Space: N6Kokkos8HIPSpaceE
Job finished at: Wed Nov 26 00:06:58 EET 2025
```

---

## ==alpaka resources==

- [Documentation](https://alpaka3.readthedocs.io/en/latest/)
- [Source](https://github.com/alpaka-group/alpaka3)
- [Cheatsheet](https://alpaka3.readthedocs.io/en/latest/basic/cheatsheet.html)

### Workshop specific resources

- [Training material](https://enccs.github.io/gpu-programming/8-portable-kernel-models/#alpaka)
- [Exercise](https://enccs.github.io/gpu-programming/8-portable-kernel-models/#id3)

## ==alpaka build and execution instructions on LUMI==

Load the required modules and set the environment variables:

```sh
module load LUMI/24.03 partition/G
module load rocm/6.0.3
module load buildtools/24.03
module load PrgEnv-amd
module load craype-accel-amd-gfx90a
export CXX=hipcc
```

Here is a simple `CMakeLists.txt` to start with alpaka on LUMI. It is hard-coded to use AMD GPUs. An example of a more flexible setup can be found in the [documentation](https://alpaka3.readthedocs.io/en/latest/basic/install.html#id3).
```cmake
cmake_minimum_required(VERSION 3.25)
project(vectorAdd LANGUAGES CXX VERSION 1.0)

# Use CMake's FetchContent to download and integrate alpaka3 directly from GitHub
include(FetchContent)

# Declare where to fetch alpaka3 from
# This will download the library at configure time
FetchContent_Declare(alpaka3
    GIT_REPOSITORY https://github.com/alpaka-group/alpaka3.git
    GIT_TAG dev)

# Make alpaka3 available for use in this project
# This downloads, configures, and makes the library targets available
FetchContent_MakeAvailable(alpaka3)

# Finalize the alpaka FetchContent setup
alpaka_FetchContent_Finalize()

# Create the executable target from the source file
add_executable(vectorAdd main.cpp)

# Link the alpaka library to the executable
target_link_libraries(vectorAdd PRIVATE alpaka::alpaka)

# Finalize the alpaka configuration for this target
# This sets up backend-specific compiler flags and dependencies
alpaka_finalize(vectorAdd)
```

A simple hello-world example that prints the device currently in use is given below as `main.cpp`:

```cpp
#include <alpaka/alpaka.hpp>

#include <cstdlib>
#include <iostream>

namespace ap = alpaka;

auto getDeviceSpec()
{
    /* Select a device, possible combinations of api+deviceKind:
     * host+cpu, cuda+nvidiaGpu, hip+amdGpu, oneApi+intelGpu, oneApi+cpu,
     * oneApi+amdGpu, oneApi+nvidiaGpu
     */
    return ap::onHost::DeviceSpec{ap::api::hip, ap::deviceKind::amdGpu};
}

int main(int argc, char** argv)
{
    // Initialize device specification and selector
    ap::onHost::DeviceSpec devSpec = getDeviceSpec();
    auto deviceSelector = ap::onHost::makeDeviceSelector(devSpec);

    // Query available devices
    auto num_devices = deviceSelector.getDeviceCount();
    std::cout << "Number of available devices: " << num_devices << "\n";

    if (num_devices == 0)
    {
        std::cerr << "No devices found for the selected backend\n";
        return EXIT_FAILURE;
    }

    // Select and initialize the first device
    auto device = deviceSelector.makeDevice(0);
    std::cout << "Using device: " << device.getName() << "\n";

    return EXIT_SUCCESS;
}
```

---

:::danger
You can ask questions about the workshop content at the bottom of this page. We use the Zoom chat only for reporting Zoom problems and such.
:::

## Questions, answers and information

- Is this how to ask a question?
    - Yes, and an answer will appear like so!

### 3. [Portable kernel-based models](https://enccs.github.io/gpu-programming/8-portable-kernel-models/)

- Running [`cmake --build exe/`](https://hackmd.io/mdVbtpHPQiOcYnv2t-dIzg?both=&stext=1667%3A108%3A0%3A1764145799%3AnpP9rN) led to the error:

```
c++: error: unrecognized command line option '-fno-gpu-rdc'; did you mean '-fno-gnu-tm'?
c++: error: unrecognized command line option '--offload-arch=gfx90a'; did you mean '--offload-abi=ilp32'?
```

- Do I need to specify something more than what is given in this documentation?
    - Oh, sorry, you also need `-DCMAKE_CXX_COMPILER=hipcc` in the first `cmake` statement. Remove the `exe` subdirectory and start over. I will add the `-DCMAKE...` option to the cmake statement above.
    - Thanks. That helped.
- Why do I get the error below?

```
> ml buildtools/24.03
Lmod has detected the following error: These module(s) or extension(s) exist
but cannot be loaded as requested: "buildtools/24.03"
   Try: "module spider buildtools/24.03" to see how to load the module(s).
```

- How to fix this?
    - I think that module requires some other modules. Please try `module load LUMI; module load partition/G; module load buildtools/24.03;`
        - I already loaded the `partition/G` module.
    - Could the issue be with sticky modules present on the node? See the sketch below.
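For the sticky-module problem, a recovery sketch, assuming the LUMI/24.03 module stack from the alpaka instructions above (`module --force purge` is what resolved the same issue for another participant further down):

```sh
# sticky modules survive a plain `module purge`, so force it, then reload
module --force purge
module load LUMI/24.03 partition/G
module load rocm/6.0.3
module load buildtools/24.03
module load PrgEnv-amd
module load craype-accel-amd-gfx90a
```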
- Does **alpaka** support different backends in the same program? Like 1 CPU, 1 NVIDIA GPU, 1 Intel GPU.
    - Yes, alpaka supports this; it is a feature we explicitly want to support.
    - To be more specific for your example: to use a CPU, an NVIDIA GPU and an Intel GPU together, you will need to use the SYCL (oneAPI) backend. After this you can use the device selector to use whatever device you want, wherever you want.
    - AdaptiveCpp supports NVIDIA and AMD, I think at the same time. But just to confirm: if the backend supports different accelerators, then alpaka will use that?
        - Currently we do not plan to support AdaptiveCpp. The problem with standard SYCL is that some features needed to reach the full performance of a system are missing (e.g. statically sized shared/local memory); we heavily rely on some oneAPI extensions. Since you can access all currently available accelerators/CPUs via native CUDA/HIP or OpenMP/TBB, there is no strong reason to. This can change at any time, and we hope that the oneAPI SYCL extensions will go into the next SYCL standard.
        - Good point about using the native CUDA + HIP backends for NVIDIA + AMD. Thanks for the replies.
    - Follow-up question: would alpaka be able to use AdaptiveCpp as a backend?
- There is a typo in https://enccs.github.io/gpu-programming/8-portable-kernel-models/#installing-alpaka-on-your-system:
    - In "1. Clone the repository" it should be `cd alpaka3`, not `cd alpaka`.
    - Thank you for pointing this out! You are right.
    - But to be clear for everyone: right now, for the exercise, we recommend using the FetchContent example given in the exercise section [here](https://enccs.github.io/gpu-programming/8-portable-kernel-models/#id3).
    - If you cloned the `gpu-programming` repository, you can also copy the files from there instead of creating them with vim:
        - `cp gpu-programming/content/examples/portable-kernel-models/alpaka-exercise-vectorAdd.txt CMakeLists.txt`
        - `cp gpu-programming/content/examples/portable-kernel-models/alpaka-exercise-vectorAdd.cpp main.cpp`
- I get the following error when building the example above:

```
error: static assertion failed due to requirement 'DeviceSpec<alpaka::api::Hip, alpaka::deviceKind::AmdGpu>::isValid()': Invalid combination of device kind and api. The api does not know how to talk to the device or the required dependencies to enable the api are not fulfilled.
```

- This can happen if you missed `-Dalpaka_DEP_HIP=ON` when running CMake.
    - Ah! That is the one!
- cmake error

```
> cmake -B build -S . -DCMAKE_INSTALL_PREFIX=$ALPAKA_DIR
CMake Error at CMakeLists.txt:7 (cmake_minimum_required):
  CMake 3.25 or higher is required.  You are running version 3.20.4
-- Configuring incomplete, errors occurred!
```

- The issue was because of sticky modules; `module --force purge` and then loading the modules again (`ml xyz abc`) helped.
- Did you load the modules before you installed alpaka? The default CMake is 3.20.4

```
module load LUMI/24.03 partition/G
module load rocm/6.0.3
module load buildtools/24.03
module load PrgEnv-amd
module load craype-accel-amd-gfx90a
export CXX=hipcc
```

- Same here

```
CMake Error at CMakeLists.txt:5 (find_package):
  By not providing "Findalpaka.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "alpaka", but
  CMake did not find one.
  Could not find a package configuration file provided by "alpaka" with any
  of the following names:

    alpakaConfig.cmake
    alpaka-config.cmake

  Add the installation prefix of "alpaka" to CMAKE_PREFIX_PATH or set
  "alpaka_DIR" to a directory containing one of the above files.  If "alpaka"
  provides a separate development package or SDK, be sure it has been
  installed.

-- Configuring incomplete, errors occurred!

cristian@uan02 /scratch/project_462000007/cristian/alpaka_test/examples $ cmake --build build --parallel
gmake: Makefile: No such file or directory
gmake: *** No rule to make target 'Makefile'.  Stop.
cristian@uan02 /scratch/project_462000007/cristian/alpaka_test/examples $ echo $ALPAKA_DIR
/scratch/project_462000007/cristian/alpaka_test/alpaka3
cristian@uan02 /scratch/project_462000007/cristian/alpaka_test/examples $ echo $alpaka_DIR
/scratch/project_462000007/cristian/alpaka_test/alpaka3
cristian@uan02 /scratch/project_462000007/cristian/alpaka_test/examples $ module list

Currently Loaded Modules:
  1) perftools-base/24.03.0
  2) cce/17.0.1
  3) craype/2.7.31.11
  4) cray-dsmml/0.3.0
  5) cray-mpich/8.1.29
  6) cray-libsci/24.03.0
  7) PrgEnv-cray/8.5.0
  8) ModuleLabel/label                      (S)
  9) lumi-tools/24.05                       (S)
 10) init-lumi/0.2                          (S)
 11) LUMI/24.03                             (S)
 12) craype-x86-trento
 13) craype-accel-amd-gfx90a
 14) libfabric/1.15.2.0
 15) craype-network-ofi
 16) xpmem/2.8.2-1.0_5.1__g84a27a5.shasta
 17) partition/G                            (S)
 18) buildtools/24.03
 19) rocm/6.0.3

  Where:
   S:  Module is Sticky, requires --force to unload or purge
```

- Could you show the CMake output?
- It seems like you tried to install alpaka, but CMake can't find it. Did you add it to the CMake prefix path?
    - `export CMAKE_PREFIX_PATH=$ALPAKA_DIR:$CMAKE_PREFIX_PATH`

```bash
$ cmake -B build -S . -Dalpaka_DEP_HIP=ON
-- The C compiler identification is Clang 17.0.0
-- The CXX compiler identification is Clang 17.0.0
-- Cray Programming Environment 2.7.31.11 C
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /opt/cray/pe/craype/2.7.31.11/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /opt/rocm-6.0.3/bin/hipcc - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:5 (find_package):
  By not providing "Findalpaka.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "alpaka", but
  CMake did not find one.

  Could not find a package configuration file provided by "alpaka" with any
  of the following names:

    alpakaConfig.cmake
    alpaka-config.cmake

  Add the installation prefix of "alpaka" to CMAKE_PREFIX_PATH or set
  "alpaka_DIR" to a directory containing one of the above files.  If "alpaka"
  provides a separate development package or SDK, be sure it has been
  installed.

-- Configuring incomplete, errors occurred!

$ echo $CMAKE_PREFIX_PATH
/scratch/project_462000007/cristian/alpaka_test/alpaka3:/opt/rocm-6.0.3:/appl/lumi/SW/LUMI-24.03/common/EB/buildtools/24.03:/opt/rocm-6.0.3/lib/cmake/hip:/appl/lumi/SW/system/EB/lumi-tools/24.05
```

- Is `/scratch/project_462000007/cristian/alpaka_test/alpaka3` pointing to the source code or the installed alpaka version?
    - Maybe run `ls /scratch/project_462000007/cristian/alpaka_test/alpaka3`
- Good question.
  I copy-pasted the commands from the webpage (https://enccs.github.io/gpu-programming/8-portable-kernel-models/#alpaka). I thought they were complete. I guess I messed up at this point (facepalm)
    - We believe they should be complete, but if you find out what was missing, please let us know; we want to fix it!
    - Most likely: `export ALPAKA_DIR=/path/to/your/alpaka/install/dir` was set to the source code and not to the installed alpaka.
        - I set the install folder the same as the source. So it probably mixed up the source and the installation.
        - Yes, if the source and install folders are the same, the installation is in a limbo state.
    - If you do not want to install alpaka, you could also clone alpaka and then use `add_subdirectory()`: https://alpaka3.readthedocs.io/en/latest/basic/install.html#use-the-source-code-without-installation
        - Within the CMake, instead of using `find_package` you should use `add_subdirectory("<path_to_cloned_alpaka3>" "${CMAKE_BINARY_DIR}/alpaka")`
- Can the main.cpp and CMakeLists.txt be created inside the clone of alpaka3 for testing the code?
    - No, the `CMakeLists.txt` cannot be in the clone of alpaka, else you would overwrite the original one. For CMake, a good practice is to always perform out-of-source builds, so never create a build folder in your source code folder.
    - So creating some exercise folder with a `main` and a `CMakeLists` does work?
        - No, alpaka's `CMakeLists.txt` only handles alpaka itself.
    - Additional information can be found at https://alpaka3.readthedocs.io/en/latest/basic/install.html#use-fetchcontent-to-download-alpaka-automatically; it shows nearly the same example we showed in the workshop.
- Is the `PrgEnv-amd` module compulsory?
    - There is no strong need; you can use any other environment. If you want to use the AMD GPU, you need `hipcc` or any other HIP compiler, e.g. `clang++` from the ROCm folder. We used `module load PrgEnv-amd` and `hipcc` because CMake has issues with `CC` and does not detect how to enable OpenMP.
    - Note: alpaka supports `clang`, `gcc`, `icpx` and `nvcc`; which one you use depends on where you would like to run your code.
- For everyone who likes to start with godbolt (vector add which uses any available accelerator):
    - CPU: https://godbolt.org/z/WvosWvMe8
    - CUDA: https://godbolt.org/z/aYc33eK9n
- I managed to get it working now. For the alpaka compilation I did a `make install` after the page instructions:

```
cd alpaka3
mkdir build
cmake -B build -S . -DCMAKE_INSTALL_PREFIX=$ALPAKA_DIR
cmake --build build --parallel
cd build; make install
```

- For compiling the first example I used:

```
CC -I $ALPAKA_DIR/include/ -std=c++20 -x hip --offload-arch=gfx90a device_info.cpp
```

```
$ srun -n 1 -c 7 --gpus 2 --account=project_462000007 -p small-g -t 00:30:00 ./a.out
Number of available devices: 2
Using device: AMD Instinct MI250X id=0
```

:::danger
**Break until XX:50**
:::

### 4. [High-level language support](https://enccs.github.io/gpu-programming/9-language-support/)

#### Load Julia on LUMI

- For the Julia exercise, I recommend opening a new terminal and SSH session to avoid module conflicts.

```bash
# interactive GPU node
srun -p dev-g --gpus 1 -N 1 -n 1 --time=00:20:00 --account=project_465002387 --pty bash

# load Julia env
module purge
module use /appl/local/csc/modulefiles
module load julia
module load julia-amdgpu

# verify that a GPU is available
rocm-smi
```

#### Install AMDGPU for your project

```bash
mkdir julia_amdgpu_example && cd julia_amdgpu_example
julia --project=.
# now you are in the Julia REPL (exit it with `exit()` or Ctrl+D if required)
using Pkg

# takes a while to download and compile the package
Pkg.add("AMDGPU")
using AMDGPU

# now you are ready to use the AMD GPU
# display available GPUs
AMDGPU.devices()
```

- ==Lesson materials for two workshops about Julia==
    - Julia for high-performance data analytics: https://enccs.github.io/julia-for-hpda/
    - Julia for high-performance scientific computing: https://enccs.github.io/julia-for-hpc/
- Can we use Julia files for benchmarking or execution, as with `.cpp`/`.c`/`.cu`/`.py`/...?
    - You can. The files have the extension `.jl`: `julia main.jl`
    - I didn't talk about source files because the REPL works better for teaching. But for production code, you have to write source code files.
    - For compilation, what do we use: meson / Makefiles / CMakeLists / ...?
        - Julia has its own package format, and it downloads the dependencies and compiles all the code without external tools.
        - A Julia package normally provides a `Project.toml`, which describes the project and defines dependencies. If you want to use the project, you run the following commands:

```bash
cd /to/the/project/folder

# download all dependencies and precompile the code
# normally needs to be done one time
julia --project=. -e 'import Pkg; Pkg.instantiate()'

# run the actual code
julia --project=. src/main.jl
```

- IPython error

````
$ from numba import hip
/.venv/lib/python3.12/site-packages/rocm/amd_comgr/amdhsa_kernel_directives.py:37: SyntaxWarning: invalid escape sequence '\('
  const p_except = /\(except (?<exceptions>([A-Z0-9_]+)+)\)/
---------------------------------------------------------------------------
PermissionError                           Traceback (most recent call last)
File /users/mohanana/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/pathlib.py:1311, in Path.mkdir(self, mode, parents, exist_ok)
   1310 try:
-> 1311     os.mkdir(self, mode)
   1312 except FileNotFoundError:

PermissionError: [Errno 13] Permission denied: '/tmp/numba/hip/uid_327005061'

During handling of the above exception, another exception occurred:

PermissionError                           Traceback (most recent call last)
Cell In[1], line 1
----> 1 from numba import hip

File /.venv/lib/python3.12/site-packages/numba/hip/__init__.py:51
     48 import sys
     50 from . import api_util  # noqa: F401, E402
---> 51 from . import hipdrv  # noqa: E402
     52 from . import hipconfig, util  # noqa: F401
     54 # from numba import runtests
     55
     56
    (...)
     77 # Derived modules, make local packages submodules
     78 # -----------------------------------------------

File /.venv/lib/python3.12/site-packages/numba/hip/hipdrv/__init__.py:68
     65 import os  # noqa: E402
     66 import re  # noqa: E402
---> 68 import numba.hip.util.modulerepl as _modulerepl  # noqa: E402
     70 mr = _modulerepl.ModuleReplicator(
     71     "numba.hip.hipdrv",
     72     os.path.join(os.path.dirname(__file__), "..", "..", "cuda", "cudadrv"),
    (...)
     76     ).replace("cudadrv", "hipdrv"),
     77 )
     79 # order is important here!

File /.venv/lib/python3.12/site-packages/numba/hip/util/__init__.py:23
      1 # MIT License
      2 #
      3 # Modifications Copyright (C) 2023-2024 Advanced Micro Devices, Inc. All rights reserved.
    (...)
     20 # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
     21 # SOFTWARE.
---> 23 from . import comgrutils, fscache, llvmutils, modulerepl
     25 __all__ = ["comgrutils", "fscache", "llvmutils", "modulerepl"]

File /.venv/lib/python3.12/site-packages/numba/hip/util/fscache.py:107
    104     clear_cache()
    106 if _hipconfig.USE_DEVICE_LIB_CACHE:
--> 107     init_cache()

File /.venv/lib/python3.12/site-packages/numba/hip/util/fscache.py:92, in init_cache()
     90 def init_cache():
     91     cache_dir = get_cache_dir()
---> 92     Path(cache_dir).mkdir(parents=True, exist_ok=True)
     93     _log.info(f"created/reuse Numba HIP cache directory '{cache_dir}'")

File /users/mohanana/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/pathlib.py:1320, in Path.mkdir(self, mode, parents, exist_ok)
   1316     self.mkdir(mode, parents=False, exist_ok=exist_ok)
   1317 except OSError:
   1318     # Cannot rely on checking for EEXIST, since the operating system
   1319     # could give priority to other errors like EACCES or EROFS
-> 1320     if not exist_ok or not self.is_dir():
   1321         raise

File /users/mohanana/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/pathlib.py:875, in Path.is_dir(self)
    871 """
    872 Whether this path is a directory.
    873 """
    874 try:
--> 875     return S_ISDIR(self.stat().st_mode)
    876 except OSError as e:
    877     if not _ignore_error(e):

File /users/mohanana/.local/share/uv/python/cpython-3.12.12-linux-x86_64-gnu/lib/python3.12/pathlib.py:840, in Path.stat(self, follow_symlinks)
    835 def stat(self, *, follow_symlinks=True):
    836     """
    837     Return the result of the stat() system call on this path, like
    838     os.stat() does.
    839     """
--> 840     return os.stat(self, follow_symlinks=follow_symlinks)

PermissionError: [Errno 13] Permission denied: '/tmp/numba/hip/uid_327005061'
````

#### [Python](https://enccs.github.io/gpu-programming/9-language-support/#python)

To type along, or for later, you can use the containers on LUMI. For Numba and CuPy: https://enccs.github.io/gpu-programming/0-setup/#running-python

- I'm getting the error:
    - `srun --pty singularity exec --no-home container_numba_hip_fixed.sif bash`
    - `srun: error: Unable to create step for job 14946022: More processors requested than permitted`

#### Slides from last week's webinar

For more code examples which use the GPU:

- https://github.com/ENCCS/gpu-programming/blob/main/content/slides/5-intro-to-gpu-prog-models-high-level/talk.md
- https://numba.readthedocs.io/en/stable/cuda/ufunc.html#example-basic-example

#### For JAX, choose a container from this path

/appl/local/containers/sif-images/

For example:

```console
$ srun --pty singularity exec /appl/local/containers/sif-images/lumi-jax-rocm-6.2.4-python-3.12-jax-0.4.35.sif bash
Singularity> $WITH_CONDA
Singularity> python
Python 3.12.9 | packaged by Anaconda, Inc. | (main, Feb  6 2025, 18:56:27) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import jax
>>> jax.devices()
[RocmDevice(id=0)]
```

- At the moment we are on an interactive shell node, but in production, if we write benchmarks, how well can this be done in `.py`/`.ipy` files?
    - To run such things in production, the code should be in a `.py` file.
    - The most robust way is to use [pyperf](https://pyperf.readthedocs.io/en/latest/run_benchmark.html).
    - The `time` and `timeit` standard library modules can also be used; a minimal sketch is given at the end of this document.
- ==Will the slides from HZDR on alpaka and C++ std porting be available?==

:::info
*Always ask questions at the very bottom of this document, right **above** this.*
:::

---
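As referenced in the benchmarking answer above, here is a minimal sketch of timing a GPU operation from a plain `.py` file with `timeit`. The CuPy workload (array size, elementwise expression) is purely illustrative and assumes CuPy is available, e.g. inside one of the LUMI containers mentioned above:

```python
import timeit

import cupy as cp  # assumption: CuPy is installed, e.g. in the LUMI container

# illustrative workload: an elementwise expression on a 4096x4096 array
x = cp.random.random((4096, 4096), dtype=cp.float32)

def work():
    y = x * 2.0 + 1.0
    # GPU kernels launch asynchronously; synchronize so we time the work,
    # not just the kernel launch
    cp.cuda.Device().synchronize()
    return y

work()  # warm-up run so compilation/allocation costs are excluded
print(timeit.timeit(work, number=100) / 100, "s per call")
```

The explicit synchronization is the important detail: GPU launches return immediately, so without it only the launch overhead is measured. For production benchmarks, `pyperf` handles warm-ups and statistics more robustly.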