Reflections From Previous Training Events
:::info
Contents of this document and quick links:
:::
<span style="background-color: gold">1. Julia</span>
deeper HPC-related issues with Julia
developing programs locally that target HPC, i.e. use of Docker and/or Apptainer
All ENCCS Lessons (105 Repositories)
<span style="background-color: cyan">1. Lessons should be published on ENCCS lesson webpage</span>
Quantum Autumn School 2024: there is a separate repo QAS24_Finance
if this repo is a part of QAS24, please create a link at QAS24 pointing to QAS24_Finance
Thor: AA needs to say how this should be included
intro-cmake
Thor: how does this relate to https://github.com/ENCCS/cmake-workshop?
it is a shortened version of cmake-workshop
<font size=5 color=blueyellow>Practical Intro to GPU programming using Python</font>
Contents of this document and quick links:
About the webinar
In the past decade, Graphics Processing Units (GPUs) have ignited the dynamic evolution of data science. But GPUs can do a lot more than machine learning - these powerful devices can accelerate and massively parallelise any general-purpose computational load in domains involving big data and heavy number crunching. You can use the GPU in your personal computer, or scale up your application to run on a supercomputer. How can you get started?
In this webinar, we focus on GPU-accelerated computing with Python, one of the most popular programming languages for science, engineering, data analytics, and deep learning applications. Starting from familiar Python libraries such as NumPy and Pandas, we will guide you step by step into the world of GPU programming. Discover how to harness the power of GPU accelerators using libraries such as CuPy, cuDF, PyCUDA, JAX, and Numba, with a focus on their unique features and capabilities for high-performance computing.
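Much of the appeal of these libraries is that they mirror the NumPy API, so moving an array computation to the GPU is often a one-line change. The sketch below illustrates this with a hypothetical helper function (not from the webinar materials); it runs on the CPU here, with the CuPy swap shown as a comment since it requires a GPU and a CuPy installation.

```python
import numpy as np

# The same code runs on the GPU by swapping the import, e.g.:
#   import cupy as xp   # GPU path (requires CuPy and a CUDA/ROCm GPU)
xp = np  # CPU fallback used in this sketch

def rms(values):
    """Root-mean-square of an array: identical code for NumPy and CuPy."""
    arr = xp.asarray(values, dtype=xp.float64)
    return float(xp.sqrt(xp.mean(arr * arr)))

print(rms([3.0, 4.0]))  # sqrt((9 + 16) / 2) ~= 3.536
```

Because CuPy follows the NumPy API, code written this way (against an `xp` alias) can target either backend without modification.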
Who is the webinar for?
Basic Deep Learning Tasks from CPUs to GPUs
Part of the course Multi-GPU Artificial Intelligence: Scaling AI with HPC organized by CASTIEL2 and NCCs.
Something about GPU after intro to GPU architectures
GPU programming
GPU programming concepts ==???==
Introduction to GPU programming models ==???==
==10-15 min?==
Practical Deep Learning - Planning materials
:::success
May 6-8, 9:00-12:00 (CET), 2025
:::
Contents of this document and quick links:
About the course
Deep learning is a subset of machine learning that focuses on training artificial neural networks with multiple layers to recognize patterns and to simulate the complex decision-making power of the human brain. The use of deep learning has seen a significant increase in popularity and applicability over the last decade. While it serves as a powerful tool for researchers across various domains, taking the first steps into the world of deep learning can be somewhat intimidating.
<font size=5 color=blueyellow>[ENCCS Webinar] Practical Introduction to GPU Programming</font>
:::success
Mar. 27, 12:00-13:30 (CET), 2025
:::
Contents of this document and quick links:
Title
==Practical Introduction to GPU Programming==
<font size=5 color=blueyellow>[ENCCS Webinar] Software Installation on HPC</font>
:::success
May 13, 12:00-13:30 (CET), 2025
:::
Contents of this document and quick links:
Title
==[ENCCS Webinar] Software Installation on HPC==
<font size=5 color=blueyellow>Development of algorithms for partial multi-label machine learning</font>
Contents of this document and quick links:
Title
==[ENCCS Webinar]: Development of algorithms for partial multi-label machine learning==
About the webinar
Machine learning is a branch of artificial intelligence that enables computers to learn from data and make predictions or decisions without being explicitly programmed. Multi-label learning is a type of machine learning problem where each data instance can be associated with multiple labels simultaneously. Partial multi-label learning addresses problems where each instance is assigned a candidate label set and only a subset of these candidate labels is correct. Partial multi-label learning is particularly useful in scenarios where perfect labeling is expensive or impractical, making it an essential area in weakly supervised learning. However, a major challenge of partial multi-label learning is that the training procedure can easily be misguided by noisy labels.
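To make the setting concrete, here is a minimal sketch (with illustrative labels and data, not taken from the webinar) of how a partial multi-label dataset can be represented: each instance carries a binary candidate-label vector, and the hidden ground-truth labels form a subset of those candidates.

```python
import numpy as np

labels = ["cat", "dog", "outdoor", "vehicle"]

# Candidate label matrix: a 1 marks a label proposed for that instance.
Y_candidate = np.array([
    [1, 1, 1, 0],   # instance 0: candidates {cat, dog, outdoor}
    [0, 0, 1, 1],   # instance 1: candidates {outdoor, vehicle}
])

# Hidden ground truth (unknown at training time): a subset of the
# candidates; the remaining candidates are the noisy labels that can
# misguide training.
Y_true = np.array([
    [1, 0, 1, 0],   # truly {cat, outdoor}; "dog" was a noisy candidate
    [0, 0, 0, 1],   # truly {vehicle}; "outdoor" was a noisy candidate
])

# The defining constraint of partial multi-label learning:
# every true label must appear among the candidates.
assert np.all(Y_true <= Y_candidate)
```

The learner only ever sees `Y_candidate`; the goal is to disambiguate which candidates are truly relevant despite the noisy ones.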
Julia for High-Performance Data Analysis - Schedule
:::success
Feb. 4 - 7, 09:00 - 12:00 (CET), 2025
:::
General information
:::info
Links for the workshop:
all-hands-meeting
ENCCS All-Hands Meeting - Training Session (250131)
Contents of this document and quick links:
<span style="background-color: cyan">1. Python HPDA retrospective</span>
1.1 Reflections from participants
==the second episode (efficient array computing) was quite packed==
High Performance Data Analytics in Python - Event Page
:::success
Jan. 21-23, 9:00-12:00 (CET), 2025
:::
General introduction
Welcome to the online workshop on High Performance Data Analytics in Python on Jan. 21-23 (2025). Python is a modern, object-oriented, industry-standard programming language for working with data at all levels of the data analytics pipeline. A rich ecosystem of libraries, ranging from generic numerical libraries to special-purpose and domain-specific packages, has been developed in Python for data analysis and scientific computing.
This three half-day online workshop is meant to give an overview of working with research data in Python using general libraries for storing, processing, analyzing, and sharing data. The focus is on improving performance. After covering tools for performant processing (NetCDF, NumPy, Pandas, SciPy) on single workstations, the focus shifts to parallel, distributed, and GPU computing (Snakemake, Numba, Dask, multiprocessing, mpi4py).
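As a minimal sketch of the parallel part (standard library only; the function and data are illustrative, not from the workshop materials), `multiprocessing.Pool` distributes independent chunks of work across CPU cores:

```python
from multiprocessing import Pool

def column_sum(chunk):
    """CPU-bound work on one chunk of rows (illustrative task)."""
    return [sum(col) for col in zip(*chunk)]

if __name__ == "__main__":
    # Split a table of rows into chunks and process them in parallel.
    rows = [[i, i * 2] for i in range(1000)]
    chunks = [rows[i:i + 250] for i in range(0, len(rows), 250)]
    with Pool(processes=4) as pool:
        partial_sums = pool.map(column_sum, chunks)
    # Combine per-chunk results into the final column totals.
    totals = [sum(col) for col in zip(*partial_sums)]
    print(totals)
```

This map-reduce pattern (independent chunks, then a cheap combine step) is the same idea that Dask and mpi4py scale up beyond a single machine.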
Notes for MoroccoHPC-NTNU-ENCCS Webinars
:::success
A good starting point for collaboration to deliver webinars on various aspects of programming, AI/ML/DL, and HPC DevOps.
:::
Contents of this document and quick links:
General info
For each webinar, we should have the relevant information listed below
High-Performance Data Analytics with Python - Schedule
:::success
Jan. 21 - 23, 09:00 - 12:00 (CET), 2025
:::
General information
:::info
Links for the workshop:
ENCCS participates in GPU hackathon
The LUMI GPU / Nomad CoE Hackathon was hosted on Sept. 4-6 at CSC – IT Center for Science, the Finnish center of expertise in information technology, for research software developer teams targeting the GPU partition of LUMI with its AMD MI250X GPUs. Seven teams with projects focusing on computational materials science and computational fluid dynamics were invited, and the participating teams were mentored by experts from AMD, HPE, and EuroHPC Competence Centers.
Two members from ENCCS, Yonglei Wang and Wei Li, served as active mentors at this GPU hackathon.
During the three-day GPU hackathon, Yonglei worked with the Quantum ESPRESSO (a suite for first-principles electronic-structure calculations and materials modeling) team (Fabrizio Ferrari Ruffino, Ivan Carnimeo, Oscar Baseggio, and Laura Bellentani), focusing on batched/streamed FFTs (asynchronous execution, data movement) and on porting and profiling the Hubbard code (matrices, optimal batch sizes).
For the first topic, we proposed rewriting the double loops as HIP kernels so that they can be executed on given streams, implemented the relevant kernels, and then compared the performance of FFT schemes with and without streamed computation in the CUDA and HIP code.
For the second topic, we worked on the Hubbard code (force and stress), unifying the interfaces for different offload models (OpenACC vs. OpenMP), identified the bottlenecks in the Hubbard code, and found a suitable test case to trace the performance of different code blocks.
Wei Li served as one of the mentors for the FHI-AIMS team. The team consisted of three talented members from TU Dresden and Aalto University. They quickly adapted their CUDA code to run on a LUMI-G node using hipify. At the beginning, the code was even slower than the CPU version, but they soon realized there was a large overhead caused by creating streams and allocating arrays inside the nested loops. With the guidance of the AMD expert, they solved this problem. The three-day hackathon was not enough for them to implement their final idea of reordering the loops related to tensor operations for better GPU suitability. Best wishes to the FHI-AIMS team.