High Performance Data Analytics in Python - Event Page

Jan. 21-23, 9:00-12:00 (CET), 2025

General introduction

Welcome to the online workshop on High Performance Data Analytics in Python on Jan. 21-23 (2025). Python is a modern, object-oriented, industry-standard programming language for working with data at all levels of the data analytics pipeline. A rich ecosystem of libraries, ranging from generic numerical libraries to special-purpose and domain-specific packages, has been developed in Python for data analysis and scientific computing.

This three half-day online workshop gives an overview of working with research data in Python using general libraries for storing, processing, analyzing and sharing data. The focus is on improving performance. After covering tools for performant processing (NetCDF, NumPy, pandas, SciPy) on single workstations, the focus shifts to parallel, distributed and GPU computing (Snakemake, Numba, Dask, multiprocessing, mpi4py).

Who is this workshop for?

This material is for all researchers and engineers who work with large or small datasets and who want to learn powerful tools and best practices for writing more performant, parallelised, robust and reproducible data analysis pipelines. This workshop is an interactive online event, featuring live coding, demos, and practical exercises. We aim to equip you with the tools and knowledge to write efficient, high-performance code using Python.

Prerequisites

  • Basic experience with Python
  • Basic experience in working in a Linux-like terminal
  • Some prior experience in working with large or small datasets

Key takeaways

After attending the workshop, you should:

  • Have a good overview of available tools and libraries for improving performance in Python
  • Know what libraries are available for efficiently storing, reading and writing large data
  • Be comfortable working with NumPy arrays and Pandas dataframes
  • Be able to explain why Python code is often slow
  • Understand the concept of vectorisation
  • Understand the importance of measuring performance and profiling code before optimizing
  • Be able to describe the difference between “embarrassing”, shared-memory and distributed-memory parallelism
  • Know the basics of parallel workflows, multiprocessing, multithreading and MPI
  • Understand pre-compilation and know basic usage of Numba and Cython
  • Have a mental model of how Dask achieves parallelism
  • Remember key hardware differences between CPUs and GPUs
  • Be able to create simple GPU kernels with Numba
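
As a small taste of the vectorisation topic listed above, here is a minimal illustrative sketch (not workshop material; the function names are made up for this example) contrasting an element-by-element Python loop with the equivalent NumPy expression:

```python
import numpy as np

def sum_squares_loop(values):
    # Pure-Python loop: each iteration pays interpreter overhead.
    total = 0.0
    for v in values:
        total += v * v
    return total

def sum_squares_vectorised(arr):
    # Vectorised: the multiply and sum run in NumPy's compiled code,
    # operating on the whole array at once.
    return float(np.sum(arr * arr))

data = np.arange(10_000, dtype=np.float64)
loop_result = sum_squares_loop(data)
vec_result = sum_squares_vectorised(data)
```

Both compute the same quantity (up to floating-point rounding), but the vectorised version typically runs orders of magnitude faster on large arrays, which is the core idea explored in the "Efficient array computing" session.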

Tentative Agenda

Day 1 (Jan. 21)

| Time | Contents | Instructor(s) |
|---|---|---|
| 09:00-09:15 | Welcome | Yonglei |
| 09:15-09:30 | Motivation | Yonglei |
| 09:30-10:20 | Scientific data | Francesco |
| 10:20-10:40 | Break | |
| 10:40-11:55 | Efficient array computing | Francesco |
| 11:55-12:00 | Q/A & Reflections | |

Day 2 (Jan. 22)

| Time | Contents | Instructor(s) |
|---|---|---|
| 09:05-10:20 | Parallel computing | Qiang |
| 10:20-10:40 | Break | |
| 10:40-11:55 | Profiling and optimizing | Ashwin |
| 11:55-12:00 | Q/A & Reflections | |

Parallel: https://aaltoscicomp.github.io/python-for-scicomp/parallel/

Day 3 (Jan. 23)

| Time | Contents | Instructor(s) |
|---|---|---|
| 09:05-10:15 | Performance boosting | Yonglei |
| 10:15-10:30 | Break | |
| 10:30-11:55 | Dask for scalable analytics | Ashwin |
| 11:55-12:00 | Q/A & Summary | Yonglei |

Profiling: https://aaltoscicomp.github.io/python-for-scicomp/profiling/

Regulations

Due to EuroCC2 regulations, we cannot accept generic or private email addresses. Please use your official university or company email address when registering.

This training is for users who live and work in the European Union or in a country associated with Horizon 2020. You can read more about the countries associated with Horizon 2020 here.

Contact

For questions regarding this workshop or general questions about ENCCS training events, please contact training@enccs.se.