# Python for SciComp 2025 (Archive) 25-27/November 9:00 CET (10:00 EET)

:::danger
## Archival of PFSC 2025 notes
This is the archival copy of the notes document. The live document is here: https://notes.coderefinery.org/pythonscicomp2025
:::

# Day 1 and 2

See this archive page: https://hackmd.io/@coderefinery/python2025archive

# Day 3 27/11/2025

## Icebreakers

**What's the most surprising thing you have learned so far in this course?**

- Vega-Altair! ooooooooo
- There is so much to learn
- I have a chemistry background and just realized how amazing it would be to combine Python with chemistry (but I still need to learn more!)
- mostly basic things about Python which help me to get started - thank you :) +1
- Profiling tools +1
- I have been vibe coding with Python, and now with professional instructors, I am filling the empty holes in my Python knowledge wall.
- Need more practical activities with real data.
- Still a lot of things to learn deeply to use.

**And the most surprising thing you have learned in your career?**

- Planning ahead saves you loads of time later +2
- How much I don't know +7
- Doing what you love is very important +1
- How far academia has fallen +2
- How bad systems are (companies, universities, organisation systems) at caring for the well-being of their employees (only interested in gain, be it money or good science) +1

**What are your ideal winter holidays?**

- Sleep +8
- Skiing +2
- Good food, jogging & relaxing (+ snowball war)
- Board games
- Spending time at home with my family, cozy drinks, Christmas lights, some snow
- Reading a good book and taking long walks +1
- Ice fishing!
- gaming
- Knitting +4
- World of Warcraft
  - LotRO instead of WoW ;)
  - WoW and Magic: The Gathering!
- gaming, horse riding and knitting loads!
- more DIY projects
- ice skating
- Skiing and relaxing after having good food
- making firewood

## Parallel programming

*Add your questions here:*

- Is there a "general rule" for which types of code are heavy/time consuming?
  Examples yesterday showed that NumPy arrays seem to be fast while for loops seemed time consuming. So should one in general try to avoid some types of things, such as loops or dictionaries or anything like that?
  - What you are seeing in this case is that NumPy is fast :) Python loops take more time than an equivalent NumPy call.
  - The original question was deleted at some point; it was something like "are there specific types of code that are more time consuming?" (e.g. the NumPy array example vs. the for loop on day 2, not really comparable though)
    - I don't have any other rule of thumb than that more work takes more time :) Doing things in a long loop takes more time, but that is because there is more to do.
- Error: `%%timeit` not found
  - I have this same issue
  - Same, except sometimes it is found and sometimes not; I have not seen any pattern to this
  - `%%timeit` is a cell magic, meaning it needs to be at the start of the cell. This is a problem with the exercise: if you directly copy the code, it is not at the top of a cell.
    - Thanks, now it works! +2+1
- A bit off topic, but what about math-wise: is there a difference in execution time of, let's say, an integral vs. a derivative? Or in general terms, is there a difference between the inverses of operations (addition vs. subtraction, multiplication vs. division, and so on)?
  - At a very low level, if you are writing C or C++ code, data structures like dictionaries can make a difference. But not in Python.
    - OK, I remembered that sometimes they make a difference, but clearly didn't remember it was related to C and not Python. Thanks!
- How to know how many cores are available?
- Does "parallel coding" impact the time the code takes to run, and do I have to consider this depending on the result I want to get? Since I am a beginner, I don't know when to use NumPy or when to use loops. But if I knew what I want to do with the "parallel coding", it would help me decide which tools (NumPy, loops, or other ones) to implement to reach my "goal".
  - Yes, parallel coding will influence runtimes. As mentioned on stream, if the code can distribute work, it can speed up computation up to the increase in resources made available. As for loops vs. NumPy: if you have numerical data and NumPy offers the operations you want to perform on the data, use NumPy. It will in all likelihood be faster than anything you implement yourself (or that I would implement myself, for that matter ;P), since it has been optimised a lot. The main thing you have to be careful about when trying to implement parallel code yourself (i.e. using things like joblib/multiprocessing etc. in your own code) is that the code you run does not itself already use multiple threads, since that can easily lead to a lot of overhead. You can commonly test these kinds of things by running an example code on an HPC cluster without multi-processing and with different numbers of CPUs, to see if it already distributes work. (If your code takes 5 minutes with 1 CPU and only 3 with 2 CPUs, it most likely already does multi-processing under the hood; run it a few times each to be sure it's not a fluke.)
    - Thanks for explaining and the recommendation to run an "example code". Since I come from a biomedical background and will have big data to analyse: does the amount of (raw) data I put into NumPy also affect the runtime? Or does this not have an effect at all?
      - Yes, the amount of data will have an effect: the more data that needs to be processed, the longer it takes. BUT if you have e.g. multiple different samples that you need to process in the same way, you can run them as multiple independent jobs on a cluster, which will essentially split up the runtime into small batches (one per sample, or one per 10 samples, depending on your choice) and these can then run at the same time on the cluster. So while the CPU time, i.e. the total time a CPU is busy with your jobs, doesn't change, they will be finished a lot faster in real time (e.g.
        if you run 100 jobs at the same time, they will be finished ~100 times faster than one job running all 100 samples, assuming the cluster has enough free resources to run them at once).
- Doesn't the newest Python remove the GIL? Do you have experience with it, and are there any caveats?
  - Newest Python versions support [free-threading](https://docs.python.org/3/howto/free-threading-python.html), but it comes with its own caveats and complications. It is most applicable when doing async calls etc., where multiple threads can operate on different Python objects. There are still local locks for thread safety on Python objects. In a scientific context we often want the multithreading or multiprocessing to work on some piece of data simultaneously (e.g. multiple CPUs work on the same array), and for this to work the code needs to be sure that there are no thread safety issues (every thread wants to edit data in the same NumPy array). See [this doc from NumPy](https://numpy.org/doc/stable/reference/thread_safety.html#free-threaded-python) about it.

:::success
## Exercise until xx:30

https://aaltoscicomp.github.io/python-for-scicomp/parallel/#exercises-multiprocessing
:::

- Somehow I cannot import multiprocessing.pool? No error, but the code just doesn't "go through" +3
  - Try `import multiprocessing` and switch `pool = multiprocessing.pool.Pool()` to `pool = multiprocessing.Pool()`.
    - Still did not work
    - No luck! Still stuck.
  - For me, if I run the `import Pool` separately, it is successful, but then when I run `with Pool() as pool: pool.map(square, [1, 2, 3, 4, 5, 6])`, it never completes
  - I have a similar issue; it seems that `pool.map()` is not working, it gets stuck
- I got that multiprocessing was slower than the single-process estimate. Why?
  - This can happen. The processes need to communicate with each other, and that also takes time. If you have fewer CPUs to run on, the communication time can make it slower overall.
- I am on Windows and using Jupyter notebook.
  I installed mpi4py, but it seems MPI is not found. +2

```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
~\AppData\Local\Temp/ipykernel_772/1989243645.py in <module>
      1 import random
      2 import time
----> 3 from mpi4py import MPI
      4
      5

~\anaconda3\lib\importlib\_bootstrap.py in _find_and_load(name, import_)
~\anaconda3\lib\importlib\_bootstrap.py in _find_and_load_unlocked(name, import_)
~\anaconda3\lib\importlib\_bootstrap.py in _find_spec(name, path, target)

~\anaconda3\lib\site-packages\mpi4py\_mpiabi.py in find_spec(cls, fullname, path, target)
    288         """Find MPI ABI extension module spec."""
    289         # pylint: disable=unused-argument
--> 290         mpiabi_suffix = _get_mpiabi_suffix(fullname)
    291         if mpiabi_suffix is None:
    292             return None

~\anaconda3\lib\site-packages\mpi4py\_mpiabi.py in _get_mpiabi_suffix(module)
    274     if module not in _registry:
    275         return None
--> 276     mpiabi = _get_mpiabi()
    277     if mpiabi not in _registry[module]:
    278         return None

~\anaconda3\lib\site-packages\mpi4py\_mpiabi.py in _get_mpiabi()
    256         mpiabi = _get_mpiabi_from_string(mpiabi)
    257     else:
--> 258         mpiabi = _get_mpiabi_from_libmpi(libmpi)
    259     _get_mpiabi.mpiabi = mpiabi  # pyright: ignore
    260     return mpiabi

~\anaconda3\lib\site-packages\mpi4py\_mpiabi.py in _get_mpiabi_from_libmpi(libmpi)
    215     import ctypes as ct
    216
--> 217     lib = _dlopen_libmpi(libmpi)
    218     abi_get_version = getattr(lib, "MPI_Abi_get_version", None)
    219     if abi_get_version:  # pragma: no cover

~\anaconda3\lib\site-packages\mpi4py\_mpiabi.py in _dlopen_libmpi(libmpi)
    208     except AttributeError as exc:
    209         errors.append(str(exc))
--> 210     raise RuntimeError("\n".join(errors))
    211
    212

RuntimeError: cannot load MPI library
Could not find module 'C:\Users\xxx\AppData\Roaming\Python\DLLs' (or one of its dependencies). Try using the full path with constructor syntax.
Could not find module 'C:\Users\xxx\AppData\Roaming\Python\Library\bin' (or one of its dependencies).
Try using the full path with constructor syntax.
Could not find module 'C:\Users\xxx\anaconda3\DLLs\impi.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Could not find module 'C:\Users\xxx\anaconda3\DLLs\msmpi.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Could not find module 'C:\Users\xxx\anaconda3\Library\bin\impi.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Could not find module 'C:\Users\xxx\anaconda3\Library\bin\msmpi.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Could not find module 'impi.dll' (or one of its dependencies). Try using the full path with constructor syntax.
Could not find module 'msmpi.dll' (or one of its dependencies). Try using the full path with constructor syntax.
```

- same problem with VS Code; installed mpi4py in my venv, but cannot import MPI in Python +1
- This means you don't have MPI itself installed. I recommend using conda (or mamba), see https://aaltoscicomp.github.io/python-for-scicomp/installation/#python and the "miniforge" option.
  - how to?
  - mpi4py doesn't necessarily bundle the MPI library itself. If you use a conda environment (as described on the course page), it seems to get bundled on Windows too.
  - I see, I will find the dependencies or try conda. UPDATE: `python -m pip install mpi4py impi-rt` seems to patch this problem in VS Code. I can import MPI in Python, but mpiexec still doesn't work...
- I have the same issue, and I installed MPI as well; it still doesn't run.
- Is this a correct MS-MPI link? https://www.microsoft.com/en-us/download/details.aspx?id=105289
  - We have not tested with this. It seems like it should work. We have tested with conda and miniforge.
- Why does the order of printouts look random?
  - MPI starts all the processes at the same time. It is essentially random when they get to the print statement. Depends on what other processes are doing.
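For the `pool.map` hangs and the "how many cores are available?" question above, here is a minimal sketch of the multiprocessing pattern from the exercise. The filename `pool_demo.py` is just an example. Note the two details that commonly cause notebooks to hang on Windows/macOS: the worker function must be importable (i.e. defined in a `.py` file, not only in a notebook cell), and the pool must be created under an `if __name__ == "__main__":` guard.

```python
# Minimal multiprocessing sketch. Save as a script (e.g. pool_demo.py)
# and run with `python pool_demo.py`. In Jupyter, pool.map can hang
# because worker processes cannot import a function defined in a cell.
import multiprocessing
import os

def square(x):
    # Work done in a separate worker process
    return x * x

if __name__ == "__main__":
    # os.cpu_count() answers "how many cores are available?"
    print("CPU cores visible to Python:", os.cpu_count())
    with multiprocessing.Pool() as pool:
        results = pool.map(square, [1, 2, 3, 4, 5, 6])
    print(results)  # [1, 4, 9, 16, 25, 36]
```

`multiprocessing.cpu_count()` gives the same number; on a cluster, the scheduler may allot you fewer cores than the node physically has.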
- I get the following error with `mpiexec -n 4 cmd /c py "mpi_demo.py"`:
  - Got it, the Python course environment was not activated in my console.

```
[unset]: unable to decode hostport from b38001fb-500d-4521-801e-a9b6f10a8869
Abort(874135055) on node 0 (rank 0 in comm 0): Fatal error in internal_Init_thread: Other MPI error, error stack:
internal_Init_thread(43417): MPI_Init_thread(argc=0000000000000000, argv=0000000000000000, required=3, provided=0000009D88BEBB18) failed
MPII_Init_thread(118)......:
MPID_Init(1626)............:
MPIR_pmi_init(133).........: PMI_Init returned -1
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=874135055
: system msg for write_line failure
[unset]: : unable to decode hostport from b38001fb-500d-4521-801e-a9b6f10a8869
[unset]: No error
unable to decode hostport from b38001fb-500d-4521-801e-a9b6f10a8869
Abort(68828687) on node 0 (rank 0 in comm 0): Fatal error in internal_Init_thread: Other MPI error, error stack:
internal_Init_thread(43417): MPI_Init_thread(argc=0000000000000000, argv=0000000000000000, required=3, provided=000000C3C31EBB78) failed
MPII_Init_thread(118)......:
MPID_Init(1626)............:
MPIR_pmi_init(133).........: PMI_Init returned -1
Abort(605699599) on node 0 (rank 0 in comm 0): Fatal error in internal_Init_thread: Other MPI error, error stack:
internal_Init_thread(43417): MPI_Init_thread(argc=0000000000000000, argv=0000000000000000, required=3, provided=0000000BE45EB8F8) failed
MPII_Init_thread(118)......:
MPID_Init(1626)............:
MPIR_pmi_init(133).........: PMI_Init returned -1
[unset]: [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=68828687
: write_line error; fd=-1 buf=:cmd=abort exitcode=605699599
: system msg for write_line failure
system msg for write_line failure
: : No error
No error
[unset]: unable to decode hostport from b38001fb-500d-4521-801e-a9b6f10a8869
Abort(605699599) on node 0 (rank 0 in comm 0): Fatal error in internal_Init_thread: Other MPI error, error stack:
internal_Init_thread(43417): MPI_Init_thread(argc=0000000000000000, argv=0000000000000000, required=3, provided=000000E6B4DEB7B8) failed
MPII_Init_thread(118)......:
MPID_Init(1626)............:
MPIR_pmi_init(133).........: PMI_Init returned -1
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=605699599
: system msg for write_line failure
: No error
```

- How well does mpi4py play with SLURM?
  - Works well; mpi4py is really just calling functions from the underlying MPI library.

:::success
## Exercise until xx:00

https://aaltoscicomp.github.io/python-for-scicomp/parallel/#exercises-mpi
:::

- Unrelated question: 1 ECTS credit, what do I have to do to get it?
  - See here: https://scicomp.aalto.fi/training/scip/python-for-scicomp-2025/#credits
- With supercomputing in mind, does it make sense to work in WSL (if the work computer is on Windows)?
  - If you are forced to use Windows, WSL is a good way to get around things like missing MPI implementations.
  - Personally, I would recommend Linux for development tasks in general, since it preps you quite well for HPC systems AND nowadays the general UI is on par with what Windows offers.
- Regarding coupling Python with other languages: I see that programming languages I don't know (C, C++, etc.) are mentioned. I only know R, which seems to have similarities with Python. What are the advantages and disadvantages of coupling Python with other languages, and is it always achievable?
  - When you combine Python with other languages, the Python objects need to be understood by that other language. Python has robust C/C++ interfaces that make it easy for the C/C++ code to operate on the Python objects themselves without a need for converting the objects. If you have other languages, you might need a translation layer to convert e.g. a DataFrame from pandas into a data.frame in R.
    There are projects like Arrow that try to create commonly defined data types and objects, so that different languages can work on the objects without any translation needed.
  - Sometimes the other language has some tool that is very important for the use case, and then you'll use a library that can do that translation. For Python -> R interfacing, [rpy2](https://rpy2.github.io/doc/latest/html/index.html) is an example library. In these cases you might use the objects from that language represented as Python objects.
    - If I use rpy2, what happens to my raw data? Is it transformed in any case when I translate, for example, R into Python?
      - In the case of rpy2, data is represented as [R data, but accessed from Python](https://rpy2.github.io/doc/latest/html/introduction.html#r-vectors)
    - What is Arrow? Is it an interface in which R and Python can play nicely together, without the need to translate the one language into the other?
      - [Apache Arrow](https://arrow.apache.org/) is a columnar memory/data format used by many libraries that represents the data in a pre-defined way, so that the same data can be accessed from multiple languages. Basically a "common vocabulary" for many data analysis languages. See e.g. [this demo](https://arrow.apache.org/docs/r/articles/python.html) on how it can transfer objects between Python and R.
- Found a nice R to Python cheat page: https://www.mit.edu/~amidi/teaching/data-science-tools/conversion-guide/r-python-data-manipulation/
  - Thank you!

## Cython

https://aaltoscicomp.github.io/python-for-scicomp/cython/

- Does Cython work the same way on Windows? Or what kind of libraries are generated there?
  - I have not tested this, but they should be .dll libraries.
  - On Windows you rarely have a compiler present by default, but you can install one following Cython's [installation instructions](https://cython.readthedocs.io/en/latest/src/quickstart/install.html). You can of course use WSL and follow the Linux instructions to create Linux code.
    Once you have a working setup, it should work similarly.
- In my code, it doesn't annotate Python code in yellow. Why is that? Although I get the same result as Lauri is showing.
  - This could be a Jupyter setting, where you have different defaults (because of a different operating system, version, or something else).
  - Are you using JupyterLab or classic Jupyter Notebook?
    - If JupyterLab, this is a known problem.
      - How to solve this issue?
      - **Instructors**, do you know? What is doing the syntax highlighting?
        - The syntax highlighting is a bunch of CSS directives generated by Cython.
        - In JupyterLab it is an issue because the JupyterLab theme's CSS overrides Cython's CSS. As a result, you don't see the colors in JupyterLab.
        - We have notified the Cython developers of this issue, but we need to communicate it to the JupyterLab team as well.
- Does Cython understand Pythonic typing, e.g. `def sum(x: int, y: int) -> int:`, or does it only understand the "C-style" typing?
  - It usually understands standard Python. You can try running it.
  - See [this document](https://cython.readthedocs.io/en/latest/src/quickstart/cythonize.html#typing-variables) on Python type hints and Cython. You can write your code in Python and use type hints to trigger Cython.
  - Perhaps this is a better reference for Cython's support for [Python type annotations](https://cython.readthedocs.io/en/stable/src/tutorial/pure.html#pep484-type-annotations). With this change, Cython looks more like Python, and not a superset of Python, which it was before.
    - How would this look for the NumPy array example? I.e., what specification would `data` need then?
      - For NumPy, the recommendation is to annotate using typed memoryviews, which look like `cython.int[:,:]` for a 2D integer array. We have to update this detail in the lesson.
- Do I have to, or can I, use NumPy (Python language), or is it "up to me" what I use? I did not really understand what Cython is (I will read more), but what are the rules?
  - You can use NumPy and ignore Cython. Like so many other things, Cython is mainly a tool that can be used "as needed", and can be useful if, e.g., performance is an issue in the underlying code.
  - Most NumPy-based code will give decent performance. However, the most typical use case for Cython is when you write a function which cannot rely on any NumPy functions and you are forced to write your algorithm with for-loops over arrays and arithmetic operators.
- When to use Cython? How do I decide that?
  - It is useful if your code takes a long time to run and you cannot think of a way to use NumPy (or pandas, torch or another library) to do it directly.
- How does compiling a function with Cython compare to something like `@jax.jit`?
  - These are different. Cython compiles C code; `jax.jit` compiles array code with [XLA](https://openxla.org/xla/gpu_architecture), which optimises it for GPUs.
  - Moreover, Cython is more general purpose; Jax targets scientific codes.
  - Cython extensions (usually) only work on CPUs, unless you create Cython code which interfaces with C / C++ / CUDA / HIP code that uses GPUs.
  - Jax has the advantage of being able to compile Python code for CPUs, GPUs and TPUs.

:::danger
### About the 1 ECTS credit

- Please read what the tasks are at https://scicomp.aalto.fi/training/scip/python-for-scicomp-2025/#credits
- Send your submission from your university email by the end of 15/12/2025
- If you are a master's student, you need to make sure your study coordinator accepts these "special courses". Doctoral students should not have problems.
:::

## Packaging

https://aaltoscicomp.github.io/python-for-scicomp/packaging/

- REUSE is a software tool for license compliance automation: https://reuse.software/
  - See also [choosealicense.com](https://choosealicense.com/) for help on choosing a license for your project
- I get an error: FileNotFoundError: [WinError 2] The system cannot find the file specified.
  My guess is that the problem is `pip install --editable "C:/Users/xxx/Documents/Courses/Python/Python/calculator_anna/calculator"`. Am I missing something?
  - You'll want to point to the project folder that contains the pyproject.toml, not the module folder.

:::success
### Exercise 1 until XX:35

https://aaltoscicomp.github.io/python-for-scicomp/packaging/

**Answer the poll afterwards (add "o"):**

I managed to do the exercise: oooo
I did not try: ooo
I failed: oo
:::

- If one wants to delete a virtual env, what to do? Simply delete the folder?
  - Yes, deleting the folder is enough.
- Is there a way to check which virtual env we are in?
  - The name should be in parentheses in the terminal prompt (for example `(my_env)$ `)
  - You can activate it again (run `source env/bin/activate`) or deactivate, to be sure
    - In Windows it looks a bit different... or is it just because of the Git Bash I have for VS Code?
- Is the package that is created "open source", so that everyone can use it? Or do we have to make it public?
  - It is only on your computer right now, and if you did not add a license, it is not open source.
  - To publish code, see the Python Package Index (pip uses this), GitHub, GitLab. You may have a solution hosted at your institution, too.
  - To make it open source, add a license file and choose an open source license (https://choosealicense.com/)
  - We have a whole day dedicated to the topic at the CodeRefinery workshop. Join us next March!!
- What is the purpose of creating a virtual environment here?
  - You can install your test package without installing it on your system
  - You know what libraries your package uses. If a library is not in the virtual environment, you cannot use it.
  - We'll talk about virtual environments in the next session.

## Dependency management

https://aaltoscicomp.github.io/python-for-scicomp/dependencies/

- It is often observed that packages from pip don't work due to dependency version conflicts. Is there a way to overcome that?
  - Using environments helps.
    One environment for each project. They can then use different versions of any package.
  - Directly answering the question: I don't really see pip packages not working due to version conflicts. But I have learned to always use environments.
    - Thanks
- Can we completely avoid Anaconda's licensing issues by using (micro)mamba and conda-forge/miniforge?
  - Yes
    - So conda-forge is kind of similar to PyPI?
      - Kind of similar, yes. conda-forge has more non-Python packages (R libraries, system packages).
- If I am in a conda environment and do an apt install (Linux), does the installed app go into the virtual env or the base system?
  - apt installs into the base system. To install into the conda environment, use conda install.
- Is a conda env like different zoo cages? What is the rule for selecting a Python env or a conda env?
  - A Python env is only for Python packages. If you need anything else, use conda.
  - I often use conda because I want a certain version of Python.
  - It usually depends on the packages that you need.
- Did anyone suffer because of the Anaconda license change in 2020?
  - I had to remove the defaults channel from many environment files :)
- If I activate another conda env when already in one, do they "stack"? I.e., are the packages in the previous environment still available unless also provided and masked by the top-most environment?
  - They do not stack. A conda environment replaces all other libraries, so it just replaces the first environment.
    - How about paths and system variables etc.? What I mean is: what does it override, and what happens at `conda deactivate`?
- Can we specify package versions in requirements.txt?
  - Yes; for pip's requirements.txt the syntax is `numpy==1.23`, for example (conda environment files use a single `=`, e.g. `numpy=1.23`)
    - Thanks
- Many legacy codes are in Python 2.x. Where can we find compatible packages if pip2 does not have those anymore?
  - Since Python 2 is deprecated, it really is not very well supported.
    I have failed to run Python 2 codes and, as far as I know, the only option is to update them or write from scratch.
  - There exist conversion tools that do most of the work for you.
    - Yeah, I tried 2to3, but it didn't work in some cases.
  - With conda environments I was able to install Python 2 + dependencies (a few years ago); see what I did: https://github.com/neuropower/neuropower-core/issues/6 (I had to do the "solving" of the environment semi-manually...)
- What is the difference between environments and channels?
  - An environment contains libraries on your computer. You can import them.
  - A channel is a server somewhere in the cloud you can download packages from.
    - So is the channel like the consumers? What is the actual purpose of the channel here?
- Can we run two packages concurrently?
  - Two different versions of the same package?
    - No, different packages
  - One environment has a set of dependencies, so all packages are there, and some can run concurrently depending on what your code is doing.
- How bad is installing packages from both PyPI with pip and from conda-forge with conda into the same environment?
  - Neither bad nor good; dependencies can get messy.
  - We can also add GitHub into the picture :) See one example here (the example below is from the CodeRefinery lesson https://coderefinery.github.io/reproducible-research/dependencies/):

```
name: student-project
channels:
  - conda-forge
dependencies:
  - scipy=1.3.1
  - numpy=1.16.4
  - sympy=1.4
  - click=7.0
  - python=3.8
  - pip
  - pip:
    - git+https://github.com/someuser/someproject.git@d7b2c7e
    - git+https://github.com/anotheruser/anotherproject.git@sometag
```

- What is the role of package solvers?
  - The solver finds versions of packages that fulfill your requirements. Since each of them can have different requirements for other packages, this is a complicated problem.
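Once a solver has built an environment, you can check from within Python which versions it actually picked. A small sketch using only the standard library (`importlib.metadata`, available since Python 3.8); the package names in the list are just examples, and `some-hypothetical-package` is deliberately made up to show the not-installed case:

```python
# Inspect what the solver actually installed in the active environment.
# importlib.metadata is in the standard library (Python 3.8+).
from importlib.metadata import version, PackageNotFoundError

for pkg in ["pip", "numpy", "some-hypothetical-package"]:
    try:
        # version() reports the installed distribution's version string
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed in this environment")
```

Running `pip freeze` in the terminal gives the same information for every installed package, and is a common way to generate a pinned requirements.txt.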
:::success
### Exercise 3 (create environment) until XX:35

https://aaltoscicomp.github.io/python-for-scicomp/dependencies/#exercise-3

**Answer the poll afterwards (add "o"):**

I managed to do the exercise: oo
I did not try: ooo
I failed: oo
:::

- Can one define a path to the environment?
  - In conda? You can set the path where environments are saved. Otherwise I don't know.
    - Yes, in conda; my conda env was accidentally installed to the local root folder. How to set the path? In the yml?
      - When you create the environment, you can specify the folder to put it in. Use `conda env create --prefix path/to/folder -f environment.yml`
        - Does the prefix conflict with the name in the yml?
          - No; with `--prefix` the environment is created at that exact path, and the `name:` in the yml is ignored. You can set `--prefix .`
          - You can also use `prefix: /path/to/env` in the environment file
- Wasn't `pip install --user` one of the recommended methods to install packages at CSC?
  - It might be on some systems, but in those cases it is usually used to extend centrally installed Python modules. A better option is to use [venv the way CSC wants](https://docs.csc.fi/support/tutorials/python-usage-guide/#installing-python-packages-to-existing-modules) or the [Tykky](https://docs.csc.fi/computing/containers/tykky/) tool for creating containers.

---

## Feedback for Day 3 of Python for Scientific Computing

:::success
We are done! Give us feedback for today. **Your feedback is super important to improve this course for the future**. You will later receive a survey to give feedback on the whole course.
And if you have any questions, just get in touch with us directly: scip@aalto.fi
:::

### Today was (vote for all that apply):

too fast: o
too slow:
right speed: ooooo0
too slow sometimes, too fast other times: oo
too advanced: ooo0
too basic:
right level: ooooo

I will use what I learned today: ooooo
I would recommend today to others: ooooo
I would not recommend today to others:

### One good thing about today:

- A lot of useful information about Python workflows +3
- A lot of things in my wishlist-to-learn were
- Packaging seems useful
- Learned enough of MPI to get started with it

### One thing to improve for next time:

- Overall the course is great. But it would be really helpful to create an environment on the first day. I ended up installing packages as we went through the course. +2
  - There are installation instructions in the course setup, which cover the whole environment, but maybe we need to stress them more / make it clearer to use those.
- The duration of the course
- Overall good, but maybe the scope or intended audience is a bit too broad. Perhaps cut some things, devote more time to the rest, or extend the times in some way so there's more time per topic.

### Any other feedback? General questions?

- For many of the sessions throughout, I lost track very early. I do not think the course description matched: it said it would be suitable for beginners, but I found it too advanced.
  - Sorry about that! The course is designed for medium-advanced Python users, but not impossible for beginners. https://scicomp.aalto.fi/training/scip/python-for-scicomp-2025/
    - Sure, some parts were easier, so I just have to go through other parts by myself again, but yes, I still learned something! <3
- It was going a bit fast but was great.
- I liked the course a lot. I had some Python experience before, and this was really nice to follow along. Though I had no experience with conda and Jupyter, it was nice to learn new tools.
  And this course made me appreciate NumPy and Pandas much more. Also, great instructors. It was a delight to listen to you all =)
- On the HedgeDoc, it would be nice to have a floating eye/edit icon, especially when the discussion gets too long...
- Thank you, this was a great course!
- Is it possible to ask for clarifications on the materials in some manner? I, for one, need more time to think, rewatch and reflect before I have meaningful questions. THANKS!!!!
  - You can [get in contact with us in the CodeRefinery Zulip](https://scicomp.aalto.fi/training/scip/python-for-scicomp/outro/#how-to-stay-in-touch)