# Day *5* Q&A
<!--- A reminder that these documents will be uploaded to the repository branch that will be created, and that the NBIS training code of conduct should be followed. Be respectful to each other so you do not edit others' posts. HackMD allows simultaneous editing. -->
## General Questions
- Q: How do I find HackMD pages from Day 1 to 4?
- A:
[Day 1](https://hackmd.io/@4F3TBa2lQCuFiP5LcfJMWg/Bkdpt5mEs/edit)
[Day 2](https://hackmd.io/@4F3TBa2lQCuFiP5LcfJMWg/r1HCExBNj/edit)
[Day 3](https://hackmd.io/@4F3TBa2lQCuFiP5LcfJMWg/SJumU8I4o/edit)
[Day 4](https://hackmd.io/@4F3TBa2lQCuFiP5LcfJMWg/BJv7gqD4o/edit)
[Day 5](https://hackmd.io/@4F3TBa2lQCuFiP5LcfJMWg/HknL0gFEs)
- Q: Maybe it is already mentioned somewhere, but can you suggest an online course on object-oriented programming (in Python in particular)?
- A:
---------------------------------------------------
## Modular Programming
- Q: Is this the same concept from functional programming?
- A: You can have modules in any language, not only OOP.
- Modular programming is a way to organize the code.
- Q: Could you give an example to explain these two?
- A: Functional programming is a programming paradigm, used in languages such as Lisp or Scheme. An example of such a program is the Emacs editor. Important language features are lambda functions (unnamed functions) and lambda calculus (operations on functions).
- A: Modular programming is the idea of cleanly encapsulating code so that you can understand the code and also reuse parts. A program written as a collection of interacting (Python) modules is an example of modular programming.
- Q: Can you give some more concrete examples of modules vs functions? In python you have the math module, right, so the behaviour is doing math? That is quite a large behaviour group?
- A: Short story: Python's modules can be seen as modules in this sense, collections of functions with related features.
- A: Long story: A module, in this context, is a collection of functions and data with well-defined interfaces. A function is a callable section of code, usually named, usually receiving some input and returning some output along with potential side-effects. Modular design minimises side-effects and makes them clear.
- Q: In the same vein, with Python 3.10 and the numpy package, is it correct to say that Python and NumPy are two individual modules?
- A: Python 3.10 itself consists of several modules that do not need to be imported, and some that must or may be imported, like `math`, `os` and `sys` (you don't have to know everything). The interpreter is also part of Python 3.10.
- A package may include one or several modules that you can import
- It is important to note that Python, like many programming languages, plays fast and loose with the words "package" and "module", as it needs a word for modules that only contain other modules. In theory a package should be viewed as a container for your logical structure that may be implemented in one or more modules, while a module should be a concrete implementation unit.
- Packages and modules can be nested as the concepts grow more abstract.
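To make the module/package distinction concrete, a minimal Python sketch using only standard-library names (the example values are arbitrary):

```python
# math is a single module: one file collecting related functions.
import math

# email is a package: a directory of modules grouped under one name.
import email.utils          # import a module from inside the package
from email import message   # alternative import style

print(math.sqrt(16.0))                                 # 4.0
print(email.utils.formataddr(("Ada", "ada@example.org")))
```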
- Q: To me it is weird that dependencies are arrows from the thing towards the prerequisite. Am I thinking about it in the wrong way?
- I agree, a little! Just accept!
- I will : )
- You can see it as the dependent sending a **call to** the dependency/prerequisite; hence the direction of the arrow, I guess. The prerequisite does not know about the dependent function/module/object/class.
- Good point, the dependent package keeps track of the arrow
## Optimisation: when and how
- Q: In the ILP slide, does the pipeline describe code execution on a single core, or is this how commands are distributed between cores?
- A: Each core has this layout. When you program in parallel, you usually state what each core should do (even though exactly which core takes which instruction is automated).
- Q: should I avoid running parallel computations during an interactive session at Uppmax? For instance, running a multiprocessing.Pool (Python) task within a Jupyter notebook.
- A: It depends on what you want to test. An interactive session in general should be used to find out and test how your program will behave when you submit it as a non-interactive job. You can have interactive jobs with multiple cores, but if you don't need to test this you should not ask for more than one core. Interactive sessions should in general be used for prototyping and testing. The same goes for GPU allocation.
- Q: Where do the values of 1.78/7.5 etc. come from when using Amdahl's law?
- A: p is the fractional time taken by the parallelizable part when still running on 1 core
- the speedup then becomes 1/(non-parallelized part + parallelized part/number of processes), or
- 1/((1-p) + p/N)
- p = 0.5 --> 1/(0.5 + 0.5/8) ≈ 1.78
- p = 0.9 --> 1/(0.1 + 0.9/8) ≈ 4.71
- the maximum theoretical limit is found for N --> infinity, that is 1/(1-p)
- https://en.wikipedia.org/wiki/Amdahl%27s_law
- Matlab code:
```matlab
N = logspace(0, 4, 20);
p = [0 .5 .7 .9 .95 .99 .995 .999]';
y = 1 ./ ((1 - p) + p ./ N);
loglog(N, y, [1 1e4], [1 1e4], '--')
grid on
xlabel('Number of cores')
ylabel('Speedup')
legend(num2str(p))
```
- To find p, you may do tests with different numbers of cores and find p backwards, so to speak. That will just give you the actual p for your solution, perhaps not the ideal one though.
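The same calculation as the Matlab snippet above, sketched in Python (the p and N values are the ones from the examples):

```python
def amdahl_speedup(p, n):
    """Amdahl's law: speedup with parallelizable fraction p on n cores."""
    return 1.0 / ((1.0 - p) + p / n)

print(round(amdahl_speedup(0.5, 8), 2))  # 1.78
print(round(amdahl_speedup(0.9, 8), 2))  # 4.71
# the theoretical limit as n -> infinity is 1 / (1 - p):
print(round(1.0 / (1.0 - 0.9), 1))       # 10.0
```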
- Q: What is the time used that is neither user nor system time? (real 0.08, user 0.00, sys 0.02 => the 0.06)
- A:
The time command runs the specified program command with the
given arguments. When command finishes, time writes a message to
standard error giving timing statistics about this program run.
These statistics consist of (i) the elapsed real time between
invocation and termination, (ii) the user CPU time (the sum of
the tms_utime and tms_cutime values in a struct tms as returned
by times(2)), and (iii) the system CPU time (the sum of the
tms_stime and tms_cstime values in a struct tms as returned by
times(2)).
The real time also includes the starting and stopping of your execution.
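You can observe the same wall-clock vs CPU-time distinction from inside Python (a small sketch; the sleep duration is arbitrary):

```python
import time

start_wall = time.perf_counter()  # wall-clock ("real") time
start_cpu = time.process_time()   # CPU time (user + sys) of this process

time.sleep(0.1)  # sleeping consumes wall time but almost no CPU time

wall = time.perf_counter() - start_wall
cpu = time.process_time() - start_cpu
print(f"real: {wall:.3f}s, cpu: {cpu:.3f}s")  # cpu is much smaller here
```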
- Q: In both versions of the fiotest.py I get the same ratio of sys/real time (both around 50%). In this case it's easy to see that one of the versions is clearly not efficient, but in larger pieces of code this is not so obvious, yet the sys/real time ratio does not seem like a good metric to indicate your code is not optimal. Is there any other metric you can recommend to look at?
- A: Profilers, as covered by Marcus.
- A: The ratio you should look at is sys/user time. But interesting that you had such a result. Could you describe your computer and Python version?
- Q: When should the optimization start? Should one first write working code and then optimize it, or should it be done continuously?
- A: Short answer: make the code work first, so that you can get an overview of the sections that can be optimized.
- Q: In terms of adding profiling to the testing routines, how do you know what a good 'speed' is? Do you manually take the speed from a previous test and set it as a benchmark, or is there a better way of doing things?
- A: Benchmarking is a way of comparing your performance against an external criterion. It can be done with timing (FPS counters are one such example) or through profiling. In general, profiling does more than timing: it shows you what percentage of the execution is spent in a specific step of your code. This means you can find where optimisation is worth considering. A useful timing threshold can be found by incrementally running tests with larger and larger data sets until you find the point at which your run time exceeds a predefined threshold. In general, timing a single previous small test will not give enough information.
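A minimal `cProfile` sketch of the "percentage per step" idea described above (the function names and workloads are made up for illustration):

```python
import cProfile
import io
import pstats

def slow_part():
    return sum(i * i for i in range(200_000))

def fast_part():
    return sum(range(1_000))

def main():
    slow_part()
    fast_part()

profiler = cProfile.Profile()
profiler.enable()
main()
profiler.disable()

# print the functions sorted by cumulative time, so the most
# expensive step of the program appears near the top
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```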
- Q: Can you give a practical example of calling a function instead of a for loop?
- A: In R, you can use the lapply and apply functions.
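In Python, the analogue of R's lapply is `map` or a comprehension (a sketch with toy data):

```python
values = [1, 2, 3, 4]

# explicit for loop
squares_loop = []
for v in values:
    squares_loop.append(v * v)

# calling a function over the data instead of looping yourself
squares_map = list(map(lambda v: v * v, values))

# comprehension: usually the most idiomatic Python form
squares_comp = [v * v for v in values]

print(squares_loop == squares_map == squares_comp)  # True
```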
- Comment (compiled languages): Distributing source code may help users to compile and optimise for their system.
- Q: For a more practical question, how do I avoid reading and writing files? Say I have separate models stored in separate files and for each model I want to do something. Should one just combine the files into one large file and use this as input, and similarly split the output, say, in bash?
- A: Two main scenarios where you want to avoid/reduce file I/O. 1. If the file output from one step is consumed by the second step, you should consider a pipe instead. 2. If you are doing lots of writing to file in an inner loop, consider building a buffer and doing a single big write instead (amortised I/O).
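A sketch of the second scenario, buffering output and doing one big write instead of writing inside the loop (the filename and line count are made up):

```python
lines = []  # buffer in memory
for i in range(1000):
    lines.append(f"result {i}\n")  # cheap: no system call per iteration

# one big write instead of 1000 small ones (amortised I/O)
with open("results.txt", "w") as f:
    f.write("".join(lines))
```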
- Q: But if you have a gigantic file is it still better to read it at once and store it memory than to read it in chunks?
- A: https://realpython.com/python-mmap/ if you are doing it in Python.
- A: The general answer is that it depends on whether you have space in your memory. In ML it is common to have to chunk your data in order to fit within the memory constraints.
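Reading in chunks, as described above, sketched in Python (the demo file, its size, and the chunk size are arbitrary):

```python
# create a demo file (stand-in for the "gigantic" file)
with open("big_file.bin", "wb") as f:
    f.write(b"x" * (3 * 1024 * 1024))

def process(chunk):
    return len(chunk)  # placeholder for real work on a block of bytes

total = 0
with open("big_file.bin", "rb") as f:
    while True:
        chunk = f.read(1024 * 1024)  # read 1 MiB at a time
        if not chunk:                # empty bytes -> end of file
            break
        total += process(chunk)      # only one chunk in memory at a time
print(total)  # 3145728
```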
- Q: Easy speedup hacks? E.g. parallelise a loop in bash with & and wait
- A: Absolutely, that is a good suggestion. Upgrading Python or using PyPy is good.
### Course advertisement
- This course covers parallelism in Python!
- https://aaltoscicomp.github.io/python-for-scicomp/
- Find among many courses here:
- https://enccs.se/lessons/
- like:
- Julia for High Performance Scientific Computing
- Intermediate MPI Workshop
- https://pdc-support.github.io/introduction-to-mpi/
## Questions above this line
-----------------------------------------------------------------
# Day *5* feedback
- F:The course was very informative, I learned a lot of new things.