Data analysis workflows in R and Python - Day 3


Download up-to-date lesson materials using these instructions

  • Download this notebook (right click -> save link as) and save it to the course directory (data-analysis-workflows-course)
  • Run the notebook
  • It will download up-to-date lesson materials
  • If you get an error saying the notebook is not being run from the course folder, pass path_to_course_folder=../relative/path/to/my/course/folder as an additional argument: download_lessons('lesson 4', path_to_course_folder='../relative/path/to/my/course/folder')
  • If for some reason it does not work, you can use the following links to download the notebooks:

Lecture notebooks (right click -> save):

ch4-python-lecture.ipynb
ch4-r-lecture.ipynb

Exercise notebooks (right click -> save):

No exercises today.

Icebreaker

Do you offload some of your computations to other machines or do you manage with your desktop/laptop? Do you use CSC / local cluster / cloud? Are you planning on using them? What is the biggest problem that prevents you from using these resources?

  • I do most of my computations on HPC (NIRD) and then transfer the results to my machine. I face no specific problems, but sometimes I wish more tools were available on the HPC (though that's just laziness)
  • My amount of data is too big and my laptop is too slow, so the analysis is done on a cluster. I guess it would be hard to break the data into pieces and load it in smaller chunks on the laptop. Is it a memory question?
  • It's been good; it helps me understand how data analysis works. I am new to this field.
  • I run the model on HPC (Saga), then do the post-processing on NIRD. What I find most frustrating about using HPC is the lack of interactive tools, like JupyterLab.
  • I have used a local cluster because the computation time on my own laptop was too long. No experience with CSC.

Going over exercises

Scaling

  • I was just looking at Simo's screen. What is the definition of a node, please?
    • A cluster is made up of many computers, and we call each one a node. A node is one box with CPUs, memory, network, etc. Nodes contain CPUs, which contain cores, which can run threads.
  • Is the RAM consumption for the OS negligible?
    • Yes!
    • Unless you're running Windows Millenium Edition LOL :)
    • I think people tend to forget they have to allocate memory for the OS. I don't know how hungry Linux is?
    • Relevant: RAM comparisons by Linux distro https://www.reddit.com/r/linux/comments/5l39tz/linux_distros_ram_consumption_comparison_updated/ (Debian seems to be the winner). On HPC clusters, however, only the processes started by the job are taken into account for the RAM requirements. I am not an OS expert, but in a normal single-machine setting (e.g. your laptop), if you request more memory than you have, the OS starts moving data to disk, and things get slow.
      • This is called swapping, and you should avoid it at all costs: it is very slow. (A quick way to check your machine's free memory is shown in the first sketch after this list.)
  • I have never heard about garbage collector or seen it. Do people often use it?
    • Usually Python, R, and other languages are already pretty clever about garbage collection (Python runs it periodically). Only in certain cases do you need to worry about it.
    • Python code often uses objects, functions, and other structures that make it easy for the garbage collector to operate. Data analysis workflows, however, are often written as huge scripts with no complex structure (a function call here and there, of course). Thus it is very easy to end up in a situation where the garbage collector doesn't run on your variables. If you run code in an interpreter (Python/IPython/Jupyter/R/RStudio), you will by default store things in the global scope, and the garbage collector won't touch them unless you explicitly overwrite or delete those variables (see the garbage-collection sketch after this list). –simo
  • To avoid random reads, should you always concatenate the data files into one file before you do your analysis? Are there situations where it is better to keep them in separate files?
    • Hm, one file is usually better, if you have a format where you can easily seek directly to the area you need.
    • In HPC contexts, the speed of the disk where the data is stored also starts to make a difference. Compute nodes, for example, have a local disk; it is much better to move files temporarily to the local disk than to do random reads from files stored on a distributed filesystem. (See the file-concatenation sketch after this list.)
  • Could I ask what is a random read? What is the application?
    • Random access (or direct access) is a way of reading from a file without going through all of its content first.
    • In this context it also means something that the filesystem sees as "random". When reading a huge number of small files, the filesystem has to look up who owns each file, open it, return the data, and so on, for every single file; it cannot make any assumptions about what you're going to do next. If you're reading a file sequentially ("give me more of what I had previously"), the filesystem can give the answer much faster. –simo
  • Is the number of threads the same as the number of partitions?
    • It's more like the number of processes running in parallel, but Simo can expand on threads vs processes (see the last sketch after this list).
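
Regarding the OS RAM question above: a minimal sketch (not from the lesson materials) of checking total and available memory from Python. It assumes the third-party psutil package is installed.

```python
import psutil  # third-party: pip install psutil

mem = psutil.virtual_memory()
print(f"Total RAM:     {mem.total / 1e9:.1f} GB")
print(f"Available RAM: {mem.available / 1e9:.1f} GB")  # what your analysis can realistically use
print(f"Used:          {mem.percent:.0f} %")
```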
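
Regarding garbage collection: a minimal Python sketch of the interactive-session problem described above. The array and its size are made up for illustration; the point is that a variable in the global scope keeps its memory alive until you delete or overwrite it.

```python
import gc

import numpy as np

big = np.ones((10_000, 10_000))  # ~800 MB array, stored in the global scope
result = big.sum()

# In a script, `big` would be collected when it goes out of scope. In an
# interactive session (Jupyter, IPython, ...) it stays alive until you
# remove the reference yourself:
del big
gc.collect()  # usually optional: CPython frees most objects as soon as the last reference is gone
```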
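
Regarding concatenating data files: a hedged Python sketch contrasting the two access patterns with pandas. The file paths are hypothetical, and the Parquet step assumes pyarrow or fastparquet is installed.

```python
import glob

import pandas as pd

# Many small files: the filesystem pays a lookup/open cost for every file
frames = [pd.read_csv(path) for path in sorted(glob.glob("data/part-*.csv"))]
df = pd.concat(frames, ignore_index=True)

# Concatenate once into a single file; later reads are one sequential pass
df.to_parquet("data/all.parquet")  # needs pyarrow or fastparquet
df = pd.read_parquet("data/all.parquet")
```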
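
Regarding threads vs processes: a short Python sketch (not from the lesson) of the practical difference. For CPU-bound pure-Python work, processes give true parallelism, while threads share one interpreter and are limited by the Global Interpreter Lock; threads shine for I/O-bound work.

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def heavy(n):
    # CPU-bound, pure-Python work
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [10**6] * 8

    # Processes: separate interpreters and memory, true parallel CPU work
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(sum(pool.map(heavy, inputs)))

    # Threads: shared memory, cheap to start, but the GIL serializes this job
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(sum(pool.map(heavy, inputs)))
```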

Feedback

Please say one good thing and one thing to be improved about today and/or about the whole course:

  • It's hard to stay focused on a 3h lecture without exercises

  • This lecture was super helpful!! (at least for me) Thanks!

  • This course was very helpful for me. Thank you for organizing all the materials!

  • The potential of having it as a remote lecture is super.

  • I think we could have more breaks, and we need them; Simo sometimes talked for very long stretches. Would it maybe be better to give one course for R and one for Python? Why switch between the two all the time?

  • Maybe show examples with libraries other than Pandas?

  • How long are the videos available after the course? I thought they disappear from Twitch after 2 weeks? (Okay, I did not know you put them on YouTube. Nice.)

Upcoming courses:


always ask questions right above this line
