# HackMD space for the *NBIS Neural Net & Deep learning workshop*
This is a space for communication among students and between students and teachers (in both directions!). To switch between view, edit, or both modes, click the pen/book/eye icons at the top left of the page (hint: in view mode, the *eye*, there is a table of contents in the left margin). Write in Markdown (`md`) format.
* [Schedule](https://nbisweden.github.io/workshop-neural-nets-and-deep-learning/schedule)
## Autobiographies (add your own!)
- *Claudio Mirabello*, NBIS, Linköping. Course leader. I have been using Neural Networks one way or another for more than ten years (_before it was cool_, one might say), and I haven't gotten tired of them yet!
- *Bengt Sennblad*, NBIS, Uppsala. Course leader. Interested in methodological theory, development and application in general, particularly including ANNs. Limited hands-on experience of ANNs.
- *Christophe Avenel*, BIIF - SciLifeLab / NMI. At the BioImage Informatics Facility (BIIF), we offer bioimage analysis support and training to life scientists, using mainly free and open-source software tools (Fiji, CellProfiler, QuPath) that are more and more based on Deep Learning.
- *Marcin Kierczak*, NBIS, Uppsala, LTS. Interested in explainability, applying ANNs to genomic data and in the theory underlying deep learning. Tries to do everything in R...
- *Linus Östberg*, SciLifeLab Data Centre, Solna/Uppsala. System developer and administrator. Used to work with structural bioinformatics, including e.g. molecular dynamics and ligand docking.
- *Wolmar Nyberg Åkerström*, Data steward at NBIS Uppsala. Interested in data management practices in developing and deploying models in general.
- *Johan Viklund*, CTO at NBIS. Have no idea what this is, just want to learn.
- *Åsa Björklund*, NBIS LTS, Uppsala, mainly working with single cell analyses.
- *Verena Kutschera*, NBIS LTS, Stockholm. Working with population genomics and annotation (non-model organisms).
- *Erik Fasterius*, NBIS, Stockholm. Working with high-throughput sequencing data and pipeline development. Have taken courses and done ML in the past, but want to learn and do more.
- *Jon Ander Novella*, NBIS, Uppsala. System developer involved in a project with rabbit phenotypic data.
- *Ashfaq Ali*, NBIS, LU. Have worked with clinical projects using omics techniques. Have worked with clustering and classification problems in clinical settings but do not have a formal education in machine learning/deep learning.
- *Nima Rafati*, NBIS, Uppsala. I am in the annotation team and at the same time work with bulk RNA-seq and DNA-seq data analysis. I would like to learn about applications of NN and DL to life science data.
- *Liane Hughes* I am new to the SciLifeLab Data Centre. I will work primarily with producing data visualisations for the Data Centre. I have used deep learning before with animal vocalisation data. However, I'm fairly new to Python and neural networks in general.
- *Malin Larsson*, NBIS, Linköping, LTS. Works mainly with cancer genomics.
- *Dan Rosén*, NBIS, Uppsala. Will give a lecture about modelling sequences using recurrent neural networks on Wednesday. I'm in the systems development group.
- *Jonas Söderberg*, NBIS, Uppsala. Hangs out with people who use this.
- *Lokeshwaran Manoharan*, NBIS, Lund. Primarily working with RNA-seq and other sequencing related projects in microbes.
- *Jonathan Robinson*, NBIS, Gothenburg, LTS. Working on projects involving systems biology and omics analysis/integration (primarily RNA-Seq and proteomics).
- *Hamza Imran Saeed*, SciLifeLab, Uppsala. Data Engineer.
- *Dimitris Bampalikis*, NBIS, Uppsala. Systems developer. Have worked with some toy projects.
- *Kostas Koumpouras*, NBIS, Uppsala (systems developer). I have not worked with neural networks but I am very interested in them.
- *Paulo Czarnewski*, NBIS, Stockholm, LTS. Working mainly with RNA-seq, proteomics, image, FACS and single cell analyses.
- *Anna Klemm*, BioImage Informatics. Bioimage Analyst.
- *Susanne Reinsbach*, NBIS LTS, Gothenburg. Working mainly with sequencing data.
- *Per Unneberg*, NBIS, Uppsala LTS. Aspiring population genomicist. Used ANNs 20(!) years ago when they were cool, before they fell out of fashion for a while. Interested in statistical theory and methodology in general.
- *Johan Nylander*, NBIS Stockholm and Naturhistoriska riksmuseet. Compute Oldie, but NN Newbie.
- *Petter Ranefall*, BioImage Informatics. Bioimage Analyst.
- *Lars Eklund*, NBIS, Uppsala, compute and storage. A generalist who is not a bioinformatician.
## Setup of course coding framework
### Conda environment setup
1. [Setup Conda on your workstation](https://docs.conda.io/projects/conda/en/latest/user-guide/install/)
2. Download the course *conda environment* [nn_dl_python](https://raw.githubusercontent.com/NBISweden/workshop-neural-nets-and-deep-learning/master/common_assets/conda_envs/nn_dl_python.yaml)
* NB: a separate environment will be made available for the R session on Wednesday
3. To install this conda environment, use the following command in a unix terminal:
```
conda env create -f path/to/nn_dl_python.yaml
```
4. Activate the conda environment
```
conda activate nn_dl_python
```
5. Test that it works by starting a jupyter notebook (see also the optional sanity check after these steps):
```
jupyter notebook
```
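Optional sanity check (not part of the official instructions): in a fresh notebook cell, you can confirm that Keras is installed and, assuming the environment provides standalone Keras with a TensorFlow backend as intended, that the correct backend is picked up (see also the Theano note under Troubleshooting below):
```
import tensorflow as tf
import keras

print("tensorflow:", tf.__version__)
print("keras:", keras.__version__)
print("backend:", keras.backend.backend())  # should print "tensorflow", not "theano"
```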
### Markdown
- link to [quick markdown docs](https://www.markdownguide.org/basic-syntax/)
- Also check out the help ("?") icon on this page
### Troubleshooting
- On Mac: if, while running code in the notebooks, you get the following error:
```
OMP: Error #15: Initializing libiomp5.dylib, but found libomp.dylib already initialized.
```
*or* if your kernel dies unexpectedly,
try adding the following snippet in your first cell:
```
import os
os.environ['KMP_DUPLICATE_LIB_OK']='True'
```
- In case of installation issues with conda:
a. Use conda to install the `mamba` package (e.g. `conda install -n base -c conda-forge mamba`).
b. Then repeat step 3 above, substituting `mamba` for `conda` in the command, i.e.: `mamba env create -f path/to/nn_dl_python.yaml`
c. Continue from step 4 above.
- The notebooks look weird (large fonts, can't see code properly):
- Refresh the webpage with no cache (ctrl+f5 or ctrl+shift+r)
- I get the message "Using theano backend" when running keras:
- Theano is the wrong backend, it should say "tensorflow" instead!
- You can try to re-create the conda environment with mamba or conda, depending on what you used to create the environment the first time (in my case (Verena), I had installed it with mamba and got the wrong backend; with conda it worked).
- Otherwise, Wolmar has solved it by using Docker:
```docker run -it --rm -v "$(pwd):/home/container/workshop" -p 8888:8888 -u container wolmar/nn_dl:2020-12-09```
The Dockerfile is available [on the Slack channel here](https://nbisweden.slack.com/archives/C01G06T8T3M/p1607357225079800) (thanks, Wolmar)
- As a last resort, have a look [at this thread](https://github.com/keras-team/keras/issues/6925)
- When trying to install keras in R (following the instructions [here](https://nbisweden.github.io/workshop-neural-nets-and-deep-learning/session_rAutoencoders/lab_autoencoder_hapmap.html#1_Synopsis)) and running `keras::install_keras()`, I got the error `could not find a Python environment for /usr/local/bin/python3`. I solved it with the following steps (Verena, on macOS Catalina 10.15.7):
- Created a new conda environment (on the command line):
- `conda create -n nn_dl_r`
- Installed tensorflow from Rstudio console:
- `install.packages("tensorflow")`
- `library(tensorflow)`
- `install_tensorflow(method="conda", envname="nn_dl_r")`
- Same for keras from Rstudio console:
- `install.packages("keras")`
- `library(keras)`
- `install_keras(method="conda", envname="nn_dl_r")`
### Clone the course GitHub repository [internal course only]
In a terminal, `cd` to where you want to put the *git working directory* (`gwd`), and then
1. Type
```
git clone https://github.com/NBISweden/workshop-neural-nets-and-deep-learning.git
```
This will download a copy of the repo to your laptop
2. `cd` into the *workshop-neural-nets-and-deep-learning* directory and look around
3. There are separate folders for each session, containing lecture notes and lab files.
*Note! The material is under construction, so you might occasionally (maybe as a daily routine?) need to do*
```
git pull
```
#### Optional: To view lecture notebooks in the nicest way
The *gh* and *html* links in the schedule, unfortunately, do not display our slide shows very well. We have not been able to solve this yet. The current best option is to start a Jupyter notebook and open the relevant *.ipynb* file for the lecture in your `gwd`. Each session has its own folder in `gwd` (with rather self-explanatory names).
1. In terminal,
```
jupyter notebook
```
2. In the Jupyter notebook, go to the *Nbextensions* tab
3. Select the following extensions
- `RISE`
- `Hide input`
- `Split Cells Notebook`
4. Go to *Files* tab
5. Navigate to the `ipynb` file you want to show and double-click on it.
6. If the `ipynb` file does not open in slide mode, click the *show slides* icon (up to the right, looks like a histogram)
## Space for Group Discussion notes
## Feedback thoughts
* If possible, creating multiple HackMD documents with links out from the main one would be preferable from a dyslexic standpoint, as it reduces the amount of text visible. Less text to scroll through at each lecture.
* Not sure if it is covered in the lectures, but I feel that practical issues are missing from the schedule. It would be nice to have a summary of computational requirements: when is it necessary to have GPUs vs OK to just run on CPUs, what can be done on a laptop, the availability of GPU resources in Sweden, etc.
* There will be something on this late in Thursday's lecture
* For people who need a refresher on algebra or calculus, it may be useful to have a "pre-course materials" refresher of key concepts (perhaps to be shared with the Introduction to Biostats & ML?). But the hand derivations were great! :+1:
* Extensive use of abbreviations in the introduction without explaining them or saying the full name
* Would be great to go through the mini exercise in ANN Building Blocks part 3 in Jupyter notebooks, i.e. show explicitly how to do it. Perhaps as a group exercise. It was good to show this example, so it would be good to also learn how to do it in Jupyter notebooks (for those of us who don't know this already). Edit after the introduction to the Keras exercise: the way we worked in the Jupyter notebook here was very good for me (beginner), so something similar for the mini exercise in ANN Building Blocks part 3 would have been good.
* Prepare the first and/or second mini exercise as jupyter notebook & do live coding (code along), i.e. you show and explain each cell and the students can follow along. :+1:
* I think it is a bit misleading how you depict biases on your slides. It looks like there is only one bias per layer; then one realizes the same bias is drawn for all layers, while in reality each neuron can have a different bias.
* Also, excellent explanation on what a partial derivative is!
* What about having some pre-course reading introducing notation for partial derivatives etc. I'd still cover them in the lecture, but at least the symbols won't appear weird to students.
* Very good when Claudio went through the playground with some examples and ways of thinking about it! Good to bring up the perceptron again and show it explicitly. :+1:
* I also found the examples in the playground to be useful for understanding. I imagine that it would be useful to progress from the playground by replicating the example in Keras before progressing to use more advanced hyperparameters.
* It might be a good idea to separate lectures and exercises into separate documents (*e.g.* introduction to Keras), so that all relevant cells are runnable (instead of giving errors when trying to run the entire notebook)
* I feel like running the whole notebook in one go would probably be a bad idea anyway; heavier labs will have separate notebooks (training actual networks, etc.) and might slow your machine to a crawl if you try to run everything while following the lecture.
* Conda & reproducibility -> :broken_heart:
* It would be nice if the whole course used the same package management, *i.e.* using Conda for everything rather than having Conda + renv for the R session, so that students need to do as little as possible related to package management
* This seems sensible, we will try and have that for the actual course
* The exercise with "doing better" at the end of the annBuildingBlocks session was really good! We got to try implementing and changings things ourselves, which was good, as it feels like there has been a lot of lectures and not so much time testing it out ourselves. (+1 agreement)
* Thanks! Our hope is to get more hands-on exercises if possible, unfortunately we were a bit short on time this time around...
* But there will be plenty more of the same kind of exercise throughout the next few days
* It would probably be good to use more seeds for the exercises (*e.g.* in the CNN session), so that the students going through the exercise get the same results as the teacher who is showing it. Some parts seem to use seeds, but most do not (a minimal seed-setting sketch is included at the end of this list).
* I disagree; in a way it is good that it is different: they see one result in the presentation and compare it to what they get. It is good for the students to understand that results may differ. Also, I am not sure you can get identical results with the same seed across all platforms. And within the breakout rooms we got different results and could compare them.
* Ah, good point, I did not think of it in that way! I redact my comment :-) (But what about the exercises that *do* use seeds?)
* Perhaps I missed it, but it could be useful to give a bit more detail on what filters are in CNNs, and how they are used.
* It is a super interesting course with tons of new stuff and a lot of information to process. Thinking about the target audience for the next edition, maybe it could be an option either to split it into two separate courses (an intro to ANNs and a deep dive into ANNs) or to spread it out more in time. I feel, and this is especially true in the online setup, that it is sometimes difficult to jump from lectures straight into lab materials without some time to process the lecture content. Consider that for many future students almost everything will be very new. :+1:
* Or perhaps having more lab exercises would help in understanding the concepts more easily?
* I agree with the comment above about the amount of new content. I feel that there is too little time to let the new information sink in, e.g. to read up a little bit on the different concepts. You could schedule some time for tasks to work with the theory, e.g. answering some questions that help to understand the concepts, maybe in groups. :+1:
* It was sometimes hard to follow lectures because of the large amount of new terminology and new concepts. You could consider to share a Glossary with the students, where you list all the new terminology and concepts with brief definitions. This might help when trying to follow along lectures.
* General Zoom practices: some of the lecturers kept their sound on when they were not lecturing, which was a bit disturbing when trying to listen to other lecturers or going through exercises. Minor issue, but worth remembering - same thing goes for students as well, of course, which could use a reminder.
* Have a Docker container ready that works for every lab, either for everyone or just for people with technical issues.
* It could be worthwhile to have the discussion about whether or not to apply DL in a given experiment (considering sample size) as one of the first lectures of the course.
* It would have been great if the teachers would have gone through all the exercises after the labs to discuss the results from the different tasks. That way, even people who did not finish in time (e.g. due to technical issues) would have had a chance to catch up. :+1: :+1: :+1:
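Related to the seed discussion above, a minimal sketch of how seeds could be pinned in the Python labs (assuming NumPy and TensorFlow 2.x; even then, results are not guaranteed to be identical across platforms):
```
import random

import numpy as np
import tensorflow as tf

SEED = 42
random.seed(SEED)         # Python's built-in RNG
np.random.seed(SEED)      # NumPy, used e.g. for data generation and shuffling
tf.random.set_seed(SEED)  # TensorFlow/Keras weight initialization, dropout masks, etc.
```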
## Monday: Answer to Bengt's exercise on a 2-node ANN:
```mermaid
graph LR
A[x=0.05,y=0.1] --> B(i1)
B --> |w1=-0.1| C{z1,a1}
C --> |w2=0.3| D{z2,a2}
D --> E[ŷ]
F(b1) --> |0.1| C
G(b2) --> |0.3| D
```
<details>
<summary>
<a class="btnfire small stroke"><em class="fas fa-chevron-circle-down"></em> Show all details</a>
</summary>
Given the weights and input values, calculate the loss function and then update the weights and biases.
Dataset $input$ values:
$x=0.05$
$y=0.1$
ANN starting values:
$b_1=0.1$
$b_2=0.3$
$w_1=-0.1$
$w_2=0.3$
**Answer to FORWARD-propagation exercise:**
$i_1=x=0.05$
$z_1 = w_1 * i_1 + b_1$
$z_1 = -0.1 * 0.05 + 0.1$
$z_1 = 0.095$
$a_1 = \sigma( z_1 )$ #sigmoidFunction
$a_1 = 1 / (1+e^{-z_1} )$
$a_1 = 0.52$
$z_2 = w_2 * a_1 + b_2$
$z_2 = 0.3 * 0.52 + 0.3$
$z_2 = 0.456$
$a_2 = \sigma( z_2 )$ #sigmoidFunction
$a_2 = 1 / (1+e^{-z_2} )$
$a_2 = 0.61$
$ŷ = a_2 = 0.61$
$L(w,b|x) = 1/2 * (y - ŷ)^2$
$L(w,b|x) = 1/2 * (0.1 - 0.61)^2$
$L(w,b|x) = 0.2601 / 2$
$L(w,b|x) = 0.13$
**Answer to BACK-propagation exercise:**
**Step 1: second (output) layer**
First, we take the derivative of the loss function $L$ with respect to $a_2$:
$$\frac{{\partial L(w,b|x)}}{\partial a_2}
=\frac{{\partial (1/2*(y-a_2)^2)}}{\partial a_2}
=\frac{{\partial (y^2/2-2ya_2/2+a_2^2/2)}}{\partial a_2}=$$
$$=0-y+2*a_2/2=a_2-y=0.61-0.1=\color{red}{0.51}$$
Next, we take the derivative of the loss function $L$ with respect to $z_2$:
$$\frac{{\partial a_2}}{\partial z_2}
=\frac{{\partial (\sigma(z_2))}}{\partial z_2}
=\sigma(z_2)*(1-\sigma(z_2))
=a_2*(1-a_2)=0.61*(1-0.61)=0.2379$$
$$\frac{{\partial L(w,b|x)}}{\partial z_2}
=\frac{{\partial L(w,b|x)}}{\partial a_2} * \frac{{\partial a_2}}{\partial z_2}
=\color{red}{0.51} * 0.2379 = \color{blue}{0.1213}$$
Next, we take the derivative of the loss function $L$ with respect to $w_2$:
$$\frac{{\partial z_2}}{\partial w_2}
=\frac{{\partial (w_2*a_1+b_2)}}{\partial w_2}
=a_1=0.52$$
$$\frac{{\partial L(w,b|x)}}{\partial w_2}
=\frac{{\partial L(w,b|x)}}{\partial z_2} * \frac{{\partial z_2}}{\partial w_2}
=\color{blue}{0.1213}*0.52=\color{purple}{0.0631}$$
Next, we take the derivative of the loss function $L$ with respect to $b_2$:
$$\frac{{\partial z_2}}{\partial b_2}
=\frac{{\partial (w_2*a_1+b_2)}}{\partial b_2}
=1$$
$$\frac{{\partial L(w,b|x)}}{\partial b_2}
=\frac{{\partial L(w,b|x)}}{\partial z_2} * \frac{{\partial z_2}}{\partial b_2}
=\color{blue}{0.1213}*1=\color{green}{0.1213}$$
From this point we can update the values from $w_2$ and $b_2$, considering a learning rate of $\eta$:
$$w_2' = w_2 - \eta*\frac{{\partial L(w,b|x)}}{\partial w_2}=
0.3 - \eta*\color{purple}{0.0631}$$
$$b_2' = b_2 - \eta*\frac{{\partial L(w,b|x)}}{\partial b_2}=
0.3 - \eta*\color{green}{0.1213}$$
**Step 2: first layer**
Next, we take the derivative of the loss function $L$ with respect to $a_1$:
$$\frac{{\partial z_2}}{\partial a_1}
=\frac{{\partial (w_2*a_1+b_2)}}{\partial a_1}
=w_2=0.3$$
$$\frac{{\partial L(w,b|x)}}{\partial a_1}
=\frac{{\partial L(w,b|x)}}{\partial z_2} * \frac{{\partial z_2}}{\partial a_1}
=0.1213*0.3=\color{orange}{0.03639}$$
Next, we take the derivative of the loss function $L$ with respect to $z_1$:
$$\frac{{\partial a_1}}{\partial z_1}
=\frac{{\partial (\sigma(z_1))}}{\partial z_1}
=\sigma(z_1)*(1-\sigma(z_1))
=a_1*(1-a_1)=0.52*(1-0.52)=0.2496$$
$$\frac{{\partial L(w,b|x)}}{\partial z_1}
=\frac{{\partial L(w,b|x)}}{\partial a_1} * \frac{{\partial a_1}}{\partial z_1}
=\color{orange}{0.03639}*0.2496=\color{magenta}{0.009082944}$$
Next, we take the derivative of the loss function $L$ with respect to $w_1$:
$$\frac{{\partial z_1}}{\partial w_1}
=\frac{{\partial (w_1*i_1+b_1)}}{\partial w_1}
=i_1=0.05$$
$$\frac{{\partial L(w,b|x)}}{\partial w_1}
=\frac{{\partial L(w,b|x)}}{\partial z_1} * \frac{{\partial z_1}}{\partial w_1}
=\color{magenta}{0.009082944}*0.05=\color{salmon}{0.000454}$$
Next, we take the derivative of the loss function $L$ with respect to $b_1$:
$$\frac{{\partial z_1}}{\partial b_1}
=\frac{{\partial (w_1*i_1+b_1)}}{\partial b_1}
=1$$
$$\frac{{\partial L(w,b|x)}}{\partial b_1}
=\frac{{\partial L(w,b|x)}}{\partial z_1} * \frac{{\partial z_1}}{\partial b_1}
=\color{magenta}{0.009082944}*1=\color{navy}{0.009082944}$$
From this point we can update the values from $w_1$ and $b_1$, considering a learning rate of $\eta$:
$$w_1' = w_1 - \eta*\frac{{\partial L(w,b|x)}}{\partial w_1}=
-0.1 - \eta*\color{salmon}{0.000454}$$
$$b_1' = b_1 - \eta*\frac{{\partial L(w,b|x)}}{\partial b_1}=
0.1 - \eta*\color{navy}{0.009082944}$$
Now we can train the network again using the updated values $w_1'$ , $b_1'$ , $w_2'$ and $b_2'$.
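As a sanity check on the numbers above, here is a small NumPy sketch (not part of the original exercise) that reproduces the forward pass and the gradients; small differences from the values above come from rounding the intermediate results:
```
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Input and starting parameters from the exercise
x, y = 0.05, 0.1
w1, b1 = -0.1, 0.1
w2, b2 = 0.3, 0.3

# Forward pass
z1 = w1 * x + b1
a1 = sigmoid(z1)
z2 = w2 * a1 + b2
a2 = sigmoid(z2)
loss = 0.5 * (y - a2) ** 2
print(a1, a2, loss)  # ~0.52, ~0.61, ~0.13

# Backward pass (chain rule, as derived above)
dL_dz2 = (a2 - y) * a2 * (1 - a2)
dL_dw2 = dL_dz2 * a1                       # ~0.064
dL_db2 = dL_dz2                            # ~0.12
dL_dz1 = dL_dz2 * w2 * a1 * (1 - a1)
dL_dw1 = dL_dz1 * x                        # ~0.00045
dL_db1 = dL_dz1                            # ~0.0091
print(dL_dw2, dL_db2, dL_dw1, dL_db1)
```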
</details>
## Tuesday: exercise on XOR problem
<details>
<summary>
<a class="btnfire small stroke"><em class="fas fa-chevron-circle-down"></em> Show all details</a>
</summary>
### Questions:
* The validation curves are actually better than the training curves?
* That is the effect of the Dropout layers, which "slow down" the network while it is training but are switched off when validating (so the network suddenly does better, at least in some cases). See the small sketch below these questions.
* I don't get the "Train on 9000 samples, validate on 1000 samples" at the top and also it looks like every epoch is running on 282 data points. The dimensions on data and labels seem to be correct ((10000,3) and (10000,1)). Should I care?
* Ah, found it. It's the batch size.
* Correct! It is 282 batches ("steps" as they are called in Keras): 9000 samples divided into batches of size 32 (9000/32 = 281.25, so the last batch will be smaller)
* Seems like dropout doesn't really help?
* This is because the dataset is not noisy (and fairly simple to begin with), so chances are that overfitting will not be an issue at all. This is why the best solutions (see below) can get away with training huge networks without negative effects on the validation results
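To illustrate the two answers above with runnable code, here is a small sketch (assuming TensorFlow 2.x / `tf.keras`, which may differ slightly from the course environment): Dropout only zeroes activations when the layer is called in training mode, and the 282 "steps" per epoch are simply 9000 samples split into batches of 32.
```
import math
import tensorflow as tf

# Dropout is only active in training mode
drop = tf.keras.layers.Dropout(0.5)
x = tf.ones((1, 8))
print(drop(x, training=True).numpy())   # about half the entries zeroed, the rest scaled up by 1/(1-0.5)
print(drop(x, training=False).numpy())  # identical to the input

# Steps per epoch with 9000 training samples and batch_size=32
print(math.ceil(9000 / 32))  # 282 (the last batch holds only 9000 - 281*32 = 8 samples)
```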
### Group 1 results
Val_loss: 0.0351
Val_acc: 0.9850
### Group 2 results
Val_loss: 0.4986
Val_acc: 0.7650
### Group 3 results
Val_loss: 0.0983
Val_acc: 0.9630
### Group 4 results
Val_loss: 0.1572
Val_acc: 0.9510
### Group 5 results
Val_loss: 0.0156
Val_acc: 0.9940
@group 5 could you paste the code for the best model below?
```
#Best model:
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential()
model.add(Dense(512, input_dim=3, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(64, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(32, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(16, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(8, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(4, activation='relu'))
model.add(Dropout(0.1))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
# Train the model, iterating on the data in batches of 64 samples
history = model.fit(data, labels, epochs=50, batch_size=64, validation_split=0.1)
```
This model usually trains to >99% val acc:
```python
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(128, activation='relu'))
model.add(Dense(128, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
model.fit(data, labels, epochs=5, batch_size=64, validation_split=0.1)
model.fit(data, labels, epochs=5, batch_size=256, validation_split=0.1) # continue training on the same weights with larger batch size
```
</details>
## Wednesday: Autoencoders
Autoencoder lab.
<details>
<summary>
<a class="btnfire small stroke"><em class="fas fa-chevron-circle-down"></em> Show all details</a>
</summary>
* If you get the following error:
```
load("autosomal_5k.rdat")
Error: project "autosomal_5k.rdat" has no activate script and so cannot be activated
Traceback (most recent calls last):
4: load("autosomal_5k.rdat")
3: renv_load_switch(project)
2: stopf(fmt, renv_path_pretty(project))
1: stop(sprintf(fmt, ...), call. = call.)
```
do `base::load("autosomal_5k.rdat")` instead (kudos Åsa :smile: ).
* In the code, you can see one layer that you may not know yet: `layer_batch_normalization()`.
The purpose of this layer is to normalize the outputs of a layer, which speeds up the learning of autoencoders. As far as I understand, it is one of the "tricks of the trade" (see the small Python sketch at the end of this section).
* If you have the environment but cannot run the code, try doing what Åsa suggested:
```
mamba create -n nn_dl_r -c conda-forge r-base=4.0.3 r-renv compilers rstudio
conda activate nn_dl_r

# then, in R:
renv::restore(lockfile = 'assets/renv.lock')
keras::install_keras(method="conda", envname="nn_dl_r")

# then, at the top of the script when running it later:
reticulate::use_python("/Users/asbj/miniconda3/envs/nn_dl_r/bin/python", required=T)
```
* Docker container with the environment:
```
docker run -it --rm -v "$(pwd):/home/container/workshop" -p 8888:8888 -u container wolmar/nn_dl:2020-12-08
```
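As mentioned in the `layer_batch_normalization()` note above, batch normalization standardizes the outputs of a layer over each batch. A minimal Python illustration (the `tf.keras` counterpart of the R function used in the lab; assuming TensorFlow 2.x, not the course's R environment):
```
import numpy as np
import tensorflow as tf

# BatchNormalization standardizes each feature over the batch
# (and then applies a learnable scale and shift).
bn = tf.keras.layers.BatchNormalization()
x = np.random.normal(loc=5.0, scale=3.0, size=(64, 10)).astype("float32")
y = bn(x, training=True)  # training mode: use the statistics of this batch

print(x.mean(), x.std())                  # roughly 5 and 3
print(y.numpy().mean(), y.numpy().std())  # roughly 0 and 1
```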
</details>