# Introduction to Data Science, ML and Deep Learning - Deeplearning540
Mar 1 - 3, 2021 @ 2nd Terascale School for Machine Learning, https://indico.desy.de/event/28296/
Team 06
**Important:** You can use this hackmd pad to take notes together, identify key questions, and document progress. Please be constructive, inclusive and positive in your communication with your peers.
## Housekeeping
Some important links so you don't get lost:
### Video Conferencing
- the main zoom room is here: https://cern.zoom.us/j/66120916180?pwd=aWtSVWdUNFFXV1FFSFQ4MEFsK1RlQT09
- our team's zoom room is here: https://uni-hamburg.zoom.us/j/96900076199?pwd=R0wzdiswTmZPTGx2UG8wQmlsTFBuUT09
### Staying in Touch
- Terascale mattermost team invite link: https://mattermost.web.cern.ch/signup_user_complete/?id=j93uppzm6ff9zg5brdeuqobfgw
- the main mattermost channel: https://mattermost.web.cern.ch/terascale-ml/channels/town-square
- our team's mattermost channel: https://mattermost.web.cern.ch/terascale-ml/channels/group-6
## Learning
Each lesson always follows the same structure and is expected to last about 1h.
1. learners watch the video :cinema:
2. learners answer at least one check-your-learning question as a team (ideally in a hackmd document) :heavy_check_mark:
3. learners dive into the exercise on their own if time permits :clock1:
:question: Instructors help with show stoppers like syntax errors where they can.
:computer: if you would like to work through the exercises, or code along during the videos, we suggest using [google colab](https://colab.research.google.com/). Note that you may need a google account for this.
Each lesson has a jupyter notebook that is partially filled in. The video lectures start from this notebook and fill in the remaining content.
## Lessons
- Lesson 00: Preface
- Lesson 01: Diving into Regression [video](https://indico.desy.de/event/28296/contributions/99576/attachments/64395/79079/deeplearning540-lesson01-2021-02-19_17.59.48.mkv), [learner notebook](https://github.com/deeplearning540/lesson01/blob/main/lesson.ipynb)
- Lesson 02: Enter Clustering [video](https://indico.desy.de/event/28296/contributions/97975/attachments/64396/79084/deeplearning540_lesson02-2021-02-22_23.30.44.mkv), [learner notebook](https://github.com/deeplearning540/lesson02/blob/main/lesson.ipynb)
- Lesson 03: From Clustering To Classification [video (part 1)](https://indico.desy.de/event/28296/contributions/97976/attachments/64398/79089/deeplearning540_lesson03-2021-02-23_23.14.33_part1.mkv), [video (part 2)](https://indico.desy.de/event/28296/contributions/97976/attachments/64398/79097/deeplearning540_lesson03_part2-2021-02-26_22.37.04.mkv), [learner notebook](https://github.com/deeplearning540/lesson03/blob/main/lesson.ipynb)
- Lesson 04: Classification Performance ROCs [video](https://indico.desy.de/event/28296/contributions/97977/attachments/64400/79098/deeplearning540_lesson04-2021-02-24_18.09.02.mkv), [learner notebook](https://github.com/deeplearning540/lesson04/blob/main/lesson.ipynb)
- Lesson 05: Neural Networks as Code [video](https://indico.desy.de/event/28296/contributions/97977/attachments/64400/79101/deeplearning540_lesson05-2021-02-25_17.48.08.mkv), [learner notebook](https://github.com/deeplearning540/lesson05/blob/main/lesson.ipynb)
- Lesson 06: How did we train [video](https://indico.desy.de/event/28296/contributions/98225/attachments/64451/79196/deeplearning540_lesson06-2021-01-03_233847.mkv), no jupyter notebook for this lesson
- Lesson 07: CNNs [video (part 1)](https://indico.desy.de/event/28296/contributions/98226/attachments/64470/79239/deeplearning540_lesson07_part1-2021-03-02_17.11.15.mkv), [video (part 2)](https://indico.desy.de/event/28296/contributions/98226/attachments/64470/79240/deeplearning540_lesson07_part2-2021-03-02_17.39.37.mkv), [learner notebook](https://github.com/deeplearning540/lesson06/blob/main/lesson.ipynb)
- Lesson 08: Deep Learning [video](), [learner notebook](https://github.com/deeplearning540/lesson07/blob/main/lesson.ipynb)
## Team Notes
### Lesson 01
- Lesson 01: Diving into Regression [video](), [learner notebook](https://github.com/deeplearning540/lesson01/blob/main/lesson.ipynb)
#### Questions about lesson 01
#### Questions about exercises for lesson 01
#### Check-your-learning questions for lesson 01
1.) In the following, the order of steps was mixed up; please rearrange:
- collect training data, compute accuracy, predict new data, fit training data
- compute accuracy, collect training data, predict new data, fit training data
- **collect training data, fit training data, compute accuracy, predict new data**
+1 +1 +1 +1+1+1+1+1+1+1
- collect training data, predict new data, fit training data, compute accuracy
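The correct ordering maps directly onto the standard scikit-learn workflow. A minimal sketch (assuming features `X`, targets `y` and new inputs `X_new` are already available; the regressor choice is illustrative):
```python
# Minimal scikit-learn workflow: collect/split data, fit, evaluate, predict.
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)  # collect training data
model = LinearRegression().fit(X_train, y_train)  # fit training data
score = model.score(X_test, y_test)               # compute accuracy (here: R^2 on held-out data)
y_hat = model.predict(X_new)                      # predict new data
```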
2.) The least squares method for an input data pair `x` and `y` derives its name as it …
- Minimizes the sum of the product of x*y
- Minimizes the sum of the absolute difference between y and the predicted y_hat
- **Minimizes the sum of the squared difference between y and the predicted y_hat**
+1 +1 +1+1+1+1+1+1+1+1
- Minimizes the sum of y^2 and x^2
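Written out (standard least-squares definition): the fit chooses parameters that minimize $\sum_i \left(y_i - \hat{y}_i\right)^2$, the sum of squared differences between each observed $y_i$ and its prediction $\hat{y}_i$.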
3.) NaN stands for not-a-number. When loading a dataset with `pandas`, NaN values occur in the loaded data because …
- Input files contain string values in a column
+1 +1+1+1
- Computational Problems occurred, like computing the square root of a negative number
- **Data could not be parsed correctly when reading input files into memory**
+1+1+1+1 +1+1
- there was no internet connection
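A quick illustration (hypothetical CSV content) of how missing or unparseable fields end up as NaN when reading a file with pandas:
```python
# Hypothetical example: a field that cannot be parsed as a number becomes NaN.
import io
import pandas as pd

csv = io.StringIO("height,weight\n1.75,70\n1.62,")  # second data row has no weight value
df = pd.read_csv(csv)
print(df)               # the missing entry shows up as NaN
print(df.isna().sum())  # count NaN values per column
```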
### Lesson 02
- Lesson 02: Enter Clustering [video](), [learner notebook](https://github.com/deeplearning540/lesson02/blob/main/lesson.ipynb)
#### Questions about lesson 02
#### Questions about exercises for lesson 02
#### Check-your-learning questions for lesson 02
1.) You are provided a table of measurements from a weather station. Each measurement comes with values for temperature, precipitation, cloud structure, date, humidity, and a quality ID. The latter tells you if the instrument was performing OK. You’d like to train an algorithm that is able to predict the quality ID (5 possible integer values from 0 to 4) for any new data coming in. This falls into …
- **Supervised Learning**
+1+1+1+1 +1 +1+1 +1+1+1
- Unsupervised Learning
- Reinforcement Learning
2.) You are given a dataset of iris flowers. The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Which of the following feature combinations lend themselves to clustering? See [this](https://en.wikipedia.org/wiki/Iris_flower_data_set#/media/File:Iris_dataset_scatterplot.svg) overview plot for help.
- Sepal.Length versus Sepal.Width
- **Sepal.Length versus Petal.Width**
+1
- **Petal.Length versus Petal.Width**
+1 +1 +1+1 +1+1+1+1+1
- Sepal.Width versus Petal.Width
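A minimal clustering sketch on the best-separating pair, Petal.Length versus Petal.Width (using the sklearn iris loader; the column order follows that loader):
```python
# Minimal sketch: k-means on the two petal features of the iris data set.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris

iris = load_iris()
X = iris.data[:, 2:4]  # petal length and petal width in cm
kmeans = KMeans(n_clusters=3, random_state=42).fit(X)
print(kmeans.labels_[:10])      # cluster assignments of the first samples
print(kmeans.cluster_centers_)  # three cluster centres in the 2D feature space
```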
3.) You are helping to organize a conference of more than 1000 attendants. All participants have already paid and are expecting to pick up their conference t-shirt on the first day. Your team is in shock as it discovers that t-shirt sizes have not been recorded during online registration. However, all participants were asked to provide their age, gender, body height and weight. To help out, you sit down to write a Python script that predicts the t-shirt size for each participant using a clustering algorithm. You know that you can only get six t-shirt sizes (XS, S, M, L, XL, XXL). This falls into:
- Supervised Learning
- **Unsupervised Learning**
+1+1+1 +1 +1+1 +1 +1+1+1
- Reinforcement Learning
### Lesson 03
- Lesson 03: From Clustering To Classification [video](), [learner notebook](https://github.com/deeplearning540/lesson03/blob/main/lesson.ipynb)
#### Questions about lesson 03
There is a little confusion in the explanation of true negatives in the confusion-matrix section of the video.
#### Questions about exercises for lesson 03
#### Check-your-learning questions for lesson 03
1.) When using the k-Nearest-Neighbor (kNN) algorithm for classifying a query point x_q, the k stands for:
- the number of neighbors that must have a given label for the query point to get this label assigned
+1
- the number of classes occurring in the data set
- **the number of observations that define a neighborhood**
+1+1+1+1+1
- the number of clusters in the dataset
2.) When going through tutorials and exercises that discuss the k-Nearest-Neighbor (kNN) method, you observe that k is typically chosen to be an odd number. Checking the code, sklearn also accepts even numbers for k. Why do people tend to choose odd numbers?
- tradition that often works best in practice
- **odd numbers prevent ties from happening with the majority vote**
+1+1+1
- this way, the total number of samples in the neighborhood is always even as one has to add the query sample
- odd numbers prevent ties from happening with the plurality vote
+1+1+
3.) What is the majority vote and the plurality vote if the 8 nearest neighbors to your unknown data point are of the following classes:
a.
- class 1: 3
- class 2: 2
- class 3: 2
- class 4: 1
majority vote: __*none*__, plurality vote: __*class 1*__
b.
- class 1: 5
- class 2: 2
- class 3: 1
majority vote: __*class 1*__, plurality vote: __*class 1*__
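A small sketch of how the two votes can be computed from neighbor labels (the helper functions are made up for illustration):
```python
# Hypothetical helpers illustrating majority vs. plurality vote among k neighbors.
from collections import Counter

def plurality_vote(labels):
    """Most frequent label (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

def majority_vote(labels):
    """Label held by more than half of the neighbors, else None."""
    label, count = Counter(labels).most_common(1)[0]
    return label if count > len(labels) / 2 else None

neighbors_a = [1, 1, 1, 2, 2, 3, 3, 4]  # case a: 3x class 1, 2x class 2, 2x class 3, 1x class 4
print(majority_vote(neighbors_a), plurality_vote(neighbors_a))  # None 1

neighbors_b = [1, 1, 1, 1, 1, 2, 2, 3]  # case b: 5x class 1, 2x class 2, 1x class 3
print(majority_vote(neighbors_b), plurality_vote(neighbors_b))  # 1 1
```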
4.) Find the four hidden bugs!
```python
from sklearn.neighbors import KNeighborsClassifier as knn
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# ... load dataset ...

X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size = 1.5,
                                                    random_state = 42)
kmeans = knn(n_neighbors=5)
kmeans = kmeans.fit(X_train, y_train)
y_test_hat = kmeans.predict(X_train)
cm = confusion_matrix(y_train, y_test_hat)
accuracy = (cm[0,0]+cm[0,1]) / cm.sum()
```
Bugs:
- `test_size = 1.5`: the test fraction must lie between 0 and 1 (e.g. 0.25)
- `y_test_hat = kmeans.predict(X_train)`: the prediction should be made on `X_test`
- `cm = confusion_matrix(y_train, y_test_hat)`: the comparison should use `y_test`
- `accuracy = (cm[0,0]+cm[0,1]) / cm.sum()`: the numerator should be the diagonal, `cm[0,0]+cm[1,1]`
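For reference, a corrected version of the snippet (assuming `X` and `y` hold the loaded dataset):
```python
from sklearn.neighbors import KNeighborsClassifier as knn
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# ... load dataset into X and y ...

X_train, X_test, y_train, y_test = train_test_split(X, y,
                                                    test_size=0.25,   # a fraction, not 1.5
                                                    random_state=42)
clf = knn(n_neighbors=5).fit(X_train, y_train)
y_test_hat = clf.predict(X_test)             # predict on the test split
cm = confusion_matrix(y_test, y_test_hat)    # compare against the test labels
accuracy = (cm[0, 0] + cm[1, 1]) / cm.sum()  # the diagonal holds the correct predictions
```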
### Lesson 04
#### Questions about exercises for lesson 04
#### Check-your-learning questions for lesson 04
1.) The ROC acronym stands for:
- Receiver Operator Curve
- Receiving Operates Curves
- **Receiver Operating Characteristic**
+1 +1 +1 +1+1+1
- Reception Occlusion Characteristic
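For context, a minimal sketch of how the ROC curve is obtained with sklearn (assuming a fitted binary classifier `clf` with `predict_proba` and a held-out test set):
```python
# Sketch: ROC curve and area under it for a binary classifier.
from sklearn.metrics import roc_curve, roc_auc_score

y_score = clf.predict_proba(X_test)[:, 1]          # probability of the positive class
fpr, tpr, thresholds = roc_curve(y_test, y_score)  # false/true positive rate per threshold
auc = roc_auc_score(y_test, y_score)               # area under the ROC curve
```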
2.) Fill in the blanks!
A k-Nearest-Neighbor (kNN) classifier can produce a probability when predicting the class label of an unseen sample x_q. This can be achieved by counting class __*labels*__ in the training set neighborhood of this query point.
For a k=7 neighborhood, the threshold to decide for any given class in this neighborhood is calculated as 4/__*7*__. In the same setting (k=7), let’s assume we find 5 labels for class 1 and 2 labels for class 0. This means, that we get two probabilities, which are __*5/7*__ for class 1 and __*2/7*__ for class 0.
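In sklearn these neighborhood fractions are exposed by `predict_proba`; a short sketch for a k=7 neighborhood (training data `X_train`, `y_train` and query points `X_query` are assumed to exist):
```python
# Sketch: kNN class probabilities are the label fractions in the neighborhood,
# e.g. 5/7 for class 1 and 2/7 for class 0 as in the question above.
from sklearn.neighbors import KNeighborsClassifier

clf = KNeighborsClassifier(n_neighbors=7).fit(X_train, y_train)
proba = clf.predict_proba(X_query)  # one row per query point, one column per class
```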
### Lesson 05
#### Questions about exercises for lesson 05
Options for one-hot encoding:
- one-hot encode truth labels yourself and use as loss "categorical_crossentropy" (softmax activation)
- use integers as truth labels and use as loss "sparse_categorical_crossentropy" (softmax activation)
- (binary case only, and typically worse) use a single output node with sigmoid activation, integers (0, 1) as truth labels, and "binary_crossentropy" as loss
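A sketch of the first two options in keras (the `model`, `X_train`, `y_train` and `num_classes` names are placeholders; both options assume a final layer with `num_classes` outputs and softmax activation):
```python
# Option 1: one-hot encoded truth labels + categorical_crossentropy
from tensorflow.keras.utils import to_categorical

y_train_onehot = to_categorical(y_train, num_classes=num_classes)
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_train, y_train_onehot, epochs=10)

# Option 2: integer truth labels + sparse_categorical_crossentropy
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X_train, y_train, epochs=10)
```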
#### Check-your-learning questions for lesson 05
1.) A hidden layer of an artificial neural network consists of a fixed set of parts. These are …
- weights $W$ and a bias term $\vec{b}$
- weights $W$ and a non-linear activation function $F$
- a bias term $\vec{b}$ and a non-linear activation function $F$
- **weights $W$, a bias term $\vec{b}$ and a non-linear activation function $F$**
+1 +1 +1 +1+1 +1
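Written out, a hidden layer computes $\vec{y} = F(W\vec{x} + \vec{b})$: the weights $W$, the bias term $\vec{b}$ and the non-linear activation $F$ together make up the layer.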
2.) Unlike scikit-learn, keras is a machine learning framework that …
- offers one-stop-shop prepared networks that are already published
- offers building blocks to construct neural networks on CPU or GPU architectures
- **offers an API to either wrap around backends (keras library) or represents the high-level API for tensorflow**
+1+1+1+1+1
- all of the above
### Lesson 06
#### Questions about exercises for lesson 06
#### Check-your-learning questions for lesson 06
1.) The advantage of mini-batch-based optimisation is …
- a mini-batch represents the entire data set and hence is enough to optimize on
- the optimisation converges faster
+1+1 +1+1+1 +1
- **the optimisation can be performed in memory independent of the data set size**
+1+1+1
- the optimisation will always converge to a global optimum
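A sketch of how this looks in keras (assuming a compiled `model` and training data `X_train`, `y_train`):
```python
# Sketch: the mini-batch size is set via batch_size; only one batch at a time
# needs to fit into (GPU) memory, independent of the total data set size.
history = model.fit(X_train, y_train,
                    batch_size=32,        # mini-batch size
                    epochs=10,
                    validation_split=0.2)
```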
2.) Categorical Cross-Entropy is part of a well-known divergence in statistics. A divergence is a method to compare two probability density functions. It provides a large value if two distributions are different and a small value if they are similar. The well-known divergence that underpins the Categorical Cross-Entropy is …
- Mean-Squared-Error divergence
- Negative-Log-Likelihood divergence
- **Kullback-Leibler divergence**
+1+1+1+1+1+1+1+1
- Maximum-Mean-Discrepancy divergence
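For reference, the standard relation between the two:
$$D_\mathrm{KL}(p \,\|\, q) = \sum_i p_i \log\frac{p_i}{q_i} = \underbrace{-\sum_i p_i \log q_i}_{\text{cross-entropy } H(p,\,q)} - H(p)$$
Since the entropy $H(p)$ of the true label distribution does not depend on the network output $q$, minimising the categorical cross-entropy minimises the Kullback-Leibler divergence.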
3.) The gradient that is required for gradient descent is the gradient …
- of the loss function L with respect to the testset input data, df/dx, given the network parameters theta
+1
- of the network f with respect to the input data, df/dx, given the network parameters theta
- of the network f with respect to the network parameters, df/dtheta, given the training data x
- **of the loss function L with respect to the network parameters, df/dtheta, given the training data x**
+1+1+1+1+1+1+1+1+1
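Written as an update rule, one gradient-descent step is $\theta \leftarrow \theta - \eta \, \nabla_\theta L(f_\theta(x), y)$, with learning rate $\eta$, loss $L$, network $f_\theta$ and training data $(x, y)$.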
### Lesson 07
#### Questions about exercises for lesson 07
#### Check-your-learning questions for lesson 07
1.) Fill in the blanks to produce a CNN for classification!
<pre>
from tensorflow import keras
from keras.layers import Input, Dense, Dropout, Flatten, <b>Conv</b>2D, <b>MaxPooling</b>2D
#load the data
#define the network
conv1 = Conv2D(16, kernel_size=(3,3), activation='<b>relu</b>',
               input_shape=X_train.shape[1:])
conv2 = <b>Conv2D</b>(32, kernel_size=(3,3), activation='relu')
mpool = <b>MaxPooling2D</b>(pool_size=(2,2))
## MLP layers
flat = Flatten()
dense1 = Dense(128, <b>activation='relu'</b>)
dense2 = Dense(num_classes, <b>activation='softmax'</b>)
#compile and train
x_inputs = Input(shape=X_train.shape[1:])
x = conv1(<b>x_inputs</b>)
x = <b>conv2</b>(x)
x = <b>mpool</b>(x)
x = flat(x)
x = dense1(x)
output_yhat = dense2(x)
model = keras.Model(inputs = <b>x_inputs</b>, outputs = <b>output_yhat</b>,
                    name="hello-world-cnn")
</pre>
2.) The Flatten operation rearranges an input image (or feature map) into a sequence of numbers. How does it perform this?
- the pixel intensities are averaged per row and concatenated
- all rows of the input are added and provided as a result
- all columns of the input are concatenated (from top to bottom)
- **all rows of the input are concatenated (from top to bottom)**
+1+1 +1+1+1 +1+1+1+1
```
array = [[0, 1, 2],
[3, 4, 5],
[6, 7, 8]]
# after flatten():
flattened_array = [0, 1, 2, 3, 4, 5, 6, 7, 8]
```
3.) For an input image shape of 28x28 what is the shape of the feature map after running the image through a single 5x5 convolutional filter?
- 24x28
- 20x28
- 26x26
+1+1+1
- **24x24**
+1 +1 +1 +1+1+1+1
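With no padding and a stride of 1, each spatial dimension shrinks by (kernel size - 1): $28 - 5 + 1 = 24$, hence a 24x24 feature map.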
4.) For an MNIST input image, how many parameters does a Conv2D layer require when being defined to produce 16 filters as output and a 3x3 neighborhood? How many parameters does a Dense layer with 16 outputs have? Compute the two parameter counts! Note: MNIST data shape (28, 28, 1)
- Number of parameters for Conv2D layer:
144, 144, 144, 160, 160, 160, 160, 160, **160**
num_pars = ((filter_width x filter_height x num_features_previous_layer)+1) x num_filters
In our case: ((3 x 3 x 1)+1) x 16 = 160
- Number of parameters for Dense layer:
16, 16, no idea :D x2, 12560, 12560, no idea
num_pars = 28 x 28 x 1 x 16 + 16 = 12560
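A small sketch to verify both counts with keras (layer choices as stated in the question; nothing else assumed):
```python
# Sketch: verify the parameter counts of a Conv2D and a Dense layer on MNIST-shaped input.
from tensorflow import keras
from tensorflow.keras.layers import Conv2D, Dense, Flatten, Input

inputs = Input(shape=(28, 28, 1))                  # MNIST image shape
conv_out = Conv2D(16, kernel_size=(3, 3))(inputs)  # ((3*3*1)+1)*16 = 160 parameters
dense_out = Dense(16)(Flatten()(inputs))           # 28*28*1*16 + 16 = 12560 parameters
model = keras.Model(inputs=inputs, outputs=[conv_out, dense_out])
model.summary()                                    # reports 160 and 12560 for the two layers
```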
### Lesson 08 (Capstone Project)
#### Questions about exercise on Capstone Project: