TerraScale ML - Group 9
===================

This HackMD document will serve as our collaborative note-taking tool. Feel free to add anything you want to it to help each other out. The goal of our sessions is to discuss what you just learned and then try it ourselves with some hands-on exercises.

Let's say hi:
- Hi, this is Birgit. Working fine :D
- Heya, Rikhav here :)
- Hi!
- Hi, Johannes here!
- Hi, this is Javiera
- Hi, Ferdi here!
- Hi, Emilio here!
- Hi, Ali here!

## Important Links
- Indico: https://indico.desy.de/event/28296/
- Main hackmd: https://hackmd.io/P6LPegsuQ9ikLtgf6jpxaw
- Google Colab: https://colab.research.google.com (to copy the notebook to your drive you will need a Google account)

## How we will stay in touch
- Mattermost: https://mattermost.web.cern.ch/terascale-ml/channels/group-9
- Zoom (Team): https://cern.zoom.us/j/66007422753?pwd=WEpEMGc3bkFSWTYvVVR3TmdYMHdpZz09
- Zoom (Main): https://cern.zoom.us/j/66120916180?pwd=aWtSVWdUNFFXV1FFSFQ4MEFsK1RlQT09

## Lesson 1
- https://deeplearning540.github.io/lesson01/content.html
- https://github.com/deeplearning540/lesson01/blob/main/lesson.ipynb
- More info on the used dataset: http://sustainabilitymath.org/statistics-materials/
- For example, the grain consumption/production/stock was given in kg/person

### Check your Learning

#### Exercise 1
**In the following, the order of steps was confused, please rearrange:**
1. collect training data, compute accuracy, predict new data, fit training data
2. compute accuracy, collect training data, predict new data, fit training data
3. collect training data, fit training data, compute accuracy, predict new data +1, +1, +1, +1, +1, +1, +1, +1, +1
4. collect training data, predict new data, fit training data, compute accuracy

#### Exercise 2
**The least squares method for an input data pair `x` and `y` derives its name as it …**
1. Minimizes the sum of the product of `x*y`
2. Minimizes the sum of the absolute difference between `y` and the predicted `y_hat`
3. Minimizes the sum of the squared difference between `y` and the predicted `y_hat` +1, +1, +1, +1, +1, +1, +1, +1
4. Minimizes the sum of `y**2` and `x**2`

#### Exercise 3
**NaN stands for not-a-number. When loading a dataset with `pandas`, NaN values occur in the loaded data because ...**
1. Input files contain string values in a column +1, +1, +1
2. Computational problems occurred, like computing the square root of a negative number +1, +1, +1, +1, +1, +1
3. Data could not be parsed correctly when reading input files into memory
4. there was no internet connection

There was some discussion here about whether this is a trick question, since all could be true depending on the context of the question.

## Lesson 2
Content: https://deeplearning540.github.io/lesson02/content.html
Notebook: https://github.com/deeplearning540/lesson02/blob/main/lesson.ipynb

### Check your Learning
The following questions serve as a help for learners to reflect on the content of the videos. Answer at least one question. At best you want to answer these questions as a team.

#### Exercise 1
You are provided a table of measurements from a weather station. Each measurement comes with values for temperature, precipitation, cloud structure, date, humidity, and a quality ID. The latter tells you if the instrument was performing OK. You'd like to learn an algorithm that is able to predict the quality ID (5 possible integer values from 0 to 4) for any new data coming in. This falls into ...
1. Supervised Learning +1, +1, +1, +1, +1, +1, +1
2. Unsupervised Learning
3. Reinforcement Learning
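Exercise 1 describes a supervised-learning setup, since the quality ID is a known label for every row of the table. Purely as an illustration (not part of the lesson material), a minimal `scikit-learn` sketch of that setup could look as follows; the file name, column names and classifier choice are made up for this example.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# hypothetical file and column names -- the real table layout is not given in the exercise
df = pd.read_csv("weather_station.csv")
X = df[["temperature", "precipitation", "humidity"]].values
y = df["quality_id"].values  # known labels (0..4) -> supervised classification

# hold out part of the labelled data to measure how well we predict unseen measurements
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```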
#### Exercise 2
You are given a dataset of iris flowers. The data set consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor). Four features were measured from each sample: the length and the width of the sepals and petals, in centimeters. Which of the following feature combinations lend themselves for clustering? See [this overview plot](https://en.wikipedia.org/wiki/Iris_flower_data_set#/media/File:Iris_dataset_scatterplot.svg) for help.
1. Sepal.Length versus Sepal.Width
2. Sepal.Length versus Petal.Width
3. Petal.Length versus Petal.Width +1, +1, +1, +1, +1, +1
4. Sepal.Width versus Petal.Width +1, +1

There was some discussion since there is no clear-cut answer that could be considered "the correct" one. 3. was chosen since all the dots were nicely separated and, judging by eye, it would lend itself slightly better for clustering. 4. was chosen for similar reasons.

#### Exercise 3
You are helping to organize a conference of more than 1000 attendants. All participants have already paid and are expecting to pick up their conference t-shirt on the first day. Your team is in shock as it discovers that t-shirt sizes have not been recorded during online registration. However, all participants were asked to provide their age, gender, body height and weight. To help out, you sit down to write a Python script that predicts the t-shirt size for each participant using a clustering algorithm. You know that you can only get 7 t-shirt sizes (XS, S, M, L, XL, XXL). This falls into:
1. Supervised Learning
2. Unsupervised Learning +1, +1, +1, +1, +1, +1, +1, +1
3. Reinforcement Learning

## Before we continue!
Let's have a vote on how to proceed!
- Stick to the plan and spend roughly 1 hour on each lesson:
- Watch Lessons 3 & 4 and go through the discussion, but skip the coding exercise to have more time to work on Lesson 5, which starts the ML part: +1, +1, +1, +1, +1, +1, +1

## Lesson 3
Content: https://deeplearning540.github.io/lesson03/content.html
Notebook: https://github.com/deeplearning540/lesson03/blob/main/lesson.ipynb

### Check your Learning
The following questions serve as a help for learners to reflect on the content of the videos. Answer at least one question. At best you want to answer these questions as a team.

#### Exercise 1
When using the k-Nearest-Neighbor (kNN) algorithm for classifying a query point `x_q`, the `k` stands for:
1. the number of neighbors that must have a given label for the query point to get this label assigned
2. the number of classes occurring in the data set
3. the number of observations that define a neighborhood ++++++111111+++++ ++++++ 111111 +++++11111 +1 +1 +1
4. the number of clusters in the dataset

#### Exercise 2
When going through tutorials and exercises that discuss the k-Nearest-Neighbor (kNN) method, you observe that `k` is typically chosen to be an odd number. Checking the code, `sklearn` also accepts even numbers for `k`. Why do people tend to choose odd numbers?
1. tradition that often works best in practice +1
2. odd numbers prevent ties from happening with the majority vote ++++++111111 +1, +1, +1
3. this way, the total number of samples in the neighborhood is always even as one has to add the query sample
4. odd numbers prevent ties from happening with the plurality vote ++++++111111 +1, +1

After a short discussion and a reminder of how the votes are defined, it was agreed that 2. is the correct answer.
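To make the kNN discussion concrete, here is a small sketch (not from the lesson notebook) that fits a `KNeighborsClassifier` with an odd `k` on the iris data from the Lesson 2 exercise, using the Petal.Length/Petal.Width combination voted for above. The split parameters and `k=5` are illustrative choices.

```python
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

# load the iris data (same dataset as in the Lesson 2 exercise)
df_iris = sns.load_dataset("iris")
X = df_iris[["petal_length", "petal_width"]].values  # feature combination from Lesson 2, Exercise 2
y = df_iris["species"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

# k = 5 neighbours: an odd k avoids ties in the majority vote (cf. Exercise 2 above)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
y_test_hat = clf.predict(X_test)

print(confusion_matrix(y_test, y_test_hat))
print("accuracy:", clf.score(X_test, y_test))
```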
#### Exercise 3
What is the majority vote and the plurality vote if the 8 nearest neighbors to your unknown data point are of the following classes?

a)
- class 1: 3
- class 2: 2
- class 3: 2
- class 4: 1

majority vote: `None`, plurality vote: `class 1`

b)
- class 1: 5
- class 2: 2
- class 3: 1

majority vote: `class 1`, plurality vote: `class 1`

#### Exercise 4
Find the four hidden bugs!

```python
from sklearn.neighbors import KNeighborsClassifier as knn
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

# ... load dataset ...

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 1.5, random_state = 42)

kmeans = knn(n_neighbors=5)
kmeans = kmeans.fit(X_train, y_train)

y_test_hat = kmeans.predict(X_train)

cm = confusion_matrix(y_train, y_test_hat)
accuracy = (cm[0,0]+cm[0,1]) / cm.sum()
```

1. `test_size` is given an invalid value since $\text{test\_size} \in [0, 1]$
2. prediction is used on `X_train` instead of `X_test`
3. the confusion matrix uses `y_train` instead of `y_test`
4. accuracy is defined as the trace of the confusion matrix divided by the total number of predictions (i.e. `cm[0,1]` should be `cm[1,1]`)

## Lesson 4
Content: https://deeplearning540.github.io/lesson04/content.html
Notebook: https://github.com/deeplearning540/lesson04/blob/main/lesson.ipynb

### Check your Learning
The following questions serve as a help for learners to reflect on the content of the videos. Answer at least one question. At best you want to answer these questions as a team.

#### Exercise 1
The `ROC` acronym stands for
1. Receiver Operator Curve
2. Receiving Operates Curves
3. Receiver Operating Characteristic +1, +1, +1, +1, +1, +1, +1, +1
4. Reception Occlusion Characteristic

#### Exercise 2
Fill in the blanks! A k-Nearest-Neighbor (kNN) classifier can produce a probability when predicting the class label of an unseen sample `x_q`. This can be achieved by counting class `_______` in the training set neighborhood of this query point. [fraction, members +1]

For a `k=7` neighborhood, the threshold to decide for any given class in this neighborhood is calculated as `4/__`. In the same setting (`k=7`), let's assume we find `5` labels for class `1` and `2` labels for class `0`. This means that we get two probabilities, which are `_____` for class `1` and `_____` for class `0`.

1. [7] (+1, +1, +1, +1, +1)
2. [0.71 = 5/7] (+1, +1, +1, +1)
3. [0.29 = 2/7] (+1, +1, +1, +1, +1)

## Lesson 5
Content: https://deeplearning540.github.io/lesson05/content.html
Notebook: https://github.com/deeplearning540/deeplearning540.github.io/blob/main/source/lesson04/script.ipynb

### Check your Learning
The following questions serve as a help for learners to reflect on the content of the videos. Answer at least one question. At best you want to answer these questions as a team.

#### Exercise 1
A hidden layer of an artificial neural network consists of a fixed set of parts. These are ...
1. weights $W$ and a bias term $\vec{b}$
2. weights $W$ and a non-linear activation function $F$
3. a bias term $\vec{b}$ and a non-linear activation function $F$
4. weights $W$, a bias term $\vec{b}$ and a non-linear activation function $F$ +1, +1, +1, +1, +1, +1

#### Exercise 2
Unlike `scikit-learn`, `keras` is a machine learning framework that ...
1. offers one-stop-shop prepared networks that are already published
2. offers building blocks to construct neural networks on CPU or GPU architectures
3. offers an API to either wrap around backends (keras library) or represents the high-level API for `tensorflow`
4. all of the above +1, +1, +1, +1, +1
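As an illustration of the answer to Lesson 5, Exercise 1 (a hidden layer bundles weights $W$, a bias term $\vec{b}$ and a non-linear activation $F$), here is a minimal `keras` sketch; it is not part of the lesson notebook and the layer sizes are arbitrary.

```python
from tensorflow import keras
from tensorflow.keras import layers

# a single hidden layer: weights W, bias b, and a non-linear activation F (here ReLU)
model = keras.Sequential([
    keras.Input(shape=(4,)),                # 4 iris features
    layers.Dense(16, activation="relu"),    # hidden layer: W (4x16), b (16), F = relu
    layers.Dense(3, activation="softmax"),  # output layer: one probability per iris species
])

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()  # the hidden layer reports 4*16 + 16 = 80 parameters
```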
For importing the iris dataset, the lesson 5 content is somehow glitched. This works:

```python
import pandas as pd
import seaborn as sns

df_iris = sns.load_dataset('iris')
X = df_iris[["sepal_length","sepal_width","petal_length","petal_width"]].values
y = df_iris["species"].values
```

## Lesson 6
Content: https://deeplearning540.github.io/lesson06/content.html

### Check your Learning

#### Exercise 1
The advantage of mini-batch based optimisation is ...
1. a mini-batch represents the entire dataset and hence is enough to optimize on
2. the optimisation converges faster (faster than what?) +1, +1, +1
3. the optimisation can be performed in memory independent of the dataset size +1, +1
4. the optimisation will always converge into a global optimum

#### Exercise 2
Categorical Cross-Entropy is part of a well-known divergence in statistics. A divergence is a method to compare two probability density functions. It provides a large value if two distributions are different and a small value if they are similar. This well-known divergence that spurs the Categorical Cross-Entropy is ...
1. Mean-Squared-Error divergence
2. Negative-Log-Likelihood divergence
3. Kullback-Leibler divergence +1, +1, +1, +1, +1
4. Maximum-Mean-Discrepancy divergence

#### Exercise 3
The gradient that is required for gradient descent is the gradient ...
1. of the loss function `L` with respect to the test-set input data, `df/dx`, given the network parameters `theta`
2. of the network `f` with respect to the input data, `df/dx`, given the network parameters `theta`
3. of the network `f` with respect to the network parameters, `df/dtheta`, given the training data `x`
4. of the loss function `L` with respect to the network parameters, `df/dtheta`, given the training data `x` +1, +1, +1, +1, +1, +1

## Lesson 7
Content: https://deeplearning540.github.io/lesson07/content.html
Filled Notebook: https://github.com/deeplearning540/deeplearning540.github.io/blob/main/source/lesson07/script.ipynb

### Check your Learning

#### Exercise 1
Fill in the blanks to produce a CNN for classification! Our answers are given in square brackets `[...]`. For real code, the square brackets should be removed!

```
from tensorflow import keras
from keras.layers import Input, Dense, Dropout, Flatten, [Conv]2D, [MaxPool]2D

# load the data

# define the network
# other activation functions (e.g. tanh or sigmoid) are also fine
conv1 = Conv2D(16, kernel_size=(3,3), activation=['relu'], input_shape=X_train.shape[1:])
conv2 = [Conv2D](32, kernel_size=(3,3), activation='relu')
mpool = [MaxPool2D](pool_size=(2,2))

## MLP layers
flat = Flatten()
dense1 = Dense(128, activation='relu')  # other activations also fine
# the last layer uses softmax, so the output can be interpreted as probabilities
# of belonging to class_i (a vector with one entry per class)
dense2 = Dense(num_classes, [activation='softmax'])

# assemble the model
x_inputs = Input(shape=X_train.shape[1:])
x = conv1([x_inputs])
x = [conv2](x)
x = [mpool](x)
x = flat(x)
x = dense1(x)
output_yhat = dense2(x)

model = keras.Model(inputs=[x_inputs], outputs=[output_yhat], name="hello-world-cnn")
```
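The snippet above stops at building the model graph. A hedged sketch of the missing compile-and-train step could look like this; the optimizer, batch size and epoch count are illustrative choices, and `X_train`/`y_train` (with labels one-hot encoded, e.g. via `keras.utils.to_categorical`) are assumed to come from the data-loading step.

```python
# assumes `model`, `X_train`, `y_train` from the Exercise 1 snippet above
model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # matches the softmax output layer
              metrics=["accuracy"])

history = model.fit(X_train, y_train,
                    batch_size=32,          # mini-batch optimisation as in Lesson 6
                    epochs=5,
                    validation_split=0.2)   # hold out 20% of the training data for monitoring
```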
#### Exercise 2
The `Flatten` operation rearranges an input image (or feature map) into a sequence of numbers. How does it perform this?
1. the pixel intensities are averaged per row and concatenated
2. all rows of the input are added and provided as a result
3. all columns of the input are concatenated (from top to bottom) +1, +1, +1
4. all rows of the input are concatenated (from top to bottom)

#### Exercise 3
For an input image shape of `28x28`, what is the shape of the feature map after running the image through a single `5x5` convolutional filter?
1. `24x28`
2. `20x28`
3. `26x26` +1, +1
4. `24x24` +1, +1, +1, +1, +1, +1

A 5x5 kernel will exclude 2 pixels on the left and on the right (same for top and bottom). Subtracting 4 pixels from both width and height thus gives a 24x24 feature map per input image.

#### Exercise 4
For an MNIST input image, how many parameters does a `Conv2D` layer require when being defined to produce `16` feature maps as output with a `3x3` neighborhood? How many parameters does a `Dense` layer with `16` outputs have? Compute the two parameter counts!

Conv2D: $16 \cdot 9 + 16 = 160$. Each kernel has $3 \times 3 = 9$ weight parameters plus 1 bias parameter, and there are 16 kernels.

Dense: our input is $28 \cdot 28 = 784$ values, each connected to every one of the 16 nodes of the layer, thus $784 \cdot 16 = 12544$ weight parameters (plus 16 biases, i.e. 12560 in total, if the bias terms are counted as for the Conv2D layer).
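The two counts from Exercise 4 can be checked directly in `keras` (assuming `tensorflow` is installed). The sketch below builds both layers on an MNIST-shaped input and prints their parameter counts.

```python
from tensorflow import keras
from tensorflow.keras import layers

inputs = keras.Input(shape=(28, 28, 1))               # MNIST-shaped input (single channel)
conv = layers.Conv2D(16, kernel_size=(3, 3))(inputs)  # 16 * (3*3*1) weights + 16 biases = 160
flat = layers.Flatten()(inputs)                       # 28*28*1 = 784 values
dense = layers.Dense(16)(flat)                        # 784*16 weights + 16 biases = 12560

model = keras.Model(inputs=inputs, outputs=[conv, dense])
for layer in model.layers:
    print(layer.name, layer.count_params())
```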
