# Deep learning
## Topics
0. Definitions
1. Historical overview
2. Machine learning classification
3. Deep learning
4. Uses of deep learning
## 0. Definitions
**machine learning** - an approach to data analysis: creating and training models that find patterns and make predictions without being explicitly programmed to do so
**ANN** - artificial neural network, a network of nodes mimicking biological neurons - a naive approach
**deep learning** - an ANN with many hidden layers -> the more hidden layers, the deeper the learning
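The "many hidden layers" idea can be sketched in plain Python. This is a toy illustration, not a real framework: all weights, biases and layer sizes below are made-up values, and a sigmoid is used as the activation.

```python
import math

def layer(inputs, weights, biases):
    # one fully connected layer: weighted sum per neuron, then a sigmoid
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# "deep" just means several hidden layers applied in sequence
x = [0.5, -1.2]
h1 = layer(x, [[0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1])  # hidden layer 1 (2 neurons)
h2 = layer(h1, [[0.7, -0.2]], [0.05])                 # hidden layer 2 (1 neuron)
```

Stacking more `layer` calls makes the network "deeper"; each sigmoid output stays between 0 and 1.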
## 1. Historical overview
### ➡ machine learning
- 1943 - Walter Pitts and Warren McCulloch
- based on human neurons
- 1957 - Frank Rosenblatt's perceptron
- single-layer, pixel-observing
- 1967 - nearest neighbour algorithm -> pattern recognition
## 2. Machine learning classification
### Teaching types
- supervised: all training data is labelled
  - classification - a fixed set of output classes: cat / not cat
  - regression: statistical analysis - yields a numerical output
- semi-supervised: only part of the training data is labelled
- unsupervised: all training data is unlabelled, e.g.: clustering tasks
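The classification/regression split can be shown on one toy labelled dataset (all values made up; the closed-form least-squares fit here is just for illustration):

```python
# toy labelled data: x -> y, roughly following y = 2x
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.1]

# regression: fit y = a*x + b by least squares, output is a number
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - a * mx

# classification: map a numeric prediction onto a fixed set of labels
label = "big" if a * 5.0 + b > 5.0 else "small"
```

Regression yields the numeric pair `(a, b)`; classification collapses a prediction into one of a given number of outputs.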
### Examples for the types
#### Supervised models:
- Classic Neural Networks (Multilayer Perceptrons)
- Convolutional Neural Networks (CNNs)
- Recurrent Neural Networks (RNNs)
- LSTM - Long Short-Term Memory
- GRU - Gated Recurrent Unit
- MLM - Masked Language Model -> BERT
#### Unsupervised models:
- Self-Organizing Maps (SOMs)
- Boltzmann Machines
- AutoEncoders
- GloVe - Global Vectors for Word Representation
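As a concrete unsupervised example (plain k-means clustering, not one of the models listed above), a minimal sketch on made-up 1-D data where no labels are given and the groups emerge from the data itself:

```python
# one-dimensional k-means with k=2; data points and start centres are made up
points = [1.0, 1.2, 0.8, 8.0, 8.2, 7.9]
centres = [0.0, 10.0]

for _ in range(10):
    # assignment step: attach each point to its nearest centre
    clusters = [[], []]
    for p in points:
        clusters[0 if abs(p - centres[0]) < abs(p - centres[1]) else 1].append(p)
    # update step: move each centre to the mean of its cluster
    centres = [sum(c) / len(c) for c in clusters]
```

The two centres settle on the two natural groups (around 1 and around 8) without any labels being supplied.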
### Learning approaches and algorithms [as in DeepAi](https://deepai.org/machine-learning-glossary-and-terms/machine-learning)
- ANN
- Bayesian network: variables and their dependencies - can model a belief system
- Reinforcement learning - teaching with rewards and punishments rather than explicit 1-or-0 labels
- Decision tree learning / classification tree: chaining classifiers together
- Association rule learning: creating rules - e.g. from supermarket purchase data: whoever buys cheese and lettuce is likely to buy tomato ketchup as well
- Similarity learning: learning how similar two things are, e.g. facial recognition - nearest neighbour
- Genetic algorithms: natural selection applied to models
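The similarity-learning / nearest-neighbour idea can be sketched in a few lines. The names and feature vectors below are purely hypothetical stand-ins for real face embeddings:

```python
import math

def euclid(a, b):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbour(query, labelled_points):
    # return the label of the stored point closest to the query
    return min(labelled_points, key=lambda p: euclid(query, p[0]))[1]

# hypothetical 2-D "face embeddings" with identity labels
faces = [([0.9, 0.1], "alice"), ([0.2, 0.8], "bob")]
match = nearest_neighbour([0.85, 0.2], faces)
```

A real system would use learned high-dimensional embeddings, but the lookup step is the same: whoever is closest in feature space is the match.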
## 3. Deep Learning
### Basic example of neural net

### Features of most [deep learning algorithms](https://brilliant.org/wiki/artificial-neural-network/)
- feed forward - data flows in one direction
- OR recurrent - some connections between nodes form directed cycles
- Layered
- Each connection has a weight (a number) and each neuron applies an activation function
#### recurrent network -> LSTMs are built from this

#### LSTM

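Since the diagram is not reproduced here, a minimal single-unit LSTM step in plain Python may help. All weights are made-up scalars and biases are omitted; real implementations use weight matrices over vectors:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    # w holds one (input weight, hidden weight) pair per gate
    f = sigmoid(w["f"][0] * x + w["f"][1] * h_prev)    # forget gate
    i = sigmoid(w["i"][0] * x + w["i"][1] * h_prev)    # input gate
    g = math.tanh(w["g"][0] * x + w["g"][1] * h_prev)  # candidate cell state
    o = sigmoid(w["o"][0] * x + w["o"][1] * h_prev)    # output gate
    c = f * c_prev + i * g                             # new cell state (memory)
    h = o * math.tanh(c)                               # new hidden state (output)
    return h, c

w = {"f": (0.5, 0.1), "i": (0.4, 0.2), "g": (0.9, -0.3), "o": (0.6, 0.1)}
h, c = 0.0, 0.0
for x in [1.0, 0.5, -0.2]:   # run the cell over a short input sequence
    h, c = lstm_step(x, h, c, w)
```

The cell state `c` carries information across time steps, which is what lets LSTMs remember over longer sequences than plain recurrent cells.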
### Topology of a network
#### Contrary to the picture, real networks can have many hidden layers

### Training a model
#### Error (loss) functions measure the difference between the model's output and the ground truth
- in case of regression: MSE (Mean Squared Error)
- in case of classification: cross-entropy - computes the error between two probability distributions [more here](https://machinelearningmastery.com/cross-entropy-for-machine-learning/)
- then the Gradient Descent algorithm is called: it minimises the error function by stepping each weight in the direction that reduces the error
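The loss-plus-gradient-descent loop above can be sketched for the simplest possible model, `y = w * x` with an MSE loss (toy data and learning rate, all values made up):

```python
# fit w in y = w * x by gradient descent on the MSE loss
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # ground truth follows y = 2x

w = 0.0                 # initial weight
lr = 0.05               # learning rate (step size)
for _ in range(200):
    # derivative of MSE w.r.t. w: (2/n) * sum((w*x - y) * x)
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad      # step the weight against the gradient
```

Each step moves `w` downhill on the error surface, so it converges towards the true value 2.0.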
## 4. Uses of deep learning
### Autonomous vehicles
- SAE defines 6 levels of automation (0-5)
- current systems are mostly Level 1-2: driver assistance / partial automation
- models recognise people, cars, lanes, signs [in detail](https://youtu.be/hx7BXih7zx8) Andrej Karpathy's presentation
- Machine vision has been a main focus of the machine learning community for the past 15 years
- Tesla, comma.ai, Nvidia and all major car manufacturers
### Facial recognition
- built-in to most of the smart devices
- e.g.: Windows Hello, Apple Face ID, **ügyfélkapu** (the Hungarian e-government portal)
- mostly classification and similarity learning
- composed of many learning modules that evaluate each other
### [AlphaGO Zero](https://deepmind.com/research/case-studies/alphago-the-story-so-far)
- self learning - creativity
### [AlphaStar](https://deepmind.com/blog/article/AlphaStar-Grandmaster-level-in-StarCraft-II-using-multi-agent-reinforcement-learning)
- same approach as AlphaGo, but StarCraft II is an RTS (real-time strategy game)
- teaching: "Leagues" - the system can also emulate weak players
### StyleGAN
- the first version could stylise photos
- given an input image and a second image carrying the style, it outputs the first image rendered in the style of the second
- generates photorealistic photos


### BUT still has artifacts



## Useful links
- https://github.com/NVlabs/stylegan2
- https://deeplearning.mit.edu/ MIT deep learning class page
- https://www.deeplearningbook.org/contents/intro.html
- https://www.dataversity.net/brief-history-deep-learning/
- https://youtu.be/hx7BXih7zx8 Andrej Karpathy's presentation on self driving
- https://wandb.ai/ayush-thakur/face-vid2vid/reports/Overview-of-One-Shot-Free-View-Neural-Talking-Head-Synthesis-for-Video-Conferencing--Vmlldzo1MzU4ODc NVIDIA face reconstructing videocall
- https://towardsdatascience.com/illustrated-guide-to-lstms-and-gru-s-a-step-by-step-explanation-44e9eb85bf21 LSTM - GRU summary
- https://github.com/stanfordnlp/GloVe
- https://www.tensorflow.org/tutorials/text/word2vec - skip gram
- https://deepai.org/machine-learning-glossary-and-terms/machine-learning - types source
- https://keras.io/examples/nlp/semantic_similarity_with_bert/ BERT at home :smiling_face_with_smiling_eyes_and_hand_covering_mouth: