Deep Learning
===
###### tags: `IvLabs`
## Mitesh Khapra (CS7015)
---
### Description:
- [YouTube Link](https://www.youtube.com/playlist?list=PLEAYkSg4uSQ1r-2XrJ_GBzzS6I-f8yfRU)
- The inspiration comes from the human nervous system, where, based on various inputs from the sensory organs, a large network of neurons reacts to the surroundings, i.e., produces an output.
- Hence we try to build and train a neural network that can solve computational problems in a similar manner, i.e., find a way in which functions can be represented as a neural network.
---
### Lec 2.1
- **Inhibitory Neurons** : If this type of neuron is ON then, irrespective of the inputs from the other neurons, the output will always be OFF, i.e., it affects the output independently.
- **Excitatory Neurons** : These are neurons which, combined with other similar neurons, produce the output.
### Lec 2.2
- A **McCulloch-Pitts Neuron** can represent a boolean function only if the function is linearly separable, i.e., the output is 1 for inputs lying on one side of a line (plane) and 0 for inputs lying on the other side.
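A quick sketch (not from the lecture itself) of an MP neuron with binary inputs that fires when the sum of its inputs reaches a threshold θ; inhibitory inputs are ignored for simplicity, and the thresholds below are set by hand:

```python
def mp_neuron(inputs, theta):
    """McCulloch-Pitts neuron: fires (returns 1) when the
    sum of the binary inputs reaches the threshold theta."""
    return 1 if sum(inputs) >= theta else 0

# AND of 2 inputs: fires only when both inputs are 1 (theta = 2)
print(mp_neuron([1, 1], theta=2))  # 1
print(mp_neuron([1, 0], theta=2))  # 0

# OR of 2 inputs: fires when at least one input is 1 (theta = 1)
print(mp_neuron([0, 1], theta=1))  # 1
```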
### Lec 2.3
- Neurons whose inputs carry weights are called **Perceptrons**.
- The weights (w1, w2, ...) and the bias (w0) are learned from previously collected data.
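A minimal sketch of the resulting decision rule; the weights here are hand-picked (not learned) and implement AND:

```python
def perceptron(x, w, w0):
    """Perceptron: output 1 if the weighted sum w.x + w0 >= 0, else 0."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + w0
    return 1 if s >= 0 else 0

# Hand-picked weights implementing AND: fires only when both inputs are 1
print(perceptron([1, 1], w=[1, 1], w0=-1.5))  # 1
print(perceptron([1, 0], w=[1, 1], w0=-1.5))  # 0
```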

### Lec 2.4

### Lec 2.5
- **Perceptron Learning Algorithm**
- We want w · x = Σ wᵢxᵢ ≥ 0 (output = 1) if x is a positive point and w · x < 0 (output = 0) if x is a negative point (here "negative" does not mean x itself is negative; it means the desired output is 0). So if x is a negative point but w · x ≥ 0, we update w ← w − x so that the angle between w and x becomes greater than 90°; then cos is negative and w · x < 0. Similarly, if x is a positive point but w · x < 0, we update w ← w + x so that the angle between them becomes less than 90°.
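A minimal sketch of this algorithm, assuming the usual convention of absorbing the bias w0 by appending a constant input 1 to x; the toy OR dataset is only for illustration:

```python
def perceptron_train(data, n_dims, max_epochs=100):
    """Perceptron learning algorithm. `data` is a list of (x, y)
    pairs with y in {0, 1}; a constant 1 is appended to x so that
    the last weight acts as the bias w0."""
    w = [0.0] * (n_dims + 1)
    for _ in range(max_epochs):
        converged = True
        for x, y in data:
            x = list(x) + [1.0]                        # absorb bias
            s = sum(wi * xi for wi, xi in zip(w, x))   # w . x
            if y == 1 and s < 0:                       # misclassified positive point
                w = [wi + xi for wi, xi in zip(w, x)]  # w <- w + x: angle < 90 deg
                converged = False
            elif y == 0 and s >= 0:                    # misclassified negative point
                w = [wi - xi for wi, xi in zip(w, x)]  # w <- w - x: angle > 90 deg
                converged = False
        if converged:
            break
    return w

# Toy linearly separable data: the OR function
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(perceptron_train(data, n_dims=2))  # e.g. [1.0, 1.0, -1.0]
```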

### Lec 2.8
- A **Multi Layer Perceptron** (MLP) is a network of layers of perceptrons which can represent any boolean function, whether linearly separable or not.
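For example, XOR is not linearly separable, yet a two-layer network of perceptrons represents it; the weights below are hand-set for illustration, not learned:

```python
def step(s):
    return 1 if s >= 0 else 0

def xor_mlp(x1, x2):
    """XOR via a hidden layer of two perceptrons:
    h1 fires for (1,0), h2 fires for (0,1); the output ORs them."""
    h1 = step(1 * x1 - 1 * x2 - 0.5)   # x1 AND NOT x2
    h2 = step(-1 * x1 + 1 * x2 - 0.5)  # NOT x1 AND x2
    return step(h1 + h2 - 0.5)         # h1 OR h2

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_mlp(a, b))  # 0 0->0, 0 1->1, 1 0->1, 1 1->0
```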

### Lec 3.1

### Lec 3.2 - 3.5


- The **Gradient Descent** algorithm is used to find suitable values of the weights and bias.
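A minimal gradient-descent sketch for a single sigmoid neuron with squared-error loss; the learning rate and the two toy data points are arbitrary choices for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 1-D data: (x, y) pairs
data = [(0.5, 0.2), (2.5, 0.9)]

w, b = 0.0, 0.0   # parameters to learn
eta = 1.0         # learning rate (arbitrary)

for epoch in range(1000):
    dw = db = 0.0
    for x, y in data:
        f = sigmoid(w * x + b)
        # gradient of the squared error (f - y)^2 / 2 w.r.t. w and b
        dw += (f - y) * f * (1 - f) * x
        db += (f - y) * f * (1 - f)
    w -= eta * dw  # step opposite to the gradient
    b -= eta * db

print(w, b)
```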


- For an input of n dimensions, an n-dimensional pillar (tower) of sigmoid neurons will be required to approximate the function.
- The larger the value of **w** (weight), the sharper the sigmoid function becomes (it tends to the unit step function), while **w0** (bias) shifts the sigmoid along the input axis.
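A quick check of both effects (values chosen arbitrarily): with a large w the output jumps from ≈0 to ≈1 over a tiny interval, and w0 moves the point where the curve crosses 0.5:

```python
import math

def sigmoid(x, w, w0):
    """Logistic sigmoid 1 / (1 + e^-(w*x + w0))."""
    return 1.0 / (1.0 + math.exp(-(w * x + w0)))

for x in (-2, -1, 0, 1, 2):
    # column 1: gentle slope; column 2: large w -> near step;
    # column 3: w0 = 2 shifts the transition to x = -2
    print(x, round(sigmoid(x, w=1, w0=0), 3),
             round(sigmoid(x, w=10, w0=0), 3),
             round(sigmoid(x, w=1, w0=2), 3))
```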

## Overview for Lec 4:
- We will see **Feedforward Neural Networks**, which consist of (L-1) hidden layers, 1 output layer, and 1 input layer (a forward-pass sketch follows this list).
- Here we will use **Backpropagation** to find the weights and biases.
- Based on the type of output required, we mainly define two types of problems:
1. *Regression*: The outputs are real-valued (a linear output is used) and do not depend on one another (movie rating problem).
2. *Classification*: The output is a probability distribution, so the outputs depend on one another (apple classification problem).
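A minimal forward-pass sketch of such a network (a single hidden layer for brevity; the layer sizes and random weights are placeholders, and the softmax output anticipates the classification case covered in Lec 4.3):

```python
import numpy as np

def forward(x, params):
    """Forward pass: sigmoid hidden layers, softmax output layer."""
    a = x
    *hidden, (W_out, b_out) = params
    for W, b in hidden:
        a = 1.0 / (1.0 + np.exp(-(W @ a + b)))  # pre-activation, then sigmoid
    z = W_out @ a + b_out
    e = np.exp(z - z.max())                     # numerically stable softmax
    return e / e.sum()

rng = np.random.default_rng(0)
params = [(rng.normal(size=(4, 3)), np.zeros(4)),  # hidden layer: 3 -> 4
          (rng.normal(size=(2, 4)), np.zeros(2))]  # output layer: 4 -> 2
print(forward(np.array([1.0, 0.5, -0.5]), params))  # 2 probabilities, sum to 1
```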

### Lec 4.3
- We use a different type of error function, known as the **cross-entropy function**, to measure how far we are from the required probability distribution.
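For a one-hot true label y and predicted distribution ŷ, the cross-entropy is −Σᵢ yᵢ log ŷᵢ, which reduces to −log of the probability assigned to the correct class. A minimal sketch:

```python
import math

def cross_entropy(y_true, y_pred):
    """Cross-entropy -sum_i y_i * log(yhat_i); for a one-hot
    y_true this is -log of the predicted probability of the
    correct class."""
    return -sum(y * math.log(p) for y, p in zip(y_true, y_pred) if y > 0)

# Correct class is index 1; a confident prediction gives a low loss
print(cross_entropy([0, 1, 0], [0.1, 0.8, 0.1]))  # ~0.223
print(cross_entropy([0, 1, 0], [0.4, 0.2, 0.4]))  # ~1.609
```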


- Also, our outputs are now dependent on one another (in a classification problem the output is a probability distribution, so the outputs must sum to 1); hence we use an output function known as the **Softmax Function**.
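A minimal softmax sketch; subtracting the maximum before exponentiating is a standard numerical-stability trick, not something specific to the lecture:

```python
import math

def softmax(z):
    """Softmax: exponentiate and normalize so the outputs are
    positive and sum to 1 (a probability distribution)."""
    m = max(z)                          # for numerical stability
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

out = softmax([2.0, 1.0, 0.1])
print(out, sum(out))  # probabilities, sum = 1.0
```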


Coursera
---


Convolutional Neural Network
---
- Padding : adding extra rows and columns (usually of zeros) around the image so that the output has the same size as the input.
- Strides : the number of blocks (pixels) by which the filter is shifted at each step.
- In **cross-correlation** the filter is simply traversed over the image, whereas in **convolution** we first flip the filter horizontally and vertically and then traverse it over the input image.
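A minimal sketch of 2-D cross-correlation with zero padding and stride (convolution is the same operation after flipping the filter both ways); the array sizes are illustrative:

```python
import numpy as np

def cross_correlate2d(image, kernel, padding=0, stride=1):
    """Slide `kernel` over `image` (no flipping = cross-correlation).
    For an odd k x k kernel, padding = (k - 1) // 2 with stride = 1
    keeps the output the same size as the input."""
    img = np.pad(image, padding)  # zero padding on all sides
    k = kernel.shape[0]
    out_h = (img.shape[0] - k) // stride + 1
    out_w = (img.shape[1] - k) // stride + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = img[i * stride : i * stride + k,
                        j * stride : j * stride + k]
            out[i, j] = (patch * kernel).sum()
    return out

def convolve2d(image, kernel, **kw):
    """Convolution = cross-correlation with the kernel flipped
    horizontally and vertically."""
    return cross_correlate2d(image, kernel[::-1, ::-1], **kw)

img = np.arange(16, dtype=float).reshape(4, 4)
ker = np.ones((3, 3)) / 9.0  # 3x3 averaging filter
print(cross_correlate2d(img, ker, padding=1).shape)  # (4, 4): "same" padding
```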



