# <center><i class="fa fa-edit"></i> Math-Based Tutorial on Neural Networks </center>
###### tags: `Internship`
:::info
**Goal:**
- [x] Simple demonstration of the activation function f
- [x] Nodes
- [x] Weights and Biases
- [x] Basic Implementation of Feed-Forward Function
- [x] Vectorization + Using Matrix Representation
**Resources:**
[Towards Data Science Page](https://towardsdatascience.com/understanding-lstm-and-its-quick-implementation-in-keras-for-sentiment-analysis-af410fd85b47)
[Adventures in Machine Learning](https://adventuresinmachinelearning.com/neural-networks-tutorial/#first-attempt-feed-forward)
[Machine Learning](https://hackmd.io/@Derni/HJQkjlnIP)
:::
## Math-Based Tutorial on Neural Networks
Imports:
```
import matplotlib.pylab as plt
import numpy as np
```
### Simple demonstration of the activation function f
Activation function f:
- Mimics the on/off activation of a biological neuron
- Squashes its input into a range such as 0 to 1, -1 to 1, or 0 upward
- The sigmoid used here produces a smooth S-shaped curve between 0 and 1

```
x = np.arange(-8,8,0.1)
f = 1/(1+np.exp(-x))
plt.plot(x,f)
plt.xlabel('x')
plt.ylabel('f(x)')
plt.show()
```
Result: a smooth S-shaped curve that rises from 0 toward 1 as x increases.

### Nodes
Node:
- Sits within the network's connected layers
- Sums its weighted inputs, applies the activation function to the sum, and passes the result on as its output (see the sketch below)
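
A minimal sketch of a single node's computation (the input, weight, and bias values here are illustrative examples, not prescribed by the tutorial):
```
import numpy as np

def f(x):
    # Sigmoid activation, as used throughout this tutorial
    return 1 / (1 + np.exp(-x))

# Example values: three inputs feeding one node
inputs = np.array([1.5, 2.0, 3.0])   # outputs of the previous layer
weights = np.array([0.2, 0.2, 0.2])  # one weight per incoming connection
bias = 0.8

# The node sums its weighted inputs, adds the bias, and applies f
node_output = f(np.dot(weights, inputs) + bias)
print(node_output)  # a single value between 0 and 1 (about 0.89 here)
```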

### Weights and Biases
**Weights**: multiply the inputs.
- Adjusted throughout the training process so the network produces the right result.

Varying the weight inside the activation function:
```
x = np.arange(-8, 8, 0.1)
w1 = 0.5
w2 = 1.0
w3 = 2.0
l1 = 'w = 0.5'
l2 = 'w = 1.0'
l3 = 'w = 2.0'
for w, l in [(w1, l1), (w2, l2), (w3, l3)]:
    f = 1 / (1 + np.exp(-x * w))
    plt.plot(x, f, label=l)
plt.xlabel('x')
plt.ylabel('h_w(x)')
plt.legend(loc=2)
plt.show()
```
Result: larger weights make the sigmoid transition more steeply around x = 0, while smaller weights make it more gradual.

**Bias**: added to the weighted sum; it shifts where the activation function "switches on".

Adding different biases to equally weighted activation functions:
```
x = np.arange(-8, 8, 0.1)
w = 5.0
b1 = -8.0
b2 = 0.0
b3 = 8.0
l1 = 'b = -8.0'
l2 = 'b = 0.0'
l3 = 'b = 8.0'
for b, l in [(b1, l1), (b2, l2), (b3, l3)]:
    f = 1 / (1 + np.exp(-(x * w + b)))
    plt.plot(x, f, label=l)
plt.xlabel('x')
plt.ylabel('h_w(x)')
plt.legend(loc=2)
plt.show()
```
Results: the bias shifts the curve along the x-axis without changing its steepness (a negative bias moves the transition to the right, a positive bias moves it to the left).

### Full Structure
Three layers:
- Input layer: takes the external inputs
- Hidden layer: deeper networks often need many hidden layers during training
- Output layer: the final layer; here it produces a single output (see the sketch below)
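
A small sketch of this structure, assuming the 3-3-1 layout used later in this tutorial (the random initial values are illustrative only):
```
import numpy as np

# Input layer: 3 nodes, hidden layer: 3 nodes, output layer: 1 node
layer_sizes = [3, 3, 1]

# One weight matrix and one bias vector per connection between consecutive layers.
# W1 has shape (hidden nodes, input nodes); W2 has shape (output nodes, hidden nodes).
W1 = np.random.rand(layer_sizes[1], layer_sizes[0])
b1 = np.random.rand(layer_sizes[1])
W2 = np.random.rand(layer_sizes[2], layer_sizes[1])
b2 = np.random.rand(layer_sizes[2])

print(W1.shape, b1.shape)  # (3, 3) (3,)
print(W2.shape, b2.shape)  # (1, 3) (1,)
```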

### Feed-Forward Pass
- Each layer $l$ has an output vector $h^{(l)}$: the input can be written $h^{(1)}$, the hidden layer produces $h^{(2)}$, and the output layer produces $h^{(3)}$
- The first layer takes the external input data, while every later layer takes the previous layer's output as its input
- The final layer has only one node
- Its output $h^{(3)}$ is the network's output
Matrix representation:
$$
h^{(l+1)} = f\left(W^{(l)} h^{(l)} + b^{(l)}\right), \qquad h^{(1)} = x
$$

### Basic Implementation of Feed-Forward Function
```
w1 = np.array([[0.2, 0.2, 0.2], [0.4, 0.4, 0.4], [0.6, 0.6, 0.6]])
w2 = np.zeros((1, 3))
w2[0, :] = np.array([0.5, 0.5, 0.5])
b1 = np.array([0.8, 0.8, 0.8])
b2 = np.array([0.2])

def f(x):
    return 1 / (1 + np.exp(-x))

def simple_looped_nn_calc(n_layers, x, w, b):
    for l in range(n_layers - 1):
        # Set up the input array:
        # for the first layer the input is x, otherwise it is the previous layer's output
        if l == 0:
            node_in = x
        else:
            node_in = h
        # Set up the output array for the nodes in layer l + 1
        # (.shape gives the array's dimensions as a (rows, cols) tuple)
        h = np.zeros((w[l].shape[0],))
        # Loop through the rows of the weight array
        for i in range(w[l].shape[0]):
            # Set up the sum inside the activation function
            f_sum = 0
            # Loop through the columns of the weight array
            for j in range(w[l].shape[1]):
                f_sum += w[l][i][j] * node_in[j]
            # Add the bias
            f_sum += b[l][i]
            # Apply the activation function to get the node outputs h1, h2, h3, etc.
            h[i] = f(f_sum)
    return h

w = [w1, w2]
b = [b1, b2]
x = [1.5, 2.0, 3.0]
simple_looped_nn_calc(3, x, w, b)
```
Result: approximately `0.8354` (a single-element array), the network's output for this input.

### Vectorization + Using Matrix Representation
Write the summed input to node $i$ of layer $l+1$ as $z_i^{(l+1)}$:
$$
z_i^{(l+1)} = \sum_j w_{ij}^{(l)} x_j + b_i^{(l)}
$$

With capital $W^{(l)}$ as the matrix of weights between layer $l$ and layer $l+1$, this becomes:
$$
z^{(l+1)} = W^{(l)} x + b^{(l)}
$$

Forward propagating and generalizing to any layer (where $h^{(1)} = x$):
$$
z^{(l+1)} = W^{(l)} h^{(l)} + b^{(l)}, \qquad h^{(l+1)} = f\left(z^{(l+1)}\right)
$$

The explicit loops can then be replaced with a single NumPy matrix multiplication:
```
def matrix_feed_forward_calc(n_layers, x, w, b):
    for l in range(n_layers - 1):
        if l == 0:
            node_in = x
        else:
            node_in = h
        z = w[l].dot(node_in) + b[l]
        h = f(z)
    return h
```
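
As a quick check, reusing the weights, biases, and input defined for the looped version above, the vectorized function should return the same value:
```
print(matrix_feed_forward_calc(3, np.array([1.5, 2.0, 3.0]), [w1, w2], [b1, b2]))
# expected to print roughly 0.8354, matching the looped version
```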
:::warning
NumPy performs the whole matrix-vector multiplication and bias addition in a single line:
`z = w[l].dot(node_in) + b[l]`
:::