The **ReLU activation function**, also known as the Rectified Linear Unit activation function, is the most popular activation function in deep learning.
ReLU produces strong results in practice and is very efficient in terms of computing power.
The basic concept of the ReLU activation function is as follows:
*Return 0 if the input is negative otherwise return the input as it is.*
The pseudocode for ReLU is as follows:
```
if input > 0:
    return input
else:
    return 0
```
**Implementing the ReLU function in Python**
We can implement the ReLU function using Python's built-in `max` function:
```python
def relu(x):
    return max(0.0, x)
```
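As a quick, illustrative check, we can call `relu` on a few sample values; the vectorized variant below uses NumPy's `np.maximum`, which applies the same max-with-zero rule element-wise to an array (this assumes NumPy is available):
```python
import numpy as np

def relu(x):
    # Scalar ReLU as defined above
    return max(0.0, x)

print(relu(-3.0))  # 0.0
print(relu(2.5))   # 2.5

# Vectorized alternative: np.maximum broadcasts 0.0 against each element
values = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(np.maximum(0.0, values))  # [0. 0. 0. 1. 2.]
```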
Here's a Python function to get the derivative of the ReLU activation function:
```python
import numpy as np

def relu_derivative(x):
    return np.where(x > 0, 1, 0)
```
The function `np.where(condition, a, b)` selects elements from `a` wherever the condition is `True` and from `b` wherever it is `False`. Here the condition `x > 0` produces a Boolean array that is `True` for each element of `x` greater than `0` and `False` otherwise. The result therefore has the same shape as `x`, containing `1` where `x` is greater than `0` and `0` elsewhere, which is exactly the derivative of the ReLU function [1][2][3].
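To see this selection behavior in isolation, here is a minimal `np.where` sketch with arbitrary example values:
```python
import numpy as np

values = np.array([-3, 5, 0, 7])
condition = values > 0            # Boolean array: [False  True False  True]
print(np.where(condition, 1, 0))  # 1 where True, 0 where False -> [0 1 0 1]
```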
For example, the following Python code computes the derivative of the ReLU function for an array of values:
```python
x = np.array([-2, -1, 0, 1, 2])
print(relu_derivative(x))
```
This will output:
```
[0 0 0 1 1]
```
which is the derivative of the ReLU function for the input array `x`. Note that the derivative is not mathematically defined at exactly `x = 0`; this implementation follows the common convention of returning `0` there.
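To illustrate how the two functions are typically used together, here is a minimal sketch of a forward and backward pass through a ReLU activation; the pre-activation values `z` and the upstream gradient are made-up example numbers chosen only for demonstration:
```python
import numpy as np

def relu_derivative(x):
    return np.where(x > 0, 1, 0)

# Hypothetical pre-activation values for one layer
z = np.array([-1.5, 0.0, 2.0, 3.5])

# Forward pass: element-wise ReLU via np.maximum
a = np.maximum(0.0, z)                      # [0.  0.  2.  3.5]

# Backward pass: multiply the upstream gradient (made-up values)
# by the local derivative, so the gradient only flows where z > 0
upstream_grad = np.array([0.1, 0.2, 0.3, 0.4])
dz = upstream_grad * relu_derivative(z)     # [0.  0.  0.3 0.4]

print(a)
print(dz)
```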