# IoT Car🚕
## About the project
Not having enough creativity to build a brand-new IoT project, I chose to make a simple car with two control options: control it through a web browser, or let the car drive itself using image recognition.
*This project assumes that you have already set up your Raspberry Pi.*
In addition, make sure all of your components work properly before building the car.
## What the car actually looks like


## Demo video link
[click here](https://youtube.com/playlist?list=PLqe3OIlEQZdJC8g67UKJcwSspACt-_wyO)
## GitHub repo with all the code
[click here](https://github.com/kinako890419/Iot-project)
## **Components**
* **Hardware**
1. Raspberry Pi 3 *1
2. HC-SR04 ultrasonic sensor *1
3. Breadboard *1
4. Dupont wires *n (depends on how you connect your devices)
5. Camera *1
6. Car kit with 4 DC motors *1
7. ULN2003 driver board + stepper motor *1
8. L298N motor driver *1
9. Battery box *1 & batteries *4 (for the L298N)
10. Power bank (for the Raspberry Pi)
* **Software**
1. Python 3.7
2. HTML, CSS
3. Google Teachable Machine
4. OpenCV, TensorFlow Lite
## **Circuit diagrams**

* L298N

* HC-SR04 (*remember to remove the jumper on the back of the HC-SR04*)


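For reference, these are the BCM pin assignments used throughout the code in this project:
* L298N / DC motors: GPIO 17 (`w1`), GPIO 22 (`w2`), GPIO 23 (`w3`), GPIO 24 (`w4`); the PWM speed signals run on GPIO 22 and 24
* HC-SR04: `TRIG` on GPIO 26, `ECHO` on GPIO 3
* ULN2003 / stepper motor: GPIO 5, 6, 13, 19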
# Web control mode

### Features
1. Control the car's functions through a web browser.
2. You can control the direction of both the car and the camera.
3. The speed of the motors can be changed.
4. The car automatically stops to avoid collisions, using the HC-SR04 sensor mounted on it.
5. The camera streams to the web page.
6. The camera is rotated by the stepper motor.
### **Code**
* #### Web (test.html)
```
<!DOCTYPE html>
<html>
<head>
  <meta charset="utf-8" />
  <title>IoT_Project</title>
  <link rel="stylesheet" href="{{ url_for('static', filename='test.css') }}" />
</head>
<body>
  <div id="title">A car</div>
  <div align="center">
    <img src="{{ url_for('video_feed') }}" width="20%" />
  </div>
  <!-- direction buttons: each arrow is a form that posts to a Flask route -->
  <table align="center">
    <tr>
      <td></td>
      <td>
        <form action="/forward" method="post">
          <input type="image" class="img" src="{{ url_for('static', filename='forward.png') }}" />
        </form>
      </td>
      <td></td>
    </tr>
    <tr>
      <td>
        <form action="/left" method="post">
          <input type="image" class="img" src="{{ url_for('static', filename='left.png') }}" />
        </form>
      </td>
      <td>
        <form action="/stop" method="post">
          <input type="image" class="img" src="{{ url_for('static', filename='stop.png') }}" />
        </form>
      </td>
      <td>
        <form action="/right" method="post">
          <input type="image" class="img" src="{{ url_for('static', filename='right.png') }}" />
        </form>
      </td>
    </tr>
    <tr>
      <td>
        <form action="/backleft" method="post">
          <input type="image" class="img" src="{{ url_for('static', filename='backleft.png') }}" />
        </form>
      </td>
      <td>
        <form action="/back" method="post">
          <input type="image" class="img" src="{{ url_for('static', filename='back.png') }}" />
        </form>
      </td>
      <td>
        <form action="/backright" method="post">
          <input type="image" class="img" src="{{ url_for('static', filename='backright.png') }}" />
        </form>
      </td>
    </tr>
  </table>
  <!-- camera rotation buttons; the middle icon is decoration only -->
  <table align="center">
    <tr>
      <td>
        <form action="/camleft" method="post">
          <input type="image" class="img2" src="{{ url_for('static', filename='camleft.png') }}" />
        </form>
      </td>
      <td>
        <img class="img2" src="{{ url_for('static', filename='camera.png') }}" />
      </td>
      <td>
        <form action="/camright" method="post">
          <input type="image" class="img2" src="{{ url_for('static', filename='camright.png') }}" />
        </form>
      </td>
    </tr>
  </table>
  <div class="speedChoice" align="center">
    <form action="/speedControl" method="post">
      <p>speed control: </p>
      <input type="radio" id="s1" name="speed" value="1" checked>
      <label for="s1">4</label>
      <input type="radio" id="s2" name="speed" value="3">
      <label for="s2">3</label>
      <input type="radio" id="s3" name="speed" value="5">
      <label for="s3">2</label>
      <input type="radio" id="s4" name="speed" value="7">
      <label for="s4">1</label>
      <input type="submit" value="submit" />
      <span id="output"></span>
    </form>
  </div>
  <script>
    // Show the value of the selected speed radio next to the submit button.
    var speedChoice = document.getElementsByName("speed");
    var output = document.getElementById("output");
    for (var i = 0; i < speedChoice.length; i++) {
      speedChoice[i].oninput = function () {
        output.innerHTML = this.value;
      };
    }
  </script>
</body>
</html>
```
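A note on the speed radios above: the selected radio's `value` (1, 3, 5, or 7) is multiplied by 10 in the `/speedControl` route below to become the PWM duty cycle, so the button labelled "1" (value 7) is actually the fastest setting at 70% duty, and the default (value 1) is the slowest at 10%.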
* #### Web css (test.css)
```
#title {
  font-size: 24px;
  font-family: "Microsoft JhengHei";
  text-align: center;
}
#video {
  max-width: 200px;
  max-height: 200px;
  align-content: center;
}
.img {
  width: 50px;
  height: 50px;
  position: relative;
  align-content: center;
}
.img2 {
  width: 30px;
  height: 30px;
  position: relative;
  align-content: center;
}
p {
  font-size: 20px;
  font-family: "Microsoft JhengHei";
  text-align: center;
}
```
###### The CSS is not necessary, but it makes the page more readable.
* #### Main control (motor_control.py)
```
import RPi.GPIO as gpio
import time
from flask import Flask, render_template, Response, request
from camera_pi import Camera

gpio.setwarnings(False)
# DC motor pins (L298N inputs)
w1 = 17
w2 = 22
w3 = 23
w4 = 24
# HC-SR04 pins
TRIG = 26
ECHO = 3
gpio.setmode(gpio.BCM)
gpio.setup(w1, gpio.OUT)
gpio.setup(w2, gpio.OUT)
gpio.setup(w3, gpio.OUT)
gpio.setup(w4, gpio.OUT)
gpio.setup(TRIG, gpio.OUT)
gpio.setup(ECHO, gpio.IN)
# PWM on w2/w4 is used for speed control
pwm1 = gpio.PWM(w2, 100)
pwm2 = gpio.PWM(w4, 100)
# stepper motor pins (ULN2003 inputs)
pin = [5, 6, 13, 19]
for i in range(4):
    gpio.setup(pin[i], gpio.OUT)

app = Flask(__name__)

@app.route('/')
def main():
    return render_template('test.html')

def init():
    # re-initialize the pins (needed after every gpio.cleanup())
    gpio.setmode(gpio.BCM)
    gpio.setup(w1, gpio.OUT)
    gpio.setup(w2, gpio.OUT)
    gpio.setup(w3, gpio.OUT)
    gpio.setup(w4, gpio.OUT)
    gpio.setup(TRIG, gpio.OUT)
    gpio.setup(ECHO, gpio.IN)

# stepper motor control (full-step sequences)
forward_sq = ['0011', '1001', '1100', '0110']
reverse_sq = ['0110', '1100', '1001', '0011']

def mforward(steps, delay):
    for i in range(steps):
        for step in forward_sq:
            set_motor(step)
            time.sleep(delay)

def mreverse(steps, delay):
    for i in range(steps):
        for step in reverse_sq:
            set_motor(step)
            time.sleep(delay)

def set_motor(step):
    gpio.setmode(gpio.BCM)
    for i in range(4):
        gpio.setup(pin[i], gpio.OUT)
        gpio.output(pin[i], step[i] == '1')

@app.route('/camleft', methods=['GET', 'POST'])
def camleft():
    gpio.setmode(gpio.BCM)
    set_motor('0000')
    mreverse(30, 0.005)
    return render_template('test.html')

@app.route('/camright', methods=['GET', 'POST'])
def camright():
    gpio.setmode(gpio.BCM)
    set_motor('0000')
    mforward(30, 0.005)
    return render_template('test.html')

# Car control
@app.route('/forward', methods=['GET', 'POST'])
def forward():
    init()
    # keep driving until the HC-SR04 sees an obstacle within 25 cm
    while distance() > 25:
        gpio.output(w1, True)
        gpio.output(w2, False)
        gpio.output(w3, True)
        gpio.output(w4, False)
        time.sleep(0.5)
    stop()
    autoBack(0.5)
    return render_template('test.html')

@app.route('/back', methods=['GET', 'POST'])
def backward():
    gpio.cleanup()
    init()
    gpio.output(w1, False)
    gpio.output(w2, True)
    gpio.output(w3, False)
    gpio.output(w4, True)
    return render_template('test.html')

@app.route("/left", methods=['GET', 'POST'])
def left():
    gpio.cleanup()
    init()
    gpio.output(w1, True)
    gpio.output(w2, False)
    gpio.output(w3, True)
    gpio.output(w4, True)
    return render_template('test.html')

@app.route("/right", methods=['GET', 'POST'])
def right():
    gpio.cleanup()
    init()
    gpio.output(w1, True)
    gpio.output(w2, True)
    gpio.output(w3, True)
    gpio.output(w4, False)
    return render_template('test.html')

@app.route("/backleft", methods=['GET', 'POST'])
def backleft():
    gpio.cleanup()
    init()
    gpio.output(w1, False)
    gpio.output(w2, True)
    gpio.output(w3, False)
    gpio.output(w4, False)
    return render_template('test.html')

@app.route("/backright", methods=['GET', 'POST'])
def backright():
    gpio.cleanup()
    init()
    gpio.output(w1, False)
    gpio.output(w2, False)
    gpio.output(w3, False)
    gpio.output(w4, True)
    return render_template('test.html')

@app.route("/stop", methods=['GET', 'POST'])
def stop():
    init()
    gpio.output(w1, False)
    gpio.output(w3, False)
    gpio.cleanup()
    pwm1.stop()
    pwm2.stop()
    return render_template('test.html')

# back up briefly when about to collide
def autoBack(t):
    init()
    gpio.output(w1, False)
    gpio.output(w2, True)
    gpio.output(w3, False)
    gpio.output(w4, True)
    time.sleep(t)
    gpio.cleanup()

# stream
def gen(camera):
    """Video streaming generator function."""
    while True:
        frame = camera.get_frame()
        yield (b'--frame\r\n'
               b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen(Camera()),
                    mimetype='multipart/x-mixed-replace; boundary=frame')

# speed control
# note: this helper has no @app.route and is never called
def speed():
    value = request.form['speed']
    speed = int(value) * 15
    init()
    while distance() > 25:
        gpio.output(w1, True)
        pwm1.start(0)
        pwm1.ChangeDutyCycle(speed)
        pwm2.start(0)
        pwm2.ChangeDutyCycle(speed)
        gpio.output(w3, True)
        time.sleep(0.5)
    stop()
    autoBack(0.5)
    return render_template('test.html')

@app.route('/speedControl', methods=['POST'])
def speedControl():
    # radio value (1/3/5/7) * 10 becomes the PWM duty cycle
    value = request.form['speed']
    speed = int(value) * 10
    init()
    while distance() > 25:
        gpio.output(w1, True)
        pwm1.start(0)
        pwm1.ChangeDutyCycle(speed)
        pwm2.start(0)
        pwm2.ChangeDutyCycle(speed)
        gpio.output(w3, True)
        time.sleep(0.5)
    stop()
    autoBack(0.5)
    return render_template('test.html')

# HC-SR04
def distance():
    gpio.setmode(gpio.BCM)
    # 10 microsecond trigger pulse
    gpio.output(TRIG, True)
    time.sleep(0.00001)
    gpio.output(TRIG, False)
    pulse_start = time.time()
    pulse_end = time.time()
    # measure how long the echo pin stays high
    while gpio.input(ECHO) == 0:
        pulse_start = time.time()
    while gpio.input(ECHO) == 1:
        pulse_end = time.time()
    timeElapsed = pulse_end - pulse_start
    # speed of sound: 34300 cm/s, halved for the round trip
    distance = (timeElapsed * 34300) / 2
    print(distance)
    return distance

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=80, debug=True, threaded=True)
```
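If the car stops when it shouldn't (or doesn't stop at all), it helps to test the HC-SR04 on its own before blaming the motor code. Below is a minimal test sketch of my own, assuming the same TRIG/ECHO pins as `motor_control.py`; it is not one of the project files.
```
# Minimal standalone HC-SR04 test (same pins as motor_control.py).
import RPi.GPIO as gpio
import time

TRIG = 26
ECHO = 3

gpio.setmode(gpio.BCM)
gpio.setup(TRIG, gpio.OUT)
gpio.setup(ECHO, gpio.IN)
gpio.output(TRIG, False)
time.sleep(0.5)  # let the sensor settle

try:
    while True:
        # 10 microsecond trigger pulse
        gpio.output(TRIG, True)
        time.sleep(0.00001)
        gpio.output(TRIG, False)
        # measure the echo pulse width
        pulse_start = time.time()
        while gpio.input(ECHO) == 0:
            pulse_start = time.time()
        pulse_end = time.time()
        while gpio.input(ECHO) == 1:
            pulse_end = time.time()
        # speed of sound: 34300 cm/s, halved for the round trip
        print(round((pulse_end - pulse_start) * 34300 / 2, 1), "cm")
        time.sleep(0.5)
finally:
    gpio.cleanup()
```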
* #### Camera class (camera_pi.py)
Copied from [here](https://github.com/Mjrovai/Video-Streaming-with-Flask/blob/master/camWebServer/camera_pi.py); this is the code for streaming.
```
import time
import io
import threading
import picamera

class Camera(object):
    thread = None  # background thread that reads frames from the camera
    frame = None  # current frame, stored here by the background thread
    last_access = 0  # time of the last client request for a frame

    def initialize(self):
        if Camera.thread is None:
            # start the background frame thread
            Camera.thread = threading.Thread(target=self._thread)
            Camera.thread.start()
            # wait until frames start to be available
            while self.frame is None:
                time.sleep(0)

    def get_frame(self):
        Camera.last_access = time.time()
        self.initialize()
        return self.frame

    @classmethod
    def _thread(cls):
        with picamera.PiCamera() as camera:
            # camera setup
            camera.resolution = (320, 240)
            camera.hflip = False
            camera.vflip = False
            # let camera warm up
            camera.start_preview()
            time.sleep(2)
            stream = io.BytesIO()
            for foo in camera.capture_continuous(stream, 'jpeg',
                                                 use_video_port=True):
                # store the current frame
                stream.seek(0)
                cls.frame = stream.read()
                # reset the stream for the next frame
                stream.seek(0)
                stream.truncate()
                # stop the thread if no client has asked for a frame
                # in the last 10 seconds
                if time.time() - cls.last_access > 10:
                    break
        cls.thread = None
```
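To check that the camera class works without starting the whole web server, a quick sketch like this (my addition, not a project file) can be run from the same folder as `camera_pi.py`:
```
# Grab a single JPEG frame through the Camera class and save it.
from camera_pi import Camera

frame = Camera().get_frame()  # the first call starts the capture thread
with open('test_frame.jpg', 'wb') as f:
    f.write(frame)
print('saved', len(frame), 'bytes to test_frame.jpg')
```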
### File locations
```
home/pi/iot/static/test.css
home/pi/iot/static/picture
home/pi/iot/templates/test.html
home/pi/iot/motor_control.py
home/pi/iot/camera_pi.py
```
* `home/pi/iot` can be changed depending on where you save your files.
* Replace `picture` with the file names of your button images (forward.png, stop.png, and so on).
* Flask looks for HTML in a folder named `templates` and for static files in `static`, so keep those folder names.
### Run the project
1. **Check the IP address**
The IP address is needed to reach the web page. Check it on the Raspberry Pi terminal by entering `$ ifconfig`. For example, my IP address is 10.1.1.12 according to the picture below.

2. Run `motor_control.py` on the Raspberry Pi terminal (see the note on `sudo` and port 80 after the commands):
```
$ cd 'directory where motor_control.py is saved'
$ sudo python motor_control.py
```
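`sudo` is needed because the app binds to port 80, which is a privileged port on Linux. If you prefer not to run Flask as root, one option (my suggestion, not in the original code) is to change the last line of `motor_control.py` to an unprivileged port and browse to `http://<pi-ip>:8000` instead:
```
# ports above 1024 do not require root
app.run(host='0.0.0.0', port=8000, debug=True, threaded=True)
```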
#### Output:

3. Enter the IP address in a web browser on another device, such as your PC or smartphone. The web interface will look like this:

# Image recognition mode

## Features
1. The car can go forward, slow down, turn, and stop by using image classification.
2. The model is trained with [Google Teachable Machine](https://teachablemachine.withgoogle.com/).
## Step.1 Train and export your model
1. Train the model on Teachable Machine by capturing pictures through the webcam or uploading images.

* There are 6 classes in this model, classifying the pictures below:




* Remember to create a "nothing" class (e.g. Class 3) with lots of background pictures in it, to keep the model from predicting one of the other classes when there's nothing in front of the camera.
2. Press "Train Model" and wait; don't switch to other pages while training.
3. Export and download the model, then copy it to your Raspberry Pi.

* There will be a folder (`converted_tflite_quantized`) with two files in it.

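The two files are the quantized model (`model.tflite`) and its label map (`labels.txt`); both are passed to the script in Step 3. Judging from the class names checked in `TM2_tflite.py`, the exported `labels.txt` should look like this (your class names may differ if you renamed them in Teachable Machine):
```
0 Class 1
1 Class 2
2 Class 3
3 Class 4
4 Class 5
5 Class 6
```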
## Step.2 Install TensorFlow Lite and OpenCV
1. Install the TensorFlow Lite runtime on your Raspberry Pi (this wheel is for Python 3.7 on a 32-bit Pi):
```
$ pip3 install https://dl.google.com/coral/python/tflite_runtime-2.1.0.post1-cp37-cp37m-linux_armv7l.whl
```
2. Install OpenCV. First, enlarge the swap file so the build doesn't run out of memory:
```
1. sudo apt-get update && sudo apt-get upgrade && sudo rpi-update
2. sudo nano /etc/dphys-swapfile
3. change "#CONF_SWAPSIZE=100" to "CONF_SWAPSIZE=2048", then save and exit
4. sudo /etc/init.d/dphys-swapfile stop
5. sudo /etc/init.d/dphys-swapfile start
```

```
6. sudo apt-get install build-essential cmake pkg-config
7. sudo apt-get install libjpeg-dev libtiff5-dev libjasper-dev libpng12-dev
8. sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev
9. sudo apt-get install libxvidcore-dev libx264-dev
10. sudo apt-get install libgtk2.0-dev libgtk-3-dev
11. sudo apt-get install libatlas-base-dev gfortran
12. wget -O opencv.zip https://github.com/opencv/opencv/archive/4.1.0.zip
13. wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.1.0.zip
14. unzip opencv.zip
15. unzip opencv_contrib.zip
16. cd ~/opencv-4.1.0/
17. mkdir build
18. cd build
19. cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr/local \
-D INSTALL_PYTHON_EXAMPLES=ON \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib-4.1.0/modules \
-D BUILD_EXAMPLES=ON ..
20. make -j4
```
* Step 20 (`make -j4`) will take a few hours, so be patient; you can start it at night, go to sleep, and it will be done when you wake up the next morning.
```
21. sudo make install && sudo ldconfig
22. sudo reboot
23. cd /home/pi/iot/converted_tflite_quantized
24. ln -s /usr/local/python/cv2/python-3.7/cv2.cpython-37m-arm-linux-gnueabihf.so cv2.so
```
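Once both installs finish, a quick sanity check (assuming the `cv2.so` symlink from step 24 and the wheel from part 1) is to import both libraries:
```
$ python3 -c "import cv2; print(cv2.__version__)"
$ python3 -c "from tflite_runtime.interpreter import Interpreter; print('tflite runtime OK')"
```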
## Step.3 Code
* Put `TM2_tflite.py` into the `converted_tflite_quantized` folder.
* #### TM2_tflite.py
```
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import argparse
import time

import numpy as np
import cv2
from tflite_runtime.interpreter import Interpreter

def load_labels(path):
    # map class index -> label string, e.g. {0: '0 Class 1', ...}
    with open(path, 'r') as f:
        return {i: line.strip() for i, line in enumerate(f.readlines())}

def set_input_tensor(interpreter, image):
    tensor_index = interpreter.get_input_details()[0]['index']
    input_tensor = interpreter.tensor(tensor_index)()[0]
    input_tensor[:, :] = image

def classify_image(interpreter, image, top_k=1):
    """Returns a sorted array of classification results."""
    set_input_tensor(interpreter, image)
    interpreter.invoke()
    output_details = interpreter.get_output_details()[0]
    output = np.squeeze(interpreter.get_tensor(output_details['index']))
    # If the model is quantized (uint8 data), then dequantize the results
    if output_details['dtype'] == np.uint8:
        scale, zero_point = output_details['quantization']
        output = scale * (output - zero_point)
    ordered = np.argpartition(-output, top_k)
    return [(i, output[i]) for i in ordered[:top_k]]

# Car control
import RPi.GPIO as gpio

w1 = 17
w2 = 22
w3 = 23
w4 = 24
gpio.setwarnings(False)
gpio.setmode(gpio.BCM)
gpio.setup(w1, gpio.OUT)
gpio.setup(w2, gpio.OUT)
gpio.setup(w3, gpio.OUT)
gpio.setup(w4, gpio.OUT)
pwm1 = gpio.PWM(w2, 100)
pwm2 = gpio.PWM(w4, 100)

def init():
    # re-initialize the pins (needed after every gpio.cleanup())
    gpio.setmode(gpio.BCM)
    gpio.setup(w1, gpio.OUT)
    gpio.setup(w2, gpio.OUT)
    gpio.setup(w3, gpio.OUT)
    gpio.setup(w4, gpio.OUT)

# go forward
def f():
    init()
    gpio.output(w1, True)
    gpio.output(w2, False)
    gpio.output(w3, True)
    gpio.output(w4, False)

# stop
def s():
    init()
    gpio.output(w1, True)
    gpio.output(w3, True)
    gpio.cleanup()
    pwm1.stop()
    pwm2.stop()

# change speed (slow down to 70% duty cycle)
def cs():
    init()
    gpio.output(w1, True)
    pwm1.start(0)
    pwm1.ChangeDutyCycle(70)
    pwm2.start(0)
    pwm2.ChangeDutyCycle(70)
    gpio.output(w3, True)

# turn left, then go straight
def l(t):
    gpio.cleanup()
    init()
    gpio.output(w1, True)
    gpio.output(w2, False)
    gpio.output(w3, True)
    gpio.output(w4, True)
    time.sleep(t)

# turn right, then go straight
def r(t):
    gpio.cleanup()
    init()
    gpio.output(w1, True)
    gpio.output(w2, True)
    gpio.output(w3, True)
    gpio.output(w4, False)
    time.sleep(t)

def main():
    parser = argparse.ArgumentParser(
        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument(
        '--model', help='File path of .tflite file.', required=True)
    parser.add_argument(
        '--labels', help='File path of labels file.', required=True)
    args = parser.parse_args()

    labels = load_labels(args.labels)
    interpreter = Interpreter(args.model)
    interpreter.allocate_tensors()
    _, height, width, _ = interpreter.get_input_details()[0]['shape']

    cap = cv2.VideoCapture(0)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    key_detect = 0
    while key_detect == 0:
        ret, image_src = cap.read()
        # crop to a centered square, then resize to the model's 224x224 input
        frame_width = image_src.shape[1]
        frame_height = image_src.shape[0]
        cut_d = int((frame_width - frame_height) / 2)
        crop_img = image_src[0:frame_height, cut_d:(cut_d + frame_height)]
        image = cv2.resize(crop_img, (224, 224), interpolation=cv2.INTER_AREA)

        start_time = time.time()
        results = classify_image(interpreter, image)
        elapsed_ms = (time.time() - start_time) * 1000
        label_id, prob = results[0]

        # act on the predicted class
        if labels[label_id] == "0 Class 1":  # go
            f()
            time.sleep(0.5)
        if labels[label_id] == "1 Class 2":  # stop
            s()
        if labels[label_id] == "2 Class 3":  # nothing in front of the camera
            print("nothing there")
        if labels[label_id] == "3 Class 4":  # slow down
            cs()
            time.sleep(0.5)
        if labels[label_id] == "4 Class 5":  # turn left
            l(3)
            f()
            time.sleep(0.5)
        if labels[label_id] == "5 Class 6":  # turn right
            r(3)
            f()
            time.sleep(0.5)

        # print on the terminal
        print(labels[label_id], prob)
        # print on the stream
        cv2.putText(crop_img, labels[label_id] + " " + str(round(prob, 3)),
                    (5, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 1,
                    cv2.LINE_AA)
        # show the stream on the Raspberry Pi; press q to quit
        cv2.imshow('Detecting....', crop_img)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            key_detect = 1

    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    main()
```
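For reference, the class-to-action mapping in the loop above is:
* Class 1 → go forward (`f()`)
* Class 2 → stop (`s()`)
* Class 3 → "nothing" class, no action
* Class 4 → slow down to 70% duty cycle (`cs()`)
* Class 5 → turn left (`l(3)`)
* Class 6 → turn right (`r(3)`)

Train your Teachable Machine classes in this order, or edit the label strings in the code to match your own labels.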
## Run the project
```
$ cd /home/pi/iot/converted_tflite_quantized
$ python3 TM2_tflite.py --model model.tflite --labels labels.txt
```
* Place a traffic sign in front of the camera and the car will act on it.
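* Press `q` in the preview window to quit; the main loop watches for it with `cv2.waitKey`.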
---
# Things that need to be improved
1. In web control mode, the sensor only prevents collisions while the car is going forward.
2. The components on the car are quite heavy, so it cannot move when the battery is low. Besides, the power consumption of the L298N is high.
3. There is a 'turn right' class in the model, and Teachable Machine classifies it correctly; however, it doesn't work on the Raspberry Pi.
4. The HC-SR04 readings and the OpenCV stream lag.
5. The model classifies pictures correctly only when they are very close to the camera.
6. Image recognition mode should use the HC-SR04, too.
7. The two modes should be combined.
8. Find a more powerful power bank.
# References
* [L298N tutorial](http://www.piddlerintheroot.com/l298n-dual-h-bridge/)
* [HC-SR04 tutorial](https://atceiling.blogspot.com/2014/03/raspberry-pi_18.html)
* [Stepper motor tutorial](http://hophd.com/raspberry-turnable-camera/)
* [Teachable Machine tutorial 1](https://www.rs-online.com/designspark/google-teachable-machine-raspberry-pi-4-cn)
* [Teachable Machine tutorial 2](https://hackmd.io/@LHB-0222/maker-1)
* [OpenCV installation](https://medium.com/@aadeshshah/pre-installed-and-pre-configured-raspbian-with-opencv-4-1-0-for-raspberry-pi-3-model-b-b-9c307b9a993a)
* [Icon source](https://www.flaticon.com/)
###### tags: `TeachableMachine`