# Arduino Self-Driving Car

###### tags: `self-learning`

![](https://i.imgur.com/nQQhW61.jpg)

## References:
- https://medium.com/@nouamanetazi/automated-driving-robot-with-a-raspberry-pi-an-arduino-a-pi-camera-and-an-ultrasonic-sensor-9e74a0dfc7e
- https://zhengludwig.wordpress.com/projects/self-driving-rc-car/

## System design:
> The system consists of three subsystems: an input unit (camera, ultrasonic sensor), a processing unit (computer) and a motor control unit.

### Hardware:
- Arduino UNO
- Ultrasonic sensor (HC-SR04)
- [Raspberry Pi 3 B+](https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/)
- [Pi Camera](https://www.raspberrypi.org/products/camera-module-v2/)

### Software:
- [VNC Viewer](https://www.realvnc.com/en/connect/download/viewer/): to remotely control the desktop interface of the Raspberry Pi
- C++ on the Arduino
- Python + [NumPy](https://zh.wikipedia.org/wiki/NumPy) + [OpenCV](https://zh.wikipedia.org/wiki/OpenCV) on the Raspberry Pi
- A [serial protocol](https://zh.wikipedia.org/wiki/%E4%B8%B2%E8%A1%8C%E9%80%9A%E4%BF%A1) for Arduino <-> Raspberry Pi communication (robust-serial)
- Python + [ZMQ](https://zh.wikipedia.org/wiki/%C3%98MQ) on the server (the server can be a laptop, for example)

### Input unit:
> A Raspberry Pi (model 3 B+), fitted with a Pi Camera module and an HC-SR04 ultrasonic sensor, is used to collect input data. Two client programs run on the Raspberry Pi to stream colour video and ultrasonic sensor data to the computer over the local Wi-Fi connection. To achieve low-latency video streaming, the video is scaled down to QVGA (320×240) resolution.

## Architecture overview:
![](https://i.imgur.com/nLhwUB6.png)

## VNC viewer:
> First of all, we connected the laptop and the Raspberry Pi to the same Wi-Fi network (for example, the laptop's Wi-Fi hotspot), enabled VNC on the Raspberry Pi, and then controlled the Raspberry Pi from the laptop through VNC Viewer.

## User interface:
- The controller: in our case, the terminal. It is where the user enters the commands to be executed (or, more simply, just presses a key).
- The server: receives a command from the controller, processes it, and forwards it to one or more robots. When asked to reach a given coordinate, for example, it computes the shortest path (using the [A* algorithm](https://zh.wikipedia.org/wiki/A*%E6%90%9C%E5%B0%8B%E6%BC%94%E7%AE%97%E6%B3%95)) and sends the resulting list of commands to the Raspberry Pi.

## Arduino:
> The Arduino communicates with the Raspberry Pi, measures the distance to the closest object ahead with the ultrasonic sensor, and drives the motors. For the communication we used a classical serial protocol, as found in the Cherokey docs.
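> To make the exchange concrete, here is a minimal sketch of what the Raspberry Pi side of this protocol could look like. It assumes the `write_order` / `write_i8` helpers and the `Order` enum come from the robust-serial library listed above (they also appear in the line-following script later in this note); the port name, baud rate and speed value are placeholders, not necessarily the ones used on the real car.

```python=
# Sketch of the Pi -> Arduino side of the serial protocol.
# Assumptions: helpers come from robust-serial; port name and baudrate are placeholders.
import serial
from robust_serial import Order, write_order, write_i8

# The Arduino UNO usually shows up as /dev/ttyACM0 on the Raspberry Pi.
serial_file = serial.Serial(port='/dev/ttyACM0', baudrate=115200, timeout=0)

# One order byte tells the Arduino what follows; the payload is a signed 8-bit speed.
write_order(serial_file, Order.MOTOR)
write_i8(serial_file, 55)
```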
## Image Capturing:
> We initialise our camera object, which gives us access to the Raspberry Pi camera module, and set the resolution of the camera stream to 320×240 with a maximum frame rate of 90 FPS. The Pi Camera supports several capture modes, so we wrote the small script below to compare the performance of each capture method (we also reused it to time the image processing later). Capturing straight into a `PiRGBArray` turned out to be the most suitable method for us, since it avoids the expensive compression to JPEG, which we would otherwise have to decode back into an OpenCV format anyway.

```python=
import time

from picamera import PiCamera
from picamera.array import PiRGBArray

count_timer = 0
sum_delta = 0

camera = PiCamera()
camera.resolution = (320, 240)  # resolution
camera.framerate = 90           # frame rate

for _ in range(100):                     # time a batch of captures
    start = time.time()
    rawCapture = PiRGBArray(camera)      # capture mode: raw array, no JPEG step
    camera.capture(rawCapture, format="bgr")
    image = rawCapture.array             # NumPy array, directly usable by OpenCV
    #result = processimage(image)
    finish = time.time()
    count_timer += 1
    sum_delta = sum_delta + finish - start
    moy = sum_delta / count_timer        # average capture time in seconds
    print('Captured at %.2ffps' % (1 / moy))
```

## Image Processing:
### Detecting the centroid:
> In order for the robot to follow the white line, we naively try to find the line's centroid, but first the images need some preprocessing. Since the HSV format is better suited for picking colour ranges (white in our case), we start by converting the image to HSV, then we pick a range for the desired white colour and add some blur and dilation for neater results. Afterwards, we need the centroid of the white contour; to avoid corrupted results caused by other white areas in the picture, we only consider the biggest white contour. At last, we obtain the desired centroid, which is the point the robot must follow.

### Detecting the intersections:
> For the grid course, we also needed to detect intersections. So, in addition to the processing above, we added another step that approximates the white contour with a polygon using `cv2.approxPolyDP`; by counting the number of sides of this polygon, we can tell whether or not it is an intersection.

#### Final script:
```python=
import numpy as np
import cv2

def processimage(img):
    """Return the centroid of the biggest white contour and an intersection flag."""
    h, w = img.shape[:2]
    blur = cv2.blur(img, (6, 6))
    _, thresh1 = cv2.threshold(blur, 140, 255, cv2.THRESH_BINARY)
    hsv = cv2.cvtColor(thresh1, cv2.COLOR_BGR2HSV)  # frames are captured in BGR
    #cv2.imshow('hsv', hsv)

    # Define range of white color in HSV
    lower_white = np.array([0, 0, 168])
    upper_white = np.array([172, 111, 255])
    # Threshold the HSV image
    mask = cv2.inRange(hsv, lower_white, upper_white)

    # Clean up the white area: erode the noise, then dilate it back
    kernel_erode = np.ones((6, 6), np.uint8)
    eroded_mask = cv2.erode(mask, kernel_erode, iterations=1)
    kernel_dilate = np.ones((4, 4), np.uint8)
    dilated_mask = cv2.dilate(eroded_mask, kernel_dilate, iterations=1)
    #cv2.imshow('gray.png', np.float32(dilated_mask))

    # OpenCV 3.x signature; with OpenCV 4 use: contours, _ = cv2.findContours(...)
    _, contours, _ = cv2.findContours(dilated_mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
    contours = sorted(contours, key=cv2.contourArea, reverse=True)[:1]  # biggest contour only

    result = {}
    for cnt in contours:
        # Centroid of the contour
        M = cv2.moments(cnt)
        if M['m00'] == 0:
            cx, cy = 320, 0  # degenerate contour: fall back to the right edge
        else:
            cx = int(M['m10'] / M['m00'])
            cy = int(M['m01'] / M['m00'])
        result["centroid"] = (cx, cy)

        # Approximate the contour with a polygon; many sides means an intersection
        beta = 0.01  # approximation accuracy, as a fraction of the contour perimeter
        approx = cv2.approxPolyDP(cnt, beta * cv2.arcLength(cnt, True), True)
        if len(approx) >= 6:
            cv2.drawContours(img, [approx], 0, (0, 0, 255), 5)
            result["intersection"] = True
        else:
            cv2.drawContours(img, [approx], 0, (0, 255, 0), 5)
            result["intersection"] = False

    cv2.imshow("aaa", img)
    cv2.waitKey(1)  # needed for the debug window to refresh
    return result
```
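> As a quick sanity check, `processimage` can be run on a single frame grabbed exactly like in the capture script. The snippet below is only a usage sketch; it assumes the final script is saved as `image_processing.py` (a filename of our choosing, not from the original write-up).

```python=
# Usage sketch: run processimage() on one captured frame and inspect its result.
import cv2
from picamera import PiCamera
from picamera.array import PiRGBArray

from image_processing import processimage  # the final script above, saved as a module

camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 90

rawCapture = PiRGBArray(camera)
camera.capture(rawCapture, format="bgr")

result = processimage(rawCapture.array)
if "centroid" in result:
    cx, cy = result["centroid"]          # the point the robot must follow
    print("centroid:", (cx, cy), "intersection:", result.get("intersection", False))
else:
    print("no white line found in this frame")
cv2.waitKey(0)                           # keep the debug window open until a keypress
```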
### Following the line:
> Now that we have the centroid's coordinates, we can take its x coordinate as the error we must reduce. The code above gives an error between 0 and 320, so we subtract 160 to bring it into the range -160 to 160, and divide by 160 to normalise it; we get an error between -1 and 1.
> To regulate this error we had several controllers to choose from. Our main constraint is that the friction between the wheels and the carpet the car moves on is quite high, so just by using a proportional controller the car was already stable. After many trials with different values for the proportional coefficient and for the maximum and minimum velocities, the following script worked well for us:

```python=
# cx is the x coordinate of the centroid returned by processimage();
# serial_file, write_order, write_i8 and Order come from robust-serial
# (see the serial sketch in the Arduino section).
erreur = cx - 160              # "erreur" = error: offset from the image centre
xmax = 160
vmax_straight = 55             # wheel speed while roughly following the line
v_min_turn = 90
v_max_turn = 92
err_norm = erreur / xmax       # normalised error in [-1, 1]
lim = 0.35                     # above this, turn on the spot instead of steering

if 0 <= err_norm <= lim:
    # Line slightly to the right: slow the right wheel down.
    v_right = int(vmax_straight * (1 - err_norm))
    v_left = vmax_straight
elif -lim <= err_norm < 0:
    # Line slightly to the left: slow the left wheel down (mirror of the branch above).
    v_right = vmax_straight
    v_left = int(vmax_straight * (1 + err_norm))
elif err_norm > lim:
    # Far to the right: spin in place, interpolating the turn speed.
    lmbda = (err_norm - lim) / (1 - lim)
    v_right = -int(v_min_turn * lmbda + v_max_turn * (1 - lmbda))
    v_left = int(v_min_turn * lmbda + v_max_turn * (1 - lmbda))
else:
    # Far to the left: spin in place the other way.
    lmbda = (abs(err_norm) - lim) / (1 - lim)
    v_right = int(v_min_turn * lmbda + v_max_turn * (1 - lmbda))
    v_left = -int(v_min_turn * lmbda + v_max_turn * (1 - lmbda))

#print("erreur norm = %0.2f" % (err_norm * 100), "%")
#print('v_left=%d , v_right=%d' % (v_left, v_right))
write_order(serial_file, Order.MOTOR)
write_i8(serial_file, v_right)
write_i8(serial_file, v_left)  # the left speed presumably follows in the same way (not shown in the original snippet)
```
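> To put everything together, here is a sketch of what a complete control loop on the Raspberry Pi could look like. This is an assembled illustration, not code from the referenced article: it assumes `processimage` from the image-processing script, a hypothetical `compute_speeds(cx)` helper that wraps the proportional controller above and returns `(v_left, v_right)`, and a `serial_file` opened as in the serial sketch in the Arduino section.

```python=
# Main-loop sketch (assembled illustration, not from the original project):
# capture -> processimage -> proportional controller -> motor orders over serial.
import serial
from picamera import PiCamera
from picamera.array import PiRGBArray
from robust_serial import Order, write_order, write_i8

from image_processing import processimage   # the final script above (filename is ours)
from controller import compute_speeds       # hypothetical wrapper of the script above

serial_file = serial.Serial(port='/dev/ttyACM0', baudrate=115200, timeout=0)  # placeholder port

camera = PiCamera()
camera.resolution = (320, 240)
camera.framerate = 90

while True:
    rawCapture = PiRGBArray(camera)
    camera.capture(rawCapture, format="bgr")
    result = processimage(rawCapture.array)

    if "centroid" in result:
        cx, _ = result["centroid"]
        v_left, v_right = compute_speeds(cx)   # proportional controller from above
    else:
        v_left, v_right = 0, 0                 # line lost: stop the motors

    write_order(serial_file, Order.MOTOR)
    write_i8(serial_file, v_right)
    write_i8(serial_file, v_left)              # same assumed order layout as above
```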