**DivieGate**
By Gioegios Koiliarakis (Gk222jw)
> ***The Smart Car Gate project aims to create an efficient and secure system for opening car gates without traditional radio remotes. It uses a Raspberry Pi 4 for plate recognition and for hosting a web application, and a Pico W microcontroller with a light sensor and an ultrasonic sensor for car detection. The primary objective is to provide a faster and more convenient way to access a car gate while ensuring security and gathering data regarding car departures.***
Approximate time: 68 hours
# Objective
**Why I Chose the Project:**
I decided on the development of the Smart Car Gate device due to its potential to address the limitations of traditional radio remote systems and enhance the overall user experience in accessing car gates and other access systems like doors and locks. The inconvenience of having to stop the car in the middle of the road to search for a remote control was a significant motivator for my choice. Additionally, the elimination of physical remotes prone to misplacement, battery failure, or hardware issues was a driving factor in my decision.
**Purpose of the Project:**
The Smart Car Gate device serves the purpose of streamlining access to car gates while enhancing security through license plate recognition. By incorporating a Raspberry Pi 4 as a web server and plate reader, and a Pico W microcontroller with sensors for car detection (light and ultrasonic sensors), the project offers a technologically advanced solution. The integration of these components results in an innovative approach to accessing car gates and similar systems.
**Insights and Contributions:**
The data collected by the Smart Car Gate device holds valuable insights and opportunities for analysis. The device gathers information about car departures, providing timestamps for monitoring and analyzing departure patterns. This data can be utilized to optimize gate management systems, evaluate traffic flow, and enhance security protocols. Moreover, analyzing the frequency and duration of gate openings contributes to improved resource allocation and energy management, leading to cost savings and environmental sustainability.
**In summary, the Smart Car Gate project addresses the inconveniences of traditional radio remote systems by introducing a modern, automated solution. It aims to streamline gate access, bolster security through license plate recognition, and gather data for in-depth analysis. The incorporation of advanced technologies, such as the TensorFlow Lite object recognition model, ultrasonic sensors, and light sensors, highlights the project's innovative nature. Ultimately, this endeavor seeks to revolutionize the way car gates are accessed, managed, and understood.**
# List of materials
| Material | Use and summary |
| --- | --- |
| Photosensitive resistor sensor module (LDR) | *Light-dependent resistors (LDRs) are light-sensitive devices most often used to indicate the presence or absence of light. Unlike the conventional light sensors you see in smartphones, this module gives a low reading in bright environments and a high reading in dark ones (around 50000, but it depends on the sensor).* |
| 1 red LED, 1 yellow LED | *Status indicators: one for car detection, one for low light.* |
| 11 cables of various kinds | *Input/output wiring between the Pico and the sensors on the breadboard.* |
| HC-SR04 ultrasonic sensor | *Measures distance like a bat, sending a frequency too high for humans to hear; the sensor measures distance by calculating how long the pulse took to hit an object and reflect back. Used as a failsafe for knowing whether a car is actually next to the gate.* |
| Pico WH | *The side brains of the operation; the Pico W is a microcontroller usable for any task, from game controllers to weather stations.* |
| Raspberry Pi 4B 4GB | *The true brains of the operation; the Pi 4 runs a single Python program with three roles: hosting a web app through a local server, running an object recognition model, and logging to a database.* |
| SD card, 128GB to 1TB | *Storage for the Pi 4; the SD card holds the object recognition model and the database.* |
| USB camera | *Used for the plate recognition; any USB camera can be used.* |
| 2 330Ω resistors | *Limit current by introducing a resistive element in circuit board applications, mostly used to save poor LEDs.* |
> Parts were from the basic IoT kit bought from Electrokit
> and from a sensor kit from AliExpress costing 12 euro.
> The Raspberry Pi 4 was bought from Amazon for 30 euro.
# Computer setup
**For the setup I used my laptop with ~~Temple OS~~ Windows 11 installed, and a Raspberry Pi 4 4GB for programming.**
**Windows Device Setup for TensorFlow Lite Object Detection with Raspberry Pi**
Welcome to this comprehensive guide on setting up your Windows device to work flawlessly with TensorFlow Lite for object detection on a Raspberry Pi. In this tutorial, we'll not only walk you through the setup process but also provide detailed instructions on installing essential tools and dependencies.
**Flashing MicroPython onto the Pico W using Thonny**
Before we dive into the setup, let's begin with flashing MicroPython onto the Pico W using Thonny. MicroPython is a lightweight version of Python designed for microcontrollers like the Pico W. Thonny streamlines the flashing process, ensuring your Pico W is ready for development.
1. **Install Thonny:**
- Open a command prompt or terminal on your Windows machine.
- Run the following command to install Thonny using Python's package manager, pip:
```
pip install thonny
```
2. **Flash MicroPython:**
- Hold down the BOOTSEL button on the Pico W while connecting it to your Windows machine with a USB cable; the board mounts as a USB drive named `RPI-RP2`.
- Download the official MicroPython UF2 firmware for the Pico W from the Raspberry Pi website and drag the `.uf2` file onto the `RPI-RP2` drive; the board then reboots straight into MicroPython.
- Alternatively, open Thonny, go to Tools → Options → Interpreter, pick "MicroPython (Raspberry Pi Pico)", and use its "Install or update MicroPython" dialog to flash the firmware from inside the IDE.
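As an optional sanity check, open Thonny's shell with the Pico connected and run:
```
import sys
print(sys.implementation)  # should report name='micropython' on a freshly flashed Pico W
```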
**Setting Up Your Environment and Workflow**
Let's move on to setting up your development environment and workflow for TensorFlow Lite object detection.
1. **VSCode Installation:**
- Download and install Visual Studio Code (VSCode) from the official website: [VSCode Download](https://code.visualstudio.com/download).
- Open VSCode and install relevant extensions to enhance your development experience.
2. **Anaconda Installation:**
- Download Anaconda for Windows from the official website: [Anaconda Download](https://www.anaconda.com/products/distribution).
- Follow the installation instructions to set up Anaconda on your Windows machine.
3. **Testing on Anaconda:**
- Launch the Anaconda Navigator or Anaconda Prompt.
- Create a new virtual environment for your project using the following command:
```
conda create -n myenv python=3.8
```
- Activate the environment:
```
conda activate myenv
```
- Install necessary packages using conda or pip.
**Flashing the Raspberry Pi**
**Official Raspberry Pi Imager:**
- Download and install the official Raspberry Pi Imager: [Raspberry Pi Imager](https://www.raspberrypi.org/software/).
- Insert an SD card into your computer and open the Raspberry Pi Imager.
- Follow the prompts to select your OS (Raspberry Pi OS, formerly Raspbian), your SD card, and write the OS to the SD card.
**Managing Dependencies**
Managing dependencies is crucial for your project's success. Here's how to do it:
**Installing Dependencies:**
- Open the Anaconda Navigator or Anaconda Prompt.
- Activate your project's environment:
```
conda activate myenv
```
- Install necessary packages using conda or pip:
```
conda install package_name
```
or
```
pip install package_name
```
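For reference, the desktop-side scripts later in this write-up import Flask, OpenCV, and NumPy, so a minimal install for them might look like this (the TensorFlow Lite interpreter package, tflite-runtime, may need a platform-specific wheel; see the guide below):
```
pip install flask opencv-python numpy
pip install tflite-runtime
```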
For a step-by-step guide on setting up your Windows OR Linux (preferred option) environment, installing dependencies, and configuring your development tools, refer to the detailed guide: [TensorFlow Lite Object Detection on Android and Raspberry Pi Guide](https://github.com/EdjeElectronics/TensorFlow-Lite-Object-Detection-on-Android-and-Raspberry-Pi/blob/master/deploy_guides/Raspberry_Pi_Guide.md).
* By following these detailed steps, you'll seamlessly set up your Windows machine for TensorFlow Lite object detection and be ready to deploy your code to the Raspberry Pi. Happy coding and building amazing projects!
* The linked guide simplifies and speeds up downloading the necessary dependencies for object recognition; for example, it combines downloading the dependencies into a single command. It also makes the setup safer by using a virtual environment, ensuring that competing versions of the same dependency do not interact (a virtual environment was not used in this project because the Pi image was clean).
**Image training**
* Before executing your object recognition model you need to train a TFLite (TensorFlow Lite, not Team Fortress) model; this format is optimized for lower-end hardware like the Pi 4.
**1. First step is to collect images**
* You will need to capture around 170-200 images of the object you want to train on, under different lighting conditions.
* Labeling: for the AI to train on a specific object in your images, you need to label the object using the labelImg program provided by the link above.
---
**2. Decide your training approach**
To train the model you must decide whether to do it locally or in the cloud.
* Locally is a much slower option but more reliable.
* In the cloud, using Google Colab, is a faster approach that doesn't require much coding knowledge.
Both options are detailed in the GitHub link above.
**Plugins**
1. **PICO W**
* To use the Pico W with VSCode you will need to install PyMakr from the Extensions tab on the left side of VSCode.
* Secondly, you need to find the COM port of the Pico; with a proper USB micro-B cable (one that carries both power and data), the Pico should appear in the drop-down menu under the new PyMakr icon.
![PyMakr device list in VSCode](https://cdn.discordapp.com/attachments/798492080678764565/1143584778428362892/image.png)
Every dependency is listed in the GitHub repo above, and everything code-related is explained in the Code section.
# Putting everything together
Starting with the easier part, the Raspberry Pi 4: it is only connected to a power source through its USB-C port and to a USB camera via a USB type-A connector.
For the Pico W, the first step is connecting it to the middle of the breadboard as usual.
* For the LEDs, connect one leg to a resistor; 220Ω to 330Ω is perfect for simple LEDs. The other side of the resistor should be connected to a GP pin (for example, the yellow LED is linked to physical pin 20, also known as GP15). The other leg is connected to ground via the blue ground cable in the diagram.
* For the ultrasonic sensor, connect GND to one of the ground pins shown in the Pico diagram and VCC to 5V via the VBUS pin. The other two pins (trigger and echo) must be connected to any GP pins and configured in MicroPython to tell the program where they are.
* For the LDR it depends on which LDR module you bought; most commonly one pin is ground, while the others may be a power supply pin and an output, so you must check what the pins on your LDR (and your other sensors) actually are.
**PI4**
* First, flash your SD card with a recent version of Raspbian (Raspberry Pi OS) or any other actively maintained Debian-based distro compatible with the Pi 4 or Pi 3.
* Second, connect your USB camera to the Pi 4 and enable it (if the system didn't enable it already) through your distro's settings.
---

***Diagram of the Pico***
^ The LDR sensor is on the cables at 26 to 28.
---
# Platform
For my project, I am using a local platform for convenience and due to the limitations of my hardware and location. The devices communicate via Wi-Fi, which could be provided by the Raspberry Pi 4, but for testing purposes the Wi-Fi signal is provided by a rooted phone's hotspot to make monitoring more convenient during on-the-go testing (any device with hotspot capabilities will work). The data is stored locally and exposed on the Wi-Fi network through a web app.
# Code
> As a great man said, this is where the fun begins.
**Starting with the Pico W**
```
import network
from machine import Pin, ADC
import utime
import socket
import ujson
```
* The code starts by importing the modules needed for networking, hardware interaction, timing, socket communication, and JSON handling. (Desktop libraries such as sqlite3 and Flask are not available in MicroPython; the database lives on the Pi 4, which receives this data over HTTP.)
```
ssid = "Webhack"
password = "nah uh"
server_ip = "123.456.789.0"
server_port = 8080
connection_timeout = 30
```
* These variables store the network settings: the Wi-Fi SSID and password, the server IP and port (the IP shown is a placeholder; use your server's address), and a connection timeout limit.
```
trigger = Pin(3, Pin.OUT)
echo = Pin(2, Pin.IN)
led1 = Pin(27, Pin.OUT)
led2 = Pin(15, Pin.OUT)
photoresistor = ADC(26)
```
* Here, GPIO pins are initialized for trigger and echo (used in ultrasonic distance measurement), two LEDs (for status indication), and a photoresistor (for light level measurement).
```
def ultra():
    # Ultrasonic distance and light level measurement function.
    print("Measuring distance and light levels...")
    trigger.off()
    utime.sleep_us(2)
    trigger.on()
    utime.sleep_us(5)
    trigger.off()
    while echo.value() == 0:
        signaloff = utime.ticks_us()
    while echo.value() == 1:
        signalon = utime.ticks_us()
    timepassed = signalon - signaloff
    distance = (timepassed * 0.0343) / 2
    print("The distance from the object is", distance, "cm")
    light_level = photoresistor.read_u16()
    print("Brightness level:", light_level)
    if distance <= 50:
        led1.on()  # Turn on the LED: a car is close enough
        car_detected = True
    else:
        led1.off()  # Turn off the LED
        car_detected = False
    if light_level < 50000:  # Adjust this threshold according to your ambient light conditions
        led2.off()  # Turn off the additional LED
    else:
        led2.on()  # Turn on the additional LED
        print("Low light detected")
    if connect_to_wifi():
        send_data(distance, light_level, car_detected)
    else:
        print("Wi-Fi connection failed.")

def connect_to_wifi():
    wlan = network.WLAN(network.STA_IF)
    wlan.active(True)
    start_time = utime.time()
    while not wlan.isconnected() and utime.time() - start_time < connection_timeout:
        print("Connecting to Wi-Fi...")
        wlan.connect(ssid, password)
        utime.sleep(1)
    if wlan.isconnected():
        print("Wi-Fi connected!")
        print("IP address:", wlan.ifconfig()[0])
        return True
    else:
        print("Failed to connect to Wi-Fi within the specified timeout.")
        return False

def send_data(distance, light_level, car_detected):
    data = {
        'distance': distance,
        'light_level': light_level,
        'car_detected': car_detected
    }
    try:
        addr = socket.getaddrinfo(server_ip, server_port)[0][-1]
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(addr)
        # Set the Content-Type header to indicate JSON data
        headers = {
            "Content-Type": "application/json"
        }
        message = ujson.dumps(data)
        data_length = len(message)
        # Send the HTTP POST request with the JSON data
        s.sendall(b"POST /receive_data HTTP/1.1\r\n")
        s.sendall(b"Host: " + server_ip.encode() + b":" + str(server_port).encode() + b"\r\n")
        s.sendall(b"Content-Length: " + str(data_length).encode() + b"\r\n")
        s.sendall(b"Connection: close\r\n")
        for header, value in headers.items():
            s.sendall(header.encode() + b": " + value.encode() + b"\r\n")
        s.sendall(b"\r\n")
        s.sendall(message.encode())
        s.close()
    except OSError as e:
        print("Failed to send data. Error:", e)

def main():
    while True:
        ultra()
        utime.sleep(1)

if __name__ == "__main__":
    main()
```
* In the main() function, an infinite loop is started. Within this loop, the ultra() function is called repeatedly to gather data, and the loop pauses for one second between measurements.
```
message = ujson.dumps(data)
data_length = len(message)
# Send the HTTP POST request with the JSON data
s.sendall(b"POST /receive_data HTTP/1.1\r\n")
s.sendall(b"Host: " + server_ip.encode() + b":" + str(server_port).encode() + b"\r\n")
s.sendall(b"Content-Length: " + str(data_length).encode() + b"\r\n")
s.sendall(b"Connection: close\r\n")
for header, value in headers.items():
s.sendall(header.encode() + b": " + value.encode() + b"\r\n")
s.sendall(b"\r\n")
s.sendall(message.encode())
s.close()
```
This code properly formats the HTTP POST request, including headers and the JSON data, before sending it to the recipient.
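For clarity, here is what the assembled request looks like on the wire, with made-up example values:
```
POST /receive_data HTTP/1.1
Host: 192.168.1.50:8080
Content-Length: 62
Connection: close
Content-Type: application/json

{"distance": 42.5, "light_level": 61234, "car_detected": true}
```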
This code exemplifies the integration of hardware and network communication on the Raspberry Pi Pico, showcasing its application in a car detection system. By running this program, you can measure distances, detect light levels, and transmit data to a server, forming a robust solution for various applications.
**Web server for pi4 or PC**
```
from flask import Flask, render_template, request
import sqlite3
app = Flask(__name__)
```
* The code starts by importing the necessary modules: Flask for building the web application and sqlite3 for working with SQLite databases.
```
# Create a SQLite database
conn = sqlite3.connect('car_detection.db')
cursor = conn.cursor()
cursor.execute('''
    CREATE TABLE IF NOT EXISTS car_detection (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        distance REAL,
        light_level INTEGER,
        timestamp DATETIME DEFAULT CURRENT_TIMESTAMP,
        car_detected BOOLEAN
    )
''')
conn.commit()
conn.close()
```
* The above section creates a SQLite database named "car_detection.db" if it doesn't already exist. It defines a table "car_detection" with columns for the ID, distance, light level, timestamp, and a boolean value indicating whether a car was detected. The column order matters here, because the /values route below reads rows by index.
```
@app.route('/')
def home():
    return render_template('index.html')
```
* This route function (/) renders an HTML template named "index.html". This is the main page of the web application.
```
@app.route('/values')
def get_values():
    conn = sqlite3.connect('car_detection.db')
    cursor = conn.cursor()
    cursor.execute('SELECT * FROM car_detection ORDER BY id DESC LIMIT 1')
    row = cursor.fetchone()
    conn.close()
    if row:
        data = {
            'distance': row[1],
            'light_level': row[2],
            'timestamp': row[3],
            'car_detected': row[4]
        }
        return data
    else:
        return {}
```
* The /values route function retrieves the latest car detection data from the SQLite database. It fetches the most recent entry and returns a JSON object containing information about the distance, light level, timestamp, and whether a car was detected.
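Note that the Pico posts its readings to /receive_data, a route the listing above does not show. A minimal sketch of what that endpoint could look like, assuming the table schema created above:
```
@app.route('/receive_data', methods=['POST'])
def receive_data():
    # Parse the JSON body sent by the Pico
    payload = request.get_json(force=True)
    conn = sqlite3.connect('car_detection.db')
    cursor = conn.cursor()
    # The timestamp column fills itself in via its DEFAULT clause
    cursor.execute(
        'INSERT INTO car_detection (distance, light_level, car_detected) VALUES (?, ?, ?)',
        (payload.get('distance'), payload.get('light_level'), payload.get('car_detected'))
    )
    conn.commit()
    conn.close()
    return {'status': 'ok'}
```
Returning a small JSON body keeps the Pico side simple; it closes the socket without parsing the response anyway.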
```
if __name__ == '__main__':
app.run(host='0.0.0.0', port=8080)
```
* Finally, this conditional block ensures that the Flask application runs when the script is executed directly. It sets the host to '0.0.0.0' to make the application accessible from all network interfaces and listens on port 8080.
By running this script, you create a Flask web application that provides endpoints for accessing car detection data and rendering an HTML interface. This approach allows users to monitor and interact with car detection information through a user-friendly web interface.
**For the web app**
```
<script>
function updateData(distance, light_level, car_detected, timestamp) {
// Function to update values on the page
}
function updateValues() {
// Function to fetch and update values from the server
}
</script>
```
* The `<script>` tag contains JavaScript functions that interact with the web application. `updateData()` is responsible for updating the displayed values on the page. `updateValues()` initiates an AJAX request to the server to retrieve the latest car detection data.
```
var xhr = new XMLHttpRequest();
xhr.onreadystatechange = function() {
    if (xhr.readyState === XMLHttpRequest.DONE) {
        if (xhr.status === 200) {
            var data = JSON.parse(xhr.responseText);
            updateData(data.distance, data.light_level, data.car_detected, data.timestamp);
        } else {
            console.log("Failed to fetch values from the server.");
        }
    }
};
xhr.open("GET", "/values", true);
xhr.send();
```
* Within the updateValues() function, an XMLHttpRequest is created to fetch data from the server's /values endpoint. When the request is complete, the onreadystatechange event handler checks the status. If successful (status code 200), it parses the received JSON response and calls updateData() to display the data on the page. If unsuccessful, an error message is logged to the console.
This HTML template, along with its embedded JavaScript logic, provides a responsive user interface that fetches and displays real-time car detection data from the server. This enables users to monitor car detection status, distance, light levels, and timestamps through a user-friendly web interface.
**For the plate recognition**
```
import cv2
import numpy as np
import sqlite3
```
* The code begins by importing the necessary modules: OpenCV for computer vision tasks, NumPy for numerical operations, and SQLite3 for database management.
```
# Load your pre-trained TensorFlow Lite model
model = None  # Load your model here
```
* Here, the script initializes the pre-trained TensorFlow Lite model that's used for object detection. You should replace the placeholder with the actual code to load your model.
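For a concrete, hedged example of that loading step on the Pi, using the tflite_runtime package (the filename `detect.tflite` is a placeholder for whatever your training produced):
```
from tflite_runtime.interpreter import Interpreter  # installed via pip install tflite-runtime

# Load the compiled TFLite model and allocate its tensors
interpreter = Interpreter(model_path='detect.tflite')
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
```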
```
# Open a connection to the SQLite database
conn = sqlite3.connect('car_detection.db')
cursor = conn.cursor()
```
* The script establishes a connection to the SQLite database named "car_detection.db" and prepares a cursor to execute SQL queries.
```
# Open the USB camera
cap = cv2.VideoCapture(0)
```
* The USB camera is accessed by creating a VideoCapture object from OpenCV.
```
while True:
    ret, frame = cap.read()

    # Perform object detection on the frame
    # ...
    # Extract detected objects and their labels

    if 'carplate' in detected_labels:
        # Insert detection information into the database
        cursor.execute('''
            INSERT INTO car_detection (timestamp, car_detected)
            VALUES (CURRENT_TIMESTAMP, 1)
        ''')
        conn.commit()
        print("Car plate detected!")

    cv2.imshow('Object Detection', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
```
* Within an infinite loop, the script captures frames from the camera. Object detection is performed on each frame, where detected objects and their labels are extracted (details omitted). If a car plate is detected (assuming 'carplate' is in the detected_labels list), the script inserts a new detection record into the database, including the timestamp and a flag indicating car detection. The frame with detection overlay is displayed using cv2.imshow(). The loop continues until the 'q' key is pressed.
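The detection step itself is elided above ("details omitted"). As a hypothetical helper, here is one common way to run a TFLite SSD-style detector on a frame; the output tensor order varies between exported models, so check yours:
```
def detect_labels(interpreter, frame, labels, threshold=0.5):
    # Resize the frame to the model's expected input size
    input_details = interpreter.get_input_details()
    output_details = interpreter.get_output_details()
    _, height, width, _ = input_details[0]['shape']
    resized = cv2.resize(frame, (int(width), int(height)))
    input_data = np.expand_dims(resized, axis=0)

    # Float models expect normalized input; quantized models take raw uint8
    if input_details[0]['dtype'] == np.float32:
        input_data = (np.float32(input_data) - 127.5) / 127.5

    # Run inference
    interpreter.set_tensor(input_details[0]['index'], input_data)
    interpreter.invoke()

    # Typical SSD export order is boxes, classes, scores; adjust if yours differs
    classes = interpreter.get_tensor(output_details[1]['index'])[0]
    scores = interpreter.get_tensor(output_details[2]['index'])[0]
    return [labels[int(c)] for c, s in zip(classes, scores) if s > threshold]
```
With a helper like this, `detected_labels = detect_labels(interpreter, frame, labels)` would fill in the elided step in the loop above.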
```
# Clean up
cap.release()
cv2.destroyAllWindows()
conn.close()
```
* After exiting the loop, the script releases the camera, closes any OpenCV windows, and disconnects from the SQLite database.
This script provides a foundation for real-time car plate detection using OpenCV and TensorFlow Lite. It captures frames from a USB camera, performs object detection, records car detection events in a SQLite database, and displays the processed frames. By customizing the object detection section and loading a relevant TensorFlow Lite model, you can create a practical car plate recognition system.
# Transmitting the data / connectivity
**Network Type:**
* The project uses a local network for data transmission. It leverages Wi-Fi to connect devices within a limited range, enabling seamless communication between the Raspberry Pi and other devices, such as the user's computer or a remote server.
**Data Storage:**
* Database: The project utilizes SQLite, a lightweight, embedded relational database management system, chosen for its simplicity, quick integration, and efficiency in small-scale web applications. The car detection data, including timestamps, car detection status, and the sensor readings, is stored in this database.
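As a quick sanity check on the stored log, a short sketch (assuming the car_detection.db schema created by the Flask app above):
```
import sqlite3

# Print the ten most recent detection events
conn = sqlite3.connect('car_detection.db')
rows = conn.execute(
    'SELECT timestamp, distance, light_level, car_detected '
    'FROM car_detection ORDER BY id DESC LIMIT 10'
).fetchall()
conn.close()
for row in rows:
    print(row)
```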
# Presenting the data
For preserving data, both a txt backup and an SQLite database are created and used by the web platform.
*The web app looks like this:*
# Final Thoughts and The DivieGate
Navigating through the challenges of balancing various responsibilities this summer, I've found this course to be incredibly valuable. It has not only provided ample assistance but has also delivered a wealth of knowledge that has resonated with me profoundly.
Reflecting on the project, there are three key aspects I wish I had approached differently from the outset. Primarily, given the recent updates to the Pico, I would have explored integrating Bluetooth as a means of communication between the Pico and the Raspberry Pi. Secondly, I would have opted to locally train the TFlite model on my personal computer right from the beginning, bypassing the hurdles encountered with Google Colab. And lastly, I recognize the need for allocating more dedicated time to this exceptional project.
---

*Low light example*