# Mahindra e2o Hardware and Computation Setup
Please follow these instructions for the NUC setup on the car. All the NUCs in the car run Ubuntu 20.04 except the one that communicates with CAN, which runs Ubuntu 18.04, because the ```systec_can``` driver does not compile against newer Linux kernels.
| User@IP | Hostname | Password | Used For |
|:--------------------:|:------------:|:--------- |:-------------------------------- |
| nuc@192.168.1.101 | nuc-planner | nuc | ROS Master, Localization, Estimation, P&C, Reverse planner, Behavior Tree |
| ~~nuc3@192.168.1.201~~ | ~~nuc3-yolo~~ | ~~nuc3-yolo~~ | ~~RoadPlane estimation~~ |
| nuc2@192.168.1.200 | nuc2-desktop | nuc3456 | Connected to CAN, runs Ubuntu 18.04 |
| nuc@192.168.1.202 | nuc-pc | nuc | FLOAM and ALiVE Android App |
| nvidia@192.168.1.203 | ubuntu | orin@agx | GPU - AGX Orin runs the YOLOP model |
| jetson@192.168.1.204 | jetson | jetson | GPU - Runs YOLOv5, Tracking, Rear Collision Warning |
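To quickly confirm that every machine in the table is reachable on the car network, a minimal sketch (IPs taken from the table above) is:
```bash!
# Hedged sketch: ping each machine once and report reachability.
for ip in 192.168.1.101 192.168.1.200 192.168.1.202 192.168.1.203 192.168.1.204; do
    if ping -c 1 -W 1 "$ip" > /dev/null 2>&1; then
        echo "$ip reachable"
    else
        echo "$ip NOT reachable"
    fi
done
```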
### List of Hardware
The items below were purchased over the course of hardware builds and sensor fittings on the car and the rickshaw. Anything other than these needs to be borrowed from the DI Lab.
#### Hardware Tools in Garage
- On the old rickshaw:
    - 2 spanners (sizes: 12-13, 14-15)
    - SS (stainless steel) nuts and bolts (all the sensors are bolted with these to prevent rusting)
    - HSS drill bits (use the silver/gold ones for the sheet on the car; the Bosch bits weren't usable)
    - Ceramic fuse
    - Pliers
    - Inch tape
    - Cutter
    - Denter
- In the car:
    - Double-sided tape
    - Black screwdriver
## Outline
- Network setup on the car
- Git setup
- ROS and library installations
- alive-dev setup and compilation
- QGroundControl download
- Sensor checks (LiDAR, camera, GPS/IMU, CAN)
- Loading the systec_can driver
- Clock synchronization with Chrony
## Network Setup
For convenience, we reuse the [velodyne_interface](http://wiki.ros.org/velodyne/Tutorials/Getting%20Started%20with%20the%20Velodyne%20VLP16) connection profile (already used for LiDAR communication) for the [ROS Master-Slave](http://wiki.ros.org/ROS/Tutorials/MultipleMachines) pipeline.
1. Make the connection profile (a sketch using `nmcli` is shown below)
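A minimal sketch with `nmcli`, assuming a static address in the 192.168.1.0/24 subnet matching the table above (e.g. 192.168.1.101 on nuc-planner); the interface name `eno1` is a placeholder, check yours with `ip addr`:
```bash!
# Hedged sketch: create a static-IP wired profile named velodyne_interface.
# Replace "eno1" and the address with the values for this machine.
nmcli connection add type ethernet ifname eno1 con-name velodyne_interface \
    ipv4.method manual ipv4.addresses 192.168.1.101/24
nmcli connection up velodyne_interface
```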


2. Set up ROS Master-Slave
You need [ROS installed](https://hackmd.io/vAJwMddzSa2sfbOgV2JDZw?view#ROS-and-library-Installations) for this to work. Then set the following variables:
- ROS_MASTER_URI --> the ROS Master's URI with port 11311
- ROS_IP --> the present system's IP
For example:
In ~/.bashrc,
```bash!
export ROS_MASTER_URI=http://192.168.1.101:11311
export ROS_IP=192.168.1.203
```
>Don't forget to `source ~/.bashrc`; check with `roscore` on the master that it's working.
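To confirm the Master-Slave link from a slave machine (assuming `roscore` is already running on the master at 192.168.1.101), a quick sketch is:
```bash!
# Hedged sketch: run on a slave after sourcing ~/.bashrc.
rosnode list      # should list the nodes registered with the master
rostopic list     # should list topics without errors
# Note: if topics list but data doesn't arrive, ROS_IP is usually set incorrectly.
```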
## Git Setup
Please go through the [Wiki](https://bitbucket.org/alive_iiitd/alive-dev/wiki/Home) on Bitbucket to get a sense of the repo's update history and workflow.
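After reading the Wiki, clone the repo on each machine. A minimal sketch follows; the SSH clone URL is inferred from the Wiki link above and may need adjusting, and it assumes your SSH key is already added to Bitbucket:
```bash!
# Hedged sketch: clone alive-dev over SSH into the home directory.
cd ~
git clone git@bitbucket.org:alive_iiitd/alive-dev.git
cd alive-dev && git status
```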
## ROS and library Installations
- Get ROS: Follow this for ROS Noetic Desktop Full - http://wiki.ros.org/noetic/Installation/Ubuntu
```bash!
# Common build tools and libraries needed by the stack
sudo apt update && sudo apt install -y \
automake autoconf libpng-dev nano vim wget curl \
zip unzip libtool swig zlib1g-dev pkg-config git-all \
xz-utils python3-mock libpython3-dev libpython3-all-dev \
python3-pip g++ gcc make pciutils cpio liblapack-dev \
liblapacke-dev locales cmake unzip openssh-server \
python3-lxml python-is-python3 libgeographic-dev lsb-release xterm
pip3 install empy
sudo apt-get install -y linux-headers-$(uname -r)
# Add the ROS package repository and key, then install ROS Noetic Desktop Full
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
sudo apt update
sudo apt install -y ros-noetic-desktop-full
source /opt/ros/noetic/setup.bash
echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
source ~/.bashrc
# Build tooling and rosdep initialization
sudo apt install -y python3-rosdep python3-rosinstall python3-rosinstall-generator python3-wstool build-essential python3-catkin-tools
sudo rosdep init
rosdep update
```
## ALiVe Compilation
Follow from [here](https://hackmd.io/lK3OskfTTH24CSH-Xqr6Gg?both#ALiVe-Dependencies)
## Download QGroundControl
https://docs.qgroundcontrol.com/master/en/getting_started/download_and_install.html#ubuntu
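A hedged sketch of the usual Ubuntu steps; check the linked page for the current dependency list and download URL:
```bash!
# Hedged sketch of the standard QGroundControl setup on Ubuntu.
sudo usermod -a -G dialout $USER     # serial port access (log out and back in afterwards)
sudo apt-get remove modemmanager -y  # ModemManager interferes with the autopilot link
# Download QGroundControl.AppImage from the page above, then:
chmod +x ./QGroundControl.AppImage
./QGroundControl.AppImage
```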
## Check Sensors
### Running lidar and camera
On nuc-planner
```bash!
cd alive-dev/sensor_setup
roslaunch all_sensors.launch platform:=car
```
In RViz, set Global Options -> Fixed Frame to `velodyne`.
View the topics `/lidar10x/velodyne_points` (x = 2/3/4). Set the point Style to Points with Size 3 to see them better.
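To check that the LiDARs are actually publishing, a quick rate check is sketched below (topic names taken from above; a VLP-16 typically publishes at roughly 10 Hz):
```bash!
# Hedged sketch: verify the point-cloud rate for each lidar topic.
rostopic hz /lidar102/velodyne_points   # expect around 10 Hz
rostopic hz /lidar103/velodyne_points
rostopic hz /lidar104/velodyne_points
```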
### Running MAVROS for GPS and IMU
On nuc-planner
```bash!
./QGroundControl.AppImage # (or double click)
roslaunch mavros apm.launch
rostopic hz /mavros/imu/data # to check; the rate should be above 20 Hz
```
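A GPS sanity check can be added with the standard MAVROS global position topic (an assumption that the autopilot already has a fix; topic names can differ per setup):
```bash!
# Hedged sketch: check that GPS data is arriving via MAVROS.
rostopic echo -n 1 /mavros/global_position/global   # NavSatFix; lat/lon should be non-zero
rostopic hz /mavros/global_position/global
```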
### Running CAN
ssh into nuc-can (the CAN NUC, nuc2-desktop at 192.168.1.200)
```bash!
sudo ip link add can0 type can
# if it errors here, go to the next section on loading systec_can
sudo -S ip link set can0 type can bitrate 500000
sudo -S ip link set can0 up # green light should blink on the usb can interface connected in the front of the car
rosrun can can_node
# open new terminal
python src/e2o/scripts/e2o_odom.py
rostopic hz /odom_can # to check
# if you want to do autonomous testing; this has to be reset on each run
rosrun e2o e2o_node_auto
# wait 10 seconds before starting PID control
```
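Raw traffic on the bus can also be sanity-checked before starting the ROS nodes, assuming `can-utils` is installed (it may not be on the CAN NUC):
```bash!
# Hedged sketch: inspect the raw CAN interface and traffic.
sudo apt install -y can-utils   # if not already installed
ip -details link show can0      # should show state UP and bitrate 500000
candump can0                    # frames should scroll when the car is powered on
```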
## Loading systec_can
```bash!
cd systec_can # from https://www.systec-electronic.com/en/company/support/driver
make -j4
sudo make firmware_install
sudo make modules_install
sudo modprobe systec_can
```
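After loading the module, a quick verification (standard Linux commands) is:
```bash!
# Confirm the driver and firmware loaded and the interface exists.
lsmod | grep systec_can          # the module should be listed
dmesg | grep -i systec | tail    # look for firmware-load / device messages
ip link show can0                # the can0 interface should now exist
```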
## System Clock Synchronization setup for NUCs and Orins using Chrony
Chrony is a flexible implementation of the **Network Time Protocol (NTP)**. It can synchronize the system clock with NTP servers (and clients). At boot time, Chrony synchronizes the computer's internal clock with higher-stratum NTP servers, a reference clock, or the computer's real-time clock.
The following setup synchronizes the clocks of an **isolated network** of systems. Here, nuc-planner (192.168.1.101) is the host (server), while all other NUCs and Orins are clients. For more information, visit: https://chrony-project.org/doc/4.4/chrony.conf.html
STEP 1: Install ntpdate and chrony on every machine.
```bash!
sudo apt install ntpdate
sudo apt-get install chrony
```
STEP 2: Edit the `/etc/chrony/chrony.conf` file on the server (host computer: nuc-planner). Clear the file and add the following (assuming the 192.168.1.0/24 subnet); the first line lists all the clients' IP addresses. If you are unsure about your subnet, run `ip addr` in a terminal and it will show the IP address as well as the subnet information.
```
initstepslew 1 192.168.1.200 192.168.1.203 192.168.1.204 192.168.1.202
driftfile /var/lib/chrony/chrony.drift
local stratum 8
manual
allow 192.168.1.0/24
smoothtime 50000 0.01
rtcsync
```
STEP 3: Edit the `/etc/chrony/chrony.conf` file on each client (client computers: nuc2-desktop, nuc-pc, nvidia-ubuntu and jetson) and add the following. Here, 192.168.1.101 is the host's IP address. Repeat this step on every client computer.
```
server 192.168.1.101 iburst
driftfile /var/lib/chrony/chrony.drift
allow 192.168.1.0/24
makestep 1.0 3
rtcsync
```
STEP 4: Restart the chrony daemon on the host and on every client after editing its `chrony.conf` file.
```bash!
sudo systemctl restart chrony
```
STEP 5: Validate whether clock synchronization is applied.
```bash!
chronyc sources # in the output, the server IP should appear with a ^* sign (^* 192.168.1.101); a ^? sign means the clock is not synced
```
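Two further checks on a client, using standard chronyc subcommands:
```bash!
chronyc tracking      # "Reference ID" should point to 192.168.1.101
chronyc sourcestats   # offset/jitter statistics per source
```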