Camera calibration
===

Hey Bunny! Put on your glasses!

#### TODO
* download the [calibration pattern](https://drive.google.com/open?id=0BwY-lpO6tzxHMW13d0lQU3V0d3c) first and print it on A4 or letter-size paper
* intrinsic calibration
* extrinsic calibration
* start camera node
* test intrinsic calibration
* test extrinsic calibration

### Intrinsic calibration

##### In laptop
```
laptop $ sudo apt-get install ros-kinetic-camera-calibration
laptop $ sudo vim /etc/ssh/ssh_config
```
Change `#ForwardX11 no` to `ForwardX11 yes`.

**ssh to your pi**
```
bunnybot $ sudo apt-get install ros-kinetic-camera-calibration
bunnybot $ cd ~/duckietown
bunnybot $ source environment.sh
bunnybot $ source set_ros_master.sh alphaduck
bunnybot $ roslaunch duckietown intrinsic_calibration.launch veh:=alphaduck
```
*Note: do not run the above inside byobu, because the calibrator needs access to the screen.*

A window will pop up. Grab your bunnybot, point the camera towards the checkerboard, and move it around the board. Whenever the camera sees all corners of the checkerboard, it automatically collects a sample. For a good calibration we need to capture diverse views (far/near, rotation, tilt), for example:

![dlr-toolbox](https://i.imgur.com/wfQB77O.png)

As you move the car, you will see the four bars on the upper right side grow. Each bar shows the observed range of the checkerboard in the camera's field of view.

![](https://i.imgur.com/wj6Yk1x.jpg)

Once you have collected enough images, all bars on the upper right side will be *green* and the **CALIBRATE** button will be enabled. Press **CALIBRATE** to run the calibration. Depending on the number of images collected, it may take about a minute. During the calibration the window might be greyed out; just wait. Afterwards it will show the rectified (i.e. undistorted) video.

If you are satisfied with the calibration, press the **COMMIT** button. The calibration file is automatically saved on your car (not on your laptop), at:
```
~/duckietown/catkin_ws/src/duckietown/config/baseline/calibration/camera_intrinsic/alphaduck.yaml
```
To find the calibration parameter file:
```
bunnybot $ cd ~/duckietown/catkin_ws/src/duckietown/config/baseline/calibration/camera_intrinsic
bunnybot $ ls
```
You will see the *alphaduck.yaml* file.
```
bunnybot $ cat alphaduck.yaml
```
This prints the parameters stored in your *alphaduck.yaml*.

**Copy the camera_intrinsic file from your bunnybot to your laptop.**

##### In laptop
```
laptop $ cd ~/duckietown/catkin_ws/src/duckietown/config/baseline/calibration/camera_intrinsic/
laptop $ scp ubuntu@alphaduck:~/duckietown/catkin_ws/src/duckietown/config/baseline/calibration/camera_intrinsic/alphaduck.yaml alphaduck.yaml
```
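You can also sanity-check the intrinsic file outside ROS. It follows the standard ROS camera_info YAML layout, so you can load the camera matrix and distortion coefficients with plain Python and undistort a saved frame yourself. The snippet below is a minimal sketch, not part of the Duckietown stack; it assumes the standard camera_info field names and a test image *frame0000.jpg* (image_view saves clicked frames under that name, as described in the test section below).
```python
# check_intrinsics.py -- a minimal sanity-check sketch, not part of the
# Duckietown stack. Assumes alphaduck.yaml uses the standard ROS
# camera_info field names ("camera_matrix", "distortion_coefficients").
import cv2
import numpy as np
import yaml

with open("alphaduck.yaml") as f:
    calib = yaml.safe_load(f)

# Rebuild the 3x3 camera matrix K and the distortion vector D.
K = np.array(calib["camera_matrix"]["data"]).reshape(3, 3)
D = np.array(calib["distortion_coefficients"]["data"])

# Undistort a frame saved from image_view; straight edges of the
# checkerboard should come out straight in the result.
img = cv2.imread("frame0000.jpg")
cv2.imwrite("frame0000_rect.jpg", cv2.undistort(img, K, D))
```
If the checkerboard edges still look curved in *frame0000_rect.jpg*, redo the intrinsic calibration with more varied views.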
### Extrinsic calibration

Place your bunnybot like this:

![](https://i.imgur.com/FfSSZX8.jpg)

*Note that the axle of the wheels is aligned with the y axis.*

**IMPORTANT**: *put a uniform white wall behind the checkerboard to block out any background clutter.*

**ssh to your pi**

##### In pi
```
bunnybot $ byobu
```
terminal #1: launch roscore and the camera node.
```
bunnybot $ roslaunch duckietown camera.launch veh:=alphaduck
```
terminal #2: run ground_projection.
```
bunnybot $ roslaunch ground_projection ground_projection.launch veh:=alphaduck local:=1
```

##### In laptop
Open a terminal:
```
laptop $ cd ~/duckietown
laptop $ source environment.sh
laptop $ source set_ros_master.sh alphaduck
laptop $ rostopic list
```
It should look like this:
```
/alphaduck/camera_node/camera_info
/alphaduck/camera_node/image/compressed
/alphaduck/camera_node/image/raw
/alphaduck/ground_projection/lineseglist_out
/alphaduck/line_detector_node/segment_list
/rosout
/rosout_agg
```
The ground_projection node has two services. They are not used during operation; they just provide a command-line interface for triggering the extrinsic calibration (and for debugging).
```
laptop $ rosservice list
```
It will show:
```
...
/alphaduck/ground_projection/estimate_homography
/alphaduck/ground_projection/get_ground_coordinate
...
```
The next step is to estimate a homography. Execute the following command:

##### In laptop
```
laptop $ rosservice call /alphaduck/ground_projection/estimate_homography
```
Once the homography is estimated, it is automatically saved to the *alphaduck.yaml* file. Since we launched **ground_projection with local:=1**, the node runs on **your laptop**, so the resulting *alphaduck.yaml* ends up on your laptop too, at:
```
~/duckietown/catkin_ws/src/duckietown/config/baseline/calibration/camera_extrinsic/alphaduck.yaml
```
Don't forget to copy *alphaduck.yaml* **from your laptop to your bunnybot** using the scp command:
```
$ scp <source> <destination>
e.g. $ scp <file> remoteUser@remoteIP:<remote-path>
```

**Copy the camera_extrinsic file from your laptop to your bunnybot.**

##### In laptop
```
laptop $ scp ~/duckietown/catkin_ws/src/duckietown/config/baseline/calibration/camera_extrinsic/alphaduck.yaml ubuntu@alphaduck:~/duckietown/catkin_ws/src/duckietown/config/baseline/calibration/camera_extrinsic/alphaduck.yaml
```

***Alright, good job! Time to check!***
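If you are curious what ground_projection does with the homography it just estimated, the idea is a single matrix multiplication: a 3x3 homography H maps a rectified image pixel to a point on the ground plane (z = 0). The sketch below is illustrative only; it assumes the extrinsic YAML stores H as a flat 9-element list under a `homography` key and that H maps pixel coordinates (u, v) directly to ground coordinates in meters. Check your *alphaduck.yaml* for the actual field name and convention.
```python
# ground_project_sketch.py -- an illustrative sketch of the projection
# that ground_projection performs. Assumptions (verify against your
# alphaduck.yaml): the file stores a 3x3 homography H as a flat list
# under "homography", and H maps rectified pixels (u, v) to the ground.
import numpy as np
import yaml

with open("alphaduck.yaml") as f:
    H = np.array(yaml.safe_load(f)["homography"]).reshape(3, 3)

def pixel_to_ground(u, v):
    """Apply H to a homogeneous pixel and de-homogenize the result."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w  # ground coordinates in meters on the z = 0 plane

# Example: a pixel near the bottom-center of a 640x480 image should map
# to a ground point close to the robot.
print(pixel_to_ground(320, 400))
```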
### Test intrinsic calibration

Compare */image_mono* and */image_rect*, i.e. compare the distorted and undistorted lines of a checkerboard. Put your car as shown in the figure below.

![](https://i.imgur.com/aT9Q5Mv.jpg)

**ssh to your pi**
```
bunnybot $ byobu
```
terminal #1: run the camera node.
```
bunnybot $ roslaunch duckietown camera.launch veh:=alphaduck
```
terminal #2: the launch-file parameter *rectify:=true* applies the intrinsic calibration.
```
bunnybot $ roslaunch duckietown_unit_test ground_projection_test1.launch veh:=alphaduck rectify:=true
```
To check the result, we run the *image_view* node. First open the original image.

##### In laptop
```
laptop $ rosrun image_view image_view image:=/alphaduck/camera_node/image_mono
```
**If you right-click on the image, it is saved as frame0000.jpg.**

Likewise, open the rectified image:
```
laptop $ rosrun image_view image_view image:=/alphaduck/camera_node/image_rect
```
![intrinsicCALresult](https://i.imgur.com/s0Rc6ZA.jpg)

### Test extrinsic calibration

**ssh to your pi**
```
bunnybot $ byobu
```
terminal #1: run the camera node.
```
bunnybot $ roslaunch duckietown camera.launch veh:=alphaduck
```
terminal #2: launch ground_projection with *rectify:=true*.
```
bunnybot $ roslaunch duckietown_unit_test ground_projection_test1.launch veh:=alphaduck rectify:=true
```
To make sure the estimated homography is correct, run *test_projection.py*:

##### In laptop
```
laptop $ cd ~/duckietown
laptop $ source environment.sh
laptop $ source set_ros_master.sh alphaduck
laptop $ rosrun ground_projection test_projection.py alphaduck
```
It shows an image from the current camera node and prints a ground coordinate whenever you **click** a point in the image.

![extrinsicCALresult](https://i.imgur.com/Dx8ymFc.jpg)
![](https://i.imgur.com/K9ljdRb.jpg)

The ground-truth coordinates of the six marked points are (in meters):
```
1. (0.191,  0.093, 0.0)
2. (0.191,  0.0,   0.0)
3. (0.191, -0.093, 0.0)
4. (0.315,  0.093, 0.0)
5. (0.315,  0.0,   0.0)
6. (0.315, -0.093, 0.0)
```
Click the corresponding points 1 through 6 with your mouse and compare the estimated ground points with the ground truth (a small script for quantifying the errors is sketched at the end of this page). Note that the closer points (1, 2, 3) have smaller errors (about 1~2 millimeters), while the farther points (4, 5, 6) have relatively larger errors (about 1~2 centimeters). If your estimated ground coordinates are close to the ground truth, your intrinsic and extrinsic calibration is done properly.

Press *ESC* to exit the test.

***Hooray!!! Now your bunny can see carrots correctly!!! Go get some chocolate cookies and coffee!!!***
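As mentioned above, you can quantify the test errors instead of eyeballing them. The script below is only a sketch: the `estimated` values are hypothetical placeholders that you would replace with the coordinates *test_projection.py* printed for your clicks on points 1 through 6.
```python
# calibration_error.py -- a sketch for quantifying the extrinsic test.
# The "estimated" values below are placeholders, NOT real data; replace
# them with the coordinates printed by test_projection.py for points 1-6.
import numpy as np

ground_truth = np.array([
    [0.191,  0.093, 0.0], [0.191, 0.0, 0.0], [0.191, -0.093, 0.0],
    [0.315,  0.093, 0.0], [0.315, 0.0, 0.0], [0.315, -0.093, 0.0],
])

estimated = np.array([  # placeholder click results
    [0.192,  0.094, 0.0], [0.190, 0.001, 0.0], [0.192, -0.091, 0.0],
    [0.303,  0.101, 0.0], [0.318, 0.012, 0.0], [0.325, -0.085, 0.0],
])

# Euclidean error per point, reported in millimeters; expect roughly
# 1~2 mm for the near row (points 1-3) and 1~2 cm for the far row (4-6).
errors_mm = np.linalg.norm(estimated - ground_truth, axis=1) * 1000.0
for i, err in enumerate(errors_mm, start=1):
    print(f"point {i}: {err:.1f} mm")
```

###### tags: `duckietown-bunny`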