VBGPS drone
==
## Build model (on server)
### Environment
:::info
Under `/Hierarchical-Localization/`
```
conda create --name [conda_server_name] python=3.7.11
conda activate [conda_server_name]
pip install -r requirements.txt
```
:::
### [Colmap](https://colmap.github.io/gui.html) installation
:::info
```
sudo apt-get install \
git \
cmake \
ninja-build \
build-essential \
libboost-program-options-dev \
libboost-filesystem-dev \
libboost-graph-dev \
libboost-system-dev \
libboost-test-dev \
libeigen3-dev \
libflann-dev \
libfreeimage-dev \
libmetis-dev \
libgoogle-glog-dev \
libgflags-dev \
libsqlite3-dev \
libglew-dev \
qtbase5-dev \
libqt5opengl5-dev \
libcgal-dev \
libceres-dev
```
```
git clone https://github.com/colmap/colmap.git
cd colmap
git checkout dev
mkdir build
cd build
cmake .. -GNinja -DCMAKE_CUDA_ARCHITECTURES=70
ninja
sudo ninja install
```
Set `CMAKE_CUDA_ARCHITECTURES` to your GPU's compute capability (e.g. `70` for V100); CMake expects the bare number, not the `sm_` prefix.
If CUDA is not already installed, install the toolkit first (COLMAP needs it at build time):
```
sudo apt-get install -y \
nvidia-cuda-toolkit \
nvidia-cuda-toolkit-gcc
```
:::
### Preparing datasets
:::info
Under `/Hierarchical-Localization/datasets/`
:::
- Create a folder in `datasets/`
- Create three folders in your dataset folder: `db_images/` `images_upright/` `queries/`
- Put all database images into `db_images/`
- Copy an `intrinsic.txt` (from `/1206_drone/queries/intrinsic.txt`) to `queries/`
```
mkdir [datasets_name]
cd [datasets_name]
mkdir db_images/ images_upright/ queries/
cp ../1206_drone/queries/intrinsic.txt queries/
```
- Edit `intrinsic.txt` (change the **width** and **height** to match your images in `./db_images/`)
```
SIMPLE_RADIAL [images width] [images height] 512.015 480 360 -0.00169481
```
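For reference, the values after the width and height are COLMAP's **SIMPLE_RADIAL** camera parameters `f cx cy k` (focal length, principal point, and one radial distortion coefficient). A minimal sketch of how such parameters project a 3D point (the point here is made up for illustration):

```python
# Sketch of COLMAP's SIMPLE_RADIAL projection (params: f, cx, cy, k),
# to illustrate what the numbers in intrinsic.txt mean.
def project_simple_radial(point, f, cx, cy, k):
    x, y, z = point
    u, v = x / z, y / z          # normalize onto the image plane
    r2 = u * u + v * v
    d = 1.0 + k * r2             # one-parameter radial distortion
    return (f * u * d + cx, f * v * d + cy)

# Values taken from the sample intrinsic.txt line above
px, py = project_simple_radial((0.1, 0.2, 1.0), 512.015, 480, 360, -0.00169481)
# ≈ (531.197, 462.394)
```

Note that `cx`/`cy` here (480, 360) look like the image center of a 960x720 image, so for a different resolution they likely need updating as well, not just the width and height fields.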
### SfM (Start to build)
:::info
Under `/Hierarchical-Localization/`
Environment: `conda activate [conda_server_name]`
:::
This constructs the three COLMAP model files (`cameras.bin`, `images.bin`, `points3D.bin`) in `outputs/[datasets_name]/sfm_superpoint+NN/`:
```
python3 -m hloc.pipelines.VBgps.Sfm --name [datasets_name]
```
### Extract global feature
:::info
Under `/Hierarchical-Localization/`
Environment: `conda activate [conda_server_name]`
:::
Extract global features from the database images:
```
python3 -m hloc.pipelines.VBgps.extract_db_global --name [datasets_name]
```
Download these folders:
`/Hierarchical-Localization/outputs/[datasets_name]/sfm_superpoint+NN/`
`/Hierarchical-Localization/datasets/[datasets_name]/db_images/`

---
## Convert the model format
### Convert bin to nvm and edit Coordinate_demo.txt
:::info
Install [COLMAP](https://colmap.github.io/gui.html), [VisualSfM](http://ccwu.me/vsfm/index.html)
:::
Convert the three `.bin` files to `.nvm` with COLMAP:
1. Open **COLMAP** GUI and import model

2. Choose the directory of `/sfm_superpoint+NN/`

3. Press **Open database**

4. Select `database.db` in `/sfm_superpoint+NN/`

5. Select **Images**

6. Choose the directory of `/db_images/`

7. Find your four fundamental images and select them

8. You can read off each one's **qw qx qy qz tx ty tz**

9. Open `/VBGPS_coor_transform/Coordinate_demo.txt` and write down the four image poses

10. Press **Export model as**

11. Choose the **.nvm** format and save it
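COLMAP reports the pose in step 8 as a world-to-camera rotation (quaternion **qw qx qy qz**) plus translation (**tx ty tz**). If what you actually need for `Coordinate_demo.txt` is each camera's center in world coordinates (an assumption; check what `Coordinate_Transform.py` expects), it is `C = -Rᵀt`. A minimal sketch:

```python
# Hedged sketch: convert a COLMAP world-to-camera pose (quaternion + translation)
# into the camera center in world coordinates.
def quat_to_rot(qw, qx, qy, qz):
    # Standard unit-quaternion to rotation-matrix conversion
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def camera_center(qw, qx, qy, qz, tx, ty, tz):
    R = quat_to_rot(qw, qx, qy, qz)
    # C = -R^T t  (column-wise dot products give R^T t)
    return [-(R[0][i]*tx + R[1][i]*ty + R[2][i]*tz) for i in range(3)]

# Identity rotation: the center is just -t
camera_center(1, 0, 0, 0, 1, 2, 3)  # → [-1, -2, -3]
```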

### Convert nvm to ply
1. Open VisualSFM GUI and load the **.nvm** model

2. SfM -> Extra Functions -> Save Current Model

3. Save model as **[model_name].ply**

4. Open the **.ply** file; you may find many lines containing **-1.#IND**

5. Delete all lines containing **-1.#IND**,
and move the **.ply** file to `/VBGPS_client/Examples/Monocular/model/`
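The clean-up in steps 4–5 can also be scripted; a minimal sketch (file names are illustrative):

```python
def strip_ind_lines(lines):
    """Drop every line containing VisualSFM's -1.#IND marker for undefined values."""
    return [ln for ln in lines if "-1.#IND" not in ln]

# Illustrative usage: read the exported model, write a cleaned copy.
# with open("model.ply") as f:
#     cleaned = strip_ind_lines(f)
# with open("model_clean.ply", "w") as f:
#     f.writelines(cleaned)
```

Caveat: after deleting vertex lines, the `element vertex` count in the `.ply` header no longer matches; the downstream `model.out` step presumably tolerates this, but strict PLY readers will not.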

---
## Transform coordinate system
### Environment
:::info
Under `/VBGPS_coor_transform/`
```
conda create --name [conda_transform_name] python=3.9.12
conda activate [conda_transform_name]
pip install -r requirements.txt
```
:::
### Transform
You can edit ***old_coor*** yourself
(it depends on the real-world distances, in meters, between the four fundamental image poses)

Start to transform:
```
python3 Coordinate_Transform.py
```
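The exact algorithm inside `Coordinate_Transform.py` is not documented here; conceptually it maps SfM-model coordinates into the real-world frame defined by ***old_coor***. As a hedged illustration only, here is the simplest such mapping, a uniform scale plus translation with no rotation (the function names and the no-rotation assumption are mine, not the script's):

```python
# Illustrative only: fit a uniform scale and translation that map the four
# fundamental poses from the SfM frame into real-world coordinates.
def fit_scale_translation(model_pts, world_pts):
    n = len(model_pts)
    cm = [sum(p[i] for p in model_pts) / n for i in range(3)]  # model centroid
    cw = [sum(p[i] for p in world_pts) / n for i in range(3)]  # world centroid
    # Scale = ratio of total distances to the respective centroids
    dm = sum(sum((p[i] - cm[i]) ** 2 for i in range(3)) ** 0.5 for p in model_pts)
    dw = sum(sum((p[i] - cw[i]) ** 2 for i in range(3)) ** 0.5 for p in world_pts)
    s = dw / dm
    t = [cw[i] - s * cm[i] for i in range(3)]
    return s, t

def apply_transform(p, s, t):
    return [s * p[i] + t[i] for i in range(3)]
```

A real camera setup also needs a rotation (e.g. a full Umeyama similarity transform), which this sketch deliberately omits.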
You will get an **init.xml**; move it to `/VBGPS_client/Examples/Monocular/config/`:
```
mv init.xml ../VBGPS_client/Examples/Monocular/config/
```
---
## Start to demo
:::info
CUDA == 8.0
OpenCV == 3.x
:::
### Convert ply to txt
:::info
Under `/VBGPS_client/Examples/Monocular/model/`
:::
Make sure the **.ply** file is under `/VBGPS_client/Examples/Monocular/model/`
```
./model.out [model_name].ply
mv model.txt [model_name].txt
```
### Compiling
:::info
Under `/VBGPS_client/`
:::
```
./build.sh
```
The build must complete without any errors.
### Build a config file (test.xml)
:::info
Under `/VBGPS_client/Examples/Monocular/config/`
:::
You can copy `config_1206_drone_rm_IND.xml` as `test.xml`:
```
cp config_1206_drone_rm_IND.xml test.xml
```
Then edit `test.xml`, changing `<MODEL_NAME>` to the path of your model `.txt` file
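The schema of `test.xml` is not shown here, so this fragment is purely hypothetical, but the edit amounts to pointing one tag at the converted model:

```xml
<!-- hypothetical fragment of test.xml; adjust the path to your model -->
<MODEL_NAME>/VBGPS_client/Examples/Monocular/model/[model_name].txt</MODEL_NAME>
```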

### Run server (on server)
:::info
Under `/Hierarchical-Localization/`
```
conda activate [conda_server_name]
```
:::
Run the server against the specific dataset:
```
python3 -m hloc.pipelines.VBgps.pipeline_vbgps --dataset datasets/[datasets_name] --outputs outputs/[datasets_name]
```
Make sure the scene is loaded

### Get Target image pose
:::info
Under `/VBGPS_client/Examples/Monocular/`
:::
```
./client_Ray_get_target ../../Vocabulary/ORBvoc.txt [Target_image_path] config/test.xml
```
Make sure a `target.xml` is created under `/VBGPS_client/Examples/Monocular/config/`
### Run client
:::info
Under `/VBGPS_client/Examples/Monocular/`
:::
Start the client with a test video (replace `test.xml` with your model config):
```
./client_Ray_cropped ../../Vocabulary/ORBvoc.txt video [test_video_path] config/test.xml config/init.xml config/target.xml
```
Start the client and run the demo on the drone:
```
./client_Ray_drone ../../Vocabulary/ORBvoc.txt config/test.xml config/init.xml config/target.xml
```