# MIMO
## MIMO papers
[Reframing fast-chirp FMCW transceivers for future automotive radar](https://kaikutek-my.sharepoint.com/:b:/p/dmitrii_matveichev/EWs8XOCylEJAmx566nCCYD0BAqywPePG-zTkKgE6Xpc_PQ?e=XaWqE1)
## MIMO Radar theory
- [Fundamentals of millimeter wave radar sensors](https://www.ti.com/lit/wp/spyy005a/spyy005a.pdf?ts=1677138955859&ref_url=https%253A%252F%252Fwww.google.com%252F)
- [Guide on MIMO radars theory from TI](https://www.ti.com/lit/an/swra554a/swra554a.pdf?ts=1676694799562&ref_url=https%253A%252F%252Fwww.google.com%252F)
- [TI video](https://kaikutek.sharepoint.com/sites/CrossTeam/Shared%20Documents/Forms/AllItems.aspx?ga=1&id=%2Fsites%2FCrossTeam%2FShared%20Documents%2FNeuromorphic%20Computing%20and%20Engineering%2F%E9%8C%84%E8%A3%BD%5F2022%5F12%5F13%5F00%5F30%5F40%5F476%2Emp4&parent=%2Fsites%2FCrossTeam%2FShared%20Documents%2FNeuromorphic%20Computing%20and%20Engineering)
- [TI training series](https://training.ti.com/mmwave-training-series)
## User guides
[Series of videos explaining the whole TI MIMO radars system](https://www.youtube.com/watch?v=8jhS7R-6OWo&list=PLJAlx-5DOdePomvfyvcM5mrmgxptfYPoa&index=1)
----
### Detailed guides for both boards
[60GHz mmWave Sensor EVMs](https://www.ti.com/lit/ug/swru546e/swru546e.pdf?ts=1673923655787&ref_url=https%253A%252F%252Fwww.google.com%252F)
### DCA1000EVM
[mmWave Sensor Raw Data Capture Using the DCA1000 Board and mmWave Studio](https://training.ti.com/sites/default/files/docs/mmwave_sensor_raw_data_capture_using_dca1000_v02.pdf)
[DCA1000EVM Data Capture Card](https://www.ti.com/lit/ug/spruij4a/spruij4a.pdf)
[DCA1000EVM CLI Software User Guide](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjuxqiS3YD9AhVCyGEKHZrABn8QFnoECB0QAQ&url=https%3A%2F%2Fe2e.ti.com%2Fcfs-file%2F__key%2Fcommunityserver-discussions-components-files%2F1023%2FTI_5F00_DCA1000EVM_5F00_CLI_5F00_Software_5F00_UserGuide.pdf&usg=AOvVaw3mx5qrU1fUHD6opkUhQJt5)
[DCA1000EVM CLI Software Developer Guide](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwj2iK-V3YD9AhWIPHAKHWNTCNoQFnoECB0QAQ&url=https%3A%2F%2Fe2e.ti.com%2Fcfs-file%2F__key%2Fcommunityserver-discussions-components-files%2F1023%2FTI_5F00_DCA1000EVM_5F00_CLI_5F00_Software_5F00_UserGuide.pdf&usg=AOvVaw3mx5qrU1fUHD6opkUhQJt5)
### mmWave Studio
[mmWave SDK User Guide](https://e2e.ti.com/cfs-file/__key/communityserver-discussions-components-files/1023/7801.mmwave_5F00_sdk_5F00_user_5F00_guide.pdf)
## TI configuration tools
[mmWave Demo Visualizer (web demo tool)](https://www.ti.com/lit/ug/swru529c/swru529c.pdf?ts=1675644221890&ref_url=https%253A%252F%252Fwww.google.com%252F)
[mmWaveSensingEstimator](https://dev.ti.com/gallery/view/mmwave/mmWaveSensingEstimator/ver/2.2.0/)
## Project progress
### Description
#### Started with [RAMP-CNN](https://github.com/Xiangyu-Gao/Radar-multiple-perspective-object-detection)
**Organization** [University of Washington](https://github.com/Xiangyu-Gao?tab=repositories)
**Dataset** [CRUW](https://www.cruwdataset.org/) ([repo](https://github.com/Xiangyu-Gao/Radar-multiple-perspective-object-detection)) - center-point annotations on RA maps
**Annotations**
- object center points and classes on RA maps
**Data provided:**
- radar maps (preprocessed)
- camera images
- center point annotations
**Cons**
- no annotation generation code
- no code that computes Gaussian parameters (used in annotation)
- no preprocessing code
- does not provide RAD cubes (so the preprocessing cannot be inferred)
- NN with 100M+ parameters
- no pre-trained weights
**Pros**
- provided code for micro-doppler slices
**What we did**
- Camera-radar synchronization
- RDA cube/slices generation
- radar-camera data recording that can be used with other datasets formats
- depth-camera to RA map transformation
- recreated a simple camera-based annotation (camera detections are simply transferred to the RA map, no CFAR coupling) - *there is a range-discrepancy problem between the depth-camera RA map and the radar RA map*
- trained DANet in place of RAMP-CNN
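The depth-camera-to-RA-map transformation above can be sketched roughly as follows. This is a minimal sketch under assumed conventions: a pinhole depth camera, sensors co-located and aligned (no extrinsic calibration), and illustrative bin counts and field of view, not the values used in the project.

```python
import numpy as np

def depth_to_ra_map(depth, fx, cx, n_range_bins=128, n_az_bins=128,
                    max_range=10.0, fov=np.deg2rad(120)):
    """Project a depth image (meters, HxW) onto a range-azimuth grid.

    fx, cx: horizontal focal length and principal point of the depth camera.
    Returns an occupancy histogram over (range, azimuth) bins.
    """
    h, w = depth.shape
    us = np.arange(w)
    # Back-project each pixel column to a horizontal ray direction.
    x_over_z = (us - cx) / fx                      # tan(azimuth) per column
    az = np.arctan(x_over_z)                       # azimuth per column
    z = depth                                      # forward distance
    x = z * x_over_z[None, :]                      # lateral offset
    rng = np.sqrt(x ** 2 + z ** 2)                 # in-plane range
    valid = (z > 0) & (rng < max_range)
    az_full = np.broadcast_to(az, depth.shape)[valid]
    r_bins = (rng[valid] / max_range * n_range_bins).astype(int)
    a_bins = ((az_full + fov / 2) / fov * n_az_bins).astype(int)
    keep = (a_bins >= 0) & (a_bins < n_az_bins) & (r_bins < n_range_bins)
    ra = np.zeros((n_range_bins, n_az_bins))
    np.add.at(ra, (r_bins[keep], a_bins[keep]), 1.0)
    return ra
```

Note this optical range is not guaranteed to match the radar's RF range, which is one possible source of the range discrepancy noted above.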
#### week#9+ working on MVDS/TMVA-Net + CARRADA
**Organization** [valeo.ai](https://github.com/valeoai)
**Dataset** [carrada](https://github.com/valeoai/carrada_dataset)
**Annotations - provided in RA and RD maps**
- sparse points
- bbox
- semantic segmentation (dense points)
**Data provided:**
- RA, RD, AD slices (preprocessed and raw)
- RAD raw cubes
- camera images
- annotations
**Cons:**
- does not provide code for radar image preprocessing or RAD cube generation (solvable, since it is just three FFTs, one along each dimension)
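As noted above, the RAD cube is just three FFTs applied to the raw ADC data; a minimal sketch is below. The window choice, zero-padding, and dimension ordering are assumptions for illustration - CARRADA's exact conventions may differ.

```python
import numpy as np

def raw_to_rad_cube(adc):
    """adc: complex array of shape (n_samples, n_chirps, n_antennas).

    Range FFT over fast-time samples, Doppler FFT over chirps,
    angle FFT over the virtual antenna array.
    """
    n_samples, n_chirps, n_ant = adc.shape
    # Range: FFT along fast-time samples (windowed to reduce sidelobes).
    win = np.hanning(n_samples)[:, None, None]
    rng = np.fft.fft(adc * win, axis=0)
    # Doppler: FFT along slow time (chirps), centered with fftshift.
    dop = np.fft.fftshift(np.fft.fft(rng, axis=1), axes=1)
    # Angle: zero-padded FFT over antennas for finer azimuth bins.
    rad = np.fft.fftshift(np.fft.fft(dop, n=64, axis=2), axes=2)
    return rad  # (range, Doppler, angle) cube

# RA and RD slices are then magnitude projections of the cube, e.g.:
# ra = 20 * np.log10(np.abs(rad).sum(axis=1) + 1e-12)  # sum over Doppler
# rd = 20 * np.log10(np.abs(rad).sum(axis=2) + 1e-12)  # sum over angle
```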
**Pros:**
- provided RAD cubes + descriptions on GitHub are enough for preprocessing code reimplementation
- provided fully working annotation code (tested on their data) - generates annotations based on:
  - camera images
  - radar images
  - Direction of Arrival information (mean-shift-clustered CFAR detections)
- 5M-parameter model - takes about 5 days to train on the CARRADA dataset
**What we did:**
- trained and evaluated TMVA-Net model
- tested the annotation generation code with the CARRADA dataset (all three annotation types are generated at the same time)
- sped up annotation generation 10x
- reconstructed 99% of the RAD cube generation and slice preprocessing
- implemented CFAR detections on RA and RD images
- generated DoA
- tried inference on our data (some good and bad results)
- thanks to the carrada annotation code we now know how to fix the "*range-discrepancy problem between depth camera RA and radar RA*" noted above
- generated full data in carrada format
**Problems that were fixed for annotation generation (week #11)**
- RA, RD map orientation
- reimplemented the logarithmic transformation for RA, RD, DA maps - the output must match exactly, because the annotation-generation code uses classical DSP algorithms that cannot be retrained to tolerate a different scaling
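The CFAR detection step mentioned above can be sketched as a 2-D cell-averaging CFAR on an RD (or RA) magnitude map. The guard/training window sizes and the threshold factor below are illustrative assumptions, not the project's actual parameters.

```python
import numpy as np

def ca_cfar_2d(power, guard=2, train=4, scale=4.0):
    """2-D cell-averaging CFAR.

    power: linear-power map (e.g. |RD|^2). Returns a boolean detection mask.
    For each cell, the noise level is the mean power of a training ring
    around the cell, excluding a guard region; a cell is a detection when
    power > scale * noise.
    """
    k = guard + train
    h, w = power.shape
    mask = np.zeros((h, w), dtype=bool)
    for i in range(k, h - k):
        for j in range(k, w - k):
            window = power[i - k:i + k + 1, j - k:j + k + 1]
            guard_cells = power[i - guard:i + guard + 1,
                                j - guard:j + guard + 1]
            # Average over the training ring only (window minus guard block).
            noise = (window.sum() - guard_cells.sum()) / \
                    (window.size - guard_cells.size)
            mask[i, j] = power[i, j] > scale * noise
    return mask
```

Clustering the resulting detections (e.g. with mean shift, as in the CARRADA pipeline) then yields the DoA points used for annotation.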
### Plot (to be updated)
```mermaid
%%{init: {'theme': 'neutral', 'themeVariables': {'fontSize': '46px'}}}%%
%% default, base, dark, forest, neutral, night %%
graph LR
1-->7
3-->7
5-->7
2-->7
7-->8
8-->9
1("fa:fa-check Retrieve raw data from radar <br/>fa:fa-check Synchronize radar data with depth camera <br/>fa:fa-spinner Check if raw radar data is correct <br>fa:fa-spinner Find optimal radar parameters<br>fa:fa-check Postprocess raw data into RVA<br>(<b>Dima</b>)")
style 1 fill:#0f0, stroke:black, stroke-width:1px,text-align:left
2("fa:fa-check run RAMP-CNN on sample data<br>fa:fa-check fa:fa-spinner Train and evaluate a model (DANet) <br>on an open source dataset<br>(<b>Rizard</b>)<br>fa:fa-check fa:fa-spinner reimplement DANet<br>(<b>Dolly</b>)")
style 2 fill:#0ff, stroke:black, stroke-width:1px,text-align:left
3("fa:fa-spinner Transform depth camera to RA map <br><i>required for radar data labeling with camera</i><br>(<b>Johnny</b>)")
style 3 fill:#0ff, stroke:black, stroke-width:1px,text-align:left
5("fa:fa-check check available MIMO radar datasets<br> and open-source papers<br><i>required to choose dataset format</i><br>(<b>Rizard, Dolly</b>)")
style 5 fill:#0f0, stroke:black, stroke-width:1px,text-align:left
7("Record <br>a small dataset<br>(raw radar+<br>depth camera)")
8("Train and <br>evaluate<br>chosen model")
9("Record <br>a big dataset<br>(raw radar+<br>depth camera)")
101(fa:fa-spinner In the process)
style 101 fill:#0ff, stroke:black, stroke-width:1px,text-align:left
102(fa:fa-check Done)
style 102 fill:#0f0, stroke:black, stroke-width:1px,text-align:left
103(Not started yet)
```
## Datasets and Models
### CRUW - Object detection
[website](https://www.cruwdataset.org/resources) [github](https://github.com/yizhou-wang/cruw-devkit)
The first paper in the RODNet/RAMP-CNN series to compute the RVA radar cube: [paper](https://arxiv.org/pdf/1912.12566.pdf) [github](https://github.com/Xiangyu-Gao/mmWave-radar-signal-processing-and-microDoppler-classification)
| Model | Paper | Github |
| -------- | -------- | -------- |
| RAMP-CNN | [RAMP-CNN](https://arxiv.org/pdf/2011.08981.pdf) | [github](https://github.com/Xiangyu-Gao/Radar-multiple-perspective-object-detection) |
| RODNet | [RODNet](https://arxiv.org/pdf/2102.05150.pdf) | [github](https://github.com/yizhou-wang/RODNet)+[code](https://github.com/Xiangyu-Gao/Radar-multiple-perspective-object-detection/tree/main/model)|
| DANet | [DANet](https://drive.google.com/file/d/1PcAkcUv1E3yevS8oKBn2wc7CYBI7BbYt/view?usp=share_link)| [github](https://github.com/jb892/ROD2021_Radar_Detection_Challenge_Baidu/issues/1#issuecomment-1098162875) |
### CARRADA - Object detection (point and bbox), Semantic segmentation
[github](https://github.com/valeoai/carrada_dataset) [paper](https://arxiv.org/abs/2005.01456)
[Models comparison](https://hackmd.io/@DollyChou/S1LJ-yZ6j)
[Multi-View Radar Semantic Segmentation](https://github.com/valeoai/MVRSS)
### Other Potential Applications
[Concealed Object detection](https://arxiv.org/pdf/2111.00551.pdf)
[Repository with a list of different projects](https://github.com/ZHOUYI1023/awesome-radar-perception/blob/main/README.md)
## [MCD-Gesture Dataset](https://github.com/DI-HGR/cross_domain_gesture_dataset) - gesture classification
[paper](https://arxiv.org/pdf/2111.06195.pdf)
## [HIBER](https://github.com/wuzhiwyyx/HIBER/tree/master) - Human Indoor Behavior
[RFGAN](https://arxiv.org/pdf/2112.03727.pdf)
[RFMask](https://arxiv.org/abs/2201.10175)