# RRC WORK README
#### RGB
Contains the RGB images for each recorded frame of the scene.

#### POSE
```
2.8080697059631348 -1.0843499898910522 5.6978044509887695 0.0 -0.9961946606636047 0.0 0.08715583384037018
```
Contains data in the order x, y, z, q_x, q_y, q_z, q_w, i.e. a position followed by an orientation quaternion.
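A pose line can be parsed into a position vector and an orientation quaternion as in the following sketch (the file path is hypothetical):
```
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical path; each pose file holds one line of 7 floats.
pose = np.loadtxt("pose/000000.txt")       # [x, y, z, q_x, q_y, q_z, q_w]
position, quaternion = pose[:3], pose[3:]

# scipy expects scalar-last (x, y, z, w), which matches the order above.
rotation_matrix = Rotation.from_quat(quaternion).as_matrix()
print(position, rotation_matrix, sep="\n")
```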
#### Depth
Contains the depth information for the scene as a numpy array of size (img_width x img_height).
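A quick way to load and inspect one depth frame (a minimal sketch; the file name is an assumption):
```
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file name; each frame is a (img_width x img_height) numpy array.
depth = np.load("depth/000000.npy")
print(depth.shape, depth.dtype, depth.min(), depth.max())

plt.imshow(depth, cmap="viridis")
plt.colorbar(label="depth")
plt.show()
```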

#### semantic_gt_mapping.json
```
{
    "236": {
        "obj_id": "0_13_236",
        "semantic_id": 236,
        "region_id": "0_13",
        "level_id": "0",
        "label": [
            "stairs",
            16
        ]
    },
```
This means that the object with instance ID 236 is an instance of the category "stairs" (which has category ID 16) and belongs to the 13th room on the 0th floor.
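A small sketch of looking up an instance in this file (key names taken from the excerpt above):
```
import json

with open("semantic_gt_mapping.json") as f:
    mapping = json.load(f)

entry = mapping["236"]
label, category_id = entry["label"]             # "stairs", 16
level, region, _ = entry["obj_id"].split("_")   # "0_13_236" -> floor 0, room 13
print(f"instance {entry['semantic_id']}: {label} (category {category_id}), "
      f"room {region} on floor {level}")
```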
#### SEMANTIC
This contains the ground truth semantic labels for the habitat dataset.

Each file contains a numpy array of size (image width x image height).
Each element is an integer referring to the instance ID to which the pixel belongs. For example, instance 446 belongs to the wall category (this mapping can be found in `semantic_gt_mapping.json`).
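For example, the instance IDs in one frame can be translated to labels via `semantic_gt_mapping.json` (the frame's file name and `.npy` storage are assumptions):
```
import json
import numpy as np

with open("semantic_gt_mapping.json") as f:
    gt_mapping = json.load(f)

# Hypothetical file name; a (img_width x img_height) array of instance IDs.
semantic = np.load("semantic/000000.npy")

for instance_id in np.unique(semantic):
    entry = gt_mapping.get(str(instance_id))
    if entry is not None:
        label, category_id = entry["label"]
        print(instance_id, "->", label, f"(category ID {category_id})")
```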
#### m2f_labelled_rgb
Contains the RGB frames labelled by Mask2Former (m2f); `m2f_panoptic_output` below holds the corresponding machine-readable data.

#### m2f_panoptic_output
This contains machine-readable information about the data stored in `m2f_labelled_rgb`.
The first element of the dataframe is an array of size (img_width x img_height). Each integer is the instance ID of the pixel. These are NOT global instance IDs (as expected, since m2f cannot keep track of instances across frames) but instance IDs local to the image frame.

The second element contains information about the instances themselves. For example, suppose some elements of the first array have the value `13`. This refers to instance ID 13, which belongs to category ID 118; category ID 118 maps to `ceiling-merged`. This mapping is available in `panoptic_coco_categories.json`.
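A sketch of reading one of these files, assuming each frame is pickled with the two elements described above (the exact layout and file name are assumptions), and resolving category IDs through `panoptic_coco_categories.json`:
```
import json
import pandas as pd

# Assumed layout: element 0 is the per-pixel instance-ID array,
# element 1 the per-instance metadata, as described above.
frame = pd.read_pickle("m2f_panoptic_output/000000.pkl")
instance_map, instance_info = frame[0], frame[1]

# panoptic_coco_categories.json is a list of {"id": ..., "name": ...} entries.
with open("panoptic_coco_categories.json") as f:
    categories = {c["id"]: c["name"] for c in json.load(f)}

print(categories[118])  # "ceiling-merged", per the mapping described above
```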

#### framewise_graph_data
#### obj2cls.txt
This is simply an easier-to-read version of `semantic_gt_mapping.json`.

For example, the line
```
451: 4, door
```
means that the object with instance ID 451 belongs to the category "door", and that the category ID for `door` is 4.
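Parsing the file into a dictionary is straightforward (line format taken from the example above):
```
obj2cls = {}
with open("obj2cls.txt") as f:
    for line in f:
        obj_id, rest = line.split(":", 1)
        category_id, label = rest.split(",", 1)
        obj2cls[int(obj_id)] = (int(category_id), label.strip())

print(obj2cls[451])  # (4, 'door')
```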
#### CATEGORY_TO_INT_MAPPING_LAKSH_GT.json / INT_TO_CATEGORY_MAPPING_LAKSH_GT.json
#### CATEGORY_TO_INT_MAPPING_LAKSH_M2F.json / INT_TO_CATEGORY_MAPPING_LAKSH_M2F.json
These four files hold the forward and inverse mappings between category labels and integer IDs, for the ground-truth (`GT`) and Mask2Former (`M2F`) label sets respectively.
#### Processed data
All of the files below have the same size as the occupancy grid.
* obstacle_present_gt.npy
(Figures showing the grid before and after truncating are omitted here.)

xmin, ymin, xmax, ymax can be found using this array as follows:
```
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> df = np.load("obstacle_present_gt.npy")
>>> x_indices, y_indices = np.where(df == 0)
>>> xmin = np.min(x_indices)
>>> xmax = np.max(x_indices)
>>> ymin = np.min(y_indices)
>>> ymax = np.max(y_indices)
>>> print(xmin, xmax, ymin, ymax)
387 863 490 851
>>> plt.imshow(df[xmin:xmax+1, ymin:ymax+1])
```
* color_top_down_height.npy

* color_top_down.npy

* gt_semantic_label.npy

* m2f_semantic_label.npy

* num_objects_mapped.npy
Contains, for each occupancy grid cell, the number of times a pixel was mapped to that cell.
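For instance, this array can be used to mask out grid cells that were never observed (a minimal sketch):
```
import numpy as np

counts = np.load("num_objects_mapped.npy")
observed = counts > 0   # cells that received at least one pixel
print("observed cells:", observed.sum(), "of", counts.size)
```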

* tf_list.npy
Contains the transformation matrices for the poses of each recorded frame.
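A minimal sketch of applying one of these matrices, assuming the array has shape (num_frames, 4, 4) and maps camera coordinates to world coordinates (both are assumptions; verify against the data):
```
import numpy as np

tfs = np.load("tf_list.npy")              # assumed shape: (num_frames, 4, 4)
T = tfs[0]                                # pose of the first recorded frame

p_cam = np.array([0.0, 0.0, 1.0, 1.0])   # homogeneous point 1 m ahead of camera
p_world = T @ p_cam                       # assumed camera -> world convention
print(p_world[:3])
```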

* four_way_graph.pkl
Contains the following 4 keys (a loading sketch follows the list):
1) `grid_cells`: List of all the grid cells that are labelled with the categories of interest
2) `mapping`: A mapping from `node_id` to `cell_id`
3) `edges`: Adjacency list for all the nodes
4) `sitting_freq`: The number of times each grid cell maps to a pixel labelled as belonging to one of the classes of interest
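A loading and traversal sketch, assuming `edges` maps each `node_id` to a list of neighbouring node IDs:
```
import pickle
from collections import deque

with open("four_way_graph.pkl", "rb") as f:
    graph = pickle.load(f)

grid_cells = graph["grid_cells"]
mapping = graph["mapping"]           # node_id -> cell_id
edges = graph["edges"]               # adjacency list per node
sitting_freq = graph["sitting_freq"]

# Breadth-first traversal from node 0 over the adjacency list.
visited, queue = {0}, deque([0])
while queue:
    node = queue.popleft()
    for neighbour in edges[node]:
        if neighbour not in visited:
            visited.add(neighbour)
            queue.append(neighbour)
print(len(visited), "nodes reachable from node 0")
```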