---
title: PLACES FactSheet
tags: PICAE
slideOptions:
theme: white
transition: 'fade'
---
# PLACES FACTSHEET
---
**Model Name**
Places365 CNN classifier
---
**Overview**
This document is a FactSheet accompanying the Places365-CNNs model released by MIT CSAIL.
---
**Purpose**
This model can be used for scene recognition and as a source of generic deep scene features for visual recognition tasks.
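
For illustration, a minimal sketch of loading one of the pretrained Places365 CNNs (the ResNet18 variant) with PyTorch. The checkpoint URL and layout follow the repository's basic demo script; treat them as assumptions to verify against the repo.

```python
# Sketch: load a pretrained Places365 CNN (ResNet18 variant).
# URL and checkpoint layout follow the repo's demo script; verify before use.
import torch
import torchvision.models as models

arch = 'resnet18'
url = f'http://places2.csail.mit.edu/models_places365/{arch}_places365.pth.tar'

model = models.__dict__[arch](num_classes=365)
checkpoint = torch.hub.load_state_dict_from_url(url, map_location='cpu')
# Checkpoint keys carry a 'module.' prefix (saved under DataParallel); strip it.
state_dict = {k.replace('module.', ''): v
              for k, v in checkpoint['state_dict'].items()}
model.load_state_dict(state_dict)
model.eval()  # inference mode: scene recognition or feature extraction
```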
---
**Intended Domain**
This model is intended for use in the image processing and classification domain.
---
**Training Data**
The model is trained on the public Places365-Standard database, the core subset of the Places2 database used to train the Places365-CNNs.
---
**Model Information**
+ The CNN model has been adapted from frame processing to video processing, so it outputs a single 365-dimensional logit vector for each video; one possible aggregation is sketched below.
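
The FactSheet does not specify how per-frame outputs are combined into a video-level vector; the sketch below assumes per-frame logits are simply averaged, which is a common but here hypothetical choice.

```python
# Hypothetical sketch: aggregate per-frame logits into one video-level
# vector. Averaging is an assumption; the aggregation used is not
# specified in this FactSheet.
import torch

def video_logits(model: torch.nn.Module, frames: torch.Tensor) -> torch.Tensor:
    """frames: (num_frames, 3, 224, 224) batch of preprocessed frames."""
    with torch.no_grad():
        per_frame = model(frames)   # (num_frames, 365) per-frame logits
    return per_frame.mean(dim=0)    # (365,) video-level logit vector
```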
---
**Inputs and Outputs**
+ **Input**: Either the URL of a video or a path to a directory of frames saved on disk as .png files.
+ **Output**: A 365-dimensional logit vector which, after a softmax is applied (as sketched below), represents the probability of each scene class.
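
As a sketch of this output contract, applying a softmax to the 365-dimensional logit vector yields per-class probabilities. The dummy logits and the mapping to `categories_places365.txt` (the label file shipped in the repository) are illustrative.

```python
# Sketch: turn the 365-dimensional logit vector into class probabilities.
import torch

def class_probabilities(logits: torch.Tensor) -> torch.Tensor:
    """logits: (365,) video-level logit vector from the model."""
    return torch.softmax(logits, dim=0)

probs = class_probabilities(torch.randn(365))  # dummy logits for illustration
top5_prob, top5_idx = probs.topk(5)
# Human-readable labels can be read from categories_places365.txt in the repo.
for p, i in zip(top5_prob.tolist(), top5_idx.tolist()):
    print(f'{p:.3f} -> class index {i}')
```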
---
**Performance Metrics**
| Metric | Value |
| ------:| -----:|
| Top-1 Accuracy | 0.6366 |
| Top-5 Accuracy | 0.9099 |
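
For reference, a minimal sketch of how top-1/top-5 accuracy can be computed from logits and ground-truth labels; this is the standard definition, not the project's own evaluation script.

```python
# Sketch: standard top-k accuracy over a batch of logits and labels.
import torch

def topk_accuracy(logits: torch.Tensor, labels: torch.Tensor, k: int) -> float:
    """logits: (N, 365), labels: (N,). Returns the fraction of samples
    whose true label is among the k highest-scoring classes."""
    topk = logits.topk(k, dim=1).indices             # (N, k) class indices
    hits = (topk == labels.unsqueeze(1)).any(dim=1)  # (N,) per-sample hit
    return hits.float().mean().item()
```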
---
**Bias**
The training set of Places365-Standard has ~1.8 million images from 365 scene categories, with at most 5,000 images per category. Potential bias introduced by the choice of scene categories has not been evaluated. Careful attention should be paid if this model is incorporated into an application where bias in scene detection is potentially sensitive or harmful.
---
**Robustness**
No robustness evaluation has been performed.
---
**Domain Shift**
No domain shift evaluation has been performed.
---
**Test Data**
The original data has 1,803,460 training images, with the number of images per class varying from 3,068 to 5,000. The validation set has 50 images per class and the test set has 900 images per class.
---
**Poor Conditions**
+ When nature images are foggy, the model can conflate indoor and outdoor labels.
+ When image quality is low, the model's predictions become noisier.
---
**Explanation**
While the model architecture is well documented in the [published paper](http://places2.csail.mit.edu/PAMI_places.pdf), the model is still a deep neural network, which largely remains a black box when it comes to explaining its results and predictions.
---
**Contact Information**
Any queries related to the Places365 classifier model can be addressed via the [model GitHub repo](https://github.com/CSAILVision/places365).
---