Mask R-CNN is a deep learning model for instance segmentation: it detects each object in an image and generates a pixel-level mask for every detected instance, extending Faster R-CNN's bounding-box detection with a parallel mask-prediction branch.
In 2017, the engineers at Matterport, a spatial data company, made their implementation of Mask R-CNN open-source on GitHub.
They subsequently published a blog post explaining how Mask R-CNN works and demonstrating how to train a model from scratch. Take some time to read and understand the article before you proceed to setting up Mask R-CNN:
If you are unable to access the article using the link, go to the author's page on Medium (https://medium.com/@waleedka) and click on the article titled "Splash of Color: Instance Segmentation with Mask R-CNN."
Installation overview
- Install Miniconda (Intel x86_64 version, even on Apple Silicon).
- Create and activate a maskrcnn conda environment.
- Clone the Mask_RCNN repository and download the COCO weights file.
- Install the dependencies with pip3 and install the package with setup.py.
- Set up the datasets directory and train the model with unlabeled_arms.py.
Download the Miniconda installer for macOS:

curl -LO https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-x86_64.sh
IMPORTANT! Even if you have an Apple Silicon chip (M1 or M2), you'll need to install the Intel x86_64 version of Miniconda. For your own sanity, DO NOT INSTALL THE APPLE M1 VERSION. Why? Many conda packages do not yet have M1 versions. Fortunately, Apple Silicon Macs include a translation layer called Rosetta, which makes it possible to run x86_64 apps. Eventually this will no longer be necessary, but for now, stick with the Intel x86_64 version.
Run the installer:

bash Miniconda3-latest-*.sh
- Review and accept the license agreement.
- When prompted to select an installation directory, hit enter to install in your home directory.
- Initialize Miniconda3 by saying yes to the prompt.
Then reload your shell configuration so the conda command becomes available:

source ~/.bash_profile || source ~/.bashrc
If using zsh on a Mac, you may need to instead run:
source ~/.zshrc
Or simply open a new terminal session.
Create a conda environment called maskrcnn with Python 3.6.13 and Jupyter Notebook* installed from the conda-forge channel. Enter y when asked if you want to continue with the installation.

conda create -n maskrcnn -c conda-forge python=3.6.13 notebook
*FYI, Jupyter Lab provides a much better user experience than Jupyter Notebook, but it requires Python 3.6.3. Installing Python 3.6.3 rather than 3.6.13 might cause dependency issues with the other packages that we will install in the subsequent steps, but I don't recall if I tried it previously. I'll test that out and revise accordingly. The command to install Jupyter Lab is:

conda install -c conda-forge jupyterlab

(i.e., replace notebook with jupyterlab)
Activate the environment:

conda activate maskrcnn

Later, if you wish to deactivate the environment and return to your base environment, you can use the command conda deactivate, but for now keep the maskrcnn environment activated.
Clone the Mask_RCNN repository from your home directory:

git clone https://github.com/jenbow/Mask_RCNN.git
This will create a directory called Mask_RCNN in your home directory with the following structure:
The one file still missing from the directory is the COCO weights file that we will use to initialize transfer learning. At 258 MB, the file was too large to upload to our own Mask R-CNN repo, so we will download it from Matterport's GitHub.
cd Mask_RCNN/
curl -LO https://github.com/matterport/Mask_RCNN/releases/download/v2.0/mask_rcnn_coco.h5
Confirm that the COCO file was properly downloaded:
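One way to confirm the download is to check the file size. Here is a small sketch (the helper name and 200 MB threshold are my own; the idea is just to catch a truncated download or an HTML error page saved under the weights filename):

```python
import os

def check_weights(path, min_mb=200):
    """Warn if the weights file looks like a truncated download (~258 MB expected)."""
    size_mb = os.path.getsize(path) / 1e6
    ok = size_mb >= min_mb
    print(f"{path}: {size_mb:.0f} MB -> {'OK' if ok else 'incomplete download?'}")
    return ok

# e.g. check_weights("mask_rcnn_coco.h5")
```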
Install the required Python packages:

pip3 install -r requirements.txt
If you encounter any error messages at this step, please copy or screenshot the entire error message and email it to Jennifer.
Then install the Mask R-CNN package:

python setup.py install
In the future, if you modify or replace any files in Mask_RCNN/mrcnn (files shown below), you will need to rerun python setup.py install to implement the changes.
In Mask_RCNN/datasets you will find 3 subdirectories: all, train, and val. The train and val directories each contain one sample image for testing purposes. You may replace these samples with your own files, but to successfully execute the training and validation scripts, the directory structure below must be retained. Specifically, train and val must each contain at least one image. The images should be formatted as image-name.JPG. Each image must be nested within a subdirectory of the same name (e.g., image-name) and must be accompanied by a JSON annotation file of the same name, appended with _SUMMARY.txt (e.g., image-name_SUMMARY.txt). The all directory should contain copies of the contents of train and val.
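Because a missing or misnamed file is easy to overlook, it can help to check the layout before training. A sketch of a checker, assuming the pairing convention described above (the function name is my own):

```python
import os

def validate_split(split_dir):
    """Return a list of problems with a train/ or val/ split directory.

    Each image subdirectory <name>/ is expected to contain <name>.JPG
    and <name>_SUMMARY.txt, per the layout described above.
    """
    problems = []
    for name in sorted(os.listdir(split_dir)):
        sub = os.path.join(split_dir, name)
        if name.startswith(".") or not os.path.isdir(sub):
            continue  # skip hidden entries such as .DS_Store
        if not os.path.isfile(os.path.join(sub, name + ".JPG")):
            problems.append(f"{name}: missing {name}.JPG")
        if not os.path.isfile(os.path.join(sub, name + "_SUMMARY.txt")):
            problems.append(f"{name}: missing {name}_SUMMARY.txt")
    return problems

# e.g. validate_split("Mask_RCNN/datasets/train") -> [] if the layout is correct
```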
NOTE: If you wish to test the validation script on images that are not annotated, you can do so, but each image in val must still be accompanied by a _SUMMARY.txt file, else the validation script will fail. While the files needn't contain annotation coordinates, they cannot be completely empty. At a minimum, each annotation file in val must contain the following:
{
  "originalJpeg": "image-name.JPG",
  "rotationDegrees": 0,
  "individuals": []
}
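If you have many unannotated images in val, writing these stub files by hand gets tedious. A sketch of a helper that generates the minimal file next to each image (the helper name is my own; it writes exactly the fields shown above):

```python
import json
import os

def write_minimal_summary(image_path):
    """Write a minimal <name>_SUMMARY.txt next to <name>.JPG.

    Contains only the required fields, with an empty "individuals"
    list (i.e., no annotations).
    """
    stem, _ = os.path.splitext(image_path)
    summary = {
        "originalJpeg": os.path.basename(image_path),
        "rotationDegrees": 0,
        "individuals": [],
    }
    out_path = stem + "_SUMMARY.txt"
    with open(out_path, "w") as f:
        json.dump(summary, f, indent=2)
    return out_path
```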
FYI, I am fairly certain that "image-name.JPG" can be replaced with "null", but I want to confirm that it doesn't cause any issues. I'll revise this section once I check.
Navigate to the arms sample directory and launch training:

cd samples/arms
python unlabeled_arms.py train --dataset=../../datasets/arms --weights=coco --epochs=30 --steps=100
This may be operator error on my part, but when I run unlabeled_arms.py, the code grabs the names of hidden files along with the file names that we actually want, resulting in an error like:

NotADirectoryError: [Errno 20] Not a directory: '../../datasets/arms/train/.DS_Store/.DS_Store_SUMMARY.txt'
My slap-dash workaround was to add a couple lines of code to my local copy of unlabeled_arms.py to delete the .DS_Store files in the train and val directories, if they exist, before attempting to train the model. However, I haven't made those changes to the scripts in GitHub, so my apologies if you encountered the same issue. I'll look into it a bit more, but in the meantime, I just deleted those hidden files after getting the error:
rm ../../datasets/arms/val/.DS_Store
rm ../../datasets/arms/train/.DS_Store
Or if you know what the issue is, please educate me!
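For what it's worth, one likely culprit is that the script builds its file list with os.listdir, which returns hidden entries like .DS_Store along with the real image subdirectories. A sketch of a filter that would make the listing robust (I haven't confirmed this against the actual script, so treat it as an assumption about how it loads the dataset):

```python
import os

def visible_subdirs(dataset_dir):
    """List image subdirectories, skipping hidden entries such as .DS_Store."""
    return sorted(
        name for name in os.listdir(dataset_dir)
        if not name.startswith(".")
        and os.path.isdir(os.path.join(dataset_dir, name))
    )
```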
cd windowsill
open window
hurl --out window << laptop
Kidding! We have a Word doc with instructions for installing WSL. Email Jennifer for a copy of the instructions.
https://learn.microsoft.com/en-us/windows/wsl/install
It's true, there is a conda for Windows, but we'll likely run into problems downstream at some point because a number of packages are not available for Windows. If we put in the extra effort now, it will save us a lot of headaches down the road.