# Getting Started with Maps Binder
MapReader enables quick, flexible research with large map corpora. It is based on the patchwork method: scanned map sheets are preprocessed to divide the content of each map into a grid of squares. Image classification at the level of each patch then allows users to define classes (labels) of map features related to their research questions.
## Preliminary considerations
[name=andy] Add instructions to explicitly try with NLS OS tried-and-tested data. Recreate the LwM results to ensure that MapReader is installed and functioning correctly.
- [ ] "An exercise for the reader" / Next Steps: try with your own data.
#### You might be interested in using MapReader if:
- you have access to a large corpus of georeferenced maps (e.g. more than 100)
- you want to quickly test different labels to help refine your research question before/without committing to manual vector data creation
- your maps were created before surveying accuracy reached modern standards, and therefore you do not want to create overly precise geolocated data based on the content of those maps
- you want to analyze any images (not just maps!) using the patchwork method
#### MapReader is well-suited for finding spatial phenomena that:
- have a homogeneous visual signal across many maps
- may not correspond to typical categories of map features that are traditionally digitized as vector data in a GIS
- might require too much time to digitize as vector data manually
#### Requirements for getting started with MapReader:
- your maps have been georeferenced (what's this?), if you want to do spatial analysis of the output
- you have permission from the holding institution to access high-resolution versions of the maps (learn more about input options here *ADD LINK*)
- you have permission to use patches derived from map sheets as research data (especially if you plan to publish the output as open data, which we recommend). See an example of a published MapReader dataset here [add link].
#### Recommendations for using MapReader:
- you have metadata for each map in your collection ("item-level" catalog records or other forms of metadata)
- you know how to use Jupyter Notebooks and virtual environments (if not, here are some tutorials)
- you have read our guide to picking labels for your patch classifications
- ?
## Example Research Question: Searching for `buildings` across the UK
In this binder, we walk you through the steps for using MapReader to generate a dataset of patches containing buildings as shown on 6" to 1 mile Ordnance Survey maps of Great Britain printed in the late nineteenth and early twentieth centuries.
This dataset is described in detail here.
It is openly available here.
We use this data in Living with Machines research, for example in XXX paper and XXX book chapter.
### Install
- [ ] update the code below based on what is required in Binder; explain the difference between interacting with MapReader in this Binder vs. a local install, and why you would choose one or the other.
Refer to [the installation section in README](https://github.com/Living-with-machines/MapReader#installation) to install `mapreader`.
```python
# Work around a Jupyter autocomplete issue with Jedi
%config Completer.use_jedi = False
# Automatically reload edited modules without restarting the kernel
%load_ext autoreload
%autoreload 2
# Render matplotlib plots inline in the notebook
%matplotlib inline
```
### Download maps via tileserver
Some digitized map collections are accessible via tileservers from libraries and archives. These maps have already been georeferenced, and they have sometimes also had content outside the neatline (the map collar/border) removed so that sheets covering adjacent land can be stitched together in a slippy map (example here). Using maps via tileservers is the easiest way to get started with MapReader.
`tileserver` provides an easy way to download maps from, e.g.,
* OS one-inch 2nd edition layer: https://mapseries-tilesets.s3.amazonaws.com/1inch_2nd_ed/index.html
By default, we use the `download_url` of the OS one-inch 2nd edition layer:
```python
from mapreader import TileServer
tileserver = TileServer(metadata_path="../../mapreader/persistent_data/metadata_OS_One_Inch_GB_WFS_light.json",
download_url="https://mapseries-tilesets.s3.amazonaws.com/1inch_2nd_ed/{z}/{x}/{y}.png")
```
- [ ] If you want to change to another map scale or edition within the NLS collections, [add instructions here]...
- [ ] If you have a different map collection that you can access via a tileserver [provide examples], make these changes... (see the sketch below)
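In the meantime, switching layers only requires pointing `TileServer` at a different metadata file and tile URL template. The sketch below mirrors the call above but uses placeholder values (the metadata path and URL are not real endpoints); substitute the details for the layer or collection you want:
```python
from mapreader import TileServer

# Placeholder values: swap in the metadata file and {z}/{x}/{y} tile URL template
# for the layer or collection you actually want to download
other_layer = TileServer(
    metadata_path="path/to/your_layer_metadata.json",
    download_url="https://your-tileserver.example.org/your_layer/{z}/{x}/{y}.png")
```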
### Querying `lat` and `lon` to retrieve maps
To retrieve maps from a tileserver, you specify the latitude and longitude of one or more query points; MapReader then finds the map sheets whose bounding boxes (from the item-level metadata) contain those points.
`plot_metadata_on_map` allows you to map the bounding boxes for each sheet in the NLS OS collection, because these coordinates are captured in the item-level metadata. Not all map collections will have such detailed sheet-specific metadata, but for those that do, this is a useful function for knowing what you have downloaded from the tileserver.
```python
# Query points by [latitude, longitude]; with append=False (the default) only the
# most recent query is stored, so pass append=True to accumulate queries
tileserver.query_point([51.53, -0.12])
tileserver.query_point([51.4, 0.08], append=True)
tileserver.query_point([51.4, -0.13], append=True)
tileserver.query_point([51.52, 0.03], append=True)

# Print all queries found so far
tileserver.print_found_queries()

# Plot the bounding boxes of the queried sheets on a map of the UK
tileserver.plot_metadata_on_map(map_extent="uk", add_text=True)
```
### Retrieve/download maps
Retrieving maps from a tileserver requires selecting a zoom level. Which zoom level is appropriate can depend on a few factors:
1. How much storage do you have access to?
2. How large are the features you are searching for on a map: e.g. are they very small, or do they occupy a large portion of a sheet?
3. What resolution is required for a successful vision task?
Maps tend to have very fine-grained information, and so using one of the higher zoom levels is appropriate for finding those details in a very large and often complex image.
- [ ] say something here about zoom levels: how many are there, is it always the same number (e.g. 15?), show examples of what zoom 1, 5, 10, and 15 look like. Include links to more documentation about this. Calculate storage size based on zoom level for the OS 6" collection and the OS 1" collection to give an idea of the difference.
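As a rough illustration of how zoom level drives download size, the sketch below estimates how many standard XYZ tiles cover a latitude/longitude bounding box at different zoom levels; each step up in zoom roughly quadruples the tile count. This is generic slippy-map arithmetic rather than a MapReader function, and the bounding box (around London) is an arbitrary example:
```python
import math

def num_tiles(lat_min, lat_max, lon_min, lon_max, zoom):
    """Approximate number of XYZ tiles covering a lat/lon bounding box at a zoom level."""
    def tile_xy(lat, lon, z):
        # Standard slippy-map tile numbering (Web Mercator)
        n = 2 ** z
        x = int((lon + 180.0) / 360.0 * n)
        lat_rad = math.radians(lat)
        y = int((1.0 - math.log(math.tan(lat_rad) + 1.0 / math.cos(lat_rad)) / math.pi) / 2.0 * n)
        return x, y

    x0, y0 = tile_xy(lat_max, lon_min, zoom)  # top-left corner
    x1, y1 = tile_xy(lat_min, lon_max, zoom)  # bottom-right corner
    return (x1 - x0 + 1) * (y1 - y0 + 1)

# Example bounding box around London at three zoom levels
for z in [10, 14, 16]:
    print(f"zoom {z}: ~{num_tiles(51.3, 51.7, -0.5, 0.3, z)} tiles")
```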
```python
tileserver.download_tileserver(mode="query",
                               zoom_level=14,
                               pixel_closest=50,
                               output_maps_dirname="./maps_tutorial")
```
The results are stored in a directory structure as follows:
```
maps_tutorial
├── map_101168609.png
├── map_101168618.png
├── map_101168702.png
├── map_101168708.png
└── metadata.csv
geojson
├── 101168609_0.geojson
├── 101168618_3.geojson
├── 101168702_2.geojson
└── 101168708_1.geojson
```
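To check which sheets you have downloaded, you can read the accompanying `metadata.csv` with pandas (the exact columns depend on the source collection's metadata):
```python
import pandas as pd

# Inspect the sheet-level metadata written alongside the downloaded maps
metadata = pd.read_csv("./maps_tutorial/metadata.csv")
metadata.head()
```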
- [ ] if you have map images and their metadata stored locally or in a cloud storage service (e.g. Azure), here is how we recommend organizing them for use in MapReader
- [ ] Instructions for people to connect to Azure storage
### Load maps
```python
from mapreader import loader

# Glob pattern matching the map sheets downloaded above
path2images = "./maps_tutorial/*png"
mymaps = loader(path2images)
print(f"Number of images: {len(mymaps)}")
```
- [ ] explain 'parent' terminology and how it comes into play after you slice the image into patches
### Display map
- [ ] describe this and the next section. Do they need to be separate? They seem like two steps toward one goal: showing a loaded map.
```python
mymaps.show_sample(num_samples=2, tree_level="parent")
all_maps = mymaps.list_parents()
```
### Show one image
```python
mymaps.show(all_maps[0],
            tree_level="parent",
            # image_width_resolution changes the resolution of the image for plotting
            image_width_resolution=800)
```
- [ ] add code here for different pre-processing steps (one illustrative sketch follows this list)
- [ ] remove content beyond the neatline
- [ ] re-project (for example, if you have maps in different projections)
- [ ] edit image (brightness, grayscale, etc.)
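Until MapReader-specific pre-processing helpers are documented here, the image-editing step can be sketched with Pillow directly; this is a minimal illustration rather than part of the MapReader API, and the file name is taken from the example download above:
```python
from PIL import Image, ImageEnhance

# Illustrative only: brighten a downloaded sheet and convert it to grayscale with Pillow
img = Image.open("./maps_tutorial/map_101168609.png")
brighter = ImageEnhance.Brightness(img).enhance(1.2)  # >1.0 brightens, <1.0 darkens
gray = brighter.convert("L")                          # single-channel grayscale
gray.save("./maps_tutorial/map_101168609_gray.png")
```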
### Slice maps into patches
Now we will divide the pre-processed map sheets into patches, the basic unit of analysis for MapReader. This code allows you to slice patches based on either pixels or real-world distance (e.g. 50 meters).
- [ ] update the code below to show this option. It was included after Kasra wrote the original tutorial.
- [ ] also update code for image re-sizing that allows all patches to be equal width. Check with Kasra about details for this (is this the `image_width_resolution` feature above?)
- [ ] update description above to describe this option.
```python
mymaps.sliceAll(path_save="./maps_tutorial/slice_50_50",
                slice_size=50,  # in pixels
                square_cuts=False,
                verbose=False,
                method="pixel")

# Show a sample of the resulting patches ("child" images)
mymaps.show_sample(4, tree_level="child")
```
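The prose above also mentions slicing by real-world distance. The original tutorial only demonstrates `method="pixel"`; the call below is a tentative sketch of what the distance-based option might look like, and both the `method="meters"` value and the interpretation of `slice_size` are assumptions to be checked against the current MapReader documentation:
```python
# Tentative sketch only: the argument values below are assumptions, not confirmed API
mymaps.sliceAll(path_save="./maps_tutorial/slice_50m",
                slice_size=50,       # assumed to be interpreted in meters when method="meters"
                square_cuts=False,
                verbose=False,
                method="meters")     # assumed alternative to method="pixel"
```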
### Calculate mean and standard deviation of pixel intensities
Pixel intensity is a useful basic measure of the content of a patch: it can be an initial means of sorting through large numbers of patches to organize an annotation task. For example, if you are interested in a feature like 'buildings', patches containing buildings will typically have a different mean pixel intensity from largely empty patches (e.g. fields, water, and other open space), so ranking patches by intensity helps you decide which to annotate first.
The `calc_pixel_stats` method calculates the mean and standard deviation of pixel intensities for each `child` (i.e., `patch`) in a `parent` image.
```python
# If parent_id="XXX" is passed, pixel stats are computed only for that parent image
mymaps.calc_pixel_stats()

# Convert parent (map) and child (patch) metadata to pandas DataFrames
maps_pd, patches_pd = mymaps.convertImages(fmt="dataframe")
maps_pd.head()
patches_pd.head()

# Mean pixel intensity across all patches
patches_pd["mean_pixel_RGB"].mean()
```
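One simple way to use these statistics when organizing an annotation task is to rank patches by their mean pixel intensity; the sketch below uses the `patches_pd` DataFrame produced above. Inspect both ends of the ranking to see which direction corresponds to the features you care about:
```python
# Rank patches by mean pixel intensity; the extremes of the ranking tend to
# separate ink-dense patches from largely blank ones
ranked = patches_pd.sort_values("mean_pixel_RGB")
ranked.head(10)   # lowest mean intensity
ranked.tail(10)   # highest mean intensity
```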
TODO - add text here re:
- [ ] why PyTorch pre-trained models: explain what they are trained on and whether this might impact results (e.g. we don't have answers to this yet, but we are working on it)
- [ ] choices to be made during fine-tuning, e.g. epochs, learning rate, etc.
- [ ] how to pick a model from the results
- [ ] dividing the corpus into train, test, and validation sets, and the implications this has for analysis of a collection
### Annotate patches
- [ ] Missing code here for collecting annotations. Does this need to be a separate binder or can it be integrated here?
### Train/fine-tune computer vision (CV) classifiers
MapReader currently uses PyTorch pre-trained models as a starting point. These are trained on ImageNet (i.e. images from the internet), not on historical maps. Fine-tuning allows us to use MapReader patch annotations to improve model performance for a specific research task.
```python
from mapreader import classifier
from mapreader import loadAnnotations
from mapreader import patchTorchDataset

import numpy as np
import torch
from torch import nn
import torchvision
from torchvision import transforms
from torchvision import models

# Container for the patch annotations collected in the annotation step
annotated_images = loadAnnotations()
```
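For readers unfamiliar with what fine-tuning involves, here is a minimal torchvision-only sketch of the idea; it is generic PyTorch rather than MapReader's `classifier` wrapper, the two-class setup is just an example, and newer torchvision versions use `weights=` in place of `pretrained=True`:
```python
import torch
from torch import nn
from torchvision import models

# Start from an ImageNet-pretrained ResNet and replace its final fully-connected
# layer so it predicts two patch classes (e.g. "building" vs "no building")
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)

# Fine-tuning then updates these weights on the annotated patches,
# typically with a cross-entropy loss and a small learning rate
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```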
### TODO
Code/documentation to add:
- [ ] review fine-tuning results
- [ ] select model to predict labels on test and validation sets
- [ ] view test/validation results
- [ ] explain what each is significant for
- [ ] describe approaches for qualitative evaluation of false positives and negatives
- [ ] combine results for whole collection (e.g. seen and unseen; annotations + predictions) as datasets for research use
- [ ] describe output formatting options
- [ ] patch represented as centroid
- [ ] future work: patch by its bounding box coordinates
- [ ] example of in-notebook viz using geopandas
- [ ] future: example of plotting results in Olivia's observable notebooks for Macromap, and OS metadata