# Getting Started with Maps Binder

MapReader enables quick, flexible research with large map corpora. It is based on the patchwork method: scanned map sheets are preprocessed to divide the content of each map into a grid of squares, and image classification at the level of each patch allows users to define classes (labels) of map features related to their research questions.

## Preliminary considerations

[name=andy] Add instructions to explicitly try with the tried-and-tested NLS OS data. Recreate the LwM results to ensure that MapReader is installed and functioning correctly.
- [ ] "An exercise for the reader" / Next steps: try with your own data.

#### You might be interested in using MapReader if:

- you have access to a large corpus of georeferenced maps (e.g. more than 100)
- you want to quickly test different labels to help refine your research question before/without committing to manual vector data creation
- your maps were created before surveying accuracy reached modern standards, and therefore you do not want to create overly precise geolocated data based on the content of those maps
- you want to analyze any images (not just maps!) using the patchwork method

#### MapReader is well-suited for finding spatial phenomena that:

- have a homogeneous visual signal across many maps
- may not correspond to typical categories of map features that are traditionally digitized as vector data in a GIS
- might require too much time to digitize as vector data manually

#### Requirements for getting started with MapReader:

- your maps have been georeferenced (what's this?), if you want to do spatial analysis of the output
- you have permission from the holding institution to access high-resolution versions of the maps (learn more about input options here *ADD LINK*)
- you have permission to use patches derived from map sheets as research data (especially if you plan to publish the output as open data, which we recommend). See an example of a published MapReader dataset here [add link].

#### Recommendations for using MapReader:

- you have metadata for each map in your collection ("item-level" catalog records or other forms of metadata)
- you know how to use Jupyter Notebooks and virtual environments (if not, here are some tutorials)
- you have read our guide to picking labels for your patch classifications
- ?

## Example Research Question: Searching for `buildings` across the UK

In this binder, we walk you through the steps for using MapReader to generate a dataset of patches containing buildings as shown on 6" to 1 mile Ordnance Survey maps of Great Britain printed in the late nineteenth and early twentieth centuries. This dataset is described in detail here. It is openly available here. We use this data in Living with Machines research, for example in XXX paper and XXX book chapter.

### Install

- [ ] Update the code below based on what is required in Binder. Explain the difference between interacting with MapReader in this binder vs. a local install, and why you would do one or the other.

Refer to [the installation section in the README](https://github.com/Living-with-machines/MapReader#installation) to install `mapreader`.

```python
# solve issue with autocomplete
%config Completer.use_jedi = False

%load_ext autoreload
%autoreload 2
%matplotlib inline
```
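While the Binder-vs.-local instructions above are still to be written, a minimal local-install sketch is shown below. It assumes `mapreader` is published on PyPI under that name and can be installed from the repository URL with pip; treat the README as the authoritative reference.

```python
# Minimal local-install sketch (not needed in Binder, where the environment is pre-built).
# The PyPI package name and the pip-from-git command below are assumptions; see the README
# installation section for the authoritative instructions and version pins.
%pip install mapreader
# or, to install the development version directly from GitHub:
# %pip install git+https://github.com/Living-with-machines/MapReader.git
```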
### Download maps via tileserver

Some digitized map collections are accessible via tileservers hosted by libraries and archives. These maps have already been georeferenced, and they have sometimes also had content outside the neatline (the map collar/border) removed so that adjacent sheets can be stitched together in a slippy map (example here). Using maps served from tileservers is the easiest way to get started with MapReader.

The `TileServer` class provides an easy way to download maps from, e.g.,

* OS one-inch 2nd edition layer: https://mapseries-tilesets.s3.amazonaws.com/1inch_2nd_ed/index.html

By default, we use the `download_url` of the OS one-inch 2nd edition layer:

```python
from mapreader import TileServer

tileserver = TileServer(
    metadata_path="../../mapreader/persistent_data/metadata_OS_One_Inch_GB_WFS_light.json",
    download_url="https://mapseries-tilesets.s3.amazonaws.com/1inch_2nd_ed/{z}/{x}/{y}.png",
)
```

- [ ] If you want to change to another map scale or edition within the NLS collections, [add instructions here]...
- [ ] If you have a different map collection that you can access via a tileserver [provide examples], make these changes...

### Querying `lat` and `lon` to retrieve maps

To retrieve maps from a tileserver, you must specify the XXXXXX [complete this].

`plot_metadata_on_map` allows you to map the bounding boxes for each sheet in the NLS OS collection, because these coordinates are captured in the item-level metadata. Not all map collections will have such detailed sheet-specific metadata, but for those that do, this is a useful function for knowing what you have downloaded from the tileserver.

```python
tileserver.query_point([51.53, -0.12])

# if append=False, only the last query will be stored
tileserver.query_point([51.4, 0.08], append=True)
tileserver.query_point([51.4, -0.13], append=True)
tileserver.query_point([51.52, 0.03], append=True)

# print all found queries
tileserver.print_found_queries()

tileserver.plot_metadata_on_map(map_extent="uk", add_text=True)
```

### Retrieve/download maps

Retrieving maps from a tileserver requires selecting a zoom level. Which zoom level is appropriate can depend on a few factors:

1. How much storage do you have access to?
2. How large are the features you are searching for on a map: e.g. are they very small, or do they occupy a large portion of a sheet?
3. What resolution is required for a successful vision task? Maps tend to contain very fine-grained information, so one of the higher zoom levels is usually appropriate for finding those details in a very large and often complex image.

- [ ] Say something here about zoom levels: how many are there, are there always the same number (e.g. 15?), show examples of what zoom 1, 5, 10, and 15 look like perhaps? Include links to more documentation about this. Calculate storage size based on zoom level for the OS 6" collection and the 1" collection to give an idea of the difference (a rough, generic estimate is sketched at the end of this section).

```python
tileserver.download_tileserver(
    mode="query",
    zoom_level=14,
    pixel_closest=50,
    output_maps_dirname="./maps_tutorial",
)
```

The results are stored in a directory structure as follows:

```
maps_tutorial
├── map_101168609.png
├── map_101168618.png
├── map_101168702.png
├── map_101168708.png
└── metadata.csv
geojson
├── 101168609_0.geojson
├── 101168618_3.geojson
├── 101168702_2.geojson
└── 101168708_1.geojson
```
- [ ] If you have map images and their metadata stored locally or in a cloud storage service (e.g. Azure), here is how we recommend organizing them for use in MapReader.
- [ ] Instructions for people to connect to Azure storage.

### Load maps

```python
from mapreader import loader

path2images = "./maps_tutorial/*png"
mymaps = loader(path2images)
print(f"Number of images: {len(mymaps)}")
```

- [ ] Explain 'parent' terminology and how it comes into play after you slice the image into patches.

### Display map

- [ ] Describe this and the next section. Do they need to be separate? They seem like two steps towards one goal: showing a loaded map.

```python
mymaps.show_sample(num_samples=2, tree_level="parent")
all_maps = mymaps.list_parents()
```

### Show one image

```python
mymaps.show(all_maps[0],
            tree_level="parent",
            # change the resolution of the image for plotting
            image_width_resolution=800)
```

- [ ] Add code here for different pre-processing steps:
  - [ ] remove content beyond the neatline
  - [ ] re-project (for example, if you have maps in different projections)
  - [ ] edit the image (brightness, grayscale, etc.)

### Slice maps into patches

Now we will divide the pre-processed map sheets into patches, the basic unit of analysis for MapReader. This code allows you to slice patches based on either pixels or real-world distance (e.g. 50 meters).

- [ ] Update the code below to show this option. It was included after Kasra wrote the original tutorial.
- [ ] Also update the code for image re-sizing that allows all patches to be equal width. Check with Kasra about the details (is this the `image_width_resolution` feature above?).
- [ ] Update the description above to describe this option.

```python
mymaps.sliceAll(path_save="./maps_tutorial/slice_50_50",
                slice_size=50,  # in pixels
                square_cuts=False,
                verbose=False,
                method="pixel")

mymaps.show_sample(4, tree_level="child")
```

### Calculate mean and standard deviation of pixel intensities

Pixel intensity is a useful basic measure of the content of a patch: it can be an initial means of sorting through large numbers of patches to organize an annotation task. For example, if you are interested in a feature like 'buildings', then you are likely to want to see patches with very high pixel intensities rather than those with very low pixel intensities (e.g. fields, water, and other open space).

The `calc_pixel_stats` method can be used to calculate the mean and standard deviation of pixel intensities of each `child` (i.e., `patch`) in a `parent` image.

```python
# if parent_id="XXX", only compute pixel stats for that parent
mymaps.calc_pixel_stats()

maps_pd, patches_pd = mymaps.convertImages(fmt="dataframe")
maps_pd.head()
patches_pd.head()
patches_pd["mean_pixel_RGB"].mean()
```
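As a small follow-on sketch (plain pandas, not a MapReader call), the `mean_pixel_RGB` column computed above can be used to rank patches so that an annotation session starts from one extreme of the intensity distribution rather than from a random sample:

```python
# Rank patches by mean pixel intensity (column produced by calc_pixel_stats above).
patches_ranked = patches_pd.sort_values("mean_pixel_RGB")

# Inspect both extremes of the distribution before deciding where to start annotating:
# one end will be dominated by densely inked patches, the other by mostly empty ones.
print(patches_ranked["mean_pixel_RGB"].head())
print(patches_ranked["mean_pixel_RGB"].tail())

# e.g. keep the quarter of patches with the highest mean intensity for a first pass
threshold = patches_pd["mean_pixel_RGB"].quantile(0.75)
candidates = patches_pd[patches_pd["mean_pixel_RGB"] >= threshold]
print(f"{len(candidates)} of {len(patches_pd)} patches above the 75th percentile")
```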
TODO - add text here re:
- [ ] why PyTorch pre-trained models; explain what they are trained on and whether this might impact results (e.g. we don't have answers to this yet, but we are working on it)
- [ ] choices to be made during fine-tuning, e.g. epochs/learning rate/etc.
- [ ] how to pick a model from the results
- [ ] divide the corpus into train, test, and validation sets, and what the implications of this are for analysis of a collection

### Annotate patches

- [ ] Missing code here for collecting annotations. Does this need to be a separate binder or can it be integrated here?

### Train/fine-tune computer vision (CV) classifiers

MapReader currently uses PyTorch pre-trained models as a starting point. These are trained on ImageNet (i.e. images from the internet), not historical maps. Fine-tuning allows us to use MapReader patch annotations to improve model performance for a specific research task (a generic illustration of this idea is sketched at the end of this notebook).

```python
from mapreader import classifier
from mapreader import loadAnnotations
from mapreader import patchTorchDataset

import numpy as np
import torch
from torch import nn
import torchvision
from torchvision import transforms
from torchvision import models

annotated_images = loadAnnotations()
```

### TODO

Code/documentation to add:
- [ ] review fine-tuning results
- [ ] select a model to predict labels on the test and validation sets
- [ ] view test/validation results
  - [ ] explain what each is significant for
  - [ ] describe approaches for qualitative evaluation of false positives and negatives
- [ ] combine results for the whole collection (e.g. seen and unseen; annotations + predictions) as datasets for research use
- [ ] describe output formatting options
  - [ ] patch represented as centroid
  - [ ] future work: patch by its bounding box coordinates
- [ ] example of in-notebook viz using geopandas
- [ ] future: example of plotting results in Olivia's Observable notebooks for Macromap, and OS metadata
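Finally, until the MapReader-specific training and evaluation steps above are documented, here is a generic torchvision sketch of the fine-tuning idea described in the "Train/fine-tune computer vision (CV) classifiers" section: start from ImageNet-pretrained weights and replace the classification head for a two-class (e.g. building / no building) patch task. This illustrates transfer learning in general; it is not the `classifier` / `patchTorchDataset` API, and all names below are illustrative.

```python
import torch
from torch import nn
from torchvision import models

# Generic transfer-learning sketch: start from ImageNet weights and replace the final
# fully connected layer with a 2-class head (e.g. "building" / "no building" patches).
# Note: `pretrained=True` may emit a deprecation warning on newer torchvision versions.
model = models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False              # freeze the pretrained backbone (optional)
model.fc = nn.Linear(model.fc.in_features, 2)  # new head is trainable by default

# Only the new head's parameters are optimized here; unfreezing more layers (with a
# lower learning rate) is a common next step once results plateau.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB patches.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```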
