---
title: 'DS Project - save the forest'
disqus: hackmd
---
DS Project - save the forest
===



[TOC]
## Notes
- Markdown (HackMD?) for short sketches/shared notes. Maybe Overleaf for the report.
- GitHub for Version Control/ Code management
- Communication: mainly e-mail. We don't really want to use Slack.
- [Dario's Project Notes](https://hackmd.io/@dhett/SkRMqXoDS)
## Preferences:
#### Luca:
- Web Backend
- Web Frontend
#### Bastien:
- Web Backend
- Web Frontend
#### Dario:
- Data Analysis, Machine Learning
- Data Access, Database
#### Fynn:
- Web Backend
- Data Analysis
- Integration
#### Project Management:
- Agree on weekly/ fortnightly deadlines
- Use some simple GitHub Kanban board maybe
## Skills
### Bastien:
- Web development projects (Angular, NodeJS, PHP, Javascript)
### Luca:
- Mainly Machine Learning
### Fynn
- Sleeping, eating,...
- Angular5, NodeJS, Typescript
- Generic Python Backend stuff, would like to learn Django
### Dario
- R, Python, Stan
- Data Structures & Algorithms
## Learning Diary - before the course
### Fynn
I have a B.Sc. in Mathematics with Computer Science as a minor, having used mainly MATLAB for numerics/statistics and C++/Java for algorithm classes and two software projects. I wrote my thesis about the development and implementation of a heuristic for a graph-theoretical problem. From September 2016 until February 2018, I worked part-time as a Java backend developer alongside my studies.
Since March 2018, I have been working on an Ionic app using TypeScript and Angular as a full stack developer in a data-driven startup and have been mainly responsible for the application logic. From July 2018 onwards, I have also been working on the evaluation backend written in Python and integrated a machine learning model into the pipeline. I have been using GitLab for version control and Jira for (more or less) agile project management.
I am now in the 3rd semester of my Master's programme in Data Science with Management as a minor, and I am at Helsinki University as an exchange student. At home I have attended courses in statistics and time series analysis using R, in machine learning and network analysis using Python, and have completed a project in probabilistic modelling on a geospatial dataset using Stan. Currently, I am taking the courses "Introduction to Artificial Intelligence" and "Computational Statistics 1".
From this project I expect to learn how to develop and deploy a machine learning application and want to specifically work on DevOps tasks and integration into the web framework.
### Dario
### Luca
## 1st meeting - notes
### Data
- 500,000 samples, equidistant grid, satellite images
- free satellite data:
- Landsat (NASA) -> way more comprehensive, been around for a long time; updates every couple of weeks
- Copernicus (EU) -> Sentinels
- some already in the public domain
- locational info: lat/ long, tree cover estimate, time series analysis
- extract the pixel values, spectral information, tree cover estimates
- visual interpretation: subjective. Someone decided "this looks like forest"
- Easiest thing to use: [Google Earth Engine](https://earthengine.google.com/). Register! Manual signup form.
- JavaScript and Python API to find the images
- can submit training/test data for classification tasks (see the classification sketch after this list)
- can flag samples for which the training data is not good enough -> need a new collection
- The extent of forest in dryland
- [Collect Earth](https://openforis.org/tools/collect-earth.html) gives details about the assessment methods
- [Here](https://sepal.io) our model would be integrated in the future
- interface for cloud computing, processing of satellite data. Google Earth Engine
- feed different kinds of algorithm, download the results
- R-Studio running there, but also Python.
- Petabytes of data, available in the Cloud
- "collaboration with the FAO Global Forest Resources Assessment" <- when signing up
- full dataset (500,000 samples): would need an NDA; smaller dataset/in the public domain: rather not
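
Regarding the classification capability mentioned in the list above, here is a minimal, hedged sketch of how labelled training points could be fed to an Earth Engine classifier via the Python API. The asset path, image ID, band list and class property are placeholders, not actual project names:

```python
# Hedged sketch: training an Earth Engine classifier on submitted training points.
# The asset path, image ID, band list and class property below are placeholders.
import ee

ee.Initialize()

bands = ['B2', 'B5', 'B6', 'B7']
image = ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_044034_20170606').select(bands)  # example scene

# Labelled points uploaded as an asset (hypothetical path), with a 'tree_cover_class' property.
training = ee.FeatureCollection('users/our_project/training_points')
samples = image.sampleRegions(collection=training,
                              properties=['tree_cover_class'],
                              scale=30)

classifier = ee.Classifier.smileRandomForest(50).train(
    features=samples,
    classProperty='tree_cover_class',
    inputProperties=bands)
classified = image.classify(classifier)
```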
Collection:
- Square-shaped sample, about 1/2 to 1 hectare
- Count the number of dots -> if 3 out of 10 fall on trees, 30% tree cover
- Using QGIS. FAO uses _solely_ OSS software.
Task:
- correlation between tree cover percentage and satellite information
- detect outliers!
- Overall goal:
- people do Mapathons - get some instruction and subjectively estimate the tree cover
- Our tool should _verify_ that: trigger a flag if it looks suspicious, so it can be re-assessed (see the sketch after this list)
- Or maybe classify those with high accuracy automatically?
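
A rough sketch of that verification idea: predict tree cover from the spectral bands and flag plots where the human estimate deviates strongly from the cross-validated prediction. The model choice, feature layout and threshold are assumptions for illustration, not a decision:

```python
# Hedged sketch of flagging suspicious plots via cross-validated residuals.
# Model choice, feature matrix layout and the 0.25 threshold are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

def flag_suspicious(band_values, reported_cover, threshold=0.25):
    """Return a boolean mask of plots whose reported tree cover looks suspicious."""
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    predicted = cross_val_predict(model, band_values, reported_cover, cv=5)
    return np.abs(predicted - reported_cover) > threshold
```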
**Code**: We can use GitHub and release the code to the public domain
**Follow-up**:
1. Tools:
- http://www.openforis.org/tools/collect-earth/tutorials.html
- https://collect.earth/ (Collect Earth Online developed in collaboration with NASA and Google)
- SEPAL - https://sepal.io/
2. Papers:
- https://science.sciencemag.org/content/356/6338/635
- https://www.mdpi.com/2072-4292/8/10/807
3. Other: https://rise.articulate.com/share/R1wgzJXpog5CbclYsMlp6owLVh6E6VPc#/
## 2nd Meeting: Notes
Common problems: cannot see any forest from Google Earth due to different resolution. FAO criteria:
- minimum area of half a hectare
- at least 10% crown cover
- height of at least 5m
Some of the work has been done by people who have not been trained for long and might just classify "yeah, whatever". Want to identify:
- people who do not work diligently
- areas that are not classified correctly
In the future, more high resolution images will be available.
Follow GEE tutorial to receive pixel value. User should be able to enter:
- Which dataset - e.g. Landsat from 1982 to 2018 and bands 2, 5, 6, 7
- e.g. in addition: sentinel from 2015-2016
- Which location - long & lat
- Then receive pixel values for single points, e.g. for one sample site it might be 3x3 pixels
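
A hedged sketch of that interface with the GEE Python API; the dataset ID, bands and coordinates below are only examples:

```python
# Hedged sketch of the user-facing query described above (dataset/bands/coords are examples).
import ee

ee.Initialize()

def get_pixel_values(dataset, bands, start, end, lon, lat, scale=30):
    """Return the time series of pixel values at (lon, lat) for the chosen dataset and bands."""
    point = ee.Geometry.Point([lon, lat])
    collection = (ee.ImageCollection(dataset)
                  .filterBounds(point)
                  .filterDate(start, end)
                  .select(bands))
    # getRegion yields rows of [id, lon, lat, time, band1, band2, ...]
    return collection.getRegion(point, scale).getInfo()

rows = get_pixel_values('LANDSAT/LC08/C01/T1_SR', ['B2', 'B5', 'B6', 'B7'],
                        '2015-01-01', '2016-01-01', 25.0, 62.0)
```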
Which pixels should we actually use
- want to exclude pixels contaminated by:
- clouds
- haze
- smoke
- look at correlation between tree cover and spectral information
- depends on underlying ecology
- need to adhere to the ecological classes
Additional layers sent by Anssi should be incorporated in the analysis & prediction. The tool should allow using both public and private assets.
To start with, let's use the [Landsat 8](https://developers.google.com/earth-engine/datasets/catalog/LANDSAT_LC08_C01_T1_SR) data. It is high quality and we can exclude areas with cloud cover.
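
A minimal sketch of masking out cloudy pixels in that collection, assuming the `pixel_qa` band and bit flags documented for the Landsat 8 SR dataset linked above:

```python
# Hedged sketch: mask clouds and cloud shadows in Landsat 8 SR using the pixel_qa band.
# Bit positions (3 = cloud shadow, 5 = cloud) follow the dataset's documentation.
import ee

ee.Initialize()

def mask_l8_sr(image):
    qa = image.select('pixel_qa')
    cloud_shadow_bit = 1 << 3
    cloud_bit = 1 << 5
    mask = (qa.bitwiseAnd(cloud_shadow_bit).eq(0)
            .And(qa.bitwiseAnd(cloud_bit).eq(0)))
    return image.updateMask(mask)

masked = (ee.ImageCollection('LANDSAT/LC08/C01/T1_SR')
          .filterDate('2017-01-01', '2017-12-31')
          .map(mask_l8_sr))
```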
We have two options:
1. run _everything_ on the Earth Engine
2. retrieve the data from EE to SEPAL and do the analysis there to have more options and algorithms available
We need to figure out how to store and access the data in our code, [geopandas](https://geohackweek.github.io/vector/06-geopandas-advanced/) might be a good option.
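
A small sketch of how the plot data could be held in geopandas; the column names and the GeoJSON persistence are just assumptions for illustration:

```python
# Hedged sketch: storing plot locations and tree cover estimates in a GeoDataFrame.
import geopandas as gpd
from shapely.geometry import Point

plots = gpd.GeoDataFrame(
    {'plot_id': [1, 2], 'tree_cover': [0.3, 0.7]},       # columns are placeholders
    geometry=[Point(25.0, 62.0), Point(25.1, 62.1)],     # lon/lat as Points
    crs='EPSG:4326',
)
plots.to_file('plots.geojson', driver='GeoJSON')         # simple persistence between runs
```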
Next steps:
- look into the data in more detail
- agree on a date when we could travel to Rome
- prepare the project schedule -> presentation on 17.10.
## How to retrieve our data
To upload our own dataset, we should use the cmd-line tool to add it as a file to the earth engine.
1. import the dataset and get the lon/lat coordinates
2. convert the coordinates to a `FeatureCollection`
3. define a `map` function which:
- uses the `Geometry` defined by the collection
- defines a function to select the desired features (Band values) for the image with the given resolution and returns them
4. apply the function to the geometry
5. use `Export` to save it to drive (in JS, might be easier in Python)
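
The same steps sketched with the Python API; the coordinates, image ID and bands are placeholders, and the JS version would be analogous:

```python
# Hedged sketch of steps 1-5 above with the GEE Python API; names and IDs are placeholders.
import ee

ee.Initialize()

# 1./2. lon/lat coordinates from our dataset, turned into a FeatureCollection
points = ee.FeatureCollection([
    ee.Feature(ee.Geometry.Point([lon, lat]), {'plot_id': pid})
    for pid, lon, lat in [(1, 25.0, 62.0), (2, 25.1, 62.1)]   # example coordinates
])

# 3./4. select the desired bands and sample the image at each point (30 m resolution)
image = (ee.Image('LANDSAT/LC08/C01/T1_SR/LC08_044034_20170606')  # example scene
         .select(['B2', 'B5', 'B6', 'B7']))
samples = image.sampleRegions(collection=points, scale=30)

# 5. export the sampled band values to Drive as CSV
task = ee.batch.Export.table.toDrive(collection=samples,
                                     description='plot_pixel_values',
                                     fileFormat='CSV')
task.start()
```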
## Related resources
- [AI Applications for Satellite Imagery and Satellite Data](https://emerj.com/ai-sector-overviews/ai-applications-for-satellite-imagery-and-data/)
Good general overview; the hint on change analysis might be interesting regarding deforestation
- [Machine learning and satellite data provide the first global view of transshipment activity](https://globalfishingwatch.org/data/machine-learning-and-satellite-data-provide-the-first-global-view-of-transshipment-activity/)
Rather large & professional open project. DISCLAIMER: Not for the faint of heart; includes de facto slavery on ships.
- [Skynet - Machine Learning for Satellite Imagery](https://developmentseed.org/projects/skynet/)
Project regarding satellite imagery analysis. The name. THE. NAME.
- [You Only Look Twice — Multi-Scale Object Detection in Satellite Imagery With Convolutional Neural Networks (Part I)](https://medium.com/the-downlinq/you-only-look-twice-multi-scale-object-detection-in-satellite-imagery-with-convolutional-neural-38dad1cf7571)
Kinda in-depth application, and their professional vocabulary includes YOLO, which makes it a no-brainer for this list.
- [Advances in using multitemporal night-time lights satellite imagery to detect, estimate, and monitor socioeconomic dynamics](https://www.sciencedirect.com/science/article/abs/pii/S0034425717300068)
Sample application from the back of my head, some geographer blogging on [cryopolitics.com](https://cryopolitics.com), highly recommended y'all. Might be interesting as this prolly goes into the direction of change analysis.
- [Satellite Imagery for Wildlife Monitoring & Tracking](https://www.satimagingcorp.com/applications/environmental-impact-studies/wildlife-and-marine-conservation/wildlife-monitoring/)
Sample application; website "satellite imaging corporation" is the more interesting part.
- [Using Satellite Imagery and Machine Learning to Detect and Monitor Elephants](https://blog.hexagongeospatial.com/using-satellite-imagery-and-machine-learning-to-detect-and-monitor-elephants/)
Sample application - if they can track elephants, immovable collections of trees (so-called forests) should be a piece of cake, am I rite?
- [BlackSky](https://www.blacksky.com/)
Some corporation with an edgy name which came up while googling (still got plenty of VC money, let's get rich yo) and acquired OpenWhere
- [Orbital Insight](https://orbitalinsight.com/)
Most well-funded machine learning-based earth observation start up, just came up all the time so why not include dem bois
- [DigitalGlobe](https://www.digitalglobe.com/)
*Wikipedia* DigitalGlobe’s customers range from urban planners, to conservation organizations like the Amazon Conservation Team, to the U.S. federal agencies, including NASA and the United States Department of Defense's National Geospatial-Intelligence Agency (NGA). Much of Google Earth and Google Maps high resolution-imagery is provided by DigitalGlobe,[...]
- [What Was Said on GEOINT 2016](https://medium.com/@V.K.Komissarov/what-was-said-on-geoint-2016-be1ba81c7b19#.cfbkwxryd)
**Quote:** *GEOINT Symposium is an annual (since 2004) event held by non-profit non-lobbying educational organization United States Geospatial Intelligence Foundation (USGIF) for the defense, intelligence and homeland security communities.*
- [Landsat 8 Explained](https://landsat.gsfc.nasa.gov/landsat-8/landsat-8-bands/) "Band 5 measures the near infrared, or NIR. This part of the spectrum is especially important for ecology because healthy plants reflect it – the water in their leaves scatters the wavelengths back into the sky. By comparing it with other bands, we get indexes like NDVI, which let us measure plant health more precisely than if we only looked at visible greenness."
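
For reference, the NDVI mentioned in the quote is just the normalised difference of the near-infrared and red reflectances (bands 5 and 4 on Landsat 8):

```python
# NDVI = (NIR - red) / (NIR + red); on Landsat 8, NIR is band 5 and red is band 4.
def ndvi(nir, red):
    return (nir - red) / (nir + red)

print(ndvi(0.45, 0.08))  # healthy vegetation gives values closer to 1
```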
## Fynn's research about SEPAL
tl;dr: it's not that useful for us because the data processing tools do not extract the pixel values, only vegetation indices.
For visualisation, we should simply link to [collect.earth](https://collect.earth/collection?projectId=120).
### sepal.io
[bfast](http://bfast.r-forge.r-project.org/) algorithm was suggested by Alfonso to run some analysis on the TS data.
- **Applications:** Deforestation, forest health monitoring and phenological change detection within time series of spatio-temporal data sets (satellite images).
- An R package is available.
Should we use Fusion Tables for the data? No, they will be deprecated.
Downloading data: see the [prepare ts doc](https://github.com/openforis/sepal/blob/master/docs/sop/sop_sepal_prepare_ts.docx?raw=true). I guess it's the most efficient way to obtain the relevant data from GEE.
- But it looks like it cannot be used to obtain the raw data. Instead, just the vegetation indices.
- Could at least use the smallest instance to run a data download with my script :)
Sampling tool for area estimation: https://github.com/openforis/accuracy-assessment
According to the [FAQ](https://github.com/openforis/sepal/wiki#14when-i-download-a-product-does-it-come-with-its-metadata), there is no built-in product to download the actual image metadata. Only mosaics and vegetation indices.
### Web-Tool
- Our tool should display a list and directly link to https://collect.earth , open the corresponding collection _and Image!!!_
- Is run as a [spark server](http://sparkjava.com/) and code is available [on GitHub](https://github.com/openforis/collect-earth-online/tree/master/src/main/java/org/openforis/ceo)
- Can't pass the `plotId` as a parameter yet, only the `projectId`. But adding an additional route for the `plotId` should not be too difficult!
- need to add new entry to `Server.java`, next to `GetProjectById` - namely `getProjectAndPlotById` - or do whatever to handle not only `https://collect.earth/collection?projectId=120` but instead: `https://collect.earth/collection?projectId=120&plotId=166535`
- need to call method `getPlotData = (plotId)` in: `./src/main/resources/public/jsx/collection.js`
## 29.10. Meeting
Need to discuss:
- Performance on OZ data
- Integration into collect.earth?
- could add a parameter with the behaviour discussed above.
- yes, ask Adolfo
- could integrate as command line tool & call from the CEO server?
- Our tool: Heroku OK for start, can provide as Python Package in the end
- they also use Heroku and we're just using public data, should be fine
- could e.g. be installed via command line on sepal?
- It's OK if re-training would require python fiddling. Leave the sepal integration to the FAO
- just make it available as a pip-installable venv package (see the packaging sketch below)
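
A minimal packaging sketch for the pip-installable idea above; the package name, dependencies and entry point are hypothetical:

```python
# setup.py - hedged sketch of a pip-installable package; all names below are placeholders.
from setuptools import setup, find_packages

setup(
    name='forest-cover-verifier',                    # hypothetical package name
    version='0.1.0',
    packages=find_packages(),
    install_requires=['earthengine-api', 'pandas', 'scikit-learn'],  # assumed dependencies
    entry_points={
        'console_scripts': ['forest-verify=forest_verifier.cli:main'],  # hypothetical CLI entry
    },
)
```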
E.g. would use only data from a certain region/type of fauna
- SR: then the individual band values would be useful
- But if it's not like that: the NDVI etc will be very necessary
Ask Adolfo how it is set up; we need the file.
Improvements, things to consider:
- persistence within and across runs: should not fail if data retrieval for 5 entries fails.
- cache partial results - maybe not as CSV, but should be done!
- Just use a Postgres DB with `collectionID` and `plotId` as primary key (and which year/satellite?) and append the GEE data in the original format with a timestamp (see the schema sketch below)
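
A hedged sketch of that cache table; the table name, column names, types and connection string are assumptions:

```python
# Hedged sketch of the proposed Postgres cache; table and column names are placeholders.
import psycopg2

conn = psycopg2.connect('dbname=forest_cache')  # connection string is a placeholder
with conn, conn.cursor() as cur:
    cur.execute("""
        CREATE TABLE IF NOT EXISTS gee_cache (
            collection_id INTEGER     NOT NULL,
            plot_id       INTEGER     NOT NULL,
            satellite     TEXT        NOT NULL,
            year          INTEGER     NOT NULL,
            payload       JSONB       NOT NULL,              -- GEE response in original format
            retrieved_at  TIMESTAMPTZ NOT NULL DEFAULT now(),
            PRIMARY KEY (collection_id, plot_id, satellite, year)
        );
    """)
```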
Todos:
- Dario: classifier to python, adapt current data pull
- Luca: Collect Earth Online part - how is the data formatted etc
- Fynn:
- integrate that into the server
- explore functionality to pull from Planet-data with higher resolution