<p align="center"><img width=25% src="http://ksvi.mff.cuni.cz/~topfer/MFF-logo300.gif"></p>
# LinkedPipesApplications
[Lucid chart link](https://bit.ly/2vsMaMi)
## Overview
The following README describes a software project named `LinkedPipesApplications`, prepared within the scope of the `NPRG023` course by:
```
1. Marzia Cutajar
2. Esteban Jenkins
3. Altynbek Orumbayev
4. Alexander Mansurov
5. Ivan Lattak
```
The goal of the project is to develop a comprehensive software toolset that provides a convenient way of working with and visualizing Linked Data.
## Setup & Installation
---
## Content
[Glossary](#section0)
[1. Introduction to `LinkedPipesApplications`](#section1)
[2. High Level Architecture](#section2)
[3. API Mapping](#section3)
[4. Continuous Delivery Pipeline](#section4)
[References](#section5)
---
## Glossary <a id="section0"></a>
| Word| Definition|
|--- | --- |
|`API`| Stands for Application Programming Interface. Consists of a set of endpoints.|
|`Endpoint`| An endpoint is one end of a communication channel.|
|`Assistant`| Refers to the Linked Pipes Visualization Assistant, which allows users to create, configure and publish visualisations based on input data sets.|
|`DataCube`| A multi-dimensional array of values.|
|`Data descriptor`| A SPARQL ASK query associated with a visualizer that determines whether an input data graph can be visualized by the corresponding visualizer.|
|`Data source`| Refers to any source of data, such as an RDF file, a CSV file, a database, etc.|
|[`ETL`](https://en.wikipedia.org/wiki/Extract,_transform,_load) | Extract, transform, load.|
|[`IRI`](https://en.wikipedia.org/wiki/Internationalized_Resource_Identifier)| Stands for Internationalized Resource Identifier. It is an internet protocol standard that extends the ASCII-based character set of the Uniform Resource Identifier (URI) protocol.|
|`LDVM`| Stands for Linked Data Visualization Model.|
|`LDVMi`| Stands for LDVM implementation.|
|[`Linked Data`](https://en.wikipedia.org/wiki/Linked_data) | A method of publishing structured data so that it can be interlinked.|
| `LPA` | Stands for LinkedPipesApplications.|
|`Linked Open Data Cloud (LOD Cloud)`| The largest cloud of linked data that is freely available to everyone.|
|`LinkedPipes ETL`| The service in charge of the ETL process.|
|`Pipeline` | In the current context, refers to the process in which the application takes any data source, applies a series of transformations to it and then hands over the output to a visualizer component, which then produces a visual representation of the data.|
|`Pipeline discovery` | The process of taking the input descriptors of all visualizers and attempting to combine the registered transformations to achieve a specific data format.|
|[`RDF`](https://en.wikipedia.org/wiki/Resource_Description_Framework) | Resource Description Framework.|
|[`Semantic web`](https://en.wikipedia.org/wiki/Semantic_Web) | An extension of the World Wide Web through standards set by the World Wide Web Consortium.|
|[`Single Page Application (SPA)`](https://en.wikipedia.org/wiki/Single-page_application) | A web application or web site that interacts with the user by dynamically rewriting the current page rather than loading entire new pages from a server.|
|[`SPARQL`](https://en.wikipedia.org/wiki/SPARQL) | Stands for SPARQL Protocol and RDF Query Language. It is a query language able to retrieve and manipulate data stored in RDF format.|
|[`TTL`](https://en.wikipedia.org/wiki/Turtle_(syntax))| Turtle, a textual serialization format for RDF.|
|`URI`| A Uniform Resource Identifier (URI) is a string of characters designed for unambiguous identification of resources and extensibility via the URI scheme. They provide a standard way for resources to be accessed by other computers across a network or over the World Wide Web.|
|`URL`| A URL is a specific type of Uniform Resource Identifier.|
---
## 1. Introduction to LinkedPipesApplications <a id="section1"></a>
The main goal of this project is to create a web application through which users can explore, visualize and publish Linked Data. The application seamlessly interacts with the Discovery service and the ETL service to achieve this goal.
---
## 2. High Level Architecture <a id="section2"></a>
### Overview
Due to the complexity of the whole `LinkedPipesApplications` project, the application was split into microservices that users consume through the web application (LPA). This way each part can evolve separately as long as the contract between the parts remains the same.
<p align="center"><img width=100% src="https://pli.io/9DdOw.png"></p>
1. POST request with an XML configuration (see the sketch below) containing:
   - Data source
   - Transformers
   - Applications
2. Extract a small sample dataset.
3. Check which transformers and applications are compatible.
4. Respond with the compatible pipelines and a TTL file describing their ETL structures.
5. Push the TTL data and create the pipeline.
6. Execute the pipeline on the whole dataset.
7. Get the response.
8. Store the response in the RDF storage.
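
As an illustration of step 1, starting a discovery through the LPA backend might look roughly like the sketch below. This is a hedged sketch, not the actual contract: the endpoint comes from the API mapping table in section 3, while the base URL, the payload shape and the use of Node 18+'s global `fetch` are assumptions.

```javascript
// Illustrative sketch of step 1: asking LPA to start a pipeline discovery.
// The endpoint is taken from the API mapping table (section 3); the base URL
// and payload shape are assumptions for illustration only.
async function startDiscovery(configXml) {
  const response = await fetch("https://applications.linkedpipes.com/pipelines/discover", {
    method: "POST",
    headers: { "Content-Type": "application/xml" },
    body: configXml, // configuration listing data sources, transformers, applications
  });
  if (!response.ok) {
    throw new Error(`Discovery failed to start: ${response.status}`);
  }
  return response.json(); // assumed to contain a discovery id used for status polling
}
```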
#### 2.1 `LinkedPipesApplications (LPA)`
`LPA` is the main application, providing the frontend for end users as well as a backend for communication with the Discovery & ETL services.
##### 2.1.1 `Frontend`
The web application that provides a way for the user to interact with the LPA. It is written in `React.js`. Its main functionality includes:
- Adding, deleting and modifying data sources (see the sketch below).
- Displaying the discovery results for the end user.
- Discovering pipelines.
- Executing a pipeline.
- Publishing an application.
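
As a sketch of the first feature, a data source could be registered through the backend's `/datasource` endpoint (listed in section 3). The JSON body shape here is an assumption for illustration, not the actual frontend code:

```javascript
// Hypothetical sketch: registering a new data source from the frontend.
// The /datasource endpoint appears in the API mapping table; the JSON body
// is an assumed shape for illustration.
async function addDataSource(iri, name) {
  const response = await fetch("/datasource", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ iri, name }),
  });
  if (!response.ok) {
    throw new Error(`Failed to add data source: ${response.status}`);
  }
  return response.json();
}
```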
##### 2.1.2 `Backend`
The backend application is written in `Java` using `Spring Boot`. Its main functionality includes:
- Communication with the `Discovery Service` and the `ETL Service`.
- Data source management.
- Discovering pipelines for a given set of data sources.
#### 2.2 `ETL Service`
An Extract Transform Load service for Linked Data. Each step can be described as follows (a toy sketch follows the list):
- `Extract` - pulls the data from a source into the application or running process.
- `Transform` - converts the source data into the target representation.
- `Load` - pushes the resulting data out of the application or process, e.g. to a web server.
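
The following is a toy sketch of the three phases, purely to illustrate the flow; it is not the LinkedPipes ETL implementation, and the source URL, transformation and target are placeholders (Node 18+'s global `fetch` is assumed):

```javascript
// Toy ETL sketch illustrating the three phases described above.
// Not the LinkedPipes ETL implementation; everything here is a placeholder.
const fs = require("fs");

async function etl(sourceUrl, targetFile) {
  // Extract: pull raw data from the source into the running process.
  const raw = await (await fetch(sourceUrl)).text();
  // Transform: convert the source data into the target representation.
  const transformed = raw.split("\n").filter((line) => line.trim()).join("\n");
  // Load: push the result out of the process, here to a local file.
  fs.writeFileSync(targetFile, transformed);
}
```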
#### 2.3 `Discovery Service`
The Discovery service is a backend application whose task is to discover pipelines for the set of data sources it receives from the LPA backend.
#### 2.4 `RDF Storage`
The RDF Store is used to store the result data from the ETL Service; this data is then retrieved from the store and visualised for the end user in the frontend application.
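
Assuming the store exposes a standard SPARQL protocol HTTP endpoint, reading the data back could look roughly like this (the endpoint URL and the use of named graphs are assumptions for illustration):

```javascript
// Hypothetical sketch: reading visualisation data back from the RDF store
// through a standard SPARQL protocol endpoint. The endpoint URL and the
// named-graph layout are assumptions.
async function fetchGraph(graphIri) {
  const query = `SELECT ?s ?p ?o WHERE { GRAPH <${graphIri}> { ?s ?p ?o } } LIMIT 100`;
  const response = await fetch("http://localhost:8890/sparql", {
    method: "POST",
    headers: {
      "Content-Type": "application/x-www-form-urlencoded",
      "Accept": "application/sparql-results+json",
    },
    body: new URLSearchParams({ query }),
  });
  return response.json(); // SPARQL JSON results
}
```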
#### 2.5 `LOD Cloud`
The Linked Open Data Cloud is the largest collection of linked open data that is freely available to everyone.
### Basic workflow
The basic workflow is summarized in the diagram below. The user first selects some data sources and discovers which pipelines can be run on them. These requests are made to the LPA backend, which in turn forwards them to the Discovery service. The user can then execute a pipeline; this request is forwarded by the LPA backend to the ETL service. Executing a pipeline can take some time, so the web application has to poll for the completion status until it is finished (see the sketch after the diagram).
<p align="center"><img width=100% src="https://pli.io/fbnkM.png" alt="sequence diagram"></p>
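
The polling step can be sketched as follows. The `GET /execution/status` endpoint is taken from the API mapping table in section 3; the query parameter name and the `finished` field of the response are assumptions for illustration:

```javascript
// Hypothetical sketch of the polling loop: ask the LPA backend for the
// execution status until the pipeline run finishes. The query parameter and
// the `finished` response field are assumed shapes.
async function waitForExecution(executionIri, intervalMs = 2000) {
  for (;;) {
    const response = await fetch(
      `/execution/status?executionIri=${encodeURIComponent(executionIri)}`
    );
    const status = await response.json();
    if (status.finished) {
      return status; // done: results can now be fetched and visualised
    }
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```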
---
## 3. API Mapping <a id="section3"></a>
| Backend | Discovery | ETL | Misc |
| -------- | -------- | -------- | -------- |
| /datasources | | | |
| /datasource | | | |
| POST /pipelines/discover | /discovery/startFromInput | | |
| POST /pipelines/discoverFromInputIri | /discovery/startFromInputIri | | |
| GET /discovery/``id``/status | /discovery/``id``| | |
| GET /discovery/``id``/pipelineGroups | /discovery/``id``/pipeline-groups | | |
| GET /execution/status | | | ``executionIri``/overview |
| GET /execution/result | | | ``executionIri`` |
| POST /pipeline/export | /discovery/``id``/export/pipelineUri | | |
| /pipeline/create | | | |
| GET /pipeline/execute | | /executions | |
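
To illustrate how one of these mappings works: the LPA backend receives a request and simply forwards it to the corresponding service endpoint. The real backend is written in Java with Spring Boot, so the Express sketch below only illustrates the forwarding pattern; the Discovery base URL is an assumption:

```javascript
// Illustrative Express sketch of the forwarding pattern: GET /discovery/{id}/status
// on the LPA backend maps to GET /discovery/{id} on the Discovery service.
// The real backend is Java/Spring Boot; the base URL is an assumption.
const express = require("express");
const app = express();
const DISCOVERY_URL = process.env.DISCOVERY_URL || "http://localhost:9000";

app.get("/discovery/:id/status", async (req, res) => {
  const upstream = await fetch(`${DISCOVERY_URL}/discovery/${req.params.id}`);
  res.status(upstream.status).json(await upstream.json());
});

app.listen(8080, () => console.log("LPA backend sketch listening on 8080"));
```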
---
## 4. Continuous Delivery Pipeline <a id="section4"></a>
A GitHub webhook is triggered whenever a pull request is opened, closed, reopened, edited, assigned, unassigned, review requested, review request removed, labeled, unlabeled, or synchronized. The webhook makes a POST request to https://applications.linkedpipes.com/webhooks/lpa, an endpoint of a minimalistic server whose purpose is to deploy the latest version of the master branch to the applications.linkedpipes.com domain.
This web server runs as a systemd service:
```ini
[Unit]
Description=Webhook server listening on 8085 for LPA deployments
[Service]
Type=simple
User=project
ExecStart=/home/project/.nvm/versions/node/v11.4.0/bin/node /home/project/deploy/webserver.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
```
The server script is as follows:
```javascript
var express = require("express");
var bodyParser = require("body-parser");
var childProcess = require("child_process");

var app = express();
// Configure express to use body-parser as middleware so the JSON webhook
// payload is available on req.body.
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());

var port = process.env.WHPORT || 8085;

function deploy(res) {
  childProcess.exec("cd /home/project/deploy && ./deploy.sh", function (err, stdout, stderr) {
    if (err) {
      console.error(err);
      return res.sendStatus(500);
    }
    // Respond only after the deployment script has finished, so a failure
    // cannot race with an already-sent 200.
    res.sendStatus(200);
  });
}

app.post("/webhooks/lpa", function (req, res) {
  // Guard against payloads without a pull_request object (e.g. ping events).
  var pullRequest = req.body.pull_request;
  if (!pullRequest) {
    return res.sendStatus(200);
  }
  var branch = pullRequest.base.ref;
  var action = req.body.action;
  var merged = pullRequest.merged;
  // Deploy only when a pull request into master has been closed by merging.
  if (branch && branch.indexOf("master") > -1 && action === "closed" && merged) {
    deploy(res);
  } else {
    res.sendStatus(200);
  }
});

app.get("/webhooks/lpa", function (req, res) { return res.sendStatus(200); });

app.listen(port, () => console.log(`Deployment server listening on port ${port}!`));
```
The service is registered and started with:
```bash
sudo systemctl daemon-reload
sudo systemctl enable lpa-webhook.service
sudo systemctl start lpa-webhook
sudo systemctl status lpa-webhook
```
## References <a id="section5"></a>
- Data-driven Web Application Generator
- Graph data visualizations with D3.js libraries
---