# Functional validation
|User |Action |Component |
|--|--|--|
| Issuer | Login | Authentication Service |
| Issuer | List Diplomas | Issuer Service |
| Issuer | Filter Diploma | Issuer Service |
| Issuer | View Diploma | Issuer Service |
| Issuer | Award Diploma | Issuer Service & Public Ledger |
| Issuer | View Award Status | Public Ledger |
| Holder | Publish Request for Award Proof | Issuer Service & Public Ledger |
| Holder | View Publication Request Status| Public Ledger |
| Issuer | List Publication Requests | Issuer Service & Public Ledger |
| Issuer | Publish Proof for Publication Request | Issuer Service & Public Ledger |
| Issuer | View Proof Publication Status | Issuer Service & Public Ledger |
| Verifier | Publish ACK/NACK/FAIL for Publication Proof | Verifier Service & Public Ledger |
| Verifier | List Verified/Rejected Publication Proofs | Verifier Service & Public Ledger |
```mermaid
sequenceDiagram
    participant Issuer
    participant Holder
    participant Verifier
    participant Auth as Auth Service
    participant Registry as Diplomas Registry
    participant Ledger
    Issuer ->> Auth: Login
    Holder ->> Auth: Login
    Verifier ->> Auth: Login
    Issuer ->> Registry: List Diplomas
    Issuer ->> Ledger: Award Diploma
    Ledger ->> Issuer: Award Transaction ID
    Issuer ->> Holder: Inform Holder of Awarded Diploma
    Holder ->> Ledger: Publish Request for Award Publication to Verifier
    Ledger ->> Holder: Publication Request Transaction ID
    Holder ->> Issuer: Inform Issuer of Publication Request
    Issuer ->> Ledger: Publish Proof for Publication Request
    Ledger ->> Issuer: Proof Publication Transaction ID
    Issuer ->> Verifier: Inform Verifier of Proof Publication
    Verifier ->> Ledger: Publish ACK/NACK for Proof Publication
    Verifier ->> Issuer: Inform Issuer of Proof Acknowledgement
```
# 2.4.2 Non-functional validation
## Cryptographic Security
Without making any particular assumptions about the ledger in use, the security of the Diplomata protocol relies on the properties of an El-Gamal cryptosystem for generating signatures and commitments (Basic Crypto Layer), along with a symmetric encryption mechanism for inter-party message communication (Transaction Logic Layer). In particular, the usual hybrid approach has been adopted, where the security features of an asymmetric cryptosystem (non-repudiation, zero-knowledge primitives, etc.) are combined with the advantages of symmetric cryptography (low cost of encryption). Both layers are built on top of appropriately chosen elliptic curves.
(a) Transaction Layer
At the transaction layer, involved parties have the ability to create common secrets for the purpose of exchanging sensitive information. For example, the ISSUER symmetrically encrypts the part of the produced proof addressed to a VERIFIER (in particular, the proof of decryption along with the accompanying decryptor), so that no man-in-the-middle who eavesdrops on the transaction layer and captures the proof packet is able to verify it and publish an acknowledgement. The symmetric encryption infrastructure makes calls to the NaCl public-key API, meaning that each involved party must own a key over the Curve25519 elliptic curve (128 bits of security, 256-bit key size) used for common secret agreement.
(b) Basic Crypto Layer
At this layer, the main interests are to:
(i) generate and verify zero-knowledge proofs; in particular, DDH proofs on behalf of the ISSUER, along with Chaum-Pedersen proofs of re-encryption, are crucial for tracing computations back to knowledge of secret parameters and for cross-checking the integrity of documents;
(ii) sign computations and verify these signatures; in particular, verifying the signatures which enter the ledger guarantees coherence and soundness (e.g., that a HOLDER cannot request a proof for an award they don't own, or
that an ISSUER cannot create a proof without a prior request on behalf of a HOLDER).
The security of these operations depends on the hardness of the Discrete Logarithm problem for certain elliptic curves. The currently suggested minimum key size is 384 bits, meaning that each involved party must own an
El-Gamal key over the P-384 elliptic curve (192 bits of security). The adopted scheme for digital signatures is the ECDSA standard.
In order to reduce the number of protocol steps and reduce exposure to communication failures, the usual approach of making zero-knowledge proofs non-interactive was followed. This is attained by means of the Fiat-Shamir
heuristic, which requires a secure hash function to be fixed and used by all parties. The bit length of the digest output should be at least equal to that of the order of the curve in use, so SHA-384 has been chosen.
## Performance metrics
To report on the behaviour of the system and its communication with the ledger, the time needed to publish ledger entries was recorded. The following diagram shows the results of ten data publications to the distributed ledger. The ledger used for the tests was the Ethereum Ropsten blockchain, but any other distributed ledger could be used as well. The vertical axis shows the time needed to confirm that the data were published to the ledger, and the orange line shows the linear regression derived from the recorded data. Based on the recorded values, the average time for the system to confirm that data were published to the ledger is 23,892.1 ms. More precisely, on the Ethereum blockchain data are considered immutably recorded only after ten confirmations, so each recorded value spans ten confirmation intervals; the average time per single confirmation is therefore 2,389.21 ms.
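The reported figures can be reproduced from the raw measurements with elementary statistics. The sketch below (sample values are illustrative, not the actual measurements) computes the mean publication time, the per-confirmation average, and the least-squares regression line shown in the diagram:

```javascript
// Arithmetic mean of a list of numbers.
function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Ordinary least-squares fit of y over the publication index 1..n.
function linearRegression(ys) {
  const xs = ys.map((_, i) => i + 1);
  const mx = mean(xs), my = mean(ys);
  let num = 0, den = 0;
  for (let i = 0; i < ys.length; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    den += (xs[i] - mx) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: my - slope * mx };
}

// Illustrative confirmation times (ms) for ten publications.
const timesMs = [21000, 25400, 22800, 26100, 23500, 24900, 22300, 25700, 23000, 24200];
const avg = mean(timesMs);            // time until ten confirmations
const perConfirmation = avg / 10;     // average time per single confirmation
console.log(avg, perConfirmation);    // 23890 2389
```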

# 2.4.3 Usability validation
In the following section we sum up the results of several UX evaluations that took place during the **ediplomas** service prototype design and implementation. Considering the early stage of the service, most of the evaluations were based on the cost-effective method of Heuristic Evaluation (HE), with a view to continuously re-inspecting the system's usability after each iteration of the development process. Each evaluation resulted in a set of UX improvements. Each improvement was categorized based on the predefined set of usability heuristics and then fed into the subsequent development iteration.
## User interface
---
Major focus to identify usability issues was given to the Single Page Application components that deliver the user interfaces of the service.
### Design toolkit
---
As is the case in modern UI development, preliminary research regarding the use of an existing web-based UI toolkit was conducted. A UI toolkit provides an assortment of ready-to-use UI resources, giving developers a way to efficiently build usable and accessible interfaces.
The **ediplomas** prototype was based on **digigov-sdk**, an open-source toolkit that was built in the context of the digital transformation act of the Greek government. The toolkit consists of a [Design System](https://guide.services.gov.gr/) that was highly inspired by the corresponding project of [gov.uk](https://design-system.service.gov.uk/), and a prototype implementation of the system's components and patterns using the **react.js** library. We consider that the requirements of the **ediplomas** user personas closely match those targeted by the toolkit, as it was designed with particular regard to meeting certain usability and accessibility standards for use in e-government services.
### ediplomas patterns
---
The implementation of the design system of **digigov-sdk** is inspired by the [Atomic Design](https://bradfrost.com/blog/post/atomic-web-design/) methodology, which prescribes a set of rules regarding the classification of system components. This enabled us to use existing basic (atom) components and a set of existing patterns (organisms) to build up the majority of the **ediplomas** views.
In the following sections we point out the set of usability heuristics that validate the overall design of the application. We additionally point out heuristics validation for a set of core patterns reused across application views.
#### Cross application heuristics
**Match between system and the real world**
- Express ledger concepts in basic wording that stems from the domain model of **ediplomas**. Avoid oversimplifying concepts; use multi-level state wordings when applicable.
- State `qualification award action pending for ledger transaction confirmation` could be mapped to `award pending / publishing`
- Avoid hiding actions that are not yet available but are still to be executed at a later stage of the protocol. Use disabled styles instead.
- Hide resource actions that can no longer be executed at the current stage of the protocol.
**Consistency and standards**
- Common 2/3 layout across views.
- Comply with the AA level WCAG success criteria.
- Responsive design for enhanced mobile UX.
**Recognition rather than recall**
- Reuse of existing patterns when possible to mitigate user cognitive overload (resource listing, item details).
**Aesthetic and minimalist design**
- Minimalist layout design.
- Avoid excessive use of animations.
- Prefer use of comprehensive labels over icons/images.
**User control and freedom**
- Provide a link to the service index view from the header layout section.
#### Pattern 1. Start page / Layout
---

**Aesthetic and minimalist design**
- Prominent CTA (Call to action) styles and positioning.
**Help and documentation**
- Primary service description.
- Secondary text to communicate next step eligibility details.
- Sidebar links to service documentation.
#### Pattern 2. Resource list
---

**Visibility of system status**
- Entry title and details.
- Styled status indicator for each item.
- Set of actions based on item status.
**Recognition rather than recall**
- Always map primary action to navigate to item details view.
- Use at most one available secondary action.
- Display non-applicable secondary actions using disabled styles.
- Separate styles for primary/secondary action.
**Aesthetic and minimalist design**
- Cut down item details to 1-2 rows. Additional details will be provided in item details view.
**Error prevention**
- No irreversible resource action should be available in list view.
- Display a confirmation prompt for actions that alter the state of the resource.
#### Pattern 3. Resource details/actions
---

**Visibility of system status**
- Item title and detailed description.
- Primary item status.
- Additional status content if applicable, to map asynchronous ledger state.
- Set of actions based on item status.
**User control and freedom**
- Warn users regarding the irreversible result of actions that trigger ledger transactions.
**Error prevention**
- Detailed main action description.
- Display a confirmation prompt for actions that alter the state of the resource.
**Flexibility and efficiency of use**
- Elements that indicate the extended status of the resource may link to the public ledger system (e.g., https://etherscan.io/txid)
# 2.4.7 Implementation validation
Implementation validation refers to the assessment of the correctness of the code implementing the use case. The objective is to ensure the integrity and robustness of the service, and make sure that the implementation is successful and error-free.
Every project must define a Continuous Integration pipeline in order to ensure its quality. The pipeline consists of the following main jobs: Test, Lint, and Build.
### Test
All microservices must run and pass their tests in all cases. Tests are the building blocks of the application: they make sure that the application meets the needs of its users. Testing needs to cover both business logic (functional tests) and isolated units of code (unit tests).
Unit testing: the purpose of unit testing is to test the correctness of isolated code, such as methods, functions etc. It's done during the development of an application by the developers. A unit may be an individual function, method, procedure, module, or object.
Functional testing: functional tests ensure that the application works as expected from the user’s perspective. Assertions primarily test the user interface.
Both unit and functional testing are implemented via Jest, a JavaScript testing framework designed to ensure the correctness of any JavaScript codebase.
Test coverage: a measure used to describe the degree to which the source code of a program is executed when a particular test suite runs. A program with high test coverage, measured as a percentage, has had more of its source code executed during testing, which suggests it has a lower chance of containing undetected software bugs compared to a program with low test coverage.
### Lint
Since a project requires both active development and maintenance, the Lint job needs to ensure that the application stays aligned with the linting configurations used throughout the project, both for backend and frontend.
Linting is the automated checking of your source code for programmatic and stylistic errors. This is done by using a lint tool (otherwise known as linter). A lint tool is a basic static code analyzer.
### Build
The build job generates artifacts, which are actually docker images, generated by the production grade Dockerfiles. These images need to be deployed in GRNET's docker registry, with the relevant docker tags. All projects need to follow and maintain the build job (which is maintained by the GRNET team).
# 2.4.8 Deployment validation
Deployment validation refers to the assessment of the mechanisms for effecting the transition from code to the deployment of a working system via Continuous Integration / Continuous Deployment approaches.
Every project needs to have a pipeline in order to validate the quality, the readiness and the deployability of the project. There are specific rules that need to be met in order for the application to be deployed in an environment.
The Continuous Integration & Continuous Deployment can be split into two main categories:
- Application specific jobs: the project defines its own jobs to ensure the quality of the project
- Deployment specific jobs: jobs that are required for a project to get deployed.
In order for a project to get deployed in a specific environment (e.g. production, testing etc.), there are (at least) two required jobs:
- Build: compiling the application (source code, libraries, configuration files, etc.) and producing a shippable executable (which could be of any form, such as a .jar or .exe file)
- Deploy: promoting the output of the Build phase to the intended environment, e.g. from the Development to the Testing environment.
### Docker
Docker is a tool that allows developers to create, deploy, and run applications in containers. Containerization is the use of Linux containers to deploy applications. A container runs natively on Linux and shares the kernel of the host machine with other containers. It runs as a discrete process, taking no more memory than any other executable, which makes it very lightweight.
The application is divided into microservices, and each service has its own Dockerfile, so they can be launched and orchestrated from a docker-compose.yml file. Access to resources (like networking interfaces and disk drives) is virtualized inside this environment, which is isolated from the rest of the system.
### Kubernetes
The infrastructure of the service runs on a Kubernetes platform. The service is composed of several images, which run on the Kubernetes cluster, along with a set of configuration files which define how the service runs.
On Kubernetes, we have two clusters: demo and production. All the resources of the service are defined and handled via its API. Every component of the project is thus defined and submitted to the API in order to get deployed on Kubernetes.
# 3.3 Prototype Application for e-Diplomas
- Traditional SPA with a REST API backend
- Separate service for each role
- Modular user authentication and diplomas registry
## 3.3.1 Frontend Server
The frontend server is a Next.js server using the Digigov React library for view components. A web application exists for each of the user types (Holder, Issuer, Verifier). The server supports server-side rendering, which allows the apps to run fast on older hardware.
### 3.3.1.1 Issuer web application
The Issuer web application is comprised of the following pages:
- Login Page
- List and Filter Issuer Diplomas Page
- View Diploma Page
In the View Diploma Page an `award` action is available by which the process of publishing an award commitment to the ledger begins.
### 3.3.1.2 Holder web application
The Holder web application is comprised of the following pages:
- Login Page
- List and Filter Holder's Awarded Diplomas Page
- View Diploma Page
In the View Diploma Page a `request publication` action is available by which the process of publishing a request commitment to the ledger begins. A verifier entity must be selected to which the publication is intended.
### 3.3.1.3 Verifier web application
The Verifier web application is comprised of the following pages:
- Login Page
- List and Filter Issuer's Published Diplomas Page
- View Diploma Page
In the View Diploma Page a `Verify Diploma` action is available, by which the process of publishing an acknowledgement commitment to the ledger begins. The document for which the verification is intended must be uploaded.
## 3.3.2 API Server
The API server is comprised of an Express.js-based REST API server and a MongoDB database.
### Storage
MongoDB is a NoSQL database that is fast and production-ready. On top of MongoDB, an ODM (Object Document Model) is used that allows for a final validation of the data.
In Mongoose, the modeling is done in a way that allows the server app to be split into multiple isolated services. The storage component works in cooperation with the ledger: the storage handles off-chain data and transaction ids, so that each step of the protocol can be verified using the signatures retrieved from the ledger.
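The final-validation role of the ODM can be sketched as follows. This is a plain-JavaScript stand-in for a Mongoose schema so the example is self-contained; the field names and the transaction-id format are illustrative, but they mirror the idea above of keeping off-chain data next to the ledger transaction id that anchors it:

```javascript
// Validate an award document before it is written to storage, the way an
// ODM schema would. Returns the list of violated constraints.
function validateAward(doc) {
  const errors = [];
  if (typeof doc.holderId !== 'string' || doc.holderId.length === 0)
    errors.push('holderId is required');
  if (typeof doc.diplomaId !== 'string' || doc.diplomaId.length === 0)
    errors.push('diplomaId is required');
  // The ledger transaction id links this off-chain record to the on-chain tag.
  if (typeof doc.ledgerTxId !== 'string' || !/^0x[0-9a-f]{8,}$/.test(doc.ledgerTxId))
    errors.push('ledgerTxId must be a hex transaction id');
  return { valid: errors.length === 0, errors };
}

const result = validateAward({
  holderId: 'holder-42',
  diplomaId: 'msc-cs-2021',
  ledgerTxId: '0xdeadbeef',
});
console.log(result.valid); // true
```

In Mongoose the same constraints would be declared once in the schema (`required`, `match`, etc.) and enforced automatically on every save.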
### REST API
The Express.js-based API is implemented following the REST pattern. Exchange of off-chain data between the entities is done using the REST API endpoints and not directly through the database. This allows the server app to be split into multiple isolated services.
### Interoperability
The API service is meant to interconnect with an external authentication service.
## 3.3.3 Crypto component
In order to participate in the Diplomata protocol, besides interacting with
the ledger, each party must also have access to an El-Gamal PKI (public key
infrastructure). More specifically, all involved parties agree on a common
underlying cryptosystem and generate a corresponding key, by means of which they
can sign ledger entries, produce and verify ZK proofs, symmetrically encrypt
messages etc. The security parameters of the cryptosystem are exposed in the
*Cryptographic Security* section; here we briefly give an outline of the library's
architecture:
### Internal viewpoint
The library consists of three self-contained layers (from lower to higher):
- Basic Crypto Layer: the actual cryptosystem along with the corresponding
prover and verifier engines, responsible for the cryptographic primitives
of the protocol.
- Transaction Logic Layer: responsible for the protocol execution as
described in the specification, leaving aside the details of the primitives.
- Presentation Layer: appropriately adapts the Transaction Logic Layer
(flattening, serializations etc.), so that interfacing with other
subsystems of the application becomes possible (see *External viewpoint* below).
Each layer is as agnostic as possible of its underlying layers' internals,
for the purpose of pluggability, maintenance and testing.
### External viewpoint
The crypto component is intended to interact with both the ledger and the user's storage.
- Interaction with the ledger: the detached signatures which are produced
during protocol execution ("tags") are appended to the ledger. In turn,
a transaction id is returned, by means of which each tag is retrievable
for later use.
- Interaction with the storage: some other quantities, which are produced
during the protocol execution, must also be persistently stored for later
use in the storage. In turn, the crypto component should be also able to
load these quantities from the storage and adapt them appropriately,
so that they become again amenable to cryptographic operations.
## 3.3.4 Ledger Lib
The server communicates with a ledger to record all transactions. To accomplish this communication the server uses a library that implements functions in order to interact with the blockchain network and publish the ledger data entries.
Each ledger entry is identified by a tag `s`. The server uses one of the library's methods to publish `s` to the ledger. Synchronously, the ledger returns an identifier that can be used to track the transaction and retrieve `s`. When asked, the server uses another library method to retrieve `s`.
## 2.3 Toolkit for Data Storage
The crypto component must be wrapped with a toolkit, providing the appropriate
interfaces for interacting with the ledger and storage in a robust and safe way.
Its main purpose is to achieve enhanced user control, by ensuring notification and
consent upon the execution of any operation that is associated with the user's data.
This toolkit will essentially function as:
(i) a bridge between the user's service and the ledger, by providing READ/WRITE
operations on the ledger. The toolkit is responsible for submitting the tags
produced during protocol execution to the ledger, receiving back a transaction id
for that action and forwarding this receipt to the storage, so that the
corresponding ledger entry is retrievable from the ledger at any future point.
(ii) a bridge between the crypto component and the storage, by providing
READ/WRITE operations on the storage. Auxiliary quantities produced during
protocol execution, which need not necessarily be published but are needed for
later use, must be stored (persistently or not) in the user's database. The ledger
data, being the detached signatures of the protocol computations, are thus
strongly correlated with the data residing in the user's storage, so that a
reciprocal proof of legitimacy is achieved between them.
Note that the toolkit API should be as generic as possible (e.g., the same to
access and store data either on the ledger or off the ledger), so that it can be
reusable in different contexts and promote a uniform language for a potential
ecosystem of interoperating services and applications.
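Such a uniform API might be sketched as follows; both the off-ledger store shown here and a ledger-backed one would satisfy the same `{ write, read }` contract, so callers need not know where a given value resides (all names are illustrative):

```javascript
// Off-chain backend: an in-memory stand-in for the user's database.
class MemoryStore {
  constructor() {
    this.data = new Map();
    this.next = 1;
  }

  // Write a value and return a receipt (a storage key here; a ledger
  // transaction id in the on-chain case).
  write(value) {
    const id = String(this.next++);
    this.data.set(id, value);
    return id;
  }

  read(id) {
    return this.data.get(id);
  }
}

// Generic code written against the contract works with any backend.
function roundTrip(store, value) {
  return store.read(store.write(value));
}

const offChain = new MemoryStore();
console.log(roundTrip(offChain, 'auxiliary-quantity')); // 'auxiliary-quantity'
```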