# Implementation details
The project implements a simple web server that accepts requests to increment a value associated with a given key.
The service synchronizes its state to a PostgreSQL database every ten seconds.
## Assumptions:
- Accepts `POST /increment` requests on port `3333`
- The request body is a JSON object `{"key": "<key_value>", "value": <increment>}`, where `<key_value>` is a string and `<increment>` is a positive integer (see the example below)
- A successful request results in a response with HTTP status code `200` and an empty body
- The service synchronizes its state with the PostgreSQL database every 10 seconds
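For example, incrementing the counter stored under a hypothetical key `page_views` by `3` would be a `POST` to `/increment` on port `3333` with the body (key and value are arbitrary illustrations):

```json
{"key": "page_views", "value": 3}
```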
## Architecture overview:
We can identify the following distinct entities:
#### `PlatformWeb.IncrementController`
is responsible for handling the incoming HTTP request. It uses `Platform.Increment` to validate the data and passes it to the `Platform.Aggregator` subsystem.
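A minimal sketch of the described flow, assuming a standard Phoenix controller action and hypothetical function names `Platform.Increment.validate/1` and `Platform.Aggregator.add/1` (the error response code is also an assumption, since only the success case is specified):

```elixir
defmodule PlatformWeb.IncrementController do
  use PlatformWeb, :controller

  # POST /increment
  def create(conn, params) do
    # The validation and aggregator calls below are hypothetical names
    # illustrating the described flow, not the exact project API.
    case Platform.Increment.validate(params) do
      {:ok, increment} ->
        :ok = Platform.Aggregator.add(increment)
        send_resp(conn, 200, "")

      {:error, _reason} ->
        send_resp(conn, 422, "")
    end
  end
end
```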
#### `Platform.Increment`
provides data validation and returns a `Map` accepted by the `Platform.Aggregator` subsystem.
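A validation sketch under the same assumptions (the `validate/1` name and the returned map shape are illustrative, not taken from the project):

```elixir
defmodule Platform.Increment do
  # Checks that "key" is a string and "value" a positive integer and
  # returns a map in the shape the aggregator is assumed to expect.
  def validate(%{"key" => key, "value" => value})
      when is_binary(key) and is_integer(value) and value > 0 do
    {:ok, %{key: key, value: value}}
  end

  def validate(_params), do: {:error, :invalid_increment}
end
```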
#### `Platform.KeyValuePairs`
represents persistent data and allows interaction with the database. To achieve consistency, data are stored using `UPSERT`.
For a new `<key_value>`, a record is created. For an existing `<key_value>`, the value is incremented at the database level without the need for an independent read.
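A sketch of such an `UPSERT` with Ecto, assuming a `key_value_pairs` table with `key` and `value` columns and a unique index on `key` (the schema, table, and function names are assumptions):

```elixir
defmodule Platform.KeyValuePairs.Pair do
  use Ecto.Schema

  schema "key_value_pairs" do
    field :key, :string
    field :value, :integer, default: 0
  end
end

defmodule Platform.KeyValuePairs do
  alias Platform.KeyValuePairs.Pair
  alias Platform.Repo

  # Insert a new row, or atomically increment the existing one at the
  # database level (requires a unique index on :key).
  def upsert(key, increment) do
    Repo.insert(
      %Pair{key: key, value: increment},
      on_conflict: [inc: [value: increment]],
      conflict_target: :key
    )
  end
end
```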
#### `Platform.Aggregator` subsystem
is responsible for accumulating incoming increments and storing `Platform.KeyValuePairs.Pair` records in the database. It uses an `ets` table for in-memory storage.
`Platform.Aggregator.Producer` stores incoming data using the atomic `:ets.update_counter/4`.
`Platform.Aggregator.Consumer` reads all increments from the `ets` table, stores them in the database, and atomically decreases the counters.
A successful flow may result in records with a value of zero (see the Improvements section).
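A sketch of the two in-memory operations, assuming a public `ets` table named `:increments` (the table name, module name, and function names are assumptions):

```elixir
defmodule Platform.Aggregator.Sketch do
  @table :increments  # table name is an assumption

  # Producer side: atomically add the increment; the {key, 0} default
  # creates the row if the key is not present yet.
  def add(%{key: key, value: value}) do
    :ets.update_counter(@table, key, {2, value}, {key, 0})
    :ok
  end

  # Consumer side: after a successful database write of `flushed`,
  # atomically subtract it; increments that arrived in the meantime
  # are preserved.
  def ack(key, flushed) do
    :ets.update_counter(@table, key, {2, -flushed})
  end
end
```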
"TODO DIAGRAM"
## Persistency
Incoming increments are stored in memory in an `ets` table accessed by the `Platform.Aggregator.Producer` and `Platform.Aggregator.Consumer` processes. To achieve data consistency, both use the atomic `:ets.update_counter` to change the value associated with a given key, which ensures there is no race condition between writes.
In-memory data are stored in the database at a given interval using `UPSERT` to avoid a read-write transaction.
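A sketch of the periodic flush, reusing the hypothetical `:increments` table and `Platform.KeyValuePairs.upsert/2` from the sketches above:

```elixir
defmodule Platform.Aggregator.Flush do
  @table :increments  # table name is an assumption

  # Walk the whole table, persist each non-zero counter with UPSERT,
  # then atomically subtract the flushed amount from the in-memory value.
  def flush do
    for {key, value} <- :ets.tab2list(@table), value > 0 do
      {:ok, _pair} = Platform.KeyValuePairs.upsert(key, value)
      :ets.update_counter(@table, key, {2, -value})
    end

    :ok
  end
end
```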
## Improvements
### "CONSUMER interval"
by default, the interval is set to 10 seconds. It's triggered after successful database writes for all data in the ets table. Time spent on data manipulation is deducted from the baseline for the next interval. Due to the synchronous nature of data processing, time spent may exceed the given interval.
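A minimal sketch of that scheduling logic, assuming a GenServer-based consumer and the hypothetical `flush/0` from the sketch above; the actual implementation may differ:

```elixir
defmodule Platform.Aggregator.Consumer do
  use GenServer

  @interval :timer.seconds(10)

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    schedule(@interval)
    {:ok, %{}}
  end

  @impl true
  def handle_info(:flush, state) do
    started = System.monotonic_time(:millisecond)
    Platform.Aggregator.Flush.flush()
    elapsed = System.monotonic_time(:millisecond) - started

    # Deduct the time spent flushing from the baseline interval,
    # never scheduling sooner than "immediately".
    schedule(max(@interval - elapsed, 0))
    {:noreply, state}
  end

  defp schedule(delay), do: Process.send_after(self(), :flush, delay)
end
```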
### `ets` garbage collection
Consumer decrements may lead to records with the value set to 0. For zero-value records, a database query is not performed. However, they increase data processing time and the memory footprint. Unnecessary records should eventually be removed from the in-memory storage; a GC mechanism should be implemented.
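One possible GC sketch, again assuming the hypothetical `:increments` table: delete zero-value entries in a single `:ets.select_delete/2` pass.

```elixir
defmodule Platform.Aggregator.GC do
  @table :increments  # table name is an assumption

  # Delete every {key, 0} entry in one pass; entries whose value is
  # non-zero when visited are left untouched.
  def collect do
    :ets.select_delete(@table, [{{:"$1", 0}, [], [true]}])
  end
end
```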