Idea for the server logic reorganization interaction:
=====================================================
## Goals
### Main goals:
- Make conventions enforced and explicit, rather than implicit and optional -> less room for misunderstandings.
- Make a template-like mechanism so that new integrations can be added faster, safer and easier (which doesn't mean easy) without modifying the core of the system.
- Something similar to what was done for the backoffice, but applied to flows and integrations.
- Make the different parts of the core simpler, smaller, less interdependent, and easier to understand and maintain.
- Open the backend to a wider audience, so people without haskell or deep sql knowledge can work on most tasks.
- Reuse what is already working, and evolve gradually from there.
### Other goals:
- Make the business-logic explicit so devs can get familiar with the core of the system faster, even if the business logic is always changing.
- Make the system structure and processes easier to understand for everyone, including customer-support.
- Make the system as a whole (from frontend to backoffice, from on-sis to whitelabel) easier to run locally and to deploy in production.
- Make deployment faster and safer, and possible to roll back.
- Reduce the amount of code... by at least 2/3!
- A technical business analyst should be able to generate code.
- Eventually be able to start communicating with the client through websockets.
- Decrease the system's overall latency.
- Make the database more systematic, scalable and flexible towards deep changes such as sharding, if necessary.
- Decouple the XCO logic from the system. Eventually move it to an external service (coinweb node).
- Safer hotpatch tools.
- Type-safe interaction between the components of the system.
## Model Overview:
### Business logic components:
Right now, most of the business logic is encoded as "mostly raw" sql files, plus the yaml DSL for the backoffice. We'd like to extend this DSL to use the following types of yaml templates:
- backoffice_panels: maybe with some slight modifications or improvements, but mostly they will remain very similar to how they are now.
- api_endpoints: representing sql functions to be used as api endpoints. Similar to "backoffice action"s, they are mainly a snippet of sql enriched with meta-information (should this input come from the payload or some header? Or should it be taken from the signed jwt? ...etc).
- processes: represents sql tables, their update logic, constraints and triggers.
- output_communication_channels: represents events/information to be sent to third parties and the frontend side; will eventually substitute `ack-connector-get-data`.
- input_communication_channels: represents events/information to be fed into the system; will eventually substitute `ack-connector-update-data` and `trigger_l2_payments`.
- raw_sql: we'd like to reduce the amount of sql to the minimum but, being realistic, there will still be places where we'll need raw sql, especially at the beginning when moving from what we have now. Instead of relying on implicit conventions, as we do now, to decide which schema-namespace a file runs on, in what order the sql files are executed, and how the migration handles them, we make this explicit by attaching some bits of information to the sql file.
- onramp_config: the same as it is now.
- migration_to_apply: currently, we decide what migration script(s) to apply based on a convention on where the file is stored. The idea is to keep it the same but make it explicit in a yaml file.
- use_case_test: mostly the same as it is now, with some improvements.
- translations: mostly the same as it is now, with some improvements.
The only 3 really new elements here are `processes`, `output_communication_channels` and `input_communication_channels`. These will be described in greater detail in the next section, "Business logic DSL".
The combination of "backoffice_panels", "api_endpoints", "processes", "output_communication_channels", "input_communication_channels", "raw_sql", "onramp_config", "translations" and "use_case_test" will become the "business-spec-config". Notice this includes everything except the `migration_to_apply.yaml` file.
### Core components:
We split the system into 4 parts:
- `spec-parser`: It reads the `business-spec-config`, makes some static checks, and generates from it the raw sql files and the typescript connector module.
The sql files will be grouped into 2 different folders:
- `sql_from_scratch` : These are plain sql files that, if run over an empty database with the right permissions, will generate the sql schema and content required to run the system. It is possible to run them by just doing `psql --single-transaction -f sql_from_scratch/index.sql`.
- `sql_tests` : These are plain sql files. To run them one can just run `psql -f sql_tests/index.sql`; but they will only pass when run against a database that already contains the schema (i.e. one created from `sql_from_scratch`, or a migrated one).
- `migration-runner` : It runs `sql_from_scratch` on an empty database to find out the "target schema", then reads the `migration_to_apply` config
to find out what migration scripts should run on the "current database", then for each of these files:
- start transaction.
- apply migration script.
- find out the database schema using the schema-diff.
- is database-schema == target-schema ?
- yes: commit and break.
- no: rollback and continue with next file.
- `test-runner`: It runs the generated `sql_tests` against the migrated database.
- `system-runtime`: The set of servers and infrastructure that, connected to the DB, work as the onramp backend and the backoffice backend.
```
business-spec-config migration_to_apply current database
| | |
| | |
v | |
*-------------* | |
| spec-parser | | |
*-------------* | |
/|\ | |
/ | \ | |
/ | \ | |
/ | \ | |
/ | \ | |
/ | | v |
/ | V *------------------* |
/ | sql to run from scratch -------->| migration-runner |<--/
/ | *------------------*
/ | |
| | |
| v *-------------* v
| sql to run as tests----->| test-runner |<------migrated database
| *-------------* |
| ^ |
| | |
| | |
v *----------------* |
connector module ------------->| system-runtime |<-----------/
*----------------*
```
## Business logic DSL
<!--
While most of the business logic and hence the model must be kept, we'd like to clean it and transform it into something more consistent and systematic.
The key elements of the model are processes, input communication-channels, output communication-channels and transitions:
- Processes: represent the state and logic that links related events. Internally each process instance is
represented by a row in a table, and references by that row's primary key, which we'd say it is the process's key.
An example would be "operation-cardstream-to-l2coin". Each step of the flow evolution (i.e, when_created, when_potentially_exec,
when_cancel ...etc) will be called a stage. For example, there would be the "waiting_for_seon" stage, or the "operation_cancel" stage.
- input communication-channel: represents an event that any process can listen to and react afterwards. Internally
each type of input communication-channel is represented by a private function that it is called for each event instance.
An example would be a call to the "l2coin_payment" function. **They might trigger 0 or more processes**.
- transition: very similar to _input communication-channel_, but they are input events related to a specific process
under some specific stages. An example would be calling the "cancel_ingress_op" function. A transition always
**triggers exactly 1 process, otherwise it will throw an exception**.
- output communication-channel: represent a signal from the backend system to thirdparties or the frontend. Internally
each type of output communication-channel event is represented with a combination of a table (to be used as queue)
and the piece of code required to gather those rows and mark them as read. This would be equivalent to the
ack-connector-get-data.
-->
### Process
Processes are state-enhanced loopless flow diagrams. They are the formalization of the convention that we have
already been following; the idea is that, by formalizing the concept and making it explicit rather than implicit, we
make sure it always holds and is done the same way, making the system easier to understand and to modify.
To recap, this is the convention we've followed so far (a minimal sql sketch of such a table follows the list):
- Business flows are represented by big tables (e.g. `operation_l2coin_to_redeem`).
- Each of these flows has a primary key (e.g. the `op_id`) and starts with an initial set of `NOT NULL` fields.
- As the flow evolves through different stages (e.g. payment rejected, or the user gave the CC details), some `NULL` fields get filled in.
- Depending on which stage you are at, or have been through, some fields are supposed to be `null` or `not null`.
- This is sometimes enforced through db-constraints. (unfortunately not always)
- There are some "transition" fields that store the timestamp at which a stage was reached (e.g. `when_potentially_exec`, `when_cancel`).
- There's a static expected (partial-)order in which these stage transitions are supposed to happen.
- This logic of the sequence of stages is sometimes enforced through db-constraints. (unfortunately not always)
- When certain stages are reached, some "events" are issued, triggering calls to external services.
- this is done on the infamous `ack_connector_get_data`.
- to support this we need to keep adding indexes.
- frequently some of these indexes are missing, generating a bottleneck
- other times indexes will not be used, making writes slower without any advantage.
- At certain stages, we expect calls from the frontend side to evolve to the next state.
- At certain stages, we expect calls from the external services to evolve to the next state.
- this is done on the infamous `ack_connector_update_data`.
	- We tried to add some timeout logic, but unfortunately it only works partially and is not very reliable.
- These rows will reference other tables, sometimes implicitly, as they would actually redundantly copy the referenced values
  into the same table; the reasons for this were:
- To be able to define table-wise constraints.
- "Snapshooting" the references, so a change in the referenced rows (i.e fiat exchange rates) would not break an operation in flight.
- Being able to define some indexes we need.
- To cope with the redundancy risks, sometimes, we would keep foreign-key constraints. (unfortunately not always)
- Also, sometimes, for some tables, we'd add the `when_created` field to the primary key, so we could represent historical data.
	- We stopped doing this because it made the sql code harder to understand.
- But if it were done systematically we could get it back.
- Sometimes, we wrapped the primary keys under some domain (e.g. `txid`, `access_id`, `merchant_name`) for type safety.
	- This domain info is also already used on the backoffice to generate navigation links between the panels.
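As an illustration of the convention above, here is a minimal sketch of such a flow table. Every name in it (`operation_example_flow`, the domains, the stages) is made up for the example; real tables are bigger, but follow the same pattern:
```
-- Hypothetical domains for type safety (made-up names).
create domain OP_ID    as uuid;
create domain MERCHANT as text;

-- A flow table following the convention: a primary key, NOT NULL initial fields,
-- one when_«stage» timestamp per stage, and constraints tying data to stages.
create table operation_example_flow
  ( op_id                 OP_ID       primary key
  -- initial stage
  , when_created          timestamptz not null default current_timestamp
  , merchant              MERCHANT    not null
  , amount_requested      numeric     not null
  -- "potentially executed" stage
  , when_potentially_exec timestamptz
  , cc_details_token      text
  -- "cancelled" stage
  , when_cancel           timestamptz
  -- data belonging to a stage must be present once that stage is reached
  , check (when_potentially_exec is null or cc_details_token is not null)
  -- an operation cannot be both executed and cancelled
  , check (when_potentially_exec is null or when_cancel is null)
  );

-- One of the partial indexes that ack_connector_get_data would need to
-- find the pending rows efficiently.
create index operation_example_flow_pending_idx
  on operation_example_flow (when_created)
  where when_potentially_exec is null and when_cancel is null;
```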
Basically, the proposal is to represent this **very same thing** with an explicit yaml file, and from that file generate:
- The table.
- The `ack_connector_get_data`.
- The `ack_connector_update_data`.
- Required indexes.
- **Required constraints**.
- Types for the events and for the keys.
- calls to `notify ack_connector`
In addition, because the info is explicitly stated, it would be very easy to also generate:
- The flow diagram of the process (we can generate a [mermaid markdown](https://mermaid-js.github.io/mermaid-live-editor), and from that a pdf or png)
- The `getPersistentState` function for the frontend.
- Diagrams representing data dependencies; this might eventually help to split independent services out of the system.
Notice that moving from SQL to this DSL represented as a yaml file might look like a big change, but:
- We are not doing anything different, we'll do it exactly the same, following the same conventions.
- We do not need to do everything at once, we can start with only some tables, and even for those tables, keep most events the same (`ack_connector_update/get_data`)... then gradually start covering more and more, one table/event at a time.
- The code to be generated is straightforward; the only thing that could get a bit more complicated is type-checking... but because postgresql already type checks at deployment time, we can ignore it and let postgresql do its job.
- The code to be generated could be kept such that there's a 1-to-1 relation to the original file, that is, `headless_operation.process.yaml` would generate
  the `03_headless_operation.sql` file, so it would be easy for the developers to see what's going on under the hood.
### Process DSL Syntax and semantics:
Each process is defined in a yaml file named after the process itself. The top object has only 1 attribute, the keyword `process`, and under that
attribute a tuple whose first element is the process name and whose second element is the process definition:
```
process:
- «process name»
- «process definition»
```
The process definition is an object made by the following attributes:
`references`: a list containing definition files to be processed before this one. It will be used to order the files (instead of using `00_foo`, `01_xoo` etc). It might
eventually also be used to generate the `set search_path` command.
`loops`: a boolean; if true, it means that after some stages it will be possible to go back to the initial stage.
`key`: a _data definition block_ (explained later). If `loops = true`, then the field `when_created :: TIMESTAMPTZ` will be implicitly included here. This key specifies
the list of attributes to be included in the primary key of the process's underlying table, and is used to create the sql type (a domain if it has a single field, a record if more than one)
that represents the process key (see the sketch after the example below).
`start_with`: a trigger (explained later) that, when satisfied, will create a new process instance. Switch and timeout triggers are not allowed here.
`stages`: an ordered list of key-values, where each key represents a stage name and the value its definition. The first stage always needs to be called "initial", and
represents the only initial stage. A stage can only reference stages that are defined later in the list.
Example:
```
process:
- merchant_ingress_conf
- references:
- merchant
- fiat_coin
key:
merchant: MERCHANT
fiat_currency_requested: FIAT_COIN
loops: true
start_with: «transition»
stages:
- initial: «stage definition»
- disabled_for_non_verified_traffic: «stage definition»
- disabled_for_all: «stage definition»
```
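For illustration, the `key` block of the example above could generate something along the following lines. This is only a sketch: the generated names, and the domain stubs standing in for the `merchant` and `fiat_coin` references, are assumptions.
```
-- Stubs for the referenced processes' key types (in reality they would come
-- from the `merchant` and `fiat_coin` definitions listed under `references`).
create domain MERCHANT  as text;
create domain FIAT_COIN as text;

-- The key has more than one field, so a record type is created for it:
create type merchant_ingress_conf_key as
  ( merchant                MERCHANT
  , fiat_currency_requested FIAT_COIN
  , when_created            timestamptz   -- implicit, because `loops: true`
  );

-- The key fields also become the underlying table's primary key:
create table merchant_ingress_conf
  ( merchant                MERCHANT    not null
  , fiat_currency_requested FIAT_COIN   not null
  , when_created            timestamptz not null default current_timestamp
  -- ...plus the fields coming from the stages' data definition blocks...
  , primary key (merchant, fiat_currency_requested, when_created)
  );
```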
## Flow Diagram:
```
*-------------------*
| |
| Block Explorer |
| |
*-------------------*
^
|
|
##============## ##===================## ##===========================|======##
|| || || || || | ||
|| || || || || *----------* ||
|| Merchants || || Frontend APPS || || External Services | Xco Node | ||
|| || || || || *----------* ||
|| || || || || /-------\ ||
|| || || || || | | ||
##============## ##===================## ##=======|=======|==================##
^ ^ | | ^
| | | v |
v | v | |
*-------* *------------* *------* | |
| READ | | Get State | | Put |<-------------------------/ |
| WRITE | | HTTP & | | Only | |
| API | | WEB SOCKET | | API |<-------------\ |
*-------* *------------* *------* | |
^ ^ | | |
| | | | |
| | | | |
v | v | |
*-------------------------------* *--------------* |
| | | | |
| |----------->| Aggregators | |
| Business Logic | | | |
| Rules | *--------------* |
| And State | |
| |------service events------->--------/
| |
*-------------------------------*
^ |
| |
| v
| *--------------* *-----------------------*
| | DB Read Copy |--------->| Business Analitics |
| *--------------* *-----------------------*
| |
| |
| v
*--------------*
| |
| Back Office |
| |
*--------------*
```
## Components
## Migrations
## Deployment
## Gradual Implementation
---------------------------------------------
## Tasks:
We split the implementation of the new approach into gradual steps, in such a way that the code/logic migration could start almost immediately. But this
migration will be time consuming, so instead of being done as a set of tasks, the idea is to keep the current logic as it is, implement new code following
the new approach, and keep room on every PR for small refactors of old code; so that slowly but continuously we can get the whole code base onto the new approach.
The steps to implement the features required for the new approach are:
### Task A: Endpoint DSL
#### Implementation:
We need to modify `deployment` package so:
- Endpoints on the API can be defined using the DSL for endpoints.
- The endpoint DSL will generate the same code for endpoints as is currently generated; every extra feature (read_replica, from_jwt/payload/header ...etc) will be ignored.
- **Endpoints defined without the DSL will not be allowed**. In order to do that, the `deployment` package will reject sql files stored under the `API` folder.
- Reorganize endpoints so there are no 2 endpoints with the same name. **This will be the most challenging change** for this phase. It might get tricky, but
  probably this can be done without modifying anything on the frontend side.
- Make the `deployment` package detect and enforce that there are no 2 different endpoints under the same name. Maybe the easiest way to do this is just making a query
  to the sql catalog and checking whether there is more than 1 function in an api schema with the same name; something along the lines of the sketch below.
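A hypothetical sketch of such a catalog query (the schema names are the ones mentioned in this document; the exact check the `deployment` package ends up running may differ):
```
-- List endpoint names that are defined more than once within the same api schema.
select n.nspname as api_schema
     , p.proname as endpoint_name
     , count(*)  as definitions
  from pg_catalog.pg_proc      p
  join pg_catalog.pg_namespace n on n.oid = p.pronamespace
 where n.nspname in ('for_backoffice', 'for_external', 'for_internal')
 group by n.nspname, p.proname
having count(*) > 1;
```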
#### Benefits:
- Currently, the mechanism to detect which arguments are used and which ones shall be discarded doesn't work well in some corner cases when there is more
  than 1 endpoint with the same name. The proposed changes will fix this.
- In some corner cases when there are several endpoints with optional arguments and the same name, some combinations of optional arguments will fail at runtime
as the system won't be able to find the "best" function to route the call.
- For endpoints with many arguments, if they have repeated names, then hotpatching them becomes more tedious, as we'll need to write every argument perfectly
  before being able to "`\ef`" them (see the example below). This doesn't sound like a big issue, but in the past, during emergencies, it has been a problem.
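For instance (the endpoint name and argument types here are made up): with a unique name, `\ef` only needs the function name, while an overloaded name forces us to spell out the exact signature before psql will open the editor.
```
-- Unique name: this is enough.
\ef for_external.create_operation

-- Overloaded name: psql complains that the name is ambiguous, so the full
-- argument list has to be typed exactly.
\ef for_external.create_operation(text, text, numeric, jsonb)
```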
#### Continuous improvement and refactoring:
After this phase is done, any new endpoint will need to be defined through the endpoint DSL, and not repeat names.
### Task B: New folder structure and clean-up:
#### Implementation:
- Reorganize the top level folder structure so it looks like this: (this will require many small modifications to many parts of the code and scripts)
```
on-server:
.circleci/
infrastructure/
devops/
...kubernetes, docker, devops & execution scripts...
packages/
...haskell packages, modules and apps...
# one folder per haskell package, not aggregating folders like `ON-RAMP`
PriceFetcher/
on-sis
js_src/ # Unless we can remove it, if we can remove, then just remove it.
stack.yaml
README.md
subrepos/
«web frontend code repo» # as git subrepo
«backoffice code repo» # as git subrepo
«whitelabel code repo» # as git subrepo
README.md
sql_files/
migrations/
translation/
user-case/
business_logic/
apis/
backoffice/
schema/
core_extensions/
extensions/
tables/
base_views.sql
sequences.sql
views/
views/
README.md
initial_data.sql
onramp-config.yaml
README.md
doc/
README.md
.gitignore
start.sh # might be empty with a TODO.
```
- Remove every doc and README that is outdated; keep only the pieces that are still accurate.
- Create README on every folder explaining what's that folder for. 3 or 4 words will be enough, though a small paragraph would be better.
- Fuse DEVELOPMENT.md and README.md together.
- Remove every unused file (e.g. `demo.conf.yaml`, the linter ...etc). If anyone wants to keep some of these, they can gitignore them and keep them just for themselves.
- Remove as much code as possible from the `xco-node` package.
- Split `xco-node` package into 2 different packages: `onramp-prelude` and `crypto-utilities`
- Review and remove those haskell packages not used.
- Remove the `csql` package.
- If `js_src` is not used, remove it.
- rename `sql_files` to `on-ramp`
- eventually in the future, we could use some tooling like [weeder](https://hackage.haskell.org/package/weeder) to detect dead code, and enforce its elimination
at CI/CD.
#### Benefits:
- Getting used to the codebase has been one of the main challenges for newcomer backend devs; removing unused files and code, and reorganizing the
  folders in a meaningful way, will ease and speed up getting familiar with the code base.
- Outdated docs harm more than they help.
- It is not clear what is used and what is not; this will help find out.
#### Continuous improvement and refactoring:
Starting after this phase, every new PR should:
- Either have more red than green (the total number of lines got reduced) or explicitly remove some outdated or unused code. If it has more green than red,
  then the PR description should explicitly state what dead/outdated piece of code was removed in the PR.
- Update or add something to the docs or README. It could be as simple as to add 4 or 5 words to some doc/README, reword a sentence that was hard to understand,
or remove something that didn't hold anymore.
### Task C: Simplify and improve migrations.
#### Implementation
Currently there are many schemas being used, each one behaves differently with respect to the migration mechanism, and how they behave is not well known to most programmers.
So:
- Modify the `development` package so the only used schemas are `internal_schema` and each one of the api schemas (`for_backoffice`, `for_external`, `for_internal`).
	- This is harder than it seems, because the schema-diff will complain about the new stuff; to satisfy it, things should be added to a migration script.
	- This could be done gradually over several PRs/deployments; a catalog query like the one sketched after this list can help track which schemas are still around.
	- **Notice** this includes no longer using the `public` schema (currently used for views) and the `__external_extension_schema__` schema currently used internally in `Control.MigrationRunner.hs`.
- **Instead of querying the database about endpoints information, extract it from the endpoint DSL yaml files**.
- Once that's done, modify how `development` works so the api schemas (`for_backoffice`, `for_external`, `for_internal`) are used on the schema-diff, **and** they
  are not nuked on every migration, **and** they are only executed when deploying to localhost (i.e. the same way `internal_schema` works).
	- This requires changes to `Migration.hs`; it also means that at the beginning we'll need big migration scripts, as the endpoints will need to go into the migration files.
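A hypothetical helper to track the clean-up progress; it simply lists the schemas that still exist besides the ones we intend to keep (the allowed list below is an assumption based on the schemas named above):
```
-- Schemas still present that are neither the allowed ones nor postgres' own.
select nspname
  from pg_catalog.pg_namespace
 where nspname not in ('internal_schema', 'for_backoffice', 'for_external', 'for_internal')
   and nspname <> 'information_schema'
   and nspname not like 'pg_%'
 order by nspname;
```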
#### Benefits.
- Faster migrations: once the deployments with the big migration files are done, nothing will be nuked and recreated again, only the things that change.
- Faster migrations: No longer need to introspect the database while generating code.
- Easier to understand migrations: everything behaves the same; what is under `business_logic/` represents what the database should be,
  then the migration-tool tries running every file/folder at `migrations/` until it finds one that turns the current database schema into the one it should be, rolling back
  every time it fails.
- Simplification of code.
- Safer patches: **if one patch gets accidentally forgotten** it will neither get silently removed nor silently kept, instead, the deployment will fail.
#### Continuous improvement and refactoring:
- ????
### Task W:
#### Implementation
#### Benefits.
#### Continuous improvement and refactoring:
### Task ?: Schema diffs with every schema.
### Task ?: Multi schema.
### Task ?: Separate the blockchain.
### Task ?: Experimental db copies??.
### Task D: No need to re-copy migrations files.
### Task E: Wrapping of sql files.
### Task F: Generating table and types.
### Task G: Generate code.
### Task X: Events for blockchain related operations.
### Task Y: Start using timeouts.
### Task W: Generation of docs.
### Task H: New connectors.
### Task I: Web-sockets.
### Task L: Configurable semantic for events.
### Task J: Backend side translation I.
### Task K: Backend side translation II.
### Task M: Endpoint DSL & JWT generation.
### Task N: Backend implemented Authentication
### Task P: Mocked API
### Task R: Improved Test
### Task S: Test coverage enforcement.
### Task T: End-to-end testing including frontend.
### Task O: `start.sh` to run everything.
### Task V: Better hot-patch and rollbacks.
## Continuous improvement:
--
Code Generation
=============================================
A _data definition block_ defines some columns and foreign keys to be used in the definition of the process's underlying table. It is
an object where the keys represent the field names, and the values their types; for example:
```
merchant_verified_customer: BOOLEAN
merchant_customer_email: EMAIL
deadline: TIMESTAMPTZ
merchant_customer: MERCHANT_CUSTOMER
ingress_skin: JSONB
provided_cc_details: CC_DETAILS ?!
```
Types will always be fully capitalized (otherwise it is a syntax error) and can optionally be followed by the marks "?" and/or "!".
"?" means the field is nullable even when the stage at which it is defined has been reached. "!" means the field will be deleted as soon as the process reaches a
stage where we know it will no longer be used. The "?" and "!" modifiers are not allowed on the `keys` _data definition block_.
If a type is the key of another process with more than 1 field, then it will get expanded into those fields prepended by `«original_field_name»__`,
and a foreign key to the table of the referenced process will be added. For example, this:
```
conf_used: MERCHANT_INGRESS_CONF
amount_requested: NAT
deadline: TIMESTAMPTZ
```
will get expanded into:
```
create table .....
( ...
, conf_used__merchant MERCHANT
, conf_used__fiat_currency_requested FIAT_COIN
, amount_requested NAT
, deadline TIMESTAMPTZ
, ...
, FOREIGN KEY (conf_used__merchant,conf_used__fiat_currency_requested)
references merchant_ingress_conf(merchant,fiat_currency_requested)
, ...
)
;
```
The _data definition block_ will also be used to find out and generate the "not null constraints", but which ones are generated depends on the specific
stage at which the _data definition block_ appears. For the `keys` _data definition block_ and the one from the `initial` stage, every column generated will be `NOT NULL`
unless it has a "?" or "!" modifier.
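For instance, if the _data definition block_ shown earlier belonged to the `initial` stage, the generated columns might look roughly like this (the domain stubs, the table name and the implicit `when_initial` column are assumptions made to keep the sketch self-contained):
```
-- Domain stubs; in reality these domains would already exist.
create domain EMAIL             as text;
create domain MERCHANT_CUSTOMER as uuid;
create domain CC_DETAILS        as jsonb;

-- Plain fields from the initial stage become NOT NULL;
-- fields marked with "?" and/or "!" stay nullable.
create table example_process
  ( example_key                uuid               primary key
  , when_initial               timestamptz        not null default current_timestamp
  , merchant_verified_customer boolean            not null
  , merchant_customer_email    EMAIL              not null
  , deadline                   timestamptz        not null
  , merchant_customer          MERCHANT_CUSTOMER  not null
  , ingress_skin               jsonb              not null
  , provided_cc_details        CC_DETAILS                  -- "?!": nullable, cleared once unused
  );
```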
#### Process DSL Syntax and semantics (stage):
Stages are represented by objects with the following keys:
`defines`: the _data definition block_. Implicitly, it will add a field `when_«stage-name» TIMESTAMPTZ`.
`evolve_to`: either the string `"final"` if it is a final stage, or a key-value object where the keys are stages and the values are lists of triggers (defined later).
`signals`: the list of output events to call when this stage is reached.
An example would be:
```
stages:
- initial:
defines:
          fee_to_charge: FLOAT
          payment_processor_to_use: CC_PROCESSOR
signals:
- «signal1»
- «signal2»
- ...
- «signalN»
evolve_to:
disabled_for_non_verified_traffic:
- «trigger1»
- «trigger2»
- ...
- «triggerN»
disabled_for_all:
- «trigger1»
- «trigger2»
- ...
- «triggerM»
initial:
- «trigger1»
- «trigger2»
- ...
- «triggerP»
- disabled_for_non_verified_traffic:
evolve_to:
disabled_for_all:
- «trigger1»
- «trigger2»
- ...
- «triggerM»
- disabled_for_all:
evolve_to: final
```
#### Process DSL Syntax and semantics (triggers and transitions):
Triggers are one of the following:
- transition:
- event:
- switch:
- timeout_in:
- timeout_at:
#### Process DSL code generation (table):
In order to generate the table that will represent the process and its "not null" constraints, we'll need to analyze when a field is supposed to be null or not
and check that the definitions are consistent. If a field appears on more than one stage (in their _data definition blocks_), then every time it appears it will need to have the
same type, and if it uses "!" once, then it should use it every time it appears.
- For every stage `A`, we'll say that stage `B` is its descendant if we can go from `A` to `B` through **one or more** jumps without returning to the `initial` stage.
- For the keys, we'll consider every stage `B` to be a descendant of the keys.
- For every stage `A`, we'll say that a stage (or the keys) `B` is its ancestor if `A` is a descendant of `B`.
For every field `f` and a specific stage (or the keys) `S`, we'll say that:
- `f` `is_used` on `S`, if on stage `S` it is mentioned somewhere in the `signals` section.
- `f` `is_defined_as_required` on `S`, if `f` appears in `S`'s _data definition block_ without the `?` modifier.
- `f` `is_volatile` if it has the `!` modifier.
- `f` `is_defined_as_optional` at `S` if it has the `?` modifier in `S`'s _data definition block_.
- `f` `is_defined` at `S` if `f` `is_defined_as_required` on `S` or `f` `is_defined_as_optional` at `S`.
- `f` `will_be_used` after `S`, if there's a stage `S2` such that `f` `is_used` on `S2` and `S2` is a descendant of `S`.
- `f` is `inactive` on `S` iff **either** is true:
  - `f` `is_volatile` and NOT ( `f` `will_be_used` after `S` ).
  - for every `S2` such that (`S2` is an ancestor of `S` or `S2` = `S`), NOT (`f` `is_defined` on `S2`).
- `f` is `active` on `S` iff **both** are true:
  - NOT (`f` is `inactive` on `S`).
  - there is some `S2` such that (`S2` is an ancestor of `S` or `S2` = `S`) and `f` `is_defined_as_required` on `S2`.
- `f` is `unknown` on `S` if it is neither `inactive` nor `active`.
To generate the not-null constraints (a worked example is given below):
- for every `f` defined as part of the key, mark it `NOT NULL` in the table definition.
- for every `f` and stage `S` such that `f` is `inactive` on `S`, add the constraint `check ( when_«S» is null or (f is null))`.
- for every `f` and stage `S` such that `f` is `active` on `S`, add the constraint `check ( when_«S» is null or (f is not null))`.
- for every `S`, take `S0`, `S1` .. `SN` as the list of its direct ancestor stages, then add
  the constraint `check ( when_«S» is null or when_«S0» is not null or when_«S1» is not null or ... or when_«SN» is not null )`.
If there's an `f` such that `f` `is_defined_as_required` at `S`, but there's a stage `S2` such that `S2` is an ancestor of `S` on which `f` `is_defined_as_optional`, then
abort the sql generation with the error message "f has been defined at S as required, but it might already be living at a previous stage S2 as optional".
If there's an `f` such that `f` `is_defined` at `S` and `f` `is_volatile`, but NOT(`f` `is_used` on `S`) and NOT(`f` `will_be_used` after `S`), then abort the
sql generation with the error message "f defined at S as volatile, but it will never be used at that point".
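As a hypothetical worked example of the constraints these rules would produce, take a made-up process with stages `initial -> approved -> settled` and `initial -> cancelled`, where `approved_by` is defined (required) at `approved` and `cancel_reason` at `cancelled`:
```
create table example_process
  ( example_id     uuid        primary key              -- key fields: NOT NULL
  , when_initial   timestamptz not null default current_timestamp
  , when_approved  timestamptz
  , approved_by    text
  , when_settled   timestamptz
  , when_cancelled timestamptz
  , cancel_reason  text
  -- `active` fields must be present once their stage (or a descendant) is reached
  , check (when_approved  is null or approved_by   is not null)
  , check (when_settled   is null or approved_by   is not null)
  , check (when_cancelled is null or cancel_reason is not null)
  -- `inactive` fields (never defined on this branch) must stay null
  , check (when_approved  is null or cancel_reason is null)
  , check (when_settled   is null or cancel_reason is null)
  , check (when_cancelled is null or approved_by   is null)
  -- stage ordering: a stage requires at least one of its direct ancestors
  , check (when_approved  is null or when_initial  is not null)
  , check (when_cancelled is null or when_initial  is not null)
  , check (when_settled   is null or when_approved is not null)
  );
```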
#### Process DSL code generation (switch ):
```
create function tick_stage_«p»_«S»(keyArg1, keyArg2 .. keyArgN, boolean ,field1, field2 .. fieldN)
returns timestamptz as $$
select case when («switch_cond1»)
then goto_«p»_«switch_cond1_destiny» ( keyArg1, keyArg2 .. keyArgN
, nullif(not «switch_cond1», true)
)
               when («switch_cond2»)
                 then goto_«p»_«switch_cond2_destiny» ( keyArg1, keyArg2 .. keyArgN
                                                      , nullif(not «switch_cond2», true)
                                                      )
...
when («switch_condN»)
then goto_«p»_«switch_condN_destiny» ( keyArg1, keyArg2 .. keyArgN
, nullif(not «switch_condN», true)
)
...
when («timeout1» <= CURRENT_TIMESTAMP)
then goto_«p»_«timeout1_destiny» ( keyArg1, keyArg2 .. keyArgN
, nullif(«timeout1» > CURRENT_TIMESTAMP, true)
)
when («timeout2» <= CURRENT_TIMESTAMP)
then goto_«p»_«timeout2_destiny» ( keyArg1, keyArg2 .. keyArgN
, nullif(«timeout2» > CURRENT_TIMESTAMP, true)
)
...
when («timeoutN» <= CURRENT_TIMESTAMP)
then goto_«p»_«timeoutN_destiny» ( keyArg1, keyArg2 .. keyArgN
, nullif(«timeoutN» > CURRENT_TIMESTAMP, true)
)
               else CURRENT_TIMESTAMP
end
$$ language sql volatile;
```
TODO: make this function do nothing if `boolean` is false.
```
create function aux_goto_«p»_«S»(keyArg1, keyArg2 .. keyArgN, boolean ,old_field1, old_field2 .. old_fieldN)
returns timestamptz as $$
    update «p»
       set when_«S» = CURRENT_TIMESTAMP
         , «defined_at_S_field1» = coalesce(«defined_at_S_field1», old_field_i)
         , «defined_at_S_field2» = coalesce(«defined_at_S_field2», old_field_j)
         , ...
         , «defined_at_S_fieldN» = coalesce(«defined_at_S_fieldN», old_field_k)
from «extraLookedUp1»
, «extraLookedUp2»
, ..
, «extraLookedUpN»
where keyArg1 = keyField1
and keyArg2 = keyField2
and ..
       and keyArgN = keyFieldN
and «extraLookedUp1 cond»
and «extraLookedUp2 cond»
and ..
and «extraLookedUpN cond»
returning tick_stage_«p»_«S»( keyArg1, keyArg2, .. keyArgN
, tick_triggers_«p»_«S»( keyArg1, keyArg2 .. keyArgN
, extraLookedUp1, extraLookedUp2 .. extraLookedUpN
, field1, field2 .. fieldN
)
, field1, field2 .. fieldN
)
$$ language sql volatile;
```
```
create function goto_«p»_«S»(keyArg1, keyArg2 .. keyArgN, boolean ,old_field1, old_field2 .. old_fieldN)
returns timestamptz as $$
select error_if_null(aux_goto_«p»_«S»(keyArg1, keyArg2 .. keyArgN, boolean ,old_field1, old_field2 .. old_fieldN));
$$ language sql volatile;
```
```
create function tick_triggers_«p»_«S»
( keyArg1, keyArg2 .. keyArgN
, extraLookedUp1, extraLookedUp2 .. extraLookedUpN
, field1, field2 .. fieldN
)
returns boolean as $$
select «call_to_signal1»(..);
select «call_to_signal2»(..);
..
select «call_to_signalN»(..);
    insert into timeouts_«p»_«S»
    select keyArg1, keyArg2 .. keyArgN
         , «deadline1»
     where «deadline1» > CURRENT_TIMESTAMP
       and «deadline1» >= «deadline1»
       and «deadline1» >= «deadline2»
       and «deadline1» >= «deadline3»
       and ...
       and «deadline1» >= «deadlineN»
    ;
    insert into timeouts_«p»_«S»
    select keyArg1, keyArg2 .. keyArgN
         , «deadline2»
     where «deadline2» > CURRENT_TIMESTAMP
       and «deadline2» >  «deadline1»
       and «deadline2» >= «deadline2»
       and «deadline2» >= «deadline3»
       and ...
       and «deadline2» >= «deadlineN»
    ;
    insert into timeouts_«p»_«S»
    select keyArg1, keyArg2 .. keyArgN
         , «deadline3»
     where «deadline3» > CURRENT_TIMESTAMP
       and «deadline3» >  «deadline1»
       and «deadline3» >  «deadline2»
       and «deadline3» >= «deadline3»  -- NOTICE the '>=' vs '>' is changing each time.
       and ...
       and «deadline3» >= «deadlineN»
    ;
    ...
    insert into timeouts_«p»_«S»
    select keyArg1, keyArg2 .. keyArgN
         , «deadlineN»
     where «deadlineN» > CURRENT_TIMESTAMP
       and «deadlineN» >  «deadline1»
       and «deadlineN» >  «deadline2»
       and «deadlineN» >  «deadline3»
       and ...
       and «deadlineN» >= «deadlineN»
    ;
$$ language sql volatile;
```
```
create or replace function tick_timeouts()
returns void as $$
-- for each «p» «S» combination:
...
delete from timeouts_«p»_«S»
    where deadline <= CURRENT_TIMESTAMP
returning tick_stage_«p»_«S»(keyArg1, keyArg2 .. keyArgN, boolean ,field1, field2 .. fieldN)
;
...
$$ language sql volatile;
```
Still to generate:
- every `timeouts_«p»_«S»` table and its index.
- the event functions.
- the transition functions.
- the indexes for the transition functions.
- the output channel function.
#### Process DSL code generation (signal ):
#### Process DSL code generation (transitions ):
#### Process DSL code generation (timeout ):