# ZK-PRET FAQs
## Architecture
### Is it connected to a data source/live system?
The implementation will run the risk-model checks live against the ACTUS implementation system(s). The business process prover will run live, through the BPMN validity utility, for any standard representation in BPMN 2.0 (Business Process Model and Notation 2.0). For the modeling front end, we will support the integration and use of bpmn.io, a widely used tool, for this project.
A generic registry will be set up that hosts the API, the source of the API content, and signatures, including those of the risk model and business process prover implementations indicated above.
For the content that drives the risk definitions, the generic registry will likewise hold the source, the API, and their signatures as the sources come on board.
For the SCF reference implementation, we plan to onboard an LEI (Legal Entity Identifier) provider who is working closely with our working groups and with our implementation. With some support from the Mina Foundation, we can effect this partnership. The same is the case with ACTUS implementors.
GLEIF API and the documentation are available at the following sites.
https://www.gleif.org/en/lei-data/gleif-api
https://documenter.getpostman.com/view/7679680/SVYrrxuU?version=latest
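As a hedged illustration of consuming the GLEIF API above: the sketch below models only the fields used here, following GLEIF's documented JSON:API payload shape; the interface name and helper functions are our own, and error handling is omitted.

```typescript
// Minimal model of a GLEIF LEI record; only the fields we read are typed.
interface GleifRecord {
  id: string; // the 20-character LEI
  attributes: { entity: { legalName: { name: string } } };
}

// Build the URL for a single LEI-record lookup.
function leiRecordUrl(lei: string): string {
  return `https://api.gleif.org/api/v1/lei-records/${lei}`;
}

// Extract the registered legal name from a record.
function legalNameOf(record: GleifRecord): string {
  return record.attributes.entity.legalName.name;
}
```

In a live system the URL would be fetched and the response's `data` element passed to `legalNameOf`; here the shapes are shown statically.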
The ACTUS financial framework can be found on the following site.
https://www.actusfrf.org/
An example of a specific API call for an annuity working capital loan product might look like:

```shell
curl -v -H 'Content-Type: application/json' --data '{
  "contracts": [{
    "calendar": "WEEKDAY",
    "businessDayConvention": "SCF",
    "contractType": "ANN",
    "statusDate": "2015-01-01T00:00:00",
    "contractRole": "RPA",
    "contractID": "ann-01",
    "cycleAnchorDateOfInterestPayment": "2016-01-02T00:00:00",
    "cycleOfInterestPayment": "P6ML0",
    "cycleAnchorDateOfPrincipalRedemption": "2016-01-02T00:00:00",
    "cycleOfPrincipalRedemption": "P1YL0",
    "nominalInterestRate": 0.02,
    "dayCountConvention": "30E360",
    "currency": "USD",
    "contractDealDate": "2015-01-01T00:00:00",
    "initialExchangeDate": "2015-01-02T00:00:00",
    "maturityDate": "2020-01-02T00:00:00",
    "notionalPrincipal": 1000,
    "premiumDiscountAtIED": 0,
    "cycleAnchorDateOfRateReset": "2015-07-02T00:00:00",
    "cycleOfRateReset": "P1YL0",
    "rateSpread": 0.01,
    "marketObjectCodeOfRateReset": "ust5Y"
  }],
  "riskFactors": [{
    "marketObjectCode": "ust5Y",
    "base": 1.0,
    "data": [
      {"time": "2014-01-01T00:00:00", "value": 0.04},
      {"time": "2015-03-01T00:00:00", "value": 0.039},
      {"time": "2016-06-01T00:00:00", "value": 0.038},
      {"time": "2017-12-01T00:00:00", "value": 0.037},
      {"time": "2018-02-01T00:00:00", "value": 0.036},
      {"time": "2019-05-01T00:00:00", "value": 0.035}
    ]
  }]
}' http://localhost:8083/eventsBatch
```
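The `eventsBatch` call returns a schedule of contract events. As a hedged sketch of consuming it, the field names below (`time`, `type`, `payoff`) are assumptions about the event schema of a typical ACTUS implementation; check the schema of the implementation you integrate with.

```typescript
// Assumed (illustrative) shape of one ACTUS contract event.
interface ActusEvent {
  time: string;   // ISO timestamp of the event
  type: string;   // e.g. "IED", "IP", "PR", "MD"
  payoff: number; // signed cash flow from the contract-role perspective
}

// Net cash flow up to and including a cutoff date, e.g. to feed a
// liquidity check at a specific point in time.
function netCashFlowThrough(events: ActusEvent[], cutoff: string): number {
  return events
    .filter(e => e.time <= cutoff)
    .reduce((sum, e) => sum + e.payoff, 0);
}
```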
### Is it only building contracts on chain?
Our intent is to build these provers modularly, so that verification can run against contracts on-chain or against other off-chain components. In fact, we will use recursive proofs to keep the on-chain verifications to a minimum, and at the highest level appropriate for the purpose of use.
## Design
### Whose identity is used in composition engine?
The identities will be those of the business entities involved in the transaction. We will highlight the global LEI as a key identity that is growing into an international standard. The deep composition engine will also use recursion to establish additional levels of identity, based on customer needs.
One example, in the India jurisdiction, could be:
**Level 1: Ministry of Corporate Affairs (MCA) API: shows the corporate registration and compliance data from the MCA.** [Click here](https://docs.google.com/document/d/1JuSYLBBJ_wZAKaiFIUgksFYTaLKPQz79/edit#bookmark=id.5pcpf0lo4swf)
**Level 2: shows a valid Importer-Exporter Code and compliance from the Director General of Foreign Trade.** [Click here](https://docs.google.com/document/d/1JuSYLBBJ_wZAKaiFIUgksFYTaLKPQz79/edit#bookmark=id.3pcmocgw6ttq)
**Level 3: Global LEI** [Click here](https://docs.google.com/document/d/1JuSYLBBJ_wZAKaiFIUgksFYTaLKPQz79/edit#bookmark=id.lwzw32qfzj0k)
https://api.gleif.org/api/v1/lei-records/529900W18LQJJN6SJ336
GET All LEI Records [Click here](https://docs.google.com/document/d/1JuSYLBBJ_wZAKaiFIUgksFYTaLKPQz79/edit#bookmark=id.boypx6800vyv)
https://api.gleif.org/api/v1/lei-records?page[size]=2&page[number]=100
GET LEI Records based on Policy Conformity Flag [Click here](https://docs.google.com/document/d/1JuSYLBBJ_wZAKaiFIUgksFYTaLKPQz79/edit#bookmark=id.1poclcsxscga)
https://api.gleif.org/api/v1/lei-records?filter[conformityFlag]=CONFORMING
### Describe how entity proof tree is used in application by providing an example.
The proof tree indicated below is an assembly of proofs at different levels, as explained earlier for a specific category. As siblings, it also carries the compliance-to-standards proofs for the schemas of trade finance instruments such as the e-invoice and the bill of lading, the business process integrity between those documents (is the e-invoice in sync with the bill of lading, etc.), and the risk tolerance proofs related to that financier's request. The proof tree is a way to minimize on-chain variables.
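The assembly idea can be sketched as follows. This is a plain-TypeScript stand-in: in the actual system each leaf would be a recursive ZK proof verified inside a circuit, not a boolean, and the claim names are illustrative.

```typescript
// A proof tree: leaves are individual verified claims (identity level,
// document-schema compliance, risk tolerance); an inner node is valid
// only if all of its children are valid.
type ProofNode =
  | { kind: "leaf"; claim: string; valid: boolean }
  | { kind: "node"; claim: string; children: ProofNode[] };

// Root verification: conjunction over the whole tree.
function verify(node: ProofNode): boolean {
  return node.kind === "leaf" ? node.valid : node.children.every(verify);
}

const tree: ProofNode = {
  kind: "node",
  claim: "financier request",
  children: [
    { kind: "leaf", claim: "LEI identity", valid: true },
    {
      kind: "node",
      claim: "document integrity",
      children: [
        { kind: "leaf", claim: "e-invoice schema", valid: true },
        { kind: "leaf", claim: "bill of lading in sync", valid: true },
      ],
    },
    { kind: "leaf", claim: "risk tolerance", valid: true },
  ],
};
```

Only the single root result needs to go on-chain, which is what keeps on-chain variables minimal.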
### Is the engine only contracts? How is data fed to them ? Who feeds them the data?
The way to think about it is: the interface at all levels is always proofs. Off-chain proofs are fed to contracts on-chain at the highest level, allowing high scalability.
The data is fed through the sources and signatures as explained below. The ACTUS implementor performing the risk evaluation puts their signature on the data and also relays the signature of the original source (such as the LEI provider we are planning to onboard), extending a registry that will grow over time.
From a risk evaluation perspective, yes, everything finally boils down to a financial contract in ACTUS terms, which keeps the model consistent for DeFi contracts as well as for traditional regulatory needs; some regulators and central banks (FDIC, ECB) are using ACTUS for portfolio evaluations.
### What is the output from the engine? Is it a proof? Who consumes this proof?
Yes, proofs at different levels. The consumers of the proofs could be:
- Other ZK verifiers, as part of recursion.
- Other off-chain verifiers that are also zkApps.
- Proofs posted to the MINA blockchain, which can be referenced by other tokenization systems on MINA or on any other blockchain (focusing on Ethereum and EVM chains first, but technically extensible to others). This is an area where we would like to work closely with MINA and other ecosystem partners.

We share the view of ecosystem players such as Aligned Layer that this kind of proof-based collaboration has the highest chance of driving blockchain adoption for RWA use cases.
### Describe the role of a registry in SCF. What does it accomplish?
The registry is a meta-implementation. Its role is to give a formal definition to the sources, their content, and the APIs of those sources.
### What does the registry implementation do in the current reference implementation?
The registry will host the sources and the standards of the data feeds for the reference implementation.
### Is it an API? Is it a schema?
It is a metadata definition that says: for this tokenization use case, the supported data standard is this; in this jurisdiction, the data can come from this implementation; and as implementors are onboarded, their attestations and their relayed attestations will also be available.
It is a registry of oracles.
### What are the inputs and outputs to the registry? Who uses this registry? Provide an example of usage.
LEI is a good example. Any tokenization protocol or financier could search for the name of a business, and if they find a hit, they can use the information and its attestation with confidence and plan their risk models accordingly.
The scope of the registry is limited for this proposal. This registry is a sample implementation: some tokenization use cases will be added based on the reference implementation, and whatever the other ecosystem projects want to store can, in turn, be fed to the other pillars.
In the future, this registry could grow or plug into other registries. There is a growing focus on trust registries in the ecosystem and at some universities; as we progress, this is an area where further expansion could happen.
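The LEI search example above could be sketched as below. The field names are illustrative assumptions, not the proposal's final schema; a real entry would carry verifiable signatures rather than plain strings.

```typescript
// Illustrative shape of one registry-of-oracles entry.
interface RegistryEntry {
  entityName: string;        // legal name of the business or source
  jurisdiction: string;      // e.g. "IN"
  dataStandard: string;      // e.g. "LEI", "ACTUS", "BPMN 2.0"
  apiUrl: string;            // where the attested data is served
  sourceSignature: string;   // attestation by the data source
  relaySignatures: string[]; // attestations relayed by implementors
}

// A financier's lookup: case-insensitive search by business name.
function lookup(registry: RegistryEntry[], name: string): RegistryEntry | undefined {
  return registry.find(e =>
    e.entityName.toLowerCase().includes(name.toLowerCase()));
}
```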
### Describe what inputs does the prover take? What is the output from the prover?
Inputs: the expected business process flow (as BPMN 2.0 notation) and the actual flow as it happens.
Outputs: business process compliance proofs (true/false metrics), along with anything made available in the public output object of the zkApp method definition.
### Which part in the SCF flow is this prover invoked?
We will provide a UI (through bpmn.io) to input the expected and actual BPMN diagrams.
In the SCF flow, the business process prover is invoked to analyze the chronological flow of operations, including parallelism and sequential constraints, as captured in the BPMN 2.0 models the users of the system bring in.
The actual flow could also be assembled from the timestamps of the instruments, attested from the sources. This could be an adapter that can be implemented against any interfacing system.
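The compliance check the prover performs can be sketched in plain TypeScript. Here the expected BPMN model is reduced to a set of allowed transitions between steps, and the actual flow (events ordered by attested timestamps) complies iff every consecutive pair is allowed; the real prover encodes this kind of check inside a ZK circuit, and the step names are hypothetical.

```typescript
// One allowed hand-off between two process steps in the expected model.
type Transition = [from: string, to: string];

// Does the observed, time-ordered sequence of steps stay within the
// transitions permitted by the expected BPMN model?
function complies(expected: Transition[], actual: string[]): boolean {
  const allowed = new Set(expected.map(([a, b]) => `${a}->${b}`));
  for (let i = 0; i + 1 < actual.length; i++) {
    if (!allowed.has(`${actual[i]}->${actual[i + 1]}`)) return false;
  }
  return true;
}
```

Parallel branches in BPMN 2.0 would expand into several allowed transition sets; this sketch shows only the sequential case.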
### What does the current implementation under the proposal accomplish?
The examples in the detailed link show the BPMN circuit for all possible accepted paths.
If the actual path is in sync (based on the time chronology in the content data fed by the APIs above), you get a proof indicating success; otherwise, no proof is produced.
The example is below.
**Legend:**
![Screenshot 2024-12-05 235711](https://hackmd.io/_uploads/HyP7UcgEyg.png)
**Expected BPMN:**
![image](https://hackmd.io/_uploads/S1MGjElmke.png)
![image](https://hackmd.io/_uploads/BkE7vqeNye.png)
**Actual BPMN** : Fed in or constructed.
**Accepted situation:**
![image](https://hackmd.io/_uploads/ryJlwuJV1l.png)
**Rejected situation:**
![image](https://hackmd.io/_uploads/rJW-w_1NJg.png)
![image](https://hackmd.io/_uploads/S1NaQuk41g.png)
### When in the flow of SCF is this prover invoked?
This prover is invoked after all the risk definitions from the customers are in place (the financiers themselves, third-party systems they use, or other protocols in the tokenization ecosystem; these could be banks, NBFCs, factoring companies, mortgage companies, institutional DeFi protocols, etc.), when those customers want to know whether an incoming borrower engagement is within their risk parameters.
### What are the outputs/inputs to the prover?
Inputs: a template selection from the supported contract types (we intend to support the contract types typically used in financial contracts: PAM, annuities, commodities, forex, etc.) and the risk parameter definitions, for example liquidity ratios at specific points in time.
Outputs: risk tolerance proofs (true/false metrics), along with anything made available in the public output object of the zkApp method definition.
### What does the current implementation under the proposal do?
The risk metric, in this example the liquidity ratio, is evaluated against the netted time series of the portfolio fed by the APIs above, and you get a proof saying whether this particular financial contract (a loan, factoring, or re-factoring proposition) is within the financier's acceptable range.
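The liquidity check described above might be sketched as follows. The checkpoint fields and the inflows/outflows ratio definition are illustrative assumptions; in the prover this comparison would happen inside the circuit, emitting only the true/false result.

```typescript
// One point in the netted cash-flow time series from the ACTUS engine.
interface Checkpoint { time: string; inflows: number; outflows: number; }

// True iff the liquidity ratio meets the financier's threshold at
// every checkpoint (checkpoints with no outflows pass trivially).
function withinTolerance(series: Checkpoint[], minRatio: number): boolean {
  return series.every(c => c.outflows === 0 || c.inflows / c.outflows >= minRatio);
}
```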
### Who feeds it data - what is the output used for?
The data is fed by the borrowers themselves (attested) or by any platforms (trade or DeFi) that carry attestations, including data fed through the data composition engine.
The financier (a bank, an NBFC, a DeFi lending product, or an institutional DeFi product) can use these proofs to decide their next action. If the proof is false, or no proof is produced, they will most likely deny the lending (based on the template they chose, e.g., crypto collateral only, or crypto/RWA collateral in certain ratios).
### Are you implementing any particular risk model?
We expect to implement the liquidity and Value-at-Risk metrics for the most important contract types, such as principal at maturity (PAM), annuity, commodity, forex, and discounting.
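As a teaching sketch of the Value-at-Risk side (not the proposal's actual risk model), a historical VaR takes a sample of portfolio P&L changes and reports the loss level not exceeded at the chosen confidence:

```typescript
// Historical Value-at-Risk: sort sampled P&L changes ascending (worst
// losses first) and read off the (1 - confidence) quantile, returned
// as a positive loss figure.
function historicalVaR(pnl: number[], confidence: number): number {
  const sorted = [...pnl].sort((a, b) => a - b);
  const idx = Math.floor((1 - confidence) * sorted.length);
  return -sorted[Math.min(idx, sorted.length - 1)];
}
```

In the prover, such a metric would be computed over the ACTUS event stream for the portfolio and compared against the financier's threshold inside the circuit.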