# PQs and Caches for ACA-Py and Mediators
## Mediators
### Design 1: No Relay/Deliverer, No Controller
A "no controller" deployment is viable either by setting the `--auto` flags, or by extending the mediation negotiation plugin to embed the business logic, so that no separate controller is needed.
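As a sketch, such a mediator might be launched with ACA-Py's auto-response flags. The exact flag set and values below (label, ports, endpoint, wallet key) are illustrative and should be verified against `aca-py start --help` for the deployed version:

```shell
# Hypothetical sketch: run ACA-Py as a mediator with no controller by
# auto-granting mediation and auto-accepting connections.
aca-py start \
  --label "Mediator" \
  --endpoint https://mediator.example.com \
  --inbound-transport http 0.0.0.0 8000 \
  --inbound-transport ws 0.0.0.0 8001 \
  --outbound-transport http \
  --outbound-transport ws \
  --open-mediation \
  --auto-accept-invites \
  --auto-accept-requests \
  --wallet-type askar \
  --wallet-name mediator \
  --wallet-key "insecure-example-key"
```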
```plantuml
cloud Internet as ags {
node agent1 as a1
node agent2 as a2
node agent3 as a3
}
cloud Wallets {
node wallet1 as w1
node wallet2 as w2
node wallet3 as w3
}
component "Scalable Mediator" {
component "Mediator" {
() "External\nLoad Balancer" as lb
node M1 as m1
node M2 as m2
node M3 as m3
}
component "Redis" as red
database "Askar\nDatabase" as ask
}
a1 --> lb
a2 --> lb
a3 --> lb
w1 --> lb
w2 --> lb
w3 --> lb
lb --> m1
lb --> m2
lb --> m3
m1 <--> ask
m2 <--> ask
m3 <--> ask
m1 <--> red
m2 <--> red
m3 <--> red
```
See [Overview](#overview) below for more details about both designs.
### Design 2: Combined Relay/Deliverer
```plantuml
cloud Internet as ags {
node agent1 as a1
node agent2 as a2
node agent3 as a3
}
cloud Wallets {
node wallet1 as w1
node wallet2 as w2
node wallet3 as w3
}
component "Scalable Mediator" {
component "Mediator" {
node M1 as m1
node M2 as m2
node M3 as m3
}
component "Redis" as red
database "Askar\nDatabase" as ask
component "Relay/Deliverer" as rd {
() "External\nLoad Balancer" as lb
node RD1 as rd1
node RD2 as rd2
node RD3 as rd3
}
}
a1 --> lb
a2 --> lb
a3 --> lb
w1 --> lb
w2 --> lb
w3 --> lb
w1 -right-> ags
w2 -right-> ags
w3 -right-> ags
lb --> rd1
lb --> rd2
lb --> rd3
rd1 <--> red
rd2 <--> red
rd3 <--> red
m1 <--> ask
m2 <--> ask
m3 <--> ask
m1 <--> red
m2 <--> red
m3 <--> red
```
### Overview
* The "agents" only send messages destined for wallets.
* They are received and queued on Redis with a topic based on the recipient wallet.
* Each wallet connects over a web socket (WS) to a mediator instance and, once connected, has a session with that mediator endpoint.
* In establishing a web socket session, the wallet sends a DIDComm message to pick up messages.
* Which wallet established the web socket and sent the DIDComm message is not known until that DIDComm message is processed.
* If the endpoint is managed by a different component (e.g., a Relay/Deliverer instance) than the one processing the DIDComm message (e.g., an ACA-Py/Mediator instance), the two components must coordinate on which instance is handling messages for which wallet.
* Proposal is as follows:
  * Two identifiers are needed: a tag for the web socket session, and a wallet identifier. The wallet identifier is the same identifier the ACA-Py/Mediator (AM) instances use when queuing messages destined for a wallet on Redis.
  * When a Relay/Deliverer (RD) instance receives a message to establish a new web socket session, the RD tags the message with a locally understood session identifier, puts it on the Redis queue, and adds the tag to the list of Redis topics it is tracking.
  * An AM instance retrieves the "new session" message from the Redis queue, notes the tag from the RD, processes the DIDComm message, and extracts the wallet ID.
  * The AM queues a message on Redis with the topic being the RD-generated tag and the content being the wallet ID.
  * When the RD instance sees the message with its tag, it retrieves the message and adds the wallet identifier to the list of Redis topics it is tracking.
  * While the web socket is active, the RD monitors Redis for messages for the wallet identifier and sends them to the wallet over the web socket session.
  * When the web socket is lost, the RD instance stops watching Redis for that wallet ID's messages.
  * If the RD dies, the wallet will establish a new web socket connection, and whichever RD instance receives it will take over sending messages to that wallet.
* Wallets only retrieve messages from the mediator.
  * Exception: when a wallet is negotiating mediation, messages flow in both directions -- from the wallet to the mediator and from the mediator to the wallet.
  * In such cases, business logic is likely involved in the mediation setup decisions -- at minimum, whether to accept the wallet.
  * Messages from the wallet should be sent to the mediator as if from any other agent.
  * Messages from the mediator would be sent to the wallet as if sourced from an external agent.
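The RD/AM coordination proposed above can be sketched as follows. This is a minimal simulation using in-memory queues as stand-ins for the Redis topics; the function and topic names are hypothetical, not taken from any existing codebase:

```python
import json
from collections import defaultdict, deque

# In-memory stand-ins for the Redis topics; all names here are hypothetical.
queues = defaultdict(deque)

def rd_new_session(session_tag, didcomm_msg, tracked):
    """Relay/Deliverer: tag an incoming 'new session' message, queue it
    for a mediator instance, and track the tag for the reply."""
    queues["inbound"].append(json.dumps({"tag": session_tag, "msg": didcomm_msg}))
    tracked.add(session_tag)

def am_process_inbound():
    """ACA-Py/Mediator: pop the tagged message, extract the wallet ID
    (a stand-in for real DIDComm unpacking), and reply on the tag topic."""
    item = json.loads(queues["inbound"].popleft())
    wallet_id = item["msg"]["wallet_id"]
    queues[item["tag"]].append(wallet_id)

def rd_resolve_wallet(session_tag, tracked):
    """Relay/Deliverer: read the wallet ID off the tag topic and switch
    to tracking the wallet's delivery topic while the web socket lives."""
    wallet_id = queues[session_tag].popleft()
    tracked.discard(session_tag)
    tracked.add(f"wallet:{wallet_id}")
    return wallet_id
```

In a real deployment these queues would be Redis lists or streams keyed by the same topic strings, with blocking reads on each side.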
### Logic
* On receipt of a message destined for a wallet, the mediator instance checks if it has a WS session with the wallet. If not, it puts the message on the Redis queue.
* If the mediator is the source of the message, it would process the outbound message to a wallet as if it had already been retrieved from an external agent.
* When it holds web socket session(s) with wallet(s), the instance polls the Redis queue for those wallets' messages and, when one is found, retrieves it from the queue and sends it to the wallet.
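The dispatch decision above can be sketched as a single function. Again, the queue and session structures are in-memory stand-ins for Redis and the web socket registry, and all names are hypothetical:

```python
from collections import defaultdict, deque

queues = defaultdict(deque)  # stand-in for Redis topics (hypothetical names)
sessions = {}                # wallet_id -> send callable for an open WS session

def handle_outbound(wallet_id, message):
    """On a message destined for a wallet: deliver over a live web socket
    session if this instance holds one, otherwise queue it on the wallet's
    topic for whichever instance does."""
    send = sessions.get(wallet_id)
    if send is not None:
        send(message)
        return "delivered"
    queues[f"wallet:{wallet_id}"].append(message)
    return "queued"
```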
### Other Issues
* There is no confirmation of a message being received by the wallet, which could lead to messages being lost because of an undetected lost connection.
* The mediator should handle mobile notifications to wallets (e.g., push notifications to alert a wallet that messages are waiting for pickup).
## ACA-Py
### Design 1: Separate Relay and Deliverer Components
```plantuml
cloud Internet {
node agent1 as a1
node agent2 as a2
node agent3 as a3
}
component "Scalable Aries Agent" {
component "ACA-Py" as ap {
node "AP1" as ac1
node "AP2" as ac2
node "AP3" as ac3
}
component "Deliverer" as del {
node "D1" as d1
node "D2" as d2
node "D3" as d3
}
component "Relay" {
() "External Agent\nLoad Balancer" as lb
() "Admin API\nLoad Balancer" as alb
node R1 as r1
node R2 as r2
node R3 as r3
}
component "Redis" as red
database "Askar\nDatabase" as ask
component Controller as ct {
() "Admin WebHook\nLoad Balancer" as awhlb
node "C1" as ct1
node "C2" as ct2
node "C3" as ct3
}
database "Controller\nDB" as ctdb
}
a1 -down-> lb
a2 --> lb
a3 --> lb
lb --> r1
lb --> r2
lb --> r3
r1 --> red
r2 --> red
r3 --> red
red --> d1
red --> d2
red --> d3
del -up-> a1
del --> a2
del --> a3
del --> awhlb
ac1 <--> red
ac2 <--> red
ac3 <--> red
ac1 <-right-> ask
ac2 <--> ask
ac3 <--> ask
awhlb -->ct1
awhlb -->ct2
awhlb -->ct3
ct1 --> alb
ct2 --> alb
ct3 --> alb
alb --> r1
alb --> r2
alb --> r3
ct1 <-right-> ctdb
ct2 <-right-> ctdb
ct3 <-right-> ctdb
```
### Design 2: Combined Relay and Deliverer Component
```plantuml
cloud Internet {
node agent1 as a1
node agent2 as a2
node agent3 as a3
}
component "Scalable Aries Agent" {
component "ACA-Py" as ap {
node "AP1" as ac1
node "AP2" as ac2
node "AP3" as ac3
}
component "Relay/Deliverer" as rd {
() "External Agent\nLoad Balancer" as lb
() "Admin API\nLoad Balancer" as alb
node RD1 as rd1
node RD2 as rd2
node RD3 as rd3
}
component "Redis" as red
database "Askar\nDatabase" as ask
component Controller as ct {
() "Admin WebHook\nLoad Balancer" as awhlb
node "C1" as ct1
node "C2" as ct2
node "C3" as ct3
}
database "Controller\nDB" as ctdb
}
a1 --> lb
a2 --> lb
a3 --> lb
lb --> rd1
lb --> rd2
lb --> rd3
rd1 <--> red
rd2 <--> red
rd3 <--> red
rd -up-> a1
rd -up-> a2
rd -up-> a3
rd -down-> awhlb
ac1 <--> red
ac2 <--> red
ac3 <--> red
ac1 <--> ask
ac2 <--> ask
ac3 <--> ask
awhlb -->ct1
awhlb -->ct2
awhlb -->ct3
ct1 --> alb
ct2 --> alb
ct3 --> alb
alb --> rd1
alb --> rd2
alb --> rd3
ct1 <--> ctdb
ct2 <--> ctdb
ct3 <--> ctdb
```
### Overview
### Optional Deployment: Controller Shares Relay/Deliverer
To make a cloud-native controller scalable, have it use Redis and the Relay/Deliverer.
Webhooks arrive at the Relay, are put into Redis, and are retrieved from Redis by the Controller instances, which process the requests.
Controller processing that results in requests to the ACA-Py Admin API puts those requests into Redis, where the Deliverer picks them up and sends them to ACA-Py.
An optimization would be to have the controller put them into Redis such that ACA-Py picks them up directly, without the Deliverer involved. Likewise, the webhooks would go into Redis from ACA-Py, where they are picked up by the controller without Relay or Deliverer involvement.
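The webhook-in, Admin-API-request-out flow can be sketched as below. The queues are in-memory stand-ins for Redis, and while the `connections` webhook topic and `accept-request` Admin API path follow ACA-Py's conventions, the event shape and business rule here are illustrative assumptions:

```python
from collections import deque

webhooks = deque()        # Relay pushes incoming ACA-Py webhook events here
admin_requests = deque()  # Controller pushes resulting Admin API calls here

def relay_receive_webhook(event):
    """Relay: enqueue a webhook event for any controller instance to claim."""
    webhooks.append(event)

def controller_step():
    """One controller iteration: pop a webhook, apply business logic, and
    queue the resulting Admin API request for the Deliverer to send."""
    event = webhooks.popleft()
    # Illustrative rule: auto-accept inbound connection requests.
    if event.get("topic") == "connections" and event.get("state") == "request":
        admin_requests.append({
            "method": "POST",
            "path": f"/connections/{event['connection_id']}/accept-request",
        })
```

Because any controller instance can pop from the shared queue, adding instances scales webhook processing without sticky routing.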
```plantuml
cloud Internet {
node agent1 as a1
node agent2 as a2
node agent3 as a3
}
component "Scalable Aries Agent" {
component "ACA-Py" as ap {
node "AP1" as ac1
node "AP2" as ac2
node "AP3" as ac3
}
component "Relay/Deliverer" as rd {
() "External Agent\nLoad Balancer" as lb
() "Admin API\nLoad Balancer" as alb
() "Admin WebHook\nLoad Balancer" as awhlb
node RD1 as rd1
node RD2 as rd2
node RD3 as rd3
}
component "Redis" as red
database "Askar\nDatabase" as ask
component Controller as ct {
node "C1" as ct1
node "C2" as ct2
node "C3" as ct3
}
database "Controller\nDB" as ctdb
}
a1 --> lb
a2 --> lb
a3 --> lb
lb --> rd1
lb --> rd2
lb --> rd3
rd1 <--> red
rd2 <--> red
rd3 <--> red
rd --> a1
rd --> a2
rd --> a3
rd --> awhlb
rd --> alb
ac1 <--> red
ac2 <--> red
ac3 <--> red
ac1 <--> ask
ac2 <--> ask
ac3 <--> ask
awhlb -->rd1
awhlb -->rd2
awhlb -->rd3
ct1 <--> red
ct2 <--> red
ct3 <--> red
alb --> rd1
alb --> rd2
alb --> rd3
ct1 <--> ctdb
ct2 <--> ctdb
ct3 <--> ctdb
```