This note covers the proposed design of an ACA-Py Cluster, a cloud-native, scalable, highly available instance of ACA-Py that supports HTTP, Web Sockets and Return-Route transports. Currently, a Mediator Cluster is an ACA-Py Cluster that is deployed specifically to support the Mobile Wallet Mediator use case. A Cluster deployment requirement is that all components are horizontally scalable, with (almost) all components stateless. The specific exception about state is called out below.
In these designs, the Controller is treated as an external component. The Controller may also be deployed in a cloud native, scalable manner. However, the deployment of the Controller is left as an exercise for the reader.
The following is a simple ACA-Py deployment, assumed to be on a cloud native platform (e.g., Kubernetes, OpenShift), that lacks robust high availability (HA) support. In-memory queues are used in the ACA-Py instances such that when instances are terminated abruptly (no graceful shutdown), messages could be lost.
To make an ACA-Py Cluster a cloud native, highly available, horizontally scalable deployment, the following elements are added to the deployment.
In the following, "Redis" is called out specifically. However, the underlying design is such that any component that provides functionality similar to Redis can be used by providing an appropriate ACA-Py plugin.
NOTE: A previous design for adding persistent queues into ACA-Py had both a "Relay" for inbound messages and a "Deliverer" for outbound messages. However, supporting websockets and return-route transports requires that the Relay and Deliverer functions be combined into a single component. We stayed with the name "Relay" for this new, combined component.
Using only the HTTP Transport, for agents that have an endpoint and for the Controller interfaces, is straightforward.
Each ACA-Py instance must share state with all other ACA-Py instances such that any inbound message can be processed by any ACA-Py instance.
ACA-Py's DIDComm support includes two transports where other agents do not have endpoints: Web Sockets (WS) and Return Route (RR). In both cases, outbound messages cannot be sent asynchronously from any Relay instance. Rather, the other agent must initiate an interaction, and as part of that interaction, messages from the ACA-Py Cluster to that agent are delivered.
Supporting no-endpoint transports adds complexity compared to the HTTP-only use case. The Relay instance that receives the inbound WS or RR message from an agent must be the same instance that sends outbound messages to that agent for the life of the session. When the session ends (for whatever reason), the external agent must initiate another session before further outbound messages can be sent to that agent.
The complication comes from the separation of the Relay and ACA-Py instances:
In the following section we'll describe the flow of messages and information between the Relay and ACA-Py instances to deal with this added complexity.
In the Introduction (above), we said that the components of an ACA-Py Cluster are "almost" stateless. The exception is that the Relay instances must track which external agents they have a session with. However, if that data is lost because of the sudden termination of an instance, so too are the sessions being tracked, and since undelivered messages remain on the persistent queues, no messages will be lost.
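As a sketch of that per-instance state, a Relay might keep nothing more than an in-memory map from agent identifier to open session; the class and method names here are hypothetical.

```python
from typing import Dict

from aiohttp import web


class SessionRegistry:
    """Per-Relay-instance, in-memory map of open WS/RR sessions (hypothetical name).

    If the instance dies, this map dies with it, but so do the sessions it
    was tracking, so nothing recoverable is lost.
    """

    def __init__(self) -> None:
        self._sessions: Dict[str, web.WebSocketResponse] = {}

    def add(self, agent_id: str, ws: web.WebSocketResponse) -> None:
        self._sessions[agent_id] = ws

    def remove(self, agent_id: str) -> None:
        self._sessions.pop(agent_id, None)

    async def send(self, agent_id: str, message: bytes) -> bool:
        """Deliver an outbound message over an open session, if there is one."""
        ws = self._sessions.get(agent_id)
        if ws is None or ws.closed:
            return False
        await ws.send_bytes(message)
        return True
```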
The following describes the flow of messages through the cluster that supports both the endpoint and no-endpoint flows.
- Assumption: A Relay can assemble the messages for the target Agent into the HTTP response. Is that a valid assumption, or must a DIDComm agent prepare the HTTP response?
When an RR response is needed, the process must complete within the timeout period of the HTTP request.
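A rough sketch of how a Relay might honor that constraint, assuming the queue client sketched earlier and an aiohttp handler; the queue names and timeout are illustrative, and RId/HId tagging is omitted here (it is shown in a later sketch):

```python
import asyncio

from aiohttp import web

RR_WAIT_SECONDS = 20  # illustrative; must stay comfortably below the client's HTTP timeout


def make_return_route_handler(queue):
    """Build an aiohttp handler around a persistent-queue client (e.g. the RedisQueue sketch above)."""

    async def handle(request: web.Request) -> web.Response:
        inbound = await request.read()
        agent_id = request.match_info.get("agent_id", "")  # illustrative: real agent identification differs

        # Hand the inbound message to the ACA-Py instances via the inbound queue.
        await queue.push("acapy:inbound", inbound)
        try:
            # Wait for ACA-Py to place a reply on this agent's no-endpoint outbound queue.
            reply = await asyncio.wait_for(
                queue.pop(f"acapy:outbound:agent:{agent_id}"), timeout=RR_WAIT_SECONDS
            )
        except asyncio.TimeoutError:
            return web.Response(status=200)  # no reply was ready before the timeout
        return web.Response(body=reply)

    return handle
```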
The following is a list of the types of persistent queues (PQs) that are needed, along with who posts and pops messages from those queues, and under what conditions.
All inbound messages are posted to an Agent Inbound Messages queue by the Relay instances, tagged with an RId and an HId. The messages are pulled from the queue by the ACA-Py instances; any instance can process any message.
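For illustration, a Relay might wrap each inbound message in an envelope carrying its own identifier (RId) and a handle for the session or request it arrived on (HId) before pushing it onto the queue; a sketch, reusing the queue client above (field and queue names are assumptions):

```python
import base64
import json
import uuid

RELAY_ID = str(uuid.uuid4())  # stable identifier for this Relay instance (illustrative)


async def post_inbound(queue, message: bytes, session_handle: str) -> None:
    """Tag an inbound message with an RId/HId envelope and push it onto the inbound queue."""
    envelope = {
        "rid": RELAY_ID,        # which Relay instance received the message
        "hid": session_handle,  # which WS session / HTTP request it arrived on
        "payload": base64.b64encode(message).decode(),
    }
    await queue.push("acapy:inbound", json.dumps(envelope).encode())
```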
Kinds of inbound messages include:
All Admin API Requests are posted to an Admin API Queue by the Relay instances. The messages are pulled from the queue by all ACA-Py instances.
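On the consuming side, each ACA-Py instance could run a simple worker loop per queue, along these lines (the queue names and the dispatch call are stand-ins, not ACA-Py's actual handlers):

```python
import json


async def consume(queue, queue_name: str, dispatch) -> None:
    """Worker loop run by every ACA-Py instance; any instance may take any message."""
    while True:
        raw = await queue.pop(queue_name, timeout=5)
        if raw is None:
            continue
        envelope = json.loads(raw)
        await dispatch(envelope)  # stand-in for ACA-Py's real inbound/Admin handling


# Each ACA-Py instance would run one loop per queue, e.g.:
#   asyncio.gather(
#       consume(q, "acapy:inbound", handle_inbound),
#       consume(q, "acapy:admin", handle_admin_request),
#   )
```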
ACA-Py instances post messages to the Tag Instruction queue to instruct specific Relay instances about how to handle the tags on inbound messages. The messages are picked up by the Relay instance that previously posted a tagged message. The messages notify the Relay instance to:
ACA-Py instances post to the HTTP Outbound queue all messages destined to an HTTP address. Any Relay instance can pick up and send those messages.
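A minimal sketch of that delivery loop, assuming outbound messages are queued as JSON with an endpoint and payload (queue and field names are illustrative):

```python
import json

import aiohttp


async def deliver_http_outbound(queue) -> None:
    """Any Relay instance can drain the HTTP outbound queue and POST each message."""
    async with aiohttp.ClientSession() as session:
        while True:
            raw = await queue.pop("acapy:outbound:http", timeout=5)
            if raw is None:
                continue
            msg = json.loads(raw)  # assumed shape: {"endpoint": ..., "payload": ...}
            async with session.post(msg["endpoint"], data=msg["payload"]) as resp:
                if resp.status >= 400:
                    # A real deployment would retry with backoff rather than simply push back.
                    await queue.push("acapy:outbound:http", raw)
```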
ACA-Py instances post to the No Endpoint Outbound queue all messages destined for agents that have no endpoint. The messages are tagged with the AgId of the target agent. The messages are picked up by the specific Relay instance that has been instructed (via a tag instruction message) to deliver the messages for a given agent identifier; a delivery sketch follows the question below.
- What about agents that have an endpoint but request RR transport for a given message? Should their messages go in this queue or the previous one? If they go in this one, and the Relay instance designated to send the message is terminated, will the message be stranded in this queue, even though it could have been delivered via the agent's endpoint?
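Setting that question aside, delivery from this queue might look roughly like the following, reusing the queue client and session registry sketched earlier (queue naming is an assumption):

```python
import base64
import json


async def deliver_no_endpoint(queue, registry, agent_id: str) -> None:
    """Run by the Relay instance told (via a tag instruction) that it owns this agent's session."""
    while True:
        raw = await queue.pop(f"acapy:outbound:agent:{agent_id}", timeout=5)
        if raw is None:
            continue
        msg = json.loads(raw)
        payload = base64.b64decode(msg["payload"])
        if not await registry.send(agent_id, payload):
            # The session is gone: put the message back and stop; delivery resumes
            # when the agent opens a new session (possibly to a different Relay instance).
            await queue.push(f"acapy:outbound:agent:{agent_id}", raw)
            return
```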
What happens to the messages on the different queues if ACA-Py or Relay instances are terminated, or when WS sessions end?
In this we assume the queue handling guarantees "exactly once" processing (assuming the queue technology supports such a mode). A message pulled off the queue is processed before it is permanently removed from the queue; if the processing fails, the message remains for another instance to pick up.
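With Redis, for example, this is commonly approximated with the "reliable queue" pattern: a message is atomically moved to a per-worker processing list and only removed once handling succeeds. A hedged sketch (key names are illustrative):

```python
import redis.asyncio as aioredis


async def pop_process_ack(r: aioredis.Redis, queue: str, worker_id: str, handle) -> None:
    """Reliable-queue pattern: a message is only removed once it has been processed."""
    processing = f"{queue}:processing:{worker_id}"
    while True:
        # Atomically move one message from the shared queue to this worker's processing list.
        raw = await r.blmove(queue, processing, timeout=5)
        if raw is None:
            continue
        try:
            await handle(raw)
        except Exception:
            # Leave the message on the processing list; a reaper job can push it back
            # onto the main queue so another instance can pick it up.
            continue
        # Processing succeeded; remove the message for good.
        await r.lrem(processing, 1, raw)
```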
If an ACA-Py instance dies, there is no impact. Any other ACA-Py instance can pick up any message destined for ACA-Py, process it, and move on.
If a Relay instance dies after it has submitted a message, but before it gets the response from the Tag Instruction queue, it will never receive the tag instructions. Its termination means that the WS or RR session is terminated as well, since the Relay instance can no longer respond.
A Mediator Cluster is (currently) defined as an ACA-Py Cluster that is specifically configured to be a DIDComm Mediator, usually for a (potentially very large) group of mobile agents. A Mediator Cluster has a traffic pattern that differs from that of a general ACA-Py agent (as outlined in the following section), and we may find that optimizations can be made for that use case.
In the following diagram, the Controller is removed based on the idea that either the ACA-Py instances will be configured to auto-accept all requests for mediation, or a deployment-specific "Mediator Plugin" will be added to the ACA-Py instances to handle any business logic that would normally be handled by a controller.