
V2: PQs and Caches for ACA-Py and Mediators

Introduction

This note covers the proposed design of an ACA-Py Cluster: a cloud-native, scalable, highly available instance of ACA-Py that supports HTTP, Web Sockets, and Return-Route transports. Currently, a Mediator Cluster is an ACA-Py Cluster that is deployed specifically to support the Mobile Wallet Mediator use case. A Cluster deployment requirement is that all components are horizontally scalable, with (almost) all components stateless. The specific exception about state is called out below.

In these designs, the Controller is treated as an external component. The controller component may also be deployed in a cloud native, scalable manner. However, the deployment of the controller is an exercise left for the reader.

Non-HA ACA-Py Deployment

The following is a simple ACA-Py deployment, assumed to be on a cloud native platform (e.g., Kubernetes, OpenShift), that lacks robust high availability (HA) support. In-memory queues are used in the ACA-Py instances such that when instances are terminated abruptly (no graceful shutdown), messages could be lost.


ACA-Py Cluster

To make an ACA-Py Cluster (a cloud native, highly available, horizontally scalable deployment), the following elements are added to the deployment.

In the following, "Redis" is called out specifically. However, the underlying design is such that any component that provides functionality similar to Redis can be used by providing an appropriate ACA-Py plugin.

  • A Redis (or comparable) persistent queue.
  • A Redis (or comparable) shared cache for the ACA-Py instances.
  • An ACA-Py PQ (persistent queue) plugin to add a PQ Transport to ACA-Py that uses Redis (or comparable).
  • A Relay that handles both inbound messages (from other Agents and the Controller) and the delivery of outbound messages (to other Agents and the Controller).
  • A Relay plugin specific to Redis (or comparable).

NOTE: A previous design for adding persistent queues into ACA-Py had both a "Relay" for inbound messages and a "Deliverer" for outbound messages. However, supporting websockets and return-route transports requires that the Relay and Deliverer functions be combined into a single component. We stayed with the name "Relay" for this new, combined component.


HTTP Transport and Controller Interfaces

Using only the HTTP Transport, for agents that have an endpoint and for the Controller interfaces, is straightforward.

  • Incoming HTTP messages are received by a Relay instance at one of two endpoints: External from other agents and the Admin API from the authorized Controller.
  • The Relay instance queues the messages on Redis.
  • The Relay instance sends an HTTP response to confirm receipt of the message.
  • All ACA-Py instances pull messages from Redis, process them, and put any resulting outbound messages on the queue (webhooks for the Controller and/or outbound messages for the other agent(s)).
  • A Relay instance retrieves each outbound message from Redis and delivers it to its destination HTTP endpoint.
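The HTTP-only flow above can be sketched end to end. This is an illustration only: Python deques stand in for the Redis queues, and the endpoint, field names, and helper functions (`relay_receive`, `acapy_process`, `relay_deliver`) are hypothetical, not ACA-Py or Relay APIs.

```python
import json
import uuid
from collections import deque

# In-memory stand-ins for the Redis persistent queues (a real deployment
# would use a Redis list or stream via the ACA-Py PQ plugin).
inbound_queue: deque = deque()
outbound_queue: deque = deque()

def relay_receive(raw_message: bytes) -> str:
    """Relay side: tag and queue an inbound HTTP message, then confirm receipt."""
    entry = {
        "hid": str(uuid.uuid4()),   # HTTP request identifier for this message
        "payload": raw_message.decode(),
    }
    inbound_queue.append(json.dumps(entry))
    return "200 OK"                 # HTTP response confirming receipt

def acapy_process() -> None:
    """ACA-Py side: pull a message, process it, queue any outbound result."""
    entry = json.loads(inbound_queue.popleft())
    # ... DIDComm unpacking and protocol handling happens here ...
    outbound_queue.append(json.dumps({
        "endpoint": "https://other-agent.example/didcomm",  # hypothetical endpoint
        "payload": f"response-to:{entry['hid']}",
    }))

def relay_deliver() -> dict:
    """Relay side: pop an outbound message for delivery to its HTTP endpoint."""
    return json.loads(outbound_queue.popleft())

relay_receive(b'{"didcomm": "..."}')
acapy_process()
msg = relay_deliver()
```

Because any Relay instance may receive the request and any ACA-Py instance may process it, nothing here depends on which instance runs which step.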

Each ACA-Py instance must share state with all other ACA-Py instances such that any inbound message can be processed by any ACA-Py instance.

No Endpoint Agents

ACA-Py's DIDComm support includes two transports where other agents do not have endpoints: Web Sockets (WS) and Return Route (RR). In both cases, outbound messages cannot be sent asynchronously from any Relay instance. Rather, the other agent must initiate an interaction, and as part of that interaction, messages from the ACA-Py Cluster to that agent are delivered.

  • In the WS case, the other agent establishes a WS session with one Relay instance, and while the session is active, messages can be sent to the agent by that Relay instance.
  • In the Return-Route (RR) case, the other agent sends an HTTP request, and the receiving Relay instance can deliver messages to the agent in the HTTP response. In this case, the "session" is a single HTTP request/response.

Supporting no-endpoint transports adds complexity compared to the HTTP-only use case. The Relay instance receiving the inbound WS or RR message from an agent must be the same instance that sends outbound messages to that agent for the life of the session. When the session ends (for whatever reason), the external agent must initiate another session before outbound messages can be sent to that agent.

The complication comes from the separation of the Relay and ACA-Py instances:

  • The ACA-Py instances don't know when messages can be delivered to the no-endpoint agent, nor what Relay instance must deliver them.
  • The Relay instances don't know with what agent they have a session until an ACA-Py instance processes the first (and perhaps only) DIDComm message of the session to determine an identifier for the agent.
  • Any HTTP message from an agent may have an RR decorator, so any inbound HTTP request must be processed by an ACA-Py instance before we know if outbound messages for that agent must be sent via RR or to an endpoint.

In the following section we'll describe the flow of messages and information between the Relay and ACA-Py instances to deal with this added complexity.

In the Introduction (above), we said that the components of an ACA-Py Cluster are "almost" stateless. The exception is that the Relay instances must track the external agents with which they have a session. However, if that data is lost because of the sudden termination of an instance, so too is the session being tracked, and so no messages will be lost.
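That per-instance session state might look like the following minimal sketch. The class and method names are hypothetical, and the transport handle would be a real WS connection object (or pending HTTP response) in practice.

```python
import uuid

class RelaySessions:
    """Per-Relay-instance session tracking: the one stateful piece.

    The map lives only in the Relay process. If the instance dies, the
    WS/RR sessions it tracked die with it, so nothing durable is lost.
    """
    def __init__(self) -> None:
        self.rid = str(uuid.uuid4())            # Relay instance identifier (RId)
        self._sessions: dict = {}               # AgId -> transport handle

    def bind(self, agent_id: str, transport) -> None:
        """Associate an agent's AgId with an active WS/RR transport."""
        self._sessions[agent_id] = transport

    def unbind(self, agent_id: str) -> None:
        """Session ended: forget the agent until it reconnects."""
        self._sessions.pop(agent_id, None)

    def can_deliver(self, agent_id: str) -> bool:
        """True only while this instance holds a live session for the agent."""
        return agent_id in self._sessions

sessions = RelaySessions()
sessions.bind("agent-123", object())  # hypothetical WS handle
```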

The Internal Messaging Flow

The following describes the flow of messages through the cluster that supports both the endpoint and no-endpoint flows.

  • We add three identifiers:
    • A Relay instance identifier (RId), a UUID generated by each Relay instance on startup.
    • An HTTP request identifier (HId) is a UUID created by the receiving Relay instance as each inbound message is received.
    • An Agent Identifier (AgId) identifies an external agent and is defined by ACA-Py in establishing a connection.
  • A Relay instance receives an HTTP request with a DIDComm message from an Agent or via the Admin API.
  • It queues the message, tagged with its RId and the HId for the message. It does not send the HTTP response to the request (yet).
    • If the Relay instance knows the message is the first in establishing a WS session, it also sends a flag indicating that.
    • If the Relay instance knows the message is part of a WS session, it need not wait before responding to the request.
  • An ACA-Py instance grabs the message from the queue and begins to process it.
    • If it determines the message is not an RR message or the start of a WS session, it:
      • immediately queues an instructional message for the Relay instance (using the RId) to "Ack" the HTTP request (using the HId), and
      • completes the processing of the message.
    • If the message is an RR message or the start of a WS session, it:
      • completes the processing of the message, queuing any responses for the destination agent, tagged with the AgId, and
      • queues a message for the Relay instance (using the RId) with the HId and the AgId of the target DIDComm Agent.
  • The Relay instance picks up queued messages based on its RId, any message destined for an agent with which it has a session, or any message destined for an endpoint.
    • See below for notes about what happens if the Relay instance dies before retrieving the message from the queue. In short, not a problem.
  • If the message is for the Relay and is to "ACK" the HTTP request, it does so and processing is complete.
  • If the message is to be sent to an endpoint, it sends the message and processing is complete.
  • If the message is to use RR or WS to send messages to the agent, the Relay uses the AgId to pick up messages for the agent.
  • Assumption: A Relay can assemble the messages for the target Agent into the HTTP response. Is that a valid assumption, or must a DIDComm agent prepare the HTTP response?
  • The Relay sends the HTTP response to the waiting agent.
  • While the Relay is in a WS session, it continues to look for messages for the AgId and when found, delivers them to the agent.
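One way to picture the envelope and tag-instruction shapes implied by the steps above is the following sketch. The dataclasses and field names are illustrative assumptions, not an ACA-Py wire format, and the `agent_id` value is a placeholder for whatever AgId the DIDComm processing actually resolves.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InboundEnvelope:
    """What a Relay instance queues for each inbound HTTP request."""
    rid: str                  # Relay instance that received the request
    hid: str                  # identifier of the pending HTTP request
    payload: str              # packed DIDComm message
    ws_session: bool = False  # set when the Relay knows this opens a WS session

@dataclass
class TagInstruction:
    """What an ACA-Py instance sends back to the receiving Relay."""
    rid: str                        # addressee: the Relay that queued the message
    hid: str                        # which pending HTTP request this resolves
    ack: bool                       # True: just ACK the request and finish
    agent_id: Optional[str] = None  # set for RR/WS: deliver this agent's messages

def decide(envelope: InboundEnvelope, uses_return_route: bool) -> TagInstruction:
    """ACA-Py side: after unpacking, tell the receiving Relay what to do."""
    if uses_return_route or envelope.ws_session:
        # Outbound messages for this agent are queued under its AgId, and the
        # Relay holding the session is told to deliver them in the response.
        return TagInstruction(envelope.rid, envelope.hid, ack=False,
                              agent_id="agent-123")  # placeholder AgId
    return TagInstruction(envelope.rid, envelope.hid, ack=True)
```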

When an RR response is needed, the process must complete within the timeout period of an HTTP request.

Queue Types

The following is a list of the types of persistent queues (PQs) that are needed, and who posts and pops messages from those queues under what conditions.

Agent Inbound Messages

All inbound messages are posted to an Agent Inbound Messages queue by the Relay instances, tagged with an RId and HId. The messages are pulled from the queue by all ACA-Py instances.

Kinds of inbound messages include:

  • HTTP messages from other agents that may or may not have a Return Route decorator.
  • Messages received over a Web Socket connection from other agents.

Admin API Requests

All Admin API Requests are posted to an Admin API Queue by the Relay instances. The messages are pulled from the queue by all ACA-Py instances.

Tag Instruction Messages

ACA-Py instances post messages to the Tag Instruction queue to instruct specific Relay instances about handling tags on inbound messages. The messages are picked up by the Relay instance that previously posted the tagged message. The messages notify the Relay instance to:

  • ACK a non-Return Route HTTP message for a given HId.
  • Use the included agent identifier for the message with a given tag to find and send messages for that agent using either Return-Route or Web Sockets.
    • Question: Do we need a separate indicator for HTTP vs. Web sockets?
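A Relay's handling of a tag instruction might look like the following sketch, where `pending_requests` maps each HId to a callback that completes the held HTTP request, and `agent_messages` stands in for the No Endpoint Outbound queue filtered by AgId. All names and shapes are hypothetical.

```python
def handle_instruction(instr: dict, pending_requests: dict,
                       agent_messages: dict) -> None:
    """Relay side: act on a tag instruction addressed to this instance."""
    respond = pending_requests.pop(instr["hid"])  # complete the held request
    if instr["ack"]:
        respond([])  # plain ACK: no body, the HTTP request is done
    else:
        # Return-Route / WS start: drain this agent's queued outbound
        # messages into the HTTP response for the waiting agent.
        respond(agent_messages.pop(instr["agent_id"], []))

# Usage: one pending HTTP request, two messages queued for the agent.
responses: list = []
pending = {"h1": responses.append}
queued = {"agent-123": ["enc-msg-a", "enc-msg-b"]}
handle_instruction({"hid": "h1", "ack": False, "agent_id": "agent-123"},
                   pending, queued)
```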

HTTP Outbound Messages

ACA-Py instances post to the HTTP Outbound queue all messages destined to an HTTP address. Any Relay instance can pick up and send those messages.

No Endpoint Outbound Messages

ACA-Py instances post to the No Endpoint Outbound queue all messages destined for agents that have no endpoint. The messages are tagged with the AgId of the target agent. The messages are picked up by the specific Relay instance that has been instructed (via a tag instruction message) to deliver the messages for a given agent identifier.

  • What about agents with an endpoint but that request RR transport for a given message? Should their message go in this or the previous queue? If they go in this one, and the designated Relay instance to send the message is terminated, will the message be lost in this queue, even though it could be delivered via the agent's endpoint?

Queue Handling Resiliency

What happens to the messages on the different queues if ACA-Py or Relay instances are terminated, or when WS sessions end?

Here we assume the queue handling guarantees "exactly once" processing, assuming the underlying queue technology supports such a mode. A message is pulled off the queue and processed before it is permanently removed. If the processing fails, the message remains for another instance to pick up.
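The pull-process-remove semantics described above can be modeled with a small ack-based queue. This in-memory sketch mirrors how Redis Streams consumer groups behave (XREADGROUP hands a message to one consumer, and the message stays pending until XACK removes it); the class itself is an illustration, not the Redis API.

```python
class AckQueue:
    """Minimal ack-based queue with pull-then-remove semantics.

    A pulled message is only handed out, not deleted; it is permanently
    removed on ack, and returned to the queue if its consumer dies first.
    """
    def __init__(self) -> None:
        self._ready: list = []    # messages not yet handed to a consumer
        self._pending: dict = {}  # msg_id -> message handed out, not yet acked

    def push(self, msg_id: str, msg: str) -> None:
        self._ready.append((msg_id, msg))

    def pull(self):
        """Hand the oldest message to a consumer; it becomes pending."""
        msg_id, msg = self._ready.pop(0)
        self._pending[msg_id] = msg
        return msg_id, msg

    def ack(self, msg_id: str) -> None:
        """Processing succeeded: permanently remove the message."""
        del self._pending[msg_id]

    def requeue_pending(self) -> None:
        """A consumer died without acking: make its messages available again."""
        self._ready = list(self._pending.items()) + self._ready
        self._pending.clear()
```

Strictly, this delivers "at least once": a message whose consumer dies mid-processing is handed out again, so processing should be idempotent.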

If ACA-Py instances die, there is no impact. Any ACA-Py instance can pick up any message destined for ACA-Py, process it, and move on.

If a Relay instance dies after it has submitted a message, but before getting the response message from the Tag Instruction queue, it will never get the tag instructions. Its termination means that the WS or RR session is terminated as well, since the Relay instance cannot respond.

  • An agent's RR request may have had its message submitted, but the agent will not get an HTTP response. As such, it MAY resend the message.
  • An agent with a WS session would see the session terminate and start a new WS session with another Relay instance.
  • The tag instruction message with the RId will never be processed, since that instance is gone. A cleanup process is likely needed to remove the messages for terminated Relay instances. This could be based on time (e.g., message has not been picked up after some period) or by tracking in shared state the active and terminated Relay instances by RId.
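One possible cleanup policy, sketched here with in-memory structures: each Relay instance periodically refreshes a heartbeat keyed by its RId (in Redis this could be a key written with SETEX so it expires on its own), and a sweeper drops tag instructions addressed to instances whose heartbeat has lapsed. The function and field names are hypothetical.

```python
import time

def sweep_orphaned_instructions(instructions: list, heartbeats: dict,
                                now: float = None, ttl: float = 30.0) -> list:
    """Drop tag-instruction messages addressed to dead Relay instances.

    instructions: list of dicts, each with an 'rid' field (the addressee)
    heartbeats:   RId -> timestamp of that instance's last heartbeat
    ttl:          seconds of silence after which an instance is presumed dead
    Returns the instructions that should remain on the queue.
    """
    now = time.time() if now is None else now
    alive = {rid for rid, ts in heartbeats.items() if now - ts < ttl}
    return [m for m in instructions if m["rid"] in alive]
```

Tracking active RIds in shared state, as suggested above, is the same idea with the liveness set maintained explicitly instead of inferred from timestamps.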

Mediator Cluster

A Mediator Cluster is (currently) defined as an ACA-Py Cluster that is specifically configured to be a DIDComm Mediator, usually for a (potentially very large) group of mobile agents. A Mediator Cluster has a traffic pattern that differs from a general ACA-Py agent (as outlined in the following section), and we may find that optimizations can be made for this use case.

In the following diagram, the Controller is removed based on the idea that either the ACA-Py instance will be configured to auto-accept all requests for mediation, or that a deployment specific "Mediator Plugin" can be added to the ACA-Py instances to handle any business logic needed that would normally be handled by a controller.


Overview

  • Agents ONLY send inbound messages destined for Wallets. No outbound messages go from the mediator to the Agents.
  • Messages destined for Agents do not flow through the mediator. The mediator never sends messages to agents, and Wallet-created messages go directly from the Wallet to the endpoint of the agent.
  • Each Wallet connects via a WS or RR session to a specific Relay instance. While the session is active, outbound messages to the Wallet MUST go through the specific Relay instance.
  • Wallets only retrieve agent messages from the mediator.
    • Exception: When a wallet is requesting mediation services from the mediator, messages are sent from the wallet to the mediator, and from the mediator to the wallet.