# Design: PubSub via Kafka
**Author**: @mtoohey
## Description
Reimplement FTL's PubSub using Kafka, with topics, partitions and consumer groups.
## Motivation
FTL's PubSub needs to be highly performant and scalable.
## Goals
- High performance
- Scalable
- At least once delivery
- Design so that in 99% of cases users never need an escape hatch to use Kafka directly, even if we do not initially expose all the options needed to achieve that.
### Non-Goals
- Encrypting the data we send to Kafka
- Supporting canary deployments or multiple versions of a module at once
- Verbs that process a batch of events at once (can be added later)
- Defining how non-production environments scale differently (e.g. fewer partitions in staging)
## Design
The runner is responsible for communicating with Kafka. It calls verbs in the language runtime as it consumes from Kafka. Language runtimes use gRPC to produce Kafka events via the runner server.
### Runners
Each runner will connect to the consumer group for each subscription within the module. Kafka brokers will balance partitions for each subscription across the connected runners. Note: balancing happens within each subscription, so some runners may be active for multiple subscriptions while others are not assigned any partitions at all.
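As a rough illustration of the note above, the following self-contained sketch simulates round-robin partition assignment within a single subscription's consumer group. The real assignment is performed by Kafka's group coordinator and depends on the configured assignment strategy; this only shows that when a subscription has fewer partitions than runners, some runners sit idle for that subscription.

```go
package main

import "fmt"

// assignPartitions simulates round-robin assignment of one subscription's
// partitions across the runners in its consumer group. The real assignment
// is done by Kafka's group coordinator; this only illustrates that when
// there are more runners than partitions, some runners receive nothing.
func assignPartitions(numPartitions int, runners []string) map[string][]int {
	assignment := make(map[string][]int, len(runners))
	for _, r := range runners {
		assignment[r] = nil
	}
	for p := 0; p < numPartitions; p++ {
		r := runners[p%len(runners)]
		assignment[r] = append(assignment[r], p)
	}
	return assignment
}

func main() {
	// 2 partitions, 3 runners: runner-c is assigned nothing for this
	// subscription, though it may still be active for another one.
	fmt.Println(assignPartitions(2, []string{"runner-a", "runner-b", "runner-c"}))
	// → map[runner-a:[0] runner-b:[1] runner-c:[]]
}
```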
Runners understand each topic and subscription from the module schema received during deploy. Schema runtime nodes will hold Kafka specific information:
- Kafka broker endpoints
- Topic id
Authentication will be handled by IAM roles in AWS.
### Topic Configuration
#### Partitions
The number of partitions for a topic is defined with the topic (see Runtime API changes below). A key is provided when publishing an event, which is used to determine the partition to send it to.
Runners will also need to handle partition revocations so they commit the offset they got up to before the partition was re-assigned.
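For intuition about how the publish key selects a partition: clients typically hash the key and take it modulo the partition count. The exact hash depends on the client library (the Java client uses murmur2; Sarama's default partitioner uses FNV-1a), so this stdlib sketch only shows the general shape:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// partitionFor sketches how a publish key maps to a partition: hash the
// key, then take it modulo the partition count. Real clients differ in
// the hash they use, so this is illustrative only.
func partitionFor(key string, numPartitions int32) int32 {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int32(h.Sum32() % uint32(numPartitions))
}

func main() {
	// The same key always maps to the same partition, which preserves
	// per-key ordering across publishes.
	fmt.Println(partitionFor("user-123", 8) == partitionFor("user-123", 8)) // → true
}
```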
#### Other
FTL will not expose other configuration options for topics initially. These include:
- Retention policy
- Replication factor
  - FTL will default to 3
### Subscription Configuration
A subscription can only have a single subscriber verb. Previously, multiple subscriber verbs were allowed, but we will remove that functionality.
Subscriptions will try to have defaults that work reasonably well for the majority of cases, with the ability to configure further if the specific needs of the topic are different.
Subscriptions can begin from the start of the topic (default behavior) or from the latest event at the time the consumer group was provisioned.
#### Batching and offset commits
FTL will default to batching with these values (the Go library Sarama's defaults are also listed as a comparison):
- Fetch.Default: 1MB
  - Depending on the data size of events in the topic, this may mean a large or small number of events.
  - 1MB is the Sarama default
- Fetch.Min: 1 byte
  - This is the minimum number of bytes that must be available (waiting up to MaxWaitTime) before a batch is returned.
  - 1 is the Sarama default, which prioritises reacting to new events
  - Choosing other values is difficult without knowing more about the expected size of an event
- MaxWaitTime: 250ms
  - The smaller this value is, the more calls are made when there are no events.
  - 250ms is the Sarama default
Subscriptions can have `batch` metadata which overrides the fetch sizes and max wait time.
FTL will commit offsets at the completion of each batch, and asynchronously within a batch every 5s. This helps prevent re-processing too many events if a runner fails. It matters because FTL's defaults assume nothing about event size or per-event processing time, so batches can vary widely in how long they take to process.
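Expressed against Sarama's `Config`, the defaults above would look roughly like the fragment below (field names are Sarama's; values are the FTL defaults described in this section — a sketch, not the final implementation):

```go
cfg := sarama.NewConfig()
// Fetch sizes and wait time from the defaults above.
cfg.Consumer.Fetch.Default = 1024 * 1024          // 1MB per fetch request
cfg.Consumer.Fetch.Min = 1                        // react to new events as soon as possible
cfg.Consumer.MaxWaitTime = 250 * time.Millisecond // broker wait when Fetch.Min is not yet met
// Commit offsets asynchronously every 5s while a batch is in flight;
// batch-completion commits are issued explicitly by the runner.
cfg.Consumer.Offsets.AutoCommit.Enable = true
cfg.Consumer.Offsets.AutoCommit.Interval = 5 * time.Second
```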
### Retries
Runners will not handle retries initially. Soon we will use Istio retries (assuming local traffic can be routed through Istio), or implement the retry logic ourselves.
There is no guarantee that the runner does not die while in a retry loop, in which case the retry state is lost. Retry policies should be chosen with that in mind.
### Dead letter topics
Subscribers can also declare a topic for failed events (ie: events that have errored and exhausted the retry policy if it exists). These are called dead letter topics in FTL.
Dead letter topics are an optional addition to any subscription. If the subscription opts into this feature, FTL will generate a topic automatically as `topic <topicName>Failed <event>` with only 1 partition.
The runner will publish these failed events to the dead letter topic before processing the next event. If there is no dead letter topic, then the runner will simply move on to the next event. Dead letter topics will wrap events with `builtin.FailedEvent`.
Retry policy's `catch` parameter will no longer be supported on subscriber verbs. We recommend using a dead letter topic instead.
### Infrastructure and provisioning
We will use Amazon MSK for now.
We will add a provisioner that:
- sets the Kafka endpoint in ModuleRuntime for modules that use PubSub
- creates Kafka topics with the configured number of partitions and stores the topic name in `TopicRuntime`
  - Topic name will need to fit within the [249 character limit](https://kafka.apache.org/documentation/#basic_ops_add_topic)
- Kafka consumer groups are not explicitly created, but the provisioner will set the consumer group ID in `SubscriptionRuntime` so we don't spread around the logic that determines the group ID
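A sketch of the name validation the provisioner would apply before creating a topic (the character set and 249-character limit are Kafka's; the `pizza.orders` naming used in `main` is an illustrative assumption, not a naming scheme decided in this document):

```go
package main

import (
	"fmt"
	"regexp"
)

// maxTopicNameLen is Kafka's limit on topic name length (249 characters).
const maxTopicNameLen = 249

// legalTopicName matches the characters Kafka allows in topic names.
var legalTopicName = regexp.MustCompile(`^[a-zA-Z0-9._-]+$`)

// validateTopicName checks the constraints the provisioner must enforce
// before asking Kafka to create a topic.
func validateTopicName(name string) error {
	if len(name) == 0 || len(name) > maxTopicNameLen {
		return fmt.Errorf("topic name must be 1-%d characters, got %d", maxTopicNameLen, len(name))
	}
	if !legalTopicName.MatchString(name) {
		return fmt.Errorf("topic name %q contains characters outside [a-zA-Z0-9._-]", name)
	}
	return nil
}

func main() {
	fmt.Println(validateTopicName("pizza.orders")) // → <nil>
}
```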
#### Local Environment
- We will use Redpanda as a Kafka stand-in as it is lightweight.
- When provisioning the first PubSub topic, we will start Redpanda. This allows for leaner dev environments if PubSub is not being used.
  - We will reconsider this if it causes pain
- No need to make separate runners for RPC/PubSub
  - The runner will be responsible for all subscriptions/consumer groups for the module.
  - Redpanda will assign all partitions of each topic to the one runner for each subscription.
## API
### Schema changes
Runtime changes
```
type Topic struct {
	...
	Runtime *TopicRuntime
}

type TopicRuntime struct {
	KafkaBrokers []string
	TopicID      string
}

type Subscription struct {
	...
	Runtime *SubscriptionRuntime
}

type SubscriptionRuntime struct {
	KafkaBrokers    []string
	TopicID         string
	ConsumerGroupID string
}
```
Topic, subscription, and subscriber changes
```
topic orders pizza.order
+partitions 8
// subscription <name> (<topic> : <verb>)
subscription cookSubscription (pizza.orders : pizza.processOrder)
+retry 10 1s 1s
+batch 1KB 1MB 250ms // format: +batch (<minSize>)? <defaultSize> (<maxWaitTime>)?
+deadletter
+from latest // format: +from (latest|beginning)
// verbs no longer include the subscription they subscribe to
```
### Builtin changes
```
module builtin {
	...
	// FailedEvent is used in dead letter topics.
	export data FailedEvent[Event] {
		event Event
		error String
	}
}
```
### Runtime API changes
#### Go
FTL package:
```go
// TopicPartitionMap maps an event to a partition key
type TopicPartitionMap[E any] interface {
	PartitionKey(event E) string
}

// SinglePartitionMap can be used for topics with a single partition
type SinglePartitionMap[E any] struct{}

var _ TopicPartitionMap[struct{}] = SinglePartitionMap[struct{}]{}

func (SinglePartitionMap[E]) PartitionKey(_ E) string { return "" }

// TopicHandle accesses a topic
//
// Topics publish events, and subscriptions can listen to them.
type TopicHandle[E any, M TopicPartitionMap[E]] struct {
	Ref          *schema.Ref
	PartitionMap M
}
```
Module code:
```go
// OrderPartitionMap partitions orders by user, preserving per-user ordering.
// It must be a type implementing TopicPartitionMap so it can be used as
// TopicHandle's type parameter.
type OrderPartitionMap struct{}

func (OrderPartitionMap) PartitionKey(order PizzaOrder) string {
	return order.UserId
}

// The directive ftl:topic is only needed if partitions or from configuration is needed
//
//ftl:topic export partitions=8
type PizzaOrders = ftl.TopicHandle[PizzaOrder, OrderPartitionMap]

// Verbs

func CreateOrder(ctx context.Context, order PizzaOrder, topic PizzaOrders) error {
	return topic.Publish(order)
}
//ftl:verb
//ftl:retry 10 1s 1s
//ftl:subscribe pizzaOrders from=beginning deadletter
func ProcessOrder(ctx context.Context, order PizzaOrder) error {...}
//ftl:verb
//ftl:subscribe processOrderFailed from=beginning
func ProcessFailedOrder(ctx context.Context, event builtin.FailedEvent[PizzaOrder]) error {...}
```
#### Java
// TODO: ...
#### Kotlin
// TODO: ...
#### Python
// TODO: ...
### gRPC changes
Publisher service implemented by runner server:
```protobuf
message PublishRequest {
  schema.Ref topic = 1;
  bytes body = 2;
  string key = 3;
}

message PublishResponse {}

service Publisher {
  rpc Publish(PublishRequest) returns (PublishResponse);
}
```
Consuming: Module server uses existing VerbService to call subscriber verbs in the language runtime.
## Rejected Alternatives
### Postponed: Subscription Runners
[Decision: not needed initially]
FTL will separate out runners by role:
- RPC runners will receive RPC calls
- A group of runners provisioned for each subscription in the module. Each subscription will have as many runners as there are partitions on the topic.
Runners will be told their role using environment variables. Only runners with a subscription role will connect to a Kafka consumer group.
Parameters passed to subscription runners:
- Kafka endpoint + API keys (or whatever the mechanism to talk to Kafka is)
- A list of the following (see Local Environment for when a runner is assigned multiple subscriptions):
- subscription ref
- kafka consumer group id