# Queue Overload - Throughput optimisation
## Navigation
1. [Problem](https://hackmd.io/@jwdunne/HJXyhNY4h)
2. [Observability](https://hackmd.io/@jwdunne/S1pJ1CgHn)
3. [Testing](https://hackmd.io/@jwdunne/H1zKkAeSn)
4. [Throughput optimisation opportunities](https://hackmd.io/@jwdunne/H1h2k0xH3)
5. [Backpressure](https://hackmd.io/@jwdunne/B1WZeCeBh)
6. [Load shedding](https://hackmd.io/@jwdunne/BJB4MReH2)
7. [Autoscaling](https://hackmd.io/@jwdunne/Bkw_zAxHn)
## Solution
These are opportunities to permanently reduce the number of messages of each type, as opposed to shedding load:
- Take `Count` and `Gauge` off the queue
- Make `QueueSync` unique for each client integration
- Make `ActionCountsBuilder` unique for each client
- Make `StartProcessingWorkflows` globally unique
- Make `ProcessWorkflows` unique per client
- Make `RefreshSingleToken` unique per client
- Make `ScheduleRefreshTokens` globally unique
## Making events unique
Making events unique is as simple as defining the `uniqueId` method on the command, _or_ implementing `ShouldBeUnique` on the listener and defining `uniqueId` there.
Making listeners globally unique means providing a constant:
```php
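// A constant key means at most one pending instance queue-wide.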
public function uniqueId(): string
{
return 'automations.workflows.process';
}
```
Making listeners or commands unique per client means appending the client ID to the unique key, as in the sketch below.
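For illustration, a minimal sketch of a per-client unique command using Laravel's `ShouldBeUnique` contract; the class shape and constructor here are assumptions, not the existing implementation:

```php
use Illuminate\Bus\Queueable;
use Illuminate\Contracts\Queue\ShouldBeUnique;
use Illuminate\Contracts\Queue\ShouldQueue;
use Illuminate\Foundation\Bus\Dispatchable;

// Hypothetical shape: assumes the command already receives the
// client ID it is processing for.
class ProcessWorkflows implements ShouldQueue, ShouldBeUnique
{
    use Dispatchable, Queueable;

    public function __construct(private readonly int $clientId)
    {
    }

    // Appending the client ID scopes the lock: at most one pending
    // instance per client, rather than one queue-wide.
    public function uniqueId(): string
    {
        return 'automations.workflows.process.' . $this->clientId;
    }
}
```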
## Metrics
Taking `Count` and `Gauge` off the queue would remove over a thousand jobs per hour.
Initially, a system to aggregate job metrics was considered, but it turns out this is exactly what already happens, since Prometheus is pull-based.
It would be simpler not to use the queue for these metrics at all.
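As a sketch of the alternative, metrics could be recorded in-process and left for Prometheus to scrape. This assumes the `promphp/prometheus_client_php` package with a shared Redis-backed store; the metric names and labels are hypothetical:

```php
use Prometheus\CollectorRegistry;
use Prometheus\Storage\Redis;

// Shared storage so the /metrics scrape endpoint sees values
// recorded by every worker process.
$registry = new CollectorRegistry(new Redis(['host' => '127.0.0.1']));

$clientId = '42'; // example label value
$depth = 17;      // example gauge value

// Record the count directly instead of dispatching a Count job.
$counter = $registry->getOrRegisterCounter(
    'automations',          // namespace
    'workflows_processed',  // hypothetical metric name
    'Workflows processed',  // help text
    ['client']              // label names
);
$counter->incBy(1, [$clientId]);

// Gauges work the same way: set the value in place of a Gauge job.
$gauge = $registry->getOrRegisterGauge(
    'automations',
    'queue_depth',          // hypothetical metric name
    'Current queue depth',
    []
);
$gauge->set($depth);
```

Either way, the scrape endpoint reads the current values on demand, so nothing needs to flow through the queue.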
## Implementation
- Make `StartProcessingWorkflows` globally unique
- Make `ScheduleRefreshTokens` globally unique
- Make `ProcessWorkflows` unique per client
- Make `ProcessSyncQueue` globally unique
- Make `QueueSync` unique for each client integration
- Make `ActionCountsBuilder` unique for each client
- Make `RefreshSingleToken` unique for each client
- Stop using the queue to publish metrics