Who is Quentin:
Quentin is a Solution Engineer at Giant Swarm where he works on the internal Monitoring Platform. He is passionate about observability in distributed systems and SRE practices, and is excited to share the lessons he has learned along the way. Over the last year, he has been getting involved in upstream Prometheus development, including Prometheus itself, the Prometheus Operator and Kube-State-Metrics.
Talk description:
The Giant Swarm Monitoring Team recently redesigned its Prometheus-based metrics platform to make it more scalable and reliable, so that it can monitor more clusters and services.
This talk will go through our journey towards our current Prometheus setup, based on open-source tools such as the Vertical Pod Autoscaler and the Prometheus Operator, which today allows us to ingest more than 1M samples per second across 200 clusters.
We will explore the different ways to scale Prometheus and explain in which cases each scaling strategy can be used. We will also learn some tips and tricks to help along the way.
Bullet points:
I will go through our legacy setup, why it failed, and how we got to where we are through experimentation.
I want this talk to help people facing the same issues we had to find answers.
How to scale Prometheus (in a multi-cluster world)
Sources for diagrams: https://banzaicloud.com/blog/multi-cluster-monitoring/
What is prometheus?
Monitoring and alerting system ; Add more stuff here
- Uses Service Discovery to find targets
- Scrapes targets to pull metrics
- Provides a flexible query language (PromQL) to query the metrics
- Stores the metrics in an internal time-series database called the TSDB
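A minimal sketch of what that looks like in practice (job name and query are illustrative, not our real configuration):
```yaml
# prometheus.yml -- minimal, illustrative configuration
global:
  scrape_interval: 30s            # how often Prometheus pulls metrics from its targets

scrape_configs:
  - job_name: kubernetes-nodes    # placeholder job name
    kubernetes_sd_configs:
      - role: node                # use Kubernetes service discovery to find the targets
```
The scraped samples land in the TSDB and can then be queried with PromQL, e.g. `sum by (instance) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))` for per-node CPU usage.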
... Short intro, our Journey to prometheus
At Giant Swarm, we have been managing multi-tenant Kubernetes clusters for customers since 2016, using what is known in the Cluster API space as a Management Cluster per customer or cloud region.
This cluster is used to manage the other clusters (cluster creation and upgrades, app management and so on).
To reduce the operational overhead, we initially built a single Prometheus per management cluster that would scrape both the management cluster and the workload cluster components. We enabled meta-monitoring using external pings to ensure the monitoring/alerting setup worked as expected.
At the beginning, this solution was good enough as we did not manage a lot of clusters, but at a certain scale our Prometheis started to OOM because they ingested too many metrics, due to either very large clusters or many clusters; it boils down to the number of time series ingested. They also started to take longer to boot up when replaying the Write-Ahead Log (WAL).
So we decided to explore how to scale Prometheus.
This presentation will explain what we found as well as some pitfalls to avoid when scaling Prometheus.
1# Reduce the number or size of ingested metrics
! Drop unused metrics, use a bigger scrape interval.
This can alleviate the OOM issues for a time, but they will come back eventually unless your clusters never change, never upgrade and so on. Hopefully, this is not the case.
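For illustration, both knobs live in the scrape configuration; the job and metric names in the drop rule below are placeholders:
```yaml
scrape_configs:
  - job_name: kube-state-metrics        # placeholder job
    scrape_interval: 60s                # bigger interval = fewer samples ingested
    static_configs:
      - targets: ["kube-state-metrics:8080"]
    metric_relabel_configs:
      # Drop series we never query, before they reach the TSDB
      - source_labels: [__name__]
        regex: "kube_pod_container_status_last_terminated_.*|go_.*"
        action: drop
```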
2# Scale vertically
As Prometheus is a database, the usual idea is to scale Prometheus vertically, that is, to give Prometheus more computing power/resources.
As with the first option, this can work for some time, but when you need a 64GB machine just to run Prometheus, the machine cost starts to grow, and when Prometheus reboots it can take up to 20 minutes to read the WAL file.
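In practice this just means raising the resource requests/limits of the Prometheus container; a sketch with illustrative values:
```yaml
# Fragment of the Prometheus StatefulSet pod spec (values are illustrative)
containers:
  - name: prometheus
    resources:
      requests:
        memory: 32Gi
        cpu: "4"
      limits:
        memory: 64Gi   # at this size, node cost and WAL replay time both start to hurt
```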
3# ~~Scale horizontally~~
As said in the beginning, Prometheus' local storage is limited to a single node in its scalability and durability, so it cannot scale horizontally. If you increase the number of replicas, you get a highly available Prometheus, not scalable storage.
3a# Federation
One way to scale your Prometheus setup is federation: one Prometheus instance aggregates data from multiple Prometheis by scraping a specific endpoint (/federate) exposed by each of them.
That way, you are able to federate a selection of metrics.
The issue with federation is that you can only aggregate a subset of all the metrics, or your federating instance will never fit on a single machine.
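A sketch of such a federation job, close to the example in the Prometheus documentation (target addresses and match[] selectors are placeholders):
```yaml
scrape_configs:
  - job_name: federate
    honor_labels: true              # keep the labels of the federated series as-is
    metrics_path: /federate
    params:
      "match[]":
        - '{__name__=~"job:.*"}'    # only pull aggregated recording rules...
        - '{__name__="up"}'         # ...plus target health
    static_configs:
      - targets:
          - prometheus-cluster-a:9090   # placeholder addresses of the federated Prometheis
          - prometheus-cluster-b:9090
```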
3b# Sharding
The common solution for databases then becomes sharding. As we manage multiple clusters, the logical shard key is the cluster id. Strictly speaking, we split the data per cluster rather than sharding it.
Hence, we decided to have one Prometheus per workload cluster and one for the management cluster.
Each workload cluster Prometheus scrapes the discovered targets in its workload cluster, while the management cluster Prometheus discovers targets in the management cluster, including the workload cluster Prometheis.
Diagram here
Usually, Prometheus servers should be hosted close to their targets, i.e. in the cluster they are monitoring, but we decided to host them all in the management cluster to allow us to iterate faster and ease operations.
This is where we used the Prometheus Operator to manage the multiple Prometheus instances, and built prometheus-meta-operator, an open-source operator on top of it to manage the Prometheus CRs, scrape configs and so on.
For the future, we are discussing supporting all CAPI CRD with this operator.
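As an illustration (these are not our exact manifests), the operator lets us declare one Prometheus CR per cluster and use an external label as the shard key:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: workload-cluster-a            # placeholder: one CR per workload cluster
  namespace: monitoring
spec:
  replicas: 1
  retention: 2w
  externalLabels:
    cluster_id: workload-cluster-a    # every series carries the cluster it came from
  serviceMonitorSelector:
    matchLabels:
      cluster: workload-cluster-a     # only pick up the scrape configs for this cluster
```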
As we had only one Prometheus per management cluster, defining the storage, RAM size and so on was easy.
With multiple Prometheis, each targeting a different cluster, the memory needs are different for each one. This is where the VPA and cluster-autoscaler can be useful.
(Set min and max allowed)
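A minimal sketch of such a VPA object, targeting the StatefulSet the operator creates (names and bounds are illustrative):
```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: prometheus-workload-cluster-a
spec:
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: prometheus-workload-cluster-a   # placeholder StatefulSet name
  updatePolicy:
    updateMode: Auto                      # let the VPA resize (evict + recreate) the pod
  resourcePolicy:
    containerPolicies:
      - containerName: prometheus
        minAllowed:
          memory: 1Gi
        maxAllowed:
          memory: 48Gi                    # cap so one Prometheus cannot eat a whole node
```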
Automate your meta-monitoring setup all the way, even the creation of the external ping.
Query all Prometheus instances at once: promxy as a fanout service.
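A sketch of a promxy server-group configuration fanning out over two per-cluster Prometheis (addresses and labels are placeholders):
```yaml
promxy:
  server_groups:
    - static_configs:
        - targets:
            - prometheus-cluster-a:9090   # placeholder addresses of the per-cluster Prometheis
            - prometheus-cluster-b:9090
      labels:
        source: promxy                    # extra label added to the merged series
```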
Talk about pod preemption and volume attachment issues, and the multi-AZ setup with volume binding mode.
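One concrete pitfall: in a multi-AZ setup, a StorageClass with volumeBindingMode: WaitForFirstConsumer makes sure the volume is provisioned in the zone where the Prometheus pod is scheduled (name and provisioner below are placeholders):
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: prometheus-storage                # placeholder name
provisioner: ebs.csi.aws.com              # your cloud's CSI driver
volumeBindingMode: WaitForFirstConsumer   # bind the PV only once the pod is scheduled,
                                          # so pod and volume end up in the same AZ
```
A higher PriorityClass on the Prometheus pods can likewise mitigate the preemption issue.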
4# Long term storage and view of all the data
Today there are multiple solutions: Cortex, Thanos, M3DB.
All of them are easy to integrate thanks to remote read/write support, especially with the Prometheus Operator, which supports Thanos out of the box.
We use a managed Cortex for the metrics we need to keep long-term (mostly recording rules), but we do not need to keep all our metrics.
- Managing our own Cortex or Thanos would take time to get right (alerting, storage, snapshots and so on)
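A sketch of the corresponding remote_write block in plain Prometheus configuration terms (URL and regex are placeholders; the same can be expressed in the Prometheus CR):
```yaml
remote_write:
  - url: https://cortex.example.com/api/v1/push   # placeholder Cortex endpoint
    write_relabel_configs:
      # Only forward recording rule results (convention: names contain a ':')
      - source_labels: [__name__]
        regex: ".+:.+"
        action: keep
```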
This is our final architecture
I need a conclusion and more pitfalls and examples
Questions