# Hachyderm Infrastructure and Availability Update
Hello everyone.
First I want to say thank you to the people who have shown support to hachyderm, our moderators, and our infrastructure engineers. This has touched us more than you will know.
As a reminder: We are a team of volunteers, all of whom have day jobs and are as exhausted and burnt out from the pandemic and capitalism as I am sure most of you are.
To be candid, we know the service has been shitty, and I personally am apologizing for that.
Here is a high level overview of the state of things today, along with some answers to our frequently asked questions.
### History
Here is where I ask that anyone reading this pauses and takes a moment to understand the words I am about to share. I also ask that you take a moment to imagine what these words would mean to you if you were sharing them.
In April of 2022 I created a small hobby Mastodon instance for me and my small group of Twitch viewers, friends, and family. We decided to run the service in my homelab, where I have been actively developing many open source projects: a pid 1 alternative to systemd, Kubernetes exploit tools, eBPF malware, and security tools like xpid and Falco. The server was built on an experimental ZFS installation that had been online for well over 2 years and served as a way for us to easily snapshot our work in the lab.
- October: 250 users
- November 3rd: 720 users
- November 13th: 6,000 users
- November 23rd: 25,000 users
- Today: Over 30,000 users
While I am completely flattered that 30,000 people have decided to call hachyderm home, I think it is very fair to say that none of us were prepared for this to happen. To be candid, the server was in an unknown state, loaded with mysterious development software in various states of installation and configuration. This server is now our production database. We are aware this is a problem, and trust us -- we want off of it as much as you do.
As the service grew, so did our problems, and we are doing everything we can to keep up.
### Overview
The service is "more" stable today than it has been in several days.
However, we want to manage expectations and say that it will likely be "slow" and "flaky" at least until the end of the week and throughout the weekend.
We have prioritized data integrity and are doing all we can to learn a new architecture while balancing our health, personal needs, and goals as a team.
To give you a brief idea of what it has been like in the "command center": we have gone from a few of us manually managing SSH keys to a full-blown automated 24/7 incident command process, complete with post-mortems and a formal review process.
The team in the US works into the night after our day jobs, until the EU team wakes up and joins us for a few hours. The folks in the EU work until the US wakes up again. We have people working on the service 24 hours a day.
### Technical Details
We will share a deeper overview of the specifics, graphs, and lessons learned as we find time to consolidate our research somewhere that doesn't introduce risk into our infrastructure.
However, for now, here are the main details.
- NFS was our primary bottleneck and was removed yesterday, hence the slight uptick in service excellence.
- Our SSDs are the current limiting factor in our availability, specifically the database that sits on them.
- Mastodon's architecture is highly "tunable" and prone to cascading failures.
- Sidekiq workers are difficult to control.
- Sidekiq operates relatively chaotically and is unpredictable at times.
- Sidekiq workers are prone to overloading the same resources (database, cache, block devices) that the HTTP service also couples to.
The executive summary is that when Sidekiq bursts, our service quality likely decreases. We are combating it as best we can.
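For those curious: Sidekiq keeps its queues in Redis, so a burst shows up as growing queue depth and latency. Below is a minimal sketch of how one might watch for that; the queue names and Redis address are illustrative assumptions, not our exact production values.

```python
# A minimal sketch of watching Sidekiq queue depth and latency via Redis.
# Requires redis-py; the host and queue names below are assumptions.
import json
import time

import redis

QUEUES = ["default", "push", "pull", "ingress"]  # assumed queue names

r = redis.Redis(host="localhost", port=6379)

for name in QUEUES:
    key = f"queue:{name}"       # Sidekiq stores each queue as a Redis list
    depth = r.llen(key)         # number of jobs waiting
    oldest = r.lindex(key, -1)  # Sidekiq LPUSHes, so the oldest job is last
    latency = 0.0
    if oldest:
        job = json.loads(oldest)
        latency = time.time() - job["enqueued_at"]  # seconds the oldest job has waited
    print(f"{name}: depth={depth} latency={latency:.1f}s")
```

When those latencies climb, HTTP quality tends to drop with them, because the workers and the web tier share the same database and cache.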
At a high level, Mastodon is a group of interdependent systems that makes a lot of assumptions about its architecture. Threads, processes, workloads, and specific features all need to be managed, and changing the parallelism of these processes requires restarting them.
In short, when our disks begin to bottleneck, we see a ripple effect that starts at the database, and eventually makes its way to the edge resulting in slow HTTP responses, 500s and other errors for you, the user.
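One way to see the database stage of that ripple is to count Postgres backends stuck waiting on IO or locks. A rough sketch, assuming psycopg2, PostgreSQL 10+, and placeholder connection details:

```python
# A rough sketch of spotting the disk -> database stage of the ripple:
# count Postgres backends currently waiting on IO or locks.
# The connection string is a placeholder; requires PostgreSQL 10+.
import psycopg2

conn = psycopg2.connect("dbname=mastodon user=postgres host=localhost")
with conn.cursor() as cur:
    cur.execute(
        """
        SELECT wait_event_type, count(*)
        FROM pg_stat_activity
        WHERE wait_event_type IN ('IO', 'Lock')
        GROUP BY wait_event_type
        """
    )
    for wait_type, n in cur.fetchall():
        print(f"{wait_type}: {n} backends waiting")
conn.close()
```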
### Current Migration
We have a migration plan that is currently underway to address the issues above.
- [X] Migrate our main storage (1 TB of data) to Digital Ocean and slowly shift our traffic off the home lab into the cloud.
- [ ] We have yet to fully serve our entire media store from the cloud.
- [ ] Migrate our main database off the home lab and into a longer term home with faster and readily available disks and configuration.
- [X] Scale the number of edge nodes and reverse proxies around the globe to meet our traffic needs
- [X] Form conviction around the intricacies of Sidekiq and its queuing mechanisms such that we are "more" confident in how they work, and the impact they have on our HTTP (Puma) services.
- [ ] Cutover to the new primary database
- [ ] Work backwards from the new database and reconfigure our edge nodes to serve traffic from the new primary
- [ ] Audit our VPN
- [X] Rotate keys
- [ ] Rotate keys again
- [ ] Document our topology, and our processes for newcomers
- [ ] Account management for newcomers
- [ ] Security audit to onboard strangers into production management
- [ ] Develop an out-of-band status report mechanism such that we can communicate during an outage
- [X] Keep our users' data safe and protected
Remember all of this takes time. Even writing this document took about an hour.
### Current Focus
Our immediate concern is migrating the service off of the rack in my basement. We expect this to take another 2-3 days, as we all have day jobs and are trying not to disrupt an already delicate service.
The first step in performing this migration is moving our primary database.
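A common pattern for a move like this is to seed the new database as a streaming replica and cut over only once it has fully caught up. Without committing to our exact mechanics here, the catch-up check looks roughly like the sketch below (assuming PostgreSQL 10+; the connection details are placeholders):

```python
# A sketch of verifying a streaming replica has caught up before cutover.
# Assumes the new database was seeded as a streaming replica (a common
# migration pattern, not necessarily our exact plan) on PostgreSQL 10+.
# The connection string is a placeholder.
import psycopg2

replica = psycopg2.connect("dbname=mastodon user=postgres host=new-primary")
with replica.cursor() as cur:
    cur.execute(
        """
        SELECT pg_is_in_recovery(),
               pg_wal_lsn_diff(pg_last_wal_receive_lsn(),
                               pg_last_wal_replay_lsn()) AS unreplayed_bytes,
               now() - pg_last_xact_replay_timestamp() AS replay_delay
        """
    )
    in_recovery, unreplayed_bytes, replay_delay = cur.fetchone()
    print(f"in_recovery={in_recovery} unreplayed={unreplayed_bytes}B delay={replay_delay}")
replica.close()
```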
### Communication Moving Forward
We intend to schedule downtime and communicate about it in 3 places:
- The main github.com/hachyderm/community README
- The announcements in Hachyderm (at the top of the home timeline)
- Future HackMD posts such as this one, until we are able to get a static blog stood up
### Impact to You
We do not expect you to suffer any data loss; however, we want you to be prepared for a rough week of service availability ahead. Things might be "dodgy" for a few days.
Our hope is that hachyderm is beautifully running again on Monday morning when the world wakes up.
We expect the duplicate messages to be addressed as we begin to free up resources for the main database. We currently believe it's a result of how the Sidekiq workers operate and how they can pick up the same work more than once. We expect a faster database will address this issue.
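For the curious: Sidekiq delivers jobs at least once, so under pressure the same job can be fetched or retried twice. The generic guard is an idempotency key, sketched below; this illustrates the pattern only and is not Mastodon's actual code.

```python
# An illustration of guarding a side effect against at-least-once delivery:
# mark each job's unique ID in Redis before acting on it. The names here
# are hypothetical; this is a sketch of the pattern, not Mastodon's code.
import redis

r = redis.Redis()

def deliver_once(job_id: str, deliver) -> bool:
    # SET NX succeeds only for the first worker to claim this job ID,
    # so a duplicate pickup of the same job becomes a no-op for a day.
    claimed = r.set(f"delivered:{job_id}", 1, nx=True, ex=86400)
    if not claimed:
        return False  # another worker already handled this job
    deliver()
    return True
```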
### Call for Help
The biggest thing we need right now is morale building, and support. We are sad, and bummed out. This has been hard for us.
We are unable to bring any more operators on at this time; however, once the service is stable again we will identify 4-6 volunteers to onboard and help us.
Sincerely, we need #hugops. We are very tired, and we are working on this service from an open source and altruistic perspective. What would help us most right now is patience as we work to get out of the homelab, stabilize the infrastructure, and bring new operators on board.
Again -- we did not build hachyderm with the anticipation that 30,000 people would show up in less than 30 days. It just... happened. We are doing all we can to keep the service online.
As it turns out, we love using hachyderm as much as you do. We all want it to be better for everyone.
### Contacting Us
If you need help, we have volunteers watching the GitHub issue tracker, and we are responding to emails at `admin@hachyderm.io`.
We appreciate the reports, and we understand that some of this is frustrating to you.
We do not want to work in a vacuum, however we also need to protect the service. Our long term goal is to have hachyderm, our moderation team, our infrastructure topology, and our governance open and available for the broader internet to see, influence, and become a stakeholder and owner of.
### Future and Mission
Our mission remains the same as it has been since the day we turned the service on.
> Here we are trying to build a curated network of respectful professionals in the tech industry around the globe. We welcome anyone who follows the rules and needs a safe home or fresh start.
> We are hackers, professionals, enthusiasts, and are passionate about life, respect, and digital freedom. We believe in peace and balance.
> Safe space. Tech Industry. Economics. OSINT. News. Rust. Linux. Aurae. Kubernetes. Go. C. Infrastructure. Security. LGBTQIA+. Pets. Hobbies.
That said, we reserve the right to add availability to our guiding mission statement as soon as the service recovers.
---
Personally, I want nothing more than to see this grow into a safe and welcoming community where we can leverage our infrastructure for good. There is space here to grow young marginalized engineers into experienced professionals with the guidance and mentorship of our current professionals working on the site.
It will take some time to get there.
Thanks for the emojis, and thank you for being patient with me and the team of admins, moderators, and operators. I deeply believe in #hachyderm, and I am committed to making this site the beacon of hope the world and the industry needs right now.
_Kris Nóva_