Angus Dippenaar

@angaz

I want chicken nuggets

angazi - Zulu for "I don't know"

Joined on Aug 12, 2023

  • All the API endpoints return JSON and have the Content-Type header set to application/json. Please be respectful and don't try to pull large amounts of data with many requests; use the SQLite snapshots for that purpose. Attribution is appreciated. /api/stats/ Every 30 minutes, a snapshot of the network is taken and saved to the stats
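    As a rough illustration of how such an endpoint might be consumed, here is a minimal Go sketch. The base URL is a placeholder (not taken from the post), and the response is decoded generically since the excerpt doesn't show the schema.

    ```go
    // Minimal sketch of querying the stats API; the base URL and response
    // shape are assumptions for illustration, not from the post itself.
    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    func main() {
        // Hypothetical host; substitute the real node crawler address.
        resp, err := http.Get("https://example.org/api/stats/")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // Every endpoint is documented to respond with application/json.
        fmt.Println("Content-Type:", resp.Header.Get("Content-Type"))

        // Decode into a generic map because the exact schema isn't shown here.
        var stats map[string]any
        if err := json.NewDecoder(resp.Body).Decode(&stats); err != nil {
            panic(err)
        }
        fmt.Printf("%d top-level fields\n", len(stats))
    }
    ```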
  • This is a high-level overview of the project, mainly offering the details needed for operating and understanding the website. In a future iteration of the website, I would like to incorporate the information in this post into little information bubbles you can click on to see the information where it's most relevant. You can read my Updates to learn more about what I was doing while working on the project. You can also watch my EPF Day Presentation from Devconnect Istanbul 2023.
  • Since my last update, I think the most important thing I added was metrics, both to the server via Node Exporter and to the crawler itself, but there were also a lot of backend changes. Solving problems with data: metrics have been really big for me in solving problems, because I could see a pattern in the Grafana dashboard, think of what could solve it, make the change, and see how it looked the next day. They helped me find so many problems and bad ideas which I probably would not have found without the metrics. Problematic queries and changes to the crawler's logic could easily be spotted in the graphs.
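    For context, instrumenting a Go crawler for Prometheus/Grafana usually looks something like the sketch below. The metric name, label, and port here are invented for illustration and are not the project's actual instrumentation.

    ```go
    // A minimal sketch of exposing Prometheus metrics from a crawler process
    // so Grafana can chart them; names and labels are hypothetical.
    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promauto"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // Hypothetical counter: crawl attempts, labelled by outcome.
    var crawlsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
        Name: "crawler_crawls_total",
        Help: "Number of crawl attempts by outcome.",
    }, []string{"outcome"})

    func main() {
        // Prometheus scrapes this endpoint, alongside Node Exporter's.
        http.Handle("/metrics", promhttp.Handler())
        go http.ListenAndServe(":2112", nil)

        // Somewhere in the crawl loop:
        crawlsTotal.WithLabelValues("success").Inc()

        select {} // keep the process alive for the example
    }
    ```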
  • The New Crawler: I tried the ideas outlined in my previous update. It seems that the database can keep up with the write workload. There was an issue where the database is locked while writing the WAL records, and while this is happening, it doesn't seem to obey the busy_timeout setting. But a simple retry system seemed to make it work, and I haven't seen issues since. There were also some bugs which are now fixed. We are getting Nethermind nodes! I don't know why this is happening now, but there are some nodes which sent a Ping before the status message, and this messed up the previous crawler because it expected the messages in a very specific order; if a client did something in the wrong order, it would return an error. The new crawler has a loop where it reads messages until it has the hello and status messages, at which point it sends the disconnect message and saves the data into the database. This seems to be much more reliable and, I think, simpler than the previous method. There still aren't any Reth clients. I think we would have to find someone from Reth to have a look and see what we can find. Maybe it's something similar to the previous issue? Or is one of the clients closing the connection with "useless peer" based on our status message?
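    The retry system mentioned above could look roughly like the sketch below: retry a write a few times when SQLite reports lock contention. The retry count, delay, driver choice, and error check are assumptions for illustration, not the crawler's actual code.

    ```go
    // A minimal sketch of retrying SQLite writes that hit "database is locked",
    // for cases where busy_timeout alone doesn't seem to cover WAL writes.
    package main

    import (
        "database/sql"
        "strings"
        "time"

        _ "github.com/mattn/go-sqlite3" // assumed driver; any SQLite driver works
    )

    // retryBusy retries fn a few times when SQLite reports lock contention.
    func retryBusy(fn func() error) error {
        var err error
        for i := 0; i < 5; i++ {
            err = fn()
            if err == nil {
                return nil
            }
            if !strings.Contains(err.Error(), "database is locked") {
                return err // not lock contention; give up immediately
            }
            time.Sleep(100 * time.Millisecond)
        }
        return err
    }

    func main() {
        db, err := sql.Open("sqlite3", "crawler.db?_busy_timeout=5000")
        if err != nil {
            panic(err)
        }
        defer db.Close()
        db.Exec(`CREATE TABLE IF NOT EXISTS crawls (node_id TEXT)`) // example table

        _ = retryBusy(func() error {
            _, err := db.Exec(`INSERT INTO crawls (node_id) VALUES (?)`, "example")
            return err
        })
    }
    ```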
  • I didn't originally have a plan for the future of the project, but I think what needs to happen is becoming clearer now. The past: the original setup had an infinitely growing crawler database because it was insert-only. This would eventually lead to a lockup because it would take too long to read the crawler database for each update of the API database. [Diagram: the original design of the crawler/API database setup.] The present: in my last update, I explained how I changed the architecture a bit to stop that problem from happening.
  • EPF - Update 2: The setup. I'm running the node crawler on its own Linode Nanode (1 shared CPU, 1GB of memory) instance, like I mentioned in the last update. You can see it here. But I ran into a problem... The problem: the system works with an API and a crawler which run as separate processes. In the NixOS setup, I have these running as separate systemd services; in Docker Compose, these are separate containers.
  • This one is two weeks merged into one. I did some refactoring of the Node Crawler. I upgraded the dependencies and refactored the file structure so it was more in line with a typical Go project, while trying not to change too much so it would be easier to review. The last part might not have materialized as much as I wanted. Upgrading the dependencies was a bit more challenging than I expected. Old Go projects seem to have a lot of things that break over time. At least there were no language breakages; this was mainly packages changing things. The biggest change was updating the go-ethereum package. Even still, this wasn't too difficult at the end of the day, it just took some time to understand what was going on. The biggest change over this time was new versions of the data-exchange protocol. The project was created with the capabilities set to eth/64, eth/65, eth/66, which isn't compatible with a lot of the current clients, which support eth/66 as a minimum version. In the meantime, we have had eth/67 and eth/68. Fortunately, this was pretty simple once I understood it: basically copying the file cmd/devp2p/internal/ethtest/types.go, which contained the updated types, and updating the capabilities supported by the crawler, as sketched below. One change I had to make was to decode the Disconnect case as a raw int instead of RLPx-decoding the message. I'm not sure why some clients send this; I assume it's an old protocol.
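    A minimal sketch of what bumping the advertised protocol versions looks like with go-ethereum's p2p types; the surrounding handshake code is omitted, and the exact integration in the crawler differs.

    ```go
    // Illustrative only: advertising eth/66 - eth/68 instead of eth/64 - eth/66.
    package main

    import (
        "fmt"

        "github.com/ethereum/go-ethereum/p2p"
    )

    func main() {
        // Modern clients expect at least eth/66, so the capability list is
        // bumped to cover eth/66 through eth/68.
        caps := []p2p.Cap{
            {Name: "eth", Version: 66},
            {Name: "eth", Version: 67},
            {Name: "eth", Version: 68},
        }
        for _, c := range caps {
            fmt.Println(c.String()) // e.g. "eth/68"
        }
    }
    ```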
  • The project: The idea started from me wanting to make a tool to see if my Ethereum node was accessible from the internet, so I would know that I've configured my router correctly. I posted my idea to the Discord channel and Mario suggested I work on the Node Crawler and extend it to add the functionality I was looking for. It had been almost a year since the last commit, and the website was not working because the database would eventually get too big, and then the server would stop working. I thought this sounded like a great project to work on. I would make something
  • I'm contributing in a best-effort capacity because I still have a full-time job, but I've found something interesting to work on. During my setup of a staking node, I wanted to check if my node was accessible from the outside world, and I didn't find a tool for this, so I thought this would be a pretty interesting thing to work on. I posted my idea in the EPF Discord channel and Mario suggested I could add it to Node Crawler. It looks like it could use some TLC: the code hasn't been updated for some time, so it could do with some package/language updates, along with some refactoring to bring it closer to a standard Go project structure. So I think this should be a pretty nice thing to work on: fix the issues I've seen above, and then add my idea of scanning a node endpoint to make sure it's exposed as expected. I think it would also be nice to add some beacon chain stats to the project, and a beacon client scan as well.