howdy!

over the last week or so, we've been preparing to move hachy's zones from route 53 to bunny DNS.

since this could be a pretty scary thing -- going from one geo-DNS provider to another -- we want to make sure *before* we move that records are resolving in a reasonable way across the globe.

to help us do this, we've started a small, lightweight tool that we can deploy to a provider like bunny's magic containers to quickly get DNS resolution info from multiple geographic regions. we then write this data to a backend S3 bucket, at which point we can use a tool like duckdb to analyze the results and find records we need to tweak to improve performance. all *before* we make the change.
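for a sense of the shape of the data, here's a rough python sketch of the kind of record each regional probe might produce. to be clear, this is *not* hachyboop's actual code -- the field names and format are guesses, and the real tool presumably queries specific DNS servers rather than leaning on the OS resolver like this does:

```python
import socket
import time
import uuid

# hypothetical: in a real deployment the region would come from the
# container's environment rather than being hard-coded
REGION = "us-east"
CLIENT_ID = str(uuid.uuid4())

def probe(host: str) -> dict:
    """resolve a host from this region and package the result as one record."""
    addrs = sorted({info[4][0] for info in socket.getaddrinfo(host, None)})
    return {
        "timestamp": time.time(),
        "client_id": CLIENT_ID,
        "region": REGION,
        "host": host,
        "answers": addrs,
    }

record = probe("localhost")
# in the real tool, batches of records like this get written to the S3 bucket
```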

then, after we've flipped the switch and while DNS is propagating -- :blobfoxscared: -- we can watch in real-time as different servers begin flipping over to the new provider.

we named the tool hachyboop and it's available publicly --> github.com/hachyderm/hachyboop

please keep in mind that it's early in the booper's life, and there's a lot we can do, including cleaning up my hacky code. :blobfoxlaughsweat:

attached is an example of a quick run across 17 regions for a few minutes. the data is spread across multiple files but duckdb makes it quite easy for us to query everything like it's one table.

[screenshot: a console with a duckdb command selecting data from multiple parquet files in an s3 bucket, across multiple folders.]

the resulting table has many rows, each with a timestamp, unique client ID, region, the host being queried, and the results returned by the DNS server that was queried.
