I also want to mention that FediDB uses a well-defined User-Agent, and does not try to bypass limits with remote crawlers or any other means.

I understand there was a disagreement between myself and GtS regarding robots.txt; however, I always intended to add support for it, so I am doing that now.
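For context, checking robots.txt before fetching is straightforward. Here is a minimal sketch using Python's standard library; the "FediDB" user-agent token is an assumption for illustration, not a confirmed value, and this is not the actual FediDB implementation:

    # Illustrative only: a minimal robots.txt check using Python's
    # standard library. The "FediDB" user-agent token is an assumption;
    # the real token will be documented on the crawler page.
    from urllib import robotparser

    def allowed_to_crawl(domain: str, path: str) -> bool:
        rp = robotparser.RobotFileParser()
        rp.set_url(f"https://{domain}/robots.txt")
        rp.read()  # fetch and parse the instance's robots.txt
        return rp.can_fetch("FediDB", f"https://{domain}{path}")

    # Skip instances whose robots.txt disallows the crawler:
    if allowed_to_crawl("example.social", "/api/v1/instance"):
        pass  # proceed with the stats crawl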

The crawler page will be updated with instructions on how to block the crawler once robots.txt support is ready:

fedidb.org/crawler.html
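In the meantime, a hypothetical robots.txt rule for admins who want to opt out might look like this, assuming the crawler's token is "FediDB" (check the crawler page for the confirmed value):

    # Hypothetical example; the exact User-Agent token is not yet confirmed
    User-agent: FediDB
    Disallow: /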
