does anyone have suggestions for making an archive of a site like this, where the links to the actual pages are all dynamically generated/rendered by JS, so there's no easy way to just scrape the page links from the homepage HTML?

wget can't do it, and archiveweb.page can only kinda do it: it can view/capture the page, but reloading the capture just gives you a "loading" image, just like...

the original: greatmirror.com/united-states-

and the archive: web.archive.org/web/20250717000
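One workaround, sketched below: use a headless browser (e.g. Playwright's `page.content()`, shown only as a comment since that part needs a browser install) to get the HTML *after* the JS has run, then pull the `<a href>` links out of the rendered DOM with nothing but the Python stdlib and feed the resulting list to `wget -i`. The sample markup and URLs here are stand-ins, not taken from the actual site.

```python
# Sketch, assuming the site exposes plain <a> tags once JS has rendered.
# Step 1 (not runnable here, requires playwright + a browser):
#   from playwright.sync_api import sync_playwright
#   with sync_playwright() as pw:
#       page = pw.chromium.launch().new_page()
#       page.goto(base_url, wait_until="networkidle")
#       rendered_html = page.content()   # DOM after JS execution
# Step 2: extract the links from that rendered HTML (stdlib only).
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects absolute URLs from every <a href> in the document."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(urljoin(self.base_url, href))

def extract_links(rendered_html, base_url):
    parser = LinkExtractor(base_url)
    parser.feed(rendered_html)
    return parser.links

# Stand-in for JS-rendered markup:
sample = '<div id="nav"><a href="/page1.html">One</a><a href="/page2.html">Two</a></div>'
print(extract_links(sample, "https://example.com/"))
# → ['https://example.com/page1.html', 'https://example.com/page2.html']
```

Writing the list to a file (one URL per line) and running `wget -i urls.txt --page-requisites --convert-links` then grabs the individual pages, since those pages themselves may be static even when the index isn't.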
