It occurs to me again that I need to figure out how entire websites can be downloaded and archived. Like Archive.org, but local.
https://www.httrack.com/ is a good option.
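For reference, a typical invocation looks something like this (the URL and filter pattern are placeholders; see the HTTrack docs for the full option list):

    # Mirror a site into ./mirror, staying within the example.com domain
    httrack "https://example.com/" -O ./mirror "+*.example.com/*" -v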
Most SPA websites today can't be downloaded with HTTrack: the content is rendered client-side by JavaScript, so a plain HTML crawler only sees the empty shell page.
Yes, SPAs will be next to impossible for a tool like this; I'm not sure how any tool could archive such a site, tbh?
I use browsertrix-crawler [0] for crawling, and it does well on JS-heavy sites since it uses a real browser to request pages. It even has options to load browser profiles so you can crawl while authenticated on sites.

[0] https://github.com/webrecorder/browsertrix-crawler
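In case it helps, here's roughly how I run it; it ships as a Docker image, and these flags are from the README as I remember them, so double-check against your version:

    # Crawl a site with a real browser and write a WACZ archive
    docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
      crawl --url https://example.com/ --generateWACZ --collection mycrawl

    # For authenticated crawls: create a login profile interactively first,
    # then pass it to the crawler on subsequent runs
    docker run -p 6080:6080 -p 9223:9223 -v $PWD/crawls/profiles:/crawls/profiles \
      -it webrecorder/browsertrix-crawler create-login-profile \
      --url "https://example.com/login"

    docker run -v $PWD/crawls:/crawls/ -it webrecorder/browsertrix-crawler \
      crawl --url https://example.com/ \
      --profile /crawls/profiles/profile.tar.gz --generateWACZ

The resulting WACZ file can then be replayed offline with ReplayWeb.page, which gets you pretty close to the "Archive.org, but local" setup the OP is after.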