FYI, the Internet Archive hosts a collection of ZIM archives with dumps of Wikipedia and many other works. https://archive.org/details/zimarchive

I wish it were a little more obvious how to search it, or what all the variations mean, but it looks like a valuable resource.

It is worth noting that Kiwix works on multiple OSes and on phones, and has a wifi hotspot version (that you might run on a Raspberry Pi, for example). Internet-in-a-Box similarly works as a wifi hotspot for ZIM archives.
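If you just want to serve a ZIM file over your LAN without a full hotspot image, kiwix-tools also ships a small web server. A minimal sketch (the ZIM filename here is a placeholder):

    # serve one or more ZIM files on your local network with kiwix-serve
    # (part of kiwix-tools); wikipedia_en_all.zim is a placeholder filename
    kiwix-serve --port=8080 wikipedia_en_all.zim
    # then browse to http://<host-ip>:8080 from any device on the LAN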

Lastly, it is worth mentioning that there are tools for creating your own ZIM files; it looks like the most straightforward way is to take a static website and use a utility to convert it into one self-contained file.
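As a rough sketch of that workflow: the openZIM project publishes zimit, a Docker image that crawls a site and emits a single ZIM file, and zimwriterfs (from zim-tools) for converting an already-downloaded static directory. The URL and names below are placeholders, and flags vary between versions, so check the project docs:

    # crawl a live site into one ZIM file with zimit
    docker run -v $PWD/output:/output ghcr.io/openzim/zimit zimit \
      --url https://example.com --name example-site

    # or, if you already have a static copy on disk, convert it with
    # zimwriterfs; metadata flags are illustrative, and recent versions
    # may also require an icon flag (--favicon or --illustration)
    zimwriterfs --welcome=index.html --language=eng \
      --title="Example" --description="Offline copy of example.com" \
      --creator="example.com" --publisher="me" \
      ./static-site example-site.zim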

Thanks for sharing. Can you explain a bit more about creating our own ZIM files, or about creating them from websites archived on the Internet Archive?

I'm looking for a way to archive all the websites from my browser bookmarks and then download them for offline use.

Not related to the OP topic or ZIM, but I was looking into archiving my bookmarks and other content like documentation sites and wikis. I'll list some of the things I ended up using.

ArchiveBox[1]: Pretty much a self-hosted Wayback Machine. It can save websites as plain HTML, a screenshot, extracted text, and some other formats. I have my bookmarks archived in it and use a bookmarklet to easily add new websites. If you use the docker-compose setup, you can enable a full-text search backend with little extra work.
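If you'd rather drive it from the CLI than the web UI, the basic flow looks roughly like this (a sketch; the paths are placeholders, and the ArchiveBox docs cover the Docker equivalents):

    # one-time setup: create an archive data directory
    mkdir ~/archivebox && cd ~/archivebox
    archivebox init

    # add a single URL, or pipe in a whole bookmarks export
    archivebox add 'https://example.com'
    archivebox add < ~/Downloads/bookmarks.html

    # browse and search everything at http://localhost:8000
    archivebox server 0.0.0.0:8000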

WebRecorder[2]: A browser extension that creates WACZ archives directly in the browser, capturing exactly the content you load. I use it on sites with annoying dynamic content that tools like the Wayback Machine and ArchiveBox can't copy properly.

ReplayWeb[3]: A viewer for archive formats like WARC, WACZ, and HAR. Browsing an archive feels just like browsing the live site. It can be self-hosted as well, for the full offline experience.
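For the self-hosted route, replayweb.page is a static web app, so one rough approach (a sketch, assuming you've fetched the app files per the project's deployment docs) is to drop it next to your archives and serve the folder:

    # a sketch for a fully offline viewer: copy the replayweb.page app files
    # (per the project's self-hosting docs) next to your archives, then serve
    # the directory with any static file server
    cd ~/replay-site    # contains the app files plus your .wacz/.warc files
    python3 -m http.server 8080
    # open http://localhost:8080 and point the viewer at a local archive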

browsertrix-crawler[4]: A CLI tool that scrapes websites and outputs WACZ. It's easy to run with Docker, and I use it to scrape entire blogs and docs sites for offline use. It drives Chrome to load webpages and has some extra features like custom browser profiles, interactive login, and autoscroll/autoplay. I use the `--generateWACZ` flag so I can use ReplayWeb to easily browse through the final output.
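For reference, a typical invocation looks something like this (a sketch; the URL and collection name are placeholders, and flags may vary by version):

    # crawl a site with browsertrix-crawler and package the result as WACZ
    docker run -v $PWD/crawls:/crawls/ webrecorder/browsertrix-crawler crawl \
      --url https://example.com --generateWACZ --collection example-docs \
      --scopeType prefix   # stay under the starting path
    # output lands under ./crawls/collections/example-docs/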

For bookmark and miscellaneous webpage archiving, ArchiveBox should be more than enough. Check out this repo for an amazing list of tools and resources: https://github.com/iipc/awesome-web-archiving

[1] https://github.com/ArchiveBox/ArchiveBox
[2] https://webrecorder.net
[3] https://replayweb.page
[4] https://github.com/webrecorder/browsertrix-crawler