I collect bookmarks of things I want to read, but inevitably will never have time to read. Sometimes I go back and look and the webpage domain is expired, or the page 404s. But I still meticulously file my bookmarks into folders and tune them.
I've never admitted this to anyone, but it feels good to get that off my chest.
I did the same for a while, but it was a mess (700+ unsorted bookmarks on my main computer, 100s more on others).
I tried Shaarli, but soon after I decided to build something myself, and created share-links.
It's an open-source Django app that you can self-host, and that lets you store (and share!) links, titles, descriptions, and tags. It then displays them in a nice way (for me: not much CSS, a simple page with no JS).
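The data model is basically just links plus tags; roughly something like this Django sketch (a simplification I'm writing here for illustration, not the actual share-links code):

    # models.py -- illustrative sketch of a minimal link store;
    # field names are made up, not the real share-links schema.
    from django.db import models

    class Tag(models.Model):
        name = models.CharField(max_length=50, unique=True)

    class Link(models.Model):
        url = models.URLField()
        title = models.CharField(max_length=200)
        description = models.TextField(blank=True)
        tags = models.ManyToManyField(Tag, blank=True)
        added = models.DateTimeField(auto_now_add=True)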
It took a few dozen hours to get to the point where it's really usable, and it still has problems (comments aren't moderated, and I just realized that you can't add a description to links or tags, but I'll fix this soon™).
Here's the link to the repo: https://gitlab.com/sodimel/share-links/
One cool feature is to set your browser homepage to the URL that loads a random page: each day I get a cool article to read or a concept to discover! (Here's my own instance's random-link URL: https://links.l3m.in/en/random/)
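The random endpoint itself is only a few lines of Django; a sketch of the idea (not the real view):

    # views.py -- sketch of a random-link redirect; names assumed.
    from django.shortcuts import redirect
    from .models import Link

    def random_link(request):
        # order_by('?') picks a random row; fine for a small table
        link = Link.objects.order_by('?').first()
        return redirect(link.url)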
That's my useless side project (because Shaarli already exists and is way more mature).
Nice! Doesn't solve the core problem though: we almost never go back to read these hundreds of bookmarked websites.
If websites were downloadable like a (versioned) document, you could create your own offline bookmark web (archive), which would be neat. I guess scraping could help with getting the contents to disk, but the result would probably be kinda broken.
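Even a naive script gets you part of the way; a sketch (raw HTML only, hence the brokenness; the dated filename is the poor man's versioning):

    # snapshot.py -- naive archiver sketch: fetch a URL and dump the
    # raw HTML to a dated file. No assets or JS are saved, so pages
    # will render imperfectly; wget -p or SingleFile do this properly.
    import datetime, pathlib, urllib.request

    def snapshot(url, outdir="archive"):
        html = urllib.request.urlopen(url, timeout=30).read()
        stamp = datetime.date.today().isoformat()
        name = url.split("//", 1)[-1].replace("/", "_")
        path = pathlib.Path(outdir) / f"{name}_{stamp}.html"
        path.parent.mkdir(exist_ok=True)
        path.write_bytes(html)
        return path

    snapshot("https://example.com/some-article")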
If you could then direct your search engine to discover content from this archive, you'd perhaps more often (re)stumble upon that neat discontinued blog that's stuck 5 levels deep in your folder structure.
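Pointing a local full-text search at such an archive isn't much code either; a sketch using SQLite's FTS5 (assuming your SQLite build includes it, which most do; indexing raw HTML with tags is crude, and stripping markup first would give cleaner hits):

    # search.py -- sketch: index archived HTML into SQLite FTS5,
    # then full-text search it.
    import pathlib, sqlite3

    db = sqlite3.connect("archive.db")
    db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS pages USING fts5(path, body)")
    for f in pathlib.Path("archive").glob("*.html"):
        db.execute("INSERT INTO pages VALUES (?, ?)",
                   (str(f), f.read_text(errors="ignore")))
    db.commit()

    # e.g. find that discontinued blog again
    for (path,) in db.execute("SELECT path FROM pages WHERE pages MATCH ?",
                              ("discontinued",)):
        print(path)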
Share-links, on the other hand, can convert the page content into a PDF file using WeasyPrint [2].
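The WeasyPrint side of that is essentially a one-liner (standard WeasyPrint usage; the URL and filename here are just examples):

    # render a page to PDF with WeasyPrint
    from weasyprint import HTML

    HTML(url="https://example.com/article").write_pdf("article.pdf")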