Once the latency requirement goes past ... i dunno, a few seconds? "Insh'allah" time frames? Tape comes in and stomps all over the competition with its space/cost tradeoff. Dismountable media plus a mechanism to swap tapes (even a meatsicle if need be) just has too many advantages for anything else to compete with.

Gets me thinking tho: why stop at 5.25in drives? Let's go back to washing-machine-sized units, or even bigger. Let's stack platters so dense and heavy that we can use the spindle as a flywheel UPS too.

Or how about making a truly gigantic tape spool? Perhaps Kevlar threads impregnated with iron could serve as "tape", and we could spin kilometers of that onto a spool.

It's true tapes are okay when time is not of the essence, but there's currently a lot of pain in dealing with tapes when it comes to loading, reading, writing, and verifying everything. You need some way to make sure the data on the tape matches the original after writing. Then you need some way to verify that the copy you get back from the tape matches. You need multiple copies, because sometimes whole tapes go bad. You also have a limited number of read/write passes before wear becomes a problem, so you have to plan how to use those passes efficiently.

This is stuff that can be mostly automated, but it's surprising to me that it's mostly done manually, and the tools that do exist are often bespoke in one way or another. A nice thing about a hard drive is that most of that logic is baked into the firmware of the drive, so you are less dependent on having a good system for verification and record keeping (though you still need one). Tapes can currently take minutes to load and hours to read or write if you need to copy the whole thing. I'm sure if someone were going to make those dice of drives they'd figure out the economics and logistics, but it would be a monumental effort, both technically and physically, to make it work well.
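To make the verification bookkeeping above concrete, here is a minimal Python sketch of one way to do it (the paths and names are invented for the example): build a SHA-256 manifest of the source files before writing, then rehash whatever comes back from the tape and diff the two. A real tape workflow would layer retry limits, copy counts, and tape IDs on top of this.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large files never need to fit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(root: Path) -> dict[str, str]:
    """Map relative file paths under root to their SHA-256 hashes."""
    return {
        str(p.relative_to(root)): sha256_of(p)
        for p in sorted(root.rglob("*"))
        if p.is_file()
    }

def verify(manifest: dict[str, str], restored_root: Path) -> list[str]:
    """Return paths that are missing from the restore or whose hashes differ."""
    bad = []
    for rel, expected in manifest.items():
        restored = restored_root / rel
        if not restored.is_file() or sha256_of(restored) != expected:
            bad.append(rel)
    return bad

# Hypothetical usage: hash before writing to tape, verify after restoring.
manifest = build_manifest(Path("/data/to-archive"))
Path("manifest.json").write_text(json.dumps(manifest, indent=2))
mismatched = verify(manifest, Path("/data/restored-from-tape"))
print("mismatched files:", mismatched or "none")
```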

> You need some way to make sure the data on the tape matches the original after writing.

Do you? Virtually every filesystem has some kind of error detection, maybe even error correction, built in.

That seems like a solved problem to me. CD-R solved this by writing Reed-Solomon codes every couple of bytes, so that if any errors occurred you could just fix them on the fly. (As such, scratches could erase all sorts of data and you could still read it back just fine.)

I have to imagine that tapes have a similar kind of error correction going on, using whatever is popular these days (LDPC?). Once you have error correction and error detection, you just read/write as usual.
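To show the Reed-Solomon principle in a few lines, here is a sketch using the third-party reedsolo Python package; this is just an illustration at the software level, not what tape drives actually run (they do their ECC in drive firmware). With 10 parity bytes appended, up to 5 corrupted bytes per block can be repaired on read.

```python
# pip install reedsolo  (third-party library; decode() return shape is for reedsolo >= 1.0)
from reedsolo import RSCodec, ReedSolomonError

rsc = RSCodec(10)  # append 10 parity bytes; corrects up to 5 corrupted bytes

block = b"data block headed for tape"
encoded = bytearray(rsc.encode(block))

# Simulate media damage: flip a few bytes, as a scratch or dropout might.
encoded[3] ^= 0xFF
encoded[7] ^= 0xFF
encoded[15] ^= 0xFF

try:
    # decode() returns (message, message + parity, positions of corrected errors)
    recovered, _, error_positions = rsc.decode(encoded)
    print("recovered:", bytes(recovered))
    print("corrected byte positions:", list(error_positions))
except ReedSolomonError:
    print("too many errors to correct; detection still tells you the block is bad")
```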

-------

If tapes don't have that sort of correction/detection built in, you can build it in at the software level (like Backblaze used to do)... or maybe with something like Parchive (https://en.wikipedia.org/wiki/Parchive).

> or maybe like Parchive

par2 even has options for specifying the level of redundancy. I've had good experience recovering large corrupted files from an external drive; since then, I've incorporated it into the automated backups for my personal infrastructure (rough sketch below).

https://github.com/Parchive/par2cmdline
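For anyone wanting to script par2 the same way, here is a hedged Python sketch of that kind of automation. It assumes the par2cmdline binary is installed and on PATH; the directory names are invented. `-r10` requests roughly 10% redundancy, and `create`/`verify`/`repair` are the standard par2 subcommands.

```python
# Hypothetical wrapper around par2cmdline (https://github.com/Parchive/par2cmdline);
# assumes `par2` is on PATH, and all paths below are made up for the example.
import subprocess
from pathlib import Path

def create_parity(archive_dir: Path, redundancy_percent: int = 10) -> None:
    """Generate .par2 recovery files covering every regular file in archive_dir."""
    files = [p.name for p in sorted(archive_dir.iterdir())
             if p.is_file() and p.suffix != ".par2"]
    subprocess.run(
        ["par2", "create", f"-r{redundancy_percent}", "backup.par2", *files],
        check=True,
        cwd=archive_dir,  # par2 works most predictably with relative paths
    )

def verify_and_repair(archive_dir: Path) -> bool:
    """Return True if the data verifies (possibly after a repair attempt)."""
    verify = subprocess.run(["par2", "verify", "backup.par2"], cwd=archive_dir)
    if verify.returncode == 0:
        return True
    # Damage detected: try to rebuild from the recovery blocks.
    repair = subprocess.run(["par2", "repair", "backup.par2"], cwd=archive_dir)
    return repair.returncode == 0

if __name__ == "__main__":
    target = Path("/backups/2024-06")  # invented path for the example
    create_parity(target)
    print("ok" if verify_and_repair(target) else "unrecoverable damage")
```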