The worst way to store and manipulate text is to use an array. First, the entire file must be loaded into the array up front, which raises time and memory concerns. Worse still, every insertion and deletion requires the elements after the edit point to be shifted. There are more downsides, but this method is already clearly impractical. The array can be dismissed as an option rather quickly.

The authors of the dozens of tiny editors using this "structure", which were quite popular on the PC in the late 80s through early 90s, would disagree. A memcpy/memmove() runs at many GB/s on a typical machine today, so you would have to be editing absolutely huge files to notice. Even back then, memory bandwidth was a few MB/s, still plenty fast considering that the typical files of the time were also much smaller.
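
For anyone curious what that looks like in practice, here is a minimal sketch of insertion into one flat buffer. The struct and function names are my own, not taken from any particular editor; the single memmove is the O(n) shift mentioned above:

    /* One big contiguous buffer holding the whole file. */
    #include <stdlib.h>
    #include <string.h>

    struct buffer {
        char  *data;   /* file contents, one flat array */
        size_t len;    /* bytes currently used */
        size_t cap;    /* bytes allocated */
    };

    /* Insert n bytes at offset pos; everything after pos is shifted
       right with one memmove -- the O(n) cost under discussion. */
    int buf_insert(struct buffer *b, size_t pos, const char *s, size_t n)
    {
        if (pos > b->len)
            return -1;
        if (b->len + n > b->cap) {
            size_t newcap = (b->len + n) * 2;
            char *p = realloc(b->data, newcap);
            if (!p)
                return -1;
            b->data = p;
            b->cap  = newcap;
        }
        memmove(b->data + pos + n, b->data + pos, b->len - pos);
        memcpy(b->data + pos, s, n);
        b->len += n;
        return 0;
    }

Even on a buffer of a hundred MB, that memmove takes on the order of milliseconds on current hardware, which is why these editors felt instant.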

I remember attempting to write my own editor and at first spending a lot of time obsessing over the data structures (it was harder to find such information at the time), only to realise that many of the editors I'd tried, including the one I was using most at the time, worked perfectly well with just one big buffer.

I've opened files of a few hundred MB in Windows' Notepad, which also belongs to this family of editors; on a machine a few years old, opening the file is the slowest part, because it has to be read into memory. Once it's open, moving around and editing lines shows hardly any lag at all. "Worse is better", indeed.

Emphatically: Yes! KISS.

Moby Dick, at its slender 752 pages, is 1.2MB of text. You can save the entire text to disk on every keystroke on just about any system today and keep up with typing just fine.
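
If you want a quick sanity check, a rough sketch along these lines (the output file name and the memset'd dummy text are just placeholders) writes a Moby-Dick-sized buffer to disk and prints how long the full save takes; the point is simply that the number should be tiny compared to the gap between keystrokes:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    int main(void)
    {
        size_t len = 1200 * 1024;              /* ~1.2 MB of "text" */
        char *text = malloc(len);
        if (!text)
            return 1;
        memset(text, 'a', len);               /* stand-in for the novel */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);

        FILE *f = fopen("mobydick.txt", "wb"); /* hypothetical output file */
        if (!f)
            return 1;
        fwrite(text, 1, len, f);
        fclose(f);                             /* hand the data to the OS */

        clock_gettime(CLOCK_MONOTONIC, &t1);
        double ms = (t1.tv_sec - t0.tv_sec) * 1e3 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e6;
        printf("full save took %.2f ms\n", ms);
        free(text);
        return 0;
    }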

Assuming you are actually dealing with text in a text editor, you should be fine.

If you have 100MB+ files, chances are they aren't actually text.

Textual dumps of SQL databases also fall into this category. I remember downloading a huge SQL dump that I couldn't open in most editors, because they all buffered the whole thing into RAM. The file was about 13 GB and didn't fit into the RAM of any machine I own. In that case, though, I guess ropes wouldn't have helped either.

Text editors should also take huge files into account and read them from disk sequentially rather than all at once. Even in Emacs I couldn't work with the file. I ended up fixing it and importing it into PostgreSQL, and then I spent hours indexing the necessary fields :).
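
Something like the sketch below is what I mean by sequential reading: process the file a fixed-size window at a time instead of slurping it all into RAM. The 64 KB chunk size and the newline count are arbitrary, just to show the shape of the loop:

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        FILE *f = fopen(argv[1], "rb");
        if (!f) {
            perror("fopen");
            return 1;
        }

        char chunk[1 << 16];                   /* 64 KB window at a time */
        size_t n;
        long long lines = 0;
        while ((n = fread(chunk, 1, sizeof chunk, f)) > 0) {
            /* work on this window only; e.g. count newlines */
            for (size_t i = 0; i < n; i++)
                if (chunk[i] == '\n')
                    lines++;
        }
        fclose(f);
        printf("%lld lines\n", lines);
        return 0;
    }

Memory use stays constant no matter how big the file is; an editor would just keep a window like this around the cursor instead of the whole dump.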

I believe you can use the vlf package (https://github.com/m00natic/vlfi) for dealing with large files in Emacs. I haven't used it myself, so I am not sure how stable it is.