One thing PostgreSQL would likely not be able to adapt to, at least without significant effort, is dropping MVCC in favor of more traditional locking protocols.

While MVCC is fashionable nowadays, and more or less every platform offers it at least as an option, my experience (and the opinions I've heard from people who use SQL Server and similar platforms professionally) is that for true OLTP, at least, good ol' locking-based protocols outperform MVCC-based protocols in practice, provided the transactions are well programmed.

The “inconvenient truth” that maintaining multiple versions of records badly affects performance might make MVCC less appealing in the future. There's ongoing research to improve things, such as [0], but it's not clear to me at this point that MVCC is a winning idea.

[0] https://dl.acm.org/doi/10.1145/3448016.3452783
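
The version buildup is easy to observe in PostgreSQL, for what it's worth. A minimal sketch (the counter table is purely illustrative): every UPDATE creates a new row version and leaves the old one behind as a dead tuple until VACUUM reclaims it.

    -- Illustrative table; every UPDATE writes a new row version.
    CREATE TABLE counter (id int PRIMARY KEY, n int);
    INSERT INTO counter VALUES (1, 0);

    UPDATE counter SET n = n + 1 WHERE id = 1;  -- repeat many times

    -- Old versions linger as dead tuples until VACUUM runs:
    SELECT n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    WHERE relname = 'counter';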

I sort of want the opposite. Except for extremely high-velocity mutable data, why do we ever drop an old version of any record? I want the whole database to look more like git commits: completely immutable, versionable, every change attributable to a specific commit, connection, client, and user.
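
You can get surprisingly far towards that with an append-only pattern in plain SQL today. A rough sketch, with made-up table and column names (DISTINCT ON is PostgreSQL-specific):

    -- Rows are never updated or deleted, only superseded by a newer version.
    CREATE TABLE account_versions (
        account_id   bigint      NOT NULL,
        version      bigint      NOT NULL,
        balance      numeric     NOT NULL,
        committed_by text        NOT NULL,              -- who made the change
        committed_at timestamptz NOT NULL DEFAULT now(),
        PRIMARY KEY (account_id, version)
    );

    -- "Current state" is just a view over the latest version of each row.
    CREATE VIEW accounts AS
    SELECT DISTINCT ON (account_id) account_id, balance
    FROM account_versions
    ORDER BY account_id, version DESC;

Point-in-time queries and audit trails then fall out for free; the cost is that every read of current state pays for the version filtering.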

So much complexity and stress and work at the moment comes from the fear of data loss or corruption: schema updates, migrations, backups, all the distributed computing stuff where every node has to assume every other node could have mutated the data... And then there are countless applications full of "history"-type tables to reinstate audit trails on top of the mutable data. It's kind of ridiculous when you think about it.
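
Those audit trails get reinvented per application with triggers. A typical sketch of the boilerplate (hypothetical orders table; the pattern follows the audit-trigger example in the PostgreSQL docs):

    -- History table: a copy of every row version, plus attribution columns.
    CREATE TABLE orders_history (
        changed_at timestamptz,
        changed_by text,
        LIKE orders
    );

    CREATE FUNCTION log_order_change() RETURNS trigger AS $$
    BEGIN
        INSERT INTO orders_history SELECT now(), current_user, OLD.*;
        RETURN NULL;  -- return value is ignored for AFTER triggers
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER orders_audit
    AFTER UPDATE OR DELETE ON orders
    FOR EACH ROW EXECUTE FUNCTION log_order_change();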

It all made sense when storage was super expensive, but these days all the countermeasures we have to implement to deal with mutable state are far more expensive than just using more disk space.

You should check out Dolt; it does exactly what you're describing, and it's a drop-in MySQL replacement:

https://github.com/dolthub/dolt
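
For a taste, the git-style operations are exposed as SQL procedures and system tables. A sketch going from Dolt's docs (double-check the exact names there):

    -- Dolt speaks MySQL, with version control bolted on as SQL procedures.
    CREATE TABLE employees (id int PRIMARY KEY, name varchar(100));
    INSERT INTO employees VALUES (1, 'Ada');

    -- Stage and commit git-style; every change becomes an attributable commit.
    CALL DOLT_ADD('-A');
    CALL DOLT_COMMIT('-m', 'add employees table');

    -- Browse history like git log:
    SELECT commit_hash, committer, message FROM dolt_log;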