Why are we still generating video signals as if there is a CRT on the other end?

E.g. we could save power by only sending the parts of the image that have changed since the last frame - roughly the idea sketched below.
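
(A hypothetical sketch, not any real protocol - send_tile() here is a stand-in for whatever partial-update packet such a link would define. The idea is just: diff the current frame against the previous one in fixed-size tiles and only transmit the tiles that differ.)

    #include <stdint.h>
    #include <string.h>

    #define W    640
    #define H    480
    #define TILE 16

    /* Hypothetical link-layer call: transmit one TILE x TILE block of the
     * current frame. Whatever a partial-update protocol defined would go
     * behind this. */
    extern void send_tile(int tx, int ty, const uint32_t *frame);

    void send_dirty_tiles(const uint32_t *cur, const uint32_t *prev)
    {
        for (int ty = 0; ty < H / TILE; ty++) {
            for (int tx = 0; tx < W / TILE; tx++) {
                int dirty = 0;
                for (int row = 0; row < TILE && !dirty; row++) {
                    size_t off = (size_t)(ty * TILE + row) * W + tx * TILE;
                    if (memcmp(cur + off, prev + off, TILE * sizeof *cur))
                        dirty = 1;
                }
                if (dirty)
                    send_tile(tx, ty, cur); /* unchanged tiles never hit the wire */
            }
        }
    }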

VGA is incredibly easy to generate - you can do it by bitbanging (carefully toggling in software) GPIO pins on an Arduino [1], simply because the tolerances are insanely huge. A step above is SD-SDI [2], which uses fewer pins but has stricter timing requirements; equipment that accepts SDI is usually only found in professional-grade TV production.
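
To give a sense of the timing budget, here's a rough sketch of one 640x480@60 scanline in C. hsync_low()/hsync_high()/delay_us()/output_pixels() are hypothetical helpers (real demos like [1] do this with timers or cycle-counted code), and the delays are rounded from the nominal values:

    #include <stdint.h>

    /* Hypothetical helpers: direct port writes and cycle-accurate delays
     * on the actual MCU would sit behind these. */
    extern void hsync_low(void);
    extern void hsync_high(void);
    extern void delay_us(unsigned us);
    extern void output_pixels(const uint8_t *px, int count);

    /* One 640x480@60 scanline: 800 pixel clocks at 25.175 MHz = ~31.8 us. */
    void vga_scanline(const uint8_t *line_pixels)
    {
        hsync_low();                     /* sync pulse: 96 px = ~3.8 us  */
        delay_us(4);
        hsync_high();
        delay_us(2);                     /* back porch: 48 px = ~1.9 us  */
        output_pixels(line_pixels, 640); /* active video: ~25.4 us       */
        delay_us(1);                     /* front porch: 16 px = ~0.6 us */
    }

    /* A frame is 525 such lines (~60 Hz), with vsync asserted for 2 lines
     * and the RGB pins held at black outside the active region. */

Everything is in the microsecond range, and monitors tolerate a lot of slop, which is why this works at all on a 16 MHz AVR.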

DVI, HDMI, DisplayPort, or (heaven forbid) Thunderbolt are a hot mess - to generate any of these signals without dedicated chips, you need a ton of logic and parts: output format negotiation, extremely strict timing requirements that completely rule out bitbanging, signal conditioning, specially shielded cables, and shielded traces on the circuit board.
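
For a flavour of how much logic that is, here's a sketch of just the first stage of TMDS encoding - the 8b-to-9b transition-minimization step DVI and HDMI apply to each colour channel of every pixel. The second stage (DC balancing against a running disparity counter) and the serializer running at 10x the pixel clock are omitted, and the latter is what makes bitbanging so punishing on ordinary microcontrollers:

    #include <stdint.h>

    static int ones(uint8_t v)
    {
        int n = 0;
        while (v) { n += v & 1; v >>= 1; }
        return n;
    }

    /* Returns the 9-bit intermediate word q_m; bit 8 records whether the
     * byte was encoded with XOR (1) or XNOR (0) between adjacent bits. */
    uint16_t tmds_stage1(uint8_t d)
    {
        int n1 = ones(d);
        int use_xnor = (n1 > 4) || (n1 == 4 && (d & 1) == 0);

        uint16_t q = d & 1;               /* q_m[0] = D[0] */
        for (int i = 1; i < 8; i++) {
            int prev = (q >> (i - 1)) & 1;
            int di   = (d >> i) & 1;
            int bit  = use_xnor ? !(prev ^ di) : (prev ^ di);
            q |= (uint16_t)bit << i;
        }
        if (!use_xnor)
            q |= 1u << 8;                 /* q_m[8] = 1 for XOR, 0 for XNOR */
        return q;
    }

And that's per channel, per pixel, before negotiation, scrambling of blanking periods, or getting the bits onto the wire at hundreds of Mbit/s.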

[1] https://hackaday.com/2014/06/10/640x480-vga-on-an-arduino/

[2] https://en.wikipedia.org/wiki/Serial_digital_interface

It's limited, but there are demos of bitbanged DVI on fairly modest hardware, after some overclocking.

https://github.com/Wren6991/picodvi