On my machine, when I last tried the various accelerated terminal emulators, I wasn't convinced. At least under plain X, GL context creation adds extra latency when creating new windows (it might be different if you use a compositor all the time, I guess). In addition, on terminals such as kitty, the startup time of a full new process was really non-negligible, which I suspect is due to the Python support.

With a tiling window manager, the built-in notebook/tiling functionality is not really useful (the window manager is more flexible and has universal keybindings), so looking at the time required to pop a full new window, in either single- or shared-instance mode, they were actually behind regular xterm. Resource usage wasn't stellar either (xterm was still better than most lightweight libvt-based terminals). I couldn't feel much of a latency improvement either (again, X without a compositor).
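For reference, a minimal sketch of how one could time this, not a proper benchmark (it assumes a running X session, that xterm and kitty are on PATH, and that each exits as soon as the command it runs finishes):

    # Spawn each terminal, have it run `true` and exit immediately,
    # and average the wall-clock time over a few runs.
    import subprocess
    import time

    def startup_time(cmd, runs=10):
        total = 0.0
        for _ in range(runs):
            start = time.monotonic()
            subprocess.run(cmd, check=True)  # blocks until the window closes
            total += time.monotonic() - start
        return total / runs

    # xterm takes -e; kitty takes the command as positional arguments.
    print("xterm:", startup_time(["xterm", "-e", "true"]))
    print("kitty:", startup_time(["kitty", "true"]))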

I'm sure the difference is there at full throughput, but who sits there watching pages of output scroll by faster than anyone can read? I do keep terminals open for days, but my most common use case is open window -> run a small session -> close, and I got annoyed fast.

A GPU-accelerated terminal emulator sounds like a nuclear-powered kitchen mixer to me.

Like, why? In over 20 years of using terminal emulators, not a single time have I thought "Man, I wish my terminal was faster".

Is this just a fun project to do, like, "yay, I wrote a GPU-accelerated terminal emulator!"?

Probably inspired by the performance problems with the Windows Terminal [1] and the accelerated terminal [2] developed by Molly Rocket as an 'answer'? There's a series of videos presenting the PoC [3].

[1] https://news.ycombinator.com/item?id=28743687 (It takes a PhD to develop that)

[2] https://github.com/cmuratori/refterm

[3] https://www.youtube.com/watch?v=hxM8QmyZXtg, https://www.youtube.com/watch?v=pgoetgxecw8