Something that Chromium apps do give you, however, mostly for free, is accessibility. I just tried the GUI version of this client and was not surprised to find that I could not use it. The new Spotify UI released a few months ago is the most accessible Spotify has ever been: landmarks, clear labels, headings, and even ARIA trickery to automatically announce things through my screen reader. I remember being so frustrated with the old UIs that I chose another service just because it was more accessible, even though it didn't have a desktop app; YouTube Music and Deezer had much better UIs from the get-go.

At this point, I'm almost happy to see an Electron app. It doesn't guarantee accessibility, but the likelihood is so, so, so much higher than with any modern cross-platform UI framework. I'd almost go as far as to not call these UIs native, because if they were, if they used native controls, the accessibility would be there. The OS vendors spend a lot of time making those controls usable and consistent; sadly, these UI frameworks don't, or can't.

Sure, psst has a CLI, but I only get panics. I can't pass -h to find out what I can do with it; I can only call it with a Spotify URL and have it play and exit once it's done. It feels like the CLI was included as a testing tool for the underlying libraries and code, not as a usable version of the app itself, but it's still very early in development, so the GUI might be the same. I can't tell.
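The "ARIA trickery" I mean is usually a live region: an element marked aria-live whose text a screen reader announces whenever it changes. A minimal TypeScript sketch of the pattern (the styling, the politeness setting, and the example message are my own illustrative choices, not what Spotify actually ships):

```typescript
// A visually hidden live region; screen readers watch it for text changes.
const liveRegion = document.createElement("div");
liveRegion.setAttribute("role", "status");
liveRegion.setAttribute("aria-live", "polite"); // announce without interrupting current speech
liveRegion.style.position = "absolute";
liveRegion.style.left = "-9999px"; // off-screen, but still in the accessibility tree
document.body.appendChild(liveRegion);

// Changing the region's text content is what triggers the announcement.
function announce(message: string): void {
  liveRegion.textContent = message;
}

// Example: announce a track change.
announce("Now playing: Example Track");
```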

I honestly think we need to rethink accessibility from the ground up. Modern advances in OCR and machine learning should allow us to do accessibility entirely from the GPU output.

It'd take a while to perfect, but I think it'd ease the burden tremendously for software developers and for those who need accessibility.

> It'd take a while to perfect

Some of us can't wait. That's why I, for one, continue to advocate for developers to make their applications accessible with the currently available tools. It's also why I'm trying, with my AccessKit [1] project (which admittedly is taking time to get off the ground), to make it easier for GUI toolkits to implement the current baroque platform accessibility APIs.

I'm also reluctant to concede that we're doomed to reconstruct UI content and semantics probabilistically from pixels, when that information is already there somewhere in the app. But it may be the best long-term solution to the social problem of trying to get everyone to implement accessibility.

[1]: https://github.com/AccessKit/accesskit