Over the past year or two I've been working on a (currently theoretical) design for a kind of pseudo-competitor to the web, based on that observation.

The fact that the web ignores most rules of platform design (because it was never meant to be one) isn't really a new observation. Most platforms build their higher-level APIs on top of their lower-level ones, so you can always drop down a level if the higher levels aren't working for you. You've got a widget toolkit, but you can render your own widgets, and if you do you don't lose anything else, because you're drawing with the same lower-level APIs the OS widgets use.
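
To make the layering point concrete, here's a minimal Rust sketch. Everything in it (the DrawSurface trait, Button, CustomGauge) is invented for illustration; the point is just that the custom widget draws through the exact same low-level interface the toolkit widget does:

```rust
// The low-level layer: a raw drawing surface (a stand-in for e.g. a
// platform canvas or GPU command buffer). Hypothetical API.
trait DrawSurface {
    fn fill_rect(&mut self, x: u32, y: u32, w: u32, h: u32, color: u32);
    fn draw_text(&mut self, x: u32, y: u32, text: &str);
}

// The higher-level layer: an "official" toolkit widget built on
// DrawSurface.
struct Button { label: String }

impl Button {
    fn render(&self, surface: &mut dyn DrawSurface) {
        surface.fill_rect(0, 0, 120, 32, 0xCCCCCC);
        surface.draw_text(8, 20, &self.label);
    }
}

// A fully custom widget that skips the toolkit and uses the same
// low-level API directly. It loses nothing by dropping down a level.
struct CustomGauge { fraction: f32 }

impl CustomGauge {
    fn render(&self, surface: &mut dyn DrawSurface) {
        let filled = (120.0 * self.fraction) as u32;
        surface.fill_rect(0, 0, filled, 32, 0x3366FF);
    }
}

// A trivial surface implementation so the sketch actually runs.
struct LogSurface;

impl DrawSurface for LogSurface {
    fn fill_rect(&mut self, x: u32, y: u32, w: u32, h: u32, color: u32) {
        println!("rect {}x{} at ({},{}) color #{:06X}", w, h, x, y, color);
    }
    fn draw_text(&mut self, x: u32, y: u32, text: &str) {
        println!("text {:?} at ({},{})", text, x, y);
    }
}

fn main() {
    let mut surface = LogSurface;
    Button { label: "OK".into() }.render(&mut surface);
    CustomGauge { fraction: 0.4 }.render(&mut surface);
}
```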

So why doesn't HTML work that way? Partly path dependency, partly philosophy, and partly the difficulty of sandboxing low-level code, as the sibling comment observes. However, the latter has been getting easier over time as operating system kernels improve their sandboxing abilities. It's also not always necessary. Do I care if Slack or Hacker News is sandboxed? Not really. Some sandboxing is nice just for peace of mind against security exploits, but I do actually trust these brands and don't think they're going to exploit me. A more aggressive sandbox is useful when following links to random unknowns like people's blogs, but the web's design doesn't recognize that gradients of trust exist.
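
As a toy illustration of what a gradient of trust could look like, here's a Rust sketch; the trust levels and policies are hypothetical, not part of any real design:

```rust
// Illustrative only: mapping a gradient of trust to sandbox strictness,
// something the web's one-size-fits-all model can't express.

#[derive(Debug, Clone, Copy)]
enum Trust {
    Known,   // a brand you've decided to trust (Slack, HN)
    Casual,  // something you use but haven't vetted
    Unknown, // a random link to someone's blog
}

#[derive(Debug)]
enum SandboxPolicy {
    Light,    // light isolation: peace of mind, full features
    Standard, // default kernel sandbox, restricted fs/network
    Strict,   // no persistence, no devices, tight syscall filter
}

fn policy_for(trust: Trust) -> SandboxPolicy {
    match trust {
        Trust::Known => SandboxPolicy::Light,
        Trust::Casual => SandboxPolicy::Standard,
        Trust::Unknown => SandboxPolicy::Strict,
    }
}

fn main() {
    for t in [Trust::Known, Trust::Casual, Trust::Unknown] {
        println!("{:?} -> {:?}", t, policy_for(t));
    }
}
```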

An alternative approach would be to reject the attempt to write a giant spec for all of computing, and instead embrace competition, diversity and layering. Start with a small core that knows how to download files and compose them together. It can fetch native apps into an HTTP cache, update them efficiently, execute them and delete them when the cache gets full. At that bottom layer the core enforces the platform's native security mechanisms (code signing, Gatekeeper, SmartScreen etc). Then allow people to create native sandboxing apps that other code can be composed with, so that the other code runs inside kernel-level sandboxes. Responsibility flows downwards, so it's the sandboxing app that takes the risk of being blocked as malware if it fails in its core duty. Then you can compose VMs with the kernel sandboxing apps, and at each layer of the cake you're exposing IPC APIs that allow apps to, for example, have their GUI embedded in a browser-style tabbed WM, to abstract and sandbox the platform's native accessibility APIs, to support form auto-fill and all the other features you might want.
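
To give a feel for the composition model, here's a deliberately simplified Rust sketch. The Component and SandboxProvider types are my own invention; the point is that the core only fetches, verifies, and then hands the app to the sandbox provider it asked for, with the provider carrying the downward-flowing responsibility:

```rust
// A minimal sketch, under assumptions of my own invention. A real core
// would delegate to actual kernel primitives (seccomp, App Sandbox,
// AppContainer, ...) rather than print.

#[derive(Debug, Clone)]
struct Component {
    name: String,
    url: String,           // fetched into an HTTP-style cache
    signed_by: String,     // native code-signing identity
    requires: Vec<String>, // components to compose around this one
}

// The party that takes the blame if isolation fails: responsibility
// flows downwards to the sandbox provider, not the wrapped app.
struct SandboxProvider { name: String }

impl SandboxProvider {
    fn run_isolated(&self, app: &Component) {
        println!(
            "[{}] verified signature by {}; running {} in a kernel sandbox",
            self.name, app.signed_by, app.name
        );
    }
}

fn main() {
    let app = Component {
        name: "example-app".into(),
        url: "https://example.invalid/app".into(),
        signed_by: "Example Corp".into(),
        requires: vec!["acme-sandbox".into()],
    };
    let sandbox = SandboxProvider { name: "acme-sandbox".into() };

    // The core's whole job at this layer: fetch, check the platform's
    // native security mechanisms, then wire app to sandbox provider.
    println!("fetching {} into cache", app.url);
    assert!(app.requires.contains(&sandbox.name));
    sandbox.run_isolated(&app);
}
```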

Such a thing would be quite different to the web; for example, it would naturally support command-line apps and servers. It would also be less centrally planned, as the core would mostly be a kind of dependency resolver that ensures components can be connected together across sandboxed IPC boundaries and provides some level of UI sharing beyond the native window manager. The spec would be minimal. The downside is that the user agent would be less generically powerful, because you aren't forcing everything into a single model; the upside is that the spec would actually be implementable, because it's not trying to do everything.
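
As a sketch of the "core as dependency resolver" idea, assuming a hypothetical manifest format where each component declares the IPC interfaces it exports and imports:

```rust
// Invented for illustration: the core needs no knowledge of what the
// interfaces do; it only checks that every import can be wired to some
// exporter across a sandboxed IPC boundary.
use std::collections::HashMap;

struct Manifest {
    name: &'static str,
    exports: &'static [&'static str], // IPC interfaces offered
    imports: &'static [&'static str], // IPC interfaces needed
}

fn resolve(components: &[Manifest]) -> Result<(), String> {
    let mut exporters: HashMap<&str, &str> = HashMap::new();
    for c in components {
        for &iface in c.exports {
            exporters.insert(iface, c.name);
        }
    }
    for c in components {
        for &iface in c.imports {
            match exporters.get(iface) {
                Some(provider) => {
                    println!("wire {}.{} -> {}", c.name, iface, provider)
                }
                None => {
                    return Err(format!(
                        "{} needs {}, nothing exports it", c.name, iface
                    ))
                }
            }
        }
    }
    Ok(())
}

fn main() {
    let components = [
        Manifest { name: "tabbed-wm", exports: &["gui.embed"], imports: &[] },
        Manifest { name: "autofill", exports: &["forms.autofill"], imports: &[] },
        Manifest {
            name: "some-app",
            exports: &[],
            imports: &["gui.embed", "forms.autofill"],
        },
    ];
    resolve(&components).unwrap();
}
```

The resolver stays small precisely because it never interprets the interfaces; whole categories of functionality (tabbed UI, auto-fill, accessibility) become ordinary components competing on equal terms.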

The design I've got also allows for document formats, crawling/indexing, authentication and other things that are useful to have in a web competitor, but there's no space to go into all that right now.

The biggest problem is one of incentives. Google's bottomless pockets make browsers fat but also kill any incentive to compete, as there's no obvious way to pay for the maintenance of any alternative. Having a small and tight core spec helps, but someone still has to do the work. Apple solves it by selling hardware, but nobody else has a workable economic model.

> to abstract and sandbox the platform's native accessibility APIs

I wonder whether it's important to expose something close to each platform's accessibility API to sandboxed applications, or only a cross-platform abstraction over those APIs. For the latter, my AccessKit [1] project might be worth looking at.
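
For concreteness, here's the rough shape such an abstraction could take, written as a Rust sketch: the app pushes a tree of semantic nodes and a per-platform adapter translates it to the native API. These types are invented for this comment and are NOT AccessKit's actual API.

```rust
#[derive(Debug, Clone, Copy)]
enum Role { Window, Button, Label }

#[derive(Debug)]
struct Node {
    id: u64,
    role: Role,
    label: String,
    children: Vec<u64>,
}

// One adapter per platform (UIA, AT-SPI, NSAccessibility, ...); a
// sandboxed app only ever sees the abstract tree, never the native API.
trait PlatformAdapter {
    fn apply(&mut self, nodes: &[Node], focus: u64);
}

struct LoggingAdapter;

impl PlatformAdapter for LoggingAdapter {
    fn apply(&mut self, nodes: &[Node], focus: u64) {
        for n in nodes {
            println!("node {} {:?} {:?} children={:?}",
                     n.id, n.role, n.label, n.children);
        }
        println!("focus = {}", focus);
    }
}

fn main() {
    let tree = vec![
        Node { id: 1, role: Role::Window, label: "Demo".into(), children: vec![2] },
        Node { id: 2, role: Role::Button, label: "OK".into(), children: vec![] },
    ];
    LoggingAdapter.apply(&tree, 2);
}
```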

[1]: https://github.com/AccessKit/accesskit