I love the work done by the Lit team (I assume you're a contributor?). It's really fantastically designed, as you mentioned, in terms of bundle size, rendering speed, etc. I'm sure that Lit's implementation is very efficient and ranks high in benchmarks.

This isn't to say that virtual DOM isn't fast. Experimental libraries that use virtual DOMs, like blockdom and ivi (see https://krausest.github.io/js-framework-benchmark/2022/table...), are very, very fast.

At the end of the day, the way libraries render UI is a set of tradeoffs. No one method is objectively better. While Lit works great for a lot of web developers, so do virtual DOM-based libraries.

Totally agree on native DOM diffing; I'll check out Apple's proposal :)

Virtual DOM is an unnecessary overhead, is what the parent is saying.

There is probably a good reason popular frameworks insist on using it though.

I would guess:

- Historically, manipulating the DOM directly was slow (in WebKit?), so working on a virtual one made sense

- The idea of writing a compiler like Svelte, which does the heavy lifting at compile time, was not there yet, or was dismissed for some reason (the React developers might have decided that a reactive model like Svelte's, which tweaks JS's semantics a bit so that assigning to a variable triggers updates, was not great, or they didn't want that JS/HTML separation)
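
To make that contrast concrete, here is a rough sketch of the two models. The `h()`/`patch()` helpers and the "compiled output" are hypothetical stand-ins, not React internals or actual Svelte compiler output:

    // Hypothetical stand-ins for a vdom library's API (not any real library's):
    const h = (tag, props, ...children) => ({ tag, props, children });
    const patch = (prev, next) => { /* diff the two trees, mutate the real DOM */ };

    let count = 0;

    // Virtual DOM model: a state change rebuilds the whole virtual tree,
    // which is diffed against the previous one to find the real DOM mutations.
    let prevTree = h('button', {}, String(count));
    function incrementVdom() {
      count += 1;
      const nextTree = h('button', {}, String(count));
      patch(prevTree, nextTree);
      prevTree = nextTree;
    }

    // Compiler model (very roughly the shape of Svelte-style output): the
    // compiler already knows which DOM node depends on `count`, so the
    // update is a direct write; no tree is built and nothing is diffed.
    const text = document.createTextNode('0');
    function incrementCompiled() {
      count += 1;
      text.data = String(count);
    }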

And then you are stuck with your model for compatibility reasons. React cannot get rid of its virtual DOM without breaking everyone.

The real DOM is always manipulated in the end. Given a state change, the entire virtual DOM is generated, then diffed with the real DOM, and then the changed parts are put into the real DOM (= the real DOM is manipulated). What I wonder is whether the reason for virtual DOM is really just historical: is there anything else that has caused its persistence other than inertia?

> then diffed with the real DOM

Diffing with the real DOM is slow, and the majority of vdom libraries don't diff against the real DOM at all; they diff the new virtual tree against the previous one. As the author of a "vdom" library, I don't like to think of the "reconciler" as a diffing algorithm, because that is a useless constraint; I prefer to think of it as some kind of VM that uses different heuristics to map state onto operations represented as a tree data structure.
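
For illustration, here is a minimal sketch of that idea, using a made-up `h()`/`render()`/`patch()` trio rather than ivi's or any other library's actual API. The library keeps the previous virtual tree around and compares the new tree against it, never reading anything back from the real DOM:

    // Hypothetical minimal vdom core: vnodes are plain objects.
    const h = (tag, props, ...children) => ({ tag, props: props || {}, children });

    // Create real DOM from a vnode.
    function render(vnode) {
      if (typeof vnode === 'string') return document.createTextNode(vnode);
      const el = document.createElement(vnode.tag);
      for (const name in vnode.props) el.setAttribute(name, vnode.props[name]);
      for (const child of vnode.children) el.appendChild(render(child));
      return el;
    }

    // Update `el` by comparing the previous vnode with the next one.
    function patch(parent, el, prev, next) {
      // Text nodes or a changed tag: replace the whole subtree.
      if (typeof prev === 'string' || typeof next === 'string' || prev.tag !== next.tag) {
        if (prev === next) return el;
        const fresh = render(next);
        parent.replaceChild(fresh, el);
        return fresh;
      }
      // Same tag: write only the attributes that actually changed. Only the
      // two virtual trees are compared; the real DOM is written to, never
      // read back and diffed against.
      for (const name in next.props) {
        if (prev.props[name] !== next.props[name]) el.setAttribute(name, next.props[name]);
      }
      for (const name in prev.props) {
        if (!(name in next.props)) el.removeAttribute(name);
      }
      // Children reconciliation (keyed lists etc.) is omitted for brevity.
      return el;
    }

Real libraries do far more than this, and often skip re-checking static parts entirely, which is closer to the "VM with heuristics" framing above than to a plain diff.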

> What I wonder is whether the reason for virtual DOM is really just historical: is there anything else that has caused its persistence other than inertia?

As a thought experiment, try to imagine how you would implement features such as:

- Declarative and simple API

- Stateful components with basic lifecycle like `onDispose()`

- Context API

- Components that can render multiple root DOM nodes, or DOMless components (see the sketch after this list)

- Inside-out rendering, or at least inside-out DOM mounting

- Conditional rendering/dynamic lists/fragments without marker DOM nodes
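
To illustrate the multiple-roots and marker-node points, here is a rough sketch only, with a hypothetical `h()` helper and a hand-rolled anchor, not any particular library's API. With a retained virtual tree, a component can produce several root nodes (or none), because the reconciler holding the whole tree knows where its output belongs; a direct-to-DOM approach typically has to leave an anchor node in the document so it can find the insertion point again later:

    // With a virtual tree, "multiple roots" is just an array of vnodes; the
    // reconciler that owns the whole tree knows where they mount.
    const h = (tag, props, ...children) => ({ tag, props, children });
    const Toolbar = () => [
      h('button', {}, 'Save'),
      h('button', {}, 'Cancel'),
    ];

    // Without a retained tree, conditional content usually needs a marker
    // node (often an empty comment) so the library can find the insertion
    // point again when the condition flips.
    const parent = document.querySelector('#app'); // assumes an #app element exists
    const marker = document.createComment('if-block anchor');
    parent.appendChild(marker);
    function setVisible(visible, node) {
      if (visible) parent.insertBefore(node, marker);
      else if (node.parentNode === parent) parent.removeChild(node);
    }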

These are just some of the basics you will need to consider when building a full-featured and performant web UI library. I think you are going to be surprised by how many libraries that make a lot of claims about their performance, or claim that "vdom is pure overhead", are actually really bad when it comes to dealing with complex use cases.

I am not saying that the "vdom" approach is the only efficient way to solve all these problems, or that every "vdom" library is performant (the majority of vdom libraries are also really bad with complex use cases), but it is not as simple as it looks :)

Whenever this topic has come up here or elsewhere, or when I've searched for it on the web, I haven't found an elaborate explanation of why exactly virtual DOM is used (it does seem wasteful, at least to someone looking from the outside). But perhaps, as you point out, the only sure-fire way to feel the actual benefit would be to try to build one yourself.

So thanks for listing out some concrete things that may be easier to implement with a virtual DOM. And if there are any other good resources out there, then do share! :)

> And if there are any other good resources out there, then do share! :)

Unfortunately there aren't any good resources on this topic. Everyone just focuses on diffing and misses the bigger picture. In the end, all feature-complete libraries implement a diffing algorithm for dynamic children lists and attribute diffing for "spread attributes", so with those two features we are already implementing almost everything needed to work with the DOM and to offer a vdom API; everything else is just incremental optimization to reduce diffing overhead. But working with the DOM is only part of the problem. It also matters how everything else is implemented: all these different features end up intertwined, and we can get a combinatorial explosion in complexity if we aren't careful. Svelte is a good example of a library that optimized its work with DOM nodes at the cost of everything else.

As an experiment, I would recommend taking any library from this[1] benchmark that makes a lot of claims about its performance and making small modifications to its benchmark implementation: wrap DOM nodes into separate components, add conditional rendering, add more dynamic bindings, etc., and look at how each feature affects its performance. I'd also recommend running the tests in a browser with the uBlock and Grammarly extensions installed.
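
To give a flavor of the dynamic children part, here is a deliberately naive sketch of keyed children reconciliation, with hypothetical helper names rather than any specific library's implementation: nodes are reused by key, new ones are created, and leftover ones are removed.

    // Naive keyed children reconciliation: `prevByKey` maps keys to the DOM
    // nodes rendered last time; `renderItem` creates a node for a new item.
    function reconcileChildren(parent, prevByKey, nextItems, renderItem) {
      const nextByKey = new Map();
      for (const item of nextItems) {
        // Reuse the node rendered for this key last time, or create a new one.
        const el = prevByKey.get(item.key) || renderItem(item);
        nextByKey.set(item.key, el);
        // appendChild also *moves* nodes that are already in the document, so
        // this puts the children in the new order (though not with minimal moves).
        parent.appendChild(el);
      }
      // Anything rendered before that has no matching key now gets removed.
      for (const [key, el] of prevByKey) {
        if (!nextByKey.has(key)) el.remove();
      }
      return nextByKey;
    }

Production implementations replace the blunt append-in-order strategy with something like a longest-increasing-subsequence pass to minimize moves, which is part of why the "just diff it" framing undersells the work involved.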

And again, it is possible to implement a library with a declarative API that avoids vdom diffing and is faster than any "vdom" library in every possible use case, but it shouldn't be done at the cost of everything else. Unfortunately, some authors of popular libraries are spreading a lot of misinformation about "vdom overhead" while being unable to compete with even the fastest vdom implementations.

1. https://github.com/krausest/js-framework-benchmark