For me, the biggest plus of web components is the potential simplicity of unifying the document's object model with that of your application. Every JS framework (React, Angular, Vue, etc.) seems to have a parallel object model that either wraps or renders into the DOM (Angular seems to have about five of them).

The result is you can never just use the basic DOM APIs and devtools to inspect and interact with your application's components. Instead, you always have to get at them through some other API, and understand how the UI and state you see in the DOM is produced and updated by some mediating layer (virtual DOM, view hierarchy, etc.). It feels like trying to get at the truth by talking with two different people, each of whom possesses only a subset of the facts.

These layers were created for efficiency, not for their own sake; there’s no “DOM transaction API”, so every DOM mutation causes a reflow. Thus, you mutate a virtual DOM, render the resulting changes to a new subdocument, and then replace an existing DOM node with that new subdocument.

Incorrect. This idea was a result of some early React users misunderstanding the purpose of the virtual DOM, and has unfortunately been thoughtlessly repeated ever since.

In reality, browser engineers are not that stupid. Mutating the DOM will queue a layout operation, but it will not actually occur until the current JS task has finished executing. Within a JS task, such as an event handler, an XHR completion, or a setTimeout, you can mutate the DOM as many times as you like, and it will only result in a single layout pass.
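
A quick sketch to illustrate (the element ids here are made up, but the batching behaviour is not):

    // Many DOM mutations inside a single event handler, but only one
    // layout pass, after the handler returns.
    document.getElementById('add')?.addEventListener('click', () => {
      const list = document.getElementById('items');
      if (!list) return;
      for (let i = 0; i < 100; i++) {
        const li = document.createElement('li');
        li.textContent = `item ${i}`;
        list.appendChild(li); // queues layout, does not run it
      }
      list.style.border = '1px solid red'; // still only queued
      // layout happens once, after this task finishes
    });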

(The exception is if you try to read back some measurement from the DOM, such as an element's bounding box, after mutating it. In this case, the browser does have to block while it performs layout, but that is not something that can be solved with a virtual DOM).
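
For example, this (contrived) loop forces a layout on every iteration; no virtual DOM would help here, and the usual fix is simply to do all the writes before any of the reads:

    // Contrived layout thrashing: reading a measurement right after a
    // mutation forces a synchronous layout so the value is up to date.
    const boxes = Array.from(document.querySelectorAll<HTMLElement>('.box'));
    for (const box of boxes) {
      box.style.width = '50%';                        // write
      console.log(box.getBoundingClientRect().width); // read -> forced layout
    }
    // Better pattern: batch all the writes, then all the reads.
    boxes.forEach(box => { box.style.width = '50%'; });
    boxes.forEach(box => console.log(box.getBoundingClientRect().width));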

So, what is the purpose of the virtual DOM? It was invented to allow React to provide the illusion of full re-rendering. React's authors wanted to provide an experience similar to that found on the server, where an entire HTML page is re-rendered from scratch for every load. That way, there is never any possibility of part of the HTML becoming stale, as it all gets recreated each time.

However, the browser DOM is not designed to be blown away and recreated from scratch all the time. Nodes are expensive objects, spanning the JS and C++ worlds. Recreating the whole DOM tree each time any part of it needed updating would be too slow. So, instead, they created the virtual DOM as an intermediate data structure. React renders the virtual DOM. The virtual DOM is diffed against its previous state, and then the changes are applied to the actual DOM tree. In that way, every component's render() method can be executed, but only those parts of the DOM that have actually changed will be updated.
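
A toy illustration of that diff-and-patch idea (nothing like React's actual implementation, just the shape of it):

    // render() produces cheap JS objects; patch() compares them with the
    // previous output and touches only the DOM nodes that changed.
    type VNode = { tag: string; text: string };

    function render(items: string[]): VNode[] {
      return items.map(text => ({ tag: 'li', text }));
    }

    function patch(container: HTMLElement, prev: VNode[], next: VNode[]): void {
      next.forEach((vnode, i) => {
        const existing = container.children.item(i);
        if (!existing) {
          const el = document.createElement(vnode.tag);
          el.textContent = vnode.text;
          container.appendChild(el);           // added: create it
        } else if (prev[i] && prev[i].text !== vnode.text) {
          existing.textContent = vnode.text;   // changed: update in place
        }                                      // unchanged: leave the DOM alone
      });
      while (container.children.length > next.length) {
        container.lastElementChild?.remove();  // removed: drop extras
      }
    }

    // "Full re-render" on every state change, but only real diffs hit the DOM.
    let current: VNode[] = [];
    function update(items: string[]): void {
      const next = render(items);
      patch(document.getElementById('list')!, current, next);
      current = next;
    }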

It's a nifty optimisation, but it's not about avoiding reflow; it's just another method of dirty checking, similar to that done in other frameworks like Angular or Ember. The difference is that React chooses to diff the data structure produced by render(), rather than diffing the model data that is later used for rendering.

Please read the blog post titled "How to win in Web Framework Benchmarks": https://medium.com/@localvoid/how-to-win-in-web-framework-be...

It goes into some detail about how the different approaches in different frameworks work, and why. Moreover, it shows how you basically need to re-implement Virtual DOM and other tricks in vanilla JS code to approach the same speed.

Yes, browser engineers are not that stupid. But they don't batch operations as efficiently as a proper Virtual DOM implementation would (including efficiently handling event listeners, looking for clues like `keys` on repeating DOM elements, and so on).
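
To make the `keys` point concrete: a keyed diff can recognise that a list was merely reordered and move the existing nodes rather than rewrite their contents. A hand-rolled sketch (not any particular framework's code):

    // Keyed reconciliation: match existing DOM nodes by key so a reorder
    // moves nodes instead of rewriting them.
    function patchKeyed(container: HTMLElement, next: { key: string; text: string }[]): void {
      const byKey = new Map<string, HTMLElement>();
      for (const child of Array.from(container.children) as HTMLElement[]) {
        byKey.set(child.dataset.key ?? '', child);
      }
      for (const item of next) {
        let el = byKey.get(item.key);
        if (!el) {
          el = document.createElement('li');   // genuinely new item
          el.dataset.key = item.key;
        }
        if (el.textContent !== item.text) el.textContent = item.text;
        container.appendChild(el);             // appendChild moves an existing node
        byKey.delete(item.key);
      }
      byKey.forEach(el => el.remove());        // keys that disappeared
    }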

> Moreover, it shows how you basically need to re-implement Virtual DOM and other tricks in vanilla JS code to approach the same speed.

Surplus [1] is the fastest in most benchmarks, and it doesn't use a virtual DOM.
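
It takes a fine-grained, reactive approach instead: each piece of state is bound directly to the DOM nodes that depend on it, so an update writes straight to those nodes and there is nothing to diff. Very roughly (a hand-rolled sketch of the general idea, not Surplus's actual API):

    // Fine-grained reactivity in miniature: a signal knows which DOM
    // updates depend on it and re-runs only those. No tree, no diff.
    function createSignal<T>(initial: T) {
      let value = initial;
      const subscribers = new Set<(v: T) => void>();
      return {
        get: () => value,
        set: (next: T) => {
          value = next;
          subscribers.forEach(fn => fn(next)); // run only the dependent updates
        },
        subscribe: (fn: (v: T) => void) => {
          subscribers.add(fn);
          fn(value);                           // initialise the DOM once
        },
      };
    }

    const count = createSignal(0);
    const label = document.createElement('span');
    document.body.appendChild(label);
    count.subscribe(v => { label.textContent = `count: ${v}`; }); // direct binding
    count.set(1); // writes straight to that one text node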

[1] https://github.com/adamhaile/surplus