That approach destroys and rebuilds the entire DOM on every render. This would not work for large compound components and apps.

I've always wondered why this notion is so popular (is it just because of what React does?). Wouldn't the native browser be expected to handle a DOM re-render much more efficiently than a managed JS framework running on top of the native browser and emulating the DOM? Maybe in 2013 browsers really, really sucked at re-renders, but I have to wonder whether the myriad WebKit, Blink, and Gecko developers are so inept at their jobs that a managed framework can somehow figure out what to re-render better than the native browser can.

And yes, I understand that when you program using one of these frameworks you're explicitly pointing out what pieces of state will change and when, but in my professional experience, people just code whatever works and don't pay much attention to that. In the naive case, where developers don't really disambiguate what state changes like they're supposed to, I feel like the browser would probably beat React or any other framework on re-renders every time. Are there any benchmarks or recent blogs/tech articles that dive into this?

I think the reason the browser is so slow is that every time you mutate something, change an attribute or add or remove an element, the browser re-renders immediately. And this is indeed slow AF. If you batched everything into a DocumentFragment or similar before attaching it to the DOM, then it'd be fast. I don't know how you do that ergonomically, though.
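
For what it's worth, a minimal sketch of the DocumentFragment approach (a hypothetical list-building example, not anyone's actual code):

    // Build rows off-DOM in a DocumentFragment; none of these
    // appends touch the live document.
    const frag = document.createDocumentFragment();
    for (let i = 0; i < 1000; i++) {
      const li = document.createElement('li');
      li.textContent = 'item ' + i;
      frag.appendChild(li);
    }
    // A single insertion into the live DOM moves all the children at once.
    document.querySelector('ul').appendChild(frag);
The ergonomics problem is that you have to structure your whole update around that one final append, which is exactly the kind of thing frameworks hide from you.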

> I think the reason that the browser is so slow is that every time you mutate something, an attribute or add or remove an element, the browser rerenders immediately.

Is it really immediately? I thought that was a myth.

I thought that, given a top-level function `foo()` which calls `bar()` which calls `baz()` which makes 25 modifications to the DOM, the DOM is only re-rendered once when `foo` returns, i.e. when control returns from user code.
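
In code, the scenario I have in mind is something like this toy sketch:

    function baz() {
      // 25 separate DOM mutations while user code still holds the main thread
      for (let i = 0; i < 25; i++) {
        document.body.appendChild(document.createElement('div'));
      }
    }
    function bar() { baz(); }
    function foo() { bar(); }
    foo(); // my assumption: one re-render after control returns, not 25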

I do know that making changes to the DOM and then immediately entering a `while(1)` loop doesn't show any change on the page.
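
That is, something like this never shows the red background (a sketch of the experiment):

    document.body.style.background = 'red'; // DOM write
    while (1) {}                            // main thread never yields, so the change never paints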

Yes and no.

The browser will, as much as it can, batch DOM changes together and perform them all at once. So if `baz` looks like this:

    for (let i = 0; i < 10; i++) {
      // ten style writes in a row, with no reads in between
      elem.style.fontSize = i + 20 + 'px';
    }
Then the browser will only recalculate the size of `elem` once, as you point out.

But if we read the state of the DOM, the browser still needs to do all the pending layout calculations before it can answer that read, so we break that batching effect. This is the infamous layout thrashing problem. So this would be an example of bad code:

    for (let i = 0; i < 10; i++) {
      elem.style.fontSize = i + 20 + 'px'; // write
      console.log(elem.offsetHeight);      // read: forces a synchronous layout each iteration
    }
Now, every time we read `offsetHeight`, the browser sees that it has a scheduled DOM modification to apply, so it has to apply it first before it can return a correct value.

This is the reason that libraries like fastdom (https://github.com/wilsonpage/fastdom) exist - they help ensure that, in a given tick, all the reads happen first, followed by all the writes.
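
Usage is roughly like this (a sketch from memory of fastdom's measure/mutate API; check the repo for the real details):

    // Reads are queued via measure(), writes via mutate();
    // fastdom runs all queued measures before all mutates on each frame.
    fastdom.measure(() => {
      const h = elem.offsetHeight;          // read
      fastdom.mutate(() => {
        elem.style.height = (h * 2) + 'px'; // write, deferred until after the reads
      });
    });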

That said, I suspect even if you add a write followed by a read to your `while(1)` experiment, it still won't actually render anything, because painting is a separate phase of the rendering process, which always happens asynchronously. But that might not be true, and I'm on mobile and can't test it myself.
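
If anyone wants to try it, the experiment I mean would be something like (untested, per the above):

    elem.style.fontSize = '40px';   // write
    console.log(elem.offsetHeight); // read: forces synchronous style/layout
    while (1) {}                    // block the main thread; paint should still never happen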