What does HackerNews think of fastdom?
Eliminates layout thrashing by batching DOM measurement and mutation tasks
The browser will, as much as it can, batch together DOM changes and perform them all at once. So if `baz` looks like this:
for (let i = 0; i < 10; i++) {
  elem.style.fontSize = i + 20 + 'px';
}
Then the browser will only recalculate the size of `elem` once, as you point out. But if we read the state of the DOM, the browser still needs to do all the layout calculations before it can perform that read, so we break the batching effect. This is the infamous layout-thrashing problem. So this would be an example of bad code:
for (let i = 0; i < 10; i++) {
  elem.style.fontSize = i + 20 + 'px';
  console.log(elem.offsetHeight);
}
Now, every time we read `offsetHeight`, the browser sees that it has a scheduled DOM modification to apply, so it has to apply that first before it can return a correct value. This is the reason that libraries like fastdom (https://github.com/wilsonpage/fastdom) exist - they help ensure that, in a given tick, all the reads happen first, followed by all the writes.
That said, I suspect even if you add a write followed by a read to your `while(1)` experiment, it still won't actually render anything, because painting is a separate phase of the rendering process, which always happens asynchronously. But that might not be true, and I'm on mobile and can't test it myself.
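For illustration, a minimal sketch of that read-then-write ordering using fastdom's measure/mutate API (the loop and `elem` are carried over from the examples above; assumes the fastdom package is available):

import fastdom from 'fastdom';

for (let i = 0; i < 10; i++) {
  // All measure callbacks scheduled in this tick run first...
  fastdom.measure(() => {
    console.log(elem.offsetHeight); // reads happen together, up front
  });
  // ...and all mutate callbacks run after them, so the writes no longer
  // force a layout recalculation before every read.
  fastdom.mutate(() => {
    elem.style.fontSize = i + 20 + 'px';
  });
}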
In any compute environment, doing more work than necessary wastes CPU cycles. I believe the optimizations you speak of try to limit this work to the minimum by leaning on the framework's machinery. But this is surprisingly easy to achieve without frameworks; see [1] and [2].
A frequent argument for the use of a virtual DOM has been that it reduces DOM thrashing. I am willing to bet that if a virtual-DOM library has figured out which elements don't need updating, the browser's DOM implementation, tuned over decades, has that logic built in. So go ahead and thrash the DOM, but batch your updates, and the browser's logic will likely not thrash more than necessary. And since that logic is implemented in an AOT-compiled language, it is probably much faster than a JS virtual DOM.
Having said that, is a virtual DOM required in 2019 if DOM updates are optimally batched, as with FastDom (https://github.com/wilsonpage/fastdom)? Decades of optimizing browser internals should mean the DOM won't be thrashed more than necessary if it is updated optimally. So, is it required?
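As a rough sketch of what "batch your updates" can look like without a virtual DOM (the `scheduleWrite` helper below is illustrative, not from any library): pending writes are collected and flushed together in a single requestAnimationFrame callback, so the browser recalculates layout once per frame rather than once per update.

// `elem` is assumed to be an existing DOM element, as in the examples above.
const pendingWrites = [];
let flushScheduled = false;

function scheduleWrite(write) {
  pendingWrites.push(write);
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(() => {
      flushScheduled = false;
      // Apply every queued DOM write in one go.
      pendingWrites.splice(0).forEach(write => write());
    });
  }
}

// Many calls in the same tick still produce a single batched flush.
scheduleWrite(() => { elem.style.fontSize = '20px'; });
scheduleWrite(() => { elem.textContent = 'updated'; });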
In 2019, with ES6, I believe frameworks to be overkill. When React was introduced, it did goad people into thinking in components, etc. However, classes and higher-order functions in ES6 allow one to think modularly without a framework, and the virtual DOM's value proposition is questionable when DOM updates are properly batched (like when using https://github.com/wilsonpage/fastdom).
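A small, purely illustrative sketch of that framework-free style: a plain ES6 class holds the state and funnels its DOM writes through fastdom.mutate so repeated updates are batched per frame (the `Counter` component and its markup are hypothetical):

import fastdom from 'fastdom';

class Counter {
  constructor(root) {
    this.root = root; // a DOM element to render into
    this.count = 0;
  }

  increment() {
    this.count += 1;
    this.render();
  }

  render() {
    // Writes are queued and applied together with any other mutations
    // scheduled in the same frame.
    fastdom.mutate(() => {
      this.root.textContent = `Count: ${this.count}`;
    });
  }
}

// const counter = new Counter(document.querySelector('#counter'));
// counter.increment();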
Complexity is bound to increase with features, whether in the back end or the front. But SPAs (with PWAs) offer the advantage of being fully functional when offline or on spotty connectivity, which is a significant value proposition - not to mention lower server-side costs (in use cases where server costs are prohibitive, an SPA-PWA is the only economically viable option).
My takeaway is to question not just the reliance on SPAs/PWAs but also the reliance on complex frameworks that offer diminishing returns.
Since updating the DOM is relatively fast in modern browsers, it's not particularly hard to find cases where the work the virtual DOM has to do cancels out any savings.
1. See e.g. https://developers.google.com/web/fundamentals/performance/r..., a list of triggers at https://gist.github.com/paulirish/5d52fb081b3570c81e3a, and https://github.com/wilsonpage/fastdom for a common technique to avoid it by manually ordering read operations before writes.
The main trap you're falling prey to is the magical thinking which is sadly prevalent about the virtual DOM and batching. Basic application of Amdahl's law tells us that the only way the React approach can be faster is if the overhead of the virtual DOM and framework code is balanced out by being able to do less work. That's true if you're comparing to, say, a primitive JavaScript framework which performs many unnecessary updates (e.g. re-rendering the entire table every time something changes) or if the React abstractions allow you to make game-changing optimizations which would be too hard for you to make in regular code.
Since you mentioned batching, here's a simple example: it's extremely hard to find a case where a single update will be faster through a framework, because the combined time to execute the framework's JS and make the update is always going to be greater than simply making the update directly. If, however, you're making multiple updates, it's easy to hit pathologically bad performance due to layout thrashing[2] when the code performing an update reads something from the DOM which was invalidated by an earlier update, requiring the browser to repeatedly recalculate the layout.
That can be avoided in pure JavaScript by carefully structuring the application to avoid that write-read-write cycle, or by using a minimalist library like Wilson Page's fastdom[3]. This is quite efficient but can be harder to manage in a large application, and that's where React can help by making that kind of structure easier to code. If you are looking for a benchmark where React will perform well, that's the area I'd focus on, and I'd do so by looking at both the total amount of code and the degree to which performance optimizations interfere with clean separation, testability, etc.
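A sketch of the "carefully structured" pure-JavaScript version mentioned above (the `.row` selector and the +10px adjustment are made up for illustration): all reads are done in one pass before any write, so layout is only invalidated once.

const rows = Array.from(document.querySelectorAll('.row'));

// Phase 1: reads only - nothing has invalidated layout yet.
const heights = rows.map(row => row.offsetHeight);

// Phase 2: writes only - the browser recalculates layout once,
// not once per row as it would in a write-read-write loop.
rows.forEach((row, i) => {
  row.style.height = (heights[i] + 10) + 'px';
});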
EDIT: just to be clear, I'm not saying that it's wrong to use React but that the reasons you do so are the same as why we're not writing desktop apps entirely in assembly: it takes less time to build richer, more maintainable apps. The majority of web apps are not going to be limited by how quickly any framework can update the DOM.
1. I partially reduced that to a smaller test case in https://gist.github.com/acdha/092c6d79f9ebb888496c, which could use more work. For simple testing that was using JSX inline, but the actual application used a separate JSX file compiled following normal React practice.
2. See e.g. http://wilsonpage.co.uk/preventing-layout-thrashing/
Ember.JS (and possibly Angular?) does this for you automatically.