What does HackerNews think of fastdom?

Eliminates layout thrashing by batching DOM measurement and mutation tasks

Language: JavaScript

Yes and no.

The browser will, as much as it can, batch together DOM changes and perform them all at once. So if `baz` looks like this:

    for (let i=0; i<10; i++) {
      elem.style.fontSize = i + 20 + 'px';
    }
Then the browser will only recalculate the size of `elem` once, as you point out.

But if we read the state of the DOM, then the browser still needs to do all the layout calculations before it can do that read, so we break that batching effect. This is the infamous layout thrashing problem. So this would be an example of bad code:

    for (let i=0; i<10; i++) {
      elem.style.fontSize = i + 20 + 'px';
      console.log(elem.offsetHeight);
    }
Now, every time we read `offsetHeight`, the browser sees that it has a scheduled DOM modification to apply, so it has to apply that first, before it can return a correct value.

This is the reason that libraries like fastdom (https://github.com/wilsonpage/fastdom) exist - they help ensure that, in a given tick, all the reads happen first, followed by all the writes.
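
For example, rewriting the thrashing loop above with fastdom's measure/mutate scheduling might look something like this (a sketch; it assumes fastdom is installed and `elem` is the same element as before):

    import fastdom from 'fastdom';

    for (let i = 0; i < 10; i++) {
      fastdom.measure(() => {
        // all queued reads run together at the start of the frame,
        // so layout is recalculated at most once for the whole batch
        console.log(elem.offsetHeight);
      });
      fastdom.mutate(() => {
        // all queued writes run together after the reads have finished
        elem.style.fontSize = i + 20 + 'px';
      });
    }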

That said, I suspect even if you add a write followed by a read to your `while(1)` experiment, it still won't actually render anything, because painting is a separate phase of the rendering process, which always happens asynchronously. But that might not be true, and I'm on mobile and can't test it myself.

The bashing of jQuery comes from junior devs. Of course a VDOM is clearer and more productive (and less performant), but most webapps with a minimum of logic have many legitimate uses of native DOM/jQuery in addition to the VDOM. And the interaction is perfectly safe as long as you do native DOM work in the right lifecycle method (mounted). jQuery is a pleasure to use and gives us a lot of power and expressivity.

More generally, this wave of junior devs (e.g. the CSS-in-JS lobby) is becoming less and less familiar with the concept of CSS/DOM selectors, despite their awesomeness and uniformity across DOM operations, styling operations and integration-test operations (Cypress). BTW, a little-known fact is that jQuery is not just sugar and cross-browser consistency; it pushes the boundaries of what is possible versus native CSS, see e.g. the reverse-direction paradigm shift of https://api.jquery.com/has-selector/. Although it's true that augmenting jQuery with a batcher for performance doesn't currently seem possible? https://github.com/wilsonpage/fastdom
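
For instance, jQuery's `:has()` lets you select elements by what they contain, i.e. in the parent-from-child direction that plain CSS selectors traditionally lacked (assuming jQuery is loaded as `$`; the class name is just an example):

    // select list items that contain a nested <ul>, i.e. pick parents
    // based on their children
    $('li:has(ul)').addClass('has-children');
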
I hope you read the original article, which clearly lays out the (non-)optimizations being done in a VDOM.

In any compute environment, doing more than necessary wastes CPU cycles. I believe the optimizations you speak of try to limit this work to the minimum by leaning on the framework's dev world. But this is surprisingly easy to achieve without frameworks; see [1] and [2]

[1]: https://github.com/wilsonpage/fastdom

[2]: https://youtu.be/sFMpS2_GqQc

It's really surprising that people put faith in virtual DOM implementations when browsers have been optimized for efficiency for decades. With the right batching strategy that minimalistic libraries like FastDom [1] offer, there's no real reason to use the virtual DOM.

A frequent argument for the use of a vdom has been that it reduces DOM thrashing. I am willing to bet that if a vdom library has figured out which elements don't need updating, the browser's DOM implementation, tuned over decades, has that logic built in. So go ahead and thrash the DOM, but batch your updates (sketched below) and the browser's logic will likely not do more work than necessary. And since that logic is implemented in an AOT-compiled language, it is probably much faster than a JS vdom.

[1]: https://github.com/wilsonpage/fastdom
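
As a concrete example of that kind of batching (a sketch with made-up element names and data, no library involved): build the new nodes off-DOM and attach them in a single operation, so the browser lays the page out once rather than once per row.

    const table = document.querySelector('#files');      // hypothetical table
    const rows = [{ name: 'a.txt' }, { name: 'b.txt' }]; // example data

    // Build everything in a detached fragment, then append once:
    // one insertion, one layout, instead of one per row.
    const fragment = document.createDocumentFragment();
    for (const row of rows) {
      const tr = document.createElement('tr');
      tr.textContent = row.name;
      fragment.appendChild(tr);
    }
    table.appendChild(fragment);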

Firstly, congrats on shipping a virtual DOM lib in WASM. Hopefully, frameworks intent on using a V-DOM will greatly benefit from this.

Having said that, is a V-DOM required in 2019 if DOM updates are optimally batched, as in FastDom (https://github.com/wilsonpage/fastdom)? Decades of optimizing browser internals would surely account for not thrashing the DOM, if it is updated optimally. So, is it required?

I think there's a conflation of client-side-SPA === rich/complex here. For all practical purposes, this may be true as SPAs have gotten reliant on complex frameworks like React, Vue, ..

In 2019, with ES6, I believe frameworks to be overkill. When React was introduced, it did goad people into thinking in components, etc. However, classes and higher-order functions in ES6 allow one to think modularly without a framework. And the virtual DOM's value proposition is questionable when DOM updates are properly batched (like when using https://github.com/wilsonpage/fastdom).

Complexity is bound to increase with features, either in the back end or the front. But SPAs (with PWAs) offer the advantage of being fully functional when offline or with spotty connectivity, which is a significant value proposition. Not to mention lower server-side costs (in use cases where server costs are prohibitive, SPA-PWAs are the only economically viable option).

My takeaway is to evaluate not just the reliance on SPA/PWAs but also on complex frameworks with diminishing returns.

It ultimately comes down to Amdahl's law: doing something in a browser requires updating the DOM. Since you always have to do the DOM processing, the only way adding the extra virtual DOM work will be a net win is if it makes it easier to avoid unnecessary updates or allows something like ordering updates to avoid triggering repeated layouts / reflows[1].

Since updating the DOM is relatively fast in modern browsers it's not particularly hard to find cases where the work the virtual DOM has to do cancels out any savings.

1. See e.g. https://developers.google.com/web/fundamentals/performance/r..., a list of triggers at https://gist.github.com/paulirish/5d52fb081b3570c81e3a, and https://github.com/wilsonpage/fastdom for a common technique to avoid it by manually ordering read operations before writes.

fastdom might be of interest – it batches read and write operations into separate queues and runs them using requestAnimationFrame:

https://github.com/wilsonpage/fastdom
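
Roughly, the idea looks like this (a simplified sketch of the pattern, not fastdom's actual implementation):

    const reads = [];
    const writes = [];
    let scheduled = false;

    function scheduleFlush() {
      if (scheduled) return;
      scheduled = true;
      requestAnimationFrame(() => {
        scheduled = false;
        // drain all reads before any write, so a write can't invalidate
        // layout in the middle of the measuring pass
        reads.splice(0).forEach(read => read());
        writes.splice(0).forEach(write => write());
      });
    }

    const measure = fn => { reads.push(fn); scheduleFlush(); };
    const mutate  = fn => { writes.push(fn); scheduleFlush(); };

    // usage: reads and writes queued in any order still run reads-first
    measure(() => console.log(document.body.offsetHeight));
    mutate(() => { document.body.style.margin = '0'; });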

I think this is what https://github.com/wilsonpage/fastdom was designed to help with... The issue I have with it is it's another library to load...
It's not just simple appendChild calls. I actually worked on an app which updated a large table – displaying file metadata, checksums calculated in web workers, etc. for a delivery – and found React to be around 40+ times slower than using the DOM[1] or even simply using innerHTML, getting worse as the number of records increased.

The main trap you're falling prey to is the magical thinking which is sadly prevalent about the virtual DOM and batching. Basic application of Amdahl's law tells us that the only way the React approach can be faster is if the overhead of the virtual DOM and framework code is balanced out by being able to do less work. That's true if you're comparing to, say, a primitive JavaScript framework which performs many unnecessary updates (e.g. re-rendering the entire table every time something changes) or if the React abstractions allow you to make game-changing optimizations which would be too hard for you to make in regular code.

Since you mentioned batching, here's a simple example: it's extremely hard to find a case where a single update will be faster because the combined time to execute a JS framework and make an update is always going to be greater than simply making the update directly. If, however, you're making multiple updates it's easy to hit pathologically bad performance due to layout thrashing[2] when the code performing an update reads something from the DOM which was invalidated by an earlier update, requiring the browser to repeatedly recalculate the layout.

That can be avoided in pure JavaScript by carefully structuring the application to avoid that write-read-write cycle or by using a minimalist library like Wilson Page's fastdom[3]. This is quite efficient but can be harder to manage in a large application, and that's where React can help by making that kind of structure easier to code. If you are looking for a benchmark where React will perform well, that's the area I'd focus on, and I'd do so by looking at both the total amount of code and the degree to which performance optimizations interfere with clean separation, testability, etc.
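
For illustration, here's the shape of that restructuring in plain JavaScript (a sketch; the `.row` selector and the 10px offset are just examples): do all the reads in one pass, then all the writes, instead of interleaving them.

    const rows = Array.from(document.querySelectorAll('.row'));

    // Read phase: measure everything while the current layout is still valid.
    const heights = rows.map(row => row.offsetHeight);

    // Write phase: apply every change afterwards. Interleaving the two
    // (write then read, per row) would force a synchronous reflow per row.
    rows.forEach((row, i) => {
      row.style.height = heights[i] + 10 + 'px';
    });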

EDIT: just to be clear, I'm not saying that it's wrong to use React but that the reasons you do so are the same as why we're not writing desktop apps entirely in assembly: it takes less time to build richer, more maintainable apps. The majority of web apps are not going to be limited by how quickly any framework can update the DOM.

1. I partially reduced that to a smaller testcase in https://gist.github.com/acdha/092c6d79f9ebb888496c which could use more work. For simple testing that was using JSX inline but the actual real application used a separate JSX file compiled following normal React practice.

2. See e.g. http://wilsonpage.co.uk/preventing-layout-thrashing/

3. https://github.com/wilsonpage/fastdom

Or use fastdom[1], which batches reads and writes to reduce layout thrashing.

[1]: https://github.com/wilsonpage/fastdom

To prevent layout thrashing yourself, you can use this library: https://github.com/wilsonpage/fastdom

Ember.JS (and possibly Angular?) does this for you automatically.

Well-researched feature. The best part of the fastdom wrapper (https://github.com/wilsonpage/fastdom) is that a timeout stub is introduced even for browsers that don't support native animation frames. Good job.
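
The general shape of that kind of fallback (a sketch, not fastdom's actual source):

    // Prefer the native API; otherwise approximate a ~60 fps frame with a
    // timeout so scheduled work still runs in older browsers.
    const raf = typeof window.requestAnimationFrame === 'function'
      ? window.requestAnimationFrame.bind(window)
      : cb => setTimeout(() => cb(Date.now()), 1000 / 60);

    raf(() => {
      console.log('runs on the next real or simulated frame');
    });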