I think this is a contrived, or over-simplified, example.

The DOM manipulation is fast in this case because it's a simple appendChild every time. In other cases, such as when elements in the middle of a table are updated, you would get into a mess writing vanilla code, in terms of either complexity or performance, because you'd have to traverse the DOM to reach each element that needs updating and apply each update individually. React batches such changes together and performs a single update.
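As a rough sketch of the kind of per-element vanilla update being described (the table id, data attribute, and field class below are all invented for illustration):

    // Locate the one row that changed via a data attribute, then
    // write only the cell that needs it – one direct DOM update.
    function updateCell(rowId, field, value) {
      var cell = document.querySelector(
        '#data-table tr[data-id="' + rowId + '"] td.' + field
      );
      if (cell) {
        cell.textContent = value;
      }
    }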

Show me a benchmark of a real app written in both vanilla JS and React. I suspect the vanilla version's DOM manipulation time would be way higher.

It's not just simple appendChild calls. I actually worked on an app which updated a large table – displaying file metadata, checksums calculated in web workers, etc. for a delivery – and found React to be 40+ times slower than using the DOM directly[1] or even simply using innerHTML, getting worse as the number of records increased.

The main trap you're falling prey to is the sadly prevalent magical thinking about the virtual DOM and batching. A basic application of Amdahl's law tells us that the only way the React approach can be faster is if the overhead of the virtual DOM and framework code is balanced out by doing less work overall. That's true if you're comparing against, say, a primitive JavaScript framework which performs many unnecessary updates (e.g. re-rendering the entire table every time something changes), or if React's abstractions allow you to make game-changing optimizations which would be too hard to make in regular code.
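To put that in rough symbols (purely illustrative):

    time_direct = t_update
    time_react  = t_framework + t_update'

The React version can only win when t_framework + t_update' < t_update, i.e. when the diffing eliminates more DOM work than the framework machinery adds.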

Since you mentioned batching, here's a simple example. For a single update, it's extremely hard to find a case where React is faster: the combined time to run the framework code and make the update is always going to be greater than the time to simply make the update directly. If, however, you're making multiple updates, it's easy to hit pathologically bad performance due to layout thrashing[2]: when the code performing an update reads something from the DOM which was invalidated by an earlier update, the browser has to recalculate the layout, and doing that repeatedly gets expensive.
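A minimal sketch of that pathological pattern and its batched fix, assuming a table of rows (all names invented for illustration):

    var rows = Array.prototype.slice.call(
      document.querySelectorAll('#data-table tr')
    );

    // Pathological: each write invalidates layout, so the read at the
    // top of the next iteration forces a synchronous reflow – one per row.
    rows.forEach(function (row) {
      var width = row.offsetWidth;             // read (forces layout)
      row.style.height = (width / 2) + 'px';   // write (invalidates layout)
    });

    // Batched: all reads first, then all writes, so the browser
    // recalculates layout at most once.
    var widths = rows.map(function (row) {
      return row.offsetWidth;
    });
    rows.forEach(function (row, i) {
      row.style.height = (widths[i] / 2) + 'px';
    });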

That can be avoided in pure JavaScript by carefully structuring the application to avoid that write-read-write cycle, or by using a minimalist library like Wilson Page's fastdom[3]. That approach is quite efficient but can be harder to manage in a large application, and that's where React can help, by making that kind of structure easier to code. If you're looking for a benchmark where React will perform well, that's the area I'd focus on, looking at both the total amount of code and the degree to which performance optimizations interfere with clean separation, testability, etc.
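For example, a sketch of the same update through fastdom (assuming fastdom is loaded as a global and using its measure/mutate API; older releases called these read/write), reusing the rows from the previous snippet:

    // Reads and writes are queued separately and flushed in phases on
    // the next animation frame, so they never interleave.
    rows.forEach(function (row) {
      fastdom.measure(function () {
        var width = row.offsetWidth;              // all reads run first
        fastdom.mutate(function () {
          row.style.height = (width / 2) + 'px';  // then all writes
        });
      });
    });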

EDIT: just to be clear, I'm not saying that it's wrong to use React, but that the reasons for using it are the same reasons we're not writing desktop apps entirely in assembly: it takes less time to build richer, more maintainable apps. The majority of web apps are not going to be limited by how quickly any framework can update the DOM.

1. I partially reduced that to a smaller test case in https://gist.github.com/acdha/092c6d79f9ebb888496c, which could use more work. For simplicity, the test case uses inline JSX, but the actual application used a separate JSX file compiled following normal React practice.

2. See e.g. http://wilsonpage.co.uk/preventing-layout-thrashing/

3. https://github.com/wilsonpage/fastdom