It makes sense for a JavaScript engine to optimize for main-thread latency. But the code I write for fun is usually throughput-bound, meaning a second thread should try to minimize the total additional work it introduces.

I do wonder what the appropriate metrics for a garbage collector should be. This post focuses entirely on main-thread times, but maybe "total CPU time used" would be better? Especially if it were split up using hardware performance counters (e.g. how much time is spent waiting on the L1, L2, and L3 caches and on DDR4 RAM).

I'd imagine that having multiple cores work on the application and garbage collection in parallel would cause more main-memory hits: the garbage collector has to keep its own state in cache (probably L3), leaving the application with less L3 cache to work with. So overall throughput would be lower than with a single-threaded solution.

--------------

But yes, latency is king for UI programs. I guess I'm just musing about the "what to measure" problem.

On a related note, if you are building a UI, an even better solution for computationally intensive work is to offload it onto a separate process instead of just a separate thread. That way the UI thread is blocked far less often by GC pauses. Obviously this isn't really possible in JS (outside of Electron), but it can be a good idea in languages like Java.

There's overhead to a process that doesn't exist with threads. You can definitely split your work across threads in JS using Workers, Service Workers, and the main thread. Most apps don't take good advantage of this, though.
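As a rough sketch of what that looks like (the file name fib-worker.js and the Fibonacci stand-in are made up for illustration; Worker, postMessage, and onmessage are the standard APIs):

    // main.js -- the heavy work runs on its own thread with its own heap
    // and GC, so its allocations never pause the UI thread.
    const worker = new Worker("fib-worker.js");
    worker.onmessage = (event) => {
      console.log("fib(42) =", event.data);
    };
    worker.postMessage(42); // returns immediately; the UI stays responsive

    // fib-worker.js -- deliberately slow CPU-bound work as a stand-in.
    function fib(n) {
      return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }
    onmessage = (event) => {
      postMessage(fib(event.data));
    };

The catch is that everything crossing the boundary is copied via structured clone (or explicitly transferred), so it only pays off for genuinely heavy work.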

Using a separate thread gets you the same GC benefits: each Worker has its own heap and its own garbage collector, so its pauses never touch the main thread.

There have been some attempts to make it easier to use threads on the web, like the excellent Comlink library: https://github.com/GoogleChromeLabs/comlink
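Roughly, Comlink wraps the postMessage plumbing so a Worker looks like an async object. A minimal sketch (the file names, the fib function, and the unpkg CDN load are just one way to set it up):

    // compute-worker.js -- expose a plain object; Comlink handles the messaging.
    importScripts("https://unpkg.com/comlink/dist/umd/comlink.js");

    function fib(n) {
      return n < 2 ? n : fib(n - 1) + fib(n - 2);
    }

    Comlink.expose({ fib });

    // main.js (loaded as a module) -- wrap() returns a proxy whose methods return Promises.
    import * as Comlink from "comlink";

    const api = Comlink.wrap(new Worker("compute-worker.js"));

    (async () => {
      const result = await api.fib(42); // runs on the worker thread
      console.log("fib(42) =", result);
    })();

You still pay the structured-clone cost for arguments and return values, but the call-site ergonomics are close to a normal async function.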