This was one of the more interesting software talks I've listened to recently. I liked that it was very candid - Node.js has serious, serious problems, and the fact that even its creator acknowledges them caught my attention.
I'm also a long-time user of Dart, so when he brought it up and weighed TypeScript against its shortcomings, I definitely agreed.
That being said, even with the Deno project, I'm not sure what can be gained in terms of performance and security from running JavaScript outside of a browser. The choice of V8 also raises concerns for me about build systems. He mentioned the failure of GYP, but anything using the same build toolchain as Chromium introduces a wealth of complexity for anyone building the software: you need not only Google-specific build tools but also very specific configuration, including a particular Python setup.
It will be interesting to see what comes in the future.
If it were up to me (which I guess it isn't), I'd probably prioritize portability/less complex builds, built-in FFI, a flat module system, and optimizing calls to C from JS.
> performance and security from running JavaScript outside of a browser.
I'm just now building a Node app to filter point clouds, so lots of number crunching. In two days I've got something in JavaScript that's faster than the C++ solution I've been working on for a week, mostly because JavaScript/Node makes it trivial to parallelize file IO while doing work on the main thread. This app reads 13 million points from 1400 files (~200 MB), filters all points within a clip region, and writes the 12 million resulting points to a 300 MB file, all in 1.6 seconds. (File reads were cached by OS and/or disk due to repeated testing, but file writes probably not)
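For anyone curious what that overlap pattern looks like, here's a minimal sketch using Node's promise-based fs API. filterPoints and the output filename are made-up stand-ins, not the commenter's actual code:

    const fs = require('fs').promises;

    // Hypothetical stand-in for the app's real parser/filter.
    function filterPoints(buf, clipRegion) {
      // Real code would decode each point and test it against clipRegion.
      return buf;
    }

    async function processFiles(paths, clipRegion) {
      // Start every read immediately; libuv does the IO in the background
      // while the main thread filters whichever buffers have completed.
      const reads = paths.map(p => fs.readFile(p));

      const filtered = [];
      for (const read of reads) {
        // Awaiting in order still overlaps work: later reads stay in flight
        // while we burn CPU filtering the current buffer.
        filtered.push(filterPoints(await read, clipRegion));
      }
      await fs.writeFile('filtered.bin', Buffer.concat(filtered));
    }

Getting the same overlap in C++ means wiring up a thread pool or an async IO library yourself, which is presumably where the extra week goes.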
My personal conclusion is that JavaScript can rival or even exceed the performance of C++, not because it's inherently faster (it's obviously not), but because it makes it much easier to write fast code. For the highest possible performance you'll definitely want C++, but sometimes you'll have to spend several times the effort to get there.
> (File reads were cached by OS and/or disk due to repeated testing, but file writes probably not)
Unless you flush the pages manually, your dirty pages (written file data) live on long after your process dies. Depending on the system and configuration, minutes or even hours can pass before they are flushed to disk.
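If you want write time included in a measurement, you can force the flush before stopping the clock. A sketch with Node's FileHandle API (path and buffer are placeholders):

    const fs = require('fs').promises;

    async function writeAndFlush(path, buf) {
      const fh = await fs.open(path, 'w');
      try {
        await fh.write(buf);
        await fh.sync(); // fsync(2): block until the kernel flushes the dirty pages
      } finally {
        await fh.close();
      }
    }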
Do you know of best practices for benchmarking with uncached data? It's something I've been wondering about for a long time, and I've seen many attempts at benchmarking things without regard for disk caching. For example, benchmarks pitting in-memory databases against out-of-core databases where, because of caching from repeated runs, the results were meritless since the out-of-core databases had their data in memory as well.
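On Linux the usual trick is to sync and then drop the page cache between runs so every run starts cold. Wrapped in Node for a benchmark harness (a sketch, assuming the harness runs as root on Linux):

    const { execSync } = require('child_process');

    // Flush dirty pages, then drop the page cache so the next run starts
    // cold. Linux-only; writing to drop_caches requires root.
    function dropCaches() {
      execSync('sync');
      // 3 = free page cache plus dentries and inodes
      execSync('echo 3 > /proc/sys/vm/drop_caches');
    }

The alternative is opening files with O_DIRECT to bypass the cache entirely, but that changes the IO path you're measuring, so dropping caches between runs is usually closer to the cold-start behavior you actually care about.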