> We used a Wasm binary compiled from Rust rather than JavaScript. We know support for JavaScript is important to many customers, but we're not yet satisfied with the performance of Compute@Edge packages compiled from JavaScript. That's why it's in beta. When a product is ready for production, we remove the beta designation.

I can't really square the idea that the 50-150ms delays in question come down to the programming language itself, but it is absolutely believable that a longer test produces a lower median latency than a short, high-load test.

Having said that, I would notice anything over 150ms when clicking, but wouldn't care whether something took under 100ms or under 50ms - except that the lower the latency, the less capacity is needed to serve the same number of active users (it becomes a question of cost rather than response time).

> I can't really square the idea that the 50-150ms delays in question come down to the programming language itself, but it is absolutely believable that a longer test produces a lower median latency than a short, high-load test.

It seems plausible to me: in Fastly's case, they're using WebAssembly via wasmtime [1], which does support AOT compilation, but most JavaScript code is dynamic enough that they still need a runtime JIT engine. I believe the current approach Fastly is using is to compile Mozilla's SpiderMonkey JIT engine itself to WebAssembly, and they've done some really nice work making that load as quickly as possible:

https://bytecodealliance.org/articles/making-javascript-run-...
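
To make the AOT point concrete, here is a minimal sketch of wasmtime's precompile/deserialize path (the `guest.wasm` file name and the `run` export are made up for illustration, and this is obviously not Fastly's actual deployment pipeline): a module compiled once at deploy time can be instantiated at request time without any compilation work.

```rust
// Sketch only: assumes the `wasmtime` and `anyhow` crates, a guest module
// `guest.wasm` on disk, and a `run` export that returns an i32.
use wasmtime::{Engine, Instance, Module, Store};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // Deploy time: compile the Wasm module to native code once, ahead of time.
    let precompiled = engine.precompile_module(&std::fs::read("guest.wasm")?)?;
    std::fs::write("guest.cwasm", &precompiled)?;

    // Request time: deserializing the precompiled artifact skips compilation.
    // `deserialize` is unsafe because the bytes must come from a trusted
    // `precompile_module` call made with a compatible wasmtime version.
    let module = unsafe { Module::deserialize(&engine, std::fs::read("guest.cwasm")?)? };
    let mut store = Store::new(&engine, ());
    let instance = Instance::new(&mut store, &module, &[])?;
    let run = instance.get_typed_func::<(), i32>(&mut store, "run")?;
    println!("guest returned {}", run.call(&mut store, ())?);
    Ok(())
}
```

A JavaScript guest can't take full advantage of that path: even with the engine itself precompiled, the JavaScript source still has to be processed and executed by that engine at runtime, which is exactly the work a precompiled Rust guest has already paid for at build time.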

The catch, of course, is that this still leaves a fair amount of work which a JavaScript program has to do at runtime, compared to a Rust program which the compiler can spend minutes optimizing long before deployment. This is a classic tradeoff for dynamic languages, and a lot of people are satisfied with the approach of defaulting to faster developer turnaround and later converting hot spots to something like C or Rust, but I think it's definitely dodgy to take a single example of something you know to be this complex and present the results as generally representative of the entire platform.
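
To make the tradeoff concrete, a rough sketch (not from either vendor's code): the Rust function below is fully typed at compile time, so an optimizing build can reduce it to a couple of machine instructions and inline it into callers, while the equivalent JavaScript function gives the engine no such guarantee and has to be specialized at runtime.

```rust
// Statically typed: the compiler knows at build time that `a` and `b` are
// 64-bit integers, so `add` compiles to (roughly) a single add instruction
// and can be inlined wherever it is called.
pub fn add(a: u64, b: u64) -> u64 {
    a + b
}

// The JavaScript equivalent, `function add(a, b) { return a + b; }`, carries
// no such guarantee: `+` might mean integer addition, float addition, or
// string concatenation, so the engine has to observe real calls, speculate
// on types, compile a fast path, and keep a bailout path in case a later
// call breaks the assumption -- all of it while requests are in flight.
fn main() {
    println!("{}", add(2, 3));
}
```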

I have no knowledge of how Cloudflare ran their tests, and no reason to suspect malice, but not disclosing the test methodology while also having a "no benchmarks" license clause is going to make accusations about the results inevitable. Veterans of the benchmarketing wars have stories about comparisons where someone used a configuration option which happened to disable key performance optimizations for their competitors, or used examples which favored their own design decisions. Since nobody can read someone's mind to tell their real intent, the best way to avoid acrimony is full public disclosure of the benchmarks and their methodology, and allowing other people to conduct benchmarks so they can reproduce results or fill gaps.
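
For what it's worth, the disclosure doesn't have to be elaborate. Even a sketch like the following (assuming the `reqwest` crate with its `blocking` feature; the URL and request count are placeholders, not either vendor's setup) pins down the decisions that actually move the numbers: how long the run is, whether requests are sequential or concurrent, whether cold-start requests are included, and which percentile is reported.

```rust
// Minimal latency benchmark sketch: N sequential GETs against one URL,
// reporting p50/p95/p99 over the full run. Every choice here (duration,
// concurrency, warm-up, percentile) is methodology that should be published
// alongside the results.
use std::time::Instant;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let url = "https://example.com/"; // placeholder target
    let n = 1000; // placeholder request count
    let client = reqwest::blocking::Client::new();

    let mut samples_ms: Vec<f64> = Vec::with_capacity(n);
    for _ in 0..n {
        let start = Instant::now();
        let resp = client.get(url).send()?;
        let _ = resp.bytes()?; // read the body so the full response is timed
        samples_ms.push(start.elapsed().as_secs_f64() * 1000.0);
    }

    samples_ms.sort_by(|a, b| a.partial_cmp(b).unwrap());
    let pct = |p: f64| samples_ms[((samples_ms.len() - 1) as f64 * p) as usize];
    println!(
        "p50={:.1}ms p95={:.1}ms p99={:.1}ms",
        pct(0.50),
        pct(0.95),
        pct(0.99)
    );
    Ok(())
}
```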

1. https://github.com/bytecodealliance/wasmtime