In their documentation ( https://k6.io/docs/ ) they claim that
> JavaScript is not generally well suited for high performance. To achieve maximum performance, the tool itself is written in Go, embedding a JavaScript runtime allowing for easy test scripting.
How is it possible that a pure-Go JavaScript interpreter (goja) with bindings for net/http and some reporting would be faster than the same tool written in Node.js using its HTTP client (which, if I remember correctly, is written in C)?
I don’t mean to downplay the importance or usefulness of k6; I just find their reasoning behind choosing Go somewhat contrived.
I am not completely sure why the Go stdlib's HTTP client (which k6 uses) is faster than the Node.js one. I think part of it is that k6 spins up a separate JS runtime for each VU. goja is a much, _much_ slower JS interpreter than V8, but load tests are I/O-bound, so that's usually not an issue. And you can spin up thousands of VUs in k6 (especially with --compatibility-mode=base), making full use of the load generator machine.
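For a rough sense of what that looks like in practice, here is a minimal k6 script that runs a large number of VUs, each with its own JS runtime (the VU count, duration and target URL below are just placeholder values for illustration):

```js
// load-test.js - run with: k6 run load-test.js
import http from 'k6/http';
import { sleep } from 'k6';

// Each VU gets its own goja runtime; the work is I/O-bound,
// so thousands of VUs per load generator are feasible.
export const options = {
  vus: 1000,        // number of concurrent virtual users
  duration: '5m',   // total test duration
};

export default function () {
  // Placeholder URL - point this at the service under test.
  http.get('https://example.com/');
  sleep(1); // think time between iterations
}
```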
You can find some basic performance and other comparisons between load testing tools in this very long article of ours: https://k6.io/blog/comparing-best-open-source-load-testing-t...
And some advice for squeezing the maximum performance out of k6 in here: https://k6.io/docs/testing-guides/running-large-tests
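As far as I know, the biggest memory saving at high VU counts comes from writing the script in plain ES5.1 so it can run with --compatibility-mode=base, which skips the extra JS transformation work. A sketch of the same test in that style (again with placeholder values):

```js
// load-test-base.js
// run with: k6 run --compatibility-mode=base load-test-base.js
var http = require('k6/http');
var k6 = require('k6');

module.exports.options = {
  vus: 1000,
  duration: '5m',
};

module.exports.default = function () {
  // Same placeholder URL as above.
  http.get('https://example.com/');
  k6.sleep(1);
};
```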
Anyway, what I've seen when comparing the performance of tools is that Artillery, which runs on Node.js, is perhaps the worst performer of all the tools I've tested. I don't know if that's because of Node.js or because Artillery itself isn't a very performant piece of software (it also consumes a lot of memory, btw).
If you want the highest performance, there is one tool that runs circles around all others (including k6), and that is wrk - https://github.com/wg/wrk - a very cool piece of software, although it is lacking in terms of functionality, so it's mostly suitable for simpler load testing like hammering single URLs.
(I don't know how fast wrk2 is; I haven't benchmarked it.)