What does HackerNews think of wrk?

Modern HTTP benchmarking tool

Language: C

We use Locust at work, but I HIGHLY recommend wrk as a very robust yet simple load-testing tool.

https://github.com/wg/wrk

And of course, this talk by Gil Tene is fantastic if you're interested in load testing stats https://www.youtube.com/watch?v=lJ8ydIuPFeU

There are HTTP benchmarking tools like wrk [0]. You don't need a DDoS service for that.

[0] https://github.com/wg/wrk

It would be more of an apples-to-apples comparison to run it on the same setup you ran ab on. You can grab wrk here: https://github.com/wg/wrk Enjoy!
We are launching a pricing page in the next month as we are gearing up for launch :)

One example of a 50% cost saving comes from Amazon Machine Images (AMIs). We compared the cheapest official NGINX AMI available on the Amazon Marketplace[0] against an NGINX+Unikraft AMI. We ran the same workload using wrk[1] and checked the bill at the end of the month: roughly $80 vs. $40.

[0]: https://aws.amazon.com/marketplace/pp/prodview-xogyq23b3mfge

[1]: https://github.com/wg/wrk

How are those benchmarks run?

The Nim release page:

https://nim-lang.org/blog/2021/10/19/version-160-released.ht...

links to this benchmark:

https://web-frameworks-benchmark.netlify.app/result

where Nim is 2nd with 200k req/s, but that result uses httpbeast:

https://github.com/dom96/httpbeast

which itself says it would be more practical to use Jester:

https://github.com/dom96/jester

Jester manages about 150k req/s.

But looking at these benchmarks:

https://www.techempower.com/benchmarks/

Drogon, Actix, etc. reach about 600k req/s.

redbean also did about 600k req/s when I tested it:

https://redbean.dev/

I tested like this:

git clone https://github.com/wg/wrk.git

cd wrk

make

./wrk -H 'Accept-Encoding: gzip' -t 12 -c 120 http://127.0.0.1:8080/

When I tested https://caddyserver.com v2, it showed about 800k req/s.

It would be very helpful to know how those benchmarks are actually done, so that I could compare what is actually fastest in the real world, rather than relying on unrealistic code tuned to win benchmarks.

What do you mean "very long article"? You could have used "extensive" or something ;)

Anyway, from comparing the performance of these tools, Artillery, which runs on Node.js, is perhaps the worst performer of all the tools I've tested. I don't know if that's because of Node.js or because Artillery itself isn't a very performant piece of software. (It also consumes a lot of memory, btw.)

If you want the highest performance, there is one tool that runs circles around all others (including k6), and that is wrk - https://github.com/wg/wrk - a very cool piece of software, although it is limited in functionality, so it's mostly suitable for simpler load testing like hammering single URLs.

(I don't know how fast wrk2 is, haven't benchmarked it)

Locust [1] is fantastic for load testing Python apps or any REST API; it's nice in the same way wrk is. [2]

At our startup we used it to test 100,000 users' worth of traffic against our Python-based backend and optimized it continuously. We also used wrk with test cases written as Lua scripts. That worked fantastically too.
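As an illustration of the wrk-plus-Lua approach mentioned above, here is a sketch of what such a test script can look like (the filename and error threshold are made up; the setup/response/done callbacks and thread:get/thread:set are part of wrk's documented scripting interface):

```lua
-- hypothetical check.lua: count non-2xx responses across all threads
-- run with: wrk -t4 -c64 -d30s -s check.lua http://127.0.0.1:8080/
local threads = {}

function setup(thread)
   -- each wrk thread gets its own Lua environment; seed a counter in it
   thread:set("errors", 0)
   table.insert(threads, thread)
end

function response(status, headers, body)
   -- runs in the thread's environment, so "errors" is the per-thread counter
   if status >= 400 then
      errors = errors + 1
   end
end

function done(summary, latency, requests)
   -- runs once at the end; pull each thread's counter back out
   local total = 0
   for _, t in ipairs(threads) do
      total = total + t:get("errors")
   end
   print("responses with status >= 400: " .. total)
end
```

The script is not standalone Lua; wrk loads it with `-s` and drives the callbacks itself.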

We did not use bokeh visualization, but just with locusts we could improve the response time by logging and improving sqlalchemy query logs.

[1] https://locust.io/

[2] https://github.com/wg/wrk

I am not sure what you are talking about. wrk2 can put out 7M req/s on a 16-core box. That is way beyond Phoenix's performance on the same HW type. wrk2 is a widely used and accepted performance-measurement tool. Again, you mentioned microsecond latency, which means you are talking about localhost microbenchmarking. That is irrelevant from a production-workload point of view. I have successfully saturated network links with wrk2, which is the definition of fast enough.

Interestingly, there was a previous thread on HN about which tools are used for HTTP perf testing:

>>> - Wrk: https://github.com/wg/wrk - Fastest tool in the universe. About 25x faster than Locust. 3x faster than Jmeter. Scriptable in Lua. Drawbacks are limited output options/reporting and a scripting API that is callback-based, so painful to use for scripting user scenario flows.
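The callback-based scripting API that quote refers to looks roughly like this: a multi-step "user flow" has to be encoded as explicit state inside a request() callback, rather than written as straight-line steps. A minimal sketch (the endpoint paths are hypothetical; wrk.format is wrk's documented helper for building raw requests):

```lua
-- wrk calls request() once per request, so a scenario becomes a state machine
local step  = 0
local paths = { "/login", "/dashboard", "/logout" }  -- hypothetical flow

function request()
   step = step % #paths + 1
   return wrk.format("GET", paths[step])
end
```

Each connection cycles through the paths in order, which is about as close to a "scenario" as the callback model comfortably gets.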

https://news.ycombinator.com/item?id=15738967

https://news.ycombinator.com/item?id=15733910

Kore.io, 4 workers, modified C++ example, along with a state machine https://github.com/jorisvink/kore

json library https://github.com/nlohmann/json

wrk, post with json payload via lua post script https://github.com/wg/wrk
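A minimal sketch of what such a Lua POST script might look like (the payload and values here are invented; wrk.method, wrk.body, and wrk.headers are part of wrk's scripting interface):

```lua
-- hypothetical post.lua: send the same JSON POST body with every request
-- run with: wrk -t4 -c64 -d30s -s post.lua http://127.0.0.1:8080/endpoint
wrk.method = "POST"
wrk.body   = '{"name": "test", "value": 42}'
wrk.headers["Content-Type"] = "application/json"
```

Setting these globals at the top level is enough for a fixed payload; a request() callback is only needed when the body must vary per request.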

I like to use wrk[1] for quick/spot testing:

* It allows me to focus on latency versus throughput independently

* It has tools for testing performance on pipelined requests (very important in my space)

* It supports scripting complex authentication processes (like magic headers and OAuth2)

* It supports scripting fuzzing (random values) for URLs and POST requests

[1]: https://github.com/wg/wrk
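For the URL-fuzzing use mentioned in the list above, a wrk Lua script can randomize the path on every request. A minimal sketch, assuming a hypothetical /items/<id> endpoint and id range:

```lua
-- randomize the request path per request (simple URL "fuzzing")
math.randomseed(os.time())

function request()
   local id = math.random(1, 100000)          -- hypothetical id range
   return wrk.format("GET", "/items/" .. id)  -- hypothetical endpoint
end
```

The same pattern works for POST fuzzing by passing headers and a randomized body as the third and fourth arguments to wrk.format.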

For wide-area load testing, I simply buy some advertising since that gives me cheap access to millions of simultaneous users.