What does HackerNews think of vegeta?
HTTP load testing tool and library. It's over 9000!
A tool that can send requests at a constant rate, e.g. wrk2 or Vegeta [2], is a much better fit for this type of performance test.
1. https://www.scylladb.com/2021/04/22/on-coordinated-omission/
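To make the "constant rate" point concrete, here is a minimal sketch of Vegeta used as a Go library, along the lines of the example in its README. The localhost URL, rate and duration are placeholder values, not recommendations; the point is that the attacker keeps sending at the configured rate regardless of how slowly the server responds.

```go
// Minimal constant-rate load test using Vegeta as a Go library.
// URL, rate and duration below are placeholders for illustration.
package main

import (
	"fmt"
	"time"

	vegeta "github.com/tsenart/vegeta/v12/lib"
)

func main() {
	// 100 requests per second for 30 seconds: an open-model workload that
	// does not slow down when the server does, avoiding coordinated omission.
	rate := vegeta.Rate{Freq: 100, Per: time.Second}
	duration := 30 * time.Second

	targeter := vegeta.NewStaticTargeter(vegeta.Target{
		Method: "GET",
		URL:    "http://localhost:8080/", // hypothetical endpoint
	})
	attacker := vegeta.NewAttacker()

	var metrics vegeta.Metrics
	for res := range attacker.Attack(targeter, rate, duration, "constant-rate") {
		metrics.Add(res)
	}
	metrics.Close()

	fmt.Printf("99th percentile latency: %s\n", metrics.Latencies.P99)
	fmt.Printf("success ratio: %.2f%%\n", metrics.Success*100)
}
```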
IMHO how you load test is almost as important as where you load test and why you're doing it in the first place. There are three separate reasons to do load testing:
1) Performance testing. Confirm the system does not degrade under a specified load and find out what performance can be expected under these circumstances. This is basically ensuring your system can handle X amount of traffic without issues and knowing your baseline performance. You should get the same kind of response times as you are getting from live server telemetry.
2) Stress testing. Finding out what happens when the system is stressed beyond its specs: how and where does it degrade. (A rough ramp-up sketch follows this list.)
3) Reliability testing. Find out how your system breaks and when. The goal here is to try to break the system, test things like failover, and make sure you don't lose or corrupt data. Better to die gracefully than abruptly.
If that's new to you, you've probably been doing it wrong, because those three require very different approaches.
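To illustrate the contrast: a performance test holds a fixed request rate (as in the sketch above), while a stress test keeps pushing the rate up until something gives. Below is a rough sketch of the latter using Vegeta as a Go library; it assumes the library's LinearPacer, and the starting rate, slope, duration and 5% error cutoff are arbitrary illustration values.

```go
// Rough stress-test sketch: ramp the request rate linearly and stop once the
// observed error rate crosses a threshold. All numbers are illustrative only.
package main

import (
	"fmt"
	"time"

	vegeta "github.com/tsenart/vegeta/v12/lib"
)

func main() {
	// Start at 50 req/s and ramp up by roughly 10 req/s every second.
	pacer := vegeta.LinearPacer{
		StartAt: vegeta.Rate{Freq: 50, Per: time.Second},
		Slope:   10,
	}
	targeter := vegeta.NewStaticTargeter(vegeta.Target{
		Method: "GET",
		URL:    "http://localhost:8080/", // hypothetical endpoint
	})
	attacker := vegeta.NewAttacker()

	var metrics vegeta.Metrics
	var total, failures uint64
	stopped := false

	for res := range attacker.Attack(targeter, pacer, 5*time.Minute, "ramp") {
		metrics.Add(res)
		total++
		if res.Error != "" || res.Code >= 400 {
			failures++
		}
		// Once more than 5% of requests have failed, stop issuing new
		// requests; in-flight ones still drain before the channel closes.
		if !stopped && total > 100 && float64(failures)/float64(total) > 0.05 {
			attacker.Stop()
			stopped = true
		}
	}
	metrics.Close()

	fmt.Printf("sent %d requests before stopping (%d failures)\n", total, failures)
	fmt.Printf("max observed latency: %s\n", metrics.Latencies.Max)
}
```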
I tend to avoid having to do load testing as it sucks up time without telling me much of interest. I instead opt for having decent telemetry on the live system. It will tell me how it performs and where the bottlenecks are. I can set alerts and take action when things degrade (e.g. because of a bad change). Besides, there is no substitute for having real users doing real things with real data. And in any case, having telemetry is crucial to do any meaningful stress or reliability testing; otherwise you just know it degrades without understanding why.
There are still valid reasons to do separate load testing of course, but I seem to get away with mostly not doing it. When I do, vegeta is what I reach for first. I find most of the need for load testing comes from SLA requirements or from nervous product owners who need to be reassured. They tend to be more interested in performance tests (i.e. the happy case), whereas, technically, stress testing is where you learn things about your system.
When it comes to load testing tools I like Vegeta[1], personally. (Though I've also used some much more complicated proprietary tools when testing at great scale.)
- Gatling: https://gatling.io/ - Great tool. Fast, full-featured, flexible, well documented. Main drawback is that you script it in Scala... The UX for automation purposes could also be better.
- Tsung: http://tsung.erlang-projects.org/ - Also great. Very fast, scalable, many features. Main drawback is the XML-based DSL (Domain Specific Language) that is somewhat of a pain to use.
- Vegeta: https://github.com/tsenart/vegeta - Good for simple testing of API end points with a fixed RPS rate. No scripting capability.
- Apachebench: https://httpd.apache.org/docs/2.4/programs/ab.html - Single-threaded but very, very fast so will outperform many tools that can use multiple CPU cores. Best tool around if all you want to do is hit one single, static URL and get results printed on screen. Lacks scripting.
- Jmeter: http://jmeter.apache.org/ - Very fast, tons of features, but painful UX (especially for an automation workflow). Biggest community with the most plugins/extensions/whatnot.
- Siege: https://www.joedog.org/siege-home/ - Moderately fast but doesn't scale. Buggy and will crash regularly. Large measurement error. Inconsistent UI. The only reason to use it is as an alternative to Apachebench when you want to hit a list of URLs rather than just a single URL.
- Artillery: https://artillery.io/ - Great UX for automation. However, it lacks scripting capabilities, is slow and single-threaded and introduces a large measurement error.
- Locust: https://locust.io/ - Great scripting capabilities (in pure Python), with a very nice API. However, it is the slowest tool I have seen and also introduces the most measurement error of any of them.
- Wrk: https://github.com/wg/wrk - Fastest tool in the universe. About 25x faster than Locust. 3x faster than Jmeter. Scriptable in Lua. Drawbacks are limited output options/reporting and a scripting API that is callback-based, so painful to use for scripting user scenario flows.
- The Grinder: http://grinder.sourceforge.net/ - Fast, scriptable in Jython (Python dialect) with a nice API. Main drawback is that it is an almost dead open source project - very few updates in recent years.
- k6: https://k6.io/ (bias warning: I am involved in the k6 project) - The B3ST tool!! ;) Anyway... it is fast, scriptable in Javascript (ES6), has a very nice UX and scripting API and works for both functional and performance testing (and is excellent for automation). Good docs too (https://docs.k6.io).
General advice:
- If you're a Java-centric shop, take a look at Jmeter or Gatling, or perhaps The Grinder (all run on the JVM) and you'll feel at home. You should probably start with Gatling, because it is more modern than Jmeter, with better UX, and The Grinder is kind of on its way out.
- If you want to automate your load tests, want to test an API, and prefer to write test cases in code rather than some limited DSL, then I think k6 is the tool you should look at. The only reason not to is if you don't want to use Javascript.
- If you want to run complex, manual load tests of a web site (e.g. record user interactions and then simulate the same behaviour in a load test) you should look at Gatling or Jmeter primarily. Perhaps Tsung.
Besides that, Artillery also looks promising: https://github.com/shoreditch-ops/artillery
Has anyone used both, and if so, what are the main differences from k6?
One request: please allow the output of either a) a CSV log of every request and its timing or b) a configurable histogram of request timings, not just some percentiles and averages. That info is pretty much required for any analysis more advanced than "the service is fast enough". https://github.com/tsenart/vegeta does this really well (but doesn't have the scripting abilities of this tool).
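For reference, here is a sketch of what that looks like with Vegeta used as a Go library: every result is streamed to a CSV writer, and a histogram with user-chosen buckets is reported alongside the usual metrics. It assumes the library's NewCSVEncoder and Histogram/NewHistogramReporter helpers; the URL, rate, bucket boundaries and file name are placeholders.

```go
// Sketch: log every request to CSV and print a latency histogram with custom
// buckets, using Vegeta as a Go library. Assumes the library's NewCSVEncoder
// and Histogram/NewHistogramReporter helpers; all values are placeholders.
package main

import (
	"log"
	"os"
	"time"

	vegeta "github.com/tsenart/vegeta/v12/lib"
)

func main() {
	rate := vegeta.Rate{Freq: 50, Per: time.Second}
	targeter := vegeta.NewStaticTargeter(vegeta.Target{
		Method: "GET",
		URL:    "http://localhost:8080/", // hypothetical endpoint
	})
	attacker := vegeta.NewAttacker()

	// Per-request log: one CSV row per result, for offline analysis.
	f, err := os.Create("results.csv")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	encode := vegeta.NewCSVEncoder(f)

	// Configurable latency histogram instead of only averages/percentiles.
	hist := vegeta.Histogram{Buckets: []time.Duration{
		0,
		25 * time.Millisecond,
		50 * time.Millisecond,
		100 * time.Millisecond,
		250 * time.Millisecond,
	}}

	var metrics vegeta.Metrics
	for res := range attacker.Attack(targeter, rate, 30*time.Second, "csv+hist") {
		metrics.Add(res)
		hist.Add(res)
		if err := encode(res); err != nil {
			log.Fatal(err)
		}
	}
	metrics.Close()

	// Print the bucketed latency distribution to stdout.
	if err := vegeta.NewHistogramReporter(&hist).Report(os.Stdout); err != nil {
		log.Fatal(err)
	}
}
```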
However, I'm easily tempted by the new and shiny which led me to find another great HTTP benchmarking tool: https://github.com/tsenart/vegeta