Can someone explain in words what the coordinated omission problem is?
Is it that long samples tend to kick other samples out of the window, messing up your stats?
First, some terminology that I think is important for the discussion (see [1] for a more formal treatment). When I say 'job', this could be something like a user, an HTTP request, an RPC call, a network packet, or any sort of task the system is asked to do and can accomplish in some finite amount of time.
Closed-loop system, aka closed system: a system where new job arrivals are triggered only by job completions. Examples: an interactive terminal session, or batch systems like a CI build server.
Open-loop system, aka open system: a system where new job arrivals are independent of job completions. Examples: requests for the front page of Hacker News, or packets arriving at a network switch.
Partly-open system: a system where new jobs arrive from some outside process as in an open system, but every time a job completes there is a probability p that it makes a follow-up request, and a probability (1 - p) that it leaves the system. An example is a web application, where each user requests a page and makes some follow-up requests, but users are independent of each other, and new users arrive and leave on their own.
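To make the partly-open model concrete, here is a minimal sketch in Python; serve(), the 100 ms service time, the arrival rate, and p are all made up for illustration:

    import random
    import threading
    import time

    def serve():
        time.sleep(0.1)  # pretend the system takes 100 ms per request

    def session(p=0.3):
        # One user: an initial request, then a follow-up request with
        # probability p after each completion, then the user leaves.
        serve()
        while random.random() < p:
            serve()

    # The 'open' part: users arrive from the outside at a fixed rate,
    # independently of each other and of any completions. With p=0
    # this degenerates into a pure open system.
    for _ in range(100):
        threading.Thread(target=session).start()
        time.sleep(0.2)  # one new user every 200 ms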
Second, workload generators (e.g. JMeter, ab, Gatling) can be classified the same way. Generators that issue a request and then block waiting for the response before issuing the next request are based on a closed system (e.g. JMeter[2], ab). Generators that keep issuing requests on their own schedule, regardless of the system's throughput, are based on an open system (e.g. Gatling, wrk2[3]).
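Roughly, the two styles look like this (a sketch, not any tool's actual code; send_request is a hypothetical blocking call to the system under test):

    import threading
    import time

    def closed_generator(duration_s, send_request):
        # Closed: the next request is issued only after the previous
        # response arrives, so the effective rate is throttled by the
        # system's response rate -- this is where CO comes from.
        latencies = []
        end = time.time() + duration_s
        while time.time() < end:
            start = time.time()
            send_request()
            latencies.append(time.time() - start)
        return latencies

    def open_generator(duration_s, rate_per_s, send_request):
        # Open: requests are fired on a fixed schedule, and latency is
        # measured from the *intended* send time (roughly how wrk2's
        # latency correction works), so a stall is charged to every
        # request that should have been sent while it lasted.
        latencies = []
        lock = threading.Lock()
        interval = 1.0 / rate_per_s
        t0 = time.time()
        threads = []
        for i in range(int(duration_s * rate_per_s)):
            intended = t0 + i * interval
            time.sleep(max(0.0, intended - time.time()))

            def fire(intended=intended):
                send_request()
                with lock:
                    latencies.append(time.time() - intended)

            th = threading.Thread(target=fire)
            th.start()
            threads.append(th)
        for th in threads:
            th.join()
        return latencies

The key difference: in the closed loop a stall produces one long sample, while in the open loop it produces one long sample per missed tick.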
Now, CO happens whenever a workload generator based on a closed system is used against an open or partly-open system, and the throughput of the system under load drops below the intended injection rate of the workload generator.
For the sake of simplicity, assume we have an open system, say a simple web page, where users arrive by some arrival process, request the page once, and 'leave'. Assume the arrival process is deterministic: exactly one request arrives every second.
In this example, if we use a workload generator based on a closed system to simulate this workload for 100 seconds, and the system under load never slows down, always serving a response in under 1 second, say a constant 500 ms, then there is no CO. At the end we will have 100 response-time samples of 500 ms, and all the statistics (min, max, avg, etc.) will be 500 ms.
Now, say we use the same workload generator at an injection rate of 1 request/s, but this time the system under load behaves as before for the first 50 seconds, with responses taking 500 ms, and then stalls completely for the last 50 seconds.
Since the system under load is an open system, requests keep arriving every second during the stall, so we should expect 50 samples of 500 ms plus 50 samples whose response times decrease linearly from 50 s down to 1 s: the request that arrived at t=50s completes when the stall ends at t=100s, while the one that arrived at t=99s waits only 1 s. The statistics then would be:
min=500ms, max=50s, avg=13s, median=0.75s, 90%ile≈40s
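You can check these numbers with a few lines of Python (the percentile here uses the nearest-rank method; interpolating methods give ~40.1 s):

    # Expected open-system samples: 50 fast responses, plus one sample
    # per second of arrival during the stall, all completing at t=100s.
    samples = sorted([0.5] * 50 + [float(s) for s in range(1, 51)])
    n = len(samples)
    print(min(samples), max(samples))               # 0.5 50.0
    print(sum(samples) / n)                         # 13.0
    print((samples[n//2 - 1] + samples[n//2]) / 2)  # 0.75 (median)
    print(samples[int(0.9 * n) - 1])                # 40.0 (90th percentile)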
But because we used a closed-system workload generator, our samples are skewed. Instead, we get 50 samples of 500 ms and only 1 sample of 50 seconds! This happens because the injection rate is throttled by the response rate of the system: the generator effectively backed off when the system stalled, so this is not even the workload we intended. The stats now look like this:
min=500ms, max=50s, avg=1.47s, median=500ms, 90%ile=500ms.
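The same computation on the skewed sample set shows how the stall vanishes from everything but the max:

    # What the closed generator actually recorded: 50 fast samples and
    # one 50 s sample for the entire stall -- 51 samples instead of 100.
    skewed = sorted([0.5] * 50 + [50.0])
    n = len(skewed)
    print(sum(skewed) / n)            # ~1.47
    print(skewed[n // 2])             # 0.5 (median)
    print(skewed[int(0.9 * n) - 1])   # 0.5 (90th percentile)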
[1] [pdf] http://repository.cmu.edu/cgi/viewcontent.cgi?article=1872&c...
[2] http://jmeter.512774.n5.nabble.com/Coordinated-Omission-CO-p...
[3] https://github.com/giltene/wrk2