I'd like to see fio and iperf3 baselines on these same instances so we know how much raw performance is available for disk and network, alone and together.
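Something like the following would do for a baseline, assuming fio and iperf3 are installed and an iperf3 server is already running on a peer instance (the peer address and the specific fio flags here are my own placeholders, not from the original tests):

```python
# Baseline sketch: run fio and iperf3 with JSON output, print headline numbers.
import json
import subprocess

IPERF_SERVER = "10.0.0.2"  # hypothetical peer instance running `iperf3 -s`

def fio_seq_write_gbs() -> float:
    """Sequential 1 MiB direct writes across 4 jobs; returns aggregate GB/s."""
    out = subprocess.run(
        ["fio", "--name=seqwrite", "--rw=write", "--bs=1M", "--size=4G",
         "--numjobs=4", "--ioengine=libaio", "--direct=1",
         "--group_reporting", "--output-format=json"],
        capture_output=True, text=True, check=True).stdout
    job = json.loads(out)["jobs"][0]
    return job["write"]["bw_bytes"] / 1e9  # bw_bytes is bytes/sec in recent fio

def iperf3_gbit(parallel: int = 8) -> float:
    """30 s TCP test with N parallel streams; returns Gbit/s."""
    out = subprocess.run(
        ["iperf3", "-c", IPERF_SERVER, "-P", str(parallel), "-t", "30", "-J"],
        capture_output=True, text=True, check=True).stdout
    return json.loads(out)["end"]["sum_sent"]["bits_per_second"] / 1e9

if __name__ == "__main__":
    print(f"disk seq write: {fio_seq_write_gbs():.2f} GB/s")
    print(f"network:        {iperf3_gbit():.2f} Gbit/s")
```

Repeat with --rw=read and --rw=randwrite, and with different -P counts, to map out the envelope.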
Cloud instances have their own performance pathologies, especially in the use of remote disks.
As for RP and Kafka performance, I'd love to see a parameter sweep over both systems' configuration dimensions as well as the workload. I know this is a large space, but it needs to be done to characterize the available capacity, latency, and bandwidth.
These instances can manage disk throughput of up to 2 GB/s (400K IOPS) and network throughput of 25 Gbps, or ~3.1 GB/s (25 ÷ 8 ≈ 3.1).
There are so many dimensions: configurations, CPU architecture, hardware resources, plus all the workloads and the client configs. It gets kind of crazy. I like to use a dimension-testing approach where I fix everything but one (or possibly two) dimensions at a time and plot the relationship to performance, as in the sketch below.
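The dimension names, baseline values, and run_benchmark() below are all placeholders to show the shape of the harness, not real knobs from the post:

```python
# One-dimension-at-a-time sweep: hold a baseline config fixed, vary one knob,
# and plot throughput against it.
import matplotlib.pyplot as plt

BASELINE = {"payload_bytes": 4096, "producers": 8, "acks": 1, "batch_bytes": 1 << 18}

SWEEPS = {
    "payload_bytes": [512, 4096, 65536, 524288],
    "producers": [1, 2, 4, 8, 16, 32],
}

def run_benchmark(config: dict) -> float:
    """Placeholder: launch the real workload with this config and return MB/s.
    Returns a dummy value so the skeleton runs end to end."""
    return 0.0

for dim, values in SWEEPS.items():
    results = []
    for v in values:
        cfg = dict(BASELINE, **{dim: v})  # everything fixed except `dim`
        results.append(run_benchmark(cfg))
    plt.figure()
    plt.plot(values, results, marker="o")
    plt.xlabel(dim)
    plt.ylabel("throughput (MB/s)")
    plt.xscale("log")
    plt.savefig(f"sweep_{dim}.png")
```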
Can the instance do 2 GB/s to disk at the same time it is doing 3.1 GB/s across the network? Is that capacity bidirectional, or in a single direction? How many threads does it take to achieve those numbers?
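The combined question is easy to test directly: run both tools at once and compare against the solo numbers. A sketch, reusing the same assumed flags and placeholder peer address as above:

```python
# Run fio and iperf3 concurrently; any drop versus the solo runs shows where
# disk and network contend (CPU, PCIe lanes, or hypervisor caps).
import subprocess

fio = subprocess.Popen(
    ["fio", "--name=concurrent", "--rw=write", "--bs=1M", "--size=4G",
     "--numjobs=4", "--ioengine=libaio", "--direct=1",
     "--group_reporting", "--output-format=json"],
    stdout=subprocess.PIPE, text=True)
iperf = subprocess.Popen(
    # -R reverses direction; --bidir (iperf3 >= 3.7) pushes both ways at once,
    # which answers the single-direction-vs-bidirectional question.
    ["iperf3", "-c", "10.0.0.2", "-P", "8", "-t", "30", "-J"],
    stdout=subprocess.PIPE, text=True)

fio_out, _ = fio.communicate()
iperf_out, _ = iperf.communicate()
# Parse the JSON as in the baseline script. Sweep --numjobs and -P to find
# how many threads each side needs to hit its ceiling.
```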
That is kind of a nice property, that the network has 50% more bandwidth than the disk. A 2x ratio would be even nicer, but here that works out to 1.5 GB/s of disk against 3 GB/s of network, i.e., a slight reduction in disk throughput.
Are you able to run a single RP Kafka node and blast data into it over loopback? That would take the network out of the picture and show how much of the available disk bandwidth a single node can achieve across different payload configurations, before moving on to a distributed disk+network test. If it can only hit 1 GB/s on a single node, you know there is room to improve in the write path to disk.
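A loopback blast could be as simple as the following, assuming a single broker on localhost:9092 and the kafka-python client (both my assumptions; the topic name is hypothetical). A single Python producer will likely be client-bound long before 2 GB/s, so run several processes, or a librdkafka-based client, for a real saturation test:

```python
# Loopback blast: sweep payload size and report write throughput into a
# single local node, taking the network out of the picture.
import time
from kafka import KafkaProducer  # pip install kafka-python

TOPIC = "blast-test"        # hypothetical topic
TOTAL_BYTES = 1 * 1024**3   # push 1 GiB per payload size

for payload_size in (512, 4096, 65536, 524288):  # stay under broker max message size
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        acks=1,             # acks/linger/batch are themselves sweep dimensions
        linger_ms=5,
        batch_size=1 << 18,
    )
    payload = b"x" * payload_size
    start = time.time()
    for _ in range(TOTAL_BYTES // payload_size):
        producer.send(TOPIC, payload)
    producer.flush()
    elapsed = time.time() - start
    print(f"payload={payload_size:>6}B  {TOTAL_BYTES / elapsed / 1024**2:.0f} MiB/s")
    producer.close()
```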
The other thing people might be looking for when choosing RP over AK is less jitter from GC activity. For latency-sensitive applications this can be far more important than raw throughput. I'd use wrk2, or borrow its techniques, since it makes sure to account for coordinated omission:
https://github.com/giltene/wrk2
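The core wrk2 trick is easy to borrow: issue requests on a fixed schedule and measure latency from the intended send time rather than the actual one, so a stalled pipeline shows up as tail latency instead of silently lowering the offered rate. A minimal sketch (send_fn is a hypothetical blocking send):

```python
# wrk2-style coordinated-omission correction for a closed-loop load generator.
import time

def run_at_fixed_rate(send_fn, rate_hz: float, duration_s: float) -> list[float]:
    interval = 1.0 / rate_hz
    latencies = []
    start = time.perf_counter()
    intended = start
    while intended < start + duration_s:
        now = time.perf_counter()
        if now < intended:
            time.sleep(intended - now)  # wait for the scheduled send time
        send_fn()                       # hypothetical blocking send
        done = time.perf_counter()
        # Key step: measure against the schedule, not the (possibly delayed)
        # actual send time, so queueing delay is counted as latency.
        latencies.append(done - intended)
        intended += interval
    return latencies
```

Feed the corrected latencies into something like HdrHistogram for percentile reporting; GC pauses then appear where they belong, in the tail.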