What does HackerNews think of tracy?

Frame profiler

Language: C++

#3 in Library
Not the person you asked, but generally you might want to look at "frame-based" profilers. These are typically used in video games, but the concept is general, and can apply to other applications. The "frame" could also be something like a request or transaction being processed. I like Tracy[1], myself.

Another latency metric that you'll see, often with respect to web apps and microservices, is "P99" and similar. This is the time within which 99% of requests get their response. Higher percentiles give you a better idea of worst-case performance.

[1] https://github.com/wolfpld/tracy

If you aren't averse to manual instrumentation, there's also Tracy[1].

[1]: https://github.com/wolfpld/tracy

Tracy (https://github.com/wolfpld/tracy), mentioned in this article as well, is for some reason criminally underused and little known in the wider community.
Check out Tracy[1]. If you run it as root, it provides a lot of "extra" information, such as when your threads get moved between CPUs. Actually, I saw this post and thought "why should I bother when I already have Tracy?" If anyone has an answer to that, I'm curious to know (:

Tracy is still a frame-based profiler, though.

If you want general system-wide profiling more focused on throughput rather than latency, then I've had a good experience generating flame graphs[2] using plain Linux perf.

[1] https://github.com/wolfpld/tracy

[2] https://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html
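The perf-to-flame-graph pipeline from [2] boils down to three commands. The `./FlameGraph/` path is an assumption — adjust it to wherever you cloned Brendan Gregg's FlameGraph scripts:

```shell
# Sample on-CPU stacks system-wide (-a) at 99 Hz with call graphs (-g)
# for 30 seconds (may require root or a relaxed perf_event_paranoid).
perf record -F 99 -a -g -- sleep 30

# Fold the recorded stacks, then render an interactive SVG.
perf script | ./FlameGraph/stackcollapse-perf.pl > out.folded
./FlameGraph/flamegraph.pl out.folded > flamegraph.svg
```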

A great option for profiling Vulkan code is Tracy, which is a free and open source profiler: https://github.com/wolfpld/tracy

It supports profiling multiple Vulkan queues using VK_EXT_debug_utils timestamps.

I think when you look at projects like Tracy, there's no question that immediate mode UIs also work well for non-trivial use cases:

https://github.com/wolfpld/tracy

In the video, Kelley refers to Tracy [1]. I hadn't heard of it before but it looks interesting.

[1] https://github.com/wolfpld/tracy

On the application side, I recommend using an instrumenting profiler that will tell you, down to sub-microsecond resolution, what the code is doing. Tracy is a good choice (https://github.com/wolfpld/tracy), but there are others, e.g. Telemetry.