One of my pathologies as a developer is that I want to do things "properly" at all times. When I see things like

    fn tokenize(expr: String) -> Vec<String> {
      expr
        .replace("(", " ( ")
        .replace(")", " ) ")
        .split_whitespace()
        .map(|x| x.to_string())
        .collect()
    }

my instinctive reaction is: those .replace() calls will allocate unnecessarily, what you really need is a little state machine, maybe use the nom or parsec crate...
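For concreteness, the "little state machine" version might look something like this: a single pass over the input that avoids the intermediate strings the two .replace() calls allocate. This is just a sketch of the idea, not a claim about what the fastest version would be:

```rust
// Single-pass tokenizer: same output as the replace/split version,
// but without building two intermediate copies of the whole input.
fn tokenize(expr: &str) -> Vec<String> {
    let mut tokens = Vec::new();
    let mut current = String::new();
    for c in expr.chars() {
        match c {
            '(' | ')' => {
                // A paren ends any in-progress token and is a token itself.
                if !current.is_empty() {
                    tokens.push(std::mem::take(&mut current));
                }
                tokens.push(c.to_string());
            }
            c if c.is_whitespace() => {
                // Whitespace just terminates the current token.
                if !current.is_empty() {
                    tokens.push(std::mem::take(&mut current));
                }
            }
            _ => current.push(c),
        }
    }
    if !current.is_empty() {
        tokens.push(current);
    }
    tokens
}

fn main() {
    println!("{:?}", tokenize("(+ 1 2)"));
}
```

Which is, of course, exactly the kind of rabbit hole the post is describing.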

Wanting to do things The Right Way is a good instinct to have as an engineer, and it's something that Rust encourages by design. However, I've noticed that I'll often spend a lot of time getting in the weeds trying to optimize or elegant-ize a bit of code which ends up being unnecessary. I'll take a step back and realize that the performance of that code does not matter, or that the implementation was the wrong approach and I need to delete it all and do something else, or just that it wasn't very important and I should've done the easy solution and moved on.

When I'm in a flow state writing code it's hard to step back and evaluate what I'm working on in the context of the bigger picture; I haven't been successful at training myself to do that. I think a better solution would be to deliberately write "first draft" code that's biased toward being quick and easy to write. When the code is done there's a natural pause to test and review it in the context of the big picture.

Does anybody else struggle with this? What have you done to mitigate it?

Over the years I've just learned to recognize that the "right way" is really about weighing trade-offs. Is this hot code that needs maximum efficiency? Is this code that's going to be touched a lot, where readability and maintainability are more important? If the latter, then the "right" way might be a less optimized version, but given the trade-offs it is the right way.

And benchmarks always tell you if there's a problem and exactly where it is, and squash any unneeded discussion.

I once had a new-to-the-group developer pointing out how things could be so much better if we optimized various code paths (I was them at one point in my career). I returned to the next meeting with some benchmarks showing that we spent a total of 100ms in these code paths, in an application that ran for hours.
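The measurement in question doesn't need a fancy harness. A rough sketch of the idea, wrapping the suspect path with std::time::Instant and accumulating the total (the function names here are illustrative, not from the actual codebase):

```rust
use std::time::{Duration, Instant};

// Stand-in for the code path the new developer wanted to optimize.
fn suspect_code_path(n: u64) -> u64 {
    (0..n % 100).sum()
}

fn main() {
    let mut total = Duration::ZERO;
    let mut sink = 0u64;
    for i in 0..1_000 {
        let start = Instant::now();
        sink = sink.wrapping_add(suspect_code_path(i));
        total += start.elapsed();
    }
    // Compare this total against the application's overall runtime.
    println!("total time in suspect path: {:?} (sink: {})", total, sink);
}
```

If that total comes out to 100ms against hours of runtime, the discussion is over.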

"benchmarks always tell you if there's a problem and exactly where it is, and squash any unneeded discussion." Unfortunately, "always" is too strong a word. Cache eviction in one part can cause memory stalls in another part; indirection in the caller can prevent speculation. Type erasure can prevent inlining, resulting in the called function being blamed for a problem in the caller.

Your problem might not even be CPU-bound: it could be contention-related, timing-related, overloaded queues, not pushing back at the right places, I/O bound, or a bottleneck in work that's queued and executed elsewhere... Causal profiling is a technique that is relevant specifically because profiling can miss the forest for the trees: https://github.com/plasma-umass/coz

It's really easy to write a benchmark which measures a different scenario from what your application is doing. A classic example might be benchmarking a hashmap in a tight loop when, in the application, that hashmap is usually accessed cold.
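A toy illustration of that pitfall, assuming the usual caveats about microbenchmarks apply (a real harness like criterion would do warmup and statistics properly; this just shows the two access patterns being conflated):

```rust
use std::collections::HashMap;
use std::time::Instant;

fn main() {
    let map: HashMap<u64, u64> = (0..100_000).map(|i| (i, i * 2)).collect();
    // Large buffer used to push the map's memory out of cache between lookups.
    let mut evict = vec![0u8; 64 * 1024 * 1024];
    let mut sum = 0u64;

    // "Hot" loop: after the first iteration, the relevant buckets stay in cache.
    let start = Instant::now();
    for _ in 0..1_000 {
        sum = sum.wrapping_add(map[&12_345]);
    }
    let hot = start.elapsed();

    // "Cold-ish" loop: trample the cache between lookups, closer to an
    // application that touches the map occasionally amid other work.
    let start = Instant::now();
    for i in 0..1_000u64 {
        evict.iter_mut().step_by(4096).for_each(|b| *b = b.wrapping_add(1));
        sum = sum.wrapping_add(map[&(i % 100_000)]);
    }
    let cold = start.elapsed();

    println!("hot: {:?}, cold-ish: {:?} (sum: {})", hot, cold, sum);
}
```

The "hot" number is the one a naive loop benchmark reports; it can look much better than what the application actually experiences.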

I definitely agree about directing efforts to where you can make an impact and guiding that through measurement, but benchmarks can miss that there's a problem and blame the wrong part of the application.

If the difference is large enough, ms vs hours, you'd have to really screw up methodology to get the wrong result (I've done it almost that badly before).