In software, we mostly care about two metrics: whether the code does what we want and whether it does so efficiently. Often, efficiency isn’t critical. Keep building web services in Python—that’s fine.
But when performance matters—and it eventually does—people sometimes assume they’re facing a Pareto distribution: one bottleneck that, if eliminated, will drastically improve performance. This holds true at the micro level, in isolated benchmarks, but not broadly.
Consider shopping for a TV. You first head to the TV section, then narrow it down to the sizes you need. Next, you optimize for price, seeking the cheapest TV that meets your needs.
Now flip the question: how do you produce an affordable TV in the first place?
As the new CEO of a TV company, you would start by examining the major costs. Transportation, perhaps? A few components can be ruled out quickly as insignificant: taxes, say, or the packaging, which is already cheap.
However, you eventually realize that making affordable TVs requires getting hundreds, if not thousands, of details right.
The same applies to software. High efficiency demands getting hundreds of things right. This is often used as an excuse not to optimize: “Why bother with X? It only helps in specific cases.” That’s fallacious. To win the war, you must win hundreds of battles. Win the battles one by one.
That has been my experience with Node.js core: more than 300 commits, most of them fixing small “cuts”.
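To make the idea of a “cut” concrete, here is a minimal, hypothetical sketch in plain Node.js-style JavaScript; the function names and the workload are invented for illustration and are not taken from any real commit. The slow version allocates a fresh RegExp object inside a hot loop; the fix simply hoists it out. How much that helps depends on the engine and the data, and that is exactly the point: no single fix of this size transforms a program, but hot paths accumulate dozens of them.

```js
// Hypothetical "small cut": a new RegExp object is allocated on every
// iteration of a hot loop.
function countWordsSlow(lines) {
  let total = 0;
  for (const line of lines) {
    total += line.split(new RegExp("\\s+")).filter(Boolean).length;
  }
  return total;
}

// The unglamorous fix: hoist the RegExp out of the loop. (The throwaway
// arrays produced by split()/filter() are left alone to keep the sketch short.)
const WHITESPACE = /\s+/;
function countWordsFast(lines) {
  let total = 0;
  for (const line of lines) {
    total += line.split(WHITESPACE).filter(Boolean).length;
  }
  return total;
}

// Tiny harness so the sketch runs end to end: `node cuts.js`.
const lines = Array.from(
  { length: 100000 },
  (_, i) => `request ${i} handled in ${i % 7} ms`
);
console.time("slow");
console.log("words:", countWordsSlow(lines));
console.timeEnd("slow");
console.time("fast");
console.log("words:", countWordsFast(lines));
console.timeEnd("fast");
```

Measure before and after each change like this; some of these fixes turn out to be noise, and the only way to know which battles were actually won is to benchmark them one by one.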