Another utterly, utterly obvious point is that the optimal verification regime depends on (1) the COST of verification and (2) the VALUE of the thing being verified.
Say you’re building a cryptography library, or the autopilot that lands the helicopter you’re flying in. Do you just vibe it out? Write it in Python and wait to see if you get a runtime error? No, of course not. You sweat every detail by hand, bring in experts to double-check, and bring as much automated tooling as you can to triple-check the work — type systems, unit tests, fuzzers, detail-oriented coworkers to do code review, etc.
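To make the "triple-check" layering concrete, here's a minimal sketch — the `clamp` function is a hypothetical stand-in, not from the text — combining type annotations, hand-written unit tests for known edge cases, and a crude random fuzz loop checking an invariant:

```python
import random

def clamp(value: float, low: float, high: float) -> float:
    """Clamp value into [low, high] -- a stand-in for safety-critical logic."""
    return max(low, min(high, value))

def test_clamp() -> None:
    # Unit tests: pin down the known edge cases by hand.
    assert clamp(5.0, 0.0, 10.0) == 5.0    # inside the range
    assert clamp(-1.0, 0.0, 10.0) == 0.0   # below the range
    assert clamp(99.0, 0.0, 10.0) == 10.0  # above the range

def fuzz_clamp(trials: int = 1000) -> None:
    # A crude fuzzer: random inputs, check the invariant always holds.
    for _ in range(trials):
        low, high = sorted(random.uniform(-1e6, 1e6) for _ in range(2))
        value = random.uniform(-1e6, 1e6)
        result = clamp(value, low, high)
        assert low <= result <= high

test_clamp()
fuzz_clamp()
```

Each layer catches a different failure mode: the type annotations catch category errors, the unit tests catch off-by-one boundary mistakes, and the fuzz loop catches surprises neither human anticipated.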
That’s because those artifacts are expensive to verify and expensive if they fail.
But if you need code to generate a matplotlib chart? Just generate it! Dare I say, vibecode it! It’s cheap to verify because you can visually inspect the chart. And it’s cheap to fail, because if the chart comes out wrong it literally costs you 30 seconds and you can just generate a better one. So just vibe it out.
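The whole workflow fits in a dozen lines — the data and labels here are hypothetical placeholders. Generate, glance at the image, and regenerate if it looks wrong:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; drop this line for interactive use
import matplotlib.pyplot as plt

# Hypothetical throwaway data -- whatever series you want to eyeball.
xs = list(range(10))
ys = [x ** 2 for x in xs]

fig, ax = plt.subplots()
ax.plot(xs, ys, marker="o")
ax.set_xlabel("x")
ax.set_ylabel("x squared")
ax.set_title("Throwaway chart")
fig.savefig("chart.png")  # open it, look at it, done
```

Verification here is one glance at `chart.png`; if it's wrong, the fix is another 30-second generation, not a post-mortem.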
Yes you still need to use your brain! But use it for what it’s good for, and where it is necessary. And I suggest the first good use of our brains should be a little honest meta self-reflection about what those cases are.