Gen AI is what happens when you ship something about 8 years too early and hope it doesn't catch up with you.

Oct 26, 2025 · 10:52 AM UTC

Replying to @tomfgoodwin
LLM Bro: “but you didn’t prompt it right. I get perfect results.”
I get perfect results in a field I don’t know enough about to evaluate well ;)
Replying to @tomfgoodwin
Disagree. This mindset is how startups fail to launch. You ship an MVP. First movers see value and then tell you what to fix first.
History has shown that the very first movers rarely win. They are the snowplough that clears the road
Replying to @tomfgoodwin
Regulation isn’t the way. Transparency is the way.
Replying to @tomfgoodwin
Honestly this is my problem with the AI crowd. The tech doesn't mean shit if it's not reliable. Why're we expected to believe in the potential of the tech? Potential means shit in business. I can't go tell my boss that I have the potential to make him a billion dollars. I'd need to back it up w proof and an actual roadmap
Replying to @tomfgoodwin
boy, there ain’t any amount of time to fully cook rotten meat. it will always taste terrible and make you sick
Replying to @tomfgoodwin
This is a problem I refer to as GenAI being a `universal tool in search of a particular use case`. They shipped LLMs - that's it - and most people vastly expanded the possible use cases, throwing out the most essential basics of software/product conduct. It's a huge miss.
Replying to @tomfgoodwin
LLMs feel a lot like semantic-search with some editing smarts on the output. Somehow folks are trying to stretch that in all sorts of odd directions. And stick an AI brand on it.
Replying to @tomfgoodwin
Yeah, when companies like Microsoft or OpenAI take legal responsibility for the answers of their LLMs, then we are talking. Until then it's just a toy.
Replying to @tomfgoodwin
Sounds like the early internet on steroids, which means it's spot on.
Replying to @tomfgoodwin
the tech clearly has potential, but it’s like watching a plane take off mid-construction as we’re hoping the engineers can finish building it mid-air
Replying to @tomfgoodwin
Why would you put an AI in the middle of an algorithm requiring steady quality and reproducibility? That’s just beyond ridiculous.
Replying to @tomfgoodwin
I completely agree that LLMs, all of them, hallucinate way too much. You can't rely on them, and there are a lot of technologies now being rushed out the door to be sold ASAP, many of which don't work and are garbage. But I also believe this is only the beginning. If accountability is practiced, with big companies holding big tech accountable for their shitty garbage, and suing them if need be, then these tools will improve and be perfected over time. But if we leave them in this state and just pile up new technologies day by day, we won't have anything that works, only a pile of beautiful but broken toys.
Replying to @tomfgoodwin
Small minds. I’m betting the opposite way.
Replying to @tomfgoodwin
You build the infrastructure first, then call on LLMs for specific queries, with the prompt customised for the user based on the task and their preferences, using multiple LLMs to argue it out. I see this as an ability to ‘polish’, only. You DO NOT use LLMs to navigate or become the infrastructure itself.

When we say ‘AI’ we are talking about LLMs for the most part. There’s still a boatload of tech waiting to be used that technically isn’t even ‘AI’, but throw in minimal use of an LLM, put ‘AI’ into the branding, and you have something that seems like ‘AI’ when technically it’s not; it’s just clever automated tech. Just like how LLMs aren’t really true AI to begin with. As such, for many builders out there, I just don’t see this post as relevant to them if they understand some of these fundamental truths.

If you’re integrating GPT into nearly every process of your business, then yeah, count on multiple problems. You can automate and roboticise everything without ‘AI’; you just polish a few things using LLMs so it doesn’t feel like a soulless entity. Amazon as a business is peak robot. They nailed it before ‘AI’, right? And guess what, most businesses are still leagues behind the genius of Amazon Operations, and they think they can jump to ‘AI’ with high accuracy? When inaccuracies within multiple processes stack? You have to learn to walk before you can run.
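As a rough sketch of the "LLM as a polish layer, not the infrastructure" pattern this reply describes (assumptions: the OpenAI Python SDK is installed with an API key configured; the model name and the stubbed business-logic function are illustrative, not anything from the thread):

```python
# Sketch: deterministic infrastructure does the work, an LLM only rephrases.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def compute_order_summary(order_id: str) -> dict:
    # Deterministic business logic: plain code, testable and reproducible.
    # Stubbed here; in a real system this would query your own services.
    return {"order_id": order_id, "status": "shipped", "eta_days": 2}

def polish_for_customer(facts: dict) -> str:
    # The LLM is used only to phrase facts already computed above,
    # never to compute, decide, or navigate anything itself.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0,
        messages=[
            {"role": "system",
             "content": "Rewrite the given facts as one friendly sentence. "
                        "Do not add, change, or infer any facts."},
            {"role": "user", "content": str(facts)},
        ],
    )
    return response.choices[0].message.content

print(polish_for_customer(compute_order_summary("A-1042")))
```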
It wasn’t too early. It was right on time. Just not for what you think.
Replying to @tomfgoodwin
i wonder when he'll learn that you can turn temp to 0 and, in fact, get exactly the same answer every time
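For context, "temp" here is the sampling temperature parameter. A minimal sketch of the claim using the OpenAI Python SDK (model name illustrative; note that even at temperature 0, hosted APIs don't strictly guarantee identical outputs across runs, so this is "mostly stable" rather than exact):

```python
# Sketch: same prompt twice at temperature=0 (greedy-ish decoding).
# Caveat: hosted APIs can still vary across runs (batching, hardware,
# silent model updates), so temperature=0 reduces, not eliminates, variance.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        temperature=0,        # always prefer the highest-probability token
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

a = ask("Name the capital of France in one word.")
b = ask("Name the capital of France in one word.")
print(a, b, a == b)  # usually True at temperature=0, but not guaranteed
```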
Replying to @tomfgoodwin
The time and money that go into "guardrailing," "safety layers," and "compliance" dwarf just paying a human to do the work correctly. This is the main issue.
Replying to @tomfgoodwin
The cost per LLM invocation also makes these systems very expensive to run
Replying to @tomfgoodwin
Just because you’re not skilled enough to use a tool properly doesn’t mean the tool’s useless lol
Replying to @tomfgoodwin
Which is why we need regulation and compliance! cc @compliantvc
Replying to @tomfgoodwin
Just because a single guy does not know how to properly prompt an LLM does not mean the technology is here to fail.
Replying to @tomfgoodwin
"... AI-augmented workflows as "vibe coding" misrepresents the skill and rigor involved. .. it creates the false and risky impression that one can simply prompt their way to a viable product without understanding the underlying engineering fundamentals."
Vibe-coding is not the same as AI-assisted engineering. A recent Reddit post described how a FAANG team uses AI, and it sparked an important conversation about semantics: "vibe coding" versus professional "AI-assisted engineering". While the post was framed as an example of the former, the process it detailed, complete with technical design documents, stringent code reviews, and test-driven development, is a clear example of the latter imo. This distinction is critical because conflating the two risks both devaluing the discipline of engineering and giving newcomers a dangerously incomplete picture of what it takes to build robust, production-ready software.

As a reminder: "vibe coding" is about fully giving in to the creative flow with an AI (high-level prompting), essentially forgetting the code exists. It involves accepting AI suggestions without deep review and focusing on rapid, iterative experimentation, making it ideal for prototypes, MVPs, learning, and what Karpathy calls "throwaway weekend projects." This approach is a powerful way for developers to build intuition and for beginners to flatten the steep learning curve of programming. It prioritizes speed and exploration over the correctness and maintainability required for professional applications. There is a spectrum between vibe coding, doing it with a little more planning (spec-driven development, including enough context, etc.), and full AI-assisted engineering across the software development lifecycle.

In stark contrast, the process described in the Reddit post is a methodical integration of AI into a mature software development lifecycle. This is "AI-assisted engineering," where AI acts as a powerful collaborator, not a replacement for engineering principles. In this model, developers use AI as a "force multiplier" to handle tasks like generating boilerplate code or writing initial test cases, but always within a structured framework. Crucially, the big difference is that the human engineer remains firmly in control: responsible for the architecture, reviewing and understanding every line of AI-generated code, and ensuring the final product is secure, scalable, and maintainable. The 30% increase in development speed mentioned in the post is a result of augmenting a solid process, not abandoning it.

For engineers, labeling disciplined, AI-augmented workflows as "vibe coding" misrepresents the skill and rigor involved. For those new to the field, it creates the false and risky impression that one can simply prompt their way to a viable product without understanding the underlying code or engineering fundamentals. If you're looking to do this right, start with a solid design, subject everything to rigorous human review, and treat AI as an incredibly powerful tool in your engineering toolkit, not as a magic wand that replaces the craft itself.
Replying to @tomfgoodwin
What most teams are calling “AI products” right now are really just ungoverned reasoning sandboxes. There’s no consistency, no auditability, no shared intent. I think this is where what we do at Null Lens helps. It translates messy human requests into deterministic Motive / Scope / Priority contracts before inference. Making intent standardised and governable upstream.
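Null Lens's actual contract schema isn't shown in this thread, but as a purely hypothetical sketch of what a pre-inference Motive / Scope / Priority contract could look like (every name below is invented for illustration):

```python
# Hypothetical sketch of a pre-inference "intent contract", loosely
# following the Motive / Scope / Priority framing above. Every field
# name is invented for illustration; this is not Null Lens's schema.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class IntentContract:
    motive: str                                      # why the request exists
    scope: list[str] = field(default_factory=list)   # what the model may touch
    priority: int = 3                                # 1 = critical ... 5 = minor

    def validate(self) -> None:
        # Refuse to run inference until intent is explicit and bounded.
        if not self.motive:
            raise ValueError("motive must be stated before inference")
        if not self.scope:
            raise ValueError("scope must be bounded before inference")

# A messy request like "make the report better, quickly!!" might normalise to:
contract = IntentContract(
    motive="improve readability of the Q3 report summary",
    scope=["summary section only", "no numeric changes"],
    priority=1,
)
contract.validate()  # the contract, not the raw prompt, is what gets audited
```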
Replying to @tomfgoodwin
ChatGPT and others definitely seem like a large beta program they've convinced users to pay for.
Replying to @tomfgoodwin
It's true that systems for getting enterprise level performance out of AI are nowhere near as straightforward as we'd expect from using ChatGPT.
Replying to @tomfgoodwin
You cannot remove the human... any idiot who has used AI knows that... but now you add one intelligent human and you can get rid of entire teams.
Replying to @tomfgoodwin
When trust in AI breaks, why do we never ask how we used it? What if it’s not the AI, but the structure around it that failed?
Replying to @tomfgoodwin
It's great for low priority, time-consuming tasks like creating documentation or applying a template to copy-paste. Not for creating an entire website.
Replying to @tomfgoodwin
Move fast and break things, indeed.
Replying to @tomfgoodwin
Tom, I would love to know your unfiltered opinion on that Karpathy interview by Dwarkesh. Did it leave you feeling optimistic for the growth, or death, of AI?
Replying to @tomfgoodwin
I can't find the OP's X account?
Replying to @tomfgoodwin
wow!