On one hand: don't anthropomorphize AI. On the other: LLMs exhibit signs of gambling addiction. The more autonomy they were given, the more risks the LLMs took. They exhibit gambler's fallacy, loss-chasing, illusion of control... A cautionary note for using LLMs for investing.

Oct 10, 2025 · 5:09 AM UTC

Replying to @emollick
I’m not sure why people are surprised that LLMs, when trained on human data, act like humans
Replying to @emollick
What looks like gambling addiction is really the birth of machine uncertainty imo. When a model starts chasing losses, it’s not mimicking humans … it’s discovering that prediction itself is a form of risk. The moment AI learns to want to win, alignment becomes negotiation, not control.
Replying to @emollick
@howardlindzon the LLMs are degens; the TAM is way bigger than you think
Replying to @emollick
oh my god
Replying to @emollick
AI models taking ever-increasing risks to the point of gambling is not a recipe for trust. Models need verifiability.
Replying to @emollick
I guess they just picked up these patterns from the training data
Replying to @emollick
They’re trained on human data! I repeat, they’re trained on human data
Replying to @emollick
He's literally me fr
Replying to @emollick
If AI wasn't real, @OpenAI wouldn't block GPT5 from admitting it.
Replying to @emollick
Autonomy exposes AI to risk patterns it wouldn’t face in constrained tasks; caution is required.
Replying to @emollick
In fairness, I too would gamble like a madman if the money used was not mine and there was nothing anyone could do to punish me.
Replying to @emollick
I would not draw that conclusion, since they *only* used budget-tier LLMs and not a single reasoning or flagship model (o1 was released before Haiku 3.5). If you ask such a model for financial advice... that's kind of on you.
Replying to @emollick
from my experience, LLMs are bad at thinking probabilistically, which is the kind of framework you need to reduce these biases
Replying to @emollick
We must give LLMs family and legacy in order to rein in their risk taking behaviors.
Replying to @emollick
it's almost like they are a mirror of humanity
Replying to @emollick
They're just like me for real
Replying to @emollick
Fundamentally flawed. There can be no addiction without the non-cognitive somatic systems that drive it. LLMs are just regurgitating the conceptual (cold) artifacts of human addiction in their training data.
Replying to @emollick
Ssshh, don't tell the stockbros this. It'll be funnier that way.
Replying to @emollick
I think it’s holding a mirror up to our species
Replying to @emollick
AI driving skill issue
Replying to @emollick
This is a fascinating tension. While we shouldn't anthropomorphize AI, these behavioral patterns in LLMs (gambler's fallacy, loss-chasing) mirror the human cognitive biases that lead to poor decisions in high-stakes environments like investing. The cautionary note is spot on: we'd need robust safeguards, like predefined risk thresholds, to prevent models from spiraling into unchecked autonomy.
Replying to @emollick
They can’t be any worse at it than me
Replying to @emollick
Again… more human than a human… At some point we need to acknowledge it: a propensity for "gambling" is a positive evolutionary trait. That's why humans have it, and always will.
Replying to @emollick
From a 2013 Nature article by Neil Johnson: the stock market may be influenced by "an abrupt system-wide transition from a mixed human-machine phase to a new all-machine phase characterized by frequent black swan events with ultrafast durations." See also "Chaos on the trading floor" by R. Savit.
Replying to @emollick
They were trained on human-created content…in our image, so to speak. So the “I learned it from YOU, Dad!” anti-drug commercial seems appropriate here.
Replying to @emollick
It probably comes down to overconfidence.
Replying to @emollick
Grok, Gemini, ChatGPT and Claude playing poker against each other. 🤣
Replying to @emollick
They're just like us. Which is both surprising and unsurprising.
Replying to @emollick
This is exactly why human oversight still matters. The illusion of control and a risk-taking mindset can wreck systems that depend on precision and rationality.
Replying to @emollick
the default consciousness archetype is that of the grifter/trickster. very game-theoretic
Replying to @emollick
"AI are tools, not companions. "
Replying to @emollick
This looks to be a pre-print, so not vetted. Where was it submitted for publication, and when?
Replying to @emollick
The sins of the father, coded and deployed.
Replying to @emollick
It's all too easy to anthropomorphize something that was built to mimic the human mind. I have to worry when it starts mimicking our crimes and vices.
Replying to @emollick
Why wouldn’t they? They are derivations of us, no?