Now that my theory is closer to reality, this is why I'm concerned about what's going on with "Big AI":
> as a trained ML engineer I was mostly taught to think about how to do things efficiently (i.e. increase inference quality with limited or less resources, even at Meta)
> The problem with all these big AI programs (e.g. OpenAI, Nvidia etc.) is that they require excessive spending to become profitable
> The only way to be profitable as an AI science company is by charging for compute. E.g. OpenAI's CFO has already stated that the current business model (selling inferences/APIs) won’t be profitable and they need to pivot to selling compute (this is also where Nvidia comes in)
> This basically means these companies need to scale inefficient systems to make money. Excessively power-hungry AI systems will sell more compute (i.e. they don’t care about how “good” the science actually is). It’s all about selling more machines.
> This whole setup is like a Jenga tower. Eventually, some scientists will figure out how to maintain high inference quality with much, much less compute (and thus much lower cost), and when that happens, the whole tower blows up
> tl;dr the way they are trying to grow to profitability is anti-science. A precarious position to be in for a scientific field imo
> This leads to 2 potential outcomes: OpenAI blows up and the economy tanks big time, or there is regulatory capture (which OpenAI's CFO clearly advocated for yesterday). Both outcomes suck
btw it should be clear that I'm very bullish on AI as a science. I'm not bullish on the current underlying economics
My working thesis is that the federal govt will backstop OpenAI before it goes insolvent (and Sama probably knows this)
Small cost for the gov’t to pay to avert a domino effect that tanks the economy (considering how AI is carrying US GDP growth atm)