Current holdings: $SOXL, $TQQQ, $SPXL, $FNGU. Leverage is my middle name.

Atlanta
Joined November 2015
Nathan Haverstock retweeted
🚨🇺🇸 xAI: "Today, on site, we have over 2,000 workers building the world's most powerful computer. Thank you to the amazing, hard-working men and women of Memphis for all you do!" Source: xAI
Replying to @BrianRoemmele
Slightly different versions of the Tesla AI5 chip will be made at TSMC and Samsung simply because they translate designs to physical form differently, but the goal is that our AI software works identically. We will have samples and maybe a small number of units in 2026, but high volume production is only possible in 2027. AI6 will use the same fabs, but achieve roughly 2X performance. Aiming for a fast follow to AI5, so hopefully mid 2028 for volume production of AI6. AI7 will need different fabs, as it is more adventurous.
Nathan Haverstock retweeted
And there is a clear path to a doubling of performance on all metrics for AI6 within 10 to 12 months of AI5 shipping
Elon Musk: At Tesla, we basically had two different chip programs: Dojo on the training side, and then what we call AI4, which is our inference chip. AI4 is what's currently shipping in all vehicles, and we're finalizing the design of AI5, which will be an immense jump from AI4. By some metrics, the improvement in AI5 will be 40 times better than AI4. Not 40 percent, 40 times.

This is because we work so closely, at a very fine-grained level, on the AI software and the AI hardware, so we know exactly where the limiting factors are. Effectively, the AI hardware and software teams are co-designing the chip. The worst limitation on AI4 is running the softmax operation: we currently have to run softmax in around 40 steps in emulation mode, whereas it will be done in a few steps natively in AI5. AI5 will also be able to easily handle mixed-precision models; you won't have to manage it, it will handle mixed precision dynamically. There's a bunch of technical stuff that AI5 will do a lot better.

In terms of nominal raw compute, AI5 has eight times more compute, about nine times more memory, and roughly five times more memory bandwidth. But because we're addressing some core limitations in AI4, you multiply that 8x compute improvement by another 5x improvement from optimization at a very fine-grained silicon level of things that are currently suboptimal in AI4, and that's where you get the 40x improvement.
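For context: the softmax operation named above as AI4's bottleneck is the standard normalization softmax(x)_i = exp(x_i) / sum_j exp(x_j). The sketch below is a generic NumPy illustration of that function (not Tesla's kernel, just the textbook definition), followed by the rough 8 x 5 = 40 arithmetic stated in the quote.

import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    # Numerically stable softmax: exp(x_i - max(x)) / sum_j exp(x_j - max(x)).
    shifted = x - np.max(x)
    exps = np.exp(shifted)
    return exps / np.sum(exps)

print(softmax(np.array([2.0, 1.0, 0.1])))   # roughly [0.659, 0.242, 0.099]

# Rough arithmetic behind the quoted 40x figure, taken directly from the quote:
raw_compute_gain = 8    # nominal compute, AI5 vs AI4
silicon_opt_gain = 5    # fine-grained fixes to AI4 bottlenecks, e.g. native softmax
print(raw_compute_gain * silicon_opt_gain)  # 40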
$SOFI price tightening again
I’m basically a free speech absolutist when it comes to ao3. As long as it’s properly tagged I’m not saying a word about it
tell me your unpopular fandom opinions
Smart traders accumulating under key levels $MYNZ #about $BABA $SOL $WYNN
Nathan Haverstock retweeted
Float distribution phase complete, accumulation done. $NXXT $GEF $DDS $ATAI $PARA $GPUS
Institutional conviction rising each day $NXXT #AfRam $LCID $HIVE $AMD $OLN
$GOOGL ready to send bears packing!!