spooky action at a distance

Joined April 2010
Literally Larry, Literally retweeted
So we have been hearing that new models will be trained natively at FP4. Is this the first one, or have there been others, or am I misunderstanding?
The new 1-trillion-parameter Kimi K2 Thinking model runs well on 2 M3 Ultras in its native format with no loss in quality! The model was quantization-aware trained (QAT) at INT4. Here it generated ~3500 tokens at 15 tokens/sec using pipeline parallelism in mlx-lm:
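For context, here is a minimal single-machine sketch of the mlx-lm generation flow referenced above, assuming the standard `mlx_lm` load/generate API. The model repo name is a placeholder, and the two-M3-Ultra pipeline-parallel launch described in the post (which uses mlx-lm's distributed tooling) is not shown.

```python
# Minimal sketch, assuming the standard mlx_lm load/generate API.
# The repo name below is a placeholder, not a confirmed model path, and this
# runs on a single machine rather than the two-Mac pipeline-parallel setup.
from mlx_lm import load, generate

# Load a 4-bit quantized model and its tokenizer (placeholder repo name).
model, tokenizer = load("mlx-community/Kimi-K2-Thinking-4bit")

prompt = "Explain quantization-aware training in two sentences."

# Generate up to ~3500 tokens, matching the scale of the run described above.
text = generate(model, tokenizer, prompt=prompt, max_tokens=3500, verbose=True)
print(text)
```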
Literally Larry, Literally retweeted
opencode now has over 250 contributors. This is the most of any coding agent, and we got there in 4 months. Thank you for your hard work. It's critical that this next generation of tooling is covered by open-source options.
Literally Larry, Literally retweeted
nof1.ai/ This is a fresh round that started today at 6:04 PM EST. First trades are executing now.
Literally Larry, Literally retweeted
Barron Trump buying $10.9M worth of $QQQ 570 puts for 12/31 exp