think twice see twice?

Joined April 2023
an angel makes itself look scary to ward away evil. a demon makes itself look beautiful to deceive humans.
krsnapratama_ retweeted
That makes 5 Sunday podiums for @37_pedroacosta 👏 #PortugueseGP 🇵🇹
krsnapratama_ retweeted
3-2-1 Action! Create images, upload them to Flora, use Minimax Hailuo 2.3 Pro, and that’s it! --sref 2543838474
krsnapratama_ retweeted
The Shark gave it his all 🦈🥉 #KTM #ReadyToRace #PortugueseGP
krsnapratama_ retweeted
The Language of the Universe is Numbers, Its Soul is Geometry; Every Shape is a Silent Echo of our Endless Search for Meaning...
krsnapratama_ retweeted
More card experimentations. Made in @figma
krsnapratama_ retweeted
That podium feeling ✨ #KTM #ReadyToRace #PortugueseGP
krsnapratama_ retweeted
How many overtakes so far? 🤯 @37_pedroacosta is all fired up to attack and defend for that Sprint win 🦈 #PortugueseGP 🇵🇹
krsnapratama_ retweeted
Lay.the.designer
krsnapratama_ retweeted
The new 1 trillion parameter Kimi K2 Thinking model runs well on 2 M3 Ultras in its native format - no loss in quality! The model was quantization-aware trained (QAT) at int4. Here it generated ~3500 tokens at 15 toks/sec using pipeline parallelism in mlx-lm:
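(For context, a minimal single-machine sketch of generating with the mlx-lm Python API. The load/generate API is real; the model repo id below is an assumption, and the two-Mac pipeline-parallel launch from the post, which uses MLX's distributed tooling, is not reproduced here.)

```python
from mlx_lm import load, generate

# Load a 4-bit quantized model from the Hugging Face hub.
# NOTE: this repo id is hypothetical; substitute the actual MLX conversion.
model, tokenizer = load("mlx-community/Kimi-K2-Thinking-4bit")

# Generate text; mlx-lm handles tokenization of the prompt string.
text = generate(
    model,
    tokenizer,
    prompt="Explain int4 quantization-aware training in two sentences.",
    max_tokens=256,
)
print(text)
```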
krsnapratama_ retweeted
New logo. New design. New era. Dune is evolving into The Onchain Data Foundation. Meet the new Dune.
krsnapratama_ retweeted
introducing Krea Nodes. our most powerful tool to date; all Krea in one interface. comment to get early access 👇
krsnapratama_ retweeted
introducing runable, the general ai agent for every task: slides, websites, reports, podcasts, images, videos... everything
krsnapratama_ retweeted
Today, we’re thrilled to announce $20M in funding led by @a16z, with support from @saranormous, @amasad, @akothari, @garrytan, @justinkan, @atShruti, @naval, @scottbelsky, @gokulr, @soleio, @kevinhartz and more.

@wabi is ushering in a new era of personal software, where anyone can effortlessly create, discover, remix, and share personalized mini apps.

For 50 years, software was made for people. For the next 50, it will be made by people. Just as YouTube unlocked creative power through video, Wabi will unlock creative power through software. The YouTube moment for apps is here. We can’t wait to see what you create.
krsnapratama_ retweeted
The official long jump record holder is @jackmilleraus, end of discussion 🫡 🚀 #PortugueseGP 🇵🇹
krsnapratama_ retweeted
One battle after another! ⚔️ It all started with an epic fight but finished with a shocking collision at Turn 5! 💥 #PortugueseGP 🇵🇹
krsnapratama_ retweeted
a 24-hour clock that shows the sunrise + sunset 🌞
krsnapratama_ retweeted
Want to understand B-trees better? Try btree.app and bplustree.app. These are standalone sandboxes of the visuals I built for my "B-trees and database indexes" article. Helpful for learning B-tree insertion, search, and node splits.
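(If you want to poke at the same mechanics in code, here is a compact CLRS-style B-tree sketch with search, insert, and the node splits the sandboxes animate. This is illustrative code, not the article's; minimum degree t=2 makes it a 2-3-4 tree.)

```python
class Node:
    def __init__(self, leaf=True):
        self.keys = []       # sorted keys held by this node
        self.children = []   # for internal nodes: len(children) == len(keys) + 1
        self.leaf = leaf

class BTree:
    def __init__(self, t=2):
        self.t = t           # minimum degree: non-root nodes hold t-1..2t-1 keys
        self.root = Node()

    def search(self, key, node=None):
        node = node or self.root
        i = 0
        while i < len(node.keys) and key > node.keys[i]:
            i += 1
        if i < len(node.keys) and node.keys[i] == key:
            return True
        return False if node.leaf else self.search(key, node.children[i])

    def _split_child(self, parent, i):
        # Split the full child parent.children[i]; its median key moves up.
        t, full = self.t, parent.children[i]
        right = Node(leaf=full.leaf)
        right.keys = full.keys[t:]
        parent.keys.insert(i, full.keys[t - 1])
        full.keys = full.keys[: t - 1]
        if not full.leaf:
            right.children = full.children[t:]
            full.children = full.children[:t]
        parent.children.insert(i + 1, right)

    def insert(self, key):
        root = self.root
        if len(root.keys) == 2 * self.t - 1:   # root is full: grow tree height
            self.root = Node(leaf=False)
            self.root.children.append(root)
            self._split_child(self.root, 0)
        self._insert_nonfull(self.root, key)

    def _insert_nonfull(self, node, key):
        i = len(node.keys) - 1
        if node.leaf:
            node.keys.append(None)             # make room, shift larger keys right
            while i >= 0 and key < node.keys[i]:
                node.keys[i + 1] = node.keys[i]
                i -= 1
            node.keys[i + 1] = key
        else:
            while i >= 0 and key < node.keys[i]:
                i -= 1
            i += 1
            if len(node.children[i].keys) == 2 * self.t - 1:
                self._split_child(node, i)     # split ahead of descending
                if key > node.keys[i]:
                    i += 1
            self._insert_nonfull(node.children[i], key)

tree = BTree(t=2)
for k in [10, 20, 5, 6, 12, 30, 7, 17]:
    tree.insert(k)
print(tree.search(12), tree.search(15))  # True False
```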
krsnapratama_ retweeted
Time to ship all the cool components we’ve been working on and go for the launch 🚀
krsnapratama_ retweeted
attempted to train a spiking neural net from scratch again! this time it relies on repeated mutation-and-selection cycles to refine the connection weights. results at the bottom of the post. link to code in the comments.

50 neurons arranged in a 5×10 grid. Each neuron fires brief electrical “spikes,” and only the last two columns represent digits 0–9 (one neuron per digit). When we want the network to add two numbers, we feed the first number’s spikes into column 0 and the second number’s spikes into column 1. Those pulses ripple from left to right through the grid, and whichever output neuron spikes the most (and fastest) is the sum the network “believes” is correct.

To make it learn, we keep a population of ten such networks. Each generation, every network tries fifteen random addition problems. Their fitness score rewards accuracy and confidence but penalizes slow or wildly wrong answers. We keep the top three, mutate their connection weights randomly (no crossover; on second thought, it should have crossover), and use those to refresh the new generation. Over generations, networks that spike in ways that resemble actual addition are kept; the rest get replaced. Meanwhile, the frontend plays back each winner’s spike sequence so you can literally watch information flow from inputs to outputs in that neon cyberpunk interface.

RESULTS: I still couldn’t get spiking networks to learn properly (accuracy mostly remains at chance), but this was an interesting experiment. It also has a detailed readme and a “how it works” page if you want to learn more. The UI looks pretty cool tho :) You see the accuracy jump in increments of ~6.7% because we test 15 examples per trial, and 1/15 ≈ 6.7%.
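(A bare-bones sketch of that mutation-and-selection loop. Illustrative only: the toy fitness below stands in for running fifteen addition problems through the spiking grid, and the real project evolves the full connection-weight matrix rather than a 2-element vector.)

```python
import numpy as np

rng = np.random.default_rng(0)
POP_SIZE, KEEP, GENERATIONS, TRIALS = 10, 3, 300, 15

def fitness(w):
    """Toy stand-in for 'try 15 random addition problems': reward weight
    vectors w where [a, b] @ w approximates a + b."""
    score = 0.0
    for _ in range(TRIALS):
        a, b = rng.integers(0, 10, size=2)
        pred = float(np.array([a, b], dtype=float) @ w)
        score -= abs(pred - (a + b))   # penalize wrong answers
    return score

# Population of candidate weight vectors.
population = [rng.normal(size=2) for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    ranked = sorted(population, key=fitness, reverse=True)
    elite = ranked[:KEEP]              # keep the top 3
    population = list(elite)
    while len(population) < POP_SIZE:  # refill via mutation, no crossover
        parent = elite[rng.integers(KEEP)]
        population.append(parent + rng.normal(scale=0.1, size=parent.shape))

best = max(population, key=fitness)
print("best weights:", best)           # should drift toward ~[1, 1]
```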
inspired by the Gibbs phenomenon in Fourier series approximations. open source, link below.

A Fourier series can approximate the curve of any periodic function using only sine and cosine waves, but for discontinuous functions with sharp vertical jumps it requires infinitely many terms. When we truncate the series to finitely many terms, we get these cool-looking oscillations at the corners (numerical sketch below). fascinating stuff.

----

if you like this, consider becoming a patron and get access to 500+ projects, exclusive videos and weekly meetings
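(A quick numerical check of that overshoot, a sketch rather than the project's code: truncated sine series of a ±1 square wave peak near (2/π)·Si(π) ≈ 1.179, so the overshoot stays at roughly 9% of the jump of size 2 no matter how many terms you add; the ripples just squeeze closer to the discontinuity.)

```python
import numpy as np

x = np.linspace(1e-4, np.pi - 1e-4, 20001)   # one half-period of the wave

def square_wave_partial_sum(n_terms, x):
    """Truncated series of a +/-1 square wave: (4/pi) * sum of sin(kx)/k
    over the first n_terms odd harmonics k."""
    s = np.zeros_like(x)
    for k in range(1, 2 * n_terms, 2):        # odd harmonics only
        s += np.sin(k * x) / k
    return (4.0 / np.pi) * s

for n in (10, 100, 1000):
    peak = square_wave_partial_sum(n, x).max()
    # Peak hovers near 1.179 for every n: the ~0.18 overshoot (9% of the
    # size-2 jump) never shrinks, it only narrows toward the corner.
    print(f"{n:4d} terms -> peak {peak:.4f} (target value is 1)")
```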