🚨 Image Edit Leaderboard Update!
Reve Edit Fast by @reve is now publicly released…and has broken into the Top 5!
The Arena community is impressed. Not only is it faster than Reve Edit, but it's 5× more cost-efficient.
Congrats to the @reve team on this release! 👏
reve.com has released reve-edit-fast and reve-remix-fast. Not only are they much faster than the previous models, but they also cost less!
api.reve.com/console/cd5a515…
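For context, here is a minimal sketch of what calling an image-edit endpoint like this could look like from Python. The endpoint path, field names, and auth scheme below are assumptions for illustration only, not Reve's documented API; the console link above is the real reference.

```python
# Hypothetical sketch of calling an image-edit API such as reve-edit-fast.
# Endpoint path, parameter names, and response shape are assumptions,
# not Reve's documented API; see api.reve.com for the actual docs.
import base64
import os

import requests

API_KEY = os.environ["REVE_API_KEY"]       # assumed bearer-token auth
ENDPOINT = "https://api.reve.com/v1/edit"  # hypothetical endpoint path

with open("input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    ENDPOINT,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "reve-edit-fast",   # model name from the announcement
        "image": image_b64,          # source image to edit
        "prompt": "make the sky a deep sunset orange",
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # assumed JSON payload containing the edited image
```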
City policy outcomes, long term:
Free buses -- improves access, reduces congestion, city gets better!
City-run daycare -- improves access, more available time, city gets better!
Rent control -- rental supply dries up, only condos are built, city gets worse!
Really happy to work with the folks at FAL! They bend over backwards to be helpful.
You can try our new *fast* edit models through their API right now!
🚨 Reve Fast Edit & Remix dropped 𝗲𝘅𝗰𝗹𝘂𝘀𝗶𝘃𝗲𝗹𝘆 on fal!
🎨 Realistic edits that maintain visual consistency
🖼️ Combine up to 4 reference images to build the perfect scene
💰️ Just $0.01 per image
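A minimal sketch of what calling one of these models through fal's Python client might look like. The `fal_client.subscribe` call and the FAL_KEY environment variable are fal's standard usage, but the model slug and argument names below are assumptions; check the model page on fal for the exact schema.

```python
# Sketch of calling a Reve edit model via fal's Python client.
# Requires `pip install fal-client` and the FAL_KEY environment variable.
# The model slug and argument names are assumptions for illustration only.
import fal_client

result = fal_client.subscribe(
    "fal-ai/reve-edit-fast",  # hypothetical model slug
    arguments={
        "prompt": "replace the background with a rainy Tokyo street",
        "image_urls": [  # the announcement allows up to 4 reference images
            "https://example.com/ref1.png",
            "https://example.com/ref2.png",
        ],
    },
)
print(result)  # returned payload, typically including the generated image URL(s)
```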
“What is reality?”
- Every Human Ever
Inspired by @reve’s mantra of reimagining reality, we decided to try answering it in our newest short film.
(Visuals generated with Reve, obviously)
Video in Reve is here! 🎞️
Create beautiful images in Reve and seamlessly bring them to life with video, powered by Veo 3.1.
You’ll unlock a new dimension of creative expression and stay in flow.
3/ Image Reference
Combine prompts with image references for highly specific results. To create your video, generate a new reference image, upload one, or choose from previously attached images, then simply describe what you want to see.
Reve brings the elements together seamlessly.
I'm trying to figure out how they get a 20-kilogram payload and a 70-kilogram max lift into a battery-powered humanoid robot that weighs 30 kilograms.
"Tendon activated" isn't enough. Is there a tech primer somewhere?
this post is complete misinformation
LLMs are lossy compressors! of *training data*.
LLMs losslessly compress *prompts*, internally. that’s what this paper shows.
source: i am the author of “Language Model Inversion”, the original paper on this
people are going to have to come to terms with the fact that the gentleman's agreement with google (scraping in exchange for referral traffic) was never actually legally binding; publishers have simply allowed it for 30 years. how do you fairly change that?
New breakthrough quantum algorithm published in @Nature today: Our Willow chip has achieved the first-ever verifiable quantum advantage.
Willow ran the algorithm - which we’ve named Quantum Echoes - 13,000x faster than the best classical algorithm on one of the world's fastest supercomputers. This new algorithm can explain interactions between atoms in a molecule using nuclear magnetic resonance, paving a path towards potential future uses in drug discovery and materials science.
And the result is verifiable, meaning its outcome can be repeated by other quantum computers or confirmed by experiments.
This breakthrough is a significant step toward the first real-world application of quantum computing, and we're excited to see where it leads.