The desire to write grows with writing.

Joined September 2020
Pinned Tweet
Clears throat, sips water, and stares at the crowd🌚

28k posts later, I’ve decided it’s time for a rebrand.

In August 2023, I made a curious decision: to become a Web3 writer. I started out confused, unsure if writing was even my thing. What if I was meant to be a flight attendant? (Found out I’m scared of heights and flying, so that’s a no.) Or maybe I was supposed to work in the pharmaceutical industry. After all, I’ve got a BSc in Pharmacology.

Well, news flash: you’re in charge of your life. You can be anything you wantttt. So yeah, I chose to be a writer, and it’s the best decision I’ve ever made. Because guess what now funds my luxurious lifestyle 🌚

For the past two years, I’ve walked this path as DeFiScribbler (really cool name, if you ask me). And trust me, I’ve achieved a lot, both here and IRL. (I recently bought a gaming chair with my last paycheck. 10/10, would recommend.)

My favorite achievement? Becoming a core @scribble_dao member. Those early days helped shape my skills as a writer. If you think I write well (I actually do), it’s thanks to ScribbleDAO, forever grateful. Oh, and they made me rich too (about $5k+ earned so far)😌

These days, a lot of new writers look up to me, reading my threads as a roadmap to improve their craft. I'm super proud of myself🫶 My DMs are open to anyone who needs help, I’m still growing, and happy to support others.

Now, let’s get to the main point. Let me reintroduce myself.

Hi, my name is Kiojame.

Kiojame means strength; it’s also my dad’s favorite name for me. It represents my ability to keep showing up for myself, tired, sad, depressed, sick, or confused. Beyond the money, writing gives me purpose. I’m inspired by researchers like @stacy_muur, @ViktorDefi, @TheDeFISaint and @zerokn0wledge_

What to Expect from KioJame
• Deep dives, explainer threads, and articles on projects and narratives
• Insights from my personal convictions and findings
• More of my personality and authentic voice
• Continued support for new writers finding their footing

The goal? To keep learning, improving, and becoming exceptionally good at what I do.

Thank you to everyone who’s supported me so far. I hope you keep rooting for me in this new chapter of self-discovery.

Yours truly, KJ💕

Ps: will change my username after a while so people get used to the new name
While we wait for mainnet, @cysic_xyz recently released a blog post on ComputeFi and zero-knowledge infrastructure, a reminder of why this kind of infrastructure needs to exist in the first place.

I found it especially interesting because it explains the key imbalance that makes ZK both powerful and demanding. Proving that a computation is correct takes 100 to 1000 times more processing power than doing the computation itself, yet verification happens in just milliseconds. This isn’t a flaw; it’s the feature that makes trustless systems work at scale.

But it also shows why today’s GPU-based systems, while they get the job done, still waste a lot of potential. GPUs were designed for rendering graphics, not for performing billions of modular multiplications in finite fields. So every ZK proof generated today burns extra energy doing operations the hardware wasn’t really built for.

Cysic is positioning itself right at the turning point where ZK moves from research experiments to real production infrastructure. The post explains how ZK compute requires completely different performance metrics from traditional computing: things like hashes per second (Keccak, Poseidon), constraints solved, and energy per proof instead of FLOPs. It makes clear that custom hardware isn’t just a “nice bonus” for faster results; it’s a core requirement for ZK to scale from small niche projects into the backbone of verifiable AI, rollups, and decentralized systems.

Cysic isn’t just optimizing what already exists; they’re building the foundation that makes zero-knowledge proofs routine instead of experimental, turning circuits, chips, and prover networks into an efficient, open resource anyone can use to build verifiable applications.

Check out the full blog below x.com/cysic_xyz/status/19864…
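To put that imbalance in numbers, here’s a quick toy sketch of the prove/verify asymmetry. Every figure is an illustrative assumption, not a Cysic benchmark:

```python
# Toy back-of-envelope model of the prove/verify asymmetry described above.
# All numbers are illustrative assumptions, not Cysic benchmarks.

base_compute_ms = 50.0   # time to just run the computation
proving_overhead = 500   # prover cost ~100-1000x the computation itself
verify_ms = 5.0          # verification stays in the millisecond range

proving_ms = base_compute_ms * proving_overhead

print(f"raw computation:  {base_compute_ms:.0f} ms")
print(f"proof generation: {proving_ms / 1000:.1f} s ({proving_overhead}x overhead)")
print(f"verification:     {verify_ms:.0f} ms")

# The asymmetry that makes trustless scaling work: one party pays the
# proving cost once, and any number of verifiers check it in milliseconds.
print(f"prove/verify cost ratio: {proving_ms / verify_ms:,.0f}x")
```

One party eats the heavy proving cost once; everyone else verifies almost for free. That is exactly why shaving the proving overhead with purpose-built hardware matters so much.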
.@NATIXNetwork just rolled out an important update that will change the navigation space.

The Drive& app now features Navigation V2, a completely rebuilt routing system developed alongside Magic Lane, a company that has been powering location technology since before smartphones existed. If you didn’t know, Magic Lane created navigation software for Nokia devices back when GPS on phones was still groundbreaking. With over three decades of experience in privacy-focused mapping, their philosophy aligns perfectly with NATIX’s decentralized approach.

But here is the real value: V2 is a two-way street. As Drive& users navigate, their data feeds back into the network, continuously improving the underlying map layers. The more people drive, the smarter and more precise the system becomes.

The upgrades introduced with Navigation V2 are practical and user-driven. Searches are faster and more accurate. Routes update in real time. Landscape mode, which is the most requested feature, is finally here, along with a refreshed interface that makes navigation smoother and more intuitive.

And this is just the start. Multi-stop routing, alternative path suggestions, and new trip-planning tools are on the way. Beyond navigation, the app now displays NITRO activation times, features a cleaner login experience, and introduces marketplace improvements for a more seamless journey.

This update marks a turning point for Drive&. With each mile driven, the Drive& network strengthens, bridging the gap between DePIN, mobility, and Physical AI.
October was such an exciting month for @NATIXNetwork. In case you missed it, here’s what stood out ↓

✓ The Drive& network now has over 265,000 drivers who’ve mapped more than 222 million kilometers across the world. Together, they’ve contributed over 1.2 billion map data attributes, a huge step toward keeping maps and AI systems updated with real-world accuracy.

✓ Through the VX360 network, Tesla drivers are capturing continuous 360° visual data that powers autonomous driving, robotics, and other real-world AI applications. NATIX is turning everyday driving into the fuel that trains and supports the next generation of intelligent machines.

✓ Drive& got a major upgrade with the launch of Navigation V2, built in partnership with Magic Lane. It’s faster, smarter, and built entirely around real-world data from the NATIX network.

✓ The VX360 Accelerate campaign is currently live, offering 30% off for new Tesla drivers who want to join the network, plus rewards of up to 50K $NATIX and referral commissions.

✓ In September, another 36 million $NATIX tokens were burned, bringing the total to 469.1 million.

November is shaping up to be even bigger as DePAI and robotics gain momentum, and NATIX strengthens the infrastructure layer driving it forward.
Farmers are now able to trace their crops from farm to cup, proving they’re deforestation-free to buyers who demand that verification, thanks to @dimitratech

That’s the impact: technology translating directly into better market access and improved livelihoods. When farmers gain entry to premium markets because they can finally prove what they’ve always known, that their products are sustainably grown, that’s what sets Dimitra apart.
From ripe red cherries to a perfect cup, Arabica coffee from the highlands of Sumatra continues to captivate coffee lovers worldwide. Equally important is knowing where the coffee comes from, ensuring it’s deforestation-free and traceable from farm to cup. #dmtrteam $DMTR
The latest @EO_Network integrations mark a major step forward in modular DeFi infrastructure.

They’ve partnered with LayerBank to enable cross-chain price feeds, secured through RedStone, ensuring reliable data for borrowing and liquidations. Their NAV feeds are now live on Silo Finance, having already powered over $34B in RWA volume on Morpho.

Looking ahead to Q4, the roadmap introduces ZK-Proof v2.1 with sub-second performance, AI-driven predictive feeds, and new deployments on Solana, zkSync, Aave V4, and Polygon. Now featured as part of the Monad ecosystem, the protocol recently crossed a $167M private credit milestone through collaborations with Fasanara and Midas.

Fast, verifiable, and secure data, that's the @eoracle way.
EO Network's 2025 growth has continued to scale into Q3📈 RWAs, Tokenization, Specialized PT Markets, Yield-Bearing Strategies, Institutional DeFi, Lending protocols, Stablecoin issuers... we've been busy BUIDLing🧵
$128M was exploited from Balancer this week. Just wondering if @UseFirewall could have prevented this. @iamoptmstc, what do you think?
$128M gone overnight. @Balancer, one of DeFi’s most trusted OGs, got hit at the core. A single line of code inside its shared vault turned efficiency into disaster. Let's do a post-mortem. 🧵
------------------------------
The Early Whispers

On-chain sleuths caught the scent fast: funds were leaking from Balancer’s main Vault contract. Within hours, the losses crossed nine figures, spreading across Ethereum, Base, Sonic, and Polygon.

tl;dr:
→ Core issue: Balancer’s V2 Vault, the contract holding all pool assets.
→ Attack vector: manageUserBalance() function exploit.
→ Hacker used fake sender identities to drain assets.
------------------------------
The Bug That Broke the Vault

The function manageUserBalance() was supposed to safely handle internal balances. But inside it, a small logic flaw hid for years:

if (msg.sender == op.sender)

The catch? op.sender was user-supplied. That meant an attacker could impersonate anyone and call withdrawals directly. The fatal line let hackers drain tokens through UserBalanceOpKind.WITHDRAW_INTERNAL. A single bug, and the shared vault design turned it into a chain-wide contagion.
------------------------------
Why the Architecture Amplified It

Balancer V2’s shared vault was built for efficiency: one pool, one vault, cheaper swaps, easier flash loans. But efficiency came with risk. When one vault holds everything, one exploit touches it all.

The system was audited by OpenZeppelin, Trail of Bits, Certora, and ABDK back in 2021–22, yet the logic slip survived every audit. Even seasoned forks like @beets_fi on Sonic lost $3 million+, and Berachain temporarily halted its chain.
------------------------------
The Damage Report

Total loss: ~$128 million
Assets drained:
> osETH ≈ 6,850 ($26.9M)
> WETH ≈ 6,590 ($24.5M)
> wstETH ≈ 4,260 ($19.3M)
Chains hit: Ethereum, Base, Sonic, Polygon
Forks impacted: Beets Finance, Berachain pools, and others.

Panic spread fast: a dormant whale withdrew $6.5M from Balancer after three years of inactivity.
------------------------------
Partial Recoveries

Some good news:
➢ @stakewise_io: ~$20.7M in osETH + osGNO → fully recovered
➢ @berachain: ~$12M in Ethena/Honey tripool → recovery ongoing
------------------------------
What It Means for DeFi

This wasn’t just a Balancer bug, it was a wake-up call. DeFi thrives on composability, but shared vaults and reused code multiply systemic risk. Audits can’t always catch logic flaws that surface years later. The more composable DeFi becomes, the more interconnected its failure points get.
------------------------------
How to Stay Safe

➢ Recheck your Balancer or fork positions.
➢ Revoke approvals via @RevokeCash or @DeBankDeFi.
➢ Diversify liquidity; avoid all-in exposure to shared vault systems.
➢ Question every “audited” protocol: audits ≠ immunity.
------------------------------
Final Thoughts

The Balancer hack wasn’t just a security lapse, it was a DeFi design warning. Shared efficiency turned into shared vulnerability. It’s a reminder that trust in DeFi isn’t just code-based; it’s architecture-based.

Your thoughts on the exploit?
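For readers who want to see the bug class in miniature, here is a heavily simplified Python model of the pattern the thread describes: authorization keyed to a caller-supplied sender field. The names mirror the thread’s Solidity, but this is an illustration of the flaw class only, not Balancer’s actual contract code:

```python
# Hypothetical Python model of the bug class described above: a withdrawal
# routine that debits a caller-supplied `op.sender` without proving the
# caller actually controls that account. Illustrative only -- these are
# not Balancer's real contracts.
from dataclasses import dataclass

@dataclass
class UserBalanceOp:
    sender: str      # caller-supplied: whose internal balance to debit
    recipient: str   # where withdrawn funds are sent
    amount: int

internal_balances = {"victim": 100, "attacker": 0}

def is_trusted_forwarder(addr: str) -> bool:
    # Toy stand-in for any call path (relayer, batched op) that lets a
    # request through carrying an op.sender the caller does not control.
    return addr == "relayer"

def manage_user_balance_vulnerable(msg_sender: str, op: UserBalanceOp) -> None:
    # Flawed authorization: the check keys off a field the caller supplied,
    # so via a forwarding path it proves nothing about the real caller.
    if not (msg_sender == op.sender or is_trusted_forwarder(msg_sender)):
        raise PermissionError("not authorized")
    internal_balances[op.sender] -= op.amount
    internal_balances[op.recipient] = internal_balances.get(op.recipient, 0) + op.amount

# Attacker routes the op through the permitted path, naming the victim:
manage_user_balance_vulnerable("relayer", UserBalanceOp("victim", "attacker", 100))
print(internal_balances)  # {'victim': 0, 'attacker': 100} -- drained

# The fix: authorization must be derived from msg.sender itself (or an
# explicit approval signed by op.sender), never from a caller-chosen field.
```

The takeaway generalizes: any identity field the caller can set is an attacker input, and checks built on it are decoration, not security.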
I think ERC-8004 is one of those milestones that quietly redefines what “trust” means in autonomous systems.

Up until now, automation in crypto has mostly been about efficiency: faster swaps, automated strategies, simplified execution. But ERC-8004 introduces something deeper: verifiable reliability. When agents can not only act but also prove that their actions were accurate and transparent, we move from blind automation to auditable intelligence.

And that's a big deal. It means every agent interaction (delegation, completion, or payment) can now exist in a self-contained loop of intent, proof, and settlement.

Combined with x402, @HeyElsaAI's ecosystem now has both the economic layer (payment and incentives) and the trust layer (proof and reputation). Together, they lay the foundation for a true agent economy, one that doesn’t just execute tasks but earns credibility through verified performance.

To me, ERC-8004 isn’t just another token standard, it’s a cultural shift toward transparent autonomy. It’s how AI agents begin to not only think and act, but earn trust on-chain.
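As a mental model of that intent → proof → settlement loop, here is a minimal sketch. Every field and function name is hypothetical; this is not the ERC-8004 or x402 interface:

```python
# Hypothetical sketch of the agent loop described above: delegation
# (intent) -> completion (proof) -> payment (settlement). Field names
# are illustrative; they are not the ERC-8004 or x402 interfaces.
from dataclasses import dataclass
import hashlib

@dataclass
class AgentTask:
    client: str
    agent: str
    intent: str              # what the agent was asked to do
    payment_wei: int
    proof: str = ""          # commitment to the delivered work
    settled: bool = False

def complete(task: AgentTask, result: str) -> None:
    # The agent publishes a verifiable commitment to its output.
    task.proof = hashlib.sha256(result.encode()).hexdigest()

def settle(task: AgentTask, delivered: str) -> None:
    # Payment releases only if the delivered result matches the proof:
    # the "earn credibility through verified performance" step.
    if task.proof != hashlib.sha256(delivered.encode()).hexdigest():
        raise ValueError("delivered result does not match committed proof")
    task.settled = True

task = AgentTask("0xClient", "0xAgent", "rebalance portfolio to 50/50", 10**15)
complete(task, "rebalanced: 60/40 -> 50/50")
settle(task, "rebalanced: 60/40 -> 50/50")
print(task.settled)  # True: intent, proof, and settlement in one loop
```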
Kiojame retweeted
Have you ever asked ChatGPT to write a comprehensive research report, and it either gives you a surface-level summary or starts losing the thread halfway through? Or asked it to analyze multiple datasets and synthesize insights, and the quality drops as the context grows?

Here’s the thing: the problem isn’t that AI isn’t smart enough. It’s that most AI systems try to do everything at once, holding all the information in memory at the same time until they collapse under the load.

We have multi-agent systems that attempt to solve this by having multiple AI agents work together, but they typically operate sequentially, passing massive amounts of context between each step. This creates bottlenecks where one slow agent holds up everything else, and the shared context keeps expanding until performance drops. It’s like having a team where everyone must read every email thread before doing anything, even when most of those emails aren’t relevant to their specific task.

@SentientAGI's ROMA (Recursive Open Meta Agent) takes a fundamentally different approach, inspired by how humans actually organize complex work. When a project manager receives a big assignment, they don’t try to do everything themselves while remembering every detail. Instead, they break the project into distinct tasks, identify which tasks are independent, assign each to the right specialist, let teams work in parallel where possible, and then combine the results as they come in.

ROMA implements exactly this pattern for AI systems. When you give ROMA a complex query, it first asks a simple question: “Can this be solved directly, or does it need to be broken down?” If the task is straightforward, ROMA executes it immediately using the appropriate AI model or agent. If it’s complex, ROMA breaks it into smaller, independent subtasks, solves each one recursively using the same logic, and then brings the results back together through intelligent synthesis. This recursive breakdown continues until every subtask is simple enough to execute directly, creating a tree of work where each branch operates independently.

The architecture solves three critical problems at once.

First, it eliminates context overload by giving each subtask only the information it actually needs instead of dumping everything into shared memory. When ROMA breaks down “write a research paper on quantum computing’s impact on cryptography” into subtasks like “research current quantum capabilities,” “analyze cryptographic vulnerabilities,” and “investigate post-quantum solutions,” each subtask accesses only its relevant sources and data. The results then pass through aggregators that act as intelligent filters, bringing together key findings while removing repetition. This keeps context manageable even when recursion goes seven levels deep.

Second, ROMA enables real parallel processing by identifying which subtasks are truly independent and can run at the same time. Most AI workflows are unnecessarily sequential. There’s no reason to wait for market research to finish before starting technical analysis when both are independent tasks. ROMA classifies tasks into three main types (composition, retrieval, and reasoning), determines their dependencies, and runs everything possible in parallel while ensuring dependent tasks wait for their prerequisites.

Third, ROMA uses intelligent model routing, matching each subtask to the most suitable AI model rather than relying on expensive frontier models for everything. Simple information retrieval might use a fast, lightweight model, while complex reasoning goes to more capable systems, and specialized tasks are handled by domain-specific agents.

What makes ROMA especially interesting for the wider AI ecosystem is that it’s solving a fundamental scaling challenge that affects everyone building with AI, not just Sentient. As AI applications move from simple chat interfaces to complex agentic workflows, the orchestration layer becomes critical infrastructure. ROMA shows that you don’t need increasingly large models to handle increasing complexity; you need better coordination of the capabilities you already have.
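The recursive pattern is compact enough to sketch. The helpers below (`is_atomic`, `split`, `run_agent`, `synthesize`) are hypothetical placeholders standing in for model calls; this illustrates the decompose-solve-synthesize shape described above, not Sentient's actual API:

```python
# Minimal sketch of ROMA-style recursive decomposition, under the
# assumption that planning/execution helpers are backed by models.
from concurrent.futures import ThreadPoolExecutor

def is_atomic(task: str) -> bool:
    # Placeholder heuristic: a real system asks a model "can this be
    # solved directly, or does it need to be broken down?"
    return " and " not in task

def split(task: str) -> list[str]:
    # Placeholder decomposition: a real system plans subtasks with a model.
    return task.split(" and ")

def run_agent(task: str) -> str:
    # Placeholder executor: route to the cheapest model/agent that can
    # handle `task` (the "intelligent model routing" step).
    return f"result({task})"

def synthesize(task: str, results: list[str]) -> str:
    # Aggregator: merge child results, filtering repetition, so only a
    # compact summary travels up the tree (keeps context small).
    return f"synthesis({task}: {', '.join(results)})"

def solve(task: str) -> str:
    if is_atomic(task):
        return run_agent(task)              # simple enough: execute directly
    subtasks = split(task)
    with ThreadPoolExecutor() as pool:      # independent subtasks in parallel
        results = list(pool.map(solve, subtasks))
    return synthesize(task, results)

print(solve("research quantum capabilities and analyze crypto vulnerabilities"))
```

The key design choice is that each recursive call sees only its own subtask, so context never accumulates globally, and siblings run concurrently instead of queueing behind one another.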
So far, the AI sector has seen impressive growth, yet there’s still so much to do. Today, the industry faces a fundamental problem that’s holding back further innovation.

If you want to use powerful AI models, you have to choose between two options. You can use closed API services like ChatGPT or Claude, where you get reliability and the creators get paid, but your data is sent to someone else’s servers and you have no visibility into what’s really happening. Or you can use open-source models from platforms like Hugging Face, where you keep full control and privacy, but creators can’t make money or enforce safety rules.

This, imo, is a technical limitation in how AI is distributed, and one of the key things holding the entire industry back. To address this, @SentientAGI introduced OML.

OML stands for Open-access, Monetizable, and Loyal AI model serving. Think of OML like Netflix, but for AI models: you can download and run models locally (so your data stays private), but each use requires a cryptographic “ticket” that automatically pays the creator. The three properties work together:
• Open-access: You get the full model file to run on your own infrastructure, not just limited API access.
• Monetizable: Creators can charge per inference through authorization tokens, creating sustainable revenue streams.
• Loyal: The model enforces safety and usage rules by design, not through legal terms of service, but through cryptographic checks built into how it operates.

The core innovation powering OML is AI-native fingerprinting, a way of embedding authorization directly into the model’s neural network. Unlike regular watermarking, which adds protection after training, OML’s fingerprinting becomes part of how the model itself works. When you run the model, it checks for a valid cryptographic token:
• If valid → it produces full-quality outputs.
• If not → the outputs automatically degrade.

Trying to remove this protection would mean retraining the entire model from scratch, a process so expensive that it’s easier (and cheaper) to just buy legitimate access. This makes it possible to protect a model even when users have full access to its weights and architecture, solving one of the hardest problems in AI security, known as white-box protection.

OML is more than 100x stronger than older fingerprinting methods, with less than 1% computational overhead. It creates what economists call a Nash equilibrium, where the most rational choice is to play fair, because bypassing the system costs more than using it properly. This system combines technical fingerprinting with crypto-economic mechanisms like staking and usage tracking, making large-scale model theft not just difficult, but economically irrational.

For the first time, AI creators can share their models openly without losing control, revenue, or safety guarantees, and users can run them privately on their own infrastructure. Sentient is building OML as the foundation for a new kind of AI ecosystem. This goes beyond protecting intellectual property; it’s about creating an environment where hospitals can run advanced diagnostics locally while still compensating researchers, developers can collaborate on powerful models without giving up monetization, and safety rules are technically enforced, not just written in policy documents.

The impact is huge: privacy without performance loss, decentralized innovation that’s financially sustainable, and ethical AI that enforces responsibility by design.
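To make the valid-token/degraded-output behavior concrete, here’s a toy model. The HMAC check is a stand-in of my own: OML embeds the fingerprint in the model’s weights, not in wrapper code like this:

```python
# Toy model of the token-gated inference described above: a valid
# cryptographic "ticket" yields full-quality output; without one the
# output degrades instead of hard-failing. Illustrative only.
import hashlib, hmac, random

CREATOR_KEY = b"creator-secret"  # held by the model's creator

def issue_token(request_id: str) -> str:
    # The "ticket" purchased per inference; paying the creator mints it.
    return hmac.new(CREATOR_KEY, request_id.encode(), hashlib.sha256).hexdigest()

def infer(prompt: str, request_id: str, token: str = "") -> str:
    expected = issue_token(request_id)
    if token and hmac.compare_digest(token, expected):
        return f"full-quality answer to: {prompt}"
    # No valid ticket: outputs automatically degrade, as described.
    words = prompt.split()
    random.shuffle(words)
    return "degraded output: " + " ".join(words)

rid = "req-001"
print(infer("summarize the OML paper", rid, issue_token(rid)))
print(infer("summarize the OML paper", rid))
```

The wrapper version here is trivially removable; the whole point of OML’s claim is that the equivalent check lives inside the weights, where stripping it means retraining.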
OML is the missing piece that makes open yet sustainable AI possible, and Sentient is turning it from concept into real infrastructure that can redefine how AI reaches the world. Tagging Educators @Yashasedu @Sakshimiishra @bigjayce7 @bullish_bunt @szenemene_ @zerokn0wledge_ @vaz_bav @moomsxxx @dolak1ng @0xcheeezzyyyy @thechulprecious @Thenotellaking @Riri_unfiltered
.@dimitratech is building one of the world’s largest blockchain-backed carbon monitoring networks.

Across Latin America, Africa, and Asia, Dimitra’s infrastructure is helping farmers, cooperatives, and governments turn sustainable land management into verifiable digital assets. From forest conservation zones to agricultural carbon projects, every hectare monitored and verified through the Dimitra Protocol represents real-world data tracked by satellites, analyzed by AI, and secured on blockchain.

Each verification process is powered by $DMTR, showing real utility:
• Satellite-based monitoring of land coverage
• AI-driven analysis of carbon sequestration
• Blockchain verification and audit trails
• Ongoing compliance reporting for certification

The impact is measurable:
• 30,000+ on-chain transactions processed
• $600K+ in verified buybacks
• 6 million farms in the ecosystem

Dimitra’s carbon infrastructure already spans 20,000+ hectares in Mexico, but that’s only one of many projects in motion, with expansions underway across Brazil, Kenya, Ghana, and Indonesia.

As the global carbon market heads toward $50B+ by 2030, Dimitra isn’t chasing trends. It’s building the digital verification layer that makes transparent, traceable, and blockchain-backed carbon credits possible at agricultural scale.
Kiojame retweeted
<:) here's today's top DeFi weekly highlights from accounts i follow.

ps: these are listed in no particular order..

➥ "Robotics and Blockchain" from @eli5_defi
➥ "The set-up Defi Needs" from @0xCheeezzyyyy
➥ "How We Make +50M Snipping Shitcoins On DEXs" from @Cbb0fe
➥ "How EIP-7702 Could Drain Your Wallet Without You Knowing?" from @the_smart_ape
➥ "How Blockchain is Creating a Machine Economy" from @YashasEdu
➥ "Boros by pendle" from @kenodnb
➥ "NEOBANKS Big Ecosystem Review" from @stacy_muur
➥ "WHY ERC-8004 IS GETTING ATTENTION" from @Hercules_Defi
➥ "@Theo_Network (thBILL) analysis" from @thelearningpill
➥ "Welcome to the x402 era" from @DeFiScribbler
top DeFi weekly highlights from accounts i follow.

➥ ERC-8004 article from @eli5_defi
➥ flare network from @thelearningpill
➥ infinit from @0xCheeezzyyyy
➥ pendle (boros) from @Hercules_Defi
➥ metadao from @YashasEdu

check the other creators & links to content👇🧵
Kiojame retweeted
Here’s one of the most shocking stats in the AI sector right now: researchers outside major companies are being locked out.

In 2024, industry-led teams accounted for nearly 90% of all notable model releases, while academic labs dropped to zero. At the same time, training costs have increased, and the compute required for frontier models is doubling roughly every eight months. When building AI models becomes more expensive than an entire research department’s annual budget, even the best minds start dropping out. Not because they’ve lost interest, but because they simply can’t afford to keep up.

That’s why what @cysic_xyz is building matters. Their infrastructure doesn’t just make compute cheaper, it changes who gets to participate. By opening access to idle GPUs, mining rigs, enterprise servers, and even smartphones, Cysic turns compute from a rented service into an ownable, verifiable resource. They’ve built the coordination layer that connects all that unused power, verifies the work cryptographically through zero-knowledge proofs, and rewards contributors fairly through their Proof of Compute system.

When compute becomes a shared and verifiable resource, AI progress stops being shaped by who can afford the most GPUs and starts being driven by who has the best ideas. I believe that to truly scale, we have to fix how computation is distributed.

The future of AI doesn’t depend on how many GPUs exist, it depends on who can access them. It’s not about capacity. It’s about inclusion. And that’s exactly the future Cysic is building.
The way compute works today is built on extraction, not innovation.

A few tech companies control most of the world’s cloud power. They mark up raw compute costs by several hundred percent, even while their data centers sit idle most of the time. NVIDIA’s $46 billion data center revenue is proof that access to compute has become the main bottleneck for AI, blockchain, and scientific progress.

When just three corporations decide who gets to train models, which rollups can afford proofs, and what everyone pays to build, it’s not a free market, it’s a gated system. The irony is that we’ve built decentralized protocols that still rely on centralized, trust-based infrastructure owned by the same companies crypto was meant to challenge.

ComputeFi flips this model. Instead of renting compute, it makes it something you can own and trade. @cysic_xyz lets anyone with spare hardware contribute power to the network and earn from it. At the same time, AI teams, rollups, and developers can tap into affordable, verifiable compute without middlemen or markups. This unlocks the huge amount of compute that sits unused around the world and puts it to work.

As demand for AI training and ZK proving keeps growing, the advantage won’t go to whoever owns the biggest data center, but to whoever builds the most efficient marketplace connecting global supply with global demand.

𝗖𝗼𝗺𝗽𝘂𝘁𝗲 𝗺𝗼𝗻𝗼𝗽𝗼𝗹𝘆 𝗲𝗻𝗱𝘀 𝘄𝗶𝘁𝗵 𝗖𝘆𝘀𝗶𝗰

gMSor
Kiojame retweeted
For years, robotics has been defined by motion... I'm talking about how well machines could walk, grasp, or balance. But the more I’ve studied the field, the more it feels like that misses the point a little. The real question isn’t how lifelike robots can be; it’s how independent they can become. A robot that moves gracefully but still needs human oversight is just a sophisticated puppet.

So the next leap in robotics won’t come from smoother joints or faster processors, but from autonomy: the kind that lets machines reason, act, and sustain themselves in the real world. That’s what makes @openmind_agi's integration of its humanoid system, OM1, with @coinbase's x402 protocol so important.

OM1 was built as a modular intelligence framework, a system designed to think across perception, motion, and decision-making rather than rely on static programming. x402, meanwhile, is essentially a new way for machines to handle money. Built on the familiar HTTP 402 status code (“Payment Required”), it lets AI agents and robots make on-chain payments natively, with no user interfaces and no humans approving transactions.

That’s where OpenMind also gains its edge. Many robotics projects are closed ecosystems, designed for one purpose and optimized for one task. OpenMind built OM1 differently. It’s hardware-agnostic, open to integration, and structured for economic reasoning. This gives it flexibility most competitors lack.

The result is what Coinbase described as the first “real-world humanoid payment.” Underneath is FABRIC, OpenMind’s coordination layer designed for multi-agent collaboration: a way for AI systems and robots to identify, verify, and transact with each other.

Now, I know it's tempting to dismiss this as futuristic speculation, but there’s precedent. Blockchain researchers have long theorized about machine-to-machine payments, especially in Internet-of-Things environments. The logic was always sound but lacked embodiment: there was no physical system intelligent enough to use those protocols meaningfully. OM1 changes that.

OpenMind’s real advantage is also its timing. As someone who studies these intersections, I see @openmind_agi as a working hypothesis and a glimpse of how the line between artificial and economic intelligence might blur.

Shoutout to the entire team led by @JanLiphardt. Follow these top chads for more updates: @jeg6322 @xiaoyubtc @ClaraChengGo @paigeinsf @tengyanAI
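The HTTP 402 flow is easy to picture in code. Here’s a hedged sketch of the request/pay/retry loop; the endpoint, header name, and `pay_onchain` helper are my own illustrative assumptions, not the actual x402 wire format:

```python
# Sketch of an x402-style flow: an agent hits a paywalled endpoint, gets
# HTTP 402 ("Payment Required") with payment details, pays, and retries
# with a payment proof. Header/field names are assumptions, not the spec.
import requests  # pip install requests

def pay_onchain(invoice: dict) -> str:
    # Placeholder: a real agent would sign and submit an on-chain payment
    # here, returning a transaction hash / payment proof.
    return "0xpayment-proof"

def fetch_with_402(url: str) -> requests.Response:
    resp = requests.get(url)
    if resp.status_code == 402:                    # payment required
        invoice = resp.json()                      # amount, asset, pay-to address
        proof = pay_onchain(invoice)
        resp = requests.get(url, headers={"X-Payment": proof})  # retry with proof
    return resp

# resp = fetch_with_402("https://example.com/paid-resource")  # hypothetical URL
```

The design point: the whole negotiation happens machine-to-machine over a status code browsers have ignored for thirty years, which is what lets a robot pay without a human in the loop.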
One of the things that truly sets @HeyElsaAI apart is how normie-friendly it is.

I’ve been using @HeyElsaAI for a while now, and what stands out the most is its clean, intuitive interface and the speed of response. Elsa makes complex on-chain actions feel simple, familiar, and actually usable.

And now, Elsa speaks Japanese, Chinese, and Korean, not just for prompts, but across the entire platform. Menus, notifications, chats: all localized, all seamless.

It’s clear this is just the beginning. More languages, more accessibility, more people discovering how easy Web3 can feel when you just Elsa it.
Elsa now speaks Japanese, Chinese, and Korean, not just for prompts but across the entire platform✨ Menus, notifications, and chat, all in your language.
Kiojame retweeted
Every robot needs to answer one fundamental question: 𝐖𝐡𝐞𝐫𝐞 𝐚𝐦 𝐈?

For decades, GPS solved this problem outdoors. Satellites triangulated positions to within a few meters, good enough for turn-by-turn directions in your car. But for the next generation of Physical AI, humanoid robots walking sidewalks, delivery bots navigating crowded streets, and autonomous vehicles making split-second decisions, GPS isn’t enough.

Visual Positioning Systems (VPS) will be the major driver of Physical AI, and @NATIXNetwork is building critical infrastructure that powers it.

Visual Positioning Systems work differently from GPS. Instead of relying on satellites, they use cameras to recognize the physical environment (buildings, road markings, signs, landmarks) and match what they see to a detailed visual map. Think of it as giving robots the same navigational sense that humans have. We don’t navigate using coordinates. We navigate by what we see. “Turn left at the coffee shop. The building with the red awning. Three blocks past the park.”

VPS depends entirely on visual data, and not just any data. It needs massive, diverse, and constantly updated imagery of the real world. It can’t rely on a single angle or perfect weather conditions. It needs 360-degree coverage across different lighting, seasons, and geographies. That’s a level of visual diversity no traditional mapping approach can provide.

NATIX is creating a crowdsourced network powered by real drivers and real environments. Through VX360 devices installed in Tesla vehicles, NATIX taps into Tesla’s built-in front, rear, and side cameras. These cameras continuously capture diverse, real-world imagery, from highways to small residential streets, in all weather and lighting conditions. The VX360 device turns this into a privacy-protected, multi-camera data stream that updates continuously as drivers go about their daily routines. This decentralized model is how NATIX is building one of the world’s most comprehensive and dynamic visual datasets.

The future of robotics depends on data, and not just big data, but the right kind of data. High-quality, diverse, real-world visuals are the foundation that allows AI to understand and act safely in the physical world.

With its decentralized “Internet of Cameras,” NATIX is not just improving maps. It’s building the infrastructure layer of Physical AI. It’s enabling humanoid robots, delivery bots, and autonomous vehicles to navigate outdoor spaces with human-level precision. As these robots step out of labs and into the real world, their vision will be powered by the data NATIX collects.
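For intuition, here is a heavily simplified localization sketch: match landmarks the camera sees against a visual map and solve for position. The map contents and `extract_features` are hypothetical placeholders, not NATIX’s actual pipeline:

```python
# Minimal sketch of the VPS idea described above: localize by matching
# observed landmarks against a pre-built visual map. Illustrative only.

# Visual map: landmark -> known world position (x, y)
VISUAL_MAP = {
    "red_awning_building": (10.0, 4.0),
    "coffee_shop_sign": (12.5, 6.0),
    "park_gate": (8.0, 9.5),
}

def extract_features(image) -> dict[str, tuple[float, float]]:
    # Placeholder: a real system runs a feature extractor / place
    # recognizer here. Returns landmark -> offset seen from the camera.
    return {"red_awning_building": (2.0, 1.0), "coffee_shop_sign": (4.5, 3.0)}

def localize(image) -> tuple[float, float]:
    # Estimate camera position as the average of (map position - observed
    # offset) over matched landmarks -- a crude stand-in for pose solving.
    matches = [(VISUAL_MAP[name], off)
               for name, off in extract_features(image).items()
               if name in VISUAL_MAP]
    xs = [mx - ox for (mx, _), (ox, _) in matches]
    ys = [my - oy for (_, my), (_, oy) in matches]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

print(localize(image=None))  # -> (8.0, 3.0)
```

The hard part in practice is everything the placeholders hide: recognizing the same building at dusk, in rain, and from a new angle, which is exactly why diverse, continuously refreshed imagery matters.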
BOB functions as both an Ethereum rollup built on the OP Stack and a Bitcoin Secured Network, using staked BTC to achieve transaction finality. This structure combines Bitcoin’s security with Ethereum’s DeFi ecosystem.

@build_on_bob stands out as the most authentic Bitcoin-aligned scaling solution today. By using staked BTC for finality and creating a liquid staking token flywheel, it boosts liquidity, fees, and yield for users.

gBob
How does the BOB Hybrid Chain fuse Bitcoin's unmatched security AND the flexibility of Ethereum DeFi?

The answer is that BOB is simultaneously both an ETH rollup built on the OP Stack and a Bitcoin Secured Network (BSN) that uses staked BTC for finality.

As a BSN, BOB is essentially taking advantage of Bitcoin's security guarantees, with all assets, apps and transactions being secured by billions of dollars of staked BTC.

The BSN model also creates a Bitcoin liquid staking token (LST) liquidity flywheel effect. A percentage of network fees are returned to Bitcoin stakers as yield. The increased yield encourages more LSTs to deploy in DeFi, which increases fees, which then increases yield further - so the cycle continues.

Networks like BOB therefore gain massive liquidity and the highest security guarantees in crypto. It's the best of both worlds.

BOB is currently on the path to becoming a BSN, pending Babylon's completion of development, anchoring every transaction with the economic weight of Bitcoin.
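That fees → yield → more LSTs → more fees cycle can be sketched as a toy loop. Every constant below is an illustrative assumption of mine, not BOB or Babylon data:

```python
# Toy model of the LST flywheel described above: fees fund staker yield,
# yield attracts more staked BTC, deeper liquidity drives more fees.
# All constants are illustrative assumptions, not BOB data.

stake = 1_000.0                # BTC deployed as LSTs on the network
inflow_per_yield_pt = 2_000.0  # BTC attracted per 1% staker yield (toy elasticity)
staker_fee_share = 0.5         # share of network fees returned to BTC stakers

def annual_fees(stake_btc: float) -> float:
    # Toy assumption: deeper liquidity attracts superlinear activity,
    # so fees grow slightly faster than stake alone.
    return 0.004 * stake_btc ** 1.1 / 1_000 ** 0.1

for year in range(1, 6):
    fees = annual_fees(stake)
    yield_pct = 100 * fees * staker_fee_share / stake
    stake += yield_pct * inflow_per_yield_pt   # higher yield pulls in more LSTs
    print(f"year {year}: stake {stake:,.0f} BTC, "
          f"fees {fees:.1f} BTC, staker yield {yield_pct:.2f}%")
```

Whether the loop compounds or stalls in reality hinges on that elasticity assumption: if extra yield doesn't actually attract new LST deposits, the flywheel is just a fee-sharing scheme.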