Steve Sperandeo 🇨🇦 retweeted
“They’ve culled the birds, they’ve removed the birds, it’s time for them to get the fuck out of here.” “I know your role is to keep the peace... there is no peace. You have destroyed our family. And you were an accomplice to it.” Time for the RCMP to stop terrorizing this family. Well past time.
Steve Sperandeo 🇨🇦 retweeted
McKinsey just dropped its 2025 AI report.

1. Everyone’s testing, few are scaling. 88% of companies now use AI somewhere. Only 33% have scaled it beyond pilots.

2. The profit gap is huge. Just 6% see real EBIT impact. Most are still stuck in “experiments,” not execution.

3. The winners think bigger. Top performers aren’t cutting costs. They’re redesigning workflows and creating new products.

4. AI agents are emerging. 23% are testing agents. Only 10% have scaled them (mostly in IT and R&D).

5. The jobs shift is starting. 30% of companies expect workforce reductions next year, mostly in junior or support roles.

TL;DR: AI adoption is nearly universal. Impact isn’t. The gap between pilots and profit is where the next unicorns will be built.
Steve Sperandeo 🇨🇦 retweeted
Universal Ostrich Farms - Edgewood, BC

"That's your Canadian government right there that just did this. I went to Bosnia, Somalia and Afghanistan and I did not serve my country for this bullsh*t that's in front of us. The government committed their own a-f*cking-trocity."

Sgt. Mike Rude (retired)
Replying to @ChShersh
350 contributions in a year? yikes
Replying to @MarkJCarney
Ostrich blood is on your hands. Due process my ass.
Steve Sperandeo 🇨🇦 retweeted
OMG 💔 appears the CFIA slaughtered the entire flock of healthy ostriches through the night 📸 @DreaHumphrey Dark day for Canada
Replying to @JasonLavigneAB
I compiled an essay on a number of factors in this case 6 months ago. What a travesty. github.com/homer6/ostrich/bl…
Steve Sperandeo 🇨🇦 retweeted
Universal Ostrich Farm Update

I just confirmed, reports are that hundreds of shots were fired in the area of the kill pen at Universal Ostrich Farm. About 100 shots in the first hour, then a shift change, then hundreds more. It sounds like a high-powered rifle, and the speculation is that the platforms set up today are being used. This is happening under the cover of night, and flood lights are being used.

Family and supporters are forced to listen to this as the entrance to the property is blocked by RCMP, and anyone who leaves is not being let back in. This is inhumane treatment of both the ostriches and the people there.

Tonight, the last of the hope for Canada is dying in Edgewood BC.
Well, they live for 30 years, so....
Steve Sperandeo 🇨🇦 retweeted
Scaling Agent Learning via Experience Synthesis
📝: arxiv.org/abs/2511.03773

Scaling training environments for RL by simulating them with reasoning LLMs!

Environment models + Replay buffer + New tasks = cheap RL for any environment!

- Strong improvements over non-RL-ready environments and multiple model families!
- Works better in sim-2-real RL settings → warm-start for high-cost environments

🧵1/7
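For readers who want the shape of the idea, here is a minimal sketch of the experience-synthesis loop as the abstract describes it. Every name in it (llm_env_step, collect_synthetic_episode, the reward logic) is illustrative, not the paper's actual API: a reasoning LLM stands in for the real environment, and its synthetic rollouts fill a replay buffer for cheap RL before any costly real-environment steps.

```python
import random
from collections import deque

# Synthetic transitions go here instead of costly real-environment steps.
replay_buffer = deque(maxlen=10_000)

def llm_env_step(state, action):
    """Stand-in for a reasoning-LLM call that predicts the next state
    and reward for (state, action), i.e. the learned environment model."""
    next_state = f"{state} -> {action}"   # placeholder for the LLM's prediction
    reward = random.random()              # placeholder for an LLM-judged reward
    done = random.random() < 0.1
    return next_state, reward, done

def collect_synthetic_episode(policy, task, max_steps=20):
    """Roll the policy out inside the simulated environment and
    store the transitions for later RL updates."""
    state = task
    for _ in range(max_steps):
        action = policy(state)
        next_state, reward, done = llm_env_step(state, action)
        replay_buffer.append((state, action, reward, next_state, done))
        if done:
            break
        state = next_state

# Warm start: pretrain on cheap synthetic episodes like this one,
# then fine-tune with a few expensive real-environment episodes.
collect_synthetic_episode(policy=lambda s: "search", task="book a flight")
```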
Steve Sperandeo 🇨🇦 retweeted
The paper behind Kosmos. An AI scientist that runs long, parallel research cycles to autonomously find and verify discoveries.

One run can coordinate 200 agents, write 42,000 lines of code, and scan 1,500 papers. A shared world model stores facts, results, and plans so agents stay in sync.

Given a goal and dataset, it runs analyses and literature searches in parallel and updates that model. It then proposes next tasks and repeats until it writes a report with traceable claims.

Experts judged 79.4% of statements accurate and said 20 cycles equals about 6 months of work.

Across 7 studies, it reproduced unpublished results, added causal genetics evidence, proposed a disease-timing breakpoint method, and flagged a neuron aging mechanism.

It needs clean, well-labeled data, can overstate interpretations, and still requires human review.

Net effect: it scales data-driven discovery with clear provenance and steady context across fields.

----

Paper – arxiv.org/abs/2511.02824

Paper Title: "Kosmos: An AI Scientist for Autonomous Discovery"
📈 Edison Scientific launched Kosmos, an autonomous AI researcher that reads literature, writes and runs code, and tests ideas, compressing about 6 months of human research into about 1 day.

Kosmos uses a structured world model as shared memory that links every agent’s findings, keeping work aligned to a single objective across tens of millions of tokens. A run reads 1,500 papers, executes 42,000 lines of analysis code, and produces a fully auditable report where every claim is traceable to code or literature.

Evaluators found 79.4% of conclusions accurate. It reproduced 3 prior human findings, including absolute humidity as the key factor for perovskite solar cell efficiency and cross-species neuronal connectivity rules, and it proposed 4 new leads, including evidence that SOD2 may lower cardiac fibrosis in humans.

Access is through Edison’s platform at $200/run, with limited free use for academics.

There are caveats: runs can chase statistically neat but irrelevant signals, longer runs raise this risk, and teams often launch multiple runs to explore different paths. Beta users estimated 6.14 months of equivalent effort for 20-step runs, and a simple model based on reading time and analysis time predicts about 4.1 months, which suggests output scales with run depth rather than hitting a fixed ceiling.
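A rough sketch of the loop both of these posts describe, assuming nothing about Edison's actual code: parallel agents push findings into a shared world model, which then proposes the next round of tasks until a report can be written. All names here (run_agent, research_cycle) are placeholders for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

# Shared memory that keeps all agents aligned to one objective.
world_model = {"facts": [], "results": [], "plans": []}

def run_agent(task):
    """Placeholder for one literature-search or data-analysis agent."""
    return {"task": task, "finding": f"result of {task}"}

def research_cycle(tasks):
    # Run analyses and literature searches in parallel.
    with ThreadPoolExecutor() as pool:
        findings = list(pool.map(run_agent, tasks))
    # Fold every agent's findings back into the shared world model.
    world_model["results"].extend(findings)
    # Propose the next round of tasks from the updated model.
    return [f"follow up on {f['task']}" for f in findings]

tasks = ["scan literature", "analyze dataset"]
for _ in range(20):                     # the post equates ~20 cycles with ~6 months
    tasks = research_cycle(tasks)[:2]   # bounded for this toy example
# A real system would end by writing a report with traceable claims.
```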
Steve Sperandeo 🇨🇦 retweeted
MemSearcher trains LLM search agents to keep a compact memory, boosting accuracy and cutting cost.

Most agents copy the full history, bloating context and slowing inference, but MemSearcher keeps only essential facts. Each turn it reads the question and memory, then searches or answers. After reading results, it rewrites memory to keep only what matters. This holds token length steady across turns, lowering compute and GPU use.

Training uses reinforcement learning with Group Relative Policy Optimization. Their variant shares a session reward across turns, teaching memory, search, and reasoning together.

Across 7 QA benchmarks it beats strong baselines, with 3B surpassing some 7B agents. It uses fewer tokens than ReAct, so long tasks stay efficient and reliable.

----

Paper – arxiv.org/abs/2511.02805

Paper Title: "MemSearcher: Training LLMs to Reason, Search and Manage Memory via End-to-End Reinforcement Learning"
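A minimal sketch of the per-turn structure described above; the function names and prompt strings are placeholders, not the paper's interface. The key point is that the rewritten memory, not the full transcript, is the only state carried between turns.

```python
def memsearcher_turn(question, memory, llm, search):
    """One turn: read question + memory, decide to search or answer,
    then rewrite memory so only essential facts carry forward."""
    decision = llm(f"Question: {question}\nMemory: {memory}\nSearch or answer?")
    if decision.startswith("SEARCH"):
        results = search(decision.removeprefix("SEARCH "))
        # Compression step: token length stays roughly constant across
        # turns because only the distilled facts survive, not the history.
        memory = llm(f"Rewrite memory '{memory}' keeping only essential "
                     f"facts from: {results}")
        return memory, None          # no answer yet, keep looping
    return memory, decision          # final answer

# The caller loops until an answer appears, supplying `llm` and `search`
# callables; the rewritten memory is the only state between turns.
```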
Steve Sperandeo 🇨🇦 retweeted
🧵 Dr. Robert Redfield (former CDC Director) said that he’s seen ~85% success in patients with Long Covid within 1–3 years (vaccine-injured are more resistant to treatment and he doesn't know exactly why). I took notes. Here’s what he reportedly uses, and what each targets 👇

Fatigue / Cognitive Dysfunction / PN (peripheral neuropathy)
• Maraviroc (Selzentry) – 300 mg 2×/day (CCR5 antagonist; blocks immune cell trafficking)
• Rapamycin (Sirolimus) – 1–2 mg/day (mTOR inhibitor; modulates immune aging, inflammation)

🧩 Probenecid(?) was also mentioned — typically used for gout (helps excrete uric acid), but it also affects OAT transporters and viral replication pathways, so perhaps that’s the rationale.

Hypercoagulation / Vascular Dysfunction
• Apixaban (Eliquis) – anticoagulant
• Plavix (Clopidogrel) – antiplatelet
• Aspirin – antiplatelet, anti-inflammatory

Doses weren't mentioned. Triple therapy is aggressive and reserved for documented hypercoagulable states, and likely requires close monitoring (a microclotting diagnostician was mentioned in Florida around 50 minutes into the podcast).

Mast Cell Stabilization
• Pepcid (Famotidine) – 40 mg 2×/day (H₂ blocker; sometimes paired with H₁ antihistamines)

“Can’t breathe, but fine when swimming?” Redfield suggested that’s often due to venous congestion — blood pooling from impaired return (e.g., pelvic compression or May-Thurner–type syndromes). 🩻 Venogram → possible stent surgery for relief.
🚨 🚨 NEW: Round 2 with ex-CDC Director, Dr. Robert Redfield!! An HIV pioneer, virologist, infectious diseases doctor, & pandemic whistleblower, he’s back with never-before-heard revelations. (FOR REAL 👀)

Our last interview made global headlines when he revealed that the original Covid viral lines likely came from Ralph Baric’s coronavirus research lab at the University of North Carolina. We go FURTHER today into the US role in Covid!

Now he returns with a new book, “Redfield’s Warning: What I Learned (But Couldn’t Tell You) Might Save Your Life.” I highly recommend it!

8 Explosive Highlights from our Interview
• COVID engineered as an aerosolized, self-spreading vaccine 🤯
• Vaccine mandates, pharma immunity, side effect denial — all “mistakes”
• ‼️ Long COVID driven by viral persistence & remarkable treatments he’s discovered
• How FDA’s Peter Marks killed Novavax
• mRNA may cause cancer via residual nucleic acid & can produce ongoing spike 🤯
• Antibody-dependent enhancement possible with boosters
• The Chronic Lyme / long COVID connection
• PREP Act immunity should be repealed & concerns about the new NIAID Director

✅ Please SUBSCRIBE to my YT channel, comment & share!! And also, consider supporting my hard work by becoming a paid subscriber on Substack or a sponsor! I’d greatly appreciate it! 🙏🩷
I'd be open to organizing a regular zoom meetup or social. Like if you'd like an invite.
Replying to @grumpygremmy
You trust your wits and band together. If you've read enough of the literature, you know that you are correct. Find others that refuse to relinquish their sanity, and build new relationships with them. Rationality is a moral duty. Have the duty of care to yourself and others to stay rational. DM for new friends. 😎
Steve Sperandeo 🇨🇦 retweeted
XBOW raised $117M to build AI hacking agents. Now someone just open-sourced it for FREE.

Strix deploys autonomous AI agents that act like real hackers - they run your code dynamically, find vulnerabilities, and validate them through actual proof-of-concepts.

Why it matters: The biggest problem with traditional security testing is that it doesn't keep up with development speed. Strix solves this by integrating directly into your workflow:

↳ Run it in CI/CD to catch vulnerabilities before production
↳ Get real proof-of-concepts, not false positives from static analysis
↳ Test everything: injection attacks, access control, business logic flaws

The best part? You don't need to be a security expert. Strix includes a complete hacker toolkit - HTTP proxy, browser automation, and a Python runtime for exploit development. It's like having a security team that works at the speed of your CI/CD pipeline.

Better still, the tool runs locally in Docker containers, so your code never leaves your environment.

Getting started is simple:
- pipx install strix-agent
- Point it at your codebase (app, repo, or directory)

Everything is 100% open-source! I've shared the link to the GitHub repo in the replies!
Steve Sperandeo 🇨🇦 retweeted
When I was a kid, bedtime was 9 pm. I couldn't wait to be a grownup so I could go to bed anytime I wanted. Turns out that is 9 pm.
𝔄𝔩𝔠𝔥𝔢𝔪𝔦𝔰𝔱
Replying to @xKnowledgeBANK
Time machine opens all doors
Steve Sperandeo 🇨🇦 retweeted
The recently released DeepSeek-OCR paper has huge implications for AI memory, long-context problems, and token budgets. It frames the OCR model not only as a document-reading tool but as an experiment in how models can “remember” more by storing data as images rather than text tokens.

With this paper, DeepSeek really found a new way to store long context: turning text into images and reading them with optical character recognition, so the model keeps more detail while spending fewer tokens.

DeepSeek's technique packs the running conversation or documents into visual tokens made from page images, which are 2D patches that often cover far more content per token than plain text pieces. The system can keep a stack of these page images as the conversation history, then call optical character recognition only when it needs exact words or quotes. Because layout is preserved in the image, things like tables, code blocks, and headings stay in place, which helps the model anchor references and reduces misreads that come from flattened text streams.

The model adds tiered compression, so fresh and important pages are stored at higher resolution while older pages are downsampled into fewer patches that still retain the gist for later recovery. That tiering acts like a soft memory fade where the budget prefers recent or flagged items but does not fully discard older context, which makes retrieval cheaper without a hard cutoff.

Researchers who reviewed it point out that text tokens can be wasteful for long passages, and that image patches may be a better fit for storing large slabs of running context. On the compute side, attention cost depends on sequence length, so swapping thousands of text tokens for hundreds of image patches can lower per-step work across layers. There is a latency tradeoff because pulling exact lines may require an optical character recognition pass, but the gain is that most of the time the model reasons over compact visual embeddings instead of huge text sequences.

DeepSeek also reports that the pipeline can generate synthetic supervision at scale by producing rendered pages and labels, with throughput around 200,000 pages per day on 1 GPU.

The method will not magically fix all forgetting, because it still tends to favor what arrived most recently, but it gives the system a cheaper way to keep older material within reach instead of truncating it. For agent workloads this is appealing, since a planning bot can stash logs, instructions, and tool feedback as compact pages and then recall them hours later without blowing the token window. Compared with vector databases and retrieval-augmented generation, this keeps more memory inside the model context itself, which reduces glue code and avoids embedding drift between external stores and the core model.

---
technologyreview.com/2025/10/29/1126932/deepseek-ocr-visual-compression
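To make the memory idea concrete, here is a toy sketch (my own illustration, not DeepSeek's code) of tiered visual memory: text is rendered to page images, older pages get downsampled, and OCR is paid for only when exact wording is needed. The render_page/VisualMemory names and the keep-newest-3 policy are assumptions for illustration.

```python
from PIL import Image, ImageDraw

def render_page(text, size=(800, 1000)):
    """Render a chunk of conversation or document text as a page image."""
    img = Image.new("RGB", size, "white")
    ImageDraw.Draw(img).text((10, 10), text, fill="black")
    return img

class VisualMemory:
    def __init__(self):
        self.pages = []   # newest pages last

    def add(self, text):
        self.pages.append(render_page(text))
        # Tiered compression: every page except the newest 3 is halved
        # again on each add, so older context fades to fewer patches
        # without being discarded outright.
        for i, page in enumerate(self.pages[:-3]):
            self.pages[i] = page.resize(
                (max(1, page.width // 2), max(1, page.height // 2)))

    def recall_exact(self, index, ocr):
        # Pay the OCR cost only when exact words or quotes are needed;
        # the rest of the time the model reasons over the compact images.
        return ocr(self.pages[index])
```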