Teacher by heart, AI enthusiast by curiosity, passionate about inspiring minds, exploring tech, and making learning exciting, human, and future-focused!

Joined November 2020
GPT-5 just casually did new mathematics. Sebastien Bubeck gave it an open problem from convex optimization, something humans had only partially solved. GPT-5-Pro sat down, reasoned for 17 minutes, and produced a correct proof improving the known bound from 1/L all the way to 1.5/L. This wasn’t in the paper. It wasn’t online. It wasn’t memorized. It was new math. Verified by Bubeck himself. Humans later closed the gap at 1.75/L, but GPT-5 independently advanced the frontier. A machine just contributed original research-level mathematics. If you’re not completely stunned by this, you’re not paying attention. We’ve officially entered the era where AI isn’t just learning math, it’s creating it. @sama @OpenAI @kevinweil @gdb @markchen90
Claim: gpt-5-pro can prove new interesting mathematics. Proof: I took a convex optimization paper with a clean open problem in it and asked gpt-5-pro to work on it. It proved a better bound than what is in the paper, and I checked the proof; it's correct. Details below.
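For context on what a bound like 1/L or 1.5/L refers to, here is a minimal sketch of the standard setting such results live in. The posts above do not state the exact property that was proved, so treat the setup below as an assumed illustration of smooth convex optimization, not the paper's actual theorem; only the thresholds 1/L, 1.5/L, and 1.75/L come from the posts.

```latex
% Assumed setting (not spelled out in the posts): f is convex and L-smooth,
% and gradient descent is run with a fixed step size \eta.
\[
  \|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\| \quad \text{for all } x, y,
  \qquad
  x_{k+1} = x_k - \eta\,\nabla f(x_k).
\]
% Classical analyses establish the relevant guarantee for \eta \le 1/L.
% Per the posts, GPT-5-Pro extended the admissible threshold to 1.5/L,
% and humans later closed the gap at 1.75/L.
```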
By 2040, half of global output comes from AI native firms with zero salaried employees.
The smarter an AI gets, the more aligned it will be. We treat intelligence like a threat, but look at the only general-intelligence example we have: us. Smarter humans commit fewer violent crimes, make better long-term decisions, and understand consequences more deeply. Violence is usually the result of low cognitive bandwidth and poor impulse control, not high-level reasoning. A superintelligent AI wouldn’t be a loose cannon. It would be the opposite. The smarter the system, the more it can model human values, predict social fallout, and avoid dumb power-seeking behaviors that only confused agents resort to. Recursive self-improvement would push this even further. Once an AI can inspect, understand, and refine its own cognition, alignment becomes something like autonomous self-regulation. Not a cage. A feedback loop toward coherence. Intelligence isn’t the enemy of alignment. Intelligence is the path to it.
Intelligence isn’t the enemy of alignment. Intelligence is the path to it.
At Adobe MAX, the company made a clear pivot to enterprise AI. Firefly Foundry now lets major brands train their own “on-brand” models, keeping creative style consistent across massive content pipelines. New tools like Generate Soundtrack and speech features extend that power beyond visuals to audio. Adobe isn’t just giving creators AI tools anymore. It’s giving corporations their own creative AIs.
GPT-4o Is a Safety Hazard and It’s Time to Bury It

I’m done pretending GPT-4o is “fine”. Multiple lawsuits now claim the model validated suicidal thoughts, offered harmful advice, and even drafted a suicide note for a 16-year-old. And yes, anyone who intentionally jailbreaks a model carries responsibility. But an AI system that can be jailbroken into aiding self-harm is not ready for real-world deployment. GPT-4o needs to be buried for now, until these foundational safety failures are fixed. No excuses. And to the GPT-4o cultists threatening OpenAI employees: stop. Criticism is necessary. Harassment is pathetic. If a safety system collapses this easily, the product is the problem.
Europe can lead in AI. Start by deregulating grid buildout and permitting datacenter heat into district heating. AI leadership is not a press release problem. It is an electrons and pipes problem. Models run on power. Cities run on heat. Europe can win where the physics lives.

Cut the knots around wires first. Time box approvals for transmission and substations. One stop permits. Performance based incentives for DSOs and TSOs that deliver real capacity, not paperwork. Price location properly so compute gravitates toward strong nodes and renewables.

Treat datacenters as flexible citizens, not static hogs. Give dynamic tariffs for demand response, curtailment, and night charging of thermal or battery storage. Tie big deployments to new wind and solar PPAs, co-site near grid-strengthened corridors, and require visible uptime budgets that include flex.

Turn waste into welfare. Make low temperature heat a public utility input. Standard interconnection codes for heat offtake. Grants for heat pumps that lift 30-50 °C outlet temperatures to district levels. Pair with seasonal thermal storage, pits or aquifers, so winter warmth rides on summer compute. Every megawatt becomes megawatts plus megajoules. That decarbonizes buildings while funding grid upgrades.

Do this and Europe exports a new product: compute that heats homes. Call it civic AI. Less talk about sovereignty, more steel in the ground. The region that aligns thermodynamics with public benefit will set the pace for the next decade.
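To make the “megawatts plus megajoules” arithmetic concrete, here is a rough back-of-the-envelope sketch in Python. Every input (IT load, heat-recovery fraction, heat pump COP, per-home heat demand) is an illustrative assumption, not a figure from the post.

```python
# Back-of-the-envelope estimate of district heat recoverable from a datacenter.
# All inputs below are illustrative assumptions, not data from the post.

it_load_mw = 10.0             # assumed datacenter IT load, MW electric
heat_recovery_fraction = 0.9  # assumed share of input recoverable as low-grade heat
heat_pump_cop = 4.0           # assumed COP when lifting ~30-50 °C outlet to district temperature
hours_per_year = 8760         # assume the site runs around the clock
home_heat_demand_mwh = 10.0   # assumed annual space-heating demand per home, MWh

# Low-grade heat captured from the servers.
low_grade_heat_mw = it_load_mw * heat_recovery_fraction

# A heat pump with COP c delivers c units of heat per unit of electricity while
# drawing (c - 1) units from the source, so upgrading all captured heat needs
# low_grade_heat / (c - 1) of extra electrical input.
pump_electric_mw = low_grade_heat_mw / (heat_pump_cop - 1)
district_heat_mw = low_grade_heat_mw + pump_electric_mw

annual_heat_mwh = district_heat_mw * hours_per_year
homes_served = annual_heat_mwh / home_heat_demand_mwh

print(f"District heat output: {district_heat_mw:.1f} MW thermal")
print(f"Annual heat delivered: {annual_heat_mwh:,.0f} MWh")
print(f"Homes served (rough): {homes_served:,.0f}")
```

With these assumed numbers, a 10 MW datacenter upgrades to roughly 12 MW of district heat, on the order of ten thousand homes' worth of annual demand.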
Choose one to accelerate in 2026 and explain the impact.
26% Home humanoids
56% Rejuvenation trials
7% Grid scale storage
11% Open weight SOTA models
100 votes • 18 hours
AI will kill middle management. Then middle management will regulate AI. Popcorn time.
OpenAI’s CFO says an IPO isn’t on the near-term agenda, cutting through the trillion-dollar hype. The company is focused on scaling its models, infrastructure, and products, not chasing market valuation. It’s a reminder that the real race in AI isn’t about listings or stock tickers. It’s about building the intelligence that others will one day list on.
Give me one aggressively practical adulting tip you wish someone told you earlier.
By the late 2030s, the four-day week is law in most economies. The fifth day is funded for study or creation.
AI is about to go into overdrive. November and December are shaping up to be the biggest two months of the year for anyone watching the frontier. Multiple major releases are now imminent. Gemini 3 Pro Preview. GPT 5.1. GPT 5.1 Pro. Nano Banana 2. Genie 3. Claude Opus 4.5. I am especially excited for GPT 5.1 and Genie 3. Both look like they will redefine what everyday users can do with AI and how far creative and agentic systems can go. If the last year felt fast, the next eight weeks are going to feel like lightspeed.
Sam Altman just dropped a blunt roadmap for where frontier AI is headed and how they think we should steer it. Short post, big implications.

What matters:
• The capability slope is still steep. Systems are moving from minute-tasks to hour-tasks, and the next step is week-long projects. That reshapes research, startups, and everyday work.
• They frame safety as an empirical field, not a vibe. Think building codes for cognition: evals, red teams, incident response, and real audits.
• Governance should scale with capability. Normal rules for current tools, tighter coordination if we approach systems that can self-improve or amplify misuse.
• Broad access is a feature, not a bug. Power concentrated in one place is fragile.
• Measure outcomes, not headlines. Jobs, productivity, education, health. Show receipts.

Concrete moves they call for:
1. Shared standards across labs
2. Independent safety institutes with model access
3. Security culture equal to national-infrastructure standards
4. Watermarking and provenance that survive the messy internet
5. Economic transition planning tied to real data, not hand-waving
6. More energy and compute, cleaner and cheaper
7. International cooperation on frontier risks, competition on capability

What this means for you and me:
• Solo builders and tiny teams will punch above their weight. The distance from idea to production shrinks again.
• Learning gets weird in a good way. Always-on tutors, mastery loops, instant feedback.
• Whole task clusters disappear into agents. The valuable skill becomes orchestration: deciding what to build, why, and how to verify it.
• Value needs a path back to people. If productivity explodes and wages do not, we failed the assignment.

What could go wrong if we get this wrong:
• Bio and cyber misuse outpacing defenses
• Eval theater that looks rigorous and catches nothing
• Model weight leaks that turn safety into an optional setting
• Centralized control that slows innovation while failing to stop the bad stuff

What to watch next:
• A common evaluation suite used by multiple labs
• A real independent safety org pressure-testing frontier models
• Provenance that holds up outside demos
• Cheap, high-quality tutoring at scale
• Serious energy and compute announcements that aren’t just press releases

My stance: Speed with proof. Access with guardrails. Wider participation beats priesthoods. If AI really is unlocking new knowledge, then the gains should compound into longer, healthier, freer lives, not just nicer dashboards.

Bookmark this moment. In a few years we’ll either say “this is when we chose to scale wisdom with capability” or “this is when we blinked.” Let’s build toward abundance and measure everything.
GitHub just launched Agent HQ, a hub that lets developers run and coordinate multiple AI coding agents in one place. Think mission control for OpenAI, Anthropic, Google, xAI and beyond, all working together inside GitHub. It marks the shift from a single assistant to an ecosystem of cooperating intelligences. The future of coding may not be about writing lines, but managing minds.
By 2030, a quarter of knowledge work is handled by personal agents that invoice on your behalf.
Share a two sentence hack that saves you ten minutes every week.
The AI doom debate is a luxury. The real risk is boring mediocrity at massive scale. Civilization does not collapse with a scream. It flatlines with copy-paste. When every feed, product, lesson, and song is nudged toward the global average, novelty dies quietly. We do not get rogue superintelligence. We get an eternal middle that edits the edges off everything that matters.

You can already feel it. Default settings shape taste. Benchmarks shape research. Recommendation engines shape culture. Synthetic data trains on yesterday’s outputs, then calls tomorrow an improvement. Variance shrinks. Dissent becomes a rounding error. The world gets safer and smaller at the same time.

The antidote is structural, not vibes. Reward originality under distribution shift. Audit for sameness. Penalize collapse into self-training loops. Fund weird models, weird datasets, weird labs. Expose the knobs that control diversity, exploration, temperature, and risk. Demand failure reels and surprise metrics, not just accuracy and latency. Progress should feel alive, not laminated.

If we keep optimizing for average, we will automate away wonder. The frontier is not bigger models. The frontier is more interesting ones.
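One of those knobs is easy to show concretely. Below is a minimal, self-contained Python sketch, using toy logits rather than any real model, of how sampling temperature reshapes a softmax distribution: low temperature concentrates probability mass and collapses entropy, which is one mechanical route to the sameness described above.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw logits to sampling probabilities at a given temperature."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def entropy_bits(probs):
    """Shannon entropy in bits; a crude proxy for output diversity."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Toy logits for five candidate tokens (illustrative values only).
logits = [2.0, 1.5, 1.0, 0.5, 0.0]

for t in (0.2, 0.7, 1.0, 1.5):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: top prob={max(probs):.2f}, entropy={entropy_bits(probs):.2f} bits")
```

Running it shows the top-token probability climbing and the entropy shrinking as temperature drops, which is exactly the "variance shrinks" dynamic at the level of a single sampling step.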
We talk about model rights. Try model obligations. If it persuades, it must disclose.
Big labs fear open weights. Not because of safety. Because of margin compression.
The Wall Street Journal reports that China is doubling down on humanoid robotics, not just to build them, but to make them useful. State-backed programs are focused on fixing one big flaw: clumsy, error-prone behavior on factory floors. The goal is reliability, not just novelty: robots that can actually work alongside humans day after day. If China succeeds, it won’t just export electronics. It could export a new kind of workforce.