Building a better future with AI @thesignalai_

I sat down with the CEO of Microsoft. Here's how Satya Nadella sees the future of AI:
The UK just got serious about AI infrastructure. I visited the data centre that's powering it:

@nebiusai's first UK data centre specs:
→ 3 data halls, 126 racks, 16MW of power
→ 11PB of storage capacity
→ Custom-built Gen5 hardware designed in-house
→ NVIDIA Quantum-X800 InfiniBand networking

The GPU power is massive:
Phase 1: 4,000 NVIDIA HGX B300 GPUs
Phase 2: +3,000 more coming online

Prima Mente is already using this infrastructure to train models for Alzheimer's and Parkinson's research. They're running dozens of experiments simultaneously, developing better diagnostics. All compute power stays in the UK, keeping medical data local while moving faster.

Who benefits from this:
• Startups building frontier AI models
• Universities conducting research
• The NHS developing healthcare AI
• Fintechs and biotech companies

My takeaway: As Jensen Huang said on his recent UK visit, infrastructure has been the missing piece. For too long, UK builders had to rely on limited domestic compute or send data overseas. Nebius is addressing that gap head-on.

Now you can train large-scale models, run massive experiments, and deploy real applications, all on British soil. The UK AI ecosystem just levelled up.
Grokipedia has been live for 1 week.

The Wikipedia problem Musk identified: propaganda doesn't work through lies but through selective truth. Facts can be technically accurate yet completely misrepresent reality when:
→ Critical context is omitted
→ Information is improperly weighted
→ Anonymous editors control what gets included

Musk claims Grokipedia already outperforms Wikipedia on technical subjects like physics, with more accurate and realistic descriptions of people and events.

My takeaway: Every information system reflects its creators' biases, whether that's volunteer editors or AI training data. The real test will be Grokipedia's ability to win the battle for clicks against Wikipedia's SEO dominance. I believe the best approach is treating it as another perspective to cross-reference rather than a definitive source of truth. At least with AI-generated content, the underlying logic should be more auditable and consistent than an anonymous Wikipedia editor's.
OpenAI is losing the enterprise AI race. Sam Altman's defensive response proves it:

When Brad Gerstner asked how a $13B-revenue company justifies $1.4 trillion in compute commitments, Sam's response was telling: "If you want to sell your shares, I'll find you a buyer."

Not an answer. A threat.

Meanwhile, the numbers tell the real story. Enterprise LLM API market share:
→ OpenAI: 50% (2023) → 25% (2025)
→ Anthropic: 12% (2023) → 32% (2025)
→ Google: 7% (2024) → 20% (2025)

Anthropic didn't waste resources on:
• Image generation
• Consumer devices
• $1.4 trillion moonshots

In 18 months, OpenAI went from dominant to second place.

But there's an important detail. While OpenAI burns through billions and Anthropic raises rounds, Google sits on $100B+ in cash. They could price tokens at negative margins forever, drowning out both OpenAI and Anthropic.

• OpenAI: lives fundraise to fundraise
• Anthropic: needs constant capital infusions
• Google: could subsidise losses indefinitely

Google doesn't need to win on model quality. They just need to make it unprofitable for everyone else.

My takeaway: OpenAI is caught between Anthropic winning on enterprise trust and Google threatening to vaporise the entire business model. That's a hard place to be when you're spending 100x your revenue and watching your 50% enterprise market share evaporate.

Sam says he wants to go public so critics can "short the stock and get burned." At this rate, they might not have to wait long.
NEWS: AI agents will automate 80% of work. But not all agents are created equal.

I sat down with Mihir Shukla (Co-founder & CEO of @AutomationAnywh) to cut through the noise.

Three types of AI agents exist:

1. Personal productivity agents
↳ Email summaries and basic tasks
↳ Helpful but not transformational
↳ 10-15% productivity gains

2. Captive AI agents
↳ Locked inside CRM/ERP systems
↳ Built-in conflict between savings and sales
↳ 10-15% productivity gains

3. Process agents (the game-changer)
↳ Work across multiple applications
↳ Autonomous or human-assisted
↳ 40-80% productivity gains

Process agents don't just automate existing work faster. They enable work that was impossible before.
→ Petrobras: an AI agent reasoned through complex Brazilian tax laws, saving $120M in 3 weeks
→ Supply chain: an agent balances inventory 4x daily across 32 warehouses, saving $200M

The next 5 years:
• New vocabulary: work orchestrators, process reasoning engines
• New agentic solutions that don't look like today's apps
• Almost every knowledge worker's role will evolve

My takeaway: The first two automate tasks within silos. Process agents break silos and transform entire workflows. That's the difference between 15% productivity gains and 80% process transformation. The AI-first autonomous enterprise is being built right now.
NEWS: Adobe just dropped Custom Models for Firefly AI. You can now train Firefly on your own images to generate content that matches your unique style.
NEWS: Adobe Express is now inside ChatGPT. You can design directly from your chats:
NEWS: Oracle just dropped massive AI updates. They're building the entire stack:

Most companies are betting on one layer. Oracle is building all three:

Layer 1: Infrastructure
↳ 800,000 NVIDIA GPUs (Zettascale10), the largest AI supercomputer in the cloud
↳ 50,000 AMD MI450 GPUs launching Q3 2026, the first hyperscaler to deploy them at scale
↳ Powers the Stargate project with OpenAI in Abilene, Texas

Layer 2: Platform & Data
↳ Database 26ai: AI embedded directly into the database with automated vector indexing
↳ AI Agent Studio: support for OpenAI, Anthropic, Meta, Cohere, Google, xAI
↳ AI Agent Marketplace: 24+ partners, one-click deployment, no-code customisation

Layer 3: Applications
↳ Pre-built AI agents embedded natively in Oracle Fusion Applications
↳ Deploy where employees already work, inside ERP, HCM, SCM, and CX workflows
↳ No app switching required

I spoke with Pradeep Vincent (SVP & Chief Technical Architect, OCI) and Natalia Rachelson (VP of Cloud Applications) this week. The strategy is clear:

1. Own the infrastructure layer
Training ground for Grok, ChatGPT, and frontier models

2. Stay model-agnostic at the platform layer
Partner with the best instead of building proprietary models

3. Embed AI agents natively in applications
Deploy where customers already work, with no app switching

My takeaway: Larry Ellison said it best at the keynote: "This AI training... is the largest, fastest-growing business in human history. Bigger than the railroads, bigger than the industrial revolution." Most companies are fighting for a piece of it.

Oracle is building the entire stack:
→ Infrastructure that powers the frontier models
→ Platforms where enterprises can actually use them
→ Applications where the work gets done

They're not betting on one layer winning. They're betting they can win across all three.