I'd bet on this guy, @vsiv, as someone with a strong chance of solving this problem.
Emerald AI (@EmeraldAi_) emerged from stealth on July 1, 2025 with a $24.5 million seed round led by Radical Ventures and NVentures, joined by CRV, Neotribe and AMPLO, plus an unusually high‑profile slate of strategic angel investors including Jeff Dean, John Kerry, Fei‑Fei Li and John Doerr. The company is helmed by CEO Dr. Varun Sivaram, formerly CTO of Ørsted Americas and senior counselor to the U.S. Special Presidential Envoy for Climate, with Boston University’s Prof. Ayse Coskun as chief scientist and a bench of senior engineers recruited from Amazon, Intel and national‑lab HPC programs. Management pedigree and investor roster give Emerald immediate credibility with hyperscalers, utilities and regulators, differentiating it from the crowded field of early‑stage “AI‑energy nexus” start‑ups. (PR Newswire, Emerald AI)
The firm’s product, Conductor, is a pure‑software orchestration layer that sits above standard cluster schedulers and GPU telemetry. It continuously re‑packs latency‑tolerant AI training, fine‑tuning and batch inference jobs across federated data‑center fleets, modulating aggregate power draw in seconds‑to‑hours time frames in response to real‑time or day‑ahead grid signals. Because the approach requires no new hardware, storage or facility rewiring, deployment CAPEX is effectively nil and time‑to‑value can align with software‑defined data‑center rollout cycles rather than multi‑year interconnection upgrades. Conductor can export OpenADR and IEEE 2030.5 signals to utilities while maintaining internal SLA enforcement at the container or GPU‑instance level, positioning Emerald to monetize multiple value streams—capacity, ancillary services and coincident‑peak mitigation—without infringing on hyperscaler uptime guarantees. (Emerald AI, ar5iv)
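Emerald has not published Conductor’s internals, so the following is only a minimal sketch of the general mechanism the paragraph describes: under a grid‑signal power cap, throttle latency‑tolerant training and batch work first and never touch SLA‑bound inference. The job names, power figures and the `min_fraction` QoS floor are hypothetical illustrations, not Emerald parameters.

```python
"""Toy sketch of workload-aware power shaping (not Conductor's actual
algorithm; Emerald has not published internals)."""
from dataclasses import dataclass

@dataclass
class Job:
    name: str
    power_kw: float         # current measured draw
    latency_tolerant: bool  # batch/training vs. interactive inference
    min_fraction: float     # lowest throttle level that still meets QoS

def shape_to_cap(jobs: list[Job], cap_kw: float) -> dict[str, float]:
    """Return per-job power fractions (1.0 = untouched) meeting cap_kw."""
    alloc = {j.name: 1.0 for j in jobs}
    draw = sum(j.power_kw for j in jobs)
    # Throttle flexible jobs, most power-hungry first, until under the cap.
    for j in sorted(jobs, key=lambda j: j.power_kw, reverse=True):
        if draw <= cap_kw:
            break
        if not j.latency_tolerant:
            continue  # SLA-bound: never curtailed in this sketch
        shed = min(draw - cap_kw, j.power_kw * (1 - j.min_fraction))
        alloc[j.name] = 1 - shed / j.power_kw
        draw -= shed
    if draw > cap_kw:
        raise RuntimeError("cap unreachable without breaching SLAs")
    return alloc

jobs = [Job("llm-pretrain", 900, True, 0.5),
        Job("batch-embed", 300, True, 0.0),
        Job("prod-inference", 400, False, 1.0)]
print(shape_to_cap(jobs, cap_kw=1200))  # 25% fleet curtailment from 1600 kW
```

A production system would enforce the resulting fractions through scheduler preemption and GPU power caps rather than a single knob, but the priority ordering is the essence of workload‑aware demand response.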
The proof point is a Phoenix field demonstration executed under EPRI’s DCFlex program with Oracle Cloud Infrastructure, NVIDIA and Salt River Project. On a 256‑GPU H100 cluster, Conductor delivered a 25 % curtailment for 3 h during a peak‑demand event while keeping machine‑learning throughput within contractual QoS bounds. The test was software‑only, verified by utility revenue‑grade meters, and is being extended to multi‑megawatt scale in 2H25. EPRI’s task‑force briefing indicates hyperscalers are willing to accept up to 25 % curtailment for as much as 200 h per year in exchange for accelerated grid interconnection of 1 GW campuses, giving Conductor a clear commercial wedge. (ar5iv, Latitude Media, ERCOT)
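A back‑of‑envelope scale‑up puts the demo in context. The 700 W figure is NVIDIA’s published H100 SXM TDP; the 1.5× host/network/cooling multiplier and the straight‑line extrapolation to a 1 GW campus are assumptions for illustration only.

```python
# Scale the Phoenix demo's 25% x 3 h curtailment to the 1 GW campuses
# referenced in the EPRI briefing. Overhead multiplier is an assumption.
GPU_TDP_KW = 0.7                          # published H100 SXM TDP
cluster_kw = 256 * GPU_TDP_KW * 1.5       # ~269 kW demo cluster (assumed 1.5x)
demo_shed_kwh = 0.25 * cluster_kw * 3     # ~202 kWh shed in the 3 h test
campus_mw = 1_000                         # 1 GW campus
event_shed_mwh = 0.25 * campus_mw * 3     # 750 MWh per 3 h event
annual_mwh = 0.25 * campus_mw * 200       # at the 200 h/yr tolerance ceiling
print(f"{demo_shed_kwh:.0f} kWh in the demo; {event_shed_mwh:.0f} MWh per "
      f"event and {annual_mwh:,.0f} MWh/yr at 1 GW scale")
```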
Market pull is extraordinary: U.S. data‑center load was about 17 GW in 2022 but multiple forecasts now peg incremental demand at 50‑100 GW by 2030, largely from GPU farms for generative AI. Interconnection queues in PJM, ERCOT and CAISO already stretch to 8‑10 years, and NERC warns of reliability deficits in almost every region. Power, not silicon or real estate, is the gating factor for AI infrastructure, and solutions that unlock “headroom” without new generation enjoy privileged economics. Lancium’s analysis suggests demand‑response capacity from large flexible loads could scale to 2,000 GW globally by 2050; even a 5 % U.S. share would dwarf today’s entire commercial DR market. (Lancium)
Emerald’s near‑term revenue model is expected to blend subscription licensing per MW of controllable load with gain‑share from utility and ISO programs. At an illustrative $50 k/MW‑yr platform fee and 5 GW under contract by 2030—a realistic scenario if just two hyperscalers standardize on Conductor—annual recurring revenue would reach $250 million with gross margins comparable to high‑end enterprise software. Because deployment is software‑centric, working‑capital intensity stays low; however, scaling to grid‑critical reliability likely forces SOC 2 Type II, ISO 27001 and NERC CIP compliance, adding opex. Longer term, Emerald could aggregate flexible data‑center fleets into a virtual power plant and clear wholesale markets directly, compressing intermediaries’ margin but exposing the firm to commodity price risk and collateral requirements similar to those faced by Voltus or AutoGrid. (PR Newswire)
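For the record, the arithmetic behind the illustrative scenario; the fee and contracted capacity come from the paragraph above, and gain‑share revenue is excluded because no split has been disclosed.

```python
# Illustrative unit economics: $50k/MW-yr platform fee on 5 GW under contract.
fee_per_mw_yr = 50_000
mw_under_contract = 5_000   # 5 GW
arr = fee_per_mw_yr * mw_under_contract
print(f"ARR at ${fee_per_mw_yr/1e3:.0f}k/MW-yr on "
      f"{mw_under_contract/1e3:.0f} GW: ${arr/1e6:.0f}M")  # $250M
```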
Competitive pressure bifurcates along hardware and software lines. Hardware‑centric plays—Exowatt’s solar‑thermal modules targeting sub‑$0.04/kWh dispatchable supply, Oklo’s micro‑reactors, Fervo’s geothermal wells and utility‑scale battery integrators like Form Energy—attack the same pain point by adding green electrons, but carry multi‑year permitting, construction and balance‑sheet risk. Software aggregators such as Voltus, CPower and AutoGrid already manage 6‑10 GW of distributed resources each, but focus on standby generators, HVAC and BESS rather than the compute workload itself; their APIs touch the breaker panel, whereas Emerald manipulates the job scheduler, yielding far finer granularity and millisecond telemetry. Lancium and Crusoe occupy a middle ground, building new sites where cheap curtailed renewables are abundant and using proprietary schedulers to ride through price volatility, yet they still require green‑field real estate and transmission upgrades; Emerald can unlock latent capacity inside brownfield hyperscaler campuses. (Wikipedia, CPower Energy, Voltus, Uplight, Lancium)
Alternative process improvements exist but none replicate Emerald’s low‑friction path: backup‑battery dispatch and UPS ride‑through can supply regulation services but add capex and shorten battery life; geographic load‑shifting, as pioneered by Google, relies on global fleet coverage and cannot solve regional interconnection queues; advanced immersion cooling trims PUE overhead but leaves the underlying IT draw no more dispatchable (see the sketch below); and SMR or geothermal on‑site generation faces social‑license and timeline hurdles. Virtual power‑plant studies by RMI show that total U.S. demand flexibility could offset 62 GW of peak load by 2030, yet the share addressable by data centers remains limited without workload‑aware orchestration of the type Emerald offers. (IEEE Spectrum, RMI)
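A toy comparison makes the cooling point concrete; all figures here are illustrative and not drawn from any cited study.

```python
# Why better cooling does not substitute for workload flexibility:
# a PUE retrofit shrinks total draw once and permanently, while
# orchestrated curtailment is dispatchable on demand.
it_load_mw = 100                      # hypothetical campus IT load

def facility_mw(pue: float) -> float:
    """Total facility draw for a given power usage effectiveness."""
    return it_load_mw * pue

static_saving = facility_mw(1.5) - facility_mw(1.2)  # 30 MW, always-on
dispatchable = 0.25 * it_load_mw                     # 25 MW, shed when called
print(f"Retrofit shaves {static_saving:.0f} MW permanently; orchestration "
      f"can shed {dispatchable:.0f} MW on demand during grid events")
```

The retrofit saving is real but static; only the orchestrated share can be bid into capacity or ancillary‑service markets.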
Policy momentum favors flexibility. ERCOT’s Large Flexible Load Task Force is drafting interconnection rules that explicitly condition queue priority on verifiable load‑shed capability, and a new Texas law empowers system operators to disconnect non‑critical data‑center load during emergencies—effectively a regulatory mandate for software like Conductor if operators wish to avoid involuntary blackouts. Similar conversations are under way in PJM and ISO‑NE stakeholder forums. This backdrop accelerates customer adoption but also creates compliance risk: if regulators hard‑code performance floors, Emerald must guarantee dispatch or face penalties akin to generation capacity markets. (ERCOT, The Washington Post)
Key risks include execution—Conductor must integrate with hyperscaler schedulers such as Google’s Borg, Kubernetes and Slurm variants, and with NVIDIA’s DCGM telemetry stack, without degrading job throughput—plus potential pushback from CIOs wary of exposing workload telemetry to third‑party software. Big Tech could absorb Emerald’s IP; teams at Google DeepMind and Microsoft’s Azure Energy program already experiment with similar optimization. On the utility side, tariff design may erode economics if nodal prices or capacity payments fall; conversely, if AI growth triggers new build‑out of firm generation or capacity‑market reforms, load flexibility premiums could rise. Finally, capital requirements will escalate as Emerald moves from pilot to broad deployment: SOC compliance, 24×7 support and collateral posting for wholesale‑market participation could push annual cash burn above $40 million, forcing a Series B in 2026 amid an uncertain venture cycle.
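On the integration point: NVML, exposed in Python via the nvidia‑ml‑py package, already provides per‑GPU power capping, which is one plausible enforcement primitive for a layer like Conductor, though whether Emerald actually uses it is not public. A minimal sketch, assuming root privileges and the standard bindings:

```python
"""Sketch of GPU-level enforcement via NVML power caps (one possible
mechanism; not confirmed as Conductor's). Requires nvidia-ml-py and root."""
import pynvml

def apply_power_cap(fraction: float) -> None:
    """Cap every GPU at `fraction` of its maximum power limit."""
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            h = pynvml.nvmlDeviceGetHandleByIndex(i)
            lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(h)
            target_mw = max(lo, int(hi * fraction))  # clamp to card minimum
            pynvml.nvmlDeviceSetPowerManagementLimit(h, target_mw)
    finally:
        pynvml.nvmlShutdown()

apply_power_cap(0.75)  # mirror the Phoenix demo's 25% curtailment
```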
For the hedge‑fund IC, Emerald is not yet a direct equity opportunity—the capital structure is private—but the firm’s trajectory influences multiple public markets. If Conductor scales, upside accrues to hyperscalers and data‑center REITs via faster capacity additions and lower interconnection costs, while downside hits merchant gas‑peaker developers and certain transmission capex plays whose investment thesis hinges on scarcity rents. Long positions in cloud REITs with stranded land banks, paired with shorts in peaker‑heavy IPPs, capture the asymmetry. Semiconductor suppliers, particularly Nvidia, stand to benefit from reduced siting friction, reinforcing a secular GPU demand bull case. Finally, utilities with high‑growth service territories but constrained load pockets—SRP, Dominion and Duke Energy—could see deferred distribution capex and improved ROE profiles; investors should monitor rate‑case filings for language around “controllable large loads” to anticipate earnings‑quality inflections.
In sum, Emerald AI offers a capital‑light, regulator‑aligned pathway to unlock dormant grid capacity precisely when AI’s power appetite threatens to outstrip generation build‑out. The technology’s early proof, deep bench, and blue‑chip endorsements warrant close tracking for pre‑growth equity entry, while the second‑order effects open portfolio‑level positioning across utilities, data‑center infrastructure and natural‑gas dispatch assets.