AI Engineer. Building in silence.

Joined December 2024
Smart, but this is a workaround and doesn't address the fundamental problem: the current context-window substrate is lacking. A better context model should be implemented rather than hacks layered on top to patch over it.
New on the Anthropic Engineering blog: tips on how to build more efficient agents that handle more tools while using fewer tokens. Code execution with the Model Context Protocol (MCP): anthropic.com/engineering/co…
We should rethink context as a whole; append-only logs are limited. We need to rethink the entire substrate. We shouldn't treat the context as just a stack, but as something that can be fully manipulated.
I just spent a week deriving the formalization of Context-Oriented Programming. What I found isn't just a new way to build AI systems. It's a complete paradigm with axioms, a calculus, composition laws, and resource economics. Let me show you the foundation. 🧵

Here's what seemed strange: when Anthropic published their MCP + code execution article, they described:
* Progressive disclosure
* Context as a scarce resource
* Filesystem as architecture
* Skills as reusable units

These aren't random design choices. They're implementing a formal framework without naming it.

Axiom 1: Understanding as Primitive (UAP)

In Context-Oriented Programming:
∀ specification S, ∃ understanding U such that: U(S) → behavior B

Understanding is not a feature. Understanding is the atomic computational operation. Everything else derives from this.

Traditional computing: Parse → Execute
Context-Oriented: Understand → Emerge

Axiom 2: Context as Resource (CAR)

Context C is finite: |C| ≤ C_max
∀ operation O: O consumes context budget

This is profound. Context isn't just "what the model sees." Context is to COP what memory is to traditional computing: the fundamental scarce resource that constrains everything. Every design decision flows from this constraint.

Axiom 3: Progressive Disclosure (PD)

∀ information I, ∃ hierarchy H = {h₁, h₂, ..., hₙ} where:
|h₁| << |h₂| << ... << |hₙ|
Load(hᵢ) → Decision → Load(hᵢ₊₁) | Terminate

You can't load everything upfront (that would violate Axiom 2). So you build an information hierarchy:

Level 1: Index (50 tokens)
Level 2: Synopsis (200 tokens)
Level 3: Full spec (2000 tokens)
Level 4: Resources (on-demand)

This isn't a pattern. It's a mathematical necessity.
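The load/decide/terminate cycle above can be sketched in a few lines. This is a minimal illustration, not an implementation: the `HIERARCHY` entries, token costs, and the `needs_more` predicate are all made-up examples.

```python
# Sketch of Axiom 3 (Progressive Disclosure): information lives in a hierarchy
# of increasingly detailed levels, and each deeper level is loaded only if the
# previous one was not enough AND the context budget (Axiom 2) allows it.
# All names and numbers here are illustrative.

TOKEN_BUDGET = 2000  # Axiom 2: context is a finite resource

# Levels for one hypothetical tool, cheapest first.
HIERARCHY = [
    ("index",     50,   "search: query the web"),
    ("synopsis",  200,  "search(query, max_results) -> list of results"),
    ("full_spec", 2000, "search(query: str, max_results: int = 10) ... full docs ..."),
]

def progressive_load(needs_more, budget=TOKEN_BUDGET):
    """Walk the hierarchy: Load(h_i) -> Decision -> Load(h_i+1) | Terminate."""
    loaded, spent = [], 0
    for name, cost, text in HIERARCHY:
        if spent + cost > budget:   # loading this level would bust the budget
            break
        loaded.append((name, text))
        spent += cost
        if not needs_more(text):    # Decision -> Terminate
            break
    return loaded, spent

# Example: stop as soon as a level shows the call signature ("->").
levels, spent = progressive_load(lambda text: "->" not in text)
# Only the index and synopsis were loaded: 50 + 200 = 250 tokens.
```

Note that even a "load everything" predicate stops at 250 tokens here, because the 2000-token full spec would exceed the budget: the resource axiom does the pruning on its own.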
Axiom 4: Semantic Composition (SC)

∀ components C₁, C₂, ∃ composition C₁ ⊕ C₂ where:
⊕ is semantic (understanding-based), NOT syntactic (interface-based)

Traditional systems: components must match rigid interfaces.
COP systems: components compose through understanding their purposes.

This is why you don't need explicit integration code. The system understands how things fit together.

Axiom 5: Temporal Locality (TL)

∀ specialized behavior B, ∃ scope S such that:
B is active within S
B is automatically removed outside S
Context_pollution(B, t > t_end) = 0

Specialized contexts have lifetimes. They load, do their job, and clean themselves up. This prevents context pollution: the deadly accumulation of irrelevant information that would violate Axiom 2.

These five axioms aren't arbitrary. They form a closed mathematical system:

Axiom 1 (Understanding) enables semantic composition (Axiom 4)
Axiom 2 (Context scarcity) necessitates progressive disclosure (Axiom 3)
Axiom 3 (Progressive disclosure) requires temporal cleanup (Axiom 5)
Axiom 5 (Temporal locality) protects the resource constraint (Axiom 2)

It's self-consistent. Elegant. Inevitable.

From these axioms, a layered architecture emerges:

Layer 5: Intent (what humans want)
↓ semantic interpretation
Layer 4: Context (how to approach)
↓ orchestration
Layer 3: Execution (how to compute)
↓ tool invocation
Layer 2: Integration (MCP)
↓ system calls
Layer 1: Systems (external world)

Each layer communicates through understanding, not protocols. (Is this abductive coupling?) This is why MCP + Skills + Code Execution work together: they're implementing this architecture.

Execution follows a four-phase model:

Phase 1: SEMANTIC INTERPRETATION (understanding → plan)
Phase 2: PLAN COMPOSITION (compose operations, generate code)
Phase 3: DETERMINISTIC EXECUTION (run code, filter data, invoke tools)
Phase 4: SEMANTIC INTEGRATION (interpret results, respond)

Notice: phases 1, 2, and 4 are semantic. Only phase 3 is deterministic.
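Axiom 5's self-cleanup maps neatly onto a scoped-lifetime pattern. A tiny sketch, with an assumed `active_context` set standing in for whatever the model currently sees; the names are illustrative, not from any real framework.

```python
import contextlib

# Sketch of Axiom 5 (Temporal Locality): a specialized behavior is active only
# within a bounded scope and is removed automatically when the scope ends, so
# Context_pollution(B, t > t_end) = 0. `active_context` is a stand-in for the
# model's working context.

active_context = set()

@contextlib.contextmanager
def scoped_behavior(name):
    """Load a specialized behavior for the duration of one scope only."""
    active_context.add(name)
    try:
        yield
    finally:
        active_context.discard(name)  # self-cleanup, even on error

with scoped_behavior("pdf-extraction-skill"):
    assert "pdf-extraction-skill" in active_context  # active within scope S

# Outside the scope the behavior is gone: zero lingering pollution.
assert "pdf-extraction-skill" not in active_context
```

The `finally` branch is the point: cleanup is structural, not something the caller has to remember, which is what keeps Axiom 2's budget protected.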
Understanding bookends execution.

The system maintains three orthogonal state spaces:

Conversation Context (Epistemic): what the system knows
* Dialogue history
* Loaded specifications

Execution Context (Deontic): what the system can do
* Tool permissions
* Resource quotas

Application Context (Domain): external world state
* Databases, filesystems
* Lives outside the system

These are independent dimensions that must stay synchronized.

Critical insight: process data before it enters context.

External System (1M rows)
↓ Query via MCP
Execution Environment
↓ Filter (Status == "pending")
Filtered Data (1000 rows)
↓ Aggregate
Summary (10 data points)
↓ Load into Context
Result (200 tokens)

Data flows through the execution environment. Only results enter context. This is architectural privacy and efficiency.

COP relates to existing paradigms:

vs. Object-Oriented:
OOP: type-based polymorphism
COP: semantic polymorphism

vs. Functional:
FP: function composition via types
COP: context composition via understanding

vs. Declarative:
Declarative: specify what, not how
COP: specify intent and approach

COP is "declarative at the semantic level."

For systems architects:

Before (traditional):
Integration layer: 500 lines of mapping code
Orchestration: hardcoded workflow logic
Configuration: cryptic YAML files
Authorization: 200 granular permissions

After (COP):
Integration: semantic description (50 lines)
Orchestration: intent specification
Configuration: self-documenting natural language
Authorization: contextual policy (100 lines)

80% of accidental complexity disappears.

COP systems have a standard structure:

/contexts # Behavioral specifications
/servers # MCP tool definitions
/skills # Learned patterns
/code # Traditional code (when needed)
/config # System configuration

This structure emerges from the axioms. It's not arbitrary; it's the optimal information architecture for progressive disclosure.
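The filter-then-aggregate pipeline above is easy to demonstrate end to end. A minimal sketch, scaled down to 100k rows to keep it light; the field names, the modulo-based data, and the summary shape are all invented for illustration.

```python
# Sketch of "process data before it enters context": a large external result
# set flows through the execution environment, and only a small aggregate
# summary is ever loaded into the model's context window.

# External system: 100k rows (scaled down from the thread's 1M for the demo).
# Every 100th row is "pending", so 1000 rows survive the filter.
rows = [
    {"id": i,
     "status": "pending" if i % 100 == 0 else "done",
     "amount": (i // 100) % 50}
    for i in range(100_000)
]

# Filter inside the execution environment (never shown to the model).
pending = [r for r in rows if r["status"] == "pending"]

# Aggregate down to a handful of data points.
summary = {
    "pending_count": len(pending),
    "total_amount": sum(r["amount"] for r in pending),
}

# Only `summary` (a couple hundred tokens at most) would enter the context;
# the raw rows never do.
```

This is also where the "architectural privacy" claim comes from: sensitive row-level data stays in the execution environment by construction, not by policy.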
Here's what blows my mind. Anthropic built:
* MCP (integration layer)
* Code execution (efficiency layer)
* Skills (reusability layer)

They created the complete infrastructure for COP without calling it that. When systems understand, description suffices.

But here's the deeper impact: COP democratizes system architecture. Domain experts can:
* Write contexts (behavioral specs)
* Compose workflows
* Define policies
* Create integrations

Without writing code. The compliance officer can write policies that enforce themselves. The business analyst can encode business rules directly. This isn't about replacing engineers. It's about amplifying domain expertise.

Here's my prediction: in 5 years, enterprise systems will be:
70% contexts and skills (COP)
20% traditional code (critical paths)
10% configuration

Most business logic will be specified in natural language following COP principles. Code will be reserved for:
* Security-critical operations
* Performance-critical paths
* Complex algorithms requiring determinism
Maybe we don't have to treat the context window as an append-only log? What could that unlock?
Earl Dennsion Tan retweeted
@vox_fortuna brings you worldwide opportunities to your 📩. But this is not enough! Don't chase VCs, make THEM 🏃 YOU later. Play the best game, make the most 📢, and VCs will come to the arena 🏟️ to see how you perform. A Provable Founders Arena for early-stage founders. So, I invite you to your first game. Now👇 @enter_delta @_TheResidency are on the right track! And now you can use the arena to show them your metrics! #founders #startups #foundersarena #buildinpublic
Cheetah was supermaven all along? What a turn of events.
They're going to sunset Supermaven next month This is my 9/11 @cursor_ai mark my words, I will NEVER forgive you for this
Earl Dennsion Tan retweeted
> 6k accelerators, grants, hackathons, and VC companies worldwide and growing! Sign up for Fortuna's Pick! A weekly newsletter with a personalized selection of opportunities from thousands of investors, chosen randomly but uniquely for you.
Called it, Cheetah was Cursor's own model. Makes sense to build their own to lower costs and give lower-budget customers a better CX. This reinforces the idea that there is no moat.
I see "teaching how to build agents" on like every other post. Come on guys, agents aren't a miracle. Agents are just tools + prompts + LLMs + a loop, that's it. Now, tweaking them to jump through complicated hoops, that is engineering.
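The "tools + prompts + LLMs + a loop" claim fits in about 25 lines. A minimal sketch: `fake_llm` is a stand-in for a real model call, and the tool registry and message format are invented for the demo.

```python
# Minimal agent skeleton: tools + prompts + an LLM + a loop. Every name here
# is illustrative; `fake_llm` scripts a model's replies so the loop runs
# without an API key.

def calculator(expression):
    """Demo tool. eval with no builtins is still unsafe; sandbox for real."""
    return eval(expression, {"__builtins__": {}})

TOOLS = {"calculator": calculator}

def fake_llm(messages):
    """Pretend model: first requests a tool call, then gives a final answer."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "calculator", "args": {"expression": "6 * 7"}}
    return {"answer": f"The result is {messages[-1]['content']}"}

def run_agent(prompt, max_steps=5):
    messages = [{"role": "user", "content": prompt}]   # the prompt
    for _ in range(max_steps):                         # the loop
        reply = fake_llm(messages)                     # the LLM
        if "answer" in reply:
            return reply["answer"]                     # done
        result = TOOLS[reply["tool"]](**reply["args"]) # the tool
        messages.append({"role": "tool", "content": str(result)})
    return "step limit reached"

print(run_agent("What is 6 * 7?"))  # -> The result is 42
```

Everything beyond this skeleton (retries, context management, guardrails, the "complicated hoops") is where the actual engineering lives.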
So weird. 1.1k downloads already?
Just a little glimpse. Published on PyPI, and right now it is unstable. Working to push a stable one ASAP, then updating the docs after that. @enter_delta @_TheResidency There are also a few downloads even though I haven't launched yet, hmm. 🤔
Hilarious
🥳Announcing LangChain and LangGraph 1.0 LangChain and LangGraph 1.0 versions are now LIVE!!!! For both Python and TypeScript Some exciting highlights: - NEW DOCS!!!! - LangChain Agent: revamped and more flexible with middleware - LangGraph 1.0: we've been really happy with LangGraph and this is our official stamp of approval - Standard content blocks: swap seamlessly between models Read more about it here: blog.langchain.com/langchain… We hope you love it!
This is exactly what's happening now with AI. Until we solve this, I don't see us reaching AGI any time soon.
Modern AIs be like
Earl Dennsion Tan retweeted
Replying to @hthieblot
The first provable notary for your digital life.
blame @enter_delta & @_TheResidency for the unseriousness from the end of the video. too much energy in the chats! all chronically online keyboard warriors ⚔️🧾 will love @is_provable. no cap. The first provable notary for your digital life 🧵👇 #digitalproofs #provable #notary
Gloat mode much? He's been active ever since AWS went down. He's having a field day.
You don’t say … 🤔
I just came to a realization. Is SF to aspiring engineers what Hollywood is to aspiring actors?
Just a little glimpse of what I am building. @enter_delta @_TheResidency Egregore is an agent framework built on PACT, a deterministic substrate that makes LLM state predictable, auditable, and easy to manipulate. On top of that substrate, Egregore gives you a clean chain-operator DSL for building real workflows (branches, loops, parallel), plus native checkpoint/restore and runtime controls (pause, resume, rollback) so long-running processes are reliable. It treats tools and "scaffolds" as first-class, generating agent-callable operations with lifecycle/TTL.
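To make the description concrete, here is a hypothetical sketch of what a chain-operator DSL with checkpoint/restore could look like. None of these names are Egregore's actual API (the package isn't documented yet); this only illustrates the pattern of composing steps with an operator and snapshotting state so a run can roll back.

```python
import copy

# Hypothetical chain-operator DSL sketch: `a >> b` composes two steps, and a
# runtime snapshots state so callers can checkpoint and roll back. All names
# are invented for illustration; this is not Egregore's real API.

class Step:
    def __init__(self, fn):
        self.fn = fn

    def __rshift__(self, other):
        # `a >> b` builds a step that runs a, then feeds its output to b.
        return Step(lambda state: other.fn(self.fn(state)))

    def run(self, state):
        return self.fn(state)

class Runtime:
    """Owns the workflow state; supports checkpoint/rollback."""
    def __init__(self, state):
        self.state = state
        self._checkpoints = []

    def checkpoint(self):
        self._checkpoints.append(copy.deepcopy(self.state))

    def rollback(self):
        self.state = self._checkpoints.pop()

    def run(self, step):
        self.state = step.run(self.state)
        return self.state

plan = Step(lambda s: s + ["drafted"]) >> Step(lambda s: s + ["reviewed"])

rt = Runtime(state=[])
rt.checkpoint()
rt.run(plan)    # state becomes ["drafted", "reviewed"]
rt.rollback()   # restore the empty pre-run state
```

The interesting design question a real substrate has to answer is what "state" is (full message history? a typed store?) and how snapshots stay cheap as it grows; deep-copying, as here, is the naive baseline.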
Elon is salty because he got implicitly called out by Karpathy in a podcast.
Huge claim, and u can certainly call such a model AGI
Earl Dennsion Tan retweeted
Both AI engineering and research are a whole lot more than writing code, and Grok 5 won't replace either
Huge claim, and u can certainly call such a model AGI