Your AI coding agent for hard problems.

San Francisco, CA
Joined January 2025
Pinned Tweet
👋 How to start using Cline in 2 minutes (step-by-step thread 🧵)
So what do we think "Polaris Alpha" is?
🎁 The new stealth model, "Polaris Alpha", is now live. It's a powerful, general-purpose model that excels across real-world tasks, with standout performance in coding, tool calling, and instruction following.
NOW AVAILABLE! GLM 4.6 at 1,000 tokens/s through the @cerebras provider on Cline. The best combination of speed and accuracy, directly in your IDE or CLI.
Cerebras Code just got an UPGRADE. It's now powered by GLM 4.6.
Pro Plans ($50): 300k ▶️ 1M TPM @ 24M tokens/day
Max Plans ($200): 400k ▶️ 1.5M TPM @ 120M tokens/day
The fastest GLM provider on the planet, at 1,000 tokens/s with 131K context. Get yours before we run out 👇
Introducing Cline Hooks! Inject custom logic into your agentic workflows at key moments. Validate operations, monitor tool usage, and shape AI decisions automatically.
What you can do:
> Block problematic actions before they execute
> Learn from operations and build project knowledge
> Track everything for analytics or compliance
> Trigger external tools at the right moments
Six hooks cover the entire task lifecycle, from starting a task all the way to post-tool use.
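For a concrete sense of the "block problematic actions" case, here's a minimal sketch of a pre-tool-use hook. It assumes a hook is a standalone executable that receives the pending tool call as JSON on stdin and prints a JSON verdict on stdout; the field names (tool, input, decision, reason) and the policy are illustrative assumptions, not Cline's documented hook contract.

```typescript
#!/usr/bin/env node
// Hedged sketch of a "block problematic actions" hook.
// Assumptions (not Cline's documented contract): the hook receives the pending
// tool call as JSON on stdin and prints a JSON verdict on stdout.
import { stdin, stdout, exit } from "node:process";

interface PendingToolCall {
  tool: string;                   // e.g. "execute_command" (assumed field name)
  input: Record<string, unknown>; // tool arguments (assumed field name)
}

async function readStdin(): Promise<string> {
  let data = "";
  for await (const chunk of stdin) data += chunk;
  return data;
}

async function main() {
  const call: PendingToolCall = JSON.parse(await readStdin());

  // Example policy: refuse shell commands that touch production credentials.
  const cmd = String(call.input["command"] ?? "");
  const blocked = call.tool === "execute_command" && /\.env\.production/.test(cmd);

  stdout.write(
    JSON.stringify(
      blocked
        ? { decision: "block", reason: "Refusing to touch production env files." }
        : { decision: "allow" }
    )
  );
  exit(0);
}

main();
```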
Very excited to be included in The Information's 50 Most Promising Startups!
Today on The Information's TITV:
- @awscloud Director of Technology Shaown Nandi on their AI chip strategy
- Snap stock surges on Perplexity AI deal | @SashaKaletsky, Creator Ventures; Catherine Perloff, The Information
- Brain chip Neuralink rival raises $200M | @tomoxl, Founder & CEO of @synchroninc
- CEO of @metropolisio, @Alex__Israel, announces its $1.6B capitalization
- Inside the social platform Benable's strategy with Founder & CEO Tony Staehelin & The Information's @anngehan
View The Information's 50 Most Promising Startups: thein.fo/4nDRfJS
📺 Tune in at 10 am PT / 1 pm ET on thein.fo/42o33YT
Live in Cline: kimi-k2-thinking
🚀 Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here.
🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%)
🔹 Executes up to 200–300 sequential tool calls without human interference
🔹 Excels in reasoning, agentic search, and coding
🔹 256K context window
Built as a thinking agent, K2 Thinking marks our latest efforts in test-time scaling: scaling both thinking tokens and tool-calling turns. K2 Thinking is now live on kimi.com in chat mode, with full agentic mode coming soon. It is also accessible via API.
🔌 API is live: platform.moonshot.ai
🔗 Tech blog: moonshotai.github.io/Kimi-K2…
🔗 Weights & code: huggingface.co/moonshotai
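A minimal sketch of calling the model over the API mentioned above, assuming an OpenAI-compatible chat completions endpoint; the base URL and model id below are assumptions, so check platform.moonshot.ai for the actual values.

```typescript
import OpenAI from "openai";

// Hedged sketch: assumes an OpenAI-compatible chat endpoint.
// Base URL and model id are assumptions, not verified values.
const client = new OpenAI({
  apiKey: process.env.MOONSHOT_API_KEY,
  baseURL: "https://api.moonshot.ai/v1", // assumption
});

async function main() {
  const response = await client.chat.completions.create({
    model: "kimi-k2-thinking", // assumed id, matching the name shown in Cline
    messages: [
      { role: "user", content: "Plan the steps to refactor this module, then do it." },
    ],
    // A thinking/agentic model may interleave reasoning with tool calls;
    // tool definitions would be passed here as with any OpenAI-style API.
  });
  console.log(response.choices[0].message.content);
}

main();
```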
A good verifier is like a tea kettle. The only important question: Is your kettle whistling?
the tea kettle analogy goes so hard
Lost a hackathon in 2024. Won GitHub in 2025. With 4,704% contributor growth, Cline is the fastest-growing AI-related open source project on GitHub.
call me @ npm install -g cline
never letting an agency do our billboards again smh
.@MiniMax__AI M2 uses "interleaved thinking" - instead of one thinking block at the start, you get multiple thinking blocks throughout a single request. The model re-evaluates its approach as it goes, adapting based on tool outputs and new information. You'll see these thinking blocks appear in the UI, showing its reasoning process in real time. Available free in Cline until November 7!
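A rough sketch of what interleaved thinking looks like from the client side: the response alternates thinking blocks with regular output, and a UI can render each segment as it arrives. The <thinking> tag and parsing approach here are illustrative assumptions, not MiniMax's or Cline's actual wire format.

```typescript
// Hedged sketch: a single response interleaves several thinking blocks with
// normal output, rather than one block at the start. Tag name and structure
// are assumptions for illustration only.

type Segment =
  | { kind: "thinking"; text: string }
  | { kind: "output"; text: string };

function splitInterleaved(raw: string): Segment[] {
  const segments: Segment[] = [];
  const re = /<thinking>([\s\S]*?)<\/thinking>/g;
  let last = 0;
  let m: RegExpExecArray | null;
  while ((m = re.exec(raw)) !== null) {
    if (m.index > last) segments.push({ kind: "output", text: raw.slice(last, m.index) });
    segments.push({ kind: "thinking", text: m[1] });
    last = re.lastIndex;
  }
  if (last < raw.length) segments.push({ kind: "output", text: raw.slice(last) });
  return segments;
}

// One request can yield several thinking segments, one before each action,
// which is what shows up as multiple reasoning blocks in the UI.
```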
Cline v3.35 is live <long read below>
We've migrated our system-prompt tool calling format to native tool calling and split it out for different model families. Here's why this results in a better experience using Cline:

Models now return tool calls in their native JSON format, which they were specifically trained to produce. You'll notice fewer "invalid API response" errors; this particularly improves gpt-5-codex performance, with a significant reduction in failed operations.

Parallel tool execution is now enabled. When Cline needs to read three files, it can execute them simultaneously instead of sequentially.

System prompts are also smaller since tool definitions moved to API parameters, reducing token usage by approximately 15% per request.

Native tool calling is currently supported for next-generation models: Claude 4+, Gemini 2.5, Grok 4, Grok Code, and GPT-5 (excluding gpt-5-chat) across supporting providers including Cline, Anthropic, Gemini, OpenRouter, xAI, OpenAI-native, and Vercel AI Gateway.
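A hedged sketch of the pattern described above, using an OpenAI-style client: tool definitions live in the tools API parameter rather than the system prompt, the model returns structured JSON tool calls, and several calls from one response can run in parallel. The tool, model id, and executor are illustrative, not Cline's internals.

```typescript
import OpenAI from "openai";

// Sketch of native tool calling, not Cline's actual implementation.
const client = new OpenAI();

const tools: OpenAI.Chat.Completions.ChatCompletionTool[] = [
  {
    type: "function",
    function: {
      name: "read_file", // hypothetical tool for illustration
      description: "Read a file from the workspace",
      parameters: {
        type: "object",
        properties: { path: { type: "string" } },
        required: ["path"],
      },
    },
  },
];

async function run() {
  const res = await client.chat.completions.create({
    model: "gpt-5", // assumed model id from the announcement
    messages: [{ role: "user", content: "Summarize a.ts, b.ts, and c.ts" }],
    tools, // definitions live here, not in the system prompt
  });

  // The model may emit several structured tool_calls in one response;
  // they can be executed simultaneously instead of one per round trip.
  const calls = res.choices[0].message.tool_calls ?? [];
  const results = await Promise.all(
    calls.map(async (call) => {
      const { path } = JSON.parse(call.function.arguments);
      return { id: call.id, output: await readFileFromWorkspace(path) };
    })
  );
  console.log(results);
}

// Placeholder executor for the sketch.
async function readFileFromWorkspace(path: string): Promise<string> {
  const { readFile } = await import("node:fs/promises");
  return readFile(path, "utf8");
}

run();
```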
And it's available for free in the Cline provider until November 7!
MiniMax M2 implements "interleaved thinking" - maintaining internal reasoning throughout task execution, not just at the beginning. You'll see multiple <thinking> blocks in the UI per API request, showing the reasoning process in real time.
When both "Read" and "Read (all)" are enabled, only "Read (all)" displays. Auto-approve is always enabled by default. We removed the toggle, favorites, and max requests.
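A tiny sketch of the display rule described above, assuming "Read (all)" is simply a broader permission that supersedes "Read"; the data structure is illustrative, not Cline's actual settings code.

```typescript
// Hedged sketch: when both are enabled, only the broader permission is shown.
type Setting = "Read" | "Read (all)";

const supersededBy: Partial<Record<Setting, Setting>> = {
  Read: "Read (all)", // the broader permission wins when both are on
};

function visibleSettings(enabled: Set<Setting>): Setting[] {
  return [...enabled].filter((s) => {
    const broader = supersededBy[s];
    return !(broader && enabled.has(broader));
  });
}

// Example: both enabled -> only "Read (all)" is displayed.
console.log(visibleSettings(new Set<Setting>(["Read", "Read (all)"])));
```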