Intelligence is available; being an idiot is more expensive.

Rotterdam, The Netherlands
Joined November 2020
Aklının yönetim kurulu başkanı retweeted
If you struggle as much as I do when picking a film, I recommend checking metacritic.com alongside IMDb. It also ranks titles by platform: you can see the best on Prime, the best on Netflix, and so on.
6
36
504
Aklının yönetim kurulu başkanı retweeted
Don't use AWS S3. If your traffic is high, use Cloudflare R2; if traffic is modest and you just need storage, use Backblaze B2.
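Both Cloudflare R2 and Backblaze B2 speak the S3 API, so switching usually just means pointing your client at a different endpoint. A minimal sketch with boto3; the endpoint URL, bucket name, and credentials below are placeholders, not real values:

```python
import boto3

# Placeholder endpoint and credentials; Cloudflare R2 and Backblaze B2 both
# expose S3-compatible APIs, so boto3 only needs a custom endpoint_url.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<account-id>.r2.cloudflarestorage.com",  # or your B2 S3 endpoint
    aws_access_key_id="YOUR_KEY_ID",
    aws_secret_access_key="YOUR_SECRET",
)

s3.upload_file("backup.tar.gz", "my-bucket", "backups/backup.tar.gz")
print(s3.list_objects_v2(Bucket="my-bucket", Prefix="backups/")["KeyCount"])
```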
14
5
182
Aklının yönetim kurulu başkanı retweeted
Turn your laptop into a powerful RAG system! LEANN can index and search through millions of documents while using 97% less storage than traditional solutions, without accuracy loss. LEANN achieves this through graph-based selective recomputation with high-degree preserving pruning, computing embeddings on demand instead of storing them all.

Key features:
🔒 Privacy: everything stays on your laptop, with no cloud and no hidden terms.
🪶 Lightweight: smart graph pruning and CSR format cut down both storage and memory usage.
📦 Portable: move your knowledge base across devices or share it with others at minimal cost.
📈 Scalable: handles messy personal data and agent-generated memory that often crash traditional vector DBs.
✨ Accurate: provides the same search quality as heavyweight solutions while using 97% less storage.

It's 100% Open Source
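The core trick, as described, is storing only the pruned graph and recomputing embeddings for the handful of nodes a query actually visits. A rough sketch of that idea, not LEANN's actual code; `graph`, `docs`, and `embed` here are stand-ins:

```python
import heapq
import numpy as np

def graph_search(graph, docs, embed, query_vec, entry, k=10, budget=200):
    """Best-first search over a pruned proximity graph.

    No embeddings are stored: embed(docs[node]) is recomputed on demand,
    only for nodes the search actually touches (selective recomputation).
    """
    dist = lambda node: float(np.linalg.norm(embed(docs[node]) - query_vec))
    visited = {entry}
    frontier = [(dist(entry), entry)]
    found = []
    while frontier and budget > 0:
        d, node = heapq.heappop(frontier)
        found.append((d, node))
        for nb in graph[node]:          # neighbors kept by high-degree-preserving pruning
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(frontier, (dist(nb), nb))
                budget -= 1
    return [node for _, node in sorted(found)[:k]]
```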
9
30
3
190
Aklının yönetim kurulu başkanı retweeted
Stop dealing with negative people and watch your quality of life rise.
2
7
52
0
Aklının yönetim kurulu başkanı retweeted
Things you need to know about cars.
2
6
77
0
Aklının yönetim kurulu başkanı retweeted
You can only grasp how complex Kubernetes is from practice, not from theory; the theory is presented as if it fits every business case, it abstracts the systems away and hides the complexity. The theory is limited to concepts like container orchestrator, declarative infrastructure, and scalable & resilient architecture, and these concepts sound deterministic, like a solution that fits every problem. In practice, though, things are completely different, because:

- Every cluster is its own thing. Network policies, security policies, storage classes, ingress/egress, CNI, security and application contexts, resource limits... each interacts with the others, and this is one of the scary sides of the math: their combinations grow exponentially more complex.
- Side effects of the declarative model. However nice defining everything in a single file sounds, in reality scheduling, synchronization, and race-condition problems lead to hundreds of different unexpected behaviors.
- Operational load. The theory says "self-healing"; that is how it is marketed. In reality there are dozens of situations that demand manual intervention (failed mounts, crash loops, pod evictions...).
- The tool ecosystem. Again, theory presents Kubernetes as modular, but that modularity turns into dozens of separate decisions: admin overhead.

In short: theory shows us Kubernetes as an abstraction layer; in practice, keeping that abstraction standing takes a lot of engineers. Kubernetes' complexity is not in its nature but in the difficulty of using it in the real world. Let's formulate it (a toy code version of this formula follows the quoted post below):

kubernetes_gain = scalability + resilience + portability + automation
kubernetes_cost = operational_overhead + cognitive_load_of_running_the_system + infra_cost + ecosystem_complexity
net_gain = kubernetes_gain - kubernetes_cost

Up to quite a large scale, kubernetes_gain is worth very little, so you end up at a loss.
Kubernetes migration almost killed our startup.

Where we were:
- 8 EC2 instances
- Ansible for deploys
- Boring but working
- $1200/month AWS bill

Why we migrated:
- New investor wanted 'cloud-native'
- Engineers wanted K8s experience
- Competitors were using it
- Seemed like the future

6 months later:
- 3 engineers spending full-time on K8s
- AWS bill at $4500/month
- Deploys took longer than before
- More outages, not fewer
- Product development stalled

We rolled back:
- Moved to ECS Fargate
- 2 week migration
- Back to $1800/month
- Engineers back on features

K8s is amazing for scale. We weren't at scale. Technology should solve problems you actually have.
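A toy version of the formula from the post above, purely to make the trade-off concrete; the ratings are invented, and the only point is that the cost terms stay roughly flat while the gain terms only pay off at scale:

```python
# Illustrative only: plug in your own 0-10 ratings for each term.
def net_gain(scalability, resilience, portability, automation,
             operational_overhead, cognitive_load, infra_cost, ecosystem_complexity):
    kubernetes_gain = scalability + resilience + portability + automation
    kubernetes_cost = (operational_overhead + cognitive_load
                       + infra_cost + ecosystem_complexity)
    return kubernetes_gain - kubernetes_cost

print(net_gain(2, 3, 2, 3, 7, 6, 4, 5))  # small team, modest traffic -> negative
print(net_gain(9, 9, 7, 8, 7, 6, 4, 5))  # large scale -> the gains finally dominate
```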
13
4
1
168
Aklının yönetim kurulu başkanı retweeted
Kimi K2 Thinking is basically a scaled DeepSeek R1 but with:
- 2× longer context
- 2× fewer attention heads
- 1.5× more experts per MoE layer
- Bigger vocab
- Fewer dense blocks
- 5B fewer active params per token
Source: @rasbt
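For context, those relative deltas line up roughly with the published configs; the absolute numbers below are approximate recollections, so treat them as assumptions rather than official specs:

```python
# Approximate, assumed values for illustration; verify against the model cards.
deepseek_r1 = dict(context=128_000, attn_heads=128, experts_per_moe_layer=256,
                   vocab=129_000, dense_blocks=3, active_params_b=37)
kimi_k2_thinking = dict(context=256_000, attn_heads=64, experts_per_moe_layer=384,
                        vocab=160_000, dense_blocks=1, active_params_b=32)

for key in deepseek_r1:
    print(f"{key:22s} {deepseek_r1[key]:>8} -> {kimi_k2_thinking[key]:>8}")
```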
20
129
6
1,334
Aklının yönetim kurulu başkanı retweeted
Have you tried Kimi K2? It's amazing. With a plain prompt I asked it to build a web app that transcribes YouTube videos and lets the user chat with them, and to fill it with dummy data; it produced a pretty decent interface with animations and so on, and generated all the visuals itself. Link below, you can try it with any YouTube link.
4
7
117
Aklının yönetim kurulu başkanı retweeted
The world's strongest agentic model is now open source: Kimi K2
🇨🇳 Alibaba-backed Moonshot releases its second AI update in four months as China's AI race heats up. It goes head to head with GPT-5, Sonnet 4.5, Gemini 2.5 Pro, and Grok 4 while being 6x cheaper. On several standard evaluations the model now outperforms OpenAI's GPT-5, Anthropic's Claude Sonnet 4.5 (Thinking mode), and xAI's Grok-4, an inflection point for the competitiveness of open AI systems.

Kimi K2 Thinking is an open reasoning model that brings frontier-level agent behavior to everyone, with 44.9% on HLE (Humanity's Last Exam), 60.2% on BrowseComp, a 256K context window, and 200-300 sequential tool calls that enable strong reasoning, search, and coding.

K2 Thinking uses a reasoning MoE design with 1T total parameters and 32B active per token, so it scales capacity while keeping each step's compute manageable. The system is built for test-time scaling: it spends more thinking tokens and more tool-call turns on hard problems, which lets it plan, verify, and revise over long chains without help.

Interleaved thinking means it inserts private reasoning between actions and tools, so it can read, think, call a tool, think again, and keep context across hundreds of steps. Tool calls here are structured functions for search, code execution, or other services, and the model chains them to gather facts, run code, and use results in the next decision. The 256K context window lets it load long documents, extended chats, or many tool outputs at once, then focus attention on the right spans as the plan evolves.

Serving is optimized with INT4 QAT (quantization-aware training) on the MoE parts, which yields roughly 2x faster generation while preserving accuracy, and the reported scores are under native INT4 inference. With QAT the model learns during post-training to live with 4-bit weights, which reduces the usual accuracy loss seen with after-the-fact quantization.
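What a thinking agent that chains structured tool calls looks like from the caller's side can be sketched with an OpenAI-compatible client; the base URL, model id, tool schema, and the stub `my_search` executor below are assumptions for illustration, not Moonshot's documented setup:

```python
from openai import OpenAI

# Assumed endpoint and model id; Moonshot exposes an OpenAI-compatible API.
client = OpenAI(base_url="https://api.moonshot.ai/v1", api_key="MOONSHOT_API_KEY")

tools = [{  # one structured function the model may chain many times
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web and return the top results as text.",
        "parameters": {"type": "object",
                       "properties": {"query": {"type": "string"}},
                       "required": ["query"]},
    },
}]

def my_search(args_json: str) -> str:
    # Stub tool executor for illustration; a real agent would call a search API here.
    return f"(stub search results for {args_json})"

messages = [{"role": "user", "content": "Compare INT4 QAT with post-training quantization."}]
while True:
    resp = client.chat.completions.create(model="kimi-k2-thinking",  # assumed id
                                          messages=messages, tools=tools)
    msg = resp.choices[0].message
    messages.append(msg)
    if not msg.tool_calls:             # no tool requested: this is the final answer
        print(msg.content)
        break
    for call in msg.tool_calls:        # run each requested tool, feed the result back
        result = my_search(call.function.arguments)
        messages.append({"role": "tool", "tool_call_id": call.id, "content": result})
```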
28
162
10
1,472
Aklının yönetim kurulu başkanı retweeted
K2 Thinking on YouWare 🥳 Big shoutout to Leon Ming, ex Kimi fellow who left our team to solo-launch @YouWareAI – the BEST vibe coding platform in China! If you want to start a startup idea from scratch, or edit, deploy, and monetize your existing project code, then YouWare is your best choice.
Huge congrats to @Kimi_Moonshot for releasing Kimi-K2-Thinking, and we've just integrated it into @YouWareAI. The results are impressive. Love seeing open-source models hitting SOTA levels.
7
14
1
177
Aklının yönetim kurulu başkanı retweeted
ollama run kimi-k2-thinking:cloud

Kimi K2 Thinking is Moonshot AI's best open-source thinking model. Try it on Ollama's cloud!
🚀 Hello, Kimi K2 Thinking! The Open-Source Thinking Agent Model is here. 🔹 SOTA on HLE (44.9%) and BrowseComp (60.2%) 🔹 Executes up to 200 – 300 sequential tool calls without human interference 🔹 Excels in reasoning, agentic search, and coding 🔹 256K context window Built as a thinking agent, K2 Thinking marks our latest efforts in test-time scaling — scaling both thinking tokens and tool-calling turns. K2 Thinking is now live on kimi.com in chat mode, with full agentic mode coming soon. It is also accessible via API. 🔌 API is live: platform.moonshot.ai 🔗 Tech blog: moonshotai.github.io/Kimi-K2… 🔗 Weights & code: huggingface.co/moonshotai
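If you'd rather call it from code than from the CLI, the official ollama Python client works the same way; this is a minimal sketch and assumes the local Ollama daemon is running and signed in so the `:cloud` model is available:

```python
import ollama  # pip install ollama

resp = ollama.chat(
    model="kimi-k2-thinking:cloud",
    messages=[{"role": "user", "content": "Outline a plan to profile a slow SQL query."}],
)
print(resp["message"]["content"])
```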
10
107
Aklının yönetim kurulu başkanı retweeted
Stream processing engine using SQL, DuckDB, and Apache Arrow
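The combination is easy to picture: micro-batches arrive as Arrow tables and get aggregated with plain SQL in DuckDB. A minimal sketch, not the linked engine's code; table and column names are made up:

```python
import duckdb
import pyarrow as pa

con = duckdb.connect()

# A toy micro-batch of events arriving as an Arrow table.
batch = pa.table({"user_id": ["a", "b", "a"], "amount": [10.0, 5.0, 7.5]})
con.register("events", batch)   # DuckDB scans the Arrow table directly, zero-copy

result = con.execute("""
    SELECT user_id, count(*) AS n, sum(amount) AS total
    FROM events
    GROUP BY user_id
    ORDER BY total DESC
""").arrow()                    # results come back as Arrow, ready for the next stage
print(result)
```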
2
36
256
Aklının yönetim kurulu başkanı retweeted
EdgeTAM, a real-time segment tracker by Meta, is now in @huggingface transformers with an Apache-2.0 license 🔥
> 22x faster than SAM2, processes 16 FPS on iPhone 15 Pro Max with no quantization
> supports single/multiple/refined point prompting and bounding box prompts
Aklının yönetim kurulu başkanı retweeted
Kimi 2 Thinking is now #2 on @ArtificialAnlys, incredible progress! Perplexity plans to deploy it on its own servers, just like R1. Why? Because Chinese models, despite great performance and fast progress, always get dinged for being "censored". Running them on your own machines or servers, in theory, gets you the clean, unfiltered version. It is a topic that often confuses people and has important nuances, but it is very easy to test, because the models are all open source! So does Kimi 2 Thinking know about the "sensitive topics"? Short answer: yes. Here is the simple test I run 🧵👇
Aklının yönetim kurulu başkanı retweeted
I run a Kubernetes cluster **solo** in addition to Google Cloud Run and VPS instances on Hetzner. If anything, my product survived because of Kubernetes! Autoscaling, load balancing, recovery, … For certain workloads, there is no alternative to Kubernetes except serverless. But it introduces a different set of problems.
Kubernetes migration almost killed our startup.

Where we were:
- 8 EC2 instances
- Ansible for deploys
- Boring but working
- $1200/month AWS bill

Why we migrated:
- New investor wanted 'cloud-native'
- Engineers wanted K8s experience
- Competitors were using it
- Seemed like the future

6 months later:
- 3 engineers spending full-time on K8s
- AWS bill at $4500/month
- Deploys took longer than before
- More outages, not fewer
- Product development stalled

We rolled back:
- Moved to ECS Fargate
- 2 week migration
- Back to $1800/month
- Engineers back on features

K8s is amazing for scale. We weren't at scale. Technology should solve problems you actually have.
40
8
5
285
Aklının yönetim kurulu başkanı retweeted
running Postgres is so fun. one database and a hundred proxies around it
You can now provision dedicated PgBouncers for your Postgres replicas. This gives you a connection pool that evenly distributes connections across your available replicas, making it simple to scale out read-only traffic.
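In practice that means two DSNs: writes go straight to the primary, while reads go through the dedicated PgBouncer that fans out across the replicas. A minimal sketch with psycopg; hostnames, the pooler port, and credentials are placeholders:

```python
import psycopg  # psycopg 3

# Placeholders: writes hit the primary, reads go through the replicas' PgBouncer.
PRIMARY_DSN  = "host=db-primary.internal port=5432 dbname=app user=app password=secret"
READPOOL_DSN = "host=db-readpool.internal port=6432 dbname=app user=app password=secret"

with psycopg.connect(PRIMARY_DSN) as rw:
    rw.execute("INSERT INTO events (kind) VALUES ('signup')")

with psycopg.connect(READPOOL_DSN) as ro:
    rows = ro.execute("SELECT kind, count(*) FROM events GROUP BY kind").fetchall()
    print(rows)
```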
7
2
266
Aklının yönetim kurulu başkanı retweeted
I've been using Kimi K2 for my mother's osteoporosis treatment to double-check her test results, bone density reports, and the feedback from her doctor. Honestly, it’s been surprisingly impressive. Compared to ChatGPT or Gemini, Kimi K2 gives far more detailed and accurate insights from her x-rays, explains everything clearly, and even suggests treatments that are available locally. It genuinely feels like having a medical assistant who understands both the data and the context.
44
78
7
1,209
Aklının yönetim kurulu başkanı retweeted
🚨 Today is a turning point in AI. A Chinese open-source model is #1. Kimi K2 Thinking scored 51% on Humanity's Last Exam, higher than GPT-5 and every other model. $0.6/M input tokens, $2.5/M output tokens. The best at writing, and it does 15 tokens/s on two Mac M3 Ultras! A seminal moment in AI. Try it on OpenRouter:
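OpenRouter exposes an OpenAI-compatible endpoint, so trying it is only a few lines; the model slug below is my assumption of the listing name, so check openrouter.ai/models before relying on it:

```python
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="OPENROUTER_API_KEY")

resp = client.chat.completions.create(
    model="moonshotai/kimi-k2-thinking",   # assumed slug; verify on openrouter.ai/models
    messages=[{"role": "user", "content": "Summarize the trade-offs of INT4 quantization."}],
)
print(resp.choices[0].message.content)
```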
Aklının yönetim kurulu başkanı retweeted
This aligns perfectly with what Anthropic just published about code execution with MCP. Link: anthropic.com/engineering/co… In Claude Code, I've noticed Claude prefers writing and executing code directly in bash rather than using tool calls. I've seen Python scripts appear, execute, and disappear in a single run; Claude knows it needs to minimize token usage and work as efficiently as possible.
Lately I've been testing both CLI and MCP in Claude Code... and honestly, the CLI wins. I list all my installed tools (vercel, supabase, gh, etc.) in CLAUDE.md and let Claude use them to check logs, run queries, or tweak configs. Setup is simpler, auth is native, and observability is crystal clear. There's still debate on CLI vs MCP, but in my workflow, CLI is miles ahead.
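The "scripts appear, execute, and disappear" pattern from the first post is easy to sketch: heavy work runs in a throwaway script and only a short summary string comes back. This is purely illustrative (the log file and the analysis are made up), not how Claude Code is implemented:

```python
import os
import subprocess
import tempfile
import textwrap

# Hypothetical: the "agent" writes a one-off script, runs it, keeps only the summary.
script = textwrap.dedent("""
    import collections, json
    counts = collections.Counter(
        line.split()[0] for line in open("access.log") if line.strip()
    )
    print(json.dumps(counts.most_common(3)))
""")

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script)
    path = f.name

try:
    out = subprocess.run(["python", path], capture_output=True, text=True)
    print(out.stdout or out.stderr)   # only this short string needs to enter the context
finally:
    os.remove(path)                   # the script disappears again
```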
2
22
198