I built the Cursor for animation videos (similar to the 3b1b YouTube channel). Live link - animai.vercel.app/ GitHub link - github.com/pushpitkamboj/Ani… I would love everyone to try it out and share your feedback; I'd love to add and improve features. The repo is public, feel free to explore, and don't forget to star the repository. See the outcomes - quadratic and cubic equations intersecting: pub-b215a097b7b243dc86da838a… Example 2 - client-server architecture: pub-b215a097b7b243dc86da838a… Note - it can take 2-3 minutes to generate the video (video rendering has always been a high-latency task :| ). @kirat_tw, as you said in the video to make.... would love to connect and get feedback.
Pushpit.exe retweeted
we built Cursor for video editing
Pushpit.exe retweeted
Build a conversational voice bot with 1 second voice-to-voice latency with Modal, @pipecat_ai, and open models. Modal works seamlessly with WebRTC, WebSockets, and tunneling to squash latency to an absolute minimum.
Here's my entry for @anythingai: vtxlabs.vercel.app/, a cursor for 2D animation videos. #buildinpublic #OpenSource
Pushpit.exe retweeted
Btw we hire Rust engineers and we're not insane. Just sayin'
Really good
Most DevOps engineers focus on automation. But DevOps today is much more than CI/CD pipelines. It is also about understanding how systems actually work and how to design them to survive failures. In that context, one concept that every DevOps engineer should know is the Write-Ahead Log (WAL). We recently shared a short post explaining how WAL works, with simple real-world examples. 👉 𝗥𝗲𝗮𝗱 𝗶𝘁 𝗛𝗲𝗿𝗲: newsletter.devopscube.com/p/… If you want to see how large-scale systems apply this concept in the real world, Netflix's data platform is a great example. They have built their resilient data platform around the WAL principle so that data stays durable even when failures occur. 14,000+ DevOps engineers read our DevOps newsletter. We send deep dives, practical tips, and guides straight to your inbox. Architecture source: Netflix Tech Blog (added in the blog) #devops
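To make the WAL idea concrete, here is a minimal sketch of my own (not taken from the newsletter post): every change is appended to a log file and fsynced to disk before the in-memory state is touched, and on startup the state is rebuilt by replaying the log. The file name and record format are illustrative placeholders.

```
import json
import os

class TinyWAL:
    """Toy write-ahead log: every update is persisted before it is applied."""

    def __init__(self, path="wal.log"):
        self.path = path
        self.state = {}
        self._replay()  # rebuild in-memory state from the log on startup

    def _replay(self):
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                record = json.loads(line)
                self.state[record["key"]] = record["value"]

    def set(self, key, value):
        record = {"key": key, "value": value}
        # 1. Append the intended change to the log and flush it to disk...
        with open(self.path, "a") as f:
            f.write(json.dumps(record) + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2. ...only then mutate the in-memory state.
        self.state[key] = value

db = TinyWAL()
db.set("user:1", "active")
# If the process crashes here, a new TinyWAL() replays wal.log
# and recovers user:1 = "active".
```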
Pushpit.exe retweeted
I recently compared Parlant and LangGraph (the original post is quoted below). One of the most frequent questions readers asked was: “Isn’t it possible to create a fanout graph in LangGraph that performs parallel guideline matching, like Parlant does?” Yes, but that misses the point. While you can express any type of execution model with a generic graph, that alone doesn’t help you implement the complexity of what a good guideline-matching graph actually does. Guideline matching goes far beyond a simple fanout graph or parallel LLM execution. Parlant has a detailed post explaining what production-grade guideline matching truly involves. You’ll see why it requires more than just a fanout code snippet. This is one of the deepest context-engineering case studies I’ve seen. Worth reading! I’ve shared the link in the replies!
Every LangGraph user I know is making the same mistake! They all use the popular supervisor pattern to build conversational agents. The pattern defines a supervisor agent that analyzes incoming queries and routes them to specialized sub-agents. Each sub-agent handles a specific domain (returns, billing, technical support) with its own system prompt. This works beautifully when there's a clear separation of concerns.

The problem is that it always selects just one route. For instance, if a customer asks: "I need to return this laptop. Also, what's your warranty on replacements?" The supervisor routes this to the Returns Agent, which knows returns perfectly but has no idea about warranties. So it either ignores the warranty question, admits it can't help, or, even worse, hallucinates an answer. None of these outcomes is desirable.

This gets worse as conversations progress because real users don't think categorically. They mix topics, jump between contexts, and still expect the agent to keep up. This isn't a bug you can fix, since this is fundamentally how router patterns work.

Now, let's see how we can solve this problem. Instead of routing between agents, first define some Guidelines. Think of Guidelines as modular pieces of instructions like this:

```
agent.create_guideline(
    condition="Customer asks about refunds",
    action="Check order status first to see if eligible",
    tools=[check_order_status],
)
```

Each guideline has two parts:
- Condition: when it gets activated
- Action: what the agent should do

Based on the user's query, relevant guidelines are dynamically loaded into the agent's context. For instance, when a customer asks about returns AND warranties, both guidelines get loaded into context simultaneously, enabling coherent responses across multiple topics without artificial separation.

This approach is actually implemented in Parlant, a recently trending open-source framework (15k+ stars). Instead of routing between specialized agents, Parlant uses dynamic guideline matching. At each turn, it evaluates ALL your guidelines and loads only the relevant ones, maintaining coherent flow across different topics. You can see the full implementation and try it yourself.

That said, LangGraph and Parlant are not competitors. LangGraph is excellent for workflow automation where you need precise control over execution flow. Parlant is designed for free-form conversation where users don't follow scripts. The best part? They work together beautifully. LangGraph can handle complex retrieval workflows inside Parlant tools, giving you conversational coherence from Parlant and powerful orchestration from LangGraph. I have shared the repo in the replies!
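To see why loading several guidelines at once beats picking a single route, here is a deliberately naive sketch of the idea (my own illustration, not Parlant's implementation): evaluate every guideline's condition against the incoming message and put all matches into one prompt. The keyword matcher and every name here are placeholders; a real system would let an LLM judge the conditions.

```
from dataclasses import dataclass, field

@dataclass
class Guideline:
    condition: str                         # when the guideline applies
    action: str                            # what the agent should do
    keywords: list = field(default_factory=list)  # toy stand-in for condition matching

GUIDELINES = [
    Guideline("Customer asks about refunds or returns",
              "Check order status first to see if the item is eligible",
              keywords=["return", "refund"]),
    Guideline("Customer asks about warranties",
              "Quote the replacement warranty policy before answering",
              keywords=["warranty", "guarantee"]),
]

def match_guidelines(message: str) -> list[Guideline]:
    """Evaluate ALL guidelines and return every one whose condition matches."""
    text = message.lower()
    return [g for g in GUIDELINES if any(k in text for k in g.keywords)]

def build_context(message: str) -> str:
    """Load every matched guideline into one prompt, instead of routing to one sub-agent."""
    matched = match_guidelines(message)
    instructions = "\n".join(f"- {g.condition}: {g.action}" for g in matched)
    return f"Follow these guidelines for this turn:\n{instructions}\n\nUser: {message}"

print(build_context("I need to return this laptop. Also, what's your warranty on replacements?"))
# Both the returns guideline and the warranty guideline end up in context simultaneously.
```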
The whole family is scared 🥲
I'm never leaving this app😂😂
We just released our complete guide to Context Engineering. (These 6 components are the future of production AI apps.) Every developer hits the same wall when building with Large Language Models: the model is brilliant but fundamentally disconnected. It can't access your private documents, has no memory of past conversations, and is limited by its context window. The solution isn't better prompts. It's 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗘𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 - the discipline of architecting systems that feed LLMs the right information in the right way at the right time. Our new ebook is the blueprint for building production-ready AI applications through 6 core components:

1️⃣ 𝗔𝗴𝗲𝗻𝘁𝘀: The decision-making brain that orchestrates information flow and adapts strategies dynamically
2️⃣ 𝗤𝘂𝗲𝗿𝘆 𝗔𝘂𝗴𝗺𝗲𝗻𝘁𝗮𝘁𝗶𝗼𝗻: Techniques for transforming messy user requests into precise, machine-readable intent through rewriting, expansion, and decomposition
3️⃣ 𝗥𝗲𝘁𝗿𝗶𝗲𝘃𝗮𝗹: Strategies for chunking and retrieving the perfect piece of information from your knowledge base (semantic chunking, late chunking, hierarchical approaches)
4️⃣ 𝗣𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴 𝗧𝗲𝗰𝗵𝗻𝗶𝗾𝘂𝗲𝘀: From Chain of Thought to ReAct frameworks - how to guide model reasoning effectively
5️⃣ 𝗠𝗲𝗺𝗼𝗿𝘆: Architecting short-term and long-term memory systems that give your application a sense of history and the ability to learn
6️⃣ 𝗧𝗼𝗼𝗹𝘀: Connecting LLMs to the outside world through function calling, the Model Context Protocol (MCP), and composable architectures

We're not just teaching you to prompt a model - we're showing you how to architect the entire context system around it. This is what is going to take AI from demo status to actually useful production applications. Each section includes practical examples, implementation guidance, and real-world frameworks you can use today. Download it here: weaviate.io/ebooks/the-conte…
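As a toy illustration of component 3 only (not the chunking strategies the ebook covers), here is a self-contained sketch that splits a document into overlapping word-window chunks and pulls the best matches with a crude term-overlap score; a production pipeline would use embeddings and a vector database instead.

```
from collections import Counter

def chunk(text: str, size: int = 40, overlap: int = 10) -> list[str]:
    """Split text into overlapping word windows (naive fixed-size chunking)."""
    words = text.split()
    step = size - overlap
    return [" ".join(words[i:i + size]) for i in range(0, max(len(words) - overlap, 1), step)]

def score(query: str, chunk_text: str) -> int:
    """Crude relevance score: count how often query terms appear in the chunk."""
    chunk_terms = Counter(chunk_text.lower().split())
    return sum(chunk_terms[t] for t in query.lower().split())

def retrieve(query: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring chunks for the query."""
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

doc = "Refunds are issued within 14 days. Replacements carry a one year warranty. ..."  # placeholder document
question = "how do refunds work"
context = "\n---\n".join(retrieve(question, chunk(doc)))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```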
Pushpit.exe retweeted
𝗣𝗿𝗼𝗺𝗽𝘁 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 𝗶𝘀 𝗱𝗲𝗮𝗱. I know that sounds dramatic, but hear me out. Every developer building with LLMs eventually hits the same wall. The model is smart, sure. But it can't access your docs. It forgets yesterday's conversation. And it makes stuff up when it's not sure about something. You can't prompt your way out of these problems. The real skill now? 𝗖𝗼𝗻𝘁𝗲𝘅𝘁 𝗲𝗻𝗴𝗶𝗻𝗲𝗲𝗿𝗶𝗻𝗴 - building the system 𝘢𝘳𝘰𝘶𝘯𝘥 the model that feeds it the right information at the right time. Think of it like this: the context window is basically the model's working memory. It's a whiteboard. Once it's full, old stuff gets erased to make room for new stuff. So you need architecture that manages what goes on that whiteboard and when. We just dropped a full guide on context engineering (free, obviously). It covers everything from chunking strategies to multi-agent systems to the new Model Context Protocol. And it includes all these core components for building AI apps:

• Agents - the brain that decides what to do when
• Query Augmentation - turning messy requests into something useful
• Retrieval - connecting the model to your actual data
• Memory - so it doesn't forget everything between sessions
• Tools - letting it interact with real systems
• Prompting Techniques - yeah, this still matters, just not in isolation

If you're building anything serious with AI, this is the shift you need to understand. Get your free copy here: weaviate.io/ebooks/the-conte…
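Sticking with the whiteboard analogy, the most basic job of the surrounding system is deciding what gets erased. A minimal sketch of my own (not from the guide) that keeps the system prompt and drops the oldest turns until the conversation fits a token budget, with a word count standing in for a real tokenizer:

```
def estimate_tokens(message: dict) -> int:
    # Rough stand-in for a real tokenizer such as tiktoken.
    return len(message["content"].split())

def fit_to_window(messages: list[dict], budget: int = 3000) -> list[dict]:
    """Keep the system prompt, then keep the most recent turns that fit the budget."""
    system, turns = messages[0], messages[1:]
    kept, used = [], estimate_tokens(system)
    for msg in reversed(turns):        # walk backwards from the newest turn
        cost = estimate_tokens(msg)
        if used + cost > budget:
            break                      # older turns fall off the whiteboard
        kept.append(msg)
        used += cost
    return [system] + list(reversed(kept))

history = [
    {"role": "system", "content": "You are a support agent."},
    # ...many earlier turns would sit here...
    {"role": "user", "content": "So what was the return policy again?"},
]
context = fit_to_window(history)   # this is what actually gets sent to the model
```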
Pushpit.exe retweeted
Using LangCache for Building Agents. And no, it's not from LangChain. It’s from Redis, built for production-scale memory and recall. LangChain’s built-in caching mostly works on exact text matches. Redis LangCache, in contrast, uses semantic caching; it recalls based on meaning, not identical strings.

Here’s how it works under the hood:
> A user sends a prompt to your AI app.
> Your app sends the prompt to LangCache via: POST /v1/caches/{cacheId}/entries/search
> It calls an embedding model to generate a vector for the prompt.
> It searches the cache for a semantically similar entry using those embeddings.
> If a match is found (cache hit): LangCache returns the cached response instantly.
> If no match is found (cache miss): Your app calls the LLM, gets a new response, then stores it back via: POST /v1/caches/{cacheId}/entries
> LangCache saves the new embedding and response for future reuse.

How it differs from LangChain caching:
> LangChain’s built-in caches (like RedisCache or InMemoryCache) work only on exact string matches.
> RedisSemanticCache supports embeddings, but it’s self-hosted and limited in scale.
> Redis LangCache is a fully managed semantic caching service designed for production workloads.

Why it matters:
> Faster response times
> Reduced API costs
> No infrastructure management
> Language-agnostic (via REST API)

When to use it:
> AI agents, RAG systems, & chatbots
> Repetitive or similar query handling
> Production-grade reliability
> Auto-optimized embeddings
> Detailed cache monitoring
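Putting that flow into code, here is a rough sketch of the call pattern using the `requests` library. The two endpoints are the ones named above; the base URL, auth header, and the request/response field names (`prompt`, `response`, `entries`) are assumptions of mine, so check the LangCache docs for the exact schema before relying on this.

```
import requests

BASE = "https://<your-langcache-host>/v1/caches/<cacheId>"   # placeholder host and cache id
HEADERS = {"Authorization": "Bearer <api-key>"}               # placeholder auth

def call_llm(prompt: str) -> str:
    # Stand-in for your existing LLM client call.
    return "LLM answer for: " + prompt

def answer(prompt: str) -> str:
    # 1. Ask LangCache for a semantically similar cached entry.
    hit = requests.post(f"{BASE}/entries/search",
                        json={"prompt": prompt}, headers=HEADERS).json()
    if hit.get("entries"):                      # cache hit: reuse the stored response
        return hit["entries"][0]["response"]

    # 2. Cache miss: call the LLM yourself.
    response = call_llm(prompt)

    # 3. Store the new pair so future similar prompts become cache hits.
    requests.post(f"{BASE}/entries",
                  json={"prompt": prompt, "response": response}, headers=HEADERS)
    return response
```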
If you are a recruiter, DM me to get code access. (Currently it's a private repo.)
Pushpit.exe retweeted
AlgoKit 3.0 provides you with the best developer experience in Web3. Develop. Test. Deploy. All in the languages you already know.
Pushpit.exe retweeted
I was at the YC yacht party and the weirdest thing happened. Someone from A16 speedrun (yuck) came up to me and said “You’re the guy building Guna, right? The first agentic platform for gooners?” Wildest moment as a founder so far, I literally did a backflip off the boat. A complete rando recognizing my company in public? Word is getting out
Pushpit.exe retweeted
@enter_delta Cohort || semi-finalists (no particular order) (only 3 will be finalists for @theresidency )
1. Clean Water @saafwater
2. Quantum Architecture @aaron_amire
3. AI learning companion @Josephayinde64
4. Borderless Finance @UseKitehq
5. Build & Deploy Robots from your browser @baslyasma
6. Speech into Robot Actions @lonemwb
7. AI Dating Network @stringly_
8. AI Executive Assistant @maksaihq
9. Smart Medication Buddy @Redicinemedsol
10. Drug Discovery Platform @try_litefold
11. Space Energy Lasers @okksuraj
12. Automated Private Chef @miyuselene
13. High-quality Synthetic data @Datra_ai
14. Quantum Computers for ML training
15. Help kids learn AI
16. Stem Cell Mass Production @gravitatebio
17. Human Coaching with AI Accountability @hyejeebae
18. Service over Status Film @jessvillanuevax
19. Mental Health Therapy matching
20. Mind Controlled Drones @_okdara_
Have you watched this episode yet?
There's been a lot of talk recently about Quantum advantage. In the season finale of AI Avenue, we sat down with the brilliant @Liv_Lanes from @IBM to chat about Quantum computing and how it intersects with AI. Come hang out!