Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Replying to @NanaEssilfie
0245531744
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
550gh🙋🏽‍♀️
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
How to use Copilot by @BojanRadojici10
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Qwen Image Edit w/ Camera Control is wild 🤯 Quickly rotate the camera, switch between bird's eye and worm's eye views using just clicks. Here's how plus 7 wild examples: 👇
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Initiate phone calls from an AI agent via an API call
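The tweet doesn't say which provider this is, so here is a minimal sketch of the general pattern only: the endpoint, payload fields, and API key below are hypothetical stand-ins for whatever voice-agent service is actually being used.

import os
import requests  # plain HTTP client; the endpoint and payload below are hypothetical

# Hypothetical outbound-call endpoint -- substitute your provider's real API.
API_URL = "https://api.example-voice-agent.com/v1/calls"
payload = {
    "phone_number": "+15551234567",  # number the agent should dial
    "agent_id": "support-agent",     # which configured agent handles the call
    "first_message": "Hi, this is an automated follow-up about your order.",
}
headers = {"Authorization": f"Bearer {os.environ['VOICE_AGENT_API_KEY']}"}

resp = requests.post(API_URL, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json())  # most services return a call ID you can poll for status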
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
How to really piss somebody off
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Somebody wrote a prompt that's supposed to reduce ChatGPT hallucinations: a "Reality Filter." It has Google Gemini and Claude versions too. It's a directive scaffold that makes models more likely to admit when they don't know. It reduces hallucinations mechanically, through repeated instruction patterns, not by teaching the model "truth." The Reality Filter is a permanent directive, with versions for GPT-4, Gemini Pro, Claude, and a universal one. It requires labeling any content that isn't directly verifiable with tags like [Unverified] or [Inference], and it mandates "I cannot verify this" when data is lacking. --- From r/PromptEngineering/
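For a rough sense of how such a directive gets wired in: a paraphrased sketch, not the original Reality Filter text (that lives in the linked Reddit post). It assumes the openai Python SDK (v1+); the directive wording, model name, and question are placeholders.

# Hedged sketch of a "Reality Filter"-style system directive, paraphrased from
# the tweet's description ([Unverified]/[Inference] tags, mandatory "I cannot
# verify this"). Not the original prompt.
REALITY_FILTER = (
    "Never present inferred or speculative content as fact. "
    "Prefix anything you cannot directly verify with [Unverified] or [Inference]. "
    "If you lack the data to answer, say 'I cannot verify this' instead of guessing."
)

from openai import OpenAI  # assumes the openai Python SDK is installed

client = OpenAI()
resp = client.chat.completions.create(
    model="gpt-4o",  # any chat model works; the tweet mentions GPT-4, Gemini Pro, Claude
    messages=[
        {"role": "system", "content": REALITY_FILTER},
        {"role": "user", "content": "Who won the 2031 World Cup?"},
    ],
)
print(resp.choices[0].message.content)  # expected: an [Unverified] / cannot-verify answer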
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
RETWEET THIS !! 🙌🏾 Free ChatGPT prompt for creating the perfect CV 🔥
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
ChatGPT secret codes!
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Claude + mcp.so = 16,000+ MCP tools in one place.
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Did TikTok really see it coming? 🫣
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Girl child 😭
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
LLM Evaluation: Practical Tips at Booking.com booking.ai/llm-evaluation-pr…
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Self-hostable chat UI for LLMs with RAG and web search
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
RIP Market Research Firms. AI just killed them. A thread 🧵
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
brainwash your agents. context engineering doesn't have to be hard, there are so many low-hanging fruits. just keep the memory a holy place and drop the bs messages.

I just wrote a blog post on how we do it at @CamelAIOrg. these are simple-to-implement, must-have techniques for apps that use agents, and they can optimize accuracy and cost without crazy code changes.

🎁 BONUS: I created a number of bite-sized issues that you can get on right now and start your open-source arc. just open a PR, or help review one.

read here: shorturl.at/zyCv7
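The blog post has the actual techniques; as a hedged sketch of the "drop the bs messages" idea, an agent loop can truncate bulky old tool output and keep only the system prompt plus the most recent turns before each model call. The message shape and thresholds below are illustrative, not CAMEL-AI's implementation.

# Hedged sketch of context pruning: keep the system prompt, shrink giant tool
# dumps, and cap history at the last N turns. Thresholds are illustrative.
MAX_TURNS = 12
MAX_TOOL_CHARS = 500

def prune_context(messages: list[dict]) -> list[dict]:
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]

    cleaned = []
    for m in rest:
        if m["role"] == "tool" and len(m.get("content", "")) > MAX_TOOL_CHARS:
            # Replace bulky tool output with a truncated version instead of
            # re-sending the whole dump on every call.
            m = {**m, "content": m["content"][:MAX_TOOL_CHARS] + " ...[truncated]"}
        cleaned.append(m)

    return system + cleaned[-MAX_TURNS:]  # most recent turns only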
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
A simple trick cuts your LLM costs by 50%! Just stop using JSON and use this instead: TOON (Token-Oriented Object Notation) slashes your LLM token usage in half while keeping data perfectly readable. Here's why it works:

TOON's sweet spot: uniform arrays with consistent fields per row. It merges YAML's indentation and CSV's tabular structure, optimized for minimal tokens. Look at the example below.

JSON:
{
  "users": [
    { "id": 1, "name": "Alice", "role": "admin" },
    { "id": 2, "name": "Bob", "role": "user" }
  ]
}

TOON:
users[2]{id,name,role}:
  1,Alice,admin
  2,Bob,user

It's obvious how few tokens are being used to represent the same information! To summarise, here are the key features:
💸 30-60% fewer tokens than JSON
🔄 Borrows the best from YAML & CSV
🤿 Built-in validation with explicit lengths & fields
🐱 Minimal syntax (no redundant braces, brackets, etc.)

IMPORTANT!! That said, for deeply nested or non-uniform data, JSON might be more efficient.

In the next tweet, I've shared some benchmark results demonstrating the effectiveness of this technique in reducing the number of tokens and improving retrieval accuracy with popular LLM providers. Where do you think this could be effective in your existing workflows? Find the relevant links in the next tweet!
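For a feel of the format, here is a hand-rolled encoder for the simple uniform-array case. This is not the official TOON library, just a sketch of the name[count]{fields}: layout shown above; a real implementation also handles nesting, quoting, and validation.

# Hand-rolled sketch of TOON-style encoding for a uniform list of records.
# Not the official TOON implementation; no escaping or nesting support.
def to_toon(name: str, rows: list[dict]) -> str:
    fields = list(rows[0].keys())  # assumes every row has the same fields
    header = f"{name}[{len(rows)}]{{{','.join(fields)}}}:"
    lines = [",".join(str(r[f]) for f in fields) for r in rows]
    return "\n".join([header] + ["  " + line for line in lines])

users = [
    {"id": 1, "name": "Alice", "role": "admin"},
    {"id": 2, "name": "Bob", "role": "user"},
]
print(to_toon("users", users))
# users[2]{id,name,role}:
#   1,Alice,admin
#   2,Bob,user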
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
🎯⚡ These 20 terms make every future post on AI Agents instantly clear and meaningful.

Problem: When your engineer says "agent," your PM thinks "autonomous system," and your CEO thinks "chatbot." That misalignment? It's killing your AI initiatives.

After 20+ years applying AI across industries and teaching AI Agents Mastery globally, I've watched miscommunication destroy more projects than bad code ever could. The solution? A shared vocabulary.

I summarized 20 essential terms ▼

》 Foundation Layer
✸ 1. Large Language Models: Neural networks trained to predict the next token
✸ 2. Tokenization: Breaking text into discrete meaningful units for processing
✸ 3. Vectorization: Mapping meaning into numerical coordinates in high-dimensional space
✸ 4. Attention: Disambiguating context by examining nearby words

》 Training & Optimization
✸ 5. Self-Supervised Learning: Scaling training without human-labeled examples
✸ 6. Transformers: Stacking attention & feedforward layers for deep understanding
✸ 7. Fine-tuning: Specializing base models for specific domains and use cases
✸ 8. Reinforcement Learning: Optimizing model behavior through feedback and rewards

》 Production Engineering
✸ 9. Few-shot Prompting: Adding example inputs and outputs inline for better responses
✸ 10. Retrieval Augmented Generation (RAG): Fetching relevant context on-demand from external sources
✸ 11. Vector Databases: Enabling fast semantic search for contextually relevant documents
✸ 12. Context Engineering: Managing long conversations, history, and user preferences strategically

》 Advanced Capabilities
✸ 13. Model Context Protocol (MCP): Connecting LLMs with external systems and real-time data sources
✸ 14. Agents: Orchestrating multi-step autonomous tasks across systems
✸ 15. Chain of Thought: Breaking down reasoning into explicit step-by-step processes
✸ 16. Reasoning Models: Adapting complexity and steps dynamically based on problem difficulty

》 Efficiency & Scale
✸ 17. Multi-modal Models: Processing and generating text, images, video, and audio
✸ 18. Small Language Models (SLM): Specializing efficiently with 3-300M parameters for specific tasks
✸ 19. Distillation: Compressing teacher model knowledge into smaller student models
✸ 20. Quantization: Reducing memory and inference costs by lowering numerical precision

Which of these 20 terms confuses you most?

⫸ Want to master AI Agents in 30 days?
Join my Hands-on AI Agent 5-in-1 Training, trusted by 1,500+ builders worldwide!
➠ 9 real-world agents
➠ 5 frameworks: MCP · LangGraph · PydanticAI · CrewAI · Swarm
➠ Full code included
✔ Basic Python is all you need.
👉 Enroll NOW (56% OFF): maryammiradi.com/ai-agents-m…
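To make one of these concrete, term 9 above (few-shot prompting) is nothing more than worked input/output pairs placed ahead of the real query; a minimal sketch with placeholder examples:

# Minimal few-shot prompt: a couple of worked input/output pairs before the
# real input. The reviews and labels here are made-up placeholders.
few_shot_prompt = """Classify the sentiment of each review as positive or negative.

Review: "Battery lasts all day, love it."
Sentiment: positive

Review: "Stopped working after a week."
Sentiment: negative

Review: "The screen is gorgeous but the speakers crackle."
Sentiment:"""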
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
8 Types of LLMs used in AI Agents (Must know for Gen AI Data Scientists & AI Engineers): Here's what they are and what they do: 🧵
Kellan Daddy 🇺🇸🇨🇦🇬🇭 retweeted
Your teacher won't tell you this...