Fullstack Software Engineer · LLMs & RAG · React & Laravel · DevOps enthusiast · Secretly building knowably.ai/

The Hague, The Netherlands
Joined June 2010
🚀 Soon launching Knowably.ai

I’ve been working on something exciting: Knowably.ai, a platform that helps organizations turn scattered documents into clear, trustworthy answers.

Before the official launch, I’m opening up the opportunity for a few companies to use the application for free. If your team works with knowledge spread across SharePoint, Jira, Confluence, or other tools - and you want faster, verified answers without digging through endless files - I’d love to hear from you.

Interested? Send me a message or comment below, and I’ll get in touch.
Porta potty practice…
Looks boring and interesting at the same time 😅
docker compose down is never funny.
R is 99% of the whole RAG.
accidentally said "retrieval" instead of "RAG" and they kicked me out of sf....
Red flag!!!
First day at Browserbase and already found the greatest office invention.
Because no ad revenue..
I don't understand why GitHub has never done YouTube style plaques based on star count
He obviously never rented an Airbnb where dogs and smoking were allowed.
My most controversial take: All homes should be sold with furnishings included. This moving around couches and packing up forks thing is insane.
Corporates and governments run mainly on .NET, Java, Python, and JavaScript. Not PHP. As much as I love PHP and want your statement to be true (PHP is also my big love), the market is not asking for PHP. It is asking for .NET, Java, and Python. Especially with AI, but also with cloud, Python and JavaScript dominate. Not PHP. And I don't see that changing (much as I would like it to be true).
Folks who want PHP to "stay in its lane" can see the potential.
It will take over.
They actively want to prevent this so that it doesn't encroach on their territory.
But PHP quietly dominates the web, and it will dominate other platforms too, you'll see.
Except we can. Today, “retrieval” is often treated as synonymous with vector search. However, it is entirely possible to build highly effective retrieval systems without relying on embeddings, using only well-designed, LLM-based prompt rewriting combined with traditional keyword search.
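A minimal sketch of that idea, assuming the OpenAI Python client and a local Solr core named documents (both placeholders for whatever LLM and keyword engine you actually use): the LLM only rewrites the question into keywords, and a plain edismax query does the retrieval, with no embeddings anywhere.

import requests
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def rewrite_to_keywords(question: str) -> str:
    """Ask the LLM to turn a natural-language question into a short keyword query."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's question as a short keyword query "
                        "for a full-text search engine. Return only the keywords."},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content.strip()


def keyword_search(keywords: str, rows: int = 10) -> list[dict]:
    """Run a plain keyword query against a Solr core; no vectors involved."""
    params = {"q": keywords, "defType": "edismax", "rows": rows, "wt": "json"}
    resp = requests.get("http://localhost:8983/solr/documents/select", params=params)
    resp.raise_for_status()
    return resp.json()["response"]["docs"]


docs = keyword_search(rewrite_to_keywords("Which rulings discuss fraud sentencing in 2024?"))

The retrieved documents can then be passed to the LLM as context, exactly as in an embedding-based pipeline; only the retrieval step is different.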
ai intern interview question
Martijn van Nieuwenhoven retweeted
Lost count of the hours I’ve spent wrestling with CORS issues.
I think I agree: In the age of LLMs, smart people are getting smarter, while dumb people are getting dumber.
MCP is awesome! No more integrating LLMs into our apps, but integrating our apps into LLMs.
For a new project, I had to dust off my Solr skills. That turned into a great experiment: Could I build a vector-less AI chat app on a truly large dataset?
Why this works:
⚡ Speed – queries in milliseconds
💰 Cost – no vectors
🎯 Precision – structured filters
🔍 Transparency – every query visible
📈 Scalability – Solr Cloud just handles it
Now I can ask:
• How many robbery cases were tried in 2020?
• What was the highest fraud sentence in 2024?
• Why can a DGA be seen as a freelancer under the DBA law?
All answers come only from official Rechtspraak.nl data.
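To make that concrete: the first question can be answered with nothing more than a filtered Solr count. This is a sketch only; the core name and the crime_type / verdict_date fields are hypothetical, not the actual Rechtspraak.nl schema.

import requests

params = {
    "q": "*:*",
    "fq": ["crime_type:robbery",
           "verdict_date:[2020-01-01T00:00:00Z TO 2020-12-31T23:59:59Z]"],
    "rows": 0,  # only the count is needed, not the documents themselves
    "wt": "json",
}
resp = requests.get("http://localhost:8983/solr/rechtspraak/select", params=params)
print("Robbery cases tried in 2020:", resp.json()["response"]["numFound"])

Because the filters are explicit, the answer is trivially auditable: the exact query that produced the number is right there.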
Enter MCP – Model Context Protocol. Instead of exposing data, it exposes tools that language models can call. I built an MCP server connecting Solr to Claude Desktop and ChatGPT Desktop.
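A rough sketch of what such a server can look like, using the FastMCP helper from the official Python MCP SDK; the Solr URL, core name, and tool signature are placeholders of my own, not the actual implementation.

import json
import requests
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("solr-search")


@mcp.tool()
def search_rulings(query: str, rows: int = 10) -> str:
    """Full-text search over the indexed rulings; returns the top matches as JSON."""
    params = {"q": query, "defType": "edismax", "rows": rows, "wt": "json"}
    resp = requests.get("http://localhost:8983/solr/rechtspraak/select", params=params)
    resp.raise_for_status()
    return json.dumps(resp.json()["response"]["docs"])


if __name__ == "__main__":
    mcp.run()  # stdio transport, so desktop clients can launch and talk to it

Once the server is registered in the desktop client's MCP configuration (claude_desktop_config.json for Claude Desktop), the model decides when to call the tool and sees exactly the documents Solr returns.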
But I still wanted a chat interface. How could I let an LLM query Solr directly without a RAG pipeline?
No token limits. No embedding costs. Just full-text indexing, filters, and facets — the things Solr has always been great at.
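Without MCP, the same idea also works through plain function calling in a custom chat UI: give the model a single search tool whose arguments map directly onto Solr parameters. The tool schema and model name below are illustrative, not the app's actual setup.

import json
from openai import OpenAI

client = OpenAI()

tools = [{
    "type": "function",
    "function": {
        "name": "search_solr",
        "description": "Full-text search over court rulings with optional filter queries.",
        "parameters": {
            "type": "object",
            "properties": {
                "q": {"type": "string", "description": "Keyword query"},
                "fq": {"type": "array", "items": {"type": "string"},
                       "description": "Solr filter queries, e.g. on year or crime type"},
            },
            "required": ["q"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What was the highest fraud sentence in 2024?"}],
    tools=tools,
)
call = response.choices[0].message.tool_calls[0]  # assumes the model chose to call the tool
print(call.function.name, json.loads(call.function.arguments))

The arguments come back as plain Solr syntax, so every query the model runs can be logged and replayed, which is exactly the transparency point from the list above.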