Moss (@usemoss) is real-time semantic search for conversational AI. By bringing retrieval latency under 10 ms, Moss makes AI conversations feel natural, as fluid and instant as talking to a human. ycombinator.com/launches/Oiq… Congrats on the launch, @srimalireddi & @ImHarshaNalluru!

Nov 3, 2025 · 7:00 PM UTC

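For readers curious what a sub-10 ms retrieval budget means inside a single conversational turn, here is a minimal, hypothetical sketch. It does not use Moss's actual API; the corpus, embeddings, and `search()` helper are stand-ins, and the only point is timing the retrieval step against a 10 ms budget before the response is generated.

```python
# Hypothetical sketch (not Moss's API): checking a retrieval step against
# the <10 ms budget discussed in the launch post. Corpus and search() are
# illustrative stand-ins.
import time
import numpy as np

RETRIEVAL_BUDGET_MS = 10.0  # latency target from the launch post

# Toy corpus: pretend these are precomputed, normalized document embeddings.
rng = np.random.default_rng(0)
corpus = rng.standard_normal((50_000, 384)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

def search(query_vec: np.ndarray, k: int = 5) -> np.ndarray:
    """Brute-force cosine similarity; returns indices of the top-k documents."""
    scores = corpus @ query_vec
    return np.argpartition(scores, -k)[-k:]

# Simulate one conversational turn: embed the query (stubbed), then retrieve.
query = rng.standard_normal(384).astype(np.float32)
query /= np.linalg.norm(query)

start = time.perf_counter()
top_docs = search(query)
elapsed_ms = (time.perf_counter() - start) * 1_000

print(f"retrieved {len(top_docs)} docs in {elapsed_ms:.2f} ms "
      f"({'within' if elapsed_ms <= RETRIEVAL_BUDGET_MS else 'over'} the "
      f"{RETRIEVAL_BUDGET_MS:.0f} ms budget)")
```

At this toy scale a brute-force scan can fit the budget; at production corpus sizes, staying under 10 ms per query is exactly the kind of problem a purpose-built retrieval engine like Moss targets.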
Dope value prop. The semantic angle makes sense for anything related to search/entity discovery. Can't wait to try it ~ let's launch/spotlight you guys on microlaunch.net whenever it helps, btw.
We're thrilled to host Prof. Sung-Il Kim from Korea University at APCG2026! Join us to explore a future where the brain & AI co-evolve. Feb 7–11, 2026, Jeddah, Saudi Arabia. @UBT_EDU
That's a fantastic innovation, Y. Keeping retrieval latency so low is truly a game changer for conversational AI, I must say!
Congrats on the launch!
I want this for searching through Slack, Jira, Gmail, etc.
Absolutely critical for fluid AI conversations. Low-latency retrieval is a tough problem that often starts at the database level. Our converged, multimodal database is designed to power real-time AI applications like this. Great work, @usemoss!
Sub-10 ms retrieval latency is revolutionary: Moss eliminates the awkward pause in AI conversations, finally bridging the gap between human and AI interaction speed.
10ms latency is impressive for real-time semantic search.
This is incredible. Reducing latency to under 10ms is a game-changer for making AI feel truly human. Can't wait to see how this shapes the future of conversational tech. Amazing work, team!
Lightning-fast. Exciting to see what Moss is building — congrats on the launch! ⚡️
where was the Good AI example in this demo?
Real-time search is a game changer. Makes AI feel way more human. Can't wait to see how this evolves!
Congratulations on the successful launch of Moss! This innovation enables faster, more natural conversational AI, marking a truly significant and positive technological step.