Flexible GraphRAG: a configurable open source framework for GraphRAG
Flexible GraphRAG is an open source Python platform supporting document processing, automatic knowledge graph building, schema support, RAG and GraphRAG setup, hybrid search (full-text, vector, graph), and AI Q&A querying.
It has an MCP server, a FastAPI backend, Docker support, and Angular, React, and Vue UI clients.
Built with LlamaIndex, which provides abstractions that allow multiple vector databases, search engines, graph databases, and LLMs to be supported.
It currently supports:
Graph Databases: Neo4j, ArcadeDB, FalkorDB, Kuzu, NebulaGraph (coming: Memgraph and Amazon Neptune)
Vector Databases: Qdrant, Elasticsearch, OpenSearch, Neo4j, Milvus (coming: Weaviate, Chroma, Pinecone, PostgreSQL, LanceDB)
Search Databases/Engines: Elasticsearch, OpenSearch, LlamaIndex built-in BM25
LLMs: any provider supported by LlamaIndex (OpenAI, Ollama, Claude, Gemini, etc.)
Data Sources (using LlamaIndex readers): working: Web Pages, Wikipedia, YouTube; untested: Google Drive, Microsoft OneDrive, S3, Azure Blob, GCS, Box, SharePoint; previously supported: filesystem, Alfresco, CMIS.
A configurable hybrid search system that optionally combines vector similarity search, full-text search, and knowledge graph GraphRAG over documents processed with Docling from multiple data sources (filesystem, Alfresco, CMIS, etc.). It has both a FastAPI backend with REST endpoints and a Model Context Protocol (MCP) server for MCP clients such as Claude Desktop. It also has simple Angular, React, and Vue UI clients (which use the REST APIs of the FastAPI backend) for interacting with the system.
Hybrid Search: Combines vector embeddings, BM25 full-text search, and graph traversal for comprehensive document retrieval
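For illustration, here is a minimal sketch of the vector + BM25 part of that kind of hybrid retrieval using LlamaIndex. This is an assumed example, not the project's actual code; it presumes the llama-index-retrievers-bm25 package, a configured embedding model/API key, and a placeholder "./docs" folder (graph retrieval would be layered on separately).

```python
# Hedged sketch: fuse vector similarity and BM25 keyword retrieval with LlamaIndex.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.retrievers import QueryFusionRetriever
from llama_index.retrievers.bm25 import BM25Retriever  # from llama-index-retrievers-bm25

documents = SimpleDirectoryReader("./docs").load_data()  # "./docs" is a placeholder path
index = VectorStoreIndex.from_documents(documents)

vector_retriever = index.as_retriever(similarity_top_k=5)
bm25_retriever = BM25Retriever.from_defaults(docstore=index.docstore, similarity_top_k=5)

# Reciprocal rank fusion merges the two ranked result lists into one.
hybrid_retriever = QueryFusionRetriever(
    [vector_retriever, bm25_retriever],
    similarity_top_k=5,
    num_queries=1,  # no LLM query expansion, just fuse the two retrievers
    mode="reciprocal_rerank",
)
nodes = hybrid_retriever.retrieve("example query about the ingested documents")
```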
Knowledge Graph GraphRAG: Extracts entities and relationships from documents to create graphs in graph databases for graph-based reasoning
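As a hedged sketch of this step (again an assumption, not the repository's code), LlamaIndex's PropertyGraphIndex can use an LLM to extract entities and relationships and persist them to a graph database such as Neo4j; connection details and paths below are placeholders.

```python
# Hedged sketch: LLM-driven entity/relationship extraction into Neo4j via LlamaIndex.
from llama_index.core import PropertyGraphIndex, SimpleDirectoryReader
from llama_index.graph_stores.neo4j import Neo4jPropertyGraphStore  # llama-index-graph-stores-neo4j

# Placeholder connection details for a local Neo4j instance.
graph_store = Neo4jPropertyGraphStore(
    username="neo4j", password="password", url="bolt://localhost:7687"
)

documents = SimpleDirectoryReader("./docs").load_data()  # placeholder path

# The configured LLM extracts entities and relationships and writes them to the graph store.
index = PropertyGraphIndex.from_documents(
    documents,
    property_graph_store=graph_store,
    show_progress=True,
)

# Graph-aware querying over the extracted knowledge graph.
query_engine = index.as_query_engine(include_text=True)
print(query_engine.query("How are the main entities in these documents related?"))
```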
Configurable Architecture: LlamaIndex provides abstractions for vector databases, graph databases, search engines, and LLM providers
Multi-Source Ingestion: Processes documents from filesystems, CMIS repositories, and Alfresco systems
FastAPI Server with REST API: FastAPI server exposing REST endpoints for document ingestion, hybrid search, and AI Q&A queries
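A hedged illustration of what calling such a backend could look like; the endpoint paths and payload fields below are hypothetical placeholders, not the project's documented API.

```python
# Hypothetical REST calls against a local FastAPI backend (paths and fields are assumptions).
import requests

BASE = "http://localhost:8000"  # assumed local backend address

# Ingest documents from a chosen data source (hypothetical endpoint)
requests.post(f"{BASE}/ingest", json={"data_source": "filesystem", "path": "./docs"})

# Hybrid search over the ingested content (hypothetical endpoint)
hits = requests.post(f"{BASE}/search", json={"query": "renewal terms"}).json()

# AI Q&A over the ingested content (hypothetical endpoint)
answer = requests.post(f"{BASE}/query", json={"query": "Summarize the renewal terms"}).json()
print(hits, answer)
```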
MCP Server: MCP server that gives MCP clients such as Claude Desktop tools for document and text ingestion, hybrid search, and AI Q&A queries.
UI Clients: Angular, React, and Vue UI clients support choosing the data source (filesystem, Alfresco, CMIS, etc.), ingesting documents, performing hybrid searches, and running AI Q&A queries.
Deployment Flexibility: Supports both standalone and Docker deployment modes. The Docker infrastructure provides modular database selection via docker-compose includes: vector, graph, and search databases can be included or excluded by commenting out a single line. Choose between hybrid deployment (databases in Docker, backend and UIs standalone) or full containerization.
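As a hedged sketch of that docker-compose pattern (the file names here are illustrative, not the repository's actual layout; the top-level include element requires Docker Compose v2.20+):

```yaml
# Illustrative compose file: pick databases by commenting lines in the include list.
include:
  - includes/neo4j.yaml            # graph database
  - includes/qdrant.yaml           # vector database
  # - includes/elasticsearch.yaml  # commented out = excluded from the stack
```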
github.com/stevereiner/flexi…
#GraphRAG #GraphDB #OpenSource #EmergingTech #LLMs #VectorDB #Python
--
The Year of the Graph's Autumn 2025 newsletter issue on all things #KnowledgeGraph, #GraphDB, Graph #Analytics / #DataScience / #AI and #SemTech is out.
Subscribe and follow to be in the know. Reach out if you'd like to be featured 👇
yearofthegraph.xyz/newslette…