Frequently Asked Questions
Common questions about WeKnora, RAG, document understanding, and building knowledge-base applications.
What is WeKnora?
WeKnora is an open-source framework for document understanding, semantic retrieval, and context-aware Q&A. It uses the RAG (Retrieval-Augmented Generation) paradigm: documents are parsed, chunked, and indexed for vector search; when users ask questions, relevant chunks are retrieved and an LLM generates answers from that context.
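The parse-chunk-index-retrieve flow can be sketched end to end. This is an illustrative toy (fixed-size word chunking and bag-of-words cosine similarity), not WeKnora's actual implementation, which uses structure-aware parsing and real embeddings:

```python
import math
from collections import Counter

def chunk(text, size=40):
    """Naive fixed-size chunking by words (a real chunker is structure-aware)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy 'embedding': a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    """Rank indexed chunks by similarity to the query; return the top k."""
    q = embed(query)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return ranked[:k]

doc = ("WeKnora parses documents into chunks. Chunks are indexed "
       "for vector search. Relevant chunks ground the LLM's answer.")
top = retrieve("how are documents indexed", chunk(doc, size=8))
print(top[0])
```

The retrieved chunks are what gets handed to the LLM as context in the generation step.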
Is WeKnora free to use?
Yes. WeKnora is open source under the MIT License. You can use it for commercial and non-commercial projects without licensing fees.
What is RAG?
RAG stands for Retrieval-Augmented Generation. It combines a retrieval step (e.g. semantic search over your documents) with an LLM generation step, so answers are grounded in your data and stay up to date. Read more: What is RAG?
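The "augmented generation" half amounts to assembling the retrieved chunks into the LLM's prompt so the answer is grounded in your data. A minimal sketch; the template wording is illustrative, not WeKnora's:

```python
def build_prompt(question, retrieved_chunks):
    """Ground the LLM's answer in retrieved context rather than its training data."""
    context = "\n\n".join(f"[{i + 1}] {c}" for i, c in enumerate(retrieved_chunks))
    return (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

chunks = ["WeKnora is MIT-licensed.", "WeKnora supports PDF and Markdown."]
prompt = build_prompt("What license does WeKnora use?", chunks)
print(prompt)
# The assembled prompt is then sent to whichever LLM backend is configured.
```

Because the model sees fresh context at query time, updating the knowledge base updates the answers without retraining.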
How do I get started?
The default setup requires Docker. Clone the repository, then start the services with the provided scripts (e.g. ./scripts/start_all.sh). See the Getting Started guide for step-by-step instructions.
What are the system requirements?
You need Docker, 4GB+ RAM, and sufficient disk space for documents and vector indexes. For production, refer to the documentation for scaling and resource guidance.
What document formats does WeKnora support?
WeKnora supports PDF, Word (.docx), Markdown, plain text, HTML, and URLs. The document understanding engine parses layout and structure for better chunking and retrieval.
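Structure-aware parsing matters because splitting at layout boundaries (headings, sections) keeps each chunk coherent, instead of cutting mid-topic at an arbitrary character count. A toy illustration for Markdown input, not WeKnora's actual engine:

```python
import re

def chunk_markdown_by_heading(md):
    """Split a Markdown document at headings so each chunk is one coherent section."""
    sections = re.split(r"(?m)^(?=#{1,6} )", md)  # split *before* each heading line
    return [s.strip() for s in sections if s.strip()]

md = ("# Intro\nWeKnora overview.\n\n"
      "## Install\nRun the start script.\n\n"
      "## API\nREST endpoints.")
for c in chunk_markdown_by_heading(md):
    print(repr(c))
```

Each chunk carries its own heading, which also gives the retriever and the LLM useful context about where the text came from.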
Does WeKnora provide an API?
Yes. WeKnora exposes a RESTful API for knowledge bases, documents, search, and conversations. There is also an official Go client. Full details are in the API Reference.
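As a sketch of what calling a search endpoint from a client might look like: the base URL, path, and field names below are assumptions for illustration only, not WeKnora's documented contract (consult the API Reference for the real endpoints):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/api/v1"  # assumed local deployment address

def build_search_request(kb_id, query, top_k=5):
    """Construct (but do not send) a hypothetical knowledge-base search request."""
    body = json.dumps({"knowledge_base_id": kb_id, "query": query, "top_k": top_k}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/search",  # hypothetical endpoint path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_search_request("kb-demo", "supported file formats")
print(req.full_url, req.get_method())
# Sending it against a running instance would be: urllib.request.urlopen(req)
```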
Which LLM providers does WeKnora work with?
WeKnora is designed to work with multiple LLM providers (e.g. OpenAI-compatible APIs, Ollama). Configuration is described in the documentation.
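OpenAI-compatible backends (including Ollama's OpenAI-compatible endpoint) all accept the standard chat-completions request shape, which is what makes provider swapping practical. A sketch of that body; the model name is an assumption:

```python
import json

def chat_request_body(model, system, user):
    """Build a chat-completions body accepted by OpenAI-compatible backends."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }

body = chat_request_body(
    "llama3",  # example Ollama model name (assumption)
    "Answer from the provided context only.",
    "What is WeKnora?",
)
print(json.dumps(body, indent=2))
```

Switching providers then comes down to changing the base URL, API key, and model name in configuration rather than changing application code.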
Where can I get help?
For troubleshooting, contribution guidelines, and community support, see Documentation, Community, and Contact.