
Intelligence

Knowledge Bases & RAG —
Contextual knowledge for your models.

Upload documents and Mycelis automatically creates vector embeddings. On every request, relevant content is injected as context — with no vector database or embedding pipeline of your own to run.

What is RAG?

RAG (Retrieval-Augmented Generation) is a technique where the model does not answer from memory alone. It first retrieves relevant documents from a database and uses them as context.

Result: the model responds based on your latest documents — fewer hallucinations from outdated information and no base-model retraining required.
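The retrieve-then-augment idea can be shown in a few lines. This is a minimal pure-Python sketch, not Mycelis's implementation: the chunks, their three-dimensional "embeddings", and the function names are all toy stand-ins (real embeddings have hundreds or thousands of dimensions).

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

# Toy knowledge base: (chunk text, pre-computed embedding).
chunks = [
    ("Invoices are due within 30 days.", [0.9, 0.1, 0.0]),
    ("The office is closed on public holidays.", [0.1, 0.8, 0.2]),
]

def answer_with_context(query_vec, question):
    # 1. Retrieve: pick the chunk most similar to the query embedding.
    best_text, _ = max(chunks, key=lambda c: cosine(query_vec, c[1]))
    # 2. Augment: inject the retrieved chunk into the prompt the model sees.
    return f"Context:\n{best_text}\n\nQuestion: {question}"

prompt = answer_with_context([0.85, 0.15, 0.05], "When are invoices due?")
```

The model then answers from the injected context rather than from its training data alone — which is why stale base-model knowledge matters less.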

Supported file formats

  • PDF: text, tables
  • TXT / MD: plain text, Markdown
  • DOCX: Word documents
  • HTML / JSON: structured content

Automatic embedding pipeline

01. Upload document

PDF, TXT, DOCX, or Markdown. Maximum file size: 50 MB per file, 500 MB per knowledge base.

02. Chunking

Mycelis automatically splits the document into semantic chunks. Default: 512 tokens with 50-token overlap.
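The overlap mechanics behind those defaults look roughly like this. A hedged sketch: Mycelis chunks semantically, whereas this toy `chunk` function is plain fixed-size splitting, and the numbered strings merely stand in for real tokenizer output.

```python
def chunk(tokens, size=512, overlap=50):
    # Fixed-size chunking: consecutive chunks share `overlap` tokens,
    # so a sentence spanning a boundary is never lost entirely.
    step = size - overlap
    return [tokens[i:i + size] for i in range(0, len(tokens), step)]

tokens = [f"tok{i}" for i in range(1000)]  # stand-in for tokenizer output
pieces = chunk(tokens)
```

With 1,000 tokens and the 512/50 defaults, this yields three chunks, each starting 462 tokens after the previous one.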

03. Embedding

Each chunk is converted into a 1536-dimensional vector using OpenAI's text-embedding-3-small.

04. Store in Qdrant

Vectors are stored in a dedicated Qdrant collection — isolated per workspace.

05. Retrieve on request

On each model request, the query is vectorized, and the most similar chunks are retrieved and inserted as context.
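Ranking at query time reduces to a top-k similarity search. A minimal sketch under toy assumptions — three-dimensional vectors instead of real embeddings, and an in-memory list where production systems use a vector database:

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity of two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b)))

def retrieve(query_vec, index, k=3):
    # Score every stored chunk against the query embedding and
    # return the k best, highest score first.
    scored = [(cosine(query_vec, vec), text) for text, vec in index]
    return sorted(scored, reverse=True)[:k]

index = [
    ("pricing tiers", [0.9, 0.1, 0.0]),
    ("uptime SLA",    [0.2, 0.9, 0.1]),
    ("gpu quotas",    [0.7, 0.3, 0.1]),
]
hits = retrieve([1.0, 0.0, 0.0], index, k=2)
```

Only the top-k chunks are inserted as context, which keeps the prompt within the model's context window regardless of how large the knowledge base grows.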

Data security

  • Documents and vectors encrypted at rest (AES-256)
  • Each workspace has an isolated Qdrant collection
  • No sharing of documents with third parties
  • GDPR-compliant, EU data centers
  • Full deletion on request or account termination

Frequently asked questions

How many documents can I upload?

There is no fixed per-document limit. Knowledge bases can hold up to 500 MB total. For larger requirements, contact sales@mycelis.io.

Which embedding model is used?

Default: OpenAI text-embedding-3-small (1536 dimensions). For local embeddings without external API calls, contact us — we also support local embedding models on GPU instances.

Can I connect multiple knowledge bases to one agent?

Yes. One agent (VirtualModel) can query multiple knowledge bases. Search runs in parallel and the most relevant chunks are merged.
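Merging parallel results from several knowledge bases is a rank-and-cut over scored hits. A sketch with invented scores and texts — `merge_hits` is an illustrative helper, not part of the Mycelis API:

```python
import heapq

def merge_hits(*result_lists, k=5):
    # Each result list holds (score, chunk_text) pairs from one knowledge
    # base, queried in parallel. Keep the k highest-scoring chunks overall.
    return heapq.nlargest(k, (hit for hits in result_lists for hit in hits))

kb_a = [(0.91, "Refunds are processed within 14 days."),
        (0.62, "Shipping takes 3 to 5 business days.")]
kb_b = [(0.88, "The warranty covers two years."),
        (0.40, "Office hours are 9 to 5.")]
merged = merge_hits(kb_a, kb_b, k=3)
```

Because scores from the same embedding model are directly comparable, a simple global sort suffices; no re-ranking model is required for this step.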

Are documents fully removed when deleted?

Yes. When a document is deleted, both the raw data and all related vectors in Qdrant are removed. This is immediate and irreversible.

Models that understand your documents.

No vector database to run, no embedding code to maintain. Just upload and go.

Start for free