
Changelog

What’s new at Mycelis.

All releases, features, and fixes — chronological, transparent, and with context.

v0.3.0
March 18, 2025
New

Smart Routing — rule-based routing based on token budget, latency, and model type. One VirtualModel slug with multiple deployments behind it.
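As a rough illustration, a routing rule set like the one below could steer requests between deployments behind a single VirtualModel slug. The field names and selection logic are hypothetical, not the actual Mycelis configuration schema:

```python
# Hypothetical sketch of rule-based routing between deployments behind
# one VirtualModel slug. Field names are illustrative, not the real API.

def pick_deployment(rules, request_tokens):
    """Return the first deployment whose token budget fits the request."""
    for rule in rules:
        if request_tokens <= rule["max_tokens"]:
            return rule["deployment"]
    # Fall through to the last (largest-context) deployment.
    return rules[-1]["deployment"]

rules = [
    {"max_tokens": 4_000,  "deployment": "mistral-7b-a40"},     # small, cheap
    {"max_tokens": 32_000, "deployment": "mixtral-8x7b-h100"},  # long context
]

print(pick_deployment(rules, 1_200))   # -> mistral-7b-a40
```

A real router would also weigh latency and model type, as the entry describes; this sketch shows only the token-budget dimension.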

New

MCP Library — support for the Model Context Protocol. Expose external APIs as tools directly within the model context.
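In the Model Context Protocol, a tool is declared with a name, a description, and a JSON-Schema input definition. A minimal definition in that shape (the weather tool itself is a made-up example of an external API exposed to the model) might look like:

```python
# A minimal tool definition in the Model Context Protocol shape:
# name, description, and a JSON-Schema "inputSchema". The weather tool
# is a made-up example, not part of the Mycelis library.
tool = {
    "name": "get_weather",
    "description": "Fetch current weather for a city from an external API.",
    "inputSchema": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}

print(tool["name"])
```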

New

Three new GPU types: NVIDIA A40, L4, and H100 SXM — for training, inference, and batch workloads.

New

VirtualModel system — one slug, multiple deployments. Automatic fallback on instance failure.

Improved

Routing engine fully revised — 40% lower P99 latency in multi-deployment setups.

Improved

API Gateway: streaming responses are now fully SSE-compatible, including long responses.

Fixed

Token counting for Anthropic models was ~12% too high — corrected to the official cl100k tokenization.

v0.2.0
February 12, 2025
New

Fine-Tuning Wizard — LoRA training directly in the dashboard. JSONL upload, hyperparameter configuration, and training job launch.
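For the JSONL upload, a common shape for chat fine-tuning data is one messages object per line; the exact schema the wizard accepts may differ from this sketch:

```python
import json

# One training example per line of the uploaded .jsonl file, in the
# widely used messages-based chat format. The exact schema the wizard
# expects may differ from this sketch.
example = {
    "messages": [
        {"role": "user", "content": "What is LoRA?"},
        {"role": "assistant",
         "content": "LoRA is a parameter-efficient fine-tuning method."},
    ]
}

line = json.dumps(example)
print(line)
```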

New

Knowledge Base v2 — multi-document RAG with semantic search. Supports PDF, TXT, Markdown, and DOCX.

New

BYOK for Google Gemini — route and log your own Gemini API keys through Mycelis.

New

Workspace roles — admin and member with granular access permissions. Team invitations via email.

Improved

Dashboard load time reduced by ~60% through lazy loading of the deployment list.

Improved

Usage dashboard: token consumption can now be filtered by model, deployment, and time range.

Fixed

Rare race condition when starting multiple training jobs simultaneously.

Fixed

BYOK OpenAI: requests with very long completions (>30s) incorrectly failed with a timeout.

v0.1.0
January 7, 2025
New

GPU instance deployment — deploy open-source models on dedicated RunPod hardware. Supported models: Llama 3.1, Mistral 7B, Mixtral 8x7B.

New

OpenAI-compatible proxy endpoint — drop-in replacement for the OpenAI API. Just switch base_url and api_key.
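In practice the switch is just pointing an OpenAI-style request at the Mycelis gateway. The base URL and token below are placeholders, not real values:

```python
import json
import urllib.request

BASE_URL = "https://api.mycelis.example/v1"  # placeholder, not the real endpoint
API_KEY = "your-mycelis-token"               # placeholder for your key

# The body is the standard OpenAI chat-completions shape, so any OpenAI
# SDK pointed at BASE_URL with api_key=API_KEY sends the same request.
body = {
    "model": "llama-3.1-8b",
    "messages": [{"role": "user", "content": "Hello"}],
}

req = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(body).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req) would send it; omitted here.
print(req.full_url)
```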

New

Knowledge Bases v1 — upload documents and use them as context in completions.

New

PAT-based authentication — Personal Access Tokens for secure API communication.

New

Managed Keys — OpenAI and Anthropic via Mycelis pay-per-token without your own API keys.

New

Usage & Billing Dashboard — real-time usage overview with daily billing.

Stay up to date

New releases are announced on the blog. Follow us for updates.

Go to blog