
Fine-tuning with LoRA —
Your models, your data.

Specialize open-source models for your domain. Upload your own training data, configure hyperparameters, and deploy the finished model directly in the Mycelis ecosystem.

What is LoRA fine-tuning?

LoRA (Low-Rank Adaptation) is an efficient fine-tuning method that adjusts only a small subset of model weights instead of retraining all parameters. The result: roughly 90% less GPU memory than full fine-tuning, with comparable output quality.

LoRA adapters are stored as small additional weights (~10–100 MB) and combined with the base model at deployment time.
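The low-rank idea can be sketched numerically: instead of learning a full d×d weight delta, LoRA learns two small matrices B (d×r) and A (r×d) whose product is added to the frozen base weight at deployment time. A minimal NumPy sketch (shapes and init are illustrative, not the Mycelis implementation):

```python
import numpy as np

d, r = 4096, 8                      # hidden size, LoRA rank
W = np.zeros((d, d))                # frozen base weight (stand-in values)
A = np.random.randn(r, d) * 0.01    # trainable down-projection
B = np.zeros((d, r))                # trainable up-projection (starts at zero)

# Merged weight at deployment time: W' = W + B @ A
W_merged = W + B @ A

full = W.size                       # parameters touched by a full fine-tune of W
lora = A.size + B.size              # parameters in the LoRA adapter
print(f"trainable fraction: {lora / full:.4%}")
```

For this single matrix the adapter holds under 0.4% of the parameters, which is why the stored adapters stay in the 10–100 MB range.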

When is fine-tuning worth it?

Domain vocabulary

Medicine, law, engineering — the model learns domain-specific terms and relationships.

Tone & style

Company tone in output: formal, casual, technical — the model writes like your team.

Task specialization

Classification, extraction, structuring — higher precision for specific tasks.

Language & dialect

Optimize for specific languages, regional phrasing, or internal naming conventions.

Step-by-step process

01

Prepare data

JSONL format: {"prompt": "...", "completion": "..."}. Plan on at least 50–200 examples for good results; we recommend 500–2000.
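Writing the training file is one JSON object per line. A minimal sketch (the example pairs are placeholders; the field names follow the format above):

```python
import json

# Placeholder prompt/completion pairs -- replace with your own data.
examples = [
    {"prompt": "Summarize: The patient presents with acute symptoms.",
     "completion": "Acute presentation; details follow."},
    {"prompt": "Classify the ticket: 'Invoice is wrong'",
     "completion": "billing"},
]

with open("train.jsonl", "w", encoding="utf-8") as f:
    for ex in examples:
        f.write(json.dumps(ex, ensure_ascii=False) + "\n")
```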

02

Upload data

Upload to the knowledge base or directly as a training file. Supported formats: JSONL, CSV.
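If your data starts out as CSV, converting it to the JSONL format from step 01 takes only a few lines. A sketch assuming columns named "prompt" and "completion" (the sample CSV is written here just to make the example self-contained):

```python
import csv
import json

# A tiny sample CSV -- in practice, use your own export.
with open("train.csv", "w", newline="", encoding="utf-8") as f:
    w = csv.writer(f)
    w.writerow(["prompt", "completion"])
    w.writerow(["Classify the ticket: 'Invoice is wrong'", "billing"])

# Convert each CSV row into one JSONL training example.
with open("train.csv", newline="", encoding="utf-8") as src, \
     open("train.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        dst.write(json.dumps({"prompt": row["prompt"],
                              "completion": row["completion"]},
                             ensure_ascii=False) + "\n")
```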

03

Base model & hyperparameters

Choose base model (e.g. Llama 3.1 8B), learning rate, epochs, and LoRA rank (r=8 is a good starting point).
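A configuration along these lines might look as follows. This is a hypothetical sketch to show the knobs involved; the exact field names in the Mycelis dashboard or API may differ:

```python
# Hypothetical training configuration -- key names are illustrative.
config = {
    "base_model": "llama-3.1-8b",   # affordable default; 70B for harder tasks
    "training_file": "train.jsonl",
    "learning_rate": 2e-4,          # common LoRA range: 1e-4 to 3e-4
    "epochs": 3,
    "lora_rank": 8,                 # r=8 is a good starting point
    "lora_alpha": 16,               # scaling factor; often set to 2 * rank
}
```

Raising the rank increases adapter capacity (and size); it is usually worth trying r=8 first and only doubling it if evaluation results fall short.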

04

Start training

Mycelis automatically starts a GPU instance for training (an RTX 4090 or A100 80GB, depending on the base model). Billing is based on GPU hours.
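Because billing is per GPU hour, estimating cost is a single multiplication: hourly rate times expected training time. The rates and durations below are the ones from the cost table on this page:

```python
def training_cost(rate_per_hour: float, hours: float) -> float:
    """GPU-hour billing: cost = hourly rate x training time."""
    return round(rate_per_hour * hours, 2)

# A100 80GB at EUR 1.99/h, ~6-12 hrs for Llama 3.1 70B:
print(training_cost(1.99, 6), "to", training_cost(1.99, 12))

# RTX 4090 at EUR 0.39/h, ~2-4 hrs for Llama 3.1 8B:
print(training_cost(0.39, 2), "to", training_cost(0.39, 4))
```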

05

Evaluation & deploy

Review sample outputs in the dashboard. If you are satisfied, deploy the model directly and you're done.

Training costs

Model         | GPU                 | Training time (500 examples) | Estimated cost
Llama 3.1 8B  | RTX 4090 (€0.39/h)  | ~2–4 hrs                     | €0.78–€1.56
Llama 3.1 70B | A100 80GB (€1.99/h) | ~6–12 hrs                    | €11.94–€23.88
Mistral 7B    | RTX 4090 (€0.39/h)  | ~1.5–3 hrs                   | €0.59–€1.17

Estimated values for LoRA fine-tuning. Actual duration depends on data volume and hyperparameters.

Frequently asked questions

How much training data do I need?

For simple style adaptation, 50–200 examples are enough. For domain vocabulary we recommend 500–2000. Quality beats quantity — a few precise examples are better than many unspecific ones.

Which base model should I choose?

Llama 3.1 8B is the best starting point for most use cases: affordable, fast, and already very capable. Use Llama 3.1 70B for higher precision on complex tasks.

Do I own the trained weights?

Yes. The trained LoRA adapters are fully yours and remain stored in your workspace. You can export them as well.

Can I use the fine-tuned model as a deployment?

Yes. After training, you can launch a deployment directly with the fine-tuned model. It appears like any other deployment in the dashboard and is OpenAI-API compatible.
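"OpenAI-API compatible" means any OpenAI-style client can talk to the deployment by pointing it at your endpoint. A sketch of the request payload for the standard chat completions route (URL, model name, and key below are placeholders, not actual Mycelis values):

```python
import json

# Payload for POST https://<your-deployment>/v1/chat/completions
# with header "Authorization: Bearer YOUR_MYCELIS_API_KEY".
# Model name and message are illustrative placeholders.
payload = {
    "model": "my-finetuned-llama",
    "messages": [
        {"role": "user", "content": "Classify the ticket: 'Invoice is wrong'"}
    ],
}
print(json.dumps(payload, indent=2))
```

Because the route and payload shape match the OpenAI API, existing SDKs and tools should work unchanged once their base URL points at your deployment.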

Your model, your weights.

Training starts from ~€1 for small models. No minimum, no base fee.

Start fine-tuning now