Tutorial
·
10 min read
Introducing LoRA fine-tuning
Learn what LoRA is, when it makes sense, and how to run your first training job on Mycelis.
When LoRA is worth it
- When your agent must use consistent domain language.
- When style and behavior need to stay stable across interactions.
- When prompt-only approaches are not enough to reach the quality you need.
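Under the hood, LoRA freezes the base weight matrix W and learns only a low-rank update ΔW = B·A, which is why it is cheap compared to full fine-tuning. A minimal numpy sketch of the idea; the layer size and rank here are illustrative, not taken from any Mycelis model:

```python
import numpy as np

d, k, r = 1024, 1024, 8           # illustrative layer dimensions and LoRA rank

W = np.random.randn(d, k)         # frozen base weight (never updated)
A = np.random.randn(r, k) * 0.01  # trainable low-rank factor
B = np.zeros((d, r))              # trainable factor, zero-initialized so ΔW starts at 0

x = np.random.randn(k)
y = W @ x + B @ (A @ x)           # LoRA forward pass: base output + low-rank update

full_params = d * k
lora_params = r * (d + k)
print(f"trainable fraction: {lora_params / full_params:.2%}")  # → trainable fraction: 1.56%
```

Because B starts at zero, the adapted model is exactly the base model before training, and only the small A and B matrices receive gradients.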
Prepare data
- Collect high-quality example dialogs.
- Keep style and format consistent.
- Remove duplicates and noisy samples.
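The cleanup steps above can be sketched in a few lines of Python. The field names (`prompt`, `response`) and the minimum-length noise filter are assumptions for illustration; adapt them to your dataset's schema:

```python
def clean_dataset(samples, min_chars=20):
    """Drop duplicates and obviously noisy samples from a list of dialog dicts."""
    seen = set()
    cleaned = []
    for s in samples:
        # Normalize whitespace so trivially different duplicates collapse to one key.
        key = " ".join(s["prompt"].split()) + "\n" + " ".join(s["response"].split())
        if key in seen:
            continue
        # Crude noise filter: skip very short responses.
        if len(s["response"].strip()) < min_chars:
            continue
        seen.add(key)
        cleaned.append(s)
    return cleaned

samples = [
    {"prompt": "What is LoRA?", "response": "A low-rank adaptation method for fine-tuning large models."},
    {"prompt": "What is  LoRA?", "response": "A low-rank adaptation method for fine-tuning large models."},
    {"prompt": "Hi", "response": "ok"},
]
print(len(clean_dataset(samples)))  # → 1
```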
Start training
- Open Fine-Tuning in Mycelis.
- Select base model and dataset.
- Start the job and monitor logs.
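The choices you make in the UI boil down to a small set of parameters. A hypothetical configuration, shown as a Python dict just to illustrate the kinds of knobs involved — Mycelis's actual field names may differ, though `rank`, `alpha`, and `dropout` are standard LoRA hyperparameters:

```python
# Hypothetical job settings -- field names are illustrative, not Mycelis's API.
job_config = {
    "base_model": "your-base-model",  # the base model selected in the UI
    "dataset": "example-dialogs-v1",  # hypothetical dataset name
    "lora": {
        "rank": 8,       # size of the low-rank update; higher = more capacity, more params
        "alpha": 16,     # scaling factor applied to the low-rank update
        "dropout": 0.05, # regularization on the LoRA layers
    },
    "epochs": 3,
    "learning_rate": 1e-4,
}
```

Small ranks (4–16) are a common starting point; raise the rank only if the model fails to pick up the target style or vocabulary.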
Evaluate output
- Compare answers against the base model.
- Test hard prompts and edge cases.
- Deploy only when quality criteria are met.
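One simple way to make "quality criteria" concrete is a fixed evaluation set with pass/fail checks run against both models. A sketch — `query_model` is a hypothetical helper standing in for however you call the base or fine-tuned model, and the eval cases are illustrative:

```python
def evaluate(query_model, eval_set, pass_threshold=0.9):
    """Run each prompt through the model and check that required keywords appear."""
    passed = 0
    for case in eval_set:
        answer = query_model(case["prompt"]).lower()
        if all(kw.lower() in answer for kw in case["must_contain"]):
            passed += 1
    score = passed / len(eval_set)
    return score, score >= pass_threshold

# Run the same prompts through the base and fine-tuned models, including edge cases.
eval_set = [
    {"prompt": "Explain our refund policy.", "must_contain": ["30 days"]},
    {"prompt": "", "must_contain": ["clarify"]},  # edge case: empty prompt
]
```

Keyword checks are a blunt instrument; they catch regressions cheaply, but a human review of a sample of answers is still worth doing before deploying.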