AI Tool Radar

What is Fine-Tuning?

Continuing to train a pre-trained model on a smaller, task-specific dataset to specialize its behavior or knowledge.

Full Definition

Fine-tuning is the process of taking a large pre-trained model and continuing its training on a curated, domain-specific dataset so that the model's weights adapt to a particular task or style. Full fine-tuning updates all model parameters, which is expensive and often unnecessary; parameter-efficient methods such as LoRA (Low-Rank Adaptation) and QLoRA instead freeze most weights and introduce small trainable matrices, dramatically reducing compute and memory requirements. Fine-tuning is used to teach a model a company's proprietary tone of voice, specialize it on medical or legal text, improve instruction-following on narrow tasks, or reduce hallucination rates in a specific domain. It differs from RAG (retrieval-augmented generation) in that the knowledge is baked into the model's weights rather than retrieved at runtime, which makes inference faster but the knowledge harder to update.
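The low-rank idea behind LoRA can be sketched in a few lines. This is a minimal, illustrative example (not a real library API, and the layer names and sizes are assumptions): the pretrained weight matrix is frozen, and only two small factor matrices are trainable, adding r × (d_in + d_out) parameters instead of d_in × d_out.

```python
import numpy as np

class LoRALinear:
    """Sketch of a LoRA-style linear layer: y = x (W + scale * B A)^T."""

    def __init__(self, d_in, d_out, r=2, alpha=4, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight: never updated during fine-tuning.
        self.W = rng.normal(size=(d_out, d_in))
        # Trainable low-rank factors. B starts at zero, so the adapter
        # is a no-op before training and the model's behavior is unchanged.
        self.A = rng.normal(scale=0.01, size=(r, d_in))
        self.B = np.zeros((d_out, r))
        self.scale = alpha / r

    def forward(self, x):
        # Base output plus the low-rank update, computed without ever
        # materializing the full d_out x d_in delta matrix.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(d_in=16, d_out=8, r=2)
x = np.ones((1, 16))
y = layer.forward(x)
# With B initialized to zero, output equals the frozen base layer's output.
assert np.allclose(y, x @ layer.W.T)
```

During fine-tuning only A and B receive gradients; the 2 × 16 and 8 × 2 factors here hold 48 trainable values versus 128 in the frozen weight, and the saving grows with layer size.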
