LoRA adapters

Technique · 1 mention from 1 source

Low-Rank Adaptation (LoRA) is a technique for efficiently fine-tuning large language models: instead of updating a full weight matrix, it trains two much smaller low-rank matrices whose product represents the weight update, leaving the original model weights untouched.
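
To make the mechanism concrete, here is a minimal sketch of a LoRA-style linear layer in PyTorch. The class name, the rank r, and the scaling factor alpha are illustrative assumptions, not details from the source; the point is that the full weight matrix stays frozen while only the two small matrices A and B are trained.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update (delta-W = B @ A).

    Illustrative sketch: names and defaults are assumptions, not from the source.
    """

    def __init__(self, linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.linear = linear
        self.linear.weight.requires_grad_(False)  # freeze the full weight matrix
        if self.linear.bias is not None:
            self.linear.bias.requires_grad_(False)
        # The two smaller matrices: A (r x in) starts random, B (out x r) starts
        # at zero, so the low-rank update contributes nothing before training.
        self.A = nn.Parameter(torch.randn(r, linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(linear.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original frozen projection plus the scaled low-rank correction.
        return self.linear(x) + self.scale * (x @ self.A.T @ self.B.T)
```

For a 4096×4096 projection at rank r = 8, A and B together hold 65,536 trainable parameters versus roughly 16.8 million in the frozen matrix, which is what makes the approach cheap to train and store.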


Mentions

Sebastian Raschka (high confidence):
"One thing people still use is LoRA adapters. These are basically, instead of updating the whole weight matrix, there are two smaller weight matrices"

Attribution: Sebastian Raschka explains LoRA adapters as a technique people use for model customization.