LoRA

Technique · 1 mention from 1 source

Low-Rank Adaptation: a technique for efficiently fine-tuning large language models by freezing the original weights and training only small low-rank adapter layers.
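To make the idea concrete, here is a minimal sketch of a LoRA-style adapter in PyTorch. It is illustrative only: the rank, scaling factor, and layer sizes are assumptions, not details from the research Raschka describes.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B A x, where A and B are small matrices."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze the original weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero, so the adapter initially leaves the model unchanged.
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)


# Only the adapter's A and B matrices are trainable; the base layer is frozen.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} / {total:,}")  # a tiny fraction of the full layer
```

This is what the quote below means by "only fine-tuning a small subset of the weights": the trainable parameter count scales with the rank r rather than with the full weight matrix.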

All mentions

Sebastian Raschka (✓ High confidence):
"For the character training thing, I think this research is built on fine-tuning about 7 billion parameter models with LoRA, which is essentially only fine-tuning a small subset of the weights of the model."

Attribution: Sebastian Raschka explains LoRA as a technique used in character-training research for efficient fine-tuning.