LoRA
technique
Low-Rank Adaptation: a technique for efficiently fine-tuning large language models by freezing the pretrained weights and training only small low-rank adapter matrices added to existing layers.
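A minimal sketch of the idea in PyTorch (the class name, rank, and scaling factor are illustrative assumptions, not from any specific library):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    The effective weight is W + (alpha / r) * B @ A, where A and B are
    small rank-r matrices; only A and B receive gradients.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        # A projects down to rank r, B projects back up; B starts at zero
        # so training begins from the unmodified pretrained model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

# Example: adapt a 4096->4096 projection with rank-8 adapters.
layer = LoRALinear(nn.Linear(4096, 4096), r=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(f"trainable params: {trainable}")  # ~65K, vs ~16.8M in the frozen base
```

Because only the two small matrices train, the adapter adds roughly 65K parameters here against the base layer's ~16.8M, which is what makes the fine-tuning efficient.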
Also mentioned (3)
Casual references without a clear endorsement
Uncapped with Jack Altman
mentioned
"Lora was a you know startup that was like coming behind something that was seems really establish..."
19:49
Sebastian Raschka
mentioned
"For the character training thing, I think this research is built on fine-tuning about 7 billion p..."
2:13:43