
LoRA

technique

Low-Rank Adaptation - a technique for efficiently fine-tuning large language models by freezing the pretrained weights and training only small low-rank adapter matrices added on top of them.
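
A minimal sketch of the idea in PyTorch (the LoRALinear wrapper and the rank and alpha names are illustrative assumptions, not taken from any mention below): the pretrained weight stays frozen and only two small low-rank matrices A and B are trained.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Wrap a frozen linear layer with a trainable low-rank update."""

        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            self.base.weight.requires_grad_(False)   # freeze pretrained weight
            if self.base.bias is not None:
                self.base.bias.requires_grad_(False)
            # Trainable low-rank factors: effective weight is W + (alpha/rank) * B @ A
            self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
            self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
            self.scaling = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Frozen path plus the low-rank correction
            return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

With rank r, the adapter adds only r * (in_features + out_features) trainable parameters per layer instead of in_features * out_features for full fine-tuning, which is where the efficiency comes from.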

Also mentioned (3)

Casual references without a clear endorsement

Jason Lemkin mentioned "Lagora raises 500 at 5.5 billion" (49:51)
Uncapped with Jack Altman mentioned "Lora was a you know startup that was like coming behind something that was seems really establish..." (19:49)
Sebastian Raschka mentioned "For the character training thing, I think this research is built on fine-tuning about 7 billion p..." (2:13:43)