LoRA
technique
1 mention from 1 source
Low-Rank Adaptation - a technique for efficiently fine-tuning large language models by training only small adapter layers.
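A minimal sketch of the adapter idea in PyTorch, assuming a standard linear layer; the class name LoRALinear and the rank/alpha defaults are illustrative choices, not from the source:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update: W + (alpha/rank) * B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the original weights; only the adapter factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # A: (rank x in_features), B: (out_features x rank). B starts at zero,
        # so the wrapped layer initially behaves exactly like the base layer.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

For a 4096x4096 layer with rank 8, the adapter adds about 65K trainable parameters against roughly 16.8M frozen ones, which is the sense in which only a small subset of the model's weights is fine-tuned.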
All mentions
"For the character training thing, I think this research is built on fine-tuning about 7 billion parameter models with LoRA, which is essentially only fine-tuning a small subset of the weights of the model."
From: State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490 • 2:13:43 • Jan 2026
Attribution: Sebastian explains LoRA as a technique used in character training research for efficient fine-tuning.