LoRA adapters
technique
1 mention from 1 source
Low-Rank Adaptation (LoRA) is a technique for efficiently fine-tuning large language models: instead of updating a full weight matrix, it trains a pair of much smaller low-rank matrices whose product represents the weight update.
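A minimal sketch of the idea in PyTorch, matching the "two smaller weight matrices" described in the mention below. The class name `LoRALinear` and the `rank`/`alpha` hyperparameters are illustrative assumptions, not from the source:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update.

    Instead of updating the full weight matrix W (d_out x d_in), LoRA
    trains two small matrices A (r x d_in) and B (d_out x r), so the
    effective weight becomes W + (alpha / r) * B @ A.
    """

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weights
        d_out, d_in = base.weight.shape
        # A starts small and random; B starts at zero, so training
        # begins from the unmodified pretrained model.
        self.lora_A = nn.Parameter(torch.randn(rank, d_in) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(d_out, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen pretrained path plus the low-rank adapter path.
        return self.base(x) + self.scaling * (x @ self.lora_A.T) @ self.lora_B.T
```

With `rank=8` on a 768x768 layer, only the two adapter matrices receive gradients, cutting trainable parameters from 768*768 to 8*(768+768), roughly a 48x reduction for that layer.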
All mentions
"One thing people still use is LoRA adapters. These are basically, instead of updating the whole weight matrix, there are two smaller weight matrices"
From: State of AI in 2026: LLMs, Coding, Scaling Laws, China, Agents, GPUs, AGI | Lex Fridman Podcast #490 • 2:43:31 • Jan 2026
Attribution: Sebastian explains LoRA adapters as a technique people use for model customization.