LoRA (Low‑Rank Adaptation)

A parameter‑efficient fine‑tuning method that freezes the pretrained weights and injects small trainable low‑rank adapter matrices into selected layers, drastically reducing the number of trainable parameters and the memory and cost of fine‑tuning while retaining strong task performance.
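
The idea can be sketched as follows: for a frozen weight matrix, the adapter learns a low‑rank update `B A` (rank `r` much smaller than the layer dimensions) that is added to the layer's output. Below is a minimal, illustrative sketch in PyTorch; the class name `LoRALinear` and the default values `r=8`, `alpha=16.0` are assumptions for demonstration, not a reference implementation.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch: frozen linear layer plus a trainable low-rank update,
    y = W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # Freeze the pretrained weights; only the adapter is trained.
        for p in self.base.parameters():
            p.requires_grad = False
        # Low-rank factors: A projects down to rank r, B projects back up.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        # B starts at zero so the adapted layer initially matches the frozen one.
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the scaled low-rank adapter path.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)

# Usage sketch: wrap an existing layer; only the adapter matrices get gradients.
layer = LoRALinear(nn.Linear(768, 768), r=8, alpha=16.0)
out = layer(torch.randn(4, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(out.shape, trainable)  # torch.Size([4, 768]) 12288
```

With 768‑dimensional inputs and outputs, the adapter adds only 2 × 768 × 8 ≈ 12k trainable parameters versus roughly 590k in the frozen base layer, which is where the parameter and memory savings come from.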