Lightweight Fine-Tuning with PEFT & LoRA

Nov 20, 2024 · 1 min read

This project focuses on a practical question in NLP: how much useful adaptation can you obtain from a pretrained model without paying the full cost of fine-tuning everything?

Using LoRA on distilbert-base-uncased for sentiment analysis, the pipeline shows that training only a small fraction of the model's parameters still yields a large accuracy gain over the zero-shot baseline. That makes the project less about squeezing out maximum benchmark accuracy and more about understanding the trade-off between performance and efficiency.
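As a concrete illustration of how small that trainable subset can be, here is a minimal sketch of wrapping DistilBERT in a LoRA adapter with the peft library. The rank, scaling factor, dropout, and target modules below are illustrative assumptions, not the project's exact configuration:

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model, TaskType

# Pretrained backbone with a binary sentiment head (positive/negative).
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

# Illustrative LoRA hyperparameters; q_lin and v_lin are DistilBERT's
# query and value projection layers.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                                # rank of the low-rank update matrices
    lora_alpha=16,                      # scaling factor for the LoRA updates
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],
)

peft_model = get_peft_model(model, lora_config)
peft_model.print_trainable_parameters()
# Typically on the order of 1% of the full model's parameters are trainable.
```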

Built with the Hugging Face ecosystem, the implementation covers evaluation, LoRA configuration, training, and inference in a lightweight setup that remains accessible on modest hardware.
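Continuing from the LoRA-wrapped model above, a minimal end-to-end training and inference sketch could look like the following. The SST-2 dataset, batch size, epoch count, and learning rate are illustrative assumptions rather than the project's exact setup:

```python
import numpy as np
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, Trainer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
dataset = load_dataset("sst2")  # a common sentiment benchmark, used here as an example

def tokenize(batch):
    return tokenizer(
        batch["sentence"], truncation=True, padding="max_length", max_length=128
    )

tokenized = dataset.map(tokenize, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": (preds == labels).mean()}

# Illustrative hyperparameters; LoRA tends to tolerate higher learning
# rates than full fine-tuning.
args = TrainingArguments(
    output_dir="lora-distilbert-sst2",
    per_device_train_batch_size=16,
    num_train_epochs=1,
    learning_rate=2e-4,
)

trainer = Trainer(
    model=peft_model,  # the LoRA-wrapped model from the previous sketch
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    compute_metrics=compute_metrics,
)

trainer.train()
print(trainer.evaluate())

# Quick inference on a hypothetical example sentence.
inputs = tokenizer("A genuinely moving film.", return_tensors="pt")
inputs = {k: v.to(peft_model.device) for k, v in inputs.items()}
with torch.no_grad():
    logits = peft_model(**inputs).logits
print("positive" if logits.argmax(-1).item() == 1 else "negative")
```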
