<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hugging Face | Stefano Blando</title><link>https://stefano-blando.github.io/en/tags/hugging-face/</link><atom:link href="https://stefano-blando.github.io/en/tags/hugging-face/index.xml" rel="self" type="application/rss+xml"/><description>Hugging Face</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-US</language><lastBuildDate>Wed, 20 Nov 2024 00:00:00 +0000</lastBuildDate><image><url>https://stefano-blando.github.io/media/icon_hu_8d0dee6c10a3c598.png</url><title>Hugging Face</title><link>https://stefano-blando.github.io/en/tags/hugging-face/</link></image><item><title>Lightweight Fine-Tuning with PEFT &amp; LoRA</title><link>https://stefano-blando.github.io/en/projects/peft-finetuning/</link><pubDate>Wed, 20 Nov 2024 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/peft-finetuning/</guid><description>&lt;p&gt;This project focuses on a practical question in NLP: how much useful adaptation can you obtain from a pretrained model without paying the full cost of fine-tuning everything?&lt;/p&gt;
&lt;p&gt;Using &lt;strong&gt;LoRA&lt;/strong&gt; on &lt;code&gt;distilbert-base-uncased&lt;/code&gt; for sentiment analysis, the pipeline shows that training only a small fraction of the model's parameters can still deliver a strong performance jump over the zero-shot baseline. That makes the project less about squeezing out maximum benchmark accuracy and more about understanding the trade-off between performance and efficiency.&lt;/p&gt;
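&lt;p&gt;As a rough illustration of this setup, the sketch below attaches a LoRA adapter to &lt;code&gt;distilbert-base-uncased&lt;/code&gt; with the Hugging Face &lt;code&gt;peft&lt;/code&gt; library; the rank, scaling, dropout, and target modules are assumed values chosen for illustration, not necessarily the project's exact configuration.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# Minimal sketch (assumed hyperparameters): wrap distilbert-base-uncased
# with a LoRA adapter for sequence classification using Hugging Face PEFT.
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)

lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,          # sentiment analysis as sequence classification
    r=8,                                 # low-rank dimension (assumed)
    lora_alpha=16,                       # scaling factor (assumed)
    lora_dropout=0.1,
    target_modules=["q_lin", "v_lin"],   # DistilBERT attention projections
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()       # reports the small trainable fraction
&lt;/code&gt;&lt;/pre&gt;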
&lt;p&gt;Built with the &lt;strong&gt;Hugging Face&lt;/strong&gt; ecosystem, the implementation covers evaluation, LoRA configuration, training, and inference in a lightweight setup that remains accessible on modest hardware.&lt;/p&gt;</description></item></channel></rss>