# AI Photo Editor with SAM & SDXL

*Project page: https://stefano-blando.github.io/en/projects/ai-photo-editor/ · 10 March 2024*

This project explores the intersection of **precise computer vision** and **generative image editing** by combining **Segment Anything (SAM)** with **Stable Diffusion XL**.
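As a sketch of the segmentation half, the snippet below uses Meta's `segment-anything` package to turn a single click into an object mask. The checkpoint file, image path, and click coordinates are illustrative placeholders, not values taken from the project.

```python
# Sketch: point-prompted segmentation with SAM.
# Checkpoint file, image path, and click coordinates are placeholders.
import numpy as np
from PIL import Image
from segment_anything import SamPredictor, sam_model_registry

# vit_b is the smallest official SAM backbone.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("photo.jpg").convert("RGB"))
predictor.set_image(image)

# One foreground click (label 1) on the object to edit.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[512, 384]]),
    point_labels=np.array([1]),
    multimask_output=True,  # SAM proposes three candidate masks
)
mask = masks[scores.argmax()]  # keep the highest-scoring candidate, (H, W) bool
```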
The core idea is straightforward: segmentation pins down exactly what should be changed, while diffusion-based inpainting supplies the generative flexibility to actually change it. That makes the system useful not only as a demo, but as a concrete example of how discriminative and generative models can be combined inside the same workflow.
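Continuing the sketch above, the hand-off to the generative side takes only a few lines with the Diffusers SDXL inpainting pipeline: the boolean SAM mask becomes the `mask_image` argument, where white pixels mark the region to regenerate. The Hub model ID is the public SDXL inpainting checkpoint; the prompt and file names are assumptions for illustration.

```python
# Sketch: feed the SAM mask into SDXL inpainting via Diffusers.
# `mask` is the (H, W) boolean array from the SAM step above.
import numpy as np
import torch
from PIL import Image
from diffusers import StableDiffusionXLInpaintPipeline

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

init_image = Image.open("photo.jpg").convert("RGB").resize((1024, 1024))
# Diffusers treats white pixels as the area to repaint.
mask_image = Image.fromarray(mask.astype(np.uint8) * 255).resize((1024, 1024))

result = pipe(
    prompt="a red vintage sports car",  # illustrative prompt
    image=init_image,
    mask_image=mask_image,
    strength=0.99,       # near-total regeneration inside the mask
    guidance_scale=7.5,
).images[0]
result.save("edited.jpg")
```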
Built in **Python** with **PyTorch**, **Diffusers**, and **Gradio**, the project supports interactive masking, object replacement, and background generation while keeping the pipeline lightweight enough to run on consumer hardware with the right optimizations.
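The "consumer hardware" point plausibly rests on the standard Diffusers memory switches; the sketch below pairs them with a minimal Gradio interface of the kind the post describes. The function name, UI layout, and the simplification of supplying the mask directly are illustrative assumptions; only the Diffusers and Gradio calls themselves are real library APIs.

```python
# Sketch: memory-friendly SDXL inpainting behind a minimal Gradio UI.
# Layout and function names are illustrative assumptions.
import gradio as gr
import torch
from diffusers import StableDiffusionXLInpaintPipeline

pipe = StableDiffusionXLInpaintPipeline.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",
    torch_dtype=torch.float16,
    variant="fp16",
)
# Standard Diffusers memory savers for ~8 GB GPUs; note that
# enable_model_cpu_offload() replaces an explicit .to("cuda").
pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU
pipe.enable_vae_slicing()        # decode the VAE in slices to cap peak VRAM

def inpaint(image, mask, prompt):
    # Gradio hands `image` and `mask` over as PIL images.
    return pipe(prompt=prompt, image=image, mask_image=mask,
                strength=0.99).images[0]

demo = gr.Interface(
    fn=inpaint,
    inputs=[
        gr.Image(type="pil", label="Photo"),
        gr.Image(type="pil", label="Mask (white = edit)"),
        gr.Textbox(label="Prompt"),
    ],
    outputs=gr.Image(label="Result"),
)
demo.launch()
```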