<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Projects | Stefano Blando</title><link>https://stefano-blando.github.io/en/projects/</link><atom:link href="https://stefano-blando.github.io/en/projects/index.xml" rel="self" type="application/rss+xml"/><description>Projects</description><generator>HugoBlox Kit (https://hugoblox.com)</generator><language>en-US</language><lastBuildDate>Sun, 19 May 2024 00:00:00 +0000</lastBuildDate><image><url>https://stefano-blando.github.io/media/icon_hu_8d0dee6c10a3c598.png</url><title>Projects</title><link>https://stefano-blando.github.io/en/projects/</link></image><item><title>Multi-Agent Orchestration</title><link>https://stefano-blando.github.io/en/projects/multi-agent-orchestration/</link><pubDate>Wed, 18 Mar 2026 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/multi-agent-orchestration/</guid><description>&lt;p&gt;This project can be read as a system for &lt;strong&gt;real-time economic coordination&lt;/strong&gt; under changing phases, partial information, and hard timing constraints.&lt;/p&gt;
&lt;p&gt;The system is structured as an &lt;strong&gt;event-driven multi-agent orchestration layer&lt;/strong&gt;. It listens to the game through SSE events, keeps a runtime representation of the environment, and switches strategy depending on the current phase of play: policy update, procurement auction, inventory reconciliation, and fulfillment.&lt;/p&gt;
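&lt;p&gt;The phase-dispatch pattern can be sketched in a few lines. Everything below (event names, payload fields, handler logic) is an illustrative stand-in, not the project's actual API:&lt;/p&gt;

```python
# Minimal sketch of the phase-dispatch pattern described above.
# Event names, fields, and handlers are illustrative placeholders,
# not the project's actual API.

def handle_policy(state, event):
    # Update pricing-policy parameters from the event payload.
    state["policy"] = event["payload"]
    return "policy-updated"

def handle_auction(state, event):
    # Record a procurement bid, keyed by lot id.
    state.setdefault("bids", {})[event["payload"]["lot"]] = event["payload"]["price"]
    return "bid-recorded"

def handle_inventory(state, event):
    # Reconcile on-hand stock against the reported quantity.
    state["stock"] = event["payload"]["on_hand"]
    return "stock-reconciled"

HANDLERS = {
    "policy_update": handle_policy,
    "auction": handle_auction,
    "inventory": handle_inventory,
}

def dispatch(state, events):
    """Route each incoming event to the handler for its phase."""
    log = []
    for event in events:
        handler = HANDLERS.get(event["phase"])
        if handler is not None:
            log.append(handler(state, event))
    return log
```

&lt;p&gt;The real orchestrator consumes these events from an SSE stream and keeps much richer state, but the routing skeleton follows this shape.&lt;/p&gt;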
&lt;p&gt;What makes the project interesting is not just the use of multiple agents, but the way orchestration is tied to operational robustness. Different capability sets are activated at different stages, while local persistence, metrics, and replay analysis help make decisions more stable under noisy runtime conditions.&lt;/p&gt;
&lt;p&gt;In practice, it is a good example of a project where agentic design had to be grounded in execution reliability rather than in generic prompting alone. The deeper pattern is &lt;strong&gt;auction-based coordination with inventory constraints and demand handling&lt;/strong&gt;.&lt;/p&gt;
&lt;h2 id="interactive-explorer"&gt;Interactive Explorer&lt;/h2&gt;
&lt;p&gt;The interactive app focuses on the engineering core of the project: the live event loop, shared runtime state, stylized agent interaction, phase-specific agents, capability partitioning, bounded memory, and KPI replay.&lt;/p&gt;
&lt;p&gt;It is intentionally presented as a &lt;strong&gt;runtime architecture explorer&lt;/strong&gt;, not as a fake benchmark dashboard. The goal is to make the orchestration logic legible in the same visual language used for the Island Model and PFSE apps, with emphasis on coordination, auctions, and fulfillment.&lt;/p&gt;
</description></item><item><title>RiskSentinel - Agentic Systemic Risk Simulator</title><link>https://stefano-blando.github.io/en/projects/risk-sentinel/</link><pubDate>Sun, 15 Mar 2026 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/risk-sentinel/</guid><description>&lt;p&gt;RiskSentinel is an &lt;strong&gt;agentic systemic risk simulator&lt;/strong&gt; built around a question I find both practical and conceptually interesting: if a large financial node is hit by a shock, how can we make the propagation of that stress visible, comparable, and explainable in real time?&lt;/p&gt;
&lt;p&gt;Built for the &lt;strong&gt;Microsoft AI Dev Days Hackathon 2026&lt;/strong&gt;, the project combines three contagion models, topology-aware analytics, and a bounded multi-agent workflow on top of research-grade market network data. The purpose is not only to simulate cascades, but to make them easier to inspect through natural-language interaction, interactive visualization, and model comparison.&lt;/p&gt;
&lt;p&gt;What makes the project work is the combination of several layers that are often separated: a proper network simulation engine, an interface that makes the results explorable, and an agentic layer that helps interpret what is happening without pretending to replace the underlying quantitative machinery.&lt;/p&gt;
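&lt;p&gt;To make the idea of cascade simulation concrete, here is one stylized contagion mechanism, a simple threshold cascade on a toy network. The project's three contagion models and its market network data are considerably richer:&lt;/p&gt;

```python
# Stylized threshold-contagion sketch: a node defaults once the
# fraction of its defaulted neighbours exceeds its threshold.
# The adjacency lists and thresholds are toy values, not the
# project's actual network data or contagion models.

def cascade(adjacency, thresholds, seed_nodes):
    """Propagate a shock from seed_nodes until no new defaults occur."""
    defaulted = set(seed_nodes)
    changed = True
    while changed:
        changed = False
        for node, neighbours in adjacency.items():
            if node in defaulted or not neighbours:
                continue
            hit = sum(1 for n in neighbours if n in defaulted)
            if hit / len(neighbours) > thresholds[node]:
                defaulted.add(node)
                changed = True
    return defaulted
```

&lt;p&gt;Running this with different thresholds already shows the qualitative behaviour the simulator makes explorable: small changes in node fragility flip the system between contained and full-network cascades.&lt;/p&gt;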
&lt;p&gt;The result is a practical prototype for &lt;strong&gt;systemic risk monitoring&lt;/strong&gt;, sitting between quantitative finance, complex systems, and AI-assisted decision support.&lt;/p&gt;
</description></item><item><title>Island Model + MultiVeStA — Statistical Model Checking of Economic Growth</title><link>https://stefano-blando.github.io/en/projects/island-model-smc/</link><pubDate>Fri, 06 Mar 2026 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/island-model-smc/</guid><description>&lt;p&gt;This project reproduces and extends the &lt;strong&gt;Fagiolo &amp;amp; Dosi (2003) Island Model&lt;/strong&gt; — a landmark agent-based model of endogenous economic growth — using &lt;strong&gt;MultiVeStA&lt;/strong&gt;, a tool for sequential statistical model checking of stochastic systems.&lt;/p&gt;
&lt;p&gt;The paper was accepted at &lt;strong&gt;MARS @ ETAPS 2026&lt;/strong&gt; (Workshop on Models for Formal Analysis of Real Systems, European Joint Conferences on Theory and Practice of Software).&lt;/p&gt;
&lt;p&gt;&lt;em&gt;Work with Giorgio Fagiolo, Daniele Giachini, Andrea Vandin, and Ernest Ivanaj (Scuola Superiore Sant&amp;rsquo;Anna).&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="what-the-model-does"&gt;What the Model Does&lt;/h2&gt;
&lt;p&gt;The Island Model captures endogenous growth through the interaction of three types of heterogeneous agents operating on a fitness landscape:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Miners&lt;/strong&gt; exploit their current productive niche, accumulating skills over time&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Imitators&lt;/strong&gt; copy the most successful visible agent, diffusing knowledge across the economy with probability φ&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Explorers&lt;/strong&gt; search for new islands at random, driving innovation and preventing lock-in&lt;/li&gt;
&lt;/ul&gt;
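&lt;p&gt;A deliberately stripped-down sketch of one update step for the three agent types (toy positions and payoffs, seeded randomness; the actual model's fitness landscape and skill dynamics are far richer):&lt;/p&gt;

```python
import random

# Highly simplified one-step sketch of the three agent types.
# Positions and payoffs are toy placeholders; the real model has
# a fitness landscape, skill accumulation, and island discovery.

def step(agents, payoff, phi, eps, rng):
    """agents: list of (kind, position); payoff: position-to-output map."""
    best_pos = max(payoff, key=payoff.get)
    new_agents = []
    for kind, pos in agents:
        if kind == "miner":
            new_agents.append((kind, pos))           # keep exploiting
        elif kind == "imitator":
            if rng.random() > 1 - phi:               # copy best with prob phi
                new_agents.append((kind, best_pos))
            else:
                new_agents.append((kind, pos))
        else:                                        # explorer
            if rng.random() > 1 - eps:               # jump with prob eps
                new_agents.append((kind, rng.choice(list(payoff))))
            else:
                new_agents.append((kind, pos))
    return new_agents
```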
&lt;p&gt;The key insight is that the tension between exploitation and exploration — parameterised by ε — generates self-sustaining growth dynamics without requiring exogenous technological progress.&lt;/p&gt;
&lt;h2 id="what-multivesta-adds"&gt;What MultiVeStA Adds&lt;/h2&gt;
&lt;p&gt;Standard Monte Carlo analysis uses a fixed number of simulation runs with no formal guarantee on estimation quality. MultiVeStA applies &lt;strong&gt;sequential statistical model checking&lt;/strong&gt;: it runs simulations adaptively, stopping only when the 95% confidence interval on E[logGDP] is narrower than δ=0.05 at every time step. This provides a formal precision guarantee — the sample size is determined by the data variance, not by the experimenter.&lt;/p&gt;
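&lt;p&gt;The stopping rule can be illustrated with a toy sequential estimator, where a random draw stands in for one simulation run:&lt;/p&gt;

```python
import math
import random
import statistics

# Sketch of the sequential stopping rule described above: keep
# drawing replications until the 95 percent confidence interval
# on the mean is narrower than delta. The run() callable stands
# in for one full model simulation.

def sequential_mean(run, delta, min_runs=30, max_runs=100000):
    """Adaptively sample run() until the CI half-width drops below delta."""
    samples = [run() for _ in range(min_runs)]
    while True:
        mean = statistics.fmean(samples)
        half_width = 1.96 * statistics.stdev(samples) / math.sqrt(len(samples))
        if delta > half_width or len(samples) >= max_runs:
            return mean, half_width, len(samples)
        samples.append(run())
```

&lt;p&gt;The key property is visible directly: the final sample size depends on the variance of the draws, not on a number chosen in advance by the experimenter.&lt;/p&gt;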
&lt;p&gt;Our analysis confirms the &lt;strong&gt;optimality of moderate exploration&lt;/strong&gt; (ε ≈ 0.1), reproduces all stylized facts of the original model, and establishes through Welch t-test counterfactual analysis that 6 out of 7 pairwise parameter comparisons yield statistically distinguishable growth trajectories. The single exception (ρ=3.0 vs ρ=5.0) reveals a &lt;strong&gt;saturation effect&lt;/strong&gt; in knowledge locality.&lt;/p&gt;
&lt;h2 id="interactive-explorer"&gt;Interactive Explorer&lt;/h2&gt;
&lt;p&gt;The interactive app lets you explore the model live: watch agents move across the technology landscape, observe imitation cascades and exploration events, run MultiVeStA sensitivity analysis on α, φ, and ρ, and see how the sequential sampling converges.&lt;/p&gt;
</description></item><item><title>Robust Portfolio Optimization under Systematic Market Disruptions (PFSE)</title><link>https://stefano-blando.github.io/en/projects/robust-portfolio-optimization/</link><pubDate>Fri, 06 Mar 2026 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/robust-portfolio-optimization/</guid><description>&lt;p&gt;This project introduces the &lt;strong&gt;Parallel Factor Space Estimator (PFSE)&lt;/strong&gt;, a hybrid framework for robust covariance estimation that resolves a fundamental trade-off in institutional portfolio management: traditional robust estimators (MCD, Tyler) offer strong statistical guarantees but are computationally infeasible for daily rebalancing of 100+ assets, while efficient methods (Ledoit-Wolf, sample covariance) have zero breakdown point against systematic contamination.&lt;/p&gt;
&lt;p&gt;PFSE exploits a structural insight: during systematic market disruptions — flash crashes, monetary policy shocks, crisis contagion — extreme movements propagate through &lt;strong&gt;common factors&lt;/strong&gt;, not idiosyncratic components. By concentrating robust estimation in a reduced k-dimensional factor space (k=5 versus p=100–1000), PFSE inherits the 25% breakdown point of MCD while achieving a &lt;strong&gt;32× computational speedup&lt;/strong&gt;.&lt;/p&gt;
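&lt;p&gt;The structural idea can be sketched as follows. The robust step below uses a simple trimmed covariance as a stand-in for MCD, so this illustrates the factor-space decomposition, not the PFSE estimator itself:&lt;/p&gt;

```python
import numpy as np

# Structural sketch of the factor-space idea: estimate factors from
# the data, apply a robust estimator only in the small k-dimensional
# factor space, then map the result back to asset space. The robust
# step is a simple trimmed covariance standing in for MCD.

def factor_space_robust_cov(returns, k=5, trim=0.1):
    """returns: (T, p) array. Robustify only the k factor dimensions."""
    X = returns - returns.mean(axis=0)
    # Principal components of the sample covariance.
    cov = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    top = vecs[:, np.argsort(vals)[::-1][:k]]      # (p, k) loadings
    scores = X @ top                               # (T, k) factor scores
    # Trimmed covariance in factor space: drop the most extreme rows.
    dist = np.sum(scores ** 2, axis=1)
    keep = np.argsort(dist)[: int(len(dist) * (1 - trim))]
    factor_cov = np.cov(scores[keep], rowvar=False)
    # Residual (idiosyncratic) variance stays non-robust and diagonal.
    resid = X - scores @ top.T
    return top @ factor_cov @ top.T + np.diag(resid.var(axis=0))
```

&lt;p&gt;The computational gain comes from the same place as in PFSE: the expensive robust step runs in k dimensions instead of p.&lt;/p&gt;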
&lt;p&gt;&lt;em&gt;Work with Alessio Farcomeni (University of Rome Tor Vergata). Submitted to Computational Economics (Springer), March 2026.&lt;/em&gt;&lt;/p&gt;
&lt;h2 id="key-results"&gt;Key Results&lt;/h2&gt;
&lt;p&gt;Validated through three complementary stages (Monte Carlo simulation, historical backtesting, and stress scenarios), with the resulting economic value also quantified:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Monte Carlo (p=100, ε=10% contamination):&lt;/strong&gt; PFSE Sharpe 1.42 vs 0.96 for sample covariance — maintains 97% of clean-data performance while sample covariance degrades 31%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;S&amp;amp;P 500 backtest (2015–2025):&lt;/strong&gt; Out-of-sample Sharpe 1.87 vs 1.63 (+14.7%), max drawdown −24.3% vs −34.1% (−29%) during COVID-19, turnover −42%&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Five stress scenarios:&lt;/strong&gt; PFSE ranked first in all five scenarios, average Sharpe 1.67 vs 1.39 (+20%), lowest performance variability (CoV 0.041 vs 0.064)&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Economic value:&lt;/strong&gt; $72M normal-period + $93M stress-period benefits per $1B portfolio, benefit-cost ratio 31:1&lt;/li&gt;
&lt;/ul&gt;
&lt;h2 id="interactive-explorer"&gt;Interactive Explorer&lt;/h2&gt;
&lt;p&gt;The interactive app lets you explore the full results: run the PFSE estimation algorithm live, compare methods under increasing contamination levels, examine computational scalability, and inspect S&amp;amp;P 500 cumulative returns across all four market regimes.&lt;/p&gt;
</description></item><item><title>PokéNexus</title><link>https://stefano-blando.github.io/en/projects/pokenexus/</link><pubDate>Wed, 04 Feb 2026 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/pokenexus/</guid><description>&lt;p&gt;&lt;strong&gt;PokéNexus&lt;/strong&gt; was born from something very simple: I have loved Pokémon since I was a child, and at some point I wanted to merge that world with the kind of tools and ideas I now enjoy building with.&lt;/p&gt;
&lt;p&gt;The result is a Streamlit app that turns the Pokémon universe into an interactive space where &lt;strong&gt;graph visualization&lt;/strong&gt;, &lt;strong&gt;game mechanics&lt;/strong&gt;, and &lt;strong&gt;API-driven data&lt;/strong&gt; come together. I used &lt;strong&gt;NetworkX&lt;/strong&gt;, &lt;strong&gt;PyVis&lt;/strong&gt;, and &lt;strong&gt;Plotly&lt;/strong&gt; to explore relationships between types and entities, while &lt;strong&gt;PokeAPI&lt;/strong&gt; provides the live data layer for creatures, evolutions, and related information.&lt;/p&gt;
&lt;p&gt;Instead of being just a static graph, the project grew into a small playable system: teams can be managed, items stored, badges collected, and different progression loops layered on top of the visualization. That mix of structure, exploration, and play is exactly what made the project fun to build.&lt;/p&gt;
&lt;p&gt;It is not a research project, and it is not meant to be. It is a personal experiment where a childhood passion meets the way I now think about &lt;strong&gt;networks&lt;/strong&gt;, &lt;strong&gt;interfaces&lt;/strong&gt;, and &lt;strong&gt;interactive Python applications&lt;/strong&gt;.&lt;/p&gt;</description></item><item><title>Network Topology Analysis for Systemic Risk Prediction</title><link>https://stefano-blando.github.io/en/projects/network-crash-prediction/</link><pubDate>Sat, 10 Jan 2026 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/network-crash-prediction/</guid><description>&lt;p&gt;This project corresponds to my CESMA thesis on &lt;strong&gt;systemic risk prediction in U.S. equity markets&lt;/strong&gt;. The underlying question is simple but important: can network topology say something useful about market stress before standard indicators do?&lt;/p&gt;
&lt;p&gt;Using daily data from &lt;strong&gt;210 S&amp;amp;P 500 constituents (2013–2025)&lt;/strong&gt;, the project combines dynamic correlation networks with machine learning models ranging from gradient boosting to &lt;strong&gt;Graph Neural Networks&lt;/strong&gt; such as &lt;strong&gt;GraphSAGE&lt;/strong&gt; and &lt;strong&gt;GAT&lt;/strong&gt;. The goal is not just classification accuracy, but economic usefulness under realistic validation and backtesting constraints.&lt;/p&gt;
&lt;p&gt;The most interesting result is that network-derived signals appear to carry genuine early-warning information, especially around severe and structurally fragile market states. In the strongest configurations, the framework improves both timing and trading performance relative to simpler baselines.&lt;/p&gt;
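&lt;p&gt;As a toy example of a network-derived signal, consider the density of strong edges in a correlation network. The thesis uses much richer topology features and validation, but the construction starts from the same place:&lt;/p&gt;

```python
import numpy as np

# Toy version of one network-derived stress signal: the density of
# strong edges in a correlation network built from return data.
# The threshold is illustrative; the thesis uses richer topology
# features (centrality, GNN embeddings) and proper validation.

def edge_density(returns, threshold=0.5):
    """Fraction of asset pairs whose correlation exceeds threshold."""
    corr = np.corrcoef(returns, rowvar=False)
    p = corr.shape[0]
    upper = corr[np.triu_indices(p, k=1)]
    return float(np.mean(upper > threshold))
```

&lt;p&gt;In stressed regimes correlations rise together, so the network densifies, which is exactly the kind of topological shift the early-warning signals pick up.&lt;/p&gt;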
&lt;p&gt;This project is paired with the related thesis page, where the same work is presented as a publication entry.&lt;/p&gt;</description></item><item><title>NLP &amp; Semantic Network Analysis</title><link>https://stefano-blando.github.io/en/projects/nlp-semantic-network-analysis/</link><pubDate>Sat, 10 Jan 2026 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/nlp-semantic-network-analysis/</guid><description>&lt;p&gt;This project is the technical implementation behind the publication &lt;strong&gt;“A Multi-Method Validation Framework for Large-Scale Multilingual Text Analytics”&lt;/strong&gt; (JADT 2026, in review). It operationalizes the full analytical workflow used in the paper, from data preparation to cross-method validation and result comparison.&lt;/p&gt;
&lt;p&gt;The pipeline combines &lt;strong&gt;R and Python&lt;/strong&gt; modules over a large multilingual review corpus, including: preprocessing and TF-IDF, &lt;strong&gt;LDA topic modeling&lt;/strong&gt;, &lt;strong&gt;LSA and Correspondence Analysis&lt;/strong&gt;, lexicon- and model-based sentiment analysis, clustering, and &lt;strong&gt;co-occurrence network analysis&lt;/strong&gt;. The repository also includes cross-platform validation scripts to compare method outputs and check structural stability across implementations.&lt;/p&gt;
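&lt;p&gt;The co-occurrence step, reduced to its core (tokenization and weighting in the actual pipeline are more involved, and span both R and Python):&lt;/p&gt;

```python
from collections import Counter
from itertools import combinations

# Minimal sketch of the co-occurrence step: count how often token
# pairs appear in the same document. Edge weights like these are
# the raw material for the co-occurrence network analysis.

def cooccurrence_edges(docs, min_count=1):
    """docs: list of token lists. Returns pair-to-count edge weights."""
    counts = Counter()
    for tokens in docs:
        for a, b in combinations(sorted(set(tokens)), 2):
            counts[(a, b)] += 1
    return {pair: c for pair, c in counts.items() if c >= min_count}
```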
&lt;p&gt;The central objective is methodological robustness: verifying which findings remain consistent when methods, model families, and language-specific components vary. In this sense, the project is not a generic NLP demo, but a reproducible research pipeline designed for quantitative validation of text-analytic conclusions.&lt;/p&gt;</description></item><item><title>Advanced Recommender System</title><link>https://stefano-blando.github.io/en/projects/advanced-recommender-system/</link><pubDate>Fri, 20 Jun 2025 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/advanced-recommender-system/</guid><description>&lt;p&gt;This project was developed within the &lt;strong&gt;CESMA Master&amp;rsquo;s program&lt;/strong&gt; in collaboration with &lt;strong&gt;TIM&lt;/strong&gt;. Instead of framing the problem as a standard classification task, the system was designed as a &lt;strong&gt;learning-to-rank&lt;/strong&gt; pipeline for next-best-action recommendation.&lt;/p&gt;
&lt;p&gt;That shift in framing matters because ranking is closer to the actual business decision: not just whether an action is good or bad, but which action should come first for a given user.&lt;/p&gt;
&lt;p&gt;The pipeline combines careful validation, Bayesian optimization, and ensemble ranking strategies. The end result is a substantial improvement over baseline performance on &lt;strong&gt;NDCG@5&lt;/strong&gt;, making the project a solid example of applied machine learning under realistic evaluation constraints.&lt;/p&gt;
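&lt;p&gt;For reference, NDCG@5 in its standard form, with made-up relevance grades for illustration:&lt;/p&gt;

```python
import math

# NDCG@k in its standard form: discounted cumulative gain of the
# ranking, normalized by the ideal (sorted) ranking. Relevance
# grades in the test are invented for illustration.

def dcg_at_k(relevances, k=5):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=5):
    """Ranked relevance list to normalized DCG in [0, 1]."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0
```

&lt;p&gt;Because the metric rewards putting the most relevant action first, it matches the business question far better than plain accuracy does.&lt;/p&gt;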
&lt;p&gt;Performance summary:&lt;/p&gt;
&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th style="text-align: left"&gt;Stage&lt;/th&gt;
&lt;th style="text-align: left"&gt;NDCG@5 Score&lt;/th&gt;
&lt;th style="text-align: left"&gt;Improvement vs Baseline&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;strong&gt;Baseline Model&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;0.5030&lt;/td&gt;
&lt;td style="text-align: left"&gt;&amp;ndash;&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;strong&gt;Best Single Model&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;0.6838&lt;/td&gt;
&lt;td style="text-align: left"&gt;+35.94%&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td style="text-align: left"&gt;&lt;strong&gt;Best Ensemble&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;strong&gt;0.6852&lt;/strong&gt;&lt;/td&gt;
&lt;td style="text-align: left"&gt;&lt;strong&gt;+36.23%&lt;/strong&gt;&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;Overall, it is one of the clearest examples in the portfolio of taking a familiar ML task and reformulating it in a way that is better aligned with the actual decision problem.&lt;/p&gt;</description></item><item><title>Real Estate AI Agent</title><link>https://stefano-blando.github.io/en/projects/real-estate-ai-agent/</link><pubDate>Wed, 05 Mar 2025 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/real-estate-ai-agent/</guid><description>&lt;p&gt;This project explores how an AI agent can be used as a practical layer on top of a more traditional predictive workflow. In this case, the domain is real estate: price estimation, market analysis, and user interaction around property-related questions.&lt;/p&gt;
&lt;p&gt;The project combines valuation models with an LLM-based interface for natural-language interaction and task-level orchestration. The interesting part is not just the agent wrapper itself, but the attempt to connect predictive models with a more usable front-end for exploration and decision support.&lt;/p&gt;</description></item><item><title>Lightweight Fine-Tuning with PEFT &amp; LoRA</title><link>https://stefano-blando.github.io/en/projects/peft-finetuning/</link><pubDate>Wed, 20 Nov 2024 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/peft-finetuning/</guid><description>&lt;p&gt;This project focuses on a practical question in NLP: how much useful adaptation can you obtain from a pretrained model without paying the full cost of fine-tuning everything?&lt;/p&gt;
&lt;p&gt;Using &lt;strong&gt;LoRA&lt;/strong&gt; on &lt;code&gt;distilbert-base-uncased&lt;/code&gt; for sentiment analysis, the pipeline shows that a very small trainable subset of parameters can still deliver a strong performance jump over the zero-shot baseline. That makes the project less about squeezing out maximum benchmark accuracy and more about understanding the trade-off between performance and efficiency.&lt;/p&gt;
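&lt;p&gt;The LoRA idea reduces to one line of linear algebra: learn a low-rank correction BA to a frozen weight matrix, scaled by alpha/r. The shapes below mirror a single projection matrix; the real setup wraps many such matrices through the PEFT library:&lt;/p&gt;

```python
import numpy as np

# The LoRA update in one line of linear algebra: instead of
# updating a frozen weight matrix W directly, learn a low-rank
# correction B A scaled by alpha / r. Shapes mirror one
# DistilBERT-sized projection; values are random placeholders.

d, r, alpha = 768, 8, 16
rng = np.random.default_rng(0)
W = rng.normal(size=(d, d))           # frozen pretrained weights
A = rng.normal(size=(r, d)) * 0.01    # trainable, rank r
B = np.zeros((d, r))                  # trainable, initialized to zero

W_effective = W + (alpha / r) * (B @ A)

full_params = d * d                   # parameters if W were tuned
lora_params = 2 * d * r               # parameters LoRA actually trains
```

&lt;p&gt;With B initialized to zero the model starts exactly at the pretrained weights, and the trainable parameter count drops by a factor of 48 for this matrix, which is where the efficiency gain comes from.&lt;/p&gt;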
&lt;p&gt;Built with the &lt;strong&gt;Hugging Face&lt;/strong&gt; ecosystem, the implementation covers evaluation, LoRA configuration, training, and inference in a lightweight setup that remains accessible on modest hardware.&lt;/p&gt;</description></item><item><title>Gas Network Risk Forecasting</title><link>https://stefano-blando.github.io/en/projects/gas-network-risk-forecasting/</link><pubDate>Fri, 01 Nov 2024 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/gas-network-risk-forecasting/</guid><description>&lt;p&gt;This project was developed for the &lt;strong&gt;Hera Group Hackathon&lt;/strong&gt;, where it earned &lt;strong&gt;2nd place&lt;/strong&gt;. The task was to detect gas leak risk in a setting dominated by imbalance, sparse events, and operational uncertainty.&lt;/p&gt;
&lt;p&gt;The pipeline combines geospatial-temporal feature engineering with synthetic data augmentation through &lt;strong&gt;CTGAN&lt;/strong&gt; and &lt;strong&gt;TimeGAN&lt;/strong&gt;, then uses &lt;strong&gt;SHAP&lt;/strong&gt; to keep the final model interpretable rather than purely predictive.&lt;/p&gt;
&lt;p&gt;What I still like about this project is its balance between pragmatism and method: it is a hackathon project, but it already reflects an approach I use often, namely trying to make difficult prediction problems more robust without giving up explainability.&lt;/p&gt;</description></item><item><title>AI Photo Editor with SAM &amp; SDXL</title><link>https://stefano-blando.github.io/en/projects/ai-photo-editor/</link><pubDate>Sun, 10 Mar 2024 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/ai-photo-editor/</guid><description>&lt;p&gt;This project explores the intersection of &lt;strong&gt;precise computer vision&lt;/strong&gt; and &lt;strong&gt;generative image editing&lt;/strong&gt; by combining &lt;strong&gt;Segment Anything (SAM)&lt;/strong&gt; with &lt;strong&gt;Stable Diffusion XL&lt;/strong&gt;.&lt;/p&gt;
&lt;p&gt;The core idea is straightforward: segmentation provides exact control over what should be changed, while diffusion-based inpainting provides the generative flexibility to actually change it. That makes the system useful not only as a demo, but as a concrete example of how discriminative and generative models can be combined inside the same workflow.&lt;/p&gt;
&lt;p&gt;Built in &lt;strong&gt;Python&lt;/strong&gt; with &lt;strong&gt;PyTorch&lt;/strong&gt;, &lt;strong&gt;Diffusers&lt;/strong&gt;, and &lt;strong&gt;Gradio&lt;/strong&gt;, the project supports interactive masking, object replacement, and background generation while keeping the pipeline lightweight enough to run on consumer hardware with the right optimizations.&lt;/p&gt;</description></item><item><title>Custom Chatbot with RAG</title><link>https://stefano-blando.github.io/en/projects/rag-chatbot/</link><pubDate>Thu, 15 Feb 2024 00:00:00 +0000</pubDate><guid>https://stefano-blando.github.io/en/projects/rag-chatbot/</guid><description>&lt;p&gt;This project uses a deliberately small but structured domain to explore a larger idea: how to make language-model outputs more reliable by grounding them in retrieved context.&lt;/p&gt;
&lt;p&gt;The chatbot is built around a curated dataset of fictional characters and uses a full &lt;strong&gt;RAG&lt;/strong&gt; pipeline with embeddings, retrieval, and prompt conditioning. The dataset is playful, but the methodological point is serious: retrieval changes the behavior of the model from generic completion to context-bounded reasoning.&lt;/p&gt;
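&lt;p&gt;The retrieval step, reduced to a bare-bones sketch with bag-of-words cosine similarity standing in for learned embeddings:&lt;/p&gt;

```python
import math
from collections import Counter

# Bare-bones retrieval step of a RAG pipeline: score documents by
# cosine similarity of bag-of-words vectors, keep the best match,
# and condition the prompt on it. Learned embeddings replace the
# word counts in a real pipeline; the documents here are toys.

def cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a).intersection(b))
    den = math.sqrt(sum(v * v for v in a.values()))
    den = den * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den > 0 else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    context = " ".join(retrieve(query, docs))
    return "Answer using only this context: " + context + "\nQuestion: " + query
```

&lt;p&gt;The prompt-conditioning line is what shifts the model from generic completion to context-bounded answering, which is the methodological point of the project.&lt;/p&gt;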
&lt;p&gt;Because the underlying data is semantically rich, the system can handle not only question answering but also character comparison, recommendation, and trait-based exploration. That makes it a useful compact example of retrieval-driven NLP design.&lt;/p&gt;</description></item></channel></rss>