<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Ollama - Tag - Simi Studio</title>
        <link>/tags/ollama/</link>
        <description>Ollama - Tag - Simi Studio</description>
        <generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>simi@simi.studio (Simi)</managingEditor>
            <webMaster>simi@simi.studio (Simi)</webMaster><lastBuildDate>Wed, 24 Dec 2025 10:45:00 &#43;0800</lastBuildDate><atom:link href="/tags/ollama/" rel="self" type="application/rss+xml" /><item>
    <title>Fine-Tuning LLMs Locally: Hands-On Experience with ollama &#43; unsloth</title>
    <link>/posts/local-llm-finetuning-guide/</link>
    <pubDate>Wed, 24 Dec 2025 10:45:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/posts/local-llm-finetuning-guide/</guid>
    <description><![CDATA[Open-source fine-tuning tools have matured to the point where fine-tuning a small model locally on a consumer-grade GPU is practical. This post shares hands-on experience: which tools to use, how to prepare data, common pitfalls, and which scenarios actually justify fine-tuning.]]></description>
</item>
<item>
    <title>Ollama in Practice: Running GPT-Class Models on a Home Mac</title>
    <link>/posts/ollama-local-llm-guide/</link>
    <pubDate>Sat, 20 Apr 2024 10:00:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/posts/ollama-local-llm-guide/</guid>
    <description><![CDATA[Ollama makes running LLMs locally remarkably simple. A single command starts a model, and 7B models run smoothly on a Mac. This is a hands-on record covering how to use it, when to use it, and real-world performance numbers.]]></description>
</item>
</channel>
</rss>
