<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Ollama - Tag - Simi Studio</title>
        <link>/en/tags/ollama/</link>
        <description>Ollama - Tag - Simi Studio</description>
        <generator>Hugo -- gohugo.io</generator>
        <language>en</language>
        <managingEditor>simi@simi.studio (Simi)</managingEditor>
        <webMaster>simi@simi.studio (Simi)</webMaster>
        <lastBuildDate>Wed, 24 Dec 2025 10:45:00 &#43;0800</lastBuildDate>
        <atom:link href="/en/tags/ollama/" rel="self" type="application/rss+xml" />
<item>
    <title>Fine-tuning LLMs Locally: Ollama &#43; Unsloth in Practice</title>
    <link>/en/posts/local-llm-finetuning-guide/</link>
    <pubDate>Wed, 24 Dec 2025 10:45:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/en/posts/local-llm-finetuning-guide/</guid>
    <description><![CDATA[Open-source fine-tuning tools have matured. Fine-tuning a small model locally on a consumer GPU is now feasible. This is practical experience: which tools to use, how to prepare data, common pitfalls, and when fine-tuning is worth it.]]></description>
</item>
<item>
    <title>Ollama in Practice: Running GPT-Level Models on Your Mac</title>
    <link>/en/posts/ollama-local-llm-guide/</link>
    <pubDate>Sat, 20 Apr 2024 10:00:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/en/posts/ollama-local-llm-guide/</guid>
    <description><![CDATA[Ollama makes running LLMs locally trivial: one command starts a model, and a 7B model runs smoothly on a Mac. This is a practical guide covering setup, real performance numbers, and when to use a local model vs. an API.]]></description>
</item>
</channel>
</rss>
