<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Local Deployment - Tag - Simi Studio</title>
        <link>/en/tags/local-deployment/</link>
        <description>Local Deployment - Tag - Simi Studio</description>
        <generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>simi@simi.studio (Simi)</managingEditor>
            <webMaster>simi@simi.studio (Simi)</webMaster><lastBuildDate>Thu, 26 Mar 2026 14:00:00 &#43;0800</lastBuildDate><atom:link href="/en/tags/local-deployment/" rel="self" type="application/rss+xml" /><item>
    <title>Edge AI Deployment: Running LLMs on Your Device</title>
    <link>/en/posts/edge-ai-deployment/</link>
    <pubDate>Thu, 26 Mar 2026 14:00:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/en/posts/edge-ai-deployment/</guid>
    <description><![CDATA[LLMs running locally on your device—no data leaks, faster responses. Is Edge AI technology mature in early 2026? What devices can run it?]]></description>
</item>
<item>
    <title>Fine-tuning LLMs Locally: Ollama &#43; Unsloth in Practice</title>
    <link>/en/posts/local-llm-finetuning-guide/</link>
    <pubDate>Wed, 24 Dec 2025 10:45:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/en/posts/local-llm-finetuning-guide/</guid>
    <description><![CDATA[Open-source fine-tuning tools have matured. Fine-tuning a small model locally on a consumer GPU is now feasible. This is practical experience: which tools to use, data preparation, common pitfalls, and when fine-tuning is worth it.]]></description>
</item>
<item>
    <title>Ollama in Practice: Running GPT-Level Models on Your Mac</title>
    <link>/en/posts/ollama-local-llm-guide/</link>
    <pubDate>Sat, 20 Apr 2024 10:00:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/en/posts/ollama-local-llm-guide/</guid>
    <description><![CDATA[Ollama makes running LLMs locally trivial. One command starts a model, and 7B models run smoothly on a Mac. This is a practical guide covering setup, real performance numbers, and when to use local models vs an API.]]></description>
</item>
<item>
    <title>Running LLMs Locally in 2023: Hardware Configs for Every Budget</title>
    <link>/en/posts/hardware-for-local-ai-2023/</link>
    <pubDate>Sat, 15 Jul 2023 10:00:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/en/posts/hardware-for-local-ai-2023/</guid>
    <description><![CDATA[Running LLMs locally is getting popular, but what hardware should you buy at different budgets? This article provides real benchmark data to help you choose the right configuration. No product promotion, just objective numbers.]]></description>
</item>
</channel>
</rss>
