<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Protection - Tags - Simi Studio</title>
        <link>/tags/%E9%98%B2%E6%8A%A4/</link>
        <description>Protection - Tags - Simi Studio</description>
        <generator>Hugo -- gohugo.io</generator><language>zh-CN</language><managingEditor>simi@simi.studio (Simi)</managingEditor>
            <webMaster>simi@simi.studio (Simi)</webMaster><lastBuildDate>Fri, 03 Apr 2026 10:30:00 &#43;0800</lastBuildDate><atom:link href="/tags/%E9%98%B2%E6%8A%A4/" rel="self" type="application/rss+xml" /><item>
    <title>AI Agent Security Red Lines: My Production Environment Checklist</title>
    <link>/posts/practical-ai-security/</link>
    <pubDate>Fri, 03 Apr 2026 10:30:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/posts/practical-ai-security/</guid>
    <description><![CDATA[AI agents can operate on files, execute commands, and access APIs. Without proper controls, these capabilities become security risks. This article provides a practical security checklist.]]></description>
</item>
<item>
    <title>LLM Security Red Lines: Practical Prompt Injection Defense</title>
    <link>/posts/llm-security-best-practices/</link>
    <pubDate>Fri, 26 Dec 2025 14:36:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/posts/llm-security-best-practices/</guid>
    <description><![CDATA[Prompt Injection is the biggest security risk for LLM applications. This article explains clearly how to defend against it.]]></description>
</item>
</channel>
</rss>
