<rss xmlns:atom="http://www.w3.org/2005/Atom" version="2.0">
    <channel>
        <title>Context Window - Tag - Simi Studio</title>
        <link>/en/tags/context-window/</link>
        <description>Context Window - Tag - Simi Studio</description>
        <generator>Hugo -- gohugo.io</generator><language>en</language><managingEditor>simi@simi.studio (Simi)</managingEditor>
            <webMaster>simi@simi.studio (Simi)</webMaster><lastBuildDate>Thu, 05 Mar 2026 10:00:00 &#43;0800</lastBuildDate><atom:link href="/en/tags/context-window/" rel="self" type="application/rss+xml" /><item>
    <title>GPT-5.4&#39;s Million-Token Context: Finally, No More Truncation</title>
    <link>/en/posts/gpt-5-4-million-token-context/</link>
    <pubDate>Thu, 05 Mar 2026 10:00:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/en/posts/gpt-5-4-million-token-context/</guid>
    <description><![CDATA[On March 5, 2026, OpenAI released GPT-5.4 with a million-token context window by default and Mid-response Steerability: a substantial experience upgrade for developers processing long documents or large codebases.]]></description>
</item>
<item>
    <title>LLM Context Window Race: A Marathon With No Finish Line</title>
    <link>/en/posts/llm-context-window-arms-race/</link>
    <pubDate>Sun, 15 Oct 2023 10:00:00 &#43;0800</pubDate>
    <author>simi@simi.studio (Simi)</author>
    <guid>/en/posts/llm-context-window-arms-race/</guid>
    <description><![CDATA[In mid-2023, Claude pushed its context window to 100k tokens while GPT-4 sat at 8k/32k, and the race toward ever-larger windows was on. Context window size became the headline metric for judging models. This article explains why it matters and whether you can actually use 100k tokens in practice.]]></description>
</item>
</channel>
</rss>
