Spec First, Code Second: Using OpenSpec to Constrain Your AI Coding Assistant
Background: What’s the Endgame of Prompt Engineering?
I’ve been using AI coding assistants for a while now. The biggest realization isn’t that they can’t code—the problem is they deviate too easily.
A typical scenario:
```
Me: Add dark mode please
AI: Sure, let me implement that...
(5 minutes later)
AI: Done! I've made the following changes:
1. Added dark-mode class
2. Modified index.css
3. Added theme.js
```

What actually happened: it modified global styles, broke the light theme, and didn’t persist the user’s theme preference to localStorage.
This isn’t the AI’s fault. My instruction was too vague. “Add dark mode” has too many interpretations—AI coding assistants have to guess.
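To make the ambiguity concrete, here is a hedged sketch of what a complete spec for that request would have pinned down. The `applyTheme` helper and its injected interfaces are my own invention, purely for illustration; the point is that it scopes the class to one root element and persists the choice:

```typescript
type Theme = "light" | "dark";

// Hypothetical interfaces standing in for a DOM element and localStorage,
// injected so the two decisions the vague prompt skipped are explicit.
interface ClassTarget { classList: { toggle(cls: string, on: boolean): void } }
interface ThemeStore { setItem(key: string, value: string): void }

function applyTheme(theme: Theme, root: ClassTarget, storage: ThemeStore): void {
  // Toggle a scoped class instead of rewriting global styles
  root.classList.toggle("dark-mode", theme === "dark");
  // Persist so the preference survives a reload
  storage.setItem("theme", theme);
}
```

In a browser you would call something like `applyTheme(saved, document.documentElement, localStorage)` on load. Every line here is a decision that "add dark mode" left to chance.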
The endgame of prompt engineering isn’t better prompts. It’s structured requirements.
OpenSpec’s Core Idea
OpenSpec’s approach is straightforward: before AI writes any code, have it confirm what we’re building.
The workflow:
```
# 1. Initialize project
openspec init

# 2. Propose a feature
/opsx:propose add-dark-mode

# 3. AI creates this structure
openspec/changes/add-dark-mode/
├── proposal.md   # Why we're doing this
├── specs/        # Requirements and scenarios
├── design.md     # Technical approach
└── tasks.md      # Implementation checklist
```

This is lighter than you think. Not traditional spec documents, not UML diagrams—just Markdown files.
But the point is: before AI writes a single line, both you and the AI have agreed on what’s in those files.
Hands-On Experience
I tested it with a real scenario: adding currency conversion to a React project.
```
You: /opsx:propose add-currency-converter

AI: Created openspec/changes/add-currency-converter/
✓ proposal.md — motivation: users need multi-currency display
✓ specs/ — 5 user stories covering happy path and edge cases
✓ design.md — use useCurrency hook approach, no new state management
✓ tasks.md — broken into 8 checkable items
Ready for implementation!
```

Opening proposal.md:
```
## Why we're doing this
Users in cross-border trading need quick currency conversion.

## Success criteria
- Support USD/EUR/CNY
- Real-time rates (exchangerate-api.com free tier)
- Results rounded to 2 decimal places
```

Then in specs/:
```
## Scenario 1: Basic Conversion
Given user is on trading page
When user selects "100 USD" to convert to "EUR"
Then display "€92.35" (assuming rate 1 USD = 0.9235 EUR)
```

This is the key: not letting AI guess what you want, but telling it what you want in a structured format, then having AI confirm before writing code.
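The design.md above proposes a useCurrency hook; at its core, such a hook would wrap a pure conversion function like the sketch below. The symbol map and function name are my assumptions for illustration, not OpenSpec output:

```typescript
// Hypothetical core of the proposed useCurrency hook: a pure helper
// implementing the proposal's success criteria.
const SYMBOLS: Record<string, string> = { USD: "$", EUR: "€", CNY: "¥" };

function convertCurrency(amount: number, rate: number, target: string): string {
  const symbol = SYMBOLS[target] ?? target;
  // Round results to 2 decimal places, per the success criteria
  return `${symbol}${(amount * rate).toFixed(2)}`;
}

// Scenario 1 from specs/: converting 100 USD at rate 0.9235 yields "€92.35"
```

A pure function like this is what makes Scenario 1 directly checkable against the spec, independent of React state or the live rate API.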
Why This Works Better Than Pure Prompt Engineering
1. Eliminates Context Loss
Traditional prompt engineering’s problem: as conversation history grows, the AI starts forgetting earlier decisions.
OpenSpec’s spec files are persistent. You change design.md, and the AI remembers in the next conversation. No context window dependency.
2. Forces Requirement Clarification
With human coding, you think through what you want before writing. But with AI coding assistants, it’s too easy to just say “help me add this feature” and expect good results.
OpenSpec forces you to write proposal.md first. That process itself is thinking.
3. Auditable Decision History
openspec/changes/ maintains a record of all changes. Every feature has: why it was built, exact specs, technical decisions, implementation checklist.
```
openspec/changes/
├── 2025-01-15-add-dark-mode/
├── 2025-02-20-currency-converter/
└── archive/   # Completed items
```

When product asks “how was this feature designed back then?”, you have the documentation.
Comparison with Traditional Development
| | Traditional Dev | Prompt Engineering | OpenSpec |
|---|---|---|---|
| Requirement confirmation | PRD / spec doc | Prompt description | Structured spec files |
| Change tracking | Git commit messages | Scattered in chat | Each change in its own directory |
| AI deviation risk | Low (human codes) | High | Low (confirmed before action) |
| Context loss | None | Severe (long conversations) | None (file-persisted) |
| Learning curve | High (full dev process) | Low | Low (4 Markdown files) |
Supported Tools
OpenSpec isn’t tied to one specific AI coding assistant. It integrates via slash commands with 25+ AI programming tools:
- Claude Code
- GitHub Copilot
- Cursor
- Windsurf
- And many more
Who Should Use This
OpenSpec fits best when:
- Medium-to-large feature development: not “fix this typo” but multi-file changes
- Team collaboration: spec files can be shared, ensuring everyone (including the AI) agrees
- Long-term maintained projects: structured decision record for future reference
For simple one-off changes, OpenSpec might be overkill. But when you find AI coding assistants frequently “doing things you didn’t intend”, that’s usually not an AI problem—that’s a requirements definition problem.
Quick Start
```
# Requires Node.js 20.19.0+
npm install -g @fission-ai/openspec@latest
cd your-project
openspec init

# Then tell AI: /opsx:propose <what you want to build>
```

Full docs at OpenSpec GitHub.
Conclusion
OpenSpec isn’t another “AI programming framework”. It’s a lightweight constraint layer. Core value: before AI acts, it forces you to clarify requirements and write them down.
That’s more effective than any advanced prompt template. Because good output comes from clear requirements, not fancy wording.