PromptBurger
Treating prompts as first-class engineering artifacts
The Problem
I kept making the same mistake. In the middle of a Claude Code session, I'd fire off a prompt missing half its constraints. Claude would fill in the gaps with assumptions, build the wrong thing, and I'd spend time reverting and re-explaining.
I know the prompt engineering principles from Anthropic's and Google's handbooks. I still forgot to apply them when deep in a coding flow. The other recurring issue: Claude starts implementing immediately, even when a request has ambiguities baked in. I found myself appending "ask me clarifying questions before you start" to every other prompt. That's a workflow problem, not a discipline problem.
The Bet
Prompt tools already existed, but they focused on polishing a prompt you'd already written. I was looking at a different layer: what happens before the prompt is finalized. The questions you didn't think to ask yourself. The constraints that feel obvious in your head but are absent from the text.
PromptBurger treats prompts as deployable artifacts that deserve the same tooling we give code: structured composition, linting (via AI-generated suggestions), version history with diff comparison, and usage observability. A small amount of friction before sending a prompt to Claude Code eliminates larger friction downstream.
Designing the Meta-Prompt
At the core is a system prompt that teaches Claude how to write better prompts. I started by distilling principles from Anthropic's and Google's prompt engineering research: specificity over vagueness, constraints before content, outcome orientation, explicit acceptance criteria.
Then iteration. Early versions produced outputs that front-loaded implementation suggestions, essentially answering the question before it was asked. The fix was a principle that explicitly blocks premature implementation and redirects that energy toward producing better questions instead.
The meta-prompt also handles refinement. When you iterate on an existing prompt, PromptBurger sends the previous generation as context. The system treats this as a refinement task: preserve what's working, incorporate changes, surface new gaps. This meant encoding context-awareness into the prompt's instructions, not just the application logic.
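A minimal sketch of what that looks like on the request-building side; the field names here are hypothetical stand-ins, not PromptBurger's actual shapes:

```ts
// Hypothetical request shape; the real store carries more state.
interface GenerationRequest {
  context: string;
  task: string;
  constraints: string;
  previousPrompt?: string; // the last generated prompt, present when iterating
}

// Prepend the prior generation so the meta-prompt can treat the request
// as a refinement (preserve what works, incorporate changes, surface gaps)
// rather than a from-scratch generation.
function buildUserMessage(req: GenerationRequest): string {
  const parts = [
    `Context: ${req.context}`,
    `Task: ${req.task}`,
    `Constraints: ${req.constraints}`,
  ];
  if (req.previousPrompt) {
    parts.unshift(`Previous version of this prompt:\n${req.previousPrompt}`);
  }
  return parts.join("\n\n");
}
```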
Thirteen principles total. Each earned its place through a failure mode observed during testing. Designing the meta-prompt drew on product thinking (what should the output contract look like?), UX judgment (how do structured inputs map to structured outputs?), and engineering instinct (how do you enforce a reliable separator between the prompt and its suggestions so the frontend can parse them into distinct UI regions?).
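That last question has a concrete answer in code. A sketch of the separator approach, assuming a hypothetical delimiter token the meta-prompt instructs Claude to emit between the two regions:

```ts
// Hypothetical delimiter; the actual token the meta-prompt enforces may differ.
const SEPARATOR = "===CHEFS_NOTES===";

interface ParsedGeneration {
  prompt: string;        // copy-ready prompt for the canvas
  suggestions: string[]; // clarifying questions and recommendations
}

function parseGeneration(raw: string): ParsedGeneration {
  const [promptPart, notesPart = ""] = raw.split(SEPARATOR);
  return {
    prompt: promptPart.trim(),
    suggestions: notesPart
      .split("\n")
      .map((line) => line.replace(/^[-*]\s*/, "").trim())
      .filter(Boolean), // drop blank lines
  };
}
```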
Suggestions as Prompt Linting
The suggestions panel became the most valuable feature. It functions like a linter at the bottom of an IDE, catching problems before they compound downstream.
After each generation, PromptBurger displays clarifying questions and recommendations in a collapsible "Chef's Notes" panel, separated from the copy-ready prompt. Typical output:
- "Do you want to update the theme across the entire app, or just these specific elements?"
- "Consider adding an example of the tone you're looking for."
- "The constraint about mobile responsiveness could be more specific about breakpoints."
These are generated in response to your specific inputs, targeting the gaps in what you actually wrote.
The UX decision that made this work: each suggestion has an "Add to Field" action that transfers it directly into the appropriate input (context, task, constraints, or examples). Suggestions become inputs, which produce a better prompt, which surfaces more refined suggestions. The interaction pattern draws from how linters offer auto-fix: surface the problem, offer the resolution, one click to apply.
Early versions embedded suggestions inline with the prompt, so copying the prompt also copied the questions. Separating them structurally (in both the UI and the data model, as distinct fields in the Zustand store) was a small architectural decision with a large UX payoff.
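A sketch of that separation in store code, with the "Add to Field" action routing a suggestion into a named input. The shape is illustrative, not PromptBurger's actual store:

```ts
import { create } from "zustand";

type FieldName = "context" | "task" | "constraints" | "examples";

interface PromptState {
  inputs: Record<FieldName, string>;
  prompt: string;        // copy-ready output, distinct from...
  suggestions: string[]; // ...the Chef's Notes, so copying never mixes them
  addSuggestionToField: (suggestion: string, field: FieldName) => void;
}

// Hypothetical store; the real one likely also carries history,
// settings, and streaming status.
export const usePromptStore = create<PromptState>()((set) => ({
  inputs: { context: "", task: "", constraints: "", examples: "" },
  prompt: "",
  suggestions: [],
  addSuggestionToField: (suggestion, field) =>
    set((state) => ({
      inputs: {
        ...state.inputs,
        // Append rather than overwrite, so the suggestion augments
        // whatever the user already wrote in that field.
        [field]: [state.inputs[field], suggestion].filter(Boolean).join("\n"),
      },
      // Remove the applied suggestion from the panel.
      suggestions: state.suggestions.filter((s) => s !== suggestion),
    })),
}));
```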
Building with Claude Code
The entire build took approximately one week, working solo with Claude Code as the primary development collaborator. This went beyond "write this component" delegation. Claude participated in refining the meta-prompt's principles, prototyping the output contract format, and working through the browser-side streaming architecture with the Anthropic SDK.
The playbook I now use across all my builds (structured product artifacts, agent definitions, compound engineering protocols) was taking shape alongside PromptBurger. Patterns that emerged here, like encoding refinement-awareness into system prompts and structurally separating AI output into parseable regions, became conventions I carried into subsequent projects.
Partway through development, PromptBurger became stable enough to use on itself. When implementing the suggestions panel, my initial prompt to Claude Code led to several wrong directions. I ran the same request through PromptBurger. The suggestions helped me specify what I actually wanted (collapsible panel, individual "Add to Field" actions, field-targeting transfers), and the resulting prompt produced the right implementation on the first pass. Using the tool to build the tool validated the core thesis.
What Shipped
Architecture: Entirely client-side. No backend, no accounts, no data retention. Users bring their own Anthropic API key (stored in localStorage), and all communication with Claude happens directly via the SDK's streaming interface in the browser. Zero infrastructure to maintain, complete privacy.
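A minimal sketch of that browser-side flow with the Anthropic SDK; the storage key and model id are illustrative:

```ts
import Anthropic from "@anthropic-ai/sdk";

declare const META_PROMPT: string; // the thirteen-principle system prompt (not shown)

// The user's key never leaves the browser except to reach Anthropic directly.
const client = new Anthropic({
  apiKey: localStorage.getItem("promptburger-api-key") ?? "",
  // Required for client-side use; acceptable here because the key is the
  // user's own and there is no server to proxy through.
  dangerouslyAllowBrowser: true,
});

async function streamPrompt(userMessage: string, onDelta: (text: string) => void) {
  const stream = client.messages.stream({
    model: "claude-sonnet-4-5", // user-selectable: Sonnet, Haiku, or Opus
    max_tokens: 2048,
    system: META_PROMPT,
    messages: [{ role: "user", content: userMessage }],
  });
  stream.on("text", onDelta);   // append deltas to the canvas as they arrive
  return stream.finalMessage(); // resolves with usage data for token/cost tracking
}
```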
The burger metaphor as information architecture. "The Bun" (context), "The Patty" (core task), "Special Instructions" (constraints), "Chef's Notes" (suggestions). The metaphor communicates input hierarchy without requiring documentation.
Key features:
- Real-time streaming UI with the browser-side Anthropic SDK
- Iterative refinement loop where AI suggestions feed directly back into input fields
- Prompt history with version grouping by lineage and line-level diff comparison (sketched after this list)
- Per-generation token count, cost estimate, and latency tracking
- Demo mode with simulated streaming for evaluation without API credentials
- Model selection across Sonnet, Haiku, and Opus
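The line-level diff behind history comparison is standard fare; a sketch of the LCS walk it implies, though PromptBurger may well lean on a library here:

```ts
type DiffOp = { kind: "same" | "added" | "removed"; line: string };

// Classic LCS-based line diff between two prompt versions.
function diffLines(a: string[], b: string[]): DiffOp[] {
  const m = a.length, n = b.length;
  // lcs[i][j] = length of the longest common subsequence of a[i..] and b[j..]
  const lcs = Array.from({ length: m + 1 }, () => new Array<number>(n + 1).fill(0));
  for (let i = m - 1; i >= 0; i--) {
    for (let j = n - 1; j >= 0; j--) {
      lcs[i][j] = a[i] === b[j]
        ? lcs[i + 1][j + 1] + 1
        : Math.max(lcs[i + 1][j], lcs[i][j + 1]);
    }
  }
  // Walk the table, emitting kept, removed, and added lines in order.
  const ops: DiffOp[] = [];
  let i = 0, j = 0;
  while (i < m && j < n) {
    if (a[i] === b[j]) { ops.push({ kind: "same", line: a[i] }); i++; j++; }
    else if (lcs[i + 1][j] >= lcs[i][j + 1]) { ops.push({ kind: "removed", line: a[i] }); i++; }
    else { ops.push({ kind: "added", line: b[j] }); j++; }
  }
  while (i < m) ops.push({ kind: "removed", line: a[i++] });
  while (j < n) ops.push({ kind: "added", line: b[j++] });
  return ops;
}
```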
Selective persistence: User inputs and settings survive page reloads (the user's work). Canvas content is ephemeral (regenerable). The distinction reflects a deliberate stance on what belongs to the user versus what belongs to the session.
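Zustand's persist middleware expresses this split directly; a sketch assuming a store shape like the one above, where partialize keeps only the user's work:

```ts
import { create } from "zustand";
import { persist } from "zustand/middleware";

interface AppState {
  inputs: { context: string; task: string; constraints: string; examples: string };
  settings: { model: string };
  canvas: string; // streamed generation; regenerable, so never persisted
  setCanvas: (text: string) => void;
}

// Hypothetical store; the point is the partialize split, not the fields.
export const useAppStore = create<AppState>()(
  persist(
    (set) => ({
      inputs: { context: "", task: "", constraints: "", examples: "" },
      settings: { model: "claude-sonnet-4-5" },
      canvas: "",
      setCanvas: (text) => set({ canvas: text }),
    }),
    {
      name: "promptburger", // localStorage key (illustrative)
      // Persist the user's work and settings; leave the canvas ephemeral.
      partialize: (state) => ({ inputs: state.inputs, settings: state.settings }),
    }
  )
);
```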
Stack: React 19, TypeScript, Zustand 5, Tailwind CSS 4, Anthropic SDK, Vite 7