Blueprints — the Next Level After Skills
Six months ago we learned about Agent Skills. Anthropic's SKILL.md format — a package of instructions that turns an AI agent into a specialist for a specific task. Write once, reuse forever.
And we went all in. Wrote dozens of skills. Wiki compilation, post generation, code review, deploy on command. My Claude Code grew capabilities like a Swiss Army knife on steroids — blades and corkscrews everywhere.
Then a friend asked: "Send me your wiki skill, I want the same setup."
I sent it. He opened it. And wrote back: "Dude, this is all wired to your folders, your agents, and your vault structure. This is useless to me."
He was right.
Why Skills Don't Scale
Skills solved a massive problem. Instead of one-shot prompts — repeatable workflows. Instead of "explain to the model every time" — "the model already knows how to do this."
SKILL.md became a standard. Supported by Claude Code, OpenAI Codex, Cursor, Gemini CLI. Vercel launched skills.sh — a skill marketplace. Microsoft integrated it into their Agent Framework. Everything was great.
But there's a catch.
The more powerful the skill — the more tightly it's bound to the author's context.
A simple skill ("generate a commit message following conventional commits") works for everyone. It's atomic, no dependencies.
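Such an atomic skill fits in a single SKILL.md file. A rough sketch, following the convention of YAML frontmatter plus markdown instructions (the skill name and body here are illustrative, not a published skill):

```markdown
---
name: commit-message
description: Generate a commit message following the Conventional Commits spec
---

# Commit Message Generator

Given a staged diff, produce a one-line commit message:

1. Pick a type: feat, fix, docs, refactor, or chore.
2. Add an optional scope in parentheses.
3. Write an imperative summary under 72 characters.

Example output: `fix(parser): handle empty input without crashing`
```

Nothing in it references anyone's folders or agents, which is exactly why it travels well.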
A complex skill ("compile a wiki from raw notes in my vault") — that's a different story. It knows my folders (510 Input/, 540 Research/). It knows my metadata format. It knows which agent does what. Take it out of my context — and it's dead.
It's like a recipe. "Take 200g of flour" — works for everyone. "Take the dough from the third drawer of my fridge" — works only for me.
And here's the paradox: the most valuable skills are the ones you can't share. Because their value lies in being precisely tuned to you.
Three Levels of Knowledge Transfer
What came before skills? What comes after? Line them up and you see a staircase:
Prompts (2023–2024) — one-shot instructions. Copy, paste, get a result. Maximum portability — it's just text. Minimum repeatability — results depend on context, model, phase of the moon. Knowledge transfer: "copy this text."
Prompts solved the entry problem: anyone could start using AI. But they don't scale. The more complex the task, the longer the prompt, the more fragile the result.
Skills (2025–2026) — repeatable workflows. Standard format (SKILL.md), works across Claude Code, Codex, Cursor, Gemini CLI. High repeatability — same process every time. But context portability is low: the best skills are tied to the author's environment. Knowledge transfer: "install this package."
Skills solved the repeatability problem. But they have a ceiling — and I described it above.
Blueprints (2026+) — architectural system descriptions. Full portability: principles and architecture, not tied to specific paths or tools. The result is a working system assembled for a specific user. Task complexity — entire systems and processes. Knowledge transfer: "give this to your agent, it'll build it."
A Blueprint is not a skill. It's a document from which an agent creates skills tailored to a specific user. A Blueprint is not executed directly — it generates skills, configurations, and automations through dialogue between human and agent.
The Key Difference from a Skill
| | Skill | Blueprint |
|---|---|---|
| What it describes | A specific workflow | System architecture |
| Executed directly | Yes, by the agent | No — generates skills and configurations |
| Context-bound | Yes (paths, tools, conventions) | No (principles and patterns) |
| Result of copying | Works if context matches | Always works — it adapts |
A Skill says: "here's how I compile my wiki — scan Input/, write to Wiki/, use this agent."
A Blueprint says: "here's how an LLM-compiled wiki works — sources, compiler, linter, index. Build your own from what you have."
Anatomy of a Blueprint
A Blueprint is a document for two readers. The human reads it to understand the idea and decide "do I want this?" The agent reads it to understand the architecture and help the human build the system. Every section serves both.
Required Sections
1. **When to Use**
Specific situations where this pattern solves a problem. The human reads it and knows whether this is for them or not. The agent uses this section to suggest the Blueprint when it spots a matching situation.

2. **Core Idea**
One or two sentences capturing the essence of the approach, plus the role separation: what the human does, what the AI does. For the agent, this is a contract — what's expected of it.

3. **Architecture**
A visual diagram of components and data flows. Abstract, not tied to specific tools. The agent uses it as a map: which parts of the system need to be created.

4. **Key Principles**
Rules that make the system work. Not "which buttons to press" but "why it's done this way." For the agent, these are constraints — boundaries that can't be violated during adaptation.

5. **Workflow**
Step-by-step processes for typical scenarios. Described abstractly — "sources," "compiler," "index." The agent adapts them to the user's specific folders, tools, and conventions.

6. **Components**
Description of each functional block: inputs, outputs, requirements. The agent uses this as a specification for creating concrete skills and configurations.

7. **How to Apply**
The user's entry point: where to launch the agent, what to tell it, what context is needed. Some Blueprints work in a single context (everything in a vault), others require multiple (vault + code project). This section removes ambiguity: the human reads it and immediately knows where to start. For the agent, it clarifies which working environment it's in and what else it needs access to.
8. **Adaptation Questions**
A list of questions the agent must ask the user before building. This is the key section for the agent-driven approach. Examples:
- "Where do you store raw notes?"
- "What tool do you use for automation?"
- "Flat structure or hierarchy?"
- "Scheduled automation or manual triggers?"
The questions cover every decision point where the architecture allows for variants.
9. **Gotchas and Pitfalls**
Common mistakes, edge cases, non-obvious decisions. For the human — a warning. For the agent — guardrails during implementation.
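Assembled, the required sections might be skeletoned like this. A sketch reusing the wiki example from earlier; every section body is a placeholder, not prescribed content:

```markdown
# Blueprint: LLM-Compiled Wiki

## When to Use
You have a growing pile of raw notes and want a curated, linked wiki.

## Core Idea
The human captures and curates; the agent compiles, links, and indexes.

## Architecture
sources → compiler → linter → index   (abstract components, no concrete paths)

## Key Principles
- The agent only reads sources, never edits them.
- Every compiled page links back to its sources.

## Workflow
1. Scan new material in "sources".
2. Compile into wiki pages and update the index.

## Components
- Compiler: input = raw notes, output = wiki pages, must preserve citations.

## How to Apply
Launch your agent inside the vault and say: "Build this Blueprint for me."

## Adaptation Questions
- Where do you store raw notes?
- Scheduled automation or manual triggers?

## Gotchas and Pitfalls
- Don't let the agent rewrite sources in place.
```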
Optional Sections
**Metadata**
YAML frontmatter with type, date, tags.

**Scaling**
What other domains this pattern applies to. Helps both human and agent see possibilities beyond the specific example.

**Reference Implementation**
One concrete implementation — universal, not tied to the author. A table of "component → what you can implement it with." Shows that the Blueprint is real, not theoretical.

**Output Artifacts**
What exactly the agent should create after assembly: skills, folders, configurations, index files. A verification checklist — is everything in place.
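For the wiki example, a Reference Implementation table could look like this. The tool choices are illustrative assumptions, not part of the Blueprint itself:

```markdown
| Component | What you can implement it with                  |
|-----------|-------------------------------------------------|
| Sources   | A notes folder in an Obsidian vault              |
| Compiler  | A Claude Code skill generated from the Blueprint |
| Linter    | A second skill that checks links and metadata    |
| Index     | A generated index/MOC markdown file              |
```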
How a Blueprint Works in Practice
Creation (author)
Author builds the system for themselves
│
▼
Extracts principles, architecture, pitfalls
│
▼
Formulates adaptation questions
(decision points where context affects implementation)
│
▼
Writes the Blueprint — detached from their own context
│
▼
Publishes
Usage (reader + agent)
Human finds a Blueprint
│
▼
Hands it to the agent: "Build this for me"
│
▼
Agent reads the Blueprint ─────────────────────────┐
        │                                          │
        ▼                                          │
Agent asks questions from the                      │
"Adaptation Questions" section                     │
        │                                          │
        ▼                                          │
Human answers about their context:                 │ Principles
folders, tools, conventions                        │ and constraints
        │                                          │ as guardrails
        ▼                                          │
Agent builds the system:                           │
 ├── creates folders and files                     │
 ├── generates skills for this context             │
 ├── sets up automation                            │
 └── validates against the                         │
     "Gotchas and Pitfalls" section                │
        │                                          │
        ▼                                          │
Personalized working system ◄──────────────────────┘
tailored to this specific user
- Human finds a Blueprint and hands it to the agent: "Build this for me"
- Agent reads the Blueprint — understands the architecture, principles, constraints
- Agent asks questions from the "Adaptation Questions" section
- Human answers about their context: folders, tools, conventions
- Agent builds the system: creates folders, generates skills, sets up automation
- Agent validates the result against "Gotchas and Pitfalls" as guardrails
- The result is a working system, personalized for a specific user
The key shift: the human doesn't read the Blueprint as a manual and doesn't build the system by hand. They hand the Blueprint to the agent and answer questions. The agent does the heavy lifting — adaptation, artifact creation, constraint validation.
A Blueprint is a bridge between "it works for me" and "it'll work for you too." But the bridge isn't crossed on foot — it's crossed together with an agent.
What a Blueprint is NOT
- A tutorial — step-by-step instructions for a specific tool. Tutorials break when the interface updates. A Blueprint describes principles that survive tool changes.
- A template — a file scaffold to fill in. A template is part of an implementation. A Blueprint can generate templates, but it isn't one.
- Tool documentation — how to use a specific product. A Blueprint is tool-agnostic.
- A Prompt / Skill — an executable instruction. A Blueprint isn't executed directly — it's read by an agent that, through dialogue with the human, generates executable artifacts (skills, configurations, structures).
- A system prompt / CLAUDE.md — agent behavior configuration. A Blueprint doesn't configure the agent — it gives the agent knowledge about what system to build.
Levels of Abstraction
Portability
▲
Blueprint --------● Full (principles + architecture)
│
│
Skill ------------● Format is portable, context is not
│
│
Prompt -----------● Text copies over, results not guaranteed
│
└──────────────────────► Execution specificity
The higher on the axis — the more people can use it. The further right — the more precise the result for a specific user. A Blueprint sacrifices specificity for universality. A Skill sacrifices universality for specificity. They don't compete — they operate at different levels.