Five days after releasing 22 open-source Synthesis Skills, I noticed a gap in the tooling. My synthesis-daily-rituals skill — a morning checklist I refine almost daily — existed in two places: the installed copy in ~/.claude/skills/ and the source copy in a git repo. Both had improved independently over the week. The installed copy had new steps for Slack MCP authentication and better handling for long threads. The source had a restructured evening checklist and a dependency validation step.
Every existing installer — mine included — handles this situation the same way: detect the mismatch, warn, overwrite. That is the correct behavior for packages, configuration files, and plugins. It is the wrong behavior for skills, because skills are methodology that evolves through use. The installed copy wasn’t stale. It was ahead in some areas and behind in others. What I needed was a merge — one that understood both sets of changes were additive and compatible.
No tool in the Agent Skills ecosystem does this. The standard defines the format beautifully — SKILL.md, YAML frontmatter, progressive disclosure, portable across forty-plus tools. But it stops at installation. What happens when you have 35 skills from three different sources with three different access levels? What happens when installed copies and source copies diverge? What prevents a private skill from depending on a team skill your colleague doesn’t have?
Here is the architecture I built to answer those questions.
Three repos, strict boundaries
The model uses three repositories with enforced access rules.
Public — 23 open-source skills. Code review, content quality, project management, thinking frameworks. CC0 and Apache 2.0. Installed globally to ~/.claude/skills/.
Private — 13 skills containing personal workflows, client-specific publishing processes, social media strategies with account details and timing preferences. Same global install location, but the source repo is private.
Shared — Team-level skills that encode institutional knowledge. How this team releases software, how this team verifies cherry-picks. Installed at the project level (.claude/skills/ within a project directory) rather than globally, because team conventions are project-scoped.
The dependency hierarchy: public depends on public only. Private depends on public and private. Shared depends on public and shared. No cross-collection dependencies between private and shared.
This prevents a specific failure mode I’ve seen in plugin ecosystems: your setup works because you have all the pieces, but a colleague’s breaks silently because their collection is different. A private skill depending on a shared skill breaks when you leave that team. A shared skill depending on a private skill fails for every team member except you. The hierarchy forces methodology to live at the right layer — and forces promotions from private to shared to public to be deliberate decisions.
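The hierarchy is small enough to express as a lookup table. A minimal sketch of the rule (function and variable names are mine, not from the actual installer):

```python
# ALLOWED maps a skill's source_type to the source_types it may depend on,
# per the hierarchy above: public -> public only; private -> public or
# private; shared -> public or shared.
ALLOWED = {
    "public": {"public"},
    "private": {"public", "private"},
    "shared": {"public", "shared"},
}

def dep_allowed(skill_type: str, dep_type: str) -> bool:
    """True if a skill of skill_type may declare a dependency of dep_type."""
    return dep_type in ALLOWED.get(skill_type, set())
```

Note that both illegal cross-collection edges (private depending on shared, and shared depending on private) fall out of the table naturally: neither type appears in the other's allowed set.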
Provenance
Every installed skill gets a .source.json file:
```json
{
  "source_repo": "github.com/rajivpant/synthesis-skills",
  "source_type": "public",
  "source_path": "synthesis-thinking-framework/SKILL.md",
  "source_commit": "af6a447",
  "installed_at": "2026-03-23T14:30:00Z",
  "installed_by": "install.sh"
}
```
The source_commit field is the critical one. When both the installed copy and the source have changed, the agent doing the merge needs a common ancestor — the version they diverged from. Without it, the agent diffs two files blind. With it, the agent reconstructs a three-way comparison and can attribute each change to the right side.
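To make that concrete, here is a minimal line-level sketch of change attribution using Python's `difflib`, where `base` stands for the file content at `source_commit`. The actual manager merges semantically via the agent; this only illustrates why the recorded ancestor turns a blind two-way diff into an attributable three-way comparison:

```python
import difflib

def attribute_changes(base: str, installed: str, source: str) -> dict:
    """Attribute each addition to the side that made it, relative to the
    common ancestor recorded in .source.json (the source_commit version).
    Without base, installed and source can only be diffed against each
    other, with no way to tell which side a difference came from."""
    def added_lines(old: str, new: str) -> list[str]:
        # Lines present in new but not in old, per unified diff.
        return [
            line[1:]
            for line in difflib.unified_diff(
                old.splitlines(), new.splitlines(), lineterm=""
            )
            if line.startswith("+") and not line.startswith("+++")
        ]
    return {
        "installed_added": added_lines(base, installed),
        "source_added": added_lines(base, source),
    }
```

If `installed_added` and `source_added` touch disjoint sections, the changes are likely additive and compatible; overlapping regions are where the agent's semantic judgment is needed.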
The source_type is what the dependency checker reads. After installation, the script walks each skill’s depends_on array from SKILL.md frontmatter, confirms each dependency is installed, reads its .source.json for the type, and validates against the hierarchy. Within hours of adding depends_on to all 38 skills across the three repos, I found two private skills that referenced public skills by name in their instructions but had never declared the dependency. The frontmatter made implicit relationships explicit; the install-time validation caught what I would have missed in a manual review.
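A rough sketch of that walk, assuming a simple frontmatter shape for `depends_on` and an in-memory map of installed skills (the helper names and data layout are illustrative, not the actual script):

```python
ALLOWED = {
    "public": {"public"},
    "private": {"public", "private"},
    "shared": {"public", "shared"},
}

def parse_depends_on(skill_md: str) -> list[str]:
    """Minimal scan of YAML frontmatter for a depends_on list, assuming
    simple '- name' items inside the leading --- block. A real script
    might shell out to a proper YAML parser instead."""
    deps, in_frontmatter, in_deps = [], False, False
    for line in skill_md.splitlines():
        if line.strip() == "---":
            if in_frontmatter:
                break  # end of frontmatter
            in_frontmatter = True
            continue
        if not in_frontmatter:
            continue
        if line.startswith("depends_on:"):
            in_deps = True
        elif in_deps and line.strip().startswith("- "):
            deps.append(line.strip()[2:].strip())
        elif in_deps and not line.startswith(" "):
            in_deps = False  # next top-level frontmatter key
    return deps

def validate(installed: dict[str, dict]) -> list[str]:
    """installed maps skill name -> {'type': ..., 'depends_on': [...]},
    with 'type' as read from each skill's .source.json. Returns a list
    of violations: missing dependencies and hierarchy breaks."""
    errors = []
    for name, meta in installed.items():
        for dep in meta["depends_on"]:
            if dep not in installed:
                errors.append(f"{name}: dependency {dep} is not installed")
            elif installed[dep]["type"] not in ALLOWED[meta["type"]]:
                errors.append(
                    f"{name} ({meta['type']}) cannot depend on "
                    f"{dep} ({installed[dep]['type']})"
                )
    return errors
```

The two failure messages correspond to the two problems install-time validation catches: a dependency a colleague's machine won't have, and an edge the hierarchy forbids.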
Two installers, different capabilities
install.sh is a POSIX shell script — no dependencies beyond git and a checksum utility. Clone, copy, write provenance, validate dependencies. On drift detection: warn and overwrite. This is the bootstrap path. `curl | sh` for first setup, or for CI environments where no agent is available.
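Drift detection itself is purely mechanical: compare both current copies against the checksum recorded at install time. A sketch in Python rather than POSIX sh for readability (the idea of a recorded checksum follows from the checksum utility mentioned above, but the field and function names here are hypothetical):

```python
import hashlib

def sha256_of(text: str) -> str:
    """Checksum of a skill file's content, as a hex digest."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def detect_drift(installed: str, source: str, recorded: str) -> str:
    """Classify drift by comparing the installed copy and the source copy
    against the checksum recorded when the skill was installed."""
    installed_changed = sha256_of(installed) != recorded
    source_changed = sha256_of(source) != recorded
    if installed_changed and source_changed:
        return "diverged"           # both sides moved: merge territory
    if installed_changed:
        return "local-only"         # installed copy is ahead
    if source_changed:
        return "update-available"   # source is ahead: safe to copy over
    return "in-sync"
```

Only the "diverged" case needs anything beyond file copying, which is exactly where the shell script stops (warn and overwrite) and the skills manager takes over.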
synthesis-skills-manager is itself a skill — a SKILL.md that teaches an AI agent how to manage the ecosystem. Same mechanical operations as the shell script, plus the ability to merge when drift is detected. The agent reads both versions, identifies what changed on each side, and produces a result that preserves improvements from both. It understands that a reordered section in the source is compatible with a new step added locally. It knows Configuration table values are user-specific and should never be overwritten.
The boundary between these two paths maps to a real capability distinction. Shell scripts handle mechanical operations well: copy files, compute checksums, write JSON, validate fields against rules. Understanding whether two changes to the same methodology file conflict or complement each other is a semantic operation. Text-based merge tools produce conflict markers for those. An agent that understands the content is methodology — not source code, not configuration — can resolve most drift without human intervention, because methodology changes are usually additive.
Configuration separation
Nine of the 38 skills need user-specific values: file paths, URLs, alert sounds, Slack channels. When those values are scattered through the instructions, every update is a risk. The pattern I landed on is a ## Configuration section near the top with a table:
```markdown
## Configuration

| Setting | Value | Description |
|---------|-------|-------------|
| `daily_plans_path` | `projects/_daily-plans/` | Where daily action plans are saved |
| `alert_sound` | `/System/Library/Sounds/Glass.aiff` | Completion alert |
```
Instructions reference settings by name. The install process preserves the Configuration table during updates. This is the same separation application developers make between code and environment variables — methodology and configuration ship in the same file but change at different rates and for different reasons.
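One way to implement that preservation, assuming the section runs from a `## Configuration` heading to the next `## ` heading (a sketch under that assumption, not the actual installer code):

```python
def split_config(text: str) -> tuple[str, str, str]:
    """Split a SKILL.md into (before, config_section, after), where the
    config section spans from '## Configuration' to the next '## ' heading
    or end of file. Returns an empty middle if no such section exists."""
    lines = text.splitlines()
    start = end = None
    for i, line in enumerate(lines):
        if start is None and line.strip() == "## Configuration":
            start = i
        elif start is not None and line.startswith("## "):
            end = i
            break
    if start is None:
        return text, "", ""
    end = len(lines) if end is None else end
    return (
        "\n".join(lines[:start]),
        "\n".join(lines[start:end]),
        "\n".join(lines[end:]),
    )

def preserve_config(installed: str, incoming: str) -> str:
    """Carry the user's Configuration section from the installed copy into
    the updated skill text, keeping everything else from the update."""
    _, user_config, _ = split_config(installed)
    before, _, after = split_config(incoming)
    if not user_config:
        return incoming  # nothing user-specific to preserve
    return "\n".join(part for part in (before, user_config, after) if part)
```

The updated instructions flow through untouched; only the table of user-specific values is pinned to the installed version.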
The practical benefit showed up during the migration itself. Several skills that started as private turned out to be generally useful once the project-specific values were extracted into Configuration. The three-repo structure gave those skills a clear promotion path: extract configuration, move to the public repo, update the dependency graph.
When this is overkill
If you have fewer than ten skills from a single source, you don’t need three repos. A single repo with install.sh handles that fine. The architecture earns its complexity when skills come from multiple sources with different access levels, when installed copies evolve through daily use, or when you share skills with a team and need to prevent dependency breakage across different people’s installations.
The implementation
The full implementation — three repos, 38 skills with provenance metadata, install scripts with drift detection and dependency validation, and the skills manager skill — is in the synthesis-skills repository.
The artifact type is new. The management problems are not. Provenance tracking, dependency hierarchies, and the distinction between mechanical and semantic operations appear in every mature tooling ecosystem. What is specific to AI agent skills is the bidirectional evolution pattern — methodology improves at the source and at the point of use simultaneously — and the fact that an AI agent is the right tool to resolve the resulting drift. That loop, where AI-native tooling manages AI-native artifacts, is what makes this architecture work.
This is part of a series on synthesis coding — the practice of building software through human-AI collaboration where the human provides direction, judgment, and domain expertise while the AI provides execution speed and breadth.
Rajiv Pant is President of Flatiron Software and Snapshot AI, where he leads organizational growth and AI innovation. He is former Chief Product & Technology Officer at The Wall Street Journal, The New York Times, and Hearst Magazines. Earlier in his career, he headed technology for Condé Nast’s brands including Reddit. Rajiv coined the terms “synthesis engineering” and “synthesis coding” to describe the systematic integration of human expertise with AI capabilities in professional software development. Connect with him on LinkedIn or read more at rajiv.com.