Part 3 — Skills: Deterministic Helpers for Your VP
Created: April 19, 2026 | Modified: April 19, 2026
This is Part 3 of a 7-part series on building your AI VP of Marketing with Claude Cowork. Previous: Part 2 — The Playbook | Next: Part 4 — Agents
Your VP has a desk, an employee handbook, and the policies posted on the wall. The Playbook is written. What you do not yet have is a team of specialists — named collaborators your VP can reach for by role instead of re-describing the job each time.
This chapter hires the first two. Both do narrowly scoped work the same way every time. One is a procedures clerk who fills out forms — a Content Brief Generator that turns any topic into a structured brief. The other is a quality inspector who grades drafts against a rubric — a Brand Voice Checker that reads your Rules and measures incoming content against them. Both are Skills: named procedures saved under .claude/skills/ that your VP loads on demand.
By the end of Part 3 your .claude/ folder has grown a new sibling beside rules/. Part 4 introduces the other half of the toolkit — Agents — and shows you when to reach for each.
Pick Up From Here
Part 3 assumes two things from earlier parts. If either is missing, you will feel it immediately — the Skills below depend on them.
From Part 1 — The Hire. A CLAUDE.md at your Project root with Business Overview, Target Audience, Brand Voice, and Marketing Goals sections. This is the context every Skill reads before running.
From Part 2 — The Playbook. At least a brand-voice.md file under .claude/rules/ — tone descriptors, vocabulary use/never-use lists, and two or three wrong/right voice examples. The Voice Checker Skill has nothing to enforce without it.
If you are jumping in mid-series and want a quick stand-up for Acme Widget Co. (the example this series uses alongside Tideway Bookkeeping), paste the following into your Project before continuing.
Starter config — Acme Widget Co. Save as CLAUDE.md at your Project root:
# Marketing VP -- Acme Widget Co.
## Business Overview
Acme Widget Co. sells industrial-grade fastening systems to manufacturing
plants across the Midwest. Founded 2019, 12 employees, $2.4M revenue.
## Target Audience
Operations managers and procurement leads at mid-size manufacturing plants
(50-500 employees). They care about reliability, lead times, and total cost
of ownership. Most are 35-55, skeptical of marketing, and make decisions
based on spec sheets and peer recommendations.
## Brand Voice
Direct, technical, no-nonsense, helpful. We sound like an experienced
engineer explaining a product -- not a salesperson pitching one.
## Marketing Goals
1. Generate 20 qualified leads per month through content marketing
2. Build email list from 600 to 3,000 subscribers by year-end
3. Publish weekly blog posts targeting "industrial fastener" search terms
Brand voice rule — save as .claude/rules/brand-voice.md:
# Brand Voice
## Tone
Direct, technical, no-nonsense, helpful, specific.
## Words We Use
- spec, tolerance, load rating, lead time, unit cost
- plain numbers and measurements over vague claims
## Words We Never Use
- game-changing, revolutionary, synergy, leverage, empower
- "we are excited to announce" or any variation
## Sentence Structure
- Lead with the fact or the benefit. Never lead with "we."
- Active voice always.
- Short paragraphs. One idea per paragraph.
The starter config above is canonical here. Part 4 and later chapters point back to this section rather than repeating the paste.
What Are Skills?
You have been giving your VP tasks by typing prompts into Cowork. Each time you need a content brief, you write a fresh request: describe what you want, specify the format, remind the VP about your audience. It works. But it is the equivalent of writing a new job description every time you ask your employee to do the same task.
Skills fix that. A Skill in Cowork is a saved prompt with a name. You create it once, then you invoke it by name whenever you need it. Instead of writing a paragraph explaining what a content brief should contain, you type the Skill name and hand it a topic. Same output, every time, in a fraction of the effort.
The difference between a Skill and a one-off prompt is consistency. A one-off prompt produces whatever you happen to ask for in the moment. A Skill produces the same structured output every time because the instructions are locked in. Your third content brief looks exactly like your first.
When should you use a Skill versus just asking your VP directly? Use a Skill for any task you do more than twice with the same structure. Content briefs, social media posts, email drafts, competitor one-pagers — anything with a repeatable format. Ask your VP directly for one-off work: brainstorming, answering a question, analyzing a specific situation.
Your CLAUDE.md is the employee handbook — it tells your VP everything about the company. Your Rules are the policies on the wall — they constrain behavior on every task. Skills are the step-by-step procedures in the operations manual. The handbook gives context. The policies set boundaries. The procedures define exactly how to execute a specific job.
Under the hood — Skills
- What file. Each Skill lives at ./.claude/skills/&lt;skill-name&gt;/SKILL.md — one dedicated folder per Skill.
- When written. You write the Skill on first save, and every later edit overwrites the same file.
- What format. One folder per Skill, with SKILL.md as the invocation spec plus supporting resource files.
- How to inspect. Open SKILL.md in any text editor, or browse the Skill folder directly on disk.
- How to undo. Delete the Skill folder, or edit SKILL.md — the next invocation uses the saved copy.
Skills are versionable and shareable. You may check them into version control, copy them between Projects, and hand a tested Skill to a teammate.
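Sharing works because a Skill is just a folder on disk. Here is a minimal Python sketch of the hand-off; the Project names are hypothetical, and the demo builds a stand-in source folder first so it runs anywhere:

```python
import shutil
import tempfile
from pathlib import Path

# Demo setup: a stand-in source Project with one Skill.
# "acme-project" and "tideway-project" are placeholder names.
root = Path(tempfile.mkdtemp())
src = root / "acme-project" / ".claude" / "skills" / "content-brief"
src.mkdir(parents=True)
(src / "SKILL.md").write_text("# content-brief\n")

# The actual hand-off: copying the whole Skill folder carries
# SKILL.md plus any supporting resource files along with it.
dst = root / "tideway-project" / ".claude" / "skills" / "content-brief"
shutil.copytree(src, dst)

print((dst / "SKILL.md").read_text())  # the copied spec, byte for byte
```

Checking the folder into version control works the same way: the Skill is ordinary text, so git tracks and diffs it like any other file.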
Gotcha. A Skill is not a conversation. If you tune a Skill by arguing with Claude inside one chat, the tweaks live in that chat only. Edit the Skill file itself to make the change stick across every future invocation.
Skill #1: The Content Brief Generator
A brief is the document your VP (or you) works from when creating a piece of content. Every field in it answers a question that, if left unanswered, leads to vague or off-target output. Before you build anything, decide what a good brief looks like.
Your content brief should include:
Topic / Working Title. The subject of the content piece. Not a final headline — a working title that clarifies what this piece is about. "Why operations managers need supply chain visibility" is a working title. "Supply Chain Blog Post" is not.
Target Audience Segment. Which of your audience segments (from CLAUDE.md) is this piece for? A blog post for procurement leads reads differently from one targeting plant managers, even if the topic is the same. Naming the segment up front prevents the content from trying to speak to everyone.
Key Messages. Three to five bullet points that capture what the reader should take away. These are not sentences from the article — they are the ideas the article must communicate. If you read the finished piece and cannot find each key message, the piece missed its brief.
Call to Action. What should the reader do after consuming this content? Download a spec sheet, request a quote, subscribe to the newsletter. One CTA per brief. If you cannot decide on one, the content is trying to do too much.
SEO Keywords, Content Format, Target Length. Five to ten search terms. The format (blog post, email, LinkedIn, landing page). The word count from your content standards. These three fields turn a vague brief into something a writer can execute against without asking follow-up questions.
Each field exists because skipping it leads to predictable problems. No audience segment? Generic tone. No key messages? Rambling structure. No CTA? Content that entertains but does not convert.
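The field checklist above can be expressed as a structural check. This is a sketch with hypothetical field names that mirror the brief fields; it is not part of Cowork:

```python
from dataclasses import dataclass

@dataclass
class ContentBrief:
    """Hypothetical container mirroring the brief fields above."""
    working_title: str
    audience_segment: str
    key_messages: list[str]
    call_to_action: str
    seo_keywords: list[str]

    def problems(self) -> list[str]:
        """Each missing field maps to a predictable failure mode."""
        issues = []
        if not self.audience_segment:
            issues.append("no audience segment -> generic tone")
        if not 3 <= len(self.key_messages) <= 5:
            issues.append("key messages outside 3-5 -> rambling structure")
        if not self.call_to_action:
            issues.append("no CTA -> content that does not convert")
        if not 5 <= len(self.seo_keywords) <= 10:
            issues.append("keyword count outside 5-10")
        return issues

brief = ContentBrief(
    working_title="Why ops managers need supply chain visibility",
    audience_segment="",  # deliberately left blank
    key_messages=["visibility saves time"],  # too few
    call_to_action="Download the checklist",
    seo_keywords=["supply chain visibility"] * 5,
)
print(brief.problems())  # flags the missing segment and the thin messages
```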
Build the Skill
The easy path: /skill-creator. Cowork ships with a built-in Skill called /skill-creator whose job is to build other Skills. Open a new conversation in your Project and type /skill-creator. It interviews you — what the Skill should do, what it reads, what it writes, what rules it loads — then writes .claude/skills/content-brief/SKILL.md for you. That is the fastest route to a working Skill and the path the rest of this series defaults to.
For the Content Brief Generator specifically, you are about to walk through the manual build first. That is deliberate: the field-by-field walkthrough is the teaching moment for what lives inside a Skill, and once you have felt the shape of it, /skill-creator becomes a tool you can evaluate rather than a black box. The "Faster Way" section at the end of this Skill returns to /skill-creator and explains exactly when to reach for it from here on.
Open your Cowork project. Navigate to Skills and create a new skill. Name it content-brief. In the skill prompt field, paste the following:
Generate a structured content brief for the topic provided.
INPUTS
- Topic: [the user provides this when invoking the skill]
BRIEF FORMAT
Produce the following sections in this exact order:
## Content Brief: [Working Title]
### Target Audience
Identify the most relevant audience segment from the project context.
Describe who this piece is for: their role, their priorities, and what
problem this content helps them solve.
### Key Messages
List 3-5 specific, concrete takeaways the reader should walk away with.
Each message should be one sentence. No vague generalities -- every
message must connect to the audience's real situation.
### Call to Action
One specific action the reader should take after reading this content.
The CTA must match the content format and the audience's stage in the
buying process. "Request a quote" is specific. "Learn more" is not.
### SEO Keywords
List 5-10 search terms this content should target. Include a mix of
short-tail (1-2 words) and long-tail (3-5 words) keywords. Prioritize
terms the target audience actually searches for over industry jargon
they would not type into Google.
### Content Format and Length
Recommend the best format for this topic and audience (blog post, email,
LinkedIn post, landing page, case study). State the target word count
based on the project's content standards.
### Outline
Provide a section-by-section outline with 3-5 sections. For each section:
- Section heading
- 2-3 bullet points describing what this section covers
- The key message this section supports
### Competitor Differentiation
One paragraph on how this content should position us differently from
competitors. What angle, evidence, or perspective makes this piece ours
and not something a competitor could publish unchanged?
### Distribution Notes
Suggest 1-2 channels where this content should be published or promoted,
based on the audience segment and format. Include a one-sentence rationale
for each channel.
RULES
- Pull audience information, brand voice, and content standards from the
project context. Do not ask the user to provide what is already in CLAUDE.md
or Rules.
- Be specific. "Manufacturers who are evaluating new fastener suppliers" is
useful. "Business professionals interested in our products" is not.
- Every key message must be testable -- you should be able to read the finished
content and verify whether each message appears.
- If the topic is too broad for a single piece of content, say so and recommend
how to split it.
Save the skill. Your Project folder now looks like this:
your-cowork-project/
├── CLAUDE.md
└── .claude/
├── rules/
│ └── (your playbook files from Part 2)
└── skills/
└── content-brief/
└── SKILL.md
Your .claude/ folder now carries rules/ and skills/ — with content-brief/SKILL.md joining the Playbook from Part 2.
Test It
Invoke the Skill with a real topic:
Why operations managers need supply chain visibility
Your VP produces a brief that looks something like this (the specifics reflect your CLAUDE.md):
## Content Brief: Why Supply Chain Visibility Saves More Than It Costs
Target Audience. Operations managers at mid-size manufacturing plants (50-500 employees) who currently track supply chain status through email, phone calls, and spreadsheets. They are dealing with late shipments, production delays caused by missing parts, and no reliable way to forecast delivery timelines. They make purchasing decisions based on data, not marketing — give them numbers and evidence.
Key Messages.
- Manual supply chain tracking costs an average ops manager 6-8 hours per week in follow-up calls and email chains.
- Visibility into supplier lead times reduces production stoppages caused by late deliveries.
- The cost of a visibility tool is measurable against the cost of a single day of unplanned downtime.
- Starting with visibility into your top 5 suppliers by volume covers 80% of your risk without requiring a full platform rollout.
Call to Action. Download our supply chain visibility checklist — a one-page worksheet that helps you calculate your current tracking costs and identify your highest-risk suppliers.
SEO Keywords. supply chain visibility; supply chain management manufacturing; operations manager supply chain tools; reduce production downtime suppliers; supplier lead time tracking; manufacturing supply chain optimization; supply chain visibility ROI.
Format and Length. Blog post. 800-1200 words per content standards. Structure: problem statement, cost of the status quo, what visibility changes, how to start small, CTA.
Outline. Four sections — The 6-Hour Problem (how much time ops managers spend chasing status updates, supports message 1), What Breaks When You Cannot See (production stoppages traced to late deliveries, supports message 2), Visibility is Not a Software Pitch (frame as practice, not product, supports message 3), Start With Five Suppliers (80/20 rollout, supports message 4).
Competitor Differentiation. Most supply chain content targets logistics directors at enterprise companies and assumes the reader has a dedicated supply chain team. Our angle: this is for the ops manager who is the supply chain team. Practical, small-scale steps that do not require a six-figure platform or a dedicated analyst.
Distribution Notes. Company blog as the primary channel for SEO intent. LinkedIn for a repurposed opening section (the 6-hour problem) as a standalone post linking back.
Review the output against three questions. Are the key messages specific enough? Each message should state a concrete claim — "Manual tracking costs 6-8 hours per week" passes; "Supply chain visibility is important" fails. Is the CTA actionable? The reader should know exactly what they get and what they do. Are the SEO keywords realistic? Check that they are terms your audience would actually search, not jargon nobody types into Google.
Iterate
Your first Skill output will not be perfect. Skills improve the same way any process does — through testing and adjustment. If key messages come back too vague, add a constraint: "Every key message must include a specific number, timeframe, or measurable outcome." If Competitor Differentiation keeps producing generic positioning, add context to your CLAUDE.md naming your top competitors. If the CTA recommends downloads you do not have, add a section to CLAUDE.md listing your available content assets.
Each time you refine the Skill prompt, run it again with the same topic and compare outputs. Save your test topic and first output so you can diff later versions against it — this is the fastest way to see whether a change improved the output.
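The compare step needs no special tooling: a unified diff of the saved baseline against the rerun shows exactly what a prompt tweak changed. A sketch with Python's standard library, using illustrative brief snippets:

```python
import difflib

# Illustrative snippets: the first brief output, and a rerun after
# tightening the key-message constraint in the Skill prompt.
first = """## Key Messages
- Supply chain visibility is important.
- Tracking takes time.
""".splitlines(keepends=True)

revised = """## Key Messages
- Manual tracking costs ops managers 6-8 hours per week.
- Tracking takes time.
""".splitlines(keepends=True)

# Unified diff: unchanged lines give context, +/- lines show the change.
for line in difflib.unified_diff(first, revised, "brief-v1.md", "brief-v2.md"):
    print(line, end="")
```

A vaguer-to-sharper change like the one above is exactly what you want the diff to surface; if a tweak produces no diff, it did not do anything.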
The Faster Way — /skill-creator
You just built a Skill by hand. That process matters because you now understand what goes into a Skill — the field design, the constraints, the testing loop. You know what makes a Skill prompt specific versus vague, and why each field exists.
Now here is how to build Skills faster. Cowork's /skill-creator builds Skills through a guided conversation. Instead of writing a Skill prompt from scratch, you describe what you want the Skill to do and /skill-creator asks questions to fill in the details — what inputs the Skill needs, whether it should pull context from CLAUDE.md, how to format the output. After four or five questions, it generates a complete Skill prompt shaped by your answers. You review it, adjust, and save.
From this point forward, new Skills lead with /skill-creator. You know what a Skill prompt contains, why each section matters, and how to test and iterate. That knowledge means you can evaluate what /skill-creator generates and fix anything it gets wrong. You can always drop back to manual authoring for unusual logic, conditional outputs, or complex multi-step workflows where a guided conversation cannot match writing the prompt yourself.
Skill #2: The Brand Voice Checker
Same pattern, different surface. Here is what changes.
The Content Brief Skill produces a document from a topic. The Voice Checker grades a document against your Rules. It reads .claude/rules/brand-voice.md from Part 2 on every run and measures incoming content against it. The output is not a draft — it is a score plus a line-by-line critique.
Voice drifts. You write a blog post that sounds exactly like your brand. Then you adapt it for LinkedIn and something shifts — the LinkedIn version sounds like a press release, the email version sounds like a textbook. Manual voice checking is slow and inconsistent: you read a draft in one mood and hold it to one standard; next week, a different mood, a different standard. A Skill that checks every piece against the same rules, every time, catches drift before it reaches your audience. Not a replacement for your judgment — a first pass that catches the obvious problems so your review time focuses on the subtle ones.
This is the accountability loop from Part 2 made concrete. The standards exist in your Rules. The Voice Checker is the mechanism that enforces them on every draft before you see it.
Rule Fitness Comes First
The quality of the Voice Checker is directly tied to the quality of your Rules. If brand-voice.md contains three adjectives and no examples, the checker has almost nothing to work with. If it contains specific tone descriptors, a detailed vocabulary list, and three pairs of wrong/right examples with real product references, the checker produces feedback like "line 4 uses 'leverage' which is on your banned list — try 'use' instead."
A Rule file readable by a human can still be too vague for a Skill to enforce. A Skill can only flag what the Rule names explicitly — no banned-word list means no bans to enforce, however strong your taste. Before building this Skill, re-open brand-voice.md and check it against six fields:
- Tone descriptors (3-5 adjectives describing how the brand sounds).
- Personality traits (how the brand behaves).
- Vocabulary "use" list (preferred words and phrases).
- Vocabulary "never use" list (banned words and phrases).
- Voice-in-action examples (at least two wrong/right pairs).
- Sentence structure rules (length, active voice preference, lead style).
Any dimension you leave blank will silently pass every check. The Voice Checker does not know to enforce what the Rule does not name.
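That audit is mechanical enough to script. A sketch that checks a rule file for the headings used in the starter brand-voice.md above; adjust REQUIRED to match your own file:

```python
REQUIRED = [
    "## Tone",
    "## Words We Use",
    "## Words We Never Use",
    "## Sentence Structure",
]

def missing_sections(rule_text: str) -> list[str]:
    """Headings the rule file lacks -- dimensions that silently pass."""
    return [h for h in REQUIRED if h not in rule_text]

# A deliberately thin rule file: three adjectives and nothing else.
thin_rule = "# Brand Voice\n## Tone\nDirect, technical, helpful.\n"
print(missing_sections(thin_rule))
# -> ['## Words We Use', '## Words We Never Use', '## Sentence Structure']
```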
Build with /skill-creator
Open your Cowork project and type /skill-creator. Then paste this prompt:
Build a skill called "Voice Check" that reviews marketing content against
my brand voice rules.
Inputs:
- A piece of marketing content (any format: blog post, email, social post,
ad copy, landing page section)
- Optionally, the content type and target channel
What it does:
1. Read .claude/rules/brand-voice.md to load my brand voice standards
2. Analyze the input content against each section of the voice rules:
- Tone alignment: does the content match my tone descriptors?
- Vocabulary scan: flag any words from my "never use" list
- Passive voice detection: flag every passive construction
- Voice match: compare against my wrong/right examples
- Sentence structure: check against my sentence structure rules
3. Score each dimension as PASS, WARN, or FAIL
4. For every WARN or FAIL, provide:
- The specific line or sentence that triggered the flag
- Why it fails (which rule it violates)
- A suggested rewrite that fixes the issue while preserving the meaning
Output format:
- Overall voice match score (PASS / NEEDS REVISION / FAIL)
- Dimension-by-dimension breakdown with PASS/WARN/FAIL
- Line-level feedback table: original text | issue | suggested fix
- Summary: 2-3 sentences on the biggest voice gaps and what to fix first
Do not invent standards. Only check against what is written in
.claude/rules/brand-voice.md. If a dimension is not covered in the rules
file, skip it and note that the rules do not address it.
Cowork walks you through a few questions about scope, inputs, and outputs. Accept project-wide scope, skip trigger setup for now, and confirm the output format. Cowork generates the Skill and saves it to .claude/skills/voice-checker/SKILL.md.
Cowork wrote .claude/skills/voice-checker/SKILL.md to disk and wired it to your brand-voice Rule without you ever naming the Skill folder yourself. Open the Skill file and check — the reference to .claude/rules/brand-voice.md is already there. The under-the-hood manual-build walkthrough from the Content Brief section applies here too; the only line the manual version needs that the Content Brief Skill does not is an explicit "Read .claude/rules/brand-voice.md" step near the top of the instructions block.
Test It — Bad Content First
A Voice Checker is only useful if it catches real problems. Test it with content that is deliberately wrong. Feed it this draft LinkedIn post that violates nearly every rule:
We are excited to announce that RouteLine has leveraged cutting-edge
artificial intelligence to deliver a game-changing update to our
logistics platform. Best-in-class route optimization is now empowered
by machine learning algorithms that unlock unprecedented efficiency
gains for your supply chain operations.
Significant improvements have been achieved across all key metrics.
Delivery times are reduced by our proprietary technology. Costs are
lowered through synergistic integration of real-time data streams.
Ready to transform your logistics? Reach out to learn more about
how RouteLine can supercharge your operations.
The Voice Checker returns something like this:
OVERALL VOICE MATCH: FAIL
DIMENSION BREAKDOWN:
- Tone Alignment: FAIL (reads as corporate press release, not
direct/grounded/warm)
- Vocabulary Scan: FAIL (7 banned words detected)
- Passive Voice: FAIL (3 passive constructions)
- Voice Match: FAIL (matches "wrong" examples from voice rules)
- Sentence Structure: WARN (no length variation, never leads with benefit)
LINE-LEVEL FEEDBACK:
| Original | Issue | Suggested Fix |
|----------|-------|---------------|
| "We are excited to announce" | Banned phrase | "Route optimization now runs on machine learning" |
| "leveraged cutting-edge AI" | Two banned words | "uses machine learning" |
| "game-changing update" | Banned word | "update" or describe the specific change |
| "empowered by machine learning" | Banned word | "built on machine learning" |
| "unlock unprecedented efficiency" | Banned word + vague | "reduces route planning from 45s to 3s" |
| "Significant improvements have been achieved" | Passive + vague | "Route planning runs 15x faster" |
| "synergistic integration" | Banned word + passive | "You spend less on fuel because routes use real-time traffic" |
| "supercharge your operations" | Banned word | "cut your delivery times and fuel costs" |
SUMMARY:
Reads like a corporate press release -- the opposite of RouteLine's voice
(direct, grounded, specific). Seven banned words, three passive
constructions, no specific metrics. Fix priority: (1) replace banned
vocabulary, (2) rewrite passive sentences in active voice, (3) add
specific numbers to replace vague claims.
Every flag points to a specific Rule violation, and every suggestion gives you a concrete fix. Compare this to reading the draft yourself and thinking "something feels off." The Skill tells you exactly what is off and how to fix it.
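Two of those dimensions are deterministic enough to sketch outright. The following illustrates the kind of check the Skill performs; it is not Cowork's implementation, and the banned-stem list and passive pattern are deliberate simplifications:

```python
import re

# Banned-word stems from the never-use list. Stems catch variants
# like "leveraged" and "synergistic", not just the base word.
BANNED_STEMS = ["game-chang", "revolution", "synerg", "leverag", "empower"]

def vocabulary_flags(text: str) -> list[str]:
    """Words in the draft that start with a banned stem."""
    words = re.findall(r"[a-z-]+", text.lower())
    return sorted({w for w in words
                   if any(w.startswith(s) for s in BANNED_STEMS)})

def passive_flags(text: str) -> list[str]:
    """Naive passive-voice check: a 'to be' form plus an -ed word."""
    return re.findall(r"\b(?:is|are|was|were|been|be)\s+\w+ed\b", text, re.I)

draft = ("Significant improvements have been achieved. "
         "Costs are lowered through synergistic integration.")
print(vocabulary_flags(draft))  # ['synergistic']
print(passive_flags(draft))     # ['been achieved', 'are lowered']
```

A real checker handles irregular participles ("built", "shown") and context, which is exactly why the Skill delegates the judgment calls to the model while the Rule file supplies the lists.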
Test It — Good Content Second
A checker that flags everything is as broken as one that flags nothing. You stop trusting it and you stop pasting drafts in. Feed the Skill a cleaned-up paragraph — active voice, preferred vocabulary, specific numbers, tone that matches your "right" examples — and watch it return mostly PASS ratings. If anything comes back as WARN or FAIL, treat each flag as a calibration question: is this a real violation, or is the Rule written too strict for the content it has to grade? A well-calibrated checker passes clean content and only flags actual violations.
The sequence you are setting up is: Brief → Draft → Voice Check → Revise → Publish. Drafting and editing are different modes of thinking. Let the draft be messy. Clean it up in the voice-check pass. The two-step approach produces better content because each step focuses on one job. Part 5 of this series wires both Skills into a pipeline that flows without manual triggering.
Skill or Agent? — The Decision Card
You have built two Skills. Before Part 4 introduces Agents, the difference between them is worth getting straight — because it determines which tool you reach for on any given task.
A Skill is a form your VP fills out. You designed the fields, and the output follows a predictable structure every time. An Agent is a senior team member you hand a project brief. The brief says what you need and why. The team member figures out the research plan, the analysis framework, and the presentation format. They come back with a recommendation, not a filled-in template.
| Aspect | Skills | Agents |
|---|---|---|
| Scope | Single, repeatable task | Complex, multi-step project |
| Input | Structured — fill in the fields | Open-ended — state the goal |
| Output | Predictable format every time | Synthesized analysis and recommendations |
| Autonomy | Follows your template | Makes decisions along the way |
| Speed | Seconds | Minutes |
| Surface | .claude/skills/<name>/SKILL.md | .claude/agents/<name>.md |
| Example | Generate a content brief | Research competitors and produce a positioning strategy |
Use Skills when you know exactly what you want and the format it should take. Content briefs, voice checks, social media posts. You already know what a good content brief looks like. The Skill ensures consistency — every brief follows the same structure, hits the same quality bar, and takes the same amount of time.
Use Agents when you need research, analysis, comparison, or synthesis. Strategy work. Market analysis. Campaign planning. You cannot template these tasks because the right answer depends on information you do not have yet. You need someone to go find that information, make sense of it, and come back with a structured recommendation.
Skills and Agents complement each other. An Agent running a competitive analysis might call your Voice Checker Skill to ensure the final deliverable matches your tone. An Agent building a campaign might pull from your Content Brief generator to structure individual pieces within the campaign. The Skills you built above become tools your Agents use — the same way a senior team member uses your company templates without being told they exist.
The simplest test: if you can draw the output on a whiteboard before the task starts, use a Skill. If you need someone to go figure out what the output should look like, use an Agent.
With Skills, you are the architect. With Agents, you are the executive. Part 4 builds the first two Agents in your team.
What Just Changed
Two new files landed in .claude/ across this chapter:
- .claude/skills/content-brief/SKILL.md — the anchor Skill, built by hand so you understand the shape of a Skill definition.
- .claude/skills/voice-checker/SKILL.md — a Skill that reads your Rules and grades drafts against them.
You also internalized the Skills-vs-Agents distinction and the test that goes with it: if you can draw the output on a whiteboard before the task starts, build a Skill; if you need someone to figure out what the output should look like, commission an Agent.
Your .claude/ folder now carries two sibling branches — rules/ (the playbook from Part 2) and skills/ (the procedures from this chapter). Part 4 adds the third: agents/.
What Is Next
Each Skill works on its own. Both run in seconds and produce the same structured output every time. But neither one plans — they execute procedures you already knew how to draw on a whiteboard. That is the ceiling of Skills.
In Part 4 — Agents, you hire two autonomous workers: a Campaign Strategist that turns a brief into a distribution plan, and a Content Repurposer that turns one approved piece into four channel-ready variations. These are tasks you cannot template — they require judgment, synthesis, and decisions about information you do not have yet. The same test applies in reverse: the moment you cannot pre-draw the output, you are reaching for an Agent, not a Skill.
Further out, Part 5 wires Skills and Agents together into a pipeline, and Part 7 puts that pipeline on a recurring schedule — tools that run without you triggering them.
This is Part 3 of 7 in the Your AI VP of Marketing series. Previous: Part 2 — The Playbook | Next: Part 4 — Agents