You’ve been using Claude Code to build things faster than you ever thought possible.
One session. One conversation. One AI working through your project step by step.
But what if the project is too big for one session?
What if you need an AI researcher, an AI writer, an AI reviewer, and an AI builder - all working at the same time, talking to each other, and coordinating their own work?
That’s exactly what Claude Code’s Agent Teams feature does. And it changes the game for anyone running complex projects.
I’m going to walk you through everything - from enabling the feature to running your first team, to the patterns that actually work and the mistakes that burn through tokens for nothing.
Quick Navigation
| Section | What You’ll Learn |
|---|---|
| What Are Agent Teams | The basics and how they compare to subagents |
| Setting Up Agent Teams | Enable the feature, display modes, shortcuts |
| Running Your First Agent Team | Your first team in one prompt |
| Controlling Your Team | Tasks, delegation, plan approval |
| Orchestration Patterns | 4 patterns with copy-paste prompts |
| When It’s Worth the Token Cost | Honest cost-benefit breakdown |
| Practical Workflow: Site Audit | Full walkthrough with 4 specialists |
| Troubleshooting | Common issues and fixes |
What Are Agent Teams
Think of it this way.
A normal Claude Code session is one smart employee sitting at one desk, working on one thing at a time. Subagents are like that employee sending quick questions to assistants who report back with answers.
Agent teams are a full department.
You have a team lead (your main Claude Code session) that spawns teammates - each one a completely independent Claude Code instance with its own context window, its own tools, and the ability to talk directly to other teammates.
The team shares a task list that everyone can see, claim work from, and update. There’s a messaging system so teammates can share findings, challenge each other’s conclusions, and coordinate without you playing telephone.
Here’s the key difference from subagents:
| | Subagents | Agent Teams |
|---|---|---|
| Context | Reports back to main session | Fully independent sessions |
| Communication | One-way (back to caller) | Teammates message each other directly |
| Coordination | Main agent manages everything | Shared task list, self-coordination |
| Best for | Quick, focused lookups | Complex work needing discussion |
| Token cost | Lower | 3-5x higher (each teammate is a full session) |
Subagents are researchers who hand you a report. Agent teams are collaborators who debate, challenge, and build on each other’s work.
When should you use agent teams over subagents?
Use agent teams when teammates need to:
- Share findings and build on each other’s work
- Challenge each other’s conclusions
- Coordinate across different parts of a large project
- Work in parallel on truly independent pieces
Use subagents when:
- You just need a quick answer or focused research
- The work is sequential (step 1, then step 2, then step 3)
- You’re editing the same files
- The task is straightforward enough for one session
Setting Up Agent Teams
Agent teams are experimental. They’re disabled by default, which means you need to turn them on before anything else.
Enable the Feature
Option 1: Settings file (recommended)
Add this to your settings.json:
```json
{
  "env": {
    "CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS": "1"
  }
}
```
Your settings file lives at ~/.claude/settings.json for global settings, or .claude/settings.json in your project directory for project-specific settings.
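If you’d rather script the change than edit the file by hand, here’s a minimal Python sketch that merges the flag in without clobbering your existing settings. The file path and flag name come from above; the helper function itself is illustrative, not part of Claude Code:

```python
import json
from pathlib import Path

def enable_agent_teams(settings_path: Path) -> dict:
    """Merge the experimental agent-teams flag into a Claude Code settings
    file, preserving whatever settings are already there."""
    settings = {}
    if settings_path.exists():
        settings = json.loads(settings_path.read_text() or "{}")
    settings.setdefault("env", {})["CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS"] = "1"
    settings_path.parent.mkdir(parents=True, exist_ok=True)
    settings_path.write_text(json.dumps(settings, indent=2))
    return settings

# Global settings live at ~/.claude/settings.json:
# enable_agent_teams(Path.home() / ".claude" / "settings.json")
```

The merge-don’t-overwrite approach matters: blindly writing a fresh JSON file would wipe any permissions or display settings you’ve already configured.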
Option 2: Environment variable
```shell
export CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS=1
```
Add that to your .bashrc or .zshrc if you want it persistent.
Choose a Display Mode
Agent teams support two ways to see what’s happening:
In-process mode (default): All teammates run inside your main terminal. You navigate between them with keyboard shortcuts. Works everywhere. This is what I recommend starting with.
Split-pane mode: Each teammate gets its own terminal pane. You can see everyone’s output at once. Requires tmux or iTerm2.
To set your preference in settings.json:
```json
{
  "teammateMode": "in-process"
}
```
Or force it for a single session:
```shell
claude --teammate-mode in-process
```
Keyboard Shortcuts (In-Process Mode)
| Shortcut | What It Does |
|---|---|
| `Shift+Up/Down` | Switch between teammates |
| `Enter` | View a teammate’s full session |
| `Escape` | Interrupt a teammate’s current turn |
| `Ctrl+T` | Toggle the shared task list |
| `Shift+Tab` | Toggle delegate mode on the lead |
These shortcuts are how you stay in control. Learn them.
Running Your First Agent Team
Here’s where it gets real. Open Claude Code in your project directory and describe what you want.
The Prompt That Starts Everything
You don’t need special commands. Just tell Claude what you need and ask it to create a team:
I need to research competitor landing pages in the B2B analytics space.
Create an agent team with three teammates:
- One analyzing hero sections and value propositions
- One analyzing pricing strategies and packaging
- One analyzing social proof and trust elements
Have them each analyze the top 5 competitors and share findings
with each other. Compile everything into a research doc.
Claude takes it from there. It:
- Creates a team with a shared task list
- Spawns three teammates, each with their assignment
- Coordinates their work
- Synthesizes findings when everyone finishes
- Cleans up the team
You just managed a three-person research team with one prompt.
What Happens Behind the Scenes
When Claude creates an agent team, it sets up:
- Team config at `~/.claude/teams/{team-name}/config.json`
- Task list at `~/.claude/tasks/{team-name}/`
- Messaging system so agents can communicate directly
Each teammate loads your project’s CLAUDE.md, MCP servers, and skills automatically. They don’t inherit the lead’s conversation history though - they start fresh with whatever context you give them in the spawn prompt.
This matters. If you want a teammate to know something specific, include it in their assignment. Don’t assume they know what you discussed with the lead earlier.
Controlling Your Team
Once your team is running, you have several ways to manage it.
Assign Tasks Explicitly
Tell the lead exactly who should do what:
Assign the SEO audit to the researcher teammate.
Have the writer teammate start on the blog outline.
The reviewer should wait until the writer finishes before starting their review.
Let Teammates Self-Claim
Or let them figure it out. When a teammate finishes their current task, they automatically pick up the next unassigned, unblocked task from the shared list. This works well when you have a queue of independent tasks.
Task Dependencies
Tasks can block each other. If Task B can’t start until Task A finishes, the system handles that automatically. When Task A completes, Task B unblocks and becomes available.
Create these tasks with dependencies:
1. Research competitor pricing pages
2. Analyze pricing patterns (blocked by task 1)
3. Write pricing page copy (blocked by task 2)
4. Review and edit copy (blocked by task 3)
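To make the blocking behavior concrete, here’s a tiny Python sketch of how a dependency-aware task queue resolves. This illustrates the idea only — it is not Claude Code’s actual implementation, and the task names are just the pipeline above:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    blocked_by: set = field(default_factory=set)
    done: bool = False

def available(tasks: dict) -> list:
    """Tasks that are not done and whose blockers have all completed."""
    return [t.name for t in tasks.values()
            if not t.done and all(tasks[b].done for b in t.blocked_by)]

tasks = {
    "research": Task("research"),
    "analyze":  Task("analyze", blocked_by={"research"}),
    "write":    Task("write",   blocked_by={"analyze"}),
    "review":   Task("review",  blocked_by={"write"}),
}

print(available(tasks))        # ['research'] -- only task 1 is claimable
tasks["research"].done = True
print(available(tasks))        # ['analyze'] -- finishing task 1 unblocked task 2
```

A teammate with nothing to do simply waits until `available()` returns something for it to claim — which is why a teammate forgetting to mark a task complete (a known limitation, covered below) stalls everything downstream.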
Talk to Teammates Directly
This is powerful. You’re not limited to talking through the lead. In in-process mode, hit Shift+Up/Down to select a teammate and type directly to them.
Want to redirect a teammate’s approach? Tell them directly. Want to ask a follow-up question? Go straight to the source.
Delegate Mode
Sometimes the lead starts doing work itself instead of delegating. If you want the lead to focus purely on coordination - spawning teammates, assigning tasks, synthesizing results - press Shift+Tab to enable delegate mode.
In delegate mode, the lead can only:
- Spawn and manage teammates
- Send messages
- Manage the task list
- Synthesize results
No code writing. No file editing. Pure orchestration.
Require Plan Approval
For high-stakes work, you can make teammates plan before they execute:
Spawn a teammate to refactor the checkout flow.
Require plan approval before they make any changes.
The teammate works in read-only plan mode, submits their plan to the lead, and waits for approval. The lead reviews and either approves or sends it back with feedback.
You can influence how the lead evaluates plans:
Only approve plans that include test coverage for every changed function.
Reject any plan that modifies the database schema without a migration.
Orchestration Patterns That Work
Not every team structure makes sense for every task. Here are the patterns I’ve seen work well, and when to use each one.
The Parallel Specialists
What it is: Multiple teammates each bring a different expertise to the same problem.
When to use it: Reviews, audits, research where you want multiple perspectives.
Create an agent team to review our new landing page. Spawn three specialists:
- A conversion optimization expert checking CTA placement, form friction,
and value proposition clarity
- A technical reviewer checking page speed, mobile responsiveness,
and accessibility
- An SEO specialist checking meta tags, heading structure,
schema markup, and content optimization
Have them each review independently, then share findings with each other
to identify anything they missed. Compile into a prioritized action list.
Why it works: A single reviewer gravitates toward whatever they notice first. Three specialists with different lenses catch things a generalist misses.
The Competing Hypotheses
What it is: Multiple teammates investigate different theories and actively debate each other.
When to use it: Debugging, diagnosing conversion drops, figuring out why something isn’t working.
Our landing page conversion rate dropped 40% last week.
Spawn 4 teammates to investigate competing hypotheses:
- One investigating if the traffic source mix changed
- One analyzing if the page itself changed (code, copy, layout)
- One checking if the offer or pricing changed
- One looking at external factors (competitor moves, seasonality, market shifts)
Have them challenge each other's findings. If the traffic source
teammate finds a shift, the page analysis teammate should verify
whether the page still converts for the original traffic mix.
Update a shared findings doc with whatever consensus emerges.
Why it works: When you investigate one theory at a time, you anchor on the first plausible explanation. Parallel investigation with active debate means the theory that survives is more likely to be the actual root cause.
The Pipeline
What it is: Sequential stages where each teammate’s output feeds the next.
When to use it: Content creation, campaign builds, anything with clear stages.
Create a content pipeline team:
1. Researcher: Find the top 10 articles ranking for "email marketing automation"
and analyze what they cover, what they miss, and what angles we can own
2. Outliner: Take the research and create a detailed article outline
that's better than everything currently ranking (blocked by task 1)
3. Writer: Write the full article following our style guide (blocked by task 2)
4. Editor: Review for clarity, accuracy, and SEO optimization (blocked by task 3)
Why it works: Each stage gets a fresh context window focused on their specific job. The researcher doesn’t carry around 5,000 words of article draft cluttering their context. The writer doesn’t have 20 competitor analyses eating their token budget.
The Swarm
What it is: A queue of independent tasks with workers that grab the next available job when they finish.
When to use it: Batch operations where tasks don’t depend on each other.
Create 15 tasks - one for each landing page in our /pages directory.
Each task: audit the page for conversion issues, check mobile
responsiveness, verify all links work, and write a 5-point
improvement summary.
Spawn 4 worker teammates. Let them self-claim tasks from the queue.
Why it works: Natural load balancing. Fast tasks get completed quickly and the worker moves on. Slow tasks don’t block anyone else. All 15 pages get audited in roughly the time it takes to do 4.
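The self-claiming behavior is essentially a classic work queue. Here’s a minimal sketch of the pattern using Python threads as stand-in workers — purely illustrative, not how Claude Code spawns teammates:

```python
import queue
import threading

def run_swarm(tasks, num_workers):
    """Workers grab the next job as soon as they finish their current one.
    Fast jobs clear quickly; slow jobs never block the other workers."""
    q = queue.Queue()
    for t in tasks:
        q.put(t)
    results = []
    lock = threading.Lock()

    def worker():
        while True:
            try:
                task = q.get_nowait()    # self-claim the next unclaimed task
            except queue.Empty:
                return                   # queue drained, worker exits
            outcome = f"audited {task}"  # stand-in for the real audit work
            with lock:
                results.append(outcome)

    workers = [threading.Thread(target=worker) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results

pages = [f"page-{i}" for i in range(1, 16)]  # 15 landing pages, 4 workers
print(len(run_swarm(pages, num_workers=4)))  # 15
```

Notice there’s no assignment logic at all — the queue itself does the load balancing, which is exactly why the swarm pattern needs no coordination from the lead once it’s running.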
When Agent Teams Are Worth the Token Cost
Here’s the truth about agent teams: they are expensive.
Each teammate is a full Claude Code session. A three-teammate team uses roughly 3-5x the tokens of a single session. That adds up fast.
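A quick back-of-envelope check before you spawn a team: take what a solo session would use and apply the rough 3–5x multiplier. The multiplier is from the figure above; the 200k-token example is purely illustrative:

```python
def team_token_estimate(single_session_tokens: int) -> tuple[int, int]:
    """Bracket expected usage for a ~3-teammate team,
    using the rough 3-5x-of-a-single-session figure."""
    return 3 * single_session_tokens, 5 * single_session_tokens

# If a solo session would burn ~200k tokens (illustrative number),
# a three-teammate team lands somewhere around 600k to 1M tokens.
low, high = team_token_estimate(200_000)
print(low, high)  # 600000 1000000
```

If that range makes you wince for the task at hand, that’s a signal to use subagents or a single session instead.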
So when does the cost justify itself?
Worth It
Parallel research with independent investigation paths. Three researchers each investigating different competitor segments simultaneously. The coverage is genuinely better than one agent doing it sequentially, and you save real time.
Complex debugging with competing hypotheses. When you don’t know the root cause, parallel investigation with debate finds the answer faster and more accurately than sequential elimination.
Large builds with clean file boundaries. If three teammates each own separate files - one building the API, one the frontend, one the tests - they don’t step on each other and the project ships faster.
Multi-perspective reviews. A security reviewer, performance reviewer, and UX reviewer all looking at the same code from different angles catches more issues than one reviewer doing three passes.
Not Worth It
Sequential tasks. If Step 2 needs Step 1’s output, you can’t parallelize it. You just added coordination overhead for nothing.
Same-file edits. Two teammates editing the same file creates conflicts. One will overwrite the other’s changes. Use a single session instead.
Simple or small tasks. The coordination overhead of spawning a team, managing tasks, and synthesizing results is pointless for a task one agent handles in 10 minutes.
Tightly coupled work. If every decision depends on every other decision, teammates spend more time communicating than working. That’s when a single session with deep context works better.
Practical Workflow: Site Audit With Agent Teams
Let me walk through a complete real-world example so you can see how this fits together.
The scenario: You have a 30-page content site and you want a full audit - SEO, conversion, technical, and content quality.
Step 1: Create the Team
Create an agent team called "site-audit" with four specialist teammates:
- SEO Analyst: audit meta tags, heading structure, internal linking,
schema markup, and keyword optimization for every page
- Conversion Specialist: analyze CTAs, form placement, value propositions,
social proof, and user flow for every page
- Technical Reviewer: check page speed factors, mobile responsiveness,
broken links, accessibility, and Core Web Vitals issues
- Content Editor: review content quality, readability, accuracy,
freshness, and competitive positioning
Create a shared task list with one task per page (30 tasks).
Each specialist should self-claim pages and audit them through their lens.
Have specialists share interesting findings with each other as they go.
Step 2: Monitor Progress
While they work, check in:
- Hit `Ctrl+T` to see the task list and who’s working on what
- Use `Shift+Up/Down` to check individual teammate progress
- Message a teammate directly if they seem stuck
Step 3: Redirect When Needed
Maybe the SEO analyst finds that 15 pages are missing schema markup entirely. That’s a pattern, not a one-off finding. Tell them:
Good catch on the schema issue. Stop auditing individual pages for
schema - just flag it as a site-wide issue and focus your remaining
time on internal linking analysis instead.
Step 4: Synthesize Results
When the team finishes, ask the lead:
Compile all findings into a single audit report.
Organize by priority (critical, high, medium, low).
For each finding, include: the issue, which pages it affects,
the recommended fix, and expected impact.
Create an action plan with the top 10 fixes to implement first.
One prompt. Four specialists. 30 pages audited. Prioritized action plan.
That’s a $5,000-$10,000 agency audit done in an afternoon.
Troubleshooting Common Issues
Teammates Not Showing Up
In in-process mode, teammates might already be running but not visible. Press Shift+Down to cycle through them.
If they genuinely didn’t spawn, check that your task was specific enough. Claude decides whether a team is warranted based on what you’re asking. If the task seems too simple, it might just handle it in a single session.
For split-pane mode, make sure tmux is installed:
```shell
which tmux
```
Too Many Permission Prompts
Every time a teammate needs to run a command or edit a file, it might ask for permission. With three teammates working in parallel, that gets annoying fast.
Fix: Pre-approve common operations in your settings before spawning the team:
```json
{
  "permissions": {
    "allow": [
      "Read",
      "Glob",
      "Grep",
      "Bash(npm run *)",
      "Write(src/**)",
      "Edit(src/**)"
    ]
  }
}
```
The Lead Starts Doing Work Instead of Delegating
This happens. The lead decides it’s faster to just do a task itself rather than waiting for teammates.
Tell it to stop:
Wait for your teammates to complete their tasks before proceeding.
Focus on coordination, not implementation.
Or enable delegate mode (Shift+Tab) to enforce it.
Teammates Stop After Hitting an Error
Check their output with Shift+Up/Down, then either give them additional instructions to recover, or spawn a replacement:
The researcher teammate hit an error. Spawn a replacement and
assign them the remaining research tasks.
Orphaned Sessions After Cleanup
If you’re using tmux and a session persists after the team ends:
```shell
tmux ls
tmux kill-session -t <session-name>
```
Known Limitations
Agent teams are experimental. Be aware of these constraints:
- No session resumption. If you use `/resume`, your in-process teammates are gone. The lead might try to message teammates that no longer exist. Tell it to spawn new ones.
- Task status can lag. Teammates sometimes forget to mark tasks as complete, which blocks dependent tasks. Check manually and nudge when needed.
- One team per session. Clean up the current team before starting a new one.
- No nested teams. Teammates can’t spawn their own teams. Only the lead manages the team.
- The lead is permanent. You can’t promote a teammate to lead or transfer leadership.
- Permissions copy from the lead. All teammates start with the lead’s permission settings. You can change individual teammate permissions after spawning, but not at spawn time.
- Split panes need tmux or iTerm2. VS Code’s integrated terminal, Windows Terminal, and Ghostty don’t support split-pane mode.
Agent Teams vs Other Multi-Session Approaches
Agent teams aren’t the only way to run multiple Claude Code instances. Here’s how the options compare:
Git Worktrees (Manual Parallel Sessions)
```shell
git worktree add ../project-feature-a feature-a
git worktree add ../project-feature-b feature-b
# Run claude in each directory independently
```
Pros: Simple. No experimental features needed. Full control.
Cons: No communication between sessions. No shared task list. You’re the coordinator.
Subagents (Lightweight Delegation)
Subagents spawn inside your current session, do focused work, and report back. They’re faster to set up and cheaper on tokens.
Pros: Lower token cost. Results flow back automatically. No setup required.
Cons: Can’t talk to each other. No shared task list. Report back only to the main agent.
Agent Teams (Full Orchestration)
Pros: Inter-agent communication. Shared task management. Self-coordination. Competing hypotheses. Specialist perspectives.
Cons: Higher token cost. Experimental. Known limitations. Coordination overhead for simple tasks.
The rule of thumb: Start with a single session. If you need focused helpers, use subagents. If you need a team that communicates and self-coordinates, use agent teams.
The Bottom Line
Agent teams are the most powerful feature in Claude Code. They’re also the most expensive and the most likely to waste tokens if used wrong.
Use them when:
- Parallel work creates genuinely better results (research, reviews, debugging)
- Tasks have clean boundaries (different files, different domains)
- Inter-agent communication adds value (debate, building on findings)
Skip them when:
- Work is sequential
- Tasks are simple
- You’re editing the same files
- A single session handles it fine
The marketers and builders who figure out the right patterns for agent teams will have a genuine edge. Not because the technology is magic - but because coordinating parallel AI work is a skill, and most people haven’t developed it yet.
Start with a research task. Get comfortable with the controls. Then scale up to more complex workflows.
The leverage is real. Use it wisely.
Have questions about agent teams?
Start with the Claude Code guide for the fundamentals, then come back here when you’re ready to coordinate multiple sessions.
Related Resources
- Claude Code for Marketers - Complete Claude Code fundamentals
- AI Tools for Media Buyers - Your full AI workflow stack
- AI Content Workflow - Content creation with AI
- AI Site Architecture - Building sites with AI