Multi-Agent Architecture: Parallel Execution Patterns
How to leverage specialized agents working concurrently for 5-10x efficiency gains. Learn the official Task tool syntax, agent types, and patterns that make parallel execution work.
One of Claude Code’s most powerful features is multi-agent execution—the ability to spawn specialized agents that work in parallel. Used correctly, this can deliver 5-10x efficiency gains on complex tasks.
This guide covers the official Task tool syntax, built-in agent types, and real-world patterns for parallel execution.
Understanding the Task Tool
The Task tool is Claude Code’s mechanism for spawning subagents. According to the official documentation:
Official Syntax
```javascript
Task({
  subagent_type: "Explore",   // Required: "Explore" or "general-purpose"
  model: "haiku",             // Optional: "haiku", "sonnet", or "opus"
  // Required: task description
  prompt: `
    Explore authentication module (thoroughness: medium).
    Find all JWT-related functions and their usage.
  `,
  run_in_background: false    // Optional: run asynchronously
})
```
Available subagent_type Values
From the Claude Code GitHub:
| `subagent_type` | Purpose | Best Use Case |
|---|---|---|
| `Explore` | Fast codebase navigation powered by Haiku 4.5 | File search, pattern matching, structure analysis |
| `general-purpose` | Complex multi-step reasoning | Implementation, refactoring, code review |
Model Selection
```javascript
// Haiku 4.5 - Fast & cheap (default for Explore)
Task({ subagent_type: "Explore", model: "haiku", ... })

// Sonnet 4.5 - Balanced (default for general-purpose)
Task({ subagent_type: "general-purpose", model: "sonnet", ... })

// Opus 4.5 - Most capable (critical tasks)
Task({ subagent_type: "general-purpose", model: "opus", ... })
```
Cost/Speed Trade-offs:
- Haiku 4.5: 2x faster, 1/3 cost vs Sonnet
- Sonnet 4.5: Best coding performance, Extended Thinking support
- Opus 4.5: Highest intelligence, default Thinking Mode (v2.0.67+)
The Explore Agent Deep Dive
The Explore agent (introduced in v2.1.0) is specifically designed for fast codebase exploration.
Thoroughness Levels
```javascript
// Quick - 10-30 seconds
Task({
  subagent_type: "Explore",
  model: "haiku",
  prompt: "Explore auth module (thoroughness: quick). Find login handler."
})

// Medium - 30-60 seconds (recommended)
Task({
  subagent_type: "Explore",
  model: "haiku",
  prompt: "Explore auth module (thoroughness: medium). Map JWT flow and middleware."
})

// Very Thorough - 60-120 seconds
Task({
  subagent_type: "Explore",
  model: "haiku",
  prompt: "Explore auth module (thoroughness: very thorough). Complete security analysis."
})
```
Why Explore is More Efficient
Old approach (5 sequential steps):

```
1. Glob: find *auth*.ts       → 15 seconds
2. Grep: search "JWT"         → 15 seconds
3. Read: auth/index.ts        → 10 seconds
4. Grep: find authenticate()  → 15 seconds
5. Read: test files           → 10 seconds
Total: 65 seconds
```
New approach (1 Explore agent):

```javascript
Task({
  subagent_type: "Explore",
  model: "haiku",
  prompt: "Explore authentication (thoroughness: medium). Focus on JWT, middleware, tests."
})
// Total: 30-45 seconds, same or better results
```
Built-in Specialized Agents
Claude Code provides these specialized agent types:
| Agent | Role | When to Use | Recommended Model |
|---|---|---|---|
| `code-reviewer` | Code quality analysis | After implementation | Sonnet |
| `security-auditor` | Vulnerability detection | Auth/payment changes | Sonnet/Opus |
| `test-runner` | Test execution & analysis | After code changes | Haiku |
| `debugger` | Root cause analysis | Error investigation | Sonnet |
| `refactor-assistant` | Code improvement | Complexity reduction | Sonnet |
| `doc-writer` | Documentation | API changes | Haiku/Sonnet |
Sequential vs Parallel Execution
Sequential (slow):

```
Task 1 (30s) → Task 2 (30s) → Task 3 (30s) → Task 4 (30s)
Total: 120 seconds
```

Parallel (fast):

```
Task 1 (30s) ┐
Task 2 (30s) ├→ All complete in 30 seconds
Task 3 (30s) │
Task 4 (30s) ┘
Total: 30 seconds
```
The math: parallel execution time = max(individual times), not sum.
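This max-not-sum behavior can be demonstrated with plain promises. A minimal sketch (not the Task tool itself): four independent 30 ms "agents", scaled down from 30 s, launched together with `Promise.all`.

```typescript
// Simulated agent: sleeps for `ms`, standing in for real work.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function runAgent(name: string, ms: number): Promise<string> {
  await sleep(ms);
  return `${name} done`;
}

async function demo(): Promise<number> {
  const start = Date.now();
  // All four start in the same tick; Promise.all awaits them together,
  // so wall-clock time tracks the slowest task, not the total.
  await Promise.all([
    runAgent("Task 1", 30),
    runAgent("Task 2", 30),
    runAgent("Task 3", 30),
    runAgent("Task 4", 30),
  ]);
  return Date.now() - start; // ~30 ms rather than ~120 ms
}
```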
Core Patterns
Pattern 1: Analysis Swarm
Launch multiple Explore agents to analyze from different angles simultaneously.
Prompt: "I need to understand how user authentication works in this project."
Claude spawns 5 parallel agents:

```
→ Task 1 (Explore quick):  Map auth-related file structure
→ Task 2 (Explore quick):  Find all JWT/session references
→ Task 3 (Explore medium): Analyze middleware chain
→ Task 4 (Explore quick):  Identify auth configuration
→ Task 5 (Explore quick):  Review existing auth tests
```
The results are synthesized into a comprehensive overview.
Implementation:

```javascript
// All 5 agents launch simultaneously
Task({ subagent_type: "Explore", model: "haiku",
  prompt: "Map auth-related file structure (thoroughness: quick)" })
Task({ subagent_type: "Explore", model: "haiku",
  prompt: "Find all JWT and session references (thoroughness: quick)" })
Task({ subagent_type: "Explore", model: "haiku",
  prompt: "Analyze authentication middleware chain (thoroughness: medium)" })
Task({ subagent_type: "Explore", model: "haiku",
  prompt: "Find auth configuration files (thoroughness: quick)" })
Task({ subagent_type: "Explore", model: "haiku",
  prompt: "Review authentication test files (thoroughness: quick)" })
```
Use cases:
- Exploring unfamiliar codebases
- Understanding complex features
- Impact analysis before changes
- Technical debt assessment
Pattern 2: Divide and Conquer
Break a large task into independent subtasks that run in parallel.
Prompt: "Refactor the payment module to use the new API client."
Claude spawns agents per file:

```
→ Agent 1: Refactor payment/checkout.ts
→ Agent 2: Refactor payment/subscription.ts
→ Agent 3: Refactor payment/refund.ts
→ Agent 4: Update payment/types.ts
→ Agent 5: Update tests in payment/__tests__/
```

Each agent has context about the new API client pattern.
Implementation:

```javascript
// Shared context provided to all agents
const sharedContext = `
Migration context:
- Replace RestClient with new ApiClient from src/lib/api.ts
- Use new error handling pattern from src/lib/errors.ts
- Maintain backward compatibility for exported functions
`;

Task({ subagent_type: "general-purpose", model: "sonnet",
  prompt: `${sharedContext}\n\nRefactor payment/checkout.ts` })
Task({ subagent_type: "general-purpose", model: "sonnet",
  prompt: `${sharedContext}\n\nRefactor payment/subscription.ts` })
// ... etc
```
Use cases:
- Multi-file refactoring
- Batch updates (renaming, pattern changes)
- Large-scale migrations
- Documentation updates across files
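The per-file fan-out can also be generated programmatically: build one focused prompt per target file from the shared context, then hand each prompt to its own agent. A sketch (the file names and context text are illustrative, and this is ordinary TypeScript, not the Task tool API):

```typescript
// Shared migration rules every agent should see.
const sharedContext = [
  "Migration context:",
  "- Replace RestClient with the new ApiClient from src/lib/api.ts",
  "- Maintain backward compatibility for exported functions",
].join("\n");

// Illustrative target files for the refactor.
const targets = [
  "payment/checkout.ts",
  "payment/subscription.ts",
  "payment/refund.ts",
];

// One focused prompt per file; each would be passed to its own Task() call.
const prompts = targets.map((f) => `${sharedContext}\n\nRefactor ${f}`);
```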
Pattern 3: Implementation with Review
Build and review simultaneously to catch issues early.
Prompt: "Implement user profile editing with proper validation."
```
Phase 1 - Implementation (parallel):
→ Agent 1: Implement API endpoint
→ Agent 2: Create form component
→ Agent 3: Write validation logic

Phase 2 - Review (parallel, starts after Phase 1):
→ Agent 4 (security-auditor): Security review
→ Agent 5 (code-reviewer):    Quality check
→ Agent 6 (test-runner):      Verify coverage
```
Use cases:
- New feature development
- Critical code changes
- Security-sensitive implementations
- High-complexity features
Pattern 4: Multi-Perspective Review
Get different expert viewpoints on the same code.
Prompt: "Review PR #123 comprehensively."
Claude spawns specialized reviewers:

```
→ Agent 1 (code-reviewer):    Code quality and patterns
→ Agent 2 (security-auditor): Security vulnerabilities
→ Agent 3 (Explore):          Performance implications
→ Agent 4 (test-runner):      Test coverage analysis
→ Agent 5 (general-purpose):  Backward compatibility
```
Synthesized review with categorized findings.
Use cases:
- Code reviews
- Architecture decisions
- Technical proposals
- Dependency updates
Pattern 5: Bug Investigation
Parallel search when you don’t know where to look.
Prompt: "Users report 'undefined is not a function' on dashboard."
Claude spawns search agents:

```
→ Agent 1 (Explore): Search for error message in codebase
→ Agent 2 (Explore): Find recent dashboard changes
→ Agent 3 (Explore): Analyze dashboard dependencies
→ Agent 4 (Explore): Check for TypeScript errors
→ Agent 5 (Explore): Review related test failures
```
The first agent to find a strong lead guides the rest of the investigation.
Use cases:
- Bug hunting
- Understanding error origins
- Finding deprecated usage
- Tracing data flow
Best Practices
1. Choose the Right Model
Exploration/Search → Haiku 4.5
- File structure mapping
- Pattern searching
- Simple analysis
- Cost: ~$0.001 per task
Complex reasoning → Sonnet 4.5
- Code review
- Architecture planning
- Implementation
- Cost: ~$0.003 per task
Critical decisions → Opus 4.5
- Security analysis
- Complex refactoring
- Architectural decisions
- Cost: ~$0.015 per task
2. Keep Agents Focused
Each agent should have a single, clear objective.
Too broad (bad):

"Analyze the entire codebase and find all issues"

Focused (good):

```javascript
Task({ prompt: "Find all usages of deprecated API v1" })
Task({ prompt: "Check for missing error handling in API routes" })
Task({ prompt: "Identify components without prop validation" })
```
3. Provide Shared Context
Ensure all agents have the context they need:
```javascript
const sharedContext = `
Context for all agents:
- We're migrating from REST to GraphQL
- Target files are in src/api/
- Use the new ApiClient from src/lib/api.ts
- Follow error handling patterns in src/lib/errors.ts
`;

Task({ prompt: `${sharedContext}\n\nTask 1: ...` })
Task({ prompt: `${sharedContext}\n\nTask 2: ...` })
```
4. Handle Background Tasks
For long-running tasks, use run_in_background:
```javascript
Task({
  subagent_type: "general-purpose",
  model: "sonnet",
  prompt: "Comprehensive security audit of entire codebase",
  run_in_background: true // Returns immediately, runs async
})

// Check on it later
TaskOutput({ task_id: "...", block: false })
```
5. Plan for Synthesis
Multiple agents produce multiple outputs. Plan how to combine them:
After parallel analysis:
1. Collect findings from all agents
2. Deduplicate overlapping discoveries
3. Prioritize by severity/impact
4. Create actionable summary
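The four steps above can be sketched in ordinary code. This is a hypothetical illustration: the `Finding` shape and the numeric severity scale are assumptions for the example, not part of the Task tool API.

```typescript
// One finding reported by an agent; higher severity = more urgent.
type Finding = { file: string; issue: string; severity: number };

function synthesize(agentOutputs: Finding[][]): Finding[] {
  // 1. Collect findings from all agents
  const all = agentOutputs.flat();
  // 2. Deduplicate overlapping discoveries (same file + issue),
  //    keeping the highest-severity copy
  const seen = new Map<string, Finding>();
  for (const f of all) {
    const key = `${f.file}::${f.issue}`;
    const prev = seen.get(key);
    if (!prev || f.severity > prev.severity) seen.set(key, f);
  }
  // 3. Prioritize by severity/impact; the sorted list is the
  // 4. actionable summary
  return [...seen.values()].sort((a, b) => b.severity - a.severity);
}
```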
Real-World Examples
Feature Development Workflow
User: "Implement a notification system for order updates."
```
Phase 1 - Discovery (5 parallel Explore agents):
→ Map existing notification patterns
→ Find email/push notification code
→ Analyze order state machine
→ Review notification templates
→ Check existing event handlers

Phase 2 - Design (sequential, needs Phase 1 results):
→ Plan agent: Design notification architecture

Phase 3 - Implementation (4 parallel agents):
→ Create notification service
→ Add order event listeners
→ Build email templates
→ Write unit tests

Phase 4 - Review (3 parallel agents):
→ security-auditor: Check for data leaks
→ code-reviewer:    Review patterns
→ test-runner:      Verify coverage
```
Bug Investigation
User: "Production error: 'Payment failed' but money was charged."
Parallel investigation (5 Explore agents):

```
→ Search payment logs for error pattern
→ Analyze payment service error handling
→ Check Stripe webhook handlers
→ Review recent payment changes
→ Find similar issues in error tracking
```
Results:
- Agent 3 finds: Webhook handler doesn't retry on timeout
- Agent 4 confirms: Recent change added new timeout logic
- Agent 1 shows: Pattern started after deploy on Jan 5
Root cause identified in about a minute, versus 10+ minutes of sequential investigation.
Codebase Audit
User: "Audit for security issues and tech debt."
Parallel audit (8 agents):

```
Security team:
→ security-auditor: SQL injection patterns
→ security-auditor: XSS vulnerabilities
→ security-auditor: Authentication issues
→ security-auditor: Sensitive data exposure

Quality team:
→ code-reviewer: Code duplication
→ code-reviewer: Complexity hotspots
→ test-runner:   Coverage gaps
→ Explore:       Outdated dependencies
```
All 8 agents work simultaneously.
Results categorized and prioritized.
Performance Considerations
When Parallel Helps Most
- Tasks that are truly independent
- Operations that are I/O bound (file reading, API calls)
- Analysis benefiting from multiple perspectives
- Large surface area (many files, many patterns)
When Parallel Helps Less
- Tasks with strong dependencies (A must finish before B)
- Very quick tasks (overhead exceeds benefit)
- Tasks requiring deep sequential reasoning
- Limited scope (just one file or function)
Overhead Awareness
Parallel execution has overhead:
- Agent initialization: ~1-2 seconds each
- Context sharing cost
- Result synthesis time
For tasks under 5 seconds, sequential may be faster.
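This trade-off can be put in a back-of-the-envelope formula: parallel wall-clock time is roughly the slowest task plus per-run overhead, versus the sum of all tasks when run sequentially. A sketch using this guide's rough numbers (~1-2 s spin-up, some synthesis time); the constants are assumptions to tune against what you observe:

```typescript
// Returns true if the parallel estimate beats the sequential estimate.
function parallelWorthIt(
  taskSecs: number[],
  spinUpSecs = 1.5,    // assumed agent initialization cost
  synthesisSecs = 2    // assumed cost of combining results
): boolean {
  const sequential = taskSecs.reduce((sum, t) => sum + t, 0);
  // Parallel wall-clock = slowest task + per-run overhead
  const parallel = Math.max(...taskSecs) + spinUpSecs + synthesisSecs;
  return parallel < sequential;
}
```

Four 30-second tasks clearly qualify (about 33.5 s parallel vs 120 s sequential); two 3-second tasks do not, matching the rule of thumb above.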
Getting Started
Today:
- Try one parallel Explore swarm: “Find all usages of X in the codebase”
- Notice the speed difference vs sequential exploration
This week:
- Use Analysis Swarm for understanding a complex feature
- Experiment with the Divide and Conquer pattern
This month:
- Develop patterns specific to your workflow
- Identify which tasks benefit most from parallelization
- Optimize model selection for different agent types
Quick Reference
Task Tool Template
```javascript
Task({
  subagent_type: "Explore" | "general-purpose",
  model: "haiku" | "sonnet" | "opus",
  prompt: "Clear, focused task description",
  run_in_background: true | false
})
```
Model Selection Guide
| Task Type | Model | Why |
|---|---|---|
| File search | haiku | Fast, cheap |
| Pattern matching | haiku | Fast, cheap |
| Code review | sonnet | Balanced |
| Implementation | sonnet | Balanced |
| Security audit | sonnet/opus | Thorough |
| Architecture | opus | Most capable |
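If you want this table in code, a hypothetical lookup works; the task-type keys are this guide's shorthand (not an official Claude Code API), and where the table lists two models the heavier option is chosen:

```typescript
type Model = "haiku" | "sonnet" | "opus";

// Encodes the model-selection table above.
const modelFor: Record<string, Model> = {
  "file-search": "haiku",      // fast, cheap
  "pattern-matching": "haiku", // fast, cheap
  "code-review": "sonnet",     // balanced
  "implementation": "sonnet",  // balanced
  "security-audit": "opus",    // thorough (table says sonnet/opus)
  "architecture": "opus",      // most capable
};
```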
Parallel Execution Rule
```
Independent tasks → Launch simultaneously
Dependent tasks   → Run sequentially
Mixed             → Phase approach (parallel within, sequential between)
```
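The "parallel within, sequential between" rule reduces to a small orchestration sketch. Here `dispatch()` is a hypothetical stand-in for a Task() call, and the phase labels are illustrative:

```typescript
// Stand-in for dispatching one agent; a real agent would do work here.
async function dispatch(label: string): Promise<string> {
  return `${label}: done`;
}

// Parallel within: every task in a phase starts together.
function runPhase(labels: string[]): Promise<string[]> {
  return Promise.all(labels.map(dispatch));
}

async function pipeline(): Promise<string[]> {
  // Sequential between: each phase awaits the previous phase's results.
  const discovery = await runPhase(["map files", "find patterns"]);
  const implementation = await runPhase(["build endpoint", "build component"]);
  const review = await runPhase(["security review", "quality review"]);
  return [...discovery, ...implementation, ...review];
}
```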
Multi-agent architecture transforms how you interact with Claude Code. Instead of sequential prompts, you orchestrate parallel workflows that complete in a fraction of the time.
Sources: Claude Code Documentation, Claude Code GitHub, CHANGELOG