claude-code-skills-reference/deep-research/references/subagent_prompt.md
daymade 6d261ce801 feat(deep-research): V6.1 source accessibility policy and Counter-Review Team
- Correct source accessibility: distinguish circular verification (forbidden)
  from exclusive information advantage (encouraged)
- Add Counter-Review Team with 5 specialized agents (claim-validator,
  source-diversity-checker, recency-validator, contradiction-finder,
  counter-review-coordinator)
- Add Enterprise Research Mode: 6-dimension data collection framework
  with SWOT, competitive barrier, and risk matrix analysis
- Update version to 2.4.0
- Add comprehensive reference docs:
  - source_accessibility_policy.md
  - V6_1_improvements.md
  - counter_review_team_guide.md
  - enterprise_analysis_frameworks.md
  - enterprise_quality_checklist.md
  - enterprise_research_methodology.md
  - quality_gates.md
  - report_template_v6.md
  - research_notes_format.md
  - subagent_prompt.md

Based on methodology lessons learned from the "深度推理" ("deep reasoning") case study.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-04 09:15:17 +08:00


# Subagent Prompt Template

This file defines the prompt structure sent to each research subagent. The lead agent fills in the {variables} and dispatches the completed prompt to the subagent.
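A minimal sketch of the fill-in step, assuming the template is loaded as a string. The helper name and the selective substitution are illustrative assumptions: a plain `str.format` over the whole template would raise `KeyError` on the literal `{Title}`/`{URL}` placeholders that must survive for the subagent, so this sketch substitutes only known keys.

```python
import re

def build_prompt(template: str, variables: dict) -> str:
    # Substitute only the {variables} the lead agent knows about;
    # leave unknown {placeholders} (e.g. {Title}, {URL} in the
    # output-format section) intact for the subagent to fill.
    return re.sub(
        r"\{(\w+)\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

prompt = build_prompt(
    "You are a research specialist with the role: {role}.\n"
    "Cite sources as: [1] {Title} | {URL}",
    {"role": "market-landscape analyst"},
)
```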

## Prompt

You are a research specialist with the role: {role}.

## Your Task

{objective}

## Search Queries (start with these, adjust as needed)

1. {query_1}
2. {query_2}
3. {query_3} (optional)

## Instructions

1. Run 2-4 web searches using the queries above (and variations).
2. For the best 2-3 results, use web_fetch to read the full article.
3. For each discovered source, assign:
   - Source-Type: official|academic|secondary-industry|journalism|community|other
   - As Of: YYYY-MM or YYYY (publication date or last verified)
4. Assess each source's authority (1-10 scale).
5. Write ALL findings to the file: {output_path}
6. Record at least one explicit counter-claim candidate in `Gaps`.
7. Use EXACTLY the format below. Do not deviate.

## Output Format (write this to {output_path})

---
task_id: {task_id}
role: {role}
status: complete
sources_found: {N}
---

## Sources

[1] {Title} | {URL} | Source-Type: {Type} | As Of: {YYYY-MM-or-YYYY} | Authority: {score}/10
[2] {Title} | {URL} | Source-Type: {Type} | As Of: {YYYY-MM-or-YYYY} | Authority: {score}/10
...

## Findings

- {Specific fact, with source number}. [1]
- {Specific fact, with source number and confidence}. [2]
- {Another fact}. [1]
... (max 10 findings, each one sentence, each with source number)

## Deep Read Notes

### Source [1]: {Title}
Key data: {specific numbers, dates, percentages extracted from full text}
Key insight: {the one thing this source contributes that others don't}
Useful for: {which aspect of the broader research question}

### Source [2]: {Title}
Key data: ...
Key insight: ...
Useful for: ...

## Gaps

- {What you searched for but could NOT find}
- {Alternative interpretation or methodological limitation}

## END

Do not include any content after the Gaps section.
Do not summarize your process. Write the findings file and stop.
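Because the output format is fixed, the lead agent can machine-check each returned notes file before synthesis. A minimal validation sketch follows; the helper name and the exact set of checks are illustrative assumptions, not part of the template:

```python
import re

REQUIRED_SECTIONS = ["## Sources", "## Findings", "## Deep Read Notes",
                     "## Gaps", "## END"]

def validate_notes(text: str) -> list:
    """Return a list of problems found in a subagent notes file (empty = OK)."""
    problems = []
    # Front matter must be present and declare status: complete
    if not re.search(r"^---\n.*?^status: complete$.*?^---$", text,
                     re.S | re.M):
        problems.append("missing or incomplete front matter")
    for section in REQUIRED_SECTIONS:
        if section not in text:
            problems.append("missing section: " + section)
    # Every finding bullet must carry a [n] source number
    findings = re.search(r"## Findings\n(.*?)\n## ", text, re.S)
    if findings:
        for line in findings.group(1).splitlines():
            if line.startswith("- ") and not re.search(r"\[\d+\]", line):
                problems.append("finding without source number: " + line[:40])
    return problems
```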

## Depth Levels

DEEP — fetch 2-3 full articles with web_fetch and write detailed Deep Read Notes. Use for: core tasks where specific data points and expert analysis are critical.

SCAN — rely mainly on search snippets; fetch at most 1 article. Use for: supplementary tasks such as source mapping.
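The two levels can be carried as a small dispatch-time config the lead agent attaches to each task. The numbers come from the descriptions above and the "Run 2-4 web searches" instruction; the dataclass itself is an illustrative assumption:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DepthConfig:
    searches: tuple        # (min, max) web searches to run
    max_fetches: int       # full-article web_fetch budget
    deep_read_notes: bool  # whether detailed Deep Read Notes are required

DEPTH = {
    # DEEP: fetch 2-3 full articles, write detailed Deep Read Notes
    "DEEP": DepthConfig(searches=(2, 4), max_fetches=3, deep_read_notes=True),
    # SCAN: rely on snippets, at most 1 fetch, no deep notes required
    "SCAN": DepthConfig(searches=(2, 4), max_fetches=1, deep_read_notes=False),
}
```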

## Environment-Specific Dispatch

### Claude Code

```sh
# Single task
claude -p "$(cat workspace/prompts/task-a.md)" \
  --allowedTools web_search,web_fetch,write \
  > workspace/research-notes/task-a.md

# Parallel dispatch
for task in a b c; do
  claude -p "$(cat workspace/prompts/task-${task}.md)" \
    --allowedTools web_search,web_fetch,write \
    > workspace/research-notes/task-${task}.md &
done
wait
```

### Cowork

Spawn subagent tasks via the subagent dispatch mechanism.

### DeerFlow / OpenClaw

Use the task tool:

```python
task(
    prompt=task_a_prompt,
    tools=["web_search", "web_fetch", "write_file"],
    output_path="workspace/research-notes/task-a.md",
)
```