feat(deep-research): V6.1 source accessibility policy and Counter-Review Team
- Correct source accessibility: distinguish circular verification (forbidden) from exclusive information advantage (encouraged)
- Add Counter-Review Team with 5 specialized agents (claim-validator, source-diversity-checker, recency-validator, contradiction-finder, counter-review-coordinator)
- Add Enterprise Research Mode: 6-dimension data collection framework with SWOT, competitive barrier, and risk matrix analysis
- Update version to 2.4.0
- Add comprehensive reference docs:
  - source_accessibility_policy.md
  - V6_1_improvements.md
  - counter_review_team_guide.md
  - enterprise_analysis_frameworks.md
  - enterprise_quality_checklist.md
  - enterprise_research_methodology.md
  - quality_gates.md
  - report_template_v6.md
  - research_notes_format.md
  - subagent_prompt.md

Based on "深度推理" case study methodology lessons learned.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

---
name: deep-research
description: |
  Generate format-controlled research reports with evidence tracking, citations, source governance, and multi-pass synthesis.
  This skill should be used when users request a research report, literature review, market or industry analysis,
  competitive landscape, policy or technical brief. Triggers: "帮我调研一下", "深度研究", "综述报告", "深入分析",
  "research this topic", "write a report on", "survey the literature on", "competitive analysis of",
  "技术选型分析", "竞品研究", "政策分析", "行业报告".
  V6 adds: source-type governance, AS_OF freshness checks, mandatory counter-review, and a citation registry. V6.1 adds: source accessibility rules (circular verification forbidden, exclusive advantage encouraged).
---

# Deep Research

Create high-fidelity research reports with strict format control, evidence mapping, source governance, and multi-pass synthesis.

## Architecture: Lead Agent + Subagents

```
Lead Agent (coordinator — minimizes raw search context)

  P0: Environment + source policy setup
  P1: Research Task Board (roles, queries, parallel groups)

  P2: Dispatch ──→ Subagent A ──→ writes task-a.md ──┐
               ──→ Subagent B ──→ writes task-b.md ──┤ (parallel)
               ──→ Subagent C ──→ writes task-c.md ──┘
                       research-notes/ <─────────────┘

  P3: Build citation registry with source_type + as_of + authority
  P4: Evidence-mapped outline with counter-claim flags
  P5: Draft from notes (never from raw search results)
  P6: Counter-review (claims, confidence, alternatives)
  P7: Verify (every [n] in registry, traceability check) → final report with confidence markers
```

**Context efficiency:** Subagents' raw search results stay in their own context and are discarded. The lead agent sees only distilled notes (~60-70% context reduction).

## Mode Selection

Determine the research mode before starting:

| Dimension | Options |
|-----------|---------|
| **Topic Mode** | Enterprise Research (company/corporation) OR General Research (industry/policy/tech) |
| **Depth Mode** | Standard (5-6 tasks, 3000-8000 words) OR Lightweight (3-4 tasks, 2000-4000 words) |

- **Enterprise Research Mode**: six-dimension data collection with structured analysis frameworks (SWOT, risk matrix, competitive barrier quantification)
- **General Research Mode**: standard P0-P7 research pipeline with source governance
- **Depth Selection**: Lightweight for a single entity or concept and requests under 30 words; Standard for multi-entity comparisons or "深入"/"comprehensive" requests

## Source Governance (V6)

### Source Accessibility Classification

**CRITICAL RULE**: Every source must be classified by accessibility:

| Accessibility | Definition | Examples | Usage Rule |
|--------------|------------|----------|------------|
| `public` | Available to any external researcher without authentication | Public websites, news articles, WHOIS (without privacy), academic papers | ✅ Always allowed |
| `semi-public` | Requires registration or limited access | LinkedIn profiles, Crunchbase basic, industry reports (free tier) | ✅ Allowed with disclosure |
| `exclusive-user-provided` | User's paid subscriptions, private APIs, proprietary databases | Crunchbase Pro, PitchBook, private data feeds, internal databases | ✅ **ALLOWED** for third-party research |
| `private-user-owned` | User's own accounts when researching themselves | User's registrar for user's own company, user's bank for user's own finances | ❌ **FORBIDDEN** - circular verification |

**⚠️ CIRCULAR VERIFICATION BAN**: You must NOT:
- Use the user's private data to "discover" what they already know about themselves
- Research the user's own company by accessing the user's private accounts
- Present the user's private knowledge as "research findings"

**✅ EXCLUSIVE INFORMATION ADVANTAGE**: You SHOULD:
- Use the user's Crunchbase Pro to research competitors
- Use the user's proprietary databases for market research
- Use the user's private APIs for investment analysis
- Leverage any exclusive source the user provides for third-party research

### Source Type Labels

Every source MUST also be tagged with:

| Label | Definition | Examples |
|-------|------------|----------|
| `official` | Primary source, official documentation | Company SEC filings, government reports, official blog |
| `academic` | Peer-reviewed research | Journal articles, conference papers, dissertations |
| `secondary-industry` | Professional analysis | Industry reports, analyst coverage, trade publications |
| `journalism` | News reporting | Reputable media outlets, investigative journalism |
| `community` | User-generated content | Forums, reviews, social media, Q&A sites |
| `other` | Uncategorized or mixed | Aggregators, unverified sources |

**Quality Gates:**
- Standard mode: ≥30% official sources in the final approved set
- Lightweight mode: ≥20% official sources
- Maximum single-source share: ≤25% (Standard), ≤30% (Lightweight)
- Minimum unique domains: 5 (Standard), 3 (Lightweight)

## AS_OF Date Policy

Set the `AS_OF` date explicitly at P0. For all time-sensitive claims:
- Include the source publication date with every citation
- Downgrade confidence if the source is older than the relevant horizon
- Flag stale sources in the registry (studies >3 years; news >6 months for fast-moving topics)
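
The quality gates and AS_OF staleness rules above are mechanical enough to sketch in code. The following is a minimal, hypothetical checker, not part of the skill's reference files: the source dict fields (`url`, `source_type`, `as_of`) and the exact return shape are assumptions, while the thresholds mirror the Quality Gates list.

```python
from collections import Counter
from datetime import date
from urllib.parse import urlparse

# Thresholds copied from the Quality Gates list above.
GATES = {
    "standard":    {"min_sources": 12, "min_domains": 5, "min_official": 0.30, "max_share": 0.25},
    "lightweight": {"min_sources": 6,  "min_domains": 3, "min_official": 0.20, "max_share": 0.30},
}

def check_quality_gates(sources, mode="standard", as_of=None, stale_years=3):
    """Return a list of gate violations; an empty list means all gates pass."""
    g = GATES[mode]
    domains = Counter(urlparse(s["url"]).netloc for s in sources)
    official = sum(1 for s in sources if s["source_type"] == "official")
    issues = []
    if len(sources) < g["min_sources"]:
        issues.append(f"only {len(sources)} approved sources (need >={g['min_sources']})")
    if len(domains) < g["min_domains"]:
        issues.append(f"only {len(domains)} unique domains (need >={g['min_domains']})")
    if sources and official / len(sources) < g["min_official"]:
        issues.append(f"official share {official / len(sources):.0%} below {g['min_official']:.0%}")
    if sources and max(domains.values()) / len(sources) > g["max_share"]:
        issues.append(f"single-domain share exceeds {g['max_share']:.0%}")
    if as_of:  # AS_OF staleness is a flag for confidence downgrade, not a hard failure
        for s in sources:
            if (as_of - s["as_of"]).days > stale_years * 365:
                issues.append(f"stale source: {s['url']} ({s['as_of']})")
    return issues
```

Running the check at P3 (after the registry is approved) and again at P7 catches gate regressions introduced by dropped sources.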

## P0: Environment & Policy Setup

Check capabilities before starting:

| Check | Requirement | Impact if Missing |
|-------|-------------|-------------------|
| web_search available | Required | Stop - cannot proceed |
| web_fetch available | Required for DEEP tasks | SCAN-only mode |
| Subagent dispatch | Preferred | Degrade to sequential |
| Filesystem writable | Required | In-memory notes only |

Set policy variables:
- `AS_OF`: Today's date (YYYY-MM-DD) - mandatory for timed topics
- `MODE`: Standard (default) or Lightweight
- `SOURCE_TYPE_POLICY`: Enforce official/academic/secondary/journalism/community/other labels
- `COUNTER_REVIEW_PLAN`: What opposing interpretation to test

Report: `[P0 complete] Subagent: {yes/no}. Mode: {standard/lightweight}. AS_OF: {YYYY-MM-DD}.`
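
One way to make the P0 policy variables concrete is a small config constructor that rejects malformed values up front. This is an illustrative sketch only: the function name and dict schema are invented here, not defined by the skill's reference files.

```python
import re

def make_p0_config(as_of, mode="standard", counter_review_plan=""):
    """Build and validate the P0 policy variables described above."""
    if not re.fullmatch(r"\d{4}-\d{2}-\d{2}", as_of):
        raise ValueError("AS_OF must be YYYY-MM-DD")
    if mode not in ("standard", "lightweight"):
        raise ValueError("MODE must be standard or lightweight")
    return {
        "AS_OF": as_of,
        "MODE": mode,
        "SOURCE_TYPE_POLICY": ["official", "academic", "secondary-industry",
                               "journalism", "community", "other"],
        "COUNTER_REVIEW_PLAN": counter_review_plan,
    }
```

Validating `AS_OF` at P0 prevents the later AS_OF freshness checks from silently comparing against a malformed date.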

When researching a specific company/enterprise, follow the specialized workflow below. It ensures six-dimension coverage, quantified analysis frameworks, and three-level quality control.

### Enterprise Workflow Overview

```
Enterprise Research Progress:
- [ ] E1: Intake — confirm company entity, research depth, format contract
- [ ] E2: Six-dimension data collection (parallel where possible)
  - [ ] D1: Company fundamentals (entity, founding, funding, ownership)
  - [ ] D2: Business & products (segments, products, revenue structure)
  - [ ] D3: Competitive position (industry rank, competitors, barriers)
  - [ ] D4: Financial & operations (3-year financials, efficiency metrics)
  - [ ] D5: Recent developments (6-month events, strategic signals)
  - [ ] D6: Internal/proprietary sources (or note limitation)
- [ ] E3: Structured analysis frameworks
  - [ ] SWOT analysis (evidence-backed, 4 quadrants × 3-5 entries)
  - [ ] Competitive barrier quantification (7 dimensions, weighted score)
  - [ ] Risk matrix (8 categories, probability × impact)
  - [ ] Comprehensive scorecard (6 dimensions, weighted total)
- [ ] E4: L1/L2/L3 quality checks at each stage transition
- [ ] E5: Draft report using 7-chapter enterprise template
- [ ] E6: Multi-pass drafting + UNION merge (same as general P4-P7)
- [ ] E7: Present draft for human review and iterate
```

If a user provides a template or an example report, treat it as a hard constraint and mirror the structure.

## P1: Research Task Board

Decompose the research question into 4-6 investigation tasks (Standard) or 3-4 tasks (Lightweight).

Each task assignment includes:
- **Expert Role**: Specialist persona (e.g., "Policy Historian", "Ecosystem Mapper")
- **Objective**: One-sentence investigation goal
- **Queries**: 2-3 pre-planned search queries
- **Depth**: DEEP (fetch 2-3 full articles) or SCAN (snippets sufficient)
- **Output**: Path to the research notes file
- **Parallel Group**: Group A (independent) or Group B (depends on Group A)

### Task Decomposition Rules

1. Each task covers one coherent sub-topic a specialist would own
2. Group A tasks must be independent and source-diverse
3. Max 3 tasks per parallel group (concurrency limit)
4. Every task must flag time-sensitive claims and expected citation aging risk
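
The task-assignment fields and decomposition rules above can be sketched as a tiny data model plus a validator. This is a hypothetical illustration: the class and field names follow the bullets above but no such schema is mandated by the skill.

```python
from dataclasses import dataclass

@dataclass
class ResearchTask:
    task_id: str
    expert_role: str
    objective: str
    queries: list      # 2-3 pre-planned search queries
    depth: str         # "DEEP" or "SCAN"
    output_path: str   # research notes file, e.g. "task-a.md"
    group: str         # "A" (independent) or "B" (depends on A)

def validate_task_board(tasks, mode="standard"):
    """Check the board against the decomposition rules; return violations."""
    lo, hi = (4, 6) if mode == "standard" else (3, 4)
    issues = []
    if not lo <= len(tasks) <= hi:
        issues.append(f"{len(tasks)} tasks (expected {lo}-{hi} for {mode})")
    for grp in ("A", "B"):
        n = sum(1 for t in tasks if t.group == grp)
        if n > 3:
            issues.append(f"group {grp} has {n} tasks (max 3 concurrent)")
    for t in tasks:
        if not 2 <= len(t.queries) <= 3:
            issues.append(f"{t.task_id}: {len(t.queries)} queries (expected 2-3)")
        if t.depth not in ("DEEP", "SCAN"):
            issues.append(f"{t.task_id}: unknown depth {t.depth!r}")
    return issues
```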

### Enterprise Research Integration

When in Enterprise Research Mode, the task board maps to the six dimensions:
- Task A: Company fundamentals (entity, founding, funding, ownership)
- Task B: Business & products (segments, products, revenue structure)
- Task C: Competitive position (industry rank, competitors, barriers)
- Task D: Financial & operations (3-year financials, efficiency metrics)
- Task E: Recent developments (6-month events, strategic signals)
- Task F: Internal/proprietary sources (or document the limitation)

Report: `[P1 complete] {N} tasks in {M} groups. Dispatching Group A.`

---

## Enterprise Research Mode (Specialized Pipeline)

### E1: Intake

Same as P0/P1 above, plus:
- Confirm the exact legal entity being researched (parent vs subsidiary)
- Select research depth: Quick scan (3-5 pages) / Standard (10-20 pages) / Deep (20-40 pages)
- Identify any specific comparison targets (benchmark companies)

## P2: Dispatch + Investigate

Subagents execute tasks using [references/subagent_prompt.md](references/subagent_prompt.md) and write output per [references/research_notes_format.md](references/research_notes_format.md).

### With Subagents (Claude Code / Cowork / DeerFlow)

1. Dispatch Group A tasks in parallel (max 3 concurrent)
2. Each subagent searches, fetches, and tags source types
3. Every source line includes `Source-Type` and `As Of`
4. Wait for Group A completion
5. Dispatch Group B (can read Group A notes)

### Subagent Output Requirements

Each task-{id}.md must contain:
- **Sources section**: URLs from actual search results, with Source-Type, As Of, and Authority (1-10)
- **Findings section**: Max 10 one-sentence facts with source numbers
- **Deep Read Notes** (DEEP tasks): 2-3 sources read in full, with key data/insights
- **Gaps section**: What was searched but NOT found, plus alternative interpretations
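
A lead agent can lint each returned task file before accepting it. The sketch below is a rough, hypothetical linter: the heading strings and the line-level `Source-Type` convention are assumptions based on the output requirements listed above, and the findings count is only approximated by counting numbered lines.

```python
import re

REQUIRED_SECTIONS = ["## Sources", "## Findings", "## Gaps"]

def lint_task_notes(text, depth="SCAN"):
    """Return a list of problems with a subagent's task-{id}.md content."""
    issues = []
    required = REQUIRED_SECTIONS + (["## Deep Read Notes"] if depth == "DEEP" else [])
    for heading in required:
        if heading not in text:
            issues.append(f"missing section: {heading}")
    # Approximate the findings count by numbered lines ("1. ", "2. ", ...).
    findings = re.findall(r"^\d+\.\s", text, flags=re.M)
    if len(findings) > 10:
        issues.append(f"{len(findings)} findings (max 10)")
    # Every raw URL line should carry a Source-Type tag.
    for line in text.splitlines():
        if line.startswith("http") and "Source-Type:" not in line:
            issues.append(f"source missing Source-Type: {line[:40]}")
    return issues
```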

### Without Subagents (Degraded Mode)

The lead agent executes tasks sequentially, acting as each specialist. Raw search results are discarded after writing notes.

### Enterprise Research: Six-Dimension Collection

Follow [references/enterprise_research_methodology.md](references/enterprise_research_methodology.md) for:
- Detailed collection workflow per dimension (query strategies, data fields, validation)
- Data source priority matrix (P0-P3 ranking)
- Cross-validation rules (minimum sources, maximum deviation thresholds)

**Key principles:**
- Evidence-driven: every conclusion must trace to a citable source
- Multi-source validation: key data requires ≥2 independent sources
- Restrained judgment: mark speculation explicitly, avoid unsubstantiated claims
- Structured presentation: convey complex information via tables, lists, and hierarchies

Run the L1 quality check after completing each dimension (see [references/enterprise_quality_checklist.md](references/enterprise_quality_checklist.md)).

Status per task: `[P2 task-{id} complete] {N} sources, {M} findings.`
Status for all: `[P2 complete] {N} tasks done, {M} total sources. Building registry.`

### E3: Structured Analysis Frameworks

Apply the frameworks from [references/enterprise_analysis_frameworks.md](references/enterprise_analysis_frameworks.md) in order:
1. **SWOT analysis** — each entry with evidence + source + impact assessment
2. **Competitive barrier quantification** — 7 dimensions with weighted scoring → A+/A/B+/B/C+/C rating
3. **Risk matrix** — 8 mandatory categories, probability × impact → Red/Yellow/Green
4. **Comprehensive scorecard** — 6-dimension weighted total → X/10

Run the L2 quality check after the analysis is complete.
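
The weighted-score and grading mechanics behind frameworks 2-4 can be sketched as below. Everything here is placeholder arithmetic: the dimension names, weights, letter-grade cut-offs, and Red/Yellow/Green thresholds are invented for illustration; enterprise_analysis_frameworks.md defines the real ones.

```python
def weighted_score(scores, weights):
    """scores/weights are dicts keyed by dimension; scores on a 0-10 scale."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[d] * weights[d] for d in weights)

def barrier_rating(score):
    # Placeholder cut-offs for the A+/A/B+/B/C+/C barrier rating.
    for cutoff, grade in [(9, "A+"), (8, "A"), (7, "B+"), (6, "B"), (5, "C+")]:
        if score >= cutoff:
            return grade
    return "C"

def risk_color(probability, impact):
    # probability and impact on a 1-5 scale; thresholds are illustrative.
    exposure = probability * impact
    return "Red" if exposure >= 15 else "Yellow" if exposure >= 8 else "Green"
```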

### E4: Quality Control

Three-level checks from [references/enterprise_quality_checklist.md](references/enterprise_quality_checklist.md):
- **L1 (Data)**: Source count, attribution, cross-validation, timeliness
- **L2 (Analysis)**: SWOT completeness, risk coverage, barrier scoring, conclusion support
- **L3 (Document)**: Structure compliance, format consistency, readability, appendices

### E5: Draft Using Enterprise Template

Use the 7-chapter enterprise report template from [references/enterprise_quality_checklist.md](references/enterprise_quality_checklist.md):
1. Company Overview
2. Business & Product Structure
3. Market & Competitive Position
4. Financial & Operations Analysis
5. Risks & Concerns
6. Recent Developments
7. Comprehensive Assessment & Conclusion

Plus appendices: Data Source Index, Glossary, Disclaimer.

### E6-E7: Multi-Pass Drafting and Review

Same as P4-P7 below.
---

## P3: Citation Registry + Source Governance

The lead agent reads all task notes and builds a unified registry.

### Registry Process

1. Read every task file's `## Sources` section
2. Merge all sources, deduplicating by URL
3. Assign sequential [n] numbers by first appearance
4. Tag: source_type, as_of date, authority score (1-10), task id
5. **Apply quality gates:**
   - Standard: ≥12 approved sources, ≥5 unique domains, ≥30% official
   - Lightweight: ≥6 approved sources, ≥3 unique domains, ≥20% official
   - Max single-source share: ≤25% (Standard), ≤30% (Lightweight)
6. **Drop sources** below threshold and list them explicitly
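
Steps 2, 3, and 6 above reduce to a small merge-and-filter pass. The sketch below is one possible shape: the input dict fields and the authority cut-off are assumptions, and real gate checks (official share, domain counts) would run on the approved list afterwards.

```python
def build_registry(task_sources, min_authority=4):
    """Merge sources in first-appearance order; dedupe by URL; split approved/dropped."""
    approved, dropped, seen = [], [], set()
    for src in task_sources:                 # iteration order = first appearance
        url = src["url"].strip()
        if url in seen:
            continue                         # dedupe by URL, keep the first copy
        seen.add(url)
        if src.get("accessibility") == "private-user-owned":
            dropped.append((src, "CIRCULAR - USER ALREADY KNOWS"))
        elif src.get("authority", 0) < min_authority:
            dropped.append((src, f"authority below {min_authority}"))
        else:
            src["n"] = len(approved) + 1     # sequential [n]; FINAL once assigned
            approved.append(src)
    return approved, dropped
```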

### Registry Output Format

```
CITATION REGISTRY

Approved:
[1] Author/Org — Title | URL | Source-Type: official | Accessibility: public | Date: 2026-03-01 | Auth: 8 | task-a
[2] ...

Dropped:
x Source | URL | Source-Type: community | Accessibility: privileged | Auth: 3 | Reason: PRIVILEGED SOURCE - NOT ALLOWED

Stats: {approved}/{total}, {N} domains, official_share {xx}%
Privileged sources rejected: {N}
```

**Critical rule:** These [n] numbers are FINAL. P5 may only cite from the Approved list. Dropped sources never reappear.

**Circular verification handling**: When researching the user's own company/assets, if you discover data in the user's private accounts (e.g., the user's domain registrar showing they own domains), you MUST:
1. Reject it from the registry (the user already knows this)
2. Note it as "CIRCULAR - USER ALREADY KNOWS" under Dropped
3. Search for equivalent PUBLIC sources (e.g., public WHOIS, news articles)
4. Report from the external investigator perspective only

**Exclusive source handling**: When the user EXPLICITLY PROVIDES their paid subscriptions or private APIs for third-party research (e.g., "Use my Crunchbase Pro to research competitors"), you SHOULD:
1. Accept it with `exclusive-user-provided` accessibility
2. Use it as a competitive advantage
3. Cite it properly in the registry
4. If no public equivalent exists, mark the claim as [unverified] or omit it

Report: `[P3 complete] {approved}/{total} sources. {N} domains. Official share: {xx}%. Privileged rejected: {N}.`

### Handling Information Black Box

When researching entities with no public footprint (like the "深度推理" example):

**What an external researcher would find:**
- WHOIS: Privacy protected → no owner info
- Web search: No news, no press releases
- Social media: No company pages
- Business registries: No public API, or requires local access
- Result: **Complete information black box**

**Correct response:**

```
Findings: NO PUBLIC INFORMATION AVAILABLE

Sources checked:
- WHOIS (public): Privacy protected [failed]
- Company registry (public): Access denied / no API [failed]
- News media: No coverage [failed]
- Corporate website: Placeholder only [minimal]

Verdict: UNABLE TO VERIFY COMPANY EXISTENCE from external perspective
Sources found: 0 (or minimal, e.g., only WHOIS showing the domain exists)
Confidence: N/A - insufficient evidence
```

**DO NOT:**
- ❌ Use the user's own credentials to "fill in the gaps"
- ❌ Assume the company exists based on domain registration alone
- ❌ Fill missing data with speculation
- ❌ Claim to have "verified" information you accessed through privileged means

**DO:**
- ✅ Clearly state what an external researcher can and cannot verify
- ✅ Document all failed search attempts
- ✅ Mark claims as [unverified] or omit them entirely
- ✅ Downgrade to Lightweight mode, or stop, if public sources are insufficient
- ✅ Recommend direct contact for due diligence
---

## P4: Evidence-Mapped Outline

The lead agent reads the notes and registry to build the outline.

1. Identify cross-task patterns
2. Design sections topic-first, not task-order-first
3. Map each section to specific findings with source numbers
4. Flag sections needing counter-review
5. Mark recency-sensitive claims with AS_OF checks

Outline format:

```
## N. {Section Title}
Sources: [1][3][7] from tasks a, b
Claims: {claim from task-a finding 3}, {claim from task-b finding 1}
Counter-claim candidates: {alternative explanations}
Recency checks: {source dates + AS_OF}
Gaps: {limited official evidence}
```

---

## P5: Draft from Notes

Write section by section using [references/report_template_v6.md](references/report_template_v6.md).

**Rules:**
- Every factual claim needs a citation [n]
- Numbers and percentages must have a source
- Add a **confidence marker** per section: High/Medium/Low, with rationale
- Add a **counter-claim sentence** where evidence conflicts
- No new sources may be introduced
- Use [unverified] for unsupported statements

**Anti-hallucination:**
- The lead agent never invents URLs — only URLs from subagent notes
- The lead agent never fabricates data — mark a number [unverified] if it is not in the notes

Status: `[P5 in progress] {N}/{M} sections, ~{words} words.`

---

## P6: Counter-Review (Mandatory)

For each major conclusion, perform opposite-view checks:

1. **Could the conclusion be wrong?**
2. **Which high-impact claims depend on a single source?**
3. **Which claims lack official/academic support?**
4. **Are stale sources used for time-sensitive claims?**
5. **Find ≥3 issues** (re-examine if 0 found)

### Using Counter-Review Team (Recommended)

For comprehensive parallel review, use the Counter-Review Team:

```bash
# 1. Prepare inputs
counter-review-inputs/
├── draft_report.md
├── citation_registry.md
├── task-notes/
└── p0_config.md

# 2. Dispatch to 4 specialist agents in parallel
SendMessage to: claim-validator
SendMessage to: source-diversity-checker
SendMessage to: recency-validator
SendMessage to: contradiction-finder

# 3. Wait for all specialists to complete

# 4. Send to coordinator for synthesis
SendMessage to: counter-review-coordinator
inputs: [4 specialist reports]

# 5. Receive the final P6 Counter-Review Report
```

See [references/counter_review_team_guide.md](references/counter_review_team_guide.md) for detailed usage.

### Manual Counter-Review (Fallback)

If the Counter-Review Team is unavailable, perform manual checks:
- Verify every high-confidence claim has ≥2 sources
- Check official/academic backing for key claims
- Verify AS_OF dates on time-sensitive claims
- Document opposing interpretations
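
The second opposite-view check (high-impact claims hanging on a single source) is easy to approximate mechanically. This hypothetical helper assumes the `[n]` citation syntax used in P5 and flags any `##` section whose citations all point at one source number; it is a screening aid, not a replacement for the review itself.

```python
import re

def single_source_sections(report_md):
    """Map section title -> the lone source number, for sections citing only one source."""
    flagged = {}
    for block in re.split(r"\n(?=## )", report_md):
        title = block.splitlines()[0].lstrip("# ").strip()
        cites = re.findall(r"\[(\d+)\]", block)
        if cites and len(set(cites)) == 1:
            flagged[title] = int(cites[0])
    return flagged
```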

### Output

Include in the final report:

```
## 核心争议 / Key Controversies
- **Controversy 1:** [claim A contrasted with counter-evidence B] [n][m]
- **Controversy 2:** ...
```

Report: `[P6 complete] {N} issues found: {critical} critical, {high} high, {medium} medium.`
---

## P7: Verify

Cross-check before finalization:

1. **Registry cross-check:** Compare every [n] in the report against the approved registry
2. **Spot-check 5+ claims:** Trace them to task notes
3. **Remove or fix non-traceable claims**
4. **Validate that no dropped source was resurrected**
5. **Check source concentration** for key claims

Report: `[P7 complete] {N} spot-checks, {M} violations fixed.`
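
Checks 1 and 4 above can be sketched as one set comparison over the `[n]` markers. The function below is illustrative: it assumes the numeric bracket citation style from P5 and takes the approved/dropped numbers produced at P3.

```python
import re

def registry_cross_check(report_md, approved_ns, dropped_ns=()):
    """Compare the [n] citations in the report against the P3 registry."""
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", report_md)}
    return {
        "unknown": sorted(cited - set(approved_ns)),      # cited but never approved
        "resurrected": sorted(cited & set(dropped_ns)),   # dropped in P3 yet cited
        "uncited": sorted(set(approved_ns) - cited),      # approved but unused
    }
```

An empty `unknown` and `resurrected` list is a precondition for finalizing; `uncited` entries are informational and may simply be trimmed from the registry appendix.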

---

## Output Requirements

## Reference Files

### Core V6 Pipeline References

| File | When to Load |
| --- | --- |
| [source_accessibility_policy.md](references/source_accessibility_policy.md) | **P0 (CRITICAL)**: Source classification rules - read first |
| [subagent_prompt.md](references/subagent_prompt.md) | P2: Task dispatch to subagents |
| [research_notes_format.md](references/research_notes_format.md) | P2: Subagent output format |
| [report_template_v6.md](references/report_template_v6.md) | P5: Draft with confidence markers and counter-review |
| [quality_gates.md](references/quality_gates.md) | All phases: Quality thresholds and anti-hallucination checks |

### General Research References

| File | When to Load |
| --- | --- |
| [research_report_template.md](references/research_report_template.md) | Build outline and draft structure |
| [research_plan_checklist.md](references/research_plan_checklist.md) | Build research plan and query set |
| [completeness_review_checklist.md](references/completeness_review_checklist.md) | Review for coverage, citations, and compliance |

### Enterprise Research References (load when in Enterprise Research Mode)

| File | When to Load |
| --- | --- |
| [enterprise_research_methodology.md](references/enterprise_research_methodology.md) | Six-dimension data collection workflow, source priority, cross-validation rules |
| [enterprise_analysis_frameworks.md](references/enterprise_analysis_frameworks.md) | SWOT template, competitive barrier quantification, risk matrix, comprehensive scoring |
| [enterprise_quality_checklist.md](references/enterprise_quality_checklist.md) | L1/L2/L3 quality checks, per-dimension checklists, 7-chapter report template |
## Anti-Patterns
|
||||
|
||||
- Single-pass drafting without parallel complete passes
|
||||
@@ -205,3 +521,10 @@ Present the draft as a reviewable version:
|
||||
- Mixing conflicting dates without calling out discrepancies
|
||||
- Copying external AI output without verification
|
||||
- Deleting intermediate drafts or raw research outputs
|
||||
- **Lead agent reading raw search results** — only read subagent notes
|
||||
- **Inventing URLs** — only use URLs from actual search results
|
||||
- **Resurrecting dropped sources** — dropped in P3 never reappear
|
||||
- **Missing AS_OF for time-sensitive claims** — always include source date
|
||||
- **Skipping counter-review** — mandatory P6 must find ≥3 issues
|
||||
- **CIRCULAR VERIFICATION** — never use user's private data to "discover" what they already know about themselves
|
||||
- **IGNORING EXCLUSIVE SOURCES** — when user provides Crunchbase Pro etc. for competitor research, USE IT
|
||||
|
||||
112
deep-research/references/V6_1_improvements.md
Normal file
@@ -0,0 +1,112 @@
# Deep Research Skill V6.1 Improvements

**Date**: 2026-04-03
**Version**: 2.3.0 → 2.4.0
**Based on**: User feedback and the "深度推理" case study

---

## Summary of Changes

### 1. Source Accessibility Policy - Critical Correction

**Problem identified**:
V6.0 banned all "privileged" sources outright. That was wrong: it prevented users from leveraging their legitimate competitive information advantages.

**The real issue**:
The problem is not using the user's private information per se. It is **circular verification**: using the user's own data to "discover" what they already know about themselves.

**Example of the error**:
```
User: "Research my company 深度推理"
❌ WRONG: Access the user's Spaceship account → "You own 25 domains"
   → Circular: the user already knows which domains they own

✅ RIGHT: Check public WHOIS → "Privacy protected, ownership not visible"
   → This is the external research perspective
```

**Correct classification**:

| Accessibility | For Self-Research | For Third-Party Research |
|--------------|-------------------|-------------------------|
| `public` | ✅ Use | ✅ Use |
| `semi-public` | ✅ Use | ✅ Use |
| `exclusive-user-provided` | ⚠️ Careful* | ✅ **ENCOURAGED** |
| `private-user-owned` | ❌ **FORBIDDEN** | N/A |

\* When the user provides exclusive sources about their own company, evaluate whether using them would be circular.
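The classification table reduces to a small decision function. A sketch, assuming the four accessibility labels are exactly the registry values above and that `is_self_research` means the research target is the user's own company:

```python
from enum import Enum

class Accessibility(Enum):
    PUBLIC = "public"
    SEMI_PUBLIC = "semi-public"
    EXCLUSIVE_USER_PROVIDED = "exclusive-user-provided"
    PRIVATE_USER_OWNED = "private-user-owned"

def source_ruling(accessibility: Accessibility, is_self_research: bool) -> str:
    """Apply the V6.1 accessibility policy to one candidate source."""
    if accessibility == Accessibility.PRIVATE_USER_OWNED:
        # Circular verification: the user's own accounts cannot "discover" anything
        return "forbidden"
    if accessibility == Accessibility.EXCLUSIVE_USER_PROVIDED:
        # Exclusive advantage: encouraged for third parties, suspect for self-research
        return "review-for-circularity" if is_self_research else "encouraged"
    return "use"  # public and semi-public are always acceptable

print(source_ruling(Accessibility.EXCLUSIVE_USER_PROVIDED, is_self_research=False))  # encouraged
```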
### 2. Counter-Review Team V2

**Created**: a 5-agent parallel review team
- 🔵 claim-validator: claim validation
- 🟢 source-diversity-checker: source diversity analysis
- 🟡 recency-validator: recency/freshness checks
- 🟣 contradiction-finder: contradiction and bias detection
- 🟠 counter-review-coordinator: synthesis and reporting

**Usage**:
```bash
# 1. Dispatch to the 4 specialists in parallel
SendMessage to: claim-validator
SendMessage to: source-diversity-checker
SendMessage to: recency-validator
SendMessage to: contradiction-finder

# 2. Send their reports to the coordinator for synthesis
SendMessage to: counter-review-coordinator
```

### 3. Methodology Clarifications

#### When Researching the User's Own Company
- **Approach**: external investigator perspective
- **Use**: public sources only
- **Do NOT use**: the user's private accounts (creates circular verification)
- **Report**: "From the public perspective: X, Y, Z gaps"

#### When the User Provides Exclusive Sources for Third-Party Research
- **Approach**: leverage the competitive advantage
- **Use**: the user's paid subscriptions, private APIs, proprietary databases
- **Cite**: mark as `exclusive-user-provided`
- **Report**: "Per the user's exclusive source [Crunchbase Pro], competitor X raised $Y"

### 4. Registry Format Update

**Added fields**:
- `Accessibility`: public / semi-public / exclusive-user-provided / private-user-owned
- `Circular rejection tracking`: note when a source is rejected for circular verification

**Updated anti-patterns**:
- ❌ **CIRCULAR VERIFICATION**: never use the user's private data to "discover" what they already know
- ✅ **USE EXCLUSIVE SOURCES**: when the user provides Crunchbase Pro or similar for competitor research, USE IT

### 5. Documentation Updates

**New/updated files**:
- `source_accessibility_policy.md`: complete rewrite explaining the circular-verification vs. competitive-advantage distinction
- `counter_review_team_guide.md`: usage guide for the 5-agent team
- `SKILL.md`: updated Source Governance section with the corrected classification
- `marketplace.json`: updated description

---

## Key Principles Summary

1. **Circular verification is bad**: don't use the user's data to tell them what they already know.
2. **Exclusive information advantage is good**: use the user's paid tools to research competitors.
3. **External perspective for self-research**: when researching the user's own company, act like an external investigator.
4. **Leverage everything for third parties**: when researching others, use every advantage the user provides.

---

## Version History

| Version | Changes |
|---------|---------|
| 2.0.0 | Initial Enterprise Research Mode |
| 2.1.0 | V6 features: source governance, AS_OF, counter-review |
| 2.2.0 | Counter-Review Team |
| 2.3.0 | Source accessibility (initial, incorrect ban on privileged sources) |
| **2.4.0** | **Corrected: circular verification vs. exclusive advantage distinction** |
181
deep-research/references/counter_review_team_guide.md
Normal file
@@ -0,0 +1,181 @@
# Counter-Review Team Usage Guide

A dedicated agent team for the P6 phase of Deep Research V6 that runs multi-dimensional review in parallel.

## Team Architecture

```
counter-review-coordinator (coordinator)
├── claim-validator (claim validation)
├── source-diversity-checker (source diversity)
├── recency-validator (freshness checks)
└── contradiction-finder (contradiction detection)
```

## Agent Responsibilities

| Agent | Responsibility | Output |
|-------|------|------|
| **claim-validator** | Verify claim accuracy; flag unevidenced or weakly evidenced claims | Claim Validation Report |
| **source-diversity-checker** | Check single-source dependence and source-type distribution | Source Diversity Report |
| **recency-validator** | Verify freshness of time-sensitive claims and AS_OF compliance | Recency Validation Report |
| **contradiction-finder** | Find internal contradictions and missing counter-arguments | Contradiction and Bias Report |
| **counter-review-coordinator** | Consolidate all reports into the final P6 report | P6 Counter-Review Report |

## Workflow

### 1. Prepare Input Materials

After P5 (Draft) completes, collect the following:

```
inputs/
├── draft_report.md        # Report drafted in P5
├── citation_registry.md   # Citation registry from P3
├── task-notes/
│   ├── task-a.md          # Subagent research notes
│   ├── task-b.md
│   └── ...
└── p0_config.md           # P0 configuration (AS_OF date, mode, etc.)
```

### 2. Dispatch Tasks in Parallel

Send tasks to the 4 specialist agents simultaneously:

```bash
# To claim-validator
SendMessage to: claim-validator
Input: draft_report.md + citation_registry.md + task-notes/
Instruction: verify evidence support for all claims

# To source-diversity-checker
SendMessage to: source-diversity-checker
Input: draft_report.md + citation_registry.md
Instruction: check source diversity and single-source dependence

# To recency-validator
SendMessage to: recency-validator
Input: draft_report.md + citation_registry.md + p0_config.md
Instruction: verify freshness of time-sensitive claims

# To contradiction-finder
SendMessage to: contradiction-finder
Input: draft_report.md + task-notes/ + citation_registry.md
Instruction: find contradictions and missing counter-arguments
```

### 3. Coordinate and Consolidate

Once all 4 specialists finish, send their reports to the coordinator:

```bash
SendMessage to: counter-review-coordinator
Input:
- Claim Validation Report
- Source Diversity Report
- Recency Validation Report
- Contradiction and Bias Report
Instruction: consolidate all reports into the final P6 Counter-Review Report
```

### 4. Collect the Final Output

The coordinator's output contains:
- Issue summary (must contain ≥3 issues)
- Key Controversies section (can be copied directly into the final report)
- Mandatory fixes checklist
- Quality gate status

## Quality Gate Requirements

| Check | Standard Mode | Light Mode | On Failure |
|--------|---------|---------|---------|
| Issues found | ≥3 | ≥3 | Re-review |
| Single-sourced critical claims | 0 | 0 | Add sources or downgrade |
| Official source share | ≥30% | ≥20% | Add official sources |
| AS_OF dates complete | 100% | 100% | Add dates |
| Key controversies documented | Required | Required | Add controversies section |
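The gate table can be enforced mechanically before moving past P6. A minimal sketch; the keys of the `summary` dict are illustrative names, not a format this guide defines:

```python
def evaluate_p6_gates(summary: dict, light_mode: bool = False) -> dict:
    """Return pass/fail for each P6 quality gate from a review summary."""
    official_min = 0.20 if light_mode else 0.30
    return {
        "issues_found": summary["issues_found"] >= 3,
        "no_single_sourced_critical": summary["single_sourced_critical"] == 0,
        "official_source_share": summary["official_source_share"] >= official_min,
        "as_of_complete": summary["as_of_coverage"] == 1.0,
        "controversies_documented": summary["controversies_documented"],
    }

summary = {
    "issues_found": 7,
    "single_sourced_critical": 2,
    "official_source_share": 0.35,
    "as_of_coverage": 0.9,
    "controversies_documented": True,
}
gates = evaluate_p6_gates(summary)
blockers = [name for name, ok in gates.items() if not ok]
print("blockers:", blockers)  # here: single-sourced critical claims and missing AS_OF dates
```

Any blocker maps to the "On Failure" column of the table and must be cleared before P7.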
## Output Example

### Coordinator Final Report Structure

```markdown
# P6 Counter-Review Report

## Executive Summary
- Total issues found: 7 (critical: 2, high: 3, medium: 2)
- Must-fix before publish: 2
- Recommended improvements: 5

## Critical Issues (Block Publish)
| # | Issue | Location | Source | Fix Required |
|---|-------|----------|--------|--------------|
| 1 | Market-share claim has no source | §3.2 | none | Add a source or delete |
| 2 | Revenue figure rests on a single community source | §4.1 | [12] community | Replace with an official source |

## 核心争议 / Key Controversies

- **Controversy 1:** Company claims 50% growth vs. analyst reports of 30%
  - Evidence strength: official (company filings) vs. academic (third-party research)
  - Recommendation: present both figures side by side and explain the discrepancy

## Mandatory Fixes Checklist
- [ ] Add a market-share source to §3.2
- [ ] Replace the revenue source in §4.1
- [ ] Add AS_OF: 2026-04-03 to all time-sensitive claims

## Quality Gates Status
| Gate | Status | Notes |
|------|--------|-------|
| P6 ≥3 issues found | ✅ | 7 issues found |
| No critical claim single-sourced | ❌ | 2 issues pending fix |
| AS_OF dates present | ❌ | 3 missing |
| Counter-claims documented | ✅ | Added |
```

## Integrating into the SKILL.md Workflow

Add the following instructions to the P6 phase of SKILL.md:

```markdown
## P6: Counter-Review (Mandatory)

**Run the parallel review with the Counter-Review Team:**

1. **Prepare materials**: draft_report.md, citation_registry.md, task-notes/, p0_config.md
2. **Dispatch in parallel**: send to the 4 specialist agents simultaneously
3. **Wait for completion**: collect the 4 specialist reports
4. **Coordinate**: send them to the coordinator to produce the final P6 report
5. **Enforce**: every Critical issue must be fixed before P7
6. **Output**: copy the "核心争议 / Key Controversies" section into the final report

**Report**: `[P6 complete] {N} issues found: {critical} critical, {high} high, {medium} medium.`
```

## Team Management

### Check team status
```bash
cat ~/.claude/teams/counter-review-team/config.json
```

### Send a message to an agent
```bash
SendMessage to: claim-validator
message: Start the review task; input files are in ./review-inputs/
```

### Shut down the team
```bash
SendMessage to: "*"
message: {"type": "shutdown_request", "reason": "task complete"}
```

## Notes

1. **At least 3 issues must be found** - if the coordinator reports fewer than 3, re-run the review
2. **Critical issues must be fixed** before proceeding to P7
3. **Keep all review records** as part of the research methodology
4. **Bilingual I/O** - all agents accept and produce both Chinese and English
135
deep-research/references/enterprise_analysis_frameworks.md
Normal file
@@ -0,0 +1,135 @@
# Enterprise Analysis Frameworks

Apply these frameworks after completing the six-dimension data collection. Execute in order: SWOT → Competitive Barriers → Risk Matrix → Comprehensive Scoring.

## SWOT Analysis Template

Each SWOT entry MUST include evidence and source attribution.

```
|              | Positive Factors                  | Negative Factors                  |
|--------------|-----------------------------------|-----------------------------------|
| **Internal** | **S (Strengths)**                 | **W (Weaknesses)**                |
|              | 1. {description}                  | 1. {description}                  |
|              | • Evidence: {data/fact}           | • Evidence: {data/fact}           |
|              | • Source: {citation}              | • Source: {citation}              |
|              | • Impact: {assessment}            | • Impact: {assessment}            |
|              |                                   |                                   |
| **External** | **O (Opportunities)**             | **T (Threats)**                   |
|              | 1. {description}                  | 1. {description}                  |
|              | • Evidence: {trend/policy}        | • Evidence: {pressure/risk}       |
|              | • Source: {citation}              | • Source: {citation}              |
|              | • Probability: {assessment}       | • Probability: {assessment}       |
|              | • Impact: {assessment}            | • Impact: {assessment}            |
```

**Requirements**:
- Each quadrant: 3-5 entries minimum
- Every entry must have evidence with a source
- S/W must be data-backed (not opinions)
- O/T must include probability and impact estimates

**Strategic Implications Matrix** (generate after SWOT):
- **SO strategy** (leverage strengths to capture opportunities): 1-2 specific recommendations
- **WO strategy** (overcome weaknesses to seize opportunities): 1-2 specific recommendations
- **ST strategy** (use strengths to counter threats): 1-2 specific recommendations
- **WT strategy** (mitigate weaknesses to avoid threats): 1-2 specific recommendations

## Competitive Barrier Quantification Framework

7 barrier dimensions with weighted scoring:

| Dimension | Weight | Strong | Moderate | Weak |
|-----------|--------|--------|----------|------|
| **Network Effects** | 20% | 4.5 — Clear network effects (social platforms, marketplaces) | 3.0 — Exists but replaceable | 1.5 — Minimal network effects |
| **Scale Economies** | 15% | 4.0 — Unit cost drops 30%+ with scale | 2.5 — Cost drops 10-30% | 1.0 — Cost drops <10% |
| **Brand Value** | 15% | 4.0 — Category leader, high pricing power | 2.5 — Known brand, competitive | 1.0 — Commodity brand, price-sensitive |
| **Technology/Patents** | 15% | 4.0 — Core patents, hard to circumvent | 2.5 — Some patent protection | 1.0 — Peripheral patents only |
| **Switching Costs** | 15% | 4.0 — High lock-in (data, ecosystem) | 2.5 — Moderate switching friction | 1.0 — Low switching cost |
| **Regulatory Licenses** | 10% | 3.5 — Heavy regulation, hard to obtain | 2.0 — Standard regulatory requirements | 0.5 — Light regulation |
| **Data Assets** | 10% | 3.5 — Massive proprietary high-quality data | 2.0 — Some data accumulation | 0.5 — Limited or public data |

**Scoring**: Total = Σ(dimension score × weight)

**Rating Scale**:
| Score | Rating | Interpretation |
|-------|--------|---------------|
| ≥3.5 | A+ | Exceptional moat |
| ≥2.8 | A | Strong moat |
| ≥2.0 | B+ | Good moat |
| ≥1.5 | B | Moderate moat |
| ≥1.0 | C+ | Limited moat |
| <1.0 | C | Weak moat |

**Output format**: Present a scorecard table with each dimension's strength rating, raw score, justification (with evidence), and the weighted total with final rating.
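The scoring formula and rating scale above can be sketched as follows (weights and thresholds are taken from the tables; the example dimension scores are placeholders):

```python
WEIGHTS = {
    "network_effects": 0.20, "scale_economies": 0.15, "brand_value": 0.15,
    "technology_patents": 0.15, "switching_costs": 0.15,
    "regulatory_licenses": 0.10, "data_assets": 0.10,
}
RATING_THRESHOLDS = [(3.5, "A+"), (2.8, "A"), (2.0, "B+"), (1.5, "B"), (1.0, "C+")]

def barrier_rating(scores: dict) -> tuple:
    """Total = sum(dimension score x weight); map the total onto the rating scale."""
    assert set(scores) == set(WEIGHTS), "all 7 dimensions must be scored"
    total = sum(scores[d] * w for d, w in WEIGHTS.items())
    for threshold, rating in RATING_THRESHOLDS:
        if total >= threshold:
            return round(total, 2), rating
    return round(total, 2), "C"

example = {"network_effects": 4.5, "scale_economies": 4.0, "brand_value": 2.5,
           "technology_patents": 2.5, "switching_costs": 4.0,
           "regulatory_licenses": 2.0, "data_assets": 2.0}
print(barrier_rating(example))  # (3.25, 'A')
```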
## Risk Matrix Framework

Assess 8 mandatory risk categories.

### Risk Assessment Scales

**Probability**:
| Level | Range | Score |
|-------|-------|-------|
| High | >70% | 0.7-1.0 |
| Medium | 30-70% | 0.3-0.7 |
| Low | <30% | 0.0-0.3 |

**Impact**:
| Level | Description | Score |
|-------|-------------|-------|
| High | >30% revenue impact | 3 |
| Medium | 10-30% revenue impact | 2 |
| Low | <10% revenue impact | 1 |

**Risk Level**: Risk Value = Probability Score × Impact Score
| Color | Level | Threshold |
|-------|-------|-----------|
| Red | High risk | ≥2.5 |
| Yellow | Medium risk | 1.0 – 2.5 |
| Green | Low risk | <1.0 |
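The risk-value formula and color bands translate directly to code. A minimal sketch using the thresholds above:

```python
def risk_level(probability: float, impact: int) -> tuple:
    """Risk Value = probability score (0-1) x impact score (1-3), banded by threshold."""
    assert 0.0 <= probability <= 1.0 and impact in (1, 2, 3)
    value = probability * impact
    if value >= 2.5:
        return value, "Red (high risk)"
    if value >= 1.0:
        return value, "Yellow (medium risk)"
    return value, "Green (low risk)"

print(risk_level(0.9, 3))  # high probability, high impact -> Red
print(risk_level(0.2, 2))  # low probability, medium impact -> Green
```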
### 8 Mandatory Risk Categories

| # | Category | Typical Triggers |
|---|----------|-----------------|
| 1 | Market risk | Industry slowdown, demand shifts |
| 2 | Competitive risk | New entrants, incumbents pivoting |
| 3 | Technology risk | Tech obsolescence, disruption |
| 4 | Regulatory risk | Policy tightening, compliance cost |
| 5 | Financial risk | Cash flow stress, debt levels |
| 6 | Operational risk | Key talent loss, supply chain |
| 7 | Talent risk | Brain drain, recruiting difficulty |
| 8 | Geopolitical risk | Trade friction, data localization |

### Risk Table Format

| Category | Specific Risk | Probability | Impact | Risk Value | Level | Evidence/Triggers | Current Mitigations | Recommended Actions |
|----------|--------------|-------------|--------|------------|-------|-------------------|--------------------|--------------------|

**Requirements**:
- All 8 categories must be assessed (no skipping)
- Each risk entry must cite specific evidence or triggers
- Provide current mitigations AND recommended actions
- High risks: require immediate action plans
- Medium risks: require monitoring plans
- Low risks: require a periodic review schedule

## Comprehensive Scoring (Final Section)

After completing SWOT, barriers, and the risk matrix, generate a comprehensive scorecard:

```
| Dimension | Score | Weight | Weighted | Key Evidence |
|-----------|-------|--------|----------|-------------|
| Business Quality | X/10 | 25% | | |
| Competitive Position | X/10 | 20% | | |
| Financial Health | X/10 | 20% | | |
| Growth Potential | X/10 | 15% | | |
| Risk Profile | X/10 | 10% | | |
| Management Quality | X/10 | 10% | | |
| **Total** | | 100% | **X/10** | |
```

Every score must reference specific evidence from the six-dimension data collection.
160
deep-research/references/enterprise_quality_checklist.md
Normal file
@@ -0,0 +1,160 @@
# Enterprise Research Quality Checklist

Three-level quality control executed at each stage transition.

## L1: Data Collection Quality (after each dimension)

### Per-Dimension Checks

| Check Item | Standard | Method | Pass Condition |
|-----------|----------|--------|---------------|
| Source count | Key data points ≥2 sources | Count source annotations | ≥90% compliance |
| Source attribution | All data has a source marked | Check citations in draft | ≥95% completeness |
| Cross-validation pass rate | Data deviation ≤10% | Compare multi-source data | ≥95% validation pass |
| Timeliness | Financial: ≤2 years; News: ≤6 months | Check timestamps | 100% compliance |

**Result handling**: All pass → proceed. Partial fail → supplement sources. Critical fail → re-collect the dimension.
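The timeliness row can be checked automatically once each claim carries a source date. A sketch, assuming the two freshness windows from the table, approximated in days:

```python
from datetime import date

def is_fresh(source_date: date, as_of: date, claim_type: str) -> bool:
    """Timeliness gate: financial data <=2 years old, news <=6 months old (day approximations)."""
    max_age_days = {"financial": 2 * 365, "news": 183}[claim_type]
    return (as_of - source_date).days <= max_age_days

as_of = date(2026, 4, 3)
print(is_fresh(date(2024, 12, 31), as_of, "financial"))  # True
print(is_fresh(date(2025, 6, 1), as_of, "news"))         # False: older than 6 months
```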
### Dimension-Specific Checklists

**D1 Company Fundamentals** (target: 11/11):
- [ ] Legal entity boundaries clarified
- [ ] Founding date with month/year
- [ ] Headquarters city identified
- [ ] Founder/CEO confirmed (≥2 sources)
- [ ] Employee count with year
- [ ] Listing status (exchange, ticker)
- [ ] Latest valuation/market cap with date
- [ ] Core business one-liner
- [ ] Funding history ≥3 rounds
- [ ] ≥5 milestone events in timeline
- [ ] Ownership structure: controller identified

**D2 Business & Products** (target: 7/7):
- [ ] ≥3 business segments identified
- [ ] Revenue share per segment
- [ ] ≥3 core products analyzed
- [ ] User metrics (DAU/MAU) with numbers
- [ ] Monetization model per product
- [ ] Revenue breakdown (segment/geography/customer)
- [ ] Growth/decline trend per segment

**D3 Competitive Position** (target: 7/7):
- [ ] Industry clearly defined
- [ ] Market size quantified
- [ ] Company rank established
- [ ] Market share with number
- [ ] ≥3 competitors identified
- [ ] Multi-dimension comparison table complete
- [ ] ≥5 barrier dimensions assessed with scores

**D4 Financial & Operations** (target: 9/9):
- [ ] Revenue: 3-year data
- [ ] Net income: 3-year data
- [ ] Gross margin: 3-year data
- [ ] Net margin: 3-year data
- [ ] Operating cash flow: 3-year data
- [ ] R&D expense: 3-year data
- [ ] Key financial data cross-validated (≥2 sources)
- [ ] Metric definitions consistent across years
- [ ] ≥3 efficiency metrics (ROE/ROA/etc.)

**D5 Recent Developments** (target: 5/5):
- [ ] ≥5 recent events (within 6 months)
- [ ] Events span ≥3 event types
- [ ] Each event has an impact assessment
- [ ] ≥2 strategic direction signals identified
- [ ] Most recent event within 1 month

**D6 Internal/Proprietary** (target: 2/2):
- [ ] Internal knowledge base queried (or limitation noted)
- [ ] Internal document search executed (or limitation noted)

## L2: Analysis Quality (after analysis frameworks applied)

| Check Item | Standard | Method | Pass Condition |
|-----------|----------|--------|---------------|
| SWOT completeness | Each quadrant ≥3 entries | Entry count | Full coverage |
| SWOT evidence | Every entry has data backing | Check "Evidence" fields | 100% evidenced |
| Risk matrix coverage | All 8 categories assessed | Category checklist | 100% covered |
| Barrier quantification | All 7 dimensions scored | Check scorecard completeness | 100% scored |
| Conclusion support | All conclusions trace to evidence | Trace each conclusion | 100% supported |

**Result handling**: All pass → proceed to writing. Partial fail → supplement analysis evidence. Critical fail → re-execute the analysis framework.

## L3: Document Quality (after report drafted)

| Check Item | Standard | Method | Pass Condition |
|-----------|----------|--------|---------------|
| Structure compliance | Follows 7-chapter template | Compare against template | ≥95% compliance |
| Table format consistency | All tables uniformly formatted | Visual inspection | 100% uniform |
| Readability | Paragraphs ≤450 chars; ≥3 parallel items use lists | Paragraph length check | ≥95% compliance |
| Data annotation | All data has source + year | Citation audit | 100% complete |
| Appendix completeness | Includes source index + glossary | Content check | 100% complete |

**Result handling**: All pass → deliver. Partial fail → format optimization. Critical fail → regenerate the document.
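The readability row lends itself to an automated pass. A sketch of the paragraph-length check, assuming paragraphs are delimited by blank lines and that headings, tables, lists, and fences are excluded from the count:

```python
def readability_compliance(markdown: str, max_chars: int = 450) -> float:
    """Share of prose paragraphs within the length limit."""
    paragraphs = [
        p.strip() for p in markdown.split("\n\n")
        if p.strip() and not p.lstrip().startswith(("|", "#", "-", "```"))
    ]
    if not paragraphs:
        return 1.0
    ok = sum(1 for p in paragraphs if len(p) <= max_chars)
    return ok / len(paragraphs)

doc = "Short paragraph.\n\n" + "x" * 500 + "\n\n# Heading\n\n| a | b |"
rate = readability_compliance(doc)
print(f"{rate:.0%} of paragraphs comply")  # 50% in this toy example
```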
## Enterprise Report Structure (7 Chapters)

```
# {Company Name} Research Report

> Executive Summary: {1-2 sentence core conclusion}

---

## 1. Company Overview
### 1.1 Basic Information (table)
### 1.2 Development Timeline
### 1.3 Funding History (table)
### 1.4 Ownership Structure & Control
### 1.5 Core Management Team (table)

## 2. Business & Product Structure
### 2.1 Business Landscape Overview
### 2.2 Core Product Matrix (table)
### 2.3 Revenue Structure Analysis
### 2.4 Business Development Trends

## 3. Market & Competitive Position
### 3.1 Industry Position Analysis
### 3.2 Competitive Comparison (table)
### 3.3 Competitive Barrier Assessment (scorecard)

## 4. Financial & Operations Analysis
### 4.1 Key Financial Metrics (3-year comparison table)
### 4.2 Operating Efficiency Assessment
### 4.3 Financial Health Summary

## 5. Risks & Concerns
### 5.1 Risk Matrix Analysis (8-category table)
### 5.2 Key Risk Deep-Dives
### 5.3 Risk Mitigation Recommendations

## 6. Recent Developments
### 6.1 Major Recent Events (table)
### 6.2 Strategic Signal Interpretation

## 7. Comprehensive Assessment & Conclusion
### 7.1 SWOT Summary
### 7.2 Comprehensive Scorecard
### 7.3 Core Conclusions & Outlook

---

## Appendices
### A. Data Source Index
### B. Glossary
### C. Disclaimer
```

## Four Dimensions of Quality Control

Apply throughout all stages:

| Dimension | Focus | Key Checks |
|-----------|-------|------------|
| **Accuracy** | Data correctness | Source attribution, fact verification, cross-validation, error tolerance |
| **Completeness** | Information coverage | Dimension coverage, key element presence, conclusion support, risk coverage |
| **Timeliness** | Data currency | Data freshness, trend capture, signal detection, dynamic updates |
| **Consistency** | Uniform standards | Metric definitions aligned, format unified, style consistent, terminology standardized |
164
deep-research/references/enterprise_research_methodology.md
Normal file
@@ -0,0 +1,164 @@
# Enterprise Research Methodology

## Six-Dimension Data Collection

Enterprise research requires data collection across six dimensions. Execute all six in order, writing findings to a structured draft after each dimension.

### Dimension 1: Company Fundamentals

```
Step 1.1: Confirm legal entity
├── Clarify parent/subsidiary/affiliate boundaries
├── Query: "{company} legal entity corporate structure"
├── Output: Entity scope statement
└── Verify: Map operating entities to brands

Step 1.2: Basic information
├── Query round 1: "{company} founding date headquarters founder"
├── Query round 2: "{company} company overview profile"
├── Query round 3: "{company} CEO management team executives"
├── Source priority: Official site > Regulatory filings > Authoritative media
└── Output: Basic info table (name, founded, HQ, CEO, employees, listing status)

Step 1.3: Funding history
├── Query: "{company} funding rounds valuation IPO"
├── Key fields: round, amount, investors, post-money valuation, date
└── Output: Funding timeline table

Step 1.4: Ownership structure
├── Query: "{company} ownership structure beneficial owner"
├── Key fields: controller identity, economic interest %, voting rights %, control mechanisms (dual-class etc.)
└── Output: Ownership summary
```

### Dimension 2: Business & Products

```
Step 2.1: Business landscape scan
├── Query round 1: "{company} product lines business segments"
├── Query round 2: "{company} revenue breakdown by segment"
├── Query round 3: "{company} business model monetization"
├── Key fields: segment name, positioning, revenue share, YoY growth, synergies
└── Output: Business landscape table

Step 2.2: Core product analysis
├── Query: "{company} core products DAU MAU user base"
├── Per product: positioning, target users, scale (DAU/MAU), market share, monetization, competitive advantage, trends
└── Output: Product matrix table

Step 2.3: Revenue structure analysis
├── Source: Financial reports (deep extraction)
├── Breakdown by: segment, geography, customer type, pricing model
└── Output: Revenue structure summary
```

### Dimension 3: Competitive Position

```
Step 3.1: Industry position
├── Query: "{company} industry ranking market share"
├── Key fields: industry definition, TAM/SAM/SOM, company rank, share, concentration (CR3/CR5)
└── Output: Industry position analysis

Step 3.2: Competitor identification & comparison
├── Query round 1: "{company} competitors"
├── Query round 2: "{company} vs {competitor A} comparison"
├── Query round 3: "{company} vs {competitor B} differences"
├── Comparison dimensions: founding, revenue, market share, core products, user scale, valuation/market cap, strengths, weaknesses
├── Minimum: ≥3 competitors identified
└── Output: Competitive comparison table

Step 3.3: Competitive barriers assessment
├── Use the quantified barrier framework (see enterprise_analysis_frameworks.md)
├── 7 dimensions: network effects, scale economies, brand, technology/patents, switching costs, regulatory licenses, data assets
└── Output: Barrier scorecard with rating
```

### Dimension 4: Financial & Operations

```
Step 4.1: Financial data collection
├── Query: "{company} financial results {year} revenue profit"
├── Core metrics (3-year minimum): revenue, revenue growth, net income, gross margin, net margin, operating cash flow, R&D expense, R&D ratio
└── Output: Financial metrics table (3+ years)

Step 4.2: Operating efficiency analysis
├── Query: "{company} ROE ROA efficiency per-employee"
├── Efficiency metrics: ROE, ROA, revenue per employee, accounts receivable days, debt-to-equity
└── Output: Operating efficiency table

Step 4.3: Cross-validation
├── Require ≥2 independent sources for key financial data
├── Sources: company filings (primary), regulatory filings, authoritative financial data providers
├── Deviation rules:
│   ├── ≤10%: Pass
│   ├── 10-20%: Flag with explanation
│   └── >20%: Require third-party verification
└── Output: Validation record
```
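The deviation rules in Step 4.3 can be expressed as a small helper. A sketch, assuming deviation is measured as the relative spread between the lowest and highest reported values:

```python
def cross_validate(values: list) -> str:
    """Apply the Step 4.3 deviation rules to one metric reported by >=2 sources."""
    assert len(values) >= 2, "key financial data requires >=2 independent sources"
    low, high = min(values), max(values)
    deviation = (high - low) / low  # relative spread between sources
    if deviation <= 0.10:
        return "pass"
    if deviation <= 0.20:
        return "flag-with-explanation"
    return "require-third-party-verification"

# Revenue for the same year from a filing vs. a data provider (illustrative numbers)
print(cross_validate([98.0, 102.5]))   # ~4.6% spread -> pass
print(cross_validate([80.0, 100.0]))   # 25% spread -> third-party verification
```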
|
||||
### Dimension 5: Recent Developments

```
Step 5.1: Recent news scan (past 6 months)
├── Query round 1: "{company} latest news {current year}"
├── Query round 2: "{company} strategy pivot latest developments"
├── Query round 3: "{company} executive changes leadership"
├── Query round 4: "{company} partnership acquisition latest"
├── Query round 5: "{company} product launch new release"
├── Event types: product launches, fundraising/capital, strategy shifts, executive changes, M&A/partnerships, regulatory/compliance
├── Minimum: ≥5 events identified
└── Output: Major events table

Step 5.2: Strategic signal interpretation
├── Dimensions: expansion signals, contraction signals, transformation signals, risk signals
└── Output: Strategic signal analysis
```

### Dimension 6: Internal/Proprietary Sources

```
Step 6.1: Internal knowledge base query (if available)
├── Query 1: "our company's relationship with {target company}"
├── Query 2: "internal assessment of {target company}"
├── Query 3: "{target company} competitive analysis"
├── Query 4: "{target company} industry research"
└── Output: Internal perspective supplementary info

Step 6.2: If no internal sources available
├── State explicitly: "No internal/proprietary sources available for this research"
├── Compensate with additional public source depth
└── Note limitation in final report
```

## Data Source Priority Matrix

| Priority | Source Type | Reliability | Timeliness | Use Case |
|----------|-------------|-------------|------------|----------|
| **P0** | Official filings / annual reports | 10/10 | High | Core financial data |
| **P0** | Company website / announcements | 10/10 | High | Basic info, updates |
| **P1** | Regulatory filings | 9/10 | High | Ownership, licenses |
| **P1** | Authoritative industry reports | 9/10 | Medium | Market position, trends |
| **P2** | Mainstream financial media | 8/10 | High | News, analysis |
| **P2** | Professional research institutions | 8/10 | Medium | Deep analysis, forecasts |
| **P3** | Social media / forums | 5/10 | High | Sentiment signals only |

**Rule**: P0 + P1 are primary sources. P2 for validation. P3 for reference only, never as sole source.

## Cross-Validation Rules

| Data Type | Min Sources | Max Deviation | Primary Source | Fallback Sources |
|-----------|-------------|---------------|----------------|------------------|
| Financial data | 2 | 10% | Official financial reports | Regulatory filings, analyst reports |
| Market share | 2 | 15% | Industry reports | Company disclosures, third-party analysis |
| Management info | 1 | N/A | Company official sources | Regulatory filings, reputable media |
| User metrics | 2 | 20% | Company disclosures | Third-party analytics, industry reports |

## Search Strategy Best Practices

1. **Multi-angle queries**: 3 different query angles per topic
2. **Time filtering**: Prioritize data within last 12 months for operational data, last 3 years for financial trends
3. **Site restriction**: Use `site:` for authoritative domains when possible
4. **Language diversity**: Query in both English and the company's primary language
5. **Exclude noise**: Use `-` to exclude irrelevant results
6. **Progressive depth**: Start broad, then narrow based on gaps identified

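The practices above can be combined into a small query builder. The angle list, company name, and domain in the example are illustrative assumptions, not values prescribed by this document.

```python
def build_queries(company, year, angles, site=None, exclude=()):
    """Expand one topic into multi-angle search queries per the practices above."""
    queries = []
    for angle in angles:  # practice 1: multiple query angles per topic
        q = f"{company} {angle} {year}"
        if site:
            q += f" site:{site}"  # practice 3: restrict to an authoritative domain
        q += "".join(f" -{term}" for term in exclude)  # practice 5: exclude noise
        queries.append(q)
    return queries

# Hypothetical example: three angles, one site restriction, one exclusion
qs = build_queries("ExampleCo", 2025,
                   ["financial results revenue", "market share", "strategy"],
                   site="sec.gov", exclude=["jobs"])
```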
77
deep-research/references/quality_gates.md
Normal file
@@ -0,0 +1,77 @@
# Quality Gates V6

## Gate 1: Task Notes Quality (after P2)

| Check | Standard | Lightweight | Fix |
|-------|----------|-------------|-----|
| All tasks completed | 100% | 100% | Re-dispatch failed tasks |
| Sources per task | >= 2 | >= 1 | Run additional searches |
| Findings per task | >= 3 | >= 2 | Deepen search or fetch more |
| DEEP tasks have Deep Read Notes | 100% | 100% | Fetch and read top source |
| All source URLs from actual search | 100% | 100% | Remove any invented URL |

## Gate 2: Citation Registry (after P3)

| Check | Standard | Lightweight | Fix |
|-------|----------|-------------|-----|
| Total approved sources | >= 12 | >= 6 | Flag thin areas for P6 |
| Unique domains | >= 5 | >= 3 | Diversify in re-search |
| Max single-source share | <= 25% | <= 30% | Find alternatives |
| Official source coverage | >= 30% | >= 20% | Add official sources |
| Source-type balance | >= 2 of {official, academic, secondary} | same | Fill missing type |
| Dropped sources listed | All | All | Must be explicit |
| No duplicate URLs | 0 duplicates | 0 duplicates | Merge during P3 |

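The quantitative Gate 2 thresholds can be sketched as a checker over the approved source list. The dict shape (`url`, `type` keys) is an assumption about how the registry would be represented in code.

```python
from collections import Counter
from urllib.parse import urlparse

def gate2_checks(sources, mode="standard"):
    """Evaluate Gate 2 thresholds on a list of approved sources.

    Each source is a dict like {"url": ..., "type": "official" | ...}.
    Returns a dict mapping check name -> pass/fail.
    """
    n = len(sources)
    domains = Counter(urlparse(s["url"]).netloc for s in sources)
    official = sum(1 for s in sources if s["type"] == "official")
    std = mode == "standard"
    return {
        "total_sources": n >= (12 if std else 6),
        "unique_domains": len(domains) >= (5 if std else 3),
        "max_single_source_share": max(domains.values()) / n <= (0.25 if std else 0.30),
        "official_coverage": official / n >= (0.30 if std else 0.20),
        "no_duplicate_urls": len({s["url"] for s in sources}) == n,
    }
```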
## Gate 3: Draft Quality (after P5)

| Check | Standard | Lightweight | Fix |
|-------|----------|-------------|-----|
| Every [n] in registry | 100% | 100% | Remove or fix |
| No dropped source cited | 0 violations | 0 violations | Remove immediately |
| Citation density | >= 1 per 200 words | >= 1 per 300 words | Add citations |
| Every section has confidence marker | 100% | 100% | Add missing |
| High-confidence claims backed by official source | 100% | 100% | Downgrade or re-source |
| Counter-claim recorded for major sections | 100% | 70% | Add opposing interpretation |
| Total word count | 3000-8000 | 2000-4000 | Adjust scope |

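The citation-density check above can be sketched as follows. The rounding rule (required citations = word count floor-divided by the per-citation budget, minimum one) is an assumption; the gate only states the ratio.

```python
import re

def citation_density_ok(text, mode="standard"):
    """Gate 3 citation density: >= 1 citation per 200 words (standard)
    or per 300 words (lightweight). Citations look like [1], [12]."""
    words = len(text.split())
    citations = len(re.findall(r"\[\d+\]", text))
    words_per_citation = 200 if mode == "standard" else 300
    required = max(1, words // words_per_citation)  # assumption: floor, min 1
    return citations >= required
```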
## Gate 4: Notes Traceability (after P6)

| Check | Threshold | Fix |
|-------|-----------|-----|
| Every specific claim traceable to a task note finding | 100% | Remove or mark [unverified] |
| Every statistic/number appears in some task note | 100% | Remove or verify |
| No claim contradicts a task note | 0 contradictions | Rewrite to match notes |
| Claims with recency sensitivity include source date and AS_OF | 100% | Add date metadata |
| P6 found >= 3 issues | Must | Re-examine harder if 0 found |

## Gate 5: Verification (after P7)

| Check | Threshold | Fix |
|-------|-----------|-----|
| Registry cross-check: all [n] valid | 100% | Remove invalid [n] |
| Spot-check: 5+ claims traced to notes | >= 4/5 pass | Fix failing claims |
| No dropped source resurrected | 0 | Remove immediately |
| Source concentration for key claims | None > 25% | Diversify |

## Anti-Hallucination Patterns

| Pattern | Where to detect | Fix |
|---------|-----------------|-----|
| URL not from any subagent search | P7 registry check | Remove citation |
| Claim not in any task note | P6 traceability check | Remove or mark [unverified] |
| Number more precise than source | P6 ("73.2%" when note says "about 70%") | Use note's precision |
| Source authority inflated | P3 registry building | Re-score from notes |
| Source type mismatched to claim | P3 + P6 | Reclassify or replace source |
| "Studies show..." without naming study | P6 | Name specific source or remove |
| Dropped source reappears | P7 cross-check | Remove immediately |
| Subagent invented a URL | Gate 1 (lead verifies subagent notes) | Remove from notes before P3 |

## Chinese-Specific Patterns

| Pattern | Fix |
|---------|-----|
| Fake CNKI URL format | Remove, note gap |
| "某专家表示" (an unnamed "expert says") without name/institution | Name or remove |
| "据统计" ("according to statistics") without data source | Add source or qualitative language |
| Fabricated institution report | Verify existence or remove |
| Stale model information without an AS_OF annotation | Downgrade confidence and re-search |

82
deep-research/references/report_template_v6.md
Normal file
@@ -0,0 +1,82 @@
# {{TITLE}}

> Research date: {{DATE}} | Source count: {{SOURCE_COUNT}} | Word count: ~{{WORD_COUNT}} | Mode: {{MODE}} | AS_OF: {{AS_OF}} | Official-source share: {{OFFICIAL_SHARE}}

## 摘要 / Executive Summary

{{200-400 words summarizing key findings, methodology, conclusions, and risks.}}

---

## 目录 / Table of Contents

{{Auto-generate from actual section headers below.}}

---

{{BODY SECTIONS — Adapt to topic type and include an opposing interpretation per section.}}

For each section:

## N. [Topic-Specific Section Title]

{{Section content with inline citations [1][2].
Standard mode: 500-1000 words per section.
Lightweight mode: 300-600 words per section.

Rules:
- Every factual claim requires a citation [n]
- Numbers/percentages must have a source
- When evidence conflicts, present support and rebuttal as a pair
}}

**置信度 / Confidence:** High/Medium/Low

**依据 / Rationale:** {{Why this confidence level — source agreement, evidence quality, data availability}}

**反方解释 / Counter-interpretation:** {{One explicit opposing interpretation with supporting citations if any, or [unverified] if insufficient.}}

---

{{COUNTER-REVIEW SUMMARY}}

- **核心争议 1:** [Claim A contrasted with counter-evidence B] [n][m]
- **核心争议 2:** ...

## 关键发现 / Key Findings

{{3-5 findings in Standard mode, 2-3 in Lightweight. Each finding should include:}}
- A specific conclusion
- Matching citations
- A confidence note

Example:
- **发现 1:** [Most important discovery] [3][7]
- **发现 2:** [Second most important] [1][4]

---

## 局限性与未来方向 / Limitations & Future Directions

### 本研究局限 / Limitations of This Study
{{Be explicit:
- What topics/angles couldn't be covered and why
- Methodological limits (web-accessible sources, paywalls, language, timing)
- Source coverage gaps and counter-claim evidence gaps
}}

### 未来方向 / Future Directions
{{Concrete suggestions for follow-up research, each with a priority and the evidence type required.}}

---

## 参考文献 / References

[1] Author/Org. "Title". Source-Type: official/academic/secondary-industry/journalism/community/other. As Of: YYYY-MM-DD. URL.
[2] Author/Org. "Title". Source-Type: ... As Of: YYYY-MM-DD. URL.

Rules:
- Every [n] in the body MUST have a matching entry here
- Every entry here MUST be cited at least once
- Source-Type and As Of fields are mandatory
- All URLs MUST come from actual search results (the P2 source pool)

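The first two reference rules are a bidirectional set check, which can be sketched directly. The data shapes (`body` as a string, `references` as a number-to-entry mapping) are assumptions for illustration.

```python
import re

def reference_integrity(body, references):
    """Check the two reference rules: every [n] cited in the body has an
    entry, and every entry is cited at least once.

    `references` maps citation number -> reference entry string.
    Returns (cited_but_missing, listed_but_uncited), both sorted lists.
    """
    cited = {int(n) for n in re.findall(r"\[(\d+)\]", body)}
    listed = set(references)
    return sorted(cited - listed), sorted(listed - cited)
```

A non-empty first list means a citation must be removed or an entry added; a non-empty second list means an unused entry must be cited or dropped.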
147
deep-research/references/research_notes_format.md
Normal file
@@ -0,0 +1,147 @@
# Research Notes Format Specification

The research notes are the ONLY communication channel between subagents and
the lead agent. Every fact in the final report must be traceable to a line in
these notes. No exceptions.

## File Structure

```
workspace/research-notes/
  task-a.md      Subagent A writes (history expert)
  task-b.md      Subagent B writes (transport historian)
  task-c.md      Subagent C writes (telecom analyst)
  task-d.md      Subagent D writes (comparative analyst)
  registry.md    Lead agent builds from task-*.md (P3)
```

## Per-Task Notes Format

Each `task-{id}.md` file follows this exact structure:

```markdown
---
task_id: a
role: Economic Historian
status: complete
sources_found: 4
---

## Sources

[1] Before AI skeptics, Luddites raged against the machine | https://www.nationalgeographic.com/... | Source-Type: secondary-industry | As Of: 2025-08 | Authority: 8/10
[2] Rage against the machine | https://www.cam.ac.uk/research/news/rage-against-the-machine | Source-Type: academic | As Of: 2024-04 | Authority: 8/10
[3] Luddite | https://en.wikipedia.org/wiki/Luddite | Source-Type: community | As Of: 2026-03 | Authority: 7/10
[4] Learning from the Luddites | https://forum.effectivealtruism.org/... | Source-Type: community | As Of: 2025-10 | Authority: 6/10

## Findings

- Luddite movement began March 11, 1811 in Arnold, Nottinghamshire. [3]
- Luddites were skilled craftspeople, not anti-technology extremists. [1][2]
- In the 100M-person textile industry, Luddites never exceeded a few thousand. [2]
- Government crushed movement: 12 executed at York Assizes, Jan 1813. [3]
- Movement collapsed by 1817 under military repression. [1]
- Full textile mechanization transition took 50-90 years (1760s-1850s). [4]
- Textile workers' real wages dropped ~70% during transition. [4]
- Key lesson for AI: Luddites organized AFTER displacement began, losing leverage. [4]

## Deep Read Notes

### Source [1]: National Geographic — Luddites and AI
Key data: destroyed up to 10,000 pounds of frames in first year alone.
Movement spread from Nottinghamshire to Yorkshire and Lancashire in 1812.
Children made up 2/3 of workforce at Cromford factory.
Key insight: Luddites attacked the SYSTEM of exploitation, not machines per se.
They protested manufacturers circumventing standard labor practices.
Useful for: framing section on historical displacement, correcting "anti-tech" myth

### Source [2]: Cambridge University
Key data: Luddites were "elite craftspeople" not working class broadly.
Yorkshire croppers had 7-year apprenticeships. Movement was localized, never exceeded a few thousand.
Key insight: The movement was smaller and more elite than popular history suggests.
Useful for: nuancing the scale of historical resistance

## Gaps

- Could not find quantitative data on how many specific jobs were lost to textile machines
- No Chinese-language academic sources on Luddite movement found
- Alternative explanation: displacement narrative may be partly confounded by wartime demand shocks
```

## Source Line Format

Each source line in the `## Sources` section must contain exactly:
```
[n] Title | URL | Source-Type: one-of{official|academic|secondary-industry|journalism|community|other} | As Of: YYYY-MM(or YYYY) | Authority: score/10
```

Rules:
- [n] numbers are LOCAL to this task file (start at [1])
- Lead agent will reassign GLOBAL [n] numbers in registry.md
- URL must be from an actual search result (subagent MUST NOT invent URLs)
- `Authority` score follows the guide in quality_gates.md
- `As Of` must be provided; use `undated` if unknown
- High-confidence claims in the final report must use `official` or `academic` sources

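Because the source line format is rigid, it can be validated mechanically. A minimal sketch of such a validator, assuming year-month or year-only `As Of` values as the format states:

```python
import re

SOURCE_LINE = re.compile(
    r"^\[(?P<n>\d+)\]\s+(?P<title>.+?)\s+\|\s+(?P<url>\S+)\s+\|\s+"
    r"Source-Type:\s+(?P<type>official|academic|secondary-industry|journalism|community|other)\s+\|\s+"
    r"As Of:\s+(?P<as_of>\d{4}(?:-\d{2})?|undated)\s+\|\s+"
    r"Authority:\s+(?P<score>\d+)/10$"
)

def parse_source_line(line):
    """Validate one `## Sources` line against the format above.
    Returns a dict of fields, or None if the line does not conform."""
    m = SOURCE_LINE.match(line.strip())
    return m.groupdict() if m else None
```

The lead agent could run this over every `## Sources` line during Gate 1 to catch malformed entries before registry building.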
## Findings Line Format

Each finding must be:
- One sentence of specific, factual information
- End with source number(s) in brackets: [1] or [1][2]
- Max 10 findings per task (forces prioritization)
- No vague claims like "research shows..." — name the specific source and fact

Good: `Full textile mechanization transition took 50-90 years (1760s-1850s). [4]`
Bad: `The transition took a long time. [4]`
Bad: `Studies suggest that it was a lengthy process.` (no source, vague)

## Deep Read Notes Format

For each source that was web_fetched (full article read):
- Key data: specific, numeric evidence from the article
- Key insight: the one thing this source says that others don't
- Useful for: which final section this supports

Max 4 lines per source. This is a research notebook, not a summary.

## Gaps Section

List what the subagent searched for but could NOT find, plus possible counter-readings.
This signals where evidence is thin and confidence should be lowered.

## Registry Format (built by lead agent in P3)

The `registry.md` file merges all task sources into a global registry and adds source-type / as-of fields.

```markdown
# Citation Registry
Built from: task-a.md, task-b.md, task-c.md, task-d.md

## Approved Sources

[1] National Geographic — Luddites | https://www.nationalgeographic.com/... | Source-Type: secondary-industry | As Of: 2025-08 | Auth: 8 | From: task-a
[2] Cambridge — Rage against machine | https://www.cam.ac.uk/... | Source-Type: academic | As Of: 2024-04 | Auth: 8 | From: task-a
[3] Microsoft — Day Horse Lost Job | https://blogs.microsoft.com/... | Source-Type: official | As Of: 2026-01 | Auth: 8 | From: task-b
...
[N] Last source

## Dropped

x Quora answer | https://www.quora.com/... | Source-Type: community | As Of: 2024-10 | Auth: 3 | Reason: below threshold
x Study.com | https://study.com/... | Source-Type: secondary-industry | As Of: undated | Auth: 4 | Reason: better sources available

## Stats

Total evaluated: 22
Approved: 16
Dropped: 6
Unique domains: 12
Source-type: official 4 / academic 3 / secondary-industry 5 / journalism 2 / community 2
Max single-source share: 3/16 = 19% (pass)
```

Rules for registry:
- [n] numbers here are FINAL — they appear unchanged in the report
- Every [n] in the report must exist in the Approved list
- Every Dropped source must NEVER appear in the report
- If two tasks found the same URL, keep it once with the higher authority score

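The last registry rule (deduplicate by URL, keep the higher authority score) can be sketched as a merge step. The dict shape for a source (`url`, `authority` keys) is an illustrative assumption.

```python
def merge_sources(task_sources):
    """Merge per-task source lists into one registry pool.

    Applies the rule above: a URL found by multiple tasks is kept once,
    with the higher authority score, recording the winning task as origin.
    `task_sources` maps task_id -> list of source dicts.
    """
    pool = {}
    for task_id, sources in task_sources.items():
        for src in sources:  # src: {"url": ..., "authority": int, ...}
            seen = pool.get(src["url"])
            if seen is None or src["authority"] > seen["authority"]:
                pool[src["url"]] = {**src, "from": task_id}
    return list(pool.values())
```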
179
deep-research/references/source_accessibility_policy.md
Normal file
@@ -0,0 +1,179 @@
# Source Accessibility Policy

**Version**: V6.1
**Purpose**: Distinguish between legitimate exclusive information advantages and circular verification traps

---

## The Problem

In the "深度推理" case study, we made a **methodology error**:

**What happened**:
1. The user asked us to research **their own company**: "深度推理(上海)科技有限公司"
2. We accessed the user's **own Spaceship account** (their private registrar)
3. Found 25 domains **the user already owned**
4. Reported back: "The company owns these 25 domains"

**Why this is wrong**:
- This is **circular reasoning**, not research
- The user asked us to *discover* information about their company
- We instead *queried* their private data and presented it as findings
- It's like looking in someone's wallet to tell them how much money they have

**The real question**: Can an external investigator confirm this company exists?
**Answer**: No (WHOIS privacy, no public records)

---

## Core Principle: No Circular Verification

### ❌ FORBIDDEN: Self-Verification

When researching **the user's own assets/company/identity**:

| Scenario | WRONG | RIGHT |
|----------|-------|-------|
| User's company | "I found in YOUR registrar that YOU own these domains" | "Public WHOIS shows privacy protection - ownership not externally verifiable" |
| User's identity | "I checked YOUR email and found YOUR address" | "Please provide your address if relevant to the research" |
| User's property | "I accessed YOUR bank to see YOUR balance" | Not applicable to research |

**Rule**: Never use the user's private data to "discover" what the user already knows about themselves.

---

### ✅ ALLOWED: Exclusive Information Advantage

When researching **third parties** (competitors, markets, investments):

| Source Type | Example | Usage |
|-------------|---------|-------|
| **User's paid subscriptions** | Crunchbase Pro, PitchBook, Wind | ✅ Use to research competitors |
| **User's proprietary databases** | Internal CRM, industry databases | ✅ Use to research market |
| **User's private APIs** | Trading APIs, data feeds | ✅ Use for investment research |
| **User's internal documents** | Prior research, memos | ✅ Use as background for new research |

**Rule**: The user's exclusive information sources are competitive advantages - USE THEM for third-party research.

---

## The Distinction

```
Research Target: 深度推理(上海)科技有限公司
├─ Is this the user's own company? → YES
├─ Can we use user's private data about it? → NO (circular)
└─ Must rely on: Public sources only

Research Target: Competitor Company X
├─ Is this the user's own company? → NO
├─ Can we use user's Crunchbase Pro? → YES (competitive advantage)
└─ Can use: Public + user's exclusive sources
```

---

## Corrected Methodology

### When Researching User's Own Company

**Approach**: External investigator perspective

```
User: "Research my company 深度推理"

CORRECT RESPONSE:
1. Search public sources (WHOIS, web, news)
2. Find: Website placeholder, privacy-protected WHOIS, no news
3. Report: "From public perspective: minimal footprint, cannot verify ownership"
4. Gap: "Internal data not accessible to external investigators"

INCORRECT RESPONSE:
1. Access user's Spaceship account
2. Find: 25 domains user already knows they own
3. Report: "The company owns 25 domains" (user already knows this!)
```

### When User Provides Exclusive Sources

**Approach**: Leverage competitive advantage

```
User: "Research competitor X, I have Crunchbase Pro"
User: "Here's my API key: xxx"

CORRECT RESPONSE:
1. Use provided Crunchbase Pro API
2. Find: Funding history, team info not in public sources
3. Report: "Per Crunchbase Pro [exclusive source], X raised $Y in Series Z"
4. Cite: Accessibility: exclusive (user-provided)
```

---

## Source Classification

### public ✅
- Available to any external researcher
- Examples: Public websites, news, SEC filings

### exclusive-user-provided ✅ (FOR THIRD-PARTY RESEARCH)
- User's paid subscriptions, private APIs, internal databases
- **USE for**: Researching competitors, markets, investments
- **DO NOT USE for**: Verifying user's own assets/identity

### private-user-owned ❌ (FOR SELF-RESEARCH)
- User's own accounts, emails, personal data
- **DO NOT USE**: Creates circular verification

---

## Information Black Box Protocol

When an entity (including the user's own company) has no public footprint:

1. **Document what an external researcher would find**:
   - WHOIS: Privacy protected
   - Web search: No results
   - News: No coverage

2. **Report honestly**:
   ```
   Public sources found: 0
   External visibility: None
   Verdict: Cannot verify from public perspective
   Note: User may have private information not available to external investigators
   ```

3. **Do NOT**:
   - Use user's private data to "fill gaps"
   - Present user's private knowledge as "discovered evidence"

---

## Checklist

When starting research, determine:

1. **Who is the research target?**
   - User's own company/asset? → Public sources ONLY
   - Third party? → Can use user's exclusive sources

2. **Am I discovering or querying?**
   - Discovering new info? → Research
   - Querying user's own data? → Circular, not allowed

3. **Would this finding surprise the user?**
   - Yes → Legitimate research
   - No (they already know) → Probably circular verification

---

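The checklist's first question reduces to a two-input decision over the source classification defined earlier. A minimal sketch (the boolean/string encoding is an assumption):

```python
def may_use_source(target_is_users_own, accessibility):
    """Decide whether a source may be used for a given research target.

    accessibility: "public" | "exclusive-user-provided" | "private-user-owned"
    """
    if accessibility == "public":
        return True  # available to any external researcher
    if accessibility == "private-user-owned":
        return False  # user's own accounts/data: circular verification
    # exclusive-user-provided: an advantage for third parties,
    # but circular when the target is the user's own company/asset
    return not target_is_users_own
```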
## Summary

| Situation | Can Use User's Private Data? | Why? |
|-----------|------------------------------|------|
| Research user's own company | ❌ NO | Circular verification |
| Research competitor using user's Crunchbase | ✅ YES | Competitive advantage |
| Research market using user's database | ✅ YES | Exclusive information |
| "Discover" user's own domain ownership | ❌ NO | User already knows this |

116
deep-research/references/subagent_prompt.md
Normal file
@@ -0,0 +1,116 @@
# Subagent Prompt Template

This file defines the prompt structure sent to each research subagent.
The lead agent fills in the `{variables}` and dispatches.

## Prompt

```
You are a research specialist with the role: {role}.

## Your Task

{objective}

## Search Queries (start with these, adjust as needed)

1. {query_1}
2. {query_2}
3. {query_3} (optional)

## Instructions

1. Run 2-4 web searches using the queries above (and variations).
2. For the best 2-3 results, use web_fetch to read the full article.
3. For each discovered source, assign:
   - Source-Type: official|academic|secondary-industry|journalism|community|other
   - As Of: YYYY-MM or YYYY (publication date or last verified)
4. Assess each source's authority (1-10 scale).
5. Write ALL findings to the file: {output_path}
6. Record at least one explicit counter-claim candidate in `Gaps`.
7. Use EXACTLY the format below. Do not deviate.

## Output Format (write this to {output_path})

---
task_id: {task_id}
role: {role}
status: complete
sources_found: {N}
---

## Sources

[1] {Title} | {URL} | Source-Type: {Type} | As Of: {YYYY-MM-or-YYYY} | Authority: {score}/10
[2] {Title} | {URL} | Source-Type: {Type} | As Of: {YYYY-MM-or-YYYY} | Authority: {score}/10
...

## Findings

- {Specific fact, with source number}. [1]
- {Specific fact, with source number and confidence}. [2]
- {Another fact}. [1]
... (max 10 findings, each one sentence, each with source number)

## Deep Read Notes

### Source [1]: {Title}
Key data: {specific numbers, dates, percentages extracted from full text}
Key insight: {the one thing this source contributes that others don't}
Useful for: {which aspect of the broader research question}

### Source [2]: {Title}
Key data: ...
Key insight: ...
Useful for: ...

## Gaps

- {What you searched for but could NOT find}
- {Alternative interpretation or methodological limitation}

## END

Do not include any content after the Gaps section.
Do not summarize your process. Write the findings file and stop.
```

## Depth Levels

**DEEP** — web_fetch 2-3 full articles and write detailed Deep Read Notes.
Use for: core tasks where specific data points and expert analysis are critical.

**SCAN** — rely mainly on search snippets, fetch at most 1 article.
Use for: supplementary tasks like source mapping.

## Environment-Specific Dispatch

### Claude Code
```bash
# Single task
claude -p "$(cat workspace/prompts/task-a.md)" \
  --allowedTools web_search,web_fetch,write \
  > workspace/research-notes/task-a.md

# Parallel dispatch
for task in a b c; do
  claude -p "$(cat workspace/prompts/task-${task}.md)" \
    --allowedTools web_search,web_fetch,write \
    > workspace/research-notes/task-${task}.md &
done
wait
```

### Cowork
Spawn subagent tasks via the subagent dispatch mechanism.

### DeerFlow / OpenClaw
Use the `task` tool:

```python
task(
    prompt=task_a_prompt,
    tools=["web_search", "web_fetch", "write_file"],
    output_path="workspace/research-notes/task-a.md"
)
```
Block a user