Merge feature/fix-csharp-and-config-type-bugs: C3.10 Signal Flow + Complete Godot Support
Features:
- C3.10: Signal Flow Analysis for Godot projects (208 signals, 634 connections)
- Complete Godot game engine support (.gd, .tscn, .tres, .gdshader)
- GDScript dependency extraction with preload/load/extends patterns
- GDScript test extraction (GUT, gdUnit4, WAT frameworks)
- Signal-based how-to guides generation

Fixes:
- GDScript dependency extraction (265+ syntax errors eliminated)
- Framework detection false positive (Unity → Godot)
- Circular dependency detection (self-loops filtered)
- GDScript test discovery (32 test files found)
- Config extractor array handling (JSON/YAML root arrays)
- Progress indicators for small batches

Tests:
- Added comprehensive GDScript test extraction test case
- 396 test cases extracted from 20 GUT test files
CHANGELOG.md (115 changed lines)
@@ -17,6 +17,57 @@ This release brings powerful new code analysis features, performance optimizatio
### Added

#### C3.10: Signal Flow Analysis for Godot Projects

- **Complete Signal Flow Analysis System**: Analyze event-driven architectures in Godot game projects
  - Signal declaration extraction (`signal` keyword detection)
  - Connection mapping (`.connect()` calls with targets and methods)
  - Emission tracking (`.emit()` and `emit_signal()` calls)
  - **208 signals**, **634 connections**, and **298 emissions** detected in test project (Cosmic Idler)
  - Signal density metrics (signals per file)
  - Event chain detection (signals triggering other signals)
  - Output: `signal_flow.json`, `signal_flow.mmd` (Mermaid diagram), `signal_reference.md`

- **Signal Pattern Detection**: Three major patterns identified
  - **EventBus Pattern** (0.90 confidence): Centralized signal hub in autoload
  - **Observer Pattern** (0.85 confidence): Multi-observer signals (3+ listeners)
  - **Event Chains** (0.80 confidence): Cascading signal propagation

- **Signal-Based How-To Guides (C3.10.1)**: AI-generated usage guides
  - Step-by-step guides (Connect → Emit → Handle)
  - Real code examples from project
  - Common usage locations
  - Parameter documentation
  - Output: `signal_how_to_guides.md` (10 guides for Cosmic Idler)

#### Godot Game Engine Support

- **Comprehensive Godot File Type Support**: Full analysis of Godot 4.x projects
  - **GDScript (.gd)**: 265 files analyzed in test project
  - **Scene files (.tscn)**: 118 scene files
  - **Resource files (.tres)**: 38 resource files
  - **Shader files (.gdshader, .gdshaderinc)**: 9 shader files
  - **C# integration**: Phantom Camera addon (13 files)

- **GDScript Language Support**: Complete GDScript parsing with regex-based extraction
  - Dependency extraction: `preload()`, `load()`, `extends` patterns
  - Test framework detection: GUT, gdUnit4, WAT
  - Test file patterns: `test_*.gd`, `*_test.gd`
  - Signal syntax: `signal`, `.connect()`, `.emit()`
  - Export decorators: `@export`, `@onready`
  - Test decorators: `@test` (gdUnit4)
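The constructs listed above can all be matched with plain regular expressions; a minimal sketch of that idea, with illustrative patterns that may differ from the analyzer's actual regexes:

```python
import re

# Illustrative patterns for the GDScript constructs listed above;
# the analyzer's real regexes may differ.
SIGNAL_RE = re.compile(r"signal\s+(\w+)(?:\(([^)]*)\))?")
CONNECT_RE = re.compile(r"(\w+(?:\.\w+)*)\.connect\(([^)]+)\)")
EXPORT_RE = re.compile(r"@export\s+var\s+(\w+)")

source = '''
signal health_changed(new_value)
@export var speed: float = 100.0
func _ready():
    health_changed.connect(_on_health_changed)
'''

signals = SIGNAL_RE.findall(source)       # [(name, params)]
connections = CONNECT_RE.findall(source)  # [(signal path, handler)]
exports = EXPORT_RE.findall(source)       # [variable names]
```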
- **Game Engine Framework Detection**: Improved detection for Unity, Unreal, Godot
  - **Godot markers**: `project.godot`, `.godot` directory, `.tscn`, `.tres`, `.gd` files
  - **Unity markers**: `Assembly-CSharp.csproj`, `UnityEngine.dll`, `ProjectSettings/ProjectVersion.txt`
  - **Unreal markers**: `.uproject`, `Source/`, `Config/DefaultEngine.ini`
  - Fixed false positive Unity detection (was using generic "Assets" keyword)

- **GDScript Test Extraction**: Extract usage examples from Godot test files
  - **396 test cases** extracted from 20 GUT test files in test project
  - Patterns: instantiation (`preload().new()`, `load().new()`), assertions (`assert_eq`, `assert_true`), signals
  - GUT framework: `extends GutTest`, `func test_*()`, `add_child_autofree()`
  - Test categories: instantiation, assertions, signal connections, setup/teardown
  - Real code examples from production test files
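Mining a GUT test file for these categories can be sketched as a small dictionary of regexes; the names and patterns below are illustrative, not the tool's exact `PATTERNS` dictionary:

```python
import re

# Hypothetical pattern set for mining GUT test files; illustrative only.
GUT_PATTERNS = {
    "instantiation": re.compile(r"""(?:preload|load)\(\s*["'].+?["']\s*\)\.new\(\)"""),
    "assertion": re.compile(r"assert_(?:eq|true|false|null)\([^)]*\)"),
    "test_function": re.compile(r"func\s+(test_\w+)\s*\("),
}

test_source = '''
extends GutTest

func test_player_spawns():
    var player = preload("res://player.gd").new()
    add_child_autofree(player)
    assert_eq(player.health, 100)
    assert_true(player.is_alive)
'''

# Group matches by category name
found = {name: pat.findall(test_source) for name, pat in GUT_PATTERNS.items()}
```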
#### C3.9: Project Documentation Extraction

- **Markdown Documentation Extraction**: Automatically extracts and categorizes all `.md` files from projects
  - Smart categorization by folder/filename (overview, architecture, guides, workflows, features, etc.)

@@ -74,7 +125,7 @@ This release brings powerful new code analysis features, performance optimizatio

- Updated documentation with GLM-4.7 configuration examples
- Rewritten LOCAL mode in `config_enhancer.py` to use Claude CLI properly with explicit output file paths
- Updated MCP `scrape_codebase_tool` with `skip_docs` and `enhance_level` parameters
- Updated CLAUDE.md with C3.9 documentation extraction feature and --enhance-level flag
- Updated CLAUDE.md with C3.9 documentation extraction feature
- Increased default batch size from 5 to 20 patterns for LOCAL mode

### Fixed

@@ -83,19 +134,61 @@ This release brings powerful new code analysis features, performance optimizatio

- **LocalSkillEnhancer Import**: Fixed incorrect import and method call in `main.py` (SkillEnhancer → LocalSkillEnhancer)
- **Code Quality**: Fixed 4 critical linter errors (unused imports, variables, arguments, import sorting)

#### Godot Game Engine Fixes

- **GDScript Dependency Extraction**: Fixed 265+ "Syntax error in *.gd" warnings (commit 3e6c448)
  - GDScript files were incorrectly routed to the Python AST parser
  - Created dedicated `_extract_gdscript_imports()` with regex patterns
  - Now correctly parses `preload()`, `load()`, `extends` patterns
  - Result: 377 dependencies extracted with 0 warnings
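Routing `.gd` files away from `ast.parse` and running lightweight regexes instead might look like the sketch below; the function body is illustrative, and the real `_extract_gdscript_imports()` may differ:

```python
import re

def extract_gdscript_imports(content: str) -> list[str]:
    """Collect res:// paths from preload()/load() plus base classes from
    extends. Illustrative sketch of the dedicated GDScript extractor."""
    deps = []
    # preload("res://...") and load("res://...")
    deps += re.findall(r"""(?:preload|load)\(\s*["'](res://[^"']+)["']\s*\)""", content)
    # extends "res://..." or extends SomeClass (line-initial only)
    deps += re.findall(r"""^extends\s+["']?([\w./:]+)["']?""", content, re.MULTILINE)
    return deps

source = '''
extends Node2D
var scene = preload("res://enemy.tscn")
var script = load("res://utils/math.gd")
'''
```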
- **Framework Detection False Positive**: Fixed Unity detection on Godot projects (commit 50b28fe)
  - Was detecting "Unity" due to generic "Assets" keyword in comments
  - Changed Unity markers to specific files: `Assembly-CSharp.csproj`, `UnityEngine.dll`, `Library/`
  - Now correctly detects Godot via `project.godot`, `.godot` directory

- **Circular Dependencies**: Fixed self-referential cycles (commit 50b28fe)
  - 3 self-loop warnings (files depending on themselves)
  - Added `target != file_path` check in dependency graph builder
  - Result: 0 circular dependencies detected
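The self-loop guard amounts to skipping any edge whose target resolves back to the source file. A minimal sketch, with a hypothetical graph shape (the builder's real data structures may differ):

```python
def build_edges(deps: dict[str, list[str]]) -> list[tuple[str, str]]:
    """Build dependency edges, dropping self-referential ones."""
    edges = []
    for file_path, targets in deps.items():
        for target in targets:
            if target != file_path:  # the fix: skip self-loops
                edges.append((file_path, target))
    return edges

# "a.gd" lists itself as a dependency; the guard filters that edge out.
deps = {"a.gd": ["a.gd", "b.gd"], "b.gd": ["c.gd"]}
```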
- **GDScript Test Discovery**: Fixed 0 test files found in Godot projects (commit 50b28fe)
  - Added GDScript test patterns: `test_*.gd`, `*_test.gd`
  - Added GDScript to LANGUAGE_MAP
  - Result: 32 test files discovered (20 GUT files with 396 tests)

- **GDScript Test Extraction**: Fixed "Language GDScript not supported" warning (commit c826690)
  - Added GDScript regex patterns to PATTERNS dictionary
  - Patterns: instantiation (`preload().new()`), assertions (`assert_eq`), signals (`.connect()`)
  - Result: 22 test examples extracted successfully

- **Config Extractor Array Handling**: Fixed JSON/YAML array parsing (commit fca0951)
  - Error: `'list' object has no attribute 'items'` on root-level arrays
  - Added isinstance checks for dict/list/primitive at root
  - Result: No JSON array errors, save.json parsed correctly
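The fix boils down to checking the root value's type before calling `.items()`. A sketch under assumed names (the function below is illustrative, not the extractor's actual API):

```python
import json

def iter_config_entries(raw: str):
    """Yield (key, value) pairs from a JSON document whose root may be a
    dict, a list, or a primitive; .items() is called only on dicts."""
    data = json.loads(raw)
    if isinstance(data, dict):
        yield from data.items()
    elif isinstance(data, list):
        # Root-level arrays (e.g. a save.json) get index-based keys
        for i, item in enumerate(data):
            yield (str(i), item)
    else:
        # Primitive root value
        yield ("value", data)
```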
- **Progress Indicators**: Fixed missing progress for small batches (commit eec37f5)
  - Progress only shown every 5 batches, invisible for small jobs
  - Modified condition to always show for batches < 10
  - Result: "Progress: 1/2 batches completed" now visible

#### Other Fixes

- **C# Test Extraction**: Fixed "Language C# not supported" error with language alias mapping
- **Config Type Field Mismatch**: Fixed KeyError in `config_enhancer.py` by supporting both "type" and "config_type" fields
- **LocalSkillEnhancer Import**: Fixed incorrect import and method call in `main.py` (SkillEnhancer → LocalSkillEnhancer)
- **Code Quality**: Fixed 4 critical linter errors (unused imports, variables, arguments, import sorting)

### Tests

- **GDScript Test Extraction Test**: Added comprehensive test case for GDScript GUT/gdUnit4 framework
  - Tests player instantiation with `preload()` and `load()`
  - Tests signal connections and emissions
  - Tests gdUnit4 `@test` annotation syntax
  - Tests game state management patterns
  - 4 test functions with 60+ lines of GDScript code
  - Validates extraction of instantiations, assertions, and signal patterns

### Removed

- Removed client-specific documentation files from repository

### 🙏 Contributors

A huge thank you to everyone who contributed to this release:

- **[@xuintl](https://github.com/xuintl)** - Chinese README improvements and documentation refinements
- **[@Zhichang Yu](https://github.com/yuzhichang)** - GLM-4.7 support and PDF scraper fixes
- **[@YusufKaraaslanSpyke](https://github.com/yusufkaraaslan)** - Core features, bug fixes, and project maintenance

Special thanks to all our community members who reported issues, provided feedback, and helped test new features. Your contributions make Skill Seekers better for everyone! 🎉

---

## [2.7.4] - 2026-01-22
CLAUDE.md (14 changed lines)
@@ -292,6 +292,11 @@ skill-seekers analyze --directory . --comprehensive
# With AI enhancement (auto-detects API or LOCAL)
skill-seekers analyze --directory . --enhance

# Granular AI enhancement control (NEW)
skill-seekers analyze --directory . --enhance-level 1  # SKILL.md only
skill-seekers analyze --directory . --enhance-level 2  # + Architecture + Config + Docs
skill-seekers analyze --directory . --enhance-level 3  # Full enhancement (all features)

# Disable specific features
skill-seekers analyze --directory . --skip-patterns --skip-how-to-guides
```

@@ -299,6 +304,15 @@ skill-seekers analyze --directory . --skip-patterns --skip-how-to-guides

- Generates 300+ line standalone SKILL.md files from codebases
- All C3.x features integrated (patterns, tests, guides, config, architecture, docs)
- Complete codebase analysis without documentation scraping
- **NEW**: Granular AI enhancement control with `--enhance-level` (0-3)

**C3.9 Project Documentation Extraction** (`codebase_scraper.py`):
- Extracts and categorizes all markdown files from the project
- Auto-detects categories: overview, architecture, guides, workflows, features, etc.
- Integrates documentation into SKILL.md with summaries
- AI enhancement (level 2+) adds topic extraction and cross-references
- Controlled by depth: surface=raw copy, deep=parse+summarize, full=AI-enhanced
- Default ON, use `--skip-docs` to disable

**C3.9 Project Documentation Extraction** (`codebase_scraper.py`):
- Extracts and categorizes all markdown files from the project
@@ -36,7 +36,6 @@ logger = logging.getLogger(__name__)
# Import config manager for settings
try:
    from skill_seekers.cli.config_manager import get_config_manager

    CONFIG_AVAILABLE = True
except ImportError:
    CONFIG_AVAILABLE = False

@@ -108,9 +107,7 @@ class AIEnhancer:

                logger.warning("⚠️ anthropic package not installed, falling back to LOCAL mode")
                self.mode = "local"
            except Exception as e:
                logger.warning(
                    f"⚠️ Failed to initialize API client: {e}, falling back to LOCAL mode"
                )
                logger.warning(f"⚠️ Failed to initialize API client: {e}, falling back to LOCAL mode")
                self.mode = "local"

        if self.mode == "local" and self.enabled:

@@ -215,8 +212,7 @@ DO NOT include any explanation - just write the JSON file.

        except json.JSONDecodeError:
            # Try to find JSON in the response
            import re

            json_match = re.search(r"\[[\s\S]*\]|\{[\s\S]*\}", response_text)
            json_match = re.search(r'\[[\s\S]*\]|\{[\s\S]*\}', response_text)
            if json_match:
                return json_match.group()
            logger.warning("⚠️ Could not parse JSON from LOCAL response")

@@ -302,7 +298,8 @@ class PatternEnhancer(AIEnhancer):

                try:
                    results[idx] = future.result()
                    completed += 1
                    if completed % 5 == 0 or completed == total:
                    # Show progress: always for small jobs (<10), every 5 for larger jobs
                    if total < 10 or completed % 5 == 0 or completed == total:
                        logger.info(f"   Progress: {completed}/{total} batches completed")
                except Exception as e:
                    logger.warning(f"⚠️ Batch {idx} failed: {e}")

@@ -439,7 +436,8 @@ class TestExampleEnhancer(AIEnhancer):

                try:
                    results[idx] = future.result()
                    completed += 1
                    if completed % 5 == 0 or completed == total:
                    # Show progress: always for small jobs (<10), every 5 for larger jobs
                    if total < 10 or completed % 5 == 0 or completed == total:
                        logger.info(f"   Progress: {completed}/{total} batches completed")
                except Exception as e:
                    logger.warning(f"⚠️ Batch {idx} failed: {e}")
@@ -88,6 +88,11 @@ class ArchitecturalPatternDetector:
    # Framework detection patterns
    FRAMEWORK_MARKERS = {
        # Game Engines (checked first to avoid false positives)
        "Unity": ["Assembly-CSharp.csproj", "UnityEngine.dll", "ProjectSettings/ProjectVersion.txt", ".unity", "Library/"],
        "Unreal": ["Source/", ".uproject", "Config/DefaultEngine.ini", "Binaries/", "Content/"],
        "Godot": ["project.godot", ".godot", ".tscn", ".tres", ".gd"],
        # Web Frameworks
        "Django": ["django", "manage.py", "settings.py", "urls.py"],
        "Flask": ["flask", "app.py", "wsgi.py"],
        "Spring": ["springframework", "@Controller", "@Service", "@Repository"],
@@ -181,17 +186,48 @@ class ArchitecturalPatternDetector:
        return dict(structure)

    def _detect_frameworks(self, _directory: Path, files: list[dict]) -> list[str]:
    def _detect_frameworks(self, directory: Path, files: list[dict]) -> list[str]:
        """Detect frameworks being used"""
        detected = []

        # Check file paths and content
        # Check file paths from analyzed files
        all_paths = [str(f.get("file", "")) for f in files]
        all_content = " ".join(all_paths)

        # Also check actual directory structure for game engine markers
        # (project.godot, .unity, .uproject are config files, not in analyzed files)
        dir_files = []
        try:
            # Get all files and directories in the root (non-recursive for performance)
            for item in directory.iterdir():
                dir_files.append(item.name)
        except Exception as e:
            logger.warning(f"Could not scan directory for framework markers: {e}")

        dir_content = " ".join(dir_files)

        # Check game engines FIRST (priority detection)
        for framework in ["Unity", "Unreal", "Godot"]:
            if framework in self.FRAMEWORK_MARKERS:
                markers = self.FRAMEWORK_MARKERS[framework]
                # Check both analyzed files AND directory structure
                file_matches = sum(1 for marker in markers if marker.lower() in all_content.lower())
                dir_matches = sum(1 for marker in markers if marker.lower() in dir_content.lower())
                total_matches = file_matches + dir_matches

                if total_matches >= 2:
                    detected.append(framework)
                    logger.info(f"   📦 Detected framework: {framework}")
                    # Return early to prevent web framework false positives
                    return detected

        # Check other frameworks
        for framework, markers in self.FRAMEWORK_MARKERS.items():
            if framework in ["Unity", "Unreal", "Godot"]:
                continue  # Already checked

            matches = sum(1 for marker in markers if marker.lower() in all_content.lower())
            if matches >= 2:  # Require at least 2 markers
            if matches >= 2:
                detected.append(framework)
                logger.info(f"   📦 Detected framework: {framework}")
@@ -105,6 +105,15 @@ class CodeAnalyzer:
        try:
            if language == "Python":
                return self._analyze_python(content, file_path)
            elif language == "GDScript":
                # GDScript has Godot-specific syntax, use dedicated parser
                return self._analyze_gdscript(content, file_path)
            elif language == "GodotScene":
                return self._analyze_godot_scene(content, file_path)
            elif language == "GodotResource":
                return self._analyze_godot_resource(content, file_path)
            elif language == "GodotShader":
                return self._analyze_godot_shader(content, file_path)
            elif language in ["JavaScript", "TypeScript"]:
                return self._analyze_javascript(content, file_path)
            elif language in ["C", "C++"]:
@@ -1421,6 +1430,352 @@ class CodeAnalyzer:
        return comments

    def _analyze_godot_scene(self, content: str, file_path: str) -> dict[str, Any]:
        """
        Analyze Godot .tscn scene file.

        Extracts:
        - Node hierarchy
        - Script attachments
        - External resource dependencies
        - Scene metadata
        """
        nodes = []
        resources = []
        scripts = []

        # Extract external resources
        for match in re.finditer(r'\[ext_resource.*?type="(.+?)".*?path="(.+?)".*?id="(.+?)"\]', content):
            res_type, path, res_id = match.groups()
            resources.append({
                "type": res_type,
                "path": path,
                "id": res_id
            })

            # Track scripts separately
            if res_type == "Script":
                scripts.append({
                    "path": path,
                    "id": res_id
                })

        # Extract nodes
        for match in re.finditer(r'\[node name="(.+?)".*?type="(.+?)".*?\]', content):
            node_name, node_type = match.groups()

            # Check if node has a script attached
            script_match = re.search(rf'\[node name="{re.escape(node_name)}".*?script = ExtResource\("(.+?)"\)', content, re.DOTALL)
            attached_script = script_match.group(1) if script_match else None

            nodes.append({
                "name": node_name,
                "type": node_type,
                "script": attached_script
            })

        return {
            "file": file_path,
            "nodes": nodes,
            "scripts": scripts,
            "resources": resources,
            "scene_metadata": {
                "node_count": len(nodes),
                "script_count": len(scripts),
                "resource_count": len(resources)
            }
        }

    def _analyze_godot_resource(self, content: str, file_path: str) -> dict[str, Any]:
        """
        Analyze Godot .tres resource file.

        Extracts:
        - Resource type and class
        - Script reference
        - Properties and values
        - External dependencies
        """
        properties = []
        resources = []
        resource_type = None
        script_class = None
        script_path = None

        # Extract resource header
        header_match = re.search(r'\[gd_resource type="(.+?)"(?:\s+script_class="(.+?)")?\s+', content)
        if header_match:
            resource_type = header_match.group(1)
            script_class = header_match.group(2)

        # Extract external resources
        for match in re.finditer(r'\[ext_resource.*?type="(.+?)".*?path="(.+?)".*?id="(.+?)"\]', content):
            res_type, path, res_id = match.groups()
            resources.append({
                "type": res_type,
                "path": path,
                "id": res_id
            })

            if res_type == "Script":
                script_path = path

        # Extract properties from [resource] section
        resource_section = re.search(r'\[resource\](.*?)(?:\n\[|$)', content, re.DOTALL)
        if resource_section:
            prop_text = resource_section.group(1)

            for line in prop_text.strip().split('\n'):
                if '=' in line:
                    key, value = line.split('=', 1)
                    properties.append({
                        "name": key.strip(),
                        "value": value.strip()
                    })

        return {
            "file": file_path,
            "resource_type": resource_type,
            "script_class": script_class,
            "script_path": script_path,
            "properties": properties,
            "resources": resources,
            "resource_metadata": {
                "property_count": len(properties),
                "dependency_count": len(resources)
            }
        }

    def _analyze_godot_shader(self, content: str, file_path: str) -> dict[str, Any]:
        """
        Analyze Godot .gdshader shader file.

        Extracts:
        - Shader type (spatial, canvas_item, particles, etc.)
        - Uniforms (parameters)
        - Functions
        - Varying variables
        """
        uniforms = []
        functions = []
        varyings = []
        shader_type = None

        # Extract shader type
        type_match = re.search(r'shader_type\s+(\w+)', content)
        if type_match:
            shader_type = type_match.group(1)

        # Extract uniforms
        for match in re.finditer(r'uniform\s+(\w+)\s+(\w+)(?:\s*:\s*(.+?))?(?:\s*=\s*(.+?))?;', content):
            uniform_type, name, hint, default = match.groups()
            uniforms.append({
                "name": name,
                "type": uniform_type,
                "hint": hint,
                "default": default
            })

        # Extract varying variables
        for match in re.finditer(r'varying\s+(\w+)\s+(\w+)', content):
            var_type, name = match.groups()
            varyings.append({
                "name": name,
                "type": var_type
            })

        # Extract functions
        for match in re.finditer(r'void\s+(\w+)\s*\(([^)]*)\)', content):
            func_name, params = match.groups()
            functions.append({
                "name": func_name,
                "parameters": params.strip() if params else ""
            })

        return {
            "file": file_path,
            "shader_type": shader_type,
            "uniforms": uniforms,
            "varyings": varyings,
            "functions": functions,
            "shader_metadata": {
                "uniform_count": len(uniforms),
                "function_count": len(functions)
            }
        }

    def _analyze_gdscript(self, content: str, file_path: str) -> dict[str, Any]:
        """
        Analyze GDScript file using regex (Godot-specific syntax).

        GDScript has Python-like syntax but with Godot-specific keywords:
        - class_name MyClass extends Node
        - func _ready(): (functions)
        - signal my_signal(param)
        - @export var speed: float = 100.0
        - @onready var sprite = $Sprite2D
        """
        classes = []
        functions = []
        signals = []
        exports = []

        # Extract class definition
        class_match = re.search(r'class_name\s+(\w+)(?:\s+extends\s+(\w+))?', content)
        if class_match:
            class_name = class_match.group(1)
            extends = class_match.group(2)
            classes.append({
                "name": class_name,
                "bases": [extends] if extends else [],
                "methods": [],
                "line_number": content[: class_match.start()].count("\n") + 1
            })

        # Extract functions
        for match in re.finditer(r'func\s+(\w+)\s*\(([^)]*)\)(?:\s*->\s*(\w+))?:', content):
            func_name, params, return_type = match.groups()

            # Parse parameters
            param_list = []
            if params.strip():
                for param in params.split(','):
                    param = param.strip()
                    if ':' in param:
                        # param_name: Type = default
                        parts = param.split(':')
                        name = parts[0].strip()
                        type_and_default = parts[1].strip()

                        param_type = type_and_default.split('=')[0].strip() if '=' in type_and_default else type_and_default
                        default = type_and_default.split('=')[1].strip() if '=' in type_and_default else None

                        param_list.append({
                            "name": name,
                            "type_hint": param_type,
                            "default": default
                        })
                    else:
                        param_list.append({
                            "name": param,
                            "type_hint": None,
                            "default": None
                        })

            functions.append({
                "name": func_name,
                "parameters": param_list,
                "return_type": return_type,
                "line_number": content[: match.start()].count("\n") + 1
            })

        # Extract signals with documentation
        signal_connections = []
        signal_emissions = []

        for match in re.finditer(r'signal\s+(\w+)(?:\(([^)]*)\))?', content):
            signal_name, params = match.groups()
            line_number = content[: match.start()].count("\n") + 1

            # Extract documentation comment above signal (## or #)
            doc_comment = None
            lines = content[:match.start()].split('\n')
            if len(lines) >= 2:
                # lines[-1] is the partial text before the match on the signal's
                # own line, so the previous full line is lines[-2]
                prev_line = lines[-2].strip()
                if prev_line.startswith('##') or prev_line.startswith('#'):
                    doc_comment = prev_line.lstrip('#').strip()

            signals.append({
                "name": signal_name,
                "parameters": params if params else "",
                "line_number": line_number,
                "documentation": doc_comment
            })

        # Extract signal connections (.connect() calls)
        for match in re.finditer(r'(\w+(?:\.\w+)*)\.connect\(([^)]+)\)', content):
            signal_path, handler = match.groups()
            signal_connections.append({
                "signal": signal_path,
                "handler": handler.strip(),
                "line_number": content[: match.start()].count("\n") + 1
            })

        # Extract signal emissions (.emit() calls)
        for match in re.finditer(r'(\w+(?:\.\w+)*)\.emit\(([^)]*)\)', content):
            signal_path, args = match.groups()
            signal_emissions.append({
                "signal": signal_path,
                "arguments": args.strip() if args else "",
                "line_number": content[: match.start()].count("\n") + 1
            })

        # Extract @export variables
        for match in re.finditer(r'@export(?:\(([^)]+)\))?\s+var\s+(\w+)(?:\s*:\s*(\w+))?(?:\s*=\s*(.+?))?(?:\n|$)', content):
            hint, var_name, var_type, default = match.groups()
            exports.append({
                "name": var_name,
                "type": var_type,
                "default": default,
                "export_hint": hint,
                "line_number": content[: match.start()].count("\n") + 1
            })

        # Detect test framework
        test_framework = None
        test_functions = []

        # GUT (Godot Unit Test) - extends "res://addons/gut/test.gd" or extends GutTest
        if re.search(r'extends\s+["\']?res://addons/gut/test\.gd["\']?', content) or \
                re.search(r'extends\s+GutTest', content):
            test_framework = "GUT"

            # Extract test functions (test_* functions)
            for func in functions:
                if func["name"].startswith("test_"):
                    test_functions.append(func)

        # gdUnit4 - @suite class annotation
        elif re.search(r'@suite', content):
            test_framework = "gdUnit4"

            # Extract test functions (@test annotated or test_* prefix)
            lines = content.split('\n')
            for func in functions:
                # Check for @test annotation above function
                func_line = func["line_number"]
                if func_line > 1:
                    prev_line = lines[func_line - 2].strip()
                    if prev_line.startswith('@test'):
                        test_functions.append(func)
                    elif func["name"].startswith("test_"):
                        test_functions.append(func)

        # WAT (WizAds Test) - less common
        elif re.search(r'extends\s+WAT\.Test', content):
            test_framework = "WAT"
            for func in functions:
                if func["name"].startswith("test_"):
                    test_functions.append(func)

        result = {
            "file": file_path,
            "classes": classes,
            "functions": functions,
            "signals": signals,
            "exports": exports,
            "signal_connections": signal_connections,
            "signal_emissions": signal_emissions,
        }

        # Add test framework info if detected
        if test_framework:
            result["test_framework"] = test_framework
            result["test_functions"] = test_functions

        return result


if __name__ == "__main__":
    # Test the analyzer
    python_code = '''

@@ -1460,3 +1815,4 @@ def create_sprite(texture: str) -> Node2D:

            ]
        )
        print(f"      {method['name']}({params}) -> {method['return_type']}")
@@ -39,6 +39,7 @@ from skill_seekers.cli.api_reference_builder import APIReferenceBuilder
from skill_seekers.cli.code_analyzer import CodeAnalyzer
from skill_seekers.cli.config_extractor import ConfigExtractor
from skill_seekers.cli.dependency_analyzer import DependencyAnalyzer
from skill_seekers.cli.signal_flow_analyzer import SignalFlowAnalyzer

# Try to import pathspec for .gitignore support
try:
@@ -68,6 +69,10 @@ LANGUAGE_EXTENSIONS = {
    ".hxx": "C++",
    ".c": "C",
    ".cs": "C#",
    ".gd": "GDScript",  # Godot scripting language
    ".tscn": "GodotScene",  # Godot scene files
    ".tres": "GodotResource",  # Godot resource files
    ".gdshader": "GodotShader",  # Godot shader files
    ".go": "Go",
    ".rs": "Rust",
    ".java": "Java",
@@ -124,6 +129,7 @@ FOLDER_CATEGORIES = {
# Default directories to exclude
DEFAULT_EXCLUDED_DIRS = {
    # Python/Node
    "node_modules",
    "venv",
    "__pycache__",

@@ -141,10 +147,28 @@ DEFAULT_EXCLUDED_DIRS = {

    ".coverage",
    ".eggs",
    "*.egg-info",
    # IDE
    ".idea",
    ".vscode",
    ".vs",
    "__pypackages__",
    # Unity (critical - contains massive build cache)
    "Library",
    "Temp",
    "Logs",
    "UserSettings",
    "MemoryCaptures",
    "Recordings",
    # Unreal Engine
    "Intermediate",
    "Saved",
    "DerivedDataCache",
    # Godot
    ".godot",
    ".import",
    # Misc
    "tmp",
    ".tmp",
}
@@ -377,13 +401,11 @@ def extract_markdown_structure(content: str) -> dict[str, Any]:
|
||||
if header_match:
|
||||
level = len(header_match.group(1))
|
||||
text = header_match.group(2).strip()
|
||||
structure["headers"].append(
|
||||
{
|
||||
"level": level,
|
||||
"text": text,
|
||||
"line": i + 1,
|
||||
}
|
||||
)
|
||||
structure["headers"].append({
|
||||
"level": level,
|
||||
"text": text,
|
||||
"line": i + 1,
|
||||
})
|
||||
# First h1 is the title
|
||||
if level == 1 and structure["title"] is None:
|
||||
structure["title"] = text
|
||||
@@ -394,30 +416,24 @@ def extract_markdown_structure(content: str) -> dict[str, Any]:
         language = match.group(1) or "text"
         code = match.group(2).strip()
         if len(code) > 0:
-            structure["code_blocks"].append(
-                {
-                    "language": language,
-                    "code": code[:500],  # Truncate long code blocks
-                    "full_length": len(code),
-                }
-            )
+            structure["code_blocks"].append({
+                "language": language,
+                "code": code[:500],  # Truncate long code blocks
+                "full_length": len(code),
+            })
 
     # Extract links
     link_pattern = re.compile(r"\[([^\]]+)\]\(([^)]+)\)")
     for match in link_pattern.finditer(content):
-        structure["links"].append(
-            {
-                "text": match.group(1),
-                "url": match.group(2),
-            }
-        )
+        structure["links"].append({
+            "text": match.group(1),
+            "url": match.group(2),
+        })
 
     return structure
 
 
-def generate_markdown_summary(
-    content: str, structure: dict[str, Any], max_length: int = 500
-) -> str:
+def generate_markdown_summary(content: str, structure: dict[str, Any], max_length: int = 500) -> str:
     """
     Generate a summary of markdown content.
 
@@ -530,14 +546,12 @@ def process_markdown_docs(
         structure = extract_markdown_structure(content)
         summary = generate_markdown_summary(content, structure)
 
-        doc_data.update(
-            {
-                "title": structure.get("title") or md_path.stem,
-                "structure": structure,
-                "summary": summary,
-                "content": content if depth == "full" else None,
-            }
-        )
+        doc_data.update({
+            "title": structure.get("title") or md_path.stem,
+            "structure": structure,
+            "summary": summary,
+            "content": content if depth == "full" else None,
+        })
         processed_docs.append(doc_data)
 
         # Track categories
@@ -573,7 +587,6 @@ def process_markdown_docs(
             # Copy file to category folder
             dest_path = category_dir / doc["filename"]
             import shutil
-
             shutil.copy2(src_path, dest_path)
         except Exception as e:
             logger.debug(f"Failed to copy {doc['path']}: {e}")
@@ -589,9 +602,7 @@ def process_markdown_docs(
     with open(index_json, "w", encoding="utf-8") as f:
         json.dump(index_data, f, indent=2, default=str)
 
-    logger.info(
-        f"✅ Processed {len(processed_docs)} documentation files in {len(categories)} categories"
-    )
+    logger.info(f"✅ Processed {len(processed_docs)} documentation files in {len(categories)} categories")
     logger.info(f"📁 Saved to: {docs_output_dir}")
 
     return index_data
@@ -625,22 +636,18 @@ def _enhance_docs_api(docs: list[dict], api_key: str) -> list[dict]:
     """Enhance docs using Claude API."""
     try:
         import anthropic
 
         client = anthropic.Anthropic(api_key=api_key)
 
         # Batch documents for efficiency
        batch_size = 10
        for i in range(0, len(docs), batch_size):
-            batch = docs[i : i + batch_size]
+            batch = docs[i:i + batch_size]
 
            # Create prompt for batch
-            docs_text = "\n\n".join(
-                [
-                    f"## {d.get('title', d['filename'])}\nCategory: {d['category']}\nSummary: {d.get('summary', 'N/A')}"
-                    for d in batch
-                    if d.get("summary")
-                ]
-            )
+            docs_text = "\n\n".join([
+                f"## {d.get('title', d['filename'])}\nCategory: {d['category']}\nSummary: {d.get('summary', 'N/A')}"
+                for d in batch if d.get("summary")
+            ])
 
            if not docs_text:
                continue
@@ -659,13 +666,12 @@ Return JSON with format:
             response = client.messages.create(
                 model="claude-sonnet-4-20250514",
                 max_tokens=2000,
-                messages=[{"role": "user", "content": prompt}],
+                messages=[{"role": "user", "content": prompt}]
             )
 
             # Parse response and merge enhancements
             try:
                 import re
-
                 json_match = re.search(r"\{.*\}", response.content[0].text, re.DOTALL)
                 if json_match:
                     enhancements = json.loads(json_match.group())
@@ -694,12 +700,10 @@ def _enhance_docs_local(docs: list[dict]) -> list[dict]:
     if not docs_with_summary:
         return docs
 
-    docs_text = "\n\n".join(
-        [
-            f"## {d.get('title', d['filename'])}\nCategory: {d['category']}\nPath: {d['path']}\nSummary: {d.get('summary', 'N/A')}"
-            for d in docs_with_summary[:20]  # Limit to 20 docs
-        ]
-    )
+    docs_text = "\n\n".join([
+        f"## {d.get('title', d['filename'])}\nCategory: {d['category']}\nPath: {d['path']}\nSummary: {d.get('summary', 'N/A')}"
+        for d in docs_with_summary[:20]  # Limit to 20 docs
+    ])
 
     prompt = f"""Analyze these documentation files from a codebase and provide insights.
 
@@ -730,7 +734,6 @@ Output JSON only:
 
     if result.returncode == 0 and result.stdout:
         import re
-
        json_match = re.search(r"\{.*\}", result.stdout, re.DOTALL)
        if json_match:
            enhancements = json.loads(json_match.group())
@@ -798,9 +801,7 @@ def analyze_codebase(
 
     if enhance_level > 0:
         level_names = {1: "SKILL.md only", 2: "SKILL.md+Architecture+Config", 3: "full"}
-        logger.info(
-            f"🤖 AI Enhancement Level: {enhance_level} ({level_names.get(enhance_level, 'unknown')})"
-        )
+        logger.info(f"🤖 AI Enhancement Level: {enhance_level} ({level_names.get(enhance_level, 'unknown')})")
     # Resolve directory to absolute path to avoid relative_to() errors
     directory = Path(directory).resolve()
 
@@ -845,7 +846,18 @@ def analyze_codebase(
             analysis = analyzer.analyze_file(str(file_path), content, language)
 
-            # Only include files with actual analysis results
-            if analysis and (analysis.get("classes") or analysis.get("functions")):
+            # Check for any meaningful content (classes, functions, nodes, properties, etc.)
+            # Note: guarded with `analysis and (...)` so a None result cannot raise here
+            has_content = analysis and (
+                analysis.get("classes")
+                or analysis.get("functions")
+                or analysis.get("nodes")  # Godot scenes
+                or analysis.get("properties")  # Godot resources
+                or analysis.get("uniforms")  # Godot shaders
+                or analysis.get("signals")  # GDScript signals
+                or analysis.get("exports")  # GDScript exports
+            )
+
+            if analysis and has_content:
                 results["files"].append(
                     {
                         "file": str(file_path.relative_to(directory)),
@@ -1157,6 +1169,30 @@ def analyze_codebase(
     else:
         logger.info("No clear architectural patterns detected")
 
+    # Analyze signal flow patterns (C3.10) - Godot projects only
+    signal_analysis = None
+    has_godot_files = any(
+        f.get("language") in ("GDScript", "GodotScene", "GodotResource", "GodotShader")
+        for f in results.get("files", [])
+    )
+
+    if has_godot_files:
+        logger.info("Analyzing signal flow patterns (Godot)...")
+        try:
+            signal_analyzer = SignalFlowAnalyzer(results)
+            signal_output = signal_analyzer.save_analysis(output_dir, ai_mode)
+            signal_analysis = signal_analyzer.analyze()
+
+            stats = signal_analysis["statistics"]
+            logger.info("📡 Signal Analysis Complete:")
+            logger.info(f"  - {stats['total_signals']} signal declarations")
+            logger.info(f"  - {stats['total_connections']} signal connections")
+            logger.info(f"  - {stats['total_emissions']} signal emissions")
+            logger.info(f"  - {len(signal_analysis['patterns'])} patterns detected")
+            logger.info(f"📁 Saved to: {signal_output}")
+        except Exception as e:
+            logger.warning(f"Signal flow analysis failed: {e}")
+
     # Extract markdown documentation (C3.9)
     docs_data = None
     if extract_docs:
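The `has_godot_files` gate in the hunk above can be exercised on its own; a minimal sketch in which the per-file dicts are hypothetical stand-ins for the analyzer's real results:

```python
# Stand-in for the Godot-detection gate: signal-flow analysis only runs
# when at least one analyzed file is a Godot artifact.
GODOT_LANGS = ("GDScript", "GodotScene", "GodotResource", "GodotShader")

files = [{"language": "Python"}, {"language": "GDScript", "file": "player.gd"}]
has_godot_files = any(f.get("language") in GODOT_LANGS for f in files)
print(has_godot_files)  # True
```

A pure-Python project (no GDScript/scene/resource/shader files) leaves the gate closed, so the signal analyzer is never constructed.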
@@ -1297,6 +1333,12 @@ Use this skill when you need to:
         skill_content += "- ✅ Architectural Analysis (C3.7)\n"
     if extract_docs:
         skill_content += "- ✅ Project Documentation (C3.9)\n"
 
+    # Check if signal flow analysis was performed
+    has_signal_analysis = (output_dir / "signals" / "signal_flow.json").exists()
+    if has_signal_analysis:
+        skill_content += "- ✅ Signal Flow Analysis (C3.10)\n"
+
     skill_content += "\n"
 
     # Add design patterns if available
@@ -1328,6 +1370,11 @@ Use this skill when you need to:
     if config_content:
         skill_content += config_content
 
+    # Add signal flow analysis if available (C3.10)
+    signal_content = _format_signal_flow_section(output_dir, results)
+    if signal_content:
+        skill_content += signal_content
+
     # Add project documentation if available
     if extract_docs and docs_data:
         docs_content = _format_documentation_section(output_dir, docs_data)
@@ -1364,9 +1411,7 @@ Use this skill when you need to:
         skill_content += "- **Architecture**: `references/architecture/` - Architectural patterns\n"
         refs_added = True
     if extract_docs and (output_dir / "documentation").exists():
-        skill_content += (
-            "- **Documentation**: `references/documentation/` - Project documentation\n"
-        )
+        skill_content += "- **Documentation**: `references/documentation/` - Project documentation\n"
         refs_added = True
 
     if not refs_added:
@@ -1597,6 +1642,78 @@ def _format_config_section(output_dir: Path) -> str:
     return content
 
 
+def _format_signal_flow_section(output_dir: Path, results: dict[str, Any]) -> str:
+    """Format signal flow analysis section (C3.10 - Godot projects)."""
+    signal_file = output_dir / "signals" / "signal_flow.json"
+    if not signal_file.exists():
+        return ""
+
+    try:
+        with open(signal_file, encoding="utf-8") as f:
+            signal_data = json.load(f)
+    except Exception:
+        return ""
+
+    stats = signal_data.get("statistics", {})
+    patterns = signal_data.get("patterns", {})
+
+    # Only show section if there are signals
+    if stats.get("total_signals", 0) == 0:
+        return ""
+
+    content = "## 📡 Signal Flow Analysis\n\n"
+    content += "*From C3.10 signal flow analysis (Godot Event System)*\n\n"
+
+    # Statistics
+    content += "**Signal Statistics:**\n"
+    content += f"- **Total Signals**: {stats.get('total_signals', 0)}\n"
+    content += f"- **Signal Connections**: {stats.get('total_connections', 0)}\n"
+    content += f"- **Signal Emissions**: {stats.get('total_emissions', 0)}\n"
+    content += f"- **Signal Density**: {stats.get('signal_density', 0):.2f} signals per file\n\n"
+
+    # Most connected signals
+    most_connected = stats.get("most_connected_signals", [])
+    if most_connected:
+        content += "**Most Connected Signals:**\n"
+        for sig in most_connected[:5]:
+            content += f"- `{sig['signal']}`: {sig['connection_count']} connections\n"
+        content += "\n"
+
+    # Detected patterns
+    if patterns:
+        content += "**Detected Event Patterns:**\n"
+        for pattern_name, pattern_data in patterns.items():
+            if pattern_data.get("detected"):
+                confidence = pattern_data.get("confidence", 0)
+                description = pattern_data.get("description", "")
+                content += f"- **{pattern_name}** (confidence: {confidence:.2f})\n"
+                content += f"  - {description}\n"
+        content += "\n"
+
+    # Test framework detection
+    test_files = [
+        f for f in results.get("files", [])
+        if f.get("test_framework")
+    ]
+
+    if test_files:
+        frameworks = {}
+        total_tests = 0
+        for f in test_files:
+            fw = f.get("test_framework")
+            test_count = len(f.get("test_functions", []))
+            frameworks[fw] = frameworks.get(fw, 0) + 1
+            total_tests += test_count
+
+        content += "**Test Framework Detection:**\n"
+        for fw, count in frameworks.items():
+            content += f"- **{fw}**: {count} test files, {total_tests} test cases\n"
+        content += "\n"
+
+    content += "*See `references/signals/` for complete signal flow analysis*\n\n"
+    return content
+
+
 def _format_documentation_section(_output_dir: Path, docs_data: dict[str, Any]) -> str:
     """Format project documentation section from extracted markdown files.
 
@@ -1615,15 +1732,7 @@ def _format_documentation_section(_output_dir: Path, docs_data: dict[str, Any])
     content += f"**Categories:** {len(categories)}\n\n"
 
     # List documents by category (most important first)
-    priority_order = [
-        "overview",
-        "architecture",
-        "guides",
-        "workflows",
-        "features",
-        "api",
-        "examples",
-    ]
+    priority_order = ["overview", "architecture", "guides", "workflows", "features", "api", "examples"]
 
     # Sort categories by priority
     sorted_categories = []
@@ -1670,7 +1779,6 @@ def _format_documentation_section(_output_dir: Path, docs_data: dict[str, Any])
     if all_topics:
         # Deduplicate and count
         from collections import Counter
-
         topic_counts = Counter(all_topics)
         top_topics = [t for t, _ in topic_counts.most_common(10)]
         content += f"**Key Topics:** {', '.join(top_topics)}\n\n"
 
@@ -167,7 +167,9 @@ class ConfigEnhancer:
             for setting in cf.get("settings", [])[:5]:  # First 5 settings per file
                 # Support both "type" (from config_extractor) and "value_type" (legacy)
                 value_type = setting.get("type", setting.get("value_type", "unknown"))
-                settings_summary.append(f" - {setting['key']}: {setting['value']} ({value_type})")
+                settings_summary.append(
+                    f" - {setting['key']}: {setting['value']} ({value_type})"
+                )
 
             # Support both "type" (from config_extractor) and "config_type" (legacy)
             config_type = cf.get("type", cf.get("config_type", "unknown"))
@@ -304,9 +306,7 @@ Focus on actionable insights that help developers understand and improve their c
             config_type = cf.get("type", cf.get("config_type", "unknown"))
             settings_preview = []
             for s in cf.get("settings", [])[:3]:  # Show first 3 settings
-                settings_preview.append(
-                    f" - {s.get('key', 'unknown')}: {str(s.get('value', ''))[:50]}"
-                )
+                settings_preview.append(f" - {s.get('key', 'unknown')}: {str(s.get('value', ''))[:50]}")
 
             config_data.append(f"""
### {cf["relative_path"]} ({config_type})
@@ -431,7 +431,9 @@ DO NOT explain your work - just write the JSON file directly.
                 potential_files.append(json_file)
 
     # Try to load the most recent JSON file with expected structure
-    for json_file in sorted(potential_files, key=lambda f: f.stat().st_mtime, reverse=True):
+    for json_file in sorted(
+        potential_files, key=lambda f: f.stat().st_mtime, reverse=True
+    ):
         try:
             with open(json_file) as f:
                 data = json.load(f)
 
@@ -222,6 +222,7 @@ class ConfigFileDetector:
 
     # Directories to skip
     SKIP_DIRS = {
+        # Python/Node
         "node_modules",
         "venv",
         "env",
@@ -237,6 +238,23 @@ class ConfigFileDetector:
|
||||
"coverage",
|
||||
".eggs",
|
||||
"*.egg-info",
|
||||
# Unity (critical - contains massive build cache)
|
||||
"Library",
|
||||
"Temp",
|
||||
"Logs",
|
||||
"UserSettings",
|
||||
"MemoryCaptures",
|
||||
"Recordings",
|
||||
# Unreal Engine
|
||||
"Intermediate",
|
||||
"Saved",
|
||||
"DerivedDataCache",
|
||||
# Godot
|
||||
".godot",
|
||||
".import",
|
||||
# Misc
|
||||
"tmp",
|
||||
".tmp",
|
||||
}
|
||||
|
||||
def find_config_files(self, directory: Path, max_files: int = 100) -> list[ConfigFile]:
|
||||
@@ -398,7 +416,18 @@ class ConfigParser:
         """Parse JSON configuration"""
         try:
             data = json.loads(config_file.raw_content)
-            self._extract_settings_from_dict(data, config_file)
+
+            # Handle both dict and list at root level
+            if isinstance(data, dict):
+                self._extract_settings_from_dict(data, config_file)
+            elif isinstance(data, list):
+                # JSON array at root - extract from each dict item
+                for idx, item in enumerate(data):
+                    if isinstance(item, dict):
+                        self._extract_settings_from_dict(item, config_file, parent_path=[f"[{idx}]"])
+            else:
+                # Primitive value at root (string, number, etc.) - skip
+                logger.debug(f"Skipping JSON with primitive root: {config_file.relative_path}")
         except json.JSONDecodeError as e:
             config_file.parse_errors.append(f"JSON parse error: {str(e)}")
 
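The dict/list/primitive dispatch added above can be sketched standalone. `extract_settings` below is a hypothetical stand-in for `_extract_settings_from_dict`, whose body is not part of this diff; only the root-type branching mirrors the fix:

```python
import json

def extract_settings(data, parent_path=None):
    # Hypothetical flattener: top-level keys of a dict become (path, value) pairs.
    prefix = ".".join(parent_path) + "." if parent_path else ""
    return [(prefix + key, value) for key, value in data.items()]

def parse(raw):
    data = json.loads(raw)
    if isinstance(data, dict):
        return extract_settings(data)
    if isinstance(data, list):
        # Root-level array: extract from each dict element, indexed like "[0]"
        settings = []
        for idx, item in enumerate(data):
            if isinstance(item, dict):
                settings.extend(extract_settings(item, parent_path=[f"[{idx}]"]))
        return settings
    return []  # primitive root (string, number, ...) - nothing to extract

print(parse('{"debug": true}'))     # [('debug', True)]
print(parse('[{"name": "a"}, 1]'))  # [('[0].name', 'a')]
```

Before the fix, a root-level array reached the dict path unconditionally and `data.items()` raised `AttributeError`, which is the config-extractor failure mode the commit message describes.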
@@ -410,8 +439,15 @@ class ConfigParser:
 
         try:
             data = yaml.safe_load(config_file.raw_content)
+
+            # Handle both dict and list at root level
+            if isinstance(data, dict):
+                self._extract_settings_from_dict(data, config_file)
+            elif isinstance(data, list):
+                # YAML array at root - extract from each dict item
+                for idx, item in enumerate(data):
+                    if isinstance(item, dict):
+                        self._extract_settings_from_dict(item, config_file, parent_path=[f"[{idx}]"])
         except yaml.YAMLError as e:
             config_file.parse_errors.append(f"YAML parse error: {str(e)}")
 
@@ -25,7 +25,7 @@ class ConfigValidator:
     """
 
     # Valid source types
-    VALID_SOURCE_TYPES = {"documentation", "github", "pdf"}
+    VALID_SOURCE_TYPES = {"documentation", "github", "pdf", "local"}
 
     # Valid merge modes
     VALID_MERGE_MODES = {"rule-based", "claude-enhanced"}
@@ -143,6 +143,8 @@ class ConfigValidator:
             self._validate_github_source(source, index)
         elif source_type == "pdf":
             self._validate_pdf_source(source, index)
+        elif source_type == "local":
+            self._validate_local_source(source, index)
 
     def _validate_documentation_source(self, source: dict[str, Any], index: int):
         """Validate documentation source configuration."""
@@ -209,6 +211,34 @@ class ConfigValidator:
         if not Path(pdf_path).exists():
             logger.warning(f"Source {index} (pdf): File not found: {pdf_path}")
 
+    def _validate_local_source(self, source: dict[str, Any], index: int):
+        """Validate local codebase source configuration."""
+        if "path" not in source:
+            raise ValueError(f"Source {index} (local): Missing required field 'path'")
+
+        # Check if directory exists
+        local_path = source["path"]
+        if not Path(local_path).exists():
+            logger.warning(f"Source {index} (local): Directory not found: {local_path}")
+        elif not Path(local_path).is_dir():
+            raise ValueError(f"Source {index} (local): Path is not a directory: {local_path}")
+
+        # Validate analysis_depth if provided
+        if "analysis_depth" in source:
+            depth = source["analysis_depth"]
+            if depth not in self.VALID_DEPTH_LEVELS:
+                raise ValueError(
+                    f"Source {index} (local): Invalid analysis_depth '{depth}'. Must be one of {self.VALID_DEPTH_LEVELS}"
+                )
+
+        # Validate ai_mode if provided
+        if "ai_mode" in source:
+            ai_mode = source["ai_mode"]
+            if ai_mode not in self.VALID_AI_MODES:
+                raise ValueError(
+                    f"Source {index} (local): Invalid ai_mode '{ai_mode}'. Must be one of {self.VALID_AI_MODES}"
+                )
+
     def _validate_legacy(self) -> bool:
         """
         Validate legacy config format (backward compatibility).
 
@@ -3,7 +3,7 @@
 Dependency Graph Analyzer (C2.6)
 
 Analyzes import/require/include/use statements to build dependency graphs.
-Supports 9 programming languages with language-specific extraction.
+Supports 10 programming languages + Godot ecosystem with language-specific extraction.
 
 Features:
 - Multi-language import extraction (Python AST, others regex-based)
@@ -14,6 +14,8 @@ Features:
 
 Supported Languages:
 - Python: import, from...import, relative imports (AST-based)
+- GDScript: preload(), load(), extends (regex-based, Godot game engine)
+- Godot Files: .tscn, .tres, .gdshader ext_resource parsing
 - JavaScript/TypeScript: ES6 import, CommonJS require (regex-based)
 - C/C++: #include directives (regex-based)
 - C#: using statements (regex, based on MS C# spec)
@@ -101,13 +103,20 @@ class DependencyAnalyzer:
         Args:
             file_path: Path to source file
             content: File content
-            language: Programming language (Python, JavaScript, TypeScript, C, C++, C#, Go, Rust, Java, Ruby, PHP)
+            language: Programming language (Python, GDScript, GodotScene, GodotResource, GodotShader,
+                      JavaScript, TypeScript, C, C++, C#, Go, Rust, Java, Ruby, PHP)
 
         Returns:
             List of DependencyInfo objects
         """
         if language == "Python":
             deps = self._extract_python_imports(content, file_path)
+        elif language == "GDScript":
+            # GDScript uses preload/load, not Python imports
+            deps = self._extract_gdscript_imports(content, file_path)
+        elif language in ("GodotScene", "GodotResource", "GodotShader"):
+            # Godot resource files use ext_resource references
+            deps = self._extract_godot_resources(content, file_path)
         elif language in ("JavaScript", "TypeScript"):
             deps = self._extract_js_imports(content, file_path)
         elif language in ("C++", "C"):
@@ -189,6 +198,125 @@ class DependencyAnalyzer:
 
         return deps
 
+    def _extract_gdscript_imports(self, content: str, file_path: str) -> list[DependencyInfo]:
+        """
+        Extract GDScript import/preload/load statements.
+
+        Handles:
+        - const MyClass = preload("res://path/to/file.gd")
+        - var scene = load("res://path/to/scene.tscn")
+        - extends "res://path/to/base.gd"
+        - extends MyBaseClass (implicit dependency)
+
+        Note: GDScript uses res:// paths which are converted to relative paths.
+        """
+        deps = []
+
+        # Extract preload() calls: const/var NAME = preload("path")
+        preload_pattern = r'(?:const|var)\s+\w+\s*=\s*preload\("(.+?)"\)'
+        for match in re.finditer(preload_pattern, content):
+            resource_path = match.group(1)
+            line_num = content[: match.start()].count("\n") + 1
+
+            # Convert res:// paths to relative
+            if resource_path.startswith("res://"):
+                resource_path = resource_path[6:]
+
+            deps.append(
+                DependencyInfo(
+                    source_file=file_path,
+                    imported_module=resource_path,
+                    import_type="preload",
+                    is_relative=True,
+                    line_number=line_num,
+                )
+            )
+
+        # Extract load() calls: var/const NAME = load("path")
+        load_pattern = r'(?:const|var)\s+\w+\s*=\s*load\("(.+?)"\)'
+        for match in re.finditer(load_pattern, content):
+            resource_path = match.group(1)
+            line_num = content[: match.start()].count("\n") + 1
+
+            if resource_path.startswith("res://"):
+                resource_path = resource_path[6:]
+
+            deps.append(
+                DependencyInfo(
+                    source_file=file_path,
+                    imported_module=resource_path,
+                    import_type="load",
+                    is_relative=True,
+                    line_number=line_num,
+                )
+            )
+
+        # Extract extends with string path: extends "res://path/to/base.gd"
+        extends_path_pattern = r'extends\s+"(.+?)"'
+        for match in re.finditer(extends_path_pattern, content):
+            resource_path = match.group(1)
+            line_num = content[: match.start()].count("\n") + 1
+
+            if resource_path.startswith("res://"):
+                resource_path = resource_path[6:]
+
+            deps.append(
+                DependencyInfo(
+                    source_file=file_path,
+                    imported_module=resource_path,
+                    import_type="extends",
+                    is_relative=True,
+                    line_number=line_num,
+                )
+            )
+
+        # Extract extends with class name: extends MyBaseClass
+        # Note: This creates a symbolic dependency that may not resolve to a file
+        extends_class_pattern = r'extends\s+([A-Z]\w+)'
+        for match in re.finditer(extends_class_pattern, content):
+            class_name = match.group(1)
+            line_num = content[: match.start()].count("\n") + 1
+
+            # Skip built-in Godot classes (Node, Resource, etc.)
+            if class_name not in (
+                "Node",
+                "Node2D",
+                "Node3D",
+                "Resource",
+                "RefCounted",
+                "Object",
+                "Control",
+                "Area2D",
+                "Area3D",
+                "CharacterBody2D",
+                "CharacterBody3D",
+                "RigidBody2D",
+                "RigidBody3D",
+                "StaticBody2D",
+                "StaticBody3D",
+                "Camera2D",
+                "Camera3D",
+                "Sprite2D",
+                "Sprite3D",
+                "Label",
+                "Button",
+                "Panel",
+                "Container",
+                "VBoxContainer",
+                "HBoxContainer",
+            ):
+                deps.append(
+                    DependencyInfo(
+                        source_file=file_path,
+                        imported_module=class_name,
+                        import_type="extends",
+                        is_relative=False,
+                        line_number=line_num,
+                    )
+                )
+
+        return deps
+
     def _extract_js_imports(self, content: str, file_path: str) -> list[DependencyInfo]:
         """
         Extract JavaScript/TypeScript import statements.
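A quick standalone check of the preload pattern used in the new method above, run against a one-line GDScript snippet (the snippet itself is invented for illustration):

```python
import re

# Same pattern as _extract_gdscript_imports: const/var NAME = preload("path")
preload_pattern = r'(?:const|var)\s+\w+\s*=\s*preload\("(.+?)"\)'
gd_line = 'const Bullet = preload("res://entities/bullet.gd")'

match = re.search(preload_pattern, gd_line)
path = match.group(1)
if path.startswith("res://"):
    path = path[6:]  # strip the res:// prefix, as the extractor does
print(path)  # entities/bullet.gd
```

The lazy `(.+?)` stops at the first closing quote, so a line containing two quoted strings never swallows both into one capture.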
@@ -596,7 +724,8 @@ class DependencyAnalyzer:
             # Try to resolve the imported module to an actual file
             target = self._resolve_import(file_path, dep.imported_module, dep.is_relative)
 
-            if target and target in self.file_nodes:
+            # Skip self-dependencies (file depending on itself)
+            if target and target in self.file_nodes and target != file_path:
                 # Add edge from source to dependency
                 self.graph.add_edge(
                     file_path, target, import_type=dep.import_type, line_number=dep.line_number
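The effect of the new `target != file_path` guard can be shown with a toy edge-building loop (file names invented): a file whose dependency resolves back to itself would otherwise produce a self-loop, which cycle detection then reports as a one-node "circular dependency".

```python
# Toy version of the edge-adding loop above: the extra `target != file_path`
# check filters self-edges before they ever reach the graph.
file_nodes = {"a.gd", "b.gd"}
resolved = [("a.gd", "b.gd"), ("a.gd", "a.gd")]  # (source, resolved target)

edges = []
for file_path, target in resolved:
    if target and target in file_nodes and target != file_path:
        edges.append((file_path, target))
print(edges)  # [('a.gd', 'b.gd')]
```

Filtering at edge-insertion time keeps every downstream consumer (cycle detection, Mermaid export) free of self-loops without each one needing its own special case.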
@@ -755,3 +884,48 @@ class DependencyAnalyzer:
                 [node for node in self.graph.nodes() if self.graph.in_degree(node) == 0]
             ),
         }
+
+    def _extract_godot_resources(self, content: str, file_path: str) -> list[DependencyInfo]:
+        """
+        Extract resource dependencies from Godot files (.tscn, .tres, .gdshader).
+
+        Extracts:
+        - ext_resource paths (scripts, scenes, textures, etc.)
+        - preload() and load() calls
+        """
+        deps = []
+
+        # Extract ext_resource dependencies
+        for match in re.finditer(r'\[ext_resource.*?path="(.+?)".*?\]', content):
+            resource_path = match.group(1)
+
+            # Convert res:// paths to relative paths
+            if resource_path.startswith("res://"):
+                resource_path = resource_path[6:]  # Remove res:// prefix
+
+            deps.append(
+                DependencyInfo(
+                    source_file=file_path,
+                    imported_module=resource_path,
+                    import_type="ext_resource",
+                    line_number=content[: match.start()].count("\n") + 1,
+                )
+            )
+
+        # Extract preload() and load() calls (in GDScript sections)
+        for match in re.finditer(r'(?:preload|load)\("(.+?)"\)', content):
+            resource_path = match.group(1)
+
+            if resource_path.startswith("res://"):
+                resource_path = resource_path[6:]
+
+            deps.append(
+                DependencyInfo(
+                    source_file=file_path,
+                    imported_module=resource_path,
+                    import_type="preload",
+                    line_number=content[: match.start()].count("\n") + 1,
+                )
+            )
+
+        return deps
@@ -53,25 +53,49 @@ except ImportError:
 
 # Directories to exclude from local repository analysis
 EXCLUDED_DIRS = {
+    # Virtual environments
     "venv",
     "env",
     ".venv",
-    ".env",  # Virtual environments
+    ".env",
+    # Dependencies and caches
     "node_modules",
     "__pycache__",
-    ".pytest_cache",  # Dependencies and caches
+    ".pytest_cache",
+    # Version control
     ".git",
     ".svn",
-    ".hg",  # Version control
+    ".hg",
+    # Build artifacts
     "build",
     "dist",
-    "*.egg-info",  # Build artifacts
+    "*.egg-info",
+    # Coverage reports
     "htmlcov",
-    ".coverage",  # Coverage reports
+    ".coverage",
+    # Testing environments
     ".tox",
-    ".nox",  # Testing environments
+    ".nox",
+    # Linter caches
     ".mypy_cache",
-    ".ruff_cache",  # Linter caches
+    ".ruff_cache",
+    # Unity (critical - contains massive build cache)
+    "Library",
+    "Temp",
+    "Logs",
+    "UserSettings",
+    "MemoryCaptures",
+    "Recordings",
+    # Unreal Engine
+    "Intermediate",
+    "Saved",
+    "DerivedDataCache",
+    # Godot
+    ".godot",
+    ".import",
+    # Misc
+    "tmp",
+    ".tmp",
 }
 
@@ -300,16 +300,14 @@ For more information: https://github.com/yusufkaraaslan/Skill_Seekers
     )
     analyze_parser.add_argument("--file-patterns", help="Comma-separated file patterns")
     analyze_parser.add_argument(
-        "--enhance",
-        action="store_true",
-        help="Enable AI enhancement (default level 1 = SKILL.md only)",
+        "--enhance", action="store_true", help="Enable AI enhancement (default level 1 = SKILL.md only)"
     )
     analyze_parser.add_argument(
         "--enhance-level",
         type=int,
         choices=[0, 1, 2, 3],
         default=None,
-        help="AI enhancement level: 0=off, 1=SKILL.md only (default), 2=+Architecture+Config, 3=full",
+        help="AI enhancement level: 0=off, 1=SKILL.md only (default), 2=+Architecture+Config, 3=full"
     )
     analyze_parser.add_argument("--skip-api-reference", action="store_true", help="Skip API docs")
     analyze_parser.add_argument(
@@ -323,9 +321,7 @@ For more information: https://github.com/yusufkaraaslan/Skill_Seekers
     )
     analyze_parser.add_argument("--skip-how-to-guides", action="store_true", help="Skip guides")
     analyze_parser.add_argument("--skip-config-patterns", action="store_true", help="Skip config")
-    analyze_parser.add_argument(
-        "--skip-docs", action="store_true", help="Skip project docs (README, docs/)"
-    )
+    analyze_parser.add_argument("--skip-docs", action="store_true", help="Skip project docs (README, docs/)")
     analyze_parser.add_argument("--no-comments", action="store_true", help="Skip comments")
     analyze_parser.add_argument("--verbose", action="store_true", help="Verbose logging")
 
@@ -569,16 +565,13 @@ def main(argv: list[str] | None = None) -> int:
     # Handle preset flags (depth and features)
     if args.quick:
         # Quick = surface depth + skip advanced features + no AI
-        sys.argv.extend(
-            [
-                "--depth",
-                "surface",
-                "--skip-patterns",
-                "--skip-test-examples",
-                "--skip-how-to-guides",
-                "--skip-config-patterns",
-            ]
-        )
+        sys.argv.extend([
+            "--depth", "surface",
+            "--skip-patterns",
+            "--skip-test-examples",
+            "--skip-how-to-guides",
+            "--skip-config-patterns",
+        ])
     elif args.comprehensive:
         # Comprehensive = full depth + all features (AI level is separate)
         sys.argv.extend(["--depth", "full"])
@@ -595,7 +588,6 @@ def main(argv: list[str] | None = None) -> int:
         # Use default from config (default: 1)
         try:
             from skill_seekers.cli.config_manager import get_config_manager
-
             config = get_config_manager()
             enhance_level = config.get_default_enhance_level()
         except Exception:
src/skill_seekers/cli/signal_flow_analyzer.py (new file, 489 lines)
@@ -0,0 +1,489 @@
|
||||
"""
|
||||
Signal Flow Analyzer for Godot Projects (C3.10)
|
||||
|
||||
Analyzes signal connections, emissions, and event flow patterns
|
||||
in Godot GDScript projects.
|
||||
"""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
from typing import Any
|
||||
from collections import defaultdict
|
||||
|
||||
|
||||
class SignalFlowAnalyzer:
|
||||
"""Analyzes signal flow patterns in Godot projects."""
|
||||
|
||||
def __init__(self, analysis_results: dict[str, Any]):
|
||||
"""
|
||||
Initialize with code analysis results.
|
||||
|
||||
Args:
|
||||
analysis_results: Dict containing analyzed files with signal data
|
||||
"""
|
||||
self.files = analysis_results.get("files", [])
|
||||
self.signal_declarations = {} # signal_name -> [file, params, docs]
|
||||
self.signal_connections = defaultdict(list) # signal -> [handlers]
|
||||
self.signal_emissions = defaultdict(list) # signal -> [locations]
|
||||
self.signal_flow_chains = [] # [(source, signal, target)]
|
||||

    def analyze(self) -> dict[str, Any]:
        """
        Perform signal flow analysis.

        Returns:
            Dict containing signal flow analysis results
        """
        self._extract_signals()
        self._extract_connections()
        self._extract_emissions()
        self._build_flow_chains()
        self._detect_patterns()

        return {
            "signal_declarations": self.signal_declarations,
            "signal_connections": dict(self.signal_connections),
            "signal_emissions": dict(self.signal_emissions),
            "signal_flow_chains": self.signal_flow_chains,
            "patterns": self.patterns,
            "statistics": self._calculate_statistics(),
        }

    def _extract_signals(self):
        """Extract all signal declarations."""
        for file_data in self.files:
            if file_data.get("language") != "GDScript":
                continue

            file_path = file_data["file"]
            signals = file_data.get("signals", [])

            for signal in signals:
                signal_name = signal["name"]
                self.signal_declarations[signal_name] = {
                    "file": file_path,
                    "parameters": signal.get("parameters", ""),
                    "documentation": signal.get("documentation"),
                    "line_number": signal.get("line_number", 0),
                }

    def _extract_connections(self):
        """Extract all signal connections (.connect() calls)."""
        for file_data in self.files:
            if file_data.get("language") != "GDScript":
                continue

            file_path = file_data["file"]
            connections = file_data.get("signal_connections", [])

            for conn in connections:
                signal_path = conn["signal"]
                handler = conn["handler"]
                line = conn.get("line_number", 0)

                self.signal_connections[signal_path].append(
                    {"handler": handler, "file": file_path, "line": line}
                )

    def _extract_emissions(self):
        """Extract all signal emissions (.emit() calls)."""
        for file_data in self.files:
            if file_data.get("language") != "GDScript":
                continue

            file_path = file_data["file"]
            emissions = file_data.get("signal_emissions", [])

            for emission in emissions:
                signal_path = emission["signal"]
                args = emission.get("arguments", "")
                line = emission.get("line_number", 0)

                self.signal_emissions[signal_path].append(
                    {"arguments": args, "file": file_path, "line": line}
                )

    def _build_flow_chains(self):
        """Build signal flow chains (A emits -> B connects)."""
        # For each emission, find corresponding connections
        for signal, emissions in self.signal_emissions.items():
            if signal in self.signal_connections:
                connections = self.signal_connections[signal]

                for emission in emissions:
                    for connection in connections:
                        self.signal_flow_chains.append(
                            {
                                "signal": signal,
                                "source": emission["file"],
                                "target": connection["file"],
                                "handler": connection["handler"],
                            }
                        )
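The chain-building step above is a cross-product join between emissions and connections, keyed by signal name. A minimal standalone sketch of that join, using a hypothetical two-file input (file names and the `score_changed` signal are illustrative, not taken from the analyzed project):

```python
from collections import defaultdict

# Hypothetical extractor output: one emitter file and one listener file.
files = [
    {"file": "scripts/event_bus.gd",
     "signal_emissions": [{"signal": "score_changed"}]},
    {"file": "scripts/hud.gd",
     "signal_connections": [{"signal": "score_changed",
                             "handler": "_on_score_changed"}]},
]

connections = defaultdict(list)
emissions = defaultdict(list)
for f in files:
    for c in f.get("signal_connections", []):
        connections[c["signal"]].append({"handler": c["handler"], "file": f["file"]})
    for e in f.get("signal_emissions", []):
        emissions[e["signal"]].append({"file": f["file"]})

# Join: every (emission, connection) pair sharing a signal becomes a chain.
chains = [
    {"signal": sig, "source": e["file"], "target": c["file"], "handler": c["handler"]}
    for sig, ems in emissions.items() if sig in connections
    for e in ems
    for c in connections[sig]
]
print(chains[0]["source"], "->", chains[0]["target"])
```

Note the quadratic fan-out: a signal with many emitters and many listeners contributes one chain per (emitter, listener) pair, which is why the diagram generator caps the number of chains it renders.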

    def _detect_patterns(self):
        """Detect common signal usage patterns."""
        self.patterns = {}

        # EventBus pattern - signals declared on autoload/global scripts
        eventbus_signals = [
            sig
            for sig, data in self.signal_declarations.items()
            if "EventBus" in data["file"]
            or "autoload" in data["file"].lower()
            or "global" in data["file"].lower()
        ]

        if eventbus_signals:
            self.patterns["EventBus Pattern"] = {
                "detected": True,
                "confidence": 0.9,
                "signals": eventbus_signals,
                "description": "Centralized event system using global signals",
            }

        # Observer pattern - signals with multiple connections
        multi_connected = {
            sig: len(conns)
            for sig, conns in self.signal_connections.items()
            if len(conns) >= 3
        }

        if multi_connected:
            self.patterns["Observer Pattern"] = {
                "detected": True,
                "confidence": 0.85,
                "signals": list(multi_connected.keys()),
                "description": f"{len(multi_connected)} signals with 3+ observers",
            }

        # Event chains - signals that trigger other signals
        chain_length = len(self.signal_flow_chains)
        if chain_length > 0:
            self.patterns["Event Chains"] = {
                "detected": True,
                "confidence": 0.8,
                "chain_count": chain_length,
                "description": "Signals that trigger other signal emissions",
            }

    def _calculate_statistics(self) -> dict[str, Any]:
        """Calculate signal usage statistics."""
        total_signals = len(self.signal_declarations)
        total_connections = sum(
            len(conns) for conns in self.signal_connections.values()
        )
        total_emissions = sum(
            len(emits) for emits in self.signal_emissions.values()
        )

        # Find most connected signals
        most_connected = sorted(
            self.signal_connections.items(), key=lambda x: len(x[1]), reverse=True
        )[:5]

        # Find most emitted signals
        most_emitted = sorted(
            self.signal_emissions.items(), key=lambda x: len(x[1]), reverse=True
        )[:5]

        # Signal density (signals per GDScript file)
        gdscript_files = sum(
            1 for f in self.files if f.get("language") == "GDScript"
        )
        signal_density = (
            total_signals / gdscript_files if gdscript_files > 0 else 0
        )

        return {
            "total_signals": total_signals,
            "total_connections": total_connections,
            "total_emissions": total_emissions,
            "signal_density": round(signal_density, 2),
            "gdscript_files": gdscript_files,
            "most_connected_signals": [
                {"signal": sig, "connection_count": len(conns)}
                for sig, conns in most_connected
            ],
            "most_emitted_signals": [
                {"signal": sig, "emission_count": len(emits)}
                for sig, emits in most_emitted
            ],
        }
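One subtlety in these totals: summing `len(...)` over a dict's `.items()` counts each `(key, value)` tuple as length 2 regardless of how many emissions its list holds, so the iteration must go over `.values()`. A short illustration (the `died` signal and file names are made up):

```python
# Three recorded emissions for one signal.
emissions = {"died": [{"file": "a.gd"}, {"file": "b.gd"}, {"file": "c.gd"}]}

wrong = sum(len(item) for item in emissions.items())   # each item is a 2-tuple
right = sum(len(v) for v in emissions.values())        # counts list entries
print(wrong, right)  # 2 3
```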

    def generate_signal_flow_diagram(self) -> str:
        """
        Generate a Mermaid diagram of signal flow.

        Returns:
            Mermaid diagram as string
        """
        lines = ["```mermaid", "graph LR"]

        # Add signal nodes (underscores stripped to form valid node ids)
        for signal in self.signal_declarations:
            safe_signal = signal.replace("_", "")
            lines.append(f"    {safe_signal}[({signal})]")

        # Add flow connections (capped to prevent huge diagrams)
        for chain in self.signal_flow_chains[:20]:
            signal = chain["signal"].replace("_", "")
            source = Path(chain["source"]).stem.replace("_", "")
            target = Path(chain["target"]).stem.replace("_", "")
            handler = chain["handler"].replace("_", "")

            lines.append(f"    {source} -->|emit| {signal}")
            lines.append(f"    {signal} -->|{handler}| {target}")

        lines.append("```")
        return "\n".join(lines)
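For a single flow chain, the node-id scheme used here (file stems and signal names with underscores stripped) produces edges like the following; the chain values are illustrative, not from the analyzed project:

```python
from pathlib import Path

# A hypothetical chain record, shaped like _build_flow_chains() output.
chain = {"signal": "score_changed", "source": "scripts/event_bus.gd",
         "target": "scripts/hud.gd", "handler": "_on_score_changed"}

signal = chain["signal"].replace("_", "")
source = Path(chain["source"]).stem.replace("_", "")
target = Path(chain["target"]).stem.replace("_", "")
handler = chain["handler"].replace("_", "")

edge_a = f"{source} -->|emit| {signal}"
edge_b = f"{signal} -->|{handler}| {target}"
print(edge_a)  # eventbus -->|emit| scorechanged
print(edge_b)  # scorechanged -->|onscorechanged| hud
```

Stripping underscores keeps the node ids safe for Mermaid's `graph LR` syntax at the cost of some readability in the rendered labels.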

    def extract_signal_usage_patterns(self) -> list[dict[str, Any]]:
        """
        Extract common signal usage patterns for how-to guide generation.

        Returns:
            List of signal usage patterns with connect/emit/handle examples
        """
        patterns = []

        # For each signal, collect usage examples (connect + emit + handle)
        for signal_name, signal_info in self.signal_declarations.items():
            connections = self.signal_connections.get(signal_name, [])
            emissions = self.signal_emissions.get(signal_name, [])

            if not connections and not emissions:
                continue  # Skip signals with no usage

            patterns.append(
                {
                    "signal_name": signal_name,
                    "signal_file": signal_info.get("file", ""),
                    "parameters": signal_info.get("parameters", ""),
                    "documentation": signal_info.get("documentation"),
                    "connections": connections[:3],  # Top 3 connections
                    "emissions": emissions[:3],  # Top 3 emissions
                    "usage_count": len(connections) + len(emissions),
                }
            )

        # Sort by usage count (most used first)
        patterns.sort(key=lambda x: x["usage_count"], reverse=True)

        return patterns[:10]  # Top 10 most used signals

    def generate_how_to_guides(self, output_dir: Path, ai_mode: str = "LOCAL") -> str:
        """
        Generate signal-based how-to guides.

        Args:
            output_dir: Directory to save guides
            ai_mode: "LOCAL" (Claude Code) or "API" (Anthropic API)

        Returns:
            Path to generated guide file, or "" if there are no signals to document
        """
        patterns = self.extract_signal_usage_patterns()

        if not patterns:
            return ""

        # Build guide content
        guide_content = "# Signal Usage How-To Guides\n\n"
        guide_content += "*AI-generated guides for common signal patterns*\n\n"
        guide_content += "## Table of Contents\n\n"

        for i, pattern in enumerate(patterns, 1):
            signal_name = pattern["signal_name"]
            anchor = signal_name.lower().replace("_", "-")
            guide_content += f"{i}. [How to use `{signal_name}`](#{anchor})\n"

        guide_content += "\n---\n\n"

        # Generate guide for each pattern
        for pattern in patterns:
            guide_content += self._generate_signal_guide(pattern, ai_mode) + "\n---\n\n"

        # Save guide (create the signals/ subdirectory if needed)
        guide_file = output_dir / "signals" / "signal_how_to_guides.md"
        guide_file.parent.mkdir(parents=True, exist_ok=True)
        with open(guide_file, "w") as f:
            f.write(guide_content)

        return str(guide_file)

    def _generate_signal_guide(self, pattern: dict[str, Any], ai_mode: str) -> str:
        """
        Generate a how-to guide for a single signal.

        Args:
            pattern: Signal usage pattern data
            ai_mode: "LOCAL" or "API" (reserved for AI-enhanced guides)

        Returns:
            Markdown guide content
        """
        signal_name = pattern["signal_name"]
        params = pattern["parameters"]
        docs = pattern["documentation"]
        connections = pattern["connections"]
        emissions = pattern["emissions"]

        # Build guide from a basic template (no AI call needed)
        guide = f"## How to use `{signal_name}`\n\n"

        if docs:
            guide += f"**Description:** {docs}\n\n"

        if params:
            guide += f"**Parameters:** `{params}`\n\n"

        guide += "### Step 1: Connect to the signal\n\n"
        guide += "```gdscript\n"
        if connections:
            handler = connections[0].get("handler", "_on_signal")
            file_context = Path(connections[0].get("file", "")).stem
            guide += f"# In {file_context}.gd\n"
            guide += f"{signal_name}.connect({handler})\n"
        else:
            guide += f"{signal_name}.connect(_on_{signal_name.split('.')[-1]})\n"
        guide += "```\n\n"

        guide += "### Step 2: Emit the signal\n\n"
        guide += "```gdscript\n"
        if emissions:
            args = emissions[0].get("arguments", "")
            file_context = Path(emissions[0].get("file", "")).stem
            guide += f"# In {file_context}.gd\n"
            guide += f"{signal_name}.emit({args})\n"
        else:
            guide += f"{signal_name}.emit()\n"
        guide += "```\n\n"

        guide += "### Step 3: Handle the signal\n\n"
        guide += "```gdscript\n"
        if connections:
            handler = connections[0].get("handler", "_on_signal")
            if params:
                # Parse declared params into a function signature
                param_names = [p.split(":")[0].strip() for p in params.split(",")]
                func_params = ", ".join(param_names)
                guide += f"func {handler}({func_params}):\n"
                guide += f"    # Handle {signal_name} event\n"
                guide += f"    print('Signal received with:', {param_names[0] if param_names else 'null'})\n"
            else:
                guide += f"func {handler}():\n"
                guide += f"    # Handle {signal_name} event\n"
                guide += "    print('Signal received')\n"
        else:
            guide += f"func _on_{signal_name.split('.')[-1]}():\n"
            guide += f"    # Handle {signal_name} event\n"
            guide += "    pass\n"
        guide += "```\n\n"

        # Add usage examples
        if len(connections) > 1 or len(emissions) > 1:
            guide += "### Common Usage Locations\n\n"
            if connections:
                guide += "**Connected in:**\n"
                for conn in connections[:3]:
                    file_path = Path(conn.get("file", "")).stem
                    handler = conn.get("handler", "")
                    guide += f"- `{file_path}.gd` → `{handler}()`\n"
                guide += "\n"

            if emissions:
                guide += "**Emitted from:**\n"
                for emit in emissions[:3]:
                    file_path = Path(emit.get("file", "")).stem
                    guide += f"- `{file_path}.gd`\n"
                guide += "\n"

        return guide

    def save_analysis(self, output_dir: Path, ai_mode: str = "LOCAL"):
        """
        Save signal flow analysis to files.

        Args:
            output_dir: Directory to save analysis results
            ai_mode: "LOCAL" or "API", forwarded to guide generation
        """
        signal_dir = output_dir / "signals"
        signal_dir.mkdir(parents=True, exist_ok=True)

        analysis = self.analyze()

        # Save JSON analysis
        with open(signal_dir / "signal_flow.json", "w") as f:
            json.dump(analysis, f, indent=2)

        # Save signal reference markdown
        self._generate_signal_reference(signal_dir, analysis)

        # Save flow diagram
        diagram = self.generate_signal_flow_diagram()
        with open(signal_dir / "signal_flow.mmd", "w") as f:
            f.write(diagram)

        # Generate how-to guides
        import logging

        logger = logging.getLogger(__name__)
        try:
            guide_file = self.generate_how_to_guides(output_dir, ai_mode)
            if guide_file:
                logger.info(f"📚 Generated signal how-to guides: {guide_file}")
        except Exception as e:
            logger.warning(f"Failed to generate signal how-to guides: {e}")

        return signal_dir

    def _generate_signal_reference(self, output_dir: Path, analysis: dict):
        """Generate human-readable signal reference."""
        lines = ["# Signal Reference\n"]

        # Statistics
        stats = analysis["statistics"]
        lines.append("## Statistics\n")
        lines.append(f"- **Total Signals**: {stats['total_signals']}")
        lines.append(f"- **Total Connections**: {stats['total_connections']}")
        lines.append(f"- **Total Emissions**: {stats['total_emissions']}")
        lines.append(
            f"- **Signal Density**: {stats['signal_density']} signals per file\n"
        )

        # Patterns
        if analysis["patterns"]:
            lines.append("## Detected Patterns\n")
            for pattern_name, pattern in analysis["patterns"].items():
                lines.append(f"### {pattern_name}")
                lines.append(f"- **Confidence**: {pattern['confidence']}")
                lines.append(f"- **Description**: {pattern['description']}\n")

        # Signal declarations
        lines.append("## Signal Declarations\n")
        for signal, data in analysis["signal_declarations"].items():
            lines.append(f"### `{signal}`")
            lines.append(f"- **File**: `{data['file']}`")
            if data["parameters"]:
                lines.append(f"- **Parameters**: `{data['parameters']}`")
            if data["documentation"]:
                lines.append(f"- **Documentation**: {data['documentation']}")
            lines.append("")

        # Most connected signals
        if stats["most_connected_signals"]:
            lines.append("## Most Connected Signals\n")
            for item in stats["most_connected_signals"]:
                lines.append(
                    f"- **{item['signal']}**: {item['connection_count']} connections"
                )
            lines.append("")

        with open(output_dir / "signal_reference.md", "w") as f:
            f.write("\n".join(lines))
@@ -9,9 +9,9 @@ Analyzes test files to extract meaningful code examples showing:
 - Setup patterns from fixtures/setUp()
 - Multi-step workflows from integration tests

-Supports 9 languages:
+Supports 10 languages:
 - Python (AST-based, deep analysis)
-- JavaScript, TypeScript, Go, Rust, Java, C#, PHP, Ruby (regex-based)
+- JavaScript, TypeScript, Go, Rust, Java, C#, PHP, Ruby, GDScript (regex-based)

 Example usage:
     # Extract from directory
@@ -704,6 +704,23 @@ class GenericTestAnalyzer:
         "assertion": r"expect\(([^)]+)\)\.to\s+(?:eq|be|match)\(([^)]+)\)",
         "test_function": r'(?:test|it)\s+["\']([^"\']+)["\']',
     },
+    "gdscript": {
+        # GDScript object instantiation (var x = Class.new(), preload, load)
+        "instantiation": r"(?:var|const)\s+(\w+)\s*=\s*(?:(\w+)\.new\(|(?:preload|load)\([\"']([^\"']+)[\"']\)\.new\()",
+        # GUT/gdUnit4 assertions
+        "assertion": r"assert_(?:eq|ne|true|false|null|not_null|gt|lt|between|has|contains|typeof)\(([^)]+)\)",
+        # Test functions: GUT (func test_*), gdUnit4 (@test), WAT (extends WAT.Test)
+        "test_function": r"(?:@test\s+)?func\s+(test_\w+)\s*\(",
+        # Signal connections and emissions
+        "signal": r"(?:(\w+)\.connect\(|emit_signal\([\"'](\w+)[\"'])",
+    },
 }

+# Language name normalization mapping
+LANGUAGE_ALIASES = {
+    "c#": "csharp",
+    "c++": "cpp",
+    "c plus plus": "cpp",
+}
+
 # Language name normalization mapping
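The two most load-bearing GDScript patterns above can be exercised directly with `re.findall`; the snippet below is a quick illustration against a made-up GDScript fragment (the `Player`/`Boss` names are hypothetical):

```python
import re

# The instantiation and test_function regexes from the gdscript pattern table.
instantiation = (r"(?:var|const)\s+(\w+)\s*=\s*"
                 r"(?:(\w+)\.new\(|(?:preload|load)\([\"']([^\"']+)[\"']\)\.new\()")
test_function = r"(?:@test\s+)?func\s+(test_\w+)\s*\("

snippet = """
func test_spawn():
    var player = Player.new()
    var boss = preload("res://Boss.gd").new()
"""

funcs = re.findall(test_function, snippet)
news = re.findall(instantiation, snippet)
print(funcs)  # ['test_spawn']
print(news)   # [('player', 'Player', ''), ('boss', '', 'res://Boss.gd')]
```

With three capture groups, `findall` returns 3-tuples: a direct `Class.new()` fills group 2, while a `preload(...)/load(...).new()` fills group 3 with the resource path, which is how the extractor distinguishes the two instantiation styles.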
@@ -799,11 +816,7 @@ class GenericTestAnalyzer:
         # Find next method (setup or test)
         next_pattern = patterns.get("setup", patterns["test_function"])
         next_setup = re.search(next_pattern, code[setup_start:])
-        setup_end = (
-            setup_start + next_setup.start()
-            if next_setup
-            else min(setup_start + 500, len(code))
-        )
+        setup_end = setup_start + next_setup.start() if next_setup else min(setup_start + 500, len(code))
         setup_body = code[setup_start:setup_end]

         example = self._create_example(
@@ -915,6 +928,8 @@ class TestExampleExtractor:
         "Test*.cs",
         "*Test.php",
         "*_spec.rb",
+        "test_*.gd",  # GUT, gdUnit4, WAT test files
+        "*_test.gd",
     ]

     # Language detection by extension
@@ -928,6 +943,7 @@ class TestExampleExtractor:
         ".cs": "C#",
         ".php": "PHP",
         ".rb": "Ruby",
+        ".gd": "GDScript",
     }

     def __init__(
@@ -63,20 +63,17 @@ class TestAnalyzeSubcommand(unittest.TestCase):

     def test_all_skip_flags(self):
         """Test all skip flags are properly parsed."""
-        args = self.parser.parse_args(
-            [
-                "analyze",
-                "--directory",
-                ".",
-                "--skip-api-reference",
-                "--skip-dependency-graph",
-                "--skip-patterns",
-                "--skip-test-examples",
-                "--skip-how-to-guides",
-                "--skip-config-patterns",
-                "--skip-docs",
-            ]
-        )
+        args = self.parser.parse_args([
+            "analyze",
+            "--directory", ".",
+            "--skip-api-reference",
+            "--skip-dependency-graph",
+            "--skip-patterns",
+            "--skip-test-examples",
+            "--skip-how-to-guides",
+            "--skip-config-patterns",
+            "--skip-docs"
+        ])
         self.assertTrue(args.skip_api_reference)
         self.assertTrue(args.skip_dependency_graph)
         self.assertTrue(args.skip_patterns)
@@ -4,7 +4,8 @@ Tests for test_example_extractor.py - Extract usage examples from test files

 Test Coverage:
 - PythonTestAnalyzer (8 tests) - AST-based Python extraction
-- GenericTestAnalyzer (4 tests) - Regex-based extraction for other languages
+- GenericTestAnalyzer (7 tests) - Regex-based extraction for other languages
+  - JavaScript, Go, Rust, C# (NUnit), C# (Mocks), GDScript, Language fallback
 - ExampleQualityFilter (3 tests) - Quality filtering
 - TestExampleExtractor (4 tests) - Main orchestrator integration
 - End-to-end (1 test) - Full workflow
@@ -382,6 +383,65 @@ public void ProcessOrder_ShouldCallPaymentService()
        # Should extract instantiation and mock
        self.assertGreater(len(examples), 0)

    def test_extract_gdscript_gut_tests(self):
        """Test GDScript GUT/gdUnit4 test extraction"""
        code = '''
extends GutTest

# GUT test framework example
func test_player_instantiation():
    """Test player node creation"""
    var player = preload("res://Player.gd").new()
    player.name = "TestPlayer"
    player.health = 100

    assert_eq(player.name, "TestPlayer")
    assert_eq(player.health, 100)
    assert_true(player.is_alive())

func test_signal_connections():
    """Test signal connections"""
    var enemy = Enemy.new()
    enemy.connect("died", self, "_on_enemy_died")

    enemy.take_damage(100)

    assert_signal_emitted(enemy, "died")

@test
func test_gdunit4_annotation():
    """Test with gdUnit4 @test annotation"""
    var inventory = load("res://Inventory.gd").new()
    inventory.add_item("sword", 1)

    assert_contains(inventory.items, "sword")
    assert_eq(inventory.get_item_count("sword"), 1)

func test_game_state():
    """Test game state management"""
    const MAX_HEALTH = 100
    var player = Player.new()
    var game_state = GameState.new()

    game_state.initialize(player)

    assert_not_null(game_state.player)
    assert_eq(game_state.player.health, MAX_HEALTH)
'''
        examples = self.analyzer.extract("test_game.gd", code, "GDScript")

        # Should extract test functions and instantiations
        self.assertGreater(len(examples), 0)
        self.assertEqual(examples[0].language, "GDScript")

        # Check that we found some instantiations
        instantiations = [e for e in examples if e.category == "instantiation"]
        self.assertGreater(len(instantiations), 0)

        # Verify that preload/load patterns are captured
        has_preload = any("preload" in e.code or "load" in e.code for e in instantiations)
        self.assertTrue(has_preload)

    def test_language_fallback(self):
        """Test handling of unsupported languages"""
        code = """