fix: resolve all ruff linting errors (W293, F401, B904, UP007, UP045, E741, SIM102, SIM117, ARG)

Auto-fixed (whitespace, imports, type annotations):
- codebase_scraper.py: W293 blank lines with whitespace
- doc_scraper.py: W293 blank lines with whitespace
- parsers/extractors/__init__.py: W293
- parsers/extractors/base_parser.py: W293, UP007, UP045, F401

Manual fixes:
- enhancement_workflow.py: B904 raise without `from exc`, remove unused `os` import
- parsers/extractors/quality_scorer.py: E741 ambiguous var `l` → `line`
- parsers/extractors/rst_parser.py: SIM102 nested if → combined conditions (x2)
- pdf_scraper.py: F821 undefined `logger` → `print()` (consistent with file style)
- mcp/tools/workflow_tools.py: ARG001 unused `args` → `_args`
- tests/test_workflow_runner.py: ARG005 unused lambda args → `_a`/`_kw`, ARG001 `kwargs` → `_kwargs`
- tests/test_workflows_command.py: SIM117 nested with → combined with (x2)

All 1922 tests pass.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Author: yusyus
Date: 2026-02-18 22:44:41 +03:00
Parent: c44b88e801
Commit: 0878ad3ef6
20 changed files with 657 additions and 695 deletions


@@ -401,13 +401,13 @@ class DocToSkillConverter:
        # Try enhanced unified parser first
        try:
            from skill_seekers.cli.parsers.extractors import MarkdownParser
            parser = MarkdownParser()
            result = parser.parse_string(content, url)
            if result.success and result.document:
                doc = result.document
                # Extract links from the document
                links = []
                for link in doc.external_links:
@@ -421,7 +421,7 @@ class DocToSkillConverter:
                        full_url = full_url.split("#")[0]
                        if ".md" in full_url and self.is_valid_url(full_url) and full_url not in links:
                            links.append(full_url)
                return {
                    "url": url,
                    "title": doc.title or "",