Python 3.14's urlparse() raises ValueError on URLs with unencoded
brackets that look like malformed IPv6 (e.g. http://[fdaa:x:x:x::x,
seen in docs.openclaw.ai's llms-full.txt). sanitize_url() called urlparse()
BEFORE encoding brackets, so it crashed before it could fix them.
Fix: catch ValueError from urlparse, encode ALL brackets, then retry.
This is safe because if urlparse rejected the brackets, they are NOT
valid IPv6 host literals and should be encoded anyway.
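A minimal sketch of the retry shape, with an illustrative helper name
(the shipped sanitize_url() wraps this in more cleanup):

    from urllib.parse import urlparse

    def parse_with_bracket_fallback(url: str):
        """Parse a URL, retrying with brackets percent-encoded on failure."""
        try:
            return urlparse(url)
        except ValueError:
            # urlparse only rejects brackets that cannot form a valid
            # IPv6 host literal, so encoding every bracket is safe.
            encoded = url.replace("[", "%5B").replace("]", "%5D")
            return urlparse(encoded)

    # The crashing input from the report now parses:
    # parse_with_bracket_fallback("http://[fdaa:x:x:x::x")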
Also fixed Discord e2e tests to skip gracefully on network issues.
Fixes #284
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The original fix (741daf1) only patched LlmsTxtParser._clean_url(),
which covers URLs extracted directly from llms.txt content. But URLs
discovered from .md files during BFS crawl (_extract_markdown_content)
and from HTML pages (extract_content) bypass _clean_url() entirely.
When those pages contain links with square brackets (e.g.
/api/[v1]/users), httpx raises 'Invalid IPv6 URL' on fetch.
Fix: add a shared sanitize_url() utility in cli/utils.py that
percent-encodes [ and ] in path/query components (sketched below), and
apply it at every URL ingestion point:
- _enqueue_url(): main chokepoint — all discovered URLs pass through
- scrape_page(): safety net for start_urls that skip _enqueue_url
- scrape_page_async(): same for async mode
- dry-run sync/async paths: direct fetches that also bypass _enqueue_url
LlmsTxtParser._clean_url() now delegates bracket-encoding to the
shared sanitize_url() (DRY), keeping only its malformed-anchor
stripping logic.
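A condensed sketch of the shared utility's core (the shipped function
also carries the ValueError fallback sketched in the commit above):

    from urllib.parse import urlparse, urlunparse

    def sanitize_url(url: str) -> str:
        """Percent-encode [ and ] in path/query so httpx accepts the URL."""
        parsed = urlparse(url)

        def encode(part: str) -> str:
            return part.replace("[", "%5B").replace("]", "%5D")

        return urlunparse(parsed._replace(path=encode(parsed.path),
                                          query=encode(parsed.query)))

    # sanitize_url("https://example.com/api/[v1]/users")
    # -> "https://example.com/api/%5Bv1%5D/users"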
Added 16 tests: sanitize_url unit tests, _clean_url bracket tests,
_enqueue_url sanitization tests, and integration test verifying
markdown content with bracket URLs is handled safely.
Fixes #284
- Change --enhance-workflow from type:str to action:append in all argument
files (workflow, create, scrape, github, pdf) so the flag can be given
multiple times to chain workflows in sequence (see the argparse sketch
after this list)
- Add workflow_runner.py: shared utility used by all 4 scrapers (see the
second sketch after this list)
- collect_workflow_vars(): merges scraper-provided extra context with
user --var flags (user flags take precedence over scraper metadata)
- run_workflows(): executes named workflows in order, then any inline
--enhance-stage workflow; handles dry-run/preview mode
- Remove duplicate ~115-130 line workflow blocks from doc_scraper,
github_scraper, pdf_scraper, and codebase_scraper; replace with
single run_workflows() call each
- Remove mutual exclusivity between workflows and AI enhancement:
workflows now run first, then traditional enhancement continues
independently (--enhance-level 0 to disable)
- Add tests/test_workflow_runner.py: 21 tests covering no-flags, single
workflow, multiple/chained workflows, inline stages, mixed mode,
variable precedence, and dry-run
- Fix test_markdown_parsing: accept "text" or "unknown" for unlabelled
code blocks (unified MarkdownParser returns "text" by default)
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Fixed incorrect variable names in list comprehensions that were causing
NameError in CI (Python 3.11/3.12):
Critical fixes:
- tests/test_markdown_parsing.py: 'l' → 'link' in list comprehension
- src/skill_seekers/cli/pdf_extractor_poc.py: 'l' → 'line' (2 occurrences)
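The failure shape, assuming the comprehension bound one name but its
body referenced the other (the exact expressions in the repo differ):

    links = ["  /a  ", "  /b  "]
    # Before: binds `l` but references `link`, so CI hit
    #   NameError: name 'link' is not defined
    # urls = [link.strip() for l in links]
    # After: bind and reference the same descriptive name.
    urls = [link.strip() for link in links]
    assert urls == ["/a", "/b"]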
Additional auto-lint fixes:
- Removed unused imports in llms_txt_downloader.py, llms_txt_parser.py
- Fixed comparison operators in config files
- Fixed list comprehensions in other files
All tests now pass in CI.
Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>