* chore: update gitignore for audit reports and playwright cache

* fix: add YAML frontmatter (name + description) to all SKILL.md files
  - Added frontmatter to 34 skills that were missing it entirely (0% Tessl score)
  - Fixed name field format to kebab-case across all 169 skills
  - Resolves #284

* chore: sync codex skills symlinks [automated]

* fix: optimize 14 low-scoring skills via Tessl review (#290)
  Tessl optimization: 14 skills improved from ≤69% to 85%+. Closes #285, #286.

* chore: sync codex skills symlinks [automated]

* fix: optimize 18 skills via Tessl review + compliance fix (closes #287) (#291)
  Phase 1: 18 skills optimized via Tessl (avg 77% → 95%). Closes #287.

* feat: add scripts and references to 4 prompt-only skills + Tessl optimization (#292)
  Phase 2: 3 new scripts + 2 reference files for prompt-only skills. Tessl 45-55% → 94-100%.

* feat: add 6 agents + 5 slash commands for full coverage (v2.7.0) (#293)
  Phase 3: 6 new agents (all 9 categories covered) + 5 slash commands.

* fix: Phase 5 verification fixes + docs update (#294)

* chore: sync codex skills symlinks [automated]

* fix: marketplace audit — all 11 plugins validated by Claude Code (#295)
  Marketplace audit: all 11 plugins validated + installed + tested in Claude Code

* fix: restore 7 removed plugins + revert playwright-pro name to pw
  Reverts two overly aggressive audit changes:
  - Restored content-creator, demand-gen, fullstack-engineer, aws-architect, product-manager, scrum-master, skill-security-auditor to marketplace
  - Reverted playwright-pro plugin.json name back to 'pw' (intentional short name)

* refactor: split 21 over-500-line skills into SKILL.md + references (#296)

* chore: sync codex skills symlinks [automated]

* docs: update all documentation with accurate counts and regenerated skill pages
  - Update skill count to 170, Python tools to 213, references to 314 across all docs
  - Regenerate all 170 skill doc pages from latest SKILL.md sources
  - Update CLAUDE.md with v2.1.1 highlights, accurate architecture tree, and roadmap
  - Update README.md badges and overview table
  - Update marketplace.json metadata description and version
  - Update mkdocs.yml, index.md, getting-started.md with correct numbers

* fix: add root-level SKILL.md and .codex/instructions.md to all domains (#301)
  Root cause: CLI tools (ai-agent-skills, agent-skills-cli) look for SKILL.md at the specified install path. 7 of 9 domain directories were missing this file, causing "Skill not found" errors for bundle installs like:
  npx ai-agent-skills install alirezarezvani/claude-skills/engineering-team
  Fix:
  - Add root-level SKILL.md with YAML frontmatter to 7 domains
  - Add .codex/instructions.md to 8 domains (for Codex CLI discovery)
  - Update INSTALLATION.md with accurate skill counts (53→170)
  - Add troubleshooting entry for "Skill not found" error
  All 9 domains now have: SKILL.md + .codex/instructions.md + plugin.json
  Closes #301
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add Gemini CLI + OpenClaw support, fix Codex missing 25 skills
  Gemini CLI:
  - Add GEMINI.md with activation instructions
  - Add scripts/gemini-install.sh setup script
  - Add scripts/sync-gemini-skills.py (194 skills indexed)
  - Add .gemini/skills/ with symlinks for all skills, agents, commands
  - Remove phantom medium-content-pro entries from sync script
  - Add top-level folder filter to prevent gitignored dirs from leaking
  Codex CLI:
  - Fix sync-codex-skills.py missing "engineering" domain (25 POWERFUL skills)
  - Regenerate .codex/skills-index.json: 124 → 149 skills
  - Add 25 new symlinks in .codex/skills/
  OpenClaw:
  - Add OpenClaw installation section to INSTALLATION.md
  - Add ClawHub install + manual install + YAML frontmatter docs
  Documentation:
  - Update INSTALLATION.md with all 4 platforms + accurate counts
  - Update README.md: "three platforms" → "four platforms" + Gemini quick start
  - Update CLAUDE.md with Gemini CLI support in v2.1.1 highlights
  - Update SKILL-AUTHORING-STANDARD.md + SKILL_PIPELINE.md with Gemini steps
  - Add OpenClaw + Gemini to installation locations reference table
  Marketplace: all 18 plugins validated — sources exist, SKILL.md present
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(product,pm): world-class product & PM skills audit — 6 scripts, 5 agents, 7 commands, 23 references/assets
  Phase 1 — Agent & Command Foundation:
  - Rewrite cs-project-manager agent (55→515 lines, 4 workflows, 6 skill integrations)
  - Expand cs-product-manager agent (408→684 lines, orchestrates all 8 product skills)
  - Add 7 slash commands: /rice, /okr, /persona, /user-story, /sprint-health, /project-health, /retro
  Phase 2 — Script Gap Closure (2,779 lines):
  - jira-expert: jql_query_builder.py (22 patterns), workflow_validator.py
  - confluence-expert: space_structure_generator.py, content_audit_analyzer.py
  - atlassian-admin: permission_audit_tool.py
  - atlassian-templates: template_scaffolder.py (Confluence XHTML generation)
  Phase 3 — Reference & Asset Enrichment:
  - 9 product references (competitive-teardown, landing-page-generator, saas-scaffolder)
  - 6 PM references (confluence-expert, atlassian-admin, atlassian-templates)
  - 7 product assets (templates for PRD, RICE, sprint, stories, OKR, research, design system)
  - 1 PM asset (permission_scheme_template.json)
  Phase 4 — New Agents:
  - cs-agile-product-owner, cs-product-strategist, cs-ux-researcher
  Phase 5 — Integration & Polish:
  - Related Skills cross-references in 8 SKILL.md files
  - Updated product-team/CLAUDE.md (5→8 skills, 6→9 tools, 4 agents, 5 commands)
  - Updated project-management/CLAUDE.md (0→12 scripts, 3 commands)
  - Regenerated docs site (177 pages), updated homepage and getting-started
  Quality audit: 31 files reviewed, 29 PASS, 2 fixed (copy-frameworks.md, governance-framework.md)
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: audit and repair all plugins, agents, and commands
  - Fix 12 command files: correct CLI arg syntax, script paths, and usage docs
  - Fix 3 agents with broken script/reference paths (cs-content-creator, cs-demand-gen-specialist, cs-financial-analyst)
  - Add complete YAML frontmatter to 5 agents (cs-growth-strategist, cs-engineering-lead, cs-senior-engineer, cs-financial-analyst, cs-quality-regulatory)
  - Fix cs-ceo-advisor related agent path
  - Update marketplace.json metadata counts (224 tools, 341 refs, 14 agents, 12 commands)
  Verified: all 19 scripts pass --help, all 14 agent paths resolve, mkdocs builds clean.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: repair 25 Python scripts failing --help across all domains
  - Fix Python 3.10+ syntax (float | None → Optional[float]) in 2 scripts
  - Add argparse CLI handling to 9 marketing scripts using raw sys.argv
  - Fix 10 scripts crashing at module level (wrap in __main__, add argparse)
  - Make yaml/prefect/mcp imports conditional with stdlib fallbacks (4 scripts)
  - Fix f-string backslash syntax in project_bootstrapper.py
  - Fix -h flag conflict in pr_analyzer.py
  - Fix tech-debt.md description (score → prioritize)
  All 237 scripts now pass python3 --help verification.
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(product-team): close 3 verified gaps in product skills
  - Fix competitive-teardown/SKILL.md: replace broken references DATA_COLLECTION.md → references/data-collection-guide.md and TEMPLATES.md → references/analysis-templates.md (workflow was broken at steps 2 and 4)
  - Upgrade landing_page_scaffolder.py: add TSX + Tailwind output format (--format tsx) matching SKILL.md promise of Next.js/React components. 4 design styles (dark-saas, clean-minimal, bold-startup, enterprise). TSX is now default; HTML preserved via --format html
  - Rewrite README.md: fix stale counts (was 5 skills/15+ tools, now accurately shows 8 skills/9 tools), remove 7 ghost scripts that never existed (sprint_planner.py, velocity_tracker.py, etc.)
  - Fix tech-debt.md description (score → prioritize)
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* release: v2.1.2 — landing page TSX output, brand voice integration, docs update
  - Landing page generator defaults to Next.js TSX + Tailwind CSS (4 design styles)
  - Brand voice analyzer integrated into landing page generation workflow
  - CHANGELOG, CLAUDE.md, README.md updated for v2.1.2
  - All 13 plugin.json + marketplace.json bumped to 2.1.2
  - Gemini/Codex skill indexes re-synced
  - Backward compatible: --format html preserved, no breaking changes
  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Leo <leo@openclaw.ai>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
495 lines
18 KiB
Python
#!/usr/bin/env python3
"""
JQL Query Builder

Pattern-matching JQL builder from natural language descriptions. Maps common
phrases to JQL operators and constructs valid queries with syntax validation.

Usage:
    python jql_query_builder.py "high priority bugs in PROJECT assigned to me"
    python jql_query_builder.py "overdue tasks in PROJ" --format json
    python jql_query_builder.py --patterns
"""

import argparse
import json
import re
import sys
from typing import Any, Dict, Optional


# ---------------------------------------------------------------------------
# Pattern Library
# ---------------------------------------------------------------------------

PATTERN_LIBRARY = {
    "my_open_bugs": {
        "phrases": ["my open bugs", "my bugs", "bugs assigned to me"],
        "jql": 'assignee = currentUser() AND type = Bug AND status != Done',
        "description": "All open bugs assigned to current user",
    },
    "high_priority_bugs": {
        "phrases": ["high priority bugs", "critical bugs", "urgent bugs", "p1 bugs"],
        "jql": 'type = Bug AND priority in (Highest, High) AND status != Done',
        "description": "High and highest priority open bugs",
    },
    "my_open_tasks": {
        "phrases": ["my open tasks", "my tasks", "tasks assigned to me", "my work"],
        "jql": 'assignee = currentUser() AND status != Done',
        "description": "All open issues assigned to current user",
    },
    "unassigned_issues": {
        "phrases": ["unassigned", "unassigned issues", "no assignee"],
        "jql": 'assignee is EMPTY AND status != Done',
        "description": "Issues with no assignee",
    },
    "recently_created": {
        "phrases": ["recently created", "new issues", "created this week", "recent"],
        "jql": 'created >= -7d ORDER BY created DESC',
        "description": "Issues created in the last 7 days",
    },
    "recently_updated": {
        "phrases": ["recently updated", "updated this week", "recent changes"],
        "jql": 'updated >= -7d ORDER BY updated DESC',
        "description": "Issues updated in the last 7 days",
    },
    "overdue": {
        "phrases": ["overdue", "past due", "missed deadline", "overdue tasks"],
        "jql": 'duedate < now() AND status != Done',
        "description": "Issues past their due date",
    },
    "due_this_week": {
        "phrases": ["due this week", "due soon", "upcoming deadlines"],
        "jql": 'duedate >= startOfWeek() AND duedate <= endOfWeek() AND status != Done',
        "description": "Issues due this week",
    },
    "blocked_issues": {
        "phrases": ["blocked", "blocked issues", "impediments"],
        "jql": 'status = Blocked OR status = Impediment',
        "description": "Issues in blocked or impediment status",
    },
    "in_progress": {
        "phrases": ["in progress", "being worked on", "active work"],
        "jql": 'status = "In Progress"',
        "description": "Issues currently in progress",
    },
    "sprint_issues": {
        "phrases": ["current sprint", "this sprint", "active sprint"],
        "jql": 'sprint in openSprints()',
        "description": "Issues in the current active sprint",
    },
    "backlog": {
        "phrases": ["backlog", "backlog items", "not started"],
        "jql": 'sprint is EMPTY AND status = "To Do" ORDER BY priority DESC',
        "description": "Issues in the backlog not assigned to a sprint",
    },
    "stories_without_estimates": {
        "phrases": ["no estimates", "unestimated", "missing estimates", "no story points"],
        "jql": 'type = Story AND (storyPoints is EMPTY OR storyPoints = 0) AND status != Done',
        "description": "Stories missing story point estimates",
    },
    "epics_in_progress": {
        "phrases": ["active epics", "epics in progress", "open epics"],
        "jql": 'type = Epic AND status != Done ORDER BY priority DESC',
        "description": "Epics that are not yet completed",
    },
    "done_this_week": {
        "phrases": ["done this week", "completed this week", "resolved this week"],
        "jql": 'status changed to Done DURING (startOfWeek(), now())',
        "description": "Issues completed during the current week",
    },
    "created_vs_resolved": {
        "phrases": ["created vs resolved", "issue flow", "throughput"],
        "jql": 'created >= -30d ORDER BY created DESC',
        "description": "Issues created in the last 30 days for flow analysis",
    },
    "my_reported_issues": {
        "phrases": ["my reported", "reported by me", "i created", "i reported"],
        "jql": 'reporter = currentUser() ORDER BY created DESC',
        "description": "Issues reported by current user",
    },
    "stale_issues": {
        "phrases": ["stale", "stale issues", "not updated", "abandoned"],
        "jql": 'updated <= -30d AND status != Done ORDER BY updated ASC',
        "description": "Issues not updated in 30+ days",
    },
    "subtasks_without_parent": {
        "phrases": ["orphan subtasks", "subtasks no parent", "loose subtasks"],
        "jql": 'type = Sub-task AND parent is EMPTY',
        "description": "Subtasks missing parent issues",
    },
    "high_priority_unassigned": {
        "phrases": ["high priority unassigned", "urgent unassigned", "critical no owner"],
        "jql": 'priority in (Highest, High) AND assignee is EMPTY AND status != Done',
        "description": "High priority issues with no assignee",
    },
    "bugs_by_component": {
        "phrases": ["bugs by component", "component bugs"],
        "jql": 'type = Bug AND status != Done ORDER BY component ASC',
        "description": "Open bugs organized by component",
    },
    "resolved_recently": {
        "phrases": ["resolved recently", "recently resolved", "fixed this month"],
        "jql": 'resolved >= -30d ORDER BY resolved DESC',
        "description": "Issues resolved in the last 30 days",
    },
}

# Keyword-to-JQL fragment mapping for dynamic query building
KEYWORD_FRAGMENTS = {
    # Issue types
    "bug": ("type", "= Bug"),
    "bugs": ("type", "= Bug"),
    "story": ("type", "= Story"),
    "stories": ("type", "= Story"),
    "task": ("type", "= Task"),
    "tasks": ("type", "= Task"),
    "epic": ("type", "= Epic"),
    "epics": ("type", "= Epic"),
    "subtask": ("type", "= Sub-task"),
    "sub-task": ("type", "= Sub-task"),
    # Statuses
    "open": ("status", "!= Done"),
    "closed": ("status", "= Done"),
    "done": ("status", "= Done"),
    "resolved": ("status", "= Done"),
    "todo": ("status", '= "To Do"'),
    # Priorities
    "critical": ("priority", "= Highest"),
    "highest": ("priority", "= Highest"),
    "high": ("priority", "in (Highest, High)"),
    "medium": ("priority", "= Medium"),
    "low": ("priority", "in (Low, Lowest)"),
    "lowest": ("priority", "= Lowest"),
    # Assignee
    "me": ("assignee", "= currentUser()"),
    "mine": ("assignee", "= currentUser()"),
    "unassigned": ("assignee", "is EMPTY"),
    # Time
    "overdue": ("duedate", "< now()"),
    "today": ("duedate", "= now()"),
}

PROJECT_PATTERN = re.compile(r'\b([A-Z]{2,10})\b')
ASSIGNEE_PATTERN = re.compile(r'assigned\s+to\s+(\w+)', re.IGNORECASE)
LABEL_PATTERN = re.compile(r'label[s]?\s*[=:]\s*["\']?(\w+)["\']?', re.IGNORECASE)
COMPONENT_PATTERN = re.compile(r'component[s]?\s*[=:]\s*["\']?(\w+)["\']?', re.IGNORECASE)
DATE_RANGE_PATTERN = re.compile(r'last\s+(\d+)\s+(day|week|month)s?', re.IGNORECASE)
SPRINT_NAME_PATTERN = re.compile(r'sprint\s+["\']?(\w[\w\s]*\w)["\']?', re.IGNORECASE)

# Words to exclude from project matching
EXCLUDED_WORDS = {
    "AND", "OR", "NOT", "IN", "IS", "TO", "BY", "ON", "DO", "BE",
    "THE", "ALL", "MY", "NO", "OF", "AT", "AS", "IF", "IT",
    "BUG", "BUGS", "TASK", "TASKS", "STORY", "EPIC", "DONE",
    "HIGH", "LOW", "MEDIUM", "JQL",
}


# ---------------------------------------------------------------------------
# Query Builder
# ---------------------------------------------------------------------------

def find_matching_pattern(description: str) -> Optional[Dict[str, Any]]:
    """Check if description matches a known pattern exactly."""
    desc_lower = description.lower().strip()
    for pattern_name, pattern_data in PATTERN_LIBRARY.items():
        for phrase in pattern_data["phrases"]:
            if phrase in desc_lower or desc_lower in phrase:
                return {
                    "pattern_name": pattern_name,
                    "jql": pattern_data["jql"],
                    "description": pattern_data["description"],
                    "match_type": "exact_pattern",
                }
    return None


def build_jql_from_description(description: str) -> Dict[str, Any]:
    """Build JQL query from natural language description."""
    # First try exact pattern match
    pattern_match = find_matching_pattern(description)
    if pattern_match:
        # Augment with project if mentioned
        project = _extract_project(description)
        if project:
            pattern_match["jql"] = f'project = {project} AND {pattern_match["jql"]}'
        return pattern_match

    # Dynamic query building
    clauses = []
    used_fields = set()
    desc_lower = description.lower()

    # Extract project
    project = _extract_project(description)
    if project:
        clauses.append(f"project = {project}")
        used_fields.add("project")

    # Extract keyword-based fragments
    for keyword, (field, fragment) in KEYWORD_FRAGMENTS.items():
        if keyword in desc_lower.split() and field not in used_fields:
            clauses.append(f"{field} {fragment}")
            used_fields.add(field)

    # Extract explicit assignee
    assignee_match = ASSIGNEE_PATTERN.search(description)
    if assignee_match and "assignee" not in used_fields:
        assignee = assignee_match.group(1)
        if assignee.lower() in ("me", "myself"):
            clauses.append("assignee = currentUser()")
        else:
            clauses.append(f'assignee = "{assignee}"')
        used_fields.add("assignee")

    # Extract labels
    label_match = LABEL_PATTERN.search(description)
    if label_match:
        clauses.append(f'labels = "{label_match.group(1)}"')

    # Extract component
    component_match = COMPONENT_PATTERN.search(description)
    if component_match:
        clauses.append(f'component = "{component_match.group(1)}"')

    # Extract date ranges
    date_match = DATE_RANGE_PATTERN.search(description)
    if date_match:
        amount = date_match.group(1)
        unit = date_match.group(2).lower()
        unit_char = {"day": "d", "week": "w", "month": "m"}.get(unit, "d")
        clauses.append(f"created >= -{amount}{unit_char}")

    # Extract sprint reference
    sprint_match = SPRINT_NAME_PATTERN.search(description)
    if sprint_match:
        sprint_name = sprint_match.group(1).strip()
        if sprint_name.lower() in ("current", "active", "open"):
            clauses.append("sprint in openSprints()")
        else:
            clauses.append(f'sprint = "{sprint_name}"')

    # Default: if no status clause and not looking for done items
    if "status" not in used_fields and "done" not in desc_lower and "closed" not in desc_lower:
        clauses.append("status != Done")

    if not clauses:
        return {
            "jql": "",
            "description": "Could not build query from description",
            "match_type": "no_match",
            "error": "No recognizable patterns found in description",
        }

    jql = " AND ".join(clauses)

    # Add ORDER BY for common scenarios
    if "recent" in desc_lower or "latest" in desc_lower:
        jql += " ORDER BY created DESC"
    elif "priority" in desc_lower or "urgent" in desc_lower:
        jql += " ORDER BY priority DESC"

    return {
        "jql": jql,
        "description": f"Dynamic query from: {description}",
        "match_type": "dynamic",
        "clauses_used": len(clauses),
    }


def _extract_project(description: str) -> Optional[str]:
    """Extract project key from description."""
    # Look for IN/in PROJECT pattern
    in_project = re.search(r'\bin\s+([A-Z]{2,10})\b', description)
    if in_project and in_project.group(1) not in EXCLUDED_WORDS:
        return in_project.group(1)

    # Look for standalone project keys
    for match in PROJECT_PATTERN.finditer(description):
        word = match.group(1)
        if word not in EXCLUDED_WORDS:
            return word

    return None


def validate_jql_syntax(jql: str) -> Dict[str, Any]:
    """Basic JQL syntax validation."""
    issues = []

    if not jql.strip():
        return {"valid": False, "issues": ["Empty query"]}

    # Check balanced quotes
    single_quotes = jql.count("'")
    double_quotes = jql.count('"')
    if single_quotes % 2 != 0:
        issues.append("Unbalanced single quotes")
    if double_quotes % 2 != 0:
        issues.append("Unbalanced double quotes")

    # Check balanced parentheses
    open_parens = jql.count("(")
    close_parens = jql.count(")")
    if open_parens != close_parens:
        issues.append(f"Unbalanced parentheses: {open_parens} open, {close_parens} close")

    jql_upper = jql.upper()

    # Check AND/OR placement
    if jql_upper.strip().startswith("AND") or jql_upper.strip().startswith("OR"):
        issues.append("Query cannot start with AND/OR")
    if jql_upper.strip().endswith("AND") or jql_upper.strip().endswith("OR"):
        issues.append("Query cannot end with AND/OR")

    # Check ORDER BY syntax
    order_match = re.search(r'ORDER\s+BY\s+(\w+)(?:\s+(ASC|DESC))?', jql, re.IGNORECASE)
    if "ORDER" in jql_upper and not order_match:
        issues.append("Invalid ORDER BY syntax")

    return {
        "valid": len(issues) == 0,
        "issues": issues,
        "query_length": len(jql),
    }


# ---------------------------------------------------------------------------
# Output Formatting
# ---------------------------------------------------------------------------

def format_text_output(result: Dict[str, Any]) -> str:
    """Format results as readable text report."""
    lines = []
    lines.append("=" * 60)
    lines.append("JQL QUERY BUILDER RESULTS")
    lines.append("=" * 60)
    lines.append("")

    if "error" in result:
        lines.append(f"ERROR: {result['error']}")
        return "\n".join(lines)

    lines.append(f"Match Type: {result.get('match_type', 'unknown')}")
    lines.append(f"Description: {result.get('description', '')}")
    lines.append("")
    lines.append("GENERATED JQL")
    lines.append("-" * 30)
    lines.append(result.get("jql", ""))
    lines.append("")

    validation = result.get("validation", {})
    if validation:
        lines.append("VALIDATION")
        lines.append("-" * 30)
        lines.append(f"Valid: {'Yes' if validation.get('valid') else 'No'}")
        if validation.get("issues"):
            for issue in validation["issues"]:
                lines.append(f"  - {issue}")

    if result.get("pattern_name"):
        lines.append("")
        lines.append(f"Matched Pattern: {result['pattern_name']}")

    return "\n".join(lines)


def format_patterns_output(output_format: str) -> str:
    """Format available patterns list."""
    if output_format == "json":
        patterns = {}
        for name, data in PATTERN_LIBRARY.items():
            patterns[name] = {
                "description": data["description"],
                "phrases": data["phrases"],
                "jql": data["jql"],
            }
        return json.dumps(patterns, indent=2)

    lines = []
    lines.append("=" * 60)
    lines.append("AVAILABLE JQL PATTERNS")
    lines.append("=" * 60)
    lines.append("")

    for name, data in PATTERN_LIBRARY.items():
        lines.append(f"  {name}")
        lines.append(f"    Description: {data['description']}")
        lines.append(f"    Phrases: {', '.join(data['phrases'])}")
        lines.append(f"    JQL: {data['jql']}")
        lines.append("")

    lines.append(f"Total patterns: {len(PATTERN_LIBRARY)}")
    return "\n".join(lines)


def format_json_output(result: Dict[str, Any]) -> Dict[str, Any]:
    """Format results as JSON."""
    return result


# ---------------------------------------------------------------------------
# CLI Interface
# ---------------------------------------------------------------------------

def main() -> int:
    """Main CLI entry point."""
    parser = argparse.ArgumentParser(
        description="Build JQL queries from natural language descriptions"
    )
    parser.add_argument(
        "description",
        nargs="?",
        help="Natural language description of the query",
    )
    parser.add_argument(
        "--format",
        choices=["text", "json"],
        default="text",
        help="Output format (default: text)",
    )
    parser.add_argument(
        "--patterns",
        action="store_true",
        help="List all available query patterns",
    )

    args = parser.parse_args()

    try:
        if args.patterns:
            print(format_patterns_output(args.format))
            return 0

        if not args.description:
            parser.error("description is required unless --patterns is used")

        # Build query
        result = build_jql_from_description(args.description)

        # Validate
        if result.get("jql"):
            result["validation"] = validate_jql_syntax(result["jql"])

        # Output results
        if args.format == "json":
            output = format_json_output(result)
            print(json.dumps(output, indent=2))
        else:
            output = format_text_output(result)
            print(output)

        return 0

    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        return 1


if __name__ == "__main__":
    sys.exit(main())
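# ---------------------------------------------------------------------------
# Illustration (not part of the shipped script): a minimal standalone sketch
# of the keyword-fragment technique used by build_jql_from_description() --
# walk the whitespace-split tokens, take at most one clause per JQL field,
# and join the clauses with AND. DEMO_FRAGMENTS and sketch_jql are
# hypothetical names introduced here for illustration only.
# ---------------------------------------------------------------------------

DEMO_FRAGMENTS = {
    "bugs": ("type", "= Bug"),
    "high": ("priority", "in (Highest, High)"),
    "unassigned": ("assignee", "is EMPTY"),
}


def sketch_jql(description: str) -> str:
    """Build a JQL string from keywords, first match per field wins."""
    clauses, used_fields = [], set()
    for word in description.lower().split():
        if word in DEMO_FRAGMENTS and DEMO_FRAGMENTS[word][0] not in used_fields:
            field, fragment = DEMO_FRAGMENTS[word]
            clauses.append(f"{field} {fragment}")
            used_fields.add(field)
    return " AND ".join(clauses)

# sketch_jql("high unassigned bugs")
# -> 'priority in (Highest, High) AND assignee is EMPTY AND type = Bug'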