Files
claude-skills-reference/project-management/atlassian-admin/scripts/permission_audit_tool.py
Alireza Rezvani a68ae3a05e Dev (#305)
* chore: update gitignore for audit reports and playwright cache

* fix: add YAML frontmatter (name + description) to all SKILL.md files

- Added frontmatter to 34 skills that were missing it entirely (0% Tessl score)
- Fixed name field format to kebab-case across all 169 skills
- Resolves #284

* chore: sync codex skills symlinks [automated]

* fix: optimize 14 low-scoring skills via Tessl review (#290)

Tessl optimization: 14 skills improved from ≤69% to 85%+. Closes #285, #286.

* chore: sync codex skills symlinks [automated]

* fix: optimize 18 skills via Tessl review + compliance fix (closes #287) (#291)

Phase 1: 18 skills optimized via Tessl (avg 77% → 95%). Closes #287.

* feat: add scripts and references to 4 prompt-only skills + Tessl optimization (#292)

Phase 2: 3 new scripts + 2 reference files for prompt-only skills. Tessl 45-55% → 94-100%.

* feat: add 6 agents + 5 slash commands for full coverage (v2.7.0) (#293)

Phase 3: 6 new agents (all 9 categories covered) + 5 slash commands.

* fix: Phase 5 verification fixes + docs update (#294)

Phase 5 verification fixes

* chore: sync codex skills symlinks [automated]

* fix: marketplace audit — all 11 plugins validated by Claude Code (#295)

Marketplace audit: all 11 plugins validated + installed + tested in Claude Code

* fix: restore 7 removed plugins + revert playwright-pro name to pw

Reverts two overly aggressive audit changes:
- Restored content-creator, demand-gen, fullstack-engineer, aws-architect,
  product-manager, scrum-master, skill-security-auditor to marketplace
- Reverted playwright-pro plugin.json name back to 'pw' (intentional short name)

* refactor: split 21 over-500-line skills into SKILL.md + references (#296)

* chore: sync codex skills symlinks [automated]

* docs: update all documentation with accurate counts and regenerated skill pages

- Update skill count to 170, Python tools to 213, references to 314 across all docs
- Regenerate all 170 skill doc pages from latest SKILL.md sources
- Update CLAUDE.md with v2.1.1 highlights, accurate architecture tree, and roadmap
- Update README.md badges and overview table
- Update marketplace.json metadata description and version
- Update mkdocs.yml, index.md, getting-started.md with correct numbers

* fix: add root-level SKILL.md and .codex/instructions.md to all domains (#301)

Root cause: CLI tools (ai-agent-skills, agent-skills-cli) look for SKILL.md
at the specified install path. 7 of 9 domain directories were missing this
file, causing "Skill not found" errors for bundle installs like:
  npx ai-agent-skills install alirezarezvani/claude-skills/engineering-team

Fix:
- Add root-level SKILL.md with YAML frontmatter to 7 domains
- Add .codex/instructions.md to 8 domains (for Codex CLI discovery)
- Update INSTALLATION.md with accurate skill counts (53→170)
- Add troubleshooting entry for "Skill not found" error

All 9 domains now have: SKILL.md + .codex/instructions.md + plugin.json

Closes #301

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: add Gemini CLI + OpenClaw support, fix Codex missing 25 skills

Gemini CLI:
- Add GEMINI.md with activation instructions
- Add scripts/gemini-install.sh setup script
- Add scripts/sync-gemini-skills.py (194 skills indexed)
- Add .gemini/skills/ with symlinks for all skills, agents, commands
- Remove phantom medium-content-pro entries from sync script
- Add top-level folder filter to prevent gitignored dirs from leaking

Codex CLI:
- Fix sync-codex-skills.py missing "engineering" domain (25 POWERFUL skills)
- Regenerate .codex/skills-index.json: 124 → 149 skills
- Add 25 new symlinks in .codex/skills/

OpenClaw:
- Add OpenClaw installation section to INSTALLATION.md
- Add ClawHub install + manual install + YAML frontmatter docs

Documentation:
- Update INSTALLATION.md with all 4 platforms + accurate counts
- Update README.md: "three platforms" → "four platforms" + Gemini quick start
- Update CLAUDE.md with Gemini CLI support in v2.1.1 highlights
- Update SKILL-AUTHORING-STANDARD.md + SKILL_PIPELINE.md with Gemini steps
- Add OpenClaw + Gemini to installation locations reference table

Marketplace: all 18 plugins validated — sources exist, SKILL.md present

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat(product,pm): world-class product & PM skills audit — 6 scripts, 5 agents, 7 commands, 23 references/assets

Phase 1 — Agent & Command Foundation:
- Rewrite cs-project-manager agent (55→515 lines, 4 workflows, 6 skill integrations)
- Expand cs-product-manager agent (408→684 lines, orchestrates all 8 product skills)
- Add 7 slash commands: /rice, /okr, /persona, /user-story, /sprint-health, /project-health, /retro

Phase 2 — Script Gap Closure (2,779 lines):
- jira-expert: jql_query_builder.py (22 patterns), workflow_validator.py
- confluence-expert: space_structure_generator.py, content_audit_analyzer.py
- atlassian-admin: permission_audit_tool.py
- atlassian-templates: template_scaffolder.py (Confluence XHTML generation)

Phase 3 — Reference & Asset Enrichment:
- 9 product references (competitive-teardown, landing-page-generator, saas-scaffolder)
- 6 PM references (confluence-expert, atlassian-admin, atlassian-templates)
- 7 product assets (templates for PRD, RICE, sprint, stories, OKR, research, design system)
- 1 PM asset (permission_scheme_template.json)

Phase 4 — New Agents:
- cs-agile-product-owner, cs-product-strategist, cs-ux-researcher

Phase 5 — Integration & Polish:
- Related Skills cross-references in 8 SKILL.md files
- Updated product-team/CLAUDE.md (5→8 skills, 6→9 tools, 4 agents, 5 commands)
- Updated project-management/CLAUDE.md (0→12 scripts, 3 commands)
- Regenerated docs site (177 pages), updated homepage and getting-started

Quality audit: 31 files reviewed, 29 PASS, 2 fixed (copy-frameworks.md, governance-framework.md)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: audit and repair all plugins, agents, and commands

- Fix 12 command files: correct CLI arg syntax, script paths, and usage docs
- Fix 3 agents with broken script/reference paths (cs-content-creator,
  cs-demand-gen-specialist, cs-financial-analyst)
- Add complete YAML frontmatter to 5 agents (cs-growth-strategist,
  cs-engineering-lead, cs-senior-engineer, cs-financial-analyst,
  cs-quality-regulatory)
- Fix cs-ceo-advisor related agent path
- Update marketplace.json metadata counts (224 tools, 341 refs, 14 agents,
  12 commands)

Verified: all 19 scripts pass --help, all 14 agent paths resolve, mkdocs
builds clean.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: repair 25 Python scripts failing --help across all domains

- Fix Python 3.10+ syntax (float | None → Optional[float]) in 2 scripts
- Add argparse CLI handling to 9 marketing scripts using raw sys.argv
- Fix 10 scripts crashing at module level (wrap in __main__, add argparse)
- Make yaml/prefect/mcp imports conditional with stdlib fallbacks (4 scripts)
- Fix f-string backslash syntax in project_bootstrapper.py
- Fix -h flag conflict in pr_analyzer.py
- Fix tech-debt.md description (score → prioritize)

All 237 scripts now pass python3 --help verification.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(product-team): close 3 verified gaps in product skills

- Fix competitive-teardown/SKILL.md: replace broken references
  DATA_COLLECTION.md → references/data-collection-guide.md and
  TEMPLATES.md → references/analysis-templates.md (workflow was broken
  at steps 2 and 4)

- Upgrade landing_page_scaffolder.py: add TSX + Tailwind output format
  (--format tsx) matching SKILL.md promise of Next.js/React components.
  4 design styles (dark-saas, clean-minimal, bold-startup, enterprise).
  TSX is now default; HTML preserved via --format html

- Rewrite README.md: fix stale counts (was 5 skills/15+ tools, now
  accurately shows 8 skills/9 tools), remove 7 ghost scripts that
  never existed (sprint_planner.py, velocity_tracker.py, etc.)

- Fix tech-debt.md description (score → prioritize)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* release: v2.1.2 — landing page TSX output, brand voice integration, docs update

- Landing page generator defaults to Next.js TSX + Tailwind CSS (4 design styles)
- Brand voice analyzer integrated into landing page generation workflow
- CHANGELOG, CLAUDE.md, README.md updated for v2.1.2
- All 13 plugin.json + marketplace.json bumped to 2.1.2
- Gemini/Codex skill indexes re-synced
- Backward compatible: --format html preserved, no breaking changes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: alirezarezvani <5697919+alirezarezvani@users.noreply.github.com>
Co-authored-by: Leo <leo@openclaw.ai>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-10 09:48:49 +01:00

470 lines
16 KiB
Python

#!/usr/bin/env python3
"""
Permission Audit Tool
Analyzes Atlassian permission schemes for security issues. Checks for
over-permissioned groups, direct user permissions, missing restrictions on
sensitive actions, inconsistencies across projects, and compliance gaps.
Usage:
python permission_audit_tool.py permissions.json
python permission_audit_tool.py permissions.json --format json
"""
import argparse
import json
import sys
from typing import Any, Dict, List, Optional, Set
# ---------------------------------------------------------------------------
# Audit Configuration
# ---------------------------------------------------------------------------
SENSITIVE_PERMISSIONS = {
"administer_project",
"administer_jira",
"delete_issues",
"delete_all_comments",
"delete_all_attachments",
"manage_watchers",
"modify_reporter",
"bulk_change",
"system_admin",
"manage_group_filter_subscriptions",
}
RECOMMENDED_GROUP_ONLY_PERMISSIONS = {
"browse_projects",
"create_issues",
"edit_issues",
"transition_issues",
"assign_issues",
"resolve_issues",
"close_issues",
"add_comments",
"edit_all_comments",
}
SEVERITY_WEIGHTS = {
"critical": 25,
"high": 15,
"medium": 8,
"low": 3,
"info": 1,
}
# ---------------------------------------------------------------------------
# Audit Checks
# ---------------------------------------------------------------------------
def check_over_permissioned_groups(
schemes: List[Dict[str, Any]],
) -> List[Dict[str, str]]:
"""Check for groups with overly broad admin access."""
findings = []
for scheme in schemes:
scheme_name = scheme.get("name", "Unknown Scheme")
grants = scheme.get("grants", [])
group_permissions = {}
for grant in grants:
group = grant.get("group", "")
permission = grant.get("permission", "").lower()
if group:
if group not in group_permissions:
group_permissions[group] = set()
group_permissions[group].add(permission)
for group, perms in group_permissions.items():
admin_perms = perms & SENSITIVE_PERMISSIONS
if len(admin_perms) >= 3:
findings.append({
"rule": "over_permissioned_group",
"severity": "high",
"scheme": scheme_name,
"group": group,
"message": f"Group '{group}' has {len(admin_perms)} sensitive permissions "
f"in scheme '{scheme_name}': {', '.join(sorted(admin_perms))}. "
f"Review if all are necessary.",
})
if "system_admin" in perms or "administer_jira" in perms:
findings.append({
"rule": "admin_access_warning",
"severity": "critical",
"scheme": scheme_name,
"group": group,
"message": f"Group '{group}' has system/Jira admin access in '{scheme_name}'. "
f"Ensure this is strictly necessary and membership is limited.",
})
return findings
def check_direct_user_permissions(
schemes: List[Dict[str, Any]],
) -> List[Dict[str, str]]:
"""Check for permissions granted directly to users instead of groups."""
findings = []
for scheme in schemes:
scheme_name = scheme.get("name", "Unknown Scheme")
grants = scheme.get("grants", [])
for grant in grants:
user = grant.get("user", "")
permission = grant.get("permission", "")
if user and not grant.get("group"):
severity = "high" if permission.lower() in SENSITIVE_PERMISSIONS else "medium"
findings.append({
"rule": "direct_user_permission",
"severity": severity,
"scheme": scheme_name,
"user": user,
"message": f"User '{user}' has direct permission '{permission}' in '{scheme_name}'. "
f"Use groups instead for maintainability and audit clarity.",
})
return findings
def check_missing_restrictions(
schemes: List[Dict[str, Any]],
) -> List[Dict[str, str]]:
"""Check for missing restrictions on sensitive actions."""
findings = []
for scheme in schemes:
scheme_name = scheme.get("name", "Unknown Scheme")
grants = scheme.get("grants", [])
granted_permissions = set()
for grant in grants:
granted_permissions.add(grant.get("permission", "").lower())
# Check if delete permissions are unrestricted
delete_perms = {"delete_issues", "delete_all_comments", "delete_all_attachments"}
unrestricted_deletes = delete_perms & granted_permissions
for grant in grants:
perm = grant.get("permission", "").lower()
group = grant.get("group", "")
if perm in delete_perms and group:
# Check if granted to broad groups
broad_groups = {"users", "everyone", "all-users", "jira-users", "jira-software-users"}
if group.lower() in broad_groups:
findings.append({
"rule": "unrestricted_delete",
"severity": "critical",
"scheme": scheme_name,
"message": f"Delete permission '{perm}' granted to broad group '{group}' "
f"in '{scheme_name}'. Restrict to admins or leads only.",
})
# Check if admin permissions exist
admin_perms = {"administer_project", "administer_jira", "system_admin"}
if not (admin_perms & granted_permissions):
findings.append({
"rule": "no_admin_defined",
"severity": "medium",
"scheme": scheme_name,
"message": f"No explicit admin permission defined in '{scheme_name}'. "
f"Ensure project administration is properly assigned.",
})
return findings
def check_scheme_consistency(
schemes: List[Dict[str, Any]],
) -> List[Dict[str, str]]:
"""Check for inconsistencies across permission schemes."""
findings = []
if len(schemes) < 2:
return findings
# Compare permission sets across schemes
scheme_perms = {}
for scheme in schemes:
name = scheme.get("name", "Unknown")
perms = set()
for grant in scheme.get("grants", []):
perms.add(grant.get("permission", "").lower())
scheme_perms[name] = perms
# Find schemes with significantly different permission sets
all_perms = set()
for perms in scheme_perms.values():
all_perms |= perms
scheme_names = list(scheme_perms.keys())
for i in range(len(scheme_names)):
for j in range(i + 1, len(scheme_names)):
name_a = scheme_names[i]
name_b = scheme_names[j]
diff = scheme_perms[name_a].symmetric_difference(scheme_perms[name_b])
if len(diff) > 5:
findings.append({
"rule": "scheme_inconsistency",
"severity": "medium",
"message": f"Schemes '{name_a}' and '{name_b}' differ significantly "
f"({len(diff)} different permissions). Review for intentional differences.",
})
return findings
def check_compliance_gaps(
schemes: List[Dict[str, Any]],
) -> List[Dict[str, str]]:
"""Check for common compliance gaps."""
findings = []
for scheme in schemes:
scheme_name = scheme.get("name", "Unknown Scheme")
grants = scheme.get("grants", [])
groups_used = set()
users_used = set()
for grant in grants:
if grant.get("group"):
groups_used.add(grant["group"])
if grant.get("user"):
users_used.add(grant["user"])
# Check for separation of duties
admin_groups = set()
for grant in grants:
if grant.get("permission", "").lower() in SENSITIVE_PERMISSIONS and grant.get("group"):
admin_groups.add(grant["group"])
if len(admin_groups) == 1 and len(groups_used) > 1:
findings.append({
"rule": "separation_of_duties",
"severity": "info",
"scheme": scheme_name,
"message": f"Only one group ('{next(iter(admin_groups))}') holds all sensitive permissions "
f"in '{scheme_name}'. Consider separating duties across multiple groups.",
})
# Check user count
if len(users_used) > 5:
findings.append({
"rule": "too_many_direct_users",
"severity": "high",
"scheme": scheme_name,
"message": f"Scheme '{scheme_name}' has {len(users_used)} direct user grants. "
f"Migrate to group-based permissions for better governance.",
})
return findings
# ---------------------------------------------------------------------------
# Main Analysis
# ---------------------------------------------------------------------------
def audit_permissions(data: Dict[str, Any]) -> Dict[str, Any]:
"""Run full permission audit."""
schemes = data.get("schemes", [])
if not schemes:
# Try treating the entire input as a single scheme
if data.get("grants") or data.get("name"):
schemes = [data]
else:
return {
"risk_score": 0,
"grade": "invalid",
"error": "No permission schemes found in input",
"findings": [],
"summary": {},
}
all_findings = []
all_findings.extend(check_over_permissioned_groups(schemes))
all_findings.extend(check_direct_user_permissions(schemes))
all_findings.extend(check_missing_restrictions(schemes))
all_findings.extend(check_scheme_consistency(schemes))
all_findings.extend(check_compliance_gaps(schemes))
# Calculate risk score (higher = more risk)
summary = {"critical": 0, "high": 0, "medium": 0, "low": 0, "info": 0}
total_penalty = 0
for finding in all_findings:
severity = finding["severity"]
summary[severity] = summary.get(severity, 0) + 1
total_penalty += SEVERITY_WEIGHTS.get(severity, 0)
risk_score = min(100, total_penalty)
health_score = max(0, 100 - risk_score)
if health_score >= 85:
grade = "excellent"
elif health_score >= 70:
grade = "good"
elif health_score >= 50:
grade = "fair"
else:
grade = "poor"
# Generate remediation recommendations
remediations = _generate_remediations(all_findings)
return {
"risk_score": risk_score,
"health_score": health_score,
"grade": grade,
"schemes_analyzed": len(schemes),
"findings": all_findings,
"summary": summary,
"remediations": remediations,
}
def _generate_remediations(findings: List[Dict[str, str]]) -> List[str]:
"""Generate remediation recommendations."""
remediations = []
rules_seen = set()
for finding in findings:
rule = finding["rule"]
if rule in rules_seen:
continue
rules_seen.add(rule)
if rule == "over_permissioned_group":
remediations.append("Review and reduce sensitive permissions for over-permissioned groups. Apply principle of least privilege.")
elif rule == "admin_access_warning":
remediations.append("Audit admin group membership. Limit system/Jira admin access to essential personnel only.")
elif rule == "direct_user_permission":
remediations.append("Migrate direct user permissions to group-based grants. Create functional groups for common permission sets.")
elif rule == "unrestricted_delete":
remediations.append("Restrict delete permissions to project admins or leads. Remove from broad user groups.")
elif rule == "scheme_inconsistency":
remediations.append("Standardize permission schemes across projects. Document intentional differences.")
elif rule == "too_many_direct_users":
remediations.append("Create groups for users with direct permissions. This simplifies onboarding/offboarding.")
elif rule == "separation_of_duties":
remediations.append("Consider splitting admin responsibilities across multiple groups for better separation of duties.")
elif rule == "no_admin_defined":
remediations.append("Define explicit admin permissions in each scheme to ensure proper project governance.")
return remediations
# ---------------------------------------------------------------------------
# Output Formatting
# ---------------------------------------------------------------------------
def format_text_output(result: Dict[str, Any]) -> str:
"""Format results as readable text report."""
lines = []
lines.append("=" * 60)
lines.append("PERMISSION AUDIT REPORT")
lines.append("=" * 60)
lines.append("")
if "error" in result:
lines.append(f"ERROR: {result['error']}")
return "\n".join(lines)
lines.append("AUDIT SUMMARY")
lines.append("-" * 30)
lines.append(f"Risk Score: {result['risk_score']}/100 (lower is better)")
lines.append(f"Health Score: {result['health_score']}/100")
lines.append(f"Grade: {result['grade'].title()}")
lines.append(f"Schemes Analyzed: {result['schemes_analyzed']}")
lines.append("")
summary = result.get("summary", {})
lines.append("FINDINGS BY SEVERITY")
lines.append("-" * 30)
lines.append(f"Critical: {summary.get('critical', 0)}")
lines.append(f"High: {summary.get('high', 0)}")
lines.append(f"Medium: {summary.get('medium', 0)}")
lines.append(f"Low: {summary.get('low', 0)}")
lines.append(f"Info: {summary.get('info', 0)}")
lines.append("")
findings = result.get("findings", [])
if findings:
lines.append("DETAILED FINDINGS")
lines.append("-" * 30)
for i, finding in enumerate(findings, 1):
severity = finding["severity"].upper()
lines.append(f"{i}. [{severity}] {finding['message']}")
lines.append(f" Rule: {finding['rule']}")
if finding.get("scheme"):
lines.append(f" Scheme: {finding['scheme']}")
lines.append("")
remediations = result.get("remediations", [])
if remediations:
lines.append("REMEDIATION RECOMMENDATIONS")
lines.append("-" * 30)
for i, rem in enumerate(remediations, 1):
lines.append(f"{i}. {rem}")
return "\n".join(lines)
def format_json_output(result: Dict[str, Any]) -> Dict[str, Any]:
"""Format results as JSON."""
return result
# ---------------------------------------------------------------------------
# CLI Interface
# ---------------------------------------------------------------------------
def main() -> int:
"""Main CLI entry point."""
parser = argparse.ArgumentParser(
description="Audit Atlassian permission schemes for security issues"
)
parser.add_argument(
"permissions_file",
help="JSON file with permission scheme data",
)
parser.add_argument(
"--format",
choices=["text", "json"],
default="text",
help="Output format (default: text)",
)
args = parser.parse_args()
try:
with open(args.permissions_file, "r") as f:
data = json.load(f)
result = audit_permissions(data)
if args.format == "json":
print(json.dumps(format_json_output(result), indent=2))
else:
print(format_text_output(result))
return 0
except FileNotFoundError:
print(f"Error: File '{args.permissions_file}' not found", file=sys.stderr)
return 1
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in '{args.permissions_file}': {e}", file=sys.stderr)
return 1
except Exception as e:
print(f"Error: {e}", file=sys.stderr)
return 1
if __name__ == "__main__":
sys.exit(main())