Fix/agentic auditor metadata (#353)

* docs: fix metadata for advanced-evaluation skill

Signed-off-by: simon essien <champbreed1@gmail.com>

* docs: add missing risk and source metadata to agentic-actions-auditor

* Update SKILL.md

---------

Signed-off-by: simon essien <champbreed1@gmail.com>
Committed by Champbreed on 2026-03-19 16:46:26 +01:00 (via GitHub)
parent 25109a85e5 · commit eb8fb302e4
2 changed files with 15 additions and 1 deletion


@@ -1,6 +1,9 @@
 ---
 name: advanced-evaluation
 description: This skill should be used when the user asks to "implement LLM-as-judge", "compare model outputs", "create evaluation rubrics", "mitigate evaluation bias", or mentions direct scoring, pairwise comparison, position bias, evaluation pipelines, or automated quality assessment.
+risk: safe
+source: community
+date_added: 2026-03-18
 ---
 # Advanced Evaluation


@@ -1,6 +1,17 @@
 ---
 name: agentic-actions-auditor
-description: Audits GitHub Actions workflows for security vulnerabilities in AI agent integrations including Claude Code Action, Gemini CLI, OpenAI Codex, and GitHub AI Inference. Detects attack vectors where attacker-controlled input reaches AI agents running in CI/CD pipelines,...
+description: >
+  Audits GitHub Actions workflows for security
+  vulnerabilities in AI agent integrations
+  including Claude Code Action, Gemini CLI,
+  OpenAI Codex, and GitHub AI Inference.
+  Detects attack vectors where attacker-controlled
+  input reaches AI agents running in
+  CI/CD pipelines.
+risk: safe
+source: community
+date_added: 2026-03-18
 ---
 # Agentic Actions Auditor
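The diff above switches the long `description` to a YAML `>` folded block scalar, which joins its indented lines with spaces when parsed, so tools reading the frontmatter still see a single-line string. A minimal sketch of that folding behavior (a simplified model for illustration, not a full YAML parser — real YAML also preserves blank lines as newlines):

```python
def fold_scalar(lines):
    # Simplified model of a YAML '>' folded scalar:
    # single line breaks between non-blank lines become spaces,
    # and the scalar ends with a trailing newline.
    return " ".join(line.strip() for line in lines) + "\n"

description = fold_scalar([
    "Audits GitHub Actions workflows for security",
    "vulnerabilities in AI agent integrations",
])
print(description)
# → Audits GitHub Actions workflows for security vulnerabilities in AI agent integrations
```

This is why the line breaks inside the folded block are purely cosmetic: the parsed `description` value is one continuous sentence either way.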