Fix/agentic auditor metadata (#353)
* docs: fix metadata for advanced-evaluation skill

Signed-off-by: simon essien <champbreed1@gmail.com>

* docs: add missing risk and source metadata to agentic-actions-auditor

* Update SKILL.md

---------

Signed-off-by: simon essien <champbreed1@gmail.com>
@@ -1,6 +1,9 @@
 ---
 name: advanced-evaluation
 description: This skill should be used when the user asks to "implement LLM-as-judge", "compare model outputs", "create evaluation rubrics", "mitigate evaluation bias", or mentions direct scoring, pairwise comparison, position bias, evaluation pipelines, or automated quality assessment.
+risk: safe
+source: community
+date_added: 2026-03-18
 ---
 
 # Advanced Evaluation
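The hunk above adds three required metadata keys to the skill's frontmatter. A minimal sketch of how such a check could be automated, using only the standard library (the `frontmatter_keys` helper and `REQUIRED_KEYS` set are hypothetical, not part of this repository):

```python
import re

# Hypothetical required-key set, matching the fields this commit adds.
REQUIRED_KEYS = {"name", "description", "risk", "source", "date_added"}

def frontmatter_keys(text: str) -> set[str]:
    """Return the top-level keys of a ``---``-delimited frontmatter block."""
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return set()
    keys = set()
    for line in match.group(1).splitlines():
        # Top-level keys start at column 0; indented lines belong to a value.
        m = re.match(r"^(\w[\w-]*):", line)
        if m:
            keys.add(m.group(1))
    return keys

skill = """---
name: advanced-evaluation
description: Example description.
risk: safe
source: community
date_added: 2026-03-18
---

# Advanced Evaluation
"""

missing = REQUIRED_KEYS - frontmatter_keys(skill)
print(sorted(missing))  # → []
```

A check like this could run in CI to catch skills whose metadata is incomplete before a fix-up commit like this one becomes necessary.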
@@ -1,6 +1,17 @@
 ---
 name: agentic-actions-auditor
-description: Audits GitHub Actions workflows for security vulnerabilities in AI agent integrations including Claude Code Action, Gemini CLI, OpenAI Codex, and GitHub AI Inference. Detects attack vectors where attacker-controlled input reaches AI agents running in CI/CD pipelines,...
+description: >
+  Audits GitHub Actions workflows for security
+  vulnerabilities in AI agent integrations
+  including Claude Code Action,
+  Gemini CLI, OpenAI Codex, and GitHub AI
+  Inference.
+  Detects attack vectors where attacker-controlled
+  input reaches
+  AI agents running in CI/CD pipelines.
+risk: safe
+source: community
+date_added: 2026-03-18
 ---
 
 # Agentic Actions Auditor
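The `description: >` change above switches to a YAML folded block scalar, so the wrapped source lines are joined into one space-separated string when the frontmatter is parsed. A minimal stdlib-only sketch of that folding rule (simplified: it ignores blank-line paragraph breaks and the trailing-newline clipping that full YAML applies; the `fold` helper is hypothetical):

```python
def fold(lines: list[str]) -> str:
    """Minimal sketch of YAML '>' folding: consecutive non-blank
    lines are joined with single spaces."""
    return " ".join(line.strip() for line in lines if line.strip())

# The first two wrapped lines from the description block above.
wrapped = [
    "Audits GitHub Actions workflows for security",
    "vulnerabilities in AI agent integrations",
]
print(fold(wrapped))
# → Audits GitHub Actions workflows for security vulnerabilities in AI agent integrations
```

This is why the line breaks in the diff are cosmetic: a YAML consumer sees the same single-paragraph description either way.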