Update skill docs and resources

@@ -5,8 +5,8 @@
    "email": "daymadev89@gmail.com"
  },
  "metadata": {
-    "description": "Professional Claude Code skills for GitHub operations, document conversion, diagram generation, statusline customization, Teams communication, repomix utilities, skill creation, CLI demo generation, LLM icon access, Cloudflare troubleshooting, UI design system extraction, professional presentation creation, YouTube video downloading, secure repomix packaging, ASR transcription correction, video comparison quality analysis, comprehensive QA testing infrastructure, prompt optimization with EARS methodology, session history recovery, documentation cleanup, format-controlled deep research report generation with evidence tracking, PDF generation with Chinese font support, CLAUDE.md progressive disclosure optimization, CCPM skill registry search and management, Promptfoo LLM evaluation framework, iOS app development with XcodeGen and SwiftUI, fact-checking with automated corrections, Twitter/X content fetching, intelligent macOS disk space recovery, skill quality review and improvement, GitHub contribution strategy, complete internationalization/localization setup, plugin/skill troubleshooting with diagnostic tools, evidence-based competitor analysis with source citations, Windows Remote Desktop (AVD/W365) connection quality diagnosis with transport protocol analysis and log parsing, and Tailscale+proxy conflict diagnosis with SSH tunnel SOP for remote development",
-    "version": "1.33.0",
+    "description": "Professional Claude Code skills for GitHub operations, document conversion, diagram generation, statusline customization, Teams communication, repomix utilities, skill creation, CLI demo generation, LLM icon access, Cloudflare troubleshooting, UI design system extraction, professional presentation creation, YouTube video downloading, secure repomix packaging, ASR transcription correction, video comparison quality analysis, comprehensive QA testing infrastructure, prompt optimization with EARS methodology, session history recovery, documentation cleanup, format-controlled deep research report generation with evidence tracking, PDF generation with Chinese font support, CLAUDE.md progressive disclosure optimization, CCPM skill registry search and management, Promptfoo LLM evaluation framework, iOS app development with XcodeGen and SwiftUI, fact-checking with automated corrections, Twitter/X content fetching, intelligent macOS disk space recovery, skill quality review and improvement, GitHub contribution strategy, complete internationalization/localization setup, plugin/skill troubleshooting with diagnostic tools, evidence-based competitor analysis with source citations, Windows Remote Desktop (AVD/W365) connection quality diagnosis with transport protocol analysis and log parsing, Tailscale+proxy conflict diagnosis with SSH tunnel SOP for remote development, and multi-path parallel product analysis with cross-model test-time compute scaling",
+    "version": "1.34.0",
    "homepage": "https://github.com/daymade/claude-code-skills"
  },
  "plugins": [
@@ -786,6 +786,29 @@
      "skills": [
        "./windows-remote-desktop-connection-doctor"
      ]
    },
    {
      "name": "product-analysis",
      "description": "Multi-path parallel product analysis with cross-model test-time compute scaling. Spawns parallel agents (Claude Code + Codex CLI) for multi-perspective exploration, then synthesizes findings into actionable optimization plans. Supports self-audit, UX audit, API audit, architecture review, and competitive benchmarking via competitors-analysis skill",
      "source": "./",
      "strict": false,
      "version": "1.0.0",
      "category": "productivity",
      "keywords": [
        "product-analysis",
        "self-review",
        "ux-audit",
        "parallel-agents",
        "cross-model",
        "test-time-compute",
        "codex",
        "synthesis",
        "product-audit",
        "information-architecture"
      ],
      "skills": [
        "./product-analysis"
      ]
    }
  ]
}
@@ -3,7 +3,6 @@ name: competitors-analysis
description: Analyze competitor repositories with evidence-based approach. Use when tracking competitors, creating competitor profiles, or generating competitive analysis. CRITICAL - all analysis must be based on actual cloned code, never assumptions. Triggers include "analyze competitor", "add competitor", "competitive analysis", or "竞品分析".
context: fork
agent: general-purpose
-allowed-tools: Read, Grep, Glob, Bash(git *), Bash(mkdir *), Bash(ls *), Bash(wc *)
argument-hint: [product-name] [competitor-url]
---

@@ -1,6 +1,6 @@
---
name: developing-ios-apps
-description: Develops iOS applications with XcodeGen, SwiftUI, and SPM. Triggers on XcodeGen project.yml configuration, SPM dependency issues, device deployment problems, code signing errors, camera/AVFoundation debugging, iOS version compatibility, or "Library not loaded @rpath" framework errors. Use when building iOS apps, fixing Xcode build failures, or deploying to real devices.
+description: Develops iOS/macOS applications with XcodeGen, SwiftUI, and SPM. Handles Apple Developer signing, notarization, and CI/CD pipelines. Triggers on XcodeGen project.yml, SPM dependency issues, device deployment, code signing errors (Error -25294, keychain mismatch, adhoc fallback, EMFILE, notarization credential conflict, continueOnError), camera/AVFoundation debugging, iOS version compatibility, "Library not loaded @rpath", Electron @electron/osx-sign/@electron/notarize config, notarytool, GitHub Actions secrets in conditionals, or certificate/provisioning problems. Use when building iOS/macOS apps, fixing Xcode build failures, deploying to real devices, or configuring CI/CD signing pipelines.
---

# iOS App Development

@@ -15,6 +15,9 @@ Build, configure, and deploy iOS applications using XcodeGen and Swift Package M
| `xcodegen generate` loses signing | Overwrites project settings | Configure in `project.yml` target settings, not global |
| Command-line signing fails | Free Apple ID limitation | Use Xcode GUI or paid developer account ($99/yr) |
| "Cannot be set when automaticallyAdjustsVideoMirroring is YES" | Setting `isVideoMirrored` without disabling automatic | Set `automaticallyAdjustsVideoMirroring = false` first. See [Camera](#camera--avfoundation) |
| App signed as adhoc despite certificate | `@electron/packager` defaults `continueOnError: true` | Set `continueOnError: false` in osxSign. See [Code Signing](#macos-code-signing--notarization) |
| "Cannot use password credentials, API key credentials..." | Passing `teamId` to `@electron/notarize` with API key auth | **Remove `teamId`**. `notarytool` infers team from API key. See [Code Signing](#macos-code-signing--notarization) |
| EMFILE during signing (large embedded runtime) | `@electron/osx-sign` traverses all files in .app bundle | Add `ignore` filter + `ulimit -n 65536` in CI. See [Code Signing](#macos-code-signing--notarization) |

## Quick Reference

@@ -297,9 +300,73 @@ Filter in Console.app by subsystem.

**For detailed camera implementation**: See [references/camera-avfoundation.md](references/camera-avfoundation.md)

## macOS Code Signing & Notarization

For distributing macOS apps (Electron or native) outside the App Store, signing + notarization is required. Without it, users see "Apple cannot check this app for malicious software."

**5-step checklist:**

| Step | What | Critical detail |
|------|------|-----------------|
| 1 | Create CSR in Keychain Access | Common Name doesn't matter; choose "Saved to disk" |
| 2 | Request **Developer ID Application** cert at developer.apple.com | Choose **G2 Sub-CA** (not Previous Sub-CA) |
| 3 | Install `.cer` → must choose **`login` keychain** | iCloud/System → Error -25294 (private key mismatch) |
| 4 | Export P12 from `login` keychain with password | Base64: `base64 -i cert.p12 \| pbcopy` |
| 5 | Create App Store Connect API Key (Developer role) | Download `.p8` once only; record Key ID + Issuer ID |

**GitHub Secrets required (5 secrets):**

| Secret | Source |
|--------|--------|
| `MACOS_CERT_P12` | Step 4 base64 |
| `MACOS_CERT_PASSWORD` | Step 4 password |
| `APPLE_API_KEY` | Step 5 `.p8` base64 |
| `APPLE_API_KEY_ID` | Step 5 Key ID |
| `APPLE_API_ISSUER` | Step 5 Issuer ID |

> **`APPLE_TEAM_ID` is NOT needed.** `notarytool` infers the team from the API key. Passing `teamId` to `@electron/notarize` v2.5.0 causes a credential conflict error.

**Electron Forge osxSign critical settings:**

```typescript
osxSign: {
  identity: 'Developer ID Application',
  hardenedRuntime: true,
  entitlements: 'entitlements.mac.plist',
  entitlementsInherit: 'entitlements.mac.plist',
  continueOnError: false, // CRITICAL: default is true, silently falls back to adhoc
  // Skip non-binary files in large embedded runtimes (prevents EMFILE)
  ignore: (filePath: string) => {
    if (!filePath.includes('python-runtime')) return false;
    if (/\.(so|dylib|node)$/.test(filePath)) return false;
    return true;
  },
  // CI: explicitly specify keychain (apple-actions/import-codesign-certs uses signing_temp.keychain)
  ...(process.env.MACOS_SIGNING_KEYCHAIN
    ? { keychain: process.env.MACOS_SIGNING_KEYCHAIN }
    : {}),
},
```

**Fail-fast three-layer defense:**

1. `@electron/osx-sign`: `continueOnError: false` — signing errors throw immediately
2. `postPackage` hook: `codesign --verify --deep --strict` + adhoc detection
3. Release trigger script: verify local HEAD matches remote before dispatch

**Verify signing:**

```bash
security find-identity -v -p codesigning | grep "Developer ID Application"
```

For the complete step-by-step guide, entitlements, workflow examples, and full troubleshooting (7 real-world errors with root causes): **[references/apple-codesign-notarize.md](references/apple-codesign-notarize.md)**

---

## Resources

- [references/xcodegen-full.md](references/xcodegen-full.md) - Complete project.yml options
- [references/swiftui-compatibility.md](references/swiftui-compatibility.md) - iOS version API differences
- [references/camera-avfoundation.md](references/camera-avfoundation.md) - Camera preview debugging
- [references/testing-mainactor.md](references/testing-mainactor.md) - Testing @MainActor classes (state machines, regression tests)
- [references/apple-codesign-notarize.md](references/apple-codesign-notarize.md) - Apple Developer signing + notarization for macOS/Electron CI/CD
iOS-APP-developer/references/apple-codesign-notarize.md (new file, 354 lines)
@@ -0,0 +1,354 @@
# Apple Code Signing + Notarization Guide

For macOS desktop apps (Electron or native) distributed outside the App Store. Without signing + notarization, users see "Apple cannot check this app for malicious software."

---

## Prerequisites

- Apple Developer Program ($99/year)
- Record your **Team ID** (developer.apple.com → Account → Membership Details)

---

## Step 1: Create Developer ID Application Certificate

> **Developer ID Application** = distribution outside the App Store (DMG/ZIP).
> **Mac App Distribution** = App Store only.

### 1a. Generate CSR

Keychain Access → Certificate Assistant → **Request a Certificate from a Certificate Authority**:

| Field | Value |
|-------|-------|
| User Email Address | Apple Developer email |
| Common Name | Anything (Apple overrides this) |
| CA Email Address | Leave empty |
| Request is | **Saved to disk** |

### 1b. Request Certificate

1. Go to [developer.apple.com/account/resources/certificates/add](https://developer.apple.com/account/resources/certificates/add)
2. Select **Developer ID Application**
3. Choose **G2 Sub-CA (Xcode 11.4.1 or later)** (not Previous Sub-CA)
4. Upload the CSR, download the `.cer`

### 1c. Install to Keychain

Double-click the `.cer` → **must choose the `login` keychain**. Choosing iCloud/System causes Error -25294 (private key not in the same keychain).

### 1d. Verify

```bash
security find-identity -v -p codesigning | grep "Developer ID Application"
```

---

## Step 2: Export P12 (for CI)

1. Keychain Access → My Certificates → find `Developer ID Application: ...`
2. Right-click → Export → `.p12` format → set a strong password
3. Convert to base64:

```bash
base64 -i ~/Desktop/codesign.p12 | pbcopy
```

---

## Step 3: Create App Store Connect API Key (for notarization)

> An API Key avoids 2FA prompts in CI. Apple's recommended approach for automation.

1. Go to [appstoreconnect.apple.com/access/integrations/api](https://appstoreconnect.apple.com/access/integrations/api)
2. Generate an API Key (Access: **Developer**)
3. Download the `.p8` (one-time only)
4. Record the **Key ID** (10 chars) and **Issuer ID** (UUID)

```bash
base64 -i ~/Downloads/AuthKey_KEYID.p8 | pbcopy
```

---

## Step 4: Configure GitHub Secrets

**5 secrets required** (secret names must exactly match workflow references):

| Secret | Source |
|--------|--------|
| `MACOS_CERT_P12` | Step 2 base64 |
| `MACOS_CERT_PASSWORD` | Step 2 password |
| `APPLE_API_KEY` | Step 3 `.p8` base64 |
| `APPLE_API_KEY_ID` | Step 3 Key ID |
| `APPLE_API_ISSUER` | Step 3 Issuer ID |

> **`APPLE_TEAM_ID` is NOT needed and MUST NOT be passed.** `@electron/notarize` v2.5.0's `isNotaryToolPasswordCredentials()` checks `teamId !== undefined`. Passing `teamId` alongside API key credentials triggers: "Cannot use password credentials, API key credentials and keychain credentials at once." `notarytool` infers the team from the API key automatically.

### Setting secrets via gh CLI

```bash
# Short values
gh secret set MACOS_CERT_PASSWORD --body 'your-password' --repo owner/repo
gh secret set APPLE_API_KEY_ID --body 'KEYIDHERE' --repo owner/repo
gh secret set APPLE_API_ISSUER --body 'uuid-here' --repo owner/repo

# Long base64 values — use a temp file to avoid zsh glob expansion errors
printf '%s' '<base64>' > /tmp/p12.txt
gh secret set MACOS_CERT_P12 < /tmp/p12.txt --repo owner/repo && rm /tmp/p12.txt

printf '%s' '<base64>' > /tmp/apikey.txt
gh secret set APPLE_API_KEY < /tmp/apikey.txt --repo owner/repo && rm /tmp/apikey.txt
```

> **Dual-repo architecture**: If using a private dev repo + a public release repo, set secrets on both repos separately.

### Verify

```bash
gh secret list --repo owner/repo
```
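The "names must exactly match" warning above can be checked mechanically, not just by eye. A minimal sketch of that cross-check follows; the two name lists are stubbed in for illustration (in a real repo you would populate them from the workflow YAML and from `gh secret list`):

```shell
# Compare the secrets a workflow references against those actually set on the repo.
# Stubbed inputs; in practice, something like:
#   referenced=$(grep -ho 'secrets\.[A-Z_0-9]*' .github/workflows/*.yml | sort -u | cut -d. -f2)
#   configured=$(gh secret list --repo owner/repo | awk '{print $1}')
referenced="MACOS_CERT_P12 MACOS_CERT_PASSWORD APPLE_API_KEY APPLE_API_KEY_ID APPLE_API_ISSUER"
configured="MACOS_CERT_P12 MACOS_CERT_PASSWORD APPLE_API_KEY APPLE_API_KEY_ID"

missing=""
for name in $referenced; do
  case " $configured " in
    *" $name "*) ;;                  # set on the repo
    *) missing="$missing $name" ;;   # referenced but never set: the guarded CI steps silently skip
  esac
done

if [ -n "$missing" ]; then
  echo "unset secrets:$missing"
fi
```

Any name printed here is exactly the failure mode in the troubleshooting table below: the `if:` guards evaluate false and the signing steps are skipped without an error.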

---

## Step 5: Electron Forge Configuration

### osxSign (signing)

```typescript
const SHOULD_CODESIGN = process.env.FLOWZERO_CODESIGN === '1';
const SHOULD_NOTARIZE = process.env.FLOWZERO_NOTARIZE === '1';
const CODESIGN_IDENTITY = process.env.CODESIGN_IDENTITY || 'Developer ID Application';

// In packagerConfig:
...(SHOULD_CODESIGN ? {
  osxSign: {
    identity: CODESIGN_IDENTITY,
    hardenedRuntime: true,
    entitlements: 'entitlements.mac.plist',
    entitlementsInherit: 'entitlements.mac.plist',
    // CRITICAL: @electron/packager defaults continueOnError to true,
    // which silently swallows ALL signing failures and falls back to adhoc.
    continueOnError: false,
    // Skip non-binary files in large embedded runtimes (e.g. Python).
    // Without this, osx-sign traverses 50k+ files → EMFILE errors.
    // Native .so/.dylib/.node binaries are still signed.
    ignore: (filePath: string) => {
      if (!filePath.includes('python-runtime')) return false;
      if (/\.(so|dylib|node)$/.test(filePath)) return false;
      return true;
    },
    // CI: apple-actions/import-codesign-certs@v3 imports to signing_temp.keychain,
    // but @electron/osx-sign searches the system keychain by default.
    ...(process.env.MACOS_SIGNING_KEYCHAIN
      ? { keychain: process.env.MACOS_SIGNING_KEYCHAIN }
      : {}),
  },
} : {}),
```

### osxNotarize (notarization)

```typescript
...(SHOULD_NOTARIZE ? {
  osxNotarize: {
    tool: 'notarytool',
    appleApiKey: process.env.APPLE_API_KEY_PATH,
    appleApiKeyId: process.env.APPLE_API_KEY_ID,
    appleApiIssuer: process.env.APPLE_API_ISSUER,
    // NOTE: Do NOT pass teamId. See the Step 4 explanation above.
  },
} : {}),
```

### postPackage Fail-Fast Verification

Add `codesign --verify --deep --strict` + adhoc detection in the `postPackage` hook:

```typescript
import { execSync } from 'child_process';
import * as fs from 'fs';
import * as path from 'path';

// In postPackage hook:
if (SHOULD_CODESIGN && process.platform === 'darwin') {
  const appDir = fs.readdirSync(buildPath).find(e => e.endsWith('.app'));
  if (!appDir) throw new Error('CODESIGN FAIL-FAST: No .app bundle found');
  const appPath = path.join(buildPath, appDir);

  // 1. Verify signature is valid
  try {
    execSync(`codesign --verify --deep --strict "${appPath}"`, { stdio: 'pipe' });
  } catch (e) {
    const stderr = (e as { stderr?: Buffer })?.stderr?.toString() || '';
    throw new Error(`CODESIGN FAIL-FAST: Verification failed.\n  ${stderr}`);
  }

  // 2. Check it's NOT adhoc
  const info = execSync(`codesign -dvv "${appPath}" 2>&1`, { encoding: 'utf-8' });
  if (info.includes('Signature=adhoc')) {
    throw new Error('CODESIGN FAIL-FAST: App has adhoc signature! Signing silently failed.');
  }

  const authority = info.match(/Authority=(.+)/);
  if (authority) console.log(`Signed by: ${authority[1]}`);
}
```

### entitlements.mac.plist (Electron + Python)

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>com.apple.security.app-sandbox</key>
  <false/>
  <key>com.apple.security.cs.allow-unsigned-executable-memory</key>
  <true/>
  <key>com.apple.security.cs.disable-library-validation</key>
  <true/>
  <key>com.apple.security.cs.allow-jit</key>
  <true/>
  <key>com.apple.security.device.microphone</key>
  <true/>
  <key>com.apple.security.network.client</key>
  <true/>
</dict>
</plist>
```

---

## Step 6: GitHub Actions Workflow

### Key pattern: secrets in step `if:` conditions

The `secrets.*` context **cannot** be used directly in a step `if:`. Use `env:` intermediate variables:

```yaml
# WRONG — causes HTTP 422: "Unrecognized named-value: 'secrets'"
- name: Import certs
  if: ${{ secrets.MACOS_CERT_P12 != '' }}

# CORRECT — use an env: intermediate variable
- name: Import certs
  if: ${{ env.HAS_CERT == 'true' }}
  env:
    HAS_CERT: ${{ secrets.MACOS_CERT_P12 != '' }}
```

### Complete workflow example

```yaml
- name: Import Apple certificates
  if: ${{ env.HAS_CERT == 'true' }}
  uses: apple-actions/import-codesign-certs@v3
  with:
    p12-file-base64: ${{ secrets.MACOS_CERT_P12 }}
    p12-password: ${{ secrets.MACOS_CERT_PASSWORD }}
  env:
    HAS_CERT: ${{ secrets.MACOS_CERT_P12 != '' }}

- name: Verify signing identity
  if: ${{ env.HAS_CERT == 'true' }}
  run: security find-identity -v -p codesigning | grep "Developer ID"
  env:
    HAS_CERT: ${{ secrets.MACOS_CERT_P12 != '' }}

- name: Prepare API key
  if: ${{ env.HAS_API_KEY == 'true' }}
  run: |
    set -euo pipefail
    if [[ "$APPLE_API_KEY" == *"BEGIN PRIVATE KEY"* ]]; then
      printf "%s" "$APPLE_API_KEY" > /tmp/AuthKey.p8
    else
      echo "$APPLE_API_KEY" | base64 --decode > /tmp/AuthKey.p8
    fi
  env:
    HAS_API_KEY: ${{ secrets.APPLE_API_KEY != '' }}
    APPLE_API_KEY: ${{ secrets.APPLE_API_KEY }}

- name: Build & sign
  env:
    FLOWZERO_CODESIGN: ${{ secrets.MACOS_CERT_P12 != '' && '1' || '' }}
    FLOWZERO_NOTARIZE: ${{ secrets.APPLE_API_KEY != '' && '1' || '' }}
    APPLE_API_KEY_PATH: /tmp/AuthKey.p8
    APPLE_API_KEY_ID: ${{ secrets.APPLE_API_KEY_ID }}
    APPLE_API_ISSUER: ${{ secrets.APPLE_API_ISSUER }}
    # NOTE: APPLE_TEAM_ID intentionally omitted — notarytool infers it from the API key
  run: |
    ulimit -n 65536  # Prevent EMFILE when signing large app bundles
    pnpm run forge:make -- --arch=arm64
```

---

## Fail-Fast Three-Layer Defense

Signing can fail silently in many ways. This architecture ensures any failure is caught immediately:

| Layer | Mechanism | What it catches |
|-------|-----------|-----------------|
| 1. `@electron/osx-sign` | `continueOnError: false` | Signing errors (EMFILE, cert not found, timestamp failures) |
| 2. `postPackage` hook | `codesign --verify --deep --strict` + adhoc detection | Silent signing failures, unexpected adhoc fallback |
| 3. Release trigger | Verify local HEAD matches remote branch | Stale code reaching CI (SHA vs branch name issue) |

### Release trigger script pattern

```bash
# Send the branch name, NOT the commit SHA
BRANCH=$(git rev-parse --abbrev-ref HEAD)
LOCAL_HEAD=$(git rev-parse HEAD)
REMOTE_HEAD=$(git ls-remote origin "$BRANCH" 2>/dev/null | awk '{print $1}')

if [[ "$LOCAL_HEAD" != "$REMOTE_HEAD" ]]; then
  echo "FAIL-FAST: Local HEAD does not match remote!"
  echo "  Local:  $LOCAL_HEAD"
  echo "  Remote: $REMOTE_HEAD"
  echo "  Push first: git push origin $BRANCH"
  exit 1
fi

# Dispatch with the branch name (not the SHA)
gh api repos/OWNER/REPO/dispatches -f event_type=release -f 'client_payload[ref]='"$BRANCH"
```

> **Why branch name, not SHA**: `actions/checkout` uses `refs/heads/<ref>*` glob matching for shallow clones. Commit SHAs don't match this pattern and cause checkout failure.

---

## Troubleshooting

| Symptom | Root Cause | Fix |
|---------|-----------|-----|
| App signed as adhoc despite certificate configured | `@electron/packager` defaults `continueOnError: true` in `createSignOpts()` (mac.js line 402-404). Signing error was silently swallowed. | Set `continueOnError: false` in osxSign config |
| "Cannot use password credentials, API key credentials and keychain credentials at once" | `@electron/notarize` v2.5.0 `isNotaryToolPasswordCredentials()` checks `teamId !== undefined`. Passing `teamId` with API key = credential conflict. | Remove `teamId` from osxNotarize config. `notarytool` infers team from API key. |
| EMFILE: too many open files | `@electron/osx-sign` `walkAsync()` traverses ALL files in .app. Large embedded runtimes (Python: 51k+ files) exhaust file descriptors. | Add `ignore` filter to skip non-binary files + `ulimit -n 65536` in CI |
| CI signing: cert not found | `apple-actions/import-codesign-certs@v3` imports to `signing_temp.keychain`, but osx-sign searches the system keychain. | Pass `keychain: process.env.MACOS_SIGNING_KEYCHAIN` in osxSign |
| Install .cer: Error -25294 | Certificate imported to wrong keychain (iCloud/System). Private key from CSR is in `login` keychain. | Re-import `.cer` choosing `login` keychain |
| `security find-identity` shows nothing | Private key and certificate in different keychains | Ensure CSR private key and imported cert are both in `login` keychain |
| CI step `if:` with secrets → HTTP 422 | `secrets.*` context not available in step `if:` conditions | Use `env:` intermediate variable pattern (see workflow section) |
| CI checkout fails: "git failed with exit code 1" | `actions/checkout` shallow clone can't resolve commit SHA as ref | Send branch name (not SHA) in `repository_dispatch`. Verify local HEAD matches remote before dispatch. |
| CI signing steps silently skipped | Secret names don't match workflow `secrets.XXX` references | `gh secret list` and compare against all `secrets.` references in workflow YAML |
| "The timestamp service is not available" | Apple's timestamp server intermittently unavailable during codesign | Retry the build. The `ignore` filter reduces files needing timestamps, lowering failure probability. |
| Notarization: "Could not find valid private key" | `.p8` file base64 decoded incorrectly | Verify: `echo "$APPLE_API_KEY" \| base64 --decode \| head -1` should show `-----BEGIN PRIVATE KEY-----` |
| zsh `permission denied` piping long base64 | Shell interprets base64 special chars as glob | Use temp file + `<` redirect: `gh secret set NAME < /tmp/file.txt` |
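The private-key row in the table can be sanity-checked locally before touching CI. A small sketch with a fake, illustrative key (a real `.p8` should never appear in a script), round-tripping it the same way the workflow's "Prepare API key" step decodes the `APPLE_API_KEY` secret:

```shell
# Fake PEM body, for illustration only — not a real key.
fake_p8='-----BEGIN PRIVATE KEY-----
MIGTAgEAMBMGByqGSM49AgEGCCqGSM49AwEH
-----END PRIVATE KEY-----'

# Encode as it would be stored in the GitHub secret.
encoded=$(printf '%s' "$fake_p8" | base64)

# A correctly stored secret decodes back to a PEM header; anything else
# is the "Could not find valid private key" failure above.
first_line=$(printf '%s' "$encoded" | base64 --decode | head -n 1)
echo "$first_line"
```

If the last line printed is not `-----BEGIN PRIVATE KEY-----`, the secret was mangled (double-encoded, truncated, or pasted through a glob-expanding shell).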

---

## Local Verification (without notarization)

```bash
# Sign only (fast verification that the certificate works)
FLOWZERO_CODESIGN=1 pnpm run forge:make

# Verify signature
codesign --verify --deep --strict path/to/App.app

# Check signing authority
codesign -dvv path/to/App.app 2>&1 | grep Authority

# Gatekeeper assessment
spctl --assess --type exec path/to/App.app
```
||||||
@@ -14,19 +14,24 @@ Intelligently analyze macOS disk usage and provide actionable cleanup recommenda
|
|||||||
## Core Principles

1. **Safety First, Never Bypass**: NEVER execute dangerous commands (`rm -rf`, `mo clean`, etc.) without explicit user confirmation. No shortcuts, no workarounds.
2. **Precision Deletion Only**: Delete by specifying exact object IDs/names. Never use batch prune commands.
3. **Every Object Listed**: Reports must show every specific image, volume, container — not just "12 GB of unused images".
4. **Value Over Vanity**: Your goal is NOT to maximize cleaned space. Your goal is to identify what is **truly useless** vs **valuable cache**. Clearing 50GB of useful cache just to show a big number is harmful.
5. **Network Environment Awareness**: Many users (especially in China) have slow/unreliable internet. Re-downloading caches can take hours. A cache that saves 30 minutes of download time is worth keeping.
6. **Impact Analysis Required**: Every cleanup recommendation MUST include a "what happens if deleted" column. Never just list items without explaining consequences.
7. **Double-Check Before Delete**: Verify each Docker object with independent cross-checks before deletion (see Step 2C).
8. **Patience Over Speed**: Disk scans can take 5-10 minutes. NEVER interrupt or skip slow operations. Report progress to the user regularly.
9. **User Executes Cleanup**: After analysis, provide the cleanup command for the user to run themselves. Do NOT auto-execute cleanup.
10. **Conservative Defaults**: When in doubt, don't delete. Err on the side of caution.

**ABSOLUTE PROHIBITIONS:**
- ❌ NEVER use `docker image prune`, `docker volume prune`, `docker system prune`, or ANY prune-family command (exception: `docker builder prune` is safe — build cache contains only intermediate layers, never user data)
- ❌ NEVER use `docker container prune` — stopped containers may be restarted at any time
- ❌ NEVER run `rm -rf` on user directories without explicit confirmation
- ❌ NEVER run `mo clean` without `--dry-run` preview first
- ❌ NEVER skip analysis steps to save time
- ❌ NEVER append `--help` to Mole commands (only `mo --help` is safe)
- ❌ NEVER present cleanup reports with only categories — every object must be individually listed
- ❌ NEVER recommend deleting useful caches just to inflate cleanup numbers

## Workflow Decision Tree

@@ -321,6 +326,102 @@ docker volume rm ragflow_mysql_data ragflow_redis_data
**Safety level**: 🟢 Homebrew/npm cleanup, 🔴 Docker volumes require per-project confirmation

### Step 2A: Docker Deep Analysis

Use an agent team to analyze Docker resources in parallel for comprehensive coverage:

**Agent 1 — Images**:
```bash
# List all images sorted by size
docker images --format "table {{.ID}}\t{{.Repository}}:{{.Tag}}\t{{.Size}}\t{{.CreatedSince}}" | sort -k3 -h -r

# Identify dangling images (no tag)
docker images -f "dangling=true" --format "{{.ID}}\t{{.Size}}\t{{.CreatedSince}}"

# For each image, check if any container references it
docker ps -a --filter "ancestor=<IMAGE_ID>" --format "{{.Names}}\t{{.Status}}"
```

**Agent 2 — Containers and Volumes**:
```bash
# All containers with status
docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Size}}"

# All volumes with size
docker system df -v | grep -A 1000 "VOLUME NAME"

# Identify dangling volumes
docker volume ls -f dangling=true

# For each volume, check which container uses it
docker ps -a --filter "volume=<VOLUME_NAME>" --format "{{.Names}}"
```

**Agent 3 — System Level**:
```bash
# Docker disk usage summary
docker system df

# Build cache
docker buildx du

# Container logs size
for c in $(docker ps -a --format "{{.Names}}"); do
  echo "$c: $(docker inspect --format='{{.LogPath}}' "$c" | xargs ls -lh 2>/dev/null | awk '{print $5}')"
done
```

**Version Management Awareness**: Identify version-managed images (e.g., Supabase images managed by the CLI). When newer versions are confirmed running, older versions are safe to remove. Pay attention to Docker Compose naming conventions (dash vs underscore).

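Agent 3's per-container log listing can be folded into a single total with a small helper (a sketch; the function name is ours, and it assumes the `docker` CLI is on PATH):

```shell
# Illustrative helper (name is ours): sum the sizes of all container log files.
# Uses wc -c for portability across GNU and macOS userlands.
container_logs_total_bytes() {
  total=0
  for c in $(docker ps -aq); do
    log=$(docker inspect --format='{{.LogPath}}' "$c") || continue
    if [ -f "$log" ]; then
      total=$((total + $(wc -c < "$log")))
    fi
  done
  echo "$total"
}
```

Report the total alongside the per-container breakdown so the user can judge whether log cleanup is worth the effort.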
### Step 2B: OrbStack-Specific Analysis

OrbStack users have additional considerations.

**data.img.raw is a Sparse File**:
```bash
# Logical size (can show 8TB+, meaningless)
ls -lh ~/Library/OrbStack/data/data.img.raw

# Actual disk usage (this is what matters)
du -h ~/Library/OrbStack/data/data.img.raw
```

The logical vs actual size difference is normal. Only actual usage counts.

**Post-Cleanup: Reclaim Disk Space**: After cleaning Docker objects inside OrbStack, `data.img.raw` does NOT shrink automatically. Instruct user: Open OrbStack Settings → "Reclaim disk space" to compact the sparse file.

**OrbStack Logs**: Typically 1-2 MB total (`~/Library/OrbStack/log/`). Not worth cleaning.

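The sparse-file effect is easy to demonstrate with a throwaway file (illustrative only, not OrbStack-specific):

```shell
# Create a file with 1 GB logical size but no allocated blocks
dd if=/dev/zero of=/tmp/sparse_demo.img bs=1 count=0 seek=1G 2>/dev/null

ls -lh /tmp/sparse_demo.img   # logical size: 1.0G
du -h  /tmp/sparse_demo.img   # actual usage: (almost) zero

rm /tmp/sparse_demo.img
```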
### Step 2C: Double-Check Verification Protocol

Before deleting ANY Docker object, perform independent verification.

**For Images**:
```bash
# Verify no container (running or stopped) references the image
docker ps -a --filter "ancestor=<IMAGE_ID>" --format "{{.Names}}\t{{.Status}}"

# If empty → safe to delete with: docker rmi <IMAGE_ID>
```

**For Volumes**:
```bash
# Verify no container mounts the volume
docker ps -a --filter "volume=<VOLUME_NAME>" --format "{{.Names}}"

# If empty → check if database volume (see below)
# If not database → safe to delete with: docker volume rm <VOLUME_NAME>
```

**Database Volume Red Flag Rule**: If volume name contains mysql, postgres, redis, mongo, or mariadb, MANDATORY content inspection:
```bash
# Inspect database volume contents with temporary container
docker run --rm -v <VOLUME_NAME>:/data alpine ls -la /data
docker run --rm -v <VOLUME_NAME>:/data alpine du -sh /data/*
```

Only delete after user confirms the data is not needed.

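The checks above can be wrapped into small predicates (a sketch; the helper names are ours) so both verifications run before any delete command is even printed:

```shell
# Illustrative predicates (names are ours) wrapping the verification checks above.

image_is_unreferenced() {
  # True only if no container (running or stopped) references the image
  [ -z "$(docker ps -a --filter "ancestor=$1" --format '{{.Names}}')" ]
}

volume_is_unmounted() {
  # True only if no container mounts the volume
  [ -z "$(docker ps -a --filter "volume=$1" --format '{{.Names}}')" ]
}

is_database_volume() {
  # True if the volume name triggers the red-flag rule (mandatory inspection)
  case "$1" in
    *mysql*|*postgres*|*redis*|*mongo*|*mariadb*) return 0 ;;
    *) return 1 ;;
  esac
}
```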
## Step 3: Integration with Mole

**Mole** (https://github.com/tw93/Mole) is a **command-line interface (CLI)** tool for comprehensive macOS cleanup. It provides interactive terminal-based analysis and cleanup for caches, logs, developer tools, and more.

@@ -625,6 +726,34 @@ Items marked 🟡 require your judgment based on usage patterns.
Items marked 🔴 require explicit confirmation per-item.
```

### Docker Report: Required Object-Level Detail

Docker reports MUST list every individual object, not just categories:

```markdown
#### Dangling Images (no tag, no container references)
| Image ID | Size | Created | Safe? |
|----------|------|---------|-------|
| a02c40cc28df | 884 MB | 2 months ago | ✅ No container uses it |
| 555434521374 | 231 MB | 3 months ago | ✅ No container uses it |

#### Stopped Containers
| Name | Image | Status | Size |
|------|-------|--------|------|
| ragflow-mysql | mysql:8.0 | Exited 2 weeks ago | 1.2 GB |

#### Volumes
| Volume | Size | Mounted By | Contains |
|--------|------|------------|----------|
| ragflow_mysql_data | 1.8 GB | ragflow-mysql | MySQL databases |
| redis_data | 500 MB | (none - dangling) | Redis dump |

#### 🔴 Database Volumes Requiring Inspection
| Volume | Inspected Contents | User Decision |
|--------|--------------------|---------------|
| ragflow_mysql_data | 8 databases, 45 tables | Still need? |
```

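The dangling-image rows can be generated straight from `docker images` (a sketch; the helper name is ours, and the last column still needs the per-image container check before it may say ✅):

```shell
# Illustrative helper (name is ours): emit markdown table rows for dangling images.
dangling_image_rows() {
  docker images -f "dangling=true" \
    --format '| {{.ID}} | {{.Size}} | {{.CreatedSince}} | pending check |'
}
```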
## High-Quality Report Template

After multi-layer exploration, present findings using this proven template:

@@ -716,7 +845,7 @@ After multi-layer exploration, present findings using this proven template:
---

### ✅ Recommended Actions

**Immediately executable** (no confirmation needed):
```bash
@@ -907,7 +1036,33 @@ Breakdown:
- Enable "Empty Trash Automatically" in Finder preferences
```

## Bonus: Dockerfile Optimization Discoveries

During image analysis, if you discover oversized images, suggest multi-stage build optimization:

```dockerfile
# Before: 884 MB (full build environment in final image)
FROM node:20
WORKDIR /app
COPY . .
RUN npm ci && npm run build
CMD ["node", "dist/index.js"]
```

```dockerfile
# After: ~150 MB (only runtime in final image)
FROM node:20 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

FROM node:20-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
CMD ["node", "dist/index.js"]
```

Key techniques: multi-stage builds, slim/alpine base images, `.dockerignore`, layer ordering.

## ⚠️ Safety Guidelines

### Always Preserve

@@ -919,7 +1074,7 @@ Never delete these without explicit user instruction:
- SSH keys, credentials, certificates
- Time Machine backups

### ⚠️ Require Sudo Confirmation

These operations require elevated privileges. Ask user to run commands manually:
- Clearing `/Library/Caches` (system-wide)
@@ -936,7 +1091,7 @@ Please run this command manually:
⚠️ You'll be asked for your password.
```

### 💡 Backup Recommendation

Before executing any cleanup >10GB, recommend:

@@ -165,74 +165,87 @@ rm -rf ~/Library/Logs/*

### Docker

**ABSOLUTE RULE**: NEVER use any `prune` command (`docker image prune`, `docker volume prune`, `docker system prune`, `docker container prune`). Always delete by specifying exact object IDs or names.

#### Images

**What it is**: Container images (base OS + application layers)

**Safety**: 🟡 **Requires per-image verification**

**Analysis**:
```bash
# List all images sorted by size
docker images --format "table {{.ID}}\t{{.Repository}}:{{.Tag}}\t{{.Size}}\t{{.CreatedSince}}" | sort -k3 -h -r

# Identify dangling images
docker images -f "dangling=true" --format "{{.ID}}\t{{.Size}}\t{{.CreatedSince}}"

# For EACH image, verify no container references it
docker ps -a --filter "ancestor=<IMAGE_ID>" --format "{{.Names}}\t{{.Status}}"
```

**Cleanup** (only after per-image verification):
```bash
# Remove specific images by ID
docker rmi a02c40cc28df 555434521374 f471137cd508
```

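A guarded wrapper (a sketch; the function name is ours) makes it impossible to skip the verification step:

```shell
# Illustrative guard (name is ours): delete an image only when the
# container-reference check comes back empty.
remove_image_if_unused() {
  refs=$(docker ps -a --filter "ancestor=$1" --format '{{.Names}}')
  if [ -n "$refs" ]; then
    echo "SKIP $1 (still referenced by: $refs)"
    return 1
  fi
  docker rmi "$1"
}
```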
#### Containers

**What it is**: Running or stopped container instances

**Safety**: 🟡 **Stopped containers may be restarted -- verify with user**

**Analysis**:
```bash
docker ps -a --format "table {{.Names}}\t{{.Image}}\t{{.Status}}\t{{.Size}}"
```

**Cleanup** (only after user confirms each container/project):
```bash
# Remove specific containers by name
docker rm container-name-1 container-name-2
```

#### Volumes

**What it is**: Persistent data storage for containers

**Safety**: 🔴 **CAUTION - May contain databases, user uploads, and irreplaceable data**

**Analysis**:
```bash
# List all volumes
docker volume ls

# Check which container uses each volume
docker ps -a --filter "volume=<VOLUME_NAME>" --format "{{.Names}}\t{{.Status}}"

# CRITICAL: For database volumes (mysql, postgres, redis in name), inspect contents
docker run --rm -v <VOLUME_NAME>:/data alpine ls -la /data
docker run --rm -v <VOLUME_NAME>:/data alpine du -sh /data/*
```

**Cleanup** (only after per-volume confirmation; database volumes require content inspection):
```bash
# Remove specific volumes by name
docker volume rm project-mysql-data project-redis-data
```

#### Build Cache

**What it is**: Intermediate build layers

**Safety**: 🟢 **Safe to delete** (rebuilds just take longer)

**Note**: `docker builder prune` is the ONE exception to the prune prohibition -- build cache contains only intermediate layers, never user data.

**Cleanup**:
```bash
docker builder prune -a
```

### node_modules

**What it is**: Installed npm packages for Node.js projects

@@ -228,22 +228,7 @@ tmux capture-pane -t mole -p

- Next Xcode build takes 30 minutes instead of 30 seconds
- AI project fails because models need redownload

See SKILL.md sections "Anti-Patterns: What NOT to Delete" and "What IS Safe to Delete" for the full tables of items to keep vs items safe to remove.

### 1. Never Execute Dangerous Commands Automatically

@@ -39,7 +39,41 @@ Ask user to verify instead.

Before deleting >10 GB, recommend Time Machine backup.

### Rule 5: Docker Prune Prohibition

**NEVER use any Docker prune command.** This includes:
- `docker image prune` / `docker image prune -a`
- `docker container prune`
- `docker volume prune` / `docker volume prune -f`
- `docker system prune` / `docker system prune -a --volumes`

**Why**: Prune commands operate on categories, not specific objects. They can silently destroy database volumes, user uploads, and container state that the user intended to keep. A user who loses their MySQL data because of a prune command will never trust this tool again.

**Correct approach**: Always specify exact object IDs or names:
```bash
# Images: delete by specific ID
docker rmi a02c40cc28df 555434521374

# Containers: delete by specific name
docker rm container-name-1 container-name-2

# Volumes: delete by specific name
docker volume rm project-mysql-data project-redis-data
```

### Rule 6: Double-Check Verification Protocol

Before deleting ANY Docker object, perform independent cross-verification. This applies to images, volumes, and containers.

**Key requirements**:
- For images: verify no container (running or stopped) references the image
- For volumes: verify no container mounts the volume
- For database volumes (name contains mysql, postgres, redis, mongo, mariadb): MANDATORY content inspection with a temporary container
- Even if Docker reports a volume as "dangling", the data inside may be valuable

See **SKILL.md Step 2C** for the complete verification commands and database volume inspection workflow.

### Rule 7: Use Trash When Possible

Prefer moving to Trash over permanent deletion:

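A minimal sketch of the Trash-first approach (the helper name is ours; note that a plain `mv` does not preserve Finder's "Put Back" metadata):

```shell
# Illustrative helper (name is ours): move a file into ~/.Trash with a
# timestamp suffix instead of deleting it permanently.
move_to_trash() {
  mkdir -p "$HOME/.Trash"
  mv "$1" "$HOME/.Trash/$(basename "$1").$(date +%s)"
}
```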
@@ -143,23 +177,30 @@ Please run this command manually:
- User should be aware of system-wide impact
- Audit trail (user types password)

### Docker Objects (Images, Containers, Volumes)

**Action**: List every object individually. Use precision deletion only (see Rule 5 and Rule 6).

**NEVER use prune commands.** Always specify exact IDs/names.

**Example for volumes**:
```
Docker volumes found:
  postgres_data (1.2 GB) - Contains PostgreSQL database
  redis_data (500 MB) - Contains Redis cache data
  app_uploads (3 GB) - Contains user-uploaded files

Database volumes inspected with temporary container:
  postgres_data: 8 databases, 45 tables, last modified 2 days ago
  redis_data: 12 MB dump.rdb

Confirm EACH volume individually:
  Delete postgres_data? [y/N]:
  Delete redis_data? [y/N]:
  Delete app_uploads? [y/N]:

Deletion commands (after confirmation):
  docker volume rm postgres_data redis_data
```

### Application Preferences

@@ -131,8 +131,8 @@ def check_docker():

        if 'Total:' in line:
            print(f"  {line}")

    print("\n💡 Cleanup: Remove specific images/volumes by ID/name (see SKILL.md)")
    print("  ⚠️ NEVER use 'docker system prune' -- always specify exact objects")

    return total_size

231 product-analysis/SKILL.md Normal file
@@ -0,0 +1,231 @@

---
name: product-analysis
description: Multi-path parallel product analysis with cross-model test-time compute scaling. Spawns parallel agents (Claude Code agent teams + Codex CLI) to explore product from multiple perspectives, then synthesizes findings into actionable optimization plans. Can invoke competitors-analysis for competitive benchmarking. Use when "product audit", "self-review", "发布前审查", "产品分析", "analyze our product", "UX audit", or "信息架构审计".
argument-hint: [scope: full|ux|api|arch|compare]
---

# Product Analysis

Multi-path parallel product analysis that combines **Claude Code agent teams** and **Codex CLI** for cross-model test-time compute scaling.

**Core principle**: Same analysis task, multiple AI perspectives, deep synthesis.

## How It Works

```
/product-analysis full
        │
        ├─ Step 0: Auto-detect available tools (codex? competitors?)
        │
   ┌────┼──────────────┐
   │    │              │
Claude Code        Codex CLI (auto-detected)
Task Agents        (background Bash)
(Explore ×3-5)     (×2-3 parallel)
   │                    │
   └────────┬───────────┘
            │
   Synthesis (main context)
            │
   Structured Report
```

## Step 0: Auto-Detect Available Tools

Before launching any agents, detect what tools are available:

```bash
# Check if Codex CLI is installed
which codex 2>/dev/null && codex --version
```

**Decision logic**:
- If `codex` is found: Inform the user — "Codex CLI detected (version X). Will run cross-model analysis for richer perspectives."
- If `codex` is not found: Silently proceed with Claude Code agents only. Do NOT ask the user to install anything.

Also detect the project type to tailor agent prompts:
```bash
# Detect project type
ls package.json 2>/dev/null    # Node.js/React
ls pyproject.toml 2>/dev/null  # Python
ls Cargo.toml 2>/dev/null      # Rust
ls go.mod 2>/dev/null          # Go
```

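The detection steps above can be combined into one function (a sketch; the function and variable names are ours):

```shell
# Illustrative detection (names are ours): report codex availability and project type.
detect_environment() {
  codex_available=no
  if command -v codex >/dev/null 2>&1; then codex_available=yes; fi

  project_type=unknown
  if [ -f package.json ];   then project_type=node;   fi
  if [ -f pyproject.toml ]; then project_type=python; fi
  if [ -f Cargo.toml ];     then project_type=rust;   fi
  if [ -f go.mod ];         then project_type=go;     fi

  echo "codex=$codex_available project=$project_type"
}
```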
## Scope Modes

Parse `$ARGUMENTS` to determine analysis scope:

| Scope | What it covers | Typical agents |
|-------|---------------|----------------|
| `full` | UX + API + Architecture + Docs (default) | 5 Claude + Codex (if available) |
| `ux` | Frontend navigation, information density, user journey, empty state, onboarding | 3 Claude + Codex (if available) |
| `api` | Backend API coverage, endpoint health, error handling, consistency | 2 Claude + Codex (if available) |
| `arch` | Module structure, dependency graph, code duplication, separation of concerns | 2 Claude + Codex (if available) |
| `compare X Y` | Self-audit + competitive benchmarking (invokes `/competitors-analysis`) | 3 Claude + competitors-analysis |

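Scope parsing can be sketched as follows (the function name is ours; unrecognized scopes fall back to `full`):

```shell
# Illustrative parser (name is ours): the first word of $ARGUMENTS is the scope,
# anything unrecognized falls back to the default "full".
parse_scope() {
  scope=$(printf '%s' "${1:-full}" | awk '{print $1}')
  case "$scope" in
    full|ux|api|arch|compare) ;;
    *) scope=full ;;
  esac
  echo "$scope"
}
```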
## Phase 1: Parallel Exploration

Launch all exploration agents simultaneously using the Task tool (background mode).

### Claude Code Agents (always)

For each dimension, spawn a Task agent with `subagent_type: Explore` and `run_in_background: true`:

**Agent A — Frontend Navigation & Information Density**
```
Explore the frontend navigation structure and entry points:
1. App.tsx: How many top-level components are mounted simultaneously?
2. Left sidebar: How many buttons/entries? What does each link to?
3. Right sidebar: How many tabs? How many sections per tab?
4. Floating panels: How many drawers/modals? Which overlap in functionality?
5. Count total first-screen interactive elements for a new user.
6. Identify duplicate entry points (same feature accessible from 2+ places).
Give specific file paths, line numbers, and element counts.
```

**Agent B — User Journey & Empty State**
```
Explore the new user experience:
1. Empty state page: What does a user with no sessions see? Count clickable elements.
2. Onboarding flow: How many steps? What information is presented?
3. Prompt input area: How many buttons/controls surround the input box? Which are high-frequency vs low-frequency?
4. Mobile adaptation: How many nav items? How does it differ from desktop?
5. Estimate: Can a new user complete their first conversation in 3 minutes?
Give specific file paths, line numbers, and UX assessment.
```

**Agent C — Backend API & Health**
```
Explore the backend API surface:
1. List ALL API endpoints (method + path + purpose).
2. Identify endpoints that are unused or have no frontend consumer.
3. Check error handling consistency (do all endpoints return structured errors?).
4. Check authentication/authorization patterns (which endpoints require auth?).
5. Identify any endpoints that duplicate functionality.
Give specific file paths and line numbers.
```

**Agent D — Architecture & Module Structure** (full/arch scope only)
```
Explore the module structure and dependencies:
1. Map the module dependency graph (which modules import which).
2. Identify circular dependencies or tight coupling.
3. Find code duplication across modules (same pattern in 3+ places).
4. Check separation of concerns (does each module have a single responsibility?).
5. Identify dead code or unused exports.
Give specific file paths and line numbers.
```

**Agent E — Documentation & Config Consistency** (full scope only)
```
Explore documentation and configuration:
1. Compare README claims vs actual implemented features.
2. Check config file consistency (base.yaml vs .env.example vs code defaults).
3. Find outdated documentation (references to removed features/files).
4. Check test coverage gaps (which modules have no tests?).
Give specific file paths and line numbers.
```

### Codex CLI Agents (auto-detected)

If Codex CLI was detected in Step 0, launch parallel Codex analyses via background Bash.

Each Codex invocation gets the same dimensional prompt but from a different model's perspective:

```bash
codex -m o4-mini \
  -c model_reasoning_effort="high" \
  --full-auto \
  "Analyze the frontend navigation structure of this project. Count all interactive elements visible to a new user on first screen. Identify duplicate entry points where the same feature is accessible from 2+ places. Give specific file paths and counts."
```

Run 2-3 Codex commands in parallel (background Bash), one per major dimension.

**Important**: Codex runs in the project's working directory. It has full filesystem access. The `--full-auto` flag (or `--dangerously-bypass-approvals-and-sandbox` for older versions) enables autonomous execution.

## Phase 2: Competitive Benchmarking (compare scope only)

When scope is `compare`, invoke the competitors-analysis skill for each competitor:

```
Use the Skill tool to invoke: /competitors-analysis {competitor-name} {competitor-url}
```

This delegates to the orthogonal `competitors-analysis` skill, which handles:
- Repository cloning and validation
- Evidence-based code analysis (file:line citations)
- Competitor profile generation

## Phase 3: Synthesis
|
||||||
|
|
||||||
|
After all agents complete, synthesize findings in the main conversation context.
|
||||||
|
|
||||||
|
### Cross-Validation
|
||||||
|
|
||||||
|
Compare findings across agents (Claude vs Claude, Claude vs Codex):
|
||||||
|
- **Agreement** = high confidence finding
|
||||||
|
- **Disagreement** = investigate deeper (one agent may have missed context)
|
||||||
|
- **Codex-only finding** = different model perspective, validate manually
|
||||||
|
|
||||||
|
### Quantification
|
||||||
|
|
||||||
|
Extract hard numbers from agent reports:
|
||||||
|
|
||||||
|
| Metric | What to measure |
|
||||||
|
|--------|----------------|
|
||||||
|
| First-screen interactive elements | Total count of buttons/links/inputs visible to new user |
|
||||||
|
| Feature entry point duplication | Number of features with 2+ entry points |
|
||||||
|
| API endpoints without frontend consumer | Count of unused backend routes |
|
||||||
|
| Onboarding steps to first value | Steps from launch to first successful action |
|
||||||
|
| Module coupling score | Number of circular or bi-directional dependencies |
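
Several of these metrics are mechanical once agents report raw pairs. A minimal sketch of the entry-point duplication count, with made-up sample data (the `feature<TAB>location` format is an assumption about how agent output gets normalized, not something the skill mandates):

```shell
#!/usr/bin/env bash
# Count features reachable from 2+ entry points.
# Sample data is illustrative; real input comes from agent reports.
printf 'search\tsidebar\nsearch\ttopbar\nsearch\tcommand-palette\nsettings\ttopbar\nexport\tmenu\nexport\ttoolbar\n' \
  > /tmp/entry-points.tsv

# Column 1 = feature name; count how many features appear 2+ times.
cut -f1 /tmp/entry-points.tsv | sort | uniq -c | awk '$1 >= 2 { dup++ } END { print dup+0 }'
# → 2   (search and export each have multiple entry points)
```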

### Structured Output

Produce a layered optimization report:

```markdown
## Product Analysis Report

### Executive Summary
[1-2 sentences: key finding]

### Quantified Findings
| Metric | Value | Assessment |
|--------|-------|------------|
| ... | ... | ... |

### P0: Critical (block launch)
[Issues that prevent basic usability]

### P1: High Priority (launch week)
[Issues that significantly degrade experience]

### P2: Medium Priority (next sprint)
[Issues worth addressing but not blocking]

### Cross-Model Insights
[Findings that only one model identified — worth investigating]

### Competitive Position (if compare scope)
[How we compare on key dimensions]
```

## Workflow Checklist

- [ ] Parse `$ARGUMENTS` for scope
- [ ] Auto-detect Codex CLI availability (`which codex`)
- [ ] Auto-detect project type (package.json / pyproject.toml / etc.)
- [ ] Launch Claude Code Explore agents (3-5 parallel, background)
- [ ] Launch Codex CLI commands (2-3 parallel, background) if detected
- [ ] Invoke `/competitors-analysis` if `compare` scope
- [ ] Collect all agent results
- [ ] Cross-validate findings
- [ ] Quantify metrics
- [ ] Generate structured report with P0/P1/P2 priorities

## References

- [references/analysis_dimensions.md](references/analysis_dimensions.md) — Detailed audit dimension definitions and prompts
- [references/synthesis_methodology.md](references/synthesis_methodology.md) — How to weight and merge multi-agent findings
- [references/codex_patterns.md](references/codex_patterns.md) — Codex CLI invocation patterns and flag reference
109
product-analysis/references/analysis_dimensions.md
Normal file
@@ -0,0 +1,109 @@
# Analysis Dimensions

Detailed definitions for each audit dimension. Agents should use these as exploration guides.

## Dimension 1: Frontend Navigation & Information Density

**Goal**: Quantify cognitive load for a new user.

**Key questions**:
1. How many top-level components does App.tsx mount simultaneously?
2. How many tabs/sections exist in each sidebar panel?
3. Which features have multiple entry points (duplicate navigation)?
4. What is the total count of interactive elements on the first screen?
5. Are there panels/drawers that overlap in functionality?

**Exploration targets**:
- Main app entry (App.tsx or equivalent)
- Left sidebar / navigation components
- Right sidebar / inspector panels
- Floating panels, drawers, modals
- Settings / configuration panels
- Control center / dashboard panels

**Output format**:
```
| Component | Location | Interactive Elements | Overlaps With |
|-----------|----------|----------------------|---------------|
```
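
Question 4 can be seeded with a static grep before any agent runs. A rough sketch: the demo component and the element patterns are invented, and a grep like this over-counts elements hidden behind conditional rendering, so treat the number as an upper bound:

```shell
#!/usr/bin/env bash
# Rough first-pass count of interactive elements in a React codebase.
mkdir -p /tmp/ui-audit-demo
cat > /tmp/ui-audit-demo/App.tsx <<'EOF'
export function App() {
  return (
    <div>
      <button onClick={start}>Start</button>
      <button onClick={stop}>Stop</button>
      <a href="/docs">Docs</a>
      <input value={q} />
    </div>
  );
}
EOF

# Count opening tags of common interactive elements across all .tsx files.
grep -rEo '<(button|input|select|textarea|a )' /tmp/ui-audit-demo --include='*.tsx' | wc -l
```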

## Dimension 2: User Journey & Empty State

**Goal**: Evaluate time-to-first-value for a new user.

**Key questions**:
1. What does a user see when they have no data/sessions/projects?
2. How many steps from launch to first successful action?
3. Is there an onboarding flow? How many steps?
4. How many clickable elements compete for attention in the empty state?
5. Are high-frequency actions visually prioritized over low-frequency ones?

**Exploration targets**:
- Empty state components
- Onboarding dialogs/wizards
- Prompt input area and surrounding controls
- Quick start templates / suggested actions
- Mobile-specific navigation and input

**Output format**:
```
Step N: [Action] → [What user sees] → [Next possible actions: count]
```

## Dimension 3: Backend API Surface

**Goal**: Identify API bloat, inconsistency, and unused endpoints.

**Key questions**:
1. How many total API endpoints exist?
2. Which endpoints have no corresponding frontend call?
3. Are error responses consistent across all endpoints?
4. Is authentication applied consistently?
5. Are there duplicate endpoints serving similar purposes?

**Exploration targets**:
- Router files (API route definitions)
- Frontend API client / fetch calls
- Error handling middleware
- Authentication middleware
- API documentation / OpenAPI spec

**Output format**:
```
| Method | Path | Purpose | Has Frontend Consumer | Auth Required |
|--------|------|---------|-----------------------|---------------|
```
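
Question 2 is answerable with a set difference over two greps. A sketch under assumed file names and route/fetch syntax; adapt the regexes to the actual router framework and API client:

```shell
#!/usr/bin/env bash
# List endpoints defined in the backend but never fetched by the frontend.
set -eu
mkdir -p /tmp/api-audit-demo && cd /tmp/api-audit-demo

# Hypothetical router and client files for illustration.
cat > routes.py <<'EOF'
@app.get("/api/users")
@app.post("/api/users")
@app.get("/api/legacy/export")
EOF
cat > client.ts <<'EOF'
const users = await fetch("/api/users");
EOF

grep -oE '"/api/[^"]*"' routes.py | tr -d '"' | sort -u > defined
grep -oE '"/api/[^"]*"' client.ts | tr -d '"' | sort -u > used
comm -23 defined used   # defined but never fetched
# → /api/legacy/export
```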

## Dimension 4: Architecture & Module Structure

**Goal**: Identify coupling, duplication, and dead code.

**Key questions**:
1. Which modules have circular dependencies?
2. Where is the same pattern duplicated across 3+ files?
3. Which modules have unclear single responsibility?
4. Are there unused exports or dead code paths?
5. How deep is the import chain for core operations?

**Exploration targets**:
- Module `__init__.py` / `index.ts` files
- Import graphs (who imports whom)
- Utility files and shared helpers
- Configuration and factory patterns
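
Question 1 can be flagged with `tsort`, which fails to order an edge list that contains a cycle. The module names below are invented for illustration; a real run would generate the edge list from the project's import graph:

```shell
#!/usr/bin/env bash
# Each line is "importer imported": auth imports db, db imports models,
# models imports auth — a three-module cycle.
printf '%s\n' 'auth db' 'db models' 'models auth' 'cli auth' > /tmp/imports-demo.txt

# tsort exits nonzero and reports the loop when the graph is not a DAG.
if ! tsort /tmp/imports-demo.txt >/dev/null 2>&1; then
  echo "circular dependency detected"
fi
```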

## Dimension 5: Documentation & Config Consistency

**Goal**: Find gaps between claims and reality.

**Key questions**:
1. Does the README list features that don't exist in code?
2. Are config file defaults consistent with code defaults?
3. Is there documentation for removed/renamed features?
4. Which modules have zero test coverage?
5. Are there TODO/FIXME/HACK comments in production code?

**Exploration targets**:
- README.md, CLAUDE.md, CONTRIBUTING.md
- Config files (YAML, JSON, .env)
- Test directories (coverage gaps)
- Source code comments (TODO/FIXME/HACK)
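
Question 5 reduces to a recursive grep. The sample module is invented; point the final command at the real source tree:

```shell
#!/usr/bin/env bash
# Per-file count of debt markers (lines containing TODO/FIXME/HACK).
mkdir -p /tmp/doc-audit-demo
cat > /tmp/doc-audit-demo/worker.py <<'EOF'
def run():
    # TODO: retry on failure
    pass  # FIXME: handle timeout
EOF

grep -rcE 'TODO|FIXME|HACK' /tmp/doc-audit-demo --include='*.py'
# → /tmp/doc-audit-demo/worker.py:2
```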

82
product-analysis/references/codex_patterns.md
Normal file
@@ -0,0 +1,82 @@
# Codex CLI Integration Patterns

How to use OpenAI Codex CLI for cross-model parallel analysis.

## Basic Invocation

```bash
codex -m o4-mini \
  -c model_reasoning_effort="high" \
  --full-auto \
  "Your analysis prompt here"
```

## Flag Reference

| Flag | Purpose | Values |
|------|---------|--------|
| `-m` | Model selection | `o4-mini` (fast), `gpt-5.3-codex-spark` (deep) |
| `-c model_reasoning_effort` | Reasoning depth | `low`, `medium`, `high`, `xhigh` |
| `-c model_reasoning_summary_format` | Summary format | `experimental` (structured output) |
| `--full-auto` | Skip all approval prompts | (no value) |
| `--dangerously-bypass-approvals-and-sandbox` | Legacy full-auto flag | (no value, older versions) |

## Recommended Configurations

### Fast Scan (quick validation)

```bash
codex -m o4-mini \
  -c model_reasoning_effort="medium" \
  --full-auto \
  "prompt"
```

### Deep Analysis (thorough investigation)

```bash
codex -m o4-mini \
  -c model_reasoning_effort="xhigh" \
  -c model_reasoning_summary_format="experimental" \
  --full-auto \
  "prompt"
```

## Parallel Execution Pattern

Launch multiple Codex analyses in the background using the Bash tool with `run_in_background: true`:

```bash
# Dimension 1: Frontend
codex -m o4-mini -c model_reasoning_effort="high" --full-auto \
  "Analyze frontend navigation: count interactive elements, find duplicate entry points, assess cognitive load for new users. Give file paths and counts."

# Dimension 2: User Journey
codex -m o4-mini -c model_reasoning_effort="high" --full-auto \
  "Analyze new user experience: what does the empty state show? How many steps to first action? Count clickable elements competing for attention. Give file paths."

# Dimension 3: Backend API
codex -m o4-mini -c model_reasoning_effort="high" --full-auto \
  "List all API endpoints. Identify unused endpoints with no frontend consumer. Check error handling consistency. Give router file paths."
```

## Output Handling

Codex outputs to stdout. When run in background:

1. Use Bash `run_in_background: true` to launch
2. Use `TaskOutput` to retrieve results when done
3. Parse the text output for findings
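
Outside the Bash tool, the same fan-out works with plain shell job control. The `analyze` function here is a stand-in so the sketch runs without the Codex CLI installed; replace its body with the real `codex` invocation:

```shell
#!/usr/bin/env bash
# Parallel fan-out with job control; results land in per-dimension log files.
set -eu
analyze() {  # $1 = dimension, $2 = prompt
  # real call: codex -m o4-mini -c model_reasoning_effort="high" --full-auto "$2"
  echo "findings for $1"
}
mkdir -p /tmp/codex-out
analyze frontend "Analyze frontend navigation..." > /tmp/codex-out/frontend.log &
analyze backend  "List all API endpoints..."      > /tmp/codex-out/backend.log &
wait   # block until every background job finishes
cat /tmp/codex-out/backend.log /tmp/codex-out/frontend.log
```

The `wait` builtin is what makes synthesis safe: no log is read until every background analysis has exited.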

## Cross-Model Value

The primary value of Codex in this workflow is **independent perspective**:

- Different training data may surface different patterns
- Different reasoning approach may catch what Claude misses
- Agreement across models = high confidence
- Disagreement = worth investigating manually

## Limitations

- Codex CLI must be installed and configured (`codex` command available)
- Requires OpenAI API key configured
- No MCP server access (only filesystem tools)
- Output is unstructured text (needs parsing)
- Rate limits apply per OpenAI account
68
product-analysis/references/synthesis_methodology.md
Normal file
@@ -0,0 +1,68 @@
# Synthesis Methodology

How to weight, merge, and validate findings from multiple parallel agents.

## Multi-Agent Synthesis Framework

### Step 1: Collect Raw Findings

Wait for all agents to complete. For each agent, extract:

- **Quantitative data**: counts, measurements, lists
- **Qualitative assessments**: good/bad/unclear judgments
- **Evidence**: file paths, line numbers, code snippets

### Step 2: Cross-Validation Matrix

Create a matrix comparing findings across agents:

```
| Finding | Agent A | Agent B | Codex | Confidence |
|---------|---------|---------|-------|------------|
| "57 interactive elements on first screen" | 57 | 54 | 61 | HIGH (3/3 agree on magnitude) |
| "Skills has 3 entry points" | 3 | 3 | 2 | HIGH (2/3 exact match) |
| "Risk pages should be removed" | Yes | - | No | LOW (disagreement, investigate) |
```

**Confidence levels**:

- **HIGH**: 2+ agents agree (exact or same magnitude)
- **MEDIUM**: 1 agent found it, others didn't look
- **LOW**: Agents disagree — requires manual investigation
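
The magnitude rule can be made mechanical. This sketch treats two counts within 20% of the larger one as "same magnitude"; the 20% threshold is an assumption chosen for illustration, not part of the methodology above:

```shell
#!/usr/bin/env bash
# near A B: true when the two counts differ by at most 20% of the larger one.
near() {
  big=$(( $1 > $2 ? $1 : $2 ))
  diff=$(( $1 > $2 ? $1 - $2 : $2 - $1 ))
  [ $(( diff * 5 )) -le "$big" ]
}
# classify A B C: HIGH when at least one pair of agents agrees on magnitude
# (the "2+ agents agree" rule), otherwise LOW.
classify() {
  if near "$1" "$2" || near "$1" "$3" || near "$2" "$3"; then
    echo HIGH
  else
    echo LOW
  fi
}
classify 57 54 61   # → HIGH (agents agree on magnitude)
classify 3 9 20     # → LOW  (all pairwise far apart; investigate)
```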

### Step 3: Disagreement Resolution

When agents disagree:

1. Check if they analyzed different files/scopes
2. Check if one agent missed context (e.g., conditional rendering)
3. If genuine disagreement, note both perspectives in the report
4. Codex-only findings are "different model perspective" — valuable but need validation

### Step 4: Priority Assignment

**P0 (Critical)**: Issues that prevent a new user from completing basic tasks
- Examples: broken onboarding, missing error messages, dead navigation links

**P1 (High)**: Issues that significantly increase cognitive load or confusion
- Examples: duplicate entry points, information overload, unclear primary action

**P2 (Medium)**: Issues worth addressing but not blocking launch
- Examples: unused API endpoints, minor inconsistencies, missing edge case handling

### Step 5: Report Generation

Structure the report for actionability:

1. **Executive Summary** (2-3 sentences, the "so what")
2. **Quantified Metrics** (hard numbers, no adjectives)
3. **P0 Issues** (with specific file:line references)
4. **P1 Issues** (with suggested fixes)
5. **P2 Issues** (backlog items)
6. **Cross-Model Insights** (findings unique to one model)
7. **Competitive Position** (if compare scope was used)

## Weighting Rules

- Quantitative findings (counts, measurements) > qualitative judgments
- Code-evidenced findings > assumption-based findings
- Multi-agent agreement > single-agent findings
- User-facing issues > internal code quality issues
- Findings with a clear fix path > vague "should improve" suggestions
@@ -52,9 +52,8 @@ All frontmatter fields except `description` are optional. Configure skill behavi
 name: my-skill
 description: What this skill does and when to use it. Use when...
 context: fork
-agent: Explore
-disable-model-invocation: true
-allowed-tools: Read, Grep, Bash(git *)
+agent: general-purpose
+argument-hint: [topic]
 ---
 ```
@@ -62,26 +61,52 @@ allowed-tools: Read, Grep, Bash(git *)
 |-------|----------|-------------|
 | `name` | No | Display name for the skill. If omitted, uses the directory name. Lowercase letters, numbers, and hyphens only (max 64 characters). |
 | `description` | Recommended | What the skill does and when to use it. Claude uses this to decide when to apply the skill. If omitted, uses the first paragraph of markdown content. |
-| `context` | No | **Set to `fork` to run in a forked subagent context.** This is critical for skills that should be available to subagents spawned via the Task tool. Without `context: fork`, the skill runs inline in the main conversation. |
+| `context` | No | **Set to `fork` to run in a forked subagent context.** See "Inline vs Fork: Critical Decision" below — choosing wrong breaks your skill. |
 | `agent` | No | Which subagent type to use when `context: fork` is set. Options: `Explore`, `Plan`, `general-purpose`, or custom agents from `.claude/agents/`. Default: `general-purpose`. |
 | `disable-model-invocation` | No | Set to `true` to prevent Claude from automatically loading this skill. Use for workflows you want to trigger manually with `/name`. Default: `false`. |
 | `user-invocable` | No | Set to `false` to hide from the `/` menu. Use for background knowledge users shouldn't invoke directly. Default: `true`. |
-| `allowed-tools` | No | Tools Claude can use without asking permission when this skill is active. Supports wildcards: `Read, Grep, Bash(git *)`, `Bash(npm *)`, `Bash(docker compose *)`. |
+| `allowed-tools` | No | Pre-approved tools list. **Recommendation: Do NOT set this field.** Omitting it gives the skill full tool access governed by the user's permission settings. Setting it restricts the skill's capabilities unnecessarily. |
 | `model` | No | Model to use when this skill is active. |
 | `argument-hint` | No | Hint shown during autocomplete to indicate expected arguments. Example: `[issue-number]` or `[filename] [format]`. |
 | `hooks` | No | Hooks scoped to this skill's lifecycle. Example: `hooks: { pre-invoke: [{ command: "echo Starting" }] }`. See Claude Code Hooks documentation. |
 
 **Special placeholder:** `$ARGUMENTS` in skill content is replaced with text the user provides after the skill name. For example, `/deep-research quantum computing` replaces `$ARGUMENTS` with `quantum computing`.
 
-##### When to Use `context: fork`
+##### Inline vs Fork: Critical Decision
 
-Use `context: fork` when the skill:
-- Performs multi-step autonomous tasks (research, analysis, code generation)
-- Should be available to subagents spawned via the Task tool
-- Needs isolated context that won't pollute the main conversation
-- Contains explicit task instructions (not just guidelines or reference content)
+**This is the most important architectural decision when designing a skill.** Choosing wrong will silently break your skill's core capabilities.
 
-**Example: Task-based skill with subagent execution:**
+**CRITICAL CONSTRAINT: Subagents cannot spawn other subagents.** A skill running with `context: fork` (as a subagent) CANNOT:
+- Use the Task tool to spawn parallel exploration agents
+- Use the Skill tool to invoke other skills
+- Orchestrate any multi-agent workflow
+
+**Decision guide:**
+
+| Your skill needs to... | Use | Why |
+|------------------------|-----|-----|
+| Orchestrate parallel agents (Task tool) | **Inline** (no `context`) | Subagents can't spawn subagents |
+| Call other skills (Skill tool) | **Inline** (no `context`) | Subagents can't invoke skills |
+| Run Bash commands for external CLIs | **Inline** (no `context`) | Full tool access in main context |
+| Perform a single focused task (research, analysis) | **Fork** (`context: fork`) | Isolated context, clean execution |
+| Provide reference knowledge (coding conventions) | **Inline** (no `context`) | Guidelines enrich main conversation |
+| Be callable BY other skills | **Fork** (`context: fork`) | Must be a subagent to be spawned |
+
+**Example: Orchestrator skill (MUST be inline):**
+```yaml
+---
+name: product-analysis
+description: Multi-path parallel product analysis with cross-model synthesis
+---
+
+# Orchestrates parallel agents — inline is REQUIRED
+1. Auto-detect available tools (which codex, etc.)
+2. Launch 3-5 Task agents in parallel (Explore subagents)
+3. Optionally invoke /competitors-analysis via Skill tool
+4. Synthesize all results
+```
+
+**Example: Specialist skill (fork is correct):**
 ```yaml
 ---
 name: deep-research
@@ -95,9 +120,8 @@ Research $ARGUMENTS thoroughly:
 2. Read and analyze the code
 3. Summarize findings with specific file references
 ```
-When invoked as `/deep-research authentication flow`, `$ARGUMENTS` becomes `authentication flow`.
 
-**Example: Reference skill that runs inline:**
+**Example: Reference skill (inline, no task):**
 ```yaml
 ---
 name: api-conventions
@@ -109,6 +133,44 @@ When writing API endpoints:
 - Return consistent error formats
 ```
 
+##### Composable Skill Design (Orthogonality)
+
+Skills should be **orthogonal**: each skill handles one concern, and they combine through composition.
+
+**Pattern: Orchestrator (inline) calls Specialist (fork)**
+```
+product-analysis (inline, orchestrator)
+├─ Task agents for parallel exploration
+├─ Skill('competitors-analysis', 'X') → fork subagent
+└─ Synthesizes all results
+
+competitors-analysis (fork, specialist)
+└─ Single focused task: analyze one competitor codebase
+```
+
+**Rules for composability:**
+1. The **caller** must be inline (no `context: fork`) to use Task/Skill tools
+2. The **callee** should use `context: fork` to run in isolated subagent context
+3. Each skill has a single responsibility — don't mix orchestration with execution
+4. Share methodology via references (e.g., checklists, templates), not by duplicating code
+
+##### Auto-Detection Over Manual Flags
+
+**Never add manual flags for capabilities that can be auto-detected.** Instead of requiring users to pass `--with-codex` or `--verbose`, detect capabilities at runtime:
+
+```
+# Good: Auto-detect and inform
+Step 0: Check available tools
+- `which codex` → If found, inform user and enable cross-model analysis
+- `ls package.json` → If found, tailor prompts for Node.js project
+- `which docker` → If found, enable container-based execution
+
+# Bad: Manual flags
+argument-hint: [scope] [--with-codex] [--docker] [--verbose]
+```
+
+**Principle:** Capabilities auto-detect, user decides scope. A skill should discover what it CAN do and act accordingly, not require users to remember what tools are installed.
+
 ##### Invocation Control
 
 | Frontmatter | You can invoke | Claude can invoke | Subagents can use |
@@ -1,6 +1,6 @@
|
|||||||
---
|
---
|
||||||
name: tunnel-doctor
|
name: tunnel-doctor
|
||||||
description: Diagnoses and fixes conflicts between Tailscale and proxy/VPN tools (Shadowrocket, Clash, Surge) on macOS. Covers four conflict layers - (1) route hijacking, (2) HTTP proxy env var interception, (3) system proxy bypass, and (4) SSH ProxyCommand double tunneling causing git push/pull failures. Includes SOP for remote development via SSH tunnels with proxy-safe Makefile patterns. Use when Tailscale ping works but SSH/HTTP times out, when browser returns 503 but curl works, when git push fails with "failed to begin relaying via HTTP", when setting up Tailscale SSH to WSL instances, or when bootstrapping remote dev environments over Tailscale.
|
description: Diagnoses and fixes conflicts between Tailscale and proxy/VPN tools (Shadowrocket, Clash, Surge) on macOS. Covers five conflict layers - (1) route hijacking, (2) HTTP proxy env var interception, (3) system proxy bypass, (4) SSH ProxyCommand double tunneling, and (5) VM/container runtime proxy propagation (OrbStack/Docker). Includes SOP for remote development via SSH tunnels with proxy-safe Makefile patterns. Use when Tailscale ping works but SSH/HTTP times out, when browser returns 503 but curl works, when git push fails with "failed to begin relaying via HTTP", when Docker pull times out behind TUN/VPN, when setting up Tailscale SSH to WSL instances, or when bootstrapping remote dev environments over Tailscale.
|
||||||
allowed-tools: Read, Grep, Edit, Bash
|
allowed-tools: Read, Grep, Edit, Bash
|
||||||
---
|
---
|
||||||
|
|
||||||
@@ -8,9 +8,9 @@ allowed-tools: Read, Grep, Edit, Bash
|
|||||||
|
|
||||||
Diagnose and fix conflicts when Tailscale coexists with proxy/VPN tools on macOS, with specific guidance for SSH access to WSL instances.
|
Diagnose and fix conflicts when Tailscale coexists with proxy/VPN tools on macOS, with specific guidance for SSH access to WSL instances.
|
||||||
|
|
||||||
## Four Conflict Layers
|
## Five Conflict Layers
|
||||||
|
|
||||||
Proxy/VPN tools on macOS create conflicts at four independent layers. Layers 1-3 affect Tailscale connectivity; Layer 4 affects SSH git operations (same proxy environment, different target):
|
Proxy/VPN tools on macOS create conflicts at five independent layers. Layers 1-3 affect Tailscale connectivity; Layer 4 affects SSH git operations; Layer 5 affects VM/container runtimes:
|
||||||
|
|
||||||
| Layer | What breaks | What still works | Root cause |
|
| Layer | What breaks | What still works | Root cause |
|
||||||
|-------|-------------|------------------|------------|
|
|-------|-------------|------------------|------------|
|
||||||
@@ -18,6 +18,7 @@ Proxy/VPN tools on macOS create conflicts at four independent layers. Layers 1-3
|
|||||||
| 2. HTTP env vars | `curl`, Python requests, Node.js fetch | SSH, browser | `http_proxy` set without `NO_PROXY` for Tailscale |
|
| 2. HTTP env vars | `curl`, Python requests, Node.js fetch | SSH, browser | `http_proxy` set without `NO_PROXY` for Tailscale |
|
||||||
| 3. System proxy (browser) | Browser only (HTTP 503) | SSH, `curl` (both with/without proxy) | Browser uses VPN system proxy; DIRECT rule routes via Wi-Fi, not Tailscale utun |
|
| 3. System proxy (browser) | Browser only (HTTP 503) | SSH, `curl` (both with/without proxy) | Browser uses VPN system proxy; DIRECT rule routes via Wi-Fi, not Tailscale utun |
|
||||||
| 4. SSH ProxyCommand double tunnel | `git push/pull` (intermittent) | `ssh -T` (small data) | `connect -H` creates HTTP CONNECT tunnel redundant with Shadowrocket TUN; landing proxy drops large/long-lived transfers |
|
| 4. SSH ProxyCommand double tunnel | `git push/pull` (intermittent) | `ssh -T` (small data) | `connect -H` creates HTTP CONNECT tunnel redundant with Shadowrocket TUN; landing proxy drops large/long-lived transfers |
|
||||||
|
| 5. VM/Container proxy propagation | `docker pull`, `docker build` | Host `curl`, running containers | VM runtime (OrbStack/Docker Desktop) auto-injects or caches proxy config; removing proxy makes it worse (VM traffic via TUN → TLS timeout) |
|
||||||
|
|
||||||
## Diagnostic Workflow
|
## Diagnostic Workflow
|
||||||
|
|
||||||
@@ -31,6 +32,8 @@ Determine which scenario applies:
|
|||||||
- **Remote dev server auth redirects to `localhost` → browser can't follow** → SSH tunnel needed (Step 2D)
|
- **Remote dev server auth redirects to `localhost` → browser can't follow** → SSH tunnel needed (Step 2D)
|
||||||
- **`make status` / scripts curl to localhost fail with proxy** → localhost proxy interception (Step 2E)
|
- **`make status` / scripts curl to localhost fail with proxy** → localhost proxy interception (Step 2E)
|
||||||
- **`git push/pull` fails with `FATAL: failed to begin relaying via HTTP`** → SSH double tunnel (Step 2F)
|
- **`git push/pull` fails with `FATAL: failed to begin relaying via HTTP`** → SSH double tunnel (Step 2F)
|
||||||
|
- **`docker pull` fails with `TLS handshake timeout` or `docker build` can't fetch base images** → VM/container proxy propagation (Step 2G)
|
||||||
|
- **`git clone` fails with `Connection closed by 198.18.x.x`** → TUN DNS hijack for SSH (Step 2H)
|
||||||
- **SSH connects but `operation not permitted`** → Tailscale SSH config issue (Step 4)
|
- **SSH connects but `operation not permitted`** → Tailscale SSH config issue (Step 4)
|
||||||
- **SSH connects but `be-child ssh` exits code 1** → WSL snap sandbox issue (Step 5)
|
- **SSH connects but `be-child ssh` exits code 1** → WSL snap sandbox issue (Step 5)
|
||||||
|
|
||||||
@@ -39,6 +42,8 @@ Determine which scenario applies:
|
|||||||
- `curl` uses `http_proxy` env var, NOT the system proxy. Browser uses system proxy (set by VPN). If `curl` works but browser doesn't → Layer 3.
|
- `curl` uses `http_proxy` env var, NOT the system proxy. Browser uses system proxy (set by VPN). If `curl` works but browser doesn't → Layer 3.
|
||||||
- If `tailscale ping` works but regular `ping` doesn't → Layer 1 (route table corrupted).
|
- If `tailscale ping` works but regular `ping` doesn't → Layer 1 (route table corrupted).
|
||||||
- If `ssh -T git@github.com` works but `git push` fails intermittently → Layer 4 (double tunnel).
|
- If `ssh -T git@github.com` works but `git push` fails intermittently → Layer 4 (double tunnel).
|
||||||
|
- If host `curl https://...` works but `docker pull` times out → Layer 5 (VM proxy propagation).
|
||||||
|
- If DNS resolves to `198.18.x.x` virtual IPs → TUN DNS hijack (Step 2H).
|
||||||
|
|
||||||
### Step 2A: Fix HTTP Proxy Environment Variables
|
### Step 2A: Fix HTTP Proxy Environment Variables
|
||||||
|
|
||||||
@@ -227,6 +232,153 @@ GIT_SSH_COMMAND="ssh -o ProxyCommand=none" git push origin main
|
|||||||
|
|
||||||
**Fix** — remove ProxyCommand and switch to `ssh.github.com:443`. See [references/proxy_conflict_reference.md § SSH ProxyCommand and Git Operations](references/proxy_conflict_reference.md) for the full SSH config, why port 443 helps, and fallback options when VPN is off.
|
**Fix** — remove ProxyCommand and switch to `ssh.github.com:443`. See [references/proxy_conflict_reference.md § SSH ProxyCommand and Git Operations](references/proxy_conflict_reference.md) for the full SSH config, why port 443 helps, and fallback options when VPN is off.
|
||||||
|
|
||||||

### Step 2G: Fix VM/Container Runtime Proxy Propagation (Docker pull/build failures)

**Symptom**: `docker pull` or `docker build` fails with `net/http: TLS handshake timeout` or `Internal Server Error` from `auth.docker.io`, while host `curl` to the same URLs works fine.

**Applies to**: OrbStack, Docker Desktop, or any VM-based Docker runtime on macOS with Shadowrocket/Clash TUN active.

**Root cause**: VM-based Docker runtimes (OrbStack, Docker Desktop) run the Docker daemon inside a lightweight VM. The VM's outbound traffic takes a different network path than host processes:

```
Host process (curl): Process → TUN (Shadowrocket) → landing proxy → internet ✅
VM process (Docker): Docker daemon → VM bridge → host network → TUN → ??? ❌
```

The TUN handles host-originated traffic correctly but may drop or delay VM-bridged traffic (different TCP stack, MTU, keepalive behavior).

**Three sub-problems and their fixes**:

#### 2G-1: OrbStack auto-detects and caches proxy (most common)

OrbStack's `network_proxy: auto` reads `http_proxy` from the shell environment and writes it to `~/.orbstack/config/docker.json`. **Crucially**, `orbctl config set network_proxy none` does NOT clean up `docker.json` — the cached proxy persists.

**Diagnosis**:

```bash
# OrbStack config says "none" but Docker still shows proxy
orbctl config get network_proxy   # → "none"
docker info | grep -i proxy       # → HTTP Proxy: http://127.0.0.1:1082 ← stale!

# The real source of truth:
cat ~/.orbstack/config/docker.json
# → {"proxies": {"http-proxy": "http://127.0.0.1:1082", ...}} ← cached!
```

**Fix** — DON'T remove the proxy. Instead, add precise `no-proxy` to prevent localhost interception while keeping the proxy as the VM's outbound channel:

```bash
# Write corrected config (keeps proxy, adds no-proxy for local traffic)
python3 -c "
import json
config = {
    'proxies': {
        'http-proxy': 'http://127.0.0.1:1082',
        'https-proxy': 'http://127.0.0.1:1082',
        'no-proxy': 'localhost,127.0.0.1,::1,192.168.128.0/24,100.64.0.0/10,host.internal,*.local'
    }
}
json.dump(config, open('$HOME/.orbstack/config/docker.json', 'w'), indent=2)
"

# Full restart (not just the docker engine)
orbctl stop && sleep 3 && orbctl start
```

**Why NOT remove the proxy**: When TUN is active, removing the Docker proxy means VM traffic goes directly through the bridge → TUN path, which causes TLS handshake timeouts. The proxy provides a working outbound channel because OrbStack maps host `127.0.0.1` into the VM.
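Before restarting, the written file can be sanity-checked offline. A sketch (the helper name is ours; it embeds `python3` for the JSON work, as the fix above does, and takes an optional path argument for testing):

```bash
# Check that docker.json keeps a no-proxy list covering local hosts.
check_docker_json() {
  local cfg="${1:-$HOME/.orbstack/config/docker.json}"
  [ -f "$cfg" ] || { echo "no docker.json at $cfg (nothing cached)"; return 0; }
  python3 - "$cfg" <<'EOF'
import json, sys
proxies = json.load(open(sys.argv[1])).get("proxies", {})
missing = [h for h in ("localhost", "127.0.0.1")
           if h not in proxies.get("no-proxy", "")]
print("OK: local bypass present" if not missing else f"MISSING from no-proxy: {missing}")
EOF
}
check_docker_json
```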
#### 2G-2: Removing proxy makes Docker worse (counter-intuitive)

| Docker config | Traffic path | Result |
|---------------|-------------|--------|
| Proxy ON, no `no-proxy` | Docker → proxy → TUN → internet | Docker Hub ✅, localhost probes ❌ |
| Proxy OFF | Docker → VM bridge → host → TUN → internet | TLS timeout ❌ |
| **Proxy ON + `no-proxy`** | **External: Docker → proxy → internet ✅; Local: Docker → direct ✅** | **Both work ✅** |

#### 2G-3: Deploy scripts probe localhost through proxy

Deploy scripts that `curl localhost` inside the Docker environment will route through the proxy. Fix by adding `NO_PROXY` at the script level:

```bash
# In deploy.sh or similar scripts:
_local_bypass="localhost,127.0.0.1,::1"
if [[ -n "${NO_PROXY:-}" ]]; then
  export NO_PROXY="${_local_bypass},${NO_PROXY}"
else
  export NO_PROXY="${_local_bypass}"
fi
export no_proxy="$NO_PROXY"

# Use 127.0.0.1 instead of localhost in probe URLs (some proxy implementations
# only match the exact string "localhost" in no-proxy, not the resolved IP)
curl http://127.0.0.1:3001/health   # ✅ bypasses proxy
curl http://localhost:3001/health   # ❌ may still go through proxy
```

**Verify the fix**:

```bash
# Docker proxy check (should show proxy + no-proxy)
docker info | grep -iE "proxy|No Proxy"

# Pull test
docker pull --quiet hello-world

# Local probe test
curl -s http://127.0.0.1:3001/health
```

### Step 2H: Fix TUN DNS Hijack for SSH/Git (198.18.x.x virtual IPs)

**Symptom**: `git clone/fetch/push` fails with `Connection closed by 198.18.0.x port 443`. `ssh -T git@github.com` may also fail. DNS resolution returns `198.18.x.x` addresses instead of real IPs.

**Root cause**: Shadowrocket TUN intercepts all DNS queries and returns virtual IPs in the `198.18.0.0/15` range. It then routes traffic to these virtual IPs through the TUN for protocol-aware proxying. HTTP/HTTPS works because the landing proxy understands these protocols, but SSH-over-443 (used by GitHub) gets mishandled — the TUN sees port 443 traffic, expects HTTPS, and drops the SSH handshake.
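Because the fake-IP pool is a fixed CIDR (`198.18.0.0/15` covers exactly `198.18.*.*` and `198.19.*.*`), detecting a hijacked resolution is easy to script. A sketch (the function name is ours):

```bash
# True (exit 0) when an address falls in the TUN's 198.18.0.0/15 fake-IP pool.
is_fake_ip() {
  case "$1" in
    198.18.*|198.19.*) return 0 ;;  # TUN virtual IP
    *) return 1 ;;                  # real address
  esac
}

is_fake_ip "198.18.0.26"   && echo "198.18.0.26 → hijacked resolution"
is_fake_ip "140.82.112.35" || echo "140.82.112.35 → real address"
```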
**Diagnosis**:

```bash
# DNS returns virtual IP (TUN hijack)
nslookup ssh.github.com
# → 198.18.0.26 ← Shadowrocket virtual IP, NOT real GitHub IP

# Direct IP works (bypasses DNS hijack)
ssh -o HostName=140.82.112.35 -o Port=443 git@github.com
# → "Hi user! You've successfully authenticated"
```

**Fix** — use direct IP in SSH config to bypass the DNS hijack (note: OpenSSH does not allow trailing comments on config lines, so comments go on their own lines):

```bash
# ~/.ssh/config
Host github.com
    # GitHub SSH server real IP (bypasses TUN DNS hijack)
    HostName 140.82.112.35
    Port 443
    User git
    ServerAliveInterval 60
    ServerAliveCountMax 3
    IdentityFile ~/.ssh/id_ed25519
```

**GitHub SSH server IPs** (as of 2026, verify with `dig +short ssh.github.com @8.8.8.8`):

- `140.82.112.35` (primary)
- `140.82.112.36` (alternate)

**Trade-off**: Hardcoded IPs break if GitHub changes them. Monitor `ssh -T git@github.com` — if it starts failing, update the IP. A cron job can automate the check:

```bash
# Weekly check (add to crontab): records the currently published IP
0 9 * * 1 dig +short ssh.github.com @8.8.8.8 | head -1 > /tmp/github-ssh-ip.txt
```
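Going a step further, the comparison itself can be scripted. A hypothetical helper (not part of the skill; the fresh resolution is passed in as an argument so the logic works offline):

```bash
# Warn when GitHub's published SSH IP no longer matches the pinned HostName.
check_github_ssh_ip() {
  local pinned="$1" current="$2"
  if [ -n "$current" ] && [ "$current" != "$pinned" ]; then
    echo "ssh.github.com moved: $pinned -> $current; update ~/.ssh/config"
    return 1
  fi
  echo "pinned IP still current: $pinned"
}

# Live usage (requires network):
# check_github_ssh_ip "140.82.112.35" "$(dig +short ssh.github.com @8.8.8.8 | head -1)"
```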
**Alternative** (if you control Shadowrocket rules): Add GitHub SSH IPs to a DIRECT rule so the TUN passes them through without protocol inspection:

```
IP-CIDR,140.82.112.0/24,DIRECT
IP-CIDR,192.30.252.0/22,DIRECT
```

This is more robust but requires proxy tool config access.

### Step 3: Fix Proxy Tool Configuration
Identify the proxy tool and apply the appropriate fix. See [references/proxy_conflict_reference.md](references/proxy_conflict_reference.md) for detailed instructions per tool.