* feat: add MiniMax AI as LLM platform adaptor

  Original implementation by octo-patch in PR #318. This commit includes comprehensive improvements and documentation.

  Code improvements:
  - Fix API key validation to properly check the JWT format (`eyJ` prefix)
  - Add specific exception handling for timeout and connection errors
  - Remove an unused variable in the upload method

  Dependencies:
  - Add MiniMax to the [all-llms] extra group in pyproject.toml

  Tests:
  - Remove a duplicate setUp method in the integration test class
  - Add five new test methods:
    * test_package_excludes_backup_files
    * test_upload_success_mocked (with OpenAI mocking)
    * test_upload_network_error
    * test_upload_connection_error
    * test_validate_api_key_jwt_format
  - Update test_validate_api_key_valid to use JWT-format keys
  - Fix test assertions for error-message matching

  Documentation:
  - Create a comprehensive MINIMAX_INTEGRATION.md guide (380+ lines)
  - Update MULTI_LLM_SUPPORT.md with a MiniMax platform entry
  - Update the 01-installation.md extras table
  - Update the INTEGRATIONS.md AI platforms table
  - Update the AGENTS.md adaptor import-pattern example
  - Fix the README.md platform count from 4 to 5

  All tests pass (33 passed, 3 skipped); lint checks pass.

  Co-authored-by: octo-patch <octo-patch@users.noreply.github.com>

* fix: improve MiniMax adaptor — typed exceptions, key validation, tests, docs

  - Remove the invalid "minimax" self-reference from the all-llms dependency group
  - Use typed OpenAI exceptions (APITimeoutError, APIConnectionError) instead of string-matching on a generic Exception
  - Replace the incorrect JWT assumption in validate_api_key with a length check
  - Use the DEFAULT_API_ENDPOINT constant instead of hardcoded URLs (3 sites)
  - Add a Path() cast for output_path before the .is_dir() call
  - Add a sys.modules mock to test_enhance_missing_library
  - Add a mocked test_enhance_success with backup/content verification
  - Update test assertions for the new exception types and key validation
  - Add MiniMax to the __init__.py docstrings (module, get_adaptor, list_platforms)
  - Add MiniMax sections to MULTI_LLM_SUPPORT.md (install, format, API key, workflow example, export-to-all)

  Follows up on PR #318 by @octo-patch (feat: add MiniMax AI as LLM platform adaptor).

  Co-authored-by: octo-patch <octo-patch@users.noreply.github.com>
  Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
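The second commit's key-validation change (replacing the JWT-prefix assumption with a length check) might look roughly like the sketch below; the class name and the threshold are assumptions for illustration, not the adaptor's actual code.

```python
class MiniMaxAdaptorSketch:
    """Hypothetical sketch; the real adaptor's validate_api_key may differ."""

    MIN_KEY_LENGTH = 20  # assumed threshold, not taken from the source

    def validate_api_key(self, api_key: str) -> bool:
        # The first commit required a JWT-style "eyJ" prefix; the follow-up
        # relaxed this to a simple length sanity check, since MiniMax keys
        # are not guaranteed to be JWTs.
        return isinstance(api_key, str) and len(api_key.strip()) >= self.MIN_KEY_LENGTH


adaptor = MiniMaxAdaptorSketch()
```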
@@ -51,7 +51,7 @@ mypy src/skill_seekers --show-error-codes --pretty
 **Pytest config** (from pyproject.toml): `addopts = "-v --tb=short --strict-markers"`, `asyncio_mode = "auto"`, `asyncio_default_fixture_loop_scope = "function"`.
 **Test markers:** `slow`, `integration`, `e2e`, `venv`, `bootstrap`, `benchmark`, `asyncio`.
 **Async tests:** use `@pytest.mark.asyncio`; asyncio_mode is `auto`, so the decorator is often implicit.
-**Test count:** 120 test files (107 in `tests/`, 13 in `tests/test_adaptors/`).
+**Test count:** 123 test files (107 in `tests/`, 16 in `tests/test_adaptors/`).

 ## Code Style
@@ -69,8 +69,10 @@ mypy src/skill_seekers --show-error-codes --pretty
 ```python
 try:
     from .claude import ClaudeAdaptor
+    from .minimax import MiniMaxAdaptor
 except ImportError:
     ClaudeAdaptor = None
+    MiniMaxAdaptor = None
 ```

 ### Naming Conventions
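The guarded-import pattern in the hunk above lets a missing optional extra degrade gracefully instead of crashing at import time. A self-contained sketch of the idea (the module name below is a deliberately nonexistent placeholder standing in for the real relative import, and `require_adaptor` is a hypothetical helper, not the project's API):

```python
# Sketch of the optional-dependency pattern; the imported module name is a
# placeholder that is deliberately not installed anywhere.
try:
    from minimax_adaptor_placeholder import MiniMaxAdaptor  # hypothetical
except ImportError:
    MiniMaxAdaptor = None  # extra not installed; callers must handle None


def require_adaptor(adaptor_cls, extra_name):
    """Raise a helpful error instead of an AttributeError on None."""
    if adaptor_cls is None:
        raise RuntimeError(
            f"This platform needs an optional extra: "
            f"pip install skill-seekers[{extra_name}]"
        )
    return adaptor_cls


try:
    require_adaptor(MiniMaxAdaptor, "minimax")
except RuntimeError as exc:
    print(exc)
```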
README.md (27 changes)
@@ -248,7 +248,7 @@ Instead of spending days on manual preprocessing, Skill Seekers:
 - ✅ **Backward Compatible** - Legacy single-source configs still work

 ### 🤖 Multi-LLM Platform Support
-- ✅ **4 LLM Platforms** - Claude AI, Google Gemini, OpenAI ChatGPT, Generic Markdown
+- ✅ **5 LLM Platforms** - Claude AI, Google Gemini, OpenAI ChatGPT, MiniMax AI, Generic Markdown
 - ✅ **Universal Scraping** - Same documentation works for all platforms
 - ✅ **Platform-Specific Packaging** - Optimized formats for each LLM
 - ✅ **One-Command Export** - `--target` flag selects platform
@@ -260,6 +260,7 @@ Instead of spending days on manual preprocessing, Skill Seekers:
 | **Claude AI** | ZIP + YAML | ✅ Auto | ✅ Yes | ANTHROPIC_API_KEY | ANTHROPIC_BASE_URL |
 | **Google Gemini** | tar.gz | ✅ Auto | ✅ Yes | GOOGLE_API_KEY | - |
 | **OpenAI ChatGPT** | ZIP + Vector Store | ✅ Auto | ✅ Yes | OPENAI_API_KEY | - |
+| **MiniMax AI** | ZIP + Knowledge Files | ✅ Auto | ✅ Yes | MINIMAX_API_KEY | - |
 | **Generic Markdown** | ZIP | ❌ Manual | ❌ No | - | - |

 ```bash
@@ -277,6 +278,11 @@ pip install skill-seekers[openai]
 skill-seekers package output/react/ --target openai
 skill-seekers upload react-openai.zip --target openai

+# MiniMax AI
+pip install skill-seekers[minimax]
+skill-seekers package output/react/ --target minimax
+skill-seekers upload react-minimax.zip --target minimax
+
 # Generic Markdown (universal export)
 skill-seekers package output/react/ --target markdown
 # Use the markdown files directly in any LLM
@@ -312,6 +318,9 @@ pip install skill-seekers[gemini]
 # Install with OpenAI support
 pip install skill-seekers[openai]

+# Install with MiniMax support
+pip install skill-seekers[minimax]
+
 # Install with all LLM platforms
 pip install skill-seekers[all-llms]
 ```
@@ -698,21 +707,21 @@ skill-seekers install --config react --dry-run

 ## 📊 Feature Matrix

-Skill Seekers supports **4 LLM platforms**, **17 source types**, and full feature parity across all targets.
+Skill Seekers supports **5 LLM platforms**, **17 source types**, and full feature parity across all targets.

-**Platforms:** Claude AI, Google Gemini, OpenAI ChatGPT, Generic Markdown
+**Platforms:** Claude AI, Google Gemini, OpenAI ChatGPT, MiniMax AI, Generic Markdown
 **Source Types:** Documentation websites, GitHub repos, PDFs, Word (.docx), EPUB, Video, Local codebases, Jupyter Notebooks, Local HTML, OpenAPI/Swagger, AsciiDoc, PowerPoint (.pptx), RSS/Atom feeds, Man pages, Confluence wikis, Notion pages, Slack/Discord chat exports

 See [Complete Feature Matrix](docs/FEATURE_MATRIX.md) for detailed platform and feature support.

 ### Quick Platform Comparison

-| Feature | Claude | Gemini | OpenAI | Markdown |
+| Feature | Claude | Gemini | OpenAI | MiniMax | Markdown |
-|---------|--------|--------|--------|----------|
+|---------|--------|--------|--------|---------|----------|
-| Format | ZIP + YAML | tar.gz | ZIP + Vector | ZIP |
+| Format | ZIP + YAML | tar.gz | ZIP + Vector | ZIP + Knowledge | ZIP |
-| Upload | ✅ API | ✅ API | ✅ API | ❌ Manual |
+| Upload | ✅ API | ✅ API | ✅ API | ✅ API | ❌ Manual |
-| Enhancement | ✅ Sonnet 4 | ✅ 2.0 Flash | ✅ GPT-4o | ❌ None |
+| Enhancement | ✅ Sonnet 4 | ✅ 2.0 Flash | ✅ GPT-4o | ✅ M2.7 | ❌ None |
-| All Skill Modes | ✅ | ✅ | ✅ | ✅ |
+| All Skill Modes | ✅ | ✅ | ✅ | ✅ | ✅ |

 ---
@@ -86,6 +86,7 @@ pip install skill-seekers[all-llms]
 - Claude AI support
 - Google Gemini support
 - OpenAI ChatGPT support
+- MiniMax AI support
 - All vector databases
 - MCP server
 - Cloud storage (S3, GCS, Azure)
@@ -98,6 +99,7 @@ Install only what you need:
 # Specific platform only
 pip install skill-seekers[gemini]   # Google Gemini
 pip install skill-seekers[openai]   # OpenAI
+pip install skill-seekers[minimax]  # MiniMax AI
 pip install skill-seekers[chroma]   # ChromaDB

 # Multiple extras
@@ -115,6 +117,7 @@ pip install skill-seekers[dev]
 |-------|-------------|-----------------|
 | `gemini` | Google Gemini support | `pip install skill-seekers[gemini]` |
 | `openai` | OpenAI ChatGPT support | `pip install skill-seekers[openai]` |
+| `minimax` | MiniMax AI support | `pip install skill-seekers[minimax]` |
 | `mcp` | MCP server | `pip install skill-seekers[mcp]` |
 | `chroma` | ChromaDB export | `pip install skill-seekers[chroma]` |
 | `weaviate` | Weaviate export | `pip install skill-seekers[weaviate]` |
@@ -112,6 +112,7 @@ Upload documentation as custom skills to AI chat platforms:
 | **[Claude](CLAUDE.md)** | Anthropic | ZIP + YAML | Claude.ai Projects | [Setup →](CLAUDE.md) |
 | **[Gemini](GEMINI_INTEGRATION.md)** | Google | tar.gz | Gemini AI | [Setup →](GEMINI_INTEGRATION.md) |
 | **[ChatGPT](OPENAI_INTEGRATION.md)** | OpenAI | ZIP + Vector Store | GPT Actions | [Setup →](OPENAI_INTEGRATION.md) |
+| **[MiniMax](MINIMAX_INTEGRATION.md)** | MiniMax | ZIP | MiniMax AI Platform | [Setup →](MINIMAX_INTEGRATION.md) |

 **Quick Example:**
 ```bash
@@ -139,7 +140,7 @@ skill-seekers upload output/vue-claude.zip --target claude
 | **AI coding (flow-based)** | Windsurf | Unique flow paradigm, Codeium AI | 5 min |
 | **AI coding (VS Code ext)** | Cline | Claude in VS Code, MCP integration | 10 min |
 | **AI coding (any IDE)** | Continue.dev | Works everywhere, open-source | 5 min |
-| **Chat with documentation** | Claude/Gemini/ChatGPT | Direct upload as custom skill | 3 min |
+| **Chat with documentation** | Claude/Gemini/ChatGPT/MiniMax | Direct upload as custom skill | 3 min |

 ### By Technical Requirements
docs/integrations/MINIMAX_INTEGRATION.md (new file, 391 lines)
@@ -0,0 +1,391 @@
# MiniMax AI Integration Guide

Complete guide for using Skill Seekers with the MiniMax AI platform.

---

## Overview

**MiniMax AI** is a Chinese AI company offering OpenAI-compatible APIs built around its M2.7 model. Skill Seekers packages documentation for use with MiniMax's platform.

### Key Features

- **OpenAI-Compatible API**: Uses the standard OpenAI client library
- **MiniMax-M2.7 Model**: Powerful LLM for enhancement and chat
- **Simple ZIP Format**: Easy packaging with system instructions
- **Knowledge Files**: Reference documentation included in the package

---

## Prerequisites

### 1. Get a MiniMax API Key

1. Visit the [MiniMax Platform](https://platform.minimaxi.com/)
2. Create an account and verify it
3. Navigate to the API Keys section
4. Generate a new API key
5. Copy the key (JWT format, starting with `eyJ`)

### 2. Install Dependencies

```bash
# Install MiniMax support (includes the openai library)
pip install skill-seekers[minimax]

# Or install all LLM platforms
pip install skill-seekers[all-llms]
```

### 3. Configure Environment

```bash
export MINIMAX_API_KEY=eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
```

Add this to your `~/.bashrc`, `~/.zshrc`, or `.env` file for persistence.

---
## Complete Workflow

### Step 1: Scrape Documentation

```bash
# Scrape a documentation website
skill-seekers scrape --config configs/react.json

# Or use the quick preset
skill-seekers create https://docs.python.org/3/ --preset quick
```

### Step 2: Enhance with MiniMax-M2.7

```bash
# Enhance SKILL.md using MiniMax AI
skill-seekers enhance output/react/ --target minimax

# With a custom model (if available)
skill-seekers enhance output/react/ --target minimax --model MiniMax-M2.7
```

This step:
- Reads the reference documentation
- Generates enhanced system instructions
- Creates a backup of the original SKILL.md
- Uses MiniMax-M2.7 for AI enhancement

### Step 3: Package for MiniMax

```bash
# Package as a MiniMax-compatible ZIP
skill-seekers package output/react/ --target minimax

# Custom output path
skill-seekers package output/react/ --target minimax --output my-skill.zip
```

**Output structure:**
```
react-minimax.zip
├── system_instructions.txt   # Main instructions (from SKILL.md)
├── knowledge_files/          # Reference documentation
│   ├── guide.md
│   ├── api-reference.md
│   └── examples.md
└── minimax_metadata.json     # Skill metadata
```

### Step 4: Validate the Package

```bash
# Validate the package with the MiniMax API
skill-seekers upload react-minimax.zip --target minimax
```

This validates:
- Package structure
- API connectivity
- System instructions format

**Note:** MiniMax doesn't have persistent skill storage like Claude. The upload validates your package, but you'll use the ZIP file directly with MiniMax's API.

---
## Using Your Skill

### Direct API Usage

```python
import json
import zipfile

from openai import OpenAI

# Extract the package
with zipfile.ZipFile('react-minimax.zip', 'r') as zf:
    with zf.open('system_instructions.txt') as f:
        system_instructions = f.read().decode('utf-8')

    # Load metadata
    with zf.open('minimax_metadata.json') as f:
        metadata = json.load(f)

# Initialize the MiniMax client (OpenAI-compatible)
client = OpenAI(
    api_key="eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...",
    base_url="https://api.minimax.io/v1",
)

# Use with chat completions
response = client.chat.completions.create(
    model="MiniMax-M2.7",
    messages=[
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": "How do I create a React component?"},
    ],
    temperature=0.3,
    max_tokens=2000,
)

print(response.choices[0].message.content)
```

### With Knowledge Files

```python
import zipfile
from pathlib import Path

# Extract the knowledge files
with zipfile.ZipFile('react-minimax.zip', 'r') as zf:
    zf.extractall('extracted_skill')

# Read all knowledge files
knowledge_dir = Path('extracted_skill/knowledge_files')
knowledge_files = []
for md_file in knowledge_dir.glob('*.md'):
    knowledge_files.append({
        'name': md_file.name,
        'content': md_file.read_text(),
    })

# Include them in the context (truncate if too long)
context = "\n\n".join(f"## {kf['name']}\n{kf['content'][:5000]}"
                      for kf in knowledge_files[:5])

response = client.chat.completions.create(
    model="MiniMax-M2.7",
    messages=[
        {"role": "system", "content": system_instructions},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: What are React hooks?"},
    ],
)
```

---
## API Reference

### SkillAdaptor Methods

```python
from skill_seekers.cli.adaptors import get_adaptor

# Get the MiniMax adaptor
adaptor = get_adaptor('minimax')

# Format SKILL.md as system instructions
instructions = adaptor.format_skill_md(skill_dir, metadata)

# Package the skill
package_path = adaptor.package(skill_dir, output_path)

# Validate the package with the MiniMax API
result = adaptor.upload(package_path, api_key)
print(result['message'])  # Validation result

# Enhance SKILL.md
success = adaptor.enhance(skill_dir, api_key)
```

### Environment Variables

| Variable | Description | Required |
|----------|-------------|----------|
| `MINIMAX_API_KEY` | Your MiniMax API key (JWT format) | Yes |

---
## Troubleshooting

### Invalid API Key Format

**Error:** `Invalid API key format`

**Solution:** MiniMax API keys use the JWT format, starting with `eyJ`. Check:
```bash
# Should start with 'eyJ'
echo $MINIMAX_API_KEY | head -c 10
# Output: eyJhbGciOi
```

### OpenAI Library Not Installed

**Error:** `ModuleNotFoundError: No module named 'openai'`

**Solution:**
```bash
pip install skill-seekers[minimax]
# or
pip install "openai>=1.0.0"
```

### Upload Timeout

**Error:** `Upload timed out`

**Solution:**
- Check your internet connection
- Try again (it may be a temporary network issue)
- Verify the API key is correct
- Check the MiniMax platform status

### Connection Error

**Error:** `Connection error`

**Solution:**
- Verify internet connectivity
- Check that the MiniMax API endpoint is reachable:
```bash
curl https://api.minimax.io/v1/models
```
- Try a VPN if you are in a restricted region

### Package Validation Failed

**Error:** `Invalid package: system_instructions.txt not found`

**Solution:**
- Ensure SKILL.md exists before packaging
- Check the package contents:
```bash
unzip -l react-minimax.zip
```
- Re-package the skill
## Best Practices

### 1. Keep References Organized

Structure your documentation:
```
output/react/
├── SKILL.md                  # Main instructions
├── references/
│   ├── 01-getting-started.md
│   ├── 02-components.md
│   ├── 03-hooks.md
│   └── 04-api-reference.md
└── assets/
    └── diagrams/
```

### 2. Use Enhancement

Always enhance before packaging:
```bash
# Enhancement improves the quality of the system instructions
skill-seekers enhance output/react/ --target minimax
```

### 3. Test Before Deployment

```bash
# Validate the package
skill-seekers upload react-minimax.zip --target minimax

# If successful, the package is ready to use
```

### 4. Version Your Skills

```bash
# Include the version in the output name
skill-seekers package output/react/ --target minimax --output react-v2.0-minimax.zip
```

---
## Comparison with Other Platforms

| Feature | MiniMax | Claude | Gemini | OpenAI |
|---------|---------|--------|--------|--------|
| **Format** | ZIP | ZIP | tar.gz | ZIP |
| **Upload** | Validation | Full API | Full API | Full API |
| **Enhancement** | MiniMax-M2.7 | Claude Sonnet | Gemini 2.0 | GPT-4o |
| **API Type** | OpenAI-compatible | Anthropic | Google | OpenAI |
| **Key Format** | JWT (eyJ...) | sk-ant... | AIza... | sk-... |
| **Knowledge Files** | Included in ZIP | Included | Included | Vector Store |

---
## Advanced Usage

### Custom Enhancement Prompt

Customize enhancement programmatically:

```python
from pathlib import Path

from skill_seekers.cli.adaptors import get_adaptor

adaptor = get_adaptor('minimax')
skill_dir = Path('output/react')

# Build a custom prompt
references = adaptor._read_reference_files(skill_dir / 'references')
prompt = adaptor._build_enhancement_prompt(
    skill_name='React',
    references=references,
    current_skill_md=(skill_dir / 'SKILL.md').read_text(),
)

# Customize the prompt
prompt += "\n\nADDITIONAL FOCUS: Emphasize React 18 concurrent features."

# Use it with your own API call
```

### Batch Processing

```bash
# Process multiple frameworks
for framework in react vue angular; do
    skill-seekers scrape --config configs/${framework}.json
    skill-seekers enhance output/${framework}/ --target minimax
    skill-seekers package output/${framework}/ --target minimax --output ${framework}-minimax.zip
done
```

---
## Resources

- [MiniMax Platform](https://platform.minimaxi.com/)
- [MiniMax API Documentation](https://platform.minimaxi.com/document)
- [OpenAI Python Client](https://github.com/openai/openai-python)
- [Multi-LLM Support Guide](MULTI_LLM_SUPPORT.md)

---

## Next Steps

1. Get your [MiniMax API key](https://platform.minimaxi.com/)
2. Install the dependencies: `pip install skill-seekers[minimax]`
3. Try the [Quick Start example](#complete-workflow)
4. Explore [advanced usage](#advanced-usage) patterns

For help, see [Troubleshooting](#troubleshooting) or open an issue on GitHub.
@@ -9,6 +9,7 @@ Skill Seekers supports multiple LLM platforms through a clean adaptor system. Th
 | **Claude AI** | ✅ Full Support | ZIP + YAML | ✅ Automatic | ✅ Yes | ANTHROPIC_API_KEY |
 | **Google Gemini** | ✅ Full Support | tar.gz | ✅ Automatic | ✅ Yes | GOOGLE_API_KEY |
 | **OpenAI ChatGPT** | ✅ Full Support | ZIP + Vector Store | ✅ Automatic | ✅ Yes | OPENAI_API_KEY |
+| **MiniMax AI** | ✅ Full Support | ZIP | ✅ Validation | ✅ Yes | MINIMAX_API_KEY |
 | **Generic Markdown** | ✅ Export Only | ZIP | ❌ Manual | ❌ No | None |

 ## Quick Start
@@ -108,6 +109,9 @@ pip install skill-seekers[gemini]
 # OpenAI ChatGPT support
 pip install skill-seekers[openai]

+# MiniMax AI support
+pip install skill-seekers[minimax]
+
 # All LLM platforms
 pip install skill-seekers[all-llms]
@@ -150,6 +154,13 @@ pip install -e .[all-llms]
 - API: Assistants API + Vector Store
 - Enhancement: GPT-4o

+**MiniMax AI:**
+- Format: ZIP archive
+- SKILL.md -> `system_instructions.txt` (plain text, no frontmatter)
+- Structure: `system_instructions.txt`, `knowledge_files/`, `minimax_metadata.json`
+- API: OpenAI-compatible chat completions
+- Enhancement: MiniMax-M2.7
+
 **Generic Markdown:**
 - Format: ZIP archive
 - Structure: `README.md`, `references/`, `DOCUMENTATION.md` (combined)
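The SKILL.md → `system_instructions.txt` conversion described in the hunk above drops the YAML frontmatter and keeps only the plain-text body. A minimal sketch of that step; the helper name is an assumption for illustration, not the adaptor's actual API:

```python
def strip_frontmatter(skill_md: str) -> str:
    """Drop a leading YAML frontmatter block ('---' ... '---'), keep the body."""
    lines = skill_md.splitlines()
    if lines and lines[0].strip() == "---":
        for i in range(1, len(lines)):
            if lines[i].strip() == "---":
                # Everything after the closing delimiter is the body.
                return "\n".join(lines[i + 1:]).lstrip("\n")
    return skill_md  # no frontmatter; return unchanged


skill_md = """---
name: react
description: React documentation skill
---
# React Skill

Answer questions using the bundled references.
"""
print(strip_frontmatter(skill_md))
```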
@@ -174,6 +185,11 @@ export GOOGLE_API_KEY=AIzaSy...
 export OPENAI_API_KEY=sk-proj-...
 ```

+**MiniMax AI:**
+```bash
+export MINIMAX_API_KEY=your-key
+```
+
 ## Complete Workflow Examples

 ### Workflow 1: Claude AI (Default)
@@ -238,7 +254,29 @@ skill-seekers upload react-openai.zip --target openai
 # Access at: https://platform.openai.com/assistants/
 ```

-### Workflow 4: Export to All Platforms
+### Workflow 4: MiniMax AI
+
+```bash
+# Setup (one-time)
+pip install skill-seekers[minimax]
+export MINIMAX_API_KEY=your-key
+
+# 1. Scrape (universal)
+skill-seekers scrape --config configs/react.json
+
+# 2. Enhance with MiniMax-M2.7
+skill-seekers enhance output/react/ --target minimax
+
+# 3. Package for MiniMax
+skill-seekers package output/react/ --target minimax
+
+# 4. Upload to MiniMax (validates with the API)
+skill-seekers upload react-minimax.zip --target minimax
+
+# Access at: https://platform.minimaxi.com/
+```
+
+### Workflow 5: Export to All Platforms

 ```bash
 # Install all platforms
@@ -251,12 +289,14 @@ skill-seekers scrape --config configs/react.json
 skill-seekers package output/react/ --target claude
 skill-seekers package output/react/ --target gemini
 skill-seekers package output/react/ --target openai
+skill-seekers package output/react/ --target minimax
 skill-seekers package output/react/ --target markdown

 # Result:
 # - react.zip (Claude)
 # - react-gemini.tar.gz (Gemini)
 # - react-openai.zip (OpenAI)
+# - react-minimax.zip (MiniMax)
 # - react-markdown.zip (Universal)
 ```
@@ -300,7 +340,7 @@ from skill_seekers.cli.adaptors import list_platforms, is_platform_available

 # List all registered platforms
 platforms = list_platforms()
-print(platforms)  # ['claude', 'gemini', 'openai', 'markdown']
+print(platforms)  # ['claude', 'gemini', 'minimax', 'openai', 'markdown']

 # Check if a platform is available
 if is_platform_available('gemini'):
@@ -323,6 +363,7 @@ For detailed platform-specific instructions, see:

- [Claude AI Integration](CLAUDE_INTEGRATION.md) (default)
- [Google Gemini Integration](GEMINI_INTEGRATION.md)
- [OpenAI ChatGPT Integration](OPENAI_INTEGRATION.md)
- [MiniMax AI Integration](MINIMAX_INTEGRATION.md)

## Troubleshooting
@@ -340,6 +381,8 @@ pip install skill-seekers[gemini]

**Solution:**
```bash
pip install skill-seekers[openai]

# or for MiniMax (also uses the openai library)
pip install skill-seekers[minimax]
```

### API Key Issues
@@ -350,6 +393,7 @@ pip install skill-seekers[openai]

- Claude: `sk-ant-...`
- Gemini: `AIza...`
- OpenAI: `sk-proj-...` or `sk-...`
- MiniMax: Any valid API key string
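The key shapes listed above lend themselves to a quick prefix heuristic; an illustrative sketch (`KEY_RULES` and `looks_valid` are hypothetical names, not skill-seekers APIs — the real adaptors each implement their own `validate_api_key`):

```python
# Hypothetical helper mirroring the key formats listed above.
KEY_RULES = {
    "claude": lambda k: k.startswith("sk-ant-"),
    "gemini": lambda k: k.startswith("AIza"),
    "openai": lambda k: k.startswith("sk-"),   # covers sk-proj-... as well
    "minimax": lambda k: len(k.strip()) > 10,  # opaque string, length check only
}


def looks_valid(platform: str, key: str) -> bool:
    # Unknown platforms fail closed rather than raising.
    rule = KEY_RULES.get(platform)
    return bool(rule and rule(key))
```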
### Package Format Errors
@@ -380,6 +424,7 @@ A: Yes, each platform uses its own enhancement model:

- Claude: Claude Sonnet 4
- Gemini: Gemini 2.0 Flash
- OpenAI: GPT-4o
- MiniMax: MiniMax-M2.7

**Q: What if I don't want to upload automatically?**
@@ -89,6 +89,11 @@ openai = [

    "openai>=1.0.0",
]

# MiniMax AI support (uses OpenAI-compatible API)
minimax = [
    "openai>=1.0.0",
]

# All LLM platforms combined
all-llms = [
    "google-generativeai>=0.8.0",
@@ -3,7 +3,7 @@

Multi-LLM Adaptor Registry

Provides factory function to get platform-specific adaptors for skill generation.
Supports Claude AI, Google Gemini, OpenAI ChatGPT, MiniMax AI, and generic Markdown export.
"""

from .base import SkillAdaptor, SkillMetadata

@@ -69,6 +69,11 @@ try:
except ImportError:
    PineconeAdaptor = None

try:
    from .minimax import MiniMaxAdaptor
except ImportError:
    MiniMaxAdaptor = None


# Registry of available adaptors
ADAPTORS: dict[str, type[SkillAdaptor]] = {}

@@ -98,6 +103,8 @@ if HaystackAdaptor:
    ADAPTORS["haystack"] = HaystackAdaptor
if PineconeAdaptor:
    ADAPTORS["pinecone"] = PineconeAdaptor
if MiniMaxAdaptor:
    ADAPTORS["minimax"] = MiniMaxAdaptor


def get_adaptor(platform: str, config: dict = None) -> SkillAdaptor:
@@ -105,7 +112,7 @@ def get_adaptor(platform: str, config: dict = None) -> SkillAdaptor:
    Factory function to get platform-specific adaptor instance.

    Args:
        platform: Platform identifier ('claude', 'gemini', 'openai', 'minimax', 'markdown')
        config: Optional platform-specific configuration

    Returns:
@@ -116,6 +123,7 @@ def get_adaptor(platform: str, config: dict = None) -> SkillAdaptor:

    Examples:
        >>> adaptor = get_adaptor('claude')
        >>> adaptor = get_adaptor('minimax')
        >>> adaptor = get_adaptor('gemini', {'api_version': 'v1beta'})
    """
    if platform not in ADAPTORS:
@@ -141,7 +149,7 @@ def list_platforms() -> list[str]:

    Examples:
        >>> list_platforms()
        ['claude', 'gemini', 'openai', 'minimax', 'markdown']
    """
    return list(ADAPTORS.keys())
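The try/except import guards above are what make each platform an optional extra: a missing dependency simply leaves the platform out of the registry. A standalone sketch of the same pattern, using a dummy adaptor class and a deliberately nonexistent module in place of the real imports:

```python
# Standalone sketch of the optional-dependency registry pattern.
ADAPTORS = {}


class MarkdownAdaptor:
    # Stands in for a real adaptor class with no extra dependencies.
    PLATFORM = "markdown"


try:
    # `some_missing_extra` intentionally does not exist, simulating an
    # uninstalled extra; ModuleNotFoundError is a subclass of ImportError.
    from some_missing_extra import MiniMaxAdaptor
except ImportError:
    MiniMaxAdaptor = None

ADAPTORS["markdown"] = MarkdownAdaptor
if MiniMaxAdaptor:
    ADAPTORS["minimax"] = MiniMaxAdaptor


def list_platforms() -> list[str]:
    return list(ADAPTORS.keys())


print(list_platforms())
```

Callers never see an ImportError; unavailable platforms are just absent from `list_platforms()`.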
src/skill_seekers/cli/adaptors/minimax.py (new file, 503 lines)
@@ -0,0 +1,503 @@
#!/usr/bin/env python3
"""
MiniMax AI Adaptor

Implements platform-specific handling for MiniMax AI skills.
Uses MiniMax's OpenAI-compatible API for AI enhancement with M2.7 model.
"""

import json
import zipfile
from pathlib import Path
from typing import Any

from .base import SkillAdaptor, SkillMetadata
from skill_seekers.cli.arguments.common import DEFAULT_CHUNK_TOKENS, DEFAULT_CHUNK_OVERLAP_TOKENS


class MiniMaxAdaptor(SkillAdaptor):
    """
    MiniMax AI platform adaptor.

    Handles:
    - System instructions format (plain text, no YAML frontmatter)
    - ZIP packaging with knowledge files
    - AI enhancement using MiniMax-M2.7
    """

    PLATFORM = "minimax"
    PLATFORM_NAME = "MiniMax AI"
    DEFAULT_API_ENDPOINT = "https://api.minimax.io/v1"

    def format_skill_md(self, skill_dir: Path, metadata: SkillMetadata) -> str:
        """
        Format SKILL.md as system instructions for MiniMax AI.

        MiniMax uses OpenAI-compatible chat completions, so instructions
        are formatted as clear system prompts without YAML frontmatter.

        Args:
            skill_dir: Path to skill directory
            metadata: Skill metadata

        Returns:
            Formatted instructions for MiniMax AI
        """
        existing_content = self._read_existing_content(skill_dir)

        if existing_content and len(existing_content) > 100:
            content_body = f"""You are an expert assistant for {metadata.name}.

{metadata.description}

Use the attached knowledge files to provide accurate, detailed answers about {metadata.name}.

{existing_content}

## How to Assist Users

When users ask questions:
1. Search the knowledge files for relevant information
2. Provide clear, practical answers with code examples
3. Reference specific documentation sections when helpful
4. Be concise but thorough

Always prioritize accuracy by consulting the knowledge base before responding."""
        else:
            content_body = f"""You are an expert assistant for {metadata.name}.

{metadata.description}

## Your Knowledge Base

You have access to comprehensive documentation files about {metadata.name}. Use these files to provide accurate answers to user questions.

{self._generate_toc(skill_dir)}

## Quick Reference

{self._extract_quick_reference(skill_dir)}

## How to Assist Users

When users ask questions about {metadata.name}:

1. **Search the knowledge files** - Find relevant information in the documentation
2. **Provide code examples** - Include practical, working code snippets
3. **Reference documentation** - Cite specific sections when helpful
4. **Be practical** - Focus on real-world usage and best practices
5. **Stay accurate** - Always verify information against the knowledge base

## Response Guidelines

- Keep answers clear and concise
- Use proper code formatting with language tags
- Provide both simple and detailed explanations as needed
- Suggest related topics when relevant
- Admit when information isn't in the knowledge base

Always prioritize accuracy by consulting the attached documentation files before responding."""

        return content_body
    def package(
        self,
        skill_dir: Path,
        output_path: Path,
        enable_chunking: bool = False,
        chunk_max_tokens: int = DEFAULT_CHUNK_TOKENS,
        preserve_code_blocks: bool = True,
        chunk_overlap_tokens: int = DEFAULT_CHUNK_OVERLAP_TOKENS,
    ) -> Path:
        """
        Package skill into ZIP file for MiniMax AI.

        Creates MiniMax-compatible structure:
        - system_instructions.txt (main instructions)
        - knowledge_files/*.md (reference files)
        - minimax_metadata.json (skill metadata)

        Args:
            skill_dir: Path to skill directory
            output_path: Output path/filename for ZIP

        Returns:
            Path to created ZIP file
        """
        skill_dir = Path(skill_dir)
        output_path = Path(output_path)

        if output_path.is_dir() or str(output_path).endswith("/"):
            output_path = Path(output_path) / f"{skill_dir.name}-minimax.zip"
        elif not str(output_path).endswith(".zip") and not str(output_path).endswith(
            "-minimax.zip"
        ):
            output_str = str(output_path).replace(".zip", "-minimax.zip")
            if not output_str.endswith(".zip"):
                output_str += ".zip"
            output_path = Path(output_str)

        output_path.parent.mkdir(parents=True, exist_ok=True)

        with zipfile.ZipFile(output_path, "w", zipfile.ZIP_DEFLATED) as zf:
            skill_md = skill_dir / "SKILL.md"
            if skill_md.exists():
                instructions = skill_md.read_text(encoding="utf-8")
                zf.writestr("system_instructions.txt", instructions)

            refs_dir = skill_dir / "references"
            if refs_dir.exists():
                for ref_file in refs_dir.rglob("*.md"):
                    if ref_file.is_file() and not ref_file.name.startswith("."):
                        arcname = f"knowledge_files/{ref_file.name}"
                        zf.write(ref_file, arcname)

            metadata = {
                "platform": "minimax",
                "name": skill_dir.name,
                "version": "1.0.0",
                "created_with": "skill-seekers",
                "model": "MiniMax-M2.7",
                "api_base": self.DEFAULT_API_ENDPOINT,
            }

            zf.writestr("minimax_metadata.json", json.dumps(metadata, indent=2))

        return output_path
    def upload(self, package_path: Path, api_key: str, **kwargs) -> dict[str, Any]:
        """
        Upload packaged skill to MiniMax AI.

        MiniMax uses an OpenAI-compatible chat completion API.
        This method validates the package and prepares it for use
        with the MiniMax API.

        Args:
            package_path: Path to skill ZIP file
            api_key: MiniMax API key
            **kwargs: Additional arguments (model, etc.)

        Returns:
            Dictionary with upload result
        """
        package_path = Path(package_path)
        if not package_path.exists():
            return {
                "success": False,
                "skill_id": None,
                "url": None,
                "message": f"File not found: {package_path}",
            }

        if package_path.suffix != ".zip":
            return {
                "success": False,
                "skill_id": None,
                "url": None,
                "message": f"Not a ZIP file: {package_path}",
            }

        try:
            from openai import OpenAI, APITimeoutError, APIConnectionError
        except ImportError:
            return {
                "success": False,
                "skill_id": None,
                "url": None,
                "message": "openai library not installed. Run: pip install openai",
            }

        try:
            import tempfile

            with tempfile.TemporaryDirectory() as temp_dir:
                with zipfile.ZipFile(package_path, "r") as zf:
                    zf.extractall(temp_dir)

                temp_path = Path(temp_dir)

                instructions_file = temp_path / "system_instructions.txt"
                if not instructions_file.exists():
                    return {
                        "success": False,
                        "skill_id": None,
                        "url": None,
                        "message": "Invalid package: system_instructions.txt not found",
                    }

                instructions = instructions_file.read_text(encoding="utf-8")

                metadata_file = temp_path / "minimax_metadata.json"
                skill_name = package_path.stem
                model = kwargs.get("model", "MiniMax-M2.7")

                if metadata_file.exists():
                    with open(metadata_file) as f:
                        metadata = json.load(f)
                    skill_name = metadata.get("name", skill_name)
                    model = metadata.get("model", model)

                knowledge_dir = temp_path / "knowledge_files"
                knowledge_count = 0
                if knowledge_dir.exists():
                    knowledge_count = len(list(knowledge_dir.glob("*.md")))

                client = OpenAI(
                    api_key=api_key,
                    base_url=self.DEFAULT_API_ENDPOINT,
                )

                client.chat.completions.create(
                    model=model,
                    messages=[
                        {"role": "system", "content": instructions},
                        {
                            "role": "user",
                            "content": f"Confirm you are ready to assist with {skill_name}. Reply briefly.",
                        },
                    ],
                    temperature=0.3,
                    max_tokens=100,
                )

                return {
                    "success": True,
                    "skill_id": None,
                    "url": "https://platform.minimaxi.com/",
                    "message": f"Skill '{skill_name}' validated with MiniMax {model} ({knowledge_count} knowledge files)",
                }

        except APITimeoutError:
            return {
                "success": False,
                "skill_id": None,
                "url": None,
                "message": "Upload timed out. Try again.",
            }
        except APIConnectionError:
            return {
                "success": False,
                "skill_id": None,
                "url": None,
                "message": "Connection error. Check your internet connection.",
            }
        except Exception as e:
            return {
                "success": False,
                "skill_id": None,
                "url": None,
                "message": f"Upload failed: {str(e)}",
            }
    def validate_api_key(self, api_key: str) -> bool:
        """
        Validate MiniMax API key format.

        MiniMax API keys are opaque strings. We only check for
        a non-empty key with a reasonable minimum length.

        Args:
            api_key: API key to validate

        Returns:
            True if key format appears valid
        """
        key = api_key.strip()
        return len(key) > 10

    def get_env_var_name(self) -> str:
        """
        Get environment variable name for MiniMax API key.

        Returns:
            'MINIMAX_API_KEY'
        """
        return "MINIMAX_API_KEY"

    def supports_enhancement(self) -> bool:
        """
        MiniMax supports AI enhancement via MiniMax-M2.7.

        Returns:
            True
        """
        return True
    def enhance(self, skill_dir: Path, api_key: str) -> bool:
        """
        Enhance SKILL.md using MiniMax-M2.7 API.

        Uses MiniMax's OpenAI-compatible API endpoint for enhancement.

        Args:
            skill_dir: Path to skill directory
            api_key: MiniMax API key

        Returns:
            True if enhancement succeeded
        """
        try:
            from openai import OpenAI
        except ImportError:
            print("❌ Error: openai package not installed")
            print("Install with: pip install openai")
            return False

        skill_dir = Path(skill_dir)
        references_dir = skill_dir / "references"
        skill_md_path = skill_dir / "SKILL.md"

        print("📖 Reading reference documentation...")
        references = self._read_reference_files(references_dir)

        if not references:
            print("❌ No reference files found to analyze")
            return False

        print(f"   ✓ Read {len(references)} reference files")
        total_size = sum(len(c) for c in references.values())
        print(f"   ✓ Total size: {total_size:,} characters\n")

        current_skill_md = None
        if skill_md_path.exists():
            current_skill_md = skill_md_path.read_text(encoding="utf-8")
            print(f"   ℹ Found existing SKILL.md ({len(current_skill_md)} chars)")
        else:
            print("   ℹ No existing SKILL.md, will create new one")

        prompt = self._build_enhancement_prompt(skill_dir.name, references, current_skill_md)

        print("\n🤖 Asking MiniMax-M2.7 to enhance SKILL.md...")
        print(f"   Input: {len(prompt):,} characters")

        try:
            client = OpenAI(
                api_key=api_key,
                base_url=self.DEFAULT_API_ENDPOINT,
            )

            response = client.chat.completions.create(
                model="MiniMax-M2.7",
                messages=[
                    {
                        "role": "system",
                        "content": "You are an expert technical writer creating system instructions for MiniMax AI.",
                    },
                    {"role": "user", "content": prompt},
                ],
                temperature=0.3,
                max_tokens=4096,
            )

            enhanced_content = response.choices[0].message.content
            print(f"   ✓ Generated enhanced SKILL.md ({len(enhanced_content)} chars)\n")

            if skill_md_path.exists():
                backup_path = skill_md_path.with_suffix(".md.backup")
                skill_md_path.rename(backup_path)
                print(f"   💾 Backed up original to: {backup_path.name}")

            skill_md_path.write_text(enhanced_content, encoding="utf-8")
            print("   ✅ Saved enhanced SKILL.md")

            return True

        except Exception as e:
            print(f"❌ Error calling MiniMax API: {e}")
            return False
    def _read_reference_files(
        self, references_dir: Path, max_chars: int = 200000
    ) -> dict[str, str]:
        """
        Read reference markdown files from skill directory.

        Args:
            references_dir: Path to references directory
            max_chars: Maximum total characters to read

        Returns:
            Dictionary mapping filename to content
        """
        if not references_dir.exists():
            return {}

        references = {}
        total_chars = 0

        for ref_file in sorted(references_dir.glob("*.md")):
            if total_chars >= max_chars:
                break

            try:
                content = ref_file.read_text(encoding="utf-8")
                if len(content) > 30000:
                    content = content[:30000] + "\n\n...(truncated)"

                references[ref_file.name] = content
                total_chars += len(content)

            except Exception as e:
                print(f"   ⚠️  Could not read {ref_file.name}: {e}")

        return references
    def _build_enhancement_prompt(
        self, skill_name: str, references: dict[str, str], current_skill_md: str = None
    ) -> str:
        """
        Build MiniMax API prompt for enhancement.

        Args:
            skill_name: Name of the skill
            references: Dictionary of reference content
            current_skill_md: Existing SKILL.md content (optional)

        Returns:
            Enhancement prompt for MiniMax-M2.7
        """
        prompt = f"""You are creating system instructions for a MiniMax AI assistant about: {skill_name}

I've scraped documentation and organized it into reference files. Your job is to create EXCELLENT system instructions that will help the assistant use this documentation effectively.

CURRENT INSTRUCTIONS:
{"```" if current_skill_md else "(none - create from scratch)"}
{current_skill_md or "No existing instructions"}
{"```" if current_skill_md else ""}

REFERENCE DOCUMENTATION:
"""

        for filename, content in references.items():
            prompt += f"\n\n## {filename}\n```markdown\n{content[:30000]}\n```\n"

        prompt += """

YOUR TASK:
Create enhanced system instructions that include:

1. **Clear role definition** - "You are an expert assistant for [topic]"
2. **Knowledge base description** - What documentation is attached
3. **Excellent Quick Reference** - Extract 5-10 of the BEST, most practical code examples from the reference docs
   - Choose SHORT, clear examples that demonstrate common tasks
   - Include both simple and intermediate examples
   - Annotate examples with clear descriptions
   - Use proper language tags (cpp, python, javascript, json, etc.)
4. **Response guidelines** - How the assistant should help users
5. **Search strategy** - How to find information in the knowledge base
6. **DO NOT use YAML frontmatter** - This is plain text instructions

IMPORTANT:
- Extract REAL examples from the reference docs, don't make them up
- Prioritize SHORT, clear examples (5-20 lines max)
- Make it actionable and practical
- Write clear, direct instructions
- Focus on how the assistant should behave and respond
- NO YAML frontmatter (no --- blocks)

OUTPUT:
Return ONLY the complete system instructions as plain text.
"""

        return prompt
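The archive layout written by `package()` above (instructions file, `knowledge_files/` directory, metadata JSON) can be reproduced with the stdlib alone. A stripped-down sketch of the same structure, not the adaptor itself:

```python
# Stdlib-only sketch reproducing the layout package() writes:
# system_instructions.txt, knowledge_files/*.md, minimax_metadata.json.
import json
import tempfile
import zipfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # Build a throwaway skill directory with one reference file.
    skill = Path(tmp) / "demo-skill"
    (skill / "references").mkdir(parents=True)
    (skill / "SKILL.md").write_text("You are an expert assistant")
    (skill / "references" / "guide.md").write_text("# Guide")

    out = Path(tmp) / f"{skill.name}-minimax.zip"
    with zipfile.ZipFile(out, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("system_instructions.txt", (skill / "SKILL.md").read_text())
        for ref in (skill / "references").rglob("*.md"):
            zf.write(ref, f"knowledge_files/{ref.name}")
        zf.writestr(
            "minimax_metadata.json",
            json.dumps({"platform": "minimax", "name": skill.name}, indent=2),
        )

    with zipfile.ZipFile(out) as zf:
        names = zf.namelist()
    print(sorted(names))
```

The flat `knowledge_files/<name>` arcnames mirror how the adaptor discards the source directory nesting when archiving references.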
tests/test_adaptors/test_minimax_adaptor.py (new file, 517 lines)
@@ -0,0 +1,517 @@
#!/usr/bin/env python3
"""
Tests for MiniMax AI adaptor
"""

import json
import os
import sys
import tempfile
import unittest
import zipfile
from pathlib import Path
from unittest.mock import patch, MagicMock

try:
    from openai import APITimeoutError, APIConnectionError
except ImportError:
    APITimeoutError = None
    APIConnectionError = None

from skill_seekers.cli.adaptors import get_adaptor, is_platform_available
from skill_seekers.cli.adaptors.base import SkillMetadata


class TestMiniMaxAdaptor(unittest.TestCase):
    """Test MiniMax AI adaptor functionality"""

    def setUp(self):
        """Set up test adaptor"""
        self.adaptor = get_adaptor("minimax")
    def test_platform_info(self):
        """Test platform identifiers"""
        self.assertEqual(self.adaptor.PLATFORM, "minimax")
        self.assertEqual(self.adaptor.PLATFORM_NAME, "MiniMax AI")
        self.assertIsNotNone(self.adaptor.DEFAULT_API_ENDPOINT)
        self.assertIn("minimax", self.adaptor.DEFAULT_API_ENDPOINT)

    def test_platform_available(self):
        """Test that minimax platform is registered"""
        self.assertTrue(is_platform_available("minimax"))

    def test_validate_api_key_valid(self):
        """Test valid MiniMax API keys (any string >10 chars)"""
        self.assertTrue(
            self.adaptor.validate_api_key("eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test.key")
        )
        self.assertTrue(self.adaptor.validate_api_key("sk-some-long-api-key-string-here"))
        self.assertTrue(self.adaptor.validate_api_key("  a-valid-key-with-spaces  "))

    def test_validate_api_key_invalid(self):
        """Test invalid API keys"""
        self.assertFalse(self.adaptor.validate_api_key(""))
        self.assertFalse(self.adaptor.validate_api_key("   "))
        self.assertFalse(self.adaptor.validate_api_key("short"))

    def test_get_env_var_name(self):
        """Test environment variable name"""
        self.assertEqual(self.adaptor.get_env_var_name(), "MINIMAX_API_KEY")

    def test_supports_enhancement(self):
        """Test enhancement support"""
        self.assertTrue(self.adaptor.supports_enhancement())

    def test_format_skill_md_no_frontmatter(self):
        """Test that MiniMax format has no YAML frontmatter"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir)

            (skill_dir / "references").mkdir()
            (skill_dir / "references" / "test.md").write_text("# Test content")

            metadata = SkillMetadata(name="test-skill", description="Test skill description")

            formatted = self.adaptor.format_skill_md(skill_dir, metadata)

            self.assertFalse(formatted.startswith("---"))
            self.assertIn("You are an expert assistant", formatted)
            self.assertIn("test-skill", formatted)
            self.assertIn("Test skill description", formatted)

    def test_format_skill_md_with_existing_content(self):
        """Test formatting when SKILL.md already has substantial content"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir)

            (skill_dir / "references").mkdir()
            existing_content = "# Existing Content\n\n" + "x" * 200
            (skill_dir / "SKILL.md").write_text(existing_content)

            metadata = SkillMetadata(name="test-skill", description="Test description")

            formatted = self.adaptor.format_skill_md(skill_dir, metadata)

            self.assertIn("You are an expert assistant", formatted)
            self.assertIn("test-skill", formatted)

    def test_format_skill_md_without_references(self):
        """Test formatting without references directory"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir)

            metadata = SkillMetadata(name="test-skill", description="Test description")

            formatted = self.adaptor.format_skill_md(skill_dir, metadata)

            self.assertIn("You are an expert assistant", formatted)
            self.assertIn("test-skill", formatted)
    def test_package_creates_zip(self):
        """Test that package creates ZIP file with correct structure"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()

            (skill_dir / "SKILL.md").write_text("You are an expert assistant")
            (skill_dir / "references").mkdir()
            (skill_dir / "references" / "test.md").write_text("# Reference")

            output_dir = Path(temp_dir) / "output"
            output_dir.mkdir()

            package_path = self.adaptor.package(skill_dir, output_dir)

            self.assertTrue(package_path.exists())
            self.assertTrue(str(package_path).endswith(".zip"))
            self.assertIn("minimax", package_path.name)

            with zipfile.ZipFile(package_path, "r") as zf:
                names = zf.namelist()
                self.assertIn("system_instructions.txt", names)
                self.assertIn("minimax_metadata.json", names)
                self.assertTrue(any("knowledge_files" in name for name in names))

    def test_package_metadata_content(self):
        """Test that packaged ZIP contains correct metadata"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()

            (skill_dir / "SKILL.md").write_text("Test instructions")
            (skill_dir / "references").mkdir()
            (skill_dir / "references" / "guide.md").write_text("# User Guide")

            output_dir = Path(temp_dir) / "output"
            output_dir.mkdir()

            package_path = self.adaptor.package(skill_dir, output_dir)

            with zipfile.ZipFile(package_path, "r") as zf:
                instructions = zf.read("system_instructions.txt").decode("utf-8")
                self.assertEqual(instructions, "Test instructions")

                self.assertIn("knowledge_files/guide.md", zf.namelist())

                metadata_content = zf.read("minimax_metadata.json").decode("utf-8")
                metadata = json.loads(metadata_content)
                self.assertEqual(metadata["platform"], "minimax")
                self.assertEqual(metadata["name"], "test-skill")
                self.assertEqual(metadata["model"], "MiniMax-M2.7")
                self.assertIn("minimax", metadata["api_base"])

    def test_package_output_path_as_file(self):
        """Test packaging when output_path is a file path"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()
            (skill_dir / "SKILL.md").write_text("Test")

            output_file = Path(temp_dir) / "output" / "custom-name-minimax.zip"
            output_file.parent.mkdir(parents=True, exist_ok=True)

            package_path = self.adaptor.package(skill_dir, output_file)

            self.assertTrue(package_path.exists())
            self.assertTrue(str(package_path).endswith(".zip"))

    def test_package_without_references(self):
        """Test packaging without reference files"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()
            (skill_dir / "SKILL.md").write_text("Test instructions")

            output_dir = Path(temp_dir) / "output"
            output_dir.mkdir()

            package_path = self.adaptor.package(skill_dir, output_dir)

            self.assertTrue(package_path.exists())
            with zipfile.ZipFile(package_path, "r") as zf:
                names = zf.namelist()
                self.assertIn("system_instructions.txt", names)
                self.assertIn("minimax_metadata.json", names)
                self.assertFalse(any("knowledge_files" in name for name in names))
    def test_upload_missing_library(self):
        """Test upload when openai library is not installed"""
        with tempfile.NamedTemporaryFile(suffix=".zip") as tmp:
            with patch.dict(sys.modules, {"openai": None}):
                result = self.adaptor.upload(Path(tmp.name), "test-api-key")

        self.assertFalse(result["success"])
        self.assertIn("openai", result["message"])
        self.assertIn("not installed", result["message"])

    def test_upload_invalid_file(self):
        """Test upload with invalid file"""
        result = self.adaptor.upload(Path("/nonexistent/file.zip"), "test-api-key")

        self.assertFalse(result["success"])
        self.assertIn("not found", result["message"].lower())

    def test_upload_wrong_format(self):
        """Test upload with wrong file format"""
        with tempfile.NamedTemporaryFile(suffix=".tar.gz") as tmp:
            result = self.adaptor.upload(Path(tmp.name), "test-api-key")

        self.assertFalse(result["success"])
        self.assertIn("not a zip", result["message"].lower())

    @unittest.skip("covered by test_upload_success_mocked")
    def test_upload_success(self):
        """Test successful upload - skipped (needs real API for integration test)"""
        pass

    def test_enhance_missing_references(self):
        """Test enhance when no reference files exist"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir)

            success = self.adaptor.enhance(skill_dir, "test-api-key")
            self.assertFalse(success)

    @patch("openai.OpenAI")
    def test_enhance_success_mocked(self, mock_openai_class):
        """Test successful enhancement with mocked OpenAI client"""
        mock_client = MagicMock()
        mock_response = MagicMock()
        mock_response.choices = [MagicMock()]
        mock_response.choices[0].message.content = "Enhanced SKILL.md content"
        mock_client.chat.completions.create.return_value = mock_response
        mock_openai_class.return_value = mock_client

        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir)
            refs_dir = skill_dir / "references"
            refs_dir.mkdir()
            (refs_dir / "test.md").write_text("# Test\nContent")
            (skill_dir / "SKILL.md").write_text("Original content")

            success = self.adaptor.enhance(skill_dir, "test-api-key")

            self.assertTrue(success)
            new_content = (skill_dir / "SKILL.md").read_text()
            self.assertEqual(new_content, "Enhanced SKILL.md content")
            backup = skill_dir / "SKILL.md.backup"
            self.assertTrue(backup.exists())
            self.assertEqual(backup.read_text(), "Original content")
            mock_client.chat.completions.create.assert_called_once()

    def test_enhance_missing_library(self):
        """Test enhance when openai library is not installed"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir)
            refs_dir = skill_dir / "references"
            refs_dir.mkdir()
            (refs_dir / "test.md").write_text("Test content")

            with patch.dict(sys.modules, {"openai": None}):
                success = self.adaptor.enhance(skill_dir, "test-api-key")

            self.assertFalse(success)

    def test_read_reference_files(self):
        """Test reading reference files"""
        with tempfile.TemporaryDirectory() as temp_dir:
            refs_dir = Path(temp_dir)
            (refs_dir / "guide.md").write_text("# Guide\nContent here")
            (refs_dir / "api.md").write_text("# API\nAPI docs")

            references = self.adaptor._read_reference_files(refs_dir)

            self.assertEqual(len(references), 2)
            self.assertIn("guide.md", references)
            self.assertIn("api.md", references)

    def test_read_reference_files_empty_dir(self):
        """Test reading from empty references directory"""
        with tempfile.TemporaryDirectory() as temp_dir:
            references = self.adaptor._read_reference_files(Path(temp_dir))
            self.assertEqual(len(references), 0)

    def test_read_reference_files_nonexistent(self):
        """Test reading from nonexistent directory"""
        references = self.adaptor._read_reference_files(Path("/nonexistent/path"))
        self.assertEqual(len(references), 0)

    def test_read_reference_files_truncation(self):
        """Test that large reference files are truncated"""
        with tempfile.TemporaryDirectory() as temp_dir:
            (Path(temp_dir) / "large.md").write_text("x" * 50000)

            references = self.adaptor._read_reference_files(Path(temp_dir))

            self.assertIn("large.md", references)
            self.assertIn("truncated", references["large.md"])
            self.assertLessEqual(len(references["large.md"]), 31000)

    def test_build_enhancement_prompt(self):
        """Test enhancement prompt generation"""
        references = {
            "guide.md": "# User Guide\nContent here",
            "api.md": "# API Reference\nAPI docs",
        }

        prompt = self.adaptor._build_enhancement_prompt(
            "test-skill", references, "Existing SKILL.md content"
        )

        self.assertIn("test-skill", prompt)
        self.assertIn("guide.md", prompt)
        self.assertIn("api.md", prompt)
        self.assertIn("Existing SKILL.md content", prompt)
        self.assertIn("MiniMax", prompt)

    def test_build_enhancement_prompt_no_existing(self):
        """Test enhancement prompt when no existing SKILL.md"""
        references = {"test.md": "# Test\nContent"}

        prompt = self.adaptor._build_enhancement_prompt("test-skill", references, None)

        self.assertIn("test-skill", prompt)
        self.assertIn("create from scratch", prompt)

    def test_config_initialization(self):
        """Test adaptor initializes with config"""
        config = {"custom_model": "MiniMax-M2.5"}
        adaptor = get_adaptor("minimax", config)
        self.assertEqual(adaptor.config, config)

    def test_default_config(self):
        """Test adaptor initializes with empty config by default"""
        self.assertEqual(self.adaptor.config, {})

    def test_package_excludes_backup_files(self):
        """Test that backup files are excluded from package"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()

            (skill_dir / "SKILL.md").write_text("Test instructions")
            (skill_dir / "references").mkdir()
            (skill_dir / "references" / "guide.md").write_text("# Guide")
            (skill_dir / "references" / "guide.md.backup").write_text("# Old backup")

            output_dir = Path(temp_dir) / "output"
            output_dir.mkdir()

            package_path = self.adaptor.package(skill_dir, output_dir)

            with zipfile.ZipFile(package_path, "r") as zf:
                names = zf.namelist()
                self.assertIn("knowledge_files/guide.md", names)
                self.assertNotIn("knowledge_files/guide.md.backup", names)

    @patch("openai.OpenAI")
    def test_upload_success_mocked(self, mock_openai_class):
        """Test successful upload with mocked OpenAI client"""
        mock_client = MagicMock()
        mock_response = MagicMock()
        mock_response.choices = [MagicMock()]
        mock_response.choices[0].message.content = "Ready to assist with Python testing"
        mock_client.chat.completions.create.return_value = mock_response
        mock_openai_class.return_value = mock_client

        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()
            (skill_dir / "SKILL.md").write_text("You are an expert assistant")
            (skill_dir / "references").mkdir()
            (skill_dir / "references" / "test.md").write_text("# Test")

            output_dir = Path(temp_dir) / "output"
            output_dir.mkdir()

            package_path = self.adaptor.package(skill_dir, output_dir)
            result = self.adaptor.upload(package_path, "test-long-api-key-string")

            self.assertTrue(result["success"])
            self.assertIn("validated", result["message"])
            self.assertEqual(result["url"], "https://platform.minimaxi.com/")
            mock_client.chat.completions.create.assert_called_once()

    @unittest.skipUnless(APITimeoutError, "openai library not installed")
    @patch("openai.OpenAI")
    def test_upload_network_error(self, mock_openai_class):
        """Test upload with network timeout error"""
        mock_client = MagicMock()
        mock_client.chat.completions.create.side_effect = APITimeoutError(request=MagicMock())
        mock_openai_class.return_value = mock_client

        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()
            (skill_dir / "SKILL.md").write_text("Test")
            (skill_dir / "references").mkdir()
            (skill_dir / "references" / "test.md").write_text("Content")

            output_dir = Path(temp_dir) / "output"
            output_dir.mkdir()

            package_path = self.adaptor.package(skill_dir, output_dir)
            result = self.adaptor.upload(package_path, "test-long-api-key-string")

            self.assertFalse(result["success"])
            self.assertIn("timed out", result["message"].lower())

    @unittest.skipUnless(APIConnectionError, "openai library not installed")
    @patch("openai.OpenAI")
    def test_upload_connection_error(self, mock_openai_class):
        """Test upload with connection error"""
        mock_client = MagicMock()
        mock_client.chat.completions.create.side_effect = APIConnectionError(request=MagicMock())
        mock_openai_class.return_value = mock_client

        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()
            (skill_dir / "SKILL.md").write_text("Test")
            (skill_dir / "references").mkdir()
            (skill_dir / "references" / "test.md").write_text("Content")

            output_dir = Path(temp_dir) / "output"
            output_dir.mkdir()

            package_path = self.adaptor.package(skill_dir, output_dir)
            result = self.adaptor.upload(package_path, "test-long-api-key-string")

            self.assertFalse(result["success"])
            self.assertIn("connection", result["message"].lower())

    def test_validate_api_key_format(self):
        """Test that API key validation uses length-based check"""
        # Valid - long enough strings
        self.assertTrue(self.adaptor.validate_api_key("eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.test"))
        self.assertTrue(self.adaptor.validate_api_key("sk-api-abc123-long-enough"))
        # Invalid - too short
        self.assertFalse(self.adaptor.validate_api_key("eyJshort"))
        self.assertFalse(self.adaptor.validate_api_key("short"))


class TestMiniMaxAdaptorIntegration(unittest.TestCase):
    """Integration tests for MiniMax AI adaptor (require MINIMAX_API_KEY)"""

    def setUp(self):
        """Set up test adaptor"""
        self.adaptor = get_adaptor("minimax")

    @unittest.skipUnless(
        os.getenv("MINIMAX_API_KEY"), "MINIMAX_API_KEY not set - skipping integration test"
    )
    def test_enhance_with_real_api(self):
        """Test enhancement with real MiniMax API"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir)
            refs_dir = skill_dir / "references"
            refs_dir.mkdir()
            (refs_dir / "test.md").write_text(
                "# Python Testing\n\n"
                "Use pytest for testing:\n"
                "```python\n"
                "def test_example():\n"
                "    assert 1 + 1 == 2\n"
                "```\n"
            )

            api_key = os.getenv("MINIMAX_API_KEY")
            success = self.adaptor.enhance(skill_dir, api_key)

            self.assertTrue(success)
            skill_md = (skill_dir / "SKILL.md").read_text()
            self.assertTrue(len(skill_md) > 100)

    @unittest.skipUnless(
        os.getenv("MINIMAX_API_KEY"), "MINIMAX_API_KEY not set - skipping integration test"
    )
    def test_upload_with_real_api(self):
        """Test upload validation with real MiniMax API"""
        with tempfile.TemporaryDirectory() as temp_dir:
            skill_dir = Path(temp_dir) / "test-skill"
            skill_dir.mkdir()
            (skill_dir / "SKILL.md").write_text("You are an expert assistant for Python testing.")
            (skill_dir / "references").mkdir()
            (skill_dir / "references" / "test.md").write_text("# Test\nContent")

            output_dir = Path(temp_dir) / "output"
            output_dir.mkdir()

            package_path = self.adaptor.package(skill_dir, output_dir)
            api_key = os.getenv("MINIMAX_API_KEY")
            result = self.adaptor.upload(package_path, api_key)

            self.assertTrue(result["success"])
            self.assertIn("validated", result["message"])

    @unittest.skipUnless(
        os.getenv("MINIMAX_API_KEY"), "MINIMAX_API_KEY not set - skipping integration test"
    )
    def test_validate_api_key_real(self):
        """Test validating a real API key"""
        api_key = os.getenv("MINIMAX_API_KEY")
        self.assertTrue(self.adaptor.validate_api_key(api_key))


if __name__ == "__main__":
    unittest.main()
uv.lock (generated)
@@ -5699,7 +5699,7 @@ wheels = [
 
 [[package]]
 name = "skill-seekers"
-version = "3.2.0"
+version = "3.3.0"
 source = { editable = "." }
 dependencies = [
     { name = "anthropic" },
@@ -5816,6 +5816,9 @@ mcp = [
     { name = "starlette" },
     { name = "uvicorn" },
 ]
+minimax = [
+    { name = "openai" },
+]
 notion = [
     { name = "notion-client" },
 ]
@@ -5930,6 +5933,7 @@ requires-dist = [
     { name = "numpy", marker = "extra == 'embedding'", specifier = ">=1.24.0" },
     { name = "openai", marker = "extra == 'all'", specifier = ">=1.0.0" },
     { name = "openai", marker = "extra == 'all-llms'", specifier = ">=1.0.0" },
+    { name = "openai", marker = "extra == 'minimax'", specifier = ">=1.0.0" },
     { name = "openai", marker = "extra == 'openai'", specifier = ">=1.0.0" },
     { name = "opencv-python-headless", marker = "extra == 'video-full'", specifier = ">=4.9.0" },
     { name = "pathspec", specifier = ">=0.12.1" },
@@ -5978,7 +5982,7 @@ requires-dist = [
     { name = "yt-dlp", marker = "extra == 'video'", specifier = ">=2024.12.0" },
     { name = "yt-dlp", marker = "extra == 'video-full'", specifier = ">=2024.12.0" },
 ]
-provides-extras = ["mcp", "gemini", "openai", "all-llms", "s3", "gcs", "azure", "docx", "epub", "video", "video-full", "chroma", "weaviate", "sentence-transformers", "pinecone", "rag-upload", "all-cloud", "jupyter", "asciidoc", "pptx", "confluence", "notion", "rss", "chat", "embedding", "all"]
+provides-extras = ["mcp", "gemini", "openai", "minimax", "all-llms", "s3", "gcs", "azure", "docx", "epub", "video", "video-full", "chroma", "weaviate", "sentence-transformers", "pinecone", "rag-upload", "all-cloud", "jupyter", "asciidoc", "pptx", "confluence", "notion", "rss", "chat", "embedding", "all"]
 
 [package.metadata.requires-dev]
 dev = [