From 9de71654fc6840143df301b396e574de0bc896fd Mon Sep 17 00:00:00 2001 From: Allen930311 Date: Mon, 9 Mar 2026 18:52:09 +0800 Subject: [PATCH] feat: add unified-diary skill (#246) * feat: add unified-diary skill * chore: refine diary skill metadata for quality bar compliance * fix: address PR feedback - fix NameError, Obsidian path, and add missing script --- skills/diary/.env.example | 8 + skills/diary/.gitignore | 16 + skills/diary/LICENSE | 21 + skills/diary/README.md | 90 ++++ skills/diary/SKILL.md | 160 ++++++ skills/diary/requirements.txt | 1 + skills/diary/scripts/fetch_diaries.py | 84 ++++ skills/diary/scripts/master_diary_sync.py | 270 ++++++++++ skills/diary/scripts/prepare_context.py | 244 +++++++++ skills/diary/scripts/sync_to_notion.py | 469 ++++++++++++++++++ .../diary/templates/global-diary-template.md | 38 ++ .../diary/templates/local-diary-template.md | 23 + 12 files changed, 1424 insertions(+) create mode 100644 skills/diary/.env.example create mode 100644 skills/diary/.gitignore create mode 100644 skills/diary/LICENSE create mode 100644 skills/diary/README.md create mode 100644 skills/diary/SKILL.md create mode 100644 skills/diary/requirements.txt create mode 100644 skills/diary/scripts/fetch_diaries.py create mode 100644 skills/diary/scripts/master_diary_sync.py create mode 100644 skills/diary/scripts/prepare_context.py create mode 100644 skills/diary/scripts/sync_to_notion.py create mode 100644 skills/diary/templates/global-diary-template.md create mode 100644 skills/diary/templates/local-diary-template.md diff --git a/skills/diary/.env.example b/skills/diary/.env.example new file mode 100644 index 00000000..11a6aaf0 --- /dev/null +++ b/skills/diary/.env.example @@ -0,0 +1,8 @@ +# Notion Settings +NOTION_TOKEN="ntn_your_notion_token_here" +NOTION_DIARY_DB="your_notion_database_id_here" + +# Path Settings (Optional, scripts will use sensible defaults if not set) +# DESKTOP_PATH="C:\Users\YourName\OneDrive\Desktop" +# 
GLOBAL_DIARY_ROOT="C:\path\to\your\global\diary\folder" +# OBSIDIAN_DAILY_NOTES="C:\path\to\your\obsidian\vault\10_Daily" diff --git a/skills/diary/.gitignore b/skills/diary/.gitignore new file mode 100644 index 00000000..47d2b466 --- /dev/null +++ b/skills/diary/.gitignore @@ -0,0 +1,16 @@ +# Personal diary data +diary/ + +# Python artifacts +__pycache__/ +*.pyc +*.pyo +*.pyd +.venv/ +venv/ +env/ +.env + +# OS generated files +.DS_Store +Thumbs.db diff --git a/skills/diary/LICENSE b/skills/diary/LICENSE new file mode 100644 index 00000000..14fac913 --- /dev/null +++ b/skills/diary/LICENSE @@ -0,0 +1,21 @@ +MIT License + +Copyright (c) 2026 + +Permission is hereby granted, free of charge, to any person obtaining a copy +of this software and associated documentation files (the "Software"), to deal +in the Software without restriction, including without limitation the rights +to use, copy, modify, merge, publish, distribute, sublicense, and/or sell +copies of the Software, and to permit persons to whom the Software is +furnished to do so, subject to the following conditions: + +The above copyright notice and this permission notice shall be included in all +copies or substantial portions of the Software. + +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR +IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, +FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE +AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER +LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, +OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE +SOFTWARE. 
diff --git a/skills/diary/README.md b/skills/diary/README.md new file mode 100644 index 00000000..df37ebbe --- /dev/null +++ b/skills/diary/README.md @@ -0,0 +1,90 @@ +# 📔 Unified Diary System (Agentic Context-Preserving Logger) v4.1 + +![Version](https://img.shields.io/badge/version-v4.1-blue) +![AI Agent](https://img.shields.io/badge/AI-Agent_Driven-orange) +![Sync](https://img.shields.io/badge/Sync-Notion%20%7C%20Obsidian-lightgrey) + +**Unified Diary System** is a fully automated, anti-pollution AI journaling and synchronization workflow designed specifically for multi-project developers and creators. By leveraging Continuous Tool Calling from AI agents, a single natural-language command automatically executes a five-step pipeline: **Local Project Logging ➔ Project Context Refresh ➔ Global Context Fusion ➔ Cloud Bi-directional Sync ➔ Experience Extraction**, achieving a true "one-shot" seamless record. + +--- + +## ✨ Core Features + +* ⚡ **Agent One-Shot Execution**: Once triggered, the AI completes the entire technical process without interruption, pausing only at the final step to ask for human validation of the extracted "lessons learned". +* 🛡️ **Context Firewall**: Strictly separates "Project Local Diaries" from the "Global Master Diary." This fundamentally solves the severe "Context Pollution / Tag Drift" problem, where the AI hallucinates and mixes up progress between Project A and Project B during daily summaries. +* 🧠 **Automated Lessons Learned**: More than just a timeline of events, the AI proactively extracts "New Rules" or "Optimizations" from the bugs you faced or discoveries you made today, distilling them into your Knowledge Base. +* 🔄 **Seamless Cross-Platform Sync**: Includes built-in scripts to push the final global diary straight to Notion and/or Obsidian with a simple `--sync-only` flag.
+ +--- + +## ๐Ÿ—๏ธ The 5-Step Workflow Architecture + +When a developer types `:{Write a diary entry using the diary skill}` in *any* project directory, the system strictly executes the following atomic operations: + +### Step 1: Local Project Archiving (AI Execution) +1. **Auto-Location**: The AI calls terminal commands (e.g., `pwd`) to identify the current working directory, establishing the "Project Name". +2. **Precision Writing**: It writes today's Git Commits, code changes, and problem solutions in "append mode" exclusively into that project's local directory: `diary/YYYY/MM/YYYY-MM-DD-.md`. + +### Step 1.5: Refresh Project Context (Automation Script) +* **Auto-Execution**: The AI invokes `prepare_context.py` to scan the project's latest directory structure, tech stack, and diary-based action items, generating/updating the `AGENT_CONTEXT.md` at the project root. + +### Step 2: Extracting Global & Project Material (Automation Script) +* **Material Fetching**: The AI automatically executes `fetch_diaries.py`, precisely pulling the "just-written local project diary" and today's "Global Diary (if it exists)", printing both to the terminal for the AI to read. + +### Step 3: AI Smart Fusion & Global Archiving (AI Execution) +* **Seamless Fusion**: The AI mentally sews the two sources from Step 2 together, writing the combined result into the global diary vault: `.../global_skills/auto-skill/diary/YYYY/MM/YYYY-MM-DD.md`. +* **Strict Zoning**: It uses `### ๐Ÿ“ ` tagging to ensure existing project progress is preserved, while new project progress is safely appendedโ€”absolutely no overwriting. + +### Step 4: Cloud Sync & Experience Extraction (Script + Human) +1. **One-Click Push**: The AI calls `master_diary_sync.py --sync-only` to push the data to Notion/Obsidian. +2. **Human Authorization**: The AI extracts today's `๐Ÿ“Œ New Rules` or `๐Ÿ”„ Experience Optimizations` and presents them to the developer. 
Once authorized, these are written to the local Knowledge Base and embedded (e.g., via `qmd embed`). + +--- + +## 📂 Directory Structure + +This system adopts a "Distributed Recording, Centralized Management" architecture: + +```text +📦 Your Computer Environment + ┣ 📂 Project A (e.g., auto-video-editor) + ┃ ┗ 📂 diary/YYYY/MM/ + ┃ ┗ 📜 2026-02-24-auto-video-editor.md <-- Step 1 writes here (Clean, isolated history) + ┣ 📂 Project B (e.g., GSS) + ┃ ┗ 📂 diary/YYYY/MM/ + ┃ ┗ 📜 2026-02-24-GSS.md + ┃ + ┗ 📂 Global Skills & Diary Center (This Repo) + ┣ 📂 scripts/ + ┃ ┣ 📜 fetch_diaries.py <-- Step 2: Material transporter + ┃ ┣ 📜 prepare_context.py <-- Step 1.5: Context refresher + ┃ ┗ 📜 master_diary_sync.py <-- Step 4: Notion/Obsidian sync + ┣ 📂 knowledge-base/ <-- Step 4: AI-extracted lessons + ┗ 📂 diary/YYYY/MM/ + ┗ 📜 2026-02-24.md <-- Step 3: The ultimate fused global log +``` + +--- + +## 🚀 How to Use (Usage) + +After setting up `.env` with your Notion tokens, simply input the following into your CLI/IDE chat while working inside a project: + +```bash +:{Write a diary entry using the diary skill} Today I finished the initial integration of the Google Colab python script and fixed the package version conflicts. +``` + +The system will take over and handle all the filing, merging, and syncing automatically. + +--- + +## 🛠️ Setup & Prerequisites + +1. **Configuration**: Rename `.env.example` to `.env` and fill in your `NOTION_TOKEN`, `NOTION_DIARY_DB`, and the location of your global diary root. +2. **Dependencies**: `pip install -r requirements.txt` +3. **AI Agent**: Requires an AI assistant with Function Calling / Continuous Tool Calling capabilities (such as Cursor, Claude Code, or Gemini CLI frameworks). + +--- + +> **💡 Design Philosophy:** +> Why not just have the AI write directly to the global diary? 
Because we found that when an AI lacks the isolated local project context, it frequently suffers from **Tag Drift** (writing Project A's progress under Project B's header). Through this highly structured "Local First, Global Second" five-step architecture, we completely eliminated the context-pollution pain point in AI-automated logging. diff --git a/skills/diary/SKILL.md b/skills/diary/SKILL.md new file mode 100644 index 00000000..aba22f3f --- /dev/null +++ b/skills/diary/SKILL.md @@ -0,0 +1,160 @@ +--- +name: diary +description: "Unified Diary System: A context-preserving automated logger for multi-project development." +risk: safe +source: self +--- + +# 📔 Unified Diary System + +## When to Use This Skill +Use this skill when you want to summarize progress, write a daily dev log, or perform a daily review while keeping project contexts isolated and synced to Notion/Obsidian. + +> 🚨 **Agent One-Shot Integrity Constraint (Highest Priority)**: Steps 1-4 are an **indivisible atomic workflow**. The AI **MUST use Continuous Tool Calling** to complete all actions in one breath. +> - It is **ABSOLUTELY FORBIDDEN** to output conversational text and wait for the user after completing Step 1, 2, or 3. +> - Example: Immediately after saving the file in Step 1, you must **automatically and instantly** call the terminal tool to execute the script in Step 2. Stopping to chat or asking "Should I continue?" is considered a critical error. Only inform the user explicitly if you are stuck due to a technical error. + +> ⚠️ **Ironclad Rule 1**: The local project diary in Step 1 **must NEVER be polluted with global data**. It only records the current project. +> ⚠️ **Ironclad Rule 2**: When using tools like `run_command` to execute the predefined Python scripts (Step 2, Step 4), set them to `SafeToAutoRun: true` to avoid getting stuck waiting for user authorization. +> ⚠️ **Ironclad Rule 3**: **Never guess the project name**. 
The AI must confirm the current directory name as the project identifier using a terminal command BEFORE taking any action in Step 1. + +--- + +## Step 1: Local Project Archiving (AI Generation) +- **Action 0 (Identify)**: Call the terminal `pwd` (Linux/Mac) or `(Get-Item .).Name` (Windows) to confirm the current folder name. +- **Action 1 (Write)**: Summarize the achievements from the current conversation (Git commits, file changes, task progress), and write them into the **current project folder** at `diary/YYYY/MM/YYYY-MM-DD-ProjectName.md`. +- **Isolation and Naming Rules (Ironclad Rules)**: + - 📄 **Mandatory Filename Suffix**: The local diary filename **MUST** include the project name detected just now. It is **absolutely forbidden** to use a global-level filename (like `2026-02-23.md`) locally. + - ✅ **Pure Content**: Only record content exclusive to the current project. Do not mix in other projects. + - 📝 **Append Mode**: If the project diary already exists, update it by appending; never overwrite the original content. + - 📁 **Auto-Creation**: Create the `diary/YYYY/MM/` subfolders based on the year and month. + - ⚡ **Force Continue**: Once writing is complete, **do not interrupt the conversation; immediately call the terminal tool and proceed to Step 2.** + +## Step 1.5: Refresh Project Context (Automation Script) +- **Prerequisite**: You have confirmed the current project directory path (from Action 0's `pwd` result). +- **Action**: Call the terminal to execute the following command, which automatically scans the project state and generates/updates `AGENT_CONTEXT.md`: + ```powershell + python {diary_system_path}/scripts/prepare_context.py "<current_project_path>" + ``` +- **SafeToAutoRun**: true (safe operation; purely reading and writing local files). +- **Result**: `AGENT_CONTEXT.md` in the project directory is refreshed to the latest state. +- **After Completion**: Force-continue to Step 2; do not wait for user confirmation.
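The Step 1 naming and placement rules above are mechanical enough to sketch in code. A minimal illustration (the `local_diary_path` helper is hypothetical, not one of the shipped scripts; it only shows how the mandatory project-name suffix keeps local diaries distinct from the global `YYYY-MM-DD.md` files):

```python
from datetime import datetime
from pathlib import Path

def local_diary_path(project_root: str) -> Path:
    """Build the Step 1 target path: diary/YYYY/MM/YYYY-MM-DD-<ProjectName>.md

    The suffix is the folder name detected via `pwd` in Action 0;
    a naked date filename is reserved for the global diary only.
    """
    root = Path(project_root).resolve()
    today = datetime.now()
    return (root / "diary" / f"{today:%Y}" / f"{today:%m}"
            / f"{today:%Y-%m-%d}-{root.name}.md")

print(local_diary_path("/home/dev/auto-video-editor"))
```

The same helper would also satisfy the Auto-Creation rule if combined with `path.parent.mkdir(parents=True, exist_ok=True)` before writing.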
+ +## Step 2: Extract Global & Project Material (Script Execution) +- **Action**: Call the extraction script, **passing in the absolute path of the project diary just written in Step 1**. The script will precisely print "Today's Global Progress" and "Current Project Progress". +- **Execution Command**: + ```powershell + python {diary_system_path}/scripts/fetch_diaries.py "<project_diary_absolute_path>" + ``` +- **Result**: The terminal will print the two sets of material side by side. The AI must read the terminal output directly and prepare for mental fusion. + +## Step 3: AI Smart Fusion & Global Archiving (AI Execution) 🧠 +- **Action**: Based on the two materials printed by the terminal in Step 2, complete a **seamless fusion** mentally, then write it to the global diary: `{diary_system_path}/diary/YYYY/MM/YYYY-MM-DD.md`. +- **Context Firewall (Core Mechanism)**: + 1. **No Tag Drift**: The "Global Progress Material" may contain progress from other projects. **It is strictly forbidden to categorize today's conversation achievements under existing project headings belonging to other projects.** + 2. **Priority Definition**: The content marked as `📁 [Current Project Latest Progress]` in Step 2 is the protagonist of today's diary. +- **Rewrite Rules**: + 1. **Safety First**: If the global diary already exists, preserve the original content and append/fuse the new project progress. **Do not overwrite.** + 2. **Precise Zoning**: Ensure there is a dedicated `### 📁 ProjectName` zone for this project. Do not mix content into other project zones. + 3. **Lessons Learned**: Merge and deduplicate; attach action items to every entry. + 4. **Cleanup**: After writing or fusing globally, you **must** force-delete any temporary files created to avoid encoding issues (e.g., `temp_diary.txt`, `fetched_diary.txt`) to keep the workspace clean.
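The "Precise Zoning" rule above can be sketched as a small append-or-replace routine. This is a hypothetical illustration, not part of the shipped scripts (the real fusion is done by the AI, and `master_diary_sync.py` has its own injection logic); it only demonstrates the firewall property that updating one project's `### 📁` zone never touches another's:

```python
import re

def upsert_project_section(global_md: str, project: str, body: str) -> str:
    """Replace this project's '### 📁 <name>' zone if present, else append it.

    Other projects' zones are left byte-for-byte untouched (the
    'context firewall'): the lazy match stops at the next zone marker.
    """
    marker = f"### 📁 {project}"
    block = f"{marker}\n{body.strip()}\n"
    pattern = re.compile(re.escape(marker) + r"\n.*?(?=\n### 📁 |\Z)", re.DOTALL)
    if pattern.search(global_md):
        return pattern.sub(block, global_md, count=1)
    return global_md.rstrip() + "\n\n" + block

doc = "### 📁 GSS\n* old notes\n"
doc = upsert_project_section(doc, "auto-video-editor", "* new progress")
```

Running the same call twice replaces the zone instead of duplicating it, which is what makes re-injection on the same day safe.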
+ +## Step 4: Cloud Sync & Experience Extraction (Script + Human) 🛑 +- **Action 1 (Sync)**: Call the master script to push the global diary to Notion and Obsidian. +- **Execution Command**: + ```powershell + python {diary_system_path}/scripts/master_diary_sync.py --sync-only + ``` +- **Action 2 (Extraction & Forced Pause)**: + 1. The AI extracts "Improvements & Learnings" from the global diary. + 2. Confirm whether it contains entirely new key points not previously recorded (📌 New Rules) or better approaches (🔄 Evolved Rules). + 3. List the results and **WAIT FOR USER CONFIRMATION** (the user says "execute" or "agree"). + 4. After user confirmation, update the `.md` file in `{Knowledge_Base_Path}/` and execute `qmd embed` (if applicable). + +--- +**🎯 Task Acceptance Criteria**: +1. ✅ Project local diary generated (no pollution). +2. ✅ `fetch_diaries.py` called with the absolute path and successfully printed the materials. +3. ✅ AI executed a high-quality rewrite and precisely wrote to the global diary (appended successfully if the file existed). +4. ✅ `--sync-only` successfully pushed to Notion + Obsidian. +5. ✅ Experience extraction presented to the user and authorized. + +--- + +## 📝 Templates and Writing Guidelines + +Strictly apply the following Markdown templates to ensure clarity during Step 1 (Local) and Step 3 (Global Fusion). + +### 💡 Writing Guidelines (For AI) +1. **Dynamic Replacement**: The `{Project Name}` in the template MUST strictly use the folder name grabbed by `pwd` in Step 1. +2. **Concise Deduplication**: When writing the global diary in Step 3, the AI must condense the "🛠️ Execution Details" from the local diary. The global diary focuses only on the general direction and output results. +3. **Mandatory Checkboxes**: All "Next Steps" and "Action Items" must use the Markdown `* [ ]` format so they can be checked off in Obsidian/Notion later.
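Guideline 3's checkbox requirement can be enforced mechanically. A minimal sketch (the `normalize_action_items` helper is hypothetical, not part of the shipped scripts; it shows one way to coerce plain bullets into the `* [ ]` form while leaving already-checked boxes alone):

```python
import re

def normalize_action_items(md: str) -> str:
    """Rewrite plain '- item' / '* item' bullets into '* [ ]' checkboxes.

    Lines that already carry a checkbox ('[ ]', '[x]', '[X]') are preserved,
    so re-running the pass is idempotent.
    """
    out = []
    for line in md.splitlines():
        m = re.match(r"^(\s*)[-*]\s+(?!\[[ xX]\])(.+)$", line)
        out.append(f"{m.group(1)}* [ ] {m.group(2)}" if m else line)
    return "\n".join(out)

print(normalize_action_items("- ship v4.1\n* [x] done already"))
```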
+ +### ๐Ÿ“ Template 1: Project Local Diary (Step 1 Exclusive) + +```markdown +# Project DevLog: {Project Name} +* **๐Ÿ“… Date**: YYYY-MM-DD +* **๐Ÿท๏ธ Tags**: `#Project` `#DevLog` + +--- + +> ๐ŸŽฏ **Progress Summary** +> (Briefly state the core task completed, e.g., "Finished Google Colab environment testing for auto-video-editor") + +### ๐Ÿ› ๏ธ Execution Details & Changes +* **Git Commits**: (List if any) +* **Core File Modifications**: + * ๐Ÿ“„ `path/filename`: Explanation of changes. +* **Technical Implementation**: + * (Record key logic or architecture structural changes) + +### ๐Ÿšจ Troubleshooting +> ๐Ÿ› **Problem Encountered**: (e.g., API error, package conflict) +> ๐Ÿ’ก **Solution**: (Final fix, leave key commands) + +### โญ๏ธ Next Steps +- [ ] (Specific task 1) +- [ ] (Specific task 2) +``` + +--- + +### ๐ŸŒ Template 2: Global Diary (Step 3 Exclusive) + +```markdown +# ๐Ÿ“” YYYY-MM-DD Global Progress Overview + +> ๐ŸŒŸ **Daily Highlight** +> (1-2 sentences summarizing all project progress for the day, synthesized by AI) + +--- + +## ๐Ÿ“ Project Tracking +(โš ๏ธ AI Rule: If file exists, find the corresponding project title and append; NEVER overwrite, keep it clean.) + +### ๐Ÿ”ต {Project A, e.g., auto-video-editor} +* **Today's Progress**: (Condense Step 2 local materials into key points) +* **Action Items**: (Extract next steps) + +### ๐ŸŸข {Project B, e.g., GSS} +* **Today's Progress**: (Condense key points) +* **Action Items**: (Extract next steps) + +--- + +## ๐Ÿง  Improvements & Learnings +(โš ๏ธ Dedicated to Experience Extraction) + +๐Ÿ“Œ **New Rules / Discoveries** +(e.g., Found hidden API limit, or a more efficient python syntax) + +๐Ÿ”„ **Optimizations & Reflections** +(Improvements from past methods) + +--- + +## โœ… Global Action Items +- [ ] (Tasks unrelated to specific projects) +- [ ] (System environment maintenance, etc.) 
+``` diff --git a/skills/diary/requirements.txt b/skills/diary/requirements.txt new file mode 100644 index 00000000..f2293605 --- /dev/null +++ b/skills/diary/requirements.txt @@ -0,0 +1 @@ +requests diff --git a/skills/diary/scripts/fetch_diaries.py b/skills/diary/scripts/fetch_diaries.py new file mode 100644 index 00000000..5b427803 --- /dev/null +++ b/skills/diary/scripts/fetch_diaries.py @@ -0,0 +1,84 @@ +#!/usr/bin/env python3 +""" +Fetch Diaries Context Preparer (Targeted Mode) +Part of the Unified Diary System (Plan A). + +This script no longer scans everything; it takes a targeted approach: +it receives the absolute path of the current project diary (passed in by the AI) +and simultaneously reads today's global diary, then prints both side by side +in the terminal so the AI can perform a safe mental fusion with no omissions +and no overwrites. + +Usage: + python fetch_diaries.py <project_diary_absolute_path> +""" + +import os +import sys +from datetime import datetime +from pathlib import Path + +# --- Configuration --- +GLOBAL_DIARY_ROOT = Path(os.environ.get("GLOBAL_DIARY_ROOT", str(Path(__file__).resolve().parent.parent / "diary"))) + +def get_today(): + return datetime.now().strftime("%Y-%m-%d") + +def main(): + if hasattr(sys.stdout, 'reconfigure'): + sys.stdout.reconfigure(encoding='utf-8') + + if len(sys.argv) < 2: + print("❌ Usage error. Provide the absolute path of the current project diary.") + print("Usage: python fetch_diaries.py <project_diary_absolute_path>") + sys.exit(1) + + proj_diary_path = Path(sys.argv[1]) + if not proj_diary_path.exists(): + print(f"⚠️ Project diary not found: {proj_diary_path}") + sys.exit(1) + + date_str = get_today() + y, m, _ = date_str.split("-") + global_diary_path = GLOBAL_DIARY_ROOT / y / m / f"{date_str}.md" + + print(f"=== FETCH MODE: {date_str} ===") + + # --- 1. 
่ฎ€ๅ–ๅ…จๅŸŸๆ—ฅ่จ˜ --- + print("\n" + "=" * 60) + print(f"๐ŸŒ [็พๆœ‰ๅ…จๅŸŸๆ—ฅ่จ˜] ({global_diary_path})") + + if global_diary_path.exists(): + print("โš ๏ธ ่ญฆๅ‘Š๏ผšๆญคๅ…จๅŸŸๆ—ฅ่จ˜ๅทฒๅญ˜ๅœจ๏ผŒไปฃ่กจไปŠๅคฉๅฏ่ƒฝๆœ‰ๅ…ถไป–ๅฐˆๆกˆๅฏซ้Ž้€ฒๅบฆไบ†๏ผ") + print("โš ๏ธ ้ตๅพ‹๏ผš่ซ‹ๅ‹™ๅฟ…ไฟ็•™ไธ‹ๆ–นๆ—ขๆœ‰็š„ๅ…งๅฎน๏ผŒๅช่ƒฝใ€Œ่ฟฝๅŠ ๆˆ–่žๅˆใ€ๆ–ฐ็š„ๅฐˆๆกˆ้€ฒๅบฆ๏ผŒ็ต•ๅฐไธๅฏ็ฒ—ๆšด่ฆ†ๅฏซๆŠน้™คๅ‰ไบบ็š„็ด€้Œ„๏ผ") + print("-" * 60) + try: + global_content = global_diary_path.read_text(encoding="utf-8").strip() + print(global_content) + except Exception as e: + print(f"่ฎ€ๅ–ๅ…จๅŸŸๆ—ฅ่จ˜ๆ™‚็™ผ็”Ÿ้Œฏ่ชค: {e}") + else: + print("โ„น๏ธ ้€™ๆ˜ฏไปŠๅคฉ็š„ใ€Œ็ฌฌไธ€็ญ†ใ€็ด€้Œ„๏ผŒๅ…จๅŸŸๆช”ๆกˆๅฐšๆœชๅปบ็ซ‹ใ€‚่ซ‹็›ดๆŽฅ็‚บไปŠๆ—ฅๅ‰ตๅปบๅฅฝ็š„ๆŽ’็‰ˆ็ตๆง‹ใ€‚") + print("-" * 60) + + # --- 2. ่ฎ€ๅ–็•ถๅ‰ๅฐˆๆกˆๆ—ฅ่จ˜ --- + print("\n" + "=" * 60) + print(f"๐Ÿ“ [็•ถๅ‰ๅฐˆๆกˆๆœ€ๆ–ฐ้€ฒๅบฆ] ({proj_diary_path})") + print("่ซ‹ๅฐ‡ไปฅไธ‹ๅ…งๅฎน๏ผŒๅ„ช้›…ๅœฐๆถˆๅŒ–ไธฆ่žๅˆ้€ฒไธŠๆ–น็š„ๅ…จๅŸŸๆ—ฅ่จ˜ไธญใ€‚") + print("-" * 60) + try: + content = proj_diary_path.read_text(encoding="utf-8") + # ้Žๆฟพๆމ้›œ่จŠๆจ™้กŒ่ˆ‡ footer + lines = content.split('\n') + meaningful = [] + for line in lines: + if line.startswith("# "): continue + if line.startswith("*Allen") or line.startswith("*Generated"): continue + meaningful.append(line) + print("\n".join(meaningful).strip()) + except Exception as e: + print(f"่ฎ€ๅ–ๅฐˆๆกˆๆ—ฅ่จ˜ๆ™‚็™ผ็”Ÿ้Œฏ่ชค: {e}") + + print("\n" + "=" * 60) + print("โœ… ็ด ๆๆไพ›ๅฎŒ็•ขใ€‚่ซ‹ IDE Agent ๅŸท่กŒ่žๅˆ๏ผŒไธฆๅฏซๅ…ฅ/ๆ›ดๆ–ฐ่‡ณๅ…จๅŸŸๆ—ฅ่จ˜ๆช”ๆกˆใ€‚") + +if __name__ == "__main__": + main() diff --git a/skills/diary/scripts/master_diary_sync.py b/skills/diary/scripts/master_diary_sync.py new file mode 100644 index 00000000..85978cd9 --- /dev/null +++ b/skills/diary/scripts/master_diary_sync.py @@ -0,0 +1,270 @@ +#!/usr/bin/env python3 +""" +Master Diary Sync Script v2 +Two-mode operation: + --inject-only : Scan desktop projects, inject today's diaries into global diary. 
+ --sync-only : Push the global diary to Notion and Obsidian. + +Usage: + python master_diary_sync.py --inject-only + python master_diary_sync.py --sync-only + python master_diary_sync.py # Runs both sequentially (legacy mode) +""" + +import os +import sys +import re +import shutil +import subprocess +from datetime import datetime +from pathlib import Path + +# --- Configuration --- +DESKTOP = Path(os.environ.get("DESKTOP_PATH", str(Path(os.environ.get("USERPROFILE", "")) / "OneDrive" / "Desktop"))) +DESKTOP_FALLBACK = Path(os.environ.get("USERPROFILE", "")) / "Desktop" +GLOBAL_DIARY_ROOT = Path(os.environ.get("GLOBAL_DIARY_ROOT", str(Path(__file__).resolve().parent.parent / "diary"))) +OBSIDIAN_DAILY_NOTES = Path(os.environ.get("OBSIDIAN_DAILY_NOTES", "")) +NOTION_SYNC_SCRIPT = Path(__file__).resolve().parent / "sync_to_notion.py" + + +def get_desktop(): + return DESKTOP if DESKTOP.exists() else DESKTOP_FALLBACK + + +def get_today(): + return datetime.now().strftime("%Y-%m-%d") + + +def get_global_path(date_str): + y, m, _ = date_str.split("-") + return GLOBAL_DIARY_ROOT / y / m / f"{date_str}.md" + + +# ── INJECT MODE ───────────────────────────────────────── + +def scan_project_diaries(date_str): + """Find all project diaries for today on the desktop.""" + desktop = get_desktop() + results = [] + + for project_dir in desktop.iterdir(): + if not project_dir.is_dir(): + continue + diary_dir = project_dir / "diary" + if not diary_dir.exists(): + continue + + # Validation: Check for a naked YYYY-MM-DD.md, which is forbidden in projects + naked_diary = diary_dir / f"{date_str}.md" + if naked_diary.exists(): + print(f"⚠️ WARNING: Found naked diary in project '{project_dir.name}': {naked_diary}") + print(f" Ironclad Rule: Project diaries MUST have a suffix (e.g., {date_str}-{project_dir.name}.md)") + + # Support both flat and YYYY/MM hierarchical structures + for 
md_file in diary_dir.rglob(f"{date_str}*.md"): + # Skip the naked one if it exists to prevent accidental injection + if md_file.name == f"{date_str}.md": + continue + results.append({ + "path": md_file, + "project": project_dir.name, + "content": md_file.read_text(encoding="utf-8"), + }) + + return results + + +def inject_into_global(global_path, project_diaries, date_str): + """ + Inject project diary content into the global diary. + This is a MECHANICAL injection; the AI will rewrite it in a later step. + Each project gets its own clearly marked section. + """ + # Read or initialize global content + if global_path.exists(): + global_content = global_path.read_text(encoding="utf-8") + else: + global_content = f"# 📔 全域日誌：{date_str}\n\n## 今日全域回顧 (Global Summary)\n（待 AI 重寫）\n\n---\n\n## 🚀 專案進度 (Project Accomplishments)\n\n---\n\n## 💡 改善與學習 (Improvements & Learnings)\n\n---\n" + + for diary in project_diaries: + proj_name = diary["project"] + proj_content = diary["content"] + marker = f"### 📁 {proj_name}" + + # Remove this project's old block if it exists (to support re-injection) + pattern = re.escape(marker) + r".*?(?=### 📁 |## 💡|## 🎯|---(?:\s*\n## )|\Z)" + global_content = re.sub(pattern, "", global_content, flags=re.DOTALL) + + # Find insertion point: after "## 🚀 專案進度" + insertion_anchor = "## 🚀 專案進度 (Project Accomplishments)" + if insertion_anchor not in global_content: + insertion_anchor = "## 🚀 專案進度" + + if insertion_anchor in global_content: + # Extract the meaningful content from the project diary (skip its H1 title) + lines = proj_content.split("\n") + meaningful = [] + for line in lines: + if line.startswith("# "): + continue # Skip H1 title + if line.startswith("*Allen") or line.startswith("*Generated"): + continue # Skip footer + meaningful.append(line) + clean_content = "\n".join(meaningful).strip() + + injection = f"\n{marker}\n{clean_content}\n" + 
global_content = global_content.replace( + insertion_anchor, + f"{insertion_anchor}{injection}" + ) + else: + global_content += f"\n{marker}\n{proj_content}\n" + + # Ensure directory exists and write + global_path.parent.mkdir(parents=True, exist_ok=True) + global_path.write_text(global_content, encoding="utf-8") + return global_path + + +def run_inject(date_str): + """Execute inject-only mode.""" + print(f"=== INJECT MODE: {date_str} ===") + global_path = get_global_path(date_str) + + # 1. Scan + diaries = scan_project_diaries(date_str) + print(f"🔍 Found {len(diaries)} valid project diaries.") + for d in diaries: + print(f" - {d['project']}: {d['path']}") + + if not diaries: + print("ℹ️ No new project diaries found. Nothing to inject.") + # Still ensure the global file exists for the AI to rewrite + if not global_path.exists(): + global_path.parent.mkdir(parents=True, exist_ok=True) + global_path.write_text( + f"# 📔 全域日誌：{date_str}\n\n## 今日全域回顧 (Global Summary)\n\n---\n\n## 🚀 專案進度 (Project Accomplishments)\n\n---\n\n## 💡 改善與學習 (Improvements & Learnings)\n\n---\n", + encoding="utf-8" + ) + print(f"📄 Global diary ready at: {global_path}") + return + + # 2. 
Inject + result = inject_into_global(global_path, diaries, date_str) + print(f"✅ Injected into global diary: {result}") + print("⏸️ Now hand off to AI for intelligent rewrite (Step 3).") + + +# ── SYNC MODE ───────────────────────────────────────── + +def sync_to_notion(global_path): + """Push the global diary to Notion.""" + print("🚀 Syncing to Notion...") + if not NOTION_SYNC_SCRIPT.exists(): + print(f"❌ Notion sync script not found: {NOTION_SYNC_SCRIPT}") + return False + + env = os.environ.copy() + if "NOTION_TOKEN" not in env or not env["NOTION_TOKEN"]: + print("❌ NOTION_TOKEN is not set in environment.") + return False + if "NOTION_DIARY_DB" not in env or not env["NOTION_DIARY_DB"]: + print("❌ NOTION_DIARY_DB is not set in environment.") + return False + + try: + result = subprocess.run( + [sys.executable, str(NOTION_SYNC_SCRIPT), str(global_path)], + env=env, capture_output=True, text=True, check=True + ) + print(result.stdout) + return True + except subprocess.CalledProcessError as e: + print(f"❌ Notion sync failed:\n{e.stderr}") + return False + + +def backup_to_obsidian(global_path): + """Copy the global diary to the Obsidian vault.""" + print("📂 Backing up to Obsidian...") + + # Safety check: Path("") stringifies to ".", so test the raw env var instead + if not os.environ.get("OBSIDIAN_DAILY_NOTES", "").strip(): + print("ℹ️ Obsidian path is not set (empty). Skipping backup.") + return False + + if not OBSIDIAN_DAILY_NOTES.exists(): + print(f"⚠️ Obsidian path not found: {OBSIDIAN_DAILY_NOTES}. 
Skipping backup.") + return False + try: + dest = OBSIDIAN_DAILY_NOTES / global_path.name + shutil.copy2(global_path, dest) + print(f"✅ Backed up to: {dest}") + return True + except Exception as e: + print(f"❌ Obsidian backup failed: {e}") + return False + + +def run_qmd_embed(): + """Update the semantic vector index.""" + print("🧠 Updating QMD Semantic Index...") + try: + # Run qmd embed in the project root + project_root = GLOBAL_DIARY_ROOT.parent + subprocess.run(["qmd", "embed"], cwd=project_root, check=True, text=True) + print("✅ QMD Embedding completed.") + return True + except FileNotFoundError: + print("⚠️ QMD not installed. Skipping semantic update.") + return False + except Exception as e: + print(f"❌ QMD Embedding failed: {e}") + return False + + +def run_sync(date_str): + """Execute sync-only mode.""" + print(f"=== SYNC MODE: {date_str} ===") + global_path = get_global_path(date_str) + + if not global_path.exists(): + print(f"❌ Global diary not found: {global_path}") + print(" Please run --inject-only first, then let the AI rewrite.") + sys.exit(1) + + # 4a. Notion + sync_to_notion(global_path) + + # 4b. Obsidian + backup_to_obsidian(global_path) + + # 5. Semantic Update + run_qmd_embed() + + print("=== SYNC COMPLETED ===") + + +# ── MAIN ───────────────────────────────────────── + +def main(): + date_str = get_today() + + if len(sys.argv) > 1: + mode = sys.argv[1] + if mode == "--inject-only": + run_inject(date_str) + elif mode == "--sync-only": + run_sync(date_str) + else: + print(f"❌ Unknown mode: {mode}") + print("Usage: python master_diary_sync.py [--inject-only | --sync-only]") + sys.exit(1) + else: + # Legacy: run both (no AI rewrite in between) + print("⚠️ Running full pipeline (legacy mode). 
Consider using --inject-only and --sync-only separately.") + run_inject(date_str) + run_sync(date_str) + + +if __name__ == "__main__": + main() diff --git a/skills/diary/scripts/prepare_context.py b/skills/diary/scripts/prepare_context.py new file mode 100644 index 00000000..a04145d2 --- /dev/null +++ b/skills/diary/scripts/prepare_context.py @@ -0,0 +1,244 @@ +#!/usr/bin/env python3 +""" +AI Agent Context Preparer v2 +Usage: python prepare_context.py [directory_path] +Generates a standardized AGENT_CONTEXT.md with 5 core sections: + 1. 專案目標 (Project Goal) - from README + 2. 技術棧與環境 (Tech Stack) - from config files + 3. 核心目錄結構 (Core Structure) - recursive tree + 4. 架構與設計約定 (Conventions) - from L1 cache + 5. 目前進度與待辦 (Status & TODO) - from latest diary +""" + +import os +import sys +import json +import glob +from pathlib import Path +from datetime import datetime + + +def get_tree(path, prefix="", max_depth=3, current_depth=0): + """Recursive directory tree generator with depth limit.""" + if current_depth >= max_depth: + return [] + try: + entries = sorted(os.listdir(path)) + except PermissionError: + return [] + tree_lines = [] + skip_prefixes = (".", "node_modules", "__pycache__", "dist", "build", "venv", ".git") + filtered = [e for e in entries if not e.startswith(skip_prefixes)] + for i, entry in enumerate(filtered): + is_last = i == len(filtered) - 1 + connector = "└── " if is_last else "├── " + full_path = os.path.join(path, entry) + tree_lines.append(f"{prefix}{connector}{entry}") + if os.path.isdir(full_path): + extension = " " if is_last else "│ " + tree_lines.extend(get_tree(full_path, prefix + extension, max_depth, current_depth + 1)) + return tree_lines + + +def extract_readme_summary(root): + """Extract the first meaningful paragraph from the README as the project goal.""" + readme = root / "README.md" + if not readme.exists(): + return None + text = readme.read_text(encoding="utf-8", 
errors="ignore") + lines = text.strip().split("\n") + # Skip title lines (# heading) and blank lines, grab first paragraph + summary_lines = [] + found_content = False + for line in lines: + stripped = line.strip() + if not found_content: + if stripped and not stripped.startswith("#"): + found_content = True + summary_lines.append(stripped) + else: + if stripped == "" and summary_lines: + break + summary_lines.append(stripped) + return " ".join(summary_lines) if summary_lines else None + + +def extract_tech_stack(root): + """Extract tech stack info from config files.""" + stack_info = [] + + # package.json + pkg = root / "package.json" + if pkg.exists(): + try: + data = json.loads(pkg.read_text(encoding="utf-8")) + deps = list(data.get("dependencies", {}).keys()) + dev_deps = list(data.get("devDependencies", {}).keys()) + if deps: + stack_info.append(f"* **ๆ ธๅฟƒๅฅ—ไปถ**๏ผš{', '.join(deps[:10])}") + if dev_deps: + stack_info.append(f"* **้–‹็™ผๅฅ—ไปถ**๏ผš{', '.join(dev_deps[:8])}") + if "scripts" in data: + scripts = list(data["scripts"].keys()) + stack_info.append(f"* **ๅฏ็”จๆŒ‡ไปค**๏ผš{', '.join(scripts)}") + except (json.JSONDecodeError, KeyError): + pass + + # pyproject.toml - basic extraction + pyproject = root / "pyproject.toml" + if pyproject.exists(): + text = pyproject.read_text(encoding="utf-8", errors="ignore") + stack_info.append(f"* **Python ๅฐˆๆกˆ**๏ผšไฝฟ็”จ pyproject.toml ็ฎก็†") + # Simple dependency extraction + if "dependencies" in text: + stack_info.append("* _่ฉณ่ฆ‹ pyproject.toml ็š„ dependencies ๅ€ๅกŠ_") + + # requirements.txt + reqs = root / "requirements.txt" + if reqs.exists(): + req_lines = [l.strip().split("==")[0].split(">=")[0] + for l in reqs.read_text(encoding="utf-8", errors="ignore").strip().split("\n") + if l.strip() and not l.startswith("#")] + if req_lines: + stack_info.append(f"* **Python ๅฅ—ไปถ**๏ผš{', '.join(req_lines[:10])}") + + return stack_info + + +def extract_latest_diary_todos(root): + """Find the latest diary file 
and extract Next Steps / TODO items.""" + # Search common diary locations + diary_dirs = [ + root / "diary", + Path(os.path.expanduser("~")) / ".gemini" / "antigravity" / "global_skills" / "auto-skill" / "diary", + ] + + latest_file = None + latest_date = "" + + for diary_dir in diary_dirs: + if not diary_dir.exists(): + continue + # Glob for markdown files recursively + for md_file in diary_dir.rglob("*.md"): + name = md_file.stem + # Try to extract date from filename (YYYY-MM-DD format) + if len(name) >= 10 and name[:4].isdigit(): + date_str = name[:10] + if date_str > latest_date: + latest_date = date_str + latest_file = md_file + + if not latest_file: + return None, [] + + text = latest_file.read_text(encoding="utf-8", errors="ignore") + lines = text.split("\n") + + todos = [] + in_next_section = False + for line in lines: + stripped = line.strip() + # Detect "Next Steps" or "ไธ‹ไธ€ๆญฅ" sections + if any(kw in stripped.lower() for kw in ["next step", "ไธ‹ไธ€ๆญฅ", "next steps", "ๅพ…่พฆ", "todo"]): + in_next_section = True + continue + if in_next_section: + if stripped.startswith("- [") or stripped.startswith("* ["): + todos.append(stripped) + elif stripped.startswith("#"): + break # Next section header, stop + + return latest_date, todos + + +def prepare_context(root_path): + root = Path(root_path).resolve() + now = datetime.now().strftime("%Y-%m-%d %H:%M") + print(f"๐Ÿ“‹ Preparing context for: {root}") + + context_file = root / "AGENT_CONTEXT.md" + + with open(context_file, "w", encoding="utf-8") as f: + # Header + f.write(f"# ๅฐˆๆกˆไธŠไธ‹ๆ–‡ (Agent Context)๏ผš{root.name}\n\n") + f.write(f"> **ๆœ€ๅพŒๆ›ดๆ–ฐๆ™‚้–“**๏ผš{now}\n") + f.write(f"> **่‡ชๅ‹•็”Ÿๆˆ**๏ผš็”ฑ `prepare_context.py` ็”ข็”Ÿ๏ผŒไพ› AI Agent ๅฟซ้€ŸๆŽŒๆกๅฐˆๆกˆๅ…จๅฑ€\n\n") + f.write("---\n\n") + + # Section 1: ๅฐˆๆกˆ็›ฎๆจ™ + f.write("## ๐ŸŽฏ 1. 
ๅฐˆๆกˆ็›ฎๆจ™ (Project Goal)\n") + readme_summary = extract_readme_summary(root) + if readme_summary: + f.write(f"* **ๆ ธๅฟƒ็›ฎ็š„**๏ผš{readme_summary}\n") + else: + f.write("* **ๆ ธๅฟƒ็›ฎ็š„**๏ผš_๏ผˆ่ซ‹ๆ‰‹ๅ‹•่ฃœๅ……๏ผŒๆˆ–ๅปบ็ซ‹ README.md๏ผ‰_\n") + readme = root / "README.md" + if readme.exists(): + f.write(f"* _ๅฎŒๆ•ด่ชชๆ˜Ž่ฆ‹ [README.md](README.md)_\n") + f.write("\n") + + # Section 2: ๆŠ€่ก“ๆฃง่ˆ‡็’ฐๅขƒ + f.write("## ๐Ÿ› ๏ธ 2. ๆŠ€่ก“ๆฃง่ˆ‡็’ฐๅขƒ (Tech Stack & Environment)\n") + stack_info = extract_tech_stack(root) + if stack_info: + f.write("\n".join(stack_info)) + f.write("\n") + else: + f.write("* _๏ผˆๆœชๅตๆธฌๅˆฐ package.json / pyproject.toml / requirements.txt๏ผ‰_\n") + + # Also include raw config snippets for AI reference + config_files = ["package.json", "pyproject.toml", "requirements.txt", ".env.example", "clasp.json"] + has_config = False + for cfg in config_files: + cfg_path = root / cfg + if cfg_path.exists(): + if not has_config: + f.write("\n### ๅŽŸๅง‹่จญๅฎšๆช”\n") + has_config = True + ext = cfg.split(".")[-1] + lang_map = {"json": "json", "toml": "toml", "txt": "text", "example": "text"} + lang = lang_map.get(ext, "text") + content = cfg_path.read_text(encoding="utf-8", errors="ignore") + # Truncate very long config files + if len(content) > 3000: + content = content[:3000] + "\n... (truncated)" + f.write(f"\n
{cfg}\n\n```{lang}\n{content}\n```\n
\n") + f.write("\n") + + # Section 3: ๆ ธๅฟƒ็›ฎ้Œ„็ตๆง‹ + f.write("## ๐Ÿ“‚ 3. ๆ ธๅฟƒ็›ฎ้Œ„็ตๆง‹ (Core Structure)\n") + f.write("_(๐Ÿ’ก AI ่ฎ€ๅ–ๅฎˆๅ‰‡๏ผš่ซ‹ไพๆ“šๆญค็ตๆง‹ๅฐ‹ๆ‰พๅฐๆ‡‰ๆช”ๆกˆ๏ผŒๅ‹ฟ็›ฒ็›ฎ็Œœๆธฌ่ทฏๅพ‘)_\n") + f.write("```text\n") + f.write(f"{root.name}/\n") + f.write("\n".join(get_tree(root))) + f.write("\n```\n\n") + + # Section 4: ๆžถๆง‹่ˆ‡่จญ่จˆ็ด„ๅฎš + f.write("## ๐Ÿ›๏ธ 4. ๆžถๆง‹่ˆ‡่จญ่จˆ็ด„ๅฎš (Architecture & Conventions)\n") + local_exp = root / ".auto-skill-local.md" + if local_exp.exists(): + f.write("_(ไพ†่‡ชๅฐˆๆกˆ L1 ๅฟซๅ– `.auto-skill-local.md`)_\n\n") + f.write(local_exp.read_text(encoding="utf-8", errors="ignore")) + f.write("\n\n") + else: + f.write("* _๏ผˆๅฐš็„ก `.auto-skill-local.md`๏ผŒๅฐˆๆกˆ่ธฉๅ‘็ถ“้ฉ—ๅฐ‡ๅœจ้–‹็™ผ้Ž็จ‹ไธญ่‡ชๅ‹•็ดฏ็ฉ๏ผ‰_\n\n") + + # Section 5: ็›ฎๅ‰้€ฒๅบฆ่ˆ‡ๅพ…่พฆ + f.write("## ๐Ÿšฆ 5. ็›ฎๅ‰้€ฒๅบฆ่ˆ‡ๅพ…่พฆ (Current Status & TODO)\n") + latest_date, todos = extract_latest_diary_todos(root) + if todos: + f.write(f"_(่‡ชๅ‹•ๆๅ–่‡ชๆœ€่ฟ‘ๆ—ฅ่จ˜ {latest_date})_\n\n") + f.write("### ๐Ÿšง ๅพ…่พฆไบ‹้ …\n") + for todo in todos: + f.write(f"{todo}\n") + f.write("\n") + else: + f.write("* _๏ผˆๅฐš็„กๆ—ฅ่จ˜่จ˜้Œ„๏ผŒๆˆ–ๆ—ฅ่จ˜ไธญ็„กใ€Œไธ‹ไธ€ๆญฅใ€ๅ€ๅกŠ๏ผ‰_\n\n") + + print(f"โœ… Created: {context_file}") + + +if __name__ == "__main__": + target = sys.argv[1] if len(sys.argv) > 1 else "." 
+    prepare_context(target)
diff --git a/skills/diary/scripts/sync_to_notion.py b/skills/diary/scripts/sync_to_notion.py
new file mode 100644
index 00000000..2fdddcfc
--- /dev/null
+++ b/skills/diary/scripts/sync_to_notion.py
@@ -0,0 +1,469 @@
+#!/usr/bin/env python3
+"""
+Notion Diary Sync Script
+Syncs the diary-agent dev diary into the Business section of the Notion
+"每日複盤" (Daily Review) page. The page layout (the other life sections) is
+created by the GAS Agent; this script only pushes the dev diary.
+
+Usage:
+    python sync_to_notion.py <diary_path>
+    python sync_to_notion.py --create-db <parent_page_id>
+
+Environment variables:
+    NOTION_TOKEN    - Notion Internal Integration Token
+    NOTION_DIARY_DB - Notion Diary Database ID
+"""
+
+import os
+import sys
+import re
+import unicodedata
+import requests
+from datetime import datetime
+from pathlib import Path
+
+# ── Configuration ────────────────────────────────────────────────────
+NOTION_TOKEN = os.environ.get("NOTION_TOKEN", "")
+NOTION_DIARY_DB = os.environ.get("NOTION_DIARY_DB", "")
+NOTION_API = "https://api.notion.com/v1"
+NOTION_VERSION = "2022-06-28"
+
+HEADERS = {
+    "Authorization": f"Bearer {NOTION_TOKEN}",
+    "Notion-Version": NOTION_VERSION,
+    "Content-Type": "application/json",
+}
+
+# ── Note ─────────────────────────────────────────────────────────────
+# The page layout (Learning / Chemistry / Workout / notes) is created by
+# the GAS Agent. This script only pushes the dev diary into the Business
+# section.
+
+
+# ── Notion API Helpers ───────────────────────────────────────────────
+
+def notion_request(method: str, endpoint: str, data: dict = None) -> dict:
+    """Execute a Notion API request with error handling."""
+    url = f"{NOTION_API}/{endpoint}"
+    resp = getattr(requests, method)(url, headers=HEADERS, json=data)
+    if resp.status_code >= 400:
+        print(f"❌ Notion API Error ({resp.status_code}): {resp.json().get('message', resp.text)}")
+        sys.exit(1)
+    return resp.json()
+
+
+def search_diary_by_date(date_str: str) -> str | None:
+    """Search for an existing diary page by date property."""
+    data = {
+        "filter": {
+            "property": "日期",
+            "date": {"equals": date_str}
+        }
+    }
+    result = notion_request("post", f"databases/{NOTION_DIARY_DB}/query", data)
+    pages = result.get("results", [])
+    return pages[0]["id"] if pages else None
+
+
+# ── Rich Text & Block Helpers ────────────────────────────────────────
+
+def parse_rich_text(text: str) -> list:
+    """Parse markdown inline formatting to Notion rich_text array."""
+    segments = []
+    pattern = r'(\*\*(.+?)\*\*|`(.+?)`|\[(.+?)\]\((.+?)\))'
+    last_end = 0
+
+    for match in re.finditer(pattern, text):
+        start, end = match.span()
+        if start > last_end:
+            plain = text[last_end:start]
+            if plain:
+                segments.append({"type": "text", "text": {"content": plain}})
+        full = match.group(0)
+        if full.startswith("**"):
+            segments.append({"type": "text", "text": {"content": match.group(2)}, "annotations": {"bold": True}})
+        elif full.startswith("`"):
+            segments.append({"type": "text", "text": {"content": match.group(3)}, "annotations": {"code": True}})
+        elif full.startswith("["):
+            segments.append({"type": "text", "text": {"content": match.group(4), "link": {"url": match.group(5)}}})
+        last_end = end
+
+    if last_end < len(text):
+        remaining = text[last_end:]
+        if remaining:
+            segments.append({"type": "text", "text": {"content": remaining}})
+    if not segments:
+        segments.append({"type": "text", "text": {"content": text}})
+    return segments
+
+
+def make_heading2(text: str) -> dict:
+    return {"object": "block", "type": "heading_2", "heading_2": {"rich_text": parse_rich_text(text)}}
+
+
+def make_heading3(text: str) -> dict:
+    return {"object": "block", "type": "heading_3", "heading_3": {"rich_text": parse_rich_text(text)}}
+
+
+def make_bullet(text: str) -> dict:
+    return {"object": "block", "type": "bulleted_list_item", "bulleted_list_item": {"rich_text": parse_rich_text(text)}}
+
+
+def make_divider() -> dict:
+    return {"object": "block", "type": "divider", "divider": {}}
+
+
+def make_quote(text: str = " ") -> dict:
+    return {"object": "block", "type": "quote", "quote": {"rich_text": [{"type": "text", "text": {"content": text}}]}}
+
+
+def make_paragraph(text: str) -> dict:
+    return {"object": "block", "type": "paragraph", "paragraph": {"rich_text": parse_rich_text(text)}}
+
+
+def make_todo(text: str, checked: bool = False) -> dict:
+    return {"object": "block", "type": "to_do", "to_do": {"rich_text": parse_rich_text(text), "checked": checked}}
+
+
+def make_callout(text: str, emoji: str = "💡") -> dict:
+    return {"object": "block", "type": "callout", "callout": {"rich_text": parse_rich_text(text), "icon": {"emoji": emoji}}}
+
+
+# ── Markdown to Business Blocks ──────────────────────────────────────
+
+def diary_to_business_blocks(md_content: str) -> list:
+    """Convert diary markdown into bullet-point blocks for the Business section.
+
+    Extracts the key accomplishments and structures them as Notion blocks.
+    """
+    blocks = []
+    lines = md_content.split("\n")
+
+    for line in lines:
+        line = line.rstrip()
+        if not line:
+            continue
+
+        # Skip the H1 title and timestamp lines
+        if line.startswith("# ") or line.startswith("*Allen") or line.startswith("*Generated"):
+            continue
+
+        # H3 sections become sub-headings (e.g. ### 1. 跨平台混合雲自動化)
+        if line.startswith("### "):
+            heading_text = line[4:].strip()
+            # Strip a leading number prefix (e.g. "1. ")
+            heading_text = re.sub(r'^\d+\.\s*', '', heading_text)
+            blocks.append(make_heading3(heading_text))
+            continue
+
+        # H2 sections - skip (they are category headers like "今日回顧", "該改善的地方")
+        if line.startswith("## "):
+            section = line[3:].strip()
+            # Keep the improvement section as a sub-heading
+            if "改善" in section or "學習" in section:
+                blocks.append(make_divider())
+                blocks.append(make_heading3(f"💡 {section}"))
+            continue
+
+        # Dividers
+        if line.strip() == "---":
+            continue
+
+        # Callouts (e.g. > 🌟 **今日亮點 (Daily Highlight)**)
+        if line.startswith("> "):
+            text = line[2:].strip()
+            # Extract emoji if present
+            emoji = "💡"
+            if text:
+                first_char = text[0]
+                # A simple heuristic to check if the first character is an emoji
+                if ord(first_char) > 0xFFFF or unicodedata.category(first_char) == 'So':
+                    emoji = first_char
+                    text = text[1:].strip()
+            blocks.append(make_callout(text, emoji))
+            continue
+
+        # TODO items
+        if "- [ ]" in line or "- [x]" in line:
+            checked = "- [x]" in line
+            text = re.sub(r'^[\s]*-\s\[[ x]\]\s', '', line)
+            blocks.append(make_todo(text, checked))
+            continue
+
+        # Numbered items
+        if re.match(r'^[\s]*\d+\.\s', line):
+            text = re.sub(r'^[\s]*\d+\.\s', '', line)
+            if text:
+                blocks.append(make_bullet(text))
+            continue
+
+        # Bullet points
+        if re.match(r'^[\s]*[\-\*]\s', line):
+            text = re.sub(r'^[\s]*[\-\*]\s', '', line)
+            if text:
+                blocks.append(make_bullet(text))
+            continue
+
+        # Default: paragraph (only if meaningful)
+        if len(line.strip()) > 2:
+            blocks.append(make_paragraph(line))
+
+    return blocks
+
+
+# ── Page Creation ────────────────────────────────────────────────────
+
+def build_business_only_blocks(business_blocks: list) -> list:
+    """Build page blocks with only the Business section (GAS Agent handles the rest)."""
+    blocks = []
+    blocks.append(make_heading2("💼 Business (YT/AI 網紅 / 自動化開發)"))
+    blocks.extend(business_blocks)
+    blocks.append(make_divider())
+    return blocks
+
+
+def extract_metadata(md_content: str, filename: str) -> dict:
+    """Extract metadata from diary markdown content."""
+    date_match = re.search(r'(\d{4}-\d{2}-\d{2})', filename)
+    date_str = date_match.group(1) if date_match else datetime.now().strftime("%Y-%m-%d")
+
+    # Build title
+    title = f"📊 {date_str} 每日複盤"
+
+    # Extract project names
+    # Matches old format `### 📝 ` and new format, e.g. `### 🔵 ` or `### 🟢 `
+    projects = re.findall(r'###\s+[\U00010000-\U0010ffff📝]\s+(\S+)', md_content)
+    if not projects:
+        projects = re.findall(r'###\s+\d+\.\s+(.+?)[\s🚀🛠️🧪☁️🔧🧩]*(?:\n|$)', md_content)
+    projects = [p.strip()[:20] for p in projects]
+
+    # Auto-tag
+    tags = {"Business"}  # Always tagged as Business since diary-agent produces dev content
+    tag_keywords = {
+        "自動化": ["自動化", "GAS", "Agent", "觸發器"],
+        "AI": ["Gemini", "AI", "語義", "LLM"],
+        "影片": ["Remotion", "影片", "渲染", "OpenShorts"],
+        "投資": ["投資", "分析", "道氏", "酒田"],
+        "Discord": ["Discord", "Listener"],
+        "YouTube": ["YouTube", "YT", "Guardian"],
+    }
+    for tag, keywords in tag_keywords.items():
+        if any(kw in md_content for kw in keywords):
+            tags.add(tag)
+
+    return {
+        "date": date_str,
+        "title": title,
+        "projects": projects if projects else ["general"],
+        "tags": list(tags),
+    }
+
+
+def create_diary_page(metadata: dict, blocks: list) -> str:
+    """Create a new diary page in the Notion database."""
+    children = blocks[:100]
+    data = {
+        "parent": {"database_id": NOTION_DIARY_DB},
+        "icon": {"emoji": "📊"},
+        "properties": {
+            "標題": {"title": [{"text": {"content": metadata["title"]}}]},
+            "日期": {"date": {"start": metadata["date"]}},
+            "專案": {"multi_select": [{"name": p} for p in metadata["projects"][:10]]},
+            "標籤": {"multi_select": [{"name": t} for t in metadata["tags"][:10]]},
+        },
+        "children": children
+    }
+    result = notion_request("post", "pages", data)
+    page_id = result["id"]
+
+    # Append remaining blocks in chunks of 100
+    if len(blocks) > 100:
+        remaining = blocks[100:]
+        for i in range(0, len(remaining), 100):
+            chunk = remaining[i:i+100]
+            notion_request("patch", f"blocks/{page_id}/children", {"children": chunk})
+
+    return page_id
+
+
+def update_business_section(page_id: str, metadata: dict, business_blocks: list):
+    """Update ONLY the Business section of an existing page, preserving all other content."""
+    # Update properties
+    notion_request("patch", f"pages/{page_id}", {
+        "properties": {
+            "標題": {"title": [{"text": {"content": metadata["title"]}}]},
+            "專案": {"multi_select": [{"name": p} for p in metadata["projects"][:10]]},
+            "標籤": {"multi_select": [{"name": t} for t in metadata["tags"][:10]]},
+        }
+    })
+
+    # Read all existing blocks
+    all_blocks = []
+    cursor = None
+    while True:
+        endpoint = f"blocks/{page_id}/children?page_size=100"
+        if cursor:
+            endpoint += f"&start_cursor={cursor}"
+        result = notion_request("get", endpoint)
+        all_blocks.extend(result.get("results", []))
+        if not result.get("has_more"):
+            break
+        cursor = result.get("next_cursor")
+
+    # Find the Business section boundaries
+    business_start = None
+    business_end = None
+
+    for idx, block in enumerate(all_blocks):
+        if block["type"] == "heading_2":
+            text = ""
+            for rt in block.get("heading_2", {}).get("rich_text", []):
+                text += rt.get("plain_text", rt.get("text", {}).get("content", ""))
+            if "Business" in text:
+                business_start = idx
+            elif business_start is not None and business_end is None:
+                # Next H2 after Business = end of Business section
+                business_end = idx
+                break
+
+    if business_start is None:
+        print("⚠️ Business section not found; the whole page will be overwritten")
+        blocks_to_delete = all_blocks
+        after_block_id = None
+    else:
+        # If no end found, look for a divider after the Business content
+        if business_end is None:
+            for idx in range(business_start + 1, len(all_blocks)):
+                if all_blocks[idx]["type"] == "divider":
+                    business_end = idx + 1  # Include the divider
+                    break
+        if business_end is None:
+            business_end = len(all_blocks)
+
+        # Delete old Business content (between heading and next section)
+        blocks_to_delete = all_blocks[business_start + 1:business_end]
+
+        # Find the block AFTER which to insert (the Business heading itself)
+        after_block_id = all_blocks[business_start]["id"]
+
+    for block in blocks_to_delete:
+        try:
+            requests.delete(f"{NOTION_API}/blocks/{block['id']}", headers=HEADERS)
+        except Exception:
+            pass
+
+    # Insert new Business blocks after the heading, or at the end of the page
+    for i in range(0, len(business_blocks), 100):
+        chunk = business_blocks[i:i+100]
+        payload = {"children": chunk}
+        if after_block_id:
+            payload["after"] = after_block_id
+
+        result = notion_request("patch", f"blocks/{page_id}/children", payload)
+
+        # Update after_block_id to the last inserted block for ordering
+        if chunk and result.get("results"):
+            after_block_id = result["results"][-1]["id"]
+
+    # Re-add divider after the Business content
+    if after_block_id:
+        notion_request("patch", f"blocks/{page_id}/children", {
+            "children": [make_divider()],
+            "after": after_block_id
+        })
+    else:
+        notion_request("patch", f"blocks/{page_id}/children", {
+            "children": [make_divider()]
+        })
+
+    print("✅ Business section updated (other sections untouched)")
+
+
+def create_database(parent_page_id: str) -> str:
+    """Create the Diary database under a parent page."""
+    data = {
+        "parent": {"type": "page_id", "page_id": parent_page_id},
+        "title": [{"type": "text", "text": {"content": "📔 AI 日記"}}],
+        "icon": {"emoji": "📊"},
+        "is_inline": False,
+        "properties": {
+            "標題": {"title": {}},
+            "日期": {"date": {}},
+            "專案": {"multi_select": {"options": []}},
+            "標籤": {"multi_select": {"options": []}},
+        }
+    }
+    result = notion_request("post", "databases", data)
+    db_id = result["id"]
+    print(f"✅ Created Notion Diary Database: {db_id}")
+    print("   Set this ID as an environment variable:")
+    print(f'   $env:NOTION_DIARY_DB = "{db_id}"')
+    return db_id
+
+
+# ── Main ─────────────────────────────────────────────────────────────
+
+def main():
+    if hasattr(sys.stdout, 'reconfigure'):
+        sys.stdout.reconfigure(encoding='utf-8')
+
+    if not NOTION_TOKEN:
+        print("❌ Please set the NOTION_TOKEN environment variable")
+        print('   $env:NOTION_TOKEN = "ntn_xxx"')
+        sys.exit(1)
+
+    # Handle --create-db flag
+    if len(sys.argv) >= 3 and sys.argv[1] == "--create-db":
+        parent_id = sys.argv[2].replace("-", "")
+        create_database(parent_id)
+        return
+
+    if not NOTION_DIARY_DB:
+        print("❌ Please set the NOTION_DIARY_DB environment variable")
+        print('   $env:NOTION_DIARY_DB = "abc123..."')
+        print("")
+        print("To create a new database:")
+        print('   python sync_to_notion.py --create-db <parent_page_id>')
+        sys.exit(1)
+
+    if len(sys.argv) < 2:
+        print("Usage: python sync_to_notion.py <diary_path>")
+        print("       python sync_to_notion.py --create-db <parent_page_id>")
+        sys.exit(1)
+
+    diary_path = Path(sys.argv[1])
+    if not diary_path.exists():
+        print(f"❌ Diary file not found: {diary_path}")
+        sys.exit(1)
+
+    # Read diary
+    md_content = diary_path.read_text(encoding="utf-8")
+    filename = diary_path.name
+
+    print(f"📖 Reading diary: {diary_path}")
+
+    # Extract metadata
+    metadata = extract_metadata(md_content, filename)
+    print(f"   Date:     {metadata['date']}")
+    print(f"   Title:    {metadata['title']}")
+    print(f"   Projects: {', '.join(metadata['projects'])}")
+    print(f"   Tags:     {', '.join(metadata['tags'])}")
+
+    # Convert diary to Business blocks
+    business_blocks = diary_to_business_blocks(md_content)
+    print(f"   Business blocks: {len(business_blocks)}")
+
+    # Check if page already exists
+    existing_page = search_diary_by_date(metadata["date"])
+
+    if existing_page:
+        print(f"🔄 Updating Business section of existing page (page: {existing_page})")
+        update_business_section(existing_page, metadata, business_blocks)
+    else:
+        print("📝 Creating new page (Business section only)...")
+        biz_blocks = build_business_only_blocks(business_blocks)
+        page_id = create_diary_page(metadata, biz_blocks)
+        print(f"✅ Synced to Notion! (page: {page_id})")
+
+
+if __name__ == "__main__":
+    main()
diff --git a/skills/diary/templates/global-diary-template.md b/skills/diary/templates/global-diary-template.md
new file mode 100644
index 00000000..da506b29
--- /dev/null
+++ b/skills/diary/templates/global-diary-template.md
@@ -0,0 +1,38 @@
+# 📔 YYYY-MM-DD 全域進度總覽
+
+> 🌟 **今日亮點 (Daily Highlight)**
+> （由 AI 綜合當日所有專案進度，寫下 1-2 句話的整體總結）
+
+---
+
+## 📁 專案進度追蹤
+（⚠️ AI 寫入規則：若檔案已存在，尋找對應的專案標題追加；絕不覆蓋，維持版面整潔。）
+
+### 🔵 {專案 A，例如：auto-video-editor}
+* **今日進展**：(將 Step 2 提取的本地素材濃縮成重點)
+* **行動項目**：(提取該專案的下一步待辦)
+
+### 🟢 {專案 B，例如：GSS}
+* **今日進展**：(將重點濃縮於此)
+* **行動項目**：(提取該專案的下一步待辦)
+
+### 🟡 {專案 C，例如：Stickman Soul Cafe}
+* **今日進展**：(若今日無進度則不顯示此區塊)
+* **行動項目**：(提取該專案的下一步待辦)
+
+---
+
+## 🧠 改善與學習 (Lessons Learned)
+（⚠️ 此區塊專供經驗提煉）
+
+📌 **新規則 / 新發現**
+(例如：發現某個 API 的隱藏限制、或是某種 Python 寫法更節省效能)
+
+🔄 **優化與反思**
+(過去做法的改進，例如：調整工作流以避免重複授權)
+
+---
+
+## ✅ 跨專案通用待辦 (Global Action Items)
+- [ ] (與特定專案無關的任務)
+- [ ] (系統環境維護等)
diff --git a/skills/diary/templates/local-diary-template.md b/skills/diary/templates/local-diary-template.md
new file mode 100644
index 00000000..87d84079
--- /dev/null
+++ b/skills/diary/templates/local-diary-template.md
@@ -0,0 +1,23 @@
+# 專案實作紀錄：{專案名稱}
+* **📅 日期**：YYYY-MM-DD
+* **🏷️ 標籤**：`#Project` `#DevLog`
+
+---
+
+> 🎯 **本次進度摘要**
+> （簡述本次完成了什麼核心任務，例如：「完成 auto-video-editor 的 Google Colab 環境測試」）
+
+### 🛠️ 執行細節與變更
+* **Git Commits**：(若有請列出)
+* **核心檔案異動**：
+  * 📄 `路徑/檔名`：變更說明。
+* **技術實作**：
+  * (記錄關鍵邏輯或架構變動)
+
+### 🐛 問題與解法 (Troubleshooting)
+> 🐛 **遇到困難**：(如 API 報錯、套件衝突)
+> 💡 **解決方案**：(最終修復方式，留下關鍵指令)
+
+### ⏭️ 下一步計畫 (Next Steps)
+- [ ] (具體的待辦事項 1)
+- [ ] (具體的待辦事項 2)
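
---

Reviewer note: the patch's `parse_rich_text` helper in `sync_to_notion.py` is the piece most worth unit-testing in isolation, since it decides how `**bold**`, `` `code` ``, and `[label](url)` spans are split into Notion `rich_text` segments. The sketch below restates the same regex-driven conversion outside the script so the splitting behaviour can be checked without a Notion token (the standalone name `to_rich_text` is mine, not part of the patch):

```python
import re

# Same inline pattern as parse_rich_text in sync_to_notion.py:
# **bold**, `code`, and [label](url) spans.
PATTERN = r'(\*\*(.+?)\*\*|`(.+?)`|\[(.+?)\]\((.+?)\))'

def to_rich_text(text):
    """Convert markdown inline spans to Notion-style rich_text segments."""
    segments, last_end = [], 0
    for m in re.finditer(PATTERN, text):
        start, end = m.span()
        if start > last_end:
            # Plain text between two formatted spans
            segments.append({"type": "text", "text": {"content": text[last_end:start]}})
        full = m.group(0)
        if full.startswith("**"):
            segments.append({"type": "text", "text": {"content": m.group(2)},
                             "annotations": {"bold": True}})
        elif full.startswith("`"):
            segments.append({"type": "text", "text": {"content": m.group(3)},
                             "annotations": {"code": True}})
        else:  # [label](url)
            segments.append({"type": "text",
                             "text": {"content": m.group(4), "link": {"url": m.group(5)}}})
        last_end = end
    if last_end < len(text):
        segments.append({"type": "text", "text": {"content": text[last_end:]}})
    # Fall back to a single plain segment for empty/unformatted input
    return segments or [{"type": "text", "text": {"content": text}}]

segs = to_rich_text("fixed **NameError** in [sync](https://example.com)")
print([s["text"]["content"] for s in segs])
```

As in the patch, only the toggled annotation keys are sent per segment and Notion fills in defaults for the rest; nested spans (e.g. bold inside a link) are intentionally not handled by this pattern.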