feat: add unified-diary skill (#246)

* feat: add unified-diary skill

* chore: refine diary skill metadata for quality bar compliance

* fix: address PR feedback - fix NameError, Obsidian path, and add missing script
Author: Allen930311
Date: 2026-03-09 18:52:09 +08:00 (committed via GitHub)
Parent: bbeec32896
Commit: 9de71654fc
12 changed files with 1424 additions and 0 deletions


@@ -0,0 +1,8 @@
# Notion Settings
NOTION_TOKEN="ntn_your_notion_token_here"
NOTION_DIARY_DB="your_notion_database_id_here"
# Path Settings (Optional, scripts will use sensible defaults if not set)
# DESKTOP_PATH="C:\Users\YourName\OneDrive\Desktop"
# GLOBAL_DIARY_ROOT="C:\path\to\your\global\diary\folder"
# OBSIDIAN_DAILY_NOTES="C:\path\to\your\obsidian\vault\10_Daily"

skills/diary/.gitignore vendored Normal file

@@ -0,0 +1,16 @@
# Personal diary data
diary/
# Python artifacts
__pycache__/
*.pyc
*.pyo
*.pyd
.venv/
venv/
env/
.env
# OS generated files
.DS_Store
Thumbs.db

skills/diary/LICENSE Normal file

@@ -0,0 +1,21 @@
MIT License
Copyright (c) 2026
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

skills/diary/README.md Normal file

@@ -0,0 +1,90 @@
# 📔 Unified Diary System (Agentic Context-Preserving Logger) v4.1
![Version](https://img.shields.io/badge/version-v4.1-blue)
![AI Agent](https://img.shields.io/badge/AI-Agent_Driven-orange)
![Sync](https://img.shields.io/badge/Sync-Notion%20%7C%20Obsidian-lightgrey)
**Unified Diary System** is a fully automated, anti-pollution AI journaling and synchronization workflow designed specifically for multi-project developers and creators. By leveraging Continuous Tool Calling from AI Agents, a single natural language command automatically executes a 4-step pipeline: **Local Project Logging ➔ Global Context Fusion ➔ Cloud Bi-directional Sync ➔ Experience Extraction**, achieving a true "One-Shot" seamless record.
---
## ✨ Core Features
* ⚡ **Agent One-Shot Execution**: Once triggered, the AI completes the entire technical process without interruption, only pausing at the final step to ask for human validation on extracted "lessons learned".
* 🛡️ **Context Firewall**: Strictly separates "Project Local Diaries" from the "Global Master Diary." This fundamentally solves the severe "Context Pollution / Tag Drift" problem where AI hallucinates and mixes up progress between Project A and Project B during daily summaries.
* 🧠 **Automated Lessons Learned**: More than just a timeline of events, the AI proactively extracts "New Rules" or "Optimizations" from the bugs you faced or discoveries you made today, distilling them into your Knowledge Base.
* 🔄 **Seamless Cross-Platform Sync**: Includes built-in scripts to push the final global diary straight to Notion and/or Obsidian with a simple `--sync-only` flag.
---
## 🏗️ The 5-Step Workflow Architecture
When a developer types `:{Write a diary entry using the diary skill}` in *any* project directory, the system strictly executes the following atomic operations:
### Step 1: Local Project Archiving (AI Execution)
1. **Auto-Location**: The AI calls terminal commands (e.g., `pwd`) to identify the current working directory, establishing the "Project Name".
2. **Precision Writing**: It writes today's Git Commits, code changes, and problem solutions in "append mode" exclusively into that project's local directory: `diary/YYYY/MM/YYYY-MM-DD-<Project_Name>.md`.
### Step 1.5: Refresh Project Context (Automation Script)
* **Auto-Execution**: The AI invokes `prepare_context.py` to scan the project's latest directory structure, tech stack, and diary-based action items, generating/updating the `AGENT_CONTEXT.md` at the project root.
### Step 2: Extracting Global & Project Material (Automation Script)
* **Material Fetching**: The AI automatically executes `fetch_diaries.py`, precisely pulling the "just-written local project diary" and today's "Global Diary (if it exists)", printing both to the terminal for the AI to read.
### Step 3: AI Smart Fusion & Global Archiving (AI Execution)
* **Seamless Fusion**: The AI mentally sews the two sources from Step 2 together, writing the combined result into the global diary vault: `.../global_skills/auto-skill/diary/YYYY/MM/YYYY-MM-DD.md`.
* **Strict Zoning**: It uses `### 📁 <Project Name>` tagging to ensure existing project progress is preserved, while new project progress is safely appended—absolutely no overwriting.
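The zoning rule can be illustrated with a tiny sketch (a hypothetical helper for illustration; the shipped mechanical version lives in `master_diary_sync.py`):

```python
def append_project_zone(global_md: str, project: str, progress: str) -> str:
    """Append progress into this project's zone without touching other zones."""
    marker = f"### 📁 {project}"
    if marker in global_md:
        # Existing zone: insert the new progress right under its heading,
        # leaving every other project's section untouched.
        return global_md.replace(marker, f"{marker}\n{progress}", 1)
    # New project: append a fresh zone at the end of the diary.
    return f"{global_md.rstrip()}\n\n{marker}\n{progress}\n"
```

Because the function only ever inserts next to its own marker or appends at the end, existing project sections can never be overwritten.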
### Step 4: Cloud Sync & Experience Extraction (Script + Human)
1. **One-Click Push**: The AI calls `master_diary_sync.py --sync-only` to push the data to Notion/Obsidian.
2. **Human Authorization**: The AI extracts today's `📌 New Rules` or `🔄 Experience Optimizations` and presents them to the developer. Once authorized, these are written to the local Knowledge Base and embedded (e.g., via `qmd embed`).
---
## 📂 Directory Structure
This system adopts a "Distributed Recording, Centralized Management" architecture:
```text
📦 Your Computer Environment
┣ 📂 Project A (e.g., auto-video-editor)
┃ ┗ 📂 diary/YYYY/MM/
┃ ┗ 📜 2026-02-24-auto-video-editor.md <-- Step 1 writes here (Clean, isolated history)
┣ 📂 Project B (e.g., GSS)
┃ ┗ 📂 diary/YYYY/MM/
┃ ┗ 📜 2026-02-24-GSS.md
┗ 📂 Global Skills & Diary Center (This Repo)
┣ 📂 scripts/
┃ ┣ 📜 fetch_diaries.py <-- Step 2: Material transporter
┃ ┣ 📜 prepare_context.py <-- Step 1.5: Context refresher
┃ ┗ 📜 master_diary_sync.py <-- Step 4: Notion/Obsidian sync
┣ 📂 knowledge-base/ <-- Step 4: AI extracted lessons
┗ 📂 diary/YYYY/MM/
┗ 📜 2026-02-24.md <-- Step 3: The ultimate fused global log
```
---
## 🚀 How to Use (Usage)
After setting up `.env` with your Notion tokens, simply input the following into your CLI/IDE chat while working inside a project:
```bash
:{Write a diary entry using the diary skill} Today I finished the initial integration of the Google Colab python script and fixed the package version conflicts.
```
The system will take over to handle all the filing, merging, and syncing automatically.
---
## 🛠️ Setup & Prerequisites
1. **Configuration**: Rename `.env.example` to `.env` and fill in your `NOTION_TOKEN`, `NOTION_DIARY_DB`, and set where your global diary root is stored.
2. **Dependencies**: `pip install -r requirements.txt`
3. **AI Agent**: Requires an AI assistant with Function Calling / Continuous Tool Calling capabilities (like Cursor, Claude Code, or Gemini CLI frameworks).
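A possible setup sequence is sketched below. One assumption worth flagging: the bundled scripts read configuration via `os.environ` and do not load `.env` themselves, so the values must be exported into the shell (or injected by your agent runner):

```shell
# Copy the template and fill in your tokens (values shown are placeholders).
cp .env.example .env
pip install -r requirements.txt
# Make the settings visible to the scripts for this shell session:
export NOTION_TOKEN="ntn_your_notion_token_here"
export NOTION_DIARY_DB="your_notion_database_id_here"
export GLOBAL_DIARY_ROOT="$HOME/global_skills/auto-skill/diary"
```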
---
> **💡 Design Philosophy:**
> Why not just have the AI write directly to the global diary? Because we found that when an AI lacks the "isolated local project context", it frequently suffers from **Tag Drift** (writing Project A's progress under Project B's header). Through this highly-structured "Local First, Global Second" five-step architecture, we completely eliminated the context pollution pain point in AI-automated logging.

skills/diary/SKILL.md Normal file

@@ -0,0 +1,160 @@
---
name: diary
description: "Unified Diary System: A context-preserving automated logger for multi-project development."
risk: safe
source: self
---
# 📔 Unified Diary System
## When to Use This Skill
Use this skill when you want to summarize progress, write a daily dev log, or perform a daily review while keeping project contexts isolated and synced to Notion/Obsidian.
> 🚨 **Agent One-Shot Integrity Constraint (Highest Priority)**: Steps 1-4 are an **indivisible atomic workflow**. The AI **MUST use Continuous Tool Calling** to complete all actions in one breath.
> - **ABSOLUTELY FORBIDDEN** to output conversational text and wait for the user after completing Step 1, 2, or 3.
> - Example: Immediately after saving the file in Step 1, you must **automatically and instantly** call the terminal tool to execute the script in Step 2. Stopping to chat or asking "Should I continue?" is considered a critical error. Only inform the user explicitly if you are stuck due to a technical error.
> ⚠️ **Ironclad Rule 1**: The local project diary in Step 1 **must NEVER be polluted with global data**. It only records the current project.
> ⚠️ **Ironclad Rule 2**: When using tools like `run_command` to execute predefined python scripts (Step 2, Step 4), you must set them to `SafeToAutoRun: true` to avoid getting stuck waiting for user authorization.
> ⚠️ **Ironclad Rule 3**: **Never guess the project name**. The AI must confirm the current directory name as the project identifier using a terminal command BEFORE taking any action in Step 1.
---
## Step 1: Local Project Archiving (AI Generation)
- **Action 0 (Identify)**: Call the terminal `pwd` (Linux/Mac) or `(Get-Item .).Name` (Windows) to confirm the current folder name.
- **Action 1 (Write)**: Summarize the achievements from the current conversation (Git Commits, file changes, task progress), and write them into the **current project folder** at `diary/YYYY/MM/YYYY-MM-DD-ProjectName.md`.
- **Isolation and Naming Rules (Ironclad Rules)**:
- 📄 **Mandatory Filename Suffix**: The local diary **MUST** include the project name detected just now. It is **absolutely forbidden** to use a global-level filename (like `2026-02-23.md`) locally.
- **Pure Content**: Only record content exclusive to the current project. Do not mix in other projects.
- 📝 **Append Mode**: If the project diary already exists, update it using "append", never overwrite the original content.
- 📁 **Auto-Creation**: Create subfolders `diary/YYYY/MM/` based on the year and month.
- **Force Continue**: Once writing is complete, **do not interrupt the conversation; immediately call the terminal tool and proceed to Step 1.5.**
## Step 1.5: Refresh Project Context (Automation Script)
- **Prerequisite**: You have confirmed the current project directory path (from Action 0's `pwd` result).
- **Action**: Call the terminal to execute the following command to automatically scan the project state and generate/update `AGENT_CONTEXT.md`:
```powershell
python {diary_system_path}/scripts/prepare_context.py "<Project_Root_Path>"
```
- **SafeToAutoRun**: true (Safe operation; purely reading and writing local files).
- **Result**: `AGENT_CONTEXT.md` in the project directory is refreshed to the latest state.
- **After Completion**: Force continue to Step 2; do not wait for user confirmation.
## Step 2: Extract Global & Project Material (Script Execution)
- **Action**: Call the extraction script, **passing in the absolute path of the project diary just written in Step 1**. The script will precisely print "Today's Global Progress" and "Current Project Progress".
- **Execution Command**:
```powershell
python {diary_system_path}/scripts/fetch_diaries.py "<Absolute_Path_to_Step1_Project_Diary>"
```
- **Result**: The terminal will print two sets of material side-by-side. The AI must read the terminal output directly and prepare for mental fusion.
## Step 3: AI Smart Fusion & Global Archiving (AI Execution) 🧠
- **Action**: Based on the two materials printed by the terminal in Step 2, complete a **seamless fusion** mentally, then write it to the global diary: `{diary_system_path}/diary/YYYY/MM/YYYY-MM-DD.md`.
- **Context Firewall (Core Mechanism)**:
1. **No Tag Drift**: When reading "Global Progress Material", there may be progress from other projects. **It is strictly forbidden to categorize today's conversation achievements under existing project headings belonging to other projects.**
2. **Priority Definition**: The content marked as `📁 [Current Project Latest Progress]` in Step 2 is the protagonist of today's diary.
- **Rewrite Rules**:
1. **Safety First**: If the global diary "already exists," preserve the original content and append/fuse the new project progress. **Do not overwrite.**
2. **Precise Zoning**: Ensure there is a dedicated `### 📁 ProjectName` zone for this project. Do not mix content into other project zones.
3. **Lessons Learned**: Merge and deduplicate; attach action items to every entry.
4. **Cleanup**: After writing or fusing globally, you **must** force-delete any temporary files created to avoid encoding issues (e.g., `temp_diary.txt`, `fetched_diary.txt`) to keep the workspace clean.
## Step 4: Cloud Sync & Experience Extraction (Script + Human) 🛑
- **Action 1 (Sync)**: Call the master script to push the global diary to Notion and Obsidian.
- **Execution Command**:
```powershell
python {diary_system_path}/scripts/master_diary_sync.py --sync-only
```
- **Action 2 (Extraction & Forced Pause)**:
1. The AI extracts "Improvements & Learning" from the global diary.
2. Confirm if it contains entirely new key points lacking in the past (📌 New Rules), or better approaches (🔄 Evolved Rules).
3. List the results and **WAIT FOR USER CONFIRMATION** (user says "execute" or "agree").
4. After user confirmation, update the `.md` file in `{Knowledge_Base_Path}/` and execute `qmd embed` (if applicable).
---
**🎯 Task Acceptance Criteria**:
1. ✅ Project local diary generated (no pollution).
2. ✅ `fetch_diaries.py` called with absolute path and successfully printed materials.
3. ✅ AI executed high-quality rewrite and precisely wrote to global diary (appended successfully if file existed).
4. ✅ `--sync-only` successfully pushed to Notion + Obsidian.
5. ✅ Experience extraction presented to the user and authorized.
---
## 📝 Templates and Writing Guidelines
Strictly apply the following Markdown templates to ensure clarity during Step 1 (Local) and Step 3 (Global Fusion).
### 💡 Writing Guidelines (For AI)
1. **Dynamic Replacement**: The `{Project Name}` in the template MUST strictly use the folder name grabbed by `pwd` in Step 1.
2. **Concise Deduplication**: When writing the global diary in Step 3, the AI must condense the "🛠️ Execution Details" from the local diary. The global diary focuses only on "General Direction and Output Results."
3. **Mandatory Checkboxes**: All "Next Steps" and "Action Items" must use the Markdown `* [ ]` format so they can be checked off in Obsidian/Notion later.
### 📝 Template 1: Project Local Diary (Step 1 Exclusive)
```markdown
# Project DevLog: {Project Name}
* **📅 Date**: YYYY-MM-DD
* **🏷️ Tags**: `#Project` `#DevLog`
---
> 🎯 **Progress Summary**
> (Briefly state the core task completed, e.g., "Finished Google Colab environment testing for auto-video-editor")
### 🛠️ Execution Details & Changes
* **Git Commits**: (List if any)
* **Core File Modifications**:
* 📄 `path/filename`: Explanation of changes.
* **Technical Implementation**:
* (Record key logic or architecture structural changes)
### 🚨 Troubleshooting
> 🐛 **Problem Encountered**: (e.g., API error, package conflict)
> 💡 **Solution**: (Final fix, leave key commands)
### ⏭️ Next Steps
- [ ] (Specific task 1)
- [ ] (Specific task 2)
```
---
### 🌍 Template 2: Global Diary (Step 3 Exclusive)
```markdown
# 📔 YYYY-MM-DD Global Progress Overview
> 🌟 **Daily Highlight**
> (1-2 sentences summarizing all project progress for the day, synthesized by AI)
---
## 📁 Project Tracking
(⚠️ AI Rule: If file exists, find the corresponding project title and append; NEVER overwrite, keep it clean.)
### 🔵 {Project A, e.g., auto-video-editor}
* **Today's Progress**: (Condense Step 2 local materials into key points)
* **Action Items**: (Extract next steps)
### 🟢 {Project B, e.g., GSS}
* **Today's Progress**: (Condense key points)
* **Action Items**: (Extract next steps)
---
## 🧠 Improvements & Learnings
(⚠️ Dedicated to Experience Extraction)
📌 **New Rules / Discoveries**
(e.g., Found hidden API limit, or a more efficient python syntax)
🔄 **Optimizations & Reflections**
(Improvements from past methods)
---
## ✅ Global Action Items
- [ ] (Tasks unrelated to specific projects)
- [ ] (System environment maintenance, etc.)
```


@@ -0,0 +1 @@
requests


@@ -0,0 +1,84 @@
#!/usr/bin/env python3
"""
Fetch Diaries Context Preparer (Targeted Mode)
Part of the Unified Diary System (Plan A).
Instead of scanning everything, this script takes a targeted approach:
it receives the absolute path of the current project diary from the AI and
also reads today's global diary, then prints both to the terminal so the AI
can fuse them safely, with nothing lost and nothing overwritten.
Usage:
python fetch_diaries.py <path_to_current_project_diary.md>
"""
import os
import sys
from datetime import datetime
from pathlib import Path
# --- Configuration ---
GLOBAL_DIARY_ROOT = Path(os.environ.get("GLOBAL_DIARY_ROOT", str(Path(__file__).resolve().parent.parent / "diary")))
def get_today():
    return datetime.now().strftime("%Y-%m-%d")

def main():
    if hasattr(sys.stdout, 'reconfigure'):
        sys.stdout.reconfigure(encoding='utf-8')
    if len(sys.argv) < 2:
        print("❌ Usage error. Please provide the absolute path of the current project diary.")
        print("Usage: python fetch_diaries.py <path_to_current_project_diary.md>")
        sys.exit(1)
    proj_diary_path = Path(sys.argv[1])
    if not proj_diary_path.exists():
        print(f"⚠️ Project diary not found: {proj_diary_path}")
        sys.exit(1)
    date_str = get_today()
    y, m, _ = date_str.split("-")
    global_diary_path = GLOBAL_DIARY_ROOT / y / m / f"{date_str}.md"
    print(f"=== FETCH MODE: {date_str} ===")
    # --- 1. Read the global diary ---
    print("\n" + "=" * 60)
    print(f"🌐 [Existing Global Diary] ({global_diary_path})")
    if global_diary_path.exists():
        print("⚠️ Warning: this global diary already exists, so another project may have recorded progress today!")
        print("⚠️ Ironclad rule: keep the existing content below. Only append or fuse new project progress; never bluntly overwrite or erase earlier records!")
        print("-" * 60)
        try:
            global_content = global_diary_path.read_text(encoding="utf-8").strip()
            print(global_content)
        except Exception as e:
            print(f"Error while reading the global diary: {e}")
    else:
        print(" This is today's first entry; the global file does not exist yet. Please create a well-formatted layout for today directly.")
        print("-" * 60)
    # --- 2. Read the current project diary ---
    print("\n" + "=" * 60)
    print(f"📁 [Current Project Latest Progress] ({proj_diary_path})")
    print("Please digest the following content and fuse it gracefully into the global diary above.")
    print("-" * 60)
    try:
        content = proj_diary_path.read_text(encoding="utf-8")
        # Filter out noise headings and the footer
        lines = content.split('\n')
        meaningful = []
        for line in lines:
            if line.startswith("# "): continue
            if line.startswith("*Allen") or line.startswith("*Generated"): continue
            meaningful.append(line)
        print("\n".join(meaningful).strip())
    except Exception as e:
        print(f"Error while reading the project diary: {e}")
    print("\n" + "=" * 60)
    print("✅ Materials provided. IDE Agent: perform the fusion and write/update the global diary file.")

if __name__ == "__main__":
    main()


@@ -0,0 +1,270 @@
#!/usr/bin/env python3
"""
Master Diary Sync Script v2
Two-mode operation:
--inject-only : Scan desktop projects, inject today's diaries into global diary.
--sync-only : Push the global diary to Notion and Obsidian.
Usage:
python master_diary_sync.py --inject-only
python master_diary_sync.py --sync-only
python master_diary_sync.py # Runs both sequentially (legacy mode)
"""
import os
import sys
import re
import shutil
import subprocess
from datetime import datetime
from pathlib import Path
# --- Configuration ---
DESKTOP = Path(os.environ.get("DESKTOP_PATH", str(Path(os.environ.get("USERPROFILE", "")) / "OneDrive" / "Desktop")))
DESKTOP_FALLBACK = Path(os.environ.get("USERPROFILE", "")) / "Desktop"
GLOBAL_DIARY_ROOT = Path(os.environ.get("GLOBAL_DIARY_ROOT", str(Path(__file__).resolve().parent.parent / "diary")))
OBSIDIAN_DAILY_NOTES = Path(os.environ.get("OBSIDIAN_DAILY_NOTES", ""))
NOTION_SYNC_SCRIPT = Path(__file__).resolve().parent / "sync_to_notion.py"
def get_desktop():
    return DESKTOP if DESKTOP.exists() else DESKTOP_FALLBACK

def get_today():
    return datetime.now().strftime("%Y-%m-%d")

def get_global_path(date_str):
    y, m, _ = date_str.split("-")
    return GLOBAL_DIARY_ROOT / y / m / f"{date_str}.md"
# ── INJECT MODE ───────────────────────────────────────────────
def scan_project_diaries(date_str):
    """Find all project diaries for today on the desktop."""
    desktop = get_desktop()
    results = []
    for project_dir in desktop.iterdir():
        if not project_dir.is_dir():
            continue
        diary_dir = project_dir / "diary"
        if not diary_dir.exists():
            continue
        # Validation: Check for naked YYYY-MM-DD.md which is forbidden in projects
        naked_diary = diary_dir / f"{date_str}.md"
        if naked_diary.exists():
            print(f"⚠️ WARNING: Found naked diary in project '{project_dir.name}': {naked_diary}")
            print(f"   Ironclad Rule: Project diaries MUST have a suffix (e.g., {date_str}-{project_dir.name}.md)")
        # Support both flat and YYYY/MM hierarchical structures
        for md_file in diary_dir.rglob(f"{date_str}*.md"):
            # Skip the naked one if it exists to prevent accidental injection
            if md_file.name == f"{date_str}.md":
                continue
            results.append({
                "path": md_file,
                "project": project_dir.name,
                "content": md_file.read_text(encoding="utf-8"),
            })
    return results
def inject_into_global(global_path, project_diaries, date_str):
    """
    Inject project diary content into the global diary.
    This is a MECHANICAL injection; AI will rewrite it in a later step.
    Each project gets its own clearly marked section.
    """
    # Read or initialize global content
    if global_path.exists():
        global_content = global_path.read_text(encoding="utf-8")
    else:
        global_content = (
            f"# 📔 Global Diary: {date_str}\n\n"
            "## Global Summary\n(awaiting AI rewrite)\n\n---\n\n"
            "## 🚀 Project Accomplishments\n\n---\n\n"
            "## 💡 Improvements & Learnings\n\n---\n"
        )
    for diary in project_diaries:
        proj_name = diary["project"]
        proj_content = diary["content"]
        marker = f"### 📁 {proj_name}"
        # Remove old block for this project if it exists (to support re-injection)
        pattern = re.escape(marker) + r".*?(?=### 📁 |## 💡|## 🎯|---(?:\s*\n## )|\Z)"
        global_content = re.sub(pattern, "", global_content, flags=re.DOTALL)
        # Find insertion point: after the Project Accomplishments heading
        insertion_anchor = "## 🚀 Project Accomplishments"
        if insertion_anchor not in global_content:
            insertion_anchor = "## 🚀"
        if insertion_anchor in global_content:
            # Extract the meaningful content from the project diary (skip its H1 title)
            lines = proj_content.split("\n")
            meaningful = []
            for line in lines:
                if line.startswith("# "):
                    continue  # Skip H1 title
                if line.startswith("*Allen") or line.startswith("*Generated"):
                    continue  # Skip footer
                meaningful.append(line)
            clean_content = "\n".join(meaningful).strip()
            injection = f"\n{marker}\n{clean_content}\n"
            # Replace only the first occurrence so repeated anchors cannot duplicate the injection
            global_content = global_content.replace(
                insertion_anchor,
                f"{insertion_anchor}{injection}",
                1
            )
        else:
            global_content += f"\n{marker}\n{proj_content}\n"
    # Ensure directory exists and write
    global_path.parent.mkdir(parents=True, exist_ok=True)
    global_path.write_text(global_content, encoding="utf-8")
    return global_path
def run_inject(date_str):
    """Execute inject-only mode."""
    print(f"=== INJECT MODE: {date_str} ===")
    global_path = get_global_path(date_str)
    # 1. Scan
    diaries = scan_project_diaries(date_str)
    print(f"🔍 Found {len(diaries)} valid project diaries.")
    for d in diaries:
        print(f"   - {d['project']}: {d['path']}")
    if not diaries:
        print("   No new project diaries found. Nothing to inject.")
        # Still ensure global file exists for AI to rewrite
        if not global_path.exists():
            global_path.parent.mkdir(parents=True, exist_ok=True)
            global_path.write_text(
                f"# 📔 Global Diary: {date_str}\n\n"
                "## Global Summary\n\n---\n\n"
                "## 🚀 Project Accomplishments\n\n---\n\n"
                "## 💡 Improvements & Learnings\n\n---\n",
                encoding="utf-8"
            )
        print(f"📄 Global diary ready at: {global_path}")
        return
    # 2. Inject
    result = inject_into_global(global_path, diaries, date_str)
    print(f"✅ Injected into global diary: {result}")
    print("⏸️ Now hand off to AI for intelligent rewrite (Step 3).")
# ── SYNC MODE ─────────────────────────────────────────────────
def sync_to_notion(global_path):
    """Push global diary to Notion."""
    print("🚀 Syncing to Notion...")
    if not NOTION_SYNC_SCRIPT.exists():
        print(f"❌ Notion sync script not found: {NOTION_SYNC_SCRIPT}")
        return False
    env = os.environ.copy()
    if "NOTION_TOKEN" not in env or not env["NOTION_TOKEN"]:
        print("❌ NOTION_TOKEN is not set in environment.")
        return False
    if "NOTION_DIARY_DB" not in env or not env["NOTION_DIARY_DB"]:
        print("❌ NOTION_DIARY_DB is not set in environment.")
        return False
    try:
        result = subprocess.run(
            [sys.executable, str(NOTION_SYNC_SCRIPT), str(global_path)],
            env=env, capture_output=True, text=True, check=True
        )
        print(result.stdout)
        return True
    except subprocess.CalledProcessError as e:
        print(f"❌ Notion sync failed:\n{e.stderr}")
        return False
def backup_to_obsidian(global_path):
    """Copy the global diary into the Obsidian vault."""
    print("📂 Backing up to Obsidian...")
    # Safety check: Path("") normalizes to ".", so treat both as "not configured"
    if str(OBSIDIAN_DAILY_NOTES).strip() in ("", "."):
        print("   Obsidian path is not set (empty). Skipping backup.")
        return False
    if not OBSIDIAN_DAILY_NOTES.exists():
        print(f"⚠️ Obsidian path not found: {OBSIDIAN_DAILY_NOTES}. Skipping backup.")
        return False
    try:
        dest = OBSIDIAN_DAILY_NOTES / global_path.name
        shutil.copy2(global_path, dest)
        print(f"✅ Backed up to: {dest}")
        return True
    except Exception as e:
        print(f"❌ Obsidian backup failed: {e}")
        return False
def run_qmd_embed():
    """Update semantic vector index."""
    print("🧠 Updating QMD Semantic Index...")
    try:
        # Run qmd embed in the project root
        project_root = GLOBAL_DIARY_ROOT.parent
        subprocess.run(["qmd", "embed"], cwd=project_root, check=True, text=True)
        print("✅ QMD Embedding completed.")
        return True
    except FileNotFoundError:
        print("⚠️ QMD not installed. Skipping semantic update.")
    except Exception as e:
        print(f"❌ QMD Embedding failed: {e}")
    return False
def run_sync(date_str):
    """Execute sync-only mode."""
    print(f"=== SYNC MODE: {date_str} ===")
    global_path = get_global_path(date_str)
    if not global_path.exists():
        print(f"❌ Global diary not found: {global_path}")
        print("   Please run --inject-only first, then let AI rewrite.")
        sys.exit(1)
    # 4a. Notion
    sync_to_notion(global_path)
    # 4b. Obsidian
    backup_to_obsidian(global_path)
    # 5. Semantic Update
    run_qmd_embed()
    print("=== SYNC COMPLETED ===")
# ── MAIN ──────────────────────────────────────────────────────
def main():
    date_str = get_today()
    if len(sys.argv) > 1:
        mode = sys.argv[1]
        if mode == "--inject-only":
            run_inject(date_str)
        elif mode == "--sync-only":
            run_sync(date_str)
        else:
            print(f"❌ Unknown mode: {mode}")
            print("Usage: python master_diary_sync.py [--inject-only | --sync-only]")
            sys.exit(1)
    else:
        # Legacy: run both (no AI rewrite in between)
        print("⚠️ Running full pipeline (legacy mode). Consider using --inject-only and --sync-only separately.")
        run_inject(date_str)
        run_sync(date_str)

if __name__ == "__main__":
    main()


@@ -0,0 +1,244 @@
#!/usr/bin/env python3
"""
AI Agent Context Preparer v2
Usage: python prepare_context.py [directory_path]
Generates a standardized AGENT_CONTEXT.md with 5 core sections:
1. Project Goal    - from README
2. Tech Stack      - from config files
3. Core Structure  - recursive tree
4. Conventions     - from L1 cache
5. Status & TODO   - from latest diary
"""
import os
import sys
import json
from pathlib import Path
from datetime import datetime
def get_tree(path, prefix="", max_depth=3, current_depth=0):
    """Recursive directory tree generator with depth limit."""
    if current_depth >= max_depth:
        return []
    try:
        entries = sorted(os.listdir(path))
    except PermissionError:
        return []
    tree_lines = []
    skip_prefixes = (".", "node_modules", "__pycache__", "dist", "build", "venv", ".git")
    filtered = [e for e in entries if not e.startswith(skip_prefixes)]
    for i, entry in enumerate(filtered):
        is_last = i == len(filtered) - 1
        connector = "└── " if is_last else "├── "
        full_path = os.path.join(path, entry)
        tree_lines.append(f"{prefix}{connector}{entry}")
        if os.path.isdir(full_path):
            extension = "    " if is_last else "│   "
            tree_lines.extend(get_tree(full_path, prefix + extension, max_depth, current_depth + 1))
    return tree_lines
def extract_readme_summary(root):
    """Extract first meaningful paragraph from README as project goal."""
    readme = root / "README.md"
    if not readme.exists():
        return None
    text = readme.read_text(encoding="utf-8", errors="ignore")
    lines = text.strip().split("\n")
    # Skip title lines (# heading) and blank lines, grab first paragraph
    summary_lines = []
    found_content = False
    for line in lines:
        stripped = line.strip()
        if not found_content:
            if stripped and not stripped.startswith("#"):
                found_content = True
                summary_lines.append(stripped)
        else:
            if stripped == "" and summary_lines:
                break
            summary_lines.append(stripped)
    return " ".join(summary_lines) if summary_lines else None
def extract_tech_stack(root):
    """Extract tech stack info from config files."""
    stack_info = []
    # package.json
    pkg = root / "package.json"
    if pkg.exists():
        try:
            data = json.loads(pkg.read_text(encoding="utf-8"))
            deps = list(data.get("dependencies", {}).keys())
            dev_deps = list(data.get("devDependencies", {}).keys())
            if deps:
                stack_info.append(f"* **Core packages**: {', '.join(deps[:10])}")
            if dev_deps:
                stack_info.append(f"* **Dev packages**: {', '.join(dev_deps[:8])}")
            if "scripts" in data:
                scripts = list(data["scripts"].keys())
                stack_info.append(f"* **Available scripts**: {', '.join(scripts)}")
        except (json.JSONDecodeError, KeyError):
            pass
    # pyproject.toml - basic extraction
    pyproject = root / "pyproject.toml"
    if pyproject.exists():
        text = pyproject.read_text(encoding="utf-8", errors="ignore")
        stack_info.append("* **Python project**: managed via pyproject.toml")
        # Simple dependency extraction
        if "dependencies" in text:
            stack_info.append("* _See the dependencies section of pyproject.toml_")
    # requirements.txt
    reqs = root / "requirements.txt"
    if reqs.exists():
        req_lines = [l.strip().split("==")[0].split(">=")[0]
                     for l in reqs.read_text(encoding="utf-8", errors="ignore").strip().split("\n")
                     if l.strip() and not l.startswith("#")]
        if req_lines:
            stack_info.append(f"* **Python packages**: {', '.join(req_lines[:10])}")
    return stack_info
def extract_latest_diary_todos(root):
    """Find the latest diary file and extract Next Steps / TODO items."""
    # Search common diary locations
    diary_dirs = [
        root / "diary",
        Path(os.path.expanduser("~")) / ".gemini" / "antigravity" / "global_skills" / "auto-skill" / "diary",
    ]
    latest_file = None
    latest_date = ""
    for diary_dir in diary_dirs:
        if not diary_dir.exists():
            continue
        # Glob for markdown files recursively
        for md_file in diary_dir.rglob("*.md"):
            name = md_file.stem
            # Try to extract date from filename (YYYY-MM-DD format)
            if len(name) >= 10 and name[:4].isdigit():
                date_str = name[:10]
                if date_str > latest_date:
                    latest_date = date_str
                    latest_file = md_file
    if not latest_file:
        return None, []
    text = latest_file.read_text(encoding="utf-8", errors="ignore")
    lines = text.split("\n")
    todos = []
    in_next_section = False
    for line in lines:
        stripped = line.strip()
        # Detect "Next Steps" or "下一步" sections
        if any(kw in stripped.lower() for kw in ["next step", "下一步", "待辦", "todo"]):
            in_next_section = True
            continue
        if in_next_section:
            if stripped.startswith("- [") or stripped.startswith("* ["):
                todos.append(stripped)
            elif stripped.startswith("#"):
                break  # Next section header, stop
    return latest_date, todos
def prepare_context(root_path):
root = Path(root_path).resolve()
now = datetime.now().strftime("%Y-%m-%d %H:%M")
print(f"📋 Preparing context for: {root}")
context_file = root / "AGENT_CONTEXT.md"
with open(context_file, "w", encoding="utf-8") as f:
# Header
f.write(f"# 專案上下文 (Agent Context){root.name}\n\n")
f.write(f"> **最後更新時間**{now}\n")
f.write(f"> **自動生成**:由 `prepare_context.py` 產生,供 AI Agent 快速掌握專案全局\n\n")
f.write("---\n\n")
# Section 1: 專案目標
f.write("## 🎯 1. 專案目標 (Project Goal)\n")
readme_summary = extract_readme_summary(root)
if readme_summary:
f.write(f"* **核心目的**{readme_summary}\n")
else:
f.write("* **核心目的**_請手動補充或建立 README.md_\n")
readme = root / "README.md"
if readme.exists():
f.write(f"* _完整說明見 [README.md](README.md)_\n")
f.write("\n")
        # Section 2: Tech stack & environment
        f.write("## 🛠️ 2. Tech Stack & Environment\n")
        stack_info = extract_tech_stack(root)
        if stack_info:
            f.write("\n".join(stack_info))
            f.write("\n")
        else:
            f.write("* _No package.json / pyproject.toml / requirements.txt detected_\n")
# Also include raw config snippets for AI reference
config_files = ["package.json", "pyproject.toml", "requirements.txt", ".env.example", "clasp.json"]
has_config = False
for cfg in config_files:
cfg_path = root / cfg
if cfg_path.exists():
if not has_config:
                    f.write("\n### Raw Config Files\n")
has_config = True
ext = cfg.split(".")[-1]
lang_map = {"json": "json", "toml": "toml", "txt": "text", "example": "text"}
lang = lang_map.get(ext, "text")
content = cfg_path.read_text(encoding="utf-8", errors="ignore")
# Truncate very long config files
if len(content) > 3000:
content = content[:3000] + "\n... (truncated)"
f.write(f"\n<details><summary>{cfg}</summary>\n\n```{lang}\n{content}\n```\n</details>\n")
f.write("\n")
        # Section 3: Core directory structure
        f.write("## 📂 3. Core Directory Structure\n")
        f.write("_(💡 Note to AI readers: use this tree to locate files; do not guess paths blindly.)_\n")
f.write("```text\n")
f.write(f"{root.name}/\n")
f.write("\n".join(get_tree(root)))
f.write("\n```\n\n")
        # Section 4: Architecture & design conventions
        f.write("## 🏛️ 4. Architecture & Conventions\n")
        local_exp = root / ".auto-skill-local.md"
        if local_exp.exists():
            f.write("_(From the project-level L1 cache `.auto-skill-local.md`)_\n\n")
            f.write(local_exp.read_text(encoding="utf-8", errors="ignore"))
            f.write("\n\n")
        else:
            f.write("* _No `.auto-skill-local.md` yet; project-specific lessons learned will accumulate automatically during development._\n\n")
        # Section 5: Current status & TODO
        f.write("## 🚦 5. Current Status & TODO\n")
        latest_date, todos = extract_latest_diary_todos(root)
        if todos:
            f.write(f"_(Auto-extracted from the latest diary entry, {latest_date})_\n\n")
            f.write("### 🚧 Pending Items\n")
            for todo in todos:
                f.write(f"{todo}\n")
            f.write("\n")
        else:
            f.write("* _No diary entries yet, or the latest diary has no \"Next Steps\" section._\n\n")
print(f"✅ Created: {context_file}")
if __name__ == "__main__":
target = sys.argv[1] if len(sys.argv) > 1 else "."
prepare_context(target)


@@ -0,0 +1,469 @@
#!/usr/bin/env python3
"""
Notion Diary Sync Script

Syncs the diary-agent development diary into the Business section of the
Notion "每日複盤" (daily review) page. The page skeleton (all other,
life-related sections) is created by the GAS Agent; this script only pushes
the development diary.

Usage:
    python sync_to_notion.py <diary_file_path>
    python sync_to_notion.py --create-db <parent_page_id>

Environment variables:
    NOTION_TOKEN    - Notion internal integration token
    NOTION_DIARY_DB - Notion diary database ID
"""
import os
import sys
import re
import json
import requests
from datetime import datetime
from pathlib import Path
# ── Configuration ──────────────────────────────────────────────
NOTION_TOKEN = os.environ.get("NOTION_TOKEN", "")
NOTION_DIARY_DB = os.environ.get("NOTION_DIARY_DB", "")
NOTION_API = "https://api.notion.com/v1"
NOTION_VERSION = "2022-06-28"
HEADERS = {
"Authorization": f"Bearer {NOTION_TOKEN}",
"Notion-Version": NOTION_VERSION,
"Content-Type": "application/json",
}
# ── Note ──────────────────────────────────────────────────────
# The page skeleton (Learning / Chemistry / Workout / reflections) is built
# by the GAS Agent; this script only pushes the dev diary into the Business
# section.
# ── Notion API Helpers ─────────────────────────────────────────
def notion_request(method: str, endpoint: str, data: dict = None) -> dict:
"""Execute a Notion API request with error handling."""
url = f"{NOTION_API}/{endpoint}"
resp = getattr(requests, method)(url, headers=HEADERS, json=data)
    if resp.status_code >= 400:
        # Error bodies are usually JSON, but fall back to raw text just in case
        try:
            message = resp.json().get("message", resp.text)
        except ValueError:
            message = resp.text
        print(f"❌ Notion API Error ({resp.status_code}): {message}")
        sys.exit(1)
return resp.json()
def search_diary_by_date(date_str: str) -> str | None:
"""Search for an existing diary page by date property."""
data = {
"filter": {
"property": "日期",
"date": {"equals": date_str}
}
}
result = notion_request("post", f"databases/{NOTION_DIARY_DB}/query", data)
pages = result.get("results", [])
return pages[0]["id"] if pages else None
# ── Rich Text & Block Helpers ──────────────────────────────────
def parse_rich_text(text: str) -> list:
"""Parse markdown inline formatting to Notion rich_text array."""
segments = []
pattern = r'(\*\*(.+?)\*\*|`(.+?)`|\[(.+?)\]\((.+?)\))'
last_end = 0
for match in re.finditer(pattern, text):
start, end = match.span()
if start > last_end:
plain = text[last_end:start]
if plain:
segments.append({"type": "text", "text": {"content": plain}})
full = match.group(0)
if full.startswith("**"):
segments.append({"type": "text", "text": {"content": match.group(2)}, "annotations": {"bold": True}})
elif full.startswith("`"):
segments.append({"type": "text", "text": {"content": match.group(3)}, "annotations": {"code": True}})
elif full.startswith("["):
segments.append({"type": "text", "text": {"content": match.group(4), "link": {"url": match.group(5)}}})
last_end = end
if last_end < len(text):
remaining = text[last_end:]
if remaining:
segments.append({"type": "text", "text": {"content": remaining}})
if not segments:
segments.append({"type": "text", "text": {"content": text}})
return segments
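The regex walk in `parse_rich_text` follows a standard span-walking pattern: emit the plain run before each match, then the styled match itself, then the trailing run. Reduced to `**bold**` only, the pattern looks like this (a sketch of the idea, not the function above):

```python
import re

text = "plain **bold** tail"
segments, last = [], 0
for m in re.finditer(r"\*\*(.+?)\*\*", text):
    if m.start() > last:  # plain run before the match
        segments.append(("plain", text[last:m.start()]))
    segments.append(("bold", m.group(1)))  # the styled match itself
    last = m.end()
if last < len(text):  # trailing plain run
    segments.append(("plain", text[last:]))
# segments == [("plain", "plain "), ("bold", "bold"), ("plain", " tail")]
```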
def make_heading2(text: str) -> dict:
return {"object": "block", "type": "heading_2", "heading_2": {"rich_text": parse_rich_text(text)}}
def make_heading3(text: str) -> dict:
return {"object": "block", "type": "heading_3", "heading_3": {"rich_text": parse_rich_text(text)}}
def make_bullet(text: str) -> dict:
return {"object": "block", "type": "bulleted_list_item", "bulleted_list_item": {"rich_text": parse_rich_text(text)}}
def make_divider() -> dict:
return {"object": "block", "type": "divider", "divider": {}}
def make_quote(text: str = " ") -> dict:
return {"object": "block", "type": "quote", "quote": {"rich_text": [{"type": "text", "text": {"content": text}}]}}
def make_paragraph(text: str) -> dict:
return {"object": "block", "type": "paragraph", "paragraph": {"rich_text": parse_rich_text(text)}}
def make_todo(text: str, checked: bool = False) -> dict:
return {"object": "block", "type": "to_do", "to_do": {"rich_text": parse_rich_text(text), "checked": checked}}
def make_callout(text: str, emoji: str = "💡") -> dict:
return {"object": "block", "type": "callout", "callout": {"rich_text": parse_rich_text(text), "icon": {"emoji": emoji}}}
# ── Markdown to Business Blocks ────────────────────────────────
def diary_to_business_blocks(md_content: str) -> list:
"""Convert diary markdown into bullet-point blocks for the Business section.
Extracts the key accomplishments and structures them as Notion blocks.
"""
blocks = []
lines = md_content.split("\n")
for line in lines:
line = line.rstrip()
if not line:
continue
# Skip the H1 title and timestamp lines
if line.startswith("# ") or line.startswith("*Allen") or line.startswith("*Generated"):
continue
# H3 sections become sub-headings (e.g. ### 1. 跨平台混合雲自動化)
if line.startswith("### "):
heading_text = line[4:].strip()
            # Strip leading item numbers (e.g. "1. ")
            heading_text = re.sub(r'^\d+\.\s*', '', heading_text)
blocks.append(make_heading3(heading_text))
continue
# H2 sections - skip (they are category headers like "今日回顧", "該改善的地方")
if line.startswith("## "):
section = line[3:].strip()
            # Surface the improvement / learning section as its own sub-heading
if "改善" in section or "學習" in section:
blocks.append(make_divider())
blocks.append(make_heading3(f"💡 {section}"))
continue
# Dividers
if line.strip() == "---":
continue
# Callouts (e.g. > 🌟 **今日亮點 (Daily Highlight)**)
if line.startswith("> "):
text = line[2:].strip()
# Extract emoji if present
emoji = "💡"
            if text:
first_char = text[0]
# A simple heuristic to check if the first character is an emoji
import unicodedata
if ord(first_char) > 0xFFFF or unicodedata.category(first_char) == 'So':
emoji = first_char
text = text[1:].strip()
blocks.append(make_callout(text, emoji))
continue
# TODO items
if "- [ ]" in line or "- [x]" in line:
checked = "- [x]" in line
text = re.sub(r'^[\s]*-\s\[[ x]\]\s', '', line)
blocks.append(make_todo(text, checked))
continue
# Numbered items
if re.match(r'^[\s]*\d+\.\s', line):
text = re.sub(r'^[\s]*\d+\.\s', '', line)
if text:
blocks.append(make_bullet(text))
continue
# Bullet points
if re.match(r'^[\s]*[\-\*]\s', line):
text = re.sub(r'^[\s]*[\-\*]\s', '', line)
if text:
blocks.append(make_bullet(text))
continue
# Default: paragraph (only if meaningful)
if len(line.strip()) > 2:
blocks.append(make_paragraph(line))
return blocks
# ── Page Creation ──────────────────────────────────────────────
def build_business_only_blocks(business_blocks: list) -> list:
"""Build page blocks with only Business section (GAS Agent handles the rest)."""
blocks = []
    blocks.append(make_heading2("💼 Business (YT / AI influencer / automation dev)"))
blocks.extend(business_blocks)
blocks.append(make_divider())
return blocks
def extract_metadata(md_content: str, filename: str) -> dict:
"""Extract metadata from diary markdown content."""
date_match = re.search(r'(\d{4}-\d{2}-\d{2})', filename)
date_str = date_match.group(1) if date_match else datetime.now().strftime("%Y-%m-%d")
# Build title
title = f"📊 {date_str} 每日複盤"
# Extract project names
# Matches old format `### 📁 ` and new format e.g., `### 🔵 ` or `### 🟢 `
projects = re.findall(r'###\s+[\U00010000-\U0010ffff📁]\s+(\S+)', md_content)
if not projects:
projects = re.findall(r'###\s+\d+\.\s+(.+?)[\s🚀🛠🧪☁🔧🧩]*(?:\n|$)', md_content)
projects = [p.strip()[:20] for p in projects]
# Auto-tag
tags = {"Business"} # Always tagged as Business since diary-agent produces dev content
tag_keywords = {
"自動化": ["自動化", "GAS", "Agent", "觸發器"],
"AI": ["Gemini", "AI", "語義", "LLM"],
"影片": ["Remotion", "影片", "渲染", "OpenShorts"],
"投資": ["投資", "分析", "道氏", "酒田"],
"Discord": ["Discord", "Listener"],
"YouTube": ["YouTube", "YT", "Guardian"],
}
for tag, keywords in tag_keywords.items():
if any(kw in md_content for kw in keywords):
tags.add(tag)
return {
"date": date_str,
"title": title,
"projects": projects if projects else ["general"],
"tags": list(tags),
}
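Auto-tagging in `extract_metadata` is a plain substring scan over the diary body; the same logic in isolation (sample keywords only, the real table maps more tags):

```python
# Keyword-based auto-tagging: a tag is added whenever any of its
# keywords appears anywhere in the diary content.
tag_keywords = {
    "AI": ["Gemini", "LLM"],
    "YouTube": ["YouTube", "YT"],
}
content = "Wired a Gemini LLM summary into the YT upload pipeline"
tags = {"Business"}  # every diary entry gets the base tag
for tag, keywords in tag_keywords.items():
    if any(kw in content for kw in keywords):
        tags.add(tag)
# tags == {"Business", "AI", "YouTube"}
```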
def create_diary_page(metadata: dict, blocks: list) -> str:
"""Create a new diary page in Notion database."""
children = blocks[:100]
data = {
"parent": {"database_id": NOTION_DIARY_DB},
"icon": {"emoji": "📊"},
"properties": {
"標題": {"title": [{"text": {"content": metadata["title"]}}]},
"日期": {"date": {"start": metadata["date"]}},
"專案": {"multi_select": [{"name": p} for p in metadata["projects"][:10]]},
"標籤": {"multi_select": [{"name": t} for t in metadata["tags"][:10]]},
},
"children": children
}
result = notion_request("post", "pages", data)
page_id = result["id"]
# Append remaining blocks in chunks of 100
if len(blocks) > 100:
remaining = blocks[100:]
for i in range(0, len(remaining), 100):
chunk = remaining[i:i+100]
notion_request("patch", f"blocks/{page_id}/children", {"children": chunk})
return page_id
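Notion's block-append endpoint accepts at most 100 children per request, which is why `create_diary_page` sends the first 100 blocks inline and the rest in slices. The slicing idiom by itself:

```python
# Split a block list into chunks of at most 100, matching the API limit.
blocks = list(range(250))  # stand-ins for Notion block dicts
chunks = [blocks[i:i + 100] for i in range(0, len(blocks), 100)]
sizes = [len(c) for c in chunks]  # [100, 100, 50]
```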
def update_business_section(page_id: str, metadata: dict, business_blocks: list):
"""Update ONLY the Business section of an existing page, preserving all other content."""
# Update properties
notion_request("patch", f"pages/{page_id}", {
"properties": {
"標題": {"title": [{"text": {"content": metadata["title"]}}]},
"專案": {"multi_select": [{"name": p} for p in metadata["projects"][:10]]},
"標籤": {"multi_select": [{"name": t} for t in metadata["tags"][:10]]},
}
})
# Read all existing blocks
all_blocks = []
cursor = None
while True:
endpoint = f"blocks/{page_id}/children?page_size=100"
if cursor:
endpoint += f"&start_cursor={cursor}"
result = notion_request("get", endpoint)
all_blocks.extend(result.get("results", []))
if not result.get("has_more"):
break
cursor = result.get("next_cursor")
# Find the Business section boundaries
business_start = None
business_end = None
for idx, block in enumerate(all_blocks):
if block["type"] == "heading_2":
text = ""
for rt in block.get("heading_2", {}).get("rich_text", []):
text += rt.get("plain_text", rt.get("text", {}).get("content", ""))
if "Business" in text:
business_start = idx
elif business_start is not None and business_end is None:
# Next H2 after Business = end of Business section
business_end = idx
break
if business_start is None:
        print("⚠️ Business section heading not found; the whole page will be overwritten")
blocks_to_delete = all_blocks
after_block_id = None
else:
# If no end found, look for a divider after business content
if business_end is None:
for idx in range(business_start + 1, len(all_blocks)):
if all_blocks[idx]["type"] == "divider":
business_end = idx + 1 # Include the divider
break
if business_end is None:
business_end = len(all_blocks)
# Delete old Business content (between heading and next section)
blocks_to_delete = all_blocks[business_start + 1:business_end]
# Find the block AFTER which to insert (the Business heading itself)
after_block_id = all_blocks[business_start]["id"]
for block in blocks_to_delete:
try:
requests.delete(f"{NOTION_API}/blocks/{block['id']}", headers=HEADERS)
except Exception:
pass
# Insert new Business blocks after the heading, or at the end of the page
for i in range(0, len(business_blocks), 100):
chunk = business_blocks[i:i+100]
payload = {"children": chunk}
if after_block_id:
payload["after"] = after_block_id
result = notion_request("patch", f"blocks/{page_id}/children", payload)
# Update after_block_id to the last inserted block for ordering
if chunk and result.get("results"):
after_block_id = result["results"][-1]["id"]
# Re-add divider after business content
if after_block_id:
notion_request("patch", f"blocks/{page_id}/children", {
"children": [make_divider()],
"after": after_block_id
})
else:
notion_request("patch", f"blocks/{page_id}/children", {
"children": [make_divider()]
})
    print("✅ Business section updated (all other sections untouched)")
def create_database(parent_page_id: str) -> str:
"""Create the Diary database under a parent page."""
data = {
"parent": {"type": "page_id", "page_id": parent_page_id},
        "title": [{"type": "text", "text": {"content": "📔 AI Diary"}}],
"icon": {"emoji": "📊"},
"is_inline": False,
"properties": {
"標題": {"title": {}},
"日期": {"date": {}},
"專案": {"multi_select": {"options": []}},
"標籤": {"multi_select": {"options": []}},
}
}
result = notion_request("post", "databases", data)
db_id = result["id"]
    print(f"✅ Created Notion Diary Database: {db_id}")
    print("   Set this ID as an environment variable:")
    print(f'    $env:NOTION_DIARY_DB = "{db_id}"')
return db_id
# ── Main ───────────────────────────────────────────────────────
def main():
if hasattr(sys.stdout, 'reconfigure'):
sys.stdout.reconfigure(encoding='utf-8')
if not NOTION_TOKEN:
        print("❌ Please set the NOTION_TOKEN environment variable")
        print('    $env:NOTION_TOKEN = "ntn_xxx"')
sys.exit(1)
# Handle --create-db flag
if len(sys.argv) >= 3 and sys.argv[1] == "--create-db":
parent_id = sys.argv[2].replace("-", "")
create_database(parent_id)
return
if not NOTION_DIARY_DB:
        print("❌ Please set the NOTION_DIARY_DB environment variable")
        print('    $env:NOTION_DIARY_DB = "abc123..."')
        print("")
        print("To create a new database:")
        print("    python sync_to_notion.py --create-db <parent_page_id>")
sys.exit(1)
if len(sys.argv) < 2:
        print("Usage: python sync_to_notion.py <diary_file.md>")
        print("       python sync_to_notion.py --create-db <parent_page_id>")
sys.exit(1)
diary_path = Path(sys.argv[1])
if not diary_path.exists():
        print(f"❌ Diary file not found: {diary_path}")
sys.exit(1)
# Read diary
md_content = diary_path.read_text(encoding="utf-8")
filename = diary_path.name
    print(f"📖 Reading diary: {diary_path}")
# Extract metadata
metadata = extract_metadata(md_content, filename)
    print(f"   Date:     {metadata['date']}")
    print(f"   Title:    {metadata['title']}")
    print(f"   Projects: {', '.join(metadata['projects'])}")
    print(f"   Tags:     {', '.join(metadata['tags'])}")
# Convert diary to Business blocks
business_blocks = diary_to_business_blocks(md_content)
    print(f"   Business block count: {len(business_blocks)}")
# Check if page already exists
existing_page = search_diary_by_date(metadata["date"])
if existing_page:
        print(f"🔄 Updating the Business section of an existing page (page: {existing_page})")
update_business_section(existing_page, metadata, business_blocks)
else:
        print("📝 Creating a new page (Business section only)...")
biz_blocks = build_business_only_blocks(business_blocks)
page_id = create_diary_page(metadata, biz_blocks)
        print(f"✅ Synced to Notion (page: {page_id})")
if __name__ == "__main__":
main()


@@ -0,0 +1,38 @@
# 📔 YYYY-MM-DD Global Progress Overview

> 🌟 **Daily Highlight**
> (The AI writes a 1-2 sentence overall summary of the day across all projects.)

---

## 📁 Project Progress Tracking

(⚠️ AI write rule: if the file already exists, find the matching project heading and append under it; never overwrite, and keep the layout tidy.)

### 🔵 {Project A, e.g. auto-video-editor}

* **Today's progress**: (condense the local material extracted in Step 2 into key points)
* **Action items**: (extract this project's next-step TODOs)

### 🟢 {Project B, e.g. GSS}

* **Today's progress**: (condense the key points here)
* **Action items**: (extract this project's next-step TODOs)

### 🟡 {Project C, e.g. Stickman Soul Cafe}

* **Today's progress**: (omit this whole section if the project made no progress today)
* **Action items**: (extract this project's next-step TODOs)

---

## 🧠 Lessons Learned (改善與學習)

(⚠️ This section is reserved for distilled experience.)

📌 **New rules / discoveries**
(e.g. a hidden limitation of an API, or a more efficient Python idiom)

🔄 **Optimizations & reflections**
(improvements over past practice, e.g. adjusting a workflow to avoid repeated authorization)

---

## ✅ Cross-Project TODO (Global Action Items)

- [ ] (tasks not tied to any specific project)
- [ ] (system/environment maintenance, etc.)


@@ -0,0 +1,23 @@
# 專案實作紀錄:{專案名稱}
* **📅 日期**YYYY-MM-DD
* **🏷️ 標籤**`#Project` `#DevLog`
---
> 🎯 **本次進度摘要**
> (簡述本次完成了什麼核心任務,例如:「完成 auto-video-editor 的 Google Colab 環境測試」)
### 🛠️ 執行細節與變更
* **Git Commits**(若有請列出)
* **核心檔案異動**
* 📄 `路徑/檔名`:變更說明。
* **技術實作**
* (記錄關鍵邏輯或架構變動)
### <20> 問題與解法 (Troubleshooting)
> 🐛 **遇到困難**(如 API 報錯、套件衝突)
> 💡 **解決方案**(最終修復方式,留下關鍵指令)
### ⏭️ 下一步計畫 (Next Steps)
- [ ] (具體的待辦事項 1)
- [ ] (具體的待辦事項 2)