Merge pull request #186 from SnakeEye-sudo/feat/add-new-skills
feat: add 3 new community skills (gemini-api-integration, llm-prompt-optimizer, saas-mvp-launcher)
188 skills/gemini-api-integration/SKILL.md (new file)
@@ -0,0 +1,188 @@
---
name: gemini-api-integration
description: "Use when integrating the Google Gemini API into projects. Covers model selection, multimodal inputs, streaming, function calling, and production best practices."
risk: low
source: community
date_added: "2026-03-04"
---

# Gemini API Integration

## Overview

This skill guides AI agents through integrating the Google Gemini API into applications — from basic text generation to advanced multimodal, function calling, and streaming use cases. It covers the full Gemini SDK lifecycle with production-grade patterns.

## When to Use This Skill

- Use when setting up the Gemini API for the first time in a Node.js, Python, or browser project
- Use when implementing multimodal inputs (text + image/audio/video)
- Use when adding streaming responses to improve perceived latency
- Use when implementing function calling / tool use with Gemini
- Use when optimizing model selection (Flash vs Pro) for cost and performance
- Use when debugging Gemini API errors, rate limits, or quota issues

## Step-by-Step Guide

### 1. Installation & Setup

**Node.js / TypeScript:**
```bash
npm install @google/generative-ai
```

**Python:**
```bash
pip install google-generativeai
```

Set your API key securely:
```bash
export GEMINI_API_KEY="your-api-key-here"
```

### 2. Basic Text Generation

**Node.js:**
```javascript
import { GoogleGenerativeAI } from "@google/generative-ai";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);
const model = genAI.getGenerativeModel({ model: "gemini-1.5-flash" });

const result = await model.generateContent("Explain async/await in JavaScript");
console.log(result.response.text());
```

**Python:**
```python
import google.generativeai as genai
import os

genai.configure(api_key=os.environ["GEMINI_API_KEY"])
model = genai.GenerativeModel("gemini-1.5-flash")

response = model.generate_content("Explain async/await in JavaScript")
print(response.text)
```

### 3. Streaming Responses

```javascript
const result = await model.generateContentStream("Write a detailed blog post about AI");

for await (const chunk of result.stream) {
  process.stdout.write(chunk.text());
}
```

### 4. Multimodal Input (Text + Image)

```javascript
import fs from "fs";

const imageData = fs.readFileSync("screenshot.png");
const imagePart = {
  inlineData: {
    data: imageData.toString("base64"),
    mimeType: "image/png",
  },
};

const result = await model.generateContent(["Describe this image:", imagePart]);
console.log(result.response.text());
```
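
Building the `inlineData` part by hand gets repetitive when sending several files. A small helper keeps call sites clean — a minimal sketch, where `fileToGenerativePart` is our own name, not an SDK export:

```javascript
import fs from "fs";

// Read a local file and wrap it in the inlineData shape Gemini expects.
// fileToGenerativePart is a local helper, not part of @google/generative-ai.
function fileToGenerativePart(path, mimeType) {
  return {
    inlineData: {
      data: fs.readFileSync(path).toString("base64"),
      mimeType,
    },
  };
}

// Usage:
// const result = await model.generateContent([
//   "Describe this image:",
//   fileToGenerativePart("screenshot.png", "image/png"),
// ]);
```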

### 5. Function Calling / Tool Use

```javascript
const tools = [{
  functionDeclarations: [{
    name: "get_weather",
    description: "Get current weather for a city",
    parameters: {
      type: "OBJECT",
      properties: {
        city: { type: "STRING", description: "City name" },
      },
      required: ["city"],
    },
  }],
}];

const model = genAI.getGenerativeModel({ model: "gemini-1.5-pro", tools });
const result = await model.generateContent("What's the weather in Mumbai?");

const call = result.response.functionCalls()?.[0];
if (call) {
  // Execute the actual function
  const weatherData = await getWeather(call.args.city);
  // Send result back to model
}
```
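
One way to wire up the "execute the actual function" step is a registry that maps tool names to local handlers — a sketch, where `dispatchFunctionCall` and the registry shape are our own; only the `{ name, args }` call object comes from the Gemini response:

```javascript
// Look up the handler for a function call returned by the model and invoke
// it with the model-supplied arguments. The registry maps tool names to
// local implementations; it is a local convention, not an SDK feature.
function dispatchFunctionCall(call, registry) {
  const handler = registry[call.name];
  if (!handler) {
    throw new Error(`No handler registered for tool: ${call.name}`);
  }
  return handler(call.args);
}

// Usage with the weather example:
// const registry = { get_weather: ({ city }) => getWeather(city) };
// const weatherData = await dispatchFunctionCall(call, registry);
```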

### 6. Multi-turn Chat

```javascript
const chat = model.startChat({
  history: [
    { role: "user", parts: [{ text: "You are a helpful coding assistant." }] },
    { role: "model", parts: [{ text: "Sure! I'm ready to help with code." }] },
  ],
});

const result = await chat.sendMessage("How do I reverse a string in Python?");
console.log(result.response.text());
```

### 7. Model Selection Guide

| Model | Best For | Speed | Cost |
|-------|----------|-------|------|
| `gemini-1.5-flash` | High-throughput, cost-sensitive tasks | Fast | Low |
| `gemini-1.5-pro` | Complex reasoning, long context | Medium | Medium |
| `gemini-2.0-flash` | Latest fast model, multimodal | Very Fast | Low |
| `gemini-2.0-pro` | Most capable, advanced tasks | Slow | High |

## Best Practices

- ✅ **Do:** Use `gemini-1.5-flash` for most tasks — it's fast and cost-effective
- ✅ **Do:** Always stream responses for user-facing chat UIs to reduce perceived latency
- ✅ **Do:** Store API keys in environment variables, never hard-code them
- ✅ **Do:** Implement exponential backoff for rate limit (429) errors
- ✅ **Do:** Use `systemInstruction` to set persistent model behavior
- ❌ **Don't:** Use `gemini-1.5-pro` for simple tasks — Flash is cheaper and faster
- ❌ **Don't:** Send images inline as base64 for files > 20MB — use the File API instead
- ❌ **Don't:** Ignore safety ratings in responses for production apps

## Error Handling

```javascript
const maxRetries = 3;
for (let retryCount = 0; retryCount <= maxRetries; retryCount++) {
  try {
    const result = await model.generateContent(prompt);
    return result.response.text();
  } catch (error) {
    if (error.status === 429 && retryCount < maxRetries) {
      // Rate limited — wait and retry with exponential backoff
      await new Promise(r => setTimeout(r, 2 ** retryCount * 1000));
    } else if (error.status === 400) {
      // Invalid request — check prompt or parameters
      console.error("Invalid request:", error.message);
      throw error;
    } else {
      throw error;
    }
  }
}
```
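
The backoff logic can also be factored into a reusable wrapper so every call site doesn't repeat the loop — a minimal sketch, where `withRetry` and its options are our own names, not SDK features:

```javascript
// Retry an async call on 429 (rate limit) errors with exponential backoff.
// Any other error is rethrown immediately.
async function withRetry(fn, { maxRetries = 3, baseDelayMs = 1000 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status !== 429 || attempt >= maxRetries) throw error;
      await new Promise(r => setTimeout(r, 2 ** attempt * baseDelayMs));
    }
  }
}

// Usage:
// const text = await withRetry(async () =>
//   (await model.generateContent(prompt)).response.text()
// );
```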

## Troubleshooting

**Problem:** `API_KEY_INVALID` error
**Solution:** Ensure the `GEMINI_API_KEY` environment variable is set and the key is active in Google AI Studio.

**Problem:** Response blocked by safety filters
**Solution:** Check `result.response.promptFeedback.blockReason` and adjust your prompt or safety settings.

**Problem:** Slow response times
**Solution:** Switch to `gemini-1.5-flash` and enable streaming. Consider caching repeated prompts.

**Problem:** `RESOURCE_EXHAUSTED` (quota exceeded)
**Solution:** Check your quota in the Google Cloud Console. Implement request queuing and exponential backoff.

182 skills/llm-prompt-optimizer/SKILL.md (new file)
@@ -0,0 +1,182 @@
---
name: llm-prompt-optimizer
description: "Use when improving prompts for any LLM. Applies proven prompt engineering techniques to boost output quality, reduce hallucinations, and cut token usage."
risk: low
source: community
date_added: "2026-03-04"
---

# LLM Prompt Optimizer

## Overview

This skill transforms weak, vague, or inconsistent prompts into precision-engineered instructions that reliably produce high-quality outputs from any LLM (Claude, Gemini, GPT-4, Llama, etc.). It applies systematic prompt engineering frameworks — from zero-shot to few-shot, chain-of-thought, and structured output patterns.

## When to Use This Skill

- Use when a prompt returns inconsistent, vague, or hallucinated results
- Use when you need structured/JSON output from an LLM reliably
- Use when designing system prompts for AI agents or chatbots
- Use when you want to reduce token usage without sacrificing quality
- Use when implementing chain-of-thought reasoning for complex tasks
- Use when prompts work on one model but fail on another

## Step-by-Step Guide

### 1. Diagnose the Weak Prompt

Before optimizing, identify which problem pattern applies:

| Problem | Symptom | Fix |
|---------|---------|-----|
| Too vague | Generic, unhelpful answers | Add role + context + constraints |
| No structure | Unformatted, hard-to-parse output | Specify output format explicitly |
| Hallucination | Confident wrong answers | Add "say I don't know if unsure" |
| Inconsistent | Different answers each run | Add few-shot examples |
| Too long | Verbose, padded responses | Add length constraints |

### 2. Apply the RSCIT Framework

Every optimized prompt should have:

- **R** — **Role**: Who is the AI in this interaction?
- **S** — **Situation**: What context does it need?
- **C** — **Constraints**: What are the rules and limits?
- **I** — **Instructions**: What exactly should it do?
- **T** — **Template**: What should the output look like?

**Before (weak prompt):**
```
Explain machine learning.
```

**After (optimized prompt):**
```
You are a senior ML engineer explaining concepts to a junior developer.

Context: The developer has 1 year of Python experience but no ML background.

Task: Explain supervised machine learning in simple terms.

Constraints:
- Use an analogy from everyday life
- Maximum 200 words
- No mathematical formulas
- End with one actionable next step

Format: Plain prose, no bullet points.
```
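
When prompts are assembled in code, the framework can be captured in a small template function — a sketch under our own naming (`buildPrompt` and its field names are not from any library; the section order follows RSCIT):

```javascript
// Assemble an RSCIT-style prompt from its five parts.
// Empty or missing sections are skipped so partial prompts still render cleanly.
function buildPrompt({ role, situation, constraints = [], instructions, template }) {
  const sections = [
    role && `You are ${role}.`,
    situation && `Context: ${situation}`,
    instructions && `Task: ${instructions}`,
    constraints.length && `Constraints:\n${constraints.map(c => `- ${c}`).join("\n")}`,
    template && `Format: ${template}`,
  ];
  return sections.filter(Boolean).join("\n\n");
}
```

Rebuilding the "After" example above is then a single call with role, situation, instructions, a constraints array, and a format string.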

### 3. Chain-of-Thought (CoT) Pattern

For reasoning tasks, instruct the model to think step by step:

```
Solve this problem step by step, showing your work at each stage.
Only provide the final answer after completing all reasoning steps.

Problem: [your problem here]

Thinking process:
Step 1: [identify what's given]
Step 2: [identify what's needed]
Step 3: [apply logic or formula]
Step 4: [verify the answer]

Final Answer:
```

### 4. Few-Shot Examples Pattern

Provide 2-3 examples to establish the pattern:

```
Classify the sentiment of customer reviews as POSITIVE, NEGATIVE, or NEUTRAL.

Examples:
Review: "This product exceeded my expectations!" -> POSITIVE
Review: "It arrived broken and support was useless." -> NEGATIVE
Review: "Product works as described, nothing special." -> NEUTRAL

Now classify:
Review: "[your review here]" ->
```

### 5. Structured JSON Output Pattern

```
Extract the following information from the text below and return it as valid JSON only.
Do not include any explanation or markdown — just the raw JSON object.

Schema:
{
  "name": string,
  "email": string | null,
  "company": string | null,
  "role": string | null
}

Text: [input text here]
```
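
Even with a strict prompt, models occasionally wrap the JSON in markdown fences or add stray whitespace, so it pays to parse defensively on the consuming side — a sketch, where `extractJson` is our own helper:

```javascript
// Parse a model response that should be a JSON object, tolerating optional
// ```json fences and surrounding whitespace. Throws if no object is found.
function extractJson(text) {
  const stripped = text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "");
  const start = stripped.indexOf("{");
  const end = stripped.lastIndexOf("}");
  if (start === -1 || end === -1) throw new Error("No JSON object in response");
  return JSON.parse(stripped.slice(start, end + 1));
}
```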

### 6. Reduce Hallucination Pattern

```
Answer the following question based ONLY on the provided context.
If the answer is not contained in the context, respond with exactly: "I don't have enough information to answer this."
Do not make up or infer information not present in the context.

Context:
[your context here]

Question: [your question here]
```

### 7. Prompt Compression Techniques

Reduce token count without losing effectiveness:

```
# Verbose (expensive)
"Please carefully analyze the following code and provide a detailed explanation of
what it does, how it works, and any potential issues you might find."

# Compressed (efficient, same quality)
"Analyze this code: explain what it does, how it works, and flag any issues."
```
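
To see roughly what compression saves, a character-based estimate is enough for comparisons — a sketch; the ~4 characters-per-token ratio is a common rule of thumb for English, not an exact tokenizer, so use a real tokenizer for billing-accurate counts:

```javascript
// Rough token estimate: ~4 characters per token for English text.
// Good enough to compare prompt variants; not billing-accurate.
function estimateTokens(text) {
  return Math.ceil(text.length / 4);
}

const verbose =
  "Please carefully analyze the following code and provide a detailed explanation of " +
  "what it does, how it works, and any potential issues you might find.";
const compressed =
  "Analyze this code: explain what it does, how it works, and flag any issues.";

console.log(estimateTokens(verbose), "vs", estimateTokens(compressed));
```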

## Best Practices

- ✅ **Do:** Always specify the output format (JSON, markdown, plain text, bullet list)
- ✅ **Do:** Use delimiters (```, ---) to separate instructions from content
- ✅ **Do:** Test prompts with edge cases (empty input, unusual data)
- ✅ **Do:** Version your system prompts in source control
- ✅ **Do:** Add "think step by step" for math, logic, or multi-step tasks
- ❌ **Don't:** Use negative-only instructions ("don't be verbose") — add positive alternatives
- ❌ **Don't:** Assume the model knows your codebase context — always include it
- ❌ **Don't:** Use the same prompt across different models without testing — they behave differently

## Prompt Audit Checklist

Before using a prompt in production:

- [ ] Does it have a clear role/persona?
- [ ] Is the output format explicitly defined?
- [ ] Are edge cases handled (empty input, ambiguous data)?
- [ ] Is the length appropriate (not too long/short)?
- [ ] Has it been tested on 5+ varied inputs?
- [ ] Is hallucination risk addressed for factual tasks?

## Troubleshooting

**Problem:** Model ignores format instructions
**Solution:** Move format instructions to the END of the prompt, after examples. Use strong language: "You MUST return only valid JSON."

**Problem:** Inconsistent results between runs
**Solution:** Lower the temperature setting (0.0-0.3 for factual tasks). Add more few-shot examples.

**Problem:** Prompt works in playground but fails in production
**Solution:** Check that the system prompt is being sent correctly. Verify token limits aren't being exceeded (use a token counter).

**Problem:** Output is too long
**Solution:** Add explicit word/sentence limits: "Respond in exactly 3 bullet points, each under 20 words."

218 skills/saas-mvp-launcher/SKILL.md (new file)
@@ -0,0 +1,218 @@
---
name: saas-mvp-launcher
description: "Use when planning or building a SaaS MVP from scratch. Provides a structured roadmap covering tech stack, architecture, auth, payments, and launch checklist."
risk: low
source: community
date_added: "2026-03-04"
---

# SaaS MVP Launcher

## Overview

This skill guides you through building a production-ready SaaS MVP in the shortest time possible. It covers everything from idea validation and tech stack selection to authentication, payments, database design, deployment, and launch — using modern, battle-tested tools.

## When to Use This Skill

- Use when starting a new SaaS product from scratch
- Use when you need to choose a tech stack for a web application
- Use when setting up authentication, billing, or a database for a SaaS
- Use when you want a structured launch checklist before going live
- Use when designing the architecture of a multi-tenant application
- Use when doing a technical review of an existing early-stage SaaS

## Step-by-Step Guide

### 1. Validate Before You Build

Before writing any code, validate the idea:

```
Validation checklist:
- [ ] Can you describe the problem in one sentence?
- [ ] Who is the exact customer? (not "everyone")
- [ ] What do they pay for today to solve this?
- [ ] Have you talked to 5+ potential customers?
- [ ] Will they pay $X/month for your solution?
```

**Rule:** If you can't get 3 people to pre-pay or sign a letter of intent, don't build yet.

### 2. Choose Your Tech Stack

Recommended modern SaaS stack (2026):

| Layer | Choice | Why |
|-------|--------|-----|
| Frontend | Next.js 15 + TypeScript | Full-stack, great DX, Vercel deploy |
| Styling | Tailwind CSS + shadcn/ui | Fast, accessible, customizable |
| Backend | Next.js API Routes or tRPC | Type-safe, co-located |
| Database | PostgreSQL via Supabase | Reliable, scalable, free tier |
| ORM | Prisma or Drizzle | Type-safe queries, migrations |
| Auth | Clerk or NextAuth.js | Social login, session management |
| Payments | Stripe | Industry standard, great docs |
| Email | Resend + React Email | Modern, developer-friendly |
| Deployment | Vercel (frontend) + Railway (backend) | Zero-config, fast CI/CD |
| Monitoring | Sentry + PostHog | Error tracking + analytics |

### 3. Project Structure

```
my-saas/
├── app/                  # Next.js App Router
│   ├── (auth)/           # Auth routes (login, signup)
│   ├── (dashboard)/      # Protected app routes
│   ├── (marketing)/      # Public landing pages
│   └── api/              # API routes
├── components/
│   ├── ui/               # shadcn/ui components
│   └── [feature]/        # Feature-specific components
├── lib/
│   ├── db.ts             # Database client (Prisma/Drizzle)
│   ├── stripe.ts         # Stripe client
│   └── email.ts          # Email client (Resend)
├── prisma/
│   └── schema.prisma     # Database schema
├── .env.local            # Environment variables
└── middleware.ts         # Auth middleware
```

### 4. Core Database Schema (Multi-tenant SaaS)

```prisma
model User {
  id           String        @id @default(cuid())
  email        String        @unique
  name         String?
  createdAt    DateTime      @default(now())
  subscription Subscription?
  workspaces   WorkspaceMember[]
}

model Workspace {
  id        String            @id @default(cuid())
  name      String
  slug      String            @unique
  plan      Plan              @default(FREE)
  members   WorkspaceMember[]
  createdAt DateTime          @default(now())
}

// Join table linking users to workspaces
model WorkspaceMember {
  id          String    @id @default(cuid())
  userId      String
  workspaceId String
  role        String    @default("member")
  user        User      @relation(fields: [userId], references: [id])
  workspace   Workspace @relation(fields: [workspaceId], references: [id])
}

model Subscription {
  id               String   @id @default(cuid())
  userId           String   @unique
  user             User     @relation(fields: [userId], references: [id])
  stripeCustomerId String   @unique
  stripePriceId    String
  stripeSubId      String   @unique
  status           String   // active, canceled, past_due
  currentPeriodEnd DateTime
}

enum Plan {
  FREE
  PRO
  ENTERPRISE
}
```
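
Since `slug` is unique per workspace, it is usually derived from the workspace name at creation time — a minimal sketch, where `slugify` and `uniqueSlug` are our own helpers and numeric-suffix collision handling is just one common choice:

```javascript
// Turn a workspace name into a URL-safe slug, e.g. "Acme Inc." -> "acme-inc".
function slugify(name) {
  return name
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-+|-+$/g, "");
}

// Append -2, -3, ... until the slug is free.
// `existing` is a Set of slugs already taken (e.g. loaded via Prisma).
function uniqueSlug(name, existing) {
  const base = slugify(name);
  if (!existing.has(base)) return base;
  let i = 2;
  while (existing.has(`${base}-${i}`)) i++;
  return `${base}-${i}`;
}
```

In production, enforce uniqueness at the database level too (the `@unique` constraint above) and retry on conflict, since two requests can race past the in-memory check.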

### 5. Authentication Setup (Clerk)

```typescript
// middleware.ts
import { clerkMiddleware, createRouteMatcher } from '@clerk/nextjs/server';

const isPublicRoute = createRouteMatcher([
  '/',
  '/pricing',
  '/blog(.*)',
  '/sign-in(.*)',
  '/sign-up(.*)',
  '/api/webhooks(.*)',
]);

export default clerkMiddleware((auth, req) => {
  if (!isPublicRoute(req)) {
    auth().protect();
  }
});

export const config = {
  matcher: ['/((?!.*\\..*|_next).*)', '/', '/(api|trpc)(.*)'],
};
```

### 6. Stripe Integration (Subscriptions)

```typescript
// lib/stripe.ts
import Stripe from 'stripe';

export const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, {
  apiVersion: '2025-01-27.acacia',
});

// Create a checkout session for a subscription
export async function createCheckoutSession(userId: string, priceId: string) {
  return stripe.checkout.sessions.create({
    mode: 'subscription',
    payment_method_types: ['card'],
    line_items: [{ price: priceId, quantity: 1 }],
    success_url: `${process.env.NEXT_PUBLIC_URL}/dashboard?success=true`,
    cancel_url: `${process.env.NEXT_PUBLIC_URL}/pricing`,
    metadata: { userId },
  });
}
```

### 7. Pre-Launch Checklist

**Technical:**
- [ ] Authentication works (signup, login, logout, password reset)
- [ ] Payments work end-to-end (subscribe, cancel, upgrade)
- [ ] Error monitoring configured (Sentry)
- [ ] Environment variables documented
- [ ] Database backups configured
- [ ] Rate limiting on API routes
- [ ] Input validation with Zod on all forms
- [ ] HTTPS enforced, security headers set

**Product:**
- [ ] Landing page with clear value proposition
- [ ] Pricing page with 2-3 tiers
- [ ] Onboarding flow (first value in < 5 minutes)
- [ ] Email sequences (welcome, trial ending, payment failed)
- [ ] Terms of Service and Privacy Policy pages
- [ ] Support channel (email / chat)

**Marketing:**
- [ ] Domain purchased and configured
- [ ] SEO meta tags on all pages
- [ ] Google Analytics or PostHog installed
- [ ] Social media accounts created
- [ ] Product Hunt draft ready

## Best Practices

- ✅ **Do:** Ship a working MVP in 4-6 weeks maximum, then iterate based on feedback
- ✅ **Do:** Charge from day 1 — free users don't validate product-market fit
- ✅ **Do:** Build the "happy path" first, handle edge cases later
- ✅ **Do:** Use feature flags for gradual rollouts (e.g., Vercel Edge Config)
- ✅ **Do:** Monitor user behavior from launch day — not after problems arise
- ❌ **Don't:** Build every feature before talking to customers
- ❌ **Don't:** Optimize for scale before reaching $10k MRR
- ❌ **Don't:** Build a custom auth system — use Clerk, Auth.js, or Supabase Auth
- ❌ **Don't:** Skip the onboarding flow — it's where most SaaS products lose users

## Troubleshooting

**Problem:** Users sign up but don't activate (never use the core feature)
**Solution:** Reduce steps to first value. Track with PostHog where users drop off in onboarding.

**Problem:** High churn after trial
**Solution:** Add an exit survey. Most churn is due to lack of perceived value, not price.

**Problem:** Stripe webhook events not received locally
**Solution:** Use the Stripe CLI: `stripe listen --forward-to localhost:3000/api/webhooks/stripe`

**Problem:** Database migrations failing in production
**Solution:** Always run `prisma migrate deploy` (not `prisma migrate dev`) in production environments.