` landmark wraps page content
+- [ ] Section landmarks use `aria-label` to differentiate them
diff --git a/engineering-team/epic-design/references/asset-pipeline.md b/engineering-team/epic-design/references/asset-pipeline.md
new file mode 100644
index 0000000..a318574
--- /dev/null
+++ b/engineering-team/epic-design/references/asset-pipeline.md
@@ -0,0 +1,135 @@
+# Asset Pipeline Reference
+
+Every image asset must be inspected and judged before use in any 2.5D site.
+The AI inspects, judges, and informs — it does NOT auto-remove backgrounds.
+
+---
+
+## Step 1 — Run the Inspection Script
+
+Run `scripts/inspect-assets.py` on every uploaded image before doing anything else.
+The script outputs the format, mode, size, background type, and a recommendation
+for each image. Read its output carefully.
+
+---
+
+## Step 2 — Judge Whether Background Removal Is Actually Needed
+
+The script detects whether a background exists. YOU must decide whether it matters.
+
+### Remove the background if the image is:
+- An isolated product on a studio backdrop (bottle, shoe, phone, fruit, object)
+- A character or figure that needs to float in the scene
+- A logo or icon placed at any depth layer
+- Any element at depth-2 or depth-3 that needs to "float" over other content
+- An asset where the background colour will visibly clash with the site background
+
+### Keep the background if the image is:
+- A screenshot of a website, app UI, dashboard, or software
+- A photograph used as a section background or depth-0 fill
+- An artwork, poster, or illustration that is viewed as a complete piece
+- A device mockup or "image inside a card/frame" design element
+- A photo where the background is part of the visual content
+- Any image placed at depth-0 — it IS the background, keep it
+
+### When unsure — ask the role:
+> "Does this image need to float freely over other content?"
+> Yes → remove bg. No → keep it.
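
That judgement can be sketched as a tiny helper for tooling around the pipeline (a sketch — the role names are illustrative, not a fixed vocabulary; the real call stays with you):

```javascript
// Roles that need to float freely over other content → remove background
const FLOATING_ROLES = new Set(['product', 'character', 'logo', 'decoration']);
// Roles viewed as complete pieces → keep background
const KEEP_ROLES = new Set(['screenshot', 'photo-bg', 'artwork', 'mockup']);

function shouldRemoveBackground(role, depth) {
  if (depth === 0) return false;             // depth-0 IS the background
  if (KEEP_ROLES.has(role)) return false;    // seen as a complete piece
  if (FLOATING_ROLES.has(role)) return true; // must float over content
  return null;                               // unsure — ask the user
}
```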
+
+---
+
+## Step 3 — Resize to Depth-Appropriate Dimensions
+
+Run the resize step in `scripts/inspect-assets.py` or do it manually.
+Never embed a large image when a smaller one is sufficient.
+
+| Depth | Role | Max Longest Edge |
+|---|---|---|
+| 0 | Background fill | 1920px |
+| 1 | Glow / atmosphere | 800px |
+| 2 | Mid decorations, companions | 400px |
+| 3 | Hero product | 1200px |
+| 4 | UI components | 600px |
+| 5 | Particles, sparkles | 128px |
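
The table above can be applied programmatically. A sketch (dimensions in pixels; never upscales):

```javascript
// Max longest-edge per depth level, from the table above
const MAX_EDGE = { 0: 1920, 1: 800, 2: 400, 3: 1200, 4: 600, 5: 128 };

// Cap the longest edge for the given depth, preserving aspect ratio
function targetSize(width, height, depth) {
  const maxEdge = MAX_EDGE[depth];
  const longest = Math.max(width, height);
  if (longest <= maxEdge) return { width, height }; // already small enough
  const scale = maxEdge / longest;
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}
```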
+
+---
+
+## Step 4 — Inform the User (Required for Every Asset)
+
+Before outputting any HTML, always show an asset audit to the user.
+
+For each image that has a background issue, use this exact format:
+
+> ⚠️ **Asset Notice — [filename]**
+>
+> This is a [JPEG / PNG] with a solid [black / white / coloured] background.
+> As-is, it will appear as a visible box on the page rather than a floating asset.
+>
+> Based on its intended role ([product shot / decoration / etc.]), I think the
+> background [should be removed / should be kept because it's a [screenshot/artwork/bg fill/etc.]].
+>
+> **Options:**
+> 1. Provide a new PNG with a transparent background — best quality, ideal
+> 2. Proceed as-is with a CSS workaround (mix-blend-mode) — quick but approximate
+> 3. Keep the background — if this image is meant to be seen with its background
+>
+> Which do you prefer?
+
+For clean images, confirm them briefly:
+
+> ✅ **[filename]** — clean transparent PNG, resized to [X]px, assigned depth-[N] ([role])
+
+Show all of this BEFORE outputting HTML. Wait for the user's response on any ⚠️ items.
+
+---
+
+## Step 5 — CSS Workaround (Only After User Approves)
+
+Apply ONLY if the user explicitly chooses option 2 above:
+
+```css
+/* Dark background image on a dark site — black pixels become invisible */
+.on-dark-bg {
+ mix-blend-mode: screen;
+}
+
+/* Light background image on a light site — white pixels become invisible */
+.on-light-bg {
+ mix-blend-mode: multiply;
+}
+```
+
+Always add a comment in the HTML when using this:
+```html
+<!-- NOTE: blend-mode workaround (user-approved). The black background is
+     hidden via mix-blend-mode, not removed — replace this asset with a
+     true transparent PNG when one is available. -->
+<img class="on-dark-bg" src="product.jpg" alt="Product">
+```
+
+Limitations:
+- `screen` lightens mid-tones — only works well on very dark site backgrounds
+- `multiply` darkens mid-tones — only works well on very light site backgrounds
+- Neither works on complex or gradient backgrounds
+- A proper cutout PNG always gives better results
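
To pre-check which workaround class even applies, the approximate luminance of the site background is a usable heuristic. A sketch — the 0.35/0.65 thresholds are assumptions, not part of this pipeline:

```javascript
// Rough relative luminance of a hex colour (quick heuristic,
// not gamma-correct colour science)
function luminance(hex) {
  const n = parseInt(hex.replace('#', ''), 16);
  const r = (n >> 16) & 255, g = (n >> 8) & 255, b = n & 255;
  return (0.2126 * r + 0.7152 * g + 0.0722 * b) / 255;
}

// Map a site background to one of the classes above, or null when
// neither blend mode will look acceptable (mid-tone backgrounds)
function blendClassFor(siteBg) {
  const l = luminance(siteBg);
  if (l < 0.35) return 'on-dark-bg';  // screen: black pixels vanish
  if (l > 0.65) return 'on-light-bg'; // multiply: white pixels vanish
  return null;
}
```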
+
+---
+
+## Step 6 — CSS Rules for Transparent Images
+
+Whether the image came in clean or had its background resolved, always apply:
+
+```css
+/* ALWAYS use drop-shadow — it follows the actual pixel shape */
+.product-img {
+ filter: drop-shadow(0 30px 60px rgba(0, 0, 0, 0.4));
+}
+
+/* NEVER use box-shadow on cutout images — it creates a rectangle, not a shape shadow */
+
+/* NEVER apply these to transparent/cutout images: */
+/*
+ border-radius → clips transparency into a rounded box
+ overflow: hidden → same problem on the parent element
+ object-fit: cover → stretches image to fill a box, destroys the cutout
+ background-color → makes the bounding box visible
+*/
+```
diff --git a/engineering-team/epic-design/references/depth-system.md b/engineering-team/epic-design/references/depth-system.md
new file mode 100644
index 0000000..f146f58
--- /dev/null
+++ b/engineering-team/epic-design/references/depth-system.md
@@ -0,0 +1,361 @@
+# Depth System Reference
+
+The 2.5D illusion is built entirely on a **6-level depth model**. Every element on the page belongs to exactly one depth level. Depth controls four automatic properties: parallax speed, blur, scale, and shadow intensity. Together these four signals trick the human visual system into perceiving genuine spatial depth from flat assets.
+
+---
+
+## The 6-Level Depth Table
+
+| Level | Name | Parallax | Blur | Scale | Shadow | Z-Index |
+|-------|-------------------|----------|-------|-------|---------|---------|
+| 0 | Far Background | 0.10x | 8px | 0.70 | 0.05 | 0 |
+| 1 | Glow / Atmosphere | 0.25x | 4px | 0.85 | 0.10 | 1 |
+| 2 | Mid Decorations | 0.50x | 0px | 1.00 | 0.20 | 2 |
+| 3 | Main Objects | 0.80x | 0px | 1.05 | 0.35 | 3 |
+| 4 | UI / Text | 1.00x | 0px | 1.00 | 0.00 | 4 |
+| 5 | Foreground FX | 1.20x | 0px | 1.10 | 0.50 | 5 |
+
+**Parallax formula:**
+```
+element_translateY = scroll_position * depth_factor * -1
+```
+A depth-0 element at scroll position 500px moves only -50px (barely moves — feels far away).
+A depth-5 element at 500px moves -600px (moves fast — feels close).
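
The same numbers, as a checkable sketch of the formula:

```javascript
// Parallax factors from the depth table above
const DEPTH_FACTOR = [0.10, 0.25, 0.50, 0.80, 1.00, 1.20];

// element_translateY = scroll_position * depth_factor * -1
function parallaxY(scrollY, depth) {
  return scrollY * DEPTH_FACTOR[depth] * -1;
}
```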
+
+---
+
+## CSS Implementation
+
+### CSS Custom Properties Foundation
+```css
+:root {
+ /* Depth parallax factors */
+ --depth-0-factor: 0.10;
+ --depth-1-factor: 0.25;
+ --depth-2-factor: 0.50;
+ --depth-3-factor: 0.80;
+ --depth-4-factor: 1.00;
+ --depth-5-factor: 1.20;
+
+ /* Depth blur values */
+ --depth-0-blur: 8px;
+ --depth-1-blur: 4px;
+ --depth-2-blur: 0px;
+ --depth-3-blur: 0px;
+ --depth-4-blur: 0px;
+ --depth-5-blur: 0px;
+
+ /* Depth scale values */
+ --depth-0-scale: 0.70;
+ --depth-1-scale: 0.85;
+ --depth-2-scale: 1.00;
+ --depth-3-scale: 1.05;
+ --depth-4-scale: 1.00;
+ --depth-5-scale: 1.10;
+
+ /* Live scroll value (updated by JS) */
+ --scroll-y: 0;
+}
+
+/* Base layer class */
+.layer {
+ position: absolute;
+ inset: 0;
+ will-change: transform;
+ transform-origin: center center;
+}
+
+/* Depth-specific classes */
+.depth-0 {
+ filter: blur(var(--depth-0-blur));
+ transform: scale(var(--depth-0-scale))
+ translateY(calc(var(--scroll-y) * var(--depth-0-factor) * -1px));
+ z-index: 0;
+}
+.depth-1 {
+ filter: blur(var(--depth-1-blur));
+ transform: scale(var(--depth-1-scale))
+ translateY(calc(var(--scroll-y) * var(--depth-1-factor) * -1px));
+ z-index: 1;
+ mix-blend-mode: screen; /* glow layers blend additively */
+}
+.depth-2 {
+ transform: scale(var(--depth-2-scale))
+ translateY(calc(var(--scroll-y) * var(--depth-2-factor) * -1px));
+ z-index: 2;
+}
+.depth-3 {
+ transform: scale(var(--depth-3-scale))
+ translateY(calc(var(--scroll-y) * var(--depth-3-factor) * -1px));
+ z-index: 3;
+ filter: drop-shadow(0 20px 40px rgba(0,0,0,0.35));
+}
+.depth-4 {
+ transform: translateY(calc(var(--scroll-y) * var(--depth-4-factor) * -1px));
+ z-index: 4;
+}
+.depth-5 {
+ transform: scale(var(--depth-5-scale))
+ translateY(calc(var(--scroll-y) * var(--depth-5-factor) * -1px));
+ z-index: 5;
+}
+```
+
+### JavaScript — Scroll Driver
+```javascript
+// Throttled scroll listener using requestAnimationFrame
+let ticking = false;
+
+function updateDepthLayers() {
+  // Write the scroll offset once per frame; CSS does the per-depth math
+  document.documentElement.style.setProperty('--scroll-y', window.scrollY);
+  ticking = false;
+}
+
+window.addEventListener('scroll', () => {
+  if (!ticking) {
+    requestAnimationFrame(updateDepthLayers);
+    ticking = true;
+  }
+}, { passive: true });
+```
+
+---
+
+## Asset Assignment Rules
+
+### What Goes in Each Depth Level
+
+**Depth 0 — Far Background**
+- Full-width background images (sky, gradient, texture)
+- Very large PNGs (1920×1080+), file size 80–150KB max
+- Heavily blurred by CSS — low detail is fine and preferred
+- Examples: skyscape, abstract color wash, noise texture
+
+**Depth 1 — Glow / Atmosphere**
+- Radial gradient blobs, lens flare PNGs, soft light overlays
+- Size: 600–1000px, file size: 30–60KB max
+- Always use `mix-blend-mode: screen` or `mix-blend-mode: lighten`
+- Always add a heavy blur — `filter: blur()` in the 40–100px range — on top of the depth-level CSS blur
+- Examples: orange glow blob behind product, atmospheric haze
+
+**Depth 2 — Mid Decorations**
+- Abstract shapes, geometric patterns, floating decorative elements
+- Size: 200–400px, file size: 20–50KB max
+- Moderate shadow, no blur
+- Examples: floating geometric shapes, brand pattern elements
+
+**Depth 3 — Main Objects (The Star)**
+- Hero product images, characters, featured illustrations
+- Size: 800–1200px, file size: 50–120KB max
+- High detail, clean cutout (transparent PNG background)
+- Strong drop shadow: `filter: drop-shadow(0 30px 60px rgba(0,0,0,0.4))`
+- This is the element users look at — give it the most visual weight
+- Examples: juice bottle, product shot, hero character
+
+**Depth 4 — UI / Text**
+- Headlines, body copy, buttons, cards, navigation
+- Always crisp, never blurred
+- Text elements get animation data attributes (see text-animations.md)
+- Examples: `<h1>`, `<p>`, `<button>`, card components
+
+**Depth 5 — Foreground Particles / FX**
+- Sparkles, floating dots, light particles, decorative splashes
+- Small (32–128px), file size: 2–10KB
+- High contrast, sharp edges
+- Multiple instances scattered with different animation delays
+- Examples: star sparkles, liquid splash dots, highlight flares
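
The "multiple instances scattered with different animation delays" rule can be sketched as a placement generator (the spec fields are my own naming; in the browser, apply each spec to an `<img class="float-loop">` appended to the depth-5 layer):

```javascript
// Generate N particle placements with randomised positions and negative
// animation delays, so the float loops start out of phase
function particleSpecs(count = 8) {
  return Array.from({ length: count }, () => ({
    left: `${(Math.random() * 100).toFixed(1)}%`,
    top: `${(Math.random() * 100).toFixed(1)}%`,
    size: Math.round(32 + Math.random() * 96),    // 32–128px per the rules above
    delay: `${-(Math.random() * 6).toFixed(2)}s`, // negative = starts mid-loop
  }));
}
```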
+
+---
+
+## Compositional Hierarchy — Size Relationships Between Assets
+
+The most common mistake in 2.5D design is treating all assets as the same size.
+Real cinematic depth requires deliberate, intentional size contrast.
+
+### The Rule of One Hero
+
+Every scene has exactly ONE dominant asset. Everything else serves it.
+
+| Role | Display Size | Depth |
+|---|---|---|
+| Hero / star element | 50–85vw | depth-3 |
+| Primary companion | 8–15vw | depth-2 |
+| Secondary companion | 5–10vw | depth-2 |
+| Accent / particle | 1–4vw | depth-5 |
+| Background fill | 100vw | depth-0 |
+
+### Positioning Companions Close to the Hero
+
+Never scatter companions in random corners. Position them relative to the hero's edge:
+
+```css
+/*
+ Hero width: clamp(600px, 70vw, 1000px)
+ Hero half-width: clamp(300px, 35vw, 500px)
+*/
+.companion-right {
+ position: absolute;
+ right: calc(50% - clamp(300px, 35vw, 500px) - 20px);
+ /* negative gap value = slightly overlaps the hero */
+}
+.companion-left {
+ position: absolute;
+ left: calc(50% - clamp(300px, 35vw, 500px) - 20px);
+}
+```
+
+Vertical placement:
+- Upper shoulder: `top: 35%; transform: translateY(-50%)`
+- Mid waist: `top: 55%; transform: translateY(-50%)`
+- Lower base: `top: 72%; transform: translateY(-50%)`
+
+### Scatter Rule on Hero Scroll-Out
+
+When the hero grows or exits, companions scatter outward — not just fade.
+This reinforces they were "held in orbit" by the hero.
+
+```javascript
+heroScrollTimeline
+ .to('.companion-right', { x: 80, y: -50, scale: 1.3 }, scrollPos)
+ .to('.companion-left', { x: -70, y: 40, scale: 1.25 }, scrollPos)
+ .to('.companion-lower', { x: 30, y: 80, scale: 1.1 }, scrollPos)
+```
+
+### Pre-Build Size Checklist
+
+Before assigning sizes, answer these for every asset:
+1. Is this the hero? → make it large enough to command the viewport
+2. Is this a companion? → it should be 15–25% of the hero's display size
+3. Would this read better bigger or smaller than my first instinct?
+4. Is there enough size contrast between depth layers to read as real depth?
+5. Does the composition feel balanced, or does everything look the same size?
+
+---
+
+## Floating Loop Animation
+
+Every element at depth 2–5 should have a floating animation. Nothing should be perfectly static — it kills the 3D illusion.
+
+```css
+/* Float variants — apply different ones to different elements */
+@keyframes float-y {
+ 0%, 100% { transform: translateY(0px); }
+ 50% { transform: translateY(-18px); }
+}
+@keyframes float-rotate {
+ 0%, 100% { transform: translateY(0px) rotate(0deg); }
+ 33% { transform: translateY(-12px) rotate(2deg); }
+ 66% { transform: translateY(-6px) rotate(-1deg); }
+}
+@keyframes float-breathe {
+ 0%, 100% { transform: scale(1); }
+ 50% { transform: scale(1.04); }
+}
+@keyframes float-orbit {
+ 0% { transform: translate(0, 0) rotate(0deg); }
+ 25% { transform: translate(8px, -12px) rotate(2deg); }
+ 50% { transform: translate(0, -20px) rotate(0deg); }
+ 75% { transform: translate(-8px, -12px) rotate(-2deg); }
+ 100% { transform: translate(0, 0) rotate(0deg); }
+}
+
+/* Depth-appropriate durations */
+.depth-2 .float-loop { animation: float-y 10s ease-in-out infinite; }
+.depth-3 .float-loop { animation: float-orbit 8s ease-in-out infinite; }
+.depth-5 .float-loop { animation: float-rotate 6s ease-in-out infinite; }
+
+/* Stagger delays for multiple elements at same depth */
+.float-loop:nth-child(2) { animation-delay: -2s; }
+.float-loop:nth-child(3) { animation-delay: -4s; }
+.float-loop:nth-child(4) { animation-delay: -1.5s; }
+```
+
+---
+
+## Shadow Depth Enhancement
+
+Stronger shadows on closer elements amplify depth perception:
+
+```css
+/* Depth shadow system */
+.depth-2 img { filter: drop-shadow(0 10px 20px rgba(0,0,0,0.20)); }
+.depth-3 img { filter: drop-shadow(0 25px 50px rgba(0,0,0,0.35)); }
+.depth-5 img { filter: drop-shadow(0 5px 15px rgba(0,0,0,0.50)); }
+```
+
+## Glow Layer Pattern (Depth 1)
+
+The glow layer is critical for the "product floating in light" premium feel:
+
+```css
+/* Glow blob behind the main product */
+.glow-blob {
+ position: absolute;
+ width: 600px;
+ height: 600px;
+ border-radius: 50%;
+ background: radial-gradient(circle, var(--brand-color) 0%, transparent 70%);
+ filter: blur(80px);
+ opacity: 0.45;
+ mix-blend-mode: screen;
+ /* Position behind depth-3 product */
+ z-index: 1;
+ /* Slow drift */
+ animation: float-breathe 12s ease-in-out infinite;
+}
+```
+
+---
+
+## HTML Scaffold Template
+
+```html
+<!-- Scene container — .layer children are absolutely positioned inside it.
+     Asset paths are placeholders. -->
+<section class="scene" style="position: relative; min-height: 100vh; overflow: hidden;">
+
+  <!-- depth-0: far background fill -->
+  <div class="layer depth-0">
+    <img src="assets/background.png" alt="">
+  </div>
+
+  <!-- depth-1: glow / atmosphere -->
+  <div class="layer depth-1">
+    <div class="glow-blob"></div>
+  </div>
+
+  <!-- depth-2: mid decorations (staggered float delays via :nth-child) -->
+  <div class="layer depth-2">
+    <img class="float-loop" src="assets/decoration-1.png" alt="">
+    <img class="float-loop" src="assets/decoration-2.png" alt="">
+  </div>
+
+  <!-- depth-3: the hero product -->
+  <div class="layer depth-3">
+    <img class="float-loop product-img" src="assets/product.png" alt="Product">
+  </div>
+
+  <!-- depth-4: crisp UI and text -->
+  <div class="layer depth-4">
+    <h1>Your Headline</h1>
+    <p>Supporting copy here</p>
+    <a class="cta" href="#">Get Started</a>
+  </div>
+
+  <!-- depth-5: foreground particles -->
+  <div class="layer depth-5">
+    <img class="float-loop" src="assets/sparkle.png" alt="">
+  </div>
+
+</section>
+```
diff --git a/engineering-team/epic-design/references/directional-reveals.md b/engineering-team/epic-design/references/directional-reveals.md
new file mode 100644
index 0000000..07d8186
--- /dev/null
+++ b/engineering-team/epic-design/references/directional-reveals.md
@@ -0,0 +1,455 @@
+# Directional Reveals Reference
+
+Elements and sections don't always enter from the bottom. Premium sites use **directional births** — sections that drop from the top, iris open from center, peel away like wallpaper, or unfold diagonally. This file covers all 8 directional reveal patterns.
+
+## Table of Contents
+1. [Top-Down Clip Birth](#top-down)
+2. [Window Pane Iris Open](#iris-open)
+3. [Curtain Panel Roll-Up](#curtain-rollup)
+4. [SVG Morph Border](#svg-morph)
+5. [Diagonal Wipe Birth](#diagonal-wipe)
+6. [Circle Iris Expand](#circle-iris)
+7. [Multi-Directional Stagger Grid](#multi-direction)
+8. [Loading Screen Curtain Lift](#loading-screen)
+
+---
+
+## Pattern 1: Top-Down Clip Birth {#top-down}
+
+The section is born from the top edge and grows **downward**. Instead of rising from below, it drops and unfolds from above. This is the opposite of the conventional bottom-up reveal and creates a striking "curtain drop" feeling.
+
+```css
+/* Starting state — section is fully clipped (invisible) */
+.top-drop-section {
+ /* Section exists in DOM but is invisible */
+ clip-path: inset(0 0 100% 0);
+ /*
+ inset(top right bottom left):
+ - top: 0 → clip starts at top edge
+ - bottom: 100% → clips 100% from bottom = nothing visible
+ */
+}
+
+/* Revealed state */
+.top-drop-section.revealed {
+ clip-path: inset(0 0 0% 0);
+ transition: clip-path 1.2s cubic-bezier(0.16, 1, 0.3, 1);
+}
+```
+
+```javascript
+// GSAP scroll-driven version with scrub
+function initTopDownBirth(sectionEl) {
+ gsap.fromTo(sectionEl,
+ { clipPath: 'inset(0 0 100% 0)' },
+ {
+ clipPath: 'inset(0 0 0% 0)',
+ ease: 'power2.out',
+ scrollTrigger: {
+ trigger: sectionEl.previousElementSibling, // previous section is the trigger
+ start: 'bottom 80%',
+ end: 'bottom 20%',
+ scrub: 1.5,
+ }
+ }
+ );
+}
+
+// Exit: section retracts back upward (born from top, dies back up)
+function addTopRetractExit(sectionEl) {
+ gsap.to(sectionEl, {
+ clipPath: 'inset(100% 0 0% 0)', // now clips from TOP — retracts upward
+ ease: 'power2.in',
+ scrollTrigger: {
+ trigger: sectionEl,
+ start: 'bottom 20%',
+ end: 'bottom top',
+ scrub: 1,
+ }
+ });
+}
+```
+
+**Key insight:** Enter = `inset(0 0 100% 0)` → `inset(0 0 0% 0)` (the bottom clip shrinks — the section unfolds downward from the top edge).
+Exit = `inset(0)` → `inset(100% 0 0 0)` (the top clip grows — the section is consumed upward, retracting back where it came from).
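
The same enter/exit logic generalises to any edge. A sketch of a helper returning the matching `clip-path` strings (the edge names are my own):

```javascript
// clip-path inset() strings for a section born from the given edge
// and retracting back the same way on exit
function clipBirth(edge) {
  const pairs = {
    top:    { hidden: 'inset(0 0 100% 0)', exit: 'inset(100% 0 0 0)' },
    bottom: { hidden: 'inset(100% 0 0 0)', exit: 'inset(0 0 100% 0)' },
    left:   { hidden: 'inset(0 100% 0 0)', exit: 'inset(0 0 0 100%)' },
    right:  { hidden: 'inset(0 0 0 100%)', exit: 'inset(0 100% 0 0)' },
  };
  return { ...pairs[edge], visible: 'inset(0 0 0 0)' };
}
```

Animate `hidden → visible` on enter and `visible → exit` on exit, exactly as in the GSAP snippets above.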
+
+---
+
+## Pattern 2: Window Pane Iris Open {#iris-open}
+
+An entire section starts as a tiny centered rectangle — like a keyhole or portal — and expands outward to fill the viewport. Creates a cinematic "opening shot" feeling.
+
+```javascript
+function initWindowPaneIris(sectionEl) {
+ // The section starts as a small centered window
+ gsap.fromTo(sectionEl,
+ {
+ clipPath: 'inset(42% 35% 42% 35% round 12px)',
+ // 42% from top AND bottom = only 16% of height visible
+ // 35% from left AND right = only 30% of width visible
+ // Centered rectangle peek
+ },
+ {
+ clipPath: 'inset(0% 0% 0% 0% round 0px)',
+ ease: 'none',
+ scrollTrigger: {
+ trigger: sectionEl,
+ start: 'top 90%',
+ end: 'top 10%',
+ scrub: 1.2,
+ }
+ }
+ );
+
+ // Also scale/zoom the content inside for parallax depth
+ gsap.fromTo(sectionEl.querySelector('.iris-content'),
+ { scale: 1.4 },
+ {
+ scale: 1,
+ ease: 'none',
+ scrollTrigger: {
+ trigger: sectionEl,
+ start: 'top 90%',
+ end: 'top 10%',
+ scrub: 1.2,
+ }
+ }
+ );
+}
+```
+
+**Variation — horizontal bar open (blinds effect):**
+```javascript
+// Two bars that slide apart (one from top, one from bottom)
+function initBlindsOpen(topBar, bottomBar, revealEl) {
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: revealEl,
+ start: 'top 70%',
+ toggleActions: 'play none none reverse',
+ }
+ });
+
+ tl.to(topBar, { yPercent: -100, duration: 1.0, ease: 'power3.inOut' })
+ .to(bottomBar, { yPercent: 100, duration: 1.0, ease: 'power3.inOut' }, 0);
+}
+```
+
+---
+
+## Pattern 3: Curtain Panel Roll-Up {#curtain-rollup}
+
+Multiple layered panels. Each one "rolls up" from top, exposing the panel beneath. Like peeling back wallpaper layers to reveal what's underneath. Uses z-index stacking.
+
+```css
+.curtain-stack {
+ position: relative;
+ height: 100vh;
+ overflow: hidden;
+}
+
+.curtain-panel {
+ position: absolute;
+ inset: 0;
+ /* Stack panels — panel 1 on top, panel N on bottom */
+}
+.curtain-panel:nth-child(1) { z-index: 5; background: #0f0f0f; }
+.curtain-panel:nth-child(2) { z-index: 4; background: #1a0a2e; }
+.curtain-panel:nth-child(3) { z-index: 3; background: #2d0b4e; }
+.curtain-panel:nth-child(4) { z-index: 2; background: #1e3a8a; }
+/* Final revealed content at z-index 1 */
+```
+
+```javascript
+function initCurtainRollUp(containerEl) {
+ const panels = gsap.utils.toArray('.curtain-panel', containerEl);
+
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: containerEl,
+ start: 'top top',
+ end: `+=${panels.length * 120}%`,
+ pin: true,
+ scrub: 1,
+ }
+ });
+
+ panels.forEach((panel, i) => {
+ const segmentDuration = 1 / panels.length;
+ const segmentStart = i * segmentDuration;
+
+ // Each panel rolls up — clip from bottom rises to top
+ tl.to(panel, {
+      clipPath: 'inset(0 0 100% 0)', // bottom inset grows to 100% — the panel rolls up, exposing the one beneath
+ duration: segmentDuration,
+ ease: 'power2.inOut',
+ }, segmentStart);
+
+ // Heading for this panel fades in
+ const heading = panel.querySelector('.panel-heading');
+ if (heading) {
+ tl.from(heading, {
+ opacity: 0,
+ y: 30,
+ duration: segmentDuration * 0.4,
+ }, segmentStart + segmentDuration * 0.1);
+ }
+ });
+
+ return tl;
+}
+```
+
+---
+
+## Pattern 4: SVG Morph Border {#svg-morph}
+
+The section's edge is not a hard straight line — it morphs between shapes (rectangle → wave → diagonal → organic curve) as the user scrolls. Makes sections feel alive and fluid.
+
+```html
+<svg width="0" height="0" aria-hidden="true">
+  <defs>
+    <!-- objectBoundingBox units: path coordinates are the 0–1 values used in the JS below -->
+    <clipPath id="morphClip" clipPathUnits="objectBoundingBox">
+      <path id="morphPath" d="M0,0 L1,0 L1,1 L0,1 Z" />
+    </clipPath>
+  </defs>
+</svg>
+
+<section class="morphed-section" style="clip-path: url(#morphClip);">
+  <!-- section content -->
+</section>
+```
+
+```javascript
+function initSVGMorphBorder() {
+ const morphPath = document.getElementById('morphPath');
+
+ const paths = {
+ straight: 'M0,0 L1,0 L1,1 L0,1 Z',
+ wave: 'M0,0 L1,0 L1,0.95 Q0.75,1.05 0.5,0.95 Q0.25,0.85 0,0.95 Z',
+ diagonal: 'M0,0 L1,0 L1,0.88 L0,1.0 Z',
+ organic: 'M0,0 L1,0 L1,0.92 C0.8,1.04 0.6,0.88 0.4,1.0 C0.2,1.12 0.1,0.90 0,0.96 Z',
+ };
+
+ ScrollTrigger.create({
+ trigger: '.morphed-section',
+ start: 'top 80%',
+ end: 'bottom 20%',
+ scrub: 2,
+    onUpdate: (self) => {
+      const p = self.progress;
+      // Step between path states as scroll progresses:
+      // straight → wave → diagonal → organic
+      // (for smooth tweening between shapes, use GSAP MorphSVGPlugin or flubber)
+      if (p < 0.25) morphPath.setAttribute('d', paths.straight);
+      else if (p < 0.5) morphPath.setAttribute('d', paths.wave);
+      else if (p < 0.75) morphPath.setAttribute('d', paths.diagonal);
+      else morphPath.setAttribute('d', paths.organic);
+    }
+ });
+}
+```
+
+---
+
+## Pattern 5: Diagonal Wipe Birth {#diagonal-wipe}
+
+Content is revealed by a diagonal sweep across the screen — from top-left corner to bottom-right (or any corner combination). Feels cinematic and directional.
+
+```javascript
+function initDiagonalWipe(el, direction = 'top-left') {
+ const clipPaths = {
+ 'top-left': {
+ from: 'polygon(0 0, 0 0, 0 0)',
+ to: 'polygon(0 0, 120% 0, 0 120%)',
+ },
+ 'top-right': {
+ from: 'polygon(100% 0, 100% 0, 100% 0)',
+ to: 'polygon(-20% 0, 100% 0, 100% 120%)',
+ },
+ 'center-out': {
+ from: 'polygon(50% 50%, 50% 50%, 50% 50%, 50% 50%)',
+ to: 'polygon(-10% -10%, 110% -10%, 110% 110%, -10% 110%)',
+ },
+ };
+
+ const { from, to } = clipPaths[direction];
+
+ gsap.fromTo(el,
+ { clipPath: from },
+ {
+ clipPath: to,
+ duration: 1.4,
+ ease: 'power3.inOut',
+ scrollTrigger: {
+ trigger: el,
+ start: 'top 70%',
+ }
+ }
+ );
+}
+```
+
+---
+
+## Pattern 6: Circle Iris Expand {#circle-iris}
+
+The most dramatic reveal: a perfect circle expands from the center of the section outward, like an aperture opening or a spotlight switching on.
+
+```javascript
+function initCircleIris(el, originX = '50%', originY = '50%') {
+ gsap.fromTo(el,
+ { clipPath: `circle(0% at ${originX} ${originY})` },
+ {
+ clipPath: `circle(80% at ${originX} ${originY})`,
+ ease: 'none',
+ scrollTrigger: {
+ trigger: el,
+ start: 'top 75%',
+ end: 'top 25%',
+ scrub: 1,
+ }
+ }
+ );
+}
+
+// Variant: iris opens from cursor position on hover
+function initHoverIris(el) {
+ el.addEventListener('mouseenter', (e) => {
+ const rect = el.getBoundingClientRect();
+ const x = ((e.clientX - rect.left) / rect.width * 100).toFixed(1) + '%';
+ const y = ((e.clientY - rect.top) / rect.height * 100).toFixed(1) + '%';
+
+ gsap.fromTo(el,
+ { clipPath: `circle(0% at ${x} ${y})` },
+ { clipPath: `circle(100% at ${x} ${y})`, duration: 0.6, ease: 'power2.out' }
+ );
+ });
+}
+```
+
+---
+
+## Pattern 7: Multi-Directional Stagger Grid {#multi-direction}
+
+When a grid or set of cards appears, each item enters from a different edge/direction — creating a dynamic assembly effect instead of uniform fade-ups.
+
+```javascript
+function initMultiDirectionalGrid(gridEl) {
+ const items = gsap.utils.toArray('.grid-item', gridEl);
+
+ const directions = [
+ { x: -80, y: 0 }, // from left
+ { x: 0, y: -80 }, // from top
+ { x: 80, y: 0 }, // from right
+ { x: 0, y: 80 }, // from bottom
+ { x: -60, y: -60 }, // from top-left
+ { x: 60, y: -60 }, // from top-right
+ { x: -60, y: 60 }, // from bottom-left
+ { x: 60, y: 60 }, // from bottom-right
+ ];
+
+ items.forEach((item, i) => {
+ const dir = directions[i % directions.length];
+
+ gsap.from(item, {
+ x: dir.x,
+ y: dir.y,
+ opacity: 0,
+ duration: 0.8,
+ ease: 'power3.out',
+ scrollTrigger: {
+ trigger: gridEl,
+ start: 'top 75%',
+ },
+ delay: i * 0.08, // stagger
+ });
+ });
+}
+```
+
+---
+
+## Pattern 8: Loading Screen Curtain Lift {#loading-screen}
+
+A full-viewport branded intro screen that physically lifts off the page on load, revealing the site beneath. Sets cinematic expectations before any scroll animation begins.
+
+```css
+.loading-curtain {
+ position: fixed;
+ inset: 0;
+ z-index: 9999;
+ background: #0a0a0a; /* or brand color */
+ display: flex;
+ align-items: center;
+ justify-content: center;
+ /* Split into two halves for dramatic split-open effect */
+}
+
+.curtain-top {
+ position: absolute;
+ top: 0; left: 0; right: 0;
+ height: 50%;
+ background: inherit;
+ transform-origin: top center;
+}
+
+.curtain-bottom {
+ position: absolute;
+ bottom: 0; left: 0; right: 0;
+ height: 50%;
+ background: inherit;
+ transform-origin: bottom center;
+}
+```
+
+```javascript
+function initLoadingCurtain() {
+ const curtainTop = document.querySelector('.curtain-top');
+ const curtainBottom = document.querySelector('.curtain-bottom');
+ const curtainLogo = document.querySelector('.curtain-logo');
+ const loadingScreen = document.querySelector('.loading-curtain');
+
+ // Prevent scroll during loading
+ document.body.style.overflow = 'hidden';
+
+ const tl = gsap.timeline({
+ delay: 0.5,
+ onComplete: () => {
+ document.body.style.overflow = '';
+ loadingScreen.style.display = 'none';
+ // Init all scroll animations AFTER curtain lifts
+ initAllAnimations();
+ }
+ });
+
+ // Logo appears first
+ tl.from(curtainLogo, { opacity: 0, scale: 0.8, duration: 0.6, ease: 'power2.out' })
+ // Brief hold
+ .to({}, { duration: 0.4 })
+ // Logo fades out
+ .to(curtainLogo, { opacity: 0, scale: 1.1, duration: 0.4, ease: 'power2.in' })
+ // Curtain splits: top goes up, bottom goes down
+ .to(curtainTop, { yPercent: -100, duration: 0.9, ease: 'power4.inOut' }, '-=0.1')
+ .to(curtainBottom, { yPercent: 100, duration: 0.9, ease: 'power4.inOut' }, '<');
+}
+
+window.addEventListener('load', initLoadingCurtain);
+```
+
+---
+
+## Combining Directional Reveals
+
+For maximum cinematic impact, chain directional reveals between sections:
+
+```
+Section 1 → Section 2: Window pane iris (section 2 peeks through a keyhole)
+Section 2 → Section 3: Top-down clip birth (section 3 drops from top)
+Section 3 → Section 4: Diagonal wipe (section 4 sweeps in from corner)
+Section 4 → Section 5: Circle iris (section 5 opens from center)
+Section 5 → Section 6: Curtain panel roll-up (exposes multiple layers)
+```
+
+Each transition feels distinct, keeping the user engaged across the full scroll experience.
diff --git a/engineering-team/epic-design/references/examples.md b/engineering-team/epic-design/references/examples.md
new file mode 100644
index 0000000..5ace6c6
--- /dev/null
+++ b/engineering-team/epic-design/references/examples.md
@@ -0,0 +1,344 @@
+# Real-World Examples Reference
+
+Five complete implementation blueprints. Each describes exactly which techniques to combine, in what order, with key code patterns.
+
+## Table of Contents
+1. [Juice/Beverage Brand Launch](#juice-brand)
+2. [Tech SaaS Landing Page](#saas)
+3. [Creative Portfolio](#portfolio)
+4. [Gaming Website](#gaming)
+5. [Luxury Product E-Commerce](#ecommerce)
+
+---
+
+## Example 1: Juice/Beverage Brand Launch {#juice-brand}
+
+**Brief:** Premium juice brand. Hero has floating glass. Sections transition smoothly with the product "rising" between them.
+
+**Techniques Used:**
+- Loading screen curtain lift
+- 6-layer depth parallax in hero
+- Floating product between sections (THE signature move)
+- Top-down clip birth for ingredients section
+- Word-by-word scroll lighting for tagline
+- Cascading card stack for flavors
+- Split converge title exit
+
+**Section Architecture:**
+
+```
+[LOADING SCREEN — brand logo on black, splits open]
+ ↓
+[HERO — dark purple gradient]
+ depth-0: purple/dark gradient background
+ depth-1: orange glow blob (brand color)
+ depth-2: floating citrus slice PNGs (scattered, decorative)
+ depth-3: juice glass PNG (main product, float-loop)
+ depth-4: headline "Pure. Fresh. Electric." (split converge on enter)
+ depth-5: liquid splash particle PNGs
+
+[FLOATING PRODUCT BRIDGE — glass hovers between sections]
+
+[INGREDIENTS — warm cream/yellow section]
+ Entry: top-down clip birth (section drops from top)
+ depth-0: warm gradient background
+ depth-3: large orange PNG illustration
+ depth-4: "Word by word" ingredient callouts (scroll-lit)
+ Floating text: ingredient names fade in one by one
+
+[FLAVORS — cascading card stack, 3 cards]
+ Card 1: Orange — scales down as Card 2 arrives
+ Card 2: Mango — scales down as Card 3 arrives
+ Card 3: Berry — stays full screen
+ Each card: full-bleed color + depth-3 bottle + depth-4 title
+
+[CTA — minimal, dark]
+ Circle iris expand reveal
+ Oversized bleed typography: "DRINK DIFFERENT"
+ Simple form/button
+```
+
+**Key Code Pattern — The Glass Journey:**
+```javascript
+// Glass starts in hero depth-3, floats between sections,
+// then descends into ingredients section
+initFloatingProduct(); // from inter-section-effects.md
+
+// On arrival in ingredients section, glass triggers
+// the ingredient words to light up one by one
+ScrollTrigger.create({
+ trigger: '.ingredients-section',
+ start: 'top 50%',
+ onEnter: () => {
+ initWordScrollLighting(
+ '.ingredients-section',
+ '.ingredients-tagline'
+ );
+ }
+});
+```
+
+**Color Palette:**
+- Hero: `#0a0014` (deep purple) → `#2d0b4e`
+- Glow: `#ff6b00` (orange), `#ff9900` (amber)
+- Ingredients: `#fdf4e7` (warm cream)
+- Flavors: Brand-specific per flavor
+- CTA: `#0a0014` (returns to hero dark)
+
+---
+
+## Example 2: Tech SaaS Landing Page {#saas}
+
+**Brief:** B2B SaaS product — analytics dashboard. Premium, modern, tech-forward. Animated product screenshots.
+
+**Techniques Used:**
+- Window pane iris open (hero reveals from keyhole)
+- DJI-style scale-in pin (dashboard screenshot fills viewport)
+- Scrub timeline (features appear one by one)
+- Curtain panel roll-up (pricing tiers reveal)
+- Character cylinder rotation (headline numbers: "10x faster")
+- Line clip wipe (feature descriptions)
+- Horizontal scroll (integration logos)
+
+**Section Architecture:**
+
+```
+[HERO — midnight blue]
+ Entry: window pane iris — site reveals from tiny centered rectangle
+ depth-0: mesh gradient (dark blue/purple)
+ depth-1: subtle grid pattern (CSS, not PNG) with opacity 0.15
+ depth-2: floating abstract geometric shapes (low opacity)
+ depth-3: dashboard screenshot PNG (float-loop subtle)
+ depth-4: headline with CYLINDER ROTATION on "10x"
+ "Make your analytics 10x smarter"
+ depth-5: small glow dots/particles
+
+[FEATURE ZOOM — pinned section, 300vh scroll distance]
+ DJI-style: Dashboard screenshot starts small, expands to full viewport
+ Scrub timeline reveals 3 features as user scrolls through pin:
+ - Feature 1: "Real-time insights" fades in left
+ - Feature 2: "AI-powered" fades in right
+ - Feature 3: "Zero setup" fades in center
+ Each feature: line clip wipe on description text
+
+[HOW IT WORKS — top-down clip birth]
+ 3-step process
+ Each step: multi-directional stagger (step 1 from left, step 2 from top, step 3 from right)
+ Numbered steps with variable font weight animation
+
+[INTEGRATIONS — horizontal scroll]
+ Pin section, logos scroll horizontally
+ Scroll-speed reactive marquee for "works with everything you use"
+
+[PRICING — curtain panel roll-up]
+ 3 pricing tiers as curtain panels
+ Free → Pro → Enterprise reveals one by one
+ Each reveal: scramble text on price number
+
+[CTA — circle iris]
+ Dark background
+ Bleed typography: "START FREE TODAY"
+ Magnetic button (cursor-attracted)
+```
+
+---
+
+## Example 3: Creative Portfolio {#portfolio}
+
+**Brief:** Designer/developer portfolio. Bold, experimental, Awwwards-worthy. The work is the hero.
+
+**Techniques Used:**
+- Offset diagonal layout for name/title
+- Theatrical enter+exit for all section content
+- Horizontal scroll for project showcase
+- GSAP Flip cross-section for project previews
+- Scroll-speed reactive marquee for skills
+- Bleed typography throughout
+- Diagonal wipe births
+- Cursor spotlight
+
+**Section Architecture:**
+
+```
+[INTRO — stark black]
+ NO loading screen — shock with immediate bold text
+ depth-0: pure black (#000)
+ depth-4: MASSIVE bleed title — name in 180px+ font
+ offset diagonal layout:
+ Line 1: "ALEX" — top-left, x: 5%
+ Line 2: "MORENO" — lower-right, x: 40%
+ Line 3: "Designer" — far right, smaller, italic
+ Cursor spotlight effect follows mouse
+ CTA: "See Work ↓" — subtle, bottom-right
+
+[MARQUEE DIVIDER]
+ Scroll-speed reactive marquee:
+ "AVAILABLE FOR WORK · BASED IN LONDON · OPEN TO REMOTE ·"
+ Speed up when user scrolls fast
+
+[PROJECTS — horizontal scroll, 4 projects]
+ Pinned container, horizontal scroll
+ Each panel: full-bleed project image
+ project title via line clip wipe
+ brief description via theatrical enter
+ On hover: project image scale(1.03), cursor becomes "View →"
+ Between projects: diagonal wipe transition
+
+[ABOUT — section peel]
+ Upper section peels away to reveal about section
+ depth-3: portrait photo (clip-path circle iris, expands to full)
+ depth-4: about text — curtain line reveal
+ Skills: variable font wave animation
+
+[PROCESS — pinned scrub timeline]
+ 3 process stages animate through scroll:
+ Each stage: top-down clip birth reveals content
+ Numbers: character cylinder rotation
+
+[CONTACT — minimal]
+ Circle iris expand
+ Email address: scramble text effect on hover
+ Social links: skew + bounce on scroll in
+```
+
+---
+
+## Example 4: Gaming Website {#gaming}
+
+**Brief:** Game launch page. Dark, cinematic, intense. Character reveals, environment depth.
+
+**Techniques Used:**
+- Curved path travel (character moves across page)
+- Perspective zoom fly-through (fly into the game world)
+- Full layered parallax (6 levels deep)
+- SVG morph borders (organic landscape edges)
+- Cascading card stacks (character select)
+- Word-by-word scroll lighting (lore text)
+- Particle trails (cursor leaves sparks)
+- Multiple floating loops (atmospheric)
+
+**Section Architecture:**
+
+```
+[LOADING SCREEN — game-style]
+ Loading bar fills
+ Logo does cylinder rotation
+ Splits open with curtain top/bottom
+
+[HERO — extreme depth parallax]
+ depth-0: distant mountains/sky PNG (very slow, heavily blurred)
+ depth-1: mid-distance fog layer (slightly blurred, mix-blend: screen)
+ depth-2: closer terrain elements (decorative)
+ depth-3: CHARACTER PNG — hero character (main float-loop)
+ depth-4: game title — "SHADOWREALM" (split converge from sides)
+ depth-5: foreground particles — embers/sparks (fast float)
+ Cursor: particle trail (sparks follow cursor)
+
+[FLY-THROUGH — perspective zoom, 300vh]
+ Pinned section
+ Camera appears to fly INTO the game world
+ Background rushes toward viewer (scale 0.3 → 1.4)
+ Character appears from far (scale 0.05 → 1)
+ Title resolves via scramble text
+
+[LORE — word scroll lighting, pinned 400vh]
+ Dark section, long block of atmospheric text
+ Words light up as user scrolls
+ Atmospheric background particles drift slowly
+ Character silhouette visible at depth-1 (very faint)
+
+[CHARACTERS — cascading card stack, 4 characters]
+ Each card: character art full-bleed
+ Character name: cylinder rotation
+ Class/description: line clip wipe
+ Stats: stagger animate (bars fill on enter)
+ Each card buried: scale(0.88), blur, pushed back
+
+[WORLD MAP — horizontal scroll]
+ 5 zones scroll horizontally
+ Zone titles: offset diagonal layout
+ Environment art at different parallax speeds
+
+[PRE-ORDER — window pane iris]
+ Iris opens revealing pre-order section
+ Bleed typography: "ENTER THE REALM"
+ Magnetic CTA button
+```
+
+---
+
+## Example 5: Luxury Product E-Commerce {#ecommerce}
+
+**Brief:** High-end watch/jewelry brand. Understated elegance. Every animation whispers, not shouts. The product is the hero.
+
+**Techniques Used:**
+- DJI-style scale-in (product fills viewport, slowly)
+- GSAP Flip (watch travels from hero to detail view)
+- Section peel reveal (product details peel open)
+- Masked line curtain reveal (all body text)
+- Clip-path section birth (materials section)
+- Floating product between sections
+- Subtle parallax (depth factors halved for elegance)
+- Bleed typography (collection names)
+
+**Section Architecture:**
+
+```
+[HERO — pure white or cream]
+ No loading screen — immediate elegance
+ depth-0: pure white / soft cream gradient
+ depth-1: VERY subtle warm glow (opacity 0.2 only)
+ depth-2: minimal geometric line decoration (thin, opacity 0.3)
+ depth-3: WATCH PNG — centered, generous space, slow float (14s loop, tiny movement)
+ depth-4: brand name — thin weight, large tracking
+ "Est. 1887" — tiny, centered below
+ Parallax factors reduced: depth-3 factor = 0.3 (elegant, not dramatic)
+
+[PRODUCT TRANSITION — GSAP Flip]
+ Watch morphs from hero center to detail view (left side)
+ Detail text reveals via masked line curtain (right side)
+ Flip duration: 1.4s (luxury = slow, unhurried)
+
+[MATERIALS — clip-path section birth]
+ Cream/beige section
+ Product rises up through the section boundary
+ Material close-ups: stagger fade in from bottom (gentle)
+ Text: curtain line reveal (one line at a time, 0.2s stagger)
+
+[CRAFTSMANSHIP — top-down clip birth, then peel]
+ Section drops from top (elegant, not dramatic)
+ Video/image of watchmaker — DJI scale-in at reduced intensity
+ Text: word-by-word scroll lighting (VERY slow, meditative)
+
+[COLLECTION — section peel + horizontal scroll]
+ Peel reveals horizontal scroll gallery
+ 4 watch variants scroll horizontally
+ Each: full-bleed product + minimal text (clip wipe)
+
+[PURCHASE — circle iris (small, elegant)]
+ Circle opens from center, but slowly (2s duration)
+ Minimal layout: price, materials, add to cart
+ CTA: subtle skew + bounce (barely perceptible)
+ Trust signals: line-by-line curtain reveal
+```
+
+---
+
+## Combining Patterns — Quick Reference
+
+These combinations appear most often across successful premium sites:
+
+**The "Product Hero" Combination:**
+Floating product between sections + Top-down clip birth + Split converge title + Word scroll lighting
+
+**The "Cinematic Chapter" Combination:**
+Pinned sticky + Scrub timeline + Curtain panel roll-up + Theatrical enter/exit
+
+**The "Tech Premium" Combination:**
+Window pane iris + DJI scale-in + Line clip wipe + Cylinder rotation
+
+**The "Editorial" Combination:**
+Bleed typography + Offset diagonal + Horizontal scroll + Diagonal wipe
+
+**The "Minimal Luxury" Combination:**
+GSAP Flip + Section peel + Masked line curtain + Reduced parallax factors
diff --git a/engineering-team/epic-design/references/inter-section-effects.md b/engineering-team/epic-design/references/inter-section-effects.md
new file mode 100644
index 0000000..73d7c75
--- /dev/null
+++ b/engineering-team/epic-design/references/inter-section-effects.md
@@ -0,0 +1,493 @@
+# Inter-Section Effects Reference
+
+These are the most premium techniques — effects where elements **persist, travel, or transition between sections**, creating a seamless narrative thread across the entire page.
+
+## Table of Contents
+1. [Floating Product Between Sections](#floating-product)
+2. [GSAP Flip Cross-Section Morph](#flip-morph)
+3. [Clip-Path Section Birth (Product Grows from Border)](#clip-birth)
+4. [DJI-Style Scale-In Pin](#dji-scale)
+5. [Element Curved Path Travel](#curved-path)
+6. [Section Peel Reveal](#section-peel)
+
+---
+
+## Technique 1: Floating Product Between Sections {#floating-product}
+
+This is THE signature technique for product brands. A product image (juice bottle, phone, sneaker) starts inside the hero section. As you scroll, it appears to "rise up" through the section boundary and hover between two differently-colored sections — partially owned by neither. Then as you continue scrolling, it gracefully descends back in.
+
+**The Visual Story:**
+- Hero section: product sitting naturally inside
+- Mid-scroll: product "floating" in space, section colors visible above and below it
+- Continue scroll: product becomes part of the next section
+
+```css
+/* The product is positioned in a sticky wrapper */
+.inter-section-product-wrapper {
+ /* This wrapper spans BOTH sections */
+ position: relative;
+ z-index: 100;
+ pointer-events: none;
+ height: 0; /* no height — just a position anchor */
+}
+
+.inter-section-product {
+ position: sticky;
+ top: 50vh; /* stick to vertical center of viewport */
+ transform: translateY(-50%); /* true center */
+ width: 100%;
+ display: flex;
+ justify-content: center;
+ pointer-events: none;
+}
+
+.inter-section-product img {
+ width: clamp(280px, 35vw, 560px);
+ /* The product will be exactly at the section boundary
+ when the page is scrolled to that point */
+}
+```
+
+```javascript
+function initFloatingProduct() {
+ const wrapper = document.querySelector('.inter-section-product-wrapper');
+ const productImg = wrapper.querySelector('img');
+ const heroSection = document.querySelector('.hero-section');
+ const nextSection = document.querySelector('.feature-section');
+
+ // Create a ScrollTrigger timeline for the product's journey
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: heroSection,
+ start: 'bottom 80%', // starts rising as hero bottom approaches viewport
+ end: 'bottom 20%', // completes rise when hero fully exited
+ scrub: 1.5,
+ }
+ });
+
+ // Phase 1: Product rises up from hero (scale grows, shadow intensifies)
+ tl.fromTo(productImg,
+ {
+ y: 0,
+ scale: 0.85,
+ filter: 'drop-shadow(0 10px 20px rgba(0,0,0,0.2))',
+ },
+ {
+ y: '-8vh',
+ scale: 1.05,
+ filter: 'drop-shadow(0 40px 80px rgba(0,0,0,0.5))',
+ duration: 0.5,
+ }
+ );
+
+ // Phase 2: Product fully "between" sections — peak visibility
+ tl.to(productImg, {
+ y: '-5vh',
+ scale: 1.1,
+ duration: 0.3,
+ });
+
+ // Phase 3: Product descends into next section
+ ScrollTrigger.create({
+ trigger: nextSection,
+ start: 'top 60%',
+ end: 'top 20%',
+ scrub: 1.5,
+ onUpdate: (self) => {
+ gsap.to(productImg, {
+ y: `${self.progress * 8}vh`,
+ scale: 1.1 - (self.progress * 0.2),
+ duration: 0.1,
+ overwrite: true,
+ });
+ }
+ });
+}
+```
+
+### Required HTML Structure
+
+```html
+<!-- Reconstructed sketch based on the selectors used above; adjust to your markup -->
+<!-- Hero section: the product starts here -->
+<section class="hero-section">
+  <div class="hero-content">
+    <h1>Your Headline</h1>
+    <p>Hero subtext here</p>
+  </div>
+</section>
+
+<!-- Zero-height anchor spanning the section boundary; holds the sticky product -->
+<div class="inter-section-product-wrapper">
+  <div class="inter-section-product">
+    <img src="assets/product.png" alt="Product">
+  </div>
+</div>
+
+<!-- Next section: the product descends into this one -->
+<section class="feature-section">
+  <h2>Features Headline</h2>
+  <!-- feature content -->
+</section>
+```
+
+---
+
+## Technique 2: GSAP Flip Cross-Section Morph {#flip-morph}
+
+The same DOM element appears to travel between completely different layout positions across sections. In the hero it's large and centered; in the feature section it's small and left-aligned; in the detail section it's full-width. One smooth morph connects them all.
+
+```javascript
+function initFlipMorphSections() {
+ gsap.registerPlugin(Flip);
+
+ // The product element exists in one place in the DOM
+ // but we have "ghost" placeholder positions in other sections
+ const product = document.querySelector('.traveling-product');
+ const positions = {
+ hero: document.querySelector('.product-position-hero'),
+ feature: document.querySelector('.product-position-feature'),
+ detail: document.querySelector('.product-position-detail'),
+ };
+
+ function morphToPosition(positionEl, options = {}) {
+ // Capture current state
+ const state = Flip.getState(product);
+
+ // Move element to new position
+ positionEl.appendChild(product);
+
+ // Animate from captured state to new position
+ Flip.from(state, {
+ duration: 0.9,
+ ease: 'power3.inOut',
+ ...options
+ });
+ }
+
+ // Trigger morphs on scroll
+ ScrollTrigger.create({
+ trigger: '.feature-section',
+ start: 'top 60%',
+ onEnter: () => morphToPosition(positions.feature),
+ onLeaveBack: () => morphToPosition(positions.hero),
+ });
+
+ ScrollTrigger.create({
+ trigger: '.detail-section',
+ start: 'top 60%',
+ onEnter: () => morphToPosition(positions.detail),
+ onLeaveBack: () => morphToPosition(positions.feature),
+ });
+}
+```
+
+### Ghost Position Placeholders HTML
+
+```html
+<!-- Reconstructed sketch matching the selectors in initFlipMorphSections() -->
+<section class="hero-section">
+  <div class="product-position-hero">
+    <!-- the product lives here initially -->
+    <img class="traveling-product" src="assets/product.png" alt="Product">
+  </div>
+</section>
+
+<section class="feature-section">
+  <div class="product-position-feature"></div> <!-- empty ghost slot -->
+</section>
+
+<section class="detail-section">
+  <div class="product-position-detail"></div> <!-- empty ghost slot -->
+</section>
+```
+
+---
+
+## Technique 3: Clip-Path Section Birth (Product Grows from Border) {#clip-birth}
+
+The product image starts completely hidden below the section's bottom border — clipped out of existence. As the user scrolls into the section boundary, the product "grows up" through the border like a plant emerging from soil. This is distinct from the floating product — here, the section itself is the stage.
+
+```css
+.birth-section {
+ position: relative;
+ overflow: hidden; /* hard clip at section border */
+ min-height: 100vh;
+}
+
+.birth-product {
+ position: absolute;
+ bottom: -20%; /* starts 20% below the section — invisible */
+ left: 50%;
+ transform: translateX(-50%);
+ width: clamp(300px, 40vw, 600px);
+ /* Will animate up through the section boundary */
+}
+```
+
+```javascript
+function initClipPathBirth(sectionEl, productEl) {
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: sectionEl,
+ start: 'top 80%',
+ end: 'top 20%',
+ scrub: 1.2,
+ }
+ });
+
+ // Product rises from below section boundary
+ tl.fromTo(productEl,
+ {
+ y: '120%', // fully below section
+ scale: 0.7,
+ opacity: 0,
+ filter: 'blur(8px)'
+ },
+ {
+ y: '0%', // sits naturally in section
+ scale: 1,
+ opacity: 1,
+ filter: 'blur(0px)',
+ ease: 'power3.out',
+ duration: 1,
+ }
+ );
+
+ // Continue scroll → product rises further and becomes full height
+ // then disappears back below as section exits
+ ScrollTrigger.create({
+ trigger: sectionEl,
+ start: 'bottom 60%',
+ end: 'bottom top',
+ scrub: 1,
+ onUpdate: (self) => {
+ gsap.to(productEl, {
+ y: `${-self.progress * 50}%`,
+ opacity: 1 - self.progress,
+ scale: 1 + self.progress * 0.2,
+ duration: 0.1,
+ overwrite: true,
+ });
+ }
+ });
+}
+```
+
+---
+
+## Technique 4: DJI-Style Scale-In Pin {#dji-scale}
+
+Made famous by DJI drone product pages. A section starts with a small, contained image. As the user scrolls, the image scales up to fill the entire viewport — THEN the section unpins and the next content reveals. Creates a "zoom into the world" feeling.
+
+```javascript
+function initDJIScaleIn(sectionEl) {
+ const heroMedia = sectionEl.querySelector('.dji-media');
+ const heroContent = sectionEl.querySelector('.dji-content');
+ const overlay = sectionEl.querySelector('.dji-overlay');
+
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: sectionEl,
+ start: 'top top',
+ end: '+=300%',
+ pin: true,
+ scrub: 1.5,
+ }
+ });
+
+  // Stage 1: Small image scales up to fill viewport
+  // (width/left/top force layout every frame; tolerable for one pinned
+  // element, but prefer transform-only variants if you see jank)
+ tl.fromTo(heroMedia,
+ {
+ borderRadius: '20px',
+ scale: 0.3,
+ width: '60%',
+ left: '20%',
+ top: '20%',
+ },
+ {
+ borderRadius: '0px',
+ scale: 1,
+ width: '100%',
+ left: '0%',
+ top: '0%',
+ duration: 0.4,
+ ease: 'power2.inOut',
+ }
+ )
+ // Stage 2: Overlay fades in over the full-viewport image
+ .fromTo(overlay,
+ { opacity: 0 },
+ { opacity: 0.6, duration: 0.2 },
+ 0.35
+ )
+ // Stage 3: Content text appears over the overlay
+ .from(heroContent.querySelectorAll('.dji-line'),
+ {
+ y: 40,
+ opacity: 0,
+ stagger: 0.08,
+ duration: 0.25,
+ },
+ 0.45
+ );
+
+ return tl;
+}
+```
+
+```css
+.dji-section {
+ position: relative;
+ height: 100vh;
+ overflow: hidden;
+}
+.dji-media {
+ position: absolute;
+ height: 100%;
+ object-fit: cover;
+ /* Will be animated to full coverage */
+}
+.dji-overlay {
+ position: absolute;
+ inset: 0;
+ background: linear-gradient(to bottom, transparent, rgba(0,0,0,0.8));
+ opacity: 0;
+}
+.dji-content {
+ position: absolute;
+ bottom: 15%;
+ left: 8%;
+ right: 8%;
+ color: white;
+}
+```
+
+---
+
+## Technique 5: Element Curved Path Travel {#curved-path}
+
+The most advanced technique. A product element travels along a smooth, curved Bezier path across the page as the user scrolls — arcing through space like it's floating or being thrown, rather than just translating in a straight line.
+
+```html
+<!-- Reconstructed sketch: the container is the pin trigger, the image travels -->
+<div class="journey-container">
+  <img class="journey-product" src="assets/product.png" alt="Product">
+</div>
+```
+
+```javascript
+function initCurvedPathTravel(productEl) {
+ gsap.registerPlugin(MotionPathPlugin);
+
+ // Define the curved path as SVG coordinates
+ // Relative to the product's parent container
+ const path = [
+ { x: 0, y: 0 }, // Start: hero center
+ { x: -200, y: -100 }, // Arc left and up
+ { x: 100, y: -300 }, // Continue arcing
+ { x: 300, y: -150 }, // Swing right
+ { x: 200, y: 50 }, // Land into feature section
+ ];
+
+  // gsap.utils.interpolate([...]) returns a progress → value function,
+  // so drive scale from the ScrollTrigger's progress rather than passing
+  // the function straight to `scale` (which would only sample it once)
+  const scaleAt = gsap.utils.interpolate([0.8, 1.1, 0.9, 1.0, 1.2]);
+
+  gsap.to(productEl, {
+    motionPath: {
+      path: path,
+      curviness: 1.4,    // how curvy (0 = straight lines, 2 = very curved)
+      autoRotate: false, // don't rotate along path (keep product upright)
+    },
+    ease: 'none',
+    scrollTrigger: {
+      trigger: '.journey-container',
+      start: 'top top',
+      end: '+=400%',
+      pin: true,
+      scrub: 1.5,
+      onUpdate: (self) => gsap.set(productEl, { scale: scaleAt(self.progress) }),
+    }
+  });
+}
+```
+
+---
+
+## Technique 6: Section Peel Reveal {#section-peel}
+
+The section below is revealed by the section above peeling away — like turning a page. Uses `position: sticky; bottom: 0` so the lower section sticks to the screen bottom while the upper section scrolls away.
+
+```css
+.peel-upper {
+ position: relative;
+ z-index: 2;
+ min-height: 100vh;
+ /* This section scrolls away normally */
+}
+
+.peel-lower {
+ position: sticky;
+ bottom: 0; /* sticks to BOTTOM of viewport */
+ z-index: 1;
+ min-height: 100vh;
+ /* This section waits at the bottom as upper section peels away */
+}
+
+/* Container wraps both */
+.peel-container {
+ position: relative;
+}
+```
+
+```javascript
+function initSectionPeel() {
+ const upper = document.querySelector('.peel-upper');
+ const lower = document.querySelector('.peel-lower');
+
+ // As upper section scrolls, reveal lower by reducing clip
+ gsap.fromTo(upper,
+ { clipPath: 'inset(0 0 0 0)' },
+ {
+ clipPath: 'inset(0 0 100% 0)', // upper peels up and away
+ ease: 'none',
+ scrollTrigger: {
+ trigger: '.peel-container',
+ start: 'top top',
+ end: 'center top',
+ scrub: true,
+ }
+ }
+ );
+
+ // Lower section content animates in as it's revealed
+ gsap.from(lower.querySelectorAll('.peel-content > *'), {
+ y: 30,
+ opacity: 0,
+ stagger: 0.1,
+ duration: 0.6,
+ scrollTrigger: {
+ trigger: '.peel-container',
+ start: '30% top',
+ toggleActions: 'play none none reverse',
+ }
+ });
+}
+```
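+
+A minimal markup sketch matching the selectors above (element order matters: the upper section must precede the sticky lower one in the container):
+
+```html
+<div class="peel-container">
+  <section class="peel-upper">
+    <!-- scrolls away, clipped from the bottom up -->
+  </section>
+  <section class="peel-lower">
+    <div class="peel-content">
+      <h2>Revealed heading</h2>
+      <p>Revealed copy</p>
+    </div>
+  </section>
+</div>
+```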
+
+---
+
+## Choosing the Right Inter-Section Technique
+
+| Situation | Best Technique |
+|-----------|---------------|
+| Brand/product site with hero image | Floating Product Between Sections |
+| Product appears in multiple contexts | GSAP Flip Cross-Section Morph |
+| Product "rises" from section boundary | Clip-Path Section Birth |
+| Cinematic "enter the world" feeling | DJI-Style Scale-In Pin |
+| Product travels a journey narrative | Curved Path Travel |
+| Elegant section-to-section transition | Section Peel Reveal |
+| Dark → light section transition | Floating Product (section backgrounds change beneath) |
diff --git a/engineering-team/epic-design/references/motion-system.md b/engineering-team/epic-design/references/motion-system.md
new file mode 100644
index 0000000..829a35d
--- /dev/null
+++ b/engineering-team/epic-design/references/motion-system.md
@@ -0,0 +1,531 @@
+# Motion System Reference
+
+## Table of Contents
+1. [GSAP Setup & CDN](#gsap-setup)
+2. [Pattern 1: Multi-Layer Parallax](#pattern-1)
+3. [Pattern 2: Pinned Sticky Sections](#pattern-2)
+4. [Pattern 3: Cascading Card Stack](#pattern-3)
+5. [Pattern 4: Scrub Timeline](#pattern-4)
+6. [Pattern 5: Clip-Path Wipe Reveals](#pattern-5)
+7. [Pattern 6: Horizontal Scroll Conversion](#pattern-6)
+8. [Pattern 7: Perspective Zoom Fly-Through](#pattern-7)
+9. [Pattern 8: Snap-to-Section](#pattern-8)
+10. [Lenis Smooth Scroll](#lenis)
+11. [IntersectionObserver Activation](#intersection-observer)
+
+---
+
+## GSAP Setup & CDN {#gsap-setup}
+
+Always load from jsDelivr CDN:
+
+```html
+<!-- Core + the plugins used in these patterns (pin a version as needed) -->
+<script src="https://cdn.jsdelivr.net/npm/gsap@3.12.5/dist/gsap.min.js"></script>
+<script src="https://cdn.jsdelivr.net/npm/gsap@3.12.5/dist/ScrollTrigger.min.js"></script>
+<script src="https://cdn.jsdelivr.net/npm/gsap@3.12.5/dist/Flip.min.js"></script>
+<script src="https://cdn.jsdelivr.net/npm/gsap@3.12.5/dist/Observer.min.js"></script>
+<script src="https://cdn.jsdelivr.net/npm/gsap@3.12.5/dist/MotionPathPlugin.min.js"></script>
+
+<script>
+  // Register everything once, before any init function runs.
+  // Text-effect plugins (e.g. scramble text) load the same way.
+  gsap.registerPlugin(ScrollTrigger, Flip, Observer, MotionPathPlugin);
+</script>
+```
+
+---
+
+## Pattern 1: Multi-Layer Parallax {#pattern-1}
+
+The foundation of all 2.5D depth. Different layers scroll at different speeds.
+
+```javascript
+function initParallax() {
+ const layers = document.querySelectorAll('[data-depth]');
+
+ const depthFactors = {
+ '0': 0.10, '1': 0.25, '2': 0.50,
+ '3': 0.80, '4': 1.00, '5': 1.20
+ };
+
+ layers.forEach(layer => {
+ const depth = layer.dataset.depth;
+ const factor = depthFactors[depth] || 1.0;
+
+ gsap.to(layer, {
+ yPercent: -15 * factor, // adjust multiplier for desired effect intensity
+ ease: 'none',
+ scrollTrigger: {
+ trigger: layer.closest('.scene'),
+ start: 'top bottom',
+ end: 'bottom top',
+ scrub: true, // 1:1 scroll-to-animation
+ }
+ });
+ });
+}
+```
+
+**When to use:** Every project. This is always on.
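+
+The markup this expects is a `.scene` wrapper with `data-depth` children. A hedged sketch (class names beyond the two selectors queried above are illustrative):
+
+```html
+<section class="scene">
+  <div data-depth="0" class="bg-gradient"></div>
+  <div data-depth="2" class="decor-shapes"></div>
+  <img data-depth="3" src="assets/product.png" alt="Product">
+  <h1 data-depth="4">Headline</h1>
+  <div data-depth="5" class="particles"></div>
+</section>
+```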
+
+---
+
+## Pattern 2: Pinned Sticky Sections {#pattern-2}
+
+A section stays fixed while its content animates. Other sections slide over/under it. The "window over window" effect.
+
+```javascript
+function initPinnedSection(sceneEl) {
+ // The section stays pinned for `duration` scroll pixels
+ // while inner content animates on a scrubbed timeline
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: sceneEl,
+ start: 'top top',
+ end: '+=150%', // stay pinned for 1.5x viewport of scroll
+ pin: true, // THIS is what pins the section
+ scrub: 1, // 1 second smoothing
+ anticipatePin: 1, // prevents jump on pin
+ }
+ });
+
+ // Inner content animations while pinned
+ // These play out over the scroll distance
+ tl.from('.pinned-title', { opacity: 0, y: 60, duration: 0.3 })
+ .from('.pinned-image', { scale: 0.8, opacity: 0, duration: 0.4 })
+ .to('.pinned-bg', { backgroundColor: '#1a0a2e', duration: 0.3 })
+ .from('.pinned-sub', { opacity: 0, x: -40, duration: 0.3 });
+
+ return tl;
+}
+```
+
+**Visual result:** Section feels like a chapter — the page "lives inside it" for a while, then moves on.
+
+---
+
+## Pattern 3: Cascading Card Stack {#pattern-3}
+
+New sections slide over previous ones. Each buried section scales down and darkens, feeling like it's receding.
+
+```css
+/* CSS Setup */
+.card-stack-section {
+ position: sticky;
+ top: 0;
+ height: 100vh;
+ /* Each subsequent section has higher z-index */
+}
+.card-stack-section:nth-child(1) { z-index: 1; }
+.card-stack-section:nth-child(2) { z-index: 2; }
+.card-stack-section:nth-child(3) { z-index: 3; }
+.card-stack-section:nth-child(4) { z-index: 4; }
+```
+
+```javascript
+function initCardStack() {
+ const cards = gsap.utils.toArray('.card-stack-section');
+
+ cards.forEach((card, i) => {
+ // Each card (except last) gets buried as next one enters
+ if (i < cards.length - 1) {
+ gsap.to(card, {
+ scale: 0.88,
+ filter: 'brightness(0.5) blur(3px)',
+ borderRadius: '20px',
+ ease: 'none',
+ scrollTrigger: {
+ trigger: cards[i + 1], // fires when NEXT card enters
+ start: 'top bottom',
+ end: 'top top',
+ scrub: true,
+ }
+ });
+ }
+ });
+}
+```
+
+---
+
+## Pattern 4: Scrub Timeline {#pattern-4}
+
+The most powerful pattern. Elements transform EXACTLY in sync with scroll position. One pixel of scroll = one frame of animation.
+
+```javascript
+function initScrubTimeline(sceneEl) {
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: sceneEl,
+ start: 'top top',
+ end: '+=200%',
+ pin: true,
+ scrub: 1.5, // 1.5s lag for smooth, dreamy feel (use 0 for precise 1:1)
+ }
+ });
+
+ // Sequences play out as user scrolls
+ // 0.0 to 0.25 → first 25% of scroll
+ tl.fromTo('.hero-product',
+ { scale: 0.6, opacity: 0, y: 100 },
+ { scale: 1, opacity: 1, y: 0, duration: 0.25 }
+ )
+ // 0.25 to 0.5 → second quarter
+ .to('.hero-title span:first-child', {
+ x: '-30vw', opacity: 0, duration: 0.25
+ }, 0.25)
+ .to('.hero-title span:last-child', {
+ x: '30vw', opacity: 0, duration: 0.25
+ }, 0.25)
+ // 0.5 to 0.75 → third quarter
+ .to('.hero-product', {
+ scale: 1.3, y: -50, duration: 0.25
+ }, 0.5)
+ .fromTo('.next-section-content',
+ { opacity: 0, y: 80 },
+ { opacity: 1, y: 0, duration: 0.25 },
+ 0.5
+ )
+ // 0.75 to 1.0 → final quarter
+ .to('.hero-product', {
+ opacity: 0, scale: 1.6, duration: 0.25
+ }, 0.75);
+
+ return tl;
+}
+```
+
+---
+
+## Pattern 5: Clip-Path Wipe Reveals {#pattern-5}
+
+Content is hidden behind a clip-path mask that animates away to reveal the content beneath. Clip-path animation repaints without triggering layout, so it stays buttery smooth.
+
+```javascript
+// Left-to-right horizontal wipe
+function initHorizontalWipe(el) {
+ gsap.fromTo(el,
+ { clipPath: 'inset(0 100% 0 0)' },
+ {
+ clipPath: 'inset(0 0% 0 0)',
+ duration: 1.2,
+ ease: 'power3.out',
+ scrollTrigger: { trigger: el, start: 'top 80%' }
+ }
+ );
+}
+
+// Top-to-bottom drop reveal
+function initTopDropReveal(el) {
+ gsap.fromTo(el,
+ { clipPath: 'inset(0 0 100% 0)' },
+ {
+ clipPath: 'inset(0 0 0% 0)',
+ duration: 1.0,
+ ease: 'power2.out',
+ scrollTrigger: { trigger: el, start: 'top 75%' }
+ }
+ );
+}
+
+// Circle iris expand
+function initCircleIris(el) {
+ gsap.fromTo(el,
+ { clipPath: 'circle(0% at 50% 50%)' },
+ {
+ clipPath: 'circle(75% at 50% 50%)',
+ duration: 1.4,
+ ease: 'power2.inOut',
+ scrollTrigger: { trigger: el, start: 'top 60%' }
+ }
+ );
+}
+
+// Window pane iris (tiny box expands to full)
+function initWindowPaneIris(sceneEl) {
+ gsap.fromTo(sceneEl,
+ { clipPath: 'inset(45% 30% 45% 30% round 8px)' },
+ {
+ clipPath: 'inset(0% 0% 0% 0% round 0px)',
+ ease: 'none',
+ scrollTrigger: {
+ trigger: sceneEl,
+ start: 'top 80%',
+ end: 'top 20%',
+ scrub: 1,
+ }
+ }
+ );
+}
+```
+
+---
+
+## Pattern 6: Horizontal Scroll Conversion {#pattern-6}
+
+Vertical scrolling drives horizontal movement through panels. Classic premium technique.
+
+```javascript
+function initHorizontalScroll(containerEl) {
+ const panels = gsap.utils.toArray('.h-panel', containerEl);
+
+ gsap.to(panels, {
+ xPercent: -100 * (panels.length - 1),
+ ease: 'none',
+ scrollTrigger: {
+ trigger: containerEl,
+ pin: true,
+ scrub: 1,
+ end: () => `+=${containerEl.offsetWidth * (panels.length - 1)}`,
+ snap: 1 / (panels.length - 1), // auto-snap to each panel
+ }
+ });
+}
+```
+
+```css
+.h-scroll-container {
+ display: flex;
+  width: 300vw; /* 3 panels × 100vw; scale with panel count */
+ height: 100vh;
+ overflow: hidden;
+}
+.h-panel {
+ width: 100vw;
+ height: 100vh;
+ flex-shrink: 0;
+}
+```
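+
+Matching markup sketch: the container passed to `initHorizontalScroll` wraps the panels directly.
+
+```html
+<div class="h-scroll-container">
+  <section class="h-panel">Panel 1</section>
+  <section class="h-panel">Panel 2</section>
+  <section class="h-panel">Panel 3</section>
+</div>
+```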
+
+---
+
+## Pattern 7: Perspective Zoom Fly-Through {#pattern-7}
+
+User appears to fly toward content. Combines scale, Z-axis, and opacity on a scrubbed pin.
+
+```javascript
+function initPerspectiveZoom(sceneEl) {
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: sceneEl,
+ start: 'top top',
+ end: '+=300%',
+ pin: true,
+ scrub: 2,
+ }
+ });
+
+ // Background "rushes toward" viewer
+ tl.fromTo('.zoom-bg',
+ { scale: 0.4, filter: 'blur(20px)', opacity: 0.3 },
+ { scale: 1.2, filter: 'blur(0px)', opacity: 1, duration: 0.6 }
+ )
+ // Product appears from far
+ .fromTo('.zoom-product',
+ { scale: 0.1, z: -2000, opacity: 0 },
+ { scale: 1, z: 0, opacity: 1, duration: 0.5, ease: 'power2.out' },
+ 0.2
+ )
+ // Text fades in after product arrives
+ .fromTo('.zoom-title',
+ { opacity: 0, letterSpacing: '2em' },
+ { opacity: 1, letterSpacing: '0.05em', duration: 0.3 },
+ 0.55
+ );
+}
+```
+
+```css
+.zoom-scene {
+ perspective: 1200px;
+ perspective-origin: 50% 50%;
+ transform-style: preserve-3d;
+ overflow: hidden;
+}
+```
+
+---
+
+## Pattern 8: Snap-to-Section {#pattern-8}
+
+Full-page scroll snapping between sections — creates a chapter-like book feeling.
+
+```javascript
+// Using GSAP Observer for smooth snapping
+function initSectionSnap() {
+ // Register Observer plugin
+ gsap.registerPlugin(Observer);
+
+ const sections = gsap.utils.toArray('.snap-section');
+ let currentIndex = 0;
+ let animating = false;
+
+ function goTo(index) {
+ if (animating || index === currentIndex) return;
+ animating = true;
+
+ const direction = index > currentIndex ? 1 : -1;
+ const current = sections[currentIndex];
+ const next = sections[index];
+
+ const tl = gsap.timeline({
+ onComplete: () => {
+ currentIndex = index;
+ animating = false;
+ }
+ });
+
+ // Current section exits upward
+ tl.to(current, {
+ yPercent: -100 * direction,
+ opacity: 0,
+ duration: 0.8,
+ ease: 'power2.inOut'
+ })
+ // Next section enters from below/above
+ .fromTo(next,
+ { yPercent: 100 * direction, opacity: 0 },
+ { yPercent: 0, opacity: 1, duration: 0.8, ease: 'power2.inOut' },
+ 0
+ );
+ }
+
+ Observer.create({
+ type: 'wheel,touch',
+ onDown: () => goTo(Math.min(currentIndex + 1, sections.length - 1)),
+ onUp: () => goTo(Math.max(currentIndex - 1, 0)),
+ tolerance: 100,
+ preventDefault: true,
+ });
+}
+```
+
+---
+
+## Lenis Smooth Scroll {#lenis}
+
+Lenis replaces native browser scroll with silky-smooth physics-based scrolling. Always pair with GSAP ScrollTrigger.
+
+```html
+<script src="https://cdn.jsdelivr.net/npm/lenis@1/dist/lenis.min.js"></script>
+```
+
+```javascript
+function initLenis() {
+ const lenis = new Lenis({
+ duration: 1.2,
+ easing: (t) => Math.min(1, 1.001 - Math.pow(2, -10 * t)),
+ orientation: 'vertical',
+ smoothWheel: true,
+ });
+
+ // CRITICAL: Connect Lenis to GSAP ticker
+ lenis.on('scroll', ScrollTrigger.update);
+ gsap.ticker.add((time) => lenis.raf(time * 1000));
+ gsap.ticker.lagSmoothing(0);
+
+ return lenis;
+}
+```
+
+---
+
+## IntersectionObserver Activation {#intersection-observer}
+
+Only animate elements that are currently visible. Critical for performance.
+
+```javascript
+function initRevealObserver() {
+ const observer = new IntersectionObserver((entries) => {
+ entries.forEach(entry => {
+ if (entry.isIntersecting) {
+ entry.target.classList.add('is-visible');
+ // Trigger GSAP animation
+ const animType = entry.target.dataset.animate;
+ if (animType) triggerAnimation(entry.target, animType);
+ // Stop observing after first trigger
+ observer.unobserve(entry.target);
+ }
+ });
+ }, {
+ threshold: 0.15,
+ rootMargin: '0px 0px -50px 0px'
+ });
+
+ document.querySelectorAll('[data-animate]').forEach(el => observer.observe(el));
+}
+
+function triggerAnimation(el, type) {
+ const animations = {
+ 'fade-up': () => gsap.from(el, { y: 60, opacity: 0, duration: 0.8, ease: 'power3.out' }),
+ 'fade-in': () => gsap.from(el, { opacity: 0, duration: 1.0, ease: 'power2.out' }),
+ 'scale-in': () => gsap.from(el, { scale: 0.8, opacity: 0, duration: 0.7, ease: 'back.out(1.7)' }),
+ 'slide-left': () => gsap.from(el, { x: -80, opacity: 0, duration: 0.8, ease: 'power3.out' }),
+ 'slide-right':() => gsap.from(el, { x: 80, opacity: 0, duration: 0.8, ease: 'power3.out' }),
+ 'converge': () => animateSplitConverge(el), // See text-animations.md
+ };
+ animations[type]?.();
+}
+```
+
+---
+
+## Pattern 9: Elastic Drop with Impact Shake {#elastic-drop}
+
+An element falls from above with an elastic overshoot, then a rapid
+micro-rotation shake fires on landing — simulating physical weight and impact.
+
+```javascript
+function initElasticDrop(productEl, wrapperEl) {
+ const tl = gsap.timeline({ delay: 0.3 });
+
+ // Phase 1: element drops with elastic bounce
+ tl.from(productEl, {
+ y: -180,
+ opacity: 0,
+ scale: 1.1,
+ duration: 1.3,
+ ease: 'elastic.out(1, 0.65)',
+ })
+
+ // Phase 2: shake fires just as the elastic settles
+  // Apply to the WRAPPER, not the element, to avoid transform conflicts
+ .to(wrapperEl, {
+ keyframes: [
+ { rotation: -2, duration: 0.08 },
+ { rotation: 2, duration: 0.08 },
+ { rotation: -1.5, duration: 0.07 },
+ { rotation: 1, duration: 0.07 },
+ { rotation: 0, duration: 0.10 },
+ ],
+ ease: 'power1.inOut',
+ }, '-=0.35');
+
+ return tl;
+}
+```
+
+```html
+<!-- Hypothetical markup: the drop animates the image, the shake animates the wrapper -->
+<div class="drop-wrapper">
+  <img class="drop-product" src="assets/product.png" alt="Product">
+</div>
+```
+
+Ease variants:
+- `elastic.out(1, 0.65)` — standard product, moderate bounce
+- `elastic.out(1.2, 0.5)` — heavier object, more overshoot
+- `elastic.out(0.8, 0.8)` — lighter, quicker settle
+- `back.out(2.5)` — no oscillation, one clean overshoot
+
+Do NOT use for: gentle floaters, airy elements (flowers, feathers) — use `power3.out` instead.
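+
+The `(amplitude, period)` arguments control overshoot height and oscillation speed. A plain-JS sketch of the classic elastic-out formula (GSAP's internal implementation may differ in detail):
+
+```javascript
+// Classic elastic-out easing: progress t in [0,1] → eased value.
+// amplitude ≥ 1 raises the overshoot; period shortens/lengthens oscillations.
+function elasticOut(t, amplitude = 1, period = 0.65) {
+  if (t <= 0) return 0;
+  if (t >= 1) return 1;
+  const a = Math.max(amplitude, 1); // keep the asin() below defined
+  const s = (period / (2 * Math.PI)) * Math.asin(1 / a);
+  return a * Math.pow(2, -10 * t) *
+    Math.sin(((t - s) * 2 * Math.PI) / period) + 1;
+}
+```
+
+Plotting `elasticOut(t)` shows the value overshoot past 1 and ring down — the settle that the Phase 2 shake above is timed against.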
diff --git a/engineering-team/epic-design/references/performance.md b/engineering-team/epic-design/references/performance.md
new file mode 100644
index 0000000..6055a97
--- /dev/null
+++ b/engineering-team/epic-design/references/performance.md
@@ -0,0 +1,261 @@
+# Performance Reference
+
+## The Golden Rule
+
+**Only animate properties that the browser can handle on the GPU compositor thread:**
+
+```
+✅ SAFE (GPU composited): transform, opacity, filter, clip-path
+❌ AVOID (triggers layout): width, height, top, left, right, bottom, margin, padding,
+   font-size, border-width, background-size
+```
+
+Animating layout properties causes the browser to recalculate the entire page layout on every frame — this is called "layout thrash" and causes jank.
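+
+The split can be encoded as a lint-style helper — a sketch whose lists simply mirror the rule above, not an exhaustive browser spec:
+
+```javascript
+// Classify a CSS property by what animating it costs the browser.
+const COMPOSITED = new Set(['transform', 'opacity', 'filter', 'clip-path']);
+const LAYOUT = new Set([
+  'width', 'height', 'top', 'left', 'right', 'bottom',
+  'margin', 'padding', 'font-size', 'border-width', 'background-size',
+]);
+
+function animationCost(prop) {
+  if (COMPOSITED.has(prop)) return 'composited'; // compositor thread — safe
+  if (LAYOUT.has(prop)) return 'layout';         // full reflow — avoid
+  return 'paint';                                // repaint only — use sparingly
+}
+```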
+
+---
+
+## requestAnimationFrame Pattern
+
+Never put animation logic directly in event listeners. Always batch through rAF:
+
+```javascript
+let rafId = null;
+let pendingScrollY = 0;
+
+function onScroll() {
+ pendingScrollY = window.scrollY;
+ if (!rafId) {
+ rafId = requestAnimationFrame(processScroll);
+ }
+}
+
+function processScroll() {
+ rafId = null;
+ document.documentElement.style.setProperty('--scroll-y', `${pendingScrollY}px`); // unit makes it usable in transforms
+ // update other values...
+}
+
+window.addEventListener('scroll', onScroll, { passive: true });
+// passive: true is CRITICAL — tells browser scroll handler won't preventDefault
+// allows browser to scroll on a separate thread
+```
+
+---
+
+## will-change Usage Rules
+
+`will-change` promotes an element to its own GPU layer. Powerful but dangerous if overused.
+
+```css
+/* DO: Only apply when animation is about to start */
+.element-about-to-animate {
+ will-change: transform, opacity;
+}
+
+/* DO: Remove after the animation completes — from JS, since cleanup
+   can't be expressed in CSS alone:
+   element.addEventListener('animationend', () => {
+     element.style.willChange = 'auto';
+   });
+*/
+
+/* DON'T: Apply globally */
+* { will-change: transform; } /* WRONG — massive GPU memory usage */
+
+/* DON'T: Apply statically on all animated elements */
+.animated-thing { will-change: transform; } /* Wrong if there are many of these */
+```
+
+### GSAP handles this automatically
+GSAP applies `will-change` during animations and removes it after. If using GSAP, you generally don't need to manage `will-change` yourself.
+
+---
+
+## IntersectionObserver Pattern
+
+Never animate all elements all the time. Only animate what's currently visible.
+
+```javascript
+class AnimationManager {
+ constructor() {
+ this.activeAnimations = new Set();
+ this.observer = new IntersectionObserver(
+ this.handleIntersection.bind(this),
+ { threshold: 0.1, rootMargin: '50px 0px' }
+ );
+ }
+
+ observe(el) {
+ this.observer.observe(el);
+ }
+
+ handleIntersection(entries) {
+ entries.forEach(entry => {
+ if (entry.isIntersecting) {
+ this.activateElement(entry.target);
+ } else {
+ this.deactivateElement(entry.target);
+ }
+ });
+ }
+
+ activateElement(el) {
+ // Start GSAP animation / add floating class
+ el.classList.add('animate-active');
+ this.activeAnimations.add(el);
+ }
+
+ deactivateElement(el) {
+ // Pause or stop animation
+ el.classList.remove('animate-active');
+ this.activeAnimations.delete(el);
+ }
+}
+
+const animManager = new AnimationManager();
+document.querySelectorAll('.animated-layer').forEach(el => animManager.observe(el));
+```
+
+---
+
+## content-visibility: auto
+
+For pages with many off-screen sections, this dramatically improves initial load and scroll performance:
+
+```css
+/* Apply to every major section except the first (which is immediately visible) */
+.scene:not(:first-child) {
+ content-visibility: auto;
+ /* Tells browser: don't render this until it's near the viewport */
+ contain-intrinsic-size: 0 100vh;
+ /* Gives browser an estimated height so scrollbar is correct */
+}
+```
+
+**Note:** Don't apply to the first section — it causes a flash of invisible content.
+
+---
+
+## Asset Optimization Rules
+
+### PNG File Size Targets (Maximum)
+
+| Depth Level | Element Type | Max File Size | Max Dimensions |
+|-------------|---------------------|---------------|----------------|
+| Depth 0 | Background | 150KB | 1920×1080 |
+| Depth 1 | Glow layer | 60KB | 1000×1000 |
+| Depth 2 | Decorations | 50KB | 400×400 |
+| Depth 3 | Main product/hero | 120KB | 1200×1200 |
+| Depth 4 | UI components | 40KB | 800×800 |
+| Depth 5 | Particles | 10KB | 128×128 |
+
+**Total page weight target: under 2MB for all assets combined.**
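+
+The table can be checked mechanically. A sketch of a budget audit — the per-depth limits come from the table above; the input shape is an assumption:
+
+```javascript
+// Max file size in KB per depth level, mirroring the table above.
+const BUDGET_KB = { 0: 150, 1: 60, 2: 50, 3: 120, 4: 40, 5: 10 };
+const PAGE_BUDGET_KB = 2048; // ~2MB total target
+
+// assets: [{ name, depth, kb }] → array of violation messages (empty = pass)
+function checkAssetBudget(assets) {
+  const violations = [];
+  let totalKb = 0;
+  for (const { name, depth, kb } of assets) {
+    totalKb += kb;
+    const max = BUDGET_KB[depth];
+    if (max !== undefined && kb > max) {
+      violations.push(`${name}: ${kb}KB exceeds depth-${depth} limit (${max}KB)`);
+    }
+  }
+  if (totalKb > PAGE_BUDGET_KB) {
+    violations.push(`total ${totalKb}KB exceeds ${PAGE_BUDGET_KB}KB page budget`);
+  }
+  return violations;
+}
+```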
+
+### Image Loading Strategy
+
+```html
+<!-- Illustrative filenames — the loading pattern is what matters -->
+
+<!-- Hero / above-the-fold: fetch immediately at highest priority -->
+<img src="hero-product.png" alt="Product hero" fetchpriority="high" decoding="async">
+
+<!-- Below-the-fold sections: defer until near the viewport -->
+<img src="section-2-bg.png" alt="" loading="lazy" decoding="async">
+
+<!-- Decorative layers: lazy-load and hide from screen readers -->
+<img src="sparkle.png" alt="" loading="lazy" aria-hidden="true">
+```
+
+---
+
+## Mobile Performance
+
+Touch devices have less GPU power. Always detect and reduce effects:
+
+```javascript
+const isTouchDevice = window.matchMedia('(pointer: coarse)').matches;
+const prefersReduced = window.matchMedia('(prefers-reduced-motion: reduce)').matches;
+const isLowPower = navigator.hardwareConcurrency <= 4; // heuristic for low-end devices
+
+const performanceMode = (isTouchDevice || prefersReduced || isLowPower) ? 'lite' : 'full';
+
+function initForPerformanceMode() {
+ if (performanceMode === 'lite') {
+ // Disable: mouse tracking, floating loops, particles, perspective zoom
+ document.documentElement.classList.add('perf-lite');
+ // Keep: basic scroll fade-ins, curtain reveals (CSS only)
+ } else {
+ // Full experience
+ initParallaxLayers();
+ initFloatingLoops();
+ initParticles();
+ initMouseTracking();
+ }
+}
+```
+
+```css
+/* Disable GPU-heavy effects in lite mode */
+.perf-lite .depth-0,
+.perf-lite .depth-1,
+.perf-lite .depth-5 {
+ transform: none !important;
+ will-change: auto !important;
+}
+.perf-lite .float-loop {
+ animation: none !important;
+}
+.perf-lite .glow-blob {
+ display: none;
+}
+```
+
+---
+
+## Chrome DevTools Performance Checklist
+
+Before shipping, verify:
+
+1. **Layers panel**: In DevTools, open the Rendering drawer and enable "Layer borders" — the page should not show an excessive number of promoted layers (target: under 20)
+2. **Performance tab**: Record while scrolling and verify it holds 60fps. Look for long frames (>16ms)
+3. **Memory tab**: Heap snapshot — should not grow during scroll (no leaks)
+4. **Coverage tab**: Check unused CSS/JS — strip unused animation classes
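+
+For step 2, long frames can also be quantified in code from a list of frame timestamps (e.g. collected with `requestAnimationFrame` during a scroll) — a sketch:
+
+```javascript
+// timestamps: ms values of consecutive frames.
+// Counts frame gaps that blew the ~16.7ms budget for 60fps.
+function longFrameStats(timestamps, budgetMs = 16.7) {
+  let long = 0;
+  for (let i = 1; i < timestamps.length; i++) {
+    if (timestamps[i] - timestamps[i - 1] > budgetMs) long++;
+  }
+  const frames = Math.max(timestamps.length - 1, 0);
+  return { frames, long, pct: frames ? (long / frames) * 100 : 0 };
+}
+```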
+
+---
+
+## GSAP Performance Tips
+
+```javascript
+// BAD: Creates new tween every scroll event
+window.addEventListener('scroll', () => {
+ gsap.to(element, { y: window.scrollY * 0.5 }); // creates new tween each frame!
+});
+
+// GOOD: Use scrub — GSAP manages timing internally
+gsap.to(element, {
+ y: 200,
+ ease: 'none',
+ scrollTrigger: {
+ scrub: true, // GSAP handles this efficiently
+ }
+});
+
+// GOOD: Kill ScrollTriggers when not needed
+const trigger = ScrollTrigger.create({ ... });
+// Later:
+trigger.kill();
+
+// GOOD: Use gsap.set() for instant placement (no tween overhead)
+gsap.set('.element', { x: 0, opacity: 1 });
+
+// GOOD: Batch DOM reads/writes
+gsap.utils.toArray('.elements').forEach(el => {
+ // GSAP batches these reads automatically
+ gsap.from(el, { ... });
+});
+```
diff --git a/engineering-team/epic-design/references/text-animations.md b/engineering-team/epic-design/references/text-animations.md
new file mode 100644
index 0000000..c5b0fec
--- /dev/null
+++ b/engineering-team/epic-design/references/text-animations.md
@@ -0,0 +1,709 @@
+# Text Animation Reference
+
+## Table of Contents
+1. [Setup: SplitText & Dependencies](#setup)
+2. [Technique 1: Split Converge (Left+Right Merge)](#split-converge)
+3. [Technique 2: Masked Line Curtain Reveal](#masked-line)
+4. [Technique 3: Character Cylinder Rotation](#cylinder)
+5. [Technique 4: Word-by-Word Scroll Lighting](#word-lighting)
+6. [Technique 5: Scramble Text](#scramble)
+7. [Technique 6: Skew + Elastic Bounce Entry](#skew-bounce)
+8. [Technique 7: Theatrical Enter + Auto Exit](#theatrical)
+9. [Technique 8: Offset Diagonal Layout](#offset-diagonal)
+10. [Technique 9: Line Clip Wipe](#line-clip-wipe)
+11. [Technique 10: Scroll-Speed Reactive Marquee](#marquee)
+12. [Technique 11: Variable Font Wave](#variable-font)
+13. [Technique 12: Bleed Typography](#bleed-type)
+14. [Technique 13: Ghost Outlined Background Text](#ghost-text)
+
+---
+
+## Setup: SplitText & Dependencies {#setup}
+
+```html
+<!-- CDN loads — illustrative URLs; pin exact versions in production -->
+<script src="https://cdn.jsdelivr.net/npm/gsap@3/dist/gsap.min.js"></script>
+<script src="https://cdn.jsdelivr.net/npm/gsap@3/dist/ScrollTrigger.min.js"></script>
+<script src="https://cdn.jsdelivr.net/npm/gsap@3/dist/SplitText.min.js"></script>
+<script>gsap.registerPlugin(ScrollTrigger, SplitText);</script>
+```
+
+### Universal Text Setup CSS
+
+```css
+/* All text elements that animate need this */
+.anim-text {
+ overflow: hidden; /* Contains line mask reveals */
+ line-height: 1.15;
+}
+/* Screen readers: SplitText fragments are noise — set aria-label on the
+   container and aria-hidden="true" on the fragments from JS/HTML.
+   (aria-hidden is an HTML attribute, not a CSS property.) */
+```
+
+---
+
+## Technique 1: Split Converge (Left+Right Merge) {#split-converge}
+
+The signature effect: two halves of a title fly in from opposite sides, converge to form the complete title, hold, then diverge and disappear on scroll exit. Exactly what the user described.
+
+```css
+.hero-title {
+ display: flex;
+ flex-wrap: wrap;
+ gap: 0.25em;
+ overflow: visible; /* allow parts to fly from outside viewport */
+}
+.hero-title .word-left {
+ display: inline-block;
+ /* starts at far left */
+}
+.hero-title .word-right {
+ display: inline-block;
+ /* starts at far right */
+}
+```
+
+```javascript
+function initSplitConverge(titleEl) {
+ // Preserve accessibility
+ const fullText = titleEl.textContent;
+ titleEl.setAttribute('aria-label', fullText);
+
+ const words = titleEl.querySelectorAll('.word');
+ const midpoint = Math.floor(words.length / 2);
+
+ const leftWords = Array.from(words).slice(0, midpoint);
+ const rightWords = Array.from(words).slice(midpoint);
+
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: titleEl.closest('.scene'),
+ start: 'top top',
+ end: '+=250%',
+ pin: true,
+ scrub: 1.2,
+ }
+ });
+
+ // Phase 1 — ENTER (0% → 25%): Words converge from sides
+ tl.fromTo(leftWords,
+ { x: '-120vw', opacity: 0 },
+ { x: 0, opacity: 1, duration: 0.25, ease: 'power3.out', stagger: 0.03 },
+ 0
+ )
+ .fromTo(rightWords,
+ { x: '120vw', opacity: 0 },
+ { x: 0, opacity: 1, duration: 0.25, ease: 'power3.out', stagger: -0.03 },
+ 0
+ )
+
+ // Phase 2 — HOLD (25% → 70%): Nothing — words are readable, section pinned
+ // (empty duration keeps the scrub paused here)
+ .to({}, { duration: 0.45 }, 0.25)
+
+ // Phase 3 — EXIT (70% → 100%): Words diverge back out
+ .to(leftWords,
+ { x: '-120vw', opacity: 0, duration: 0.28, ease: 'power3.in', stagger: 0.02 },
+ 0.70
+ )
+ .to(rightWords,
+ { x: '120vw', opacity: 0, duration: 0.28, ease: 'power3.in', stagger: -0.02 },
+ 0.70
+ );
+
+ return tl;
+}
+```
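+
+The 25%/70% breakpoints used as timeline positions above reduce to a pure phase map, which is handy when debugging the scrub:
+
+```javascript
+// Map pinned-scrub progress (0–1) to the converge phase.
+function convergePhase(progress) {
+  if (progress < 0.25) return 'enter'; // halves flying in from the sides
+  if (progress < 0.70) return 'hold';  // title assembled and readable
+  return 'exit';                       // halves diverging back out
+}
+```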
+
+### HTML Template
+
+```html
+<!-- Illustrative copy — initSplitConverge splits .word elements at the midpoint -->
+<h1 class="hero-title">
+  <span class="word word-left">Your</span>
+  <span class="word word-left">Brand</span>
+  <span class="word word-right">Name</span>
+  <span class="word word-right">Here</span>
+</h1>
+```
+
+---
+
+## Technique 2: Masked Line Curtain Reveal {#masked-line}
+
+Lines slide upward from behind an invisible curtain. Each line is hidden in an `overflow: hidden` container and translates up into view.
+
+```css
+.curtain-text .line-mask {
+ overflow: hidden;
+ line-height: 1.2;
+ /* The mask — content starts below and slides up into view */
+}
+.curtain-text .line-inner {
+ display: block;
+ /* Starts translated down below the mask */
+ transform: translateY(110%);
+}
+```
+
+```javascript
+function initCurtainReveal(textEl) {
+ // SplitText splits into lines automatically
+ const split = new SplitText(textEl, {
+ type: 'lines',
+ linesClass: 'line-inner',
+ // vertical tolerance used when detecting which elements share a line
+ lineThreshold: 0.1,
+ });
+
+ // Wrap each line in a mask container
+ split.lines.forEach(line => {
+ const mask = document.createElement('div');
+ mask.className = 'line-mask';
+ line.parentNode.insertBefore(mask, line);
+ mask.appendChild(line);
+ });
+
+ // CSS parks .line-inner at translateY(110%), so tween TO the resting position
+ // (gsap.from would animate from 110% back to 110% — no visible movement)
+ gsap.to(split.lines, {
+ y: '0%',
+ duration: 0.9,
+ ease: 'power4.out',
+ stagger: 0.12,
+ scrollTrigger: {
+ trigger: textEl,
+ start: 'top 80%',
+ }
+ });
+}
+```
+
+---
+
+## Technique 3: Character Cylinder Rotation {#cylinder}
+
+Letters rotate in on a 3D cylinder axis — like a slot machine or odometer rolling into place. Premium, memorable.
+
+```css
+.cylinder-text {
+ perspective: 800px;
+}
+.cylinder-text .char {
+ display: inline-block;
+ transform-origin: center center -60px; /* pivot point BEHIND the letter */
+ transform-style: preserve-3d;
+}
+```
+
+```javascript
+function initCylinderRotation(titleEl) {
+ const split = new SplitText(titleEl, { type: 'chars' });
+
+ gsap.from(split.chars, {
+ rotateX: -90,
+ opacity: 0,
+ duration: 0.6,
+ ease: 'back.out(1.5)',
+ stagger: {
+ each: 0.04,
+ from: 'start'
+ },
+ scrollTrigger: {
+ trigger: titleEl,
+ start: 'top 75%',
+ }
+ });
+}
+```
+
+---
+
+## Technique 4: Word-by-Word Scroll Lighting {#word-lighting}
+
+Words appear to light up one at a time, driven by scroll position. Apple's signature prose technique.
+
+```css
+.scroll-lit-text {
+ /* Start all words dim */
+}
+.scroll-lit-text .word {
+ display: inline-block;
+ color: rgba(255, 255, 255, 0.15); /* dim unlit state */
+ transition: color 0.1s ease;
+}
+.scroll-lit-text .word.lit {
+ color: rgba(255, 255, 255, 1.0); /* bright lit state */
+}
+```
+
+```javascript
+function initWordScrollLighting(containerEl, textEl) {
+ const split = new SplitText(textEl, { type: 'words' });
+ const words = split.words;
+ const totalWords = words.length;
+
+ // Pin the section and light words as user scrolls
+ ScrollTrigger.create({
+ trigger: containerEl,
+ start: 'top top',
+ end: `+=${totalWords * 80}px`, // ~80px per word
+ pin: true,
+ scrub: 0.5,
+ onUpdate: (self) => {
+ const progress = self.progress;
+ const litCount = Math.round(progress * totalWords);
+ words.forEach((word, i) => {
+ word.classList.toggle('lit', i < litCount);
+ });
+ }
+ });
+}
+```
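+
+The `onUpdate` math reduces to one pure function — a sketch that makes the clamping explicit:
+
+```javascript
+// How many words should be lit at a given scrub progress (0–1).
+function litWordCount(progress, totalWords) {
+  const p = Math.min(Math.max(progress, 0), 1);
+  return Math.round(p * totalWords);
+}
+```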
+
+---
+
+## Technique 5: Scramble Text {#scramble}
+
+Characters cycle through random values before resolving to real text. Feels digital, techy, premium.
+
+```html
+<!-- data-text holds the final string the scramble resolves to (placeholder copy) -->
+<h2 class="scramble-title" data-text="YOUR TITLE">YOUR TITLE</h2>
+```
+
+```javascript
+// Custom scramble implementation (no plugin needed)
+function scrambleText(el, finalText, duration = 1.5) {
+ const chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789!@#$%';
+ let startTime = null;
+ const originalText = finalText;
+
+ function step(timestamp) {
+ if (!startTime) startTime = timestamp;
+ const progress = Math.min((timestamp - startTime) / (duration * 1000), 1);
+
+ let result = '';
+ for (let i = 0; i < originalText.length; i++) {
+ if (originalText[i] === ' ') {
+ result += ' ';
+ } else if (i / originalText.length < progress) {
+ // This character has resolved
+ result += originalText[i];
+ } else {
+ // Still scrambling
+ result += chars[Math.floor(Math.random() * chars.length)];
+ }
+ }
+ el.textContent = result;
+
+ if (progress < 1) requestAnimationFrame(step);
+ }
+
+ requestAnimationFrame(step);
+}
+
+// Trigger on scroll
+ScrollTrigger.create({
+ trigger: '.scramble-title',
+ start: 'top 80%',
+ once: true,
+ onEnter: () => {
+ scrambleText(
+ document.querySelector('.scramble-title'),
+ document.querySelector('.scramble-title').dataset.text,
+ 1.8
+ );
+ }
+});
+```
+
+---
+
+## Technique 6: Skew + Elastic Bounce Entry {#skew-bounce}
+
+Elements enter with a skew that corrects itself, combined with a slight overshoot. Feels physical and energetic.
+
+```javascript
+function initSkewBounce(elements) {
+ gsap.from(elements, {
+ y: 80,
+ skewY: 7,
+ opacity: 0,
+ duration: 0.9,
+ ease: 'back.out(1.7)',
+ stagger: 0.1,
+ scrollTrigger: {
+ trigger: elements[0],
+ start: 'top 85%',
+ }
+ });
+}
+```
+
+---
+
+## Technique 7: Theatrical Enter + Auto Exit {#theatrical}
+
+Element automatically animates in when entering the viewport AND animates out when leaving — zero JavaScript needed.
+
+```css
+/* Enter animation */
+@keyframes theatrical-enter {
+ from {
+ opacity: 0;
+ transform: translateY(60px);
+ filter: blur(4px);
+ }
+ to {
+ opacity: 1;
+ transform: translateY(0);
+ filter: blur(0px);
+ }
+}
+
+/* Exit animation */
+@keyframes theatrical-exit {
+ from {
+ opacity: 1;
+ transform: translateY(0);
+ }
+ to {
+ opacity: 0;
+ transform: translateY(-60px);
+ }
+}
+
+.theatrical {
+ /* Enter when element comes into view */
+ animation: theatrical-enter linear both;
+ animation-timeline: view();
+ animation-range: entry 0% entry 40%;
+}
+
+.theatrical-with-exit {
+ animation: theatrical-enter linear both, theatrical-exit linear both;
+ animation-timeline: view(), view();
+ animation-range: entry 0% entry 30%, exit 60% exit 100%;
+}
+```
+
+**Zero JavaScript required.** Just add the `.theatrical` or `.theatrical-with-exit` class. Note: CSS scroll-driven animations (`animation-timeline: view()`) currently ship in Chromium-based browsers — keep a JS or no-animation fallback for the rest.
+
+---
+
+## Technique 8: Offset Diagonal Layout {#offset-diagonal}
+
+Lines of a title sit at offset positions (one top-left, one lower-right) and animate in from opposite directions toward those natural offsets. The staircase composition feels dynamic even before any animation runs.
+
+```css
+.offset-title {
+ position: relative;
+ /* Don't center — let offset do the work */
+}
+.offset-title .line-1 {
+ /* Top-left */
+ display: block;
+ text-align: left;
+ padding-left: 5%;
+ font-size: clamp(48px, 8vw, 100px);
+}
+.offset-title .line-2 {
+ /* Lower-right — drops down and shifts right */
+ display: block;
+ text-align: right;
+ padding-right: 5%;
+ margin-top: 0.4em;
+ font-size: clamp(48px, 8vw, 100px);
+}
+```
+
+```javascript
+function initOffsetDiagonal(titleEl) {
+ const line1 = titleEl.querySelector('.line-1');
+ const line2 = titleEl.querySelector('.line-2');
+
+ gsap.from(line1, {
+ x: '-15vw',
+ opacity: 0,
+ duration: 1.0,
+ ease: 'power4.out',
+ scrollTrigger: { trigger: titleEl, start: 'top 75%' }
+ });
+
+ gsap.from(line2, {
+ x: '15vw',
+ opacity: 0,
+ duration: 1.0,
+ ease: 'power4.out',
+ delay: 0.15,
+ scrollTrigger: { trigger: titleEl, start: 'top 75%' }
+ });
+}
+```
+
+---
+
+## Technique 9: Line Clip Wipe {#line-clip-wipe}
+
+Each line of text reveals from left to right, like a typewriter but with a clean clip-path sweep.
+
+```javascript
+function initLineClipWipe(textEl) {
+ const split = new SplitText(textEl, { type: 'lines' });
+
+ split.lines.forEach((line, i) => {
+ gsap.fromTo(line,
+ { clipPath: 'inset(0 100% 0 0)' },
+ {
+ clipPath: 'inset(0 0% 0 0)',
+ duration: 0.8,
+ ease: 'power3.out',
+ delay: i * 0.12, // stagger between lines
+ scrollTrigger: {
+ trigger: textEl,
+ start: 'top 80%',
+ }
+ }
+ );
+ });
+}
+```
+
+---
+
+## Technique 10: Scroll-Speed Reactive Marquee {#marquee}
+
+Infinite scrolling text. Speed scales with scroll velocity — fast scroll = fast marquee. Slow scroll = slow/paused.
+
+```css
+.marquee-wrapper {
+ overflow: hidden;
+ white-space: nowrap;
+}
+.marquee-track {
+ display: inline-flex;
+ gap: 4rem;
+ /* Two copies side by side for seamless loop */
+}
+.marquee-track .marquee-item {
+ display: inline-block;
+ font-size: clamp(2rem, 5vw, 5rem);
+ font-weight: 700;
+ letter-spacing: -0.02em;
+}
+```
+
+```javascript
+function initReactiveMarquee(wrapperEl) {
+ const track = wrapperEl.querySelector('.marquee-track');
+ let currentX = 0;
+ let velocity = 0;
+ let baseSpeed = 0.8; // px per frame base speed
+ let lastScrollY = window.scrollY;
+ let lastTime = performance.now();
+
+ // Track scroll velocity
+ window.addEventListener('scroll', () => {
+ const now = performance.now();
+ const dt = now - lastTime;
+ const dy = window.scrollY - lastScrollY;
+ velocity = Math.abs(dy / dt) * 30; // scale to marquee speed
+ lastScrollY = window.scrollY;
+ lastTime = now;
+ }, { passive: true });
+
+ function animate() {
+ velocity = Math.max(0, velocity - 0.3); // decay
+ const speed = baseSpeed + velocity;
+ currentX -= speed;
+
+ // Wrap when the first copy has fully scrolled past
+ // (assumes two identical copies of equal-width items)
+ const trackWidth = track.children[0].offsetWidth * (track.children.length / 2);
+ if (Math.abs(currentX) >= trackWidth) {
+ currentX += trackWidth;
+ }
+
+ track.style.transform = `translateX(${currentX}px)`;
+ requestAnimationFrame(animate);
+ }
+ animate();
+}
+```
+
+---
+
+## Technique 11: Variable Font Wave {#variable-font}
+
+If the font supports variable axes (weight, width), animate them per-character for a wave/ripple effect.
+
+```javascript
+function initVariableFontWave(titleEl) {
+ const split = new SplitText(titleEl, { type: 'chars' });
+
+ // Wave through characters using weight axis
+ gsap.to(split.chars, {
+ fontVariationSettings: '"wght" 800',
+ duration: 0.4,
+ ease: 'power2.inOut',
+ stagger: {
+ each: 0.06,
+ yoyo: true,
+ repeat: -1, // infinite loop
+ }
+ });
+}
+```
+
+**Note:** Requires a variable font. Free options: Inter, Fraunces, Recursive. Load from Google Fonts with an axis range, e.g. `css2?family=Inter:wght@100..900&display=swap`.
+
+---
+
+## Technique 12: Bleed Typography {#bleed-type}
+
+Oversized headline that intentionally exceeds section boundaries. Creates drama, depth, and visual tension.
+
+```css
+.bleed-title {
+ font-size: clamp(80px, 18vw, 220px);
+ font-weight: 900;
+ line-height: 0.9;
+ letter-spacing: -0.04em;
+
+ /* Allow bleeding outside section */
+ position: relative;
+ z-index: 10;
+ pointer-events: none;
+
+ /* Negative margins to bleed out */
+ margin-left: -0.05em;
+ margin-right: -0.05em;
+
+ /* Optionally: half above, half below section boundary */
+ transform: translateY(30%);
+}
+
+/* Parent section allows overflow */
+.bleed-section {
+ overflow: visible;
+ position: relative;
+ z-index: 2;
+}
+/* Next section needs to be higher to "trap" the bleed */
+.bleed-section + .next-section {
+ position: relative;
+ z-index: 3;
+}
+```
+
+```javascript
+// Parallax on the bleed title — moves at slightly different rate
+// to emphasize that it belongs to a different depth than content
+gsap.to('.bleed-title', {
+ y: '-12%',
+ ease: 'none',
+ scrollTrigger: {
+ trigger: '.bleed-section',
+ start: 'top bottom',
+ end: 'bottom top',
+ scrub: true,
+ }
+});
+```
+
+---
+
+## Technique 13: Ghost Outlined Background Text {#ghost-text}
+
+Massive atmospheric text sitting BEHIND the main product using only a thin stroke
+with transparent fill. Supports the scene without competing with the content.
+
+```css
+.ghost-bg-text {
+ color: transparent;
+ -webkit-text-stroke: 1px rgba(255, 255, 255, 0.15); /* white stroke for dark sites */
+ /* brand-accent variant: -webkit-text-stroke: 1px rgba(255, 106, 26, 0.18); */
+
+ font-size: clamp(5rem, 15vw, 18rem);
+ font-weight: 900;
+ line-height: 0.85;
+ letter-spacing: -0.04em;
+ white-space: nowrap;
+
+ position: absolute; /* z-index only applies to positioned elements */
+ z-index: 2; /* must be lower than the hero product (depth-3 = z-index 3+) */
+ pointer-events: none;
+ user-select: none;
+}
+```
+
+```javascript
+// Entrance: lines slide up from a masked overflow:hidden parent
+function initGhostTextEntrance(lines) {
+ gsap.set(lines, { y: '110%' });
+ gsap.to(lines, {
+ y: '0%',
+ stagger: 0.1,
+ duration: 1.1,
+ ease: 'power4.out',
+ delay: 0.2,
+ });
+}
+
+// Exit: lines drift apart as hero scrolls out
+function addGhostTextExit(scrubTimeline, line1, line2) {
+ scrubTimeline
+ .to(line1, { x: '-12vw', opacity: 0.06, duration: 0.3 }, 0)
+ .to(line2, { x: '12vw', opacity: 0.06, duration: 0.3 }, 0)
+ .to(line1, { x: '-40vw', opacity: 0, duration: 0.25 }, 0.4)
+ .to(line2, { x: '40vw', opacity: 0, duration: 0.25 }, 0.4);
+}
+```
+
+Stroke opacity guide:
+- `0.08–0.12` → barely-there atmosphere
+- `0.15–0.22` → readable on inspection, still subtle
+- `0.25–0.35` → prominently visible — only if it IS the visual focus
+
+Rules:
+1. Always `aria-hidden="true"` — never the real heading
+2. A real heading (e.g. `<h1>`) must exist elsewhere for SEO/screen readers
+3. Only works on dark backgrounds — thin strokes vanish on light ones
+4. Maximum 2 lines — 3+ becomes noise
+5. Best with ultra-heavy weights (800–900) and tight letter-spacing
+
+---
+
+## Combining Techniques
+
+The most premium results come from layering multiple text techniques in the same section:
+
+```javascript
+// Example: Full hero text sequence
+function initHeroTextSequence() {
+ const tl = gsap.timeline({
+ scrollTrigger: {
+ trigger: '.hero-scene',
+ start: 'top top',
+ end: '+=300%',
+ pin: true,
+ scrub: 1,
+ }
+ });
+
+ // 1. Bleed title already visible via CSS
+ // 2. Subtitle curtain reveal
+ tl.from('.hero-sub .line-inner', {
+ y: '110%', duration: 0.2, stagger: 0.05
+ }, 0)
+ // 3. CTA skew bounce
+ .from('.hero-cta', {
+ y: 40, skewY: 5, opacity: 0, duration: 0.15, ease: 'back.out'
+ }, 0.15)
+ // 4. On scroll-through: title exits via split converge reverse
+ .to('.hero-title .word-left', {
+ x: '-80vw', opacity: 0, duration: 0.25, stagger: 0.03
+ }, 0.7)
+ .to('.hero-title .word-right', {
+ x: '80vw', opacity: 0, duration: 0.25, stagger: -0.03
+ }, 0.7);
+}
+```
diff --git a/engineering-team/epic-design/scripts/inspect-assets.py b/engineering-team/epic-design/scripts/inspect-assets.py
new file mode 100644
index 0000000..337f2d1
--- /dev/null
+++ b/engineering-team/epic-design/scripts/inspect-assets.py
@@ -0,0 +1,254 @@
+#!/usr/bin/env python3
+"""
+2.5D Asset Inspector
+Usage: python scripts/inspect-assets.py image1.png image2.jpg ...
+ or: python scripts/inspect-assets.py path/to/folder/
+
+Checks each image and reports:
+- Format and mode
+- Whether it has a real transparent background
+- Background type if not transparent (dark, light, complex)
+- Recommended depth level based on image characteristics
+- Whether the background is likely a problem (product shot vs scene/artwork)
+
+The AI reads this output and uses it to inform the user.
+The script NEVER modifies images — inspect only.
+"""
+
+import sys
+import os
+
+try:
+ from PIL import Image
+except ImportError:
+ print("PIL not found. Install with: pip install Pillow")
+ sys.exit(1)
+
+
+def analyse_image(path):
+ result = {
+ "path": path,
+ "filename": os.path.basename(path),
+ "status": None,
+ "format": None,
+ "mode": None,
+ "size": None,
+ "bg_type": None,
+ "bg_colour": None,
+ "likely_needs_removal": None,
+ "notes": [],
+ }
+
+ try:
+ img = Image.open(path)
+ result["format"] = img.format or os.path.splitext(path)[1].upper().strip(".")
+ result["mode"] = img.mode
+ result["size"] = img.size
+ w, h = img.size
+
+ except Exception as e:
+ result["status"] = "ERROR"
+ result["notes"].append(f"Could not open: {e}")
+ return result
+
+ # --- Alpha / transparency check ---
+ if img.mode == "RGBA":
+ extrema = img.getextrema()
+ alpha_min = extrema[3][0] # 0 = has real transparency, 255 = fully opaque
+ if alpha_min == 0:
+ result["status"] = "CLEAN"
+ result["bg_type"] = "transparent"
+ result["notes"].append("Real alpha channel with transparent pixels — clean cutout")
+ result["likely_needs_removal"] = False
+ return result
+ else:
+ result["notes"].append("RGBA mode but alpha is fully opaque — background was never removed")
+ img = img.convert("RGB") # treat as solid for analysis below
+
+ if img.mode not in ("RGB", "L"):
+ img = img.convert("RGB")
+
+ # --- Sample corners and edges to detect background colour ---
+ pixels = img.load()
+ sample_points = [
+ (0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1), # corners
+ (w // 2, 0), (w // 2, h - 1), # top/bottom center
+ (0, h // 2), (w - 1, h // 2), # left/right center
+ ]
+
+ samples = []
+ for x, y in sample_points:
+ try:
+ px = pixels[x, y]
+ if isinstance(px, int):
+ px = (px, px, px)
+ samples.append(px[:3])
+ except Exception:
+ pass
+
+ if not samples:
+ result["status"] = "UNKNOWN"
+ result["notes"].append("Could not sample pixels")
+ return result
+
+ # --- Classify background ---
+ avg_r = sum(s[0] for s in samples) / len(samples)
+ avg_g = sum(s[1] for s in samples) / len(samples)
+ avg_b = sum(s[2] for s in samples) / len(samples)
+ avg_brightness = (avg_r + avg_g + avg_b) / 3
+
+ # Check colour consistency (low variance = solid bg, high variance = scene/complex bg)
+ max_r = max(s[0] for s in samples)
+ max_g = max(s[1] for s in samples)
+ max_b = max(s[2] for s in samples)
+ min_r = min(s[0] for s in samples)
+ min_g = min(s[1] for s in samples)
+ min_b = min(s[2] for s in samples)
+ variance = max(max_r - min_r, max_g - min_g, max_b - min_b)
+
+ result["bg_colour"] = (int(avg_r), int(avg_g), int(avg_b))
+
+ if variance > 80:
+ result["status"] = "COMPLEX_BG"
+ result["bg_type"] = "complex or scene"
+ result["notes"].append(
+ "Background varies significantly across edges — likely a scene, "
+ "photograph, or artwork background rather than a solid colour"
+ )
+ result["likely_needs_removal"] = False # complex bg = probably intentional content
+ result["notes"].append(
+ "JUDGMENT: Complex backgrounds usually mean this image IS the content "
+ "(site screenshot, artwork, section bg). Background likely should be KEPT."
+ )
+
+ elif avg_brightness < 40:
+ result["status"] = "DARK_BG"
+ result["bg_type"] = "solid dark/black"
+ result["notes"].append(
+ f"Solid dark background detected — average edge brightness: {avg_brightness:.0f}/255"
+ )
+ result["likely_needs_removal"] = True
+ result["notes"].append(
+ "JUDGMENT: Dark studio backgrounds on product shots typically need removal. "
+ "BUT if this is a screenshot, artwork, or intentionally dark composition, keep it."
+ )
+
+ elif avg_brightness > 210:
+ result["status"] = "LIGHT_BG"
+ result["bg_type"] = "solid white/light"
+ result["notes"].append(
+ f"Solid light background detected — average edge brightness: {avg_brightness:.0f}/255"
+ )
+ result["likely_needs_removal"] = True
+ result["notes"].append(
+ "JUDGMENT: White studio backgrounds on product shots typically need removal. "
+ "BUT if this is a screenshot, UI mockup, or document, keep it."
+ )
+
+ else:
+ result["status"] = "MIDTONE_BG"
+ result["bg_type"] = "solid mid-tone colour"
+ result["notes"].append(
+ f"Solid mid-tone background detected — avg colour: RGB{result['bg_colour']}"
+ )
+ result["likely_needs_removal"] = None # ambiguous — let AI judge
+ result["notes"].append(
+ "JUDGMENT: Ambiguous — could be a branded background (keep) or a "
+ "studio colour backdrop (remove). AI must judge based on context."
+ )
+
+ # --- JPEG format warning ---
+ if result["format"] in ("JPEG", "JPG"):
+ result["notes"].append(
+ "JPEG format — cannot store transparency. "
+ "If bg removal is needed, user must provide a PNG version or approve CSS workaround."
+ )
+
+ # --- Size note ---
+ if w > 2000 or h > 2000:
+ result["notes"].append(
+ f"Large image ({w}x{h}px) — resize before embedding. "
+ "See references/asset-pipeline.md Step 3 for depth-appropriate targets."
+ )
+
+ return result
+
+
+def print_report(results):
+ print("\n" + "═" * 55)
+ print(" 2.5D Asset Inspector Report")
+ print("═" * 55)
+
+ for r in results:
+ print(f"\n📁 {r['filename']}")
+ print(f" Format : {r['format']} | Mode: {r['mode']} | Size: {r['size']}")
+
+ status_icons = {
+ "CLEAN": "✅",
+ "DARK_BG": "⚠️ ",
+ "LIGHT_BG": "⚠️ ",
+ "COMPLEX_BG": "🔵",
+ "MIDTONE_BG": "❓",
+ "UNKNOWN": "❓",
+ "ERROR": "❌",
+ }
+ icon = status_icons.get(r["status"], "❓")
+ print(f" Status : {icon} {r['status']}")
+
+ if r["bg_type"]:
+ print(f" Bg type: {r['bg_type']}")
+
+ if r["likely_needs_removal"] is True:
+ print(" Removal: Likely needed (product/object shot)")
+ elif r["likely_needs_removal"] is False:
+ print(" Removal: Likely NOT needed (scene/artwork/content image)")
+ else:
+ print(" Removal: Ambiguous — AI must judge from context")
+
+ for note in r["notes"]:
+ print(f" → {note}")
+
+ print("\n" + "═" * 55)
+ clean = sum(1 for r in results if r["status"] == "CLEAN")
+ flagged = sum(1 for r in results if r["status"] in ("DARK_BG", "LIGHT_BG", "MIDTONE_BG"))
+ complex_bg = sum(1 for r in results if r["status"] == "COMPLEX_BG")
+ errors = sum(1 for r in results if r["status"] == "ERROR")
+
+ print(f" Clean: {clean} | Flagged: {flagged} | Complex/Scene: {complex_bg} | Errors: {errors}")
+ print("═" * 55)
+ print("\nNext step: Read JUDGMENT notes above and inform the user.")
+ print("See references/asset-pipeline.md for the exact notification format.\n")
+
+
+def collect_paths(args):
+ paths = []
+ for arg in args:
+ if os.path.isdir(arg):
+ for f in os.listdir(arg):
+ if f.lower().endswith((".png", ".jpg", ".jpeg", ".webp", ".avif")):
+ paths.append(os.path.join(arg, f))
+ elif os.path.isfile(arg):
+ paths.append(arg)
+ else:
+ print(f"⚠️ Not found: {arg}")
+ return paths
+
+
+if __name__ == "__main__":
+ if len(sys.argv) < 2 or sys.argv[1] in ('-h', '--help'):
+ print("\nUsage:")
+ print(" python scripts/inspect-assets.py image.png")
+ print(" python scripts/inspect-assets.py image1.jpg image2.png")
+ print(" python scripts/inspect-assets.py path/to/folder/\n")
+ sys.exit(1 if len(sys.argv) < 2 else 0)
+
+ paths = collect_paths(sys.argv[1:])
+ if not paths:
+ print("No valid image files found.")
+ sys.exit(1)
+
+ results = [analyse_image(p) for p in paths]
+ print_report(results)
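The status labels the report prints (CLEAN, DARK_BG, LIGHT_BG, MIDTONE_BG, COMPLEX_BG) imply a border-sampling heuristic: look at the pixels along the image edge, and classify by their mean luminance and variance. A minimal sketch of that idea, with an assumed `classify_background` helper and thresholds that are illustrative only, not the actual script's internals:

```python
# Hedged sketch of the border-sampling heuristic implied by the inspector's
# status values. Function name and thresholds are illustrative.

def classify_background(border_pixels, variance_threshold=2500):
    """Classify a background from a list of (r, g, b) border pixels."""
    lums = [0.2126 * r + 0.7152 * g + 0.0722 * b for r, g, b in border_pixels]
    mean = sum(lums) / len(lums)
    var = sum((l - mean) ** 2 for l in lums) / len(lums)
    if var > variance_threshold:
        return "COMPLEX_BG"   # busy border: likely a scene or photo background
    if mean < 60:
        return "DARK_BG"      # uniform dark backdrop
    if mean > 200:
        return "LIGHT_BG"     # uniform light/studio backdrop
    return "MIDTONE_BG"

print(classify_background([(250, 250, 250)] * 16))  # → LIGHT_BG
```

A uniform border suggests a removable studio backdrop; a high-variance border suggests a scene image whose background should be kept.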
diff --git a/engineering-team/epic-design/scripts/validate-layers.js b/engineering-team/epic-design/scripts/validate-layers.js
new file mode 100644
index 0000000..4e69c49
--- /dev/null
+++ b/engineering-team/epic-design/scripts/validate-layers.js
@@ -0,0 +1,165 @@
+#!/usr/bin/env node
+/**
+ * 2.5D Layer Validator
+ * Usage: node scripts/validate-layers.js path/to/your/index.html
+ *
+ * Checks:
+ * 1. Every animated element has a data-depth attribute
+ * 2. Decorative elements have aria-hidden="true"
+ * 3. prefers-reduced-motion is implemented in CSS
+ * 4. Product images have alt text
+ * 5. SplitText elements have aria-label
+ * 6. No more than 80 animated elements (performance)
+ * 7. Will-change is not applied globally
+ */
+
+const fs = require('fs');
+const path = require('path');
+
+const filePath = process.argv[2];
+
+if (!filePath) {
+ console.error('\n❌ Usage: node validate-layers.js path/to/index.html\n');
+ process.exit(1);
+}
+
+const html = fs.readFileSync(path.resolve(filePath), 'utf8');
+
+let passed = 0;
+let failed = 0;
+const results = [];
+
+function check(label, condition, suggestion) {
+ if (condition) {
+ passed++;
+ results.push({ status: '✅', label });
+ } else {
+ failed++;
+ results.push({ status: '❌', label, suggestion });
+ }
+}
+
+function warn(label, condition, suggestion) {
+ if (!condition) {
+ results.push({ status: '⚠️ ', label, suggestion });
+ }
+}
+
+// --- CHECKS ---
+
+// 1. Scene elements present
+check(
+ 'Scene elements found (.scene)',
+ html.includes('class="scene') || html.includes("class='scene"),
+ 'Wrap each major section in an element with class="scene" for the depth system to work.'
+);
+
+// 2. Depth layers present
+const depthMatches = html.match(/data-depth=["']\d["']/g) || [];
+check(
+ `Depth attributes found (${depthMatches.length} elements)`,
+ depthMatches.length >= 3,
+ 'Each scene needs at least 3 elements with data-depth="0" through data-depth="5".'
+);
+
+// 3. prefers-reduced-motion in linked CSS
+const hasReducedMotionInline = html.includes('prefers-reduced-motion');
+check(
+ 'prefers-reduced-motion implemented',
+ hasReducedMotionInline || html.includes('hero-section.css'),
+ 'Add @media (prefers-reduced-motion: reduce) { } block. See references/accessibility.md.'
+);
+
+// 4. Decorative elements have aria-hidden
+const decorativeElements = (html.match(/class="[^"]*(?:depth-0|depth-1|depth-5|glow-blob|particle|deco)[^"]*"/g) || []).length;
+const ariaHiddenCount = (html.match(/aria-hidden="true"/g) || []).length;
+check(
+ `Decorative elements have aria-hidden (${decorativeElements} decorative, ${ariaHiddenCount} hidden)`,
+ decorativeElements === 0 || ariaHiddenCount >= 1,
+ 'Add aria-hidden="true" to all decorative layers (depth-0, depth-1, particles, glows).'
+);
+
+// 5. Images have alt text
+const imgTags = html.match(/<img[^>]*>/g) || [];
+const imgsWithoutAlt = imgTags.filter(tag => !tag.includes('alt=')).length;
+check(
+ `All images have alt attributes (${imgTags.length} images found)`,
+ imgsWithoutAlt === 0,
+ `${imgsWithoutAlt} image(s) missing alt attribute. Decorative images use alt="", meaningful images need descriptive alt text.`
+);
+
+// 6. Skip link present
+check(
+ 'Skip-to-content link present',
+ html.includes('skip-link') || html.includes('Skip to'),
+ 'Add a "Skip to main content" link (class="skip-link") as the first element in <body>.'
+);
+
+// 7. GSAP script loaded
+check(
+ 'GSAP script included',
+ html.includes('gsap') || html.includes('gsap.min.js'),
+ 'Include GSAP via a <script> tag from a CDN (gsap.min.js).'
+);
+
+// 8. ScrollTrigger plugin loaded
+warn(
+ 'ScrollTrigger plugin loaded',
+ html.includes('ScrollTrigger'),
+ 'Add the ScrollTrigger plugin via a <script> tag (ScrollTrigger.min.js) for scroll animations.'
+);
+
+// 9. Performance: too many animated elements
+const animatedElements = (html.match(/data-animate=/g) || []).length + depthMatches.length;
+check(
+ `Animated element count acceptable (${animatedElements} total)`,
+ animatedElements <= 80,
+ `${animatedElements} animated elements found. Target is under 80 for smooth 60fps performance.`
+);
+
+// 10. Main landmark present
+check(
+ '<main> landmark present',
+ html.includes('<main'),
+ 'Wrap page content in a <main> element for accessibility and as the skip-link target.'
+);
+
+// 11. Heading hierarchy
+const h1Count = (html.match(/<h1[\s>]/g) || []).length;
+check(
+ `Single <h1> present (found ${h1Count})`,
+ h1Count === 1,
+ h1Count === 0
+ ? 'Add one <h1> element as the main page heading.'
+ : `Multiple <h1> elements found (${h1Count}). Each page should have exactly one <h1>.`
+);
+
+// 12. lang attribute on html
+check(
+ '<html lang> attribute present',
+ html.includes('lang='),
+ 'Add lang="en" (or your language) to the <html> element.'
+);
+
+// --- REPORT ---
+
+console.log('\n📋 2.5D Layer Validator Report');
+console.log('═══════════════════════════════════════');
+console.log(`File: ${filePath}\n`);
+
+results.forEach(r => {
+ console.log(`${r.status} ${r.label}`);
+ if (r.suggestion) {
+ console.log(` → ${r.suggestion}`);
+ }
+});
+
+console.log('\n═══════════════════════════════════════');
+console.log(`Passed: ${passed} | Failed: ${failed}`);
+
+if (failed === 0) {
+ console.log('\n🎉 All checks passed! Your 2.5D site is ready.\n');
+} else {
+ console.log(`\n🔧 Fix the ${failed} issue(s) above before shipping.\n`);
+ process.exit(1);
+}
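The validator is built around a small pass/fail accumulator (`check`/`warn`): each call records a result row and bumps a counter, and the exit code depends on the failure count. The same pattern, restated as a minimal Python sketch with illustrative names and sample HTML:

```python
# Pass/fail accumulator mirroring check() in validate-layers.js.
# Names and the sample HTML are illustrative only.
results = []
counts = {"passed": 0, "failed": 0}

def check(label, condition, suggestion=""):
    key = "passed" if condition else "failed"
    counts[key] += 1
    results.append(("✅" if condition else "❌", label, suggestion))

html = '<main><h1>Demo</h1></main>'
check("<main> landmark present", "<main" in html)
check("single <h1>", html.count("<h1") == 1)
print(counts)  # → {'passed': 2, 'failed': 0}
```

Collecting results instead of printing immediately lets the report group suggestions under their failed checks at the end.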
diff --git a/ra-qm-team/capa-officer/scripts/root_cause_analyzer.py b/ra-qm-team/capa-officer/scripts/root_cause_analyzer.py
new file mode 100644
index 0000000..d644546
--- /dev/null
+++ b/ra-qm-team/capa-officer/scripts/root_cause_analyzer.py
@@ -0,0 +1,486 @@
+#!/usr/bin/env python3
+"""
+Root Cause Analyzer - Structured root cause analysis for CAPA investigations.
+
+Supports multiple analysis methodologies:
+- 5-Why Analysis
+- Fishbone (Ishikawa) Diagram
+- Fault Tree Analysis
+- Kepner-Tregoe Problem Analysis
+
+Generates structured root cause reports and CAPA recommendations.
+
+Usage:
+ python root_cause_analyzer.py --method 5why --problem "High defect rate in assembly line"
+ python root_cause_analyzer.py --interactive
+ python root_cause_analyzer.py --data investigation.json --output json
+"""
+
+import argparse
+import json
+import sys
+from dataclasses import dataclass, field, asdict
+from typing import List, Dict, Optional
+from enum import Enum
+from datetime import datetime
+
+
+class AnalysisMethod(Enum):
+ FIVE_WHY = "5-Why"
+ FISHBONE = "Fishbone"
+ FAULT_TREE = "Fault Tree"
+ KEPNER_TREGOE = "Kepner-Tregoe"
+
+
+class RootCauseCategory(Enum):
+ MAN = "Man (People)"
+ MACHINE = "Machine (Equipment)"
+ MATERIAL = "Material"
+ METHOD = "Method (Process)"
+ MEASUREMENT = "Measurement"
+ ENVIRONMENT = "Environment"
+ MANAGEMENT = "Management (Policy)"
+ SOFTWARE = "Software/Data"
+
+
+class SeverityLevel(Enum):
+ LOW = "Low"
+ MEDIUM = "Medium"
+ HIGH = "High"
+ CRITICAL = "Critical"
+
+
+@dataclass
+class WhyStep:
+ """A single step in 5-Why analysis."""
+ level: int
+ question: str
+ answer: str
+ evidence: str = ""
+ verified: bool = False
+
+
+@dataclass
+class FishboneCause:
+ """A cause in fishbone analysis."""
+ category: str
+ cause: str
+ sub_causes: List[str] = field(default_factory=list)
+ is_root: bool = False
+ evidence: str = ""
+
+
+@dataclass
+class FaultEvent:
+ """An event in fault tree analysis."""
+ event_id: str
+ description: str
+ is_basic: bool = True # Basic events have no children
+ gate_type: str = "OR" # OR, AND
+ children: List[str] = field(default_factory=list)
+ probability: Optional[float] = None
+
+
+@dataclass
+class RootCauseFinding:
+ """Identified root cause with evidence."""
+ cause_id: str
+ description: str
+ category: str
+ evidence: List[str] = field(default_factory=list)
+ contributing_factors: List[str] = field(default_factory=list)
+ systemic: bool = False # Whether it's a systemic vs. local issue
+
+
+@dataclass
+class CAPARecommendation:
+ """Corrective or preventive action recommendation."""
+ action_id: str
+ action_type: str # "Corrective" or "Preventive"
+ description: str
+ addresses_cause: str # cause_id
+ priority: str
+ estimated_effort: str
+ responsible_role: str
+ effectiveness_criteria: List[str] = field(default_factory=list)
+
+
+@dataclass
+class RootCauseAnalysis:
+ """Complete root cause analysis result."""
+ investigation_id: str
+ problem_statement: str
+ analysis_method: str
+ root_causes: List[RootCauseFinding]
+ recommendations: List[CAPARecommendation]
+ analysis_details: Dict
+ confidence_level: float
+ investigator_notes: List[str] = field(default_factory=list)
+
+
+class RootCauseAnalyzer:
+ """Performs structured root cause analysis."""
+
+ def __init__(self):
+ self.analysis_steps = []
+ self.findings = []
+
+ def analyze_5why(self, problem: str, whys: Optional[List[Dict]] = None) -> Dict:
+ """Perform 5-Why analysis."""
+ steps = []
+ if whys:
+ for i, w in enumerate(whys, 1):
+ steps.append(WhyStep(
+ level=i,
+ question=w.get("question", f"Why did this occur? (Level {i})"),
+ answer=w.get("answer", ""),
+ evidence=w.get("evidence", ""),
+ verified=w.get("verified", False)
+ ))
+
+ # Analyze depth and quality
+ depth = len(steps)
+ has_root = any(
+ s.answer and ("system" in s.answer.lower() or "policy" in s.answer.lower() or "process" in s.answer.lower())
+ for s in steps
+ )
+
+ return {
+ "method": "5-Why Analysis",
+ "steps": [asdict(s) for s in steps],
+ "depth": depth,
+ "reached_systemic_cause": has_root,
+ "quality_score": min(100, depth * 20 + (20 if has_root else 0))
+ }
+
+ def analyze_fishbone(self, problem: str, causes: Optional[List[Dict]] = None) -> Dict:
+ """Perform fishbone (Ishikawa) analysis."""
+ categories = {}
+ fishbone_causes = []
+
+ if causes:
+ for c in causes:
+ cat = c.get("category", "Method")
+ cause = c.get("cause", "")
+ sub = c.get("sub_causes", [])
+
+ if cat not in categories:
+ categories[cat] = []
+ categories[cat].append({
+ "cause": cause,
+ "sub_causes": sub,
+ "is_root": c.get("is_root", False),
+ "evidence": c.get("evidence", "")
+ })
+ fishbone_causes.append(FishboneCause(
+ category=cat,
+ cause=cause,
+ sub_causes=sub,
+ is_root=c.get("is_root", False),
+ evidence=c.get("evidence", "")
+ ))
+
+ root_causes = [fc for fc in fishbone_causes if fc.is_root]
+
+ return {
+ "method": "Fishbone (Ishikawa) Analysis",
+ "problem": problem,
+ "categories": categories,
+ "total_causes": len(fishbone_causes),
+ "root_causes_identified": len(root_causes),
+ "categories_covered": list(categories.keys()),
+ "recommended_categories": [c.value for c in RootCauseCategory],
+ "missing_categories": [c.value for c in RootCauseCategory if c.value.split(" (")[0] not in categories]
+ }
+
+ def analyze_fault_tree(self, top_event: str, events: Optional[List[Dict]] = None) -> Dict:
+ """Perform fault tree analysis."""
+ fault_events = {}
+ if events:
+ for e in events:
+ fault_events[e["event_id"]] = FaultEvent(
+ event_id=e["event_id"],
+ description=e.get("description", ""),
+ is_basic=e.get("is_basic", True),
+ gate_type=e.get("gate_type", "OR"),
+ children=e.get("children", []),
+ probability=e.get("probability")
+ )
+
+ # Find basic events (root causes)
+ basic_events = {eid: ev for eid, ev in fault_events.items() if ev.is_basic}
+ intermediate_events = {eid: ev for eid, ev in fault_events.items() if not ev.is_basic}
+
+ return {
+ "method": "Fault Tree Analysis",
+ "top_event": top_event,
+ "total_events": len(fault_events),
+ "basic_events": len(basic_events),
+ "intermediate_events": len(intermediate_events),
+ "basic_event_details": [asdict(e) for e in basic_events.values()],
+ "cut_sets": self._find_cut_sets(fault_events)
+ }
+
+ def _find_cut_sets(self, events: Dict[str, FaultEvent]) -> List[List[str]]:
+ """Find minimal cut sets (combinations of basic events that cause top event)."""
+ # Simplified cut set analysis
+ cut_sets = []
+ for eid, event in events.items():
+ if not event.is_basic and event.gate_type == "AND":
+ cut_sets.append(event.children)
+ return cut_sets[:5] # Return top 5
+
+ def generate_recommendations(
+ self,
+ root_causes: List[RootCauseFinding],
+ problem: str
+ ) -> List[CAPARecommendation]:
+ """Generate CAPA recommendations based on root causes."""
+ recommendations = []
+
+ for i, cause in enumerate(root_causes, 1):
+ # Corrective action (fix the immediate cause)
+ recommendations.append(CAPARecommendation(
+ action_id=f"CA-{i:03d}",
+ action_type="Corrective",
+ description=f"Address immediate cause: {cause.description}",
+ addresses_cause=cause.cause_id,
+ priority=self._assess_priority(cause),
+ estimated_effort=self._estimate_effort(cause),
+ responsible_role=self._suggest_responsible(cause),
+ effectiveness_criteria=[
+ f"Elimination of {cause.description} confirmed by audit",
+ "No recurrence within 90 days",
+ "Metrics return to acceptable range"
+ ]
+ ))
+
+ # Preventive action (prevent recurrence in other areas)
+ if cause.systemic:
+ recommendations.append(CAPARecommendation(
+ action_id=f"PA-{i:03d}",
+ action_type="Preventive",
+ description=f"Systemic prevention: Update process/procedure to prevent similar issues",
+ addresses_cause=cause.cause_id,
+ priority="Medium",
+ estimated_effort="2-4 weeks",
+ responsible_role="Quality Manager",
+ effectiveness_criteria=[
+ "Updated procedure approved and implemented",
+ "Training completed for affected personnel",
+ "No similar issues in related processes within 6 months"
+ ]
+ ))
+
+ return recommendations
+
+ def _assess_priority(self, cause: RootCauseFinding) -> str:
+ if cause.systemic or "safety" in cause.description.lower():
+ return "High"
+ elif "quality" in cause.description.lower():
+ return "Medium"
+ return "Low"
+
+ def _estimate_effort(self, cause: RootCauseFinding) -> str:
+ if cause.systemic:
+ return "4-8 weeks"
+ elif len(cause.contributing_factors) > 3:
+ return "2-4 weeks"
+ return "1-2 weeks"
+
+ def _suggest_responsible(self, cause: RootCauseFinding) -> str:
+ category_roles = {
+ "Man": "Training Manager",
+ "Machine": "Engineering Manager",
+ "Material": "Supply Chain Manager",
+ "Method": "Process Owner",
+ "Measurement": "Quality Engineer",
+ "Environment": "Facilities Manager",
+ "Management": "Department Head",
+ "Software": "IT/Software Manager"
+ }
+ cat_key = cause.category.split(" (")[0] if "(" in cause.category else cause.category
+ return category_roles.get(cat_key, "Quality Manager")
+
+ def full_analysis(
+ self,
+ problem: str,
+ method: str = "5-Why",
+ analysis_data: Optional[Dict] = None
+ ) -> RootCauseAnalysis:
+ """Perform complete root cause analysis."""
+ investigation_id = f"RCA-{datetime.now().strftime('%Y%m%d-%H%M')}"
+ analysis_details = {}
+ root_causes = []
+
+ if method == "5-Why" and analysis_data:
+ analysis_details = self.analyze_5why(problem, analysis_data.get("whys", []))
+ # Extract root cause from deepest why
+ steps = analysis_details.get("steps", [])
+ if steps:
+ last_step = steps[-1]
+ root_causes.append(RootCauseFinding(
+ cause_id="RC-001",
+ description=last_step.get("answer", "Unknown"),
+ category="Systemic",
+ evidence=[s.get("evidence", "") for s in steps if s.get("evidence")],
+ systemic=analysis_details.get("reached_systemic_cause", False)
+ ))
+
+ elif method == "Fishbone" and analysis_data:
+ analysis_details = self.analyze_fishbone(problem, analysis_data.get("causes", []))
+ for i, cat in enumerate(analysis_data.get("causes", [])):
+ if cat.get("is_root"):
+ root_causes.append(RootCauseFinding(
+ cause_id=f"RC-{i+1:03d}",
+ description=cat.get("cause", ""),
+ category=cat.get("category", ""),
+ evidence=[cat.get("evidence", "")] if cat.get("evidence") else [],
+ contributing_factors=cat.get("sub_causes", []),
+ systemic=True
+ ))
+
+ recommendations = self.generate_recommendations(root_causes, problem)
+
+ # Confidence based on evidence and method
+ confidence = 0.7
+ if root_causes and any(rc.evidence for rc in root_causes):
+ confidence = 0.85
+ if len(root_causes) > 1:
+ confidence = min(0.95, confidence + 0.05)
+
+ return RootCauseAnalysis(
+ investigation_id=investigation_id,
+ problem_statement=problem,
+ analysis_method=method,
+ root_causes=root_causes,
+ recommendations=recommendations,
+ analysis_details=analysis_details,
+ confidence_level=confidence
+ )
+
+
+def format_rca_text(rca: RootCauseAnalysis) -> str:
+ """Format RCA report as text."""
+ lines = [
+ "=" * 70,
+ "ROOT CAUSE ANALYSIS REPORT",
+ "=" * 70,
+ f"Investigation ID: {rca.investigation_id}",
+ f"Analysis Method: {rca.analysis_method}",
+ f"Confidence Level: {rca.confidence_level:.0%}",
+ "",
+ "PROBLEM STATEMENT",
+ "-" * 40,
+ f" {rca.problem_statement}",
+ "",
+ "ROOT CAUSES IDENTIFIED",
+ "-" * 40,
+ ]
+
+ for rc in rca.root_causes:
+ lines.extend([
+ f"",
+ f" [{rc.cause_id}] {rc.description}",
+ f" Category: {rc.category}",
+ f" Systemic: {'Yes' if rc.systemic else 'No'}",
+ ])
+ if rc.evidence:
+ lines.append(f" Evidence:")
+ for ev in rc.evidence:
+ if ev:
+ lines.append(f" • {ev}")
+ if rc.contributing_factors:
+ lines.append(f" Contributing Factors:")
+ for cf in rc.contributing_factors:
+ lines.append(f" - {cf}")
+
+ lines.extend([
+ "",
+ "RECOMMENDED ACTIONS",
+ "-" * 40,
+ ])
+
+ for rec in rca.recommendations:
+ lines.extend([
+ f"",
+ f" [{rec.action_id}] {rec.action_type}: {rec.description}",
+ f" Priority: {rec.priority} | Effort: {rec.estimated_effort}",
+ f" Responsible: {rec.responsible_role}",
+ f" Effectiveness Criteria:",
+ ])
+ for ec in rec.effectiveness_criteria:
+ lines.append(f" ✓ {ec}")
+
+ if "steps" in rca.analysis_details:
+ lines.extend([
+ "",
+ "5-WHY CHAIN",
+ "-" * 40,
+ ])
+ for step in rca.analysis_details["steps"]:
+ lines.extend([
+ f"",
+ f" Why {step['level']}: {step['question']}",
+ f" → {step['answer']}",
+ ])
+ if step.get("evidence"):
+ lines.append(f" Evidence: {step['evidence']}")
+
+ lines.append("=" * 70)
+ return "\n".join(lines)
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Root Cause Analyzer for CAPA Investigations")
+ parser.add_argument("--problem", type=str, help="Problem statement")
+ parser.add_argument("--method", choices=["5why", "fishbone", "fault-tree", "kt"],
+ default="5why", help="Analysis method")
+ parser.add_argument("--data", type=str, help="JSON file with analysis data")
+ parser.add_argument("--output", choices=["text", "json"], default="text", help="Output format")
+ parser.add_argument("--interactive", action="store_true", help="Interactive mode")
+
+ args = parser.parse_args()
+
+ analyzer = RootCauseAnalyzer()
+
+ if args.data:
+ with open(args.data) as f:
+ data = json.load(f)
+ problem = data.get("problem", "Unknown problem")
+ method = data.get("method", "5-Why")
+ rca = analyzer.full_analysis(problem, method, data)
+ elif args.problem:
+ method_map = {"5why": "5-Why", "fishbone": "Fishbone", "fault-tree": "Fault Tree", "kt": "Kepner-Tregoe"}
+ rca = analyzer.full_analysis(args.problem, method_map.get(args.method, "5-Why"))
+ else:
+ # Demo
+ demo_data = {
+ "method": "5-Why",
+ "whys": [
+ {"question": "Why did the product fail inspection?", "answer": "Surface defect detected on 15% of units", "evidence": "QC inspection records"},
+ {"question": "Why did surface defects occur?", "answer": "Injection molding temperature was outside spec", "evidence": "Process monitoring data"},
+ {"question": "Why was temperature outside spec?", "answer": "Temperature controller calibration drift", "evidence": "Calibration log"},
+ {"question": "Why did calibration drift go undetected?", "answer": "No automated alert for drift, manual checks missed it", "evidence": "SOP review"},
+ {"question": "Why was there no automated alert?", "answer": "Process monitoring system lacks drift detection capability - systemic gap", "evidence": "System requirements review"}
+ ]
+ }
+ rca = analyzer.full_analysis("High defect rate in injection molding process", "5-Why", demo_data)
+
+ if args.output == "json":
+ result = {
+ "investigation_id": rca.investigation_id,
+ "problem": rca.problem_statement,
+ "method": rca.analysis_method,
+ "root_causes": [asdict(rc) for rc in rca.root_causes],
+ "recommendations": [asdict(rec) for rec in rca.recommendations],
+ "analysis_details": rca.analysis_details,
+ "confidence": rca.confidence_level
+ }
+ print(json.dumps(result, indent=2, default=str))
+ else:
+ print(format_rca_text(rca))
+
+
+if __name__ == "__main__":
+ main()
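The `--data` flag expects a JSON file whose keys match what `full_analysis` reads: `problem`, `method`, and either `whys` (5-Why) or `causes` (Fishbone). A hypothetical fishbone payload, with invented values, might look like:

```python
# Hypothetical investigation.json contents for `--data`. Field names are
# taken from full_analysis/analyze_fishbone; all values are made-up examples.
import json

payload = {
    "problem": "Mislabelled cartons shipped to customer",
    "method": "Fishbone",
    "causes": [
        {"category": "Method", "cause": "Label check missing from packaging SOP",
         "is_root": True, "evidence": "SOP rev 4 review",
         "sub_causes": ["No second-person verification"]},
        {"category": "Man", "cause": "Operator fatigue on night shift"},
    ],
}

# Round-trips cleanly, so it can be written out and passed via --data.
print(json.dumps(payload)[:40])
```

Causes flagged `is_root: True` become `RootCauseFinding` records and drive the generated CAPA recommendations.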
diff --git a/ra-qm-team/quality-documentation-manager/scripts/document_version_control.py b/ra-qm-team/quality-documentation-manager/scripts/document_version_control.py
new file mode 100644
index 0000000..1e3a4ef
--- /dev/null
+++ b/ra-qm-team/quality-documentation-manager/scripts/document_version_control.py
@@ -0,0 +1,466 @@
+#!/usr/bin/env python3
+"""
+Document Version Control for Quality Documentation
+
+Manages document lifecycle for quality manuals, SOPs, work instructions, and forms.
+Tracks versions, approvals, revisions, change history, electronic signatures per 21 CFR Part 11.
+
+Features:
+- Version numbering (Major.Minor.Edit, e.g., 2.1.3)
+- Change control with impact assessment
+- Review/approval workflows
+- Electronic signature capture
+- Document distribution tracking
+- Training record integration
+- Expiry/obsolete management
+
+Usage:
+ python document_version_control.py --create new_sop.md
+ python document_version_control.py --revise existing_sop.md --reason "Regulatory update"
+ python document_version_control.py --status
+ python document_version_control.py --matrix --output json
+"""
+
+import argparse
+import json
+import os
+import hashlib
+from dataclasses import dataclass, field, asdict
+from typing import List, Dict, Optional, Tuple
+from datetime import datetime, timedelta
+from pathlib import Path
+import re
+
+
+@dataclass
+class DocumentVersion:
+ """A single document version."""
+ doc_id: str
+ title: str
+ version: str
+ revision_date: str
+ author: str
+ status: str # "Draft", "Under Review", "Approved", "Obsolete"
+ change_summary: str = ""
+ next_review_date: str = ""
+ approved_by: List[str] = field(default_factory=list)
+ signed_by: List[Dict] = field(default_factory=list) # electronic signatures
+ attachments: List[str] = field(default_factory=list)
+ checksum: str = ""
+ template_version: str = "1.0"
+
+
+@dataclass
+class ChangeControl:
+ """Change control record."""
+ change_id: str
+ document_id: str
+ change_type: str # "New", "Revision", "Withdrawal"
+ reason: str
+ impact_assessment: Dict # Quality, Regulatory, Training, etc.
+ risk_assessment: str
+ notifications: List[str]
+ effective_date: str
+ change_author: str
+
+
+class DocumentVersionControl:
+ """Manages quality document lifecycle and version control."""
+
+ VERSION_PATTERN = re.compile(r'^(\d+)\.(\d+)\.(\d+)$')
+ DOCUMENT_TYPES = {
+ 'QMSM': 'Quality Management System Manual',
+ 'SOP': 'Standard Operating Procedure',
+ 'WI': 'Work Instruction',
+ 'FORM': 'Form/Template',
+ 'REC': 'Record',
+ 'POL': 'Policy'
+ }
+
+ def __init__(self, doc_store_path: str = "./doc_store"):
+ self.doc_store = Path(doc_store_path)
+ self.doc_store.mkdir(parents=True, exist_ok=True)
+ self.metadata_file = self.doc_store / "metadata.json"
+ self.documents = self._load_metadata()
+
+ def _load_metadata(self) -> Dict[str, DocumentVersion]:
+ """Load document metadata from storage."""
+ if self.metadata_file.exists():
+ with open(self.metadata_file, 'r', encoding='utf-8') as f:
+ data = json.load(f)
+ return {
+ doc_id: DocumentVersion(**doc_data)
+ for doc_id, doc_data in data.items()
+ }
+ return {}
+
+ def _save_metadata(self):
+ """Save document metadata to storage."""
+ with open(self.metadata_file, 'w', encoding='utf-8') as f:
+ json.dump({
+ doc_id: asdict(doc)
+ for doc_id, doc in self.documents.items()
+ }, f, indent=2, ensure_ascii=False)
+
+ def _generate_doc_id(self, title: str, doc_type: str) -> str:
+ """Generate unique document ID."""
+ # Extract first letters of words, append type code
+ words = re.findall(r'\b\w', title.upper())
+ prefix = ''.join(words[:3]) if words else 'DOC'
+ timestamp = datetime.now().strftime('%y%m%d%H%M')
+ return f"{prefix}-{doc_type}-{timestamp}"
+
+ def _parse_version(self, version: str) -> Tuple[int, int, int]:
+ """Parse semantic version string."""
+ match = self.VERSION_PATTERN.match(version)
+ if match:
+ return tuple(int(x) for x in match.groups())
+ raise ValueError(f"Invalid version format: {version}")
+
+ def _increment_version(self, current: str, change_type: str) -> str:
+ """Increment version based on change type."""
+ major, minor, edit = self._parse_version(current)
+ if change_type == "Major":
+ return f"{major+1}.0.0"
+ elif change_type == "Minor":
+ return f"{major}.{minor+1}.0"
+ else: # Edit
+ return f"{major}.{minor}.{edit+1}"
+
+ def _calculate_checksum(self, filepath: Path) -> str:
+ """Calculate SHA256 checksum of document file."""
+ with open(filepath, 'rb') as f:
+ return hashlib.sha256(f.read()).hexdigest()
+
+ def create_document(
+ self,
+ title: str,
+ content: str,
+ author: str,
+ doc_type: str,
+ change_summary: str = "Initial release",
+ attachments: List[str] = None
+ ) -> DocumentVersion:
+ """Create a new document version."""
+ if doc_type not in self.DOCUMENT_TYPES:
+ raise ValueError(f"Invalid document type. Choose from: {list(self.DOCUMENT_TYPES.keys())}")
+
+ doc_id = self._generate_doc_id(title, doc_type)
+ version = "1.0.0"
+ revision_date = datetime.now().strftime('%Y-%m-%d')
+ next_review = (datetime.now() + timedelta(days=365)).strftime('%Y-%m-%d')
+
+ # Save document content
+ doc_path = self.doc_store / f"{doc_id}_v{version}.md"
+ with open(doc_path, 'w', encoding='utf-8') as f:
+ f.write(content)
+
+ doc = DocumentVersion(
+ doc_id=doc_id,
+ title=title,
+ version=version,
+ revision_date=revision_date,
+ author=author,
+ status="Approved", # Initially approved for simplicity
+ change_summary=change_summary,
+ next_review_date=next_review,
+ attachments=attachments or [],
+ checksum=self._calculate_checksum(doc_path)
+ )
+
+ self.documents[doc_id] = doc
+ self._save_metadata()
+ return doc
+
+ def revise_document(
+ self,
+ doc_id: str,
+ new_content: str,
+ change_author: str,
+ change_type: str = "Edit",
+ change_summary: str = "",
+ attachments: List[str] = None
+ ) -> Optional[DocumentVersion]:
+ """Create a new revision of an existing document."""
+ if doc_id not in self.documents:
+ return None
+
+ old_doc = self.documents[doc_id]
+ new_version = self._increment_version(old_doc.version, change_type)
+ revision_date = datetime.now().strftime('%Y-%m-%d')
+
+ # Archive old version
+ old_path = self.doc_store / f"{doc_id}_v{old_doc.version}.md"
+ archive_path = self.doc_store / "archive" / f"{doc_id}_v{old_doc.version}_{revision_date}.md"
+ archive_path.parent.mkdir(exist_ok=True)
+ if old_path.exists():
+ os.rename(old_path, archive_path)
+
+ # Save new content
+ doc_path = self.doc_store / f"{doc_id}_v{new_version}.md"
+ with open(doc_path, 'w', encoding='utf-8') as f:
+ f.write(new_content)
+
+ # Create new document record
+ new_doc = DocumentVersion(
+ doc_id=doc_id,
+ title=old_doc.title,
+ version=new_version,
+ revision_date=revision_date,
+ author=change_author,
+ status="Draft", # Needs re-approval
+ change_summary=change_summary or f"Revision {new_version}",
+ next_review_date=(datetime.now() + timedelta(days=365)).strftime('%Y-%m-%d'),
+ attachments=attachments or old_doc.attachments,
+ checksum=self._calculate_checksum(doc_path)
+ )
+
+ self.documents[doc_id] = new_doc
+ self._save_metadata()
+ return new_doc
+
+ def approve_document(
+ self,
+ doc_id: str,
+ approver_name: str,
+ approver_title: str,
+ comments: str = ""
+ ) -> bool:
+ """Approve a document with electronic signature."""
+ if doc_id not in self.documents:
+ return False
+
+ doc = self.documents[doc_id]
+ if doc.status != "Draft":
+ return False
+
+ signature = {
+ "name": approver_name,
+ "title": approver_title,
+ "date": datetime.now().strftime('%Y-%m-%d %H:%M'),
+ "comments": comments,
+ "signature_hash": hashlib.sha256(f"{doc_id}{doc.version}{approver_name}".encode()).hexdigest()[:16]
+ }
+
+ doc.approved_by.append(approver_name)
+ doc.signed_by.append(signature)
+
+ # Approve if enough approvers (simplified: 1 is enough for demo)
+ doc.status = "Approved"
+ self._save_metadata()
+ return True
+
+ def withdraw_document(self, doc_id: str, reason: str, withdrawn_by: str) -> bool:
+ """Withdraw/obsolete a document."""
+ if doc_id not in self.documents:
+ return False
+
+ doc = self.documents[doc_id]
+ doc.status = "Obsolete"
+ doc.change_summary = f"OBsolete: {reason}"
+
+ # Add withdrawal signature
+ signature = {
+ "name": withdrawn_by,
+ "title": "QMS Manager",
+ "date": datetime.now().strftime('%Y-%m-%d %H:%M'),
+ "comments": reason,
+ "signature_hash": hashlib.sha256(f"{doc_id}OB{withdrawn_by}".encode()).hexdigest()[:16]
+ }
+ doc.signed_by.append(signature)
+
+ self._save_metadata()
+ return True
+
+ def get_document_history(self, doc_id: str) -> List[Dict]:
+ """Get version history for a document."""
+ history = []
+ pattern = f"{doc_id}_v*.md"
+ for file in self.doc_store.glob(pattern):
+ match = re.search(r'_v(\d+\.\d+\.\d+)\.md$', file.name)
+ if match:
+ version = match.group(1)
+ stat = file.stat()
+ history.append({
+ "version": version,
+ "filename": file.name,
+ "size": stat.st_size,
+ "modified": datetime.fromtimestamp(stat.st_mtime).strftime('%Y-%m-%d %H:%M')
+ })
+
+ # Check archive
+ for file in (self.doc_store / "archive").glob(f"{doc_id}_v*.md"):
+ match = re.search(r'_v(\d+\.\d+\.\d+)_(\d{4}-\d{2}-\d{2})\.md$', file.name)
+ if match:
+ version, date = match.groups()
+ history.append({
+ "version": version,
+ "filename": file.name,
+ "status": "archived",
+ "archived_date": date
+ })
+
+ return sorted(history, key=lambda x: tuple(int(p) for p in x["version"].split('.')))
+
+ def generate_document_matrix(self) -> Dict:
+ """Generate document matrix report."""
+ matrix = {
+ "total_documents": len(self.documents),
+ "by_status": {},
+ "by_type": {},
+ "documents": []
+ }
+
+ for doc in self.documents.values():
+ # By status
+ matrix["by_status"][doc.status] = matrix["by_status"].get(doc.status, 0) + 1
+
+ # By type (from doc_id)
+ doc_type = doc.doc_id.split('-')[1] if '-' in doc.doc_id else "Unknown"
+ matrix["by_type"][doc_type] = matrix["by_type"].get(doc_type, 0) + 1
+
+ matrix["documents"].append({
+ "doc_id": doc.doc_id,
+ "title": doc.title,
+ "type": doc_type,
+ "version": doc.version,
+ "status": doc.status,
+ "author": doc.author,
+ "last_modified": doc.revision_date,
+ "next_review": doc.next_review_date,
+ "approved_by": doc.approved_by
+ })
+
+ matrix["documents"].sort(key=lambda x: (x["type"], x["title"]))
+ return matrix
+
+
+def format_matrix_text(matrix: Dict) -> str:
+ """Format document matrix as text."""
+ lines = [
+ "=" * 80,
+ "QUALITY DOCUMENTATION MATRIX",
+ "=" * 80,
+ f"Total Documents: {matrix['total_documents']}",
+ "",
+ "BY STATUS",
+ "-" * 40,
+ ]
+ for status, count in matrix["by_status"].items():
+ lines.append(f" {status}: {count}")
+
+ lines.extend([
+ "",
+ "BY TYPE",
+ "-" * 40,
+ ])
+ for dtype, count in matrix["by_type"].items():
+ lines.append(f" {dtype}: {count}")
+
+ lines.extend([
+ "",
+ "DOCUMENT LIST",
+ "-" * 40,
+ f"{'ID':<20} {'Type':<8} {'Version':<10} {'Status':<12} {'Title':<30}",
+ "-" * 80,
+ ])
+
+ for doc in matrix["documents"]:
+ lines.append(f"{doc['doc_id'][:19]:<20} {doc['type']:<8} {doc['version']:<10} {doc['status']:<12} {doc['title'][:29]:<30}")
+
+ lines.append("=" * 80)
+ return "\n".join(lines)
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Document Version Control for Quality Documentation")
+ parser.add_argument("--create", type=str, help="Create new document from template")
+ parser.add_argument("--title", type=str, help="Document title (required with --create)")
+ parser.add_argument("--type", choices=list(DocumentVersionControl.DOCUMENT_TYPES.keys()), help="Document type")
+ parser.add_argument("--author", type=str, default="QMS Manager", help="Document author")
+ parser.add_argument("--revise", type=str, help="Revise existing document (doc_id)")
+ parser.add_argument("--reason", type=str, help="Reason for revision or withdrawal")
+ parser.add_argument("--approve", type=str, help="Approve document (doc_id)")
+ parser.add_argument("--approver", type=str, help="Approver name")
+ parser.add_argument("--withdraw", type=str, help="Withdraw document (doc_id)")
+ parser.add_argument("--status", action="store_true", help="Show document status")
+ parser.add_argument("--matrix", action="store_true", help="Generate document matrix")
+ parser.add_argument("--output", choices=["text", "json"], default="text")
+ parser.add_argument("--interactive", action="store_true", help="Interactive mode")
+
+ args = parser.parse_args()
+ dvc = DocumentVersionControl()
+
+ if args.create and args.title and args.type:
+ # Create new document with default content
+ template = f"""# {args.title}
+
+**Document ID:** [auto-generated]
+**Version:** 1.0.0
+**Date:** {datetime.now().strftime('%Y-%m-%d')}
+**Author:** {args.author}
+
+## Purpose
+[Describe the purpose and scope of this document]
+
+## Responsibility
+[List roles and responsibilities]
+
+## Procedure
+[Detailed procedure steps]
+
+## References
+[List referenced documents]
+
+## Revision History
+| Version | Date | Author | Change Summary |
+|---------|------|--------|----------------|
+| 1.0.0 | {datetime.now().strftime('%Y-%m-%d')} | {args.author} | Initial release |
+"""
+ doc = dvc.create_document(
+ title=args.title,
+ content=template,
+ author=args.author,
+ doc_type=args.type,
+ change_summary=args.reason or "Initial release"
+ )
+ print(f"✅ Created document {doc.doc_id} v{doc.version}")
+ print(f" File: doc_store/{doc.doc_id}_v{doc.version}.md")
+ elif args.revise and args.reason:
+ # Add revision reason to the content (would normally modify the file)
+ print(f"📝 Would revise document {args.revise} - reason: {args.reason}")
+ print(" Note: In production, this would load existing content, make changes, and create new revision")
+ elif args.approve and args.approver:
+ success = dvc.approve_document(args.approve, args.approver, "QMS Manager")
+ print(f"{'✅ Approved' if success else '❌ Failed'} document {args.approve}")
+ elif args.withdraw and args.reason:
+ success = dvc.withdraw_document(args.withdraw, args.reason, "QMS Manager")
+ print(f"{'✅ Withdrawn' if success else '❌ Failed'} document {args.withdraw}")
+ elif args.matrix:
+ matrix = dvc.generate_document_matrix()
+ if args.output == "json":
+ print(json.dumps(matrix, indent=2))
+ else:
+ print(format_matrix_text(matrix))
+ elif args.status:
+ print("📋 Document Status:")
+ for doc_id, doc in dvc.documents.items():
+ print(f" {doc_id} v{doc.version} - {doc.title} [{doc.status}]")
+ else:
+ # Demo
+ print("📁 Document Version Control System Demo")
+ print(" Repository contains", len(dvc.documents), "documents")
+ if dvc.documents:
+ print("\n Existing documents:")
+ for doc in dvc.documents.values():
+ print(f" {doc.doc_id} v{doc.version} - {doc.title} ({doc.status})")
+
+ print("\n💡 Usage:")
+ print(" --create \"SOP-001\" --title \"Document Title\" --type SOP --author \"Your Name\"")
+ print(" --revise DOC-001 --reason \"Regulatory update\"")
+ print(" --approve DOC-001 --approver \"Approver Name\"")
+ print(" --matrix --output text/json")
+
+if __name__ == "__main__":
+ main()
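One detail worth noting for CLIs shaped like the one above: argparse refuses to register the same option string twice, raising `argparse.ArgumentError` at startup rather than silently overriding, so actions that both need a reason (revise, withdraw) must share a single `--reason` flag or move to subcommands. A minimal standalone sketch (flag names mirror the script's; this is plain argparse behavior, not code from the script):

```python
import argparse

parser = argparse.ArgumentParser(description="Demo: sharing a flag across actions")
parser.add_argument("--revise", help="Revise existing document (doc_id)")
parser.add_argument("--withdraw", help="Withdraw document (doc_id)")
# One shared flag serves both actions.
parser.add_argument("--reason", help="Reason for revision or withdrawal")

# Registering --reason a second time is rejected up front.
duplicate_rejected = False
try:
    parser.add_argument("--reason", help="Withdrawal reason")
except argparse.ArgumentError:
    duplicate_rejected = True

args = parser.parse_args(["--withdraw", "DOC-001", "--reason", "Obsolete"])
```

An alternative with the same effect is `parser.add_mutually_exclusive_group()` for the action flags, keeping `--reason` as the one shared modifier.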
diff --git a/ra-qm-team/quality-manager-qmr/scripts/quality_effectiveness_monitor.py b/ra-qm-team/quality-manager-qmr/scripts/quality_effectiveness_monitor.py
new file mode 100644
index 0000000..a0ddf13
--- /dev/null
+++ b/ra-qm-team/quality-manager-qmr/scripts/quality_effectiveness_monitor.py
@@ -0,0 +1,481 @@
+#!/usr/bin/env python3
+"""
+Quality Management System Effectiveness Monitor
+
+Quantitatively assess QMS effectiveness using leading and lagging indicators.
+Tracks trends, calculates control limits, and predicts potential quality issues
+before they become failures. Integrates with CAPA and management review processes.
+
+Supports metrics:
+- Complaint rates, defect rates, rework rates
+- Supplier performance
+- CAPA effectiveness
+- Audit findings trends
+- Non-conformance statistics
+
+Usage:
+ python quality_effectiveness_monitor.py --metrics metrics.csv --dashboard
+ python quality_effectiveness_monitor.py --qms-data qms_data.json --predict
+ python quality_effectiveness_monitor.py --interactive
+"""
+
+import argparse
+import json
+import csv
+import sys
+from dataclasses import dataclass, field, asdict
+from typing import List, Dict, Optional, Tuple
+from datetime import datetime, timedelta
+from statistics import mean, stdev, median
+
+
+@dataclass
+class QualityMetric:
+ """A single quality metric data point."""
+ metric_id: str
+ metric_name: str
+ category: str
+ date: str
+ value: float
+ unit: str
+ target: float
+ upper_limit: float
+ lower_limit: float
+ trend_direction: str = "" # "up", "down", "stable"
+ sigma_level: float = 0.0
+ is_alert: bool = False
+ is_critical: bool = False
+
+
+@dataclass
+class QMSReport:
+ """QMS effectiveness report."""
+ report_period: Tuple[str, str]
+ overall_effectiveness_score: float
+ metrics_count: int
+ metrics_in_control: int
+ metrics_out_of_control: int
+ critical_alerts: int
+ trends_analysis: Dict
+ predictive_alerts: List[Dict]
+ improvement_opportunities: List[Dict]
+ management_review_summary: str
+
+
+class QMSEffectivenessMonitor:
+ """Monitors and analyzes QMS effectiveness."""
+
+ SIGNAL_INDICATORS = {
+ "complaint_rate": {"unit": "per 1000 units", "target": 0, "upper_limit": 1.5},
+ "defect_rate": {"unit": "PPM", "target": 100, "upper_limit": 500},
+ "rework_rate": {"unit": "%", "target": 2.0, "upper_limit": 5.0},
+ "on_time_delivery": {"unit": "%", "target": 98, "lower_limit": 95},
+ "audit_findings": {"unit": "count/month", "target": 0, "upper_limit": 3},
+ "capa_closure_rate": {"unit": "% within target", "target": 100, "lower_limit": 90},
+ "supplier_defect_rate": {"unit": "PPM", "target": 200, "upper_limit": 1000}
+ }
+
+ def __init__(self):
+ self.metrics = []
+
+ def load_csv(self, csv_path: str) -> List[QualityMetric]:
+ """Load metrics from CSV file."""
+ metrics = []
+ with open(csv_path, 'r', encoding='utf-8') as f:
+ reader = csv.DictReader(f)
+ for row in reader:
+ metric = QualityMetric(
+ metric_id=row.get('metric_id', ''),
+ metric_name=row.get('metric_name', ''),
+ category=row.get('category', 'General'),
+ date=row.get('date', ''),
+ value=float(row.get('value', 0)),
+ unit=row.get('unit', ''),
+ target=float(row.get('target', 0)),
+ upper_limit=float(row.get('upper_limit', 0)),
+ lower_limit=float(row.get('lower_limit', 0)),
+ )
+ metrics.append(metric)
+ self.metrics = metrics
+ return metrics
+
+ def calculate_sigma_level(self, metric: QualityMetric, historical_values: List[float]) -> float:
+ """Calculate process sigma level based on defect rate."""
+ if metric.unit == "PPM" or "rate" in metric.metric_name.lower():
+ # For defect rates, DPMO = defects_per_million_opportunities
+ if historical_values:
+ avg_defect_rate = mean(historical_values)
+ if avg_defect_rate > 0:
+ dpmo = avg_defect_rate
+ # Standard short-term DPMO → sigma thresholds (includes the 1.5σ shift)
+ thresholds = [(690000, 1.0), (308000, 2.0), (66800, 3.0),
+ (6210, 4.0), (233, 5.0), (3.4, 6.0)]
+ sigma = 0.0
+ for dpmo_limit, level in thresholds:
+ if dpmo <= dpmo_limit:
+ sigma = level
+ return max(0.0, min(6.0, sigma))
+ return 0.0
+
+ def analyze_trend(self, values: List[float]) -> Tuple[str, float]:
+ """Analyze trend direction and significance."""
+ if len(values) < 3:
+ return "insufficient_data", 0.0
+
+ x = list(range(len(values)))
+ y = values
+
+ # Linear regression
+ n = len(x)
+ sum_x = sum(x)
+ sum_y = sum(y)
+ sum_xy = sum(x[i] * y[i] for i in range(n))
+ sum_x2 = sum(xi * xi for xi in x)
+
+ slope = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x * sum_x) if (n * sum_x2 - sum_x * sum_x) != 0 else 0
+
+ # Determine trend direction
+ if slope > 0.01:
+ direction = "up"
+ elif slope < -0.01:
+ direction = "down"
+ else:
+ direction = "stable"
+
+ # Calculate R-squared
+ if slope != 0:
+ intercept = (sum_y - slope * sum_x) / n
+ y_pred = [slope * xi + intercept for xi in x]
+ ss_res = sum((y[i] - y_pred[i])**2 for i in range(n))
+ ss_tot = sum((y[i] - mean(y))**2 for i in range(n))
+ r2 = 1 - (ss_res / ss_tot) if ss_tot > 0 else 0
+ else:
+ r2 = 0
+
+ return direction, r2
+
+ def detect_alerts(self, metrics: List[QualityMetric]) -> List[Dict]:
+ """Detect metrics that require attention."""
+ alerts = []
+ for metric in metrics:
+ # Check immediate control limit violation
+ if metric.upper_limit and metric.value > metric.upper_limit:
+ alerts.append({
+ "metric_id": metric.metric_id,
+ "metric_name": metric.metric_name,
+ "issue": "exceeds_upper_limit",
+ "value": metric.value,
+ "limit": metric.upper_limit,
+ "severity": "critical" if metric.category in ["Customer", "Regulatory"] else "high"
+ })
+ if metric.lower_limit and metric.value < metric.lower_limit:
+ alerts.append({
+ "metric_id": metric.metric_id,
+ "metric_name": metric.metric_name,
+ "issue": "below_lower_limit",
+ "value": metric.value,
+ "limit": metric.lower_limit,
+ "severity": "critical" if metric.category in ["Customer", "Regulatory"] else "high"
+ })
+
+ # Check for adverse trend (3+ points in same direction)
+ # Need to group by metric_name and check historical data
+ # Simplified: check trend_direction flag if set
+ if metric.trend_direction in ["up", "down"] and metric.sigma_level > 3:
+ alerts.append({
+ "metric_id": metric.metric_id,
+ "metric_name": metric.metric_name,
+ "issue": f"adverse_trend_{metric.trend_direction}",
+ "value": metric.value,
+ "severity": "medium"
+ })
+
+ return alerts
+
+ def predict_failures(self, metrics: List[QualityMetric], forecast_days: int = 30) -> List[Dict]:
+ """Predict potential failures based on trends."""
+ predictions = []
+
+ # Group metrics by name to get time series
+ grouped = {}
+ for m in metrics:
+ if m.metric_name not in grouped:
+ grouped[m.metric_name] = []
+ grouped[m.metric_name].append(m)
+
+ for metric_name, metric_list in grouped.items():
+ if len(metric_list) < 5:
+ continue
+
+ # Sort by date
+ metric_list.sort(key=lambda m: m.date)
+ values = [m.value for m in metric_list]
+
+ # Simple linear extrapolation
+ x = list(range(len(values)))
+ y = values
+ n = len(x)
+ sum_x = sum(x)
+ sum_y = sum(y)
+ sum_xy = sum(x[i] * y[i] for i in range(n))
+ sum_x2 = sum(xi * xi for xi in x)
+ slope = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x * sum_x) if (n * sum_x2 - sum_x * sum_x) != 0 else 0
+
+ if slope != 0:
+ # Linear extrapolation over the forecast horizon (assumes one sample per day)
+ next_value = y[-1] + slope * forecast_days
+ target = metric_list[0].target
+ upper_limit = metric_list[0].upper_limit
+
+ if (target and next_value > target * 1.2) or (upper_limit and next_value > upper_limit * 0.9):
+ predictions.append({
+ "metric": metric_name,
+ "current_value": y[-1],
+ "forecast_value": round(next_value, 2),
+ "forecast_days": forecast_days,
+ "trend_slope": round(slope, 3),
+ "risk_level": "high" if upper_limit and next_value > upper_limit else "medium"
+ })
+
+ return predictions
+
+ def calculate_effectiveness_score(self, metrics: List[QualityMetric]) -> float:
+ """Calculate overall QMS effectiveness score (0-100)."""
+ if not metrics:
+ return 0.0
+
+ scores = []
+ for m in metrics:
+ # Score based on distance to target
+ if m.target != 0:
+ deviation = abs(m.value - m.target) / max(abs(m.target), 1)
+ score = max(0, 100 - deviation * 100)
+ else:
+ # For metrics where lower is better (defects, etc.)
+ if m.upper_limit:
+ score = max(0, 100 - (m.value / m.upper_limit) * 100 * 0.5)
+ else:
+ score = 50 # Neutral if no target
+ scores.append(score)
+
+ # Penalize for alerts
+ alerts = self.detect_alerts(metrics)
+ penalty = len([a for a in alerts if a["severity"] in ["critical", "high"]]) * 5
+ return max(0, min(100, mean(scores) - penalty))
+
+ def identify_improvement_opportunities(self, metrics: List[QualityMetric]) -> List[Dict]:
+ """Identify metrics with highest improvement potential."""
+ opportunities = []
+ for m in metrics:
+ if m.upper_limit and m.value > m.upper_limit * 0.8:
+ # Distance back to the 80%-of-limit comfort zone; largest excursions rank first
+ gap = m.value - m.upper_limit * 0.8
+ improvement_pct = (gap / m.upper_limit) * 100
+ opportunities.append({
+ "metric": m.metric_name,
+ "current": m.value,
+ "target": round(m.upper_limit * 0.8, 2),
+ "gap": round(gap, 2),
+ "improvement_potential_pct": round(improvement_pct, 1),
+ "recommended_action": f"Reduce {m.metric_name} by at least {round(gap, 2)} {m.unit}",
+ "impact": "High" if m.category in ["Customer", "Regulatory"] else "Medium"
+ })
+
+ # Sort by improvement potential
+ opportunities.sort(key=lambda x: x["improvement_potential_pct"], reverse=True)
+ return opportunities[:10]
+
+ def generate_management_review_summary(self, report: QMSReport) -> str:
+ """Generate executive summary for management review."""
+ summary = [
+ f"QMS EFFECTIVENESS REVIEW - {report.report_period[0]} to {report.report_period[1]}",
+ "",
+ f"Overall Effectiveness Score: {report.overall_effectiveness_score:.1f}/100",
+ f"Metrics Tracked: {report.metrics_count} | In Control: {report.metrics_in_control} | Alerts: {report.critical_alerts}",
+ ""
+ ]
+
+ if report.critical_alerts > 0:
+ summary.append("🔴 CRITICAL ALERTS REQUIRING IMMEDIATE ATTENTION:")
+ for alert in [a for a in report.predictive_alerts if a.get("risk_level") == "high"]:
+ summary.append(f" • {alert['metric']}: forecast {alert['forecast_value']} (from {alert['current_value']})")
+ summary.append("")
+
+ summary.append("📈 TOP IMPROVEMENT OPPORTUNITIES:")
+ for i, opp in enumerate(report.improvement_opportunities[:3], 1):
+ summary.append(f" {i}. {opp['metric']}: {opp['recommended_action']} (Impact: {opp['impact']})")
+ summary.append("")
+
+ summary.append("🎯 RECOMMENDED ACTIONS:")
+ summary.append(" 1. Address all high-severity alerts within 30 days")
+ summary.append(" 2. Launch improvement projects for top 3 opportunities")
+ summary.append(" 3. Review CAPA effectiveness for recurring issues")
+ summary.append(" 4. Update risk assessments based on predictive trends")
+
+ return "\n".join(summary)
+
+ def analyze(
+ self,
+ metrics: List[QualityMetric],
+ start_date: str = None,
+ end_date: str = None
+ ) -> QMSReport:
+ """Perform comprehensive QMS effectiveness analysis."""
+ in_control = 0
+ for m in metrics:
+ if not m.is_alert and not m.is_critical:
+ in_control += 1
+
+ out_of_control = len(metrics) - in_control
+
+ alerts = self.detect_alerts(metrics)
+ critical_alerts = len([a for a in alerts if a["severity"] in ["critical", "high"]])
+
+ predictions = self.predict_failures(metrics)
+ improvement_opps = self.identify_improvement_opportunities(metrics)
+
+ effectiveness = self.calculate_effectiveness_score(metrics)
+
+ # Trend analysis by category
+ trends = {}
+ categories = set(m.category for m in metrics)
+ for cat in categories:
+ cat_metrics = [m for m in metrics if m.category == cat]
+ if len(cat_metrics) >= 2:
+ trends[cat] = {
+ "metric_count": len(cat_metrics),
+ "avg_value": round(mean([m.value for m in cat_metrics]), 2),
+ "alerts": len([a for a in alerts if any(m.metric_name == a["metric_name"] for m in cat_metrics)])
+ }
+
+ period = (start_date or metrics[0].date, end_date or metrics[-1].date) if metrics else ("", "")
+
+ report = QMSReport(
+ report_period=period,
+ overall_effectiveness_score=effectiveness,
+ metrics_count=len(metrics),
+ metrics_in_control=in_control,
+ metrics_out_of_control=out_of_control,
+ critical_alerts=critical_alerts,
+ trends_analysis=trends,
+ predictive_alerts=predictions,
+ improvement_opportunities=improvement_opps,
+ management_review_summary="" # Filled later
+ )
+
+ report.management_review_summary = self.generate_management_review_summary(report)
+
+ return report
+
+
+def format_qms_report(report: QMSReport) -> str:
+ """Format QMS report as text."""
+ lines = [
+ "=" * 80,
+ "QMS EFFECTIVENESS MONITORING REPORT",
+ "=" * 80,
+ f"Period: {report.report_period[0]} to {report.report_period[1]}",
+ f"Overall Score: {report.overall_effectiveness_score:.1f}/100",
+ "",
+ "METRIC STATUS",
+ "-" * 40,
+ f" Total Metrics: {report.metrics_count}",
+ f" In Control: {report.metrics_in_control}",
+ f" Out of Control: {report.metrics_out_of_control}",
+ f" Critical Alerts: {report.critical_alerts}",
+ "",
+ "TREND ANALYSIS BY CATEGORY",
+ "-" * 40,
+ ]
+
+ for category, data in report.trends_analysis.items():
+ lines.append(f" {category}: {data['avg_value']} (alerts: {data['alerts']})")
+
+ if report.predictive_alerts:
+ lines.extend([
+ "",
+ "PREDICTIVE ALERTS (Next 30 days)",
+ "-" * 40,
+ ])
+ for alert in report.predictive_alerts[:5]:
+ lines.append(f" ⚠ {alert['metric']}: {alert['current_value']} → {alert['forecast_value']} ({alert['risk_level']})")
+
+ if report.improvement_opportunities:
+ lines.extend([
+ "",
+ "TOP IMPROVEMENT OPPORTUNITIES",
+ "-" * 40,
+ ])
+ for i, opp in enumerate(report.improvement_opportunities[:5], 1):
+ lines.append(f" {i}. {opp['metric']}: {opp['recommended_action']}")
+
+ lines.extend([
+ "",
+ "MANAGEMENT REVIEW SUMMARY",
+ "-" * 40,
+ report.management_review_summary,
+ "=" * 80
+ ])
+
+ return "\n".join(lines)
+
+
+def main():
+ parser = argparse.ArgumentParser(description="QMS Effectiveness Monitor")
+ parser.add_argument("--metrics", type=str, help="CSV file with quality metrics")
+ parser.add_argument("--qms-data", type=str, help="JSON file with QMS data")
+ parser.add_argument("--dashboard", action="store_true", help="Generate dashboard summary")
+ parser.add_argument("--predict", action="store_true", help="Include predictive analytics")
+ parser.add_argument("--output", choices=["text", "json"], default="text")
+ parser.add_argument("--interactive", action="store_true", help="Interactive mode")
+
+ args = parser.parse_args()
+ monitor = QMSEffectivenessMonitor()
+
+ if args.metrics:
+ metrics = monitor.load_csv(args.metrics)
+ report = monitor.analyze(metrics)
+ elif args.qms_data:
+ with open(args.qms_data) as f:
+ data = json.load(f)
+ # Convert to QualityMetric objects
+ metrics = [QualityMetric(**m) for m in data.get("metrics", [])]
+ report = monitor.analyze(metrics)
+ else:
+ # Demo data
+ demo_metrics = [
+ QualityMetric("M001", "Customer Complaint Rate", "Customer", "2026-03-01", 0.8, "per 1000", 1.0, 1.5, 0.5),
+ QualityMetric("M002", "Defect Rate PPM", "Quality", "2026-03-01", 125, "PPM", 100, 500, 0, trend_direction="down", sigma_level=4.2),
+ QualityMetric("M003", "On-Time Delivery", "Operations", "2026-03-01", 96.5, "%", 98, 0, 95, trend_direction="down"),
+ QualityMetric("M004", "CAPA Closure Rate", "Quality", "2026-03-01", 92.0, "%", 100, 0, 90, is_alert=True),
+ QualityMetric("M005", "Supplier Defect Rate", "Supplier", "2026-03-01", 450, "PPM", 200, 1000, 0, is_critical=True),
+ ]
+ # Simulate time series
+ all_metrics = []
+ for i in range(30):
+ for dm in demo_metrics:
+ new_metric = QualityMetric(
+ metric_id=dm.metric_id,
+ metric_name=dm.metric_name,
+ category=dm.category,
+ date=f"2026-03-{i+1:02d}",
+ value=(dm.value + i * 0.1) if dm.metric_name == "Customer Complaint Rate" else dm.value,
+ unit=dm.unit,
+ target=dm.target,
+ upper_limit=dm.upper_limit,
+ lower_limit=dm.lower_limit
+ )
+ all_metrics.append(new_metric)
+ report = monitor.analyze(all_metrics)
+
+ if args.output == "json":
+ result = asdict(report)
+ print(json.dumps(result, indent=2))
+ else:
+ print(format_qms_report(report))
+
+
+if __name__ == "__main__":
+ main()
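The trend detection in `analyze_trend` and the extrapolation in `predict_failures` above both rest on the same closed-form least-squares fit over the sample index. A self-contained sketch of that computation (the name `ols_trend` is illustrative, not from the script):

```python
from statistics import mean

def ols_trend(values):
    """Least-squares slope/intercept over the sample index, plus R^2."""
    n = len(values)
    x = list(range(n))
    sum_x, sum_y = sum(x), sum(values)
    sum_xy = sum(xi * yi for xi, yi in zip(x, values))
    sum_x2 = sum(xi * xi for xi in x)
    denom = n * sum_x2 - sum_x * sum_x
    slope = (n * sum_xy - sum_x * sum_y) / denom if denom else 0.0
    intercept = (sum_y - slope * sum_x) / n
    # R^2 from residual and total sums of squares
    y_pred = [slope * xi + intercept for xi in x]
    ss_res = sum((yi - yp) ** 2 for yi, yp in zip(values, y_pred))
    ss_tot = sum((yi - mean(values)) ** 2 for yi in values)
    r2 = 1 - ss_res / ss_tot if ss_tot else 0.0
    return slope, intercept, r2

# A steadily rising complaint rate: 0.1 per sample, perfect linear fit.
slope, intercept, r2 = ols_trend([0.8, 0.9, 1.0, 1.1, 1.2])
```

On this series the fit recovers slope 0.1 and intercept 0.8 with R² of 1, which is exactly the kind of sustained adverse trend the monitor flags.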
diff --git a/ra-qm-team/regulatory-affairs-head/scripts/regulatory_pathway_analyzer.py b/ra-qm-team/regulatory-affairs-head/scripts/regulatory_pathway_analyzer.py
new file mode 100644
index 0000000..34432f0
--- /dev/null
+++ b/ra-qm-team/regulatory-affairs-head/scripts/regulatory_pathway_analyzer.py
@@ -0,0 +1,557 @@
+#!/usr/bin/env python3
+"""
+Regulatory Pathway Analyzer - Determines optimal regulatory pathway for medical devices.
+
+Analyzes device characteristics and recommends the most efficient regulatory pathway
+across multiple markets (FDA, EU MDR, UK UKCA, Health Canada, TGA, PMDA).
+
+Supports:
+- FDA: 510(k), De Novo, PMA, Breakthrough Device
+- EU MDR: Class I, IIa, IIb, III (former AIMDD devices are Class III under MDR)
+- UK: UKCA marking
+- Health Canada: Class I-IV
+- TGA: Class I, IIa, IIb, III
+- Japan PMDA: Class I-IV
+
+Usage:
+ python regulatory_pathway_analyzer.py --device-class II --predicate yes --market all
+ python regulatory_pathway_analyzer.py --interactive
+ python regulatory_pathway_analyzer.py --data device_profile.json --output json
+"""
+
+import argparse
+import json
+import sys
+from dataclasses import dataclass, field, asdict
+from typing import List, Dict, Optional, Tuple
+from enum import Enum
+
+
+class RiskClass(Enum):
+ CLASS_I = "I"
+ CLASS_IIA = "IIa"
+ CLASS_IIB = "IIb"
+ CLASS_III = "III"
+ CLASS_IV = "IV"
+
+
+class MarketRegion(Enum):
+ US_FDA = "US-FDA"
+ EU_MDR = "EU-MDR"
+ UK_UKCA = "UK-UKCA"
+ HEALTH_CANADA = "Health-Canada"
+ AUSTRALIA_TGA = "Australia-TGA"
+ JAPAN_PMDA = "Japan-PMDA"
+
+
+@dataclass
+class DeviceProfile:
+ """Medical device profile for pathway analysis."""
+ device_name: str
+ intended_use: str
+ device_class: str # I, IIa, IIb, III
+ novel_technology: bool = False
+ predicate_available: bool = True
+ implantable: bool = False
+ life_sustaining: bool = False
+ software_component: bool = False
+ ai_ml_component: bool = False
+ sterile: bool = False
+ measuring_function: bool = False
+ target_markets: List[str] = field(default_factory=lambda: ["US-FDA", "EU-MDR"])
+
+
+@dataclass
+class PathwayOption:
+ """A regulatory pathway option."""
+ pathway_name: str
+ market: str
+ estimated_timeline_months: Tuple[int, int]
+ estimated_cost_usd: Tuple[int, int]
+ key_requirements: List[str]
+ advantages: List[str]
+ risks: List[str]
+ recommendation_level: str # "Recommended", "Alternative", "Not Recommended"
+
+
+@dataclass
+class PathwayAnalysis:
+ """Complete pathway analysis result."""
+ device: DeviceProfile
+ recommended_pathways: List[PathwayOption]
+ optimal_sequence: List[str] # Recommended submission order
+ total_timeline_months: Tuple[int, int]
+ total_estimated_cost: Tuple[int, int]
+ critical_success_factors: List[str]
+ warnings: List[str]
+
+
+class RegulatoryPathwayAnalyzer:
+ """Analyzes and recommends regulatory pathways for medical devices."""
+
+ # FDA pathway decision matrix
+ FDA_PATHWAYS = {
+ "I": {
+ "pathway": "510(k) Exempt / Registration & Listing",
+ "timeline": (1, 3),
+ "cost": (5000, 15000),
+ "requirements": ["Establishment registration", "Device listing", "GMP compliance (if non-exempt)"]
+ },
+ "II": {
+ "pathway": "510(k)",
+ "timeline": (6, 12),
+ "cost": (50000, 250000),
+ "requirements": ["Predicate device identification", "Substantial equivalence demonstration", "Performance testing", "Biocompatibility (if applicable)", "Software documentation (if applicable)"]
+ },
+ "II-novel": {
+ "pathway": "De Novo",
+ "timeline": (12, 18),
+ "cost": (150000, 400000),
+ "requirements": ["Risk-based classification request", "Special controls development", "Performance testing", "Clinical data (potentially)"]
+ },
+ "III": {
+ "pathway": "PMA",
+ "timeline": (18, 36),
+ "cost": (500000, 2000000),
+ "requirements": ["Clinical investigations", "Manufacturing information", "Performance testing", "Risk-benefit analysis", "Post-approval studies"]
+ },
+ "III-breakthrough": {
+ "pathway": "Breakthrough Device Program + PMA",
+ "timeline": (12, 24),
+ "cost": (500000, 2000000),
+ "requirements": ["Breakthrough designation request", "More flexible clinical evidence", "Iterative FDA engagement", "Post-market data collection"]
+ }
+ }
+
+ # EU MDR pathway decision matrix
+ EU_MDR_PATHWAYS = {
+ "I": {
+ "pathway": "Self-declaration (Class I)",
+ "timeline": (2, 4),
+ "cost": (10000, 30000),
+ "requirements": ["Technical documentation", "EU Declaration of Conformity", "UDI assignment", "EUDAMED registration", "Authorized Representative (if non-EU)"]
+ },
+ "IIa": {
+ "pathway": "Notified Body assessment (Class IIa)",
+ "timeline": (12, 18),
+ "cost": (80000, 200000),
+ "requirements": ["QMS certification (ISO 13485)", "Technical documentation", "Clinical evaluation", "Notified Body audit", "Post-market surveillance plan"]
+ },
+ "IIb": {
+ "pathway": "Notified Body assessment (Class IIb)",
+ "timeline": (15, 24),
+ "cost": (150000, 400000),
+ "requirements": ["Full QMS certification", "Comprehensive technical documentation", "Clinical evaluation (may need clinical investigation)", "Type examination or product verification", "Notified Body scrutiny"]
+ },
+ "III": {
+ "pathway": "Notified Body assessment (Class III)",
+ "timeline": (18, 30),
+ "cost": (300000, 800000),
+ "requirements": ["Full QMS certification", "Complete technical documentation", "Clinical investigation (typically required)", "Notified Body clinical evaluation review", "Scrutiny procedure (possible)", "PMCF plan"]
+ }
+ }
+
+ def __init__(self):
+ self.analysis_warnings = []
+
+ def analyze_fda_pathway(self, device: DeviceProfile) -> PathwayOption:
+ """Determine optimal FDA pathway."""
+ device_class = device.device_class.upper().replace("IIA", "II").replace("IIB", "II")
+
+ if device_class == "I":
+ pathway_data = self.FDA_PATHWAYS["I"]
+ return PathwayOption(
+ pathway_name=pathway_data["pathway"],
+ market="US-FDA",
+ estimated_timeline_months=pathway_data["timeline"],
+ estimated_cost_usd=pathway_data["cost"],
+ key_requirements=pathway_data["requirements"],
+ advantages=["Fastest path to market", "Minimal regulatory burden", "No premarket submission required (if exempt)"],
+ risks=["Limited to exempt product codes", "Still requires GMP compliance"],
+ recommendation_level="Recommended"
+ )
+
+ elif device_class == "III" or device.implantable or device.life_sustaining:
+ # Breakthrough designation is only available for novel technology
+ key = "III-breakthrough" if device.novel_technology else "III"
+ pathway_data = self.FDA_PATHWAYS[key]
+ rec_level = "Recommended"
+ else: # Class II
+ if device.predicate_available and not device.novel_technology:
+ pathway_data = self.FDA_PATHWAYS["II"]
+ rec_level = "Recommended"
+ else:
+ pathway_data = self.FDA_PATHWAYS["II-novel"]
+ rec_level = "Recommended"
+
+ return PathwayOption(
+ pathway_name=pathway_data["pathway"],
+ market="US-FDA",
+ estimated_timeline_months=pathway_data["timeline"],
+ estimated_cost_usd=pathway_data["cost"],
+ key_requirements=pathway_data["requirements"],
+ advantages=self._get_fda_advantages(pathway_data["pathway"], device),
+ risks=self._get_fda_risks(pathway_data["pathway"], device),
+ recommendation_level=rec_level
+ )
+
+ def analyze_eu_mdr_pathway(self, device: DeviceProfile) -> PathwayOption:
+ """Determine optimal EU MDR pathway."""
+ # Normalize once to lowercase; every branch below compares lowercase forms
+ device_class = device.device_class.lower()
+
+ if device_class in ["i", "1"]:
+ pathway_data = self.EU_MDR_PATHWAYS["I"]
+ class_key = "I"
+ elif device_class in ["iia", "2a"]:
+ pathway_data = self.EU_MDR_PATHWAYS["IIa"]
+ class_key = "IIa"
+ elif device_class in ["iib", "2b"]:
+ pathway_data = self.EU_MDR_PATHWAYS["IIb"]
+ class_key = "IIb"
+ else:
+ pathway_data = self.EU_MDR_PATHWAYS["III"]
+ class_key = "III"
+
+ # Adjust for implantables
+ if device.implantable and class_key in ["IIa", "IIb"]:
+ pathway_data = self.EU_MDR_PATHWAYS["III"]
+ self.analysis_warnings.append(
+ "Implantable devices are typically up-classified to Class III under EU MDR"
+ )
+
+ return PathwayOption(
+ pathway_name=pathway_data["pathway"],
+ market="EU-MDR",
+ estimated_timeline_months=pathway_data["timeline"],
+ estimated_cost_usd=pathway_data["cost"],
+ key_requirements=pathway_data["requirements"],
+ advantages=self._get_eu_advantages(pathway_data["pathway"], device),
+ risks=self._get_eu_risks(pathway_data["pathway"], device),
+ recommendation_level="Recommended"
+ )
+
+ def _get_fda_advantages(self, pathway: str, device: DeviceProfile) -> List[str]:
+ advantages = []
+ if "510(k)" in pathway:
+ advantages.extend([
+ "Well-established pathway with clear guidance",
+ "Predictable review timeline",
+ "Lower clinical evidence requirements vs PMA"
+ ])
+ if device.predicate_available:
+ advantages.append("Predicate device identified - streamlined review")
+ elif "De Novo" in pathway:
+ advantages.extend([
+ "Creates new predicate for future 510(k) submissions",
+ "Appropriate for novel low-moderate risk devices",
+ "Can result in Class I or II classification"
+ ])
+ elif "PMA" in pathway:
+ advantages.extend([
+ "Strongest FDA approval - highest market credibility",
+ "Difficult for competitors to challenge",
+ "May qualify for breakthrough device benefits"
+ ])
+ elif "Breakthrough" in pathway:
+ advantages.extend([
+ "Priority review and interactive FDA engagement",
+ "Flexible clinical evidence requirements",
+ "Faster iterative development with FDA feedback"
+ ])
+ return advantages
+
+ def _get_fda_risks(self, pathway: str, device: DeviceProfile) -> List[str]:
+ risks = []
+ if "510(k)" in pathway:
+ risks.extend([
+ "Predicate device may be challenged",
+ "SE determination can be subjective"
+ ])
+ if device.software_component:
+ risks.append("Software documentation requirements increasing (Cybersecurity, AI/ML)")
+ elif "De Novo" in pathway:
+ risks.extend([
+ "Less predictable than 510(k)",
+ "May require more clinical data than expected",
+ "New special controls may be imposed"
+ ])
+ elif "PMA" in pathway:
+ risks.extend([
+ "Very expensive and time-consuming",
+ "Clinical trial risks and delays",
+ "Post-approval study requirements"
+ ])
+ if device.ai_ml_component:
+ risks.append("AI/ML components face evolving regulatory requirements")
+ return risks
+
+ def _get_eu_advantages(self, pathway: str, device: DeviceProfile) -> List[str]:
+ advantages = ["Access to entire EU/EEA market (27+ countries)"]
+ if "Self-declaration" in pathway:
+ advantages.extend([
+ "No Notified Body involvement required",
+ "Fastest path to EU market",
+ "Lowest cost option"
+ ])
+ elif "IIa" in pathway:
+ advantages.append("Moderate regulatory burden with broad market access")
+ elif "IIb" in pathway or "III" in pathway:
+ advantages.extend([
+ "Strong market credibility with NB certification",
+ "Recognized globally for regulatory quality"
+ ])
+ return advantages
+
+ def _get_eu_risks(self, pathway: str, device: DeviceProfile) -> List[str]:
+ risks = []
+ if "Self-declaration" not in pathway:
+ risks.extend([
+ "Limited Notified Body capacity - long wait times",
+ "Notified Body costs increasing under MDR"
+ ])
+ risks.append("MDR transition still creating uncertainty")
+ if device.software_component:
+ risks.append("EU AI Act may apply to AI/ML medical devices")
+ return risks
+
+ def determine_optimal_sequence(self, pathways: List[PathwayOption], device: DeviceProfile) -> List[str]:
+ """Determine optimal submission sequence across markets."""
+ # General principle: Start with fastest/cheapest, use data for subsequent submissions
+ sequence = []
+
+ # Sort by timeline (fastest first)
+ sorted_pathways = sorted(pathways, key=lambda p: p.estimated_timeline_months[0])
+
+ # FDA first if 510(k) - well recognized globally
+ fda_pathway = next((p for p in pathways if p.market == "US-FDA"), None)
+ eu_pathway = next((p for p in pathways if p.market == "EU-MDR"), None)
+
+ if fda_pathway and "510(k)" in fda_pathway.pathway_name:
+ sequence.append("1. US-FDA 510(k) first - clearance recognized globally, data reusable")
+ if eu_pathway:
+ sequence.append("2. EU-MDR - use FDA data in clinical evaluation")
+ elif eu_pathway and "Self-declaration" in eu_pathway.pathway_name:
+ sequence.append("1. EU-MDR (Class I self-declaration) - fastest market entry")
+ if fda_pathway:
+ sequence.append("2. US-FDA - use EU experience and data")
+ else:
+ for i, p in enumerate(sorted_pathways, 1):
+ sequence.append(f"{i}. {p.market} ({p.pathway_name})")
+
+ return sequence
+
+ def analyze(self, device: DeviceProfile) -> PathwayAnalysis:
+ """Perform complete pathway analysis."""
+ self.analysis_warnings = []
+ pathways = []
+
+ for market in device.target_markets:
+ if "FDA" in market or "US" in market:
+ pathways.append(self.analyze_fda_pathway(device))
+ elif "MDR" in market or "EU" in market:
+ pathways.append(self.analyze_eu_mdr_pathway(device))
+ # Additional markets can be added here
+
+ sequence = self.determine_optimal_sequence(pathways, device)
+
+ total_timeline_min = sum(p.estimated_timeline_months[0] for p in pathways)
+ total_timeline_max = sum(p.estimated_timeline_months[1] for p in pathways)
+ total_cost_min = sum(p.estimated_cost_usd[0] for p in pathways)
+ total_cost_max = sum(p.estimated_cost_usd[1] for p in pathways)
+
+ csf = [
+ "Early engagement with regulators (Pre-Sub/Scientific Advice)",
+ "Robust QMS (ISO 13485) in place before submissions",
+ "Clinical evidence strategy aligned with target markets",
+ "Cybersecurity and software documentation (if applicable)"
+ ]
+
+ if device.ai_ml_component:
+ csf.append("AI/ML transparency and bias documentation")
+
+ return PathwayAnalysis(
+ device=device,
+ recommended_pathways=pathways,
+ optimal_sequence=sequence,
+ total_timeline_months=(total_timeline_min, total_timeline_max),
+ total_estimated_cost=(total_cost_min, total_cost_max),
+ critical_success_factors=csf,
+ warnings=self.analysis_warnings
+ )
+
+
+def format_analysis_text(analysis: PathwayAnalysis) -> str:
+ """Format analysis as readable text report."""
+ lines = [
+ "=" * 70,
+ "REGULATORY PATHWAY ANALYSIS REPORT",
+ "=" * 70,
+ f"Device: {analysis.device.device_name}",
+ f"Intended Use: {analysis.device.intended_use}",
+ f"Device Class: {analysis.device.device_class}",
+ f"Target Markets: {', '.join(analysis.device.target_markets)}",
+ "",
+ "DEVICE CHARACTERISTICS",
+ "-" * 40,
+ f" Novel Technology: {'Yes' if analysis.device.novel_technology else 'No'}",
+ f" Predicate Available: {'Yes' if analysis.device.predicate_available else 'No'}",
+ f" Implantable: {'Yes' if analysis.device.implantable else 'No'}",
+ f" Life-Sustaining: {'Yes' if analysis.device.life_sustaining else 'No'}",
+ f" Software/AI Component: {'Yes' if analysis.device.software_component or analysis.device.ai_ml_component else 'No'}",
+ f" Sterile: {'Yes' if analysis.device.sterile else 'No'}",
+ "",
+ "RECOMMENDED PATHWAYS",
+ "-" * 40,
+ ]
+
+ for pathway in analysis.recommended_pathways:
+ lines.extend([
+ "",
+ f" [{pathway.market}] {pathway.pathway_name}",
+ f" Recommendation: {pathway.recommendation_level}",
+ f" Timeline: {pathway.estimated_timeline_months[0]}-{pathway.estimated_timeline_months[1]} months",
+ f" Estimated Cost: ${pathway.estimated_cost_usd[0]:,} - ${pathway.estimated_cost_usd[1]:,}",
+ f" Key Requirements:",
+ ])
+ for req in pathway.key_requirements:
+ lines.append(f" • {req}")
+ lines.append(f" Advantages:")
+ for adv in pathway.advantages:
+ lines.append(f" + {adv}")
+ lines.append(f" Risks:")
+ for risk in pathway.risks:
+ lines.append(f" ! {risk}")
+
+ lines.extend([
+ "",
+ "OPTIMAL SUBMISSION SEQUENCE",
+ "-" * 40,
+ ])
+ for step in analysis.optimal_sequence:
+ lines.append(f" {step}")
+
+ lines.extend([
+ "",
+ "TOTAL ESTIMATES",
+ "-" * 40,
+ f" Combined Timeline: {analysis.total_timeline_months[0]}-{analysis.total_timeline_months[1]} months",
+ f" Combined Cost: ${analysis.total_estimated_cost[0]:,} - ${analysis.total_estimated_cost[1]:,}",
+ "",
+ "CRITICAL SUCCESS FACTORS",
+ "-" * 40,
+ ])
+ for i, factor in enumerate(analysis.critical_success_factors, 1):
+ lines.append(f" {i}. {factor}")
+
+ if analysis.warnings:
+ lines.extend([
+ "",
+ "WARNINGS",
+ "-" * 40,
+ ])
+ for warning in analysis.warnings:
+ lines.append(f" ⚠ {warning}")
+
+ lines.append("=" * 70)
+ return "\n".join(lines)
+
+
+def interactive_mode():
+ """Interactive device profiling."""
+ print("=" * 60)
+ print("Regulatory Pathway Analyzer - Interactive Mode")
+ print("=" * 60)
+
+ device = DeviceProfile(
+ device_name=input("\nDevice Name: ").strip(),
+ intended_use=input("Intended Use: ").strip(),
+ device_class=input("Device Class (I/IIa/IIb/III): ").strip(),
+ novel_technology=input("Novel technology? (y/n): ").strip().lower() == 'y',
+ predicate_available=input("Predicate device available? (y/n): ").strip().lower() == 'y',
+ implantable=input("Implantable? (y/n): ").strip().lower() == 'y',
+ life_sustaining=input("Life-sustaining? (y/n): ").strip().lower() == 'y',
+ software_component=input("Software component? (y/n): ").strip().lower() == 'y',
+ ai_ml_component=input("AI/ML component? (y/n): ").strip().lower() == 'y',
+ )
+
+ markets = input("Target markets (comma-separated, e.g., US-FDA,EU-MDR): ").strip()
+ if markets:
+ device.target_markets = [m.strip() for m in markets.split(",")]
+
+ analyzer = RegulatoryPathwayAnalyzer()
+ analysis = analyzer.analyze(device)
+ print("\n" + format_analysis_text(analysis))
+
+
+def main():
+ parser = argparse.ArgumentParser(description="Regulatory Pathway Analyzer for Medical Devices")
+ parser.add_argument("--device-name", type=str, help="Device name")
+ parser.add_argument("--device-class", type=str, choices=["I", "IIa", "IIb", "III"], help="Device classification")
+ parser.add_argument("--predicate", type=str, choices=["yes", "no"], help="Predicate device available")
+ parser.add_argument("--novel", action="store_true", help="Novel technology")
+ parser.add_argument("--implantable", action="store_true", help="Implantable device")
+ parser.add_argument("--software", action="store_true", help="Software component")
+ parser.add_argument("--ai-ml", action="store_true", help="AI/ML component")
+ parser.add_argument("--market", type=str, default="all", help="Target market(s)")
+ parser.add_argument("--data", type=str, help="JSON file with device profile")
+ parser.add_argument("--output", choices=["text", "json"], default="text", help="Output format")
+ parser.add_argument("--interactive", action="store_true", help="Interactive mode")
+
+ args = parser.parse_args()
+
+ if args.interactive:
+ interactive_mode()
+ return
+
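+ # --data expects a JSON object whose keys mirror the DeviceProfile
+ # dataclass fields; a minimal sketch (fields omitted here fall back to
+ # the dataclass defaults):
+ # {
+ #   "device_name": "SmartGlucose Monitor Pro",
+ #   "intended_use": "Continuous glucose monitoring",
+ #   "device_class": "II",
+ #   "predicate_available": true,
+ #   "software_component": true,
+ #   "target_markets": ["US-FDA", "EU-MDR"]
+ # }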
+ if args.data:
+ with open(args.data) as f:
+ data = json.load(f)
+ device = DeviceProfile(**data)
+ elif args.device_class:
+ device = DeviceProfile(
+ device_name=args.device_name or "Unnamed Device",
+ intended_use="Medical device",
+ device_class=args.device_class,
+ novel_technology=args.novel,
+ predicate_available=args.predicate == "yes" if args.predicate else True,
+ implantable=args.implantable,
+ software_component=args.software,
+ ai_ml_component=args.ai_ml,
+ )
+ if args.market != "all":
+ device.target_markets = [m.strip() for m in args.market.split(",")]
+ else:
+ # Demo mode
+ device = DeviceProfile(
+ device_name="SmartGlucose Monitor Pro",
+ intended_use="Continuous glucose monitoring for diabetes management",
+ device_class="II",
+ novel_technology=False,
+ predicate_available=True,
+ software_component=True,
+ ai_ml_component=True,
+ target_markets=["US-FDA", "EU-MDR"]
+ )
+
+ analyzer = RegulatoryPathwayAnalyzer()
+ analysis = analyzer.analyze(device)
+
+ if args.output == "json":
+ result = {
+ "device": asdict(analysis.device),
+ "pathways": [asdict(p) for p in analysis.recommended_pathways],
+ "optimal_sequence": analysis.optimal_sequence,
+ "total_timeline_months": list(analysis.total_timeline_months),
+ "total_estimated_cost": list(analysis.total_estimated_cost),
+ "critical_success_factors": analysis.critical_success_factors,
+ "warnings": analysis.warnings
+ }
+ print(json.dumps(result, indent=2))
+ else:
+ print(format_analysis_text(analysis))
+
+
+if __name__ == "__main__":
+ main()
diff --git a/ra-qm-team/risk-management-specialist/scripts/fmea_analyzer.py b/ra-qm-team/risk-management-specialist/scripts/fmea_analyzer.py
new file mode 100644
index 0000000..6db0819
--- /dev/null
+++ b/ra-qm-team/risk-management-specialist/scripts/fmea_analyzer.py
@@ -0,0 +1,442 @@
+#!/usr/bin/env python3
+"""
+FMEA Analyzer - Failure Mode and Effects Analysis for medical device risk management.
+
+Supports Design FMEA (dFMEA) and Process FMEA (pFMEA) per ISO 14971 and IEC 60812.
+Calculates Risk Priority Numbers (RPN), identifies critical items, and generates
+risk reduction recommendations.
+
+Usage:
+ python fmea_analyzer.py --data fmea_input.json
+ python fmea_analyzer.py --data fmea_input.json --output json
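+
+Input JSON structure for --data (keys mirror the FMEAEntry fields defined
+below; the values shown are illustrative only):
+ {
+ "product_process": "Insulin Pump Model X200",
+ "team": ["Quality Engineer", "Risk Manager"],
+ "entries": [
+ {
+ "item_process": "Battery Module",
+ "function": "Provide power for 8 hours",
+ "failure_mode": "Premature battery drain",
+ "effect": "Device shuts down during procedure",
+ "severity": 8,
+ "cause": "Cell degradation",
+ "occurrence": 4,
+ "current_controls": "Incoming battery testing",
+ "detection": 5
+ }
+ ]
+ }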
+"""
+
+import argparse
+import json
+from dataclasses import dataclass, field, asdict
+from typing import List, Dict
+from enum import Enum
+from datetime import datetime
+
+
+class FMEAType(Enum):
+ DESIGN = "Design FMEA"
+ PROCESS = "Process FMEA"
+
+
+class Severity(Enum):
+ INCONSEQUENTIAL = 1
+ MINOR = 2
+ MODERATE = 3
+ SIGNIFICANT = 4
+ SERIOUS = 5
+ CRITICAL = 6
+ SERIOUS_HAZARD = 7
+ HAZARDOUS = 8
+ HAZARDOUS_NO_WARNING = 9
+ CATASTROPHIC = 10
+
+
+class Occurrence(Enum):
+ REMOTE = 1
+ LOW = 2
+ LOW_MODERATE = 3
+ MODERATE = 4
+ MODERATE_HIGH = 5
+ HIGH = 6
+ VERY_HIGH = 7
+ EXTREMELY_HIGH = 8
+ ALMOST_CERTAIN = 9
+ INEVITABLE = 10
+
+
+class Detection(Enum):
+ ALMOST_CERTAIN = 1
+ VERY_HIGH = 2
+ HIGH = 3
+ MODERATE_HIGH = 4
+ MODERATE = 5
+ LOW_MODERATE = 6
+ LOW = 7
+ VERY_LOW = 8
+ REMOTE = 9
+ ABSOLUTELY_UNCERTAIN = 10
+
+
+@dataclass
+class FMEAEntry:
+ """Single FMEA line item."""
+ item_process: str
+ function: str
+ failure_mode: str
+ effect: str
+ severity: int
+ cause: str
+ occurrence: int
+ current_controls: str
+ detection: int
+ rpn: int = 0
+ criticality: str = ""
+ recommended_actions: List[str] = field(default_factory=list)
+ responsibility: str = ""
+ target_date: str = ""
+ actions_taken: str = ""
+ revised_severity: int = 0
+ revised_occurrence: int = 0
+ revised_detection: int = 0
+ revised_rpn: int = 0
+
+ def calculate_rpn(self):
+ self.rpn = self.severity * self.occurrence * self.detection
+ if self.severity >= 8:
+ self.criticality = "CRITICAL"
+ elif self.rpn >= 200:
+ self.criticality = "HIGH"
+ elif self.rpn >= 100:
+ self.criticality = "MEDIUM"
+ else:
+ self.criticality = "LOW"
+
+ def calculate_revised_rpn(self):
+ if self.revised_severity and self.revised_occurrence and self.revised_detection:
+ self.revised_rpn = self.revised_severity * self.revised_occurrence * self.revised_detection
+
+
+@dataclass
+class FMEAReport:
+ """Complete FMEA analysis report."""
+ fmea_type: str
+ product_process: str
+ team: List[str]
+ date: str
+ entries: List[FMEAEntry]
+ summary: Dict
+ risk_reduction_actions: List[Dict]
+
+
+class FMEAAnalyzer:
+ """Analyzes FMEA data and generates risk assessments."""
+
+ # RPN thresholds used to select entries for risk reduction actions.
+ # Note: criticality bands themselves are assigned in FMEAEntry.calculate_rpn
+ # (severity >= 8 -> CRITICAL, RPN >= 200 -> HIGH, RPN >= 100 -> MEDIUM).
+ RPN_CRITICAL = 200
+ RPN_HIGH = 100
+ RPN_MEDIUM = 50
+
+ def __init__(self, fmea_type: FMEAType = FMEAType.DESIGN):
+ self.fmea_type = fmea_type
+
+ def analyze_entries(self, entries: List[FMEAEntry]) -> Dict:
+ """Analyze all FMEA entries and generate summary."""
+ for entry in entries:
+ entry.calculate_rpn()
+ entry.calculate_revised_rpn()
+
+ rpns = [e.rpn for e in entries if e.rpn > 0]
+ revised_rpns = [e.revised_rpn for e in entries if e.revised_rpn > 0]
+
+ critical = [e for e in entries if e.criticality == "CRITICAL"]
+ high = [e for e in entries if e.criticality == "HIGH"]
+ medium = [e for e in entries if e.criticality == "MEDIUM"]
+
+ # Severity distribution
+ sev_dist = {}
+ for e in entries:
+ sev_range = "1-3 (Low)" if e.severity <= 3 else "4-6 (Medium)" if e.severity <= 6 else "7-10 (High)"
+ sev_dist[sev_range] = sev_dist.get(sev_range, 0) + 1
+
+ summary = {
+ "total_entries": len(entries),
+ "rpn_statistics": {
+ "min": min(rpns) if rpns else 0,
+ "max": max(rpns) if rpns else 0,
+ "average": round(sum(rpns) / len(rpns), 1) if rpns else 0,
+ "median": sorted(rpns)[len(rpns) // 2] if rpns else 0
+ },
+ "risk_distribution": {
+ "critical_severity": len(critical),
+ "high_rpn": len(high),
+ "medium_rpn": len(medium),
+ "low_rpn": len(entries) - len(critical) - len(high) - len(medium)
+ },
+ "severity_distribution": sev_dist,
+ "top_risks": [
+ {
+ "item": e.item_process,
+ "failure_mode": e.failure_mode,
+ "rpn": e.rpn,
+ "severity": e.severity
+ }
+ for e in sorted(entries, key=lambda x: x.rpn, reverse=True)[:5]
+ ]
+ }
+
+ if revised_rpns:
+ # Compare against the original RPNs of the revised entries only, so a
+ # partially revised FMEA is not credited with improvement on untouched rows
+ orig_rpns_of_revised = [e.rpn for e in entries if e.revised_rpn > 0]
+ summary["revised_rpn_statistics"] = {
+ "min": min(revised_rpns),
+ "max": max(revised_rpns),
+ "average": round(sum(revised_rpns) / len(revised_rpns), 1),
+ "improvement": round((sum(orig_rpns_of_revised) - sum(revised_rpns)) / sum(orig_rpns_of_revised) * 100, 1) if sum(orig_rpns_of_revised) else 0
+ }
+
+ return summary
+
+ def generate_risk_reduction_actions(self, entries: List[FMEAEntry]) -> List[Dict]:
+ """Generate recommended risk reduction actions."""
+ actions = []
+
+ # Sort by RPN descending
+ sorted_entries = sorted(entries, key=lambda e: e.rpn, reverse=True)
+
+ for entry in sorted_entries[:10]: # Top 10 risks
+ if entry.rpn >= self.RPN_HIGH or entry.severity >= 8:
+ strategies = []
+
+ # Severity reduction strategies (highest priority for high severity)
+ if entry.severity >= 7:
+ strategies.append({
+ "type": "Severity Reduction",
+ "action": f"Redesign {entry.item_process} to eliminate failure mode: {entry.failure_mode}",
+ "priority": "Highest",
+ "expected_impact": "May reduce severity by 2-4 points"
+ })
+
+ # Occurrence reduction strategies
+ if entry.occurrence >= 5:
+ strategies.append({
+ "type": "Occurrence Reduction",
+ "action": f"Implement preventive controls for cause: {entry.cause}",
+ "priority": "High",
+ "expected_impact": f"Target occurrence reduction from {entry.occurrence} to {max(1, entry.occurrence - 3)}"
+ })
+
+ # Detection improvement strategies
+ if entry.detection >= 5:
+ strategies.append({
+ "type": "Detection Improvement",
+ "action": f"Enhance detection methods: {entry.current_controls}",
+ "priority": "Medium",
+ "expected_impact": f"Target detection improvement from {entry.detection} to {max(1, entry.detection - 3)}"
+ })
+
+ actions.append({
+ "item": entry.item_process,
+ "failure_mode": entry.failure_mode,
+ "current_rpn": entry.rpn,
+ "current_severity": entry.severity,
+ "strategies": strategies
+ })
+
+ return actions
+
+ def create_entry_from_dict(self, data: Dict) -> FMEAEntry:
+ """Create FMEA entry from dictionary."""
+ entry = FMEAEntry(
+ item_process=data.get("item_process", ""),
+ function=data.get("function", ""),
+ failure_mode=data.get("failure_mode", ""),
+ effect=data.get("effect", ""),
+ severity=data.get("severity", 1),
+ cause=data.get("cause", ""),
+ occurrence=data.get("occurrence", 1),
+ current_controls=data.get("current_controls", ""),
+ detection=data.get("detection", 1),
+ recommended_actions=data.get("recommended_actions", []),
+ responsibility=data.get("responsibility", ""),
+ target_date=data.get("target_date", ""),
+ actions_taken=data.get("actions_taken", ""),
+ revised_severity=data.get("revised_severity", 0),
+ revised_occurrence=data.get("revised_occurrence", 0),
+ revised_detection=data.get("revised_detection", 0)
+ )
+ entry.calculate_rpn()
+ entry.calculate_revised_rpn()
+ return entry
+
+ def generate_report(self, product_process: str, team: List[str], entries_data: List[Dict]) -> FMEAReport:
+ """Generate complete FMEA report."""
+ entries = [self.create_entry_from_dict(e) for e in entries_data]
+ summary = self.analyze_entries(entries)
+ actions = self.generate_risk_reduction_actions(entries)
+
+ return FMEAReport(
+ fmea_type=self.fmea_type.value,
+ product_process=product_process,
+ team=team,
+ date=datetime.now().strftime("%Y-%m-%d"),
+ entries=entries,
+ summary=summary,
+ risk_reduction_actions=actions
+ )
+
+
+def format_fmea_text(report: FMEAReport) -> str:
+ """Format FMEA report as text."""
+ lines = [
+ "=" * 80,
+ f"{report.fmea_type.upper()} REPORT",
+ "=" * 80,
+ f"Product/Process: {report.product_process}",
+ f"Date: {report.date}",
+ f"Team: {', '.join(report.team)}",
+ "",
+ "SUMMARY",
+ "-" * 60,
+ f"Total Failure Modes Analyzed: {report.summary['total_entries']}",
+ f"Critical Severity (≥8): {report.summary['risk_distribution']['critical_severity']}",
+ f"High RPN (≥100): {report.summary['risk_distribution']['high_rpn']}",
+ f"Medium RPN (50-99): {report.summary['risk_distribution']['medium_rpn']}",
+ "",
+ "RPN Statistics:",
+ f" Min: {report.summary['rpn_statistics']['min']}",
+ f" Max: {report.summary['rpn_statistics']['max']}",
+ f" Average: {report.summary['rpn_statistics']['average']}",
+ f" Median: {report.summary['rpn_statistics']['median']}",
+ ]
+
+ if "revised_rpn_statistics" in report.summary:
+ lines.extend([
+ "",
+ "Revised RPN Statistics:",
+ f" Average: {report.summary['revised_rpn_statistics']['average']}",
+ f" Improvement: {report.summary['revised_rpn_statistics']['improvement']}%",
+ ])
+
+ lines.extend([
+ "",
+ "TOP RISKS",
+ "-" * 60,
+ f"{'Item':<25} {'Failure Mode':<30} {'RPN':>5} {'Sev':>4}",
+ "-" * 66,
+ ])
+ for risk in report.summary.get("top_risks", []):
+ lines.append(f"{risk['item'][:24]:<25} {risk['failure_mode'][:29]:<30} {risk['rpn']:>5} {risk['severity']:>4}")
+
+ lines.extend([
+ "",
+ "FMEA ENTRIES",
+ "-" * 60,
+ ])
+
+ for i, entry in enumerate(report.entries, 1):
+ marker = "⚠" if entry.criticality in ["CRITICAL", "HIGH"] else "•"
+ lines.extend([
+ f"",
+ f"{marker} Entry {i}: {entry.item_process} - {entry.function}",
+ f" Failure Mode: {entry.failure_mode}",
+ f" Effect: {entry.effect}",
+ f" Cause: {entry.cause}",
+ f" S={entry.severity} × O={entry.occurrence} × D={entry.detection} = RPN {entry.rpn} [{entry.criticality}]",
+ f" Current Controls: {entry.current_controls}",
+ ])
+ if entry.recommended_actions:
+ lines.append(f" Recommended Actions:")
+ for action in entry.recommended_actions:
+ lines.append(f" → {action}")
+ if entry.revised_rpn > 0:
+ lines.append(f" Revised: S={entry.revised_severity} × O={entry.revised_occurrence} × D={entry.revised_detection} = RPN {entry.revised_rpn}")
+
+ if report.risk_reduction_actions:
+ lines.extend([
+ "",
+ "RISK REDUCTION RECOMMENDATIONS",
+ "-" * 60,
+ ])
+ for action in report.risk_reduction_actions:
+ lines.extend([
+ f"",
+ f" {action['item']} - {action['failure_mode']}",
+ f" Current RPN: {action['current_rpn']} (Severity: {action['current_severity']})",
+ ])
+ for strategy in action["strategies"]:
+ lines.append(f" [{strategy['priority']}] {strategy['type']}: {strategy['action']}")
+ lines.append(f" Expected: {strategy['expected_impact']}")
+
+ lines.append("=" * 80)
+ return "\n".join(lines)
+
+
+def main():
+ parser = argparse.ArgumentParser(description="FMEA Analyzer for Medical Device Risk Management")
+ parser.add_argument("--type", choices=["design", "process"], default="design", help="FMEA type")
+ parser.add_argument("--data", type=str, help="JSON file with FMEA data")
+ parser.add_argument("--output", choices=["text", "json"], default="text", help="Output format")
+ parser.add_argument("--interactive", action="store_true", help="Interactive mode")
+
+ args = parser.parse_args()
+
+ fmea_type = FMEAType.DESIGN if args.type == "design" else FMEAType.PROCESS
+ analyzer = FMEAAnalyzer(fmea_type)
+
+ if args.data:
+ with open(args.data) as f:
+ data = json.load(f)
+ report = analyzer.generate_report(
+ product_process=data.get("product_process", ""),
+ team=data.get("team", []),
+ entries_data=data.get("entries", [])
+ )
+ else:
+ # Demo data
+ demo_entries = [
+ {
+ "item_process": "Battery Module",
+ "function": "Provide power for 8 hours",
+ "failure_mode": "Premature battery drain",
+ "effect": "Device shuts down during procedure",
+ "severity": 8,
+ "cause": "Cell degradation due to temperature cycling",
+ "occurrence": 4,
+ "current_controls": "Incoming battery testing, temperature spec in IFU",
+ "detection": 5,
+ "recommended_actions": ["Add battery health monitoring algorithm", "Implement low-battery warning at 20%"]
+ },
+ {
+ "item_process": "Software Controller",
+ "function": "Control device operation",
+ "failure_mode": "Firmware crash",
+ "effect": "Device becomes unresponsive",
+ "severity": 7,
+ "cause": "Memory leak in logging module",
+ "occurrence": 3,
+ "current_controls": "Code review, unit testing, integration testing",
+ "detection": 4,
+ "recommended_actions": ["Add watchdog timer", "Implement memory usage monitoring"]
+ },
+ {
+ "item_process": "Sterile Packaging",
+ "function": "Maintain sterility until use",
+ "failure_mode": "Seal breach",
+ "effect": "Device contamination",
+ "severity": 9,
+ "cause": "Sealing jaw temperature variation",
+ "occurrence": 2,
+ "current_controls": "Seal integrity testing (dye penetration), SPC on sealing process",
+ "detection": 3,
+ "recommended_actions": ["Add real-time seal temperature monitoring", "Implement 100% seal integrity testing"]
+ }
+ ]
+ report = analyzer.generate_report(
+ product_process="Insulin Pump Model X200",
+ team=["Quality Engineer", "R&D Lead", "Manufacturing Engineer", "Risk Manager"],
+ entries_data=demo_entries
+ )
+
+ if args.output == "json":
+ result = {
+ "fmea_type": report.fmea_type,
+ "product_process": report.product_process,
+ "date": report.date,
+ "team": report.team,
+ "entries": [asdict(e) for e in report.entries],
+ "summary": report.summary,
+ "risk_reduction_actions": report.risk_reduction_actions
+ }
+ print(json.dumps(result, indent=2))
+ else:
+ print(format_fmea_text(report))
+
+
+if __name__ == "__main__":
+ main()