
AI Prompt Playbooks

Copy-paste ready prompts for every BVDLC phase

Purpose: Turn AI from merely impressive to genuinely useful with context-aware prompts
Compatible with: Claude, GPT-4, Gemini, and other AI assistants
Key Principle: Always include context from previous phases

How to Use These Prompts

  1. Pull context first: Grab the latest artifacts from the 00-05 folders so the AI reflects reality (see the helper sketch after this list).
  2. Tell the AI who you are: Mention whether you’re acting as product, architecture, QA, or ops so tone matches.
  3. Review like an engineer: AI output is a draft—validate feasibility, ethics, and compliance.
  4. Commit back to context: Anything you keep must land in the context folder with traceability.
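If your context lives in plain folders, a small helper can pull the latest artifacts and splice them into a prompt before you paste it anywhere. A minimal Python sketch, assuming a flat `context/00-…` layout of markdown artifacts; the folder names and the `build_prompt` wrapper are illustrative, not part of the playbook:

```python
from pathlib import Path

# Hypothetical layout: context/00-business-context ... context/05-monitoring,
# each folder holding markdown artifacts committed in earlier phases.
CONTEXT_ROOT = Path("context")

def build_prompt(task: str, phases: list[str]) -> str:
    """Concatenate the latest artifact from each requested phase folder
    ahead of the task, so the AI sees real project context first."""
    sections = []
    for phase in phases:
        folder = CONTEXT_ROOT / phase
        files = sorted(folder.glob("*.md"), key=lambda p: p.stat().st_mtime)
        if files:
            latest = files[-1]
            sections.append(f"## {phase} ({latest.name})\n{latest.read_text()}")
    return "\n\n".join(sections) + "\n\n## Task\n" + task

# Example: Phase 2 work should always see Phase 0 and Phase 1 context.
print(build_prompt("Design the production architecture.",
                   ["00-business-context", "01-prototype"]))
```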

Adapt Prompts to Your Team

Startups < 50 people

  • Use short context snippets; reference real customers rather than formal KPIs.
  • Run prompts directly inside your IDE/terminal for speed.
  • Archive AI output in Git to build your first playbook library.

Growth-Stage / Multi-team

  • Embed context-folder links in every prompt so squads don’t diverge.
  • Rotate a “prompt steward” per sprint to review/approve shared prompts.
  • Pair prompts with Jira templates so work items trace to AI output.

Enterprise / Regulated

  • Route prompts through approved AI environments; redact PII before pasting (see the redaction sketch below).
  • Add compliance checkpoints (SOX, HIPAA, etc.) inside the prompt body.
  • Save prompt + response as evidence inside 00/04 folders for audits.
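For the PII-redaction step, even a lightweight pre-paste scrubber helps. A minimal sketch using regular expressions; the patterns shown (emails, US-style SSNs and phone numbers) are illustrative assumptions and not a substitute for your approved redaction tooling:

```python
import re

# Illustrative patterns only; extend to whatever PII classes your policy covers.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with labeled placeholders before pasting into an AI tool."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{label}]", text)
    return text

print(redact("Adjuster jane.doe@example.com, SSN 123-45-6789, cell 555-867-5309"))
```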


Phase 0: Business Context & Objectives

Quick Start:
Summarize these notes into a Phase 0 brief covering:
- core business problem & KPI target
- target users/stakeholders
- constraints + top 3 risks
Role cues: Product highlights customer signal + KPI impact; Executives confirm ROI + risk appetite; Ops/Compliance list approvals or policies that must be honored.
Sample Output (excerpt):
Problem: Claims backlog costing $2.4M/quarter.
Target KPI: Cycle time 14d → 5d by Q3.
Users: Field adjusters + compliance.
Constraints: SOC2 logging, $450k budget, legal sign-off.
Executive Intent Extraction

Extract Business Context from Documents

Use this when you have meeting notes, emails, or strategy documents and need to extract actionable business context.

Quick Start:
Summarize the attached notes into a Phase 0 brief with:
- core problem & KPI
- target users
- constraints & risks
Keep it bullet-friendly so it can be pasted into the AI and committed to the context repo.
I need to extract and structure business context from the following information:

[PASTE: Meeting notes, emails, strategy documents, stakeholder conversations]

Please extract and structure this into a BVDLC Phase 0 artifact with:

1. **Core Business Problem** (2 sentences max)
   - What's broken and why it matters
   - Root cause, not just symptoms
2. **Target Users/Stakeholders**
   - Who is affected by this problem
   - Who benefits from solving it
3. **Success Metrics with Specific Targets**
   - Primary KPI: Current baseline → Target value by [date]
   - Secondary KPIs (if applicable)
   - How we'll measure success
4. **Constraints and Non-Negotiables**
   - Technical constraints
   - Business constraints (budget, timeline, resources)
   - Compliance/regulatory requirements
5. **Timeline and Dependencies**
   - Critical dates
   - External dependencies
   - Market/competitive pressures
6. **Stakeholder Expectations**
   - What does the executive sponsor expect?
   - What are the key concerns?
7. **Risk Register**
   - What could go wrong?
   - How likely? How severe?
   - Mitigation strategies

Format this as a structured document that AI can consume in subsequent phases. Make it specific and measurable; no vague goals.
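If you drive this prompt through an API rather than a chat window, the call is a thin wrapper around the template above. A minimal sketch using the Anthropic Python SDK; the model id, file paths, and the idea of storing the prompt in `prompts/phase0_extraction.md` are assumptions, and any provider's chat API works the same way:

```python
from pathlib import Path
from anthropic import Anthropic

# phase0_extraction.md holds the extraction prompt above; notes.md holds the raw meeting notes.
PHASE0_PROMPT = Path("prompts/phase0_extraction.md").read_text()
notes = Path("notes.md").read_text()
prompt = PHASE0_PROMPT.replace(
    "[PASTE: Meeting notes, emails, strategy documents, stakeholder conversations]", notes
)

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-20250514",  # assumed model id; use whatever your org has approved
    max_tokens=2000,
    messages=[{"role": "user", "content": prompt}],
)

# Commit the brief back to the context folder so later phases can reference it.
Path("context/00-business-context/phase0-brief.md").write_text(response.content[0].text)
```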

Phase 1: Ideation & Rapid Prototyping

Quick Start:
Draft a 4-hour prototype plan that lists:
- must-have flows
- stack suggestion
- 3-question user test script
- success criteria tied to KPI
Role cues: Product frames the value hypothesis; Engineering forces feasibility + tech stack; UX/Research ensures the user test plan is concrete.
Sample Output (excerpt):
Prototype focus: instant photo triage.
Stack: Next.js + Supabase + Vision API.
Test script: “Upload latest claim; what slowed you?”
Success: 3 adjusters finish flow & rate ≥4/5.
Rapid Prototype Generation

Build Working Prototype

Use this when Phase 0 is complete and you're ready to build working prototypes to validate feasibility.

**Phase 0 Business Context:**
- Problem: [BUSINESS PROBLEM FROM PHASE 0]
- Target Users: [WHO THIS IS FOR]
- Success Metrics: [PRIMARY KPI TO MOVE]
- Constraints: [KEY CONSTRAINTS]

**Task:** Build a working prototype that proves feasibility and validates the value hypothesis.

Build a working prototype that:
1. Demonstrates the core value proposition
2. Uses real or realistic data (not dummy data)
3. Can be tested with actual users
4. Can be built in < 4 hours
5. Proves technical feasibility

**Technology Stack:** [SPECIFY IF KNOWN, OR ASK AI TO RECOMMEND]

**Prototype Requirements:**
- Must solve: [CORE PROBLEM]
- Must include: [MINIMUM FEATURES]
- Must prove: [KEY ASSUMPTIONS]

**Also Generate:**
1. Working prototype code
2. Setup/deployment instructions
3. User testing script with questions
4. Success criteria for validation

Make this fast and rough; we're validating direction, not building production quality.
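To make "< 4 hours" concrete: a throwaway prototype can be a single endpoint that exercises the core value path with realistic data. A minimal Flask sketch for the photo-triage example above; the route and triage heuristic are placeholders standing in for a vision-API call, and the sample output earlier assumes a Next.js/Supabase stack instead:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def triage(size_bytes: int) -> str:
    """Placeholder heuristic standing in for the vision-API call during the prototype."""
    if size_bytes < 50_000:
        return "needs-retake"        # too small to assess damage
    return "route-to-adjuster"

@app.post("/claims/photo")
def upload_photo():
    photo = request.files["photo"]
    decision = triage(len(photo.read()))
    # Value hypothesis: instant triage feedback cuts claim cycle time.
    return jsonify({"file": photo.filename, "decision": decision})

if __name__ == "__main__":
    app.run(debug=True)  # rough prototype; not production quality by design
```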

Phase 2: Architecture & Design

Quick Start:
Convert prototype learnings into a production blueprint:
- list services + responsibilities
- note integrations + security callouts
- highlight NFR targets + deployment sketch
Role cues: Architects own trade-offs & ADRs; Security/Compliance add required controls; Product confirms architecture still delivers KPI.
Sample Output (excerpt):
Services: Ingestion API, Vision worker, Claims hub.
Integrations: Guidewire, S3 evidence bucket.
NFRs: P95 < 2s, 99.9% uptime.
Deploy: EKS + async queue, blue/green releases.
Production Architecture Design

Transform Prototype to Production Architecture

Use this when prototype is validated and you need to design production-ready architecture.

**Phase 0 Business Context:** [PASTE: Business problem, success metrics, constraints from Phase 0]
**Phase 1 Prototype Results:** [PASTE: What we built, what we learned, user feedback summary]

**Validated Approach:**
- Prototype demonstrated: [WHAT WORKED]
- Users confirmed: [VALUE VALIDATION]
- Technical learnings: [WHAT WE DISCOVERED]

Now design a PRODUCTION-READY architecture based on this validated prototype:

1. **System Architecture**
   - High-level component diagram
   - Services and their responsibilities
   - Data stores and schemas
   - Integration points with existing systems
2. **Security Hardening**
   - Vulnerabilities identified in prototype
   - Recommended security patterns
   - Authentication/authorization approach
   - Data protection strategy
   - Compliance requirements (from Phase 0)
3. **Performance Optimization**
   - Bottlenecks identified in prototype
   - Scaling strategy (horizontal/vertical)
   - Caching approach
   - Database optimization
   - Target performance metrics
4. **Integration Mapping**
   - External system connections
   - API contracts and specifications
   - Data flows and transformations
   - Error handling and retry logic
5. **Non-Functional Requirements**
   - Availability targets (e.g., 99.9% uptime)
   - Performance targets (response time, throughput)
   - Scalability requirements
   - Disaster recovery approach
6. **Technology Stack**
   - Final technology choices with rationale
   - Why this stack serves Phase 0 objectives
   - Migration path from prototype (if different)
7. **Deployment Architecture**
   - Infrastructure requirements
   - Environment strategy (dev/staging/prod)
   - Cost projections
   - Operational considerations

Output: Production-ready architecture blueprint that:
- Builds on the validated prototype approach
- Addresses all gaps found during prototyping
- Meets Phase 0 business requirements
- Can be built by the team in [TIMELINE]
- Is maintainable by [OPERATIONS TEAM]

Format as structured markdown with diagrams (Mermaid format).
Architecture Decision Record

Document Architecture Decisions

Use this when making significant architecture decisions that need documentation.

Document this architecture decision using the ADR format:

**Decision Context:**
- Phase 0 Business Requirement: [WHAT BUSINESS NEED DRIVES THIS]
- Phase 1 Learnings: [WHAT PROTOTYPE TAUGHT US]
- Technical Challenge: [WHAT PROBLEM ARE WE SOLVING]

**Decision:** [WHAT WE'RE DECIDING]

**Options Considered:**

**Option 1: [NAME]**
- Description: [HOW THIS WORKS]
- Pros: [BENEFITS]
- Cons: [TRADE-OFFS]
- Estimated complexity: [HIGH/MEDIUM/LOW]
- Estimated cost impact: [$AMOUNT OR %]

**Option 2: [NAME]**
- Description: [HOW THIS WORKS]
- Pros: [BENEFITS]
- Cons: [TRADE-OFFS]
- Estimated complexity: [HIGH/MEDIUM/LOW]
- Estimated cost impact: [$AMOUNT OR %]

Generate an ADR that:
1. Compares options objectively
2. Recommends the best option with clear rationale
3. Explains how the recommendation serves Phase 0 business objectives
4. Documents trade-offs we're accepting
5. Defines success criteria for this decision
6. Includes a rollback strategy if the decision proves wrong

Format as: ADR-[NUMBER]-[SHORT-NAME].md

Phase 3: Planning & Work Breakdown

Quick Start:
Break the architecture into deliverables:
- ordered backlog with effort
- dependency map
- Definition of Done tied to KPIs
Role cues: Delivery/Program leads drive dependencies; Engineering leads map tasks to components; QA/DevOps add acceptance + release criteria.
Sample Output (excerpt):
Backlog:
1. Vision worker (3d)
2. Guidewire connector (4d)
Dependencies: data contract sign-off, SOC review.
DoD: latency ≤2s measured via synthetic tests.
Task Breakdown

Break Architecture into Tasks

Use this when architecture is complete and you need to break it into executable tasks.

**Phase 0 Business Context:** [PASTE: Business objectives, success metrics, constraints]
**Phase 2 Architecture:** [PASTE: System architecture, components, integration points]

Generate a comprehensive implementation plan:

**1. Component Prioritization**
Analyze which components:
- Deliver independent value (can be released alone)
- Prove the Phase 0 hypothesis fastest
- Have dependencies that must be built first
- Can be built in parallel vs. sequentially
Provide a prioritized build order with rationale.

**2. Task Breakdown for Each Component**
For each component, create tasks with:

| Task ID | Description | Acceptance Criteria | Input Requirements | Output Specifications | AI Suitable? | Estimated Effort | Dependencies |
|---------|-------------|---------------------|--------------------|-----------------------|--------------|------------------|--------------|
| T-001 | | Functional: / Business Value: | | | Yes/No | [Hours/Days] | [Task IDs] |

**Acceptance Criteria Format:**
- Functional: What it must do technically
- Business Value: How it serves Phase 0 objectives

**AI Suitability Assessment:**
- **Yes:** Clear specs, well-defined inputs/outputs, AI can generate
- **No:** Requires domain expertise, complex business logic, human judgment

**3. Execution Strategy**
- Parallel work streams: What can be built simultaneously
- Critical path: Tasks that block other work
- Risk mitigation: How we handle high-risk tasks
- Value delivery sequence: Order that delivers business value soonest

**4. AI Acceleration Plan**
For AI-suitable tasks:
- What context AI needs (link to Phase 2 docs)
- What prompts to use
- What validation humans must do

Output: Complete task backlog ready for Phase 4 implementation.
Format: Markdown tables + dependency diagram (Mermaid).
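The dependency map and critical path fall out naturally if tasks and their dependencies are captured as data. A minimal sketch using Python's standard-library graphlib to derive a valid build order from the task table; the task IDs and dependencies are invented for illustration:

```python
from graphlib import TopologicalSorter

# Task ID -> set of task IDs it depends on (illustrative backlog, not a real one).
dependencies = {
    "T-001": set(),                 # Vision worker
    "T-002": {"T-001"},             # Guidewire connector needs the worker's output contract
    "T-003": {"T-001"},             # Claims hub UI
    "T-004": {"T-002", "T-003"},    # End-to-end latency test (DoD: <= 2s)
}

order = list(TopologicalSorter(dependencies).static_order())
print("Build order:", order)  # tasks with no mutual dependency can run in parallel
```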

Phase 4: Build, Test & Deploy

Code Generation

Generate Implementation with Full Context

Use this when ready to implement a specific component with AI assistance.

**Phase 0 Business Context:** [PASTE: What business problem this code solves]
**Phase 2 Architecture:** [PASTE: Relevant architecture design, technology stack, patterns]

**Phase 3 Task:**
Task ID: [ID]
Description: [WHAT TO BUILD]
Acceptance Criteria: [FUNCTIONAL + BUSINESS VALUE REQUIREMENTS]
Input Requirements: [WHAT THIS NEEDS]
Output Specifications: [WHAT THIS PRODUCES]

**Technology Stack:** [FROM PHASE 2]
**Coding Standards:** [YOUR ORG STANDARDS OR LINK TO STYLE GUIDE]

Generate implementation for: [COMPONENT NAME]

**Requirements:**
1. **Follow Phase 2 Architecture Exactly**
   - Use specified design patterns
   - Adhere to component boundaries
   - Match integration specifications
2. **Serve Phase 0 Business Objective**
   - Every function should have a purpose traceable to business value
   - No over-engineering beyond requirements
3. **Include Comprehensive Error Handling**
   - Expected errors with user-friendly messages
   - Unexpected errors with logging and monitoring
   - Graceful degradation where appropriate
4. **Add Inline Documentation with Business Context**
   - Document WHY (business reason), not just WHAT (code does)
   - Include traceability comments to Phase 0/2/3
5. **Create Tests Alongside Code (TDD Approach)**
   - Unit tests for business logic
   - Integration tests for external dependencies
   - Test coverage > 80% for value-critical paths
6. **Ensure Traceability**
   - Comments linking to Phase 0 objectives
   - Comments linking to Phase 2 architecture decisions
   - Comments linking to Phase 3 acceptance criteria

Also generate:
- Configuration files
- Environment variables needed
- README with setup instructions and business justification
- Integration points documentation
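What "traceability comments" look like in practice is easy to show. A minimal Python sketch under assumed identifiers; the KPI, ADR number, task ID, and the `triage_claim_photo` function itself are carried over from the hypothetical claims example, not from your project:

```python
def triage_claim_photo(photo_bytes: bytes) -> str:
    """Classify an uploaded claim photo for routing.

    Traceability:
      Phase 0: reduces claim cycle time 14d -> 5d (primary KPI)
      Phase 2: ADR-003 (asynchronous Vision worker behind the Ingestion API)
      Phase 3: T-001, acceptance criterion "P95 latency <= 2s"
    """
    if not photo_bytes:
        # Expected error: surfaced to the adjuster as a retake message, not a stack trace.
        raise ValueError("Empty upload; ask the adjuster to retake the photo.")
    # WHY: instant triage is the value driver; the heuristic below is a placeholder
    # for the vision-model call specified in the Phase 2 architecture.
    return "route-to-adjuster" if len(photo_bytes) > 50_000 else "needs-retake"
```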
Comprehensive Testing

Generate Test Suite

Use this when code is written and you need comprehensive test coverage.

**Phase 0 Business Objectives:** [PASTE: Success metrics, KPIs to move]
**Phase 2 NFRs:** [PASTE: Performance targets, security requirements, integration specs]
**Phase 4 Implementation:** [PASTE: Brief description of what was built]
**Code Location:** [LINK OR PASTE CODE]

Generate a comprehensive test suite covering all five quality dimensions:

**1. Functional Testing**
- Happy path scenarios (expected usage)
- Edge cases from Phase 2 architecture
- Error conditions and exception handling
- User acceptance criteria from Phase 0

**2. Performance Testing**
- Load scenarios meeting Phase 2 NFR targets
- Stress testing to find breaking points
- Resource utilization (memory, CPU, network)
- Latency measurements under various loads
- Target: [SPECIFY FROM PHASE 2 NFRs]

**3. Security Testing**
- OWASP Top 10 vulnerability checks
- Authentication/authorization validation
- Input validation and sanitization
- Data protection verification (encryption, PII handling)
- Compliance requirements from Phase 0

**4. Integration Testing**
- API contracts from Phase 2
- Data flow validation across system boundaries
- External system integration points
- System boundary testing
- Mocking strategies for external dependencies

**5. Business Value Testing**
- Instrumentation to track Phase 0 KPIs
- Accuracy of business metric tracking
- Validation that code achieves Phase 0 success criteria
- A/B testing setup (if applicable)

Output:
- Executable test code with clear assertions
- Test data setup/teardown scripts
- Expected vs. actual results documentation
- Coverage report targeting >90% for value-critical paths

Format: [YOUR TEST FRAMEWORK - e.g., Jest, pytest, JUnit]
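The fifth dimension, business value testing, is the one teams skip most often, so here is a minimal pytest sketch; the `claims.triage` module, the `triage_claim_photo` function, and the 2-second latency target are all carried over from the hypothetical examples above:

```python
import time
import pytest

from claims.triage import triage_claim_photo  # hypothetical module from Phase 4

def test_small_photo_requests_retake():
    # Functional: acceptance criterion from the Phase 3 task.
    assert triage_claim_photo(b"x" * 10_000) == "needs-retake"

def test_empty_upload_raises_friendly_error():
    with pytest.raises(ValueError):
        triage_claim_photo(b"")

def test_triage_meets_phase2_latency_target():
    # Business value: instant triage only moves the cycle-time KPI if it is actually instant.
    start = time.perf_counter()
    triage_claim_photo(b"x" * 200_000)
    assert time.perf_counter() - start < 2.0  # Phase 2 NFR: P95 < 2s
```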

Phase 5: Monitoring & Value Tracking

Quick Start:
Set up value monitoring:
- dashboard metrics (business + tech)
- alert rules tied to KPIs
- weekly review agenda bullets
Role cues: Product/Business owns KPI narratives; Data/Analytics wire up instrumentation; Ops/SRE ensure runbooks + alerting close the loop.
Sample Output (excerpt):
Business KPI: claim cycle time → Grafana panel.
Alert: if P95 > 8h for 30 min, page duty manager.
Review agenda: KPI trend, blockers, next experiment.
Value Monitoring Setup

Set Up Value Tracking

Use this when system is deployed and you need to set up ongoing value tracking.

**Phase 0 Business Objectives:** [PASTE: Original KPIs, success metrics, targets]
**Phase 4 Deployment:** [PASTE: What was deployed, when, to what scale]

Set up continuous value monitoring:

**1. Business KPI Tracking**
For each Phase 0 KPI, configure:
- **Metric:** [KPI NAME]
- **Baseline:** [VALUE BEFORE DEPLOYMENT]
- **Target:** [PHASE 0 TARGET]
- **Current:** [CURRENT VALUE]
- **Trend:** [IMPROVING / FLAT / DEGRADING]
- **Measurement Method:** [HOW WE TRACK THIS]
- **Dashboard Link:** [WHERE TO VIEW]
- **Update Frequency:** [REAL-TIME / HOURLY / DAILY / WEEKLY]

**2. Value Realization Report Template**
Generate a weekly report template in markdown format.

**3. Feedback Loop Triggers**
Define when to trigger feedback to earlier phases:

**Trigger Phase 0 Reassessment:**
- KPIs degrading > [THRESHOLD]% for [DURATION]
- Business assumptions proven wrong
- Market conditions changed significantly

**Trigger Phase 1 New Feature:**
- User requests for specific capability > [THRESHOLD]
- Opportunity to improve KPI by [THRESHOLD]% identified
- Competitive gap emerged

**4. Dashboard Configuration**

**Executive Value Dashboard:**
- Business KPIs with trend lines
- ROI calculation (actual vs. projected)
- User adoption metrics
- System health summary (traffic light)

**Operations Health Dashboard:**
- Technical performance metrics
- Error rates and incident history
- Resource utilization and cost
- SLA compliance

**5. Alert Rules**

**Business Metric Alerts:**
- Primary KPI degradation: [THRESHOLD LOGIC] → Notify [STAKEHOLDER]
- User adoption below target: [THRESHOLD LOGIC] → Notify [PRODUCT OWNER]
- Cost overrun: [THRESHOLD LOGIC] → Notify [FINANCE + ENGINEERING]

Output: Complete Phase 5 monitoring configuration
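Wiring a Phase 0 KPI into a dashboard can be as small as one gauge. A minimal sketch using the prometheus_client library; the metric name, threshold, and notify function are assumptions chosen to match the claims example above, and a real alert rule would normally live in your monitoring stack rather than in application code:

```python
from prometheus_client import Gauge

# Phase 0 KPI: claim cycle time, baseline 14 days, target 5 days by Q3.
claim_cycle_time_days = Gauge(
    "claim_cycle_time_days",
    "Rolling average claim cycle time in days (Phase 0 primary KPI)",
)

def notify_duty_manager(days: float) -> None:
    # Hypothetical pager hook; in practice this would be an Alertmanager/PagerDuty rule.
    print(f"ALERT: claim cycle time {days:.1f}d breaches threshold, page duty manager")

def record_cycle_time(days: float) -> None:
    """Push the KPI to the metrics endpoint Grafana scrapes, then check the alert rule."""
    claim_cycle_time_days.set(days)
    if days > 8:  # illustrative degradation threshold
        notify_duty_manager(days)

record_cycle_time(6.5)   # trending toward the 5-day target
record_cycle_time(9.2)   # would trigger the alert path
```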
💡 Pro Tips for Using AI Prompts:
• Always include previous phase context—AI needs business context to be useful
• Start specific, get more specific if output is too generic
• Ask for 2-3 alternatives for AI to compare options
• Validate AI output against phase exit criteria
• Build a library of prompts that work well for your domain