KPI & Metrics Prompt Templates

AI prompt templates for defining KPIs and metrics. Track the right measurements for your business.

Overview

What you measure shapes what you optimize. These prompts help you define metrics that actually matter, avoid vanity metrics, and build measurement systems that drive good decisions. Use them when setting up new tracking, reviewing existing metrics, or trying to figure out how to measure something tricky.

Best Practices

1. Start with the decision you need to make, then work backward to the metric that would inform it
2. Define exactly how the metric is calculated. Ambiguity causes arguments later
3. Include what a good value looks like (target or benchmark)
4. Consider gaming: how could someone hit the metric while hurting the business?
5. Pair metrics that balance each other (speed AND quality, growth AND retention)

Prompt Templates

1. KPI Definition Framework

Help me define a KPI for [WHAT_YOU_WANT_TO_MEASURE].

Business goal: [THE_OUTCOME_YOU_CARE_ABOUT]
Who will use this metric: [AUDIENCE]
Decisions it should inform: [WHAT_WILL_YOU_DO_WITH_IT]
Data available: [WHAT_CAN_YOU_ACTUALLY_TRACK]

Provide: name, definition, formula, target-setting approach, and potential pitfalls.
Example: Help me define a KPI for customer health. Business goal: Predict and prevent churn before it happens. Who will use this metric: Customer Success team. Decisions it should inform: Which customers to prioritize for outreach. Data available: Product usage logs, support tickets, billing data, NPS scores.
**KPI: Customer Health Score**

**Definition**: A composite score (0-100) predicting a customer's likelihood to renew based on engagement and sentiment signals.

**Formula**:
```
Health Score = (Usage Score × 0.4) + (Engagement Score × 0.3) + (Support Score × 0.2) + (Sentiment Score × 0.1)

Where:
- Usage Score (0-100): Based on login frequency and feature adoption vs. their plan tier's typical usage
- Engagement Score (0-100): Based on stakeholder breadth (# of active users / licensed seats)
- Support Score (0-100): Inverse of ticket volume and severity, adjusted for company size
- Sentiment Score (0-100): Latest NPS response, normalized
```

**Target-setting approach**:
- Red (0-40): High churn risk, immediate intervention needed
- Yellow (41-70): Moderate risk, proactive outreach recommended
- Green (71-100): Healthy, maintain relationship

Calibrate thresholds by back-testing against actual churn data. Adjust weights quarterly based on which components best predict churn.
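The weighted formula and the red/yellow/green thresholds above can be sketched in a few lines of code. This is an illustrative Python sketch; the function names, and the assumption that component scores arrive pre-normalized to 0-100, are mine, not part of the template.

```python
# Hypothetical sketch of the composite Health Score defined above.
# Assumes component scores are already normalized to 0-100 upstream.

def health_score(usage: float, engagement: float,
                 support: float, sentiment: float) -> float:
    """Weighted composite, using the weights from the formula above."""
    return usage * 0.4 + engagement * 0.3 + support * 0.2 + sentiment * 0.1

def health_band(score: float) -> str:
    """Map a 0-100 score onto the red/yellow/green thresholds."""
    if score <= 40:
        return "red"
    if score <= 70:
        return "yellow"
    return "green"

# Example: a customer with strong usage but poor sentiment still
# lands in yellow, which is exactly what a composite should surface.
score = health_score(usage=85, engagement=70, support=60, sentiment=20)
print(round(score, 1), health_band(score))  # 69.0 yellow
```

Keeping the weights as named constants (or config) makes the quarterly re-weighting mentioned above a one-line change rather than a code hunt.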

**Potential pitfalls**:
1. Over-indexing on usage: Some customers are light users but happy. Check if usage patterns differ by customer segment.
2. Lagging indicators: NPS and tickets trail problems. Add leading indicators like declining usage trends.
3. Gaming: CS team might focus on green accounts (easy) instead of red accounts (impactful). Track 'saves' as a complementary metric.

**Complementary metric**: Churn save rate (% of red-flagged accounts that renew after intervention). Ensures the health score is actionable, not just predictive.

When you need to create a new metric or formalize how something is measured

  • Test your metric definition against edge cases before launching
  • Get agreement on the formula in writing. Debates about metric definitions waste time

2. Metric Audit

Review these existing metrics and identify problems.

Metrics currently tracked:
[LIST_OF_METRICS]

Business context: [WHAT_THE_TEAM/COMPANY_DOES]
Current pain points: [WHAT'S_NOT_WORKING]

Identify: vanity metrics, missing metrics, conflicting incentives, and gaps.
Example: Review these metrics for our content marketing team. Metrics: Page views, time on page, social shares, blog posts published per month, email subscribers. Business context: B2B SaaS, blog is supposed to drive trial signups. Current pain points: Traffic is up but signups are flat.
**Metric Audit: Content Marketing**

**Missing (Critical):**
- **Content-attributed signups**: You're measuring inputs (content) and vanity outputs (views) but not the actual goal (signups). Add UTM tracking and conversion tracking to tie content to trials.
- **Content-to-MQL rate**: What % of blog readers become marketing qualified leads?

**Vanity metrics:**
- **Page views**: High views on irrelevant topics don't help. A post that gets 100K views from people who'll never buy is worse than 1K views from your ICP.
- **Social shares**: Feels good but rarely correlates with business outcomes. People share entertainment, not purchase intent.

**Reframe existing metrics:**
- **Time on page** → Good, but segment it. Time on page for high-intent content (pricing comparison, how-to guides) matters more than listicles.
- **Posts per month** → This incentivizes quantity over quality. Replace with 'posts meeting quality threshold' or pair with a quality metric.

**Conflicting incentives:**
- Measuring 'posts published' encourages the team to write more, but if more posts = more low-quality content = lower average conversion rate, you're going backward.

**Recommended metric stack:**
1. Content-attributed trial signups (north star)
2. High-intent page views (views on bottom-funnel content)
3. Email-to-trial conversion rate
4. Content engagement rate (scroll depth, not just time)

**Remove or de-emphasize:**
- Social shares (unless social is a real channel for you)
- Raw page views (replace with segmented views)
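Content-attributed signups, the north-star metric above, reduce to a simple count over UTM-tagged conversion events. A hedged sketch in Python; the event shape and field names (`event`, `utm_medium`, `utm_campaign`) are illustrative assumptions, not a real analytics schema.

```python
# Counting trial signups attributed to blog content via UTM tags.
# Event records are made up for illustration.
from collections import Counter

events = [
    {"event": "trial_signup", "utm_medium": "blog", "utm_campaign": "pricing-guide"},
    {"event": "trial_signup", "utm_medium": "blog", "utm_campaign": "howto-series"},
    {"event": "trial_signup", "utm_medium": "paid", "utm_campaign": "brand"},
    {"event": "pageview",     "utm_medium": "blog", "utm_campaign": "pricing-guide"},
]

# Last-touch attribution: count only signups whose UTM points at content.
content_signups = Counter(
    e["utm_campaign"]
    for e in events
    if e["event"] == "trial_signup" and e["utm_medium"] == "blog"
)
print(content_signups)  # Counter({'pricing-guide': 1, 'howto-series': 1})
```

Last-touch is the simplest attribution model; it undercounts content that assists earlier in the funnel, which is worth stating explicitly when you report the number.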

When your metrics feel off or aren't driving the right behavior

  • The best metrics create a clear line between action and outcome
  • If a metric doesn't change anyone's behavior, stop tracking it

3. Leading vs Lagging Indicators

Help me build a metrics hierarchy with leading and lagging indicators for [GOAL].

Ultimate goal (lagging): [THE_OUTCOME]
Time horizon: [HOW_FAR_AHEAD_YOU_NEED_TO_SEE]
Areas of influence: [WHAT_ACTIVITIES_DRIVE_THE_GOAL]

Create a hierarchy from input metrics to output metrics.
Example: Help me build a metrics hierarchy for reducing customer churn. Ultimate goal: Annual churn rate below 5%. Time horizon: I need 90-day early warning. Areas of influence: Product usage, customer support, relationship strength, competitive pressure.
**Metrics Hierarchy: Churn Reduction**

**Lagging (Outcome) - Quarterly/Annual view:**
- Annual churn rate (goal: <5%)
- Net Revenue Retention (accounts for expansion)
- Lost revenue from churn ($)

**Lagging (Intermediate) - Monthly view:**
- Monthly churn rate
- Churn by segment (helps identify problem areas)
- Win-back rate (% of churned customers who return)

**Leading (Predictive) - Weekly view - Your 90-day early warning:**
- Health score distribution (% in red/yellow/green)
- Usage trend slope (are customers using more or less over past 30 days?)
- Support ticket sentiment trend
- Renewal calls scheduled vs. renewals due in 90 days

**Leading (Input/Activity) - Daily view:**
- Customer Success outreach volume to at-risk accounts
- Feature adoption for sticky features
- Time to resolve critical tickets
- QBR completion rate

**How to use this hierarchy:**
1. Daily: Track inputs to ensure CS team is doing the right activities
2. Weekly: Review leading indicators to spot accounts heading toward churn
3. Monthly: Check if leading indicator improvements are flowing to lagging metrics
4. Quarterly: Evaluate if you're hitting the churn target

**Key insight**: If leading indicators improve but lagging indicators don't follow, your leading indicators aren't actually leading indicators. Revisit and adjust.

**Suggested alerts:**
- Any account drops 20+ health score points in a week
- Usage for any account drops below 50% of their historical average
- Any Enterprise account goes 30 days without login
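The three suggested alerts translate directly into code. A minimal sketch, assuming a flat account record; the dict keys are hypothetical, not a real CRM schema.

```python
# Illustrative implementation of the three alert rules above.
from datetime import date

def churn_alerts(account: dict, today: date) -> list[str]:
    alerts = []
    # Rule 1: health score dropped 20+ points in a week
    if account["health_last_week"] - account["health_now"] >= 20:
        alerts.append("health score dropped 20+ points this week")
    # Rule 2: usage below 50% of historical average
    if account["usage_30d"] < 0.5 * account["usage_historical_avg"]:
        alerts.append("usage below 50% of historical average")
    # Rule 3: Enterprise account 30 days without login
    if account["tier"] == "enterprise" and \
            (today - account["last_login"]).days >= 30:
        alerts.append("enterprise account inactive 30+ days")
    return alerts

acct = {
    "health_last_week": 72, "health_now": 48,
    "usage_30d": 40, "usage_historical_avg": 100,
    "tier": "enterprise", "last_login": date(2024, 1, 1),
}
print(churn_alerts(acct, today=date(2024, 3, 1)))  # all three rules fire
```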

When you need visibility into future outcomes, not just past results

  • Leading indicators should be things you can influence. If you can't affect it, it's just early information, not a lever
  • Validate leading indicators by checking their correlation with the lagging outcome you care about

4. Counter-Metric Pairing

I'm optimizing for [PRIMARY_METRIC]. What counter-metrics should I track to prevent bad outcomes?

Primary metric: [METRIC]
Context: [HOW_THE_METRIC_IS_USED]
Concern: [WHAT_COULD_GO_WRONG]

Suggest counter-metrics that balance the primary metric.
Example: I'm optimizing for support ticket resolution time. Primary metric: Average time to close tickets. Context: Support team is evaluated on this. Concern: Reps might close tickets prematurely to hit the target.
**Counter-metrics for Resolution Time:**

1. **Reopen rate**: % of tickets reopened within 7 days of closing
 - Why: Directly catches premature closes. If someone closes without solving, the customer reopens.
 - Target: Keep below 5%
 - Red flag: Rep with fast resolution AND high reopens

2. **Customer satisfaction (CSAT) per ticket**: Survey score on closed tickets
 - Why: Even if not reopened, customer might be dissatisfied with rushed resolution
 - Target: Above 4.2/5
 - Red flag: CSAT dropping as resolution time improves

3. **First contact resolution rate**: % of tickets resolved without escalation or follow-up
 - Why: Rewards doing it right the first time, not just fast
 - Target: Above 70%
 - Red flag: FCR dropping while resolution time drops

4. **Escalation rate**: % of tickets escalated to senior support or engineering
 - Why: Reps might close tickets by telling customers 'this isn't supported' when it actually is
 - Target: Below 10%
 - Red flag: Sudden drop in escalations alongside drop in resolution time

**Recommended approach:**
Create a composite score: Speed (resolution time) accounts for 40%, Quality (CSAT + reopen rate) accounts for 60%. This lets you improve speed without sacrificing quality.

**Dashboard view for managers:**
| Rep | Avg Resolution | Reopen % | CSAT | Quality Score | Overall |
|-----|---------------|----------|------|---------------|---------|
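The 40/60 speed-quality composite can be sketched as follows. The normalization choices (CSAT on a 1-5 scale mapped to 0-100, reopen rate capped at 10%) are my assumptions for the example, not part of the recommendation above.

```python
# Hypothetical sketch of the composite score: speed 40%, quality 60%.

def quality_score(csat: float, reopen_rate: float) -> float:
    """Blend CSAT (1-5 scale) and reopen rate (0-1) into a 0-100 score."""
    csat_pts = (csat - 1) / 4 * 100                       # 1 -> 0, 5 -> 100
    reopen_pts = (1 - min(reopen_rate / 0.10, 1)) * 100   # 0% -> 100, 10%+ -> 0
    return 0.5 * csat_pts + 0.5 * reopen_pts

def overall_score(speed_pts: float, csat: float, reopen_rate: float) -> float:
    """Speed counts 40%, quality 60%, per the recommendation above."""
    return 0.4 * speed_pts + 0.6 * quality_score(csat, reopen_rate)

# A fast rep with mediocre quality loses to a slower rep with great quality,
# which is exactly the incentive the 40/60 split is meant to create.
print(round(overall_score(speed_pts=95, csat=3.8, reopen_rate=0.08), 1))
print(round(overall_score(speed_pts=70, csat=4.7, reopen_rate=0.02), 1))
```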

When you're worried that improving one metric might cause problems elsewhere

  • Goodhart's Law: 'When a measure becomes a target, it ceases to be a good measure.' Counter-metrics help fight this
  • The best counter-metrics are ones that would naturally suffer if someone games the primary metric

Common Mistakes to Avoid

Tracking too many metrics. If everything is a KPI, nothing is. Limit to 3-5 true KPIs per team

Setting targets without baselines. How do you know 5% is achievable if you don't know where you're starting?

Defining metrics vaguely ('improve customer satisfaction') instead of precisely ('increase NPS from 32 to 45')

