Data Interpretation Prompt Templates

AI prompt templates for interpreting data and explaining findings to non-technical stakeholders.

Overview

Analysis without communication is just math. These prompts help you translate data into insights that people understand and act on. Whether you're presenting to executives, writing for non-technical stakeholders, or just trying to explain a complex finding, these templates will help you tell the story in the data.

Best Practices

1. Lead with the 'so what.' What should the audience do with this information?

2. Use comparisons to make numbers meaningful. $1.2M means nothing without context.

3. Match your detail level to your audience. Executives need headlines; analysts need methodology.

4. Acknowledge uncertainty. Present confidence levels, not just point estimates.

5. Test your explanation on someone unfamiliar with the analysis before presenting.

Prompt Templates

1. Executive Summary Writer

Help me summarize these findings for an executive audience.

Analysis topic: [WHAT_YOU_ANALYZED]
Key findings:
[LIST_OF_FINDINGS]

Data sources: [WHERE_DATA_CAME_FROM]
Audience: [WHO_WILL_READ_IT]
Decision they need to make: [WHAT_ACTION_IS_POSSIBLE]

Write a summary that a busy executive can understand in 2 minutes.
Example input:

Analysis: Customer acquisition cost by channel for Q1 2024. Findings: 1) Paid search CAC increased 35% YoY to $145. 2) Organic search CAC stable at $45. 3) Referral program CAC lowest at $28 but only 12% of volume. 4) Social ads CAC highly variable ($80-200) depending on campaign. 5) Overall blended CAC is $98, up from $82 last year. Data sources: Marketing spend data, CRM, attribution model. Audience: CMO and CFO. Decision: Whether to shift Q2 budget allocation.

Example output:
**Customer Acquisition Cost Analysis: Q1 2024**

**Bottom line**: Our cost to acquire customers rose 20% year-over-year ($98 vs $82). Paid search is the biggest driver of the increase.

**Key finding**: We're over-reliant on paid search, which is getting more expensive. It's up 35% YoY and now costs 3x more than organic ($145 vs $45). Meanwhile, our most efficient channel (referrals at $28) drives only 12% of customers.

**What the data suggests**:

| Channel | CAC | Share of Customers | Trend |
|---------|-----|-------------------|-------|
| Paid Search | $145 | 45% | ▲ 35% YoY |
| Social Ads | $80-200 | 23% | Volatile |
| Organic | $45 | 20% | Stable |
| Referral | $28 | 12% | Stable |

**Recommended actions**:

1. **Invest more in organic**: Stable, low CAC, but under-indexed. SEO and content investment pays off over time.

2. **Scale the referral program**: Best CAC at $28, but only 12% of volume. Can we 2x this with better incentives?

3. **Audit paid search**: Which keywords/campaigns drove the 35% increase? Cut low performers before increasing budget.

4. **Stabilize social**: High variance ($80-200) suggests inconsistent execution. Focus on replicating winning campaigns.

**If we shift 10% of paid search budget to referral and organic**, we project blended CAC could drop to ~$88, saving approximately $150K annually at current volumes.

**Data note**: Attribution uses 30-day last-touch model. Multi-touch would show organic contributing more, strengthening the case for investment there.

**When to use**: When presenting analysis results to leadership.

  • Start with the answer, not the methodology. Executives want conclusions first
  • Tie everything to dollars or business outcomes. 'CAC rose 35%' is less compelling than 'We're spending $150K more to get the same customers'
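The blended CAC and projected savings in the example above are share-weighted averages, so they are easy to sanity-check in a few lines. A minimal sketch: the channel shares come from the table above, while the $90 effective social CAC is an assumed value inside the reported $80-200 range, not a figure from the analysis.

```python
# Sanity-check blended CAC as a share-weighted average of channel CACs.
# Shares are from the example table; the $90 social figure is an assumed
# effective value within the reported $80-200 range.
channels = {
    "paid_search": {"cac": 145, "share": 0.45},
    "social_ads":  {"cac": 90,  "share": 0.23},
    "organic":     {"cac": 45,  "share": 0.20},
    "referral":    {"cac": 28,  "share": 0.12},
}

def blended_cac(mix):
    """Volume-weighted average CAC across channels."""
    return sum(ch["cac"] * ch["share"] for ch in mix.values())

print(f"Current blended CAC: ${blended_cac(channels):.0f}")  # ≈ $98

# Hypothetical shift: move 10 points of volume from paid search to
# organic (+5) and referral (+5), holding per-channel CACs fixed.
shifted = {name: dict(ch) for name, ch in channels.items()}
shifted["paid_search"]["share"] -= 0.10
shifted["organic"]["share"] += 0.05
shifted["referral"]["share"] += 0.05
print(f"Shifted blended CAC: ${blended_cac(shifted):.0f}")  # ≈ $87
```

With these shares the shifted figure lands near the ~$88 projected in the example; the exact number depends on the assumed social CAC and how the 10 points are split.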

2. Technical to Non-Technical Translation

Help me explain this technical finding to a non-technical audience.

Technical finding: [THE_RESULT]
Statistical details: [P_VALUES_CONFIDENCE_INTERVALS_ETC]
Context: [BUSINESS_SITUATION]
Audience: [WHO_NEEDS_TO_UNDERSTAND]

Make it understandable without losing accuracy.
Example input:

Technical finding: A/B test showed variant B had 2.3% conversion rate vs control's 2.0%, p=0.001, 95% CI for lift [0.1%, 0.5%]. Statistical details: n=50,000 per group, two-tailed z-test for proportions. Context: Testing a new checkout flow. Audience: Product manager and engineering lead (smart but not statisticians).

Example output:

**Checkout Flow Test Results**

**Plain English summary**: The new checkout flow (B) converts better than the current one. We're confident this is a real improvement, not random chance.

**The numbers**:
- Old checkout: 2.0% of visitors completed purchase
- New checkout: 2.3% of visitors completed purchase
- That's a 15% relative improvement (0.3 percentage points)

**How confident are we?**

Very. If the new checkout were no better than the old one, there's only about a 0.1% chance we'd see a difference this large. In statistical terms, p=0.001, far below the typical 5% threshold for significance.

Think of it this way: If we ran this test 1,000 times when there was no real difference, we'd expect to see a result this strong only about once. So it's very unlikely we're just seeing noise.

**What's the range of impact?**

The true improvement is probably between 0.1 and 0.5 percentage points of additional conversion. Our best estimate is 0.3 points, but it could be a bit smaller or larger.

At our current traffic (roughly 500K visitors/month):
- 0.1% lift = ~500 extra conversions/month
- 0.3% lift = ~1,500 extra conversions/month
- 0.5% lift = ~2,500 extra conversions/month

**My recommendation**: Ship the new checkout. The evidence is strong enough to act on, and even the low end of the range (0.1%) is worth capturing.

**Caveat**: We tested for 2 weeks. There could be novelty effects that wear off. I'd suggest monitoring conversion rate for the next month after launch to confirm the improvement holds.

**When to use**: When sharing statistical results with people who don't speak statistics.

  • Translate p-values into language about confidence, not technical definitions
  • Always convert percentages to real numbers (conversions, dollars, customers) that people can visualize
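Translation is easier when you can reproduce the numbers you are explaining. A minimal sketch of the two-tailed z-test for proportions named in the example, using only the standard library; the 1,000 and 1,150 conversion counts are the ones implied by 2.0% and 2.3% of 50,000 visitors per arm.

```python
import math

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-tailed z-test for a difference in proportions, plus a 95% CI
    for the lift (uses the unpooled standard error for the interval)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)              # pooled rate for the test
    se_pool = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se_pool
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed normal tail
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    lift = p2 - p1
    return z, p_value, (lift - 1.96 * se, lift + 1.96 * se)

# Checkout example: 2.0% vs 2.3% conversion, 50,000 visitors per arm
z, p, (lo, hi) = two_proportion_ztest(1000, 50_000, 1150, 50_000)
print(f"z={z:.2f}, p={p:.4f}, 95% CI for lift: [{lo:.2%}, {hi:.2%}]")
```

Running the raw counts through this before writing the summary catches transcription errors and gives you the CI endpoints to translate into monthly conversion numbers.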

3. Data Story Structure

Help me structure a data presentation as a compelling story.

Main insight: [WHAT_YOU_DISCOVERED]
Supporting data: [KEY_EVIDENCE]
Audience: [WHO]
Desired outcome: [WHAT_ACTION_DO_YOU_WANT]

Create a story arc that leads to action.
Example input:

Main insight: We're losing customers in the first 30 days because they don't experience our core value proposition. Supporting data: 60% of churned users never used Feature X (our differentiator), users who use Feature X in week 1 have 3x better retention, current onboarding doesn't mention Feature X until day 14. Audience: Product and leadership team. Desired outcome: Prioritize onboarding redesign to surface Feature X earlier.

Example output:
**Presentation Structure: The Onboarding Problem**

**Opening hook (1 slide, 30 seconds)**

*"We have a Feature X problem. Not with the feature itself, but with the fact that most customers never discover it."*

Show: Two numbers side by side
- Customers who use Feature X in week 1: 82% still with us at day 90
- Customers who don't: 28% still with us at day 90

This grabs attention with the contrast before explaining why.

---

**The mystery (2 slides, 2 minutes)**

*"We've been investigating why early churn is so high. Here's what we found."*

Slide 1: Churn curve showing steep drop in first 30 days
- Annotation: "47% of churn happens before day 30"

Slide 2: Comparison of churned vs retained users
- Churned users: Avg 2 logins, 1.5 features used, 0 Feature X usage
- Retained users: Avg 8 logins, 4.2 features used, Feature X in week 1

*"The retained customers aren't just more active. They're doing something specific."*

---

**The revelation (2 slides, 2 minutes)**

*"Feature X is the difference. But most users never see it."*

Slide 1: Feature X usage breakdown
- 60% of churned users: Never used Feature X
- 85% of retained users: Used Feature X

Slide 2: Current onboarding timeline
- Day 1: Account setup
- Day 3: Basic features intro
- Day 7: Settings walkthrough
- Day 14: Feature X mentioned (too late!)

*"We're burying our best feature. By day 14, half these users are already gone."*

---

**The proof point (1 slide, 1 minute)**

*"Users who discover Feature X early behave completely differently."*

Show: Retention curves split by Feature X adoption timing
- Used Feature X in week 1: Best retention
- Used Feature X in week 2-4: Medium retention
- Never used: Worst retention

*"This isn't correlation. Feature X delivers our core value. Without it, we're just another tool."*

---

**The ask (1 slide, 1 minute)**

*"We need to redesign onboarding to get users to Feature X in their first session."*

Proposed change:
- Day 1: After signup, guided Feature X experience (5 min)
- Success metric: 50% of new users try Feature X within 24 hours
- Expected impact: If we move from 25% to 50% week-1 adoption, we project 15% reduction in 30-day churn

*"This is our highest-leverage retention fix. I'm asking for priority in the next sprint."*

---

**Story arc summary**:
1. Hook: Dramatic contrast to create curiosity
2. Mystery: Show the problem without the answer yet
3. Revelation: Deliver the insight (Feature X is the key)
4. Proof: Demonstrate this isn't just correlation
5. Ask: Clear, specific action with expected outcome

Total: 7 slides, 7 minutes, one clear ask.

**When to use**: When you need to convince an audience to take action based on data.

  • Don't show all your data. Show the data that tells the story
  • Create tension before resolution. Present the problem before the solution to keep attention
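The expected-impact line in the ask ("25% to 50% adoption → ~15% churn reduction") is a weighted-average calculation worth showing your work on. A sketch of that arithmetic: the 20% and 40% per-segment 30-day churn rates here are illustrative assumptions, not measured values from the analysis.

```python
# Expected-impact arithmetic behind the ask. CHURN_EARLY / CHURN_LATE are
# assumed 30-day churn rates for week-1 Feature X adopters vs everyone
# else; they are hypothetical values for illustration.

def blended_churn(adoption_share, churn_early, churn_late):
    """Blended 30-day churn for a given week-1 Feature X adoption share."""
    return adoption_share * churn_early + (1 - adoption_share) * churn_late

CHURN_EARLY, CHURN_LATE = 0.20, 0.40
current = blended_churn(0.25, CHURN_EARLY, CHURN_LATE)   # 25% adopt today
proposed = blended_churn(0.50, CHURN_EARLY, CHURN_LATE)  # target: 50%
reduction = (current - proposed) / current
print(f"30-day churn: {current:.0%} -> {proposed:.0%} "
      f"({reduction:.0%} relative drop)")
```

With these assumed rates the relative drop comes out around 14%, in line with the ~15% projection; the point of the sketch is that the projection follows mechanically from two segment churn rates and the adoption target.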

4. Uncertainty Communication

Help me communicate the uncertainty in these results appropriately.

Finding: [WHAT_YOU_FOUND]
Confidence level: [HOW_SURE_ARE_YOU]
Sources of uncertainty: [WHAT_COULD_BE_WRONG]
Audience: [WHO]
Stakes: [WHAT_DECISION_DEPENDS_ON_THIS]

How do I present this honestly without undermining the finding?
Example input:

Finding: Predicted Q2 revenue of $4.2M. Confidence level: 80% prediction interval is $3.8M-$4.6M. Sources of uncertainty: One major deal ($400K) may slip to Q3, historical accuracy of model is +/- 12%, economic conditions uncertain. Audience: Board of directors. Stakes: They're using this to set hiring budget for Q3.

Example output:
**Communicating Q2 Revenue Forecast to the Board**

**Recommended framing**:

*"Our best estimate for Q2 revenue is $4.2 million. Planning should account for a realistic range of $3.8M to $4.6M, depending on deal timing and economic conditions."*

---

**How to present the uncertainty**:

**Visual**: Show the forecast as a range, not a point

```
Q2 Revenue Forecast

  $3.8M            $4.2M            $4.6M
    |================|================|
 Downside         Expected          Upside
 (if deal         estimate      (if pipeline
  slips)                        accelerates)
```

**Language that works**:

✓ "We expect Q2 revenue of $4.2M, with a realistic range of $3.8M to $4.6M"
✓ "We're 80% confident revenue will fall between $3.8M and $4.6M"
✓ "The biggest swing factor is the [Customer] deal, which may close in Q2 or Q3"

**Language to avoid**:

✗ "Revenue will be $4.2M" (false precision)
✗ "Revenue could be anywhere from $3M to $5M" (too vague, undermines credibility)
✗ "We're not sure" (unhelpful)

---

**Scenario planning for the board**:

Instead of one number, present three scenarios tied to decisions:

| Scenario | Q2 Revenue | Likelihood | Hiring Implication |
|----------|------------|------------|--------------------|
| Conservative | $3.8M | 20% | Hold at current headcount |
| Expected | $4.2M | 60% | Add 2 roles in Q3 |
| Optimistic | $4.6M | 20% | Accelerate 4 hires |

*"I recommend budgeting for the expected case but not committing to hires until we see June pipeline."*

---

**Addressing specific risks**:

Don't hide them; frame them as known variables:

*"Three factors could move us within this range:*

*1. The [Customer] deal ($400K): 70% likely to close in Q2. If it slips, we're closer to $3.8M.*

*2. Renewal timing: Two renewals ($150K combined) are in final negotiation. Risk of delay is low but non-zero.*

*3. Economic conditions: Our model is based on current pipeline velocity. A slowdown in close rates would push us to the lower end."*

---

**Credibility statement**:

*"Our forecasting model has been within 12% of actual results for the past 6 quarters. The range I'm presenting accounts for that historical variance."*

This shows you know the model's track record without apologizing for it.

---

**Key principle**: Uncertainty isn't weakness. Presenting a range with clear drivers shows you understand the business. A single number with hidden assumptions is riskier for decision-making.

**When to use**: When results have significant uncertainty that affects decisions.

  • Ranges are more honest than point estimates, and executives prefer them for planning
  • Tie uncertainty to actionable factors ('if deal X closes' or 'if growth rate holds'). Abstract uncertainty is hard to plan around
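When a range is tied to named drivers like these, you can stress-test it with a quick simulation. A rough Monte Carlo sketch: the 70% close probability comes from the risk list above, but the model-noise level is a hypothetical value loosely tuned to the model's ±12% historical accuracy, so the simulated interval will not exactly match the quoted $3.8M-$4.6M.

```python
import random

random.seed(42)  # reproducible illustration

BASE = 3.8e6      # committed pipeline, excluding the at-risk deal
DEAL = 4.0e5      # the $400K deal that may slip to Q3
P_CLOSE = 0.70    # assumed probability it closes in Q2 (from the risk list)
MODEL_SD = 0.075  # assumed relative forecast-model noise (hypothetical)

def simulate_quarter():
    """One simulated Q2: deal either closes or slips, then model noise."""
    revenue = BASE + (DEAL if random.random() < P_CLOSE else 0.0)
    return revenue * random.gauss(1.0, MODEL_SD)

runs = sorted(simulate_quarter() for _ in range(100_000))
lo, hi = runs[10_000], runs[90_000]  # empirical 10th / 90th percentiles
print(f"Simulated 80% interval: ${lo/1e6:.1f}M - ${hi/1e6:.1f}M")
```

Varying P_CLOSE and MODEL_SD shows which driver moves the interval more, which is exactly the "actionable factors" framing: the board can discuss the deal's close probability, not an abstract error bar.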

Common Mistakes to Avoid

Burying the insight in methodology. Start with what matters, add technical details as appendix

Presenting data without recommendations. Analysis should lead to 'so, we should...'

Using jargon that loses your audience. 'p < 0.05' means nothing to most people
