Unit Testing Prompt Templates

AI prompt templates for writing unit tests. Create comprehensive test coverage for your code.

Overview

Good tests catch bugs before they reach production and give you confidence to refactor. These prompts help you write tests that cover edge cases you hadn't considered, follow testing best practices, and remain maintainable as code evolves. The key is testing behavior, not implementation: your tests should still pass after refactoring internals.
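For instance, a behavior-level test asserts only on observable output, so it survives an internal rewrite. A minimal sketch (the function and names here are hypothetical, not from any template below):

```typescript
// Hypothetical function under test: its observable behavior is the return value.
export function slugify(title: string): string {
  // Implementation detail: a regex today, maybe split/join tomorrow.
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// A behavior check asserts only on the output, so swapping the regex
// for a split/join implementation would leave it passing.
export function slugifyBehaviorHolds(): boolean {
  return slugify("  Hello World ") === "hello-world";
}
```

A test that instead asserted "the regex was called with pattern X" would break on that refactor even though the function still works.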

Best Practices

1. Share the code you want to test AND any interfaces/types it depends on for accurate test generation.

2. Specify your testing framework (Jest, pytest, JUnit, etc.) and any assertion libraries you use.

3. Mention what's already tested so the AI focuses on gaps rather than duplicating coverage.

4. Describe the behavior you expect, not just "write tests"; good prompts specify what should be tested.

5. Include information about mocking needs: external APIs, databases, or services that should be stubbed.

Prompt Templates

1. Comprehensive Unit Test Generator

Write unit tests for this code:

```[LANGUAGE]
[PASTE CODE TO TEST]
```

**Testing framework:** [JEST / PYTEST / JUNIT / MOCHA / etc.]
**Assertion style:** [EXPECT / ASSERT / SHOULD]

**What this code does:** [BRIEF DESCRIPTION]

**Test these scenarios:**
1. Happy path - normal expected usage
2. Edge cases - boundary values, empty inputs
3. Error cases - invalid inputs, failure modes
4. [ANY SPECIFIC SCENARIOS]

**Mocking requirements:**
- [DEPENDENCIES TO MOCK - databases, APIs, etc.]

**Include:** Setup/teardown if needed, descriptive test names, and comments explaining non-obvious test cases.
Example input:

Language: TypeScript
Code: Function that calculates order total with discounts
Framework: Jest
Mocking: Price lookup service
Description: Takes order items, applies discount codes, returns final total

Example output:
```typescript
import { calculateOrderTotal } from './orderCalculator';
import { PriceService } from './priceService';

jest.mock('./priceService');

describe('calculateOrderTotal', () => {
  const mockPriceService = PriceService as jest.Mocked<typeof PriceService>;

  beforeEach(() => {
    jest.clearAllMocks();
    mockPriceService.getPrice.mockImplementation((sku) => {
      const prices: Record<string, number> = {
        'ITEM-001': 29.99,
        'ITEM-002': 49.99,
      };
      return Promise.resolve(prices[sku] || 0);
    });
  });

  describe('happy path', () => {
    it('calculates total for single item without discount', async () => {
      const result = await calculateOrderTotal(
        [{ sku: 'ITEM-001', quantity: 2 }],
        null
      );
      expect(result.total).toBe(59.98);
    });

    it('applies percentage discount correctly', async () => {
      const result = await calculateOrderTotal(
        [{ sku: 'ITEM-001', quantity: 1 }],
        { type: 'percent', value: 10 }
      );
      expect(result.total).toBe(26.99); // 29.99 - 10%
      expect(result.discount).toBe(3.00);
    });
  });

  describe('edge cases', () => {
    it('returns zero for empty order', async () => {
      const result = await calculateOrderTotal([], null);
      expect(result.total).toBe(0);
    });

    it('handles quantity of zero', async () => {
      const result = await calculateOrderTotal(
        [{ sku: 'ITEM-001', quantity: 0 }],
        null
      );
      expect(result.total).toBe(0);
    });

    // Discount should never exceed order total
    it('caps discount at order total', async () => {
      const result = await calculateOrderTotal(
        [{ sku: 'ITEM-001', quantity: 1 }],
        { type: 'fixed', value: 100 }
      );
      expect(result.total).toBe(0);
      expect(result.discount).toBe(29.99);
    });
  });

  describe('error cases', () => {
    it('throws on negative quantity', async () => {
      await expect(
        calculateOrderTotal([{ sku: 'ITEM-001', quantity: -1 }], null)
      ).rejects.toThrow('Quantity must be non-negative');
    });

    it('throws on invalid discount type', async () => {
      await expect(
        calculateOrderTotal(
          [{ sku: 'ITEM-001', quantity: 1 }],
          { type: 'invalid' as any, value: 10 }
        )
      ).rejects.toThrow('Invalid discount type');
    });
  });
});
```
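The tests above assume an implementation roughly like the following. Since `orderCalculator` itself isn't shown, this is only a sketch consistent with the assertions; for self-containment the price lookup is injected as a function parameter here rather than imported from the mocked `PriceService` module:

```typescript
type Item = { sku: string; quantity: number };
type Discount = { type: 'percent' | 'fixed'; value: number } | null;

// Sketch only: the real signature likely uses PriceService directly.
export async function calculateOrderTotal(
  items: Item[],
  discount: Discount,
  getPrice: (sku: string) => Promise<number>
): Promise<{ total: number; discount: number }> {
  let subtotal = 0;
  for (const { sku, quantity } of items) {
    if (quantity < 0) throw new Error('Quantity must be non-negative');
    subtotal += (await getPrice(sku)) * quantity;
  }

  let off = 0;
  if (discount) {
    if (discount.type === 'percent') off = subtotal * (discount.value / 100);
    else if (discount.type === 'fixed') off = discount.value;
    else throw new Error('Invalid discount type');
  }

  off = Math.min(off, subtotal); // discount never exceeds order total

  const round = (n: number) => Math.round(n * 100) / 100;
  return { total: round(subtotal - off), discount: round(off) };
}
```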

When to use: writing tests for new code, increasing coverage on existing code, or learning what edge cases to consider.

  • Ask for tests organized by scenario (happy path, edge cases, errors) for better readability
  • Request that the AI explain WHY each edge case matters; this helps you learn testing patterns

2. Edge Case Test Generator

Generate edge case tests for this function:

```[LANGUAGE]
[PASTE FUNCTION]
```

**Function purpose:** [WHAT IT DOES]
**Input types:** [DESCRIBE INPUTS AND THEIR CONSTRAINTS]
**Framework:** [TESTING FRAMEWORK]

Identify and write tests for:
- Boundary values (min, max, zero, off-by-one)
- Empty/null/undefined inputs
- Type coercion issues
- Numeric edge cases (negative, decimal, very large numbers)
- String edge cases (empty, whitespace, special characters, unicode)
- Collection edge cases (empty array, single item, duplicates)
- Timing/async edge cases if applicable
Example input:

Function: paginate(items: any[], page: number, perPage: number)
Purpose: Returns a slice of items for the given page
Input constraints: page >= 1, perPage >= 1, items is an array
Framework: Jest

Example output:
```typescript
import { paginate } from './paginate';

describe('paginate edge cases', () => {
  describe('boundary values', () => {
    it('page 1 returns first perPage items', () => {
      const items = [1, 2, 3, 4, 5];
      expect(paginate(items, 1, 2)).toEqual([1, 2]);
    });

    it('last page may have fewer than perPage items', () => {
      const items = [1, 2, 3, 4, 5];
      expect(paginate(items, 3, 2)).toEqual([5]);
    });

    it('page beyond items returns empty array', () => {
      const items = [1, 2, 3];
      expect(paginate(items, 10, 2)).toEqual([]);
    });

    it('perPage larger than items returns all items', () => {
      const items = [1, 2, 3];
      expect(paginate(items, 1, 100)).toEqual([1, 2, 3]);
    });
  });

  describe('empty/minimal inputs', () => {
    it('empty array returns empty array', () => {
      expect(paginate([], 1, 10)).toEqual([]);
    });

    it('single item array, first page', () => {
      expect(paginate([1], 1, 10)).toEqual([1]);
    });

    it('perPage of 1 returns single items', () => {
      expect(paginate([1, 2, 3], 2, 1)).toEqual([2]);
    });
  });

  describe('invalid inputs', () => {
    it('page 0 throws error', () => {
      expect(() => paginate([1, 2, 3], 0, 10)).toThrow();
    });

    it('negative page throws error', () => {
      expect(() => paginate([1, 2, 3], -1, 10)).toThrow();
    });

    it('perPage 0 throws error', () => {
      expect(() => paginate([1, 2, 3], 1, 0)).toThrow();
    });

    it('non-integer page floors to integer', () => {
      // Or throws? Specify expected behavior
      expect(paginate([1, 2, 3, 4], 1.7, 2)).toEqual([1, 2]);
    });
  });

  describe('large numbers', () => {
    it('handles very large perPage', () => {
      const items = [1, 2, 3];
      expect(paginate(items, 1, Number.MAX_SAFE_INTEGER)).toEqual([1, 2, 3]);
    });
  });
});
```
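For reference, a minimal `paginate` consistent with those tests might look like this. The original implementation isn't shown, so this is an assumption: pages are 1-indexed, non-integer pages are floored, and invalid values throw.

```typescript
export function paginate<T>(items: T[], page: number, perPage: number): T[] {
  const p = Math.floor(page); // non-integer pages floor to an integer
  if (p < 1) throw new RangeError('page must be >= 1');
  if (perPage < 1) throw new RangeError('perPage must be >= 1');

  const start = (p - 1) * perPage;
  // slice handles start beyond the array (empty result) and very
  // large perPage (clamped to array length) for free.
  return items.slice(start, start + perPage);
}
```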

When to use: you have basic happy path tests but want to ensure robustness against unusual inputs.

  • The most valuable edge cases are often: empty, one, two, max, max+1, and negative
  • Consider what happens when users copy-paste data with hidden characters or whitespace
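The empty/one/two/max/max+1/negative pattern above translates naturally into a table-driven test. A sketch for a hypothetical `clamp(n, 0, 10)`:

```typescript
// Hypothetical function under test.
export function clamp(n: number, min: number, max: number): number {
  return Math.min(Math.max(n, min), max);
}

// The boundary checklist expressed as data-driven cases.
export const boundaryCases: Array<{ input: number; expected: number }> = [
  { input: -1, expected: 0 },  // negative: clamps up to min
  { input: 0, expected: 0 },   // min boundary
  { input: 1, expected: 1 },   // one past min
  { input: 2, expected: 2 },   // two past min
  { input: 10, expected: 10 }, // max boundary
  { input: 11, expected: 10 }, // max + 1: clamps down
];

export function allBoundaryCasesPass(): boolean {
  return boundaryCases.every(
    ({ input, expected }) => clamp(input, 0, 10) === expected
  );
}
```

In Jest this table would feed `it.each`; the point is that each checklist entry becomes one row rather than one hand-written test.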

3. Test Refactoring Assistant

Refactor these tests to be more maintainable:

```[LANGUAGE]
[PASTE CURRENT TESTS]
```

**Issues I'm seeing:**
- [DESCRIBE PROBLEMS - e.g. lots of duplication, hard to read, flaky]

**Improve:**
1. Reduce duplication with setup/helpers
2. Make test names more descriptive
3. Better organize test structure (describe blocks)
4. Add missing assertions or remove redundant ones
5. Make mocks more reusable

**Keep:** [ANY PATTERNS TO PRESERVE]
**Framework:** [TESTING FRAMEWORK]
Example input:

Issues: Lots of repeated mock setup; test names are unclear (test1, test2); some tests are testing multiple things
Framework: pytest

Example output:
**Problems identified:**

1. Mock setup duplicated in every test
2. Test names like `test_user_1` don't describe behavior
3. `test_create_user` has 5 assertions testing different scenarios

**Refactored tests:**

```python
import pytest
from unittest.mock import patch

# Assumed imports -- adjust the paths to match your project layout.
from app.services.user import UserService, ValidationError, DuplicateError


class TestUserCreation:
    """Tests for user creation functionality."""

    @pytest.fixture
    def mock_db(self):
        """Shared database mock for all user tests."""
        with patch('app.services.user.Database') as mock:
            mock.return_value.save.return_value = {'id': 'usr_123'}
            yield mock

    @pytest.fixture
    def user_service(self, mock_db):
        """Pre-configured UserService with mocked dependencies."""
        return UserService(db=mock_db.return_value)

    def test_creates_user_with_valid_email(self, user_service):
        """Valid email creates user and returns user ID."""
        result = user_service.create(email='test@example.com')
        assert result['id'] == 'usr_123'

    def test_rejects_invalid_email_format(self, user_service):
        """Invalid email raises ValidationError."""
        with pytest.raises(ValidationError, match='Invalid email'):
            user_service.create(email='not-an-email')

    def test_rejects_duplicate_email(self, user_service, mock_db):
        """Existing email raises DuplicateError."""
        mock_db.return_value.exists.return_value = True
        with pytest.raises(DuplicateError):
            user_service.create(email='existing@example.com')

    @pytest.mark.parametrize('email', [
        '',
        ' ',
        None,
    ])
    def test_rejects_empty_email(self, user_service, email):
        """Empty or whitespace email raises ValidationError."""
        with pytest.raises(ValidationError):
            user_service.create(email=email)
```

**Changes made:**
- Extracted mock setup to fixtures (reusable across all tests)
- Used descriptive test names that explain the behavior being tested
- Split multi-assertion test into focused single-behavior tests
- Used parametrize for similar test cases with different inputs
- Grouped related tests in a class with clear docstring

When to use: test suites have become hard to maintain, tests are flaky, or adding tests feels tedious due to boilerplate.

  • Each test should test one behavior; if the name needs 'and', split it
  • Test names should read like documentation: test_[action]_[scenario]_[expected_result]

4. Mock Strategy Advisor

Help me design mocks for testing this code:

```[LANGUAGE]
[PASTE CODE WITH DEPENDENCIES]
```

**Dependencies that need mocking:**
- [DEPENDENCY 1 - e.g. database, external API]
- [DEPENDENCY 2]

**Testing framework:** [FRAMEWORK]
**Mocking library:** [LIBRARY IF SPECIFIC - e.g. unittest.mock, sinon]

**Questions:**
1. What should I mock vs. use real implementations?
2. How should I structure mocks for reuse?
3. What mock behaviors should I set up for different test scenarios?
4. Are there any integration test opportunities where mocking isn't needed?
Example input:

Code: Service that sends emails via SendGrid and logs to database
Dependencies: SendGrid API client, PostgreSQL database
Framework: Jest
Mocking library: Jest built-in mocks

Example output:
## Mocking Strategy

### What to Mock vs. Real

| Dependency | Unit Tests | Integration Tests |
|------------|------------|-------------------|
| SendGrid API | Always mock | Mock (or use sandbox mode) |
| Database | Mock for unit tests | Use test database |
| Logger | Usually mock | Real or mock |

### Mock Structure

```typescript
// __mocks__/sendgrid.ts - Shared mock for all tests
export const sendEmail = jest.fn();

// Default success behavior
sendEmail.mockResolvedValue({ statusCode: 202, messageId: 'msg_123' });

// Helpers for common scenarios
export const mockSendGridSuccess = () => {
  sendEmail.mockResolvedValue({ statusCode: 202, messageId: 'msg_123' });
};

export const mockSendGridRateLimit = () => {
  sendEmail.mockRejectedValue({
    code: 429,
    message: 'Rate limit exceeded',
  });
};

export const mockSendGridNetworkError = () => {
  sendEmail.mockRejectedValue(new Error('Network error'));
};
```

### Using in Tests

```typescript
import { sendEmail } from './__mocks__/sendgrid';
import { EmailService } from './emailService';

jest.mock('./sendgrid');

describe('EmailService', () => {
  beforeEach(() => {
    jest.clearAllMocks();
  });

  it('retries on rate limit', async () => {
    // Once-mocks take priority over any default, so queue the
    // fail-then-succeed sequence directly.
    sendEmail
      .mockRejectedValueOnce({ code: 429 }) // First call fails
      .mockResolvedValueOnce({ statusCode: 202 }); // Retry succeeds

    const service = new EmailService();
    await service.send('test@example.com', 'Hello');

    expect(sendEmail).toHaveBeenCalledTimes(2);
  });
});
```

### Integration Test Opportunity

For the database logging, consider an integration test with a test database:

```typescript
describe('EmailService integration', () => {
  // Use a test database (testDb is an assumed helper wrapping a
  // disposable database); mock only external APIs.
  beforeAll(async () => {
    await testDb.migrate();
  });

  it('logs email to database on send', async () => {
    const service = new EmailService(testDb);
    await service.send('test@example.com', 'Hello');

    const logs = await testDb.query('SELECT * FROM email_logs');
    expect(logs).toHaveLength(1);
    expect(logs[0].recipient).toBe('test@example.com');
  });
});
```

When to use: setting up test infrastructure, deciding what to mock, or improving test isolation.

  • Mock at the boundary of your code: external APIs, databases, file systems
  • Don't mock what you own; if you wrote it and it's fast, use the real implementation
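One way to apply both tips is a thin adapter you own around the external client: tests then substitute a hand-written fake instead of patching the SDK. The names below are illustrative, not from any real library.

```typescript
// An interface you own at the boundary; the real implementation
// would wrap the SendGrid SDK, but tests never touch it.
export interface EmailGateway {
  send(to: string, body: string): Promise<boolean>;
}

// A hand-written fake: no mocking library required.
export class FakeEmailGateway implements EmailGateway {
  public sent: Array<{ to: string; body: string }> = [];

  async send(to: string, body: string): Promise<boolean> {
    this.sent.push({ to, body });
    return true;
  }
}

// Code under test depends only on the interface, not the SDK.
export async function notifyUser(
  gateway: EmailGateway,
  to: string
): Promise<boolean> {
  return gateway.send(to, 'You have a new notification');
}
```

Because `FakeEmailGateway` records what was sent, assertions read naturally ("one email went to this address") without any mock-framework syntax.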

Common Mistakes to Avoid

Testing implementation details instead of behavior: tests break when you refactor even if functionality is correct

Not providing mock setup requirements, resulting in tests that can't actually run

Asking for '100% coverage' instead of meaningful tests: this leads to tests that don't actually verify behavior
