Algorithm Implementation Prompt Templates
AI prompt templates for implementing algorithms: sorting, searching, and complex data structures.
Overview
Algorithm implementation prompts help you write efficient, correct implementations of data structures and algorithms. These templates cover sorting, searching, graph algorithms, dynamic programming, and custom data structures with proper time/space complexity analysis.
Best Practices
Specify the programming language and any constraints (memory limits, time limits, input size ranges)
Describe the expected time and space complexity requirements upfront
Include edge cases that must be handled (empty inputs, duplicates, negative numbers)
Mention if you need an iterative vs recursive solution, or in-place vs using extra space
Provide sample inputs and expected outputs to verify correctness
Prompt Templates
1. Data Structure Implementation
Implement a [DATA_STRUCTURE] in [LANGUAGE] with the following operations:

Required operations: [OPERATIONS_LIST]

Constraints:
- [TIME_COMPLEXITY_REQUIREMENTS]
- [SPACE_REQUIREMENTS]

Additional requirements:
- [ADDITIONAL_REQUIREMENTS]

Include time complexity analysis for each operation.
Implement an LRU (Least Recently Used) Cache in Python with the following operations:

Required operations:
- get(key): Return the value if key exists, otherwise return -1
- put(key, value): Insert or update the value. Evict the LRU item if at capacity

Constraints:
- Both operations must be O(1) time complexity
- Space complexity O(capacity)

Additional requirements:
- Thread-safe implementation
- Support for TTL (time-to-live) per entry
- Callback function when items are evicted
```python
import threading
import time
from typing import TypeVar, Generic, Optional, Callable, Dict
from dataclasses import dataclass

K = TypeVar('K')
V = TypeVar('V')


@dataclass
class CacheEntry(Generic[K, V]):
    """Doubly-linked list node with TTL support."""
    key: K
    value: V
    prev: Optional['CacheEntry[K, V]'] = None
    next: Optional['CacheEntry[K, V]'] = None
    expires_at: Optional[float] = None  # Unix timestamp

    def is_expired(self) -> bool:
        if self.expires_at is None:
            return False
        return time.time() > self.expires_at


class LRUCache(Generic[K, V]):
    """
    Thread-safe LRU Cache with O(1) get/put operations.

    Implementation uses a hash map + doubly-linked list:
    - Hash map: O(1) key lookup
    - Doubly-linked list: O(1) insertion/removal for LRU ordering

    Time Complexity:
    - get(key): O(1)
    - put(key, value): O(1)

    Space Complexity: O(capacity)
    """

    def __init__(
        self,
        capacity: int,
        default_ttl: Optional[float] = None,
        on_evict: Optional[Callable[[K, V], None]] = None
    ):
        """
        Args:
            capacity: Maximum number of items in cache
            default_ttl: Default time-to-live in seconds (None = no expiry)
            on_evict: Callback function called when item is evicted
        """
        if capacity <= 0:
            raise ValueError("Capacity must be positive")
        self._capacity = capacity
        self._default_ttl = default_ttl
        self._on_evict = on_evict
        # Hash map for O(1) lookup
        self._cache: Dict[K, CacheEntry[K, V]] = {}
        # Dummy head and tail for easier list manipulation
        self._head: CacheEntry[K, V] = CacheEntry(None, None)  # type: ignore
        self._tail: CacheEntry[K, V] = CacheEntry(None, None)  # type: ignore
        self._head.next = self._tail
        self._tail.prev = self._head
        # Thread safety
        self._lock = threading.RLock()

    def get(self, key: K) -> Optional[V]:
        """
        Get value by key. Returns None if not found or expired.
        Time Complexity: O(1)
        """
        with self._lock:
            if key not in self._cache:
                return None
            entry = self._cache[key]
            # Check TTL expiration
            if entry.is_expired():
                self._remove_entry(entry)
                self._notify_eviction(entry)
                return None
            # Move to front (most recently used)
            self._move_to_front(entry)
            return entry.value

    def put(
        self,
        key: K,
        value: V,
        ttl: Optional[float] = None
    ) -> None:
        """
        Insert or update a key-value pair.
        Time Complexity: O(1)

        Args:
            key: Cache key
            value: Cache value
            ttl: Time-to-live in seconds (None = use default_ttl)
        """
        with self._lock:
            # Calculate expiration time
            effective_ttl = ttl if ttl is not None else self._default_ttl
            expires_at = time.time() + effective_ttl if effective_ttl else None
            # Update existing entry
            if key in self._cache:
                entry = self._cache[key]
                entry.value = value
                entry.expires_at = expires_at
                self._move_to_front(entry)
                return
            # Create new entry
            entry = CacheEntry(
                key=key,
                value=value,
                expires_at=expires_at
            )
            # Evict if at capacity
            if len(self._cache) >= self._capacity:
                self._evict_lru()
            # Add to cache and front of list
            self._cache[key] = entry
            self._add_to_front(entry)

    def delete(self, key: K) -> bool:
        """
        Explicitly remove a key from cache.
        Time Complexity: O(1)

        Returns: True if key existed and was removed
        """
        with self._lock:
            if key not in self._cache:
                return False
            entry = self._cache[key]
            self._remove_entry(entry)
            return True

    def clear(self) -> None:
        """Remove all entries from cache. O(n)"""
        with self._lock:
            self._cache.clear()
            self._head.next = self._tail
            self._tail.prev = self._head

    def _add_to_front(self, entry: CacheEntry[K, V]) -> None:
        """Add entry right after head. O(1)"""
        entry.prev = self._head
        entry.next = self._head.next
        self._head.next.prev = entry
        self._head.next = entry

    def _remove_entry(self, entry: CacheEntry[K, V]) -> None:
        """Remove entry from list and cache. O(1)"""
        # Remove from doubly-linked list
        entry.prev.next = entry.next
        entry.next.prev = entry.prev
        # Remove from hash map
        del self._cache[entry.key]

    def _move_to_front(self, entry: CacheEntry[K, V]) -> None:
        """Move existing entry to front of list. O(1)"""
        # Remove from current position
        entry.prev.next = entry.next
        entry.next.prev = entry.prev
        # Add to front
        self._add_to_front(entry)

    def _evict_lru(self) -> None:
        """Remove least recently used entry (before tail). O(1)"""
        lru_entry = self._tail.prev
        if lru_entry is self._head:
            return  # Empty cache
        self._remove_entry(lru_entry)
        self._notify_eviction(lru_entry)

    def _notify_eviction(self, entry: CacheEntry[K, V]) -> None:
        """Call eviction callback if registered."""
        if self._on_evict:
            try:
                self._on_evict(entry.key, entry.value)
            except Exception:
                pass  # Don't let callback errors break cache

    @property
    def size(self) -> int:
        """Current number of items in cache."""
        with self._lock:
            return len(self._cache)

    @property
    def capacity(self) -> int:
        """Maximum cache capacity."""
        return self._capacity

    def __contains__(self, key: K) -> bool:
        """Check if key exists (without updating LRU order)."""
        with self._lock:
            if key not in self._cache:
                return False
            return not self._cache[key].is_expired()

    def __len__(self) -> int:
        return self.size


# Usage example
if __name__ == "__main__":
    evicted_items = []
    cache = LRUCache[str, int](
        capacity=3,
        default_ttl=60.0,  # 60 seconds default TTL
        on_evict=lambda k, v: evicted_items.append((k, v))
    )
    # Basic operations
    cache.put("a", 1)
    cache.put("b", 2)
    cache.put("c", 3)
    print(cache.get("a"))   # 1 - "a" is now most recently used
    cache.put("d", 4)       # Evicts "b" (LRU)
    print(evicted_items)    # [('b', 2)]
    print(cache.get("b"))   # None - was evicted
    print(cache.get("c"))   # 3
    # With custom TTL
    cache.put("e", 5, ttl=0.001)  # Expires in 1ms
    time.sleep(0.01)
    print(cache.get("e"))   # None - expired
```
**Complexity Analysis:**
| Operation | Time | Space | Notes |
|-----------|------|-------|-------|
| `get()` | O(1) | O(1) | Hash lookup + list pointer updates |
| `put()` | O(1) | O(1) | Hash insert + list pointer updates |
| `delete()` | O(1) | O(1) | Hash delete + list pointer updates |
| `clear()` | O(n) | O(1) | Must clear hash map |
| Overall | - | O(capacity) | Hash map + doubly-linked list |

**When to Use:** When implementing cache systems, in-memory stores, or any data structure that needs to track recency with efficient access patterns.
- Always use dummy head/tail nodes in doubly-linked lists to simplify edge cases
- Use RLock instead of Lock for reentrant calls in Python thread-safe code
- Consider using collections.OrderedDict for a simpler (but less customizable) LRU cache
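The `collections.OrderedDict` tip above can be sketched as follows. This is a minimal, single-threaded variant with no TTL and no eviction callback; `SimpleLRUCache` is an illustrative name, not part of the standard library:

```python
from collections import OrderedDict


class SimpleLRUCache:
    """Minimal LRU cache built on OrderedDict (no TTL, not thread-safe)."""

    def __init__(self, capacity: int):
        if capacity <= 0:
            raise ValueError("Capacity must be positive")
        self._capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value) -> None:
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self._capacity:
            self._data.popitem(last=False)  # evict least recently used
```

`move_to_end` and `popitem(last=False)` give the same O(1) recency tracking as the hand-rolled linked list, at the cost of less control over TTL and callbacks.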
2. Sorting Algorithm Implementation
Implement [SORTING_ALGORITHM] in [LANGUAGE].

Requirements:
- Input: [INPUT_TYPE]
- Must handle: [EDGE_CASES]
- Stability requirement: [stable/unstable]
- In-place requirement: [yes/no]

Expected complexity:
- Time: [TIME_COMPLEXITY]
- Space: [SPACE_COMPLEXITY]

Include step-by-step explanation of the algorithm.
Implement Merge Sort in TypeScript.

Requirements:
- Input: Generic array with custom comparator support
- Must handle: Empty arrays, single elements, already sorted, reverse sorted, duplicates
- Stability requirement: stable (equal elements maintain relative order)
- In-place requirement: no (standard merge sort uses O(n) extra space)

Expected complexity:
- Time: O(n log n) in all cases
- Space: O(n) for the temporary arrays
```typescript
/**
 * Merge Sort Implementation
 *
 * Algorithm Overview:
 * 1. Divide: Split array into two halves
 * 2. Conquer: Recursively sort each half
 * 3. Combine: Merge the sorted halves
 *
 * Time Complexity: O(n log n) - always, regardless of input
 * - We divide log(n) times
 * - Each level requires O(n) work to merge
 *
 * Space Complexity: O(n)
 * - Temporary arrays for merging
 * - Plus O(log n) call stack depth
 *
 * Stability: Yes - equal elements maintain their relative order
 * - When merging, we take from the left array first on ties
 */

type Comparator<T> = (a: T, b: T) => number;

/**
 * Default comparator for primitive types
 */
function defaultComparator<T>(a: T, b: T): number {
  if (a < b) return -1;
  if (a > b) return 1;
  return 0;
}

/**
 * Merge Sort - Main function
 *
 * @param array - Array to sort
 * @param comparator - Custom comparison function (optional)
 * @returns New sorted array (original unchanged)
 */
export function mergeSort<T>(
  array: T[],
  comparator: Comparator<T> = defaultComparator
): T[] {
  // Base cases
  if (array.length <= 1) {
    return [...array]; // Return copy for consistency
  }
  // Divide: Find middle point
  const mid = Math.floor(array.length / 2);
  // Conquer: Recursively sort both halves
  const left = mergeSort(array.slice(0, mid), comparator);
  const right = mergeSort(array.slice(mid), comparator);
  // Combine: Merge sorted halves
  return merge(left, right, comparator);
}

/**
 * Merge two sorted arrays into one sorted array
 *
 * This is the key operation that maintains stability:
 * - When elements are equal, we take from the LEFT array first
 * - This preserves the original relative order of equal elements
 */
function merge<T>(
  left: T[],
  right: T[],
  comparator: Comparator<T>
): T[] {
  const result: T[] = [];
  let leftIndex = 0;
  let rightIndex = 0;
  // Compare elements from both arrays
  while (leftIndex < left.length && rightIndex < right.length) {
    // Use <= 0 (not < 0) to maintain stability
    // When equal, take from left first to preserve original order
    if (comparator(left[leftIndex], right[rightIndex]) <= 0) {
      result.push(left[leftIndex]);
      leftIndex++;
    } else {
      result.push(right[rightIndex]);
      rightIndex++;
    }
  }
  // Copy remaining elements (only one of these loops will execute)
  while (leftIndex < left.length) {
    result.push(left[leftIndex]);
    leftIndex++;
  }
  while (rightIndex < right.length) {
    result.push(right[rightIndex]);
    rightIndex++;
  }
  return result;
}

/**
 * Iterative Merge Sort (bottom-up approach)
 *
 * Avoids recursion overhead, same complexity.
 * Useful when stack depth is a concern.
 */
export function mergeSortIterative<T>(
  array: T[],
  comparator: Comparator<T> = defaultComparator
): T[] {
  const n = array.length;
  if (n <= 1) return [...array];
  // We'll alternate between two arrays
  let src = [...array];
  let dest = new Array<T>(n);
  // Start with subarrays of size 1, double each iteration
  for (let size = 1; size < n; size *= 2) {
    // Merge adjacent subarrays of current size
    for (let leftStart = 0; leftStart < n; leftStart += 2 * size) {
      const mid = Math.min(leftStart + size, n);
      const rightEnd = Math.min(leftStart + 2 * size, n);
      // Merge src[leftStart..mid) and src[mid..rightEnd) into dest
      let left = leftStart;
      let right = mid;
      let destIndex = leftStart;
      while (left < mid && right < rightEnd) {
        if (comparator(src[left], src[right]) <= 0) {
          dest[destIndex++] = src[left++];
        } else {
          dest[destIndex++] = src[right++];
        }
      }
      while (left < mid) {
        dest[destIndex++] = src[left++];
      }
      while (right < rightEnd) {
        dest[destIndex++] = src[right++];
      }
    }
    // Swap src and dest for next iteration
    [src, dest] = [dest, src];
  }
  return src;
}

// ============ Usage Examples ============

// Basic number sorting
const numbers = [64, 34, 25, 12, 22, 11, 90];
console.log(mergeSort(numbers));
// Output: [11, 12, 22, 25, 34, 64, 90]

// String sorting
const strings = ['banana', 'apple', 'cherry', 'date'];
console.log(mergeSort(strings));
// Output: ['apple', 'banana', 'cherry', 'date']

// Custom object sorting (stable)
interface Person {
  name: string;
  age: number;
}

const people: Person[] = [
  { name: 'Alice', age: 30 },
  { name: 'Bob', age: 25 },
  { name: 'Charlie', age: 30 },
  { name: 'David', age: 25 },
];

// Sort by age (stable - Alice stays before Charlie, Bob before David)
const sortedByAge = mergeSort(people, (a, b) => a.age - b.age);
console.log(sortedByAge.map(p => p.name));
// Output: ['Bob', 'David', 'Alice', 'Charlie']

// Descending order
const descending = mergeSort(numbers, (a, b) => b - a);
console.log(descending);
// Output: [90, 64, 34, 25, 22, 12, 11]

// Edge cases
console.log(mergeSort([]));         // []
console.log(mergeSort([1]));        // [1]
console.log(mergeSort([1, 1, 1]));  // [1, 1, 1]
console.log(mergeSort([3, 2, 1]));  // [1, 2, 3]
console.log(mergeSort([1, 2, 3]));  // [1, 2, 3]
```
**Visual Example of Merge Sort:**
```
Original: [64, 34, 25, 12, 22, 11, 90]

Split phase (divide until single elements):
[64, 34, 25]              [12, 22, 11, 90]
[64, 34]  [25]            [12, 22]  [11, 90]
[64] [34] [25]            [12] [22] [11] [90]

Merge phase (combine sorted runs):
[34, 64]  [25]            [12, 22]  [11, 90]
[25, 34, 64]              [11, 12, 22, 90]

[11, 12, 22, 25, 34, 64, 90]
```

**When to Use:** When you need a guaranteed O(n log n) sort regardless of input, need stability, or are sorting linked lists (where merge sort excels).
- For large arrays of primitives, consider using the iterative version to avoid stack overflow
- Merge sort is often the best choice for sorting linked lists since it doesn't need random access
- Hybrid approaches switch to insertion sort for small subarrays (< 10-15 elements)
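The hybrid approach mentioned in the last tip can be sketched as follows (in Python for brevity). The cutoff of 16 elements is an assumed, tunable value, and `hybrid_merge_sort` is an illustrative name:

```python
def hybrid_merge_sort(arr, threshold=16):
    """Merge sort that falls back to insertion sort for small inputs.

    threshold: assumed cutoff; real libraries tune this empirically.
    Returns a new sorted list; stable for equal elements.
    """
    if len(arr) <= threshold:
        # Insertion sort: fast for small n due to low constant factors
        result = list(arr)
        for i in range(1, len(result)):
            item = result[i]
            j = i - 1
            while j >= 0 and result[j] > item:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = item
        return result
    # Standard merge sort above the threshold
    mid = len(arr) // 2
    left = hybrid_merge_sort(arr[:mid], threshold)
    right = hybrid_merge_sort(arr[mid:], threshold)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= preserves stability
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

The same idea underlies production sorts such as Timsort, which combines merging with insertion-sorted runs.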
3. Graph Algorithm Implementation
Implement [ALGORITHM_NAME] for [GRAPH_TYPE] graphs in [LANGUAGE].

Graph representation: [ADJACENCY_LIST/MATRIX/EDGE_LIST]

Problem to solve: [PROBLEM_DESCRIPTION]

Input format: [INPUT_FORMAT]

Output requirements: [OUTPUT_REQUIREMENTS]

Handle edge cases: [EDGE_CASES]
Implement Dijkstra's Algorithm for weighted directed graphs in Python.

Graph representation: Adjacency list with edge weights

Problem to solve: Find the shortest path from a source vertex to all other vertices in a graph with non-negative edge weights.

Input format:
- graph: Dict mapping vertex to list of (neighbor, weight) tuples
- source: Starting vertex

Output requirements:
- Return dict of shortest distances from source to each vertex
- Return dict of previous vertices to reconstruct paths
- Handle disconnected vertices (distance = infinity)

Handle edge cases:
- Source vertex not in graph
- Empty graph
- Single vertex
- Disconnected components
```python
import heapq
from typing import TypeVar, Dict, List, Tuple, Optional, Set
from dataclasses import dataclass, field
from math import inf

V = TypeVar('V')  # Vertex type

# Type aliases for clarity
Graph = Dict[V, List[Tuple[V, float]]]  # vertex -> [(neighbor, weight)...]
Distances = Dict[V, float]
Predecessors = Dict[V, Optional[V]]


@dataclass(order=True)
class PriorityEntry:
    """Entry for the priority queue, ordered by distance."""
    distance: float
    vertex: V = field(compare=False)


def dijkstra(
    graph: Graph[V],
    source: V
) -> Tuple[Distances[V], Predecessors[V]]:
    """
    Dijkstra's Shortest Path Algorithm

    Finds shortest paths from source to all reachable vertices in a
    weighted graph with non-negative edge weights.

    Algorithm:
    1. Initialize distances: source=0, all others=infinity
    2. Use min-heap to always process closest unvisited vertex
    3. For each neighbor, check if going through current vertex is shorter
    4. If shorter, update distance and add to heap

    Time Complexity: O((V + E) log V)
    - Each vertex extracted from heap once: O(V log V)
    - Each edge relaxed once: O(E log V)

    Space Complexity: O(V)
    - Distance array: O(V)
    - Predecessor array: O(V)
    - Priority queue: O(V) in worst case

    Args:
        graph: Adjacency list {vertex: [(neighbor, weight)...]}
        source: Starting vertex

    Returns:
        Tuple of (distances dict, predecessors dict)

    Raises:
        ValueError: If source not in graph or negative weights found
    """
    # Handle edge cases
    if not graph:
        return {}, {}
    if source not in graph:
        raise ValueError(f"Source vertex {source} not in graph")
    # Initialize data structures
    distances: Distances[V] = {v: inf for v in graph}
    predecessors: Predecessors[V] = {v: None for v in graph}
    visited: Set[V] = set()
    # Distance to source is 0
    distances[source] = 0
    # Priority queue: (distance, vertex)
    # Using dataclass with ordering for cleaner heap operations
    pq: List[PriorityEntry] = [PriorityEntry(0, source)]
    while pq:
        # Get vertex with minimum distance
        entry = heapq.heappop(pq)
        current_dist = entry.distance
        current = entry.vertex
        # Skip if already processed (we may have stale entries in heap)
        if current in visited:
            continue
        visited.add(current)
        # Relax all outgoing edges
        for neighbor, weight in graph.get(current, []):
            if weight < 0:
                raise ValueError(
                    f"Negative weight {weight} found on edge {current}->{neighbor}. "
                    "Use Bellman-Ford for graphs with negative weights."
                )
            # Calculate new distance through current vertex
            new_dist = current_dist + weight
            # If shorter path found, update
            if new_dist < distances.get(neighbor, inf):
                distances[neighbor] = new_dist
                predecessors[neighbor] = current
                heapq.heappush(pq, PriorityEntry(new_dist, neighbor))
    return distances, predecessors


def reconstruct_path(
    predecessors: Predecessors[V],
    source: V,
    target: V
) -> Optional[List[V]]:
    """
    Reconstruct the shortest path from source to target.

    Time Complexity: O(V) - path length is at most V

    Returns:
        List of vertices from source to target, or None if no path exists
    """
    if target not in predecessors:
        return None
    # Check if target is reachable
    if predecessors[target] is None and target != source:
        return None
    # Build path backwards
    path = []
    current: Optional[V] = target
    while current is not None:
        path.append(current)
        current = predecessors[current]
    # Reverse to get source -> target order
    path.reverse()
    # Verify path starts at source
    if path[0] != source:
        return None
    return path


def dijkstra_with_path(
    graph: Graph[V],
    source: V,
    target: V
) -> Tuple[float, Optional[List[V]]]:
    """
    Convenience function to get shortest distance and path to a specific target.

    Returns:
        Tuple of (distance, path) where path is None if unreachable
    """
    distances, predecessors = dijkstra(graph, source)
    distance = distances.get(target, inf)
    path = reconstruct_path(predecessors, source, target)
    return distance, path


# ============ Example Usage ============
if __name__ == "__main__":
    # Example graph (directed, weighted)
    #
    #      7        2
    #  A -----> B -----> C
    #  |        |        ^
    #  | 1      | 3      | 1
    #  v        v        |
    #  D -----> E -------+
    #      2
    #
    graph: Graph[str] = {
        'A': [('B', 7), ('D', 1)],
        'B': [('C', 2), ('E', 3)],
        'C': [],
        'D': [('E', 2)],
        'E': [('C', 1)],
    }
    # Find shortest paths from A
    distances, predecessors = dijkstra(graph, 'A')
    print("Shortest distances from A:")
    for vertex, dist in sorted(distances.items()):
        print(f"  A -> {vertex}: {dist}")
    # Output:
    #   A -> A: 0
    #   A -> B: 7
    #   A -> C: 4  (A->D->E->C = 1+2+1)
    #   A -> D: 1
    #   A -> E: 3  (A->D->E = 1+2)
    print("\nShortest paths:")
    for target in ['B', 'C', 'D', 'E']:
        path = reconstruct_path(predecessors, 'A', target)
        print(f"  A -> {target}: {' -> '.join(path) if path else 'No path'}")
    # Output:
    #   A -> B: A -> B
    #   A -> C: A -> D -> E -> C
    #   A -> D: A -> D
    #   A -> E: A -> D -> E

    # Using convenience function
    dist, path = dijkstra_with_path(graph, 'A', 'C')
    print(f"\nShortest path A->C: {path}, distance: {dist}")

    # Handle disconnected graph
    graph_disconnected: Graph[str] = {
        'A': [('B', 1)],
        'B': [],
        'X': [('Y', 1)],  # Disconnected component
        'Y': [],
    }
    distances, _ = dijkstra(graph_disconnected, 'A')
    print(f"\nDistance A->X (disconnected): {distances['X']}")
    # Output: inf
```
**Complexity Analysis:**
| Operation | Time | Notes |
|-----------|------|-------|
| Extract min | O(log V) | Using binary heap |
| Decrease key | O(log V) | We insert new entry instead of true decrease-key |
| Total | O((V + E) log V) | Each vertex/edge processed once |
| Space | O(V) | Distance, predecessor arrays, heap |
**When to Use Dijkstra vs Other Algorithms:**
| Algorithm | Use Case |
|-----------|----------|
| Dijkstra | Non-negative weights, single source |
| Bellman-Ford | Negative weights, single source |
| Floyd-Warshall | All pairs shortest paths |
| BFS | Unweighted graphs (all edges = 1) |

**When to Use:** When finding shortest paths in weighted graphs with non-negative edge weights, such as road networks, routing, or any optimization problem with positive costs.
- Use a Fibonacci heap for O(E + V log V) if you need theoretical optimality, but binary heaps are faster in practice
- For dense graphs (E ≈ V²), consider Floyd-Warshall for all-pairs shortest paths
- If you only need the distance to one target, stop early when the target is popped from the heap
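The early-exit tip can be sketched like this; `dijkstra_to_target` is an illustrative name, and the sketch assumes hashable, comparable vertex labels (so heap ties can break on the vertex):

```python
import heapq
from math import inf


def dijkstra_to_target(graph, source, target):
    """Dijkstra that stops as soon as `target` is popped.

    Once a vertex is popped from the min-heap, its distance is final,
    so returning at that point is safe and skips the rest of the graph.
    Returns inf if target is unreachable.
    """
    dist = {source: 0}
    visited = set()
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d  # distance is final when popped
        if u in visited:
            continue
        visited.add(u)
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, inf):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return inf
```

On large graphs where the target is close to the source, this avoids settling distant vertices that the full algorithm would process.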
4. Dynamic Programming Solution
Solve this problem using dynamic programming in [LANGUAGE]:

Problem: [PROBLEM_STATEMENT]

Constraints: [CONSTRAINTS]

Provide:
1. Recurrence relation
2. Base cases
3. Implementation with memoization (top-down)
4. Implementation with tabulation (bottom-up)
5. Space-optimized version if applicable
Solve this problem using dynamic programming in JavaScript:

Problem: Given a string s, find the length of the longest palindromic subsequence. A subsequence is a sequence that can be derived by deleting some or no elements without changing the order of the remaining elements.

Constraints:
- 1 <= s.length <= 1000
- s consists only of lowercase English letters

Provide:
1. Recurrence relation
2. Base cases
3. Implementation with memoization (top-down)
4. Implementation with tabulation (bottom-up)
5. Space-optimized version if applicable
```javascript
/**
 * Longest Palindromic Subsequence
 *
 * Problem Analysis:
 * - A palindrome reads the same forwards and backwards
 * - We need to find the longest subsequence (not substring) that is a palindrome
 *
 * Key Insight:
 * - If s[i] === s[j], both characters can be part of the palindrome
 * - If s[i] !== s[j], we try excluding each character and take the max
 *
 * Recurrence Relation:
 * Let dp[i][j] = length of longest palindromic subsequence in s[i..j]
 *
 * dp[i][j] = {
 *   1                             if i === j (single character)
 *   2 + dp[i+1][j-1]              if s[i] === s[j]
 *   max(dp[i+1][j], dp[i][j-1])   if s[i] !== s[j]
 * }
 *
 * Base Cases:
 * - dp[i][i] = 1 (single character is a palindrome of length 1)
 * - dp[i][i-1] = 0 (empty string, when i > j)
 *
 * Time Complexity: O(n²) - we fill n² states
 * Space Complexity: O(n²) for full table, O(n) for optimized
 */

// ============ Solution 1: Top-Down with Memoization ============

/**
 * @param {string} s
 * @return {number}
 */
function longestPalindromeSubseqMemo(s) {
  const n = s.length;
  // Memoization cache
  // Using Map for potentially better performance with string keys
  const memo = new Map();

  function dp(i, j) {
    // Base case: empty or invalid range
    if (i > j) return 0;
    // Base case: single character
    if (i === j) return 1;
    // Check memo
    const key = `${i},${j}`;
    if (memo.has(key)) {
      return memo.get(key);
    }
    let result;
    if (s[i] === s[j]) {
      // Characters match: include both in palindrome
      result = 2 + dp(i + 1, j - 1);
    } else {
      // Characters don't match: try excluding each
      result = Math.max(dp(i + 1, j), dp(i, j - 1));
    }
    memo.set(key, result);
    return result;
  }

  return dp(0, n - 1);
}

// ============ Solution 2: Bottom-Up with Tabulation ============

/**
 * @param {string} s
 * @return {number}
 */
function longestPalindromeSubseqTab(s) {
  const n = s.length;
  // dp[i][j] = LPS length for s[i..j]
  // Initialize with 0s
  const dp = Array.from({ length: n }, () => Array(n).fill(0));
  // Base case: single characters
  for (let i = 0; i < n; i++) {
    dp[i][i] = 1;
  }
  // Fill table for increasing lengths
  // len = 2, 3 ... n
  for (let len = 2; len <= n; len++) {
    for (let i = 0; i <= n - len; i++) {
      const j = i + len - 1;
      if (s[i] === s[j]) {
        dp[i][j] = 2 + dp[i + 1][j - 1];
      } else {
        dp[i][j] = Math.max(dp[i + 1][j], dp[i][j - 1]);
      }
    }
  }
  return dp[0][n - 1];
}

// ============ Solution 3: Space-Optimized O(n) ============

/**
 * Key observation for space optimization:
 * - dp[i][j] only depends on dp[i+1][j-1], dp[i+1][j], dp[i][j-1]
 * - We can use two 1D arrays if we process carefully
 *
 * Alternative approach: LPS(s) = LCS(s, reverse(s))
 * This naturally leads to an O(n) space LCS solution
 */

/**
 * @param {string} s
 * @return {number}
 */
function longestPalindromeSubseqOptimized(s) {
  const n = s.length;
  // Previous row of the DP table (row i+1)
  let prev = Array(n).fill(0);
  // Current row (row i)
  let curr = Array(n).fill(0);
  // Fill from bottom-right to top-left
  for (let i = n - 1; i >= 0; i--) {
    curr[i] = 1; // Base case: dp[i][i] = 1
    for (let j = i + 1; j < n; j++) {
      if (s[i] === s[j]) {
        // dp[i][j] = 2 + dp[i+1][j-1]
        curr[j] = 2 + prev[j - 1];
      } else {
        // dp[i][j] = max(dp[i+1][j], dp[i][j-1])
        curr[j] = Math.max(prev[j], curr[j - 1]);
      }
    }
    // Move current to previous for next iteration
    [prev, curr] = [curr, prev];
  }
  // Result is in prev because of the final swap
  return prev[n - 1];
}

// ============ Bonus: Reconstruct the Actual Palindrome ============

/**
 * @param {string} s
 * @return {string}
 */
function longestPalindromeSubseqString(s) {
  const n = s.length;
  const dp = Array.from({ length: n }, () => Array(n).fill(0));
  // Fill DP table
  for (let i = 0; i < n; i++) dp[i][i] = 1;
  for (let len = 2; len <= n; len++) {
    for (let i = 0; i <= n - len; i++) {
      const j = i + len - 1;
      if (s[i] === s[j]) {
        dp[i][j] = 2 + dp[i + 1][j - 1];
      } else {
        dp[i][j] = Math.max(dp[i + 1][j], dp[i][j - 1]);
      }
    }
  }
  // Reconstruct palindrome
  const result = [];
  let hasMiddle = false;
  let i = 0, j = n - 1;
  while (i <= j) {
    if (i === j) {
      // Single middle character (odd-length palindrome)
      result.push(s[i]);
      hasMiddle = true;
      break;
    } else if (s[i] === s[j]) {
      // Both characters are part of palindrome
      result.push(s[i]);
      i++;
      j--;
    } else if (dp[i + 1][j] > dp[i][j - 1]) {
      // Skip left character
      i++;
    } else {
      // Skip right character
      j--;
    }
  }
  // Build palindrome: left half + mirrored right half
  // (drop the last collected character from the mirror only when it is the odd middle)
  const leftHalf = result.join('');
  const mirrored = hasMiddle ? result.slice(0, -1) : result.slice();
  return leftHalf + mirrored.reverse().join('');
}

// ============ Tests ============

console.log('Testing Longest Palindromic Subsequence:\n');

const testCases = [
  { input: 'bbbab', expected: 4 },     // "bbbb"
  { input: 'cbbd', expected: 2 },      // "bb"
  { input: 'a', expected: 1 },         // "a"
  { input: 'ac', expected: 1 },        // "a" or "c"
  { input: 'aba', expected: 3 },       // "aba"
  { input: 'abcdef', expected: 1 },    // any single char
  { input: 'abaaba', expected: 6 },    // "abaaba"
  { input: 'character', expected: 5 }, // "carac"
];

for (const { input, expected } of testCases) {
  const memo = longestPalindromeSubseqMemo(input);
  const tab = longestPalindromeSubseqTab(input);
  const opt = longestPalindromeSubseqOptimized(input);
  const actual = longestPalindromeSubseqString(input);
  const pass = memo === expected && tab === expected && opt === expected;
  console.log(`"${input}"`);
  console.log(`  Length: ${memo} (expected: ${expected}) ${pass ? '✓' : '✗'}`);
  console.log(`  Actual: "${actual}"`);
  console.log();
}
```
**Visualization of DP Table for "bbbab":**
```
b b b a b
0 1 2 3 4
+---------------
0 | 1 2 3 3 4 <- dp[0][4] = 4 (answer)
1 | 1 2 2 3
2 | 1 1 3
3 | 1 1
4 | 1
Reading: dp[i][j] = LPS length for substring s[i..j]
Final answer: dp[0][n-1] = dp[0][4] = 4
```When solving optimization problems with overlapping subproblems and optimal substructure, such as sequence problems, path counting, or resource allocation.
- Start with the recurrence relation before writing any code; get the math right first
- Use memoization for trees/graphs and tabulation when you need to fill a regular grid
- Always look for space optimization opportunities; often O(n²) can become O(n)
5. Binary Search Variant
Implement a binary search solution for this problem in [LANGUAGE]:

Problem: [PROBLEM_STATEMENT]

Input: [INPUT_FORMAT]

Constraints: [CONSTRAINTS]

Explain the invariant maintained during the search and handle all edge cases.
Implement a binary search solution for this problem in Python:

Problem: You are given a sorted, rotated array of unique integers (e.g. [4,5,6,7,0,1,2] was originally [0,1,2,4,5,6,7] rotated 4 times). Find the index of a target value, or return -1 if not found.

Input:
- nums: List of sorted rotated integers
- target: Integer to find

Constraints:
- 1 <= nums.length <= 5000
- -10^4 <= nums[i], target <= 10^4
- All values are unique
- Array is sorted in ascending order, then rotated
```python
from typing import List


def search(nums: List[int], target: int) -> int:
    """
    Search in Rotated Sorted Array

    Key Insight:
    Even though the array is rotated, at least ONE half is always sorted.
    We can determine which half is sorted and whether our target lies in it.

    Invariant:
    At each step, the target (if it exists) is within nums[left..right]

    Decision Logic:
    1. Find mid
    2. If nums[mid] == target, done!
    3. Determine which half is sorted:
       - If nums[left] <= nums[mid]: left half is sorted
       - Else: right half is sorted
    4. Check if target is in the sorted half:
       - If yes, search there
       - If no, search the other half

    Time Complexity: O(log n)
    Space Complexity: O(1)

    Example: [4,5,6,7,0,1,2], target = 0

    Step 1: left=0, right=6, mid=3
        nums[mid]=7 != 0
        nums[0]=4 <= nums[3]=7, so left half [4,5,6,7] is sorted
        Is 0 in [4,7]? No (0 < 4)
        Search right: left=4
    Step 2: left=4, right=6, mid=5
        nums[mid]=1 != 0
        nums[4]=0 <= nums[5]=1, so left half [0,1] is sorted
        Is 0 in [0,1]? Yes!
        Search left: right=4
    Step 3: left=4, right=4, mid=4
        nums[mid]=0 == 0, found at index 4!
    """
    if not nums:
        return -1
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = left + (right - left) // 2  # Avoid overflow
        # Found target
        if nums[mid] == target:
            return mid
        # Determine which half is sorted
        if nums[left] <= nums[mid]:
            # Left half [left..mid] is sorted
            # Check if target is in the sorted left half
            if nums[left] <= target < nums[mid]:
                # Target is in sorted left half
                right = mid - 1
            else:
                # Target must be in right half
                left = mid + 1
        else:
            # Right half [mid..right] is sorted
            # Check if target is in the sorted right half
            if nums[mid] < target <= nums[right]:
                # Target is in sorted right half
                left = mid + 1
            else:
                # Target must be in left half
                right = mid - 1
    return -1


def search_with_duplicates(nums: List[int], target: int) -> bool:
    """
    Variant: What if duplicates are allowed?

    Problem: When nums[left] == nums[mid] == nums[right],
    we can't determine which half is sorted!

    Solution: Skip duplicates by moving pointers inward.

    Time Complexity: O(n) worst case (all same elements)
                     O(log n) average case
    """
    if not nums:
        return False
    left, right = 0, len(nums) - 1
    while left <= right:
        mid = left + (right - left) // 2
        if nums[mid] == target:
            return True
        # Handle duplicates: can't determine sorted half
        if nums[left] == nums[mid] == nums[right]:
            left += 1
            right -= 1
            continue
        # Same logic as before
        if nums[left] <= nums[mid]:
            if nums[left] <= target < nums[mid]:
                right = mid - 1
            else:
                left = mid + 1
        else:
            if nums[mid] < target <= nums[right]:
                left = mid + 1
            else:
                right = mid - 1
    return False


def find_rotation_index(nums: List[int]) -> int:
    """
    Bonus: Find the index where rotation occurred (minimum element).
    Useful for other problems like finding rotation count.

    The minimum element is the only one where nums[i] < nums[i-1]
    """
    if not nums:
        return -1
    left, right = 0, len(nums) - 1
    # Array is not rotated
    if nums[left] <= nums[right]:
        return 0
    while left <= right:
        mid = left + (right - left) // 2
        # Found rotation point
        if nums[mid] > nums[mid + 1]:
            return mid + 1
        if nums[mid - 1] > nums[mid]:
            return mid
        # Decide which half to search
        if nums[mid] >= nums[left]:
            # Left half is sorted, rotation is in right half
            left = mid + 1
        else:
            # Right half is sorted, rotation is in left half
            right = mid - 1
    return 0


# ============ Tests ============
def test_search():
    test_cases = [
        # (nums, target, expected)
        ([4, 5, 6, 7, 0, 1, 2], 0, 4),
        ([4, 5, 6, 7, 0, 1, 2], 3, -1),
        ([1], 0, -1),
        ([1], 1, 0),
        ([3, 1], 3, 0),
        ([3, 1], 1, 1),
        ([5, 1, 3], 5, 0),
        ([5, 1, 3], 1, 1),
        ([5, 1, 3], 3, 2),
        ([5, 1, 3], 0, -1),
        ([1, 2, 3, 4, 5], 3, 2),  # Not rotated
        ([2, 3, 4, 5, 1], 1, 4),  # Rotated once
    ]
    print("Testing search in rotated sorted array:\n")
    for nums, target, expected in test_cases:
        result = search(nums, target)
        status = "✓" if result == expected else "✗"
        print(f"{status} search({nums}, {target}) = {result} (expected {expected})")

    print("\n" + "=" * 50 + "\n")
    print("Testing find rotation index:\n")
    rotation_tests = [
        ([4, 5, 6, 7, 0, 1, 2], 4),  # Rotated 4 times
        ([1, 2, 3, 4, 5], 0),        # Not rotated
        ([2, 1], 1),                 # Rotated once
        ([3, 4, 5, 1, 2], 3),        # Rotation at index 3
    ]
    for nums, expected in rotation_tests:
        result = find_rotation_index(nums)
        status = "✓" if result == expected else "✗"
        print(f"{status} find_rotation_index({nums}) = {result} (expected {expected})")


if __name__ == "__main__":
    test_search()
```
**Visual Walkthrough:**
```
Array: [4, 5, 6, 7, 0, 1, 2]

Searching for target = 0

Iteration 1:
  [4, 5, 6, 7, 0, 1, 2]
   L        M        R
  mid = 3, nums[mid] = 7 ≠ 0
  Left half [4,5,6,7] is sorted (4 ≤ 7)
  Is 0 in [4, 7)? No, 0 < 4
  → Search right half: L = 4

Iteration 2:
  [4, 5, 6, 7, 0, 1, 2]
               L  M  R
  mid = 5, nums[mid] = 1 ≠ 0
  Left half [0,1] is sorted (0 ≤ 1)
  Is 0 in [0, 1)? Yes!
  → Search left half: R = 4

Iteration 3:
  [4, 5, 6, 7, 0, 1, 2]
               L,M,R
  mid = 4, nums[mid] = 0 == target
  → Found at index 4!
```

**When to Use:** When searching in sorted data with a twist (rotated, finding boundaries, or optimization problems that have monotonic properties).
- Always use `mid = left + (right - left) // 2` to avoid integer overflow
- Draw out 2-3 examples before coding to understand the invariants
- Test with arrays of size 1, 2, and 3 specifically; they often reveal edge-case bugs
Common Mistakes to Avoid
Off-by-one errors in loop bounds and array indices; test with small arrays (size 1, 2, 3) to catch these
Not handling edge cases like empty input, single element, or all-same elements before the main algorithm
Choosing the wrong DP state; make sure your state captures all the information needed to make optimal decisions
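As a concrete guard against the off-by-one mistakes above, it helps to write the loop invariant as a comment and then test on arrays of size 0 through 3. This lower-bound sketch (an illustrative helper, not from the templates above) shows the pattern:

```python
def lower_bound(nums, target):
    """Return the index of the first element >= target.

    Returns len(nums) if no such element exists.
    Invariant: the answer is always within [left, right].
    Half-open bounds: left starts at 0, right at len(nums).
    """
    left, right = 0, len(nums)
    while left < right:
        mid = left + (right - left) // 2
        if nums[mid] < target:
            # nums[mid] is too small; answer is strictly right of mid
            left = mid + 1
        else:
            # nums[mid] could be the answer; keep it in range
            right = mid
    return left
```

With half-open bounds the loop condition is `left < right` (not `<=`), and neither branch can loop forever because the range strictly shrinks each iteration.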
Related Templates
Code Review Prompt Templates
AI prompt templates for thorough code reviews. Get comprehensive feedback on code quality, security, and best practices.
Debugging Prompt Templates
AI prompt templates for debugging code. Identify issues, understand errors, and find solutions faster.
Code Documentation Prompt Templates
AI prompt templates for writing code documentation. Create clear comments, READMEs, and API docs.