You asked ChatGPT a simple question. The response sounded confident, well-written, and completely wrong.
Maybe it cited a study that doesn't exist. Or it gave you statistics that seemed plausible but fell apart when you checked them. This happens more often than most people realize, and it's not really your fault. AI models like ChatGPT and Claude are designed to generate helpful-sounding text. They're not designed to say "I don't know" or "I'm not sure about this."
The good news? You can cut way down on AI hallucinations just by changing how you write your prompts. The techniques aren't complicated, but they make a real difference.
This guide will show you exactly how to prompt AI for factual, reliable answers. You'll learn when to trust the output, when to verify it, and how to catch mistakes before they cause problems.
Step 1: Tell the AI to Admit Uncertainty
Most AI hallucinations happen because the model tries to be helpful even when it shouldn't. It fills gaps in its knowledge with plausible-sounding information instead of admitting it doesn't know.
You can fix this by explicitly giving the AI permission to say "I don't know."
What most people type:
What were Apple's exact revenue numbers for Q3 2024?
What gets you a more honest answer:
What were Apple's exact revenue numbers for Q3 2024? If you don't have access to this specific data or aren't confident in the accuracy, please say so rather than estimating.
That small addition changes how the AI responds. Instead of inventing numbers that sound right, it's more likely to tell you it can't verify current financial data and suggest you check Apple's investor relations page.
Try adding phrases like:
- "If you're not certain, please indicate that"
- "Only include information you're confident about"
- "It's okay to say you don't know"
Simple stuff. But it works.
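If you send a lot of prompts through a script or an API, you can make this habit automatic instead of retyping it. A minimal Python sketch; the helper name and the exact wording of the clause are just illustrations, not any tool's built-in feature:

```python
# A reusable "it's okay to say you don't know" clause.
# The wording here is an example; tune it to your use case.
UNCERTAINTY_CLAUSE = (
    "If you don't have access to this data or aren't confident in the "
    "accuracy, say so explicitly rather than estimating."
)

def with_uncertainty(prompt):
    """Return the prompt with the honesty clause appended."""
    return prompt.rstrip() + "\n\n" + UNCERTAINTY_CLAUSE

print(with_uncertainty("What were Apple's exact revenue numbers for Q3 2024?"))
```

Wrapping every outgoing prompt this way costs nothing and quietly applies the technique to each question you ask.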
Step 2: Ask for Sources and References
One of the fastest ways to spot AI hallucinations is to ask for sources. If the AI can't point to where its information came from, that's a red flag.
Before:
What are the health benefits of intermittent fasting?
After:
What are the health benefits of intermittent fasting? Please include specific studies or research papers that support each claim, including the researchers involved and when the research was published.
Now here's the catch: AI models can still make up fake sources. I've seen ChatGPT invent journal names, author names, and publication dates that don't exist. So this technique isn't foolproof.
But it helps in two ways. First, real information tends to come with verifiable details. Second, when you go to check those sources and they don't exist, you know the whole response needs more scrutiny.
Bottom line: always verify at least one or two sources before using the information for anything that matters.
Step 3: Request Step-by-Step Reasoning
AI models make fewer mistakes when they show their work. This is called "chain of thought" prompting, and it's one of the most reliable ways to get accurate responses from any AI tool.
When you ask the AI to explain its reasoning step by step, it can't just jump to a conclusion. It has to build a logical path to get there. And errors in that path become visible, both to you and sometimes to the AI itself.
Vague prompt:
Is it legal to record phone calls in California?
Better prompt:
Is it legal to record phone calls in California? Walk me through your reasoning step by step, including what type of consent laws apply, any exceptions, and how this differs from federal law.
The second version forces the AI to break down the legal framework rather than giving you a simple yes or no. You get the "why" behind the answer, which makes it much easier to verify.
This approach works especially well for legal questions, math and calculations, technical explanations, and anything with multiple factors to consider. We covered this technique in more detail in our prompt engineering best practices guide.
Step 4: Use Verification Prompts
Here's a technique most people don't bother with: ask the AI to check its own work.
After you get a response, follow up with a verification prompt. Ask the AI to review what it just said and flag anything that might be inaccurate.
Example verification prompt:
Review your previous response. Are there any claims that might be outdated, oversimplified, or that you're less confident about? Please flag anything I should double-check.
This won't catch everything. But AI models often "know" when they're on shaky ground, even if they don't volunteer that information upfront. Asking directly can surface doubts that would otherwise stay hidden.
You can also try:
- "What assumptions did you make in that answer?"
- "What's the strongest counterargument to what you just said?"
- "Rate your confidence in each claim from 1-10"
These follow-ups take an extra minute but can save you from some embarrassing mistakes.
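If you're scripting a conversation, these follow-ups can be queued up and sent automatically after the first answer. A rough sketch using the role/content message format that most chat APIs share; the function and the prompt list are hypothetical, not part of any library:

```python
# Hypothetical canned follow-ups to send after an initial answer.
VERIFICATION_PROMPTS = [
    "Review your previous response. Are there any claims that might be "
    "outdated, oversimplified, or that you're less confident about?",
    "What assumptions did you make in that answer?",
    "Rate your confidence in each claim from 1-10.",
]

def verification_round(history, follow_up):
    """Return a new chat history with a verification question appended.

    `history` is a list of {"role": ..., "content": ...} dicts, the
    format used by most chat-style APIs. The original list is not
    modified, so you can branch off several verification rounds.
    """
    return history + [{"role": "user", "content": follow_up}]

# Usage: after receiving an answer, send the extended history back
# to the model for each verification prompt you care about.
history = [
    {"role": "user", "content": "Is it legal to record calls in California?"},
    {"role": "assistant", "content": "(model's answer here)"},
]
checked = verification_round(history, VERIFICATION_PROMPTS[0])
```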
Step 5: Break Complex Questions into Smaller Parts
AI hallucinations spike when you ask complicated, multi-part questions. The model tries to address everything at once and ends up cutting corners on accuracy.
Break your question into smaller pieces instead. Get a solid answer for each part before moving to the next.
Instead of this:
Compare the economic policies, environmental records, and healthcare proposals of the last three US presidents and tell me which approach was most effective.
Start with this:
Let's break this into parts. First, summarize the major economic policies of the Biden administration from 2021-2024. Focus on policies that were actually implemented, not just proposed.
After getting that response (and verifying it), you can move on to the next president, then environmental records, and so on. Each focused question gets better attention than one massive query.
Takes longer? Yes. But the accuracy improvement is worth it for research or anything you'll share publicly. This is one of the best practices for getting better AI results that people consistently underuse.
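If you're automating this, the pattern is just a loop: ask one focused sub-question, record the answer, then ask the next with the earlier exchange still in context. A sketch with a placeholder `ask` function standing in for whatever chat API you use (the sub-questions below are illustrative):

```python
# Illustrative decomposition of one big comparison question into
# focused sub-questions, asked in order.
SUB_QUESTIONS = [
    "Summarize the major economic policies of the Biden administration "
    "from 2021-2024. Focus on policies that were actually implemented.",
    "Now do the same for the previous administration.",
    "Compare the two, using only points from your earlier answers.",
]

def run_sequence(ask, questions):
    """Ask each question in turn, carrying the conversation forward.

    `ask` is a placeholder: a function that takes the message history
    (a list of {"role": ..., "content": ...} dicts) and returns the
    model's answer as a string. Returns the full transcript.
    """
    history = []
    for question in questions:
        history.append({"role": "user", "content": question})
        answer = ask(history)
        history.append({"role": "assistant", "content": answer})
    return history
```

In practice you would pause between iterations to verify each answer before letting the next question build on it, which is the whole point of the technique.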
Step 6: Specify What "Accurate" Means for Your Use Case
Different situations need different levels of precision. A rough estimate might be fine for a brainstorming session but dangerous for a business proposal.
Tell the AI exactly what level of accuracy you need.
Most people ask:
How much does it cost to build a mobile app?
This gets you something you can actually use:
I'm creating a budget proposal for my company. What's a realistic cost range for building a basic iOS mobile app with user authentication and payment processing? I need figures I can defend to stakeholders, so please only include ranges you're confident about and note where costs can vary significantly.
The context changes everything. Now the AI knows this is for a real business decision, not casual curiosity. It's more likely to give conservative estimates and flag uncertainties.
If writing detailed prompts like this feels like a lot of effort, tools like Prompt Optimizer can help. You type your basic question, and it automatically adds the kind of structure and context that gets more reliable answers.
Common Mistakes That Lead to AI Hallucinations
Trusting the tone. AI models sound certain even when they're wrong. That authoritative voice is designed in, not earned. A confident-sounding paragraph with fake statistics reads exactly the same as one with real data. Always verify facts that matter.
Asking about recent events without checking the model's limits. Most AI models have training cutoffs, meaning they don't know about things that happened after a certain date. If you're asking about current events, use a model with web search or verify the information yourself.
Stopping after one prompt. One question rarely gets you the best answer. Follow up. Ask for verification. Request sources. The back-and-forth is where accuracy improves. If you want to understand why this matters, our guide on why ChatGPT gives generic answers covers the reasoning behind it.
Using AI for tasks it's bad at. AI is great for summarizing, brainstorming, explaining concepts, and drafting content. It's not great for precise statistics, breaking news, or anything requiring real-time data. Knowing the limits saves you a lot of trouble.
Copying output without reading it. This sounds obvious. But it happens constantly. People paste AI output into emails, reports, and presentations without actually reading it carefully. Take two minutes. Read it. You'd be surprised what you catch.
When Can You Trust AI Output?
Not everything needs the same level of scrutiny. Here's a rough guide:
You can usually relax when:
- You're getting general explanations of well-known concepts
- You're brainstorming or generating ideas
- You're writing drafts you'll edit yourself
- You're learning about a topic (knowing you'll go deeper later)
- You're generating code you'll test before using
Be more careful when:
- Specific statistics, dates, or numbers are involved
- You're asking about recent events or current information
- The topic is medical, legal, or financial
- You need direct quotes or citations
- You'll publish or share the output without major editing
The goal isn't to distrust AI completely. It's to match your verification effort to the stakes. Low-risk brainstorming? Take it at face value. High-stakes report? Verify everything.
Start Getting More Reliable Answers Today
Getting accurate AI responses isn't about memorizing one perfect prompt. It's about building a few simple habits: asking for uncertainty, requesting sources, breaking down complex questions, and verifying what matters.
Start with one technique from this guide. Try adding "If you're not certain, please say so" to your next ChatGPT conversation. See how the responses change.
Once that feels natural, layer in step-by-step reasoning requests. Then verification prompts. Each one reduces the chances of getting bad information and builds your confidence in the output.
You don't need to become a prompting expert. You just need to stop accepting the first response as truth and start treating AI like what it is: a powerful tool that works best when you ask the right questions.
FAQ
Do these techniques work with Claude, Gemini, and other AI tools?
Yes. These prompting strategies work across all major AI models. The underlying reason AI hallucinates is the same regardless of which tool you're using, so asking for sources, requesting step-by-step reasoning, and giving permission to say "I don't know" helps with all of them.
Can AI ever be 100% accurate?
No, and that's important to understand. Even with perfect prompts, AI models can still make mistakes. These techniques reduce errors significantly, but they don't eliminate them. For anything high-stakes, always verify the output against a reliable source.
What should I do if I catch AI making something up?
Point it out directly. Say something like "That source doesn't appear to exist. Can you verify it or provide a different one?" AI models respond well to direct correction and will usually adjust. You can also start a fresh conversation if the current one seems to be going off track.
Is it worth asking for sources if AI might make them up anyway?
Yes, because it shifts the dynamic. When you ask for sources, verifiable claims tend to come with real references, and fabricated ones are easier to spot because you can check them. It also signals to the AI that you care about accuracy, which tends to produce more careful responses overall.
How do I know if information from AI is outdated?
Ask the AI directly: "What's your knowledge cutoff date?" or "Is this information current as of 2026?" Most models will tell you when their training data ends. For time-sensitive topics like pricing, regulations, or recent events, always cross-check with a current source.