Chain-of-Thought Prompting: Make AI Think Step by Step
Learn the simple prompting technique that improved AI accuracy from 18% to 79% on reasoning tasks — with copy-paste examples for everyday work.
Erla Team
You asked ChatGPT a question that required some thinking — maybe comparing two options, analyzing a decision, or working through a multi-step problem. The AI responded instantly with a confident answer. Only problem: it was completely wrong.
This happens more than you'd think. AI assistants are trained to produce plausible-sounding responses, not to actually reason through problems. When you ask a complex question the normal way, the AI often skips the thinking part and jumps straight to an answer — sometimes getting it spectacularly wrong while sounding absolutely certain.
There's a fix. In 2022, researchers at the University of Tokyo and Google discovered that adding one phrase to prompts, "Let's think step by step," improved accuracy on a set of math word problems from 17.7% to 78.7%. That's not a typo. One sentence made the AI more than four times as accurate.
This technique is called chain-of-thought prompting, and it works because it forces the AI to show its work instead of jumping to conclusions. Here's how to use it for real tasks — not just math problems.
What Is Chain-of-Thought Prompting?
Chain-of-thought (CoT) prompting is exactly what it sounds like: you ask the AI to explain its reasoning step by step before giving a final answer. Instead of "What's the answer?" you ask "Walk me through your thinking, then give me the answer."
Think of it like asking a coworker to show their work. If someone gives you a recommendation with no explanation, you can't tell if they actually thought it through or just guessed. But if they walk you through their reasoning — "I considered X, ruled out Y because of Z, which led me to this conclusion" — you can spot any flaws in their logic.
The same principle applies to AI. When you force it to articulate intermediate steps, two things happen:
The AI catches its own mistakes mid-reasoning
You can see exactly where the logic went wrong if the answer is off
Why AI Skips Steps (And Gets Things Wrong)
Here's something most people don't realize: AI models aren't actually "thinking" the way humans do. They're pattern-matching against billions of text examples to predict what words should come next. When you ask a straightforward question, they jump to the most statistically likely answer.
For simple questions, this works fine. "What's the capital of France?" doesn't require reasoning — the AI has seen this question and answer paired together millions of times.
But for anything requiring actual logic — comparing options, analyzing tradeoffs, solving multi-step problems — the pattern-matching approach falls apart. The AI picks an answer that sounds right without doing the work to verify it actually is right.
Chain-of-thought prompting interrupts this shortcut. By asking the AI to reason out loud, you force it to generate the intermediate steps — and those steps constrain what the final answer can be. It's harder to reach a wrong conclusion when you have to show the path that got you there.
[Image: Comparison showing AI jumping to an answer versus AI reasoning through steps before answering]
The Simplest Way to Use Chain-of-Thought
The easiest version requires zero setup. Just add one of these phrases to the end of your prompt:
"Let's think step by step."
"Walk me through your reasoning."
"Explain your thinking before giving a final answer."
"Break this down step by step."
Researchers found that "Let's think step by step" worked best in their tests, though follow-up research discovered an even better phrasing: "Let's work this out in a step by step way to be sure we have the right answer."
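If you use AI through an API instead of a chat window, zero-shot CoT is a one-line change: append the trigger phrase to whatever the user asked. Here's a minimal sketch using OpenAI's Python SDK; the "gpt-4o" model name is a stand-in for whichever model you actually call.

```python
# Zero-shot chain-of-thought: append a reasoning trigger to any question.
# Minimal sketch using the OpenAI Python SDK; "gpt-4o" is a stand-in model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COT_TRIGGER = "Let's think step by step."

def ask_with_cot(question: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"{question}\n\n{COT_TRIGGER}"}],
    )
    return response.choices[0].message.content

print(ask_with_cot(
    "A bat and a ball cost $1.10 total. The bat costs $1.00 "
    "more than the ball. How much does the ball cost?"
))
```

In a chat window, the equivalent move is simply typing the phrase at the end of your message.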
Here's what this looks like in practice. Say you're trying to decide whether to accept a job offer.
Without chain-of-thought:
Should I accept a job offer that pays 20% more but requires relocating to a city with 40% higher cost of living?
The AI might give you a quick "yes" or "no" based on surface-level pattern matching.
With chain-of-thought:
Should I accept a job offer that pays 20% more but requires relocating to a city with 40% higher cost of living?
Let's think through this step by step, considering the financial implications, quality of life factors, and career impact before reaching a conclusion.
Now the AI will break down each factor, do the math on whether 20% more salary covers 40% higher costs, consider what you might be gaining or losing, and give you a reasoned recommendation.
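It's worth running that math once yourself to see what the AI should be working out. Here's a back-of-the-envelope sketch with hypothetical numbers: a $100,000 salary, half of it spent on cost of living. Both figures are assumptions for illustration, not part of the original question.

```python
# Back-of-the-envelope check on the job offer, using hypothetical numbers.
salary = 100_000     # current salary (assumed)
living_share = 0.50  # fraction of salary spent on cost of living (assumed)

living_costs = salary * living_share            # $50,000
leftover_now = salary - living_costs            # $50,000 discretionary

new_salary = salary * 1.20                      # 20% raise -> $120,000
new_living_costs = living_costs * 1.40          # 40% higher -> $70,000
leftover_after = new_salary - new_living_costs  # $50,000 discretionary

print(f"Discretionary income now:   ${leftover_now:,.0f}")
print(f"Discretionary income after: ${leftover_after:,.0f}")
```

With these particular assumptions the offer is a financial wash: spend more than half your income on living costs and the raise falls short; spend less and you come out ahead. That contingency is exactly what a step-by-step breakdown surfaces and a snap answer hides.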
Few-Shot CoT: Show the AI How to Think
The "Let's think step by step" approach is called zero-shot CoT because you're not showing any examples. It works well for many situations, but for complex or specialized tasks, you can get even better results by demonstrating the reasoning pattern you want.
This is called few-shot CoT — you include one or two worked examples that show the AI exactly how to reason through similar problems.
Here's a template for analyzing business decisions:
I need help evaluating options. Here's how I'd like you to reason through each one:
Example:
Question: Should we switch from monthly to annual billing?
Step 1 - Identify the key factors: Cash flow predictability, customer churn risk, pricing psychology.
Step 2 - Analyze each factor:
- Cash flow: Annual billing gives us revenue upfront, improving predictability
- Churn risk: Customers who pay annually have lower churn rates
- Pricing: We can offer a discount for annual plans without losing money
Step 3 - Weigh tradeoffs: The main downside is higher friction for new signups.
Step 4 - Conclusion: Yes, but offer both options with annual at a 15% discount.
Now apply this same reasoning structure to my question:
{{question}}
The example doesn't have to match your exact question — it just needs to demonstrate the reasoning structure you want. The AI will adapt the pattern to your specific situation.
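If you're assembling prompts in code, few-shot CoT is just string concatenation: one worked example, then your question. A minimal sketch in plain Python (no API call needed), using the billing example from the template above:

```python
# Few-shot chain-of-thought: prepend one worked example so the model
# copies the reasoning structure. Pure string assembly, no API needed.
WORKED_EXAMPLE = """\
Question: Should we switch from monthly to annual billing?
Step 1 - Identify the key factors: cash flow, churn risk, pricing psychology.
Step 2 - Analyze each factor: annual billing gives revenue upfront; annual
payers churn less; a discount keeps annual plans attractive without losses.
Step 3 - Weigh tradeoffs: the main downside is higher signup friction.
Step 4 - Conclusion: yes, but offer both options, annual at a 15% discount."""

def few_shot_cot_prompt(question: str) -> str:
    return (
        "I need help evaluating options. Here's how I'd like you to "
        "reason through each one:\n\n"
        f"Example:\n{WORKED_EXAMPLE}\n\n"
        "Now apply this same reasoning structure to my question:\n"
        f"{question}"
    )

print(few_shot_cot_prompt("Should we sunset our legacy API this quarter?"))
```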
When Chain-of-Thought Actually Helps (And When It Doesn't)
CoT prompting isn't a magic fix for every AI interaction. Research from Wharton found that while it improves performance on hard problems, it can actually hurt accuracy on easy ones by introducing unnecessary complexity.
Use chain-of-thought when:
You're comparing multiple options or weighing tradeoffs
The task involves multi-step reasoning or calculations
You need to troubleshoot or diagnose a problem
The answer requires analyzing cause and effect
You want to understand the AI's reasoning, not just get an answer
Skip chain-of-thought when:
You're asking for simple facts or definitions
You need creative output like brainstorming or writing
You want a summary or translation
Speed matters more than accuracy
The task doesn't involve logical reasoning
Also worth noting: CoT prompting is less effective with smaller AI models. The original Google research found that meaningful improvements only appeared in models with 100+ billion parameters. With current models like GPT-4, Claude, and Gemini, you're in the right range. But if you're using older or smaller models, results may vary.
5 Ready-to-Use CoT Prompts for Real Work
Here are copy-paste prompts you can start using today. Each one includes the chain-of-thought structure built in.
1. Decision Analysis
Help me decide: {{decision_to_make}}
Walk through this step by step:
1. List the key factors I should consider
2. Analyze how each option performs on these factors
3. Identify the main risks and tradeoffs
4. Give me your recommendation with reasoning
Be specific and use my actual situation, not generic advice.
2. Pros and Cons Comparison
Compare these options: {{option_1}} vs {{option_2}}
Think through this systematically:
1. First, identify 5 criteria that matter most for this type of decision
2. Evaluate each option against each criterion
3. Note any dealbreakers or must-haves
4. Weigh the overall tradeoffs
5. Give me a clear recommendation
Don't just list pros and cons — actually reason through which factors matter more and why.
3. Root Cause Analysis
Help me figure out why this is happening: {{problem_description}}
Use this reasoning process:
1. Clarify what's actually happening vs what should be happening
2. List all possible causes (even unlikely ones)
3. For each cause, consider what evidence would confirm or rule it out
4. Based on the information available, identify the most likely root cause
5. Suggest how to verify this and what to do about it
4. Step-by-Step Planning
I need to {{goal}}.
Break this down into steps:
1. First, identify what needs to happen before anything else (prerequisites)
2. Then map out the main phases or milestones
3. For each phase, list the specific actions required
4. Flag any dependencies (what has to happen before something else can start)
5. Note potential blockers and how to handle them
Be concrete — give me actionable steps, not vague advice.
5. Complex Question Analysis
{{complex_question}}
Before answering, let's work through this carefully:
1. Break down what this question is really asking
2. Identify any assumptions built into the question
3. Consider the key factors that affect the answer
4. Reason through each factor
5. Then give me your answer with the reasoning that supports it
If there's genuine uncertainty, acknowledge it rather than pretending to be certain.
These prompts follow the same pattern: state what you need, then explicitly describe the reasoning process you want the AI to follow. The structure guides the AI through thorough analysis instead of letting it jump to conclusions.
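The {{variable}} placeholders are just markers for the parts you swap out each time. If you keep templates as plain text, a few lines of Python can fill them in. The fill_template function below is a hypothetical helper written for this article, not any particular tool's API:

```python
# Fill {{variable}} placeholders in a saved prompt template.
# fill_template is a hypothetical helper, not any particular tool's API.
import re

def fill_template(template: str, **values: str) -> str:
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in values:
            raise KeyError(f"No value supplied for placeholder '{name}'")
        return values[name]
    return re.sub(r"\{\{(\w+)\}\}", replace, template)

decision_prompt = fill_template(
    "Help me decide: {{decision_to_make}}\n\n"
    "Walk through this step by step:\n"
    "1. List the key factors I should consider\n"
    "2. Analyze how each option performs on these factors\n"
    "3. Identify the main risks and tradeoffs\n"
    "4. Give me your recommendation with reasoning",
    decision_to_make="whether to migrate our docs to a new platform",
)
print(decision_prompt)
```

Raising an error on a missing placeholder beats silently sending a prompt with {{decision_to_make}} still sitting in it.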
[Image: A prompt template card with variable placeholders being customized for different tasks]
If you find yourself reusing these prompts — swapping in different decisions, problems, or questions each time — a tool like PromptNest lets you save them with the {{variables}} already in place. When you need one, just fill in the blanks and copy the complete prompt.
Troubleshooting: When the Reasoning Goes Wrong
Sometimes you'll use chain-of-thought prompting and the AI will show its steps... but still reach a wrong conclusion. Here's how to handle that.
The reasoning looks fine but the conclusion is wrong. The AI may have started from a faulty assumption. Ask: "What assumptions are you making here? List them explicitly." Often the error is in an unstated premise, not the logic itself.
The AI skipped important factors. Reply with: "You didn't consider {{factor}}. How does that change your analysis?" The AI will incorporate the new information and often revise its conclusion.
The reasoning is circular or vague. Ask for more specificity: "In step 2, you said 'this could be risky.' What specific risks are you referring to, and how would you quantify them?" Forcing concrete details exposes fuzzy thinking.
You suspect the AI is overconfident. Try: "Play devil's advocate. What's the strongest argument against this conclusion?" This often surfaces weaknesses the AI glossed over the first time.
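If you're working through an API rather than a chat window, each of these follow-ups is just another turn appended to the conversation. A minimal sketch, again assuming OpenAI's Python SDK and the stand-in "gpt-4o" model name; the key detail is that the model's own reasoning goes back in as an assistant message before your challenge.

```python
# Challenge a chain-of-thought answer by continuing the conversation.
# Same OpenAI SDK and stand-in "gpt-4o" model name as the earlier sketches.
from openai import OpenAI

client = OpenAI()
messages = [{
    "role": "user",
    "content": "Should we self-host our analytics stack? "
               "Let's think step by step.",
}]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
answer = first.choices[0].message.content

# Send the model's own reasoning back, followed by a pointed follow-up.
messages += [
    {"role": "assistant", "content": answer},
    {"role": "user",
     "content": "What assumptions are you making here? List them explicitly."},
]
second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)
```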
The point of chain-of-thought prompting isn't just to get better answers — it's to make the AI's reasoning visible so you can catch and correct errors. Treat the first response as a starting point, not a final answer.
Start Using Chain-of-Thought Today
You don't need to memorize techniques or follow complicated frameworks. Just remember the core idea: when you need the AI to actually think instead of guess, ask it to show its work.
Start with one task you regularly use AI for — something involving analysis, comparison, or troubleshooting. Add "Let's think through this step by step" and see how the response changes. Once you see the difference, you'll start recognizing when to use it.
If you want to build a library of reasoning prompts like the ones above, you can save them anywhere — a note app, a doc, whatever you already use. Or if you'd prefer something purpose-built, PromptNest is a free desktop app that keeps your prompts organized with variables built in. Either way, the key is having your best prompts ready when you need them — not buried in old chat histories.
The difference between AI that helps you think and AI that just sounds confident often comes down to five words: "Let's think step by step."