
Debugging a Claude Prompt: 7 Mistakes Every Beginner Makes

Claude is one of the most capable writing and coding AIs available, but most people sabotage their own results. These are the seven mistakes I see again and again when reviewing other people's prompts.

6 min read · 2026-04-15 · By Roland Hentschel
claude · prompt engineering · ai workflows · llm tips

The prompt is almost always the problem

Every week, someone sends me a Claude conversation and asks why the output is bad. In roughly nine out of ten cases, the model is doing exactly what was asked. The person just did not realise what they were asking.

Claude is not a search engine. It does not read minds, it has no memory between sessions unless you build one in, and it takes context surprisingly literally. Prompt engineering sounds like a buzzword, but it is really just the habit of writing requests in a way that matches how the model actually processes them.

After hundreds of hours using Claude Opus 4.6 for client work, and plenty of watching friends and colleagues fight with it, here are the seven mistakes I see most often.

1. Starting with the task, not the role

The worst prompt I see regularly looks like this:

Write me a landing page for a dental practice.

Claude will produce something. It will be generic. The output has no hook because the request has no hook.

The fix is not longer prompts. It is putting the role first:

You are a senior conversion copywriter who has worked on 40+ healthcare landing pages. Your pages consistently outperform generic agency copy because you lead with the patient's actual fear, not the clinic's credentials. Now write a landing page for a dental practice in Hamburg specialising in anxious patients.

The role is doing 80 percent of the work. It frames every downstream decision the model makes about tone, structure and what to emphasise. Tell Claude who it is before you tell it what to do.

2. Asking for "best practices" instead of constraints

"Write SEO-friendly content" is meaningless to the model. SEO-friendly according to whom? For which industry? At which search intent?

Replace every vague quality word in your prompt with a constraint. Instead of "make it SEO-friendly", try "target the keyword X, place it in H1 and the first 100 words, keep paragraphs under three sentences, write for the awareness stage, not the decision stage". The model cannot optimise for fuzzy adjectives. It can absolutely optimise for measurable constraints.

The same rule applies to "professional", "engaging", "modern" and "clean". Those words tell Claude nothing. Replace them with what you actually want.
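Measurable constraints have a second benefit: you can check them mechanically after generation instead of eyeballing the output. A minimal sketch in Python, using the keyword and sentence-count rules from the example above (the heuristics here are illustrative, not a real SEO audit):

```python
def check_constraints(text: str, keyword: str) -> dict:
    """Mechanically verify two measurable constraints from the prompt:
    the keyword appears in the first 100 words, and no paragraph
    runs longer than three sentences."""
    first_100_words = " ".join(text.split()[:100]).lower()
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return {
        "keyword_in_first_100_words": keyword.lower() in first_100_words,
        "paragraphs_under_three_sentences": all(
            p.count(".") + p.count("!") + p.count("?") <= 3
            for p in paragraphs
        ),
    }
```

If a check fails, you feed the failure back to Claude as a correction instruction rather than rewriting by hand.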

3. Ignoring the system prompt / project context

If you use Claude through the web app, you have access to Projects. Most people never touch them. That is a huge mistake.

A Project lets you upload reference documents, set a persistent system prompt, and give the model durable context that survives every new chat in that Project. For any recurring work (writing for a specific client, coding in a specific codebase, editing a specific tone), the one-time setup pays back within days.

If you use Claude via the API, the equivalent is the system parameter. Put the role, the constraints, the forbidden phrases and the voice rules there. Leave the user messages for the actual task.
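As a sketch, here is what that split looks like as a request payload. The model id, rules and task are placeholders, not a real configuration; with the official Anthropic Python SDK this dict maps directly onto the keyword arguments of the messages-create call:

```python
# Everything durable lives in `system`; only the task goes in the
# user message. All strings below are illustrative placeholders.
SYSTEM_PROMPT = """You are a senior conversion copywriter for healthcare clients.
Never use the phrases "cutting-edge" or "state-of-the-art".
Keep paragraphs under three sentences."""

request = {
    "model": "claude-opus-4-6",  # placeholder; use the current model id
    "max_tokens": 1024,
    "system": SYSTEM_PROMPT,     # durable role + constraints
    "messages": [
        {
            "role": "user",      # only the actual task goes here
            "content": "Write a landing page for a dental practice in Hamburg.",
        },
    ],
}
```

Because the system prompt never changes between tasks, you write it once, version it like code, and swap only the user message.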

4. Stuffing everything into one giant prompt

There is a point where a prompt becomes so long that the model starts ignoring parts of it. In my testing, that point is around 600-800 words for instructions, before you hit diminishing returns.

If your prompt is longer than that, split it. Turn your "write a 2000-word article" prompt into three steps:

  1. Outline the structure (headings, 1-sentence summary per section, target word count)
  2. Draft each section one at a time, referencing the outline
  3. Edit the full draft against the original constraints
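The three steps above map naturally onto three separate model calls. A sketch of the pipeline, where `ask` stands in for whatever call wrapper you use (the SDK, a CLI, even copy-paste into the web app):

```python
def write_article(ask, topic: str, sections: list[str]) -> str:
    """Three-step pipeline: outline, draft each section, final edit.
    `ask` is any function that takes a prompt string and returns text."""
    # Step 1: outline only
    outline = ask(
        f"Outline an article on {topic}: headings, a one-sentence "
        f"summary per section, and a target word count for each."
    )
    # Step 2: draft one section per call, always referencing the outline
    drafts = [
        ask(f"Using this outline:\n{outline}\n\nDraft only the section '{s}'.")
        for s in sections
    ]
    full_draft = "\n\n".join(drafts)
    # Step 3: edit the assembled draft against the original constraints
    return ask(f"Edit this draft against the original constraints:\n{full_draft}")
```

Each call stays well under the length where instructions start getting dropped, and a bad section can be regenerated without redoing the whole article.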

This is how experienced copywriters actually work, and Claude works better when you let it follow the same flow. For more on this, see our Claude guide.

5. Not telling Claude what the output should look like

Claude will produce prose by default. If you want JSON, a markdown table, a bulleted list or a specific XML-like structure, say so explicitly.

Return your answer as a markdown table with exactly three columns: "Objection", "Underlying fear", "1-sentence reassurance". Do not include any other text before or after the table.

The "do not include any other text" line is the one people forget. Without it, Claude will often wrap the table in a polite sentence like "Here is the table you requested", which breaks your downstream parsing.
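If you parse the output programmatically, it is worth being defensive anyway. A small sketch that keeps only the table lines even when the response arrives with a polite wrapper:

```python
def extract_markdown_table(response: str) -> str:
    """Drop any wrapper prose and keep only the markdown-table lines,
    i.e. lines that start with a pipe character."""
    table_lines = [
        line for line in response.splitlines()
        if line.lstrip().startswith("|")
    ]
    if not table_lines:
        raise ValueError("no markdown table found in model response")
    return "\n".join(table_lines)
```

The explicit "no other text" instruction plus a defensive parser covers both the common case and the occasional slip.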

6. Providing examples that contradict your instructions

This one is subtle and catches experienced people too. You write a careful prompt asking for a casual, conversational tone, then paste three example outputs that are written in stiff corporate language because those were the easiest to copy.

Claude will prioritise the examples, and in my experience it nearly always does. Examples are a stronger signal than instructions because they show the model what success actually looks like. If your examples contradict your written instructions, the examples win.

The fix: either rewrite the examples to match the tone you want, or do not include examples at all and be more specific in the instructions.

7. Not reading the output critically

The final mistake is downstream. People paste the output into a document without really reading it.

Claude Opus 4.6 is very good, but it still hallucinates specifics (numbers, dates, product names) maybe 5-10 percent of the time. It still pads with generic filler when the topic runs thin. It still defaults to structures it has seen a million times (three-paragraph intro, five bullet points, one-sentence conclusion).

Your job is to be the editor. Read the output as if a junior writer produced it. Cut the filler. Verify the specifics. Replace any sentence that could have been written by any competitor.

The AI is a first draft engine. It is not a publishing pipeline.

The underlying pattern

All seven mistakes come from the same assumption: that Claude understands what you mean, not just what you wrote. It does not. It understands exactly what you wrote, interpreted through the statistical patterns of its training data.

The upside of this is that once you stop expecting mind-reading, prompt engineering becomes pretty simple. Be explicit. Set the role. Define constraints. Structure the output. Verify the result. That is it.

If you want the deeper framework, our ChatGPT vs Claude comparison covers where each model rewards different prompt styles. And for a structured walk through how I use Claude Code for development work, the Claude guide goes into more detail.

But none of that matters if you keep making the seven mistakes above. Fix those first.


Roland Hentschel

AI & Web Technology Expert

Web developer and AI enthusiast helping businesses navigate the rapidly evolving landscape of AI tools. Testing and comparing tools so you don't have to.
