ChatGPT prompts — how to write them and make them work

Prompt structure, examples by category, and the mistakes that produce bad outputs.


Most bad ChatGPT prompts are not bad because the model is weak — they are bad because they are vague, missing context, or structured in a way that leaves the model no choice but to produce generic output. The difference between a prompt that gets you something usable and one that wastes your time is almost always in the structure, not the topic.

This page covers how ChatGPT prompts work, the four components that determine output quality, prompt examples across common use cases, and the most frequent mistakes — with what to do instead.

If you are evaluating ChatGPT for organisational or professional use, see the ChatGPT Ethical AI Review — covering transparency, data handling, safety controls, and documented corporate conduct.


The four components of an effective ChatGPT prompt

Every high-performing ChatGPT prompt contains some combination of four elements. You do not always need all four — but knowing which one is missing usually explains why an output fell short.

Prompt anatomy — what each component does
Role

Who the model should behave as. "Act as a senior financial analyst" or "You are an experienced copywriter." Sets the register, vocabulary, and level of expertise the response should reflect. Omitting this produces generalist output when you need specialist output.

Context

The situation the model needs to understand. What you are working on, who the audience is, what constraints exist. The more specific, the narrower the output space — which is what you want. Vague context produces vague outputs.

Task

What you want it to do. A single, clearly stated action: summarise, rewrite, compare, generate, analyse, extract. Multiple tasks in one prompt usually means the first task gets handled well and the rest get diluted. Break them up.

Format

What the output should look like. Bullet points, a table, a 300-word paragraph, a numbered list, a JSON object. If you do not specify format, the model picks one — and it rarely picks the one that fits your workflow.
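The four components can be assembled mechanically. A minimal sketch in Python (the function name and all component text below are illustrative, not a fixed formula):

```python
def build_prompt(role: str, context: str, task: str, fmt: str) -> str:
    """Assemble the four components into a single prompt string."""
    return "\n\n".join([
        f"You are {role}.",       # Role: sets register and level of expertise
        f"Context: {context}",    # Context: narrows the output space
        f"Task: {task}",          # Task: one clearly stated action
        f"Format: {fmt}",         # Format: what the output should look like
    ])

prompt = build_prompt(
    role="a senior financial analyst",
    context="a board briefing on quarterly results for a non-technical audience",
    task="summarise the attached figures in plain language",
    fmt="three bullet points, each under 25 words",
)
```

Dropping any one argument from a call like this is a quick way to see which component an underperforming prompt was missing.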


ChatGPT prompts by use case

The following prompts are structured using the four-component framework. Each includes a brief note on what makes it work.

Writing & Editing

Rewrite for clarity

"Rewrite the following paragraph for a non-technical audience. Keep it under 80 words. Avoid jargon. Maintain the original meaning: [paste text]"

Works because: task is singular, format constraint is explicit, audience is defined. The model has no room to be vague.

Research & Summarisation

Extract key points

"You are a research analyst. Read the text below and extract the five most important claims. For each, write one sentence summarising the claim and one sentence on what evidence supports it: [paste text]"

Works because: role is set, output structure is pre-defined, and the two-part format per item forces the model to separate claim from evidence.

Work & Productivity

Draft a professional email

"Draft a professional email declining a meeting request. Tone: polite but firm. Length: under 100 words. Do not offer an alternative time. Context: [brief explanation]"

Works because: tone, length, and a specific constraint (no alternative time) prevent the model from producing a generic polite decline that misses the point.

Analysis

Compare two options

"Compare [Option A] and [Option B] for [specific use case]. Format: a table with rows for Cost, Ease of use, Scalability, Key risk. End with a one-sentence recommendation."

Works because: the table format forces parallel structure and prevents the model from writing a narrative that buries the comparison.

Coding

Explain code simply

"Explain what the following code does in plain English. Assume the reader has no programming background. Use an analogy if it helps. Do not rewrite the code: [paste code]"

Works because: audience level is defined, analogy permission is explicit, and the constraint (no rewrite) prevents a code-heavy response that defeats the purpose.

Strategy & Planning

Generate objections

"Act as a sceptical stakeholder reviewing the following proposal. List the five strongest objections someone in that role would raise. Be specific — no generic concerns: [paste proposal summary]"

Works because: adversarial role-setting produces genuinely critical output. Without the role, the model defaults to balanced, non-committal feedback.


Why most ChatGPT prompts underperform

Most prompt failures follow a small number of repeating patterns. Recognising them is faster than iterating randomly.

Common mistakes — what happens and what to do instead

One-line prompts with no context

The model has nothing to anchor to, so it produces the most statistically average response to your topic. Add role, audience, and at least one specific constraint before you send anything.

Asking multiple things at once

"Summarise this, then rewrite it, then suggest improvements" produces three mediocre outputs instead of one good one. Send them as separate prompts in sequence.

No format instruction

The model picks the format it has seen most often for your topic — usually a block of prose or an undifferentiated bullet list. Tell it exactly what you want: table, numbered steps, two paragraphs, JSON.

Accepting the first output

The first response is a starting point, not a final product. Follow-up prompts — "make it shorter", "make the third point more specific", "remove the last paragraph" — are where quality comes from.

Vague length instructions

"Keep it brief" is not a constraint. "Under 150 words" is. Specific word counts consistently produce tighter outputs than qualitative instructions.

ChatGPT prompts contain information about you and your work. What you paste into the prompt box — documents, client data, internal strategy, personal details — is processed by OpenAI's systems. By default, conversations in ChatGPT may be used to train future models unless you opt out in settings.

For professional and organisational use, review your organisation's data handling policy before pasting anything sensitive into a prompt. The principle applies to all AI tools, not only ChatGPT.

For a full assessment of ChatGPT's data practices, training data policies, and documented conduct: see the ChatGPT Ethical AI Review on BrokenCtrl.


QUESTIONS

What are ChatGPT prompts?

ChatGPT prompts are the instructions you give the model to produce a response. A prompt can be a question, a command, a role assignment, a document to process, or any combination of these. The quality of a ChatGPT prompt directly determines the quality of the output — vague prompts produce vague outputs, and structured prompts with clear role, context, task, and format instructions consistently produce better results.

How do you write a good ChatGPT prompt?

A good ChatGPT prompt has four components: a role (who the model should behave as), context (the situation and constraints), a single clearly stated task, and a format instruction (how the output should be structured). You do not always need all four — but identifying which component is missing usually explains why an output underperformed. Follow-up prompts that refine the output are normal and often necessary.

What is prompt engineering?

Prompt engineering is the practice of designing inputs to language models to reliably produce useful outputs. It covers prompt structure, role assignment, few-shot examples (showing the model what you want before asking for it), chain-of-thought instructions (asking the model to reason step by step), and iterative refinement. At a practical level, it means learning which prompt patterns work for which tasks — and building reusable templates from them.
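A few-shot prompt is easy to see in miniature. A sketch of the pattern, with two worked examples placed before the real input (the reviews and labels below are made up for illustration):

```python
# Few-shot prompt: show the model the desired input/output pattern twice,
# then leave the final slot open for it to complete.
FEW_SHOT = """Classify the sentiment of each review as positive or negative.

Review: "Arrived on time and works perfectly."
Sentiment: positive

Review: "Stopped charging after two days."
Sentiment: negative

Review: "{review}"
Sentiment:"""

prompt = FEW_SHOT.format(review="Exactly what I needed, great value.")
```

The model continues the established pattern, so the examples do the work that a long format instruction would otherwise have to do.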

Are ChatGPT prompts private?

By default, ChatGPT conversations may be used by OpenAI to improve its models. Users can opt out of this in settings under Data Controls. ChatGPT Team and Enterprise plans offer stronger data separation. Anything pasted into a prompt — documents, client information, internal data — is processed by OpenAI's systems. For organisations, review your data handling policy before using ChatGPT with sensitive material.

What is a system prompt in ChatGPT?

A system prompt is an instruction given to the model before the conversation starts, typically used to set its role, tone, and constraints for the entire session. In the ChatGPT interface, this is available through Custom Instructions in settings. In the API, it is passed as a separate system message. System prompts are how developers and organisations shape model behaviour at a session level — they persist across the conversation in a way that individual user prompts do not.
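In the API, the separation between the system message and user messages looks like this. A sketch of the message structure used by OpenAI's chat-style APIs (the instruction wording is illustrative; the live call is shown only as a comment):

```python
# The system message is set once and shapes every turn in the session;
# user messages change with each request.
messages = [
    {
        "role": "system",
        "content": (
            "You are a strict technical editor. Answer in UK English, "
            "under 150 words, no bullet points."
        ),
    },
    {
        "role": "user",
        "content": "Review the following paragraph for clarity.",
    },
]

# With the official OpenAI Python SDK, this list would be sent as, e.g.:
# client.chat.completions.create(model="gpt-4o", messages=messages)
```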

What is the difference between a prompt and a ChatGPT template?

A prompt is a single instruction for a single output. A template is a reusable prompt structure with placeholder fields — for example, a prompt for writing a professional email where you fill in the recipient, purpose, and tone each time. Templates are useful when you repeat the same type of task regularly. Building a library of tested templates is the practical outcome of prompt engineering for most professional users.
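In code, the difference is simply a string with named placeholder fields. A minimal sketch of the email template described above (the field names and values are illustrative):

```python
# Reusable template: the structure is fixed, the specifics change each time.
EMAIL_TEMPLATE = (
    "Draft a professional email to {recipient} about {purpose}. "
    "Tone: {tone}. Length: under {max_words} words."
)

# Filling the placeholders yields a complete, ready-to-send prompt.
prompt = EMAIL_TEMPLATE.format(
    recipient="a long-standing client",
    purpose="a two-week delay to the delivery date",
    tone="apologetic but confident",
    max_words=120,
)
```

A small library of strings like this, one per recurring task, is what "building templates" means in practice.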

Last updated: April 2026