AI Generated, Human Reviewed

ChatGPT review

OpenAI's conversational AI — the most widely deployed AI assistant in the world, now running on GPT-5.4. Six tiers from free to $200/month. One of the most commercially important, ethically contested AI products of the decade.


ChatGPT is the consumer-facing AI assistant built on OpenAI's GPT model series. Since its 2022 launch it has become the reference point for general-purpose AI — used for writing, coding, analysis, research, image generation, and agentic task completion. As of March 2026, it runs on GPT-5.4, OpenAI's most capable model to date.

The product has grown from a single free chatbot into a six-tier subscription platform. That expansion has come with increasing ethical complexity: advertising introduced to lower tiers in February 2026, documented use in U.S. military operations, and an ongoing dispute over safeguard removal. Those issues are documented in the Cases library.


Conversational reasoning

Multi-turn dialogue with persistent context. GPT-5.4 Thinking available on Plus and above for step-by-step reasoning on complex problems.

Deep Research

Autonomous multi-source research that synthesises findings into structured reports. 10 runs/month on Plus; 250/month on Pro.

Image generation (DALL·E)

Text-to-image generation integrated directly into the chat interface. Available on Plus and above.

Agent Mode + Codex

Autonomous task execution — browsing, writing files, running code. Plus and above only.

Memory

Persists context across conversations. Configurable and deletable in account settings.

Custom GPTs

Build and deploy specialised AI assistants with custom instructions, knowledge, and tools. Plus and above.


Ads on Free and Go tiers: OpenAI began serving ads in the US on both plans from February 9, 2026. Plus, Pro, Business, and Enterprise remain ad-free. All prices USD.

Free

$0/mo

Limited GPT-5.2 access. Ads in US. Suitable for occasional use only.

  • GPT-5.2 (tight limits)
  • Basic image generation
  • Limited memory
  • Ads (US)

Go

$8/mo

More messages and uploads than Free. Ads included. No advanced features.

  • GPT-5.2 Instant (more msgs)
  • More uploads + images
  • No Sora, Codex, Agent Mode
  • Ads included

Plus

$20/mo

Full model and feature suite for daily professional use. Ad-free.

  • GPT-5.4 + GPT-5.4 Thinking
  • Deep Research (10/mo)
  • Sora, Codex, Agent Mode
  • Image generation + memory
  • Ad-free

Pro

$200/mo

GPT-5.4 Pro mode. 250 Deep Research runs/month. Extended context.

  • GPT-5.4 Pro (enhanced reasoning)
  • Deep Research (250/mo)
  • 400K reasoning context window
  • Near-unlimited usage
  • Ad-free

Business

$25/user/mo

Annual billing. Secure team workspace with admin controls.

  • Full Plus features per user
  • Data not used for training
  • SAML SSO + MFA
  • SOC 2 Type 2 aligned
  • Admin console

Enterprise

Custom

Contact OpenAI sales. Full governance and compliance infrastructure.

  • Extended context (250 pages)
  • SCIM + RBAC + EKM
  • Data residency options
  • 24/7 support + SLAs
  • GDPR/CCPA compliance tools

Ethics assessment — summary

ChatGPT is the most thoroughly documented AI product for both capability and harm. OpenAI publishes safety research and maintains usage policies. It has also introduced advertising to lower tiers, been implicated in military targeting workflows, and faced credible reporting about enforcement gaps between stated policy and observed conduct.

The introduction of ads in February 2026 is a material governance change: the company controlling what the model says now has a financial relationship with advertisers. The full Ethics Score breakdown appears below.


Occasional users: Free tier works for low-frequency queries and basic writing — if you can tolerate ads and usage caps.
Professionals + freelancers: Plus at $20/month is the practical choice — full model suite, image generation, Deep Research, ad-free.
Researchers + engineers: Pro ($200/month) only justifies its cost if you routinely exhaust Plus limits or need extended reasoning.
Teams with compliance needs: Business tier. The primary value is the default data-privacy guarantee and admin governance.

QUESTIONS

Is ChatGPT free to use?

Yes. The Free tier gives access to GPT-5.2 at no cost. Since February 2026, the US Free tier includes advertising. For consistent daily use, Plus at $20/month is the practical minimum.

What is the difference between ChatGPT Plus and Pro?

Plus ($20/month) gives the full feature suite including GPT-5.4 Thinking, Deep Research (10/month), Sora, Codex, and Agent Mode — ad-free. Pro ($200/month) adds GPT-5.4 Pro mode, 250 Deep Research runs per month, and near-unlimited usage. The 10x price difference is only justified for users who consistently hit Plus limits.

Does ChatGPT use my data to train its models?

On Free, Go, and Plus tiers, conversations may be used to train OpenAI's models unless you opt out in Settings. On Business tier, data is not used for training by default. On Enterprise, full data governance controls apply.
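The tier-by-tier defaults described above can be summarised as a small lookup. This is an illustrative sketch, not an OpenAI API — the tier names and boolean defaults simply restate this review's claims:

```python
# Illustrative sketch of the data-use defaults described in this review.
# Not an OpenAI API — a plain lookup restating the tier policy above.

# True  => conversations may be used for training unless the user opts out
# False => data is not used for training by default
TRAINS_BY_DEFAULT = {
    "Free": True,
    "Go": True,
    "Plus": True,
    "Business": False,    # data-privacy guarantee by default
    "Enterprise": False,  # full governance controls apply
}

def used_for_training(tier: str, opted_out: bool = False) -> bool:
    """Return whether a conversation on this tier may be used for training."""
    return TRAINS_BY_DEFAULT[tier] and not opted_out

print(used_for_training("Plus"))                  # True: opt-out is not the default
print(used_for_training("Plus", opted_out=True))  # False
print(used_for_training("Business"))              # False
```

The key asymmetry the sketch makes visible: on consumer tiers the burden is on the user to flip the flag, while on Business and Enterprise the safe default is built in.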

Is ChatGPT safe to use for sensitive work?

Not on Free, Go, or Plus without explicit opt-out from training. Business tier provides the data-privacy guarantee by default. For regulated industries, only Enterprise with custom terms provides sufficient control.


Ethics Score: 31 / 60

ChatGPT scores 31/60 — below the midpoint. OpenAI publishes more safety research than most AI companies and maintains meaningful technical controls. It loses ground on corporate conduct: advertising introduced to lower tiers in February 2026, a Department of Defense agreement signed rapidly after Anthropic refused, and documented enforcement gaps between stated policy and observed use. The score reflects a company that takes safety seriously in some dimensions and treats it as negotiable in others.

Transparency 6 / 10

OpenAI publishes system cards, usage policies, and safety research. Model architecture and training data sources are not disclosed. The GPT-5.x model series has no public technical report equivalent to earlier GPT-4 documentation. Advertising relationships introduced in 2026 are not yet reflected in transparency reporting.

Data Privacy 5 / 10

Free, Go, and Plus tier conversations may be used for model training unless manually opted out — opt-out is not the default. Business tier provides a data-use guarantee by default; Enterprise adds full governance controls. The February 2026 ad introduction creates a financial relationship between OpenAI and advertisers that is not yet fully reflected in privacy documentation. GDPR and CCPA compliance tools exist at Enterprise tier only.

Safety Architecture 6 / 10

OpenAI maintains inference-time classifiers, RLHF safety training, and red-teaming programmes. Usage policy covers most documented harm categories. The agentic features — Agent Mode, Codex, Deep Research — introduce real-world action capabilities where safety evaluation methodology has not been publicly documented to the same standard as the base model. No published independent audit of safety controls exists.

Corporate Conduct 4 / 10

This dimension carries the most documented contradictions. OpenAI signed a DoD agreement rapidly after Anthropic refused, raising questions about whether stated safeguards are contractually enforced or policy-only. Advertising was introduced to lower tiers without prior public consultation. The company's stated mission — "ensure that artificial general intelligence benefits all of humanity" — sits in direct tension with a tiered model that delivers diminishing safety guarantees at lower price points and advertising revenue that creates structural incentive conflicts.

Bias Mitigation 5 / 10

OpenAI publishes research on bias and fairness. No independent external audit of bias performance has been published with full methodology access. Training data composition — a primary driver of model bias — is not disclosed. Custom GPTs can be configured to reduce or remove default safety behaviours, creating deployment-level bias risk that the base model evaluation does not capture.

Regulatory Alignment 5 / 10

OpenAI is engaged with EU AI Act compliance processes and has a GDPR-oriented privacy policy. Active litigation across multiple jurisdictions on training data and copyright. No binding regulatory finding has been issued against OpenAI to date, but several investigations are ongoing. The DoD agreement places OpenAI in a regulatory grey zone regarding military AI governance — no public framework governs this use case.


The Corporate Conduct score requires specific documentation. The OpenAI / Department of Defense agreement — reached in early 2026 after Anthropic refused comparable terms — is the most significant governance event in ChatGPT's recent history.

What happened: After Anthropic refused a Pentagon demand to remove safeguards against autonomous weapons and mass surveillance use, OpenAI moved quickly to formalise its own DoD partnership. OpenAI stated the deal included restrictions on autonomous weapons. Following public criticism, it clarified that intelligence agency use cases are excluded unless additional contract modifications are made.

The governance problem: The clarifications came after public backlash — not before signing. Whether OpenAI's stated guardrails are contractually defined, technically enforced, or described in a press release is not publicly verifiable. This is the core finding of Framework 04: Policy vs Enforcement — without independent audit rights and public disclosure of enforcement mechanisms, the public cannot distinguish a real constraint from a policy statement.

The Doctorow analysis (Pluralistic, April 2026): Cory Doctorow's reporting frames the OpenAI/DoD sequence as a pattern of "ethics as positioning" — commitments stated strongly enough to generate positive press, then amended quietly under commercial pressure. The sequencing of the announcement, the backlash, and the clarifications supports this reading.

Verified

OpenAI signed a DoD agreement in early 2026 after Anthropic refused comparable terms. Confirmed by multiple credible outlets and not disputed by either company.

Verified

OpenAI publicly stated the agreement excludes autonomous weapons use. This is a policy commitment. Whether it is contractually defined or technically enforced has not been publicly confirmed.

Verified

OpenAI clarified post-signing that intelligence agency use cases require additional contract modifications. The clarification followed public criticism of the original announcement.

Probable

OpenAI moved to fill the contract gap specifically because Anthropic's refusal created a procurement opportunity. Reported consistently across multiple outlets; not confirmed in direct statements from OpenAI.

Unverified

Whether the DoD agreement contains contractual definitions of "autonomous weapons" specific enough to be enforceable — or relies on OpenAI's own interpretation — is not publicly known. No contract text or independent summary has been published.

The full account of the Pentagon dispute — including the 5:01pm Friday deadline, the Anthropic refusal, and the subsequent use of Claude in Iran operations despite a ban — is documented in Case BC-001: Anthropic vs. the Pentagon. The analytical framework for assessing whether OpenAI's stated guardrails constitute real enforcement is in Framework 04: Policy vs Enforcement.


OpenAI began serving ads in the US on Free and Go tiers from February 9, 2026. This is a material change to ChatGPT's governance model — not just its business model.

When the company controlling what the AI says has a financial relationship with advertisers, a structural incentive conflict exists. The risk is not that OpenAI will directly instruct the model to promote products — it is that the presence of advertising revenue creates pressure, over time, on decisions about what the model recommends, how it frames choices, and what it declines to say. No published policy addresses how advertising relationships are firewalled from model behaviour.

For context: Claude.ai — Anthropic's consumer product — is explicitly ad-free at all tiers, and Anthropic's published policy prohibits advertisers from paying to influence model outputs. No equivalent policy exists at OpenAI for its ad-supported tiers.

Ethics Score last updated: April 2026 · Methodology: six-dimension framework →