What are the different types of AI? A practical guide

Understanding the different types of AI matters because the category determines the risk, the governance gap, and what ethical oversight actually requires. This guide covers the main AI types in active deployment, with links to sourced ethical reviews of 169 tools.

Most guides to AI types focus on technical architecture — narrow vs general, reactive vs limited memory. This one focuses on deployment categories: the types of AI systems that are actually in use, the governance questions they raise, and where the documented failures tend to cluster. In practice, twelve categories cover the majority of what organisations and individuals encounter.

Each category links to the Ethical AI Reviews section, where tools are scored across six governance dimensions — transparency, data privacy, safety architecture, corporate conduct, bias mitigation, and regulatory alignment.

Looking for a specific tool rather than a category? Ethical AI Reviews scores 169 tools individually. The Cases library documents governance failures by type.


Type 01

Conversational AI

Chatbots and virtual assistants designed for natural language interaction. Includes general-purpose assistants (ChatGPT, Claude, Gemini) and task-specific dialogue systems.

Type 02

Writing Assistants

AI tools for content creation, editing, grammar correction, and automated copywriting. Governance concerns centre on training data sourcing and attribution.

Type 03

AI in Finance

Risk assessment, algorithmic trading, fraud detection, and robo-advisory systems. Among the most heavily regulated AI deployment contexts globally.

Type 04

Customer Service Bots

Automated customer interaction systems deployed across retail, banking, and utilities. Documented failure modes cluster around complaint handling, escalations, and interactions with vulnerable users.

Type 05

AI in Education

Adaptive learning platforms, tutoring systems, and automated assessment tools. Privacy concerns are acute given the involvement of minors and sensitive learning data.

Type 06

Image & Video AI

Generative image and video tools, photo editors, and synthetic media platforms. Non-consensual image manipulation is the primary documented harm category.

Type 07

Speech AI

Voice recognition, text-to-speech, voice cloning, and synthesis tools. Voice cloning without consent is an emerging governance gap with limited regulatory coverage.

Type 08

Marketing Automation AI

AI-driven campaign management, personalisation engines, and customer segmentation. Data collection scope and consent mechanisms are the primary governance considerations.

Type 09

Retail & E-Commerce AI

Recommendation engines, dynamic pricing, and inventory management systems. Dynamic pricing algorithms have attracted regulatory scrutiny in multiple jurisdictions.

Type 10

Personal Assistants

Consumer-facing AI assistants for scheduling, information retrieval, and smart home control. Always-on listening and data retention are the primary documented privacy concerns.

Type 11

Creative AI

Music generation, art creation, and design tools powered by generative models. Copyright and attribution disputes are the dominant legal and ethical exposure.

Type 12

AI for Gaming

NPC behaviour, procedural generation, matchmaking, and anti-cheat systems. Behavioural profiling and engagement optimisation raise emerging player protection concerns.


FEATURED AI TOOLS UNDER REVIEW

Browse all 169 tools →

QUESTIONS

What are the different types of AI?

The different types of AI can be categorised by capability or by deployment context. By capability: reactive machines (respond to current input only), limited memory AI (learn from recent data — most commercial AI today), and theory-of-mind and self-aware systems, which remain theoretical. By deployment — which is more useful for governance — the main types are conversational AI, writing assistants, AI in finance, customer service bots, educational AI, image and video AI, speech AI, marketing automation, retail AI, personal assistants, creative AI, and gaming AI. Each category carries different risk profiles, regulatory exposure, and documented failure modes.

What is the most common type of AI in use today?

The most widely deployed AI type is narrow AI — systems trained to perform specific tasks rather than general reasoning. Within that, conversational AI (chatbots and assistants) and recommendation systems are the most pervasive by user contact volume. Most people interact with AI dozens of times daily through recommendation engines, spam filters, navigation apps, and search — often without recognising it as AI.

What is the difference between narrow AI and general AI?

Narrow AI (also called weak AI or ANI) is designed for a specific task — translating language, detecting fraud, or generating images. Every commercially deployed AI system today is narrow AI. General AI (AGI) would perform any intellectual task a human can. It does not exist in any verified form. The distinction matters for governance: narrow AI's risks are specific and documentable; AGI risks are largely speculative. BrokenCtrl focuses on narrow AI, where documented harms are already occurring.

Which type of AI carries the highest regulatory risk under the EU AI Act?

Under the EU AI Act, high-risk AI systems include those used in critical infrastructure, education, employment, essential services, law enforcement, migration, and justice. Finance AI (credit scoring, risk assessment) and AI in education both fall into high-risk categories requiring conformity assessments, human oversight mechanisms, and audit trails before deployment. Conversational AI used for emotion recognition in workplaces is separately restricted. The Frameworks section covers how the regulatory alignment dimension is scored across tool reviews.

Where can I find ethical reviews of AI tools by type?

The Ethical AI Reviews section covers 169 tools scored across six dimensions: transparency, data privacy, safety architecture, corporate conduct, bias mitigation, and regulatory alignment. Each score is sourced — not based on vendor claims. The full tool directory is at brokenctrl.com/shop. Documented governance failures by AI type are in the Cases library.