
What is conversational AI? — types, risks and governance

Chatbots, virtual assistants and large language model interfaces — the risk profile most organisations encounter first.


AI Generated, Human Reviewed

Conversational AI refers to systems designed to interact with humans through natural language — chatbots, voice assistants, and interfaces built on large language models such as ChatGPT, Claude, and Gemini. These systems are among the most widely deployed AI types globally, handling everything from customer support queries to complex research tasks.

The governance concerns are significant precisely because of scale. A conversational AI system interacting with millions of users daily creates risks that a narrow enterprise tool does not: hallucination at scale, identity deception, psychological manipulation, and the difficulty of enforcing consistent safety behaviour across all interaction contexts.


Hallucination

Factual errors presented confidently. LLMs generate plausible-sounding false information. Documented harms include fabricated legal citations, false medical guidance, and invented biographical facts.

Identity deception

Users unaware they are talking to AI. The EU AI Act requires disclosure (Article 50 of the final text; Article 52 in earlier drafts). Several deployments have been documented concealing their AI nature to build emotional trust.

Jailbreaking

Safety guardrails bypassed via prompt manipulation. Published jailbreak techniques exist for every major model. Companies’ enforcement records vary significantly.

Data retention

Conversation data stored and used for training. Consent mechanisms are inconsistent. Sensitive disclosures made in chat contexts have been retained without clear user knowledge.


EU AI Act classification: Conversational AI deployed for direct user interaction falls under General Purpose AI (GPAI) transparency requirements. Systems must disclose their AI nature to users (Article 50 of the final text; Article 52 in earlier drafts). Emotion recognition applications in workplaces and education are separately prohibited under Article 5. High-risk classification applies when the system is used in employment, credit, or essential service contexts.
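The disclosure duty above is straightforward to operationalise at the session layer. A minimal sketch, assuming a hypothetical `open_session` helper and notice wording (neither is prescribed by the Act; the exact form of disclosure is left to the deployer):

```python
# Illustrative only: the notice text and function names below are
# hypothetical, not taken from the EU AI Act or any real product.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Conversations may be stored; see our privacy notice."
)

def open_session(generate_reply, first_user_message: str) -> list[str]:
    """Return the opening transcript: the AI-nature notice is emitted
    before any model output reaches the user."""
    transcript = [AI_DISCLOSURE]
    transcript.append(generate_reply(first_user_message))
    return transcript
```

The point of the pattern is ordering: the disclosure is part of the session-opening logic itself, so no model output can precede it regardless of what the model generates.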


QUESTIONS

What is conversational AI?

Conversational AI is any system that interacts with humans through natural language. This includes rule-based chatbots, retrieval-augmented systems, and large language model (LLM) interfaces. Modern conversational AI is predominantly LLM-based, capable of open-ended dialogue across almost any topic.
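The difference between the system types above is easiest to see at the rule-based end of the spectrum. A minimal sketch of a rule-based chatbot, with hypothetical patterns and canned replies (LLM-based systems replace this lookup with open-ended text generation):

```python
import re

# Illustrative rule-based chatbot: each rule maps a keyword pattern to a
# fixed response. Patterns and replies here are invented for the sketch.
RULES = [
    (re.compile(r"\b(hi|hello|hey)\b", re.I),
     "Hello! How can I help you today?"),
    (re.compile(r"\b(refund|return)\b", re.I),
     "I can help with returns. What is your order number?"),
    (re.compile(r"\b(human|agent)\b", re.I),
     "Connecting you to a human agent."),
]

FALLBACK = "Sorry, I didn't understand that. Could you rephrase?"

def reply(message: str) -> str:
    """Return the first matching canned response, else the fallback."""
    for pattern, response in RULES:
        if pattern.search(message):
            return response
    return FALLBACK
```

Because every possible output is enumerated in advance, rule-based systems cannot hallucinate; the trade-off is that anything outside the rule set falls through to the fallback, which is why modern deployments favour LLMs despite their larger risk profile.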

What are the main risks of conversational AI?

The four documented risk categories are hallucination (false information stated confidently), identity deception (users unaware they are talking to AI), jailbreaking (bypassing safety controls), and data retention issues (conversation data stored without clear consent). The Cases library documents specific governance failures across each.

Is conversational AI regulated under the EU AI Act?

Yes. The EU AI Act imposes GPAI transparency obligations on LLM providers and requires that users be informed when interacting with an AI system. Higher-risk use cases — employment screening, credit decisions — require additional conformity assessments.