
Image and video AI — deepfakes, consent violations and synthetic media governance

Generative image and video tools and synthetic media platforms: the category where non-consensual manipulation and copyright disputes are most extensively documented.


AI Generated, Human Reviewed

Image and video AI includes generative image tools (Midjourney, DALL-E, Stable Diffusion), video synthesis platforms (Sora, Runway), AI photo editors with generative features, and deepfake tools. These technologies have legitimate creative and commercial applications — and have also produced some of the most serious documented harms in the AI landscape.

The governance concerns are dominated by two categories: non-consensual intimate imagery (NCII) — AI-generated or AI-manipulated sexual content of real people without consent — and deepfakes used for fraud, political manipulation, or reputational damage. Both are documented at scale, and both have prompted the most urgent regulatory responses of any AI category.


Non-consensual intimate imagery

AI-generated sexual content depicting real people. Documented at scale, including content involving public figures and, critically, minors. Several platforms have been investigated by regulators; the image-generation features of Grok on X are one documented case.

Deepfakes for fraud

Voice and video cloning used for financial fraud and identity theft. CEO fraud, romance scams, and identity verification bypass using AI-generated media are documented and growing in volume.

Political manipulation

Synthetic media used in election interference. AI-generated images and videos of political figures have been documented in multiple election cycles as disinformation tools.

Training data copyright

Models trained on copyrighted images without consent. Multiple class-action lawsuits from artists and stock image companies are active against generative image AI providers.


EU AI Act classification: Deepfake content — AI-generated or manipulated video, audio, or images of real people — must be labelled as such under Article 50. The creation of CSAM using AI is prohibited. Emotion recognition systems are restricted. Biometric categorisation from images raises additional data protection requirements. Member states are also advancing national legislation specifically targeting NCII.
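Article 50 imposes a disclosure duty rather than a prescribed format: the Act does not mandate a specific labelling schema. As a minimal, hypothetical sketch of what machine-readable disclosure could look like, a platform might attach a JSON sidecar record to each generated asset. All field names below are illustrative assumptions, not terms from the Act; production systems typically rely on content-provenance standards such as C2PA Content Credentials instead of an ad hoc format like this.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_disclosure_record(media_bytes: bytes, generator: str) -> dict:
    """Build a minimal AI-disclosure record for a generated media file.

    Hypothetical sketch: the field names are illustrative only and are not
    mandated by the EU AI Act; real deployments use provenance standards
    such as C2PA rather than this ad hoc schema.
    """
    return {
        "ai_generated": True,  # Article 50-style transparency flag
        "generator": generator,  # name of the tool that produced the media
        # Hash binds the record to the exact asset it describes.
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }

# Usage: label a dummy generated image and serialise as a JSON sidecar file.
record = make_disclosure_record(b"\x89PNG dummy bytes", generator="example-image-model")
sidecar_json = json.dumps(record, indent=2)
```

A hash-bound sidecar survives file copying but not re-encoding, which is one reason embedded, cryptographically signed manifests (the C2PA approach) are generally preferred for provenance.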


QUESTIONS

What is image and video AI?

Image and video AI refers to systems that generate, edit, or manipulate visual content using machine learning — including text-to-image generators, deepfake tools, AI-powered photo editors, and video synthesis platforms.

What is a deepfake and why is it a governance concern?

A deepfake is AI-generated or AI-manipulated media that depicts a real person saying or doing something they did not. Governance concerns include their use for fraud, non-consensual sexual content, and political disinformation. EU AI Act Article 50 requires deepfake content to be labelled as AI-generated.

Are there laws against AI-generated non-consensual intimate imagery?

Multiple jurisdictions — including the EU, UK, and US states — have passed or are advancing legislation specifically targeting AI-generated NCII. Several platforms have faced regulatory action. The creation of AI-generated CSAM is prohibited under EU law and criminal law in most jurisdictions.