AI risk assessment template & compliance tools — free downloads

Practical tools for AI risk assessment, vendor evaluation, and incident documentation — built for risk and compliance practitioners.

These templates are built for practitioners: risk, compliance, procurement, and governance professionals who need structured tools for working with AI systems rather than academic frameworks. The AI risk assessment template is the most-used download: it covers use-case classification, harm identification, and control evaluation in a format any compliance team can apply without specialist training. Every template is grounded in the same analytical approach used across BrokenCtrl's case studies and reviews.

All five templates are free. Enter your email below and they go straight to your inbox.


Template 01

AI Incident Report Template

Structured documentation framework for recording AI-related failures, near-misses, and harm events. Covers timeline, affected systems, confidence labelling, and escalation chain.

Free · Risk · Compliance

Template 02

Evidence Grading Rubric

Applies the BrokenCtrl three-tier confidence system — Verified, Probable, Unverified — to any AI-related claim or case. Designed for journalism, policy, and compliance documentation.

Free · Research · Governance

Template 03

AI Risk Assessment Template (Mini)

Lightweight AI risk assessment template aligned with EU AI Act risk-tiering logic. Covers use-case classification, harm identification, control evaluation, and residual risk documentation.

Free · EU AI Act · Risk

Template 04

Vendor Due Diligence Checklist

Procurement-ready question set for evaluating AI vendors across transparency, data handling, safety controls, and contractual accountability. Based on the BrokenCtrl Ethics Score dimensions.

Free · Procurement · Governance

Template 05

Post-Incident Mitigation Tracker

Spreadsheet-based tracker for managing remediation actions after an AI incident. Covers owner assignment, timeline, control type (technical vs policy), and verification status.

Free · Incident Response

Free download

Get all five templates

Enter your email and the full template pack goes straight to your inbox. No paid tier, no upsell. You'll also receive occasional BrokenCtrl updates when significant new cases or frameworks are published.

Your email is added to the BrokenCtrl Downloads list in Brevo. Unsubscribe any time. No data is sold or shared with third parties. Independence statement →


QUESTIONS

What are AI compliance templates?

AI compliance templates are structured documents that help organisations apply consistent risk assessment, incident documentation, and governance evaluation processes to AI systems. The templates here are built for practitioners — not consultants — and designed to be usable without specialist knowledge of any single regulatory framework.

Are these templates aligned with the EU AI Act?

The AI Risk Assessment Template (Mini) is designed with EU AI Act risk-tiering logic as its reference framework, covering the Act's prohibited, high-risk, limited-risk, and minimal-risk classifications. The other templates are framework-agnostic and can be used alongside the NIST AI RMF, ISO/IEC 42001, or internal governance standards. They are not certified compliance tools and do not constitute legal advice.

What is a vendor due diligence checklist for AI?

An AI vendor due diligence checklist is a structured set of questions used to evaluate an AI provider before procurement or integration. It covers how the vendor handles data, what safety controls exist, how transparent they are about model limitations, what their incident response process looks like, and whether their stated ethics commitments are backed by enforceable contract terms. Template 04 here is built around the same six dimensions used in BrokenCtrl's Ethical AI Reviews.

Can I use these templates for commercial purposes?

Yes. The templates are free to use, adapt, and apply in any organisational context — internal compliance work, client engagements, or research. Attribution is appreciated but not required. They may not be resold or repackaged as a product.

What is an AI risk assessment?

An AI risk assessment is a structured evaluation of the potential harms, control gaps, and governance failures associated with deploying an AI system in a specific context. It identifies the use case, classifies risk level, documents what safeguards are in place, and records what residual risk remains after controls are applied. Template 03 here provides a lightweight version aligned with EU AI Act logic — suitable for initial screening rather than full conformity assessment.