Details
Canva's AI safety framework, called Canva Shield, includes multiple layers of automated moderation. On the input side, machine learning tools automatically review text prompts and block those containing terms likely to produce inappropriate content (e.g., sexual content, self-harm topics, political content). On the output side, a separate automated system scans generated images, text, and video before they are shown to the user. Canva also provides user-facing reporting tools. These safeguards apply across Magic Write, Magic Media (Text to Image), Magic Edit, Translate, and Canva AI.
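The general shape of such a pipeline can be sketched as a generation call wrapped by an input-side check and an output-side check. The sketch below is purely illustrative: the function names, the blocklist, and the flagging mechanism are hypothetical stand-ins and do not reflect Canva's actual implementation or API.

```python
# Illustrative sketch of a two-stage moderation pipeline of the kind described
# above. All names, categories, and checks here are hypothetical assumptions,
# NOT Canva Shield's actual implementation.

from dataclasses import dataclass

# Hypothetical blocklist standing in for an ML prompt classifier (input side).
BLOCKED_TERMS = {"explicit_term", "self_harm_term", "political_term"}


@dataclass
class GenerationResult:
    content: bytes   # generated image/text/video payload
    flagged: bool    # set by the output-side scanner


def moderate_prompt(prompt: str) -> bool:
    """Input-side check: return True if the prompt may proceed to generation."""
    words = prompt.lower().split()
    return not any(term in words for term in BLOCKED_TERMS)


def moderate_output(result: GenerationResult) -> bool:
    """Output-side check: return True if the generated content may be shown."""
    return not result.flagged


def generate_with_moderation(prompt: str, generate) -> GenerationResult | None:
    """Run both moderation layers around a generation call.

    `generate` is any callable mapping a prompt to a GenerationResult.
    Returns None if either layer blocks the request.
    """
    if not moderate_prompt(prompt):
        return None  # blocked at the input layer
    result = generate(prompt)
    if not moderate_output(result):
        return None  # blocked at the output layer
    return result
```

In this sketch the two layers are independent, so content can be blocked either before any compute is spent on generation or after generation but before display, which mirrors the input/output split described above.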