Runway | AI Trace
Content Moderation (Verified)
Runway uses an in-house AI visual moderation system that automatically detects and blocks attempts to generate harmful or inappropriate content, including depictions of public figures without consent, and adds invisible C2PA-standard watermarks to every generated output to label it as AI-made.
Details
Runway's safety page and research publications confirm an in-house visual moderation system trained to classify both AI-generated and real-world images and videos. The system automatically suspends users who repeatedly trigger moderation filters or upload unlawful content. All outputs receive invisible C2PA (Coalition for Content Provenance and Authenticity) watermarks embedded in metadata, enabling downstream platforms to identify the content as AI-generated. Runway also applies bias-mitigation measures to reduce demographic skew in generated content.
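The downstream identification step described above can be sketched as follows. This is a minimal illustration, not Runway's implementation: it assumes a C2PA manifest has already been extracted from a file (e.g. with C2PA SDK tooling) into JSON shaped like c2patool's output, and it checks for the IPTC `trainedAlgorithmicMedia` digital source type that the C2PA specification uses to label fully AI-generated media. The sample manifest values are hypothetical.

```python
import json

# IPTC digital source type that C2PA manifests use to mark
# fully AI-generated media (per the C2PA specification).
TRAINED_ALGORITHMIC_MEDIA = (
    "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
)

def is_ai_generated(manifest: dict) -> bool:
    """Return True if any c2pa.actions assertion in the manifest
    declares the asset as trained-algorithmic (AI-generated) media."""
    for assertion in manifest.get("assertions", []):
        if assertion.get("label") != "c2pa.actions":
            continue
        for action in assertion.get("data", {}).get("actions", []):
            if action.get("digitalSourceType") == TRAINED_ALGORITHMIC_MEDIA:
                return True
    return False

# Hypothetical manifest fragment, shaped like the JSON emitted by
# C2PA tooling; real manifests carry many more fields.
sample = json.loads("""
{
  "claim_generator": "example-generator/1.0",
  "assertions": [
    {
      "label": "c2pa.actions",
      "data": {
        "actions": [
          {
            "action": "c2pa.created",
            "digitalSourceType": "http://cv.iptc.org/newscodes/digitalsourcetype/trainedAlgorithmicMedia"
          }
        ]
      }
    }
  ]
}
""")

print(is_ai_generated(sample))  # True
```

A platform ingesting uploads could run a check like this after manifest extraction and attach an "AI-generated" label when it returns True; assets with no manifest at all would need separate handling, since absence of C2PA data proves nothing either way.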