Redbubble: Redbubble deployed a machine learning-based content moderation system that automatically scans artist-uploaded images and text for unauthorized use of copyrighted logos, offensive material, and potential trademark violations. The system flags likely violations for human review or removes content automatically, operating across Redbubble's global marketplace.
Details
Working with technology consulting firm DiUS and image recognition vendor VISUA, Redbubble built an automated computer vision pipeline to detect infringing and offensive content at scale. Previously, a team of approximately 60 human reviewers manually vetted uploads, but a surge in content volume during COVID-19 made the manual process unsustainable. The ML-based solution achieved approximately 80% accuracy in detecting infringing content; borderline cases (such as artworks that reference but transform a copyrighted logo) are still escalated to human reviewers to make the final call. Following its success with copyrighted imagery, Redbubble extended a similar approach to detecting trademarked text.
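The routing logic described above (automatic removal for clear violations, human escalation for borderline cases) can be sketched as a simple confidence-threshold policy. This is a hypothetical illustration, not Redbubble's actual implementation: the function name `moderate`, the threshold values, and the `Action` enum are all assumptions made for the example.

```python
from enum import Enum

class Action(Enum):
    APPROVE = "approve"      # no likely violation detected
    REMOVE = "remove"        # high-confidence violation, removed automatically
    ESCALATE = "escalate"    # borderline case, sent to a human reviewer

# Hypothetical thresholds; a real system would tune these against
# review outcomes and the ~80% model accuracy cited above.
REMOVE_THRESHOLD = 0.95
ESCALATE_THRESHOLD = 0.60

def moderate(infringement_score: float) -> Action:
    """Route an upload based on the model's infringement confidence score."""
    if infringement_score >= REMOVE_THRESHOLD:
        return Action.REMOVE
    if infringement_score >= ESCALATE_THRESHOLD:
        return Action.ESCALATE
    return Action.APPROVE
```

A transformed logo that the model scores in the middle band would land in `ESCALATE`, matching the human-in-the-loop handling of borderline artworks described above.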