Details
According to Getty's published model card and FAQ, the filtering system has two stages: a language model analyzes and filters text prompts before image generation, and a separate image filter screens generated outputs for unsafe content. The system blocks prompts related to recognizable public figures, living artists' styles, trademarked logos, explicit content, and content that promotes violence or hatred. Users who believe an output was handled incorrectly can flag it for human review using a built-in flag icon. The model card notes that both filters can produce false positives (blocking safe content) or false negatives (missing unsafe content) due to adversarial prompts or model limitations.
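The two-stage design described above can be sketched as a simple pipeline: a pre-generation check on the prompt, followed by a post-generation check on the output, with blocked results carrying the stage that tripped. This is a minimal illustration only; the function names, keyword list, and score threshold below are assumptions for the sketch, not Getty's actual implementation (which the model card describes as model-based, not keyword-based).

```python
# Hypothetical sketch of a two-stage moderation pipeline of the kind the
# model card describes. Term list and threshold are illustrative only.
BLOCKED_PROMPT_TERMS = {"public figure", "living artist", "trademarked logo"}

def filter_prompt(prompt: str) -> bool:
    """Pre-generation text filter: True means the prompt passes."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_PROMPT_TERMS)

def filter_image(unsafe_score: float, threshold: float = 0.5) -> bool:
    """Post-generation image filter: True means the output passes."""
    return unsafe_score < threshold

def moderate(prompt: str, generate, score_image) -> dict:
    """Run both stages; blocked results record which stage tripped,
    so a user-facing flag for human review could reference it."""
    if not filter_prompt(prompt):
        return {"status": "blocked", "stage": "prompt"}
    image = generate(prompt)
    if not filter_image(score_image(image)):
        return {"status": "blocked", "stage": "image"}
    return {"status": "ok", "image": image}
```

Note that either stage can misfire in the ways the model card acknowledges: an over-broad prompt check blocks safe content (false positive), while an image score under the threshold lets unsafe content through (false negative).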