Details
The moderation system operates at two levels. At the prompt level, the system scans input text for keywords and contextual signals associated with prohibited content, returning a blocking error (e.g., 'content moderation filter: nude') if triggered. At the output level, the generated image is scored for potential NSFW content; images above a threshold are automatically flagged and hidden from public feeds unless a user with age verification chooses to reveal them. Free users are subject to stricter filtering than paid users.

The API also returns an 'nsfw' boolean attribute with each generated image, allowing developers to filter outputs programmatically. The platform states it continuously updates these filters.
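As a minimal sketch of that programmatic filtering, assuming a response payload in which each generated image carries an `nsfw` boolean (the field names and payload shape below are illustrative, not Leonardo AI's documented schema):

```python
# Hypothetical sketch: filtering generated images by an `nsfw` boolean.
# The payload shape is invented for illustration; it is not Leonardo AI's
# actual API schema.

def filter_safe_images(generations):
    """Return only images whose `nsfw` attribute is False.

    A missing `nsfw` field defaults to True (fail-closed), so images
    without moderation metadata are excluded rather than shown.
    """
    return [img for img in generations if not img.get("nsfw", True)]

# Example payload (invented):
response = [
    {"id": "img-001", "url": "https://example.com/a.png", "nsfw": False},
    {"id": "img-002", "url": "https://example.com/b.png", "nsfw": True},
    {"id": "img-003", "url": "https://example.com/c.png"},  # no flag
]

safe = filter_safe_images(response)
print([img["id"] for img in safe])  # → ['img-001']
```

The fail-closed default is a deliberate choice here: if the moderation attribute is absent, treating the image as unsafe avoids surfacing unreviewed content.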