Details
According to Midjourney's official Community Guidelines and Terms of Service, the platform automatically blocks certain text prompts and generated image outputs before they reach the user. The system combines a keyword/banned-word filter for explicit prompt terms with AI-based contextual analysis, which can flag a prompt even when it contains no banned words if the system detects potential for policy-violating output. Midjourney's training data pipeline also includes safety filtering to remove data with a known risk of containing child sexual abuse material (CSAM), as disclosed in its California AB2013 documentation. Violations can result in warnings, temporary suspensions, or permanent bans. Community members can also report content, and human moderators review reports and enforce the guidelines. Automated blocking applies regardless of whether the user is on a public server, in private/Stealth Mode, or in direct messages with the bot.
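The two-stage filtering described above can be sketched in miniature. This is a hypothetical illustration, not Midjourney's actual implementation: the banned-word list, the contextual heuristic, and all function names are placeholders invented for this example.

```python
import re

# Placeholder blocklist -- illustrative only, not Midjourney's actual list.
BANNED_WORDS = {"bannedword1", "bannedword2"}


def keyword_filter(prompt: str) -> bool:
    """Stage 1: flag the prompt if it contains an exact banned token."""
    tokens = re.findall(r"[a-z0-9']+", prompt.lower())
    return any(token in BANNED_WORDS for token in tokens)


def contextual_filter(prompt: str) -> bool:
    """Stage 2: stand-in for AI-based contextual analysis, which can flag
    combinations of otherwise-allowed words. Real systems would use a
    trained classifier; this toy version checks co-occurring word pairs."""
    risky_pairs = [("risky", "combination")]  # illustrative heuristic only
    text = prompt.lower()
    return any(a in text and b in text for a, b in risky_pairs)


def is_blocked(prompt: str) -> bool:
    """A prompt is blocked if either stage flags it."""
    return keyword_filter(prompt) or contextual_filter(prompt)
```

The key property this sketch demonstrates is the second stage: `is_blocked` can reject a prompt with no banned word in it at all, which matches the described behavior of contextual analysis layered on top of a plain keyword filter.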