Details
Launched in April 2025, this system automatically evaluates list titles and descriptions for toxic content when a user report is received, applying a '!hide' label that makes the list invisible to everyone except its creator. According to Bluesky's 2025 Transparency Report, human moderators previously assessed and took down toxic lists entirely, which frustrated users who lost their curatorial work. The automated approach lets creators revise their content and appeal for label removal, preserving the list while reducing harm. Bluesky credits this change with significantly reducing abusive use of the lists feature.
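The workflow described above can be sketched as a minimal model. This is an illustrative approximation, not Bluesky's actual implementation: the `looks_toxic` check stands in for whatever internal classifier Bluesky uses, and the data structures are hypothetical; only the `!hide` label value and the report → label → revise → appeal flow come from the description.

```python
from dataclasses import dataclass, field

@dataclass
class ModerationList:
    creator: str
    title: str
    description: str
    labels: set = field(default_factory=set)

def looks_toxic(text: str) -> bool:
    """Placeholder toxicity check; Bluesky's real classifier is not public."""
    blocked_terms = {"badterm"}  # hypothetical term list
    return any(term in text.lower() for term in blocked_terms)

def handle_report(lst: ModerationList) -> None:
    """On a user report, auto-apply '!hide' instead of deleting the list."""
    if looks_toxic(lst.title) or looks_toxic(lst.description):
        lst.labels.add("!hide")  # hides the list from everyone but its creator

def visible_to(lst: ModerationList, viewer: str) -> bool:
    """A '!hide'-labeled list stays visible only to its creator."""
    return "!hide" not in lst.labels or viewer == lst.creator

def appeal(lst: ModerationList) -> None:
    """After the creator revises the content, re-check and clear the label."""
    if not (looks_toxic(lst.title) or looks_toxic(lst.description)):
        lst.labels.discard("!hide")
```

The key design point the report highlights is that the label is reversible: the creator's curation survives moderation, and a successful appeal restores visibility without moderator intervention.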