Details
The project involves collecting and anonymizing in-game chat logs from Riot and Ubisoft titles, labeling them by type of behavior (e.g., neutral, racist, sexist), and using that labeled data to train and improve AI moderation systems. Both companies described this as the first time two major independent game studios had openly shared internal data on toxic interactions to jointly improve AI moderation. When announced, the initiative was described as an early-stage research project, with plans to share findings with the broader industry in 2023. The project targets both companies' AI systems rather than a single shared product.