It wasn't all that long ago that Discord housed some of the internet's most noxious groups, including white supremacist communities like The Daily Stormer. Since the Charlottesville rally in 2017, however, the chat platform has worked toward slowly but surely cleaning up the mess. But as Discord has grown, so too has the risk of harassment. As part of its continued effort to ensure that no discordant deed goes unpunished, it has acquired anti-harassment AI company Sentropy.
Sentropy is a startup that made waves almost immediately when it came out of stealth last year, thanks to funding from big names like Reddit founder Alexis Ohanian and execs from Riot Games, Nextdoor, OpenAI, Twitch, and Twitter. It offered AI-powered moderation tools to platforms, as well as free consumer-facing tools like the now-defunct Sentropy Protect, which was intended to help users detect offensive content on sites like Twitter and banish it via a dashboard. Now it's part of Discord.
Sentropy CEO John Redgrave made the announcement in a Medium post.
"Discord represents the next generation of social companies: a generation where users are not the product to be sold, but the engine of connectivity, creativity, and growth," Redgrave wrote. "In this model, user privacy and user safety are essential product features, not an afterthought. The success of this model depends upon building next-generation Trust and Safety into every product. We don't take this responsibility lightly and are humbled to work at the scale of Discord and with Discord's resources to increase the depth of our impact."
Redgrave went on to note that one appeal of working with Discord is that he and his team will still be able to share findings with others in the content moderation space, not just fellow Discord employees. "[Trust and safety] tech and processes should not be used as a competitive advantage," Redgrave wrote. "We all deserve digital and physical safety, and moderators deserve better tooling to help them do one of the hardest jobs online more effectively and with fewer harmful impacts."
While platforms like Twitter, Facebook, YouTube, and Twitch struggle on a daily basis with questions of how to best handle the internet's harassing-est hordes, Discord finds itself in especially tricky territory. The platform now touts 150 million monthly active users spread across millions of separate channels, or servers in Discord parlance. Back in 2017, just before it began draining its various cesspools, it had just 12,000 servers. Last year alone, the company removed hundreds of thousands of accounts for exploitative content, harassment, and extremism, many discovered through user reports. According to TechCrunch, Discord's trust and safety team made up 15% of its workforce as of May 2020.
But even with that team doing its best to keep the internet's worst elements at bay and the long-running tradition of individual Discord servers recruiting their own volunteer moderators, the scale of Discord is still staggering. It is not uncommon for harassment groups to use Discord as their main means of coordinating attacks against users on sites like Twitter, YouTube, and Twitch. Last year, the practice of "Zoom bombing," in which people pop into Zoom calls with pornography, Nazi imagery, and things of that nature, took off thanks to Discord servers dedicated to facilitating it. Many servers disguised their true purposes with bogus rules while organizing raids on Zoom meetings held by Narcotics Anonymous and support groups for at-risk individuals like LGBTQ and trans teens. Discord worked to ban these servers, but not before damage had already been done.
It makes sense that Discord would want to step up its efforts in this regard, as reactive approaches only get platforms so far. But AI is still evolving. It's not always going to ace the test when it comes to sussing out context or recognizing when, for example, a normal-looking rule page is actually a red herring. And so, like Discord's other efforts, this represents a larger piece of a still-incomplete puzzle, not a silver bullet.