More than half of women in 22 countries reported being harassed or abused online, and one in five of them cut back on or stopped using the internet as a result, according to a 2020 Plan International study. That's a lot of voices missing from online conversations. Platforms often turn to artificial intelligence to detect misogynistic or otherwise offensive posts, with mixed results.

As Khari Johnson reports this week, researchers in Denmark have developed a novel way to train algorithms to identify sexist content. Seven people were hired full-time to review and label posts on Facebook, Twitter, and Reddit. The group included people of different ages and nationalities with varied political views. They didn't always agree, especially when posts included jokes, irony, or sarcasm. That was the point: by working through the trickier calls, the group helped create an algorithm that was better at detecting offensive posts.

Automated content moderation has historically been a challenge for platforms. Misogynistic language varies across languages and cultures, the people labeling posts to train algorithms are often part-time contract workers without the time or training to make tough calls, and most of the research so far has been done in English. But the work is critical to a vibrant and diverse online environment.

"If you're going to turn a blind eye to threats and aggression against half the population, then you won't have as good democratic online spaces as you could have," said Leon Derczynski, a coauthor of the study and an associate professor at the IT University of Copenhagen.

Read more about the study and potential applications beyond content moderation.
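The article doesn't include the researchers' actual pipeline, but the annotate-then-adjudicate loop it describes can be sketched in a few lines. This is a minimal illustration, assuming hypothetical post IDs, three annotators, and binary labels (none of which come from the study itself): unanimous calls are accepted automatically, and anything contested is routed back to the group for discussion before it becomes training data.

```python
# Minimal sketch (not the study's actual code) of combining labels from
# several annotators: unanimous calls are accepted, while contested posts,
# the "trickier calls" like jokes, irony, and sarcasm, are flagged for
# group adjudication before being used to train a classifier.
from collections import Counter

# Hypothetical labels from three annotators for four posts
# (1 = misogynistic/offensive, 0 = not offensive).
annotations = {
    "post_1": [1, 1, 1],   # unanimous: clearly offensive
    "post_2": [0, 0, 0],   # unanimous: clearly fine
    "post_3": [1, 0, 1],   # split, e.g. sarcasm: needs discussion
    "post_4": [0, 1, 0],   # split, e.g. irony: needs discussion
}

def resolve(labels, agreement_threshold=1.0):
    """Return (majority_label, needs_adjudication) for one post."""
    label, count = Counter(labels).most_common(1)[0]
    agreement = count / len(labels)
    # Anything short of the threshold goes back to the group.
    return label, agreement < agreement_threshold

for post, labels in annotations.items():
    label, contested = resolve(labels)
    status = "send to group adjudication" if contested else f"accept label {label}"
    print(f"{post}: {status}")
```

The design point the article highlights is that disagreements are kept in the loop rather than averaged away, so the hard cases sharpen the training labels instead of adding noise.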