Hey all, it's Natalia. As racial tensions have flared in the U.S. following the death of George Floyd in police custody, social media platforms around the world have faced fresh questions about how neutrally they should treat content posted by users, and not just President Trump.

The debate is drawing a distinct line between the U.S. and the European Union, both of which are looking to overhaul long-standing legal provisions that shield platforms from liability over user activity on their sites. The EU is planning changes to its framework that would require companies like Twitter and Facebook to shoulder more responsibility if users spread hate speech and disinformation on their platforms. The changes, set to be announced by year-end, could oblige companies to scour their sites for content that violates the rules.

The regulatory revamp has been a long time in the making. Various EU initiatives in recent years, including voluntary codes of conduct, have pressured platforms to remove hate speech or demote misleading content. In France and Germany, social media companies can face fines if they don't remove non-compliant posts quickly enough.

Twitter's decision to add a fact-check label to Trump's unsubstantiated claims about mail-in voting, and a warning to a post about the protests in Minneapolis that it said glorified violence, could be a preview of the kind of intervention the EU is asking for, even in the case of posts from powerful people. The move earned the company ire from Trump but cheers from EU officials: Twitter "did on American territory and vis-a-vis the American president what we are thinking about in Europe," European Commission Vice President Vera Jourova told Bloomberg TV last week.
In the U.S., Trump unveiled an executive order in response to the fact-check, aimed at rolling back parts of Section 230 of the Communications Decency Act, which shields social media platforms from legal liability for what their users post. Trump's tactic was strikingly similar to the approach pursued by the EU, even though the goals were different: Trump is hoping the platforms will be more hands-off, whereas the EU is trying to force them to step up policing. It's even possible that Trump's push could result in more of the type of fact-checking the president objected to in the first place.

But there's a logic to the president's assault on Section 230: The protections are vital to technology companies' current business models. For the past two decades, those rules, and their European equivalents, have underpinned how the internet functions and how platforms have grown thanks to their users' content.

The debate is still raging. Tech representatives warn that overhauling the current rules risks undermining free speech by forcing private companies to make editorial decisions. But their case is weakened by years of not doing enough to police user activity, including letting Russian actors spread disinformation across their sites to influence the 2016 U.S. presidential election and the U.K.'s Brexit vote.

For now, the tide seems to be turning in favor of more oversight. As the White House scrutinizes Section 230, Facebook employees blasted their leader for his decision to leave Trump's posts untouched, prompting Chief Executive Officer Mark Zuckerberg to eventually say the company would review some of its content policies. The question may no longer be whether platforms should be made more responsible for their users' content, but when. Europe, at least, appears to have an answer. —Natalia Drozdiak