Hi, it's Kartikay on Bloomberg's cybersecurity team. It's been a big summer for misinformation online. Reports circulated that Antifa members were leaving bricks on every city corner to abet looters (they weren't), and that authorities had cut cellphone communication to thwart the protests (they hadn't).

And then there was this alarming statistic out of Carnegie Mellon University: More than half of all coronavirus content on Twitter comes from bots. About a dozen news outlets published the finding, based on a press release last month. According to Trendsmap.com, which analyzes Twitter data, those articles were tweeted and retweeted hundreds of thousands of times, including by Hillary Clinton, whose tweet drew more than 50,000 interactions.

The only problem is that it's not quite true. The figure was attributed to Carnegie Mellon computer science professor Kathleen Carley, who found that 62% of the top 1,000 retweeters of coronavirus information were bots. But Carley recently clarified to my colleague Alyza Sebenius that this finding was limited to the universe of "bots retweeting tweets posted by state-sponsored media accounts that are talking about the pandemic." The actual finding, then, reflected a much narrower subset of Twitter users: those spreading government-issued virus information.

Since the initial report, Carnegie Mellon has revised the press release to offer some nuance, but it still makes no mention of state-sponsored media. The summary of Carley's findings is a preview of a yet-to-be-published study, and editing and peer review could yield more clarity. "There are lots of findings we have that are not in the press release," Carley said.

In the meantime, the grim reality is that on social media, particularly when it comes to misinformation, real, live humans are often just as bad as, or worse than, robots.
Actual people on social media in 2020 are playing a greater role in curating and disseminating misinformation than they were in 2015 and 2016, when bots influenced our feeds leading up to and through the presidential election, according to interviews with researchers at Clemson and Stanford Universities and a report published by Indiana University. Bots are playing a role in Covid-related information, but "the majority of volume is generated by likely humans," wrote the authors of the Indiana University paper. This year in particular, supporters of Donald Trump have mobilized around the president's online political rhetoric. The president has gained more than 2 million followers in the last 30 days, despite sparring with Twitter Inc. about misinformation, according to the analytics company Social Blade. The day Trump tweeted an unfounded conspiracy theory about a 75-year-old man injured by police in Buffalo, New York, it was Twitter's second-most retweeted English-language post as measured by Trendsmap.
"There's a ton of energy in online groups, particularly those who want to drive a populist revolution," said Renee DiResta, research manager at the Stanford Internet Observatory. "There are people who sincerely believe they are participating in a great war for control of the narrative in our society."

Of course, this doesn't mean bots are gone. While it's hard to nail down the ratio of bots to humans on social media, their combined progeny, "cyborgs," may truly be the most effective proliferators of inflammatory content. Still, bots and their nation-state masters can't take all the blame for disinformation around the coronavirus or U.S. elections in 2020. Misinformation has devolved into a political tool used to dispel facts, and it's increasingly coming from within America's own borders and from its own voters, with a little help from the algorithms.—Kartikay Mehrotra