For their most recent analysis, the researchers studied more than 200 million tweets discussing coronavirus or covid-19 since January. They used machine-learning and network analysis techniques to identify which accounts were spreading disinformation and which were most likely bots or cyborgs (accounts run jointly by bots and humans).
The system looks for 16 different maneuvers that disinformation accounts can perform, including “bridging” between two groups (connecting two online communities), “backing” an individual (following an individual to increase their level of perceived influence), and “nuking” a group (actions that lead to an online community being dismantled).
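The article does not describe how these maneuvers are detected, but the "bridging" maneuver has a natural graph interpretation: an account whose ties span two communities that otherwise share no direct edges. The sketch below illustrates that idea on an invented toy network; the account names, edges, and the `bridges` check are all hypothetical, not the researchers' actual method.

```python
# Toy follower/interaction network. Account names and edges are invented.
edges = {
    ("a1", "a2"), ("a2", "a3"),   # community A
    ("b1", "b2"), ("b2", "b3"),   # community B
    ("x", "a1"), ("x", "b1"),     # account "x" ties the two together
}
community = {"a1": "A", "a2": "A", "a3": "A",
             "b1": "B", "b2": "B", "b3": "B"}

def bridges(node: str) -> bool:
    """True if `node` touches two communities that have no other link."""
    touched = {community[v] for u, v in edges if u == node and v in community}
    touched |= {community[u] for u, v in edges if v == node and u in community}
    if len(touched) < 2:
        return False
    # Make sure no edge besides those through `node` crosses communities.
    for u, v in edges:
        if node in (u, v):
            continue
        if u in community and v in community and community[u] != community[v]:
            return False
    return True

print(bridges("x"))   # True: only "x" connects A and B
print(bridges("a2"))  # False: all of a2's ties stay inside community A
```

A real system would of course work at the scale of millions of accounts and use statistical community detection rather than hand-labeled groups, but the structural signal is the same.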
Through the analysis, they identified more than 100 types of inaccurate covid-19 stories and found that not only were bots gaining traction and accumulating followers, but they also accounted for 82% of the top 50 and 62% of the top 1,000 influential retweeters. The influence of each account was calculated based on the number of followers it reached as well as the number of followers its followers reached.
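That influence measure amounts to counting an account's unique two-hop reach: its direct followers plus the followers of those followers. A minimal sketch, with an invented follower graph (the account names and the exact scoring details are assumptions, not the researchers' published formula):

```python
# Maps each account to the set of accounts that follow it.
# All names are invented for illustration.
followers = {
    "botA": {"u1", "u2", "u3"},
    "u1": {"u4", "u5"},
    "u2": {"u5"},
    "u3": set(),
}

def two_hop_reach(account: str) -> int:
    """Count unique accounts reachable within two hops of `account`."""
    direct = followers.get(account, set())
    reached = set(direct)
    for f in direct:
        reached |= followers.get(f, set())   # followers of followers
    reached.discard(account)                 # don't count the account itself
    return len(reached)

print(two_hop_reach("botA"))  # u1, u2, u3, u4, u5 -> 5
```

Note that `u5` follows both `u1` and `u2` but is counted once; using sets keeps the reach figure from double-counting overlapping audiences.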
The researchers have also begun to analyze Facebook, Reddit, and YouTube to understand how disinformation spreads between platforms. The work is still in the early stages, but it’s already revealed some unexpected patterns. For one, the researchers have found that many disinformation stories come from regular websites or blogs before being picked up on different social platforms and amplified. Different stories also have different provenance patterns. Those related to the virus being a bioweapon, for example, mostly come from so-called “black news” sites, fake news webpages designed to spread disinformation and often run outside the US. In contrast, the reopen America rhetoric mostly comes from blogs and Facebook pages run in the US.
The researchers also found that users of different platforms respond to disinformation in very different ways. On Reddit, for example, moderators are more likely to debunk and ban disinformation. When a coordinated campaign around reopening America popped up on Facebook, Reddit users began discussing the phenomenon and counteracting the messaging. “They were saying, ‘Don’t believe any of that stuff. You can’t trust Facebook,’” says Carley.
Unfortunately, there are no easy solutions to this problem. Banning or removing accounts won’t work, as more can be spun up for every one that is deleted. Banning accounts that spread false information also won’t solve the problem. “A lot of disinformation is done through innuendo or done through illogical statements, and those are hard to discover,” she says.
Carley believes there needs to be greater coordination among researchers, corporations, and the government to come up with effective policies and practices for tamping this down. “I think we need some kind of general oversight group,” she says. “Because no one group can do it alone.”