Social-media Companies Must Flatten the Curve of Misinformation

The pandemic lays bare the failure to quarantine online scams, hoaxes and lies amid political battles.

At the Harvard Kennedy School’s Shorenstein Center in Cambridge, Massachusetts, we track misinformation online. Social media is tooled to distribute financial scams, miracle-promising products and fear-mongering conspiracies alongside medical advice, school-closure announcements, news and family updates. It’s no surprise that damaging guff overwhelms valid recommendations and crucial health information. Correcting the record is impossible.

The pandemic lays bare how tech companies’ reluctance to act worsens our world, and how that harm feeds on itself. In times of uncertainty, this vicious cycle is more potent than ever. Scientific debates that are typically confined to a small community of experts become fodder for mountebanks of all kinds.

Because ‘COVID-19’ and ‘coronavirus’ are unique keywords, they are easily exploited by scammers hiding among the 120,000 domains related to the outbreak. To dupe unwitting information seekers, scammers pair these terms with words such as ‘masks’, ‘loan’, ‘unemployment’, ‘trial’, ‘vaccine’ and ‘cure’. Some domain registrars are restricting the use of these keywords to prevent fraud, and tech companies are coordinating takedowns. But that’s not nearly enough.

After years of pressure, and even congressional hearings, tech companies are taking action against misinformation because the consequences of doing nothing have become more obvious. In mid-March, Facebook, Google, LinkedIn, Microsoft, Reddit, Twitter and YouTube put out a vague joint statement announcing new, combined efforts to fight COVID-19 misinformation and to elevate ‘authoritative content’.

Researchers and policymakers can — and must — make sure social-media companies identify, implement and evaluate ways to curtail the spread of dangerous misinformation, and that independent researchers can assess this work.

A month ago, my team began monitoring rumours about a potential ‘coronavirus cure’ circulating among tech investors on Twitter. The rumour quickly gained traction after technology entrepreneur Elon Musk shared a Google doc purporting to be a scientific paper written by an adviser to Stanford University’s School of Medicine in California. The next night, on a popular right-wing broadcast on Fox News, host Tucker Carlson featured the document’s author, who claimed, on the basis of a small study in France, that hydroxychloroquine has a ‘100% cure rate’ against COVID-19. Moments later, online searches for ‘quinine’, ‘tonic water’, ‘malaria drug’ and the like surged. Had there been a product to buy, it would have sold out. Clearly, the public was listening.

In the wake of the Fox News broadcast, Stanford clarified that the author was not an adviser and that the school was not involved. Efforts by doctors and scientists to counter this dangerous speculation on their personal social-media accounts paled against the tweet storm. Because hype over the drug’s supposed efficacy has caused shortages, poisonings and deaths, Twitter took the exceptional step of removing tweets celebrating hydroxychloroquine from several prominent right-wing figures, including Fox television host Laura Ingraham, US President Donald Trump’s attorney Rudy Giuliani and conservative pundit Charlie Kirk.

The politicization of hydroxychloroquine since then, fuelled by Trump’s enthusiasm, has alarmed pharmacists and researchers, who are overwhelmed by requests from the worried well. Political pressure to do science faster can cause harm, as when doctors prescribe off-label drugs with little oversight. The frenzy is increasing pressure on front-line health-care workers and is not expediting research. Politicians, though, have little to lose, because hype provides a veneer of control.

After blanket coverage of the distortion of the 2016 US election, of the role of algorithms in fanning the rise of the far right in the United States and the United Kingdom, and of the anti-vaccination movement, tech companies have announced policies against misinformation. But they have slacked off on building the infrastructure needed for commercial content moderation and, despite the hype, artificial intelligence is not sophisticated enough to moderate social-media posts without human supervision. Tech companies acknowledge that groups such as the Internet Research Agency and Cambridge Analytica used their platforms to run large-scale operations to influence elections within and across borders. At the same time, these companies have balked at removing misinformation, which they say is too difficult to identify reliably.

Moderating content after something goes wrong is too late. Preventing misinformation requires curating knowledge and prioritizing science, especially during a public crisis. In my experience, tech companies prefer to downplay the influence of their platforms rather than make sure that influence is understood. Proper curation requires these corporations to engage independent researchers, both to identify potential manipulation and to provide context for ‘authoritative content’.

Early this April, I attended a virtual meeting hosted by the World Health Organization, which had convened journalists, medical researchers, social scientists, tech companies and government representatives to discuss health misinformation. This cross-sector collaboration is a promising and necessary start. As I listened, though, I could not help but feel transported back to 2017, when independent researchers first began uncovering the data trails of Russian influence operations. Back then, tech companies were dismissive. If we can take on health misinformation collaboratively now, we will have a model for future efforts.

As researchers, we must demand that tech companies become more transparent, accountable and socially beneficial, and we must hold them to this commitment long after the pandemic. Only after several scandals did companies such as Twitter, Facebook and Google News introduce more rigorous standards to flag and disrupt bots and other tools for disinformation.

Curating knowledge is just as important as moderating content. Social-media companies must flatten the curve of misinformation.

Source: https://www.nature.com/articles/d41586-020-01107-z
