The 2020 presidential election is right around the corner, and with it come questions about social media's potential impact on the outcome. Social media's ability to spread misinformation has been in the news a lot over the past few years, and it has evolved from something most people saw as a place to connect with friends to a machine that disrupts key parts of our democracy. People can curate their timelines and newsfeeds, but algorithms hold the real power, working behind the scenes to regulate the content you see, from targeted ads to the publications presented to you.
With social media essentially shaping the information you get, many people are concerned about the spread of misinformation. After all, not everything that you read on the internet is true, and social media platforms themselves have been guilty of spreading misinformation or otherwise allowing election interference to occur.
Perhaps the most infamous case of election interference spurred by social media was the Cambridge Analytica scandal. Between 2013 and 2015, the data analytics firm, which worked with Donald Trump's campaign, harvested data from 50 million Facebook users. Researcher Aleksandr Kogan created a quiz on Facebook, and a loophole let the quiz access not only the information of anyone who took it but also that of all their friends.
“Using a personality profiling methodology, [Cambridge Analytica] — formed by high-powered right-wing investors for just this purpose — began offering its profiling system to dozens of political campaigns,” Vox reported.
Cambridge Analytica is a warning of how user data is treated as something free to be harvested, without any consideration for privacy or for the potential consequences. However, Facebook's role in the 2016 presidential election extended beyond Cambridge Analytica alone.
“Blame Facebook for creating a massive reality-distortion field; for allowing its more than 200 million active North American users to dwell in a fever swamp of misinformation and ridiculous falsehood,” Deadspin editor Alex Pareene wrote immediately following the election.
No social media company claims to want to spread misinformation or allow election interference on its platform. For the 2018 midterms, Facebook even established a “war room” to help combat election interference. In April of this year, Twitter also introduced a new “misleading about voting” report option to crack down on election interference on its platform. However, neither of these approaches addresses one key issue with social media platforms: a focus on engagement above all else.
It’s no secret that social media companies need engagement to survive. After all, if people aren’t interacting with a platform, nobody is going to pay to advertise on it. While the drawbacks of this focus can be seen across platforms, they are glaringly clear on YouTube, which has become a cesspool of misinformation amplified by the platform’s own algorithms.
YouTube had known about the problems with its recommendation system for a while, but the company refused to do anything about them, as a report from Bloomberg revealed. According to Bloomberg, conversations with people who worked at or had recently left YouTube revealed “a corporate leadership unable or unwilling to act on these internal alarms for fear of throttling engagement.”
As a result, YouTube continues to routinely recommend conspiracy videos, including anti-vax content. Its “fact-check” tool even began inserting information about 9/11 into live streams of the Notre Dame cathedral fire.
“Why in the world is YouTube putting information about 9/11 underneath the Notre Dame livestream from France 24? (Especially since it seems like, at least for right now, ongoing renovations are the most likely cause, no indication of terror),” Joshua Benton, director of Harvard’s Nieman Journalism Lab, tweeted.
In addition to recommending conspiracy videos, YouTube’s misinformation problem extends to its role as a “radicalization engine.” Notorious right-wing and neo-Nazi figures, like Infowars host Alex Jones, have been allowed to thrive on the platform. This has real-life consequences. During his live stream, the Christchurch shooter, who massacred 51 Muslims during Friday prayers in New Zealand, said, “subscribe to PewDiePie.” Felix Kjellberg, better known as PewDiePie, has been criticized before for amplifying antisemitism and other alt-right conspiracies. To be clear, these types of conspiracies rely on their own form of misinformation.
Tackling misinformation and election interference on social media will not be easy because the problem stems from many sources. However, to begin to make a dent, each company needs to consider how and why misinformation was allowed to spread on its platform in the first place.
That means social media companies will have to seriously reconsider their algorithms and their prioritization of engagement. There is no hope of addressing an issue through new features if companies cannot be honest about why the problem existed in the first place.