Leading up to the November 2020 US election, tech companies are under the spotlight to prevent the spread of misinformation about the voting process and other bogus claims.
In 2016, Russian operatives used Facebook to target Americans on the platform, and the manipulated content spreading fake news reached as many as 126 million Americans, according to The New York Times.
Since then, Facebook and other platforms have been under scrutiny over how they police misinformation and the spread of fake news ahead of the election. For this reason, Facebook, Google, Microsoft, and Twitter have teamed up to fight election interference. As a coalition, they hold regular meetings with government agencies such as the FBI and the Department of Homeland Security to discuss trends in their user data and to coordinate efforts across platforms.
Each company has also implemented its own policies and procedures ahead of the election; the most common are banning deepfakes and requiring transparency about accounts and their political, business, and media affiliations. Here is what the biggest tech companies are doing to ensure the integrity of the 2020 US election.
Facebook’s most comprehensive plan leading up to the election is its Voting Information Center, which serves as the platform’s first line of defense in preparing for Election Day chaos. The Voting Information Center has resources on how to register to vote, how to vote by mail, how the coronavirus is impacting the election, how to find your polling place, and alerts and updates on election news.
Once Election Day hits, Facebook will shift the Voting Information Center’s focus to providing accurate updates on ballot counting, including surfacing clear and accurate information at the top of users’ news feeds. Facebook said its teams would work around the clock during Election Day and the days following to find and stop anyone spreading misinformation about the election results.
Nathaniel Gleicher, Facebook’s head of security policy, stated that the social network is actively tracking three types of threats leading up to Election Day: attempts to suppress voter turnout by spreading false information about how voting works, hack-and-leak scenarios, and attempts to corrupt or manipulate public debate during ballot counting.
Facebook is also restricting news outlets with political ties. The new policy applies to publishers directly affiliated with a political entity or person and limits the features they can access: they cannot claim a news exemption within Facebook’s ad authorization process, and they are barred from being featured in Facebook News.
The social network also banned deepfakes in January, since these highly realistic fake videos are becoming more difficult to detect.
Even though it has been criticized for it, Facebook has decided not to back down from its policy of allowing political ads containing false or misleading information ahead of the 2020 US election, instead letting users choose to see fewer political ads in their feeds. CEO Mark Zuckerberg said he wants to keep the platform as open as possible so voters can make judgments for themselves.
Last year Facebook introduced stricter rules for political ads, requiring advertisers to verify their legitimacy with government credentials and adding disclaimers to political ads. At the start of this year, Facebook also introduced an option for users to turn off political ads.
Twitter, unlike Facebook, banned political ads altogether last year. CEO Jack Dorsey argued that allowing targeted paid political ads pushed unwanted messages on users, especially by ad buyers who game the system.
Along with political ads, Twitter banned deepfakes and manipulated media in February. The platform now applies a label to tweets containing manipulated media, and it hides or removes tweets depending on whether the media is deemed “harmful.”
Twitter put this policy into action against President Donald Trump in May when it hid Trump’s tweet about the Black Lives Matter protests in Minnesota, saying the tweet violated its policies about the “glorification of violence.” The tweet in question read, “When the looting starts, the shooting starts.”
Another of Twitter’s initiatives is labeling politically tied accounts, including candidates on the ballot in upcoming elections and state-affiliated media accounts. Twitter’s newest election-protection measure is voting-misinformation reporting, a tool that helps “identify and remove misinformation that could suppress voter turnout.” Users can open the Report an issue tool on a tweet and choose It’s misleading about a political election to flag false content.
Google’s approach to election preparation has mostly taken the form of cracking down on political ads. The tech giant implemented a policy last year covering political campaigns that buy ad space on Google Search, YouTube, and Google-powered display ads. The policy bars these campaigns from targeting ads based on a person’s political leanings, whether inferred from online activity or drawn from public voting records.
The search giant is anticipating the questions people are likely to search for as the election draws closer, like “how to vote” and “how to register to vote.” Google will surface clear-cut information at the top of these search results in partnership with non-partisan, third-party data partners such as Democracy Works.
Google is also moderating security threats after hackers from Iran and China targeted the presidential campaigns of both Trump and former Vice President Joe Biden in June. Google’s Threat Analysis Group works to identify and prevent these kinds of government-backed attacks against Google and its users. The company has also launched enhanced security for Gmail and G Suite users.
YouTube, the Google-owned video platform, follows the same policies as its parent company, but it also introduced fact-checking notices in April. For example, if a user searches for a specific term and a third-party publisher has a relevant fact-check article, the user will see a fact-check message at the top of the search results.
YouTube is also continually pulling down videos linked to conspiracy theories and conspiracy groups such as QAnon.
TikTok, the China-based app and the newest major social media platform, has drawn plenty of criticism over its security, but it is trying to inspire its young users to vote in the election. TikTok creators have used the platform to spread social activism and election education within its 15-second videos, talking about voting by mail and voter registration.
TikTok also introduced policies to stop the spread of misinformation and fight foreign interference within the app. TikTok announced earlier this month that it’s working with experts from the U.S. Department of Homeland Security “to protect against foreign influence on our platform.” The app has also partnered with organizations like PolitiFact and Lead Stories to fact check potential misinformation about the 2020 election.
Reddit, like all the platforms mentioned, has banned deepfakes and “impersonation content” ahead of the election.
The “front page of the internet” is ramping up its voting resources through a campaign called Up the Vote, which is meant to educate Redditors on their right to vote. The campaign includes an upcoming Ask Me Anything series on voting laws and processes, resources on how to vote early and update your registration status, and reminders to get out and vote on Election Day itself.
Snapchat is also hoping to help young people vote in the upcoming election with voter registration tools that live within the app. The new features include a voter checklist, a voter guide with more information on topics like voting by mail and ballot education, and even the ability to register to vote directly in Snapchat. The voting tools will reside in Snapchat’s “Discover” section.