What responsibilities does Facebook, Google, or Twitter have for policing political advertisement placed on their platform?

Expert Answers

A good resource for this topic is a post by the American Bar Association from the summer of 2020. I'm linking it below for your reference.

In 2008, Barack Obama became one of the first candidates to leverage the power of social media advertising in his bid for the presidency. That year, all candidates combined spent a total of $22.25 million on online political ads; by 2016, that amount had soared to $1.4 billion. The scale and speed of this growth make the question of platform responsibility a timely one.

According to Lata Nott for the American Bar Association, political ads are considered political speech, which is covered by the First Amendment. As such, there is little that can be done about false political ads from a legal standpoint. The rationale is that "voters have a right to uncensored information from the candidates, which they can then evaluate themselves before making their decisions."

However, social media platforms are much like newspapers in that they are not obligated to run every political ad they receive: "Contrary to popular belief, social media platforms do not have to comply with the First Amendment. They are private companies that are free to set their own content policies."

Just before the 2020 election, Facebook outlined plans to flag political content containing "false information." Facebook later wrote a blog post that stated, "We have based [our policies] on the principle that people should be able to hear from those who wish to lead them, warts and all, and that what they say should be scrutinized and debated in public."

Social media does differ from other forms of advertising in that its users are subjected to "microtargeting," a marketing strategy that uses consumer data and demographics to predict users' beliefs and behaviors. Microtargeting allows political campaigns to identify individuals and groups who are particularly susceptible to specific messages and then to sway their beliefs, potentially with false claims and little to no accountability. Because only the targeted audience sees these ads, they often escape broader public scrutiny. Google currently limits political microtargeting to broad categories of people, such as gender or zip code, while Facebook is far more "permissive" and has not put limits on how campaigns target their audiences.

This outlines the legal responsibilities of social media platforms with regard to political advertising. The question that remains, which is a matter of personal opinion, is whether it is ethical for social media platforms to allow campaigns to intentionally target specific groups of people with political advertisements. Should social media platforms be required to comply with the expectations of the First Amendment because of the public role they serve? Should social media companies be more transparent about the political advertisements they censor and about the microtargeting methods used to reach specific populations of potential voters? Or does the ultimate burden fall on voters to discern the truth from a wide base of information, without leaning too heavily on any one medium? Lata Nott concludes that "allowing candidates to microtarget ads while at the same time refraining from factchecking their statements creates an environment where false information can spread unchecked."

I hope this provides ample points of discussion as you construct your response to this topic. I'm also attaching a couple of other articles on the subject for your review. Good luck!

Approved by eNotes Editorial Team