
Taking Accountability for Social Media Misinformation


When half of American adults get their news from social media, determining who is responsible for ensuring that factual information appears on these platforms is critical. With 2024 reportedly the biggest election year in history, as more than half of the world heads to the polls, the source and integrity of news information is of utmost importance. The prevalence of altered and synthetic material on social media makes it more difficult than ever for users to discern fact from fiction, especially as AI-generated video and audio content becomes more advanced and commonplace.

At the center of these complex challenges is the propagation and exploitation of mis-, dis-, and malinformation (MDM) online. Recent findings show that 72 percent of people in the U.S. shared concerns about what is real versus fake when it comes to online news. Recognizing this fragility of trust, threat actors are taking advantage of widely available generative AI tools to launch large-scale influence operations that target social media users, broadening the reach of false information. In this way, we’re quickly approaching a post-truth world in which social media users can find “evidence” to support whatever they already believe. Individuals must therefore embrace continued media literacy education and healthy skepticism to ward off the impact of targeted MDM material.

Accountability Versus Censorship: Supreme Court Ruling in Murthy v. Missouri

Amid the recent Supreme Court ruling in Murthy v. Missouri, it is clear that the line between censorship and moderating MDM is getting blurred, when in reality the distinction is more straightforward than most think. Social media companies have a responsibility to remove MDM from their platforms before it has a chance to influence users. In a world where voices and likenesses are easily and often manipulated, placing the sole burden on users to discern fact from fiction is unrealistic and can cause substantial harm.

Social media companies are private organizations that have the right to establish conditions for the content allowed on their platforms. This means that no person or account is entitled to post content that violates a given platform’s terms of service, many of which include MDM stipulations. Users agree to abide by these guidelines when they create accounts, and noncompliance has consequences. However, when it comes to MDM, many platforms also require flagged content to be verified as false or malicious before it’s taken down, adding an extra layer of protection for well-intentioned users. In short, stepping in to correct MDM is not the frightening or malicious act it is often made out to be when it is conflated with censorship.

When it comes to defending certified sources of truth, other bodies like the government should also empower social media companies to correct false information when it is being disseminated, especially at scale. Governments have a responsibility to protect their citizens, which includes spreading pertinent and accurate information on important topics like public health and safety. Pointing out that MDM content violates a platform’s existing terms of service is not censorship, either; rather, it is an aspect of the longstanding relationship between government and private social media companies. The more forces at work monitoring MDM activity across social media platforms, the more difficult it is for malicious actors and nation-states to wreak havoc through MDM campaigns.

The 2024 Election Year Is a Global Turning Point

Safeguarding all sources of information is especially vital during a pivotal election year. We have seen in previous elections that social media has been an arena in which nation-state adversaries plant MDM with the intent to influence electoral outcomes, and 2024 will likely be no different, if not more threatening than previous years.

We’re in a time in which tools that generate synthetic material like video or audio deepfakes are more accessible and realistic than ever. While pervasive on social media, false content is also leaping off timelines and making its way into more traditional means of communication, like phone calls. This makes it difficult for even vigilant civilians to catch these campaigns before falling victim, as they now receive MDM via every communication channel they have. New and sophisticated attack methods, combined with a growing threat landscape of heightened MDM campaigns in a geopolitically charged time, mean that mitigating the presence of false and synthetic information should be top of mind for social media companies and governing bodies alike in 2024 and beyond.

The Bottom Line on Disinformation

Disinformation is dangerous when left unchecked; it is designed to be believed and propagated. Therefore, we must uphold a clear distinction between censorship and accountability when it comes to moderating content on social platforms. Legal precedents like Murthy v. Missouri create a common understanding of how to respond to coordinated MDM attacks.

The bottom line is that MDM content must be monitored and mitigated before it exacerbates existing rifts in our society. This should ideally be done by social media companies themselves, which best understand their environments and the potential repercussions, with support from government bodies. Part of being prepared for online disinformation attacks is determining who is ultimately responsible for intervening once they have been launched. Such a game plan allows us to act swiftly and appropriately in an emergency, like a period of increased MDM attacks.

Tristen Yancey

Vice President, Public Sector Sales

Tristen Yancey is the Vice President of Public Sector. She is responsible for developing and executing ZeroFox’s go-to-market strategy, revenue attainment, customer success, and channel growth. Tristen has started new and grown existing public sector teams for several security software companies over her 25+ year career. Her expertise in government organization, contracting, and sales methodology has led to her success as well as the success of her government customers.

Tags: Digital Risk Protection, Disinformation, Misinformation, Threat Intelligence