MIT Sloan research about social media, misinformation, and elections

By Sara Brown

False information has become a feature of social media — especially during election years. Research shows false news peaked on Twitter during the 2012 and 2016 presidential elections, and a bipartisan Senate committee found that before and after the 2016 election, the Russian government used Facebook, Instagram, and Twitter to spread false information and conspiracy theories and stoke divisions.

Over the last several years, MIT Sloan researchers have studied the spread of false information, or so-called fake news, described by researchers as “entirely fabricated and often partisan content presented as factual.” Understanding why people share misinformation, and how it spreads, points the way toward solutions, a goal that becomes more important as people spend more time on social media platforms and as the connections between misinformation and election results become clearer.

Below is an overview of some MIT Sloan research about social media, fake news, and elections.

Problems

False rumors spread faster and wider than true information, according to a 2018 study published in Science by MIT Sloan professor Sinan Aral with Deb Roy and Soroush Vosoughi of the MIT Media Lab. They found that falsehoods are 70% more likely to be retweeted on Twitter than the truth, and that they reach their first 1,500 people six times faster. The effect is more pronounced for political news than for other categories. Bots spread true and false information at the same rates, the researchers found, so it is people who are hitting retweet on false information. One potential explanation is the novelty hypothesis: people are drawn to information that is novel and unusual, as false news often is. (That is not to say bots play no role in spreading misinformation; in fact, they can easily manipulate people’s opinions.)

Falsehoods are 70% more likely to be retweeted than the truth.
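To make those metrics concrete, here is a minimal sketch of how the two headline numbers could be computed from cascade data. The records below are invented for illustration; the actual study analyzed roughly 126,000 verified rumor cascades on Twitter.

```python
# Toy comparison of true vs. false rumor cascades, loosely mirroring the
# metrics in the 2018 Science study. All records are invented.

from statistics import mean

# Each record: (veracity, hours to reach first 1,500 users, retweet rate)
cascades = [
    ("false", 10.0, 0.34),
    ("false", 14.0, 0.30),
    ("true",  60.0, 0.18),
    ("true",  72.0, 0.21),
]

def summarize(veracity):
    rows = [c for c in cascades if c[0] == veracity]
    return mean(r[1] for r in rows), mean(r[2] for r in rows)

false_hours, false_rt = summarize("false")
true_hours, true_rt = summarize("true")

# How much faster false items reach their first 1,500 users.
print(f"speed ratio: {true_hours / false_hours:.1f}x")
# Relative retweet likelihood, analogous to "70% more likely to be retweeted".
print(f"retweet lift for false items: {(false_rt / true_rt - 1) * 100:.0f}%")
```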

People who share false information are more likely to be distracted or lazy than biased, according to MIT Sloan professor David Rand and his co-author Gordon Pennycook. Their 2018 study, which asked people to rate the accuracy of news headlines on Facebook, found that people who engage in more analytical thinking are better at discerning true headlines from false ones, regardless of their political views.

Some misinformation comes from politicians, and it might help them get votes. Under certain circumstances, people appreciate a candidate who tells obvious lies, even seeing that candidate as more “authentic,” according to research co-authored by Ezra Zuckerman Sivan, an associate dean and professor at MIT Sloan. A norm-breaking candidate who tells lies appeals to aggrieved constituencies because those voters see the norms as illegitimately imposed by the establishment. The paper was co-authored by Minjae Kim, PhD ’18, an assistant professor at Rice University, and Oliver Hahl, PhD ’13, an assistant professor at Carnegie Mellon.

Attaching warnings to social media posts that feature information disputed by fact-checkers can backfire. A study by Rand and his co-authors outlined a potential downside to labeling misinformation online: the “implied truth effect,” where people assume any information without a label is true. As a result, false headlines that don’t get tagged, or aren’t tagged quickly, could be taken as truth. Attaching verifications to some true headlines could be a possible fix.
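A back-of-the-envelope way to see the effect’s logic, with every probability invented for illustration: once warnings exist but cover only some false headlines, “no label” starts to read as “checked and fine,” raising belief in the untagged remainder.

```python
# Toy model of the "implied truth effect." All probabilities are invented;
# they illustrate the mechanism, not the study's estimates.

base_belief = 0.40     # P(reader believes a false headline), no labeling program
labeled_belief = 0.15  # P(believes) when the headline carries a warning label
implied_boost = 0.10   # extra belief in unlabeled headlines once labels exist,
                       # because "no label" now reads as "checked and fine"

coverage = 0.30        # fraction of false headlines fact-checkers manage to tag

belief_without_program = base_belief
belief_with_program = (coverage * labeled_belief
                       + (1 - coverage) * (base_belief + implied_boost))

print(f"average belief in false headlines, no labels:      {belief_without_program:.2f}")
print(f"average belief in false headlines, partial labels: {belief_with_program:.2f}")
```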

Social media can also skew opinions because of what people don’t see. Another study by Rand and several co-authors looked at "information gerrymandering," or how people tend to live in partisan bubbles where they receive only a partial picture of how others feel about political issues. This can distort what people think about how others plan to vote, and even influence the outcome of elections.

Solutions

Aral and MIT Sloan professor Dean Eckles outlined a four-step plan for researchers to measure and analyze social media manipulation and turn that information into a defense against future interference. The steps, in brief: catalog exposure to social media manipulation; combine exposure and voter behavior datasets; assess the effectiveness of manipulative messages; and calculate the consequences of changes in voting behavior.
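As a rough illustration of the middle two steps, here is a minimal pandas sketch. The tables, column names, and numbers are all hypothetical, and a real analysis would require causal inference rather than the raw comparison shown here.

```python
# Hypothetical sketch of steps 2-3: join an exposure log with voter records,
# then take a first look at turnout by exposure. All data and column names
# are invented for illustration.

import pandas as pd

exposure = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "manipulative_posts_seen": [12, 0, 5, 0, 9, 1],
})
voters = pd.DataFrame({
    "user_id": [1, 2, 3, 4, 5, 6],
    "voted": [0, 1, 1, 1, 0, 1],
})

# Step 2: combine the exposure and voter-behavior datasets.
merged = exposure.merge(voters, on="user_id")

# Step 3: crude effectiveness check -- turnout among exposed vs. unexposed.
# (Exposure isn't randomly assigned, so this comparison is only descriptive.)
merged["exposed"] = merged["manipulative_posts_seen"] > 0
print(merged.groupby("exposed")["voted"].mean())
```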

And in his new book, “The Hype Machine,” Aral goes into more depth, exploring the promise and peril of social media and how to protect society and democracy from its threats.

When asked directly, most people say it is important to share information that is accurate, according to a study co-authored by Rand. Yet people tend to share false information online because the social media context focuses their attention on factors other than truth and accuracy — not because they don’t care about the truth of what they are sharing. Reminding people to think about the importance of accuracy — an “accuracy nudge” — can increase the quality of news they subsequently share. (The same is true for inaccurate information about COVID-19.)
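One way such a nudge experiment could be scored is sketched below, using a hypothetical sharing log; the data and the “discernment” measure shown are illustrative, not the paper’s actual analysis.

```python
# Hypothetical scoring of an accuracy-nudge experiment. "Discernment" here is
# the gap between sharing rates for true and false headlines; higher means
# higher-quality sharing. All data are invented.

import pandas as pd

shares = pd.DataFrame({
    "group":    ["control"] * 6 + ["nudge"] * 6,
    "headline": ["true", "true", "true", "false", "false", "false"] * 2,
    "shared":   [1, 1, 0, 1, 1, 0,    # control shares true and false alike
                 1, 1, 1, 0, 1, 0],   # nudged users share fewer false items
})

rates = shares.groupby(["group", "headline"])["shared"].mean().unstack()
# Sharing discernment: true-headline share rate minus false-headline rate.
rates["discernment"] = rates["true"] - rates["false"]
print(rates)
```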

Taking time to think also helps. In another study, Rand and his co-authors found that when people had a chance to deliberate about the accuracy of news headlines, they were more likely to identify false headlines than when they made a snap judgment. This was true regardless of a person’s political beliefs and whether the headline affirmed those beliefs.

Look at how advertising works on social media platforms. Fake news spreads through channels like Facebook’s marketing tools, which allow advertisers to pay to target specific groups of people. A study co-authored by MIT Sloan marketing professor Catherine Tucker found a 75% reduction in the sharing of fake news after Facebook rolled out a new advertising system designed to intercept articles containing fake news stories.

Crowdsource ratings for online news sources. After initial concerns about Facebook’s idea to survey users about the validity of various news sources, Rand and his colleagues found in a study that laypeople generally came to the same conclusions as fact-checkers, showing that the wisdom of the crowd could work. One caveat: the decision to only allow people familiar with a news source to rate its validity was a “terrible idea,” Rand said.
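The comparison at the heart of that study can be sketched simply; the sources and ratings below are invented. If mean layperson ratings correlate strongly with fact-checker ratings, crowdsourced quality signals look viable.

```python
# Hypothetical check of crowd vs. fact-checker trust ratings of news sources.
# Source names and scores are invented for illustration.

import pandas as pd

ratings = pd.DataFrame({
    "source":      ["outlet_a", "outlet_b", "outlet_c", "outlet_d", "outlet_e"],
    "crowd_mean":  [4.2, 3.9, 1.8, 2.1, 4.5],  # average layperson rating (1-5)
    "factchecker": [4.5, 4.0, 1.5, 2.4, 4.6],  # professional rating (1-5)
})

# A high correlation suggests the crowd's judgment tracks the experts'.
print(round(ratings["crowd_mean"].corr(ratings["factchecker"]), 2))
```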

Rand, MIT Sloan research scientist Mohsen Mosleh, and MIT Sloan graduate student Cameron Martel also studied whether the type of correction (for example, polite and hedged versus more direct) makes people more likely to reply or to correct their behavior. The bottom line: it does not. But analytic thinking and actively open-minded thinking are associated with updating beliefs in response to corrections.
