Insights: Getting Social Media Right
Social media’s ability to spread mis- and disinformation has come under intense scrutiny, with public awareness and outrage peaking when the Facebook Papers were released. Experts and parents weigh in on the online threats overwhelming us, causing divisions in our society—and they also offer thoughtful solutions.
Jay Varma was in line at the pharmacy to get his Covid-19 vaccine booster shot when he noticed a 15-year-old girl whose father insisted that she get the vaccine as well. The girl resisted, claiming she had read that the vaccine weakens the immune system and ultimately leads to death. For Varma, a 49-year-old father of five who lives in Vinings, Georgia, hearing the girl cite inaccurate information that she had discovered on social media hit close to home.
“I have read so much over the last year in terms of misinformation that it is absolutely mind-boggling,” said Varma, who receives numerous dubious news stories shared over Facebook and WhatsApp each day. “We’re sitting on powerful tools, but they have to be used for the right reasons.”
The Facebook Papers are internal documents leaked by whistleblower Frances Haugen in October 2021. The documents indicated that the tech giant, favoring profits over social responsibility, knowingly used algorithms that perpetuated mis- and disinformation and hate speech by promoting posts that were more likely to get likes, comments and shares.
“There is a business model designed to keep you engaged and involved,” said Anthony Lemieux, professor of communication at Georgia State University. “When you have things that are outrageous or that we perceive as outrageous, we are inclined to get worked up by those and engage more.”
Prevalence of misinformation
Lemieux noted that social media platforms like Facebook and Twitter hire teams to flag dangerous content, and that these companies even sometimes share information with smaller companies that may not have the wherewithal to identify and remove problematic content themselves. However, these measures are still not enough to stem the spread of false information.
Once misinformation has been released, social media can accelerate its spread. According to a Pew Research Center survey conducted between July 26 and August 8, 2021, almost half (48 percent) of U.S. adults say they get news from social media “often” or “sometimes.” “It’s unfortunate. News travels faster on Twitter than it does in real life,” Varma said.
In other cases, these companies claim they are not in a position to flag content at all. For instance, Meta, the new name for the Facebook company, states in its privacy policies that content shared via its messaging app, WhatsApp, is end-to-end encrypted and cannot be viewed by WhatsApp or Meta, which means the companies do not monitor or review it for accuracy, decency or fraudulent claims either.
For the Indian-American community, this can be particularly concerning: the largest share of WhatsApp’s two billion active users—390.1 million monthly active users—comes from India, according to eMarketer. That means that for the more than 100 billion unmonitored messages that are sent each day on WhatsApp, Indians are the largest audience. Moreover, a study published in the Journal of Medical Internet Research found that 30 percent of Indians surveyed used WhatsApp for Covid-19 information, and roughly the same percentage fact-checked less than 50 percent of messages before forwarding them. About 13 percent of respondents in the study said that they never fact-checked messages before forwarding on WhatsApp.
From political propaganda to false health claims, and celebrity gossip to religious indoctrination, Varma has seen it all shared over WhatsApp with little evidence or source documentation cited. That’s why he warns his 16-year-old son, an avid social media user, that these apps are not the best sources of information: under Section 230 of the Communications Decency Act, the tech companies are not obligated to verify the content posted.
“These platforms shirked their social responsibility to verify information to the general public,” said Jonathan Peters, a professor of media law at the University of Georgia. “They are not legally required to do this level of policing.”
User responsibility
Even if Big Tech were more regulated by the government, or the tech companies decided to intervene themselves, the First Amendment right to free speech protects most of what’s posted, even if it’s hostile or false, Peters said. Aside from certain narrow categories of speech like obscenity, defamation, fraud and fighting words, the Supreme Court is protective of all other speech, including most false statements of fact. In light of recent misinformation about Covid-19 and its vaccines, Peters said there could eventually be legal recourse against inaccurate information if someone could bring a claim showing that the information caused medical or physical harm.
“There is some precedent for protecting public health,” he said. But for the most part, a law that restricts an individual right can be justified only if it serves a compelling government interest and is narrowly tailored to serve that interest, Peters added.
While Americans may have a penchant for looking to the law to solve social problems, free speech is protected in the U.S. chiefly on the rationale of the marketplace of ideas: the premise that society self-regulates false statements through public discourse, that is, counter-speech.
“One thing that we have to consider is, what does this say about us and the social norms that normally regulate when the marketplace is well-functioning? The social norms are breaking down,” Peters said. “We need to do a better job of helping individuals learn how to process information credibility more effectively than they do.”
Before reposting or sharing information
Ask yourself the following questions to evaluate the accuracy and validity of information:
- Who is sharing the information? Are they a credible (i.e., named, knowledgeable, objective) source?
- What arguments are being made, and what evidence is used to support them?
- Where else can you corroborate this information?
- Why is this information being shared? What is its purpose?
- How does it make you feel? Be wary of information that elicits strong reactions, including hyper-agreement with your own views.
Verifying information
For Varma, a senior financial analyst for Meridian Cooperative, that process includes relying on traditional, trusted news sources to provide and verify information and gathering perspectives from various media outlets to understand different viewpoints. Named, official, objective and peer-reviewed sources that cite evidence are more credible, though it has become more difficult to detect “deep fakes,” according to Lemieux, whose areas of research include persuasion, crisis messaging and social influence. He said he considers who is presenting the information, how it is presented, and how it makes him feel. Anonymous sources or content that uses emotionally charged language should be a red flag, as well as content that resonates too strongly with our beliefs.
“We are bad at seeing our own biases,” Lemieux said. “Use that sense of hyper-agreement as a gut check.”
Cultivating understanding
If we do come across questionable information, we can try to use counter-speech to set things straight, per Peters. When doing so, South Asian psychosocial experts across the board recommend exercising compassion and empathy with the sender to reach a point of understanding.
“Meet them where they are,” said Dr. Jay Trambadia, a licensed clinical psychologist at Wellstar Health System. “Build the environment that will automatically open the guard that people have. Then they are more likely to think and communicate with you.”
To get to this level of understanding, he suggested asking open-ended questions, engaging in reflective listening, and asking permission to engage in discussion.
This is particularly important on channels such as WhatsApp, according to Atlanta psychotherapist Bhavana Goel, who specializes in multicultural competency. She contends that communicating on WhatsApp is no different from a conversation in the family room. As a result, how people respond on WhatsApp, or whether they respond at all, can elicit strong emotions in users, just as it would in a live conversation. Often, people are simply looking to connect, she said.
“Sometimes it’s benign,” Goel said. “Facebook and Twitter are intentionally politically charged, but WhatsApp? That’s where we share something that’s cool today to feel validated.” Thus it’s possible that the sender is taking the post less seriously than the audience, so recipients can ask senders whether they truly believe what they have sent, and who the source is.
People are often more willing to accept information as true when it comes from a person whose beliefs are similar to their own, Trambadia said. While the source of information is important, one cannot use that as the sole basis for rejecting or accepting the sender’s message. If the sender’s beliefs or actions are typically different from your own, explore together the discrepancies between that person’s thinking and their actions. “People on both sides will start noticing they are very similar,” Trambadia said.
If the message is aggressive or inaccurate, it’s important to set boundaries, Goel added. She encourages clients to tell the sender that what they have sent is not acceptable due to how it makes them feel. It’s also perfectly okay to leave the group entirely if you feel the environment is toxic, she said. “How much are you willing to speak for your mental health? You have to draw the line,” she said.
Finally, it’s possible that there may be some truth to the sender’s message, according to Dr. Neha Khorana, an Atlanta clinical psychologist specializing in relational and family stress. Being open to validating parts of the sender’s message will make them more willing to validate your counter-argument as well, she said. “Two opposing thoughts can be simultaneously true.” For instance, consuming ginger can help strengthen immunity. But consuming ginger does not prevent or cure coronavirus. “The point should be to understand and hear the other person as well.”
When challenging a dubious post
Consider the following actions before attacking or refuting contentious content:
- Consider how much you value the sender or community. Depending on the content, it may not be worth the effort to challenge the information.
- Ask open-ended questions to try to understand the motivations and belief system of the sender.
- Ask the poster if he or she believes the content that was shared, and why. Request their sources/data.
- Use summary statements to verify your understanding and acknowledge the sender’s point of view.
- Educate the sender, if applicable, on how to validate the information on the internet.
- Assert how the post makes you feel and that you disagree.
- Request permission to engage in discussion offline and, if possible, in person.
- Acknowledge that both of you could be right.
- Leave the community if you feel it is toxic, and let its members know that you are doing so.
Amritha Alladi Joseph is the creator of In Transit, a blog about journeys through far-off places, food, and finding fulfillment (joinmeintransit.com). A former reporter for Gannett newspapers, The Hindu, The Gainesville Sun, Gainesville Magazine and CNN-IBN, she is now a consulting manager at EY and lives in Sandy Springs, Georgia, with her husband and two children.