Social Media and Mental Health: Safety in the Digital Age?

How do we balance the benefits of social media with the risks to mental health? Experts from both sides of the aisle discuss.
Image design by Connor Peck for DWF. All rights reserved.

How Do We Balance the Benefits and Risks of Social Media?

By Mitch Prinstein, Chief Science Officer, American Psychological Association, Dr. Rodolfo Leyva, Lecturer in Quantitative Methods, University of Birmingham, and Jason Kelley, Activism Director, Electronic Frontier Foundation

This debate is being published in collaboration with The Impact Guild, a professional network for people who create, use, or distribute media, arts, or entertainment for social good or healthy democracy. 

With the Right Guidance, Social Media Can Be a Force for Good

By Mitch Prinstein – Chief Science Officer, American Psychological Association

The word is out—social media isn’t just harmless fun. But it’s not all bad either, and it’s time to set the record straight using the science we have so far. Before we begin, let’s talk about that science itself. We can’t yet tell you whether the effects we have found suggest lifetime consequences or just ephemeral deviations from what many adolescents experience. But what we can say is that we have observed concerning patterns that need serious attention and research investment. In the meantime, there are some very reasonable opportunities to help kids remain safe, and even benefit from social media, with relatively simple adaptations. Here are five.

Understanding How Youth Engage With Social Media

First, banning social media for youth until they reach a certain age should be considered with caution. I can’t think of a feasible and equitable way to completely remove social media from children’s lives, and doing so would not be consistent with the science anyway. Kids can, and do, benefit from social media, especially kids who have an identity not shared by many in their family or among their classmates. Moreover, we have no evidence that the risks associated with social media disappear on the 13th (or 16th, or 25th) birthday. So rather than take away social media, let’s build in much easier parental controls on the time, content, and channels of communication, and loosen them gradually as kids mature.

Second, social media is not a monolithic experience, so let’s stop talking about it as if we all use it the same way. Logging in to read reliable news, form emotionally intimate friendships, and offer social support is far different from pursuing followers or reposts at all costs, falling down a rabbit hole of influencer content, or posting selfies to fish for validation and praise. If we start discussing social media as a collection of disparate experiences, we can help communicate a more precise message to teens and families about healthy versus unhealthy uses.

Building Media Literacy and Transparency 

A third strategy is for platforms to list warnings on their sites. For youth, those warnings will need to be offered repeatedly, in language kids can understand, and in engaging ways. I can’t buy a reading light for my kids without a package insert reminding them not to play with it in the bathtub. But when it comes to social media platforms, there is no safety information whatsoever that instructs users (of any age) how to use their product safely. We know the platforms have many bots, plenty of misinformation, and content that directs kids to engage in maladaptive behavior. We know the like button changes not only our own behavior, but also how our brain processes information, and whether we subsequently engage in the same behavior offline. We know that algorithms change what we see, in what order, and with what urgency—this affects our estimations of others’ attitudes and behavior as well as our psychological development. Why not at least require a warning to help kids be on the lookout for these issues and maybe even the option (for them or their parents) to turn on or off some of these functions to stay safe?

A fourth possibility takes this one step further. Social media is not going anywhere, and it has become the primary context in which teens’ social development occurs. We should be training kids (at home, in schools, and on the platforms themselves) how to function in a digital world. Psychological scientists already have made fun tools online that could be brought to scale or embedded into the platforms. In ten minutes, my kids and I used these tools to learn how to spot fake news and how to detect a troll online, among several other skills. Science suggests that these work, at least in the short-term, so let’s use them.

Promoting Both Social Media and Mental Health Is Within Reach

And last, but certainly not least, it’s just not fair to exploit maturing brains, whose self-control regions are not yet fully developed, with technology designed to cajole even sophisticated adults into spending more time than we want on social media platforms. We have more than enough science to understand that these platforms are biologically irresistible to teens, and that the effects on lost sleep and physical activity, among others, are dire. So we must work to build healthy practices and limit overexposure to social media, among ourselves and our children.

We still have a lot to learn about social media and youth development, but letting parents have more control, offering some warning labels, and building instruction into the platforms themselves seem like low-hanging fruit. These practices are fairly noncontroversial ways to help ensure that the drive for billions in profit does not leave billions of humans overly dependent on their devices rather than each other.

Social Media’s Concerning Effect on Youth Warrants Stricter Regulation

By Dr. Rodolfo Leyva – Lecturer in Quantitative Methods, University of Birmingham

As a lecturer in quantitative methods with expertise in the fields of media psychology and political sociology, I’ve written multiple peer-reviewed academic publications on experiments, surveys, and ethnographic work that examine the politicizing and socialization impact of social networking sites and the internet more broadly. Thus, my perspective on this debate is that of a non-partisan behavioral scientist who is familiar with the scholarly literature on the effects of social media on young people’s mental health.

Before proceeding to my main points, it should be noted that the effects of media are not homogeneous, nor always direct or significant. This is because the influence of media on our thoughts, feelings, and behaviors depends on the medium, the type of content, the contexts of engagement, and various individual-level characteristics. As such, studies on the effects of specific mass media often come out with mixed or even negligible results (e.g., research on violent video games, fake news, or anti-smoking advertisements). The same holds true for studies on how social media affects youth mental health. That said, there is a leading and rather disconcerting scientific consensus on this topic, which goes as follows.

A Disturbing Consensus: Social Media Harms Mental Health

An extensive and international body of research—which includes several longitudinal and experimental examinations—consistently shows that social media use increases anxiety, depression, suicidal thoughts, life dissatisfaction, and psychological distress. There are two central contextual factors at play. First, social media companies use insights from neuroscience, psychology, and the casino gambling industry to create extremely addictive platforms. Second, children and adolescents are undergoing formative cognitive, emotional, and neurobiological development, making them particularly susceptible to media and peer influences. Correspondingly, the addictive design elements and consequent frequent use of social media apps can lead to poor sleep quality, which, in turn, can induce or aggravate various mental health disorders.

Other major mechanisms of the aforementioned social media pathologies are cyberbullying and unfavorable social comparisons. The former mechanism is self-explanatory. As for the latter, it’s fomented by the invariable exposure to the seemingly endless stream of posts showing wealthy, attractive, and/or popular celebrities, influencers, friends, and/or schoolmates partying, enjoying affluent lifestyles, or otherwise having fun. Such exposure can generate low self-esteem, a negative body image, and loneliness. Additionally, all these effects are dose-dependent and worsen with usage frequency. It’s worth noting that they may vary by gender and other individual factors and that their extent and potency are still being debated and investigated. Nevertheless, the preponderance of empirical evidence indicates that these negative impacts are substantial and hence quite concerning.

Potential Solutions and Regulatory Measures

So, what should be done about them? Well, here I’m more hesitant to comment, as there’s simply not enough research on effective countermeasures to draw on. I will, however, aver that we can’t rely on the free market to resolve these potentially growing problems, since Facebook, TikTok, and every other social media company have routinely demonstrated that they have no interest in meaningfully changing their existing modes of operation. Furthermore, considering the rapid societal adoption of, and ever-expanding dependence on, digital technologies, it’s not reasonable to expect parents to constantly monitor their kids’ social media consumption. To think otherwise would be analogous to expecting parents to always keep their children safe from getting hit by a car in a modern society without traffic lights, stop signs, delineated sidewalks, driver’s licenses, and reckless-driving laws.

Some federal government regulation is, therefore, clearly needed. For starters, this could entail legal mandates requiring social media companies to enact formulations of the following measures: 

  • Impose stricter age verification protocols and automated time limits to restrict the daily usage of underage youth 
  • Increase content moderation targeted at removing images of suicide, self-harm, and violence, as well as threatening and abusive posts 
  • Allow third parties to run and publish the results of algorithm risk audits

Moreover, enforcement of all these mandates could be carried out via strict fines for inadequate compliance. Of course, implementing these regulations will be difficult, and there’s no guarantee they’ll work to curb the problems outlined. Still, they’re worth at least a trial run.

We Can Fix Social Media Without Punishing the People Who Use It

By Jason Kelley – Activism Director, Electronic Frontier Foundation

In March, the American Psychological Association (APA) released a “health advisory” on social media use in adolescence. In it, the APA makes clear that “using social media is not inherently beneficial or harmful to young people.” Rather, the effects of social media depend on multiple factors—in particular, “teens’ preexisting strengths or vulnerabilities, and the contexts in which they grow up.” 

This view is backed up by science, but it’s also common sense. Social media reflects society. It has dark corners, but also bright spots, like educational resources, vibrant and uplifting communities, and places where people can stand up for what they believe in—from fighting racism to pushing for stronger gun laws. Nearly five billion people use it to connect and share ideas, so it would be absurd to claim that this complex landscape is inherently good or bad for anyone. 

We Must Take Into Account the Complexities of Social Media and Mental Health

So, what if we just isolate the parts that are bad for young people and force companies to correct them? Unfortunately, it’s not that simple. Dr. Leyva is not alone in suggesting well-intentioned fixes that would, unfortunately, backfire and punish the very people who use the platforms, particularly the marginalized youth that we are trying to protect—many of whom find important support and resources there. A lot of these “solutions” would likely make our social media problems worse—not to mention that they are, in some cases, unconstitutional. 

Recent laws requiring parental consent before a teenager can access social media, for example, throw the baby out with the bathwater—and will likely get thrown out by the courts. Parents have the ability to limit social media access and many underutilized tools exist for doing so. But for many young people, cutting off this access would exacerbate mental health issues. Common Sense Media found that teens who are already at risk or dealing with mental health challenges are more likely to have negative experiences with social media, but those same teens are also more likely to value the benefits of social media, like finding resources or support. Parental consent mandates would almost certainly lock these same kids out of the very place where they find important communities.

The same is true for kids in unsupportive households, or those in foster care, and the many other young people who find support online. And because age verification requirements force every user to verify their age to use the sites, that also means cutting off access for anyone who can’t do so for other reasons, like not having an ID—a real issue among ten million often already-marginalized adults and among many other adults who have good reasons for not wanting to identify themselves.

Ethical, Logistical, and Legal Concerns

Fixing content moderation through the law is also a thorny issue. Moderation can get better, but much “harmful” content will always be difficult to distinguish from similar content that merely comments on those things. Pick any number of controversial topics, from eating disorder discussions to Nyquil chicken recipes, and try to figure out if someone is commenting on it or advocating for it. At scale, this is impossible. Requiring companies to delete all pro-eating disorder content, for example, will invariably remove resources for overcoming eating disorders. It’s also difficult to carry out this level of moderation at all unless you’re an enormous company with a large budget, so laws requiring it would create a competitive edge for the current companies while making it harder for anyone to build a new, better platform. 

Both of these approaches are likely unconstitutional. The government can’t just stop people under a certain age from logging on any more than it can stop people from engaging in protected speech in other places. And it can’t just decide what otherwise legal speech should be available to young people. These constitutional roadblocks are good things. Without them, we could end up with a sanitized internet where discussing anything remotely controversial, from LGBTQ+ issues to race relations, is not possible—something we see in authoritarian countries. We’ve seen this exact thing occur here at the local level, as schools in Florida remove award-winning books like Slaughterhouse-Five and The Bluest Eye from libraries because of complaints about their content. Allowing governments to make these same determinations about online content would be a terrible outcome for vulnerable young people, who would certainly be caught in the crossfire. 

Dr. Leyva is correct that we need regulation. But we shouldn’t ask the government to set limits for users. Besides, young people blocked from the platforms will just grow up and encounter the same problems as adults. Instead of kicking the can down the road, we should fix the issue at its source. 

Solutions Must Address Root Causes

First, we must ban behavioral advertising. As long as it’s profitable to collect endless amounts of private information about users to target them with ads, it’s in these companies’ best interests to dangerously monetize our attention, at any age. Then, we must require companies to be interoperable—basically, to allow more accessibility from outside the platform. Many of us don’t want to be on Facebook or TikTok, but that’s where all of our friends are. We can’t leave, so companies have no reason to improve their product. Similar to banning behavioral advertising, requiring the companies to be interoperable would force improvements and make it easier for us to create alternatives to today’s incumbents. Interoperability would allow us to share content, and our accounts, across platforms with different types of features. Historically this has led to better options for everyone. Newer, smaller platforms already have improved, customized algorithms built by people who actually use the site—not by the companies. We just need to break down the barriers so more people can get there. 

We should be wary of solutions that would restrict users or handcuff us to current Big Tech companies. If we stop the companies from holding us hostage, mining our data, and making billions of dollars off of these predatory practices, we will also make the entire ecosystem less harmful. Real solutions will give all of us—parents, kids, and everyone in between—more choices, while protecting and improving our ability to build the better digital world we want to live in.

Finding Consensus on Social Media Regulation

By Dr. Rodolfo Leyva – Lecturer in Quantitative Methods, University of Birmingham

It seems that there’s much we all agree on. However, I’d like to respond to some of Mr. Kelley’s points. First, the debate is not about whether social media (SM) is “inherently beneficial or harmful.” SM is simply a communication tool that, like any other tool, can have positive or negative outcomes. SM undoubtedly offers many benefits that youth should enjoy. Nevertheless, SM consumption can catalyze or exacerbate several psychological and behavioral maladaptations in young people. The APA statement cited by Mr. Kelley supports this claim, particularly regarding vulnerable and marginalized youth. Therefore, although further research is needed to determine the degree, duration, and contexts of these deleterious effects, we can confidently infer from the scholarly literature that allowing SM companies to freely experiment on impressionable, developing minds is probably not a good idea.

Second, I recommended stricter—not strict—age verification protocols. Accordingly, my formulation wouldn’t completely cut off access to SM; it would only limit daily usage for minors. This aligns with Professor Prinstein’s idea of instituting greater time and content restrictions for children, gradually loosening them as they grow older. Incidentally, Twitter now restricts unverified accounts to 600 tweets per day. It’s not unreasonable to assume that platforms can do something similar for minors. Furthermore, age verification could be expanded to include alternative proofs of legal age (e.g., credit cards), so that adults aren’t affected. School staff could also be mobilized and compensated to set up SM accounts for minors who lack eligible documentation, with parents/guardians monitoring these accounts thereafter.

Third, the Federal Trade Commission collaborates with the film, television, music, and video game industries to restrict access to violent and sexual content. Moreover, the Federal Communications Commission enforces a 10pm watershed period for airing mature material, and some states and municipalities legally require cinemas to prohibit minors from viewing R-rated movies without adult supervision. My point here is that mass media content deemed inappropriate for children and adolescents is already fairly regulated in a way that’s generally acceptable to most American adults. Correspondingly, I argue that SM companies must work collaboratively with the government, civil society, and academics to establish clear guidelines on content that’s harmful to underage users, along with enforcement actions to reduce its spread and impact.

That being said, I fully agree with Mr. Kelley’s call to ban behavioral advertising. Additionally, I believe we can all agree that policymakers should at least implement the following measures while researchers continue investigating the long-term effects of SM on youth mental health. First, increase fiscal funding for the development and implementation of media literacy curricula in schools. Second, impel SM companies to develop stronger parental controls.

You can also read more from our Political Pen Pals debates here.

Mitch Prinstein
Chief Science Officer, American Psychological Association

Mitchell J. Prinstein, PhD, is APA’s chief science officer (CSO), responsible for leading the association’s science agenda and advocating for the application of psychological research and knowledge in settings including academia, government, industry, and the law. Prinstein previously served on APA’s Board of Directors as an at-large member and was active in APA governance beginning as chair of the American Psychological Association of Graduate Students (APAGS). Before assuming the CSO post, he was the John Van Seters distinguished professor of psychology and neuroscience, and assistant dean of Honors Carolina at the University of North Carolina at Chapel Hill. Prinstein has published more than 150 scientific articles and nine books, including a set of encyclopedias on adolescent development, textbooks for both graduate and undergraduate education in psychology, and two professional development volumes for graduate students. Prinstein holds a PhD and an MS in clinical psychology from the University of Miami and a bachelor’s degree in psychology from Emory University.

Rodolfo Leyva
Lecturer in Quantitative Methods, University of Birmingham

Dr. Rodolfo Leyva is a social-behavioural scientist with a PhD in political sociology from King’s College London (2013). He has expertise and extensive research experience in the fields of cognitive psychology, media communications, and policy studies. Prior to his current appointment at the University of Birmingham’s School of Social Policy, he held academic posts at various British universities, including the University of Warwick, the London School of Economics & Political Science, University of London, and King’s College London.

Jason Kelley
Activism Director, Electronic Frontier Foundation

Jason Kelley oversees activism at the Electronic Frontier Foundation, the leading nonprofit organization defending civil liberties in the digital world, where he is also the co-host of EFF's podcast, "How To Fix the Internet." Before joining EFF, he managed marketing strategy and content for a software company that helps non-programmers learn to code. Jason received his BA in English and Philosophy from Kent State University and his M.F.A. in creative writing from The University of the South.
