Social Media Algorithms and Human Learning Biases: A Complex Interplay

By
Anna Oneal
March 28, 2023

A recent study, published on August 3, 2023, in the journal Trends in Cognitive Sciences, illuminates the intricate relationship between social media algorithms and human learning biases. These algorithms, designed to optimize user engagement for advertising revenue, often inadvertently magnify the biases inherent in human social learning, contributing to the proliferation of misinformation and heightened polarization.

Exploring the evolutionary basis of human learning biases, the study notes that in prehistoric societies, individuals naturally leaned towards learning from their ingroup or prestigious figures. This inclination was rooted in the belief that information from these sources was more likely to be reliable and would contribute to the collective success of the group.

However, in the contemporary landscape, especially on social media platforms, these ingrained biases become less effective. The study underlines that online connections may not necessarily equate to trustworthiness, and the ease with which individuals can feign prestige on social media further complicates matters.

The central argument posited by the study is that social media algorithms, in their pursuit of user engagement, perpetuate the biases ingrained in human social learning. This is achieved by promoting information that aligns with users' biases, irrespective of its accuracy or representativeness of a group's opinions. The researchers refer to this phenomenon as the amplification of Prestigious, Ingroup, Moral, and Emotional (PRIME) information.

This amplification, while serving the algorithms' goal of maximizing user engagement and advertising revenue, unintentionally fosters misinformation and polarization. Content that is politically extreme or controversial is more likely to be amplified, leading users to potentially develop a skewed understanding of the majority opinion within different groups.

The study suggests that addressing this issue requires a multi-faceted approach involving both user education and algorithmic adjustments. Social media users, the study argues, need to be more aware of how algorithms operate and why specific content appears in their feeds. While social media companies rarely disclose the full details of their algorithms, the study proposes offering users explanations for the content shown, such as whether it is popular among their friends or generally trending.

Moreover, the researchers recommend that social media companies consider tweaking their algorithms to foster healthier online communities. Instead of solely prioritizing the amplification of PRIME information, algorithms could be adjusted to set limits on such content while prioritizing the presentation of a more diverse range of information to users.
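To make the proposed adjustment concrete, here is a minimal sketch of what bounding PRIME amplification could look like in a feed-ranking step. All names (`Post`, `rerank_feed`, `prime_cap`, the `is_prime` flag) are hypothetical illustrations, not an actual platform API or the researchers' implementation; it assumes posts already carry an engagement prediction and a PRIME tag.

```python
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    engagement_score: float  # platform's predicted engagement (assumed given)
    is_prime: bool           # tagged as Prestigious/Ingroup/Moral/Emotional

def rerank_feed(posts, prime_cap=0.3):
    """Rank by engagement, but cap the share of PRIME items near the top.

    PRIME posts beyond the cap are deferred rather than dropped, so a
    more diverse mix of content surfaces earlier in the feed.
    """
    ranked = sorted(posts, key=lambda p: p.engagement_score, reverse=True)
    feed, deferred = [], []
    max_prime = int(prime_cap * len(posts))
    prime_count = 0
    for post in ranked:
        if post.is_prime and prime_count >= max_prime:
            deferred.append(post)  # over the limit: pushed down the feed
        else:
            feed.append(post)
            prime_count += post.is_prime
    return feed + deferred
```

A pure engagement ranker would simply return `ranked`; the cap and deferral are the "limit plus diversify" tweak the study describes, expressed in the simplest possible form.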

In conclusion, the study underscores the necessity of aligning social media algorithms more effectively with human social instincts to cultivate healthier online interactions and mitigate the inadvertent spread of misinformation (Source: Cell Press).

Full article: https://neurosciencenews.com/social-media-behavior-misinformation-23752/
