Social Media Platforms Release Updates to Keep Users Engaged, Combat Misinformation and Tackle Inequity
As the world prepares for life after the pandemic, social media platforms are scrambling to roll out new features to keep users engaged, combat misinformation and tackle newsfeed inequities.
In April, Facebook announced a series of algorithmic changes to expand user engagement, including a “Suggested Topics” prompt in the newsfeed. The feature is based on users’ demonstrated interests, such as the content, people, and Pages a user interacts with. Another discovery tool, “Page Suggestions,” was also added. But it’s the social media giant’s new “Related Discussions” prompt that has some people on the fence.
Once a Facebook user interacts with a post, an animated icon will now appear with a link to see groups and other users who have shared the same post. According to honchos at Facebook, the feature’s intent is to deliver more context around a user’s interests by showing them how other people are discussing the topic, but some worry it will add to the already fiery and divisive nature of the platform.
At any rate, Facebook is hoping its newly refined ranking processes, like its updated user surveys and feedback assessments, will help flag posts that are causing a negative reaction (literally weighted, in part, by the 😡 emoji!).
That feature is also part of the company’s initiative to combat offensive content and misinformation, following intense scrutiny over claims that the social media giant has fanned the flames of hate speech and fringe groups. Among the new user features rolled out during the month, Facebook announced that its Oversight Board will now allow users to appeal Facebook’s decisions to keep content on or off the platform.
On the heels of the #BlackLivesMatter and #StopAsianHate movements, other social media platforms announced their efforts to curb inequity in their algorithms. Instagram has appointed an Equity Team to improve diversity on its platform, and Twitter plans to research whether its machine-learning systems cause “unintentional harm” by analyzing potential racial and gender bias in its image-cropping algorithm.
Will it work? It remains to be seen how effective these updates will be, but users are sure to make their voices heard if or when social media companies miss the mark.