There is a belief in certain quarters that the algorithms used by Facebook and other social media companies to feed users content are disastrous, because they create "media bubbles," reinforce people's beliefs, increase polarization, and spread misinformation. So Meta agreed to cooperate with outside researchers to investigate the issue, providing them with data outsiders could not usually obtain and allowing them to randomly change how some people were offered content.
The first results are now out, and they do not show that the algorithms are very important (NY Times).
First of all, many users don't even look at the content fed to them by the algorithm; they visit certain pages and groups for their news, so they only see the content shared by the creators of those pages or the members of those groups. Politically active people are not passive consumers of whatever social media companies offer them, but actively seek out the content they like.
In one experiment, the algorithm was turned off and people simply saw posts from their friends in chronological order, and this had no measurable effect on what they read, posted, or did.
This makes sense to me; I have always found the notion that somebody might, for example, be converted to Islamic terrorism by a few articles in their news feed to be ridiculous, and I have never been convinced that social media are to blame for what is wrong with American politics. I have always traced our current bout of polarization to the 90s, before social media even existed. All of this feels to me like another way of refusing to believe that other people might actually disagree with you in a serious and heartfelt way. People who think that their own beliefs are obviously true imagine that others only disagree with them because they are fed misinformation; liberals blame Fox News and social media, while conservatives blame the mainstream media. But maybe, just maybe, other people are simply different than you are.
I think you have a strange takeaway.
The point of these algorithms is to give people more of what the system thinks they want. Thus, it's not remotely surprising that people still go find what they want without the algorithms.
The problem is that the algorithms don't care WHAT they feed to people, and that means the algorithms frequently actively promote things which are bad for society - like misinformation and disinformation, or hate speech.
Will the people who want to see that sort of thing actively seek it out? Absolutely. But that in no way suggests we should be enabling them. We should, in fact, be making it harder for them.
Facebook should not be recommending The Anarchist Cookbook, for example. Twitter should not be helping to spread Holocaust denialism. Et cetera.
There's a subtle but massive difference between letting your users discuss things amongst themselves and actively promoting those things yourself. The classic argument in internet law is that the government cannot hold companies responsible for content that their users post without their knowledge or consent, because any method that could possibly police and enforce that would render the service itself unusable. But that defense falls apart when a company goes out of its way to actively promote said content. You can't claim you don't endorse something objectionable or dangerous when your system is actively putting it in front of people's eyes unprompted, rather than merely allowing them to post it or seek it out for themselves.
The question is how long the algorithms would have to be disabled before there was a measurable effect. Or what if the damage was already done in the past, by pushing people in some direction, and once they have been pushed, it cannot be undone?