How have social media algorithms changed the way we interact?

By: BBC Tech Posted On: October 12, 2024

Social media algorithms, in their commonly known form, are now 15 years old.

They were born with Facebook’s introduction of ranked, personalised news feeds in 2009 and have transformed how we interact online.

And like many teenagers, they pose a challenge to grown-ups who hope to curb their excesses.

It’s not for want of trying. This year alone, governments around the world have attempted to limit the impacts of harmful content and disinformation on social media – effects that are amplified by algorithms.

In Brazil, authorities briefly banned X, formerly known as Twitter, until the site agreed to appoint a legal representative in the country and block a list of accounts that the authorities accused of questioning the legitimacy of the country’s last election.

Meanwhile, the EU has introduced new rules threatening to fine tech firms 6% of turnover and suspend them if they fail to prevent election interference on their platforms.

In the UK, a new online safety act aims to compel social media sites to tighten content moderation.

And in the US, a proposed law could ban TikTok if the app isn’t sold by its Chinese parent company.

The governments face accusations that they are restricting free speech and interfering with the principles of the internet as laid down in its early days.

In a 1996 essay that was republished by 500 websites – the closest you could get to going viral back then – US poet and cattle rancher John Perry Barlow argued: “Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.”

Adam Candeub, a law professor and a former advisor to President Trump, describes himself as a free speech absolutist.

Social media is “polarising, it’s fractious, it’s rude, it’s not elevating – I think it's a terrible way to have public discourse”, he tells the BBC. “But the alternative, which I think a lot of governments are pushing for, is to make it an instrument of social and political control and I find that horrible.”

Professor Candeub believes that, unless “there is a clear and present danger” posed by the content, “the best approach is for a marketplace of ideas and openness towards different points of view”.

The limits of the digital town square

This idea of a “marketplace of ideas” feeds into a view of social media as offering a level playing field, allowing all voices to be heard equally. When he took over Twitter (now rebranded as X) in 2022, Elon Musk said that he saw the platform as a “digital town square”.

But does that fail to take into account the role of algorithms?

According to US lawyer and Yale University global affairs lecturer Asha Rangappa, Musk “ignores some important differences between the traditional town square and the one online: removing all content restrictions without accounting for these differences would harm democratic debate, rather than help it.”

Elon Musk has compared X to a ‘digital town square’ – but some argue that it is distorted by algorithms

Introduced in an early 20th-Century Supreme Court case, the concept of a “marketplace of ideas”, Rangappa argues, “is based on the premise that ideas should compete with each other without government interference”. However, she claims, “the problem is that social media platforms like Twitter are nothing like a real public square”.

Rather, argues Rangappa, “the features of social media platforms don’t allow for free and fair competition of ideas to begin with… the ‘value’ of an idea on social media isn’t a reflection of how good it is, but is rather the product of the platform’s algorithm.”

The evolution of algorithms

Algorithms watch our behaviour and determine what millions of us see when we log on – and, for some, it is algorithms that have disrupted the free exchange of ideas that was possible when the internet was first created.

“In its early days, social media did function as a kind of digital public sphere, with speech flowing freely,” Kai Riemer and Sandra Peter, professors at the University of Sydney Business School, tell the BBC.

However, “algorithms on social media platforms have fundamentally reshaped the nature of free speech, not necessarily by restricting what can be said, but by determining who gets to see what content”, argue Professors Riemer and Peter, whose research looks at why we need to rethink free speech on social media.

“Rather than ideas competing freely on their merits, algorithms amplify or suppress the reach of messages… introducing an unprecedented form of interference in the free exchange of ideas that is often overlooked.”

Facebook is one of the pioneers of recommendation algorithms on social media, and with an estimated three billion users, its Feed is arguably one of the biggest.

When the platform rolled out a ranking algorithm based on users’ data 15 years ago, instead of seeing posts in chronological order, people saw what Facebook wanted them to see.

Because ranking was determined by the interactions on each post, the algorithm came to prioritise posts about controversial topics, as those garnered the most engagement.
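The mechanism described above can be illustrated with a minimal sketch. This is not Facebook's actual algorithm – the scoring weights, field names and decay formula here are purely illustrative assumptions – but it shows how ranking by interactions rather than recency pushes high-engagement, often contentious, posts to the top of a feed.

```python
# Illustrative sketch of engagement-based feed ranking (not any platform's
# real algorithm). Posts are scored by interactions rather than recency,
# so high-engagement posts rise to the top of the feed.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    likes: int
    comments: int
    shares: int
    age_hours: float

def engagement_score(post: Post) -> float:
    # Hypothetical weights: comments and shares are treated as stronger
    # engagement signals than likes; older posts decay in value.
    raw = post.likes + 3 * post.comments + 5 * post.shares
    return raw / (1 + post.age_hours)

def ranked_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=lambda p: p.age_hours)

posts = [
    Post("a", "quiet update", likes=50, comments=2, shares=1, age_hours=1.0),
    Post("b", "contentious take", likes=40, comments=60, shares=30, age_hours=5.0),
]
# Under engagement ranking, the older but contentious post "b" outranks
# the newer, quieter post "a"; a chronological feed would show "a" first.
```

The same two posts appear in opposite orders under the two feeds – which is the shift from "posts in chronological order" to "what Facebook wanted them to see" that the article describes.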

Shaping our speech

Because contentious posts are more likely to be rewarded by algorithms, there is the possibility that the fringes of political opinion can be overrepresented on social media. Rather than free and open public forums, critics argue that social media instead offers a distorted and sensationalised mirror of public sentiment that exaggerates discord and muffles the views of the majority.

So while social media platforms accuse governments of threatening free speech, is it the case that their own algorithms might also inadvertently pose a threat?

“Recommendation engines are not blocking content – instead it is the community guidelines that restrict freedom of speech, according to the platform’s preference,” Theo Bertram, the former vice president of public policy at TikTok, tells the BBC.

“Do recommendation engines make a big difference to what we see? Yes, absolutely. But whether you succeed or fail in the market for attention is not the same thing as whether you have the freedom to speak.”

Yet is “free speech” purely about the right to speak, or also about the right to be heard?

As Arvind Narayanan, professor of Computer Science at Princeton University, has said: “When we speak online – when we share a thought, write an essay, post a photo or video – who will hear us? The answer is determined in large part by algorithms.”

A ‘marketplace of ideas’ in which everyone is heard equally isn’t possible when billions use social media

By determining the audience for each piece of content that’s posted, platforms “sever the direct relationship between speakers and their audiences”, argue Professors Riemer and Peter. “Speech is no longer organised by speaker and audience, but by algorithms.”

It’s something that they claim is not acknowledged in the current debates over free speech – which focus on “the speaking side of speech”. And, they argue, it “interferes with free speech in unprecedented ways”.

The algorithmic society

Our era has been labelled “the algorithmic society” – one in which, it could be argued, social media platforms and search engines govern speech in the same way nation states once did.

This means straightforward guarantees of freedom of speech in the US constitution can only get you so far, according to Jack Balkin of Yale University: “the First Amendment, as normally construed, is simply inadequate to protect the practical ability to speak”.

Professors Riemer and Peter agree that the law needs to play catch-up. “Platforms play a much more active role in shaping speech than the law currently recognises.”

And, they claim, the way in which harmful posts are monitored also needs to change. “We need to expand how we think about free speech regulation. Current debates focused on content moderation overlook the deeper issue of how platforms' business models incentivise them to algorithmically shape speech.”

While Professor Candeub is a “free speech absolutist”, he’s also wary of the power concentrated in the platforms that can be gatekeepers of speech via computer code. “I think that we would do well to have these algorithms made public because otherwise we're just being manipulated.”

Yet algorithms aren’t going away. As Bertram says, “The difference between the town square and social media is that there are several billion people on social media. There is a right to freedom of speech online but not a right for everyone to be heard equally: it would take more than a lifetime to watch every TikTok video or read every tweet.”

What, then, is the solution? Could modest tweaks to the algorithms cultivate more inclusive conversations that more closely resemble the ones we have in person?

New microblogging platforms like Bluesky are trying to offer users control over the algorithm that displays content – and to revive the chronological timelines of old, in the belief that this offers a less mediated experience.

In testimony she gave to the Senate in 2021, Facebook whistleblower Frances Haugen said: “I’m a strong proponent of chronological ranking, ordering by time… because we don’t want computers deciding what we focus on, we should have software that is human-scaled, or humans have conversations together, not computers facilitating who we get to hear from.”

However, as Professor Narayanan has pointed out, “Chronological feeds are not … neutral: They are also subject to rich-get-richer effects, demographic biases, and the unpredictability of virality. There is, unfortunately, no neutral way to design social media.”

Platforms do offer some alternatives to algorithms, with people on X able to choose a feed from only those they follow. And by filtering huge amounts of content, “recommendation engines provide greater diversity and discovery than just following people we already know”, argues Bertram. “That feels like the opposite of a restriction of freedom of speech – it’s a mechanism for discovery.”

A third way

According to the US political scientist Francis Fukuyama, “neither platform self-regulation, nor the forms of state regulation coming down the line” can solve “the online freedom of speech question”. Instead, he has proposed a third way.

“Middleware” could offer social media users more control over what they see, with independent services providing a form of curation separate from that inbuilt on the platforms. Rather than being fed content according to the platforms’ internal algorithms, “a competitive ecosystem of middleware providers … could filter platform content according to the user’s individual preferences,” writes Fukuyama.

“Middleware would restore that freedom of choice to individual users, whose agency would return the internet to the kind of diverse, multiplatform system it aspired to be back in the 1990s.”
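Fukuyama's middleware idea can be sketched in a few lines. The interface below is hypothetical – no such API exists – but it captures the core of the proposal: an independent, user-chosen filter sits between the platform's raw content and the reader, ranking by the user's stated preferences rather than by the platform's engagement-driven algorithm.

```python
# Hedged sketch of the "middleware" proposal: a user-selected curation
# layer, independent of the platform, re-ranks and filters content by
# the user's own topic preferences. All names here are hypothetical.
from typing import Callable

Post = dict  # e.g. {"topic": "science", "text": "new study"}

def make_middleware(preferences: dict[str, float]) -> Callable[[list[Post]], list[Post]]:
    """Build a curation function from a user's topic preferences.

    Topics absent from the preferences get a score of 0 and are dropped,
    so the platform's own ranking never reaches the user unfiltered.
    """
    def curate(posts: list[Post]) -> list[Post]:
        scored = [(preferences.get(p["topic"], 0.0), p) for p in posts]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [p for score, p in scored if score > 0]
    return curate

platform_feed = [
    {"topic": "outrage", "text": "hot take"},
    {"topic": "science", "text": "new study"},
    {"topic": "sport", "text": "match report"},
]

# The user, not the platform, decides the weighting.
curate = make_middleware({"science": 1.0, "sport": 0.5})
my_feed = curate(platform_feed)
# The "outrage" post is filtered out; "science" ranks above "sport".
```

In a competitive middleware ecosystem, different providers would supply different `curate` functions, and users could switch between them – the "freedom of choice" Fukuyama describes.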

In the absence of that, there could be ways we can currently improve our sense of agency when interacting with algorithms. “Regular TikTok users are often very deliberate about the algorithm – giving it signals to encourage or discourage the recommendation engine along avenues of new discovery,” says Bertram.

“They see themselves as the curator of the algorithm. I think this is a helpful way of thinking about the challenge – not whether we need to switch the algorithms off but how do we ensure users have agency, control and choice so that the algorithms are working for them.”

Although, of course, there’s always the danger that even when self-curating our own algorithms, we could still fall into the echo chambers that beset social media. And the algorithms might not do what we ask of them – a BBC investigation found that, when a young man tried to use tools on Instagram and TikTok to say he was not interested in violent or misogynistic content, he continued to be recommended it.

Despite that, there are signs that, as social media algorithms move towards maturity, their future could lie not with big tech or politicians, but with the people.

According to a recent survey by the market-research company Gartner, just 28% of Americans say they like documenting their life in public online, down from 40% in 2020. People are instead becoming more comfortable in closed-off group chats with trusted friends and relatives; spaces with more accountability and fewer rewards for shocks and provocations.

Meta says the number of photos sent in direct messages now outnumbers those shared for all to see.

Just as Barlow, in his 1996 essay, told governments they were not welcome in Cyberspace, some online users might have a similar message to give to social media algorithms. For now, there remain competing visions on what to do with the internet’s wayward teen.

