Facebook says it doesn’t benefit from hate, but its algorithms tell a different story: op-ed

‘Zuckerberg is hiding the fact that he knows that hate, lies & divisiveness are good for business’ — Dr. Hany Farid

At a time when huge companies are suspending their advertising campaigns on Facebook, the social media giant claims it doesn’t benefit from hate, yet its algorithms and business model tell a different story.

Nick Clegg

“I want to be unambiguous: Facebook does not profit from hate” — Nick Clegg

Last week, Facebook VP of Global Affairs and Communications Nick Clegg declared that Facebook does not benefit or profit from hate, in a piece that ran in Ad Age and on the Facebook blog.

“I want to be unambiguous: Facebook does not profit from hate,” wrote Clegg.

“Billions of people use Facebook and Instagram because they have good experiences — they don’t want to see hateful content, our advertisers don’t want to see it, and we don’t want to see it. There is no incentive for us to do anything but remove it,” he added.

However, Facebook can still benefit from hate even when it removes hate speech because its algorithms fuel divisiveness, and hate is good for Facebook’s business model, according to expert witness testimony during a House Committee on Energy and Commerce hearing last month.

Dr. Hany Farid

“Mark Zuckerberg is hiding the fact that he knows that hate, lies, and divisiveness are good for business” — Dr. Hany Farid

UC Berkeley professor and expert in digital forensics Dr. Hany Farid testified that Facebook has a toxic business model that puts profit over the good of society and that its algorithms have been trained to encourage divisiveness and the amplification of misinformation.

“Mark Zuckerberg is hiding the fact that he knows that hate, lies, and divisiveness are good for business,” Farid testified.

“They didn’t set out to fuel misinformation and hate and divisiveness, but that’s what the algorithms learned.

“The core poison here is the business model” — Dr. Hany Farid

“Algorithms have learned that the hateful, the divisive, the conspiratorial, the outrageous, and the novel keep us on the platforms longer, and since that is the driving factor for profit, that’s what the algorithms do.”

“The core poison here is the business model. The business model is that when you keep people on the platform, you profit more, and that is fundamentally at odds with our societal and democratic goals,” he added.

Tristan Harris

“Facebook and the other companies will often claim that they’re holding up a neutral mirror to society — but they’re not!  They’re holding up a fun house mirror” — Tristan Harris

Another claim Facebook’s Clegg made in his company’s defense was that “Platforms like Facebook hold up a mirror to society.”

However, that mirror is a distorted one, according to Center for Humane Technology President and ex-Google ethicist Tristan Harris.

“Facebook and the other companies will often claim that they’re holding up a neutral mirror to society […] but they’re not!” said Harris in a presentation last month.

“They’re holding up a fun house mirror, a distorted mirror that tends to amplify the things that worked for manipulating human vulnerabilities and preying on the deep, soft underbelly of our hatred, our fear, our anxiety, our emotions instead of actually trying to help us,” he added.

“Platforms like Facebook hold up a mirror to society” — Nick Clegg

Facebook’s algorithms were designed to keep users on the platform for as long as possible in order to rake in more ad revenue, and they manipulate human vulnerabilities to keep people’s eyes glued to the page.

We don’t spend our days looking for car crashes, but when we pass by one, we can’t help but slow down and look.

The same thing happens when we are bombarded by outrageous social media posts — we can’t help but look at the flashy information that tickles our senses like a virtual car crash.
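To see why that incentive is baked in rather than bolted on, consider a minimal sketch of engagement-optimized feed ranking. This is purely illustrative: the field names, weights, and functions below are hypothetical, not Facebook’s actual system. What matters is what the scoring objective leaves out.

```python
# Purely illustrative sketch of engagement-optimized ranking.
# NOT Facebook's code; all fields and weights are hypothetical.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_watch_seconds: float  # estimated dwell time on the post
    predicted_comments: float       # comments often spike on divisive posts
    predicted_shares: float

def engagement_score(post: Post) -> float:
    """Score a post purely by expected engagement.

    Note what is absent: nothing here asks whether the post is
    truthful, hateful, or divisive. If divisive content reliably
    drives dwell time and comments, this objective rewards it.
    """
    return (
        1.0 * post.predicted_watch_seconds
        + 5.0 * post.predicted_comments
        + 3.0 * post.predicted_shares
    )

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest expected engagement first: the optimization target is
    # time-on-platform, which is what sells more ad impressions.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in that objective needs to mention hate for hate to rise to the top; if outrage reliably predicts engagement, an optimizer like this promotes it automatically, which is exactly the learned behavior Farid describes.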

“Tech companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology and even habit-formation processes, without proper responsibility” — Tristan Harris

“Tech companies manipulate our sense of identity, self-worth, relationships, beliefs, actions, attention, memory, physiology and even habit-formation processes, without proper responsibility,” Harris testified before Congress in January.

“Technology has directly led to the many failures and problems that we are all seeing: fake news, addiction, polarization, social isolation, declining teen mental health, conspiracy thinking, erosion of trust, breakdown of truth,” he added.

Facebook’s Clegg concluded his argument with, “We may never be able to prevent hate from appearing on Facebook entirely, but we are getting better at stopping it all the time.” On the first point, I have to agree with him: hate can never be prevented entirely, because people will always be people.

But Clegg’s focus on preventing and removing hate speech does nothing to address the fundamental issue — that Facebook’s algorithms and entire business model are drivers of hate and divisiveness.

Clegg is looking at a symptom, not the cause.

Facebook knows that its algorithms “exploit the human brain’s attraction to divisiveness,” according to the Wall Street Journal, and yet “Facebook executives shut down efforts to make the site less divisive.”

Brandi Collins-Dexter

“When executives at Facebook were alerted that their algorithms were dividing people in dangerous ways, they rushed to kill any efforts to create a healthy dialogue on the platform” — Brandi Collins-Dexter

As Color Of Change Senior Campaign Director Brandi Collins-Dexter testified last month in the same hearing as Farid:

“When executives at Facebook were alerted that their algorithms were dividing people in dangerous ways, they rushed to kill any efforts to create a healthy dialogue on the platform.”

Strong evidence points to Facebook benefiting directly from hate, regardless of whether “hate speech” itself is removed.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
