In July 2019, The Verge, an online technology publication, published transcripts from open question-and-answer sessions that Mark Zuckerberg held with Facebook employees, along with leaked audio excerpts from his answers. What specifically caught my attention was the question, and Zuckerberg's answer, on how Facebook determines what counts as hate speech and how to deal with it.

Content moderation, both of hate speech and fake news, has been a big challenge for the social media company over the years. As such, Zuckerberg's leaked Q&A audio gives us a glimpse into how he and his company think about this very important subject.

"We’ve made the policy decision that we don’t think that we should be in the business of assessing which group has been disadvantaged or oppressed, if for no other reason than that it can vary very differently from country to country. So we’re talking about nuances in the US, but there are different ethnic groups or different religions that are in the majority or the minority in different countries, and just being able to track all that and make assessments with any kind of precision, and then deal to hand those rules to, again, 30,000 people who need to make consistent judgments, is just not going to happen. Or, we don’t have the technology yet to do that," said Zuckerberg when answering an employee during an internal Q&A session that was leaked by The Verge.

How Facebook moderates content in Africa

As Zuckerberg correctly points out, nuances vary from country to country, and one could go as far as saying they vary from community to community. Add cultural differences to the mix, and Facebook is faced with the momentous task of determining which content should be moderated and which should not.

Part of this stems from the fact that Facebook has long avoided being labelled a media company (which would mean taking responsibility for content curation and moderation), preferring instead to be called a technology company.

However, this seems to be changing. In Africa, Facebook launched its third-party fact-checking program in 2016 with a few partners and has recently expanded it. On 8 October 2019 the social media company announced that it was expanding the program to Ethiopia, Zambia, Somalia and Burkina Faso through AFP; Uganda and Tanzania through both Pesa Check and AFP; the Democratic Republic of Congo and Cote d’Ivoire through France 24 observers and AFP; Guinea through France 24 observers; and Ghana through Dubawa. This signals, in my opinion, that Facebook is finally accepting that it has a responsibility to monitor the type of content published on its platform.

A thin line

However, there's a very thin line between censorship and content moderation. Freedom of speech is enshrined in many constitutions across the continent. As such, when a platform like Facebook moderates and removes content based on whether it alone deems that content hate speech or fake news, at what point does that count as censorship?

In the specific example that Zuckerberg speaks about when answering a Facebook employee, the phrase that Facebook has deemed hate speech is "Men Are Trash." Although Zuckerberg's computer-like logic is sound, it lacks the nuance to understand the circumstances under which such a phrase is used.
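To illustrate the kind of blunt, context-free matching being criticised here, consider the following minimal sketch in Python. It is purely hypothetical and is not Facebook's actual classifier; the category and phrase lists are invented for illustration. It simply flags any post that pairs a protected category with a dehumanising phrase, which is why it treats "men are trash" and "women are trash" identically, with no sense of who is speaking or why.

```python
# Hypothetical sketch of a context-free, phrase-based moderation rule.
# NOT Facebook's actual system; it only illustrates why matching phrases
# without context "lacks nuance".

PROTECTED_CATEGORIES = ["men", "women", "muslims", "christians"]  # assumed examples
DEHUMANISING_PHRASES = ["are trash"]                              # assumed example

def is_tier_one_hate_speech(post: str) -> bool:
    """Flag any post pairing a protected category with a dehumanising phrase."""
    text = post.lower()
    return any(
        f"{group} {phrase}" in text
        for group in PROTECTED_CATEGORIES
        for phrase in DEHUMANISING_PHRASES
    )

# The rule cannot distinguish context or intent: both are flagged equally.
print(is_tier_one_hate_speech("Men are trash"))    # True
print(is_tier_one_hate_speech("Women are trash"))  # True
```

The point of the sketch is that enforcing protected categories "equally", as Zuckerberg describes below, is easy to hand to 30,000 reviewers or a classifier precisely because it ignores context, and that is also exactly where the nuance is lost.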

Which brings us back to the question: should Facebook be responsible for moderating user content?

You can read Zuckerberg's leaked answer (transcribed) below.

Mark Zuckerberg explains why "Men Are Trash" is considered hate speech on Facebook

Facebook employee: According to your policies “men are trash” is considered tier-one hate speech. So what that means is that our classifiers are able to automatically delete most of the posts or comments that have this phrase in it. [Why?]

Mark Zuckerberg: The hate speech policies are the most fraught. So I’ll walk you through the reasoning of how we got to this policy. And so there are a few things that are going on that I think you want to think about. So one is, gender is a protected category. So substitute in your mind while you’re thinking through this, what if this were “Muslims are trash,” right? You would not want that on the service.

So as a generalization, that kind of framework and protocol that you’ve handed to 30,000 people around the world who are doing the enforcements, the protocols need to be very specific in order to get any kind of consistent enforcement. So then you get to this question on the flip side, which is, “Alright, well maybe you want to have a different policy for groups that have been historically disadvantaged or oppressed.” Maybe you want to be able to say okay, well maybe people shouldn’t say “women are trash,” but maybe “men are trash” is okay.

We’ve made the policy decision that we don’t think that we should be in the business of assessing which group has been disadvantaged or oppressed, if for no other reason than that it can vary very differently from country to country. So we’re talking about nuances in the US, but there are different ethnic groups or different religions that are in the majority or the minority in different countries, and just being able to track all that and make assessments with any kind of precision, and then deal to hand those rules to, again, 30,000 people who need to make consistent judgments, is just not going to happen. Or, we don’t have the technology yet to do that.

So what we’ve basically made the decision on is, we’re going to look at these protected categories, whether it’s things around gender or race or religion, and we’re going to say that that we’re going to enforce against them equally. And now that leads to the discussion that we had in the last question, which is that, is this perfect? No. It’s really challenging to get to something — I mean, you’re not gonna get any answer that everyone is going to agree with.

Some of these things people think we take down too much, some things people think we take down too little. But we’re trying to navigate this in a way where we have a principled approach for having a global framework that is actually enforceable around the world, because to some degree whenever you read about big mistakes that come up in our content enforcement, most of them are actually not because people disagree with the policy.

The question you’re raising, this might be a case where you disagree with the policy, but most of the issues are because one of the 30,000 people who’s made a call didn’t apply the rules consistently. And then that kind of gets put on our motives, and people say “oh well no, you just did this because you’re trying to censor some group of people” or “you just did this because you don’t care about protecting this group of people.” It’s really not that. We try very hard to get this right, as I think you probably all had exposure to here. It’s just that there’s one thing to try to have policies that are principled. It’s another to execute this consistently with a low error rate, when you have 100 billion pieces of content through our systems every day, and tens of thousands of people around the world executing this in more than 150 different languages, and a lot of different countries that have different traditions. So this is challenging stuff, but that’s how we got to where we are.
