Monday, December 28, 2015

Facebook and Hate Speech

I'm a very firm believer in freedom of speech. There are lots of reasons to oppose censorship, but that's not what this post is going to be about. This post is about what Facebook and similar sites can do to help reduce the effects of hate speech without censoring anyone.

One of the common assertions against censorship is that freedom of thought, speech and expression are paramount, and that anything that erodes these freedoms damages the functioning of a liberal society in a way that is very difficult to undo later.

People like myself, who argue in favour of that assertion, generally make the point that opposing censorship in no way endorses hate speech or bigotry, but it can be difficult to convince people of this when the practical consequence is that hate speech continues. It's also often unhelpful to talk about hate speech in purely philosophical terms while real people are suffering harm.

I've said, for a long time, that it is the duty of good people to individually tell a bigot that they are being bigoted, and that it's not acceptable. Chastisement by peers, or even shunning, is a far preferable way of negating bad ideas than bringing government-sponsored power to bear.

And this is where Facebook comes in (along with other large social media sites like Twitter and YouTube). These platforms give regular people more exposure than anything that has ever existed, which undoubtedly puts a big social good on one side of the ledger, independent of the negative aspects of their business models.

In meatspace, if a person says something terrible, they are (hopefully) shouted down, and their ideas spread less. On the internet, bad ideas spread proportionally more because levels of interaction are counted without any understanding of the context and meaning of those interactions: a hate-filled post will be shown to more people precisely because people are telling off its author.
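
To make that concrete, here's a minimal sketch in Python of the difference between counting raw engagement and weighing what the engagement means. Every name and number below is an invention of mine for illustration, not any platform's actual ranking logic; it just shows the shape of the problem.

    # Hypothetical sketch: naive engagement ranking vs. context-aware
    # ranking. Field names and weights are assumptions, not a real API.

    def naive_score(post):
        # Every interaction is a positive ranking signal, whether it
        # expresses approval or outrage.
        return post["likes"] + post["shares"] + post["comments"]

    def context_aware_score(post):
        # Same signals, but comments classified as rebukes count
        # against the post instead of boosting it.
        rebukes = post.get("rebuke_comments", 0)
        ordinary = post["comments"] - rebukes
        return post["likes"] + post["shares"] + ordinary - rebukes

    hateful_post = {"likes": 5, "shares": 2, "comments": 40,
                    "rebuke_comments": 35}
    print(naive_score(hateful_post))          # 47: outrage inflates reach
    print(context_aware_score(hateful_post))  # -23: rebukes suppress it

Under the naive scheme, telling off the author literally helps the post spread; under the second, the same rebukes push it down.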

Beyond hate speech, I also find it somewhat unsatisfactory that people who are bullied online generally have to stop using social media in order to avoid their tormentors. Bullying short of threats or clear incitement shouldn't be banned, in my opinion, and people's privacy should be treated as another paramount right (subject to targeted warrants issued on probable cause), which means horrible people will get away with quite a lot.

I've even reported hate speech on Facebook in the past. It has never resulted in a post being taken down, and I wouldn't actually want a post taken down. Where I think social platforms like Facebook could improve, however (and this is the point of this long-winded post), is in offering reporting mechanisms that never result in posts being removed, but instead have them flagged as bigoted, or bullying, or whatever, and hidden by default, while remaining available if a user so chooses.

This mechanism would force people to consciously decide that they want to read something that might cause them personal offence, and would only expose them to bad ideas once they've been primed to understand what they're about to read. It's not too dissimilar to how Slashdot has operated for many years, with its community of moderators and meta-moderators.
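
For concreteness, here's a rough sketch in Python of how such a flag-and-hide mechanism might work. The threshold, field names and labels are all assumptions of mine, not a description of any site's real reporting system.

    # Hypothetical flag-and-hide mechanism: reports accumulate labels,
    # and heavily-flagged posts are hidden by default but never removed.
    from dataclasses import dataclass, field

    FLAG_THRESHOLD = 5  # assumed number of reports before hiding

    @dataclass
    class Post:
        author: str
        text: str
        flags: dict = field(default_factory=dict)  # label -> report count

        def report(self, label):
            # Reporting never deletes anything; it only adds a label.
            self.flags[label] = self.flags.get(label, 0) + 1

        def hidden_labels(self):
            return [label for label, count in self.flags.items()
                    if count >= FLAG_THRESHOLD]

    def render(post, show_flagged=False):
        labels = post.hidden_labels()
        if labels and not show_flagged:
            # Hidden by default: the reader sees the warning, not the
            # text, and must consciously choose to click through.
            return "[Hidden: flagged as %s. Click to view.]" % ", ".join(labels)
        return post.text

    post = Post(author="someone", text="(something hateful)")
    for _ in range(5):
        post.report("bigoted")
    print(render(post))                     # warning label only
    print(render(post, show_flagged=True))  # the original text

The essential property is that reporting only ever attaches labels; the post itself stays up, and a reader who wants the unfiltered feed simply opts in.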

Meanwhile, sites like 4chan should remain havens for whatever: with a standard way of allowing any speech while neatly sequestering it from those who don't want to read it, sites that still assert their right to be mere conduits for the messiest of human ideas will be used by a self-selected audience who know what they're in for.

It seems to me that this is a good middle ground.

Thoughts?

1 comment:

  1. Good idea, David. Every public discussion web site should employ that mechanism. I participated in a forum for some time that had fairly heavy moderation, and whole threads would be taken down without explanation. I think your approach is much preferable, and would be a service to society.
