
The solution is to realize "hate speech" is mostly subjective and revert back to the clear rules that we had last decade before the current political climate of gratuitous outrage.


I think part of the issue is the power of comments like yours that pretend there's a simple solution.

In some conversations bad actors can gain more power in a debate using misleading information than a good actor can by using the truth.

Most conversations, such as this one about free speech, are so complex that it's tough for a 'good actor' to offer solutions. They may discuss the pros and cons of each side, talk about where further research is needed, point to experts who are more informed, etc. They can still offer solutions, but if they really are a good actor they shouldn't pretend an uncertain solution is complete.

Bad actors, on the other hand, can simplify the complexities. They can provide confident solutions to complex problems and do so without worrying about the information they don't know or about misleading others.

Fortunately, bad actors are often weeded out in discussions, but in conversations that prey on human fear we become more demanding of answers. Explaining to fearful people that "it's complex" isn't satisfying, so we become more susceptible to the lies and misinformation spread by bad actors. Conversations that exploit this fear and anger are where the argument arises over whether misinformation should be limited.


Why is the simple solution just "pretending"? What evidence is there that a complex solution exists or works? What is the basis of the assumption that good actors are not as effective as bad actors? Why has society not devolved into chaos if that's the case? How did this work before social media and online anonymity?


I would say it's pretending because it's not 100% certain to be the solution. As mentioned, the situation is incredibly complex and can't be properly argued in a one-liner. There is no evidence that a complex solution exists, but I would say that if you know you don't have the solution, it's better to convey that uncertainty.

Good actors are almost certainly more effective than bad actors. That's likely why the world hasn't devolved into chaos. But societies DO devolve into chaos, through revolutions and wars. These also obviously take place at times when society is most fearful and angry. But it's possible that many wars and revolutions would have been unnecessary if good actors had used rational discourse to reach solutions.


> "Good actors are almost certainly more effective than bad actors."

By your own exposition, this is the simple and effective solution that's in line with what I stated. Let freedom reign, because the good already solves for the bad, and has done so for the entirety of civilization. Do we really need more?


Fair point. I guess part of this conversation also depends on what each of us believes society's goal should be (minimize suffering, maximize individualistic freedom, etc).

My thinking is more so from a position of minimizing suffering, and that finding a way to reduce the power of bad actors may help minimize wars and other conflicts and reduce suffering. There's obviously the big counterpoint that attempts to reduce the power of bad actors may lead to increased corruption that in turn leaves society worse off than if nothing had been attempted.

But that's why this is just another example of a complex problem.


People should never be allowed to claim suffering from political speech. It is that simple in my opinion. Yes, this legally allows Nazis to hold rallies outside of a synagogue. However, in practice this rarely happens, and is arguably a benefit to society since these people (Nazis) are now outing themselves as insane.

On the other hand, human history has shown that political parties will attempt to censor anything they can. We can see this with the alternative realities constructed by Fox News/MSNBC and we can see it in totalitarian dictatorships like North Korea. I am not aware of any counterexamples of a country where the government does not attempt to influence the narrative.


I’d also like to point out: the Nazis didn’t come to power through free speech.

They came to power by silencing their opponents through force, political posturing, manipulation, and opportunistic behavior.

In fact the Nazis squashed free speech in many ways, and used violence, threats, and coercion to make others silence their friends.

Frankly, had free speech been protected and valued, we may not have had a Third Reich.

Let the fascists speak; let them out themselves, so that common people can distance themselves.


The problem is that the concept of free speech cannot be independent of the concept of power. Rules and laws are only as good as the people in power choose to make them; we've been lucky in many ways that, most of the time, the people in power choose to behave in a way that upholds the laws we consider important. However, that "good will" is precisely what malicious actors exploit in order to achieve that power and then violate the same rules that allowed them to spread their message and rise in the first place.

And unfortunately, once they have power, there is very little that normal people can do to ensure that they will abide by the rules that got them there (i.e. free speech). I would argue that there was no way to prevent the Nazis from squashing free speech except by preventing them from coming to power - and this is true with many malicious actors. Unfortunately, that means we have to balance the need for free speech with the need to prevent malicious groups from coming to power.

Ergo, the Paradox of Tolerance.


> And unfortunately, once they have power, there is very little that normal people can do to ensure that they will abide by the rules that got them there (i.e. free speech).

This is where the second amendment shines isn't it?


Laws do not fight. People do.


> My thinking is more so from a position of minimizing suffering

Don't you think that's unfair by nature? How do you justify it?


Fairness isn't an end in itself, but unfairness can cause people to revolt, which then leads to more suffering.

It's just a matter of how you weigh such considerations: https://www.utilitarianism.net/types-of-utilitarianism


I know what utilitarianism is, as someone who leans more to the side of deontological ethics. The problem with fairness not being an end in itself is that you can't have justice. Your moral ethics culminates in the never-ending struggle we have right now of shifting the blame for what happens and what doesn't, and you're fine with it because you're blinded by the hope that it'll work. If you're willing to tell me why, as well as why you're proud of your relativism and why the means justify the end, I'm all eyes. If you're not, I'll understand.


> In some conversations bad actors can gain more power in a debate using misleading information than a good actor can by using the truth.

Can "actors" be neatly divided into "good" and "bad"? Who decides who is a "good actor" and who is a "bad actor"? Your judgement of who is "good" and "bad" may well differ from mine.


> Your judgement of who is "good" and "bad" may well differ from mine.

One man's terrorist is another man's freedom fighter.


Usually that only works if you ignore the definition of terrorism.


What is the definition?


There is a simple solution. If you read something and you’re offended by it, that’s your problem. The world’s full of stuff; you’re free to take offence to absolutely anything at all. Nobody has any obligation to protect you from that.

The one and only place this falls over is with the concept of “obscenity”. But if that’s the only place you have to make exceptions, they are much, much easier to define.


Is a bad actor bad because they use misleading information?

Is a good actor good because they use the truth?

What makes an actor good or bad?


To prevent bad actors we must become just as bad or worse.


I am offended when you say these things: Bad actors weeded abuse fear lies misinformation anger

They should delete your post for sure right?


I think the issue is that YouTube wants to be seen as the "creator" for the content they distribute, and therefore, all content must not contradict the brand image of Google.

Alternatively, YouTube needs to put more emphasis and responsibility on the reputation of content creators and make it such that their reputation gains them visibility over time. A creator with 20 years of community-determined honest reporting should be featured more prominently than a channel that is new or has a controversial history.
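
As a purely illustrative sketch of that idea (the function, weights, and inputs are invented here, not anything YouTube actually uses), a reputation-weighted ranking might look like:

    import math

    def visibility_score(base_relevance, years_active, community_rating, strikes):
        """Hypothetical ranking heuristic: relevance scaled by reputation.

        community_rating is a 0.0-1.0 aggregate of viewer feedback. Reputation
        grows with tenure (with diminishing returns) and is discounted for a
        controversial history (strikes)."""
        tenure_bonus = math.log1p(years_active)      # diminishing returns over time
        reputation = (0.5 + community_rating) * tenure_bonus
        penalty = 0.8 ** strikes                     # each strike discounts reputation
        return base_relevance * (1.0 + reputation * penalty)

    # At equal relevance, a 20-year, well-rated channel outranks a brand-new one.
    print(visibility_score(1.0, years_active=20, community_rating=0.9, strikes=0))
    print(visibility_score(1.0, years_active=0.1, community_rating=0.5, strikes=0))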


Sadly, can't agree. The rules we had in the last decade weren't actually working; they only appeared to work. They looked like they were creating conflict-free environments, but they were really creating environments where, in general, the minority populations that hate speech was targeted against weren't using the tools or participating in the forums that took a laissez-faire attitude toward such things.


What do you mean by not "actually working"? What is a "conflict-free environment"? What does hate speech have to do with minority populations? Do people within those populations never say hateful things, even to each other? How do you know they never participated in any forums?

This sounds like an awful lot of assumptions.


>What does hate speech have to do with minority populations?

Come on. There's a debate to be had about hate speech and free speech, but it can't even get started on a reasonable footing if some of the participants don't know enough history to be able to answer this question.


There are no laws in the United States against hate speech. It is a made-up term that changes with the wind. It used to describe speech that encouraged violence towards a person or group, but then it just became "speech that expresses hate." Nowadays it is mostly used to describe speech from a majority group that a minority group finds offensive.

What is hate? What is offensive? It's whatever you want it to be!

When in doubt, err on the side of freedom and liberty.


> There are no laws in the United States against hate speech

An interesting part of this discussion is that Youtube is operating globally. There are a number of countries with the concept of “hate speech” in the law. Saying it doesn’t exist in US law is only a partial answer.


That’s true. But as a consequence, content can be blocked or deleted per country; it doesn’t have to be global.

Otherwise countries with most restrictive laws basically dictate what is allowed or not to the rest of the world (though in practice that’s already in part the case).


This is true. I sometimes forget that people from all over the world are on HN. Good point.


Yes, I know there aren't. My comment was just pointing out that there is indeed a connection between hate speech and minority groups.


Every term is a made-up term. Hate speech is generally understood as sexist/racist/homophobic speech (in general, discrimination against a minority group with no justification based in reality). It doesn't need to encourage imminent violence against such a group; that's a subset of hate speech.

You may disagree with the definition, but that's what people are talking about. If you want to elevate the discussion, you should avoid pointless semantic arguments IMO.


>Hate speech is generally understood as sexist/racist/homophobic speech

That may be your definition but it is not the definition and certainly is not close to being the "generally understood" definition.

All words are made up, but definitions should not change with the wind. What is happening with hate speech is that it has morphed into "speech that a minority group finds offensive." This is not a workable definition, because what you find offensive is not what I find offensive. The most broadly accepted definition, based on the laws I see on Wikipedia, is "speech that encourages imminent violence."

The only reason we are even discussing this is because people have begged online platforms to police speech. The inevitable conclusion when you police speech is this problem we are discussing right now. You ultimately just devolve into tyranny of the majority where dissenting thoughts are silenced.

>You may disagree with the definition, but that's what people are talking about.

Just because you don't want there to be nuance doesn't make the nuance go away.


Wikipedia's definition is actually broader than what you quoted.

> Hate speech is defined by Cambridge Dictionary as "public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation". Hate speech is "usually thought to include communications of animosity or disparagement of an individual or a group on account of a group characteristic such as race, color, national origin, sex, disability, religion, or sexual orientation".

If I may clarify, the key part of the definition is "based on something such as [a core characteristic of a person that has no relationship to the hate] such as race, religion, sex, or sexual orientation". So hating someone for doing something is not hate speech. Hating someone for what they "are" (at their "core", if there's such a thing) is hate speech.

This isn't about finding things offensive (although clearly hate speech is found offensive by most people).

I disagree strongly that policing speech inevitably devolves into censorship. We already police speech, for example calls to violence in the US, with no visible devolution. We also police where you can physically be, without undue limits on your ability to go about your life. The slippery slope argument without supporting evidence is lazy.


Under that definition, would you agree that this article titled "Why can’t we hate men?" is hate speech?

https://www.washingtonpost.com/opinions/why-cant-we-hate-men...

If so, what do you think should be done about it?


I don't think that article can be classified as hate speech using any reasonable definition. What I get from it is a disdain toward men for their behavior, particularly as it pertains to power and violence in a particular social/political context. I don't read it as, "I hate you because you have a penis," although there are certainly people who think that way (a very small minority as far as I can tell). As a person with a penis, I certainly don't get a feeling of personal animus from the article nor am I threatened by it.


> Hate speech is generally understood as sexist/racist/homophobic speech

No it's not. Here you have Google removing anti-communist speech which is not sexist, not racist and not homophobic. And communists aren't even a minority in China (though thankfully a minority in most other countries). But "I hate Nazis" is also hate speech, obviously - should Google ban anybody who hates Nazis? Maybe not, you say? So there's some hate that is allowed, but some is verboten. And who decides which is which? Ah, now we are getting to the point of the "hate speech" term - "hate speech" is hate I disapprove of. If I approve of it, it's a vigorous and righteous indignation against the evils of this world and should be lauded, but if I disapprove - it's "hate speech" and should be banned. Now we need only figure out who holds the power to decide these questions... but wait, we already did, Google decides that. All hail Google, the bastion of free speech and protector from the hate speech! I, for one, welcome our new speech overlords.


Anti-communist (against the ideology) speech is not hate speech. Dehumanizing speech against communists (people who identify or are identified as communist) may be. Quoting Wikipedia quoting Cambridge dictionary:

> Hate speech is defined by Cambridge Dictionary as "public speech that expresses hate or encourages violence towards a person or group based on something such as race, religion, sex, or sexual orientation". Hate speech is "usually thought to include communications of animosity or disparagement of an individual or a group on account of a group characteristic such as race, color, national origin, sex, disability, religion, or sexual orientation".

This is a workable definition and it doesn't lead to a slippery slope argument.


> Anti-communist (against the ideology) speech is not hate speech

Why not? Because you said so? If somebody hates communists and publicly proclaims it, I don't see how that's not hate speech. Unless, of course, you massage the definition to match exactly the cases you like. Religion and ideology are the same thing - or, more precisely, religion is a subset of ideology with some specific properties. Why would speech against one ideology be treated differently from speech against another just because some of these ideologies call themselves "religion"? How does that make any sense?

> something such as race, religion, sex, or sexual orientation

Ok, so hating Buddhists is hate speech, but hating communists isn't. And hating atheists is... who knows. Not sure about Wiccans either. How about Objectivists? That looks like a definition crafted to match one narrow case of US contemporary politics (I'd even say a very narrow sliver of contemporary US politics), where racial and sexual discrimination issues are all the rage. But outside that context it makes zero sense; the categories it chooses are just arbitrary.

Is hating Scientologists "hate speech"? Well, depends on whether it's a religion or not, right? Because if it's not, then no hate speech for you. Is hating furries "hate speech"? If it's about a sex fetish then yes, "sexual orientation", but if it's just about cosplay then no, because it doesn't fit the official categories. And so on. It's a completely nonsensical definition, unless you use it exactly as intended - to privilege certain categories of speech and suppress others, because you want to. There's no logical basis under it, just an arbitrary list.

> This is a workable definition and it doesn't lead to a slippery slope argument.

It's not workable because it selects arbitrary categories based on a certain political agenda. If you expand it using "such as" and argue, for example, that regardless of whether Scientology is a religion or not, hating someone for the group characteristic of belonging to it falls under "such as" - good, then how is anti-communism not "such as"? If I make a church that declares Vladimir Lenin a top saint, whose views otherwise completely match communist views except that I also celebrate Lenin's birthday once a year, and call it a religion, is anti-communism now hate speech? Or only if it's directed against me, but not if it's against a Chinese communist who has the same ideology but officially isn't in my church? Again, nonsensical.


Instead of claiming that I lack the history, why don't you explain what the connection is?

The definition of minority is just as subjective.


In a nutshell: hate speech causes people to commit suicide.

In many peoples' opinion, life is more valuable than pure freedom, therefore we should take away (some) freedom in order to save lives.

To be clear, I am not in the camp that believes that, but I try to understand their point of view.


> In a nutshell: hate speech causes people to commit suicide.

Can you elaborate on how we establish a causal relationship between hate speech and suicide? Let alone specifically hateful comments on YouTube? This doesn't seem corroborated by the suicide rates in the US over time. Rates were just as high (and in fact, higher in some years) in pre-internet days [1].

This claim is made even more questionable when taken in conjunction with the previous comment's emphasis on its impact on minorities. Minorities in the US actually have substantially lower suicide rates than Whites [2].

1. https://en.wikipedia.org/wiki/Suicide_in_the_United_States#/...

2. https://en.wikipedia.org/wiki/Suicide_in_the_United_States#/...


What does this have to do with minority populations? Speech that incites violence is already a crime.


Minorities are most often the targets of hate speech, which is why LGBTQ youth are often bullied both IRL and online, and some commit suicide. And yes, it is a crime. But this thread is talking about deleting comments, not about law enforcement.


This thread is about hate speech and the ambiguous, subjective definitions that have led to this unending censorship problem.

If the core agreement is that it's about hate against a group, then saying "minority" is nothing more than just a shortcut for a certain group and doesn't really add anything to the discussion of what is hate speech.


There are studies dating back to the 70s that suggest publishing headlines about suicide causes people to commit suicide. Headlines about murder-suicides similarly caused an uptick in murder suicides.

https://www.ncbi.nlm.nih.gov/pubmed/17750236 (https://sci-hub.tw/10.1126/science.201.4357.748)

> Abstract: Fatal crashes of private, business, and corporate-executive airplanes have increased after publicized murder-suicides. The more publicity given to murder-suicide, the more crashes occurred. The increase in plane crashes occurred primarily in states where murder-suicides were publicized. These findings suggest that murder-suicide stories trigger subsequent murder-suicides, some of which are disguised as airplane accidents.

There have been some studies which found similar correlations with fatal car 'accidents.'

It's easy to speculate that comments discussing suicide, such as yours or mine, might also plausibly cause suicides. Perhaps that's something for you to consider.


I've seen a few cases where people have committed suicide after suffering online hate, and to be honest, I don't think that this hate made a radical difference – rather, it was something that triggered an individual who already was in a very dark place.

Sacrificing free speech for their sake is like doing a nation-wide ban on peanuts to help people with allergies: it would prevent quite a lot of accidental deaths, too. Except that peanuts are not essential for a functioning democracy.


There are a number of well-known historical cases where hate speech targeted at certain minority groups has precipitated genocide (in the worst case) or other forms of violence.


There's a debate to be had about undesirable and uncomfortable speech as well


The problem is that there is not a single, universal correct answer. Answering this question requires that we uncover built-in assumptions, and make them explicit.


> it can't even get started on a reasonable footing if some of the participants don't know enough history to be able to answer this question.

Perhaps you're conflating hate speech with hateful (and harmful) actions. History is full of examples of the latter, but it's also full of examples of the former where no one was actually hurt, minority groups included.

Speech can be a precursor to action, but that doesn't make it a crime in itself (unless you're living in "1984"). What's that old rhyme about "sticks and stones..."?


You acknowledge that speech can be a precursor to action. We have seen direct evidence of this.

Following a speech by President Trump in which he uses coded hate speech, violence against minorities increases.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3102652

I'm not arguing we make hate speech illegal, or have thought crime. In fact if it were up to me racists would be even more up front about their deplorable worldview. But I think it's very important that the employees of companies are allowed to exercise their own value systems by denying users of hate speech a platform.

This isn't a defense of banning anti-PRC posters though. Calling Xi Jinping Winnie the Pooh isn't hate speech. Discussing what constitutes hate speech could be another valuable discussion, perhaps held elsewhere - I don't agree that a definition is impossible, if that's where you're thinking of going. I don't agree that there's a slippery slope.


> But I think it's very important that the employees of companies are allowed to exercise their own value systems by denying users of hate speech a platform.

I completely agree. That doesn't mean I have to agree with their value systems or stay silent about my discontent (and I'm not implying that you claimed that either).

I'm just tired of hearing the phrase "hate speech" used whenever these discussions happen, as if there were some fundamental human law that we have to protect people from ideas that might offend them - either via regulation or private company policy. Illegal or not, I don't think it helps to talk about "hate speech" in the context of censorship and free speech. It's like having a debate about which curse words are the worst, when really that's missing the point.


> that might offend them

There's more at stake than being offended. If the reality was that the reason companies like Google kick out people for, say, ambiguously dogwhistling with sexist blog posts is because sexist blog posts hurt people's feelings, I wouldn't feel quite as strongly about how ethically good it is for Google to have taken said action.

However, it's not "just speech." As I linked elsewhere, studies show that after every dogwhistle-ridden Trump speech, violence against minorities increases. We've strayed into Sticks and Stones territory. Apparently, it's important to not let hate speech spread unfettered. Standing aside, allowing racists to post racist things on your platform, makes you directly culpable in actual violence being committed against minorities.

If I was the CEO of Google, I would not accept that violence is being committed that I could take direct action in preventing. If I was an employee, I'd put extreme pressure on my management to prevent that violence.


The issue is the harmful actions and the people who commit them. That's why there's an exception for speech that incites violence.

Outside of that, rhetoric that you consider hateful is not a direct cause of any harm. That is a dangerous road to go down and it's far better to counter with your own speech instead.


> However, it's not "just speech." As I linked elsewhere, studies show that after every dogwhistle-ridden Trump speech, violence against minorities increases. We've strayed into Sticks and Stones territory.

Studies would show that a thousand other things are also followed by an increase in violence. In literature, movies, telephone calls – anywhere people communicate with other people, you will find that some people communicate offensive (to someone) ideas, and as a result, other people who are prone to violence will be pushed over the edge and do something harmful. Those who seek violent ideas will find them regardless of censorship. And those who seek positivity and love will find them.

It is absolutely "just speech". We haven't crossed any magical boundary today that we hadn't crossed 229 years ago when the bill of rights was ratified. If you want to fully prevent violence from ever occurring by censoring speech, you're welcome to live in a communist country. But of course, you won't find what you're looking for there either.


> We haven't crossed any magical boundary today that we hadn't crossed 229 years ago when the bill of rights was ratified.

You mean, when black people were still considered non-human? I don't think the bill of rights, nor the people that wrote it, gets to stand on its own laurels. There's plenty of room to debate the problems with the Constitution as it stands today.

> If you want to fully prevent violence from ever occurring by censoring speech, you're welcome to live in a communist country.

I'm not sure what to do with an HN poster who conflates the economic theory of Communism with the authoritarian plutocracies of China and North Korea.

Oh hoh if you wanna live in a capitalist country like Russia, where the secret police can disappear you for speaking out against the democratically elected president, maybe you should just move there!

See how ridiculous that sounds?


> I don't think the bill of rights, nor the people that wrote it, gets to stand on its own laurels.

Agreed, but I'm talking specifically about free speech, not any other parts of the constitution or the people who wrote it. I'm not debating the need for a "living constitution".


Your interpretation of the study is not accurate. From https://www.politifact.com/factchecks/2019/aug/12/bernie-san...

"Perhaps most important is that Sanders’ wording implies that the 226% jump stems from comparing hate crimes before and after a Trump rally within the same county, when in fact it’s a comparison from Trump rally counties to similar counties that did not host a Trump rally."


My comment doesn't say that hate speech is a crime. This varies between jurisdictions.


You made a claim here.[1] Then to virtually every response, you ask lots and lots of questions like "What does this mean?" "Why.." "What..."

While your questions are fair, it is noteworthy that you haven't clarified or backed up your original claim. As someone reading this thread, I'm a bit surprised people are engaging with you. Personally, I would not engage with someone who makes an unsubstantiated claim, and instead of backing it up keeps trying to poke holes in others' responses.

[1] https://news.ycombinator.com/item?id=23223793


Which part of that statement are you referring to? The rules are the First Amendment, stated here: https://news.ycombinator.com/item?id=23224929

The fact that hate speech is subjective is evidenced by my questions which examine just how loose and vague the definitions are. There still is no clear answer on what hate speech is, beyond the 1st Amendment which I think is good enough.

If you mean something else, then please state what it is that's unsubstantiated.


The part where you say "The solution is..." and then don't substantiate it, but instead focus on knocking down other solutions. That other solutions have problems does not make your one any better.

(At least in this comment you are providing a qualifier ("I think..."))


My solution is not to implement any of the proposed solutions, so that's the only focus there is. Freedom (via the 1A) instead of vague, complicated and heavy-handed policies that are currently causing an American company to side with an authoritarian regime.

I don't claim that the 1A has no problems, but it does have fewer problems and that's still an improvement, and it has worked for more than a century for this great nation.

What other evidence would you like to see?


> What other evidence would you like to see?

What do you mean by "other" evidence? You didn't provide any.

> but it does have fewer problems

Unsubstantiated claim.

> and it has worked for more than a century for this great nation

Unsubstantiated claim.

To respond in a manner similar to your other comments:

What does "worked" mean? How is it an improvement? How does it have fewer problems? Where exactly did it work in whatever great nation you're thinking of? Have you considered where it didn't work? What is the metric for "worked"?


American companies didn't side with authoritarian regimes before and wouldn't if they just followed 1A rule. People wouldn't be censored on platforms if they just followed the 1A rule.

The metric is personal freedom and how much you can do without the government or someone else preventing you from doing so. Anyone can say what they want and people can ignore and block, leading to more freedom for all.

This is an improvement. It was also what we had before. It worked everywhere in America. I don't know of any place where it didn't work and do note that many countries without freedom envy the American way.


> American companies didn't side with authoritarian regimes before

Please do a simple search before making incredibly wrong claims. There was one very famous case in the decade you refer to. And when you look at the history of American companies in general, you'll find more.

> The metric is personal freedom and how much you can do without the government or someone else preventing you from doing so.

> This is an improvement. It was also what we had before. It worked everywhere in America. I don't know of any place where it didn't work and do note that many countries without freedom envy the American way.

As others have pointed out to you, by that metric, the US failed considerably. It's not what we had before in practice. People have pointed out places where it didn't work.

> do note that many countries without freedom envy the American way.

And many Americans envy other countries for traits they have that the US lacks. What's the relevance?


> How do you know they never participated in any forums?

Large amounts of anecdata of people reporting (e.g. with the first post on a new blog) that they have finally found places where they can freely engage in Internet discourse; and explaining that they hadn’t been engaging in Internet discourse up until then, because any attempt previously was met with people reacting to the cultural “outgroup” signifiers in their message, rather than to the content of the message itself.

> What does hate speech have to do with minority populations?

Pretty much every country other than the US has an official legal definition of hate speech—but even the US has a definition of hate crime. Both terms are defined in terms of prejudice toward a group. Wikipedia’s definition of “hate crime”, for example:

“A hate crime (also known as a bias-motivated crime or bias crime) is a prejudice-motivated crime which occurs when a perpetrator targets a victim because of their membership (or perceived membership) of a certain social group or race.”

> Do people within those populations never say hateful things, even to each other?

“Hate speech” doesn’t literally mean “hateful speech.” If you just say something with hatred, you’re not engaging in hate speech. If you say something with prejudice, intending injury to the victim because of that prejudice, you’re engaging in hate speech.

Keeping that in mind, you can certainly commit an act of hate speech (or a hate crime generally) against someone in the same intersection of groups as you. It probably implies that you hate yourself (or don’t consider yourself a part of such group/groups), though.


The discussion is about speech, and speech is not action. Harmful action is already a crime, and any crime can be upgraded to "hate crime" based on the motivation.

Group membership doesn't have anything to do with minorities though. You can define groups however you want, so a "minority" is entirely dependent on the context of the situation and just as subjective as the hate speech. So who are you considering minorities and what is this anecdotal data that claims they did not participate in forums? Must every forum be welcoming to everyone? Did no other forum exist? Could they not have created their own forum? If they talked to each other, does that mean a forum exists? And if so, doesn't that mean they are free to engage in their own discourse after all?


I don't see how membership of a minority group is all that subjective in most cases. E.g., LGBTQ individuals are pretty clearly a minority, Jews were pretty clearly a minority in Europe in the 20th century, etc. Minorities tend to be more vulnerable to hate speech for obvious reasons.


How you scope the entirety of the universe and how you group the people within determines the minority.

Either way, the point is that if hate speech is targeted against a "group", then that group can be anything and anyone. There is no specific connection to "minorities", whatever that means to you. It just dilutes the discussion about defining hate speech.


>How you scope the entirety of the universe and how you group the people within determines the minority.

That is entirely specious, pedantic and irrelevant. In the context relevant to hate speech, there is no doubt that, say, LGBTQ individuals or Jews are minority groups. I don't think you can really be serious about denying this (given that the numbers are what they are). I am not sure what point you are trying to make here.

>There is no specific connection to "minorities", whatever that means to you.

There is no essential connection between hate speech and minorities, but there's a very obvious connection. Historically, many victims of hate speech and other hate crimes have been members of minority groups. And it's not difficult to see why. It's a lot easier for a majority group to persecute a minority group than vice versa.


Alright, let me rephrase:

Hate speech is subjective. Minority is subjective. Even if there are commonly considered minority groups, it doesn't solve for the definition of hate speech other than just being a shortcut to defining a group membership that is the basis for the "hate".

Therefore there is really no connection (essential or otherwise) that is useful to the discussion of what is hate speech.


It is not subjective in any interesting sense whether a given group is a minority within a particular society. It's simply a question of counting.

The connection, as various people have explained to you, is that minorities within a society tend to be more vulnerable to hate speech and its associated effects.


You can't count until you know the (subjectively chosen) boundary of the whole. For example, a minority in your city may not be the minority in another country.

Either way, so what if it affects minorities more often? The question is "what is hate speech?" and saying "it affects minorities more" does not solve for that definition at all. Hence why the connection does not matter/exist.


Obviously you choose the ‘boundary of the whole’ according to the location of the instance of hate speech. E.g., Jews were a minority in 1930s Germany; LGBTQ individuals are a minority more or less everywhere; Muslims are a minority in London, etc. etc. There’s nothing about this that’s difficult to understand, and you really just seem to be trolling at this point.

As to your ‘so what’, you now seem to accept that there is a connection between minorities and hate speech, which was the point at issue.


There's no connection. It's an observation at best, based on what you define as a minority. For example, billionaires are also a minority group and frequent recipients of hate speech. Do you disagree?


I don't disagree that they are a minority group. I haven't seen examples of hate speech directed at billionaires per se. In this context, people are usually thinking of ethnic, religious and sexual minorities - but of course you know that, right?


To pick a concrete (if inflammatory) example of the subjectivity of "minority", perhaps it is worth considering the example of "Trump voters". Within the context of the US, they are less than 50% of the population, so would it be hate speech to insult his supporters?

To pick a smaller minority, what about "Baby Boomers". Should the "OK, boomer" meme be banned as hate speech? At 22% of the US population, they constitute a smaller minority than the proportion of non-white Americans (28%, excluding White Hispanics).


No-one thinks that anything insulting said to any minority automatically qualifies as hate speech. This is a straw man. The point is that e.g. ethnic, religious and sexual minorities have historically been some of the primary victims of hate speech.


Thanks for clearing that up. Maybe the problem then is the misleading nature of the phrase "hate speech". The speech that is being banned isn't distinguished by being "hateful", but because it is targeted at certain subjectively chosen minorities.

Don't get me wrong, I'm not saying that we shouldn't give extra protections to groups of people who have historically faced disproportionate amounts of violence (and other harms), just that we should maybe call it "selected minority endangering speech" instead of "hate speech", and be clear about the specific cost-benefit trade-off we are making by how we choose and delineate those categories of people and how much speech is covered.


It's pretty easy to find out what people mean by hate speech. What you're doing seems a bit like people who derail discussions about homophobia by pointing out that homophobes aren't literally afraid of homosexuals. In both cases, the terminology is well established. Fussing over it just serves as an excuse to avoid addressing the problem.


I'm not trying to derail the discussion by pointless complaints about etymology, I'm saying that part of the problem with the concept of hate speech is that the name given to it obscures (accidentally) the nuances of how it is applied in practice.

A better analogy would be if the critics of homophobes genuinely thought that homophobia was literally a fear of homosexuals, causing the homophobes to complain that this framing of their position made it hard for them to explain their objection to homosexuality.

By hiding the subjectivity of "hate speech", people then get surprised or angry when it does or doesn't get applied to terms like "communist bandits" or "OK, boomer", or "eat the rich". The real debate isn't about whether the terms are hateful (as the name suggests), but whether the specific groups that are targeted need the specific protections being implemented.


So you were genuinely and not merely rhetorically confused when you asked whether insulting Trump supporters would qualify as hate speech?

Sorry, it just seems like you are deliberately trying to introduce confusion about what hate speech is into this discussion.

There's nothing particularly 'subjective' about the definition of hate speech. At least, it's no more subjective than the definition of 'free speech' or 'censorship' or any of the other relevant concepts in this domain. There's a perfectly objective history of persecution targeting certain minority groups.


I was genuinely looking for a logically consistent framework for excluding "insulting Trump supporters" from being an example of hate speech. I'm sorry if it seemed like I was labouring the point too much by asking where the lines around hate speech should be drawn.

I accept that there are objective historical examples of majority groups persecuting minority groups, and I'll ignore the difficulties of constructing well-defined subsets of a population (e.g. "working class") or whether a given group is numerically a minority (e.g. "females" in many countries). What I still think is subjective, though, is how much (and what sort of) persecution is necessary before a group becomes entitled to claim that hateful language used against them is "hate speech".

Imagine a hypothetical African country that had, say, France as its colonial occupier, under an apartheid system, but then allowed free elections, leading to the native population gaining political power. If the native population had talked about "getting rid of" their French occupiers, while the apartheid system was in place, presumably your definition of "hate speech" wouldn't have applied to that speech. But would your definition also not apply to similar speech (targeted at the same French people) after the occupying minority population lost their power? Would some amount of time (and violence) have to pass before the minority was entitled to point out that the hateful speech directed towards them was this special kind of "hate speech"?

Again, I apologise if this seems like a contrived example (and it's very hard to come up with an example that people don't have instinctive pre-conceptions and biases around), but I'm trying to explore if your definition really is as neutral as you think it is. You're right, though, that terms like "free speech" can be very nebulous, while still being useful concepts.


Yes, the history matters.

If your point is that you can contrive edge cases then, well, duh. There are also edge cases involving free speech and just about every other legal/moral/political concept.


My point isn't that edge cases exist, but that the edge cases force us to examine the process by which we decide whether something is or isn't hate speech.

I think that for a lot of people, they are exposed to a few clear examples of hate speech, and unconsciously build a heuristic that says "Anything that makes me feel the same sense of disgust towards the speaker or sympathy towards the target, is hate speech". Fortunately that heuristic works quite well most of the time for people, but I think it can work so well that the people using it don't question it, and don't realise that their definition has some blind spots in some areas, or scope-creep in others.

So, regarding my contrived edge case, when you say "the history matters", do you mean that the majority can continue to talk about "getting rid" of the minority without it being classed as hate speech, because the minority were historically privileged?

Alternatively, perhaps you mean "speech can change from being allowed to being hate speech (and vice versa) over the course of history". I don't disagree that the meaning of (and people's sensitivities to) words can change over time, but in my example, the change in circumstances happens in a single day. If that is significant, then it means the definition of hate speech depends not solely on the words themselves, or the size of the target group relative to that of the speaker's group, but rather on some sort of determination of whether the target "deserves" to be a subject of hate because of their membership of a group that you (the arbiter of hate speech) deems to be currently or historically over-represented politically.

I really am trying not to put words into your mouth, and I appreciate you taking the time to understand my concerns. Hopefully we'll both be more clear about what we mean when we use the term "hate speech" in future.


Obvious and inaccurate.

Minorities are likely to use persecution because their position is vulnerable.


The US has a pretty well defined notion of "protected class" since at least 1964. Like all of law, there are ambiguities and edge cases that have not been fully enumerated or explored in case law. You're acting like this is a huge nebulous concept, and displaying confusion about minorities.

People often say "minorities" when they mean "protected class." Women are not a minority, but they're a protected class. Billionaires are a minority, but they are not a protected class. Who decides what constitutes a protected class? Case law. When a suitable number of cases demonstrate harm on the basis of membership in a class, then that class may be considered for inclusion in the definition.

The definition of hate speech in Canada is quite narrow, and hinges on the definition of protected class. Progress is slow and methodical, and not the slippery slope that you describe.


> Women are not a minority, but they're a protected class.

No, in the US, women are not legally a protected class. Gender is a protected class, which prohibits discrimination against women, or men.


Conflict isn't necessarily a bad thing. Conflict can resolve tension and raise long-hidden concerns to the forefront. Conflict is a natural part of human interaction.


But the conflict was and still is directed at minority groups for their being a minority. There is nothing to resolve there, a Trans person will not stop being trans, a gay person stop being gay or a woman stop being a woman. These people genuinely need protection and safe spaces.

(I'm also aware that these are not easy issues to tackle but they haven't been met with a lot of care in the last few years)


A bigot can stop being a bigot, but first they must express their bigotry in order to confront it.

Allowing conflict does not obviate a need to allow safe spaces, either.


And people should suffer and not feel safe so that others can learn that people who act or look different are also just human beings?

Society has failed minorities and not minorities society.


Allowing for conflict does not obviate the need for safe spaces.


> They looked like they were creating conflict-free environments

No, they did not. They looked like they were creating environments in which inevitable conflict was common, and in which the discussion of that conflict sometimes led to helpful resolutions and improvements, and sometimes did not.


No. The great opening of the Internet brought points of view onto the internet at scales that hitherto were unknown. There are so many asshats on the Internet that old tactics of manual moderation do not work anymore. Doing nothing is a pathetic excuse for a solution.


What if someone considers you an "asshat"? What should they do?

Sounds like the real issue is scale and anonymity then. What solution do you propose?


Self-service moderation that also serves as training data for machine-learning algorithms. Perhaps a system where messages aren't whitelisted for the entire platform by default but instead must pass scrutiny as a canary. At first you could seed this with employees for users with no audience. During this time, the sentiments of those who view the message are used to determine whether wider distribution is desired.
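
A minimal sketch of what such a canary pipeline could look like, assuming an invented review-pool size and approval threshold (none of this is an existing platform's system):

    from dataclasses import dataclass

    @dataclass
    class Message:
        text: str
        author_id: str
        approvals: int = 0
        rejections: int = 0
        whitelisted: bool = False

    class CanaryModerationQueue:
        """New messages go to a small canary audience first; only messages that
        clear an approval threshold get platform-wide distribution. Resolved
        canaries double as labeled training data for a future classifier."""

        def __init__(self, canary_size=20, approval_ratio=0.8):
            self.canary_size = canary_size        # reviews needed per message
            self.approval_ratio = approval_ratio  # fraction of approvals to pass
            self.pending = []                     # messages awaiting canary reviews
            self.training_examples = []           # (text, passed) pairs for ML training

        def submit(self, msg):
            self.pending.append(msg)

        def record_review(self, msg, approved):
            if approved:
                msg.approvals += 1
            else:
                msg.rejections += 1
            reviews = msg.approvals + msg.rejections
            if reviews >= self.canary_size:
                msg.whitelisted = msg.approvals / reviews >= self.approval_ratio
                self.training_examples.append((msg.text, msg.whitelisted))
                self.pending.remove(msg)

In this sketch the canary audience could be employees at first (for users with no existing audience), and the accumulated labeled examples could later train a model to take over most of the reviewing.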


Part of that is that the average age of participants on the internet has dropped very significantly since mobile applications became ubiquitous.


What were those clear rules we apparently abandoned for no reason?


I think the parent is referring to the idea that even hate speech is free speech.


The First Amendment of the US Constitution.


>>> The solution is to realize "hate speech" is mostly subjective and revert back to the clear rules that we had last decade before the current political climate of gratuitous outrage.

>> What were those clear rules we apparently abandoned for no reason?

> The First Amendment of the US Constitution.

Ah yes, the good old days when every newspaper was obligated to publish every letter to the editor and every citizen was required to reproduce and distribute pamphlets put out by every nut job they ran into on the street.


First Amendment maintains freedom to express but does not state there's an obligation to be heard.

You are free to ignore people as much as you want. You should not stop them from uttering those words in the first place though.


> First Amendment maintains freedom. It does not state there's an obligation. You are free to ignore people as much as you want. You should not stop them from uttering those words in the first place though.

So you're fine if YouTube/Google chooses not to reproduce and distribute certain messages?


I think the point is that it depends on their reasons. The U.S. government imposes restrictions on broadcast content, but these restrictions are on extremely shaky ground, and they rely heavily on not venturing into viewpoint discrimination and focusing entirely on content discrimination (i.e. sexual nudity on ordinary daytime television).


1) There's an argument that deleting the comment disallows the expression in the first place.

2) Sure, private companies can do what they want. My point is that they should just use the first amendment, because the current nebulous definitions aren't really working.


> 1) There's an argument that deleting the comment disallows the expression in the first place.

That argument doesn't hold water. If someone pins a message to my corkboard, I'm disallowing the expression if I express my disapproval by removing it?

> 2) Sure, private companies can do what they want. My point is that they should just use the first amendment, because the current nebulous definitions aren't really working.

The First Amendment only works because non-governmental actors in society are allowed to decide what to publish, what to pass on, and what to throw away according to their own standards. It's not some blanket "everyone allow all" rule.

Most pamphlets are judged to be garbage and go into the trash, much to the consternation of their authors.


1) This is the platform vs publisher discussion. I'm just stating it as a potential case.

2) You agree with me. Again, my point is not that they shouldn't do that, but that the filter they use to do so is so highly subjective and opaque that it's no longer useful.


>> The First Amendment only works because non-governmental actors in society are allowed to decide what to publish, what to pass on, and what to throw away according to their own standards. It's not some blanket "everyone allow all" rule.

> 2) You agree with me. Again, my point is not that they shouldn't do that, but that the filter they use to do so is so highly subjective and opaque that it's no longer useful.

No, I don't. I think it's fine to have opaque, highly subjective standards, and such standards can be very useful. My standards about what I publish, what I pass on, and what I throw away are like that. The costs of transparency can be extremely high, and subjectivity just can't be avoided. In some cases, my judgement may differ from theirs, and I may even say so, but ultimately subjective judgement is unavoidable.


1) That argument is false because the comment could be expressed on another platform or their own platform (e.g. a personal website).

2) That doesn't make sense in the context of a privately owned company; the 1st amendment doesn't define a standard for speech, it outlines explicit limitations on the government.


What if those words they utter are death threats and calls for violence against a group? Should that be protected under the First Amendment?



I see there are a lot of court cases listed in that Wikipedia entry, so it seems like it'll have to be decided by a judge what is considered free speech or not, because language by nature is vague and can be interpreted in many different ways.

So there is no such thing as completely “free” speech if there are constraints.


Have you read any of the cases? They get pretty detailed about what is and isn’t allowed. This is not the first time we’ve been having this debate - far from it. Though, some seem unaware that there’s a substantial body of court history on the subject, both American and otherwise.


As long as they are not going to immediately cause violence. Generic "Death to America" chants are absolutely protected.


A newspaper has editorial control, while Google and YouTube pretend to be platforms. A phone company doesn't censor your conversations, but it also doesn't take responsibility for their content. YouTube has much more in common with the phone company than with a newspaper.


An open forum is not the same as letters to the editor.

It's more analogous to people talking in a bar.


> An open forum is not the same as letters to the editor.

I disagree.

"I rented this hall [for this open forum], and now I’m going to turn off the lights."


You really don't see the difference between an editor selecting one person to speak, versus a forum where almost everyone gets to speak?


> You really don't see the difference between an editor selecting one person to speak, versus a forum where almost everyone gets to speak?

There is no difference. If I invite people to come to my place to talk in some kind of forum, I have no obligation to let people who I think are disruptive or offensive stay, nor do I have an obligation to preserve any record of their words.


There's also an important difference between "your house" and a common open forum with tens of millions of people.


> There's also an important difference between "your house" and a common open forum with tens of millions of people.

No, not really. The only difference that's important is between forums run by the government and forums that are not. In the case of the latter, the people who run it have an extremely free hand.


"Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances."


That never applied to privately owned forums.


I know, they are the clear rules that I’m referring to though.


What law did Congress pass that violates the first amendment to the Constitution? You of course acknowledge that that limitation of the first amendment is explicit - Google, not being an aspect of the government, can do whatever it pleases regarding speech.


I think you are misunderstanding GP. It’s not that YouTube violates the law. It’s that the law makes a clear rule for the government.

What’s good is the clarity of the rule. Other attempts at rules become too subjective, perhaps.

So maybe it would be better if these companies actually followed the same rule as the government is required to, even if they aren’t currently legally obligated to.

Perhaps if they had to play by those rules, they would come up with a better solution than subjective moderation by a small group of employees.


> So maybe it would be better if these companies actually followed the same rule as the government is required to, even if they aren’t currently legally obligated to.

I disagree. The difference between private organizations enforcing a moral code and the government doing so is significant.

Ostensibly, if racists wanted their own youtube, they could make one. I see nothing of value in the further existence of the racists' mindset, but I do think it's important that there aren't thought police, for many of the same reasons that your average alt-right racist would yell about "freeze peach."

Of course, perhaps it isn't such a bad thing to have thought police, if there were open stretches of ungoverned land where racists could go live and start their own government. Beyond Antarctica though, that isn't the case. So for now, I think it's important to allow that separation between private and public rule.


In many people’s view, adhering to First Amendment principles is itself a strong moral code.

So even if it shouldn’t be mandated that a private company do that, it’s reasonable to hold the position that this would be a preferable stance for the company to take.


Perhaps there's a misunderstanding here. I'm saying that the 1st Amendment is an exemplary rule to follow for private corporations too.

These companies didn't really have much more than that for a long time, until the recent addition of ever-changing and heavy-handed limitations.


I'm mostly on your side of this debate, but I don't think this quite works because of spammers (though perhaps that's the only reason). Are spammers protected by the first amendment? I bet Gab has anti-spam measures too.


"Hate speech" was only ever a tool of political control. It's contemporary usage is no accident.


Those rules applied today would result in indecipherable cesspools of memes at best or a community of authoritarian sympathizers at worst, who would be oh-so-happy to start conditioning their like-minded members to force undesirables away, either explicitly through bans or implicitly through non-stop hatred and coordinated harassment.

Neither one is the type of community I have any interest in participating in, and if some sort of mandate came down enforcing it on any of the large social networks I would no longer participate in them.


Your first paragraph describes subs like r/esist, r/againsthatesubreddits, r/fragilewhiteredditor, etc. They just don't do what many people would consider hate speech, so they don't get banned. I think this highlights that while your goal may occur in parallel with banning hate speech in some cases, on its own it will not accomplish much, because the same thing will just happen on whatever side nobody thinks is "spewing hate".

And fwiw, blocking these subs (and subs like r/trumpforpresident! You can block both sides this way) made me completely forget about them until this comment, which I think also shows that self-moderation is a perfectly acceptable answer. Why can't everyone make their own bubble? Why do we have to let other people decide how our bubble looks?


I've not seen any evidence of authoritarian trends in fragilewhiteredditor. I have seen many "but both sides" types try to fruitlessly argue that pointing out white racists are fragile is somehow in and of itself racist, but that is of course absurd.


What a topsy-turvy world to live in where calling out bigotry is equated with hate subs. Maybe on the front of brigading or the like, but content-wise I don't see how they're morally equivalent.


Generally the comments have pretty bad stuff in them, or at least they did before I blocked them.


I'd agree the world is topsy-turvy when the New York Times thinks "are white people genetically predisposed to burn faster in the sun, thus logically being only fit to live underground like groveling goblins" is cool but a white guy wearing a sombrero is beyond the pale.


What made you think I was talking about Trump? I just said "authoritarian".


Nothing; these are just examples that all happen to involve Trump, because it's Reddit, so everything controversial involves Trump.


I just found it strange that _all_ of the examples were anti-trump, especially when various conservative pro-trump subreddits were guilty of the exact same thing.

And frankly, I'm A-OK with communities having heavy-handed moderation no matter what side of the aisle you're on. That's great! At least when you join those communities, you know what you're signing up for.

But the trouble comes when you have lightly-moderated spaces that are targeted by authoritarian extremists with the explicit goal of trying to shift people's opinions. That's subterfuge, and lightly-moderated spaces are defenseless against it, since at that point you either introduce moderation or throw up your hands and let the extremists win.


Three of the four examples I gave were anti trump because they happened to be at the top of my blocked list, probably because they were the first posts I was annoyed by. I also mentioned r/trumpforpresident.

And in my opinion, your third paragraph applies even more strongly to heavily moderated subreddits (/r/the_donald, anyone?) because dissenting opinions are just removed. Unless this is super public and regularly discussed, oftentimes I find that it's easy to forget about and fall into the trap of thinking that everyone feels a certain way. Sure, at first it is right at the front of your mind, but eventually it fades away (at least, in my experience).


Of all the problems that the_donald had, their exclusion of dissenting voices was not one of them. They made it abundantly clear what kind of sub it was, and I think even codified it in the rules.

And I don't quite follow your second concern. Most of the time when I'm socializing I'm not in the mood for pseudo-anonymous debate club, I just want to talk to like-minded people about our shared hobbies. If I want dissenting opinions, I know where to find them.


Then don't participate. People will form communities and stick with the groups they like and enjoy, like they always have.

If anything, the issue with social media is that it's too vast and anonymous. Things still work just fine in the physical world.


>Those rules applied today would result in indecipherable cesspools of memes at best or a community of authoritarian sympathizers at worst, who would be oh-so-happy to start conditioning their like-minded members to force undesirables away, either explicitly through bans or implicitly through non-stop hatred and coordinated harassment.

It seems like the only point of disagreement between you and "them" is who should be targeted.


I don't like political authoritarianism apologia no matter where it's coming from. I've seen both neo-nazis and stalinist tankies overrun and completely ruin communities and I don't care for either.

An internet community having a solid framework of rules that is strictly enforced that keeps discussions friendly and on-topic is a completely separate axis entirely.


What if moderation of hate speech is part of the product?


How do you moderate something you can't define?


Hate speech can be defined. Of course, not everyone will agree on the definition, but that's beside the point.


I mean, isn't that exactly the point the person above is making?


I'm not sure, but either way my question remains. What if moderation of hate speech is part of the product?


Well that's why we need more choices.

...so that they have to compete to offer the most reasonable, well-articulated, and transparent policies.


There are other choices, consumers just don't want them.


> I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description ["hard-core pornography"], and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.

Hasn't stopped the government from moderating other things that can't be defined crisply.


That's also a problem.


That's belittling a very serious issue. A lot of these recent anti-lockdown protests have been fueled on social media by actors peddling extremely questionable narratives.

Holocaust denial has seen a massive revival in Germany, to such a degree that such sentiments are intermixing with conspiracy theories about the "NWO" supposedly using COVID-19 to finalize their "2,000-year-old rule", with the help of Bill Gates, who apparently wants to microchip everybody on the planet to "depopulate" it.

Trump is supposedly the last and only defense against this takeover by the "deep state" and will soon end it all when he reveals "Obamagate"; he also apparently federalized the Fed, though none of these people could even tell me when that supposedly happened.

While individually these ideas and movements have been floating around the web for quite a while, it's absolutely scary how they are now merging together [0] and being chanted by people in the streets after they got their "information on the Internet", which regularly means Facebook groups, YouTube channels, and now even Twitch streams - places they usually arrive at after using search engines in the most misleading way possible.

It's like peak Eternal September, where people will just believe the most obscure sources when they confirm their already established beliefs, over well-established data and factual reality, which is apparently all controlled and manipulated by "dark powers behind the scenes". It's depressing and scary, because until now I thought rationality would pull through - that people would learn to properly parse information for its validity and sources for their credibility.

That did not happen. The bad actors are now taking over, to the point where they stage events in the meat world, openly threatening democratic institutions. I do not know how to stop this, but it can't keep going like this; it will take us nowhere good.

[0] https://www.thedailybeast.com/neo-nazis-qanon-nuts-and-hardc...



