
You have to remember that high-profile accounts get 10,000x as many abuse reports as a normal account - nearly all of them bogus. The normal automated moderation functions simply do not work.

Many users will simply mash the "report abuse" button if they see a politician they don't like, or a sports player for an opposing team.

If the normal rules applied identically to everyone, all high profile accounts would simply be inactive in perpetuity.

Maybe a better system would penalize reporters who are found to have reported content that does NOT violate content policies?



This is exactly the solution. If tons of people are reporting Steve Newscaster because he posted a status about how his team won, he shouldn't become immune to criticism; those people should lose the privilege of having their voice heard.

Just send them a little message saying "Hey, you've falsely reported a bunch of posts/accounts lately, so we're restricting your ability to report content for 30 days." If they keep doing it, make the restriction permanent.
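
A minimal sketch of what that strike system might look like, in Python. The thresholds, field names, and 30-day window here are invented for illustration, not anything Facebook actually does:

    # Hypothetical sketch: count reports that reviewers reject, and suspend
    # reporting privileges once a user accumulates too many strikes.
    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    @dataclass
    class Reporter:
        user_id: str
        false_report_strikes: int = 0
        restricted_until: Optional[datetime] = None

        def can_report(self, now: datetime) -> bool:
            return self.restricted_until is None or now >= self.restricted_until

    def record_review_outcome(reporter: Reporter, report_was_valid: bool,
                              now: datetime, strike_limit: int = 5) -> None:
        """Called after a reviewer decides whether the reported content
        actually violated the rules."""
        if report_was_valid:
            return
        reporter.false_report_strikes += 1
        if reporter.false_report_strikes >= strike_limit:
            # First offence: a 30-day restriction, as suggested above.
            # Repeat offenders could have the restriction made permanent.
            reporter.restricted_until = now + timedelta(days=30)
            reporter.false_report_strikes = 0

The bookkeeping itself is trivial; whether the warning message actually deters anyone is the harder question.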


Here's the thing: I can think of multiple ways this could be abused or made controversial in less than 30 seconds. You probably can too.

Actual hostile actors will have years to find ways to use the rules to create controversy and shut down dissenting posts.

If people learn they get flagged after N spurious reports, they'll start rationing their reports so they don't meet the threshold; they'll start making inflammatory posts that technically respect the rules to bait false reports. They'll create scandals, e.g. "Person X said awful thing Y, but when I tried to report them Facebook told me I wasn't allowed to report people anymore. Why does Facebook support Y?"

That's not to say your idea is bad. It's just that you're making it sound really easy, when it's a problem Facebook has poured millions into without finding much of a solution.


That might not work. In the eternal September of Facebook there may always be enough new accounts to continuously file false reports against high profile accounts.


HN has addressed this problem by not granting new users flagging privileges - you need a certain karma threshold to flag an article or comment.

And flagging/vouching privileges can be removed for HN users who abuse them.
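
As a rough sketch of that kind of gate (the threshold is made up; HN's real value may differ, and the field names are assumptions):

    # Hypothetical karma gate for flagging, loosely modeled on how HN
    # restricts the privilege.
    FLAG_KARMA_THRESHOLD = 30  # illustrative value only

    def may_flag(karma: int, flagging_revoked: bool) -> bool:
        """Only established accounts in good standing get to flag content."""
        if flagging_revoked:       # privilege removed for past abuse
            return False
        return karma >= FLAG_KARMA_THRESHOLD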


Yeah but you can just buy warmed-up verified accounts on SEO marketplaces, often several years old and patiently curated to be blandly inoffensive and ready to be turned into whatever you need.


Still, having to do that greatly reduces the problem. The average person who wants to censor a politician they hate isn't going to spend money to buy an account.

Another potential mitigation might be to put a limit on the number of posts that a single user can flag in a day. At some point, the cost of large-scale content manipulation could be made to exceed the expected gains.
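
A daily cap like that is cheap to implement; here's a minimal sketch, with the limit chosen arbitrarily:

    # Hypothetical sketch: cap how many reports one account can file per day,
    # raising the cost of large-scale report campaigns.
    from collections import defaultdict
    from datetime import date

    DAILY_REPORT_LIMIT = 20  # arbitrary illustrative value

    _reports_filed_today: dict[tuple[str, date], int] = defaultdict(int)

    def try_file_report(user_id: str, today: date) -> bool:
        """Returns True if the report is accepted, False if the user hit the cap."""
        key = (user_id, today)
        if _reports_filed_today[key] >= DAILY_REPORT_LIMIT:
            return False
        _reports_filed_today[key] += 1
        return True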

It may even be profitable for Facebook to crack down on this. Every celebrity post that gets illegitimately taken down has potential for showing ads to millions of people.


No.

HN has addressed this problem by keeping it text-only.


Write me a sentence that won't get 1000 reports if viewed by 60 million people.


Right. Very likely they wouldn't even be capable of instructing their 60 million fake AI-managed profiles to 'like' Zuckerboi's own sentence, for that matter...


Nobody is complaining that Facebook has exceptions for automatically suspending accounts just because people are misreporting them as abusive. The issue is, and has always been, exceptions to the content-based rules which are supposed to apply equally to everyone.


So if hundreds of people reported the content posted by a celebrity, or if a classifier misfired on the content posted by a celebrity, there shouldn’t be any protection for the content?


Having a person spend a minute looking at something that has garnered a high number of reports is reasonable, instead of just ignoring reports because it's a celebrity. It doesn't have to be automated. Edge cases are allowed to be expensive.


The content should be evaluated according to the same set of rules that apply to everyone else. If the content classification rules are properly implemented, no amount of button-mashing should result in a different outcome.
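
In other words, reports could decide whether something gets looked at, but never what the verdict is. A hedged sketch of that separation, with classify_content standing in for whatever rules engine actually exists:

    # Hypothetical sketch: report volume only triggers a review; the verdict
    # is a function of the content alone, so button-mashing can't change it.
    REVIEW_TRIGGER = 10  # arbitrary threshold for queueing a review

    def classify_content(text: str) -> str:
        """Stand-in for the real rules engine; returns 'ok' or 'violation'."""
        return "ok"

    def handle_reports(post_text: str, report_count: int) -> str:
        if report_count < REVIEW_TRIGGER:
            return "no_action"
        # The outcome depends only on the content, not on how many people
        # reported it or who they are.
        if classify_content(post_text) == "violation":
            return "take_down"
        return "no_action"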


If you know how to build a bug-free system with content classification rules that are properly implemented for 2 billion people in every conceivable circumstance, I think Facebook (or any other big social media company) would pay you a lot of money to implement that. Like, an unbelievably large sum of money.


I’m not sure that this is a problem that can be solved solely through software and automation. It’s a business problem. Automation is a means to solve this problem - and the one Facebook uses to save money and operate at scale - but when it comes to making content judgments, people are often better than machines at doing it. “Report” buttons can be an input into the mechanism, but they need not be authoritative.

The problem for Facebook is that adding people is going to be expensive and hurt their bottom line. So they are likely incentivized to treat these disparities as collateral damage on the road to ever-increasing profits.


So are you saying you could develop the solution but Facebook won't offer you enough? I don't think it makes a difference whether the system is automated or not.


No, I’m saying Facebook could probably solve this problem with the managers they already have, but with different priorities set by top management, and at greater expense.


Sorry but that seems to be understating the issue. Are you saying it's only a matter of changing the entire company's priorities and increasing the budget by large amounts? If it were that easy then it seems any other wealthy company would have solved it by now.

And in any case, have you tried applying at Facebook? If you have the answer to their problems and are capable, then why not?


I’m not particularly interested in solving Facebook’s self-created problems. Fifteen years ago, I was interested in helping them from a tech growth side, but I can’t imagine working there now, especially in light of the company culture. My priorities would clash with those of the existing upper management all the way up to the CEO (who is completely unchecked by the Board, BTW).

As for this particular issue, I consider celebrities and politicians who stir up trouble largely as fat to be cut. Anyone who builds up that much gravity and controversy tends to cause more problems than they add value, and detracts from democratizing the platform.


Well I think that seems to answer it then -- if nobody who is capable wants to work there for any amount of money, then nothing will change, and the self-inflicted problems will probably get worse. And I believe the smart thing to do from a business perspective would be to severely limit the reach of posts by problem users and then charge them increasing amounts of money to promote posts to the decreasing pool of users who won't block it.


Sure, I'll do that. It's not as hard as you make it sound. Spam classifiers work fine for example. Nobody complains about special exemption lists for the Gmail spam filter because it doesn't need any. The term spam is not precisely defined but a strong social consensus exists on what the word means and that's sufficient for voting on mail streams to work. So I'd just build one of those and collect my big pile of cash.

But that's not the sort of content classification you mean, is it.

So the problem isn't technical. Rather it's an entirely self inflicted problem by Facebook. The cause is an attempt to broker a compromise between the original spirit of the site and internal activists who are bent on controlling information and manipulating opinion. The reason they make so many bad judgement calls is because they're not trying to impose clearly defined rules, but rather, ever shifting and deliberately ill-posed left wing standards like "misinformation" or "hate speech". Because these terms are defined and redefined on a near daily basis by ideological warriors and not some kind of universal social consensus, it is absolutely inevitable that the site ends up like this. A death spiral in which no amount of moderation can ever be sufficient for some of their loudest employees simply cannot be avoided once you cross the Rubicon of arbitrary vague censorship rules.

If Facebook want to reduce the number of embarrassing incidents, they can do it tomorrow. All they have to do is stop trying to label misinformation or auto-classify posts as racist/terrorism-supporting/etc. Stand up and loudly defend the morality of freedom of speech. Refocus their abuse teams on auto-generated commercial spam, like webmail companies do, and leave the rest alone. This isn't hard, they did it in the early days of the site. It may mean firing employees who don't agree, but those employees would never get tired of waging war against opinions they don't like no matter how much was spent on moderation, so it's ultimately futile trying to please them.


I'm sorry, I am having a really hard time understanding this comment. You're suggesting they just apply an email spam filter and that will fix everything? Can you explain how this would work, considering that Facebook is not an email service, and any spam filter would likely face the same issues with abuse that have already been mentioned anyway? Also, I really don't follow how your last two paragraphs have anything to do with that. It seems you're saying a lot of unprovable, out-of-context, politically motivated things, and then saying they should just give up on the problem entirely and have no moderation whatsoever. Isn't that the whole thing you're trying to solve, though? I don't get it. Please avoid the off-topic political comments if you want to make your arguments easy to follow, and just focus on what the solution is and how to get there. Rule of thumb I've noticed: if a comment could be viewed as a political rant, it's probably not going to be very convincing.


Spam filters don't only apply to emails and Facebook already has one, of course, to stop fake account signups and posting of commercial spam. Nonetheless spam filters are a form of content classification. Yes, I'm saying that Facebook should just "give up" and stop trying to do political moderation on top of their pre-existing spam classification. This is not radical. For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine. It was during this period that Facebook grew like crazy and took over the world, so clearly real users were happy with the level and type of content classification during this time.

You ask: isn't moderation the problem they're trying to solve? This gets to the heart of the matter. What is the actual problem Facebook are facing here?

I read the WSJ article and the quotes from the internal documents. To me the articles and documents are discussing this problem: their content moderation is very unreliable (10% error rate according to Zuck) therefore they have created a massive whitelist of famous people who would otherwise get random takedowns which would expose the arbitrary and flaky nature of the system. By their own telling this is unfair, unequal, runs counter to the site's founding principles and has led to bad things like lying to their own oversight board.

It's clear from this thread that some HN posters are reading it and seeing a different problem: content moderation isn't aggressive enough and stupid decisions, like labelling a discussion of paint as racist, should just apply to everyone without exception.

I think the actual problem is better interpreted the first way. Facebook created XCheck because their content moderation is horrifically unreliable. This is not inherent to the nature of automated decision making, as Gmail spam filtering shows - it works fine, is uncontroversial and makes users happy regardless of their politics. Rather, it's inherent to the extremely vague "rules" they're trying to enforce, which aren't really rules at all but rather an attempt to guess what might inflame political activists of various kinds, mostly on the left. But most people aren't activists. If they just rolled back their content policies to what they were seven or eight years ago, the NYT set would go berserk, but most other people wouldn't care much or would actively approve. After all, they didn't care before and Facebook's own documents show that their own userbase is making a sport out of mocking their terrible moderation.

Finally, you edited your comment to add a personal attack claiming I'm making "off topic political comments". There could be nothing more on-topic, as you must know from reading the article. The XCheck system exists to stop celebrities and politicians from being randomly whacked by an out of control auto-moderator system. A major source of controversy inside Facebook was when it was realized that the system was protecting political incumbents simply because they were more famous than their political challengers, thus giving the ruling parties a literal exemption from the rules that applied to other politicians. The political nature of the system and impact of moderation on politics is an overriding theme of concern throughout the documents. You can read the article for free if you sign in - that's how I did it.


> For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine.

I’m not sure that I would consider any site that contains virally-spreading racism and falsehoods to be “fine,” but that’s just me, I guess. Even HN is full of BS, but at least it’s contained to comments and can’t be republished with the click of a button.


"Fact checking" causes more problems than it solves. One man's truth is another man's falsehood. One man's racism is another man's plain speaking.

Best to apply any moderation as minimally as possible. Tag posts which cause arguments as "controversial" and leave it at that.


> One man's truth is another man's falsehood.

That's just silly. Some facts are incontrovertible. Do you really think there's any room for disagreement that the earth is round, or that water freezes at 0 degrees C?

> One man's racism is another man's plain speaking.

There's no room in this world for racism, and you shouldn't be trying to defend it here - or anywhere.


Yes, but I'm a classical tech libertarian so my definition of fine is "it made lots of users happy". If you think Facebook is filled with bad content you could just not go there, obviously.

Facebook's problem is that it has given in to the activist class that feels simply not going there is insufficient. Instead they feel a deep need to try and control people through communication and content platforms. This was historically understood to be a bad idea, exactly because the demands of such people are limitless. "Virally spreading falsehoods" is no different to the Chinese government's ban on "rumours", it's a category so vague that Facebook could spend every last cent they have on human moderators and you'd still consider the site to be filled with it. Especially because it's impossible to decide what is and isn't false: that's the root cause of this WSJ story! Their moderators keep disagreeing with each other and making embarrassing decisions, which is why they have a two-tier system where they try and protect famous people who could make a big noise from the worst of their random "falsehood detecting machine".


I'm also baffled by this comment. Wouldn't that same reasoning also apply to moderation, i.e. if you think Facebook's moderation is bad, you should just not go there? How are they "controlling people" if you acknowledge that going there is voluntary? Also what if their moderation is what is making a lot of people happy? I really don't get what your complaint is, please be more clear.

Edit:

>it's impossible to decide what is and isn't false

I can't agree with this, all organizations have to decide at one point what is false and what isn't, otherwise they have nothing to act on. It would be more convincing if you could suggest ways their moderators could resolve these disputes and determine what is actually true, because what you're suggesting sounds to me like they should just decide that everything is false all the time.


"Wouldn't that same reasoning also apply to moderation, i.e. if you think Facebook's moderation is bad, you should just not go there?"

Sure, and I don't! But this thread isn't about my problems or even your problems, it's about Facebook's problems.

"How are they controlling people if you acknowledge that going there is voluntary?"

People go there because they think they are seeing content shared by their friends, by people they follow and so on. In fact they are seeing a heavily filtered version of that designed to manipulate their own opinions. That's the whole point of the filtering: with the exception of where they label articles as 'fact checked', content that is politically unacceptable to them just vanishes and people aren't told it happens, so they can remain unaware of it. Like any system of censorship, right?

"all organizations have to decide at one point what is false and what isn't"

No, they have to decide that for a very narrow range of issues they've chosen to explicitly specialize in, and organizations frequently disagree with each other about what is or is not true despite that. That's the entire point of competition and why competitive economies work better than centrally planned economies: often it's just not clear what is or isn't true, and "trueness" may well be different depending on who is asking, so different firms just have to duke it out in the market. Sometimes an agreement emerges and competitors start copying the leader, sometimes it never does.

Facebook has a disastrously collapsing system here because they are not only trying to decide what's true in the narrow space of social network design, but trying to decide what's true for any possible statement. This is as nonsensical as GOSPLAN was; it can never work. Heck they aren't even a respected authority on the question of what's true in social networking anymore, that's why they had to buy Instagram and WhatsApp. Their competitors beat them. To try and decide what's true for the entire space of arbitrary questions about politics or science is thus arrogance on a galactic scale.


I don't understand what you mean, that's not censorship. To use an analogy, when you tell everyone you don't like Star Wars, your friends will likely decide to only talk to you about Westworld and not talk to you about what happens in The Mandalorian. Censorship would be if someone was actively trying to stop everyone from talking about Star Wars, which is not what is happening. I would advise against using that word without clear proof of it, as it's misused quite often. Also I don't understand why you were previously encouraging spam filtering but now seem to be against any kind of filtering?

>trying to decide what's true for any possible statement

Any possible statement that happens on their platform, yes. That's generally how it works when you have a company and I don't see how Facebook is doing anything out of the ordinary here -- any company can fire employees/customers for lying. If they know there are lies being posted on their site, it's perfectly reasonable to delete them. In fact I wish they would take more effort to delete more lies and falsehoods, the website is unusable when everybody is caught up in discussing a lie and doesn't want to hear the truth.

>To try and decide what's true for the entire space of arbitrary questions about politics or science is thus arrogance on a galactic scale.

I really can't agree. In general, we know what's true for politics: it's what the elections and the courts decide, at least in the US anyway. By design those are the authoritative sources. There is no authoritative source for science but with that you can verify the accuracy of any statement by testing it yourself and reproducing (or not reproducing) the results, that's the whole point. I don't see why it would be arrogant of a company to do this, as it's what a company is supposed to be doing in order for the system to work.


>For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine

>Gmail spam filtering [...] works fine, is uncontroversial and makes users happy regardless of their politics

>an attempt to guess what might inflame political activists of various kinds, mostly on the left

This is more of what I meant: I find these arguments unconvincing; if you're going to make these claims convincingly then you need to show proof of them. Remember we're talking about two billion users here. Your post does not read like an actual attempt to solve the problem but instead like an attempt to attack "activists on the left" and "the NYT set," and I don't even know who you're talking about or what that's supposed to mean in this context, so I would advise against making these types of statements. It would be more convincing to mention a specific person or persons, what their claims are, and what you disagree with and why.

>Finally, you edited your comment to add a personal attack claiming I'm making "off topic political comments".

This is false, there's no personal attack, I'm saying your comments are unrelated to the argument and are not convincing to me. If you're confused about this, a personal attack would be something along the lines of "X person is a bad person because they follow Y political persuasion" which your comments could be construed as doing about Facebook's employees. So please make sure not to do that. This is again why I highly suggest against making this a political argument, usually it results in someone getting defensive, when there's no reason to do that.

If you want me to put this more frankly: I don't care about your politics or facebook's politics, that's not an interesting subject to me. It's nothing personal against you or facebook. Just talk about the problem please, otherwise I don't want to continue this discussion.

>A major source of controversy inside Facebook was when it was realized that the system was protecting political incumbents simply because they were more famous than their political challengers, thus giving the ruling parties a literal exemption from the rules that applied to other politicians. The political nature of the system and impact of moderation on politics is an overriding theme of concern throughout the documents.

This also seems not really related to the argument. Sure it affects politicians but as has been established, that's a side effect of the way the system has been designed. I don't think it makes a difference whether the side effect was intentional or not.


I really wonder if we read the same article. Large parts of it are about politics and the different ways their system affects different kinds of politics and politicians. I don't understand why you find this somehow irrelevant or compartmentalizable. If you don't care about politics then this whole article and thread about it probably just isn't for you, because the concerns are fundamentally political to their core. In fact the criticism of XCheck is an egalitarian one, that the rules for the famous/ruling classes should be the same as for the plebs.

To flesh this out, look at the examples in the article where moderation decisions split or were reversed - that's the problem XCheck is designed to solve. Most are about politics:

- Saying "white paint colors are the worst" was classed as racism. Trying to define and then scrub all racist speech from the world is a left wing policy.

- A journalist who (we are told) was criticizing Osama bin Laden was classed automatically as supporting him, and then human moderators agreed. Scrubbing this sort of thing is (in the USA) historically either a right wing or bipartisan consensus. We don't know why the moderators agreed because the comments themselves are not presented to us. This was later overridden by higher ups at Facebook.

- Split decisions over Trump's post saying "when the looting starts the shooting starts" that got escalated to Zuckerberg himself.

- "The program included most government officials but didn't include all public candidates for office".

etc. If you try to ignore politics, the entire debate becomes meaningless, because indeed, you would not even perceive any problem with unequal enforcement in the first place.


>Large parts of it are about politics and the different ways their system affects different kinds of politics and politicians

This again would be a side effect. Obviously it's not irrelevant but I have seen no reason to prioritize that over anything else.

>because the concerns are fundamentally political to their core

If by political you mean "needs to be solved with politics" then I can't necessarily agree; this is also a technical problem. To put it another way, it wouldn't just magically get fixed if you elect some new congresspeople or replace Zuckerberg; the new people would still have to take additional technical steps to fix the problem. If the solution already exists and is being ignored then I would agree, but I have seen no solution offered in this thread. Instead of trying to present the solution, which I have asked you to do repeatedly, it seems you're still trying to make this a political argument, which I wish you wouldn't do. I don't know how many times I need to say that it's not convincing to me to come at it from that angle. I'm sorry if that seems rude but it's the truth of the matter. If your end goal is to campaign for me to vote for somebody, please stop, I'm not interested to hear it (again, nothing personal).

On the rest of your comment: I honestly have no idea what your examples are supposed to mean, why you are making these assumptions, or why the political motives or policies of some other parties matter. There will always be some users who take issue with any kind of filtering, and it makes no sense to me to prioritize the ones who happen to have adopted something as a political position at some arbitrary point in the past. If your issue is "unequal enforcement," then can you please elaborate on how a different kind of filtering would help with any of these examples? Why would a different system result in not needing to step in and reverse controversial decisions? I asked for proof of this a few posts ago and you didn't give any.


That's beside the point. If your classification system isn't good enough to use on celebrities, it's not good enough to use on regular people either - bans are just as annoying for them, even if they have less voice to complain.


I'm not sure what the point here is other than that bans are annoying? Also, I don't think it was suggested to use a classification system with no bans?


People aren't objecting to the fact that the rules misclassify people sometimes. They're objecting to the two-tier system that lets celebrities avoid bans but doesn't let regular users do so.


I think you may have missed something in the post chain? Those are both the same thing in this context.


In addition, maybe a better system would also increase the effort needed to file a report. Calling in and leaving a voicemail message in response to specific questions, for example.


>Maybe a better system would penalize reporters who are found to have reported content that does NOT violate content policies?

This might work if the response to a report wasn't so arbitrary. I've been given bans for using heated language, yet when I reported comments that were just as heated AND made direct threats at people, they were marked as not violating any rules.


> You have to remember that high-profile accounts get 10,000x as many abuse reports as a normal account - nearly all of them bogus. The normal automated moderation functions simply do not work.

You would think with the literal legion of geniuses Facebook has they would have a smarter way of handling reports than simply counting them and taking down content that receives over X reports.


They do. The linked article discusses high-risk accounts requiring fewer reports to take down a post, down to as few as one.


I was being sarcastic.


> high-profile accounts get 10,000x as many abuse reports

Has anyone considered the possibility that this is a signal from the non-elites that something is wrong? That ignoring this "mass downvote" is the essence of the structural elitism?


Popular users get lots of eyeballs on their content. If an average post will get 1 report per 10k views, a popular post with 10m views will get 1000 reports. It doesn't have to have a deeper meaning.


Well, in that case, why not make the metric reports-per-view? If you make the metric a rate, then it doesn't matter whether a post gets 10k views or 10m views; the question becomes "what % of viewers thought this was worth a report?"

The rate can still be (and probably is) higher for high-visibility accounts, of course, but in the example you gave the report rate is the same, and the problem is using a naive "10 reports = ban" metric.
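
As a sketch, the normalization being described might look like this (thresholds invented for illustration):

    # Hypothetical sketch: act on the report *rate*, not the raw report count.
    REPORT_RATE_THRESHOLD = 0.001  # e.g. 1 report per 1,000 views; made-up number
    MIN_VIEWS = 500                # don't act on tiny samples

    def needs_review(report_count: int, view_count: int) -> bool:
        if view_count < MIN_VIEWS:
            return False  # not enough data for a meaningful rate
        return report_count / view_count >= REPORT_RATE_THRESHOLD

Under this metric, the 1,000-report post with 10m views in the example above scores exactly the same as an average post getting 1 report per 10k views.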


Because brigading exists.


Brigading is likely a detectable pattern with enough data. Sure, it'd be hard to distinguish between residents of some chan brigading their enemy and somebody being the target of public shaming due to a cancel campaign, but in my eyes that's a feature.
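
For what it's worth, a crude heuristic for that pattern might look like the sketch below; the numbers are invented, and real detection would presumably use much richer signals:

    # Hypothetical sketch: treat a tight burst of reports, mostly from young
    # accounts, as likely brigading and weight those reports down.
    from datetime import datetime, timedelta

    def looks_like_brigade(report_times: list[datetime],
                           reporter_account_ages_days: list[int]) -> bool:
        if len(report_times) < 20:        # too few reports to call coordinated
            return False
        times = sorted(report_times)
        tight_burst = times[-1] - times[0] <= timedelta(minutes=30)
        young = sum(age < 30 for age in reporter_account_ages_days)
        mostly_new_accounts = young / len(reporter_account_ages_days) >= 0.8
        return tight_burst and mostly_new_accounts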


FB knows how many eyeballs there were. Their whole business is counting eyeballs. So they can easily teach whatever robot they use to take eyeball counts into account, and give more weight to "reports per view" than to "absolute number of reports".


The tail end of more than a billion users is very long.

I wouldn't be surprised if a million users understood "report" as a downvote for a post they don't like.


Even if it were, it's really not Facebook's job to rework the social structure.


Facebook is already doing that. It just happens to be making choices about how to do it that maximize its profits while ignoring the voices of most of its users.


I would rather they maximize profits than decide to actually take a stance on manipulating society. At least I know where I stand with a greedy corporation.


Facebook has already taken a stance on manipulating society.


> rather they maximize profits than decide to actually take a stance on manipulating society

That is manipulating society.


Sure, but at least they're being money-grubbing rather than pursuing hidden political agendas. I think Facebook could be much more evil if it decided to forgo advertising profit.

Think literally selling elections to foreign governments as a business model, but in the open.



