Huh, never thought I’d see XCheck in a news article. I used to work at Facebook and spotted abuse of this system by bad actors and partly fixed it. It’s still not perfect but it’s better than it used to be.
I think I might have agreed with the author of this article before working in Integrity for a few years. But with time I learned that any system that’s meant to work for millions of users will have some edge cases that need to be papered over. Especially when it’s not a system owned and operated by a handful of people. Here’s an example - as far as I know it’s not possible for Mark Zuckerberg to log in to Facebook on a new device. The system that prevents malicious log-in attempts sees so many attempts on his account that it disallows any attempt now. There are no plans to fix it for him specifically because it works reasonably well for hundreds of millions of other users whose accounts are safeguarded from being compromised. His inconvenience is an edge case.
With XCheck specifically what would happen is that some team working closely on a specific problem in integrity might find a sub population of users being wrongly persecuted by systems built by other teams located in other time zones. They would use XCheck as a means to prevent these users from being penalised by the other systems. It worked reasonably well, but there’s always room for improvement.
I can confirm some of what the article says though. The process for adding shields wasn’t policed internally very well in the past. Like I mentioned, this was being exploited by abusive accounts - if an account was able to verify its identity it would get a “Shielded-ID-Verified” tag applied to it. ID verification was considered to be a strong signal of authenticity. So teams that weren’t related to applying the tag would see the tag and assume the account was authentic. And as I investigated this more I realised no one really “owned” the tag or policed who could apply it and under what circumstances. I closed this particular loophole.
In later years the XCheck system started being actively maintained by a dedicated team that cared. They looked into problems like these and made it better.
Thanks a lot for posting these details and dealing with the critical replies.
I think that with your background and investment in improving these problems, it will be hard for you to understand the perspective many people have that Facebook is fundamentally rotten at this point. These conflicts arise from FB's core business model. It calls up a torrent of hate speech and misinformation with the right hand while trying to clumsily moderate with the left.
You can hire whole teams to prevent singed fingers or protect certain possessions, but the point of a fire is to burn. If there are no good solutions while maintaining FB's core approach and business model, then it would be better for the world if it were extinguished.
> It calls up a torrent of hate speech and misinformation with the right hand while trying to clumsily moderate with the left.
Not a Facebook employee (or supporter for that matter), but I'm curious if you consider this an issue of Facebook or of social media in general.
Not saying it's OK for FB because everyone does it, but you generally see the same dynamic of the "torrent of hate speech and misinformation" on Twitter, on Reddit, on YouTube even (personal experience: I have a family member who was radicalized by misinformation on the internet. It was all on YouTube; she had never even used Facebook).
I've noticed that people go a lot harder on Facebook than on other tech companies. I think Facebook's reputation is well deserved, but I do think that reputation should really be shared by all social media in general.
>I'm curious if you consider this an issue of Facebook or of social media in general.
I think that a lot of people who had the privilege to be on the Internet before the age of engagement-maximizing algorithms feel that the overall mood was less inflammatory and the prevalence of intentional misinformation much lower. Because these algorithms can reward people who are willing to be misleading or abusive to attract attention, there is a concern that these kinds of misbehavior are encouraged by the application of engagement-maximizing sorting to user-generated content.
Facebook participated in the arms race to develop these algorithms much as the Turks and Hungarians developed the use of single-user firearms (arquebus/matchlock) in warfare, by necessity of competition, but are no less responsible than anyone else. The fundamental challenge is whether these algorithms must be fundamentally modified or restricted to prevent their tendency to support bad actors or whether, as the platform operators seem to be insisting, the problem can be managed by post hoc content moderation without limiting the behavior of the algorithm.
Because there is a wall of trade secrets and a sense of geopolitical competition, regulators have not been able to manage this situation in a manner that yields satisfactory results. So we lament this situation where Facebook appears to be chasing profits at the expense of the political stability of modern democracies, and while Google/Reddit/Twitter may be similarly responsible, this article happens to be about Facebook.
I'm not on Twitter, but I am pretty active on Reddit. I think the manipulation is particularly, egregiously bad on Facebook.
For example, I'm not sure how I got bucketed in the "anti-masker" cohort, but over the past couple weeks I've noticed that I'm pushed a torrent of articles along the lines of "Look at this brave person serving the school board!", showing a person going on a rant about how masks are the work of the devil and how great that is.
I kind of like it, because it's given me a window of "the other side" of my social media bubble. I think the thing Facebook is particularly nefarious at is that, the way the feed works, it makes it seem like of course everyone agrees with you, except that idiot crazy uncle who you eventually have to unfriend, because all you see are seemingly "random" posts that reinforce your own viewpoints.
>I kind of like it, because it's given me a window of "the other side" of my social media bubble. I think the thing Facebook is particularly nefarious at is that, the way the feed works, it makes it seem like of course everyone agrees with you, except that idiot crazy uncle who you eventually have to unfriend, because all you see are seemingly "random" posts that reinforce your own viewpoints.
True, though I think the same thing applies to Twitter and YouTube. You see what seems like random things from a huge variety of random people that all happen to affirm your worldview.
I believe Jack Dorsey has mentioned some vague plans to try to address the bubble problem. And I believe I may have been temporarily placed into a YouTube beta test where they tried to include videos unrelated to things I've seen before (I seem to recall an explicit notice about this, and a request for me to give feedback). So, I think they're probably trying, kind of. (I don't use Facebook and never have, so I can't comment there.)
reddit, as far as I know, doesn't do user-specific recommendations in their submission sorting (besides what they choose to be default subreddits), so you have to go out of your way to form a little bubble for yourself. Which many people do, but at least they have some awareness that they're explicitly setting it up to be that way.
That said, especially as of the past 5 or so years, most sites out there have skewed pretty unilaterally right or left, and I believe all or nearly all of the default subreddits currently have a pretty homogeneous political stance. It's not as insidious as the recommendation-based per-user bubble formation other sites have, but it might have a similar effect, especially if someone mostly just looks at their front page and doesn't subscribe to more than a few subreddits besides the defaults.
While I find antivax schoolboard patchy-blond-hairdye rage moms to be boring and dull, I’m quite happy for social media platforms to work like that and show you content you want from actual people posting that content. And I don’t think either the “Facebook and Twitter shows you a pure bubble” or “Facebook intentionally elevates internet rage and arguments to mine engagement” are exactly true, partially as they’re contradictory. If you want to see posts you disagree with, which many absolutely do (a significant number of top Reddit subs are r/screenshotsofpeopleidontlikebeingidiotslmao) you see it, and if you don’t you don’t, and if you just want random unmoderated unfiltered content you can have that too.
Yeah, I generally agree with you. Youtube and Twitter have the same issues, and I think Reddit too. (Does TikTok?) I don't know if the extent is as broad. Or how possible it is to use each platform for its purpose without running into those problems.
added: I remember when Facebook switched from showing a chronological timeline to "the algorithm". That turned out to be a monumental paradigm shift.
More generally, social media may be an amplification of crazy people, but it doesn't create the crazy people (pick the side you don't agree with for a definition I guess).
Like, all of this stuff is sort of vaguely like what happened when the printing press got invented, and when literacy became more widespread. It's people, rather than technology, driving these changes.
> added: I remember when Facebook switched from showing a chronological timeline to "the algorithm". That turned out to be a monumental paradigm shift.
So, assume that 10% of your friends are responsible for 90% of the content. Is it really a better experience to see all of their content in your feed rather than some of their content plus some other people?
Let's not have a false dichotomy between the "chronological firehose" algorithm and the "maximize engagement" algorithm. The problem you cite could be addressed in many ways.
At the very least, you could be transparent about what the algorithm actually is, or even *gasp* give the user control over it.
Like, neither of us know what FB's ranking model actually optimises for. I suspect that it's engagement rates, but that's just from a user perspective.
> The problem you cite could be addressed in many ways.
You are totally right. I have seen some posts on networks that brand themselves as guardians of free speech. Well, if we took them to Facebook or Twitter proportions, the same problems would arise very soon. So maybe the problem is with the people too.
If those current big techs were to disappear, would people stop their nonsense and sometimes hateful behavior?
I am starting to believe that it is not just bad company management or moderation; we definitely have bad actors on those networks too.
There isn't a platonic ideal of social media to argue for/against though. Pandora's box has been opened and we can't unopen it. If Facebook magically disappeared tomorrow, people would still take their phone-computers out of their pockets and send messages, thoughts, pictures, and videos to their friends and family - socializing through multimedia technologies. Given that there are platforms that don't have the same problems as Facebook (but also don't have the same reach), what is actually inherent to social media, and what is because it's built on top of late-stage capitalism? Hate speech and radicalization hit an inflection point in the 1940s; hopefully we can avoid the same in the 2040s.
I recall that their approach to that was outsourcing it. I read an article about how some of the employees working for the company doing the moderation were losing their minds from being the mental sewage treatment facility of social media.
I think people that work on this feature mean well - or at least they think that they mean well. But as a result, we have a two-tier system where the peasants have one set of rules and the nobility has an entirely different one. It may have started as a hack to correct the obvious inadequacies of the moderation system, but it grew into something much more sinister and alien to the spirit of free speech, and is ripe for capture by ideologically driven partisans (which, in my opinion, has already happened). And once it did, the care that people implementing and maintaining the unjust system have for it isn't exactly a consolation for anybody who encounters it.
You have to remember that high profile accounts get 10000x the number of abuse-reports than a normal account - nearly all bogus. The normal automated moderation functions simply do not work.
Many users will simply mash the "report abuse" button if they see a politician they don't like, or a sports player for an opposing team.
If the normal rules applied identically to everyone, all high profile accounts would simply be inactive in perpetuity.
Maybe a better system would penalize reporters that are found to have reported content that does NOT violate content policies?
This is exactly the solution. If tons of people are reporting Steve Newscaster because he posted a status about how his team won, then the fix shouldn't be making him immune; those people should lose the privilege of having their voice heard.
Just send them a little message saying "Hey, you've falsely reported a bunch of posts/accounts lately, so we're restricting your ability to report content for 30 days." and if they keep doing it, make it permanent.
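To make that concrete, here's a rough sketch of what a reporter-reputation check could look like. Everything here is made up for illustration (the names, the thresholds, the strike-decay rule) - it's not how Facebook's reporting pipeline actually works.

    from dataclasses import dataclass
    from datetime import datetime, timedelta
    from typing import Optional

    # Hypothetical thresholds - not Facebook's actual values.
    FALSE_REPORT_LIMIT = 5      # strikes before a temporary restriction
    RESTRICTION_DAYS = 30

    @dataclass
    class ReporterStats:
        false_reports: int = 0
        restricted_until: Optional[datetime] = None

    def record_review_outcome(stats: ReporterStats, report_was_valid: bool, now: datetime) -> None:
        """Update a reporter's record after a review decides whether their report was valid."""
        if report_was_valid:
            # Accurate reports slowly work off earlier strikes.
            stats.false_reports = max(0, stats.false_reports - 1)
            return
        stats.false_reports += 1
        if stats.false_reports >= FALSE_REPORT_LIMIT:
            stats.restricted_until = now + timedelta(days=RESTRICTION_DAYS)

    def report_counts(stats: ReporterStats, now: datetime) -> bool:
        """Whether this user's reports should currently carry any weight."""
        return stats.restricted_until is None or now >= stats.restricted_until

Whether a restricted user's reports are loudly blocked or just silently given zero weight is exactly the kind of design choice the replies below poke holes in.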
Here's the thing: I can think of multiple ways this could be abused or controversial, in less than 30 seconds. You probably can too.
Actual hostile actors will have years to find ways to use the rules to create controversy and shut down dissenting posts.
If people learn they get flagged after N spurious reports, they'll start rationing their reports so they don't meet the threshold; they'll start making inflammatory posts that technically respect the rules to bait false reports. They'll create scandals, e.g. "Person X said awful thing Y, but when I tried to report them Facebook told me I wasn't allowed to report people anymore. Why does Facebook support Y?"
That's not to say your idea is bad. Just, you're making it sound really easy, when it's a problem Facebook has poured millions into without finding many solutions.
That might not work. In the eternal September of Facebook there may always be enough new accounts to continuously file false reports against high profile accounts.
Yeah but you can just buy warmed-up verified accounts on SEO marketplaces, often several years old and patiently curated to be blandly inoffensive and ready to be turned into whatever you need.
Still, having to do that greatly reduces the problem. The average person who wants to censor a politician they hate isn't going to spend money to buy an account.
Another potential mitigation might be to put a limit on the number of posts that a single user can flag in a day. At some point, the cost of large-scale content manipulation could be made to exceed the expected gains.
It may even be profitable for Facebook to crack down on this. Every celebrity post that gets illegitimately taken down has potential for showing ads to millions of people.
Right. Very likely they wouldn't even be capable of instructing their 60 million fake, AI-managed profiles to 'like' Zuckerboi's own sentence, for that matter...
Nobody is complaining that Facebook has exceptions for automatically suspending accounts just because people are misreporting them as abusive. The issue is, and has always been, exceptions to the content-based rules which are supposed to apply equally to everyone.
So if hundreds of people reported the content posted by a celebrity, or if a classifier misfired on the content posted by a celebrity, there shouldn’t be any protection for the content?
Having a person spend a minute looking at something that has garnered a high number of reports is reasonable, instead of just ignoring reports because it's a celebrity. It doesn't have to be automated. Edge cases can be expensive.
The content should be evaluated according to the same set of rules that apply to everyone else. If the content classification rules are properly implemented, no amount of button-mashing should result in a different outcome.
If you know how to build a bug-free system with content classification rules that are properly implemented for 2 billion people in every conceivable circumstance, I think Facebook (or any other big social media company) would pay you a lot of money to implement that. Like, an unbelievably large sum of money.
I’m not sure that this is a problem that can be solved solely through software and automation. It’s a business problem. Automation is a means to solve this problem - and the one Facebook uses to save money and operate at scale - but when it comes to making content judgments, people are often better than machines at doing it. “Report” buttons can be an input into the mechanism, but they need not be authoritative.
The problem for Facebook is that adding people is going to be expensive and hurt their bottom line. So they are likely incented to treat these disparities as collateral damage on the road to ever-increasing profits.
So are you saying you could develop the solution but Facebook won't offer you enough? I don't think it makes a difference whether the system is automated or not.
No, I’m saying Facebook could probably solve this problem with the managers they already have, but with different priorities set by top management, and at greater expense.
Sorry but that seems to be understating the issue. Are you saying it's only a matter of changing the entire company's priorities and increasing the budget by large amounts? If it were that easy then it seems any other wealthy company would have solved it by now.
And in any case, have you tried applying at Facebook? If you have the answer to their problems and are capable, then why not?
I’m not particularly interested in solving Facebook’s self-created problems. Fifteen years ago, I was interested in helping them from a tech growth side, but I can’t imagine working there now, especially in light of the company culture. My priorities would clash with those of the existing upper management all the way up to the CEO (who is completely unchecked by the Board, BTW).
As for this particular issue, I consider celebrities and politicians who stir up trouble largely as fat to be cut. Anyone who builds that much gravity and controversy tends to cause more problems than add value, and detracts from democratizing the platform.
Well I think that seems to answer it then -- if nobody who is capable wants to work there for any amount of money, then nothing will change, and the self-inflicted problems will probably get worse. And I believe the smart thing to do from a business perspective would be to severely limit the reach of posts by problem users and then charge them increasing amounts of money to promote posts to the decreasing pool of users who won't block it.
Sure, I'll do that. It's not as hard as you make it sound. Spam classifiers work fine for example. Nobody complains about special exemption lists for the Gmail spam filter because it doesn't need any. The term spam is not precisely defined but a strong social consensus exists on what the word means and that's sufficient for voting on mail streams to work. So I'd just build one of those and collect my big pile of cash.
But that's not the sort of content classification you mean, is it.
So the problem isn't technical. Rather it's an entirely self inflicted problem by Facebook. The cause is an attempt to broker a compromise between the original spirit of the site and internal activists who are bent on controlling information and manipulating opinion. The reason they make so many bad judgement calls is because they're not trying to impose clearly defined rules, but rather, ever shifting and deliberately ill-posed left wing standards like "misinformation" or "hate speech". Because these terms are defined and redefined on a near daily basis by ideological warriors and not some kind of universal social consensus, it is absolutely inevitable that the site ends up like this. A death spiral in which no amount of moderation can ever be sufficient for some of their loudest employees simply cannot be avoided once you cross the Rubicon of arbitrary vague censorship rules.
If Facebook want to reduce the number of embarrassing incidents, they can do it tomorrow. All they have to do is stop trying to label misinformation or auto-classify posts as racist/terrorism-supporting/etc. Stand up and loudly defend the morality of freedom of speech. Refocus their abuse teams on auto-generated commercial spam, like webmail companies do, and leave the rest alone. This isn't hard, they did it in the early days of the site. It may mean firing employees who don't agree, but those employees would never get tired of waging war against opinions they don't like no matter how much was spent on moderation, so it's ultimately futile trying to please them.
I'm sorry, I am having a really hard time understanding this comment. You're suggesting they just apply a email spam filter and that will fix everything? Can you explain how this would work, considering that Facebook is not an email service, and any spam filter would likely face the same issues with abuse that have already been mentioned anyway? Also I really don't follow how your last two paragraphs have anything to do with that, it seems you're saying a lot of unprovable and out-of-context politically motivated things, and then saying they should just give up on the problem entirely and have no moderation whatsoever? Isn't that the whole thing you're trying to solve though? I don't get it. Please avoid the off-topic political comments if you want to make your arguments easy to follow, and just focus on what the solution is and how to get there. Rule of thumb I've noticed: if a comment could be viewed as a political rant, it's probably not going to be very convincing.
Spam filters don't only apply to emails and Facebook already has one, of course, to stop fake account signups and posting of commercial spam. Nonetheless spam filters are a form of content classification. Yes, I'm saying that Facebook should just "give up" and stop trying to do political moderation on top of their pre-existing spam classification. This is not radical. For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine. It was during this period that Facebook grew like crazy and took over the world, so clearly real users were happy with the level and type of content classification during this time.
You ask: isn't moderation the problem they're trying to solve? This gets to the heart of the matter. What is the actual problem Facebook are facing here?
I read the WSJ article and the quotes from the internal documents. To me the articles and documents are discussing this problem: their content moderation is very unreliable (10% error rate according to Zuck) therefore they have created a massive whitelist of famous people who would otherwise get random takedowns which would expose the arbitrary and flaky nature of the system. By their own telling this is unfair, unequal, runs counter to the site's founding principles and has led to bad things like lying to their own oversight board.
It's clear from this thread that some HN posters are reading it and seeing a different problem: content moderation isn't aggressive enough and stupid decisions, like labelling a discussion of paint as racist, should just apply to everyone without exception.
I think the actual problem is better interpreted the first way. Facebook created XCheck because their content moderation is horrifically unreliable. This is not inherent to the nature of automated decision making, as Gmail spam filtering shows - it works fine, is uncontroversial and makes users happy regardless of their politics. Rather, it's inherent to the extremely vague "rules" they're trying to enforce, which aren't really rules at all but rather an attempt to guess what might inflame political activists of various kinds, mostly on the left. But most people aren't activists. If they just rolled back their content policies to what they were seven or eight years ago, the NYT set would go berserk, but most other people wouldn't care much or would actively approve. After all, they didn't care before and Facebook's own documents show that their own userbase is making a sport out of mocking their terrible moderation.
Finally, you edited your comment to add a personal attack claiming I'm making "off topic political comments". There could be nothing more on-topic, as you must know from reading the article. The XCheck system exists to stop celebrities and politicians from being randomly whacked by an out of control auto-moderator system. A major source of controversy inside Facebook was when it was realized that the system was protecting political incumbents simply because they were more famous than their political challengers, thus giving the ruling parties a literal exemption from the rules that applied to other politicians. The political nature of the system and impact of moderation on politics is an overriding theme of concern throughout the documents. You can read the article for free if you sign in - that's how I did it.
> For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine.
I’m not sure that I would consider any site that contains virally-spreading racism and falsehoods to be “fine,” but that’s just me, I guess. Even HN is full of BS, but at least it’s contained to comments and can’t be republished with the click of a button.
That's just silly. Some facts are incontrovertible. Do you really think there's any room for disagreement that the earth is round, or that water freezes at 0 degrees C?
> One man's racism is another man's plain speaking.
There's no room in this world for racism, and you shouldn't be trying to defend it here - or anywhere.
Yes, but I'm a classical tech libertarian so my definition of fine is "it made lots of users happy". If you think Facebook is filled with bad content you could just not go there, obviously.
Facebook's problem is that it has given in to the activist class that feels simply not going there is insufficient. Instead they feel a deep need to try and control people through communication and content platforms. This was historically understood to be a bad idea, exactly because the demands of such people are limitless. "Virally spreading falsehoods" is no different to the Chinese government's ban on "rumours", it's a category so vague that Facebook could spend every last cent they have on human moderators and you'd still consider the site to be filled with it. Especially because it's impossible to decide what is and isn't false: that's the root cause of this WSJ story! Their moderators keep disagreeing with each other and making embarrassing decisions, which is why they have a two-tier system where they try and protect famous people who could make a big noise from the worst of their random "falsehood detecting machine".
I'm also baffled by this comment. Wouldn't that same reasoning also apply to moderation, i.e. if you think Facebook's moderation is bad, you should just not go there? How are they "controlling people" if you acknowledge that going there is voluntary? Also what if their moderation is what is making a lot of people happy? I really don't get what your complaint is, please be more clear.
Edit:
>it's impossible to decide what is and isn't false
I can't agree with this, all organizations have to decide at one point what is false and what isn't, otherwise they have nothing to act on. It would be more convincing if you could suggest ways their moderators could resolve these disputes and determine what is actually true, because what you're suggesting sounds to me like they should just decide that everything is false all the time.
"Wouldn't that same reasoning also apply to moderation, i.e. if you think Facebook's moderation is bad, you should just not go there?"
Sure, and I don't! But this thread isn't about my problems or even your problems, it's about Facebook's problems.
"How are they controlling people if you acknowledge that going there is voluntary?"
People go there because they think they are seeing content shared by their friends, by people they follow and so on. In fact they are seeing a heavily filtered version of that designed to manipulate their own opinions. That's the whole point of the filtering: with the exception of where they label articles as 'fact checked', content that is politically unacceptable to them just vanishes and people aren't told it happens, so they can remain unaware of it. Like any system of censorship, right?
"all organizations have to decide at one point what is false and what isn't"
No, they have to decide that for a very narrow range of issues they've chosen to explicitly specialize in, and organizations frequently disagree with each other about what is or is not true despite that. That's the entire point of competition and why competitive economies work better than centrally planned economies: often it's just not clear what is or isn't true, and "trueness" may well be different depending on who is asking, so different firms just have to duke it out in the market. Sometimes an agreement emerges and competitors start copying the leader, sometimes it never does.
Facebook has a disastrously collapsing system here because they are not only trying to decide what's true in the narrow space of social network design, but trying to decide what's true for any possible statement. This is as nonsensical as GOSPLAN was; it can never work. Heck they aren't even a respected authority on the question of what's true in social networking anymore, that's why they had to buy Instagram and WhatsApp. Their competitors beat them. To try and decide what's true for the entire space of arbitrary questions about politics or science is thus arrogance on a galactic scale.
I don't understand what you mean, that's not censorship. To use an analogy, when you tell everyone you don't like Star Wars, your friends will likely decide to only talk to you about Westworld and not talk to you about what happens in The Mandalorian. Censorship would be if someone was actively trying to stop everyone from talking about Star Wars, which is not what is happening. I would advise against using that word without clear proof of it, as it's misused quite often. Also I don't understand why you were previously encouraging spam filtering but now seem to be against any kind of filtering?
>trying to decide what's true for any possible statement
Any possible statement that happens on their platform, yes. That's generally how it works when you have a company and I don't see how Facebook is doing anything out of the ordinary here -- any company can fire employees/customers for lying. If they know there are lies being posted on their site, it's perfectly reasonable to delete them. In fact I wish they would take more effort to delete more lies and falsehoods, the website is unusable when everybody is caught up in discussing a lie and doesn't want to hear the truth.
>To try and decide what's true for the entire space of arbitrary questions about politics or science is thus arrogance on a galactic scale.
I really can't agree. In general, we know what's true for politics: it's what the elections and the courts decide, at least in the US anyway. By design those are the authoritative sources. There is no authoritative source for science but with that you can verify the accuracy of any statement by testing it yourself and reproducing (or not reproducing) the results, that's the whole point. I don't see why it would be arrogant of a company to do this, as it's what a company is supposed to be doing in order for the system to work.
>For years they didn't do things like fact check posts or try to auto-classify racism, the site was fine
>Gmail spam filtering [...] works fine, is uncontroversial and makes users happy regardless of their politics
>an attempt to guess what might inflame political activists of various kinds, mostly on the left
This is more of what I meant: I find these arguments to be unconvincing, if you're going to make these claims convincingly then you need to show proof of this. Remember we're talking about two billion users here. Your post does not read like an actual attempt to solve the problem but instead an attempt to attack "activists on the left" and "the NYT set" which I don't even know who you're talking about or what that's supposed to mean in this context, I would advise against making these type of statements. It would be more convincing to mention a specific person or persons, what their claims are, and what you disagree with and why.
>Finally, you edited your comment to add a personal attack claiming I'm making "off topic political comments".
This is false, there's no personal attack, I'm saying your comments are unrelated to the argument and are not convincing to me. If you're confused about this, a personal attack would be something along the lines of "X person is a bad person because they follow Y political persuasion" which your comments could be construed as doing about Facebook's employees. So please make sure not to do that. This is again why I highly suggest against making this a political argument, usually it results in someone getting defensive, when there's no reason to do that.
If you want me to put this more frankly: I don't care about your politics or facebook's politics, that's not an interesting subject to me. It's nothing personal against you or facebook. Just talk about the problem please, otherwise I don't want to continue this discussion.
>A major source of controversy inside Facebook was when it was realized that the system was protecting political incumbents simply because they were more famous than their political challengers, thus giving the ruling parties a literal exemption from the rules that applied to other politicians. The political nature of the system and impact of moderation on politics is an overriding theme of concern throughout the documents.
This also seems not really related to the argument. Sure it affects politicians but as has been established, that's a side effect of the way the system has been designed. I don't think it makes a difference whether the side effect was intentional or not.
I really wonder if we read the same article. Large parts of it are about politics and the different ways their system affects different kinds of politics and politicians. I don't understand why you find this somehow irrelevant or compartmentalizable. If you don't care about politics then this whole article and thread about it probably just isn't for you, because the concerns are fundamentally political to their core. In fact the criticism of XCheck is an egalitarian one, that the rules for the famous/ruling classes should be the same as for the plebs.
To flesh this out, look at the examples in the article where moderation decisions split or were reversed - that's the problem XCheck is designed to solve. Most are about politics:
- Saying "white paint colors are the worst" was classed as racism. Trying to define and then scrub all racist speech from the world is a left wing policy.
- A journalist who (we are told) was criticizing Osama bin Laden was classed automatically as supporting him, and then human moderators agreed. Scrubbing this sort of thing is (in the USA) historically either a right wing or bipartisan consensus. We don't know why the moderators agreed because the comments themselves are not presented to us. This was later overridden by higher ups at Facebook.
- Split decisions over Trump's post saying "when the looting starts the shooting starts" that got escalated to Zuckerberg himself.
- "The program included most government officials but didn't include all public candidates for office".
etc. If you try to ignore politics, the entire debate becomes meaningless, because indeed, you would not even perceive any problem with unequal enforcement in the first place.
>Large parts of it are about politics and the different ways their system affects different kinds of politics and politicians
This again would be a side effect. Obviously it's not irrelevant but I have seen no reason to prioritize that over anything else.
>because the concerns are fundamentally political to their core
If by political you mean "needs to be solved with politics" then I can't necessarily agree; this is also a technical problem. To put it another way, it wouldn't just magically get fixed if you elect some new congresspeople or replace Zuckerberg; the new people still have to take additional technical steps to fix the problem. If the solution already exists and is being ignored then I would agree, but I have seen no solution offered in this thread. Instead of trying to present the solution, which I have asked you to do repeatedly, it seems you're still trying to make this a political argument, which I wish you wouldn't do. I don't know how many times I need to say that it's not convincing to me to come at it from that angle. I'm sorry if that seems rude but it's the truth of the matter. If your end goal is to campaign for me to vote for somebody, please stop, I'm not interested to hear it (again nothing personal).
On the rest of your comment: I honestly have no idea what your examples are supposed to mean, why you are making these assumptions or why the political motives or policies of some other parties matters. There will always be some users that take issue with any kind of filtering and it makes no sense to me to prioritize ones who happen to have adopted something as a political position at some arbitrary point in the past. If your issue is "unequal enforcement" then can you please elaborate on how a different kind of filtering would help with any of these examples? Why would a different system result in not needing to step in and reverse controversial decisions? I asked for proof of this a few posts ago and you didn't give any.
That's beside the point. If your classification system isn't good enough to use on celebrities, it's not good enough to use on regular people either - bans are just as annoying for them, even if they have less voice to complain.
I'm not sure what the point here is other than that bans are annoying? Also, I don't think it was suggested to use a classification system with no bans?
People aren't objecting to the fact that the rules misclassify people sometimes. They're objecting to the two-tier system that lets celebrities avoid bans but doesn't let regular users do so.
In addition, maybe a better system would also increase the effort needed to file a report. Calling in and leaving a voicemail message in response to specific questions, for example.
>Maybe a better system would penalize reporters that are found to have reported content that does NOT violate content policies?
This might work if the response to a report wasn't so arbitrary. I've been given bans for using heated language, yet had comments that were just as heated AND making direct threats at people marked as not violating any rules when I made the report.
> You have to remember that high profile accounts get 10000x the number of abuse-reports than a normal account - nearly all bogus. The normal automated moderation functions simply do not work.
You would think with the literal legion of geniuses Facebook has they would have a smarter way of handling reports than simply counting them and taking down content that receives over X reports.
> high profile accounts get 10000x the number of abuse-reports
Has anyone considered the possibility that this is a signal from the non-elites that something is wrong? That ignoring this "mass downvote" is the essence of the structural elitism?
Popular users get lots of eyeballs on their content. If an average post will get 1 report per 10k views, a popular post with 10m views will get 1000 reports. It doesn't have to have a deeper meaning.
well, in that case, why not make the metric reports-per-view? if you make the metric a rate then it doesn't matter whether it gets 10k views or 10m views, the question becomes "what % of viewers thought this was worth a report".
The rate can still be (and probably is) higher for high-visibility accounts of course but in the example you gave the rate of reports is the same and the problem is using a naive "10 reports = ban" metric.
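For what it's worth, the rate version is trivial to express; the hard parts are choosing the threshold and defining a "view". A hypothetical sketch, with made-up numbers (nothing here is Facebook's actual logic):

    def should_escalate(report_count: int, view_count: int,
                        rate_threshold: float = 0.001,  # 0.1% of viewers reported it (made up)
                        min_reports: int = 20) -> bool:
        """Escalate to human review based on reports per view, not a raw report count."""
        if view_count == 0 or report_count < min_reports:
            return False
        return report_count / view_count >= rate_threshold

    # From the example above: 1 report per 10k views is the same rate whether
    # the post has 10k views or 10 million, so the popular post isn't punished
    # just for being popular.
    assert should_escalate(1000, 10_000_000) is False          # 0.01%, below the made-up threshold
    assert should_escalate(1000, 10_000_000, 0.0001) is True   # same rate, stricter threshold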
Brigading is likely a detectable pattern with enough data. Sure, it'd be hard to distinguish between residents of some chan brigading their enemy and somebody being a target of public shaming due to cancel campaign, but in my eyes it's a feature.
FB knows how many eyeballs there were. Their whole business is counting eyeballs. So they can easily teach whatever robot they use to take eyeball counts into account, and give more weight to "report per view" than to "absolute number of reports".
Facebook is already doing that. It just happens to be making choices about how to do it that maximize its profits while ignoring the voices of most of its users.
I would rather they maximize profits than decide to actually take a stance in manipulating society. At least I know where I stand with a greedy corporation.
Sure, but at least they're money-grubbing rather than having hidden political agendas. I think Facebook could be much more evil if it decided to forego advertising profit.
Think literally selling elections to foreign governments as a business model, but in the open.
Let me try to explain it again. Suppose an integrity system has a true positive rate of 99.99%. That would be good enough to deploy right? Except that when applied to millions of accounts, 0.01% is still a massive number of people. This is even worse when those people are unusual in some way. For example they might open conversations with hundreds of strangers a day for good reasons. But their behaviour is so similar to those of abusive accounts that they get penalised.
You might say that maybe 99.99% isn’t good enough and the engineers should try for more 9s. Maybe it’s possible but I don’t know how. If you have ideas on this, please share.
Your concerns about different treatment for some people is valid. But again, their experience is different. For example, if an account or content is reported by hundreds of people it’ll be taken down. After all, there’s no reason for accounts in good standing to lie right? Except celebrities often are at the receiving end of such campaigns. There needs to be exceptions so such rules aren’t exploited in such a manner.
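To put rough numbers on the "more 9s" point, here's the back-of-the-envelope arithmetic (the one-decision-per-account-per-day figure is made up purely to illustrate the scale):

    # Scale math for "99.99% accurate" automated enforcement - all inputs illustrative.
    accounts = 2_000_000_000          # order of magnitude of Facebook accounts
    decisions_per_account_per_day = 1 # pretend each account triggers one automated decision a day
    error_rate = 0.0001               # the 0.01% that 99.99% leaves over

    wrong_calls_per_day = accounts * decisions_per_account_per_day * error_rate
    print(f"{wrong_calls_per_day:,.0f} wrong calls per day")   # 200,000

    # Each extra nine helps, but the absolute numbers stay large:
    for nines in (4, 5, 6, 7):
        print(f"{nines} nines -> {accounts * 10 ** -nines:,.0f} affected accounts per day")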
> You might say that maybe 99.99% isn’t good enough and the engineers should try for more 9s. Maybe it’s possible but I don’t know how. If you have ideas on this, please share.
Hire a large human moderation team. Facebook can afford to. They choose not to.
You're not the only person who has suggested this. Let's think about that for a second. Let's say it takes 6 minutes for a moderation team to review an older account. There's 2 billion accounts, so it'd be good to review all of those. It would take about 200 million hours. Presumably you'd want to re-review positive cases so no moderator has too much power. Additional time. Even if Facebook literally doubled the number of employees, and hired 50000 people overnight, they would still take 2 years to complete the review. But in that time it's possible that previously benign accounts turn abusive.
And then think about the 20 million odd new accounts that are created every day. How long before each of those are reviewed? And what signals will you use to review them? These are mostly empty accounts, so there's not much to go on.
And that's just the problem of aged fake accounts. How about bullying, harassment, nudity, terrorism, CEI and all the other problems?
It's interesting talking to people who say "oh that problem is easy to solve, just do X" without realising that the problem is more complicated than it looks.
> It's interesting talking to people who say "oh that problem is easy to solve, just do X" without realising that the problem is more complicated than it looks.
At no point did I state that the solution was easy. My response was to your claim that you do not know of any possible solution, not an easy solution; to wit, you invited input:
> Maybe it’s possible but I don’t know how. If you have ideas on this, please share.
I also don't follow your examples. Why are you tasking this hypothetical team to review all two billion accounts? The main issue at hand seems to be lack of sufficient staffing to review reported accounts. Why not start there?
Facebook can afford to hire 100,000 moderators. That would let them review every account every year. 100,000 * 2000 hours a year is 200 million hours. They don't actually need to do that, so they can have multiple moderators review some accounts instead.
That is (approximately) accounting for two weeks of annual leave, but not budgeting for illness or other factors. I normally go with "roughly 200 work-days" (socialist Europe here, so I start from a baseline of five weeks of annual leave), which gives 1600 worked hours per person, taking it to 160 million hours. Still, plenty.
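Putting the two comments' assumptions side by side (6 minutes per review, 2 billion accounts, 100,000 moderators, a 2,000- or 1,600-hour work year):

    # Staffing arithmetic using the figures from the comments above.
    accounts = 2_000_000_000
    minutes_per_review = 6
    review_hours_needed = accounts * minutes_per_review / 60   # 200,000,000 hours
    moderators = 100_000

    for hours_per_moderator_per_year in (2000, 1600):
        years = review_hours_needed / (moderators * hours_per_moderator_per_year)
        print(f"{moderators:,} moderators at {hours_per_moderator_per_year} h/yr "
              f"-> {years:.2f} years to review every account once")
    # -> 1.00 years at 2,000 h/yr, 1.25 years at 1,600 h/yr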
They didn’t say that: none of their comments seem to be defending Facebook. They are giving their opinion that human moderation is not a simple solution. I super appreciate nindalf’s comments here. It is a shame that an ex-developer who knows the problem space and is clearly explaining some of the issues is getting flamed by association.
If human moderation won't work, and whatever they're doing now is an unqualified disaster, then what is the solution?
Oil companies tell us that oil spills and pollution and ruined ecologies and the burning planet are just part of using oil. Sad face.
They're doing their very best to minimize the negatives. They hire the very best lobbyists and memory hole as much as possible and donate to some zoos and greenwash and "recycle".
What more could they possibly do?
Really, what do you expect? Stop using oil?! Please. That's crazy talk.
More seriously, I'm not saying that Facebook is an unmitigated evil, that their biz is the moral equivalent of trafficking (humans, arms, drugs, toxic waste), or that humanity would be better if it had never existed.
I'm only asking why they continue to create a mess that they are incapable of cleaning up, by their own admission.
--
I understand these questions are for The Zuck, The Jack, and their cast of enablers (profiteers) like Thiel, Andreessen, etc.
If you're not morally comparing Facebook to oil companies & toxic polluters, why do you constantly analogize them to Facebook and describe their memory holes & ruined ecologies & the burning planet as if that's comparable to what Facebook is doing? Where does "unqualified disaster" even come from?
Toxic waste companies engaged in the conduct you described and had the impact you described and they were condemned after we found solid evidence that they were doing so. Do you have any argument or evidence whatsoever that Facebook is behaving similarly? If you want to argue the moral equivalency, argue it. Don't spin an evocative narrative about the disasters of the oil industry in the same breath as Facebook's moderation policies then disclaim it "I'm not saying that...".
> After all, there's no reason for accounts in good standing to lie, right?
No reason is different from no good reason.
Also, 99.99% only seems like a high number when we think anecdotally, not statistically. For anything at Facebook's scale, the number of nines should be increased! Because .01% of the actions of three billion people on a single day gives you a city roughly the size of Tampa, Florida (~300,000 people).
Given Facebook's financial resources, it should be no problem to increase the size of the team working on the tool. Like any engineering problem, the problem can be broken down into smaller parts, the edge cases can be caught and/or anticipated, creative solutions can be applied, etc.
If history teaches us anything, it's that all public facing systems will be exploited. Those who design them should anticipate this.
(Thanks, by the way, for posting about your perspective. It looks very different to those of us on the outside.)
> Given Facebook's financial resources, it should be no problem to increase the size of the team working on the tool. Like any engineering problem, the problem can be broken down into smaller parts
And I worked for 4 years on one such small part (internally called UFAC), trying to help potential false positives of such a system.
As for a classifier with a true positive rate of 99.99999%, I don't know much but I don't think it's possible. But if there's someone out there who might know, then they should say so.
> if an account or content is reported by hundreds of people it’ll be taken down.
This is an extremely simplistic view which is extremely prone to obvious abuse; I really hope FB does much better than this. With the obsessive surveillance that FB collects on pretty much everything they can lay their paws on and then some, they could do much better than just counting abuse reports. Much, much better if they really had some smart people - like the behavioral science PhDs they surely can afford and now surely use to figure out how to better sell ads - work on it for a couple of years; I'm sure they could arrive at something better than "if it gets more than N reports, shut it down". If they don't, it means they don't care enough.
> After all, there’s no reason for accounts in good standing to lie right?
Of course there is. Politicians lie all the time. MSM lie all the time. Why wouldn't regular people lie all the time? Of course they do, for a myriad of reasons. And since there's no punishment for lying, there's zero incentive for them not to.
> There needs to be exceptions so such rules aren't exploited in such a manner
No, there need to be better rules. If your rules suck, and you have to make exceptions for the nobles, they still suck for the peasants. Making it easy for friends of Zuckerberg is not fixing it, it's just sweeping it under the carpet and leaving it to rot.
It's not a problem that there's a different technical solution to high-profile users. There's no problem with FB hiring more (or more skilled) moderators for higher-profile users.
The problem is when the rules are not applied evenly, especially when high profile users with greater audience can abuse those rules.
Meh, we already live in a world with one set of rules for the peasants and another for the nobility. Seems like just another area where Facebook reflects the real world.
I agree with the perspective of peasants and nobility being at play...
Remember back in the day when a friend got a new game console and invited you over to check it out with the idea that you'd get to try it, but they only had one controller? Really all they wanted to do was have you come and watch them play, and you sat there until you got bored of never having a chance to engage with it. That is the modern social media experience.
They play with your ability to be visible even to people that follow you for updates on your posts. The only way for non-elites to be seen (to be deemed worthy of ranking) is to pay for ads, which appear as lower quality "promoted" content.
The model of social media started with everyone on the same playing field, but there are so many dimensions that can be manipulated to keep users thinking that it is functional while these sites change to serving the purpose of generating revenue for partners and paying interests underneath a façade of being fair communities. If you speak out against them, they censor you as well, all behind the scenes.
It's simply better to go back to creating independent sites, and then to hope you get ranked fairly on Google, and that people bookmark you... We become powerless when we allow corporate control of our communication, because governments are neither aware of nor vigilant enough about the impacts to regulate social sites until it's far too late, and because in that world profit is king over simply doing what is fair and positive. The business model feeds misinformation, chaos, disharmony, and conflict, just like reality TV does now. Why? Because it keeps people glued to their screens.
Even these platforms are terrified of instituting positive changes out of fear of losing their market share and user base... Overnight, Twitter, Facebook, IG, or any of these sites can lose their user base and reach just like Clubhouse and Vine... That has to be said as well.
A big problem is that many accounts that post sensationalized (violent, graphic, sexual) content are really run by people that are building up accounts with followers to sell later on, as a symptom of heavily limited organic account growth (the ability to get followers) on these platforms... People build accounts by posting wild content and then sell them on the black market to others, who start out looking like popular individuals because the accounts come with followers already included. Being successful on social media is no longer about having quality content; it's about how much you pay and how professional your image is... No wonder classism has taken hold of it all.
There are still plenty of ways of maintaining fairness on any of these communities/platforms, and company leadership needs to go back and review the original promises they made to everyone in order to build their current user base (promises that they've all now totally broken) and fix those issues as the primary basis for resetting their flaws and oversight.
What makes you think that this is a unique case? What about people that suddenly come to fame, like viral video subjects?
A simple solution is to disallow logging in from new devices, with the attempt being silently dropped so you are not bothered, unless you do some magic like generating a one-time key to complete the procedure on the new device.
I could think of a lot of people that would find it useful.
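The "generate a one-time key first" idea can be sketched as a pre-minted device-approval code: you create a code from an already-trusted device, and a login attempt from a new device is silently dropped unless it presents one. This is only an illustration of the flow - a hypothetical design, not an existing Facebook (or any other site's) feature.

    import hashlib
    import secrets

    # Server-side store of unused approval-code hashes, per account (hypothetical).
    _approval_hashes = set()

    def mint_approval_code() -> str:
        """Called from an already-trusted device; the user keeps the code for later."""
        code = secrets.token_urlsafe(16)
        _approval_hashes.add(hashlib.sha256(code.encode()).hexdigest())
        return code

    def login_from_new_device(password_ok: bool, approval_code=None) -> bool:
        """New-device logins are silently dropped unless a valid one-time code is supplied."""
        if not password_ok or approval_code is None:
            return False                       # drop silently - the attacker learns nothing
        digest = hashlib.sha256(approval_code.encode()).hexdigest()
        if digest in _approval_hashes:
            _approval_hashes.discard(digest)   # single use
            return True
        return False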
Or allow setting up a 2FA token (other than mobile) correctly.
Instead, what FB does is make it impossible to secure your account, because they insist that, whatever you want, you should always be able to recover your password with your phone number.
Years ago when I was still using it (I had reason) I tried to secure it with my Yubico. Unfortunately, it wasn't possible to configure FB to not allow you to log in on a new device without the key.
I understand how the discussion probably went: "Let's make it so that we can score some marketing points, but let's not really make it a requirement, because we will be flooded with requests from people who do not understand they will never be able to log in if they lose the token."
But that's exactly what I want. I have a small fleet of these so it is not possible for me to lose them all, but unfortunately most sites that purport to allow 2FA can't do it well, because they either don't allow configuring multiple tokens or, if they do, they don't let you really lock your account so that it is not possible to log in the next time without the token.
> unfortunately most sites that purport to allow 2FA can't do it well, because they either don't allow configuring multiple tokens or, if they do, they don't let you really lock your account so that it is not possible to log in the next time without the token.
This is a great point. AFAIK, Google is the only service which allows you to set mandatory U2F login requirements. Does any other service offer this functionality?
In many enterprise apps you can force it, or depend on authentication from SSO providers like Google or Azure, which have this feature.
Consumer apps try hard not to do hard security unless they are forced to, usually for cost reasons.
Security measures like these create a ton of administrative tickets - check any sysadmin ticket queue; a good chunk is password reset/recovery. In enterprise apps the org's sysadmins are paid to handle this.
In consumer apps, the app company has to manage it, and it's a lot harder to verify the identity of a random user than of a company employee, making it harder to do this kind of support.
A good jarring example is AWS: amazon.com and AWS did (do?) share an authentication stack, so some basic 2FA functionality like backup codes is not there for AWS.
Google is better at this because they have for a long time also focused on their SSO service as a product.
Many companies use Google Workspace/G Suite SSO because third-party apps support Google login out of the box for free[1], may charge for AzureAD/SAML2, and likely don't support others at all without customization costs.
[1] It is standard because SMB/mid-market companies are more likely to use Google for productivity than Azure/O365, as it is easier to manage, albeit with fewer features. Third-party apps don't want to expend support time on smaller customers if they can avoid it.
People do bring this up on HN a lot. For WebAuthn / U2F the only actual example anybody has is AWS, so that's not an industry problem - that's specifically an AWS problem. Unless you have an actual example that isn't AWS?
As for TOTP, it's a shared secret, so just clone it. If a site let you register multiple secrets, that would only reduce your overall security, because more random guesses would be accepted. Also, get WebAuthn instead.
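A rough back-of-the-envelope sketch of that last point, assuming standard 6-digit codes and a ±1 time-step acceptance window (both assumptions for illustration, not any particular vendor's implementation):

    # Why accepting codes from several independent TOTP secrets weakens
    # brute-force resistance. 6-digit codes and a +/-1 step window are assumed.
    CODE_SPACE = 10**6      # possible 6-digit codes
    WINDOW_STEPS = 3        # current time step plus one step either side

    def guess_success_probability(num_secrets: int) -> float:
        """Chance that a single random code is accepted for at least one secret."""
        accepted_per_secret = WINDOW_STEPS / CODE_SPACE
        # Each extra secret adds roughly another window's worth of valid codes.
        return 1 - (1 - accepted_per_secret) ** num_secrets

    for k in (1, 2, 5):
        print(f"{k} secret(s): ~{guess_success_probability(k):.4%} per random guess")

The absolute numbers stay tiny, but the attacker's odds scale roughly linearly with the number of registered secrets, which is the point above; a WebAuthn credential sidesteps the shared-secret issue entirely.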
Most of your response treats the service and its flaws as an engineering problem, whereas the ramifications in the real world aren't something Facebook gets to absolve itself from. They need to own the problem completely. If they can't solve the issue through engineering, it is their responsibility to hire hundreds of thousands of moderators.
You haven’t really touched on the main problem discussed in the article, which is that to Facebook, there are special users - mainly celebrities and politicians - who get to play by different rules than the rest of us. Social media was supposed to help level the playing field of society, not exacerbate its inequalities.
I did touch on that problem. Like I pointed out Zuckerberg can’t log in on new devices anymore. That’s because of the thousands of attempts per second to log into his account. Those attempts happen because he’s a celebrity. His experience is objectively different because of who he is.
It’s the same with Neymar. How many times do you think his profile is reported for any number of violations by people who don’t like him? If an ordinary person’s account got 100 reports a minute, it would be taken down. Neymar’s won’t be.
I don’t know how every Integrity system could be modified to make an exception for any of these classes of accounts or how to codify it in a way that would seem “fair”. If you have an idea for a better way, you should share it.
> It’s the same with Neymar. How many times do you think his profile is reported for any number of violations by people who don’t like him? If an ordinary person’s account got 100 reports a minute, it would be taken down. Neymar’s won’t be.
More to the point, after a human reviewed Neymar's conduct and it clearly violated Facebook policies about posting revenge porn, his account still wasn't taken down. And that's not a technical issue of the false positive rate -- it is a double standard.
From the article: 'A December memo from another Facebook data scientist was blunter: “Facebook routinely makes exceptions for powerful actors.”'
(Edited: changed verb tense to make it clear that this already happened and wasn't a hypothetical.)
This may be a naive question, but isn't that a problem with the content reporting system itself? That it requires blanket exceptions?
Popular users are going to have more false-positive reports than others, but when those reports deviate from the norm on individual pieces of content (say a nude photo) then the system should still be able to pick it up. It's an exercise in feature extraction.
The login blanket exception (for the CEO of the company) serves a different use case and purpose than content control - one that a blanket exception can solve efficiently.
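Purely as an illustration of that "feature extraction" point (this is not Facebook's pipeline; every weight, threshold, and name below is invented): a report-handling heuristic can normalize report volume by audience size and lean on content-level signals, so a brigaded celebrity post and a genuinely violating post from a small account get routed differently.

    # Toy routing heuristic; all weights, thresholds, and names are made up
    # for illustration and are not Facebook's actual system.
    def review_priority(report_count: int,
                        audience_size: int,
                        content_risk_score: float) -> float:
        """Blend a normalized report rate with a per-content risk score.

        content_risk_score is assumed to come from a classifier run on the
        reported content itself (0.0 = benign, 1.0 = near-certain violation).
        """
        # Reports per 10k viewers, so sheer audience size (and brigading)
        # doesn't dominate the signal.
        normalized_reports = min(report_count / max(audience_size / 10_000, 1), 1.0)
        return 0.3 * normalized_reports + 0.7 * content_risk_score

    def route(report_count: int, audience_size: int, content_risk_score: float) -> str:
        score = review_priority(report_count, audience_size, content_risk_score)
        if score > 0.8:
            return "remove_pending_human_review"
        if score > 0.4:
            return "queue_for_human_review"
        return "no_action"

    # Brigaded celebrity post with benign content -> "no_action"
    print(route(report_count=6000, audience_size=50_000_000, content_risk_score=0.05))
    # Small account, few reports, clearly violating content -> "remove_pending_human_review"
    print(route(report_count=3, audience_size=400, content_risk_score=0.95))

The specific numbers don't matter; the point is that per-content signals, not raw report counts, carry most of the weight, which removes the need for account-level blanket exemptions in the common case.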
>If you have an idea for a better way, you should share it.
I'm certain that the other poster won't present some kind of practical outline because part-way through the brainstorming they will realize they are designing a "Social Credit Score" and become too frustrated to continue.
The solution is simple. Facebook needs at least 100,000 human moderators. Clearly their engineering team isn't able to solve the problems facebook creates.
I concede that some accounts probably deserve that specific type of protection. However, it doesn’t explain the other kinds of protections these people have, including exemption from content-based rules. Those are the issues of real concern.
The fact that the content couldn’t be taken down was a bug. I feel like you, the author and most critics of Big Tech discount the possibility of bugs existing.
I’m sure you’ve read the old fable of “The Boy Who Cried Wolf.” Facebook has made voluntary decisions at the highest levels, reported over the course of the past decade, to shield certain people from its content rules that apply to the general public. So if it was a bug this time (and I have no reason to believe that it wasn’t), I’m sure you can understand people’s skepticism about it.
Moreover, I assume this bug was reported internally, probably pretty quickly. How long did it take to get fixed? If the fix wasn’t prioritized and corrected within, say, a day (along with a regression test to ensure it never happens again!), then that would be pretty damning of the company’s culture and priorities as well.
But Facebook comes across as unwilling and uninterested in fixing the bugs in cases like these. Sometimes the bugs themselves even seem to stem from fundamentally misstructuring the problem.
Maybe normal users also shouldn't be able to have their accounts destroyed by a hundred spurious reports.
Currently if you register a Facebook account to manage a business and post ads, it will be banned off and on for weeks, and the recourse suggested to me by a director in the Integrity org was "try posting on facebook like a normal person."
> If an ordinary person’s account got 100 reports a minute, it would be taken down. Neymar’s won’t be.
Why does this disparity exist? By your own account, the number of times this happens for normal users is significantly smaller than for high-profile users, so why is Facebook incapable of having sufficient staffing to deal with this case for all users? _This_ is what has a lot of people annoyed.
> Those attempts happen because he’s a celebrity. His experience is objectively different because of who he is.
Then the rules should be different, like Twitter giving a blue check mark. If there are accounts that need to be treated differently, then it should be clear why and how the rules are different. Fixing problems with tech (like not allowing Zuckerberg to log in) should be the exception.
Twitter caught hell for letting Trump break their rules because they pretended the rules were the same for everyone.
I'm not sure what your point really is. You keep harping on Zuckerberg not being able to log in on new devices but dismiss the entirety of the report and the internal review as "Yeah, well, at scale there's nothing you can do." If that's the case, shut it off. We wouldn't accept that from a manufacturer of a physical product.
"We wouldn't accept that from a manufacturer of a physical product." - why not? I would think that any manufacturer of a physical product is clearly entitled to provide better service or better product versions or better legal conditions or better pricing for some of their customers if they want. And they definitely do so for various VIPs - some people pay to wear Nike shoes, and some people get paid millions to wear Nike shoes, and we do consider that a manufacturer of a physical product has the right to do that.
There is no trade principle requiring all customers to be treated equally (the sole exception being that you cannot deny service to a specific list of protected groups), and there is a general principle that people can trade with others as they please and offer different conditions for arbitrary reasons.
> We wouldn't accept that from a manufacturer of a physical product.
Not sure what the correct analog for physical product is, but we accept everything up to and including catastrophic failure resulting in deaths from physical products under the right circumstances, so you're going to have to be much more specific.
> Social media was supposed to help level the playing field of society,
Why do you think this? I mean it isn't like there was a plebiscite on what "social media was supposed to help".
As with most things of consequence in our world, social media is more of an emergent phenomenon than any sort of planned effort. We have a legislative system that is there to provide a mechanism to adapt our legal system as needed.
This expectation was indeed common in tech circles in the 00’s “web 2.0” days, and it didn’t seem ludicrous. Removing government and corporate gatekeepers (like newspapers and TV networks) meant that disenfranchised voices could finally be heard - anyone could have a blog or whatever and reach an audience. It wasn’t crazy to think that if only everyone in the world could finally talk to each other, we could work out differences and make friendships across political and geographical boundaries.
That was the hypothesis. The worldwide experiment that is still running seems to have falsified it.
> This expectation was indeed common in tech circles in the 00’s “web 2.0” days, and it didn’t seem ludicrous.
I heard people voice similar expectations about roughly equally ludicrous categories of online services then, but never social media as such. Most of them were ludicrous for reasons that were obvious at the time, and apply equally to social media:
(1) In the short term, the digital divide was acute, and any benefits they brought would naturally increase inequality across that divide.
(2) In the longer term, where one might presume the digital divide would erode, they overlooked that the services involved were still in their venture-subsidized, artificially underpriced and undermonetized phase. Any plausible business model would promote inequality: by narrowing reach to an elite, by creating sharply tiered service, or (most commonly) by splitting people into a broad class of users whose attention is packaged up for sale and the moneyed interests buying their eyeballs. Any prediction of resolving inequality assumed that venture subsidies and monopoly-building dumping would be converted into a permanent state out of charity.
We may then just have different scopes for what counts as "Social Media" and what are other "categories of online services". The wikipedia definition seems good: https://en.wikipedia.org/wiki/Social_media
> Most of them were ludicrous for reasons that were obvious at the time, and apply equally to social media:
It seems post facto to conclude that they were obviously ludicrous at the time. Perhaps with hindsight it seems as ludicrous as a belief in alchemy in the middle ages, but it wasn't obvious that you couldn't turn lead into gold before we had chemistry. In the Web 2.0 era lots of smart people thought social media could make the world better. I raised money and founded a "social media" startup expressly thinking it would empower people, and many of my peers in that world were equally earnest.
Facebook’s own mission statement is “to give people the power to share and to make the world more connected” (emphasis mine). And if you were there when Facebook was founded, as I was, before celebrities and politicians were accommodated by them, you would have felt very empowered indeed.
It's hard to imagine now, but at the end of the 20th century, if you weren't employed by a Newspaper or a TV network, about the only way you could let the world know your views was to write a letter to the editor of your local paper. And even if you could somehow make a friend in a far-away land, phone calls and postal mail were expensive. (I'm still sad I lost touch with students I met in Japan, Taiwan, and Indonesia in the 80's)
The internet, and social media in particular, changed all that. First only nerds could make web pages. Some, like me, published our thoughts there using raw HTML. Then Movable Type allowed anyone with an FTP site to publish. Then Blogger allowed anyone with a browser to publish. Then Flickr and Facebook and Twitter and all the rest. It was an exciting time.
I hope this helps explain why we thought this would "level the playing field." What we read and what we watched was no longer dictated by TV network bosses or editorial boards. Governments could no longer demonize people in other lands, because we were all free to (for example) be Facebook friends with those people. At least that was the theory. As I mentioned in another comment, it sure hasn't worked out that way.
> It's hard to imagine now, but at the end of the 20th century, if you weren't employed by a Newspaper or a TV network, about the only way you could let the world know your views was to write a letter to the editor of your local paper.
When you compare the profits of newspapers and TV stations to FB, it becomes obvious why so many media mogul billionaires are pushing for harsh regulations for Social Media...
> As I mentioned in another comment, it sure hasn't worked out that way.
You’re thinking of HotOrNot. A “face book” has been a staple of US universities for decades to help new students identify each other. They were literally printed booklets with people’s faces in it.
I actually disagree that Facebook has consistently been about ranking from the start - I think for a while in the middle it was legitimately a social media platform. But it most certainly started out that way. If you dig a bit into its history, the primordial version of Facebook was essentially little more than HotOrNot.
I was on Facebook from almost the first day, when it was only open to college students (and the domain name was thefacebook.com). It was definitely not as you describe. There was no ranking of faces.
You can see for yourself by searching for “2005 Facebook screenshots”.
The real underlying issue is that high profile accounts are targeted by groups of users who "report abuse" simply because they don't like that sports team/politician/etc...
High profile accounts cannot work under identical rules or they'd simply all be suspended all the time.
> Huh, never thought I’d see XCheck in a news article.
Is everyone at Facebook this naive? You didn't think a system that creates a secret tier of VIP accounts where the rules (and laws) don't apply while publicly claiming the opposite would end up ... in the news?!?!
This system also made it impossible for me to ever log in again. It had been a few years since I used FB but some friends tagged me at an event, so I figured what the heck.
I was presented with a system I had never configured, which asked me to contact people I don't know to get them to vouch for me. At the same time my FB profile was blackholed, and my wife and long time actual friends can't even see that I exist anymore. Just some person that astroturfed my name with no content (I have a globally unique name).
So I no longer exist from FB's perspective, which made both my decision to not use FB and my decision to never use any FB products like Oculus much easier.
One of my favorite things about HN is seeing people come out of the woodwork to raise their hand and say that they worked on a system and give their insight. Thanks for sharing this perspective.
My bet is that he emails someone on his staff one day and says “I want the new iPhone” and the next day he has one and he can do whatever he needs to on it.
This whole idea of “Zuck can’t log in from a new device” is laughable. That’s like saying Biden doesn’t have the keys to Air Force One. He doesn’t need them.
Please consider the power dynamics here. The HN community owes no favors to FB or former FB staff. When a former FB staff member logs on to post the kind of response they posted, community members may stand up and call them out on their trolling. I accept the consequences of my behavior (as my comments arguably violate HN guidelines), and am open to learning more about FB's behavior from good-faith commenters on both sides.
I think it’s valid because it’s an example of a system not working for a small minority while still working well for others. And more pertinently, there’s no plans to fix it for him specifically just because he’s the CEO. It’s better to spend time making the account compromise system better for the vast majority of users instead.
The problem is not account compromise. Nobody is complaining about the inability to compromise Zuckerberg's account, or about it being too hard to register a new device, or anything like that. The issue in question is the two-tier (or maybe multi-tier) system of rules that secretly exists inside Facebook while the public materials falsely claim all users are governed by the same rules.
Or perhaps the underlying business/product model is inherently flawed in a way that's bad for society, all patches have proven woefully insufficient in mitigating that, and Facebook have been intentionally concealing this.
> I think it’s valid because it’s an example of a system not working for a small minority while still working well for others.
That's not what the article mentioned, which is why people are saying you don't seem to understand. The article mentioned there is a system, which does work for a small minority of people, not that it doesn't work for them. It's as if you unintentionally have blinders on and can't see the issue.
It is tone-deaf. You seem to be framing it as an inconvenience or problem that Zuckerberg has. It's not an inconvenience, it's a personal bodyguard that he alone has. If he changes devices, he has a team to allow him to log in to it. How is that relevant?
The rest of us don't get that. If someone hacks our account and changes our password, the majority of us have little hope of really getting FB's attention to help us fix it.
The point about popular accounts getting 10000x the number of abuse reports is much more relevant.
The point isn’t that he gets special protection for logging in on new devices; it’s that he receives thousands of fake login attempts a day. It’s basically the same point: 10,000x the number of abuse reports, or 10,000x the number of login attempts.
No it is not.
What if Trump or Biden could not log in to Facebook due to too many failed attempts? Would they scream that Facebook is blocking them and that FB MUST unblock them? Should FB just tell them, "Tough, you can't use Facebook, deal with it"? What if it were Ronaldo or Brad Pitt?
A company will make exemptions for very famous people.
In every case, when something grows too big, you will find that there are special cases that must be exempt from the general rules.
There are very few exceptions to this - usually the rule of law. Almost everything else has exemptions.
Disclaimer: I don't work and have never worked at Facebook, and I barely use Facebook. BUT I've built big systems that deal with the web and the real world.
That's interesting. Was this because of the volume of mail I/O, for scale/fanout-related performance reasons, or to simply keep responsiveness at p99 for that account "just because VIP"?
I recall Twitter having special rules for accounts with a large number of followers, like keeping them on dedicated hardware or DB instances just so they could be replicated differently.
That's all well and good, but your comment as an insider directly implicates Facebook's CEO in perjury for lying to Congress. During a hearing he claimed all users were treated equally. That is clearly not the case.
Perjury before Congress can result in jail time, and I hope he's made an example of.
The problem, though, seems to be that while the company may have tools to detect abuse, choosing selectively when to enforce things defeats the entire point.