The Wired article’s title is frustrating. I’m pretty sure the Supreme Court does “understand the internet.” The judge I clerked for wrote the original decision that struck down most of the CDA except Section 230, which the Supreme Court later affirmed. If you go read that decision, it’s got a pretty accurate description of the internet: https://archive.nytimes.com/www.nytimes.com/library/cyber/we... (see Findings of Fact).
For example it talks about routing:
> “The Internet uses ‘packet switching’ communication protocols that allow individual messages to be subdivided into smaller ‘packets’ that are then sent independently to the destination, and are then automatically reassembled by the receiving computer. While all packets of a given message often travel along the same path to the destination, if computers along the route become overloaded, the packets can be re-routed to less loaded computers.”
And that was back in the 1990s. What people seem to really mean when they say “the Supreme Court doesn’t understand X” is that the Supreme Court doesn’t share the values they associate with X. That’s almost certainly the case. From the Supreme Court’s view, the internet is just another part of the national economy. They don’t understand or care about the distinct cultural values of people in that sector.
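The mechanism that passage describes is simple enough to sketch in a few lines of Python (a toy illustration, not how any real network stack works; the chunk size and the shuffle are stand-ins for real fragmentation and routing):

    import random

    def to_packets(message: bytes, size: int = 8):
        # Subdivide a message into numbered chunks (toy stand-in for packets).
        return [(i, message[i:i + size]) for i in range(0, len(message), size)]

    def reassemble(packets):
        # The receiver sorts by offset and rejoins, regardless of arrival order.
        return b"".join(chunk for _, chunk in sorted(packets))

    packets = to_packets(b"Hello from the 1996 findings of fact")
    random.shuffle(packets)  # simulate packets taking different paths, arriving out of order
    assert reassemble(packets) == b"Hello from the 1996 findings of fact"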
Also, even if the Supreme Court judges are older, the decisions are written, to a very significant degree, by the Supreme Court clerks, who are generally the very best law school grads in the US a few years out from graduation (so generally around 30).
I encourage anyone who thinks the Supreme Court is composed of 9 wordcel idiots who can't possibly understand anything about computers to read their opinion in Google v. Oracle [1]. It's very readable and well-argued. It's like a reverse Gell-Mann amnesia—clearly the author of the draft understands the issues at play quite well.
Google v. Oracle is one of the most anticlimactic decisions I've read.
The decision does show that the Supreme Court justices are savvy as to how the tech industry works, at least they know from a policy perspective they can't strike down the whole Android ecosystem without causing mayhem. And they know they can't affirm copyrights of APIs for similar reasons.
But from a legal perspective, given the way they outright dodged the legal question they were supposed to answer (i.e., are APIs copyrightable?) and the way they simply asserted the conclusion about fair use without pretending to apply legal reasoning (together with a disclaimer that they haven't tried to change the existing law, even if it looks like they have), I wouldn't say it's one of their better decisions.
As you say, people in tech generally have a tendency to assume judges are idiots. Most often they aren't, because judges sitting in the top courts are a handful of people at the pinnacle of their field, and they don't get there by being idiots. I agree with you that it's a good idea to sample some opinions from the Supreme Court or perhaps even other appeals courts to gain an appreciation of how judges tend to think. I just think Google v. Oracle is one of the worst ones to start with.
To your point w/r/t Google v Oracle, the justices aren't being useful idiots, but instead, useless geniuses. Hopefully they do something of value with Warhol and the Alice cases coming up.
Wired has been in the same camp as the Verge, IGN, Mac Rumors, and so on for a while now. They are closer to a tabloid than what Wired Magazine once was.
Not sure who they are a mouthpiece for but they certainly don’t mind being one.
> Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider
Should it? This feels a lot like a "have your cake and eat it too" situation. Either you're a neutral party or you are not. Being a trillion-dollar company doesn't exempt you from that. Letting a computer make the decisions instead of a person doesn't exempt you either.
Youtube would be in no danger if all it was doing was keyword-matching with recommended videos. Instead it created profit by designing its algorithm to maximize engagement by any means necessary, especially via outrage and bandwagoning. What the fuck did they expect to happen when they created a money engine that runs on human emotional intensity?
Choosing keyword matching as the algorithm is explicitly a way to target recommendations of information provided by another information content provider. As is sorting by most recently updated, as is using a machine learning model that optimizes for engagement.
All of those approaches can lead to illegal content, and the entire point of Section 230 is that regardless of which they choose, they are immune from any legal repercussions of surfacing that content.
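To make that concrete, here's a rough sketch of the three approaches as ranking functions. The field names and the `model` object are invented for illustration; the point is only that each one is a deliberate choice about what a given user sees:

    # Three ways to decide which videos a user sees. All of them are choices;
    # they differ only in the signal used. Field names are invented.

    def by_keyword(videos, query):
        return [v for v in videos if query.lower() in v["title"].lower()]

    def by_recency(videos):
        return sorted(videos, key=lambda v: v["uploaded_at"], reverse=True)

    def by_predicted_engagement(videos, user, model):
        # `model` is a stand-in for a learned engagement predictor.
        return sorted(videos, key=lambda v: model.predict(user, v), reverse=True)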
The thing is that the law as written allows them to do just that. If they don't like your content on YouTube, they can punt it instantly. And it can be for ANY REASON. And that's not even including their first amendment right to refuse distributing or listing your content.
> If they don't like your content on YouTube, they can punt it instantly. And it can be for ANY REASON.
Certainly. But Section 230, at least from my reading, does not protect them for the promotion of content. I could be wrong about that. The Supreme Court will decide. Personally I'd find it delightful if the rage-engine got smashed with a legal hammer and my Youtube recommendations were as useful as they were fifteen years ago.
>Personally I'd find it delightful if the rage-engine got smashed with a legal hammer and my Youtube recommendations were as useful as they were fifteen years ago.
Why would it be safe for them to use an older recommendation system? It doesn't solve the problem, if their older system recommends a terrorism video, even if it only did so because that video came up chronologically, they're still liable.
I would think they would need to just stop allowing the general public to upload videos anymore and only permit trusted media companies and influencers (ones known to not create controversial content) to do so. Probably after being approved through a vetting process where their lawyers can look through at least some of the content first.
>Why would it be safe for them to use an older recommendation system? It doesn't solve the problem, if their older system recommends a terrorism video, even if it only did so because that video came up chronologically, they're still liable.
A system that keyword matches isn't making recommendations, it's just keyword matching based upon the user's request. The law actually cares about intent and how things function, not just hypothetical possibilities that can occur, i.e. the law cares about what does happen and why it happens that way. So it's pointless to characterize a non-recommendation system as a recommendation system as a means of end-running an argument.
If the answer to that is "results that the search engine thinks are most relevant to you," then that's probably a recommendation engine. If the answer is "results that are most recent" or even "results that many people have watched," then that probably isn't a recommendation engine.
You're acting like any kind of algorithm is automatically a recommendation engine that should terminate Section 230 protections, but I don't think it's that simple.
The most recent, the oldest, the closest match? That doesn't make it a recommendation system. Maybe try reading my post and making an effort to understand it rather than just responding with the first thing that comes to mind, because it's as if you haven't understood my post at all and haven't made any effort to.
Do you not recognize how lousy of a video sharing website this would be? Spammers are going to be constantly uploading marketing and other low-quality content with irrelevant keywords, while users that actually put work into making good quality videos will see their results pushed to the bottom quickly. How will you deal with that without implementing a system that can identify and recommend non-spam videos? Even the oldest versions of Youtube were boosting videos that got lots of likes.
>the closest match
How is deciding the "closest match" not considered a recommendation? They all have the user's keyword, what other criteria will you use?
>Do you not recognize how lousy of a video sharing website this would be? Spammers are going to be constantly uploading marketing and other low-quality content with irrelevant keywords, while users that actually put work into making good quality videos will see their results pushed to the bottom quickly. How will you deal with that without implementing a system that can identify and recommend non-spam videos? Even the oldest versions of Youtube were boosting videos that got lots of likes.
Not sure why that's my problem, I'm not the one making money by promoting reactionary videos to reactionaries.
>How is deciding the "closest match" not considered a recommendation? They all have the user's keyword, what other criteria will you use?
Because it's not a recommendation; some are better matches than others, that's all. Some match the entire keyword, some just parts, some in different places... I don't understand what is difficult about this for you.
And what do you do when there are 10,000 exact keyword matches, how do you sort them? If it's newest, the entire thing is just going to be spam accounts reposting the same video(s) on any major keyword.
"Top", or anything notable, is also likely to be gamed and abused too, especially if you fuzz "top" sorting, because then it's not really neutral: you're deciding the order and therefore making a recommendation.
Then there might be a circumstance where it is promoting something. Your point? The law shouldn't make this illegal because then YouTube would have to have greater regard for what it surfaces? I'm not sure that's a bad thing, that's the entire point of the thread.
My point wasn't the frequency of it but rather that it might be the case that some of YouTube's operations do work that way... so what? Is YouTube's convenience the point of law? No. So why does it matter?
>Not sure why that's my problem, I'm not the one making money by promoting reactionary videos to reactionaries.
The reason I think we should see it as our problem is because I think the solution companies arrive at is just to turn the internet into cable TV, where only approved media organizations are able to share content because of liability concerns.
I'm not sure why YouTube should be able to operate the service it does with the little content filtering it does. In what other industry would you be allowed to post child pornography because it's too difficult to make sure it doesn't get posted? No newspaper could take that excuse. Toys R Us couldn't say "oh jeez, we didn't realize that a corner of our store was being used by child pornographers to spread child pornography and also recruit children" and not be liable. I'm not sure why we think it's good to give an excuse to YouTube and Facebook for this and anything else anyone else would normally be liable for.
>No newspaper could take that excuse. Toys R Us couldn't say "oh jeez, we didn't realize that a corner of our store was being used by child pornographers to spread child pornography and also recruit children" and not be liable. I'm not sure why we think it's good to give an excuse to YouTube and Facebook for this and anything else anyone else would normally be liable for.
I'll admit, we may even be better off as a society if communication were less "democratized." There certainly would have been a lot less covid and election misinformation out there if every rando weren't able to have their uninformed ideas broadcast by giant platforms.
Exactly. I understand why Section 230 is in place and what it achieved, but I do wonder what good it has actually done and whether or not we actually need it. Perhaps we don't need to break up the big tech cos, and should instead just make them as liable as any other business would be. In that sense, I don't think they could afford the conglomeration they have right now.
Is the intention of the algo promotion, or matching user interest to videos? There's a big difference between saying "I want you to watch this" and "I think you want to watch this."
The latter is just sorting by additional attributes (video length, keywords in content, likelihood of clicking and watching, keywords of past content watched, ...). YouTube doesn't care what you watch... as long as they match what you want to watch to a list of videos, you stay on the site. If they don't, then you leave. The actual content of the videos doesn't matter to YouTube. In this way, the page that displays the feed is very similar to showing search engine results sorted by best match, where the keywords are pulled from your past videos.
If sorting is now promotion and prohibited by 230, then the internet is f'd. Search engines are going to be completely useless.
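A toy sketch of that "I think you want to watch this" framing, treating the feed as a search whose query is built from the viewer's own history (the "tags" field is invented for illustration; this is not anyone's actual system):

    from collections import Counter

    def feed(candidates, watch_history):
        # Build a "query" out of the user's own history, then rank candidates
        # by how well they match it. The "tags" field is invented.
        interests = Counter(tag for v in watch_history for tag in v["tags"])
        return sorted(candidates,
                      key=lambda v: sum(interests[t] for t in v["tags"]),
                      reverse=True)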
Pick any definition you like. If recommendation systems come with existential legal risks for a small company, then only the biggest companies can afford to run them.
Or think of it this way: How is Mastodon supposed to take on larger social networks without recommending people to follow? Should every Mastodon server operator be legally liable for recommending someone harmful?
My definition would be that they're all bad and there is no good use for them, because the end results are harmful - "more spend/engagement" is something I view the same way I'd view "more smoking". Any algorithm or curation, excluding perhaps sorting by latest or "most views" or something similar, would be a recommendation as far as I'm concerned.
But I'm just not sure how or why online platforms get to have their cake and eat it too. If the NYT publishes a story that eating Tide Pods is healthy and encourages kids and parents to do so, they get sued. If Facebook creates an algorithm that causes the same or similar to happen, they get a free pass. They either have to be a public speech platform where anyone can say anything as long as it isn't literally breaking the law, or they have to follow the same rules as other entities that curate content. If you want to say "why not both?" then that's fine, but you have to apply that to all entities, not just online content.
I don't think there should be blanket immunity for anything simply because an algorithm did it. Let's just imagine there wasn't. Imagine that you could, in principle, sue a website over their promotion of illegal content. I would think on a fact-specific basis, HN would have a very good defence against liability even absent blanket immunity.
You could imagine the kind of elements that might matter for a fact pattern that would emerge from a deposition: revenue and size of website, proportion of website revenue directed towards moderation, percentage of requests that identify illegal material that are responded to, manner of response, tools provided to users, the types of content actually hosted on the site, nature of the algorithm itself, discussions that were had internally about access to harmful content. HN is a text-based website (which also mitigates the harm claim), it gets maybe in the orbit of a few hundred submissions a day, the vast majority of possible harm is when a submission is connected to a topic likely to cause legal issues, and in my experience such topics are typically flagged within a few minutes and removed quickly. There's no mechanism to directly communicate with users, there is no mechanism to follow users, there's no mechanism to change what you see on the front page based on what you clicked before. Everyone is exposed to the same content.
By contrast, thinking about the companies that are actually the target of these lawsuits: I was at the TASM (Terrorism and Social Media) 2022 conference -- some of my research is adjacent to this but I've never done any work on terrorism, and my social media work involves alt-tech stuff, not the big social media platforms -- where the keynotes were harm policy leads for Europe for Twitter, Facebook, and YouTube, all of whom made it clear that their position was that it is incumbent on academics and government agencies to identify harmful content and work with social media, because every region has its own unique content challenges and it's not possible for tech companies to handle those at a global scale.

A question was asked of the panel that went something like "Meta was, as it admits, instrumental in spreading the violence-inciting messages that drove the anti-Rohingya pogroms in Myanmar. The defense is that Meta wasn't prepared for, or staffed for, detecting issues in a single small market, and things snowballed quickly. You could hire 100 full time content people in that country for the salary of a single person sitting on the panel or a single SWE, so how could resource constraints be the issue?" and the answer was "We're already devoting enough resources to this problem."

I think that's an incredibly shitty answer to the question, and I think a deposition could surface exactly this kind of logic, and to me that would be a fact pattern that supports liability. I hope they get their pants sued off in every jurisdiction that allows it. It's clear an aversion to staffing is a huge part of the issue.
So from my perspective, I think the typical process for resolving civil liability + reasonable assumptions about how courts should interpret fact patterns is likely to get to outcomes I'm plenty happy with.
(In the two cases in front of SCOTUS right now it seems like the victims, who have basically no evidence connecting the perpetrators of the violence to the social media services used, would win: the argument seems to be of the form "some terrorist killed my family member, and some other terrorists got radicalized online, ipso facto social media is liable". I don't think that'd be a winning case with or without s.230)
If all that's allowed is "latest" or "most views", I will keep uploading my content to your platform and bot-voting/-viewing it to keep it at the top of everyone's feed.
These are solvable problems - for example requiring registration before posting. But I'm not moved at all by the technical problem because the technical problem isn't what is in question, it's the promotion and curation of content algorithmically.
Either way, I think we're going to see a big swing back to authoritative sources, because the very technical problems you mention will be exploited by new tools, and the already-meaningless content won't even be generated by humans anymore. The Internet in the sense of "publishing content" will be meaningless and unprofitable [1].
[1] Obviously there will exist use cases where this is not the case
'Registration', what does that mean exactly? Only people with government-validated IDs are allowed to post on the internet in the US? This sounds strangely in conflict with both the First Amendment and the historical use of anonymous materials that is part of our national identity.
Really, everything that you're saying doesn't have shit to do with authoritative sources, but authoritarian sources. If you're a big nice identified company, or you're a member of "the party", you get to post permitted information. If you're not, well, better learn how to post those pics of your senator pulling some crap on the darknet.
> 'registration', what does that mean exactly? Only people with government validated IDs are allowed to post in the internet in the US?
You can just register anonymously like you do on HN. Though for social media sites or similar having "verified human" seems like not just a good idea but ultimately the direction we'll go.
> Really everything that you're saying doesn't have shit to do with authoritative sources, but authoritarian sources.
You are really jumping the gun here, so I'm not going to respond to those points since I wasn't making them.
Mothers like mothers in porn? Mothers giving birth? Grandmothers? What about animal mothers? What about fathers in videos with mothers? Are the mothers giving parenting or medical advice? Who is policing the filters and deciding what constitutes a video about mothers?
And either way, why bother? Just don't recommend content; there's no point except to drive engagement, which is fundamentally no different from what Facebook (or whoever) is doing.
I'm not sure. If the incumbents were barred from doing a thing they currently have invested a lot of time and money and effort into or such thing was made legally riskier I think that would open the door to competition.
> YouTube said the clips did not violate its community guidelines.
> The warehouse accounts on YouTube have attracted more than 480,000 views in total. People on YouTube, TikTok and other platforms have cited the testimonials to argue that all is well in Xinjiang — and received hundreds of thousands of additional views.
> [YouTube] even suggested videos for campaigns with terms that it clearly finds problematic, such as “great replacement.” YouTube slaps Wikipedia boxes on videos about the “the great replacement,” noting that it’s “a white nationalist far-right conspiracy theory.”
> Some of the hundreds of millions of videos that the company suggested for ad placements related to these hate terms contained overt racism and bigotry, including multiple videos featuring re-posted content from the neo-Nazi podcast The Daily Shoah, whose official channel was suspended by YouTube in 2019 for hate speech. Google’s top video suggestions for these hate terms returned many news videos and some anti-hate content—but also dozens of videos from channels that researchers labeled as espousing hate or White nationalist views.
> Even after [Google spokesperson Christopher Lawton] made that statement, 14 of the hate terms on our list—about one in six of them—remained available to search for videos for ad placements on Google Ads, including the anti-Black meme “we wuz kangz”; the neo-Nazi appropriated symbol “black sun”; “red ice tv,” a White nationalist media outlet that YouTube banned from its platform in 2019; and the White nationalist slogans “you will not replace us” and “diversity is a code word for anti-white.”
What does that have to do with the affirmative act they undertake of promoting certain materials? That's the issue - not that they punt things, but that they promote things, and promoting things isn't the same as just hosting third-party uploaded content. They take that third-party content and show it to people to generate interest and advertising revenue. That's not the same thing as blindly hosting.
> The thing is that the law as written allows them to do just that. If they don't like your content on YouTube, they can punt it instantly. And it can be for ANY REASON. And that's not even including their first amendment right to refuse distributing or listing your content.
You're not wrong, but in addition to the leeway afforded to the rich and powerful by "the law", there is also substantial leeway afforded to every individual under "reality", and one option available is that it is technically possible to behave however one likes, including in a manner that is not compliant with "the law" or "the social contract", neither of which I or most anyone else was consulted on, despite living in a country governed by "democracy".
Interestingly, it seems like it is those who are classically "less intelligent" who are most likely to realize that this powerful exploit exists, buffoonery like January 6, anti-vaxx, and shooting power stations with an off the shelf rifle being prime examples of this.
I sometimes wonder if like corporations or most any other organization on the planet, it might be prudent to review our governmental and legal standard operating procedures from time to time to ensure they are working as intended (underlying, actual intent (as opposed to proclaimed intent) being another matter that more than a few people are starting to become rather dangerously curious about).
Perhaps it would be useful to separate these functionalities into two categories: User-initiated (searches) and passive (sidebar garbage, play next video trash, etc).
Giving the user the ability to search doesn't mean you're curating content with a recommendation engine.
> Youtube would be in no danger if all it was doing was keyword-matching with recommended videos.
What order should those keyword matching videos come back in? By total views, by 30 day views, popularity, by upvotes, by downvotes, by keywords in the title, the description, the comments, the video itself?
Any choice made would be effectively indistinguishable from "designing the [search] to maximize engagement" as it comes to the law, since search itself is a method of surfacing new videos, and the order (even if only the default order) would matter.
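For illustration, here's how interchangeable those defaults are; every sort key below (all field names invented) is an equally deliberate choice about what gets seen first:

    def search(videos, query):
        # "Just keyword matching" still has to pick one of these defaults.
        matches = [v for v in videos if query in v["keywords"]]
        return {
            "total_views":  sorted(matches, key=lambda v: v["views"], reverse=True),
            "30_day_views": sorted(matches, key=lambda v: v["views_30d"], reverse=True),
            "upvotes":      sorted(matches, key=lambda v: v["upvotes"], reverse=True),
            "newest":       sorted(matches, key=lambda v: v["uploaded_at"], reverse=True),
        }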
> "designing the [search] to maximize engagement" as it comes to the law
I don't think the law has a definition for that term. I think it's -- probably correctly -- up to judges (and/or juries, depending on the type of case) to weigh intent and decide whether a search engine is liable for the results it produces.
This idea that "doing literally anything is an evil engagement maximizer" seems too simplistic for how the world actually works.
Newsstands and bookstores recommend items: the ones they put up front, at eye level, on the "look at this" table as you walk in. We don't typically hold them liable.
In meatspace we don't generally hold those making recommendations liable for the 3rd party content. We shouldn't online either.
If a newsstand was promoting ISIS recruitment propaganda, you better believe we'd hold that newsstand responsible for harm caused by that recommendation.
And those recommendations are the same for every person that shops there. So they would pay a stiff penalty in the market if they promoted extreme content, whereas YT and social media can present a customized "bookstore" for each person, without the consequences of everyone else seeing what it's recommending.
The tl;dr is that it may not be possible to split a hair as thin as the difference between an automated recommendation algorithm and automated (or manual) moderation (if the system chooses not to put your tweets in the Trending Topics, are we refraining from up-signalling you or down-signalling you? Is "signal" on a continuous real number line or two separate number lines?).
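One way to see why that hair is so thin: in most systems, boosting and demoting are literally the same operation with a different constant. A toy sketch, not anyone's actual pipeline:

    def rank(items, weights):
        # "Recommendation" and "moderation" as one code path: a single weight.
        # weight > 1 boosts an item, weight < 1 buries it, 0 sinks it entirely.
        return sorted(items,
                      key=lambda item: item["score"] * weights.get(item["id"], 1.0),
                      reverse=True)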
If the Court rules narrowly against Google, it'll be a major change to the function of much of the Internet (we can expect companies to respond to the new liability by switching off their recommendation systems completely), but the fundamental way-of-life we know today will continue. If the Court rules broadly against Google, it functionally kills S230 and opens the entire Internet up to massive liability lawsuits in a way that may end online fora for all but the "judgment proof" of the world.
Not even HN would likely be able to afford to operate if anyone posting a controversial idea slips through the moderation cracks and gets seen. The site would have to go to moderate-all-by-default, not post-and-then-moderate.
I presume that non-profit forums would be judgment-proof (on literal First Amendment grounds). Possibly even including non-profit forums attached to a for-profit entity. With regard to Hacker News in particular, the only possibly problematic element would be the YC-linked advertisement posts.
Of course I'm not a lawyer. But section 230 is not the only protection here.
No; "Judgment-proof" means "too poor to be sued because there is no universe in which the costs will be repaid," not "Case too worthless to bring." Anything that is criminal liability is never judgment-proof (when jail is on the table, a defendant can always "pay" with their freedom); civil liability is judgment-proof if the defendant could never possibly scrape together the cash to make the plaintiff whole.
Basically anything First Amendment-grounded can pass the bar of worth bringing to court because the First Amendment protections are a patchwork of carve-outs, exceptions, and careful interpretations to stretch the ideal of "The government cannot constrain the people here" over the reality of "Some speech is harmful in a way that cannot be made whole."
There's nothing about being a nonprofit that prevents some Internet rando from using your forums to post massive obviously-false defamatory statements. The only thing that keeps the forums themselves from catching a lawsuit when that happens is s230.
> The only thing that keeps the forums themselves from catching a lawsuit when that happens is s230.
Section 230 came about because of suits against for-profit entities. Has there ever been suit against the likes of IRC, USENET, or BBSes for libel from content of their participants?
It would have to be a suit against individual nodes, which would certainly increase the cost (particularly for something like USENET).
... But that's a little irrelevant, because those services are ghost towns relative to the past now. The more pressing reason nobody would bother to sue those channels is that nobody cares what's said there. If the loss of s230 shuttered the web services, and there were a migration back to those channels instead of just a quieter internet... eventually a threshold would be reached at which it would be worth it for somebody to fire off a salvo of lawsuits against those service providers. A lot of IRC and USENET nodes are tied to institutions with enough assets that they aren't judgment-proof.
For-profit or non-profit seems to make 0 difference to any of the discussed arguments. Also, the first amendment doesn't protect you from accusations of libel or defamation etc. There are limits to it, and a company that is knowingly spreading either could be on the hook for huge lawsuits if not under section 230 protections.
For example, if a newspaper published a reader letter that accused some rich dude of having defrauded them, the newspaper itself would probably be liable just as much as the reader whose letter was sent. Apply this same logic to any HN comment and you'll get a huge issue - and being a non-profit is entirely irrelevant to this.
You could potentially attack a forum by posting verboten content on it, then turn around and petition the government/state to sue the forum out of existence. A new form of SLAPP.
I remember when the DMCA passed, and everyone assumed that "safe harbor" meant that if a provider moderated or policed the content, then that service would be liable. If they just allowed things to pass through, not unlike the phone company, then the company could claim safe harbor. The law was designed to shield ISPs and online services from liability for what their users did with the service, not to enable content filtering and moderation... and in many ways it is the opposite of the intent of Congress, which was just trying to make it so you wouldn't be sued into oblivion because a user of your service did something bad.
“Section 230 was intended to clarify that the government would not impose liability on internet companies even if they moderated their content.” -Jeff Kosseff
How come you and Jeff Kosseff seem to think opposite ideas about whether 230 was intended to enable content providers to moderate content? I wonder who is right.
That is a complete non-sequitur and yet people take that propaganda seriously. Section 230 says nothing about being neutral. The "whole question" is like asking if your driver's license is still valid when you are wearing a purple necktie, then asserting that drivers licenses aren't valid then because a purple tie excludes green and yellow and that ruins society and promotes pollution for the sake of privilege because purple was reserved for royals.
If you ignore the emotional manipulation sophistry and look at the logical content it looks a lot more like a thought disorder than anything else.
I think by the logic of "either you're a neutral party or you are not" no provider, big or small, is a neutral party, but I don't think this would produce the best outcome. It's a very good thing if we have lots of sites making different editorial decisions about what is and isn't allowed from users in a community—whether it's a search engine, YouTube, Facebook, Twitter, etc—and let the competition of different rulesets in a market determine who gets traffic, attention, and revenue.
But making editorial decisions downstream—what it chooses to show to users first vs last—doesn't change what people contribute, and this is where Section 230 is trying to provide a balance. People can contribute whatever great or terrible things they contribute, and Section 230's stance is that the provider just has to remove it if it's actually out of bounds. That allows sites to have their own editorial decision-making downstream while allowing user-generated content to flourish upstream, so that those decisions matter at all. If the burden of moderation is moved farther upstream, to the point at which people contribute, you've raised the barrier to creating content entirely, you've turned every social network into a newspaper, and you get none of the benefits of the scale, low barrier to entry, and low marginal cost that the Internet brings to content creation.
In my view, an algorithm is no different than the downstream moderation decisions every website makes with or without an algorithm. Of course an algorithm has values. That's the point! They should have values, and people should choose which sites to use based on how well those design decisions translate into value for the user. Rather than the existence of out-of-bounds content in their systems at all, we should be judging YouTube and others on how well they identify and how quickly they remove that content, and how quickly they update their policies (which is just another word for algorithm) to reflect those new values. This is explicitly a tradeoff between making mistakes in the interest of making progress and safety, because what I'm advocating here does mean you get terrible stuff posted to places where it can get a lot of reach, and it's more likely to get that reach than with a system that moderates farther upstream.

Where I think we should be coming down stronger is in regulating reach, not access (at least not entirely). The speed of the stream—the speed of virality—should be slowed to allow reason to re-enter the conversation. That's where I think regulating content on the Internet needs updating, but not so far upstream that you stop it from getting any kind of reach at all.
The practical framework for regulation that I'd suggest is, if something hits a threshold of reach, it gets moderated more heavily and judged more heavily by authorities.
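In code terms the suggestion is something like the sketch below; the threshold and the review queue are invented placeholders, and a real system would be far messier:

    REVIEW_THRESHOLD = 100_000   # invented number

    def on_view(post, review_queue):
        # "Regulate reach, not access": content circulates freely until it
        # crosses a reach threshold, then spread slows and humans look at it.
        post["views"] += 1
        if post["views"] >= REVIEW_THRESHOLD and not post.get("under_review"):
            post["under_review"] = True    # slow further distribution
            review_queue.append(post)      # escalate to heavier moderation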
Related: Deciding on what counts as "out of bounds" content is also extremely hard, but we can put that aside for a second, because I think the main issues here are 1) whether a moderation decision is required, and 2) where the moderation decision occurs.
Say your friend looks great in a piece of clothing. Based on just this, the next time they ask you what they should buy, you suggest the style and brand they looked good in before.
A day later, you discover that the brand in question clubs baby seals.
Does this suddenly make you liable for all the atrocities the brand commits?
Superficially, a YT recommendation is based on metadata - video length? Did the user watch the video beyond threshold values? Did they comment or react to a video? Where do this user's metrics lie in comparison to the metrics of other users who watched the same video?
I concluded that since the algorithms aren't moderating the content, just your access to it, they don't run afoul of S230.
Maybe the answer is to require all social media to provide a strictly timeline based view. Even here, the submission time stamp is purely metadata about the content, as is the watch duration or “comment/react?” flags.
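A sketch of the distinction being drawn: the strictly chronological view next to a metadata-scored one. The signal names are made up, and neither function ever looks at the content itself:

    def timeline(videos):
        # Strictly chronological: the submission timestamp is the only metadata used.
        return sorted(videos, key=lambda v: v["submitted_at"], reverse=True)

    def scored(videos, viewer):
        # Ranks on metadata about the video and the viewer, never the content.
        def score(v):
            return (v["watch_completion"]                       # watched past threshold?
                    + 0.5 * v["reactions"]                      # comment/react flags
                    + viewer["affinity"].get(v["channel"], 0))  # similarity to past viewing
        return sorted(videos, key=score, reverse=True)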
FWIW, the Supreme Court doesn't need to understand the Internet to do its job.
A judge's job is to understand the law. It is the job of the lawyers to bring facts and argumentation before the judges to persuade them that the law should be interpreted one way or the other. The expertise is expected to live with the lawyers, not the judges (and the ability to find expert witnesses and spin up on the details of what they're advocating is a lawyer skill).
Yes, this is the most frustrating thing about these kinds of article titles. Justice Kagan isn't saying "we have no clue what we're doing"; she's saying, "I'm not sure you're in the right place."
It's really grounded in an anti-adversarial-process ideology: pre-empting the process and demanding there be only one side dictating what is and isn't true.
That's probably overanalyzing a narrative piece on behalf of their owners, but it's what this accumulates to at scale.
It seems to me the difference in this case is that it complains about content promotion, not content publishing. The issue is that YouTube's algorithms promoted extremist content to people who were prone to extremist behavior. That's not quite the same as simply hosting extremist content uploaded by users.
As it says on the Supreme Court site:
"Issue: Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information."
Google decides what emails go in the gmail inbox vs spam folder vs rejected outright. Should they be legally responsible if an offensive email lands in your inbox?
My inbox is not available to the public, doesn't have public "like" counts and view counts, and cannot be shared with a single click with the same viral network effects (sure, emails can be forwarded, but I think we can agree sharing emails vs. sharing on social media is wildly different).
So I think there’s a reasonable argument to be made about the difference here. Agree?
I'm having a hard time connecting this to Section 230. So you're saying there should be an exemption for recommending harmful content but only if the content is easier to share than forwarding an email?
The issue is that there is no connection to Section 230. Section 230 deals with liability for user-generated content posted publicly. Email is... not that.
E-mail absolutely is user-generated content. Section 230 doesn't say anything about "posted publicly". Now, there's not quite as much need for it to protect non-public content, because getting people to sue over that is a bit harder, but it really is rightfully protected too.
There are no ToS associated specifically with Gmail. The service has no guarantees or expectations set whatsoever. You get better guarantees on email reception from running your own server.
Therefore it's hard to say what the Gmail user could even base their complaint on. Google can do whatever they want with the incoming email as far as categorization and blocking go, and nothing in the nonexistent ToS is violated.
I would challenge you to craft a legal opinion around 230 that excludes Youtube recommending harmful content from protection but not Google placing harmful content in the "Priority Inbox".
There isn't difficulty there. Lawyers aren't having problems navigating this language. Obtuse techies who obsess over edge-case definitions are having difficulty. No one reasonable would describe spam filtering as a recommendation algorithm, and to suggest otherwise is either outright ignorant or just facially disingenuous. The law doesn't care what can be argued to be a promotion algorithm based upon some HN poster's obtuse reliance on a specific word; the law deals with reasonability as a standard all the time.
Navigating what language? The law today I agree is pretty clear. This is speculation on a ruling that doesn't yet exist that could change all that.
Your suggestion is that there would be a carve out for spam filtering? Or that Google deciding what goes in "Promotions" and what goes in "Priority Inbox" isn't a recommendation?
My suggestion is that your hypothetical is useless and inapplicable because it is at best, reflective of your own personal misunderstanding, or at worst, outright disingenuous.
IANAL, but I would start developing my argument with the idea that emails Google placed in my Priority Inbox were sent to me specifically, and the intention of the sender is that I specifically would see it. Google is still not putting anything in front of my eyes that was not intended to be there anyway.
When YouTube recommends content to me, the original author did not target that content specifically to me, and YouTube alone is making the decision to put it in front of my eyes.
They could rule against personalised recommendations (YouTube) vs while protecting recommendations where everyone sees the same thing (HN). In HN’s case I’m not sure it would matter much either way. HN is pretty heavily moderated already. If stories went into a moderation queue before hitting the main page rather than being retroactively moderated I’m not sure many of us would notice a difference.
The heavy moderation of HN would mean that they would be more liable for content. And the algorithms showing the front page recommendations would likely be found to be similar to the "what a visitor to YouTube who isn't signed in sees" or "what you see if you go to https://twitter.com without being logged in."
I'm not sure how to construct an argument that would allow HN's front page while at the same time curtailing YouTube's not signed in front page - both are recommendation algorithms.
They could rule against personalised recommendations whilst protecting recommendations where everyone sees the same thing, but logically it seems more likely they'd do the opposite. It's more reasonable to consider a single centralised "top stories" system consistently highlighting content to all its users a reflection of a "publisher" preference than an algorithm which highlights content based on each individual user's activity and whatever filters that user may have set; moderators tend to manually intervene to influence the former more; and of course it's less reasonable to demand that all potentially libellous content be screened from procedurally generated individual user links than from a "top stories" list or home page.
I think most of us would notice the difference if HN was only allowed to rank comments in chronological order, never mind if, for liability reasons, the stories permitted to appear in order of upvotes on HN were restricted to the ones Dang was satisfied weren't libellous, or possibly none at all if YC decided it wasn't worth the risk.
HN is not recommending stories. Community members are endorsing or flagging stories and HN displays them ranked on that process.
There is moderation as well, with the removal of stories, but I'm not sure the responsibility there is for removing harmful content. If someone posted a slanderous story or other illegal content and it was allowed to stay for some length of time, then I think HN would be responsible. The most egregious case would be if a child porn story were on the front page for days because HN staff chose to leave it there.
For YouTube, they are suggesting beheading videos to my child and I think bear some responsibility for doing that, and hopefully to stop doing that. They are making editorial decisions to promote content and so, I think, shouldn’t be protected by 230.
HN does recommend stories. If the mods feel a story doesn't deserve its virality, they will manually weigh it down; if they feel a story isn't getting the visibility it deserves, they will manually boost it. They will even sometimes replace the posted URL with one they feel is more relevant. This forum absolutely does not place content based solely on user input.
What does 'choose to leave it there' mean in a legal sense?
For example if I the moderator check the site once a day, and someone posts 5 minutes after I leave, would the law say it's ok for the content to remain up another 23 hours because no moderative choice occurred? Is there now a legal requirement to ensure you moderate fast enough?
How could they fix that? If it was that easy to reliably identify illegal/extremist content and prevent it from being recommended they’d just take it down
The obvious options if targeted recommendations are found not to be protected is to only use them with curated content (Netflix) or not use them (reddit).
That would completely destroy the experience of using YouTube for many people. The reason I like it is that I can see really niche content that has a really small target audience. With curated recommendations I wouldn't discover that type of thing at all, which would be bad for both me and the creators, and with no recommendations it would be even harder to find. Reddit does targeted recommendations, by the way, though maybe it's just based on what subreddits you joined, so maybe that isn't the same thing, since the user chose to receive it.
(I'm not trying to imply that YouTube should work exactly how I want it to, just using myself as an example of what I think many people use it for.)
To look for an upside to the worst case scenario, the biggest win from a deep challenge to Section 230 protections would be a return to a smaller and more mindfully curated web.
Many of the problems from social media are rooted in the idea of having such enormous hoards of content that the only way to trawl through it is with automated algorithms. This was great when it worked, but the content pool seems to grow faster than algorithm design can accommodate, and it will only get worse as AI content generators mature.
It suggests that there are possibly as few as two futures:
1. The internet is a wasteland of content pollution and the automated tools for sorting and sifting through it are overwhelmed with toxic waste.
2. The internet returns to a network of trust where people are individually, but only marginally, accountable for what they share with others and this accountability engenders thoughtful curation at a manageable scale.
It might be that modern equivalents of web rings, curated directories, group chats, and member forums supplant "Internet Scale" search engines and social media networks. It would be an adjustment, but it wouldn't be the end of the world.
In scenario #2 the toxic waste still exists. Unmoderated forums are protected against liability even if CDA 230 is repealed. So 4chan will still be around.
Furthermore, distributing liability among your users is not a great idea. What will happen is that extortion enterprises will be created to sue people on our hypothetical old-web-of-trust. We know this because BitTorrent allowed Prenda Law to make porn, share it themselves to dox users[0], and then sue the people who downloaded or watched it. This scheme worked because copyright lawsuits are expensive to defend against, and defamation is no different. So everyone just quickly settled, which is why it took years for judges to catch onto this particular fraudster.
The only thing that keeps you from being sued for watching an infringing YouTube video is DMCA 512, which works almost[1] identically to CDA 230. Because it's a large centralized service, it's a juicier target, and people with legitimate copyright grievances can get things taken down from them. So nobody bothers to try and sue individual viewers.
Your #2 scenario only works if the Supreme Court is merciful and only kills recommendation systems. If CDA 230 is struck down entirely, you won't get to run a network-of-trust version of the web, because just hosting a public web server will require signing an indemnification agreement and posting a very large bond. In this scenario the Internet becomes more like cable, or perhaps a games console.
[0] Normally an IP address is not dox; but a lawyer and an out-of-order DMCA subpoena can turn it into dox.
[1] 512 adds a notice-and-takedown regime because Hollywood wanted censorship powers over the Internet.
3. Some other country tells the US to fuck off and foots a lot of the lost advertizing bill in order to collect a shit ton of user data like the big US sites do now?
It's a world wide web, if the US screws off too much there is no golden rule that says it has to be the monetary king of the internet.
I think the Supreme Court has a real out here to say, "Fuck, we dunno" and rule to change nothing and suggest that Congress remedy the situation through new laws. That's ideologically consistent and the right move. I would be astonished if they don't do that here.
A lot of news outlets are just posting the funny part of Kagan's quote and not the rest of the context... but it sounds like that's exactly what she was saying.
> "We're a court. We really don't know about these things. You know, these are not like the nine greatest experts on the internet," Kagan said of her colleagues, eliciting a laugh from the courtroom gallery. "There's a lot of uncertainty going the way you would have us go, in part, just because of the difficulty of drawing lines in this area and just because of the fact that, once we go with you, all of a sudden we're finding that Google isn't protected. And maybe Congress should want that system, but isn't that something for Congress to do, not the court?"
> Supreme Court Justice Elena Kagan said one could question why Congress provided such immunity when passing Section 230 of the Communications Decency Act of 1996. But she drew laughter when she wondered how far the Supreme Court should go in cutting back such protection.
> “We’re a court. We really don’t know about these things. You know, these are not like the nine greatest experts on the internet,” Kagan said.
> Kavanaugh said Congress knows that lower courts have interpreted the protections broadly. “Isn’t it better ... to put the burden on Congress to change that, and they can consider the implications and make these predictive judgments?” he asked Stewart.
JUSTICE KAGAN: Yeah, so I don't think that a court did it over there, and I think that that's my concern, is I can imagine a world where you're right that none of this stuff gets protection. And, you know, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? A little bit unclear.
On the other hand, I mean, we're a court. We really don't know about these things. You know, these are not like the nine greatest experts on the Internet.
(Laughter.)
JUSTICE KAGAN: And I don't have to -- I don't have to accept all Ms. Blatt's "the sky is falling" stuff to accept something about, boy, there is a lot of uncertainty about going the way you would have us go, in part, just because of the difficulty of drawing lines in this area and just because of the fact that, once we go with you, all of a sudden we're finding that Google isn't protected. And maybe Congress should want that system, but isn't that something for Congress to do, not the Court?
Based on reporting of the oral arguments, it seems to show that the Supreme Court understands that they don't understand (other than Alito, perhaps), are worried about breaking the whole thing, and will probably do nothing and issue a ruling saying that if changes are needed it's Congress's job to do it.
The reality is that most people, EVEN technical people, don't actually understand the Internet. The only people I consistently have conversed with who understand the Internet are technology advocates that have a deep background in Internet networking. Even on HN, most people believe many things about how the Internet works, at the networking layer, at the application layer, and socioculturally, that are not true, and provably not true, but widely believed. A true headline would be "Hardly anyone who isn't at least a double CCIE with ~15 years of experience in web policy understands the Internet."
Most technical people don't understand DNS, and it's one of the most basic and core technologies to how the Internet works today. The courts definitely don't understand DNS. The vast majority don't understand BGP, and it's literally the basis for how Internet networking works. Many don't understand the Web as a tech stack, outside possibly the basics of website design. Even people who develop web applications don't understand the Web as a tech stack.
It's actually a major concern for me, because the promise that future generations would be "digital natives" with deep technology understanding didn't come to fruition. The knowledge of how all of this infrastructure actually works is dying off, and very few people are interested in learning it, and the entire global economy is now built on the Internet in a myriad of ways. Reminds me of https://xkcd.com/2347/ except extrapolate from open source software to literally all essential technologies the world runs on.
I don't think it is necessary to understand how the internet works technically to "understand the internet". It's like saying a race car driver can't drive fast without understanding the physics of a piston.
I get what you're saying, but I think you might be missing the point that I'm making. DNS and BGP aren't JUST technical, they're also deeply geopolitical. The Internet isn't an accident, it's an intentionally and carefully formed set of peer autonomous networks, with a shared protocol and written (and unwritten) rules for how we name and resolve the path to get to those different networks from one another. These standards, the technical implementation of them, and how they interact with the real world is actually deeply geopolitical as well as socioeconomic. It's not just understanding how it works technically, it's understanding how it works from a policy, standardization, and governance perspective. Just like the law, how we arrived at the particular standards and policies informs the context for decisions we make about the technology and how its implemented.
Even then, without an understanding of how it works technically, it's not possible to responsibly and accurately adjudicate the law as it relates to these technologies. How BGP and DNS work is actually front and center in many court cases, where the decision rested on a basic misunderstanding, which rendered the outcome either ineffective or unjust. Think of all the cases where DNS blocking is utilized on a court order, not realizing that this is effectively meaningless and is not actually effective.
People don't often understand how deep this rabbit hole goes, even technical people who understand the basics of routing. They don't actually understand what a "peering agreement" means, especially what it means when it crosses an international border. The way that the law interacts with the Internet is an integration point that has an especially high level of complexity, and nearly no-one involved in that complexity in our current system day to day is actually qualified to discuss it.
When we have failures at this integration point, it has far-reaching impacts with broad implications for how technology continues to develop into the future, and there are both positive and negative consequences of this. As a simple example, the stupidity with courts and DNS blocking, and the way in which some governments around the world behave in relation to DNS, is actually a core reason why DoH and DoT exist: misapplication of law based on misunderstanding of technology risked fundamentally breaking the basic building blocks of the Internet required for it to function and for networks to successfully interoperate, so the people who actually understand how it works had to come up with new technology to "route around" stupidity in legislatures and courts around the world. These new technologies /themselves/ have geopolitical consequences (see UK and EU stupidity regarding crypto and its implications for DoH and DoT).
It's not JUST understanding the technology, there's the 0th layer and the 8th layer in the reference model.
These things are not the court's job to figure out. It's the job of the parties arguing the cases to explain the technical details of the case and how the law applies. The court is not an expert on any topic other than the law.
These things are literally something which impacts the outcomes of court cases on a nearly daily basis somewhere on the globe, and they interlace into the law in various ways both in the legislative process and in the judicial process. It is possible to create a law which does not map correctly to technical reality if you do not understand, it is possible for the court to order something which does not map correctly to technical reality. Technical reality is not always fungible. At some level, it all becomes physics.
Both the legislature and the judiciary in the US, and in other governments around the world, are technically and scientifically illiterate.
Judges don't make decisions about the laws of physics, they make the decisions about the laws of people. If the laws of physics are relevant, an expert can explain the relevant parts during the court proceedings. But the technical details are only of contextual relevance. If the law has ignorant technical consequences, that ain't the judiciary's problem. That has to be fixed by the legislature.
You're basically just saying "I don't like the laws/decisions of XXX because they break my toys and make my life harder". And it sounds like they're breaking your toys intentionally, not, as you claim, because "they don't understand it".
> Even on HN, most people believe many things about how the Internet works, at the networking layer, at the application layer, and socioculturally, that are not true, and provably not true, but widely believed.
Could you debunk some of the most common/biggest false beliefs you encounter? I'm curious to see which traps I fall into.
There are so many I'm not sure where to begin. A few simple things most people believe about networking on the Internet that aren't true:
1. Routing paths are primarily decided by path distance as the metric (see the sketch after this list).
2. IP ownership is authenticated and therefore the global route table is deterministic and stable.
3. DNS traffic is always over UDP/53.
4. DNS is controlled by the network. Related, split horizon DNS is reliable.
5. Domain registrars and the root nameservers are required to be contacted in order to get a DNS response for a given public domain. As a corollary, seizing domains or blocking DNS at the root is a sufficient method to block access to a site. Secondary corollary, a correctly configured and valid public domain is reachable from anywhere on the Internet that can make a DNS query.
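To make the first myth concrete, here's a minimal toy sketch in Python (not a real BGP implementation; the route names, local-preference values, and AS numbers are invented for illustration) of the part of BGP best-path selection that surprises people: local preference, a pure operator-policy knob, is compared before AS-path length, so the "shorter" path routinely loses to the path someone decided to prefer.

```python
# Toy sketch of BGP best-path selection (illustrative only; attributes invented).
# Local preference -- set by the network operator -- is evaluated before
# AS-path length, so the route with the longer path can and often does win.
routes = [
    {"via": "transit-provider",     "local_pref": 100, "as_path": [64500, 64510]},
    {"via": "settlement-free-peer", "local_pref": 200, "as_path": [64501, 64502, 64510]},
]

def best_path(candidates):
    # Higher local_pref wins outright; shorter AS path is only a tie-breaker.
    return max(candidates, key=lambda r: (r["local_pref"], -len(r["as_path"])))

print(best_path(routes)["via"])  # -> settlement-free-peer, despite the longer AS path
```

Policy, in other words, beats "distance" by design, which is exactly why the route table reflects business and political relationships as much as topology.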
A lot of these are true in /some cases/, or in the general case, but not in the corner cases. A lot of policy and court decisions about the Internet assume the happy path or the common case when dealing with situations where motivated actors may take actions outside the happy path to work around that court order. It is entirely possible to bypass public DNS to resolve a public IP on the Internet as long as it is routable, regardless of what registrars or public (and root) nameservers say. (see: Pirate Bay and their shenanigans, also see Tor resolvers, see non-authoritative DNS resolvers, see host file sharing, see alternate roots, etc.)
A lot of policies within organizations, including those set by network admins, are based on a belief that systems in their network obey the things provided by their network, when in fact the client controls nearly everything about how it decides to interact with a network, and the network can only provide configuration as a suggestion or recommendation. There are very limited options for enforcing policy at the network level. There is no guarantee that any given client on your network is actually utilizing your local DNS resolvers or will successfully resolve split-horizon DNS.
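As a concrete example of how little the network actually controls, here's a minimal sketch using only the Python standard library and Cloudflare's public DoH JSON endpoint (the name queried is just a placeholder): the client resolves a name over HTTPS on port 443 and never consults whatever resolver the local network handed out via DHCP.

```python
import json
import urllib.request

# Resolve a name via DNS-over-HTTPS, ignoring the network-assigned resolver
# entirely. To a firewall, this is just another HTTPS request on port 443.
req = urllib.request.Request(
    "https://cloudflare-dns.com/dns-query?name=example.com&type=A",
    headers={"accept": "application/dns-json"},
)
with urllib.request.urlopen(req) as resp:
    answer = json.load(resp)

for record in answer.get("Answer", []):
    print(record["name"], record["data"])
```

Nothing the local resolver or a split-horizon zone says ever enters the picture, which is why "our network controls DNS" is a policy assumption, not a technical guarantee.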
A lot of configurations in the world are based on trust in systems which are inherently designed in an untrustworthy manner, or which require additional configuration and technology to provide an external basis for trust. This has had consequences with far-reaching impact and will continue to do so (see BGP hijacking).
DNS, except when using newer protocols and taking extra measures to authenticate responses, is easily intercepted and hijacked. It is entirely possible to capture queries and inject non-authoritative responses, including responses which claim to be authoritative at any point in the path the query traverses. This is actually commonplace in some regions of the world (see: China). DNS that is authenticated and encrypted doesn't traverse UDP/53.
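To see why plain UDP/53 is so easy to hijack, here's a minimal sketch using dnspython (assuming it's installed; the name and address are documentation placeholders). Nothing in a classic DNS response is cryptographically bound to the real authority, so anyone who can observe the query can fabricate an "answer" that the stub resolver will accept if it arrives first.

```python
import dns.flags
import dns.message
import dns.rrset

# The query a stub resolver would send in the clear over UDP/53.
query = dns.message.make_query("example.com", "A")

# Anyone on-path can copy the query ID and question and fabricate a response,
# even flagging it as "authoritative" and pointing the name wherever they like.
forged = dns.message.make_response(query)
forged.flags |= dns.flags.AA
forged.answer.append(
    dns.rrset.from_text("example.com.", 300, "IN", "A", "203.0.113.7")
)

print(forged)
# Without DNSSEC validation or an encrypted transport (DoT/DoH), the client
# has no way to distinguish this from the real answer.
```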
I'm either feeling disappointed with this list, or feeling proud of myself that I can say, "Yes, of course, those things aren't true, and here's why..." I just have an expired CCNA and have taken a couple graduate-level courses on network security and traffic analysis. The only one I haven't heard of is "split horizon DNS."
This particular case? No. Many other cases, yes. I think there is a severe lack of understanding in the court system at every level, not just with BGP.
Yeah, I actually find it really disconcerting. It's one thing when you read mainstream news stories and see that they're inaccurate; like, "OK, sure the writer is a journalist, obviously not a specialist in this area, and the intended audience is also comprised of non-specialists." But reading HN makes me feel very inadequate 99% of the time, in terms of my knowledge of the computing domain. Then every few months I see a thread about some specific thing that I feel quite comfortable with and it's like, "What on earth? I thought these people were all smarter than me, but look at these comments..."
I'm currently a PhD student doing research related to Tor, and there was a thread about Tor last month. I didn't even know where to start with the comments, so I didn't participate at all.
The article doesn't really explain what it is about the internet that isn't being understood. Even if these companies have been identified under the general "tech" category by investment firms, they're essentially media companies, and there's no reason they should be exempt from the regulations that apply to other media companies. The idea that First Amendment rights will somehow be overtly compromised by adding extra responsibilities for internet companies would require us to ignore the censorship these companies have been willfully practicing over the past decade.
Given the rate at which YouTube takes in new videos (millions per day), I think you would be supremely lucky to see anything even resembling "an opinion" in a truly unfiltered list.
A legitimate result from this lawsuit would be to simply force internet companies to be more responsive. The family in question did everything they could to alert Google to the extreme content, and were completely ignored. I think anyone who has had their user ID mistakenly flagged, or app removed, or content stolen can relate to the unresponsiveness of internet companies to legitimate user complaints. It's what is driving the sense of arrogance people get from these companies, who act as though they are untouchable.
There needs to be some accountability. The "slippery slope" or "chilling effect" arguments are valid, but the scale has tipped too far so that individuals being harmed by these companies have little to no recourse.
These companies make billions of dollars in profit, and much of that is because they underfund customer support or simply ignore it altogether. It's like any consumer product safety issue: it's expensive, but it should be a required cost of doing business, like product recalls or EPA pollution regulations.
I'm done with Wired magazine. They no longer produce content, they produce drama. That article about how there's no point in identifying evil people because there's no evidence supporting it was massively harmful to people in abusive relationships who are trying to sort out why they are suffering so much and what evil really is. From the comments below, this one apparently is clickbait. I'm done.
If I start a TV network on basic cable and let anyone submit a video that will be broadcast live on air in real time, that network will likely be broadcasting user-submitted material that would not be allowed to be shown on TV during the daytime per FCC decency standards.
How is YouTube different than this?
Or Facebook, Twitter, or any other social media that is free to see once you have an internet connection?
TV networks made a deal with the government (the FCC): they got monopoly access to a limited resource (spectrum).
In exchange they agreed to limitations set by FCC, like decency standards.
Limitations that go beyond what the law, as created by congress, requires.
YouTube and Facebook didn't make a similar deal with the FCC because they use an unconstrained resource (internet bandwidth).
Furthermore, Congress did the right thing and created a law explicitly calling out this scenario, making internet companies not liable for user-generated content.
Why did they do it? Because without that there would be madness.
I wonder if making platforms liable for providing the identity of content producers is a fair balance between providing the platform immunity and allowing those potentially harmed by content to pursue the content creator if needed.
I think it's an interesting idea to explore. I think if a platform makes a good faith effort to know their users that it would be able to defer liability onto them. Ideally this would be coupled with a privacy aware proof of identity system but perhaps that's asking for too much.
Sites could still allow anonymous communication, but they would need to vet it first and assume liability.
You can't go out in public in a mask and say "John Smith eats worms" (when he doesn't) and not have any repercussions.
One of the first challenges was Zeran v. America Online, where a business owner was harmed but was never able to compel AOL to provide the identity of the person creating the harmful content. Forcing platforms to provide identity information in pursuit of legal action would allow action against the formerly anonymous poster. As it stands today, an anonymous person (or botnet) can post whatever they want with no fear they will be exposed.
> As it stands today, an anonymous person (or botnet) can post whatever they want with no fear they will be exposed.
This is also how it has stood throughout American history. The foundation of the US was built on anonymous/pseudonymous pamphlets and secret communications between the "Founding Fathers."
I suspect Section 230 will need to be revamped or at least more thoroughly defined. Some interpretations suggest that any editorial action by a platform (aside from those required by law) changes them from being a common carrier to a publisher with all of the liability that goes with it. It only makes sense that there should be an intermediate zone where a platform provider can engage in editorialism/content restriction without falling under the liability of being a publisher.
> Some interpretations suggest that any editorial action by a platform [...] changes them from being a common carrier to a publisher
None that have been made with a straight face before a real court, that I'm aware of. That's the spin that the activists put on this, not something that anyone thinks SCOTUS is going to rule on.
You're absolutely right that 230 as written doesn't really speak well to the modern semi-automated echo chamber. But improperly written laws are congress's job to fix, not the courts. Courts step in when laws conflict, they aren't there to figure out how to solve problems with new laws.
The simplest way to look at the spirit of this law is: Congress said that internet companies shouldn't be punished just for hosting other people's opinions. And at the end of the day, TikTok and YouTube and Facebook are still just hosting this data. They didn't write it. They don't curate it. Anyone can post. Anyone can read.
Arguments about "recommendation algorithms" are legitimate, but not really in scope of the first amendment and liability issues envisioned by the original law. They're just not. It's not something congress thought about. And if congress didn't have a clue, why should the courts?
>You're absolutely right that 230 as written doesn't really speak well to the modern semi-automated echo chamber.
Section 230 does not speak about this because section 230 wasn't supposed to care about this. Section 230 was entirely about protecting large companies from legal harm when, say, the Christchurch shooter posts their spree on your platform, or someone uploads literal child porn to your platform, as long as you attempt to remove such content when it appears. Section 230 is simply a legal admission that moderation is hard but necessary, and a single failure should not doom your platform.
If we think the way large platforms recommend content is harmful, we should write legislation for that, instead of yet again fucking with a working law to hammer it into something it wasn't meant to do. The USA loves to do that and all it does is give us really shitty legislation.
This isn't a case about turning 230 into a law that it isn't supposed to be... this is a case wondering whether 230 serves as an exception to another law which does create liability here...
>The simplest way to look at the spirit of this law is: Congress said that internet companies shouldn't be punished just for hosting other people's opinions. And at the end of the day, TikTok and YouTube and Facebook are still just hosting this data. They didn't write it. They don't curate it. Anyone can post. Anyone can read.
Okay, so in your universe it's just as easy to punish them, not for the hosting, but for the promotion of the content, which is explicitly not covered by 230...
The problem isn’t the definition; it’s that companies want common carrier status without actually acting like one. The phone company doesn’t editorialize, and we would think it insane if they started doing so.
Except we are right now screaming that the phone companies should be responsible for eliminating spam from the phone system! That exact moderation should not cause them to lose their common carrier status, and that moderation should be expected, and that moderation should not cause them to be responsible if someone uses their phone network to call in a bomb threat.
All section 230 does is make clear that if you don't have people signing off on each and every bit of content on your platform, but respond with a good effort to reports of illegal content, then you will not be held responsible for someone using your platform as a way to do illegal things.
If all these weird people that misunderstand this get what they want, it doesn't mean youtube recommends their conservative creators more, what it means is that youtube stops doing any personalized recommendations, and all you are going to see on youtube is Logan Paul, Mr Beast, and the other mega creators that little children love.
I think we would rather have tools to eliminate phone spam, such as a working caller ID system, than a phone company that monitors content and deprioritizes people who curse excessively, etc.
Humans don't work like formal logic processors and are capable of navigating the ambiguity between "ban spammers" and "editorialize" without much difficulty.
> All section 230 does is make clear that if you don't have people signing off on each and every bit of content on your platform, but respond with a good effort to reports of illegal content, then you will not be held responsible for someone using your platform as a way to do illegal things.
Web hosts that just serve content when a client requests it should definitely have this protection. Great idea. YouTube promoting outrage bait because it makes them more money? God no, no protection beyond what any other business doing that would have, not on the Internet. The line's somewhere between those.
No, I would not want HN legally responsible for comments posted by users.
Hacker News does not directly promote content (aside from prioritizing content based on non-content factors [age, voting, etc.]). It does moderate, but that is not the same as promotion.
The crux of the argument in this case is that the plaintiffs want Google held liable for promoting content, not hosting it. The difficulty the Supreme Court seems to have is understanding what the boundaries are between promoting content and simply delivering a usable view of content choices to consumers, and whether Congress intended for that difference to matter with regards to Section 230.
Is Hacker News' default feed, which promotes highly-upvoted, newer stories to the front of the list, an example of HN promoting those stories? If so, should HN be held liable if that algorithm pushes content that harms others?
I genuinely do not understand your definition of "promote." Is the difference that YouTube recommendations are personalized?
Of course HN promotes content. There's much more content submitted than can fit on a screen. Automated filtering, human moderation, and signals from other users decide what to show you -- on both sites.
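As a concrete illustration, here's a minimal sketch of a commonly cited approximation of HN-style front-page ranking (not the actual algorithm; the stories and numbers are made up). Even the simplest "neutral" scoring function is a stack of choices: how hard to decay by age, how much a vote counts, what gets penalized.

```python
# A commonly cited approximation of HN-style ranking (not the real algorithm).
# The gravity constant alone decides whether a fresh story beats yesterday's
# heavily-upvoted one -- an editorial choice expressed as a number.
def rank(votes, age_hours, gravity=1.8):
    return (votes - 1) / (age_hours + 2) ** gravity

stories = [("fresh story", 40, 2), ("yesterday's hit", 300, 24)]
for title, votes, age in sorted(stories, key=lambda s: -rank(s[1], s[2])):
    print(f"{title}: {rank(votes, age):.2f}")  # the 40-point story outranks the 300-point one
```

Whatever function you pick, the site is deciding what to show; the question is only whether the law treats some of those decisions differently.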
I think your comment shows the difficulty SCOTUS is having with this case. What's the difference between "promotion" (in my definition, that means putting an influence behind it to improve its standing with the viewer) and other actions commonly necessary to display information, and is that difference enough to eliminate the Section 230 protections for the defendants?
In my language, I say HN does not "promote" content because the view for me is substantially the same as the view for everyone else. Contrast that with YouTube, where Google's algorithms elevate content specific to individual users, thus leading me to conclude that Google thinks I "should" watch the content. I would argue that far more than automated filtering, signals from other users, and human moderation goes into YouTube's recommendations -- particularly a calculation of the revenue to Google of you watching the video.
But again, this concept that Google "promoted" the content that harmed the plaintiffs and should thus be liable for that action is the heart of the case, not that Google should be liable for the content itself.
So if YouTube used the same deep learning models to push people to more extremist content but without using any user signals so that everyone has the same recommendations that's no longer promotion?
Fundamentally, any decision a site makes to filter and sort the content it shows, including HN ordering by vote count and mixing in new content to allow it to make the top page, is an explicit choice they are making that cannot be differentiated from "promotion".
> So if YouTube used the same deep learning models to push people to more extremist content but without using any user signals so that everyone has the same recommendations that's no longer promotion?
I don't want to respond to part of your comment and not the other, so I'll just say: I don't know, because exactly defining the specifics is not my goal.
> Fundamentally, any decision a site makes to filter and sort the content it shows, including HN ordering by vote count and mixing in new content to allow it to make the top page, is an explicit choice they are making that cannot be differentiated from "promotion"
This is the part of your comment that actually matters to me. This statement is so definitive, and yet there are people arguing just as definitively that some ways of prioritizing content for users create liability while others do not. I think there is a difference between "elevating when it otherwise wouldn't be elevated" and "providing a moderated list", but those two states are probably separated by a grey area, not a bright line. I think how YouTube identifies content for users is distinguishable from what HN does, but I also don't know if that difference matters with regard to liability, especially with regards to Section 230, which makes no attempt to legislate HOW content is made discoverable.
I'm very unclear about what "otherwise" means in "elevating when it otherwise wouldn't be elevated".
There's no natural state of how content would be displayed, any choice of how to do it would result in a moderated list that elevates something that wouldn't have otherwise been elevated with a different approach.
There is no distinction to me between what YouTube and HN do, and I certainly don't think the law should treat them any differently. Both should be legally protected regardless of which specific approach it takes.
It absolutely can be differentiated. One major one is that Youtube is optimizing for engagement and increased viewing time to keep people on the website longer to see more ads and increase their profits. That intent is entirely different from HN's "promotion algorithm".
The law can and does differentiate across lines like these even if they're both technically "promotion algorithms".
HN optimizes for increasing engagement from people in the tech industry to show more ads for portfolio companies' job postings, which increases YC's profits.
> Hacker News does not directly promote content (aside from prioritizing content based on non-content factors [age, voting, etc.])
This, by many of the same arguments people say about Youtube, would absolutely constitute a recommendation. Even if it's primarily user driven, HN is responsible for the synthesis of all these values which results in content appearing on the front page and tuning which content appears or is downweighted. You can try to play word games but the reality is that should Section 230 go, no one's going to risk the cost of lawsuits to discover what the limits of promoted content / algorithms is.
I do think that any site which believes it is too burdensome to be legally responsible for their content should not be permitted to market their content as safe for children.
Personally, I would want nobody to be "legally responsible" for comments posted by anybody or rather, I disagree that there should ever be any legal repercussions of any kind for something somebody just said.
Well, you guys are nothing if not predictable, I knew this would be the first response.
But you're living in a fantasy world if you think that I, personally, could ever recover meaningful damages from somebody with enough reach to meaningfully defame me. Rather, defamation laws are being successfully (ab)used to silence politically inconvenient people like Alex Jones.
So you won't respond to whether or not you'd care that you were defamed, and instead offered some ridiculous made-up scenario where there's no way you'd recover from defamation? Do you do defamation litigation? Oh, you don't?
>Rather, defamation laws are being successfully (ab)used to silence politically inconvenient people like Alex Jones.
LOL. Strong disagree there. He's not being silenced by any means. He's free to engage in whatever speech he wants, even defamatory speech. He need only pay for the damages he causes in doing so. I can't imagine anything more libertarian than that.
Any Web 2.0 system's basic premise is that it provides a space where humans can be humans. The bedrock assumption is that people can POST things that a server will instantly and automatically display publicly, and that the server owner will not instantly become legally liable for what they post.
This mirrors pretty well how the rest of the world operates. If I walk into a Macy's and start screaming libel or inciting a riot, the Macy's Corporation is not liable for that speech. Why should a 'digital property' like a social media site work differently?
Of course, it doesn't totally erase the server owner's liability. If someone posts child pornography that my server displays, it is my responsibility to remove it as soon as I become aware of it, just as it is Macy's responsibility to escort the libel-screamer off the premises. Failure to do that can confer liability onto me.
To break this consensus is to break the ability to foster human interactions on the internet that mirror how humans interact in the real world.
I wonder if scale of distribution would be a reasonable way to scope liability. For example, if you create a post and 50 of your friends see it, immunity for the platform. If you create a post and the platform distributes it to 1 million people, they now have publisher liability.
I think this is really smart. It reminds me of something I read within the last few months, but can't find right now, that described the descent from "social networking" (a nearly forgotten term, despite being the term for websites like Facebook just a decade ago) into the hell of modern "social media." The key difference is that social networking was about connecting with people you actually knew (realistically up to a couple hundred people), and social media is about broadcasting to the masses (measured in thousands or millions) with the help of these algorithms to promote engagement at any cost. That's where everything just went off the rails, from my perspective. I don't know why your comment got down-voted.
Edited to add: It seems like this is what distinguishes a telephone company from a television broadcaster too.