
It seems to me the difference in this case is that it complains about content promotion, not content publishing. The issue is that YouTube's algorithms promoted extremist content to people who were prone to extremist behavior. That's not quite the same as simply hosting extremist content uploaded by users.

As it says on the Supreme Court site:

"Issue: Whether Section 230(c)(1) of the Communications Decency Act immunizes interactive computer services when they make targeted recommendations of information provided by another information content provider, or only limits the liability of interactive computer services when they engage in traditional editorial functions (such as deciding whether to display or withdraw) with regard to such information."

https://www.scotusblog.com/case-files/cases/gonzalez-v-googl...



Google decides what emails go in the gmail inbox vs spam folder vs rejected outright. Should they be legally responsible if an offensive email lands in your inbox?


My inbox is not available to the public, does not have public “like” counts and view counts, and cannot be shared with a single click with the same viral network effects (sure, emails can be forwarded, but I think we can agree sharing emails and sharing on social media are wildly different).

So I think there’s a reasonable argument to be made about the difference here. Agree?


I'm having a hard time connecting this to Section 230. So you're saying there should be an exemption for recommending harmful content but only if the content is easier to share than forwarding an email?


The issue is that there is no connection to Section 230. Section 230 deals with liability for user-generated content posted publicly. Email is... not that.


E-mail absolutely is user-generated content. Section 230 doesn't say anything about "posted publicly". Now, there's not quite as much need for it to protect non-public content, because getting people to sue over that is a bit harder, but it really is rightfully protected too.


I don't think that is correct.

Section 230 is a very short section of law, you can read it yourself: https://www.law.cornell.edu/uscode/text/47/230

In any event, it was just an example. Feel free to substitute "HN decides what to show on the homepage" instead.


There are no ToS associated specifically with Gmail. The service has no guarantees or expectations set whatsoever. You get better guarantees on email reception from running your own server.

Therefore it's hard to say what the Gmail user could even base their complaint on. Google can do whatever they want with the incoming email as far as categorization and blocking goes, and nothing in the nonexistent ToS is violated.


I don’t think spam filtering is promoting content so much as it is removing material.

I think they would be liable if they took emails that weren’t addressed to you and showed them in your inbox and it did some harm.

Or if they showed an ad that did some harm.


I would challenge you to craft a legal opinion around 230 that excludes Youtube recommending harmful content from protection but not Google placing harmful content in the "Priority Inbox".


There isn't difficulty there. Lawyers aren't having problems navigating this language. Obtuse techies who obsess over edge-case definitions are having difficulty. No one reasonable would describe spam filtering as a recommendation algorithm, and to suggest otherwise is either outright ignorant or facially disingenuous. The law doesn't care what can be argued to be a promotion algorithm based upon some HN poster's obtuse reliance on a specific word; the law deals with reasonableness as a standard all the time.


Navigating what language? I agree the law today is pretty clear. This is speculation on a ruling that doesn't exist yet and that could change all that.

Your suggestion is that there would be a carve out for spam filtering? Or that Google deciding what goes in "Promotions" and what goes in "Priority Inbox" isn't a recommendation?


The language in 230...

My suggestion is that your hypothetical is useless and inapplicable because it is at best, reflective of your own personal misunderstanding, or at worst, outright disingenuous.


IANAL, but I would start developing my argument with the idea that emails Google placed in my Priority Inbox were sent to me specifically, and the intention of the sender was that I specifically would see them. Google is still not putting anything in front of my eyes that was not intended to be there anyway.

When YouTube recommends content to me, the original author did not target that content specifically to me, and YouTube alone is making the decision to put it in front of my eyes.


When you post a story on HN you aren't specifically choosing to send it to a person. Is HN responsible if a harmful story reaches the front page?


They could rule against personalised recommendations (YouTube) while protecting recommendations where everyone sees the same thing (HN). In HN’s case I’m not sure it would matter much either way. HN is pretty heavily moderated already. If stories went into a moderation queue before hitting the main page rather than being retroactively moderated, I’m not sure many of us would notice a difference.
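
To make that distinction concrete, here's a toy sketch (Python, purely illustrative; the function names are made up and don't reflect how YouTube or HN actually rank anything) of a ranking everyone sees versus a personalised one:

    # Toy illustration of the distinction above, not real code from any platform.

    def global_front_page(stories):
        """Everyone gets the same list, ordered purely by community votes."""
        return sorted(stories, key=lambda s: s["votes"], reverse=True)

    def personalised_feed(stories, watch_history_tags):
        """Each user gets a different list, ordered by overlap with that
        user's own viewing history."""
        def affinity(story):
            return len(set(story["tags"]) & watch_history_tags)
        return sorted(stories, key=affinity, reverse=True)

    stories = [
        {"title": "A", "votes": 50, "tags": {"politics"}},
        {"title": "B", "votes": 10, "tags": {"woodworking"}},
    ]
    print(global_front_page(stories)[0]["title"])                   # "A" for everyone
    print(personalised_feed(stories, {"woodworking"})[0]["title"])  # "B" for this user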


The heavy moderation of HN would mean that they would be more liable for content. And the algorithms showing the front page recommendations would likely be found to be similar to "what a visitor to YouTube who isn't signed in sees" or "what you see if you go to https://twitter.com without being logged in."

Instead, HN's view would likely become "everyone sees https://news.ycombinator.com/newest and showdead is set to 'yes'"

I'm not sure how to construct an argument that would allow HN's front page while at the same time curtailing YouTube's not-signed-in front page; both are recommendation algorithms.


They could rule against personalised recommendations whilst protecting recommendations where everyone sees the same thing, but logically it seems more likely they'd do the opposite: it's more reasonable to consider a single centralised top-stories system, consistently highlighting the same content to all users, a reflection of a "publisher" preference than an algorithm which highlights content based on each individual user's activity and whatever filters that user may have set; moderators tend to intervene manually in the former more; and of course it's less reasonable to demand that all potentially libellous content be screened out of procedurally generated per-user feeds than out of a "top stories" list or home page.

I think most of us would notice the difference if HN were only allowed to rank comments in chronological order, never mind if, for liability reasons, the stories permitted to appear in order of upvotes were restricted to the ones Dang was satisfied weren't libellous, or possibly to none at all if YC decided it wasn't worth the risk.


The comments too? Everything in a moderation queue and potentially serious penalties for getting it wrong? I'm pretty sure you would notice.

And would the developer of a Mastodon client be personally liable if their algorithm recommends someone harmful as a new account to follow?


HN is not recommending stories. Community members are endorsing or flagging stories, and HN displays them ranked based on that process.

There is moderation as well, with the removal of stories, but I’m not sure that amounts to responsibility for removing harmful content. If someone posted a slanderous story or other illegal content and it was allowed to stay for some length of time, then I think HN would be responsible. The most egregious case would be if a child porn story was on the front page for days because HN staff chose to leave it there.

For YouTube, they are suggesting beheading videos to my child, and I think they bear some responsibility for doing that, and hopefully will stop doing it. They are making editorial decisions to promote content and so, I think, shouldn’t be protected by 230.


HN does recommend stories. If the mods feel a story doesn't deserve its virality, they will manually weigh it down; if they feel a story isn't getting the visibility it does deserve, they will manually boost it. They will even sometimes replace the posted URL with one they feel is more relevant. This forum absolutely does not place content based solely on user input.
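
For what it's worth, the kind of ranking being described looks roughly like the sketch below (hypothetical Python; loosely modelled on a publicly discussed approximation of HN-style ranking, and the real weights and moderator tools aren't public):

    # Hypothetical vote-plus-moderation ranking; not HN's actual code.

    def story_score(points, age_hours, flag_count, mod_weight=1.0, gravity=1.8):
        """Votes push a story up, age and flags pull it down, and a moderator
        can scale the whole thing up or down by hand."""
        base = (points - 1) / ((age_hours + 2) ** gravity)
        flag_penalty = 1.0 / (1 + flag_count)
        return base * flag_penalty * mod_weight

    # Same votes, same age: the manually down-weighted story ranks lower.
    print(story_score(points=120, age_hours=3, flag_count=0))
    print(story_score(points=120, age_hours=3, flag_count=0, mod_weight=0.2))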


What does 'choose to leave it there' mean in a legal sense?

For example, if I, the moderator, check the site once a day, and someone posts 5 minutes after I leave, would the law say it's OK for the content to remain up another 23 hours because no moderation decision occurred? Is there now a legal requirement to ensure you moderate fast enough?


How could they fix that? If it were that easy to reliably identify illegal/extremist content and prevent it from being recommended, they'd just take it down.


The obvious options, if targeted recommendations are found not to be protected, are to only use them with curated content (Netflix) or to not use them at all (Reddit).
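
A crude version of the "curated content only" option is just filtering recommendation candidates against a human-reviewed allowlist before ranking (illustrative Python sketch; all names here are made up):

    # Illustrative only: restrict a recommender to human-reviewed items.

    def recommend(candidates, curated_ids, rank_fn, limit=10):
        """Drop anything that hasn't been reviewed, then rank what's left."""
        reviewed = [item for item in candidates if item["id"] in curated_ids]
        return rank_fn(reviewed)[:limit]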


That would completely destroy the experience of using YouTube for many people; the reason I like it is that I can see really niche content that has a really small target audience. With curated recommendations I wouldn't discover that type of thing at all, which would be bad for both me and the creators, and with no recommendations it would be even harder to find. Reddit does do targeted recommendations, by the way, though maybe it's just based on what subreddits you've joined, so maybe that isn't the same thing, since the user chose to receive it.

(I'm not trying to imply that YouTube should work exactly how I want it to, just using myself as an example of what I think many people use it for.)



