This is one of the few instances of generative AI for images that I actually like.

I've tried it, and my account was shadowbanned a few hours after I created it. It's very obnoxious.

Reddit bots shadowban almost everyone who posts before they have enough comment karma. Nothing to do with Tor or VPNs.

I didn't try posting, I tried commenting.

There's only one explicit reference to capitalism.

Also a comparison to Chernobyl (which no one would ever think was anywhere near related). Clearly the author wanted to communicate “something” more than the interesting takeover by nature.

I would agree that the root cause analysis of these two disasters is pretty different, and not super related to capitalism. But I don't think they were really trying to push that connection.

Can't wait for this to fail hilariously, complete with legal troubles.

That's wild. Google's AI summaries are so frequently wrong. I hope you understand that you're getting bad info without knowing which bits are bad.

I mean, in this case it's pretty quick to tell that it's not right: try sam deploy and it fails. But for the most part it's working; it seems straight out of AWS's docs.


Or "...For the Tasks That Were Measured". You can always complain that it's not universal enough.

Humans don't suck at arithmetic.

Anecdata: Most cashiers used to be able to give correct change at checkout very quickly; only a few would type it into the register to have it do the math. Nowadays, with so many people using cards etc., many of them freeze up and struggle with basic change-making.

It's just a matter of keeping in practice and not letting your skills atrophy.
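(Tangent: the register's side of that arithmetic is just greedy change-making. A rough Python sketch, assuming standard US denominations and nothing specific to any particular register:)

    # Greedy change-making: repeatedly hand back the largest denomination
    # that still fits, until the remainder is zero. Correct for US coins/bills.
    def make_change(price_cents: int, tendered_cents: int) -> list[int]:
        denominations = [2000, 1000, 500, 100, 25, 10, 5, 1]  # assumed US set, in cents
        remaining = tendered_cents - price_cents
        change = []
        for d in denominations:
            while remaining >= d:
                change.append(d)
                remaining -= d
        return change

    # Example: $7.37 paid with a $10 bill.
    print(make_change(737, 1000))  # [100, 100, 25, 25, 10, 1, 1, 1] -> $2.63

The counting-up method cashiers practice reaches the same coins from the other direction: start at the price and add up to the amount tendered.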


This isn't directly to your point, but: A civil suit for such an incident would generally name both the weapon owner (for negligence, etc.) and the manufacturer (for dangerous design).

That surprises me -- from what I've seen, Daniel is actually remarkably tolerant of incomplete/unclear reports. (Too tolerant.) But I imagine that could depend on the day.

(Now, if you used AI to generate the report, well... that's different. Especially if you didn't disclose it up front.)


On the flip side I’ve been following him for a while on Mastodon.

I’ve basically watched the AI crap cycle go from “this is a weird report, oh it’s fake” to “all the reports are trash, it’s so hard to find real humans in the flood” through his posts.

I suspect I would’ve stepped down long ago. I feel so bad for the open source maintainers who are just being assaulted with nonsense.


I submit typo PRs sometimes. I just really like cleaning up docs, and some typos are important because they affect doc searchability. (But I do bundle them up so there's just one PR, and I generally won't do it for a single typo.)
