Hacker News | freejazz's comments

Here's one example. Have you not been following DOGE? You do come off like you're disingenuously concern trolling over something you don't agree with politically.

https://krebsonsecurity.com/2025/04/whistleblower-doge-sipho...


> You do come off like you're disingenuously concern trolling over something you don't agree with politically.

Beyond mere political alignment, lots of actual DOGE boys were recruited (or volunteered) from the valley, and hang around HN. Don't be surprised by intentional muddying of the waters. There are a bunch of people invested in managing the reputation of DOGE, so their association with it doesn't become a stain on their own.


Great point. It's all so funny because DOGE was just so ridiculous on its face.

>Do you have any actual evidence of this?

Any evidence it was an actual audit?


In the US, you can't sue without having obtained or applied for a registration. If the registration is not granted, you cannot sue. And you cannot get a registration for code developed by AI.

"reasoning."

It's real reasoning. But it's not comparable to a human level.

It turns in top-level performance on original, out-of-distribution problems given in international math and programming competitions, but it's "not comparable to a human level." Got it.

> In that way, perhaps we now have a monotexture of writing, but also perhaps more interesting ideas being shared.

They aren't your ideas if they're coming out of an LLM.


Are they your ideas if they go through a heavy-handed editor? If you've had lots of conversations with others to refine them?

I dunno. There are ways to use LLMs that produce writing that is substantially not-your-ideas. But there are also definitely ways to use them to express things the model would not otherwise have output without your unique input.


Counterargument: they are still your ideas even if they went through an LLM.

Unsubstantiated

wrong

Then substantiate them.

I don't understand why people think that AI "being able to code" has any bearing on anything else that humans do

Because AI doesn't mean 'AI'; it means massive compute, massive amounts of data, and machine learning.

All of that pushes everything forward: LLMs and any alternative architecture to LLMs, GenAI for images, sound, and video, movement for robotics, and image feature detection.

Segment Anything 2 was a breakthrough in image segmentation, for example.

The latest Google Weather model is also a breakthrough.

All progress in robotics is ML driven.

I don't think any investor thinks that OpenAI will achieve AGI with an LLM. It's Data + Compute -> Some Architecture -> AI/ML Model.

Whether it will become one golden model capable of everything, a thousand expert models, or a mixture-of-experts model, we don't know yet.


Yeesh

> Yet, almost everything else seems to prove the opposite based on how fragile life is, and how little things change when one is lost.

What a sad way to view things


No


IPRs are generally used by Big Tech companies, and I have no idea how the EFF's position could be construed as being in the interest of the general public at all.


Read TFA. The EFF has apparently helped successfully fight several patent trolls using this process, which is good for everybody.


Look up the definition of the word "generally", then go check the docket, and then get back to me.
