
Real machine learning research has promise, especially over long time scales.

Imminent AGI/ASI/God-like AI/end of humanity hawks are part of a growing AI cult. The cult leaders are driven by insatiable greed and the gullible cult followers are blinded by hope.

And I say this as a developer who is quite pleased with the progress of coding assistant tools recently.


How does this work if your repos aren't on GitHub? And what if your code has nothing to do with backend web apps?

GitHub only for now. Out of curiosity, is yours on GitLab? Something else?

We should be able to find something interesting in most codebases, as long as there's some plausible way to build and test the code and the codebase is big enough. (Below ~250 files the results get iffy.) We've just tested it a lot more thoroughly on app backends, because that's what we know best.


> Out of curiosity, is yours on GitLab? Something else?

Something else, it's a self-hosted Git server similar to GitHub, GitLab, etc. We have multiple repos well clear of 1k files. Almost none of it is JavaScript or TypeScript or anything like that. None of our own code is public.


I think that's just the name they picked. I don't mind it. Taking a glance at what it actually does, it just looks like another command-line coding assistant/agent similar to Opencode and friends. You can use it for whatever you want, not just "vibe coding", including high-quality, serious, professional development. You just have to know what you're doing.

> run locally for agentic coding. Nowadays I mostly use GPT-OSS-120b for this

What kind of hardware do you have to be able to run a performant GPT-OSS-120b locally?


RTX Pro 6000; it ends up taking ~66GB when running the native MXFP4 quant with llama-server/llama.cpp and max context, as an example. Guess you could do it with two 5090s and slightly less context, or with different software aimed at memory efficiency.

That has 96GB GDDR7 ECC, to save people looking it up.

The model is 64GB (int4 native), add 20GB or so for context.

There are many platforms out there that can run it decently.

AMD Strix Halo, Mac platforms, two (or three, without extra RAM) of the new AMD AI Pro R9700 (32GB of VRAM, $1200), multi-consumer-GPU setups, etc.
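
If it helps, here's a quick back-of-the-envelope check of those numbers in Python. The 64GB + 20GB split and the per-platform capacities are the rough figures quoted in this thread, not measurements, so treat the list as illustrative:

    # Rough VRAM budget for GPT-OSS-120B at its native MXFP4 quant,
    # using the approximate figures quoted above.
    WEIGHTS_GB = 64    # int4-native (MXFP4) weights
    CONTEXT_GB = 20    # KV cache + overhead at max context (rough)
    needed = WEIGHTS_GB + CONTEXT_GB  # ~84GB

    # Capacities per the thread; illustrative, not exhaustive.
    platforms = {
        "RTX Pro 6000 (96GB)": 96,
        "2x RTX 5090 (64GB)": 64,
        "3x AMD AI Pro R9700 (96GB)": 96,
        "128GB unified memory (MBP / Strix Halo)": 128,
    }
    for name, vram_gb in platforms.items():
        verdict = "fits" if vram_gb >= needed else "needs a smaller context or offloading"
        print(f"{name}: {verdict}")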


MBP, 128GB.

How do you know?

If it happens today, OP is right, and if it happens in a century they are too.

What if it's in a millennium?

That's the nice thing about completely unsubstantiated, baseless claims on the Internet: if it ever happens, you can always point at it like you're Nostradamus.

My predictions:

Actual zombie president in 2044.

New COVID in 2061.

Dinosaurs come back in 2123, reveal they've been steadily populating hidden Nazi underground bunkers and have declared peace with the yeti.


I've connected the dots.

I'm not sure how it is in other countries, but here in the US, gas cars and EVs are political statements.

Tech company leadership sees AI as a shortcut to success. You know how in project planning meetings engineers are usually asked how they can pull in the schedule by x number of months? AI is now that thing. Obviously, this is a mistake.

The cult of AI maximalists isn't helping the situation.


> LLMs "survive" by being useful - whatever use they're put to.

I might be wrong or inaccurate on this because it's well outside my area of expertise, but isn't this what individual neurons are basically doing?


I could be wrong, but I think a lot of the negativity comes from people who want a modern laptop with decent port selection, a good screen, and a good keyboard, fully supported by Linux because everything is open. Quality hardware with support when you want it, and open documentation and open drivers if you want to do something yourself. Like a MacBook Pro, but with USB-A ports and built with 100% Linux compatibility from the ground up.


It will probably take decades for machine learning to transform the way we live and work.


Yes, just like computers and later the internet. The technology always precedes the cultural/economic changes by decades.


Growth in the PC market and internet usage had a substantial bottom-up component. The PC, even without connectivity, was useful for word processing, games, etc. Families stretched their budgets to buy one in the '80s and '90s.

Internet traffic famously doubled every 100 days during the expansion era. The internet's usefulness was blindingly obvious: there was no need for management to send out emails warning that they were monitoring internet usage and you'd better make sure you were using it enough. Can you imagine!
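
For scale, here's what that doubling rate compounds to per year (a quick sketch; the 100-day figure is the famous claim from that era, not something I've measured):

    # "Doubling every 100 days" works out to roughly 12.6x growth per year.
    annual_factor = 2 ** (365 / 100)
    print(f"~{annual_factor:.1f}x per year")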

We are at a remarkable point in tech. The least-informed people in an organization (the execs) are pushing a technology onto everyone below them. A jaw-droppingly enormous amount of capital is being deployed in what is essentially a "pushing on a rope" scenario.


And sometimes a technology disappears entirely for a while, either because the world isn't culturally ready for it and can't yet adapt to it, or because it wasn't delivered in the right form.

Google Glass comes to mind: it died 11 years ago, and XR is only just now starting to resurface.

Tablets also come to mind: pre-iPad, they more or less failed to achieve any meaningful adoption and sort of disappeared for a while, until Apple released the iPad.

Then you have the Segway as an example of innovation failure that never really returned in the same form the way the others did; instead we now have e-scooters and e-bikes, which fit better into existing infrastructure and cultural attitudes.

It's quite possible LLMs are just like those other examples, and the current form is not going to be the successful form the technology takes.


An even more recent example is the Metaverse. Something that has a similar pattern of being pushed top-down onto employees. Remember when Mark Zuckerberg decreed[1] that employees must spend part of their time in Horizon Worlds?

[1] https://futurism.com/facebook-employees-confused-metaverse

