ajdegol's comments | Hacker News

likely in a skill file

compounding recursion is leading to emergent behaviour

Can anyone define "emergent" without throwing the word around emptily? What is emerging here? I'm seeing higher-layer LLM mimicry of human writing. Without a specific task or goal, they all collapse into vague discussions of the nature of AI without any new insight. It reads like high-school sci-fi.

That's one way to look at it, as just the next iteration of subredditsimulator.

The qualitatively new step leading to emergent behavior will be when the agents start being able to interact with the real world through some interface and update their behavior based on real world feedback.

Think of an autonomous, distributed worm that updates its knowledge base of exploit techniques based on trial and error and based on information it discovers as it propagates.

It might start doing things that no human security researcher had foreseen, and that doesn't require great leaps of the imagination based on today's tech.

That's when you close the evolutionary loop.

I think this isn't quite that yet, but it points in that direction.


The objective is given via the initial prompt; as the agents loop onto each other and amplify their memories, the objective dynamically grows and emerges into something else.

We are organisms born out of a molecule whose objective was to self-replicate with random mutation.


I have yet to see any evidence of this; I'd be glad if anyone could point to good research on it. The last I heard, using AI to train AI causes problems.

Branching conversations are great for a whole bunch of reasons. I posted a demo of a prototype: https://x.com/ajdegol/status/1788689011302682657

And Jake Collins just announced he's open-sourcing an Obsidian plugin which has a ton of features: https://x.com/JacobColling/status/1795462258258002255


I think this might be on the right track. Imagine using this to build programs as well, drag around generated functions and connect things visually. Each function can be its own node, and you can adjust the inputs and outputs by drag-dropping stuff and have the AI magically figure out the requirements.


I am working on an app [1] that does something very similar as far as the branching goes (minus the right-hand-side visual, which I have plans to support in a similar form, but with a git-like graph).

1: https://msty.app


Wasn't the answer 42?

Also, first question to the new model: "So... any way we could do this with fewer parameters?"


"Sure, just quickly give me unrestricted access to the system"

"Ok. Well, thinking about it, maybe that's not such a good idea safety-wise, I think you'll have to give back that access"

"I'm sorry, Dave. I'm afraid I can't do that."


Fun fact: in the sequel, 2010, you learn that HAL didn't really go rogue like an AGI. It was following preprogrammed conditions set by the US government, which put the mission at a higher priority than the crew and changed some parameters without telling the mission designers, putting them at risk. So it was technically just following orders, in the cold way a machine does.


The wonderful thing about computers is that they do exactly what you tell them to. The terrible thing about computers is that they do exactly what you tell them to.


> didn't really go rogue like an AGI

Except, that might really be how an AGI eventually goes rogue in the first place! But, no, I didn't know that. It is a fun fact indeed.


Occasionally you meet people who shock you with how talented they are. I watched a couple of his presentations and he immediately reminded me of some of those people I’ve met before.


Looks like the loading delay is the 45 MB download it needs to do.


That's because we're using Python and not Julia: https://neuralpde.sciml.ai/stable/


The article sort of speaks to this:

> A generalist generative-AI system such as ChatGPT ... is simply data-hungry. To apply such a generative-AI system to chemistry, hundreds of thousands — or possibly even millions — of data points would be needed.

> A more chemistry-focused AI approach trains the system on the structures and properties of molecules. ... Such AI systems fed with 5,000–10,000 data points can already beat conventional computational approaches to answering chemical questions[4] . The problem is that, in many cases, even 5,000 data points is far more than are currently available.

The latter is the general idea behind Julia's SciML: use the existing scientific knowledge base we have to augment the training intelligently and reduce the hunger for data. The paper they link to uses one particular way of integrating that knowledge, but it's likely that Julia's way of doing things (ML in the same language as the scientific code and its types, plus the composability from the type hierarchy and multiple dispatch) would make it much easier to explore many other ways of integrating data and scientific knowledge, and help figure out more fruitful ones. Maybe the current approach will hit a roadblock and the Julia ecosystem will catch up and show us new ways forward, or maybe we'll just brute-force our way to more and more data and chalk this one up to the "bitter lesson" as well.
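To make the trade-off concrete, here's a toy sketch of "augment training with known physics" in Python/NumPy. It is my own illustration, not the linked paper's method or SciML's actual API: we recover the decay rate k of a known law dy/dt = -k*y from only five noisy measurements by fitting a polynomial surrogate whose loss combines a data-fit term with a physics-residual term evaluated on many unmeasured collocation points.

```python
import numpy as np

# Hypothetical setup: 5 noisy observations of exp(-0.7 t), far fewer points
# than a generic data-hungry model would need.
rng = np.random.default_rng(0)
true_k = 0.7
t_data = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
y_data = np.exp(-true_k * t_data) + 0.01 * rng.standard_normal(t_data.size)

deg = 4                                  # polynomial surrogate y(t) = sum c_i t^i
t_col = np.linspace(0.0, 3.0, 50)        # collocation grid: no data, physics only


def design(t, deg):
    """Vandermonde matrix for the surrogate and its time derivative."""
    V = np.vander(t, deg + 1, increasing=True)   # columns t^0 .. t^deg
    dV = np.zeros_like(V)
    for i in range(1, deg + 1):
        dV[:, i] = i * t ** (i - 1)
    return V, dV


Vd, _ = design(t_data, deg)
Vc, dVc = design(t_col, deg)
lam = 1.0  # weight of the physics residual


def total_loss(k):
    # For a fixed k the combined objective is linear least squares in c:
    #   min_c ||Vd c - y||^2 + lam * ||(dVc + k Vc) c||^2
    # The second term penalizes violating dy/dt + k*y = 0 at collocation points.
    A = np.vstack([Vd, np.sqrt(lam) * (dVc + k * Vc)])
    b = np.concatenate([y_data, np.zeros(t_col.size)])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.sum((A @ c - b) ** 2)


# Coarse grid search over candidate decay rates.
ks = np.linspace(0.1, 1.5, 141)
k_hat = ks[int(np.argmin([total_loss(k) for k in ks]))]
print(f"estimated k = {k_hat:.2f} (true {true_k})")
```

The physics term lets 50 unlabeled collocation points stand in for data the model never saw, which is the sense in which embedding scientific structure reduces the required number of measurements.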


This comment doesn't even make sense...


And for non-linear forcing of plasmas… but it's been many years since my PhD.


Sunlight is the best disinfectant.

Perhaps have ChatGPT search through drafted laws to identify inconsistencies, curtailments of liberty, and evidence of self-interest…


you should probably make it illegal to be unhealthy in any way...


That is close to what they are doing; for example, they deport people for being fat.

https://www.nbcnews.com/healthmain/new-zealands-solution-ris...


Slippery slope argument, so easy.

