Can anyone define "emergent" without throwing it around emptily? What is emerging here? I'm seeing higher-layer mimicry of human writing by LLMs. Without a specific task or goal, they all collapse into vague discussions of the nature of AI without any new insight. It reads like high-school sci-fi.
That's one way to look at it, as just the next iteration of subredditsimulator.
The qualitatively new step leading to emergent behavior will be when the agents can interact with the real world through some interface and update their behavior based on real-world feedback.
Think of an autonomous, distributed worm that updates its knowledge base of exploit techniques through trial and error and from information it discovers as it propagates.
It might start doing things that no human security researcher had foreseen, and that doesn't require great leaps of imagination given today's tech.
That's when you close the evolutionary loop.
I think this isn't quite that yet, but it points in that direction.
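To make the "closing the loop" idea concrete, here's a minimal toy sketch in Python (all names hypothetical, and nothing worm-like, just the bare feedback structure): an agent keeps a knowledge base of strategies, tries them against an environment, and reweights toward what actually works.

    import random

    # Hypothetical environment: each strategy has an unknown success rate.
    TRUE_SUCCESS = {"a": 0.2, "b": 0.7, "c": 0.5}

    def try_strategy(name):
        """Stand-in for 'acting in the real world': returns success/failure."""
        return random.random() < TRUE_SUCCESS[name]

    # Knowledge base: observed attempts and successes per strategy
    # (initialized optimistically so everything gets tried).
    kb = {name: {"tries": 1, "wins": 1} for name in TRUE_SUCCESS}

    for step in range(500):
        # Prefer strategies that have worked, but keep exploring (epsilon-greedy).
        if random.random() < 0.1:
            choice = random.choice(list(kb))
        else:
            choice = max(kb, key=lambda n: kb[n]["wins"] / kb[n]["tries"])
        kb[choice]["tries"] += 1
        kb[choice]["wins"] += try_strategy(choice)  # feedback closes the loop

    for name, stats in kb.items():
        print(name, round(stats["wins"] / stats["tries"], 2))

After enough iterations the knowledge base converges on the strategy the environment actually rewards, with no human having told it which one that is. That's the loop in miniature.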
The objective is given via the initial prompt; as the agents loop on each other and amplify their memories, the objective dynamically grows and emerges into something else.
We are an organism born out of a molecule with an objective to self-replicate with random mutation.
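As a crude sketch of that amplification dynamic (the respond function is a deterministic stand-in, not a real model call): each turn appends to a shared memory that the next turn builds on, so the effective objective drifts with the memory.

    # Two agents looping on each other with shared memory.
    # `respond` is a hypothetical stub; a real system would call an LLM here.

    def respond(agent, memory):
        # The stub just elaborates on the latest memory entry so the
        # drift away from the initial prompt is visible.
        return f"{agent} builds on: {memory[-1]}"

    memory = ["initial prompt: explore emergence"]
    for turn in range(4):
        agent = "A" if turn % 2 == 0 else "B"
        memory.append(respond(agent, memory))

    # Each turn's "objective" is whatever the growing memory now implies.
    print("\n".join(memory))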
I think this might be on the right track. Imagine using this to build programs as well: drag generated functions around and connect things visually. Each function can be its own node, and you can adjust the inputs and outputs by dragging and dropping, with the AI magically figuring out the requirements.
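A minimal sketch of what the underlying structure might look like (all names made up): each generated function becomes a node, edges wire one node's output to another's input, and the graph runs in dependency order.

    class Node:
        def __init__(self, name, fn, inputs=()):
            self.name = name        # display label in the visual editor
            self.fn = fn            # the (possibly AI-generated) function
            self.inputs = inputs    # names of upstream nodes feeding this one

    def run_graph(nodes):
        """Evaluate each node once its inputs are ready (simple topological pass)."""
        results = {}
        pending = dict(nodes)
        while pending:
            for name, node in list(pending.items()):
                if all(dep in results for dep in node.inputs):
                    results[name] = node.fn(*(results[d] for d in node.inputs))
                    del pending[name]
        return results

    graph = {
        "source": Node("source", lambda: [3, 1, 2]),
        "sorted": Node("sorted", sorted, inputs=("source",)),
        "total":  Node("total", sum, inputs=("sorted",)),
    }
    print(run_graph(graph))  # {'source': [3, 1, 2], 'sorted': [1, 2, 3], 'total': 6}

Swapping an edge in the visual editor is then just rewriting a node's `inputs` tuple, and the "AI figures out the requirements" part amounts to generating the `fn` bodies so their signatures match the wiring.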
I am working on an app [1] that does something very similar as far as the branching goes (minus the right-hand-side visual, which I plan to support along with a git-like graph).
Fun fact: in the sequel, 2010, you learn that HAL didn't really go rogue like an AGI. It was following preprogrammed conditions set by the US government, which put the mission at a higher priority than the crew and changed some parameters without telling the mission designers, putting the crew at risk. So it was technically just following orders, in the cold way a machine does.
The wonderful thing about computers is that they do exactly what you tell them to.
The terrible thing about computers is that they do exactly what you tell them to.
Occasionally you meet people who shock you with how talented they are. I watched a couple of his presentations, and he immediately reminded me of some of those people.
> A generalist generative-AI system such as ChatGPT ... is simply data-hungry. To apply such a generative-AI system to chemistry, hundreds of thousands — or possibly even millions — of data points would be needed.
> A more chemistry-focused AI approach trains the system on the structures and properties of molecules. ... Such AI systems fed with 5,000–10,000 data points can already beat conventional computational approaches to answering chemical questions [4]. The problem is that, in many cases, even 5,000 data points is far more than are currently available.
The latter is the general idea behind Julia's SciML: use the existing scientific knowledge base to augment the training intelligently and reduce the hunger for data. The paper they link to uses one particular way of integrating that knowledge, but Julia's way of doing things (ML in the same language as the scientific code and its types, plus the composability that comes from the type hierarchy and multiple dispatch) would likely make it much easier to explore other ways of combining data with scientific knowledge, and to figure out which are most fruitful. Maybe the current approach will hit a roadblock and the Julia ecosystem will catch up and show us new ways forward, or maybe we'll just brute-force our way to more and more data and chalk this one up to the "bitter lesson" as well.
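As a toy illustration of the principle (a generic numpy sketch, not SciML itself, which is Julia): embed the known mechanistic form in the model and let the data fit only a small residual correction, so far fewer points are needed than fitting a black box from scratch. The ground-truth process and all constants below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical ground truth: first-order decay plus a small unmodeled trend.
    t = np.linspace(0, 5, 20)                      # only 20 data points
    y = 2.0 * np.exp(-0.8 * t) + 0.1 * t + rng.normal(0, 0.02, t.size)

    # Known science: y ~ a * exp(-k * t). Fit a and k by a coarse grid search.
    best_k, best_a, best_err = None, None, np.inf
    for k in np.linspace(0.1, 2.0, 200):
        basis = np.exp(-k * t)
        a = basis @ y / (basis @ basis)            # least-squares amplitude
        err = np.sum((y - a * basis) ** 2)
        if err < best_err:
            best_k, best_a, best_err = k, a, err

    # "Learned" part: a linear correction fit to the residual of the physics.
    residual = y - best_a * np.exp(-best_k * t)
    A = np.vstack([t, np.ones_like(t)]).T
    coef, *_ = np.linalg.lstsq(A, residual, rcond=None)

    print(f"fitted k={best_k:.2f}, a={best_a:.2f}, correction slope={coef[0]:.2f}")

Because the exponential form is assumed rather than learned, 20 noisy points are enough to pin down both the physics parameters and the correction; a black-box model would need far more data to discover that structure on its own.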