In the future we will see a ton of similar charts borrowing elements from graph theory and signal theory. There's no limit on the number of different LLM multi-agent systems.
This sounds quite interesting, could you expand on it? What is some of the low-hanging fruit in your opinion? Do you have any examples of projects that are explicitly building on top of these ideas?
The space of possible GPT-4 outputs is hard to comprehend.
The space of possible "graphs" of LLM agents connected to each other is even larger.
Each graph represents a multi-agent system.
Here's a generic syntax for notating graphs that don't have loops (essentially trees):
AgentName: Descriptive Name
Goals:
- Goal1
- Goal2
...
Techniques:
- Instruction1
- Instruction2
...
Inputs:
- From AgentName: Description of input
- From OtherAgentName: Description of input
...
Outputs:
- To AgentName: Description of output
- To OtherAgentName: Description of output
...
-> SubAgentName1: Descriptive Name
Goals:
- Goal1
- Goal2
...
Techniques:
- Instruction1
- Instruction2
...
Inputs:
- From AgentName: Description of input
- From OtherAgentName: Description of input
...
Outputs:
- To AgentName: Description of output
- To OtherAgentName: Description of output
...
-> SubSubAgentName1: Descriptive Name
Goals:
- Goal1
- Goal2
Techniques:
- Instruction1
- Instruction2
Inputs:
- From SubAgentName1: Description of input
Outputs:
- To SubAgentName1: Description of output
...
-> SubSubAgentName2: Descriptive Name
Goals:
- Goal1
- Goal2
Techniques:
- Instruction1
- Instruction2
Inputs:
- From SubAgentName1: Description of input
Outputs:
- To SubAgentName1: Description of output
...
-> SubAgentName2: Descriptive Name
Goals:
- Goal1
- Goal2
...
Techniques:
- Instruction1
- Instruction2
...
Inputs:
- From AgentName: Description of input
...
Outputs:
- To AgentName: Description of output
...
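The tree-shaped notation above can be sketched as a small data type. Everything below (the `Agent` class, its field names, the `walk` helper) is just one hypothetical encoding of the notation, not a fixed format:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """One node in a loop-free (tree-shaped) multi-agent graph."""
    name: str
    description: str = ""
    goals: list[str] = field(default_factory=list)
    techniques: list[str] = field(default_factory=list)
    inputs: dict[str, str] = field(default_factory=dict)   # from-agent name -> description
    outputs: dict[str, str] = field(default_factory=dict)  # to-agent name -> description
    subagents: list["Agent"] = field(default_factory=list)

def walk(agent: Agent, depth: int = 0) -> list[str]:
    """Flatten the tree into indented lines, mirroring the '->' notation."""
    prefix = ("  " * depth) + ("-> " if depth else "")
    lines = [f"{prefix}{agent.name}: {agent.description}"]
    for sub in agent.subagents:
        lines.extend(walk(sub, depth + 1))
    return lines

root = Agent("Research", "Top-level researcher",
             goals=["Produce and analyze inquiries"],
             outputs={"Memory": "Research"},
             subagents=[Agent("Memory", "Research record",
                              inputs={"Research": "Research"})])
print("\n".join(walk(root)))
```

A parser for the text syntax would build the same structure by tracking the `->` nesting depth.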
---
Here's an example researcher agent and its interior using the syntax, translated from the Finnish in my notes:
Research:
Goals:
- Produce and analyze inquiries
- Build new research
- Investigate unknown topics
- Draw inferences from inquiries
Techniques:
- Instruction / rule for inference 1
- Instruction / rule for inference 2
Inputs:
- From Meta-awareness: Suggestions
- From Subresearch: Results
Outputs:
- To Subresearch: Commands
- To Memory: Research
-> Memory:
Goals:
- Maintain a record of the research
Techniques:
- Instruction / rule for using memory stores 1
- Instruction / rule for using memory stores 2
Inputs:
- From Research: Research
Outputs: None
---
Signal theory becomes relevant when thinking about I/O, embedded agency, and situations where the agents aren't, or can't be, constantly "reading" each other.
---
As for similar projects, the current AutoGPT-style systems are very primitive and haven't incorporated these ideas. If what I call the cognitive architectures of LLM multi-agent systems were carefully designed, which I predict will become a thing (and the subject of a ton of future research!), our AI systems could gain very advanced cognitive capabilities, perhaps even approaching humans, but in their own formal manner.
I appreciate the detailed response. I'll look into the SocraticAI project as well.
FWIW I asked because I'm working on a toolkit for applying Monte Carlo tree search to agent graph generation and am always on the lookout for fundamental insights that could help direct its development.
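In case it helps illustrate the idea: one iteration of MCTS over partial agent graphs might look like the toy sketch below. Everything here is hypothetical — states are bare edge tuples, and `evaluate` is a stand-in for whatever scoring (e.g. an LLM-judged rollout) the real toolkit would use:

```python
import math
import random

class Node:
    """MCTS node: a partial agent graph plus search statistics."""
    def __init__(self, graph, parent=None):
        self.graph = graph          # e.g. a tuple of agent-graph edges
        self.parent = parent
        self.children = []
        self.visits = 0
        self.value = 0.0

def uct(child, parent, c=1.4):
    """Upper-confidence bound used to pick which child to descend into."""
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)

def mcts_step(root, expand, evaluate):
    """One select / expand / evaluate / backpropagate iteration."""
    node = root
    while node.children:                              # selection
        node = max(node.children, key=lambda ch: uct(ch, node))
    for graph in expand(node.graph):                  # expansion: candidate graph extensions
        node.children.append(Node(graph, parent=node))
    leaf = random.choice(node.children) if node.children else node
    reward = evaluate(leaf.graph)                     # evaluation (stand-in for a rollout)
    while leaf:                                       # backpropagation
        leaf.visits += 1
        leaf.value += reward
        leaf = leaf.parent
```

The interesting design questions are all hidden in `expand` (which graph mutations are legal) and `evaluate` (how you score a candidate multi-agent system).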