It is deprecated and unavailable now, so conveniently no one can test these claims any longer.
In any case, it doesn't matter much: this was over a year ago, and current models don't suffer from the exact same problems described above, if you consider them problems at all.
I am not probing models with jailbreaks to make them behave in strange ways. This was purely an eval environment I composed in which the model was repeatedly asked to interact with itself; both instances had what were basically terminal emulators plus access to a scaffold that let them inspect their own current 2D grid state (think of a CLI you could write yourself, where you can easily scroll back through previous AI-generated outputs).
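If that's hard to picture, here's a rough sketch of the shape of that scaffold. This is purely illustrative and not the original harness: every name is mine, and model_respond() is a stub standing in for whatever LLM call you'd actually make.

```python
# Illustrative sketch of a self-interaction scaffold: two agent states,
# a tiny "terminal" with a few commands, and a grid-viewing CLI.
from dataclasses import dataclass, field

GRID_W, GRID_H = 8, 8

@dataclass
class AgentState:
    name: str
    grid: list = field(default_factory=lambda: [["." for _ in range(GRID_W)] for _ in range(GRID_H)])
    transcript: list = field(default_factory=list)  # every output this agent has produced

def render_grid(grid):
    """CLI-style view of the agent's current 2D grid state."""
    return "\n".join(" ".join(row) for row in grid)

def handle_command(agent, cmd):
    """Tiny terminal emulator: the handful of commands an agent can issue."""
    parts = cmd.split()
    if cmd == "show-grid":
        return render_grid(agent.grid)
    if parts and parts[0] == "history":
        # lets the agent "scroll up" through its own previous outputs
        n = int(parts[1]) if len(parts) > 1 else 5
        return "\n---\n".join(agent.transcript[-n:]) or "(no history yet)"
    if parts and parts[0] == "set" and len(parts) == 4:
        _, x, y, ch = parts
        agent.grid[int(y)][int(x)] = ch
        return "ok"
    return f"unknown command: {cmd}"

def model_respond(agent, observation):
    """Stub for the real model call; here it just always asks to see the grid."""
    return "show-grid"

def run_dialogue(turns=4):
    a, b = AgentState("A"), AgentState("B")
    observation = "(start)"
    for i in range(turns):
        agent = a if i % 2 == 0 else b
        cmd = model_respond(agent, observation)   # model sees the other agent's last output
        observation = handle_command(agent, cmd)  # scaffold executes it and returns the result
        agent.transcript.append(f"> {cmd}\n{observation}")
        print(f"[{agent.name}] {cmd}\n{observation}\n")

if __name__ == "__main__":
    run_dialogue()
```

The point is just that the "strange behavior" came from the models talking to each other through an ordinary command loop like this, not from adversarial prompting.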
The child/neighbor comments suggesting that interacting with LLMs and similar compound AI systems, adversarially or not, might be indicative of "LLM psychosis" are reductive and childish at best.