> Since the inner nature does affect behavior, that's a non sequitur.
I would say the reverse: we humans exhibit diverse behaviour despite similar inner nature, and likewise clusters of AI with similar nature to each other display diverse behaviour.
So from my point of view, the fact that I can draw clusters — based on similarities of failures — that encompass both humans and AI makes it a non sequitur to point to the internal differences.
> The ability to form a coherent - even if novel - theory and an experiment to test it is key to that kind of progress, and it's something these models are fundamentally incapable of doing.
Sure.
But, again, this is something most humans demonstrate they can't get right.
IMO, most people treat science as a list of facts rather than a method, and most also mix up correlation and causation.