Interesting to see how views around AI evolve over the years. My opinion is that framing AI in terms of the Turing Test, or of the singularity, focuses on the wrong things. My focus instead is on symbiotic intelligence, i.e. how machines will change the way symbiotic people/technology societies operate. I have fragments of an unpublished paper I am working on around these ideas here, for anyone interested.
The popularity of Searle's Chinese room argument has always bothered me. It's not clear what he actually believes. If the mind is not a computer, what is it then? Something with "special causal powers"? What does that even mean...
And presupposing a rule book that passes the Turing test is of course a very dubious premise. I think armchair philosophizing ultimately cannot give us clear answers to questions like this.
The Chinese room is a deeply confused thought experiment. It starts from the (unstated) assumption that humans are magic, and all it asserts is that computers can't be magic.
The argument even allows that it might be possible to create AIs that are just as intelligent as humans, and which behave exactly the same as humans do. All it claims is that such AIs won't be magic. Unlike humans, which it assumes are magic.
That anyone ever took this argument seriously is amazing and sad.
https://drive.google.com/file/d/0BxtoB2exHDnISjFDSEdCOEN1cDA...