I agree with you. I also think it is useful to understand limits. That is not what I'm arguing against; what I'm arguing against is asserting limits without the knowledge to do so. Until we actually figure out how our intelligence works, we can't say we can't reproduce it algorithmically.
We can find plenty of examples where engineering advanced beyond our understanding: bridges, boats, tables, etc. When humanity started building those we didn't yet have a full picture of the underlying physics, yet we built and used them for thousands of years.
Again, I'm not saying we can't engineer something from knowledge; I'm saying we can't assert limits on something we don't yet fully understand.
IMHO this paper is speculative and biased toward our human intelligence. From all the advances I have seen so far in the AI landscape, I'm growing more and more skeptical that our intelligence is so complex that we can't replicate it.