The core piece as quoted from the abstract: "AGI predictions fail not from
insufficient compute, but from fundamental misunderstanding of what intelligence demands structurally."
The paper then goes into detail about what that structural demand is and why LLMs don't meet it. There are plenty of other similar papers out there.