The paper was rejected (you can read the ICLR comments) because the experiments did not really support their point, and I agree. The gist of the experiments they ran to support their thesis: take a CNN, construct adversarial examples that successfully fooled it, then apply foveation and show the CNN was no longer fooled. Which is obvious! Adding preprocessing that the attacker is unaware of will of course beat the attacker. What they didn't do is regenerate the adversarial examples assuming the attacker knows the target is using foveation, i.e., run an adaptive attack that differentiates through the foveation step (see the sketch below).
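For the curious, here's a minimal sketch (PyTorch) of what that adaptive attack would look like. The `foveate` transform and `model` here are placeholders I made up, not the paper's actual setup; the point is just that the attacker puts the defense inside the differentiated pipeline:

```python
# Hypothetical sketch: FGSM where the foveation preprocessing is part of
# the attack graph, so the perturbation anticipates the defense.
import torch
import torch.nn.functional as F

def foveate(x, crop=0.8):
    # Stand-in foveation: center-crop then resize back up.
    # (The paper's actual foveation transform would go here.)
    _, _, h, w = x.shape
    ch, cw = int(h * crop), int(w * crop)
    top, left = (h - ch) // 2, (w - cw) // 2
    cropped = x[:, :, top:top + ch, left:left + cw]
    return F.interpolate(cropped, size=(h, w),
                         mode="bilinear", align_corners=False)

def fgsm_adaptive(model, x, y, eps=8 / 255):
    # Gradients flow through foveate(.) as well as the CNN, so the
    # resulting adversarial example is built with full knowledge of
    # the preprocessing, unlike the experiments in the paper.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(foveate(x_adv)), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```

If adversarial examples generated this way still fail to transfer, that would actually support the thesis; showing that non-adaptive attacks fail does not.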
There are no experiments that support your statements, unfortunately.