> First, convolution and invariance are definitely not the only things you need. Modern DL architectures use lots of very clever gadgets inspired by decades of interdisciplinary research.
i have noticed this. rather than replacing feature engineering, it seems that you find some of those ideas from psychophysics just manually built into the networks.
The weight patterns that convolutional neural networks learn are familiar in many ways. For example, the first layer will generally end up with small-scale feature detectors, such as edges, gradient/color pairs, and certain textures, at various scales and orientations.
Try an image search for "imagenet first layer" to see examples.
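To make the "small-scale feature detector" idea concrete, here is a minimal sketch of what one such filter does. The Sobel-style kernel below is a classic hand-coded edge detector, chosen as an illustrative analogue of the edge-sensitive filters that tend to emerge in a CNN's first layer (the kernel and toy image are my own example, not from any particular network):

```python
import numpy as np

# Hand-coded vertical-edge kernel (Sobel-style), similar in spirit to
# filters that emerge in a trained CNN's first layer.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)

# Toy image: left half dark, right half bright, i.e. one vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0

def correlate2d(image, kernel):
    """'Valid' cross-correlation, the operation a conv layer applies."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

resp = correlate2d(img, sobel_x)
# The response is large only where the window straddles the edge,
# and zero in the flat dark and bright regions.
```

A first conv layer learns a bank of kernels like this one, at many orientations and scales, rather than having them specified by hand.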
I took the comment to mean "we have ourselves discovered certain filters being useful (e.g. https://en.wikipedia.org/wiki/Gabor_filter), and the networks now also discover this same information".
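For reference, a Gabor filter of the kind linked above is just a Gaussian envelope multiplied by a sinusoidal carrier. Here is a minimal numpy sketch of the real part (parameter names and defaults are illustrative choices, not from any specific library):

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=8.0, psi=0.0, gamma=0.5):
    """Real part of a Gabor filter: Gaussian envelope times a cosine carrier.

    theta  -- orientation of the carrier, in radians
    lambd  -- wavelength of the carrier
    gamma  -- spatial aspect ratio of the envelope
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    # Rotate coordinates so the carrier runs along direction theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier

k = gabor_kernel(theta=np.pi / 4)
```

Plotting kernels like `k` over a grid of `theta` and `lambd` values produces images strikingly similar to the first-layer weight visualizations mentioned above.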
it is true that dl delivers on its promise of rediscovering some handcrafted features on its own. it is also true that (at least the last time i checked) the state of the art still makes use of hand-coded transforms derived from results in psychophysics.