
Those are great links! But the 2008 Hinton paper wouldn't be considered deep learning; it's classic neural nets. It makes no mention of CNNs or GPUs, which is what really got this all going back in 2012 with ImageNet / Krizhevsky.

The ImageNet paper is from 2012, not 2010. That's when the computer vision community really went "wow". IIRC, almost every entry in ImageNet 2013 was using CNNs.



Good call on the 2012 (not 2010) date; I missed that. GPUs are not a requirement for deep NNs. Hinton's pseudo-Bayesian + ReLU approach was the last piece of deep neural net functionality, and CNNs date back to 1995-1998 with LeCun and Bengio. GPUs do, however, accelerate deep NNs enough to make them feasible on image data (thanks to Ng).


> it is classic neural nets. It makes no mention of CNNs or GPUs

Is using a GPU "essential" for something to be deep learning? I'd always thought that the important part was some sort of hierarchical representation learning.

GPUs certainly help, in that you don't want to wait all day while your model trains, but they're not necessary.
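To make that concrete, here's a minimal sketch of what "hierarchical representation learning" means in practice: a small stack of ReLU layers running entirely on the CPU with NumPy. The layer sizes and random weights are arbitrary placeholders (nothing here comes from the papers being discussed); the point is just that depth comes from stacking nonlinear layers, and a GPU only makes the same computation faster.

    import numpy as np

    # Sketch: a 3-layer ReLU MLP forward pass on the CPU.
    # Depth (stacked nonlinear layers, each re-representing the previous one)
    # is what makes a net "deep"; a GPU only speeds up training.
    # Layer sizes below are arbitrary, chosen only for illustration.

    rng = np.random.default_rng(0)

    def relu(x):
        return np.maximum(0.0, x)

    # Random weights stand in for trained parameters.
    sizes = [784, 256, 128, 10]   # e.g. a flattened 28x28 input -> 10 classes
    weights = [rng.standard_normal((m, n)) * 0.01
               for m, n in zip(sizes[:-1], sizes[1:])]
    biases = [np.zeros(n) for n in sizes[1:]]

    def forward(x):
        h = x
        for W, b in zip(weights[:-1], biases[:-1]):
            h = relu(h @ W + b)   # each hidden layer builds on the one before
        return h @ weights[-1] + biases[-1]   # linear output layer (logits)

    x = rng.standard_normal(784)  # a fake flattened input image
    print(forward(x).shape)       # (10,)

Swap the plain matrix multiplies for convolutions and you get the CNN case; run the same math on a GPU and it's simply faster, not different in kind.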


I think Tsvi Achler's video here is useful for understanding what the article is about: https://www.youtube.com/watch?v=9gTJorBeLi8



