
Sure, we have integers of many sizes, fixed point, and floating point, all of which are used in neural networks. Floating point is ideal when the scale of a value can vary tremendously, which is obviously important for gradient descent; after training we can then quantize to a smaller fixed-size format.
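
As a rough sketch of that last step (the function name and the single per-tensor scale factor here are just illustrative, not any particular framework's scheme), post-training quantization to int8 can look like this:

    #include <math.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Illustrative sketch: map float weights onto int8 using one
     * per-tensor scale factor chosen from the largest magnitude. */
    void quantize_int8(const float *w, int8_t *q, size_t n, float *scale_out)
    {
        float max_abs = 0.0f;
        for (size_t i = 0; i < n; i++) {
            float a = fabsf(w[i]);
            if (a > max_abs)
                max_abs = a;
        }
        float scale = max_abs / 127.0f;          /* one int8 step per float step */
        if (scale == 0.0f)
            scale = 1.0f;                        /* all-zero tensor: avoid div by 0 */
        for (size_t i = 0; i < n; i++)
            q[i] = (int8_t)lrintf(w[i] / scale); /* round to nearest integer */
        *scale_out = scale;                      /* kept around to dequantize later */
    }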

A modern processor can scale a floating-point value by a power of two (the floating-point analogue of an integer bit shift) about as quickly as the shift itself, courtesy of FSCALE and similar instructions. Indeed, modern processors are extremely fast at floating-point math.
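
A small sketch of that equivalence, assuming C's standard ldexpf (which computes f * 2^k and, depending on compiler and target, may lower to a hardware scaling instruction such as x87 FSCALE or AVX-512 VSCALEF*):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        int   i = 5;
        float f = 5.0f;

        /* Integer shifts: multiply/divide by a power of two. */
        printf("%d %d\n", i << 3, i >> 1);              /* 40 2 */

        /* Floating-point equivalent: adjust the exponent directly. */
        printf("%g %g\n", ldexpf(f, 3), ldexpf(f, -1)); /* 40 2.5 */

        return 0;
    }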


