
I think it is exposing how naïve it would be to go full speed toward full AGI. I personally think "AI safety" in the context of AGI is an oxymoron. Even primitive AI is already beyond what we can manage.

“The size and complexity of deep learning models, particularly language models, have increased to the point where even the creators have difficulty comprehending why their models make specific predictions. This lack of interpretability is a major concern, particularly in situations where individuals want to understand the reasoning behind a model’s output”

from - https://arxiv.org/pdf/2302.03494.pdf


