
An LLM is mathematically incapable of telling you "I don't know"

It was never trained to "know" anything, or to recognize when it doesn't.

It was fed one string of tokens and a second string of tokens, and its weights were tweaked until it produced the second string when given the first.
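For what it's worth, here is a minimal PyTorch sketch of that objective (a toy stand-in model with made-up token IDs, not any real LLM): the only training signal is cross-entropy against the target tokens, and nothing in the loss rewards expressing uncertainty.

    import torch
    import torch.nn as nn

    # Toy vocabulary and a single (prompt, continuation) pair as token IDs.
    # All sizes and IDs here are invented for illustration.
    vocab_size, d_model = 100, 32
    prompt = torch.tensor([5, 17, 42])   # the "first string of tokens"
    target = torch.tensor([8, 63, 2])    # the "second string of tokens"

    # A deliberately tiny stand-in for an LLM: embedding -> linear head.
    model = nn.Sequential(
        nn.Embedding(vocab_size, d_model),
        nn.Flatten(),
        nn.Linear(d_model * len(prompt), vocab_size * len(target)),
    )

    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(200):
        logits = model(prompt.unsqueeze(0)).view(len(target), vocab_size)
        # The model is "tweaked" only to make the target tokens more likely.
        # There is no term here that rewards saying "I don't know".
        loss = loss_fn(logits, target)
        opt.zero_grad()
        loss.backward()
        opt.step()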

Humans do not arrive at "I don't know" through next-token prediction.

Even animals without language can gauge their own confidence about something, like a cat that is unsure whether to approach you.


