
> The problem with LLMs is that the output is inherently stochastic

Isn't that true with humans too?

There's some leap humans make, even as stochastic parrots, that lets us generate new knowledge.
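(To make "stochastic" concrete, here's a minimal sketch in Python; the logits and token list are made-up toy values, not from any real model. Each run can return a different token because generation samples from a probability distribution instead of always taking the most likely token:)

    import numpy as np

    # Hypothetical next-token logits and vocabulary (toy values)
    logits = np.array([2.0, 1.0, 0.5, 0.1])
    tokens = ["cat", "dog", "bird", "fish"]

    def sample(logits, temperature=1.0, rng=None):
        rng = rng or np.random.default_rng()
        # Softmax with temperature: higher T flattens the distribution
        probs = np.exp(logits / temperature)
        probs /= probs.sum()
        return tokens[rng.choice(len(tokens), p=probs)]

    # Same "prompt", repeated runs, potentially different outputs
    print([sample(logits) for _ in range(5)])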



I think that's because, as individuals, we don't feel the random and chaotic nature of what we know.

If I had been born a day earlier or later, I would have a completely different life because of initial conditions and randomness. But life doesn't feel that way, even though I think this is obviously true.



