
This does not replicate when I try it on various simulators.

It's also 12 characters shorter.

Let me guess: you ran this through an LLM, and have posted the resulting hallucination as an undisclaimed fact.



https://chat.openai.com/share/b0f021ec-20b8-4b6d-bf6d-e8aa83... At least for me, GPT-4 gave a summary and said it can't crack it.


I would actually find it quite unlikely that this is LLM output, if only because I bet it would have instantly been censored the moment it uttered something akin to "Heil Hitler".



