
you're reading too much into it. i make no assumptions.




It doesn't matter whether you make assumptions or not - your prompt does. I think the point of failure isn't even necessarily the LLM, but your writing, because you leave the model no leeway to report back that something is genuinely neutral or impartial. Instead, you're asking it to dig up proof of wrongdoing no matter what, basically asserting that lies surely exist in whatever you post and that you just need help uncovering all the deception. Told to do this, it will read absolutely anything you give it in the most hostile way possible, stringing together any coherent-sounding arguments that reinforce the viewpoint your prompt implies.
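
To make the contrast concrete, here's a rough sketch of the two framings. The prompt wording, the gpt-4o-mini model name, and the OpenAI Python client usage are illustrative assumptions on my part, not anything from the original prompt; the only point is how the framing changes what the model is being asked to do.

    # Sketch: a leading prompt vs. a neutral prompt sent to the same model.
    # Model name and client library are assumptions for illustration only.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    document = "... the post being analyzed ..."

    # Leading framing: presupposes deception and asks the model to find it.
    leading_prompt = (
        "This post is full of lies and manipulation. "
        "Uncover every piece of deception in it:\n\n" + document
    )

    # Neutral framing: leaves the model room to report that nothing is wrong.
    neutral_prompt = (
        "Assess the factual claims in this post. For each claim, say whether "
        "it is supported, unsupported, or unclear, and say so plainly if you "
        "find no issues:\n\n" + document
    )

    for name, prompt in [("leading", leading_prompt), ("neutral", neutral_prompt)]:
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {name} ---")
        print(response.choices[0].message.content)

The leading version will surface "deception" in almost any text, because that's the only kind of answer it allows; the neutral version at least gives the model a path to say nothing is wrong.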


