
The content ChatGPT returns is non-deterministic (you will get different responses on the same day for the same email), and these models change over time. Even if you're an expert in your field and can verify that the chatbot returned correct information for one email, that correctness isn't guaranteed to repeat.

You're staking your personal reputation on the output of something you can expect to be wrong. If someone gets a suspicious email, follows your advice, and ChatGPT incorrectly assures them it's fine, the person who gets scammed will rightly conclude you're a person who gives bad advice.

And if you don't believe my arguments, maybe just ask ChatGPT to generate a persuasive argument against using ChatGPT to identify scam emails.



It's a good point, and I should make a distinction about which models are appropriate. I think of ChatGPT 4 like a college student and ChatGPT 5.1 or 5 Pro (the deep-thinking models) more like a seasoned professional. I wouldn't trust non-frontier, non-thinking models with this kind of question. But the non-determinism of the result doesn't scare me: the output may vary, but not directionally. The same thing would happen if you asked the foremost security expert in the world; you'd get slightly different answers on different days. As a test, I once ran a very complex legal analysis through ChatGPT Pro 10 times to see how the results would vary, and it was pretty consistent, with roughly 10% variation in the numbers it suggested.
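
For what it's worth, that repeat-the-prompt test is easy to script. Here's a minimal sketch, assuming the official `openai` Python package (v1+), an OPENAI_API_KEY in the environment, and a placeholder model name and email text (swap in whichever frontier model and message you actually want to test):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Hypothetical email text; paste the message you want checked.
    EMAIL_TEXT = "Dear customer, your account has been suspended..."
    PROMPT = ("Is the following email a scam? Answer 'scam' or 'legit', "
              "then explain briefly.\n\n" + EMAIL_TEXT)

    answers = []
    for i in range(10):
        # Same prompt every time; any variation comes from the model itself.
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": PROMPT}],
        )
        answers.append(resp.choices[0].message.content.strip())

    # Eyeball how much the verdicts drift across runs.
    for i, answer in enumerate(answers, 1):
        print(f"--- run {i} ---\n{answer}\n")

If the verdict flips between "scam" and "legit" across runs, that's the directional variation that should worry you; wording-only differences are the kind of noise I'm describing above.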



