
GPT scales this up 100x, maybe even 1000x. On the other hand, can we potentially train generative AI to detect and refute BS as well? It may be our only hope.


Neal Stephenson's Anathem[0], which revolves heavily around epistemology, coined the term "Artificial Inanity" for AI.

[0]https://englishwotd.wordpress.com/2014/02/17/artificial-inan...


> On the other hand, can we potentially train generative AI to detect and refute BS as well? It may be our only hope.

LLMs store their training information in an incredibly lossy format. You're going to need some kind of different approach if you want one to tell the difference between plausible-sounding bullshit and implausible-sounding truth.


GPT is also pretty good at cutting through BS. It can detect logical fallacies, for instance, or explain a lack of rigor in a discussion. It depends on how you fine-tune it: couple it with an external fact database and you could get it to cite its sources. Couple it with a Prolog engine AND a fact database and it could modus pwnens ur ass.
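
Something like this, as a rough Python sketch of that coupling (ask_llm is a hypothetical placeholder for whatever model API you call, the facts and rules are toy examples, and the "Prolog engine" is reduced to a few lines of modus ponens forward chaining):

    # Toy fact base and rules; a real system would need far richer representations.
    FACTS = {"socrates_is_human"}                              # ground facts
    RULES = [({"socrates_is_human"}, "socrates_is_mortal")]    # (premises, conclusion)

    def ask_llm(prompt):
        # Hypothetical placeholder: swap in a real model call that maps the
        # question to a claim identifier from the fact base's vocabulary.
        return "socrates_is_mortal"

    def entails(facts, rules, claim):
        # Forward-chain with modus ponens until the set of known facts stops growing.
        known = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= known and conclusion not in known:
                    known.add(conclusion)
                    changed = True
        return claim in known

    def answer_with_provenance(question):
        claim = ask_llm(question)
        if entails(FACTS, RULES, claim):
            return claim + " (derivable from the fact base)"
        return claim + " (not derivable -- treat as possible BS)"

    print(answer_with_provenance("Is Socrates mortal?"))

The point is just the division of labor: the LLM produces a claim, and the symbolic side decides whether it actually follows from the fact base.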


That's funny, because ChatGPT feeds me BS quite often. It's only when I call it out that it corrects itself.


GPT will not be able to detect contradictions / logical fallacies generated by GPT itself or similar LLMs. If it could spot the mistake, it wouldn't make it in the first place.

The other issue is that the generated content might be composed entirely of true facts but still used to manipulate via less in-your-face techniques: things like agenda setting, or flooding the discourse with content that contains no lies but pushes a particular interpretation of those facts.


> GPT will not be able to detect contradictions / logical fallacies generated by GPT itself or similar LLMs. If it could spot the mistake, it wouldn't make it in the first place.

That is absolutely NOT true. Try it. Next time it does it, quote it, and ask it to find the logical fallacy and it will.

There isn't another session following the existing one asking it to double-check its work. It is running open loop.

Humans exhibit the SAME behavior. They make logical fallacies all the time, but if you ask them to identify the logical fallacy in a passage of their own text, they can spot it easily. Attention to Logical Fallacies Is All You Need.

GPT is not Spock, but you could make it Spock by combining LLMs with external tools and fact databases.
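
For concreteness, closing that loop might look something like this in Python (chat is a hypothetical placeholder, not any particular vendor's API):

    def chat(messages):
        # Hypothetical placeholder: send `messages` to the chat model of your
        # choice and return its reply text.
        return "(model reply)"

    def answer_with_self_check(question):
        # First pass: the model answers open loop.
        draft = chat([{"role": "user", "content": question}])
        # Second pass: feed the model its own draft and ask it to audit it.
        critique = chat([
            {"role": "user", "content": question},
            {"role": "assistant", "content": draft},
            {"role": "user", "content":
                "Quote any logical fallacies or contradictions in your previous "
                "answer, then give a corrected version."},
        ])
        return draft, critique

The second call feeds the model its own draft plus an explicit request to audit it, which is the closed-loop version of "quote it and ask it to find the fallacy."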

----

Please spot any potential logical fallacies in this statement

> GPT will not be able to detect contradictions / logical fallacies generated by GPT itself or similar LLMs. If it could spot the mistake, it wouldn't make it in the first place.

This statement contains a few potential logical fallacies:

False dilemma (also known as false dichotomy or either-or fallacy): The statement implies that either GPT can detect all logical fallacies and contradictions, or it cannot detect any of them. In reality, GPT's ability to detect logical fallacies could be imperfect, meaning that it can identify some fallacies but still make others.

Circular reasoning (also known as begging the question): The statement assumes that GPT cannot detect logical fallacies generated by itself or similar LLMs, without providing evidence or reasoning to support this claim.

Hasty generalization: The statement seems to imply that if GPT makes a mistake, it must be unable to detect that mistake in general. However, GPT's performance can be inconsistent, and it might sometimes make mistakes that it can, in fact, detect in other contexts.

----

I concur, captain.


Bonus: a quick tutorial on how to use GPT to scale up attribution bias: https://sonnet.io/posts/emotive-conjugation/



