
I think ChatGPT could be a big accelerator for creative activity, but I wouldn't trust any output that I've only partially verified. That limits its direct output to human-scale problems, though there are many ways that human-scale output, like code snippets, can be applied to computer-scale data.


There are plenty of valuable use-cases that aren't at much risk from hallucinations at all:

- Asking it to summarize text

- Using it to extract facts from text and present them in an alternative format - turning a chunk of HTML into JSON for example

- Creative writing - poems, stories etc

- Getting feedback on your own text - asking it what should be tightened up, which bits are confusing and so on

- All kinds of code generation activities


> turning a chunk of HTML into JSON for example

I haven't done exactly that, but based on similar examples, this is likely very vulnerable to hallucinations.


For simple things it's pretty safe. I tried pasting in HTML from the homepage of Hacker News and having it turn that into a list of JSON objects, each with the title, submitter, number of upvotes, and number of comments.

Here's a similar trick I did with Copilot: https://til.simonwillison.net/gpt3/reformatting-text-with-co...
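
Here's a rough sketch of how you might script that with the OpenAI Python client (the model name and the exact prompt wording here are placeholder assumptions, not the thing I actually ran):

    # Sketch: ask a chat model to convert a chunk of HN homepage HTML into
    # JSON, then parse the reply so malformed output fails loudly.
    import json
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def html_to_json(html_chunk: str) -> list[dict]:
        prompt = (
            "Convert this Hacker News HTML into a JSON array of objects, "
            "each with keys: title, submitter, points, comments. "
            "Respond with JSON only, no commentary.\n\n" + html_chunk
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder: any capable chat model
            messages=[{"role": "user", "content": prompt}],
        )
        # json.loads() verifies the reply is valid JSON; it does NOT verify
        # the values, so spot-check a few rows against the source HTML.
        return json.loads(response.choices[0].message.content)

The json.loads() call only catches structural failures; the hallucination risk lives in the values, which is why spot-checking a few rows against the original HTML is still worth doing.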


There are two classes of response: those that are factually "right" or "wrong" (who was the sixteenth president of the U.S.?), and those that are opinion/debatable ("How should I break up with my boyfriend?"). People will focus on the facts, and those will be improved (viz. the melding of ChatGPT with Wolfram Alpha), but the opinion answers are going to be more readily accepted (and harder to optimize?).



