The LLM may well have pulled the answer from a medical reference similar to the one the doctor used. I have no idea why you think an expert in the field would use ChatGPT for a simple question; that would be negligence.


A climate scientist I follow uses Perplexity AI in some of his YouTube videos. He stated once that he uses it for formatting, graphs, and synopses, but that he knows enough about what he's asking to tell whether its output is correct.

An "expert" might use ChatGPT for the brief synopsis. It beats trying to recall something learned about a completely different sub-discipline years ago.


This is the root of the problem with LLMs.

At best, they can attempt to recall sections of scraped information, which may happen to be the answer to a question. That's no different from searching the web, except that when you search yourself you instantly know the source and how much to trust it. I've found LLMs tend to invent sources when queried (although that seems to be getting better), so it's slower than searching for information I already know exists.

If you have to be more of an expert than the LLM to verify the output, that demands more careful attention than going back to the original source. Useful, but the writing is always in a different style from previous models/conversations and from your own.

LLMs can be used to suggest ideas and summarize sources, if you can verify and mediate the output. They can serve as a potential source of information (and the more independent sources that agree, the better). However, they cannot readily infer new information accurately, so the best they can do there is guess. It would be useful if they could provide a confidence indicator for all scenarios.
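(For what it's worth, some APIs do expose token log-probabilities, which you can turn into a very rough confidence signal. A minimal sketch against OpenAI's Python SDK; the model name and prompt are placeholders, and mean token probability reflects the model's fluency on its own output, not factual accuracy.)

    import math
    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[{"role": "user", "content": "What's the adult max daily dose of ibuprofen?"}],
        logprobs=True,  # ask for per-token log-probabilities
    )

    # Average per-token probability is a crude proxy for "confidence";
    # a hallucinated answer can still score highly here.
    tokens = resp.choices[0].logprobs.content
    avg_p = sum(math.exp(t.logprob) for t in tokens) / len(tokens)
    print(resp.choices[0].message.content)
    print(f"mean token probability: {avg_p:.2f}")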


She read it EXACTLY as written from the ChatGPT response, verbatim. If it were her own unique response, there would have been some variation.


What makes you think the LLM wasn't reproducing a snippet from a medical reference?

I mean, it's possible an expert in the field was using ChatGPT to answer questions, but it seems rather stupid and improbable, doesn't it? It'd be a good way to completely crash your career when found out.



