
By LLM, you mean some flavor of ChatGPT? Have you checked its results using non-LLM sources? It sounds like a mild case of AI mania. Your doctor presumably knows what a dangerous LDL level is, maybe they weren’t concerned because it isn’t dangerous?


Yes, precisely these tools, and yes, I did cross-reference the results with studies. In fact, the physician immediately misattributed the cause, and I had to guide her into correcting herself, since she failed to ask the conditional follow-up questions I had forced the LLM to ask.

In fact, an almost identical thing happened to my partner. She was experiencing medical issues and asked her friends, who were doctors. They confidently gave a flurry of knee-jerk explanations of the cause without carefully weighing all the variables I had forced the LLM to consider. With the LLM's guidance, we ran a diagnostic test at a nearby pharmacy, corrected the pharmacist, who had attributed the symptoms to "the heat," and resolved the problem within a few hours after determining it stemmed from a magnesium deficiency.

I'm not saying it's perfect out of the box, but I remember getting excited when medical-image recognition models five years or so ago were 85% as effective as a doctor. Now they've surpassed human performance, and for much the same reasons that LLMs are superior.

The primary mistake of this physician, and of most, is that their knowledge window is limited, as another comment notes. They also tend, in my observation, to be reactive: if you're not visibly sick, then you're not sick. Yet several studies demonstrate the long-term compounding harm of LDL at these levels, even when no symptoms appear for years.

Again, the argument isn't to replace primary care physicians with self-diagnostic ChatGPT usage, but rather that in the near term we will see LLMs cross a threshold similar to the one medical-image recognition crossed in surpassing primary care physicians, and at some point physician interaction will in fact become meddlesome.



