lionkor's comments | Hacker News

Are medical professionals not usually held accountable, globally speaking?

Lawsuits against medical professionals are difficult, and in many cases impossible, for the average person to win. They are held less accountable than other professions.

Java and JVM all over again

I feel like using an LLM for this is not a good fit, because it's super difficult to verify whether the knowledge it found is true or made up. LLMs are far more willing to commit to a conclusion where a human wouldn't be sure at all, and that seems really important here.

In this case, you verify whether the knowledge was made up by comparing the virtual server's behaviour to the actual server's (sketched below). Having a strong test suite like that is actually the ideal scenario for agentic development.

(It's still incredibly hard to pull off for real, because of complex stateful protocols and edge cases around timing and transfer sizes. Samba took 12 years to develop, so even with LLM help you'd probably still be looking at several years.)
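
A minimal sketch of what that behavioural comparison could look like, in Python. Everything here is made up for illustration: the host names, the ports, and the request bytes are placeholders, not a real protocol exchange.

  import socket

  def query(host: str, port: int, payload: bytes) -> bytes:
      # Send one raw request and return the raw response.
      with socket.create_connection((host, port), timeout=5) as sock:
          sock.sendall(payload)
          return sock.recv(65536)

  # Placeholder request, e.g. replayed from a packet capture of the
  # real server (these bytes are not a real SMB message).
  captured_request = b"\x00\x00\x00\x2f\xfeSMB"

  reference = query("real-server.example", 445, captured_request)
  candidate = query("localhost", 4450, captured_request)

  # The reimplementation passes this case only if it answers
  # byte-for-byte like the real server.
  assert candidate == reference, "virtual server diverges from the real one"

A real harness would replay thousands of captured exchanges and diff them, but the shape of the oracle is the same: the actual server is the ground truth.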


I guess the LLM doesn't need to verify whether what it found is true or made up, but rather just save the request and answer for later, so it can be reviewed by a developer and documented.
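
A toy version of that record-for-review idea, assuming a simple JSONL log; the file name, function, and example strings are all hypothetical:

  import json
  import time

  LOG_PATH = "llm_findings.jsonl"  # hypothetical review log

  def record_finding(question: str, answer: str) -> None:
      # Append the pair so a developer can verify and document it
      # later, instead of anyone trusting the LLM's answer blindly.
      entry = {"ts": time.time(), "question": question, "answer": answer}
      with open(LOG_PATH, "a") as f:
          f.write(json.dumps(entry) + "\n")

  record_finding(
      "What does flag 0x08 in the negotiate request mean?",
      "It appears to signal support for extended security.",  # unverified
  )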

Not only are you wrong (LLMs are horrible at reproducing anything that isn't fairly ABUNDANT in the training data), but it's also quite sad.

AI can write a whole book on anything. You can take anything, even make up a phenomenon, and have an AI write a whole factual-sounding book on it.

How that isn't clearly an indicator to you that it produces loads and loads of BS, I'm really not sure.


It works because if you want some information on React, or say Python, or Prolog, whatever information ChatGPT generates is quickly verifiable, as you have to write code to test it (see the sketch below).

Even better, many times it shows me new insights into doing things.

I haven't bought a book in a while, but I'm reading a lot, like really a lot.
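
As a toy example of that verify-by-running-code loop (my own illustration, not from the thread): if ChatGPT tells you that Python's sorted() is stable, a few lines settle it:

  # Claim to check: sorted() is stable, i.e. items that compare
  # equal keep their original relative order.
  pairs = [("b", 1), ("a", 2), ("b", 0), ("a", 1)]
  result = sorted(pairs, key=lambda p: p[0])

  # Within each key group the input order must be preserved.
  assert result == [("a", 2), ("a", 1), ("b", 1), ("b", 0)]
  print("claim verified")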


All the Americans here arguing why this is a good thing, how your system is so flawed, etc.: remember that this will also be accessible to people in countries with good, free healthcare.

This is going to be the alternative to going to a doctor who is 10 minutes away by car, is entirely and completely free, knows me and my history, and has a couple of degrees. People are going to choose asking ChatGPT instead of their local doctor, who is not only cheaper (!!!) but also actually educated.

People saying this is good because the US system specifically is so messed up and useless are missing that the US makes up ~5% of the world's population. Do you really think a medical tool built for the issues of 5% of the population will be AMAZING and LIFE-SAVING, rather than harmful, for the other 95%? Get a grip.

Not to mention shitty doctors, who exist everywhere, likely using this instead of their own brains. Great work, guys.

I suspect the rationale at OpenAI at the moment is "If we don't do it, someone else will!", which I last heard in an interview with someone who produces and sells fentanyl.


>> This is going to be the alternative to going to a doctor that is 10 minutes by car away, that is entirely and completely free, and who knows me, my history, and has a couple degrees.

Well then I suppose they'd have no need or motivation to use it, right?


They will, because the grassroots marketing will say it's amazing, just like with all the other AI tools.

Same here. Also no "TRY OUR AI NOW" button, no Copilot popups, no feeding all emails into LLM training, no ads (!!!) in the inbox (!!!). Just great value.

He himself said "75%"; nowhere in that thread does he say 3 people. That's why the headline is like that.

What? He could have said 3 if he wanted, but he wanted it to sound worse, so he said 75. I know it's inferable how many people that is, but if the guy laying them off doesn't care to state the number, why should someone else when posting this?

Both of those numbers in isolation don't tell the whole story. Firing 3 people sounds like a Wednesday at a big company; firing 75% of the staff conveys the impact those changes will have on everything about the company. The latter is more useful.

This was expected. People are going to be convinced that this AI knows more than any doctor, will self-medicate, and will die, harm others, harm their kids, etc.

Great work, can't wait to see what's next.


In my experience GPT is uber-careful with health-related advice.

Which makes me think it's likely on the user if what you said actually happened...


Please look at the post. This is about a GPT that is designed to give you health advice, with all the hallucinations, miscommunication, bad training data, and lack of critical thinking (or any thinking, obviously) that implies.

I made something like that: https://github.com/lionkor/ai-httpd

Love it!

Your health data could be used in the future, when technology is more advanced, to infer things about you that we don't even know about, and target you or your family for it.

Health data could also be used now to spot trends and problems that an assembly-line health system doesn't optimize for.

I think in the US, you get out of the system what you put into it - specific queries and concerns with as much background as you can muster for your doctor. You have to own the initiative to get your reactive medical provider to help.

Using your own AI subscription to analyze your own data seems like immense ROI versus a distant theoretical risk.


It feels like everyone is ignoring the major part of the other side's argument. Sure, shared health data can be used against you in the future, but it can also be used to help you right now. Anyone who has had any sort of pain will try any available method to get rid of it. And that's fair when those methods are useful, even at a 50% success rate.
