> One panelist said that a startup they were working with wants to use AI and NLP to help doctors send claims to insurance companies in a way that won't get them rejected. Another panelist said that he was working with another startup that wants to use AI and NLP to help insurance companies reject claims.
But quite possibly "greater efficiency" according to a fitness function that's not accurately mapped onto "keeping humans alive"...
I wonder if this'll end up in an equivalent state to the (likely apocryphal) "tank detection neural net", which achieved 100% accuracy by learning that the researchers/trainers had taken all the tank pictures on cloudy days and all the no-tank pictures on sunny days? ( https://www.jefftk.com/p/detecting-tanks )
Who'd bet against the doctor/insurer neural net ending up approving every procedure where, say, the doctor gets a kickback from a drug company - instead of optimising for maximum human health benefit?
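The failure mode above is easy to reproduce in miniature. Here's a toy sketch (all names and numbers invented, not from any real system): the label is "tank present", but in the training set it's perfectly confounded with brightness (cloudy vs sunny), and a lazy single-feature learner latches onto the confound - scoring perfectly in training and falling apart once the confound is removed.

```python
import random

random.seed(0)

# Toy version of the (possibly apocryphal) tank story: the label is
# "tank present", but in the training data it is perfectly confounded
# with image brightness (cloudy vs sunny day).
def make_example(tank, confounded=True):
    # Feature 0: brightness -- spurious; dark iff tank, but only in training.
    # Feature 1: turret pixels -- the genuinely predictive feature.
    brightness = random.gauss(0.3 if (tank and confounded) else 0.7, 0.05)
    turret = random.gauss(1.0 if tank else 0.0, 0.1)
    return (brightness, turret), tank

def fit(data):
    """Pick the single (feature, threshold, sign) rule with the best
    training accuracy -- a stand-in for any optimiser chasing a
    fitness function that isn't the thing you actually care about."""
    best = (0.0, 0, 0.0, 1)
    for f in (0, 1):
        for thr in sorted(x[f] for x, _ in data):
            for sign in (1, -1):
                acc = sum((sign * (x[f] - thr) > 0) == y
                          for x, y in data) / len(data)
                if acc > best[0]:
                    best = (acc, f, thr, sign)
    return best

def accuracy(rule, data):
    _, f, thr, sign = rule
    return sum((sign * (x[f] - thr) > 0) == y for x, y in data) / len(data)

train = [make_example(t) for t in [True, False] * 50]
# Test set with the confound removed: tanks photographed on sunny days too.
test = [make_example(t, confounded=False) for t in [True, False] * 50]

rule = fit(train)
print("feature used:", "brightness" if rule[1] == 0 else "turret")
print("train accuracy:", accuracy(rule, train))
print("test accuracy (confound removed):", accuracy(rule, test))
```

Both features separate the training data perfectly, so the learner has no reason to prefer the "right" one; it reports a perfect training score while having learned cloud detection.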
>But quite possibly "greater efficiency" according to a fitness function that's not accurately mapped onto "keeping humans alive"...
Since when was this ever the case? Especially in America? The US healthcare system is NOT built around providing adequate care for everyone, as far as I've read/heard.
Sounds like a GAN in meatspace.
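The quip can be sketched as an adversarial loop: one model makes claims more convincing, the other raises the bar for rejecting them, and each side updates against the other's last move. This is purely illustrative - the update rules and constants below are made up, not from either startup.

```python
import random

random.seed(1)

submitter_quality = 0.2   # how convincing the submitter's claims look
rejecter_bar = 0.5        # how convincing a claim must look to pass

history = []
for step in range(200):
    claim = submitter_quality + random.gauss(0, 0.05)
    approved = claim > rejecter_bar
    # Adversarial updates: the submitter improves after each rejection;
    # the rejecter tightens after each approval, relaxes a little after
    # each rejection (it can't reject everything forever).
    if approved:
        rejecter_bar += 0.01
    else:
        submitter_quality += 0.01
        rejecter_bar -= 0.005
    history.append(approved)

late_rate = sum(history[-100:]) / 100
print("late approval rate:", late_rate)
print("gap between sides:", abs(submitter_quality - rejecter_bar))
```

As in a GAN, neither side "wins": the two parameters ratchet upward together and the approval rate settles into an oscillation around an equilibrium rather than converging - ever more sophisticated claims meeting ever more sophisticated rejections.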