I do worry about my data with them, but when I think about the worst-case scenario - not getting insured (or paying higher rates) because you have {some genetic condition} - it seems just as likely that insurers will simply require my DNA when I apply for insurance (or pull my DNA from a blood test within their system, etc.).

The obvious solution is legislation for transparency and a better health care system.



One aspect of this tangle is knowledge asymmetry: one of the traditional justifications for insurers poking around is to guard against an applicant who conceals important factors, which is a kind of fraud.

But what about the reverse? There's something intuitively unjust about the customer not knowing why they're being charged a higher rate, especially if it means the company believes there's a potential danger (enough that it affects the bottom line) but conceals it.

So yeah, I think "transparency" is a robust principle to follow here, especially if anyone is arguing market competition is going to curb the worst abuses.


The paradox is that the better insurance companies are at pricing risk, the more irrelevant their business becomes. If they can predict your disease with 100% reliability and charge you based on that, then what is the point of buying insurance? You will be charged either nothing or the full cost of the treatment, just as if you were not insured.

Except, of course, in the US, where scammy hospitals charge over-inflated prices to uninsured patients while only insurers have access to realistic prices.


I think that conflates accurate probabilities with predicting discrete future events.

For example, suppose you can accurately determine that someone has exactly a 5% chance of developing a problem that will cost exactly $X to treat. Find 10,000 of those people, and insurance is still useful for spreading the risk and financing the treatments.
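A back-of-the-envelope sketch of that pooling argument (hypothetical numbers: the $X treatment set to $100,000, a pool of 10,000 members):

    import random

    P, COST, POOL = 0.05, 100_000, 10_000   # 5% risk, $100k treatment, 10k members
    fair_premium = P * COST                  # expected cost per person: $5,000

    # Simulate one year: each member independently falls ill with probability P.
    random.seed(1)
    claims = sum(random.random() < P for _ in range(POOL))
    print(f"premium income: ${fair_premium * POOL:,.0f}")
    print(f"claims paid:    ${claims * COST:,.0f}  ({claims} of {POOL} members)")

With 10,000 members the claim count concentrates near 500 (standard deviation roughly 22), so each member pays about the $5,000 expected cost instead of gambling on a $100,000 bill.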


Health problems and healthcare are not that simple. Almost everyone will experience health issues in their life, so insurance becomes more of a subsidy mechanism that time-shifts costs from one population to another (which at some point becomes indistinguishable from taxpayer-funded healthcare, just with a different administrator).

Also, legislation requires health insurance to pay for all healthcare, so there is no point paying to insure against specific ailments.


This isn't really true in the US; I have specific policies that insure me against certain types of illness. They pay a predetermined cash settlement directly to me in the event of a diagnosis.


The Affordable Care Act does not allow benefit maximums. Necessary healthcare beyond the out-of-pocket maximum is (theoretically) required to be paid for by the insurer.

You might have supplemental insurance that pays you in case of a specific illness, but that is not what is commonly referred to as health insurance.

Can you provide a link to a business selling the type of policy you are referring to? I am curious what these look like.


At 5% you are still only loosely predicting a disease, but with progress in genetics you may one day be predicting with 80% accuracy. The insurance premium will then converge to the cost of treatment.
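Making that convergence concrete (a minimal continuation of the earlier sketch, same hypothetical $100,000 treatment):

    COST = 100_000
    for p in (0.05, 0.50, 0.80, 0.99):
        premium = p * COST  # actuarially fair premium at predicted risk p
        print(f"risk {p:.0%}: premium ${premium:>9,.0f} ({premium / COST:.0%} of treatment cost)")

At 80% predicted risk the fair premium is already $80,000; as prediction accuracy approaches certainty, the premium approaches the full treatment cost and insurance no longer spreads any risk.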


Currently, due to the Affordable Care Act, aka Obamacare, health insurance companies are prohibited from setting rates based on individual risk (except that they can charge higher premiums to tobacco users). Before the law, insurers would typically put applicants through a health screening that would determine their rate. People with pre-existing medical conditions could be denied coverage or charged higher rates. Women routinely paid higher premiums than men.


The analogy I have used in the past: this fear is like thinking that health insurance companies would sooner buy the old Marlboro Miles database than just make a detailed smoking history a required part of the application process.

If these companies have the legal clearance to use DNA data, why would they be satisfied only having secondhand access to that data for a relatively small subset of the population? They'll obviously want that data for everyone.


Yes. Behavioral data is in most cases far more useful to them than DNA. No need to go to the lengths of using the non-coding SNPs in a genealogical test to infer your coding DNA, and from that your propensity for smoking, when they can just find out whether you smoke instead.


> Yes. Behavioral data is in most cases far more useful to them than DNA.

Makes perfect sense to me, to be honest. Outside of certain genetic diseases, the behavioral aspect is what ultimately matters. And AFAIK, in the US a large proportion of the population dies from behavioral causes (of which I would count obesity as an outcome).

A simple example: yes, I would say I am genetically predisposed to develop addictive habits, suffer liver failure from alcohol and drugs, and/or deal with lung problems from smoking (given that this was the fate of multiple men on both sides of my ancestry). Naturally, I believe it is more valuable for an insurance company to know whether I actually engage in those harmful behaviors than to know my genes' estimate of the likelihood that I would. In the absence of the former, the latter could be pretty useful, but otherwise there is no competition at all.

Sure, genes can give you a probability of those behaviors occurring, but going for the primary metric you actually care about rather than a weak predictor of it clearly makes way more sense.


It may have been the lower-hanging fruit, but they already have all the tools there, and behavioral analysis is no longer the competitive edge. They can ask for your smoking status and deny your claim for fraud later (genetic data on behavior only adds to that fraud-investigation value), but without DNA they can't know your propensity for cancers unrelated to smoking.


The data that 23andme collects goes far beyond DNA. It includes explicit family history, which, as it turns out, is pretty close to behavioral information.


It’s much ‘better’ for them to use it as an anti-fraud measure later when someone makes a claim, not early on when people are paying them money.

‘You said on your application that you don’t smoke, but X data dump shows you do. Why?’ Or they just deny your claim for that reason and make you fight it.


> you will not get insured (or have high rates) because you have {some genetic condition}..

s/insured/hired

Wait until we have DNA detectors wired up to collect the DNA we exhale and rapid sequencers that handle what might be below the limit of detection today.

Maybe that's fifty years down the road, but it's coming.

Gattaca was a prescient premonition; it was just a hundred years ahead of its time.


One of us is deeply wrong about how genotypes relate to phenotypes.

While some DNA characteristics can be statistically linked with some costly health conditions, the connection to "being a good hire" seems totally imaginary to me - it always has been and always will be.

For what it's worth, public posts and comments on the internet are probably a much better indicator of whether someone is going to be an obedient employee; that dystopia is technically doable right now, and certainly many are working on it already.


Whether or not the connection exists is not the question. The question is whether someone will make decisions based on that limited information.


DNA samples from the best employees will be collected, evaluated, and compared against applicants'. If your DNA is similar, they will let you in. Wait for a clever startup to offer a complete solution for this comparison soon.


I'm not disputing that it would be doable if it worked, or that some startups will be up for it. I'm saying that DNA is not a predictor of being a "good employee", and if you believe otherwise I'd be interested to know why.


> I'm saying that DNA is not a predictor of being a "good employee"

Sure it is.

Easy case in point: count the chromosomes. The wrong number alters gene dosage, leading to conditions such as Down syndrome.

Neuroticism, among many other personality traits, is heavily genetically linked; obesity and cardiovascular health are genetically linked; ...

Lots of things an employer or insurer would be interested in if our laws do not protect us. Some may eventually be willing to skirt the law if it gives them an edge and the penalties don't outweigh the benefits.

We're not there yet, but give it 25-50 years.


Come on, I don't need a DNA analysis to tell whether someone has Down syndrome or is obese. Again, a Google search teaches me much more about a candidate than his or her DNA.

And as for DNA being used baselessly to make decisions - sure, just as graphology or astrology could be used... but then concealing your DNA is like concealing your handwriting or your exact date of birth. Why not, but then we are no longer talking about privacy protection; we are talking about fighting superstition.


But good employees have DNA that can be analyzed and compared to applicants'. I am not saying there is science behind it. I am also not saying that the "good" employees are the genuinely good ones rather than merely the good-looking ones.


Yes, I can imagine those dystopias - my point was that I don't imagine my choice to try 23andMe in 2019 is what dooms me - while others are saved by not making that choice.


I kind of understand your point; still, my view would be that this is no reason not to delete the data from them, in case deleting actually helps:

1. Why take the chance?

2. Your DNA being out in the wild also impacts the privacy of your relatives (including those you might not know, and those who don't exist yet (a child, a nephew, a niece for instance)), so if not for you, do it for them.

It won't be a guarantee, but it maximizes the chances.


Why take the chance? Because genealogical data is valuable to you, of course. If it isn't, no amount of legal or technical security will make it worth it.

I do think that the health stuff from 23andMe is only marginally better than astrology, and that the ethnicity estimates are inferior to what most people can get from good old-fashioned genealogy, but the matching may be useful if you place a lot of value on knowing who you're related to.


OP said "I do worry about my data with them"; it seems the best they could do to try to resolve this would be to delete the data.

But indeed, that's me thinking that you should care a lot more about your privacy and the privacy of your relatives than anything 23andMe provides.

I mean, I think I would find it fun to discover relatives, but not at this cost.


Why would that matter at hiring time? If the person develops a health issue during employment, they'll just fire them. Unlike insurance, where they'd have to spend money.


50 years down the road AI will have taken all the jobs, so I'm not sure we should be worried about getting hired. That ship will have sailed.


What makes you believe this is coming? What evidence points to this inevitability?


I put a fake name in when I signed up.

Good luck, Blue Cross.


I always wondered why people are so trusting (gullible?) as to use their real data.


If they have enough DNA and not-so-secret genealogical data, they can derive your real name anyway.


They don't even need your DNA. Just your relatives.


Things like financial and medical data should be required to have an audit log that you can see in real time and subscribe to updates for, including extraction into "anonymised" formats, along with a description of that process and format and a justification for why it is robust against deanonymisation. If data is handled well, there is nothing to fear here. Fiddly, perhaps. Expensive, probably. But personal data processing should be risky and expensive.

Deliberately extracting personal data into un-audited environments without a good reason (printing a shipping label, say) should be punished with GDPR-style global-turnover-based penalties and jail for those responsible.
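A minimal sketch of what such a subject-visible audit log could look like (Python, all names hypothetical):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone
    from typing import Callable

    @dataclass(frozen=True)
    class AuditEvent:
        subject_id: str     # whose data was touched
        actor: str          # the system or employee that touched it
        action: str         # e.g. "read", "export_anonymised", "print_label"
        justification: str  # a human-readable reason, required for every touch
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    class AuditLog:
        def __init__(self) -> None:
            self._events: list[AuditEvent] = []
            self._subscribers: dict[str, list[Callable[[AuditEvent], None]]] = {}

        def record(self, event: AuditEvent) -> None:
            self._events.append(event)  # append-only: events are never edited
            for cb in self._subscribers.get(event.subject_id, []):
                cb(event)               # real-time notification to the subject

        def subscribe(self, subject_id: str, cb: Callable[[AuditEvent], None]) -> None:
            self._subscribers.setdefault(subject_id, []).append(cb)

        def history(self, subject_id: str) -> list[AuditEvent]:
            return [e for e in self._events if e.subject_id == subject_id]

    # The data subject sees every touch of their record as it happens.
    log = AuditLog()
    log.subscribe("user-42", lambda e: print(f"[alert] {e.actor}: {e.action} ({e.justification})"))
    log.record(AuditEvent("user-42", "billing-svc", "export_anonymised", "monthly statistics run"))

The two properties argued for above are the append-only history (the subject can always see the full disposition of their data) and the subscription hook (they learn about each extraction as it happens, not on request).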


> Deliberately extracting personal data into un-audited environments without a good reason (printing a shipping label, say) should be punished with GDPR-style global-turnover-based penalties and jail for those responsible.

Such penalties already exist, but only for Europeans, through the GDPR.


Technically not quite: even in the EU, you don't have to provide an audit log for someone's data specifically, and as a subject you have to make specific requests to delete or retrieve your data; it's not made transparent to you as a default position. But yes, you can't just dump it out anywhere you want.

How it should be is that personal data's current and historical disposition is always available to the person in question.

If that's a problem for the company processing the data (other than being fiddly to implement at first), that sounds like the company is up to some shady shit that they want to keep quiet about.

Nothing to hide, nothing to fear should apply here, and companies should be fucking terrified, with an existential dread of screwing up their data handling, always looking for ways to avoid handling PII at all costs. The analogy of PII being like radioactive material is a good one. You need excellent processes and excellent reasons to be doing it in the first place, you must show you can do it safely and securely, and if you fuck up, you'd better hope your process documentation is top tier or you'll be in the dock. Or, better, you can decide that actually you can make do by handling the nuclear material only in some safer form, like encapsulated vitrified blocks, at least for most of your processes.

The data-processing industry has repeatedly demonstrated that it cannot be trusted, and so it should reap the whirlwind.


It doesn't say audited environments as such, but as a baseline you are required to use secure environments that you control. What "secure" means can always be discussed, but in general it depends on what data you process and what you do with it; for large volumes, big populations, or Article 9 data, auditable environments should be expected - though not publicly auditable. Although that would be nice...

Fully agree with what you are saying, and my popcorn is ready for August, when the penalties part of the AI Act comes into force. There is a two-year grace period for certain systems already on the market, but any new model introduced after August this year has to be compliant. AI Act + GDPR will be a great show to watch...



