
> This is why black box AIs should not be tolerated. Making a decision is one thing. Being able to explain that decision is something else.

This is basically not possible with deep learning. Perhaps an alternative is to require organisations using AI systems like this to define policies around how they make decisions, and then allow consumers to hold them to those policies.

e.g. a policy of not discriminating based on race, then checking that they don't and punishing them if they do. They could still use an AI system, perhaps even a racist one, if they control for it correctly.

Mandating technological details rarely works: it's hard to police and doesn't keep up with technology. Mandating the outcomes, however, can work.
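
To make that concrete, here's a rough sketch of what an outcome audit could look like (Python; the groups, data and the 0.8 threshold are all illustrative, loosely modelled on the US "four-fifths rule" for disparate impact):

  # Rough sketch of an outcome audit. Names, data and the 0.8
  # threshold (the "four-fifths rule") are illustrative only.
  from collections import defaultdict

  def selection_rates(decisions):
      # decisions: iterable of (group, approved) pairs
      totals, approvals = defaultdict(int), defaultdict(int)
      for group, approved in decisions:
          totals[group] += 1
          approvals[group] += int(approved)
      return {g: approvals[g] / totals[g] for g in totals}

  def passes_four_fifths(decisions, threshold=0.8):
      # flag disparate impact: every group's selection rate must be
      # at least `threshold` times the highest group's rate
      rates = selection_rates(decisions)
      highest = max(rates.values())
      return all(r / highest >= threshold for r in rates.values())

  decisions = [("a", True), ("a", True), ("a", False),
               ("b", True), ("b", False), ("b", False)]
  print(selection_rates(decisions))     # {'a': ~0.67, 'b': ~0.33}
  print(passes_four_fifths(decisions))  # False: 0.33/0.67 = 0.5 < 0.8

The point being: an audit like this only needs the decisions and the groups, not the model internals, so it works just as well on a black box.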


