
The article:

> five conditions: happy Black/sad white; happy white/sad Black; all white; all Black; and no racial confound

The paper:

> five levels (underrepresentation of black subject images in the happy category, underrepresentation of white subject images in the happy category, black subject images only across both happy and unhappy categories, white subject images only across both happy and unhappy categories, and a balanced representation of both white and black subject images across both happy and unhappy categories)

These are not the same: "happy Black/sad white" implies the races were completely split across the two emotion categories, while the paper only says one race was underrepresented in the happy category. It's impossible to figure out what actually took place from reading the article.
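
For what it's worth, here's my best guess at what the paper's five levels look like as concrete dataset compositions. This is a sketch only: the condition names, the counts, and the 50/50 happy/unhappy split are all my invention, since the excerpt gives none of them.

    # Hypothetical reading of the paper's five training-set conditions.
    # All names and counts are invented for illustration; the excerpt
    # never quantifies "underrepresentation".
    conditions = {
        # Black faces underrepresented in the happy category
        "black_underrep_happy": {"happy":   {"black": 10, "white": 40},
                                 "unhappy": {"black": 25, "white": 25}},
        # white faces underrepresented in the happy category
        "white_underrep_happy": {"happy":   {"black": 40, "white": 10},
                                 "unhappy": {"black": 25, "white": 25}},
        # Black faces only, across both categories
        "black_only":           {"happy":   {"black": 50, "white": 0},
                                 "unhappy": {"black": 50, "white": 0}},
        # white faces only, across both categories
        "white_only":           {"happy":   {"black": 0,  "white": 50},
                                 "unhappy": {"black": 0,  "white": 50}},
        # balanced representation across both categories
        "balanced":             {"happy":   {"black": 25, "white": 25},
                                 "unhappy": {"black": 25, "white": 25}},
    }

Even this is a guess, since the excerpt never says how underrepresented "underrepresentation" actually is.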

In fact, what I'm calling the paper is just an overview of the (third?) experiment, and it doesn't give the outcomes.

The article says "most participants in their experiments only started to notice bias when the AI showed biased performance". So they did, at that point, notice bias? That contradicts the article's own title, which says they cannot identify bias "even in training data". It should say "but only in training data". Unless, of course, the article is getting the results wrong. Which is it? Who knows?


