It's an analogy for the second law of thermodynamics: https://en.wikipedia.org/wiki/Second_law_of_thermodynamics. The point is that it's “challenging” and that complex events “evade a straightforward explanation”. That “challenge” is our intuitive experience of the directional arrow of entropy. A similar analogy would be the difficulty of un-cracking an egg. Or imagine comparing one bowl of soup to a second bowl made just by tasting (or distilling) the first: intuitively the two are not likely to taste the same, otherwise a notable episode of Seinfeld would be invalidated.
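If it helps to see the statistical version of that arrow, here's a toy sketch (the classic Ehrenfest urn model, my own illustration, not anything from the linked article): particles hop randomly between two halves of a box, and the Boltzmann entropy climbs toward its maximum and statistically never comes back, which is the un-crackable egg in miniature.

```python
import math
import random

# Toy illustration of the entropy arrow (Ehrenfest urn model).
# N particles start on the left half of a box and hop at random.
# The number of microstates with k particles on the left is C(N, k),
# so S = ln C(N, k) (Boltzmann entropy with k_B = 1). Mixing drifts
# toward k = N/2, where S is maximal, and stays there.
N = 1000
left = N  # all particles start on the left
random.seed(0)

for step in range(5001):
    # pick a random particle and move it to the other side
    if random.randrange(N) < left:
        left -= 1
    else:
        left += 1
    if step % 1000 == 0:
        entropy = math.log(math.comb(N, left))
        print(f"step {step:5d}  left {left:4d}  S = {entropy:.1f}")
```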
What I’ve seen is differential comparisons (e.g., comparing the rate of white detection to black detection, or the difference in certainty scores between the two), but I’d really appreciate it if people could show me the actual certainty numbers on black faces, so I can see whether it’s failing to recognize, misrecognizing, or just less sure than it is on white faces.
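For concreteness, this is the kind of breakdown I mean; all of the records, field names, and scores below are made up for illustration, since the raw numbers are exactly what I haven't seen published:

```python
from statistics import mean

# Hypothetical per-face results: (group, detected?, correct_id?, confidence).
# The three darker-skinned rows show the three distinct failure modes I'm
# asking about: misrecognition, failure to detect, and lower confidence.
results = [
    ("lighter", True,  True,  0.98),
    ("lighter", True,  True,  0.95),
    ("darker",  True,  False, 0.61),  # detected but misrecognized
    ("darker",  False, False, 0.00),  # failed to detect at all
    ("darker",  True,  True,  0.72),  # correct, just less sure
]

for group in ("lighter", "darker"):
    rows = [r for r in results if r[0] == group]
    detected = [r for r in rows if r[1]]
    correct = [r for r in detected if r[2]]
    print(group,
          f"detect rate {len(detected)/len(rows):.2f}",
          f"id accuracy {len(correct)/max(len(detected), 1):.2f}",
          f"mean conf {mean(r[3] for r in detected):.2f}" if detected
          else "no detections")
```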
The evidence is literally in the original article: "Darker-skinned women were the most misclassified group, with error rates of up to 34.7%. By contrast, the maximum error rate for lighter-skinned males was less than 1%"
Regardless of whether the higher error rate is driven by race, gender, or a combination of the two, it's still a huge issue. Granted, that study was from a year ago, and other companies have since improved their facial recognition systems. But an overall precision/accuracy/F1 score doesn't mean much when accuracy varies that much by group. Sure, you can market it as "accurate on white males", but you can't market it as "accurate".
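A quick sketch of why the aggregate number is misleading, with counts I invented to roughly mirror the study's ~1% vs ~35% spread:

```python
# Aggregate accuracy can look fine while one subgroup's error rate is
# enormous, especially when that subgroup is underrepresented in the
# test set. All counts here are invented for illustration.
groups = {
    # group: (n_samples, n_correct)
    "lighter-skinned men":  (1000, 992),  # ~0.8% error
    "darker-skinned women": (100,   65),  # ~35% error
}

total_n = sum(n for n, _ in groups.values())
total_correct = sum(c for _, c in groups.values())
print(f"overall accuracy: {total_correct / total_n:.1%}")  # ~96.1%

for name, (n, correct) in groups.items():
    print(f"{name}: error rate {(n - correct) / n:.1%}")
```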
The semaphore ploy strikes me as much more rooted in information, hence the “cyber” description. Stealing code is just theft, since the object of the theft doesn't generally qualify the act (though in some cases it does).