
Granted, arrests are held to a different standard than convictions: they merely require "probable cause" rather than proof of guilt, and that lower standard does make it look like the spam-filtering analogy may fit. But in calculating this new "guilt probability," our spam filter relies increasingly on the "testimony" and "facts" presented by the surveillance database itself, and it is the objectivity of that database in practice (or rather, of the people accessing it) that I am directly calling into question (though I didn't elaborate above).
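To make the spam-filter analogy concrete: a purely illustrative sketch of how a naive-Bayes-style filter might fold separate pieces of database "evidence" into one "guilt probability." Every number here is invented; nothing below describes any real system.

```python
# Illustrative only: combine a prior probability with independent
# likelihood ratios (P(evidence | guilty) / P(evidence | innocent)),
# the way a spam filter scores a message. All numbers are made up.

def combined_probability(prior, likelihood_ratios):
    """Update prior odds with each likelihood ratio, naive-Bayes style,
    and convert the posterior odds back to a probability."""
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# A 1% prior plus three pieces of "evidence" pulled from a database:
p = combined_probability(0.01, [8.0, 5.0, 3.0])
print(round(p, 3))  # 0.548
```

The point of the sketch is that the output is only as objective as the "evidence" fed in: change what the database reports, and the probability moves with it.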

Unfortunately, the database cannot be trusted, by virtue of its centralized nature and administration (even if that centralization is justifiable, for example to protect everyone's privacy). The hardware may be objective, but people are not: people lie, cheat, and steal when they can get away with it, and there are simply too few separate and competing interests to hold the small number of people with access to the database and tools accountable for their inevitably selective use of them. We have seen centralized data collected and turned to private interests in the past (and books censored, and guns regulated, and...), whether by fascist governments, by police protectionism (lying under oath, evidence tampering, racial "profiling"), or through economic fraud. It is human nature to use one's control to one's advantage, and it is simply too tempting for police to shoot first (detain, seize, etc.), especially when it is in their interest, and ask questions later (check the database for cause; use "parallel construction"; take incriminating speech out of context).

It would be worse if that extended all the way to conviction, but it presents the same kind of problem for arrests, detentions, searches, and so on, since it is effectively the word of the administrators (whom we trust not to abuse the data and tools) against the person arrested. The more centralized the data and tools become, the less we can trust them to be applied objectively without accountability.

Unfortunately, there are no checks and balances on absolute power (centralization), so we cannot allow centralization to continue indefinitely. Absolute power corrupts absolutely, and my "thesis" is that arrests are not a suitable application of these tools; the risk is too great. Police already have a high level of responsibility (the authority, training, and tools/weapons to control us by force) and what feels like decreasing accountability (because the kids, because the drugs, because I said so, because I can, because of cronyism, and because wealthy people don't like hearing criticism). Since they are nonetheless "only human," I don't recommend giving them more.

Granted, you are merely describing a potentially objective algorithm, but my point is that the objectivity of any given tool is moot given the human element. Guns don't kill people, people do, and people will continue to do so even with checks and balances (like laws against murder; if prevention were the goal, we fail daily). It is only the distribution of accountability (peer juries, private-key sharing, democratic voting, citizen groups, etc.) that keeps such roles in check.

Anyways, thanks for the opportunity to flesh my thoughts out more.



I guess my theory partly depends on the filter being too sophisticated for any one person to co-opt. We can design machine-learning systems, but there can't be many people capable of wrapping their head around a running one, able to reach in right here, peek/poke some weight, and bam, your nephew is arrested in Texas. On the bright side, most of those people are probably not officers, whom you seem to be most afraid of.
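For what the "poke some weight" concern looks like at the smallest possible scale: a toy logistic classifier where changing a single weight flips the decision for a targeted input. This is a deliberately tiny strawman (real systems have millions of opaque parameters, which is the point being made above); all weights and features are invented.

```python
import math

# Toy "filter": score = sigmoid(w . x). In a model this small, one
# tampered weight flips the decision for a chosen target. The claim
# above is that a sufficiently large, opaque model resists exactly
# this kind of targeted, single-point manipulation.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(weights, features):
    return sigmoid(sum(w * x for w, x in zip(weights, features)))

weights = [0.2, -0.5, 0.1]
target = [1.0, 1.0, 1.0]              # the person someone wants flagged

print(score(weights, target) > 0.5)   # False: below the threshold

tampered = list(weights)
tampered[0] = 3.0                      # one "poked" weight
print(score(tampered, target) > 0.5)  # True: now flagged
```

Whether scale alone actually prevents this (as opposed to merely obscuring it) is exactly the disagreement in the thread.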

As for the objectivity of feeding the filter data, I envision something completely automatic. There would be no selective entry for this or that suspicious person: the filter is fed a database of all people, and perhaps monitors the internet's traffic on its own. Maybe ACH traffic too. Financial crime could be this system's biggest win; computers are far better suited to uncovering financial crime than humans are.

Basically, when it's big enough, sophisticated enough, and automated enough that no one person can fully understand it, it becomes significantly harder to pervert. And, as I mentioned before, it needn't be perfect; our current system is pervertable too (see: "papers, please", racial profiling, etc.), so this one would just need to be less pervertable...



