Shut up and ship (jgc.org)
109 points by jgrahamc on Aug 9, 2010 | hide | past | favorite | 26 comments


We are getting reports of man-in-the-middle attacks on Gmail at some ISPs. However, the government's main focus is not spying on people and decrypting traffic, but censoring opposition websites.

Now, this is much simpler to do. They use layer-7 filtering to drop HTTPS connections. Those of us lucky enough to have a VPS and some basic SSH knowledge are able to use simple SSH tunnels.

However, on the days we had protests, they simply dropped all encrypted connections, which included SSH tunnels.

I hope that bit of information gives you a little insight into what's going on here.


Elliptic curve cryptography? That sounds a bit suspect - if it's true, it's _way_ more than is needed, since the Iranian government isn't going to have the capability of breaking something like 2048-bit RSA or a 512-bit public key.

Elliptic curve cryptography is a very interesting academic subject, but implementations are slow (so not well suited to running on all of your internet traffic). Given that it's also of ludicrously unnecessary strength, the author is either misguided or lying.


1. The NSA says that RSA 2048 should only be used to protect information that needs to remain secret for at most a few years. Nobody should be using RSA 2048 to protect information that could get them jailed/hurt/killed 5 or 10 years from now.

2. The main advantage of ECC is that it is generally faster than RSA, especially for security levels that would require RSA keys larger than 2048 bits.

3. It is meaningless at this time to say that ECC is unnecessarily strong compared to RSA. It is possible to match the security of ECC using RSA at every level. But ECC key sizes grow linearly with the security level, while RSA key sizes grow far faster. For example, to match the strength of AES-128 you need a 256-bit ECC key or a 3072-bit RSA key, whereas to match the strength of AES-256 you need a 512-bit ECC key or a 15,360-bit RSA key.
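The equivalences above can be tabulated directly from NIST SP 800-57; the numbers are standard, but the snippet itself is only an illustrative sketch:

```python
# Approximate NIST SP 800-57 key-size equivalences.
# Keys: symmetric security bits; values: (ECC key bits, RSA modulus bits).
EQUIVALENT_STRENGTH = {
    112: (224, 2048),
    128: (256, 3072),
    192: (384, 7680),
    256: (512, 15360),
}

for sym, (ecc, rsa) in sorted(EQUIVALENT_STRENGTH.items()):
    print(f"{sym}-bit symmetric ~ {ecc}-bit ECC ~ {rsa:,}-bit RSA")
```

Notice that the ECC column is simply twice the symmetric security level, while the RSA column balloons - which is exactly why ECC wins at high security levels.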


Nobody should store information that could get them jailed/hurt/killed 5 or 10 years from now.


Closely related: because ECC is patent-encumbered, relatively little work has gone into its practical side; the mess of patents surrounding the area discourages it.


Almost everybody I know of that is doing real-world cryptography is using or preparing to use ECC. In one or two years I think the typical computer user will be using ECC very frequently. The ECC patents are pretty easy to work around. One very conservative approach is: don't use ECMQV, don't use point compression, only use the Suite B curves, and use Sun's ECC contribution to OpenSSL (as it was created with the benefit of Sun's legal department and probably also reviewed by several other companies), and review the math algorithms used to make sure they were all published before 1994.


Windows Media DRM uses ECC.


These comments regarding ECC are pretty much the exact opposite of reality; ECC is mainstream, and used in plenty of environments because its "ludicrously unnecessary strength" correlates with "smaller, more manageable keys". ECC vs. RSA is a library option setting for pretty much every dev environment in the world.


> but implementations are slow

Well, think about this: one major reason hash functions like SHA-1 aren't recommended for password storage (as opposed to, say, bcrypt) is that they are optimized for speed, so it's faster to generate lookup tables or to iteratively search for the password. So unless it's slow because the implementation itself is sloppy, slowness might actually be a good thing after all.
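The password-hashing half of this argument is easy to demonstrate with the standard library alone (here PBKDF2 via `hashlib.pbkdf2_hmac` stands in for bcrypt, which isn't in the stdlib): a general-purpose hash costs microseconds per guess, while a tunable KDF is deliberately orders of magnitude slower.

```python
import hashlib
import time

password = b"hunter2"
salt = b"0123456789abcdef"

# Fast general-purpose hash: an attacker can test guesses at this speed.
t0 = time.perf_counter()
hashlib.sha1(salt + password).hexdigest()
fast = time.perf_counter() - t0

# Deliberately slow, tunable KDF (stdlib stand-in for bcrypt/scrypt).
# The iteration count is the adjustable cost factor.
t0 = time.perf_counter()
hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
slow = time.perf_counter() - t0

print(f"sha1 guess:   {fast * 1e6:8.1f} microseconds")
print(f"pbkdf2 guess: {slow * 1e3:8.1f} milliseconds")
```

Note that this reasoning is specific to password storage, where slowness hurts the brute-forcer; slow public-key operations on bulk traffic are simply a cost.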


There is absolutely no relation between the openness of source code and its security.

Debian is open source, and yet this bug was introduced and nobody noticed:

http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-0166

Windows is closed source, and yet security researchers manage to study it and find security holes.

The reason is that you don't really need the source code to find security holes; having it helps, but it's far from mandatory.

So what matters?

IMHO, these have a strong influence on the security of a solution:

- Awareness (of security issues generally speaking)

which needs...

- Competence (of the maker(s) - and really, you don't need the whole world; a couple of outstanding engineers is enough)

which needs...

- Solid process (e.g. auditing, tests, validation, management)

which in turn needs...

- Time


There is absolutely no relation between the openness of a source code and its security

I think that's overstating the case a tad.

Take a look again at JGC's very nice summary of the case in question:

Worryingly, Haystack's only 'technical' detail is the following: "We use state-of-the-art elliptic curve cryptography to ensure that these communications cannot be read." Fair enough, but frankly that means nothing. They could be using AES, or RSA, or pretty much any good algorithm and I still wouldn't care. Two reasons: their implementation might be rubbish and enable attacks or their cryptography might be irrelevant because another technique (traffic analysis?) might make breaking Haystack possible. After all, all the Iranian government needs is a list of people running the software.

In this specific case, having the source open would permit cryptography researchers (of sufficient skill and aptitude) to analyze whether or not Haystack is actually doing what it needs to do. Put another way: how do we know that Haystack isn't including a back door to capture all the information passing through their system, to sell on to the Iranian government for a hefty fee? We don't.

I'm not arguing that open source is inherently more secure than closed source; I'm just saying that in some specific cases, there are clear benefits to having the source available for inspection.


Elaborating on your point further: One of those "a little knowledge is a dangerous thing" errors that a security dilettante (like me!... though I'm at least far enough to know about this) can easily make is to assume that because you're using encryption, you're done. You're secure! Hooray!

But of course it's not that simple. You have to consider your entire attack surface, and when dealing with a government that doesn't care if it disappears you for no good reason, there's one hell of an attack surface here. (Note: I neither know nor for the purposes of this post care if Iran is that cavalier, what matters is the existence of governments that are, somewhere, sometime, which I consider pretty likely.) They don't have to prove you've sent subversive stuff behind that encryption. They don't even have to prove you're using the subversive software. They just have to suspect it. Even if we assume the encrypted stuff is perfect, what other tells are there? Characteristic ports? Characteristic communication patterns? Characteristic headers? And even quick solutions to those problems, "oh, we make it look like HTTPS" can have problems of their own, ad infinitum. Are you doing HTTPS to sites that obviously don't serve a website? Are you trying to fake a website in a way easy to characterize? What will you do when the ISP straight-out bans websites on your home computer, making it impossible to mask your traffic that way? (And how suspicious is it for two home users to hit each other's "websites" every few seconds, anyhow?)

These are all issues that a government won't have a problem answering, and there are yet more that they won't have trouble answering. The hostiles here have it easier because their threshold for deciding they have enough information to act is very low. The only way anybody should feel even remotely confident about this is if it is reviewed. (Whereupon it'll probably be discovered to be impossible, IMHO, but that's for reviewers to figure out.)


You can only verify that the software is doing what it is supposed to do by analyzing the resultant binary. And, analyzing the resultant binary can be done with or without the source code. In fact, many security researchers work exclusively on software (often Microsoft Windows) for which they will never be able to see the source code.

http://cm.bell-labs.com/who/ken/trust.html


> There is absolutely no relation between the openness of a source code and its security.

That's simply not true.

Open source doesn't necessarily mean it's always secure, but at least you, or someone you trust, can analyze it and find out. Maybe even fix it. Anyone who understands the domain can verify whether it's secure or insecure by inspecting its inner workings, and in the case of things like crypto, this leads to stronger and more secure implementations.

So it might not always ensure security, but it does ensure the possibility of security. As far as I'm concerned, closed-source crypto might as well be full of backdoors and easily breakable.


http://www.veracode.com/images/pdf/executive_summary_veracod...

Specifically, finding number 3: "Open Source projects have comparable security, faster remediation times, and fewer Potential Backdoors than Commercial or Outsourced software."


The author's argument is that we shouldn't just have to wait for time to determine the security and/or plausibility of this solution. It is probably true that eventually we will know if Haystack is effective or not. But we would know a lot sooner if the code were available.

It's not that there are security bugs or buffer overflows in some software. Everything has bugs, and if the design and idea are good, they can usually be fixed and everything keeps moving along fine. It's that we know essentially nothing about the implementation or design of a program that asks its users to trust it to keep their traffic safe from an oppressive government. I think you'd have to be awfully naive to just take Haystack at its word that it actually works.


As I understand it, Haystack has already 'shipped' code. They 'shipped' it to Iran on USB thumb drives. Austin Heap posted up info not too long after the Iranian elections last year.


> Ship an open source version of your code and let's take a look at it. Let the Iranian government have a look at it.

Skype is famous for being a black box.

http://www.secdev.org/conf/skype_BHEU06.handout.pdf


Skype is not promising to protect human rights.


Yes, but at least they shipped. They didn't drum up a lot of hype first, they just made something that actually worked.


That's irrelevant ... Skype is not vaporware.


How is Skype related to this?


many companies build up the hype before they ship. isn't that just marketing?

i call hypocrisy. jgrahamc is involved with causata: check out their website.

http://www.causata.com/

count the number of times you read:

"This has to change. Stay tuned for more details."

where are the products? what have they shipped?


Causata shipped our software in January 2010. It's not something you can just download, but we would be happy to sell it to you.


which machine learning algorithms is causata shipping to customers? i could not find this (nor the fact you were shipping) from the causata website.

you wrote:

Alas, the Haystack web site has zero technical details.

the causata web site appears to be in the same position.


On the subject of personal encryption I recently found some interesting plausible deniability software [1] in Phrack [2] for protection against "give us the password or you're going to jail" laws

Different passwords decrypt different plaintexts from the same ciphertext.

[1] http://www.winstonsmith.info/julia/elettra/ [2] http://phrack.org/issues.html?issue=65&id=6#article
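The concept can be shown with a toy one-time-pad construction (a sketch of the idea only - this is not Elettra's actual password-based scheme): pick a random ciphertext, then derive one pad per message, so that each pad "decrypts" the same ciphertext to a different plaintext.

```python
import os

def deniable_encrypt(plaintexts):
    """Toy deniable encryption: the ciphertext is pure randomness,
    and each key is a pad derived to map it onto one plaintext."""
    length = max(len(p) for p in plaintexts)
    padded = [p.ljust(length, b"\x00") for p in plaintexts]
    ciphertext = os.urandom(length)
    # pad_i = ciphertext XOR plaintext_i, so ciphertext XOR pad_i = plaintext_i
    pads = [bytes(c ^ p for c, p in zip(ciphertext, pt)) for pt in padded]
    return ciphertext, pads

def deniable_decrypt(ciphertext, pad):
    # Strip the toy null padding after XORing the pad back in.
    return bytes(c ^ k for c, k in zip(ciphertext, pad)).rstrip(b"\x00")

ct, (pad_real, pad_decoy) = deniable_encrypt([b"meet at dawn", b"grocery list"])
print(deniable_decrypt(ct, pad_real))   # b'meet at dawn'
print(deniable_decrypt(ct, pad_decoy))  # b'grocery list'
```

Since the ciphertext is uniformly random, nothing about it favors one pad over the other - handing over the decoy pad is indistinguishable from full compliance.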



