Googler Drops Windows Zero-Day, Microsoft Unhappy (threatpost.com)
73 points by ukdm on June 10, 2010 | 47 comments


Before anyone starts talking about full disclosure or Microsoft or whatnot, just know that whether you agree with Tavis' logic or not, not giving Microsoft a couple weeks to patch a remote is a breach of protocol.

Tavis is one of my favorite researchers to watch and I'm not going to say anything negative about him or how he played this. But if I was Google, I'd be treading very, very, very carefully here. Google's gotten a lot of talent lately, but Microsoft has them way outgunned. We don't need these two companies setting up opposition labs because of a document that ends with "greetz".

Or, maybe we do. Depends on your politics.

(For what it's worth: http://twitter.com/taviso/status/15887290335)


The idea that Microsoft or Google would create "opposition labs" to attack each other is pretty far out there. You may have been in the jungle of security just a wee bit too long.


Man, that would be awesome. Companies aren't doing a very good job of policing themselves; if they policed each other, customers would win big time.


I dunno, sounds like an ideal anti-competitive tool to kill off potential competitors before they get too big. It'd also encourage behavior similar to patent trolls: let them get big enough, then royally screw them for big monies.


If we proceed on the assumption that the blackhats will find holes the white hats don't find first, it'd still be a good thing. At the limit of this behavior, the only software producer left would be Daniel J. Bernstein, which would be sort of awkward, but if you want things built secure you want them built secure.


I think that while the transition would be rocky, there would be a lot more work done on languages, libraries, and frameworks that are secure by default, rather than insecure by default, and in the end people would end up a lot better trained.

We get insecure software for a variety of reasons, including lack of incentives to produce secure software (or, equivalently, lack of penalties for failing), ignorance, lack of sufficient talent or intelligence to deal with the issues of security, and so on. At least taking care of the first one would have some sort of effect. It would probably also drum some people out of software development, at least professionally, but one could fruitfully argue that anyone intellectually incapable of writing secure software would probably be better employed elsewhere anyhow.


I think you're just reacting to the name I chose. Obviously, plenty of companies large and small run labs staffed with people who do nothing but find flaws in other companies' code. For, uh, all sorts of reasons.


You're insinuating there are groups inside major companies set up with the express purpose of discovering and releasing exploits against competitive products?

Since plenty of companies do this, I'm sure it would be easy for you to show evidence of a few examples.


It would in fact be easy to do that, but I'm not going to.

You are free to believe I am making things up about vulnerability research, which I've been doing professionally since 1995.

(Note that I didn't assert it; I insinuated it.)


Since the comment to which you're replying uses the word "insinuating" I deduce that originally it said "asserting" but after your comment it was edited by staunch without a note to say so. Yes?


He's just crazy! Nah, I did change it, and I should have noted it, my bad.



security "researcher" does something questionable in order to generate publicity, news at 11


Tavis is the real deal, Robert.


His actions indicate otherwise. Surely any halfway mature security researcher realizes the following two facts:

#1 - In excess of 95% of all security patches/updates/workarounds implemented on a Windows platform are those that come through the automatic update process

#2 - Upon disclosure, the universe of script kiddies who never, ever, would have discovered this security exploit on their own has now been handed a gift, which they will exploit with their rudimentary skills.

I'm a strong proponent of responsible disclosure, and really believe that it has made our platforms much, much more secure, both in an absolute sense and in a relative sense.

But, responsible disclosure involves giving the vendor a reasonable amount of time - typically one or two patch cycles, prior to releasing the exploit. Anything other than that is juvenile and besmirches the community of responsible security researchers.


You are unlikely to find anyone in the "community of responsible security researchers" who will say anything negative about Tavis Ormandy. It is way over the top to imply that he's not "halfway mature".

You will, on the other hand, find plenty of people with real reputations in the industry at stake (unlike yours, which is influenced not one whit by anything you say about disclosure) who will be happy to explain why "responsible disclosure" is damaging the industry. It's not even a hard argument to make. The dollar value of a reliable Windows remote is too high to pretend that bona fide researchers are the only people who will find them. Meanwhile, because product managers at large vendors are given the latitude to fix problems on the business' schedule instead of the Internet's, people get to wait 6-18 months for fixes to trivial problems.

Personally, without wading into "responsible" vs. "full" disclosure, I will point out that vulnerability research has made your systems more secure; the manner in which the vulnerabilities were uncovered has very little to do with it. You are more secure now because vendors and customers pay to have software tested before and after shipping it.

(My personal beliefs don't often enter the picture; I represent the interests of my clients, almost all of whom would rather fix things on their own schedule).

This is an issue that reasonable and responsible people disagree about.


Are you trying to argue that, for a "reliable Windows remote":

  A) Security Researchers should not disclose vulnerabilities.

  B) Security Researchers should not disclose vulnerabilities before the vendor has a chance to patch them.

  C) Security Researchers should aggressively disclose on a 30-60 day time frame.

  D) Security Researchers should disclose within a week.

I'll presuppose that your answer will be "E - it depends" - so let's restrict it to this _particular_ vulnerability.

My answer is C), but only because I realize that the squeaky wheel _really_ gets the grease, and that the threat of disclosure really, really inspires developers to Lab, Replicate, Solve, and deploy a fix. My answer is not D), because I believe more harm is done by disclosing vulnerabilities where there is _no chance_ of a patch being completed in time. Tavis Ormandy clearly believed the answer was D) in this case.

Also, of all vendors, Microsoft is actually pretty good (not perfect) about getting regular security patches out on a monthly basis - I'd have to believe that they probably prioritized this one fairly high.


Microsoft releases security updates on a regular schedule, but that doesn't tell us the turnaround from a vulnerability being reported to a fix for that vulnerability being released. A vulnerability could be reported today and not fixed for 10 patch cycles, and it would still look like Microsoft is doing something. I think the turnaround is more relevant than how often patches are released.


If I was doing independent work on Microsoft or Google code, I'd pick (B).

I think the world would be a better place if everyone would do (D).

I agree with you about Microsoft.


That reply contains a rather petty ad hominem attack about the relevance of someone's security industry reputation.

But the argument that "responsible disclosure" is damaging "the industry" is one I would like to hear. Since it's not hard to make, it should be easy to repeat. I'm not sure what "the industry" is here, though.

Some other strange parts of the response: "The dollar value of a reliable Windows remote is too high to pretend that bona fide researchers are the only people who will find them."

Why accuse someone of pretending? I would not be surprised to find that you are right, but you should be able to point at data.

Another problem is use of the passive voice: "product managers at large vendors are given the latitude to fix problems on the business' schedule instead of the Internet's"

Who gives them that latitude?


Pointing out that people are willing to put their money where their mouths are, and that nobody on this particular message board has so far, is not ad hominem. If he's insulted to hear me point out that he's not a security researcher, I'm surprised.

The process of "responsible disclosure" gives product managers latitude, because it effectively dictates that researchers can't publish until the vendor releases a fix. The vendors almost always decide when to release fixes.

When a researcher publishes immediately, vendors are forced to fix problems immediately. A small window of vulnerability is created ("small" relative to the half life of the vulnerability, which depends on all sorts of other things) where less-skilled attackers can exploit the problem against more hosts.

On the other hand, in the "responsible" scenario, many months will invariably pass before fixes to known problems are released. During that longer window, anybody else who finds the same problem (and, obviously, anyone who had it beforehand) can exploit the vulnerability as well.

Furthermore, full disclosure creates a norm in which vendors are forced to allocate more resources to fixing security problems, instead of waiting half a year or more. This costs vendors. But the alternative may cost everyone else more. It depends on how well-armed you think organized crime is.

Finally, Robert, there's the issue nobody ever seems willing to point out. If you disclose immediately, lots of people can protect themselves immediately: by uninstalling or disabling the affected software.
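
To make that trade-off concrete, here's a toy back-of-the-envelope model; every number in it is invented purely for illustration, and none of it comes from anyone in this thread:

  # Toy model: expected attacker-days of exposure under two disclosure regimes.
  # All parameters are made up; the point is that the comparison hinges on how
  # long the unpatched window stays open and who is attacking during it.

  def expected_exposure(window_days, active_attackers):
      # Crude measure: attacker-days during the unpatched window.
      return window_days * active_attackers

  # Full disclosure: a short window forced by publicity, but many low-skill
  # attackers pile on immediately.
  full = expected_exposure(window_days=14, active_attackers=50)

  # "Responsible" disclosure: a long quiet window in which only the few
  # attackers who independently found (or bought) the bug can use it.
  responsible = expected_exposure(window_days=180, active_attackers=5)

  print("full disclosure:          %d attacker-days" % full)
  print("'responsible' disclosure: %d attacker-days" % responsible)

(700 vs. 900 attacker-days with these made-up numbers; flip the assumptions and the conclusion flips with them.)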


It seems weird that you use my first name in your comments. My name is not a secret, but I barely know who you are.

You're still using the passive voice. It's not as if "product managers" are some well-defined group of people. For that matter, neither are "bona-fide security researchers".

When I used the word "researcher" in scare quotes before, I wasn't trying to say this guy is a quack. He obviously isn't. But using the word researcher is overblowing things. In reality, these guys are inspectors. They find real problems, and that is important, but very few of the problems they find are novel in nature. We do continually get the same types of defects reported, so something is wrong, but it has nothing to do with reporting strategies. Change the way we write software--that would be actual research.

The disclosure trade-offs you describe sound plausible, but don't account for the fact that an inspector may find an issue no one else has discovered. They are also backed up by zero data.


"Product managers" are people who hold the title "product manager", and I've singled them out because they own the release schedule. Substitute whichever title you like.

"Bona-fide researchers" are people who find and report flaws in good faith, as opposed to researchers who find flaws and sell them to organized crime. I use the term "bona-fide" because that's what it means: "good faith".

I use the term "researcher" because that's the convention.

That's not why you put scare quotes around the term.

An infinitesimal fraction of all computer crime is ever seriously investigated. If you want hard stats, no argument I can make will satisfy you. I'm fine with that.


1.) Product Managers don't own the release schedule.

2.) Ah, organized crime, but none of it is ever reported. Lack data much? For an "industry" that supposedly values transparency, the total absence of data seems odd.

3.) Why are you telling me why I put scare quotes around the term "researcher"? I told you why. I'm not lying.


1 & 3) I find it curious that you feel free to define your own meaning for scare quotes, but have a problem with him defining "product manager" as "the person who owns the release schedule." I think Lewis Carroll wrote about how that works:

http://en.wikipedia.org/wiki/Humpty_Dumpty#In_Through_the_Lo...

2) You won't find that kind of news on CNN, but it's out there if you know where to look. Try DarkReading or F-Secure's blog, just to get started. There are many botnets out there right now. Even some controlled by Russian mobsters. There are places where you can buy, sell & trade credit card numbers. There's a TON of crime out there, but for people who don't deal with it, all you hear is the occasional, "Company X had a data breach affecting approximately Y users. A company spokesman wants to assure you that everything is all right and that the very same company that allowed this breach to happen will make sure that it quickly vanishes from the public eye."

Finally, since you don't know Thomas, as you indicated in a post further up, you might want to read this:

http://www.darkreading.com/security/management/showArticle.j...

In case you're wondering, it's really not uncommon for people who deal with computer security to take the time to find out who they're talking to online. You might be surprised at how often it proves useful. And I'm practicing what I preach here, because I only know him by reputation.


For what it's worth, in my experience in enterprise administration for a 10,000+ computer network at one company and a 4,000+ computer network at another, computer crime is _never_ reported, rarely receives even a cursory investigation, and a full investigation is forbidden by policy.

I define computer crime as a successful network intrusion where an attacker gains access to the internal network, which occurs at a frighteningly high rate.


> It seems weird that you use my first name in your comments. My name is not a secret, but I barely know who you are.

It's making me feel weird as well.


I hardly think 5 days is enough time for Microsoft to patch this. The guy could have given them two weeks before releasing the exploit.


The article says "Ormandy said protocol handlers are a popular source of vulnerabilities and argued that 'hcp://' itself has been the target of attacks multiple times in the past."

I'm not defending his actions, but I think he's arguing that it's likely that black-hats already know about the vulnerability because it occurs in a heavily targeted attack vector. Therefore, releasing details gives IT professionals the knowledge they need to set up firewalls or do whatever they have to do to protect against the vulnerability while waiting for an official patch from Microsoft.

With that said, perhaps there's a better way to accomplish that goal.
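
For what it's worth, the mitigation circulating at the time was to unregister the hcp:// protocol handler by removing its registry key. Here's a minimal sketch of that idea; it assumes the key is HKEY_CLASSES_ROOT\HCP (per Microsoft's advisory), requires Administrator rights, and you'd want to export a backup of the key first so it can be restored after patching:

  # Sketch: disable hcp:// links by deleting the handler's registry key.
  # Assumes HKEY_CLASSES_ROOT\HCP is the right key; back it up first.
  import winreg

  def delete_key_tree(root, subkey):
      # Delete a registry key and all of its subkeys, children first.
      with winreg.OpenKey(root, subkey, 0, winreg.KEY_ALL_ACCESS) as key:
          while True:
              try:
                  # Always take index 0: indices shift as children are deleted.
                  child = winreg.EnumKey(key, 0)
              except OSError:
                  break
              delete_key_tree(root, subkey + "\\" + child)
      winreg.DeleteKey(root, subkey)

  delete_key_tree(winreg.HKEY_CLASSES_ROOT, "HCP")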


You can make that same argument about a whole lot of different vulnerabilities, for what it's worth. The core issue is that a lot of people practice "responsible disclosure" by custom but don't really buy into the principle of it. Can't blame 'em.


That's an absurd excuse for Ormandy. As I'm sure Ormandy knows, IT pros are a LOT less likely to implement a workaround to this problem than hackers are to exploit it.

Furthermore, MS stated that the provided workaround is insufficient and easily circumvented.

And, he provides no evidence that any blackhats know about this at all.

To me this clearly looked like a way for Google to try to attack MS's security -- this goes hand in hand with their PR stunt of moving Google employees off of Windows due to security.


How do you know that IT pros are less likely to implement a workaround than hackers are to exploit it?

How prevalent is deploying workarounds and mitigations versus deploying patches? I don't know of any research in this area; it would be very interesting to know.


Based on history. There have been several known vulnerabilities that were exploited even though a Windows Update patch had been available for months, because admins didn't update.

Now, take it a step further: here you have an exploit where there is no Windows Update package, and each server has to be manually updated following a procedure from a webpage.

This is a no-brainer to me. Of course if you're looking for double-blind randomized control studies to prove this, well I'm afraid you're in the wrong field.


That was a dick move. Five days is not enough time to go through debugging, coding, reviewing, testing, and deployment for complicated software like Windows. Two to four weeks is more reasonable.


Or, 5 days is enough time, because people need to know it's out there, and the developers have had years to get this particular type of bug right.


The article links to http://threatpost.com/en_us/blogs/does-google-have-double-st... which discusses the ethics of this situation, and makes unspecified allegations that Google does "all sorts of hinky things behind your back" that justifies doing full disclosure against them.

Does anyone have any idea of what their beef is with Google?


Tavis works for Google. He's one of their best-known security researchers.

Tavis published a Microsoft remote. Without waiting for a patch. That is unusual. Most researchers, Tavis included in other situations, don't do that.

Not at all surprising Microsoft took it personally. That doesn't mean it's constructive for MSFT to complain. But, there you go.


You're absolutely right, which makes me wonder if something else could be going on. The article makes it sound like he believed it likely that this exploit is already in the wild, but didn't want to say that directly when he said: "I've concluded that there’s a significant possibility that attackers have studied this component, and releasing this information rapidly is in the best interest of security"

And an active exploit in the wild seems like the kind of thing that would tip the scales. Or maybe he believed that they'd sit on this for 6 months while all the black hats quietly formulated and circulated an exploit.


You gave an excellent answer to the question I didn't ask. I don't need to be told why MSFT is unhappy. I want to know why Robert Hansen is upset enough at Google's practices that he wants full disclosure used against them.


Things to know about Robert Hansen:

* He's a web guy (a very, very good web guy)

* He's had issues with G in the past

* He's doubtless reported things to G in the past

* Tavis is operating in an entirely different universe from him (one in which "lock.cmpxchg8b.com" is clever, not sketchy).

Don't read too much into this.


He's had issues with G in the past

That's what I wanted to know more about. Do you have links where he discusses what he doesn't like about Google?


At this point in the conversation I think you should just ask him. He's a good guy.


I posted a reply on his blog asking him the question. It is currently awaiting moderation.


I understand the anger over this quick-publish.

But I also wonder what Robert Hansen was alluding to with the "all sorts of hinky things behind your back" statement. It seems to be about Google's behavior in prior situations.


I don't know, but just bear in mind that Microsoft and Google deal with a lot of vulnerability reports, and these things can turn into soap operas even on good days.


Reavey confirmed that the issue affects Windows XP and Windows Server 2003 only.

It's 2010; no one should willingly still be using Windows XP unless they like hosting botnets. Can't say Windows is the most intelligent choice for a server platform either.


Some server software only runs on Windows.



