Usually in these discussions we use the term "Unfriendly AI" instead of "Hostile AI". That gets at an important distinction: these AIs don't want humanity to die; it's just that they don't particularly want humanity to live, and they're presumed capable of seizing all our resources for themselves. Humanity still dies, but only incidentally. The author discusses this point a little, but I think it's important enough to put front and center in our terminology.
It's interesting to think of corporations as being Unfriendly in this sense. The analogy isn't perfect, though: humans make up a corporation's computing substrate, so they're forced to value humans more than the classic Unfriendly AI would.
Is there a third category of AI that doesn't care whether humanity lives or dies, but winds up destroying us anyway through its Lennie-like power (possibly even in the service of humanity)?