I have no objection to static sites, but it is 2011. Computers are beefy, beefy beasts.
Even CMSes with a reputation for being slow will eat almost all conceivable loads for breakfast unless you bork something architecturally like, say, leaving Apache KeepAlive on. (That would similarly kill you if you got on the front page of Reddit with a 1 KB static text file, but people remember CMSes as dying to KeepAlive because CMSes are often written in PHP and the way everyone tells you to configure Apache/PHP is broken by design.)
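To make the KeepAlive point concrete, here's a minimal sketch of the usual fix, assuming the classic prefork-MPM-plus-mod_php setup (the directives are real Apache ones; the timeout value is arbitrary):

    # Each kept-alive connection pins a whole PHP-loaded worker process until
    # KeepAliveTimeout expires, so a traffic spike exhausts MaxClients quickly.
    KeepAlive Off

    # If you really want it on, at least shrink the idle window:
    # KeepAlive On
    # KeepAliveTimeout 2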
You've got billions of operations per second to play with. Read from database, render template, spit to browser is not really that hard. This is even more true if you can cache things, in which case you're the moral equivalent of running a static site from a performance perspective, with the only difference being whose CPU gets used for the single compile step.
I don't think the primary appeal of static sites is their performance. It's about complexity and maintenance.
At a minimum, a LAMP stack requires prompt security patches at all levels of the stack and a working system for automated SQL backups (and, of course, testing your SQL backups to make sure they restore). If you haven't configured your system perfectly (e.g. you forgot to rotate one of your logfiles properly) you'll need to perform more maintenance than that; Murphy's Law implies that you'll be doing that at three in the morning local time. Hopefully you installed an uptime monitor.
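As a sketch of what "a working system for automated SQL backups" minimally looks like (the database name, paths, and s3cmd destination here are placeholders; assume MySQL and cron):

    #!/bin/sh
    # Hypothetical nightly backup script, run from cron at some quiet hour.
    # It proves nothing until you've actually restored one of these dumps.
    set -e
    DAY=$(date +%F)
    mysqldump --defaults-extra-file=/etc/mysql/backup.cnf blog | gzip > "/var/backups/blog-$DAY.sql.gz"
    s3cmd put "/var/backups/blog-$DAY.sql.gz" s3://my-backup-bucket/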
If your CMS contains bugs -- and it does -- that's a slew of additional security patches which you'll have to apply, and occasional broken functionality that you'll have to track down and fix.
Every few years (at most) a new version of Linux will come out. Every few years (at most) a new version of your CMS will come out. The old versions will stop getting fixes, so you'll have to upgrade.
Despite these efforts, there is still a good chance that your CMS will get hacked: LAMP stacks are complex beasts, even out of the box with no customizations. When you do get hacked, what will you do? Turn off the code with the vuln in it? How do you know which bit of code that was?
In theory your $10 per month is buying a hosting provider that will take care of all of the above for you, by leveraging the awesome economy of scale. And, indeed, maybe the correct answer is to use a blog-hosting service. Many people seem to be happy with Tumblr.
But for those of a more control-freaky nature, the dream of the static site is that you'll reduce or eliminate these pains by making all the moving parts as stupid as possible. (You can't perform SQL injection on a site which has no forms or, for that matter, SQL.) In addition to being stupid, static sites are also as generic as possible: You can switch from S3 to another host in minutes, sign up for a CDN in minutes, switch from Apache to nginx to IIS in minutes. Meanwhile you have at least one up-to-date offsite backup at all times by default -- the data for your entire site lives on the box you publish from. Plus you have a perfectly functioning, extremely intuitive dev/staging setup -- you can tinker with a static site offline on your local machine, get your new hacks working, then push live with an rsync and be reasonably assured that the static pages will work the same in production as they do locally.
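The "push live with an rsync" step really is about one line; something like this, where the build script, host, and paths are all placeholders for your own setup:

    # Regenerate locally, eyeball it in a browser, then mirror it up.
    # --delete makes the server an exact copy of the local output directory.
    ./build.sh
    rsync -avz --delete public/ deploy@www.example.com:/var/www/example.com/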
There is a very compelling reason to run a static site: it is much less likely to harbor a security vulnerability. Most CMS sites do.
There is a very compelling reason to run a dynamic site: for most software businesses, that site is the primary venue for all your "marketing engineering", including SEO, A/B testing, interactivity, &c.
The companies that have this dilemma generally have already accepted the architecture of "public site that is separate from their actual application site", which is a step in the direction of mitigating the impact of the security risk.
Those companies would mitigate the impact even further by hosting the public site in a different data center from their application.
Generally speaking, and speaking as a security person with a vested interest in selling everyone on buttoning down security everywhere as much as possible: if the choice is between an "adequately secure CMS site" and "adequately engineered marketing processes", most startups are doing more damage to themselves by not maximizing marketing engineering than they are by exposing themselves to attack.
I'd suggest a middle ground: designate some large, important swath of your public site as sacred and make it static. Host it with stripped-down vanilla Apache or nginx. Then, where you benefit from dynamism, expose another resource (like a small EC2 server) in your URL space to implement it. This sounds complicated but is really very easy, and it gives you some of the benefit of a static site (attacks won't tear down your front page) and all the benefits of a dynamic site.
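In nginx terms the split can be as small as this (the hostname, paths, and backend address are made up):

    server {
        listen 80;
        server_name www.example.com;

        root /var/www/example.com;             # the sacred, static swath

        location /app/ {
            proxy_pass http://10.0.0.5:8080;   # the small dynamic resource
            proxy_set_header Host $host;
        }
    }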
A rule of thumb, by the way: if you sell a web app on the public Internet, and you haven't done something to have that web app formally assessed for security flaws, running a static brochure site for its security benefits is a premature optimization.
I'd suggest a middle ground: designate some large, important swath of your public site as sacred and make it static. Host it with stripped-down vanilla Apache or nginx. Then, where you benefit from dynamism, expose another resource (like a small EC2 server) in your URL space to implement it.
Yes, exactly. This is what I'm actually thinking about.
(I agree that a one-hundred-percent static site is a thought experiment, not a plan; it's not really going to fly in the twenty-first century. I chose the word "dream" in my original post with some care. ;) And "marketing engineering" is a nice concise term to explain why you're going to need dynamic features somewhere on your site, sooner or later, even on something as apparently simple as a personal blog.)
I've seen quite a few websites built on a dynamic CMS with a bunch of optional modules containing tens of thousands of lines of hairy PHP, running on a big pile of heavy-duty high-availability hardware... most of which is apparently there just to be ready to rebuild the Varnish cache as quickly and reliably as possible on the rare occasions when that cache gets cold. (On some sites, the cache never gets cleared except by some unavoidable accident: The performance consequences of trying to rebuild the cold cache under load are too dire to contemplate.) A very high percentage of the actual page loads on such sites are served directly from a Varnish instance, which consumes very few CPU resources and a moderate amount of RAM on a single box.
At some point I began to wonder why we conceptualize such sites as "dynamic sites with a static cache" instead of as "static sites, with a few dynamic elements on some pages, and a perhaps-more-synchronous-than-necessary PHP-based page-generation engine that runs on public-facing servers". There are a bunch of reasons, but I wonder how many of them might ultimately prove to be historical reasons.
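For what it's worth, Varnish's grace mode is the usual answer to the cold-cache-under-load problem: keep expired objects around and serve them stale while the backend regenerates them. A sketch in Varnish 2/3-era VCL, with arbitrary TTLs:

    sub vcl_recv {
        set req.grace = 1h;
    }

    sub vcl_fetch {
        set beresp.ttl = 5m;
        set beresp.grace = 1h;   # serve stale for up to an hour past expiry
    }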
This is exactly why I made my site for my iOS apps static. It's just there to provide information and I don't have to spend time fiddling with it at all unless I want to add content. HTML5 makes it possible to add some dynamism and interactivity without running a server backend (using Disqus for comments, for instance).
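For reference, the Disqus approach is just a placeholder div plus a script include; the snippet looks roughly like this (the shortname is a placeholder, and the exact code their dashboard hands you may differ):

    <div id="disqus_thread"></div>
    <script type="text/javascript">
        var disqus_shortname = 'example-blog';  // placeholder
        (function() {
            var dsq = document.createElement('script');
            dsq.type = 'text/javascript';
            dsq.async = true;
            dsq.src = 'http://' + disqus_shortname + '.disqus.com/embed.js';
            (document.getElementsByTagName('head')[0] ||
             document.getElementsByTagName('body')[0]).appendChild(dsq);
        })();
    </script>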
Whilst CPUs can do billions of operations a second, a traditional hard disk still only does 200 seeks/second at a 5ms seek time. SSDs still aren't common on hosting solutions.