Weirdly, when they get to the programmer productivity bit they decide to compare _reported_ times for the scripters and _measured_ times for the non-scripters. Then they go on to try to justify doing this.
Agreed -- this puzzled me as well. I don't really doubt that scripting languages are faster in terms of coding time, but why not just measure them, or at least have a third party confirm the developers' times?
In any other study of workers, I'm pretty sure even the author would agree that self-reported data invalidates the results (especially when all of the "best" scores were self-reported).
E.g., the ten chosen Honda dealerships reported that they can fix a Honda in 10-23 minutes, compared to our measurements showing that Ford dealerships take about an hour and a half to fix a Ford.
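To make the objection concrete, here's a toy simulation. It's purely illustrative: the bias factor and the times are made-up assumptions, not numbers from the study. It just shows how mixing self-reported times for one group with measured times for the other can manufacture a gap where none exists:

    import random

    random.seed(0)

    # Hypothetical setup: suppose both groups take the same true time,
    # but self-reports understate effort by ~30% on average.
    TRUE_MEAN_HOURS = 10.0
    REPORTING_BIAS = 0.7   # self-reported time = ~70% of actual time

    def actual_times(n):
        return [random.gauss(TRUE_MEAN_HOURS, 2.0) for _ in range(n)]

    measured = actual_times(30)  # non-scripters: measured by a third party
    reported = [t * REPORTING_BIAS for t in actual_times(30)]  # scripters: self-reported

    print(f"non-scripters (measured):  {sum(measured) / len(measured):.1f} h")
    print(f"scripters (self-reported): {sum(reported) / len(reported):.1f} h")

Both groups are identical by construction, yet the mixed methodology makes the self-reporting group look roughly 30% faster.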
You have to give this guy some credit for at least trying, but the standing problem with these kinds of studies is that there are just too many variables. For instance, this particular study consisted of only one program. It also doesn't control for programmer proficiency very well; he tried, but environmental factors are going to skew those results (classes taken, work environment and language of choice, sample-selection bias in all of the above, etc.). Also, as a previous poster mentioned, the data is from 2000, and Java has gained serious performance ground since then. I would be interested to see a few more studies like this: even if they aren't perfect, they remind people that there is a huge difference in productivity between languages, and that it dwarfs the difference in performance.
Maybe we could get a version where the task was to implement a decent-sized project requiring several developers, to emphasise the code-quality angle? That would take a lot of bored programmers, though.
When you post old news, please add the date! This article is interesting, but much of it is obsolete given that the languages and their implementations have evolved quite a bit since 2000.
Why didn't they just measure them all?