> and thus debunk one of the greatest virtual machine myths of all - the blind faith in the use of jitting as a performance accelerator.
It's not blind faith, it's demonstrated in pretty much every situation where an interpreter and a JIT are both available for the same language. All of the fastest VMs out there today (V8, LuaJIT, JVM, etc.) are based on a JIT.
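This is easy to check for oneself in a language that has both: CPython is a plain bytecode interpreter, and PyPy is a tracing JIT for the same language. A minimal sketch (the file name bench.py and the iteration count are arbitrary choices, and the exact ratio will vary by machine and version):

```python
# bench.py -- time a numeric loop, then run the same file under CPython
# (a bytecode interpreter) and PyPy (a tracing JIT) and compare.
import time

def accumulate(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
accumulate(10_000_000)
print(f"{time.perf_counter() - start:.3f} s")

# Run as:  python3 bench.py    (interpreter)
#          pypy3 bench.py      (JIT)
```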
> The performance level made possible by the Nostradamus Distributor also means that [interpreter-based] Bochs could, purely interpreted, pretty much match the performance of [JIT-based] QEMU, although I admit that is still left to actually implement and try out.
Just for the record: while I agree at least in principle with your assessment of interpreters vs. JIT compilers, the situation seems far from clear, and I think the last word has not yet been spoken on this topic.
As far as I am concerned, most of the benchmark suites out there give an unfair advantage to JIT compilers. For example, all numeric JavaScript and Python benchmarks can be heavily optimized by JIT compilers, essentially removing all of their interpreters' weaknesses: (un-)boxing, dynamic typing, and, in the case of Python, reference counting, plus the interpreter overhead itself [i.e., instruction dispatching]. Many of the benchmarks are numerical in nature, too, even though real-world workloads are usually not. So it might very well be that your actual workload does not use any of the fancy numerical operations that a JIT can optimize heavily. In such a case, the additional memory consumed by the code caches and by a generational garbage collector may in fact not buy you any practical speedup over a sophisticated interpreter using plain reference counting (which is known to be very space efficient).
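To make those interpreter costs concrete, here is a toy bytecode interpreter, a minimal sketch not modeled on any particular VM: every instruction pays a dispatch branch, and every ADD pays a dynamic type check plus (re-)boxing of its operands, which are exactly the overheads a JIT compiles away on a hot numeric loop.

```python
# A toy stack-based bytecode interpreter, illustrating dispatch, type
# checks, and boxing overheads paid on every single instruction.
PUSH, ADD, PRINT, HALT = range(4)

def run(code):
    stack, pc = [], 0
    while True:
        op = code[pc]; pc += 1                # dispatch on every instruction
        if op == PUSH:
            stack.append(code[pc]); pc += 1   # operands stay boxed objects
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            if not (isinstance(a, int) and isinstance(b, int)):
                raise TypeError("ADD expects two ints")  # dynamic type check
            stack.append(a + b)               # unbox, add, re-box the result
        elif op == PRINT:
            print(stack.pop())
        elif op == HALT:
            return

run([PUSH, 2, PUSH, 40, ADD, PRINT, HALT])    # prints 42
```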
Aside from this unfair skewing of benchmarks towards numerical computations, there are other points to consider in the discussion of JIT vs. interpreters, such as energy consumption and overall memory footprint. Does a programming language implementation using a JIT subsystem require more or less memory than an interpreter? (I am positive that some companies have already measured this, but there are AFAIK no publications concerning this important question.)
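The memory half of that question is at least straightforward to measure on a POSIX system. A sketch (the interpreter and JIT command lines in the usage comment are placeholders for whichever two implementations you want to compare):

```python
# peakrss.py -- run a command as a child process and report its peak RSS.
import resource
import subprocess
import sys

subprocess.run(sys.argv[1:], check=True)      # run the VM under test
rss = resource.getrusage(resource.RUSAGE_CHILDREN).ru_maxrss
print(f"peak RSS: {rss} (kB on Linux, bytes on macOS)")

# Usage:  python3 peakrss.py python3 bench.py
#         python3 peakrss.py pypy3 bench.py
```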
Summing up, I think, as is so often the case in computer science, that which of the two techniques gives the best results depends heavily on a) what trade-offs you/your customers are willing to make [space vs. time] and b) the actual nature of your workload [numerical vs. non-numerical].
> It's not blind faith, it's demonstrated in pretty much every situation where an interpreter and a JIT are both available for the same language. All of the fastest VMs out there today (V8, LuaJIT, JVM, etc.) are based on a JIT.

> The performance level made possible by the Nostradamus Distributor also means that [interpreter-based] Bochs could, purely interpreted, pretty much match the performance of [JIT-based] QEMU, although I admit that is still left to actually implement and try out.
This sounds like blind faith to me.