Hacker News

My colour blindness is relatively mild, but I struggled with many of the charts. I'd suggest putting the labels on the actual lines, either within the chart area or to one side, next to where each line terminates.

It sounds trivial, but with 20+ charts that I had to stick my face up to the screen for, I gave up after about 5.

Regarding "Due to SPEC rules governing test repeatability (which cannot be guaranteed in virtualized, multi-tenant cloud environments), the results below should be taken as estimates": I'd like to have seen some attempt to mitigate this with, say, some kind of averaging of x tests over different instances. Although I understand this would have extended an already long test process.



The SPEC CPU 2006 estimate comment is a legal requirement for any public results that cannot be guaranteed to be repeatable (which will never be the case in a virtualized, multi-tenant cloud environment). The results provided are based on the mean from multiple iterations on multiple instances from each service. If you were to run it on these same instances your results should be similar. The post also provides standard deviation analysis for all SPEC CPU runs on a given instance type to demonstrate the type of results variability you might expect.
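The averaging and variability analysis described above can be sketched roughly as follows (the instance names and scores are hypothetical, not taken from the study):

```python
import statistics

# Hypothetical SPEC CPU 2006 estimates (ratios) from multiple
# iterations on multiple instances of the same instance type.
runs = {
    "instance-1": [34.2, 33.9, 34.5],
    "instance-2": [33.1, 33.4, 33.0],
    "instance-3": [35.0, 34.8, 34.9],
}

# Pool all iterations across instances, then report the mean as the
# published estimate and the standard deviation as a measure of the
# run-to-run variability you might expect on that instance type.
scores = [s for iterations in runs.values() for s in iterations]
mean = statistics.mean(scores)
stdev = statistics.stdev(scores)
print(f"estimate: {mean:.2f}  stdev: {stdev:.2f}")
```

A small standard deviation relative to the mean suggests your own runs on equivalent instances should land close to the published estimate.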


I'm really impressed with the amount of effort that has gone into this study, but those charts are just awful. Those line charts should be bar charts, IMHO; I found myself scanning each chart left to right before realising that the x-axis values are actually categories.



