Last month was particularly busy for browser makers: all three top players by market share released updated versions. While reaching a double-digit version number was no big event for Chrome, with its fast release cycle, the changes in both Firefox and Internet Explorer were huge, covering everything from performance and standards support to the interface. It’s a good time for another round of speed tests to compare what they currently offer. After my last two tests, this time I’m trying something different: I spotted on Twitter a Google Docs spreadsheet where several people contributed their own system configurations and test results, and I’m using it for my quick analysis.
Anyone else want to help complete my browser benchmark doc? http://j.mp/fMj96S
— Edouard Seynaeve (@seynaeve) March 22, 2011
Using data submitted by different people has its challenges: you no longer control the test environment and can get abnormal results. The first column has some suspect results for Internet Explorer 9; if I remember correctly, the 64-bit version has a bug that affects some online benchmarks. And if you follow the comments, you can see that some people contributed results from browser versions other than the latest stable releases – in cell F11, for example, Opera 10 was tested instead of version 11 – so some cleaning up is needed. To do that, I duplicated the original sheet using the ImportRange formula and averaged the test results for each browser and test, excluding the lowest and highest result. It’s a crude method, but it removes most of the outliers from this data set.
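For the curious, here is a minimal sketch of that trimming step in Python; the function name and the sample timings are my own invention for illustration, not values from the spreadsheet:

```python
def trimmed_mean(results):
    """Average a list of benchmark results, dropping the single
    lowest and highest value to reduce the impact of outliers."""
    if len(results) <= 2:
        # Not enough samples to trim anything; fall back to a plain average.
        return sum(results) / len(results)
    trimmed = sorted(results)[1:-1]
    return sum(trimmed) / len(trimmed)

# Hypothetical SunSpider timings in ms (lower is better) for one browser;
# 540.2 plays the role of a bad submission from an uncontrolled machine.
samples = [312.4, 298.7, 305.1, 540.2, 301.9]
print(trimmed_mean(samples))  # averages only the middle three values
```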
The result, live in the table below, should be closer to the average browser performance as experienced by the public. Of course, the sample is far from representative and quite small, especially since almost no one had all the browsers available for testing. Internet Explorer is under-represented, being available only under Windows Vista and 7, so it would be great if some readers contributed more results to the original file, which is public and editable. Even so, there are few surprises: Chrome remains the performance leader in V8 by a solid margin and narrowly wins SunSpider as well. Firefox leads in Kraken, but trails Opera and Safari in SunSpider, despite being a ‘fresher’ release. Internet Explorer is last on all three tests, but that could be down to the very small set of results – only two after removing the extreme values. On my system it actually has the fastest SunSpider time and Chrome, surprisingly, the slowest – my results are in column T, in case you’re wondering.
The fact is, all the latest browser versions are pretty much tied in JavaScript performance. In the graph below you can see the results from the previous table more or less to scale. While in V8 Chrome is still some 7 times faster than the last-placed browser, the differences in the other two tests are much smaller: about 2.5x for Kraken and only 1.75x for SunSpider. Compared to the results from last year, when Internet Explorer was about 9 times slower than Chrome in SunSpider and over 33 times slower in V8, the progress is impressive. In day-to-day use, these differences are probably small enough to go unnoticed. I think the speed race will soon shift to other aspects of browsing, starting with hardware acceleration. It would be interesting to do some testing there as well, but I don’t think there are any good ‘independent’ (or at least semi-independent) test suites available right now.
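As an aside, those ratios come from normalizing each test to its fastest browser. A quick sketch of the calculation, with invented SunSpider timings chosen only to echo the spread mentioned above:

```python
# Express each browser's result relative to the fastest one for a test.
# SunSpider and Kraken report times (lower is better), so the ratio is
# time / best_time; for V8, which reports a score (higher is better),
# you would invert it: best_score / score.
def relative_slowdown(times_ms):
    best = min(times_ms.values())
    return {browser: round(t / best, 2) for browser, t in times_ms.items()}

# Hypothetical SunSpider timings in milliseconds, not the table's data:
sunspider = {"Chrome": 285, "Safari": 300, "Opera": 310, "Firefox": 340, "IE": 500}
print(relative_slowdown(sunspider))
# {'Chrome': 1.0, 'Safari': 1.05, 'Opera': 1.09, 'Firefox': 1.19, 'IE': 1.75}
```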
Another thing that could be done with this data set is to evaluate the impact of system configuration on browser performance, in particular to see whether some browsers do relatively better on older systems or on other operating systems. From what I have calculated so far – unfortunately only offline – the amount of RAM seems to be the most important factor: the more you have, the faster the browser runs JavaScript. This is another analysis best left for a larger data set: there could be a number of other elements at work, so the probability of drawing the wrong conclusion is pretty high.
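One simple way to sanity-check that RAM observation, once the data set grows, would be a correlation coefficient. The rows below are made up for illustration, and the snippet assumes Python 3.10+ for statistics.correlation:

```python
from statistics import correlation  # available since Python 3.10

# Hypothetical (RAM in GB, V8 score) pairs; V8 scores are higher-is-better.
rows = [(2, 3100), (4, 4550), (4, 4720), (8, 5200), (8, 5390), (16, 5610)]

ram_gb = [ram for ram, _ in rows]
scores = [score for _, score in rows]

# Pearson's r near +1 would suggest more RAM goes with higher scores,
# though with a sample this small (and confounders like CPU speed)
# it proves nothing on its own.
print(f"RAM vs V8 score: r = {correlation(ram_gb, scores):.2f}")
```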