Detailed description of traditional JavaScript benchmarks

It is probably fair to say that JavaScript is the most important technology these days when it comes to software engineering. To many of us who have been into programming languages, compilers and virtual machines for some time, this still comes as a bit of a surprise, as JavaScript is neither very elegant from the language designer’s point of view, nor very optimizable from the compiler engineer’s point of view, nor does it have a great standard library. Depending on who you talk to, you can enumerate shortcomings of JavaScript for weeks and still find another odd thing you didn’t know about. Despite what seem to be obvious obstacles, JavaScript is not only at the core of the web today, it is also becoming the dominant technology on the server/cloud side (via Node.js), and is even finding its way into the IoT space.

That raises the question: why is JavaScript so popular and successful? There is no single great answer to this that I’m aware of. There are many good reasons to use JavaScript today, probably most importantly the great ecosystem that was built around it and the huge amount of resources available today. But all of this is, to some extent, a consequence of its popularity. So why did JavaScript become popular in the first place? Well, it has been the lingua franca of the web for ages, you might say. But that was the case for a long time, and people hated JavaScript with a passion. Looking back in time, it seems the first big JavaScript popularity boosts happened in the second half of the last decade. Unsurprisingly, this was the time when JavaScript engines accomplished huge speed-ups on various different workloads, which probably changed the way that many people looked at JavaScript.

Back in the day, these speed-ups were measured with what is now called traditional JavaScript benchmarks, starting with Apple’s SunSpider benchmark, the mother of all JavaScript micro-benchmarks, followed by Mozilla’s Kraken benchmark and Google’s V8 benchmark. Later the V8 benchmark was superseded by the Octane benchmark, and Apple released its new JetStream benchmark. These traditional JavaScript benchmarks drove amazing efforts to bring a level of performance to JavaScript that no one would have expected at the beginning of the century. Speed-ups up to a factor of 1000 were reported, and all of a sudden using <script> within a website was no longer a dance with the devil, and doing work client-side was not only possible, but even encouraged.

Measuring performance: A simplified history of benchmarking JS
_Source: Advanced JS performance with V8 and Web Assembly, Chrome Developer Summit 2016, @s3ththompson._

Now in 2016, all (relevant) JavaScript engines have reached a level of performance that is incredible, and web apps are (or can be) as snappy as native apps. The engines ship with sophisticated optimizing compilers that generate short sequences of highly optimized machine code by speculating on the types/shapes that hit certain operations (i.e. property accesses, binary operations, comparisons, calls, etc.), based on feedback collected about the types/shapes seen in the past. Most of these optimizations were driven by micro-benchmarks like SunSpider or Kraken, and by static test suites like Octane and JetStream. Thanks to JavaScript-based technologies like asm.js and Emscripten it is even possible to compile large C++ applications to JavaScript and run them in your web browser without having to download or install anything; for example you can play AngryBots on the web out-of-the-box, whereas in the past gaming on the web required special plugins like Adobe Flash or Chrome’s PNaCl.
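To make this speculation a bit more concrete, here is a minimal, engine-agnostic sketch of the kind of code such feedback-driven optimization targets (the function and the object shapes are made up purely for illustration):

function getX(point) {
  // After a few calls with { x, y } objects, an optimizing compiler will
  // speculate that `point` always has that shape and compile this property
  // load down to a single offset read, guarded by a cheap shape check.
  return point.x;
}

getX({ x: 1, y: 2 });        // warms up the feedback for the { x, y } shape
getX({ x: 3, y: 4 });        // still monomorphic: the specialized code stays fast
getX({ y: 5, z: 6, x: 7 });  // different shape: the check fails and the engine
                             // deoptimizes or falls back to slower, generic code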

The vast majority of these accomplishments were due to the presence of these micro-benchmarks and static performance test suites, and to the vital competition that resulted from having these traditional JavaScript benchmarks. You can say what you want about SunSpider, but it’s clear that without SunSpider, JavaScript performance would likely not be where it is today. Okay, so much for the praise… now on to the flip side of the coin: any kind of static performance test - be it a micro-benchmark or a large application macro-benchmark - is doomed to become irrelevant over time! Why? Because the benchmark can only teach you so much before you start gaming it. Once you get above (or below) a certain threshold, the general applicability of optimizations that benefit a particular benchmark will decrease exponentially. For example, we built Octane as a proxy for the performance of real-world web applications, and it probably did a fairly good job at that for quite some time, but nowadays the distribution of time in Octane vs. the real world is quite different, so optimizing for Octane beyond where it is currently is likely not going to yield any significant improvements in the real world (neither for the general web nor for Node.js workloads).

Distribution of time in benchmarks vs. real world
_Source: Real-World JavaScript Performance, BlinkOn 6 conference, @tverwaes._

Since it became more and more obvious that all the traditional benchmarks for measuring JavaScript performance, including the most recent versions of JetStream and Octane, might have outlived their usefulness, we started investigating new ways to measure real-world performance at the beginning of the year, and added a lot of new profiling and tracing hooks to V8 and Chrome. We especially added mechanisms to see where exactly we spend time when browsing the web, i.e. whether it’s script execution, garbage collection, compilation, etc., and the results of these investigations were highly interesting and surprising. As you can see from the slide above, running Octane spends more than 70% of the time executing JavaScript and collecting garbage, while when browsing the web you always spend less than 30% of the time actually executing JavaScript, and never more than 5% collecting garbage. Instead, a significant amount of time goes to parsing and compiling, which is not reflected in Octane. So spending a lot of time optimizing JavaScript execution will boost your score on Octane, but won’t have any positive impact on loading youtube.com. In fact, spending more time on optimizing JavaScript execution might even hurt your real-world performance, since the compiler takes more time, or you need to track additional feedback, thus eventually adding more time to the Compile, IC and Runtime buckets.

Speedometer

There’s another set of benchmarks which try to measure overall browser performance, including JavaScript and DOM performance, with the most recent addition being the Speedometer benchmark. The benchmark tries to capture real-world performance more realistically by running a simple TodoMVC application implemented with different popular web frameworks (it’s a bit outdated now, but a new version is in the making). The various tests are included in the slide above next to Octane (angular, ember, react, vanilla, flight and backbone), and as you can see these seem to be a better proxy for real-world performance at this point in time. Note however that this data is already six months old at the time of this writing, and things might have changed as we optimized more real-world patterns (for example we are refactoring the IC system to reduce overhead significantly, and the parser is being redesigned). Also note that while this looks like it’s only relevant in the browser space, we have very strong evidence that traditional peak performance benchmarks are also not a good proxy for real-world Node.js application performance.

Speedometer vs. Octane
_Source: Real-World JavaScript Performance, BlinkOn 6 conference, @tverwaes._

All of this is probably already known to a wider audience, so I’ll use the rest of this post to highlight a few concrete examples of why I think it’s not only useful, but crucial for the health of the JavaScript community to stop paying attention to static peak performance benchmarks above a certain threshold. So let me run you through a couple of examples of how JavaScript engines can and do game benchmarks.

The notorious SunSpider examples

A blog post on traditional JavaScript benchmarks wouldn’t be complete without pointing out the obvious SunSpider problems. So let’s start with the prime example of a performance test that has limited applicability in the real world: the bitops-bitwise-and.js performance test.

bitops-bitwise-and.js
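Since the test itself is tiny, here is its core, reproduced from memory of the SunSpider source (the exact constants may differ slightly):

// bitops-bitwise-and.js, in essence: repeatedly AND a global value
// with the loop counter, 600000 times.
bitwiseAndValue = 4294967296;
for (var i = 0; i < 600000; i++)
    bitwiseAndValue = bitwiseAndValue & i;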

There are a couple of algorithms that need fast bitwise and, especially in the area of code transpiled from C/C++ to JavaScript, so it does indeed make some sense to be able to perform this operation quickly. However, real-world web pages will probably not care whether an engine can execute bitwise and in a loop 2x faster than another engine. But staring at this code for another couple of seconds, you’ll probably notice that bitwiseAndValue will be 0 after the first loop iteration and will remain 0 for the next 599999 iterations. So once you get this to good performance, i.e. anything below 5ms on decent hardware, you can start gaming this benchmark by trying to recognize that only the first iteration of the loop is necessary, while the remaining iterations are a waste of time (i.e. dead code after loop peeling). This needs some machinery in the JavaScript engine to perform this transformation, i.e. you need to check that bitwiseAndValue is either a regular property of the global object or not present before you execute the script, there must be no interceptor on the global object or its prototypes, etc., but if you really want to win this benchmark, and you are willing to go all in, then you can execute this test in less than 1ms. However, this optimization would be limited to this special case, and slight modifications of the test would probably no longer trigger it.
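In effect, an engine willing to game the test would turn it into something like the following (an illustrative sketch of what loop peeling plus dead code elimination would leave behind, not actual engine output):

// The first iteration (i === 0) computes bitwiseAndValue & 0, which is 0 for
// any value, and every later iteration computes 0 & i, which stays 0. The
// whole 600000-iteration loop therefore collapses to a single store:
bitwiseAndValue = 0;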

Ok, so that bitops-bitwise-and.js test was definitely the worst example of a micro-benchmark. Let’s move on to something more real-worldish in SunSpider, the string-tagcloud.js test, which essentially runs a very early version of the json.js polyfill. The test arguably looks a lot more reasonable than the bitwise and test, but profiling the benchmark for a bit quickly reveals that a lot of time is spent on a single eval expression (up to 20% of the overall execution time for parsing and compiling, plus up to 10% for actually executing the compiled code):

string-tagcloud.js

Looking closer reveals that this eval is executed exactly once, and is passed a JSON-ish string that contains an array of 2501 objects with tag and popularity fields:

([
  {
    "tag": "titillation",
    "popularity": 4294967296
  },
  {
    "tag": "foamless",
    "popularity": 1257718401
  },
  {
    "tag": "snarler",
    "popularity": 613166183
  },
  {
    "tag": "multangularness",
    "popularity": 368304452
  },
  {
    "tag": "Fesapo unventurous",
    "popularity": 248026512
  },
  {
    "tag": "esthesioblast",
    "popularity": 179556755
  },
  {
    "tag": "echeneidoid",
    "popularity": 136641578
  },
  {
    "tag": "embryoctony",
    "popularity": 107852576
  },
  ...
])

Obviously parsing these object literals, generating native code for them and then executing that code comes at a high cost. It would be a lot cheaper to just parse the input string as JSON and generate an appropriate object graph. So one trick to speed up this benchmark is to mess with eval and try to always interpret the data as JSON first, and only fall back to the real parse/compile/execute path if the attempt to read it as JSON failed (some additional magic is required to skip the parentheses, though). Back in 2007, this wouldn’t even have been a bad hack, since there was no JSON.parse, but in 2017 this is just technical debt in the JavaScript engine and potentially slows down legit uses of eval. In fact, updating the benchmark to modern JavaScript

--- string-tagcloud.js.ORIG     2016-12-14 09:00:52.869887104 +0100
+++ string-tagcloud.js  2016-12-14 09:01:01.033944051 +0100
@@ -198,7 +198,7 @@
                     replace(/"[^"\\\n\r]*"|true|false|null|-?\d+(?:\.\d*)?(:?[eE][+\-]?\d+)?/g, ']').
                     replace(/(?:^|:|,)(?:\s*\[)+/g, ''))) {

-                j = eval('(' + this + ')');
+                j = JSON.parse(this);

                 return typeof filter === 'function' ? walk('', j) : j;
             }

yields an immediate performance boost, dropping the runtime from 36ms to 26ms for V8 LKGR as of today, an almost 30% improvement!

$ node string-tagcloud.js.ORIG
Time (string-tagcloud): 36 ms.
$ node string-tagcloud.js
Time (string-tagcloud): 26 ms.
$ node -v
v8.0.0-pre
$ 

This is a common problem with static benchmarks and performance test suites. Today no one would seriously use eval to parse JSON data (also for obvious security reasons, not only for the performance issues), but would rather use JSON.parse, as all code written in the last five years does. In fact, using eval to parse JSON would probably be considered a bug in production code today! So the engine writers’ effort of focusing on the performance of newly written code is not reflected in this ancient benchmark; instead it would be beneficial to make eval unnecessarily smart (and more complex) to win on string-tagcloud.js.

Ok, so let’s look at yet another example: the 3d-cube.js test. This benchmark does a lot of matrix operations, where even the smartest compiler can’t do much besides just executing them. Essentially the benchmark spends a lot of time executing the Loop function and functions called by it.

3d-cube.js

One interesting observation here is that the RotateX, RotateY and RotateZ functions are always called with the same constant parameter Phi.

3d-cube.js

This means that we basically always compute the same values for Math.sin and Math.cos, 204 times each. There are obviously only three different inputs:

  • 0.017453292519943295,
  • 0.05235987755982989, and
  • 0.08726646259971647

So one thing you could do here to avoid recomputing the same sine and cosine values all the time is to cache the previously computed values, and in fact that’s what V8 used to do in the past, and what other engines like SpiderMonkey still do. We removed the so-called transcendental cache from V8 because the overhead of the cache was noticeable in actual workloads where you don’t always compute the same values in a row, which is unsurprisingly very common in the wild. We took serious hits on the SunSpider benchmark when we removed these benchmark-specific optimizations back in 2013 and 2014, but we firmly believe that it doesn’t make sense to optimize for a benchmark while at the same time penalizing real-world use cases in such a way.
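Conceptually, a transcendental cache is just a small memo table in front of Math.sin and Math.cos. A minimal sketch of the idea (purely illustrative, not V8’s actual implementation) might look like this:

// Remember recently seen inputs and their results, and only call the real
// Math.sin on a cache miss. A benchmark that feeds the same three inputs over
// and over hits the cache almost every time; diverse real-world inputs don't.
var sinCache = new Map();

function cachedSin(x) {
  if (sinCache.has(x)) return sinCache.get(x);  // hit: reuse the old result
  var result = Math.sin(x);                     // miss: compute it for real
  if (sinCache.size >= 64) sinCache.clear();    // keep the table bounded
  sinCache.set(x, result);
  return result;
}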

3d-cube benchmark
_Source: arewefastyet.com._

Obviously a better way to deal with the constant sine/cosine inputs is a sane inlining heuristic that balances the costs and benefits of inlining and takes into account different factors, like preferring to inline at call sites where constant folding can be beneficial, as in the case of the RotateX, RotateY, and RotateZ call sites. But this was not really possible with the Crankshaft compiler for various reasons. With Ignition and TurboFan, this becomes a sensible option, and we are already working on better inlining heuristics.

Garbage collection considered harmful

Besides these very test-specific issues, there’s another fundamental problem with the SunSpider benchmark: the overall execution time. V8 on decent Intel hardware currently runs the whole benchmark in roughly 200ms (with the default configuration). A minor GC can take anything between 1ms and 25ms (depending on live objects in new space and old space fragmentation), while a major GC pause can easily take 30ms (not even taking into account the overhead from incremental marking); that’s more than 10% of the overall execution time of the whole SunSpider suite! So any engine that doesn’t want to risk a 10-20% slowdown due to a GC cycle has to somehow ensure it doesn’t trigger GC while running SunSpider.

driver-TEMPLATE.html

There are different tricks to accomplish this, none of which has any positive impact in the real world as far as I can tell. V8 uses a rather simple trick: since every SunSpider test is run in a new <iframe>, which corresponds to a new native context in V8 speak, we just detect rapid <iframe> creation and disposal (all SunSpider tests take less than 50ms each), and in that case perform a garbage collection between the disposal and creation, to ensure that we never trigger a GC while actually running a test. This trick works pretty well, and in 99.9% of cases doesn’t clash with real uses; except every now and then it can hit you hard: if for whatever reason you do something that makes you look like the SunSpider test driver to V8, you can get hit by forced GCs, and that can have a negative effect on your application. So rule of thumb: don’t let your application look like SunSpider!

I could go on with more SunSpider examples here, but I don’t think that’d be very useful. By now it should be clear that optimizing further for SunSpider above the threshold of good performance will not reflect any benefits in the real world. In fact, the world would probably benefit a lot from not having SunSpider any more, as engines could drop weird hacks that are only useful for SunSpider and can even hurt real-world use cases. Unfortunately SunSpider is still being used heavily by the (tech) press to compare what they think is browser performance, or even worse, to compare phones! So there’s a certain natural interest from phone makers, and from Android in general, in having Chrome look somewhat decent on SunSpider (and other nowadays meaningless benchmarks, FWIW). The phone makers generate money by selling phones, so getting good reviews is crucial for the success of the phone division or even the whole company, and some of them even went as far as shipping old versions of V8 in their phones that had a higher score on SunSpider, exposing their users to all kinds of unpatched security holes that had long been fixed, and shielding their users from any real-world performance benefits that come with more recent V8 versions!

Galaxy S7 and S7 Edge review: Samsung's finest get more polished
_Source: Galaxy S7 and S7 Edge review: Samsung’s finest get more polished, www.engadget.com._

If we as the JavaScript community really want to be serious about real world performance in JavaScript land, we need to make the tech press stop using traditional JavaScript benchmarks to compare browsers or phones. I see that there’s a benefit in being able to just run a benchmark in each browser and compare the number that comes out of it, but then please, please use a benchmark that has something in common with what is relevant today, i.e. real world web pages; if you feel the need to compare two phones via a browser benchmark, please at least consider using Speedometer.

Cuteness break!

I always loved this in Myles Borins’ talks, so I had to shamelessly steal his idea. So now that we’ve recovered from the SunSpider rant, let’s go on to check the other classic benchmarks…

The not so obvious Kraken case

The Kraken benchmark was released by Mozilla in September 2010, and it was said to contain snippets/kernels of real-world applications, and to be less of a micro-benchmark than SunSpider. I don’t want to spend too much time on Kraken, because I think it wasn’t as influential on JavaScript performance as SunSpider and Octane, so I’ll highlight one particular example from the audio-oscillator.js test.

audio-oscillator.js

So the test invokes the calcOsc function 500 times. calcOsc first calls generate on the global sine Oscillator, then creates a new Oscillator, calls generate on that, and adds it to the global sine oscillator. Without going into detail about why the test is doing this, let’s have a look at the generate method on the Oscillator prototype.

audio-oscillator-data.js
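The original post shows the method as a screenshot; paraphrased, the hot part of it has roughly this shape (an approximation of the Kraken source, so field names and the exact arithmetic may differ):

Oscillator.prototype.generate = function() {
  var offset = 0;
  for (var i = 0; i < this.bufferSize; i++) {
    var step = i * this.frequency * this.waveTableLength / this.sampleRate;
    // The modulus below is what ends up dominating the profile; note that
    // waveTableLength is 2048 for every Oscillator the benchmark creates.
    offset = Math.round(step) % this.waveTableLength;
    this.buffer[i] = this.waveTable[offset];
  }
  return this.buffer;
};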

Looking at the code, you’d expect this to be dominated by the array accesses, the multiplications or the Math.round calls in the loop, but surprisingly what completely dominates the runtime of Oscillator.prototype.generate is the offset % this.waveTableLength expression. Running this benchmark in a profiler on any Intel machine reveals that more than 20% of the ticks are attributed to the idiv instruction that we generate for the modulus. One interesting observation, however, is that the waveTableLength field of the Oscillator instances always contains the same value, 2048, as it’s only assigned once in the Oscillator constructor.

audio-oscillator-data.js

If we know that the right hand side of an integer modulus operation is a power of two, we can obviously generate way better code and completely avoid the idiv instruction on Intel. So what we needed was a way to get the information that this.waveTableLength is always 2048 from the Oscillator constructor to the modulus operation in Oscillator.prototype.generate. One obvious way would be to rely on inlining everything into the calcOsc function and let load/store elimination do the constant propagation for us, but this would not work for the sine oscillator, which is allocated outside the calcOsc function.
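The underlying identity is simple: for a power-of-two divisor, the modulus reduces to a bitwise and with the divisor minus one, as long as the sign of the left hand side is preserved. In plain JavaScript terms (an illustrative sketch, not engine code):

// For rhs === 2 ** k, lhs % rhs can be computed without a division.
function modPowerOfTwo(lhs, rhs) {
  var mask = rhs - 1;            // e.g. 2048 - 1 === 0x7ff
  return lhs >= 0
      ? lhs & mask               // fast path for non-negative lhs
      : -(-lhs & mask);          // preserve the sign of the left hand side
}

modPowerOfTwo(5000, 2048);   // 904, same as 5000 % 2048
modPowerOfTwo(-5000, 2048);  // -904, same as -5000 % 2048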

So what we did instead was add support for tracking certain constant values as right-hand side feedback for the modulus operator. This does make some sense in V8, since we track type feedback for binary operations like +, * and % on uses, which means the operator tracks the types of the inputs it has seen and the types of the outputs that were produced (see the slides from the recent round table talk on Fast arithmetic for dynamic languages for some details). Hooking this up with fullcodegen and Crankshaft was even fairly easy back then: the BinaryOpIC for MOD can also track known power-of-two right hand sides. In fact, running the default configuration of V8 (with Crankshaft and fullcodegen)

$ ~/Projects/v8/out/Release/d8 --trace-ic audio-oscillator.js
[...SNIP...]
[BinaryOpIC(MOD:None*None->None) => (MOD:Smi*2048->Smi) @ ~Oscillator.generate+598 at audio-oscillator.js:697]
[...SNIP...]
$ 

shows that the BinaryOpIC is picking up the proper constant feedback for the right hand side of the modulus, and properly tracks that the left hand side was always a small integer (a Smi in V8 speak), and that we also always produced a small integer result. Looking at the generated code using --print-opt-code --code-comments quickly reveals that Crankshaft utilizes the feedback to generate an efficient code sequence for the integer modulus in Oscillator.prototype.generate:

[...SNIP...]
                  ;;; <@80,#84> load-named-field
0x133a0bdacc4a   330  8b4343         movl rax,[rbx+0x43]
                  ;;; <@83,#86> compare-numeric-and-branch
0x133a0bdacc4d   333  3d00080000     cmp rax,0x800
0x133a0bdacc52   338  0f85ff000000   jnz 599  (0x133a0bdacd57)
[...SNIP...]
                  ;;; <@90,#94> mod-by-power-of-2-i
0x133a0bdacc5b   347  4585db         testl r11,r11
0x133a0bdacc5e   350  790f           jns 367  (0x133a0bdacc6f)
0x133a0bdacc60   352  41f7db         negl r11
0x133a0bdacc63   355  4181e3ff070000 andl r11,0x7ff
0x133a0bdacc6a   362  41f7db         negl r11
0x133a0bdacc6d   365  eb07           jmp 374  (0x133a0bdacc76)
0x133a0bdacc6f   367  4181e3ff070000 andl r11,0x7ff
[...SNIP...]
                  ;;; <@127,#88> deoptimize
0x133a0bdacd57   599  e81273cdff     call 0x133a0ba8406e
[...SNIP...]

So you see we load the value of this.waveTableLength (rbx holds the this reference), check that it’s still 2048 (hexadecimal 0x800), and if so just perform a bitwise and with the proper bitmask 0x7ff (r11 contains the value of the loop induction variable i) instead of using the idiv instruction (paying proper attention to preserve the sign of the left hand side).

The over-specialization issue

So this trick is pretty damn cool, but as with many benchmark-focused tricks, it has one major drawback: it’s over-specialized! As soon as the right hand side ever changes, all optimized code has to be deoptimized (as the assumption that the right hand side is always a certain power of two no longer holds), and any further optimization attempts have to use idiv again, as the BinaryOpIC will most likely report feedback in the form Smi*Smi->Smi then. For example, let’s assume we instantiate another Oscillator, set a different waveTableLength on it, and call generate on it; then we’d lose 20% performance even though the actually interesting Oscillators are not affected (i.e. the engine does non-local penalization here).

--- audio-oscillator.js.ORIG    2016-12-15 22:01:43.897033156 +0100
+++ audio-oscillator.js 2016-12-15 22:02:26.397326067 +0100
@@ -1931,6 +1931,10 @@
 var frequency = 344.53;
 var sine = new Oscillator(Oscillator.Sine, frequency, 1, bufferSize, sampleRate);

+var unused = new Oscillator(Oscillator.Sine, frequency, 1, bufferSize, sampleRate);
+unused.waveTableLength = 1024;
+unused.generate();
+
 var calcOsc = function() {
   sine.generate();

Comparing the execution times of the original audio-oscillator.js and the version that contains an additional unused Oscillator instance with a modified waveTableLength shows the expected results:

$ ~/Projects/v8/out/Release/d8 audio-oscillator.js.ORIG
Time (audio-oscillator-once): 64 ms.
$ ~/Projects/v8/out/Release/d8 audio-oscillator.js
Time (audio-oscillator-once): 81 ms.
$ 

This is an example of a pretty terrible performance cliff: let’s say a developer writes code for a library, does careful tweaking and optimization using certain sample input values, and the performance is decent. Now a user starts using that library, having read through the performance notes, but somehow falls off the performance cliff, because they are using the library in a slightly different way, i.e. somehow polluting the type feedback for a certain BinaryOpIC, and is hit by a 20% slowdown (compared to the measurements of the library author) that neither the library author nor the user can explain, and that seems rather arbitrary.

Now this is not uncommon in JavaScript land, and unfortunately quite a couple of these cliffs are just unavoidable, because they are due to the fact that JavaScript performance is based on optimistic assumptions and speculation. We have been spending a lot of time and energy trying to come up with ways to avoid these performance cliffs while still providing (nearly) the same performance. As it turns out, it makes a lot of sense to avoid idiv whenever possible, even if you don’t necessarily know (via dynamic feedback) that the right hand side is always a power of two, so what TurboFan does is different from Crankshaft: it always checks at runtime whether the input is a power of two. The general case for signed integer modulus, with the optimization for an (unknown) power-of-two right hand side, thus looks like this (in pseudo-code):

if 0 < rhs then
  msk = rhs - 1
  if rhs & msk != 0 then
    lhs % rhs
  else
    if lhs < 0 then
      -(-lhs & msk)
    else
      lhs & msk
else
  if rhs < -1 then
    lhs % rhs
  else
    zero


Original links

The truth about traditional JavaScript benchmarks | Benedikt Meurer