Benchmark: Rhino vs Chrome V8 on server side

Since this post received a lot of criticism about the measurement method, I’ve created a revised version of it.

The original post continues below:

At my current workplace (and also at my previous one) we’re using JavaScript as a scripting language to handle some special tasks that we wanted to “decouple” from the backend code. In both places we’re using Mozilla’s Rhino for this. Rhino is written entirely in Java and has some nice features, such as host objects and “seamless” transparency between JavaScript and Java objects (at least in the latest version). But what about performance? I’ve seen benchmarks showing that Google’s V8 engine performs very well in the browser wars on the client side.

At first I found this blog post about the V8 engine’s integration into Java, but from its comments I ended up at this project. I downloaded the test cases from the SunSpider benchmark suite, which is a set of small benchmark scripts that test “the core JavaScript language only, not the DOM or other browser APIs”.

You can find the comparison code here.
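
For reference, here is a minimal sketch of how such a comparison can be wired up with the javax.script (JSR-223) API. The engine names, the class name and the file handling are my own assumptions for illustration; the linked project may differ in the details:

import java.nio.file.Files;
import java.nio.file.Paths;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class EngineBenchmark {

    public static void main(String[] args) throws Exception {
        ScriptEngineManager manager = new ScriptEngineManager();
        // Engine names are assumptions: use whatever names your V8 binding
        // (jav8 in this case) and the JDK-bundled Rhino engine actually register.
        ScriptEngine v8 = manager.getEngineByName("jav8");
        ScriptEngine rhino = manager.getEngineByName("rhino");

        // args[0] is the path of a SunSpider test, e.g. 3d-cube.js
        String source = new String(Files.readAllBytes(Paths.get(args[0])), "UTF-8");

        System.out.println("V8   : " + time(v8, source) + " ms");
        System.out.println("Rhino: " + time(rhino, source) + " ms");
    }

    // Only the evaluation of the script is measured, not JVM or engine startup.
    private static long time(ScriptEngine engine, String source) throws Exception {
        long start = System.nanoTime();
        engine.eval(source);
        return (System.nanoTime() - start) / 1000000;
    }
}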

These are the results:

 
V8   :lu.flier.script.V8ScriptEngine@18fef3d
Rhino:com.sun.script.javascript.RhinoScriptEngine@a3bcc1

Running script nr 3d-cube
V8   : 28 ms
Rhino: 451 ms

Running script nr 3d-morph
V8   : 68 ms
Rhino: 466 ms

Running script nr 3d-raytrace
V8   : 61 ms
Rhino: 428 ms

Running script nr access-binary-trees
V8   : 6 ms
Rhino: 173 ms

Running script nr access-fannkuch
V8   : 17 ms
Rhino: 420 ms

Running script nr access-nbody
V8   : 34 ms
Rhino: 458 ms

Running script nr access-nsieve
V8   : 7 ms
Rhino: 228 ms

Running script nr bitops-3bit-bits-in-byte
V8   : 6 ms
Rhino: 210 ms

Running script nr bitops-bits-in-byte
V8   : 11 ms
Rhino: 204 ms

Running script nr bitops-bitwise-and
V8   : 15 ms
Rhino: 917 ms

Running script nr bitops-nsieve-bits
V8   : 13 ms
Rhino: 312 ms

Running script nr controlflow-recursive
V8   : 6 ms
Rhino: 130 ms

Running script nr crypto-aes
V8   : 19 ms
Rhino: 392 ms

Running script nr crypto-md5
V8   : 15 ms
Rhino: 236 ms

Running script nr crypto-sha1
V8   : 11 ms
Rhino: 277 ms

Running script nr date-format-tofte
V8   : 23 ms
Rhino: 752 ms

Running script nr date-format-xparb
V8   : 28 ms
Rhino: 372 ms

Running script nr math-cordic
V8   : 6 ms
Rhino: 301 ms

Running script nr math-partial-sums
V8   : 21 ms
Rhino: 515 ms

Running script nr math-spectral-norm
V8   : 6 ms
Rhino: 144 ms

Running script nr regexp-dna
V8   : 34 ms
Rhino: 4224 ms

Running script nr string-base64
V8   : 8 ms
Rhino: 1681 ms

Running script nr string-fasta
V8   : 20 ms
Rhino: 375 ms

Running script nr string-tagcloud
V8   : 66 ms
Rhino: 4557 ms

Running script nr string-unpack-code
V8   : 84 ms
Rhino: 6637 ms

Running script nr string-validate-input
V8   : 16 ms
Rhino: 10304 ms

Running script nr 3d-cube
V8   : 45 ms
Rhino: 383 ms

Running script nr 3d-morph
V8   : 16 ms
Rhino: 473 ms

Running script nr 3d-raytrace
V8   : 56 ms
Rhino: 343 ms

Running script nr access-binary-trees
V8   : 7 ms
Rhino: 145 ms

Running script nr access-fannkuch
V8   : 14 ms
Rhino: 449 ms

Running script nr access-nbody
V8   : 31 ms
Rhino: 275 ms

Running script nr access-nsieve
V8   : 19 ms
Rhino: 355 ms

Running script nr bitops-3bit-bits-in-byte
V8   : 6 ms
Rhino: 142 ms

Running script nr bitops-bits-in-byte
V8   : 10 ms
Rhino: 274 ms

Running script nr bitops-bitwise-and
V8   : 14 ms
Rhino: 968 ms

Running script nr bitops-nsieve-bits
V8   : 11 ms
Rhino: 277 ms

Running script nr controlflow-recursive
V8   : 5 ms
Rhino: 130 ms

Running script nr crypto-aes
V8   : 15 ms
Rhino: 331 ms

Running script nr crypto-md5
V8   : 11 ms
Rhino: 229 ms

Running script nr crypto-sha1
V8   : 14 ms
Rhino: 286 ms

Running script nr date-format-tofte
V8   : 30 ms
Rhino: 613 ms

Running script nr date-format-xparb
V8   : 38 ms
Rhino: 354 ms

Running script nr math-cordic
V8   : 5 ms
Rhino: 389 ms

Running script nr math-partial-sums
V8   : 22 ms
Rhino: 517 ms

Running script nr math-spectral-norm
V8   : 10 ms
Rhino: 174 ms

Running script nr regexp-dna
V8   : 28 ms
Rhino: 4063 ms

Running script nr string-base64
V8   : 8 ms
Rhino: 1637 ms

Running script nr string-fasta
V8   : 84 ms
Rhino: 414 ms

Running script nr string-tagcloud
V8   : 28 ms
Rhino: 4835 ms

Running script nr string-unpack-code
V8   : 89 ms
Rhino: 6725 ms

Running script nr string-validate-input
V8   : 13 ms
Rhino: 6471 ms 

As you can see, in these language-only tests Google’s V8 outperforms Rhino by roughly 10 to 1000 times, which is pretty impressive.

It would be nice to measure the performance difference in a real-life scenario, but that is currently not possible: we rely heavily on Rhino-specific features, so to run that kind of test we would need to redesign the whole API.
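
To give an idea of the kind of Rhino usage that ties us to it, here is a minimal sketch of the Java/JavaScript interop through the JDK-bundled Rhino script engine (the class and variable names are mine, and our real host objects are of course more involved than a plain list):

import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class RhinoInteropDemo {

    public static void main(String[] args) throws Exception {
        // On Java 6/7 the "JavaScript" engine bundled with the JDK is Rhino-based.
        ScriptEngine rhino = new ScriptEngineManager().getEngineByName("JavaScript");

        // Expose a Java object to the script as a global variable ("host object" style).
        java.util.List<String> messages = new java.util.ArrayList<String>();
        rhino.put("messages", messages);

        // The script calls the Java object's methods directly, without any wrapper code.
        rhino.eval("messages.add('written from JavaScript');");

        System.out.println(messages); // prints: [written from JavaScript]
    }
}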


13 Responses to Benchmark: Rhino vs Chrome V8 on server side

  1. Jochen says:

    Did you measure that without the JVM startup time?

    • axtaxt says:

      Yes, you can check the uploaded code to see exactly what I’m measuring. It loads the tests from the SunSpider benchmark suite, and only the evaluation time of the script is measured. Maybe it would also be interesting to measure it in both interpreted and JIT-compiled mode.
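
      For what it’s worth, here is a minimal sketch of how Rhino’s optimization level could be forced when the org.mozilla.javascript API is used directly (-1 is the pure interpreter, 0–9 compile the script to JVM bytecode). The class name and the toy script are just placeholders for illustration, and as far as I know the JSR-223 wrapper used in the post does not expose this setting:

      import org.mozilla.javascript.Context;
      import org.mozilla.javascript.Scriptable;

      public class RhinoOptLevelDemo {

          public static void main(String[] args) {
              String source = "var s = 0; for (var i = 0; i < 1000000; i++) s += i; s;";
              for (int opt : new int[] { -1, 9 }) { // -1 = interpreted, 0..9 = compile to bytecode
                  Context cx = Context.enter();
                  try {
                      cx.setOptimizationLevel(opt);
                      Scriptable scope = cx.initStandardObjects();
                      long start = System.nanoTime();
                      Object result = cx.evaluateString(scope, source, "demo.js", 1, null);
                      long elapsed = (System.nanoTime() - start) / 1000000;
                      System.out.println("opt level " + opt + ": " + elapsed + " ms (result " + result + ")");
                  } finally {
                      Context.exit();
                  }
              }
          }
      }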

  2. asdfasdf says:

    Is this interpreted JavaScript or compiled? Rhino has several optimization levels. Also, it would be interesting to see the numbers after warm-up time on a server VM.

  3. James says:

    I was curious how much JIT would affect the results, so I tried the AES benchmark on Rhino. When I run it 10 times, here are the results:

    550.0
    251.0
    80.0
    90.0
    60.0
    50.0
    40.0
    50.0
    30.0
    30.0

    The fastest time after a few dozen iterations was 20 ms on my laptop (27X faster than the first run). Not that V8 isn’t fast, but JIT seems to make quite a big difference (at least with my small test).

    • axtaxt says:

      You can use the -Xcomp compiler option to JIT everything before executing the main function. I guess I would need to redo my tests with JIT enabled from the beginning of the execution to make a fair comparison, because V8 is running native code.

    • axtaxt says:

      I’ve just tested it with -Xcomp and modified the runner script to execute each test 10 times. (I’m using a different machine than in the post.) The results show the same characteristics as in the post.

      I guess that means that JIT was already running in my original tests, which seems plausible, because all these tests burn a lot of CPU.

      Here is my result:

      Running script nr crypto-aes
      V8   : 12 ms
      V8   : 6 ms
      V8   : 6 ms
      V8   : 6 ms
      V8   : 6 ms
      V8   : 9 ms
      V8   : 7 ms
      V8   : 6 ms
      V8   : 7 ms
      V8   : 7 ms
      Rhino: 1699 ms
      Rhino: 1098 ms
      Rhino: 1079 ms
      Rhino: 1090 ms
      Rhino: 1079 ms
      Rhino: 1082 ms
      Rhino: 1085 ms
      Rhino: 1085 ms
      Rhino: 1078 ms
      Rhino: 1094 ms
      

      What am I doing wrong? Can you share your code?

  4. Pingback: [revised] Benchmark: Rhino vs Chrome V8 on server side | Axtaxt's Blog

  5. axtaxt says:

    Since this post received a lot of criticism about the measurement method, I’ve created a revised version of it. Feel free to share your thoughts about the measurement on that post. Thanx, axt

  6. Well, it seems sort of obvious that V8 is much faster.
    C++ is well known to be faster than Java, and the lower you go, the faster code can run, but you obviously lose platform independence.

    This is a comparison of:
    JavaScript interpreted through Java
    vs.
    a C++ JavaScript engine running in the Chrome browser
    :)

  7. Eugene says:

    Try doing some extensive floating-point calculations in Java and C++, and you will see how sloooooow Java is. Yes, it is possible to write an extremely laggy C++ web server, but C++ will always be much faster than Java. Why? Here is a simple example – adding two integers. The C++ code will load one number into a CPU register, load the other, and call the CPU’s ADD instruction. In Java we must read the bytecode instruction, get the type of the instruction, load one operand from the stack, load the other operand from the stack, call the ADD instruction, and put the result back onto the stack… Also, when you call a method in the JVM, you ALWAYS look it up in the class’s method table, while in C++ (if it is not a virtual method) you call it directly by its address in memory.
