By John Nilsson


2009-02-02 17:39:41 8 Comments

How do you write (and run) a correct micro-benchmark in Java?

I'm looking for some code samples and comments illustrating various things to think about.

Example: Should the benchmark measure time/iteration or iterations/time, and why?

Related: Is stopwatch benchmarking acceptable?

11 comments

@Kip 2009-02-02 17:57:01

If you are trying to compare two algorithms, do at least two benchmarks for each, alternating the order. i.e.:

for (int i = 0; i < n; i++)
  alg1();
for (int i = 0; i < n; i++)
  alg2();
for (int i = 0; i < n; i++)
  alg2();
for (int i = 0; i < n; i++)
  alg1();

I have found some noticeable differences (5-10% sometimes) in the runtime of the same algorithm in different passes.

Also, make sure that n is very large, so that the runtime of each loop is at least 10 seconds or so. The more iterations, the more significant figures in your benchmark time and the more reliable the data is.

@Mnementh 2009-02-02 18:04:23

Naturally, changing the order influences the runtime. JVM optimizations and caching effects are at work here. Better to 'warm up' the JVM optimizations, make multiple runs, and benchmark every test in a different JVM.

@Peter Lawrey 2009-02-02 19:54:35

Should the benchmark measure time/iteration or iterations/time, and why?

It depends on what you are trying to test.

If you are interested in latency, use time/iteration and if you are interested in throughput, use iterations/time.
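
As a rough sketch, the same raw measurement can be reported either way; which number you care about depends on the question you are asking (the timed loop below is only placeholder work):

public class LatencyVsThroughput {
    public static void main(String[] args) {
        final int iterations = 1_000_000;
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < iterations; i++) {
            sum += i;                                        // placeholder work being measured
        }
        long elapsed = System.nanoTime() - start;

        double nanosPerOp = (double) elapsed / iterations;   // time/iteration -> latency
        double opsPerSec  = iterations * 1e9 / elapsed;      // iterations/time -> throughput
        System.out.printf("sum=%d, %.1f ns/op, %.0f ops/s%n", sum, nanosPerOp, opsPerSec);
    }
}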

@Jon Skeet 2009-02-02 17:46:59

Important things for Java benchmarks are:

  • Warm up the JIT first by running the code several times before timing it
  • Make sure you run it for long enough to be able to measure the results in seconds or (better) tens of seconds
  • While you shouldn't call System.gc() between iterations, it's a good idea to run it between tests, so that each test will hopefully get a "clean" memory space to work with. (Yes, gc() is more of a hint than a guarantee, but it's very likely that it really will garbage collect in my experience.)
  • I like to display iterations and time, and a score of time/iteration which can be scaled such that the "best" algorithm gets a score of 1.0 and others are scored in a relative fashion. This means you can run all algorithms for a longish time, varying both number of iterations and time, but still getting comparable results.

I'm just in the process of blogging about the design of a benchmarking framework in .NET. I've got a couple of earlier posts which may be able to give you some ideas - not everything will be appropriate, of course, but some of it may be.
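
To make those bullets concrete, here is a minimal hand-rolled harness sketch along those lines (the tasks, iteration counts, and class name are illustrative assumptions; a library like JMH handles these concerns much more robustly):

import java.util.LinkedHashMap;
import java.util.Map;

public class SimpleHarness {
    static volatile int sink;   // consume results so the JIT can't discard the work

    static long time(Runnable task, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            task.run();
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        Map<String, Runnable> tests = new LinkedHashMap<>();
        tests.put("stringBuilder", () -> {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 1000; i++) sb.append(i);
            sink = sb.length();
        });
        tests.put("stringConcat", () -> {
            String s = "";
            for (int i = 0; i < 1000; i++) s += i;
            sink = s.length();
        });

        int iterations = 20_000;
        Map<String, Double> nanosPerIteration = new LinkedHashMap<>();
        for (Map.Entry<String, Runnable> e : tests.entrySet()) {
            time(e.getValue(), iterations);                 // untimed warm-up pass for the JIT
            System.gc();                                    // hint: start each test with a clean heap
            long elapsed = time(e.getValue(), iterations);  // timed pass; keep it long enough
            nanosPerIteration.put(e.getKey(), (double) elapsed / iterations);
        }

        double best = nanosPerIteration.values().stream()
                .mapToDouble(Double::doubleValue).min().orElse(1.0);
        nanosPerIteration.forEach((name, nanos) ->
                System.out.printf("%-15s %10.1f ns/iter, relative score %.2f%n",
                        name, nanos, nanos / best));
    }
}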

@Sanjay T. Sharma 2013-04-20 06:52:06

Minor nitpick: IMO "so that each test gets" should be "so that each test might get" since the former gives the impression that calling gc always frees up unused memory.

@Jon Skeet 2013-04-20 08:02:18

@SanjayT.Sharma: Well, the intention is that it actually does. While it's not strictly guaranteed, it's actually a pretty strong hint. Will edit to be clearer.

@gyorgyabraham 2013-06-14 10:38:16

I don't agree with calling System.gc(). It is a hint, that's all. Not even "it will hopefully do something". You should never ever call it. This is programming, not art.

@Jon Skeet 2013-06-14 10:58:52

@gyabraham: Yes, it's a hint - but it's one which I've observed to usually be taken. So if you don't like using System.gc(), how do you propose to minimize garbage collection in one test due to objects created in previous tests? I'm pragmatic, not dogmatic.

@gyorgyabraham 2013-06-14 17:42:05

Your benchmark's deterministic property suffers great fallback. That's all.

@Jon Skeet 2013-06-14 17:44:34

@gyabraham: I don't know what you mean by "great fallback". Can you elaborate, and again - do you have a proposal to give better results? I did explicitly say that it's not a guarantee...

@Jenix 2018-05-10 11:48:23

I'm very interested in your "blog about the design of a benchmarking framework in .NET". Where can I find it? I'd also like to know whether the Stopwatch class is the best built-in option in C#; it needs to be monotonic and fast.

@Jon Skeet 2018-05-10 14:38:14

@Jenix: I don't remember whether I wrote up that blog post or not, but github.com/dotnet/BenchmarkDotNet is the tool to use.

@Jenix 2018-05-10 17:29:11

@JonSkeet Ah, thanks!

@assylias 2013-04-03 12:32:49

jmh is a recent addition to OpenJDK and has been written by some performance engineers from Oracle. It is certainly worth a look.

The jmh is a Java harness for building, running, and analysing nano/micro/macro benchmarks written in Java and other languages targeting the JVM.

Very interesting pieces of information are buried in the comments of the sample tests.
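
A minimal JMH benchmark might look something like the sketch below (the class name, method and parameter values are illustrative; JMH itself takes care of warm-up, forking, and avoiding dead-code elimination when you return the result):

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)           // report time per operation (latency)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
@Warmup(iterations = 5, time = 1)          // warm-up iterations before measurement
@Measurement(iterations = 5, time = 1)
@Fork(1)                                   // run the measurement in a fresh JVM
@State(Scope.Thread)
public class StringConcatBenchmark {

    @Param({"10", "100"})
    int size;

    @Benchmark
    public String concat() {
        String s = "";
        for (int i = 0; i < size; i++) {
            s += i;                         // deliberately naive work to measure
        }
        return s;                           // returning the result prevents dead-code elimination
    }
}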


@Nitsan Wakart 2013-05-02 15:41:49

See also this blog post: psy-lob-saw.blogspot.com/2013/04/… for details on getting started with JMH.

@Basil Bourque 2016-07-01 23:03:11

FYI, JEP 230: Microbenchmark Suite is an OpenJDK proposal based on this Java Microbenchmark Harness (JMH) project. Did not make the cut for Java 9 but may be added later.

@Aravind R. Yarram 2010-12-18 23:35:07

I know this question has been marked as answered, but I wanted to mention two libraries that help us write micro-benchmarks.

Caliper from Google

Getting started tutorials

  1. http://codingjunkie.net/micro-benchmarking-with-caliper/
  2. http://vertexlabs.co.uk/blog/caliper

JMH from OpenJDK

Getting started tutorials

  1. Avoiding Benchmarking Pitfalls on the JVM
  2. http://nitschinger.at/Using-JMH-for-Java-Microbenchmarking
  3. http://java-performance.info/jmh/

@assylias 2012-12-06 23:58:25

+1 it could have been added as Rule 8 of the accepted answer: Rule 8: because so many things can go wrong, you should probably use an existing library rather than trying to do it yourself!

@assylias 2015-12-03 09:49:14

@Pangea jmh is probably superior to Caliper nowadays. See also: groups.google.com/forum/#!msg/mechanical-sympathy/m4opvy4xq3U/…

@Eugene Kuleshov 2009-02-04 20:49:28

Tips about writing micro benchmarks from the creators of Java HotSpot:

Rule 0: Read a reputable paper on JVMs and micro-benchmarking. A good one is Brian Goetz, 2005. Do not expect too much from micro-benchmarks; they measure only a limited range of JVM performance characteristics.

Rule 1: Always include a warmup phase which runs your test kernel all the way through, enough to trigger all initializations and compilations before timing phase(s). (Fewer iterations is OK on the warmup phase. The rule of thumb is several tens of thousands of inner loop iterations.)

Rule 2: Always run with -XX:+PrintCompilation, -verbose:gc, etc., so you can verify that the compiler and other parts of the JVM are not doing unexpected work during your timing phase.

Rule 2.1: Print messages at the beginning and end of timing and warmup phases, so you can verify that there is no output from Rule 2 during the timing phase.

Rule 3: Be aware of the difference between -client and -server, and OSR and regular compilations. The -XX:+PrintCompilation flag reports OSR compilations with an at-sign to denote the non-initial entry point, for example: Trouble$1::run @ 2 (41 bytes). Prefer server to client, and regular to OSR, if you are after best performance.

Rule 4: Be aware of initialization effects. Do not print for the first time during your timing phase, since printing loads and initializes classes. Do not load new classes outside of the warmup phase (or final reporting phase), unless you are testing class loading specifically (and in that case load only the test classes). Rule 2 is your first line of defense against such effects.

Rule 5: Be aware of deoptimization and recompilation effects. Do not take any code path for the first time in the timing phase, because the compiler may junk and recompile the code, based on an earlier optimistic assumption that the path was not going to be used at all. Rule 2 is your first line of defense against such effects.

Rule 6: Use appropriate tools to read the compiler's mind, and expect to be surprised by the code it produces. Inspect the code yourself before forming theories about what makes something faster or slower.

Rule 7: Reduce noise in your measurements. Run your benchmark on a quiet machine, and run it several times, discarding outliers. Use -Xbatch to serialize the compiler with the application, and consider setting -XX:CICompilerCount=1 to prevent the compiler from running in parallel with itself. Try your best to reduce GC overhead: set -Xmx (large enough) equal to -Xms, and use Epsilon GC if it is available.

Rule 8: Use a library for your benchmark, as it is probably more efficient and has already been debugged for this sole purpose, such as JMH, Caliper, or Bill and Paul's Excellent UCSD Benchmarks for Java.
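
As a rough illustration of Rules 2, 3 and 7, a run might be launched along these lines (MyBenchmark is a placeholder class name; exact flag availability and defaults vary by JDK version):

java -server -Xbatch \
     -XX:+PrintCompilation -verbose:gc \
     -XX:-TieredCompilation -XX:CICompilerCount=1 \
     -Xms2g -Xmx2g \
     MyBenchmark

(On recent JDKs, -XX:CICompilerCount=1 is only accepted with tiered compilation disabled, and Epsilon GC can be enabled with -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC, provided the benchmark's allocations fit within the heap.)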

@John Nilsson 2010-07-10 22:29:41

This was also an interesting article: ibm.com/developerworks/java/library/j-jtp12214

@Scott Carey 2011-04-22 18:43:06

Also, never use System.currentTimeMillis() unless you are OK with + or - 15 ms accuracy, which is typical on most OS + JVM combinations. Use System.nanoTime() instead.

@Gravity 2011-07-27 08:00:00

It should be noted that System.nanoTime() is not guaranteed to be more accurate than System.currentTimeMillis(). It is only guaranteed to be at least as accurate. It usually is substantially more accurate, however.

@Waldheinz 2015-03-16 10:51:48

The main reason why one must use System.nanoTime() instead of System.currentTimeMillis() is that the former is guaranteed to be monotonically increasing. Subtracting the values returned by two currentTimeMillis invocations can actually give negative results, possibly because the system time was adjusted by some NTP daemon.

@CaptainHastings 2017-03-16 19:35:49

Be aware that your benchmarking results will be misleading unless you account for "coordinated omission". groups.google.com/forum/#!msg/mechanical-sympathy/icNZJejUHfE/…

@Sina Madani 2017-03-19 19:21:23

To add to the other excellent advice, I'd also be mindful of the following:

For some CPUs (e.g. the Intel Core i5 range with TurboBoost), the temperature (and the number of cores currently in use, as well as their utilisation percentage) affects the clock speed. Since CPUs are dynamically clocked, this can affect your results. For example, if you have a single-threaded application, the maximum clock speed (with TurboBoost) is higher than for an application using all cores. This can therefore interfere with comparisons of single- and multi-threaded performance on some systems. Bear in mind that the temperature and voltages also affect how long the Turbo frequency is maintained.

Perhaps a more fundamentally important aspect that you have direct control over: make sure you're measuring the right thing! For example, if you're using System.nanoTime() to benchmark a particular bit of code, put the calls and the assignments in places that make sense, to avoid measuring things you aren't interested in. For example, don't do:

long startTime = System.nanoTime();
//code here...
System.out.println("Code took " + (System.nanoTime() - startTime) + " nanoseconds");

The problem is that you're not capturing the end time immediately when the code has finished. Instead, try the following:

final long endTime, startTime = System.nanoTime();
//code here...
endTime = System.nanoTime();
System.out.println("Code took " + (endTime - startTime) + " nanoseconds");

@Peter Cordes 2019-03-23 08:44:10

Yes, it's important not to do unrelated work inside the timed region, but your first example is still fine. There's only one call to println, not a separate header line or anything, and System.nanoTime() has to be evaluated as the first step in constructing the string argument for that call. There's nothing a compiler can do with the first that it can't do with the second, and neither one even encourages it to do extra work before recording a stop time.

@SpaceTrucker 2013-01-21 14:04:05

It should also be noted that it may be important to analyze the results of the micro-benchmark when comparing different implementations. Therefore a significance test should be performed.

This is because implementation A might be faster during most runs of the benchmark than implementation B, but A might also have a higher spread, so the measured performance benefit of A won't be of any significance when compared with B.

So it is important not only to write and run a micro-benchmark correctly, but also to analyze it correctly.
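
As a sketch of that last point, one simple approach is to compute a Welch's t statistic over the per-run timings of the two implementations (the sample values below are purely illustrative):

public class SignificanceCheck {
    static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    static double variance(double[] xs, double mean) {
        double sum = 0;
        for (double x : xs) sum += (x - mean) * (x - mean);
        return sum / (xs.length - 1);                      // sample variance
    }

    public static void main(String[] args) {
        double[] a = {102, 99, 101, 98, 100};              // ms per run, implementation A
        double[] b = {97, 110, 95, 108, 93};               // ms per run, implementation B

        double meanA = mean(a), meanB = mean(b);
        double t = (meanA - meanB)
                 / Math.sqrt(variance(a, meanA) / a.length + variance(b, meanB) / b.length);
        System.out.printf("mean A=%.1f, mean B=%.1f, Welch t=%.2f%n", meanA, meanB, t);
        // Compare |t| against the t-distribution critical value for your chosen
        // confidence level before claiming one implementation is faster.
    }
}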

@Yuriy 2010-12-18 23:22:11

http://opt.sourceforge.net/ Java Micro Benchmark: control tasks required to determine the comparative performance characteristics of the computer system on different platforms. It can be used to guide optimization decisions and to compare different Java implementations.

@Stefan L 2012-02-29 22:05:31

Seems to just benchmark the JVM + hardware, not an arbitrary piece of Java code.

@Mnementh 2009-02-02 17:46:59

There are many possible pitfalls for writing micro-benchmarks in Java.

First: You have to account for all sorts of events that take a more or less random amount of time: garbage collection, caching effects (of the OS for files and of the CPU for memory), IO, etc.

Second: You cannot trust the accuracy of the measured times for very short intervals.

Third: The JVM optimizes your code while executing it, so different runs in the same JVM instance will become faster and faster.

My recommendations: Make your benchmark run for several seconds; that is more reliable than a runtime of milliseconds. Warm up the JVM (i.e., run the benchmark at least once without measuring, so the JVM can apply its optimizations). Run your benchmark multiple times (say, 5 times) and take the median value. Run every micro-benchmark in a new JVM instance (call a new java for every benchmark), otherwise optimization effects of the JVM can influence later tests. Don't execute things that aren't executed in the warmup phase (as this could trigger class loading and recompilation).
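
Here is a sketch of the "new JVM instance per benchmark" point, assuming each benchmark is its own main class (the class names and classpath handling are illustrative):

import java.io.IOException;

public class BenchmarkLauncher {
    public static void main(String[] args) throws IOException, InterruptedException {
        String[] benchmarks = {"com.example.Alg1Benchmark", "com.example.Alg2Benchmark"};
        for (String benchmark : benchmarks) {
            // A fresh JVM per benchmark, so JIT and GC state from earlier
            // benchmarks cannot influence this one.
            Process p = new ProcessBuilder(
                    System.getProperty("java.home") + "/bin/java",
                    "-cp", System.getProperty("java.class.path"),
                    benchmark)
                    .inheritIO()
                    .start();
            p.waitFor();
        }
    }
}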

@Peter Štibraný 2009-02-02 18:00:10

Make sure you somehow use the results that are computed in the benchmarked code. Otherwise your code can be optimized away.
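
For instance, a common trick in a hand-rolled benchmark is to write the result into a volatile field or print a checksum, so the JIT cannot prove the work is unused (the field name here is just illustrative; JMH's Blackhole serves the same purpose properly):

public class SumBenchmark {
    static volatile long blackhole;   // writing here keeps the result 'used'

    public static void main(String[] args) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < 10_000_000; i++) {
            sum += i;                 // the work being measured
        }
        blackhole = sum;              // consume the result so it can't be optimized away
        System.out.println("Took " + (System.nanoTime() - start) + " ns");
    }
}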
