
Java Matrix Benchmark


Summary Performance (pure Java only)

The author of EJML benchmarks his library against several other libraries; ojAlgo is one of them. What I did (2015-07-03):

  1. Downloaded the test code and third-party libraries from http://code.google.com/p/java-matrix-benchmark/source/checkout
  2. Updated all libraries to their latest released versions
  3. Ran the tests (which took about 2 weeks) and generated the charts.

See the sidebar menu for links to pages with charts for the individual tests!

Tested Libraries

| Library | Version | Comment |
|---------|---------|---------|
| Colt | 1.2.0 | |
| Commons Math | 3.5 | |
| EJML | 0.27 | |
| JAMA | 1.0.3 | |
| la4j | 0.5.5 | |
| MTJ | 1.0.3 | Configured to run pure Java code. |
| ojAlgo | 38.1 | |
| Parallel Colt | 0.10.1 | A classpath (class loading order) problem prevented Parallel Colt from functioning properly, so in practice it cannot be included in this comparison and has been removed from the summary chart above. Matrix multiplication is the only individual test in which it could participate (and it is included in those particular results charts). Parallel Colt has performed well in previous benchmark executions. |
| UJMP | 0.2.5 | UJMP is a kind of meta library: it has optional dependencies on many (all) other libraries. Potentially it could always be as fast as the fastest library, but achieving that would require complex configuration and/or execution logic. Further, it has been a while since UJMP released a new version (other than Maven snapshot builds). |
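All of the above are pure-Java implementations. As a rough, illustrative sketch (not part of the benchmark code) of the kind of dense operation the benchmark times, here is a plain triple-loop matrix multiplication using only the JDK; the tested libraries use far more sophisticated kernels (blocking, cache awareness, multithreading), which is exactly what the charts compare.

```java
import java.util.Random;

// Naive dense matrix multiplication, C = A * B, using only the JDK.
// Illustrative only - the benchmarked libraries use far more optimized implementations.
public class NaiveMultiply {

    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, k = b.length, m = b[0].length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++) {
            for (int p = 0; p < k; p++) {
                double aip = a[i][p];
                for (int j = 0; j < m; j++) {
                    c[i][j] += aip * b[p][j];
                }
            }
        }
        return c;
    }

    public static void main(String[] args) {
        int size = 500; // example size, not one of the benchmark's official settings
        Random rnd = new Random(42);
        double[][] a = new double[size][size];
        double[][] b = new double[size][size];
        for (int i = 0; i < size; i++) {
            for (int j = 0; j < size; j++) {
                a[i][j] = rnd.nextDouble();
                b[i][j] = rnd.nextDouble();
            }
        }
        long start = System.nanoTime();
        multiply(a, b);
        System.out.printf("%dx%d multiply took %.1f ms%n", size, size, (System.nanoTime() - start) / 1e6);
    }
}
```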

Native Code Libraries

This is a Java linear algebra benchmark, but it contains 2 libraries using native code. These are included in the benchmark calculations but not highlighted when presenting the results.

For each specific operation (test case) there are two charts: one showing the absolute operation timings for each library and matrix size, and one showing relative performance. The fastest library, for each operation at each matrix size, has relative performance 1.0. A library that performs a specific operation for a specific matrix size half as fast as the fastest library has relative performance 0.5.
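As a sketch of how such a relative performance number follows from the absolute timings (this only illustrates the definition above, it is not the benchmark's own plotting code, and the library names and timings are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Turns absolute timings (seconds per operation) into relative performance:
// the fastest library gets 1.0, a library taking twice as long gets 0.5.
public class RelativePerformance {

    static Map<String, Double> relative(Map<String, Double> secondsPerOp) {
        // Assumes a non-empty map of timings for one operation at one matrix size.
        double fastest = secondsPerOp.values().stream()
                .mapToDouble(Double::doubleValue)
                .min()
                .getAsDouble();
        Map<String, Double> result = new LinkedHashMap<>();
        secondsPerOp.forEach((library, seconds) -> result.put(library, fastest / seconds));
        return result;
    }

    public static void main(String[] args) {
        Map<String, Double> timings = new LinkedHashMap<>();
        timings.put("LibraryA", 0.10); // fastest       -> 1.00
        timings.put("LibraryB", 0.20); // half as fast  -> 0.50
        timings.put("LibraryC", 0.40); // quarter speed -> 0.25
        relative(timings).forEach((lib, rp) -> System.out.printf("%s: %.2f%n", lib, rp));
    }
}
```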

In the absolute time charts, a "lower" curve is better as it means less time. In the relative performance charts it is better to be "above", as that means greater speed.

In either chart it is a bad sign if a library is missing values. If the curve for a specific library is not drawn at all, that operation is not supported or has a problem. If only the last points (representing the largest matrices) are missing, the implementation is too slow and/or consumes too much memory. None of the tested libraries could do a singular value decomposition of a 10 000 x 10 000 matrix within the benchmark's time/memory limit.

Each of these two charts can be drawn with or without the libraries using native code, resulting in four different charts. Here we show the relative performance chart excluding the native libraries and the absolute time chart including the native libraries.

| Library | Version | Comment |
|---------|---------|---------|
| jBLAS | 1.2.4 | |
| MTJ-N | 1.0.3 | MTJ configured to use native code, in this case Apple's vecLib framework, which is very fast. Running MTJ on a Linux box with some generally available native code library will NOT give you the same performance; it will probably perform more like jBLAS. |

Hardware & JVM

Mac Pro, 2 x 2.26 GHz Quad-Core Intel Xeon, 12 GB 1066 MHz DDR3, OS X 10.10.4 (64-bit kernel), JVM 1.8.0_45-b14 64-bit

Benchmark Cases

See the sidebar menu for the individual benchmark case pages:

  * The Very Basics
  * More Advanced Operations
  * Solving Linear Systems
  * SVD & EvD
