Java Matrix Benchmark
The author of EJML benchmarks his library against several other libraries (ojAlgo is one of them). To do that he created the Java Matrix Benchmark. This is by far the most ambitious Java linear algebra benchmarking suite available. As far as I know, all the library-specific (contestant) code has been written or reviewed by people with expert knowledge of the individual libraries. Several of those people also had some influence on what and how to benchmark.
These are, to my knowledge, the first published results from the newer/updated version of the Java Matrix Benchmark. The benchmark execution was started 2018-04-04 and took about 6 weeks to complete.
See the sidebar menu for links to pages with charts for the individual tests!
Library | Version | Comment |
---|---|---|
Colt | 1.2.0 | |
Commons Math | 3.6.1 | |
EJML | 0.33 | |
JAMA | 1.0.3 | |
la4j | 0.6.0 | |
MTJ | 1.0.7 | Configured to run pure Java code. |
ojAlgo | 45.0.0 | |
Parallel Colt | 0.9.4 | |
UJMP | 0.3.0 | UJMP is a kind of meta library. It has optional dependencies on many (all) of the other libraries. Potentially it could always be as fast as the fastest library. |
This is a Java linear algebra benchmark, but it includes 3 libraries that use native code. These are included in the benchmark calculations, but not in the charts shown here. More complete results will be made available (perhaps at the Java Matrix Benchmark web site).
Library | Version | Comment |
---|---|---|
jBLAS | 1.2.4 | jBLAS comes packaged with its own set of native libraries. I don't know exactly what they are, but the web site states that ATLAS is included. (jBLAS crashed the entire benchmark execution twice. Pretty sure it was caused by some sort of memory leak.) |
MTJ-N | 1.0.7 | MTJ configured to use native code. It can use different native libraries. In this case the system had OpenBLAS and ATLAS installed. |
UJMP-N | 0.3.0 | UJMP configured to potentially use jBLAS or MTJ-N if they are available. |
For heavyweight operations on larger matrices, using high quality native code (optimised for the specific hardware) will improve performance. With simpler operations (regardless of matrix size) or small matrices (regardless of operation) there is nothing to gain by calling native code. When everything is to native code's advantage it can easily be 10x faster, but unless you do something to ensure the best possible native performance (perhaps purchase an Intel MKL license) a good pure Java implementation may be faster.
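To make the pure-Java side of this comparison concrete, here is a minimal, hypothetical timing sketch (not code from the actual benchmark suite): a naive `double[][]` matrix multiplication with a JIT warm-up phase before the measured run. The class and method names are my own illustration.

```java
import java.util.Random;

public class MultiplyTiming {

    // Naive pure-Java matrix multiplication, C = A * B (ikj loop order)
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length, k = b.length, m = b[0].length;
        double[][] c = new double[n][m];
        for (int i = 0; i < n; i++) {
            for (int p = 0; p < k; p++) {
                double aip = a[i][p];
                for (int j = 0; j < m; j++) {
                    c[i][j] += aip * b[p][j];
                }
            }
        }
        return c;
    }

    public static void main(String[] args) {
        int n = 200; // small-ish size; native code mainly pays off on larger matrices
        Random rnd = new Random(42);
        double[][] a = new double[n][n];
        double[][] b = new double[n][n];
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                a[i][j] = rnd.nextDouble();
                b[i][j] = rnd.nextDouble();
            }
        }

        // Warm up so the JIT compiles the hot loop before we start timing
        for (int w = 0; w < 5; w++) {
            multiply(a, b);
        }

        long start = System.nanoTime();
        multiply(a, b);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("n=" + n + " multiply took " + elapsedMs + " ms");
    }
}
```

A real harness (like the Java Matrix Benchmark) does far more: many repetitions, statistics over runs, memory limits and timeouts; this only illustrates why warm-up matters when timing JVM code.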
Results including the libraries using native code will be published elsewhere/later.
For each specific operation (test case) there are two charts: one that shows the absolute operation timings for each library and matrix size, and one that shows relative performance. The fastest library, for each operation at each matrix size, has relative performance 1.0. A library that performs a specific operation for a specific matrix size half as fast as the fastest library has relative performance 0.5.
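The relative performance score is simply the fastest library's time divided by each library's own time. A small sketch with made-up timings (the library names and numbers here are invented for illustration only):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class RelativePerformance {

    public static void main(String[] args) {
        // Hypothetical timings (ms) for one operation at one matrix size
        Map<String, Double> timeMs = new LinkedHashMap<>();
        timeMs.put("LibA", 10.0);
        timeMs.put("LibB", 20.0);
        timeMs.put("LibC", 40.0);

        double fastest = timeMs.values().stream()
                .mapToDouble(Double::doubleValue)
                .min()
                .getAsDouble();

        // Relative performance = fastest time / this library's time (1.0 = fastest)
        for (Map.Entry<String, Double> e : timeMs.entrySet()) {
            System.out.println(e.getKey() + ": " + (fastest / e.getValue()));
        }
        // Prints:
        // LibA: 1.0
        // LibB: 0.5
        // LibC: 0.25
    }
}
```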
In the absolute time charts a "lower" curve is better, as it means less time. In the relative performance charts it is better to be "above", as it means greater speed.
In either chart it is a bad sign if a library is missing values. If the curve for a specific library is not drawn at all, that operation is either not supported or has a problem. If only the last points (representing the largest matrices) are missing, the implementation is too slow and/or consumes too much memory. None of the tested libraries could do a singular value decomposition of a 10 000 x 10 000 matrix within the benchmark's time/memory limits.
Each of these 2 charts can be drawn with or without the libraries using native code, resulting in 4 different charts. Here we show the relative performance chart excluding the native libraries, and the absolute time chart including them.
Google Cloud Platform Compute Engine (n1-highmem-4 (4 vCPUs, 26 GB memory) with Intel Skylake processors)
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
See the sidebar menu...
There's more Java, performance and ojAlgo related stuff at ojBlog.