Java Matrix Benchmark
Results/charts updated 2018-06-05
The author of EJML benchmarks his library against several other libraries (ojAlgo is one of them). To do that he created the Java Matrix Benchmark. This is by far the most ambitious Java linear algebra benchmarking suite available. As far as I know, all the library-specific (contestant) code has been written or reviewed by people with expert knowledge of the individual libraries. Several of those people also had some influence on what and how to benchmark.
These are, to my knowledge, the first published results from the newer/updated version of the Java Matrix Benchmark. The benchmark execution was started 2018-04-04, and it took more than 8 weeks to complete. (There were some problems, so parts had to be re-executed.)
See the sidebar menu for links to pages with charts for the individual tests!
Library | Version | Comment |
---|---|---|
Colt | 1.2.0 | |
Commons Math | 3.6.1 | |
EJML | 0.33 | |
JAMA | 1.0.3 | |
la4j | 0.6.0 | |
MTJ | 1.0.7 | Configured to run pure Java code. |
ojAlgo | 45.0.0 | |
Parallel Colt | 0.9.4 | |
UJMP | 0.3.0 | UJMP is a kind of meta library. It has optional dependencies on many (all) of the other libraries. Potentially it could always be as fast as the fastest library. |
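To illustrate how a meta library could, in principle, match the fastest backend: the sketch below is illustrative only and does not reflect UJMP's actual API. It shows the general pattern of holding an optional preferred backend and falling back to a pure-Java implementation when the preferred one is unavailable.

```java
import java.util.Arrays;
import java.util.Optional;
import java.util.function.BinaryOperator;

// Illustrative sketch only -- not UJMP's actual API. A meta library can
// hold an optional "fast" backend and fall back to a pure-Java one.
public class MetaDispatch {

    // Stand-in for a native-backed multiply; empty when no native
    // library could be loaded on this platform.
    static Optional<BinaryOperator<double[]>> nativeMultiply = Optional.empty();

    // Pure-Java fallback: element-wise product, just for illustration.
    static BinaryOperator<double[]> javaMultiply = (a, b) -> {
        double[] c = new double[a.length];
        for (int i = 0; i < a.length; i++) {
            c[i] = a[i] * b[i];
        }
        return c;
    };

    static double[] multiply(double[] a, double[] b) {
        // Route the call to the fast backend when available,
        // otherwise delegate to the pure-Java implementation.
        return nativeMultiply.orElse(javaMultiply).apply(a, b);
    }

    public static void main(String[] args) {
        double[] c = multiply(new double[]{1, 2, 3}, new double[]{4, 5, 6});
        System.out.println(Arrays.toString(c)); // [4.0, 10.0, 18.0]
    }
}
```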
This is a Java linear algebra benchmark, but it contains 3 libraries using native code. These are included in the benchmark calculations but not included in the charts shown here. More complete results will be made available (perhaps at the Java Matrix Benchmark web site).
Library | Version | Comment |
---|---|---|
jBLAS | 1.2.4 | jBLAS comes packaged with its own set of native libraries. I don't know exactly what they are, but the web site states that ATLAS is included. (jBLAS crashed the entire benchmark execution twice. Pretty sure it was caused by some sort of memory leak.) |
MTJ-N | 1.0.7 | MTJ configured to use native code. It can use different native libraries. In this case the system had OpenBLAS and ATLAS installed. |
UJMP-N | 0.3.0 | UJMP configured to potentially use jBLAS or MTJ-N if those are available. |
For heavyweight operations on larger matrices, using high-quality native code (optimised for the specific hardware) will improve performance. With simpler operations (regardless of matrix size) or small matrices (regardless of operation) there is nothing to gain by calling native code. When everything is to native code's advantage it can easily be 10x faster, but unless you do something to ensure the best possible native performance (perhaps purchase an Intel MKL license) a good pure Java implementation may be faster.
Results including the libraries using native code will be published elsewhere/later.
The charts show relative performance. The fastest library, for each operation at each matrix size, has relative performance 1.0. A library that performs a specific operation for a specific matrix size half as fast as the fastest library has relative performance 0.5, and so on.
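As a small worked example (the library names and timings below are made up, not taken from the benchmark), the relative performance of each library is the fastest library's execution time divided by that library's own time:

```java
import java.util.Locale;
import java.util.Map;

public class RelativePerformance {
    public static void main(String[] args) {
        // Hypothetical timings in ms for one operation at one matrix size.
        Map<String, Double> timings = Map.of(
                "LibA", 100.0,
                "LibB", 200.0,
                "LibC", 400.0);

        // The fastest library defines the 1.0 baseline.
        double fastest = timings.values().stream()
                .mapToDouble(Double::doubleValue)
                .min()
                .orElseThrow();

        // Relative performance = fastest time / this library's time.
        timings.forEach((lib, time) ->
                System.out.printf(Locale.ROOT, "%s: %.2f%n", lib, fastest / time));
    }
}
```

So a library taking twice as long as the fastest one gets 0.50, and one taking four times as long gets 0.25, matching how the charts are scaled.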
If the curve for a specific library is not drawn at all it means that function is not supported or has a problem. If only the last points (representing the largest matrices) are missing, the implementation is too slow and/or consumes too much memory. With the largest matrices, 20 000 x 20 000, some libraries fail to complete even the simplest operations, and none of the libraries could do singular value or eigenvalue decompositions at that size (within the benchmark's time and memory limits).
Actually, the benchmark was executed 3 times using different JVMs on otherwise identical cloud machines:
Google Cloud Platform Compute Engine (n1-highmem-4 (4 vCPUs, 26 GB memory) with Intel Skylake processors)
- Oracle JVM 1.8.0_161
- Oracle JVM 9.0.4
- Azul JVM 1.8.0 Zing 18.05.0.0
The summary chart above shows the Oracle 1.8 results, as that is the most relevant JVM for most users. On each of the pages for the individual operations there are three charts, one for each of the JVMs.
See the sidebar menu...
There's more Java, performance and ojAlgo related stuff at the ojAlgo web site.